Probability Distribution Proof Please help and provide suggestions.
(i) A discrete random variable X has the distribution $P(X = i) = 2a^i $ for i ∈ N+ (where N+ := {1,2,...}). What is the value of a?
$P(X = 1) = 2a^1 ;\ P(X = 2) = 2a^2 ;\ P(X = 3) = 2a^3 ;\ P(X = 4) = 2a^4;\ \ldots$
$S_n=\sum_{i=1}^\infty 2a^i=\frac{2a(1-a^n)}{1-a}=1\iff2a-2a^{n+1}=1-a\iff 2a^{n+1}-3a+1=0\iff a=1$
Is this correct?
(ii) A random variable X is said to follow the Cauchy distribution if its density function $f_X(x)$ is given by $$f_X(x)= \frac1\pi\frac{1}{1+x^2}.$$ Show that $f_X(x)$ is a valid probability density function and compute the variance of X.
$\int_{-\infty}^\infty \frac1\pi\frac{1}{1+x^2}\,dx=\frac1\pi\big[\tan^{-1}x\big]_{-\infty}^\infty=\frac1\pi\left[\frac\pi2-\left(-\frac{\pi}{2}\right)\right]=1$
$Var(X)=E(X^2)-E(X)^2$
I've got no idea, but is this right?
$E(X)=\int_{-\infty}^\infty \frac1\pi\frac{x}{1+x^2}\,dx=\frac{1}{2\pi}\big[\ln(1+x^2)\big]_{-\infty}^{\infty}=\frac{1}{2\pi}\ln\left(\frac{1+(\infty)^2}{1+(-\infty)^2}\right)=\frac{1}{2\pi}\ln\left(\frac\infty\infty\right)$ is undefined
$E(X^2)=\int_{-\infty}^\infty \frac1\pi\frac{x^2}{1+x^2}\,dx=\frac1\pi\int_{-\infty}^\infty \left(1-\frac{1}{1+x^2}\right)dx=\frac{1}{\pi}\big[x-\tan^{-1}x\big]_{-\infty}^{\infty}=\frac{1}{\pi}\left[\left(\infty-\frac{\pi}{2}\right)-\left(-\infty+\frac{\pi}{2}\right)\right]=\infty$
Therefore, $Var(X)=E(X^2)-E(X)^2=\infty-(\text{undefined})^2$ is undefined.
|
As $0\le 2a\le 1$, we have $a^n\to 0$ as $n\to \infty$. So
$$1=S=\sum_{i=1}^\infty 2a^i=\frac{2a}{1-a}.$$
So $2a=1-a$ and then $a=\frac{1}{3}$.
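As a quick sanity check of $a=\frac13$, exact arithmetic with `fractions` shows the partial sums approach $1$ from below, with tail exactly the geometric remainder $a^N$:

```python
from fractions import Fraction

# With a = 1/3, the partial sums of P(X = i) = 2 * a**i approach 1 from below;
# the tail after N terms is exactly a**N (the geometric remainder).
a = Fraction(1, 3)
N = 59
partial = sum(2 * a**i for i in range(1, N + 1))

print(float(partial))        # very close to 1
assert 1 - partial == a**N   # exact geometric tail
```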
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving a recurrence relation (textbook question) $a_{n+1} - a_n = 3n^2 - n$ ;$a_0=3$
I need help for solving the particular solution.
Based on a chart in my textbook if you get $n^2$ the particular solution would be
$A_2n^2 + A_1n + A_0$ and $n$ has the particular solution of $A_1n+A_0$.
So given $3n^2 - n$, my first thought was that if the equation was $n^2-n$ you can have something like $An^2 + Bn+C - (Bn + C) = An^2$.
Is this process correct if I simply had $n^2-n$ ? If so how would the $3$ in $3n^2$ affect this step?
|
hint: $a_n = (a_n-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots +(a_2-a_1)+(a_1-a_0)+a_0$
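Working the hint out numerically (a sketch: the closed form $a_n = 3 + n(n-1)^2$ is what the telescoped sum evaluates to here, but re-derive it by hand before trusting it):

```python
# Solve a_{n+1} - a_n = 3n^2 - n, a_0 = 3 by telescoping, and compare
# against the candidate closed form a_n = 3 + n*(n-1)^2.

def a_iterative(n):
    """Apply the recurrence directly."""
    a = 3
    for k in range(n):
        a += 3 * k**2 - k
    return a

def a_closed(n):
    """Candidate closed form obtained by summing the differences."""
    return 3 + n * (n - 1)**2

for n in range(20):
    assert a_iterative(n) == a_closed(n)
print(a_closed(5))  # 83
```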
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is it true that every element of $V \otimes W$ is a simple tensor $v \otimes w$? I know that every vector in a tensor product $V \otimes W$ is a sum of simple tensors $v \otimes w$ with $v \in V$ and $w \in W$. In other words, any $u \in V \otimes W$ can be expressed in the form$$u = \sum_{i=1}^r v_i \otimes w_i$$for some vectors $v_i \in V$ and $w_i \in W$. This follows from the proof of the existence of $V \otimes W$, where one shows that $V \otimes W$ is spanned by the simple tensors $v \otimes w$; the assertion now follows from the fact that, in forming linear combinations, the scalars can be absorbed into the vectors: $c(v \otimes w) = (cv) \otimes w = v\otimes (cw)$.
My question is, is it true in general that every element of $V \otimes W$ is a simple tensor $v \otimes w$?
|
This is equivalent to asking, does every multivariable polynomial factor into polynomials of one variable? No.
Consider the polynomial $x^2 + y$. If it could be factored as $P(x)Q(y)$, then $Q$ would be a non-constant polynomial in $y$, so $Q(y_0) = 0$ for some value $y_0$ (over $\mathbb C$, say), and thus $x^2 + y_0 = 0$ no matter the value of $x$. This is obviously false.
To be precise, let $\{1, x, x^2\}$ be a basis of $X$ and $\{1, y\}$ be a basis of $Y$. Then a basis of $X \otimes Y$ is just
$$\{x^2 \otimes y, x^2 \otimes 1, x \otimes y, x \otimes 1, 1 \otimes y, 1 \otimes 1 \}.$$
The element $x^2 \otimes 1 + 1 \otimes y$ cannot be written as a simple tensor.
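In coordinates this can be checked mechanically: an element $\sum c_{ij}\, e_i \otimes f_j$ is a simple tensor exactly when its coefficient matrix $(c_{ij})$ has rank at most $1$, since $v \otimes w$ corresponds to an outer product of coordinate vectors. A small sketch with numpy:

```python
import numpy as np

# Basis of X: {1, x, x^2}; basis of Y: {1, y} (this ordering is our choice).
# Coefficient matrix of x^2 ⊗ 1 + 1 ⊗ y:
C = np.zeros((3, 2))
C[2, 0] = 1  # x^2 ⊗ 1
C[0, 1] = 1  # 1 ⊗ y

print(np.linalg.matrix_rank(C))  # 2, so not a simple tensor

# A genuine simple tensor, e.g. (1 + x) ⊗ (1 + y), is an outer product:
v = np.array([1, 1, 0])
w = np.array([1, 1])
print(np.linalg.matrix_rank(np.outer(v, w)))  # 1
```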
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1191924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 6,
"answer_id": 1
}
|
Finding the order of an element in the dihedral group of order 4. How do I find the order of
$$S_1=\left({\begin{array}{cc}
\cos\frac{\pi}{3} & \sin\frac{\pi}{3}\\
\sin\frac{\pi}{3} & -\cos\frac{\pi}{3}\\
\end{array} }\right)$$
I know that $S_1$ is an element of a dihedral group and represents a reflection about the line that makes an angle of $\frac{\pi}{6}$ with the $x$-axis, but finding its order is what I don't know how to do. Please help.
|
$$S_1^2=\begin{bmatrix} 1&0\\0&1\end{bmatrix}$$
In more detail
$$S_1^2=\begin{bmatrix} \cos^2{\frac{\pi}{3}}+\sin^2{\frac{\pi}{3}} & \cos{\frac{\pi}{3}}\sin{\frac{\pi}{3}}-\cos{\frac{\pi}{3}}\sin{\frac{\pi}{3}}\\ \sin{\frac{\pi}{3}}\cos{\frac{\pi}{3}}-\sin{\frac{\pi}{3}}\cos{\frac{\pi}{3}} & \sin^2{\frac{\pi}{3}}+\cos^2{\frac{\pi}{3}}\end{bmatrix}=\begin{bmatrix} 1&0\\0&1\end{bmatrix}$$
And so the order is $2$. By the way, a reflection always has order two.
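A quick numerical confirmation of this computation:

```python
import numpy as np

# Check that the reflection S_1 squares to the identity but is not the
# identity itself, so its order in the group is exactly 2.
t = np.pi / 3
S1 = np.array([[np.cos(t),  np.sin(t)],
               [np.sin(t), -np.cos(t)]])

print(np.allclose(S1 @ S1, np.eye(2)))  # True: S_1^2 = I
print(np.allclose(S1, np.eye(2)))       # False: S_1 != I
```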
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove complements of independent events are independent. Given a finite set of events $\{A_i\}$ which are mutually independent, i.e., for every finite subcollection $\{A_1,\ldots,A_n\}$,
$$\mathrm{P}\left(\bigcap_{i=1}^n A_i\right)=\prod_{i=1}^n \mathrm{P}(A_i).$$
show that the set $\{A_i^c\}$, that is the set of complements of the original events, is also mutually independent.
I can prove this, but my proof relies on the Inclusion-Exclusion principle (as does the proof given in this question). I'm hoping there is a more concise proof.
Can this statement be proved without the use of the Inclusion-Exclusion principle?
|
Hint: prove that the set of events stays independent if you replace one of them by its complement, i.e. that given your conditions the set $\{A_1^c, A_2, \ldots, A_n\}$ is independent. Then use this $n$ times to replace all of $A_i$ by their complements one by one.
Update. Hint 2: to avoid clutter, let me show you what I mean on the example of two events, $B$ and $C$. Suppose $B$ and $C$ are independent, i.e. $P(B \cap C) = P(B) \cdot P(C)$. We want to show that $B^c$ and $C$ are independent. Indeed:
$$
\begin{align*}
P(B^c \cap C) &= P(C \setminus (B \cap C)) \\
&= P(C) - P(B \cap C) \\
&= P(C) - P(B) \cdot P(C) \\
&= P(C) \cdot (1-P(B)) \\
&= P(B^c) \cdot P(C).
\end{align*}
$$
I didn't use inclusion-exclusion here. And this approach scales, i.e. it works the same if you consider more than $2$ variables.
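The two-event case can also be verified by brute force on a small probability space; the events below (two fair dice) are just an illustration:

```python
from itertools import product
from fractions import Fraction

# B = "first die is even" and C = "second die is at most 2" are independent,
# and so are B^c and C, exactly as the computation above predicts.
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(omega))  # uniform probability on 36 outcomes

def prob(event):
    return sum(p for w in omega if event(w))

B = lambda w: w[0] % 2 == 0
C = lambda w: w[1] <= 2
Bc = lambda w: not B(w)

assert prob(lambda w: B(w) and C(w)) == prob(B) * prob(C)    # B, C independent
assert prob(lambda w: Bc(w) and C(w)) == prob(Bc) * prob(C)  # B^c, C too
```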
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
}
|
What is ⌊0.9 recurring ⌋? For a ceiling and floor function, the number is taken to 0 decimal places. Does this process mean that 0.9 recurring inside a floor function would go to 0? Or would the mathematician take 0.9 recurring to be equal to 1, thus making the answer 1?
And if 0.9 recurring does equal 1, does that mean (by definition) that ⌊1⌋ = 0?
|
You have to look at $0.\overline{9}$ as a sum... then you'll know the answer.
$$0.\overline{9} = \sum_{i=1}^{\infty} \frac{9}{10^{i}}$$
So, $$\lfloor 0.\overline{9}\rfloor = \left\lfloor \sum_{i=1}^{\infty}\frac{9}{10^{i}}\right\rfloor=\lfloor 1 \rfloor = 1.$$ Note that you cannot split the floor function over a sum, i.e. $\lfloor a+b\rfloor \neq \lfloor a\rfloor + \lfloor b\rfloor$ in general.
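A small illustration of why the order of operations matters: every finite partial sum of $9/10 + 9/100 + \cdots$ has floor $0$, but the floor of the limit is $1$:

```python
from fractions import Fraction
import math

# Each partial sum is strictly below 1, so its floor is 0 --
# yet the limit of the sum is 1, whose floor is 1.
for n in (1, 5, 50):
    partial = sum(Fraction(9, 10**i) for i in range(1, n + 1))
    assert partial < 1 and math.floor(partial) == 0
print(math.floor(1))  # 1, the floor of the limit
```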
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 6,
"answer_id": 0
}
|
Number Theory Taxicab Number How to prove that there are infinite taxicab numbers?
OK, I was reading http://en.wikipedia.org/wiki/Taxicab_number#Known_taxicab_numbers
and thought of this question. Any ideas?
|
It is easy to show that there are infinitely many positive integers which are representable as the sum of two cubes, e.g., see the article Characterizing the Sum of Two Cubes by K.A. Broughan (2003). If we require a representation as the sum of two cubes in at least $N\ge 2$ different ways, then the result is more difficult to show; and the proof uses the theory of elliptic curves etc. For a good survey, see the article Taxicabs and sum of two cubes by J. H. Silverman. In particular, the following result due to K. Mahler is discussed:
Theorem(Mahler): There is a constant $c>0$ such that for infinitely many positive integers $m$, the number of positive integer solutions to the equation $x^3+y^3=m$ exceeds $c(\log(m))^{1/3}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
$(1-\zeta_m)$ is a unit in $\mathbb{Z}[\zeta_m]$ if $m$ has at least two prime factors We know that for $m=p^r$, $1-\zeta_m$ is prime. Now suppose that $m$ has at least two distinct primes appearing in its prime factorization; we need to show that $1-\zeta_m$ is a unit in the ring of integers $\mathcal{O}_{\mathbb{Q}(\zeta_m)}=\mathbb{Z}[\zeta_m]$.
I tried proving that $N_{\mathbb{Q}}^{\mathbb{Q}(\zeta_m)} (1-\zeta_m)=\pm 1$ but got stuck in finding norm of $\zeta_m$. Some hint would be nice.
|
Write $1+x+x^2+\ldots+x^{n-1} = \prod_{j=1}^{n-1} (x-\zeta_n^j)$ and set $x=1$ to get $n = \prod_{j=1}^{n-1} (1-\zeta_n^j)$. If $p^a\,\|\,n$, then running $j$ through the multiples of $n/p^a$, we see that the product contains $p^a = \prod_{j=1}^{p^a-1} (1-\zeta_{p^a}^j)$. Remove all such factors to get $1 = \prod (1 - \zeta_n^j)$, the product now running over those $j$ for which the order of $\zeta_n^j$ is not a prime power. By your assumption $n$ is not a prime power, so $(1-\zeta_n)$ occurs in this product; therefore it is a unit.
Edit: If you extend this proof and write the product properly, you will actually see that the norm is +1.
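A numeric sanity check of the factor-removal argument for $n=12$ (the `is_prime_power` helper is ad hoc for this sketch):

```python
import cmath
from math import gcd

def is_prime_power(m):
    """True iff m = p^a for a single prime p (assumes m >= 2)."""
    for p in range(2, m + 1):
        if m % p == 0:
            while m % p == 0:
                m //= p
            return m == 1
    return False

n = 12
zeta = cmath.exp(2j * cmath.pi / n)

full = 1  # product over all j of (1 - zeta^j); should be n
rest = 1  # product over j whose zeta^j has non-prime-power order; should be 1
for j in range(1, n):
    factor = 1 - zeta**j
    full *= factor
    if not is_prime_power(n // gcd(n, j)):  # order of zeta**j
        rest *= factor

assert abs(full - n) < 1e-9
assert abs(rest - 1) < 1e-9  # 1 - zeta_12 is among these factors: a unit
```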
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Is there a formal name for this matrix? I've been using a matrix of the following form:
$$
\begin{bmatrix}
-1 & 1 & 0 \\
0 & -1 & 1 \\
1 & 0 & -1
\end{bmatrix}
$$
This is just a circularly shifted identity matrix minus the identity; essentially, a permutation matrix minus an identity. Is there a formal name for that?
|
Not that I have heard. It is a circulant matrix, though, and the permutation matrix alone without the $-I$ is sometimes called the cyclic shift matrix, circulant generator or generator of the circulant algebra.
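For concreteness, a sketch building the matrix as the cyclic shift matrix minus the identity and checking the circulant property (each row is the previous row shifted by one):

```python
import numpy as np

n = 3
P = np.roll(np.eye(n, dtype=int), 1, axis=1)  # cyclic shift (permutation) matrix
M = P - np.eye(n, dtype=int)

print(M)  # the matrix from the question
# Circulant check: row i equals row 0 rolled right by i positions.
assert all((M[i] == np.roll(M[0], i)).all() for i in range(n))
```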
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\sqrt[5]{672}$ is irrational How would you prove $\sqrt[5]{672}$ is irrational?
I was trying proof by contradiction starting by saying:
Suppose $\sqrt[5]{672}$ is rational ...
|
$$672^{1/5}=\frac pq,$$($p$ and $q$ relative primes) then
$$p^5=672q^5,$$
which is possible only if $p$ is a multiple of $7$; but then $7^5\mid p^5=672q^5$, and since $7$ divides $672$ exactly once, $7$ must also divide $q$, contradicting the coprimality of $p$ and $q$.
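The underlying bookkeeping is the $7$-adic valuation: $v_7(p^5)$ is a multiple of $5$, while $v_7(672q^5)=1+5\,v_7(q)\equiv 1 \pmod 5$. A sketch (the helper name `valuation` is just for this illustration):

```python
# 672 = 2^5 * 3 * 7, so the exponent of 7 in 672 is 1 -- not a multiple of 5.

def valuation(n, p):
    """Exponent of the prime p in the factorization of n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

v7 = valuation(672, 7)
print(v7)            # 1
assert v7 % 5 != 0   # so 672 cannot be a 5th power of a rational
```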
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Proving an identity involving differentials of integrals
Define $$ E(\theta,k) = \int_0^\theta \sqrt{1-k^2\sin{^2{x}}} dx$$ and $$F(\theta,k) = \int_0^\theta \frac{1}{\sqrt{1-k^2\sin{^2{x}}}} dx$$ We are to show $$\left(\frac{\partial E}{\partial k}\right)_\theta = \frac{E-F}{k} $$
Am I right in thinking: $$\left(\frac{\partial E}{\partial k}\right)_\theta =\int_0^\theta \frac{\partial}{\partial k}\sqrt{1-k^2\sin{^2{x}}} dx $$
Which gives $$k\left[\int_0^\theta \frac{\cos^2{x}}{\sqrt{1-k^2\sin{^2{x}}}} dx - \int_0^\theta \frac{1}{\sqrt{1-k^2\sin{^2{x}}}} dx\right]$$
which is $$k\left[\int_0^\theta \frac{\cos^2{x}}{\sqrt{1-k^2\sin{^2{x}}}} dx -F\right]$$
However, I can't seem to reduce the left integral to give what is required. Many thanks in advance.
|
Starting from your correct assumption
$$\left(\frac{\partial E}{\partial k}\right)_\theta =\int_0^\theta \frac{\partial}{\partial k}\sqrt{1-k^2\sin{^2{x}}}\ dx$$
the differential, however, evaluates to
$$\begin{align}\left(\frac{\partial E}{\partial k}\right)_\theta &=\int_0^\theta \frac{-k\sin^2 x}{\sqrt{1-k^2\sin{^2{x}}}}\ dx\\&=\frac{1}{k}\int_0^\theta \frac{-k^2\sin^2 x}{\sqrt{1-k^2\sin{^2{x}}}}\ dx\\&=\frac{1}{k}\int_0^\theta \frac{1-k^2\sin^2 x-1}{\sqrt{1-k^2\sin{^2{x}}}}\ dx\\&=\frac{1}{k}\left(\int_0^\theta \sqrt{1-k^2\sin{^2{x}}}\ dx-\int_0^\theta \frac{1}{\sqrt{1-k^2\sin{^2{x}}}}\ dx\right)\\&=\frac{E-F}{k}\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Calculate limit of $\sqrt[n]{2^n - n}$ Calculate limit of $\sqrt[n]{2^n - n}$.
I know that $\lim \sqrt[n]{2^n - n} \le 2$, but I don't know where to go from here.
|
The exponential function is continuous, so $$\lim_{n\rightarrow \infty}\sqrt[n]{2^n -n} = \lim_{n\rightarrow \infty} e^{\frac{\ln(2^n-n)}{n}} = e^{(\lim_{n\rightarrow \infty}\frac{\ln(2^n-n)}{n})}$$
Then you could use l'Hospital to show that $$\lim_{n\rightarrow \infty}\frac{\ln(2^n-n)}{n} = \ln(2)$$
So then you'd have $$\lim_{n\rightarrow \infty}\sqrt[n]{2^n -n} = e^{\ln(2)} = 2$$
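A quick numeric check that the limit is indeed $2$:

```python
# (2^n - n)^(1/n) should approach 2 as n grows.
for n in (10, 50, 200):
    print((2**n - n) ** (1.0 / n))  # tends to 2
```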
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1192860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Can we subtract a trigonometric term from a polynomial? Can we find the root of a function like $f(x) = x^2-\cos(x)$ using accurate algebra or do we need to resort to numerical methods approximations?
thanks.
|
The root of your function cannot, in general, be expressed in terms of elementary formulas in closed form. So in practice, numerical root finding is the only way to go. Luckily, for your question, finding a root approximately is not that hard using Newton's method.
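For instance, Newton's method on $f(x) = x^2 - \cos x$, with $f'(x) = 2x + \sin x$, converges in a handful of iterations (a sketch; the starting point $x_0 = 1$ is an arbitrary choice near the positive root):

```python
import math

# Newton iteration: x <- x - f(x)/f'(x) for f(x) = x^2 - cos(x).
x = 1.0
for _ in range(20):
    x -= (x**2 - math.cos(x)) / (2 * x + math.sin(x))

print(x)  # the positive root, roughly 0.824
assert abs(x**2 - math.cos(x)) < 1e-12
```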
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
I'm unsure which test to use for this series: $\sum\limits_{n=1}^\infty{\frac{3^\frac{1}{n} \sqrt{n}}{2n^2-5}}$? I want to determine if this series converges or diverges: $$\sum\limits_{n=1}^\infty{\frac{3^\frac{1}{n} \sqrt{n}}{2n^2-5}}$$
I tried the Ratio Test at first and didn't get anywhere with that. I'm thinking I have to use the Comparison Test and compare it to the series $\sum{\frac{1}{n^2}}$ for convergence? But I wasn't sure how to prove that in this case. Can someone help me out here?
|
You may write, as $n \to \infty$,
$$\frac{3^\frac{1}{n} \sqrt{n}}{2n^2-5}= \frac{\sqrt{n}}{2n^2}\frac{e^{\large \frac{\ln 3}{n}}}{1-\frac{5}{2n^2}}=\frac{1}{2}\frac{1}{n^{3/2}}\frac{1+\frac{\ln 3}{n}+\mathcal{O}\left(\frac {1}{n^2}\right)}{1-\frac{5}{2n^2}}\sim \frac{1}{2}\frac{1}{n^{3/2}}$$ and your initial series is convergent as is the series $\displaystyle \sum\frac{1}{n^{3/2}}$.
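Numerically, the ratio of the terms to $n^{-3/2}$ indeed settles at $\frac12$:

```python
# a_n = 3^(1/n) * sqrt(n) / (2n^2 - 5); the ratio a_n / n^(-3/2)
# simplifies to 3^(1/n) / (2 - 5/n^2), which tends to 1/2.
def a(n):
    return 3 ** (1 / n) * n ** 0.5 / (2 * n**2 - 5)

for n in (10, 100, 10000):
    print(a(n) * n ** 1.5)  # tends to 0.5
```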
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving logical equivalences The question is to prove
$\neg (p \wedge q) \to (p \vee r)$ equivalent to $p \vee r$
So far, I got
*
*$¬[¬(p \wedge q)] \vee (p \vee r)$ - implication
*$(p \wedge q) \vee (p \vee r)$ - double negation
Now, is this question logically not equivalent?
Or is there some way I can prove this is logically equivalent?
|
$(p \land q) \lor (p \lor r)$ is logically equivalent to $((p \land q) \lor p) \lor r$ (the parentheses can be regrouped because the same $\lor$ connective appears inside and outside). By the absorption law, $(p \land q) \lor p$ is logically equivalent to $p$, so the whole expression reduces to $p \lor r$. Hence the two formulas are indeed logically equivalent.
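With only three variables, an exhaustive truth-table check settles whether the two formulas agree:

```python
from itertools import product

# Does ¬(p ∧ q) → (p ∨ r) always have the same truth value as p ∨ r?
def implies(a, b):
    return (not a) or b

agree = all(
    implies(not (p and q), p or r) == (p or r)
    for p, q, r in product([False, True], repeat=3)
)
print(agree)  # True: they agree on all 8 rows
```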
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Verification - does $x < \sup A$ necessarily mean $x \in A$? Suppose we have a non-empty and bounded above set $A$, and some $x \in \mathbb{R}$ such that $x < \sup A$.
Do we then have that $x \in A$?
Since we assume $\sup A$ exists, it follows that $A \subset \mathbb{R}$.
If we take $x = \sqrt{2}$ and define $A = \{...,-3,-2,-1,0,1,2\}$
Then $A$ is non-empty, bounded above and has $\sup A = 2$. Furthermore $x < \sup A$, but $x \notin A$.
I know this seems rather elementary, but I just wanted to make sure there was absolutely no flaw in my example.
Is there an even simpler case for which you could prove this?
thanks.
|
There is no problem in your example. In fact, you don't need an infinite set: take any finite set $A$ and any $x < \min(A)$; then $x < \max(A) = \sup A$ but $x \notin A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Discrete subset of $\mathbb R^2$ such that $\mathbb R^2\setminus S$ is path connected. Let, $S\subset \mathbb R^2$ be defined by $$S=\left\{\left(m+\frac{1}{2^{|p|}},n+\frac{1}{2^{|q|}}\right):m,n,p,q\in \mathbb Z\right\}.$$ Then, which are correct?
(A) $S$ is a discrete set.
(B) $\mathbb R^2\setminus S$ is path connected.
I think $S$ is a discrete set. If we fix any three of $m,n,p,q$ then the set which we get is countable. Thus we get $S$ as the union of four countable sets. So $S$ is countable & so $S$ is discrete. But I am not sure about it..If I am wrong please detect my fallacy and give what happen?
If $S$ is a discrete set then $\mathbb R^2\setminus S$ is path connected. But if NOT then what about the set $\mathbb R^2\setminus S$ ?
Edit : I know that a set $S$ is said to be discrete if it is closed and all points of it are isolated.
Am I correct ? Please explain.
|
Hint: try to think of the graph of $S$ near any of its limit points $(m,n)$. It will look somewhat like a kitchen sink filter, which has more and more holes as you approach its centre (a limit point).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
integration of definite integral involving sinx and cos x Evaluate $\int_0^{\pi}\frac{dx}{a^2\cos^2x +b^2 \sin^2x}$
I got numerator $\sec^2 x$ and denominator $b^2 ( a^2/b^2 + \tan^2x)$.
I made the substitution $u= \tan x$. That way $\sec^2 x$ got cancelled and the antiderivative was of the form $\frac{1}{ab}\tan^{-1}\left(\frac{bu}{a}\right)$.
And then if I put limits answer is $0$ but answer is wrong. Where did I go wrong?
|
Are you talking about this?:
$$\small\int\frac{dx}{a^2\cos^2x +b^2 \sin^2x}=\int\frac{\sec^2xdx}{a^2+b^2 \tan^2x}\stackrel{u=\tan x}=\frac1{b^2}\int\frac{du}{a^2/b^2+u^2}=\frac1{b^2}\frac1{a/b}\arctan\frac{\tan x}{a/b}$$
So:
$$\int_0^{\pi}\frac{dx}{a^2\cos^2x +b^2 \sin^2x}=\frac1{ab}\arctan\frac{b\tan x}a\Bigg|_0^{\pi}=0$$
This is wrong because at $x=\pi/2$, $\cos x=0$, so $\sec x\to\infty$. Equivalently, $\tan x$ (our $u$) misbehaves at $x=\pi/2$. It should instead be done like:
$$\int_0^{\pi}\frac{dx}{a^2\cos^2x +b^2 \sin^2x}=\frac1{ab}\arctan\frac{b\tan x}a\Bigg|_0^{\displaystyle\pi/2^-}+\frac1{ab}\arctan\frac{b\tan x}a\Bigg|_{\displaystyle\pi/2^+}^{\displaystyle\pi}$$
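The split evaluation gives $\pi/(ab)$ in total (each half contributes $\pi/(2ab)$), which a midpoint-rule computation confirms for sample values of $a$ and $b$:

```python
import math

# Midpoint rule for the integral over [0, pi]; expected value pi/(a*b).
a, b = 2.0, 3.0
N = 20000
h = math.pi / N
total = sum(h / (a**2 * math.cos(x)**2 + b**2 * math.sin(x)**2)
            for x in (h * (k + 0.5) for k in range(N)))

print(total, math.pi / (a * b))  # both about 0.5236
assert abs(total - math.pi / (a * b)) < 1e-6
```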
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Proving $\sum_{i=0}^n 2^i=2^{n+1}-1$ by induction. Firstly, this is a homework problem so please do not just give an answer away. Hints and suggestions are really all I'm looking for.
I must prove the following using mathematical induction:
For all $n\in\mathbb{Z^+}$, $1+2+2^2+2^3+\cdots+2^n=2^{n+1}-1$.
This is what I have in my proof so far:
Proof: Let $p\left(n\right)$ be $\sum_{j=0}^n 2^j=1+2+2^2+\cdots+2^n=2^{n+1}-1$. Then $P\left(0\right)$ is
\begin{align} \sum\limits_{j=0}^0 2^j=1=2^1-1.\tag{1} \end{align} Hence
$P\left(0\right)$ is true. Next, for some $k\in\mathbb{Z^+}$, assume
$P\left(k\right)$ is true. Then \begin{align}
\sum\limits_{j=0}^{k+1}2^j&=\left(\sum\limits_{j=0}^k
2^j\right)+2^{k+1}\tag{2}\\ &=2^{k+1}-1+2^{k+1}\tag{3} \\
&=2\cdot 2^{k+1}-1\tag{4}\\
&=2^{k+2}-1\tag{5}\\
&=2^{\left(k+1\right)+1}-1.\tag{6}
\end{align}
Therefore, $P\left(n\right)$ holds for all $n$.$\;\;\;\;\blacksquare$
Is everything right? My professor went over the methodology in class quite fast (and I had a bit too much coffee that night) and I want to make sure I have it down for tomorrow night's exam.
|
Your problem has already been answered in details. I would like to point out that if you want to train your inductive skills, once you have a solution to your problem, you still can explore it from many other sides, especially using visual proofs that can be quite effective. And find other proofs. Some are illustrated in An Invitation to Proofs Without Words. I will try to describe two of them related to your case.
First, $2^k$ can be represented in binary with only zeros, except for a $1$ in the $(k+1)^{\textrm{th}}$ position from the right:
- 01: 00000001
- 02: 00000010
- 04: 00000100
- 08: 00001000
- 16: 00010000
So their sum just adds the columns. Since each column contains exactly one $1$, there are no carries, and hence you get ones up to the $(K+1)^{\textrm{th}}$ position (summing $2^0$ through $2^K$); the number you are looking for is:
- ??: 00011111
The induction may appear as: if you have a binary number with ones up to the $(K+1)^{\textrm{th}}$ position, adding one to it yields a number with a single one in the $(K+2)^{\textrm{th}}$ position, for instance:
- 00011111+00000001=00100000
So your mysterious number is $2^{K+1}-1$.
The second proof is also illustrated with fractions of powers of two (see the top-right picture in 1/2 + 1/4 + 1/8 + 1/16 + ⋯). One can observe that powers of two are either:
*
*pure squares with sides $2^k\times 2^k$
*double-square rectangles with sides $2^k\times 2^{k+1}$
Here you can get a double induction, which often happens with series: one property
for odd indices, another property for even indices. The two properties are:
*
*the sum of powers of two up to a square ($k=2K$) gives the next rectangle minus one,
*the sum of powers of two up to a rectangle ($k=2K+1$) gives the next square minus one,
which you can prove by alternated inductions.
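The binary-representation picture in the first proof can be checked directly:

```python
# 1 + 2 + ... + 2^n is the all-ones binary string of length n+1,
# i.e. 2^(n+1) - 1; adding 1 carries all the way to a single leading 1.
for n in range(10):
    s = sum(2**j for j in range(n + 1))
    assert s == 2**(n + 1) - 1
    assert bin(s) == "0b" + "1" * (n + 1)       # all ones
    assert bin(s + 1) == "0b1" + "0" * (n + 1)  # single one after the carry
print(bin(sum(2**j for j in range(5))))  # 0b11111
```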
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1193942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Is there a more general version of $\operatorname{row\ rank}(A) = \operatorname{column\ rank}(A)$ for an $m\times n$ matrix $A$ over a field $F$? There is a standard result that, given an $m\times n$ matrix $A$ over a field $F$, one has $$\operatorname{row\ rank}(A) = \operatorname{column\ rank}(A).$$
Since linear algebra conclusions are sometimes related to more general ones in abstract algebra, I'm wondering, is there a more general version of this conclusion?
Edit: let me explain what is "more general".
For example, for square matrices we have: $AB = I_n \Rightarrow BA = I_n$.
In abstract algebra we have that in a group, if there is a left identity and every element has a left inverse, then the left identity is also a right identity, and each left inverse is also a right inverse.
The conclusion in abstract algebra is more general and powerful, as it applies to matrices under multiplication and to many other cases.
|
It depends on how general you want to get. If going from matrices to linear maps is general enough: given $V$ and $W$ finite-dimensional vector spaces over the same field $\mathbb{F}$ and a linear map $A:V\rightarrow W$, we call the rank of $A$ the dimension of its range (which, as we know, is a subspace of $W$), so $$ \mathrm{rank}A=\dim\mathrm{ran}A. $$
Now, what your statement says in the language of linear maps is that $$ \mathrm{rank}A=\mathrm{rank}A^*, $$ where $A^*$ is the adjoint or dual map of $A$ defined as $$(A^*\omega)(x)=\omega(Ax) $$ for any $\omega\in W^*$ and $x\in V$.
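In coordinates, $\mathrm{rank}A=\mathrm{rank}A^*$ is exactly the statement $\operatorname{rank}(A)=\operatorname{rank}(A^T)$, which is easy to spot-check numerically:

```python
import numpy as np

# Row rank equals column rank: rank(A) == rank(A^T) for random matrices.
rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.integers(-3, 4, size=(4, 6))
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
print("rank(A) == rank(A^T) on all samples")
```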
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is it true that $2^n$ is $O(n!)$? I had a similar problem to this saying:
Is it true that $n!$ is $O(2^n)$?
I got that to be false because if we look at the dominant power of $n!$ it results in $n^n$. So because the base numbers are not the same it is false.
Is it true that $2^n$ is $O(n!)$?
So likewise with the bases, this question should result in false, however it is true. Why? Is the approach I am taking to solve these questions wrong?
|
If $f(n)$ is $O(g(n))$, then there is a constant $C$ such that $f(n) \leq C g(n)$ eventually.
It turns out that $2^n$ is $O(n!)$. Can you find a constant $C$ and prove the inequality? Hint: Choose $C=2$ and try to prove $2^n \leq 2 n!$.
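Spot-checking the hinted inequality before proving it by induction:

```python
import math

# Check 2^n <= 2 * n! for small n; this is the constant C = 2 in the
# definition of 2^n = O(n!).
for n in range(1, 15):
    assert 2**n <= 2 * math.factorial(n)

# The ratio n!/2^n grows without bound, which is why the reverse
# claim n! = O(2^n) fails.
print(math.factorial(10) / 2**10)  # 3543.75
```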
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 1
}
|
For any $n\ge2$ prove that $H(X_1,X_2,\ldots,X_n)\ge\sum\limits_{i=1}^{n} H(X_i\mid X_j,\ j \neq i)$ I am trying to figure this out and I am stuck. Any ideas?
For any $n\ge2$ prove that $H(X_1,X_2,\ldots,X_n)\ge\sum\limits_{i=1}^{n}\ H(X_i\mid X_j , \ j \neq i)$
|
For $n=2$ we have $$H(X_1,X_2)=H(X_1)+H(X_2\mid X_1) \ge H(X_1\mid X_2)+ H(X_2\mid X_1)$$ which holds because conditioning does not increase entropy.
The same logic applies for $n>2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How is the gradient of a function $f$ equal to the Frechet derivative? A mapping $f$ from an open set $S \subset \mathbb{R^n}$ into $\mathbb{R^m}$ is said to be differentiable at $\vec{a} \in S$ if there is an $m\times n$ matrix $L$ such that $\lim_{\vec{h} \to 0}\dfrac{|f(\vec{a}+\vec{h})-f(\vec{a})-L\cdot \vec{h}|}{|\vec{h}|}=0$.
If we look at it componentwise, we get $\lim_{\vec{h} \to 0}\dfrac{|f_j(\vec{a}+\vec{h})-f_j(\vec{a})-L^j\cdot \vec{h}|}{|\vec{h}|}=0$ for $j=1,\ldots,m$, where $L^j$ is the $j$-th row of the matrix $L$. The textbook I am reading then states that the components $f_j$ are differentiable at $\vec{x}=\vec{a}$ and that $\nabla f_j(\vec{a})=L^j$.
I fail to see how $\nabla f_j(\vec{a})=L^j$.
|
Talking about the gradient of a function usually means right away that you are restricting to $m=1$ (otherwise we'd be talking about a Jacobian matrix). But even then, it's preferable to call the linearized map $L$ you found the differential, Frechet or total derivative before associating it with the gradient vector: $L^j$ is a $1\times n$ row whose entries turn out to be the partial derivatives of $f_j$ at $\vec a$, and $\nabla f_j(\vec a)$ is by definition the vector with those same entries, so $\nabla f_j(\vec{a})=L^j$ is just reading that row as a vector. Take a look a few paragraphs down in the Wikipedia article.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculate the maximum area (maximum value) TX farmer has 100 metres of fencing to use to make a rectangular enclosure for sheep as shown.
He will use existing walls for two sides of the enclosure and leave an opening of 2 metres for a gate.
a) Show that the area of the enclosure is given by: $A = 102x - x^2.$
b) Find the value of x that will give the maximum possible area.
c) Calculate the maximum possible area.
How do I assign the two variables for area ? Can anyone assist me in solving this problem?
|
From your picture, one side of the rectangle is x.
Since you have 100 metres, this means the other side has length (100 - x) + 2 = 102 - x.
So, the Area $A = (102 - x)\cdot x = 102x - x^2$
The maximum area occurs where $\frac{dA}{dx} = 102 - 2x = 0$
or where $x = 51$
So, the max area A = $102(51) - 51^2 = 2601$ square metres.
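A brute-force check of parts (b) and (c):

```python
# Maximize A(x) = 102x - x^2 over integer x and compare with the
# calculus answer x = 51, A = 2601.
def area(x):
    return 102 * x - x**2

best_x = max(range(0, 103), key=area)
print(best_x, area(best_x))  # 51 2601
assert best_x == 51 and area(best_x) == 2601
```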
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Surfaces in $\mathbb P^3$ not containing any line Let $d \geq 4$. I'm interested to know whether there is a surface $S$ of degree $d$ in $\mathbb P^3_{\mathbb C}$ such that $S$ does not contain a line.
I have no idea how to do it.
|
This is the famous Noether-Lefschetz theorem. The answer is that for a "very general" such surface -- meaning away from a countable union of proper closed subsets in the parameter space of all degree-d surfaces -- the only algebraic curves are complete intersections with other surfaces. In particular, there are no lines, no conics, etc. on such a surface. While this is true in general, Mumford in the 60s or maybe 70s gave a challenge to come up with a specific example of even one quartic surface not containing a line, a challenge that was not met until just a few years ago.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Atiyah-Macdonald, Exercise 5.4 I was having some trouble with the following exercise from Atiyah-Macdonald.
Let $A$ be a subring of $B$ such that $B$ is integral over $A$. Let $\mathfrak{n}$ be a maximal ideal of $B$ and let $\mathfrak{m}=\mathfrak{n} \cap A$ be the corresponding maximal ideal of $A$. Is $B_{\mathfrak{n}}$ integral over $A_{\mathfrak{m}}$?
The book gives a hint which serves as a counter-example. Consider the subring $k[x^{2}-1]$ of $k[x]$ where $k$ is a field, and let $\mathfrak{n}=(x-1)$. I am trying to show that $1/(x+1)$ could not be integral over $k[x^{2}-1]_{\mathfrak{n}^{c}}$.
I have understood why this situation serves as a counterexample. But I am essentially stuck at trying to draw a contradiction. A hint or any help would be great.
|
Maybe you already noticed that $\mathfrak n^c=(x^2-1)$. Now apply the definition of integrality and after clearing the denominators you get $\sum_{i=0}^n a_is_i(x+1)^{n-i}=0$ with $a_i\in A$, $a_n=1$, and $s_i\in A-\mathfrak n^c$. Then $x+1\mid s_n$ (in $B$), so $s_n\in (x+1)B\cap A=(x^2-1)$, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Prove there exists $m$ and $b$ such that $h(x) = mx + b$ Problem:
Suppose ∃ a function h: ℝ → ℝ such that h has a second derivative with h''(x) = 0 ∀ x ∈ ℝ. Prove ∃ numbers m, b such that:
h(x) = mx + b, ∀ x ∈ ℝ.
My attempt:
Consider h(x) = mx + b, with constants m and b. Note:
h(x) = mx + b
h'(x) = m + 0
h''(x) = 0.
Hence the statement.
This feels a little weak to me. I'd appreciate any suggestions on how better to prove (or if I'm totally off base, maybe set me on the right track). Thanks for any help in advance.
|
Because integration is the inverse of differentiation, we can find that if $h''(x)=0,$
$$\int h ''(x) \,dx=\int 0\, dx\implies h'(x)=C_1\implies \int h'(x) \,dx=\int C_1 \,dx\implies h(x)=xC_1+C_2$$
Now we can choose $m=C_1$ and $b=C_2$, and we are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
liminf inequality in measure spaces
Let $(X;\mathscr{M},\mu)$ be a measure space and $\{E_j\}_{j=1}^\infty\subset \mathscr{M}$. Show that $$\mu(\liminf E_j)\leq \liminf \mu(E_j)$$
and, if $\mu\left(\bigcup_{j=1}^\infty E_j\right)<\infty$, that
$$\mu(\limsup E_j)\geq \limsup \mu(E_j).$$
I'm trying to parse what's going on. On the left, we're taking the measure of $\liminf E_j$, which is $\cup_{i=1}^\infty\cap_{j=i}^\infty E_j$. This is the union of the tails... okay.
On the right, we've got $\lim_{n\to\infty}\inf\{\mu(E_j):n\leq j\}$. The smallest $\mu$ for everything after $n$ (or the greatest lower bound, anyway).
I can't make any progress, I've been stuck here for quite a while. I just don't know where to make the comparison. Can I get a nudge?
|
$\left(\bigcap_{j=i}^\infty E_j\right)_{i=1}^\infty$ is an increasing sequence of sets, so you may have a theorem that states that $$\mu\left(\bigcup_{i=1}^\infty \bigcap_{j=i}^\infty E_j\right) = \lim_{i \to \infty} \mu\left(\bigcap_{j=i}^\infty E_j\right).$$ Then, note that $\mu\left(\bigcap_{j=i}^\infty E_j\right) \le \mu(E_k)$ for every $k \ge i$, hence $\mu\left(\bigcap_{j=i}^\infty E_j\right) \le \inf_{j \ge i} \mu(E_j)$.
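As a toy sanity check (my own illustration, not a proof): take counting measure on a finite universe and a $2$-periodic sequence of events, truncating the infinite tails at a finite level, which is harmless here by periodicity:

```python
from functools import reduce

# toy events: E_j alternates between two finite sets (counting measure = cardinality)
E = lambda j: {0, 1, 2} if j % 2 == 0 else {1, 2, 3}

N = 50  # truncation level standing in for infinity (valid here since E_j is 2-periodic)
tails = [reduce(set.intersection, (E(j) for j in range(i, N))) for i in range(N // 2)]
liminf_sets = reduce(set.union, tails)              # ∪_i ∩_{j≥i} E_j = {1, 2}
liminf_measures = min(len(E(j)) for j in range(N))  # lim inf of the constant sequence 3, 3, ...

print(len(liminf_sets), liminf_measures)  # → 2 3, so μ(liminf E_j) ≤ liminf μ(E_j)
```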
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Working with norms I was hoping to get some help with being able to properly work with norms and derivatives so I can actually understand my PDE course. We are currently working on Sobolev spaces.
Example, I want to show that:
$$-\int_{U} u \Delta u dx \leq C \int_{U}|u||D^2u|dx$$
$u \in C_{c}^{\infty}(U)$ with $U$ bounded.
I get to:
$$-\int_{U} u \Delta u dx \leq \int_{U} |u||\Delta u|dx$$
I know it is not very far at all, but I am so confused. I am missing some key skills in multivariable calculus.
My main question: What is $|D^2u|$? I thought that $D^2u$ was the hessian, and I'm confused about taking the norm.
If you have any links that would help me better understand operations with norms and $D^ku$ I would greatly appreciate it.
I'm really trying, but its just not clicking. Its really frustrating to be undone by the simpler concepts in an extremely theoretical PDE course.
|
It looks like you're using a book using notation similar to Evans. If you are, check Appendix A, in the section Notation for derivatives, where we see
$$
|D^k u| = \left(\sum_{|\alpha|=k} |D^\alpha u|^2\right)^{1/2}.
$$
With this it should be pretty clear why $|\Delta u| \leq C|D^2 u|$.
Notation for derivatives is a bit weird across books. When in doubt, check your appendix or symbol glossary.
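To see the inequality concretely, here is a small numeric check of my own for $u(x,y)=x^2y$; by Cauchy–Schwarz one may take $C=\sqrt{n}$ (here $n=2$):

```python
import math

# u(x, y) = x²y; second partials computed by hand
u_xx = lambda x, y: 2 * y
u_xy = lambda x, y: 2 * x
u_yy = lambda x, y: 0.0

for (x, y) in [(1.0, 2.0), (-3.0, 0.5), (0.2, -4.0)]:
    laplacian = abs(u_xx(x, y) + u_yy(x, y))
    # |D²u| with one term per multi-index α, |α| = 2
    hess = math.sqrt(u_xx(x, y) ** 2 + u_xy(x, y) ** 2 + u_yy(x, y) ** 2)
    assert laplacian <= math.sqrt(2) * hess  # |Δu| ≤ √2 |D²u| in two dimensions
print("ok")
```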
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1194935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Calculate Wronksian of Second Order Differential Equation Use variation of parameters to find a particular solution to:
$\frac{d^{2}y}{dx^{2}} + 2 \frac{dy}{dx} + y = \frac{1}{x^{4}e^{4}}.$
There are no solutions given so finding a wronskian that way is nil.
But since it is still in the order $p(x)y'' + q(x)y' + r(x)y = g(x)$ I think there is still a way to calculate a Wronskian. I have not worked with second order differential equations before and some hints/tips/help would be appreciated.
|
Since the discriminant of the characteristic equation of $y'' + 2y' + y = 0$ is $2^{2} - 4 = 0,$ it follows that $$u_{1} := e^{-x},\ u_{2} := xe^{-x}$$ are the basis solutions. If $x \mapsto w$ is the Wronskian of $u_{1}$ and $u_{2}$, then
$$w = u_{1}u_{2}' - u_{2}u_{1}' = e^{-2x}.$$
Let $R(x) := 1/x^{4}e^{4},$ let $t_{1} := -D^{-1}u_{2}R(x)/w,$ and let $t_{2} := D^{-1}u_{1}R(x)/w,$ where $D^{-1}$ means the primitive "operator". Then
$$t_{1} = -D^{-1}x^{-3}e^{x-4},\ t_{2} = D^{-1}x^{-4}e^{x-4}.$$ Then the particular solution $y_{1}$ is simply
$$y_{1} = t_{1}u_{1} + t_{2}u_{2}.$$
As to the underlying theorems, please simply check any book on ordinary differential equations.
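For a numerical sanity check of this recipe (my own sketch — I approximate the primitives by Simpson quadrature based at $x=1$, which only shifts $y_{1}$ by a homogeneous solution, and test the ODE residual by central differences):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] with an even number of panels
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

R = lambda x: x**-4 * math.exp(-4)   # right-hand side 1/(x^4 e^4)
u1 = lambda x: math.exp(-x)          # basis solutions of y'' + 2y' + y = 0
u2 = lambda x: x * math.exp(-x)

def y1(x):
    # t1 = -∫ x^{-3} e^{x-4} dx, t2 = ∫ x^{-4} e^{x-4} dx (primitives based at 1)
    t1 = -simpson(lambda s: s**-3 * math.exp(s - 4), 1.0, x)
    t2 = simpson(lambda s: s**-4 * math.exp(s - 4), 1.0, x)
    return t1 * u1(x) + t2 * u2(x)

# residual of y'' + 2y' + y - R at x = 2, via central differences
x, h = 2.0, 1e-3
d1 = (y1(x + h) - y1(x - h)) / (2 * h)
d2 = (y1(x + h) - 2 * y1(x) + y1(x - h)) / h**2
print(abs(d2 + 2 * d1 + y1(x) - R(x)) < 1e-6)  # → True
```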
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Show that if every nonempty open set in $X$ is non-meager, then every comeager set in $X$ is dense Suppose $X$ is a topological space. Prove that if every nonempty open set in $X$ is non-meager, then every comeager set in $X$ is dense.
My attempt:
Suppose $A \subset X$ is a comeager set and let $O \subset X$ be an open set. I want to show that $A \cap O \neq \emptyset$.
By definition of comeager set, $A$ contains $\cap_{n \in \mathbb{N}}{A_n}$ where $A_n$ is dense and open for all $n \in \mathbb{N}$.
By definition of denseness, $A_n \cap O \neq \emptyset$ for all $n \in \mathbb{N}$. Hence, $(\cap_{n \in \mathbb{N}}{A_n}) \cap O \neq \emptyset$. Since $\cap_{n \in \mathbb{N}}{A_n} \subset A$, we have $(\cap_{n \in \mathbb{N}}{A_n}) \cap O \subset A \cap O \neq \emptyset$.
Question:
Is my proof correct? I don't think so. Because the assumption is not used anywhere in the proof.
|
No. Although, for all $n$, the set $A_n\cap O$ is not empty, it might contain entirely different points for different $n$. So we may not conclude $\bigcap_n{A_n}\cap O$ nonempty. This is precisely why you need the unused hypothesis.
On the other hand, note that if $$\emptyset=\bigcap_n{A_n}\cap O$$ then, taking the complement in $O$, we should have $$O=\bigcup_n{(O\setminus A_n)}$$ But $A_n$ is dense and open in $O$. What does this say about $O\setminus A_n$ (and, by extension, $O$)?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finite integral with removable singularity I wanted to integrate $\frac{(exp(-x) -1)^2}{x}$ from $x=0$ to $x=a$ where $a$ is finite. Since the integrand, viz., $\frac{(exp(-x) -1)^2}{x}$ has a removable singularity at $x=0$ , I can take the lower limit to be zero for the integration. Further, if I use finite integration upper limit, I cannot use Jordan's Lemma. What approach do I use? Is there any other method? Or is there a way out using contour integration??
Thank You!
|
Probably the best way is to regularise the integral by adding an $x^s$, calculate it in terms of incomplete Gamma functions, then let $s \to 0$. We have
$$ \int_0^a x^{s-1}(e^{-x}-1)^2 \, dx = \int_0^a x^{s-1}(e^{-2x}-2e^{-x}+1) \, dx = \frac{a^s}{s} - 2\gamma(s,a) + 2^{-s}\gamma(s,2a), $$
in terms of the lower incomplete gamma function. In terms of the upper incomplete gamma function, this is
$$ \frac{a^s}{s} + (2^{-s}-2)\Gamma(s) + 2\Gamma(s,a) - 2^{-s} \Gamma(s,2a). $$
The last two terms are analytic in $s$, so we can happily take $s=0$ in them. For the former, we have
$$ \frac{a^s}{s} = \frac{1}{s} + \log{a} + O(s), $$
using the usual series expansion for a power, and
$$ (2^{-s}-2)\Gamma(s) = \left(-1-s\log{2}+O(s^2)\right) \left( \frac{1}{s} -\gamma + O(s) \right) = -\frac{1}{s} + \gamma - \log{2} + O(s), $$
using the Laurent series of $\Gamma$ at $0$.
Adding these, the divergent $s^{-1}$s cancel as they should, and we get
$$ \int_0^a x^{-1}(e^{-x}-1)^2 \, dx = \log{a}-\log{2}+\gamma + 2\Gamma(0,a) - \Gamma(0,2a). $$
(The latter two terms can also be written using exponential integrals if so desired.)
Edited to add:
Recall a possible definition of $a^s$ is $a^s = e^{s\log{a}}$. Since we know how to expand the exponential function as a power series, we obtain the expansion
$$ a^s = e^{s\log{a}} = \sum_{k=0}^{\infty} \frac{(\log{a})^k}{k!} s^k. $$
(alternatively, consider the definition of $e^x$ as the limit of $(1+x/n)^n$: rearranging this allows us to write
$$ \log{y} = \lim_{n \to \infty} n(y^{1/n}-1) = \lim_{s \to 0} \frac{y^s-1}{s}, $$
which we recognise as the derivative quotient of $y^s$ at $s=0$.)
$\gamma$ is the Euler-Mascheroni constant, which is basically defined as $-\Gamma'(1)$: for our purposes, it comes from the integral
$$ \int_0^{\infty} e^{-t}\log{t} \, dt = -\gamma, $$
in which you can recognise the derivative of $x^{s-1} e^{-x} $ with respect to $s$ at $s=1$.
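As a numerical cross-check of the final formula (my own sketch: $\Gamma(0,z)$ is approximated by quadrature with the tail beyond $z+40$ dropped, which is far below the tolerance):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def Gamma0(z):
    # Γ(0, z) = ∫_z^∞ e^{-t}/t dt, tail beyond z + 40 is below e^{-40} and ignored
    return simpson(lambda t: math.exp(-t) / t, z, z + 40.0)

EULER_GAMMA = 0.5772156649015329

a = 1.0
integrand = lambda x: (math.exp(-x) - 1.0) ** 2 / x if x > 0 else 0.0  # removable at 0
lhs = simpson(integrand, 0.0, a)
rhs = math.log(a) - math.log(2.0) + EULER_GAMMA + 2.0 * Gamma0(a) - Gamma0(2.0 * a)
print(abs(lhs - rhs) < 1e-5)  # → True
```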
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Pole and removable sigularity I try to solve the following problem. I do not sure how to begin :
Let $f$ be a holomorphic function on $\mathbb{C}\setminus \{0\}$. Assume that there exists a constant $C > 0$ and a real constant $M$ such that $$|f(z)| \leq C|z|^M$$ for $0 < |z| < \frac{1}{2}.$ Show that $z=0$ is either a pole or a removable singularity for $f$, and find a sharp bound for $O_0(f)$, the order of $f$ at $0$. (I think that the order means the power of $z$ in $f(z)$; that is, $f$ might take the form $f(z) = \frac{g(z)}{z^{O_0(f)}}$ where $g$ is entire.)
I am not sure if I should write $f := \frac{g}{z^n}$ for some $n \in \mathbb{N} \cup \{0\}$ where $g$ is entire. (I have no particular reason for writing $f$ in this form except that in most questions about poles or singularities of $f$, $f$ is written in this form.) I guess that the condition $|f(z)| \leq C|z|^M$ for $0 < |z| < \frac{1}{2}$ might suggest that $f$ is a polynomial of degree at most $M$ on $0 < |z| < \frac{1}{2}$, but it seems that this contradicts the form $f = g/ z^n$. So I am confused about how to start.
|
Suppose first that $M\ge0$. Then $f$ is bounded on a neighborhood of $z=0$, $z=0$ is a removable singularity and $f$ has a zero at $z=0$ of order at least $\lceil M\rceil$.
If $M<0$ consider $g(z)=z^{\lceil -M\rceil}\,f(z)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Weak convergence + compactness = strong convergence? Let $X$ be a Banach space and $K$ a compact subset of $X$. If $(x_n)_n$ is a sequence such that $x_n\in K$ for all $n$ and $(x_n)_n$ converges weakly to some $x\in X$, i.e. $x^*(x_n)\to x^*(x)$ for all $x^*\in X^*$ the dual space of $X$.
I know that we have the strong convergence for some subsequence. But, do we have the strong convergence of the whole sequence?
|
I just realized that the sequence $(x_n)_n$ has the following property:
"Every subsequence has a subsequence which converges to $x$."
It follows that the whole sequence must converge to $x$. For more details see the questions:
Question 1
Question 2
Question 3
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to draw an $405^\circ$ angle? In a math test a question was to draw a $405^\circ$ angle. Is it formally correct to say draw an angle as I think that in geometry, an angle has just some formal definition. So what is the connection between the formal definition and the drawing? And how do one draws angles over $360^\circ$?
|
It depends on how you think about angles. You can either agree that $405^\circ$ is exactly the same as $45^\circ$. -- This is how mathematicians usually think about it. Or you can think about it as $1$ complete rotation ($360^\circ$) and then an additional $45^\circ$. -- This is how engineers usually think about it.
The way you draw it depends on which of the two ways above you think about it, but the angle should start on the positive $x$-axis and end on the ray positioned at a $45^\circ$ angle counterclockwise from that position either way you think about it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Extension of Integral Domains
Let $S\subset R$ be an extension of integral domains. If the ideal $(S:R)=\{s\in S\mid sR\subseteq S\}$ is finitely generated, show that $R$ is integral over $S$.
My first attempt was to show that $R$ is finitely generated as an $S$-module, then the extension is immediately integral. Is this always the case though? I am having some trouble figuring out how to show this. Also, I don't see where the fact that they are integral domains could come into play.
|
Hint. The claim follows from the standard determinant trick.
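Since the hint is terse, here is a sketch of the determinant trick in this setting (my own reconstruction; I am assuming the conductor $\mathfrak{c}=(S:R)$ is nonzero so that it is a faithful $R$-module — some such hypothesis is needed, since e.g. $\mathbb{Z}\subset\mathbb{Q}$ has zero conductor):

```latex
\text{Write } \mathfrak{c} = (S:R) = (s_1,\dots,s_n) \neq 0 \text{ and fix } r \in R.
\begin{aligned}
&(r s_i)R \subseteq s_i R \subseteq S \implies r s_i \in \mathfrak{c}
  \implies r s_i = \textstyle\sum_{j=1}^{n} a_{ij} s_j \quad (a_{ij} \in S),\\
&\text{i.e. } (rI - A)\,\mathbf{s} = 0 \text{ with } A = (a_{ij}) \in S^{n\times n}.\\
&\text{Multiplying by } \operatorname{adj}(rI - A)\text{ gives } \det(rI - A)\, s_i = 0 \text{ for all } i;\\
&\text{since we are in a domain and some } s_i \neq 0,\ \det(rI - A) = 0.\\
&\text{Expanding the determinant exhibits a monic polynomial over } S \text{ with root } r.
\end{aligned}
```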
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1195958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Can the empty set be an index set? I ran into a question, encountered in a computational course.
Could anyone tell me why the empty set $ \emptyset $ can be an index set?
My source is this book
|
If you've seen How I Met Your Mother, you might remember the episode when Barney is riding a motorcycle inside a casino, and when the security guards grab him he points out one simple thing: "Can you show me the rule that says you cannot drive a motorcycle on the casino's floor?".
Mathematics is quite similar. If there's nothing in the rules which forbids it, it's allowed.
Direct your attention to Definition 7.1.9 in that book, it says that an index set is a set of all indices of some family of computable [partial] functions/computably enumerable sets. The empty set is a set of computable functions, and the empty set is exactly its index set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proving that $\Delta(M \times M)$ is a submanifold of $M \times M$ I am struggling to prove that
$\Delta(M \times M) = \{(x,x) : x \in M\}$ is a submanifold of $M \times M$.
A manifold M is a submanifold of N if there is an inclusion map $i:M \rightarrow N$
such that:
*
*$i$ is smooth
*$Di_x$ is injective for each $x \in M$
*The manifold topology of $M$ is the induced topology from $N$.
I also know the following theorem:
Let $F:M \rightarrow N$ be a smooth map and $c \in N$ be such that at each point $a \in F^{-1}(c)$ the derivative $DF_a$ is surjective.
Then, $F^{-1}(c)$ is a smooth manifold of dimension $dim(M)-dim(N)$.
In the course of the proof, we see that the manifold structure on $F^{-1}(c)$ satisfies the conditions of the definition of submanifolds.
Problem:
We want to avoid the verifications of the definition of submanifold.
Applying the theorem, we can take $F:M \times M \rightarrow M$ defined by $(x,y) \mapsto y-x$, and so $\Delta(M \times M) = F^{-1}(0)$.
But $y-x$ in general may not belong to $M$.
What do I need to do to prove what I want to prove? Hints? Thanks
|
Hint: Consider the map $M \to M \times M$ given by $x \mapsto (x,x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Let $G$ be a group, where $(ab)^3=a^3b^3$ and $(ab)^5=a^5b^5$. How to prove that $G$ is an abelian group? Let $G$ be a group, where $(ab)^3=a^3b^3$ and $(ab)^5=a^5b^5$. Prove that $G$ is an Abelian group. I know that the answer for this question has been already posted and I have seen it. However, could somebody explain in detail the steps required to prove that $G$ is an Abelian group. Why, for example, could not we just divide $ababab = aaabbb$ by $ab$, get $abab = aabb$, cancel $a$ and $b$ and get $ba = ab$?
|
Suppose that $a,b \in G$ are arbitary elements of the group G, with the assumption that $(ab)^3 = a^3 b^3$, and $(ab)^5 = a^5 b^5$.
Observe that for
$$
(ab)^3 = a^3 b^3
$$
Multiplying left and right by respective inverses yields
$\implies ababab = aaabbb \implies baba = aabb $
In addition we have
\begin{equation}
(ab)^5 = a^5 b^5
\end{equation}
Multiplying left and right by respective inverses yields
$\implies ababababab = aaaaabbbbb\implies a^{-1}abababababb^{-1}=a^{-1}aaaaabbbbbb^{-1}$
$\implies babababa=aaaabbbb$
So, now we have reduced this to a new problem.
$(ba)^4 = a^4 b^4$ and $(ba)^2 = a^2 b^2$
Now notice that
$(ba)^4 = (ba)^2(ba)^2 = (a^2 b^2)(a^2 b^2)$. This now shows us that the following is also true.
$a^4 b^4 = a^2 b^2 a^2 b^2$. We now have another instance of cancellation using inverses; in fact, this time we can cancel twice on each side, giving us $a^2b^2 = b^2 a^2$. We are getting close! Recall that we have the relation $(ba)^2 = a^2 b^2$. This is equivalent to saying $(ab)^2 = b^2 a^2$; all we have done is switch their roles. The rest of the proof falls out rather quickly. We now have
$$
a^2b^2=b^2a^2= (ab)^2 = abab
$$
A final multiplication by inverses on each side yields the desired result $ab = ba$. Thus the group is Abelian.
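As a computational complement (my own sketch, not part of the proof): the theorem implies that any nonabelian group must violate at least one of the two hypotheses. A brute-force check over $S_3$, with permutations stored as tuples, confirms this:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]], permutations of {0, 1, 2} stored as tuples
    return tuple(p[i] for i in q)

def power(p, n):
    r = tuple(range(len(p)))
    for _ in range(n):
        r = compose(p, r)
    return r

S3 = list(permutations(range(3)))
violations = [
    (a, b) for a in S3 for b in S3
    if power(compose(a, b), 3) != compose(power(a, 3), power(b, 3))
    or power(compose(a, b), 5) != compose(power(a, 5), power(b, 5))
]
print(len(violations) > 0)  # → True: S3 is nonabelian, so some pair must fail a hypothesis
```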
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Why is the following collection of sets equals the following? Suppose $(A_n)$ is a sequence of events, For any $I \subset \{1,2,\ldots \} $, set
$$ C_I = \bigg( \bigcap_{n \in I} A_n \bigg) \cap \bigg( \bigcap_{n \notin I } A_n^c \bigg) $$
I am trying to show that for any $n \geq 1 $ we have
$$ \bigcup_{|I| < \infty,\ n \in I} C_I = A_n $$
I find kind of hard to understand this identity. For example, if I take $I = \{1,2,3 \} $, then
$$ C_I = ( A_1 \cap A_2 \cap A_3) \cap ( A_4^c \cap A_5^c \cap \cdots) $$
But, then how can I understand and compute $ \bigcup_{n \in I } C_I $ in this situation?
|
Look at how something gets into a given $C_I$. Let $I$ be a finite index set and $a\in C_I$
then we have $\forall k\in I,a\in A_k$. We also have $a\in A_j^C$ for $j\notin I$, or in other words, $a\notin A_j$. Therefore, for a given index set, we're collecting all the events which are in every one of those indexed sets, and in NO other ones.
Now, fix $n$ and let $I$ range over every finite index set that includes $n$. How does something get into one of these sets? Well, it has to be in $A_n$, since we are only taking index sets that include $n$, so your union is a subset of $A_n$. So, the only question is, does it miss anything from $A_n$? The answer is no, because if $a\in A_n$, there will be some index set $I$ with $a\in C_I$.
In all honesty, the reason for that last step is evading me, but I started typing before I had the final answer! Hopefully it'll come to you, me, or someone else shortly.
The general proof method though is to try and show both sets are subsets of each other when trying to show a complicated set equality, and thus look at how things get into each one.
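Here is a finite toy computation (my own illustration with three made-up events on a universe of eight points) showing how the sets $C_I$ partition the space by membership pattern — which is the idea behind the last step: a point $a \in A_n$ lies in $C_I$ for $I = \{k : a \in A_k\}$, an index set that contains $n$ (and is finite here because there are only finitely many events):

```python
from itertools import combinations

U = set(range(8))                                          # finite universe of outcomes
A = {1: {0, 1, 2, 3}, 2: {0, 1, 4, 5}, 3: {0, 2, 4, 6}}    # toy events

def C(I):
    # points lying in every A_n with n in I and in no A_n with n outside I
    s = set(U)
    for n in A:
        s &= A[n] if n in I else U - A[n]
    return s

for n in A:
    union = set()
    for r in range(len(A) + 1):
        for I in combinations(A, r):
            if n in I:
                union |= C(I)
    assert union == A[n]   # each A_n is recovered as the union of the C_I with n in I
print("verified")
```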
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Smallest n to align sample mean with population mean There's a question in my book that I just do not understand. This is it in its entirety:
Let $ \bar{X} $ be the sample mean of a random sample of size $ n $ from a normal distribution with a variance of 9. Find the smallest sample size such that the sample mean is within 0.5 units of the population mean with probability no less than (i) 0.9, (ii) 0.95
I really don't know where to begin. I've just started learning about confidence intervals, and usually the sample mean has been given.
I think it would be setup something like this:
$$P(\mu-0.5 \le \bar{x} \le \mu+0.5) = 0.9 $$
But how can I do that? And where does finding the smallest n come into play?
I'm not given the sample mean, and I'm not given the population mean.
I'd like to have more work to show, but I don't even understand how to start this problem. I read back through the chapter, and there's no comparable examples. I'm sure this is ultimately simple, but it has me thrown for a loop.
If anyone could tell me how to set this up, I'd be very grateful.
|
Hint:
If $X_i \sim N(\mu,\sigma^2)$ represent $n$ independent random variates, then
$$\frac1n \sum_{i=1}^{n}X_i\text{ has distribution } N(\mu, \frac{\sigma^2}{n})$$
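Following the hint, $\bar X \sim N(\mu,\sigma^2/n)$ with $\sigma^2=9$, so $P(|\bar X-\mu|\le 0.5)=2\Phi(0.5\sqrt n/3)-1$. A short search for the smallest $n$ (my own sketch, expressing the standard normal CDF through `math.erf`):

```python
import math

def prob_within(n, sigma=3.0, eps=0.5):
    # P(|X̄ − μ| ≤ eps) when X̄ ~ N(μ, σ²/n): equals 2Φ(eps·√n/σ) − 1
    z = eps * math.sqrt(n) / sigma
    return math.erf(z / math.sqrt(2))   # 2Φ(z) − 1 = erf(z/√2)

def smallest_n(level):
    n = 1
    while prob_within(n) < level:
        n += 1
    return n

print(smallest_n(0.90), smallest_n(0.95))  # → 98 139
```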
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Compact operators on Hilbert Space I'm working on the following problem:
Let $K:H\rightarrow H$ be a compact operator on a Hilbert space. Show that if there exists a sequence $(u_n)_n\in H$ such that $K(u_n)$ is orthonormal, then $|u_n|\rightarrow \infty$.
Here is my argument: It suffices to show that for all $u\in H$, there exists $M>0$ such that $|u_n-u|\geq M$, that is, $(u_n)$ has no limit in $H$. Suppose $u_n\rightarrow u$ (all convergence is strong in my argument); then $Ku_n\rightarrow Ku$. I claim that we can extract a subsequence $(Ku_{n_k})_k$ that is divergent. Denote $E_k=\overline{span\{Ku_{n_k}\}}$, and we may choose $Ku_{n_k}$ such that $||Ku_{n_k}||=1$ (trivial) and $dist(Ku_{n_{k+1}}, E_k)\geq 1/2$. So $|Ku_{n_k}-Ku_{n_{k+1}}|\geq 1/2$ for all $k$, which violates the convergence of the sequence $Ku_n$.
Since I did not use the fact that $K$ is a compact operator, there must be something wrong with the proof. Can someone let me know which part is problematic? Any hints for the right approach?
|
I'd proceed as follows:
*
*An orthonormal set has no limit point.
*Therefore, any set containing an infinite orthonormal set is not compact.
*If $|u_n|$ is bounded, then $K$ maps a bounded set to a set whose closure is not compact.
Your proof got confused in the first step.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
inequality regarding norm of linear operator Let $T$ be a linear operator on a vector space $X$. For $x\in X$, I know there's the inequality that $$||Tx||<||T||||x||$$
Yet I'm wondering what are those norms. Are they arbitrary? especially on the right hand side there's operator norm and vector norm, how do we cooperate that?
Thank you!
|
There are a few equivalent definitions of the operator norm. Perhaps the most relevant for this particular question is the following:
Let $T:V \to W$ be a linear map between two normed vector spaces $V$ and $W$. The operator norm $\|T\|_{op}$ of $T$ is defined to be
$$\|T\|_{op} := \sup \left\{\frac{\|Tx\|}{\|x\|} : x \in V \text{ and } x \not = 0 \right\}$$
It then follows immediately that $\|Tx\| \leq \|T\|_{op} \|x\|$ for any $x \in V$.
Check out Wikipedia for more info.
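As a numeric illustration (my own sketch for a made-up $2\times2$ matrix, using the fact that the operator norm induced by the Euclidean norm is the largest singular value): sampled ratios $\|Tx\|/\|x\|$ never exceed $\|T\|_{op}$ and approach it:

```python
import math, random

A = [[2.0, 1.0], [0.0, 3.0]]   # a made-up 2×2 matrix

def apply_T(x):
    return [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]

def op_norm():
    # largest singular value = sqrt of the larger eigenvalue of AᵀA (2×2 closed form)
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d
    return math.sqrt((p + r + math.sqrt((p - r) ** 2 + 4 * q * q)) / 2)

T = op_norm()
random.seed(0)
best = 0.0
for _ in range(10000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    best = max(best, math.hypot(*apply_T(x)) / math.hypot(*x))
print(best <= T + 1e-9 and best >= 0.999 * T)
```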
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to determine the limit of this sum? I know that $\lim_{x\to\infty} \dfrac{2x^5\cdot2^x}{3^x} = 0$. But what I can't figure out is how to get that answer. One of the things I tried is $\lim_{x\to\infty} 2x^5 \cdot \lim_{x\to\infty}(\dfrac{2}{3})^x$, but then you'd get $\infty \cdot 0$, and I think that is undefined. What would be a correct way to get $0$?
|
$$F=\lim_{x\to\infty}\frac{2x^5}{(3/2)^x}$$ which is of the form $\dfrac\infty\infty$
If L'Hospital's rule is allowed,
$$F=\lim_{x\to\infty}\frac{2\cdot5x^4}{(3/2)^x\ln(3/2)}$$ which is again of the form $\dfrac\infty\infty$
So, we can apply L'Hospital's rule again and so on
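The decay is also easy to see numerically — a quick sketch of mine evaluating $2x^5(2/3)^x$ at growing $x$:

```python
# the factor (2/3)^x eventually crushes any polynomial growth
f = lambda x: 2 * x**5 * (2 / 3) ** x
samples = [f(x) for x in (10, 50, 100, 200, 400)]
print(all(b < a for a, b in zip(samples, samples[1:])) and samples[-1] < 1e-50)  # → True
```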
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Why do we take $R$ when constructing the tensor algebra? Let $R$ be a commutative ring and $M$ be an $R$-module.
Define $T^0(M)=R$ and $T^n(M)=M\otimes...\otimes M$(n-times) for $n\in\mathbb{Z}^+$.
Then we take $T(M)\triangleq \oplus T^n(M)$ and give an operation to make it an $R$-algebra.
My question is why do we take $T^0(M)$? What's the role of $T^0(M)$ in the tensor algebra $T(M)$?
And I don't get how to define an operation on $T(M)$.
It's written in my text that we take an operation as $(m_1\otimes...\otimes m_i)(n_1\otimes...\otimes n_j)=(m_1\otimes...\otimes m_i\otimes n_1\otimes...\otimes n_j)$ but what does this mean when $i$ or $j$ is $0$?
|
You want $T(M)$ to be an $R$-algebra. Thus, the role of $T^0(M)$ is to give you an algebra homomorphism $R\stackrel{\sim}{\to} T^0(M)\to T(M)$.
When $i=0$ and $j>0$ you define $r\cdot (n_1\otimes \dots\otimes n_j)=(rn_1)\otimes \dots\otimes n_j$, similarly for $j=0$. When $i=0=j$, then the multiplication in $T(M)$ is just the same as in $R$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Approximation of $a_n$ where $a_{n+1}=a_n+\sqrt {a_n}$ Let $a_1=2$ and define $a_{n+1}=a_n+\sqrt {a_n},n\geq 1$.
Is it possible to get a good approximation of the $n$th term $a_n$?
The first terms are $2,2+\sqrt{2}$, $2+\sqrt{2}+\sqrt{2+\sqrt{2}}$ ...
Thanks in advance!
|
For the third term, we have the following. Let $c_n = \sqrt{a_n} -\frac{1}{2}n + \frac{1}{4}\ln n$. Then
$$c_{n+1} -c_{n} = -\frac{1}{2} \cdot \frac{\sqrt{a_n}}{(\sqrt{a_n}+\sqrt{\sqrt{a_n}+a_n})^2} + \frac{1}{4} \ln\left(1+ \frac{1}{n}\right).\qquad (1)$$
Note that $a_n = \frac{1}{4}n^2 - \frac{1}{4}n\ln n + o(n \ln n)$, so the first term on the right-hand side of (1) equals
$$-\frac{1}{8} \cdot \frac{2n - \ln n + o(\ln n)}{n^2 -n \ln n + o(n \ln n)} = -\frac{1}{8n} \cdot\left(2+ \frac{ \ln n + o(\ln n)}{n - \ln n + o( \ln n)} \right)= -\frac{1}{4n} - \frac{\ln n}{8n^2}(1+o(1)).$$
Therefore $(1) = - \frac{\ln n}{8n^2}(1+o(1))$. Then we have
$$\lim_{n \rightarrow \infty} \frac{c_{n+1} -c_{n} }{- \frac{\ln n}{8n^2}} = 1 \quad\text{and also}\quad \lim_{n \rightarrow \infty} \frac{c_{n+1} -c_{n} }{ \frac{\ln (n+1)+1}{8(n+1)} -\frac{\ln n+1}{8n} } = 1.$$
For $\frac{\ln n+1}{8n}$, we may integrate $-\frac{\ln x}{8x^2}$ with respect to $x$.
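A numerical experiment (my own sketch) is consistent with the resulting approximation $\sqrt{a_n}\approx\frac n2-\frac14\ln n+c$ for a constant $c$: the quantity $c_n$ settles down, with successive gaps shrinking roughly like the summable tail above:

```python
import math

a = 2.0                                # a_1
checkpoints = {10**3, 10**4, 10**5}
c = []
for n in range(1, 10**5 + 1):
    if n in checkpoints:
        c.append(math.sqrt(a) - n / 2 + 0.25 * math.log(n))
    a += math.sqrt(a)

# c_n should converge: successive gaps shrink (differences are ~ -ln n / (8 n²))
d1, d2 = abs(c[1] - c[0]), abs(c[2] - c[1])
print(d2 < d1 < 1e-2)  # → True
```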
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1196979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|
Every Partially ordered set has a maximal independent subset I am working on this problem:-
Every Partially ordered set has a maximal independent subset.
Definition: Let $\langle E,\prec\rangle$ be a partially ordered set. A subset $A\subset E$ is called an independent set if for any two of its elements $a, b$, neither $a\prec b$ nor $b\prec a$ holds.
My attempt: I am trying to apply the Teichmüller–Tukey lemma, but as far as I can tell, the Teichmüller–Tukey lemma only guarantees the existence of a maximal subset.
Any help appreciated.
|
This is a classical application of Zorn's lemma, but you can just as well use the Teichmüller–Tukey lemma instead.
HINT: Show that $\mathcal F=\{A\subseteq E\mid A\text{ is independent}\}$ has finite character. What can you conclude about a maximal element in $\cal F$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
linear algebra help - diagonal matrix and triangular matrix (a) Suppose that the eigenvectors of an n×n matrix A are the standard basis vectors ej for j = 1, . . . , n. What kind of matrix is A?
(b) Suppose that the matrix P whose columns are the eigenvectors of A is a triangular matrix. Does that mean that A must be triangular? Why or why not?
I'm stuck on part (b). I figure A must be a diagonal matrix, but doesn't that make P also a diagonal matrix? Any hints/help? Thanks in advance.
|
I suppose that the basis vectors $\mathbf{e}_i$ are proper eigenvectors of $A$, and $A$ is diagonalizable. In this case you have $A=PDP^{-1}$, where $D$ is a diagonal matrix with the eigenvalues of $A$ as diagonal elements and $P$ is a matrix that has the eigenvectors as columns, so, in your case, $P=I$ and $A$ is diagonal.
For the question b) note that upper triangular matrices form a subring of $n\times n$ matrices. So the inverse of an upper triangular matrix is upper triangular and the product of upper triangular matrices is upper triangular. Since a diagonal matrix is also upper triangular, the product $PDP^{-1}=A$ is upper triangular.
If $A$ is not diagonalizable, then some of the eigenvectors are generalized eigenvectors and the decomposition is a Jordan canonical form $A=PJP^{-1}$ where $J$ is upper triangular. In this case $A$ is upper triangular if $P$ is upper triangular.
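A tiny numeric check of this closure argument (my own sketch in the $2\times2$ case, with made-up $P$ and $D$): the similarity $PDP^{-1}$ of a diagonal matrix by an upper triangular one comes out upper triangular:

```python
# toy check in 2×2: if P is upper triangular (invertible) and D diagonal,
# then A = P D P⁻¹ is again upper triangular
P = [[1.0, 2.0], [0.0, 3.0]]
D = [[5.0, 0.0], [0.0, 7.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

A = matmul(matmul(P, D), inv2(P))
print(abs(A[1][0]) < 1e-12)   # → True: the lower-left entry vanishes
```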
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What's the difference between $|z|^2$ and $z^2$, where $z$ is a complex number? I know that $|z|^2=zz^*$ but what is $z^2$? Is it simply $z^2=(a+ib)^2$?
|
If $z=a+ib$, then
\begin{align*}
z^2=&\,z\times z=(a+ib)\times(a+ib),\\
\left|z\right|^2=&\,z\times\overline z=(a+ib)\times(a-ib).
\end{align*}
Note that $\left|z\right|^2$ is always real and non-negative, whereas $z^2$ is, in general, complex.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Compute the limit of $\frac{\log \left(|x| + e^{|y|}\right)}{\sqrt{x^2 + y^2}}$ when $(x,y)\to (0,0)$ $$\lim_{(x,y)\to (0,0)} \frac{\log \left(|x| + e^{|y|}\right)}{\sqrt{x^2 + y^2}} = ?$$
Assuming that $\log \triangleq \ln$, then I tried the following:
1. Sandwich rule
Saying that $\log \left(|x| + e^{|y|}\right) < |x| + e^{|y|}$:
$$\begin{align}
\lim_{(x,y)\to (0,0)} \frac{\log \left(|x| + e^{|y|}\right)}{\sqrt{x^2 + y^2}} &< \\
& \lim_{(x,y)\to (0,0)} \frac{|x| + e^{|y|}}{\sqrt{x^2 + y^2}} = \\
&= \lim_{r \to 0} \frac{|r\cos \theta| + e^{|r\sin\theta|}}{r} \\
&= \lim_{r \to 0} |\cos\theta| + \frac{e^{|r\sin\theta|}}{r}
\end{align}$$
From here it seems that the limit doesn't exist, so it doesn't indicate anything on the given function.
2. Polar coordinates
Tried expressing $x=r\cos\theta, y=r\sin\theta$, though got stuck right at the $\log$ function.
Also tried using it in the Sandwich rule above, to no avail.
3. Single variable assignment
Another technique is to replace an expression of $x$ and $y$ with a single variable $t$, but for this case it is not helpful.
The $\sqrt{x^2 + y^2}$ strongly suggests polar coordinates, though I can't work through that $\log$ and $e$.
It seems that I'm missing an important logarithmic identity, though I've seen many identities at Wiki and none is useful.
|
This is a very non-rigorous approach, but intuitive.
The Taylor expansion (about $v=1$) of $\log(v)$ is $(v-1)+O(v^2)$, and the Taylor expansion (about $u=0$) of $e^u$ is $1+u+O(u^2)$.
Thus $\log(|x|+e^{|y|})$ is approximately $|x|+e^{|y|}-1$ which is approximately $|x|+|y|+O(x^2)+O(y^2)$. In polar coordinates this is $|r|(|\cos \theta|+| \sin \theta|) + O(r^2)$.
Now as $r \to 0$, clearly the value of $\frac{|r|(|\cos \theta| +|\sin \theta|)}{|r|}=|\cos \theta| + |\sin \theta|$ will vary with $\theta$, so the limit cannot be well-defined. Someone ought to check this though, I don't know yet if it's correct.
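A quick numeric check of my own agrees: along the $x$-axis the quotient tends to $1$, while along the diagonal $x=y$ it tends to $\sqrt2$, so the limit cannot exist:

```python
import math

f = lambda x, y: math.log(abs(x) + math.exp(abs(y))) / math.hypot(x, y)

t = 1e-6
along_axis = f(t, 0.0)   # θ = 0:    |cos θ| + |sin θ| = 1
along_diag = f(t, t)     # θ = π/4:  |cos θ| + |sin θ| = √2
print(abs(along_axis - 1.0) < 1e-3, abs(along_diag - math.sqrt(2)) < 1e-3)  # → True True
```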
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Turn a power of $2$ into a multiple of $7$, maybe adding a $2$ but not a $7$ On St. Patrick's Day, the Lucky Charms leprechaun wagered me a bottle of Glenfiddich I couldn't solve this math problem before midnight:
There is a power of $2$ that can be turned into a multiple of $7$ with a simple rotation of a representation. Furthermore, that same power of $2$ can be turned into that same multiple of $7$ by adding a $+2$ to another representation. And if you really want to use a $7$ to get there, that can also be done, but you need a couple more things besides the $+2$ to accomplish the transformation.
I immediately said $1024$, since $2401 = 7^4$, but he said that's wrong, because that requires a rotation and a swap. "Think easier, lad," he said.
Midnight came and went and I didn't figure it out. But I still want to know the answer. I know it's something so easy I will feel stupid once the answer is revealed to me. The answer doesn't require heavy duty number crunching. I've been looking at the powers of $2$ up to $2^{64}$ but I can't figure it out. This riddle has me stumped.
|
Hmm, those leprechauns are tricky...
Perhaps we need to rotate $16$ to get that multiple of 7, $91$. I can't get the rest of it to make sense, though - I tried working through other bases, no joy yet - I would have expected him to allude to going from $31_5$ to $331_5$, because they're very fond of dublin'.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
}
|
Elementary proof of topological invariance of dimension using Brouwer's fixed point and invariance of domain theorems? http://people.math.sc.edu/howard/Notes/brouwer.pdf https://terrytao.wordpress.com/2011/06/13/brouwers-fixed-point-and-invariance-of-domain-theorems-and-hilberts-fifth-problem/
These papers give fairly elementary proofs of Brouwer's fixed point and invariance of domain theorems. Having established these tools, is it possible to prove that an open subset of $ \mathbb{R}^n $ and an open subset of $ \mathbb{R}^m $ can't be homeomorphic unless $ n = m $ without refering to all those more advanced things which are normally used here, such as homology theory and algebraic topology?
|
If $V$, open subset of $\Bbb R^m$, were homeomorphic to an open subset of $\Bbb R^n$, $U$, let $f: U \to V$ be a homeomorphism. (Suppose WLOG that $m \leq n$.) Compose with a linear inclusion map $\Bbb R^m \hookrightarrow \Bbb R^n$ to get a continuous injective map $U \to \Bbb R^n$ with image contained in the subspace $\Bbb R^m$, as it factors as $U \to V \subset \Bbb R^m \hookrightarrow \Bbb R^n$.
If $m < n$, the image cannot be open - any neighborhood of a point in the hyperplane contains points not in the hyperplane. Therefore $m=n$. So invariance of domain implies invariance of dimension.
Of course, this says even more: there's not even a continuous injection $U \to V$ between open sets of $\Bbb R^n, \Bbb R^m$ respectively, $n>m$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
}
|
Prove that $HK$ is a subgroup iff $HK=KH$. Let $H$ and $K$ be subgroups of a group $G$, and let $HK=\{hk: h \in H, k \in K\}$, $KH=\{kh: k \in K, h \in H\}$. How can we prove that $HK$ is a subgroup iff $HK=KH$?
|
It is usually helpful to recall that $L$ is a subgroup if and only if $L^2=L$, $L^{-1}=L$ and $L\neq \varnothing$.
If $HK$ is a subgroup, then $HK=(HK)^{-1}=K^{-1}H^{-1}=KH$. Hence the condition is met. Conversely,
$$(HK)^{-1} = K^{-1}H^{-1} = KH = HK,$$
$$(HK)(HK) = H(KH)K = H(HK)K = H^2K^2 = HK,$$
and of course $HK\neq \varnothing$. We made repeated use of the fact that $H$ and $K$ are subgroups and $HK = KH$.
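For a concrete failure case (my own addition, not part of the original answer), take two order-$2$ subgroups of $S_3$: there $HK \neq KH$, and $HK$ is not closed under the product, matching the criterion.

```python
from itertools import product

def comp(p, q):                       # (p ∘ q)(i) = p[q(i)], permutations as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def prod_set(H, K):                   # the set HK = {hk : h in H, k in K}
    return {comp(h, k) for h, k in product(H, K)}

def is_subgroup(S):                   # finite nonempty subset of a group: closure suffices
    return len(S) > 0 and prod_set(S, S) == S

e = (0, 1, 2)
H = {e, (1, 0, 2)}                    # generated by the transposition (0 1)
K = {e, (2, 1, 0)}                    # generated by the transposition (0 2)
HK, KH = prod_set(H, K), prod_set(K, H)
print(HK == KH, is_subgroup(HK))      # False False
```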
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
}
|
Question about the relation between the adjoin and inverse of linear operator on Hilbert space I am teaching myself functional analysis from a CS background. I am clueless about the following exercise problem of introductory functional analysis. Any hint or help is appreciated. Thanks a lot!
Let $H$ be a Hilbert space and $T: H \rightarrow H$ a bijective bounded linear operator whose inverse is bounded. Show that $(T^{*})^{-1}$ exists and
$$
(T^{*})^{-1} = (T^{-1})^{*}
$$
It is easy to show $T^{-1}$ exists, but I seem to not able to find any clue to find the relation between the $T^{-1}$ and $T^{*}$.
|
Hint: Since $T$ is a bounded bijective linear operator, you have the existence of $T, T^*, T^{-1}$ and $(T^{-1})^*$ (why?)
So then $\left< x, y\right> = \left<T^{-1}Tx, y\right> = \left<Tx, (T^{-1})^*y\right> = \left<x, T^*(T^{-1})^*y\right>$, for all $x,y\in\mathcal{H}$. What does this tell us about the relationship between $T^*$ and $(T^{-1})^*$?
Further hint, inverses are unique.
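As a finite-dimensional sanity check (a sketch of mine; in $\mathbb{C}^4$ the adjoint is just the conjugate transpose), the identity the hint leads to can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
# a random complex matrix is invertible almost surely
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

lhs = np.linalg.inv(T.conj().T)   # (T*)^{-1}
rhs = np.linalg.inv(T).conj().T   # (T^{-1})*
print(np.allclose(lhs, rhs))      # True
```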
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Converting a cubic to a perturbation problem I'm trying to learn about Perturbation, but feel like I'm confused before I've even started.
Right now I'm focused on using them to find solutions to polynomial equations.
The initial example I've been given has $x^3 - 4.001x + 0.002 = 0$, the numbers clearly lend towards $\epsilon$ = 0.001, and you can then have $x^3 - (4 + \epsilon)x + 2\epsilon = 0$
Where I'm confused, is how to apply this to a cubic, when there isn't an obvious value for $\epsilon$.
So, for example, IF we take away the .001 from the first equation & simplify, we have: $x^3 + 4x + 2 = 0$ as our starting equation, how would we then decide a reasonable value for $\epsilon$?
Do we just pick anything reasonably small? (though how small is reasonably?)
Cheers,
Belle
|
the idea, as i understand it, is to identify a small or large parameter in the problem. you have identified yours as $\epsilon.$ the non-perturbed problem, that is with $\epsilon = 0,$ has an easy solution. in your case this is $x^3 - 4x = 0,$ with solutions $$x = 0 , 2, -2.$$ we can find three roots of the perturbed problem $$x^3 -(4+\epsilon) x + 2\epsilon=0$$ near any one of the solutions $x_0$ by looking for $$x = x_0 + \epsilon x_1 + \epsilon^2x_2 + \cdots$$ subbing in the equation and expanding we find $$(x_0+\epsilon x_1+\cdots)^3 -(4+\epsilon)(x_0+\epsilon x_1 + \cdots)+2\epsilon= 0$$ at order $\epsilon$ you get $$3x_0^2x_1-x_0 - 4x_1 + 2=0 \to x_1 = \frac{2-x_0}{4-3x_0^2} $$ the first order corrections to the roots are
$$x = 0+\frac 12 \epsilon , 2+0\epsilon, -2-\frac 12\epsilon$$
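A quick numerical sanity check (my own, not from the original answer) compares these first-order approximations with the exact roots for $\epsilon = 0.001$:

```python
import numpy as np

eps = 0.001
# exact roots of x**3 - (4 + eps)*x + 2*eps = 0 (real parts: the roots are real)
exact = np.sort(np.roots([1.0, 0.0, -(4 + eps), 2 * eps]).real)
# first-order perturbation expansions about the unperturbed roots -2, 0, 2
approx = np.sort(np.array([-2 - eps / 2, eps / 2, 2.0]))
print(exact)
print(approx)
```

The two agree to $O(\epsilon^2) \approx 10^{-6}$ (and $x=2$ happens to be an exact root for every $\epsilon$, consistent with its vanishing first-order correction).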
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why is $\ker\omega$ integrable iff $\omega\wedge d\omega=0$? Suppose $\omega$ is a nonvanishing $1$-form on a $3$-manifold $M$. It's known that $\ker\omega$ is an integral distribution iff $\omega\wedge d\omega=0$.
I'm trying to understand this, but I don't get why $\ker\omega$ integrable implies $\omega\wedge d\omega=0$.
I worked out so far: suppose $\omega\wedge d\omega=0$. Let $X,Y$ be vector fields in $\ker\omega$. Then
$$
d\omega(X,Y)=X(\omega(Y))-Y(\omega(X))-\omega([X,Y])
$$
so $\omega([X,Y])=-d\omega(X,Y)$. Since $\omega$ is nonvanishing, we can find $Z$ such that $\omega(Z)\neq 0$. Then
$$
0=(\omega\wedge d\omega)(X,Y,Z)=c\omega(Z)d\omega(X,Y)
$$
for some $c\neq 0$. Thus $d\omega(X,Y)=0$, so $\omega([X,Y])=0$, so $[X,Y]\in\ker\omega$. Then $\ker\omega$ is involutive, so by Frobenius, it is integrable.
On the other hand, if $\ker\omega$ is integrable, then the annihilator ideal $I(\ker\omega)$ is closed under $d$. Clearly $\omega\in I(\ker\omega)$, so by assumption $d\omega\in I(\ker\omega)$. I'm trying to show $\omega\wedge d\omega(X,Y,Z)=0$ for any three vector fields, but it looks like a dead end. Is it possible to assume $X,Y\in\ker\omega$ without loss of generality some how? Because that would make the calculation work out.
|
Yes, you can assume $X,Y\in\ker(\omega)$. Since $M$ is three dimensional $L=\bigwedge^3TM$ is a line bundle over $M$, and $\omega\wedge d\omega$ is naturally defined on (local) sections of this line bundle. Take a local basis $(X,Y)$ of vector fields parallel to $\ker(\omega)$, and a smooth local extension $Z$ of some vector $Z_m\in T_mM\setminus\ker(\omega_m)$, then $Z\wedge X\wedge Y$ is a local basis of the line bundle $L$, and evaluating $\omega\wedge d\omega$ on it yields zero (through your calculation), so that this form is actually locally zero, and hence globally zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1197985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Is it okay to define k-th symmetric power of $M$ in this way? I want to define the tensor algebra and related algebras in a very formal way. I will illustrate how I tried to define algebras below.
Let $R$ be a commutative ring and $M$ be an $R$-module.
Consider the tensor algebra $T(M):=\bigoplus_{n\in \mathbb{N}} M^{\otimes n}$. Then there exists a canonical monomorphism $\pi_n:M^{\otimes n}\rightarrow T(M):n\mapsto \delta_n$ for each $n\in\mathbb{N}$. Then, let's define $T^n(M):=\pi_n(M^{\otimes n})$. So that $T^n(M)$ becomes really a subset of $T(M)$. And, for convenience, let's write $m_1\otimes...\otimes m_{n}$ to mean $\pi_n(m_1\otimes...\otimes m_n)$.
Now, let $I$ be the ideal of $T(M)$ generated by elements of the form $m_1\otimes m_2 - m_2\otimes m_1$. Then, take the quotient $R$-algebra of $T(M)$ by $I$ and denote it by $S(M)$ and call it the symmetric algebra on $M$.
Now, define $S^n(M):=\{x+I\in S(M): x\in T^n(M)\}$ and call it the $n$-th symmetric power of $M$.
Is my definition equivalent to the usual definition? That is, $S^k(M)$ is usually defined as $T^k(M)/I\cap T^k(M)$, but I hate this since this is not actually a subset of $S(M)$.. Is it okay to define $S^n(M)$ in my way?
|
The inclusion $T^k(M) \to T(M)$ induces a map
$$
T^k(M)/I \cap T^k(M) \to T(M)/I = S(M)
$$
which is obviously injective. Your definition of $S^k(M)$ is exactly the submodule of $S(M)$ which is the image of this map (since obviously $T^k(M) \to S(M)$ is trivial on $I \cap T^k(M)$). Therefore, the two definitions give canonically isomorphic modules, and it is unlikely to result in confusion if one uses one or the other.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
stuck with root test for series convergence to $1$ for $\sum_{n=0}^{\infty} \left(\frac{(1+i)}{\sqrt{2}}\right)^n$ I've got this series and I used the series convergence root test.
However my problem is: The result of the root test is one, so I can't show wheter the series converges/diverges
$$\sum_{n=0}^{\infty} \left(\frac{(1+i)}{\sqrt{2}}\right)^n$$
My approach
$$ \sqrt[n]{\left|\left(\frac{(1+i)}{\sqrt{2}}\right)\right|^n}
= \left|\frac{1+i}{\sqrt{2}}\right| = \frac{\sqrt{2}}{\sqrt{2}}=1 $$
So, now I have 1 as result and can't decide wheter it converges or not?
Is there any other test I can use to show?
|
Actually it is a geometric series with ratio $\mathrm e^{\mathrm i\pi/4}$, which has modulus $1$; the general term is periodic (with period $8$) and does not tend to $0$, hence the series cannot converge.
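A short numeric illustration (mine, not the answerer's): the ratio has modulus $1$, so the terms never tend to $0$, and since the ratio is $e^{i\pi/4}$ the partial sums repeat with period $8$:

```python
import cmath

z = (1 + 1j) / cmath.sqrt(2)          # the common ratio, equal to e^{i*pi/4}
print(abs(z))                          # 1.0: the terms z**n all have modulus 1

s8 = sum(z**k for k in range(8))       # one full period of the partial sums
print(abs(s8))                         # ~0, since S_8 = (1 - z**8)/(1 - z) = 0
```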
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Proof verification: Prove that a tree with n vertices has n-1 edges This question is not a duplicate of the other questions of this time. I want to ask is how strong is the following proof that I am going to give from an examination point of view?
Proof:
Consider a tree $T$ with $n$ vertices. Let us reconstruct the tree from the root vertex.
When the first root vertex has been added, the number of edges is zero. After the root vertex, every vertex that is added to the construction of $T$ contributes one edge to $T$.
Adding the remaining $n-1$ vertices to the construction of $T$, after the root vertex, will add $n-1$ edges.
Therefore, after the reconstruction of $T$ is complete, $T$ will have $n$ vertices and $n-1$ edges.
Hence Proved.
Is this a sound proof at all?
|
What seems missing from the proof is:
*
*After the $n$-th vertex is added, how do we know there exists some "unused" vertex $v$ which is adjacent to a "used" vertex?
*How do we know that $v$ is not adjacent to multiple "used" vertices?
*How do we know that this algorithm will terminate, having used all the vertices?
Also note that not all trees are rooted (i.e., have a root vertex), but we can distinguish an arbitrary vertex to make it rooted.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
Error in Introduction to Mathematical Philosophy Is this an error in the text or am I reading incorrectly. What am I missing?
Introduction to Mathematical Philosophy Page 18 Definition of Number
“A relation is said to be “one-one” when, if $x$ has the relation in question to $y$, no other term $x_0$ has the same relation to $y$, and $x$ does not have the same relation to any term $y_0$ other than $y$. When only the first of these two conditions is fulfilled, the relation is called “one-many”; when only the second is fulfilled, it is called “many-one.” It should be observed that the number $1$ is not used in these definitions.”
Error 1:
First condition: … no other term $x’$ has the same relation to $y$. When only the first of these two conditions is fulfilled, the relation is called “one-many”;
Given that $x$ has the relation to $y$ then $x$ is the domain of the relation. Therefore if $x’$ has a relation to $y$, two elements of the domain would map to a single element in the co-domain. This is a many-one. The book refers to this as a one-many.
Error 2:
Second condition: … and $x$ does not have the same relation to any term $y’$ other than $y$. When only the second is fulfilled, it is called “many-one.”
Given that $x$ has the relation to $y$ then $x$ is the domain of the relation. Therefore if $x$ has a relation to $y’$ then $x$ maps to two elements of the co-domain this is a one-many. The book refers to this as a many-one.
|
It might help to consider the examples that Russell gives in the very next paragraph:
In Christian countries, the relation of husband to wife is one-one ;
in Mahometan countries it is one-many ; in Tibet it is many-one. The
relation of father to son is one-many ; that of son to father is
many-one, but that of eldest son to father is one-one.
E.g., an x-y relation is one-many if one x can have many y's (but each y has only one x).
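These definitions can be made mechanical with a tiny classifier (a sketch of mine; the relation is given as a finite set of $(x, y)$ pairs):

```python
def classify(pairs):
    """Classify a relation per Russell's two conditions."""
    xs_per_y, ys_per_x = {}, {}
    for x, y in pairs:
        xs_per_y.setdefault(y, set()).add(x)
        ys_per_x.setdefault(x, set()).add(y)
    cond1 = all(len(v) == 1 for v in xs_per_y.values())  # no two x's share a y
    cond2 = all(len(v) == 1 for v in ys_per_x.values())  # no x has two y's
    if cond1 and cond2:
        return "one-one"
    if cond1:
        return "one-many"
    if cond2:
        return "many-one"
    return "many-many"

# father -> son: a father may have many sons, but each son has one father
print(classify({("F", "son1"), ("F", "son2"), ("G", "son3")}))  # one-many
```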
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Let $(s_n)$ be a sequence that converges... Exercise 8.9 from Elementary Analysis: The Theory of Calculus by Kenneth A. Ross:
Let $(s_n)$ be a sequence that converges.
(a) Show that if $s_n \geq a$ for all but finitely many $n$, then $\lim s_n \geq a$.
(b) Show that if $s_n \leq b$ for all but finitely many $n$, then $\lim s_n \leq b$.
(c) Conclude that if all but finitely many $s_n$ belong to $[a, b]$, then $\lim s_n$ belongs to $[a, b]$.
Here's what I have so far:
Consider the set $S = \{n \in \mathbb N : s_n < a\}$. By assumption, $S$ is a finite nonempty subset of $\mathbb R$. So it must have a maximum, say $M$. Now assume $s = \lim s_n$ and $s < a$. Then for each $\epsilon > 0$, there exists $N_0 \in \mathbb N$ such that $$n > N_0\; \text{implies} \mid s_n - s\mid < \epsilon\text{,}$$ or, in the case where $\epsilon = a - s > 0$, $$n > N_0\; \text{implies} \mid s_n - s\mid < a - s \text{.}$$ Define $N := \max \{N_0, M\}$. Then $$n > N\; \text{implies}\; s_n \geq a\; \text{and} \mid s_n - s\mid < a - s \text{.}$$ The latter inequality implies, in particular, $s_n < a$ for all $n > N$. But this is contradiction, so our assumption that $s < a$ must be false. Thus $s \geq a$. $\square$
I was hoping somebody could help me make sense of this. The book suggested drawing a picture, but I couldn't figure out how. So what would this look like on both the number line and on the $n$-$s_n$ plane? Also, is there a better way to prove part (a)?
|
I think making sense of it would involve drawing a kind of Cauchy sequence: a typical picture plots the points $(n, s_n)$, scattering ever more tightly around a horizontal line at the limit as $n$ grows.
Pretty much what's going on is that, if only finitely many $s_n<a$, then when $n$ is large enough, $s_n\geq a$. This, you may recognize, is the definition of a limit (more or less). In this picture, imagine drawing a horizontal line in the $y-$axis lower than the eventual limit as drawn. At some later $n$, every point in the sequence has to be above the line, which again means the limit is larger than the value.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find $f$ such that $\int_{-\pi}^{\pi}|f(x)-\sin(2x)|^2 \, dx$ is minimal Fairly simple question that's been bothering me for a while.
Supposedly it should be simple to solve from the properties of inner product but I can't seem to solve it.
Find $f(x) \in \operatorname{span}(1,\sin (x),\cos (x))$ such that $\int_{-\pi}^{\pi}|f(x)-\sin(2x)|^2dx$ is minimal.
Tip: $\int_{-\pi}^\pi \sin (x)\sin(2x)\,dx=\int_{-\pi}^\pi \cos (x)\sin (2x) \, dx = \int_{-\pi}^\pi \sin (2x) \, dx=0$
I'm at a complete blank and I'd like a tip in the right direction.
|
The reason for writing $|f(x)-\sin(2x)|^2$ rather than $(f(x)-\sin(2x))^2$ is that one can allow the coefficients in the linear combination to be complex numbers, not just real numbers, so the square need not be non-negative unless you take the absolute value first.
Now let's see how to use the "tip" you're given:
$$
\int_{-\pi}^\pi \sin (x)\sin(2x)\,dx=\int_{-\pi}^\pi \cos (x)\sin (2x) \, dx = \int_{-\pi}^\pi \sin (2x) \, dx=0 \tag{tip}
$$
We have
\begin{align}
& \int_{-\pi}^\pi |f(x)-\sin(2x)|^2\,dx = \int_{-\pi}^\pi|(a + b\cos x+c\sin x)-\sin(2x)|^2\,dx \\[8pt]
= {} & \int_{-\pi}^\pi {\huge(}|a+b\cos x+c\sin x|^2 \\[6pt]
& {}\qquad\qquad{} - \underbrace{\Big(a+\bar a+(b+\bar b)\cos x+(c+\bar c)\sin x\Big)\sin(2x)}_{\text{The ``tip" says the integral of this part is $0$.}} + \sin^2(2x){\huge)}\,dx \\[8pt]
= {} & \int_{-\pi}^\pi |a+b\cos x+c\sin x|^2 + \sin^2(2x)\,dx. \tag2
\end{align}
What values of $a,b,c$ minimize this last expression $(2)$? Clearly making them all $0$ makes the integral of that first square $0$, but making any of them anything other than $0$ makes the integral of that part a positive number.
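The orthogonality relations in the "tip" can be spot-checked numerically (a sketch of mine, using a simple midpoint quadrature):

```python
import math

def integral(g, a=-math.pi, b=math.pi, n=20000):
    h = (b - a) / n                    # midpoint rule
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# sin(2x) is orthogonal to each of 1, cos x, sin x on [-pi, pi]
for g in (lambda x: math.sin(2 * x),
          lambda x: math.cos(x) * math.sin(2 * x),
          lambda x: math.sin(x) * math.sin(2 * x)):
    print(abs(integral(g)))            # all ~0
```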
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
}
|
limit of a two variables recursive series I am looking at a sequence which is defined as the following:
$$a_0 = a$$
$$b_0 = b$$
$$a_{n+1}=\frac{a_n+b_n}{2}$$
$$b_{n+1}=\sqrt{a_nb_n}$$
I know that both series have $a_n\geq a_{n+1} \geq b_{n+1} \geq b_n$ for every $n \geq 1$ and therefore are monotone, bounded, and converge to the same limit. My question - given $a,\,b$ what is the limit?
|
Both sequences converge to the arithmetic-geometric mean of $a$ and $b$.
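For concreteness (my own sketch), the common limit can be computed by iterating the recurrence until the two sequences meet; convergence is quadratic.

```python
import math

def agm(a, b, tol=1e-15):
    """Common limit of a_{n+1}=(a_n+b_n)/2, b_{n+1}=sqrt(a_n*b_n), for a, b > 0."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

print(agm(1.0, 2.0))   # ~1.456791, the arithmetic-geometric mean of 1 and 2
```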
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Grobner basis and subsets Let $A$ be a subset and $I$ an ideal of polynomial ring $R=k[x_1,x_2,...,x_n]$. Is there any algorithm for deciding when $A\subseteq I$?
|
Without striving for efficiency:
Use Buchberger's algorithm to produce a Groebner basis $g_1,g_2,...,g_n$ of $I$.
Let $f_1,f_2,...,f_m$ be generators of $A$. For each $f_i$ run Buchberger with $g_1,g_2,...,g_n,f_i$. If the output is again $g_1,g_2,...,g_n$ for all $i=1,2,...,m$ (as opposed to $g_1,g_2,...,g_n,h_i$ with $h_i\neq0$) then $A\subset I$. Otherwise, $A\not\subset I$.
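Here is a sketch of this membership test in SymPy (my own illustration; it assumes $A$ is presented by finitely many polynomials, and uses reduction modulo the Gröbner basis — remainder $0$ iff $f\in I$ — rather than re-running Buchberger). The twisted-cubic ideal is just an example of mine.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

def subset_of_ideal(A, ideal_gens, gens):
    """True iff every polynomial in the finite set A reduces to 0
    modulo a Groebner basis of the ideal, i.e. lies in the ideal."""
    G = groebner(ideal_gens, *gens, order='lex')
    return all(G.reduce(f)[1] == 0 for f in A)

# ideal of the twisted cubic, I = (x - z**2, y - z**3)
I_gens = [x - z**2, y - z**3]
print(subset_of_ideal([x*y - z**5, x**2 - y*z], I_gens, (x, y, z)))  # True
print(subset_of_ideal([x + y], I_gens, (x, y, z)))                   # False
```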
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
how to calculate discrete event probability? A bookstore sells children's books that belong to two publishing companies A and B and were published between years 2000 and 2004. The probabilities of a book being published by companies A and B are 0.6 and 0.4. The probability that company A published a book in years 2000, 2001, 2002, 2003, 2004 are 0.1, 0.2, 0.3, 0.3 and 0.1 respectively. The probability that company B published a book in years 2000, 2001, 2002, 2003, 2004 are 0.3, 0.2, 0.1, 0.2 and 0.2 respectively.
a) Find the probability that a book was published after 2002
b) Find the probability that a book published after 2002 was published by company A.
I think that for solution a)
probability that a book was published after 2002 is
$$\begin{align}
P(A \cup B) &= P(A) + P(B) - P(A)P(B)
\end{align}
$$
which is equal to
$$\begin{align}
P(A \cup B) &= \frac{4}{10}+\frac{4}{10}-\frac{4}{10}\frac{4}{10} = 0.64
\end{align}
$$
for solution b) i don't understand the question exactly but
If I assume that it asks only the probability that a book published after 2002 was published by company A
$$\begin{align}
P(A \cup \overline{B}) &= P(A) + P(\overline{B}) - P(A)P(\overline{B})
\end{align}
$$
which is equal to
$$\begin{align}
P(A \cup B) &= \frac{4}{10}+\frac{6}{10}-\frac{4}{10}\frac{6}{10} = 0.76
\end{align}
$$
Otherwise if I assume that only the probability that a book published after 2002 was published by company A and the probability that a book published after 2002 was published by company A and B
So the solution is
$$
P(A) = 4/10 = 0.4
$$
Is solution of a is correct and for b which solution is correct?
Thanks in advance.
|
Do you see how you can convert all the probabilities to these values?
And does that make it easier to figure out the answers?
$$\begin{array}{|c|c|c|c|c|c|} \hline
\text{Company}& \text{2000}& \text{2001}& \text{2002}& \text{2003}& \text{2004} \\ \hline
\text{A} & .06& .12& .18& .18& .06 \\ \hline
\text{B} & .12& .08& .04& .08& .08 \\ \hline
\end{array}$$
There is no "published by A and B". They are mutually exclusive. So to find part b), you need to find what percentage of books published after 2002 were the ones published by Company A.
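Working from that joint table, both answers drop out of a few lines of Python (a sketch of mine):

```python
years = [2000, 2001, 2002, 2003, 2004]
# joint probabilities P(company and year) = P(company) * P(year | company)
pA = [0.6 * p for p in (0.1, 0.2, 0.3, 0.3, 0.1)]
pB = [0.4 * p for p in (0.3, 0.2, 0.1, 0.2, 0.2)]

# a) P(published after 2002)
p_after = sum(a + b for yr, a, b in zip(years, pA, pB) if yr > 2002)

# b) P(company A | published after 2002), by conditioning
p_A_and_after = sum(a for yr, a in zip(years, pA) if yr > 2002)
print(p_after, p_A_and_after / p_after)   # ~0.4 and ~0.6
```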
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1198951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Testing for Uniform Convergence of the sum of an Alternating Series. I'm still trying to get used in understanding the concept behind uniform convergence, so there's another questions which I'm currently have trouble trying to answer.
Suppose there's a series $$\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k+1}}{2k+1}$$ and x is such that $-1 \leq x \leq 1$.
My first attempt was to use the Weierstrass' M Test but I can only seem to find $M_k$ such that $$M_k=\frac{1}{2k+1}$$. However, after a comparison test $\sum_{k=0}^{\infty}M_k$ doesn't converge.
I tried to find a partial sum of $\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k+1}}{2k+1}$ to work with similar to the last question I posted such as $$S_n=\sum_{k=0}^{n}(-1)^k\frac{x^{2k+1}}{2k+1}$$ where I realise the last term could actually be an even number n=2z or an odd number n=2z+1 and as a result could have an impact on the sign of the last term.
My thinking was to derive a Sum such that $$S_{2n+1}=\sum_{k=2n}^{2n+1}(-1)^k \frac{x^{2k+1}}{2k+1}=-\frac{x^{4n+3}}{4n+3}$$ and attempt prove uniform convergence of that.
Would this be an appropriate method or am I going the wrong way about this completely?
|
This problem isn't too difficult. First, the series is a power series with radius of convergence $R=1$; to obtain this you can use the Cauchy-Hadamard formula or the Ratio Test. Then you use Abel's Theorem: if $f(x)=\sum_{n=0}^\infty a_n x^n$ ($a_n,x\in\mathbb{R}$) has radius of convergence $R$, and the numerical series $\sum_{n=0}^\infty a_n R^n$ converges (resp. $\sum_{n=0}^\infty a_n (-R)^n$), then $\sum_{n=0}^\infty a_n x^n$ converges uniformly on $[0,R]$ (resp. $[-R,0]$).
With this at hand: evaluating at $x=1$ we have $\displaystyle\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}$, which converges by the Leibniz criterion. In the other case, $x=-1$, we get $\displaystyle\sum_{n=0}^\infty(-1)^n\frac{(-1)^{2n+1}}{2n+1}=\sum_{n=0}^\infty(-1)^{n-1}\frac{1}{2n+1}$, which again converges by the Leibniz criterion. Hence the series converges uniformly on $[-1,1]$.
I hope it will be ok for you.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Real equivalent of the surreal number {0.5|} I've been reading up on Surreal numbers, but have some questions.
Some equivalent real and surreal numbers.
2.5 =
{2|3} =
{{{0|}|}|{{{0|}|}|}} =
{{{{{|}}|{}}|{}}|{{{{{|}}|{}}|{}}|{}}}
0 =
{-1|1} = {-2|1} = {-2,-1|1} =
{{|0}|{0|}} = {{|{|0}},{|0}|{0|}} =
{{{}|{{|}}}|{{{|}}|{}}}
-3/8 = {-0.5|-0.25} = {{-1|0}|{{-1|0}|0}}
{{{{}|{{|}}}|{{{}|{{|}}}|{{|}}}}|{{{}|{{|}}}|{{|}}}}
What about the real number for {0.5|}?
|
The number for {1/2|} is "1"...
a = {{{|}|{{|}|}}|}
numeric label for a = 1
a == "1" = True
Surreal {1/2|} represented by form {{{|}|{{|}|}}|}
is equivalent to form {{|}|} represented by name "1"
...according to python code...
from surreal import creation, Surreal

s = creation(days=7)
a = Surreal([s[1/2]], [])
name = a.name_in(s)
equivalence = s[name]

print('a =', a)
print('numeric label for a =', name)
print('a == "{}" = {}'.format(name, a == equivalence))
print('Surreal {} represented by form {}'.format('{1/2|}', str(a)))
print('  is equivalent to form {} represented by name "{}"'.format(str(equivalence), name))
...using [PySurreal] (https://github.com/peawormsworth/PySurreal):
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Show that $\binom{2n}{n}$ is an even number, for positive integers $n$. I would appreciate if somebody could help me with the following problem
Show by a combinatorial proof that
$$\dbinom{2n}{n}$$
is an even number, where $n$ is a positive integer.
I tried to solve this problem but I can't.
|
Let $S$ be all subsets of $T=\{1,2,3,\dots,2n\}$ of size $n$. There is an equivalence relation on $S$ where every equivalence class has two elements, $\{A,T\setminus A\}$.
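The pairing $\{A, T\setminus A\}$ amounts to the identity $\binom{2n}{n} = 2\binom{2n-1}{n-1}$, since exactly one member of each pair contains the element $1$. A quick check of both facts (my own addition):

```python
from math import comb

# each pair {A, complement of A} contributes 2 subsets, and exactly one
# member of each pair contains the element 1, so comb(2n, n) = 2*comb(2n-1, n-1)
for n in range(1, 20):
    assert comb(2 * n, n) % 2 == 0
    assert comb(2 * n, n) == 2 * comb(2 * n - 1, n - 1)
print("verified for n = 1..19")
```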
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Let $f$ be a strictly decreasing function. Then $\int_{a}^bf^{-1}=bf^{-1}(b)-af^{-1}(a)+\int_{f^{-1}(b)}^{f^{-1}(a)}f $ I'm trying to prove the fact that if $f$ is a strictly decreasing function, then:$$\int_{a}^bf^{-1}=bf^{-1}(b)-af^{-1}(a)+\int_{f^{-1}(b)}^{f^{-1}(a)}f $$
I have already proven it for strictly increasing functions. In that case, I made a sketch so I could understand the integral with geometry, and then, using the partitions: $$P=({t_0=a,t_1,...,t_n=b})$$
$$P'=(f^{-1} (t_0)=f^{-1}(a),f^{-1}(t_1),...,f^{-1}(t_n)=f^{-1}(b))$$ I computed $L(f^{-1},P)$+$U(f,P')$ and the rest of the proof was easy to develop.
However, I can't prove it when $f$ is strictly decreasing, neither can I see it geometrically.
Any advice will be appreciated.
|
The lighter region is the integral on your left-hand side.
You have $$\int_a^bf^{-1}=+U-V+W$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
$\sum_{1}^{\infty}\int_{n}^{n+1} e^{-\sqrt{x}} dx,$ converge or diverge? Since
$$D^{-1} e^{-\sqrt{x}} \big|_{x := u^{2}} = D^{-1} e^{-u} Du^{2} = 2D^{-1} e^{-u} u = -2(u+1)e^{-u} + C
= -2(\sqrt{x} + 1)e^{-\sqrt{x}} + C,$$
we have
$$\int_{n}^{n+1} e^{-\sqrt{x}} dx = 2(\sqrt{n} + 1)e^{-\sqrt{n}} - 2(\sqrt{n+1} + 1)e^{-\sqrt{n+1}}.$$
Then I am not sure how to proceed to conclude the convergence or divergence?
|
$$\sum_{1}^{\infty}\int_{n}^{n+1} e^{-\sqrt{x}} dx = \int_{1}^{\infty} e^{-\sqrt{x}} dx =2\int_{1}^{\infty} e^{-t} t dt = \left[-2(t+1)e^{-t}\right]_1^\infty =\frac{4}{e} $$
Since the integral has a finite value $\frac{4}{e}$ , the sum is convergent.
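One can double-check both the antiderivative and the value $4/e$ numerically (a sketch of mine):

```python
import math

def F(x):
    """Antiderivative -2*(sqrt(x) + 1)*exp(-sqrt(x)) of exp(-sqrt(x))."""
    s = math.sqrt(x)
    return -2 * (s + 1) * math.exp(-s)

# a central difference of F recovers the integrand
x0, h = 3.7, 1e-6
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(deriv, math.exp(-math.sqrt(x0)))       # these should match

# F vanishes at infinity, so the integral from 1 to infinity is -F(1)
print(-F(1), 4 / math.e)                      # both 4/e ~ 1.4715
```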
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Identity about a Functor I'm moving my first steps in CT and suddenly after reading about functors, this question came up in my mind:
Let $F \colon A \to B$ a functor between two categories $A$ and $B$, is it true that for any arrow $f \colon a \to a'$ in $A$,
is commutative?
According to me the definition of functor doesn't involve nor implies this identity. Nevertheless, due to my restricted experience in CT, I cannot find a counterexample.
I hope the clarification will help me build a strong "set" of example/counterexample in this field.
Thanks in advance.
|
If $F\colon A\longrightarrow B$ is a functor from the category $A$ to the category $B$, then it associates to each object $a$ of $A$ an object $F(a)$ of $B$ and to each arrow $f\colon a\rightarrow a'$ in $A$ an arrow $F(f)\colon F(a)\rightarrow F(a')$ in $B$. This association is made in a way compatible with the identity morphisms in $A$ and with the composition law in $A$.
Thus, given an object $a$ in $A$, $F(a)$ is an object of $B$. Hence, in general, there can be no arrow from $a$ to $F(a)$, simply because the two objects live in two different categories! Therefore, the square you built can not be understood meaningfully in a general situation. Even if $A=B$ (so that $F(a)$ is an object of $A$ for all objects $a$ in $A$), there may still be no sensible way to "connect $a$ to $F(a)$ via the action of $F$" (i.e. to get something as the "arrow" $a\stackrel{F}{\rightarrow} F(a)$ you put in the square). For example, for a prescribed arrow $h\colon a\rightarrow F(a)$ in $A$ (seen as the codomain of $F$), there might be no arrow $g\colon a''\rightarrow a$ in $A$ (seen as the domain of $F$) such that $F(a'')=a$ and $F(g)=h$.
The only way I see to give a meaning to your square, in general, is in the following situation. Suppose you have a functor $F\colon A\longrightarrow A$ (what is called an endofunctor of $A$) and a natural transformation $\tau\colon Id_{A}\Rightarrow F$ from the identity functor of $A$ into $F$ (if you do not know what a natural transformation of functors is, then just check the definition in any book of Category Theory!). Then, for each arrow $f\colon a\rightarrow a'$ in $A$, you do have $F(f)\circ \tau (a)=\tau(a')\circ f$, just by definition of natural transformation.
Hope this helps somehow.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding $\sup$ and $\inf$ of $\frac{n^5}{2^n}$ where $n$ is natural number I'm trying to find $\sup A, \inf A$ where
$$A=\left\{a_n=\frac{n^5}{2^n}:n\in\Bbb{N}\right\},\qquad 0\not\in\Bbb{N}$$
For $n=1$ we have $a_1 = \frac{1}{2}$, $\lim_{n\rightarrow +\infty} \frac{n^5}{2^n}=0$, and after differentiating I found that the critical point is at $n=\frac{5}{\ln 2}$. And here is my problem:
We know that the candidates for $\sup$ and $\inf$ are $0,\frac{1}{2}$ and $a_k, a_j$ for natural $k$, $j$ near $\frac{5}{\ln 2}$. But how to find $k$ and $j$? Clearly $\frac{5}{\ln 2}\not\in\Bbb{N}$ so $k=\lfloor\frac{5}{\ln 2}\rfloor$ and $j=\lceil\frac{5}{\ln 2}\rceil$ but how to find where $k$ and $j$ are exactly?
|
Note that the critical value $\frac{5}{\ln 2}\in(7,8)$ and hence $k=\lfloor\frac{5}{\ln 2}\rfloor=7$ and $j=\lceil\frac{5}{\ln 2}\rceil=8$. But $a_7>a_8$ and so $a_1<a_2<\cdots<a_7>a_8>a_9>\cdots$ and hence $\sup\{a_n\}=a_7$ and $\inf\{a_n\}=0$.
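A quick numerical sanity check (not part of the proof) that the maximum of $a_n$ occurs at $n=7$ and the tail vanishes:

```python
# a_n = n^5 / 2^n: the sequence increases up to n = 7, then decreases to 0.
terms = [(n, n**5 / 2**n) for n in range(1, 101)]
n_max, a_max = max(terms, key=lambda t: t[1])
print(n_max, a_max)          # 7 131.3046875, i.e. sup = a_7 = 16807/128
print(terms[-1][1] < 1e-10)  # True: the tail is already tiny, so inf = 0
```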
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Two categories sharing the same objects and morphisms Is there a natural example of two categories $\mathcal{C}$, $\mathcal{C}'$ which have the same class of objects and the same class of morphisms, including source and target maps, but different composition rules?
Of course it is easy to cook up an example, for example consider any set $X$ with two different monoid structures, this will produce two categories with one object with the desired properties. But this is not what I am looking for.
I would prefer an example where both categories $\mathcal{C}$ and $\mathcal{C}'$ are actually used in practice.
Background. While writing some basic stuff on categories, I have realized that often we only define categories by listing their objects and morphisms, saying almost nothing about the composition. In most cases this doesn't cause any confusion, because there is a "unique" reasonable way of composing the morphisms, but in general it may cause problems.
|
Consider categories whose objects are finite sets $X$ and whose morphisms $X \to Y$ are subsets of $X \times Y$. I can think of at least two interesting composition operations:
*
*Think of subsets of $X \times Y$ as $|X| \times |Y|$ matrices over the truth semiring, and perform matrix multiplication.
*Think of subsets of $X \times Y$ as $|X| \times |Y|$ matrices over $\mathbb{F}_2$, and perform matrix multiplication.
The first composition operation gives the category of finite sets and relations, while the second composition operation gives the category of finite-dimensional $\mathbb{F}_2$-vector spaces.
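To see the two rules act differently on the same morphism data, here is a tiny Python sketch (the function names are mine) composing one pair of "morphisms" both ways:

```python
# Morphisms X -> Y are subsets of X x Y.  Over the truth semiring we get
# relation composition; over F_2 the same data composes differently.
R = {(0, 0), (0, 1)}          # morphism X -> Y, with X = Y = Z = {0, 1}
S = {(0, 0), (1, 0)}          # morphism Y -> Z

def compose_bool(S, R):
    # (x, z) related iff SOME y links them (truth semiring)
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def compose_f2(S, R):
    # (x, z) kept iff an ODD number of y link them (F_2 arithmetic)
    out = set()
    for x in {x for (x, _) in R}:
        for z in {z for (_, z) in S}:
            count = sum(1 for (a, y) in R if a == x and (y, z) in S)
            if count % 2 == 1:
                out.add((x, z))
    return out

print(compose_bool(S, R))  # {(0, 0)}: 0 links to 0 via both y = 0 and y = 1
print(compose_f2(S, R))    # set(): the two paths cancel mod 2
```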
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
There exist positive integers $x,y$ such that $p\mid(x^2+y^2+n)$ For any given positive integer $n$ and any given prime number $p$,
show that
there exist positive integers $x,y$ such that
$$p\mid(x^2+y^2+n)$$
My approach is the following:
Assume that $n=1,p=2$; we choose $(x,y)=(1,2)$:
$$2\mid6=1^2+2^2+1$$
Assume that $n=1,p=3$; we choose $(x,y)=(1,2)$:
$$3\mid6=1^2+2^2+1$$
Assume that $n=1,p=5$; we choose $(x,y)=(2,5)$:
$$5\mid30=2^2+5^2+1$$
Assume that $n=2,p=2$; we choose $(x,y)=(2,2)$:
$$2\mid10=2^2+2^2+2$$
Assume that $n=2,p=3$; we choose $(x,y)=(2,3)$:
$$3\mid15=2^2+3^2+2$$
Assume that $n=2,p=5$; we choose $(x,y)=(3,3)$:
$$5\mid20=3^2+3^2+2$$
and so on.
Now I'm stuck and don't know how to proceed.
|
The result is easy to prove if $p=2$, so we can assume from now on that $p$ is odd.
Modulo $p$, there are $\frac{p+1}{2}$ squares, namely the $\frac{p-1}{2}$ quadratic residues of $p$, and $0$.
So modulo $p$ there are $\frac{p+1}{2}$ distinct values of $x^2$. There are also (for fixed $n$) $\frac{p+1}{2}$ distinct values of $-y^2-n$, since there are $\frac{p+1}{2}$ distinct values of $y^2$, and hence of $-y^2$.
Since $\frac{p+1}{2}+\frac{p+1}{2}=p+1\gt p$, by the Pigeonhole Principle there exist $x$ and $y$ such that $x^2\equiv -y^2-n\pmod{p}$. This implies that there are values of $x$ and $y$ such that $x^2-(-y^2-n)\equiv 0\pmod{p}$.
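This pigeonhole argument can be sanity-checked by brute force for small primes (a throwaway script, not part of the proof):

```python
# For each prime p and each n, check that some residues s1 = x^2, s2 = y^2
# satisfy s1 + s2 + n ≡ 0 (mod p), as the pigeonhole argument guarantees.
def has_solution(p, n):
    squares = {x * x % p for x in range(p)}   # all quadratic residues and 0
    return any((-s - n) % p in squares for s in squares)

primes = [2, 3, 5, 7, 11, 13, 17, 19]
print(all(has_solution(p, n) for p in primes for n in range(1, 50)))  # True
```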
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $∑_{n=0}^∞c_n z^n $ be a representation for the function $\frac{1}{1-z-z^2 }$. Find the coefficient $c_n$ Let $∑_{n=0}^∞c_n z^n $ be a power series representation for the function $\frac{1}{1-z-z^2 }$. Find the coefficient $c_n$ and radius of convergence of the series.
Clearly this is a power series with center $z_0=0$, and $f(z)=\frac{1}{1-z-z^2 }$ is analytic, because it's represented by a power series. I also know that
$$c_n =\frac{1}{n!} f^{(n)}(0)$$
but this doesn't get me anywhere, I also try the special case of Taylor series, but nothing look like this. I wonder if any one would give me a hint please.
|
The Taylor series of the function is
$$1+z+2z^2+3z^3+5z^4+8z^5+13z^6+21z^7+\cdots$$
The coefficients are the Fibonacci numbers,
$$F(n)=F(n-1)+F(n-2),\qquad F(0)=F(1)=1,$$
hence
$c_n=F(n)$.
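A quick check of the recurrence: multiplying out $(1-z-z^2)\sum c_n z^n = 1$ and comparing coefficients forces $c_0=1$, $c_1=c_0$, and $c_n=c_{n-1}+c_{n-2}$ for $n\ge2$.

```python
# Build the coefficients from the Fibonacci recurrence, then verify that
# (1 - z - z^2) * sum c_n z^n has coefficients 1, 0, 0, ...
c = [1, 1]
for n in range(2, 10):
    c.append(c[-1] + c[-2])
print(c)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

prod = [c[0]] + [c[n] - c[n - 1] - (c[n - 2] if n >= 2 else 0)
                 for n in range(1, 10)]
print(prod)  # [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Since $c_n$ grows like $\varphi^n$, the radius of convergence asked for in the question is the modulus of the nearest zero of $1-z-z^2$, namely $\frac{\sqrt5-1}{2}=1/\varphi$.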
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Find the sum of the series $\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\cdots$ My book directly writes-
$$\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\cdots=-\ln 2+1.$$
How do we prove this simply? I am a high school student.
|
In calculus there's this famous alternating harmonic series:
$S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + ... + (-1)^{(n+1)} \cdot \frac{1}{n} + ... $
(*) It's convergent and its sum is equal to $\ln 2$
Your series is equal to exactly $T = -S + 1$ so it's
also convergent and its sum must be exactly $(-\ln 2+1)$
I realize that I didn't prove this (*) statement. I am not
aware of an elementary proof but there might be one.
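A quick numerical check of the claimed value (it does not prove (*), which is the hard part):

```python
# Partial sums of 1/2 - 1/3 + 1/4 - ... approach 1 - ln 2 ≈ 0.3069.
import math

def partial_sum(N):
    return sum((-1) ** n / n for n in range(2, N + 1))

print(abs(partial_sum(100000) - (1 - math.log(2))) < 1e-4)  # True
```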
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Derivative of a function defined by an integral with $e^{-t^{2}}$ I'm facing a bit of a tricky question and I can't figure out how to get to the correct answer. We have to find the derivative of the following functions:
(1) $F(x) = \int^{x}_{3} e^{-t^{2}} dt$
(2) $G(x) = x^{2} . \int^{5x}_{-4} e^{-t^{2}} dt$
(3) $H(x) = \int^{x^3}_{x^2} e^{-t^{2}} dt$
As I have discovered, $e^{-t^{2}}$ has no closed form and is just defined in the error function. I have got the answer to the first question using the fundamental theorem of calculus, getting an answer of $e^{-x^{2}}$ however I can't get the correct answer for the other two. Could someone help me and explain how to get the answers to (2) and (3)? I have seen the answers on Wolfram Alpha and there appears to be some sort of chaining within the integral which I cannot understand.
Thanks! Helen
|
If $u$ and $v$ are functions in $x$ and $$F(x) = \int^{v}_{u} f(t) dt$$ , then
$$F'(x)=v'f(v)-u'f(u)$$
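As a sanity check of this rule applied to (3), where $u=x^2$ and $v=x^3$ give $H'(x)=3x^2e^{-x^6}-2xe^{-x^4}$, one can compare the formula with a numerical derivative (the quadrature helper below is a rough sketch):

```python
import math

def integral(a, b, m=2000):
    # trapezoidal rule for the error-function-type integral of exp(-t^2)
    h = (b - a) / m
    s = 0.5 * (math.exp(-a * a) + math.exp(-b * b))
    s += sum(math.exp(-(a + k * h) ** 2) for k in range(1, m))
    return s * h

def H(x):
    return integral(x * x, x ** 3)

x = 1.3
numeric = (H(x + 1e-5) - H(x - 1e-5)) / 2e-5               # central difference
formula = 3 * x**2 * math.exp(-x**6) - 2 * x * math.exp(-x**4)
print(abs(numeric - formula) < 1e-4)  # True: the rule matches
```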
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1199969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Finding for what $x$ values the error of $\sin x\approx x-\frac {x^3} 6$ is smaller than $10^{-5}$
Find for what $x$ values the error of $\sin x\approx x-\frac {x^3} 6$ is smaller than $10^{-5}$
I thought of two ways but got kinda stuck:
*
*Since we know that $R(x)=f(x)-P(x)$ then we could solve: $\left|\sin x-x+\frac {x^3} 6\right|<10^{-5}$ but I have no idea how to do it; simplifying the expression makes it lose the information on $10^{-5}$.
*Find for what $x,c$ we have $R_5(x)=\frac {\cos (c)x^5}{5!}<10^{-5}$ but can I simply choose $c=0$? then the answer would be: $x<\frac {(5!)^{1/5}}{10}$ which looks like it's right from the graph of $R(x)$ in 1. But why can I choose $c$? what if it was a less convenient function?
Note: no integrals.
|
Consider
$$
\sin(x) = x - \frac{x^3}{6} + \frac{x^5}{120} - \ldots.
$$
This series is alternating, and the terms are strictly decreasing in magnitude if $|x| < 1$. So we get
\begin{align*}
\text{estimate} - \text{ reality} &= x - \frac{x^3}{6} - \left(x - \frac{x^3}{6} + \frac{x^5}{120} - \ldots\right) \\
&=\frac{x^5}{120} - \ldots \\
&\leq \frac{x^5}{120}
\end{align*}
The last estimate works only because the terms are strictly decreasing in magnitude. So it suffices to take $x$ satisfying
$$
-10^{-5} < \frac{x^5}{120} < 10^{-5}
$$
But this is not a necessary condition.
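Numerically, the resulting threshold $|x|<(120\cdot 10^{-5})^{1/5}\approx 0.26$ can be checked against the true error:

```python
import math

threshold = (120 * 1e-5) ** 0.2
print(round(threshold, 4))  # 0.2605

def err(x):
    # true error of the approximation sin x ≈ x - x^3/6
    return abs(math.sin(x) - (x - x**3 / 6))

worst = max(err(k / 1000 * threshold) for k in range(-1000, 1001))
print(worst < 1e-5)  # True: the bound holds on the whole interval
```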
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Limit with number $e$ and complex number This is my first question here. I hope that I spend here a lot of fantastic time.
How to proof that fact?
$$\lim_{n\to\infty} \left(1+\frac{z}{n}\right)^{n}=e^{z}$$
where $z \in \mathbb{C}$ and $e^z$ is defined by its power series.
I have one hint: find the limit of the absolute value and of the argument,
but I don't know how to use it to solve the problem.
Thank you for your help.
Before trying to solve this problem, I proved that $$e^z=e^{x}(\cos y + i \sin y)$$
where $z=x+yi$; maybe this helps.
|
Expand using the binomial formula: $\displaystyle \left(1+\frac{z}{n}\right)^n = \sum_{k=0}^n {n\choose k}\left( \frac{z}{n}\right)^k = \sum_{k=0}^\infty E_k^n$ where we define $\displaystyle E_k^n = {n\choose k}\left( \frac{z}{n}\right)^k$ for $k \le n$ and $= 0$ otherwise
We want $\displaystyle \sum_{k=0}^\infty E_k^n$ to converge to $\displaystyle \sum _{k=0}^\infty \frac{z^k}{k!}$ as $n \to \infty$ To do that we will show $\displaystyle E^n_k \to \frac{z^k}{k!}$ as $n \to \infty$
$\displaystyle E^n _k = \frac{n!}{k!(n-k)!}\left( \frac{z}{n}\right)^k = \frac{n!}{k!(n-k)!} \frac{z^k}{n^k} = \frac{n!}{n^k (n-k)!}\frac{z^k}{k!} =\frac{n}{n} \frac{(n-1)}{n}\cdot \ldots \cdot \frac{(n-k+1)}{n}\frac{z^k}{k!} $
Therefore we just have to prove $\displaystyle \frac{n}{n} \frac{(n-1)}{n}\cdot \ldots \cdot \frac{(n-k+1)}{n} \to 1$. The number of terms to multiply is constant and equal to $k$. So there is no problem with invoking that each of them goes to $1$ separately, and that limits commute with multiplication.
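A numerical illustration of the limit for a sample complex $z$:

```python
# |(1 + z/n)^n - e^z| shrinks roughly like |z|^2 |e^z| / (2n).
import cmath

z = 1 + 2j
errors = [abs((1 + z / n) ** n - cmath.exp(z)) for n in (10, 1000, 100000)]
print(errors)                             # strictly decreasing towards 0
print(errors[0] > errors[1] > errors[2])  # True
```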
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
}
|
Integrating $\int_{0}^{2} (1-x)^2 dx$ I solved this integral
$$\int_{0}^{2} (1-x)^2 dx$$
by operating the squared binomial, first.
But I found in some book a solution that arrives at the same value, and I don't understand why a negative sign appears. This is the author's solution:
$$\int_{0}^{2} (1-x)^2 dx = -\frac{1}{3}(-x+1)^3|_{0}^{2}=\frac{2}{3}$$
|
There are two places where you need to be careful about the sign. The first is when you differentiate the substitution (if your substitution is $1-x = t$, then differentiating gives $-dx = dt$). The second is when you change the limits of integration due to the substitution: the lower limit becomes $1$ and the upper limit becomes $-1$. You can find a complete solution to this problem here
http://www.e-academia.cz/solved-math-problems/definite-integral-with-substitution.php
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
how to show that $f(\mathbb C)$ is dense in $\mathbb C$? Let $f$ an holomorphic function not bounded. How can I show that $f(\mathbb C)$ is dense in $\mathbb C$ ? I'm sure we have to use Liouville theorem, but I don't see how.
|
By contradiction, suppose that $f(\mathbb C)$ is not dense in $\mathbb C$. Then there are a $z_0\in\mathbb C$ and an $r>0$ such that $f(\mathbb C)\cap B_r(z_0)=\emptyset$. Therefore $$\left|\frac{1}{f-z_0}\right|<\frac{1}{r},$$
and thus, by Liouville's theorem, the entire function $\frac{1}{f-z_0}$ is constant. This implies that $f$ is also constant, which is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is the following set connected given that the union and intersection is connected Suppose $U_1, U_2$ are open sets in a space $X$. Suppose $U_1 \cap U_2$ and $U_1 \cup U_2$ are connected. Can we conclude that $U_1$ must be connected??
I am trying to find a counterexample, but I failed. Perhaps it is true? Can someone help me find a counterexample? Thanks.
|
Suppose $A$ and $B$ form a separation of $U_1$, i.e., $A$ and $B$ are disjoint nonempty open sets such that $A\cup B = U_1$. Because $U_1 \cap U_2$ is a connected subset of $U_1$, it must be entirely contained in either $A$ or $B$ (else we would get a separation of $U_1 \cap U_2$ by intersecting $A$ and $B$ with $U_1 \cap U_2$); WLOG let $U_1 \cap U_2 \subset A$. Then, $A\cup U_2$ and $B$ forms a separation of $U_1 \cup U_2$, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
If two continuous maps coincide in dense set, then they are the same Suppose $f,g : R \to R $ are continuous and $D \subset \mathbb{R}$ is dense. Suppose $f(x) = g(x) $ for all $x \in D$. Does it follow that $f(x) = g(x) $ for all $x \in \mathbb{R}$??
My answer is affirmative. Suppose $h(x) = f(x) - g(x) $. By hypothesis,
$$ D = \{ x : h(x) = 0 \} $$
but we know $\{ x : h(x) = 0 \} $ is closed since $h$ is continuous. and so $\{ x : h = 0 \} = \overline{ \{ x : h = 0 \} } = \overline{D} = \mathbb{R} $ as desired.
Is this right? thanks for any feedback.
|
An other way
Let $x\in \mathbb R\backslash D$.
By density of $D$ in $\mathbb R$, there is a sequence $(x_n)\subset D$ such that $\lim_{n\to\infty }x_n=x$. By continuity of $f$ and $g$ on $\mathbb R$ and by the fact that $f(x_n)=g(x_n)$ for all $n$,
$$f(x)=\lim_{n\to\infty }f(x_n)=\lim_{n\to\infty }g(x_n)=g(x),$$
which proves the claim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Solution system $3x \equiv 6\,\textrm{mod}\,\, 12$, $2x \equiv 5\,\textrm{mod}\,\, 7$ , $3x \equiv 1\,\textrm{mod}\,\, 5$ Have solution the following congruence system?
$$\begin{array}{ccl}
3x & \equiv & 6\,\textrm{mod}\,\, 12\\
2x & \equiv & 5\,\textrm{mod}\,\, 7\\
3x & \equiv & 1\,\textrm{mod}\,\, 5
\end{array}$$
Point of Interest: This question requires some special handling due to the mixture of factors among the moduli. This is more than the run of the mill Chinese Remainder Theorem problem
|
Using $\#12$ of this,
$$2x\equiv5\pmod7\equiv5+7\iff x\equiv6\pmod7\ \ \ \ (1)$$
$$3x\equiv1\pmod5\equiv1+5\iff x\equiv2\pmod5\ \ \ \ (2)$$
$$3x=12k+6\iff x=4k+2\implies x\equiv2\pmod4\ \ \ \ (3)$$
$$(2),(3)\implies x\equiv2\pmod{\text{lcm}(5,4)}\implies x\equiv2\pmod{20}\ \ \ \ (4)$$
Now safely use CRT on $(1),(4)$
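Carrying out that final CRT step gives $x\equiv 62\pmod{140}$; a brute-force check that this solves all three original congruences:

```python
# The reduced system is x ≡ 6 (mod 7), x ≡ 2 (mod 20); search one full period.
sols = [x for x in range(1, 141) if x % 7 == 6 and x % 20 == 2]
print(sols)  # [62]
x = sols[0]
print(3 * x % 12 == 6, 2 * x % 7 == 5, 3 * x % 5 == 1)  # True True True
```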
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$H^{n}(M)$ where $M$ is compact, orientable and connected manifold I need to show that if $M$ is compact, orientable and connected manifold of dimension $n$, then $H^{n}(M) = \mathbb{R}$.
I saw that a possible proof is to take an atlas for $M$, with $U_{\alpha}$, $\alpha = 1,..N$ and $\rho_{\alpha}$ be a smooth partition of unity. Then define a smooth map
\begin{equation}
\xi: \Omega^{n}(M) \rightarrow \mathbb{R}^{N}
\end{equation}
by
\begin{equation}
\xi(\omega) = (\int_{M}\rho_1\omega,..,\int_{M}\rho_N\omega)
\end{equation}
and consider the subspace $X = \{\xi(dv) | v \in \Omega^{n-1}(M)\}$.
The proof then shows that (details omitted)
(A) $\omega$ is exact if and only if $\xi(\omega) \in X$
(B) $w \in \Omega^{n}(M)$ is exact if and only if $\int_M\omega=0$.
I have two questions:
1) What does it mean for $\omega$ to be exact? It's not a sequence, just an element of $\Omega^{n}(M)$.
2) Why (B) actually implies that $H^{n}(M) = \mathbb{R}$ ?
Thanks
|
*
*A differential $n$-form $\alpha$ is exact if there is some differential $(n-1)$-form $\beta$ such that $d\beta = \alpha$.
*Recall that $H^n(M)$ is defined by taking the closed $n$-forms and modding out by the exact $n$-forms. Fact (B) implies that integration gives you a well-defined map $H^n(M)\rightarrow \mathbb R$, since any exact form integrates to zero. In fact, you can show that this is a vector space isomorphism. (Hint: Take a non-zero closed form $\alpha$ and show that any other closed form is equivalent, modulo some exact form, to a multiple of $\alpha$. Use the integration map.)
(Note that part $B$ follows from Stokes's theorem.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Series of inverse function $A(s) = \sum_{k>0}a_ks^k$ and $A(s)+A(s)^3=s$.
I want to calculate $a_5$. What is the most efficient way to do it?
|
$A(s) = \sum_{k>0}a_ks^k$ and $A(s)+A(s)^{3} =s$
We know (Cauchy product): $$A(s)^{2} = \sum_{n>0}^{\infty} \left( \sum_{i=0}^{n}a_{i} a_{n-i} \right) s^n$$
And
$$A(s)^{3} = \sum_{n>0}^{\infty} \left( \sum_{j=0}^{n}a_{n-j} \left( \sum_{i=0}^{j}a_{i} a_{j-i} \right) \right) s^n$$
Hence:
$$ \sum_{n>0}^{\infty}a_ns^n+\sum_{n>0}^{\infty} \left( \sum_{j=0}^{n}a_{n-j} \left( \sum_{i=0}^{j}a_{i} a_{j-i} \right) \right) s^n = s$$
For $n=0$: $$a_{0}+a_0^{3}=0,$$ so $a_0=0$.
For $n=1$:
$$a_{1}+3a_0^2a_{1}=1,$$ so $a_1=1$.
By increasing $n$ you get each $a_{n}$ in turn.
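In practice the coefficients can be computed by iterating the fixed-point form $A = s - A^3$ with truncated polynomials; a minimal sketch:

```python
# poly[k] holds the coefficient of s^k; everything past degree 5 is dropped.
DEG = 6

def mul(p, q):
    r = [0] * DEG
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < DEG:
                r[i + j] += pi * qj
    return r

s_poly = [0, 1, 0, 0, 0, 0]   # the right-hand side s
A = s_poly[:]                 # first guess: A = s
for _ in range(5):
    cube = mul(mul(A, A), A)
    A = [sk - ck for sk, ck in zip(s_poly, cube)]

print(A)  # [0, 1, 0, -1, 0, 3]: a_1 = 1, a_3 = -1, a_5 = 3
```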
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 1
}
|
Group theoretic construction for permutation algorithm Consider a permutation $\sigma = [s_1, \ldots, s_n]$. The `contracting endpoints' construction for the subsequence $[s_i,\ldots, s_k]$ is given by iteratively taking the product of cycles given by the first and last elements of the sequence, successively discarding first and last elements.
Hence, the construction for [2,3,4,5] in [1,2,3,4,5,6] yields (2,5)(3,4).
Can this construction be defined purely in terms of group-theoretic operations?
|
Assuming you compose permutations right to left, it may be helpful to observe that $(x_2,x_k,x_{k-1},\dots,x_3)(x_1,x_2,\dots,x_k)=(x_1,x_k)$
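A quick computational check of this identity (right-to-left composition), here for $k=5$ on $\{1,\dots,5\}$; the helper is my own sketch:

```python
def cycle(elts, n):
    """Permutation of {1..n} (as a dict) given by one cycle."""
    p = {i: i for i in range(1, n + 1)}
    for a, b in zip(elts, elts[1:] + elts[:1]):
        p[a] = b
    return p

n, k = 5, 5
first = cycle(list(range(1, k + 1)), n)         # (x_1, x_2, ..., x_k)
second = cycle([2] + list(range(k, 2, -1)), n)  # (x_2, x_k, x_{k-1}, ..., x_3)
composite = {i: second[first[i]] for i in range(1, n + 1)}
print(composite)  # {1: 5, 2: 2, 3: 3, 4: 4, 5: 1}: the transposition (1, 5)
```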
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is the solution of the equation $x^x=2$ rational? Let $x$ be the solution of the equation $x^x=2$. Is $x$ irrational? How to prove this?
|
Suppose $x$ is rational. Then there exist coprime positive integers $a,b$ such that $$\left(\frac{a}{b}\right)^{a/b}=2 \\ \frac{a}{b} = 2^{b/a}.$$ But that's impossible: since $\gcd(a,b)=1$, the RHS is rational only for $a=1$, which actually makes it an integer, while with $a=1$ the LHS is non-integer for all $b>1$. Checking that $(a,b)=(1,1)$ yields the false identity $1=2$ concludes the proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1200919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Polynomial ideals I got stuck with an exercise while preparing for my exam, and could use a hint or two to move on...
Let $f(X) = a_n X^n+a_{n-1} X^{n-1}+ \cdots +a_0 \in \mathbb{Z}[X]$ with $a_0\neq 0$
Assuming that $X\in \langle f(X)\rangle$ then prove: $n \leq 1$ and either $\langle f(X)\rangle = \langle X\rangle$ or $\langle f(X)\rangle = \mathbb{Z}[X]$
I have shown that if $5 \in \langle f(X)\rangle$ then $n = 0 $ and $a_0\mid 5$, and got to at point where I have:
$$X\in \langle f(X)\rangle \Rightarrow X = \lambda f(X), \quad \lambda \in \mathbb{Z}[X]$$
Which tells me that $\deg(f(x))$ is either one or zero.
If it is $0$ then $f(X)$ is a unit
If $\deg(f(X))$ is 1 then $f(X)$ is irreducible and $\langle f(X)\rangle$ is a maximal ideal
|
$X=f(X)g(X) \Rightarrow deg f(X) + deg g(X)=1 \Rightarrow deg f(X)=0,1.$ If $deg f(X)=0,$ then $f(X)$ is a constant. Say $f(X)=\lambda \in \mathbb Z.$ In this case equating the coefficient of $X$ from both side we get that $\lambda$ is actually a unit. (This part you got right.) Now let $f(X)=aX+b, a \neq 0.$ Since $X \in \langle f(X) \rangle,$ we must have $a$ is a unit and $b=0.$ (Why?) Thus $\langle f(X) \rangle = \langle X \rangle.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
find an inverse function of complicated one Let $f:\mathbb{R}\rightarrow \mathbb{R}$: $$f(x) = \sin (\sin (x)) +2x$$
How to calculate the inverse of this function?
So far I have searched a lot on the internet, but I didn't find any straightforward algorithm for this.
What I found only covers simple functions (like $f(x)=x^2$), not complicated ones.
Can someone show me the steps? Are there any rules I need to know? Thanks a lot in advance!
|
Two general methods exist, but often it is very hard to employ them with some success:
*
*integral representations as Burniston-Siewert-like representations; see: http://www4.ncsu.edu/~ces/publist.html
*series as Lagrange series: http://en.wikipedia.org/wiki/Lagrange_inversion_theorem
It is useful to remark that "simpler" equations as Kepler equation ($M=E+e\sin(E)$) needed the introduction of special functions (i.e. the Bessel functions) to be solved by means of series expansion.
For example in your case, the Lagrange inversion would give the following formal series solution near 0:
$$x(u)= \frac{u}{2} + \sum_{n=1} \frac{(-1)^{n}}{2(n!)}\left(\frac{d}{du}\right)^{n-1}\sin\left(\sin\left(\frac{u}{2}\right)\right)^n $$
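For numerical purposes, note that $f'(x)=\cos(\sin x)\cos x+2\ge 1>0$, so $f$ is strictly increasing and the inverse can always be evaluated pointwise, e.g. by Newton's method (a sketch of mine, not taken from the references above):

```python
import math

def f(x):
    return math.sin(math.sin(x)) + 2 * x

def f_inv(u, tol=1e-12):
    x = u / 2                  # leading term u/2 of the inverse series
    for _ in range(100):
        # Newton step; the derivative is cos(sin x) cos x + 2 >= 1
        step = (f(x) - u) / (math.cos(math.sin(x)) * math.cos(x) + 2)
        x -= step
        if abs(step) < tol:
            break
    return x

print(abs(f(f_inv(1.7)) - 1.7) < 1e-10)   # True: f(f^{-1}(u)) = u
print(abs(f(f_inv(-3.0)) + 3.0) < 1e-10)  # True
```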
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Compute $e^A$ where $A$ is a given $2\times 2$ matrix Compute $e^A$ where $A=\begin{pmatrix} 1 &0\\ 5 & 1\end{pmatrix}$
definition
Let $A$ be an $n\times n$ matrix. Then for $t\in \mathbb R$,
$$e^{At}=\sum_{k=0}^\infty \frac{A^kt^k}{k!}\tag{1}$$
|
Or, write
$A = I + N \tag{1}$
with
$I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{2}$
and
$N = \begin{bmatrix} 0 & 0 \\ 5 & 0 \end{bmatrix}. \tag{3}$
Note that
$IN = NI = N, \tag{4}$
that is, $N$ and $I$ commute, $[N, I] = 0$, and apply the well-known result that for commuting matrices $B$ and $C$ we have $e^{B + C} = e^B e^C$, an exposition of which may be found here: $M,N\in \Bbb R ^{n\times n}$, show that $e^{(M+N)} = e^{M}e^N$ given $MN=NM$,
and write
$e^A = e^I e^N. \tag{6}$
Now using the matrix power series definition of $e^X$,
$e^X = \sum_0^\infty \dfrac{X^n}{n!}, \tag{7}$
it is easily seen that
$e^I = \sum_0^\infty (\dfrac{1}{n!} I) = (\sum_0^\infty \dfrac {1}{n!}) I = e I, \tag{8}$
since $I^n = I$ for all $n$, while we easily calculate
$N^2 = 0, \tag{9}$
so with $X = N$ (7) becomes
$e^N = I + N = \begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix}. \tag{10}$
Then
$e^A = e^{I + N} = e^I e^N$
$ = e^I(I + N) = e I(I + N) = e (I + N) , \tag{11}$
and finally, again with the aid of (10),
$e^A = \begin{bmatrix} e & 0 \\ 5e & e \end{bmatrix}. \tag{12}$
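A direct numerical check of (12), summing the defining series (7) with plain Python lists:

```python
import math

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [5, 1]]
term = [[1.0, 0.0], [0.0, 1.0]]    # current term A^n / n!, starting with A^0/0!
expA = [[0.0, 0.0], [0.0, 0.0]]
for n in range(1, 30):
    expA = [[expA[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = [[x / n for x in row] for row in mat_mul(term, A)]

e = math.e
expected = [[e, 0], [5 * e, e]]
print(all(abs(expA[i][j] - expected[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # True
```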
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Evaluate the following indefinite integral Evaluate the integral : $$\int\frac{1-x}{(1+x)\sqrt{x+x^2+x^3}}\,dx$$
I tried the substitutions $x=\tan \theta$ and $x=\tan^2\theta$, but I was unable to remove the square root. I also tried putting $x+x^2+x^3=z^2$, but I could not proceed either way... Please help...
Update :
putting $x=\frac{1-t}{1+t}$ , I get , $$\sqrt{x+x^2+x^3}=\sqrt{\frac{3-3t+t^2-t^3}{(1+t)^3}}$$
How did you get $\sqrt{x+x^2+x^3}=\sqrt{(t^2+3)(1-t^2)}/(t+1)^2$ ?
|
Hint 1: $t \mapsto (1-x)/(1+x)$
$$\begin{equation}\displaystyle\int\frac{x-1}{(x+1)\sqrt{x+x^2+x^3}}\,\mathrm{d}x = 2\arccos\left(\frac{\sqrt{x}} {x+1}\right) + \mathcal{C}\end{equation}$$
Hint 2: One can show that $t = (1-x)/(1+x)$ is it's own inverse. In other words $x = (1-t)/(1+t)$. Hence the derivative becomes. $\mathrm{d}x = -2t \,\mathrm{d}t/(t+1)^2$.
Similarly we have $\sqrt{x^3+x^2+x} = \sqrt{(t^2+3)(1-t^2)}/(t+1)^2$ so we get a nice cancelation. Explicitly we have
$$
\begin{align*}
\sqrt{x^3+x^2+x}
& = \sqrt{ \left(\frac{1-t}{1+t}\right)^3 + \left(\frac{1-t}{1+t}\right)^2 + \left(\frac{1-t}{1+t}\right) } \\
& = \sqrt{ \frac{-t^3+t^2-3t+3}{(1+t)^3}} = \sqrt{ \frac{1+t}{1+t}\frac{(1-t)(t^2+3)}{(1+t)^3}} = \frac{\sqrt{(1-t^2)(t^2+3)}}{(t+1)^2}
\end{align*}
$$
Hint 3: Use the substitution $\cos u \mapsto t$, what happens?
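One can also verify Hint 1 numerically, comparing a central-difference derivative of $F(x)=2\arccos(\sqrt x/(x+1))$ with the integrand:

```python
import math

def F(x):
    return 2 * math.acos(math.sqrt(x) / (x + 1))

def integrand(x):
    return (x - 1) / ((x + 1) * math.sqrt(x + x**2 + x**3))

x, h = 0.7, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference for F'(x)
print(abs(numeric - integrand(x)) < 1e-6)   # True: F' matches the integrand
```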
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Prove $c$ satisfies the integral If $f:[0,1] \to \mathbb{R}$ is continuous, show that there exists $c \in [0,1]$ such that
$$f(c)=\int_0^1 2t f(t) \text{d}t.$$
So it's pretty clear to me that I have to use Intermediate Value Theorem and Cauchy-Schwarz inequality but I can't quite get the trick done.
Any help appreciated.
|
Since $f$ is continuous there exist $a,b\in [0,1]$ such that $f(a)\le f(x)\le f(b), \:\forall x\in [0,1].$ Now,
$$t\in[0,1]\implies tf(a)\le t f(x)\le t f(b).$$ So
$$2f(a)\int_0^1 tdt \le 2\int_0^1 tf(t)dt\le 2f(b)\int_0^1 tdt.$$
Can you finish now?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Methods to quickly compute percentages Yesterday, talking with a friend of mine, she asked me what is a quick (and – of course – correct) way to compute percentages, say $3.7 \%$ of $149$. Frankly, I was sort of dumbfounded, because I use the following two methods:
*
*either I use ratios,
*or I start to go on with a sequence of approximations (e.g. I get the $10$% of $149$ and then I proceed by roughly converging towards $4$%, and then I wildly guess...).
In both cases, she was not satisfied: ratios were too slow, and the other system... well, you can guess by yourself.
Are there some other options that are both fast (which means that you should not need pencil and paper to make the calculation) and sound?
Please, notice that I am not interested in the result of the example I chose itself: this should rather be a way to show a particular system.
Thank you for your time.
PS: To the moderators, I was seriously contemplating the idea of using the tag "big list", but I am not sure the topic is that compelling.
|
3.7% of 100 = 3.7
3.7% of 40 = 4*0.37 = 1.48
3.7% of 9 = 9*0.037 = 0.333
3.7+1.48+0.333=5.513
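The same decomposition of $149$ as $100+40+9$, written out in code (purely illustrative):

```python
# 3.7% of each easy piece, then add the pieces up.
parts = [3.7, 4 * 0.37, 9 * 0.037]   # 3.7% of 100, of 40, of 9
print(round(sum(parts), 3))          # 5.513
print(abs(sum(parts) - 0.037 * 149) < 1e-12)  # True: same as 3.7% of 149
```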
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Regarding $\lim \limits_{(x,y,z)\to (0,0,0)}\left(\frac{x^2z}{x^2+y^2+16z^2}\right)$--is WolframAlpha incorrect? $$ \lim_{x,y,z\to 0} {zx^2\over x^2+y^2+16z^2}$$
So I am trying to evaluate this limit..
To me, by using the squeeze theorem, it seems that the answer must be zero.
I tried using spherical coordinates, which also gives the same result.
However, WolframAlpha says the limit does not exist.
Could someone tell me whether I am missing something, or is WolframAlpha incorrect (as happens occasionally)?
|
hint: $0 \leq \dfrac{|zx^2|}{x^2+y^2+16z^2} \leq |z|$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that $(z^{-1})^{-1} = z$ and $(zw)^{-1} = z^{-1}w^{-1}$? I need to prove that $(z^{-1})^{-1} = z$ and $(zw)^{-1} = z^{-1}w^{-1}$ but the only thing I can think about is to consider
$$z = a+bi, w = c+di$$
and then prove it algebraically using laws of multiplication for complex numbers. Any ideas to prove it structuraly?
|
Generally, $(z^a)^b=z^{ab}$, so $(z^{-1})^{-1}=z$.
We have $(zw)(zw)^{-1}=1.$ Pre-multiply by $z^{-1}$ and then $w^{-1}$ to get the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
What is the correct value? My confusion is:
$(-9)^{2/3} = ((-9)^{2})^{1/3} = ((-9)^{(1/3)})^{2} = 4.32$
But my calculator shows math error, and google says:
$(-9)^{2/3} = 2.16+3.74i$
|
It is because it is showing one of the three possible roots of $w^3=81$: one of them is the real root $4.32$, and the other two are $-2.16 + 3.74i$ and its conjugate
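Listing all three roots with Python's `cmath` makes this explicit (note the two complex roots come out as $\approx -2.16 \pm 3.75i$, and `(-9)**(2/3)` returns the principal complex value rather than the real root):

```python
import cmath

real_root = 81 ** (1 / 3)            # about 4.3267
roots = [real_root * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
for r in roots:
    print(round(r.real, 2), round(r.imag, 2))
# 4.33 0.0 / -2.16 3.75 / -2.16 -3.75

principal = (-9 + 0j) ** (2 / 3)     # the value Google-style evaluators report
print(abs(principal - roots[1]) < 1e-9)  # True: principal value = complex root
```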
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Associativity of concatenation of closed curves from $I$ to some topological spaces $X$ I'm looking for an example of closed curves such that $f*(g*h)=(f*g)*h$ in some topological space $X$.
I tried taking $X$ to be the Sierpinski space, but I can't find such closed curves.
|
It depends on your definition of $f * g$ (indeed for Moore loops every triplet satisfies the equation), but let's use the most common definition: for $f, g : [0,1] \to X$,
$$(f*g)(t) := \begin{cases}
f(2t), & 0 \le t \le \frac{1}{2}; \\
g(2t-1), & \frac{1}{2} \le t \le 1.
\end{cases}$$
If you allow your space to not be Hausdorff, weird things can happen. Let $X = \{0,1\}$ with the indiscrete topology (ie. the only open sets are $\emptyset$ and $X$). Let $f = g = h = \gamma : I \to X$ be defined by
$$\gamma(t) = \begin{cases}
1, & t = p/2^n \text{ for some } p \in \mathbb{N}; \\
0, & \text{otherwise}.
\end{cases}$$
This is continuous because every map to an indiscrete space is continuous. And you can check directly from the definition that $\gamma*(\gamma*\gamma) = (\gamma*\gamma)*\gamma$ (simply because $x$ is a dyadic number iff $2x$ is iff $2x - 1$ is), but of course $\gamma$ isn't the constant loop.
However, when your space is Hausdorff, the situation is considerably simpler. The condition $(f*g)*h = f*(g*h)$ pointwise (ie. you literally have $((f*g)*h)(t) = (f*(g*h))(t)$ for all $t$) is equivalent to $$f = f*g \text{ and } h = g*h,$$ just by inspecting the definition.
So when is it possible to have $f = f*g$ for example? Let $t \in [0,1]$, then:
$$f\left(\frac{t+1}{2}\right) = (f * g)\left(\frac{t+1}{2}\right) = g(t).$$
But then $f = f*g = (f*g)*g$, so $f\left(\frac{t+1}{4}\right)$ is again equal to $g(t)$.
By induction, $$g(t) = f\left(\frac{t+1}{2^n}\right),$$ which converges to $f(0)$ (remember that $f$ is continuous). Thus $g$ is the constant loop, if your space is Hausdorff (because a sequence can only have one limit). Now using again the same trick $f = f*g = (f*g)*g = \dots$, you can deduce that $f(t) = f(1)$ for all $t$ (divide $[0,1]$ into intervals of the type $(1/2^{n+1}, 1/2^n]$). Thus $f$ is the constant loop too.
The same argument (with minor modifications) shows that $h = g*h \implies h = \text{cst}$. In conclusion, the only triplet of loops $(f,g,h)$ such that $f*(g*h) = (f*g)*h$ "on the nose" is the triplet $(\text{cst}, \text{cst}, \text{cst})$ of three constant loops if your space is Hausdorff.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Elementary questions about polynomials and field extensions Let $$f(x)=x^2+x+1.$$ This is irreducible in $\mathbb{Z_2}[x]$, and thus $\mathbb{Z_2}[x]/(f(x))$ is a field $K$ where $(f(x))$ is a principle ideal. I don't quite understand how I find that $\overline{0}$,$\overline{1}$, $\overline{X}$, and $\overline{X+1}$ are the elements of this new field.
Also, would I be able to rewrite $f(x)$ as a product of factors of degree $1$ in $K[x]$? If so how would I go about this?
This is from Dan Saracino's Abstract Algebra: A First Course.
|
In fact, $(f(x))$ is a maximal ideal, and this is equivalent to $\mathbb{Z}_2[x] / (f(x))$ being a field. Anyway, in this quotient $\bar{X}^2 + \bar{X} + \bar{1} = \bar{0}$, so $\bar{X}^2 = \bar{X} + \bar{1}$, and by induction any power of $\bar{X}$ can be written as a $\mathbb{Z}_2$-linear combination of $\bar{1}$ and $\bar{X}$. (Neither of these is a multiple of the other; otherwise $\bar{0}$ or $\bar{1}$ would be a root of $f$ in $\mathbb{Z}_2$, and hence $f$ would not be irreducible, a contradiction.) Thus, the field $\mathbb{Z}_2[x] / (f(x))$ is a vector space over $\mathbb{Z}_2$ with basis $\bar{1}, \bar{X}$, and its elements are precisely $\bar{0}, \bar{1}, \bar{X}, \bar{X} + \bar{1}$.
You can always write $f(x)$ as a product of linear factors over the splitting field $K$ of $f$ (in fact, this is why it's called a splitting field). By construction $f(\bar{X}) = \bar{X}^2 + \bar{X} + \bar{1} = \bar{0}$, so $\bar{X}$ is a root of $f$, and hence $x - \bar{X}$ is a factor of $f$ in $K[x]$. Since $\deg f = 2$, it hence has two linear factors. (We don't need to know the other factor to answer the question as written, but we can show that $f(x) = (x - \bar{X})(x - (\bar{X} + \bar{1}))$ in $K[x]$, which we can find readily with polynomial long division.)
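To make the arithmetic in $K$ concrete, here is a small Python sketch (the pair encoding $(a,b) \leftrightarrow a + b\bar{X}$ is just one convenient representation): it enumerates the four elements, finds the roots of $f$, and checks the factorization $f(x) = (x - \bar{X})(x - (\bar{X}+\bar{1}))$, using that subtraction equals addition in characteristic 2.

```python
# Elements of K = Z_2[x]/(x^2 + x + 1) encoded as pairs (a, b) meaning a + b*X.
def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    a, b = u
    c, d = v
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd X^2, with X^2 = X + 1.
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

ZERO, ONE, X, X1 = (0, 0), (1, 0), (0, 1), (1, 1)
K = [ZERO, ONE, X, X1]

def f(t):
    """Evaluate f(t) = t^2 + t + 1 in K."""
    return add(add(mul(t, t), t), ONE)

roots = [t for t in K if f(t) == ZERO]
print(roots)  # the two roots: X and X+1

# Check f(t) = (t - X)(t - (X+1)) for every t in K (here -u = u, so t - u = t + u).
print(all(mul(add(t, X), add(t, X1)) == f(t) for t in K))
```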
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1201993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If $x^3 +px -q =0$ then value of $(\alpha + \beta)(\beta + \gamma)(\alpha+\gamma)(1/\alpha^2 + 1/\beta^2+1/\gamma^2)$ I am given a cubic equation $E_1 : x^3 +px -q =0$ where $p,q \in R$ so what would be value of the expression $$(\alpha + \beta)(\beta + \gamma)(\alpha+\gamma)(\frac{1}{\alpha^2} + \frac{1}{\beta^2}+\frac{1}{\gamma^2})$$ where $\alpha , \beta,\gamma$ are roots of cubic equation $E_1$
I know that this has something to do with the sums and products of the roots of the equation, but I don't know how to solve this problem. Perhaps I can assume particular values of $p,q$ and then evaluate the given expression?
|
This one is easy. You need to note that
\begin{align}
\alpha + \beta + \gamma &= 0\tag{1}\\
\alpha\beta + \beta\gamma + \gamma\alpha &= p\tag{2}\\
\alpha\beta\gamma &= q\tag{3}
\end{align}
and hence the calculation of the desired expression $E$ is given by
\begin{align}
E &= (\alpha + \beta)(\beta + \gamma)(\gamma + \alpha)\left(\frac{1}{\alpha^{2}} + \frac{1}{\beta^{2}} + \frac{1}{\gamma^{2}}\right)\\
&= (-\gamma)(-\alpha)(-\beta)\left(\frac{1}{\alpha^{2}} + \frac{1}{\beta^{2}} + \frac{1}{\gamma^{2}}\right)\\
&= -\frac{\alpha^{2}\beta^{2} + \beta^{2}\gamma^{2} + \gamma^{2}\alpha^{2}}{\alpha\beta\gamma}\\
&= -\frac{(\alpha\beta + \beta\gamma + \gamma\alpha)^{2} - 2\alpha\beta\gamma(\alpha + \beta + \gamma)}{\alpha\beta\gamma}\\
&= -\frac{p^{2}}{q}
\end{align}
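A quick sanity check in exact arithmetic, using a sample cubic with known integer roots (the choice $p=-7,\ q=-6$, giving $x^3 - 7x + 6 = (x-1)(x-2)(x+3)$, is just for illustration):

```python
from fractions import Fraction as F

# Sample cubic x^3 + px - q with p = -7, q = -6, i.e. x^3 - 7x + 6.
p, q = F(-7), F(-6)
roots = [F(1), F(2), F(-3)]
assert all(r**3 + p * r - q == 0 for r in roots)

a, b, c = roots
E = (a + b) * (b + c) * (a + c) * (F(1) / a**2 + F(1) / b**2 + F(1) / c**2)
print(E, -p**2 / q)  # both equal 49/6
```

Both sides come out to $49/6 = -p^2/q$, as the derivation predicts.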
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1202095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|