H: If $V$ is right-orthogonal, does it hold $\langle AV,BV\rangle_F=\langle A,B\rangle_F$?
Let $A,B\in\mathbb R^{m\times n}$. It's easy to see that for the Frobenius inner product it holds $$\langle A,B\rangle_F=\operatorname{tr}B^\ast A=\operatorname{tr}A^\ast B.\tag1$$ So, if $U\in\mathbb R^{k\times m}$ is left-orthogonal (i.e. $U^TU=I_m$), then $$\langle UA,UB\rangle_F=\operatorname{tr}B^TU^TUA=\langle A,B\rangle_F\tag2.$$ Can we show an equivalent result for right-orthogonal (i.e. $VV^T=I_n$) $V\in\mathbb R^{n\times k}$ and $\langle AV,BV\rangle_F$?
AI: This follows from the cyclic property of the trace (see "Is trace invariant under cyclic permutation with rectangular matrices?"):
$$
\begin{aligned}
\langle AV, BV \rangle_F &= \mathrm{tr} V^T B^T A V \\
&= \mathrm{tr} B^T A V V^T \\
&= \langle A, B \rangle_F
\end{aligned}
$$
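For readers who want a quick sanity check, here is a minimal numeric sketch (numpy assumed available; the matrix sizes are arbitrary sample values):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 3, 4, 6
A, B = rng.standard_normal((m, n)), rng.standard_normal((m, n))
Q, _ = np.linalg.qr(rng.standard_normal((k, n)))  # Q has orthonormal columns: Q^T Q = I_n
V = Q.T                                           # so V in R^{n x k} is right-orthogonal: V V^T = I_n
lhs = np.trace((B @ V).T @ (A @ V))               # <AV, BV>_F
rhs = np.trace(B.T @ A)                           # <A, B>_F
print(np.isclose(lhs, rhs))                       # True
```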
|
H: Computing $\int_{0}^{1} x^2 \sin(2\pi nx)\sin(2\pi mx) \,dx$
I would like to know if there is any easy way or known formula to compute the following integral. For $n,m \in \mathbb{N}$, for $n \neq m$, $$ \int_{0}^{1} x^2 \sin(2\pi nx)\sin(2\pi mx) \,dx$$ I tried various graphs for $n\ne m$, and it seems the answer is $0$. That's why I thought there might be an easy way to deal with this. Thanks.
AI: Hints:
$$2\sin a\sin b=\cos(a-b)-\cos(a+b)$$
and by parts
$$\int x^2\cos(px)\,dx=\frac1px^2\sin(px)-\frac1p\int 2x\sin(px)\,dx
\\=\frac1px^2\sin(px)+\frac{2x}{p^2}\cos(px)-\frac1{p^2}\int 2\cos(px)\,dx.$$
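If a CAS is handy, here is a minimal sympy sketch (sympy assumed available; $n=2$, $m=3$ are sample values) that evaluates the integral the hints reduce by hand:

```python
import sympy as sp

x = sp.symbols('x')
n, m = 2, 3  # sample distinct naturals with n != m
I = sp.integrate(x**2 * sp.sin(2*sp.pi*n*x) * sp.sin(2*sp.pi*m*x), (x, 0, 1))
print(sp.simplify(I))  # prints 6/(25*pi**2) here, so the value need not vanish
```

Following the hints to the end gives $\frac{1}{4\pi^2}\left(\frac{1}{(n-m)^2}-\frac{1}{(n+m)^2}\right)$ in general, which matches the printed value for $n=2$, $m=3$.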
|
H: A question regarding partitions.
Number of partitions of n = p(n)
Number of partitions of n which have a part equal to 1 = p(n-1)
Number of partitions of n into k parts = p(n,k)
If for some k the following inequality holds
p(n,k) ≤ p(n-1)
Then does it necessarily imply that all the partitions of n into k parts have a part equal to 1?
AI: No, for example $p(4,2) = 2 \le 3 = p(3)$, but $4 = 2+2$ is a partition of $4$ into $2$ parts where neither part is $1$.
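A brute-force enumeration (a small self-contained Python sketch) makes the counterexample concrete:

```python
def parts(n, k):
    """Partitions of n into exactly k parts, as non-decreasing tuples."""
    if k == 1:
        return [(n,)] if n >= 1 else []
    return [(i,) + rest
            for i in range(1, n)
            for rest in parts(n - i, k - 1)
            if i <= rest[0]]

print(parts(4, 2))  # [(1, 3), (2, 2)]: the partition 2+2 has no part equal to 1
```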
|
H: Definition of eigen space.
I am studying linear algebra and got confused about the definition of the eigenspace corresponding to an eigenvalue. What puzzles me is that the same thing is defined in two different books in different manners. Let $\lambda$ be an eigenvalue of a matrix $A$ of order $n$ over the field $\mathbb{F}$. Then Hoffman & Kunze define the eigenspace as follows:
Definition: The collection of all $x\in\mathbb{F}^n$ such that $Ax=\lambda x$ is called the eigenspace associated with $\lambda$.
While in another book, A first course in module theory by M. E. Keating, he defines eigenspace as follows:
Definition: Given an $n \times n$ matrix $A$ over a field $\mathbb{F}$, an eigenspace for $A$ is a nonzero
subspace $U$ of $\mathbb{F^n}$ with the property that $$Ax=\lambda x$$ $\forall x\in U$
I think the two definitions are not equivalent:
Let $x_1,x_2$ be two linearly independent eigenvectors corresponding to the eigenvalue $\lambda$. Then, according to the first definition, the subspace generated by $x_1,x_2$ is the eigenspace of $A$ corresponding to $\lambda$, while according to definition 2 there are three eigenspaces, generated by
(1)$x_1$ only,
(2)$x_2$ only,
(3) $x_1$ and $x_2$
Am I right in my understanding? Since my graduation days I have been familiar with the first definition and have used it in my problems. Could anybody help me in understanding this?
Thanks in advance.
AI: You are right: they are not equivalent. The first definition is the usual one. The second one is more general: if $F_\lambda$ is the eigenspace corresponding to $\lambda$ (with respect to the first definition), then the author of the second definition is saying that any non-zero subspace of $F_\lambda$ is an eigenspace for $A$ corresponding to the eigenvalue $\lambda$.
|
H: It is given that x, y, z are 3 real numbers such that $(x-y)/(2+xy)+(y-z)/(2+yz)+(z-x)/(2+zx)=0$
It is given that x, y, z are 3 real numbers such that $\frac{(x-y)}{(2+xy)}+\frac{(y-z)}{(2+yz)}+\frac{(z-x)}{(2+zx)}=0$. Is it true that at least two of three numbers must be equal? Justify your answer.
This is the given solution:
Yes. Multiplying both sides by $(2+xy)(2+yz)(2+zx)$, we get
$F:=(x-y)(2+yz)(2+zx)+(y-z)(2+xy)(2+zx)+(z-x)(2+yz)(2+xy)=0$.
Now regard $F$ as a polynomial in $x$; since $F=0$ when $x=y$, $x-y$ is a factor of $F$.
Similarly, $y-z$ and $z-x$ are also factors of $F$. Since $F$ is of degree $3$,
$F≡k(x-y)(y-z)(z-x)$
for some constant $k$. By letting $x=1$, $y=-1$, $z=0$, we have $k=2$. Thus
$F≡2(x-y)(y-z)(z-x)$.
Hence $F=0$ implies that two of variables are equal.
However, I do not understand why $F$ is a polynomial of degree $3$, as I think it should be degree $5$. Also, the values chosen for $x$, $y$, and $z$ ($x=1$, $y=-1$, $z=0$) are all distinct, which seems to contradict the claim that at least two of the three numbers must be equal.
Can anyone help to clarify my problems?
AI: Because $$\sum_{cyc}\frac{x-y}{2+xy}=\frac{\sum\limits_{cyc}(x-y)(4+2xz+2yz+z^2xy)}{\prod\limits_{cyc}(2+xy)}=$$
$$=\frac{2\sum\limits_{cyc}(x^2z-y^2z)}{\prod\limits_{cyc}(2+xy)}=\frac{2(x-y)(y-z)(z-x)}{\prod\limits_{cyc}(2+xy)}.$$
The expression of degree $5$ is equal to $0$:
$$\sum_{cyc}(x-y)z^2xy=xyz\sum_{cyc}(zx-zy)=0.$$
Also, $$\sum_{cyc}4(x-y)=0.$$
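A one-line sympy check of the displayed factorisation (sympy assumed available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = ((x - y)*(2 + y*z)*(2 + z*x)
     + (y - z)*(2 + x*y)*(2 + z*x)
     + (z - x)*(2 + y*z)*(2 + x*y))
print(sp.expand(F - 2*(x - y)*(y - z)*(z - x)))  # 0, so F = 2(x-y)(y-z)(z-x) identically
```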
|
H: Partitioning a set of $11$ women and $7$ men - Combinatorics
Let $S (n, k)$ be the number of $k$-element partitions of an $n$-element set.
A set of eleven women and seven men is to be partitioned into four subsets.
None of the subsets should consist exclusively of women or men.
How many such partitions are there?
My textbook gives
a) $S(7, 4) · S(11, 4) · 4!$
and
b) $S(7, 4) · (S(10, 3) + 4 · S(10, 4)) · 4!$
as answers, but I don't understand how they arrived at these solutions. Can someone please explain the logical reasoning behind these answers? Thank you for your time.
EDIT: I now realise that b) is just the recursive formula for Stirling numbers (of the second kind) used on a). But how did we arrive at a)? Partitioning $7$ men in $4$ subsets and $11$ women in $4$ subsets should give us 8 subsets(?)
AI: For (a), we first form $4$ subsets consisting of only men and $4$ subsets consisting of only women. Now, think of each subset of men merging with one subset of women, so that all $8$ subsets get paired to form $4$ subsets, none of which exclusively consist of all men/women. The factor of $4!$ arises because of the number of ways of making these pairs.
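For what it's worth, the numbers can be checked with sympy's Stirling-number routine (assumed available), including the recurrence that links a) and b):

```python
from math import factorial
from sympy.functions.combinatorial.numbers import stirling  # Stirling numbers, 2nd kind by default

a = stirling(7, 4) * stirling(11, 4) * factorial(4)
b = stirling(7, 4) * (stirling(10, 3) + 4*stirling(10, 4)) * factorial(4)
print(a == b, a)  # True: S(11,4) = S(10,3) + 4*S(10,4) by the Stirling recurrence
```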
|
H: Is it safe to say the following about the odd prime numbers other than 5.
I am studying basic number theory and have a habit of writing down interesting facts whenever I conclude something from the text or a problem itself. I was wondering whether I can write it down too:
All odd prime numbers other than 5 either are one less or one more than a multiple of 10, or their square is 1 less than a multiple of 10.
What do you say?
AI: Yes, and you don't even need the numbers to be prime. This can be proven via basic modular arithmetic: all numbers that are $1,3,7,9 \pmod {10}$ follow the above rules.
Edit: So you don't actually need modular arithmetic, but it does make your life easier. First, note that if an odd number ends in $1$ or $9$ then the first property is satisfied. Second, if it ends in $3$ or $7$, write it as $10n+3$ or $10n+7$ and square it to conclude that the second property is satisfied.
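A quick exhaustive check of the claim for all odd numbers below $10^4$ that are not multiples of $5$ (primality not needed, as noted):

```python
for n in range(1, 10**4, 2):
    if n % 5 == 0:
        continue
    # either n is one more/less than a multiple of 10, or n^2 is one less
    assert n % 10 in (1, 9) or (n*n + 1) % 10 == 0, n
print("checked")
```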
|
H: Some terminology: differences of term, formula, and expression in logic?
In the wikipedia article on logical terms it is written:
In analogy to natural language, where a noun phrase refers to an
object and a whole sentence refers to a fact, in mathematical logic, a
term denotes a mathematical object and a formula denotes a
mathematical fact. In particular, terms appear as components of a
formula.
I am not sure I understand exactly what that means, and the exact difference between:
a term
a formula
an expression
If authors adopt different conventions, I would like the standpoint of one that makes a clear distinction between them. In particular, things that are not clear to me are questions like:
is every term a formula?
is every term an expression?
is every formula a term?
is every formula an expression?
is every expression a term?
is every expression a formula?
Examples of things that are terms, formulas, and expressions are welcome.
AI: A term is a "name": variables and constants are terms.
In addition, "complex" terms can be manufactured using function symbols.
Example: $n$ is a variable, $0$ is a constant and $+$ is a (binary) function symbol.
Thus, $n,0$ and $n+0$ are terms.
Formulas are statements.
Atomic formulas are the basic building blocks for manufacturing statements.
They are formulas that have no sub-parts that are formulas.
They are manufactured using predicate symbols, like e.g. $\text {Even}(x)$, equality and terms.
Thus, $\text {Even}(n), 0=0$ and $n+0=n$ are atomic formulas.
With connectives and quantifiers we can write more complex formulas, like: $\forall n (n+0=n)$ and $0=0 \to \forall n (n+0=n)$.
Expression can be a "generic" category: it may mean a string of symbols.
We may call a string of symbols that satisfies the rules of the syntax a well-formed expression.
If so, it is either a term or a formula.
|
H: Cartan-Weyl basis for the complexified Lie algebra $L_{\mathbb{C}}(SU(N))$
I'm trying to construct the Cartan-Weyl basis for $L_{\mathbb{C}}(SU(N))$.
Looking at the basis for the complexified Lie algebra of $SU(2)$ (consisting of the Cartan element and the step operators), is there a straightforward generalisation here? The basis for the Cartan subalgebra is clear, so do we just shift the Cartan-Weyl basis for $SU(2)$ down the diagonal?
AI: It's not so clear what you mean but there are very simple ways of obtaining what I think you want.
Define the $N\times N$ matrix
\begin{align}
E_{ij}= \left\{\begin{array}{cl}
1&\hbox{at position}\ (i,j)\\
0&\hbox{elsewhere}
\end{array}\right.
\end{align}
Thus for $N=3$ we have
\begin{align}
E_{11}= \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)\, ,\qquad E_{12}=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)\, ,\qquad E_{13}=\left(
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}
\right)
\end{align}
etc. One can then define a Cartan subalgebra for $\mathfrak{su}(3)$ using
\begin{align}
h_1=E_{11}-E_{22}\, ,\qquad h_2=E_{22}-E_{33}
\end{align}
The $E_{ij}$ with $i<j$ are raising operators and $E_{ij}$ with $i>j$ are lowering operators.
For instance, in this simple $3$-dimensional representation, the vector
$\vert 1\rangle\to (1,0,0)^\top$ is the highest weight vector, has weight $(1,0)$, and is killed by $E_{12}$, $E_{23}$ and $E_{13}$, i.e. is killed by all the raising operators.
This obviously generalizes to $\mathfrak{su}(N)$.
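A small numpy illustration of these conventions for $N=3$ (numpy assumed available); it checks one commutator and that the highest weight vector is killed by the raising operators:

```python
import numpy as np

def E(i, j, N=3):
    m = np.zeros((N, N))
    m[i - 1, j - 1] = 1.0
    return m

h1, h2 = E(1, 1) - E(2, 2), E(2, 2) - E(3, 3)
comm = h1 @ E(1, 2) - E(1, 2) @ h1
print(np.allclose(comm, 2*E(1, 2)))            # [h1, E12] = 2 E12
v = np.array([1.0, 0.0, 0.0])                  # the highest weight vector |1>
print(E(1, 2) @ v, E(1, 3) @ v, E(2, 3) @ v)   # all zero: v is killed by the raising operators
```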
|
H: Real valued continuous function is the unique difference of two positive functions
Let $X$ be a compact Hausdorff space and let $f: X \to \mathbb{R}$ be a continuous function. I want to prove that we have a unique decomposition
$$f= f_1 - f_2$$
where $f_1, f_2: X \to \mathbb{R}$ are continuous positive functions with $f_1 f_2 = 0 = f_2 f_1$. I managed to show the existence, but I'm not sure how I can prove the uniqueness.
AI: Uniqueness easily follows from the condition $f_1f_2=0$. This means that for every $x \in X$ you have $f_1(x)=0$ or $f_2(x)=0$, together with $f(x)=f_1(x)-f_2(x)$. This implies that if $f(x)>0$ then $f(x)=f_1(x)$ while $f_2(x)=0$, and vice versa $f(x)<0$ implies $f(x)=-f_2(x)$. If $f(x)=0$ then $f_1(x)=f_2(x)$ and their product must be zero: that is, $f_1(x)=f_2(x)=0$.
Existence is an easy application of the pasting lemma; since you said you already proved it, I won't go into detail.
Edit: since you asked in the comments, I am going to explain how the pasting lemma can be used to provide the maps. Since $f$ is continuous, $N=f^{-1}((-\infty, 0])$ and $P=f^{-1}([0, +\infty))$ are both closed subsets of $X$. The pasting lemma states that if you have two continuous functions $g,h$ defined on closed subsets of a topological space and they coincide on the intersection, then gluing them together you get a continuous function on the whole space.
In our case you can take $g\colon P \rightarrow \mathbb{R}$ to be $g=f|_P$ and $h\colon N \rightarrow \mathbb{R}$ to be the zero function. Applying the lemma you obtain a function which is your $f_1$.
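Concretely, the glued functions are just the positive and negative parts of $f$. A minimal numeric illustration (numpy assumed available, with $\sin$ as a stand-in continuous function):

```python
import numpy as np

f = lambda x: np.sin(x)                              # stand-in continuous function
x = np.linspace(0, 2*np.pi, 9)
f1, f2 = np.maximum(f(x), 0), np.maximum(-f(x), 0)   # f1 = f on P, 0 on N; f2 the other way round
print(np.allclose(f(x), f1 - f2), np.allclose(f1*f2, 0))  # True True
```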
|
H: Paradox In the criteria for $a$ to be a removable singularity or a pole of $f$?
My complex analysis textbook stated the following proposition:
Let $a$ be an isolated singularity of $f$
If $\lim_{z\to a}(z-a)f(z)=0$, then $a$ is a removable singularity
If there exists a number $m \in \mathbb{N}$ such that: $\lim_{z\to a}(z-a)^mf(z) \neq 0$, then $a$ is a pole of order $m$
My questions are:
1 - Imagine that we have a function $f$ such that $\lim_{z\to a}(z-a)f(z)=0$. Would that mean that $ \nexists m\in \mathbb{N}: \lim_{z\to a}(z-a)^mf(z) \neq 0$?
2 - Imagine that we have a function $f$ such that $\exists m\in \mathbb{N}: \lim_{z\to a}(z-a)^mf(z) \neq 0$, would that mean that: $\forall k \in \mathbb{N} \setminus \{m\},\lim_{z\to a}(z-a)^k f(z) = 0$?
This makes sense because the order of the pole is only one number, so there can only exist one number such that $\lim_{z\to a}(z-a)^mf(z) \neq 0$
3 - Imagine that we have a function $f$ such that $ \exists m\in \mathbb{N}: \lim_{z\to a}(z-a)^mf(z) \neq 0$. This would mean that $a$ is a pole of order $m$.
Now let's consider the limit $\lim_{z\to a}(z-a)^k f(z)$, with $k \in \mathbb{N}$ such that $k \neq m$. We have that either $\lim_{z\to a}(z-a)^k f(z)=0$ (1) or $\lim_{z\to a}(z-a)^k f(z) \neq 0$ (2).
If (2) is true, there exist two different numbers such that $\lim_{z\to a}(z-a)^mf(z) \neq 0$, so which one of them is the order of the pole?
If (2) is false and (1) is true because there exists only one number such that $\lim_{z\to a}(z-a)^mf(z) \neq 0$, then if we consider $k = 1$, this would imply that $\lim_{z\to a}(z-a)f(z)=0$, and this would mean that $a$ is a removable singularity because of the initial proposition. But we also have $ \exists m\in\mathbb{N}: \lim_{z\to a}(z-a)^mf(z) \neq 0$ because of the way I defined $f$, so, again because of the proposition, $a$ is a pole of order $m$. This seems like a paradox. So which one is right?
AI: 2 in the first paragraph implicitly means that the limit is finite and non-zero. For a pole, the limit is infinite for all $k<m$ and zero for all $k>m$, so 2 in the second paragraph is false, as noted. In general there are actually 4 possibilities: a zero limit, a finite non-zero limit, an infinite limit, or no limit at all (this last being precluded at a pole or removable singularity, but happening at an essential singularity). There is no paradox, just a misunderstanding of what the notions involved mean.
Personally, I think that the above definitions are way too complicated and are properly expressed as theorems or lemmas etc.
The standard definitions for an isolated singularity are:
$a$ is a removable singularity if $f$ is bounded in a neighborhood of it (and the result is that there is a finite value $c$ for which $f(z) \to c$; we call that $f(a)$, and then $f$ is holomorphic near $a$ too, while $(z-a)^mf(z) \to 0$ for $m \ge 1$)
$a$ is a pole if $f(z) \to \infty, z \to a$ (and the result is that there is an unique integer $m \ge 1$ for which $(z-a)^mf(z) \to c \ne 0, c \ne \infty$ etc)
$a$ is an essential singularity if the limit of $f(z)$ doesn't exist when $z \to a$; then for all $m \ge 0$ there are sequences $z_{mk} \to a$ with $(z_{mk}-a)^mf(z_{mk}) \to \infty$ as $k \to \infty$, as well as sequences on which $f$ is bounded. Actually, by a more general theorem, all but at most one complex value are taken infinitely many times in a neighborhood of $a$, though the first result, that $f$ has sequences on which it grows faster than any rational function near $a$, is also important.
|
H: How to prove that $\wp''$'s zeros are not at half-periods?
This is an exercise adapted from Apostol. The problem is stated as
Prove that
$$\wp''\left(\frac{\omega_1}{2}\right)=2(e_1-e_2)(e_1-e_3)$$
where $\omega_1,\omega_2$ generate the lattice for $\wp$.
I could see that by Weierstrass' differential equation, we have
$$2\wp''\wp'=4\wp'((\wp-e_1)(\wp-e_2)+(\wp-e_2)(\wp-e_3)+(\wp-e_3)(\wp-e_1))$$
and
$$2\wp'''\wp'+2\wp''^2=4\wp''(\cdots)+4\wp'(\cdots)$$
At $z=\frac{\omega_1}{2}$ we have $\wp'''=\wp'=0$ since they are odd elliptic functions. Therefore,
$$2\wp''^2\left(\frac{\omega_1}{2}\right)=4(e_1-e_2)(e_1-e_3)\wp''\left(\frac{\omega_1}{2}\right)$$
Now if $\wp''\left(\frac{\omega_1}{2}\right)\neq0$ we are done. However, I find it very difficult to prove the claim. I tried the following steps: first, we assume $\wp'(z)\neq 0$. Then by the expanded differential equation,
$$\wp''=6\wp^2-\frac{1}{2}g_2=6\left(\wp-\sqrt{\frac{g_2}{12}}\right)\left(\wp+\sqrt{\frac{g_2}{12}}\right)$$
Since the zeros of $\wp''$ occur where $\wp(z)=\pm\sqrt{\frac{g_2}{12}}$ and the order of $\wp''$ is $4$, if I could prove that $\pm\sqrt{\frac{g_2}{12}}\neq e_i$, $i=1,2,3$, then we are done.
I tried to use the fact that $g_2=2(e_1^2+e_2^2+e_3^2)$, but it left me with the problem of proving
$$\frac{e_1^2+e_2^2+e_3^2}{6}\neq e_i^2$$
for all $i$.
How can I proceed? or is there a simpler argument that applies? I must have missed something. Please help me. Thanks in advance.
AI: A very simple argument is counting the multiplicities of values. $\wp'$ is an odd elliptic function with a single pole of order $3$, hence it attains each value in $\widehat{\mathbb{C}} = \mathbb{C}\cup \{\infty\}$ exactly three times (in a fundamental region) counting multiplicities. Now by oddness, we have
$$\wp'\biggl(\frac{\omega_1}{2}\biggr) = \wp'\biggl(\frac{\omega_2}{2}\biggr) = \wp'\biggl(\frac{\omega_1 + \omega_2}{2}\biggr) = 0\,,$$
hence these are all simple zeros of $\wp'$. In other words, $\wp''$ doesn't vanish at the half-periods.
|
H: First order linear non-homogenous ode
I'm learning how to solve ode and there's one thing in my lecture notes that I don't understand.
$y' +py = q \ $
$y(x_0) = y_0$
I understand that I can rewrite this to
$\phi = ce^{-P} + e^{-P} \int_{x_0}^{x}q(t)e^{P(t)}dt$
So I get:
$y_0 = \phi(x_0) = ce^{-P(x_0)} + e^{-P(x_0)} \int_{x_0}^{x_0}q(t)e^{P(t)}dt$
The integral being zero makes sense because of the bounds, but here I don't understand why it should be obvious to me that $P(x_0) = 0$
From which I can get $ c = \phi(x_0) = y_0$
AI: $$y' +p(x)y = q(x) $$
Note that $P(x)$ is not the $p(x)$ in the DE. It comes as an integrating factor:
$$(ye^{\int p(x) dx})'=qe^{\int p(x)dx}$$
$$ye^{\int p(x) dx}=c+\int qe^{\int p(x)dx}dx$$
$$y=ce^{-\int p(x) dx}+e^{-\int p(x) dx}\int qe^{\int p(x)dx}dx$$
So that: $$P(x)=\int_{x_0}^x p(t)\,dt$$ And: $$P(x_0)=\int_{x_0}^{x_0}p(t)\,dt=0$$
|
H: $f(0)=f(1)=0$, $f(x)=\frac{f(x+h)+f(x-h)}{2}$ implies $f(x)=0$ for $[0, 1]$
Question: Suppose $f$ is continuous on $[0, 1]$ with $f(0)=f(1)=0$. For $\forall x\in (0, 1)$, there $\exists h>0$ with $0\le x-h<x<x+h\le1$ such that $f(x)=\frac{f(x+h)+f(x-h)}{2}$. Show that $\forall x\in(0, 1), f(x)=0$.
I tried to prove that $f$ is differentiable on $(0, 1)$ using the fact that $\frac{f(x+h)-f(x)}{h}=\frac{f(x)-f(x-h)}{h}$, but I realized that the equation does not hold for all $h$; there only exists a particular $h$ for every $x$. So, this is not a proper approach.
I also thought about the concavity of $f$. Since $\forall x\in(0, 1),\exists h>0$ with $0\le x-h<x<x+h\le1$ such that
$$f(x)=f\left(\frac{x-h}{2}+\frac{x+h}{2}\right)\ge {1\over2}f\left(x-h\right)+{1\over2}f\left(x+h\right)$$
, which implies $f$ is concave downward, and
$$f(x)=f\left(\frac{x-h}{2}+\frac{x+h}{2}\right)\le {1\over2}f\left(x-h\right)+{1\over2}f\left(x+h\right)$$
, which implies $f$ is concave upward.
These two facts might imply that $f$ is constant, which in turn implies $\forall x\in[0, 1], f(x)=0$ since $f(0)=f(1)=0$. Is this approach correct?
I thought it has to be more precise, so I wanted to use the second derivative. But I actually failed to prove that $f$ is differentiable. Could you please give me some ideas about the question? Thanks a lot.
AI: Since $f$ is continuous on a closed, bounded interval, it attains its bounds.
So, let $x^*\in [0,1]$ be a point such that $f(x^*)$ is a maximum on $[0,1].$ However, we know that $f(x^*)$ is the average of two points around it, which (since $f(x^*)$ is a maximum) must also be maxima, i.e.
$$\exists h_1>0\:\colon f(x^*+h_1)=f(x^*).$$
Repeating for $x^*+h_1,$ we form a sequence $x^*, x^* + h_1,x^*+h_1+ h_2,\dotsc.$ This sequence converges, since it is increasing and bounded above by $1$.
Suppose it converges to some $L< 1.$ But then, since $f$ is continuous, $f(L)$ is also equal to $f(x^*)$ (by sequential continuity). So we can repeat the above process with $f(L)$ to obtain a new point $L+h$ at which $f$ also attains its maximum (a contradiction).
Hence $L=1$ and
$$f(x^*)=f(1)=0.$$
Repeating with the minimum of $f$ gives us that the maximum and minimum values attained by $f$ are both $0,$ and so $f$ is identically $0$.
|
H: Frame vs vector basis in differential geometry
Let $E \to M$ be a finite dimensional vector bundle. I have seen a couple of times that a vector basis of the fiber $E_x$ over $x \in M$ was called a 'frame'. Are there any differences between the notions of a 'frame' and a 'vector basis'? Is 'frame' just terminology more conventionally used in differential geometry but in truth synonymous with 'vector basis'?
AI: A local frame is a choice of a basis of $E_x$ for each point $x$ in an open set $U$ of $M$, such that this choice of basis varies continuously with $x$. A global frame is a local frame with all of $M$ as the domain.
However, sometimes people may use the word 'frame' to simply mean a basis of a fiber.
|
H: Finding a complement of $U=\{f\in C(\mathbb{R},\mathbb{R}) |f(0)=0\}$
Consider the vector space $V=C(\mathbb{R},\mathbb{R})$ and the subspace $U=\{f\in C(\mathbb{R},\mathbb{R}) \mid f(0)=0\}\subseteq V$. I want to find a complement of $U$, such that $V=U\oplus W$. This condition is the same as finding a set $W$ that satisfies $V=U+W$ and $U\cap W=\{f_0\}$, where $f_0$ is the null element. At a lecture, the professor defined $W=\operatorname{Span}(\mathbf{1})$, where $\mathbf{1}$ is the function defined by $\mathbf{1}(x)=1\ \forall x\in \mathbb{R}$, and proved that this is a complement. He did not, however, explain how to find this complement intuitively, without knowing beforehand that it indeed satisfies the conditions.
Looking at the definition of $U$, I would rather define $W$ as:
$$
W=\{f \in C(\mathbb{R}, \mathbb{R}): f(0) \neq 0\} \cup\left\{f_{0}\right\}
$$
But I would imagine that if it were that simple, the professor would have done the same. Is there something wrong with my definition?
AI: The issue is that the condition $V=U\oplus W$ requires $U$ and $W$ to be vector subspaces, as opposed to just sets as you say; in particular they have to be closed under vector addition. Having this in mind, if you define $$W=\{f \in C(\mathbb{R}, \mathbb{R}): f(0) \neq 0\} \cup\left\{f_{0}\right\},$$
then it's clear that this is not a vector space. To see this consider $f(x) = x^2+1$ and $g(x) = -1$ for all $x \in \mathbb R$; then $f,g \in W$, and if $W$ were indeed a vector space one would have $f + g \in W$, yet $$(f+g)(x) = (x^2+1)+ (-1)=x^2,$$ which is not in $W$.
|
H: Some doubts about Levy's Continuity Theorem proof - Convergence results
THEOREM (Levy's Continuity Theorem)
Let $(\mu_n)_{n\geq1}$ be a sequence of probability measures on $\mathbb{R}^d$, and let $(\hat{\mu}_n)_{n\geq1}$ denote their characteristic functions (or Fourier transforms).
If $\hat{\mu}_n(u)$ converges to a function $f(u)$ for all $u\in\mathbb{R}^d$, and if in addition $f$ is continuous at $0$, then there exists a probability $\mu$ on $\mathbb{R}^d$ such that $f(u)=\hat{\mu}(u)$, and $\mu_n$ converges weakly to $\mu$.
A PART OF THE PROOF FOR $d=1$ (FIRST PART)
$(\ldots)$ let $\beta=\dfrac{2}{\alpha}$ ($\alpha$ and $\beta$ constants) and we have the useful estimate
$$\mu_n\left(\left[-\beta,\beta\right]^c\right)\le\dfrac{\beta}{2}{\displaystyle \int_{-\frac{2}{\beta}}^{\frac{2}{\beta}}\left(1-\hat{\mu}_n(u)\right)du}\tag{1}$$
Let $\varepsilon>0$. Since by hypothesis $f$ is continous at $0$, there exists $\alpha>0$ such that $\left\vert1-f(u)\right\vert\le\dfrac{\varepsilon}{4}$ if $\left\vert u\right\vert\le\dfrac{2}{\alpha}$ (This is because $\hat{\mu}_n(0)=1$ for all $n$, whence $\lim\limits_{n\to\infty}\hat{\mu}_n(0)=f(0)=1$ as well.) Therefore
$$\left\vert\dfrac{\alpha}{2}\displaystyle{\int_{-\frac{2}{\alpha}}^{\frac{2}{\alpha}}\left(1-f(u)\right)du}\right\vert\le\dfrac{\alpha}{2}\displaystyle{\int_{-\frac{2}{\alpha}}^{\frac{2}{\alpha}}\dfrac{\varepsilon}{4}du}=\dfrac{\varepsilon}{2}\tag{2}$$
$(\ldots)$ there exists an $N\in\mathbb{N}$ such that $n\geq N$ (with $n\in\mathbb{N}$) implies
$$\left\vert\displaystyle{\int_{-\frac{2}{\alpha}}^{\frac{2}{\alpha}}\left(1-\hat{\mu}_n(u)\right)du} - {\displaystyle\int_{-\frac{2}{\alpha}}^{\frac{2}{\alpha}}\left(1-f(u)\right)du}\right\vert\le\dfrac{\varepsilon}{\alpha}\tag{3}$$
whence, by $(2)$, $\dfrac{\alpha}{2}{\displaystyle\int_{-\frac{2}{\alpha}}^{\frac{2}{\alpha}}\left(1-\hat{\mu}_n(u)\right)du}\le\varepsilon$. Next apply $(1)$ to conclude $\mu_n\left(\left[-\alpha, \alpha\right]^c\right)\le\varepsilon$, for all $n\ge N$.
So far so good to me. The following SECOND PART is not that clear instead.
A PART OF THE PROOF FOR $d=1$ (SECOND PART)
There are only a finite number of $n$ before $N$, and for each $n<N$ there exists an $\alpha_n$ such that $\mu_n\left(\left[-\alpha_n, \alpha_n\right]^c\right)\le\varepsilon$. Let $a=\max(\alpha_1,\ldots,\alpha_{N-1},\alpha)$. Then
$$\mu_n\left(\left[-a, a\right]^c\right)\le\varepsilon,\hspace{0.3cm}\text{for all }n\tag{4}$$
The inequality $(4)$ means that for the sequence $(\mu_n)_{n\ge1}$ for any $\varepsilon>0$ there exists an $a\in\mathbb{R}$ such that $\sup\limits_{n}\mu_n\left(\left[-a,a\right]^c\right)\le\varepsilon$. Therefore, we have shown
$$\limsup\limits_{m\to\infty}\sup\limits_{n}\mu_n\left(\left[-m,m\right]^c\right)=0\tag{5}$$
Given the first part, my doubts about SECOND PART of the proof are:
1. Why can I be sure that "for each $n<N$ there exists an $\alpha_n$ such that $\mu_n\left(\left[-\alpha_n, \alpha_n\right]^c\right)\le\varepsilon$"?;
2. Why can I state that "the inequality $(4)$ means that for the sequence $(\mu_n)_{n\ge1}$ for any $\varepsilon>0$ there exists an $a\in\mathbb{R}$ such that $\sup\limits_{n}\mu_n\left(\left[-a,a\right]^c\right)\le\varepsilon$"? More precisely, why can I draw a conclusion specifically on the $\sup\limits_n$ of the set $\mu_n\left(\left[-a,a\right]^c\right)$?;
3. Could I also state that the conclusion of all the reasoning is that $\limsup\limits_{m\to\infty}\sup\limits_{n}\mu_n\left(\left[-m,m\right]^c\right)=\liminf\limits_{m\to\infty}\sup\limits_{n}\mu_n\left(\left[-m,m\right]^c\right)=0$ and not just that $\limsup\limits_{m\to\infty}\sup\limits_{n}\mu_n\left(\left[-m,m\right]^c\right)=0$?
AI: (1) Note that $\lim_{K \to \infty} \mu_n([-K,K]^c) = 0$. Thus choosing $K$ sufficiently large ensures that $\mu_n([-K,K]^c) \leq \epsilon$.
(2) If you have $\mu_n([-a,a] ^c) \leq \epsilon$ for all $n$, then this means that $\epsilon$ is an upper bound for $\{\mu_n([-a,a]^c): n \geq 1\}$. By definition of sup as the LEAST upper bound, we get
$$\sup \{\mu_n([-a,a]^c): n \geq 1\} \leq \epsilon$$
(3) Yes, you can do that. Recall that $\liminf_n a _n \le \limsup_n a_n$.
|
H: Weird Cauchy Problem
Can somebody help me in solving this weird Cauchy problem? I really don't know how to face it.
$$
\begin{cases}
y' = -\dfrac{(2x+y)\cos(x^2 + xy + 1) + y}{x\cos(x^2 + xy + 1) + x + 1}\\\\
y(0) = \sin(1)
\end{cases}
$$
I tried to perform $z = x^2 + xy+1$ and then $z' = \dfrac{y'}{x}$, yet this led me to the writing
$$z'(x\cos(z) + x + 1) = -(x^2+z+1)\left[\cos(z)-1\right]$$
But I don't know how to proceed. Is there some trick for this kind of CP?
Note: $y = y(x)$.
AI: Multiply both sides by the denominator and we find that
\begin{align}
(2x+y+xy')\cos(x^2+xy+1)=-y-(x+1)y'\,.
\end{align}
Put $u=x^2+xy+1$ and $v=(x+1)y$, the equation above becomes
\begin{align}
u'\cos u=-v'\,.
\end{align}
Integrate both sides to find the solution.
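One can confirm with sympy (assumed available) that the substitution reproduces the original equation once the denominator is cleared:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
u = x**2 + x*y + 1
v = (x + 1)*y
reduced = sp.diff(u, x)*sp.cos(u) + sp.diff(v, x)          # u' cos(u) + v'
original = (sp.diff(y, x)*(x*sp.cos(u) + x + 1)
            + (2*x + y)*sp.cos(u) + y)                     # ODE with denominator cleared
print(sp.simplify(reduced - original))                     # 0: the two forms agree
```

Carrying out the integration gives the implicit solution $\sin(x^2+xy+1)+(x+1)y=C$; the initial condition $y(0)=\sin(1)$ then forces $C=2\sin(1)$.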
|
H: Integrate from $0$ to $2\pi$ with respect to $\theta$ the following $(\sin \theta +\cos \theta)^n$
$$\int_0^{2\pi} (\sin \theta +\cos\theta)^n d\theta$$
First I think about De Moivre's formula given by
$$(\cos x +i \sin x)^n=\cos (nx)+i\sin (nx)$$
I tried to apply it but I found myself lost!
Any tips or information on how to solve this integral? Thanks in advance!
AI: That won't help. Use $\sin\theta+\cos\theta=\sqrt{2}\sin(\theta+\pi/4)$. The phase shift doesn't affect integrals over a period, so your integral is $2^{n/2}\int_0^{2\pi}\sin^{n}\theta\, d\theta$, which is $0$ for odd $n$. For even $n$, say $n=2k$, it's $$2^k\int_0^{2\pi}\sin^{2k}\theta\, d\theta=2^{k+2}\int_0^{\pi/2}\sin^{2k}\theta\, d\theta.$$To evaluate that, we use Beta functions:$$2^{k+2}\int_0^{\pi/2}\sin^{2k}\theta\, d\theta=2^{k+1}\operatorname{B}(k+\tfrac12,\,\tfrac12)=2^{k+1}\frac{\Gamma(k+\tfrac12)\sqrt{\pi}}{k!}=\frac{(2k)!}{k!^2 2^{k-1}}\pi.$$This is $\frac{n!}{(n/2)!^2 2^{n/2-1}}\pi$.
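A sympy cross-check of the closed form for small $n$ (sympy assumed available):

```python
import sympy as sp

theta = sp.symbols('theta')
for n in range(1, 7):
    exact = sp.integrate((sp.sin(theta) + sp.cos(theta))**n, (theta, 0, 2*sp.pi))
    closed = 0 if n % 2 else sp.factorial(n)*sp.pi/(sp.factorial(n//2)**2 * 2**(n//2 - 1))
    print(n, sp.simplify(exact - closed))  # the difference is 0 in every case
```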
|
H: Boundedness and openness of the set
Need to prove/disprove boundedness and openness of the set $S=\left\{f\in L_1[1,\infty):\;\displaystyle\int\limits_1^\infty x|f(x)|dx<1\right\}$.
There are no problems with boundedness. But I can't check the openness. If $f_0\in S$ and $f\in B_r(f_0)$, then $\displaystyle\int\limits_1^\infty x|f_0(x)|dx<1$ and $\displaystyle\int\limits_1^\infty |f(x)-f_0(x)|dx<r$. But then $\displaystyle\int\limits_1^\infty x|f(x)|dx\leq \displaystyle\int\limits_1^\infty x|f(x)-f_0(x)|dx+\displaystyle\int\limits_1^\infty x|f_0(x)|dx$, and I can't see how to use that $f\in B_r(f_0)$. For the same reason, it is not possible to verify that the mapping $\varphi(f)=\displaystyle\int\limits_1^\infty x|f(x)|dx$ is continuous, since the factor $x$ spoils everything. Maybe I need to disprove the openness, but I also don't know how. Can anyone help me, please?
AI: In fact, we find that the set fails to be open. To see that this is the case, consider the sequence $f_n$ defined by
$$
f_n(x) = \frac 1{n+1} \mathbf 1_{[n,n+2]}.
$$
We see that $\int_1^\infty x|f_n(x)|dx=2$ for $n = 1,2,3,\dots$ and that $f_n \to 0 \in S$. So, we conclude that the complement of $S$ fails to be closed, which means that $S$ fails to be open.
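The two computations used here can be verified symbolically (sympy assumed available):

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
weighted = sp.integrate(x/(n + 1), (x, n, n + 2))   # int x |f_n(x)| dx over the support [n, n+2]
l1_norm = sp.integrate(1/(n + 1), (x, n, n + 2))    # ||f_n||_{L^1}
print(sp.simplify(weighted), sp.simplify(l1_norm))  # 2 and 2/(n+1) -> 0, so f_n -> 0 in L^1
```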
|
H: How to prove that $(\Bbb{Z}[t]+t\Bbb{R}[t])/t\Bbb{R}[t]\cong\Bbb{Z}\cong\Bbb{Z}[t]/t\Bbb{Z}[t]\cong\Bbb{Z}[t]/(\Bbb{Z}[t]\cap t\Bbb{R}[t])$?
I already proved that $(\mathbb{Z}[t]+t\mathbb{R}[t])/t\mathbb{R}[t]\cong\mathbb{Z}[t]/(\mathbb{Z}[t]\cap t\mathbb{R}[t])$ with the first isomorphism theorem but i do not know how to continue.
AI: The equality $\mathbb{Z}[t]\cap t\mathbb{R}[t]=t\mathbb{Z}[t]$ is clear (if you're not convinced, take 30 seconds to prove it), so the last isomorphism is in fact an equality.
It remains to prove that $\mathbb{Z}[t]/t\mathbb{Z}[t]\simeq\mathbb{Z}$, which can be achieved by applying the first isomorphism theorem to evaluation at $0$.
|
H: Is $\mathbb{R}^2$ a Ring?
From what I know, $\mathbb{R}^2$ is a group under addition, defined as $(a, b) + (c, d) = (a+c,\,b+d)$. However, this answer on another question seems to suggest that $\mathbb{R}^2$ is actually a ring with multiplication defined as $(a, b)\cdot (c, d) = (ac, bd)$. I thought that we usually only define multiplication over the group $\mathbb{R}^2$ as $(a, b)\cdot (c,d) = (ac - bd, ad+bc)$, and as a result end up making it a field called $\mathbb{C}$?
AI: As the answer you have linked indicates, the key here is that the exact meaning of $\Bbb R^2$ (or equivalently $\Bbb R \times \Bbb R$) depends in the context that we are working in.
In settings where $\Bbb R$ and/or $\Bbb C$ are the only rings/fields being discussed (typically in problems of an area whose name includes the word "analysis"), $\Bbb R^2$ typically refers to the abelian group/vector space over the set $\Bbb R^2$. In other words, no multiplication between elements is defined or considered.
However, in settings where $\Bbb R$ and/or $\Bbb C$ are considered to be one ring among many (typically in problems of an area whose name includes the word "algebra(ic)"), $\Bbb R^2$ carries the multiplication associated with the product $\Bbb R \times \Bbb R$ of rings. That is, $\Bbb R^2$ is a ring with multiplication defined by $(a,b)\cdot(c,d) = (ac,bd)$.
The symbol $\Bbb R^2$ is never used to refer to $\Bbb R^2$ with the complex-number multiplication $(a,b)\cdot (c,d) = (ac - bd,ad + bc)$, except perhaps for pedagogical reasons. Where the set $\Bbb R^2$ is given this multiplication rule, the symbol $\Bbb C$ is used instead.
|
H: Solve the differential equation $y=2\sqrt{x}y^2y'+4xy'$
Solve the following differential equation:
$$
y=2\sqrt{x}y^2y'+4xy'
$$
The main problem for me is to understand what type of DE it is, since it is none of these:
Separable
Homogeneous
Linear
Exact
Bernoulli
Riccati
Implicit
Lagrange
Perhaps I am missing something. But how should I approach this problem?
AI: $$y=2\sqrt{x}y^2y'+4xy'$$
It's Bernoulli's differential equation if you consider $x'$ instead of $y'$
$$yx'-4x=2\sqrt{x}y^2$$
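As a sanity check, sympy's ODE classifier (assumed available) can be asked to recognise the swapped equation; 'Bernoulli' is expected among the returned hints:

```python
import sympy as sp

y = sp.symbols('y')
x = sp.Function('x')
# y x' - 4x = 2 sqrt(x) y^2, i.e. x' - (4/y) x = 2y x^(1/2): Bernoulli with exponent 1/2
eq = sp.Eq(y*x(y).diff(y) - 4*x(y), 2*sp.sqrt(x(y))*y**2)
print(sp.classify_ode(eq, x(y)))  # expected to include 'Bernoulli'
```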
|
H: Calculating sum of series using derivative of a function
We're given the following problem:
"We know that $\frac{1}{1 - x} = \sum_{k=0}^{\infty} x^k $ for $ -1 < x < 1 $. Using the derivative with respect to $x$, calculate the sum of the following power-series: $ \sum_{k=1}^{\infty} kx^k $ and $ \sum_{k=1}^{\infty} k^2x^k $."
I have thus first taken the derivative of the left side:
$$ \frac{d}{dx}\left[\frac{1}{1 - x}\right] = \frac{1}{(1 - x)^2}$$
And of the right side:
$$ \frac{d}{dx}\left[\sum_{k=0}^{\infty}x^k\right] = \frac{d}{dx}\left[1 + x + x^2 + x^3 + x^4 + ...\right] $$
$$ = 0 + 1 + 2x + 3x^2 + 4x^3 + ...$$
$$ = \sum_{k=0}^{\infty} kx^{k-1} $$
Then I substituted these derivatives in the following equation:
$$ \sum_{k=0}^{\infty} kx^{k-1} = \frac{1}{(1 - x)^2} $$
From this I rewrote the power-series I needed to find the sum from:
$$ \sum_{k=0}^{\infty} kx^k = \sum_{k=0}^{\infty} kx^1x^{k-1} $$
$$ = \sum_{k=0}^{\infty} x\left(\frac{1}{(1 - x)^2}\right) $$
$$ = \sum_{k=0}^{\infty} \left(\frac{x}{(1 - x)^2}\right) $$
Now I am a little stuck. The solution says the sum of the first power-series should equal $\frac{x}{(1 - x)^2}$. I think that I'm close but I don't know how to continue.
Can anyone help me out?
AI: Yes, you are close: since$$\sum_{k=0}^\infty kx^{k-1}=\frac1{(1-x)^2},$$multiplying both sides by $x$ gives you$$\sum_{k=0}^\infty kx^k=\frac x{(1-x)^2}.$$
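A quick numeric confirmation at a sample point $x=0.3$ (numpy assumed available):

```python
import numpy as np

x = 0.3                               # any |x| < 1 works
k = np.arange(0, 200)                 # 200 terms are plenty at this x
print((k*x**k).sum(), x/(1 - x)**2)   # both ~ 0.6122448979...
```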
|
H: why does $P(X) = X^3+X+1$ have at most 1 root in $F_p$?
why does $P(X) = X^3+X+1$ have at most $1$ root in $F_p$?
I could fact check this on Sage for small values of $p$.
For example $p=5$ or $7$ or $19$; there is no root.
If $p = 11$, $2$ is the only root.
If $p = 13$, $7$ is the only root.
If $p = 17$, $11$ is the only root.
I also realize that there can't be only $2$ roots $a_1$ and $a_2$, because $a_3 = -(a_1+a_2)$ will be a root as well.
AI: It's not true.
$3$ and $14$ are roots in $F_{31}$.
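A brute-force search (plain Python) confirms this and can be used to hunt for other counterexamples:

```python
def roots_mod(p):
    return [x for x in range(p) if (x**3 + x + 1) % p == 0]

print(roots_mod(31))  # [3, 14]; in fact 14 is a double root, consistent with a3 = -(a1 + a2) mod 31
```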
|
H: Direct Product of Rings and isomorphism
Is there a way of finding all possible isomorphisms on the direct product of rings? I know that if the rings are of different sizes, an isomorphism of the direct product induces an automorphism on each ring, and if the rings are the same (up to isomorphism), then it can also induce isomorphisms between the factors in addition to the automorphisms, but are these the only possibilities?
AI: In general no; it highly depends on what the rings in the product are. But you can reformulate the question in light of category theory: by the universal property of products, endomorphisms $\phi: A \times B \to A \times B$ are in 1-1 correspondence with pairs $(\phi_A:A \times B \to A, \phi_B:A \times B \to B)$. Sometimes this helps when you have more information about $A$ and $B$.
|
H: Prove W is a subspace of V.
If $W_1 \subseteq W_2 \subseteq W_3 \subseteq \cdots$, where the $W_i$ are subspaces of a vector space $V$, and $W = W_1 \cup W_2 \cup \cdots$,
prove that $W \le V$.
So I proved that:
If $W_1$ and $W_2$ are two subspaces of $V$ and $W_1 \cup W_2 \le V$, then $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$.
(I let $u \in W_1 - W_2$ and $v \in W_2 - W_1$ and it was trivial.)
Now I don't know how to use this to prove the problem.
I'm getting confused.
Maybe I don't have to use induction?
AI: First, observe that $0 \in W_1$, so $0 \in W$. Next, for $x,y \in W$, fix $W_i$ and $W_j$ such that $x \in W_i$ and $y \in W_j$. Let $k$ be any index greater than $i$ and $j$, so that $x, y \in W_k$; hence, $x + y \in W_k \subset W$. Finally, given $x \in W$ and a scalar $\alpha$, fix $W_i$ such that $x \in W_i$, so $\alpha x \in W_i \subset W$. We are done.
|
H: Dual representation of finite groups
If $V$ is a $\mathbb{C}G$ module, then $V^*$ is the dual module with the action
$$(gf)(v) = f(g^{-1}v) $$
for $g\in G,f\in V^*$ and $v\in V$. Where $V^* = \text{Hom}(V,\mathbb{C})$.
What I don't understand is why do we need the inverse of $g$ in $f(g^{-1}v)$, how does this agree with the definition of representation?
AI: Well, one of the axioms for the group action is $g(hx)=(gh)x$. So let's assume that we've defined our group action on $V^*$ by $(gf)(v):=f(gv)$. Now take two elements $g,h\in G$ and calculate:
$$((gh)f)(v)=f(ghv)$$
$$(g(hf))(v)=(hf)(gv)=f(hgv)$$
These are not necessarily equal when $G$ is nonabelian! And so our "group action" is not an action at all.
But they become equal (regardless of $G$) if we use $(gf)(v):=f(g^{-1}v)$ instead:
$$((gh)f)(v)=f((gh)^{-1}v)=f(h^{-1}g^{-1}v)$$
$$(g(hf))(v)=(hf)(g^{-1}v)=f(h^{-1}g^{-1}v)$$
The same works if we replace $(\cdot)^{-1}$ with any other anti-homomorphism $G\to G$.
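The failure is easy to see numerically with two non-commuting permutation matrices (numpy assumed available; the vector $f$ represents a functional via the dot product):

```python
import numpy as np

g = np.array([[0., 1, 0], [1, 0, 0], [0, 0, 1]])  # swap of coordinates 1, 2
h = np.array([[1., 0, 0], [0, 0, 1], [0, 1, 0]])  # swap of coordinates 2, 3
f = np.array([1., 2., 3.])                        # f(v) = f . v

bad  = lambda M, f: f @ M                  # (Mf)(v) = f(Mv)
good = lambda M, f: f @ np.linalg.inv(M)   # (Mf)(v) = f(M^{-1}v)

print(np.allclose(bad(g @ h, f), bad(g, bad(h, f))))     # False: not a group action
print(np.allclose(good(g @ h, f), good(g, good(h, f))))  # True: a genuine action
```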
|
H: Convergence of series $\sum_{k=2}^{\infty} \frac{2^{k}}{\lfloor{\frac{k}{2}\rfloor}}$
I would like to inspect the convergence of the following series
$$\sum_{k=2}^{\infty} \frac{2^{k}}{\left\lfloor\frac{k}{2}\right\rfloor}.$$
Because I am new to the whole topic of series, it would be very nice if someone could explain to me how it goes. I have managed to handle convergence for ordinary series, but with ceilings and floors it feels a bit more complicated.
AI: It diverges by a comparison with the harmonic series: for $k\ge2$ we have $\frac{2^k}{\lfloor k/2\rfloor}\ge\frac1k$, and in fact the terms do not even tend to $0$.
|
H: Inequality question.
Let $a,b,c>0$ with $a+b+c=1$. Show that $$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} \leq 3 + 2\cdot\frac{\left(a^3 + b^3 + c^3\right)}{abc}$$
OK. So first off,
\begin{align} a^3 + b^3+ c^3 & =a^3 + b^3+ c^3- 3abc +3abc\\
& =\ (a+b+c)(a^2+b^2+c^2-(ab+bc+ca))+3abc\\
& = \ (1-3(ab+bc+ca)) + 3abc \\
\end{align}
Using this, the inequality becomes
$$7 \cdot \left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right) \leq 9+ \frac{2}{abc}$$
How do I proceed from here? Was this the right approach? Is there a better one?
AI: I think the following is smooth enough.
We need to prove that:
$$(a+b+c)(ab+ac+bc)\leq3abc+2(a^3+b^3+c^3)$$ or
$$\sum_{cyc}(2a^3-a^2b-a^2c)\geq0$$ or
$$\sum_{cyc}(a-b)^2(a+b)\geq0,$$ which is obvious.
Your second inequality it's indeed just the first:
$$7\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)\leq9+\frac{2}{abc}$$ it's
$$7(a+b+c)(ab+ac+bc)\leq9abc+2(a+b+c)^3$$ or
$$\sum_{cyc}(3abc+2a^3+6a^2b+6a^2c+4abc-7a^2b-7a^2c-7abc)\geq0$$ or
$$\sum_{cyc}(2a^3-a^2b-a^2c)\geq0$$ or
$$\sum_{cyc}(a^3-a^2b-ab^2+b^3)\geq0$$ or
$$\sum_{cyc}(a-b)^2(a+b)\geq0.$$
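The key identity $\sum_{cyc}(2a^3-a^2b-a^2c)=\sum_{cyc}(a-b)^2(a+b)$ can be double-checked with sympy (assumed available):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = 2*(a**3 + b**3 + c**3) - a**2*(b + c) - b**2*(c + a) - c**2*(a + b)
sos = (a - b)**2*(a + b) + (b - c)**2*(b + c) + (c - a)**2*(c + a)
print(sp.expand(lhs - sos))  # 0
```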
|
H: Equivalent definition of decomposable map
Let $A\subset B(H)$ be an operator system and $B$ be a $C^*$-algebra. (1) $u:A\rightarrow B$ is called a decomposable map if $u$ is in the linear span of $CP(A, B)$, where $CP(A,B)$ is the set of all completely positive maps from $A$ to $B$.
There is another equivalent definition. (2) $u:A\rightarrow B$ is called a decomposable map if $u=u_1-u_2+i(u_3-u_4)$, where each $u_i\in CP(A,B)$ ($i=1,\dots,4$).
It is easy to see that (2) implies (1). How can one check that (1) implies (2)?
AI: Suppose $u:A\to B$ is decomposable, and write $u=\sum_{k=1}^n\lambda_ku_k$ for some $\lambda_k\in\mathbb C$ and $u_k\in CP(A,B)$. Each $\lambda_k$ can be written $\lambda_k=(\lambda_{k,1}-\lambda_{k,2})+i(\lambda_{k,3}-\lambda_{k,4})$, where $\lambda_{k,j}\geq0$ for $k=1,\ldots,n$, $j=1,\ldots,4$. Then for each $j$, $\sum_{k=1}^n\lambda_{k,j}u_k$ is in $CP(A,B)$, and we have
$$u=\left(\sum_{k=1}^n\lambda_{k,1}u_k-\sum_{k=1}^n\lambda_{k,2}u_k\right)+i\left(\sum_{k=1}^n\lambda_{k,3}u_k-\sum_{k=1}^n\lambda_{k,4}u_k\right).$$
|
H: Show that there is a $\pi_i$-related smooth vector field for each smooth vector field $X_i \in \Gamma(M_i,TM_i)$
Assume $M_1, \dots,M_k$ are smooth manifolds and define $M:=M_1\times \dots \times M_k$. Denote the projections on the $i$-th factor with $\pi_i: M \rightarrow M_i$. I want to show that for each smooth vector field $X_i \in \Gamma(M_i,TM_i)$ there is a $\pi_i$-related smooth vector field $Y\in \Gamma(M,TM)$.
Since I don't know any theorems about the existence for related vector fields my approach was to prove the existence by constructing one. I know that if $Y$ is a smooth vector field over $M$ then for all smooth function $f\in C^\infty(M)$, $fY:M\rightarrow TM$, defined by $$(fY)_p=f(p)Y_p$$
is a smooth vector field as well.
From the Lemma below I know that for each real-valued smooth function $g$ on an open subset of $M_i$, we have $$Y(g\circ \pi_i)=(Xg)\circ \pi_i.$$
Well, that's basically how far I am right now. I have read the chapter about this topic in Introduction to Smooth Manifolds by John M. Lee, but I am still lacking intuition for this situation. If anyone could lead me in the right direction I would appreciate it.
Definition of $F$-related vector fields:
Suppose $F: M\rightarrow N$ is a smooth, where $M,N$ are smooth manifolds. Smooth vector fields $X\in \Gamma(M,TM)$ and $Y\in \Gamma(N,TN)$ are called $\mathbf{F}$-related, if for each $p\in M$, $dF_p(X_p)=Y_{F(p)}$.
Lemma:
Assume $X,Y$ and $F$ are as specified in the definition above. $X$ and $Y$ are $F$-related if and only if for every smooth real-valued function $f$ on an open subset $U\subseteq N$ we have $X(f\circ F)=(Yf)\circ F$. This Lemma follows basically by inserting in the definitions.
AI: Suppose $X \in \mathfrak{X}(M)$ and $Y \in \mathfrak{X}(N)$; we can define a vector field $X \oplus Y : M \times N \to T(M \times N)$ on the product manifold $M \times N$ as
$$
(X \oplus Y)_{(p,q)} = (X_p,Y_q)
$$
under the natural identification of $T_{(p,q)}(M \times N)$ with $T_p M \oplus T_qN$ (by the isomorphism $\alpha : T_{(p,q)}(M \times N) \to T_pM \oplus T_qN$ defined as $\alpha (v) = (d\pi_M(v), d\pi_N(v))$); one can show that it is a smooth vector field on the product manifold.
So, WLOG, given $X \in \mathfrak{X}(M_1)$, it can be checked that for any $X_j \in \mathfrak{X}(M_j)$, $j=2,\dots,k$, the resulting product $X \oplus X_2 \oplus \cdots \oplus X_k$ is $\pi_1$-related to $X$ by the way the product vector field is defined. So a vector field on the product manifold that is $\pi_1$-related to $X$ is not unique. Of course we can choose $X \oplus \mathbf{0}\, \oplus \cdots\oplus \mathbf{0}$ for convenience.
Since you read Lee's, I want to point out that the construction of the product vector field above is in fact an exercise in Lee's Introduction to Smooth Manifolds (see Problem 8-17, and a more general setting in Problem 8-18). However, vector fields on the product manifold that are $\pi_1$-related to a vector field $X \in \mathfrak{X}(M_1)$ are not necessarily of product form.
After reading this post, I've come to the conclusion that
$\mathfrak{X}(M \times N) \supsetneq \mathfrak{X}(M) \oplus \mathfrak{X}(N)$ (as shown in that answer),
Any vector field $V$ on the product manifold $M \times N$ is of the form $V= X \oplus Y$ for some $X \in \mathfrak{X}(M)$ and $Y \in \mathfrak{X}(N)$ if and only if $V$ and $X$ are $\pi_M$-related and $V$ and $Y$ are $\pi_N$-related.
In a more general setting, we know that for any smooth surjective submersion $F : M \to N$ and $X \in \mathfrak{X}(M)$, the pushforward $F_{*}(X)$ is a well-defined smooth vector field on $N$ that is $F$-related to $X$ if and only if $dF_p(X_p) = dF_q(X_q)$ whenever $p$ and $q$ are in the same fiber. So by applying this to the maps $\pi_M : M \times N \to M$ and $\pi_N : M \times N \to N$, we have the following criterion:
Any vector field $V \in \mathfrak{X}(M \times N)$ is also in $\mathfrak{X}(M) \oplus \mathfrak{X}(N)$ if and only if $d\pi_M(V_{(p,q)})$ is constant on each fiber $\{p\} \times N$ and $d\pi_N(V_{(p,q)})$ is constant on each fiber $M \times \{q\}$.
|
H: For a given integer $k$, which of the following are false?
For a given integer $k$, which of the following are false?
$(1)$ If $k($mod $72)$ is a unit in $\mathbb{Z}_{72}$, then $k($ mod $9)$ is a unit in $\mathbb{Z}_9$
$(2)$ If $k($mod $72)$ is a unit in $\mathbb{Z}_{72}$, then $k($ mod $8)$ is a unit in $\mathbb{Z}_8$
$(3)$ If $k($mod $8)$ is a unit in $\mathbb{Z}_{8}$, then $k($ mod $72)$ is a unit in $\mathbb{Z}_{72}$
$(4)$ If $k($mod $9)$ is a unit in $\mathbb{Z}_{9}$, then $k($ mod $72)$ is a unit in $\mathbb{Z}_{72}$
I am confused about how to proceed in any direction. Please help.
Thanks for your time and support.
AI: $k$ is a unit modulo $n$ iff there exists a $b$ such that $kb \equiv 1 \pmod{n}$. This implies that
$$n | (kb-1)$$
Now suppose $d | n$. Then by transitive property of divisibility, we can claim that $d | (kb-1)$. So $kb \equiv 1 \pmod{d}$ as well. This means $k$ is a unit modulo $d$ as well.
For the converse, let us consider $3$. It is a unit in $\Bbb{Z}_8$ because $3^2 \equiv 1 \pmod{8}$. Ask yourself: is $3$ a unit in $\Bbb{Z}_{72}$ or not?
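Both directions can be checked exhaustively in a few lines of Python:

```python
from math import gcd

units72 = [k for k in range(72) if gcd(k, 72) == 1]
print(all(gcd(k % 9, 9) == 1 and gcd(k % 8, 8) == 1 for k in units72))  # True: (1) and (2) hold
print(gcd(3, 8) == 1, gcd(3, 72) == 1)  # True, False: 3 is a unit mod 8 but not mod 72, so (3) fails
```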
|
H: Prove that (MB (φ))^k = MB (φ^k)
Let $V$ be a $K$-vector space with basis $B := \{b_1,\dots, b_n\}$ and $\varphi$ an endomorphism of $V$ with representation matrix $M_B(\varphi)$. Prove that $(M_B(\varphi))^k = M_B(\varphi^k)$ holds for $k = 1, \dots, n$.
AI: This holds because the diagram below is commutative, and commutative squares can be composed by placing them next to one another (in particular for $\varphi \circ \varphi$):
$$
\require{AMScd}
\begin{CD}
V@>{\varphi} >> V \\
@VVBV @VVBV \\
K^n @>{M_B(\varphi)}>> K^n
\end{CD}
$$
here $B: V \to K^n$ identifies $V$ with $K^n$ by identifying $b_i$ with $e_i=(0,0,...,1,...,0)$. Composing these squares gives you the result you are looking for.
|
H: Let $a,b,c>0$ with$ \frac{1}{a}+\frac{1}{b}+\frac{1}{c} = 1$. Prove that $(a + 1)(b + 1)(c + 1) \geq 64$
Let $a,b,c>0$ with$ \frac{1}{a}+\frac{1}{b}+\frac{1}{c} = 1$. Prove that
$(a + 1)(b + 1)(c + 1) \geq 64$
OK, so we are given that $abc=ab+bc+ca$; expanding $(a+1)(b+1)(c+1)=abc+(ab+bc+ca)+(a+b+c)+1$ and using this, the inequality becomes $2abc+(a+b+c)+1 \geq 64$.
How do I proceed from here?
AI: $$a+1=1+\frac ab+\frac ac+1$$
etc. Multiply this with the similar expressions for $b+1$ and $c+1$ and use
AM/GM.
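Following the hint, AM-GM on the four terms gives $a+1\ge 4\sqrt[4]{\frac{a^2}{bc}}$, and the fourth-root factors cancel across the three products, leaving $4^3=64$. Numeric spot checks with constraint-satisfying sample triples (plain Python):

```python
for a, b, c in [(3, 3, 3), (2, 4, 4), (2, 3, 6)]:
    assert abs(1/a + 1/b + 1/c - 1) < 1e-12       # each triple satisfies the constraint
    print((a + 1)*(b + 1)*(c + 1))                # 64, 75, 84: all >= 64, equality at a=b=c=3
```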
|
H: How to prove the sequence $ \frac{c^n}{\sqrt{n}}, c \in (0, 1)$ is convergent
Given a sequence $a_n = \frac{c^n}{\sqrt{n}}$ where $c \in (0, 1), n = 1, 2, 3, \cdots$, how to prove that the sequence is convergent? What if $c \in (0, \infty)$?
AI: That's because if $c \in (0, 1]$ then $0 \le a_n \le 1/\sqrt{n}$, and $1/\sqrt{n} \to 0$ as $n \to \infty$.
If $c >1$ then $\frac{c^n}{n} \le \frac{c^n}{\sqrt{n}}$ but $\frac{c^n}{n} \to \infty$, thus $\frac{c^n}{\sqrt{n}}$ diverges.
|
H: High school percentage question that I got wrong.
Calvin and Susie are running for class president. Of the first 80% of the ballots that are counted, Susie receives 53% of the votes and Calvin receives 47%. At least what percentage of the remaining votes must Calvin receive to catch up to Susie in the election?
Let V be the total number of votes. 80% = 0.8, so 0.8V votes have been counted.
Susie has 53% of this, which is (0.53)(0.8V).
Calvin has 47% of this, which is (0.47)(0.8V).
The difference between their counts is (0.53)(0.8V) - (0.47)(0.8V) = (0.06)(0.8V) = 0.048V.
The remaining votes make up 20% of V, or 0.2V.
Divide the amount Calvin needs by the remaining amount: 0.048V / 0.2V = 0.24, or 24%.
My answer was wrong; it's actually 62%. Can anyone please help me understand what I did wrong and how to do this problem?
AI: Calvin has received $0.47\cdot 0.8V = 0.376V$ votes, but he needs $0.5V$ total votes to catch up. Thus of the remaining $0.2V$, he needs $0.124V$ votes, so the percentage is $0.124/0.2 = 0.62$.
Your mistake was assuming that if Calvin receives a number of votes that allows him to catch up to Susie's current total, he will win. But surely Susie gets some of those $0.2V$ votes as well.
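In code form (plain Python; the total $V$ scales out):

```python
V = 1000.0                        # any total number of votes works
calvin_so_far = 0.47*0.8*V        # Calvin's votes among the first 80%
needed = 0.5*V - calvin_so_far    # votes still needed to reach a 50/50 tie
print(needed/(0.2*V))             # 0.62, i.e. 62% of the remaining ballots
```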
|
H: Finding area bounded by 3 function that aren't constant
Find the area bounded by the following $3$ curves:
\begin{align}
y &= x + 6
,\\
y &= x^3
,\\
2y + x &= 0
.
\end{align}
I have only found the solution when one of the boundaries is constant, like $x=2$.
AI: \begin{align}
f_1(x)&=x+6
,\\
f_2(x)&=x^3
,\\
f_3(x)&=-x/2
.
\end{align}
\begin{align}
S_{ABC}
&=S_{ABD}+S_{BCD}
\\
&=
\int_{-4}^0 f_1(x)-f_3(x) \, dx
+
\int_0^2 f_1(x)-f_2(x) \, dx
=12+10=22
.
\end{align}
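The two integrals can be confirmed with sympy (assumed available):

```python
import sympy as sp

x = sp.symbols('x')
S = (sp.integrate((x + 6) - (-x/2), (x, -4, 0))   # S_ABD
     + sp.integrate((x + 6) - x**3, (x, 0, 2)))   # S_BCD
print(S)  # 22
```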
|
H: The sides of a pentagon are represented in centimeters by $x$, $10$, $2x$, $1$ and $3$. How many even values of $x$ satisfy this pentagon?
The sides of a pentagon are represented in centimeters by $x$, $10$, $2x$, $1$ and $3$. Determine how many even values of $x$ there are for which such a pentagon exists.
The answer is 5.
How can I solve this problem? Is the triangle inequality useful? When I used it, I couldn't find the range for $x$.
Can someone help me?
AI: In order for the sides to form a pentagon, the longest side has to be less than the sum of the other sides (this is the polygon analogue of the triangle inequality). You have two cases:
The longest side is $10$, so you can write
$$10<x+2x+1+3$$ or $x>2$
The longest side is $2x$, so $$2x<x+10+1+3$$ or $x<14$
Therefore, the allowed values are $4,6,8,10,12$
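A brute-force scan over even $x$ (plain Python) reproduces the list:

```python
def pentagon_ok(sides):
    return max(sides) < sum(sides) - max(sides)  # longest side < sum of the others

print([x for x in range(2, 101, 2) if pentagon_ok([x, 10, 2*x, 1, 3])])  # [4, 6, 8, 10, 12]
```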
|
H: Help with differentiation chain rule with tensors in backpropagation
Say, we're given $N$ feature vectors $\mathbf{x}_i \in \mathbb{R}^{D \times 1}$ and assembled into a matrix $X \in \mathbb{R}^{D \times N}$. We also have a matrix $W \in \mathbb{R}^{D \times D}$, $W = XX^\top$ and a predictor matrix $Y \in \mathbb{R}^{D \times N}$, $Y=WX$. Say, we have a scalar function, e.g., $\ell = \left\lVert Y \right\lVert_F$. We need to compute the gradients of $\ell$ using back-propagation.
We know that $\frac{\partial \ell}{\partial Y}$ is a matrix. $\frac{\partial \ell}{\partial W}$ is also a matrix and $\frac{\partial \ell}{\partial W} = \frac{\partial \ell}{\partial Y} \frac{\partial Y}{\partial W}$ (by the chain rule).
Here's the problem: $\frac{\partial Y}{\partial W}$ is a 4-tensor and multiplying a matrix by a 4-tensor gives a 4-tensor (albeit, potentially of a different shape), NOT a matrix (as $\frac{\partial \ell}{\partial W}$ should be)!
Obviously, I'm doing something wrong. The question is - what?
Thx
AI: The kind of "multiplication" used here is $\frac{\partial\ell}{\partial W_{ab}}=\sum_{cd}\frac{\partial\ell}{\partial Y_{cd}}\frac{\partial Y_{cd}}{\partial W_{ab}}$, where nobody writes the $\sum_{cd}$.
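Indexwise, the 4-tensor never needs to be materialised: since $Y_{cd}=\sum_e W_{ce}X_{ed}$, one has $\frac{\partial Y_{cd}}{\partial W_{ab}}=\delta_{ca}X_{bd}$, and the contraction collapses to a matrix product. A numpy sketch (numpy assumed available; $W$ is treated as an independent input of the layer, as in backprop):

```python
import numpy as np

D, N = 4, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((D, N))
W = X @ X.T
Y = W @ X
G = Y/np.linalg.norm(Y)   # dl/dY for l = ||Y||_F
dldW = G @ X.T            # sum_cd G_cd * dY_cd/dW_ab = (G X^T)_ab
print(dldW.shape)         # (4, 4): a matrix, as expected
```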
|
H: Number of $3$-Letter words from the alphabet {A, B, C} that have no $2$ "A's" directly one after the other
What is the number of $3$-Letter words from the alphabet {A, B, C} that have no $2$ "A's" directly one after the other?
What am I doing wrong? I have the following calculation:
$(1 \cdot 2 \cdot 3) + (2 \cdot 1 \cdot 2) + (3 \cdot 2 \cdot 1) = 16$.
In each of the above parentheses, $1$ stands for "A", and since no two "A's" are allowed one after the other, we multiply $1$ with the only $2$ options left (B and C). $3$ is a free slot in which we can use any letter from our alphabet.
This calculation is wrong, but I'm not sure why. My textbook gives $22$ as a solution. Where is my mistake?
AI: Total number of allocations: $3 \times 3 \times 3 = 27$, because you have 3 slots and 3 letters/slot.
Allocations with a 'banned' double A: $2$ patterns, $(AAx)$ and $(xAA)$. For each such pattern you have $3$ choices for the free slot ($A, B, C$). Hence, $27- 2 \times 3 = 21$. Now, you have subtracted $AAA$ twice, so add it back to the solution:
$$
27 - 2 \times 3 +1 = 22
$$
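An exhaustive check with itertools (plain Python) agrees:

```python
from itertools import product

words = [''.join(w) for w in product('ABC', repeat=3)]
print(sum('AA' not in w for w in words))  # 22
```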
|
H: Prove that $(\mu_1 \otimes \mu_2)\circ {\Pi_1}^{-1}=\mu_1$
Suppose $\Pi_1 :(\Omega_1 \times \Omega_2, \mathcal{F_1} \otimes \mathcal{F_2} ,\mu_1 \otimes \mu_2) \rightarrow (\Omega_1, \mathcal{F_1},(\mu_1 \otimes \mu_2) \circ {\Pi_1}^{-1})$ is a projection map which basically maps $(x,y) \rightarrow x$.
We are to prove that $(\mu_1 \otimes \mu_2)\circ {\Pi_1}^{-1}=\mu_1$.
I was thinking of showing that the two measures are equal on a generator of $\mathcal{F_1}$; then we can say that they agree over $\mathcal{F_1}$. But how does one explicitly choose a generator such that this holds?
Can anyone help?
AI: Assuming these are probability measures (otherwise this is not true):
Since these are general $\sigma$-algebras, you will not be able to use your approach. You would only be able to do that if you already knew a nice generator for $\mathcal{F}_1.$ Instead, the solution is quite straightforward if you realize what $\Pi_1^{-1}$ is exactly. Suppose $A\in \mathcal{F}_1$, then $\Pi_1^{-1}(A) = A\times \Omega_2.$ Now fill this in:
$$(\mu_1\otimes \mu_2)\circ\Pi_1^{-1}(A) = (\mu_1\otimes \mu_2)(A\times \Omega_2) = \mu_1(A)\cdot \mu_2(\Omega_2) = \mu_1(A).$$
|
H: Finding a closed form of an integral: $\int_0^k\ln(a\sin^2(x)+(a+b)\cos^2(x))dx$
I am trying to find a closed form for the following integral:
$$\int_0^k\ln(a\sin^2(x)+(a+b)\cos^2(x))dx$$
And I know that $a>0$, $b\ge0$ and $k=(\pi(1+n))/2$ where $n$ is a natural number.
How can I approach this problem?
AI: Assignment:
Find a closed form for the following integral:
$$\mathcal{I}_\text{k}\left(\alpha,\beta\right):=\int_0^\text{k}\ln\left(\alpha\sin^2\left(x\right)+\left(\alpha+\beta\right)\cos^2\left(x\right)\right)\space\text{d}x$$
Where $\text{k}:=\frac{\pi\left(1+\text{n}\right)}{2}$ for $\text{n}\in\mathbb{N}$ and $\alpha\space\wedge\space\beta\in\mathbb{R}_{>0}$.
Solution:
First, let's recall that:
$$\alpha\sin^2\left(x\right)+\left(\alpha+\beta\right)\cos^2\left(x\right)=\alpha\sin^2\left(x\right)+\alpha\cos^2\left(x\right)+\beta\cos^2\left(x\right)=$$
$$\alpha\left(\underbrace{\sin^2\left(x\right)+\cos^2\left(x\right)}_{=\space1}\right)+\beta\cos^2\left(x\right)=\alpha+\beta\cos^2\left(x\right)\tag1$$
So, we have:
$$\mathcal{I}_\text{k}\left(\alpha,\beta\right)=\int_0^\text{k}\ln\left(\alpha+\beta\cos^2\left(x\right)\right)\space\text{d}x\tag2$$
Now, let's find:
$$\frac{\partial\mathcal{I}_\text{k}\left(\alpha,\beta\right)}{\partial\beta}=\frac{\partial}{\partial\beta}\left\{\int_0^\text{k}\ln\left(\alpha+\beta\cos^2\left(x\right)\right)\space\text{d}x\right\}=$$
$$\int_0^\text{k}\frac{\partial}{\partial\beta}\left(\ln\left(\alpha+\beta\cos^2\left(x\right)\right)\right)\space\text{d}x=\int_0^\text{k}\frac{\cos^2\left(x\right)}{\alpha+\beta\cos^2\left(x\right)}\space\text{d}x\tag3$$
Now, we write:
$$\cos^2\left(x\right)=\frac{\alpha+\beta\cos^2\left(x\right)}{\beta}-\frac{\alpha}{\beta}\tag4$$
Using the linearity of the integral we can split it up, so:
$$\frac{\partial\mathcal{I}_\text{k}\left(\alpha,\beta\right)}{\partial\beta}=\frac{1}{\beta}\int_0^\text{k}1\space\text{d}x-\frac{\alpha}{\beta}\int_0^\text{k}\frac{1}{\alpha+\beta\cos^2\left(x\right)}\space\text{d}x=$$
$$\frac{1}{\beta}\cdot\left[x\right]_0^\text{k}-\frac{\alpha}{\beta}\int_0^\text{k}\frac{1}{\alpha+\beta\cos^2\left(x\right)}\space\text{d}x=\frac{\text{k}}{\beta}-\frac{\alpha}{\beta}\int_0^\text{k}\frac{1}{\alpha+\beta\cos^2\left(x\right)}\space\text{d}x\tag5$$
Since $\cos^2$ contributes the same amount on each of the $1+\text{n}$ subintervals $\left[\frac{j\pi}{2},\frac{\left(j+1\right)\pi}{2}\right]$ of $\left[0,\text{k}\right]$ with $\text{k}=\frac{\pi\left(1+\text{n}\right)}{2}$, the remaining integral equals $\left(1+\text{n}\right)$ times the integral over $\left[0,\frac{\pi}{2}\right]$. There, let $\text{u}:=\tan\left(x\right)$, so $\text{d}x=\frac{1}{\sec^2\left(x\right)}\space\text{du}$; the lower bound gives $\text{u}=\tan\left(0\right)=0$ and the upper bound gives $\text{u}\to\infty$. So:
$$\frac{\partial\mathcal{I}_\text{k}\left(\alpha,\beta\right)}{\partial\beta}=\frac{\text{k}}{\beta}-\frac{\alpha\left(1+\text{n}\right)}{\beta}\underbrace{\int_0^\infty\frac{1}{\alpha\left(\text{u}^2+1\right)+\beta}\space\text{du}}_{=\space\text{I}_1}\tag6$$
Let $\text{s}:=\sqrt{\frac{\alpha}{\alpha+\beta}}\cdot\text{u}$, so:
$$\text{I}_1=\frac{1}{\sqrt{\alpha\left(\alpha+\beta\right)}}\int_0^\infty\frac{1}{\text{s}^2+1}\space\text{ds}=\frac{1}{\sqrt{\alpha\left(\alpha+\beta\right)}}\cdot\lim_{\text{n}\to\infty}\left[\arctan\left(\text{s}\right)\right]_0^\text{n}=$$
$$\frac{1}{\sqrt{\alpha\left(\alpha+\beta\right)}}\cdot\left(\lim_{\text{n}\to\infty}\arctan\left(\text{n}\right)-\arctan\left(0\right)\right)=\frac{1}{\sqrt{\alpha\left(\alpha+\beta\right)}}\cdot\frac{\pi}{2}\tag7$$
So, we have:
$$\frac{\partial\mathcal{I}_\text{k}\left(\alpha,\beta\right)}{\partial\beta}=\frac{\text{k}}{\beta}-\frac{\alpha\left(1+\text{n}\right)}{\beta}\cdot\frac{1}{\sqrt{\alpha\left(\alpha+\beta\right)}}\cdot\frac{\pi}{2}=\frac{\text{k}}{\beta}\left(1-\sqrt{\frac{\alpha}{\alpha+\beta}}\right)\tag8$$
using $\frac{\pi\left(1+\text{n}\right)}{2}=\text{k}$ in the last step.
Now, we must find:
$$\mathcal{I}_\text{k}\left(\alpha,\beta\right)=\int\frac{\partial\mathcal{I}_\text{k}\left(\alpha,\beta\right)}{\partial\beta}\space\text{d}\beta\tag9$$
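A numeric spot check of $(8)$ for the sample values $\text{n}=1$ (so $\text{k}=\pi$), $\alpha=2$, $\beta=3$ (numpy assumed available; the left side is evaluated with a trapezoidal rule):

```python
import numpy as np

n, a, b = 1, 2.0, 3.0
k = np.pi*(1 + n)/2
x = np.linspace(0.0, k, 200001)
f = np.cos(x)**2/(a + b*np.cos(x)**2)           # integrand of dI/d(beta)
dI = ((f[:-1] + f[1:])/2 * np.diff(x)).sum()    # trapezoidal quadrature
formula = k/b*(1 - np.sqrt(a/(a + b)))          # right side of (8)
print(dI, formula)                              # agree to high accuracy
```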
|
H: For which ideal $I$ of $\Bbb Z[t]$ is $\mathbb{Z}[t]/I\cong\Bbb Z_{11}$?
Maybe for $I=(11,t-1)$ but i don't know how to prove it or if it is even right.
AI: We want to kill $11$ and $t$ and so the natural choice is $I=(11,t)$. This leads to the homomorphism $\phi: \mathbb{Z}[t] \to \mathbb{Z}_{11}$ given by $\phi(p(t)) = p(0) \bmod 11$. It remains to prove that $\phi$ is surjective with kernel $I$.
|
H: Is the degree m taylor polynomial, A polynomial field over R^n?
$x \in R[x]$ and $x^2 \in R[x]$; if $R[x]$ were a field, then the result of any field operation between these two would also have to be in $R[x]$, but $x^{-1}$ is not in $R[x]$.
So clearly polynomials do not form a field.
However, Professor Ghrist says that it is a polynomial field at 1:04 of this video
https://www.youtube.com/watch?v=pePJKl6EFDo&t=63s
Am I missing something?
AI: That's a really bad expression since it tempts the listener to confuse two totally different issues. What he means by "polynomial field" has nothing to do with "fields" in algebra. Here he draws an analogy to vector fields: https://en.wikipedia.org/wiki/Vector_field
A vector field assigns to a point a vector, in yours/his case the "polynomial field" assigns to a point a polynomial: the Taylor expansion around the point.
That's why he calls it a "field". But this is not standard usage of language, no algebraist would use this "terminology" here, but the informal analogy is ok.
|
H: If $0 < \alpha < \alpha + \delta < \beta < \frac\pi2$ then $\tan\alpha + \tan\beta > \tan(\alpha + \delta) + \tan(\beta - \delta).$
For reasons that probably don't bear examination (I've rewritten my answer to this question, but I haven't posted the new improved version with added vitamins, because I wish to supplement my irritatingly long and pointless school-geometry proof with an equally quixotic school-trigonometry proof, avoiding the theory of convexity), I wish to prove:
$$
\text{If } 0 < \alpha < \alpha + \delta < \beta < \frac\pi2 \text{ then } \tan\alpha + \tan\beta > \tan(\alpha + \delta) + \tan(\beta - \delta)
$$
using only trigonometric identities familiar from secondary school mathematics.
All I have managed to come up with so far is this pathetic dog's dinner of a proof:
If $0 < \theta < \theta + \varphi = \frac\pi2,$ then
$$
\tan\theta + \tan\varphi = \tan\theta + \frac1{\tan\theta} = 2 + \left(\frac1{\sqrt{\tan\theta}} - \sqrt{\tan\theta}\right)^2,
$$
which decreases strictly with $\theta$ for $\theta \leqslant \frac\pi4.$
If $\alpha + \beta = \frac\pi2,$ then either $\alpha + \delta \leqslant \frac\pi4$ or $\beta - \delta \leqslant \frac\pi4,$ therefore $\tan\alpha + \tan\beta > \tan(\alpha + \delta) + \tan(\beta - \delta).$ $\ \square$
Assume from now on that $\alpha + \beta \ne \frac\pi2.$ Then $\tan\alpha\tan\beta \ne 1,$ and similarly $\tan(\alpha + \delta)\tan(\beta - \delta) \ne 1.$
Because $\alpha + \beta = (\alpha + \delta) + (\beta - \delta),$ the addition formula for the tangent function gives:
\begin{equation}
\label{eq:1}\tag{$1$}
\frac{\tan(\alpha + \delta) + \tan(\beta - \delta)}{\tan\alpha + \tan\beta} =
\frac{1 - \tan(\alpha + \delta)\tan(\beta - \delta)}{1 - \tan\alpha\tan\beta}.
\end{equation}
Because $\tan\alpha\tan\delta < 1,$ we can write, after some tedious manipulation (yawn-making details provided on request):
$$
\tan(\alpha + \delta)\tan(\beta - \delta) - \tan\alpha\tan\beta =
\frac{\tan\delta(1 - \tan^2\alpha\tan^2\beta)[\tan(\beta - \alpha) - \tan\delta]}
{(1 - \tan\alpha\tan\delta)(1 + \tan\beta\tan\delta)}.
$$
Therefore $\tan(\alpha + \delta)\tan(\beta - \delta) - \tan\alpha\tan\beta$ has the same sign as $1 - \tan\alpha\tan\beta.$
It follows that the right hand side of \eqref{eq:1} is always strictly less than $1,$ and we are done (more like done in!). $\ \square$
Surely it must be possible to do better than this? Please put me out of my misery, so that I can be rid of this stupid obsession, and perhaps even get on with some slightly more respectable mathematics instead!
AI: We need to prove that if $\{a,b,c,d\}\subset\left(0,\frac{\pi}{2}\right)$, $a\geq b$, $c\geq d$, $a\geq c$ and $a+b=c+d$, then
$$\tan{a}+\tan{b}\geq\tan{c}+\tan{d}.$$
Indeed, we need to prove that:
$$\frac{\sin(a+b)}{\cos{a}\cos{b}}\geq\frac{\sin(c+d)}{\cos{c}\cos{d}}$$ or
$$\cos(c+d)+\cos(c-d)\geq\cos(a+b)+\cos(a-b)$$ or
$$a-b\geq c-d$$ or
$$a-c\geq b-d,$$ which is obvious because $a-c\geq0$, but $b-d\leq0.$
I think it is better to use Karamata.
Indeed, if $\beta-\delta\geq\alpha+\delta$, then
$$(\beta,\alpha)\succ(\beta-\delta,\alpha+\delta).$$
If $\beta-\delta\leq\alpha+\delta$, then
$$(\beta,\alpha)\succ(\alpha+\delta,\beta-\delta).$$
In any case your inequality follows from Karamata because $\tan$ is a convex function on $\left(0,\frac{\pi}{2}\right).$
|
H: Differences between polynomial quotient rings $\mathbb{Z}_m[x]/(x^n+1)$ and $\mathbb{Z}_m[x]/(x^n-1)$
As based on the definition of the polynomial quotient ring
$\mathbb{Z}_m[x]/(x^n+1) = \left\{a_{n-1}x^{n-1}+\cdots+a_1x+a_0:a_i\in\mathbb{Z}_m\right\}$,
does that imply that $\mathbb{Z}_m[x]/(x^n+1) = \mathbb{Z}_m[x]/(x^n-1)$, as any negative coefficient in the elements of $\mathbb{Z}_m[x]/(x^n-1)$ will be shifted to be between 0 and $m-1$?
Or is there any fundamental difference between $\mathbb{Z}_m[x]/(x^n+1)$ and $\mathbb{Z}_m[x]/(x^n-1)$ under the same $\mathbb{Z}_m$?
Thank you.
AI: I re-read your question a couple of times, and it seems to stem from your belief that $Z_m[x]/(x^n-1)$ should be equal (isomorphic) to $Z_m[x]/(x^n+1)$, which is not necessarily the case (although I can't come up with an instance where it is indeed the case). Anyway, take a look at this example; maybe it will shed some light on the topic. Take $Z_3[x]/(x^2+1)$ and $Z_3[x]/(x^2-1)$. We have that $(x^2+1)$ is irreducible, since $x^2 \equiv 0$ or $1$ and so $x^2+1 \equiv 1$ or $2$, but $(x^2-1)=(x-1)(x+1)$. Hence, $Z_3[x]/(x^2+1)$ is a field but $Z_3[x]/(x^2-1)$ is not.
|
H: A closed discrete set
Let $V$ be a normed vector space.
Let $(b_n)\subseteq V, b_n \to b\in V.$ Show that $B := \{b,b_1,b_2\dots\}$ is closed.
I know that if $b_n\to b,$ then $b_n$ is Cauchy. That is, $\forall \epsilon > 0, \exists N\in\mathbb{N}, n,m\geq N\Rightarrow ||b_n-b_m|| < \epsilon.$ Also, if $(x_n)\subseteq B, x_n \to x$, then if $x\not\in B, ||x_n-x|| > 0\,\forall n.$ But how can I use the fact that $b_n$ is convergent to show that $B$ is closed? I think I can use the convergence of $(b_n)$ to show that $\exists \epsilon_0 > 0$ such that $||x_{n_k}-x|| \geq \epsilon_0\,\forall k$, where $(x_{n_k})$ is a subsequence of $(x_n).$ Also, it may be easier to show that $V\backslash B$ is open.
AI: Suppose that $x\notin B$. Then in particular $x\ne b$, so let $\epsilon=\frac12\|b-x\|$. There is an $n_0\in\Bbb N$ such that $b_n\in B_\epsilon(b)$ for all $n\ge n_0$, and $B_\epsilon(b)\cap B_\epsilon(x)=\varnothing$, so $b_n\notin B_\epsilon(x)$ when $n\ge n_0$. You now have a nbhd of $x$ that excludes all but finitely many points of $B$. Let $\delta=\min\{\|x-b_k\|:k<n_0\}$; can you see how to use $\delta$ to get a nbhd of $x$ that is disjoint from $B$?
|
H: Olympiad Number theory Problem Solution Doubt
[Iberoamerican $1998]$ Let $\lambda$ be the positive root of the equation $t^{2}-1998 t-1=0 .$ Define the sequence $x_{0}, x_{1}, \ldots$ by setting
$
x_{0}=1, \quad x_{n+1}=\left\lfloor\lambda x_{n}\right\rfloor \quad(n \geq 0)
$
Find the remainder when $x_{1998}$ is divided by 1998
$1998<\lambda=\frac{1998+\sqrt{1998^{2}+4}}{2}=999+\sqrt{999^{2}+1}<1999$
$x_{1}=1998$, $x_{2}=1998^{2}$, since $\lambda^{2}-1998\lambda-1=0$,
$$\lambda=1998+\frac{1}{\lambda} \quad \text{and} \quad x\lambda=1998x+\frac{x}{\lambda}$$
for all real numbers $x$.
Since $x_{n}=\left\lfloor x_{n-1} \lambda\right\rfloor$, we have
$
x_{n}<x_{n-1} \lambda<x_{n}+1, \quad \text { or } \quad \frac{x_{n}}{\lambda}<x_{n-1}<\frac{x_{n}+1}{\lambda}
$
Since $\lambda>1998$, $\left\lfloor\frac{x_{n}}{\lambda}\right\rfloor=x_{n-1}-1.$
How did they find $\left\lfloor\frac{x_{n}}{\lambda}\right\rfloor=x_{n-1}-1$?
AI: From $$\frac{x_{n}}{\lambda}<x_{n-1}<\frac{x_{n}+1}{\lambda}$$ we have$$
\left\lfloor\frac{x_{n}}{\lambda}\right\rfloor<x_{n-1}\leq \left\lfloor\frac{x_{n}}{\lambda}+\frac1\lambda\right\rfloor\leq\left\lfloor\frac{x_{n}}{\lambda}\right\rfloor+1$$
The first inequality is strict because $x_{n-1}$ is an integer.
|
H: Dependence of two random variables
In a Bernoulli experiment of parameter $p$ let $T$ be the instant of first success and $U$ the instant of second success. Find the density of $U$ and tell if $T$ and $U$ are independent or not.
This is what I've done: since $U$ is the instant of second success we have that:
$$p_U(k)=\mathbb{P}[U=k]=p^2(1-p)^{k-2}$$
since it is not relevant where the instant of first success is, because we know that in every case we have $2$ successes and $k-2$ failures. At the same time, we have that:
$$p_{T,U}(m,n)=\mathbb{P}[T=m,\ U=n]=p(1-p)^{m-1}(1-p)^{n-m-1}p=p^2(1-p)^{n-2}=p_U(n)\neq p_T(m)p_U(n)$$
since $T \sim$ Geo$(p)$. So they are not independent. Is this correct or not?
AI: For the density, you forgot an additional factor $k-1$ for all the possible positions for the first success. Look up the negative binomial distribution.
For the dependence, your proof works. Here is an alternative: note that we must always have $T<U$. This means they must be dependent, because $$P(U=n, T=m)=0\neq P(U=n)P(T=m)$$
if $n<m$.
|
H: Definition of derivative as a linear operator being applied to a vector
I have been told that, given a differentiable function $f: \mathbb{R}^n\longrightarrow \mathbb{R}$, we can view $f'(x)$ as a linear operator from $\mathbb{R}^n$ to $\mathbb{R}$ for any $x$, which makes sense because it is a vector, and thus a linear operator. So $f'(x)[y] = \nabla f(x)^Ty$, basically by definition. But later in class, we used $f'(x)[y] = \underset{h\rightarrow 0^+}{\text{lim}}\frac{f(x + hy) - f(x)}{h}$, which I don't understand. Specifically, why is $y$ showing up inside the limit? To me, $f'(x)[y]$ means first take the derivative of $f$ at $x$, and then apply the result to $y$. So $y$ shouldn't appear in the limit definition of the derivative of $f$ at $x$, and yet here it just looks like the $y$ and the limit have been fused together.
So it seems like the two values are supposed to be equivalent, so should I just be treating the above equation as the definition for $f'(x)[y]$? If so, is there an easy way to see that the two notions of $f'(x)[y]$ are equivalent, i.e. that $\nabla f(x)^Ty = \underset{h\rightarrow 0^+}{\text{lim}}\frac{f(x + hy) - f(x)}{h}$, where $\nabla f(x) = \underset{h\rightarrow 0^+}{\text{lim}}\frac{f(x + h) - f(x)}{h}$?
By the way, I don't think it really matters whether or not the limit is a one-sided or two-sided limit, it was just posed to me as a one-sided limit.
AI: Suppose that you define $f'(x)(y)$ as $\nabla f(x)^Ty$. Since$$\nabla f(x)=\left(\frac{\partial f}{\partial x_1}(x),\ldots,\frac{\partial f}{\partial x_n}(x)\right),$$then, if $\{e_1,\ldots,e_n\}$ is the standard basis of $\Bbb R^n$, we have, for each $k\in\{1,2,\ldots,n\}$, $f'(x)(e_k)=\nabla f(x)^Te_k$, since both numbers are equal to $\frac{\partial f}{\partial x_k}(x)$. But then, if $y\in\Bbb R^n$, $y$ can be written as $a_1e_1+a_2e_2+\cdots+a_ne_n$ and therefore, by linearity,$$f'(x)(y)=\nabla f(x)^Ty,$$since both numbers are equal to$$a_1\frac{\partial f}{\partial x_1}(x)+\cdots+a_n\frac{\partial f}{\partial x_n}(x).\tag1$$Now, note that$$\lim_{h\to0}\frac{f(x+he_k)-f(x)}h=\frac{\partial f}{\partial x_k}(x).$$So, again by linearity,$$\lim_{h\to0}\frac{f(x+hy)-f(x)}h=(1)=f'(x)(y).$$
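To see the two formulas agree on a concrete example, here is a small numerical check — a Python sketch assuming `numpy` is available; the sample $f$ is my own choice — showing the difference quotient approach $\nabla f(x)^Ty$ as $h\to0^+$:

```python
import numpy as np

f = lambda v: v[0] ** 2 + 3 * v[0] * v[1]             # sample f: R^2 -> R
grad_f = lambda v: np.array([2 * v[0] + 3 * v[1], 3 * v[0]])

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
for h in [1e-1, 1e-3, 1e-5]:
    print(h, (f(x + h * y) - f(x)) / h)               # difference quotients
print("gradient form:", grad_f(x) @ y)                # they converge to this (= 1.0)
```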
|
H: On the definition of compactness
Rudin, in Principles of Mathematical Analysis, defines compactness: A set in a metric space is compact if and only if for any open cover $\{G_\alpha\}$ of $E$ there exist a finite subcover $G_{\alpha_1},...,G_{\alpha_k}$ such that:
$E \subseteq G_{\alpha_1} \cup \cdots \cup G_{\alpha_k}.$
My question is about changing the word any for some, i think that would be valid, any suggestion?
AI: Consider the open cover $\{X\}$. This is a finite open cover of every subset of $X$. So your modified notion would apply to every set.
That said, there is a version of this notion which is nontrivial: what if we bound the sizes of the open sets involved? (Note that this is a fundamentally metric, as opposed to topological, idea.) Specifically, consider the following property:
For each $\epsilon>0$ there is a finite cover of $X$ consisting only of open sets with diameter $<\epsilon$.
Here the diameter of an open set $U$ is the supremum of the distances between elements of that set:$$diam(U)=\sup\{d(a,b): a,b\in U\}.$$
This is no longer trivial - e.g. $\mathbb{R}$ with the usual metric does not have this property. But this is still very different from compactness.
|
H: The image of a locally constant function is countable
Let $X\subset \mathbb{R}^m$. A function $f: X\to \mathbb{R}^n$ is said to be locally constant if, for every $x\in X$, there is $\epsilon_x>0$ such that $f(y)=c_x$ for all $y\in B_{\epsilon_x}(x)\cap X$. Show that if $f:X\to \mathbb{R}^n$ is locally constant, then $f(X)$ is countable.
My attempt:
I proved that $f(X)=\bigcup\limits_{x\in X}\{c_x\}$, but I don't know how can I deduce that $f(X)$ is countable from the equality above. So, I tried to define an injective function $\varphi: f(X)\to \mathbb{N}$, by putting $\varphi(f(x))=N_x$, where $N_x=\lfloor \frac{1}{\epsilon_x}\rfloor+1$, but It didn't work too. I will appreciate any help. Thanks.
AI: HINT: $\{B_{\epsilon_x}(x):x\in X\}$ is an open cover of $X$, and $\Bbb R^m$ is hereditarily Lindelöf, so it has a countable subcover.
|
H: Prove $\lim_{n \rightarrow \infty} f(x) f(2^2x) f(3^2x) \cdots f(n^2x) = 0$ for $f: \mathbb{R} \rightarrow \mathbb{R}$ in $L^1(\mathbb{R})$.
Here's another question that I'm stuck on from my studies for an upcoming exam. This one comes from another practice preliminary exam.
Problem
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a Lebesgue integrable function. Prove that for almost every $x \in \mathbb{R}$ that $$ \lim_{n \rightarrow \infty} f(x) f(2^2x) f(3^2x) \cdots f(n^2x) = 0$$
I.e. given that $\int_\mathbb{R} |f(x)| dx < \infty$, prove the above limit.
Please, correct me if I'm wrong, confirm my thoughts, or provide hints towards a solution. Thanks in advance!
My Partial Attempt
Break into cases on the support $S := \{ x \in \mathbb{R} \mid f(x) \neq 0\}$: either (1) $\mu(S) = 0$, (2) $0<\mu(S)<\infty$, or (3) $\mu(S) = \infty$.
(1) We have immediately that the integral $\int_\mathbb{R} |f(x)| dx = 0$ because the function is supported by a set of measure zero. Hence, the function is zero almost everywhere and we have our result.
(2) This is the tricky case for me. This is where I'm looking for guidance if my case-by-case method is in fact a method for solution. Otherwise, provide a hint or an alternative solution method.
(3) Help here too.
AI: For $j=1,2,\dots$ and $\epsilon>0$, define
$$A_j^\epsilon = \{ x: |f(j^2 x)|>\epsilon\}.$$
This is a measurable set. We know
$$ \epsilon m (A_j ^\epsilon) \le \int |f |(j^2 x ) dx = \frac{\int |f| (y) dy}{j^2}=\frac{c}{j^2}.$$
Therefore $m (A_j ^\epsilon) \le \frac{c}{j^2\epsilon}$.
Write $A_j = A_j^{1/\sqrt{j}}$. Then $m (A_j ) \le \frac{c}{j^{3/2}}.$
Now let $A=\limsup_{j} A_j = \{x: |f(j^2 x) |\ge \frac{1}{\sqrt{j}}\mbox{ for infinitely many }j\mbox{'s}\}$
It follows from the Borel-Cantelli lemma for general measure spaces that
$$ m (A)=0.$$
On the complement of $A$
$$|f(j^2 x)| \le \frac{1}{\sqrt{j}},$$
for all $j$ large enough (depending on $x$), and the result follows.
|
H: How many of these unit squares contain a portion of the circumference of the circle?
Question: Let $n$ be a positive integer. Consider a square $S$ of side $2n$ units with sides parallel to the coordinate axes. Divide $S$ into $4n^2$ unit squares by drawing $2n−1$ horizontal and $2n−1$ vertical lines one unit apart. A circle of diameter $2n−1$ is drawn with its centre at the intersection of the two diagonals of the square $S$. How many of these unit squares contain a portion of the circumference of the circle? Source
My answer:
It's obvious that we are basically talking about the incircle of the square. So as it's four quarters are symmetrical, if we find the number of squares lying on the circumference of just one quarter, multiplying it with four must give us our desired result.
Considering any one quarter, we observe that the circle touches the side of the square at the mid-point, so number of squares touching the circumference will be equal to the number of columns falling in half of the square, which is $4 (\frac{2n-1}2)= 4n-2$.
I find my solution doubtful; Am I correct?
Any alternate solutions will be much appreciated.
AI: For example, here is the case $n=10$, with squares containing a piece of circumference highlighted.
Notice that some columns contain one highlighted square in the
first quadrant, others two, three or four.
But note that each highlighted square in the first quadrant is obtained from the previous one by moving either one unit right or one unit down. And the total number of steps is ...
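If you want to experiment with the step-counting idea, here is a brute-force count — my own Python sketch, standard library only, where a square is counted when the circle separates the square's nearest point from its farthest corner, i.e. the arc crosses the square's interior:

```python
import itertools, math

def count_squares(n):
    c, r = n, (2 * n - 1) / 2            # centre (n, n), radius (2n - 1)/2
    count = 0
    for i, j in itertools.product(range(2 * n), repeat=2):
        far = max(math.hypot(i + a - c, j + b - c)
                  for a in (0, 1) for b in (0, 1))        # farthest corner
        near = math.hypot(max(i - c, 0.0, c - i - 1),
                          max(j - c, 0.0, c - j - 1))     # nearest point of square
        if near < r < far:               # circle passes through the interior
            count += 1
    return count

print([count_squares(n) for n in range(1, 8)])  # compare with your 4n - 2 guess
```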
|
H: Prove that if $ \lim_{x\to\infty}f\left(x\right)=L $ then $ \lim_{n\to\infty}\intop_{0}^{1}f\left(n\cdot x\right)dx=L $.
Let $f$ be an integrable function on every interval of the form $[0,M]$.
assume $ \lim_{x\to\infty}f\left(x\right)=L $ for some $ L\in \mathbb{R} $
and prove that
$ \lim_{n\to\infty}\intop_{0}^{1}f\left(n\cdot x\right)dx=L $.
I've managed to prove that
$ \intop_{0}^{1}f\left(n\cdot x\right)dx=\frac{1}{n}\intop_{0}^{n}f\left(x\right)dx $ .
Im not sure how to continue. Thanks in advance
AI: For $\epsilon >0$, take $N >0$ such that $\vert f(x) - L \vert < \epsilon$ for $x >N$
Then for $n >N$ $$\begin{aligned}\left\vert \frac{1}{n}\intop_{0}^{n}f(x)dx - L \right\vert &= \left\vert\frac{1}{n} \intop_0^N f(x) \ dx + \frac{1}{n} \intop_N^n \left(f(x)-L\right) \ dx + \frac{n-N}{n} L - L\right\vert \\
&\le \frac{1}{n} \intop_0^N \vert f(x) \vert \ dx + \epsilon \frac{n - N}{n} + \frac{N}{n} \vert L \vert
\end{aligned}$$
As $\lim\limits_{n \to \infty} \frac{1}{n} = 0$, and $\int_0^N \vert f(x) \vert \ dx$ is finite by hypothesis, you can bound the RHS of the inequality by $3 \epsilon$ for $n$ large enough which provides the desired conclusion.
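To see the convergence concretely, here is a small numerical illustration — a Python sketch assuming `numpy` and `scipy` are available; the test function $f=\arctan$ with $L=\pi/2$ is my own choice:

```python
import numpy as np
from scipy.integrate import quad

for n in [10, 100, 1000, 10000]:
    # integral of f(n x) over [0, 1], with f = arctan
    val, _ = quad(lambda x: np.arctan(n * x), 0, 1, limit=200)
    print(n, val)          # slowly approaches L = pi/2 ~ 1.5708
```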
|
H: Finding $\iint_W x^2y\,\mathrm{d}x\,\mathrm{d}y$ where $W$ is a rectangle
I'm learning double integrals and I'm trying to calculate the following integral:
$$\iint_{W} x^2y \,\mathrm{d}x\,\mathrm{d}y\,,$$ where $W$ is a rectangle given by points: $A=(0,1), B=(2,1), C=(2,2), D=(0,2)$.
Could you please help me calculate the following integral? I know how to calculate double integrals in general, I just don't know how to get the boundaries.
Thanks
AI: Considering that the endpoints of the rectangle are $(0, 1)$, $(2, 1),$ $(2, 2),$ and $(0, 2),$ the rectangle can be viewed as the set of points $\mathcal R = \{(x, y) \,|\, 0 \leq x \leq 2 \text{ and } 1 \leq y \leq 2\}.$ (For instance, you can click this link to view the four endpoints in Geogebra.) Consequently, the integral is given by $$\int_1^2 \int_0^2 x^2 y \, dx \, dy.$$ Can you finish up by computing this double integral?
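Once you have computed it by hand, you can check the value with `sympy` (assuming it is installed; note the print also reveals the answer, so finish your own computation first):

```python
import sympy as sp

x, y = sp.symbols('x y')
# iterated integral: x from 0 to 2, then y from 1 to 2
print(sp.integrate(x**2 * y, (x, 0, 2), (y, 1, 2)))   # -> 4
```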
|
H: If $\frac{z-\alpha}{z+\alpha},(\alpha \in R)$ is a purely imaginary number and $|z|=2$, can we find value of $\alpha$ geometrically?
If $\dfrac{z-\alpha}{z+\alpha},(\alpha \in R)$ is a purely imaginary number and $|z|=2$, then find value of $\alpha$.
Now I took $\dfrac{z-\alpha}{z+\alpha}=t$ and as t is purely imaginary, and use the fact that $t+ \bar{t}=0$ and obtained the answer $\alpha = \pm2$.
But I was wondering that if there is any way to think about the answer more directly using geometry of complex numbers given that $z$ lies on a circle centered at origin having radius $2$.
AI: For now, let's say $\alpha$ may be any complex number $z_1$, and let $(-\alpha)$ be another complex number $z_2$.
Consider a circular arc passing through $z_1$, $z_2$ and another complex number $z_o$.
From the property of circles, the angle $a$ between $(z_1-z_o)$ and $(z_2-z_o)$ will remain constant wherever $z_o$ moves on the arc.
We can write this as (using rotation theorem) :
$$\frac{z_1-z_o}{|z_1-z_o|} =\frac{z_2-z_o}{|z_2-z_o|} e^{ia}$$
$$\to \frac{z_1-z_o}{z_2-z_o} =\frac{|z_1-z_o|}{|z_2-z_o|} e^{ia}$$
Taking argument of both the sides:
$$\arg \left( \frac{z_1-z_o}{z_2-z_o} \right) = a$$
So we can draw an analogy here:
For any two fixed $z_1$ and $z_2$, if $$\arg \left( \frac{z_1-z_o}{z_2-z_o} \right) = a \ (\text{constant})$$ then the locus of $z_o$ will be an arc passing through $z_1$ and $z_2$.
Consider this:
$$\frac{z-\alpha}{z+\alpha} = bi$$
$b \in \mathbb{R}$ and $\alpha \in \mathbb{R}$
Since these two complex numbers are equal, their principal argument must also be equal.
$$\arg \left( \frac{z-\alpha}{z+\alpha} \right) = \arg (bi)$$
$$\arg \left( \frac{z-\alpha}{z+\alpha} \right) = \frac{\pi}{2}$$
Since $\alpha$ and $(-\alpha)$ are fixed complex numbers (on the real axis, since both are real), the locus of $z$ will be an arc as mentioned above.
Moreover, the arc will be a semicircle as the angle is $\frac{\pi}{2}$. (Another property of circles)
SO,
$z$ will have to lie on such a $\color{red}{semicircle}$.
As we increase $\alpha$, the radius of this $\color{red}{semicircle}$ will increase. (See Here for visualisation (vary the slider))
But we know that $z$ has to lie on the circle centred at $0$ and of radius $2$.
So the points where this circle meets the Real Axis have to be $(\pm2,0)$ and these endpoints of the semicircle are nothing but $(\pm \alpha,0)$
So,
$$\color{green}{\alpha = \pm 2}$$
NOTE:
The block-quoted matter which defines the locus of the above conditions can be used in similar questions, just take in mind:
The locus of $z_o$ will be :
$\bullet$ An arc if $a \in (0,\pi)$
$\bullet$ A line segment if $a = \pi$
$\bullet$ A pair of rays if $a=0$
|
H: Conditions for a Euclidean domain to be a field or a polynomial ring over a field
I am having trouble proving the following.
Let $R$ be a Euclidean domain with degree function $\delta,$ i.e., $\delta(ab)=\delta(a)\delta(b)$ for all $a,b\in R-\{0\}$ and $\delta(a+b)\leq\textrm{max}(\delta(a),\delta(b))$. Show that $R$ is a field or that $R=F[x],$ where $F$ is a field.
AI: I believe it is $\delta(xy)=\delta(x)+\delta(y)$.
Let $F=\{x:\delta(x)=0\}$. We will prove that $F$ is a field.
Let $x,y\in F$, then: $ \delta(xy)=\delta(x)+\delta(y)=0, \delta(x+y)\leq \text{max}(\delta(x),\delta(y))=0$, so $x+y,xy \in F$.
Given a nonzero $x\in F$, divide $1$ by $x$: $1=ax+r$ with $r=0$ or $\delta(r)<\delta(x)=0$; since degrees are nonnegative, $r=0$. Then $1=ax$ implies $a=a^2x$, so $\delta(a)=2\delta(a)+\delta(x)=2\delta(a)$, which forces $\delta(a)=0$, i.e. $a\in F$. Hence every nonzero element of $F$ is invertible in $F$.
If $F=R$, then R is a field, and we are done. Suppose now $F\neq R$, i.e. there exists $u:\delta(u)>0$.
Let $x\in R$, having the smallest positive degree amongst all numbers with positive degree, i.e. $\delta(x)>0$ and $\forall y\in R$, $\delta(y)>0 \Rightarrow \delta(x)\leq\delta(y)$, We will prove that $R=F[x]$.
Proof. By induction on $\delta(y)$: suppose the claim holds for all elements of degree at most $n$, and let $\delta(y)=n+1$. We can write $y=zx+b$ with $b=0$ or $\delta(b)<\delta(x)$; by minimality of $\delta(x)$ this forces $\delta(b)=0$, i.e. $b\in F$.
Then $\delta(zx)=\delta(y-b)\leq \max(\delta(y),\delta(b))=n+1$, and since $\delta(zx)=\delta(z)+\delta(x)$ with $\delta(x)>0$, we deduce $\delta(z)<n+1$. By the induction hypothesis $z\in F[x]$, and therefore $y=zx+b\in F[x]$.
|
H: Find the value of $n$ if $\frac{a^{n+1}+b^{n+1}}{a^n+b^n}=\frac{a+b}{2}$
Now, this question looks simple, it did to me too, at first, but I got stuck at a point and can't get out.
This is how I did it, take a look :
$$\dfrac{a^{n+1}+b^{n+1}}{a^n+b^n}=\dfrac{a+b}{2}$$
By cross multiplication, we get :
$$2a^{n+1}+2b^{n+1}=(a+b)(a^n+b^n) = a^{n+1}+b^{n+1}+ab^n+a^nb$$
Transposing the first two terms of RHS to LHS, we obtain :
$$a^{n+1}+b^{n+1}=ab^n+a^nb$$
Now, what I did the first time I attempted this question was that I transposed $a^nb$ to the LHS and $b^{n+1}$ to the RHS but my friend suggested that we could also transpose $ab^n$ to LHS and $b^{n+1}$ to RHS and obtain different results. I suggested that we look at some constraints and arrive at condition based answers. Here's how I proceeded :
$1^{st}$ method :
$$a^{n+1}-a^nb=ab^n-b^{n+1}$$
$$\implies a^n(a-b)=b^n(a-b)$$
Now, instead of just cancelling out $a-b$, I thought of putting a condition that would enable the cancellation to be possible. That condition is that $a-b$ should not be equal to $0$, so $a \neq b$
Now, on assuming that $a \neq b$, we get :
$$a^n=b^n$$
Now, there are two cases when this is possible, one, when $n=0$, so $a^n=b^n=1$ and other, when $a=b$, but, we have already assumed that $a \neq b$ to arrive at this result, which means that the case that suggests that $n = 0$ is true. So, the $1^{st}$ method gives us the conclusion that $a \neq b \implies n = 0$
Here's the $2^{nd}$ method :
$$a^{n+1}-ab^n=a^nb-b^{n+1}$$
This time, we take $a$ and $b$ common on the LHS and the RHS respectively to obtain :
$$a(a^n-b^n)=b(a^n-b^n)$$
Now, we can cancel out $(a^n-b^n)$ from both LHS and RHS if $a^n-b^n \neq 0 \implies a^n \neq b^n$
Now, this can be true only if $a \neq b$ and $n \neq 0$, because if either of these conditions fails, then $a^n$ will be equal to $b^n$. So, we assume that $a \neq b$ and $n \neq 0$ and cancel out $a^n-b^n$ from both LHS and RHS to obtain :
$$a=b$$
This is the part that confuses me. We assume that $a \neq b$ to arrive at a conclusion that $a = b$, is it possible? Do outcomes like this appear frequently (this is the first time I have encountered something like this)? Did I make some mistake? How do I get out of this?
In my opinion (which is most probably wrong), the second method gives us no useful outcome and tells us that $a$ can not be equal to $b$ which is almost surely wrong because I don't see any restriction that would show that $a \neq b$.
I do think that a better approach would be to take two cases : $a \neq b$ and $a = b$ and then expand them and then combine the outcomes. But I'd like to know what's wrong with this approach and how do I correct it?
Thanks!
AI: You have cancelled $(a-b)$ both sides assuming that $a\neq b$. What you have missed out is that $a=b$ is also a solution to the first method of solving. (Because then $0=0$)
Here's some more explanation:
Let's have a look where you've ended up after the first method:
$$a^n(a-b)=b^n(a-b)$$
Now if we have to cancel out $(a-b)$ on both sides, we must assume that $a\neq b$. This ends us up with:
$$a^n=b^n$$
Now $n$ cannot be $1$ because of the assumption we made to arrive here. Hence $n$ must be $0$
Now let's go back to the point before we cancelled $(a-b)$. Note that if $a=b$, the equality is respected:
$$0=0$$
Hence, $a=b$ is another solution to this. From here, if we consider it in the form $a^n=b^n$, we get $n=1$.
Let's walk over to method $2$ (again, just before the cancellation):
$$a(a^n-b^n)=b(a^n-b^n)$$
Again, assuming $a^n\neq b^n$, we end up in what we already found earlier:
$$a=b$$
Now looking at it in the eyes of : $a^n=b^n$, $n$ cannot be $0$ because of our underlying assumption that $a^n\neq b^n$. Hence $n$ should be $1$
And for the last time, if we don't cancel, but simply observe, $a^n=b^n$ is also a solution, leading us to $n=0$.
Thus, both the methods yield the same result. (The beauty of Mathematics)
P.S : You can place the values $0$ and $1$ for $n$ in the question and see that everything checks out like it should.
P.P.S : I have assumed that : $a=b \implies n=1$ and $a^n=b^n\implies n=0$ . You can do it the other way round too.
|
H: How can I prove that a polynomial is irreducible over $\mathbb Z[x]$?
I am asked to prove that
$$P(x) = x^6 + x + 1$$ is irreducible over $\mathbb Z[x]$.
I tried using Eisenstein criteria by a doing a change of variable such as $x = y + a$ but I was unsuccessful.
AI: $x^6+x+1$ has no real roots, so it can't have any factors of odd degree. The only possibility for factoring is as the product of a quadratic and a quartic. The quadratic would have to be of the form $x^2 + a x + 1$ or $x^2 + a x - 1$ where $a$ is an integer.
The second form is out because it would have a real root. The first would have a real root if $|a|\ge 2$. That leaves only three possibilities, which are easy to check.
|
H: Uniform Convergence of a series of functions using the Dirichlet's test
I have recently been trying out some questions on series of functions. I got stuck in one of those problems in which I am supposed to show that the below series of functions is uniformly convergent on any bounded interval.
The series is given by:
$$\sum_{1}^\infty (-1)^n\frac{x^2+n}{n^2}$$
I tried using the Dirichlet's test over here by letting $a_n(x)=(-1)^n$ and $b_n(x)=\frac{x^2+n}{n^2}$ but what I am unable to prove here is that $b_n(x)$ is monotonic and uniformly converging to $0$ for all $x$ in a bounded interval.
Please help!
AI: Choose $m\in\Bbb Z^+$ large enough so that your bounded interval is contained in $[-m,m]$. Then show that $\langle b_n:n>m\rangle$ is monotonic and converges uniformly to $0$ on $[-m,m]$. Dirichlet’s test then allows you to conclude that $$\sum_\limits{n>m}(-1)^n\frac{x^2+n}{n^2}$$ converges, which is good enough. It may be helpful to rewrite $b_n$ as
$$b_n=\left(\frac{x}n\right)^2+\frac1n\;.$$
|
H: Induced homomorphism example
Give an example of a commutative ring $R$, $R$ -modules $M,N,$ and$W$, and an injective $R$ module homomorphism $g:M \rightarrow N$ such that the induced homomorphism $Hom_{R}(N,W) \rightarrow Hom_{R}(M,W)$ is not surjective.
I'm just learning the basics of modules and I'm having trouble coming up with a good example here. I feel like the answer should be fairly easy. Any hints?
AI: Take $g: \mathbb Z \rightarrow \mathbb Z/2 \mathbb Z$ and $W = \mathbb Z$, all modules are over $\mathbb Z$ i.e they are abelian groups. Try and compute what $\text{Hom}(\mathbb Z/2 \mathbb Z, \mathbb Z)$ and $\text{Hom}(\mathbb Z, \mathbb Z)$ are.
|
H: How do I determine the area that is inside the circle r=3cos(θ) and outside the r=3sin(2θ) curve (For the first quadrant)
I have this graph here, but how do I find out the area that's inside the circle but outside the 3sin2θ rose shaped curve? I'm lost, please help me solve this
AI: The area of a region in polar coordinates defined by the equation $r=f(θ)$ with $\alpha\leq\theta\leq\beta$ is given by:$$\int_{\alpha}^{\beta} \frac{1}{2}{r}^2 \; d\theta$$
The area of a region between two curves in polar coordinates is just the difference in area between the two curves in that region. First, find $\alpha$ and $\beta$, which is where $r$ is equal for the two curves:
$$3\cos{\theta}=3\sin{2\theta}=6\sin{\theta}\cos{\theta} \implies \cos{\theta}=0\ \text{ or }\ \sin{\theta}=\tfrac{1}{2},$$
so in the first quadrant the curves meet at $\theta=\frac{\pi}{6}$ and $\theta=\frac{\pi}{2}$.
Note that you're looking for the area only in the first quadrant. Try finishing the rest given what I have provided here.
|
H: Proving that a quantity is an integer
Suppose $gcd(a,b)=1$ and $a,b\geq 1$. How does one prove that $\dfrac{a+b\choose b}{a+b}$ is an integer?
I tried substituting for $a$ and $b$ using $ax+by=1$ for some $x,y$, and also tried expanding, but I can't figure it out. Any help is appreciated, thanks!
AI: HINT: $a$ and $a+b$ are relatively prime, and $$a\binom{a+b}a=(a+b)\binom{a+b-1}{a-1}\;.$$
|
H: Derivative has Linear Growth Implies Lipshitz
Let $f\in C^\infty(\mathbb{R}^d)$. If $f$ has linear growth i.e
$$|\nabla f(x)|\leq C(|x|+1)$$
then is $f$ Lipshitz?
attempt at proof :
by Mean Value Theorem there exists $c\in (0,1)$ such that
\begin{align*}
|f(x)- f(y)| \leq |\nabla f((1-c)x+cy)||x-y|\leq & C(|x-y|+1)|x-y|
\\
\leq& C(|x-y|^2+|x-y|)
\\
\leq& C(2|x-y|^2+1).
\end{align*}
AI: Try $f(x) = x^2$ on $\mathbf R^1$. Then $|\nabla f(x)| = |f'(x)| = 2|x| \le 2(|x| + 1)$ for all $x$, yet $f$ is not Lipschitz.
|
H: GIF of the sum $\sum_{i=1}^{1000}\frac{1}{i^{2/3}}$
I am asked to find the GIF (greatest integer function) of the sum:$$\sum_{i=1}^{1000}\frac{1}{i^{2/3}}$$
I am able to find the lower limit of the sum by using the fact that
$$\sum_{i=1}^{1000}\frac{1}{i^{2/3}}$$ is greater than
$$\int_{1}^{1000}x^{-2/3}dx$$
But I am unable to find the upper limit which will help me find the GIF. Any help is appreciated.
Edit : I realized I should add this, the answer is 27. So I basically have to prove that the given sum lies between 27 and 28, of which I have proved 27 is the lower limit.
AI: A strict lower limit is
$$\int_1^{1001}x^{-2/3}dx = 3(\sqrt[3]{1001} - 1) > 3(\sqrt[3]{1000} - 1) = 27.$$
A strict upper limit is
$$1 + \int_1^{1000}x^{-2/3}dx = 1 + 3(\sqrt[3]{1000} - 1) = 28.$$
So, the answer is $\boxed{27}$.
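Summing directly confirms the bracketing (plain Python, no libraries needed):

```python
s = sum(i ** (-2 / 3) for i in range(1, 1001))
print(s)        # ~ 27.5, strictly between 27 and 28
print(int(s))   # greatest integer: 27
```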
|
H: Showing two UFD elements are relatively prime without Bézout
I am struggling to prove a result about greatest common divisors that will lead eventually to Gauss's Lemma.
In particular, we let $R$ be a UFD with $\alpha,\beta \in R$. Define $\delta=\textrm{gcd}(\alpha,\beta)$, which exists because $R$ is a UFD. My book claims that there exist relatively prime elements $\alpha',\beta'\in R$ such that $\alpha=\delta\alpha'$ and $\beta = \delta \beta'$.
My problem is that I cannot show that $\textrm{gcd}(\alpha',\beta')=1$. If $R$ were a PID, this would be easy with the help of Bézout's identity. By Bézout $s\alpha +t\beta = \delta$ for some $s,t\in R$. Substitute $\alpha=\delta\alpha'$ and $\beta = \delta \beta'$ to write $s(\delta\alpha')+t(\delta\beta')=\delta$. We then cancel and find $s\alpha' +t\beta' =1$. Hence, $\gcd(\alpha',\beta')=1$ by Bézout. However, $R$ is not necessarily a PID, so I cannot use Bézout's identity.
Is there a clever/obvious way to get around this problem? Thank you!
AI: So, we know that $R$ is ufd, and we don't know anything else. As such, it seems reasonable to write down the unique prime factorisations of $\alpha$ and $\beta$, and see what we can do with them. I've pared that down slightly, only explicitly writing down the primes that divide both $\alpha$ and $\beta$, but it's still the intuition behind the proof.
Let $p_1, \ldots, p_m$ be the distinct irreducible elements dividing both $\alpha$ and $\beta$. Then since $R$ is a ufd, we may write
$$
\alpha = p_1^{a_1}\ldots p_m^{a_m}x
$$
$$
\beta = p_1^{b_1}\ldots p_m^{b_m}y
$$
for $a_i, b_i \geq 1$ and $x, y$ not divisible by any of the $p_i$. Since the $p_i$ are the only irreducibles dividing both $\alpha$ and $\beta$, we have that $x$ and $y$ are coprime.
For each $i$, let $c_i = \min({a_i, b_i})$. Then $\delta = p_1^{c_1}\ldots p_m^{c_m}$ is the greatest common divisor of $\alpha, \beta$ (I'm not sure what definition you're using of gcd, but whatever it is, it shouldn't be too hard to check that this is true).
Let $\alpha' = p_1^{a_1 - c_1}\ldots p_m^{a_m - c_m}x$ and $\beta' = p_1^{b_1 - c_1}\ldots p_m^{b_m-c_m}y$. Note that $\alpha = \delta \alpha'$ and $\beta = \delta \beta'$.
Then the primes dividing $\alpha'$ are precisely the primes dividing $x$ and the $p_i$ with $a_i > b_i$. Similarly the primes dividing $\beta'$ are precisely the primes dividing $y$ and the $p_i$ with $a_i < b_i$. It is then clear that these two sets of primes are disjoint, so there is no prime that divides both $\alpha'$ and $\beta'$, and hence $\alpha'$ and $\beta'$ are coprime.
|
H: Taylor series at $a=0$
I am trying to find Taylor series representation $\sum_{n=0}^\infty a_n x^n$ for the function
\begin{equation*}
f(x)=
\begin{cases}
\frac{x-\sin{x}}{x^2}, & \text{ when } x \neq 0\\
0, & \text{ when } x = 0,
\end{cases}
\end{equation*}
where $a_n = \frac{f^n(0)}{n!}$.
I am confused by that when $x=0$, the function is also $0$. How to find Taylor series at $a=0$ for such a function? Thanks in advance for any help.
AI: Let $n>0$.
For $x\ne 0$,
$$\sin(x)=x+\sum_{k=1}^n\frac{(-1)^kx^{2k+1}}{(2k+1)!}+x^{2n+2}\epsilon(x)$$
thus
$$f(x)=\sum_{k=1}^n\frac{(-1)^{k+1}x^{2k-1}}{(2k+1)!}+x^{2n}\epsilon(x).$$
with $$\lim_{x\to 0,x\ne 0}\epsilon(x)=0$$
$f$ is continuous at $x=0$, therefore
$$(\forall x\in \Bbb R)\; f(x)=\sum_{k=1}^n\frac{(-1)^{k+1}x^{2k-1}}{(2k+1)!}+x^{2n}\epsilon(x)$$
with $\epsilon(0)=0.$
|
H: Show that $\sum_{1}^\infty\frac{\sin(nx)}{n^3}$ is differentiable everywhere
I have recently been trying out some questions on series of functions.In one of the questions, I was given a series $$\sum_{1}^\infty\frac{\sin(nx)}{n^3}$$
and now I am supposed to show that the above series is differentiable at every real number and I need to find its derivative.
I was wondering when a series of the form $\sum_{1}^\infty f_n(x)$ is said to be differentiable at $x$. Is it when each $f_n(x)$ is differentiable? If that is so, then I assume that I am done with the first half of the question.
In the second half,should I show that the given series is uniformly convergent by using the Weierstrass' M-Test and hence differentiate the series term by term?
Help please!
AI: The main theorem regarding differentiation of series of functions is provided at this link.
Here $f_n(x) = \frac{\sin nx }{n^3}$ is such that $\sum f_n^\prime(x) = \sum \frac{ \cos nx }{n^2}$ is normally convergent, therefore uniformly convergent by Weierstrass M-test as $\sum 1/n^2$ converges. As the series is also convergent at $0$, the theorem states that the series of functions is differentiable and has for derivative $\sum \frac{ \cos nx }{n^2}$.
|
H: Proof verification and explanation in probability
Six regimental ties and nine dot ties are hung on a tie holder.
Sergio takes two simultaneously and randomly. What is the probability that both ties are regimental?
I reasoned that, ignoring the order, the two ties drawn must be among the $6$ regimental ones and none of the other $9$; therefore if $E$ is the event, then:
$$\text{Pr}[E] = \frac{C_{6,2}\cdot C_{9,0}}{C_{15,2}} = 1/7$$
where
$$C_{n,k} = \frac{D_{n,k}}{P_k} = \frac{n!}{k!(n-k)!} = {n \choose k}$$
For some it is more immediate to calculate the result as: $6/15 \cdot 5/14 = 30/210 = 1/7.$
Could I please have a clear step-by-step explanation?
AI: Even though he takes the ties simultaneously, we can still calculate it as if he takes one at a time randomly without replacing the first tie.
At the beginning, there are $15$ ties, and $6$ of them are regimental. So the probability of taking a regimental tie is $\frac{6}{15}$.
For the second tie, there are only $14$ left (since he already took $1$), and if he took a regimental tie on the first pick, there are $5$ regimental ties left. So the probability of taking a regimental tie as the second tie is $\frac{5}{14}$.
To find the probability of both these events happening, just multiply the individual probabilities.
$$\frac{6}{15}\times\frac{5}{14}=\frac{1}{7}$$
So the final answer is $\frac{1}{7}$.
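Exhaustive enumeration over all $\binom{15}{2}$ unordered pairs gives the same value (a Python sketch using only the standard library):

```python
from itertools import combinations
from fractions import Fraction

ties = ['R'] * 6 + ['D'] * 9                    # 6 regimental, 9 dot ties
pairs = list(combinations(ties, 2))             # all C(15,2) = 105 unordered pairs
good = sum(pair == ('R', 'R') for pair in pairs)
print(good, len(pairs), Fraction(good, len(pairs)))   # 15 105 1/7
```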
|
H: Classifying certain types of matrices
How many similarity classes of nilpotent $4 \times 4$ matrices over $\mathbb{C}$ are there?
I suspect the answer is connected to minimal polynomials, but I'm not sure. Any suggestions?
AI: An $n \times n$ matrix is nilpotent if and only if it has no eigenvalues except $0$, i.e. its characteristic polynomial is
$\lambda^n$. It is similar to its Jordan canonical form, which then consists of blocks with diagonal $0$. So the similarity classes in the $4 \times 4$ case correspond to the possible Jordan forms, with blocks of the following sizes:
$$\begin{matrix}1,1,1,1\\ 2,1,1\\ 2,2\\ 3,1\\ 4\end{matrix}$$
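Equivalently, the number of classes is the number of partitions of $4$; a short recursive count (my own sketch in Python) cross-checks that there are five:

```python
def partitions(n, max_part=None):
    """Count partitions of n into parts of size at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

print(partitions(4))   # 5 similarity classes of nilpotent 4x4 complex matrices
```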
|
H: Evaluate $\int \cos^2(x)\tan^3(x) dx$ using trigonometric substitution
How would I integrate to evaluate $\int \cos^2(x)\tan^3(x) dx$ using trigonometric substitution?
I made an attempt by making substitutions such as $$\cos^2(x)=1-\sin^2(x)$$
$$\tan(x) = \frac{\sin(x)}{\cos(x)}$$ and $$\tan^2(x)=\sec^2(x)-1$$
But I couldn't find a way to make it look like an integral I could solve using a $u$ substitution or identity.
Could I get some help on this one?
AI: $$\int \cos^2x\tan^3x\ dx=\int \frac{\sin^3x}{\cos x}\ dx=\int \frac{(1-\cos^2x)\sin x}{\cos x}\ dx$$
Let $\cos x=t\implies -\sin x\ dx=dt$
$$=\int \frac{(t^2-1)dt}{t}$$
$$=\int \left(t-\frac{1}{t}\right)dt$$
|
H: Removing Gibbs Phenomenon
I am working with a sample of 20 points given from an unknown 1-periodic function that are plotted like this: Original sample
I am using Inverse Fast Fourier Transform (ifft) to recover the signal resampled in 1000 points at [0,1) that is plotted like this: Resampled
It is showing a Gibbs Phenomenon at the end of the signal. What can be causing this fact? As far as I know Gibbs Phenomenon occurs near a jump discontinuity...
Any idea about why is this happening and how could I solve it?
AI: You could use the Fourier cosine series. This amounts to extending the function to be even on $[-1,1]$ and then making it periodic with period $2$. That function will be continuous.
|
H: Uniform convergence and integrals.
I'm asked to tell if the following integral is finite:
$$\int_0^1 \left(\sum_{n=1}^{\infty}\sin\left(\frac{1}{n}\right)x^n \right)dx$$
I studied the series (which converges uniformly on $(-1,1)$ by d'Alembert's Criterion and in $-1$ by Leibniz's Criterion, so in general the convergence is uniform in $[-1,1)$).
In $1$ we have that the series goes like $\frac{1}{n}$ and so diverges.
Can I exchange integral and sum if the convergence is not uniform in $1$? I'd say yes because I can write $\int_0^1$ as $\lim_{\epsilon \to 1} \int_0^{\epsilon}$ but I'd like a confirmation.
AI: We know that $f_{n}(x)= \sin \Big( \frac{1}{n} \Big) x^{n}$ are Lebesgue integrable on $(0,1)$ and positive. Since $$ \sum_{n=1}^{ \infty} \int_{0}^{1} \sin \Big( \frac{1}{n} \Big) x^{n} dx = \sum_{n=1}^{ \infty} \sin \Big( \frac{1}{n} \Big) \frac{1}{n+1}, $$ which converges, because $$ \sin \Big( \frac{1}{n} \Big) \leq \frac{1}{n} \Rightarrow \sum_{n=1}^{ \infty} \sin \Big( \frac{1}{n} \Big) \frac{1}{n+1} \leq \sum_{n=1}^{ \infty} \frac{1}{n} \frac{1}{n+1} < \infty $$ we then know by Levi's Theorem for Series of Lebesgue Integrable Functions that $$ \int_{0}^{1} \sum_{n=1}^{ \infty} \sin \Big( \frac{1}{n} \Big) x^{n} dx = \sum_{n=1}^{ \infty} \int_{0}^{1} \sin \Big( \frac{1}{n} \Big) x^{n} dx$$
|
H: Clarify about local contractibility of quotient spaces
Consider these couple of spaces: the first is $A:=\{\frac 1 n :n \in \mathbb N\}\subset\mathbb R $; the other is $B:=[0,3)\subset \mathbb C$. I must describe the topology induced by the projections $h:\mathbb R\to \mathbb R/A$ and $k:\mathbb C\to \mathbb C/B$. In particular, I must say if the quotient spaces are locally contractible.
In the case of $\mathbb C/B$ we have that the open sets are all the sets corresponding to the projection of an open $D$ of $\mathbb C$, with the condition that either $D\cap B =\emptyset$ or $D\cap B =B$. Every point of $\mathbb C/B$ different from $k (3)$ and $k ([0,3))$ has an open neighbourhood trivially contractible, since the projection here is an homeomorphism. For the points $k (3)$ and $k (B)$, it suffices to take a open set $D'$ in $\mathbb C$ containing $[0,3]$; any deformation retraction of $D'$ to $B$ induces a contraction of $k (D')$ to $k (B)$, so every point has a contractible neighbourhood.
I have a problem in the other case however: consider, for every natural $n$, two disjoint intervals, one containing $\frac 1 n $ and the other containing $\frac 1 {n+1} $. Now, the countable union of all these intervals is open, and is contractible to $A$ (i.e., there is a contractible neighbourhood of $h (A) $ in the quotient space). This reasoning should be wrong (the solution says that this space isn't locally contractible) but I don't see why. Thans in advance for any help.
AI: For convenience I’ll identify $x$ and $h(x)$ for $x\in\Bbb R\setminus A$ and let $a=h(A)$. It isn’t the point $a$ that’s the problem: it’s $0$. Any nbhd of $0$ in $\Bbb R\setminus A$ must contain
$$(-\epsilon,0]\cup\{a\}\cup\bigcup_{n\ge m}\left(\frac1{n+1},\frac1n\right)\cup\bigcup_{1\le n<m}\left(\frac1n-\epsilon,\frac1n+\epsilon\right)$$
for some $m\in\Bbb Z^+$ and $\epsilon>0$, and any $\{a\}\cup\left(\frac1{n+1},\frac1n\right)$ is a copy of $S^1$.
|
H: Find $( \dotsb ((2017 \diamond 2016) \diamond 2015) \diamond \dotsb \diamond 2) \diamond 1$ given ...
For positive real numbers $a$ and $b,$ let
$$a \diamond b = \frac{\sqrt{a^2 + 4ab + b^2 - 2a - 2b + 9}}{ab + 6}.$$Find
$$( \dotsb ((2017 \diamond 2016) \diamond 2015) \diamond \dotsb \diamond 2) \diamond 1.$$
I can't find any quick way to do this. Can anyone help? Thanks in advance!
AI: Notice that for every positive number $a$, we have
$$a \diamond 2 = \frac{\sqrt{a^2 + 8a + 4 - 2a - 4 + 9}}{2a+6} = \frac{\sqrt{a^2+6a+9}}{2a+6} = \frac12$$
The mess at hand equals to
$$( (\cdots) \diamond 2) \diamond 1 = \frac12 \diamond 1 = \frac{\sqrt{37}}{13}$$
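Folding the whole expression numerically confirms the shortcut (a plain Python sketch; floating point, so expect agreement to several decimals rather than exact equality):

```python
import math

def diamond(a, b):
    return math.sqrt(a*a + 4*a*b + b*b - 2*a - 2*b + 9) / (a*b + 6)

val = 2017.0
for b in range(2016, 0, -1):       # ((2017 <> 2016) <> 2015) <> ... <> 1
    val = diamond(val, b)
print(val, math.sqrt(37) / 13)     # both ~ 0.4679...
```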
|
H: The probability of getting $y$ new coupons from a batch of $k$
In the coupon collector process, the goal is to assemble a collection of $n$ distinct coupons, while we get a random coupon at each time.
I am looking at a generalization of this problem, where at each time we get a batch of $k$ random coupons (with repetitions) at once, for some $k\in\mathbb N^+$.
Assume that we have already collected $N\le n$ distinct coupons and let $y\in\mathbb N$, what is the probability that we get $y$ new and distinct coupons in the next batch (i.e., we will have $N+y$ distinct coupons after that batch)?
Does this has a close-form formula?
For example, if $n=10, k=3$ and we have so far collected $N=6$ coupons, the probability of collecting another (exactly) $y=1$ coupon is
$$
(4/10)\cdot(7/10)^2 + (6/10)\cdot(4/10)\cdot (7/10) + (6/10)^2\cdot (4/10) = 0.508.$$
Here, I looked at it as if the samples in the batch were ordered 1, 2, 3, and the summands correspond to getting the new coupon at the first/second/third sample.
This approach doesn't seem to allow computation over a reasonable number of arguments.
Is there a better way to evaluate it?
AI: The probability that all coupons you draw are among the $N$ coupons already seen and $y$ particular new coupons is $\left(\frac{N+y}n\right)^k$. Conditional on this, the probability that all of the $y$ particular new coupons are drawn is, by inclusion–exclusion,
$$
\sum_{j=0}^y(-1)^j\binom yj\left(1-\frac j{N+y}\right)^k\;.
$$
Thus, the probability to draw exactly $y$ particular new coupons is
$$
\sum_{j=0}^y(-1)^j\binom yj\left(\frac{N+y-j}n\right)^k\;.
$$
There are $\binom{n-N}y$ ways to select these $y$ new coupons, so the probability to draw exactly $y$ new coupons is
$$
\binom{n-N}y\sum_{j=0}^y(-1)^j\binom yj\left(\frac{N+y-j}n\right)^k\;.
$$
In your example, this is
$$
\binom{10-6}1\sum_{j=0}^1(-1)^j\binom 1j\left(\frac{6+1-j}{10}\right)^3=4\left(\left(\frac7{10}\right)^3-\left(\frac6{10}\right)^3\right)=0.508\;,
$$
in agreement with your result.
You may also be interested in Coupon Collector Problem with Batched Selections and Coupon Collector's problem, version with multiple coupons in a box, which are about the same modification of the coupon collector’s problem.
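If you want to sanity-check the final formula, here is a Monte Carlo comparison on your example $n=10$, $k=3$, $N=6$, $y=1$ (a Python sketch, standard library only):

```python
import random
from math import comb

def prob_formula(n, k, N, y):
    return comb(n - N, y) * sum((-1) ** j * comb(y, j) * ((N + y - j) / n) ** k
                                for j in range(y + 1))

def prob_mc(n, k, N, y, trials=200_000):
    hits = 0
    for _ in range(trials):
        batch = {random.randrange(n) for _ in range(k)}   # one batch of k draws
        new = sum(1 for c in batch if c >= N)             # coupons N..n-1 are unseen
        hits += (new == y)
    return hits / trials

print(prob_formula(10, 3, 6, 1))   # 0.508 exactly
print(prob_mc(10, 3, 6, 1))        # ~ 0.508
```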
|
H: Show that the series $\sum_{n=1}^\infty \sin \left( \frac{x}{n^2} \right)$ does not converge uniformly
I asked this question about a week ago but I am little bit unsure about the way to solve it so I hope it is ok if I ask again about some things I do not fully understand.
I have to show that the series
$$
S = \sum_{n=1}^\infty \sin \left( \frac{x}{n^2} \right)
$$
does not converge uniformly on $\mathbb{R}$ which can be shown by showing that
$
\sin \left( \frac{x}{n^2} \right)
$ fails to converge uniformly towards $0$ when $n$ tends to $\infty$. Is this because of contraposition? I know that if $\sum_{n=1}^\infty a_n$ converges uniformly then $a_n$ converges uniformly towards $0$.
Furthermore, by negation we have that $\sin \left( \frac{x}{n^2} \right)$ does not converge uniformly towards $0$ when $n$ tends to $\infty$ if
$$
\exists \epsilon > 0 \ \forall N \in \mathbb{N} \ \exists x \in \mathbb{R} \ \exists n \in \mathbb{N} \ : n \geq N \ \text{and} \left|\sin \left( \frac{x}{n^2} \right)\right| \geq \epsilon
$$
If I then pick $\epsilon = \frac{1}{2}$ and $x = \frac{\pi n^2}{3}$ I get the desired result but don't I also have to pick a specific $n \in \mathbb{N}$ so that this only works when $n \geq N$? Or is it simply enough to pick $\epsilon$ and $x$?
Thanks for your help.
AI: Concerning your first question: yes, it is by contraposition.
The proof that your sequence does not converge uniformly to $0$ is almost correct. Simply take $x=\dfrac{\pi N^2}3$ (instead of $x=\dfrac{\pi n^2}3$). Then, there is a $\varepsilon>0$ (namely, $\dfrac12$) such that, for every $N\in\Bbb N$, there is some natural $n\geqslant N$ (namely, $N$ itself) and some number $x$ such that $\left|\sin\left(\dfrac x{n^2}\right)\right|\geqslant\dfrac12$.
|
H: Moving a rocket between two points on a straight line, when to rotate from prograde to retrograde?
Imagine you have a rocket and you want to move it from point a to point b. The flight plan is as follows:
Fire the rocket engine for a constant acceleration of 1 m/s^2 until x meters is covered (prograde burn)
Turn the rocket 180 degrees so that the rocket engine points retrograde. Note that while this happens the linear acceleration of the rocket is 0, but the velocity is still > 0: the rocket keeps moving.
Fire the rocket engine for a constant deceleration of 1 m/s^2 until the last x meters is covered (retrograde burn)
It follows logically that the burns in steps 1 and 3 cover the same distance, x.
Say that the distance between a and b is 1000 meters and rotating the rocket takes 5 seconds. How do I approach finding a solution for x?
Normally I would start by computing the integral of the acceleration so I can find the distance the rocket has covered after several seconds. But in this case I don't know the distance (or time) after which the rocket should stop firing its engine.
Please ignore other physical forces like gravity, drag, etc...
I've removed the image as it was indeed wrong and very confusing.
AI: The trick is to figure out how long it will take to accelerate that distance and then use that to calculate the distance itself.
Let's lay out the variables:
Uniform acceleration/deceleration $a$
Known rotation time $t_R$
Known total distance $d$
Unknown acceleration/deceleration time $\tau$
The distance traversed during both acceleration and deceleration is $\frac{1}{2}a\tau^2$. Moreover, after acceleration, the ship is traveling at a speed of $a\tau$, so the distance traversed during rotation is $a\tau t_R$.
Putting all of this together yields the quadratic equation
$$ d = a\tau^2 + a\tau t_R$$
which can be solved for $\tau$. Once $\tau$ is known, you can easily compute $\frac{1}{2}a\tau^2$.
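Plugging in the example numbers from the question ($a=1\,\mathrm{m/s^2}$, $t_R=5\,\mathrm s$, $d=1000\,\mathrm m$), a few lines of Python solve for $\tau$ and then $x$ (a sketch):

```python
import math

a, t_R, d = 1.0, 5.0, 1000.0
# a*tau^2 + a*t_R*tau - d = 0; keep the positive root
tau = (-a * t_R + math.sqrt((a * t_R) ** 2 + 4 * a * d)) / (2 * a)
x = 0.5 * a * tau ** 2                    # burn distance of each phase, ~ 427 m
print(tau, x, 2 * x + a * tau * t_R)      # the total reproduces d = 1000
```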
|
H: Is $H_m - H_n$ a surjection onto $\mathbb{Q}^+$?
I was wondering whether, for each rational $q$, we may always write
$$q = \sum_{k=a}^b \frac 1k$$
For some positive integers $a \leq b$. I get the feeling that this is not true (although an immediate consequence of $\mathbb{R}^+$ being Archimedean is that the set of such $q$ is dense). I'm sure there is some slick proof using Bertrand's postulate (as is typical with these problems) but I'm not seeing it. This post is partly a reference request, as I'm sure this has been touched on before in some article, and would like to see it.
AI: You cannot get $H_m-H_n=2$, or any integer $\ge2$.
In a sum $1/(n+1)+\cdots+1/m$ exactly one of the terms has minimal
$2$-adic valuation. This can only be zero if one has only one term.
Therefore there's a power of $2$ in the denominator, which cannot
be $2^0$ save in the cases $H_{n+1}-H_n=1/(n+1)$ where $n+1$ is odd.
|
H: Is $1.r=r.1=r$ for a Non Commutative Ring
I am new to ring theory.
As per one of the axioms of a ring $R$ we have $\forall$ $r \in R$ ,$\:$$\exists$ $1 \in R$ such that $$1.r=r.1=r \tag{1}$$
So this is definitely commutative property for $1$ and $r$
Is this scenario true even for Non commutative Ring?
Does it mean if $\forall$ $a,b \in R$ ,$a.b \ne b.a$ Then $R$ is Non Commutative with an exception of $(1)$?
AI: By definition of the multiplicative identity, it's indeed true $1 \cdot r = r \cdot 1 = r$ for any $r \in R$.
However, your second assertion is false. We say that $R$ is non-commutative if there exists $a,b \in R$ such that $ab \neq ba$, and not necessarily all. An example is the ring of $2 \times 2$ real matrices $\mathcal{M}_{2 \times 2}(\mathbb{R})$, which is not commutative, but diagonal matrices commute with each other.
|
H: What power must we check to find the order of an element?
I know that $2^{100} \equiv 1 \pmod {125}$ because $\phi(125)=100$. $125=5^3$ is also the perfect power of an odd prime, so it has at least one primitive root. So, it is reasonable to check if $2$ is a primitive root mod $125$.
To check this, it would suffice to find every divisor of $100$ as a power of $2$, but that would take a longer time than I believe to be necessary, because I read once that we only need to check the divisors $2^2\cdot 5$ and $2\cdot 5^2$ as powers of $2$. Indeed, neither of them are the orders of $2$ mod $125$, and I am also told that $2$ is a primitive root mod $125$. However, I don't quite understand why we only need to check $2^2\cdot 5$ and $2\cdot 5^2$.
On top of that question, how can we then generalize this way of checking to other mods?
AI: Let us say that $|G|=n$ and you want to check if $g \in G$ is a primitive root. Then what you should check is the following:
$$\text{is } \quad g^{n/q} \neq 1 \ \text{ in } G \quad \forall \quad \text{prime divisors } q \text{ of } n?$$
If this holds then $g$ is a primitive root, otherwise not.
Proof:
We will prove this by contradiction. Suppose $g^{n/q} \neq 1$ for all prime divisors $q$ of $n$, but $\text{ord}(g)=m <n$. The order of $g$ being $m$ tells us that $g^m=1$. Moreover, since the order of an element must divide the order of the group, $m \mid n$. In other words, there exists an integer $t$ such that $n=mt$. Since $m<n$, we have $t>1$. By the prime factorization theorem, $t$ must have at least one prime factor; call it $p$. Observe that since $p \mid t$ and $t \mid n$, $p$ is also a prime factor of $n$. Now consider
$$g^{n/p} = g^{mt/p}=\left(g^{m}\right)^{t/p}=1^{t/p}=1.$$
But this violates the condition that $g^{n/q} \neq 1$ for all prime divisors $q$ of $n$. Hence our assumption that $m<n$ is false. Therefore $m=n$ and hence $g$ is a generator of $G$.
So in your problem: all you need to do is check the following:
Is
$$2^{100/2}\equiv 2^{50} \equiv 1 \pmod{125} ?$$
Is
$$2^{100/5}\equiv 2^{20} \equiv 1 \pmod{125} ?$$
If the answer to both these is NO, then $2$ is primitive root.
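Both checks are one-liners with Python's built-in three-argument `pow`:

```python
print(pow(2, 50, 125))   # 124, not 1
print(pow(2, 20, 125))   # 76,  not 1  ->  2 is a primitive root mod 125
```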
|
H: Is this a well known property of modular arithmetic
Let:
$p_1, p_2$ be primes
$x > 0$ be an integer where $p_1 \nmid x$ and $p_2 \nmid x$
I am interested in understanding the conditions where:
$x - p_1 \equiv 0 \pmod {p_2}$
$x - p_2 \equiv 0 \pmod {p_1}$
It seems to me that this is only true when:
$$x - p_1 - p_2 \equiv 0 \pmod {p_1p_2}$$
I find this interesting because I can apply this to arguments about primes.
Here are some very simple examples:
Since $5\times 3 \nmid 56 - 5 - 3 = 48$, either $(56 - 5)$ or $(56 - 3)$ must be prime.
Since $7\times 3 \nmid 110 - 7 - 3 = 100$, either $(110-3)$ or $(110-7)$ must be prime [in this case, both are]
I suspect that either my reasoning is wrong or this property is well known.
When I read the article on modular arithmetic in Wikipedia, I am not seeing anything related to this property.
Most likely, this property can easily be derived from one of the properties mentioned there. If someone could explain the related property, that will help me know where to look to learn more. :-)
AI: Assuming $p_1$ and $p_2$ are distinct primes, you have a system of two linear congruences with coprime moduli. There exist integers $a$ and $b$ such that $ap_1+bp_2=1$.
The Chinese remainder theorem gives a constructive formula for a solution $x$, (unique modulo $p_1p_2$), as $x=ap_1^2+bp_2^2$. This can be rewritten as
$$
x=p_1(1-bp_2)+p_2(1-ap_1)
$$
from which it follows that $x\equiv p_1+p_2\pmod{p_1p_2}$.
Conversely, if $x\equiv p_1+p_2\pmod{p_1p_2}$, it is immediate that $x\equiv p_1\pmod{p_2}$, and $x\equiv p_2\pmod{p_1}$.
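A quick numeric check of both directions (plain Python; the input $x=23$ is my own extra example where the congruences do hold):

```python
def check(x, p1, p2):
    both = (x - p1) % p2 == 0 and (x - p2) % p1 == 0
    combined = (x - p1 - p2) % (p1 * p2) == 0
    return both, combined

print(check(56, 5, 3))   # (False, False): the two congruences fail together
print(check(23, 5, 3))   # (True, True):  23 = 5 + 3 + 15
```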
|
H: Finding all intersections of $f(x)= \sin(x)+1$ and $g(x)= \cos(x)$ on the interval $[0,4\pi]$
The question asks to find all the points where $f(x)= \sin(x)+1$ intersects with $g(x)= \cos(x)$ on the interval $[0,4\pi]$.
I started by setting both equations equal to each other resulting in the new equation:
$$\sin(x)+1 = \cos(x)$$
I thought that if I was somehow able to use trigonometric identities in order to make $\sin(x)$ and $\cos(x)$ end up multiplying to each other so that I don't get rid of any solutions and can solve more easily.
My process:
$$\sin(x)+1 = \cos(x)$$
$$(\sin(x)-\cos(x))^2= (-1)^2$$
$$\sin^2(x)-2\sin(x)\cos(x)+\cos^2(x)=1$$
$$\sin^2(x)+\cos^2(x)=1+2\sin(x)\cos(x) \qquad \text{(Pythagorean identity)}$$
$$1= 1+2\sin(x)\cos(x) \qquad \text{(subtract 1 from both sides)}$$
$$0= 2\sin(x)\cos(x)$$
This states that there is a solution anytime $\cos(x)$ or $\sin(x)$ equals zero. This would mean $x= 0,\pi/2,\pi,3\pi/2,2\pi,5\pi/2,3\pi,7\pi/2$, and $4\pi$.
But when I graphed this I got that the solutions are at: x=0,3π/2,2π,7π/2, and 4π. This is half of what I thought would be the solutions.
I then tried to solve it using logic.
I started by setting the equations equal to each other again, and then guessed and checked.
sin(x)+1 = cos(x)
I knew that for this to be true sin(x) would have to equal zero when cos(x) would have to equal one or sin(x) would have to equal negative one when cos(x) would have to equal zero.
With this in mind, I listed all the places:
sin(x) equals zero: 0,π, and 2π
cos(x) equals one: 0, 2π
Where they coincided I knew there was a solution. Here two solutions were 0 and 2π.
Then I did the same for when sin(x) equals negative one and cos(x) equals zero
sin(x) equals negative one: 3π/2
cos(x) equals zero: π/2, 3π/2
Here another solution was 3π/2.
Because the sin and cos graphs oscillate, I know that if I add 2π to each of these solutions I will get the rest of the solutions on [2π,4π].
However, when problems become more complicated I can't always rely on guessing and checking, so I was wondering how I could solve it algebraically, since I can't figure it out.
AI: When you square both sides you run the risk of introducing false solutions.
$(\cos x - \sin x) = 1$
Squaring both sides... $(\cos x - \sin x)^2 = 1$ will now also admit a "solution" whenever $(\cos x - \sin x) = -1$
So, if this is the route you take, you must be careful to check which of your solutions are associated with which equation.
When $\sin x > 0$ we have $\sin x + 1 > 1$, while it is always the case that $\cos x \le 1.$ Similarly, when $\cos x<0$ it is impossible for $\sin x + 1$ to equal $\cos x$, since $\sin x + 1 \ge 0.$ We can use these facts to eliminate the "extra" solutions.
An alternative approach is to say
$\sqrt 2 (\frac {\sqrt 2}{2}\cos x - \frac {\sqrt 2}{2}\sin x) =1\\
\sqrt 2 (\cos \frac {\pi}{4}\cos x - \sin\frac {\pi}{4}\sin x) =1\\
\cos (x+\frac {\pi}{4}) = \frac {\sqrt 2}{2}$
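From here, $\cos\left(x+\frac{\pi}{4}\right) = \frac{\sqrt 2}{2}$ gives $x+\frac{\pi}{4} = \pm\frac{\pi}{4} + 2k\pi$, i.e. $x = 2k\pi$ or $x = -\frac{\pi}{2} + 2k\pi$. On $[0,4\pi]$ these are exactly $x = 0,\ \frac{3\pi}{2},\ 2\pi,\ \frac{7\pi}{2},\ 4\pi$, matching the graph; since no squaring was used here, no extraneous solutions appear.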
|
H: Calculus proof of ln(ab)= lna + lnb
My calculus book states the following theorem of the properties of natural logarithms:
If a, b > 0 , then
ln(ab)= lna + lnb
The author goes on to prove this theorem as follows
I do not understand what property allowed the author to use the substitution U = t/a, because the original variable in the second integral is "t" and clearly U is not the same as t. Shouldn't U = t?
AI: When you have a definite integral, the variable which you are integrating with respect to is a "dummy variable": in the sense that it does not matter what you call it. Thus,
$$\int_a^b\frac1tdt,\;\int_a^b\frac1udu,\;\int_a^b\frac1sds,\;\int_a^b\frac1\zeta d\zeta$$
all mean exactly the same thing and have exactly the same value. After the substitution $u=t/a$, we obtain $$\int_a^{ab}\frac1tdt=\int_1^b\frac1udu=\ln b.$$
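To spell out the substitution mechanics (the step that often causes confusion): with $u=t/a$, i.e. $t=au$, we have $dt=a\,du$, so $$\frac{1}{t}\,dt=\frac{1}{au}\cdot a\,du=\frac{1}{u}\,du,$$ and the limits $t=a$ and $t=ab$ become $u=1$ and $u=b$.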
|
H: $ \frac{1}{n}\sum_{i=1}^{n}{C_i}\text{ is weakly compact} $
Let $X$ be a separable Banach space, and let $C_1,...,C_n$ be nonempty weakly compact convex subsets of $X$. Why is it that
$$
\frac{1}{n}\sum_{i=1}^{n}{C_i}\text{ is weakly compact}
$$
An idea please.
AI: In general for a topological vector space $X$ and compact subsets $A$ and $B$, $A+B$ is the image of the compact set $A\times B$ under the continuous map $+: X\times X\rightarrow X$. Hence $A+B$ is compact. Multiplication by scalars also preserves compactness. Apply this with $X$ carrying its weak topology (which still makes $X$ a topological vector space), and induct on the number of summands to conclude that $\frac{1}{n}\sum_{i=1}^{n}C_i$ is weakly compact.
|
H: Minimal mutual information for the Binary Symmetric Channel
I am working on the following exercise:
Let $X, Y$ be RVs with values in $\mathcal{X} = \mathcal{Y} = \{0, 1\}$ and let $p_X(0) = p$ and $p_X(1) = 1−p$. Let $\mathcal{C} = (X , P, Y)$ be the channel with input RV $X$, output RV $Y$ and transition matrix
$$P = \begin{bmatrix} q &1-q\\ 1-q &q \end{bmatrix}.$$
1. Compute $I(X;Y)$.
2. For which values of $q$ is $I(X;Y)$ minimal?
For 1. we note that
$$I(X;Y) = H(Y) - H(Y \mid X) = H(Y) - H(q,1-q).$$
However, I do not see how to do 2. Could you help me?
AI: I'm assuming you meant to say $\mathcal{X} = \mathcal{Y} = \{0,1\}$. If $Y=X$, i.e. the two random variables are equal, $I(X;Y)$ is fixed and $P$ is meaningless.
Your expansion doesn't really help because both $H(Y)$ and $H(Y|X)$ depend on $q$. Instead, let's look at $I(X;Y) = H(X) - H(X|Y)$ by the symmetry of mutual information. $H(X)$ is fixed for a fixed $p$, so we only need to maximize $H(X|Y)$ through $q$ to minimize $I(X;Y)$.
If $q = 0.5$, observing $Y$ doesn't give any information about $X$, so our best guess is equivalent to flipping a $p$-biased coin, using the outcome as our guess and just ignoring $Y$. So, for $q=0.5$, $H(X|Y) = H(X)$, which is the maximum, because conditioning can't increase entropy. You can plug in the joint distribution of $X,Y$ to verify this intuitive explanation.
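If you want to see this numerically, here is a short Python sketch (the function names are mine) computing $I(X;Y)$ from the joint distribution for several values of $q$:

import numpy as np

def entropy(ps):
    ps = np.asarray([p for p in ps if p > 0])
    return -np.sum(ps * np.log2(ps))

def mutual_information(p, q):
    # joint[x][y] = P(X=x) * P(Y=y | X=x) for the given transition matrix
    joint = np.array([[p * q, p * (1 - q)],
                      [(1 - p) * (1 - q), (1 - p) * q]])
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())

for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(q, mutual_information(0.3, q))  # the minimum, 0 bits, occurs at q = 0.5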
|
H: Is this the correct way to solve this question?
When two fair dice are rolled, the odds of throwing a 'double' (two dice with the same number) are 1:5.
If two dice are rolled 400 times, the best estimate of the number of times you would NOT get a double would be ___?
My work:
400/2=80/2=40
Is this the correct answer, and if not, what is the correct answer to this question?
AI: Start by calculating the probability of getting a double of a particular number (1 for instance).
Each die has 6 faces, so the probability of getting 1 on a single die is $\left(\frac 16\right)$. The probability of getting 1 on both dice is the product of the probabilities of getting 1 on each individual die: $$\left(\frac 16\right)\left(\frac 16\right)=\left(\frac{1}{36}\right)$$
Now, multiply that result by 6 to calculate the probability of getting a double of any of the 6 numbers.
$$\left(\frac{1}{36}\right)6=\left(\frac{1}{6}\right)$$
The probability of rolling a double is $\frac 16$.
To calculate the theoretical number of rolls (out of 400) that will not be doubles, take 1 minus the probability of rolling a double, and then multiply that result by 400.
$$\left(1-\frac{1}{6}\right)400=\left(\frac{5}{6}\right)400\approx 333$$ (rounded to the nearest whole number)
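A quick simulation (a Python sketch, just to sanity-check the arithmetic) agrees with the $\frac 56$ figure:

import random

trials = 400_000
non_doubles = sum(random.randint(1, 6) != random.randint(1, 6) for _ in range(trials))
print(non_doubles / trials)  # close to 5/6 = 0.8333...
print(round(400 * 5 / 6))    # 333 expected non-doubles in 400 rolls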
|
H: How to compute the transition matrix of a channel composed of two channels?
I have a quick question on the following exercise:
Let $C = (X , P, Y)$ be a binary channel which is composed of two
binary channels in sequence, such that the output of the first channel $C_1 = (X , P_1, Z)$ is the
input of the second channel $C_2 = (Z, P_2, Y)$. Let the transition matrices be given by
$$P_1 :=
\begin{bmatrix}
3/4 &1/4\\
3/4 &3/4
\end{bmatrix} \quad \text{ and } \quad P_2 := \begin{bmatrix}1/3 &2/3\\ 2/3 &1/3 \end{bmatrix}$$
Compute the transition matrix $P$.
I just want to make sure that I am on the right track: $P$ should be given by $P = P_2 \cdot P_1$, since $\mathcal{C_1}$ maps into $\mathcal{C_2}$, right?
AI: Think in terms of the variables. $C_1$ maps $X \to Z$ with respect to $P_1$ and $C_2$ maps $Z \to Y$ with respect to $P_2$.
Since $C$ maps $X \to Y$, the equivalent cascaded channel is $X \to Z \to Y$. First, $X$ passes through $C_1$ (and hence $P_1$) and then the obtained $Z$ passes through $C_2$ (and hence $P_2$). Following the order, we get $P = P_1 P_2$, assuming $C_1$ and $C_2$ are independent of each other.
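As a sketch in Python/NumPy (note that the second row of $P_1$ as printed above sums to $3/2$; I assume here it was meant to be $[1/4,\ 3/4]$ so that each row is a probability distribution):

import numpy as np

P1 = np.array([[3/4, 1/4],
               [1/4, 3/4]])  # assumed row-stochastic version of P1
P2 = np.array([[1/3, 2/3],
               [2/3, 1/3]])

P = P1 @ P2  # cascade X -> Z -> Y: first P1, then P2
print(P)     # each row of P still sums to 1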
|
H: Evaluation of a complex polynomial
As an intermediate step to a problem, I would like to know whether or not the following is true:
Let $0<r<1$, and let $\zeta_1,\dots,\zeta_n$ denote the $n$th roots of unity. Then define a polynomial indexed by $j$ as
$$ f_j(z) = \frac{z\prod_{i} (z-r^{1/n}\zeta_i)}{z-r^{1/n}\zeta_j}.$$
Then $f_j(r^{1/n}\zeta_j) = f_k(r^{1/n}\zeta_k)$ for any $j$ and $k$.
The trouble I have proving or disproving this guess is that I couldn't directly plug in the value, as the denominator would then become zero. On the other hand, if I try to cancel one linear factor from both the numerator and the denominator and evaluate from there, the expansion becomes very messy and I can't figure out what it evaluates to. I would appreciate any suggestions or hints.
AI: Hint: $\prod_{i} (z-r^{1/n}\zeta_i) = z^n -r$ and use L'Hospital's Rule.
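Carrying the hint through: using $\prod_{i} (z-r^{1/n}\zeta_i)=z^n-r$,
$$f_j\left(r^{1/n}\zeta_j\right)=r^{1/n}\zeta_j\cdot\lim_{z\to r^{1/n}\zeta_j}\frac{z^n-r}{z-r^{1/n}\zeta_j}=r^{1/n}\zeta_j\cdot n\left(r^{1/n}\zeta_j\right)^{n-1}=n\,r\,\zeta_j^{n}=nr,$$
which does not depend on $j$, confirming the guess.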
|
H: Finite dimension implies A[a]=A(a)
I was trying to prove the following statement:
Let $A \subseteq F$ be a field extension and $a \in F$. Then $A[a] = \{f(a)\,|\,f \in A[x]\}$. Prove that if $A[a]$ is finite dimensional as a vector space over $A$, then $A[a]=A(a)$.
All my attempts were unsuccessful. How do we prove such a thing?
AI: HINT. Use the fact that $A[a]$ is finite dimensional as a vector space over $A$ to show that there is a polynomial for which $a$ is a root, say of degree $n$. There is then a minimal polynomial for $a$. [If you have not learned this, read the definition and prove that such a polynomial exists using division and your polynomial for which $a$ vanishes.] Then show any higher powers ($\geq n$) of $a$ can be expressed in terms of lower powers of $a$ using this irreducible minimal polynomial. Explain then how you can perform division (i.e., find inverses) using these lower powers of $a$. Then you have shown (you may have to verify a few things) that $A[a]$ is a field. Then you just need to see/explain how this shows $A[a]=A(a)$.
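To make the inverse step concrete: if $m(x)$ is the (irreducible) minimal polynomial of $a$ and $b=f(a)\neq 0$ with $\deg f<\deg m$, then $m\nmid f$, so $\gcd(f,m)=1$. Bézout gives $u,v\in A[x]$ with $uf+vm=1$; evaluating at $a$ yields $u(a)f(a)=1$, so $b^{-1}=u(a)\in A[a]$.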
|
H: Convert to a polynomial type integral
I came across a question where I had to convert $\int x^{32}\left(4+7x^3\right)^{2/9}\,dx$ to a "polynomial type integral". I have no idea where to start because I have never seen this type of question before. All I know is that this type of integral appears in engineering.
I cannot distribute the $x^{32}$ term and I also cannot expand $\left(4+7x^3\right)^{2/9}$ so easily. What is the approach to this type of question?
NOTE: I am not necessarily concerned about the answer for this specific integral. I am more interested in the required steps for rewriting the integral so that it is of the polynomial type.
AI: If you let $y = 4+7x^3$, then $dy = 21x^2\,dx$, so $x^2\,dx = \frac{1}{21}\,dy$ and $x^3 = \frac{y-4}{7}$. Writing $x^{32}\,dx = \left(x^3\right)^{10}x^2\,dx$, the integral becomes
$$\int x^{32}\left(4+7x^3\right)^{2/9}dx = \frac{1}{21\cdot 7^{10}}\int (y-4)^{10}\,y^{2/9}\,dy.$$
Now expand $(y-4)^{10}$ with the binomial theorem: every term of the integrand then has the form $c_k\,y^{k+2/9}$, so this is a "polynomial type" integral that can be integrated term by term and converted back via $y = 4+7x^3$.
|
H: Moment generating function for sum of independent random variables same as joint mgf
I'm seeing in general that for moment generating functions, the mgf of $X+Y$ where $X,Y$ are independent random variables is $M_{X+Y}(t) = M_X(t)M_Y(t)$. I'm also seeing that the joint mgf is given by $M_{(X,Y)}(t) = M_X(t_1)M_Y(t_2)$.
I'm not understanding why these two things would have the same formula. That is, why does $M_{(X,Y)}(t) = M_{X+Y}(t)$ for independent random variables? Would appreciate both a mathematical and heuristic explanation of why these are the same. I believe I may be making an error, however, in thinking the two formulas are the same.
AI: They don't have the same formula. $M_X(t) M_Y(t)$ is a function of a single variable $t$, whereas $M_X(t_1) M_Y(t_2)$ is a function of two variables, $t_1$ and $t_2$. You might say, but what if $t_1 = t_2$? Then yes, you get the MGF of $X + Y$. But they are no more the same as if you were to claim that $f(x,y) = x^2 + y^2$ is equivalent to $g(x) = 2x^2$. They are drastically different functions.
|
H: Show there is no ring map $R=\mathbb{Z}[\sqrt{-3}]\to\mathbb{Z}[i]=S$ such that $1_R\mapsto 1_S$
Let $R=\mathbb{Z}[\sqrt{-3}]$, and let $S=\mathbb{Z}[i]$. For sake of contradiction, assume $\varphi:R\to S$ is a ring map with $\varphi(1_R)=1_S$. Note that $\mathbb{Z}[\sqrt{-3}]$ is a free abelian group on the generators $1,\sqrt{-3}$. Thus $\varphi$ is completely determined by its action on these generators. Clearly $1_R=1\in\mathbb{Z}[\sqrt{-3}]$ and $1_S=1\in\mathbb{Z}[i]$. So $\varphi(1)=1$. Furthermore, $\varphi(\sqrt{-3})^2=\varphi(\sqrt{-3}^2)=\varphi(-3)=-3\varphi(1)=-3$, so $\varphi(\sqrt{-3})=\pm\sqrt{-3}$. This can't happen since $\pm\sqrt{-3}\not\in\mathbb{Z}[i]$. Therefore $\varphi$ cannot exist. Is this correct?
AI: This looks great to me!
Depending on what this is for, you may have a bit of routine lifting to do: what you have is $\phi(\sqrt{-3})^2 \in \mathbb{Z}[i]$ with $\phi(\sqrt{-3})^2= -3$, implying there is an element whose square is $-3$ in $\mathbb{Z}[i]$. As you already know, there is no such element. But you may be required to show this by showing you cannot find $a,b \in \mathbb{Z}$ with $(a+bi)^2= -3$. This is just a matter of expanding and relating parts, which gives $a^2 - b^2= -3$ and $2ab= 0$; obviously, there are no integers $a,b$ satisfying these equations.
|
H: Convergence $ \int_{-1}^1 \sqrt{1-\frac{x}{(1-x^2)^2}}$
I was trying to solve the following integral:
$$\int_{-1}^1 \sqrt{1-\frac{x}{(1-x^2)^2}}\,dx$$
But when I plugged it into online calculators, they said they couldn't find the integral and that it might not exist. Does this integral converge, or is it just very hard to evaluate? Furthermore, if it can be evaluated, I would appreciate any help evaluating it. Thanks in advance!
AI: $\lim\limits_{x\to 1^-}\left(1-\frac{x}{(1-x^2)^2}\right)=-\infty$, so the square root isn't even defined over the whole interval. And taking the absolute value first wouldn't help: near $x=\pm1$ we have $\left|1-\frac{x}{(1-x^2)^2}\right|\sim\frac{1}{4(1\mp x)^2}$, so the integrand grows like $\frac{1}{2(1\mp x)}$ and the integral diverges.
|
H: How to get the number of all possible combinations of k positive integers to reach a given product?
Let $n_1,\cdots,n_k$ be $k$ positive integers and $k>2$. Given that $\prod_{i=1}^kn_i=N$, how would you find all combinations of $(n_1,\cdots,n_k)$? Here the order does matter. For example, when $k=3$, $(2,3,2)$ and $(2,2,3)$ are different.
By the way, I got stuck at the first step: after finding the prime factorization $N=\prod_{j=1}^mp_j^{q_j}$, where all $p_j$ are prime numbers and $q_j\geq1$, the number of combinations should be expressible in a form involving the $q_j$.
For example, in the $k=3$ case, I got a solution: the number is $\prod_{j=1}^m \tbinom{1+q_j}{2}$. But I cannot see how to derive such a conclusion. Does anyone know the detailed derivation?
Thank you very much.
AI: Consider just one prime that divides $N$. In the $k=3$ case, you can put its copies into any of the three numbers that will multiply to $N$. For example, if $p^4$ is the highest power of some prime $p$ that divides $N$, you are looking for three numbers (with $0$ allowed) that sum to $4$; this is what stars and bars addresses. There are ${6 \choose 2}=15$ weak compositions of $4$ into three pieces. Each prime dividing $N$ is treated the same way: you want the number of weak compositions of the exponent of that prime into $k$ pieces, and since the compositions for different primes are chosen independently, you multiply these counts.
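A short self-contained Python sketch of this count (the helper names are mine):

from math import comb

def prime_exponents(n):
    # trial division; returns {prime: exponent}
    exps, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            exps[d] = exps.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        exps[n] = exps.get(n, 0) + 1
    return exps

def ordered_factorizations(N, k):
    # one weak composition of each exponent q into k parts: C(q+k-1, k-1)
    count = 1
    for q in prime_exponents(N).values():
        count *= comb(q + k - 1, k - 1)
    return count

print(ordered_factorizations(12, 3))  # 12 = 2^2 * 3: C(4,2)*C(3,2) = 6*3 = 18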
|
H: Metric properties.
I have the following problem:
Let $x=(x_{1},...,x_{n})$ and $y=(y_{1},...,y_{n})$ in $\mathbb{R^{n}}$.
let's define $d(x,y)=|x_{i}-y_{i}|$ for some fixed $i \leq n$.
Which of the metric properties does $d$ have?
I have come to the conclusion that it is a metric, but the book I am following says it is not, without giving an argument.
I would be very grateful if you could tell me which metric property fails. Thank you.
AI: Suppose $n>1$, and you fix $i=1$ for the sake of concreteness. Then, $d(x,y) = |x_1-y_1|$. Let $x = (0, \dots, 0, 1)$ and $y = (0, \dots, 0, 0)$. Then, what is $d(x,y)$? Is $x=y$?
|
H: Linear Algebra Matrices to equations
Show that
$$\operatorname{det}\;\begin{bmatrix}
1&1&1\\x^2&y^2&z^2\\x^4&y^4&z^4
\end{bmatrix}=(y^2-x^2)(z^2-x^2)(z^2-y^2)$$
I am doing my homework, and this question came up.
I need to know what subject this question belongs to so I can study it and solve it.
I tried to look it up in my textbook and couldn't find it.
AI: You are given a matrix with entries in some polynomial ring, say $\Bbb Z[x,y,z]$ or $\Bbb R[x,y,z]$ (in this specific case it does not really matter what the underlying ring is). That being said, you can compute the determinant of a matrix just like any other matrix, for example (since we are given a $3\times 3$ matrix) via Sarrus' rule:
$$\operatorname{det}\;\begin{bmatrix}
1&1&1\\x^2&y^2&z^2\\x^4&y^4&z^4
\end{bmatrix}=y^2z^4 + z^2x^4 + x^2y^4 - x^4y^2 - y^4z^2 - z^4x^2$$
Now expanding the product on the right-hand side of the desired identity gives the same expression.
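Indeed,
$$(y^2-x^2)(z^2-x^2)(z^2-y^2)=y^2z^4-y^4z^2+x^2y^4-x^2z^4+x^4z^2-x^4y^2,$$
which matches the determinant term by term. (For further study: this is a Vandermonde determinant in the variables $x^2, y^2, z^2$.)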
|
H: Correct notation to restrict parameter in equation
I'm trying to express the following equation using correct notation:
$\sin{\left(\frac{n\pi}{2}\right)},n\,\text{even} = 0$
I've already specified $n$ is a natural number, so presumably I don't need to respecify it? Would the following be better?
$\left\{\sin{\left(\frac{n\pi}{2}\right)}:n\,\text{even}\right\} = 0$
Is there a more "mathy" way to express that $n$ is even? What if I hadn't yet specified that $n$ belonged to the natural numbers, would I use something like?
$\left\{\sin{\left(\frac{n\pi}{2}\right)}:n\in\mathbb{N}\,,\text{even}\right\} = 0$
Are all of these essentially acceptable? None of them?
AI: I guess you'd like to say that $$\sin\left(\frac{n\pi}{2}\right) = 0$$ for all even $n \in \mathbb{N}$. Am I correct?
You can indeed express the set of all these values via
$$\left\{\sin\left(\frac{n\pi}{2}\right) \mid n \in \mathbb{N},\ n\ \text{even}\right\} = \{0\}.$$
Note that your expression
$$\left\{\sin\left(\frac{n\pi}{2}\right) \mid n\ \text{even}\right\} = 0$$
is incomplete, since you don't mention that $n \in \mathbb{N}$ and on the left hand side you have a set, whilst on the right hand side we have a number.
Note: This is something you might sometimes encounter in abstract algebra, where the trivial group $\{e\}$ is written as $0$ or $1$. But in a scenario like this, I would not recommend it.
|