H: Under which conditions the rings $\mathbb{Z}_p[x]/(x^n+1)$ and $\mathbb{Z}_p[x]/(x^n-1)$ are fields? (for $p$ prime)
I'm looking for the necessary and sufficient conditions for any prime $p$ and any positive integer $n$ to make the quotient rings $\mathbb{Z}_p[x]/(x^n+1)$ and $\mathbb{Z}_p[x]/(x^n-1)$ not only rings but fields.
I know that the quotient ring $\mathbb{Z}_p[x]/f(x)$ is a field if and only if $f(x)$ is irreducible over $\mathbb{Z}_p$:
Then $\mathbb{Z}_p[x]/(x^n-1)$ can never be a field, as $1$ is a root of $x^n-1$ in $\mathbb{Z}_p[x]$, making it reducible, am I right?
For $\mathbb{Z}_p[x]/(x^n+1)$:
2.1. If $n\geq p$, then $f(x)=x^n+1$ might have a root in $\mathbb{Z}_p$, as one of the two following cases happens: either (a) all integers in $\mathbb{Z}_p^*$ are roots of $f(x)$, or (b) there is a monic polynomial $g(x)$ such that $g(x)\equiv 0 (\text{mod }p)$ with $\text{deg}(g)<p$ and the solutions of $g(x)\equiv 0(\text{mod }p)$ are the solutions of $f(x)\equiv 0 (\text{mod }p)$.
Note: In (b), if the equation $g(x)\equiv 0(\text{mod }p)$ has no solutions, then $f(x)\equiv 0(\text{mod }p)$ has no solutions as well.
2.2. If $n<p$, then for all integers $a$ with $\gcd(a,p)=1$, we have that $x^n\equiv a(\text{mod }p)$ has exactly $\gcd(n,p-1)$ solutions if $$a^{\frac{p-1}{\gcd(n,p-1)}}\equiv 1(\text{mod }p),$$ or no solution at all if $$a^{\frac{p-1}{\gcd(n,p-1)}}\not\equiv 1(\text{mod }p).$$ Then by taking $a=p-1$, we have that $x^n+1\equiv 0(\text{mod }p)$ which is equivalent to $x^n\equiv p-1(\text{mod }p)$ has exactly $\gcd(n,p-1)$ solutions if $$(-1)^{\frac{p-1}{\gcd(n,p-1)}}\equiv 1(\text{mod }p),$$ or no solutions at all if $$(-1)^{\frac{p-1}{\gcd(n,p-1)}}\not\equiv 1(\text{mod }p).$$
My questions are:
Just to be sure, $\mathbb{Z}_p[x]/(x^n-1)$ is never a field, right?
Is there an easier criterion or a direct theorem I'm missing for the case $\mathbb{Z}_p[x]/(x^n+1)$ which could give me a more straightforward condition on $p$ and $n$?
Thank you.
AI: First consider $x^n-1$ . . .
If $n=1$ then $x^n-1=x-1$ which is irreducible in $\mathbb{Z}_p[x]$.
If $n > 1$ then $x^n-1$ has the factor $x-1$, so $x^n-1$ is reducible in $\mathbb{Z}_p[x]$.
Next consider $x^n+1$ . . .
If $n=1$ then $x^n+1=x+1$ which is irreducible in $\mathbb{Z}_p[x]$.
If $n$ is odd and $n > 1$, then $x^n+1$ has the factor $x+1$, so $x^n+1$ is reducible in $\mathbb{Z}_p[x]$.
If $n$ is an even positive integer but not a power of $2$, then $n$ has an odd prime factor, $q$ say, with $q < n$. Letting $m=n/q$, it follows that $x^n+1$ has the factor $x^m+1$, so $x^n+1$ is reducible in $\mathbb{Z}_p[x]$.
If $n=2$ then $x^n+1=x^2+1$ which factors mod $p$ if and only if $-1$ is a quadratic residue mod $p$, which happens if and only if $p=2$ or $p\equiv 1\;(\text{mod}\;4)$.
If $n=4$ then $x^n+1=x^4+1$ which is reducible mod $p$ for all primes $p$.
$\qquad$Why is $X^4+1$ reducible over $\mathbb F_p$ with $p \geq 3,$ prime
If $n$ is a power of $2$ with $n > 4$, then since $x^4+1$ is reducible mod $p$ for all primes $p$, it follows that $x^n+1$ is reducible mod $p$ for all primes $p$.
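A quick computational cross-check of these cases (a sketch using SymPy, not part of the original argument; the helper name `is_irreducible_mod_p` is just illustrative, and it assumes SymPy's `factor_list` with a `modulus` keyword as documented):

```python
# Sketch: factor x^n + 1 over F_p with SymPy and test irreducibility.
from sympy import symbols, factor_list

x = symbols('x')

def is_irreducible_mod_p(n, p):
    # factor_list returns (leading_coefficient, [(factor, multiplicity), ...])
    _, factors = factor_list(x**n + 1, x, modulus=p)
    return len(factors) == 1 and factors[0][1] == 1

# x^2 + 1 should be irreducible exactly for primes p = 3 (mod 4):
print([p for p in [2, 3, 5, 7, 11, 13] if is_irreducible_mod_p(2, p)])  # [3, 7, 11]
# x^4 + 1 should be reducible mod every prime:
print([p for p in [2, 3, 5, 7, 11, 13] if is_irreducible_mod_p(4, p)])  # []
```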
|
H: Does joint distribution affect Radon-Nikodym derivative?
Given two real-valued random variables $X, Y$ with distributions $\mu_X, \mu_Y$. Suppose $\mu_Y\ll\mu_X$; then the Radon-Nikodym derivative $\frac{d\mu_Y}{d\mu_X}(\cdot)$ exists $\mu_X$-a.e. on $\mathbb{R}$.
It seems that joint distribution $(X, Y)$ does not affect their Radon-Nikodym derivative at all? Since the measure-theoretic definition of Radon-Nikodym derivative as a limit is given by
$$\frac{d\mu_Y}{d\mu_X}(\cdot) = \lim_{r \rightarrow 0} \frac{\mu_Y(B(r,\cdot))}{\mu_X(B(r,\cdot))}.$$
I am just a bit unsure since the "change of measure" type of argument is ubiquitous in probability theory and yet it isn't affected by the joint law. (Unless there is some other type of change of measure argument in probability theory.)
AI: In the setting of the Radon-Nikodym theorem, the two measures $\mu_X$ and $\mu_Y$ are measures on the same measurable space $(\Omega, \mathcal{F})$. When you have two random variables, and you want to talk about the Radon-Nikodym derivative, you first need $\mu_X$ and $\mu_Y$ to be on the same measurable space (in probability, usually $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. But after this, any information about the joint distribution of $X$ and $Y$ is irrelevant. If you take a step back and ignore the probabilistic context, you will see that the Radon-Nikodym derivative concerns two measures on the same measurable space, and not really two marginal measures on some larger space like $\mathbb{R} \times \mathbb{R}$ or $\Omega \times \Omega$.
|
H: What is $dudv$ in the metric tensor?
In the definition of a $m$-dimensional Riemannian manifold $(M,G)$, if $(U;u^i)$ is a local coordinate system of $M$, the tensor field $G$ on $U$ can be written as
$$
G = g_{ij}du^i\otimes du^j\;\tag{1}
$$
I have seen in many places people write the "Riemann metric"
$$
ds^2 = g_{ij}du^idu^j\;\tag{2}
$$
For example, see this article on the first fundamental form and this one on metric tensors.
In the context of tensors, it is clear what $du^i\otimes du^j$ and $du^i\wedge du^j$ mean. Sometimes the wedge product is even omitted. I have also seen people define
$$
dudv = \frac12(du\otimes dv+dv\otimes du)\;.\tag{3}
$$
What is the correct meaning of $dudv$ in (2)? Is it the tensor product, wedge product, or the symmetrization?
AI: It is exactly what you wrote in $(3)$. With this, we show that $g_{ij}\,{\rm d}u^i\otimes {\rm d}u^j = g_{ij}\,{\rm d}u^i\,{\rm d}u^j$ as follows: $$g_{ij}\,{\rm d}u^i\otimes\,{\rm d}u^j = \frac{1}{2}(g_{ij}\,{\rm d}u^i\otimes {\rm d}u^j + g_{ji}\,{\rm d}u^j\otimes {\rm d}u^i) = \frac{1}{2} g_{ij}({\rm d}u^i\otimes{\rm d}u^j+{\rm d}u^j\otimes {\rm d}u^i) = g_{ij}\,{\rm d}u^i\,{\rm d}u^j,$$where in the middle $=$ sign we factor out $g_{ji} = g_{ij}$.
|
H: Prove $\int_a^b f(x)\,\mathrm dx=\int_a^cf(x)\,\mathrm dx+\int_c^bf(x)\,\mathrm dx$ without the Fundamental Theorem of Calculus
Let $f(x)$ be a continuous function. Let $a,b,c$ be constants, with $a < c < b$. Prove that
$\displaystyle\int_a^b f(x)dx = \int_a^c f(x)dx + \int_c^b f(x)dx$ $(*)$
In particular, I would like to see a proof of this property that does not use the Fundamental Theorem of Calculus. I am aware that this can be easily proved using the Fundamental Theorem. However, the proof of the Fundamental Theorem of Calculus that I understand the most is the one given in James Stewart's Calculus. As it turns out, the property $(*)$ is actually used by Stewart in order to prove the Fundamental Theorem of Calculus!
As far as actually proving $(*)$ without using the Fundamental Theorem, the only thing I can think of is to use Riemann sums
$\displaystyle \int_a^b f(x)dx = \lim_{ n \to \infty } \sum_{i=1}^n f(x_i) \frac {b-a}{n}$
$\displaystyle \int_a^c f(x)dx + \int_c^b f(x)dx = \lim_{ n \to \infty } \sum_{i=1}^n f(x_i) \frac {c-a}{n} + \lim_{ m \to \infty } \sum_{j=1}^m f(x_j) \frac {b-c}{m}$
Not sure what to do next, since $x_i \neq x_j$ in general. Similarly $m = n$ is not necessarily true.
AI: Hint
To see this you need to work with Riemann sums in general, not just with the case of partitions of equal length.
Consider a partition $P: a=a_0<...<a_n=c$ of $[a,c]$, and some intermediate points $x_1^*,.., x_n^*$. Consider also a partition $Q: c=b_0<...<b_m=b$ of $[c,b]$, and some intermediate points $y_1^*,.., y_m^*$.
Then the sum of the corresponding Riemann sums
$$\sum_{k=1}^n f(x_k^*)(a_k-a_{k-1})+\sum_{k=1}^m f(y_k^*)(b_k-b_{k-1})$$
is a Riemann sum for $\int_a^b f(t)dt$ for the partition
$$P \cup Q= a_0<a_1<...<a_n<b_1<...<b_m=b$$
and the intermediate points $x_1^*,.., x_n^*,y_1^*,.., y_m^*$.
Note here that $\| P \cup Q \| = \max\{ \|P \|, \| Q \| \}$.
Conversely, if you have a partition $P: a=a_0< a_1< ...< a_n =b$, let $k$ be the last index for which $a_k \leq c$. Then $a_{k+1}>c$.
Now, for any intermediate points $x_1,..., x_n$ show that $P': a_0<a_1<...<a_k <c$ (or $P': a_0<a_1<...<a_k =c$) and $Q': c< a_{k+1}<....<a_n=b$ are partitions of $[a,c], [c,b]$ and that $x_1^*,.., x_{k-1}^*, c$ and $x_{k}^*,.., x_n^*$ are intermediate points.
If $R$ is the corresponding Riemann sum for $P$ , and $R_1,R_2$ are the corresponding Riemann sums for $P',Q'$, show that
$$|R-R_1-R_2 | < 2\|P\| M$$
where $$M= \sup\{ |f(x)| : x \in [a,b]\}$$
|
H: represent $x^3 +2x$ as power series
Well, it is pretty weird for me to see this question; the function already is a power series, isn't it?
Am I missing the purpose of the exercise?
AI: A power series is an infinite series of the form $\sum_{n=0}^\infty a_n(x-x_0)^n$. If for some function $f$ and some set $D$,
$$
f(x) = \sum_{n=0}^\infty a_n(x-x_0)^n,\quad x\in D
$$
we say that the RHS is the power series of $f$ at $x_0$.
When $f$ is a polynomial function of the form $f(x)=b_0+b_1x+\cdots+b_mx^m$, its power series at $x_0=0$ is $f$ itself.
In general, you will need to find the coefficients $a_n$ when $x_0\ne 0$. But Taylor's theorem tells you that it has something to do with $f^{(n)}(x_0)$.
|
H: Is every (sufficiently nice) function a solution to an autonomous ode?
For whatever reason, I'm suddenly interested in whether or not one can characterize those functions which are solutions to autonomous ode. The first natural question to ask, then, is: are all $C^1([a,b])$ functions solutions to some autonomous ode?
If not, why not?
AI: For a counterexample, the sine function cannot be such a solution, because its derivative $\cos(x)$ cannot be written as $f(\sin(x))$ for any function $f$.
Since
$\cos(\pi/4) = \sin(\pi/4) = \sqrt{2}/2,$
we must have $f(\sqrt{2}/2) = \sqrt{2}/2$.
$\sin(3\pi/4)=\sqrt{2}/2$ but $\cos(3\pi/4) = -\sqrt{2}/2$.
Edit: I guess for an invertible $C^1$ function this would work. For $u'(x) = f(u(x))$, we can define $f(x) = u'(u^{-1}(x))$.
|
H: If $\lim_m \mathbb{P}(\{\sup_{j\geq 1}{|S_{m+j}(t)-S_{m}(t)|}\leq \epsilon\})=1$ then $\mathbb{P}(\{S_n(t)\text{ is a convergent sequence}\})=1$
Let $(X_n)$ be a sequence of random variables in Probability space.
Take $S_k=\sum_{n=1}^{k}{X_n}$. We suppose that for all $\epsilon >0$:
$$\lim_m \mathbb{P}(\{\sup_{j\geq 1}{|S_{m+j}(t)-S_{m}(t)|}\leq \epsilon\})=1$$
Can we say that $\mathbb{P}(\{\{S_n(t)\}\text{ is a convergent sequence}\})=1$?
AI: Yes. We can find $m_k$ such that $P(\sup_{j \geq 1} |S_{m_k+j}-S_{m_k}| >\frac 1 k ) <\frac 1 {2^{k}}$. Hence $\sum_k P(\sup_{j \geq 1} |S_{m_k+j}-S_{m_k}| >\frac 1 k ) <\infty$. Apply the Borel-Cantelli Lemma to conclude that $(S_n)$ is Cauchy with probability $1$.
|
H: Closed form of $\sum_{n=1}^{\infty}a^n \cos(nt)$
I'm trying to find a closed form (if one exists) for
$$\sum_{n=1}^{\infty}a^n \cos(nt)$$
where $a \in (0, 1)$. I know that
$$\begin{matrix} \sum_{n=1}^{\infty} \frac{a^n}{n} \cos(nt) &= &\ln(a^2-2a\cos(t)+1) \\
\sum_{n=1}^{\infty} \frac{1}{n} \cos(nt) &= &2 \ln|2 \sin(\frac{t}{2})| \end{matrix}$$
It feels like these two facts combined should make it relatively easy to come up with the closed form for $\sum_{n=1}^{\infty}a^n \cos(nt)$ but I'm having difficulty seeing it.
AI: $$
\begin{align}
\sum_{n=1}^\infty a^n\cos(nt)
&=\frac12\sum_{n=1}^\infty\left(a^ne^{int}+a^ne^{-int}\right)\tag1\\
&=\frac12\left(\frac{ae^{it}}{1-ae^{it}}+\frac{ae^{-it}}{1-ae^{-it}}\right)\tag2\\
&=\frac{a\cos(t)-a^2}{1-2a\cos(t)+a^2}\tag3
\end{align}
$$
Explanation:
$(1)$: $\cos(x)=\frac{e^{ix}+e^{-ix}}2$
$(2)$: sum of a geometric series with ratios $ae^{it}$ and $ae^{-it}$
$(3)$: combine the two fractions over a common denominator and use $\cos(t)=\frac{e^{it}+e^{-it}}2$ again
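A quick numerical sanity check of $(3)$ (a sketch, not part of the original answer; the truncation at 200 terms and the sample values of $a$, $t$ are arbitrary choices):

```python
# Sketch: compare partial sums of a^n cos(nt) with the closed form (3).
import math

def partial_sum(a, t, terms=200):
    return sum(a**n * math.cos(n * t) for n in range(1, terms + 1))

def closed_form(a, t):
    return (a * math.cos(t) - a**2) / (1 - 2 * a * math.cos(t) + a**2)

a, t = 0.7, 1.3
print(partial_sum(a, t), closed_form(a, t))  # the two values agree to many decimal places
```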
|
H: Proving continuity: asking to see if I have a counter-example
I was asked to prove the following:
Suppose $F_1, \dots, F_n$ are disjoint closed subsets of $\mathbb{R}$.
Prove that if
$$g: \cup_{k=1}^n F_k \to \mathbb{R}$$
is a function such that $g_{|F_k}$ is a continuous function for each
$k \in \{1,\dots,n\}$, then $g$ is a continuous function.
But I'm not sure if it's impossible. May I ask you to check if this is a counter-example?
$$A=\mathbb{N}$$
$$B=\{n+2^{−n}:n \in \mathbb{N} \}$$
and $g(x) = \lceil x \rceil$ (i.e. rounding up. For example, $g(3.0) = 3, g(3 + 2^{-3}) = 4$)
$g_{|A}$ and $g_{|B}$ are continuous, but $g$ is not.
AI: Your $g$ is not continuous on $\mathbb R$ but it is continuous on $A \cup B$. So where is the contradiction?
In fact $A \cup B$ is a discrete space and any function on it is continuous.
|
H: If T ◦ S = S ◦ T and v is an eigenvector of T, then v is an eigenvector of S
I have to prove or provide a counterexample to the following claim:
Let $V$ be a vector space, $T, S : V \to V$ linear transformations. If $T S = ST$ and $v$ is an eigenvector of $T$, then $v$ is an eigenvector of $S$.
I thought it's correct and started to prove it,
I showed that $S(v)$ is an eigenvector of $T$, but I got stuck there.
$S(T(v)) = T(S(v))$
$S(\lambda v)=T(S(v))$
$\lambda S(v)=T(S(v))$
AI: Take $T$ as the identity map and $S$ as any linear map. Then clearly $TS=ST$. If the proposition were true, then every eigenvector of $T$ would be an eigenvector of $S$. But since $T$ is identity, every vector of $V$ is an eigenvector of $T$. So as soon as one finds an $S$ for which some vector is not an eigenvector we have disproved the proposition. Of course, there are many such examples of $S$.
|
H: Does there always exist a multiple keyhole avoiding finite number of points?
Let's $C$ be a circle in $\mathbb C$. Let $z_1,\ldots,z_N$ be points in the region enclosed by $C$ ($C$ itself is not contained in that region). A multiple keyhole avoiding $z_1,\ldots,z_N$ is a shape of the form below.
The whole shape should not intersect itself (so it should be a simple closed curve), the small circle enclosing $z_i$ should be centered at $z_i$, all the small circles should have a same radius, the two line segments that form a corridor should be parallel, and all the segments deleted from $C$ should have the same length.
Does a multiple keyhole avoiding $z_1,\ldots,z_N$ always exist? It seems to exist when $N$ is small, but I'm not sure it will exist for arbitrary $N$.
I need to know the existence of such multiple keyhole to prove the residue formula in complex analysis. I'm reading Stein's "Complex Analysis" p. 77.
Edit
I tried to prove this by mathematical induction. I assumed that it holds for $N$, and tried to prove that it also holds for $N+1$. But in the situation below, $N$ is $4$, and the new point is shown by the grey dot near the center. I don't think I can connect $C$ with a small circle centered at the grey dot in any way.
AI: Longer answer: Surely you can avoid one point, so the base case is satisfied.
Suppose you have a keyhole contour avoiding $k$ points. Introduce a new point $p_{k+1}$ to be avoided. We can "stretch" the outer circle $C$ to enclose all $k+1$ points if necessary. There is a minimum distance between $p_{k+1}$ and the other $p_i$, $1\leq i\leq k$. Call it $d>0$. Draw a keyhole of radius $d/50$ around $p_{k+1}$, and draw the straight lines connecting this keyhole to $C$ avoiding the other $k$ points.
|
H: Polynomial quotient ring $\mathbb{Z}_m[x]/f(x)$ when $\mathbb{Z}_m$ is only a ring (not a field)?
The definition of polynomial quotient ring requires $\mathbb{Z}_m$ to be a field.
But if $m$ is not prime, then $\mathbb{Z}_m$ is just a ring (not a field). So under what conditions can $\mathbb{Z}_m[x]/f(x)$ still be a ring? ($f(x)$ a polynomial in $\mathbb{Z}_m[x]$, for example $x^n+1$.)
Note: I found out this question:
Quotient rings over rings that are not fields, and it mentions that any principal ideal ring should suffice, and $\mathbb{Z}_m$ is a commutative principal ideal ring for all $m$, but I'm not sure if that is enough argument to make $\mathbb{Z}_m[x]/f(x)$ a ring.
AI: When you write $\Bbb{Z}_m[x]/f(x)$, this is the same thing as writing $\Bbb{Z}_m[x]/(f(x))$. In particular, you are taking the quotient of the ring by the ideal generated by $f(x)$. It is true in general that the quotient of a commutative ring by an ideal is again a ring. E.g. if $A$ is a ring and $\mathfrak{a}$ is an ideal, then $A/\mathfrak{a}$ is a ring. So, what you have written will always be a ring, regardless of the coefficients used. That is, for any ring $A$, $A[x]/(f(x))$ is a ring, for any $f(x)\in A[x]$.
|
H: Find infimum of the sets of number $x + \frac{1}{x} $
Let $A = \{ z = x + \frac{1}{x} : x > 0 \} $ and $B = \{z = 2^x + 2^{1/x} : x > 0 \} $
I want to find $\inf A $ and $\inf B $.
Proof.
Clearly, by AM-GM inequality one has $x + \dfrac{1}{x} \geq 2 $ and $2^x + 2^{1/x} \geq 2 \sqrt{2^{x+1/x} } \geq 2 \sqrt{2^2} = 4 $.
Thus: claim $\inf A = 2 $ and $\inf B = 4 $
For the first one, we need to see that if $l$ is lower bound for $A$:
$$ x + \dfrac{1}{x} \geq l $$
for all $x$, then $2 \geq l$. Well, trivial: put $x=1$ then $1+1 \geq l $. So $\boxed{\inf A = 2 }$. Similarly, if $x=1$ in $B$ we see $2+2 \geq l$ so $\boxed{\inf B = 4}$ . QED
Is this enough work for the proof?
AI: Looks good to me. In fact, you prove not only that $\inf A=2$, but also that $\min A=2$. Similarly, you have proved that $\min B=4$.
|
H: Solving "Square" PDE
I'm trying to find all solutions to:
$$f_{xx}+2f_{xy}+f_{yy} = 0$$
I noticed that if we let $D = \frac{\partial}{\partial x} + \frac{\partial}{\partial y}$ then we are looking for the kernel of $D^2$. I'm not sure where to go from here though.
AI: Make a change of variables:
$$u=x-y\\v=x+y$$
Then
$$\frac{\partial}{\partial x}=\frac{\partial}{\partial u}+\frac{\partial}{\partial v}\\
\frac{\partial}{\partial y}=-\frac{\partial}{\partial u}+\frac{\partial}{\partial v}
$$
and the equation becomes much simpler to solve
$$4\frac{\partial^2 f}{\partial v^2}=0\iff f(u,v)=vG(u)+H(u)$$
where $G,H$ are arbitrary functions. Back-substituting yields the most general solution for the PDE:
$$f(x,y)=(x+y)G(x-y)+H(x-y)$$
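As a sanity check (a sketch using SymPy, not part of the original answer; $G$ and $H$ are kept symbolic), one can substitute the general solution back into the PDE:

```python
# Sketch: verify f(x,y) = (x+y) G(x-y) + H(x-y) satisfies f_xx + 2 f_xy + f_yy = 0.
from sympy import symbols, Function, diff, simplify

x, y = symbols('x y')
G, H = Function('G'), Function('H')

f = (x + y) * G(x - y) + H(x - y)
residual = diff(f, x, 2) + 2 * diff(f, x, y) + diff(f, y, 2)
print(simplify(residual))  # 0
```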
|
H: Compute an $n\times n$ determinant with factorial and powers of $x$
Compute
$$
D_{n}=
\begin{vmatrix}
1 & 0 & 0 & 0 & 0 & \ldots & 1\\
1 & 1! & 0 & 0 & 0 & \ldots & x\\
1 & 2 & 2! & 0 & 0 & \ldots & x^{2}\\
1 & 3 & 3\cdot2 & 3! & 0 & \ldots & x^{3}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ddots & \ldots\\
1 & n & n\left( n-1\right) & n\left( n-1\right) \left( n-2\right) &
n\left( n-1\right) \left( n-2\right) \left( n-3\right) & \ldots & x^{n}%
\end{vmatrix}
$$
My attempt: I tried to make an expansion along the first row, but I didn't obtain anything; I also tried to compute some particular cases, but I didn't see how to obtain a general formula that would let me compute $D_{n}$.
Any ideas?
AI: Let $M_n$ be the matrix of interest (the argument of the determinant in your question). We claim that
$$M_n = A_n B_n, ~ A_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1! & 0 & \cdots
& 0 \\ 1 & 2 & 2! & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & n & n(n - 1) & \cdots & n! \end{bmatrix}, ~ B_n = \begin{bmatrix} I_n & \vec{v}_{n-1} \\ 0 & (x - 1)^n / n!\end{bmatrix}$$
where $\vec{v}_{n-1}^T = \begin{bmatrix} 1 & x - 1 & (x - 1)^2 / 2 & \cdots & (x - 1)^{n-1} / (n-1)!\end{bmatrix}$. The correctness of the first $n$ columns of the product are trivial to verify. To verify the correctness of the last column, note that
$$(A_n B_n)_{i, ~n+1} = \sum_{j = 0}^{i - 1} \frac{(i - 1)!}{(i - j - 1)!} \frac{(x - 1)^j}{j!} = \sum_{j = 0}^{i - 1} {i - 1 \choose j} (x - 1)^j = x^{i - 1}$$
by the Binomial Theorem. (Note here that the indexing of the matrix starts with $1$, not zero.) As determinants are multiplicative, it follows that
$$\det(M_n) = \det(A_n) \cdot \det(B_n) = \left[ \prod_{k = 0}^n k! \right] \cdot (x - 1)^n / n! = \left[ \prod_{k = 0}^{n-1} k! \right] \cdot (x - 1)^n$$
since $\det(A_n)$ is simply the product of its diagonal entries (it's lower triangular) and $\det(B_n) = (x - 1)^n / n!$ by Laplace expanding along the last row. It's reassuring to see that this calculation confirms saulspatz's computer simulations. $\square$
|
H: A question relate to dominated convergence theorem
Given that $g(x)=\frac{1}{x\ln(x)}$ for $x > 1$, and $f_n=c_n\mathbf{1}_{A_n}$, where $c_n \geq 0$, $A_n$ is measurable and $A_n\subset [2, +\infty)$, and $|f_n|\leq g$ ($f_n$ is dominated by $g$).
$\forall x, \lim_{n \rightarrow \infty}f_n(x) = 0$.
The question is: do we have $\int^\infty_2 f_n(x)dx \rightarrow 0$ as $n \rightarrow \infty$?
AI: Yes.
The function $g$ is not Lebesgue integrable, since monotone convergence theorem shows the calculation below is valid.
$$\begin{align}
\int_2^\infty g(x)dx &= \lim_{R\to\infty}\int_2^R g(x)dx\\ &=\lim_{R\to\infty}[\log\log x]_2^R\\ &= \infty
\end{align}$$
Then you can't use Lebesgue's dominated convergence theorem. However, for arbitrary sequence $(f_n=c_n\chi_{A_n})$ which is dominated by $g$ and $f_n\to 0$ pointwise, it can be said that $\int_2^\infty f_n(x)dx \to 0$. (Here, $\chi_A$ is a characteristic function of $A$.) We show this by indirect proof and assume $\int_2^\infty f_n(x)dx \not\to 0$.
First we claim $$\forall \varepsilon\in (0,1/2), \exists ! t\in(2,\infty), g(t)=\varepsilon$$ This is easy since $g$ is decreasing in $(2, \infty)$ and $g(2)>1/2, \lim g(x) = 0$. Now we can say that for arbitrary function $\varepsilon\chi_{A}$ dominated by $g$ and $1/2$, $$\int_2^\infty \varepsilon\chi_{A} \leq \varepsilon (t-2)\leq \varepsilon t=\frac{1}{\log t}$$
Note that if $\varepsilon \to 0$, then $t \to \infty$, then $\frac{1}{\log t}\to 0$.
Since $\int_2^\infty f_n(x)dx \not\to 0$, you can take a subsequence $(f_{n_k})$ such that $\int_2^\infty f_{n_k}(x)dx \to I$, where $I=\limsup_n \int_2^\infty f_n(x)dx \in (0, \infty)$.
Let $\varepsilon_0\in (0,1/2)$ be a real number such that $\frac{1}{\log t_0} < I/2$. ($t_0$ is a unique number satisfying $g(t_0)=\varepsilon_0$.) There exists a number $N$ such that for all $k>N$, $\int_2^\infty f_{n_k}(x)dx > I/2$. By the argument above, it can be said that $c_{n_k} > \varepsilon_0$.
Applying Lebesgue's dominated convergence theorem to $(f_{n_k})$ with dominating function $g\chi_{[2, t_0]}$, it follows that
$$\lim_{k\to \infty}\int_2^\infty f_{n_k}(x) dx = \int_2^\infty 0 dx = 0 $$
However this is a contradiction since $(f_{n_k})$ was taken to satisfy $\int_2^\infty f_{n_k}(x)dx \to I$.
|
H: If $\Delta \varphi =0$ then $\nabla \times \nabla \varphi =0$.
I was looking into a question about vector functions $\varphi$ satisfying the Laplace equation $\Delta \varphi =0$. I found that this answer to the question was given in most places I looked for, but my problem with it is that it doesn't use the hypothesis that $\Delta \varphi =0$.
My question comes from my understanding that the curl of a gradient is $0$ only if the mixed partials satisfy Clairaut's theorem, which is not always true. So is there a counterexample where $\Delta \varphi =0$ but $\nabla \times \nabla \varphi \neq 0$? Or does the hypothesis of being a solution to the Laplace equation impose that the mixed partials are always equal?
Thank you!
AI: Given $f:\Bbb{R}^3\to \Bbb{R}$ a function of class $C^2$ (twice continuously differentiable), Clairaut's Theorem says the mixed partials commute, i.e. that
$$ \frac{\partial^2 f}{\partial x_i\partial x_j}=\frac{\partial^2 f}{\partial x_j\partial x_i}$$
for all $i,j\in \{1,2,3\}$. It is true that whenever $f$ is of class $C^2$, $\nabla \times \nabla f=0$, as a calculation using the symmetry of the partials will show. I think that for this statement you only need the $C^2$ assumption. You should note that usually a harmonic function (satisfying $\Delta f=0$) is assumed to be $C^2$. In particular, harmonicity implies $C^2$ and the proof can be carried out using the weaker statement.
|
H: What is the set of $\sup\{x\geq 1/2: (x-\frac{1}{2})^2+1\leq \log 3\}$?
What is the set of
$$\sup\{x\geq 1/2: (x-\frac{1}{2})^2+1\leq \log 3\}$$
It seems that for any $x\geq 1/2$, $\sup {(x-\frac{1}{2})^2+1}=1\leq \log 3$. So what is the sup?
AI: I am editing my answer since the OP made a mistake and edited his question. Since $x \geq \frac{1}{2}$, we have $(x-\frac{1}{2})^2 \geq 0$, and the constraint is $(x-\frac{1}{2})^2+1 \leq \ln 3$. The sup exists and equals the largest value of $x$ in the interval $[\frac{1}{2}, \frac{1}{2}+\sqrt{\ln 3-1}]$; thus the sup is the upper endpoint of the interval, $\frac{1}{2}+\sqrt{\ln 3-1}$.
|
H: Prove that a non-trivial commutative ring $R$ is a field if and only if $0$ and $R$ are the only ideals of $R$.
Prove that a non-trivial commutative ring $R$ is a field if and only if $0$ and $R$ are the only ideals of $R$.
Proof Let $R$ be a field. Let $a$ be a non-trivial ideal of $R$. Let $x$ be in $a - \{ 0\}$.
Then $R = Rx^{−1}x$ in $Rx$ in $a$ in $R$ and $a = R$.
Conversely suppose that $0$ and $R$ are the only ideals of a non-trivial
commutative ring $R$. For all $x$ in $R - \{0\}$, $Rx$ is a non-trivial ideal of $R$. Thus
$R = Rx$ and there is y in $R$ such that $1 = yx$. Therefore every element of
$R -\{0\}$ has an inverse and $R$ is a field.
So this is the proof I have but I do not understand it...
The part I do not understand is "Then $R = Rx^{−1}x$ in $Rx$ in $a$ in $R$ "
Every help is appreciated!
AI: $R$ is a field, so if you take some nontrivial ideal $a$ in $R$, there exists some $x \in a$ such that $x \not= 0$. Then $Rx \subset a \subset R$, and as $R$ is a field we have $x^{-1} \in R$, so $rx^{-1} \in R$ for all $r\in R$. Then $r = rx^{-1}x \in Rx \subset a \implies R \subset Rx \subset a \implies Rx = a = R$.
|
H: Giving away $500$ pencils: $1$ on the first day, $2$ on the second day, etc. How long will it take?
I'd like to know the name of this subject so that I discover how to solve the problem on my own.
I suppose that's something like "$(n-1) + (n-2) + \cdots$" but I'm not sure about it. =\
I have 500 pencils and I want to give them to a friend, but not all at once. I intend to give one pencil on the first day; two on the second; three on the third; and so on.
How long will it take to give away all the pencils? Will any pencils remain at the end?
I appreciate the support!
AI: After $n$ days, you have given
$$1+2+3+\dots n=\frac{n(n+1)}2$$
We are looking for the smallest integer $n$ such that
$$\frac{n(n+1)}2\geq500$$
Since
$$\frac{31(31+1)}2=496 \quad\text{and}\quad\frac{32(32+1)}2=528$$
You'll give your last pencil on the 32nd day, but you'll only give 4 pencils on that last day.
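A brute-force check of the arithmetic (a sketch, not part of the original answer):

```python
# Sketch: count the days needed to give away 500 pencils, one more each day.
remaining, day = 500, 0
while remaining > 0:
    day += 1
    remaining -= day
print(day)                   # 32
print(500 - 31 * 32 // 2)    # 4 pencils handed over on the last day
```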
|
H: Fiber preserving map that is not a bundle map
I used to think every map that is fiber preserving is a bundle map in the sense of the definition below:
Definition Let $E\xrightarrow{\pi}M$ and $F\xrightarrow{\rho}N\ $ be vector bundles and $g:M \rightarrow N\ $ a diffeomorphism between $M$ and $N$. Then, we define that $f:E\rightarrow F\ $ is a bundle map that preserve fibers if:
$$ g\circ \pi = \rho \circ f$$
But on page 209 of Foundations of Mechanics, a map $f:E \rightarrow F\ $ between two vector bundles such that $f\ $ is fiber preserving is defined, but there's a remark saying that $f$ is not necessarily a vector bundle mapping.
Question How could a map be fiber preserving without being a bundle mapping?
AI: A vector bundle mapping has to be a linear map on the fibers (the fibers are vector spaces and it has to be a linear map in the sense of vector spaces). So for a counterexample, you could take $\mathbb{R}$ thought of as the trivial bundle over a point, and any smooth map from $\mathbb{R}$ to itself that is not linear (e.g. $x \mapsto x^2$).
|
H: Math formulation for # of outcomes of 2 indistinguishable dice
I'm not sure why the equation for finding this is $(6 * 6)/{2!} + 6/3$ I understand that out of the 36 outcomes when they are distinguishable, we need to divide them by 2 to get rid of duplicates like (1,2) and (2, 1). But why are we adding $6/3$ back?
AI: In the division, you divided the number of doubles by $2$. Another way to look at it is: of the $6^2$ possible rolls, $6$ are doubles. So we have $$\frac{6^2-6}{2}$$ non-double outcomes, plus the $6$ doubles, which gives the same result.
|
H: Proving a postulate about a property of spherical vectors
Assume we have $X, Y$ constant unit vectors of $\mathbb{R}^3$
I postulate that the maximum of the function:
$(V \cdot X) (V \cdot Y)$
is reached at the halfway vector between $X,Y$, i.e. at the vector $V_0 = \mathrm{slerp}(X,Y, 0.5)$.
To try to prove it I tried finding the critical point of the derivative, i.e:
$(V'\cdot X)(V\cdot Y) + (V\cdot X)(V'\cdot Y)$
But that is leading me down a rabbit hole I don't seem to be able to get out of.
AI: Without loss of generality, choose $X=(1,0,0)$ and $Y=(\cos\phi_0,\sin\phi_0,0)$. Then (assuming $V$ is also unit vector), you can write $V$ in polar coordinates as $V=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$. Then your expression becomes
$$\sin\theta\cos\phi(\sin\theta\cos\phi\cos\phi_0+\sin\theta\sin\phi\sin\phi_0)=\sin^2\theta\cos\phi\cos(\phi-\phi_0)$$
If you want the maximum, you get $\theta=\pi/2$, so it's in the same plane.
Also $$\frac d{d\phi}\cos\phi\cos(\phi-\phi_0)=-\sin(2\phi-\phi_0)=0$$
so $$\phi=\frac{\phi_0}2$$
You will need to consider separately the case where $\phi_0=\pi$
|
H: Relation between the radius of $n$ identical circles and the radius of an enclosing tangent circle
$n$ small circles are tangent to each other and tangent to the big circle. Here's a figure for $n=4$:
I am asking for hints on how to find the ratio between the radius of the small circles and that of the big circle.
I tried doing some trigonometrics on the arcs, but failed.
Also tried this but I'm not sure about it:
Assuming $r$ is the small radius, $R$ is the radius of big one, $n$ is the amount of small circles inside the big and $x$ is the distance from the center of big circle to the center of small circle.
$$\sin\frac\pi n = \frac r x \tag{1}$$
But we don't know what $x$ is yet. We can get it from
$$x + r = R \tag{2}$$
and combine the two equations:
$$x = R - r = \frac{r}{\sin\frac\pi n} \tag{3}$$
$$R = r \left(1 + \frac{1}{\sin\frac\pi n} \right) \tag{4}$$
Therefore,
$$R \sin\frac\pi n = r \left(\sin\frac\pi n + 1 \right) \tag{5}$$
AI: The centers of the small circles form a regular polygon with sides $2r$. You can find the circumradius of this polygon, then increase it by $r$ to find $R$. The circumradius is $\frac{2r}{2\sin \frac \pi n}=\frac{r}{\sin \frac \pi n}$. Then $R=\frac{2r}{2\sin \frac \pi n}+r$, which is equivalent to your $R\sin \frac \pi n=r(\sin \frac \pi n+1)$.
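A quick numerical check of this formula (a sketch, not part of the original answer): with $R = r\left(1 + \frac{1}{\sin\frac\pi n}\right)$, adjacent small-circle centers should be exactly $2r$ apart, i.e. the small circles are tangent.

```python
# Sketch: check that adjacent small circles are tangent for the derived R.
import math

def tangency_holds(n, r=1.0):
    R = r * (1 + 1 / math.sin(math.pi / n))
    d = R - r                               # distance of small-circle centers from the origin
    chord = 2 * d * math.sin(math.pi / n)   # distance between adjacent centers
    return math.isclose(chord, 2 * r)

print(all(tangency_holds(n) for n in range(2, 20)))  # True
```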
|
H: Net based on a filter
Assume a topological space X, t. I read a proposition in my textbook:
A filter A $\to$ y if and only if every net {$s_a$}, a $\in$ A, based on A also converges to y.
What are the net elements $s_A$ in this case? Are they elements of the topological space X or are they elements of the filter A? Also, why is the index set, a $\in$ A, when A is a filter and not a directed set?
AI: Every filter is a directed set. Here, for instance, $\langle A,\supseteq\rangle$ is a directed set: you can easily check that the conditions that make $A$ a filter imply those that make $\langle A,\supseteq\rangle$ a directed set. The $s_a$ are points of $X$: for each $a\in A$ we choose any point $s_a\in a$. Then $\langle s_a:a\in A\rangle$ is a net indexed by the directed set $\langle A,\supseteq\rangle$.
|
H: If positive integers $a$, $b$, $c$ satisfy $\frac1{a^2}+\frac1{b^2}=\frac1{c^2}$, then the sum of all values of $a\leq 100$ is ...
I'm struggling to solve the following problem. I would like hint (just a hint, not a full solution please) on how to solve it:
The positive integers $a$, $b$, and $c$ satisfy $$\dfrac1{a^2}+\dfrac1{b^2}=\dfrac1{c^2}$$
The sum of all possible $a\leq 100$ is ...
A) $315\quad$ B) $615\quad$ C) $680\quad$ D) $550\quad$ E) $620$
(Source: 2005 Cayley (Grade 10), #25)
Primary Topics: Number Sense
Secondary Topics: Counting | Fractions/Ratios
(original problem image)
What I've done so far is that I've rearranged $1/a^2 + 1/b^2 = 1/c^2$ to get $a^2 + b^2 = (ab/c)^2$. Then this means that $a$, $b$ and $ab/c$ form a Pythagorean triple, because
$$(integer)^2 + (integer)^2 = (integer)^2$$
But I'm not sure how to proceed from there, I'd really appreciate a hint.
Thanks in advance!
AI: Hint 1) $a$ must be a multiple of $5$.
Hint 2) You need only consider multiples of the $(3,4,5)$ and $(5,12,13)$ triangles.
|
H: Let $f: X \to Y$, $\mathcal{C}$ be an arbitrary collection of subsets of $Y$, then $f^{-1}(\sigma(\mathcal{C})) \subset \sigma(f^{-1}(\mathcal{C}))$
I have already read the solution to my question at https://math.stackexchange.com/a/1530785/81560, but unfortunately, the notation is confusing.
I am trying to prove:
Let $f: X \to Y$, and $\mathcal{C}$ be an arbitrary collection of subsets of $Y$. Then $f^{-1}(\sigma(\mathcal{C})) \subset \sigma(f^{-1}(\mathcal{C}))$.
I have already shown that
$$\mathcal{A} = \{A \subset Y: f^{-1}(A) \in \sigma(f^{-1}(\mathcal{C}))\}$$
is a $\sigma$-algebra of subsets of $Y$. Now Yeh's Real Analysis, 3rd edition (p. 8) goes on to say
Clearly $\mathcal{A} \supset \mathcal{C}$
and this is the one part of the proof I don't understand. Why is this clearly true?
AI: If $C\in\mathcal{C}$, then $f^{-1}[C]\in f^{-1}[\mathcal{C}]\subseteq\sigma(f^{-1}[\mathcal{C}])$, so $C\in\mathcal{A}$.
|
H: Find the stationary points of $f(x,y)=5y\sin(3x)$
Given the function $f(x,y)=5y\sin(3x)$ find the stationary points.
I found $f_x=15y\cos(3x)$. Solving $f_x=0$, I got $y=0,x=\frac{(2n+1)\pi}{6}$
Similarly, $f_y=5\sin(3x)$. Solving $f_y=0$, I got $x=\frac{n\pi}{3}$
I thought my stationary points would be $(\frac{(2n+1)\pi}{6},0$) and $(\frac{n\pi}{3},0)$. But the solutions say that it is only $(\frac{n\pi}{3},0)$.
Can someone please explain to me what I'm missing here or not understanding?
AI: At the stationary point, both derivatives must be equal to zero. If $x=\frac{(2n+1)\pi}{6},\; y=0$, then
$$
f_y= 5\sin 3x= 5\sin \frac{(2n+1)\pi}{2}=5\sin\left(n\pi+\frac{\pi}2\right)=\pm 5\ne 0.
$$
|
H: Let G be a group of order p^n suppose that H is a normal subgroup of G
Prove that there exists a normal subgroup H' a subset of G such that [H':H] = p
AI: The number of subgroups of $H$ with index $p$ is congruent to $1\pmod p$. We look at the orbits of these subgroups under conjugation by elements in $G$; of course, they form orbits of size $p^k$ for some non-negative integer $k$. Since the sum of the sizes of these orbits is congruent to $1\pmod p$, at least one of them must be $1$; in other words, it is normal in $G$.
|
H: If $A = [-5, 3)$ and $B = (1, \infty)$, is $3$ a member of $A \cap B$?
If $A = [-5, 3)$ and $B = (1, \infty)$, what is $A \cap B$?
Since $3$ is not a member of $A$, do we include it in the intersection?
AI: This question might make more sense if you try to visualise it. When you plot it on the number line, it will be clear to you. You should be getting $A\cap B$ to be $(1,3)$.
|
H: How to solve $vv''+v'^2-6t^2=0$
Hi, I have been trying to solve this equation for a week now.
However, I keep ending up with the same result every time; it may be because my knowledge of this chapter is not great.
I am unable to figure it out, please help.
Below is the question and the conditions.
$$
\begin{cases}
vv''+v'^2-6t^2=0 \\
v'(0)=0,\\
v(0)=1,\\
v>0\quad\forall t
\end{cases}$$
They are asking to use the substitution $z=vv'$ to solve the equation, and
here you can find my attempt, but I am obviously unsure if it is the correct way of solving the exercise.
Thank you so much, any help is appreciated.
AI: You can notice that $\int v v'\,dt = \frac{v^2}{2}$, which suggests the substitution $u = v^2$ (so $u' = 2vv' = 2z$). You then get $u'' -12t^2=0$, which is solvable with the standard methods for differential equations.
$u'' = 12t^2 \implies u = t^4+c_1t+c_2 \implies v = \sqrt{t^4+c_1t+c_2}$
Now, we plug in initial conditions and we are done:
$v(0) = 1 = \sqrt{0^4+c_1(0)+c_2} \implies c_2 = 1$
$v'(0) = 0 = \frac{4(0)^3+c_1}{2\sqrt{0^4+c_1(0)+1}} \implies c_1=0$
So the solution is:
$v = \sqrt{t^4+1}$
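A symbolic check of this solution (a sketch with SymPy, not part of the original answer):

```python
# Sketch: substitute v = sqrt(t^4 + 1) back into v v'' + (v')^2 - 6 t^2 = 0.
from sympy import symbols, sqrt, diff, simplify

t = symbols('t')
v = sqrt(t**4 + 1)
residual = v * diff(v, t, 2) + diff(v, t)**2 - 6 * t**2
print(simplify(residual))                      # 0
print(v.subs(t, 0), diff(v, t).subs(t, 0))     # 1 0  (matches the initial conditions)
```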
|
H: Monotonically increasing/decreasing functions
I am having some trouble with this. I can see that the last two are true by testing them with a few functions, but I don't know how to prove them. Also, I am not too certain on the first two. Can I get some pointers on how to do this? Thank you.
Definition: Let $A,B\subseteq \mathbb R$. A function $f:A\rightarrow B$ is called strictly monotonically increasing, if for all $a,b\in\mathbb R$ with $a<b$
it holds that $f(a)<f(b)$. It is called strictly monotonically decreasing if for all $a,b\in\mathbb R$ with $a<b$, it holds that $f(a)>f(b)$.
Show that a strictly monotonically increasing function is injective.
Let $A,B\subseteq\mathbb{R}$ and let $f:A\rightarrow B\quad f(x)=i$ where $a=a_i$ be a strictly monotonically increasing function. By the definition, this means that for all $a,b\in\mathbb{R}$ with $a<b$, we have $f(a)<f(b)$. This implies that all $a_i$'s are distinct, so we know that $f$ is injective.
Is a monotonically increasing function always surjective? Justify.
No. Counterexample: Consider the monotonically increasing function $f:\mathbb{Z}\rightarrow\mathbb{Z}\quad f(x)=5x-7$. Then consider $0\in\mathbb{Z}$. As $f(x)=0\leftrightarrow x=\frac{7}{5}$. But $\frac{7}{5}\not\in\mathbb{Z}$. Thus, we conclude that $f$ is not surjective.
Let $f:A\rightarrow B$ be strictly monotonically decreasing and $g:B\rightarrow C$ be strictly monotonically increasing.
Show that $g\circ f$ is strictly monotonically decreasing.
Prove that if $f:A\rightarrow B$ is strictly increasing and surjective then the inverse function $f^{-1}$ is also strictly monotonically increasing.
AI: Your proofs for the first two questions are fine! I would word the injectivity proof differently, however -- it's slightly unclear what all the variables refer to. For instance
Suppose $x,y\in A$ with $x<y$. Then $f(x)<f(y)$ so they must be distinct.
As for the third, suppose $x>y$. Then $f(x) < f(y)$ (because $f$ is decreasing) and thus $$g(f(x))<g(f(y))$$ (because $g$ is increasing). Hence for all $x>y$, $$g\circ f(x)<g\circ f(y)$$ which is, of course, the definition for strictly monotonically decreasing.
Finally for the fourth one, let $f:A\to B$ be strictly increasing and surjective. By part (1), it's injective and so it is bijective and thus $f^{-1}$ is well-defined. Suppose that $x>y$, with $x,y\in B$ We want to prove $$f^{-1}(x)>f^{-1}(y).$$ Suppose for sake of contradiction that $$f^{-1}(x) \le f^{-1}(y).$$ Clearly the two can't be equal -- because the inverse function is bijective, and $x\neq y$. Thus $$f^{-1}(x)<f^{-1}(y).$$ We now apply $f$ to both sides and get $x<y$ (because $f$ is monotonically increasing), a contradiction. Hence the statement must be true.
|
H: isomorphism between a field and non-field rings
I think that once you have a field and a ring which is not a field, you can conclude that there is no isomorphism between these two.
Is it right? If not, is there an example? If true, can someone give a rigorous proof?
AI: This is correct, and the intuition should be that "being a field" is a property that can be stated entirely in terms of ring-theoretic statements, and therefore must be preserved by isomorphism. Here's the proof in full detail.
Suppose $F$ is a field and $R$ is a ring and $\varphi : F \to R$ is a ring isomorphism.
First, we will show that $R$ is commutative. Let $a,b \in R$ be arbitrary. Since $\varphi$ is surjective, $a = \varphi(x)$ and $b = \varphi(y)$ for some $x,y \in F$. Then
$$ab = \varphi(x) \varphi(y) = \varphi(xy) = \varphi(yx) = \varphi(y) \varphi(x) = ba.$$
Since $a$ and $b$ are arbitrary, $R$ is commutative.
Next, since $0 \neq 1$ in $F$, $\varphi$ is injective, and $\varphi(0) = 0, \varphi(1) = 1$, $0 \neq 1$ in $R$.
Finally, we will show that nonzero elements of $R$ are invertible. Let $r \in R \setminus \{0\}$ be arbitrary. Since $\varphi$ is surjective, there is some $x \in F$ such that $\varphi(x) = r$. Since $r \neq 0$ and $\varphi(0) = 0$, $x \neq 0$. Since $F$ is a field, there is some $y \in F$ such that $xy = yx = 1$. Now
$$1 = \varphi(1) = \varphi(xy) = \varphi(x) \varphi(y) = r \varphi(y)$$
and
$$1 = \varphi(1) = \varphi(yx) = \varphi(y) \varphi(x) = \varphi(y) r,$$
so $r$ is invertible in $R$.
This completes the proof that $R$ is a field.
|
H: Question on proving continuity
My goal is to show that $(\vec{y})^{-1}$ is continuous, where $\vec{y}(x)= (x, \frac{1}{x})$ on the domain $x > 0$.
Now clearly $\vec{y}$ itself is continuous and it is bijective; I first thought to use the theorem that states if a function $f$ has a compact domain and is continuous then its inverse is also continuous. However, the domain $(0, \infty)$ clearly is not compact so no luck there.
There is also no luck using the inverse function theorem, since the Jacobian matrix of $\vec{y}$ is singular.
Maybe I am overthinking this but I do not know what else I can do to achieve my goal. Any thoughts?
AI: Let $A = \left\{\left(x, \frac{1}{x}\right) : x \in (0, \infty)\right\}$, the domain of $\vec{y}^{-1}$. Note that, for any point $(a, b) \in A$, i.e. where $b = \frac{1}{a}$, then $\vec{y}^{-1}$ maps $(a, b)$ to $a$. That is, $\vec{y}^{-1}$ is simply a restriction of the projection function $\pi_1(x, y) = x$. This projection function is continuous (indeed, non-expansive), hence so is its restriction $\vec{y}^{-1}$.
|
H: Determine the largest size n of a problem that can be solved in time t , assuming that the algorithm to solve the problem takes f(n) microseconds.
For each function f(n) and time t in the
following table, determine the largest size n of a problem that can be solved in time t ,
assuming that the algorithm to solve the problem takes f(n) microseconds.
AI: I will do the first two as an example to help you do the rest. Since a second is $10^6$ microseconds, by solving an equation which relates $f(n)$ to the time allotted for $f(n)$ to run, we can solve for the largest input $n$ that $f$ can run on within the time limit.
$1$ second:
$log(n^2) = 1,000,000 \implies n^2 = e^{1,000,000} \implies n = e^{500,000}$
$1$ minute:
$log(n^2) = 60,000,000 \implies n^2 = e^{60,000,000} \implies n = e^{30,000,000}$
The rest can be done similarly.
P.S. Make sure to floor the values of $n$ you get from these equations, because $n$ is an integer input size.
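When $f$ has no convenient closed-form inverse, one can also find the largest admissible $n$ numerically (a sketch, not part of the original answer; `largest_n` is an illustrative helper and $f(n)=n^2$ is just an example):

```python
# Sketch: largest n with f(n) <= budget microseconds, via doubling then binary search.
def largest_n(f, budget_us):
    if f(1) > budget_us:
        return 0
    hi = 1
    while f(2 * hi) <= budget_us:   # grow the bracket until we overshoot
        hi *= 2
    lo, hi = hi, 2 * hi             # now f(lo) <= budget < f(hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if f(mid) <= budget_us:
            lo = mid
        else:
            hi = mid
    return lo

# Example: f(n) = n^2 microseconds with a one-second budget (10^6 microseconds)
print(largest_n(lambda n: n * n, 10**6))  # 1000
```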
|
H: How can it be physically justified that the energy is conserved for the solution of wave equation with Neumann boundary condition?
I have learned that the IBVP: $$u_{tt}-c^{2}u_{xx}=0,$$ $$u(x,0)=f(x),\,u_{t}(x,0)=g(x),$$ $$u_{x}(0,t)=u_{x}(L,t)=0$$
has the unique solution $\displaystyle u(x,t)=\frac{1}{2}(A_{0}+B_{0}t)+\sum_{n=1}^{\infty}[A_{n}\cos(\omega_{n}t)+B_{n}\sin(\omega_{n}t)]\cos\left(\frac{n\pi x}{L}\right)$ with $\displaystyle A_{n}=\frac{2}{L}\int_{0}^{L}f(x)\cos\left(\frac{n\pi x}{L}\right)dx,\,n=0,1,2,\cdots$, $\displaystyle B_{n}=\frac{2}{\omega_{n}L}\int_{0}^{L}g(x)\cos\left(\frac{n\pi x}{L}\right)dx,\,n=1,2,3\cdots$ and $\displaystyle B_{0}=\frac{2}{L}\int_{0}^{L}g(x)dx$ if some assumptions on $f$ and $g$ are met.
Also, by differentiating $\displaystyle e(t)=\frac{1}{2}\int_{0}^{L}[u_{t}^{2}(x,t)+c^{2}u_{x}^{2}(x,t)]dx$, the quantity known as energy, I understood that $e'(t)=0$ hence the energy is conserved for this problem. However, the solution implies that the zeroth-order term is a linear function of $t$ so the solution grows as $t\rightarrow\infty$. This makes me confused about how the conservation of energy should be intuitively (or physically) interpreted (in fact, higher-order terms essentially represent oscillation so I think they are not of problem).
How can the conservation of energy be interpreted when the solution seems to keep increase as time increases?
AI: The same thing applies to an object in inertial movement. Its position in space is $u = u_0 + v t$ which is unbounded, but its kinetic energy $m v^2/2$ is constant. The reason is that only the derivative with respect to time appears in the energy.
|
H: $M$ is a simple $A\oplus B$-module implies $M$ is either a simple $A$-module or a simple $B$-module?
In section 1.6 of Lectures on Algebraic Categorification by Mazorchuk, it is mentioned that if $A$ and $B$ are finite dimensional unital $k$-algebras, ($k$ a field) and $M$ is simple as an $A\oplus B$-module, then $M$ is in fact either simple as an $A$-module or a $B$-module. Why is this?
I'm a bit confused by a few points. First, I'm pretty sure $A\oplus B$ really is the algebra with underlying set $\{(a,b):a\in A,\ b\in B\}$, with pointwise operations, and is not notation for the coproduct $A\otimes_k B$, since that case is treated in the next paragraph.
But then identifying $A$ with the subset $(A,0)$ and $B$ with $(0,B)$, one has $AB=BA=0$. If $M$ is not a simple $A$-module, there is $m\in M$ nonzero such that $0\neq Am\neq M$. But since $M$ is a simple $A\oplus B$-module,
$$
M=(A\oplus B)(Am)=Am+BAm=Am
$$
since $M$ is a simple $A\oplus B$-module, a contradiction. Doing the same argument with $B$ seems to show $M$ is both simple as an $A$-module and as a $B$-module. Is my reasoning flawed? I find it strange that the word 'or' is used instead of 'and' in that case in the original claim, even if it's still technically correct.
AI: In this set-up, each $A\oplus B$-module has the form $M=M_1\oplus M_2$
where $M_1$ is an $A$-module and $M_2$ is a $B$-module.
Then $M$ is simple iff either $M_1$ is a simple $A$-module and $M_2=0$
or the other way round.
The ring $A\oplus B$ has central idempotents $e_1=(1,0)$ and $e_2=(0,1)$
with $e_1+e_2=1$. Then $M=M_1\oplus M_2$ with $M_i=e_i M$.
The action is given by $(a,b)(m_1,m_2)=(am_1,bm_2)$.
I think the flaw in your argument is that $M$ is in general not an $A$-module
(or a $B$-module) or at least not a unital $A$-module, since the identity element
of $A$ does not in general act trivially on $M$. In $A\oplus B$
the identity of $A$ is in effect the idempotent $e_1=(1,0)$.
|
H: Likelihood of probability of heads = 0.5 given that the coin yields 4 heads in 9 tosses.
Suppose I have a coin.
The probability of heads is unknown.
I toss the coin 9 times and get 4 heads.
What is the likelihood that the probability of heads ($P_h$) is $0.5$?
What I have tried till now
Likelihood of Ph being 0.5 given I have 4 heads from 9 tosses = Probability of getting 4 heads from 9 tosses when Ph is 0.5/Probability of getting 4 heads from 9 tosses when Ph varies from 0 to 1
In equation form:
P(Ph=0.5|4 heads from 9 tosses) = P(4 heads from 9 tosses|Ph=0.5)/P(getting 4 heads from 9 tosses)
Since coin tosses follow binomial distribution
$P(\text{4 heads from 9 tosses}\mid P_h=0.5) = \binom{9}{4} (0.5)^4 (1-0.5)^5 \tag{1}$
$P(\text{getting 4 heads from 9 tosses}) = \binom{9}{4} \int_0^1 P_h^4(1-P_h)^5 \, dP_h \tag{2}$
where Ph varies from 0 to 1
But dividing eqn (1) and (2) yields a number greater than 1.
What am I doing wrong?
AI: The likelihood is only the numerator of your calculation
$L(\frac{1}{2})=\binom{9}{4}(\frac{1}{2})^9\approx 0.246$
what you were trying to calculate is the posterior density of $\theta$:
$$f(\theta|\mathbf{x})=\frac{\binom{9}{4}\theta^4(1-\theta)^5}{\int_0^1 \binom{9}{4}p^4(1-p)^5 dp}=1260\cdot\theta^4(1-\theta)^5$$
Which is a $Beta(5;6)$ distribution
Please, for future posts, use MathJax
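A quick numerical check of both numbers above (a sketch, not part of the original answer):

```python
# Sketch: verify the likelihood value and the posterior's normalizing constant.
from math import comb, factorial

likelihood = comb(9, 4) * (1 / 2) ** 9
print(round(likelihood, 3))              # 0.246

# B(5, 6) = 4! * 5! / 10!, so the Beta(5, 6) density is theta^4 (1 - theta)^5 / B(5, 6)
B = factorial(4) * factorial(5) / factorial(10)
print(round(1 / B))                      # 1260
```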
|
H: Find the sum of all coefficients of the polynom $p(x) = (x ^4 − 3x^3 − x + 1)^9$
A polynomial $$p(x) = (x ^4 − 3x^3 − x + 1)^9$$ is written in standard form. Find the sum of all its coefficients ?
AI: For any polynomial $p$ the sum of all the coefficients is simply $p(1)$. So the answer is $(1-3-1+1)^{9}=-2^{9}$.
(If $p(x)= \sum\limits_{k=0}^{n} a_kx^{k}$ then $p(1)=\sum\limits_{k=0}^{n} a_k$).
|
H: Proving $|a+b|\le |a|+|b|$ from $-|a|\le a \le |a|$
In Spivak Calculus chapter 1, question no. 14, it is asked to prove the aforementioned inequality. However, the way I proved it is unnecessarily long. Can someone critique it for me, and mention an alternate shorter proof method if any?
My proof:
$-|a|\le a \le |a| \implies -|a+b|\le a+b \le |a+b|$
Case 1: $a+b\ge 0$
$-|a+b|\le a+b \le |a+b| \implies -(a+b)\le a+b \le (a+b) \implies -a-b\le a+b \le a+b $
Subcase 1: $a\ge 0 ,b\ge 0$:
$-a-b\le a+b \le a+b \implies -|a|-|b|\le a+b \le |a|+|b| \implies -(|a|+|b|)\le a+b \le (|a|+|b|) \\ \implies |a+b| \le |a|+|b|$
Subcase 2: $a\ge 0 ,b< 0$:
$-a-b\le a+b \le a+b \implies -|a|+|b|\le a+b \le |a|-|b| \implies |a+b| \le |a|-|b| \implies |a+b| \le |a|+|b|$
Subcase 3: $a<0, b \ge 0$ Since the inequality is symmetric in $a$ and $b$, Subcase 2 applies on this one too.
Subcase 4: $a < 0,b < 0$: Not applicable
I analyzed similarly the four subcases for case 2: $a+b \le 0$ but you can see the proof is getting too long.
AI: Adding
$$-|a|\le a\le |a| $$ and
$$-|b|\le b\le |b|.$$
you get
$$-(|a|+|b|)\le a+b\le|a|+ |b|,$$
which is
$$-(a+b)\le|a|+|b|\land a+b\le|a|+|b|$$
or
$$|a+b|\le|a|+|b|.$$
|
H: Question in a theorem related to topic weak topology
I have done a course on topology and I am self studying concepts that were not taught in class from the book Foundations of Topology ( C Wayne Patty).
I got stuck on this theorem.
My question is about line 4 of the theorem: how does the author conclude that $\langle x_n\rangle \cup \{x\}$ is compact?
Image of Theorem 1.41 ->
I couldn't think how it must hold. Please help.
AI: If $(U_i)$ is an open cover of $<x_n> \cup \{x\}$ then $x \in U_j$ for some $j$. Since $x_n \to x$ there exists $m$ such that $x_n \in U_j$ for all $n \geq m$. For $1\leq n <m$ there exists $U_{i_n}$ such that $x_n \in U_{i_n}$. Now $U_j, U_{i_1},U_{i_2},...,U_{i_{m-1}}$ cover $<x_n> \cup \{x\}$.
|
H: Finding the integral curves of a vector field
I have been stuck on this question for very long now
Given the vector field
$$
F(x, y) = (5x + 2y, 3x + 7y)
$$
And the ODE system:
$$
x' = 6x + 9y\\
y' = 1x + 4y.
$$
Determine the integral curves to $F$ by finding the general solution to the ODE system.
My problem:
I have tried to solve the general solution and got:
$$
v(t) = c(2,-5)e^{-t} + d(5, 2)e^{5t}.
$$
But I don't know how to get the integral curves.
AI: It seems that you've arrived at the solutions
$$x = ce^{-t} + de^{5t}, \; y = -ce^{-t} + de^{5t}.$$
From here, note that $x+y = 2de^{5t}$ and $x-y = 2ce^{-t}$. For any given solution, $c$ and $d$ are fixed. So we hence have that
$$(x+y)(x-y)^5 = 2d(2c)^5 e^{5t} e^{-5t} = 64c^5d$$
which is constant. So we have integral curves of form
$$(x+y)(x-y)^5 = a,$$
for $a \in \mathbb R$.
|
H: How to re-express a continuous function with a singularity (but with well-defined limit) to avoid the singularity?
I have a function $\frac{x^{1-\gamma}-(xy)^{\frac{1-\gamma}{2}}}{y - x}$ that has a singularity at $y=x$ but the limit as $x \to y$ is well defined, namely, $\lim_{x \to y}\frac{x^{1-\gamma}-(xy)^{\frac{1-\gamma}{2}}}{y - x} = \frac{1}{2}(\gamma - 1)y^{-\gamma}$. My goal is to re-express the function to avoid this singularity and I would like to know if there exists a theorem that guarantees the existence of such a function. Also, is there a general technique for tackling such a problem or is it more based on experience and trial and error? $x$, $y$, and $\gamma$ are all real, positive values.
As an example, I have a similar function, $\frac{y - (xy)^{\frac{1}{2}}}{y - x}$ that I can re-express as $\frac{y + (xy)^{\frac{1}{2}}}{x + y + 2(xy)^{\frac{1}{2}}}$ to avoid the singularity.
AI: If you mean to find a “closed” expression, then you can't, in general. Consider the function
$$
f(x)=\begin{cases}
\dfrac{\sin x}{x} & x\ne 0 \\[4px]
1 & x=0
\end{cases}
$$
This is not expressible under a “single formula” and your case is very similar. Indeed, you have
$$
f(x)=\frac{x^{1-\gamma}-(xy)^{\frac{1-\gamma}{2}}}{y - x}
$$
The limit for $x\to y$ is the derivative at $y$ of $g(x)=(xy)^{(1-\gamma)/2}-x^{1-\gamma}$.
If you set $\delta=(1-\gamma)/2$, you have
$$
g(x)=(xy)^{\delta}-x^{2\delta}
$$
For rational $\delta$ you can, in a finite time, find an expression of the form $g(x)=(x-y)q(x)$, for a quite complicated $q$, the complexity depending on the least positive denominator needed to express $\delta$. But for irrational $\delta$ this is not possible: such an irrational has rational numbers in every neighborhood, and the formula would be different for each of them, although they should transform "continuously" into one another (one can make precise what "continuous" means).
|
H: Find the common ratio of the progression.
A geometric progression has 625 as the first term. The product of its first 3 terms is equal
to the product of its first 6 terms. Find the common ratio of the progression.
AI: Write the terms of the progression as $a_n = 625 r^n$. This gives that our first term, $a_0$, is $625 = 5^4$; call this $a$. The product of the first three terms is hence $a \cdot ar \cdot ar^2 = a^3r^3$, and similarly, we have the product of the first six terms is $a^6r^{15}$. Thus either $r=0$ or
$$a^3 r^3 = a^6 r^{15},$$
giving $a^3r^{12} = 1$, or $5^{12} r^{12} = 1$.
So $r = \pm\frac 15$ or $0$.
Edit: as Yves correctly points out, $r=0$ isn't actually allowed in the definition of a GP, so unless otherwise specified, the answer is $\pm\frac 15$.
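A quick check with exact arithmetic (a sketch, not part of the original answer), for the case $r=\frac15$:

```python
# Sketch: with a = 625 and r = 1/5, the products of the first 3 and first 6 terms agree.
from fractions import Fraction

a, r = 625, Fraction(1, 5)
terms = [a * r**n for n in range(6)]
p3 = terms[0] * terms[1] * terms[2]
p6 = p3 * terms[3] * terms[4] * terms[5]
print(p3, p6, p3 == p6)   # 1953125 1953125 True
```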
|
H: Combination definition question
I am relatively new to math.
I am confused by the formula given on this Wikipedia page on
combinations, where it states in the first paragraph
$$\displaystyle C^n_k={n\choose k}=\frac{n(n-1)…(n-k+1)}{k(k-1)…1},$$
which can be written using factorials as $\displaystyle\frac{n!}{k!(n-k)!}.$
How do these two formulas equate to each other? In the first formula I understand the denominator is $k!$, but what does '$n(n-1)...(n-k+1)$' mean?
AI: $n(n-1)\cdots(n-k+1)=\frac{n(n-1)\cdots(n-k+1)(n-k)(n-k-1)\cdots1}{(n-k)(n-k-1)\cdots1}=\frac{n!}{(n-k)!}$
hence:$$\frac{n(n-1)\cdots(n-k+1)}{k!}=\frac{n!}{k!(n-k)!}$$
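A small numerical illustration of the two equal forms (a sketch, not part of the original answer):

```python
# Sketch: the falling-factorial form and the factorial form of C(n, k) agree.
from math import comb, factorial, prod

def falling_form(n, k):
    return prod(n - i for i in range(k)) // factorial(k)

n, k = 10, 4
print(falling_form(n, k), factorial(n) // (factorial(k) * factorial(n - k)), comb(n, k))
# all three print 210
```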
|
H: What is this tensor operation called?
I'm wondering if the following operation has a name:
$$
\begin{pmatrix}
a & b & c \\
d & e & f \\
g & h & i \\
\end{pmatrix} ? M =
\begin{pmatrix}
aM & bM & cM \\
dM & eM & fM \\
gM & hM & iM \\
\end{pmatrix}
$$
In other words, the operation multiplies each element of a rank-$n$ tensor $N$, as a scalar, by the whole second rank-$m$ tensor $M$, creating a rank-$(n+m)$ tensor.
(I need to perform it in a program I am writing. Before I implement it myself, I would like to check if it already exists in the libraries I am using. The problem is I don't have a clue what to search for. And the operation seems rather elementary, so it likely exists.)
Edit: I edited the question to make it less confusing and use proper nomenclature.
AI: This operation is called the Kronecker product. Here is the Wikipedia link: https://en.wikipedia.org/wiki/Kronecker_product
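Since the asker mentioned looking for an existing library routine: for 2-D arrays, NumPy exposes exactly this block-scaling operation as `np.kron` (a sketch, not part of the original answer):

```python
# Sketch: the operation described in the question, for matrices, is numpy.kron.
import numpy as np

N = np.array([[1, 2],
              [3, 4]])
M = np.array([[0, 5],
              [6, 7]])
print(np.kron(N, M))
# [[ 0  5  0 10]
#  [ 6  7 12 14]
#  [ 0 15  0 20]
#  [18 21 24 28]]
```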
|
H: Rudin POMA Chapter 1 exercise 18
Question:
If $k \geq 2$ and $x \in R^k$ ,prove that there exists $y \in R^k$ such that $y \neq 0$ but $x \cdot y = 0$. Is this also true if k = 1?
Proof:
$$|x+y|^2 = (x+y) \cdot (x+y) = |x|^2 + |y|^2 +2(x\cdot y)$$
$$\text{As} \space (x \cdot y)=0$$
$$\text{Then} \space|x|^2 + |y|^2 =|x+y|^2$$
So any $x$ and $y$ that fulfils this this equation will form a right triangle and $x \cdot y = 0$
For the case where $k=1$, suppose $x$ and $y$ are non zero, then their dot product is $xy$, which must be nonzero as $R^1$ is an ordered field.
Does this suffice as a proof? Is it missing anything?
AI: You did not complete the proof of the first part. You cannot end the proof by saying 'any $x$ that fulfils....'.
When $y=0$ you can take $x=(1,1,..,1)$. For $y \neq 0$ there is at least one vector $z$ linearly independent of $y$ because $k \geq 2$. Now take $x=z-ay$ where $a=\frac {y.z} {\|y\|^{2}}$. Then $x.y=0$ and $x \neq 0$.
|
H: An example showing that a subspace of a compact space need not be compact.
I had to give an example showing that a subspace of a compact space need not be compact.
For this I first picked example of compact space from the book Topology by James R. Munkres section 26, solved example 2.
$X= \{0\}\cup\{\frac{1}{n} | n\in \mathbb N \}$.
This space $X$ is compact space as subspace of $\mathbb R$. Now Let,
$Y= \{\frac{1}{n}\mid n\in \mathbb N\}$
I claim that $Y$ is not compact as subspace of $X$. To show this I used the fact that if $Y$ is subspace of $X$ then $Y$ is compact iff every covering of $Y$ by sets open in $X$ contains a finite subcollection covering $Y$.
First I showed that each singleton set $\{\frac{1}{k} \}$ is open in $X$.
Then, $Y=\bigcup_{k=1}^{\infty} \{\frac{1}{k} \}$
So, this collection is open cover of $Y$ by sets open in $X$.
Hence, by the above fact, if $Y$ were compact then there would exist a finite subcollection of the above singleton sets covering $Y$. But this is not possible: if such a finite subcollection existed, then $Y$ would be finite, whereas $Y$ is in fact an infinite set.
So, we conclude that $Y$ is not compact set.
Is my solution is correct? If not, then please tell me corrections.
AI: This is a correct solution.
In general, as long as you pick a closed and bounded subset of $\mathbb{R}^n$ (such as a closed ball), and take a non-closed subset (e.g open ball), it satisfies the requirements of the question. Your example is similar.
|
H: Show that, the spiral and the half straight line in $\Bbb{R}^2$ are homeomorphic.
We are given a spiral say the set $A=\{(e^x\cos x,e^x\sin x)|x\in\Bbb{R}\}$ and $B=\{(y,y)|y\ge0\}$. We have to find a homeomorphim $h:A\to B$.
Intuitively by looking at their graphs in $\Bbb{R}^2$, it seems to be homeomorphic as you can always transform a spiral into a half straight line by stretching from the end (think of a string of infinite length) and vice versa.
But I want to have a rigorous construction of $h$.
Let's write $h=(h_1,h_2)$, then we must have $h(e^x\cos x,e^x\sin x)=(y,y)$ for some $y\ge0$.
Now, $h(e^x\cos x,e^x\sin x)=(h_1(e^x\cos x,e^x\sin x),h_2(e^x\cos x,e^x\sin x))=(y,y)$
So, we must have $h_1(e^x\cos x,e^x\sin x)=h_2(e^x\cos x,e^x\sin x)$
If I choose $h_1(x_1,x_2)=h_2(x_1,x_2)=x_1^2+x_2^2$, i.e. $h(x_1,x_2)=(x_1^2+x_2^2,x_1^2+x_2^2)$.
Then $h_1(e^x\cos x,e^x\sin x)=h_2(e^x\cos x,e^x\sin x)=e^{2x}$. Then choose $y=e^{2x}$.
Will that $h$ work? Can anyone verify my solution? Is there anything that I need to rectify? Thanks for help in advance.
AI: Your $h$ does not seem to work because it does not have $(0, 0)$ in its image.
In fact, I don't think your spaces are homeomorphic. If you remove the point $(0, 0)$ from the line, it remains connected but removing any point from the spiral would make it disconnected.
If you consider the open ray $\{(y, y) \mid y > 0\}$, then your solution would work with $$h(x_1, x_2) = (x_1^2 + x_2^2, x_1^2 + x_2^2).$$
EDIT: In fact, if we work with the open ray, it might be easier to construct the following homeomorphisms:
$\Bbb R \sim A.$
Consider the map $f:\Bbb R \to A$ defined by $x \mapsto (e^x\cos x, e^x\sin x)$.
This is clearly a bijection. Its inverse is also easily calculated as
$$(x, y) \mapsto \dfrac{1}{2}\ln(x^2 + y^2).$$
Note that $(0, 0) \notin A$ and so the above map is indeed well-defined. Now it is clear that both $f$ and its inverse are continuous and thus, we have the homeomorphism.
$\Bbb R^+ \sim B$. (Where $\Bbb R^+ = (0, \infty)$.)
This is quite easy as we have the map $x\mapsto (x, x)$ in one direction and the map $(x, y)\mapsto x$ in the other.
$\Bbb R^+ \sim \Bbb R$.
This is again easy as we have the maps $e^x$ and $\ln x$.
|
H: Why can I cancel in modular arithmetic?
Based on my school, the cancellation law for modular arithmetic is as stated:
For all integers $a$, $b$, $c$, $n$, with $n > 1$ and $a$ and $n$ are coprime, if $ab$
$≡ ac $( $mod$ $n$), then $b ≡ c$ ( $mod$ $n$ ).
Apparently, the proof for this was to multiply both sides by $a^{-1}$.
2 questions then stem from this:
1) If you do modular multiplication, shouldn't you multiply the modulus as well?
If $a \equiv b \mod n$, then $ma\equiv mb \mod {nm}$. Why isn't this happening when $a^{-1}$ is multiplied on both sides, i.e. why don't I see an $a^{-1}$ in the modulus?
2) Isn't the multiplicative inverse modulo $n$ such that $a^{-1}a \equiv 1\ (\mathrm{mod}\ n)$ (i.e. it must be congruent to $1$ modulo $n$)?
$\boxed{\text{Solve the equation $5 x+13 y=75$ for integers $x, y\quad$ }}$
Such an equation is called a $\color{red}{\text{Diophantine equation}}$.
Re-write: $5 x=75-13 y$
Then $5 x \equiv 75(\bmod 13),$ by Theorem $8.4 .1$ (Epp)
Re-write: $5 x \equiv 5 \cdot 15(\bmod 13)$
Note that 5 and 13 are coprime.
Thus, $x \equiv 15(\bmod 13),$ by Theorem $8.4 .9$ (Epp)
Thus, $x \equiv 2(\bmod 13),$ because 15 mod $13=2$
So $x=2$ is a solution.
Substituting back into the equation: $5(2)+13 y=75$
And thus $y=5$
(Transcribed from this image)
As you can see, on line 5, when they multiply both sides by $5^{-1}$, it's not congruent to $1$ modulo $13$?
PS:
I looked up on this possible duplicate: Why can I cancel in modular arithmetic when working modulus a prime number? but didn't seem to understand both the poster and the answerer.
AI: If $a\equiv b \mod n$, then we can write $a=b+kn$ for some $k\in\mathbb{Z}$.
So multiplying by $m$ say gives $am=bm+knm$, which can be written as $am\equiv bm \mod mn$, but also as $am\equiv bm \mod n$, with $km$ as the 'new' $k$.
$a^{-1}$ exists as $\gcd(a,n)=1$, and is an integer between $1$ and $n-1$, and doesn't appear in the modulus for the reason given above.
For part 2, $5^{-1}\cdot 5\equiv 1 \mod {13}$, and
$$5x\equiv 5\cdot15 \mod {13}$$
$$ 5^{-1}\cdot 5x\equiv 5^{-1}\cdot 5\cdot15 \mod {13} $$
$$ x\equiv 15 \mod {13}$$
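As a quick sanity check (a sketch assuming Python 3.8+, whose built-in pow supports modular inverses; none of this is from the original answer), one can verify that the inverse lives modulo $13$ and that the worked example really gives $x=2$, $y=5$:
inv5 = pow(5, -1, 13)          # multiplicative inverse of 5 modulo 13 (Python 3.8+)
assert (inv5 * 5) % 13 == 1    # 5^{-1} * 5 is congruent to 1 mod 13 -- the modulus stays 13
x = (inv5 * 75) % 13           # x = 5^{-1} * 75 = 15 = 2 (mod 13)
y = (75 - 5 * x) // 13         # back-substitute into 5x + 13y = 75
print(x, y)                    # 2 5
assert 5 * x + 13 * y == 75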
|
H: What kind of manifold doesn't have a specific rank distribution?
For example, I know that $2$ dimensional sphere doesn't have rank $1$ tangent distribution for it vanishes at some point by the hairy ball theorem. Is there any criterion for it?
AI: If a closed oriented manifold has a distribution of rank $1$ then its Euler class vanishes.
https://mathoverflow.net/questions/38414/euler-characteristic-of-a-manifold-with-non-vanishing-vector-field
By a theorem of Thurston, there exists a foliation (integrable distribution) of codimension $1$ on a closed manifold if and only if its Euler characteristic vanishes.
https://www.jstor.org/stable/1971047?seq=1
|
H: Decide if a function is Riemann Integrable
I have the following function: $f:[-\pi,\pi]\rightarrow [-\pi,\pi]\ \ f(x,y) = \left\{\begin{matrix}
\frac{xy}{x^4+y^2}, & (x,y) \neq (0,0)\\
0,& (x,y) = (0,0)
\end{matrix}\right.$ I need to decide whether the function is Riemann Integrable or not. I am trying to do this by proving that the function is continuous; if it is, that means it is Riemann Integrable too. And it is not continuous, because I found two paths along which the limit at the origin is different. The two paths are: $x = 0$ and $x = y$. Because of this, the function is not continuous and therefore is not Riemann Integrable, right? I am not sure if this is the correct answer, so if anyone could point me in the right direction it would be great. Thank you!
AI: Because of this, the function is not continuous and therefore is not Riemann Integrable, right?
Not right. A discontinuous function can still be Riemann integrable. For example, consider the function $f:[-1, 1] \to \Bbb R$ defined as
$$f(x) = \begin{cases}1 & x \neq 0\\0 & x = 0\end{cases}.$$
This function is discontinuous at $0$ but still Riemann integrable on $[-1, 1]$.
Even in your question, the function is only discontinuous at one point and so, that would not matter.
However, your function still is not Riemann integrable. This is because your function is not bounded. To see this, let $x = 1/n$ and $y = x^2$. Then, we have
$$f(x, y) = \dfrac{x^3}{2x^4} = \dfrac{n}{2}.$$
|
H: Explain a section of Euclid's Theorem on an infinite number of prime numbers.
I'm trying to understand Euclid's Theorem, using proof by contradiction, which states:
There are an infinite number of prime numbers.
In the book it has the following explanation: We assume that there are a finite number of prime numbers, $p_1, p_2, \dotsc, p_n$. We then consider an integer $Q$:
$$Q:= p_1 \cdot p_2 \dotsb p_n+1$$
From the Fundamental Theorem of Arithmetic we know that any composite number can be represented as the product of various prime numbers. Therefore:
$$Q=p_1^{e_1} \cdot p_2^{e_2} \dotsb p_n^{e_n}, \ \ \text{for a suitable }e_1,\dotsc,e_n \in \mathbb{N_0}$$
Since $Q>1$, there is at least one $i \in [n]$ with $e_i \neq 0$. Therefore, for $p_i$ we have that:
$$p_i \mid Q \ \text{and} \ p_i \mid (Q-1)$$
This is a contradiction to our original assumption that $p_i \geq2$. Thus there are an infinite number of prime numbers.
I'm having difficulty understanding how the fact $p_i \mid Q \ \text{and} \ p_i \mid (Q-1)$ is used to come to the contradiction.
AI: If $p\mid Q$ and $p\mid(Q-1)$, then $p$ is a factor of $Q-(Q-1)=1$. The only positive
integer factor of $1$ is $1$.
|
H: A line up question where the position and the number itself should not equal in an arrangement
There are $n$ numbers, from $1$ to $n$. Suppose that the numbers are arranged in a row, and we ask for the arrangements in which no number sits in the position equal to itself. How many such arrangements are there in total?
For example, if $n = 3$, there are only $2$ arrangements,
$2$ $3$ $1$
$3$ $1$ $2$
I have thought this question a long time, but failed to find the answer.
AI: You could try an inclusion--exclusion approach. Let's see:
The total number of unrestricted arrangements is of course $n!$.
Let $A_j$ denote the set of arrangements in which the number $j$ is in the $j^{th}$ position. Then the set of arrangements in which there exists some number in its own position is
$$
\bigcup_{j=1}^n A_j
$$
Notice for example that $|A_j| = (n-1)!$. In general, if $I \subset \{1,\dots,n\}$, then $\bigcap_{j \in I} A_j$ is the set of arrangements in which all of the numbers $i \in I$ are in their own positions. There must be $(n-|I|)!$ such arrangements.
Now, using inclusion--exclusion in this special case where the cardinality of $\bigcap_{j \in I} A_j$ depends only on the cardinality of $I$, we can compute
\begin{align}
\biggl| \bigcup_{j=1}^n A_j \biggr| &= \sum_{k=1}^n (-1)^{k-1}\binom{n}{k}(n-k)! \\
&= n!\sum_{k=1}^n (-1)^{k-1}\frac{1}{k!}
\end{align}
Subtracting this from $n!$ gives:
$$
n! \Bigl( 1 - \sum_{k=1}^n (-1)^{k-1}\frac{1}{k!}\Bigr) = n! \sum_{k=0}^n (-1)^k\frac{1}{k!}
$$
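As a numerical sanity check of this inclusion–exclusion formula (a sketch in Python using only the standard library; an illustration, not part of the original answer), one can compare it against a brute-force count over all permutations for small $n$:
from itertools import permutations
from math import factorial

def derangements_formula(n):
    # n! * sum_{k=0}^{n} (-1)^k / k!, computed with exact integer arithmetic
    return sum((-1)**k * factorial(n) // factorial(k) for k in range(n + 1))

def derangements_bruteforce(n):
    # count permutations of {0, ..., n-1} with no fixed point
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

for n in range(1, 8):
    assert derangements_formula(n) == derangements_bruteforce(n)
print([derangements_formula(n) for n in range(1, 8)])   # 0, 1, 2, 9, 44, 265, 1854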
|
H: Dirac comb Fourier transform
I have $ h(t) = a \delta (t) + \delta (t-T) + a \delta (t - 2T) $ and I found that the Fourier transform of this impulse response is $ H(f) = a + e^{-i 2 \pi f T } + a e^{-i 2 \pi f \cdot 2 T } $. Is this transform correct? I ask because my book obtained the same result but then wrote it as $ (1 + 2a \cos ( 2 \pi f T ) ) e^{- i 2 \pi f T } $.
AI: Hint.
$$
a+e^{-i\alpha}+a e^{-2 i\alpha} = \left(a e^{i\alpha}+ 1 + a e^{- i\alpha}\right)e^{- i\alpha} = (1+2a\cos(\alpha))e^{- i\alpha}
$$
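A quick numerical check of this identity (a sketch using Python's standard cmath module, for a few arbitrary values of $a$ and $\alpha$; purely illustrative):
import cmath, random

for _ in range(5):
    a = random.uniform(-2, 2)
    alpha = random.uniform(-6, 6)
    lhs = a + cmath.exp(-1j * alpha) + a * cmath.exp(-2j * alpha)
    rhs = (1 + 2 * a * cmath.cos(alpha)) * cmath.exp(-1j * alpha)
    assert abs(lhs - rhs) < 1e-12       # the two forms agree up to rounding
print("identity verified numerically")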
|
H: Describe all continuous and linear tranformations $f:(\mathbb{R}^{2},\mathscr{F}(x,y)) \rightarrow (\mathbb{R},|\cdot|)$
Let $\mathscr{B}=\{(-a,a)\times \mathbb{R}:a>0\}$. It's easy to see that $\mathscr{B}$ is the filter of neighborhoods of the origin in a topology compatible with the linear structure of $\mathbb{R}^{2}$, namely, $\mathscr{F}(0,0)=\{U \subset \mathbb{R}^{2}: U \supset (-a,a) \times \mathbb{R} \ (\exists a>0) \}$.
I already proved that every linear transformation $f:(\mathbb{R}^{2},\mathscr{F}(x,y)) \rightarrow (\mathbb{R},|\cdot|)$ of the form $f_\lambda(x,y)=\lambda x$ is continuous and that the linear transformation $\tilde{f}:(\mathbb{R}^{2},\mathscr{F}(x,y)) \rightarrow (\mathbb{R},|\cdot|)$ given by $\tilde{f}(x,y)=y$ is not continuous.
My question: How can one describe all the continuous and linear transformations $f:(\mathbb{R}^{2},\mathscr{F}(x,y)) \rightarrow (\mathbb{R},|\cdot|)$?
I believe they are all projections of the form $f_\lambda(x,y)=\lambda x$, above mentioned.
AI: Indeed, any linear transformation $\left(\mathbb R^2, \mathscr F(x, y)\right) \to (\mathbb R, \vert\cdot\vert)$ can be expressed as a linear combination of $f_1(x,y) = x$ and $\tilde f(x,y) = y$. Now, note (or prove, if it isn't given) that the sum of two continuous linear transformations with $(\mathbb R, \vert\cdot\vert)$ as range is also continuous. Can you finish from here?
|
H: Proving $f(x,n)=\lfloor x \lfloor x \lfloor x \lfloor x ...(\text{n times})\rfloor\rfloor \rfloor \rfloor $ is increasing for $x>0$
$$f(x,n)=\lfloor \underbrace{x \lfloor x \lfloor x \lfloor x \dots\rfloor\rfloor \rfloor \rfloor}_{\text{$n$ times}}$$
with $ n \in \mathbb{N}$. Can we prove this function is increasing for $x>0$? Can it be proved by induction? I tried very hard, but was not able to reach a conclusion.
AI: It can be proved by induction. We have $a \geq b$ gives $\lfloor a \rfloor \geq \lfloor b \rfloor$, which proves the case for $n=1$.
If $f(x,n)$ is increasing, then for $a\geq b>0$ we have $f(a,n) \geq f(b,n)\ge 0$ (the values are nonnegative for $x>0$, again by induction). Multiplying, $a\,f(a,n) \geq b\,f(b,n)$, which means $\lfloor a\,f(a,n) \rfloor \ge \lfloor b\,f(b,n)\rfloor$. But the left hand side is $f(a, n+1)$, and the right hand side is $f(b, n+1)$, and hence we conclude that $f(x, n+1)$ is increasing.
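A small numerical sanity check of the monotonicity (a sketch in Python; floating-point floor can misbehave exactly at integer boundaries, so this is only illustrative, not a proof):
import math

def f(x, n):
    # nested floor: f(x,1) = floor(x), f(x,k+1) = floor(x * f(x,k))
    v = math.floor(x)
    for _ in range(n - 1):
        v = math.floor(x * v)
    return v

for n in range(1, 6):
    xs = [k / 100 for k in range(1, 500)]          # grid of positive x values
    vals = [f(x, n) for x in xs]
    assert all(a <= b for a, b in zip(vals, vals[1:]))
print("f(x, n) is nondecreasing on the sample grid for n = 1..5")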
|
H: Determinant and norm of the cross product
I assume the vectors are in Euclidean space.
I know that the determinant of a vector family is the area/volume of the associated parallelogram/parallelepiped. But I also read that the norm of the cross product was the same area. So do we have $\det(u,v)=\|u\wedge v\|$? How to prove it?
Is it only true for two vectors or do we have relationships such as $\det(u,v,w)=\|u\wedge v\wedge w\|$?
AI: Note $\det(u_1\cdots u_k)$ doesn't make sense unless $(u_1\cdots u_k)$ is a square matrix, i.e. $k=n$.
(I am treating vectors as column vectors in $\mathbb{R}^n$.)
The inner product in $\Lambda^k\mathbb{R}^n$ satisfies
$$ \langle u_1\wedge\cdots\wedge u_k,v_1\wedge\cdots\wedge v_k\rangle=\det [u_i\cdot v_j] $$
That is, the $ij$ entry (of the matrix we take the determinant of) is the dot product of $u_i$ and $v_j$.
In particular the norm is given by the so-called Gramian determinant:
$$ \|u_1\wedge\cdots u_k\|^2=\det[u_i\cdot u_j] $$
If we write $U=(u_1\cdots u_k)$, not necessarily a square matrix, then this is $\det(U^TU)$.
When $U$ is a square matrix, this simplifies to $\|u_1\wedge\cdots\wedge u_k\|=\lvert\det U\rvert$, yes.
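A quick numerical illustration (a sketch assuming NumPy is available; purely illustrative): for random vectors in $\mathbb{R}^3$, the norm of the cross product agrees with the square root of the Gramian determinant $\det(U^TU)$, and in the square case with $\lvert\det U\rvert$:
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    u, v = rng.normal(size=3), rng.normal(size=3)
    U = np.column_stack([u, v])                     # 3x2 matrix, not square
    gram = np.linalg.det(U.T @ U)                   # Gramian determinant
    assert np.isclose(np.linalg.norm(np.cross(u, v)), np.sqrt(gram))

u, v, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
U = np.column_stack([u, v, w])                      # square case k = n = 3
assert np.isclose(np.sqrt(np.linalg.det(U.T @ U)), abs(np.linalg.det(U)))
print("Gramian checks passed")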
|
H: Simultaneous probability distribution
Just starting to learn probability and would appreciate if someone could explain how this type of questions are done.
I have the following simultaneous (joint) probability function:
$$\begin{array}{c|ccc} p_{X,Y}(x,y) & 0 & 1 & 2 \\ \hline 0 & 0.1 & 0.2 & 0.1 \\ 1 & 0.2 & 0.3 & 0.1 \end{array}$$
What is P(Y>0∣X=1)?
AI: I think $X$ is the variable in the columns. If not, use the same procedure with the roles reversed.
In order to respond to the question, use the definition
$\mathbb{P}[Y>0|X=1]=\frac{\mathbb{P}[Y>0;X=1]}{\mathbb{P}[X=1]}=\frac{0.3+0.1}{0.6}=\frac{4}{6}$
|
H: Calculate a line integral
I need to calculate a line integral: $\int_\gamma (2x+1)dx+(3xy+1)dy$, where $\gamma$ is the boundary of the triangle $OAB$ with $O(0,0)\ \ \ A(3,-3)\ \ \ B(-6,-6) $. I tried to solve this by parametrizing the lines $AB$, $AO$ and $OB$, but it turns out to be a hard-to-compute integral, so I was wondering if you can help me here. Thank you
AI: Let the interior of the triangle be $A$.
By Green's theorem,
\begin{align*}
\oint_{\gamma} (2x+1) \,dx + (3xy+1) \, dy &= \iint_A \left(\frac{\partial}{\partial x} (3xy+1) - \frac{\partial}{\partial y}(2x+1)\right)\,dx\,dy\\
&= \iint_A 3y \, dx \, dy.
\end{align*}
You can now parametrise the interior of the triangle, and this integral is much easier to compute than working directly.
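As a numerical cross-check (a sketch in plain Python, not part of the original answer; the boundary is traversed counterclockwise, $O \to B \to A \to O$, as Green's theorem requires): the line integral should equal $\iint_A 3y\,dA = 3\cdot\bar y\cdot\text{Area} = 3\cdot(-3)\cdot 18 = -162$ for this triangle.
def segment_integral(p, q, steps=100000):
    # midpoint-rule approximation of the integral of (2x+1) dx + (3xy+1) dy
    # along the straight segment from p to q
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        x, y = px + t * dx, py + t * dy
        total += ((2 * x + 1) * dx + (3 * x * y + 1) * dy) / steps
    return total

O, A, B = (0.0, 0.0), (3.0, -3.0), (-6.0, -6.0)
# counterclockwise boundary: O -> B -> A -> O
value = segment_integral(O, B) + segment_integral(B, A) + segment_integral(A, O)
print(value)   # approximately -162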
|
H: Short exact sequence of vector spaces
Suppose $V$ is a finite-dimensional vector space, and $U,W$ its subspaces. Is it possible to construct the following short exact sequence? $$0\rightarrow U\cap W\xrightarrow{f}U\oplus W\xrightarrow{g}U+W\rightarrow 0 \text{ (the direct sum here is external)}$$
I thought we could consider $U+W$ as $(U\oplus W)/(U\cap W)$ (the dimensions seem to permit it), then $f$ can be the standard injective inclusion $\iota$ of $U\cap W$ in $U\oplus W$ and $g$ the standard surjective "projection" $\pi$ onto the quotient space. Is this explicit enough a definition for $f$ and $g$? The exercise asks to construct these mappings, would there be any need to go into further detail than this?
AI: I think you can prove something stronger:
Let $I$ and $J$ be ideals of a ring $R$ ($R$-modules). Then the sequence below is exact:
$$
0 \to I \cap J \xrightarrow{f} I \oplus J \xrightarrow{g} I+J \to 0
$$
Indeed, you can identify $I \oplus J$ with $I \times J$. Then, you pick
$$
f: I \cap J \to I \oplus J \qquad x \mapsto (x,x)
$$
and
$$
g: I \oplus J \to I+J \qquad (x,y) \mapsto x-y
$$
The first function is obviously injective, while the second is surjective (nothing to say there, I think). Moreover, if $a \in \operatorname{Im}(f)$, then there exists $x \in I \cap J$ such that $a = (x,x)$. Then $g(a) = g(x,x) = x-x=0$, so $a \in \ker (g)$. Vice versa, if $(x,y) \in \ker(g)$ then $x = y$. Thus $(x,y) \in \operatorname{Im}(f)$.
Clearly, your exercise is a particular case of what I've just said (vector spaces over a field $K$ are just $K$-modules)
|
H: Find the Laurent series for $f(z)=\frac{1}{z(1-z)}$
I am having difficulties finding Laurent series of the above function, around these two domains
a) $1<|z|$
b) $1<|z-1|$
For a) I do
$$ \sum_{n=0}^\infty \frac{1}{z^{n+1}} - \frac{1}{z} $$
Is this correct? And how can I solve b)?
AI: Since$$f(z)=\frac1z\cdot\frac1{1-z}$$and since, when $|z|>1$, you have$$\frac1{1-z}=-\sum_{n=-\infty}^{-1}z^n,$$you have, in the same region\begin{align}f(z)&=\frac1z\left(-\sum_{n=-\infty}^{-1}z^n\right)\\&=-\sum_{n=-\infty}^{-1}z^{n-1}\\&=-\sum_{n=-\infty}^{-2}z^n.\end{align}In the other region, use the fact that$$f(z)=\frac1{z-1}\cdot\frac1{1+(z-1)}$$
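For an independent check of part a) (a sketch assuming SymPy is installed; an illustration, not part of the original answer), one can ask for the expansion at infinity, i.e. in powers of $1/z$, which should reproduce $-z^{-2}-z^{-3}-\cdots$:
from sympy import symbols, series, oo

z = symbols('z')
f = 1 / (z * (1 - z))
# expected output along the lines of: -1/z**2 - 1/z**3 - 1/z**4 - 1/z**5 + O(z**(-6), (z, oo))
print(series(f, z, oo, 6))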
|
H: Characterization of absolute value : how to tackle such problems?
In a problem we are asked to show that there existe a unique function $\alpha:\mathbb{C}\rightarrow\mathbb{R}$ such that :
$\alpha(x)=x$ for all real $x\geq0$
$\alpha(zw)=\alpha(z)\alpha(w)$ for all $z,w\in\mathbb{C}$
$\alpha$ is bounded on the unit circle $C(0,1)$
My question is : considering that the characterization of the absolute value is only hinted in the title, how would you tackle this problem ?
Would you just prove that $\alpha(z) = |z|$ ?
AI: Indeed, $\alpha (z) = |z|$ does work, so let's set about proving that this is unique.
Consider any $e^{i \theta}$. We claim that $\alpha(e^{i\theta}) = \pm 1$. If not, then note $\alpha(e^{i n \theta}) = \alpha(e^{i \theta})^n$ for all integers; taking a very positive or negative $n$ will yield $\alpha(e^{i \theta})^n$ unbounded and hence $\alpha(e^{i n \theta})$, contradicting the boundedness condition.
Note that $\alpha(e^{i \theta}) = -1$ is impossible, since $\alpha(e^{i\theta/2})^2 = \alpha(e^{i \theta}) = -1$, but a real cannot square to $-1$.
So we conclude that $\alpha(e^{i \theta}) = 1$ for all $\theta$, and hence $\alpha(re^{i\theta}) = \alpha(r) \alpha(e^{i \theta}) = r$ for all reals $r \geq 0$, giving the desired answer (by taking $r = |z|$, $\theta = \arg(z)$).
|
H: Projection of measure zero set in $\mathbb{R}^2$.
Let $E\subset \mathbb{R}^2$ be a set of measure zero in $\mathbb{R}^2$. Let $E_x,E_y$ be the projection of E on the $x$-axis and $y$-axis respectively.
Is it true that atleast one of $E_x$ and $E_y$ is measurable and measure zero in $\mathbb{R}$?
AI: No. Consider the set $E=\{(x,y):x=y\}$. This set has measure $0$ in $\mathbb{R}^2$, but its projections are measurable and do not have measure zero (each projection is all of $\mathbb{R}$).
|
H: Reference for "If $(a_n)$ is a sequence of real numbers with $\lim a_n^2=0$, then also $\lim a_n=0$"?
Could you please give me a good reference as to the below-quoted real analysis result?
If $(a_n)$ is a sequence of real numbers with $\lim a_n^2=0$, then also $\lim a_n=0$
AI: By definition of a limit, for all $\varepsilon > 0$ there exists $N \in \mathbb N$ such that $|a_n^2 - 0| < \varepsilon$ for all $n > N$.
Then we can see that the same holds for $a_n$: for any $\varepsilon > 0$, we know that for $\varepsilon_1 = \varepsilon^2$ there exists $N \in \mathbb N$ such that $a_n^2 < \varepsilon_1\hspace{0.2cm}\forall n>N$, and this gives us $|a_n| < \varepsilon\hspace{0.2cm}\forall n>N$.
|
H: Intuitive, possibly graphical explanation of why rationals have zero Lebesgue measure
I know that rationals, being a countable set, have zero Lebesgue measure. I think one way to prove it is to find an open set containing rationals that has measure less than $\epsilon$ for every $\epsilon >0$ fixed. You can do it by taking the rational points in sequence and taking intervals of length $\epsilon/2^n$. Then the union of these intervals has measure less or equal than $\epsilon$.
However I was wondering: how can I explain this intuitively? If one thinks of a dense subset, such as $\mathbb{Q}$ in $\mathbb{R}$, one thinks of something that is "so close" to the original set that it is undistinguishable, in a certain way. I think the most intuitive explanation would be that when you take those intervals, you are "scaling down" their lengths faster than how a given sequence of rational points approach a non rational one.
But this may sound a bit confusing, tricky, so I was wondering: is there a simple, intuitive, possibly graphical way of explaining to someone with very little background in math why rationals have measure zero?
AI: This is a really hard question; I think in general intuition for this sort of thing tends to come with experience, as you get used to the concepts. Having said that, I'll try to articulate the way that I think about it.
I guess the way of viewing $\mathbb{Q}$ as a subset of $\mathbb{R}$ is a load of dots on a continuous line. Obviously these dots are very close together (in fact the whole thing is nonsense because they're dense in $\mathbb{R}$), but intuitively the mental image does help to capture some of the relevant properties, particularly with an eye to the Lebesgue measure.
I would suggest constructing this set in steps, according to increasing denominator. Start with $\mathbb{Z}$. It seems pretty clear to me that this should have measure zero, since the dots are spaced out, and hence they occupy an "infinitely small" proportion of $\mathbb{R}$. Rigorously, we can prove that $\mathbb{Z}$ has measure zero by putting an interval of width $\epsilon 2^{-\lvert n \rvert}$ around each $n$.
For each $n\geq 1$, define $S_n = \{\frac{a}{b}\mid a,b\in\mathbb{Z}, b \leq n\}$ to be the set of rational numbers with denominator at most $n$. Thus, $\mathbb{Z} = S_1$. For each $n$, the elements of $S_n$ have some minimum gap between them (namely the reciprocal of the least common multiple of the denominators $1,\dots,n$), hence the same argument we used for $\mathbb{Z}$ shows that $S_n$ has measure zero for each $n$.
At each step, we have a set of measure zero. If we continue this process infinitely, we will eventually reach every rational number (i.e. for every rational number $x$, there is a finite $n$ with $x \in S_n$), so in some sense $\mathbb{Q}$ is the "limit" of these null sets, and hence it is itself null. We can certainly make this "some sense" rigorous, since $\mathbb{Q}$ is the countable union of the $S_n$, but I'm not sure that's useful for the intuition.
Obviously what I've done here is not very sophisticated, but I think it is a bit easier to visualise than just invoking countability of $\mathbb{Q}$, since we are actually "zooming in" on $\mathbb{Q}$ in an explicit way.
|
H: Simplification of formula: $\cos{2\theta} + \sin{2\theta}\tan{\phi}$
So I'm trying to understand a certain proof/derivation of a formula in physics, but I seem to struggle with the following simplification to the final result.
The given formula is:
$$ 0 = \cos{2\theta} + \sin{2\theta}\tan{\phi} $$
And it is simplified to:
$$ \theta = \frac{1}{2}\tan^{-1}{\left(-\frac{1}{\tan{\phi}}\right)} $$
I don't really see how you can get there from the given formula. I've tried moving things around in the equation but it never gives me the proper result. Anyone have any idea how this is achieved?
AI: Dividing by $\cos(2\theta)$ we obtain
$$
0 = 1 + \tan(2\theta) \tan \phi
$$
Your expression follows
P.S. : We have assumed $\theta \neq \frac{\pi}{4}$, why?
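A quick numerical check of the rearrangement (a sketch using Python's standard math module; purely illustrative): plugging $\theta=\tfrac12\tan^{-1}\!\left(-\tfrac{1}{\tan\phi}\right)$ back into the original equation should give $0$ for generic $\phi$:
import math

for phi in [0.3, 0.7, 1.0, 1.4, 2.1]:
    theta = 0.5 * math.atan(-1.0 / math.tan(phi))
    residual = math.cos(2 * theta) + math.sin(2 * theta) * math.tan(phi)
    assert abs(residual) < 1e-12
print("the rearranged solution satisfies the original equation")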
|
H: $E\int_0^T|X(t)|^2dt=0$ then $X=0 dP\times dt$
Let $X:\Omega\times [0,T]\to \mathbb{R}$ be a process measurable with respect to the product sigma field such that $E\int_0^T|X(t)|^2dt=0$. Prove that then $X=0$ $dP\times dt$-almost everywhere. Can anyone prove it?
AI: Let $A=\{(\omega,t): X(t)(\omega)=0\}$. By hypothesis we have $\int_0^{T} |X(t)|^{2}dt=0$ for almost all $\omega$, and this implies that for almost all $\omega$, $X(t)(\omega)=0$ for almost all $t$. Now apply Fubini's Theorem to $(P\times m)(A^{c})=\int I_{A^{c}}\, dP\times dt$. We get $\int \int I_{A^{c}}\, dt\, dP=\int 0\,dP=0$.
|
H: How to prove this set of functions has only one element
Let $B= \{z \in \mathbb{C} : |z|< 1\}$ and $\overline{B}= \{z \in \mathbb{C} : |z|\leq 1\}$
Let $C(\overline{B},\mathbb{R})$ be the set of continuous functions from $\overline{B} \rightarrow \mathbb{R}$
and $g$ a continuous function defined as follows: $g: \overline{B} \setminus B = \mathbb{S}^1 \rightarrow \mathbb{R}$.
I have to prove that the following set $X = \{ u \in C(\overline{B},\mathbb{R}) : u\restriction_{B} \text{ harmonic and } u\restriction_{\overline{B} \setminus B} = g \}$ has only one element.
I thought this could be proved by taking two functions that satisfy the conditions of $X$ and then showing that they are the same.
So let $u_1$ and $u_2$ be two functions in $X$ and $h := u_1 - u_2$.
Then $h\restriction_{\mathbb{S}^1} = u_1\restriction_{\mathbb{S}^1} - \space u_2\restriction_{\mathbb{S}^1} = g - g = 0$ so $u_1\restriction_{\mathbb{S}^1} = u_2\restriction_{\mathbb{S}^1} = g$.
Now I want to show that $h\restriction_{B} = u_1\restriction_{B} - u_2\restriction_{B} = 0$ using the fact that $u_1$ and $u_2$ are harmonic, but I don't know how to do it.
Thank you in advance.
AI: If $u_1,u_2: \overline{B}\to\mathbb{R}$ are both harmonic with $u_1|_{\partial B} = g = u_2|_{\partial B}$ then their difference $u_1 - u_2$ is also harmonic.
Their difference is also identically zero on the boundary. By the maximum principle, a harmonic function is extremized on the boundary of its domain. So $u_1 - u_2 = 0$.
|
H: System of higher order Differential Equations
Sorry, asking out of interest, but is it possible for there to be a system of ODEs like
$X'' + X' = AX$?
If so, could one give an example of a system which could be represented in this form?
AI: If $\mathbf A = (a_{ij})_{ij} $ and if $\mathbf X = [x_1,\dots,x_n]^{\text{T}}$, we can write your system as
$$
\begin{cases}
x_1'' + x_1 ' = a_{11}x_1 + \dots + a_{1n}x_n \\
x_2'' + x_2 ' = a_{21}x_1 + \dots + a_{2n}x_n \\
\vdots
\\
x_n'' + x_n ' = a_{n1}x_1 + \dots + a_{nn}x_n \\
\end{cases}
$$
For example, if $n=2$ and $\mathbf A = \operatorname{Id}$, we get
$$
\begin{cases}
x_1''+x_1' = x_1\\
x_2'' +x_2' = x_2
\end{cases}
$$
which has solution
$$
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
=
\begin{bmatrix}
c_1 \operatorname{e}^{-\frac{1}{2}t(1+\sqrt 5)} +c_2 \operatorname{e}^{-\frac{1}{2}t(1-\sqrt 5)} \\
c_3 \operatorname{e}^{-\frac{1}{2}t(1+\sqrt 5)} +c_4 \operatorname{e}^{-\frac{1}{2}t(1-\sqrt 5)} \\
\end{bmatrix}
$$
|
H: Why curve has Jordan measure equal $0$
There is a theorem that for the curve $y=f(x)$, where $f:\ [a;b]\rightarrow \mathbb{R}$ is a continuous function on this interval, its Jordan measure is equal to $0$. Why is this so?
I have been taught that Jordan measure is an extension of the notion of size (length, area, volume), so shouldn't the Jordan measure of a curve be its length? So it should be non-zero for $a\neq b$.
AI: I presume that the author means the Peano-Jordan measure of the graph of $f$ in $[a,b]\times \Bbb R\subseteq \Bbb R^2$.
The graph of a function cannot contain a polyrectangle and therefore its inner Peano-Jordan measure is $0$. This means that if the graph is Peano-Jordan measurable then its measure is $0$.
To see that, in your case, $m^*(\operatorname{gr} f)=0$ as well, notice that $f:[a, b]\to\Bbb R$ is continuous and therefore uniformly continuous. Therefore, for every $\varepsilon$ there is some $\delta_\varepsilon<1$ such that $\lvert f(x)-f(y)\rvert<\varepsilon$ for all $x,y$ such that $\lvert x-y\rvert\le\delta_\varepsilon$. This implies that for all $x\in [a,b]$, $$\left(\left(x-\frac{\delta_\varepsilon}2,x+\frac{\delta_\varepsilon}2\right]\times\Bbb R\right)\cap\operatorname{gr}f\subseteq \left(x-\frac{\delta_\varepsilon}2,x+\frac{\delta_\varepsilon}2\right]\times\left(f(x)-\varepsilon,f(x)+\varepsilon\right]$$
Therefore, for any $\varepsilon>0$ we can select $a=x_0<x_1<\cdots< x_N\le b$ such that $x_n-x_{n-1}=\frac{\delta_\varepsilon}3$ and $b-x_N<\frac{\delta_\varepsilon}3$, and consider the cover $$\operatorname{gr}f\subseteq \bigcup_{j=0}^N \left(x_j-\frac{\delta_\varepsilon}2,x_j+\frac{\delta_\varepsilon}2\right]\times\left(f(x_j)-\varepsilon,f(x_j)+\varepsilon\right]$$
This means that $$m^*(\operatorname{gr}f)\le m^*\left(\bigcup_{j=0}^N \left(x_j-\frac{\delta_\varepsilon}2,x_j+\frac{\delta_\varepsilon}2\right]\times\left(f(x_j)-\varepsilon,f(x_j)+\varepsilon\right]\right)\le \\\le\sum_{j=0}^Nm^*\left(\left(x_j-\frac{\delta_\varepsilon}2,x_j+\frac{\delta_\varepsilon}2\right]\times\left(f(x_j)-\varepsilon,f(x_j)+\varepsilon\right]\right)=2(N+1)\varepsilon\delta_\varepsilon$$
Notice, in addition to that, that by construction $N$ is a very specific number: $$N=\left\lfloor \frac{b-a}{\frac{\delta_\varepsilon}3} \right\rfloor\le 3\frac{b-a}{\delta_\varepsilon}$$
therefore, $$m^*(\operatorname{gr}f)\le (6(b-a)+2\delta_\varepsilon)\varepsilon\le (6(b-a)+2)\varepsilon$$
Taking the $\inf$ over $\varepsilon>0$, we obtain $m^*(\operatorname{gr}f)=0$.
|
H: How many numbers of 4 digits have at least three specific digits
The question I tried to solve goes like this:
How many 4-digit numbers have at least one digit $2$, one digit $3$ and one digit $4$?
I saw it as simple as this (because it's exactly 4 digits):
The number will have a $2$, a $3$, a $4$ and $X$.
So I thought the solution would be:
$$3 \cdot 2 \cdot 1 \cdot 10 = 60 $$
$X$ could be any digit of the $10$ digits.
However, this also includes numbers that start with $0$. These shouldn't be counted, so I remove $6$ because of
$$0234,0324,0342,0432,0243,0423$$
so the solution would be $54$.
However, this isn't at all the solution. What am I missing?
AI: Here is another way to evaluate the number of possibilities. We will condition on the value of the fourth number (the digit besides $2$, $3$ and $4$).
Case: $0$
If the fourth number is $0$, it can't be in the first position. It has three places where it can go. Next we put the $4$ (three remaining places), etc. This gives us
$$3\times3\times2\times1=18\tag1$$
Case: $1,5,6,7,8,9$
There are six choices for the number and four places to put it. Then, we put the $4, 3$ and $2$ as before. This gives us
$$6\times4\times3\times2\times1=144\tag2$$
Case: $2,3,4$
We now have a repeated number. First we choose which one is repeated (three choices) and where to put the two copies, $4\choose2$. Finally, we put the remaining numbers. This gives us
$$3\times{4\choose2}\times2\times1=36\tag3$$
There are $18+144+36=198$ such numbers.
Hope it helps
You could also find the same result starting with your reasoning. Starting with
$$3\times2\times1\times10$$
You counted numbers of the form $3245$ but not $5324$. The fourth digit you chose was always the last digit of the number. You need to add a $\times4$ to choose where you put the last digit.
$$3\times2\times1\times10\times4=240$$
Now we counted numbers starting with $0$; we need to remove them ($6$ cases). Also, if the fourth number was a $2$, it was counted twice: $234(2)$ is the same as $(2)342$, where $(2)$ represents the fourth digit. So we need to remove the numbers where the fourth digit is $2,3$ or $4$ ($36$ cases).
This brings us back to $240-6-36=198$ possibilities.
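This kind of count is also easy to confirm by brute force (a sketch in Python iterating over all four-digit numbers; purely illustrative):
count = 0
for n in range(1000, 10000):
    digits = str(n)
    if '2' in digits and '3' in digits and '4' in digits:
        count += 1
print(count)   # 198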
|
H: Definition of isomorphism of graded rings
After searching through some literature I got a bit confused what one has to check to conclude that two graded rings are isomorphic (as graded rings).
Suppose that $R$ and $S$ are graded rings, then a homomorphism of graded rings is a ring homomorphism
$$f:R\rightarrow S$$ such that $f(R_{n})\subset S_{n}$ for all $n$.
Question: Suppose that we have such a homomorphism of graded rings between $R$ and $S$, then to conclude that $R$ and $S$ are isomorphic as graded rings, is it then enough to show that the map $f$ from above is a ring isomorphism? Or do we also have to show that $f(R_{n})=S_{n}$ for all $n$?
AI: An isomorphism of graded rings is a morphism $f:R \rightarrow S$ of graded rings, such that there is an inverse morphism $g:S \rightarrow R$ of graded rings. So in particular $f$ and $g$ are isomorphism of rings.
Conversely assume we have a graded morphism of rings $f:R
\rightarrow
S$, which is an isomorphism of rings, i.e. there is an inverse morphism of rings $g:S\rightarrow R$. Now the only thing to check is that $g$ is indeed a morphism of graded rings, meaning $g(S_n)\subseteq R_n$. Now since $f$ is graded with inverse map $g$ we find that $g(S_n) = f^{-1}(S_n) = R_n$. This shows that $g$ is a morphism of graded rings, hence $f$ is an isomorphism of graded rings. In other words, the forgetful functor $\mathsf{grRing} \rightarrow \mathsf{Ring}$ is conservative (reflects isomorphisms).
|
H: Godel's completeness theorem and Consistency of ZFC
Is it necessary to assume the consistency of $\sf ZFC$ when we prove the completeness theorem? When we use the completeness theorem to show that Peano Arithmetic is consistent, is the assumption of the consistency of $\sf ZF$ necessary?
I mean, whenever one says that the completeness theorem prove the consistency of a first-order theory, does he assume that $\sf ZF$ or $\sf ZFC$ is consistent?
AI: Using the completeness theorem, in general, means that we want to equate syntax and semantics. For this we need a framework that can interpret semantics, usually this means set theory such as $\sf ZF$ or its relatives (but not only these!)
Now, you don't have to assume $\sf ZF$ is consistent for using the completeness theorem, because we are working inside $\sf ZF$, that is our meta-theory, as far as Peano is concerned.
But now you might want to ask, should we even use $\sf ZF$ as a meta-theory? Does it prove a contradiction or not? And in that case you need to assume its consistency in the meta-meta-theory, or work with a meta-meta-theory which is strong enough to prove it. But then you need to ask yourself, why is this meta-meta-theory is good enough, and well... it's turtles all the way down from there.
TL;DR No, you don't need to assume anything. But once you start digging into your meta-theory's meta-theory, i.e. the meta-meta-theory, you might want to add these assumptions.
|
H: What is the homogenous parametric equation of a line given two points?
In the case we have two non-homogeneous points $\vec{A}=(x_1,x_2)$ and $\vec{B}=(y_1,y_2)$, we can write the vector equation of the line with a real parameter $\lambda$:
$r: \quad \vec{X}=\vec{A}+\lambda(\vec{B}-\vec{A})$
Or
$r: \quad \vec{X}=(1-\lambda)\vec{A}+\lambda\vec{B}$
Here the coefficients of $\vec{A}$ and $\vec{B}$ depend on the same parameter.
Suppose now that we have two homogeneous points in the plane:
$\vec{A}=(x_1,x_2,x_0)$
$\vec{B}=(y_1,y_2,y_0)$
In this case, the vector equation of the line is:
$r: \vec{X}=\lambda\vec{A}+\mu\vec{B}$
Where $\lambda$ and $\mu$ are two real parameters. Are they independent? How do you explain the fact that we need two independent parameters in this case? In the first case we needed a point and then the difference between the points (direction) multiplied by a single parameter. Is this related to the fact that homogeneous coordinates are defined up to a multiplicative constant? Since in the second equation there is no difference and the parameters are apparently independent, I don't understand what represents the direction of the line.
AI: Given $z \ne 0$, point $(x, y)$ is represented by $(x z, y z, z)$ in homogenous coordinates.
Because of this, in homogenous coordinates, all $(x k, y k, z k)$ for $k \in \mathbb{R}$ except $k = 0$ refer to the same point on the Euclidean plane.
Let's examine what happens when we linearly interpolate between two homogenous coordinates:
$$\left[\begin{matrix} x \\ y \\ z \end{matrix}\right] = (1 - \lambda)\left[ \begin{matrix} x_1 \\ y_1 \\ z_1 \end{matrix}\right] + \lambda\left[ \begin{matrix} x_2 \\ y_2 \\ z_2 \end{matrix}\right] = \left[ \begin{matrix} (1-\lambda)x_1 + \lambda x_2 \\ (1-\lambda)y_1 + \lambda y_2 \\ (1-\lambda)z_1 + \lambda z_2 \end{matrix} \right]$$
On the Euclidean plane, this corresponds to point
$$\left\lbrace ~ \begin{aligned}
x^\prime &= \displaystyle \frac{ (1-\lambda)x_1 + \lambda x_2 }{ (1-\lambda)z_1 + \lambda z_2 } \\
y^\prime &= \displaystyle \frac{ (1-\lambda)y_1 + \lambda y_2 }{ (1-\lambda)z_1 + \lambda z_2 } \\
\end{aligned} \right .$$
It is no longer linear, although the points are on a straight line. But, if $z_1 z_2 \lt 0$, there is a point ($\lambda = z_1 / (z_1 - z_2)$) when the coordinates are undefined due to division by zero.
The solution is to do linear interpolation in homogenous coordinates using
$$\vec{P} = (1-\lambda)\vec{A} + \lambda\mu\vec{B}$$
where $\mu$ ensures the two have the same $z$, i.e.
$$\mu = \frac{A_z}{B_z}$$
so that the divisor ($z$) is constant. Note that because $\mu \ne 0$ if $\vec{A}$ and $\vec{B}$ are finite, $\vec{B}$ and $\mu\vec{B}$ refer to the same point on the Euclidean plane. If both $\vec{A}$ and $\vec{B}$ refer to points at infinity, then one can use $\mu = 1$.
(Obviously, $(1-\lambda)z + \lambda z = z$ for all $\lambda \in \mathbb{R}$.)
This way, $\lambda$ interpolates between the points linearly in both homogenous and the corresponding 2D coordinates. This works when both points are finite (both have nonzero $z$), or when both points are at infinity (both have zero $z$, in which case $\mu = 1$). This does exclude transition between finite points and points at infinity, though; but that is understandable, as this is linear interpolation.
If we use the 2D Euclidean form $\overline{p} = \overline{B} + \lambda\overline{A}$, i.e. $\vec{A}$ is the homogenous direction vector, and $\vec{B}$ is the point corresponding to $\lambda = 0$ on the line, then the form is
$$\vec{P} = \mu\vec{B} + \lambda\vec{A}$$
where $\mu$ still has the same reasoning: it ensures that the Euclidean representation of the two points has a sensible divisor. In this case, it is sufficient that $\vec{B}$ is finite, or that both $\vec{B}$ and $\vec{A}$ refer to points at infinity. If $\vec{A}$ is a point at infinity, then $\mu = 1$ is a sensible choice; then the locus of points $\vec{P}$ is finite for all finite $\lambda \in \mathbb{R}$.
|
H: Find the function that describes the percentage of the area in which the circles overlap
I saw this question yesterday and it got me thinking: what function describes the percentage of the area in which each circle overlaps the other?
In that diagram it is given that the distance between the centers of the circles is $1.5r$, but I want to generalize that to $nr$. The circles have the same radius.
Question: what function describes the problem I want to solve?
I tried using the answer that was given in the original problem, but that didn't bring me further. Thanks for any advice.
AI: Well, let's solve this by using some graphics. I define the following diagram:
The equation of a circle is given by:
$$\left(x-\text{a}\right)^2+\left(\text{y}-\text{b}\right)^2=\text{r}^2\tag1$$
Where $\left(\text{a},\text{b}\right)$ are the center coordinates of the circle and $\text{r}$ is the radius of the circle.
We have two circles and they can be described using (which can be seen from the diagram above):
$$\left(x-\alpha_1\right)^2+\text{y}^2=\text{R}^2\tag2$$
$$\left(x-\alpha_2\right)^2+\text{y}^2=\text{R}^2\tag3$$
It is also not hard to see that $\alpha_1=\frac{\text{R}\Delta}{2}$ and $\alpha_2=-\alpha_1=-\frac{\text{R}\Delta}{2}$, using the diagram above.
Now, we can look at the following diagram:
If we combine both diagrams, we can see that $\epsilon_1=\left|\text{R}-\frac{\text{R}\Delta}{2}\right|=\text{R}\left|1-\frac{\Delta}{2}\right|$ and $\epsilon_2=-\epsilon_1=-\text{R}\left|1-\frac{\Delta}{2}\right|$. And using $(2)$ and/or $(3)$ we can solve for $\lambda_1$ and $\lambda_2$:
$$\left(0-\alpha_1\right)^2+\lambda_1^2=\text{R}^2\space\Longleftrightarrow\space\lambda_1=\sqrt{\left|\text{R}^2-\alpha_1^2\right|}\space\wedge\space\lambda_2=-\lambda_1=-\sqrt{\left|\text{R}^2-\alpha_1^2\right|}\tag4$$
Now, we take a look at the last diagram:
Using the first and the third diagram it is not hard to see that $\varphi_1=\text{R}+\frac{\text{R}\Delta}{2}=\text{R}\left(1+\frac{\Delta}{2}\right)$ and $\varphi_2=-\varphi_1=-\text{R}-\frac{\text{R}\Delta}{2}=-\text{R}\left(1+\frac{\Delta}{2}\right)$.
Now, we can solve $(2)$ and $(3)$ for the equation in the first quadrant in the first diagram:
$$\left(x-\alpha_1\right)^2+\text{y}^2=\text{R}^2\space\Longrightarrow\space\text{y}=\sqrt{\text{R}^2-\left(x-\alpha_1\right)^2}\tag5$$
$$\left(x-\alpha_2\right)^2+\text{y}^2=\text{R}^2\space\Longrightarrow\space\text{y}=\sqrt{\text{R}^2-\left(x-\alpha_2\right)^2}\tag6$$
Now, we can integrate to find the areas (using that a circle is symmetric):
The total area in diagram 3 is given by:
$$\mathcal{A}_1:=4\int_0^{\varphi_1}\sqrt{\text{R}^2-\left(x-\alpha_1\right)^2}\space\text{d}x=$$
$$\frac{\text{R}^2}{2}\left(2\pi+\Delta\sqrt{4-\Delta^2}+4\arctan\left(\frac{\Delta}{\sqrt{4-\Delta^2}}\right)\right)\tag7$$
The total area in diagram 2 is given by:
$$\mathcal{A}_2:=4\int_0^{\epsilon_1}\sqrt{\text{R}^2-\left(x-\alpha_2\right)^2}\space\text{d}x=$$
$$\frac{\text{R}^2}{2}\left(2\pi-\Delta\sqrt{4-\Delta^2}-4\arctan\left(\frac{\Delta}{\sqrt{4-\Delta^2}}\right)\right)\tag8$$
So, the desired fraction is given by:
$$\eta:=\frac{\mathcal{A}_2}{\mathcal{A}_1}=\frac{\frac{\text{R}^2}{2}\left(2\pi-\Delta\sqrt{4-\Delta^2}-4\arctan\left(\frac{\Delta}{\sqrt{4-\Delta^2}}\right)\right)}{\frac{\text{R}^2}{2}\left(2\pi+\Delta\sqrt{4-\Delta^2}+4\arctan\left(\frac{\Delta}{\sqrt{4-\Delta^2}}\right)\right)}=$$
$$\frac{2\pi-\Delta\sqrt{4-\Delta^2}-4\arctan\left(\frac{\Delta}{\sqrt{4-\Delta^2}}\right)}{2\pi+\Delta\sqrt{4-\Delta^2}+4\arctan\left(\frac{\Delta}{\sqrt{4-\Delta^2}}\right)}\tag9$$
Note: these calculations only make sense when $0\le\Delta\le2$, and when $\Delta\to2$ we get (because the expression takes an indeterminate form):
$$\lim_{\Delta\to2^-}\eta=0\tag{10}$$
As the last step, we can plot $\eta$ as a function of $\Delta$:
If you use Mathematica, you can use some of my code. In order to plot a complete diagram, you can use:
R =;
\[CapitalDelta] =;
\[Alpha]1 = (R*\[CapitalDelta])/2;
\[Alpha]2 = -\[Alpha]1;
\[Epsilon]1 = R*Abs[1 - (\[CapitalDelta]/2)];
\[Epsilon]2 = -\[Epsilon]1;
\[Lambda]1 = Sqrt[Abs[R^2 - \[Alpha]1^2]];
\[Lambda]2 = -\[Lambda]1;
\[Phi]1 = R*(1 + (\[CapitalDelta]/2));
\[Phi]2 = -\[Phi]1;
ContourPlot[{(x - \[Alpha]1)^2 + y^2 ==
R^2, (x - \[Alpha]2)^2 + y^2 == R^2}, {x, -2 R, 2 R}, {y, -2 R,
2 R}, GridLines -> {{0, \[Alpha]1, \[Alpha]2, \[Epsilon]1, \
\[Epsilon]2, \[Phi]1, \[Phi]2}, {0, \[Lambda]1, \[Lambda]2}}]
In order to evaluate the integral $(7)$, you can use:
FullSimplify[
Integrate[
4 Sqrt[R^2 - (x - ((R*\[CapitalDelta])/2))^2], {x, 0,
R*(1 + (\[CapitalDelta]/2))}, Assumptions -> R > 0 && \[CapitalDelta] > 0]]
In order to evaluate the integral $(8)$, you can use:
FullSimplify[
Integrate[
4 Sqrt[R^2 - (x - (-(R*\[CapitalDelta])/2))^2], {x, 0,
R*Abs[1 - (\[CapitalDelta]/2)]}, Assumptions -> R > 0 && \[CapitalDelta] > 0]]
And so $\eta$ is found by:
FullSimplify[(1/
2 R^2 (2 \[Pi] - \[CapitalDelta] Sqrt[4 - \[CapitalDelta]^2] -
4 ArcTan[\[CapitalDelta]/Sqrt[4 - \[CapitalDelta]^2]]))/(1/
2 R^2 (2 \[Pi] + \[CapitalDelta] Sqrt[4 - \[CapitalDelta]^2] +
4 ArcTan[\[CapitalDelta]/Sqrt[4 - \[CapitalDelta]^2]])),
Assumptions -> R > 0 && \[CapitalDelta] > 0]
And the plot of $\eta$ as a function of $\Delta$ is made using:
Plot[(2 \[Pi] - \[CapitalDelta] Sqrt[4 - \[CapitalDelta]^2] -
4 ArcTan[\[CapitalDelta]/Sqrt[4 - \[CapitalDelta]^2]])/(
2 \[Pi] + \[CapitalDelta] Sqrt[4 - \[CapitalDelta]^2] +
4 ArcTan[\[CapitalDelta]/Sqrt[4 - \[CapitalDelta]^2]]), {\[CapitalDelta], 0, 2},
AxesLabel -> {HoldForm[\[CapitalDelta]], HoldForm[\[Eta]]},
PlotLabel -> None, LabelStyle -> {GrayLevel[0]}]
|
H: Let $G$ be a nonabelian group of order $p^{3},$ where $p$ is a prime. Show that $G$ has exactly $p^{2}+p-1$ distinct conjugacy classes.
Problem
Let $G$ be a nonabelian group of order $p^{3},$ where $p$ is a prime. Show that $G$ has exactly $p^{2}+p-1$ distinct conjugacy classes.
Attempt
Let $G$ be a nonabelian group with $|G|=p^{3},$ where $p$ is a prime. Recall $|Z(G)|=p$ from the previous problem (1) [which I have already proved]. By the class equation,
\begin{equation} \label{5 - 1}
p^{3}=|G|=|Z(G)|+\sum_{i=1}^{n}[G:C(a_i)]=p+\sum_{i=1}^{n}[G:C(a_i)],
\end{equation}
where ${\rm class~}a_1$, ${\rm class~}a_2$, $\cdots$, ${\rm class~}a_n$ are the nonsingleton conjugacy classes in $G$. It is enough to show that $n=p^2-1$.
Since $2 \leq\left[G: C\left(a_{i}\right)\right]=\frac{|G|}{|C(a_i)|}=\frac{p^3}{|C(a_i)|},$ we have
$$
\left| C\left(a_{i}\right)\right|=1{\rm ~or~}p{\rm ~or~}p^2{\rm ~or~}p^3
$$
for each $i=1,2,\cdots, n$. Note that $| C\left(a_{i}\right)|\neq 1$ for all $i$ since $G$ is nonabelian and $| C\left(a_{i}\right)|\neq p^3$ for all $i$ because $|Z(G)|=p$. Assume that there is $j \in \{1,2,\cdots,n\}$ such that $|C\left(a_{j}\right)|=p$. Then
$$
\left| \frac{G}{C(a_j)}\right|=\frac{|G|}{|C(a_j)|}=\frac{p^3}{p}=p^2.
$$
Hence, $G/C(a_j)$ is abelian. Then -----??------, which is a contradiction. Hence, $|C\left(a_{i}\right)|=p^2$ for all $i$.
---------skip--------------
Question
Am I approaching this in the right way?
If so, how does one prove the step marked '-----??------'?
AI: Because $\mathcal Z(G)\subsetneq\mathcal C(a_i)\neq G$ (the first containment is strict, since $a_i\in\mathcal C(a_i)\setminus\mathcal Z(G)$), we must have $p<|\mathcal C(a_i)|<p^3$ with $|\mathcal C(a_i)|$ dividing $p^3$. Therefore $|\mathcal C(a_i)|=p^2$. $$\therefore p^3=p+np$$ $$\therefore n=p^2-1$$
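One can sanity-check the count $p^2+p-1$ by brute force on a concrete nonabelian group of order $p^3$ (a sketch in Python using the mod-$p$ Heisenberg group of upper unitriangular $3\times 3$ matrices, encoded as triples; an illustration, not part of the original answer):
def conjugacy_class_count(p):
    # Heisenberg group mod p: (a,b,c)*(a',b',c') = (a+a', b+b', c+c'+a*b'), all mod p
    def mul(x, y):
        return ((x[0] + y[0]) % p, (x[1] + y[1]) % p, (x[2] + y[2] + x[0] * y[1]) % p)
    def inv(x):
        return ((-x[0]) % p, (-x[1]) % p, (x[0] * x[1] - x[2]) % p)
    elements = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
    classes = {frozenset(mul(mul(h, g), inv(h)) for h in elements) for g in elements}
    return len(classes)

for p in (2, 3, 5):
    assert conjugacy_class_count(p) == p * p + p - 1
print("conjugacy class counts match p^2 + p - 1 for p = 2, 3, 5")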
|
H: Countability of the set $(0,1)$
I am trying to prove that the set $(0,1)$ is uncountable from "A First Course in Analysis by Yau". I have a question about a particular step.
In the text, the result is proved by contradiction. It is supposed that the set $(0,1)$ is countable, from which it is concluded that there must exist a bijection $f:\mathbb{N}\rightarrow (0,1)$ (which is ultimately contradicted).
My question is, why does the bijective map have to exist? If we suppose that $(0,1)$ is countable, shouldn't there exist an injective map $g:(0,1)\rightarrow\mathbb{N}$?
AI: There is a 1-1 correspondence between $A$ and $B$ if and only if a) there is an injection from $A$ to $B$ and b) at the same time there is an injection from $B$ to $A$.
So, if you can demonstrate that injection from $(0,1)$ to $\mathbb N$, then yes, you have demonstrated that $(0,1)$ is countable -- but more, you have proved it to be countably infinite.
|
H: Prove that for all sets $A$ and $B$ $A\subseteq B$ implies $A\cap B=A$.
In the next proof we use the following lemmas:
For all sets $A$ and $B$, $A \subseteq B$ implies $A \cup B = B$.
For all sets $A$, $A \cap A = A$.
For all sets $B$, $B \cap \varnothing = \varnothing $.
Assume that $A⊆B$. As a result, $A \cup B = B$. Note that foregoing statement suggests two possible cases.
Case 1: $A = B$.
Since $A = B$, as a matter of fact we are trying to prove that $B\cap B=B$ which is true. Therefore, $A \cap B = A$ trivially.
Case 2: $A = \varnothing$
$A ∩ B = \varnothing ∩ B = B ∩ \varnothing = \varnothing = A.$
Then,
$A \cap B = A.$
∎
Is this proof right?
AI: This can be proved using 'modus tollens', which is:
$P\to Q \implies \lnot Q \to \lnot P$
So, we need to show that:
$$A\cap B\ne A \to A \not\subseteq B$$
If $A\cap B\ne A$, then, since $A\cap B\subseteq A$ always holds, $\exists x\in A$ such that $x\not\in B$, which means $A \not\subseteq B$, as required.
Modus tollens then states that the contrapositive, i.e. your original statement, is also true.
|
H: Sum of complex roots' fractions
According to this:
If $\omega^7 =1$ and $\omega \neq 1$ then find value of
$\displaystyle\frac{1}{(\omega+1)^2} +
\frac{1}{(\omega^2+1)^2} +
\frac{1}{(\omega^3+1)^2} +
... + \frac{1}{(\omega^6+1)^2}=?$
First I try like
$\displaystyle\frac{1}{\omega+1} +
\frac{1}{\omega^2+1} +
\frac{1}{\omega^3+1} +
... + \frac{1}{\omega^6+1} = 3
$
I distributed the terms and finally got the value $\dfrac{5}{3}$.
However, this is, without a doubt, a time-consuming way.
Can someone please suggest an easier way to solve this one?
AI: Note that $\omega, \ldots, \omega^6$ are precisely the roots of the sixth degree polynomial:
$$p(x) = x^6 + \cdots + 1 = \dfrac{x^7-1}{x-1}.$$
Thus, we can write
$$p(x) = (x-\omega)\cdots(x-\omega^6).$$
Taking (natural) $\log$ on both sides and differentiating gives us
$$\dfrac{p'(x)}{p(x)} = \dfrac{1}{x-\omega}+\cdots+\dfrac{1}{x-\omega^6}.$$
Note that
\begin{align}
\log p(x) &= \log(x^7 - 1) - \log(x-1)\\
\implies \dfrac{p'(x)}{p(x)} &= \dfrac{7x^6}{x^7-1} - \dfrac{1}{x-1}.
\end{align}
This gives us that
$$\dfrac{7x^6}{x^7-1} - \dfrac{1}{x-1} = \dfrac{1}{x-\omega}+\cdots+\dfrac{1}{x-\omega^6}.$$
Differentiating both sides again gives us
$$\dfrac{(x^7-1)(42x^5) - (7x^6)(7x^6)}{(x^7-1)^2} + \left(\dfrac{1}{x-1}\right)^2 = -\left(\dfrac{1}{x-\omega}\right)^2-\cdots-\left(\dfrac{1}{x-\omega^6}\right)^2.$$
Now, we simply substitute $x = -1$ both sides. It is clear that the RHS transforms to the negative of what we want, whereas the LHS becomes
\begin{align}
\dfrac{(-2)(-42) - (7)(7)}{(-2)^2} + \left(\dfrac{1}{-2}\right)^2 &= \dfrac{84-49}{4} + \dfrac{1}{4}\\
&= \dfrac{36}{4} = 9
\end{align}
This gives us the answer as $-9$.
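A quick numerical confirmation (a sketch using Python's standard cmath module; purely illustrative):
import cmath

w = cmath.exp(2j * cmath.pi / 7)
total = sum(1 / (w**k + 1)**2 for k in range(1, 7))
print(total)   # approximately -9 (the tiny imaginary part is numerical noise)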
|
H: Why this approximation for $\pi$ is so accurate?
Berggren and Borwein brothers in "Pi: A Source Book" showed a mysterious approximation for $\pi$ with astonishingly high accuracy:
$$ \left(\frac{1}{10^5}\sum_{n={-\infty}}^\infty e^{-n^2/10^{10}}\right)^2 \approx \pi.$$
In particular, this formula gives $42$ billion (!) correct digits of $\pi$.
I have two questions:
(1) Why is this approximation so accurate?
(2) Is it possible to prove without computation that this formula is not an identity?
AI: This isn't actually so mysterious after all; it's just a high precision approximation of the identity
$$\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi},$$
by using very small rectangles to approximate area. By considering these rectangles, you can also see that the areas cannot be equal, since some area will be left over between the rectangles and the graph of $e^{-x^2}$.
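One can see the same mechanism at a smaller scale (a sketch assuming the mpmath library is installed; an illustration, not part of the original answer). With step size $h$, Poisson summation gives $h\sum_n e^{-(nh)^2} = \sqrt\pi\,(1 + 2e^{-\pi^2/h^2} + \cdots)$, so the squared sum exceeds $\pi$ by roughly $4\pi e^{-\pi^2/h^2}$; for $h=10^{-5}$ this error is about $e^{-\pi^2\cdot 10^{10}}$, consistent with the quoted $42$ billion correct digits.
from mpmath import mp, mpf, exp, pi

mp.dps = 40                                  # work with 40 significant digits
for h in (mpf(1), mpf('0.7'), mpf('0.5')):
    s = h * sum(exp(-(n * h)**2) for n in range(-200, 201))
    actual = s**2 - pi
    predicted = 4 * pi * exp(-(pi / h)**2)   # leading error term from Poisson summation
    print(h, actual, predicted)              # the two columns agree closely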
|
H: Taylor Series of $\ln(1+x^3)/[\tan(x)-x]$
Hi, I want to find the Taylor series of the function $$\frac{\ln(1+x^3)}{[\tan(x)-x]}$$ at $0$ up to order $n=3$.
I have $\ln(1+x^3)=x^3-\frac{x^6}{2}+\frac{x^9}{3}+o(x^9)$ and $\tan(x)-x=\frac13 x^3+o(x^3)$.
By division I found that $$\frac{\ln(1+x^3)}{\tan(x)-x}=3-\frac{3}{2}x^3+o(x^3)$$
but I don't find the same result as here.
AI: You need to use more terms in the expansion of $\tan x-x$:
\begin{align*}
& \frac{{\log (1 + x^3 )}}{{\tan x - x}} = \frac{{x^3 - \frac{{x^6 }}{2} + \mathcal{O}(x^9 )}}{{\frac{1}{3}x^3 + \frac{2}{{15}}x^5 + \mathcal{O}(x^7 )}} = \frac{{3 - \frac{3}{2}x^3 + \mathcal{O}(x^6 )}}{{1 + \frac{2}{5}x^2 + \mathcal{O}(x^4 )}}
\\ &
= \left( {3 - \frac{3}{2}x^3 + \mathcal{O}(x^6 )} \right)\left( {1 - \frac{2}{5}x^2 + \mathcal{O}(x^4 )} \right) = 3 - \frac{6}{5}x^2 - \frac{3}{2}x^3 + \mathcal{O}(x^4 ).
\end{align*}
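For verification (a sketch assuming SymPy is installed; an illustration, not part of the original answer), the same expansion can be requested directly:
from sympy import symbols, log, tan, series

x = symbols('x')
# expected output along the lines of: 3 - 6*x**2/5 - 3*x**3/2 + O(x**4)
print(series(log(1 + x**3) / (tan(x) - x), x, 0, 4))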
|
H: Existence of a continuous linear functional with $f(x_0)=||x_0||$ and $ \sup\{|f(x)|:x\in X,x\neq 0\}=1$
Let $X$ be a real or complex normed linear space and let $x_0\in X,x_0\neq0$. Show that there exists a continuous
linear functional $f:X\rightarrow\mathbb{K}$, where $\mathbb{K}=\mathbb R$ or $\mathbb C$, such that $$ f(x_0)=||x_0||$$$$ \sup\{|f(x)|:x\in X,x\neq 0\}=1$$
My problem is the following:
We must show existence. If there exists a continuous linear functional $f$ which respects the above conditions, then:
$f(x_0+x_0+...x_0)=f(x_0)+f(x_0)+...f(x_0)=||x_0||+||x_0||+...||x_0||=n||x_0||$ ($n$ terms)
Let $y_0=x_0+x_0+...x_0$ then $n||x_0||=f(y_0)\leq \sup\{|f(x)|:x\in X,x\neq 0\}=1$ so $n||x_0||\leq 1 ,\forall n\geq1$ but then $||x_0||=0 $ and this is a contradiction with $x_0\neq0$
AI: As $x_0 \neq 0$, $\{x_0\}$ can be completed into a basis $\mathcal B= \{x_0\} \cup \{x_i \mid i \in I\}$ of $X$.
Define $f$ on the basis $\mathcal B$ by $f(x_0) = \Vert x_ 0 \Vert$ and $f(x_i)= 0$ for $i \in I$.
For any vector $x=\lambda_0 x_0 + \sum_{j \in J \subseteq I} \lambda_j x_j$ where $J$ is a finite subset of $I$, we have
$$\vert f(x) \vert = \vert \lambda_0 f(x_0) \vert = \vert \lambda_0 \vert \Vert x_0 \Vert \le \Vert x \Vert.$$
From which the expected result follows
$$\sup\{|f(x)|:x\in X,\Vert x \Vert \le 1 \, ,x\neq 0\}=1$$ by modifying the $\sup$ of your question.
|
H: Show a continuous function $f$ attains a maximum value in $\Bbb{R}$.
"Let $f:\Bbb{R}\to \Bbb{R}$ be a continuous function such that $f(0)=2$ and $\lim_{x\to{-\infty}}f(x)=0$, and $f$ is decreasing when $x\geq 3$ . Show $f$ attains a maximum value in $\Bbb{R}$."
Let's call the maximum value $M$. When $x\to -\infty$, we have that $f\to 0$. Let $\epsilon>0$ be given. This means there is an $N<0$ such that $$x<N \implies \vert f(x) \vert < \epsilon$$
In other words $f$ is arbitrarily small when $x<N$, therefore $M$ must exist at a point where $x\geq N$. Let $A=[N,0]$, and let $s$ be the point where $f(x)$ attains its maximum on $A$, that is $f(s)=\sup(f(A))$.
Since $f$ is decreasing when $x\geq 3$, $M$ can only occur at the point $3$, when $x\in [3,\infty)=B$.
So if $M$ exists in $A$, it is at the point $s$. Likewise, if $M$ exists in $B$, it is at $x=3$. Therefore $M\in [s,3]$, since a continuous function attains a max and min value in a closed set and $f$ is continuous in $[s,3]$.
Is there a way to explicitly calculate $M$ with the given information? Also, to show the existence of $M$, is it enough to guarantee that $M$ exists in a closed set of $\Bbb{R}$ and then apply the extreme value theorem ( as I have tried )?
AI: Your proof isn't logically sound because you start with the assumption that a maximum $M$ exists and then go on to conclude that it must be in some given closed interval.
However, all you've shown is that if $M$ exists, then $M \in [s, 3].$
(Which also doesn't make sense because $M$ is in the codomain and $[s, 3]$ is in the domain. You meant to conclude that $M = f(x_0)$ for some $x_0 \in [s, 3]$.)
In other words $f$ is arbitrarily small when $x < N$,
This too doesn't make much sense. What you are saying is that $f$ becomes arbitrarily small when $x < N$ but that is not the case. (Note that you are fixing $N$.)
What is true is that $f$ can be made sufficiently small by choosing some "sufficiently negative" $N$.
However, your line of reasoning was sensible and one can polish it as follows:
Choose $\epsilon = 1$ specifically. Then, you know that there exists $N < 0$ such that $|f(x)| < 1$ for $x < N$.
For $x \ge 3$, you know that $f(x) \le f(3)$.
Consider the closed and bounded interval: $[N,3]$.
$f$ must attain its maximum on this. Call this maximum $M$.
(Now, we have actually shown the existence of $M$!)
Show that this $M$ is the maximum of $f$ on $\Bbb R$.
It is clear that $M \ge 2$ and $M \ge f(3)$. These two facts show that $M \ge f(x)$ for every $x \in (-\infty, N) \cup (3, \infty)$.
Thus, we are done.
Is there a way to explicitly calculate $M$ with the given information?
No. In fact, try this exercise: Show that there exist two different functions $f$ satisfying the given conditions such that they have different maxima.
Also, to show the existence of $M$, is it enough to guarantee that $M$ exists in a closed set of $\Bbb R$ and then apply the extreme value theorem ( as I have tried )?
You have phrased it a bit incorrectly but the spirit is correct - the technique is often to show that the maximum that $f$ attains on some closed and bounded interval is going to be the maximum on $\Bbb R$ as well. (Note that you just wrote "closed", you also need "bounded".)
|
H: Interval of Existence for initial value problem: $\dot x = x^2$, $x(0) = x_0$
I am currently reading through a chapter of a book called 'Differential Dynamical Systems' by James D. Meiss and the initial value problem shown in the title is an example given in Chapter 3 (numbered 3.22) which is solved by separation of variables, giving the solution $x(t)=\frac{x_0}{1-x_0t}$. This isn't hard but then the author explains that the interval of existence is $t \in (-\infty, \frac{1}{x_0})$ for positive $x_0$. I don't understand why the solution does not exist $\forall t \in \mathbb{R}\setminus\{\frac{1}{x_0}\}$. When you differentiate at any point not equal to $t=\frac{1}{x_0}$, you get $\dot x=\frac{{x_0}^2}{(1-x_0t)^2}=x^2$ so I'm not sure if maybe I don't fully understand what is meant by 'interval of existence' or if there's something else that I am missing.
AI: Maximal solutions of an ODE $\begin{cases}y'=F(t,y)\\ y(t_0)=x_0\end{cases}$ ($F:\Bbb R\times\Bbb R^d\to \Bbb R^d$) are maximal in the set of partial functions $g:I\to\Bbb R^d$ such that $I$ is an open interval containing $t_0$, $g'(t)=F(t,g(t))$ for all $t$ and $g(t_0)=x_0$; the ordering is inclusion-extension.
The reason for this is that, in a way, when you pass from one connected component of the domain to the other, you end up losing all relation with the previous one. More concretely, there are not just one, but infinitely many solutions $f:\Bbb R\setminus\{\frac1{x_0}\}\to \Bbb R$ such that $f(0)=x_0$. To name a few: $$f(t)=\frac{x_0}{1-x_0t}\\ f(t)=\begin{cases}\frac{x_0}{1-x_0t}&\text{if }t<\frac1{x_0}\\ \frac{x_0+1}{1-(x_0+1)t}&\text{if }t>\frac1{x_0}\end{cases}\\ f(t)=\begin{cases}\frac{x_0}{1-x_0t}&\text{if }t<\frac1{x_0}\\ \frac{x_0+2}{1-(x_0+2)t}&\text{if }t>\frac1{x_0}\end{cases}\\\vdots$$
and I could go on. This would make the correspondence (or, more appropriately, a correspondence) sending every $(t_0,x_0)$ to a maximal solution with initial data $y(t_0)=x_0$ considerably more cumbersome, all the more if you want to guarantee that two maximal solutions coincide if and only if they have the same value at some point.
|
H: Show that $a_n:=\frac{(-1)^{n-1}}{2n-1}$ converges
Show that $a_n:=\frac{(-1)^{n-1}}{2n-1}$ converges.
To show that $(a_n)$ converges, it suffices to show that it is a Cauchy sequence. This means:
$\forall \epsilon >0 \,\,\,\exists N \in \mathbb{N}\,\,\,\forall m,n>N:\left|\frac{(-1)^{n-1}}{2n-1}-\frac{(-1)^{m-1}}{2m-1}\right|<\epsilon$
$\left|\frac{(-1)^{n-1}}{2n-1}-\frac{(-1)^{m-1}}{2m-1}\right|\le\left|\frac{1}{2n-1}\right|+\left|\frac{1}{2m-1}\right|<\left|\frac{1}{2N-1}\right|+\left|\frac{1}{2N-1}\right|=\frac{2}{2N-1}<\epsilon\Longleftrightarrow N>\frac{2+\epsilon}{2\epsilon}$
So we choose $N:=\left\lceil \frac{2+\epsilon}{2\epsilon}\right\rceil$
This means we will always find such an $N$ for any $\epsilon>0$.
Is this correct?
AI: This solution is indeed valid.
Rather than use the Cauchy condition, you could also prove that it converges to $0$ directly: for any $\varepsilon>0$, $|a_n|$ is less than $\varepsilon$ for all $n>\frac 12 \left(1 + \frac 1 \varepsilon\right)$.
|
H: Expected number of times a result comes up: Proof of formula
If I am flipping a fair coin, and want to know the expected number of heads after $n$ flips, then I immediately know the answer is $\frac{n}{2}$. Similarly, with a fair die, the number of $5$'s that come up after $n$ rolls is on average $\frac{n}{6}$. Presumably, the general form of this rule is as follows:
$$
\text{Expected number of times a result comes up}=P(\text{result}) \times\text{Number of trials}
$$
This is much less labour intensive than working out the weighted average in another way. To compute the average number of $5$'s that come up after $n$ rolls, I could work out:
$$
\left(P(\text{Zero $5$'s coming up}) \times0\right)+\left(P(\text{One $5$ coming up}) \times1\right)+\left(P(\text{Two $5$'s coming up}) \times2\right)+...+\left(P(\text{$n$ $5$'s coming up}) \times n\right)
$$
Is it possible to use this second, more labour intensive result, to prove the first?
AI: Yes it's definitely possible. The number of 5s in $n$ rolls of a die will follow a binomial distribution with parameters $n$ and $1/6$. This means that for $k \leq n$,
$$
\mathbb{P}(X = k) = \binom{n}{k}p^k(1-p)^{n-k},
$$
where $p = 1/6$. So then you can look at
\begin{align}
\mathbb{E}(X) &= \sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k} k.
\end{align}
Write out the binomial coefficient
$$
\binom{n}{k} = \frac{n!}{(n-k)!k!}
$$
and then take out a factor of $np$ to get:
\begin{align}
&n p\sum_{k=1}^n \frac{(n-1)!}{(k-1)!(n-k)!}p^{k-1}(1-p)^{n-k}.
\end{align}
Now we will re-write the sum over an index $j = k-1$. So when $k=1$, we get $j=0$. And when $k=n$, we get $j=n-1$ (after all there are only $n$ terms in the sum however we write it). So we get
\begin{align}
& n p\sum_{j=0}^{n-1} \binom{n-1}{j}p^{j}(1-p)^{n-1 -j} \\
&= n p\bigl(p + (1-p)\bigr)^{n-1} \\
&= np.
\end{align}
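If you want a quick numerical sanity check of this identity, here is a throwaway sketch ($n=12$ is an arbitrary choice):

```python
from math import comb

# Check that sum_k k*C(n,k)*p^k*(1-p)^(n-k) equals n*p for the "number of 5s" example.
n, p = 12, 1 / 6
mean = sum(k * comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1))
print(mean, n * p)   # both print 2.0 (up to floating-point rounding)
```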
|
H: Does the existence of the integral $\int_0^\infty f(x)dx$ imply that f(x) is bounded on $[0,\infty)$ when f(x) is continuous in this same interval?
Does the existence of the integral $\int_0^\infty f(x)dx$ imply that f(x) is bounded on $[0,\infty)$ when f(x) is continuous in this same interval ?
edit
I'm trying to use Cauchy without knowing what the next step is, stating that $\int_a^\infty f(x)dx$ exists if and only if $\lim_{x\to \infty} \int_a^x f(t)\,dt$ exists and approaches a particular value, which means that for all $\epsilon > 0$ there is $t_0$ such that for all $x,y > t_0$ : $$\left|\int_a^y f(t)dt - \int_a^x f(t)dt \right| < \epsilon$$
and since $$\int_a^y f(t)dt - \int_a^x f(t)dt = \int_x^y f(t)dt$$ we get that $$\left|\int_x^y f(t)dt\right|<\epsilon$$
And from here I don't really know what conclusion can be drawn. Does that mean that f(x) is indeed bounded ? It seems to me that I'm missing some counter-example to disprove the statement.
Regards
AI: Does the existence of the integral $\int_0^\infty f(x)dx$ imply that f(x) is bounded on $[0,\infty)$ when f(x) is continuous in this same interval ?
No. For each interval $[n,n+1]$ define $f_n:[n,n+1]\to\mathbb{R}$ by piecewise linear extension of the following four points:
$$f_n(n)=0$$
$$f_n(n+\frac{1}{2n^3})=2n$$
$$f_n(n+\frac{1}{n^3})=0$$
$$f_n(n+1)=0$$
So it's a triangle with base of length $1/n^3$ and height $2n$. And thus
$$\int_n^{n+1}f_n(x)dx=\frac{1}{n^2}$$
And for $n=0$ we put $f_0$ to be the constant $0$ and so $\int_0^1 f_0(x)dx=0$.
Now we glue all $f_n$:
$$f:[0,\infty)\to\mathbb{R}$$
$$f(x)=f_{\lfloor x\rfloor}(x)$$
$f$ is continuous, it is not bounded but
$$\int_0^{\infty} f(x)dx=\sum_{n=0}^\infty\int_n^{n+1}f_n(x)dx=\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}<\infty$$
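Here is a short Python sketch of this construction (my own check; the cutoff $N=1000$ is arbitrary): the spike heights grow without bound while the areas sum to a finite value.

```python
import math

def f(x):
    # the piecewise-linear spike on [n, n+1] described above
    n = math.floor(x)
    if n == 0:
        return 0.0
    w = 1 / n**3                               # base width of the triangle
    s = x - n
    if s <= w / 2:
        return 2 * n * (s / (w / 2))           # rising edge up to height 2n
    if s <= w:
        return 2 * n * ((w - s) / (w / 2))     # falling edge back to 0
    return 0.0

N = 1000
height = max(f(n + 0.5 / n**3) for n in range(1, N + 1))          # about 2N: unbounded
areas = sum(0.5 * (1 / n**3) * (2 * n) for n in range(1, N + 1))  # partial sum of 1/n^2
print(height, areas, math.pi**2 / 6)
```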
|
H: Find value for 3-dimensional function f such that f is continuous at (0,0,0)
Let $f$ be a function $f: \mathbb{R}^3\setminus M \to \mathbb{R}$ where $M:= \{(x,y,z)^t \in \mathbb{R}^3\; |\; x = y = 0,\; z\in \mathbb{R}\}$ and
$$f(x,y,z) = \frac{xz^2+y^3}{x^2+y^2}.$$
Examine if there's a possible value $m = f(0,0,0)$ such that $f$ is continuous at $(0,0,0)^t$.
I suspect for $m:=0$ that $f$ is continuous at $(0,0,0)^t$. However I'm failing to prove it.
Let $\varepsilon > 0$ and $(x,y,z)^t \in \mathbb{R}^3\setminus M$:
$$|f(x,y,z)| = \frac{|xz^2 + y^3|}{x^2+y^2} \leq \frac{|x||z|^2+|y|^3}{x^2+y^2}$$
How do I proceed from here? Advice is greatly appreciated.
AI: Take a sequence $\left(a_n\right)_{n\in\mathbb{N}} \subset \mathbb{R}^3 \setminus M$ defined as $a_n = (\tfrac{1}{n^3}, 0, \tfrac{1}{n})$.
What is $\lim_{n\to\infty} f(a_n)$?
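Spelling the hint out numerically (just a quick check): along this sequence $f(a_n)=\frac{(1/n^3)(1/n)^2}{(1/n^3)^2}=n$, so the values blow up and no choice of $m=f(0,0,0)$ can make $f$ continuous at the origin.

```python
# Evaluate f along a_n = (1/n^3, 0, 1/n); the values grow like n.
f = lambda x, y, z: (x * z**2 + y**3) / (x**2 + y**2)
for n in (10, 100, 1000):
    print(n, f(1 / n**3, 0, 1 / n))
```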
|
H: value of $g(3)$ in functional equation
If $$\int^{2x+\cos(\pi x)}_{0}g(t)dt=x^2$$for all real $x$ and $g(x)$ is continuous function. Then $g(3)$ equals
What I tried: using the Newton–Leibniz formula,
$$\frac{d}{dx}\bigg(\int^{2x+\cos(\pi x)}_{0}g(t)dt\bigg)=\frac{d}{dx}(x^2)$$
$$g(2x+\cos(\pi x))\,(2-\pi\sin(\pi x))=2x$$
When I put $x=1$, I get $g(1)=1$.
When I put $x=2$, I get $g(5)=2$.
How do I calculate $g(3)$?
Help me please. Thanks
AI: Taking $x = \frac 32$ gives
$$g\left(3 + \cos\left(\frac{3\pi}{2}\right)\right) = \frac{3}{2 - \pi \sin \left(\frac{3\pi}{2}\right)}$$
which hence gives
$$g(3) = \frac{3}{2-\pi(-1)} = \frac{3}{2+\pi}.$$
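A quick numerical check of the arithmetic, nothing deep, just confirming the two factors:

```python
import math

x = 1.5
print(2 * x + math.cos(math.pi * x))         # ~3.0: the argument of g
print(2 - math.pi * math.sin(math.pi * x))   # ~2 + pi: the factor multiplying g
print(3 / (2 + math.pi))                     # ~0.5834: the value of g(3)
```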
|
H: Given only the lengths of the three sides of a triangle, can we find if the triangle is impossible?
I was recently trying to implement Hero's Formula and I was surprised to find it declare that the area of a triangle with sides of length 3, 4, and 9 is a complex number. This suggests that given the lengths of all three sides of a triangle, it is possible to determine if the triangle is possible or not.
To me, this was a surprising result that I do not recall hearing elsewhere. In fact, without restrictions on angles, I was unaware that triangles could be impossible. Is this correct? Given the lengths of the three sides of a triangle, can we find if the triangle is impossible?
AI: Very simply, the longest side cannot be longer than the sum of the shorter sides. If it is the same length as the sum of the shorter sides, it will degenerate to a straight line.
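In an implementation like the one described, you might guard Hero's formula with exactly this test (a sketch; the function name and error handling are just illustrative):

```python
import math

def heron_area(a, b, c):
    # Triangle inequality: the longest side must not exceed the sum of the other two.
    a, b, c = sorted((a, b, c))
    if c > a + b:
        raise ValueError("impossible triangle: longest side exceeds sum of the others")
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))   # 0 in the degenerate case c = a + b

print(heron_area(3, 4, 5))       # 6.0
try:
    print(heron_area(3, 4, 9))
except ValueError as err:
    print(err)                   # the sides 3, 4, 9 fail the test
```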
|
H: Maximum $n$ such that $e2^{1 - {k \choose 2}}{k \choose 2}{{n-2} \choose {k-2}} \leq 1$, as $k \to \infty$.
Using:
${k \choose 2} \leq \frac{k^2}{2}$, and
${{n-2} \choose {k-2}} \leq \frac{k^2}{n^2}{n \choose k} \leq \frac{k^2}{n^2} \left(\frac{ne}{k}\right)^k$,
we have $e2^{1 - {k \choose 2}}{k \choose 2}{{n-2} \choose {k-2}} \leq
\frac{e2^{-{k \choose 2}}k^4}{n^2} \left( \frac{ne}{k} \right)^k =
\frac{ek^4}{n^2} \left( \frac{ne \sqrt2}{k{\sqrt2}^k} \right) ^k$.
If $n=\frac{1}{e \sqrt{2}} k \sqrt{2}^{k},$ the parenthetical term is 1. The leading coefficient is then $e^{3} k^{2} 2^{1-k}$.
Now the text I'm reading says that we can afford for the parenthetical term to be $2 + o(1)$, and so we can take $ n=\left(\frac{\sqrt{2}}{e}+o(1)\right) k \sqrt{2}^{k}$. Can I get an explanation of these last two assertions?
AI: This expression looks like some sort of Ramsey Theory construction. The goal is to find the largest $n = n(k)$ such that the inequality holds in the limit $k \to \infty$, I presume.
Suppose that in $n = \left(\frac{\sqrt{2}}{e} + o(1)\right) k \sqrt{2}^k$, we take the $o(1)$ term to be, say, $-\frac{\sqrt{2}}{e\sqrt{k}}$, forcing the parenthetical term to be $2 - 2/\sqrt{k}$. Then the exponential term is now
$$\left(2 - \frac{2}{\sqrt{k}}\right)^k = 2^k \left(1 - \frac{1}{\sqrt{k}}\right)^k \sim 2^k e^{-\sqrt{k}}$$
Note that the factor of $2^{-k}$ brought about by the factor of $1/n^2$ outside the exponent will cancel the $2^k$ from the exponent, and asymptotically the expression will decay like $k^2 e^{-\sqrt{k}}$, which is certainly $o(1)$.
The idea is that if the exponential term grows more slowly than $2^k$, then the factor of $1/n^2$ can cancel this growth, as it contains a factor of $2^{-k}$. In the above example, we've shown that this can be arranged by taking the base of the exponent to be $2 - o(1)$.
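If it helps to see numbers, here is a rough log-space evaluation of the displayed upper bound with that choice of $n$ (my own sketch; the values of $k$ are arbitrary): the bound exceeds $1$ for small $k$ but drops below $1$ and tends to $0$ as $k$ grows.

```python
import math

# Evaluate (e*k^4/n^2) * (n*e*sqrt(2)/(k*sqrt(2)^k))^k in log space, with
# n = (sqrt(2)/e) * (1 - 1/sqrt(k)) * k * sqrt(2)^k as in the choice above.
def log_bound(k):
    n = (math.sqrt(2) / math.e) * (1 - 1 / math.sqrt(k)) * k * math.sqrt(2) ** k
    log_paren = math.log(n) + 1 + 0.5 * math.log(2) - math.log(k) - (k / 2) * math.log(2)
    return 1 + 4 * math.log(k) - 2 * math.log(n) + k * log_paren

for k in (100, 200, 400):
    print(k, math.exp(log_bound(k)))
```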
|
H: Order of the sum of two meromorphic functions
I haven't been around complex analysis for some time now and I need it for Riemann surfaces, and I'm a bit confused about something. I know that if I have a meromorphic function $f$ with order $k$ at $p$ and a meromorphic $g$ with order $n$ at $p$, I will have that $ord_p(f+g)\geq \min\{ord_p f, ord_pg\}$ and smaller than their max, but can we be a bit more specific? By this I mean: is it possible to have all cases in the previous example, that is, can $f+g$ have order $k$ with $k$ varying between the maximum and minimum of the previous values? I remember that there was a theorem that we could find the order of a pole of a function by taking the limits of $(z-z_0)^k f(z)$ and finding the first $k$ such that this limit is zero, and with this my intuition would say no, but again I am not that familiar with complex analysis so any help is appreciated.
New edit: I am interested in the following case: suppose I have $3$ points and $ord_{p_1}f=k_1$, $ord_{p_2}f=k_2$, $ord_{p_3}f=m$ and $ord_{p_1}(g)=k_1$, $ord_{p_2}(g)=n$, $ord_{p_3}(g)=k_3$, where these are all negative numbers such that $n>k_2$ and $m>k_3$. Is it possible that from $f$ and $g$ I can construct a meromorphic function such that the order at $p_i$ will be $k_i$? It's just that, like you said, we don't know what happens when the functions have the same order, so I can't simply add them.
I am trying to do this because I am trying to solve a Riemann surfaces exercise: if we have a base-point free linear system $|D|$, then for any finite number of points $\{p_1,...,p_k\}$ there exists $E\in |D|$ such that no $p_i$ is in the support of $E$, and that is basically proving the existence of such a meromorphic function, I believe.
Thanks in advance!!
AI: If $f$ and $g$ have poles of orders $k$ and $n$ at $p$ and $k < n$, then $f+g$ has a pole of order $n$ at $p$. This is obvious if you consider the Laurent series around $p$.
If $k=n$, all you can say is the order $\le k$: note that you could even have $f+g=0$.
|
H: Detail in the proof that $\mathbb Z[\sqrt {10}]$ is not a UFD
We want to show that $2$ is irreducible in $\mathbb Z[\sqrt {10}]$, so we assume: $$2=(a+b\sqrt{10})(c+d\sqrt{10}),\quad a,b,c,d \in \mathbb Z$$
Since there exists an automorphism, call it $f$, sending $\sqrt {10} \mapsto -\sqrt {10}$, we can also write (why can we?): $$2=(a-b\sqrt{10})(c-d\sqrt{10})$$
and then we get: $$4=(a+b\sqrt{10})(c+d\sqrt{10})\cdot f(a+b\sqrt{10})\cdot f(c+d\sqrt{10})=(a^2-10b^2)(c^2-10d^2)$$
Why are we allowed to multiply elements from the original ring with their images under the automorphism $f$ and further build our logic upon this assumption? As far as I see it: $$(a+b\sqrt{10})(c+d\sqrt{10}) \neq (a-b\sqrt{10})(c-d\sqrt{10})$$ in the "original" ring.
AI: Observe that
$$(a+b \sqrt{10})(c+d\sqrt{10}) = ac + 10bd + (ad+bc) \sqrt{10}.$$
If this equals $2$, then by irrationality of $\sqrt{10}$ we must have $ad + bc = 0$.
But
$$(a-b \sqrt{10})(c-d\sqrt{10}) = ac + 10bd - (ad+bc) \sqrt{10},$$
so $ad+bc=0$ gives that these are equal.
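A quick symbolic check of both expansions, if you want to see them verified (a throwaway sketch using sympy):

```python
from sympy import symbols, sqrt, expand

a, b, c, d = symbols('a b c d', integer=True)
e1 = expand((a + b * sqrt(10)) * (c + d * sqrt(10)))
e2 = expand((a - b * sqrt(10)) * (c - d * sqrt(10)))
print(e1)   # a*c + 10*b*d + sqrt(10)*(a*d + b*c), up to term ordering
print(e2)   # a*c + 10*b*d - sqrt(10)*(a*d + b*c), up to term ordering
print(expand(e1 * e2 - (a**2 - 10 * b**2) * (c**2 - 10 * d**2)))   # 0
```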
|
H: Graphs of $e^{-y} = x$ and $y = \frac 1x$: Why do they seem so similar on $[0, 1]$, but one converges while the other diverges?
First things first: in the above graphs, the one in blue is just a rotated version of $e^x$, which is where the question really arose from. I rotated it just in order to make comparisons easier.
If we take the integral of both functions on $[0,1]$, we see that its value for the blue curve is finite (i.e. $-0.632$) but for the green one it does not exist (of course, as $0$ isn't in its domain).
But if we repeat the same process on $(0,1]$, we find that although the value approaches the one we found before for the blue curve, it keeps on increasing (diverging) for the green one the closer we get to $0$.
I wish to know why, even though both curves are essentially of the same type on $[0,1]$, one limits its area to a finite value while the other keeps on diverging.
AI: They are not of the same type: the blue curve is exponential decay. We have
$$
\color{blue}{\int_{0}^{\infty} e^{-y}\,dy }= \left.- e^{-y}\right|_{0}^{\infty} = 1 - 0 =1
$$The green curve is a reciprocal function. We have
$$
\color{green}{\int_{0}^{1} x^{-1}dx}= \left.\ln|x|\right|_{0}^{1} = 0 - (-\infty) =\infty
$$
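Numerically (a quick illustration; the cutoffs are arbitrary), the first area settles down to $1$ while the second grows without bound:

```python
import math

for M, eps in ((5, 1e-2), (20, 1e-8), (50, 1e-20)):
    blue = 1 - math.exp(-M)    # integral of e^{-y} over [0, M] -> 1
    green = -math.log(eps)     # integral of 1/x over [eps, 1] -> infinity
    print(blue, green)
```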
|
H: write permutation as product of 3-cycles
Let $\sigma$ be the permutation $\sigma=(1234)(35)(2653)^2(56)$. I've got to say whether $\sigma$ is a product of 3-cycles or not.
My attempt was to simplify $(2653)^2=(36)(25)=(25)(36)$. Is this correct?
At the end I'd get $\sigma=(63)(5)(241)$, which shows that it can't be written as a product of 3-cycles. I hope that's right. Thanks for helping.
AI: Unfortunately, just because something can be written in simplest form without $3$-cycles doesn't mean that it's not a composition of $3$-cycles; $(12)(34)$ is in fact a composition of two $3$-cycles (which I leave as an exercise; in fact, all even permutations can be formed from composing $3$-cycles).
A better proof here is to use permutation parity; $\sigma$ is an odd permutation (by summing parities of the permutations that make it), but $3$-cycles are all even permutations.
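If you want to double-check the parity concretely, here is a small sketch (my convention: cycles act on $\{1,\dots,6\}$ and are composed right to left; the parity of the product does not depend on that choice):

```python
def cycle_to_map(cycle, n=6):
    m = {i: i for i in range(1, n + 1)}
    for i, x in enumerate(cycle):
        m[x] = cycle[(i + 1) % len(cycle)]
    return m

def compose(f, g):                    # (f o g)(x) = f(g(x))
    return {x: f[g[x]] for x in g}

def sign(perm):
    seen, sgn = set(), 1
    for x in perm:
        if x not in seen:
            length, y = 0, x
            while y not in seen:
                seen.add(y)
                y = perm[y]
                length += 1
            sgn *= (-1) ** (length - 1)
    return sgn

cycles = [(1, 2, 3, 4), (3, 5), (2, 6, 5, 3), (2, 6, 5, 3), (5, 6)]   # (2653)^2 written twice
sigma = cycle_to_map(cycles[0])
for c in cycles[1:]:
    sigma = compose(sigma, cycle_to_map(c))
print(sign(sigma))   # -1: sigma is odd, while every 3-cycle is even
```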
|
H: checking uniform convergence of series $\sum_{n=1}^\infty x^n$
I have a doubt in a question in which I need to check the uniform convergence of the series given by:
$$\sum_{n=1}^\infty x^n$$ on $(-1,1)$.
Now, if the series is uniformly convergent, then its sequence of partial sums $(s_n)$ is uniformly convergent.
I have found that $s_n(x) = x + x^2 + \cdots + x^n = \frac{x(1-x^n)}{1-x}$, which converges pointwise to $s(x) = \frac{x}{1-x}$.
If $(s_n)$ is uniformly convergent, then $\sup\{\lvert s_n(x)-s(x)\rvert : x \in (-1,1)\}$ should tend to $0$ as $n$ tends to infinity.
Now how do I check whether $\sup\left\{\left\lvert \frac{x^{n+1}}{1-x}\right\rvert : x \in (-1,1)\right\}$ converges to zero as $n$ tends to infinity, or how can I use the definition here?
AI: The supremum is $+\infty$ for all $n$, because
$$\lim_{x \to 1^-}\frac{x^{n+1}}{1-x}=+\infty$$
The best you can do is uniform convergence on $[-a, a]$ for all $a \in (0, 1)$, with the Weierstrass M-test, for example.
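Numerically this failure is easy to see (a quick check; the point $x=1-10^{-8}$ is arbitrary):

```python
# sup over (-1,1) of |x^(n+1)/(1-x)| is infinite for every n: values near x = 1 are huge.
for n in (5, 20, 100):
    x = 1 - 1e-8
    print(n, x ** (n + 1) / (1 - x))   # about 1e8 for each n shown
```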
|
H: If $X$ is path connected then $X$ has no isolated point.
Lemma
If $X$ is path connected then for any $x_0$ there exists a path connecting $x_0$ with any other point $x$ of $X$.
Statement
If $X$ is path connected then $X$ has no isolated point.
Unfortunately I can't prove the statement, but I'm sure that it is possible to prove it by showing that if $x_0\in X$ is an isolated point of $X$ then there is no continuous path connecting $x_0$ with any other $x\in X$. If the statement is generally false, then is it true for $\Bbb{R}^n$? So could someone help me, please?
AI: The statement is not quite true: if $X=\{x_0\}$, then $X$ is path connected, and its only point is isolated. It can also fail if $X$ is not $T_1$. Let $X=\{0,1\}$ with open sets $\varnothing$, $\{0\}$, and $\{0,1\}$, and let
$$f:[0,1]\to X:x\mapsto\begin{cases}
0,&\text{if }0\le x<\frac12\\
1,&\text{if }\frac12\le x\le 1\;.
\end{cases}$$
Then $0$ is an isolated point of $X$, $f(0)=0$, $f(1)=1$, and $f$ is continuous, because $f^{-1}[\varnothing]=\varnothing$, $f^{-1}[\{0\}]=\left[0,\frac12\right)$, and $f^{-1}[X]=[0,1]$ are all open in $[0,1]$.
It is true, however, if $X$ is $T_1$ and has at least two points. Suppose that $x_0\in X$ is isolated, and $x_1\in X\setminus\{x_0\}$; we’ll prove that there is no path from $x_0$ to $x_1$.
Suppose that $f:[0,1]\to X$ is a continuous function such that $f(0)=x_0$ and $f(1)=x_1$. Let $U=f^{-1}[\{x_0\}]$; $\{x_0\}$ is both open and closed in $X$, so $U$ is both open and closed in $[0,1]$. Let $V=[0,1]\setminus U$; $V$ is also both open and closed in $[0,1]$, and since $1\in V$ (why?), $V\ne\varnothing$. But $[0,1]$ is connected, so no such separation can exist. Thus, there is no such function $f$, and $X$ is not path connected.
|
H: Joint distribution and covariance of Poisson process and waiting time
Hi, I am having trouble solving this problem, where I have to find
1) Joint distribution of $W_{1}$, $W_{r}$ for $r\geq2$.
2) $\operatorname{Cov}(W_{1},W_{r})$ for $r\geq2$.
[Notation explanation: $W_{r}=\min(t:N_{t}\geq r) $ is waiting time until the $r^\text{th}$ occurrence.
Here, $(N_{t})$, ${t\geq0}$ is a Poisson process with occurrence rate $\lambda>0$.]
How should I solve this? I did solve the case when $r=2$. In that case $\operatorname{pdf}_{W_{1},W_{2}}(t_{1},t_{2})=\lambda^2e^{-\lambda t_{2}}\,I_{(0<t_{1}<t_{2})}$
But as I tried to generalize it to the case of general $r$, it became quite complicated and I got lost. I would really appreciate it if someone could help.
Thanks.
AI: Let $X_n = W_n - W_{n-1}$ for $n\geq 2$ and $X_1 = W_1.$ That is, $X_n$ are the interarrival times. We have that $X_n\overset{iid}{\sim} \text{Exp}(\lambda).$ Now denote the joint pdf of $(W_1, W_r)$ by $f_{1,r}.$ Then
$$f_{1,r}(t_1, t_r) = f_{r|1}(t_r|t_1)\cdot f_{1}(t_1),$$
where $f_{r|1}$ is the pdf of $W_r$ conditional on $W_1$ and $f_1$ is the (marginal) pdf of $W_1.$ Then $f_1(t_1) = \lambda e^{-\lambda t_1}1\{t_1\geq 0\}.$ Now write
$$W_r = X_1 + \sum_{i=2}^r X_i.$$
Then note that $X_1$ and $\sum_{i=2}^r X_i$ are independent and that the latter is $\Gamma(r-1, \lambda)$ distributed. Using that $X_1=W_1$, we have that conditional on $W_1,$ $W_r$ is the sum of $W_1$ and a $\Gamma(r-1, \lambda)$-distributed random variable. The conditional pdf is then a shifted Gamma pdf:
$$f_{r|1}(t_r|t_1) = \frac{\lambda^{r-1}(t_r-t_1)^{r-2}}{\Gamma(r-1)}e^{-\lambda (t_r-t_1)} 1\{t_r\geq t_1\}.$$
So the product is
$$f_{1r}(t_1, t_r) = \frac{\lambda^{r}(t_r-t_1)^{r-2}}{\Gamma(r-1)}e^{-\lambda t_r} 1\{t_r\geq t_1\geq 0\}.$$
For the covariance, note that
$$Cov(W_1, W_r) = Cov(X_1, X_1) + \sum_{i=2}^r Cov(X_1, X_i) = Var(X_1) = \frac{1}{\lambda^2},$$
using independence of the $X_i.$
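A Monte Carlo sanity check of this (my own sketch; $\lambda$, $r$ and the sample size are arbitrary):

```python
import random

random.seed(0)
lam, r, trials = 2.0, 5, 200_000
w1s, wrs = [], []
for _ in range(trials):
    gaps = [random.expovariate(lam) for _ in range(r)]   # i.i.d. interarrival times X_1,...,X_r
    w1s.append(gaps[0])                                  # W_1 = X_1
    wrs.append(sum(gaps))                                # W_r = X_1 + ... + X_r
m1, mr = sum(w1s) / trials, sum(wrs) / trials
cov = sum((u - m1) * (v - mr) for u, v in zip(w1s, wrs)) / trials
print(cov, 1 / lam**2)   # both close to 0.25
```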
|
H: $\sup(a + B) = a + \sup B$
I believe my proof of this simple fact is fine, but after a few false starts, I was hoping that someone could look this over. In particular, I am interested in whether there is an alternate proof.
For a real number $a$ and non-empty subset of reals $B$, define $a + B = \{a + b : b \in B\}$. Show that if $B$ is bounded above, then $\sup(a + B) = a + \sup B$.
My attempt:
Fix $a \in \mathbb{R}$, take $B \subset \mathbb{R}$ to be nonempty and bounded above, and define
$$a + B = \{a + b : b \in B\}.$$
Since $B$ is nonempty and bounded above, the least-upper-bound axiom guarantees the existence of $\sup B$. For any $b \in B$, we have
$$b \leq \sup B,$$
which implies
$$a + b \leq a + \sup B.$$
As this is true for any $b \in B$, it follows that $a + \sup B$ is an upper bound of $a + B$, and hence $\sup(a + B)$ exists, by the completeness axiom, since $B \neq \emptyset$ implies immediately that $a + B \neq \emptyset$. I claim that $a + \sup B$ is in fact the least upper bound of $a + B$. As we have already shown it to be an upper bound, it suffices to demonstrate that $a + \sup B$ is the least of the upper bounds. Let $\gamma$ be an upper bound of $a + B$. Hence, for any $b \in B$,
$$a + b \leq \gamma,$$
which implies that
$$b \leq \gamma - a.$$
As this holds for all $b \in B$, $\gamma - a$ is an upper bound of $B$. Hence, by the definition of supremum,
$$\gamma - a \geq \sup B,$$
which implies that
$$\gamma \geq a + \sup B,$$
as desired.
I tried to write the proof initially by showing that $\sup(a + B) \leq a + \sup B$ and $\sup(a + B) \geq a + \sup B$, but didn't have any luck. If there is a trick to it, I would be interested in hearing it.
AI: What you've done looks correct to me, but I think we can rework it more concisely using exactly the strategy that you suggest at the end of the question. Note that both suprema exist since the sets are non-empty.
First direction: Let $\lambda \in a + B$. Then $\lambda = a + b$ for some $b \in B$. Since a supremum is an upper bound, $b \leq \sup B$, so $\lambda \leq a + \sup B$. Since $\lambda \in a + B$ was arbitrary, $a + \sup B$ is an upper bound for $a + B$, hence $\sup(a + B) \leq a + \sup B$.
At this point it might be worth pausing to try the other direction yourself - the idea is similar, so it would be a good test of understanding.
Other direction: Let $b \in B$. Then $a + b \in a + B$, and since a supremum is an upper bound, $a + b \leq \sup(a + B)$. Rearranging, $b \leq \sup(a + B) - a$, so $\sup(a + B) - a$ is an upper bound on $B$, and hence $\sup B \leq \sup(a + B) - a$, and it follows that $\sup(a + B) \geq a + \sup B$.
Conclusion: It follows immediately that $\sup(a +B) = a + \sup B$.
|