H: 'Shrunken Version' of a convex set is also convex
I'm trying to show that for a convex set $K$ in $\mathbb{R}^n $ (possibly bounded, if that makes things easier), the set $K_{\epsilon}:= \{x\in K: \text{dist}(x,\partial K)>\epsilon\}$ is also convex (I don't really care whether we consider open or closed sets since I only have to integrate over the set). How could I prove that?
I've tried the following: For any boundary point $p$, we can find a hyperplane $H_p$ s.t. $p\in H_p$ and $H_p$ separates $K$. Now my idea was to shift all hyperplanes by $\epsilon$, then we can write $K_{\epsilon}$ as the intersection over all these shifted hyperplanes and hence it would be convex as an intersection of convex sets. But I don't see why exactly we can actually write $K_{\epsilon}$ as this intersection, it's just intuitively clear to me. Is this a good approach? Is it even correct? How can I go about proving it? (Is the statement even true? If it helps, I might also assume we are in $\mathbb{R}^2$ and that we have a convex bounded lipschitz domain)
AI: If $p$ and $q$ are in $K_\epsilon$, the closed balls $B_\epsilon(p)$ and $B_\epsilon(q)$ of radius $\epsilon$ centred at $p$ and $q$ are contained in $K$. If $0 \le \lambda \le 1$, we need
to show that the ball of radius $\epsilon$ centred at $\lambda p + (1-\lambda) q$ is contained in $K$. But any member of this ball is $\lambda p + (1-\lambda) q + v$ where $\|v\| \le \epsilon$, and it is thus $\lambda (p+v) + (1-\lambda) (q+v)$ where $p+v \in B_\epsilon(p)$
and $q+v \in B_\epsilon(q)$ are both in $K$.
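For intuition, here is a quick numerical sanity check of the claim (not a proof). It assumes the shapely library; buffer(-eps) computes the eroded set $K_\epsilon$, and a planar region is convex exactly when it equals its own convex hull:

```python
# Sanity check: eroding a convex polygon keeps it convex (assumes shapely).
from shapely.geometry import Polygon

K = Polygon([(0, 0), (4, 0), (5, 3), (1, 4)])   # a convex quadrilateral in R^2
K_eps = K.buffer(-0.5)                           # points at distance > 0.5 from the boundary
# Convex iff equal to its convex hull (compare areas up to tolerance):
print(abs(K_eps.convex_hull.area - K_eps.area) < 1e-9)  # True
```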
|
H: Why is $P(a \text{ and } b)$ maximized when $P(a \text{ or } b)$ is minimized?
I can't seem to wrap my head around why $P(a \text{ and } b)$ is minimized when $P(a \text{ or } b)$ is maximized. This comes from PIE:
$$P(a \text{ or } b) = P(a) + P(b) - P(a \text{ and } b).$$
Can someone please explain the intuition behind this? I'm even trying to picture the Venn diagram in my head, but this exact relationship doesn't make sense.
AI: $A \cap B$ is the overlapping part in $A \cup B$.
Imagine you have two sheets of paper and you want to maximize their surface.
If they overlap, you can increase the surface by making them overlap less.
|
H: A Map between topological spaces is open iff interior of preimage is a subset of preimage of interior.
Suppose $X$ and $Y$ are topological spaces, and $f: X \to Y$ is any map. Show that $f$ is open iff $\forall B \subseteq Y, int(f^{-1}(B)) \subseteq f^{-1}(int B)$.
AI: Suppose that $f$ is open. Let $B \subseteq Y$ and consider $f^{-1}[B]$.
$\operatorname{int}(f^{-1}[B])$ is open and
$$f[\operatorname{int}(f^{-1}[B])] \subseteq f[f^{-1}[B]] \subseteq B$$
so $$f[\operatorname{int}(f^{-1}[B])] \subseteq \operatorname{int}(B)$$ as the latter is the maximal open subset of $B$. So by definition of $f^{-1}$ we get that
$$\operatorname{int}(f^{-1}[B]) \subseteq f^{-1}[\operatorname{int}(B)]$$ as required.
Conversely, if the inclusion we have just shown holds for all $B \subseteq Y$, consider an open $O \subseteq X$. We want to show $f[O]$ is open. Let's try $B=f[O]$ and we get
$$\operatorname{int}(f^{-1}[f[O]]) \subseteq f^{-1}[\operatorname{int}(f[O])]$$
We know also that $O \subseteq f^{-1}[f[O]]$ so $O \subseteq \operatorname{int}(f^{-1}[f[O]])$ too and so $$O \subseteq f^{-1}[\operatorname{int}(f[O])]$$
which implies $$f[O] \subseteq \operatorname{int}(f[O])$$ which implies that indeed $f[O]$ is open and we're done.
|
H: Probability of a consecutive sequence of 3 balls
You have 6 balls, 3 white, 3 black. What is the probability of a sequence of 3 white balls (the white balls appear next to each other in the sequence) ?
My solution is:
There are $\binom{6}{3}$ ways to arrange the 6 balls on a line, while there are 4 ways you can put the 3 white balls next to each other. This results in:
$$\frac{4}{\binom{6}{3}}$$
Is this correct ?
AI: Looks good to me.
You have $6$ balls, which you can order in $6!$ ways. Then, since balls of the same color are indistinguishable amongst themselves, you have to adjust, for a total of $$\frac{6!}{3! 3!} = \binom{6}{3} = 20$$
ways of arranging the balls in a sequence.
Then we can consider the three white balls as one object (so they always appear next to each other). We thus would have $4$ objects, and the number of ways to arrange them is
$$\frac{4!}{3! 1!} = \binom{4}{3} = 4$$
by the same logic as before (since you have to account that the $3$ black balls are indistinguishable amongst themselves).
So the probability of a sequence of 3 white balls is indeed
$$\frac{4}{20} = 0.2$$
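Since the numbers are tiny, the count is also easy to confirm by brute force (a plain Python sanity check):

```python
from itertools import permutations

# All distinguishable arrangements of 3 white (W) and 3 black (B) balls.
seqs = {"".join(p) for p in permutations("WWWBBB")}
good = [s for s in seqs if "WWW" in s]
print(len(good), len(seqs), len(good) / len(seqs))  # 4 20 0.2
```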
|
H: Could someone suggest a good book for a first course in real analysis for someone who just failed their first course in real analysis?
Okay, so basically I've never taken a course that involved any proof writing (other than having done proof by induction in A Levels) until my first real analysis course, and I did really, really badly in it. We were using Bartle's Introduction to Real Analysis, but I honestly never really understood the proofs in it, or how to write my own. I need to retake the course, but I want to figure out how to do it myself in the first place. Any leads on how to start would be really, really appreciated. Also, if there is anything I need to do to make it easier for me to understand, please suggest that as well.
Thank youu!!!
AI: I would advise you to try "How to think about Analysis" by Lara Alcock. It will give you a chance to think about the subject as a whole rather than trying to start again at a course you have found tough. She is a BRILLIANT teacher. Give it a go.
|
H: Stock Technical Analysis: Keltner Channel Calculation
I'm a software engineer and I'm not so good at maths, I am writing some software which performs technical analysis on stocks but it appears my maths is slightly off and I have spent hours and hours trying to figure out which part of my formula is wrong and I can't figure it out. I am hoping some maths genius may be able to discover my mistake.
Essentially, I am trying to calculate the Keltner Channels of a given stock, which seems to work fine for the first period, but all subsequent periods are very slightly wrong.
Here is my sample data:
This is data of the first ever 32 trading days of stock BYND, you can illustrate this data yourself by going to https://uk.tradingview.com/chart/AFGZbVHw/ - then if you add in an indicator and add in Keltner Channels (KC) and Average True Range (ATR), you can see the data in that spreadsheet visualised, I've attached a picture of it below:
In the top left of each indicator there is a settings icon where you can adjust the indicator. I have adjusted the Keltner Channels to 5 periods and the multiplier to 2; the ATR is also adjusted to 5 periods, as it is used to calculate the Keltner Channels.
To calculate the 3 Keltner Channel Values the formula is:
Keltner High Value = EMA Value + 2 * Average True Range (ATR)
Keltner Middle Value = EMA Value
Keltner Low Value = EMA Value - 2 * Average True Range (ATR)
If you look at my spreadsheet, I calculated all of the ATR and EMA values correctly, they match perfectly with the TradingView charts. When I calculate the first set of Keltner values, they are also a perfect match with TradingView, but the problem is when I calculate the next Keltner Channel values, they no longer match the TradingView values, I get the following values:
ATR = 13.04
Keltner Low = 44.5068061 (should be 46.82)
Keltner Middle = 70.58999 (is correct and matches TradingView at 70.59)
Keltner High = 96.67319163 (should be 94.36)
I can upload my spreadsheet so you can see my calculations, but I think the main problem is that when you calculate the second Keltner Channel values, the ATR value is no longer correct. Although I know the ATR at that position is 13.04 (you can even see the TradingView chart show this), the actual value used in the Keltner Channel is something like (94.36 - 70.59) / 2 = 11.885, so it's like they are not even using the ATR value for that period. Where on earth are they calculating the 11.885 value from? Or does the formula for Keltner change somehow on the second iteration???
Any guidance would be appreciated, happy to provide the excel file too, just not sure how to upload it.
EDIT: My excel file can be downloaded from here: https://gofile.io/d/XZdFaH
AI: Well, I am an idiot. It turns out all my calculations were correct, but there are multiple ways to calculate ATR and I was just calculating it using a different method.
It turns out that TradingView is using the ATR EMA smoothing algorithm for calculating the ATR value, whereas I am using the ATR RMA smoothing algorithm.
It's annoying that there are multiple ways to calculate ATR which all give different results, very confusing! But at least now I know.
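For anyone hitting the same mismatch, here is a minimal pandas sketch contrasting the two ATR smoothings. It is an illustration of the idea, not TradingView's actual code; the column names high/low/close and the helper name keltner are assumptions:

```python
import pandas as pd

def keltner(df, periods=5, mult=2, smoothing="rma"):
    # True range: max of (high - low, |high - prev close|, |low - prev close|).
    prev_close = df["close"].shift(1)
    tr = pd.concat([df["high"] - df["low"],
                    (df["high"] - prev_close).abs(),
                    (df["low"] - prev_close).abs()], axis=1).max(axis=1)
    if smoothing == "ema":
        atr = tr.ewm(span=periods, adjust=False).mean()       # EMA smoothing
    else:
        atr = tr.ewm(alpha=1 / periods, adjust=False).mean()  # Wilder's RMA
    mid = df["close"].ewm(span=periods, adjust=False).mean()
    return mid - mult * atr, mid, mid + mult * atr
```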
|
H: Multiplicity of the zero eigenvalue in $A^tA$
If I have a fat matrix $A$ $\in\mathbb{F}^{m\times n}$ (with $m<n$), is it true that $A^tA$ has a zero eigenvalue of multiplicity $n-m$? I am not sure if it is true, but I tried a few variations to contradict it, without success.
AI: This depends on the underlying field $\mathbb F$ as well as the rank of $A$. If $A$ is a real matrix, then $Ax=0$ if and only if $A^tAx=0$, because
$$
Ax=0\Rightarrow A^tAx=0\Rightarrow \|Ax\|^2=x^tA^tAx=0\Rightarrow Ax=0.
$$
Therefore the nullity of $A^tA$ is precisely the nullity of $A$, which is equal to $n-\operatorname{rank}(A)$ and is bounded below by $n-m$. So, the zero eigenvalue of $A^tA$ has multiplicity $n-m$ if and only if $A$ has full row rank.
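A quick numerical illustration of this dichotomy (a numpy sketch; a random Gaussian matrix has full row rank with probability 1):

```python
import numpy as np

m, n = 3, 5
A = np.random.randn(m, n)              # generically full row rank
eig = np.linalg.eigvalsh(A.T @ A)      # A^t A is symmetric positive semidefinite
print(sum(np.isclose(eig, 0)))         # n - m = 2 zero eigenvalues

A[2] = A[0] + A[1]                     # force rank(A) = 2 < m
eig = np.linalg.eigvalsh(A.T @ A)
print(sum(np.isclose(eig, 0)))         # now n - rank(A) = 3
```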
|
H: Exterior derivative of a vector field
The exterior derivative of a scalar function is
$d f(x,y,z) = (
\frac{\partial f}{\partial x} dx
+ \frac{\partial f}{\partial y} dy
+ \frac{\partial f}{\partial z} dz
)$
Am I correct in assuming then that
$d\left( F_x(x,y,z) e_x + F_y(x,y,z) e_y + F_z(x,y,z) e_z \right)$
would be
$\left(
\frac{\partial F_y}{\partial x}
- \frac{\partial F_x}{\partial y} \right)
dx \wedge dy +
\left(
\frac{\partial F_x}{\partial z} -
\frac{\partial F_z}{\partial x} \right)
dz \wedge dx +
\left(
\frac{\partial F_z}{\partial y} -
\frac{\partial F_y}{\partial z} \right)
dy \wedge dz$
AI: We only talk about exterior derivatives of differential $k$-forms, not vector fields. However, what we can do is the following: given a vector field $F: \Bbb{R}^3 \to \Bbb{R}^3$, $F = (F_x, F_y, F_z)$, we can consider the following one-form:
\begin{align}
\omega &= F_x \, dx + F_y \, dy + F_z \, dz
\end{align}
And yes, the exterior derivative of the one-form $\omega$ is indeed the thing you wrote down:
\begin{align}
d\omega &= \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right) dx \wedge dy + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) dz \wedge dx + \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) dy \wedge dz
\end{align}
Just some fun extra tidbits: if you know some vector calculus, the above expression probably looks pretty familiar, almost like the curl of $F$, though not quite.
If you want to somehow get the curl of $F$ from here, you need to look at the "Hodge star" operator, which assigns to the above $2$-form $d\omega$ a certain $1$-form $\alpha$, namely
\begin{align}
\alpha &= \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right) dz + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) dy + \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) dx
\end{align}
then from here, you can get a vector field, $G$, (pretty much by replacing $dx$ with $e_x$, $dy$ with $e_y$ and $dz$ with $e_z$),
\begin{align}
G:= \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) e_x + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) e_y + \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right) e_z,
\end{align}
and this is precisely the curl of $F$
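If you want to see this concretely, here is a small symbolic check using sympy's vector module; the sample field is an arbitrary choice:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
Fx, Fy, Fz = y*z, x*z**2, x**2 * y          # an arbitrary sample field
# The three coefficients of d(omega), exactly as written above:
dxdy = sp.diff(Fy, x) - sp.diff(Fx, y)
dzdx = sp.diff(Fx, z) - sp.diff(Fz, x)
dydz = sp.diff(Fz, y) - sp.diff(Fy, z)
# They match the corresponding components of curl(F):
C = curl(Fx*N.i + Fy*N.j + Fz*N.k)
for e, comp in [(N.i, dydz), (N.j, dzdx), (N.k, dxdy)]:
    print(sp.simplify(C.dot(e) - comp))     # 0, 0, 0
```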
|
H: locally free resolution of coherent sheaf on quasi-projective scheme
Sorry for my bad English.
In Hartshorne "Algebraic Geometry " , III. Example 6.5.1
i.e. there is locally free resolution
$\dots \to \mathscr{L}_1\to \mathscr{L}_0\to \mathscr{F}\to 0$.
But I can't understand why $\mathscr{L}_1, \mathscr{L}_2$... are also
locally free sheaves of finite rank.
Please help me, thanks.
AI: II, 5.18 states that for $\overline{X}$ projective over $A$, any coherent sheaf $\mathscr{F}$ can be written as an epimorphic image of some finite direct sum of the twists $\mathcal{O}(n)$ of the structure sheaf, and thus as a quotient of a locally free sheaf $\mathscr{L}_0$ of finite rank. As the kernel of $\mathscr{L}_0\rightarrow \mathscr{F}$ is itself coherent, the same applies, and one finds $\mathscr{L}_1\rightarrow \mathscr{L}_0 \rightarrow \mathscr{F}\rightarrow 0$ exact with $\mathscr{L}_1$ locally free of finite rank. And so on.
In the quasi-projective case, one can choose an open immersion $i:X \hookrightarrow \overline{X}$ into a projective $A$-scheme, and then by Exercise II.5.15 of Hartshorne, extend a coherent sheaf $\mathscr{F}$ on $X$ to a sheaf $\mathscr{F'}$ on $\overline{X}$ which is still coherent. By the previous, there is a resolution $\mathscr{L}_{\bullet}\rightarrow \mathscr{F}'\rightarrow 0$ by finite rank locally free sheaves on $\overline{X}$. The restriction functor to the open subscheme $X$ is exact and preserves "being locally free of finite rank", hence ${\mathscr{L}_{\bullet}}_{\restriction_{X}} \rightarrow \mathscr{F'}_{\restriction_{X}}(=\mathscr{F})\rightarrow 0$ is a resolution by finite rank locally free sheaves on $X$ (in fact, finite direct sums of line bundles).
|
H: Number of words that can be made from the word ALGEBRA
I'm trying to do this combinatorics problem out of Khee-Meng Koh's "Principles and Techniques in Combinatorics" and am getting overwhelmed.
We want to find the number of distinct 2-letter strings that can be formed from the word ALGEBRA.
My approach:
We have 6 distinct letters from the seven letters in ALGEBRA.
So, I choose 2 of those 6 letters and we can permute those 2 letters (i.e. AL and LA are two different strings) and then add the one string "AA"
$$ (2!){6 \choose 2} + 1 = 30 + 1 = 31$$
So we can make 31 distinct strings from the word ALGEBRA.
According to the solutions:
Of the 7 letters, we choose 2 to make a string. We then subtract the duplicates. There are 5 strings that begin with A and 5 strings that end with A that are counted twice. As a result
$$(2!) {7 \choose 2} - 10 = 32$$
We have 32 strings that can be formed from the word ALGEBRA.
Aren't they forgetting that "A_1 A_2" and "A_2 A_1" are the same string (i.e. "AA" = "AA"), so shouldn't they subtract by eleven for a total of 31 distinct strings?
AI: You are correct. Another approach:
Consider the six distinct letters: $A,L,G,E,B,R$.
For the first element of the string we can choose a letter in $6$ ways, and for the second element we can choose from any of the remaining $5$ letters, yielding a total of $6 \times 5 = 30$ choices for the two letter string. Now, add another letter 'A' into the possible choices for letters. Notice combinations of the letter 'A' and any other letter were already considered before, and the only combination missing is 'AA'.
So, in total there are $31$ ways to choose the string.
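Both counts are easy to confirm by brute force (a quick Python sanity check):

```python
from itertools import permutations

# Ordered pairs of letters of "ALGEBRA"; the set collapses duplicate strings.
strings = {"".join(p) for p in permutations("ALGEBRA", 2)}
print(len(strings))  # 31
```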
|
H: Find a basis of a subspace
Let $X$ be a set with $X \neq \emptyset$ and $F$ a field. Let $V$ be the vector space $V=\{ f : X \rightarrow F \}$ with the usual operations. Find a basis for the subspace,
$$W= \{ f\in V \mid f(x) =0 \quad \text{for all} \quad x \in X \quad \text{except for a finite number of elements}\}$$
Can you help me with this problem? I don't understand how to do it.
AI: For each $x\in X$, let $f_x$ be the function which is $1$ at $x$ and $0$ elsewhere. We will show that $B=\{f_x:x\in X\}$ forms a basis for $W$. For linear independence, assume that
$$\lambda_1f_{x_1}+\lambda_2 f_{x_2}+\ldots +\lambda_n f_{x_n}=0 \text{ (the function which sends every $x\in X$ to $0$)}$$
for some $\lambda_1,\ldots,\lambda_n\in F$ and some $x_1,\ldots,x_n\in X$. We may assume, without loss of generality, that $x_i\ne x_j$ if $i\ne j$ because otherwise we can combine like terms. Therefore, for each $i$, the value of the LHS at $x_i$ is $\lambda_i$ and the value of the RHS at $x_i$ is $0$, and hence $\lambda_i=0$. Now for spanning, let $f\in W$. There exist distinct $x_1,\ldots,x_n\in X$ such that $f(x_i)\ne 0$ for all $i$ and $f(x)=0$ for all $x\in X\backslash\{x_1,\ldots,x_n\}$. Therefore,
$$f=f(x_1)f_{x_1}+f(x_2)f_{x_2}+\ldots+f(x_n)f_{x_n},$$
completing our proof.
It is worth noting that the "finite" in "except for a finite number of elements" is crucial. Otherwise, we would not necessarily be able to express every element in $W$ as a finite linear combination of elements in $B$.
|
H: Showing the solution of a recurrence relation
I've been working through recurrence relation problems and came across one that I am struggling with
Say we have a relation as follows
$r_k - 7r_{k-1} + 12r_{k-2} = 0$ for all $k \geq 2$ and $r_0 = 1, r_1 = 7$
The problem is essentially asking whether the solution satisfies $a_n = n3^n+4^n$ for all $n$.
Now the base cases hold, but I'm unsure of how to proceed in proving it for all $n$.
AI: Recurrence relations of the form
$$a_n= A_1 \cdot a_{n-1} + A_2 \cdot a_{n-2}$$
have the general solution
$$a_n = c_0 {r_0}^n + c_1 {r_1}^n$$
where $r_0$ and $r_1$ are the (distinct) solutions of the characteristic equation
$$t^2 - A_1t - A_2 =0$$
and $c_0, c_1$ satisfy the simultaneous equations
$$a_0= c_0 {r_0}^0 + c_1 {r_1}^0 = c_0+c_1$$
and
$$a_1 = c_0 {r_0}^1 + c_1 {r_1}^1 = c_0 r_0 + c_1 r_1$$
It's easy to see how this generalizes to higher order linear homogeneous difference equations as well.
Use $A_1=7$ and $A_2=-12$ to solve your problem. Wolfram Alpha gives
$$r_n = 4^{n+1} - 3^{n+1}.$$
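A quick check that this closed form satisfies the recurrence, and that the proposed $a_n = n3^n+4^n$ already fails at $n=2$:

```python
r = [1, 7]
for k in range(2, 10):
    r.append(7 * r[k - 1] - 12 * r[k - 2])

print(all(r[n] == 4**(n + 1) - 3**(n + 1) for n in range(10)))  # True
print(r[2], 2 * 3**2 + 4**2)  # 37 vs 34, so a_n = n*3^n + 4^n is wrong
```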
|
H: Function field of open set of subvariety
Fulton, in his book Algebraic Curves on classical algebraic geometry, says that if $X$ is an irreducible algebraic set and $V \subset X$ an open subset, then the field of fractions $k(V) = k(X)$; subsequently, he defines the dimension of an abstract variety and says that in the above scenario, $dim(V) = dim(X)$ (Proposition 10, p. 75). Here is my confusion: If $Y \supset X$ is another irreducible algebraic set, then it is not necessarily true that $k(Y) = k(X)$! (E.g., $Y = \mathbf{A}^n$ and $X$ is any algebraic subset.) Does this mean that we have to think of the function field and dimension of an abstract variety relative to a particular "base closed set"?
I am taking a look at the book after a break of a few months so sorry if I forgot something obvious. :)
AI: You've forgotten that $V\subset X$ should be an open subset. The function field and dimension of a variety are intrinsic concepts which do not depend on any embedding.
|
H: If we have a connected graph that remains connected after removing any edge, can we make the following claim
Given we have a connected graph $G$ that remains connected if any edge in $G$ is removed
Say we have $(a, b) \in E(G)$ (where $E(G)$ is the set of all edges in $G$) and $(c, d) \in E(G)$.
Can we assume that $(a, d) \in E(G)$? I would assume no, because the graph can still be connected if the edge $(a, d)$ doesn't exist.
AI: Any cycle graph serves as a counterexample to your claim.
Take for instance a cycle with four nodes in the order $a$, $b$, $d$, $c$. It contains edges $(a,b)$ and $(c,d)$, but not $(a,d)$. So you are correct that the assumption does not hold.
|
H: Orders in extension of $Q_p$
Suppose we have field extensions $L/K/\mathbb{Q}_p$. If $R$ is a ring such that
$$\mathcal{O}_K \subset R \subset \mathcal{O}_L$$
(where if $M$ is an extension of $\mathbb{Q}_p$ then $\mathcal{O}_M$ is the integral closure of $\mathbb{Z}_p$ in $M$) and $R$ has rank $[L : K]$ as an $\mathcal{O}_K$-module, then can we say that $R = \mathcal{O}_L$? In the number field case, this isn't true, but I'm wondering if it is true in the local case.
AI: The ring of integers of $ \mathbf Q_2(\sqrt{5}) $ is not $ \mathbf Z_2[\sqrt{5}] $, it's $ \mathbf Z_2[(1 + \sqrt{5})/2] $, so that's a counterexample to your claim.
In general, you can turn global counterexamples to this into local ones by recalling that the discriminant captures the ramification information of a number ring, so if you pick an order lying inside it whose discriminant includes a prime factor $ p $ which is unramified in the actual number ring, then you'll get a counterexample after taking completions of the rings at $ p $. In the example I gave, the discriminant of $ \mathbf Z[(1 + \sqrt{5})/2] $ is $ 5 $, but the discriminant of the order $ \mathbf Z[\sqrt 5] $ is $ 20 = 2^2 \cdot 5 $, so you get a counterexample after completing at the prime $ p = 2 $.
|
H: Volume of the solid bounded by $z = 4-x^2$, $y+z=4$, $y=0$ and $z=0$.
If I am seeing this problem correctly, when $z=0$, $x = \pm 2$, so $-2 \le x \le 2$.
The $y$ coordinate varies from $0$ to $4$, because when $z=0, y=4$ (the plane $y+z=4$ with $z=0$). So $0 \le y \le 4$.
The $z$ coordinate varies from the plane $z=0$ to the plane $z=4-y$.
Then $0 \le z \le 4-y$.
So the integrals are:
$\displaystyle \int_{-2}^{2} \int_{0}^{4} \int_{0}^{4-y} dz\,dy\,dx$
Is this correct?
AI: Note that the solid is the intersection of the regions $z<4-x^2$ (inside the parabolic cylinder) and $z<4-y$ (below the plane), and thus the $(x,y)$ domain is split along the curve where the two surfaces meet: $y=x^2$.
Hence, the integration has two members, one for $y<x^2$ in which $z<4-x^2$:
$$
\int_{-2}^{2}\int_{0}^{x^2}\int_{0}^{4-x^2} dz\,dy\,dx = \frac{128}{15}
$$
and for $y>x^2$ in which $z<4-y$:
$$
\int_{-2}^{2}\int_{x^2}^{4}\int_{0}^{4-y} dz\,dy\,dx = \frac{256}{15}
$$
The answer is $\frac{128}{15}+\frac{256}{15}=\frac{128}{5}$
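If you want to double-check the two pieces, a symbolic computation (here with sympy) confirms them:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
I1 = sp.integrate(1, (z, 0, 4 - x**2), (y, 0, x**2), (x, -2, 2))
I2 = sp.integrate(1, (z, 0, 4 - y), (y, x**2, 4), (x, -2, 2))
print(I1, I2, I1 + I2)  # 128/15 256/15 128/5
```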
|
H: If $R$ is a reduced Noetherian ring, then every prime ideal in the total quotient ring $K(R)$ is maximal.
I know that in $K(R)$, the set of maximal ideals is the set of associated primes of $K(R)$ and that an ideal is maximal if and only if it is the localization of a maximal associated prime of $R$.
So, we know there are only finitely many maximal ideals in $K(R)$.
I'm not sure if that is helpful, but, We want to show that if $R$ is a reduced Noetherian ring, then every prime ideal in $K(R)$ is in fact maximal.
I'm not sure how to use assumption that $R$ is reduced. All I know is that this means that there are no nilpotent elements in $R$.
AI: First, note that $Q(R)$ (writing $Q(R)$ for the total quotient ring $K(R)$ above) is Noetherian and reduced since $R$ is, and second, every element of $Q(R)$ is either a unit or a zero divisor. Once this is established, we can effectively forget about $R$. With that in mind, all ideals introduced are ideals of $Q(R)$.
Let $\mathfrak{p}_1,\ldots,\mathfrak{p}_k$ be the minimal prime ideals of $Q(R)$ (there are finitely many since $Q(R)$ is Noetherian). Recall that the nilradical (the set of nilpotent elements, or equivalently, the radical of the zero ideal) of $Q(R)$ is equal to the intersection of all primes ideals of $Q(R)$. Since $Q(R)$ is reduced, this means that the intersection of the prime ideals is the zero ideal. Furthermore, because every prime ideal contains a minimal prime ideal, the intersection of all prime ideals equals $\cap_{i=1}^k\mathfrak{p}_i$ and therefore $\cap_{i=1}^k\mathfrak{p}_i=(0)$.
We now show that $\mathfrak{p}_1,\ldots,\mathfrak{p}_k$ are all maximal ideals. Let $j_1\in\{1,\ldots,k\}$ and let $I$ be an ideal such that $\mathfrak{p}_{j_1}\subseteq I\subsetneq Q(R)$. Let $x\in I$. Then $x$ is not a unit since $I\neq Q(R)$, which implies $x$ is a zero divisor. So, there exists nonzero $y\in Q(R)$ such that $xy=0$. Since $y$ is nonzero and $\cap_{i=1}^k\mathfrak{p}_i=(0)$, we see $y\notin\cap_{i=1}^k\mathfrak{p}_i$ and therefore there exists $j_2\in\{1,\ldots,k\}$ such that $y\notin \mathfrak{p}_{j_2}$. However, $xy=0\in\mathfrak{p}_{j_2}$, so $x\in\mathfrak{p}_{j_2}\subseteq\cup_{i=1}^k\mathfrak{p}_i$. Thus, we conclude $I\subseteq\cup_{i=1}^k\mathfrak{p}_i$.
Next, by prime avoidance, we deduce that $I\subseteq \mathfrak{p}_{j_3}$ for some $j_3\in\{1,\ldots,k\}$ and hence $\mathfrak{p}_{j_1}\subseteq\mathfrak{p}_{j_3}$. Because $\mathfrak{p}_{j_3}$ is a minimal prime ideal, this implies $\mathfrak{p}_{j_1}=\mathfrak{p}_{j_3}$ and therefore $I=\mathfrak{p}_{j_1}$. Thus, we conclude $\mathfrak{p}_1,\ldots,\mathfrak{p}_k$ are maximal ideals. Finally, if $\mathfrak{p}$ is any prime ideal, then it contains a minimal prime $\mathfrak{p}_j$ for some $j\in\{1,\ldots,k\}$ and hence $\mathfrak{p}=\mathfrak{p}_j$ by maximality of $\mathfrak{p}_j$, so every prime ideal is maximal.
I have edited my answer to fill in a lot more details, but let me know if you need further clarification on any points.
|
H: Countability in practice
I am studying the function portion of discrete mathematics and I am wondering to know that how can we practically count the integers?
As they are saying these are countable.
AI: The term "countable" in math just means that we can assign each "object", "label", or "symbol" an integer number without repeats.
The set of integers is infinite, yet we can give each integer its own number. We can label 1 as "1", 2 as "2", 3 as "3" and so on... (Of course we can do this for negative numbers too).
This might be confusing to grasp, but there are higher-order infinities, meaning that some infinities are larger than others. That is, there are countable infinities and there are uncountable infinities. The set of real numbers is uncountable. You can use up all the numbers in the set of integers and still not be able to label all the real numbers.
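By contrast, for the integers the "counting" can be written down explicitly, via the enumeration $0, 1, -1, 2, -2, \dots$:

```python
def nth_integer(n):
    """Return the n-th term of the enumeration 0, 1, -1, 2, -2, ... (n = 1, 2, ...)."""
    return n // 2 if n % 2 == 0 else -(n // 2)

print([nth_integer(n) for n in range(1, 10)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```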
|
H: Linear Mappings and Ker
Let $\{\vec{v_1},...,\vec{v_k}\}$ be a basis for a subspace $S$ of an $n$-dimensional vector space $V$. Prove that there exists a linear mapping $T:V\rightarrow V$ such that Ker$T=S$.
I reviewed the rank-nullity theorem and I am very lost with this proof.
AI: Two hints:
A basis for a subspace can be extended to a basis for the entire space.
A linear transformation is specified once you know its values on a basis.
|
H: Increasing Percentage Question
Question: A population of a colony of bacteria increases by 20% every 3 minutes. If at 9:00am the colony had a population of 144,000, what was the population of the colony at 8:54am?
My solution: If I want to increase something by 20%, then I would multiply it by (.20). So for the problem above, my solution would be x * (.20) * (.20) = 144000 and I would solve for x. However when I solve for x, the number is 3600000, which doesn't make sense. What exactly am I doing wrong?
AI: If the population increases by $20\%$ every $3$ minutes,
that means it's multiplied by $\color{red}1.20$ every $3$ minutes.
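So to step back two 3-minute intervals, divide by $1.2$ twice:

```python
print(144000 / 1.2**2)  # 100000.0, the population at 8:54am
```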
|
H: Prove that $ \sum_{i=1}^{N} a_i \leq \sqrt{N \sum_{i=1}^{N}a_i^2} $
Prove that $ \sum_{i=1}^{N} a_i \leq \sqrt{N \sum_{i=1}^{N}a_i^2} $. Well, I choose $u=(1,\ldots,1)$ and $v=(a_1,\ldots,a_N)$ with $a_i$ positive, and apply $u \circ v \leq |u||v|$. With this, now I want to prove that $\sum_{i,j}^{N}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_j}\leq N |\nabla u||\nabla v|$. I know that I am very close, but if somebody can help me, I will appreciate it. Thank you.
AI: So we already know that $\sum_i a_i = u\circ v$. You then have $|u| = \sqrt{u\circ u} = \sqrt{N}$. And $|v| = \sqrt{v\circ v} = \sqrt{\sum_i a_i^2}$. Thus, $\sqrt{N\sum_i a_i^2} = |u||v|$, and so $u\circ v \leq |u||v|$ gives the desired result.
|
H: limit of a function at infinity from the definition perspective
By definition from Wiki, for $f(x)$ a real function, the limit of $f$ as $x$ approaches infinity is $L$ if for all $\epsilon > 0$, there exists an $M$ such that $|f(x) - L| < \epsilon$ whenever $x > M$.
Now suppose that for a function $g(x)$ I can prove that for all $\epsilon > 0$, there exists an $M$ such that $|g(x) - L| < 2\epsilon$ whenever $x > M$. Can I say $\underset{x \rightarrow \infty}{\lim}~g(x) = L$?
AI: Yes. Let $\epsilon'=2\epsilon$, and suppose $\epsilon'>0$. Clearly, $\epsilon>0$, so the condition implies there exists an $M$ so that $|g(x)-L|<2\epsilon$ whenever $x>M$. But this is the same thing as saying that given any $\epsilon'>0$, there exists an $M$ so that $|g(x)-L|<\epsilon'$ for any $x>M$, which is exactly the original definition.
|
H: How to prove that the language L={w1#w2#. . .#wk: k ≥ 2, each wi ∈ {0,1}^* , and wi = wj for some i !=j} is not context free using the pumping lemma?
I am having trouble choosing the string to use for the proof. I know that I have to choose a string such that at least two substrings separated by the # are equal to each other but am unsure of how to approach this. If someone could please help me with this, I would appreciate it.
AI: HINT: If $p$ is the pumping length, try $w=0^p1^p\#0^p1^p$.
|
H: How to integrate :$\int \frac{\sin^4x+\cos^4x}{\sin x \cos x}\:dx$
How to integrate :
$$\int \frac{\sin^4x+\cos^4x}{\sin x \cos x}\:dx$$
$$=\int \:\sin^2x \tan x \: dx+\int \:\cos^2x \cot x \:dx$$
Any suggestion?
AI: Add and subtract $2 \sin^2x \cos^2x$ in the numerator. Can you continue?
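One way to carry out the hint: $\sin^4x+\cos^4x = 1-2\sin^2x\cos^2x$, so the integrand becomes $\frac{2}{\sin 2x}-\sin 2x$, with antiderivative $\ln|\tan x|+\frac{\cos 2x}{2}+C$. A sympy check of this candidate (any valid answer may differ from it by a constant):

```python
import sympy as sp

x = sp.symbols('x')
f = (sp.sin(x)**4 + sp.cos(x)**4) / (sp.sin(x) * sp.cos(x))
F = sp.log(sp.tan(x)) + sp.cos(2*x) / 2   # candidate antiderivative
print(sp.simplify(sp.diff(F, x) - f))     # 0
```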
|
H: When to use each of the three double-angle identities for cosine?
There are three double angle identities that are all equivalent to each other. The concept of the equations being equivalent sounds fair to me, except I noticed each one has a specific time when to be used precisely with the problem.
How will I know when one of the three should be used according to the question?
I noticed it does matter at times, but maybe I do not understand the concept as a whole, does anyone know which instances I should choose either, other, or? Perhaps I am mistaken, but if so, could someone clarify why?
Thank you
AI: Consider the problem:
Solve the equation $$\cos2\theta=\cos\theta.$$
Recognizing that $$\cos2\theta=\cos^2\theta-\sin^2\theta=2\cos^2\theta-1=1-2\sin^2\theta,$$ which form do you feel is appropriate? Since the right side of the equation is written in terms of $\cos\theta$, perhaps it is most useful to write $\cos2\theta$ in terms of $\cos\theta$ as well:
\begin{align}
\cos2\theta&=\cos\theta\\
2\cos^2\theta-1-\cos\theta&=0\\
(2\cos\theta+1)(\cos\theta-1)&=0
\end{align}
This way, you can solve the quadratic equation in terms of $\cos\theta$.
Another example where the choice is relevant:
Simplify the expression $$\frac{\cos2\theta}{\cos\theta+\sin\theta}.$$
We would have $$\frac{\cos2\theta}{\cos\theta+\sin\theta}=\frac{\cos^2\theta-\sin^2\theta}{\cos\theta+\sin\theta}=\frac{(\cos\theta+\sin\theta)(\cos\theta-\sin\theta)}{\cos\theta+\sin\theta}=\cos\theta-\sin\theta.$$
|
H: Prove that f(m,n)=(m+n−2)(m+n−1)/2+m from $\Bbb Z^+\times \Bbb Z^+$→ $\Bbb Z^+$ is one-to-one.(2)
Since no one gave a solution in my last post, I am reopening the question and using another approach I found on this forum.
https://math.stackexchange.com/a/91323/620871
Show that the polynomial function $$f(m,n)=(m+n−2)(m+n−1)/2+m $$ is one-to-one and onto. The domain is $\Bbb Z^+\times \Bbb Z^+$ and the codomain is $\Bbb Z^+$.
I want to prove $f(m,n)=f(p,q) \longrightarrow (m=p \text{ and }n=q)$.
$$\frac12(m+n-2)(m+n-1)+m=\frac12(p+q-2)(p+q-1)+p\;.\tag{1}$$ The first step is to show that $m+n=p+q$, so suppose not. We may as well assume that $m+n<p+q$. For convenience let $a=m+n$ and $d=(p+q)-a$, so that it becomes $$\frac{(a-2)(a-1)}2+m=\frac{(a+d-2)(a+d-1)}2+p\;.$$
Then $$\begin{align*}
m-p&=\frac{(a+d-2)(a+d-1)}2-\frac{(a-2)(a-1)}2\\
&=ad+\frac{d(d-3)}2
\end{align*}$$
Since $a\ge 2$ and $d\ge1$, we discover that when $d>1$ we have $m-p>a=m+n$, which is absurd.
However, if $d=1$ and $a\ge 2$, then $m-p>a$ does not hold. How do I deal with the case $d=1$, $a\ge 2$, or can we just ignore it?
Thanks.
AI: There’s at least one problem with what you’ve done. You assume that $m+n>p+q$, let $a=m+n$ and $d=p+q-a$; clearly this implies that $d<0$, so the later assertion that $d\ge 1$ cannot be right. I suggest a slightly different approach.
We know that
$$\frac{(m+n-2)(m+n-1)}2=\sum_{k=1}^{m+n-2}k$$
and
$$\frac{(p+q-2)(p+q-1)}2=\sum_{k=1}^{p+q-2}k\;.$$
Without loss of generality assume that $m+n\ge p+q$. Then
$$0=f(m,n)-f(p,q)=\sum_{k=p+q-1}^{m+n-2}k+m-p\;.\tag{1}$$
(If $p+q-1>m+n-2$, the summation evaluates to $0$.) If $m+n>p+q$, then $m+n-2\ge p+q-1$, and $(1)$ implies that
$$0=\sum_{k=p+q-1}^{m+n-2}k+m-p\ge p+q-1+m-p=m+q-1\ge 1\;,$$
which is absurd, so $m+n=p+q$, and $(1)$ implies that $0=m-p$, i.e., that $m=p$, which in turn implies that $n=q$. Thus, $f$ is one-to-one.
When you try to prove that $f$ maps $\Bbb Z^+\times\Bbb Z^+$ onto $\Bbb Z^+$, you may find the following diagram helpful; each point $\langle m,n\rangle$ is labelled with the number $f(m,n)$.
$$\begin{array}{ccc}
n&\begin{array}{c|cc}
4&7&12&18&25\\
3&4&8&13&19\\
2&2&5&9&14\\
1&1&3&6&10\\\hline
&1&2&3&4
\end{array}\\
&m
\end{array}$$
Notice that the points are numbered along consecutive diagonals from upper left to lower right, and that each diagonal has one more point on it than the previous one.
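A brute-force check of the diagonal numbering for small values (the values on the first diagonals should be exactly $1,2,\ldots,N$ with no repeats):

```python
def f(m, n):
    return (m + n - 2) * (m + n - 1) // 2 + m

# All points on the diagonals with m + n <= 30:
vals = sorted(f(m, n) for m in range(1, 30) for n in range(1, 30) if m + n <= 30)
print(vals == list(range(1, len(vals) + 1)))  # True: distinct and consecutive
```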
|
H: Probability that a book with $200$ pages has $3$ misprints at most using Poisson approximation
A publisher assumes that when printing a typical book, the probability for a misprint on a page is $1.25$%.
I'm trying to find out these two things:
What is the probability that on a book with $200$ pages three misprints are made at most?
How can one calculate the above using the Poisson limit theorem?
I think the probability that each misprint appears on a given page is $p = \frac{1}{n}$.
Exactly three would be $\binom{m}{3}p^3(1-p)^{m-3}$.
$$\binom{200}{3}\left(\frac{1}{200}\right)^3 \left(1-\frac{1}{200}\right)^{200-3} = 0.0612 = 6.12\%$$
Is that correct?
I know that the poisson limit theorem is defined like this:
$$\lim_{n \to \infty} \binom{n}{k}p^k_n(1-p_n)^{n-k} = e^{-\lambda} \frac{\lambda^k}{k!}$$
where $p_n$ is a sequence of real numbers in $[0,1]$ so that the sequence $np_n$ converges to a finite limit $\lambda$.
I tried
$$P(X \leq 3) = \frac{e^{-1}1^0}{0!} + \frac{e^{-1}1^1}{1!} + \frac{e^{-1}1^2}{2!} + \frac{e^{-1}1^3}{3!}$$
but this doesn't give the answer as above.
Any help is appreciated!
AI: This might be a useful approach to consider, based on my reading of the information:
The probability of a page containing a misprint is given as 1.25%. I interpret this to imply $p=0.0125$.
To determine the probability of having exactly 3 misprinted pages out of 200, using the binomial distribution, one then has:
$$
P(k=3)=\binom{200}{3} p^{3}(1-p)^{\left(200-3\right)}
$$
$$
\binom{200}{3} = \frac{200!}{3!(200-3)!} = \frac{200 \times 199 \times 198}{6}=1313400
$$
$$
P(k=3)=\binom{200}{3} p^{3}(1-p)^{\left(200-3\right)} = 1313400 \times 0.0125^{3} \times (1-0.0125)^{197} = 0.215246
$$
Comparing to the Poisson distribution,
$$
P(k=3)=\frac{\lambda^{k}e^{-\lambda}}{k!}
$$
let $\lambda=pn= 0.0125 \times 200=2.5$, we have
$$
P(k=3)=\frac{2.5^{3}e^{-2.5}}{6}=0.2137630
$$
To compute the "at most 3", I believe it requires adding the contributions of $k=0,k=1,k=2$ to the one for $k=3$.
I hope this helps.
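As a numerical cross-check of the "at most 3" probability (assuming scipy is available):

```python
from scipy.stats import binom, poisson

print(binom.cdf(3, 200, 0.0125))  # ~0.758, exact binomial P(X <= 3)
print(poisson.cdf(3, 2.5))        # ~0.7576, Poisson approximation
```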
|
H: Why paracompact spaces are required to be Hausdorff
If paracompactness is supposed to be a generalization of compactness, why is Hausdorffness required in its definition?
It seems like it is more a generalization of compact normal spaces. But the name does not suggest so.
AI: In fact, many texts require compact spaces to be Hausdorff too, making them normal as well. Paracompact spaces that are Hausdorff are also normal, which is a generalisation of that fact. Indeed, paracompactness can be seen as a strong form of normality: normal spaces obey "every point-finite open cover has a point-finite shrinking", so normality itself behaves like a weak covering property (as properties like compactness, paracompactness, Lindelöfness, etc. are collectively called).
Historically paracompactness grew out of a common generalisation of metrisability and compactness (it occurs quite naturally in proving metrisation theorems), so in that light adding Hausdorff seems natural, as we then get the normality people were already used to getting in compactness anyway. Many alternative characterisations of paracompactness require regularity as well. Most spaces that occur "in nature" that are paracompact were already Hausdorff anyway. And Hausdorffness ensures that we have lots of open sets (enough to separate points) and so lots of open covers, so that paracompactness "means" something, intuitively speaking.
|
H: Piecewise Function vs Regular Function
I'm into graphing functions and I'm currently working on some project of mine. I'm a little confused, what's the main distinction of a Piecewise Function with just a Regular / $f(x)$ Function? I mean, most Piecewise functions posses the same format of equations with an $f(x)$ Function.
AI: Generally a piece-wise function has at least one jump discontinuity in it. For example $F(x) =1$ is a "regular" function as you refer to it; whereas
$G(x)= \begin{cases}
-1 & x< 0 \\
0 & 0\leq x\leq 100 \\
1 & x > 100
\end{cases}$ is piece-wise (it has jump discontinuities at $x=0$ and $x=100$). For more info see the Wikipedia article on piecewise functions.
|
H: A question in proof of Apostol ( Mathematical Analysis) in Theorem 10.27
I am self studying Apostol Mathematical Analysis Chapter->Lebesgue Integration and I was unable to think about an argument used in that proof.
Adding its image ->
Can someone please tell a rigorous argument which deduces the blue underlined portion of the proof.
AI: $G_{n,1}(x)=\max \{f_1(x),f_2(x),...,f_n(x)\}$. The right hand side is increasing in $n$ and its limit is $\sup \{f_n(x): n=1,2,...\}$. Also $G_{n,1}(x) \to G_1(x)$ almost everywhere. Hence you only have to take the limit as $n \to \infty$ to get $G_1(x)=\sup \{f_1(x),f_2(x),...\}$ almost everywhere.
|
H: Relationship between induced measure and measure corresponding to a density function.
I am reading these lecture notes, section 2.3 (pg 4), and I've become very confused about relationship between
The induced measure $\mu_X$ -- a random variable $X: \Omega \rightarrow S$ with the original measure $\mu$ induces $\mu_X(B) = \mu(X^{-1}(B))$.
The measure $\nu(A) = \int_A f\ d\mu$ corresponding to a density function $f:\Omega \rightarrow \mathbb{R}^{0+}$.
The notes compare the definition of density function I'm familiar with: $Pr(X \leq a) = \int_{-\infty}^a f(x)\ dx$ with the measure-theoretic equivalent: $\mu_X(B) = \int_B f\ d\lambda$ where $\lambda$ is the Lebesgue measure.
I'm trying to reconcile the original definition of $\mu_X$ with the new one, and
I can't see why it should be the case that $\mu_X(B) = \int_B f\ d\lambda = \int_{X^{-1}(B)}1\ d\mu$ where the RHS is just another way of writing the original $\mu(X^{-1}(B))$.
I'm also confused because $\nu$ and $\mu_X$ are written so similarly you'd suspect they're the same thing, but $\mu_X$ is a measure on $(S, \mathcal{A})$ whereas $\nu$ is a measure on $(\Omega, \mathcal{F})$, so this is clearly not possible. But then I'm not sure what was the point of $\nu$.
I think this is somehow related to one of the theorems stated in the text: $\int g\ d\nu = \int f g\ d \mu$ (when $f, \nu, \mu$ are related as described above), but if I use this by plugging in $\lambda \rightarrow \mu, I_B \rightarrow g $ ($I_B$ being the indicator function for set $B$)) I end up with $\int I_B d\nu$ where $\nu(A) = \int_A1\ d\lambda$, which still doesn't get me anything I want.
What am I missing?
AI: For any random variable $X$ we always have a measure $\mu_X$ on the Borel sigma algebra of the real line defined by $\mu_X(B)=P(X^{-1}(B))$. In general $X$ need not have a density. We say that $X$ has a density if there exists a non-negative measurable function $f$ such that $P(X^{-1}(B))=\int_B f(x)dx$ for all Borel sets $B$. In 2), $\mu$ is not $\mu_X$ but the Lebesgue measure. $\nu$ is the same as $\mu_X$, and we have $\mu_X(B)=\int_B f(x)dx$ for all Borel sets $B$.
|
H: An integrable function $f$ on $\Bbb R$ satisfying $\lim_{h\to 0}\int_\Bbb R \frac{|f(x+h)-f(x)|}{h}dx=0$ must be constant
Suppose $f:\Bbb R\to \Bbb C$ is an integrable function, i.e. $\int_\Bbb R|f|~dx<\infty$, satisfying $$ \displaystyle\lim_{h\to 0}\displaystyle\int_\Bbb R \dfrac{|f(x+h)-f(x)|}{h}dx=0.$$
I am trying to show that in this case that $f$ must be constant (of course in a.e. sense). Using Fatou's lemma, I could show that $\liminf_{h\to 0} (f(x+h)-f(x))/h=0$ for almost all $x\in \Bbb R$, but I can't see how to progress next. Can I get a hint?
AI: Let $a<b$ and assume that $a$ and $b$ are Lebesgue points of $f$. Then $\int_a^{b} \frac {f(x+h)-f(x)} h dx \to 0$. This can be written as $\frac 1 h \left(\int_{a+h} ^{b+h} f(x)dx- \int_a^{b} f(x)dx\right) \to 0$, or $\frac 1 h\left(\int_b^{b+h}f(x)dx-\int_a^{a+h} f(x)dx \right)\to 0$. Since $a$ and $b$ are Lebesgue points, this gives $f(a)=f(b)$. Hence $f$ is a constant almost everywhere. Note that $f$ is also integrable, so $f=0$ almost everywhere!
|
H: If $f(x)=(x^2+6x+9)^{50}-4x+3$ has roots $r_1, r_2, \ldots, r_{100}$, then compute $\sum_i (r_i+3)^{100}$
Let $f(x)=(x^2+6x+9)^{50}-4x+3$, and let $r_1,r_2,\ldots,r_{100}$ be the roots of $f(x)$. Compute $$(r_1+3)^{100}+(r_2+3)^{100}+\cdots+(r_{100}+3)^{100}$$
How should I compute this?
AI: If $r_i$ is root of the given polynomial, then $y_i = r_i + 3$ is root of the polynomial
$$f(y-3) = y^{100}-4y + 15$$
Since each root satisfies $y_i^{100} = 4y_i - 15$, and $\sum_{i=1}^{100} y_i = 0$ by Vieta's formulas (the $y^{99}$ coefficient vanishes), it follows that
$$\sum_{i=1}^{100}y_i^{100} = 4\sum_{i=1}^{100}y_i - 100\cdot 15 = -1500$$
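A numerical sanity check, using the relation $y_i^{100}=4y_i-15$ at the numerically computed roots:

```python
import numpy as np

coeffs = [1] + [0] * 98 + [-4, 15]   # y^100 - 4y + 15
roots = np.roots(coeffs)
print(np.sum(4 * roots - 15).real)   # approximately -1500.0
```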
|
H: Which subsets out of S, W and T form subspace of the vector space V?
$V = \mathbb{R}^\mathbb{R}$
$S = \{f : f$ is monotone$\}$
$T = \{f : f(2) = (f(5))^5\}$
$W = \{f : f(2) = f(5)\}$
Note that monotone means either non-decreasing or non-increasing.
My Attempt:
For S:
It's not a subspace of V and it can be shown with a counterexample which violates the fact that vector spaces are closed under some binary operation (addition in this case).
$
f_1(x) = x\\
f_2(x) = -2x \\
f_3(x) = f_1(x) + f_2(x)\\
$
$ f_3(x)=
\begin{cases}
(-x) + (-2(-x)) = x,& \text{if } x \in (-\infty, 0) \\
x + (-2x) = -x ,& \text{if } x \in [0, \infty)\\
\end{cases}
$
$f_3(x)$ is not monotonic, hence S is not a subspace of V. [Reason: S is not closed under addition.]
For W:
For a non-decreasing monotonic function, $f$, if $x \leq y$, then $f(x) \leq f(y).$
We know that
$$2 \leq 5 \\
f(2) \leq f(5) \\$$
But it is given that
$$f(2) = f(5)$$
Hence, $f$ must be a constant function and it can be defined as $$f(x) = k, \text{ such that } k \in \mathbb{R}$$
It can be shown that the family of constant functions contains the function which is zero everywhere and is closed under addition and scalar multiplication. Hence, W is a subspace of V.
AI: W is the only one which is a subspace of $R^R$ because...
W forms a vector space in itself.
In T, closure under scalar multiplication doesn't hold.
In S we can find a non-decreasing function $f$ and a non-increasing function $g$ such that $f+g$ is not monotone, so closure under addition fails.
|
H: Prove the left hand side of the following inequality
$\frac{2ab}{a+b} \le \sqrt{a \cdot b} \le \frac{a+b}{2}$ where $a,b > 0$
The right hand side $\sqrt{a \cdot b} \le \frac{a+b}{2}$ is the AM-GM inequality; it's clear how to prove it. Does the left hand side also have a trivial/elementary proof?
I appreciate your answer.
AI: Graphically:
All means are homogeneous in $a,b$ and you can divide by $a$. Then with $x:=\dfrac ba$,
$$\frac{2x}{x+1}\le\sqrt x\le\frac{x+1}2.$$
For the left inequality, multiply through by $(x+1)/\sqrt x$ (positive for $x>0$): it becomes $2\sqrt x\le x+1$, i.e. $(\sqrt x-1)^2\ge 0$.
|
H: Proof $\mathbb{C}^* \cong \mathbb{C} / \mathbb{Z}$
I am trying to prove the isomorphism between $\mathbb{C}^*$ and $\mathbb{C} / \mathbb{Z}$. I already established the way to do it:
find a surjective homomorphism $f: \mathbb{C} \to \mathbb{C}^*$, such that $Ker(f)=\mathbb{Z}$
take the homomorphism $\phi: \mathbb{C} \to \mathbb{C}/\mathbb{Z}$
Then there exists a homomorphism $g: \mathbb{C}/\mathbb{Z} \to \mathbb{C}^*$, and then we have to prove that $g$ is an isomorphism.
My problem mostly is in finding a surjective homomorphism $f$ such that the $Ker(f)=\mathbb{Z}$. Anyone that can help me out?
AI: You have the right idea! To finish it up, remember the following facts:
$e^{z_1 + z_2} = e^{z_1} \cdot e^{z_2}$. But we know $e^{n 2\pi i} = 1$ for each $n \in \mathbb Z$...
Can you use your idea (the first isomorphism theorem) to finish the proof with this?
I hope this helps ^_^
|
H: A question in Hilbert spaces
Let $X=C[-1,1]$ and define $\langle f,g \rangle =
\int^1_{-1} f(t)g(t)\, dt$.
If $M=\{f \in C[-1,1]\mid f\text{ is an odd function}\}$, what is $ M^\perp$?
AI: You can easily show that $$C([-1,1])=O([-1,1])\oplus E([-1,1])$$ where $O([-1,1])$ denotes the continuous odd functions on $[-1,1]$ and $E([-1,1])$ denotes the continuous even functions. (Hint: $f(t)=\frac{f(t)-f(-t)}{2}+\frac{f(t)+f(-t)}{2}$).
Next, you can show that odd and even functions are orthogonal to each other. Conclude that $O([-1,1])^{\perp}=E([-1,1])$.
|
H: Infinite divisibility - two examples on characteristic functions
Are the r.v.s with following characteristic functions infinitely divisible?
$e^{it}e^{-|t|}e^{-t^2}$
$\left(\frac{1}{2}(e^{it}+e^{-it})\right)^n$
The second one is easy because it is just a $\cos(t)^n$ but $\cos(\frac{\pi}{2}) = 0$ and characteristic functions of infinitely divisible distributions never vanish. So the answer is no.
What about the first one? This seems like convolution of independent $\delta_{1} * N(0, \sqrt2) * Cauchy$ so $$
\varphi_3(t)=e^{it}e^{-|t|}e^{-t^2} = \varphi_{X_1 + X_2 + X_3}(t) = \varphi_{X_1}\varphi_{X_2}\varphi_{X_3}
$$
Both Cauchy and Gaussian are infinitely divisible. I am not sure if Dirac's delta is. If it isn't then can I conclude that $\varphi_3(t)$ is not infinitely divisible? Any tips appreciated.
But $\delta_k = \delta_1 * \dots * \delta_1$ ($k$ times), so I think Dirac's delta is infinitely divisible.
AI: Your arguments are correct except that what you have is $\delta_1$. To show that it is i.d. just note that it is $\delta_{1/n} *\delta_{1/n}*...*\delta_{1/n}$ ($n$ factors) for each $n$.
|
H: Factoring inequalities using Iverson identity - confused by double summations in Concrete Mathematics book
In chapter 2, section 4 (multiple sums) of Concrete Mathematics (Graham, Knuth, Patashnik) the authors use the Iverson identity to rearrange the variables' bounds.
In particular, they start off with a question like this:
\begin{equation}
S = \displaystyle\sum\limits_{1 \le j < k \le n}^{}{(a_k - a_j)(b_k - b_j)}
\end{equation}
Continuing by changing the variable to get the lower triangle (from the diagonal):
\begin{equation}
S = \displaystyle\sum\limits_{1 \le k < j \le n}^{}{(a_j - a_k)(b_j - b_k)}
\end{equation}
Then they go on to add S to itself using the Iverson identity:
$$[1 \le j < k \le n] + [1 \le k < j \le n] = [1 \le j, k \le n] - [1 \le j = k \le n]$$
Here is where I feel absolutely lost. I can't understand how they arrive from the two inequalities on the left at the first one on the right. On the left, both $k$ and $j$ have a lower bound of 1, yet in the first on the right $k$ appears not to be bounded and $j$ appears not to have an upper bound. What am I missing?
AI: Basically, what is meant there is:
$$\{(j,k): 1\leq j < k \leq n \text{ or } 1\leq k < j \leq n\}$$
$$ = \{(j,k): 1\leq j, k \leq n \} \setminus \{(j,k): 1\leq j= k \leq n\}$$
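When in doubt with Iverson brackets, a brute-force check over a small $n$ settles it:

```python
n = 5
for j in range(1, n + 1):
    for k in range(1, n + 1):
        lhs = int(1 <= j < k <= n) + int(1 <= k < j <= n)
        rhs = int(1 <= j <= n and 1 <= k <= n) - int(1 <= j == k <= n)
        assert lhs == rhs
print("identity holds for all 1 <= j, k <=", n)
```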
|
H: Are All Generating Sets for the Borel Algebra Uncountable?
A related question asks is there a smallest set that generates a given $\sigma$-algebra:
Smallest collection of subsets that generate a sigma algebra
The only existing answer to that (at the time of writing this) says that there must be a smallest cardinality for any generating set of any $\sigma$-algebra, by the well-orderedness of cardinals.
Focusing on the Borel sigma algebra over the Reals, $\mathcal{B}(\mathbb{R})$, the above implies that there's a smallest such cardinality of a set that generates this algebra. But is that smallest cardinality uncountable? I believe this is formally what I'm asking:
$$|\mathbb{N}| \notin \{|S| \mid S \in \mathcal{P}(\mathbb{R}) \wedge \sigma(S) = \mathcal{B}(\mathbb{R})\}$$
My intuition says yes. If there was a countable set generating the Borel algebra it would seem weird... but I'm not sure...
AI: Open intervals $(a,b)$ with $a$ and $b$ rational form a countable class generating the Borel sigma algebra of $\mathbb R$.
|
H: System of algebraic equations is solvable iff condition on Gröbner basis
I was reading on Gröbner basis in chapter 9 of Dummit and Foote's Abstract Algebra, and while investigating on my own I stumbled upon an interesting result in Gröbner Bases - Theory and Applications by Franz Winkler which states the following:
Let $f_1,\ldots,f_m \in K[x_1,\ldots,x_n]$, and consider the system $f_1=\dots=f_m=0$. Let $I=(f_1,\ldots,f_m)$ and let $G$ be a normed Gröbner basis of $I$. Then, the system is unsolvable in $\overline{K}^n$ if and only if $1 \in G$.
The main issue I'm having is when trying to prove that $1 \in G$. I was able to show that if $1 \notin G$, then there is no constant polynomial in $G$, and also that if $G \cap K[x_n] = \varnothing$, then by induction I get the desired contradiction.
Any help on the issue would be greatly appreciated.
AI: I think it is possible to answer this with the weak Nullstellensatz. First of all, the nontrivial implication would be:
Unsolvable $\Rightarrow 1 \in G$: if $V_{\bar{K}}\left ( I \right )$ is the set of solutions of your system, then because of the weak Nullstellensatz, unsolvability would imply that $I = K\left [ x_1,...,x_n \right ]$. But then, because $G$ is a Gröbner basis, $\left ( G \right ) = I$ and $LT\left ( 1 \right ) = 1$ must be divisible by some $LT\left ( g_i \right )$. But then $1 = LT\left ( g_i \right )$, so it would be $g_i \in K$. And, because $G$ is normed, it must be that $1 = g_i \in G$, just what you need.
|
H: Find $7^{1604} \mod28$
How do I find $7^{1604} \mod28$? 7 and 28 aren't coprime, so I can't use Fermat's little theorem. How do I approach these types of problems? Do I use the Chinese Remainder Theorem?
AI: A typical approach to problems with a fixed modulus is to break the modulus up into prime powers. In your case, you break $28$ up into $7\times 2^2$.
Next, use CRT to convert the problem to 2 subproblems, investigating the remainder modulo 7 and 4. The first one is obvious, you just get 0. What about the second one? Can you use Fermat's little theorem?
Don't forget to put everything together with CRT when you're done!
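You can confirm the final answer with Python's built-in three-argument pow:

```python
print(pow(7, 1604, 28))  # 21: the unique x mod 28 with x = 0 (mod 7) and x = 1 (mod 4)
```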
|
H: how is this a functor?
This is from Page 78 of Rotman's homological algebra book:
Given an (R,S)-bimodule A and a left R-module B, Hom(A, B) is a left S-module, where sf takes a to f(as), and Hom(A, −) is a functor from left R-Mod to left S-Mod.
I've proved everything except the following part:
Suppose f is R-map from B to C, then Hom(A, f) is a S-map from Hom(A, B) to Hom (A, C).
I got stuck here: Hom(A, f)(sh)(a) = f(sh)(a) = fh(as) = Hom(A, f)h(as) for h in Hom(A,B) and s in S.
I am not sure how to cope with this s. Any help would be appreciated!
AI: So you want to check that $\hom(A,f)$ is indeed an $S$-map, right?
If that is so, then $\hom(A,f)(sh) = f\circ (sh)$.
Now if you evaluate that on $a$, you get $f((sh)(a)) = f(h(as)) = f\circ h(as)= s(f\circ h)(a)$, so $f\circ (sh) = s(f\circ h)$, and so $\hom(A,f)(sh) = s\hom(A,f)(h)$
|
H: Order of elliptic curve $y^2 = x^3 + ax^2 + b^2x$ is multiple of $4$.
Let $\mathbb{F}_q$ be a finite field with odd characteristic and let $a,b \in \mathbb{F}_q$ with $a \neq \pm 2b$ and $b \neq 0$. Define the elliptic curve $E: y^2 = x^3 + ax^2 + b^2x$.
The goal is to show that $4 \mid \#E(\mathbb{F}_q)$.
The first step is showing that the points $(b,b\sqrt{a+2b})$ and $(-b,-b\sqrt{a-2b})$ have order 4, which I was able to show.
Now, I want show the following things:
At least one of $a +2b,\ a-2b, \ a^2-4b^2$ is a square in $\mathbb{F}_q$ .
If $a^2-4b^2$ is a square in $\mathbb{F}_q$, then $E[2] \subseteq E(\mathbb{F}_q)$.
If I show these 2, it follows that $4 \mid \#E(\mathbb{F}_q)$; however, I fail to show these 2 properties.
Can I somehow get a contradiction by assuming that none of these is a square, or how is this done?
AI: For the second part, consider this:
The roots of $x^2 + ax +b^2$ are given by $ t_{1,2} = \frac{-a \pm \sqrt{a^2-4b^2}}{2}$.
If $a^2-4b^2$ is a square in $\mathbb{F}_q$ then the above expression is in $\mathbb{F}_q$. Thus $y^2 = x(x-t_1)(x-t_2)$, and the right-hand side has its $3$ roots $0, t_1, t_2$ in $\mathbb{F}_q$, so $E[2] \subseteq E(\mathbb{F}_q)$.
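A toy brute-force point count over a small field illustrates the statement (the parameters are an arbitrary admissible choice):

```python
q, a, b = 11, 3, 2            # odd q, a != ±2b, b != 0
count = 1                     # the point at infinity
for x in range(q):
    rhs = (x**3 + a * x**2 + b * b * x) % q
    count += sum(1 for y in range(q) if (y * y) % q == rhs)
print(count, count % 4)       # 8 0: the group order is divisible by 4
```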
|
H: Prove that if $ \lim_{x \to 0}g(x)=0$, then $\lim_{x \to 0} g(x) \cdot \sin(1/x) = 0 $
This is the Problem 21 from Chapter 5 of M. Spivak's "Calculus". It states:
Prove that if $ \lim_{x \to 0}g(x)=0$, then $\lim_{x \to 0} g(x) \cdot \sin(1/x) = 0 $
That's how I approached the problem. First, we know that $\sin(1/x)$ does not approach any limit as $x$ approaches $0$. However, we also have that $ \left | \sin(1/x) \right | \leq 1 = M $ for all $x$ near $0$.
Since $\lim_{x \to 0} g(x) = 0$, we can make $g(x) \cdot \sin(1/x)$ as close to zero as we want.
I am not sure how to prove this formally though. That's what I've come up with so far.
Since $M = 1$, we can have $ |g(x) \cdot M| < \epsilon $ for any $\epsilon$. Thus, $|g(x)| < \dfrac {\epsilon}{|M|} = \epsilon $. However, I am stuck in converting this into statements about $\delta$ and generalizing for $M$ other than $1$. Need your help.
AI: We have
$$-|g(x)|\le g(x)\sin\frac1{x}\le |g(x)|$$
for all $x\ne 0$. Note that $$\pm |g(x)|\to 0\text{ as }x\to 0.$$ Therefore, by the pinching theorem, we have
$$g(x)\sin\frac1{x}\to 0\text{ as }x\to 0.$$
|
H: proving a set is bounded Metric spaces
I'm trying to show that given the metric space X= $(\mathbb{R}^{2}, d)$ where $d$ is the usual Euclidean metric that the set $D=\{(x,y): \sqrt{x^{2}+y^{2}} <1\}$ is bounded.
I am using Rudin and it gives the following definition of bounded: "$E \subseteq X$ is bounded if there is a real number $M$ and a point $q \in X$ such that $d(p,q) < M$ for all $p \in E$"
I understand the definition, but am having trouble applying it. I take $M = 1$ and let $q \in \mathbb{R}^{2}$ and $p \in D$. Then I need to show that $d(p,q) < 1$.
So let's say $p$ has coordinates $(p_{1}, p_{2})$ and $q$ has coordinates $(q_{1}, q_{2})$. Then $d(p,q) = \sqrt{(p_{1} - q_{1})^{2} + (p_{2} - q_{2})^{2}}$,
but now I am not sure how to get to the conclusion. I could expand out the terms and use the fact that since $p \in D, \sqrt{p_{1}^{2} + p_{2}^{2}} < 1.$ Not sure if that is correct?
AI: Take $q=(0,0)$. for any $p=(x,y) \in D$ we have $d((x,y), (0,0))=\sqrt {x^{2}+y^{2}} <1$ so $D$ is bounded.
|
H: Are the following functions positive definite?
Is there a quick way of determining whether the following functions are positive-definite or not?
$f: \mathbb{R}\to\mathbb{C}$
$f(x) = 3$
$f(x) = -3$
$f(x) = x - 3$
$f(x) = x + 3$
So, my attempts:
For $f(x) = x - 3$ if we take $n=1, a_1 = 1$ and $x_1$ arbitrary then $$1 \cdot f(x_1 - x_1) \cdot \overline{1} = 1 \cdot f(0) \cdot \overline{1} = 1 \cdot -3 \cdot \overline{1} = -3 < 0$$ so it is not.
Similar reasoning applies to $f(x) = -3$. Is this part correct so far?
Now, $f(x) = 3$ seems to be positive definite but on the other hand if I take $a_1 = 1, a_2 = -1$ and arbitrary $x_1, x_2$ then
$$a_1 f(x_1 - x_1) \overline{a_1} + a_1 f(x_1 - x_2) \overline{a_2} + a_2 f(x_2 - x_1) \overline{a_1} + a_2 f(x_2 - x_2) \overline{a_2} = \\
= 3 ( 1 \cdot \overline{1} + 1 \cdot \overline{-1} -1 \cdot \overline{1} -1 \cdot \overline{-1}) = 3 ( 1 - 1 - 1 + 1 ) = 0 $$ and it is not greater than zero! Does that mean the function is not positive definite?
What about $f(x) = x + 3$? Here I don't know how to proceed.
AI: For real $x_i$'s we have $$\sum \sum a_ia_j ((x_i+3)-(x_j+3))$$ $$=\sum \sum a_ia_j (x_i-x_j)$$ $$= (\sum a_j) (\sum a_ix_i) -(\sum a_i) (\sum a_jx_j) =0$$ so $f$ is certainly not positive definite.
|
H: $\cos(\alpha-\beta)+\cos(\beta-\gamma)+\cos(\gamma-\alpha)=\frac{-3}{2}$,show that $\cos\alpha+\cos\beta+\cos\gamma=\sin\alpha+\sin\beta+\sin\gamma=0$
I think that I've done a major part of the problem but I'm stuck at a point.
Here's what I've done :
It's given to us that
$$\cos(\alpha-\beta)+\cos(\beta-\gamma)+\cos(\gamma-\alpha) = \dfrac{-3}{2}$$
Using the identity $\cos(A-B) = \cos A \cos B + \sin A \sin B$, we obtain :
$$\cos\alpha\cos\beta + \sin\alpha\sin\beta + \cos\beta\cos\gamma + \sin\beta\sin\gamma + \cos\gamma\cos\alpha + \sin\gamma\sin\alpha = \dfrac{-3}{2}$$
Multiplying both sides by $2$, we obtain :
$$2\cos\alpha\cos\beta + 2\cos\beta\cos\gamma + 2\cos\gamma\cos\alpha + 2\sin\alpha\sin\beta + 2\sin\beta\sin\gamma + 2\sin\gamma\sin\alpha = -3$$
Adding $\sin^2\alpha+\sin^2\beta+\sin^2\gamma+\cos^2\alpha+\cos^2\beta+\cos^2\gamma$ to both sides, we obtain :
$$\text{LHS : } (\cos^2\alpha + \cos^2\beta + \cos^2\gamma + 2\cos\alpha\cos\beta + 2\cos\beta\cos\gamma + 2\cos\gamma\cos\alpha)$$
$$ + (\sin^2\alpha + \sin^2\beta + \sin^2\gamma + 2\sin\alpha\sin\beta + 2\sin\beta\sin\gamma + 2\sin\gamma\sin\alpha)$$
$$\text{RHS : } -3 + (\cos^2\alpha + \sin^2\alpha) + (\cos^2\beta + \sin^2\beta) + (\cos^2\gamma + \sin^2\gamma)$$
On simplifying,
$$\text {LHS : } (\cos\alpha + \cos\beta + \cos\gamma)^2 + (\sin\alpha + \sin\beta + \sin\gamma)^2$$
$$\text{RHS : } -3+1+1+1 = -3+3 = 0$$
So, we obtain :
$$(\cos\alpha + \cos\beta + \cos\gamma)^2 + (\sin\alpha + \sin\beta + \sin\gamma)^2 = 0$$
$$\implies (\cos\alpha + \cos\beta + \cos\gamma)^2 = -(\sin\alpha + \sin\beta + \sin\gamma)^2$$
Now, square rooting both sides would involve $\iota$ i.e. $\sqrt{-1}$ but I haven't learnt about complex numbers yet and I think that the solution can be continued without using complex numbers but I don't know how.
Any help would be appreciated.
Thanks!
AI: If the sum of squares of two real numbers is zero, it implies that both numbers are zero. If you want you can simply prove this using Reductio-Ad-Absurdum.
|
H: What is the value of $g'(1)$?
Suppose $f(x,y)$ is a real-valued function for which $f(1,1)=1$ and whose gradient at this point is given by $\nabla f(1,1)=(-4,5)$.
Define a function $g(t)$ by $g(t)=f(t,f(t^2,t^3))$. Then what is the derivative of $g$ at $t=1$?
I deduce that $g(1)=f(1,1)=1$.
How to find $g'(1)$ ? Any help or hint.
Thanks in advance.
AI: You can use total differentiation:
\begin{align*}
g'(t) = \frac{\partial f}{\partial x}\frac{dt}{dt} + \frac{\partial f}{\partial y} \left(\frac{\partial f}{\partial x}\frac{dt^2}{dt}+\frac{\partial f}{\partial y}\frac{dt^3}{dt}\right).
\end{align*}
Note that $\frac{\partial f}{\partial x}(1,1) = -4$, $\frac{\partial f}{\partial y}(1,1) = 5$, and $x=t,y=f(t^2,t^3)$, so for $t=1$ you have $x=1$ and $y=f(1,1) = 1$. Thus
$$g'(1) = -4\cdot 1+5(-4\cdot 2\cdot 1+5\cdot 3\cdot 1) = 35-4=31.$$
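As a sanity check, one can pick any concrete $f$ consistent with the given data (the linear $f$ below is one such arbitrary choice) and compare a central finite difference of $g$ at $t=1$ with $31$:

```python
# Sanity check with an assumed f satisfying f(1,1)=1 and grad f(1,1)=(-4,5);
# any smooth f with this data gives g'(1)=31.
def f(x, y):
    return 1 - 4 * (x - 1) + 5 * (y - 1)

def g(t):
    return f(t, f(t**2, t**3))

h = 1e-6
print((g(1 + h) - g(1 - h)) / (2 * h))  # ~ 31.0
```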
|
H: Convergence in distribution of sum of two random variables, simple case
$X_n, Y_n$ are random variables on the same probability space and it's known that $X_n \rightarrow X$ and $Y_n \rightarrow 0 =: Y$ in distribution. Prove that $X_n + Y_n \rightarrow X$ in distribution.
I have a problem because of the case, when $x\ge 0$ (then $F_Y$ = 1).
I need to prove that $|F_{X_n + Y_n} - F_Y| < \varepsilon$ for large $n$. I've tried to estimate
$F_{X_n + Y_n}(x) = P(X_n + Y_n \le x) \le P(X_n \le x) + P(Y_n \le 0)$, but it doesn't work, because $P(Y_n \le 0) = 1$
AI: Hint: $P(X_n+Y_n \leq z) \leq P(X_n+Y_n \leq z, |Y_n| <\epsilon)+P(|Y_n|\geq \epsilon)$. Also $P(X_n+Y_n \leq z, |Y_n| <\epsilon) \leq P(X_n \leq z+\epsilon)$. Can you show using these that $\lim \sup P(X_n+Y_n \leq z) \leq P(X \leq z)$? A similar argument gives $\lim \inf P(X_n+Y_n \leq z) \geq P(X \leq z)$ provided $P(X=z)=0$.
|
H: Check if a $f \in U$
Let $U$ be a subspace where $U = \{ f \in Abb(\mathbb{R},\mathbb{R}) |\; \forall\; x \in \mathbb{R}: f(x) = -f(-x)\}$ and $f_0: x \mapsto \frac{1}{1+x^2}$.
Check if $b: x \mapsto |x|$ is in $f_0 + U$.
When I write out $f_0 + U = \frac{1}{1+x^2} + f(x) = \frac{1}{1+x^2} -f(-x)$ I can't see a way to bring the function $b$ into the game. I know I can write $b = x$, $x \geq 0$ and $b = -x$, $x<0$ or when I plot $f(x) = x$ which is in U, I can see that it matches the graph for $x>0$ but not when $f_0$ comes into play. I am not even sure how to tackle this problem.
EDIT: $Abb(\mathbb{R},\mathbb{R})$ means all maps from $\mathbb{R} \to \mathbb{R}$
AI: If $b \in f_0+U,$ then $b-f_0 \in U.$ That is $(b-f_0)(x)=(f_0-b)(-x).$ Now $$(b-f_0)(x)=|x|-\frac{1}{1+x^2}$$ and $$(f_0-b)(-x)=\frac{1}{1+x^2}-|-x|=\frac{1}{1+x^2}-|x|,$$ therefore $$|x|-\frac{1}{1+x^2}=\frac{1}{1+x^2}-|x|$$ which implies $$|x|=\frac{1}{1+x^2}$$but this is only true for exactly $2$ values of $x \in \mathbb{R}.$
|
H: Calculate the residue of $\exp\left(\frac{z+1}{z-1}\right)$ in every point of $\mathbb{C}$
I have to calculate the residue of $\exp\left(\frac{z+1}{z-1}\right)$ in every point of $\mathbb{C}$.
So I tried to compute the Laurent Series expansion $\forall z_0 \in \mathbb{C}$.
For $z_0 = 0$ we obtain that $f(z)=\sum_{k \geq 0}\frac{(z+1)^k}{k!\,(z-1)^k}$, but I don't understand what the coefficient $a_{-1}$ is.
Thanks in advance.
AI: The only singularity of $f$ is at $z=1$, but that's an essential singularity. I get
$$f(z)=\sum_{n=0}^\infty\frac{(z+1)^n}{n!(z-1)^n}$$
but that's not a Laurent series as it stands.
But also
$$f(z)=\exp\left(1+\frac{2}{z-1}\right)=e\exp\left(\frac{2}{z-1}\right)
=e\sum_{n=0}^\infty\frac{2^n}{n!(z-1)^n}.$$
Now that is a Laurent series at $z=1$, and the coefficient of $(z-1)^{-1}$
is $2e$.
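As a numerical check (the contour radius $1/2$ is an arbitrary choice), one can approximate $\frac{1}{2\pi i}\oint_{|z-1|=1/2} f(z)\,dz$ with a Riemann sum:

```python
import cmath, math

def f(z):
    return cmath.exp((z + 1) / (z - 1))

N, r = 20000, 0.5
total = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    dz = 1j * r * cmath.exp(1j * t)            # z'(t) on the circle z = 1 + r e^{it}
    total += f(1 + r * cmath.exp(1j * t)) * dz
residue = total * (2 * math.pi / N) / (2j * math.pi)
print(residue.real, 2 * math.e)                # both ~ 5.43656
```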
|
H: Convergence with probability one of $\sum \frac{1}{n}X_n$ and $\sum \frac{1}{\sqrt n}X_n$ if $X_n$ are i.i.d. $N(0,1)$
Suppose $X_n$ are $N(0,1)$ i.i.d.:
If $Y_n = \frac{1}{n}X_n$ then does $\sum Y_n$ converge with P.1?
If $Y_n = \frac{1}{\sqrt n}X_n$ then does $\sum Y_n$ converge with P.1?
If we have convergence, then prove the limiting distribution is infinitely divisible.
Now, (1) seems like easy task for 2 series Kolmogorov's theorem:
$\sum \mathbb{E} Y_n = 0$
$\sum \operatorname{Var} Y_n = \sum \frac{1}{n^2}\operatorname{Var} X_n < \infty$
so (1) does converge. I think the limiting distribution is $N(0,\frac{1}{n^2})$ and it is infinitely divisible because $N(0, \frac{1}{n^2}) \stackrel{D}{=} N(0,\frac{1}{n^3}) + \dots + N(0,\frac{1}{n^3})$ n times. I don't know if that is correct though.
I am stuck on (2) too. I tried 3 series theorem with $c = 1$ so:
$$
\sum \mathbb{P}(|Y_n| > 1) = \sum \mathbb{P}(|X_n| > \sqrt n) = \sum \left[\mathbb{P}(X_n > \sqrt n) + \mathbb{P}(X_n < -\sqrt n)\right] = \sum 2\mathbb{P}(X_n > \sqrt n) = \sum 2(1 - \Phi(\sqrt n))
$$
Here I got stuck. Of course $2(1 - \Phi(\sqrt n)) \to 0$ as $n \to \infty$ but I am not sure about the convergence of such a series. Could you give me a hand?
AI: $\sum\limits_{n=1}^{N} \frac 1 {\sqrt n} X_n$ has normal distribution with mean $0$ and variance $\sum\limits_{n=1}^{N} \frac 1 n$. From this it is easy to see that $\sum\limits_{n=1}^{N} \frac 1 {\sqrt n} X_n$ does not even converge in distribution.
Linear combinations of jointly normal random variables have normal distribution, and limits in distribution of infinitely divisible distributions are infinitely divisible. So if we have convergence, the limiting distribution is infinitely divisible.
[ Let $Z_n \sim N(0,r_n)$ with $r_n \to \infty$. Then $\frac {Z_n} {\sqrt {r_n}} \sim N(0,1)$ so $\frac {Z_n} {\sqrt {r_n}} $ converges in distribution. Since $Z_n= \sqrt {r_n} \frac {Z_n} {\sqrt {r_n}} $ it should be clear that $Z_n$ cannot converge in distribution. I will leave the details to you].
|
H: Upper bound "square" of Lebesgue measure of set
Let $\lambda$ be the Lebesgue measure and $A$ a set with $\lambda(A) < \varepsilon$. Consider the set $A^2 = \{a \cdot a \ | \ a \in A\}$. Can we produce an upper bound on $\lambda(A^2)$ in terms of $\varepsilon$?
AI: No. If $A=(n,n+\frac 1 {\sqrt n})$ then $A^{2}=(n^{2},(n+\frac 1 {\sqrt n} )^{2})$. $\lambda (n,n+\frac 1 {\sqrt n}) \to 0$ but $\lambda (n^{2},(n+\frac 1 {\sqrt n})^{2}) \to \infty$.
|
H: Determining $\min\{|z-a|^2+|z-b|^2 \mid z\in \mathbb{C}\}$
I've been doing exercises on understanding the structures of given complex sets, but I'm stuck on this one.
Find $$\min\{|z-a|^2+|z-b|^2 \mid z\in \mathbb{C}\},$$ where $a,b\in \mathbb{C}.$
What's the correct way to tackle this kind of exercises? I've tried changing it to polar form and expanding the complex number ($z=x+iy$) but I get expressions way more complicated and I think I'm not going the right way.
Would it be correct to do $f(z)=|z-a|^2+|z-b|^2$ and differentiate this function?
That is, $f'(z)=-2a-2b+4z$.
Thanks for the time.
AI: The function $|z|$ is not differentiable wrt $z$.
You could think about it geometrically. Suppose you plug a $z$ into the expression $|z-a|^2+|z-b|^2$, what is this expression? It is $d_a^2+d_b^2$, where $d_a$ is the distance from $a$ to $z$ and $d_b$ is similarly interpreted. It should be easy to see that to minimise $d_a^2+d_b^2$, $z$ must be on the line segment joining $a$ and $b$. Can you take it from here?
|
H: Is Inclusion–Exclusion Principle still valid if $\cap$ and $\cup$ are exchanged?
The principle for the case of three sets, states:
$$|A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|$$
I wonder whether there exists a similar formula, where $\cap$ and $\cup$ are exchanged; for example:
$$|A\cap B\cap C|=|A|+|B|+|C|-|A\cup B|-|A\cup C|-|B\cup C|+|A\cup B\cup C|$$
If yes, is there a way to find the cardinality of the intersection of n sets?
AI: Yes, there is such a duality. It works for $n$ sets, but I'll outline the three-set version. The general principle is the same. I'll assume that all sets are subsets
of a finite set $X$, and use $A^c$ for the complement of $A$ in $X$, etc.
Take your first identity,
$$|A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|$$
and replace each set by its complement:
$$|A^c\cup B^c\cup C^c|=|A^c|+|B^c|+|C^c|-|A^c\cap B^c|-|A^c\cap C^c|-|B^c\cap C^c|+|A^c\cap B^c\cap C^c|.$$
De Morgan's laws give
$$|(A\cap B\cap C)^c|=|A^c|+|B^c|+|C^c|-|(A\cup B)^c|-|(A\cup C)^c|-|(B\cup C)^c|+|(A\cup B\cup C)^c|.$$
But $|D^c|=|X|-|D|$ for all $D\subseteq X$. Applying this throughout gives
$$|X|-|A\cap B\cap C|=|X|-|A|-|B|-|C|+|A\cup B|+|A\cup C|+|B\cup C|-|A\cup B\cup C|.$$
From this
$$|A\cap B\cap C|=|A|+|B|+|C|-|A\cup B|-|A\cup C|-|B\cup C|+|A\cup B\cup C|$$
is immediate.
|
H: Find a holomorphic function when you know the real part.
I am trying again to learn complex analysys and I have solved an exercise in here but I am not sure if it is correct.
$u(x,y) = x^2-y^2+ln(x^2+y^2)$. I need to find $f$ holomorphic function, such that $Re(f) = u$ This is how I tried to solved it: $f = u+iv$. Since $f$ is a holomorphic function it needs to satisfy the
Cauchy–Riemann equations: $\left\{\begin{matrix}
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}\\
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
\end{matrix}\right.$ $\frac{\partial u}{\partial x} = 2x+\frac{2x}{x^2+y^2} = \frac{\partial v}{\partial y} \Rightarrow v = \int \left(2x+\frac{2x}{x^2+y^2}\right)dy$ Therefore, $f = x^2-y^2+\ln(x^2+y^2)+iv$, right?
AI: We have
$$v=2xy+2\arctan\frac yx+C(x)\implies v_x=2y-\frac{2y}{x^2+y^2}+C'(x)$$
and this must equal
$$-u_y=2y-\frac{2y}{x^2+y^2}\stackrel{\text{comparing with above}}\implies C'(x)=0\implies C(x)=K\;(=\text{constant})$$
so
$$v=2xy+2\arctan\frac yx+K\,,\,\,\;K=\text{constant}$$
|
H: Prove that any 2 bases of a vector space has the same cardinality
I know this question has been asked before, but I tried to prove it myself and I can't finish my proof because I'm not sure how to write the contradiction in a formal and correct way.
Let $V$ be a vector space, and $B_1$, $B_2$ infinite bases. Assume by contradiction that $ |B_{1}|\neq|B_{2}| $. So assume that $ |B_{1}|<|B_{2}| $ without loss of generality. So let:
$ |B_{1}|=\aleph_{\alpha}<\aleph_{\beta}=|B_{2}| $
and let:
$ B_{1}=\left\{ u_{j}:j<\aleph_{\alpha}\right\} B_{2}=\left\{ v_{i}:i<\aleph_{\beta}\right\} $
now, for each $v_{i}\in B_{2} $ we will find $ \mathcal{C}_{i}\subseteq\aleph_{\alpha} $ and scalar's $c_j$ such that $ \sum_{j\in C_{i}}c_{j}u_{j}=v_{i} $
and for each $v_i\in B_2 $ define :
$ \mathcal{D}_{i}=\left\{ u_{j}:j\in\mathcal{C}_{i}\right\} $
(all the vectors from $B_1$ such that $ \sum_{j\in C_{i}}c_{j}u_{j}=v_{i} $ )
So, it follows that for any $v_i\in B_2 $
$ \mathcal{D}_{i}\in\bigcup_{n\in\mathbb{N}}B_{1}^{n} $
So if I'll define $ \mathcal{D}=\left\{ \mathcal{D}_{i}:i<\aleph_{\beta}\right\} $ we will have:
$ \mathcal{D}\subseteq\bigcup_{n\in\mathbb{N}}B_{1}^{n} $
Also, we know that $ |\bigcup_{n\in\mathbb{N}}B_{1}^{n}|=|B_{1}|=\aleph_{\alpha} $ because all the sequences are finite. Therefore, $ |\mathcal{D}|\leq\aleph_{\alpha} $.
Now, I want to say that for any finite set $ D_i $ there would be infinitely many vectors from $ B_2 $ that share the same $ D_i $, and therefore they would be linearly dependent. But I'm not sure how to express it in a correct formal way. If anyone could find a contradiction from the step I have left, it would be very helpful. Thanks in advance.
Edit:
I think I found a contradiction.
So, the are no more then $ \aleph_{\alpha} $ sets in $\mathcal D $ as I stated before.
Now, In $ B_2 $ there are $ \aleph_{\beta} $ vectors, so if we will define a function $ f:B_{2}\to\mathcal{D} $ that maps each vector to the appropriate $ D_i $ it will not be injective, so we can define :
$ \mathcal{F}_{k}=\left\{ v\in B_{2}:f\left(v\right)=\mathcal{D}_{k}\right\} $
So it follows that $ B_{2}\subseteq\bigcup_{k<\aleph_{\alpha}}\mathcal{F}_{k} $
Now, notice that $ \bigcup_{k<\aleph_{\alpha}}\mathcal{F}_{k} $ is a union of $ \aleph_{\alpha} $ sets, such that any set has to be finite, because otherwise we'd have infinitely many vectors that use the same $ \mathcal{D}_{i} $, and therefore they would be linearly dependent. So, we can conclude that:
$ |\bigcup_{k<\aleph_{\alpha}}\mathcal{F}_{k}|\leq|\dot{\bigcup_{k<\aleph_{\alpha}}}\mathcal{F}_{k}|\leq\aleph_{\alpha}\times\aleph_{\alpha}=\aleph_{\alpha} $
(because each set contains finitely many vectors, its cardinality is obviously smaller than $ \aleph_{\alpha} $)
and therefore $ \aleph_{\beta}=|B_{2}|\leq\aleph_{\alpha} $, in contradiction to our assumption. I would be glad to hear what you think about it. Thanks
AI: Here is a proof, based on the same principles, but somewhat different presentation from what you might see elsewhere: $\DeclareMathOperator{\span}{span}$
We define $F\colon[B_1]^{<\omega}\to[B_2]^{<\omega}$, where $[X]^{<\omega}$ is the set of finite subsets of $X$.
$$F(X)=\min\{Y\mid X\subseteq\span(Y)\}$$
Claim. The function $F$ is well-defined.
Proof. Each $x\in X$ has a unique minimal finite set, $Y_x$, such that $x$ is a non-trivial linear combination of the elements of $Y_x$. So it is enough to look for subsets of $\bigcup_{x\in X}Y_x$. Moreover, if $X$ is a subset of $\span(Y)$ and $\span(Y')$, then $X\subseteq\span(Y)\cap\span(Y')$, but because $Y\cup Y'$ is linearly independent, it has to be that $X\subseteq\span(Y\cap Y')$. So indeed this is well-defined.
Claim. $F$ is finite-to-one.
Proof. If $Y\in[B_2]^{<\omega}$, then $\span(Y)$ is a finite dimensional subspace, and therefore can only contain finite linearly independent subsets, since $B_1$ is linearly independent, that means that only finitely many of its elements can lie in $\span(Y)$, so only finitely many finite subsets are mapped to $Y$.
Claim. $|B_1|=|B_2|$.
Proof. Define the equivalence relation on $B_1$ by $u\sim v\iff F(\{u\})=F(\{v\})$, then by the previous claim, each equivalence class is finite, and therefore $|B_1/{\sim}|=|B_1|$. Taking the union of each equivalence class, which is an element in $[B_1]^{<\omega}$, to its image under $F$, is now injective. Therefore $|B_1|\leq|[B_2]^{<\omega}|=|B_2|$.
Define the same in the other direction, i.e. $F'\colon[B_2]^{<\omega}\to[B_1]^{<\omega}$, etc., and we have that $|B_2|\leq|B_1|$. By Cantor–Bernstein we have equality. (Alternatively, assume that $|B_2|\leq|B_1|$, as you did, and finish one paragraph early.)
|
H: Proving a subset of $H^1(\mathbb{R}^d)$ is compactly embedded in $L^2(\mathbb{R}^d)$.
I was recently reading about weighted Lebesgue spaces and came across an exercise that asks to prove that $H^1(\mathbb{R}^d) \cap L^2(\mathbb{R}^d,|x|^2\,dx)$ is compactly embedded in $L^2(\mathbb{R}^d)$, where $H^1(\mathbb{R}^d)$ is the usual Sobolev space and $L^2(\mathbb{R}^d,|x|^2\,dx)$ is the weighted Lebesgue space containing functions $f$ for which $\int_{\mathbb{R}^d} |f|^2\,|x|^2\,dx< \infty$.
The compact embedding theorems I know work on a bounded domain; I am not sure how to prove this one.
AI: Assume $V\subset W$ is a bounded set, where $W$ is your weighted space.
Now note that $\forall \epsilon>0$ we find some $N>0$ such that for all $u \in V$
$$\|u\|_{L^{2}(B_{N}^{c})} < \epsilon/2$$
Assume this is not the case. Then for some $\epsilon>0$, for every $N>0$ there is some $u \in V$ such that $\|u\|_{L^{2}(B_{N}^{c})}\ge \epsilon/2$. But then $\|u\|_{W} \ge \|\,|x|\,u\,\|_{L^{2}(B_{N}^{c})} \ge N\,\epsilon/2$, which contradicts the boundedness of $V$.
On $B_{N}$ (the ball of radius $N$ in $\mathbb{R}^{d}$), we can apply the normal Rellich-Kondrachov to ensure the compact embedding to $L^{2}(B_{N})$.
Now recall that compactness of a set $A \subset X$ in a metric space $X$ means that
$\forall \epsilon >0$ we find $x_{1},...,x_{k=k(\epsilon)} \in X$ such that
$A \subset \cup_{j=1}^{k} B_{\epsilon}(x_{j})$ .
This means that $\forall \epsilon>0$ we find functions $f_{1},...,f_{k(\epsilon)} \in L^{2}(B_{N})$ such that $\forall u \in V$ we have some $f_{j}$ such that $$\|u-f_{j}\|_{L^{2}(B_{N})} < \epsilon/2$$
But $\|u\|_{L^{2}(B_{N}^{c})} < \epsilon/2$ anyway, so (extending $f_j$ by zero outside $B_N$) we even have
$$||u-f_{j}||_{L^{2}(\mathbb{R}^{d})} < \epsilon$$
which implies the compact embedding $V \subset \subset L^{2}(\mathbb{R}^{d})$.
|
H: Corners are cut off from an equilateral triangle to produce a regular hexagon. Are the sides trisected?
The corners are cut off from an equilateral triangle to produce a regular hexagon. Are the sides of the triangle trisected?
In the actual question, the first line was exactly the same. It was then asked to find the ratio of the area of the resulting hexagon to that of the original triangle. In the several solutions of this problem that I found on the internet, they started by taking the sides of the triangle as trisected by this operation, so that the side length of the hexagon equals one third of the side length of the triangle.
I have seen some variations of this problem where they had explicitly mentioned that the side was trisected and then hexagon was formed.
On stackexchange, there are problems in which they started by trisecting the sides (they mentioned it in the title) and getting a regular polygon.
My question is, if we cut off corners from the equilateral triangle to form regular hexagon, is it going to trisect the sides of triangle or not?
AI: Yes. If we cut off corners to create a regular hexagon, then each angle of the hexagon is $120^\circ$, meaning that each angle of each removed triangle is $60^\circ$, so these triangles are equilateral.
Now all sides of the hexagon are equal. Each triangle you removed shares a side with the hexagon, so all its sides are equal to the side length of the hexagon. Thus the three parts of each side of the original triangle are equal - two of them are sides of removed triangles and the third is a side of the hexagon.
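A quick computational check with complex coordinates (the side length $3$ is an arbitrary choice) confirms that the hexagon cut off at the trisection points has six equal sides, each one third of the triangle's side:

```python
import cmath, math

A, B, C = 0, 3, 3 * cmath.exp(1j * math.pi / 3)   # equilateral triangle of side 3
# trisection points of the sides, listed in order around the triangle
P = [A + (B - A) / 3, A + 2 * (B - A) / 3,
     B + (C - B) / 3, B + 2 * (C - B) / 3,
     C + (A - C) / 3, C + 2 * (A - C) / 3]
print([abs(P[(i + 1) % 6] - P[i]) for i in range(6)])  # six sides, all equal to 1.0
```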
|
H: Denote that expression is differentiated, without differentiating it
I'm trying to indicate that I'm working with the derivative of an expression, without differentiating it. This is how it would be done, if I differentiate both sides now:
\begin{align}
y &= 5x^2\\
y' &= 10x
\end{align}
However, I want to differentiate the right side later, but keep working with the expression. How can I denote that I'm referring to the derivative? I suppose this won't work, but I would like to do something in this fashion:
$$
y = 5x^2\\
y'= (5x^2)'
$$
Is there any good way to do this?
AI: You could write:
$$\frac{d}{dx} 5x^2$$
This is very clear.
|
H: Showing the Closed unit disk is not open but is closed and perfect - metric spaces
I'm trying to show that the closed unit disk i.e. $D = \{z \in \mathbb{C} : |z| \leq 1\}$ is closed and perfect but it is not open.
I have managed to show that it is closed (I think) but am unsure of my final step.
Showing it is Closed:
Consider $D^{c} = \{z \in \mathbb{C} : |z| >1\}$ and let $z_{0} \in D^{c}, |z_{0}| = 1+r, r > 0.$ Then if we consider $B_{r}(z_{0}) = \{w \in \mathbb{C} : |z_{0} - w| < r\}$
By the triangle inequality I get $|z_{0}| \leq |z_{0} - w| + |w| \Rightarrow |w| \geq |z_{0}| - |z_{0} - w| = 1+r - r = 1$. Therefore $w \in D^{c}.$ Can I then say that $B_{r}(z_{0}) \subseteq D^{c}$? Hence $D^{c}$ is open and $D$ is closed.
Questions on the above:
I believe that the equality at the end of the triangle inequality argument should be $>$, but I can't see why.
Can I make the argument that just because $w \in D^{c}$, the ball is contained in $D^{c}$?
Showing it is not Open:
I was trying to do this directly. Take $(v,w) \in D$ and let $\epsilon = 1 - \sqrt{v^{2} + w^{2}}$ then if $(s,t) \in B_{\epsilon}((v,w))$ we have that $d((s,t), (0,0)) \leq d((s,t), (v,w)) + d((v,w), (0,0)).$ But this doesn't seem correct to me, as after doing the calculations I get that the set is open.
Showing it is Perfect:
Rudin defines a Perfect set as one that is closed and every point in the set is a limit point. I only need to check that every point is a limit point, as I've shown the set is closed.
I am stuck here as well. I choose a point $w \in D$ and let $r > 0.$ Now we need to show if we draw a ball with radius $r$ center $w$ that we get a point that is different to $w.$
$B_{r}(w) = \{v \in \mathbb{C} : |w - v| \leq r\}$
AI: You are close to proving $D^c$ is open. Just notice that you have mistakenly put an equal sign where the inequality is strict: $|w|\ge |z_0|-|z_0-w|\gt 1+r-r=1$, since $|z_0-w|\lt r$.
Showing it is not open is trivial. Take a point on the boundary, let us say $z_0$, so $|z_0|=1$. For every $\epsilon \gt 0$, consider the neighbourhood $B(z_0,\epsilon)$. Then $B(z_0,\epsilon)\cap D^c\neq \emptyset$ (by definition of a boundary point).
This shows
$B(z_0,\epsilon) \nsubseteq D$. Hence $z_0$ is not an interior point.
Now $D\subset D'$ (the set of limit points of $D$), since the points in $D$ are either interior points or boundary points. To show it is perfect, we just have to show there is no limit point of $D$ in $D^c$, which is easy: every point of $D^c$ is an interior point of $D^c$, so some neighbourhood of it contains no point of $D$. This proves the claim.
|
H: if $ i:L^{p}( d\mu )\longrightarrow L^{q}(d\mu) $ is an inclusion map, then $ i $ is bounded.
I want to show that if $ i:L^{p}( d\mu )\longrightarrow L^{q}(d\mu) $ is an inclusion map where $ (X,A, \mu) $ is a measure space and $L^{p}(d\mu)\subseteq L^{q}( d\mu ) $, then $ i $ is bounded.
Firstly, we can use the definition: for this I need to find a constant $ c $ such that $ \parallel f \parallel_{L^{q}}<c\parallel f\parallel_{L^{p}} $ for all $ f\in L^{p} $. But I could not find this $ c $.
So I tried to use the closed graph theorem: assume that $ f_{n} $ converges to $ f $ in $ L^{p} $ and $ i(f_{n})=f_{n} $ converges to $ g $ in $ L^{q} $. I need to show that $ f=g $. Since $ L^{q} $ is complete, $ g\in L^{q} $. If I can show that $ f_{n} $ converges to $ g $ in $ L^{p} $, I can conclude that $ f=g $. If the measure of $ X $ is finite, I showed this. But I could not finish the question: if the measure of $ X $ is not finite, how can we show that $ f_{n} $ converges to $ g $ in $ L^{p} $?
AI: Convergence in any $L^{p}$ implies almost everywhere convergence for some subsequence. Hence $f_n \to f$ in $L^{p}$ and $f_n \to g$ in $L^{q}$ implies some subsequence converges a.e to both $f$ and $g$ which implies $f=g$ a.e..
(First choose a subsequence which converges a.e. to $f$ and then choose a further subsequence which converges a.e. to $g$)
|
H: Proof that $\text{Hom}_R(M, -)$ is left exact in the category of $R$-modules
I'm looking at a proof that $\text{Hom}_R(M, -)$ is left exact for $R$-modules. Specifically at the one that appears in Robert Ash's Abstract Algebra, which you can find here on page 13.
Let $A, B, C$ be $R$-modules for commutative ring $R$ and suppose
$$0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0 $$
is a short exact sequence. And consider
$$ 0\to \text{Hom}_R(M, A) \xrightarrow{f_\ast} \text{Hom}_R(M, B) \xrightarrow{g_*} \text{Hom}_R(M, C)$$
I understand everything in Ash's proof except for the very last step in proving that $\ker{g_*}\subseteq \text{im}f_*$.
Suppose $\beta\in\ker{g_*}$, then $g\circ\beta = 0$, and therefore for some $y\in M$ we have $g(\beta(y)) = 0$. So $\beta(y)\in\ker{g}=\text{im}f$. Therefore there is some $x\in A$ such that $\beta(y) = f(x)$. Here is where I have issues. Ash states that $x = \alpha(y)$ for $\alpha\in\text{Hom}_R(M, A)$. But how can one be certain that such a homomorphism exists?
The answer that appears here suffers from a similar issue. Here a function $l:M\to A$ is defined such that $l(y) = x$, but it isn't shown to be a homomorphism and I'm not certain how you would show that, if it's even possible from such a definition.
AI: First of all, $g\circ\beta=0$ means for all $y\in M$, $g(\beta(y))=0$ (not only for some).
So let $y\in M$. Since $\beta(y)\in\ker(g)=im(f)$, there exists $x\in A$ such that $\beta(y)=f(x)$. The crucial point that you're missing is this one: since $f$ is injective by assumption, this $x$ is unique!! (if $\beta(y)=f(x_1)=f(x_2)$, then $x_1=x_2$...)
Hence, we may denote by $\alpha(y)$ this $x$, and we get a map $\alpha:M\to A$ such that $\beta(y)=f(\alpha(y))$ for all $y\in M$. Thus, $\beta=f\circ \alpha$.
Now it remains to prove that $\alpha$ is $R$-linear. By definition, for $y\in M$, $\alpha(y)$ is the unique element of $A$ such that $\beta(y)=f(\alpha(y))$.
But for all $y_1,y_2\in M$ and all $r\in R$, we have $\beta(y_1+ry_2)=\beta(y_1)+r\beta(y_2)=f(\alpha(y_1))+r f(\alpha(y_2))=f(\alpha(y_1)+r\alpha(y_2))$. By the uniqueness above, $\alpha(y_1+ry_2)=\alpha(y_1)+r\alpha(y_2)$, and we are done.
Since $\alpha$ is $R$-linear we have $\beta=f\circ\alpha=f_*(\alpha)$.
|
H: Upper bound Lebesgue measure of "square" of set on closed interval
This is a follow up to this question: Upper bound "square" of Lebesgue measure of set
Let $\lambda$ be the Lebesgue measure, $A$ a set with $\lambda(A) < \varepsilon$ and $M > 0$. Consider the set $A^2 = \{a \cdot a \ | \ a \in A\}$. Can we produce an upper bound on $\lambda(A^2 \cap [-M, M])$ in terms of $\varepsilon$?
AI: Yes, $\lambda (A^{2}\cap [-M,M]) \leq 2\sqrt M \epsilon$. To see this, note that we can cover $A$ by intervals $(a_i,b_i)$ with $\sum (b_i-a_i) <\epsilon$ (and, by splitting $A$ into its non-negative and negative parts and using symmetry, we may assume $0\le a_i<b_i$). Now note that $A^{2}\cap [-M,M]$ is covered by the sets $(a_i^{2},b_i^{2})\cap [-M,M]$, and $\lambda ((a_i^{2},b_i^{2})\cap [-M,M]) \leq 2\sqrt M (b_i-a_i)$ since $b_i^{2} -a_i^{2} =(b_i-a_i)(b_i+a_i)$.
|
H: Show if $A$ is open w.r.t to $d_1$ and $d_2$ then $A$ is also open w.r.t $D$.
Let $d_1$, $d_2$ and $D$, given by $D(x,y)=\max\{d_1(x,y),d_2(x,y)\}$, be metrics on $M$. Assume $A\subseteq M$. Show if $A$ is open w.r.t at least one of $d_1$ and $d_2$ then $A$ is also open w.r.t $D$.
CASE 1 Let $x,y \in A$. Assume $A=A^\circ$ w.r.t $d_1$ and $D=d_1$, then
$$
A^\circ = \{ x \in A| \exists r >0: K_1(x,r) \subseteq A \}=
\\
\{ x \in A| \exists r >0: \{y: d_1(x,y) < r\} \subseteq A \} =
\\
\{ x \in A| \exists r >0: \{y: D(x,y) < r\} \subseteq A \}
$$
Last equality follows from the assumption $D=d_1$.
CASE 2
Let $x,y \in A$. Assume $A=A^\circ$ w.r.t $d_1$ and $D=d_2$,...
I'm stuck here
Please give some good hints or suggestions for a solution.
Kind regards,
AI: Hint: For a metric $d$, $r> 0$, and $x \in M$, let $B^d_r(x) = \{y : d(x,y) < r\}$. That is, $B^d_r(x)$ is the open (with respect to $d$) ball of radius $r$ centered at $x$.
The statement "$A$ is open with respect to $d_1$" means that for any $a \in A$, there exists an $\epsilon > 0$ for which $B^{d_1}_{\epsilon}(a) \subseteq A$. Using this statement, we want to conclude that for any $a \in A$, there exists an $\epsilon > 0$ for which $B^{D}_{\epsilon}(a) \subseteq A$.
This becomes a much easier task if we first show that $B^D_{\epsilon}(a) \subseteq B^{d_1}_{\epsilon}(a)$.
|
H: Why is $(1 - \frac{\ln n}{n})^n$ approximated by $e^{-\ln n}$ for $n \rightarrow \infty$?
I'm trying to reconstruct the proof for the Erdös-Renyi Theorem from Jackson's "Social and Economic Networks" [1], chapter 4.2.3. This part I don't understand:
Let $p(n)$ be a function such that $\lim_{n\to \infty} \frac{p(n)}{\ln n / n} = 0$, i.e., a function that grows slower than $\ln n / n$. Relevant for the proof are indeed choices of $p(n)$ that are close to $\ln n / n$, since we want to show that $\ln n / n$ is a threshold for this choice.
The proof then claims that $(1-p(n))^n$ can be approximated by $e^{-np(n)}$. It does not state what "approximate" should mean in this context. The relevant quote from the proof is:
Given that $p(n)/n$ converges to 0, we can approximate $(1 - p(n))^n$ by $e^{-np(n)}$.
The first thing I associate this with is $\lim_{n \to \infty}(1 + \frac{a}{n})^n = e^a$. If we say that $p(n)$ is chosen "very close" to $\ln n / n$ (say $(\ln n / n)^{1 - \epsilon}$) I could see how we can squeeze our eyes and rewrite $(1-p(n))^n$ as $(1 - \frac{\ln n}{n})^n$. If we now treated $\ln n$ as a constant (which it obviously is not), this would lead to $\lim_{n \to \infty}(1 - \frac{\ln n}{n})^n = e^{-\ln n}$. With $p(n) \approx \ln n / n$, this amounts to $e^{-p(n)n}$.
However, treating $\ln n$ as a constant feels (and is) very wrong here, right? Am I missing some way of making this approximation?
Thanks for any help.
[1] Jackson, Matthew O. Social and economic networks. Princeton university press, 2010.
AI: That is because $\;(1 - \frac{\ln n}{n})^n=\mathrm e^{n\ln(1 - \frac{\ln n}{n})}$ and that, in a neighbourhood of $0$, we have $\ln(1-u)=-u-\frac{u^2}2+o(u^2)$, so that
$$\mathrm e^{n\ln(1 - \frac{\ln n}{n})}=\mathrm e^{n\bigl(- \tfrac{\ln n}{n}-\tfrac{\ln^2n}{2n^2}+o\bigl(\tfrac{\ln^2 n}{n^2}\bigr)\bigr)}= \mathrm e^{-\ln n}\cdot\underbrace{\mathrm e^{-\tfrac{\ln^2 n}{2n}+o\bigl(\tfrac{\ln^2n} n\bigr)}}_{\substack{\downarrow\\\mathstrut 1}}\sim_\infty\mathrm e^{-\ln n}$$
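Numerically, the ratio of the two sides indeed tends to $1$, though slowly, in line with the correction factor $\mathrm e^{-\ln^2 n/(2n)}$:

```python
import math

for n in [10**3, 10**5, 10**7]:
    exact = (1 - math.log(n) / n) ** n
    approx = math.exp(-math.log(n))   # = 1/n
    print(n, exact / approx)          # ratio tends to 1 as n grows
```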
|
H: Is the successor of a non finite ordinal a non finite ordinal?
Definition:
$$\alpha \text{ is finite iff } \forall\: \beta \: \text{ordinal}, \: \beta \leq \alpha\: \text{and}\: \beta \: \neq \emptyset \: \Rightarrow \exists \gamma( \beta=\gamma\cup\{\gamma\}) $$
(definition corrected thanks to suggestion)
Question: Does this hold true ?
$$\alpha \ \ \text{non finite} \Rightarrow \: \alpha + 1 \ \ \text{non finite} $$
If it is true, then a non finite ordinal can have a predecessor but a limit ordinal cannot have a predecessor, am I right ?
AI: Your definition is slightly wrong. It merely states that every smaller ordinal is $0$ or a successor, but this much is also true for $\omega$, which is certainly not finite, what with it being the least infinite ordinal.
You can fix this by changing $\forall\beta<\alpha$ to $\forall\beta\leq\alpha$.
Now, to your question, of course that the successor of an infinite ordinal is infinite. If $\alpha$ is infinite there is a witness for the fact that $\alpha$ is not finite, which is either $\alpha$ or some $\beta<\alpha$. This will remain a witness when we consider $\alpha+1$.
Finally, a limit ordinal is defined as an ordinal which is not a successor,$^1$ so quite trivially a limit ordinal cannot have a predecessor.
$^1$ In some places $0$ is also separated from the limit ordinals, in which case a limit ordinal is a non-zero ordinal which is not a successor.
|
H: example for $\varphi(H \cap K) \subset \varphi(H)\cap \varphi(K)$, $H, K< G$ and $\varphi \in Hom(G,G')$
Let $H$ and $K$ be subgroups of $G$ and $\varphi \in Hom(G,G')$
Give an example of $\varphi(H \cap K) \subseteq \varphi(H)\cap \varphi(K)$ for which the inclusion holds strictly.
I have already proved the relation, and tried a few examples but all I can come up with are examples in which they are equal.
Any ideas?
AI: A simple example: given a nontrivial abelian group $A$, take $G=A\times A$ and $G'=A$. Then we define
$$\varphi(g,h)=gh$$
Take $H=A\times\{1\}$, $K=\{1\}\times A$. Then the intersection is trivial, but the intersection of the images is the whole group.
As per Sebastian Schoennenbeck's example, one can take any symmetric group $S_n$ with $n\geq 3$ and let $\varphi:S_n\to\{-1,1\}$ be the sign homomorphism. Then, for example, for the transpositions $(12)$ and $(23)$, the corresponding subgroups of order $2$ intersect trivially, but the images of the two subgroups are the same and equal the whole image of the homomorphism, so the containment is proper.
|
H: Is the cardinality of the set of elements that are less than $x$ small?
Can every set $X$ be well-ordered in such a way that for every element $x$, the cardinality of the set of elements that are less than $x$ is less than the cardinality of $X$?
I saw this idea in the proof of some problem. Can you show me why it is true?
AI: Yes. Initial ordinals are exactly the order types of such well-orders.
And we can alternatively define an initial ordinal as the smallest ordinal of a given cardinality. To see why, if $\alpha$ is an initial ordinal, every proper initial segment (i.e. the set of all those which are smaller than some $x$) is isomorphic to a strictly smaller ordinal. Since $\alpha$ was minimal in its cardinality, that smaller ordinal must be smaller also in cardinality.
If so, given a set $X$, if we can well-order $X$ (which we can, assuming the axiom of choice holds), then we can well-order it with a minimal order-type, which is an initial ordinal.
|
H: Entire function that is in $\mathcal{L}^1({\mathbb R^2})$
I take the Lebesgue measure on $\mathbb R$.
Take $f:\mathbb R \to \mathbb R$ a continuous function and $\int_{\mathbb R}|f(x)|\mathrm d\mu(x)<\infty$.
Can we say anything about $\lim_{|x|\to \infty}|f(x)|$? Or can we say whether $f$ is bounded?
If we can show $f$ is bounded, can we generalize this result to $\mathbb R^n$?
Basically I was dealing with a different problem.
It said "If $f:\mathbb C \to \mathbb C$ is a holomorphic function and $\int_{\mathbb R^2}|f(x+iy)|\mathrm dx\mathrm dy <\infty$ then $f\equiv0$".
So my approach was: if I can show $f$ is bounded, then I am done.
Thanks in advance; please help me out and let me know if my approach is wrong.
AI: By Cauchy's integral formula, $2\pi i f(z)=\int_{|w-z|=R}\frac{f(w)}{w-z}dw=\int_0^{2\pi}f(z+Re^{it})i\,dt$, so $2\pi|f(z)|\le \int_0^{2\pi}|f(z+Re^{it})|dt$; integrating from, say, $R=1$ to $R=2$ we get (using that $dxdy=R\,dR\,dt \ge dR\,dt$ for $R \ge 1$)
$2\pi|f(z)|\le \int_1^2\int_0^{2\pi}|f(z+Re^{it})|dtdR \le \int_1^2\int_0^{2\pi}|f(z+Re^{it})|RdtdR \le \int \int_{A_z} |f(x+iy)|dxdy \le M $
where $A_z$ is the annulus centered at $z$ between the circles of radii $1$ and $2$, and $M= \int_{\mathbb R^2}|f(x+iy)|\mathrm dx\mathrm dy <\infty$,
so $f$ is bounded, hence constant by Liouville's theorem, and the constant is zero because a non-zero constant is not integrable over $\mathbb R^2$.
As noted in the comments, the same proof works for $f$ harmonic, since the integral mean value property still holds, as does the fact that a bounded harmonic function in the plane (even one bounded on one side, if the function is real) is constant.
|
H: Prove that for $P_n(z)$, if $r<1$ there exists $N$ s.t. for every $n>N$, $P_n$ has no roots in $\{z:|z|<r\}$
Prove that if $r<1$ then there exists $N$ such that for every $n>N$, $P_n$ has no roots in $\{z:|z|<r\}$, where $P_n(z)=1+2z+\ldots +nz^{n-1}$.
AI: Hint: Show that $(1-z)P_n(z)=\frac {1-z^{n}} {1-z} -nz^{n}$. Conclude that $(1-z)P_n(z) \to \frac 1 {1-z}$ uniformly on $\{z: |z| <r\}$. Can you finish?
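The claim is also easy to probe numerically (numpy's root finder is used for convenience; $n=60$ and $r=0.9$ are arbitrary sample values):

```python
import numpy as np

n, r = 60, 0.9
coeffs = np.arange(n, 0, -1)          # P_n(z) = 1 + 2z + ... + n z^{n-1}, highest degree first
print(min(abs(np.roots(coeffs))), r)  # the smallest root modulus exceeds r once n is large
```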
|
H: How many trains a train will meet in its way from station $A$ to station $B$?
Each hour, a train starts from station $A$ towards station $B$ and another starts from station $B$ towards station $A$. All trains have the same speed and take $5$ hours to reach the other station. How many trains will a train meet on its way from one station to the other?
My answer is $11$, but my book says the answer is $10$.
My explanation: Suppose Train X starts from station $A$ towards station $B$. Train X meets one train as it leaves the station. Then Train X meets another train every $30$ minutes, and meets the 11th train as it enters station $B$.
Could anyone please help me understand what I'm doing wrong.
AI: On the journey itself (not counting entering or exiting the station), $9$ trains are met, and then an additional two: one at the start as it leaves the station and one as it arrives, as you say. I understand how the answer could be $9$ or $11$ due to the ambiguity of whether meeting in the station counts, but in either case the answer is not $10$. Maybe whoever wrote the answers simply did $5 / (1/2) = 10$ rather than considering the meetings at times $0$ and $5$.
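A brute-force count supports this (unit speed and track length $5$ are normalizations; the range of departure times is chosen wide enough to cover every relevant train):

```python
# Train X leaves A at t=0 with position x(t)=t, 0<=t<=5 (unit speed, track length 5).
# A train that left B at integer time s has position 5-(t-s) for s<=t<=s+5.
strict = total = 0
for s in range(-10, 11):
    t = (5 + s) / 2                      # solve t = 5 - (t - s)
    if 0 <= t <= 5 and s <= t <= s + 5:
        total += 1
        if 0 < t < 5:
            strict += 1
print(strict, total)                     # 9 11
```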
|
H: Let $f(x) = x + 2\sin(\ln(\frac{1}{x}))$ for $x \geq 1$. Show that $\lvert f(x) - f(y) \rvert \leq 3|x-y|$ for all $x, y \geq 1$
My attempt:
$$\lvert f(x) - f(y) \rvert = \lvert x-y + 2(\sin(\ln(\frac{1}{x})) - \sin(\ln(\frac{1}{y}))) \rvert$$
Using triangle inequality and $\lvert \sin(x) - \sin(y) \rvert \leq \lvert x-y \rvert$ we have:
$$\lvert f(x) -f(y)\rvert \leq \lvert x-y \rvert + 2\lvert \ln(\frac{1}{x}) - \ln(\frac{1}{y})\rvert$$
I'm stuck here. I know that for $0 < x, y \leq 1$ we have $\lvert x - y \rvert \leq \lvert \ln{x} - \ln{y} \rvert$ but it's the opposite of what I want. Am I missing something obvious? Or do I need a different approach altogether?
AI: For each $x\geqslant1$, you have $f(x)=x+2\sin\bigl(-\log(x)\bigr)$ and therefore$$f'(x)=1-\frac{2 \cos (\log (x))}x.$$So,$$|f'(x)|\leqslant1+\frac2{|x|}\leqslant3$$and therefore, by the Mean Value Theorem,$$|f(x)-f(y)|\leqslant3|x-y|.$$
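A quick random check of the difference quotients (the sampling range $[1,100]$ is arbitrary) agrees with the bound:

```python
import math, random

def f(x):
    return x + 2 * math.sin(math.log(1 / x))

worst = 0.0
for _ in range(10**5):
    x, y = random.uniform(1, 100), random.uniform(1, 100)
    if x != y:
        worst = max(worst, abs(f(x) - f(y)) / abs(x - y))
print(worst)  # stays below 3, as the mean value argument predicts
```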
|
H: How can I understand the definition $(a-ε, a+ε)\thinspace\cap(D\thinspace\backslash{\{a\}})= \emptyset$?
Let D be a subset of $\mathbb {R}$ and $a\in\mathbb {R}$.
The definition for a point of convergence of a set D: $(a-ε, a+ε)\thinspace\cap(D\thinspace\backslash{\{a\}})= \emptyset$
I even drew it for better understanding:
It is clearly shown that the interval $(a-\varepsilon, a+\varepsilon)$ contains infinitely many elements of the set $D$, including the point $a$. But when I look at the intersection of the interval $(a-\varepsilon, a+\varepsilon)$ and $D\setminus\{a\}$, I don't get an empty set but rather infinitely many elements of $D$ without $a$, contrary to what the left side of the definition directs.
What am I missing?
AI: Wiki's definition of isolated point says:
"if there exists a neighborhood of point $a$ which does not contain any other points of $D$."
This means: $(a - \epsilon, a + \epsilon) \cap D \setminus \{ a \} = \emptyset$, for some $\epsilon$.
Regarding limit (or accumulation) point the definition is:
"every neighbourhood of $a$ (with respect to the topology) also contains a point of $D$ other than $a$ itself."
This means: $(a - \epsilon, a + \epsilon) \cap D \setminus \{ a \} \ne \emptyset$, for every $\epsilon$.
In conclusion, the two definitions are different: the first one has $=$ and has "there is at least one neighborhood (existential quantifier)..." while the second has $\ne$ and has "for every neighborhood (universal quantifier)...".
|
H: KL-Divergence of Uniform distributions
Having $P=Unif[0,\theta_1]$ and $Q=Unif[0,\theta_2]$ where $0<\theta_1<\theta_2$
I would like to calculate the KL divergence $KL(P,Q)=?$
I know the uniform pdf $\frac{1}{b-a}$ and that the distribution is continuous, therefore I use the general KL divergence formula:
$$KL(P,Q)=\int f_{\theta}(x)\,\ln\!\left(\frac{f_{\theta}(x)}{f_{\theta^*}(x)}\right)dx$$
$$=\int\frac{1}{\theta_1}\,\ln\!\left(\frac{1/\theta_1}{1/\theta_2}\right)dx$$
$$=\int\frac{1}{\theta_1}\,\ln\!\left(\frac{\theta_2}{\theta_1}\right)dx$$
From here on I am not sure how to use the integral to get to the solution.
AI: You got it almost right, but you forgot the indicator functions. So the pdf for each uniform is
$$f_P(x) = \frac{1}{\theta_1}\mathbb I_{[0,\theta_1]}(x)$$
$$f_Q(x) = \frac{1}{\theta_2}\mathbb I_{[0,\theta_2]}(x)$$
Hence,
$$
KL(P,Q) = \int_{\mathbb R}\frac{1}{\theta_1}\mathbb I_{[0,\theta_1]}(x)
\ln\left(\frac{\theta_2 \mathbb I_{[0,\theta_1]}}{\theta_1 \mathbb I_{[0,\theta_2]}}\right)dx
$$
Since $\theta_1 < \theta_2$, we can change the integration limits from $\mathbb R$ to $[0,\theta_1]$ and eliminate the indicator functions from the equation. Also, since the integrand is constant there, the integral can be solved trivially:
$$
\int_{\mathbb R}\frac{1}{\theta_1}\mathbb I_{[0,\theta_1]}
\ln\left(\frac{\theta_2 \mathbb I_{[0,\theta_1]}}{\theta_1 \mathbb I_{[0,\theta_2]}}\right)dx =
\int_{\mathbb [0,\theta_1]}\frac{1}{\theta_1}
\ln\left(\frac{\theta_2}{\theta_1}\right)dx=$$
$$
=\frac {\theta_1}{\theta_1}\ln\left(\frac{\theta_2}{\theta_1}\right) -
\frac {0}{\theta_1}\ln\left(\frac{\theta_2}{\theta_1}\right)=
\ln\left(\frac{\theta_2}{\theta_1}\right)
$$
And you are done.
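A quick check by direct numerical integration (the values $\theta_1=2$, $\theta_2=5$ are arbitrary):

```python
import math

t1, t2 = 2.0, 5.0

def integrand(x):
    p = 1 / t1 if 0 <= x <= t1 else 0.0   # density of P
    q = 1 / t2 if 0 <= x <= t2 else 0.0   # density of Q
    return p * math.log(p / q) if p > 0 else 0.0

N = 10**5
dx = t2 / N                               # midpoint rule on [0, theta_2]
kl = sum(integrand((k + 0.5) * dx) * dx for k in range(N))
print(kl, math.log(t2 / t1))              # both ~ 0.91629
```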
|
H: How to prove that this function all over the positive integers gives us this sequence?
On the one hand, I have this sequence: $0,1,1,2,2,2,3,3,3,3,\ldots$, in which each non-negative integer $n$ appears $n+1$ times consecutively.
On the other hand, I have this function: $a_n=\lfloor\frac{\sqrt {1+8n}-1}{2}\rfloor$ where $n\ge0$.
$a_0=0$ ; $a_1=1$ ; $a_2=1$; $a_3=2$
This function seems to be a formula for this sequence.
However, if it is the case, how can we prove it?
And if it isn't, what is the explanation?
To begin with, I did something:
$\frac{\sqrt {1+8n}-1}{2}=t+b$ where $t\in \mathbb N$ and $b\in [0,1[$
After some simplification, I get this:
$8n= 4t^2+4b^2+8tb+4t+4b$
After this, I don't know how to continue...
AI: Idea:
In the sequence, $a_n$ first becomes $m$ when $n=\sum\limits_{i=0}^m i=\dfrac{m(m+1)}2$; i.e., when $m^2+m-2n=0$.
Solving this quadratic for $m$, we get $m=\dfrac{-1+\sqrt{1+8n}}2$.
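The formula is easy to check against the directly generated sequence:

```python
import math

def a(n):
    return (math.isqrt(1 + 8 * n) - 1) // 2   # equals floor((sqrt(1+8n)-1)/2) exactly

direct = [m for m in range(100) for _ in range(m + 1)]   # 0 once, 1 twice, 2 three times, ...
assert all(a(n) == direct[n] for n in range(len(direct)))
print([a(n) for n in range(10)])   # [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]
```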
|
H: Limit of sequence $a_n = (-4)^{\frac{1}{2n+1}}$
Using a calculator, I can see that this sequence $a_n = (-4)^{\frac{1}{2n+1}}$ is convergent and has limit -1. However, I am struggling to prove this in a formal way.
Since the exponent tends to zero when $n$ tends to infinity, I thought that the limit should be 1 (but this is not what I get using the calculator). Why is my reasoning wrong?
It would be very helpful if anyone could give me a hint as to how to proceed.
Thanks in advance.
AI: Using your definition of $(-4)^{\frac{1}{2n+1}}$, we have
$$ (-4)^{\frac{1}{2n+1}}=-4^{\frac{1}{2n+1}}=-e^{\frac{\ln 4}{2n+1}}\longrightarrow -1\quad(n\to\infty) $$
because $\lim\limits_{n\rightarrow +\infty}{\frac{\ln 4}{2n+1}}=0$.
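Numerically, with the real odd root as in this definition:

```python
import math

for n in [1, 10, 100, 1000]:
    print(n, -math.exp(math.log(4) / (2 * n + 1)))   # tends to -1, not to 1
```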
|
H: Calculation with Landau symbol (Big $O$)
I'm not sure about my calculations with the Landau Symbol $O$:
Let $c>0$ and $n\to \infty$. Consider:
$$\frac{1}{c\sqrt{n}+O\left(\frac{\ln(n)}{n}\right)}-\frac{1}{c\sqrt{n}}=
\frac{O\left(\frac{\ln(n)}{n}\right)}{\left (c\sqrt{n}+O\left(\frac{\ln(n)}{n}\right)\right)c\sqrt{n}}=
\frac{O\left(\frac{\ln(n)}{n\sqrt{n}}\right)}{\left (c\sqrt{n}+O\left(\frac{\ln(n)}{n}\right)\right)}
$$
Now, I am not sure if I can conclude
$=O\left(\frac{\ln(n)}{n^2}\right),$
since the $O$ in the denominator tends to zero compared to the other summand.
AI: Hint: A slightly different approach to yield your desired bound is to factor $c\sqrt n$ out of the denominator in the first fraction and use geometric series to handle the resulting term after factoring out $c\sqrt n$.
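Spelling the hint out: factoring $c\sqrt{n}$ from the denominator and expanding $\frac{1}{1+u}=1+O(u)$ gives
$$\frac{1}{c\sqrt{n}+O\left(\frac{\ln n}{n}\right)}=\frac{1}{c\sqrt{n}}\cdot\frac{1}{1+O\left(\frac{\ln n}{n^{3/2}}\right)}=\frac{1}{c\sqrt{n}}\left(1+O\left(\frac{\ln n}{n^{3/2}}\right)\right)=\frac{1}{c\sqrt{n}}+O\left(\frac{\ln n}{n^{2}}\right),$$
so the difference is indeed $O\left(\frac{\ln(n)}{n^2}\right)$.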
|
H: Prove that G is Hamiltonian
Given a planar graph $G$ with degree sequence $6,6,4,4,4,k,k$ on $7$ vertices and $10$ regions
(by Euler's formula $n-e+r=2$, where $r$ is the number of regions, I found that $k=3$),
prove that $G$ contains a Hamiltonian cycle.
I did find a cycle visually on the actual graph, whereas in the solutions (by a student) it was argued that a vertex of degree $3$ doesn't have to be a neighbor of the other vertex of degree $3$, and thus by the theorem the graph is Hamiltonian.
What would be a better explanation in a discrete math course?
AI: Here it is... a Hamiltonian cycle in your graph:
|
H: How to solve $x^x-x=1$?
I was recently posed the question "solve for $x$ in $x^x-x=1$". The intended answer was $x=0$, assuming that $0^0=1$, but I used brute force and determined another solution, $x\approx1.776775040097$ (which Wolfram Alpha agrees with me on). Is there a closed form or symbolic solution to this - an exact solution? I have tried solving with the super square root (and Lambert W function), but this didn't seem to work out for me. Is there a way to solve it?
AI: Consider that you look for the zero's of function
$$f(x)=x^x-x-1$$
Its first derivative $f'(x)=x^x (\log (x)+1)-1$ cancels at $x=1$ and the second derivative test $f''(1)=2$ shows that this is a minimum.
Build a Taylor expansion to get
$$f(x)=-1+(x-1)^2+\frac{1}{2} (x-1)^3+\frac{1}{3} (x-1)^4+O\left((x-1)^5\right)$$ Using series reversion, we get
$$x=1+\sqrt{y+1}-\frac{y+1}{4}-\frac{1}{96} (y+1)^{3/2}+O\left((y+1)^2\right)$$ where $y=f(x)$. Making $y=0$, this gives as an approximation
$$x=\frac{167}{96}\approx 1.73958 $$ To polish the root, use Newton method starting with this estimate. The iterates will be
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 1.739583333 \\
1 & 1.778584328 \\
2 & 1.776779132 \\
3 & 1.776775040
\end{array}
\right)$$
Edit
If we make the first expansion $O\left((x-1)^n\right)$ and repeat the inversion series, we generate the sequence
$$\left\{2,\frac{7}{4},\frac{167}{96},\frac{175}{96},\frac{160379}{92160},\frac{3687}{2048},\frac{12144341}{6881280},\frac{110221693}{61931520},\frac{211659504277}{118908518400}\right\}$$
We can also use $x_0=2$ and use high order iterative methods. For order $4$, that is to say one level after Householder method, we have
$$x=2\,\frac {4575+67460 a+299400 a^2+558920 a^3+463660 a^4+141128 a^5} {6655+86720 a+352260 a^2+615000 a^3+483960 a^4+141128 a^5 }$$ where $a=\log(2)$.
This gives, as another approximation, $x=1.776779506$.
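The Newton iteration is easy to reproduce (starting from the estimate $167/96$):

```python
import math

def f(x):
    return x**x - x - 1

def fp(x):                        # f'(x) = x^x (log x + 1) - 1
    return x**x * (math.log(x) + 1) - 1

x = 167 / 96
for _ in range(5):
    x -= f(x) / fp(x)
print(x)                          # 1.776775040097...
```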
|
H: Determining an expression for a linear function $f$ such that $f(x_1)=y_1$ and $f(x_2)=y_2$
We say that a function $f:[a,b]\rightarrow\mathbb{R}$ is linear if it is of the form $f(t)=mt+n$ for some $m, n \in\mathbb{R}$. Show that $f$ is determined by its values at two (distinct) points in $[a,b]$. More precisely, arrive at an expression for a linear $f$ such that $f(x_1)=y_1$ and $f(x_2)=y_2$ for some $x_1, x_2 \in [a,b]$.
I know that this function is continuous on $[a,b]$ and so it is uniformly continuous. But I don't know what they really asked for.
AI: You need two equations to uniquely determine $m$ and $n$. Plugging any two points into the linear equation, you arrive at an expression of the form
$$y-y_1=\frac{y_2-y_1}{x_2-x_1} (x-x_1)$$
|
H: Cycle structure in symmetric group
I am studying group representations, and to prove that characters of symmetric groups are integer-valued, i.e. $\chi(g) \in \mathbb{Z}$ for all $g \in S_{n}$, I need to prove that:
Consider $\sigma \in S_{n}$ with $\gcd(m, o(\sigma)) = 1$. Then $\sigma$ and $\sigma^{m}$ have the same cycle structure.
Can someone help me with this question? Thank you in advance.
AI: $\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$If $\tau$ is a cycle of length $k$ in the disjoint cycle decomposition of $\sigma$, then $k \mid o(\sigma)$.
You have thus to prove that if $\gcd(m, k) = 1$, then $\tau^{m}$ is also a cycle of length $k$.
Consider the cyclic subgroup $\Span{\tau}$ of $S_{n}$. Then a standard result shows that $o(\tau^{m}) = o(\tau)$, so that $\Span{\tau} = \Span{\tau^{m}}$. In particular, $\tau$ is a power of $\tau^{m}$. This implies that $\tau^{m}$ is a cycle of length $k$.
|
H: Transfinite induction, proving $\operatorname{P}(0)$ although $\alpha = 0$ is out from hypothesis.
In a transfinite induction, if I have to prove that a predicate $\operatorname{P}(\alpha)$ is true $\forall \alpha \gt 0$, can I proceed by showing $\operatorname{P}(0)$, $\operatorname{P}(\alpha) \rightarrow \operatorname{P}(\alpha + 1)$, and $(\operatorname{P}(\alpha) \; \forall \alpha \lt \lambda) \rightarrow \operatorname{P}(\lambda)$ for $\lambda$ a limit ordinal? $\operatorname{P}(0)$ is vacuously true because $\alpha = 0$ is excluded from the hypothesis, but, I'm wondering, is it enough to conclude that $\operatorname{P}(1)$ is true?
I thought about it proving this criterion: $\forall \alpha \gt 0 \; ( \omega^{\beta_k} \cdot n_k + \ldots + \omega^{\beta_1} \cdot n_1 + n_0 ) \cdot \omega^{\alpha} = \omega^{\beta_k + \alpha}, \; k, n_i \in \omega\! \smallsetminus\!\{0\} \; \forall i$.
AI: Yes, there are two ways around this:
Define $Q(\alpha)=(\alpha=0\lor P(\alpha))$, and prove that $\forall\alpha\,Q(\alpha)$. That is, forcefully add $0$ to your predicate.
Define $Q(\alpha)=(\alpha<\omega\land P(\alpha+1))\lor(\alpha\geq\omega\land P(\alpha))$. That is, shift the natural numbers by $1$, and then go back to $P$.
In either case, the practice is that you can simply start your induction with $1$. But now you will have to verify it by hand, like you would normally do for a base case.
|
H: Is the orthogonal group convex?
Is the group of real orthogonal matrices convex?
I've read that this space has two connected components, and I don't think that this set is convex, since all convex sets must be path connected. However, I'm just not a specialist in Algebra. Could someone please provide an explanation?
AI: To show it is not convex, consider the identity matrix $I$ and the negative identity matrix $-I$. Both are orthogonal matrices.
However
$$\frac{1}{2}I + \frac{1}{2}(-I) = 0$$
which is not an orthogonal matrix.
|
H: Injection from an open interval into a ball in $\mathbb{R}^{2}$
As an exercise I am trying to show that we can find an injection from some open interval $(0,1)$ say into the open ball $B_{r}(x)$, where $r>0$ and $x \in \mathbb{R}^{2}.$
I'm a bit confused because I've only ever dealt with injective functions where the co-domain is $\mathbb{R}$ but here it is a ball.
But would the injection be something like each element in the open interval maps to a radius that is used in the definition of the open ball?
Thanks
AI: You can simply take$$\begin{array}{rccc}f\colon&(0,1)&\longrightarrow&B_r(x)\\&t&\mapsto&x+(tr,0).\end{array}$$It makes sense, since $\|f(t)-x\|=\|(tr,0)\|=tr<r$. And it is injective, because$$(t_1r,0)=(t_2r,0)\iff t_1r=t_2r\iff t_1=t_2.$$
|
H: Determine a polynomial function with some information about the function
I am working through some exercises at the end of a textbook chapter on polynomial functions. Till now the questions have been about providing answers based on a given polynomial function. However, with this particular question I am to work backwards and define the polynomial based on some information about it:
use the information about the graph of a polynomial function to determine the function. Assume the leading coefficient is $1$ or $–1$. There may be more than one correct answer.
The $y$-intercept is $(0, 0)$, the $x$-intercepts are $(0,0)$, $(2,0)$, and the degree is 3.
End behavior: As $x$ approaches $-\infty$, $y$ approaches $-\infty$, as $x$ approaches $\infty$, $y$ approaches $\infty$.
What I can tell is that since it's an odd degree, the functions will approach $-\infty$ or $+\infty$ either side of $x=0$ but that's already provided in the description.
Tried writing it down as: $y = x(x-2)$ since the root of $(0,0)$ is $0$ (right) and the root of $(2,0)$ is $-2$ (right?).
The provided answer is $x^3-4x^2+4x$.
How can I arrive at this solution with the information provided? Granular baby steps appreciated if possible?
AI: Since there are two $x$-intercepts, the degree is at least $2$; from the behavior as $x$ approaches $-\infty$ and $\infty$, the degree must be odd, hence at least $3$.
If it is cubic, the leading coefficient is $1$.
$$y=x(x-2)(x-c)$$
Since there are only $2$ distinct roots, $c$ is either $0$ or $2$.
The solution provided by the book is obtained by taking $c=2$.
Another alternative solution is $x^2(x-2)$.
|
H: How can I prove $\sum_{n=1}^{\infty}\frac{\cos(\frac{n\pi}{3})}{n^s}=\frac{1}{2}(6^{1-s}-3^{1-s}-2^{1-s}+1)\zeta(s)$ for $\operatorname{Re}(s)>1$
Question:-
Prove that, for $\operatorname{Re}(s)>1$,
$$\sum_{n=1}^{\infty}\frac{\cos\left(\frac{n\pi}{3}\right)}{n^s}=\frac{1}{2}\left(6^{1-s}-3^{1-s}-2^{1-s}+1\right)\zeta(s).$$
I used this series while evaluating
$\int_{0}^{t} x^2\cot(x)dx$
I got
Evaluating it we get
On letting $t=\frac{\pi}{6}$,
we have to find $\int_{0}^{\pi/6} x^2 \cot(x)\,dx$, which is equal to
After that I used this for different values of $s$.
But I didn't know how to prove that.
Can anybody help me!
AI: $$S(s)=\sum_{n=1}^\infty\frac{\cos(n\pi/3)}{n^s}$$
First of all notice that if $n$ is a multiple of 3 then we have a series of the form:
$$\sum_{m=1}^\infty\frac{\cos(m\pi)}{(3m)^s}=3^{-s}\sum_{m=1}^\infty\frac{(-1)^m}{m^s}=-3^{-s}\eta(s)$$
We now have:
$$S(s)=-3^{-s}\eta(s)+\sum_{n=1}^\infty\frac{\cos\left[(3n-2)\pi/3\right]}{(3n-2)^s}+\sum_{n=1}^\infty\frac{\cos\left[(3n-1)\pi/3\right]}{(3n-1)^s}$$
$$\frac{(3n-2)\pi}{3}=n\pi-\frac 23\pi$$
and we can see that:
$$\cos\left(n\pi-\frac23\pi\right)=\cos(n\pi)\cos(2\pi/3)+\sin(n\pi)\sin(2\pi/3)$$
now since $\sin(n\pi)=0$ for an integer $n$ we have:
$$\sum_{n=1}^\infty\frac{\cos\left[(3n-2)\pi/3\right]}{(3n-2)^s}=\cos(2\pi/3)\sum_{n=1}^\infty\frac{(-1)^n}{(3n-2)^s}$$
similarly we can say that:
$$\sum_{n=1}^\infty\frac{\cos\left[(3n-1)\pi/3\right]}{(3n-1)^s}=\cos(\pi/3)\sum_{n=1}^\infty\frac{(-1)^n}{(3n-1)^s}$$
Bringing this all together we get:
$$\sum_{n=1}^\infty\frac{\cos(n\pi/3)}{n^s}=-3^{-s}\eta(s)+\frac 12\sum_{n=1}^\infty\frac{(-1)^n}{(3n-1)^s}-\frac 12\sum_{n=1}^\infty\frac{(-1)^n}{(3n-2)^s}$$
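As a numerical spot check of the target identity at $s=2$, where $\zeta(2)=\pi^2/6$ and the right-hand side reduces to $\pi^2/36$:

```python
import math

s = 2
lhs = sum(math.cos(n * math.pi / 3) / n**s for n in range(1, 200001))
rhs = 0.5 * (6**(1 - s) - 3**(1 - s) - 2**(1 - s) + 1) * math.pi**2 / 6  # zeta(2) = pi^2/6
print(lhs, rhs)  # both ~ 0.2741557 = pi^2/36
```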
|
H: Why is this theorem about derivatives true? $\frac{dy}{dx}= \frac{1}{dx/dy}$
$$\frac{dy}{dx}= \frac{1}{\;\frac{dx}{dy}\;}$$
Why is the above theorem true as long as $dx/dy$ is not zero? How can you prove it rigorously?
I don't think it is obvious from the definition of the derivative. I think this says $dx/d(x^2)$ will equal $\frac{1}{2x}$, and so we can evaluate derivatives such as this. But I want a rigorous proof.
Edit: from the answers, I think you need the existence and differentiability of $f^{-1}$ for something like this to even work? Could the derivative still exist in such an example and yet fail to be evaluated like this? Or does that have no meaning?
AI: If
$$
f^{-1}(f(x)) = x
$$
in some neighborhood of $x$, then by the chain rule,
$$
\dfrac{df^{-1}(f(x))}{dy} f'(x)= 1,
$$
and
$$
\dfrac{df^{-1}(y)}{dy}= \dfrac{1}{f'(x)}
$$
where $y = f(x)$.
|
H: Figure out if the improper Integral exists
please only give a slight hint, not the complete way to the goal, since I want to figure it out myself ;)
So given is the improper integral:
$$\displaystyle\int_{0+0}^{+\infty} x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}dx$$
So I want to check for which $\alpha \in \mathbb{R}$ my improper integral exists.
Since I couldn't figure out a good way to build the antiderivative, I figured I could evaluate:
$\lim\limits_{x\rightarrow+\infty}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}$ and $\lim\limits_{x\rightarrow0+0}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}$
In order for the area function to converge, $\lim\limits_{x\rightarrow+\infty}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}=0$
has to hold, and $\lim\limits_{x\rightarrow0+0}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}$ has to exist.
Now I looked at the cases for which the exponents have a significant change in behavior.
Namely: $\alpha < 0$ and $0<\alpha<3$ and $3<\alpha$
In the case $\alpha < 0$:
$\lim\limits_{x\rightarrow0+0}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}=+\infty$, so it is not a real number and doesn't work.
In the case $3<\alpha$:
$\lim\limits_{x\rightarrow0+0}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}=0$
and
$\lim\limits_{x\rightarrow+\infty}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}=\infty$
In the case $0<\alpha<3$
$\lim\limits_{x\rightarrow0+0}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}=0$
$\lim\limits_{x\rightarrow+\infty}x^{\frac{\alpha}{5}}(1+x^2)^{\alpha-3}$ I could not figure out, since l'Hôpital did not help.
My assumption is that $\forall \alpha \in M \subset (0,3)$ the improper integral exists.
But the one limit I couldn't figure out... What could I try? Or is there a clever way to build the antiderivative which I obviously don't know?
Thank you for your time and help :)
AI: Your conditions are not always true. Suppose that in the limit of large $x$ the function behaves like $1/x$. The limit is $0$, but the integral diverges to $\infty$. Similarly, at low $x$, the function might behave like $1/\sqrt x$. The limit of the function at $0+0$ is $+\infty$, but the integral is finite.
So how do we solve the problem. We start similarly to your approach: we split the integral in three parts, one for low $x$, one for intermediate $x$, and one for high $x$. Since the function is continuous and bounded in $[a,b]\subset(0,\infty)$, the middle integral is finite. At low $x$, we have $1+x^2\approx1$, so the corresponding integral is $$I_L=\int_0^a x^{\frac \alpha 5}dx$$
For $\alpha=-5$ you get $$I_L=\ln a-\ln 0=\infty$$
otherwise $$I_L=\frac 1{\frac\alpha 5+1}x^{\frac\alpha 5+1}\mid_0^a$$
In order for $I_L$ to be finite you need to have $$\frac\alpha 5+1\gt0$$ or $\alpha\gt -5$. Similarly, at high $x$ you have $1+x^2\approx x^2$, so $$I_H=\int_b^\infty x^{\frac\alpha 5}x^{2(\alpha-3)}dx$$
Follow the same approach to see the other condition for $\alpha$.
|
H: Uniqueness proof: smallest element of an integers set
How do you prove that the smallest element of a nonempty set of positive integers is unique? This is straightforward, but how do you show it formally?
AI: I think you can do something like this:
Take a set of positive integers, ${S \subseteq \mathbb{N}}$. Assume that we have two positive integers ${n_1,n_2 \in S}$ with the property that ${n_1 \leq s\ \forall\ s \in S}$, and also ${n_2 \leq s\ \forall\ s \in S}$ (that is, both ${n_1,n_2}$ are smallest positive integers in the set ${S}$). This would mean that both ${n_1\leq n_2}$, and equally that ${n_2\leq n_1}$. The only thing one can conclude is that ${n_1=n_2}$. Note you asked how to prove if one existed, it would be unique: this does not necessarily prove one exists, though
|
H: Prove the following expectation equality
$X$ is a non negative ramdom variable, prove that $EX = \int_{0}^{\infty} P(X>t)dt$.
Well, I don't know how to start the proof. I have a hint that says: write $P(X>t)$ as the integral of the indicator function and use Fubini's theorem.
AI: As you said, you can use Fubini's theorem to write
$$
\int_0^\infty P(X>t)dt = \int_0^\infty E [\mathbb 1_{\{X>t\}}]dt
= E \left[\int_0^\infty \mathbb 1_{\{X>t\}}dt \right]
= E \left[X \right].
$$
The use of Fubini is justified by the fact that the integrand is non-negative.
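A quick numerical illustration (my example, not part of the proof): for $X \sim \operatorname{Exp}(1)$ we have $E[X]=1$ and $P(X>t)=e^{-t}$, whose integral over $[0,\infty)$ is also $1$.
```python
# Check E[X] = ∫₀^∞ P(X>t) dt for X ~ Exp(1): both sides should be ≈ 1.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=1_000_000)
print(samples.mean())                                     # Monte Carlo E[X] ≈ 1

tail_integral, _ = quad(lambda t: np.exp(-t), 0, np.inf)  # ∫₀^∞ P(X>t) dt
print(tail_integral)                                      # = 1 up to quadrature error
```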
|
H: Find the adjoint of the operator that conjugates a matrix by a fixed invertible matrix
Let $V=M_n\left(\mathbb{C}\right)$ with the standard inner product $\langle A,B\rangle=\operatorname{tr}\left(B^{*}A\right)$, and let $P\in V$ be an invertible matrix. We define an operator $T_p\colon V\to V$ by $$T_p\left(A\right)=P^{-1}AP$$
I need to find the adjoint $T_p^{*}$.
I tried working from the definition and it didn't get me far. I also thought I could express $T_p$ in the standard basis and then use $\left[T^{*}\right]_B=\left(\left[T\right]_B\right)^{*}$, but I don't see how to express $T_p$ in such a way, so I'm stuck.
AI: Hint: if $A, B\in V$, then $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.
Using this rule, try to find a matrix $C$ depending on $B$ such that
\begin{equation}
\langle T_p(A), B\rangle = \langle A, C\rangle
\end{equation}
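Whatever candidate the hint leads you to, it is easy to test numerically. The sketch below assumes the candidate is $C=(P^{*})^{-1}BP^{*}$, i.e. $T_p^{*}(B)=(P^{*})^{-1}BP^{*}$ (spelled out here only so the check is runnable; deriving it from the cyclic-trace rule is the exercise the hint sets up).
```python
# Verify <T_p(A), B> = <A, C> numerically for C = (P*)^{-1} B P*.
import numpy as np

rng = np.random.default_rng(1)
n = 4
rand = lambda: rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A, B, P = rand(), rand(), rand()                  # a random P is invertible a.s.

inner = lambda X, Y: np.trace(Y.conj().T @ X)     # <X, Y> = tr(Y* X)

TpA = np.linalg.inv(P) @ A @ P                    # T_p(A) = P^{-1} A P
C = np.linalg.inv(P.conj().T) @ B @ P.conj().T    # candidate adjoint applied to B

print(np.isclose(inner(TpA, B), inner(A, C)))     # True
```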
|
H: Show the set $S=\{(x_1, x_2, x_3, x_4, x_5)\in \mathbb{R}^5 \vert x_3^2e^{x_1+x_2^{100}}>2\}$ intersected with another set is open.
Can I please get feedback on the proof below? Thank you!!
$\def\R{{\mathbb R}}$
Show the set $$S=\{(x_1, x_2, x_3, x_4, x_5)\in \R^5 \vert x_3^2e^{x_1+x_2^{100}}>2 \text{ and } x_3x_4 - x_5^2<-1\} \subset\R^5$$ is open.
$\textit{Proof.}$ Define $f\colon \R^5 \to \R$ by $f(x_1, x_2, x_3, x_4, x_5) = x_3^2e^{x_1+x_2^{100}}$; then $f$ is continuous, and $(2, \infty)$ is open in $\R$, so $f^{-1}((2,\infty))$ is open, i.e., $\{(x_1, x_2, x_3, x_4, x_5)\in \R^5 \vert x_3^2e^{x_1+x_2^{100}}>2\}$ is an open set. Now define $g\colon \R^5 \to \R$ by $g(x_1, x_2, x_3, x_4, x_5)=x_3x_4 - x_5^2$; then $g$ is continuous, and $(-\infty, -1)$ is open in $\R$, so $g^{-1}((-\infty,-1))$ is open in $\R^5$. Thus $\{(x_1, x_2, x_3, x_4, x_5)\in \R^5 \vert x_3x_4 -x_5^2 <-1\}$ is open. Since the intersection of two open sets is open, $$\{(x_1, x_2, x_3, x_4, x_5)\in \R^5 \vert x_3^2e^{x_1+x_2^{100}}>2\} \cap \{(x_1, x_2, x_3, x_4, x_5)\in \R^5 \vert x_3x_4 - x_5^2<-1\}$$ is open, and we are done.
AI: The proof is correct. I would have given you the same proof! It is the intersection of two open sets and hence open.
|
H: A school-level quantitative aptitude and logic question
I need help with a question my younger brother was asked in his secondary-level aptitude exam.
[Question image, not reproduced here: $n$ spherical wax balls of diameter $d$ are stacked snugly, end to end, in a cylinder; the balls are melted, and the question asks what fraction of the cylinder the wax then fills.]
I thought the answer should be (B), but the given answer is (A).
I can't see why the fraction must be independent of both $d$ and $n$.
Can anyone please explain?
AI: Since the balls fill the cylinder snugly and from end to end, the cylinder must have an internal diameter of $d$ and height of $nd$, so its volume is
$$\pi\left(\frac{d}2\right)^2\cdot nd=\frac{\pi nd^3}4\;.$$
The total volume of the balls is
$$n\cdot\frac43\pi\left(\frac{d}2\right)^3=\frac{\pi nd^3}6\;.$$
The volume fraction occupied by the melted balls is therefore
$$\frac{\frac{\pi nd^3}6}{\frac{\pi nd^3}4}=\frac23\;,$$
which is independent of both $n$ and $d$.
One can arrive at this conclusion without actually doing the calculations by realizing that both the internal volume of the cylinder and the total volume of wax are proportional to $nd^3$, so the volume fraction of the melted wax is just the ratio of the proportionality constants.
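To double-check the arithmetic symbolically (my sketch, using SymPy), the ratio simplifies to $\frac{2}{3}$ with both $n$ and $d$ cancelling:
```python
# The melted wax fills balls/cylinder = 2/3 of the cylinder, for any n and d.
import sympy as sp

n, d = sp.symbols('n d', positive=True)
cylinder = sp.pi * (d / 2)**2 * (n * d)             # radius d/2, height n*d
balls = n * sp.Rational(4, 3) * sp.pi * (d / 2)**3  # n spheres of diameter d

print(sp.simplify(balls / cylinder))                # -> 2/3
```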
|
H: I somehow deduced that $\tan x=\iota$ for any real value of $x$ by equating the value of $\tan(\frac{\pi}{2}+x)$ obtained using two identities.
Let's assume that we're familiar with the identity : $\tan \Bigg (\dfrac{\pi}{2} + x \Bigg ) = -\cot x$ which we have derived using the unit circle.
I was trying to equate the values of $\tan \Bigg ( \dfrac{\pi}{2} + x \Bigg )$ obtained using the above mentioned identity and the compound angle identity and I got a weird result. Have a look :
$$\tan \Bigg ( \dfrac{\pi}{2} + x \Bigg ) = \dfrac{\tan\dfrac{\pi}{2} + \tan x}{1 - \tan \dfrac{\pi}{2} \tan x}$$
For the sake of simplicity, let us assume that $\tan \dfrac{\pi}{2} = a$ and $\tan x = b$.
$$ \therefore \tan \Bigg ( \dfrac{\pi}{2} + x \Bigg ) = \dfrac{a + b}{1 - ab} \implies -\cot x = \dfrac{a + b}{1 - ab}$$
Also,
$$-\cot x = \dfrac{-1}{\tan x} = \dfrac{-1}{b}$$
$$ {\color{red} {\therefore \dfrac{-1}{b} = \dfrac{a + b}{1 - ab} \implies -1 + ab = ab + b^2 \implies -1 = b^2}}$$
This leads us to :
$$\tan x = b = \sqrt{-1} = \iota$$
which is not true.
So, what went wrong here?
I think that the ${\color{red}{\text{highlighted part}}}$ was wrong because while cross-multiplying, I automatically made the assumption that $1 - ab$ has a real value which won't be the case if $\tan \Bigg ( \dfrac{\pi}{2} \Bigg )$ doesn't have a real value (which is actually the case as $\tan \Bigg ( \dfrac{\pi}{2} \Bigg ) = \dfrac{\sin \Bigg ( \dfrac{\pi}{2} \Bigg )}{\cos \Bigg ( \dfrac{\pi}{2} \Bigg )} = \dfrac{1}{0}$ which does not have a real value and approaches $\infty$)
Was this the mistake I made?
Thanks!
AI: This is a cool "paradox," hadn't seen it before!
Even before the red line, the identity
$$\tan \Bigg ( \dfrac{\pi}{2} + x \Bigg ) = \dfrac{\tan\dfrac{\pi}{2} + \tan x}{1 - \tan \dfrac{\pi}{2} \tan x}$$
is objectionable.
This isn't true; or, more accurately, it isn't even grammatically correct, since $\frac{\pi}{2}$ isn't in the domain of the tangent function.
Similarly, when you let $a = \tan\big(\frac{\pi}{2}\big)$ you are saying something grammatically incorrect, and so you can't expect to do formal algebraic manipulations with $a$ and receive something meaningful. It might be instructive to replace your identity with the identity
$$\tan \Bigg ( \dfrac{\pi}{2} + x \Bigg ) = \lim_{y \to \frac{\pi}{2}} \dfrac{\tan y + \tan x}{1 - \tan y \tan x}$$
(which is valid for any $x$ not an integer multiple of $\pi$), and see what happens.
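For the curious, here is where that leads (my own working of the suggested exercise, for $x$ that is also not an odd multiple of $\frac{\pi}{2}$, so that $\tan x$ is defined and nonzero): dividing the numerator and denominator by $\tan y$,
$$\lim_{y \to \frac{\pi}{2}} \dfrac{\tan y + \tan x}{1 - \tan y \tan x} = \lim_{y \to \frac{\pi}{2}} \dfrac{1 + \dfrac{\tan x}{\tan y}}{\dfrac{1}{\tan y} - \tan x} = \dfrac{1 + 0}{0 - \tan x} = -\cot x\;,$$
since $\frac{1}{\tan y} \to 0$ from either side of $\frac{\pi}{2}$. The unit-circle identity is recovered, and no $\sqrt{-1}$ ever appears.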
|
H: Proving that the statement "$f: X \to Y$ is continuous iff $x_n \to x \implies f(x_n) \to f(x)$" may be false if $X$ and $Y$ are not metric spaces.
Let $f\colon (X, \tau_1) \to (X, \tau_2)$ be a map. I want to show that even if $x_n \to x \implies f(x_n) \to f(x)$, $f$ may fail to be continuous.
I know that sequential continuity implies continuity when $(X, \tau_1)$ is first countable, so $\tau_1$ must not be first countable.
But I cannot find such an $f$ and $(X,\tau_1)$.
Any help is highly appreciated.
AI: Sketch: Let $X=\Bbb R$, and let $\tau_1$ be the co-countable topology: $U\subseteq X$ is open iff $U=\varnothing$, or $X\setminus U$ is countable. Let $\tau_2$ be the discrete topology, and let $f$ be the identity map. Prove that $\tau_1$ and $\tau_2$ have the same convergent sequences, but that $f$ is not continuous.
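A detail worth spelling out for the sequence claim (my addition to the sketch): if $x_n \to x$ in the co-countable topology $\tau_1$, then $U = X \setminus \{x_n : x_n \ne x\}$ is open (its complement is countable) and contains $x$, so $x_n \in U$ for all large $n$, which forces $x_n = x$ eventually. Hence the $\tau_1$-convergent sequences are exactly the eventually constant ones, and these are also exactly the sequences that converge in the discrete topology $\tau_2$. Yet $f$ is not continuous: the singleton $\{0\}$ is $\tau_2$-open, but its preimage $\{0\}$ is not $\tau_1$-open, since its complement is uncountable.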
|
H: Interpretation of the word Random
I have some previous knowledge of what a random experiment is, but sometimes I get confused by the use of the word "random".
I can express my doubts as the following questions: if something is random, then is it aleatory? If something has a well-defined distribution, as in a random experiment, does that mean it is not random? When someone says something is random, are they referring to a specific distribution, such as a uniform distribution? And isn't a frequency distribution itself a pattern?
for example in the following paragraph:
Extinction is a process that can depend on a variety of ecological, geographical, and physiological variables. These variables affect different species of organisms in different ways, and should, therefore, yield a random pattern of extinctions.
Does it say I should be able to describe extinction with a frequency distribution?
AI: Random means unpredictable. The distribution of the probabilities of the outcomes can be anything, unknown or known.
(If desired, the distribution can be guessed by reasoning from an explanatory model, or determined empirically from a histogram.)
For example, if you throw ten coins at a time, you cannot predict the number of tails. But you know that the distribution will be binomial.
|
H: Calculating radial derivative from Cartesian derivatives
For a radially symmetric function $f(x, y, z)$, is there a simple method to convert from $\frac{\partial f(x, y, z)}{\partial x}$, $\frac{\partial f(x, y, z)}{\partial y}$, $\frac{\partial f(x, y, z)}{\partial z}$ to $\frac{\partial f(x, y, z)}{\partial r}$?
Such a function could be a spherical Gaussian evaluated on a Cartesian grid, for which I want to calculate the radial derivative numerically.
AI: $$\begin{align}\frac{\partial f}{\partial r}&=\frac{\partial f}{\partial x}\frac{\partial x}{\partial r}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial r}+\frac{\partial f}{\partial z}\frac{\partial z}{\partial r}\\&=\frac{\partial f}{\partial x}\sin\theta\cos\phi+\frac{\partial f}{\partial y}\sin\theta\sin\phi+\frac{\partial f}{\partial z}\cos\theta\\&=(\nabla f)\cdot \hat r\end{align}$$
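A minimal numerical sketch of that formula (my example: a spherical Gaussian sampled on a Cartesian grid, with the exact radial derivative $-\frac{r}{\sigma^2}f$ for comparison):
```python
# Radial derivative from Cartesian partials: df/dr = (x·fx + y·fy + z·fz) / r.
import numpy as np

sigma, h = 1.0, 0.1
axis = np.arange(-3.0, 3.0 + h, h)
x, y, z = np.meshgrid(axis, axis, axis, indexing='ij')
r = np.sqrt(x**2 + y**2 + z**2)

f = np.exp(-r**2 / (2 * sigma**2))         # spherical Gaussian on the grid
fx, fy, fz = np.gradient(f, h)             # Cartesian partial derivatives

with np.errstate(invalid='ignore', divide='ignore'):
    dfdr = (x * fx + y * fy + z * fz) / r  # (grad f)·r_hat; undefined at r = 0

exact = -(r / sigma**2) * f
print(np.nanmax(np.abs(dfdr - exact)))     # small; set by the grid spacing h
```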
|
H: For any pair of sequences $(a_n),(b_n)$ with $a_n<b_n$ and $a_n \rightarrow \infty$, show $\lim_n \int_{a_n}^{b_n} f \, d\mu=0$
I wrote down a solution, but I'm not sure if it works:
Question: Let $\mu$ be a measure on $\mathscr{B}(\mathbb{R})$ such that $\mu(B)< \infty$ for all bounded $B$. Let $f \geq 0$ be $\mu$-integrable. Show that for any pair of sequences $(a_n),(b_n)$ with $a_n<b_n$ and $a_n \rightarrow \infty$ we have $\lim_n \int_{a_n}^{b_n} f d \mu=0$.
Attempt: Since $f$ is $\mu$-integrable and $\mu(B)< \infty$ for all bounded $B$, notice that for any $\varepsilon>0$ there is an $x_0\in \mathbb{R}$ such that $\int_{x \geq x_0} f(x) \, d \mu(x) < \varepsilon$. Thus for any $\varepsilon$, find such an $x_0$ and then an $N \in \mathbb{N}$ such that for all $n>N$ we have $a_n > x_0$. Then for any $n>N$ we have $\int_{a_n}^{b_n} f \, d\mu \leq \int_{x \geq x_0} f(x) \, d\mu(x)< \varepsilon$. Thus $\lim_n \int_{a_n}^{b_n} f \, d \mu=0$ as desired.
AI: If $f$ is integrable then $f(x)I[x\geq \alpha]\rightarrow 0$ pointwise as $\alpha\rightarrow\infty$, and the integrand is dominated by the integrable function $f$, so by the dominated convergence theorem
$$
\lim_{\alpha\rightarrow\infty}\int f(x)I[x\geq \alpha]\,d\mu =0.
$$
Now
$a_n\rightarrow\infty\Rightarrow b_n\rightarrow\infty$,
and consider
$$
\int f(x)I[a_n \leq x \leq b_n] \, d\mu(x)
=\int f(x)I[x\geq a_n]\,d\mu - \int f(x)I[x> b_n]\,d\mu.
$$
Then take the limit $n\rightarrow\infty$: both tail integrals go to $0$ by the first display, and you get the desired result.
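A concrete instance for intuition (my example, with $\mu$ the Lebesgue measure): take $f(x)=e^{-x}$ for $x \ge 0$ (and $0$ otherwise), which is integrable. Then
$$
\int_{a_n}^{b_n} e^{-x}\,dx = e^{-a_n}-e^{-b_n} \le e^{-a_n} \longrightarrow 0,
$$
exactly as the dominated convergence argument predicts.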
|