H: Simplifying the inequality
I have a simple question but couldn't get it at all.
How can I simplify this term?
$$\frac{x}{x^2-1} \ge \frac 1x$$
AI: Simplify $\frac{x}{x^2-1} \ge \frac 1x$.
\begin{align*}
\frac{x}{x^2-1} &\ge \frac 1x\\
\frac{x}{x^2-1} - \frac 1x &\ge 0\\
\frac{x^2-(x^2-1)}{x(x^2-1)} &\ge 0\\
\frac{1}{(x-1)x(x+1)} &\ge 0\\
\Rightarrow \boxed{x\in(-1,0)\cup(1,+\infty)}
\end{align*}
Note that $x\neq0,-1,+1$, owing to the domain of the initial expression to be simplified.
Might as well draw the graphs of $\frac{x}{x^2-1}$ and $\frac 1x$ on the same axes for a better feel!
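If you want to double-check the sign analysis numerically, here is a quick sanity check (a sketch assuming numpy is available); it samples a grid, skips the excluded points $x=0,\pm1$, and compares the inequality against the claimed solution set:

```python
import numpy as np

x = np.linspace(-3, 3, 20001)
x = x[(np.abs(x) > 1e-9) & (np.abs(np.abs(x) - 1) > 1e-9)]  # avoid x = 0, -1, +1

holds = x/(x**2 - 1) >= 1/x                    # the original inequality
claimed = ((-1 < x) & (x < 0)) | (x > 1)       # x in (-1,0) union (1,oo)
print(np.array_equal(holds, claimed))          # True at every sampled point
```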
|
H: In a metric space, compact implies sequentially compact
I'd like to know if this demonstration is correct.
Let $X$ be a metric space and $K \subseteq X$. Show that if $K$ is compact, then $K$ is sequentially compact.
$K$ is compact, therefore every open cover has a finite subcover. Consider then a sequence $\{x_n\}_{n \in \mathbb{N}} \subset K$ and suppose (to find a contradiction) that it has no convergent subsequence, i.e., no element of $K$ is an accumulation point for $\{x_n\}_{n \in \mathbb{N}}$.
This means that, for every $x \in K$ there exists an $\varepsilon_x > 0$ such that $B_{\varepsilon_x}(x)\cap \{x_n, n \in \mathbb{N}\}$ is finite, where $B_r(x)$ denotes the open ball with radius $r$ centered at $x$.
Note that every set $B_{\varepsilon_x}(x)$ is open and the union over all $x \in K$ obviously covers $K$.
Now, as $K$ is compact by hypothesis, there exists a finite set $K_0 \subset K$ such that
$$K = \bigcup_{x \in K_0}B_{\varepsilon_x}(x).$$
Now, observe that $$\{x_n,n \in \mathbb{N}\} = \{x_n,n \in \mathbb{N}\}\cap K = \{x_n,n \in \mathbb{N}\}\cap \left[\bigcup_{x \in K_0}B_{\varepsilon_x}(x)\right]$$
$$=\bigcup_{x \in K_0}\left[\{x_n,n\in \mathbb{N}\}\cap B_{\varepsilon_x}(x)\right].$$
But this last set is finite, as it is a finite union of finite sets. This is absurd as $\{x_n, n\in \mathbb{N}\}$ is infinite, therefore $\{x_n\}_{n \in \mathbb{N}}$ must have an accumulation point.
This shows that $K$ compact implies $K$ sequentially compact.
AI: It’s basically correct, but I would tighten up one point. The fact that the sequence has no accumulation point actually means that for each $x\in K$ there is an $\epsilon_x>0$ such that $\{n\in\Bbb N:x_n\in B_{\epsilon_x}(x)\}$ is finite; this is strictly stronger than the statement that $B_{\epsilon_x}(x)\cap\{x_n:n\in\Bbb N\}$ is finite, since $\{x_n:n\in\Bbb N\}$ itself might be finite. However, the fact that $\{n\in\Bbb N:x_n\in B_{\epsilon_{x_k}}(x_k)\}$ is finite for each $k\in\Bbb N$ actually ensures that $\{x_n:n\in\Bbb N\}$ is infinite. You really should show this in your argument, however, since you need it at the end, when you get your contradiction by noting that $\{x_n:n\in\Bbb N\}$ is infinite.
|
H: If $0 \in \sigma(N)$ and $A = NN^\ast$, then $0 \in \sigma(A)$.
I am going through a longer proof of a theorem which states the following as an intermediate result:
Let $N$ be a bounded normal operator on a Hilbert space $H$ with $0
\in \sigma(N)$. Then the self-adjoint operator $A = NN^\ast$ has $0
\in \sigma(A)$.
I was able to find a proof of this assertion using the approximate point spectrum $\Pi(N)$ of $N$.
For a normal operator $N$ one can show that $\Pi(N) = \sigma(N)$ holds. With this statement, the claim follows: Since $0 \in \sigma(N)$, there is a sequence $(x_n)$ with $\| x_n\| = 1$ such that $\| Nx_n \| \longrightarrow 0$, therefore, $A x_n \longrightarrow 0$ and $A$ has $0 \in \sigma(A)$.
However, the proof of $\Pi(N) = \sigma(N)$ is somewhat long and it seems to me that the above statement could be proven in a much shorter and more elementary manner.
Question: Is there a way to show $0 \in \sigma(A)$ more easily and more briefly, without using the arguments I cited above?
AI: If $0\notin \sigma(A)$ then it means the operator $NN^*$ is invertible. So there is a bounded operator $T$ such that $NN^*T=TNN^*=I$. In particular this tells us that $N(N^*T)=I$, which means $N$ is invertible from the right side. Also, since $N$ is normal we get $I=TNN^*=TN^*N$, which means $N$ is also invertible from the left side. And it is a standard result that if an element is invertible from both sides then the one sided inverses must be equal, which means that the element is invertible. So we got that $N$ is invertible, which contradicts $0\in\sigma(N)$.
|
H: Number of elements mapped to $f(a)$ where $f$ is a group homomorphism
Let $G$ and $G'$ be any two groups.
Let $f$ : $G$ $\rightarrow$ $G'$ be a group homomorphism.
Let $K$ denotes the kernel of $f$.
Let $a$ be any arbitrary element of $G$.
I need to show that there are in total $m$ elements mapped to $f(a)$, where $m$ is the order of $K$.
We know that $aK = \{ak : k \in K\}$.
Consider $f(ak) = f(a)f(k) = f(a)$ (as $f$ is a homomorphism and $k \in K$).
Using the fact that $O(aK) = O(K)$, we get that there are at least $m$ elements mapped to $f(a)$.
Now how can I show that there are no other elements mapped to $f(a)$?
AI: Let $x$ be any element of $G$ for which $f(x) = f(a)$. Then we obtain
$$
\big( f(a) \big)^{ - 1 } f(x) = e^\prime,
$$
where $e^\prime$ is the identity element of $G^\prime$, that is,
$$
f \left( a^{-1} \right) f(x) = e^\prime,
$$
or
$$
f \left( a^{-1} x \right) = e^\prime,
$$
and so $a^{-1} x \in K$, which shows that
$$
x = a \left( a^{-1} x \right) \in aK.
$$
Hope this helps.
|
H: If we pick a sequence of numbers $(a_k)$ at random, what is the expected radius of convergence of $\sum_k a_k x^k$?
Suppose we pick a sequence of positive integers independently and identically distributed from $\mathbb{N}^+$: call it $(a_k)=(a_0,a_1,a_2,a_3,\ldots)$. If we consider the corresponding generating function $f(x) = \sum_k a_k x^k$, what can we say about the radius of convergence $R$ of $f$? The Cauchy-Hadamard theorem says $R^{-1}= \limsup_{k\to\infty} \sqrt[k]{|a_k|}$, but I'm wondering if we can say any more from a probabilistic standpoint.
Here are my thoughts (mostly low-hanging fruit) on the problem if the $a_k$ are positive integers.
If $(a_k)$ is bounded, we have $R=1$; this follows immediately by comparison to the geometric series. I don't think "most" positive integer sequences are bounded; in fact, I suspect they are of measure zero in the set of all such sequences.
If $a_k = O(k^r)$ for any real $r$, $R=1$ as well. Likewise, if $a_k = O(M^k)$, $R=M^{-1}$; by the integer stipulation, we must have $M\geq 1$.
If the $a_k$ are positive integers, I don't think we can do better than $R=1$.
I thought about how to generalize the problem if we allow the $a_k$ to be real numbers; I haven't thought about the complex case. Here are my thoughts, again somewhat rudimentary.
If the $a_k$ are eventually zero, obviously $R=\infty$.
We can now have $a_k = O(M^k)$ for any $M>0$ (for instance, the Maclaurin series for tangent gives $R=\pi/2$)
Analyzing convergence at the boundary is probably a lost cause
Please feel free to ask for clarification or to change the tags if you think they could be improved.
Update: instead of, "pick a sequence of positive integers independently and identically distributed from $\mathbb{N}^+$," perhaps I should specify a distribution. After looking at several common models, I think a Boltzmann or logarithmic distribution might be best, but I'm not sure. I realize this is an important aspect of the problem and I'm sorry I don't have a better idea of what to ask.
AI: Let $\mu:=E[a_0]$. Define $Y_i:=a_ix^i$, so that $\mu_i:=E[Y_i]=\mu x^i$ and $\mbox{Var}(Y_i)=x^{2i}\mbox{Var}(a_0)$. The Kolmogorov 2-series theorem states that $\sum_i Y_i$ converges almost surely (is finite, in fact) if $\sum_i\mu_i$ and $\sum_i \mbox{Var}(Y_i)$ both converge.
This reduces to $\sum_{i\geq 0}x^i$ converging (so $R=1$) and $\sum_i x^{2i}$ converging (also $R=1$).
There's a possibility that $R>1$ is possible, but for that you'd need the 3-series theorem and likely more subtle info on the nature of the distribution of the $a_i$. You'd also need the 3-series theorem if the sum of expected values or variances diverges to establish convergence.
|
H: Number of points of discontinuity of $1/\log|x|$
I was solving a few questions from limits continuity and discontinuity when I came across a question asking for the number of points of discontinuity of $f(x)=1/\log|x|$.
I could easily observe that at $x=±1$, the limits tend to different infinities so the function was discontinuous at these 2 points.
However on checking the answer key, it said that there were 3 points of discontinuity which included $x=0$.
However, I believed that continuity is checked by finding the functional value at a point only for points within the domain; otherwise, the limits toward the point are checked (if they exist in the domain too), and $x=0$ was definitely outside it. Also, the limits on either side of $0$ tend to $0$. So it should have been continuous.
I also found this solution on several sites like Quora however everyone said there were 3 points including $0$ calling it a removable discontinuity.
Please correct my understanding if faulty.
AI: This function is continuous everywhere in its domain of definition.
Its domain of definition is $\mathbb R\setminus \{-1,0,1\}$. That's because the expression $1/\log(|x|)$ only makes sense on that set, since the expressions $1/\log(|-1|)$, $1/\log(|0|)$, and $1/\log(|1|)$ are meaningless. This does not make these points of discontinuity, it simply means they are not in the domain of the function -- a completely separate concept.
This is not to say, however, that the function doesn't have a continuous extension to include $\{0\}$ in the domain, but that's not the same question.
|
H: A questions about the Big $\mathcal{O}$ -notation
In my Analysis textbook, the author writes $f(x)=\mathcal{O}(g(x))$
But in a video the person said $f(x)\in\mathcal{O}(g(x))$ is the correct interpretation, and even said that the other notation doesn't make any sense.
Is one considered better? Or is one really wrong? Does $f(x)=\mathcal{O}(g(x))$ still imply that $f(x)$ is an element of a given set?
AI: I would say $f(x) \in \mathcal O(g(x))$ is technically more correct, but $f(x) = \mathcal O(g(x))$ is used a lot in literature. The problem with the notation is that the = sign is not symmetric here, that is, $f(x) = \mathcal O(g(x))$ does not mean that $\mathcal O(g(x)) = f(x)$; the latter does not even make sense.
|
H: Infinite sums involving factorials
My question is
$$\sum_{n=1}^{\infty}\frac{2}{n!}= 2e$$
and $$\sum_{n=1}^{\infty}\frac{n^2}{n!}= 2e$$
But each term in the series
$$\sum_{n=1}^{\infty}\frac{n^2}{n!}$$
except the first one is greater than each term in
$$\sum_{n=1}^{\infty}\frac{2}{n!}$$
So why isn't that
$$\sum_{n=1}^{\infty}\frac{n^2}{n!} > \sum_{n=1}^{\infty}\frac{2}{n!}$$
AI: Please do note the comments noting that your first summation should start at zero. Anyway, you've answered your own question: it so happens that the fact that the first series has a larger first term makes up for the fact that every term in the second series except the first is larger than the respective term in the first series.
This fact should not be too difficult to swallow because the first few terms of both series are the ones that dominate the series, so you can imagine that the infinitely many terms that the second series is larger in each make minute contributions to even out the disparity due to the larger first few terms in the first series.
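As a quick numerical illustration (a sketch assuming only Python's standard library), both sums do agree with $2e$ once the first one starts at $n=0$:

```python
from math import factorial, e

s1 = sum(2/factorial(n) for n in range(0, 40))      # note the n = 0 start
s2 = sum(n**2/factorial(n) for n in range(1, 40))
print(s1, s2, 2*e)                                  # all three are ~5.43656...
```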
|
H: Integral of $\int\limits_0^{2\pi } {{a^{\frac{{b\cos (x - c)}}{d}}}dx} $?
I am trying to find the integral of,
$\int\limits_0^{2\pi } {{a^{\frac{{b\cos (x - c)}}{d}}}dx} $
Where, $a,b,c,d \in R$.
I am trying to compute the definite integral in Wolfram Alpha online, but it does not give me any result. Does that mean the integral does not exist, or is wrong? I could not work it out. Thank you.
AI: your integral:
$$\mathcal{I}=\int_0^{2\pi}a^{\frac{b\cos(x-c)}{d}}dx=\int_0^{2\pi}e^{\alpha\cos(x-c)}dx=\int_c^{2\pi+c}e^{\alpha\cos x}dx$$
where $\alpha=\frac{b\ln(a)}{d}$
we can try and split this integral up now, since $\cos x$ has a period of $2\pi$ we can write this as:
$$\mathcal{I}=-\int_0^cf(x)dx+\int_0^{2\pi}f(x)dx+\int_0^cf(x)dx$$
$$=\int_0^{2\pi}f(x)dx=\int_0^{2\pi}e^{\alpha\cos x}dx=2\int_0^\pi e^{\alpha\cos x}dx$$
which is a standard integral giving the modified Bessel function of the first kind, and so your answer is:
$$\mathcal{I}=2\pi I_0\left(\frac{b\ln a}{d}\right)$$
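For what it's worth, here is a numerical check of this closed form (a sketch assuming scipy is available; the values of $a,b,c,d$ below are arbitrary, with $a>0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0   # modified Bessel function I_0

a, b, c, d = 2.0, 1.3, 0.7, 2.0
val, _ = quad(lambda x: a**(b*np.cos(x - c)/d), 0, 2*np.pi)
print(val, 2*np.pi*i0(b*np.log(a)/d))   # the two numbers agree
```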
|
H: Find the Taylor series for $f(z)=e^z$ about $z_0=1+i$.
Find the Taylor series for $f(z)=e^z$ about $z_0=1+i$.
I know that I want to use the Maclaurin series for $e^z$, which goes $1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots$, but this is centered around $z_0=0$. How would I go about changing this for $z_0=1+i$?
AI: Since $e^z = e^{z_0} \cdot e^{z-z_0}$, we have
$$
e^z = e^{z_0} \cdot \sum_{k=0}^{\infty} \frac{(z-z_0)^k}{k!} =
e^{1+i}\cdot\sum_{k=0}^{\infty} \frac{(z-1-i)^k}{k!}.
$$
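A quick numerical sanity check of this expansion (a sketch assuming only Python's standard library; the test point $z$ is arbitrary):

```python
import cmath, math

z0 = 1 + 1j
z = 0.3 - 0.7j
partial = sum(cmath.exp(z0) * (z - z0)**k / math.factorial(k) for k in range(30))
print(abs(partial - cmath.exp(z)))   # ~1e-16: the series reproduces e^z
```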
|
H: Better quantifying $\exists B \subseteq \mathbb{R}^{n+k} \ \ \exists f:B \to \mathbb{R}^k \ \ \forall x \in B: a \in B \ \text{and} \ F(x, f(x)) = 0$
How could I express this result better, quantificationally? For a pre-specified $a$,
$\exists B \subseteq \mathbb{R}^{n+k} \ \ \exists f:B \to \mathbb{R}^k \ \ \forall x \in B: a \in B \ \text{and} \ F(x, f(x)) = 0$
I just dislike how we are 'picking' an $x$ from $B$ before saying that $a \in B$.
AI: The following equivalences
$$\forall x \in X(\phi \land \psi) \equiv \phi \land (\forall x \in X\,\psi)$$
$$\exists x \in X(\phi \land \psi) \equiv \phi \land (\exists x \in X\,\psi)$$
are valid providing $x$ does not occur free in $\phi$. In your example, you can use these to move the subformula $a \in B$ to just under the quantification over $B$:
$$\exists B \subseteq \mathbb{R}^{n+k} (a \in B \land (\exists f \in B \to \mathbb{R}^k \ \ \forall x \in B \ F(x, f(x)) = 0)).$$
Note that this is equivalent to what you had, but I agree with you that it reads better.
[Aside: I've converted your mixture of uses of $\in$ and $:$ into a more common convention. Feel free to convert back to the conventions of your course or book.]
|
H: Validity of Proof of Sum of First $n$ Natural Numbers
Background
Lately I've been self-studying Tom M. Apostol's Vol. 1 Calculus to make my understanding of the subject more rigorous after taking the actual class. I came across a proof for what the sum of the squares of the first $n$ natural numbers was - $$\sum_{i=1}^n i^2$$ and how it was equal to
$$\frac {n^3}{3} + \frac{n^2}{2} + \frac{n}{6}\,.$$
In short, this involved the notion of a telescoping series, but I'd rather not go too deeply into the specifics. Rather, my interest turned to that of the sum of the first $n$ natural numbers, which was involved in the aforementioned demonstration.
Motivation
I'm sure plenty of you all have seen at least one proof of the following equality -
$$\sum_{i=1}^n i = 1 + 2 + \cdots + (n - 1) + n = \frac{n(n+1)}{2}\tag{1}\label{1}\\$$
Examples of these include a visual proof, proof by induction, etc. The purpose of this post is to explain my proof, whether it is valid, and how I could improve it if so. (Feel free to also critique my notation, I'd appreciate it.)
If done correctly, we should be able to find what the sum is equal to without making what I believe to be huge intuitive leaps.
Proof
Starting with \eqref{1}, we set the sum equal to a new variable $k$.
$$1 + 2 + \cdots + (n - 1) + n = k\tag{2}\label{2}\\$$
From here, one can realize that in some sense all the terms leading up to $n$ are fairly "close" in value to it; in other words, they can be put into the form $(n-a)$. As a result, \eqref{2} becomes -
$$(n - (n - 1)) + (n - (n - 2)) + \cdots + (n - 2) + (n - 1) + n$$
Rearranging the many $n$ together as well as the remaining terms, we get -
$$\underbrace{n + n + \cdots + n}_{n\text{ times}} + [-1 - 2 - \cdots - (n - 2) - (n -1)]$$
$n$ added unto itself $n$ times is the definition of $n^2$. This and factoring out the negative from the remaining sum gives -
$$n^2 - [1 + 2 + \cdots + (n - 2) + (n -1)]$$
Recall that this is still equal to the original sum, $k$. Notice as well that the remaining sum is equal to $k$ minus the $n$ term. Our equation transforms into -
$$n^2 - (k - n) = k \tag{3}\label{3}\\$$
Solving for $k$ (the original sum) results in -
$$k = \frac{n^2 + n}{2} = \frac{n(n + 1)}{2} \tag{4}\label{4}\\$$
Hence proving \eqref{1}.
Is this logically sound, or did I mess up somewhere?
AI: This is correct but can be made simpler:
$$S=1+2+3+\cdots+(n-1)+n$$
and
$$S=n+(n-1)+(n-2)+\cdots+2+1,$$
so that by addition
$$2S=(n+1)+(n+1)+(n+1)+\cdots+(n+1)+(n+1)=n(n+1).$$
|
H: Transformation of PD matrix with rank deficient other matrix
This should be easy for you.
I have an intuitive feeling for why the following should be correct, but I would like something more rigorous than my feeling :)
Consider the $m\times m$-matrix $B$, which is symmetric and positive definite (full rank).
Now this matrix is transformed using another matrix, say $A$, in the following manner:
$A B A^T$. The matrix $A$ is $n\times m$ with $n<m$. Furthermore the constraint $rank(A) < n$ is imposed.
My intuition tells me that $A B A^T$ must be symmetric and positive semi-definite, but what is the mathematical proof for this?
(why exactly does the transformation preserve symmetry and why is it that possibly negative eigenvalues in $A$ still result in the transformation to be PSD? Or is my intuition wrong)?
Edit: please exclude the case of A=0.
AI: For symmetry: note that in general, we have $(AB)^T = B^TA^T$, hence $(ABC)^T = C^TB^TA^T$. With that, we see that
$$
(ABA^T)^T = A^{TT}B^TA^T = ABA^T.
$$
For positive semidefiniteness: an $n \times n$ symmetric matrix $M$ is positive semidefinite if (and only if) $x^TMx \geq 0$ whenever $x \in \Bbb R^n$. We note that
$$
x^T(ABA^T)x = (x^TA)B(A^Tx) = (A^Tx)^T B (A^Tx).
$$
Because $B$ is positive definite (and hence positive semidefinite), we must have $y^TBy \geq 0$ for $y = A^Tx$. Thus, $x^T(ABA^T)x \geq 0$, so that $ABA^T$ is indeed positive semidefinite.
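If it helps to see the argument in action, here is a small numerical check (a sketch assuming numpy; the sizes and the rank-deficiency trick are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
M = rng.standard_normal((m, m))
B = M @ M.T + np.eye(m)        # symmetric positive definite
A = rng.standard_normal((n, m))
A[-1] = A[0]                   # force rank(A) < n, as in the question
S = A @ B @ A.T

print(np.allclose(S, S.T))                      # symmetric
print(np.linalg.eigvalsh(S).min() >= -1e-10)    # eigenvalues >= 0 (here S is even singular)
```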
|
H: Can we extend the monoid $(\mathcal P(A),\cup,\emptyset)$ to a group?
Natural numbers are famously a way to build up "the rest" of the numbers: integers as pairs of natural numbers modulo the correct equivalence relation, and similarly for rational numbers, etc.
The powerset of some set $A$ has a similar structure to the natural numbers, in that $(\mathbb{N}, +, 0)$ is a monoid, and so is $(\mathcal{P}(A), \cup, \emptyset)$, but neither has a subtraction that is always nicely behaved. However, one could imagine extending $\mathcal{P}(A)$ in an analogous way to how we obtain the integers, by defining an equivalence relation
$$(X, Y) \sim (Z, W) \iff X \cup W = Z \cup Y$$
Essentially making $(X, Y)$ into $X \setminus Y$, but without losing information in the case that $Y \subsetneq X$, just as we can represent an integer $a - b$ as $(a,b)$ without losing information when $b > a$ (assuming we use saturating subtraction for natural numbers).
At any rate, my question is, does this structure have a name, and if so, does the study of it lead anywhere interesting?
AI: Contra your claim, the relation $\sim$ is not an equivalence relation: consider $\alpha=(\{1,2\},\{1,2\})$, let $\beta=(\{1,2\},\{1\})$ and let $\gamma=(\{1\},\{1,2\})$. Then we have $\alpha\sim \beta$ and $\alpha\sim\gamma$ but $\beta\not\sim\gamma$.
The issue is that whenever $Z\cup W\subseteq X=Y$ we have $(X,Y)\sim (Z,W)$ for silly reasons: $X\cup W=X$, $Z\cup Y=Y$, and $X=Y$.
In fact, more generally we have $(\mathbb{N},\mathbb{N})\sim(X,Y)$ for every $X,Y\subseteq\mathbb{N}$. So if we try to fix things by looking at the transitive closure of $\sim$ instead, everything trivializes.
EDIT: We can fix this specific problem by restricting attention to the set $Disj_2(\mathbb{N})$ of disjoint pairs of sets of natural numbers. However, this causes two new issues.
First, there's no point in bringing an equivalence relation into the picture anymore: if $A\cup Y=B\cup X$ and $A\cap B=X\cap Y=\emptyset$ then $A=X$ and $B=Y$. For example, we have $A\subseteq A\cup Y=B\cup X$ so $A\subseteq X$ since $A\cap B=\emptyset$, and similarly $X\subseteq A$; and by symmetry, we also have $B\subseteq Y$ and $Y\subseteq B$.
More importantly, we now need to be careful about our arithmetic operations: "coordinatewise union" is no longer defined on $Disj_2(\mathbb{N})$ since it doesn't preserve disjointness! Instead, the best addition analogue seems to be $$(A,B)\oplus (X,Y)=((A\setminus Y)\cup (X\setminus B), (Y\setminus A)\cup(B\setminus X)).$$ This unfortunately isn't too well-behaved: while it is commutative and has identity and inverses, it is not associative. This is because it is idempotent: $(A,B)\oplus (A,B)=(A,B)$.
There is a natural way to think of the structure $(Disj_2(\mathbb{N}), \oplus)$, however. Intuitively, we start with the set $M$ of all multisets of natural numbers where multiplicities are allowed to be arbitrary integers; this is a group under "multiset union" $\underline{\cup}$, and really is just a messy way of describing the product group $\prod_\mathbb{N}\mathbb{Z}$. We think of $(A,B)\in Disj_2(\mathbb{N})$ as representing the multiset containing one of each $a\in A$, negative one of each $b\in B$, and zero of every other number. Then $(Disj_2(\mathbb{N}), \oplus)$ can be gotten from the group $(M,\underline{\cup})$ by "truncating" each multiset to allow only the multiplicities $-1,0$, and $1$, with $\oplus$ being the analogous "truncation" of $\underline{\cup}$.
More abstractly, given any group (or indeed magma) $(G,*)$ and any function $\mathfrak{F}:G\rightarrow G$, we can build a new structure $(G_\mathfrak{F},*_\mathfrak{F})$ defined by setting $$G_\mathfrak{F}=ran(\mathfrak{F}),\quad g*_\mathfrak{F}h=\mathfrak{F}(g*h).$$ Unfortunately, this isn't a very nice construction in general, as we've seen in the case above; indeed, I don't think it has a name at all.
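For concreteness, here is a tiny computational check of the claims about $\oplus$ (a sketch in Python, using finite sets of small integers in place of subsets of $\mathbb{N}$):

```python
def oplus(p, q):
    (A, B), (X, Y) = p, q
    return ((A - Y) | (X - B), (Y - A) | (B - X))

one  = (frozenset({1}), frozenset())
neg  = (frozenset(), frozenset({1}))
zero = (frozenset(), frozenset())

print(oplus(one, one) == one)    # True: idempotent
print(oplus(one, neg) == zero)   # True: inverses exist
print(oplus(oplus(one, one), neg) == oplus(one, oplus(one, neg)))  # False: not associative
```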
|
H: Orthogonal Complement of quadratic functions in L²([-1,1])
How can we characterize $V^{\perp}$ where $ V= \{v \in L^2([-1,1]): v(x)=ax+bx^2,b\neq 0\}$ ?
I've tried looking for $ \{f \in L^2([-1,1]): \int_{[-1,1]}{fv\ d\lambda}=0, \ v(x)=ax+bx^2 \}$ but I cannot figure out an explicit characterization of this space.
AI: Keep in mind that if $f \in V^\perp$ then for all $a,b\in \mathbb{R}$ and $b\not= 0 $ we must have that $$\int_{-1}^{1}f(x)(ax+bx^2)dx = 0$$
which is equivalent to $$ a\int_{-1}^1 xf(x) dx = -b\int_{-1}^1 x^2f(x) dx$$
We choose $a = 0$ to see that since $b\not=0$ we must have that $\int_{-1}^1 x^2f(x) dx = 0$. Hence for all $a\in\mathbb{R}$ we also have that $a\int_{-1}^1xf(x) dx = 0$ which can only happen if $\int_{-1}^1 xf(x)dx = 0$ as well.
Let us dig a little further and decompose $f\in V^\perp$ into its even $(f_e)$ and odd $(f_o)$ components and use the symmetry of $[-1,1]$ to see if we learn anything else. Well the product of even functions is even and similarly for the product of odd functions. Further an even function times an odd function is odd. Since the integral of an odd function over a symmetric domain is $0$, then
$$\int_{-1}^1 xf(x)dx = \int_{-1}^1 x f_e(x)dx +\int_{-1}^1 x f_o(x)dx = 0+\int_{-1}^1 x f_o(x)dx = 0$$
$$\int_{-1}^1 x^2f(x)dx = \int_{-1}^1 x^2 f_e(x)dx +\int_{-1}^1 x^2 f_o(x)dx = \int_{-1}^1 x^2 f_e(x)dx +0 = 0$$
Hence if we take any even function on $[-1,1]$ that is orthogonal to $x^2$ on the same domain, and any odd function on $[-1,1]$ that is orthogonal to $x$ on the same domain, then their sum is in $V^\perp$. This completely defines the space.
Note that $V^\perp$ is therefore quite large: it is the orthogonal complement of the two-dimensional span of $x$ and $x^2$, so it has codimension $2$ in $L^2([-1,1])$. For instance, the odd function $x^3-\tfrac{3}{5}x$ is orthogonal to $x$ on $[-1,1]$ and the even function $x^4-\tfrac{3}{7}$ is orthogonal to $x^2$, so both lie in $V^\perp$.
|
H: Finding a domain where an integral of a function is '0'
Motivated by reading Mr. Fourier's tricks, I wanted to come up with some of my own to solve some problems of mine.
Consider
$$ P(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n x^0.$$
Now, I multiply both sides by $x^k$, with $k \leq n$,
and get
$$ x^k P(x) = a_0 x^{n+k} + \cdots + a_n x^{0+k}.$$
Now I integrate both sides over an interval,
$$ \int_{a}^{b} x^k P(x)\,dx = \int_{a}^{b} a_0 x^{n+k} + \cdots + a_n x^{0+k}\,dx.$$
How do I choose the interval such that the integral on the right side is $0$ for all terms of the form $x^{a+k}$ except where $a=k$?
It would be even better if there were some other function I could use for replicating Fourier tricks for polynomial functions.
AI: You can't do what you want by playing with intervals, but for "some other function" you probably want Legendre polynomials.
And if you like what Fourier expansions do, you might want to go research the terms "inner product space", "Hilbert space", and Orthogonal polynomials.
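To see why Legendre polynomials are the natural replacement here, note that they are exactly orthogonal on $[-1,1]$; a quick numerical check (a sketch assuming numpy) using Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

x, w = leggauss(20)          # quadrature nodes/weights, exact for these low degrees
P2, P3 = Legendre.basis(2), Legendre.basis(3)

print(np.dot(w, P2(x) * P3(x)))   # ~0: distinct Legendre polynomials are orthogonal
print(np.dot(w, P2(x) * P2(x)))   # 2/(2*2+1) = 0.4: the "norm squared" of P_2
```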
|
H: Differentiate $\left(x^6-2x^2\right) \ln\left(x\right) \sin\left(x\right)$
Differentiate
$$\left(x^6-2x^2\right)\ln\left(x\right)\sin\left(x\right)$$
with respect to $x$
My work so far
$\tfrac{\mathrm{d}}{\mathrm{d}x}\left[\left(x^6-2x^2\right)\ln\left(x\right)\sin\left(x\right)\right]$
$=\tfrac{\mathrm{d}}{\mathrm{d}x}\left[x^6-2x^2\right]\cdot\ln\left(x\right)\sin\left(x\right)+\left(x^6-2x^2\right)\cdot\tfrac{\mathrm{d}}{\mathrm{d}x}\left[\ln\left(x\right)\right]\cdot\sin\left(x\right)+\left(x^6-2x^2\right)\ln\left(x\right)\cdot\tfrac{\mathrm{d}}{\mathrm{d}x}\left[\sin\left(x\right)\right]$
$=\left(\tfrac{\mathrm{d}}{\mathrm{d}x}\left[x^6\right]-2\cdot\tfrac{\mathrm{d}}{\mathrm{d}x}\left[x^2\right]\right)\ln\left(x\right)\sin\left(x\right)+\left(x^6-2x^2\right)\cdot\dfrac{1}{x}\sin\left(x\right)+\left(x^6-2x^2\right)\ln\left(x\right)\cos\left(x\right)$
$=\left(6x^5-2\cdot2x\right)\ln\left(x\right)\sin\left(x\right)+\dfrac{\left(x^6-2x^2\right)\sin\left(x\right)}{x}+\left(x^6-2x^2\right)\ln\left(x\right)\cos\left(x\right)$
$=\left(6x^5-4x\right)\ln\left(x\right)\sin\left(x\right)+\dfrac{\left(x^6-2x^2\right)\sin\left(x\right)}{x}+\left(x^6-2x^2\right)\cos\left(x\right)\ln\left(x\right)$
$=x\left(\left(\left(6x^4-4\right)\ln\left(x\right)+x^4-2\right)\sin\left(x\right)+\left(x^5-2x\right)\cos\left(x\right)\ln\left(x\right)\right)$
I've had great difficulty in solving this. Was my method correct?
Also, would there be any shortcuts in solving this, or is this method the best way of getting the solution?
AI: Let $f(x)=\left(x^6-2x^2\right)\ln x\sin x$. Then,
$$\ln f(x)=\ln(x^6-2x^2) +\ln (\ln x ) + \ln(\sin x)
$$
and
$$f’(x)=f(x)\left(\frac{6x^5-4x}{x^6-2x^2}+\frac{1}{x\ln x}+ \cot x\right)
$$
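If you want to double-check your product-rule expansion against the derivative, a quick symbolic check (a sketch assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = (x**6 - 2*x**2) * sp.log(x) * sp.sin(x)
expanded = ((6*x**5 - 4*x)*sp.log(x)*sp.sin(x)
            + (x**6 - 2*x**2)*sp.sin(x)/x
            + (x**6 - 2*x**2)*sp.log(x)*sp.cos(x))
print(sp.simplify(sp.diff(f, x) - expanded))   # 0, so the expansion is correct
```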
|
H: Proof of non diagonalizibility of higher matrix power
So the proof is to show that if a regular matrix $A$ is not diagonalizable in $M_n(\Bbb C)$, then no power $A^k$, $k \in \Bbb N$, is diagonalizable either. I started with a proof by contradiction: suppose $A^k$ is diagonalizable; then there is a minimal polynomial such that $p(A^k)=0$, and we can write that polynomial as $p=(x-a_1)\cdots(x-a_n)$. But then I got stumped on how taking $k$-th roots will lead to a solution, so when I checked the answer it said to take the $k$-th roots of all the $a_i$'s, and we get a new polynomial $q=(x-a_{11})(x-a_{12})\cdots(x-a_{1k})\cdots(x-a_{nk})$, where $a_{11},\dots,a_{1k}$ are the $k$-th roots of $a_1$, and for this polynomial $q(A)=0$. I got to this myself, but I dismissed it as the correct way to a proof.
Why can we factor $p$ into $q$ like this? If I multiply it out, I don't see how it turns back into $p$.
What guarantees that $A$ is a root of $q$? I don't get how we can't have a remainder when we multiply everything out; e.g. I know we will get $x^k -a_1$ because I think $a_{11}\cdots a_{1k}=a_1$, but what will happen to the rest of it?
Please give me a detailed explanation, or at least a link to where I can read in detail about factoring or the other things I have a knowledge gap in.
Edit: I forgot to state that the matrix $A$ is regular; fixed it.
AI: On the contrary, we will assume that $A^k$ is diagonalizable for some integer $k \geq 2.$ We have therefore that the minimal polynomial $p(x)$ of $A^k$ can be written as a product of distinct linear factors, i.e., $p(x) = (x - c_1) \cdots (x - c_n)$ with $c_i \neq c_j$ for all pairs of integers $i \neq j.$ Using the fact that $p(A^k) = 0$ gives that $0 = p(A^k) = (A^k - c_1 I) \cdots (A^k - c_n I)$ so that $A$ satisfies the polynomial $q(x) = (x^k - c_1) \cdots (x^k - c_n).$ Considering that the underlying field is $\mathbb C,$ each of the factors $x^k - c_i$ splits into distinct linear factors, hence the polynomial $q(x)$ splits into distinct linear factors. But the minimal polynomial of $A$ must divide $q(x),$ hence the minimal polynomial of $A$ splits into distinct linear factors, i.e., $A$ is diagonalizable --- a contradiction. QED.
|
H: Let $G$ be a group with $33$ elements acting on a set with $38$ elements. Prove that the stabilizer of some element $x$ in $X$ is all of $G$.
I'm trying to figure out this old qualifying exam question:
Let $G$ be a group with 33 elements acting on a set with 38 elements. Prove that the stabilizer of some element $x \in X$ is all of $G$.
I think I'm supposed to use the orbit-stabilizer theorem to prove that the orbit of any $x\in X$ must be trivial, i.e. $orb_G(x)=\{x\}$. This is what I know:
$|G|$ and $|X|$ are relatively prime.
Since $|orb_G(x)| $ divides $|G|$ we must have that $|orb_G(x)|=1, 3, 11 $ or 33.
The orbit of each $x\in X$ partitions $X$.
If $|orb_G(x)|=1$ then by the orbit-stabilizer theorem: $|G|=|orb_G(x)||stab_G(x)| \implies |stab_G(x)|=33$.
I just don't see how to put this together in the right way. I wondered if $|orb_G(x)| $ necessarily needs to divide $|X|,$ but I didn't find anything to support that.
AI: $$\langle(1,2,3,4,5,6,7,8,9,10,11)(12,13,14)(15,16,17)\\(18,19,20)(21,22,23)(24,25,26)(27,28,29)(30,31,32)\\(33,34,35)(36,37,38)\rangle$$ is a counterexample: the generating permutation has order $\operatorname{lcm}(11,3)=33$, so the cyclic group it generates has $33$ elements, and its orbits on $\{1,\dots,38\}$ have sizes $11$ and $3$ (nine times), which sum to $38$. No orbit has size $1$, so no point has stabilizer equal to all of $G$, and the statement is false as stated.
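One can verify the counterexample mechanically (a sketch assuming sympy; note the points are $0$-indexed here, unlike the $1$-indexed cycles above):

```python
from sympy.combinatorics import Permutation

cycles = [list(range(11))] + [[11 + 3*i, 12 + 3*i, 13 + 3*i] for i in range(9)]
g = Permutation(cycles, size=38)

print(g.order())                                      # 33, so <g> has 33 elements
print(any(g.array_form[i] == i for i in range(38)))   # False: no point is fixed by g,
                                                      # so no stabilizer is all of G
```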
|
H: Probability that piecewise continuous $X(\omega) \ \in A, \ A \in \mathcal{B}$
I solved this problem, but since my understanding of Borel sets and $X^{-1} \in \mathcal{B}$ is still not polished, I decided to ask it.
On a probability triple with Lebesgue measure on $[0,1]$, a random variable is defined such that
$$
X(\omega) = \left\{
\begin{array}{ll}
1 & \omega \in [0,\frac{1}{4})\\
2\omega^2 & \omega \in [\frac{1}{4}, \frac{3}{4})\\
\omega^2 & \omega \in [\frac{3}{4},1]
\end{array}
\right.
$$
If $A=[0,1]$, what is $P(X \in A)$? So I split $A$ into $4$ disjoint subsets, $A_1, \ldots, A_4$:
$$
A=[0,\frac{1}{4}),[\frac{1}{4},\frac{3}{4}), [\frac{3}{4},1), \{1\}
$$
when $A=1, X$ is a simple random variable, so
$$
A_4 = \{\omega:X(\omega)=1\} \rightarrow P(X \in A_4) = \frac{1}{4}-0
$$
For $A_1$, I think $P(X \in A_1)=0$, because $\{ \omega: 0 < X(\omega)<\frac{1}{4}\}$ is not defined.
For $A_3, P(X \in [\frac{1}{4}, \frac{3}{4})) = \frac{\sqrt{3}-1}{\sqrt{8}}$
Lastly, $P(X \in [\frac{3}{4}, 1)) = 1-\frac{\sqrt{3}}{2}$
Putting it all together, $P(X \in A) = \frac{1}{4} + \frac{\sqrt{3}-1}{\sqrt{8}} + 1-\frac{\sqrt{3}}{2} \approx 0.64$
If that's correct, what would be the probability $P(X \in A^c) = P(X>1)$? From the definition of $X(\omega)$, it should be
$$
A^c = \{\omega:1 < X(\omega)\leq \frac{9}{8}\} \Leftrightarrow X^{-1}((1,\frac{9}{8}]) = \bigg[\frac{1}{\sqrt{2}}, \frac{3}{4}\bigg] \Rightarrow P(X^{-1}) = \frac{3}{4}-\frac{1}{\sqrt{2}} \approx 0.04
$$
so they don't sum to $1$! Then either my calculation for $P(X \in A)$ is incorrect, or definition of $A^c$.
AI: Verify that $X \notin A$ iff $\frac 1 {\sqrt 2} <\omega <\frac 3 4 $ and $X \in A$ iff $\omega >\frac 3 4 $ or $\omega <\frac 1 {\sqrt 2}$. ( I am ignoring possible equalities because Lebesgue measure of a single point is $0$).
Hence $P(X \in A)=\frac 14 +\frac 1 {\sqrt 2 }$ and $P(X \notin A)=\frac 3 4 -\frac 1 {\sqrt 2}$ and these add up to $1$.
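A quick Monte Carlo check of these two values (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0, 1, 10**6)
X = np.where(w < 0.25, 1.0, np.where(w < 0.75, 2*w**2, w**2))

print((X <= 1).mean(), 0.25 + 1/np.sqrt(2))   # both ~0.957; P(X > 1) is the remaining ~0.043
```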
|
H: Understanding connection of $=$ and $>$ relations in proofs.
I would like to ask about a certain pattern I see in some proofs. This is an example taken from the book: Lang, Serge & Murrow, Gene. "Geometry - Second Edition" (p. 69)
An obtuse angle is an angle which has more than 90°. Prove (in a sentence or two) that a triangle cannot have more than one obtuse angle.
Proof: Suppose triangle ABC has two obtuse angles, $\angle A$ and $\angle B$.
$m(\angle A) + m(\angle B) > 180°$ contradicting the theorem that the sum of angles in a triangle equals 180°. Therefore, no triangle can contain two obtuse angles.
Is the $=$ relation sometimes taken as the negation of the $>$ relation? Am I missing something?
AI: $=$ and $>$ are incompatible. But that does not mean they are negations of each other.
If $a =b$ then $a < b$ and $a > b$ are impossible.
If $a>b$ then $a= b$ and $a< b$ are impossible.
If $a < b$ then $a=b$ and $a > b$ are impossible.
If $m \angle A + m \angle B > 180^\circ$ then $m\angle A + m\angle B =180^\circ$ is impossible.
You don't need a negation to do a proof by contradiction.
Suppose you want to prove Lucy is not a lion, and you manage to prove that Lucy is a lizard. Being a lizard means Lucy can't be a lion. So you are done. QED. But if Lucy is not a lion, that doesn't mean she is a lizard. There are more than two options. But those two options, being a lizard and being a lion, are incompatible.
|
H: Definite integral of the following question
Evaluate:
$$\int_{0}^{\infty}\frac{4x\ln (x)}{x^4+4x^2+1}dx$$
I took $x^2$ common from the denominator and then substituted $\ln (x) =u$, and then I was stuck. The result turns out to be
$$\int_{-\infty}^{\infty}\frac{4u}{(e^u+e^{-u})^2+2}du$$
AI: This integral cannot be calculated with "conventional" methods. The trick I will tell you is to substitute $x=1/t$, so that $dx=-\frac{1}{t^2}\,dt$. I leave it up to you to do the algebra; you will end up with the negative of the given integral. In other words, if the given integral is $I$, you end up with $I=-I$ and so $I=0$. This is just the overview of the method. You need to work out the details; that's your exercise, otherwise you haven't learned anything from it.
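A numerical check is reassuring here (a sketch assuming scipy):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 4*x*np.log(x)/(x**4 + 4*x**2 + 1), 0, np.inf)
print(val, err)   # the value is 0 up to the quadrature error
```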
|
H: Study the convergence of the Series $\sum_{n=0}^{\infty} e^{-\sqrt{n}}$
Study the convergence of the series $\sum_{n=0}^{\infty} e^{-\sqrt{n}}$. The only thing I know is that $e^{-\sqrt{x}}$ is strictly decreasing. I also know that the only method to use here is the comparison or limit criteria, but I don't know what sequence to compare it to. Thanks in advance.
AI: Hint:
$$e^{-\sqrt{n}}\leq \frac{1}{n^2} \quad \text{for sufficiently large }n$$
Showing that is easy if you use Taylor expansion of $e^x$.
|
H: Does every non-compact Tychonoff space admit an unbounded continuous function?
Let $X$ be a completely regular Hausdorff space. Such a space is also known as Tychonoff space, or a $T_{3.5}$-space. Furthermore, let's assume that $X$ is not compact.
Question. Does $X$ admit a continuous function $f: X\to \mathbb{R}$ with unbounded image?
Context. It is known that if $X$ is a non-compact metric space, then $X$ admits an unbounded continuous real-valued function. This was discussed thoroughly in this MSE thread. Note that the same conclusion holds if $X$ is a non-compact normal space (also known as $T_4$-space). This is because the proof using Tietze extension theorem (see the answer in the linked MSE thread by the user Espace' etale) still works. This is the motivation for asking the current question.
In general, Tietze extension theorem fails for $T_{3.5}$-spaces (for example, consider the Moore plane), so one cannot apply the trick above; of course, it is possible that the question has a negative answer, in which case I'd love to see a counter-example.
AI: Not necessarily. A space $X$ such that every continuous $f:X\to\Bbb R$ is bounded is called pseudocompact. Every countably compact space is pseudocompact, and $\omega_1$ with the order topology is a countably compact, non-compact Tikhonov (and even hereditarily normal) space, so it is also a pseudocompact space that is not compact. (Note that the argument by Espace' etale uses more than normality: it also uses the fact that sequential compactness implies compactness in metric spaces.)
|
H: Need the result of composing an infinite number of smooth functions be smooth?
$f$ is a smooth function from a manifold to itself. So is $f\circ f$, and $f\circ f\circ f$ and so on...
If this sequence is extended forever, and supposing that it converges to some function, need the function it converges to be smooth as well? (To be clear, by smooth, I mean $f$ is a diffeomorphism.)
I am tempted to deploy an argument like, "every step of the way is smooth, so the limit must be as well." However, there are sequences of rational numbers that converge, but not to rational numbers. (In fact, this is one way to create the reals.) So I will not deploy that argument, but instead ask you what the truth is!
AI: No. For a counterexample, take $f\colon\Bbb R\to\Bbb R$ defined by $f(x)=x^{1/3}$. Then, writing $f^{\circ 2}=f\circ f$, $f^{\circ3}=f\circ f\circ f$, and so on,
$$
\lim_{n\to\infty} f^{\circ n}(x) = \lim_{n\to\infty} x^{1/3^n} = \mathop{\rm sign}(x),
$$
which is discontinuous at $x=0$.
In general, a limit of any sequence of smooth functions need not be smooth, so such a heuristic is probably a liability.
|
H: Dijkstra's algorithm for a single path only
I'm looking to create maps for a board game with some specific properties, but my knowledge of graph theory is essentially negligible so I'd love some help. The maps will consist of territories which border each other in a 2D plane. I'm looking for a method for creating graphs which represent these maps, with vertices representing territories and edges representing borders. The key property of these graphs is:
There is only one path of minimum length between any two vertices.
Other properties include:
The graph is bidirectional.
Each vertex in the graph is accessible from every other vertex.
For paths between vertices there is no limit to the number of paths longer than the minimal length.
The edges all have the same weight.
From what I understand Dijkstra's algorithm allows me to find the shortest path between two points, but how do I specify that only one such path exists?
Any help is much appreciated :)
AI: A graph with the property that for every pair of nodes, there is a unique shortest path is sometimes called 'min-unique.' (Usually this concept is used in the directed graph context, where it has complexity theoretic meaning.)
I'll discuss below an algorithm to verify min-uniqueness of weighted undirected graphs, with non-negative weights.
I suspect that the class of undirected unweighted min-unique graphs might be pretty limited. Some observations and a conjecture is in the last section.
If you want to verify that a graph is min-unique:
One way to count the number of length $k$ paths between nodes $s$ and $t$ is by taking the $s,t$th entry of the $k$th power of the adjacency matrix: https://en.wikipedia.org/wiki/Adjacency_matrix#Matrix_powers So if you calculate the pairwise distances for all nodes you can determine the number of paths of that length by powering the matrix, and in this way check uniqueness.
Alternatively, Dijkstra's algorithm can be modified to give the number of shortest paths. Instead of just keeping track of the distance, keep track of the number of paths that realize that distance.
Previously I wrote out a strategy using the first bullet (still in the answer history), but I think it would be horribly inefficient and better to do something like the following:
Iterate over the nodes of the graph, and for each node s do:
Use a modified Dijkstras algorithm (below) to check whether all the paths from it to other nodes are min unique.
If not, stop and reject the graph.
Otherwise, continue. (You can also remove s at this point.)
Modified Dijkstra:
Run Dijkstra's algorithm to calculate all the distances $d(s,w)$ for $w \in V$. (Here $s$ is the fixed node from the above loop.)
Then, for each node $w$ check whether there are $u,u' \sim w$ , $u \not = u'$, with $d(s,u) = d(s,w) - d(u,w)$ and $d(s,u') = d(s,w) - d(u',w)$. If there are any, then the graph is not min-unique and you can reject it.
If every $w$ passes this, then for all $w$, the minimum path from $s$ to $w$ is unique. Here is the reason: Suppose that there is a node $w$ where there are two paths from $s$ to $w$ of shortest length. Moreover, choose $w$ to be a closest node to $s$ satisfying this property. Let $\gamma, \gamma'$ be two of those paths. The nodes of $G$ that $\gamma, \gamma'$ step through right before $w$ have to be different, otherwise that node would be a node closer to $s$ with non-unique min paths. Say those nodes are $u,u'$. We must have $d(s,u) = d(s,w) - d(u,w)$ and $d(s,u') = d(s,w) - d(u',w)$, and $u,u' \sim w$ by construction, which means the test in the above loop would have caught this.
(Note something a little subtle here is that you need all $w$ to pass this test to say anything about any one of them.; e.g. imagine starting with a square with one node labelled $s$. Add a long path to the opposite node of $s$, say $t$, to form a lollipop. The test will only fail at $t$, although every node beyond $t$ has 2 min paths to $s$.)
This costs an extra additive $O(E)$ per loop. This is a little more expensive than Dijkstra, but perhaps you can squeeze the min uniqueness into the actually construction of the shortest paths tree. I would just use an out-of-the-box implementation of Dijkstra's algorithm and then run this extra step.
So that gives $O(V(D + E))$, where $D = O(E + V\log V)$ is the time to run Dijkstra's algorithm. Since you're building a game for humans and not super computers, I guess $V$ is not that big and this is fine.
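In case it helps, here is a minimal sketch of the shortest-path-counting variant of Dijkstra's algorithm mentioned above, plus the min-uniqueness check built on top of it (assuming positive edge weights and a dict-of-dicts adjacency map; this is an illustration, not a tuned implementation):

```python
import heapq

def shortest_path_counts(adj, s):
    """Dijkstra from s that also counts shortest paths to each node.
    adj: node -> {neighbor: positive edge weight}."""
    dist, count, done = {s: 0}, {s: 1}, set()
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in adj[u].items():
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v], count[v] = nd, count[u]
                heapq.heappush(pq, (nd, v))
            elif nd == dist[v]:
                count[v] += count[u]   # another shortest path into v
    return dist, count

def is_min_unique(adj):
    """True iff every pair of nodes is joined by exactly one shortest path."""
    return all(c == 1
               for s in adj
               for c in shortest_path_counts(adj, s)[1].values())

# Example: a 4-cycle is not min-unique (opposite corners have two shortest paths).
square = {0: {1: 1, 3: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1, 0: 1}}
print(is_min_unique(square))   # False
```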
Let me know if anything is unclear or seems mistaken.
Maybe a reasonable thing to do would be to program a min-uniqueness checker along the above lines, then sample uniformly random points in a square and build the Delaunay triangulation, and check min-uniqueness. You can also download some small graph libraries, for instance through networkx, and run through them.
I don't know how often you'd have to repeat this until you find a min-unique graph. You can easily burn through millions of graphs this way, and maybe find a counter-example to the conjecture below.
If you allow the edge weights to be different: take any connected graph and assign the edges independent uniformly random weights in $[0,1]$, and it will be min-unique with probability $1$.
You can even get away with assigning integer valued weights in $[0,N]$ if you choose $N$ judiciously, by an application of the isolation lemma: https://en.wikipedia.org/wiki/Isolation_lemma.
In the directed graph case this means that you can simulate min-unique distances by subdividing your edges, although you will end up with lots of degree 2 nodes this way. (This is part of why min-uniqueness is meaningful in complexity theory, since you can use this make a Turing machine unambiguous, see e.g here, which relates to the question of whether it is easier to solve problems where the unknown solution is known to be unique if it exists.)
In the undirected case it's not clear to me that obtaining min-uniqueness through subdivision works, however, since you have to also account for the pairs of new nodes and the choice of original node to connect to first along a path between pairs of new nodes complicates the reasoning.
Is it possible that for any graph $G$, there is a homeomorphic graph that is min-unique? I think this is likely to be false. I put a conjecture in the next section.
Observation: If G is an undirected, unweighted graph, then G is min-unique iff all the blocks of its block-cut tree are min-unique.
Proof: Suppose the blocks are min-unique. Consider any pair of vertices. There is a unique path in the block-cut tree, and within each block there is a unique min-path connecting the cut-vertices separating the blocks that the tree-path steps through. On the other hand, suppose that G is min-unique. The shortest paths connecting nodes of any of the biconnected blocks do not leave the block, since such a path would have to leave through a cut-vertex that it would later have to return through; hence the block is also min-unique. QED
Using this, here are some classes of min-unique (unweighted, undirected) graphs: odd-cycles, complete graphs and, by the observation, graphs where the maximal 2-connected components are either odd cycles or complete graphs. The last class includes trees as the case where the blocks are edges.
Also, this observation means that to classify the min-unique graphs it suffices to classify the 2 vertex connected min-unique graphs.
Some doodling has led me to believe the following:
Conjecture: The only 2-vertex connected, undirected, unweighted, min-unique graphs are odd cycles and complete graphs.
I'll update if I find a proof or a counter-example.
This would imply:
Conjecture: The only min-unique (undirected, unweighted) graphs are those whose biconnected components are either odd-cycles or complete graphs.
|
H: Extension of a differentiable function $f$ to an open superset
This is a question the book Munkres-Calculus on Manifolds pg.144(Exercise 3-b)
If $f :S\to \mathbb R$ and $f$ is differentiable of class $C^r$ at each point $x_0$ of $S$,then $f$ may be extended to a $C^r$ function $h: A\to \mathbb R$ that is defined on an open set $A$ of $\mathbb R^n$ containing $S$.
My attempt: since $f:S \to \mathbb R$ is $C^r$ at each $x \in S$, for each $x \in S$
I can choose an open neighborhood $U_x$ of $x$ such that $\cup U_x = A$ is open (an arbitrary union of open sets is open). The previous item (pg. 144, Exercise 3(a)) shows that if $f$ is $C^r$ then there always exists $g:U_x \to \mathbb R$, where $x \in U_x \subset \mathbb R^n$, such that $f$ is equal to $g$ on $U_x \cap S$, and
$$h(x) =\left \{ \begin{matrix} \phi(x)g(x)& \mbox{if }x\mbox{ $\in U_x$ }
\\ 0 & \mbox{if }x\mbox{ $\notin \operatorname{supp} \phi$}\end{matrix}\right.
$$
is $C^r$, where $\phi:\mathbb R^n \to \mathbb R$ is $C^r$ and its support is contained in $U_x$. Then I take $h:A \to \mathbb R$ as the extension of $f$.
AI: Use a partition of unity: following your idea, we have by hypothesis, for each $s\in S$, opens $U_s\ni s$ and $C^r$ functions $\tilde f_s:U_s\to \mathbb R$ such that $\tilde f_s|_S=f.$ Choose $\{\psi_s:s\in S\}$ a smooth partition of unity subordinate to the cover $\{U_s:s\in S\}$ and set $\tilde f:U:=\bigcup_{s\in S} U_s\to \mathbb R:\ x\mapsto \sum _{s\in S}\psi_s(x)\tilde f_s(x)$. The sum is finite because $\{\text{supp}\ \psi_s:s\in S\}$ is locally finite. Now $S\subseteq U,\ U$ is open and $\tilde f$ is clearly $C^r$. Finally, if $x\in S,$ and $s\in S$ such that $\psi_s(x)\neq 0,$ then $x\in U_s$ so $\tilde f_s(x)=f(x)$. It follows that $\tilde f(x)=\sum _{s\in S}\psi_s(x)\tilde f_s(x)=\sum _{s\in S}\psi_s(x) f(x)=f(x)\sum _{s\in S}\psi_s(x)=f(x)$ so $\tilde f$ agrees with $f$ on $S.$
|
H: Finding the closed form of $\int _0^{\infty }\frac{\ln \left(1+ax\right)}{1+x^2}\:\mathrm{d}x$
I solved a similar case which is also a very well known integral
$$\int _0^{\infty }\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x=\frac{\pi }{4}\ln \left(2\right)+G$$
My teacher gave me a hint which was splitting the integral at the point $1$,
$$\int _0^1\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x+\int _1^{\infty }\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x=\int _0^1\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x+\int _0^1\frac{\ln \left(\frac{1+x}{x}\right)}{1+x^2}\:\mathrm{d}x$$
$$2\int _0^1\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x-\int _0^1\frac{\ln \left(x\right)}{1+x^2}\:\mathrm{d}x=\frac{\pi }{4}\ln \left(2\right)+G$$
I used the values for each integral since they are very well known.
My question is: can this integral be generalized for $a>0$? In other words, can similar tools help me calculate
$$\int _0^{\infty }\frac{\ln \left(1+ax\right)}{1+x^2}\:\mathrm{d}x$$
AI: You can evaluate this integral with Feynman's trick,
$$I\left(a\right)=\int _0^{\infty }\frac{\ln \left(1+ax\right)}{1+x^2}\:dx$$
$$I'\left(a\right)=\int _0^{\infty }\frac{x}{\left(1+x^2\right)\left(1+ax\right)}\:dx=\frac{1}{1+a^2}\int _0^{\infty }\left(\frac{x+a}{1+x^2}-\frac{a}{1+ax}\right)\:dx$$
$$=\frac{1}{1+a^2}\:\left(\frac{1}{2}\ln \left(1+x^2\right)+a\arctan \left(x\right)-\ln \left(1+ax\right)\right)\Biggr|^{\infty }_0=\frac{1}{1+a^2}\:\left(\frac{a\pi \:}{2}-\ln \left(a\right)\right)$$
To find $I\left(a\right)$ we have to integrate again with convenient bounds,
$$\int _0^aI'\left(a\right)\:da=\:\frac{\pi }{2}\int _0^a\frac{a}{1+a^2}\:da-\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da$$
$$I\left(a\right)=\:\frac{\pi }{4}\ln \left(1+a^2\right)-\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da$$
To solve $\displaystyle\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da$ first IBP.
$$\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da=\ln \left(a\right)\arctan \left(a\right)-\int _0^a\frac{\arctan \left(a\right)}{a}\:da=\ln \left(a\right)\arctan \left(a\right)-\text{Ti}_2\left(a\right)$$
Plugging that back we conclude that
$$\boxed{I\left(a\right)=\:\frac{\pi }{4}\ln \left(1+a^2\right)-\ln \left(a\right)\arctan \left(a\right)+\text{Ti}_2\left(a\right)}$$
Where $\text{Ti}_2\left(a\right)$ is the Inverse Tangent Integral.
The integral you evaluated can be proved with this,
$$I\left(1\right)=\int _0^{\infty }\frac{\ln \left(1+x\right)}{1+x^2}\:dx=\frac{\pi }{4}\ln \left(2\right)-\ln \left(1\right)\arctan \left(1\right)+\text{Ti}_2\left(1\right)$$
$$=\frac{\pi }{4}\ln \left(2\right)+G$$
Here $G$ denotes the Catalan's constant.
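A numerical check of the closed form (a sketch assuming mpmath is available; it uses the identity $\text{Ti}_2(a)=\operatorname{Im}\operatorname{Li}_2(ia)$ for real $a$):

```python
import mpmath as mp

def lhs(a):
    return mp.quad(lambda x: mp.log(1 + a*x)/(1 + x**2), [0, mp.inf])

def rhs(a):
    Ti2 = mp.polylog(2, 1j*a).imag          # inverse tangent integral Ti_2(a)
    return mp.pi/4*mp.log(1 + a**2) - mp.log(a)*mp.atan(a) + Ti2

for a in (0.5, 1, 3):
    print(a, lhs(a), rhs(a))                # the two columns agree
```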
|
H: Solving least squares with QR factorization
I'm looking at the notes on https://www.cs.cornell.edu/~bindel/class/cs3220-s12/notes/lec11.pdf.
On the first page, we have the following steps
\begin{align}
||Ax-b||^2&=||Q^T(Ax-b)||^2\\
&=\left|\left|\begin{bmatrix}R_{11}\\0\end{bmatrix}x-\begin{bmatrix}Q_1^Tb\\Q_2^Tb\end{bmatrix}\right|\right|^2\\
&=||R_{11}x-Q_1^Tb||^2+||Q_2^Tb||^2
\end{align}
How did the 3rd line follow from the 2nd? I don't understand how the single norm was broken into the addition of 2 norms.
AI: If $u \in \mathbb R^n$ and $v \in \mathbb R^m$ are column vectors then
$$
\left\| \begin{bmatrix} u \\ v \end{bmatrix} \right \|^2 = \| u \|^2 + \| v \|^2.
$$
The reason is that both are equal to $u_1^2 + \cdots + u_n^2 + v_1^2 + \cdots + v_m^2$.
So in this example we are saying that
$$
\left \| \begin{bmatrix} R_{11} x - Q_1^T b \\ Q_2^T b \end{bmatrix} \right \|^2 = \| R_{11}x - Q_1^T b \|^2 + \| Q_2^T b \|^2.
$$
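A small numerical illustration of this splitting of the residual (a sketch assuming numpy; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A, mode='complete')   # Q is 8x8; the top 3x3 block of R is R11
Q1, Q2 = Q[:, :3], Q[:, 3:]
R11 = R[:3, :]

x = np.linalg.solve(R11, Q1.T @ b)        # minimizer of ||R11 x - Q1^T b||
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))             # True
print(np.isclose(np.linalg.norm(A @ x - b), np.linalg.norm(Q2.T @ b)))  # residual = ||Q2^T b||
```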
|
H: Prove there is only one $2$-form $p^*\omega = dx\wedge dy$
I am new to forms and pullbacks and admit differential geometry is not my best area. I'm trying to solve the following problem.
Let $(x, y)$ be coordinates on $\mathbb{R}^2$, and let $p:\mathbb{R}^2\rightarrow\mathbb{R}^2/\mathbb{Z}^2=\mathbb{T}^2$ be the projection. Show that there is only one $2$-form $\omega$ on $\mathbb{T}^2$ such that
\begin{equation} p^*\omega = dx\wedge dy
\end{equation}
Is this form closed? Is it exact?
I did a proof by contradiction, but I'm not sure if it's correct.
Suppose there is another $2$-form $\theta$ such that $p^*\theta= dx\wedge dy $
then $p^*(\omega - \theta) = dx\wedge dy - dx\wedge dy = 0$
because $p$ is not null, $\omega-\theta=0$. It's too simple and I doubt that it works;
any help will be appreciated.
AI: $p:\mathbb{R}^2\rightarrow\mathbb{T}^2$ is a local diffeomorphism and it is surjective. It implies that for every $x\in \mathbb{T}^2$ there exists $x'\in\mathbb{R}^2$ such that $p(x')=x$ and $dp_{x'}:T_{x'}\mathbb{R}^2\rightarrow T_{p(x)}\mathbb{T}^2$ is an isomorphism.
Suppose that $p^*\omega=p^*\theta$. Let $x\in \mathbb{T}^2$ and $u,v\in T_x\mathbb{T}^2$ be arbitrary, and take $x'\in\mathbb{R}^2$, $u',v'\in T_{x'}\mathbb{R}^2$ such that $p(x')=x$, $dp_{x'}(u')=u$, $dp_{x'}(v')=v$; then $0=p^*(\omega-\theta)_{x'}(u',v')=(\omega-\theta)_{p(x')}(dp_{x'}(u'),dp_{x'}(v'))=(\omega-\theta)_x(u,v)$, which implies that $\omega=\theta$.
Thus the form is unique. The form is closed since $\mathbb{T}^2$ is $2$-dimensional and the differential of a $2$-form is a $3$-form, which is zero because an alternating $3$-form on a $2$-dimensional vector space is zero.
The $2$-form is not exact since it is a volume form.
In fact $\omega$ exists since $\mathbb{T}^2$ is the quotient of $\mathbb{R}^2$ by $f(x,y)=(x+1,y)$ and $g(x,y)=(x,y+1)$ and $f^*(dx\wedge dy)=g^*(dx\wedge dy)=dx\wedge dy$.
|
H: Is there such a thing as a free quasivariety?
I have heard in universal algebra there is such a thing as a free variety, but is there such a thing as a free quasivariety? I would assume, that, for instance, in the language of a single binary operation symbol $*$, a free quasivariety is a free variety where additionally if $x*y=z*w$, then $x=z$ and $y=w$. Is this correct?
AI: There are no such concepts as "free variety" or "free quasi-variety" in Universal Algebra. There are free objects in a variety (quasi-variety).
|
H: A coin is flipped 15 times. How many possible outcomes contain exactly four tails? contain at least three heads?
A coin is flipped 15 times where each flip comes up either heads or tails. How many possible outcomes (a) contain exactly four tails?, (b) contain at least three heads?
Hello everyone, I am currently a beginner at combinatorics and discrete math and any comments or answers would be extremely helpful with this question as I am still getting the hang of them. Thank you so much in advance! All contribution really helps :D
AI: Let's try to come up with an idea for a):
We start with something easier: How many possible outcomes contain exactly one tails? Well, obviously $15$, since it can happen at any one of these 15 throws.
Let's update our question to: How many possible outcomes contain exactly two tails? We know the first tails has 15 possible throws where it can appear. The second tails has to happen at a different throw, so there are only 14 throws left. But each of these throws would be okay. So what does this mean? Well, for each of the 15 possibilities for the first tails there are 14 possibilities for the second tails left. We are tempted to say the answer is therefore $15\cdot 14 = 210$ but we would be wrong. There is a problem we need to fix first. We counted every possible combination twice! For example we could have placed the first tails at throw 1 and the second tails at throw 2 but we could also have placed the first tails at throw 2 and the second at throw 1. In the end this yields the same outcome, meaning first and second throw are tails and rest is heads. We can fix this by dividing by $2$. So we get the answer
$$\frac{15\cdot 14}{2} = 105\,.$$
So what happens if we ask: How many possible outcomes contain exactly three tails? The idea is the same. We have 15 throws to place the first tails. We have 14 positions to place the second tails. Now we are left with 13 throws to place the third tails. Combining these we get $15\cdot 14\cdot 13$ possibilities which contain duplicates like before. We need to find out how many duplicates of each possibility we have. In our second example this was $2$. Here it's a bit more complicated. We can have first tails throw 1, second tails throw 2 and third tails throw 3. Now we could also have first tails throw 2, second tails throw 3 and third tails throw 1. So in essence, for every possible way to arrange the three different tails at throw 1, 2 and 3, we have a duplicate. How many ways are there to arrange the first, second and third tails? That's $3 \cdot 2 \cdot 1 =6$. It's the same logic as before. The first tails has three positions to go to, the second has two, the last has one.
This yields a formula which we can now state. It's
$$\frac{n!}{k!\cdot(n-k)!}$$
This formula calculates how many ways there are to take $k$ things from $n$ total things. In your case this would be how many ways there are to have $k$ many throws land on tails when performing a total of $n$ throws.
A short reminder:
$$n! = n \cdot (n-1) \cdot (n-2) \cdot \ldots \cdot 2 \cdot 1\,.$$
So for example
$$3! = 3 \cdot 2 \cdot 1 = 6\,.$$
Let's try to understand this formula. First look only at
$$\frac{n!}{(n-k)!}\,.$$
Let's put some numbers in. We chose $n=15$ and $k = 2$. We get
$$\frac{15!}{(15-2)!} = \frac{15!}{13!} = \frac{15 \cdot 14 \cdot 13 \cdot 12 \cdot \ldots \cdot 2\cdot 1}{13 \cdot 12 \cdot \ldots\cdot 2 \cdot 1} = 15 \cdot 14\,,$$
because many things cancel out. Notice that this is exactly the first thing we calculated in example 2. So this part
$$\frac{n!}{(n-k)!} = n \cdot (n-1) \cdot\ldots\cdot (n-(k-1))$$
actually calculates this first step in our examples. It tells us how many ways there are to take $k$ many ordered things from $n$ total things.
We are not interested in orderings of $k$. So for us each way in which we take the $k$ things from the same positions is the same, regardless of ordering. We therefore have to calculate how many orderings of $k$ there are. And that's $k!$; it's exactly the argument we used in example 3. Therefore we need to divide by $k!$. This yields the complete formula
$$\frac{n!}{k!\cdot(n-k)!}\,.$$
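If you want to sanity-check this counting numerically, here is a quick check in Python (using math.comb, available in Python 3.8+, which implements exactly this formula); it only re-computes the small examples worked out above:
from math import comb, factorial
# example 2: exactly two tails among 15 throws
print(15 * 14 // 2, comb(15, 2))          # both print 105
# example 3: exactly three tails among 15 throws
print(15 * 14 * 13 // 6, comb(15, 3))     # both print 455
# the general formula n!/(k!(n-k)!) agrees with comb(n, k)
n, k = 15, 3
print(factorial(n) // (factorial(k) * factorial(n - k)))  # 455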
You now should be able to tackle problem a) (you got the formula). There is now an easy (and an even easier) way to calculate b) using this formula, but you can't just use it since b) asks for at least but the formula only gives you an answer for exactly.
I hope this helps.
|
H: Perturbation on sequences and their limit points
I believe this is really simple, but I can't figure it out alone. Let $\{x_k\}_{k\in \mathbb{N}}\subset \mathbb{R}^n$ be a sequence such that $\|x_k\|\to \infty$, and let $v\in \mathbb{R}^n$ be arbitrary.
Is the set of limit points of $\left\{ \frac{x_k}{\left\|v+x_k\right\|} \right\}_{k\in \mathbb{N}}$ equal to the set of limit points of $\left\{ \frac{x_k}{\left\|x_k\right\|} \right\}_{k\in \mathbb{N}}$?
It looks true, because $v$ is a finite perturbation and everything tells me that $x_k$ is going to lead the convergence since it diverges, but I couldn't give formal arguments for proving that. I appreciate any help.
AI: If you wish to show two sequences $(a_n)$ and $(b_n)$ have the same limit points, then it suffices to show that $\|a_n - b_n\| \to 0$ as $n \to \infty$. If we have a limit point $L$ of $(a_n)$, then we have a subsequence $a_{n_k} \to L$. Then,
$$\|b_{n_k} - L\| \le \|b_{n_k} - a_{n_k}\| + \|a_{n_k} - L\| \to 0 + 0,$$
since $\|b_{n_k} - a_{n_k}\|$ is a subsequence of the convergent sequence $\|a_n - b_n\|$ and $\|a_{n_k} - L\| \to 0$. It follows that $b_{n_k} \to L$. So, every limit point of $(a_n)$ is a limit point of $(b_n)$. Symmetrically, we can reverse the roles of $(a_n)$ and $(b_n)$ to obtain the reverse inclusion, i.e. the limit points of $(a_n)$ and $(b_n)$ are the same.
Thus, it suffices to show that
$$\left\|\frac{x_k}{\|v + x_k\|} - \frac{x_k}{\|x_k\|}\right\| \to 0.$$
Consider:
\begin{align*}
\left\|\frac{x_k}{\|v + x_k\|} - \frac{x_k}{\|x_k\|}\right\| &= \|x_k\|\left|\frac{1}{\|v + x_k\|} - \frac{1}{\|x_k\|}\right| \\
&= \|x_k\|\frac{\Big|\|x_k\| - \|v + x_k\|\Big|}{\|v + x_k\|\|x_k\|} \\
&\le \frac{\|v\|}{\|v + x_k\|} \\
&\le \frac{\|v\|}{\|x_k\|- \|v\|} \to 0.
\end{align*}
|
H: Sufficient condition for a flow to be symplectic
Suppose $(M,\omega)$ is a symplectic manifold and $X$ a smooth vector field defined on $M$ such that its corresponding flow $\{g_s\}_{s\in\mathbb{R}}$ is defined for all $s$ on some $U_s\subset M$. Is it true that if the Lie derivative $\mathcal{L}_X \omega=0$, then the flow is symplectic; i.e. $g_s^*\omega=\omega$?
AI: For simplicity, I'll just assume that the flow of $X$ is defined everywhere; i.e $g:\Bbb{R} \times M \to M$. I think a stronger statement is true as well. If $T$ is any tensor field on $M$, then we have
\begin{align}
\mathcal{L}_XT = 0 \quad \iff \text{for all $s\in \Bbb{R}$, } g_s^*T = T
\end{align}
The proof of this follows from the "flow definition" of Lie-derivatives, that $\mathcal{L}_XT := \dfrac{d}{ds}\bigg|_{s=0}g_s^*T$ (everything is of course interpreted pointwise).
From this definition, the "if" part ($\impliedby$) of the statement is clear. For the "only if" ($\implies$) part, note that since flows have the group property that $g_{s_1 + s_2} = g_{s_1}\circ g_{s_2}$, it follows that for every $\lambda\in \Bbb{R}$,
\begin{align}
\dfrac{d}{ds}\bigg|_{s=\lambda} g_s^*T &= g_{\lambda}^*(\mathcal{L}_XT)
\end{align}
(this is just a 1-2 line calculation). So, based on the assumption $\mathcal{L}_XT = 0$, the above identity shows that for every $\lambda\in \Bbb{R}$, $\dfrac{d}{ds}\bigg|_{s=\lambda} g_s^*T = 0$. This means $\lambda \mapsto g_{\lambda}^*T$ is a constant function (notice that we're implicitly using the fact $\Bbb{R}$ is connected here). By evaluating at $\lambda = 0$, we see the "constant" (in this case a constant tensor field with respect to $\lambda$) is $T$.
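For completeness, here is a sketch of that short calculation, using the group property in the form $g_{\lambda+h} = g_h\circ g_{\lambda}$ (so that $g_{\lambda+h}^* = g_{\lambda}^*\circ g_h^*$) and the fact that the pullback $g_{\lambda}^*$ is linear and does not depend on $h$:
\begin{align}
\dfrac{d}{ds}\bigg|_{s=\lambda} g_s^*T = \dfrac{d}{dh}\bigg|_{h=0} g_{\lambda+h}^*T = \dfrac{d}{dh}\bigg|_{h=0} g_{\lambda}^*\big(g_h^*T\big) = g_{\lambda}^*\left(\dfrac{d}{dh}\bigg|_{h=0} g_h^*T\right) = g_{\lambda}^*(\mathcal{L}_XT).
\end{align}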
From here, you can of course specialize to the case where $T = \omega$ is the symplectic form. But nothing in the proof is simplified by assuming this from the beginning (side note: applying to the case where $T=g$ is the metric tensor from Riemannian geometry shows that the flow of a vector field consists of isometries if and only if $\mathcal{L}_Xg = 0$; i.e Lie derivative of the metric vanishes).
|
H: Analogous version of $\operatorname{var}(X+cY) = \operatorname{var}(X) + c^2\operatorname{var}(Y)$ for vectors of uncorrelated random variables?
Is there an analogous version of $\operatorname{var}(X+cY) = \operatorname{var}(X) + c^2\operatorname{var}(Y)$ for vectors of uncorrelated random variables ($X$ and $Y$ are random variables here)?
For example, consider $\operatorname{var}(\boldsymbol{X}+A\boldsymbol{Y})$, where the bold indicates vector random variables.
Would the variance look something like
$$
\operatorname{var}(\boldsymbol{X})+A^TA\operatorname{var}(\boldsymbol{Y})
$$
?
AI: For vector-valued random variables $\overrightarrow{X}$ and $\overrightarrow{Y}$, the appropriate concept is a covariance matrix: if $\overrightarrow{X} = [x_1, ..., x_n]$, then
$$\operatorname{cov}(\overrightarrow{X}) = [a_{ij}],$$
where $a_{ij} = \operatorname{cov}(x_i, x_j)$. It is then true that for any matrix $A$ of appropriate dimension,
$$\operatorname{cov}(A\overrightarrow{X}) = A\operatorname{cov}(\overrightarrow{X})A^t,$$
and direct computation should show that $\operatorname{cov}(\overrightarrow{X} + \overrightarrow{Y}) = \operatorname{cov}(\overrightarrow{X}) + \operatorname{cov}(\overrightarrow{Y})$ when the components of $\overrightarrow{X}, \overrightarrow{Y}$ are uncorrelated. So yes, you do get a version of your original statement:
$$\operatorname{cov}(\overrightarrow{X} + A\overrightarrow{Y}) = \operatorname{cov}(\overrightarrow{X}) + A\operatorname{cov}(\overrightarrow{Y})A^t,$$
and if $A = cI$ for some scalar $c$, then it is even true that
$$\operatorname{cov}(\overrightarrow{X} + c\overrightarrow{Y}) = \operatorname{cov}(\overrightarrow{X}) + c^2\operatorname{cov}(\overrightarrow{Y}).$$
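If a concrete sanity check helps, here is a small NumPy sketch (the random data and the matrix $A$ are just made-up examples); it compares $\operatorname{cov}(\overrightarrow{X}+A\overrightarrow{Y})$ with $\operatorname{cov}(\overrightarrow{X})+A\operatorname{cov}(\overrightarrow{Y})A^t$ for independently generated $\overrightarrow{X}$ and $\overrightarrow{Y}$:
import numpy as np
rng = np.random.default_rng(0)
n = 200000                                   # number of samples
X = rng.normal(size=(3, n))                  # rows are the components of X
Y = rng.normal(size=(3, n)) * np.array([[1.0], [2.0], [0.5]])
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
lhs = np.cov(X + A @ Y)                      # covariance matrix of X + AY
rhs = np.cov(X) + A @ np.cov(Y) @ A.T        # formula from the answer
print(np.abs(lhs - rhs).max())               # small, and shrinks as n grows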
|
H: $f^*(U_1 \times ... \times U_k) = \bigcap_{i=1}^k f^*_i(U_i)$
I’m trying to prove this result and I would really appreciate if you could give some feedback in my proof.
Result: Let $A,A_1,...,A_k$ be sets, for some positive integer $k$, let $f: A \rightarrow A_1 \times ... \times A_k$ be a function and let $U_i \subseteq A_i$ for each $i \in \{1,...,k\}$. Then
$$f^*(U_1 \times ... \times U_k) = \bigcap_{i=1}^k f^*_i(U_i).$$
I will be using the map $\pi_i:A_1 \times ... \times A_k \rightarrow A_i$ defined by $\pi_i((x_1,...,x_k))=x_i$ for all $(x_1,...,x_k) \in A_1 \times ... \times A_k$ (the projection map). In addition, I will make use of the functions $f_i: A \rightarrow A_i$ defined by $f_i = \pi_i \circ f$ for all $i \in \{1,...,k\}$ (the component functions of $f$).
Proof: In order to show that $f^*(U_1 \times ... \times U_k) = \bigcap_{i=1}^k f^*_i(U_i)$, we have to show that each of the sets is a subset of the other.
Let $a \in f^*(U_1 \times ... \times U_k)$. We have that $f(a) \in U_1 \times ... \times U_k$. Then $f(a)=(u_1,...u_k)$ with $u_i \in U_i$ for each $i \in \{1,...,k\}$. By the definition of component functions, we see that $u_i = (\pi_i \circ f)(a)=f_i(a)$ for all $i \in \{1,...,k\}$. Hence $f_i(a) \in U_i$ for all $i \in \{1,...,k\}$. By definition, we have that $a \in f^*_i(U_i)$ for all $i \in \{1,...,k\}$, so $a \in \bigcap_{i=1}^k f^*_i(U_i)$. Therefore $f^*(U_1 \times ... \times U_k) \subseteq \bigcap_{i=1}^k f^*_i(U_i)$.
Now, let $b \in \bigcap_{i=1}^k f^*_i(U_i)$. We have that $b \in f^*_i(U_i)$ for all $i \in \{1,...,k\}$. By definition, $f_i(b) \in U_i$ for all $i \in \{1,...,k\}$. This implies that $(f_1(b),...,f_k(b)) \in U_1 \times ... \times U_k$, so $f(b) \in U_1 \times ... \times U_k$. By definition, we conclude that $b \in f^*(U_1 \times ... \times U_k)$. Therefore $\bigcap_{i=1}^k f^*_i(U_i) \subseteq f^*(U_1 \times ... \times U_k)$.
With this, we conclude that $f^*(U_1 \times ... \times U_k) = \bigcap_{i=1}^k f^*_i(U_i)$.
$\square$
Thank you very much!
AI: A simpler proof:
\begin{align}
f^*(U_1 \times \cdots \times U_k)
&= \{a \in A : f(a) \in U_1 \times \cdots \times U_k\} \\
&= \{a \in A : (f_1(a),\dots,f_k(a)) \in U_1 \times \cdots \times U_k\} \\
&= \{a \in A : f_i(a) \in U_i \textrm{ for all } i \in \{1,\dots,k\}\} \\
&= \{a \in A : a \in f_i^*(U_i) \textrm{ for all } i \in \{1,\dots,k\}\} \\
&= \bigcap_{i=1}^k f_i^*(U_i).
\end{align}
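Not needed for the proof, but if a concrete illustration helps, here is a tiny check with finite sets in Python (the sets and the component functions are just made-up data); the preimage of the product does come out equal to the intersection of the component preimages:
# f maps A = {0,1,2,3} into A1 x A2 via its component functions f1, f2
A = {0, 1, 2, 3}
f1 = lambda a: a % 2           # component into A1 = {0, 1}
f2 = lambda a: a // 2          # component into A2 = {0, 1}
U1, U2 = {1}, {0, 1}
product = {(u1, u2) for u1 in U1 for u2 in U2}
pre_product = {a for a in A if (f1(a), f2(a)) in product}
pre_intersection = {a for a in A if f1(a) in U1} & {a for a in A if f2(a) in U2}
print(pre_product, pre_intersection)   # both are {1, 3}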
|
H: How can we erase graph of $f(x)=-10a((x/a)-[x/a])$ from specific parts?
The function
$$f(x)=-10a(\frac{x}{a}-[\frac{x}{a}])$$
I want to erase some parts of the graph: from $x=a$ to $2a$, from $3a$ to $4a$, and so on.
How can I accomplish that? I have no idea how to do it; please also suggest any resource from which I can learn about these transformations.
AI: You can do a piecewise-defined function:
$\forall\ \text{even } n\ge0$,
$f(x)=\begin{cases}\text{undefined}&a(n+1)<x<a(n+2)\\-10a\left(\frac xa-\left[\frac xa\right]\right)&\text{for all other values of }x\end{cases}$
Not sure if this is the right notation, but I basically want $n$ to run over all the even whole numbers, so that the erased $x$-values lie between the successive odd and even multiples of $a$: from $a$ to $2a$, from $3a$ to $4a$, and so on.
Also, $-10a\left(\frac xa-\left[\frac xa\right]\right)$ simplifies down to zero for $a\ne0$ just in case you didn't know.
|
H: what is the projective dimension of $ (x,y)\mathbb{C}[x,y]_{(x,y)}$?
For the local ring $R = \mathbb{C}[x,y]_{(x,y)}$ and its maximal
ideal $M = (x,y)\mathbb{C}[x,y]_{(x,y)}$.
What is the projective dimension $\operatorname{pd}_R(M)$ of M?
My thought:
I tried to construct a minimal free resolution of $M$, $\cdots \rightarrow L_2 \rightarrow L_1 \rightarrow L_0 \rightarrow M \rightarrow 0 $.
Define $L_0 := R \oplus R$ since $ M = Rx + Ry$.
However, I couldn't prove that $L_0 \otimes_R k \cong M \otimes_R k$ , where $k = R/M$.
My questions are:
Is it correct to set $L_0 := R \oplus R$. If this is correct, then
How to prove $L_0 \otimes_R k \cong M \otimes_R k$.
AI: Let $\varphi \colon L_{0} \to M$ denote the surjection sending $(r, s)$ to $rx +sy$. Note that $\varphi$ descends to a $k$-linear surjection $\varphi \otimes_{R} \mathrm{Id}_{k} \colon L_{0} \otimes_{R} k \to M \otimes_{R} k$. Since $L_{0} \otimes_{R} k$ is a $k$-vector space of dimension $2$, it follows that $\varphi \otimes_{R} \mathrm{Id}_{k}$ is an isomorphism of $k$-vector spaces if $M \otimes_{R} k$ has $k$-dimension $2$. On the other hand, $M \otimes_{R} k \cong M/M^{2}$ as $k$-modules, and it is not too hard to show that $M/M^{2}$ has $k$-basis given by the residue classes of $x, y$. This answers your question (2).
You can complete your complex by observing that the morphism $\psi \colon R \to L_{0}$ sending $r \in R$ to $(-ry, rx)$ is injective with image precisely $\ker(\varphi)$. Indeed, the containment $\mathrm{Im}(\psi) \subset \ker(\varphi)$ is obvious. On the other hand, if $rx + sy = 0$, then $rx = -sy$. Since $R$ is a UFD and $x, y$ are coprime irreducibles, one obtains $r = yu, s = xv$ for some $u, v \in R$. The equation $rx + sy = 0$ then reads $xyu + xyv = 0$, so $v = -u$.
This is a projective resolution of $M$ of minimal length. This follows because if there were a shorter resolution, then $M$ would be projective. A finitely generated projective module over a local ring is free of finite rank. But $M$ is an ideal of $R$, and so can only be free if it has rank $1$. This cannot be the case by the argument above: an $R$-linear surjection $R \to M$ would descend to a $k$-linear surjection $R \otimes_{R} k \to M/M^{2}$, which is impossible, since $M/M^{2}$ has dimension $2$ as a $k$-vector space.
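Summarising the two paragraphs above, the minimal free resolution is
$$0 \longrightarrow R \xrightarrow{\;\psi\;} R \oplus R \xrightarrow{\;\varphi\;} M \longrightarrow 0,$$
so $\operatorname{pd}_R(M) = 1$: the resolution gives $\operatorname{pd}_R(M) \le 1$, while $\operatorname{pd}_R(M) = 0$ would mean $M$ is projective, which the last paragraph rules out.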
|
H: Solve the following non-linear system of equations $x = \alpha \log(y/(1-y)), y = \alpha \log(x/(1-x)) - \beta$ in terms of $\alpha, \beta$
I have the following non-linear equations,
$$x = \alpha \log(y/(1-y))$$
$$y = \alpha \log(x/(1-x)) - \beta $$
where $\alpha, \beta$ are constants, $\log$ is the natural logarithm.
I wish to solve for $x$.
Is it possible? It seems that by plugging in $x$ into the $y$ equation, I get something like $$y = \alpha \log(\alpha \log(y/(1-y))/(1-\alpha \log(y/(1-y)))) - \beta$$ and I have no idea if it is possible to recover an analytical solution of $y$ in terms of $\alpha$ and $\beta$
AI: I do not think that you could obtain any analytical solution and then, hoping that I am correct, you will need to consider some numerical method.
Considering your equations
$$x=\alpha \log \left(\frac{y}{1-y}\right) \tag 1$$
$$y=\alpha \log \left(\frac{x}{1-x}\right)-\beta \tag 2$$ as you did, the second equation becomes,
$$y=\alpha \log \left(\frac{\alpha \log \left(\frac{y}{1-y}\right)}{1-\alpha
\log \left(\frac{y}{1-y}\right)}\right)-\beta\tag 3 $$ which does not look very pleasant.
Still thinking about a numerical method, I should define $$z=\alpha \log \left(\frac{y}{1-y}\right)$$ (which, by $(1)$, is just $x$) and, after solving $(2)$ for $x$, write the final equation as
$$\frac{e^{\frac{y+\beta }{\alpha }}}{1+e^{\frac{y+\beta }{\alpha
}}}=\alpha \log \left(\frac{y}{1-y}\right)\tag 4$$ which is probably better conditioned.
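If it is useful, here is one way to carry out such a numerical solution in Python with SciPy (a sketch only: the values $\alpha=1$, $\beta=0.5$ are made-up, and one should check that equation $(4)$ really changes sign on the chosen bracket):
from math import exp, log
from scipy.optimize import brentq
alpha, beta = 1.0, 0.5               # made-up example values
def F(y):
    # equation (4) as lhs - rhs, for y in (0, 1)
    lhs = exp((y + beta) / alpha) / (1.0 + exp((y + beta) / alpha))
    rhs = alpha * log(y / (1.0 - y))
    return lhs - rhs
y = brentq(F, 1e-9, 1.0 - 1e-9)      # root of (4)
x = alpha * log(y / (1.0 - y))       # recover x from (1)
print(x, y)
print(y - (alpha * log(x / (1.0 - x)) - beta))   # ~ 0, so (x, y) solves (2) as well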
|
H: Understanding an exercise (Ahlfors' Complex Analysis)
I have two questions about the solution of the following exercise, taken from Ahlfors' Complex Analysis.
In the first integral of the first equality, why is integrating along the curve $|z|=2$ equivalent to integrating along the curve $|z+1|=1$? And similarly for the second integral, but considering the curve $|z-1|=1$. I understand they're changing the original circle by a circle centered at the points $-1$ and $1$ and contained in $|z|=2$, but I can't find the theorem or result they're using to do so.
Why are they doing that change? Isn't it enough to say that both integrals along the original curve $|z|=2$ equal $2\pi i$ by Cauchy's integral formula?
AI: An answer to point 1. If two cycles are homologous then their integrals will be the same. The cycle formed by the one circle $|z|=2$ is homologous to the one formed by the two smaller circles, so the step works. I referred to Real and Complex Analysis by Walter Rudin, Theorem 10.35, for the details on this.
For point 2, I think you are right. Integrating $\frac{1}{z-1}$ over $|z|=2$ or $|z-1|=1$ should give the same result as both circles are homologous, so this step seems redundant.
|
H: Does $i(n) < \log (n)$ imply $\frac{\log i(n)}{n} \in o \left( \frac{\log n}{n} \right)$?
$i(n)$ is a sequence of nonnegative numbers (integers) indexed by $n$. I think it only implies $ ... \in O\left(\frac{\log n}{n} \right)$, yet the other assertion was made in some paper I am reading. Just wanted to confirm.
Here's the relevant section from the paper
AI: One way to check that $f(n)\in o(g(n))$ is to see if $\lim_{n\to\infty}\frac{f(n)}{g(n)}=0$.
In this case,
$$
\lim_{n\to\infty}\frac{[\log i(n)]/n}{[\log n]/n} = \lim_{n\to\infty}\frac{\log i(n)}{\log n} \leq \lim_{n\to\infty}\frac{\log\log n}{\log n} = 0
$$
using monotonicity of the logarithm, so $\frac{\log i(n)}n\in o\left(\frac{\log n}n\right)$.
|
H: $\operatorname{Hom}(\operatorname{Hom}(\mathbb{Q}$/$\mathbb{Z}$, $\mathbb{Q}),\Bbb{Z})$ is isomorphic to $0$
$\operatorname{Hom}(\operatorname{Hom}(\mathbb{Q}$/$\mathbb{Z}$, $\mathbb{Q}),\Bbb{Z})\cong\{0\}$ as $\mathbb{Z}$-modules.
Not sure how to see it. Any help would be appreciated!
AI: Notice that by the universal property of the quotient $\Bbb{Q}/\Bbb{Z}$, $\operatorname{Hom}_\Bbb{Z}(\Bbb{Q}/\Bbb{Z},\Bbb{Q})$ is in canonical bijection with $$S = \{f\in\operatorname{Hom}_\Bbb{Z}(\Bbb{Q},\Bbb{Q})\mid f(n) = 0\textrm{ for all }n\in\Bbb{Z}\}.$$ However, if $f\in S$ and $m/n\in\Bbb{Q},$ it follows that $$0 = f(m) = f(n\cdot m/n) = nf(m/n),$$ and since $\Bbb{Q}$ is torsion-free, this implies $f(m/n) = 0.$ As $m/n$ was arbitrary, it follows that $f = 0,$ and that $\operatorname{Hom}_\Bbb{Z}(\Bbb{Q}/\Bbb{Z},\Bbb{Q}) = \{0\}.$
|
H: If Cauchy's functional equation is continuous at some point, how to prove that it is continuous at every point? (Darboux Weakening)
Let $f$ satisfy Cauchy's functional equation, i.e.:
$$f(x_1 + x_2) = f(x_1) + f(x_2) \quad (1)$$
Wiki states that
Cauchy proved that (1) is continuous. This condition was weakened in 1875 by Darboux who showed that it was only necessary for the function to be continuous at one point.
Let $f$ be continuous at let's say $0$ i.e:
$$ \lim_{x\to0} f(x) = f(0) \quad (2)$$
How could we prove that $f$ is indeed continuous $ \forall x \in D(f)=\mathbb{R}$ based on (1) and (2)?
AI: Suppose $f:\mathbb R\to\mathbb R$ is an additive function, i.e., it satisfies the additivity equation $f(x+y)=f(x)+f(y)$, and suppose $f$ is continuous at $a$; I want to show that $f$ is also continuous at $b$. Now
$$f(x)=f(x+a-b)+f(b-a)=f(g(x))+f(b-a)$$
where
$$g(x)=x+a-b.$$
Since $g$ is continuous at $b$ and $f$ is continuous at $g(b)$, it follows that $f\circ g$ is continuous at $b$; since $f(b-a)$ is a constant, it follows that $f(x)=f(g(x))+f(b-a)$ is continuous at $b$.
|
H: Given positive real numbers $a, b, c$ with $ab + bc + ca = 1.$ Prove that $ \sqrt{a^{2} + 1} + \sqrt{b^{2} + 1} + \sqrt{c^{2} + 1}\leq 2(a+b+c).$
Given positive real numbers $a, b, c$ with $ab + bc + ca = 1.$ Prove that $$ \sqrt{a^{2} + 1} + \sqrt{b^{2} + 1} + \sqrt{c^{2} + 1}\leq 2(a+b+c).$$
I have no idea to prove this inequality.
AI: Note that $$\sqrt{a^2+1}= \sqrt{a^2+ab+bc+ca}=\sqrt{(a+b)(a+c)}\le \frac{(2a+b+c)}{2}~~\text{AM-GM}\,.$$
Adding the three similar results we prove that
$$\sqrt{a^2+1}+\sqrt{b^2+1}+\sqrt{c^2+1}\le \frac{4(a+b+c)}{2}=2(a+b+c)\,.$$
|
H: A falling object does not keep accelerating indefinitely but, due to air resistance, reaches a terminal speed. What is the terminal speed?
Suppose that the speed of such an object, $t$ seconds after the fall commences, is $v$ m/s where $v=$
$$\frac{200}{3}(1-e^{-0.15t})$$
Find the speed of the object after five seconds.
I have substituted $t=5$, getting approximately $35.2$ m/s.
What is the terminal speed?
I know the answer is: $$\frac{200}{3}m/s$$
But is there a formula to calculate this/what is the logic to getting this answer?
This is from a Year 12 Methods textbook.
AI: Since the falling object approaches its terminal velocity only in the long run, you have to take the limit of the velocity as $t$ approaches infinity.
$$v=\frac{200}{3}(1-e^{-0.15t})$$
As $t$ grows to $\infty$, the value of $e^{-kt}$ approaches zero. Hence we can replace $e^{-0.15t}$ by $0$ in our equation, and we get
$$v_{terminal}=\frac{200}3$$
|
H: Convergence of $\sum_{n=0}^\infty(z^n+\frac{1}{2^nz^n})$
I want to find the domain of convergence of $\sum_{n=0}^\infty(z^n+\frac{1}{2^nz^n})$.
My first thought was to use the ratio test, but that doesn't yield anything fun. So, I was wondering if we could play with Laurent series? I know $\sum_{n=0}^\infty z^n=\frac{1}{1-z}$ for $|z|<1$, but for some reason I am having trouble with $\sum_{n=0}^\infty \frac{1}{2^nz^n}$.
I know that $\frac{1}{1-z}=-\sum_{n=1}^\infty\frac{1}{z^n}$ for $|z|>1$. So, if I substitute $\frac{1}{2}z$ in for $z$ I get: $\frac{1}{1-\frac{z}{2}}=-\sum_{n=1}^\infty(\frac{1}{2z})^n$. So, after this I suppose I would get, for the series I just calculated, that $|z|>\frac{1}{2}$, and so combining with the first part, we get that $\frac{1}{2}<|z|<1$.
So, could I say that the series in question, $\sum_{n=0}^\infty(z^n+\frac{1}{2^nz^n})$, is really just the Laurent series $\frac{1}{1-z}-\frac{1}{1-\frac{z}{2}}-1$ centered at $z=0$, and so it converges in the annulus $\frac{1}{2}<|z|<1$?
AI: It is clear from your argument that the series converges for $\frac 1 2 <|z|<1$. To show that it does not converge outside this annulus just use the fact that if one series converges and another series diverges then the sum diverges. For example, $|z| \geq 1$ implies that $\sum z^{n}$ diverges whereas $\sum \frac 1 {2^{n}z^{n}}$ converges so the given series diverges.
|
H: Prove the following sequence converges
I am fairly confident that if $\alpha\in \mathbb{R}$ is such that $0<\alpha<1$, then the sequence
$$(a_n):a_n=n\alpha^n$$
converges to $0$. I created a generalization of a method found in Prove $ne^{-n}$ converges to zero for $0<\alpha<1/2$ in which you argue
$$n\alpha^n\leq \left(\frac{2}{(1/\alpha)}\right)^n$$
and use results about geometric series, but I am completely lost on the formal proof for $1/2<\alpha <1$. Maybe a hint to help me get started?
Alternatively, if such a hint/solution is overly complicated, I would greatly appreciate a simpler solution (if it exists) that can at least show the sequence is bounded.
AI: $e^{nt} \geq \frac {n^{2}t^{2}} 2$ for $t \geq 0$. Put $t =-\ln \alpha$. You get $\alpha^{-n} \geq cn^{2}$ for some $c>0$. Can you finish?
|
H: Convergence of $\sum_{n=0}^\infty \frac{(-1)^n}{z+n}$
I want to find the domain of convergence of the series $\sum_{n=0}^\infty \frac{(-1)^n}{z+n}$
I recently posted a similar question here: Convergence of $\sum_{n=0}^\infty(z^n+\frac{1}{2^nz^n})$ where I was able to use Laurent series and find an annulus in which the series does converge. However, for this problem, I am not sure if I can do that. The ratio and root test don't help too much either. So, I was wondering if there is a way to apply, maybe, Leibniz's criterion to a series of a complex variable? What would something like that look like? Maybe there is something else that would work? Any ideas, hints, etc. are (as always) greatly appreciated! Thank you.
AI: The series is not defined when $z \in \{0,-1,-2,...\}$. Assume that $z \notin \{0,-1,-2,...\}$ and consider $\sum (-1)^{n} [\frac 1 {z+n} -\frac 1 n] =\sum (-1)^{n} z\frac 1 {n(z+n)}$. Check that this series is absolutely convergent. Since $\sum (-1)^{n} /n$ is convergent it follows that the given series converges whenever $z \notin \{0,-1,-2,...\}$.
|
H: Compactness without using Heine-Borel in $L^p$ spaces
Consider the set of functions $S=\{\sin(2^nx):n\in\mathbb{N}\}$ in $L^2[-\pi,\pi]$ with the metric $d(f,g)=\left(\int_{-\pi}^{\pi}|f(x)-g(x)|^2dx\right)^{\frac1{2}}$. Then is $S$ both closed and bounded in $L^2[-\pi,\pi]$ but noncompact in it?
I think yes. The proof of boundedness seems easy. The proof of closedness, though a little tough, can be somewhat accomplished by noting that a function that converges in the given metric should also converge in the standard metric norm (I think). The proof of non-compactness or compactness baffles me. What is an open cover in $L^2[-\pi,\pi]$ for $S$? How to give an open cover that does not have a finite subcover? Any hints? Thanks beforehand.
AI: $\left(\frac 1 {\sqrt {\pi}} \sin kx\right)$ is an orthonormal sequence, since $\int_{-\pi}^{\pi}\sin^2(kx)\,dx=\pi$ and distinct sines are orthogonal. The given sequence is, up to the constant factor $\sqrt{\pi}$, a subsequence of this. The distance between any two distinct terms of the given sequence is therefore $\sqrt{2\pi}$. This proves that the set is closed and also that there is no convergent subsequence.
|
H: Probability axioms does not make sense?
Assume a unit square to be the sample space (the infinitely many points inside it being its elements). Let the points be $\{p_1, p_2, \ldots\}$
then, by probability axioms,
$$1 = \Pr(\{p_1\} \cup \{p_2\} \cup \cdots ) = \Pr(\{p_1\}) + \Pr(\{p_2\}) + \cdots = 0 + 0 + \cdots = 0$$
(as the probability of an individual point in the square is zero).
Where do I lack in understanding the logic of axioms?
AI: There is no sequence $(p_k)_{k\in\Bbb N}$ of points of the unit square $S$ such that $S=\{p_1,p_2,p_3,\ldots\}$. In other words, $S$ is not countable.
|
H: How does Synthetic Division for linear divisors $ax + c$ with $a>1$ work?
I used this guide from Mesa Community College to learn synthetic division. However it does not seem to work if $a>1$ in the divisor $ax + c$.
For example, for the problem $\frac{3x^3-5x^2+4x+2}{3x+1}$ from the same website, when I expand the solution in the picture below I get $$(3x^2-6x+6) (3x+1) = 9x^3-15x^2+12x+6\neq 3x^3-5x^2+4x+2$$ So are they wrong? How can synthetic division be done correctly for $a>1$? I also noticed that the expanded solution can be divided by three to get the expected polynomial; how can this be integrated into the synthetic division algorithm?
AI: Rewriting
$$
\frac{3x^3-5x^2+4x+2}{3x+1}=\frac{3x^3-5x^2+4x+2}{3\left(x+\frac{1}{3}\right)}
$$
you see that you have to divide the result obtained without taking the factor $3$ into account by $3$.
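For instance, carrying this out on the polynomial from the question: synthetic division of $3x^3-5x^2+4x+2$ by $x+\frac13$ (root $-\frac13$) gives the quotient $3x^2-6x+6$ with remainder $0$, and dividing that quotient by $3$ yields
$$\frac{3x^3-5x^2+4x+2}{3x+1} = \frac{3x^2-6x+6}{3} = x^2-2x+2,$$
which you can confirm by expanding $(3x+1)(x^2-2x+2)=3x^3-5x^2+4x+2$.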
|
H: How to prove this equation about calculation of matrix determinant?
How to prove the equation about the determinant of Matrix $M$, i.e.,
$|M|=\frac{(M \cdot a) \times (M \cdot b) \cdot (M \cdot c)}{a \times b \cdot c}$
where $a$, $b$ and $c$ are arbitrary vectors.
This equation is encountered in An Introduction to Continuum Mechanics, p. 55, authored by G.N. Reddy. It's subsequently used to prove that the determinant of the Deformation Gradient is the change of volume during deformation.
I would be grateful if somebody could shed some light on it.
AI: For all vectors $a, b, c \in{\mathbb R}^3$, one has $(a\times b)\cdot c = \det(a,b,c)$ by Laplace expansion. Hence
\begin{equation}
(M a \times M b)\cdot M c = \det(M a, M b, M c) = \det(M[a, b, c]) =
\det(M)\det(a, b, c) = \det(M) (a\times b)\cdot c
\end{equation}
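(Note that the formula in the question requires $(a\times b)\cdot c \neq 0$, i.e. $a,b,c$ linearly independent, so that the denominator is non-zero.) If a quick numerical check is helpful, here is a small NumPy sketch with made-up random data:
import numpy as np
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
a, b, c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
lhs = np.linalg.det(M)
rhs = np.dot(np.cross(M @ a, M @ b), M @ c) / np.dot(np.cross(a, b), c)
print(lhs, rhs)   # equal up to floating-point error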
|
H: How to prove that the first derivative of $ \left| ln(x) \right| $ exists?
I am trying to prove that the first derivative of $ \left| ln(x) \right| $ exists.
$$ \lim_{h \to 0} \frac{f(x_0-h) -f(x_0)}{h} = \lim_{h \to 0} \frac{\ln(x_0-h) -\ln(x_0)}{h} = \lim_{h \to 0} \frac{\ln(\frac{x_0-h}{x_0})}{h} = \lim_{h \to 0} \frac{\ln(1 -\frac{h}{x_0})}{h} \quad (1) $$
But I don't know how to continue this.
Any ideas (without D'Hopital on limits)?
AI: Differentiation is a local property, as we are computing limits at a certain point. Note that if $x_0\in(0,1)$, then $|\ln(x)|=-\ln(x)$ in a small interval around there, so $|\ln(x)|$ is differentiable there. Likewise if $x_0>1$: it will be $|\ln(x)|=\ln(x)$ in a small interval around $x_0$, so it will be differentiable at $x_0$. But at $x_0=1$, your function is not differentiable and I leave this as an exercise to you! (hint: look at the other answer)
|
H: Convolution - Heaviside
I'm having a hard time seeing how $\int_0^t f(u)H(t-u-1)du$ where H is the Heaviside function, is equal to $0$ for $t<1$ and $\int_0^{t-1}f(u)du$ for $t>1$. I know of course that $H(x)$ is generally zero for $x<0$ and $1$ for $x>0$ but I don't see what happened here. Thank you!
AI: Let's put the question in a form where $H(x)$ is considered explicitly, without compounding it with the linear map $u\mapsto t-u-1$. Consider the convolution integral and apply the change of variable $t-u-1\mapsto y$: then
$$
\begin{split}
\int\limits_0^t f(u)H(t-u-1)\mathrm{d}u &= \int\limits^{t-1}_{-1} f(t-y-1)H(y)\mathrm{d}y\\
& =
\begin{cases}
0 & t\le 1\\
\\
\displaystyle\int\limits^{t-1}_{0} f(t-y-1)\mathrm{d}y & t >1
\end{cases} =
\begin{cases}
0 & t\le 1\\
\\
\displaystyle\int\limits^{t-1}_{0} f(u)\mathrm{d}u & t >1
\end{cases}
\end{split}
$$
|
H: Calculating the mean of a simple birth process
Consider a population in which each individual gives birth after an exponential time of parameter $\lambda$, all independently.
If $i$ individuals are present then the first birth will occur after an exponential time of parameter $i\lambda$.
Then we have $i + 1$ individuals and, by the memoryless property, the process begins afresh.
We denote with $X_t$ the number of individuals at time $t$ and suppose $X_0 = 1$.
Let $T$ be the time of the first birth.
We want to calculate $\mathbb{E}(X_t)$,
$$\begin{align*}
\mathbb{E}(X_t) & = \mathbb{E}(X_t1_{T \leq t}) + \mathbb{E}(X_t1_{T > t}) \\
& = \int_0^t \lambda e^{-\lambda s}\mathbb{E}(X_t | T = s)ds + e^{-\lambda t}
\end{align*}$$
My question is: why $\mathbb{E}(X_t1_{T > t})=e^{-\lambda t}$?
I know $\mathbb{P}(T>t)=e^{-\lambda t}$ but I don't see how the expectation simplifies.
AI: If $T>t$ then there is no birth till time $t$, so $X_t$ is the same as $X_0$, which is $1$. Hence $E[X_t 1_{T>t}]=P(T>t)=e^{-\lambda t}$.
|
H: Construct a homeomorphism between $S^1/\rho$ and $S^1$
Construct a homeomorphism between $S^1/\rho$ and $S^1$ (the unit circle)
where $S^1=\{(x,y)\in \mathbb{R}^2|x^2+y^2=1\}$ and the equivalence relation is $$(x',y')\rho(x'',y'') \iff y''\leq 0 \text{ and } y'\leq 0.$$
I get it intuitively, I know this equivalence relation is identifying the part of the circle below and on the $x$-axis as a single class in the quotient, so it is like shrinking that part to a single point, so that the upper part of the circle closes up in the quotient, and therefore it should be homeomorphic to a circle $S^1$.
Now I am having trouble finding the expression for the homeomorphism
I think I could do something like $f(t)=(\cos(2t), \sin(2t))$, with $t\in [0,\pi]$, to parametrize $S^1$, where $t$ is the arc-length along the upper half of $S^1$, but I need to relate it to the quotient space and write everything in Cartesian coordinates.
Can someone shed some light?
The expression for f should be such that
$f(x',y')=f(x'',y'') \iff (x',y')\rho (x'',y'')$.
AI: Instead of using polar coordinates and trying double the angle for positive $y$-coordinates, you might consider the following projection:
Inside the unit circle $C_1$ place another circle $C_2$ with center $(0,\tfrac 1 2)$ and radius $\tfrac 1 2$. Now given any $(x,y)\in C_1$ the line segment between $(x,y)$ and the origin intersects $C_2$ in a unique point. See this illustration using GeoGebra.
The projected point on $C_2$ stays stationary when $y\le 0$, which is exactly what you need to define your homeomorphism.
The unit circle $C_1$ is given by $x^2+y^2=1$, the second circle $C_2$ is given by $x^2+(y-\frac 1 2)^2 = \frac 1 4$. The line segment between $(x,y)$ and the origin consists of the points $(tx,ty)$ for $t\in[0,1]$.
Hence, the intersection point satisfies
$$
(xt)^2+\left(yt-\frac 1 2\right)^2 = \frac 1 4,
$$
which is equivalent to
$$
\underbrace{(x^2 + y^2)}_{=1} t^2 - yt = 0.
$$
When $y\ge 0$, solving for $t$ yields $t=0$ (the origin) and $t=y$,
corresponding to the point $(xy,y^2)$.
Now shifting and scaling yields a homeomorphism $C_2\to S^1$ given by $(x,y)\mapsto (2x,2y-1)$ that sends our intersection point on $C_2$ to
$$
\left( 2xy, 2y^2 - 1 \right).
$$
Hence, the map
\begin{align*}
S^1 &\longrightarrow S^1 \\
(x,y) &\longmapsto
\begin{cases} (2xy, 2y^2 - 1) & y > 0, \\
(0,-1) & y \le 0
\end{cases}
\end{align*}
induces the desired homeomorphism $S^1/\rho \to S^1$.
|
H: Doubt about solution to Axler's Linear Algebra Done Right problem
I am confused about a solution by Stanford's MATH113 class to a problem in Sheldon Axler's Linear Algebra Done Right, 3rd Ed. I have seen solutions elsewhere (on Slader) that are very similar.
The question (3.A.11, pg 58) is below, where $\mathcal{L}(V, W)$ denotes the set of all linear maps from $V$ to $W$.
Suppose $V$ is finite-dimensional. Prove that every linear map on a subspace of $V$ can be extended to a linear map on $V$. In other words, show that if $U$ is a subspace of $V$ and $S \in \mathcal{L}(U, W)$, then there exists $T \in \mathcal{L}(V, W)$ such that $Tu = Su$ for all $u \in U$.
Below is the solution from Stanford's MATH113 class, Fall 2015. I was unable to find any of the Propositions or Definitions mentioned.
Proof. Suppose $U$ is a subspace of $V$ and $S \in \mathcal{L}(U, W)$. Choose a basis $u_1, \ldots, u_m$ of $U$. Then $u_1, \ldots, u_m$ is a linearly independent list of vectors in $V$ and so can be extended to a basis $u_1, \ldots, u_m, v_1, \ldots, v_n$ of $V$ (by Proposition 2.33). Using Proposition 3.5, we know that there exists a unique linear map $T \in \mathcal{L}(V, W)$ such that
\begin{align}
Tu_i = Su_i \quad &\text{for all} \quad i \in \{1, 2, \ldots, m\} \\
Tv_j = 0 \quad &\text{for all} \quad j \in \{ 1, 2, \ldots, n \} .
\end{align}
Now we are going to prove $Tu = Su$ for all $u \in U$. For any $u \in U$ $u$ can be written as $a_1 u_1 + \cdots + a_m u_m$. Since $S \in \mathcal{L}(U, W)$, by Definition 3.2 we have
$$Su = a_1 Su_1 + a_2 S u_2 + \cdots + a_m S u_m.$$
Since $T \in \mathcal{L}(V, W)$, we have
\begin{align}
Tu &= T(a_1 u_1 + \cdots + a_m u_m) \\
&= a_1 T u_1 + a_2 T u_2 + \cdots + a_m T u_m \\
&= a_1 S u_1 + a_2 S u_2 + \cdots + a_m S u_m \\
&= Su.
\end{align}
Therefore we have $Tu = Su$ for all $u \in U$, so we have proved that every linear map on a subspace of $V$ can be extended to a linear map on $V$.
The solution proves that $Tu = Su$ for all $u \in U$, but I do not see how that is sufficient to show that $T$ is linear — what about those elements in $V$ that are not in $U$?
Suppose $U'$ is the complementary subspace to $U$ (that is, $U \oplus U' = V$). I believe any element in $U'$ can be expressed as a linear combination of $v_1, \ldots, v_n$. Now take $a, b \in V$ and $\lambda \in \mathbb{F}$. To prove $T$ is linear, I believe we need to show that
$$ T(\lambda a + b) = \lambda T(a) + T(b) $$
holds, even if (among other combinations) $a \in U$ and $b \in U'$. How (if at all) does the Stanford proof address this? It seems to me that they only consider the case where both $a$ and $b$ are in $U$.
AI: This point is addressed "under the hood" in the statement
Using Proposition 3.5, we know that there exists a unique linear map $T \in \mathcal{L}(V, W)$ such that
\begin{align}
Tu_i = Su_i \quad &\text{for all} \quad i \in \{1, 2, \ldots, m\} \\
Tv_j = 0 \quad &\text{for all} \quad j \in \{ 1, 2, \ldots, n \} .
\end{align}
It is specified that $T$ is a linear map, so we already know that $T$ is linear.
As for what $T$ does to elements not in $U$: a vector $v \notin U$ can be expressed as
$$
v = a_1 u_1 + \cdots + a_m u_m + b_1 v_1 + \cdots + b_n v_n,
$$
and the fact that $v \notin U$ tells us that one of the coefficients $b_j$ is non-zero. We find that
$$
T(v) = a_1 T(u_1) + \cdots + a_m T(u_m) + b_1 T(v_1) + \cdots + b_n T(v_n)
\\ = a_1 T(u_1) + \cdots + a_m T(u_m).
$$
Note that they could have equivalently made a proof using a complementary subspace. The complementary subspace corresponding to the map constructed in the proof is $U' = \operatorname{span}\{v_1,\dots,v_n\}$, and $T$ was defined so that $T|_{U'} = 0$.
|
H: Is $2^{2^m-2}+1$ always a composite number for $m>2$?
Is $2^{2^m-2}+1$ always a composite number for $m>2$ ?
I really don't have any idea how to prove or disprove this; it is given to me that it is true. Since nothing else is coming to my mind, I tried to prove it by induction, but it was not useful.
Could someone please guide me how to prove this ?
Thanks !
AI: For any odd n, n>1, we have
$$ 2^{2n}+1 = (2^n+1)^2 - \big(2^{(n+1)/2}\big)^2 = \big( 2^n+2^{(n+1)/2}+1 \big) \big( 2^n-2^{(n+1)/2}+1 \big). $$
Moreover
$$ 2^n+2^{(n+1)/2}+1 > 2^n-2^{(n+1)/2}+1 = 2^{(n+1)/2}\big( 2^{(n-1)/2}-1 \big) + 1 > 1 $$
for $n>1$.
Therefore, $2^{2n}+1$ is composite whenever $n$ is odd and $n>1$.
In the given problem, $n=2^{m-1}-1$ is odd, and $m>2$ translates to $n>1$. $\blacksquare$
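A quick Python sanity check of this factorization for the first few values of $m$ (not part of the proof, just a verification of the identity above):
for m in range(3, 9):
    n = 2 ** (m - 1) - 1                    # odd and > 1 for m > 2
    N = 2 ** (2 ** m - 2) + 1               # the number in question, equal to 2^(2n) + 1
    f1 = 2 ** n + 2 ** ((n + 1) // 2) + 1
    f2 = 2 ** n - 2 ** ((n + 1) // 2) + 1
    print(m, f1 * f2 == N, 1 < f2 < f1 < N)  # True True for every m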
|
H: Looking for a paper in the game theory literature
I am looking for a paper I've read several years ago, but I cannot find it using google. I think it is quite well known.
It is about prices and their indication about quality. There are informed and uninformed buyers, as the informed buyers know the value of goods, prices tend to reflect the value/quality. Uninformed buyers can take advantage of this by using the price as a signal of quality.
Does anybody know the author/title and year of this paper?
AI: OLIGOPOLISTIC PRICE COMPETITION
WITH INFORMED AND UNINFORMED
BUYERS
by
Michal Ostatnický
https://www.cerge-ei.cz/pdf/wp/Wp413.pdf
Any good ?
Reputation: The New Palgrave Dictionary
Martin Cripps, March 2006
https://discovery.ucl.ac.uk/id/eprint/14446/1/14446.pdf
I Googled using >>"uninformed buyer" game theory<<
Looks like an interesting topic
|
H: Calculate $Y$ based on new$(X,Y)$ coordinates (from horizontally and vertically translated $x,y$)
For this quadratic function, I need to translate the $(x,y)$ coordinate axes horizontally and vertically, so that the new origin of the coordinate system is located at the point with old coordinates $(2,3)$.
The new $(X,Y)$ coordinates are therefore related to the old $(x,y)$ -system by the equations:
Question -1
Since the new origin has old coordinates $2$ and $3$ for $x$ and $y$ respectively, I subtract them from $x$ and $y$. Even though it is marked correct when I enter it into the system, I don't fully understand why I need to subtract rather than add.
If you could guide me for further reading or explanation, that would be great.
$X = x -2 $
$Y = y -3 $
$ y = 2x^2−3x+1$
I need to substitute these $X = x -2, Y = y -3$, into the formula $y = 2x^2−3x+1$, in order to express Y as a function of X.
Below is the work.
$$y = ax^2 + bx + c = a\left(x + \frac{b}{2a}\right)^2 + \left(c-\frac{b^2}{4a}\right)$$
$$y = 2x^2-3x+1 = 2\left( x - \frac34\right)^2 + \left(1-\frac98\right) = 2\left( x - \frac34\right)^2 -\frac18$$
$$y + \frac18 = 2\left( x - \frac34\right)^2$$
Let $Y= y + 1/8$
Let $X= x - 3/4$
Therefore $Y=2X^2$
Question -2
My calculation for $Y$ is as below. However, when I enter this answer, I still get an incorrect response. I have checked it many times, but still couldn't spot the mistake. Is my method incorrect?
$y -3 = 2(x-2)^2 - 3(x-2)+ 1$
$y = 2(x^2 -4x + 4) - 3x -6 + 1 + 3$
$y = 2x^2 -8x + 8 -3x -2$
$y = 2x^2 -11x + 6$
AI: Question 1:
I do not know if I can satisfy your first question. Simply put the new center coordinates into your equations:
$$ X= x-2 = 2-2 = 0, $$
$$ Y= y-3 = 3-3 = 0. $$
You see that $X$ and $Y$ are zero at the new center $x=2$ and $y=3$.
Or, vice versa, put the old center coordinates into the new system:
$$ -2= x-2, $$
$$ -3= y-3, $$
resulting in
$$ x=-2+2=0, $$
$$ y=-3+3=0. $$
You see that $x$ and $y$ are zero at the old center, for $X=-2$ and $Y=-3$.
Question 2:
In the second line there is a sign error; it should read
$$ y= 2 (x^2 -4 x + 4) - 3x + 6 + 1 + 3 .$$
|
H: Inequality on time to reach absorption for Markov chain
Take any Markov chain on the state set $\{0,1,...,n\}$ with the condition that the transition probability $P_{ij}$ to go from state $i$ to state $j$ is zero whenever $j>i$.
Define the random variable $T_n$ to be the number of steps before the process reaches the state $0$, if the process is started at state $n$.
Is it always true that $\displaystyle \mathbb{E}[T_n]\leq \sum_{i=1}^n \frac{1}{\sum_{j=0}^i P_{ij}(i-j)}$?
Example 1: Let $n=1$, $P_{1,1}=p$ and $P_{1,0}=1-p$. Then $\displaystyle\sum_{i=1}^n \frac{1}{\sum_{j=0}^i P_{ij}(i-j)}=\frac{1}{1-p}$.
We have $\mathbb{P}(T_1=k)=(1-p)p^{k-1}$ for $k\geq 1$ and so $\displaystyle \mathbb{E}(T_1)=\sum_{k=1}^{\infty}k\mathbb{P}(T_1=k)=(1-p)\frac{\mathrm{d}}{dp}\left(\frac{p}{1-p}\right)=\frac{1}{1-p}$
Example 2: Let $n=2$, $P_{2,0}=\frac{2}3$, $P_{2,1}=\frac{1}3$, $P_{2,2}=0$ and $P_{1,1}=P_{1,0}=\frac{1}2$. Then $\mathbb{P}(T_2=1)=\frac{2}3$ and $\mathbb{P}(T_2=k)=\frac{1}{3\cdot 2^{k-1}}$ if $k>1$.
In this case, $\displaystyle\sum_{i=1}^n \frac{1}{\sum_{j=0}^i P_{ij}(i-j)}=\frac{13}5$ and $\mathbb{E}(T_n)=\frac{5}3$
A few observations that may or may not help:
There is a recurrence relation $\mathbb{E}(T_n)=\sum_{k=0}^nP_{nk}\mathbb{E}[T_{k}+1]$.
Remove the zeroeth row and zeroeth column from the transition matrix. Then you get a matrix $Q$ with $Q_{ij}=P_{ij}$ for $i,j\ne 0$. Define the fundamental matrix $N=(1-Q)^{-1}$. $\mathbb{E}(T_n)$ is the $n$th element of $N\bf{1}$. $N$ is an upper triangular matrix.
My question appears to be related to this question.
The process looks similar to a renewal processes.
AI: If I understand the problem correctly, the inequality need not always hold.
Let $P=\begin{pmatrix}1&1/2&1/20\\0&1/2&0\\0&0&19/20\end{pmatrix}$, $Q:=\begin{pmatrix}1/2&0\\0&19/20\end{pmatrix}$, and $r:=(1/2,1/20)$.
Then the probability of taking exactly $k$ steps to reach the $0$th state, starting from state $2$, is $rQ^{k-1}e_2$. So the mean number of steps is $$\mathbb{E}(T_2)=\sum_{k\ge 1}k\,rQ^{k-1}e_2=r(I-Q)^{-2}e_2=20$$
The rhs of the inequality is $$\sum_{i=1}^2\frac{1}{\sum_{j=0}^iP_{ij}(i-j)}=\frac{1}{1/2}+\frac{1}{2/20}=12$$
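A quick numerical check of this counterexample (a sketch; $Q$ and $r$ are exactly the blocks above, i.e. from state $1$ the chain stays with probability $1/2$ and drops to $0$ otherwise, and from state $2$ it stays with probability $19/20$ and drops to $0$ otherwise):
import numpy as np
Q = np.array([[0.5, 0.0],
              [0.0, 0.95]])
r = np.array([0.5, 0.05])
e2 = np.array([0.0, 1.0])
N = np.linalg.inv(np.eye(2) - Q)
ET2 = r @ N @ N @ e2                        # E(T_2) = r (I-Q)^{-2} e_2
rhs = 1.0 / (0.5 * 1) + 1.0 / (0.05 * 2)    # the bound from the question
print(ET2, rhs)                             # 20.0 and 12.0, so the inequality fails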
|
H: $\int_{0}^{1}\frac{x}{\sqrt{1-x}}dx$
Solvable by setting $1-x=t$; I have a doubt about the limits of integration. If $x=1\rightarrow t=1-x=0$, while if $x=0\rightarrow t=1-x=1$, so we have $-\int_{1}^{0}\frac{1-t}{\sqrt{t}}dt=\int_{0}^{1}\frac{1-t}{\sqrt{t}}dt$?
Thanks in advance.
AI: To interchange the limits on the integral, reverse the sign.
$$
-\int_1^0\frac{1-t}{\sqrt{t}} dt = \int_0^1\frac{1-t}{\sqrt{t}} dt
$$
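For completeness, the transformed integral is then straightforward to evaluate:
$$
\int_0^1\frac{1-t}{\sqrt{t}}\,dt=\int_0^1\left(t^{-1/2}-t^{1/2}\right)dt=\left[2\sqrt{t}-\tfrac23 t^{3/2}\right]_0^1=\frac43 .
$$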
|
H: How to prove that two empty lists have the same elements in the same order using vacuous implication?
I know that to some extent this question may be insignificant, but I'm a little bit uncomfortable with the vacuous implication in this situation.
We know that two lists are equal iff they have the same length and the same elements in the same order. And we know that two empty lists are equal.
So how could we formally prove that two empty lists have the same elements in the same order?
(I think somehow vacuous implication could prove it, but I'm a little bit confused about how to construct the implication since we need to account for the order and the multiplicity of the elements in a list)
AI: If two lists both have length $17$, how can we establish that they "have the same elements in the same order"? What does that phrase mean?
One possible way to say two lists $A$ and $B$ have the same elements is, for all $x$,
$x$ is in $A$ if and only if $x$ is in $B.$
A possible way to say that the elements of $A$ and $B$ are in the same order,
given that they have the same elements, is that for all $x$ and $y$, $x$ and $y$ are in $A$ and $x$ occurs before $y$ in $A$ if and only if $x$ and $y$ are in $B$ and $x$ occurs before $y$ in $B$.
If $A$ and $B$ are empty then for all $x$ and $y$, $x$ does not occur in $A$ and $x$ does not occur in $B$, hence both directions of the implication for "same elements" are vacuously true.
Also, it is not true that $x$ and $y$ are in $A$ and also not true that $x$ and $y$ are in $B$, hence both directions of the implication for "same order" are vacuously true.
The ideas above seem to work if a list is an ordered set (with no repetition of elements) but not if repetition of elements is allowed.
So you probably need something different.
But it really depends on how you define a list in the first place.
A list of length $N$ might be defined as a function from the first $N$ integers to the elements of the list.
For example, suppose we say $A$ is a list if there exists a unique non-negative integer $N$ called the length of $A$,
such that if $k$ is an integer, $1 \leq k \leq N,$
then the $k$th element of $A$ exists and may be called $A(k).$
Then $A$ and $B$ have the same elements in the same order if their lengths are the same number $N$ and if $A(k) = B(k)$
for every integer $k$ where $1\leq k\leq N.$
Can you see how this is satisfied vacuously if $N= 0$?
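If it helps, here is that function-based definition written out as a tiny Python sketch (lists modelled as Python lists, with the $k$th element stored at index $k-1$); the element-wise check is vacuously satisfied when both lengths are $0$:
def lists_equal(A, B):
    # same length, and the k-th elements agree for every k
    if len(A) != len(B):
        return False
    return all(A[k] == B[k] for k in range(len(A)))   # empty range => vacuously True
print(lists_equal([], []))          # True: there is nothing to check
print(lists_equal([1, 2], [1, 2]))  # True
print(lists_equal([1, 2], [2, 1]))  # False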
But a list $A$ might be defined inductively as follows:
either $A$ is the empty list (containing no elements),
or $A$ is the ordered pair $(a,A')$ where $a$ is the first element of $A$ and $A'$ is a list.
That is, we make lists by inserting elements at the beginnings of existing lists, starting with the empty list.
Now the question is what it means for two such lists to have the same elements in the same order.
We might define this inductively too, that is, $A$ and $B$ are equal if they are both empty or have equal first elements inserted in front of equal lists. But in that case there's no "proof" that empty lists are equal; we had to accept it by definition in order to support the definition of equality for non-empty lists.
The same problem occurs if we define lists recursively by appending elements to existing lists, that is, a list $A$ is either the empty list or $(A',a)$ where $A'$ is a list and $a$ is an element.
So the question that first has to be answered is, what's a list?
Until that is answered with a mathematical definition, no proof is possible.
|
H: Move a function from the integrand into the differential in a Stieltjes-Integral
If I have an integral like this
$$\int_{0}^{\infty} e^{-st}f(t)d(\alpha(t)),$$
then is it possible to transform it into a "classic" Laplace-Stieltjes-Integral of the form
$$\int_{0}^{\infty}e^{-st}d(\alpha_2(t))?$$
My idea would probably be to calculate $\alpha'(x)$ and then write, as the differential, the integral of $\alpha' f$.
AI: Well you can't use the same "$\alpha(t)$" in both!
Given two measures, $\alpha(t)$ and $\beta(t)$, then $\int f(t)d\alpha(t)= \int d\beta(t)$ if and only if $d\beta(t)= f(t)d\alpha(t)$, which is the same as saying that the derivative of $\beta$ is $f(t)$ times the derivative of $\alpha$.
|
H: The natural map $V^* \times W \to \text{Hom}(V,W) $ Is bilinear.
I want to know how to show that $$V^*\times W \to \text{Hom}(V,W) $$ $$(\varphi,w) \mapsto (V \ni v \mapsto \varphi(v) w \in W) $$
is bilinear.
I am currently learning things again for my exam that is coming up. I know that you need to show bilinearity with:
$f(x + x',y) = f(x,y) + f(x',y)$
$f(x*\alpha, y) = \alpha f(x,y)$
and the same for $y$ as well. My issue is just that I do not know how we could show that for the $\varphi$ in the map. Can we do something like this?
$f(\varphi + \mu, w) = (V \to W, (\varphi + \mu)(v)w) = (V \to W, \varphi(v)w + \mu(v)w)= (V\to W, \varphi(v)w) + (V \to W, \mu(v)w) $
Just wanted to make sure I did this right.
All the help is appreciated.
AI: This is written in a very cumbersome way; nobody writes it like that. I present a way to write down a solution nicely: The issue is that $f(\varphi,w) \in \operatorname{Hom}(V,W)$ is a map, and showing that
$$f(\varphi+ \mu,w) = f(\varphi, w) + f(\mu, w)$$
boils down to showing that two maps are equal! But two maps are equal if they are equal on every element of the domain, so for fixed $v \in V$ it suffices to show that
$$f(\varphi+\mu,w) (v) = f(\varphi,w)(v) + f(\mu,w)(v)$$
But this last thing is obvious:
$$f(\varphi+ \mu,w)(v) = (\varphi+\mu)(v)w = \varphi(v)w + \mu(v)w = f(\varphi,w)(v) + f(\mu,w)(v)$$
Similarly, you can check all the other axioms for bilinearity.
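For instance, additivity in the second argument is checked in exactly the same way, pointwise on an arbitrary $v\in V$:
$$f(\varphi, w+w')(v) = \varphi(v)(w+w') = \varphi(v)w + \varphi(v)w' = f(\varphi,w)(v) + f(\varphi,w')(v).$$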
|
H: Mean value theorem for integrals proof
Can you give me a proof of Mean value theorem for integrals without using Fundamental theorem of calculus (because I want to prove FTC using MVT for integrals).
AI: Let $f:[a,b]\rightarrow \mathbb{R}$ be continuous. By the extreme value theorem there exist $x_{m},x_{M} \in [a,b]$ such that $f(x_m)=m:= \inf_{x\in [a,b]}f(x)$ and $f(x_M)=M:=\sup_{x\in [a,b]}f(x)$. Now clearly
$$f(x_m)(b-a)=\int_a^b m \: dx \leq \int_a^b f(x) \: dx \leq \int_a^b M \: dx = f(x_M)(b-a).$$
From here the intermediate value theorem ensures the existence of a $\xi$ between $x_m$ and $x_M$, such that
$$f(\xi)=\frac{1}{b-a} \int_a^b f(x) \: dx$$
|
H: Possible orders of special element in a group
Let $G$ be a group. Let $x$ be an element of order $3$ and $y(\neq e)$ be an element of $G$ such that $xyx^{-1} = y^3$. Then what are all the possible orders of the element $y$?
My attempt:
Since the orders of the elements $y$ and $y^3$ are the same, if the order $n$ of $y$ is finite then $\gcd (3,n)=1$, i.e. $3 \nmid n$. Now consider $H = \left<x\right>$ and $K = \left<y\right>$; then $H,K$ are subgroups of $G$. Suppose $HK$ is a subgroup. If $3\mid (n-1)$ then we can find a group that has order $3n$ and that is non-Abelian. So $n=3k+1$.
AI: Hint: We have $x^nyx^{-n}=y^{3^n}$ for all $n\ge 1$. For $n=3$ we have $x^3=e$, so that $y=y^{27}$, i.e., $y^{26}=e$. Now the possible orders are included in the set $\{2,13,26\}$. Note that $y\neq e$.
|
H: Turning point of equation
Using differentiation, find the turning points of
$$
(x^{2}+y^{2}-x)^2=x^{2}+y^{2}
$$
Thanks!
AI: We have
\begin{equation}
(x^{2}+y^{2}-x)^2=x^{2}+y^{2}\tag1\label{eq:1}
\end{equation}
Differentiate both sides and put $y'=0$.
\begin{align*}
&(x^{2}+y^{2}-x)^2=x^{2}+y^{2}\\
\implies &2(x^2+y^2-x)(2x+2yy'-1)=2x+2yy'\\
\implies &(x^2+y^2-x)(2x-1)=x\tag2\label{eq:2}
\end{align*}
Using \eqref{eq:2} in \eqref{eq:1}, we get,
\begin{align*}
\left(\dfrac{x}{2x-1}\right)^2=x^2+y^2\tag3\label{eq:3}
\end{align*}
Using \eqref{eq:3} back into \eqref{eq:2}, we get,
\begin{align*}
&\left(\dfrac{x}{2x-1}\right)^2-x=\dfrac{x}{2x-1}\\
\implies &\left(\dfrac x{2x-1}-\dfrac12\right)^2=x+\dfrac14\\
\implies &\dfrac{1}{4(2x-1)^2}=\dfrac{4x+1}{4}\\
\implies &(4x+1)(2x-1)^2=1\tag4\label{eq:4}
\end{align*}
Now, \eqref{eq:4} is a cubic equation with $0$ as one of the roots.
|
H: Doubt on the definition of a dynamical system
I am studying control theory and I am focusing on dynamical systems. In the introduction of my professor's notes, a dynamical system is defined as a system given by three elements:
time
a set of functions defined on the time interval, W
the behaviour of the system at the given time
I am clear about 1. and 3., but I really can't understand what he means by $W$. Since we are considering a dynamical system in the context of control theory, I would guess that these are the inputs and the outputs, but what can they be in a general context?
For example, consider the evolution of a population. This is clearly a dynamical system, and can be modeled as:
$\dot{x}(t)=cx(t)$
In this case, what is the set of functions $W$ as specified in 2.?
AI: Consider a slight modification of your system:
$\frac{d}{dt} x(t)=u(t)x(t)$.
Now, I think you would be comfortable with $W=\{u\}$, right? Now consider that we additionally require $u$ to be continuous. Does this change $W$? You will now probably ask: "why should it?"
And now comes the trick: by the same logic, you will have to admit that requiring $u$ to be constant should not change $W$. Now set $u(t)=c$ and you are back at your original example.
|
H: Starting with $\frac{X+A}{a}=\frac{B}{b}$ and $X-A=B$, derive $A=\frac{a-b}{a+b}X$ and $B=\frac{2b}{a+b}X$.
I've tried a few approaches, to no avail. Thanks in advance.
AI: The first equation gives $b(X+A)=aB$ and substituting $B=X-A$ gives $b(X+A)=a(X-A)$.
Re-arranging gives $X(b-a)=-A(a+b)$ so $A=\frac{b-a}{-(a+b)}X=\frac{a-b}{a+b}X$ and then $B=X-A=X-\frac{a-b}{a+b}X=\frac{a+b}{a+b}X-\frac{a-b}{a+b}X=\frac{2b}{a+b}X$.
|
H: In Category Theory can id be considered an isomorphism?
The definition of isomorphism:
$f$ and $g$ are isomorphisms iff $f.g=id$ and $g.f=id$
Well
$id.id=id$
Does this make $id$ an isomorphism?
My intuition says no, because that would break the terminal object definition, right?
A terminal object $t$ is an object such that there is one and only one morphism $f_{t}$ going to it from any object.
I understood that if I have two terminals $t_1$ and $t_2$ there must be isomorphisms $f_{t_1} : t_1 \rightarrow t_2$ and $f_{t_2} : t_2 \rightarrow t_1$, and that $f_{t_1}.f_{t_2}=id_{t_1}$
But if I take $id_{t_1}$ as an isomorphism, and also $f_{t_2}$, then there would be two isomorphisms from "any object" to $t_1$, and that would break the "unique up to unique isomorphism", wouldn't it?
Or does that mean that $f_{t_1}$, $f_{t_2}$, $id_{t_1}$, $id_{t_2}$ are all the same morphism? (That would make sense for a poset.)
AI: $id$ is an isomorphism, yes.
In your case, you have to be careful about domains and codomains. For instance, $f_{t_2}$ is the unique morphism from $t_2$ to $t_1$, if $t_1$ is terminal.
And similarly, $id_{t_1}$ is the unique morphism from $t_1$ to $t_1$, if $t_1$ is terminal.
Maybe recalling the definition of terminal object can help : $x$ is terminal if and only if, for each $y$ in the category, there is a unique morphism from $y$ to $x$. So, this unique morphism could be written $f_y$, since it depends only on $y$ (assuming $x$ is fixed).
|
H: Diffie Hellman - bad choice of parameters
In the Diffie-Hellman algorithm a key is generated: A and B choose random numbers $a,b$ respectively that only they know. A prime number $p$ and a generator $g$ are given. The key is defined as $g^{ab} \bmod p$. I have to justify that $(a,b,g,p)=(3,4,15,31)$ is a bad choice, but the reason is not that the numbers are too small.
I have no idea how to solve this but I have thought about it for more than an hour.
I thought that maybe the powers $15^x \bmod p$ only form a proper subset of $\{0,...,30\}$, but I don't know what I should do with this information even if it were true.
AI: Your intuition is correct.
$15$ is not a primitive root modulo $31$, and its order is $10$. On average, an attacker has to perform only $5$ guesses to recover one secret key (which is enough to break the system).
A better choice would have been, for example, $g = 17$.
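A quick way to check these order claims in Python (brute force is fine, since the group is tiny):
def order(g, p):
    # multiplicative order of g modulo the prime p
    k, x = 1, g % p
    while x != 1:
        x = (x * g) % p
        k += 1
    return k
print(order(15, 31))   # 10 -> only 10 distinct powers, so few possible keys
print(order(17, 31))   # 30 -> 17 is a primitive root mod 31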
|
H: Is the set $V=U\cap-U$ balanced?
Let $E$ be a topological vector space and $U$ be an arbitrary neighborhood of $0$. I would like to know if $V=U \cap -U$ is balanced, that is $\lambda V \subset V$ for all $\lambda \in \mathbb{C}$ such that $|\lambda|\leq 1$.
I used this neighborhood to prove that every connected topological group $G$ can be written as $G=\bigcup_{n=1}^{\infty} U^n$ where $U$ is a neighborhood of $e$ in $G$. Now I was trying to see if this set has more properties when we are working with topological vector spaces.
But I was unable to prove that fact. If $\lambda \in \mathbb{C}$ such that $|\lambda|\leq 1$ and $x \in V$ then $x \in U$ and $x \in -U$. Thus, $x=u_0$ and $x=-u_1$ with $u_0,u_1 \in U$. I can't see a way to continue.
AI: No: take $E=\Bbb C$ and $U$ an open square centred at zero. Then $U=-U$
but $V=U$ is not invariant under multiplication by $\frac15(3+4i)$ say.
|
H: $16=m^{19} \mod 143$ - what is $m$
The background is RSA encryption. Can I use some theorem to exploit this situation?
I thought about Fermat's little theorem but I don't know how to use it here.
(Fermat's little theorem: if $p$ is prime and $a$ is coprime to $p$, then $a^{p-1} - 1$ is divisible by $p$.)
AI: Hint:
We can't apply Fermat's theorem because $143 = 11 \cdot 13$ is not prime. But we can apply it to the prime factors:
$$
m^{10} \equiv 1 \bmod 11,
\quad
m^{12} \equiv 1 \bmod 13
$$
Thus, solve $19x \equiv 1 \bmod 10$ and $19x \equiv 1 \bmod 12$. Then $m = m^1 \equiv m^{19x} \equiv 16^x \bmod 143$.
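If you want to carry the hint out mechanically, here is a short Python sketch (it brute-forces the exponent $x$ modulo $\operatorname{lcm}(10,12)=60$ and then recovers $m$; run it to see the value):
p, q = 11, 13
n = p * q                       # 143
lcm = 60                        # lcm(10, 12)
x = next(x for x in range(1, lcm) if (19 * x) % lcm == 1)   # solves both congruences
m = pow(16, x, n)               # m = 16^x mod 143
print(x, m, pow(m, 19, n))      # the last value is 16 again, confirming m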
|
H: Is $f^2 \circ f=f \circ f^2$ true?
Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is a function from the set of real numbers to the same set with $f(x)=x+1$.
We write $f^{2}$ to represent $f \circ f$ and $f^{n+1}=f^n \circ f$.
Is it true that $f^2 \circ f = f \circ f^2$?
Why?
AI: Since composition of functions is associative:
$$
f\circ (g \circ h) = (f \circ g) \circ h
$$
then one may write
$$
f^2 \circ f = (f\circ f)\circ f = f\circ (f \circ f) = f\circ f^2.
$$
|
H: What is the expected number of additional rolls of dice to get $n$ even numbers consecutively, if we got $m$ ($0\le m<n$) already?
Basically, how many times should we roll the dice to get $n$ even numbers back to back? But the catch is that we have already started the trial and we got $m$ even numbers, where $m \ge 0$ and $m < n$.
That means I have already rolled $m$ times and got an even number every time. Now what is the expected number of additional rolls required to get $n$ consecutive even numbers?
AI: Call this number $e(m,n)$, and also set $e(n,n)=0$ (if you've just had a run of $n$
consecutive evens you need no more rolls).
Conditioning on the next roll gives, for $0\le m<n$,
$$e(m,n)=1+\frac{e(m+1,n)+e(0,n)}{2}.$$
Subtracting two consecutive instances gives
$$e(m+1,n)-e(m,n)=\frac{e(m+2,n)-e(m+1,n)}{2}$$
for $0\le m\le n-2$.
Therefore
$$e(m+1,n)-e(m,n)= 2^mA$$
with $A=e(1,n)-e(0,n)$. Adding these up gives
$$e(n,n)-e(0,n)=-e(0,n)=A(1+2+4+\cdots+2^{n-1}).$$
This provides a relation between $e(1,n)$ and $e(0,n)$ and from the initial relation
we get another:
$$e(0,n)=1+\frac{e(1,n)+e(0,n)}{2}.$$
These should be enough to find $e(0,n)$, $e(1,n)$ and so then $A$ and all the $e(m,n)$.
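For reference (worth re-deriving yourself), solving these two relations gives $A=-2$ and $e(0,n)=2^{n+1}-2$, and then in general
$$e(m,n)=2^{n+1}-2^{m+1},\qquad 0\le m\le n,$$
which satisfies the recurrence above and gives the familiar value $e(0,1)=2$.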
|
H: Plotting the Vertices of a Rotated Ellipse with Non-Origin Centre (MATLAB)
I'm trying to plot the vertices of an ellipse of the form:
$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$.
Here's my attempt:
A = -0.009462052409440;
B = 0.132811666715687;
C = -0.991096125887092;
D = 1.450474988439371;
E = -10.108254824293347;
F = -55.282226665842030;
% semi-major axis
a = -sqrt(2*(A*E^2 + C*D^2 - B*D*E + (B^2 - 4*A*C)*F)*((A+C) + sqrt((A-C)^2 + B^2)))/(B^2-4*A*C); % semi-major
% semi-minor axis
b = -sqrt(2*(A*E^2 + C*D^2 - B*D*E + (B^2 - 4*A*C)*F)*((A+C) - sqrt((A-C)^2 + B^2)))/(B^2-4*A*C); % semi-minor
% centre of ellipse
xx = (2*C*D -B*E)/(B^2-4*A*C);
yy = (2*A*E - B*D)/(B^2-4*A*C);
% the angle from the positive horizontal axis to the ellipse's major axis
if B ~= 0
theta = atan(1/B*(C-A-sqrt((A-C)^2 + B^2)));
elseif A < C
theta = 0;
else
theta = pi/2;
end
% plot ellipse
fimplicit(@(x,y) A*x.^2 +B*x.*y + C*y.^2 + D*x + E*y + F,'--')
xlim([68,86])
ylim([-1,1])
% plot centre point
hold on
plot(xx,yy,'rx')
% plot vertices
plot(xx + a*cos(theta),yy + a*sin(theta),'rx')
plot(xx + a*cos(-theta),yy + a*sin(-theta),'rx')
plot(xx + b*cos(theta+pi/2),yy + b*sin(theta+pi/2),'rx')
plot(xx + b*cos(theta-pi/2),yy + b*sin(theta-pi/2),'rx')
Result:
Clearly something is not quite right, but I can't seem to figure it out.
The formulae for axes and angle are taken from here:
https://en.wikipedia.org/wiki/Ellipse#General_ellipse
The ellipse needs to remain in its original form, so no transformations or rotations in the final result please.
AI: Everything is correct, I've just checked your formulas with GeoGebra. Your plot looks wrong because $x$ and $y$ axes don't have the same scale: to obtain a correct visualisation you need an aspect ratio $x:y=1$.
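(If it helps: in MATLAB an equal aspect ratio can typically be obtained by adding axis equal, or daspect([1 1 1]), after the plotting commands, so that the ellipse and its vertices are displayed without distortion.)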
|
H: If $a$ and $b$ are transcendental and algebraically dependent
If $a$ and $b$ are transcendental numbers and algebraically dependent, then for any algebraic $\alpha$ and $\beta$, it follows that a linear combination of $a$ with $\alpha$ and a linear combination of $b$ with $\beta$ are also algebraically dependent. Is this true? If this is true, can you show me the proof?
AI: Suppose we have a base field $K$, with $\alpha,\beta$ algebraic over $K$ and $a,b$ transcendental over $K$ but algebraically dependent over $K$. This means that the field extension $K(a,b)/K$ has transcendence degree $1$ and that $K(a,b,\alpha,\beta)/K$ also has transcendence degree $1$ since $K(a,b,\alpha,\beta)=K(a,b)(\alpha,\beta)$. Therefore
any two elements of $K(a,b,\alpha,\beta)$ are algebraically dependent over $K$ (since otherwise the extension would have transcendence degree at least 2), in particular given $\lambda_{1,2},\mu_{1,2}\in K$ there is some polynomial $p\in K[X,Y]$ such that $p(\lambda_1 a+\mu_1 \alpha, \lambda_2 b+\mu_2 \beta)=0$.
|
H: A question about Orthogonal Decomposition Theorem
In a textbook that I'm reading the "Orthogonal Decomposition Theorem" is given as below:
However, there is no proof for equation (2). Can anyone prove it?
Thanks
AI: Since $\{u_1,\ldots ,u_p\}$ form an orthogonal basis of $W$, each $\hat y$ can be written as
$$\hat y = c_1u_1 + \cdots + c_pu_p$$
Applying $u_i$ via dot product and using orthogonality gives:
$$\hat y \cdot u_i = c_i u_i\cdot u_i,\; i=1,\ldots , p$$
Hence,
$$c_i = \frac{\hat y \cdot u_i}{u_i\cdot u_i} = \frac{(\hat y+z) \cdot u_i}{u_i\cdot u_i} = \frac{y \cdot u_i}{u_i\cdot u_i},$$
since $y= \hat y + z$ with $z\in W^{\perp}$.
|
H: Prove that $|V_\alpha|=|\operatorname{P}(\alpha)|$ if and only if $\alpha\in\{2,\omega+1\}$ or $\alpha=\kappa+1$, $\kappa=\beth_\kappa$
$\kappa$ is a cardinal, $V_\alpha$ belongs to the von Neumann hierarchy $\begin{cases} V_0=\emptyset \\ V_{\alpha+1}=P(V_\alpha) \\ V_\lambda=\underset{\gamma<\lambda}{\bigcup}V_\gamma \end{cases}$ and the Beth function is defined in this way: $\begin{cases} \beth_0=\aleph_0 \\ \beth_{\alpha+1}=2^{\beth_\alpha} \\ \beth_\lambda=\underset{\gamma<\lambda}{\bigcup}\beth_\gamma \end{cases}$
It's easy to see that $|V_0|\ne|\operatorname{P}(0)|, \; |V_1|\ne|\operatorname{P}(1)|, \; |V_2|=|\operatorname{P}(2)|$ and, by countable recursion, I proved that $\forall n\in\omega \; |V_n|>|\operatorname{P}(n)|$.
$V_\omega$ is countable, whereas $|V_{\omega+1}|=2^{|V_\omega|}
=2^{\aleph_0}=|\operatorname{P}(\omega+1)|.$ Then, $\forall \; \omega+2<\alpha<\omega^2 \quad |V_\alpha|>2^{\aleph_0}=|\operatorname{P}(\alpha)|$ because these $\alpha$ are countable.
Now, for ordinals $\alpha\geq\omega^2$ I use this fact: $|V_\alpha|=\beth_\alpha$. Let $\kappa$ be a cardinal; for every ordinal $\alpha+2$ such that $|\alpha|=\kappa$ we have $|V_{\alpha+2}|=\beth_{\alpha+2}=2^{\beth_{\alpha+1}}>\beth_{\alpha+1}=2^{\beth_{\alpha}}\geq2^{|\alpha|}\geq2^{\kappa}=|\operatorname{P}(\alpha+2)|$.
Cardinals and successors of cardinals are left. For every cardinal $\kappa$, $|V_\kappa|=\sum_{\gamma<\kappa}{|V_\gamma|}=\max\{\sup_{\gamma<\kappa}|V_\gamma|,\kappa\}$ and I don't know how to show that it isn't equal to $|\operatorname{P}(\kappa)|.$ If $\kappa$ is a fixed point of the Beth function, then $|V_{\kappa+1}|=|\operatorname{P}(\kappa+1)|$; if $\kappa$ isn't a fixed point, it shouldn't be true, but I don't know how to go on.
AI: Suppose $\kappa$ is a cardinal and $\beth_\kappa >\kappa$. Then by definition of $\beth_\kappa$ (since $\kappa$ is a limit ordinal), $\beth_\gamma >\kappa$ for some $\gamma < \kappa$. This should tell you that $\beth_\kappa > 2^\kappa$ and so you should be done.
Same thing for $\kappa +1$ : if $\kappa$ isn't a fixed point, you still have that $\beth_\gamma >\kappa $ for some $\gamma<\kappa$, etc.
|
H: Exercise question pertaining to Unions and Intersections of 3 sets
I have spent a day and a half trying to answer this question.
I do not know how to prove that C is a subset of A using the given equality.
AI: You can use that $C \subset A \Leftrightarrow A \cup C = A$ and then simplify; for example, the left-hand side:
$$(A \cap B) \cup C = (A \cup C) \cap (B \cup C) $$
|
H: Using Cauchy convergence criterion to prove that, "if convergent series contains only finitely many negative terms then it is absolutely convergent"
This question is asked already here
Proof verification: convergent series with a finite number of negative terms is Absolutely Convergent
But, answer to this question used different method (it does not explain how the Cauchy convergence criterion applied)
Let $s_n$ be nth partial sum of $\sum a_n$ and $t_n$ be nth partial sum of $\sum |a_n|$ and let $a_n≥0$ for all $n>K$ then,
if $m>n>K$ we have, $t_m-t_n= s_m-s_n$
Now, how to apply Cauchy convergence criterion to establish the convergence of $(t_n)$ ?
I know, by the Cauchy convergence criterion, that for given $\epsilon >0$ there exists $M(\epsilon)\in\mathbb{N}$ such that, if $m>n>M(\epsilon)$ then $|s_m-s_n|=|a_{n+1}+a_{n+2}+...+a_m|<\epsilon$
(But then, why should be this $M(\epsilon)>K$?)
how to proceed? Please help
AI: Let $N(\epsilon)$ be an integer greater than both $M(\epsilon)$ and $K$. Then for all $m>n>N(\epsilon)$, the terms $a_{n+1}, \ldots, a_m$ are all nonnegative and their sum is smaller than $\epsilon$, so $|t_m-t_n|=|s_m-s_n|<\epsilon$. You may apply Cauchy's criterion to $(t_n)$ from here.
|
H: Equivalent sets. Are they interchangeable?
Freshman question, really, but the more I think about it, the more I doubt.
Suppose that two sets belong to the same equivalence class. Are they in effect interchangeable? (I understand that there is no axiom of `interchangeability' in the definition of an equivalence relation.)
For example, consider the equivalence class of all people who are 30 years old. This equivalence class contains both men and women who are 30; and men and women are different 'objects' if I may say. Yet if I consider the class of people who are 30 for some analysis, it does not matter if I pick a man or a woman. They are interchangeable as long as what matter is their age.
I just wonder if this is characteristic of all equivalence classes one can encounter in mathematics.
AI: The answer to your question is in the question, here:
They are interchangeable as long as what matter is their age.
If all that matters in any particular context is the condition that specifies the equivalence relation then any representative will do.
|
H: Simple notation question about independent sigma algebras
Let the random variables $X$, $Y$ be independent, i.e. for the sigma algebras generated by those variables it holds that $\sigma(X,Y) = \sigma(X)\sigma(Y)$.
If $ \omega \in \sigma(X,Y)$, then we have also $\omega \in \sigma(X)\sigma(Y)$.
My question: How can I move on from this? So $\omega \in \sigma(X)\sigma(Y)$. Can one say that $\omega \in \sigma(X)$ and $\omega \in \sigma(Y)$ holds?
More specific: what can I "expect" from the fact that $\omega$ lies in the product of two sigma algebras? What does this say about the properties of the single element $\omega$?
Thanks in advance for further explanations! :-)
AI: No, of course you can't say $\omega \in \sigma(X)$. $\sigma(X,Y)$ is the $\sigma$-algebra generated by sets $X^{-1}(a,b)$ and $Y^{-1}(a,b)$
for all open intervals $(a,b)$.
So it contains countable unions of countable intersections of such sets (and more complicated things too).
|
H: Example of a function whose second derivative does not exist but limiting formula for the second derivative holds
Here's Exercise 11 in Baby Rudin:
Suppose $f$ is defined in a neighborhood of $x$, and suppose $f^{\prime\prime}(x)$ exists. Show that
\begin{equation}\label{11.0}
\lim_{h \to 0} \frac{f(x+h)+ f(x-h)-2f(x)}{h^2} = f^{\prime\prime}(x)
\end{equation}
Show by an example that the limit may exist even if $f^{\prime\prime}(x)$ does not.
I had no trouble proving the statement but I am having trouble coming up with an example. Initially, I thought of:
$f(x) =
\begin{cases}
x+1 & \text{if $x<0$} \\
0 & \text{if $x=0$} \\
x-1& \text{if $x>0$}
\end{cases}
$
Then, as $h \to 0$,
$\lim_{h \to 0} \frac{f(x+h)+ f(x-h)-2f(x)}{h^2} =
\begin{cases}
\lim\limits_{h \to 0} \frac{(x+h+1)+ (x-h+1)-2x-2}{h^2} = \lim\limits_{h \to 0} \frac{0}{h^2}=0 & \text{if $x<0$} \\
\lim\limits_{h \to 0} \frac{0}{h^2}=0 & \text{if $x=0$} \\
\lim\limits_{h \to 0} \frac{(x+h-1)+ (x-h-1)-2x+2}{h^2} = \lim\limits_{h \to 0} \frac{0}{h^2}=0& \text{if $x>0$}
\end{cases}$
Note that $f(0-) = f(0+) = \lim\limits_{h \to 0} f(0) = 0$. However, this also leads me to:
$f''(x) =
\begin{cases}
0 & \text{if $x<0$} \\
0 & \text{if $x=0$} \\
0& \text{if $x>0$}
\end{cases}$
Is my example even correct at all? If not, can someone suggest a better example and show how $f''(x)$ does not exist (presumably at some point)?
AI: how about $f(x) =
\begin{cases}
-x^2 & \text{if $x<0$} \\
x^2 & \text{if $x\geq 0$} \\
\end{cases}
$.
The second derivative at $0$ does not exist,
but the limit
$$\lim_{h \to 0} \frac{f(x+h)+ f(x-h)-2f(x)}{h^2}$$
at $x=0$ exists (and is equal to $0$).
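To spell this out (my addition): here $f(x)=x\,|x|$, so $f(0+h)+f(0-h)-2f(0)=h|h|-h|h|=0$ for every $h\neq 0$, giving limit $0$; on the other hand $f'(x)=2|x|$, which is not differentiable at $0$, so $f''(0)$ does not exist.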
|
H: Find a plane that passes through a given point and is orthogonal to a given plane
"Let $\pi = x+y+z=0$ be a plane. Let $\rho$ be a plane.
The projection of $\rho$ on $\pi$ is a line, and $\rho$ passes through the origin.
Find the plane $\rho$."
What I got:
$\rho$ and $\pi$ are orthogonal because the projection of $\rho$ on $\pi$ is a line. It means that a vector normal to $\pi$ belongs to $\rho$: for example, the vector $(1,1,1)$.
So I have 2 pieces of information about $\rho$: it contains the vector $(1,1,1)$ and the point $(0,0,0)$.
But there are an infinite number of planes that contains a certain vector and passes through a certain point.
So I need another piece of information about $\rho$ to determine it uniquely. For example, another vector it contains.
How do I get that from the question statement?
edit:
Actually it asks for a plane, not the plane. So, in fact, there are an infinite number of planes that satisfies the statement.
AI: Any plane through the origin containing the direction $(1,1,1)$ is eligible. For example, a plane that also contains a vector of the plane $\pi$, such as $(1,-1,0)$, giving the parametric representation:
$$\begin{cases}x&=&a(1)+b(1)\\y&=&a(1)+b(-1)\\z&=&a(1)+b(0)\end{cases}$$
from which one can deduce the implicit representation
$$2z=x+y$$ by eliminating $a$ and $b$...
|
H: Center of a topological group is closed?
Is it true that the center $Z$ of a topological group $G$ is closed? (Maybe we need the space to be Hausdorff or something like that...) I was thinking I can just show it is open. So if I pick $x\in Z$ then I need to find an open $U \ni x$ such that $U\subset Z$. But I am not sure how to show it.
AI: The centre of $G$ is the intersection of the centralisers of its elements.
The centralizers are closed, so the centre is too.
In more detail
$$Z(G)=\bigcap_{g\in C}C_G(g)$$
where
$$C_G(g)=\{h\in G:ghg^{-1}h^{-1}=e\}$$
is closed in $G$: it is the preimage of the closed set $\{e\}$ under the continuous map $h\mapsto ghg^{-1}h^{-1}$ (here one uses that points are closed, which holds e.g. when $G$ is Hausdorff).
|
H: Is it possible to write every real polynomial in two variables like this one in this form?
Is it possible to write every real polynomial in two variables $x$ and $y$ of the form
$$a x^2+b x y +c y^2$$
with general coefficients $a,b,c$ in the form
$$(d x + e y)^2$$
for some, possibly complex, $d$ and $e$?
From the second form to the first is obvious, but I need the other way around: is there some formula for the coefficients $d$ and $e$ as a function of $a,b,c$?
AI: No, this does not happen for most polynomials. You can just divide the first expression thru by $y^2$ and obtain the expression
$$a \left(\frac{x}{y}\right)^2 + b \left(\frac{x}{y}\right) + c$$
which is just a single-variable quadratic in $u = x/y$. It is well-known that the Fundamental Theorem of Algebra guarantees a factorization of this quadratic of the form
$$a (u - r_1) (u - r_2)$$
but the roots can be different, so it may not be a perfect square. To recover the factorization for your original expression, just multiply thru by $y^2$ and you will see that it factors into
$$a(x - r_1 y)(x - r_2 y) = (x\sqrt{a} - r_1 y\sqrt{a})(x\sqrt{a} - r_2 y\sqrt{a})$$
And hence you can see that such a factorization exists if and only if $r_1 = r_2$, which is equivalent to asserting that the discriminant $b^2 - 4ac = 0$.
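As for an explicit formula (my addition, assuming $a\neq 0$): when $b^2-4ac=0$ one may take $d=\sqrt{a}$ and $e=\frac{b}{2\sqrt{a}}$, since then $(dx+ey)^2=ax^2+bxy+\frac{b^2}{4a}y^2=ax^2+bxy+cy^2$. If $a=0$, the condition $b^2=4ac$ forces $b=0$, and the form is simply $cy^2=(\sqrt{c}\,y)^2$.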
|
H: Continuity and openness of $f:(\mathbb{N},\varepsilon) \rightarrow (\mathbb{Z},\tau): f(x)=2x $, $\tau=\{A_n,\emptyset, \mathbb{Z}\}$
Consider the family $A_n=\{x \in \mathbb{Z}| -n \leq x \leq n\}, n\in \mathbb{N}$
Let $\tau=\{A_n,\emptyset, \mathbb{Z}\}$ be the topology over $\mathbb{Z}$
Let $f:(\mathbb{N},\varepsilon) \rightarrow (\mathbb{Z},\tau): f(x)=2x $ be a mapping, where $\varepsilon$ is the induced euclidean topology
Say if it is continuous and open
I would like some feedback on my solution:
a)Continuity
The $A_n , n\in \mathbb{N}$ are a basis of the topology, so it is enough to verify that $f^{-1}(A_n) \in \varepsilon$. In fact: I can write $A_n=[-n,n]$
I am unsure as to what should the correct answer be depending if you have to intersect with the subspace to get the open sets of the subspace topology like I did in option 1, or if you just have to check that the preimage is the intersection of the subspace with an open set of the initial topology:
option 1
$f^{-1}(A_n)=f^{-1}([-n,n])=[-n/2,n/2] \cap \mathbb{N}=[1, \lfloor n/2\rfloor]$, which is in $\varepsilon$, so it is continuous. I guess this is the correct one
option 2
$f^{-1}(A_n)=f^{-1}([-n,n])=[-n/2,n/2]$; if $n$ happens to be odd, I would have an interval whose endpoints are not integers, and therefore it doesn't have the form $U\cap \mathbb{N}$ with $U$ an open set of $\mathbb{R}$; then it is not continuous
b) Openness
Let $A\in \varepsilon$, and $A=\mathbb{N}\cap B(p,r)=[a,b]$,where $B(p,r)$ is an open ball of $(\mathbb{R},\varepsilon)$, $a,b \in \mathbb{N}$, then $f([a,b])=[2a,2b]\in \tau$, with $2a, 2b \in \mathbb{Z}$ So it is open
AI: Since every subset of $(\Bbb N,\varepsilon)$ is an open set, any map from $(\Bbb N,\varepsilon)$ into any topological space is continuous, because the reverse image of anything is an open set.
But $f$ is not open, since $\{1,2\}$ is an open subset of $(\Bbb N,\varepsilon)$, but $f\bigl(\{1,2\}\bigr)=\{2,4\}$, which is not an open subset of $(\Bbb Z,\tau)$.
|
H: What does $\{g:\mathbb{R}\rightarrow\mathbb{R}\mid g\circ f=f\circ g\}$ mean?
Is the set $\{g:\mathbb{R}\rightarrow\mathbb{R}\mid g\circ f=f\circ g\}$ infinite?
Why?
Let's suppose $f(x)=x^2$
$g(x)$ is an inverse function of $f(x)$.
$g(f(x))=(x^2)^{\frac12}=x$
Therefore, since composition of functions is associative,
$g(f(x))=f(g(x))=x$
The range of $f(x)=x$ is $[0,\infty)$
Since the range is $\infty$, the set is infinite.
My question is: What is the point of ${g:\mathbb{R}\rightarrow\mathbb{R}}$ in $g\circ f=f\circ g$?
AI: If $f,g : \mathbb{R} \to \mathbb{R}$ are two functions, one can consider the two other functions $f\circ g$ and $g\circ f$ defined by $f\circ g (x) = f\left(g(x)\right)$ and $g\circ f (x) = g\left(f(x) \right)$. One can also ask whenever are these two functions the same.
If $f$ is a fixed function, the set $C(f)=\left\{g : \mathbb{R} \to \mathbb{R} | f\circ g = g \circ f \right\}$ is the set of all functions $g$ for which the two functions $f\circ g$ and $g\circ f$ are equal.
For example, if $f = \mathrm{id}$ is the identity function, then $C(f)$ is the whole set of functions, because every function commutes with the identity map.
But if $f(x)=x^2$ for example, then the function $g : x \mapsto -x$ is not in $C(f)$, as $f\circ g = f$ and $g\circ f = -f$. The set $C(f)$ depends on $f$.
Note that for every $n$, $f^{\circ n}=\underbrace{f\circ f \circ \cdots \circ f}_{n \text{ times}}$ is in $C(f)$, because $f^{\circ n} \circ f = f \circ f^{\circ n} = f^{\circ n+1}$, so for general $f$ you know an infinite family of functions in $C(f)$.
Edit
It has been noted that it would be relevant to discuss the case where $\{f^{\circ n} | n \in \mathbb{N} \}$ is finite (with the convention $f^{\circ 0} = \mathrm{id}$). As it is non-empty ($f \in C(f)$), one can ask whether its number of elements can be any positive integer. The answer is yes. Suppose $n \geqslant 1$ and consider the following function :
\begin{align}
f(x) =\begin{cases}
x+1 & \text{if} & x \in \{1,2\ldots,n-1\} \\
1 & \text{if} & x = n \\
x & \text{if} & x \neq 1,\ldots, n
\end{cases}
\end{align}
It is just a cyclic permutation on $\{1,\ldots,n\}$, and the identity on the complement. It follows that $f^{\circ n}$ is the identity, and that for every $1\leqslant k,k' \leqslant n-1$, if $k \neq k'$, then $f^{\circ k} \neq f^{\circ k'}$. Consequently, the set $\{ f^{\circ k} | k \in \mathbb{N} \}$ has exactly $n$ elements.
|
H: Context-free grammar for the language $\{w\in\{a,b\}^* :$ the number of $a$'s in $w$ is unequal to the number of $b$'s$\}$
I've seen many solutions for when the numbers of $a$'s and $b$'s ARE equal, but what should the grammar be when the numbers are unequal?
So far I have this, but it can't produce many things like aba:
S -> A | B
A -> aAb | bAa | aA | a
B -> aBb | bBa | bB | b
AI: You can't produce aba because you just forgot some rules; consider this corrected version:
S -> A | B
A -> aAb | bAa | abA | baA | Aab | Aba | aA | Aa | a
B -> aBb | bBa | abB | baB | Bab | Bba | bB | Bb | b
|
H: Sorgenfrey topology
$B$ is the base of the Sorgenfrey topology $\mathcal{T}_{S}$, where $\mathcal{T}_{u}$ denotes the usual topology.
AI: If you show that $[5,\infty)$ is open in $\mathcal{T}_S$, then your set will be open in any product space $X\times Y$ where $X=(\mathbb{R},\mathcal{T}_S)$, because $U\times Y$ is open in the box or product topology when $U\in \mathcal{T}_S$.
Notice on the other hand that $[5,\infty)=\cup_{n=1}^\infty[5,5+n)$, and hence open.
|
H: Modulus and Congruences, odd example.
Hey guys I am reading a math book and I got a bit confused on the congruence chapter.
I have just seen that $a \pmod n$ is the remainder when $a$ is divided by $n$.
However, as an example of "$a \pmod n$ = remainder", they wrote:
1 = 15 (mod 7)
The peculiar example was: "The integer 29 is 5 mod 6"
Which I understand it would translate as: $$29 = 5 (mod 6)$$
I do know that 29 is congruent to 5 (mod 6), as
$$29-5=24=6\cdot 4,$$
thus, $29 \equiv 5 \pmod 6$.
However, 29 is not the remainder when 5 is divided by 6, so I am confused, as to me this example does not make sense (it reads as an example of congruence rather than of the remainder). It feels like it is badly written. Please help.
AI: "29 is 5 mod 6" means that 29 mod 6 is the same as 5 mod 6. When you divide 29 by 6, the remainder is 5 (since $29 = 4\cdot 6 + 5$), which is the same as 5 mod 6. So it is actually 29 mod 6 = 5 mod 6, which in general is stated as "29 is 5 mod 6".
|
H: Study convergence of $\int_{0}^{\infty} \frac{e^{\sqrt{x}}}{e^x + 1}$
Study convergence of $$\int_{0}^{\infty} \frac{e^{\sqrt{x}}}{e^x + 1}$$
First of all, I can only use the comparison test or the limit comparison test, but I don't know which function to compare it to.
It is known that $e^{x}$ eventually dominates any polynomial of degree $n$, and that $\sqrt{x} \lt x$, so $e^{\sqrt{x}} \lt e^{x}$. Any hints on how to proceed? Thanks in advance.
AI: Use $0\le e^{\sqrt{x}}\le e^{x/2}$ for $x\ge4$, and $0\le\frac{e^{\sqrt{x}}}{e^x+1}<\frac12e^4$ for $x\in[0,\,4)$. Then $\frac{e^{\sqrt{x}}}{e^x+1}\le\frac{e^{x/2}}{e^x}=e^{-x/2}$ for $x\ge4$, and since $\int_4^\infty e^{-x/2}\,dx$ converges, the whole integral converges by comparison (the part over $[0,4)$ is finite because the integrand is bounded there).
|
H: Proof verification: $f$ is convex iff $f'$ is monotonically increasing
This is (the first half of) exercise 14 in Baby Rudin
Let $f:(a, b) \to \mathbb{R}^1$ be differentiable. Prove that $f$ is convex iff $f'$ is monotonically increasing.
($\Rightarrow$) Assume $f$ is convex in $(a, b)$. Fix $0 < \lambda < 1$ and $a < y \le x < b$. Notice that
\begin{align}\tag{13.1}
y \le x \implies y (1-\lambda) \le x (1-\lambda) \implies y - \lambda y \le x -\lambda x \implies \lambda x + y - \lambda y \le x
\end{align}
By the definition of convexity, we have that
\begin{equation}\tag{13.2}
f[\lambda x + y - \lambda y] \le \lambda f(x) + f(y) - \lambda f(y)
\end{equation}
and differentiating (13.2) with respect to x, we have
\begin{equation}\tag{13.3}
f^{\prime}[\lambda x + y - \lambda y] \cdot \lambda \le \lambda f^{\prime}(x) \implies f^{\prime}[\lambda x + y - \lambda y] \le f^{\prime}(x)
\end{equation}
By (13.1) and (13.3), we conclude that $f'$ is monotonically increasing.
($\Leftarrow$) Suppose $f'$ is monotonically increasing. Fix $0< \lambda< 1$ and suppose $f$ is not convex. Then $\exists p, q \in (a, b)$ such that
\begin{equation}\tag{13.4}
f(\lambda p + q - \lambda q) > \lambda f(p) +f(q) - \lambda f(q) \stackrel{\textrm{w.r.t. } p}{\implies} f'(\lambda p + q- \lambda q) > f'(p)
\end{equation}
Without loss of generality, let $p \geq q$, which implies
\begin{align*}
p (1-\lambda) > q (1-\lambda) \implies p - \lambda p > q -\lambda q \implies p > \lambda p + q - \lambda q
\end{align*}
Since $f'$ is monotonically increasing, we get $f'(p) > f'(\lambda p + q- \lambda q)$ which contradicts (13.4).
Can someone please critique my proof? Please don't bother suggesting a new proof as those can be found here and here. I am new to handling derivatives in an abstract setting, so I am not sure if it is valid to differentiate (13.3) and preserve the direction of the inequality, like I did. Is there a theorem/ lemma that supports this move?
AI: You can usually never differentiate inequalities.
If $f,g : ]a,b[ \rightarrow \mathbb R$ are differentiable,
$$
\forall x \in ]a,b[, \ f(x) \leq g(x)
$$
does not imply
$$
\forall x \in ]a,b[, \ f'(x) \leq g'(x).
$$
For example, take $f(x) = x^2$ and $g(x) = x$. We have $f(x) \leq g(x)$ on $[\frac{1}{2},1]$ and yet for all $x \in [\frac{1}{2},1]$, $f'(x) = 2x \geq 1 = g'(x)$.
The reason the result fails is because saying a function is above another does not tell us anything about how fast these functions grow comparatively.
You can try to draw a counter-example to convince yourself.
Hope this helps!
|
H: How to prove $ E - ( A \cap B ) = (E- A) \cup (E - B) $, where for the purpose of this exercise $E$ is a set that all other sets are a subset of
I understand what the left-hand side is stating: the set of elements of $E$ excluding those that are in both $A$ and $B$ (where $E$ is a set that all other sets are a subset of for the purpose of this exercise). However, I don't understand how you can get the right-hand side from it.
AI: One direction is obvious: as $A\cap B\subset A, B$, the complements $E\setminus A$ and $E\setminus B$ are contained in $E\setminus (A\cap B)$, hence so is their union.
Conversely, consider an element of $E$ which is not in $A\cap B$. This means it cannot belong to both of them. So, either it does not belong to $A$ (but possibly to $B$), in which case it belongs to $E\setminus A$ by definition, hence to $(E\setminus A)\cup(E\setminus B).$ Similar argument if the element does not belong to $B$.
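(My addition: written element-wise with De Morgan, $x\in E-(A\cap B)\iff x\in E\wedge\neg(x\in A\wedge x\in B)\iff x\in E\wedge(x\notin A\vee x\notin B)\iff x\in(E-A)\cup(E-B)$.)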
|
H: Why is it important the manifold has codimension $1$ in order to prove this identity for $\operatorname{div}fV$ on $\partial M$?
I've seen the following claim in some lectures notes which let me think that I might have a major misunderstanding:
The claim is that if $M$ is an embedded submanifold of $\mathbb R^d$ with boundary of codimension $1$ and $f$ and $V$ are differentiable scalar and vector fields, respectively, then $$\operatorname{div}(fV)=f\operatorname{div}V+\frac{\partial f}{\partial\nu}\langle V,\nu\rangle+\langle\nabla f,V_{\partial M}\rangle\tag1,$$ where $$\frac{\partial f}{\partial\nu}:=\langle\nabla f,\nu\rangle,$$ $\nu$ is the normal field and $V_{\partial M}$ is the tangential component of $V$ (i.e. the projection of $V$ onto the tangent space).
I don't understand why it is important that $M$ has codimension $1$. If $M$ is $k$-dimensional, then $\partial M$ is $(k-1)$-dimensional. If $M$ has codimension $1$, then it is $(d-1)$-dimensional and hence $\partial M$ is $(d-2)$-dimensional .... Why should this be of any use in $(1)$?
Assuming $M$ is $k$-dimensional, $(1)$ should trivially follow from $$\operatorname{div}(fV)(x)=\langle\nabla f(x),V(x)\rangle+f(x)\operatorname{div}V(x)\;\;\;\text{for all }x\in\mathbb R^k$$ and $$\langle\nabla f(x),V(x)\rangle=\langle\nabla f(x),\operatorname P_{T_x(\partial M)}V(x)\rangle+\langle V(x),\nu(x)\rangle\frac{\partial f}{\partial\nu}(x)\tag2$$ for all $x\in\partial M$, where $\operatorname P_{T_x(\partial M)}$ denotes the orthogonal projection of $\mathbb R^k$ onto the tangent space $T_x(\partial M)$ of $\partial M$ at $x\in\partial M$.
AI: First, perhaps you misread the requirement: it is not itself $M$ that must have codimension 1, it is instead the boundary of $M$, denoted $\partial M$, that must have codimension 1. But the phrase "... with boundary of codimension 1... " leaves out some information which could perhaps clarify the situation: that phrase should be parsed as
... such that the boundary of $M$ has codimension in $\mathbb R^d$ equal to 1...
Equivalently, $M$ itself must have codimension in $\mathbb R^d$ equal to 0.
So, why must $\partial M$ have codimension $1$ in $\mathbb R^d$?
As soon as you write the words "$\nu$ is the normal field and $V_{\partial M}$ is the projection of $V$ onto the tangent space", one wonders: normal field of what? Tangent space of what? The only sensible answer that I can see is that $\nu$ is the normal field to $\partial M$ and $V_{\partial M}$ is the projection of $V$ onto the tangent space of $\partial M$.
And in order that $\partial M$ even possess a normal field, it must have codimension 1. So, in order for the $v$ term in equation (1) to even be defined, $\partial M$ must have codimension 1.
(It might be clearer if one rewrites equation (1) with proper quantifiers: each term should have the argument $x$ in the correct position, sort of like you did in the later equations; and the equation should hold for each $x \in \partial M$.)
The point here is that a submanifold of codimension $n \ge 2$ does not have a well-defined normal field. It does have a well-defined normal bundle, which is a vector bundle of dimension $n$. For example, for a circle $C$ embedded in $\mathbb R^3$, which has codimension 2, its normal bundle has fibers of dimension $2$: at each point $x \in C$, the normal plane $N_x C$ is the 2-dimensional subspace of $T_x \mathbb R^3$ ($= \mathbb R^3$) which is normal to the 1-dimensional tangent line $T_x C$.
In general, for a codimension $m$ submanifold $B \subset \mathbb R^n$, the normal bundle is an $m$-dimensional vector bundle over $B$, whose fiber $N_x B$ is the $m$-dimensional subspace of $T_x \mathbb R^n$ (which is identified with $\mathbb R^n$) that is normal to the $(n-m)$-dimensional subspace $T_x B$ of $T_x \mathbb R^n$. It follows that there is an orthogonal direct sum
$$T_x \mathbb R^n = T_x B \oplus N_x B
$$
And then, as you asked in the comments, for the case that $n=d$ and that $B = \partial M$ has codimension 1 in $\mathbb R^n$, one obtains
$$T_x \mathbb R^d = T_x (\partial M) \oplus N_x (\partial M)
$$
|
H: Sines Fourier series
There is a basic principle I don't get. Say I want to find the Sines Fourier series of $e^x$ on $[0,1]$. Why do I in this case treat a "continuation" of $e^x$ from $[0,1]$ to $[-1,1]$ so it is an odd function? I always see it being done but I don't know why. Of course the Fourier series should be odd, is it just a random minimal interval we choose to build the series on?
AI: Using a combination of sine functions, as you note, we deal with an odd function.
You surely know that an odd function is such that
$$ \forall x, \ \ f(-x)=-f(x)$$
Therefore, the only odd function extending $f(x)=e^x$ is $-f(-x)=-e^{-x}$ on $[-1,0]$, making it compulsory to consider this interval $[-1,0]$.
In a further step, we complete the function defined in this way on $[-1,1]$ by translating it by the vectors $2k\vec{i}$, $k\in\mathbb Z$, where $\vec{i}$ is the unit vector of the $x$-axis.
|
H: Evaluate $\displaystyle\sum_{r=0}^8 (-1)^r \binom{20}{r} \binom{20}{8-r}$
Please help me with this question
$$\sum_{r=0}^8 (-1)^r \binom{20}{r} \binom{20}{8-r}$$
AI: Evaluate
$$\sum_{r=0}^8 (-1)^r \binom{20}{r} \binom{20}{8-r}$$
Note first that $\binom{20}{8-r}=\binom{20}{20-(8-r)}=\binom{20}{12+r}$, so I would evaluate $$\sum_{r=0}^8 (-1)^r \binom{20}{r} \binom{20}{12+r}={20\choose 16},$$
which is the coefficient of $x^8$ in $(x^2-1)^{20}$, obtained by multiplying the two binomial series below.
\begin{align*}
(1+x)^{20}&=\sum_{i=0}^{20}{20\choose i}x^i\\
(x-1)^{20}&=\sum_{j=0}^{20}(-1)^j{20\choose j}x^{20-j}\\
\end{align*}
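A quick numeric check (my own addition, assuming Python 3.8+ for math.comb):
from math import comb

s = sum((-1)**r * comb(20, r) * comb(20, 8 - r) for r in range(9))
print(s, comb(20, 16), comb(20, 4))  # all three equal 4845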
|
H: Sigma index (Induction)
I was wondering how the number of terms left in the remaining sum is counted.
When you prove this via induction,
$$\sum_{k=1}^{2^n} \frac{1}{k} \geq \frac{n}{2} $$
$$\sum_{k=1}^{2^{n+1}} \frac{1}{k}= \sum_{k=1}^{2^{n}} \frac{1}{k} + \sum_{k=2^n+1}^{2^{n+1}} \frac{1}{k} $$
you will come across this part
$$\geq\frac{n}{2}+\sum_{k=2^n+ 1}^{2^{n+1}} \frac{1}{k}\geq \frac{n}{2}+2^n\cdot \frac{1}{2^{n+1}}$$
where does the $2^n$ come from?
Is it an index difference?
AI: Yes, it depends on the limits of the sum. More generally, if $(x_k)_{k\geq 1}$ is a decreasing sequence then, for $b\geq a\geq 1$,
$$\sum_{k=a+1}^bx_k\geq \sum_{k=a+1}^bx_b=x_b\sum_{k=a+1}^b1=(b-a)\cdot x_b.$$
In your case, $x_k=1/k$, $b=2^{n+1}$, and $a=2^n$.
Therefore
$$(b-a)=2^{n+1}-2^n=2^n.$$
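For instance, with $n=3$ the leftover sum runs over $k=2^3+1,\dots,2^4$, i.e. over $2^4-2^3=2^3=8$ terms, each at least $\tfrac1{16}$, so it is at least $8\cdot\tfrac1{16}=\tfrac12$.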
|