H: Finding equation of position-time function
Dudley Do-Right is riding his horse at his top speed of $10m/s$ toward the bank, and is $100m$ away when the bank robber begins to accelerate away from the bank going in the same direction as Dudley Do-Right. The robber's distance, $d$, in metres away from the bank after $t$ seconds can be modelled by the equation $d=0.2t^2$. Write a model for the position of Dudley Do-Right as a function of time.
The answer is $d=10t-100$.
My question is how do you know that it is $-100$, and not $100$?
Thanks in advance for your help.
AI: It is $-100$ and not $+100$ because the positive axis was assumed to be along the direction of motion and Dudley's initial position was on the negative part of the axis where the origin is assumed to be the position of the bank.
Here's a depiction:
At time $t = 0$:
-100 m ------------------------- 0 m ------------------------- +100 m
Dudley --->                     Bank
                                Robber --->
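A quick numerical tabulation of the two models (a throwaway sketch; the variable names are mine) confirms the sign of the constant term:

```python
# Tabulate both position models; Dudley starts 100 m behind the bank
# (negative axis), the robber starts at the bank.
for t in [0, 5, 10, 15, 20]:
    dudley = 10 * t - 100     # d = 10t - 100
    robber = 0.2 * t ** 2     # d = 0.2 t^2
    print(f"t={t:2d} s   Dudley: {dudley:7.1f} m   robber: {robber:6.1f} m")
# At t = 0 Dudley is at -100 m, which is why the model needs -100, not +100.
```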
|
H: Why is $V$ a neighborhood of $p$?
This is from the proof of the theorem that compact subsets of metric spaces are closed.
I can't understand the line: if $V$ is the intersection of $V_{q_1}$, $V_{q_2}$, $\dots$, $V_{q_n}$,
then $V$ is a neighborhood of $p$.
I don't think that being an intersection of neighborhoods by itself means it is a neighborhood.
Please help me understand this!
$\mathbf{2.34}\,\,\,$Theorem Compact subsets of metric spaces are closed.
Proof$\,\,\,$ Let $K$ be a compact subset of a metric space $X$. We shall prove that the complement of $K$ is an open subset of $X$.
Suppose $p\in X$, $p\notin K$. If $q\in K$, let $V_q$ and $W_q$ be neighborhoods of $p$ and $q$, respectively, of radius less than $\frac{1}{2}d(p,q)$ [see Definition $2.18\text{(a)}$]. Since $K$ is compact, there are finitely many points $q_1,\ldots,q_n$ in $K$ such that $$K\subset W_{q1}\cup\cdots\cup W_{qn}=W.$$ If $V=V_{q1}\cap\cdots\cap V_{qn}$, then $V$ is a neighborhood of $p$ which does not intersect $W$. Hence $V\subset K^{c}$, so that $p$ is an interior point of $K^{c}$. The theorem follows.
AI: On page 32, Rudin defines a neighborhood $N_r(p)$ of $p$ to be the set
$$N_r(p) = \{q \in X : d(p, q) < r\}$$
which is an open disk. Since all $V_{q_i}$ are open disks centered at $p$, their intersection $V = \bigcap V_{q_i}$ is simply the smallest of these open disks: the one whose radius is $\min_i r_i > 0$, where $r_i$ is the radius of $V_{q_i}$. An open disk of positive radius centered at $p$ is, by this definition, a neighborhood of $p$, so $V$ is a neighborhood of $p$.
|
H: Convergence of series, comparison test on $\frac{1}{\sqrt{n}+ \ln n}$
I am supposed to use the limit comparison test to prove divergence or convergence. There aren't really any examples in my book that show how to pick $b_n$, so I just pick whatever works out nicely.
$$a_n = \dfrac{1}{\sqrt{n} + \ln n}$$
$$b_n = \dfrac{1}{n^4}$$
I chose this because I know that it converges and that it makes the math easier.
$$\dfrac {\dfrac{1}{\sqrt{n} + \ln n}}{\dfrac{1}{n^4}}$$
$$ {\dfrac{n^4}{\sqrt{n} + \ln n}}$$
So now I look at my Table of Truths to find the answer.
I know that my $b_n$ converges, that is easy. It goes to zero.
My $a_n$ I am not sure about. I do know that the limit of $\dfrac{a_n}{b_n}$ is greater than zero.
According to the table it converges if $b_n$'s series does, but infinity is another option, so I guess the limit here really is infinity, in which case the series only converges if $a_n$'s does. So now I need to look at $a_n$; but really, isn't an infinite limit $> 0$? Besides that logical confusion, $a_n$ does converge to zero, so that means $b_n$ converges as well. Which I guess is kind of confusing to me, because it seems like I didn't really prove anything, since I just looked at the original function, which I know goes towards zero anyway.
What just happened? I feel like I did nothing.
AI: Suggestion:
If you're fairly new to sequences and series, a quick and "dirty" way to show that the series diverges (it fails to converge) is to compare it to $b_n = \dfrac 1n.\;$ Recall, the harmonic series diverges: $\sum_{n=1}^{\infty} \frac 1n = \infty$.
And, you can confirm: $$\forall n \geq 3,\quad \dfrac 1n \leq \dfrac{1}{\sqrt n + \ln n}$$
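If you want to convince yourself numerically before proving it, here is a small Python sketch (my own check, with an arbitrary sampling of $n$):

```python
import math

# Check sqrt(n) + ln(n) <= n for n >= 3, which gives 1/n <= 1/(sqrt(n)+ln(n)).
assert all(math.sqrt(n) + math.log(n) <= n for n in range(3, 100000))

# Watch the partial sums grow without bound (slowly), as comparison
# with the harmonic series predicts.
s = 0.0
for n in range(2, 10**6 + 1):
    s += 1.0 / (math.sqrt(n) + math.log(n))
    if n in (10**2, 10**4, 10**6):
        print(n, s)
```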
|
H: Compute limit with the help of MacLaurin series expansion
Compute $$\lim _{x \to 0} \dfrac{e^{2x^2}-1}{x^2}$$ with the aid of a MacLaurin series expansion.
AI: Edit: The problem has been corrected. We keep the original solution, and add a solution to the corrected problem below.
Hint: We have
$$e^{(2x)^2}=1+(2x)^2+\frac{(2x)^4}{2!}+\frac{(2x)^6}{3!}+\cdots.$$
(Just write down the Maclaurin expansion of $e^t$, and everywhere in the expansion replace $t$ by $(2x)^2$.) Thus
$$\frac{e^{(2x)^2}-1}{x^2}=4+O(x^2).$$
Answer to corrected problem: This asks for
$$\lim_{x\to 0} \frac{e^{2x^2}-1}{x^2}.$$
We have
$$e^{2x^2}=1+2x^2+\frac{(2x^2)^2}{2!}+\frac{(2x^2)^3}{3!}+\cdots.$$
Subtract $1$, divide by $x^2$. We get $2$ plus a bunch of terms that have $x$'s in them. As $x\to 0$, these terms approach $0$, so the limit is $2$.
Another way: We used the Maclaurin expansion mechanically in the solution, because it is a nice tool that it is very important to know about. But there are simpler ways. Let $t=x^2$. Then we want to find
$$\lim_{t\to 0^+}\frac{e^{2t}-1}{t}.$$
Let $f(t)=e^{2t}$. Note that $f(0)=1$. Then by the definition of the derivative,
$$\lim_{t\to 0} \frac{f(t)-f(0)}{t}=f'(0).$$
In our case, $f'(t)=2e^{2t}$, so $f'(0)=2$, and our limit is $2$.
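A quick numeric sanity check of the answer $2$ (a sketch, nothing more):

```python
import math

# (e^{2x^2} - 1)/x^2 should approach 2 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(x, (math.exp(2 * x * x) - 1) / (x * x))
# 2.0202..., 2.0002..., 2.000002...
```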
|
H: Definition of a nilpotent group.
I saw two different definitions of a nilpotent group, but I'm not really sure how these definitions are equivalent.
The first one is from Basic Abstract Algebra (Robert Ash):
A central series for $G$ is a normal series $1 = G_0 \trianglelefteq G_1 \trianglelefteq ... \trianglelefteq G_r = G$ such that $G_i/G_{i-1} \subseteq Z(G/G_{i-1})$ for every $i = 1, ... ,r$. An arbitrary group $G$ is said to be nilpotent if it has a central series.
The second one is from Advanced Modern Algebra (Rotman):
The descending central series of a group $G$ is $$ G = \gamma_1(G) \supseteq \gamma_2(G) \supseteq ...,$$
where $\gamma_{i+1}(G) = [\gamma_i(G),G].$ A group $G$ is called nilpotent if the lower central series reaches $\{1\}$; that is, if $\gamma_n(G)=\{1\}$ for some $n$.
I can't really see how these definitions are equivalent, because they are defining it in very different ways... so I would appreciate it if anybody could clarify this for me.
Thanks in advance
AI: Observe that
$$\frac{G_{i+1}}{G_i}\subseteq Z\left(\frac{G}{G_i}\right)\iff \left[\frac{G_{i+1}}{G_i},\frac{G}{G_i}\right]\subseteq\frac{G_i}{G_i}\iff [G_{i+1},G]\subseteq G_i. \tag{$\circ$}$$
Thus by induction
$$\underbrace{[\cdots[[G_i,G],G],\cdots,G]}_{i\text{ commutators with }G}\subseteq G_0=1.$$
In particular, if $G_n=G$ then $\gamma_{n+1}(G)=1$. So existence of an ascending series implies the descending series terminates. Conversely, the descending series is also an ascending series if it actually has $1$ at its base, since its terms $G_i:=\gamma_{n+1-i}(G)$ satisfy the right-hand side of $(\circ)$.
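For a concrete instance, one can compute a lower central series with SymPy's combinatorics module (a sketch; I am assuming its documented `lower_central_series` API):

```python
from sympy.combinatorics.named_groups import DihedralGroup

# The dihedral group of order 8 is a 2-group, hence nilpotent, so its
# lower central series should reach the trivial group.
G = DihedralGroup(4)
print(G.is_nilpotent)                 # True
series = G.lower_central_series()
print([H.order() for H in series])    # e.g. [8, 2, 1]: ends at the trivial group
```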
|
H: Prove that $\frac{1}{\sin t} - \frac{1}{t}$ is increasing on $(0,\pi/2)$.
I would like to obtain a rigorous proof of the fact that
$$
\frac{1}{\sin t} - \frac{1}{t}
$$
is increasing on $(0,\pi/2)$. I attempted the usual by taking the derivative and seeing if it's positive, however, this led me nowhere enlightening. Is it possibly related to the fact that $\sin x \leq x$ for all $0 \leq x \leq \pi/2$? This isn't a homework problem, so I would not mind a full solution. Thanks in advance!
AI: Taking the derivative, you want to show that
$$ \frac{ - \cos t } { \sin^2 t } + \frac{1}{t^2 } \geq 0 \Leftrightarrow \tan t \sin t \geq t^2 $$
There are many ways to prove this, like in this other question.
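A numerical spot check of that last inequality on $(0,\pi/2)$ (illustration only, not a proof):

```python
import math

# tan(t) * sin(t) >= t^2 should hold throughout (0, pi/2).
ts = [k * (math.pi / 2) / 1000 for k in range(1, 1000)]
assert all(math.tan(t) * math.sin(t) >= t * t for t in ts)
print("inequality holds at all sampled points")
```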
|
H: for prime ideals, the intersection of the squares is the square of the intersection?
Here is something that i proved and i would appreciate feedback on my proof:
Proposition: Let $A$ be a commutative Noetherian ring and $p,q \in \operatorname{Spec}(A)$. Then $p^2 \cap q^2 = (p\cap q)^2$.
Proof: $(\Rightarrow)$: let $x \in p^2 \cap q^2$. Then $x = y^2=z^2$ with $y \in p, z \in q$. Then $y^2-z^2=0 \in q$. Hence $y^2 \in q$ and so $y \in q$ which finally shows $y \in p \cap q$. Since $x = y^2$, this shows that $x \in (p \cap q)^2$. $(\Leftarrow)$ Take $x \in (p \cap q)^2$, thus $x=y^2$ with $y \in p \cap q$. Then $x=y^2 \in p^2 \cap q^2$.
Edit:
As Berci's answer shows, my proof is incorrect. Certainly we have $(p\cap q)^2 \subset p^2 \cap q^2$. The question now is whether the reverse inclusion holds.
AI: Not exactly.
The square of an ideal $I$ consists of all finite sums of products $xy$ with $x,y\in I$, not just the squares. But I think the proof can be reformulated accordingly.
(Well, actually, if $1/2\in A$, then the elements $x^2$ for $x\in I$ generate $I^2$, because of the polarization identity $xy=\frac12\bigl((x+y)^2-x^2-y^2\bigr)$.)
|
H: Differentiate a rational function
I'm having problems solving an exercise. I am supposed to find the slope of
$$f(x)= \frac{x}{x-2}$$ at the given point $(3,3)$.
I got $1/(1) = 1$ (for the slope)
But my answerbook says that it's $-2$. Can someone explain what I'm doing wrong?
AI: Method one:
$$f(x) = \frac{1}{h(x)} \ \to \ f'(x) = -\frac{h'(x)}{h^2(x)}$$
$$f(x) = \frac{x}{x-2} = \frac{x - 2 + 2}{x-2} = 1 + \frac{2}{x-2}$$
$$f'(x) = 0 - 2(x-2)^{-2}$$
$$f'(3) = -2$$
Method two:
$$f(x) = \frac{g(x)}{h(x)} \ \to \ f'(x) = \frac{g'(x)h(x) - h'(x)g(x)}{h^2(x)}$$
$$f'(x) = \frac{1\cdot(x-2) - 1\cdot x}{(x-2)^2}$$
$$f'(3) = \frac{-2}{1^2} = -2$$
Edit:
I think the key here is to understand that if:
$$f(x) = \frac{g(x)}{h(x)} $$ Then $f'(x)$ is not simply:
$$f'(x) = \frac{g'(x)}{h'(x)}$$
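A one-liner with SymPy confirms the computation (a quick check, nothing deep):

```python
from sympy import symbols, diff, simplify

x = symbols('x')
fp = simplify(diff(x / (x - 2), x))
print(fp)               # -2/(x - 2)**2
print(fp.subs(x, 3))    # -2
```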
|
H: Math logic - determine whether an inference exist
This is the first time I have seen this kind of question.
OK, I have:
$\{\neg A \vee B, B \to C, A \vee C \} \models B \vee C$
I have to determine whether this inference holds or not.
How do I do so? Please help.
AI: HINTs:
Note that on the left of the double turnstile $\models$, you have $\lnot A\lor B$, and you have $A\lor C$.
$$\lnot A \lor B \equiv A\rightarrow B\tag{1}$$
$$A \lor C \equiv \lnot A\rightarrow C\tag{2}$$
Now, we know that for any proposition, $A$, we have that $A \lor \lnot A$ is a tautology (always true).
So, given that we have $A\lor \lnot A$, $(1)$, and $(2)$, what can we say about $B\lor C$?
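Since there are only three propositional letters, the entailment can also be checked by brute force over all $2^3$ truth assignments (a small sketch):

```python
from itertools import product

def entails():
    for A, B, C in product([False, True], repeat=3):
        premises = (not A or B) and (not B or C) and (A or C)
        if premises and not (B or C):
            return False       # found a countermodel
    return True

print(entails())   # True: {~A v B, B -> C, A v C} |= B v C
```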
|
H: Prove that $X \nvdash \alpha\to\neg\beta$ when $X\cup\{\beta\}$ is maximally consistent
I'm trying to solve a sample exam and I got stuck.
Let $\alpha, \beta$ be statements and $X$ a set of statement such that $X \cup \{\beta\}$ is maximally consistent.
Given $X \cup \{\beta\} \vdash \alpha $, prove that $X \nvdash \alpha\to\neg\beta$.
I don't know how to start. Please help me. Thanks in advance.
AI: Perhaps you can use the deduction theorem, which says: "$X \cup \{ \beta \} \vdash \alpha$ if and only if $X \vdash \beta \rightarrow \alpha$." Suppose for reductio that $X \vdash \alpha \rightarrow \neg \beta$ as well. Given $X \cup \{\beta\} \vdash \alpha$, the deduction theorem implies that $X \vdash \beta \rightarrow \alpha$, which you know is equivalent to $X \vdash \neg \alpha \rightarrow \neg\beta$. Hence, $X \vdash (\alpha \vee \neg \alpha) \rightarrow \neg \beta$, and since $\alpha \vee \neg \alpha$ is a tautology, $X \vdash \neg \beta$. But then $X \cup \{\beta\}$ is inconsistent, $\bot$.
|
H: Show that there are uncountably many intervals in R.
How can I show that there are uncountably many intervals in $\mathbb R$?
I think a proof by contradiction would work, by assuming there are countably many intervals and
using the fact that there are countably many rational numbers in $\mathbb R$, but I'm not sure how to proceed.
AI: Recall that a set having an uncountable subset must itself be uncountable. From the set of all possible intervals in $\Bbb{R}$, we consider the subset of intervals of the following form:
$$
S=\{(x,\infty) \mid x\in \Bbb{R}\}
$$
It suffices to show that $S$ is uncountable. To do this, we construct a bijection $f\colon S\to\Bbb{R}$ defined by:
$$
f((x,\infty))=x
$$
Hence, since $\Bbb{R}$ is uncountable, so too is $S$, as desired.
|
H: Help with a proof in Hungerford's book
I didn't understand a detail in this proof of this theorem:
The definition of content:
I didn't understand why $C(C(f)f_1C(g)g_1)\thickapprox C(f)C(g)C(f_1g_1)$
I need help only in this part.
Thanks in advance
AI: If $p\in D[x]$ and if $a\in D$, then $C(ap)=aC(p)$. (Why?) If you apply this for $a=C(f)C(g)$ and $p=f_1g_1$, then you obtain the equality $C(C(f)f_1C(g)g_1)=C(f)C(g)C(f_1g_1)$.
I hope this helps!
|
H: source to study Zariski tangent space
My algebraic geometry is more of a disconnected set of ideas; however, I need to understand the Zariski tangent space well. My commutative algebra, though, is decent. Towards this end, I don't find Hartshorne helpful, nor Eisenbud or Shokurov-Danilov, as their presentations are dense. What I am looking for is a comprehensive treatment in terms of classical varieties, not in terms of schemes. Any recommendations?
AI: Have you looked at Shafarevich, Varieties in Projective Space (chapter 2 I think)? He starts out by calculating with polynomials what a tangent space should look like, and then derives from there the abstract definition.
|
H: How to make this difference equation continuous?
$$ F_n=F_{n-1}-F_{n-2} $$
How can I convert this oscillating sequence into a continuous function, i.e., get it in closed form in terms of $n$?
AI: This sequence has exactly 6 values,
$(a,\; b,\; b-a,\; -a,\; -b,\; a-b)$, repeating in a cycle.
So there is reason to build a trigonometric function with period $6$:
$$
F(n) = A \sin \frac{\pi n}{3} + B \cos\frac{ \pi n}{3},
$$
If $F_0 = a$, $F_1 = b$, then
$$
\left\{
\begin{array}{c}
A\cdot 0 + B\cdot 1 = a;\\
A\cdot \dfrac{\sqrt{3}}{2} + B \cdot \dfrac{1}{2} = b; \\
A\cdot \dfrac{\sqrt{3}}{2} - B \cdot \dfrac{1}{2} = b-a; \\
\end{array}
\right.
$$
$B=a$, $A=\dfrac{2b-a}{\sqrt{3}}$.
So,
$$
F(n) = \dfrac{2b-a}{\sqrt{3}} \cdot \sin \frac{\pi n}{3} \;+ \; a \cdot \cos\frac{\pi n}{3},
$$
where $a=F_0$, $b=F_1$.
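A quick check that the closed form reproduces the recurrence (arbitrary initial values; my own sketch):

```python
import math

a, b = 2.0, 5.0    # F_0, F_1

def F(n):
    return (2*b - a) / math.sqrt(3) * math.sin(math.pi*n/3) + a * math.cos(math.pi*n/3)

seq = [a, b]
for n in range(2, 13):
    seq.append(seq[-1] - seq[-2])          # F_n = F_{n-1} - F_{n-2}

print(all(abs(F(n) - seq[n]) < 1e-9 for n in range(13)))   # True
```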
|
H: Does $f'(x) \in o(g'(x))$ imply $f(x) \in o(g(x))$ for monotonically increasing $f$ and $g$?
The title says it all. This seems intuitively true to me, but I'm not sure how one would go about proving this.
(I'm asking because I'm trying to show that $x^n \in o(x^{n+1})$ for all natural $n$, and I was wondering if you could just take derivatives and use induction.)
AI: It will be a lot easier to divide $x^n$ by $x^{n+1}$ and see what happens as $x$ gets big.
Anyway, the result about derivatives is not correct. Let $f(x)=1-\frac{1}{x^2}$ and let $g(x)=1-\frac{1}{x}$. Then $f'(x)\in o(g'(x))$ but $f(x)\not\in o(g(x))$.
|
H: Let $Q$ be a special orthogonal matrix. Show that $Q(u\times v)=Q(u)\times Q(v)$ for any vectors $u, v\in\mathbb R^3$.
Let $Q$ be a $3\times3$ special orthogonal matrix. Show that $Q(u\times v)=Q(u)\times Q(v)$ for any vectors $u, v\in\mathbb R^3$.
I have no idea how to start. I'm not sure if $Q(u)\cdot Q(v)=u\cdot v$ would help. Please give me some help. Thanks.
AI: The cross-product $u\times v$ is the unique vector such that
$$
\det(u,v,w)=(u\times v)\cdot w\qquad \forall w
$$
where $\det(u,v,w)$ is the determinant of the $3\times 3$ matrix whose columns are $u,v,w$ in this order, that is the determinant of the linear map that sends the canonical basis to $(u,v,w)$. That's a common definition of the cross-product. See below if needed.
Recall that $Q$ special orthogonal means $Q^T=Q^{-1}$ and $\det Q=1$.
We need to prove that $Q^T(Qu\times Qv)=u\times v$. So let us compute
$$
Q^T(Qu\times Qv)\cdot w=(Qu\times Qv)\cdot Qw=\det(Qu,Qv,Qw)=\det Q\det(u,v,w)=\det(u,v,w).
$$
By the uniqueness defining $u\times v$, this proves $Q^T(Qu\times Qv)=u\times v$, i.e. $Qu\times Qv=Q(u\times v)$.
Note: the same argument shows more generally that, as mentioned by lhf and wikipedia,
$$
M^T(Mu\times Mv)=(\det M) u\times v\quad\Rightarrow \quad (Mu\times Mv)=(\det M) M^{-T}(u\times v)
$$
for every invertible $3\times 3$ matrix $M$, where $M^{-T}=(M^{-1})^T=(M^T)^{-1}$. The formula on the left is true for every matrix $M$ and is just $0=0$ in the singular case, since we have $Mu\times Mv=0$ for every $u,v$ in this case.
The fact that the identity $\det(u,v,w)=(u\times v)\cdot w$ is satisfied by every $u,v,w$ can be checked directly, by computations, from the determinant definition of $u\times v$. Another way to see it is to note that the map $(u,v,w)\longmapsto (u\times v)\cdot w$ is multilinear, anti-symmetric (or alternating), and sends the canonical basis to $1$, whatever definition of the cross-product you might have. So it must be the determinant. Uniqueness of $u\times v$ satisfying the identity follows from $(\mathbb{R}^{3})^\perp=\{0\}$, as $w_1\cdot w=w_2\cdot w$ for every $w$ implies $(w_1-w_2)\cdot w=0$ for every $w$, in particular for $w=w_1-w_2$, whence $\|w_1-w_2\|^2=0$.
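For the doubtful, a numerical check of the identity with a random rotation (a sketch using NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random special orthogonal matrix: orthogonalize by QR, then fix det = +1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

u, v = rng.normal(size=3), rng.normal(size=3)
print(np.allclose(Q @ np.cross(u, v), np.cross(Q @ u, Q @ v)))   # True
```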
|
H: Is every point of an open set $E \subseteq \mathbb{R}^2$ a limit point of $E$?
I think this one is really trivial.
Since $E$ is open, for every element $e$ in $E$ there is a neighborhood $N$ of $e$ such that $N \subseteq E$.
If the radius of this neighborhood of $e$ is $r$, then
every point $n$ with $0 < d(n,e) < r$ lies in $N$, and hence in $E$.
Hence, $e$ is a limit point by definition.
Is this correct?
AI: The claim is true, but your reasoning is incomplete. Let $x \in E$. Given $\epsilon > 0$, let $N_\epsilon(x)$ be the open ball of radius $\epsilon$. Then $N_\epsilon(x) \cap E \subset \mathbb{R}^2$ is an open nbhd of $x$, and therefore cannot be a singleton; so it contains some point of $E$ different from $x$. Since $\epsilon$ was arbitrary, $x$ is a limit point of $E$.
|
H: Question related to matrix in computer memory
I'm trying to solve the following problem from a book:
A matrix $\mathbb M$ has 3 rows and 4 columns:
$$\left[ \begin{array}{cccc}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}\\
\end{array} \right]$$
The 12 entries in the matrix are to be stored in row major form in
locations 7,609 to 7,620 in a computer’s memory. This means that the
entries in the first row (reading left to right) are stored first, then
the entries in the second row, and finally the entries in the third
row.
Find formulas (in $n$) for $r$ and $s$ so that $a_{rs}$ is stored in
location 7,609 + $n$.
So far I have come up with this relation among $r$, $s$ and $n$ (which is fairly common in matrix calculations):
$$4 \cdot (r-1) + s - 1 = n$$
But I can't seem to find the second relation that will allow me to express $r$ and $s$ in terms of $n$. Any help is greatly appreciated.
AI: For the rows, we want to come up with a function that will map $\{0,1,2,3\}$ to $1$, $\{4,5,6,7\}$ to $2$, and $\{8,9,10,11\}$ to $3$. This suggests that we use a floor function as follows:
$$
r=1+\lfloor n/4 \rfloor
$$
For the columns, we want to come up with a function that will map $\{0,4,8\}$ to $1$, $\{1,5,9\}$ to $2$, $\{2,6,10\}$ to $3$, and $\{3,7,11\}$ to $4$. This suggests that we take the remainder as follows:
$$
s=1+(n \bmod 4)
$$
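A tiny loop makes the mapping explicit (illustrative check):

```python
# r = 1 + floor(n/4), s = 1 + (n mod 4) reproduce row-major order (3 x 4).
for n in range(12):
    r, s = 1 + n // 4, 1 + n % 4
    print(f"location {7609 + n}: a_{r}{s}")
# n = 0..3 -> row 1, n = 4..7 -> row 2, n = 8..11 -> row 3
```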
|
H: $W_2(f,0) = \frac{1}{2} \# f^{-1}(l)$?
There must be some rather straightforward reason for $W_2(f,0) = \frac{1}{2} \# f^{-1}(l)$, but I am really stuck on why. Could someone help me out?
AI: This follows from the proof of the Borsuk-Ulam theorem preceding this theorem. (Starting on page 91 of Guillemin and Pollack's Differential Topology)
Guillemin and Pollack show that
$$W_2(f,0) \equiv \tfrac{1}{2}\# f^{-1}(l) \pmod 2$$
in their proof of the Borsuk-Ulam theorem. But the conclusion of the Borsuk-Ulam theorem says that for a function
$$f: S^k \longrightarrow \Bbb R^{k+1} \setminus \{0\}$$
such that
$$f(-x) = -f(x) \text{ for all } x \in S^k,$$
we have that
$$W_2(f,0) = 1.$$
Hence $f^{-1}(l)$ cannot be empty for any line $l$ through the origin of $\Bbb R^{k+1}$.
|
H: Help with the problem of conditional probability
Two friends are students in the same statistics course, and whenever one of them misses a class, the other passes along the notes. It is known that the first will attend 80% of the classes and the second 40%, independently. What is the probability that the friends have notes for every class?
AI: For any particular class, let $A'$ be the event "Alicia did not attend," and $B'$ be the event "Beti did not attend." The probability of $A'$ is $0.2$, and the probability of $B'$ is $0.6$. Thus by independence the probability neither showed up is $0.12$.
It follows that with probability $0.88$ at least one showed up, so they have the notes for that class.
If there are $n$ classes, the probability they have all the notes is $(0.88)^n$. Here we are assuming that attending the various classes are independent events. This is probably not a reasonable assumption.
Remark: Or else let $A$ be the event Alicia attended, and $B$ be the event Beti did. Then $\Pr(A\cap B)=0.32$. Now use the formula $\Pr(A\cup B)=\Pr(A)+\Pr(B)-\Pr(A\cap B)$ to conclude that $\Pr(A\cup B)=0.88$. So the probability one or both attended is $0.88$.
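A direct computation plus a quick Monte Carlo check (the number of classes, 30, is made up for illustration):

```python
import random

print(1 - 0.2 * 0.6)      # 0.88: at least one friend attends a given class

trials = 10**6            # simulate one class
hits = sum((random.random() < 0.8) or (random.random() < 0.4)
           for _ in range(trials))
print(hits / trials)      # close to 0.88

print(0.88 ** 30)         # notes for all of 30 independent classes
```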
|
H: Is there any symmetric complex matrix not satisfying this condition
Friedberg linear algebra p.471:
"If $A$ is symmetric with entries in $\mathbb{F}$, then $||A||_{op}$ is the largest absolute value of $A$"
Here, the operator norm is defined as $||A||_{op} \triangleq \sup_{x\neq 0} \frac{||Ax||}{||x||}$.
==========
I can prove this sentence when the field is $\mathbb{R}$, but i'm not sure when $\mathbb{F}=\mathbb{C}$.
Let $A$ be a $n\times n$ symmetric complex matrix and $\lambda$ be an eigenvalue of $A$.
Then, $\overline{\lambda}$ is an eigenvalue of $A^*$, since $A$ is symmetric.
My question is, is $\lambda$ an eigenvalue of $A^* A$?
If not, please give me a counterexample.
AI: I think that for $\mathbb{F} = \mathbb{C}$, the author meant hermitian when (s)he wrote symmetric.
Over $\mathbb{C}$, symmetry in the sense of $A^T = A$ is not very interesting, and it's easy to find counterexamples. Let
$$A = \begin{pmatrix}i & 1\\1 & 0\end{pmatrix}$$
$A$ is obviously symmetric in the $A^T = A$ sense. Its eigenvalues are $\dfrac{i \pm \sqrt{3}}{2}$, both of absolute value $1$, yet its operator norm is $\sqrt{\tfrac{3+\sqrt5}{2}}=\tfrac{1+\sqrt5}{2}\approx 1.618>1$; already the single unit vector $\frac{1}{\sqrt{2}}(-i,1)$ witnesses $\lVert A\cdot\frac{1}{\sqrt{2}}(-i,1)\rVert = \sqrt{\frac52}>1$.
$$A^\ast A = \begin{pmatrix}2 & -i\\i & 1\end{pmatrix}$$
has eigenvalues $\dfrac{3 \pm \sqrt{5}}{2}$.
If on the other hand hermitian symmetry was meant, then all eigenvalues are real, eigenvectors to different eigenvalues are orthogonal, and hence the operator norm is the largest absolute value of all eigenvalues.
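The counterexample is easy to verify numerically (a quick NumPy check):

```python
import numpy as np

A = np.array([[1j, 1], [1, 0]])
print(np.linalg.norm(A, 2))                 # ~1.618, the golden ratio
print(np.abs(np.linalg.eigvals(A)))         # [1. 1.]: eigenvalues on the unit circle
print(np.linalg.eigvals(A.conj().T @ A))    # ~(3 +/- sqrt(5))/2
```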
|
H: Symmetric functions possess a common zero.
So "possess a common zero" means $\exists x \in S^k$ such that $f_i(x) = 0$ for all $i$, right?
Then I could not follow this brief proof: what is the corollary? The information in the proof is so little that I couldn't even guess which one is meant.
Thanks =)
Theorem. Any $k$ smooth functions $f_1,\ldots, f_k$ on $S^k$ that satisfy the symmetry condition $f_i(-x)=-f_i(x)$, $i=1,\ldots, k$, must possess a common zero.
Proof. If not, apply the corollary to the map $$ f(x)=(f_1(x),\ldots, f_k(x),0),$$ taking the $x_{k+1}$ axis for $l$.
Edit: the corollary turns out to be the theorem mentioned in this question.
AI: Yes, with this theorem (why do they call it corollary, then...?), it works exactly like they say, by contradiction. If the $f_j$'s don't vanish simultaneously, then $f$ takes $S^k$ to $\mathbb{R}^{k+1}\setminus\{0\}$, and satisfies the symmetry condition. So by the corollary-theorem, it must intersect in particular the line $\{x_1=\ldots=x_k=0\}$. But this gives a simultaneous zero. Contradiction.
|
H: Prove that for every two sets $A,B$: $P(A \bigtriangleup B) \not= P(A) \bigtriangleup P(B)$
I want to show that for every two sets the following claim is true:
$$P(A \bigtriangleup B) \not= P(A) \bigtriangleup P(B)$$
Notes : P is the power set.
Can I do that with a truth table, or would that be too primitive? If so, a formal way is preferred.
any suggestions to do that?
Thanks!
AI: Well, clearly we always have
$$\emptyset\in P(A\Delta B)\;,\;\;\text{but also always}\;\;\emptyset\notin P(A)\Delta P(B)$$
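A brute-force check on a small example (my own sketch) shows the two sides differ exactly at $\emptyset$:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

A, B = {1, 2}, {2, 3}
lhs = powerset(A ^ B)             # P(A triangle B)
rhs = powerset(A) ^ powerset(B)   # P(A) triangle P(B)
print(frozenset() in lhs, frozenset() in rhs)   # True False
print(lhs == rhs)                               # False
```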
|
H: If $\alpha +\beta = \dfrac{\pi}{4}$ prove that $(1 + \tan\alpha)(1 + \tan\beta) = 2$
If $\alpha +\beta = \dfrac{\pi}{4}$ prove that $(1 + \tan\alpha)(1 + \tan\beta) = 2$
I have had a few ideas about this:
If $\alpha +\beta = \dfrac{\pi}{4}$ then $\tan(\alpha +\beta) = \tan(\dfrac{\pi}{4}) = 1$
We also know that $\tan(\alpha +\beta) = \dfrac{\tan\alpha + \tan\beta}{1- \tan\alpha\tan\beta}$
Then we can write $1 = \dfrac{\tan\alpha + \tan\beta}{1- \tan\alpha\tan\beta}$
I have tried rearranging $1 = \dfrac{\tan\alpha + \tan\beta}{1- \tan\alpha\tan\beta}$ but it has not been helpful.
I also thought if we let $\alpha = \beta$ then I could write $\tan(\alpha+ \alpha) = 1$
(does this also mean $\tan(2\alpha) = 1$?)
then: $\tan(\alpha + \alpha) = \dfrac{\tan\alpha + \tan\alpha}{1- \tan\alpha\tan\alpha}$
which gives: $1 = \dfrac{2\tan\alpha}{1-\tan^2\alpha}$
Anyway these are my thoughts so far, any hints would be really appreciated.
AI: You were almost there:
$$\begin{align}
1 &= \frac{\tan \alpha + \tan\beta}{1 - \tan\alpha \tan\beta}\\
1 - \tan\alpha\tan\beta &= \tan\alpha + \tan\beta\\
2 &= 1 + \tan\alpha + \tan\beta + \tan\alpha\tan\beta\\
2 &= (1+\tan\alpha)(1+\tan\beta)
\end{align}$$
where each equation is equivalent to the preceding/following, and the two can be transformed into each other by a simple step.
First multiply with the denominator (that's $\neq 0$),
then add $1 + \tan\alpha\tan\beta$ to both sides,
then write $1 + x + y + xy$ as the product $(1+x)(1+y)$.
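SymPy confirms the identity symbolically, with a numeric spot check as backup (expect $2$ in both cases):

```python
from sympy import symbols, tan, pi, simplify

a = symbols('alpha')
expr = (1 + tan(a)) * (1 + tan(pi/4 - a))   # beta = pi/4 - alpha
print(simplify(expr))                # 2
print(expr.subs(a, pi/7).evalf())    # 2.00000000000000
```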
|
H: Disproving that a sequence is a Cauchy sequence
$f_n=(-1)^nn$
I tried doing it this way :
Given $\epsilon>0$ we have for $n>m$
$|f_n-f_m|=|(-1)^nn-(-1)^mm|\le |(-1)^nn|+|(-1)^mm|=n+m>2m$
Am I good this far? Now, how do I conclude from this result that the sequence is not a Cauchy sequence? I am not getting the final statement I should make to conclude.
Aman
AI: You're on the right track, but note that the triangle inequality only gives an upper bound, while to disprove the Cauchy property you need a lower bound on $|f_n-f_m|$. For $n\neq m$: if $n$ and $m$ have the same parity, then $|f_n-f_m|=|n-m|\geq 2$; if they have opposite parity, then $|f_n-f_m|=n+m\geq 3$. In either case
$$n\neq m\implies |f_n-f_m|>1.$$
Now just take any $\,0<\epsilon <1\,$: no matter how large $N$ is, there are $n,m>N$ with $|f_n-f_m|>\epsilon$, so the sequence is not Cauchy.
|
H: Does every unital ring contain all the integers?
Let us suppose there is a ring $R$ with the multiplicative identity $1$.
We know that $1+r\in R$, where $r$ is any element of the ring $R$. Does this mean $1+1$ is also part of the ring, or does $r$ have to be an element of the ring different from $1$?
Is $1+1$ called $2$ in the ring? Similarly, as $-1$ is also part of the ring, is $-1+ -1$ called $-2$ in the ring? If it is, then I suppose all integers are contained in every unital ring.
These questions are very elementary. However, I read contradictory remarks in some places which tend to confuse me. So I thought it would be best to clear any doubts, however trivial the questions.
Thanks in advance for your help!
AI: We do indeed call $1+1 = 2$ in any unital ring, and similarly, we have in any unital ring a subring generated by $1$, which will consist precisely of elements of the form $1+1+\cdots +1$ and their negatives (and $0$).
However, this does not necessarily mean that the ring contains all integers (by which I mean a subring isomorphic to the integers). The reason is that it can happen that adding $1$ to itself some number of times gives $0$, such as is the case in the ring $\mathbb{Z}/n\mathbb{Z}$ where adding $n$ copies of $1$ gives $n$ which equals $0$ in that ring.
If there is no way to get $0$ by adding $1$ to itself, we say that the characteristic of the ring is $0$, and the subring generated by $1$ is isomorphic to the integers.
If there is a way to get $0$, and $n$ is the smallest number of $1$'s you need to add to get $0$, we say that the ring has characteristic $n$. In this case the subring generated by $1$ will be isomorphic to $\mathbb{Z}/n\mathbb{Z}$.
|
H: The value of $ A \exp\left(\frac{-1}{2\pi} \int_{-\pi}^{\pi} \ln(1+A+2BC \cos x) dx \right)$
I'm looking for the value of: $$ A \exp\left(\frac{-1}{2\pi} \int_{-\pi}^{\pi} \ln(1+A+2BC \cos x) dx \right)$$
I know we could take $y=1+A+2BC \cos x$ but changing variable in this way makes the integral from $1+ A - 2BC$ to $1+ A - 2BC$ which makes the integral zero!
How to solve it ?
thanks
AI: Hint: $$y=A \exp\left(\frac{-1}{2\pi} \int_{-\pi}^{\pi} \ln(1+A+2BC \cos x) \ \mathrm dx \right)\to$$
$$\ln y=\ln A-\frac{1}{2\pi} \int_{-\pi}^{\pi} \ln(1+A+2BC \cos x)\, \mathrm dx=\ln A-\frac{1}{\pi} \int_{0}^{\pi} \ln(1+A+2BC \cos x)\, \mathrm dx.$$
Now take $1+A+2BC \cos x=z$ (say with $BC>0$), so $\mathrm dz=-2BC \sin x \ \mathrm dx$ and, on $(0,\pi)$, $2BC\sin x=\sqrt{4B^2C^2-(z-1-A)^2}$; the limits $x=0$ and $x=\pi$ become $z=1+A+2BC$ and $z=1+A-2BC$. We get
$$\ln y=\ln A-\underbrace{\frac{1}{\pi} \int_{1+A-2BC}^{1+A+2BC} \frac{\ln z}{\sqrt{4B^2C^2-(z-1-A)^2}} \ \mathrm dz}_H,$$
which can be attacked by integration by parts. Alternatively, for $1+A>2|BC|$ the classical evaluation $\frac{1}{2\pi}\int_{-\pi}^{\pi}\ln(a+b\cos x)\,\mathrm dx=\ln\dfrac{a+\sqrt{a^2-b^2}}{2}$, with $a=1+A$ and $b=2BC$, gives directly
$$y=\frac{2A}{1+A+\sqrt{(1+A)^2-4B^2C^2}}.$$
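A numerical check of that closed form with sample values (chosen so that $1+A>2|BC|$; a sketch only):

```python
import math
from scipy.integrate import quad

A, B, C = 2.0, 0.3, 0.5
val, _ = quad(lambda x: math.log(1 + A + 2*B*C*math.cos(x)), -math.pi, math.pi)
numeric = A * math.exp(-val / (2 * math.pi))
closed = 2*A / (1 + A + math.sqrt((1 + A)**2 - 4*(B*C)**2))
print(numeric, closed)    # the two values agree
```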
|
H: Hahn-Banach Theorem in the C*-algebra
What does the Hahn-Banach theorem give in the setting of a C*-algebra (or perhaps a W*-algebra)?
If $B$ is a nondense subalgebra of a C*-algebra $A$ (or W*-algebra), can we obtain, from some version of the Hahn-Banach theorem, a state $f$ of $A$ which vanishes on the subalgebra $B$?
AI: Every state is a completely positive map. Then use the Arveson extension theorem. For details see these notes.
|
H: Ehrenfeucht–Fraïssé game, how can I understand it?
My course on "Formal Methods" deals with Ehrenfeucht–Fraïssé games, particularly regarding inexpressibility in FO logic.
At the moment I have fully understood what these games are and how they are supposed to work.
But I can't grasp several theorems regarding inexpressibility, probably because I lack certain basic concepts.
I've tried to read the 2nd chapter of "Elements of Finite Model Theory" by Leonid Libkin but I failed at it quite prominently.
So which are the main concepts I have to understand before directly tackling Ehrenfeucht–Fraïssé games? Are there any prominent sources out there?
AI: I learned about Ehrenfeucht–Fraïssé games from the wonderful, informal notes of Prof. Moschovakis, which can be found here. These notes are used in the first-year graduate logic sequence at UCLA, so they don't assume too much in the way of background. In fact, these games were covered in the first quarter of the class, so I do not believe it was necessary to be familiar with anything more than elementary first-order logic and model theory.
|
H: Prove that : $\sqrt[4]{(a^2+1)b}+\sqrt[4]{(b^2+1)c}+ \sqrt[4]{(c^2+1)a} \le 3\sqrt[4]{2}.$
For $a,b,c\in\mathbb{R}^+$ and $a+b+c=3$ .
Prove that : $\sqrt[4]{(a^2+1)b}+\sqrt[4]{(b^2+1)c}+ \sqrt[4]{(c^2+1)a} \le 3\sqrt[4]{2}.$
AI: Oops, I gave a wrong answer in a previous edit. Here is the corrected version.
Notice that both $\sqrt[4]{x}$ and $\sqrt[4]{x^2+1}$ are strictly increasing functions of $x$.
Hence the list $(\sqrt[4]{a},\sqrt[4]{b},\sqrt[4]{c})$ is in the same sorted order as $(\sqrt[4]{a^2+1},\sqrt[4]{b^2+1},\sqrt[4]{c^2+1})$.
By Rearrangement inequality, we have:
$$\begin{align}&\sqrt[4]{(a^2+1)b}+\sqrt[4]{(b^2+1)c}+ \sqrt[4]{(c^2+1)a}\\
\le & \sqrt[4]{(a^2+1)a}+\sqrt[4]{(b^2+1)b}+ \sqrt[4]{(c^2+1)c}\tag{*1}
\end{align}$$
Notice
$$\frac{d^2}{dx^2} \sqrt[4]{(x^2+1)x} =
-\frac{3\,{\left( x^2 -1\right) }^{2}}{16\,{x}^{\frac{7}{4}}\,{\left( {x}^{2}+1\right) }^{\frac{7}{4}}} \le 0$$
$\sqrt[4]{(x^2+1)x}$ is a concave function in $x$ and by Jensen's inequality, we have:
$$
\text{R.H.S of } (*1)
\le 3 \left[ \left( \left(\frac{a+b+c}{3}\right)^2 + 1\right) \left(\frac{a+b+c}{3}\right)\right]^{\frac14}
= 3\sqrt[4]{2}
$$
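A random spot check of the inequality over the simplex $a+b+c=3$ (illustration, not a proof):

```python
import random

def lhs(a, b, c):
    return ((a*a + 1)*b)**0.25 + ((b*b + 1)*c)**0.25 + ((c*c + 1)*a)**0.25

bound = 3 * 2**0.25
for _ in range(10**5):
    # sample a, b, c > 0 with a + b + c = 3 ("broken stick")
    x, y = sorted((random.uniform(0, 3), random.uniform(0, 3)))
    a, b, c = x, y - x, 3 - y
    if min(a, b, c) > 0:
        assert lhs(a, b, c) <= bound + 1e-12
print("no counterexample found; equality at a = b = c = 1")
```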
|
H: Strange application of the mean value theorem
I recently came across the following lemma.
Lemma. Suppose $\Omega_n^N = [-\pi n, \pi n]^N$ and $f \in L^1(\Omega_n^N)$ is $2\pi n$-periodic. Let $A$ be a subset of $[-\pi n, \pi n]$. There exists some $\alpha_N \in [-\pi n, \pi n] \setminus A$ such that \begin{equation} \left\vert \int_{[-\pi n, \pi n]^{N-1} \times \lbrace \alpha_N \rbrace} f(x) dx \right\vert \leq \frac{1}{2 \pi n -\vert A \vert } \int_{\Omega_n^N} \vert f(x) \vert dx \tag{1} \end{equation}
Proof. Integrate the l.h.s. of (1) for $\alpha_N \in [-\pi n, \pi n] \setminus A$ and use the mean-value theorem.
Now there are a couple of things that seem strange.
First of all the lemma seems to be plain wrong if you choose $A=[-\pi n, \pi n]$. There is certainly no $\alpha_N \in \emptyset$. Let us therefore suppose that the author meant "proper subset".
Now let's do as the author tells us: We call the l.h.s of (1) \begin{equation} g(\alpha) := \left\vert \int_{[-\pi n, \pi n]^{N-1} \times \lbrace \alpha \rbrace} f(x) dx \right\vert \end{equation} and integrate \begin{equation} I:= \int_{[-\pi n, \pi n] \setminus A} g(\alpha) d \alpha \end{equation} How can one apply the mean value theorem here? $[-\pi n, \pi n] \setminus A$ is not an interval, it need not even be connected.
Can anyone tell me what the author actually meant, when he gave the idea of his proof? Any help would be greatly appreciated!
AI: I think what the authors call the "mean-value theorem" is something like this: if $B$ is a measurable subset of $\mathbb R$ (with $\vert B\vert >0$) and $g:B\to \mathbb R$ is in $L^1(B)$, then one can find $\alpha\in B$ such that $g(\alpha)\leq \frac{1}{\vert B\vert} \int_B g(t)dt$. This is easy to check by contradiction.
Concerning your lemma, you may assume that $f\geq 0$ because in the left-hand side $\left\vert \int\right\vert$ is $\leq \int\vert\;\; \vert$. In other words, you may omit all absolute values.
Instead of $\int_{[-n\pi,n\pi]^{N-1}\times\{ \alpha\}} f(x) dx$, I will write $\int_{[-n\pi,n\pi]^{N-1}} f(x,\alpha)dx$, which seems more correct.
Applying the "mean-value theorem" to $B=[-n\pi,n\pi]\setminus A$ (which requires that $\vert A\vert\neq 2n\pi$) and $g(t)=\int_{[-n\pi,n\pi]^{N-1}} f(x,t)dx$, this gives the result because $$\int_B g(t)dt=\int_{[-n\pi,n\pi]^{N-1}\times B }f(x,t)dxdt\leq \int_{\Omega_n^N} f(u)du\, .$$
(Actually, I should be a little bit more precise: $g$ may take infinite values, but it is finite almost everywhere. Alternatively, the "mean-value theorem" still holds with functions with values in $[0,\infty]$).
|
H: Electrodynamics in general spacetime
Let $M\cong\mathbb{R}^4_1$ be the usual Minkowski spacetime. Then we can formulate electrodynamics in a Lorentz-invariant way by giving the EM-field $2$-form $\mathcal{F}\in\Omega^2(M)$ and reformulating the homogeneous Maxwell equations as
$$d\mathcal{F} = 0$$
Then the Poincaré lemma tells us that this equation (i.e., that $\mathcal{F}$ is closed) implies that $\mathcal{F} = d\mathcal{A}$ for some $\mathcal{A}\in\Omega^1(M)$ (i.e., $\mathcal{F}$ is exact). $\mathcal{A}$ is the usual potential of ED. This automatically gives us the gauge symmetry $\mathcal{A}'=\mathcal{A}+d\chi$, for any $\chi\in C^\infty(M)$.
My question is: say we want to treat ED on a general spacetime, i.e., any semi-Riemannian $4$-manifold $(M,g)$, using the same Maxwell equations. Then if $H^2(M)\neq 0$ (the $2$nd cohomology group), we no longer have that $\mathcal{F} = d\mathcal{A}$, and we also lose the gauge symmetry, which makes things harder. How is the problem approached? How do you treat ED on a general spacetime?
AI: The key is Weyl's famous observation that electrodynamics is really (classical) $U(1)$-gauge theory. Concretely:
You generalise the global $1$-form $\mathcal{A}$ on $M$ to a connection $\nabla$ on a Hermitian line bundle $\mathcal{L} \to M$, which can locally be written as $d + \mathcal{A}$ for $\mathcal{A}$ the so-called connection $1$-form.
The differential $\mathcal{F} := d\mathcal{A}$ of the global $1$-form $\mathcal{A}$ is replaced by the curvature $$\mathcal{F} := d\mathcal{A} + \mathcal{A} \wedge \mathcal{A} = d\mathcal{A}$$ of the connection $\nabla$, which is still a global $2$-form and still satisfies $d\mathcal{F} = 0$ by the Bianchi identity as applied to a connection on a line bundle.
Gauge symmetry in this context now still holds, for the curvature $2$-form $\mathcal{F}$ is unchanged if you replace $\nabla$ by $\nabla + df$ for $f \in C^\infty(M)$.
This all, of course, fits extremely nicely with your observation about $H^2(M)$, for the assignment
$$
(\text{line bundle $\mathcal{L} \to M$}) \mapsto (\text{curvature $2$-form $\mathcal{F}$ of a connection $\nabla$ on $\mathcal{L}$})
$$
induces a homomorphism
$$
\operatorname{Pic}(M) \to H^2_{\mathrm{dR}}(M) \cong H^2(M,\mathbb{R}),
$$
where the Picard group $\operatorname{Pic}(M)$ is the abelian group of isomorphism classes of line bundles on $M$, with
$$
[\mathcal{L}] + [\mathcal{L}^\prime] := [\mathcal{L} \otimes \mathcal{L}^\prime], \quad -[\mathcal{L}] := [\mathcal{L}^\ast];
$$
then $H^2(M,\mathbb{R}) = 0$ if and only if every closed $2$-form on $M$ is exact (i.e., $\mathcal{F} = d\mathcal{A}$ for some global $1$-form $\mathcal{A}$), if and only if every line bundle is trivial or torsion (so that, necessarily, $\nabla = d + \mathcal{A}$ for a global $1$-form $\mathcal{A}$). The moment that $H^2(M,\mathbb{R}) \neq 0$, however, you do have non-trivial line bundles and non-exact closed $2$-forms, so that you really do need to consider your spacetime $M$ together with a potentially non-trivial line bundle $\mathcal{L} \to M$.
Addenda
The cohomology group $H^2(M,\mathbb{R})$ contains an isomorphic copy of $H^2(M,\mathbb{Z})/\operatorname{Tor}(H^2(M,\mathbb{Z}))$ (via the UCT) as a lattice. It’s a non-trivial consequence of the Chern–Weil theory that our homomorphism $\operatorname{Pic}(M) \to H^2(M,\mathbb{R})$ not only maps into this lattice but actually recovers the Chern class $\operatorname{Pic}(M) \to H^2(M,\mathbb{Z})$ modulo torsion.
Given a line bundle $\mathcal{L}$, a connection $\nabla$ on $\mathcal{L}$ is the gauge potential of an electromagnetic field in that topological sector, and the curvature $\mathcal{F}$ of $\nabla$ is the field strength of that electromagnetic field.
The class of the line bundle $\mathcal{L}$ in $\operatorname{Pic}(M) \cong H^2(M,\mathbb{Z})$ is called the topological charge or topological defect. If $H^2(M,\mathbb{Z}) \cong \mathbb{Z}$, then, in suitable units, the integer corresponding to $[\mathcal{L}]$ can be interpreted as a monopole charge à la Dirac. Indeed, the Dirac monopole can be interpreted as a certain connection on a certain non-trivial line bundle on $M = \mathbb{R}^{1,3} \setminus \text{(timelike worldline)}$.
|
H: Is there a nonnormal operator with spectrum strictly continuous?
Let $H$ be an infinite dimensional separable Hilbert space.
Definition : An operator $A \in B(H)$ is normal if $AA^{*} = A^{*}A$.
Definition : The spectrum $\sigma(A)$ of $A \in B(H)$, is the set of all $\lambda \in \mathbb{C}$ such that $A - \lambda I$ is not bijective.
It decomposes as follows:
- Point spectrum: $\sigma_{p}(A) = \{\lambda \in \mathbb{C} : A - \lambda I \text{ not injective} \}$
- Continuous spectrum: $\sigma_{c}(A) = \{\lambda \in \mathbb{C} : A - \lambda I \ \text{ injective with a dense nonclosed range} \}$
- Residual spectrum: $\sigma_{r}(A) = \{\lambda \in \mathbb{C} : A - \lambda I \ \text{ injective with a nondense range} \}$
Examples:
Let $S$ be the bilateral shift defined on $H = l^{2}(\mathbb{Z})$ by $S.e_{n} = e_{n+1} $.
Its spectrum is strictly continuous : $\sigma(S) = \sigma_{c}(S) = \mathbb{S}^{1}$.
It's also a unitary operator ($SS^{*} = S^{*}S = I$), so a fortiori a normal operator.
Let $T$ be the unilateral shift defined on $H = l^{2}(\mathbb{N})$ by $T.e_{n} = e_{n+1} $.
Its spectrum is not strictly continuous because $0 \in \sigma_{r}(T)$.
It's a nonnormal operator because $[T^{*},T].e_{0} = e_{0}$.
Is there a nonnormal operator with spectrum strictly continuous ?
Bonus questions : Can we exclude the compact operators ? How classify these operators ?
AI: Let's try the Volterra operator $(Vf)(t)=\int_0^tf(x)dx$ on $H=L^2[0,1].$ It is quasinilpotent (i.e., its spectral radius is 0), so $\sigma(V)=\{0\}.$ It is injective ($Vf=0$ forces $f=0$ a.e.) but not surjective, since every function in its range is continuous and vanishes at $0$. Its range is nevertheless dense: it contains the set $\{f\in C^1[0,1]\mid f(0)=0\}.$ (To see this, note that $B=\{\sin\pi kx,\ k\in\mathbb N\}$ is an orthogonal base in $L^2[0,1].$) It means that $0$ belongs to the continuous spectrum. Finally, $V$ is not normal: with $(V^*f)(t)=\int_t^1 f(x)dx$, a direct computation shows $V^*V\neq VV^*$.
|
H: Name for Cartesian Product variant that does not return an empty set if one of the sets is empty
I am looking for the name of this mathematical operation that behaves very similar to Cartesian Product.
Given:
A = {1,2}
B = {3,4}
C = {5}
D = ∅
The operation (which I'll write as *), behaves like this:
A * B = {1,2} * {3,4} = {(1,3),(1,4),(2,3),(2,4)} // Same as a Cartesian Product
A * C = {1,2} * {5} = {(1,5),(2,5)} // Same as a Cartesian Product
A * D = {1,2} * ∅ = {1,2} // Different from a Cartesian Product
D * A = ∅ * {1,2} = {1,2} // Different from a Cartesian Product
Furthermore:
A * B * C = {1,2} * {3,4} * {5} = {(1,3,5),(1,4,5),(2,3,5),(2,4,5)} // Same as a Cartesian Product
A * C * D = {1,2} * {5} * ∅ = {(1,5),(2,5)} // Different from a Cartesian Product
A * D * D = {1,2} * ∅ * ∅ = {1,2} // Different from a Cartesian Product
Generally, this is the formal definition of that operator:
A * B = A × B if A ≠ ∅ and B ≠ ∅
A * B = A if B = ∅
A * B = B if A = ∅
A * B = ∅ if A = ∅ and B = ∅
What's the correct name for that operation?
AI: I am unaware of this sort of operations. But one can easily define it to be $$A*B=\begin{cases} A\times B & \text{if } A\times B\neq\varnothing\\ A\cup B & \text{if }A\times B=\varnothing\end{cases}$$
That is a legitimate mathematical definition, even if a bit peculiar generally.
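A direct Python transcription of that definition (binary only; iterated products like A * B * C would additionally need tuple flattening, which I leave out):

```python
from itertools import product

def star(A, B):
    """A * B: the Cartesian product, except an empty factor is dropped."""
    if A and B:
        return set(product(A, B))
    return set(A) | set(B)    # at least one of them is empty

A, B, C, D = {1, 2}, {3, 4}, {5}, set()
print(star(A, B))   # {(1, 3), (1, 4), (2, 3), (2, 4)}
print(star(A, D))   # {1, 2}
print(star(D, A))   # {1, 2}
print(star(D, D))   # set()
```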
|
H: why $(r+I)(s+I) = rs + I$ in the quotient ring $R ?$
Say $R$ is a commutative ring and $I\subseteq R$ is an ideal. Let us consider the quotient $R/I$. It is created by taking every element $a\in R$ and adding all the elements of $I$ to it. The elements of $R/I$ are of the form $a+I$, for $a\in R$.
Now $$(a+I).(b+I)=(a+i_{1}).(b+i_{2}),\forall (i_{1},i_{2})\in I \times I$$
$$(a+i_{1}).(b+i_{2})=ab+a.i_{2}+b.i_{1}+i_{1}i_{2}=ab+I$$
We know that $a.i_{2}+b.i_{1}+i_{1}i_{2}\in I$.
If $(a+I).(b+I)=ab+I$, then taking suitable $(i_{1},i_{2})\in I\times I$, we should be able to prove $(a+i_{1}).(b+i_{2})=ab+i_{3},\forall i_{3}\in I$. However, can every element in $I$ be generated by taking suitable $i_{1},i_{2}\in I$? And if not, is that the reason why $(a+I).(b+I)$ has to be defined as equal to $ab+I$ in violation of the distributive property of ring elements? This has confused me for a long time.
EDIT: I figured addition does not have to be defined as such because $(a+i_{1})+(b+i_{2})=a+b+i_{1}+i_{2}$, where $i_{1}+i_{2}\in I$. In fact, every element $i\in I$ can be constructed by taking $i_{1}=0$ and $i_{2}=i$. Hence, $(a+I) + (b+I)=a+b+I$ naturally. This is the logic I followed to determine that $(a+I).(b+I)$ doesn't quite work as nicely. I'm not sure if this logic is flawed or not, as I haven't had the opportunity to ask anybody.
Thanks for your help in advance!
AI: The multiplication in the quotient ring is not defined by
$$
(a+I)(b+I)=
\{\,(a+i_{1})(b+i_{2}): (i_{1},i_{2})\in I \times I\,\}
$$
but by
$$
(a+I)(b+I)=ab+I.
$$
This is a definition, nothing else. Why do we define it in this way? Because it does what we want, together with
$$
(a+I)+(b+I)=(a+b)+I,
$$
namely it makes $R/I$ into a ring and
\begin{align}
\pi\colon R &\to R/I\\
a&\mapsto a+I
\end{align}
a ring homomorphism.
The only thing to show is that the definitions are “correct”: if $a_1+I=a_2+I$ and $b_1+I=b_2+I$, then we should have
$$
a_1b_1+I=a_2b_2+I
$$
that is
$$
a_1b_1-a_2b_2\in I.
$$
This is true because
$$
a_1b_1-a_2b_2=a_1b_1-a_1b_2+a_1b_2-a_2b_2=
a_1(b_1-b_2) + (a_1-a_2)b_2
$$
and, by hypothesis, $a_1-a_2\in I$ and $b_1-b_2\in I$; apply the properties of $I$ to end the proof. Similarly for the addition. The verification of the ring properties is easy.
|
H: Why is $(0,1)\cup \left\{ 3 \right\}$ disconnected?
I have just learned the definition of connectedness and wikipedia gives an example of a disconnected set: $(0,1)\cup \left\{ 3 \right\}$ (https://en.wikipedia.org/wiki/Connected_space#Examples). Why is it disconnected? I need a bit clarification on this. Thanks for any help!
AI: In the subspace topology inherited from $\mathbb{R}$, the space $X = (0,1)\cup\{3\}$ is the union of two disjoint nonempty open sets, $(0,1)$ and $\{3\}$ (note that $\{3\} = X\cap(2,4)$ is open in $X$). Hence $X$ is disconnected.
It is also arcwise disconnected because no path can be found connecting any $x\in(0,1)$ with $3$.
|
H: Equation of motion Pendulum using $w=e^{ix}$
I'm working with the equation of motion for a pendulum as follows:
$$x''+ \frac{g}{l} \sin (x)=0$$
Where $x$ is the angle between the pendulum and the vertical rest position.
I am required to use the complex variable $w=e^{ix}$ to rewrite the equation of motion in the form $(w')^2= Q (w)$, where $Q$ is a cubic polynomial. So in the form $(u')^2=u^3 + au + b$, with $a$, $b$ constants.
I'm not sure where to start with the question, can anybody help me get going?
Homework help
AI: Multiply the equation through by $x'$ and integrate once to get
$$x'^2-\frac{2 g}{\ell} \cos{x} = C$$
where $C$ is a constant of integration. Now, if $w=e^{i x}$, then $\cos{x}=(w+w^{-1})/2$ and
$$w' = i x' e^{i x} \implies x'=-i w'/w$$
Then the equation is equivalent to
$$-\frac{w'^2}{w^2} - \frac{g}{\ell} \left (w+\frac{1}{w}\right)=C$$
Then, multiplying through by $-w^2$, we get
$$w'^2+\frac{g}{\ell} w^3 + C w^2+\frac{g}{\ell} w=0$$
which is not quite the form specified, but is an equation of the form $w'^2+Q(w)=0$, where $Q$ is a cubic in $w$.
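One can verify the cubic relation numerically along a trajectory (a sketch; $g/\ell$ and the initial data are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 2.0

def rhs(t, y):                       # y = (x, x')
    return [y[1], -g_over_l * np.sin(y[0])]

sol = solve_ivp(rhs, (0, 10), [0.7, 0.0], rtol=1e-10, atol=1e-10)
x, xp = sol.y
C = xp[0]**2 - 2*g_over_l*np.cos(x[0])   # the first integral, fixed by the data

w = np.exp(1j * x)
wp = 1j * xp * w
residual = wp**2 + g_over_l*w**3 + C*w**2 + g_over_l*w
print(np.max(np.abs(residual)))          # ~0 up to integration tolerance
```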
|
H: Integrating $\int_{-1}^{1}(1+x)^m(1-x)^ndx$
Stimulated by the physics background of path integral, I want to know how to calculate this integral
$$\int_{-1}^{1}(1+x)^m(1-x)^ndx,$$
where $m$, $n$ are positive integers. I'm struggling with it.
Different kinds of approaches are welcome, thanks in advance!
AI: Let us denote
$$I(n,m)=\int_{-1}^{1}(1+x)^m(1-x)^ndx$$
so by integration by parts we find
$$I(n,m)=\frac{m}{n+1}I(n+1,m-1)$$
and then by induction
$$I(n,m)=\frac{m!n!}{(m+n)!}I(n+m,0)=\frac{m!n!}{(m+n+1)!}2^{n+m+1}$$
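A spot check with SymPy for particular exponents (sketch):

```python
from sympy import symbols, integrate, factorial, simplify

x = symbols('x')
m, n = 3, 5
I = integrate((1 + x)**m * (1 - x)**n, (x, -1, 1))
formula = factorial(m) * factorial(n) * 2**(m + n + 1) / factorial(m + n + 1)
print(simplify(I - formula))   # 0
```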
|
H: Why does $L^1(\mathbb{R})$ not have the C* property?
Consider the space $L^1(\mathbb{R})$, which is a Banach algebra when taking the convolution as the algebra product, and which even possesses the B*-property if one takes $f^*(t)=\overline{f(-t)}$ as the involution. But $L^1(\mathbb{R})$ does not have the C* property.
What would be a counter example to show that $L^1(\mathbb{R})$ does not have the C* property?
AI: Take for example any odd, continuous and real-valued $f\in L^1(\mathbb R)\cap L^2(\mathbb R)$ which is positive on $(0,\infty)$. (The requirement $f\in L^2$ ensures that $f*f^*(x)$ is defined everywhere). Then $f^*=-f$, and so
$$
\vert f*f^*(x)\vert=\left\vert \int_{\mathbb R} f(t)f(x-t)\, dt\right\vert
\leq \int_{\mathbb R} \vert f(t)\vert \,\vert f(x-t)\vert \, dt
$$
for all $x\in\mathbb R$. Moreover, we have $strict$ inequality for any $x> 0$ because the function $t\mapsto f(t)f(x-t)$ changes sign at $t=x$. It follows that
$\Vert f*f^*\Vert_{L^1}=\int_{\mathbb R} \vert f*f^*(x)\vert dx$ is $strictly$ smaller than $\int_{\mathbb R}\int_{\mathbb R} \vert f(t)\vert \,\vert f(x-t)\vert \, dt dx=\Vert f\Vert_{L^1}^2$
|
H: Calculate the limit: $\lim_{n \to \infty}\int^{1}_{0}{\frac{e^x}{1+x^n}\mbox{d}x \mbox{?}}$
How can I calculate the following limit:
$$\lim_{n \to \infty}\int^{1}_{0}{\frac{e^x}{1+x^n}\mbox{d}x \mbox{?}}$$
Is it all right if I say that the initial limit is equivalent to $$\int^{1}_{0}{\lim_{n \to \infty}{\frac{e^x}{1+x^n}}\mbox{d}x}=\int^{1}_{0}{e^x}\mbox{d}x=e-1\mbox{?}$$
Do I have to prove that the quantity under the integral sign is bounded?
thanks :)
AI: If you don't want to use any theorems regarding swapping integration with limits you can do it elementarily. Observe $e-1=\int_0^1 e^xdx$. So you need to prove that
$$
\int_0^1 (e^x/(1+x^n)-e^x)dx =-\int_0^1 \frac{x^n}{1+x^n}e^xdx
$$
tends to zero. The absolute value of this is smaller than $e\int_0^1 x^ndx=e/(n+1)$.
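The bound $e/(n+1)$ is easy to watch numerically (quick check with SciPy):

```python
import math
from scipy.integrate import quad

for n in [1, 5, 20, 100]:
    val, _ = quad(lambda x: math.exp(x) / (1 + x**n), 0, 1)
    print(n, val)
print("limit:", math.e - 1)    # ~1.71828; the integrals approach it like e/(n+1)
```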
|
H: Help for evaluating complicated integral $\int \frac 1 {x^n-x} dx$
I have this complicated integral to evaluate :
$$\int \dfrac 1 {x^n-x} dx$$
I'm struggling to evaluate this.
My attempt :
$$\int \dfrac1x \cdot \dfrac 1 {x^{n-1}-1} dx$$
Now, I try to apply integration by parts. For that, I use : $V=\large\dfrac1x$ and $U=\large\dfrac 1 {x^{n-1}-1}$
But that doesn't take me anywhere. It just gives me an even more complicated expression to evaluate.
Help would be appreciated.. Thank you..
AI: Hint :
Let I=$\large\int \dfrac 1 {x^n-x} dx$
I=$\large\int \dfrac 1{x} \cdot \dfrac 1 {x^{n-1}-1} dx$
I=$\large\int \dfrac {x^{n-2}}{x^{n-1}} \cdot \dfrac 1 {x^{n-1}-1} dx$
let $\large x^{n-1}=t$ $\implies dt=(n-1)x^{n-2}\ dx$
so, I=$\large \dfrac{1}{n-1}\cdot \int \dfrac {dt}{t\cdot (t-1)} $
use partial fractions now..
You're done!!
|
H: If $(W,<)$ is a well-ordered set and $f : W \rightarrow W$ is an increasing function, then $f(x) ≥ x$ for each $x \in W$
I could use a hand understanding a proof from Jech's Set Theory.
Firstly, note that Jech defines that an increasing function $f : P \rightarrow Q$ is a function that preserves strict inequalities (where $P$ and $Q$ are partially ordered sets). We then have
Lemma 2.4. If $(W,<)$ is a well-ordered set and $f : W \rightarrow W$ is an increasing
function, then $f(x) ≥ x$ for each $x \in W$.
I'm having difficulty understanding the proof. It goes like so.
Proof. Assume that the set $X = \{x \in W : f(x) < x\}$ is nonempty and let $z$
be the least element of $X$. If $w = f(z)$, then $f(w) < w$, a contradiction.
I get the first sentence: we're doing a proof by contradiction, which is why we assume $X$ is nonempty, and since $W$ is well-ordered, we can conclude that $X$ has a least element, which we call $z$.
What's going on in the second sentence?
AI: In the proof we pick $z$ to be the least, then we set $w = f(z)$, but we know from the definition of the set that $f(z) < z$, so $w < z$. However, $f$ is an (strictly) increasing function, that is $w < z$ implies $f(w) < f(z) = w$. This in turn implies that $w$ should be a part of $X$, but $w < z$ and contradicts that we have picked the least element (which we could because $W$ is well-ordered).
Alternative (and I think that a bit more intuitive) view on this proof is that an existence of element $z$ such that $$f(z) < z$$ implies that
$$z > f(z) > f(f(z)) > f^{(3)}(z) > f^{(4)}(z) > \ldots$$
is an infinite strictly decreasing sequence, which is a contradiction due to the fact that $W$ is well-founded.
I hope this helps ;-)
|
H: Analyticity of C*-algebra valued functions
Let $\mathcal{A}$ be a unital C*-algebra and consider a function $f:\mathbb{C} \rightarrow \mathcal{A}$. What is an accessible tool to prove or disprove that $f$ is analytic, i.e. can be locally expanded in a power series of $z$?
Take as a concrete example $f_A(z) = \exp(zA-\overline{z}A^*)$ for some fixed $A\in \mathcal{A}$. Is $f_A$ an analytic function?
AI: The holomorphic functional calculus implies that any function that you would "expect" to be analytic is analytic and that the power series is exactly what it "should" be. For example, $z \mapsto e^z$ is analytic and has power series $\sum_{n=0}^\infty \frac{z^n}{n!}$.
However, we would not expect your example $f_A(z)=\exp(z A - \bar z A^*)$ to be analytic. To see this, let's specialize to the case where $\mathcal A=\mathbb C$ and $A=1$. Then your function is $$f(z)=\exp(2i \operatorname{Im} z),$$ where $\operatorname{Im} z$ is the imaginary part of $z$. Power series should be in $z$, not in $\operatorname{Im} z$ or $\operatorname{Re} z$. Indeed, when we check whether the Cauchy-Riemann equations are satisfied, we see that the function $f$ is not analytic.
For details on the holomorphic functional calculus, you can take a look at Conway's A Course in Functional Analysis, Section VII.4.
If you are looking for conditions equivalent to analyticity, you can find a couple in Takesaki's Theory of Operator Algebras III, google books link. (There are surely better references available, but this is what I have by my side.)
|
H: Self-studying through an undergraduate math course. Need Tao-like textbooks!
I'm a physics undergraduate student who always enjoyed math, and briefly studied it at a university but for various reasons (laziness, youth) gave up and changed 'majors'. But I always wanted to go through an undergraduate math course in my own time, unconstrained by class, etc. Now that I've passed all my exams I was thinking of doing something over the summer.
I had a look at Terry Tao's free lecture notes from an analysis course he taught and I was absolutely shocked at how good they are. I love the verbosity and how he motivates every bit of information. From what I read, he wrote an Analysis textbook which I intend to get.
My question is, are there any other similar (in the sense of their exposition) textbooks for subjects such as Topology, Algebra (Linear and Abstract - from my brief studies I've come to believe that I'm an absolute algebra antitalent, but I'm hoping it's because I didn't have anything else than fairly dry lecture notes to study from, and let's be honest, I didn't study very much) and of course more advanced Analysis, Probability and Statistics?
AI: I too am a physics undergrad (in progress) specializing in electronics. And I am also somewhat inclined towards mathematics.
Here are my suggestions for self studying:
Abstract Algebra : Charles Pinter offers an amazing textbook which is also brilliantly capturing. You will soon start reading up other topics in AA. Then you can check out Artin's book and then I.N.Herstein's classic, Topics in Algebra.
Linear Algebra - Gilbert Strang's long standing edifice has become the standard textbook for Linalg in many universities. But Strang's book is not rigorous and scarcely contains any proofs. For a rigorous approach (but with a lot of motivation) look for Poole's "Linear Algebra". You could consider Paul Halmos' Finite Dimensional Vector Spaces as a bit of an advancement, although its a classic.
Topology: James Munkres is considered to be a Bible kind of book by many students. However, I started reading topology from the book by the Hungarian mathematician, Ákos Császár.
Analysis: Hmm. Walter Rudin's classic on real analysis is another standard text in universities; however, I found many other books to supplement it. Kolmogorov's book (out of print, but you can get a PDF copy) is interesting. You could take up Pure Mathematics by G.H. Hardy: the book is available on Gutenberg. Also, Tom Apostol's books are unmissable.
Probability and Statistics: W. Feller's probability texts, two volumes are my constant reference books on Prob. And Stat. They are very well written and require a little bit of patience.
Number Theory: Hardy and Wright, Ramanujan's lost notebooks, G.Andrews, D.M.Burton,Waclaw Sierpinski and many many more classics.
On a general basis, George Polya's books: How to solve it?, Mathematical Reasoning and induction, etc. have been my choice and they still are on my bookshelf.
Happy Reading!
EDIT: Found this link, contains some good books that I have forgotten to mention, Books that every student "needs" to go through
|
H: How to integrate $\int_0^x \frac{t}{(1-t)^2}dt $?
How do you integrate $\int_0^x \frac{t}{(1-t)^2}dt $?
My approach was to use partial fractions for $\frac{t}{(1-t)^2}$. I thus have to rewrite $\frac{t}{(1-t)^2}$ as $\frac{t}{(t-1)^2}$(right?) and then integrate:
$\frac{1}{(t-1)}+\frac{1}{(t-1)^2}$
But then the first term gives me: $ \ln(t-1)\Big|_0^x $, which seems to be wrong?
(The derivative of $\ln (1-x)+\frac{x}{1-x}=\frac{x}{(1-x)^2}$. So that should be the desired result.)
AI: Using your method of partial fractions:
$$\int{\left(\frac{1}{t-1}+\frac{1}{(t-1)^2}\right)}dt=\ln|t-1|-\frac{1}{t-1}+C$$
$$\begin{align}
\int_0^x{\left(\frac{1}{t-1}+\frac{1}{(t-1)^2}\right)}dt&=\ln|x-1|-\frac{1}{x-1}-\ln(1)+\frac{1}{0-1} \\
&= \ln|x-1|-\frac{1}{x-1}-1 \\
&= \ln|x-1|-\frac{x}{x-1}
\end{align}$$
|
H: Differential topology book
I want to self-study differential topology. I'd like to hear suggestions from you about appropriate books that I could use while studying.
Note: I have not studied differential topology before. I self-studied general topology and some algebraic topology before.
Thank you
AI: A standard introductory textbook is Differential Topology by Guillemin and Pollack. It was used in my introductory class and I can vouch for its solidity. You might also check out Milnor's Topology from the Differentiable Viewpoint and Morse Theory. (I have not read the first, and I have lightly read the second.)
For other books on topology, Hatcher has a nice list here. You may be interested in books like Bott-Tu or others listed under item III, manifold theory.
|
H: Convergence of a series containing logarithm
I'm stuck on the following problem: for what values of $\alpha$ the following sum is convergent:$$S(\alpha)=\sum_{k=2}^{+\infty}\frac{1}{\ln(k)^\alpha}$$
If the series is convergent for some exponent, there will be a constant $M$ for which
$S(\alpha)\lt\infty$ whenever $\alpha\ge M$, but how is it possible to calculate $M$?
Thanks.
AI: Note that, for all positive values of $\alpha$, $(\ln{k})^{\alpha} < k$ for sufficiently large $k$. This is because $x^{\alpha} < e^x$ when $x > 0$ for sufficiently large $x$. Thus, there is no value of $\alpha$ for which the sum converges, by comparison with the divergent sum having summand $1/k$ (i.e., the harmonic sum). Clearly, there is also no negative value of $\alpha$ for which the sum converges as well.
|
H: If $\gcd(g(x),P_A(x)) \neq 1$ then prove that $g(A)$ is not invertible
We had this question on our exam today, and my professor told me my proof is very wrong mathematically.
Given $A \in M_{n \times n } (\mathbb C)$, a polynomial $g(x) \in \mathbb C[x]$, and that $\gcd(g(x),P_A(x)) \neq 1$ (where $P_A(x)$ is the characteristic polynomial of $A$), prove that $g(A)$ is not invertible.
So I said that if their gcd is not one, they share a common divisor. From Cayley-Hamilton we know that $P_A(A) = 0$, and since we are in $\mathbb C$, this element which is their gcd takes $A$ to zero. Thus, since this element is shared by both $P_A(x)$ and $g(x)$, it also sends $g$ to zero, giving that $g(A) = 0$, which in matrix terms is the zero matrix, which is not invertible.
now my professor told me that is totally wrong, and I agree with him.
What is the proper solution?
AI: Notice that if $\lambda$ is an eigenvalue of $A$, then $g(\lambda)$ is an eigenvalue of $g(A)$. Since $\gcd(g(x),P_A(x))\neq 1$ and we work over $\mathbb C$, there is an eigenvalue $\lambda$ of $A$ (a root of $P_A$) which is also a root of $g(x)$. Then $g(\lambda)=0$ is an eigenvalue of $g(A)$, hence $g(A)$ is not invertible.
|
H: Why is it considered incorrect to use the variable of integration as a boundary of integration?
Recently, a question was asked about limits of integration and the OP made the following "typo":
$$I = \int_0^x f(x) \, dx$$
as opposed to
$$I = \int_0^x f(t) \, dt $$
I commented that this is typically frowned upon and can make for some odd results. It occurred to me, however, that I wasn't sure why this was formally incorrect, only that I had been taught that it shouldn't be done. I couldn't think of a reason why it's considered best practice to differentiate between the variable of integration and variables in the limits of integration.
Any thoughts / reasoning behind this?
AI: Let's consider a simple case: $\int_0^x x\; dx$. Applying the usual method,
$\int_0^x x\; dx = \dfrac{x^2}{2}\big|_0^x = \text{?}$
The lower term is $\dfrac{x^2}{2}$ evaluated at $x=0$, and this is obviously $0$. The upper term is more problematical: it is $\dfrac{x^2}{2}$ evaluated at $x=x$, and it is unclear what this means.
Suppose the integral was $\int_0^{2x} x\; dx$. The upper term would then be $\dfrac{x^2}{2}$ evaluated at $x=2x$, and it is even more unclear what this means.
Good notation makes ideas clear and helps avoid mistakes; as shown here, using the same variable for the variable of integration and a limit of integration can make things unclear.
|
H: Solve equation with variables inside and outside cos
I have an issue with following problem:
A line needs to be intersected with a cosine function.
My line would be
$\displaystyle l_1 = \binom{x_1}{y_1} + s\binom{u_1}{v_1}$
My function for a curved line in 2D is
$\displaystyle g: y_2 = d + a - a\cos(f x_2)$
I need the first intersection (the smallest $s > 0$). How do I do that? I am stuck at
$s v_1 + a \cos(f (x_1 + s u_1)) = y_1 - a - d$
and have no idea how to solve for $s$ from here.
AI: Your expression is of the form $s+A\cos(b+c\,s) = d$. Generally, you can't give a closed-form expression for the solution; it can only be found numerically. By a linear transformation, this can be written as $p x+q\cos(x) = r$.
When mathematicians noticed that they often needed to solve $x e^x = y$ for $x$, they invented a function (the Lambert W function) that solved it. Similarly, you could create a function (the Roth C function) that solves $\cos(x)/x = y$ for $x$ and give the solution to your equation in terms of $C$. Until such a function exists, you will have to be content with solving the equation numerically.
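As a concrete illustration of the numerical route, here is a minimal sketch (assuming SciPy's brentq root finder; the parameter values are made up, not from the question) that scans for the first sign change of the equation and then refines it:

    import math
    from scipy.optimize import brentq

    # Illustrative parameters (assumptions, not from the question).
    x1, y1, u1, v1 = 0.0, 1.0, 1.0, 0.5
    a, d, f = 1.0, 0.2, 2.0

    def g(s):
        # g(s) = 0 exactly at an intersection, per the equation above.
        return s * v1 + a * math.cos(f * (x1 + s * u1)) - (y1 - a - d)

    # Scan in small steps for the first sign change, then refine with brentq.
    s_prev, step = 0.0, 1e-3
    s = step
    while g(s_prev) * g(s) > 0 and s < 100:   # bail out if no crossing is found
        s_prev, s = s, s + step
    print(brentq(g, s_prev, s))               # smallest s > 0 with g(s) = 0

Note that a tangency (a root without a sign change) would slip through this scan, so a robust implementation needs a finer test.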
|
H: Prove that an algorithm cannot reach a given goal
We are given an algorithm that, in each step, takes a set $\left\{a, b, c\right\}$, picks two of the variables at random, say $a$ and $b$, and changes them to $0.6a + 0.8b$ and $0.8a - 0.6b$. The initial value of the algorithm is $\left\{3, 4, 12\right\}$. Prove that the algorithm cannot reach $\left\{x, y, z\right\}$ where $|x - 4|, |y - 6|, |z - 12| < \frac{1}{\sqrt3}$.
I realized straight away that this could be solved using invariance. The invariant was easy to find: it is the function $f(a, b, c) = a^2 + b^2 + c^2$, which gives the same value at every step of the algorithm, and $f(3, 4, 12) = 169$.
It remained to prove that $x^2 + y^2 + z^2 \neq 169$. Now, I started off by adding the inequalities given in the statement:
$(x - 4)^2 + (y - 6)^2 + (z-12)^2 < 3\left(\frac{1}{\sqrt3}\right)^2 = 1$
$\implies x^2 + 16 - 8x + y^2 + 36 - 12y + z^2 + 144 - 24z < 1$
$\implies x^2 + y^2 + z^2 < 8x + 12y + 24z - 195$
This is where I was stuck. How do I prove that $8x + 12y + 24z - 195 < 169$?
AI: Good start. Now what are the minimum values that $x,y,z$ can take? Add their squares and you are home.
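For completeness, here is one way (my sketch, following the hint) that the minimum values finish the argument: each coordinate is bounded below by the given inequalities, so
$$x > 4 - \tfrac{1}{\sqrt3},\qquad y > 6 - \tfrac{1}{\sqrt3},\qquad z > 12 - \tfrac{1}{\sqrt3},$$
and since all three lower bounds are positive,
$$x^2+y^2+z^2 > \left(4-\tfrac{1}{\sqrt3}\right)^2+\left(6-\tfrac{1}{\sqrt3}\right)^2+\left(12-\tfrac{1}{\sqrt3}\right)^2 = 197 - \tfrac{44}{\sqrt3} \approx 171.6 > 169,$$
contradicting the invariant $x^2+y^2+z^2=169$.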
|
H: Problem with binomial sumation
I am trying to solve the following summation:
$\sum_{k=0}^{M-1} {M-1 \choose k} \alpha^{k} (1-\alpha)^{M-1-k} u(k)$
where:
$u(k) = 1$ , if $0 \le k < j$
$u(k) = (1-\frac{j}{k+1})$ , if $M-1 \ge k \ge j$
Is there a closed-form expression for this summation?
AI: I don't believe there is a closed form (we see sums like this in binomial distributions), but we can put this into a slightly nicer form.
Write the sum as
$$\sum_{k=0}^{j-1} \binom{M-1}{k} \alpha^k (1-\alpha)^{M-1-k} + \sum_{k=j}^{M-1} \binom{M-1}{k} \alpha^k (1-\alpha)^{M-1-k} \left (1-\frac{j}{k+1}\right)$$
Splitting off the $1$ in the factor $\left(1-\frac{j}{k+1}\right)$ and merging it with the first sum, we get
$$\sum_{k=0}^{M-1} \binom{M-1}{k} \alpha^k (1-\alpha)^{M-1-k} - j \sum_{k=j}^{M-1} \binom{M-1}{k} \alpha^k (1-\alpha)^{M-1-k} \frac{1}{k+1}$$
Note that the first term is simply $(\alpha+1-\alpha)^{M-1} = 1$ by the binomial theorem. Also note that
$$\frac{1}{k+1} \binom{M-1}{k} = \frac{1}{M} \binom{M}{k+1}$$
Then we may write the sum as
$$1-\frac{j}{M} \sum_{k=j}^{M-1} \binom{M}{k+1} \alpha^k (1-\alpha)^{M-1-k}$$
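A quick numerical sanity check of this rewriting (my own sketch; it assumes Python 3.8+ for math.comb):

    from math import comb

    def original(M, j, alpha):
        total = 0.0
        for k in range(M):
            u = 1.0 if k < j else 1.0 - j / (k + 1)
            total += comb(M - 1, k) * alpha**k * (1 - alpha)**(M - 1 - k) * u
        return total

    def rewritten(M, j, alpha):
        s = sum(comb(M, k + 1) * alpha**k * (1 - alpha)**(M - 1 - k)
                for k in range(j, M))
        return 1 - j / M * s

    print(original(12, 4, 0.3), rewritten(12, 4, 0.3))   # the two values agree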
|
H: Unambiguous terminology for domains, ranges, sources and targets.
Given a correspondence $f : X \rightarrow Y$ (which may or may not be a function) I generally use the following terminology.
$X$ is the source of $f$
$Y$ is the target
$\{x \in X \mid \exists y \in Y : xy \in f\}$ is the domain
and $\{y \in Y \mid \exists x \in X : xy \in f\}$ is the range.
However, it has come to my attention that some people use 'domain' to mean $X$, rather than the subset of $X$ described above. So, is there any standard, unambiguous terminology for these concepts that does not have this problem?
AI: No, there is no standard unambiguous terminology. The standard terms are "domain" and "range" but they are ambiguous between authors.
|
H: Evaluating $\lim_{x\to0}(x\tan x)^x$
Any suggestions for evaluating the limit $$\lim_{x\to0}(x\tan x)^x$$
I have tried writing $\tan$ as $\dfrac{\sin}{\cos}$ and then taking their Taylor series, but it didn't lead anywhere. Thanks a lot.
AI: Assuming the limit exists and is equal to $L$, take logs:
$$\log{L} = \lim_{x \to 0} x \log{(x \tan{x})}$$
Use $\tan{x} \sim x$ in this limit. Then use
$$\lim_{y \to 0} y \log{y} = 0$$
and the limit should come out easily:
$$\log{L} = \lim_{x \to 0} x \log{x^2} = \lim_{x \to 0} 2 x \log{x} = 0$$
Therefore, $L=1$.
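A quick numerical check (my own sketch) that the value $1$ is plausible:

    import math

    for x in [0.1, 0.01, 0.001, 0.0001]:
        print(x, (x * math.tan(x)) ** x)   # values approach 1 as x -> 0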
|
H: What would be the negation of these statements?
What would be the negation of
"No dogs have three legs".
I think "Some dogs do not have three legs"
"Some animals don't eat meat"
I think "All animals eat meat"
"I make the bread, or she does not make the bread"
I think "I do not make the bread, and she does make the bread"
Am I right here?
AI: The negation of the first statement would be "Some dogs have three legs." You are correct on the second and the third.
|
H: When is limit substitution valid?
This question asks to find the $\lim_{x\to0}(x\tan x)^x$ . Ron Gordon and Maisam Hedyelloo make the substitution $ x\sim \tan x$ , and it works and they get the correct answer. However, if you try to make the substitution $x \sim \arcsin x$ into $\lim_{x \to 0} \large \frac {\arcsin(x)-x}{x^3}$ you get the wrong result of $0$, when in reality the limit is equal to $\frac 16$ . So when can you use this kind of substitution? Thanks.
P.S. I have asked a similar question here , and I thought I had the answer to the question but now I see that it is not complete.
AI: You can substitute whenever the function you are limiting is continuous at the limiting point.
The function $\frac{\arcsin(x)-x}{x^3}$ in your example isn't continuous at $x=0$ (it's not defined there).
By the way, you don't even get $0$ when you "substitute" into that equation: you get $\frac{0}{0}$ which is indeterminate.
|
H: working out a percentage from my email open rates
I have the following numbers:
recipients: $95$
opens: $39$
bounces: $2$
How would I get the percentage value per open?
According to this post: http://www.campaignmonitor.com/blog/post/2395/all-about-email-open-rates
I would do the following:
open rate = opens / emails sent - bounces
AI: I should think open rate = opens / (emails sent - bounces) * 100% would be more to the point. Order of operations is important!
If you want the percentage for one open in your whole set of emails, then you would just do (open rate for one open) = 1 / (emails sent - bounces) * 100%.
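A minimal sketch of the corrected calculation with the numbers above (the variable names are my own):

    recipients = 95
    opens = 39
    bounces = 2

    delivered = recipients - bounces           # 93 delivered emails
    open_rate = opens / delivered * 100        # parenthesize before multiplying
    per_open = 1 / delivered * 100             # percentage one open contributes

    print(round(open_rate, 2), round(per_open, 2))   # ~41.94 and ~1.08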
|
H: Finding the limit with three variables
Here is the original problem:
$$\lim_{(x,y,z)\to (0,0,0)}{(\cos x-1)\sin(2y)(e^{3z}-1)\over x^2yz}$$
I was thinking about splitting up the limit like this:
$$\lim_{(x,y,z)\to (0,0,0)}{(\cos x-1)\over x^2}\cdot\lim_{(x,y,z)\to (0,0,0)}{\sin(2y)\over y}\cdot\lim_{(x,y,z)\to (0,0,0)}{e^{3z}-1\over z}$$
And then break it down some more:$$\lim_{x\to 0}{(\cos x-1)\over x^2}\cdot\lim_{y\to 0}{\sin(2y)\over y}\cdot\lim_{z\to 0}{e^{3z}-1\over z}$$
And then I want to use L'Hospital's rule. If someone could let me know if I'm heading in the right direction or not, that would be great. Thanks!
AI: $$\lim_{(x,y,z)\to (0,0,0)}{(\cos x-1)\sin(2y)(e^{3z}-1)\over x^2yz}$$
I was thinking about splitting up the limit like this:
$$\lim_{(x,y,z)\to (0,0,0)}{(\cos x-1)\over x^2}\cdot\lim_{(x,y,z)\to (0,0,0)}{\sin(2y)\over y}\cdot\lim_{(x,y,z)\to (0,0,0)}{e^{3z}-1\over z}$$
And then break it down some more:$$\lim_{x\to 0}{(\cos x-1)\over x^2}\cdot\lim_{y\to 0}{\sin(2y)\over y}\cdot\lim_{z\to 0}{e^{3z}-1\over z}$$
LH rule to the first part gives you (-0.5)
Second part ofcourse gives you 2 by multiplying dividing by 2 and cancelling sin 2y/2yThird part again use LH rule to get 3.
Multiply all three parts to eventually get an ANSWER -3
|
H: Perimeter of a rectangle with four circles in each corner
My teacher asked a question in an exercise: calculate the perimeter of the following figure:
I got a result of $144\pi$, and some of my friends got $208.2$ (assuming $\pi\approx3.14$).
Q: what's the correct answer to this question?
I got $144\pi$ with this:
Since the "unused" part of each circle is 90°, the rest is 270° (360°-90°):
$360° \rightarrow 2\pi r$
$270° \rightarrow x$
So,
$360x = 270 \times 2\pi \cdot 5$
$x = \frac{2700\pi}{360}$
$x = 7.5\pi$
$P = (2\times45) + (2\times12) + (4\times7.5\pi) = 144\pi$
AI: Your reasoning is just fine, until the last line:
Only one term in the sum has a factor of $\pi$, so the correct answer should be $$\begin{align} P & = 2\cdot 12 + 2\cdot 45 + 4\cdot 7.5\pi \\ \\ & = 24 + 90 + 30 \pi \\ \\ & = 114 + 30\pi \approx 208.2\end{align}$$
|
H: Weighted integral of random variables
Given a zero-mean Gaussian random variable $X(t)$ with parameter $t$, such that $E [X(t) X(t^\prime)] = \sigma^2 (t) \delta_{tt^\prime}$, is it possible to produce a single Gaussian random variable that is the weighted sum of all $X(t)$ by a non-random function $f(t)$? In other words, is $Z=\int_0^T f(t) X(t) dt$ well defined, and if so, is $Z$ a Gaussian random variable, and what is its variance?
My naive attempt is to discretize the integral
\begin{equation}
Z = \int_0^T f(t) X(t) dt
\approx \sum_{n=0}^{N-1} f(n \delta T) X(n \delta T) \delta T
\end{equation}
where $\delta T = T/N$. Now, if we take the simplest limit where $\sigma(t) = \sigma_0$ and $f(t) = 1$, this reduces to
\begin{eqnarray}
Z & = & \frac{T}{N}\sum X_n
\end{eqnarray}
But this is a gaussian random variable with $\sigma = \sqrt{N} \delta T \sigma_0 = T \sigma_0 / \sqrt{N}$, which has $\sigma \rightarrow 0$ as $N \rightarrow \infty$.
If I were to take a guess, I'd assume $Z$ would be gaussian and have variance $\sigma^2 = \int_0^T f^2(t) \sigma^2(t) dt$, but I have been unable to show this.
AI: Kudos for your logic in studying this problem (except that your $\sigma$ goes to $0$ when $N\to\infty$). Indeed, as you realized, the construction of the object $Z$ runs into serious problems, which are of two kinds mainly.
First, an i.i.d. process $(X(t))_{t\in\mathbb R_+}$ is a wild beast, for example, which probability space are we going to use? Second, the way to define the integral $\int\limits_0^TX(t)\mathrm dt$ is not obvious at all. The Riemann way is doomed since every upper Darboux sum is $+\infty$ and every lower Darboux sum is $-\infty$, and the Lebesgue way runs into measurability issues.
Note that stochastic integrals are based on quite different processes, for instance on a Brownian motion $(W_t)_{t\in\mathbb R_+}$, which is far from being i.i.d. To begin with, its paths being almost surely continuous, this process is entirely determined by $(W_t)_{t\in\mathbb Q_+}$, that is, a countable collection of random variables--and now one can begin to work...
|
H: Generating function of a counting function.
Let $m$ be odd. Let $\eta(m)$ count the number of ways we can express $m$ as a product of exactly two odd numbers, counting order. What is $$\sum_{m\text{ odd }}\eta(m)x^m\text{ ? }$$
So, as an example, $\eta(9)=3$ since $9=3\times 3=1\times 9=9\times 1$, while $\eta(p)=2$ for an odd prime. Suppose we extended $\eta$ for even numbers in the same manner. Then $\eta(0)=0$, $\eta(2)=0$ but $\eta(4)=1$ since $4=2\times 2$. Can we find $$\sum_{n\geqslant 0}\eta(n)x^n\text{ ? }$$
AI: The right generating function to use here is not the ordinary generating function but the Dirichlet generating function, since your function is just the odd terms of the Dirichlet series of the divisor function
$$\sigma_0(n) = \sum_{d | n} 1.$$
The divisor function has Dirichlet series
$$\sum_{n \ge 1} \frac{\sigma_0(n)}{n^s} = \zeta(s)^2$$
and to isolate the odd terms we just need to remove the factor corresponding to $2$ in the Euler product of $\zeta(s)$ twice, giving
$$\sum_{n \text{ odd}} \frac{\sigma_0(n)}{n^s} = \zeta(s)^2 \left( 1 - \frac{1}{2^s} \right)^2.$$
The ordinary generating function you want is definitely not rational. The Taylor series of a rational function is a solution to a linear recurrence relation, and there are strong constraints on what such a sequence can look like: in particular, its growth rate is eventually a polynomial times an exponential (but possibly with a finite period), while $\sigma_0$ has a much more erratic growth rate.
|
H: Manipulating random decimals
I've been slowly working my way into the world of AI and its representations of randomness. As you can guess, this occurs from a call to random() resulting in a value in $[0, 1]$ to create a desired random effect/action.
What confuses me is that once we get a random number back, let's say $0.10831288644112647$, people tend to use it for one random action (flip a coin), but I was thinking: why can't we keep using this number to dictate more actions? Maybe even flip a lot of coins?
Let's not just use the value we get back but the decimal place values too. Currently I'd expect that each digit has a $1$ in $10$ chance to be one of $[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]$, and thus we could flip a coin for each digit.
Just speculating, but wouldn't we now be able to flip $18$ coins while still maintaining a random result?
Continuing off this idea, could we combine the result from one decimal place with another? E.g.
if randomResult[0] && randomResult[1] == 3 then flip coin 19
Edit: The first value would not be $[0,\dots,9]$!
AI: Mathematically, yes. We could generate $10$-digit random numbers and use each digit to determine one flip.
Unfortunately, this concept doesn't quite work in practice. Consider generating $17$-digit random numbers (without trimming zeros, that is, $1.23030000$ not $1.2303$). Then we get the following digit frequencies over $100000$ runs:
N | Freq.
__|_______
0 | 225425
1 | 160930
2 | 163908
3 | 164527
4 | 165253
5 | 165161
6 | 165476
7 | 164510
8 | 163528
9 | 161282
This is probably because not all numbers are $17$ digits long, so the padding which is added significantly biases towards zeros. We cannot ignore or trim the padding, because then the result is biased away from zeros.
However, if we instead generated integers in $[0, 2^{n})$, we could use these to do coin flips (heads if a given bit is $1$, tails otherwise). It is the intricacies of floating point arithmetic which make this method of random decimals unviable.
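Here is a minimal sketch of that integer/bit approach (assuming Python's random.getrandbits, which returns a uniform integer in $[0, 2^n)$):

    import random

    def coin_flips(n):
        """n independent fair coin flips from one uniform integer in [0, 2**n)."""
        r = random.getrandbits(n)
        return [(r >> i) & 1 for i in range(n)]   # each bit is an unbiased flip

    print(coin_flips(18))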
|
H: upper semi-continuity of a multi-valued function $T$ and lower semi-continuity of $d(x,T(x))$
Let $(X,d)$ be a complete metric space, $CB(X)$ the set of closed and bounded subsets of $X$, and $T:X\rightarrow CB(X)$ be a multi-valued function.
How can you prove this:
If $T$ is upper semi-continuous then $f(x)=d(x,T(x))$ is lower semi-continuous
And the second question: does the converse hold?
AI: I hope that the definition of "upper semi-continuous" is the following: $T:X\to CB(X)$ is usc if $\{ x;\; T(x)\subset U\}$ is open for every open set $U\subset X$.
Assume that $T:X\to CB(X)$ is usc. We have to show that for any $\alpha\in\mathbb R$, the set $O_\alpha=\{ x;\, d(x,T(x))>\alpha\}$ is open in $X$. Fix $\alpha$ and take any $x_0\in O_\alpha$. Choose $\varepsilon >0$ such that $d(x_0,T(x_0))>\alpha+\varepsilon$. Then $T(x_0)\subset U_0=X\setminus B_0$, where $B_0$ is the closed ball $B(x_0,\alpha+\varepsilon]$.
Since $U_0$ is an open set, the set $V_0=\{ x;\; T(x)\subset U_0\}$ is an open neighbourhood of $x_0$; and by definition we have $d(x_0, T(x))\geq \alpha+\varepsilon$ for every $x\in V_0$. By the triangle inequality, it follows that if $x\in V_0$ and $d(x,x_0)<\varepsilon$, then $d(x, T(x))> \alpha$, i.e. $x\in O_\alpha$. Hence, $V_0\cap B(x_0,\varepsilon)$ is an open neighbourhood of $x_0$ contained in $O_\alpha$. This shows that $O_\alpha$ is open in $X$, for any $\alpha$.
The converse is not true. Take for example $X=\mathbb R$. Let $f:\mathbb R\to\mathbb R$ be any lower semi-continuous but not continuous map such that $f(x)>x$ for all $x\in X$, and define $T(x)=\{ f(x)\}$. Then $T$ is not usc, but $d(x,T(x))=\vert x-f(x)\vert=f(x)-x$ is lsc.
|
H: an example of Zariski cotangent space
I am studying these very nice notes on the Zariski tangent space.
Let $f \in K[x_1,\dots,x_n]$ and let $V=Z(f)$, the zero set of $f$.
Let $P \in V$ and let $M_P$ denote the vanishing ideal of point $P$.
We define a map $d: K[V] \rightarrow M_P/M_P^2$ by sending $g \in K[V]$ to
the image of $g(x)-g(P)$ in $M_P/M_P^2$. Then $df = \sum_i \frac{\partial f}{\partial x_i}|_P dx_i$, which comes from taking the class mod $M_P^2$ of the Taylor expansion of $f$ at $P$.
My question is: why do we have that $df=0$?
This appears to be the case in the example presented in the beginning of page 3 in the notes i refer to. Note that according to the definitions $df= \sum_i \frac{\partial f}{\partial x_i}|_P (x_i-a_i) + M_P^2$, where $P=(a_1,\dots,a_n)$.
AI: By definition, $df := f - f(P) \bmod M_p^2$, where $f$ is viewed as an element of the coordinate ring
$$
K[V] := K[x_1,\dotsc,x_n]/I[V];
$$
in particular, $M_p$ is an ideal in $K[V]$, not in $K[x_1,\dotsc,x_n]$.
Now, if $f \in I[V]$ (e.g., if $V = Z(f)$), then $f = 0 \bmod I[V]$ and $f(P) = 0$, and hence $f - f(P) = f = 0$ already in $K[V] \supset M_p$. Thus, a fortiori, $f - f(P) = 0 \bmod M_p^2$ in $M_p \subset K[V]$, so that by definition of $df \in T_p^\ast V$, $df = 0$.
|
H: Algorithm to find the roots of a polynomial over integer
What is the algorithm to find the roots of a polynomial over the integers? I observe that one can find the roots within a few seconds in Sage, even when all coefficients are very large. I have asked in https://cstheory.stackexchange.com/questions/18406/roots-of-a-polynomial but still have not gotten an answer.
AI: Use the rational root test: if you have a polynomial
$$a_0 + a_1x + \ldots + a_nx^n$$
with integer coefficients, all integer roots are of the form $\pm c$, where $c\vert a_0$.
Finding all roots therefore just amounts to factoring $a_0$ and some (very cheap) polynomial evaluations. (If $a_0 = 0$, first divide out the largest power of $x$, which contributes the root $0$.)
There are several techniques for proving a polynomial is irreducible over $\mathbb{Z}$ (Eisenstein's criteria, Perron's criteria) and these can rule out integer roots in special cases. But unless I were looking at polynomials with truly huge coefficients, the general algorithm I would implement would simply try all of the plausible roots.
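Here is a hedged sketch of that trial-division approach (my own implementation, certainly not Sage's actual algorithm; for truly huge constant terms one would factor $a_0$ rather than enumerate all candidate divisors):

    def integer_roots(coeffs):
        """coeffs[i] is the coefficient of x**i; returns the set of integer roots."""
        shift = 0
        while coeffs and coeffs[0] == 0:   # constant term 0: strip factors of x
            coeffs = coeffs[1:]
            shift += 1
        roots = {0} if shift else set()
        if not coeffs:
            return roots
        a0 = abs(coeffs[0])
        for d in range(1, a0 + 1):
            if a0 % d:
                continue
            for c in (d, -d):              # test each divisor with both signs
                if sum(a * c**i for i, a in enumerate(coeffs)) == 0:
                    roots.add(c)
        return roots

    print(integer_roots([-6, 11, -6, 1]))  # (x-1)(x-2)(x-3) gives {1, 2, 3}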
|
H: Unclear step in Eisenbud's proof of a variant of Hauptidealsatz
I need help in understanding the proof of the following
Theorem 10.1. If $\mathfrak{p}$ is a prime of a Noetherian ring $R$ minimal subject to containing $x \in R$, then its height is at most one.
In the context of $\mathfrak{q}$ being another prime, strictly contained in $\mathfrak{p}$, the author replaces $R$ with $R_{\mathfrak{p}}$, concluding by minimality that $R/(x)$ is Artinian.
Now, I believe this is not true: take for instance $R = k[[t]]$, $\mathfrak{p} = (t)$ and $x = 0$. Then $R_{\mathfrak{p}} = R$ so $R/(x)$ is isomorphic to $R$ which is not Artinian.
I can't find my mistake, or the way to use the theorem cited in the proof given, namely
Theorem 2.14. $R$ is Noetherian with all primes maximal iff it is of finite length iff it is Artinian, in which case it has finitely many primes.
to derive that $R/(x)$ is Artinian. I would appreciate any kind of help.
AI: In Theorem 10.1, $\mathfrak{p}$ is minimal among primes containing $x$. In your example, this is not the case: in $k[[t]]$, the prime minimal over $x=0$ is $(0)$, not $(t)$.
Using that $\mathfrak{p}$ is minimal among primes containing $x$, the maximal ideal of $R_{\mathfrak{p}}/(x)R_{\mathfrak{p}}$ is also minimal, so $R_{\mathfrak{p}}/(x)R_{\mathfrak{p}}$ must be Artinian.
|
H: Prove $\lim _{ n\to\infty } \sqrt [ n ]{ \sum _{ i=0 }^{ k } a_i ^n } =\max { \{{ a }_{ 1 }, \ldots ,{ a }_{ k }\} } $
We know that
$$ a_1, a_2, \ldots, a_k \ge 0 $$
Prove that:
$$\lim _{ n\to\infty } \sqrt [ n ]{ \sum _{ i=1 }^{ k } a_i ^n } =\max { \{{ a }_{ 1 }, \ldots ,{ a }_{ k }\} } $$
AI: Divide by the maximum of them and show the limit is $1$, using $$(\max a_i)^n\leq \sum_{i=1}^k a_i^n \leq k \cdot (\max a_i)^n$$ (if all $a_i=0$ the limit is trivial).
Wlog $a_1\geq a_i \forall i$. Then
\begin{align*}
a_1=\sqrt[n]{a_1^n}\leq \Big(\sum_{i=1}^k a_i^n\Big)^\frac{1}{n}&=a_1 \cdot \Big( \sum_{i=1}^k \left(\frac{a_i}{a_1}\right)^n\Big)^\frac{1}{n}\leq a_1 \cdot \sqrt[n]{k}
\end{align*}
Since $\sqrt[n]{k}\to 1$ as $n\to\infty$, the squeeze theorem gives the limit $a_1=\max\{a_1,\ldots,a_k\}$.
|
H: Injective on a dense set
Let $f:X\to Y$ a continuous function between Hausdorff spaces such that $D\subseteq X$ is dense and $f\restriction_D$ is injective. Can we say that $f$ must be injective?
I can't find a counterexample, so I tried to prove it:
Let $x,y\in X$ such that $f(x)=f(y)$. If $x\neq y$, then there exist $U,V\subseteq X$ disjoint open sets with $x\in U$ and $y\in V$. We know that $D$ intersects both $U$ and $V$, but now I don't know what to do.
What do you think?
AI: Consider $X=Y=\mathbb R$, $D=\mathbb Q$ and $f(x)=|\sqrt 2x|+x$.
Assume $f(x)=f(y)$ with $x\ne y$. Since $f\restriction_{[0,\infty)}$ and $f\restriction_{(-\infty,0]}$ are injective, we need only consider the case $x<0<y$, so $(1-\sqrt2)x=(1+\sqrt 2)y$ and hence $\frac xy=\frac{1+\sqrt 2}{1-\sqrt 2}=-3-2\sqrt 2$ is not rational, i.e. $x,y$ cannot be both in $D$.
|
H: In the definition of Strong Feferman Set Theory, what does this phrase mean?
In this post, Mike Shulman talks about Strong Feferman Set Theory, arguing that it deals with the foundational issues raised by category theory in an especially straightforward manner.
Mike explains:
To ZFC (or your favorite set theory) we add a constant symbol $U$
together with the axiom “$U$ is a universe,” and also an axiom schema
stating that for any statement $\varphi$, all of whose parameters are
in $U$ but which does not mention $U$ explicitly, we have $\varphi^U \Leftrightarrow \varphi$.
What does the expression 'all of whose parameters are in $U$ but which does not mention $U$ explicitly' actually mean?
AI: Here is an example of "parameters". Suppose we are trying to write an axiom that for all sets $x,y$ there is a $z = \{x,y\}$. We would write the axiom as:
$$
(\forall x)(\forall y)(\exists z)(\forall w)[w \in z \leftrightarrow w = x \lor w = y]
$$
Here $x$ and $y$ are called (informally) the "parameters" of this axiom, because they are the sets upon which the constructed set $z$ will depend.
The axiom scheme in the question is trying to say that the elements of $U$ have the same properties from the perspective of $U$ that they do from the perspective of the entire universe. For example, a set $x$ is "countable in $U$" if and only if it is countable:
$$
(\forall x \in U) \left [ (\exists f \in U)(\text{$f$ is a bijection from $x$ to $\omega$}) \leftrightarrow (\exists g)(\text{$g$ is a bijection from $x$ to $\omega$})\right ]
$$
This can be put into the form $(\forall x \in U)[\phi^U(x) \leftrightarrow \phi(x)]$, so $x$ is a "parameter" of that formula $\phi$ in this scheme.
|
H: Ratio of sum of Euler's totient to $n$: $\lim_{n \to \infty} {\log \left( \sum_{k=2}^n \varphi(k) \right) \over \log(n)}$
This is more a casual/recreational question...
It seems to me that, for every $n$,
$$\log_n \left( \sum_{k=2}^n \varphi(k) \right) = {\log \left( \sum_{k=2}^n \varphi(k) \right) \over \log(n)} \le 2 $$
Possibly this is somehow trivial. And does it approach 2 in the limit?
AI: Thm: We have
$$
\sum_{n < x} \varphi(n) \sim \frac{3}{\pi^2} x^2
$$
Proof:
Since
$$
n = \sum_{d | n} \varphi(d)
$$
by Moebius inversion we get
$$
\varphi(n) = n \sum_{d | n} \frac{\mu(d)}{d}
$$
Therefore
$$
\sum_{n < x} \varphi(n) = \sum_{n < x} n \sum_{d | n} \frac{\mu(d)}{d}
$$
Interchanging summation we get
$$
\sum_{d < x} \frac{\mu(d)}{d} \sum_{d | n, n < x} n
$$
The inner sum equals
$$
d \cdot \frac{x^2}{2 d^2} + O(x) = \frac{x^2}{2d} + O(x)
$$
Therefore the final answer is
$$
\sum_{d < x} \frac{\mu(d)}{2 d^2} \cdot x^2 + O(x\log x) =
\frac{1}{2\zeta(2)} x^2 + O(x\log x)
$$
because the latter sum converges to $1 / \zeta(2) = 6/\pi^2$.
$\square$
EDIT: In particular your limit is indeed equal to $2$!
EDIT 2: Actually for most integers $\varphi(n) \asymp n$.
In fact the proportion of integers $n < x$ such that $\alpha n < \varphi(n) < \beta n$, with $\alpha < 1$ converges to a continuous distribution function $$\mathbb{P}(\alpha < X < \beta) > 0$$ where explicitely $$ X := \prod_{p} \bigg ( 1 - \frac{X(p)}{p} \bigg )$$ and the $X(p)$ are independent random variables with $$\mathbb{P}(X(p) = 1) = \frac{1}{p} \text{ and } \mathbb{P}(X(p) = 0) = 1 - \frac{1}{p}.$$
This is Schoenberg's theorem.
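For the curious, a small numerical check of the asymptotic (my own sketch, using a standard sieve for $\varphi$):

    from math import pi

    def totients(n):
        phi = list(range(n + 1))
        for p in range(2, n + 1):
            if phi[p] == p:                      # p is prime
                for m in range(p, n + 1, p):
                    phi[m] -= phi[m] // p
        return phi

    n = 10**5
    s = sum(totients(n)[2:])                     # sum_{k=2}^{n} phi(k)
    print(s / n**2, 3 / pi**2)                   # both are ~0.30396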
|
H: Exponential of a matrix and related derivative
$\DeclareMathOperator{\tr}{tr}$
I have $ X \in M(n,\mathbb R) $ fixed. I define $ g(t) = \det(e^{tX}) $.
Then the author proceeds as follows:
\begin{align}
g'(s) &= \frac {d}{dt} g(s+t)\\
&=\frac {d}{dt} \det(e^{(s+t)X}) |_{t=0}\\
&=\frac{d}{dt}\left(\det(e^{sX})\det(e^{tX})\right)\Big|_{t=0}\\
&= g(s)\tr(X),
\end{align}
as $ s $ is independent of $ t $. The author proves this in the course of proving
$$ \det(\exp(X))=\exp(\tr(X)) $$
My question is why is the function $ g $ concocted for this proof? Also how is the very first step understood? I apologise if I am missing something quite simple and thanks in advance for any replies.
AI: $\DeclareMathOperator{\exp}{exp}\DeclareMathOperator{\det}{det}\DeclareMathOperator{\tr}{tr}$Here's my take on unpacking your source's argument. Observe that
$$
\exp(\tr(tX)) = e^{\tr(X)t}, \quad t \in \mathbb{R},
$$
so that if the function $g(t) := \det(\exp(tX))$ satisfies the initial value problem
$$
g^\prime(t) = \tr(X)g(t), \quad g(0) = 1,
$$
then by the basic theory of ODE,
$$
\det(\exp(tX)) =: g(t) = g(0)e^{\tr(X) t} = \exp(\tr(tX)), \quad t \in \mathbb{R},
$$
yielding the desired result. So, your source's strategy is precisely to check that $g(t)$ does indeed satisfy the relevant initial value problem; since $g(0) = 1$ by a simple calculation, it suffices to check that $g^\prime(t) = \tr(X)g(t)$.
Now, fix $t \in \mathbb{R}$, and let $0 \neq h \in \mathbb{R}$. Then
$$
g(t+h) = \det(\exp((t+h)X)) = \det(\exp(tX)\exp(hX)) = g(t)g(h),
$$
so that
$$
\frac{g(t+h)-g(t)}{h} = \frac{g(h)-1}{h}g(t).
$$
Hence, to conclude that
$$
g^\prime(t) := \lim_{h \to 0} \frac{g(t+h)-g(t)}{h} = \tr(X)g(t),
$$
it suffices to check that
$$
g^\prime(0) := \lim_{h \to 0} \frac{g(h)-1}{h} = \tr(X),
$$
but this in turn can be easily checked on upper triangular matrices. Perhaps you could also use this answer by Avitus together with the chain rule, though using the Jacobi formula for the total derivative of the determinant might be tricky in its own right.
|
H: An Error in Landau's FOA Thm 140?
Thm 140: If $\xi > \eta$, then $\eta + \nu = \xi$ has exactly one solution $\nu$. (All lowercase Greek letters represent Dedekind cuts.)
Summary of proof and the alleged error:
Consider the set of all rational numbers of the form $X - Y$ (with $X > Y$) where $X$ is a lower number for $\xi$ and $Y$ is a lower number for $\eta$.
Landau then proves that this set defines a cut and claims that it is the solution $\nu$ to the equation in the statement of the theorem. (He has also already shown that this solution is unique.)
To prove this, he must show that $\eta + \nu$ and $\xi$ are equal, which simply consists of showing that every lower number of the former is a lower number of the latter and vice versa.
Now here is where I believe he makes an error: he writes:
"Every lower number for $\nu + \eta$ is of the form $$(X - Y) + Y_1$$ where X is a lower number for $\xi$, ** $Y$ is an upper number for $\eta$ **, $Y_1$ a lower number for $\eta$, and $X > Y$.
Now we have ** $Y > Y_1$ **, [so] $$((X - Y) + Y_1) + (Y - Y_1) = \cdots$$
The issue here is that he originally says that $Y$ is a lower number of $\eta$, but then makes it an upper number so that he can subtract $Y_1$ from $Y$ (he doesn't define negative numbers until the section on real numbers, so only subtraction that yields a positive number is 'legal' at this point).
Am I right? If so, how would you repair this proof?
If I'm wrong, what am I missing here?
Thanks in advance.
AI: The lower numbers for $-\eta$ should be the negatives of the upper numbers for $\eta$. So in setting up the lower numbers for $\xi-\eta$ it should be numbers $X-Y$ where $X$ is a lower number for $\xi$ and $Y$ is an upper number for $\eta$. This would make the steps in the proof work, with $Y_1$ a lower number for $\eta$, but would mean the wording of the proof should change to "consider rationals of the form $X-Y$ where $X$ is a lower number for $\xi$ and $Y$ is an upper number for $\eta$," at the start of the proof.
|
H: Is a prevariety the same as a noetherian scheme?
By a pre-variety I mean a quasi-compact locally ringed space which can be covered by (a finite number of) affine varieties.
I was wondering: this seems to be the same in scheme language as a Noetherian scheme, or am I overlooking something?
AI: The typical example of a pre-variety that is not a variety is the "affine line with doubled origin." This pre-variety is covered by two copies of $\mathbb A^1$ that are glued along $\mathbb A^1\setminus \{0\}$ via the identity map $x\mapsto x.$ This differs only slightly from the construction of $\mathbb P^1$ as a gluing of the same open sets via $x\mapsto 1/x.$
A noetherian scheme is a different beast in general, for we can consider as examples any spectrum of a noetherian ring. In particular, let $A = K[[x]]$ be a power series ring in a variable $x$ over our field $K.$ This ring is noetherian, but not of finite type, and hence $\operatorname{Spec}(A),$ which contains a single closed point, is not covered by an affine variety.
|
H: Find the value of $\large i^{i^{.^{.^.}}}$
Find the value of $\large i^{i^{.^{.^.}}}$ ?
How should we start to solve it ?
Also you can see this one if it helps.
Thanks
AI: Let $z=i^{i^{.^{.^{.}}}}$. Then, as Hagen von Eitzen pointed out, $i^{z}=z$. Then $1=z\,i^{-z}=z \, e^{-i\pi z/2} $. It follows that
$$-\frac{i\pi}{2}=-\frac{i\pi z}{2}\,e^{-i\pi z/2}.$$
Using the notion of Lambert's W function, we see that
$$-\frac{i\pi z}{2}=W(-i\pi/2),$$
or
$$z=\frac{2i}{\pi}\,W(-i\pi /2).$$
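A numerical check of this closed form (a sketch assuming SciPy's lambertw on its principal branch, which matches the principal value of $i^z$):

    import numpy as np
    from scipy.special import lambertw

    z = 2j / np.pi * lambertw(-1j * np.pi / 2)
    print(z)            # approximately 0.4383 + 0.3606j
    print(1j ** z)      # reproduces z, consistent with i^z = z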
|
H: Induction Using Multi-Index
Does anyone know how to use induction in the context of multi-indices?
I know the induction is done on the multi-index length, the main problem is how to split a multi-index of length $n+1$ into one of length $n$ and another of length $1$.
For example, I have the following problem: Prove that $|x^\alpha|\leq |x|^{|\alpha|}$ for all $\alpha\in\mathbb N^n$ and $x\in\mathbb R^n$.
The first inductive step is using $\alpha=e_j$: $|x^\alpha|=|x_j|\leq |x|=|x|^{|\alpha|}$. Now I suppose $|x^\alpha|\leq |x|^{|\alpha|}$ for every multi-index of length $n$. Now how to prove the result is true for $\alpha$ of length $n+1$?
AI: Let $\alpha$ be a multi-index of length $n+1$ so that there exists a multi-index $\beta$ of length $n$ and an index $j$ with $\alpha = \beta + e_j$. Then $$x^\alpha = x^{\beta + e_j} = x^\beta x_j.$$ Now invoke the induction hypothesis: $$|x^\alpha| = |x^\beta x_j| = |x^\beta||x_j| \le |x|^{|\beta|}|x_j|.$$ Since $|x_j| \le |x|$ you get $|x^\alpha| \le |x|^{|\beta| + 1} = |x|^{|\alpha|}$.
|
H: Payment problem
I'm looking for an algorithm that can solve the following problem:
There is a basket, containing (n) products, paired with a value, which shows how much money is required to cover them.
E.g basket(n=3):
{apple,500€},{banana,400€},{cherry,300€}
In our purse we have different types of vouchers ($m$ of them) in different amounts. A voucher is a payment method which has a value and can be used to pay for predefined products, not necessarily all types of them. It is also given which products we can pay for with each voucher, and vice versa.
E.g: Voucher types:
V1 can be used to pay: apple,cherry
V2 can be used to pay: apple,banana
V3 can be used to pay: cherry
E.g purse(m=3):
{V1,500€},{V2,600€},{V3,100€}
We would like to pay for the whole basket, and a voucher can be used to cover several products. Paying more money than required for the basket is acceptable. The sum of the basket and the payments are natural numbers. The task is to decide whether we are able to pay for the basket using the given purse.
Can anyone suggest an algorithm, or a reduction to a well-known problem, to solve this decision problem properly in every case?
Thank you,
Adam
AI: If all prices and values are natural numbers, you are looking for a matching that saturates the fruit side: there's a node for every Euro of fruit value, a node for every Euro of voucher value, and an edge from each voucher-Euro node to each compatible fruit-Euro node. The basket is payable iff some matching covers every fruit-Euro node.
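Equivalently (my own reformulation of that per-Euro matching, sketched with the networkx library): build a flow network with voucher values as capacities on one side and prices on the other; the basket is payable iff the maximum flow equals the total price.

    import networkx as nx

    prices   = {"apple": 500, "banana": 400, "cherry": 300}
    vouchers = {"V1": 500, "V2": 600, "V3": 100}
    allowed  = {"V1": ["apple", "cherry"], "V2": ["apple", "banana"], "V3": ["cherry"]}

    G = nx.DiGraph()
    for v, value in vouchers.items():
        G.add_edge("s", v, capacity=value)        # voucher budget
        for p in allowed[v]:
            G.add_edge(v, p)                      # no capacity = unlimited
    for p, price in prices.items():
        G.add_edge(p, "t", capacity=price)        # product must be fully paid

    flow_value, _ = nx.maximum_flow(G, "s", "t")
    print("payable:", flow_value == sum(prices.values()))   # True for this example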
|
H: What is the proper terminology for these two types of multiplication?
((QUESTION REWORKED))
First question on this site, and I apologize if this question has been answered. I searched and searched and the fact I don't know the basic terminology is hindering me from finding the answer.
This question stems from a current programming issue I am attempting to explain to others. In one instance, the secondary variables are added together and then multiplied against the main variable, while in the other scenario each one is multiplied in and updates the main variable before the next. So you get very different numbers depending on whether you add the secondary variables together before multiplying, or apply them one at a time.
The variables:
$$\begin{align*}
x&=100 \\
a &= 2 & b &= 3 \\
c &= 4 & d &= 5
\end{align*}$$
The first formula is:
$$y = d(c(b(a(x)))).$$
The second formula is:
$$y = x(a+b+c+d).$$
What is the proper math terminology for both types?
Any help would be appreciated.
AI: I don't think there is specific terminology assigned to either form, rather the property is called the distributive property, and you are either adding first, or multiplying each term individually prior to adding.
Given your edits, the two different forms have nothing in common mathematically.
The first format indicates that $a$, $b$, $c$ and $d$ might be functions and each would perform an operation on the element inside the parentheses next to it. However, as they are listed as being variables and as having specific values, the first form is simply an explicit "multiply in this order" command, while the second form is an explicit "add these before multiplying" command.
If you were to speak what is written in each formula, they might sound like this:
(1) "$d$ times multiplication of $c$ and multiplication of $b$ and multiplication of $a$ and multiplication of $x$."
(2) "$x$ times sum of $a$, $b$, $c$ and $d$."
Note that the first form might be shortened using "pi" notation:
$$x\prod_{i=0}^{3} a_i$$
where $a_0 = a$, $a_1 = b$, $a_2 = c$, and $a_3 = d$.
If a, b, c and d are given as percentages, note that the formula $x*a*b*c*d$ is not a correct way to calculate the compounding effect. Instead, it should be calculated like so:
$$x * {100 + a \over 100} * {100 + b \over 100} * {100 + c \over 100} * {100 + d \over 100}$$
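A small demo contrasting the two forms with the stated values, plus the compounding formula for percentages (my own sketch):

    x, a, b, c, d = 100, 2, 3, 4, 5

    y1 = x * a * b * c * d        # apply each factor in turn: 12000
    y2 = x * (a + b + c + d)      # add first, then multiply:  1400
    print(y1, y2)

    # If a..d were percentage increases, compounding would instead be:
    compounded = x * (100 + a)/100 * (100 + b)/100 * (100 + c)/100 * (100 + d)/100
    print(compounded)             # ~114.73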
|
H: Change of variables with a square
Can someone help me understand this a bit better:
$\int (x-y)^2 dx = \int(y-x)^2dx$
as $(y-x)^2 = (x-y)^2$. Now, if I make the change $z = x-y$ in the one on the LHS I get:
$\int z^2 dz$
as $dz = dx$. Now, if I make the change $z=y-x$ in the one on the RHS I get:
$\int z^2 (-dz)$
as $dz = -dx$. This implies that
$\int z^2 dz = - \int z^2 dz = 0$
which is clearly not true. I don't understand - can someone help me understand changing the variable of integration when you have a square or a square root please?
AI: In the first case, $z=x-y$. In the second case $z=y-x$. It doesn't follow that $\int z^2 dz = - \int z^2 dz = 0$, because these are two different $z$'s. The confusion comes from the fact that you used the same variable name for two different values. Look at it again, but with a different variable for the second case:
$$z=x-y$$
$$u=y-x$$
$$\begin{align}
\int{z^2}dz&=-\int{u^2}du \\
\frac{1}{3}z^3+C &= -\frac{1}{3}u^3+C
\end{align}$$
This follows because $z=-u$.
|
H: A name for set of disjoint intervals
What's in a name? That which we call a rose by any other name would smell as sweet.
William Shakespeare
I'm looking for a short name for the notion of a collection of disjoint intervals.
I currently use selection, which I'm not very satisfied with, and I wonder if there is a better (more conventional) name out there.
AI: Options:
More formal: Pairwise disjoint set of intervals
Less formal: set of disjoint intervals
Less formal: disjoint set of intervals
Write it as something like "Let $a,b,c$ be disjoint intervals."
Basically, there is no special terminology for what you want. To avoid confusion, just don't use one. If you're using the concept a lot in a paper, you're free to make up a word, or to name the set of all such sets.
|
H: Diffeomorphisms preserving harmonic functions
I'm looking for smooth maps $ \Bbb R^n \to \Bbb R^n $ with the property that, whenever $ h $ is a harmonic function ($\Delta h=0$), $ h\circ f $ is also harmonic. Is there a nice characterization of these functions? I'm especially interested in those functions which are bijective such that $ f^{-1} $ also has this property. Is there a complete characterization of these "harmony preserving diffeomorphisms"?
AI: These are harmonic morphisms: maps $\varphi$ that "preserve Laplace's equation $\Delta u=0$" upon precomposition by $\varphi$. There is a vast literature on them (they are more generally defined between arbitrary Riemannian manifolds using the Laplace-Beltrami operator).
In particular with Euclidean spaces, we have this characterization: a map is a harmonic morphism if and only if it is itself harmonic and "horizontally weakly conformal," meaning its components' gradients are orthogonal and equal length. Finding all harmonic morphisms between arbitrary open subsets of Euclidean spaces is an open problem. See Proposition $1.10$ in Wood's reference article Harmonic Morphisms Between Riemannian Manifolds. Wood & Helein have a broader-purpose article Harmonic Maps in the text Handbook of Global Analysis.
|
H: Why does every undirected graph have at least one cut of size $|E| / 2$?
In an undirected graph $G = (V, E)$, a global maximum cut in $G$ is a pair $(S, V - S)$ with the largest possible number of edges with one endpoint in $S$ and another endpoint in $V - S$ (this quantity is called the size of the cut).
There is a well-known randomized approximation algorithm for finding a maximum cut that, on expectation, produces a cut of size $|E| / 2$ regardless of the graph. Consequently, this means that every graph must have a cut whose size is at least $|E| / 2$.
Although I completely understand how the above proof establishes that such a cut must exist, I don't have a good intuitive understanding of why this is the case. Is there a simple intuitive explanation for why every undirected graph must have a cut of size at least $|E| / 2$?
Thanks!
AI: I think you can prove this using induction and the induction step will give some intuition for the situation. Let's leave out the base step (since it's trivial) and focus on the induction step, since we only care about intuition right now.
Take a graph $G = (V, E)$ and remove one arbitrary vertex $v$ and all its incident edges $e_1, ..., e_k$. The resulting graph $G'$ has a cut of size at least $(|E| - k) / 2$ by the induction hypothesis. Now you can choose which side of the cut to put your leftover vertex $v$ on. Choose the one that adds more edges to the cut. Since $v$ has $k$ incident edges, you add at least $k/2$ edges to the cut. So your graph $G$ has a cut of size at least $|E|/2$.
I think this step-by-step adding vertices in a way that makes the cut large gives a good intuition.
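The same idea, read forwards, is a greedy algorithm: place vertices one at a time on the side opposite the majority of their already-placed neighbours. A minimal sketch (my own implementation and names):

    def half_cut(vertices, edges):
        """Greedy placement guaranteeing a cut of size >= len(edges)/2."""
        side = {}
        for v in vertices:
            # Count already-placed neighbours of v on each side.
            on0 = sum(1 for a, b in edges
                      if (a == v and side.get(b) == 0) or (b == v and side.get(a) == 0))
            on1 = sum(1 for a, b in edges
                      if (a == v and side.get(b) == 1) or (b == v and side.get(a) == 1))
            side[v] = 1 if on0 >= on1 else 0   # join the side opposite the majority
        return side, sum(1 for a, b in edges if side[a] != side[b])

    V = [1, 2, 3, 4]
    E = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
    side, cut = half_cut(V, E)
    print(side, cut, ">=", len(E) / 2)   # here the cut contains 4 of the 5 edges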
|
H: Baby Rudin vs. Abbott
I am considering Stephen Abbott's Understanding Analysis and Walter Rudin's Principles of Mathematical Analysis. I am looking for a comparison between the two that addresses both of the following matters,
Is one of the two substantially more mathematically rigorous than the other?
Does one include substantially more challenging problems?
Which provides a better introduction to Real Analysis?
Follow-up question:
Would someone who has worked through Abbott's book be at a disadvantage compared to someone who has completed Rudin's text?
AI: They are both rigorous in that they both give complete proofs of their results. Rudin's problems, on the other hand, are challenging to newcomers; Abbott's problems are on a much lower level than Rudin's. I love Rudin's books, but there are mixed opinions on whether they should be used as introductions. I used Principles after my first year of analysis and loved it. I'd say first work through Abbott, because he will likely provide more motivation. Later, get Rudin and push your boundaries of understanding. You might just become an analyst after that approach. It's what happened to me.
|
H: Let $R$ be a ring with every element but $1$ having a left quasi-inverse. Then $R-\{1\}$ is a group under $a*b=a+b-ba$.
This question is related to exercise 1.51 from Rotman's "Introduction to the Theory of Groups".
An element $a$ in a ring $R$ (with unit element $1$) has a left quasi-inverse if there exists an element $b \in R$ such that $a+b-ba=0$. I want to show that if every element in $R$ has a left quasi-inverse except $1$, then $R - \{1\}$ is a group under the operation $a*b=a+b-ba$.
What I have an issue with is showing closure, i.e. that $a+b-ba=1$ iff $a=1$ or $b=1$. I noticed that $x+y-yx=1$ is equivalent to $(1-y)(x-1)=0$, but this is as far as I can get, since we don't know that the ring has no zero-divisors.
AI: The $*$ operation is associative and has $0$ as neutral element (direct verification). Moreover, the left quasi-inverse of $a$ ($a\ne1$) cannot be $1$, because $a+1-1a=1\ne0$ (the $0\ne1$ assumption must be made, of course, or $R\setminus\{1\}$ would be empty and so not a group).
If $c$ and $d$ are left-quasi inverses of $a$ and $b$ respectively, you have
$$
(d*c)*(a*b)=(d*(c*a))*b=(d*0)*b=d*b=0.
$$
Therefore $a*b$ has a left quasi-inverse; since $1$ itself has no left quasi-inverse (for every $b$ we have $1+b-b\cdot 1=1\neq 0$), this forces $a*b\neq 1$, and the operation is well-defined on $R\setminus\{1\}$. Actually this shows that $a*b$ belongs to the set of elements having a left quasi-inverse, without assuming that all elements (except $1$, which can't) have a left quasi-inverse.
Now just show that a left quasi-inverse of a left quasi-inverse is …
|
H: Regarding 3-fold connected coverings of the $S^1 \vee \mathbb{R} P^2$
As in the question, I need to determine all of the 3-fold connected coverings of the wedge of the unit circle and the real projective plane.
Here's what I think: I know that the fundamental group of $S^1$ is isomorphic to the set of integers $\mathbb{Z}$, and that the fundamental group of $\mathbb{R}P^2$ is isomorphic to $\mathbb{Z}/2$. Then I gather that by van Kampen's theorem, $\pi_{1}(S^1 \vee \mathbb{R}P^2)= \mathbb{Z}* \mathbb{Z}/2$. So this means that the universal cover of this space can be represented by a Cayley diagram involving copies of the real line $\mathbb{R}$ and the sphere $S^2$, which, unfortunately here I cannot display in a good picture. Would this at least be considered one of the 3-fold connected coverings of this space, and if so, how can I compute others, if they exist? Am I missing something?
AI: The coverings of a (well-behaved, connected) space $X$ are in correspondence with the conjugacy classes of subgroups of the fundamental group $\pi_1(X)$, and the degree of a covering corresponds to the index of the subgroup.
What you need to do, then, is to find all subgroups of index $3$ in $G=\mathbb Z*\mathbb Z/2$. If $H\subseteq G$ is such a group, then $G$ acts on the set $G/H$ of its cosets, of which there are three, and this gives us a map $f:G\to S(G/H)$, where $S(G/H)$ is the group of permutations of the set $G/H$. If we know $f$, then $H$ can be reconstructed as the preimage of the stabilizer of $H$ in $S(G/H)$.
Since $S(G/H)$ is isomorphic to $S_3$, the symmetric group on three letters, we need to find all morphisms $f:G\to S_3$ whose image is transitive, and compute the preimage of the stabilizers of points.
Now $S_3$ has exactly two transitive subgroups: its cyclic subgroup of order $3$ generated by $(1,2,3)$, and $S_3$ itself. It is not difficult to determine all surjective maps from $G$ to these two subgroups.
|
H: Continuing direct product on a subcategory
Let $F$ be a full subcategory of a category $G$, both categories having binary direct products.
Is it always true that there is a binary direct product in $G$ which is a continuation of a binary direct product in $F$?
Hm, can it be generalized to infinitary direct products, rather than only binary? (A less important question than the first one above.)
AI: Let $F$ be a $4$-element Boolean algebra, say $\{\varnothing,\{0\},\{1\},\{0,1\}\}$ ordered by inclusion. Regard $F$ as a partially ordered set and thus as a category (with a unique morphism $a\to b$ exactly when $a\subseteq b$). Obtain $G$ by inserting into $F$ one new element $z$, above $\varnothing$ but below the other three elements of $F$. Again, regard $G$ as a partially ordered set and thus as a category. Both categories have binary products (i.e., the partially ordered sets have binary meets). But the product of $\{0\}$ and $\{1\}$ in $F$ is $\varnothing$ while in $G$ their product is $z$.
|
H: Practice Problem Books
I have been using Analysis I/II/III (Differentiation and Continuity / Sequences and Series / Integration) published by the AMS; the first one is this. They are problem-solution books, and I found them excellent because of the quality of the questions.
I am looking for similar books (problem-only books with complete solutions) in linear algebra. However, let's include the following subjects too:
Linear Algebra
Abstract Algebra
Complex Analysis
Topology
Please suggest a book only if it is like the one I mentioned in the beginning. I do not want books like Counterexamples in Analysis/Mathematics, even though they contain significant interesting problems. I also do not want books that are aimed entirely at competitions like the Putnam and are too tough.
AI: Analysis
A Problem Book in Real Analysis, Asuman G. Aksoy
Problems and Solutions for Undergraduate Analysis, Rami Shakarchi, based
on Lang's Analysis book
Problems and Theorems in Analysis I, II and III, Polya and Szego (I think this is what you were referring to)
Linear Algebra
Linear Algebra, Jim Hefferon, http://joshua.smcvt.edu/linearalgebra/ has the text book which has all problems and solutions.
Complex Analysis
Problems and Solutions for Complex Analysis, Rami Shakarchi, based on Lang's complex analysis book.
Abstract Algebra
Abstract Algebra, John Beachy http://www.math.niu.edu/~beachy/abstract_algebra/study_guide/contents.html is an online text with problems and solutions.
Solution Manual for A First Course in Abstract Algebra, Fraleigh. If I recall, the solution manual has solutions to almost all problems, and it can be found online.
|
H: the representation on the regular representation is faithful
I am reading the proof of the following proposition.
Proposition. As algebras, $\mathbb{C} G \cong \bigoplus_i \mathrm{End}(W_i),$ where $G$ is a finite group and $W_i$ are irreducible representation of $G$.
The proof goes as follows.
A representation $G \to \mathrm{Aut}(W_i)$ extends to a homomorphism $\mathbb{C}G \to \mathrm{End}(W_i)$. Thus we have $\phi: \mathbb{C} G \to \bigoplus_i \mathrm{End}(W_i)$.
I don't understand the next step where we want to show $\phi$ is injective.
The text book says that "This is injective since the representation on the regular representation is faithful".
What is the representation on the regular representation? And why is it faithful?
I appreciate any help.
AI: The "representation on the regular representation" is probably just a typo for "the regular representation". This is the representation on the vector space $\mathbb CG$ in which each element of $G$ acts by multiplication from the left (on the standard basis vectors of $\mathbb CG$). That representation is faithful because every non-identity element of $G$ moves some vectors in $\mathbb CG$, for example any element of $G$ (considered as a basis vector in $\mathbb CG$). Finally, if we decompose the regular representation into irreducible summands, then each non-identity element must act nontrivially in at least one of those irreducible representations.
|
H: Differences between $C_c^\infty[0,T]$ and $C_c^\infty(0,T)$
I believe it is true that:
If $f \in C_c^\infty(0,T)$, then $f(T)=f(0)=0$.
$C_c^\infty(0,T) \subset C_c^\infty[0,T]$
$C^\infty(0,T) \subset C_c^\infty[0,T]$
If $f \in C_c^\infty[0,T]$, it doesn't necessarily mean that $f(T)=f(0)=0$.
Finally, what happens if I define the weak derivative as
$$\int_0^T u\phi' = -\int_0^T u'\phi$$
for all $\phi \in C_c^\infty[0,T]$, instead of $C_c^\infty(0,T)$.
I will think more on this but maybe somebody has already studied it. Anyway, more information about these differences would be appreciated.
AI: First let's assume some definitions (see for example Gilbarg-Trudinger, page 10): $C^\infty([0,T])$ is the space of all functions $f\in C^\infty((0,T))$ such that $f$, and all its derivatives, can be continuously extended up to the boundary of $(0,T)$.
With this definition, every $f\in C^\infty([0,T])$ has compact support, hence $$C^\infty([0,T])=C_c^\infty([0,T])$$
The last equality shows that $3$ is not true and $4$ is true.
On the other hand, if $f\in C_c^\infty ((0,T))$, then you can find a $\delta >0$ (try to prove it) such that $f(x)=0$ for $x\in (0,\delta)\cup (T-\delta,T)$; hence $1$ and $2$ are true.
To conclude, note that if you define weak derivative (as you have formulated) by using the space $C^\infty([0,T])$ as a test space, then you have some problems with integration by parts (can you see this?).
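To make that last point explicit (a sketch of the concern, not from the original answer): for smooth $u$ and $\phi\in C^\infty([0,T])$, integration by parts leaves boundary terms, since $\phi(0)$ and $\phi(T)$ need not vanish:
$$\int_0^T u\,\phi'\,dx = \big[u\phi\big]_0^T-\int_0^T u'\,\phi\,dx,$$
so requiring $\int_0^T u\phi' = -\int_0^T u'\phi$ for all such $\phi$ would force $u(T)\phi(T)-u(0)\phi(0)=0$ for every test function, i.e. it would impose the extra boundary conditions $u(0)=u(T)=0$ on every $u$ admitting a weak derivative.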
|
H: One Point Derivations on locally Lipschitz functions
Let $A$ be the algebra of $\mathbb{R}\to\mathbb{R}$ locally Lipschitz functions. What is the vector space of derivations at $0$? The proof that for continuous functions there aren't really any doesn't seem to work in this case.
I was thinking about trying to define the tangent spaces for Lipschitz manifolds analogously to differentiable manifolds, but I couldn't work out even this simplest example. Can tangent spaces of Lipschitz, even topological manifolds be defined at all and how?
AI: As Etienne pointed out in a comment, the book Lipschitz Algebras by Weaver is very much relevant. Section 4.7 introduces and describes derivations on the algebra of Lipschitz functions on a compact metric space. Weaver's constructions can be viewed as a way to introduce differentiable structures on metric spaces. See this paper by Gong.
I don't know of any concept of a tangent space to a topological manifold. But a Lipschitz manifold is a complete doubling metric space, and therefore one can define its tangent cones as (pointed) Gromov-Hausdorff limits of rescaled spaces $(X,\delta^{-1}d)$. Different sequences of scales $\delta_n$ can produce different limits, which means tangent cone is not unique in general. See section 8.7 of Nonsmooth Calculus by Heinonen.
|
H: How to prove this property of floor function?
$\left\lfloor { - x} \right\rfloor = - \left\lfloor x \right\rfloor$ if $x \in \mathbb{Z}$, and
$\left\lfloor { - x}\right\rfloor = - \left\lfloor x \right\rfloor -1$ otherwise.
This is an exercise from Tom Apostol's book "Calculus Volume I" section 1.11 number 4. He defined $\left\lfloor x \right\rfloor$ as the greatest integer $\leqslant x$.
I have tried it but I don't get it. Could you help me?
AI: Use that you define $[x]$ such that $[x]\leq x<[x]+1$. Now replace this into all the statements you want to prove.
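Spelling the hint out (my sketch, one way among several): negating the defining inequality $\lfloor x\rfloor \le x < \lfloor x\rfloor + 1$ gives
$$-\lfloor x\rfloor - 1 < -x \le -\lfloor x\rfloor.$$
If $x\in\mathbb Z$, then $-x = -\lfloor x\rfloor$ is itself an integer, so $\lfloor -x\rfloor = -\lfloor x\rfloor$. If $x\notin\mathbb Z$, the left inequality in the definition is strict too, so $-x$ lies strictly between the consecutive integers $-\lfloor x\rfloor - 1$ and $-\lfloor x\rfloor$, whence $\lfloor -x\rfloor = -\lfloor x\rfloor - 1$.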
|
H: Usually, main results are called theorems, while smaller results are called propositions. Is there a name for super-immediate results?
In mathematical papers, main results are called theorems, while less central results are called propositions. But sometimes, there is a result that is so immediate, it doesn't even deserve to be called a proposition; but nonetheless, you want to state it, just for emphasis. Consider the following example.
Definition. Write $x \sim y$ iff $x \leq y$ and $y \leq x$.
Proposition. If $x \sim y$, then $x \leq y$.
In my opinion, the word 'proposition' is far too strong for these kind of results. 'Proposition' suggests an insight, however small, but there is no insight here; the result is just there for emphasis.
Is there a traditional name for these kinds of super-immediate results? I'm tempted to call it a 'corollary', but that term is usually reserved for something that follows immediately from a theorem (proposition etc.), rather than a definition.
AI: Writing changes over the decades. If I'm reading something, I like to have definitions numbered and obvious. Plenty of important books and papers gve definitions in the middle of a sentence, with the new term or phrase in italics. This is fine if you really are reading the thing word for word, but harder to follow if skipping around.
At the other end, I do not always give a heading or name for every corollary or intermediate result. Having too few numbered and named ("Theorem") items is one way to be obscure, another way is to name and number every other line, so that the important stuff does not stand out.
Now that i think of it, I am more likely than most authors to put little cautionary notes, after a definition especially. There are always places where you write something which is unambiguous, but where a reader without your experience might misinterpret. So I often say "Note that property W does not really imply property X." Sometimes co-authors have made me remove such "notes," as too much coddling of the reader. As an example, if using the Legendre/Jacobi symbol and saying "so that $(n|p) = -1,$" I am likely to also say "note that this requires that $p$ not divide $n.$"
So, as the others are saying, a paragraph of remarks, possibly without the heading "Remark," is a good middle ground.
|
H: Different direct product in a category and its full subcategory
A question related to Continuing direct product on a subcategory.
Let $F$ be a full subcategory of a category $G$.
I denote $\operatorname{Ob}X$ the set of objects of a category $X$.
Is it possible the following?
There are binary direct products $\times_F$ and $\times_G$ in these categories.
$A\times_G B\in\operatorname{Ob}F$ if $A, B\in\operatorname{Ob}F$.
$A\times_G B$ is non-isomorphic to $A\times_F B$ for some objects $A$, $B$ of $F$.
We probably should also assume that the product morphisms are the same for $F$ and $G$ (provided that the morphisms are morphisms of $F$).
AI: No. Under the given hypotheses, $A \times_G B \cong A \times_F B$ naturally. To see this, it suffices to observe that both represent the same functor $C \mapsto \operatorname{Hom}(C,A) \times \operatorname{Hom}(C,B)$ when restricted to $F$ (fullness guarantees that the projections from $A \times_G B$ are morphisms of $F$), and representing objects are unique up to canonical isomorphism.
|
H: Finding a planar graph satisfying these properties
I need to construct a 3-regular connected planar graph with a planar embedding where each face has degree 4 and 6. In addition, each vertex must be incident with exactly one face of degree 4.
[Figure omitted.] It seems to fit the description well if extended infinitely, but unfortunately infinite graphs are not allowed. I spent so much time trying to construct such a graph! The hard part is that annoying unbounded face outside everything...
AI: [Figure omitted.] The colours show the order in which I constructed the graph. Every vertex must be part of one face of order 4, so I start with a square (black, centre).
Now, every vertex must end up with order 3, and the four vertices so far have order 2, so extend one edge out from each. They're already next to their face of order 4, so these edges must be parts of hexagons (blue).
We now have four vertices which are already of order 3 and missing their face of order 4, and eight vertices which are of order 2 and missing their face of order 4. Create the suitable faces (red).
Now each "edge" of the square formed by the red edges is really three edges, and they must each form part of a hexagon. The 4 corners of the red square are order 2, so they need one more edge each, and that brings us to 5 of the 6 edges of each hexagon, so just join them up in the green square.
Check: the outside face is also of order 4, so this works.
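As a sanity check (not part of the original answer), the constraints pin down the size of any such graph. Since it is 3-regular, $2E = 3V$. Each vertex lies on exactly one face of order 4, and each such face has 4 vertices, so $4F_4 = V$. The face orders sum to $2E$, so $4F_4 + 6F_6 = 3V$, giving $F_6 = V/3$. Euler's formula $V - E + F = 2$ then reads
$$V - \frac{3V}{2} + \frac{V}{4} + \frac{V}{3} = 2,$$
so $V = 24$, $E = 36$, $F_4 = 6$ and $F_6 = 8$. These are exactly the counts of the truncated octahedron, whose graph satisfies all the required properties.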
|
H: When can the inverse Fourier tranform be represented in terms of integral?
From Wikipedia
Use the convention for the Fourier transform that
$$
(\mathcal{F}f)(\xi):=\int_{\mathbb{R}^n} e^{-2\pi iy\cdot\xi} \, f(y)\,dy.
$$
Furthermore, we assume that the Fourier transform is also integrable.
The most common statement of the Fourier inversion theorem is to state the inverse transform as an integral. For any integrable function $g$ and all $x \in \mathbb{R}^n$ set
$$
\mathcal{F}^{-1}g(x):=\int_{\mathbb{R}^n} e^{2\pi ix\cdot\xi} \, g(\xi)\,d\xi.
$$
Then for all $x \in \mathbb{R}^n$ we have
$$
\mathcal{F}^{-1}(\mathcal{F}f)(x)=f(x). $$
For $g \in L^p, p \in [1,2]$, its inverse FT exists if and only if $g$ is in $L^2$?
For $g \in L^p, p \in [1,2]$, its inverse FT can be represented as $$
\mathcal{F}^{-1}g(x):=\int_{\mathbb{R}^n} e^{2\pi ix\cdot\xi} \, g(\xi)\,d\xi, $$ if and only if $g$ is in $L^2\cap L^1$?
For $f \in L^2$, when can its FT be represented in terms of an integral as above?
For that purpose, when is its FT in $L^2\cap L^1$? For $f \in L^2\cap L^1$, its FT isn't necessarily in $L^2\cap L^1$, is it?
Thanks and regards!
AI: Your last question/surmise hits the real point: it is possible to contrive functions in $L^1\cap L^2$ whose Fourier transforms are not in $L^1\cap L^2$. And so on... Nevertheless, indeed, it is elementary-traditional to discuss Fourier transforms for such functions. A moderately clever person can find many such functions, so that one would feel vaguely confident that this is ok... But, no, this is making it more complicated than it need be, for non-elementary reasons.
Although questions such as your first two may indeed be critical in some situations, they are misleading in subtle and insidious ways, as the $L^2\cap L^1$ issue highlights. Thus, despite many decades of progress on Fourier transforms, Laurent Schwartz' late-1940s creation of the theory of distributions, and of tempered distributions, included many ideas which (in my opinion) lent sanity and simplicity to these questions. (Without slandering/libeling any deceased very-good mathematicians by naming them, I must note that at the time some people thought it was outrageous that Schwartz won a Fields Medal for "distributions", because, in effect, "the experts" could already do all those things. My observation is that these mechanisms were potentially simple enough that "any idiot could do it", and Schwartz found/manifested that simplicity.)
Specifically, Schwartz observed that the (Fréchet) space of infinitely-differentiable functions, all of whose derivatives decay faster than any power of $1/(1+x^2)$, maps to itself under the Fourier transform.
The major "objection" was/is that the Schwartz space is not Hilbert, nor Banach. Well, ok, but it is an excellent answer to an important question, so to the extent that these "simpler" notions about function spaces are insufficient, this story is evidence that we should enlarge our repertoire of function-spaces right away!
The non-triviality of finding this sort of balance is meaningfully illustrated in many other "eigenfunction expansion" scenarios, wherein we want to express "general" functions as "superpositions" (sums and integrals...) of special functions, maybe eigenfunctions ($L^2$?) for some naturally-occurring operator.
Returning to the original question: Fourier inversion is literally an integral only when both $f$ and $\hat{f}$ are in $L^2\cap L^1$, as you say... but, even then, there is a pitfall: in what sense do we mean the equality of functions, the original, and the integral of exponentials??? This is a very serious question. The convergence need not be pointwise at all, certainly not uniformly pointwise. Convergence in an $L^2$ sense is assured (Plancherel), but couldn't make sense unless both $f$ and $\hat{f}$ were in $L^2$. :)
So, if we are trying to be clever, we can define $\hat{f}$ by the Plancherel-isometric extension of the integral giving the "forward" Fourier transform, and only worry about the integral for inversion, soooo with $f\in L^2$ and $\hat{f}\in L^2\cap L^1$, the integral form of Fourier inversion is literally correct. :)
But, seriously, this kind of analysis is a bit frivolous, all the more so since (I think) understanding what's going on is only possible after taking up a more sophisticated viewpoint (Schwartz...). But I hope the discussion does respond helpfully.
|
H: Are my answers right here about true and false statements?
Every integer is a rational number -> false -- correct?
Let r = true; s = true. Is this statement true or false? $$\lnot [r \lor (\lnot s \lor r)]$$ -> true -- correct?
Let p = true; r = false; q = false. Is the following statement true or false?
$$\lnot [\lnot q \lor(p \lor \lnot r)]$$ -> false -- correct?
AI: You didn't strike out, but only one of your answers is correct:
Note that any integer $n$ can be expressed as the fraction $\dfrac n1$. (In all fairness, the answer depends on how the rational numbers are defined. If we regard the integers as a subset of the rationals, as my note suggests, then the correct answer is "true," not "false.")
This is in fact false. $$\lnot[T \lor (F \lor T)] \equiv \lnot [T \lor T] \equiv \lnot [T] \equiv F$$
Correct: "false" is the correct answer. $$\lnot [\lnot q \lor(p \lor \lnot r)] \equiv \lnot[\lnot F \lor T \lor\lnot F] \equiv \lnot[T] \equiv F$$
|
H: Riccati differential equation $y'=x^2+y^2$
$$y'=x^2+y^2$$ I know that this is a kind of Riccati equation, but is it possible to solve it with only simple methods? Thank you
AI: It can be solved (or, rather, transformed into a recognizable form) using simple methods, but the result can only be expressed in terms of special functions.
Namely, let us write $\displaystyle y=-\frac{v'}{v}$, then $\displaystyle y'=-\frac{v''}{v}+\frac{v'^2}{v^2}$ so that the equation becomes linear:
$$v''+x^2v=0.\tag{1}$$
If we further introduce
$$v(x)=\sqrt{x}\cdot f\left(\frac{x^2}{2}\right),\tag{2}$$
then (1) transforms into a particular case of the Bessel equation for $f(t)$:
$$t^2f''+tf'+\left(t^2-\frac{1}{16}\right)f=0.$$
It has the general solution
$$f(t)=C_1 J_{1/4}(t)+C_2J_{-1/4}(t).\tag{3}$$
Substituting this into (2), we find $v(x)$, and then $y(x)$ is given by its logarithmic derivative. Clearly, $y(x)$ will depend only on the ratio $C_1/C_2$ (instead of $C_1$, $C_2$ separately). This ratio plays the role of integration constant for the initial first order equation $y'=x^2+y^2$.
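Not part of the original answer, but the chain (2)-(3) is easy to sanity-check numerically (a sketch; the constants $C_1 = 1$, $C_2 = 2$ are arbitrary choices, and the interval is chosen to stay away from zeros of $v$):

```python
import numpy as np
from scipy.special import jv, jvp  # Bessel J and its derivative

C1, C2 = 1.0, 2.0  # arbitrary constants in (3)

def v(x):
    t = x**2 / 2
    return np.sqrt(x) * (C1 * jv(0.25, t) + C2 * jv(-0.25, t))

def v_prime(x):
    # d/dx [ sqrt(x) f(x^2/2) ] = f(x^2/2) / (2 sqrt(x)) + x^(3/2) f'(x^2/2)
    t = x**2 / 2
    f = C1 * jv(0.25, t) + C2 * jv(-0.25, t)
    fp = C1 * jvp(0.25, t) + C2 * jvp(-0.25, t)
    return f / (2 * np.sqrt(x)) + x**1.5 * fp

x = np.linspace(0.5, 1.8, 2001)
y = -v_prime(x) / v(x)
residual = np.gradient(y, x) - x**2 - y**2  # ~0 up to discretization error
print(np.max(np.abs(residual)))
```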
|
H: Geometric Distribution Probability Problem
Suppose that $X$ is a geometric random variable with parameter (probability of success) $p$.
Show that $\Pr(X > a+b \mid X>a) = \Pr(X>b)$
First I thought I'd start by calculating $\Pr(X>n)$ where $n=a+b$:
$$\Pr(X > n) = p_{n+1} + p_{n+2} + \cdots = ?\tag{1}$$
But I don't know how to determine the limit of equation (1). I know for an infinite geometric series starting at index zero:
$$\sum\limits_{n=0}^\infty ax^n=\cfrac{a}{1-x}\text{ for }|x|<1$$
But I don't know what to do when the index starts at $n$.
Next I thought I'd do:
$$\Pr(X > a+b \mid X > a) = \cfrac{ \Pr[ (X > a+b) \cap (X > a)] }{\Pr(X > a)}$$
$$=\cfrac{\Pr(X > a+b)}{\Pr(X > a)}$$
and substitute my result from equation (1). Any help appreciated in advance. Thank you.
AI: Here's how to handle an infinite geometric series when the index starts at $n$ instead of $0$:
\begin{align}
\sum_{k=n}^\infty ax^k & = ax^n + ax^{n+1} + ax^{n+2} + ax^{n+3}+\cdots \\[10pt]
& = (ax^n) + (ax^n)x +(ax^n)x^2 + (ax^n)x^3+\cdots \\[10pt]
& = b + bx + bx^2 + bx^3 + \cdots
\end{align}
Now it starts at index $0$. And of course $b$ is $ax^n$.
It looks as if you can do the rest.
Second method: You mentioned that the probability of "success" is $p$. That means the probability of success on each trial is $p$.
If $X$ is defined as the number of trials needed to get one success, then the event $X>n$ is the same as the event of failure on all of the first $n$ trials, so that probability of that is $(1-p)^n$.
If $X$ is defined as the number of trials needed to get one failure, then the event $X>n$ is the same as the event of success on all of the first $n$ trials, so the probability of that is $p^n$.
|
H: is this the right truth table?
When I filled out the table I tried my best to figure it out. But if I made any mistakes please help me correct them. Thanks!
sorry 5th one should be false
AI: EDIT: UPDATE
Now your table is mostly correct.
Check your truth-value assignment columns; we need to cover all possible $2^3$ truth-value assignments, and you've missed, for example, $P = F, Q = T, R = T$, but double-counted another.
The only truth value combination that is false is when we have $P = F, Q = T, R = T$. Then and only then is it the case that $P \lor \lnot Q \lor \lnot R = F \lor F \lor F = F$. For all other truth value combinations, the compound disjunction is true.
The truth values listed in your columns below $P$ and $Q$ are slightly off. Compare the columns below with your columns. [Table omitted.]
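A mechanical way to generate the full table with no missed or duplicated rows (a sketch; `itertools.product` enumerates all $2^3$ assignments):

```python
from itertools import product

# Enumerate every truth-value assignment exactly once.
for P, Q, R in product([True, False], repeat=3):
    value = P or (not Q) or (not R)
    print(P, Q, R, "->", value)  # False only at P=False, Q=True, R=True
```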
|
H: Prove that if A is symmetric and invertible, then$ (A^{-1})^t = (A^t)^{-1}$.
I have a problem. Prove that if A is symmetric and invertible, then $(A^{-1})^t = (A^t)^{-1}$.
This is what I have done, please tell me if it is correct:
Say $C=A^{-1}$. Then, $C^t = ((C^{-1})^t)^{-1} = ((C^{-1})^{-1})^t = C^t$.
AI: If $A$ is an invertible matrix (symmetric or not symmetric) then
$$AA^{-1}=I\Rightarrow \left(AA^{-1}\right)^T=\left(A^{-1}\right)^TA^T=I$$
so the matrix $A^T$ is invertible and
$$\left(A^T\right)^{-1}=\left(A^{-1}\right)^T$$
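Not needed for the proof, but a quick numerical illustration (a sketch; the random symmetric matrix below is almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T  # symmetric

lhs = np.linalg.inv(A).T      # (A^{-1})^T
rhs = np.linalg.inv(A.T)      # (A^T)^{-1}
print(np.allclose(lhs, rhs))               # True
print(np.allclose(lhs, np.linalg.inv(A)))  # True: A^{-1} is itself symmetric
```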
|
H: Prove that if a product$ AB$ of $n\times n$ matrices is invertible, so are the factors $A$ and $B$.
I'm really struggling with "simple" proofs, and I could only come up with this. I googled for the solution and know how to do it now, but I first want to know if there's anything wrong with my solution?
$$
(AB)^{-1}=B^{-1}A^{-1}.
$$ For this equation to hold, $B$ must be invertible and so must $A$. //
AI: First: your solution isn't actually a solution. By writing $A^{-1}$ you are assuming that $A$ is invertible, which is what you want to prove.
One way to do this would be to note that
$$
\det(AB) = \det(A)\det(B).
$$
If $AB$ is invertible, then $\det(AB)\neq 0$, and so $\det(A)\neq 0$ and $\det(B)\neq 0$.
Another way: If $AB$ is invertible, then there is a $C$ such that
$$
(AB)C = I
$$
That is
$$
A(BC) = I
$$
So $BC$ is a right inverse of $A$, and so $A$ is invertible. Likewise $B$ is invertible. (See this question/answers for a bit more: If $AB = I$ then $BA = I$).
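A small numerical illustration of the determinant argument (a sketch; `S` below is deliberately singular):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det is multiplicative, so det(AB) != 0 forces det(A) != 0 and det(B) != 0.
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

S = np.ones((3, 3))  # rank 1, hence singular
print(np.isclose(np.linalg.det(A @ S), 0.0))  # True: a singular factor makes AB singular
```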
|