H: Convergence of Matrix Series
I would just like a quick sanity check. If I have a matrix $ M $, then the series $ 1 + M + M^2 + M^3 \cdots $ converges to $ (1-M)^{-1} $ if the operator norm $ \lVert M \rVert_{\mathrm{op}} < 1$. Is it sufficient to show that each column vector $ v $ of $ M $ has norm $ \lVert v\rVert_{L^2} < 1 $?
AI: Consider $M = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$ (this is projection to the diagonal). Each column has norm smaller than $1$, but $1-M$ is a proper projection (hence not invertible).
Added: Let's work over $\mathbb{R}$. Give $\mathbb{R}^n$ the 1-norm and then $\mathrm{Mat}_{n \times n}(\mathbb{R})$ the corresponding operator norm (this is equivalent to the operator norm on $\mathrm{Mat}_{n \times n}(\mathbb{R})$ coming from the 2-norm on $\mathbb{R}^n$ since all norms on finite-dimensional vector spaces are equivalent -- so convergence questions are not affected).
There is a nice feature of this norm on $\mathrm{Mat}_{n \times n}(\mathbb{R})$. Let $M = [m_1 \vert m_2 \vert \ldots \vert m_n] \in \mathrm{Mat}_{n \times n}(\mathbb{R})$ and put $C = \max(\|m_1\|_1,\ldots,\|m_n\|_1)$. Let $x = (x_1,\ldots,x_n) \in \mathbb{R}^n$ be a unit vector for the 1-norm. Then $$\| M x\|_1 = \|x_1 m_1 + \ldots + x_n m_n\|_1 \leq |x_1| C + \ldots + |x_n| C = C$$
so that $\|M\| \leq C$. So, if $C <1$, then $1-M$ is invertible and the desired series gives the inverse.
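For a numerical sanity check, here is a small sketch (my addition, assuming NumPy; the matrix and the truncation length are arbitrary choices):

```python
import numpy as np

# Sketch: if every column of M has 1-norm < 1, the Neumann series
# I + M + M^2 + ... converges to (I - M)^{-1}.
M = np.array([[0.3, 0.1],
              [0.4, 0.5]])          # column 1-norms: 0.7 and 0.6, both < 1
assert np.abs(M).sum(axis=0).max() < 1

partial = np.zeros_like(M)
term = np.eye(2)
for _ in range(200):                 # truncate the series
    partial += term
    term = term @ M

print(np.allclose(partial, np.linalg.inv(np.eye(2) - M)))  # True
```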
|
H: Discrete Math Question: arithmetic progression
A lumberjack has $4n + 110$ logs in a pile consisting of n layers. Each layer has two more logs than the layer directly above it. If the top layer has six logs, how many layers are there? Write the steps to calculate the equation for the problem and state the number of layers.
It's given that the top layer has 6 logs and each layer below that gains 2 logs per layer.
1st layer: 6
2nd layer: 6 + 1(2) = 6 + 2 = 8
3rd layer: 6 + 2(2) = 6 + 4 = 10
4th layer: 6 + 3(2) = 6 + 6 = 12
5th layer : 6 + 4(2) = 6 + 8 = 14
and so on
the nth layer: $a_n = 6 + 2(n-1)$
1st term 6 and common difference 2 with n terms
1st term: $a_1 = 6$
last term: $a_n = 6 + 2(n-1) = 2n + 4$
$S_n = (n/2)(a_1 + a_n)$
$$4n + 110 = (n/2)(6 + 6 + 2(n-1))$$
Can anyone help break this equation down to solve for n?
AI: $$6+8+10+\cdots+(6+2n-2)=6n+(0+2+4+\cdots+2n-2)=\\6n+2(0+1+2+\cdots+n-1)=6n+2\frac{(n-1)n}{2}=6n+(n-1)n=n^2+5n$$
You were told that this equals $4n+110$, so $n^2+5n=4n+110$, which is a quadratic equation $n^2+n-110=(n+11)(n-10)=0$. The only positive solution is $n=10$.
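As a quick sanity check (an illustrative sketch, not part of the original answer), one can sum the layers directly in a few lines of Python:

```python
# Direct check: with n = 10 layers of 6, 8, ..., 24 logs, the pile
# contains 150 = 4*10 + 110 logs.
n = 10
layers = [6 + 2 * k for k in range(n)]
print(sum(layers), 4 * n + 110)  # 150 150
```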
|
H: Basis for a $R$-Module $R$, with $R$ the ring of endomorphisms
I got a doubt with this problem:
Let $M=\{f:\mathbb{N}\to \mathbb{Z}|\text{$f$ is a function}\}$, defining the sum in $M$ as $(f+g)(n)=f(n)+g(n)$, $M$ is an abelian group. Let $R=\{\phi :M\to M : \phi\ \text{is a morphism}\}$. $R$ is a ring with the pointwise addition and the product as the function composition. Define $$\phi_1(f)(n)=f(2n+1),\ \phi_2(f)(n)=f(2n) $$
Show that $\{\phi_1,\phi_2\}$ is a basis of $R$ as a $R$-module.
Well, I know what to do, but my problem is that I don't even know how to handle the problem. For instance, let $\varphi \in R$; I have to show that there exist $\alpha,\beta \in R$ such that:
$$
\alpha \phi_1+\beta \phi_2=\varphi
$$
and then I don't know how to go on.
AI: Considering elements of $M$ as sequences, we can define 'joint inverse' for $\phi_1,\phi_2$: Let
$$\alpha_0:=f\mapsto (0,f(0),0,f(1),0,f(2),\dots) \\
\beta_0:=f\mapsto (f(0),0,f(1),0,f(2),0,\dots)$$
So that, $\alpha_0\phi_1+\beta_0\phi_2=id$, and hence we can choose $\alpha:=\varphi\alpha_0$ and $\beta:=\varphi\beta_0$.
You also have to prove that $\phi_1,\phi_2$ are ($R$-) independent.
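Here is a finite sketch of this 'joint inverse' identity (my own illustration: elements of $M$ are modeled as truncated integer lists, and the function names mirror the maps above):

```python
# Finite sketch of alpha0*phi1 + beta0*phi2 = id, modeling elements of M
# as truncated integer sequences.

def phi1(f):   # (phi1 f)(n) = f(2n+1): the odd-indexed subsequence
    return f[1::2]

def phi2(f):   # (phi2 f)(n) = f(2n): the even-indexed subsequence
    return f[0::2]

def alpha0(f):  # f -> (0, f(0), 0, f(1), ...): place f at odd positions
    out = [0] * (2 * len(f))
    out[1::2] = f
    return out

def beta0(f):   # f -> (f(0), 0, f(1), 0, ...): place f at even positions
    out = [0] * (2 * len(f))
    out[0::2] = f
    return out

f = [3, -1, 4, 1, 5, -9, 2, 6]  # arbitrary test sequence (even length for simplicity)
recombined = [a + b for a, b in zip(alpha0(phi1(f)), beta0(phi2(f)))]
assert recombined == f  # alpha0(phi1 f) + beta0(phi2 f) = f
```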
|
H: Confusion about an example in Miles Reid Undergraduate Algebraic Geometry
He is giving examples of lines at infinity and how they correspond to asymptotes (pg. 14). So he says:
"The hyperbola $(\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1)$ in $\mathbb{R}^2$ corresponds in $\mathbb{P}^2\mathbb{R}$ to $\mathrm{C}:\left(\frac{\mathrm{X}^2}{a^2} - \frac{\mathrm{Y}^2}{b^2} = \mathrm{Z}^2\right)$; clearly this meets ($\mathrm{Z} = 0$) in the two points $(b, \pm a, 0) \in \mathbb{P}^2\mathbb{R}$, corresponding in the obvious way to the asymptotic lines of the hyperbola."
What? I thought if $\mathrm{Z} = 0$, then the points would be $(\pm a, \pm b, 0)$? I see how the points he obtained correspond to the asymptotes $y = \frac{b}{a}x$ and $y = -\frac{b}{a}x$. But I am not sure how he arrived at those points and not the ones I mentioned above.
AI: In $\Bbb P^2\Bbb R$, we identify points $(u,v,w)$ and $(\lambda u,\lambda v,\lambda w)$ for all $\lambda\in\Bbb R\setminus\{0\}$.
In particular, $(a,b,0)$ is the same point as $(-a,-b,0)$ and $(a,-b,0)$ is the same point as $(-a,b,0)$. So these four solutions are two points, and it is $(a,b,0)$ rather than $(b,a,0)$ if $|a|\ne |b|$.
|
H: Quotient Spaces that are $T_0$, and the Quotient Space $x \sim y$ iff $\overline{\{x\}}=\overline{\{y\}}$.
Let $X$ be an arbitrary topological space; verify that by letting $xE_0y$ whenever $\overline{ \{ x \} } = \overline{ \{ y \} }$, we define an equivalence relation $E_0$ on $X$ and that $X/E_0$ is a $T_0$-space. Show that if for an equivalence relation $E$ on $X$ the quotient space $X/E$ is a $T_0$-space, then $E_0 \subseteq E$.
Hint: The set $\overline{ \{ x \} }$ is a union of equivalence classes of $E_0$.
That it defines an equivalence relation, and that the space is $T_0$ I did already, but for $E_0 \subseteq E$ I tried the hint. Obviously $\overline{ \{ x \} } = \bigcup_{y \in \overline{ \{ x \} }} [y]$.
I am rather clueless how to prove $E_0 \subseteq E$. Suppose $xE_0y$ but it is not the case that $x E y$, then $\overline{ \{ x \} } = \overline{ \{ y \} }$ but $\overline{ \{ [x]_E \} } \ne \overline{ \{ [y]_E \} }$ (in the quotient topology $X/E$, where $[x]_E$ denotes the equivalence class in $X/E$) using the fact that in a $T_0$ space $x \ne y \Rightarrow \overline{ \{x\} } \ne \overline{ \{y\} }$. I have a vague feeling that this might be a contradiction because $X/E_0$ and $X/E$ should be structurally equivalent (or homeomorphic) in some sense (i.e. $X/E_0$ might be some "universal" $T_0$ quotient space...) but I have no idea how to make this formal.
After some trial and error I came to the conclusion that if I could prove if $X/E$ is a $T_0$ space, then the projection map $p_E : X \to X/E$ would be a closed map (which I conjecture that it would be true, but can't prove) then it would follow.
Any ideas and further hints for me?
AI: Assume that $[x]_E\ne[y]_E$, then, as $X/E$ is $T_0$, one of the points, say $[x]_E$, has an open neighborhood $U$ avoiding the other one ($[y]_E\notin U$).
As the projection $p:X\to X/E$ is continuous, $p^{-1}(U)$ is open, and $y\notin p^{-1}(U)$, but $x\in p^{-1}(U)$. This ensures that $\overline{\{y\}}\subseteq X\setminus p^{-1}(U)$, in particular $x\notin\overline{\{y\}}$. So $x E_0 y$ cannot hold.
|
H: Another doubt about real functions on manifolds
Well, some days ago I asked here how we describe functions on manifolds. My idea was that it could be done using the coordinate functions of a chart: if $(x,U)$ is a chart for a manifold $M$ then we can define a function $f : U \to \Bbb R$ as a combination of the $x^i$ functions. Now I have one doubt that seems very silly (the answer is probably obvious, and I'm failing to see it).
Now here comes my doubt: let $C^{\infty}(U\subset M,\Bbb R)$ be the set of all smooth functions defined in the subset $U$ of a manifold $M$ of dimension $n$. I've defined a $k$-combination to be a map
$$c:\prod_{i=1}^k C^{\infty}(U,\Bbb R) \to C^{\infty}(U,\Bbb R)$$
so for instance, for $k=2$ the map $c(f,g)=\lambda f + \sin \circ g$ would be a $2$-combination of $f$ and $g$. Now, let $k = n$, then trivially by the definition we have that:
$$c(x^1,\dots,x^n)\in C^\infty(U,\Bbb R)$$
My question is: do we have that for any $f \in C^\infty(U,\Bbb R)$ there exists a unique $n$-combination of the functions $x^i$ such that $f = c(x^1,\dots, x^n)$? In other words, do we have that any function defined on $U\subset M$ is a suitable combination of the coordinate functions?
Thanks very much in advance!
AI: I believe the answer to your question is yes. Suppose $f: U \subseteq M \rightarrow \mathbb{R}$ where $M$ is a manifold. Let $p \in U$; then, by the definition of a manifold, there exists (at least one) coordinate chart $(V,x)$ with $V \subseteq U$. If $V$ is too large, we can construct a new chart by intersection and shrink the domain as needed. Note
$$ f|_V = f \circ x^{-1} \circ x $$
which means the formula you desire exists locally at $p$. Now, I may not be able to write a formula for $f$ in terms of coordinates on all of $U$ since it is conceivable that $U$ needs to be covered by several charts.
But, perhaps, the real question you are asking is how we can define a function in terms of something besides a coordinate chart on a manifold. The answer there is usually given in terms of the explicit structure of the set as a point set. For example, $x+y+z=1$ gives a plane, hence $f(x,y,z) = 2x+2y+z$ is some function on the plane not (yet) given in a coordinate chart on the plane (which I have not stated). However, it is a simple exercise to choose parameters for the plane and in so doing construct charts which could then be used to formulate $f$. I know this is possible by the general argument I give at the outset of this post.
|
H: Preparing for Mathematics Olympiad
I am preparing for the Mathematics Olympiad; can anyone suggest some books to prepare with? The topics that usually come up involve:
congruence modulo $n$,
inequalities ,
number system, elementary number theory, etc.
Please help me!
Thanks
Kushashwa
AI: I'd recommend that you visit the Art of Problem Solving's (AoPS) website: I've linked you to their "resource" page with articles you can download (they are freely accessible.) The website is a "hub" for very motivated students of mathematics, many of whom engage in competition math. The site hosts mathematics resources, curricula, on-line forums, and a "bookstore". So feel free to explore the vast site.
Given the topics you specifically mention, I'll link you to some pdf notes on Number Theory; it's about 40-some odd pages, covering the topics you mention, and more. I'll also link you to a pdf entitled Olympiad Number Theory: an Abstract Perspective. You'll find at least two (freely accessible) notes in pdf on inequalities available for downloading, at the linked page at the top. Here's one of those: Inequalities.
Enjoy!
|
H: Multisets with Exact Number of Repeated Integers
Given a multiset that contains 5 numbers where the numbers are from 0 to 5 inclusive, and the numbers can be repeated:
a) In how many ways can you have a multiset with exactly four 4s?
b) In how many ways can you have a multiset with exactly three 3s?
c) In how many ways can you have a multiset with exactly two 2s?
d) In how many ways can you have a multiset with exactly one 1s?
e) In how many ways can you have a multiset with exactly zero 0s?
I know that the number of repeated combinations of $k$ elements from $n$ available elements is given by the expression ${n+k-1 \choose k}$, but how would you apply it to this problem?
For example, for a) would it be ${5+3-1 \choose 3} = 35$
I am very confused on this subject, and require some help.
AI: We do exactly two $2$'s; you can do the rest.
We need to count the number of ways to produce "the rest" of the multiset. So we need to count the number of $3$-element multisets, where the elements are chosen from the collection $\{0,1,3,4,5\}$.
You quoted a formula for counting the number of $3$-element multisets taken from a collection of $5$ numbers. It yields the answer $\binom{5+3-1}{3}$.
Remark: We certainly do not need the formula to deal with Question (a). For we need to choose one non-$4$ to go with the four $4$'s. There are $5$ numbers to choose from, so we have $5$ choices.
The formula you mentioned does work, however, with $n=5$ and $k=1$.
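If it helps to see the count confirmed, here is a brute-force sketch for part (c) (my addition, using only the standard library):

```python
from itertools import combinations_with_replacement

# Count size-5 multisets from {0,...,5} with exactly two 2s, by brute force.
multisets = combinations_with_replacement(range(6), 5)
count = sum(1 for m in multisets if m.count(2) == 2)
print(count)  # 35 = C(5+3-1, 3): two 2s plus a 3-element multiset from {0,1,3,4,5}
```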
|
H: Probablity of finding A or B or both A and B
In a jungle, the probability of an animal being a mammal is 0.6, a nocturnal is 0.2. What is the probability that an animal found in this jungle is either a mammal, or nocturnal or both. Assume that these are independent traits.
AI: Hint: The probability that $A$, $B$, or both $A$ and $B$ occur is
$$
P(A\text{ and }B)+P(A\text{ and not }B)+P((\text{not }A)\text{ and }B).
$$
|
H: Mathematicians talking about their identity as a person and as a mathematician?
I was wondering if any of you know of any books, articles, interviews, youtube videos, ... (etc) where a mathematician talks about his or her identity as a person and as a mathematician? Thank you for any sources!
AI: A Mathematician's Apology, by G. H. Hardy.
|
H: For what $x\in[0,2\pi]$ is $\sin x < \cos 2x$
What's the set of all solutions to the inequality $\sin x < \cos 2x$ for $x \in [0, 2\pi]$? I know the answer is $[0, \frac{\pi}{6}) \cup (\frac{5\pi}{6}, \frac{3\pi}{2}) \cup (\frac{3\pi}{2}, 2\pi]$, but I'm not quite sure how to get there.
This is what I have so far: $\\
\sin x - \cos 2x < 0\\
\sin x - \cos^2 x + \sin^2 x < 0\\
2\sin^2 x + \sin x - 1 < 0\\
-1 < \sin x < \frac{1}{2}$
Any help will be much appreciated.
AI: You reached the right inequality. A slightly different way of finishing goes as follows.
We are interested in where $(2\sin x-1)(\sin x+1)\lt 0$.
We find the places where $(2\sin x-1)(\sin x+1)=0$. This happens at $x=\frac{\pi}{6}$, $x=\frac{5\pi}{6}$, and $x=\frac{3\pi}{2}$.
These $3$ points divide the interval $[0,2\pi]$ into $4$ pieces (intervals). On each of these pieces, our function does not change sign. So in each piece, we can choose a convenient test point to find the sign of our function on the whole piece.
For example, on the piece $[0,\frac{\pi}{6})$ choose the test point $x=0$. Result: our function is $\lt 0$ at the test point, and therefore $\lt 0$ on the whole interval $[0,\frac{\pi}{6})$.
On the piece $(\frac{\pi}{6}, \frac{5\pi}{6})$, choose the test point $x=\frac{\pi}{2}$. There our function is positive. So we do not use this interval.
Two more pieces to go.
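For readers who want to check the final answer numerically, here is a small sketch (my addition, assuming NumPy; samples landing exactly on the boundary points may disagree by floating-point error):

```python
import numpy as np

# Sample [0, 2*pi] and compare where sin x < cos 2x with the claimed intervals
# [0, pi/6) U (5*pi/6, 3*pi/2) U (3*pi/2, 2*pi].
xs = np.linspace(0, 2 * np.pi, 100001)
holds = np.sin(xs) < np.cos(2 * xs)
claimed = (xs < np.pi / 6) | ((xs > 5 * np.pi / 6) & (xs < 3 * np.pi / 2)) | (xs > 3 * np.pi / 2)
print(np.sum(holds != claimed))  # 0, up to isolated boundary samples
```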
|
H: Johann Bernoulli did not fully understand logarithms?
This wikipedia article makes the claim:
"Bernoulli's correspondence with Euler (who also knew the above equation) shows that Bernoulli did not fully understand logarithms."
This is found under "History". What does Wikipedia mean here? Surely, Bernoulli understood what a logarithm was.
AI: According to Bradley and Sandifer, eds., Leonhard Euler: Life, Work and Legacy, pages 259-260, Bernoulli, in correspondence with Euler, 1727-29, asserted $\log(-x)=\log x$. There's more to it than that, the book goes into some detail.
|
H: Show $s(s(a))=s(b)$ implies $s(a)=b$
Let us have a first order language $L=\{0,s\}$, where $0$ is a constant, $s$ is a function symbol of arity $1$. The first-order theory $T$ is axiomatized as follows:
$\forall x \neg( s(x) = 0)$
$\forall x \exists y(x =0 \vee s(y)=x)$
How could I prove the following statement?
$$\forall a, b \left( s(s(a))=s(b) \implies s(a)=b \right)$$
I know how I could prove the converse using the substitution but I am not sure what I could use in the other direction.
AI: You can't, as stated. Here's a model which satisfies these constraints. Let $\mathcal{M}$ have as its domain $\{0, 1, 2\}$. Let $s$ be the function that sends $0$ to $1$, sends $1$ to $2$, and sends $2$ to $2$.
In this model, both of the two axioms are true. Furthermore, letting $a = 0$ and $b = 2$, we have $s(s(a)) = s(s(0)) = s(1) = 2 = s(2) = s(b)$. But $s(a) = s(0) = 1 \neq 2 = b$.
In order to prove what you're looking for, then, you'll need to add an extra axiom. For instance, note that the reason this model works is because $s$ is not injective. That is, $s(a) = s(b)$ doesn't imply $a = b$. Adding an axiom to the effect that $s$ is injective would certainly suffice to prove what you want to prove.
|
H: Non-convergent series of convergent integral
I'm trying to find a series representation for an integral, but I think there's something I'm missing, as even though the algebraic manipulations I'm doing are valid (I think!), the series representation of the integral (which I know to converge) ends up diverging. Here's what I'm doing (specifics omitted for brevity; $p$ and $q$ are large polynomials with non-integer exponents, $\deg(q)>\deg(p)$, and $q(x)$ has no roots in the positive reals):
$$\mathcal{I}=\int_0^\infty \frac{p(\lambda)}{q(\lambda)}e^{-k\lambda^2}d\lambda$$
$$\mathcal{I}=\int_0^\infty \frac{p(\lambda)}{q(\lambda)}\sum_{n=0}^\infty\frac{(-1)^nk^n\lambda^{2n}}{n!}d\lambda $$
$$\mathcal{I}=\sum_{n=0}^\infty\frac{(-1)^nk^n}{n!}\int_0^\infty \frac{\lambda^{2n}p(\lambda)}{q(\lambda)}d\lambda$$
$$\mathcal{I}=\sum_{n=0}^\infty\frac{(-1)^nk^n}{n!}C_n$$
where $C_n$ is a constant involving ratios of Gamma functions. The only problem is, the last expression for $\mathcal{I}$ does not converge (or, at least, oscillates enormously beyond the ability of my computer to calculate; it pegs the 200th partial sum at around $10^{2500}$), when in fact the first expression gives an accurate value of around 0.02.
My question is, what error (of concept or execution) have I wound up inadvertently committing and, if possible, what means is there to correct it so I wind up with a convergent series that can be used?
Thanks.
AI: The exponential series converges uniformly on compact intervals but not on the whole of $[0,+\infty)$. You cannot interchange the integral with the sum of that series.
|
H: If the points $x_1,x_2,\ldots,x_n$ are distinct,then...
I am stuck on the following problem that says:
If the points $x_1,x_2,\ldots,x_n$ are distinct,then for arbitrary real values $y_1,y_2,\ldots,y_n$, prove that the degree of the unique interpolating polynomial $p(x)$ such that $p(x_i)=y_i,\,\,(1 \le i \le n)$ is $\le n-1$.
I think I have to use Lagrange polynomial but I could not put the things together . Can someone help? Thanks in advance for your time.
AI: As currently worded, the assertion is false. There are infinitely many interpolating polynomials, of arbitrarily high degrees. For if $P(x)$ is an interpolating polynomial, so is
$$P(x)+Q(x)(x-x_1)(x-x_2)\cdots(x-x_n)$$
for any polynomial $Q(x)$.
What we can say is that there is a unique interpolating polynomial of degree $\le n-1$.
To prove that, we need to do two things (i) show that there is an interpolating polynomial of degree $\le n-1$ and (ii) show that there is at most one.
For (i) use the ordinary Lagrange polynomial.
For (ii), suppose that $P$ and $Q$ are interpolating polynomials of degree $\le n-1$. Then $P-Q$ is $0$ at $x_1,x_2,\dots,x_n$. But a polynomial of degree $\le n-1$ can have $n$ or more distinct roots only if it is the identically $0$ polynomial. It follows that $P-Q$ is identically $0$.
|
H: Determining Convergence of Power Series
Flip a fair coin until you get the first "head". Let X represent the number of flips before the first head appears. Calculate E[X].
So I solved this problem and you get a power series:
$E[X] = 1*0.5 + 2*0.5^2 + 3*0.5^3+ ...$
This is basically of the form $\sum\limits_{i=0}^{\infty}x_i(0.5)^i.$
I flipped open my calculus book to review how to solve this series but I didn't find anything on how to calculate the limiting value for a power series, just the radius of convergence.
I see for a sum of infinite geometric series, the value is:
$S_n = a+ ax + ax^2 + ... + ax^n + ... = \cfrac{a}{1-x}$ for |x| < 1.
Can someone please tell me the general approach if there is one for a power series?
Thank you in advance.
AI: The general approach to this kind of problem is to sum the series by repeated differentiation and multiplication. This should produce a solution to any series of the form $\sum_{n=0}^{\infty}n^k x^n$, and, therefore, any series of the form $\sum_{n=0}^{\infty}p(n)x^n$ . As an example, let's solve your specific series, $\sum_{n=0}^{\infty}nx^n$. To solve this, recall the formula for a simple geometric series:
$$\sum_{n=0}^{\infty}x^n=\frac1{1-x}$$
Now, differentiate both sides with respect to x to get:
$$\sum_{n=0}^{\infty}nx^{n-1}=\frac1{(1-x)^2}$$
Next, multiply both sides by x, and we have:
$$\sum_{n=0}^{\infty}nx^{n}=\frac x{(1-x)^2}$$
Finally, substitute in $x=1/2$ to get $E[X]=2$, which is the correct result for the expectation of a geometric distribution with p=0.5.
Does that answer your question?
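To make the closed form concrete, a tiny numerical sketch (my addition) comparing partial sums with $x/(1-x)^2$ at $x=0.5$:

```python
# Partial sums of sum_{n>=1} n*x^n approach x/(1-x)^2; at x = 0.5 this is E[X] = 2.
x = 0.5
partial = sum(n * x**n for n in range(1, 200))
print(partial, x / (1 - x) ** 2)  # both ~2.0
```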
|
H: Matrices with trace zero.
I would like to show that every trace zero square matrix is similar to one with zero diagonal elements.
This question has been asked before, and has had an answer by Don Antonio.
And my problem is that I cannot understand the cited paper.
In the cited paper, (proof 4), one finds the sentence
Since $\text{Tr}(K) = \text{Tr}(B^{–1}SB) = \text{Tr}(S) = 0$, this step can be
repeated to replace $K$ by a matrix whose every diagonal element is zero ( thereby changing $c$ and $r^T$ ) thus constructing $C$ so that every diagonal element of $C^{–1}SC$ is zero.
Question
I thought that, since $\text{Tr}(K)=0$, we can, by the induction hypothesis, find an invertible $D$, such that $D^{-1}KD$ has diagonal elements $=0$. But how does this enable us to replace $K$ by a matrix with zero diagonal elements, at the cost of changing $r^T$ and $c$?
I tried to construct some matrix $C$ from $D$ such that $C^{-1}\begin{bmatrix}0&r^T\\c&K\end{bmatrix}C=\begin{bmatrix}0&r'^T\\c'&K'\end{bmatrix}$ with $K'$ having zero diagonal elements, but to no avail.
I have tried various choices of $C$, but the result of the multiplication refuses to be of the required form, so I wonder if I am missing something here?
Any hint is well-appreciated.
AI: Try $$C = \begin{bmatrix}1 & 0 \\ 0 & D\end{bmatrix}.$$
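To spell out why this choice works (a step left implicit in the answer): with $D$ the invertible matrix supplied by the induction hypothesis, so that $D^{-1}KD$ has zero diagonal, block multiplication gives
$$C^{-1}\begin{bmatrix}0&r^T\\c&K\end{bmatrix}C=\begin{bmatrix}1&0\\0&D^{-1}\end{bmatrix}\begin{bmatrix}0&r^T\\c&K\end{bmatrix}\begin{bmatrix}1&0\\0&D\end{bmatrix}=\begin{bmatrix}0&r^TD\\D^{-1}c&D^{-1}KD\end{bmatrix},$$
which is of the required form with $r'^T=r^TD$, $c'=D^{-1}c$, and $K'=D^{-1}KD$ having zero diagonal.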
|
H: a matrix with determinant $1$, what can be said about the column $(a \space c)^T$?
Let $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ be a matrix with determinant $1$; what can be said about the column $(a \space c)^T$?
from the condition we have $ad-bc=1$, so do I have to conclude something from this condition only?
Thank you.
AI: The only thing we can conclude with certainty, if we know nothing about $b$ and $d$, is that the column is not $(0,0)^T$.
In every other case, we can find $b$ and $d$ such that the determinant is $1$. For example, if $a\ne 0$, we can choose $b=0$ and $d=1/a$. If $a=0$ but $c\ne 0$, we can produce an example along similar lines.
Remark: There could be other answers, if conditions are put on the matrix entries. For example, if all entries are restricted to be integers, then we can conclude that $a$ and $c$ are relatively prime, that is, they have no common divisor greater than $1$.
|
H: Is $\alpha^{\beta}$ a cardinal?
Let $\alpha, \beta$ be cardinals. Is $\alpha ^{\beta}$, defined as the set of all functions $f:\beta\to \alpha$, a cardinal?
I ask this question because an author of a textbook says that the exponentiation of cardinals, $\alpha$ raised to $\beta$, is defined as the cardinal of the set $\alpha ^{\beta}$. Hence my question: isn't $\alpha ^{\beta}$ always a cardinal?
AI: Cardinal exponentiation is defined by saying that the cardinal number $\kappa^\lambda$ is the cardinality of the set of functions from $\lambda$ to $\kappa$. If you also denote that set of functions by $\kappa^\lambda$, then of course the symbol $\kappa^\lambda$ is ambiguous: it can be either the set of functions from $\lambda$ to $\kappa$ or the cardinality of that set. These are, however, two different things, even though many people denote them by the same symbol.
That is why some of us prefer to write ${}^XY$ for the set of functions from $X$ to $Y$: in that notation
$$\kappa^\lambda=\left|{}^\lambda\kappa\right|\;,$$
where $\kappa^\lambda$ is unambiguously the cardinal number that is the cardinal exponential $\kappa$ raised to the power $\lambda$, and ${}^\lambda\kappa$ is unambiguously the set of functions from $\lambda$ to $\kappa$.
|
H: Analytic extension of functions in Hardy spaces
This is a problem I came across in a direct scattering problem. I have a function $a(s)$ that
is of the form$$
a(s)=\int_0^{\infty}e^{is\xi}A(\xi)d\xi
$$ where $A(\xi)\in L^1\cap L^2$. Then is it possible to extend this function to a bounded analytic function in $\mathbb{C}^{+}$? Why or why not? Can someone give me a proof? I know that if we take $s$ to be a complex number $z\in\mathbb{C}^{+}$, then the new function $a(z)$ lies in the Hardy space $H^2(\mathbb{C}^{+})$. I tried to evaluate the contour integral$$
\oint_{\gamma}a(z)dz
$$so that I can use Morera theorem. But how to calculate this? Can someone help me?
AI: To see that $a(s)$ is bounded, note that
$$ |a(s)|=\left|\int_0^{\infty}e^{is\xi}A(\xi)\,d\xi \right|\leq \int_{0}^{\infty}|A(\xi)|\,d\xi < \infty, $$
since $A(\xi) \in L^1(0,\infty)$.
|
H: Research and application of causal inference
I have been reading Pearl's book to understand how Bayesian networks and causal discovery might work. Other than Pearl, I haven't yet found a rigorous, systematic approach to causal inference from observational data. The theorems and algorithms he presents (e.g. Inductive Causation) look convincing to me; however, it appears that causal discovery is very much an area of ongoing research. I was wondering if anyone had any experience applying any of this research or knows of any other, strongly supported methods of causal inference.
AI: Causal inference algorithms that evolved from the ideas Pearl (et al.) presented in the papers the book you mention is based on are abundant, sometimes with marked success. This book will give plenty of algorithms and applications.
One striking application is the construction of a Bayesian network for the diagnosis of complicated lung conditions. The network performs almost as well as a team of expert lung doctors. There are many others for, e.g., robots traveling through a maze, computer vision, OCRs etc.
|
H: Question regarding a Jacobian
Suppose I have these two pairs of variables:
\begin{equation}
u = g_1(x,y), \qquad v = g_2(x,y),
\end{equation}
\begin{equation}
x = h_1(u,v), \qquad y = h_2(u,v).
\end{equation}
If my Jacobian of $x$ and $y$, $J(x,y)$, is the determinant of the partial derivatives of the functions $g_1$ and $g_2$ with respect to $x$ and $y$, could I then say that the Jacobian of $u$ and $v$ is $J(u,v) = 1/J(h_1(u,v),h_2(u,v))$, or is this not true?
I am reading a portion of my statistics book and was not sure if this is just coincidence or is it a fact. The example shows that the Jacobian for going from polar to Cartesian coordinates is $1/r$, and I know that from Cartesian to polar the Jacobian is $r$ so I wasn't sure if this was a coincidence or not.
AI: I think we can prove the following alternatively:
$$\frac{\partial(x,y)}{\partial(r,s)}=\frac{\partial(x,y)}{\partial(u,v)}\frac{\partial(u,v)}{\partial(r,s)}$$ Here we assume that $x=f(u,v), y=g(u,v)$ and $u=\phi(r,s),~v=\psi(r,s)$
The hint I can give you is to use the definition of the Jacobian to get on the right path. Then look at the case $x=r,y=s$.
|
H: Integral of $\frac{2}{x^3-x^2}$
How can I integrate $\dfrac2{x^3-x^2}$? Can someone please give me some hints? Thanks a lot!
AI: HINT:
$$\frac2{x^3-x^2}=\frac2{x^2(x-1)}=\frac{A}x+\frac{B}{x^2}+\frac{C}{x-1}$$
for what values of $A,B$, and $C$?
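For checking your values of $A$, $B$, $C$ afterwards, partial fractions can be computed mechanically (a sketch assuming sympy; it gives away the answer, so try by hand first):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.apart(2 / (x**3 - x**2)))  # -> 2/(x - 1) - 2/x - 2/x**2, i.e. A = B = -2, C = 2
```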
|
H: How many distinct copies of $P_m$ are in $K_n$?
Let $K_n$ be the complete graph of order $n$ and $P_m$ a path with $m$ distinct vertices, $1 \leq m \leq n$.
Question: How many distinct copies of $P_m$ are contained in $K_n$?
Given that a permutation maps a path to a different path it seems like there will always be another permutation which will send the original path to the same path, different from the original, so that the number of copies of $P_m$ contained in $K_n$ will be:
$$\frac{m!}{2}\binom{n}{m} $$
Is this correct? If not, or if so, could someone provide a more rigorous derivation of the correct value?
AI: It needs only a minor correction.
Any $m$ vertices of $K_n$ may be the vertices of a copy of $P_m$, so there are $\binom{n}m$ ways to choose the vertices of the path. They may be traversed in any order, so a given set $m$ vertices may be traversed as a path in $m!$ ways. However, for $m>1$ this overcounts by a factor of $2$, since it counts as distinct the two directions in which a copy of $P_m$ may be traversed. Thus, the number of copies of $P_m$ in $K_n$ is
$$\frac{m!}2\binom{n}m=\frac{n!}{2(n-m)!}=\frac12n^{\underline{m}}$$
if $m>1$ and $n$ if $m=1$. (Here $n^{\underline{m}}$ is the falling factorial.)
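A brute-force sketch confirming the formula on a small case (my addition, using only the Python standard library):

```python
from itertools import combinations, permutations
from math import comb, factorial

# Brute force: a copy of P_m in K_n is an ordering of m distinct vertices,
# identified with its reversal.
def count_paths(n, m):
    if m == 1:
        return n
    paths = set()
    for verts in combinations(range(n), m):
        for order in permutations(verts):
            paths.add(min(order, order[::-1]))  # identify the two directions
    return len(paths)

n, m = 6, 4
print(count_paths(n, m), factorial(m) // 2 * comb(n, m))  # 180 180
```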
|
H: How to solve a system of equations
I have this system of equations:
$$3x^2+7y^2=55$$
and
$$2x^2+7xy=60$$
Is there a method of solving for $(x,y)$ without using $x^2=t$, $y^2=z$?
AI: Yes, since $x$ is nonzero (because of the second equation), we can eliminate
$y=(60-2x^2)/(7x)$ and substitute this in the first equation, which gives
$$
25(x + 4)(x + 3)(x - 3)(x - 4)=0.
$$
|
H: Perfect matching in a graph
Assuming I have a bipartite graph with the following property:
for each subset of nodes $S \subseteq V$:
$$
\sum_{v\in N(S),\ z\in N(N(S))} (v,z) \geq 2|S|
$$
where $N(S)$ is the neighbourhood of $S$.
i.e. if you go over each of $S$'s neighbors and count the edges coming out of each one, the number of edges is greater than or equal to twice the size of the subset $S$.
How can I show that this graph has a double matching, where a double matching means a graph with 2 different perfect matchings?
I know it is true by intuition but I can't seem to find a way to prove it formally.
EDIT:
I tried to use Hall's theorem, but it doesn't seem to be correct, because I am counting the number of edges and as a result I am not getting the size of $|N(S)|$, because some of the vertices are counted more than once.
assuming I have the graph $V= \{v_1, v_2 , u_1, u_2\}$. and $v_1$ is connected to $u_1$ and $u_2$, and $v_2$ is connected to $u_1$ and $u_2$. this graph has the property I mentioned above, and it's pretty obvious that there is a double matching in here.
But if you look at the size of the neighborhood of $S=\{v_1, v_2\}$, it will be $|\{u_1,u_2\}| = 2$
and not $2|S|= 4$ as Hall's theorem requires (because when you are counting the vertices you count $u_1$ and $u_2$ twice: once as $v_1$'s neighbors, and once as $v_2$'s neighbors).
Also, if I try to use Hall's theorem I need to 1. choose a first perfect matching 2. remove it, and see that there is still a perfect matching (i.e. there is a double matching)
but in this case we can choose a perfect matching that will leave us without a choice for the second matching.
Please advise.
AI: Edit:
Ok, I was wrong, I think that your conjecture is false. As mentioned in comments, I assume the following property
$$\sum_{v\in N(S),\ z\in N(N(S)) }(v,z) \geq 2 |S| \tag{$\spadesuit$}$$
Consider the following bipartite graph (obviously it doesn't even have a single matching):
Let $S_\bullet$ be the part of $S$ that contains black vertices, and $S_\circ$ the rest. If $S_\bullet$ is not empty, then the sum contains at least all the blue edges, hence $8 \geq 2 |S_\bullet|$. Also, if $S_\circ$ contains any of the first two white vertices, then again $8 \geq 2 |S_\circ|$ because of the blue edges. Moreover, if $S_\circ$ does not contain any of the first two white vertices, then $4 \geq 2 |S_\circ|$. Finally, because of how the left-hand side sum is stated, the blue edges induced by $S_\circ$ and $S_\bullet$ will be counted the appropriate number of times, that is, once for $v \in N(S_\bullet)$ and once for $v \in N(S_\circ)$; hence, condition $(\spadesuit)$ is satisfied.
I hope this helps now ;-)
|
H: What does $\mathrm d^2 x$ exactly mean?
I am learning radiometry and one of the equation is radiance which is given as the radiant flux per unit projected area per unit solid angle. In equation:
$$L = {d^2\Phi \over \cos(\theta)\,dA\,d\omega} \tag{1}$$
Now further in the book I read they use intensity which is the angular density of radiant flux:
$$I = {d\Phi \over d\omega} \tag{2}$$
And they explain that, because of the cosine law, $I$ is attenuated by $\cos(\theta)$, where $\theta$ is the angle of incidence between the surface normal and the incident light direction (or view direction). So far so good.
My problem is that they substitute $I\cos(\theta)d\omega$ into the numerator in equation 1, which gives something like:
$${I\cos(\theta)d\omega \over {\cos(\theta)dAd\omega}} \rightarrow {I\over{dA}}$$
All that seems logical to me but the question is: in equation 1 the numerator is $d^2\Phi$. So is it legal to replace it with just $I\cos(\theta)d\omega$? What does the exponent 2 (after $d$ and before $\Phi$) mean mathematically in that case? How should I read and interpret it?
Thank you so much for your "smart" help.
For reference: www.astrowww.phys.uvic.ca/~tatum/stellatm/atm1.pdf (p12)
AI: The answer to your question is that in physics this type of replacement is possible, so asking math people why may cause discussion. Strictly speaking, as a mathematician you cannot do it directly, but in physics this type of operation with differentials is correct and accepted by consensus under the constraints of that field of work: $d^2\Phi$ acts as an operator and can be substituted in this case. It presupposes a periodic behaviour of the solution of the system that allows one to apply directly the mathematically ugly but physically elegant $d^2\Phi=I\cos(\theta)\,d\omega$ (oops!).
So the mathematical background of the calculation is correct, but the heuristic straightforward substitution is not mathematically nice.
Let me try to put the equations in infinitesimal notation (just trying to recall what I remember from radiation theory); there are three of them:
$$\delta I=L\,\cos\theta\,\delta A \quad (1)$$
$$\delta \Phi=I\,\delta \omega \quad (2)$$
$$\delta^2 \Phi=L\,\delta A \, (\delta\omega\, \cos\theta) \quad (3)$$
With the differential intensity $\delta I$ (of the point source in a given direction on $\delta A$).
It should be possible now to reconstruct all your equations from the equations above.
Your equation $1$ follows from equation $(3)$ here, your equation $2$ from equation $(2)$ here, and with equations $(1)$ and $(3)$ we obtain
$$\delta^2 \Phi=L\,\delta A \, (\delta\omega\, \cos\theta)=\delta I\; \delta\omega\quad (4)$$
Now introduce the condition
$$\delta I_{(\theta)}=I_{(n)} \,\cos(\theta) \quad (5)$$
And this is physics; if I recall correctly, this can be justified by Lambert's cosine law.
Now you should have all the arguments together.
|
H: why $\frac{d}{dy}$ can pass through integral w.r.t. $x$?
When I calculate integrals of several variables, many books use the following step without proof. I want to know why it is true:
$$\frac{d}{dy}\left[\int^a_b f(x,y)dx\right]_{y=k}=\int^a_b \frac{\partial}{\partial y} \left[f(x,y)\right]_{y=k}dx$$
I also wonder that whether it is true when the integral or differentiation become indefinite. Which is :
$$\frac{d}{dy}\int f(x,y)dx=\int\frac{\partial}{\partial y} f(x,y)dx$$
$$\frac{d}{dy}\int^a_b f(x,y)dx=\int^a_b \frac{\partial}{\partial y} f(x,y)dx$$
$$\frac{d}{dy}\left[\int f(x,y)dx\right]_{y=k}=\int \frac{\partial}{\partial y} \left[f(x,y)\right]_{y=k}dx$$
AI: In simple terms, integration is a limiting case of summation (Riemann sums). Therefore, under reasonable assumptions you can differentiate under the integral sign - just like the case with sums.
As for your other question, the general indefinite integral of $f(x,y)$ is $\int f(x,y)\mathrm{d} x=\int_{x_0}^xf(t,y) \mathrm{d}t+C$ where $x_0,C$ are constants. Applying the rule here gives the required result (again if the requirements of the theorem are met).
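For reference, here is a standard sufficient condition (my addition; it is not spelled out in the answer above): if $f(x,y)$ and $\frac{\partial f}{\partial y}(x,y)$ are continuous on $[b,a]\times(c,d)$, then the Leibniz integral rule gives, for every $y\in(c,d)$,
$$\frac{d}{dy}\int^a_b f(x,y)\,dx=\int^a_b \frac{\partial f}{\partial y}(x,y)\,dx.$$
For improper integrals one additionally needs a domination hypothesis, e.g. $\left|\frac{\partial f}{\partial y}(x,y)\right|\le g(x)$ for an integrable $g$, which is exactly the kind of condition that fails in the previous question.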
|
H: Explicitly writing out a differential 2-form
In Tu's An Introduction to Manifolds, one question asks:
At each point $p\in \mathbb{R}^3$, define a bilinear function $\omega_p$ on $T_p(\mathbb{R}^3)$ by:
$$\omega_p(\underline{a},\underline{b})=\omega_p((a^1,a^2,a^3),(b^1,b^2,b^3))=p^3(a^1b^2-a^2b^1)$$
For tangent vectors $\underline{a},\underline{b}\in T_p(\mathbb{R}^3)$, where $p^3$ is the third component of $\underline{p}=(p^1,p^2,p^3)$. Since $\omega_p$ is an alternating bilinear function on $T_p(\mathbb{R}^3)$, $\omega$ is a 2-form on $\mathbb{R}^3$. Write $\omega$ in terms of the standard basis $dx^i\wedge dx^j$ at each point.
I understand that we write this as $\omega=a_{ij}dx^i\wedge dx^j$, with $a_{ij}=\omega(e_i,e_j)$ where $e_1,e_2,e_3$ span $T_p(\mathbb{R}^3)$. With this I find that all constants vanish apart from $a_{12}$ and $a_{21}$, which lead to:
$\omega=p^3dx\wedge dy-p^3dy\wedge dx=2p^3dx\wedge dy$. In the solutions however, since an alternating function of two arguments is completely determined by its actions on $w(e_{k},e_{l}),k<l$, Tu sums only over $i<j$ leading to $\omega=p^3dx\wedge dy$.
My question is, I thought that whether or not a multilinear function is alternating, you should be able to characterise it by feeding it all possible combinations of basis elements. But it seems in this case that leads to a different answer. Why is this?
AI: Note that $(a\wedge b)_{ij} = 2a_{[i}b_{j]}$ so in particular the 12 component is $a_1b_2-a_2b_1$. Therefore only one of these is needed to reconstruct both the $12$ and $21$ components. Put another way, you are building the two-form not out of usual matrices like $$\pmatrix{0 & 1 \\ 0& 0}$$ but instead ones like $$\pmatrix{0 & 1 \\ -1& 0}$$
The reason this differs from the usual bilinear approach is that we expand two-forms in terms of this latter basis $\mathrm dx_1\wedge \mathrm d x_2=- \mathrm dx_2\wedge \mathrm d x_1$ rather than the more familiar $ \mathrm dx_1\otimes \mathrm d x_2 \neq \pm\mathrm dx_2\otimes \mathrm d x_1$
|
H: Definition of the fundamental group
Why are the elements of the fundamental group of a space equivalence classes? Why isn't the group defined to be the set of all possible loops at a base point with the product operation of paths? What would go wrong if it was defined so? Or is it simply not useful?
AI: We need the homotopy equivalence relation to capture the "essential" parts of the topological space. You can think of it kind like throwing lassos into the space and pulling tightly.
Without the equivalence relation, we fail to get an actual group. We can't even have an identity element, for instance.
To expand: the only obvious candidate for an identity element is the constant map from $[0,1]$ to the basepoint (the "standing around, twiddling one's thumbs" path). If you concatenate this with any other distinct path $\gamma$, then you will not end up with $\gamma$. It will have the same image, to be sure, but it's not just the journey, it's how you get there (to abuse an English phrase).
|
H: In an inner product space over $\mathbb R$, prove $ (u,w)=0 \Leftrightarrow \left \| u+w \right \|=\left \| u-w \right \| $
Let $V$ be an inner product space over field $F$ and $u,w\in V$.
Prove that if $F=\mathbb{R}$ then:
$$ (u,w)=0 \Leftrightarrow \left \| u+w \right \|=\left \| u-w \right \| $$
Is it also true for $F=\mathbb{C}$ ?
AI: Hint: Try expanding $\|u + w\|^2 = \langle u + w, u + w \rangle$ and $\|u - w\|^2 = \langle u - w, u - w \rangle$. Now notice how some terms cancel out. Does this cancelation also happen in $\mathbb C$?
By following the hint, one gets:
$$
\|u + w\| = \|u - w\| \iff \langle u, w \rangle + \overline{\langle u, w\rangle} = 0
$$
Since $\overline{\langle u, w\rangle} = \langle u, w\rangle$ in $F = \mathbb R$, this is equivalent to $\langle u, w\rangle = 0$.
On the other hand, if $F = \mathbb C$, then $u = (1, 2)$, $w = (2i, i)$ is a counterexample. $\|u + w\| = \sqrt{10} = \|u - w\|$. Yet, $\langle u, w \rangle = -4i \ne 0$.
|
H: Show mapping involving tensor product is well defined.
Let $R$ be a subring of $S$, let $N$ be a left $R$-module and let $\iota : N \to S \otimes_RN$ be the $R$-module homomorphism defined by $\iota(n) = 1 \otimes n$. Suppose that $L$ is any left $S$-module and that $\varphi : N \to L$ is an $R$-module homomorphism from $N$ to $L$. Then there is a unique $S$-module homomorphism $\Phi : S \otimes_RN \to L$ such that $\varphi = \Phi \circ \iota$.
My approach so far is:
Define $\Phi : S \otimes_RN \to L$ by $\Phi(s \otimes n) = s\varphi(n)$.
Show $\Phi$ is well defined.
Show $\Phi$ is in fact an $S$-module homomorphism.
Show $\Phi$ is unique.
I am having trouble with showing that $\Phi$ is well-defined. I suppose $s \otimes n = s' \otimes n'$, it follows by property of cosets that $(s, n) - (s', n') \in S \otimes_RN$. We want to show that $\Phi(s \otimes n) = \Phi((s, n) + S\otimes_RN) = s\varphi(n) = s'\varphi(n') = \Phi(s' \otimes n')$. So if I could show that $s \varphi(n) = s'\varphi(n')$ I would be done with showing well-definedness.
Can someone give me a hint on how to do this? Thanks!
AI: I assume that you use the definition of tensor product that starts with a free module $F$ generated by elements of the form $(s, n)$, then defines the tensor product as $F/A$ where $A$ is the submodule of $F$ generated by bilinearity conditions.
What you want to show is that the map $\overline\Phi: F \to L$ defined by $\overline\Phi((s, n)) = s\phi(n)$ gives the same value for elements in the same coset of $A$ in $F$, i.e., that $\overline\Phi(A) = 0$. This can be verified easily by testing generators of $A$.
|
H: Solve the equation : $2013x+\sqrt[4]{(1-x )^7}=\sqrt[4]{(1+x )^7}$.
Solve the equation : $2013x+\sqrt[4]{(1-x )^7}=\sqrt[4]{(1+x )^7}$.
Show that it has precisely one root: $x=0$.
AI: Let $f(x)=2013x+\sqrt[4]{(1-x )^7}-\sqrt[4]{(1+x )^7}$. We want to solve the equation $f(x)=0$. Observe that the domain of $f$ is $[-1,1]$. Furthermore, its derivative is:
$$
f'(x)=2013+ \dfrac{7 \sqrt[4]{-(x-1)^7}}{4 (x-1)}-\dfrac{7 (x+1)^6}{4 ((x+1)^7)^{3/4}}
$$
which is positive for all $x\in(-1,1)$. Hence, $f$ is monotone increasing; since $f(0)=0$, it follows that $x=0$ is its only root.
|
H: Need pointers on how to do this trigonometric proof
$$ \cos x = \cos y + \cos^3 y$$
$$\sin x = \sin y - \sin^3 y$$
Prove that $\sin {(x - y)} = \pm \frac{1}{3}$.
I need a little hint, not a complete answer.
AI: HINT: here $\sin x=\sin y(1-\sin^2y)=\sin y\cos^2y$
$\sin^2x+\cos^2x=1\implies \sin^2y\cos^4y+\cos^2y+\cos^6y+2\cos^4y=1$
$\implies (1-\cos^2y)\cos^4y+\cos^2y+\cos^6y+2\cos^4y=1$
$\implies 3\cos^4y+\cos^2y-1=0$
EDIT:
Substituting $\cos^2y = t$ gives $3t^2+t-1=0\implies t=\dfrac{-1+\sqrt{13}}{6}$ (taking the positive root only)
$\implies \cos^2y=\dfrac{-1+\sqrt{13}}{6}\implies \sin^2y=1-\dfrac{-1+\sqrt{13}}{6}=\dfrac{7-\sqrt{13}}{6}$
Now $\sin(x-y)=\sin x\cos y-\cos x\sin y=\sin y\cos^3y-\sin y\cos y(1+\cos^2y)=-\sin y\cos y$. Therefore, $\sin^2(x-y)=\sin^2y\cos^2y=\left(\dfrac{7-\sqrt{13}}{6}\right)\left(\dfrac{-1+\sqrt{13}}{6}\right)=\dfrac{8\sqrt{13}-20}{36}\neq\dfrac{1}{9}$
Therefore, $\sin(x-y)\neq\pm\dfrac{1}{3}$
I think there is some problem with your question or the expected result.
|
H: Why do $UU^* = I$ and $U^*U = I$ hold on different spaces for the unitary matrix $U$ of a polar decomposition?
The following is from Lang $SL_2$.
Consider the polar decomposition of a matrix A. We let $P_A = (A^*A)^{1/2}$ and set $U$ s.t. we have $$UP_Av = Av.$$
Then, it follows that $U\colon \operatorname{im} P_A \to \operatorname{im} A$. Lang now defines $U$ to be $0$ on the orthogonal complement of $\operatorname{im} P_A$ and concludes that we have on $\operatorname{im} P_A$ $$U^*U = I,$$ and on $\operatorname{im} A$ we have $$UU^* = I.$$
Questions:
1) To find $U$, can we take the right inverse of $P_A$ and multiply it on the left by $A$? The way $U$ was defined here is suggestive that it should not be so simple.
2) Why don't the two equalities hold on the same space? Or how do I see that $UU^*$ is the orthogonal proj. onto im $A$ and $U^*U$ onto im $P_A$?
AI: Since $P_A$ is self-adjoint, $(\ker P_A)^\perp = \operatorname{im} P_A$, and so $P_A|_{\operatorname{im}P_A} : \operatorname{im}P_A \to \operatorname{im}P_A$ is invertible, allowing you to define $U : \operatorname{im} P_A \to \operatorname{im} A$ by $$U := A \left(P_A|_{\operatorname{im}P_A}\right)^{-1},$$ where $\left(P_A|_{\operatorname{im}P_A}\right)^{-1}$ is the inverse of $P_A|_{\operatorname{im}P_A}$ as a map $\operatorname{im}P_A \to \operatorname{im}P_A$. So, you are absolutely right, but you do need to be absolutely precise about domains and codomains.
Now, recall that $U : \operatorname{im} P_A \to \operatorname{im} A$, so that $U^\ast : \operatorname{im} A \to \operatorname{im} P_A$. Hence, a priori, $$U U^\ast : \operatorname{im} A \to \operatorname{im} A, \quad U^\ast U : \operatorname{im} P_A \to \operatorname{im} P_A.$$ Since $A^\ast A = P_A^2$, it is easy to check by direct computation that $U^\ast U = 1_{\operatorname{im} P_A}$, so that $U : \operatorname{im} P_A \to \operatorname{im} A$ is injective. On the other hand, by the original construction, $U$ is manifestly surjective, i.e., onto $\operatorname{im} A$. Thus, $U : \operatorname{im} P_A \to \operatorname{im} A$ is necessarily invertible, so that since $U^\ast : \operatorname{im} A \to \operatorname{im} P_A$ is a left inverse of $U$, it is necessarily the inverse of $U$, implying $UU^\ast = 1_{\operatorname{im} A}$.
Now, if you extend $U$ to all of $\mathbb{C}^n$ by setting $U|_{(\operatorname{im} P_A)^\perp} := 0$, then you can check directly that $U^\ast|_{(\operatorname{im} A)^\perp} = 0$. This allows you to check, subspace by subspace, that the equalities
$$
U^\ast U = 1_{\operatorname{im} P_A}, \quad UU^\ast = 1_{\operatorname{im} A}
$$
for $U$ as an operator $\operatorname{im} P_A \to \operatorname{im} A$ translate to the equalities
$$
U^\ast U = \text{orthogonal projection onto $\operatorname{im} P_A$}, \quad U U^\ast = \text{orthogonal projection onto $\operatorname{im} A$}
$$
for $U$ as an operator $\mathbb{C}^n \to \mathbb{C}^n$.
|
H: What is the maximum value of $\frac{2x}{x + 1} + \frac{x}{x - 1}$, if $x \in \mathbb{R}$ and $x > 1$?
What is the maximum value of
$$f(x) = \frac{2x}{x + 1} + \frac{x}{x - 1},$$
if $x \in \mathbb{R}$ and $x > 1$?
A 2-D plot of $f$ for $x \in (-\infty, \infty)$ is here.
Lastly, note that WolframAlpha cannot find a global maximum.
AI: For $x>1$ we have $f(x)>\frac1{x-1}$, which is unbounded from above.
|
H: $O\in M_{3}(\mathbb{R})$ is orthogonal and $\det O=-1$ then $\lambda=-1$ is an eigenvalue of $O$
I was asked the following question on a test:
If $O\in M_{3}(\mathbb{R})$ is orthogonal and $\det O=-1$, then $\lambda=-1$ is an eigenvalue of $O=(o_{ij})$.
I tried building equations using $OO^{t}=I\Rightarrow[OO^{t}]_{ij}=\sum_{k=1}^{3}o_{ik}o_{jk}=\delta_{ij}$ and the statement about the determinant, but without success.
Can anyone give me a hint?
Thanks!
AI: The only fact we need about orthogonal matrices is that their eigenvalues have absolute value 1.
Suppose all the eigenvalues are real. Then each eigenvalue has to be $1$ or $-1$. Since their product is $-1$, at least one needs to be $-1$.
If not all eigenvalues are real, there exists one complex eigenvalue $\lambda$. But then also $\overline{\lambda}$ is an eigenvalue. Let $\mu$ be the remaining eigenvalue. It follows that
$$-1=\det O=\lambda \overline{\lambda} \mu=|\lambda|^2\mu=\mu$$
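An empirical illustration of this fact (my addition, assuming NumPy; the QR factorization of a Gaussian matrix produces a random orthogonal matrix):

```python
import numpy as np

# Random 3x3 orthogonal matrices with det = -1 always have -1 as an eigenvalue.
rng = np.random.default_rng(0)
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) > 0:
        Q[:, 0] *= -1           # flipping a column flips the determinant sign
    eigenvalues = np.linalg.eigvals(Q)
    assert np.min(np.abs(eigenvalues + 1)) < 1e-8
print("all 100 samples have eigenvalue -1")
```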
|
H: What is the minimum value of $\sqrt{\frac{2(x - 1)}{x}} + \frac{x + 1}{x}$, if $x > 1$?
What is the minimum value of
$$f(x) = \sqrt{\frac{2(x - 1)}{x}} + \frac{x + 1}{x},$$
if $x \in \mathbb{R}$ and $x > 1$?
Note that $f$ has a global minimum value of
$$f(1) = 2$$
if we allow $x \geq 1$. (The WolframAlpha verification is here.)
A 2-D plot of the function for $x \in (-\infty,\infty)$ is here.
AI: Note that
$$\frac{d}{dx}f(x)=\frac{d}{dx}\left(\sqrt{\frac{2(x - 1)}{x}} + \frac{x + 1}{x}\right)=\frac{\left(\sqrt{2-\frac2x}-2\right)x+2}{2(x-1)x^2}$$
When $1\lt x\lt2$, you can see that $f'(x)$ is positive; when $x=2$, $f'(x)=0$; and when $x\gt2$, $f'(x)$ is negative. In fact, $f$ has a maximum value at $x=2$ but no minimum value given that $x\gt1$. This is not actually strange: not every function has a minimum value.
|
H: What is a harmonic complex function?
So, as far as I have learned, a complex function $f(u)$ is considered harmonic if and only if it satisfies the equation below:
$$
\frac{\partial ^2 u}{\partial x^2} + \frac{\partial ^2 u}{\partial y^2} = 0
$$
for any given complex number $x+iy$.
Is that right ?
AI: I think that a real function $u(x,y)$ is harmonic if it obeys that equation. If it does, then there is another real function $v(x,y)$ that is also harmonic, and there is a complex function $f(x+iy)=u(x,y)+iv(x,y)$ which is differentiable. By that, I mean, you can write $f(x+iy)=g(x+iy,x-iy)=g(z,\overline{z})$, and $\partial g/\partial\overline{z}=0$
|
H: Differentiability Question
I have the following true/false claim:
There exists a function $f(x,y)$ which is differentiable at $(x_0,y_0)$ and whose directional derivatives in each direction $(\cos \theta, \sin \theta )$ for $0\leq \theta <2\pi $ equal $\cos^2 \theta + 2\sin \theta $.
I am almost sure this claim is false but can't understand exactly why .
I guess that this is because, if such a function existed, the gradient would have to depend on the angle, which isn't possible.
Can someone help me understand how to formalize this argument?
Thanks !
AI: Wlog $(x_0,y_0)=(0,0)$. The derivative in direction $v$ is $\nabla f|_{(0,0)}\cdot v$. Now consider
$$v=(1,0)^t = (\cos(0),\sin(0))^t$$
Then by assumption
$$\nabla f|_{(0,0)}\cdot v = 1$$
and
$$\nabla f|_{(0,0)}\cdot (-v) = \nabla f|_{(0,0)}\cdot (\cos(\pi),\sin(\pi)) = 1\neq -\nabla f|_{(0,0)}\cdot v$$
This is a contradiction.
|
H: A strange trigonometric identity in a proof of Niven's theorem
I can't understand the inductive step on Lemma A in this proof of Niven's theorem. It asserts, where $n$ is an integer:
$$2\cos ((n-1)t)\cos (t) = \cos (nt) + \cos ((n-2)t)$$
I tried applying the angle subtraction formula to both sides, but all that does is introduce a bunch of sines, which I can't see how to eliminate.
AI: As
$\cos(A-B)+\cos(A+B)=\cos A\cos B+\sin A\sin B+\cos A\cos B-\sin A\sin B=2\cos A\cos B$
Put $A+B=nt,A-B=(n-2)t$
Alternatively use $\cos C+\cos D=2\cos\frac{C+D}2\cos\frac{C-D}2$
|
H: Why $\omega$ can't be bijectively mapped to $\omega +1$
Let $\omega$ be the order type of the totally ordered set $\mathbb{N}$, and $\omega +1$ the set $\Bbb{N}\cup \{0\}$.
$0$ is greater than all the natural numbers as per this ordering.
My question is why can't $\Bbb{N}$ be bijectively mapped to $\Bbb{N}\cup\{0\}$? I understand the basic argument:
There is no element in $\Bbb{N}$ which maps to $\{0\}$.
$\Bbb{N}$ is bijectively mapped to a proper subset of $\Bbb{N}\cup\{0\}$. A set can't be bijectively mapped to a proper subset of the co-domain and also the co-domain.
But I'm not confident about whether these arguments work for infinite sets. One of the reasons being I used to think if a set can be bijectively mapped to another set, then any permutation of that set can be bijectively mapped to the same set. But this is not true for mapping $\Bbb{N}\cup\{0\}$ to $\Bbb{N}$. Hence I'm not quite sure what arguments are valid for infinite sets. My questions:
1. Do my arguments work for infinite sets? If not, what arguments are required to prove that $\omega$ can't be bijectively mapped to $\omega + 1$?
2. Is picking out every element from the co-domain and proving that it has been injectively mapped to the only way of proving bijection? Does the selection of such an element assume the axiom of choice?
I realise my question may be unclear, and will attempt to make it more clear should anyone find it difficult to navigate this mess. Thanks in advance!
AI: But $\omega$ and $\omega+1$ are equipotent, which is to say: there is a bijection between the two sets. Consider the map $f(n)=n+1, f(\omega)=0$.
There is just no order isomorphism between the two sets. Because order isomorphism preserves the properties of the order, in particular the existence of a maximal element. $\omega+1$ has a maximal element, whereas $\omega$ doesn't.
But more can be said: If two ordinals are order-isomorphic then they are equal.
Note that when working with sets which are well-ordered, the axiom of choice is not used. We have a definable choice function (definable from the well-order, that is). In particular, if the codomain is countable we can always make choices.
|
H: value of $\sum_{k=0}^{49}(k+1)(1.06)^{k+1}$
How do I calculate the value of $\sum_{k=0}^{49}(k+1)(1.06)^{k+1}$? I do not know the way to solve this type of a summation. Any guidance will be much appreciated
AI: For $x\in\mathbb{R}\setminus\{1\}$,
$$\begin{align*}
\sum_{k=0}^{49}(k+1)x^{k+1}&=x\sum_{k=0}^{49}(k+1)x^{k}=x\frac{d}{dx}\sum_{k=0}^{49}x^{k+1} \\
&=x\frac{d}{dx}\sum_{k=1}^{50}x^{k} =x\frac{d}{dx}\left(x\frac{x^{50}-1}{x-1}\right)
\end{align*}$$
Now, you can easily compute the last expression, and evaluate it at $x=1.06$.
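A symbolic cross-check of the closed form (a sketch assuming sympy; the numeric value in the comment is approximate):

```python
import sympy as sp

x = sp.symbols('x')
closed = x * sp.diff(x * (x**50 - 1) / (x - 1), x)
q = sp.Rational(106, 100)
direct = sum((k + 1) * q**(k + 1) for k in range(50))
print(sp.simplify(closed.subs(x, q) - direct))  # 0
print(float(direct))  # roughly 1.114e4
```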
|
H: I found that $\frac{dx}{dt} \cdot x = x$. What did I do wrong?
I found the following while fiddling around with the product rule.
$$
\frac{dx}{dt} \cdot x
= \frac{1}{2} \left( \frac{dx}{dt} \cdot x + x \cdot \frac{dx}{dt}\right)
= \frac{1}{2} \frac{d}{dt} x^2
= x
$$
Which is wrong iff $\frac{dx}{dt} \neq 1$. What did I do wrong?
AI: We have $\frac{d}{dx}x^2 = 2x$, but $\frac{d}{dt}x^2=2x\frac{dx}{dt}$.
You've essentially assumed that $x=t$, in which case $\frac{dx}{dt} = 1$.
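A one-line symbolic illustration of the distinction (a sketch assuming sympy):

```python
import sympy as sp

# d/dt x(t)^2 = 2*x*dx/dt, so the chain in the question only closes if dx/dt = 1.
t = sp.symbols('t')
x = sp.Function('x')(t)
print(sp.diff(x**2, t))  # 2*x(t)*Derivative(x(t), t)
```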
|
H: How do I solve the following difference differential equation
While studying a particular physical system, I arrived at the following difference differential equation:
$$\frac{dx_n(t)}{dt} = -g \left\{\sqrt{(n + 1)(n + 2)}x_{n+1}(t) - (2n +1)x_n(t)\right\},$$
where $g$ is a constant and the initial conditions are $x_n(0) = 0$ for $n \geq 0.$
How do I solve this?
Thank you!
My work:
Writing out the first equations, I got:
$$x_0'(t) = -g \{\sqrt{2}x_1(t) - x_0(t)\},$$
$$x_1'(t) = -g \{\sqrt{6}x_2(t) - 3x_1(t)\},$$
$$\vdots$$
Since these first order differential equations are interdependent, my solution to a given one of them will have to be restricted by the next one. That's where I need your help.
AI: Given $x_n(0)=0$. Plug in $t=0$ to get $x_n'(0)=0$. Differentiate the equation, plug in $t=0$ to get $x_n''(0)=0$, and so on: all derivatives of $x_n$ at $t=0$ are zero.
But start with any function $x_0(t)$ with all derivatives $0$ at $0$, then recursively plug in to get solutions for all the other $n$ in terms of it. For example $x_0(t) = \exp(-1/t^2)$ with of course $x_0(0)=0$.
$$
x_{1}(t) = \frac{\left(x_{0}(t)\,g - x'_{0}(t)\right)\sqrt{2}}{2g}
$$
$$
x_{2}(t) = \frac{3\,x_{0}(t)\,g^{2}-4\,x'_{0}(t)\,g+x''_{0}(t)}{2\sqrt{3}\,g^{2}}
$$
$$
x_{3}(t) = \frac{15\,x_{0}(t)\,g^{3}-23\,x'_{0}(t)\,g^{2}+9\,x''_{0}(t)\,g-x'''_{0}(t)}{12\,g^{3}}
$$
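These formulas come from solving the equation for $x_{n+1}$, namely $x_{n+1} = \big((2n+1)x_n - x_n'/g\big)/\sqrt{(n+1)(n+2)}$; a short symbolic sketch (my addition, assuming sympy) regenerates them:

```python
import sympy as sp

t, g = sp.symbols('t g', positive=True)
x0 = sp.Function('x0')(t)

xs = [x0]
for n in range(3):
    nxt = ((2*n + 1) * xs[n] - sp.diff(xs[n], t) / g) / sp.sqrt((n + 1) * (n + 2))
    xs.append(sp.simplify(nxt))
print(xs[1])  # sqrt(2)*(g*x0(t) - Derivative(x0(t), t))/(2*g), up to rearrangement
```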
|
H: Prove: $\|\lambda v\| = |\lambda| \cdot \|v\| $
Prove: $\|\lambda v\| = |\lambda| \cdot \|v\|$ for a vector space $V$ with an inner product, $v \in V$, and $\lambda \in F$.
How do we prove this?
I understand the geometric meaning: if you multiply a vector by a scalar then you scale its length by that factor, and even if you multiply by a negative scalar then you lengthen it by the same $|\lambda|$ but in the other direction. But how do we prove this algebraically?
AI: It comes nearly immediately from the definitions.
Let $\langle \cdot, \cdot \rangle$ be the inner product, which is linear in the first slot and
conjugate linear in the second slot.
Then
\begin{align*}
\|\lambda v\| &= \sqrt{ \langle \lambda v , \lambda v\rangle}\\
&= \sqrt{\lambda \cdot \langle v, \lambda v\rangle }\\
&= \sqrt{\lambda \cdot \overline{\lambda} \cdot \langle v,v\rangle}\\
&= \sqrt{ |\lambda|^2 \cdot \langle v,v\rangle}\\
&= |\lambda| \cdot \sqrt{\langle v,v\rangle}\\
&= |\lambda|\cdot \|v\|
\end{align*}
|
H: boundary of the boundary of a set is empty
I am learning some stuff about the interior, closure and boundary of sets $A\subset\mathbb R^n$ and I am wondering about the following:
1) $\partial\partial A=\partial A$ ?
2) $\partial\partial\partial A=\partial A$ ?
3) $\partial\partial A=\emptyset$ ?
So 1) is false for e.g. $A=\mathbb Q$ with $\partial A=\mathbb R\neq\emptyset=\partial\partial A$
2) and 3) seems kinda hard. I guess 3) is wrong but I don't have a counterexample.
So does anybody have an idea about 2) and 3) ?
Add: A point $x$ is a boundary point of a set $A\subset \mathbb{R}^n$ if every neighborhood of $x$ contains a point of $A$ and $A^c$.
AI: For the second question, when $A = \mathbb{Q}$, $\partial A = \mathbb{R}$ and $\partial\partial A = \emptyset $ and $\partial\partial\partial A = \emptyset$ as well.
For the third question, consider the open interval $A = (-1,1)$; then $\partial A = \{-1,1\}$ and $\partial\partial A = \{-1,1\} \neq \emptyset$.
|
H: Holomorphic function zeros on the circle
I'm learning to use some methods of complex analysis, solving some problems.
Could you give me a hint to solve the following problem?
$f$ is holomorphic in $D^2=\{z: |z|<1\}$ and continuous on $\partial D^2\cup D^2$. Also, there is an open subset $U$ of $\partial D^2$ such that $f|_U=0$. I am to prove $f\equiv 0$.
Unfortunately, I have a lack of techniques, but I think something like the maximum modulus principle would be useful. Perhaps there is some general result?
AI: The maximum modulus principle is indeed useful to prove that under the assumptions (including $U \neq \varnothing$), you have $f \equiv 0$.
An open subset $U$ of the boundary contains the image of an interval $[a, a + 2\pi/n]$ under $t \mapsto e^{it}$, and with $\zeta = e^{2\pi i/n}$, consider
$$g(z) = \prod_{k=0}^{n-1} f(z\cdot \zeta^k).$$
$g$ is (easily seen to be) holomorphic in the unit disk, and continuous on the boundary. Its boundary values are, by construction, very simple.
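To spell out why that finishes the first argument (the details here are my own filling-in of the hint): every point $e^{it}$ of the circle has some rotate $e^{it}\zeta^k$ lying in the arc contained in $U$, so
$$g(e^{it})=\prod_{k=0}^{n-1} f\!\left(e^{it}\zeta^k\right)=0 \quad\text{for all } t.$$
By the maximum modulus principle $g\equiv 0$ on the closed disk, so for each $z\in D^2$ at least one factor $f(z\zeta^k)$ vanishes. Hence $f$ vanishes on an uncountable subset of $D^2$, which must have a limit point inside $D^2$, and the identity theorem gives $f\equiv 0$.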
An alternative way to prove it:
Since $f$ is continuous on $D^2 \cup \partial D^2$, it is the Cauchy integral of its boundary values,
$$f(z) = C_f(z) = \frac{1}{2\pi i} \int_{\lvert \zeta\rvert = 1} \frac{f(\zeta)}{\zeta - z}\, d\zeta.$$
Since $f$ vanishes on $U$ (which we may assume relatively open in $\partial D^2$), the Cauchy integral defines a holomorphic function on $D^2 \cup U \cup \left(\mathbb{C}\setminus \overline{D^2}\right)$ that vanishes identically on a non-discrete set (namely $U$).
|
H: Can it happen that an object will not cast any shadow at all?
I am puzzled by a question in Trigonometry by Gelfand and Saul on p. 57.
Can it happen that an object will not cast any shadow at all? When and where? You may need to know something about astronomy to answer this question.
I have drawn a diagram with the height of the object represented by $h$ and the length of the shadow $l$ ( I don't know how to upload it, sorry).
To calculate the length of the shadow I used
$\cot \theta = \dfrac{l}{h}$
Which rearranging gives
$l = h\cot \theta$
We want $l = 0 $, which I think occurs when $\theta = 90$. I say think because my calculator says $\tan 90$ is a "math error" (my calculator can't calculate $\cot$ directly). Am I correct in saying the shadow is of zero length when $\theta = 90$ ?
Secondly my astronomy is less than it could be. Where and when would the sun create an angle of 90 degrees? I am thinking at noon. Does this occur at any latitude?
AI: What is the object? A shallow spherical cap will only cast a shadow when the sun is near the horizon, although it can be held on its side to make it cast a shadow or lit from below. There are lots of objects like prime numbers and functors that never cast shadows.
$\displaystyle \cot(90)=\lim_{x\to 90^+} \frac{1}{\tan(x)}=\lim_{x\to 90^-} \frac{1}{\tan(x)}=0$
At equinox the sun is at the zenith (perpendicularly overhead) at the point on the equator where it is noon. At any instant there is a point somewhere between the tropics of Cancer and Capricorn where the sun is at zenith.
|
H: Invertibility of a linear operator on a Hilbert space.
Let $H$ be an infinite dimensional Hilbert space over $\mathbb C$, $T$ be a continuous linear operator of $H$, $r(T)=\sup_{||x||=1}|(Tx|x)|$ be the numerical radius of $T$, and $z\in \mathbb C$, such that $|z|<1$.
Assume that $r(T)\leqslant 1$.
Clearly $\ker (I-zT)=\{0\}$. So, if I could show that $T$ is compact then it would follow that $I-zT$ is invertible. So my questions are :
Is there an example of $T$ non compact ?
Is there an example of $T$ such that $I-zT$ is non surjective?
AI: The answer to the first question is yes, any $T = \lambda\cdot I$ with $0 < \lvert\lambda \rvert < 1$ is a non-compact operator satisfying the requirements.
The answer to the second question is no, all such $I - z\cdot T$ are invertible.
First, by
$$\lvert \langle (I - z\cdot T)x \mid x\rangle\rvert = \lvert \lVert x\rVert^2 - z \langle Tx\mid x\rangle\rvert \geqslant (1 - \lvert z\rvert r(T)) > 0$$
for $\lVert x \rVert = 1$, it follows that $\lVert (I - z\cdot T) x\rVert \geqslant (1 - \lvert z\rvert r(T))\lVert x\rVert$, hence $\mathcal{R}(I - z\cdot T)$ is closed.
So either it is all of $H$, or $\mathcal{R}(I - z\cdot T)^\perp$ is nontrivial.
Suppose $y \in \mathcal{R}(I - z\cdot T)^\perp$. then, by the above computation,
$$0 = \lvert \langle (I - z\cdot T)y\mid y\rangle\rvert \geqslant \lVert y\rVert^2 (1 - \lvert z\rvert r(T))$$
and the second factor on the right is strictly positive, thus $\lVert y\rVert = 0$.
|
H: Division between two numbers of the form $u + v\sqrt 2$
I need to do a division $a/b$, where $a$ and $b$ are numbers of the form $u + v\sqrt 2$, and $u$ and $v$ are integers (I'll write $a = u + v\sqrt 2$ and $b = u' + v'\sqrt 2$).
What is an effective way of computing that division? That is, how can I compute that without considering the infinite decimal expansion of $\sqrt 2$?
I've heard I should know something about abstract algebra, ring theory, Euclidean domains, but, although I searched for some information, I'm still too far from those fields of mathematics, which I am going to study in some years.
The only thing I know about $a/b$ is that the result should always be of the form $p + q\sqrt 2$, where $p$ and $q$ are rational: am I wrong?
And finally, is it necessary to write $a/b = a\frac{u'-v'\sqrt 2}{(u'-v'\sqrt 2)(u'+v'\sqrt 2)}=a\,\frac{u'-v'\sqrt 2}{u'^2-2v'^2}$, so as to get rid of the $\sqrt 2$ in the denominator? Is it right that that irrational quantity may compromise the effectiveness of the division operation?
AI: $$\left( u'+v'\sqrt{2} \right)\left( u'-v'\sqrt{2} \right)=u'^2-2v'^2$$
Thus
$$\frac{a}{b}=\frac{\left(u+v\sqrt{2}\right) \left( u'-v'\sqrt{2} \right)}{u'^2-2v'^2}$$
This way, the top has the form $m+n\sqrt{2}$ with $m,n$ integers, and the denominator is another integer $l$. Thus, you get $p=\frac{m}{l}$ and $q=\frac{n}{l}$.
Importantly, for integers, $u'^2-2v'^2=0$ if and only if $u'=v'=0$ (because $\sqrt2$ is irrational), so the denominator is nonzero whenever $b\neq 0$.
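For completeness, here is a minimal sketch of the computation in Python (exact rational arithmetic via fractions; the function divide and its name are my own, not a standard library routine):

    from fractions import Fraction

    def divide(u, v, up, vp):
        # (u + v*sqrt(2)) / (up + vp*sqrt(2)) = p + q*sqrt(2), exactly.
        denom = up*up - 2*vp*vp   # nonzero unless up = vp = 0
        m = u*up - 2*v*vp         # rational part after multiplying by the conjugate
        n = v*up - u*vp           # coefficient of sqrt(2)
        return Fraction(m, denom), Fraction(n, denom)

    print(divide(1, 1, 3, 1))     # (1+sqrt 2)/(3+sqrt 2) = 1/7 + (2/7) sqrt 2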
|
H: Minimal polynomial over an extension field divides the minimal polynomial over the base field
I need help proving this theorem:
Given the field extension: $\mathbf{K} \subseteq \mathbf{L}$, for $\alpha \in \mathbf{L}$ and $g(x) \in \mathbf{K}[x]$, $\alpha$'s minimal polynomial over $K$,
and $f(x) \in \mathbf{L}[x]$, $\alpha$'s minimal polynomial over $L$,
then the degree of $g$ is at least the degree of $f$, and $f(x)$ divides $g(x)$.
AI: Because $\mathbf{K}\subseteq\mathbf{L}$, you also have $\mathbf{K}[x]\subseteq\mathbf{L}[x]$, so that $g\in \mathbf{L}[x]$ and $g(\alpha)=0$, and therefore (because $f$ is the minimal polynomial of $\alpha$ over $\mathbf{L}$) we must have $f\mid g$, and hence also $\deg(f)\leq\deg(g)$.
|
H: Partial fractions for $\frac{t+1}{2\sqrt{t}(t-1)}$
How do I use partial fractions for the expression $\dfrac{t+1}{2\sqrt{t}(t-1)}$? Because I have to find the integral of it... Thank you
AI: Your given integrand isn't a rational function yet; to use partial fractions, we must first obtain a ratio of polynomials. We can do this by substituting $u = \sqrt t$.
$$u = \sqrt t \implies t = u^2 \implies dt = 2u\,du$$
Now, substituting the above into our original integral gives us:
$$\int \frac{t+1}{2\sqrt{t}(t-1)}\,dt = \int \frac{u^2 + 1}{2u(u^2 - 1)}\,(2u\,du) =\int \dfrac {u^2 + 1}{u^2 - 1}\,du$$
Now, polynomial division (since $u^2+1=(u^2-1)+2$), followed by "partial fractions", gives us: $$\int \left(1 + \dfrac {2}{u^2 - 1}\right)\,du = \int \left(1 + \dfrac {2}{(u-1)(u+1)}\right)\,du = \int \left(1 + \dfrac{A}{u - 1} + \dfrac B{u + 1} \right) \,du$$
Now we solve for $A, B$:
$A(u+1) + B(u - 1) = 2 \iff Au + A + Bu - B = 2 \iff (A + B)u + (A - B) = 2$
$A + B = 0$
$A - B = 2$
Adding the equations gives us $2A = 2 \iff A = 1 \implies B = -1$ and we'll have a result of the form $$u + A\ln|u-1| + B\ln |u+1| + \text{Constant}$$ $$ = \sqrt t +\ln|\sqrt t - 1| - \ln |\sqrt t + 1| + \text{Constant} $$
$$ = \sqrt t+ \ln\left|\dfrac{\sqrt t - 1}{\sqrt t+1}\right| + C$$
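If you want to double-check the antiderivative, here is a quick verification of my own with sympy (restricting to $t>1$ so the logarithms need no absolute values):

    import sympy as sp

    t = sp.symbols('t', positive=True)
    integrand = (t + 1) / (2*sp.sqrt(t)*(t - 1))
    F = sp.sqrt(t) + sp.log(sp.sqrt(t) - 1) - sp.log(sp.sqrt(t) + 1)
    # F' - integrand should simplify to 0.
    print(sp.simplify(sp.diff(F, t) - integrand))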
|
H: Restriction of a lower semi-continuous functional again lower semi-continuous?
Let $F: [a,b]\times \mathbb R \times \mathbb R \rightarrow \mathbb R$ be continuous, $J(u) = \int_{[a,b]} F(x, u, u') dx$ be a functional over $W^{k,p}([a,b])$. We assume that for any uniformly convergent sequence $(u_{r})_{r\in\mathbb N}$ of equi-Lipschitzian functions, $ \lim_{r\rightarrow \infty}u_{r} = u_{0}$, we have:
$$J(u_{0}) \le \liminf_{r\rightarrow\infty} J(u_{r}) $$
My question is: for $I \subset [a,b]$, is the restriction $J_{|I}(u) := \int_{I} F(x, u, u') dx$ lower semi-continuous in the same sense that $J$ is? If yes, why? If not, is this possible under additional assumptions?
Thank you very much for your time and energy!
AI: I assume $I$ means a subinterval $[c,d]$. Let $u_r:I\to \mathbb R$ be a uniformly convergent sequence of $L$-Lipschitz functions. Extend them to $[a,b]$ via
$$U_r(x)=\begin{cases}u_r(c), \quad &x<c \\ u_r(x), &x\in [c,d] \\ u_r(d), &x>d\end{cases}\tag1$$
Clearly, $U_r$ are equi-Lipschitz and converge uniformly. Thus, $$J( U_0)\le \liminf J(U_r)\tag2$$ The contribution of $[a,c]$ to $J$ is $ \int_a^c F(x,u_r(c),0)\,dx$, which converges to $ \int_a^c F(x,u_0(c),0)\,dx$ due to the continuity of $F$. Similarly for the contribution of $[d,b]$. Subtract them from (2) to conclude with the desired result.
If $I$ is a general closed subset of $[a,b]$, you can extend $u_r$ in the gaps between components of $I$ by linear interpolation. This preserves $L$-Lipschitzness and uniform convergence of the sequence. The derivatives of the added linear pieces will no longer be $0$, but they will converge uniformly in all gaps of size $>\delta$, for any fixed $\delta$. The gaps of size $<\delta$ contribute little.
|
H: If $\alpha$ is an ordinal, proving that $\alpha\cup\{\alpha\}$ is an ordinal.
I refer to pg.4 of this article.
Assuming $\alpha$ is an ordinal, we have to prove $\alpha\cup \{\alpha\}$ or $\alpha +1$ is an ordinal.
Isn't this obvious from the construction of ordinals? As per the construction given in the article, for any ordinal $\beta$, the next ordinal is $\beta\cup\{\beta\}$. So is any such explicit proof required, as is given in the article?
I quote: "$\alpha +1$ is transitive, for if $y\in\alpha +1$ then either $y=\alpha$ and $\alpha\subset \alpha +1$, or $y\in\alpha$." I don't understand how this follows from the properties of ordinals given on pg.3. I feel this is a proof of the fact that $\alpha +1$ is transitive by assuming that $\alpha +1$ is transitive. One may refer to definition 7 on pg.3
EDIT: Could someone also kindly outline the sufficient conditions for proving that something is an ordinal? The article suggests transitivity and strict ordering are sufficient conditions, or maybe I'm reading it wrong.
Thanks in advance!
AI: You need to discern between "obvious equivalence" and definitions. The definition of an ordinal is a set which is transitive and well-ordered by $\in$.
If $\alpha$ is assumed to be an ordinal this means that it is a transitive set and well-ordered by $\in$. Now you want to prove that $\alpha+1=\alpha\cup\{\alpha\}$ also satisfies the same properties.
This is a very simple proof, yes. But it is required regardless. Even more so because this is supposed to be a paper detailing the construction of ordinals for people who are less familiar with them.
To the second question, we assume that $\alpha$ is transitive. Therefore if $x\in\alpha\cup\{\alpha\}$, either $x\in\alpha$ and therefore $x\subseteq\alpha\subseteq\alpha\cup\{\alpha\}$, or $x=\alpha$ and then trivially $x\subseteq\alpha\cup\{\alpha\}$. Therefore $\alpha+1$ is transitive whenever $\alpha$ is.
As for the equivalent conditions for being an ordinal:
$x$ is an ordinal if $x$ is a transitive set which is well-ordered by $\in$. Assuming the axiom of regularity it suffices to require $\in$ to be a linear order instead.
$x$ is an ordinal if $x$ is a transitive set, and all its members are transitive sets. This definition requires the axiom of regularity to hold as well.
|
H: Unique root to a function
Let $f:[a,\infty)\rightarrow \mathbb{R}, \ \ f\in C^2[a,\infty)$ such that $$ \\ f(a)>0 , \ \ f'(a)<0, \ \ f''(x)\leq 0 \ \ \forall x\in [a,\infty)$$ Prove that
$$ \exists !~t\in (a,\infty):f(t)=0$$
AI: From the mean value theorem, $f'(x)-f'(a)=(x-a)f''(\xi)$ for some $\xi\in(a,x)$, hence $f'(x)\le f'(a)<0$ for all $x\in(a,\infty)$ because $x-a\ge0$ and $f''(\xi)\le 0$. As a consequence $\frac{f'(x)}{f'(a)}\ge1$ for all $x\ge a$.
Let $x=a-\frac{f(a)}{f'(a)}$. Then $x>a$ and $f(x)-f(a)=(x-a)f'(\xi)=-f(a)\frac{f'(\xi)}{f'(a)}\le -f(a)$ because $f(a)>0$ and $\frac{f'(\xi)}{f'(a)}\ge1$.
In other words, $f(x)\le 0<f(a)$ and
by the intermediate value theorem, there exists $t\in[a,x]$ with $f(t)=0$.
If there were two solutions $f(t_1)=f(t_2)=0$ with $t_1<t_2$, then by Rolle $f'(x)=0$ for some $x\in(t_1,t_2)$, contradicting $f'(x)<0$. Hence the solution is unique.
|
H: What is the definition of $H^{-k}(\mathbb{R}^n)$ and its norm?
What is the definition of $H^{-k}(\mathbb{R}^n)$ and its norm? How can I understand the fact $$\|f\|_{H^{-k}(\mathbb{R}^n)}=\|(I-\triangle)^{-k}f\|_{H^{k}(\mathbb{R}^n)}.$$
AI: By definition, $\|f,H^s\| =\|(1+|\xi|^2)^{s/2}\hat f(\xi),L_2\| $, where $\hat f$ is a Fourier transform of $f$.
In addition, $\hat{\Delta f} = -|\xi|^2\hat f$ (up to a constant factor depending on your definition of the Fourier transform), so if $k\in \mathbb N$
$$\|(I-\Delta)^k g,H^{-k}\|=\|(1+|\xi|^2)^{-k/2}\times(1+|\xi|^2)^k\hat g(\xi),L_2 \|$$
$$=\|(1+|\xi|^2)^{k/2} \hat g(\xi),L_2 \|=\|g,H^k\|,$$ which answers your second question.
|
H: Prove/disprove the following theories regarding operators in inner product spaces
Prove/disprove:
(I) Let $V$ be a vector space with an inner product over a field $F$. Given an operator $T:V\to V$, which is invertible. Is $T^{*}$ invertible?
(II) Let $v_1, ..., v_k$ be eigenvectors of $T$ corresponding to different eigenvalues; is $\{v_1,...,v_k\}$ an orthogonal set? (In the same vector space as in (I).)
AI: 1) Yes if $T$ is invertible with inverse $T^{-1}$ then $T^*$ is invertible with inverse $(T^{-1})^*$:
$$TT^{-1}=\mathrm{id}\Rightarrow (T^{-1})^*T^*=\mathrm{id}$$
2) It's easy to construct a counterexample. For example, in $\mathbb R^2$ take any two linearly independent, non-orthogonal vectors $v_1,v_2$ and define $T$ by $T v_1=v_1$ and $Tv_2=2v_2$...
|
H: About irreducibility of a particular class of bihomogeneous polynomials
Is the polynomial
$$
x_0^2y_0+x_0x_1y_1+x_1^2y_2+x_1x_2y_3+x_2^2y_4+x_0x_2y_5 \in \mathbb K[x_i, y_j]
$$
reducible over an algebrically closed field $\mathbb K$?
I've noticed that the polynomial is bi-homogeneous of degree $(2,1)$ and I think that the question can be stated in the following more general way: fix an integer $d \ge 1$ and consider the space of homogeneous polynomials of degree $d$ in $x_0,x_1,x_2$. This is a vector space (over $\mathbb K$) of dimension $N$ (it doesn't matter, anyway I think $N=\binom{d+2}{2}$). Order somehow the polynomials $M_i(x_0,x_1,x_2)$ of a base of this space and then consider
$$
p(x_i,y_j) = \sum_{i=0}^{N-1} M_i(x_0,x_1,x_2)y_i
$$
Is $p$ reducible? I do not know how to begin... Do you know any useful tricks to prove reducibility/irreducibility in this case?
Thanks.
AI: We claim that your polynomial is irreducible at least when the characteristic is not $2$. To see this consider your polynomial as a quadratic polynomial in $x_0$ with coefficients in the UFD $R:= k[x_1,x_2,y_0,y_1,\ldots,y_5]$. Your polynomial is reducible iff it admits a root in $R$. So to show it does not admit a root in $R$, it is enough to show that the discriminant $\Delta$ is not a perfect square in $R$. I compute the discriminant to be
$$\begin{eqnarray*} \Delta &=& (x_1y_1 + x_2y_5)^2 - 4y_0(x_1^2 y_2 + x_1x_2y_3 + x_2^2y_4) \\
&=& (y_1^2 - 4y_0y_2)x_1^2 + (\text{lower terms}).\end{eqnarray*}$$
Now if the discriminant is a perfect square I can write $\Delta = (ax_1 + b)^2$ where $a,b$ are polynomials in $k[x_2,y_0,\ldots,y_5]$. Comparing coefficients, we get that
$$a^2 = (y_1^2 - 4y_0y_2).$$
But this results in a contradiction because the R.H.S. is irreducible by Eisenstein with the prime element $y_0 \in k[y_0,y_2]$. Thus your original polynomial must be irreducible.
|
H: How to solve the following problems with exponent?
If $9^{x+2}= 240+9^x$ then x= ?
$10^x = 64$ what is the value of $10^{(x/2)+1} = ?$
$x/x^{1.5} = 8*x^{-1}$ and x > 0 , then x = ?
$x^{-2} = 64$, then $x^{1/3} + x^0$ = ?
$4^x - 4^{x-1} = 24 $ then $(2x)^x = ?$
AI: $1: 9^x(9^2-1)=240\implies 9^x=3=9^{\frac12}\implies x=\frac12$
$2: 10^x=64\implies 10^{\frac x2+1}=10\cdot (10^x)^{\frac12}=10\cdot (64)^{\frac12}$
$3:\frac x{x^{1.5}}=8\cdot x^{-1}\implies x^{1+1-1.5}=8\implies x^{\frac12}=8\implies x=(8)^2$
$4: x^{-2}=64\implies x^2=(x^{-2})^{-1}=(64)^{-1}=\frac1{64}\implies x=\pm\frac18$
So, $x^{\frac13}+x^0=\left(\frac18\right)^{\frac13}+1=1+\frac1{8^{\frac13}}$
Do you know about the cube roots of $1?$
$5:4^{x-1}(4-1)=24\implies (2^2)^{(x-1)}=8=2^3\implies 2(x-1)=3$
Find $x$ and proceed
The formulas used:
$a^m\cdot a^n=a^{m+n}$
$a^m=a^n\implies m=n$ if $a\ne0,\pm1$
$a^{-m}=\frac1{a^m}$
$a^0=1$ if $a\ne0$
|
H: What texts do you recommend to study calculus?
I've studied calculus for 2 years from an Arabic text. It was a great text, supported with a huge amount of examples and exercises. Now I find it a good step to study the material in English, as my future studies will be in English, so I am searching for a text which covers what I have learned before, to pick up the English terminology and notation and so on, and then go further in the subject.
So, do you know a good text which starts from the beginnings of calculus - limits and derivatives - and then goes through applications of derivatives and integrals and their application to areas and volumes? And which also covers series and further material in the subject?
My friends have recommended "early calculus transcendentals" by Ron Larson, and others recommended "essential calculus" by the same author.
How do you find those texts? Are there any other good texts?
Note that I am not looking for a text which goes slowly through the beginnings, as I have already studied the subject before; but if the text goes slowly through the basics and also covers further material, no problem with that.
Regards ...
AI: I think you'd do very well with Michael Spivak's Calculus. It's thorough, covers the basics, but so much more than that. His text is more rigorous and theoretical than are many texts used for introductory calculus. It is used in calculus courses, particularly those with a pure mathematics emphasis, at many universities. Spivak has also written The Hitchhiker's Guide to Calculus which reads more like a novel, and gives an intuitive understanding of Calculus. (He has said [somewhere?] that he has used it Hitchhiker as "supplemental reading" in the Calculus courses he's taught.)
An alternative but excellent and even more challenging text(s) would be to study Apostol's Calculus, Volumes I and II. This is more in line with the level of study for which I believe you are prepared.
I'm familiar with Stewart's Calculus - Early Transcendentals, and that will do just fine if your primary aim is to acquire proficiency in studying calculus in English. I just don't think it will challenge you as much as will Spivak's Calculus. Stewart also authors Essential Calculus, but it's just stripped down and not as enriching as are his other texts.
Remark: Given your previous coverage of Calculus and your study of Enderton and Dummit and Foote, I'd really suggest taking on Apostol's work. If not Apostol, go for Spivak. For both, you'll find lecture notes and syllabi to use to accompany the texts by Googling "Apostol, Calculus: edu" and checking out promising "hits".
|
H: How do I prove that $\int_{0}^{1} \frac{1}{\log(x)}dx$ diverges?
This is an exercise from a book I'm using to study... The book gives a hint: compare to $f(x)=\frac{1}{1-x}$. However, I was not able to see how I could compare these two functions. I tried changing the variable $x$ to $(1-u)$ in the integral, but it didn't solve my problem. I also tried using the Taylor expansion of $\frac{1}{\log(x)}$, but it was not helpful either...
AI: Better use $$ 1-\frac 1 x\leq \log x$$
To clarify, note that the above gives $$\frac{1}{{\log x}} \leqslant \frac{x}{{x - 1}} \Rightarrow \int\limits_0^t {\frac{{dx}}{{\log x}}} \leqslant \int\limits_0^t {dx} + \int\limits_0^t {\frac{{dx}}{{x - 1}}} \to - \infty \;,\; t\to 1^{-}$$
Added: The inequality above is really fundamental, following from the definition of the logarithm. Note that $$1-\frac 1x=\int_1^x\frac{dt}{t^2}\leq \int_1^x\frac{dt}t=\log x\leq \int_1^x 1 dt=x-1$$
give the widely used tight inequalities $$1-\frac 1 x\leq \log x\leq x-1$$ which are used to prove, for example, that $$\lim_{x\to 0}\frac{\log(1+x)}x=1$$
|
H: Help understanding a property of modulus
If it is given that $|b|>1$ and $|ab|=1$
can someone please explain what should be the value of $|a|$ ?
AI: One of basic properties of $|\cdot|$ is that $$|ab|=|a||b|,$$
so we can write $$1=|ab|=|a||b|.$$ Clearly $b\ne 0$. Then we divide both parts by $|b|$:
$$|a|=\frac{1}{|b|}.$$
We know that $|b|>1$, then (divide by $|b|$) $$\frac{1}{|b|}<1,$$
hence $$|a|<1.$$
|
H: What is the height of a regular polygon?
I have three small circles forming a pyramid. I would like to centre that group in a square but have spent a couple of hours trying to calculate the height of the pyramid. I just can't seem to get them vertically centred.
Given a square, a large circle filling the square and then three smaller circles forming a pyramid, what is the height of the group, from the bottom of the lowest circle to the top of the topmost circle?
https://googledrive.com/host/0BwFQiTKfux0qY1Y2d1hRdndtSEk/so_question.svg
Calc radius of each circle of n circles in a circle: www.had2know.com/academics/inner-circular-ring-radii-formulas-calculator.html
apothem: www.mathopenref.com/apothem.html
Python code I tried to get working
AI: If $r$ is the radius of the blue circles, then the width of the blue figure is $4r$ and the height is $\left(2+\sqrt 3\right)r$. The height of an equilateral triangle plays a role here.
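To see where the height comes from (a short derivation of my own): the three centers form an equilateral triangle of side $2r$, whose height is $\sqrt3\,r$; adding one radius below the two bottom centers and one above the top center gives
$$r+\sqrt3\,r+r=\left(2+\sqrt3\right)r.$$
So to center the group vertically in the square of side $4r$, offset it by half the leftover space, $\frac12\bigl(4-(2+\sqrt3)\bigr)r=\frac12\left(2-\sqrt3\right)r$.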
By the way, if you rotate the blue pyramid by $30^\circ$, you can grow them a bit bigger.
|
H: Is $\lim_{t\to 0}\frac{e^{xt}-1}{x}=0$ uniformly on $x>0$?
I would like to determine whether the following limit is uniform on $x\in (0,\infty)$: $$\lim_{t\to 0}\frac{e^{xt}-1}{x}.$$
By "uniform" here, we mean $\exists \delta_\epsilon>0$ such that $\frac{e^{xt}-1}{x}<\epsilon$ for all $0<t<\delta_\epsilon$, for all $x>0$.
From the Taylor remainder we can see that there is a point $t^*$ between $0$ and $t$ such that $\frac{e^{xt}-1}{x}=\frac{xt + x^2 (t^*)^2}{x}=t + x(t^*)^2$. This might suggest that the limit is not uniform, but of course we don't know the manner in which $t^*\to 0$. Investigating the derivative with respect to $t$, it is $e^{xt}$. Again we might try the Taylor estimate here, but that doesn't seem to be getting me anywhere.
Any ideas?
AI: If the convergence were uniform, the limit function would be $0$ by pointwise convergence. So there would be some $\delta>0$ such that $t< \delta \implies \sup_{x \in \mathbb{R}^+}\left|\frac{e^{tx}-1}{x}\right|<1$. But for any fixed $t>0$, $\frac{e^{tx}-1}{x}\to\infty$ as $x\to\infty$, so the supremum is infinite. That is clearly a contradiction.
|
H: Outer measure discontinuous from below
I was trying to find an example of an outer Measure which is not continuous from below.
These are the definitions I use
A function $\mu^\ast: \mathcal{P}(X)\to [0,\infty]$ is called an outer measure on $X$
if it fulfills
$\mu^\ast(\emptyset)=0$
$\mu^\ast\Big( \bigcup_{j=1}^\infty A_j\Big) \leq \sum_{j=1}^\infty \mu^\ast(A_j)$
And an outer measure is continuous from below when, for every sequence
$(A_j)_{j\in \mathbb{N}}$ with $A_j\subset A_{j+1}$ for all $j$,
the equality
$$ \mu^\ast \Big( \bigcup_{j=1}^\infty A_j\Big)= \lim_{j\to \infty} \mu^\ast (A_j)$$
holds.
Some results which might be helpful
All measures are continuous from below
All metric outer measures are continuous from below
So I search for an outer measure which isn't continuous from below.
AI: Let
$$\mu^\ast(A) = \begin{cases}
0\quad\;,\; A = \varnothing\\
1\quad\;,\; A \text{ is finite and nonempty}\\
\infty\quad, \text{ otherwise}\end{cases}$$
on an infinite set $X$.
If $\bigcup A_j$ is infinite, either at least one $A_j$ is infinite, or infinitely many $A_j$ are nonempty, so the right-hand side below is infinite; if $\bigcup A_j$ is finite and nonempty, some $A_j$ is nonempty and the right-hand side is at least $1$. In either case
$$\mu^\ast\left(\bigcup A_j\right) \leqslant \sum \mu^\ast(A_j).$$
Let $A_j = \{x_k\colon 1 \leqslant k \leqslant j\}$ for a sequence of distinct $x_k$, then
$$\mu^\ast(A_j) = 1$$
for all $j$, but $\mu^\ast\left(\bigcup A_j\right) = \infty$.
|
H: Showing that $U(2^n)$ is not cyclic for $n \ge 3$
I'd appreciate a hint on how to solve the following problem:
Prove that $U(2^n) (n \ge 3)$ is not cyclic. ($U(m)$ is the group of positive integers $j \le m$ such that $\gcd(j,m)=1$, under multiplication $\mbox{mod}\,\,m$)
Since elements in $U(2^n)$ are coprime to $2^n$, $U(2^n)=\{1,3,5,...,2^n-3, 2^n-1\}$. I tried taking an arbitrary odd number, $1 \le 2k+1 \le 2^n$, and showing that $(2k+1)^{2^n} \not\equiv 1 \,\,\mbox{mod} \,\,2^n$, which would show that no element has order $2^n$ and therefore cannot generate $U(2^n)$ so that $U(2^n)$ is not cyclic.
I used the binomial theorem to expand $(2k+1)^{2^n}$ in general terms, and wanted to show that there is some coefficient that is not divisible by $2^n$, so that $(2k+1)^{2^n}\, \mbox{mod} \,\,2^n \not\equiv 1$. Problem is, the coefficients other than one are divisible by $2^n$, as far as I can see. I hope I'm not missing something obvious.
Outside of this, I'm afraid I'm out of ideas. How else can I go about this?
Thanks.
AI: The first issue here is that the order of $U(2^n)$ is not $2^n$ - in fact, it is $\phi(2^n)=2^{n-1}$, where $\phi$ is Euler's totient function.
As a hint: note that the element $2^n-1$ is of order 2, since
$$
(2^n-1)^2=2^{2n}-2^{n+1}+1\equiv1\pmod{2^n}.
$$
Also,
$$
(2^{n-1}+1)^2=2^{2n-2}+2^n+1\equiv1\pmod{2^n}
$$
as long as $2n-2\geq n$, which holds since $n\geq 3$. (The condition $n\geq 3$ also guarantees that $2^{n-1}+1\neq 2^n-1$.)
So, you have two (distinct) elements of order 2. Can that happen if $U(2^n)$ is cyclic?
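(Filling in the punchline, in case it helps: no. A cyclic group of order $m$ has exactly $\gcd(2,m)$ solutions of $x^2=1$, hence at most one element of order $2$. Since $2^n-1$ and $2^{n-1}+1$ are distinct elements of order $2$ when $n\geq 3$, the group $U(2^n)$ cannot be cyclic.)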
|
H: Prime ideals and epimorphism
Let $\phi: R \rightarrow S$ be a ring epimorphism.
Show that if $P\triangleleft S$ is a prime ideal (of S), then $\phi^{-1}(P)\triangleleft R$ is a prime ideal (of R).
Can someone help me with that?
Thanks in advance
AI: Expanding on @Daniel Fischer's comment:
$P\subseteq S$ is prime iff $S/P$ is a domain.
Furthermore if $\phi:R\to S$ is epi then $S\cong R/\ker\phi$.
Now $S/P \cong R/(\ker\phi+\phi^{-1}(P))$, but since $0\in P$ and $\ker\phi = \phi^{-1}(0)$, $\ker\phi\subseteq\phi^{-1}(P)$, so $R/(\ker\phi+\phi^{-1}(P)) = R/\phi^{-1}(P)$, which is a domain (because it is isomorphic to a domain), and so $\phi^{-1}(P)$ is prime.
|
H: Quadratic ternary forms
What is the difference between solubility, local solubility and global solubility when it comes to solving quadratic ternary normal forms, i.e a equation of the form $ax^2 + by^2 + cz^2 =0$?
Thanks in advance.
AI: Global solvability means solvable over a global field (such as a number field or function field), and local solvability means over a local field (such as a finite extension of a $p$-adic field). The theorem of Hasse-Minkowski for quadratic forms gives a relation: If $K$
is a global field and $f(x)=a_1x_1^2+\cdots+a_nx_n^2$ is a polynomial with coefficients in $K^{\ast}$, then $f(x)=0$ has a non-trivial solution over $K$ if and only if it has a non-trivial solution in every local field arising as completion of $K$ with respect to the absolute valuation. We can take $K=\mathbf{Q}$, then solvability over $\mathbf{Q}$ is equivalent to solvability over $\mathbf{R}$ and all $\mathbf{Q}_p$. For example, the ternary quadratic form $Q(x)=5x^2+7y^2-13z^2$ has a non-trivial (global) rational solution, since it has a local solution over $\mathbf{Q}_p$ for every prime $p$ (and a real solution, of course).
On the other hand, we could have seen this much more easily (by using Lagrange, or by direct verification, e.g., $(x,y,z)=(3,1,2)$).
|
H: Geometric progression — Sum of terms, sum of terms' cubes, sum of terms' squares
Consider the infinite geometric progression $a_1, a_2, a_3, \dots$. Given the sum of its terms, as well as the sum of the terms' cubes
$a_1 + a_2 + a_3 + \cdots = 3\\
a_1^3 + a_2^3 + a_3^3 + \cdots = \frac{108}{13}$
find the sum of the terms' squares
$a_1^2 + a_2^2 + a_3^2 + \cdots$
This is what I have so far:
$a_1 + a_2 + a_3 + \cdots = 3\\
\Longrightarrow 1 + q + q^2 + \cdots = \frac{3}{a_1}$
$a_1^3 + a_2^3 + a_3^3 + \cdots = \frac{108}{13}\\
\Longrightarrow 1 + q^3 + q^6 + \cdots = \frac{108}{13a_1^3}$
$a_1^2 + a_2^2 + a_3^2 + \cdots\\
= \frac{a_1^3}{a_1} + \frac{a_2^3}{a_2} + \frac{a_3^3}{a_3} + \cdots\\
= \frac{a_1^3}{a_1} \left(1 + \frac{q^3}{q} + \frac{q^6}{q^2} + \cdots\right)$
Any help will be much appreciated.
AI: HINT:
Let the first term be $a$ and the common ratio be $r$
For convergence we need $|r|<1$
So, the sum of the first series $\sum_{0\le n<\infty}a\cdot r^n=\frac a{1-r}=3\ \ \ \ (1)$
So, the sum of the second series $\sum_{0\le n<\infty}a^3\cdot (r^3)^n=\frac {a^3}{1-r^3}=\frac{108}{13}\ \ \ \ (2) $
Cube the first & divide by $(2)$ to get $r$ and then get $a$ from $(1)$
We need $$\sum_{0\le n<\infty}a^2\cdot (r^2)^n=\frac{a^2}{1-r^2}$$
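Carrying the hint through (my own computation): cubing $(1)$ and dividing by $(2)$,
$$\frac{1-r^3}{(1-r)^3}=\frac{1+r+r^2}{(1-r)^2}=\frac{27\cdot 13}{108}=\frac{13}{4},$$
which gives $9r^2-30r+9=0$, so $r=3$ or $r=\frac13$; convergence forces $r=\frac13$, and then $(1)$ gives $a=2$. Hence
$$\frac{a^2}{1-r^2}=\frac{4}{1-\frac19}=\frac92.$$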
|
H: Proof by induction that the sum of terms is integer
I'm having some trouble in order to solve this induction proof.
Prove that $\forall{n} \in \mathbb{N}$ the number $\frac{1}{5}n^5+\frac{1}{3}n^3 + \frac{7}{15}n$ is an integer.
I've tried proving this by induction, but I've not succeed so far. What I did was:
Let's assume that $\frac{1}{5}n^5+\frac{1}{3}n^3 + \frac{7}{15}n = k$, with $k \in \mathbb{Z}$. So, by induction,
$n=1$
$\frac{1}{5} + \frac{1}{3} + \frac{7}{15} = 1$ and $1 \in \mathbb{Z}$.
Inductive step
I want to prove that if $\frac{1}{5}n^5+\frac{1}{3}n^3 + \frac{7}{15}n = k$, with $k \in \mathbb{Z}$ then $\frac{1}{5}(n+1)^5+\frac{1}{3}(n+1)^3 + \frac{7}{15}(n+1) = l$, with $l \in \mathbb{Z}$
And then I'm stuck. I've tried developing every binomial but, for example, for $\frac{1}{5}(n+1)^5$ I get a $\frac{1}{5}n^4$ that is not in the inductive hypothesis, so therefore, I cannot get rid of it and it's not an integer, so I cannot conclude that $l \in \mathbb{Z}$.
Any help or ideas from where I can follow? Thanks in advance!
AI: You want to try to write this expression in terms of the previous one. So, start by expanding out the terms:
$$
\begin{align*}
\frac{1}{5}(n+1)^5&=\frac{1}{5}\sum_{k=0}^{5}\binom{5}{k}n^k\\
&=\frac{1}{5}(n^5+5n^4+10n^3+10n^2+5n+1)\\
&=\frac{1}{5}n^5+n^4+2n^3+2n^2+n+\frac{1}{5}
\end{align*}
$$
Do the same thing for $(n+1)^3$ and $(n+1)$. Now, you can pick out the terms corresponding to the inductive hypothesis; all that's left is to show that the other remaining terms ALSO give you an integer.
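Written out, the inductive step comes down to (my own completion of the hint):
$$\frac{(n+1)^5}{5}+\frac{(n+1)^3}{3}+\frac{7(n+1)}{15}
=\underbrace{\frac{n^5}{5}+\frac{n^3}{3}+\frac{7n}{15}}_{=\,k}
+\left(n^4+2n^3+2n^2+n\right)+\left(n^2+n\right)+\underbrace{\tfrac15+\tfrac13+\tfrac7{15}}_{=\,1},$$
which is $k$ plus an integer.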
|
H: Programming language to learn mathematics
I am a computer scientist and have been programming for 3 years. I am currently in my 4th semester and I struggle with some math classes, not because they are extremely difficult, but because they are taught in an extremely boring way and I get no feedback at all.
Because I am totally in love with programming and I program every day, I thought it must be possible to write a universal math programming language.
So that I could do something like this
Proof ( (A ∩ B) ∪ C = A ∩ (B ∪ C) ⇐⇒ C ⊆ A ) => {
// do the proof
}
And then it would tell me if it would be correct.
Does something like this exist?
I was looking at http://www.wolfram.com/mathematica/ but I am not sure if this is what I actually want.
Another example would be:
For example if I have to prove ForEach x element_of N; x|7; fib(x)|7. Then I could write let x = 7; fib(x) equals 13 => result (proof_is_wrong)
AI: There is something like this called Coq (the website is slow sometimes, unfortunately). You can write proofs in it and check them. Another software is Isabelle. You can write a wide range of proofs in both, and there are others as well, but I suggest you start with these (the keyword to search is "proof assistant").
For instance in Isabelle, their respository of proofs shows the rank-nullity theorem in linear algebra and Fermat's Last Theorem for exponents 3 and 4. A proof of the four-colour theorem (roughly, any map needs at most four colours to be coloured so that no two countries sharing a more-than-point border share the same colour) has also been implemented in Coq.
If you are familiar with programming you ought to be able to write simple proofs in it pretty soon, though the ability to write proofs by hand is also pretty important and it is unlikely that the software alone will make you good at this, though it might help ward off the boredom. Mathematica is not for writing proofs and doing mathematics, but rather for algebra and other symbolic mathematics.
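To give a taste of what machine-checked proofs look like (purely illustrative, written here in Lean, a similar proof assistant; Coq and Isabelle have different syntax but the same flavor):

    -- The kernel verifies that both proofs are correct:
    example : 2 + 2 = 4 := rfl

    example (p q : Prop) (hp : p) (hq : q) : p ∧ q := ⟨hp, hq⟩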
|
H: Intuition Behind an Identity
I'm currently studying for a complex analysis prelim. exam in August, so I'm working through some of the exercises in Titchmarsh. One of the exercises has us evaluate the integrals $$\int_0^\infty\frac{1}{1+x^4}\,dx\quad\text{and}\quad\int_0^\infty\frac{x^2}{1+x^4}\,dx.$$After evaluating each of them, I found $$\int_0^\infty\frac{1}{1+x^4}\,dx=\int_0^\infty\frac{x^2}{1+x^4}\,dx=\frac{\pi}{2\sqrt{2}}.$$Pretty sure I had miscalculated, I went to Wolfram Alpha to verify my answers only to find I had done it correctly.
My question is why these two have the same value. Intuitively, I expected $\int\frac{x^2}{1+x^4}\,dx$ to be larger because on the interval $(1,\infty)$, $x^2>1$. The only explanation I can think of is that the $x^2$ makes the integrand much smaller in the interval $[0,1]$ than the original function, but I wouldn't have guessed it to be enough to make the values come out the same. Is there some other intuitive reason why these two integrals are the same?
AI: You may use the change of variables $x\leftrightarrow x^{-1}$ to verify the equality without evaluation.
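Explicitly (writing the substitution out): with $x=1/u$, $dx=-du/u^2$, the limits swap and
$$\int_0^\infty\frac{dx}{1+x^4}=\int_0^\infty\frac{1}{1+u^{-4}}\,\frac{du}{u^2}=\int_0^\infty\frac{u^2}{1+u^4}\,du.$$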
|
H: The value of $w$ also has a max error of $p\%$
Suppose $\frac{1}{w}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ where each variable $x,y,z$ can be measured with a max error of $p\%$
Prove that the calculated value of $w$ also has a max error of $p\%$
I guess I need to take its derivative, i.e. $-\frac{1}{w^2}dw=-\frac{1}{x^2}dx-\frac{1}{y^2}dy-\frac{1}{z^2}dz$
From there, what do I need to do?
I know this question may seem like nonsense, but please help me to solve it. Thanks! :)
AI: From this line
$$-\frac{1}{w^2}dw=-\frac{1}{x^2}dx-\frac{1}{y^2}dy-\frac{1}{z^2}dz$$
The percentage error of $w$ is
$$p_w=100\frac{dw}{w}\%$$
and likewise for the other variables, therefore
$$\frac{1}{w}p_w=\frac{1}{x}p_x+\frac{1}{y}p_y+\frac{1}{z}p_z$$
So assuming a maximum error of $p\%$ on the independent variables we have that
$$\frac{1}{w}p_w\leq\frac{1}{x}p+\frac{1}{y}p+\frac{1}{z}p=\left(\frac{1}{x}+\frac{1}{y}+\frac{1}{z}\right)p=\frac{1}{w}p$$
|
H: Simple limit problem: $\lim_{x\to2}(\frac{1}{x-2}-\frac{4}{x^2-4})$
While trying to help my sister with her homework, she gave me the following limit: $$\lim_{x\to2}(\frac{1}{x-2}-\frac{4}{x^2-4})$$
I know the conventional way of solving it would be (that's what I showed her):
$$\lim_{x\to2}(\frac{1}{x-2}-\frac{4}{x^2-4})=\lim_{x\to2}\left(\frac{x+2-4}{x^2-4}\right)=\lim_{x\to2}\left(\frac{x-2}{(x+2)(x-2)}\right)=\lim_{x\to2}\left(\frac{1}{x+2}\right)=\frac14$$
But she gave me the following answer:
$$\begin{align}
\lim_{x\to2}(\frac{1}{x-2}-\frac{4}{x^2-4})&=\lim_{x\to2}\frac{1}{x-2}-4\lim\frac{1}{x^2-4}\\
&=\lim_{x\to2}\frac{1}{x-2}-4\lim_{x\to2}\frac{1}{x+2}\lim_{x\to2}\frac{1}{x-2}\\
&=\lim_{x\to2}\frac{1}{x-2}-4\frac14\lim_{x\to2}\frac{1}{x-2}\\
&=\lim_{x\to2}\frac{1}{x-2}-\lim_{x\to2}\frac{1}{x-2}\\
&=0
\end{align}$$
I actually couldn't explain to her why she is wrong, because technically it looks fine. What am I missing?
AI: The error was in the very first step. We can say that $$\lim_{x\to a}\bigl(f(x)+g(x)\bigr)=\lim_{x\to a}f(x)+\lim_{x\to a}g(x)$$ provided that both of the limits on the right-hand side exist. In this case--$f(x)=\frac1{x-2}$ and $g(x)=-\frac4{x^2-4}$ with $a=2$--neither of these limits exist.
|
H: Metric Space (Elementary Analysis)
Let $d: X \times X \to \Bbb R$ be a function satisfying all properties of a metric except that $d(x,y)=0$ need not imply $x = y$.
If we define $\sim$ on $X$ by $x\sim y \iff d(x,y) = 0$,
prove that $D([x], [y]) = d(x,y)$ where $[x] = \{z \in X \mid z\sim x\}$ is well-defined on equivalence classes and makes the set of equivalence classes into a metric space.
Some help please!!
(I showed that $\sim$ is equivalence relation)
Thank you!
+++
I followed your advice and just solved this problem. Could you tell me if there is any weakness or error in my proof?
AI: In essence you're identifying all the "indistinguishable" points, that is, pairs such that $x\neq y$ yet $d(x,y)=0$ into one point $\bar x=\{y:d(x,y)=0\}$.
You ought to prove two things:
$(1)$ The new metric $d(\bar x,\bar y):=d(x,y)$ where we choose $x\in\bar x,y\in \bar y$ is "well-defined", meaning that the output does not depend on the representative we take in $\bar x,\bar y $ to feed into $d(x,y)$. Thus, prove that $x\sim x'$ and $y\sim y'\implies d(x,y)=d(x',y')$.
$(2)$ This alleged metric is indeed one.
Hint Assume $d(x,x')=d(y,y')=0$. $$\begin{align}d(x,y)&\leq d(x,x')+d(x',y')+d(y',y)\\d(x,y)&\leq \;\;\;\;0\;\;\;\;+\;\;\;\;0\;\;\;\;\;+d(y',y)\\d(x,y)&\leq d(x',y')\end{align}$$
It remains to show under the same assumption that $d(x,y)\geq d(x',y')$.
When working with, say, the space of all square Lebesgue integrable functions over some interval, $\mathscr L^2(I)$, one usually uses the above. Concretely, one defines $f\simeq g\iff f=g \text{ a.e. on } I$ to work with a metric space instead of a semi-metric space.
|
H: Weighted uniform convergence of Taylor series of exponential function
Is the limit
$$
e^{-x}\sum_{n=0}^N \frac{(-1)^n}{n!}x^n\to e^{-2x} \quad \text{as } \ N\to\infty \tag1
$$
uniform on $[0,+\infty)$?
Numerically this appears to be true: see the difference of two sides in (1) for $N=10$ and $N=100$ plotted below. But the convergence is very slow (logarithmic error $\approx N^{-1/2}$ as shown by Antonio Vargas in his answer). In particular, putting $e^{-0.9x}$ and $e^{-1.9x}$ in (1) clearly makes convergence non-uniform.
One difficulty here is that the Taylor remainder formula is effective only up to $x\approx N/e$, and the maximum of the difference is at $x\approx N$.
The question is inspired by an attempt to find an alternative proof that for every $\epsilon>0$ there is a polynomial $p$ such that $|f(x)-e^{-x}p(x)|<\epsilon$ for all $x\in[0,\infty)$.
AI: Credits should go to Landscape.
Define $$r_n(x)=\sum_{k=n+1}^\infty (-1)^k\frac{x^k}{k!}$$
Note that by Taylor's theorem with Lagrange's form of the remainder we can write $$r_n(x)=(-1)^{n+1}e^{-x'}\frac{x^{n+1}}{(n+1)!}$$
where $x'$ is positive. It follows $$e^{-x}|r_n(x)|\leq e^{-x}\frac{x^{n+1}}{(n+1)!}$$
Easy verification shows the last function has absolute maximum at $x=n+1$. But $$\frac{1}{{(n + 1)!}}{\left( {\frac{{n + 1}}{e}} \right)^{n + 1}} \sim \frac{1}{{\sqrt {2\pi \left( {n + 1} \right)} }}$$ by Stirling, so convergence is indeed uniform. $\quad \Box$
|
H: Determinant of a Matrix Proof: $\;\det(qA) = q^n(\det A)$
I am required to show that:
$\det(qA) = q^n(\det A)$, where $A$ is a real $n\times n$ Matrix, and $q$ is a constant
I believe that this claim is true after doing a few examples. However, I do not know how to start the proof.
AI: Andrea Mori's method is the canonical one, but I'll add one more.
If you know that $\det(BA) = \det(B)\det(A)$, then you can set $B=qI$, so that:
$$\det(qA)=\det(qIA)=\det(qI)\det(A)$$
Now, what is $\det(qI)$?
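(For the record: $qI$ is diagonal with $n$ diagonal entries equal to $q$, so $\det(qI)=q^n$, and the identity $\det(qA)=q^n\det(A)$ follows.)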
|
H: Dynamics of a linear map
Let $F : \mathbb{R^2} → \mathbb{R^2}$ be any map and, given a point
$(x_0, y_0)$ in $\mathbb{R^2}$ define $x_n$ and $y_n$ by $(x_{n+1}, y_{n+1}) = F(x_n, y_n)$. We study the dynamics of the map $F$ by
studying the limiting behaviour of the sequence $(x_n, y_n)$ as $n→∞$
for different choices of the starting point $(x_0, y_0)$. What is the
limiting behaviour of $(x_n, y_n)$ when $F$ is expressed by
$$\begin{pmatrix} 0 & 2\\ 1 & 1 \end{pmatrix}$$
I thought I'd try to do this using the Cayley-Hamilton theorem, but the hint suggests:
Find the eigenvalues of the matrix. The lines $y = x$ and $x + 2y = 0$ should figure prominently in your solution.
I can see that the lines in the hint are just eigenvectors, and if we take points that are eigenvectors, then the dynamics are clear. But what about other points? Does the hint somehow help to find the dynamics for those points, or is the Cayley-Hamilton theorem the way to go?
AI: Other points are linear combinations of eigenvectors, hence by linearity their behaviour can be predicted componentwise with respect to this decomposition.
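Spelled out for this matrix (my own elaboration): the characteristic polynomial is $\lambda^2-\lambda-2=(\lambda-2)(\lambda+1)$, with eigenvector $(1,1)$ for $\lambda=2$ (the line $y=x$) and $(2,-1)$ for $\lambda=-1$ (the line $x+2y=0$). Writing $(x_0,y_0)=\alpha(1,1)+\beta(2,-1)$ gives
$$(x_n,y_n)=\alpha\,2^n(1,1)+\beta\,(-1)^n(2,-1),$$
so starting points on $x+2y=0$ (i.e. $\alpha=0$) stay bounded and oscillate with period $2$, while every other starting point escapes to infinity along the direction of $y=x$.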
|
H: Numerical computation of continuous Fourier transform
Are there any algorithms that numerically compute the continuous Fourier transform of a given function $f$?
I find plenty of implementations of the discrete Fourier transform, using FFT,
but, if I'm not mistaken, DFT is not a discrete approximation of the continuous Fourier transform, but a different, although related, concept.
AI: The fast Fourier transform (FFT) is used to compute numerical approximations to continuous Fourier transforms (CFT); this is, of course, tied to its correspondence with the discrete Fourier transform. A numerical approximation of the CFT requires evaluating a large number of integrals, each with a different integrand, since the values of this integral are needed for a large range of the variable. The FFT can be applied effectively to this problem. There are, however, cases where the FFT-based (DFT) approximation is not accurate; e.g. the DFT is periodic, so spectrum aliasing may occur, and other approximations have to be worked out for such cases.
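As a minimal sketch of the standard approach (my own example, assuming the convention $F(k)=\int f(x)e^{-2\pi ikx}\,dx$, under which the Gaussian $e^{-\pi x^2}$ is its own transform):

    import numpy as np

    # Approximate F(k) = int f(x) exp(-2 pi i k x) dx by a Riemann sum,
    # evaluated at all grid frequencies at once via the FFT.
    N, L = 1024, 40.0               # number of samples, window width
    dx = L / N
    x = -L / 2 + dx * np.arange(N)  # sample points
    f = np.exp(-np.pi * x**2)

    k = np.fft.fftfreq(N, d=dx)     # frequencies k_m = m / (N dx)
    # The phase factor accounts for the window starting at x[0] rather than 0.
    F = dx * np.exp(-2j * np.pi * k * x[0]) * np.fft.fft(f)

    print(np.max(np.abs(F.real - np.exp(-np.pi * k**2))))  # near machine precision

The point is that the DFT values, multiplied by the step size and a phase correction, form a Riemann-sum approximation of the continuous transform on a grid of frequencies.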
|
H: Prove that $x^2<\sin x \tan x$ as $x \to 0$
$$x^2<\sin x \tan x \quad \text{as } x \to 0$$
I made the substitution $x \to \arctan x$ .
$\arctan^2 x<x\sin (\arctan x)$
$\arctan x < \large \frac{x}{(x^2+1)^{\frac 14}}$
There are two functions $f(x)$ and $g(x)$ . $f(0)=g(0)$ . If $f'(x)>g'(x)$ on the interval $(0, a)$ , then that implies that $f(x)>g(x)$ on the interval $(0, a)$ . Therefore if $RHS'>LHS'$ , then $RHS>LHS$ .
$LHS'=\large \large \large \frac {1}{x^2+1} <RHS'=\frac {x^2+2}{2(x^2+1)^{\frac 54}}$
$\large \large \large \frac {1}{x^2+1}<\frac {x^2+2}{2(x^2+1)^{\frac 54}}$
$1<\large \frac {x^2+2}{2(x^2+1)^{\frac 14}}$
Using standard techniques (such as first derivative test) we can show that the $RHS$ has a minimum at $(0, 1)$ so we have proved the inequality.
I have two questions:
$1)$ Is my proof correct?
$2)$ Are there nicer ways of doing this inequality (without involving math higher than calc 1)? I have tried AM-GM and others but I think it is very hard to do it very elegantly because the $RHS$ is such a good approximation of the $LHS$ as $x \to 0$ .
AI: Apply GM-HM to $\sin x$ and $\tan x$ (both nonnegative for $0\le x<\frac\pi2$); we get that
$$ \sqrt{ \sin x \tan x } \geq \frac{2} { \frac{1} {\sin x} + \frac{ 1}{ \tan x} } = \frac{2 \sin x} { 1 + \cos x } = 2 \tan \frac{x}{2} \geq x$$
The only 'calc' that you need is the last inequality, but that has an easy graphical approach too.
|
H: What does $f \in H^\infty$ mean?
I am reading this research paper about polynomials with non-negative coefficients. Can some one tell what does the notation $f \in H^\infty$ mean so that I can research about this function class?
AI: $H^\infty$ is the class of bounded holomorphic functions on the open unit disc (or sometimes the upper half-plane).
It is one of the Hardy spaces.
|
H: Factorizations of $x^2+x$ in $\mathbb Z_6[x]$
So I was looking through my old algebra book and found a question that I can't seem to answer.
Find two Factorizations of $x^2+x$ as the product of nonconstant polynomials that are not associates of $x$ or $x+1$.
I found $(x+3)(x+4)$, can anyone find the other one?
I would appreciate help satiating my curiosity.
AI: How about $x^2+x = (5x+3)(5x+2)$? I notice that $a = 5$ is invertible in $\mathbb{Z}_6$ with $a^{-1}=5$, so I multiplied both factors of your factorization by $a$; since $a^2=25\equiv 1$, the product is unchanged. Is this valid?
|
H: singular (co)homology over various fields of same characteristic
Is the following true: if $K$ and $F$ are fields with the same characteristic and $X$ is a topological space, then for any $n$ there holds $$\dim_K H_n(X;K) = \dim_F H_n(X;F)\text{ and }\dim_K H^n(X;K) = \dim_FH^n(X;F),$$ where $H_n(-;-)$ and $H^n(-;-)$ are singular homology and cohomology with coefficients?
AI: Yes. Any field is an extension of $\mathbb{Q}$ or some $\mathbb{F}_p$, so it suffices to take $K = \mathbb{Q}$ or $\mathbb{F}_p$ and $F$ an extension of $K$. Then $F$ is flat over $K$, so using the universal coefficient theorem (or just tensoring $F$ with the singular chain complex with coefficients in $K$), you see that $H_n(X;F) \cong H_n(X;K) \otimes F$ and likewise for cohomology.
|
H: Basic statistics - Calculate distribution of winning
I have a 100 sided fair dice with each side labelled 1 thru 100. I win if the number rolled is 49 or higher (1% advantage).
1. What is the probability of me winning exactly 500 rolls if the dice is rolled 1000 times?
What is the general formula for calculating the probability of winning exactly W rolls if:
P=probability of winning (52% if the above example)
N=total number of rolls
AI: When you roll the die once, the probability of a loss is $\frac{48}{100}=0.48$, and the probability of a win is $\frac{52}{100}=0.52$, not $0.51$. Therefore the probability of any particular sequence of $500$ wins and $500$ losses is $\left(\frac{48}{100}\right)^{500}\left(\frac{52}{100}\right)^{500}=0.48^{500}\cdot0.52^{500}=0.2496^{500}$. There are $\binom{1000}{500}$ ways to choose which $500$ rolls are wins, so there are $\binom{1000}{500}$ different sequences of $500$ wins and $500$ losses. The overall probability of getting one of these sequences is therefore
$$\binom{1000}{500}\left(\frac{48}{100}\right)^{500}\left(\frac{52}{100}\right)^{500}=\binom{1000}{500}\cdot0.2496^{500}\approx0.0113\;.$$
If you want the probability of winning to be $0.51$, you need to set the lower limit for a win at $50$, not $49$.
As can be seen from the reasoning above, the general formula for the probability of winning exactly $W$ of $N$ rolls when the probability of a winning roll is $p$ is
$$\binom{N}Wp^W(1-p)^{N-W}\;.$$
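A one-line check in Python (my own verification; math.comb gives the exact binomial coefficient):

    from math import comb
    p = 0.52
    print(comb(1000, 500) * p**500 * (1 - p)**500)   # approx. 0.0113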
|
H: Proving irreducibility of $x^6-72$
I have the following question:
Is there an easy way to prove that $x^6-72$ is irreducible over $\mathbb{Q}\ $?
I am trying to avoid reducing mod p and then having to calculate with some things like $(x^3+ax^2+bx+c)\cdot (x^3+dx^2+ex+f)$ and so on...
Thank you very much.
AI: I had posted a more general question on Brilliant before, namely asking when is $p_n(x) = x^6 + n $ reducible over the integers (which is equivalent to reducible over the rationals as the content of the polynomial is 1.)
Suppose $p_n(x)=g(x)\cdot h(x),$ where $g$ and $h$ are not constants. The sum of the degrees of $g$ and $h$ is $6$, and the product of the leading coefficients is $1.$ Because all coefficients are integers, this means that the leading coefficients of $g$ and $h$ are either both $1$ or both $-1.$ In the latter case, we multiply both $g$ and $h$ by $-1$ so that the leading coefficients are $1$. Also, we can assume, without loss of generality, that $\deg(g)\geq \deg(h).$
The polynomial $p_n$ has $6$ complex roots, all with absolute value $\sqrt[6]{|n|}$. Suppose the degree of $h$ is $k$, which can be $1,2,$ or $3$. Then the absolute value of the free term of $h$ is the product of absolute values of $k$ roots, thus it is $|n|^{k/6}.$ If this is an integer, then $|n|$ must be either a perfect square (if $k=3$) or a perfect cube (if $k=2$) or a perfect 6th power (if $k=1$, though this is also a perfect cube). Moreover, if $k=3$, then $n$ cannot be positive, because every cubic polynomial has a real root, and $x^6+n>0$ for positive $n$.
Hence $p_n(x)$ is reducible if and only if $n = -a^2 $ or $b^3$.
|
H: Find All Points on a Paraboloid where Tangent Plane is Parallel to a Given Plane
Find all points on the paraboloid $z=x^2+y^2$ where tangent plane is parallel to the plane $x+y+z=1$ and find equations of the corresponding tangent planes. Sketch the graph of these functions.
I have its answer, but I don't really understand this type of question, and I am really willing to learn. (I added its answer as a picture.) Please teach me how to solve it.
Please help me. Thank you so much:)
AI: To get a normal vector to the paraboloid at a point $(x,y,z)$, write it as the level surface $f(x,y,z)=z-x^2-y^2=0$ and take the gradient $\nabla f(x,y,z)=-2xi-2yj+k$. Since we want the tangent plane at the point to be parallel to the plane $x+y+z=1$, the normal vector $\nabla f(x,y,z)=-2xi-2yj+k$ has to be parallel to the vector $i+j+k$ (since this is a normal vector to $x+y+z=1$). This means that $-2xi-2yj+k$ must be a constant multiple of $i+j+k$, so $-2xi-2yj+k=c(i+j+k)$ for some constant c. Then
$-2x=c$, $-2y=c$, and $1=c$, so $x=-1/2$ and $y=-1/2$. Therefore $z=x^2+y^2=1/4+1/4=1/2$ at the point of tangency, and the tangent plane has equation $x+y+z=-1/2$
at this point.
|
H: Proving and understanding the Fixed point lemma (Diagonal Lemma) in Logic - used in proof of Godel's incompleteness theorem
http://en.wikipedia.org/wiki/Diagonal_lemma
I am wondering about the proof of the "Fixed-Point Lemma"
$\text{Mod } \Sigma$ is the class of all models of $ \Sigma$. $\text{Th Mod } \Sigma$ is the set of all sentences which are true in all models of $\Sigma$. This however is just the set of all sentences logically implied by $\Sigma$. We call this set the set of consequences of $\Sigma$ or $Cn \Sigma$. Thus we have that $Cn \Sigma = \{\sigma \mid \Sigma \models \sigma \} = Th\space Mod \space \Sigma$.
Now we let $A$ be a set of axioms for our system over the language of $R$ which is a set of relations, constants, and formulas which create a number theory. Now if we have some set of consequences $Cn\Sigma$ due to our set of axioms $\Sigma$ we create a theory $T$ which models the set of consequences $Cn \Sigma$. Now referring to Wikipedia, as it states:
"Let $T$ be a first-order theory in the language of arithmetic and capable of representing all computable functions. Let $ψ$ be a formula in the language with one free variable. The diagonal lemma states that there is a sentence $φ$ such that $φ \Leftrightarrow ψ(\#(φ))$ is provable in T.
Intuitively, $φ$ is a self-referential sentence saying that $φ$ has the property $ψ$. The sentence $φ$ can also be viewed as a fixed point of the operation assigning to each formula $θ$ the sentence $ψ(\#(θ))$. The sentence $φ$ constructed in the proof is not literally the same as $ψ(\#(φ))$, but is provably equivalent to it in the theory $T$.
Let $f\colon \mathbb{N}\rightarrow \mathbb{N}$ be the function defined by:
$$f(\#(θ)) = \#(θ(\#(θ)))$$
for each $T$-formula $θ$ in one free variable, and $f(n) = 0$ otherwise. The function $f$ is computable, so there is a formula $δ$ representing $f$ in $T$.
--> I don't quite understand the above sentence. I know that since $\theta$ is a $T$-formula it is a consequence of the theory $T$, so we are only looking at functions $f(n)$ which operate on consequences of $T$ (which $\theta$ is one of). Now it is claimed that since $f$ is computable there is some $\delta$ representing $f$ in $T$. I am not sure what that is supposed to mean/what $\delta$ is really doing. Now I know that we are representing the sentences $\theta$ which are the consequences of the theory $T$. Also for $f(\#(θ)) = \#(θ(\#(θ)))$ is this just saying that we are defining $f$ to be some sort of assignment of a Gödel number to a Gödel number (the $\#(θ(\#(θ)))$)?
Wikipedia goes on to say:
Thus for each formula $θ$, $T$ proves:
$$(\forall y) [ δ(\#(θ),y) \Leftrightarrow y = f(\#(θ))]$$
If I can understand the above it is possible that I can crack the rest of the proof - but will try and get it posted here so that others can see the walkthrough of the proof.
Thanks much in advance,
Brian
P.S. I was also hoping to gain some understanding beyond just understanding the proof of the lemma. I see a little but here:
Gödel's Incompleteness Theorem - Diagonal Lemma
and am guessing that somehow we are generating some sentence (similar to another real number in Cantor's diagonalization argument) in which ?? I don't know how to put the rest into words as I don't understand exactly what diagonalizing the sentences does (I do a little bit by intuition but cannot formalize what is going on). Any thoughts on this would be very welcome. I also am wondering how this fits in with the $\delta$ above.
AI: The $\delta$ in the proof statement is a formula. In the language of $\text{PA}$, there is no symbol for the function $f$; but $\text{PA}$ is strong enough so that there is some formula $\delta(x, y)$ such that the following two things hold: $$f(n) = m \quad \text{ iff } \quad \text{PA} \vdash \delta(\underline{n}, \underline{m})$$
$$f(n) \neq m \quad \text{ iff } \quad \text{PA} \vdash \neg\delta(\underline{n}, \underline{m})$$
where $\underline{k}$ is the numeral for the number $k$. (A weaker statement would be to only assert the first biconditional; this weaker property of "weakly" representing a function holds of all $\Sigma_1^0$-functions).
Now, given that $f(\#(\theta)) = \#(\theta(\#(\theta)))$, where $\theta(x)$ is a formula in the language of $\text{PA}$, define a new formula as follows:
$$\alpha(x) = \exists y (\delta(x, y) \wedge \psi(y))$$
Finally, define $\varphi = \alpha(\#(\alpha))$. Then by our definitions:
$$\begin{align}
\text{PA} \vdash \varphi & \leftrightarrow \alpha(\#(\alpha)) \\
& \leftrightarrow \exists y (\delta(\#(\alpha), y) \wedge \psi(y)) \\
& \leftrightarrow \exists y (y = \#(\alpha(\#(\alpha))) \wedge \psi(y)) \\
& \leftrightarrow \psi(\#(\alpha(\#(\alpha)))) \\
& \leftrightarrow \psi(\#(\varphi))
\end{align}$$
The step from $\exists y (\delta(\#(\alpha), y) \wedge \psi(y))$ to $\exists y (y = \#(\alpha(\#(\alpha))) \wedge \psi(y))$ is (if I'm not mistaken) the other step you mention. That follows from the fact that $\text{PA}$ is representing the function $f$ with $\delta$. Hence, for all $k$, if $f(n) = k$, then $\text{PA} \vdash \forall y (\delta(\underline{n},y) \leftrightarrow y = \underline{k})$.
Similarly, if $T$ extends $\text{PA}$, it must also strongly represent all recursive functions, and so this applies to $T$ as well.
|
H: Recursively Defined Entities
So I am having some trouble understanding how one is to come up with the recursive definition to the following problem...
We are given a rectangle of width $2$ and length $n$. Suppose we have dominoes of size $2\times 1$. What is the number of different ways we can cover the $2\times n$ rectangle?
The solution is supposed to be $a(n) = a(n-1) + a(n-2)$, but I'm not understanding exactly how they arrived at that. I can plug values in and see that it does indeed work, but the intuition behind it is what confuses me.
Thanks in advance!
AI: In a corner, you have two possibilities to place a domino: either vertically
|x|...
|x|...
which leaves you with an $(n-1)×2$ rectangle to tile, or horizontally
xx...
.....
which forces a horizontal domino below
xx...
yy...
and leaves you with an $(n-2)×2$ rectangle.
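The resulting recurrence $a(n)=a(n-1)+a(n-2)$ is easy to check by computer (a small sketch of my own):

    def tilings(n):
        # Number of domino tilings of a 2 x n rectangle.
        a, b = 1, 1          # a(0) = 1 (the empty tiling), a(1) = 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print([tilings(n) for n in range(1, 8)])   # [1, 2, 3, 5, 8, 13, 21]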
|
H: Find all invertible elements of $ \Bbb{Q}[x]/(x^{600}) $.
I know that the invertible elements of $\Bbb{Q}[x]$ are the nonzero constants, i.e. $\Bbb{Q}^\times$. But in $\Bbb{Q}[x]/(x^{600})$, I suppose there are more invertible elements. How to find all of them?
AI: HINT:
Show first that if $R$ is a commutative ring with identity with a unique maximal ideal $\frak m$ (such rings are called local), then the invertible elements of $R$ are exactly the elements in $R\setminus\frak m$.
Now show that $\Bbb Q[X]/(X^N)$ has a unique maximal ideal for all $N\geq1$, namely the ideal generated by (the class of) $X$.
|
H: Prove that $\arctan\left(\frac{2x}{1-x^2}\right)=2\arctan{x}$ for all $|x|<1$, directly from the integral definition of $\arctan$
I would like to show that for $A(x) = \int_{0}^{x}\frac{1}{1+t^2}dt$, we have $A\left(\frac{2x}{1-x^2}\right)=2A(x)$, for all $|x|<1$.
My idea is to start with either $2\int_0^x\frac{1}{1+t^2}dt$ or $\int_0^{2x/(1-x^2)}\frac{1}{1+t^2}dt$, and try to transform one into the other by change of variables. (It would make more sense for the moment if we did not do any trigonometric substitutions, since we are defining the trig functions via this integral.)
One of the several things I've tried is to use $A(x)+A(1/x)=\pi/2$, and write $\int_0^{2x/(1-x^2)}\frac{1}{1+t^2}dt=\pi/2 - \int_0^{(1-x^2)/2x}\frac{1}{1+t^2}dt=\pi/2 - \int_0^{1/2x}\frac{1}{1+t^2}dt+\int_0^{x/2}\frac{1}{1+t^2}dt$, but this doesn't seem to be getting me anywhere.
Any ideas?
AI: Hint: Let $f(x)=A\left(\frac{2x}{1-x^2}\right)$ and let $g(x)=2A(x)$, both defined by integrals precisely as in the OP.
Use the Fundamental Theorem of Calculus to show that these functions have the same derivative. So they differ by a constant. Then all you will need to do is to show that they agree at say $x=0$, so the constant is $0$.
|
H: Find the coefficient of $x^{20}$ in $(x^1+\cdots+x^6)^{10}$
I'm trying to find the coefficient of $x^{20}$ in
$$(x^1+\cdots+x^6)^{10}$$
So I did this :
$$\frac{1-x^{m+1}}{1-x} = 1+x+x^2+\cdots+x^{m}$$
$$(x^1+\cdots+x^6)=x(1+x+\cdots+x^5) = \frac{x(1-x^6)}{1-x} = \frac{x-x^7}{1-x}$$
$$(x^1+\cdots+x^6)^{10} =\left(\frac{x-x^7}{1-x}\right)^{10}$$
But what do I do from here? Any hints?
Thanks
AI: Factoring $x^{10}$ out of the product, the coefficient of $x^{20}$ in $(x+\cdots+x^6)^{10}$ equals the coefficient of $x^{10}$ in $(1+ x + \ldots + x^5)^{10}$, which is the number of integers $0 \leq x_i \leq 5$ such that $\sum_{i=1}^{10} x_i= 10$.
We apply the Principle of Inclusion and Exclusion to deal with the restriction $x_i \leq 5$.
If the only restriction is $0 \leq x_i$, then there are ${10 + 9 \choose 9}$ solutions by the stars and bars method (sum of 10 non-negative integers is 10).
If $x_1 \geq 6$, then we substitute $x_1 = 6 + x_1^*$, and there are ${4 + 9 \choose 9}$ solutions by the stars and bars method (sum of 10 non-negative integers is 4); the same count applies to each of the 10 variables.
Observe that we can't have two terms that are each at least $6$, since they would already sum to more than $10$.
Hence, by PIE, the coefficient is ${ 19 \choose 9} - 10 { 13 \choose 9}$, which is 85228.
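A quick cross-check (my own sketch, not part of the original answer), expanding the polynomial directly and comparing with the PIE count:

    poly = [1]                      # the constant polynomial 1
    for _ in range(10):             # multiply by (x + x^2 + ... + x^6) ten times
        new = [0] * (len(poly) + 6)
        for i, coef in enumerate(poly):
            for j in range(1, 7):
                new[i + j] += coef
        poly = new
    print(poly[20])                 # 85228

    from math import comb
    print(comb(19, 9) - 10 * comb(13, 9))   # 85228, matching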
|
H: A definition of metric space
Can you please help me solve the question below?
I have no idea how to prove this one.
Define the set
$$X:=\{K\subset\mathbb C:K\text{ is bounded and closed}\}$$
Define a function $d\colon X \times X \to \mathbb{R}$ via
$$ d(K_1,K_2)=\inf\{\delta>0:K_1\subset N_\delta(K_2)\text{ and }K_2\subset N_\delta(K_1)\}$$
where
$$ N_\delta(K):=\bigcup_{y\in K}N_\delta(y)=\{x\in\mathbb C:\exists y\in K\text{ with }|x-y|<\delta\}.$$
i) Show that $d$ defines a metric on $X$.
ii) Is $d$ still a metric if $X$ contains all bounded sets in $\mathbb C$? All closed sets?
AI: This is called Hausdorff distance, by the way.
i) The problem statement is wrong: you must exclude the empty set from $X$. That is, I shall work with $X:=\{K\subset\mathbb C:K\text{ is nonempty and compact}\}$.
Also, $d$ is not a map $X\to X$ but rather $X\times X\to\mathbb R$ as required for a metric.
You need to show the properties of a metric:
$d(K_1,K_2)\ge 0$: Since both sets are bounded and nonempty, the set we take the infimum of is nonempty, hence the infimum is $<\infty$. As all $\delta$ in the set are $>0$, the infimum is also $\ge 0$.
$d(K_1,K_2)=0\Rightarrow K_1=K_2$: Assume $K_1\ne K_2$, and wlog pick $x\in K_1\setminus K_2$. Since $K_2$ is closed, there is an $\epsilon>0$ such that $B_\epsilon(x)\cap K_2=\emptyset$. Then $x\notin N_\delta(K_2)$ whenever $\delta<\epsilon$, and hence $d(K_1,K_2)\ge \epsilon$.
$d(K_1,K_3)\le d(K_1,K_2)+d(K_2,K_3)$: Note that $N_{\delta_1}(N_{\delta_2}(K))\subseteq N_{\delta_1+\delta_2}(K)$, by the triangle inequality in $\mathbb C$. Use this, write down the explicit condition we want to show, and you are done.
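Spelled out (one way to finish this step, using only the definitions above): if $K_1\subseteq N_{\delta_1}(K_2)$ and $K_2\subseteq N_{\delta_2}(K_3)$, then $K_1\subseteq N_{\delta_1}(N_{\delta_2}(K_3))\subseteq N_{\delta_1+\delta_2}(K_3)$; the same holds with the roles of $K_1$ and $K_3$ swapped, and taking infima over admissible $\delta_1,\delta_2$ gives the triangle inequality.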
ii) Let $A=\{z\in\mathbb C:|z|<1\}$ and $B=\{z\in\mathbb C:|z|\le 1\}$. These are bounded sets with $d(A,B)=0$ and $A\ne B$. Hence this is no longer a metric.
Let $A=\mathbb R$ and $B=i\mathbb R$. These are closed sets with $d(A,B)=\infty$, hence this is no longer a metric.
|
H: Rotor Identity $ \frac{1+ba}{|a+b|} = e^{-B\theta /2} $
To prove: the identity given above, where $a, b$ are vectors, $B$ is the unit bivector in the $a\wedge b$ plane, and $\theta$ is the angle between $a$ and $b$. (From "Geometric Algebra for Physicists" by Doran and Lasenby.)
Expanding the L.H.S. I get $$ \frac{1+b\cdot a}{|a+b|} - \frac{|a\wedge b|}{|a+b|}B $$
The R.H.S gives me by definition, $$ \cos(\theta/2) - \sin(\theta/2)B $$
Using grade projection, we should have
$$ \frac{1+b\cdot a}{|a+b|} = \cos(\theta/2) $$
and
$$ \frac{|b\wedge a|}{|a+b|} = \sin(\theta/2) $$
But I can't think of an easy way to prove either. I am trying to prove them using geometry and the rules of GA, rather than trigonometry.
AI: Remember that a rotation can be performed through a composition of two reflections. Let $c$ be the vector in the $a \wedge b$ plane that has an angle $\theta/2$ with both $a$ and $b$. Then a rotation that would rotate $a$ to $b$ can be seen as
$$\underline R(s) = \hat c \hat a s \hat a \hat c$$
for any vector $s$. (Here, we're choosing trivially to reflect over $a$ and then reflect over the angle bisector.)
The question then becomes how we can compute the vector $c$ that bisects that angle. In fact, $c = \hat a + \hat b$ does this nicely (not well defined if $\hat b = - \hat a$, but then the plane is not well-defined anyway). Then $\hat c = (\hat a + \hat b)/|\hat a + \hat b|$ and we get the resulting rotor to be
$$R = \hat c \hat a = \frac{\hat a \hat a + \hat b \hat a}{|\hat a + \hat b|}$$
which takes the form you want. It is, however, not immediately clear to me how to generalize the problem to using non-unit vectors. This argument that $c$ is the right vector to reflect over doesn't work if you don't use unit vectors.
|
H: Eulerian graph in two color
How can we prove that an Eulerian map can be colored with 2 colors? I know that an Eulerian graph can be colored with at most 4 colors, which is the Four Color Theorem, but I have no idea how to prove that 2 colors suffice. Can anyone help me with this? Thanks!
The Eulerian map at here is mean the Eulerian planar graph (so all the vertices have even degrees).
AI: If by "Eulerian Map" you mean Eulerian planar graph, then you might be interested in the fact that
A planar bipartite graph is dual to a planar Eulerian graph and vice versa. [MathWorld]
Edit:
Sketch of the proof:
$G$ is bipartite
$\iff$ every cycle of $G$ is even
$\iff$ every simple cycle of $G$ is even
$\iff$ every vertex of $G^d$ has even degree (the degree of a dual vertex is the length of the boundary walk of the corresponding face of $G$)
$\iff$ $G^d$ is Eulerian (a connected graph is Eulerian iff every degree is even).
I hope this helps ;-)
|
H: Finding the multiples of a number that satisfy the question.
Two numbers multiply to equal 200. Find the numbers such that the difference between the square root of one number and the reciprocal of the other is minimized.
Having a tough time working around this problem, I'm having some trouble.
AI: If one number is $x$, the other is $\frac{200}x$, so we want to minimize the distance between $\sqrt x$ and $\frac x{200}$. We can in fact make these expressions equal: $\sqrt x=\frac x{200}\iff x=200^2=40000$ (and the other factor is $\frac1{200}$), so the minimum difference is $0$.
|
H: If $X$ and $Y$ are independent then $f(X)$ and $g(Y)$ are also independent.
If $X$ and $Y$ are independent random variables and $f$ and $g$ are measurable functions, how does one show that $U = f(X)$ and $V = g(Y)$ are still independent?
AI: You said measurable, so I am going to assume you want a measure-theoretic answer, and that your definition of independence is: $X$ and $Y$ are independent iff $\sigma(X)$ is independent of $\sigma(Y)$, i.e. $P[X \in B_1, Y \in B_2] = P[X \in B_1]P[Y \in B_2]$ for all Borel sets $B_1,B_2$. You must assume $f,g$ are Borel functions (this is so that $f(X),g(Y)$ are measurable, so asking the question of independence makes sense).

The $\sigma$-algebra generated by $f(X)$ is a sub-$\sigma$-algebra of the $\sigma$-algebra generated by $X$, and similarly for $g(Y)$ and $Y$. To see this, note that for any Borel set $B$ we have $(f\circ X)^{-1}(B) = X^{-1}(f^{-1}(B)) = X^{-1}(\text{some Borel set}) \in \sigma(X)$. Since $\sigma(X) \perp \sigma(Y)$, it follows that $\sigma(f(X)) \perp \sigma(g(Y))$.
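Unwinding that last line explicitly (a routine completion using only the definition above): for Borel sets $B_1,B_2$,
$$P[f(X)\in B_1,\ g(Y)\in B_2] = P[X\in f^{-1}(B_1),\ Y\in g^{-1}(B_2)] = P[X\in f^{-1}(B_1)]\,P[Y\in g^{-1}(B_2)] = P[f(X)\in B_1]\,P[g(Y)\in B_2].$$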
|
H: Find $m, n$ such that $\frac{n^2 + 1}{m^2 + 1 }$ is an integer multiple of a perfect square
I'm trying to find $n,m \in \mathbb{N}$ such that $\sqrt{ \frac{n^2+1}{2(m^2+1)}}$ is rational.
I see that if $a,b$ are relatively prime, then $\sqrt{\frac{a}{b}}$ is rational if and only if $a$ and $b$ are perfect squares. Now $n^2+1$ can be a perfect square only if $n = 0$, and $2(m^2+1)$ is a square only when $m = 1$. For any other solution, we must have (WLOG) $a = b\cdot r^2$ for some integer $r$; in other words, $\frac{n^2+1}{m^2+1} = 2 \cdot r^2$.
How could I go about solving that - or, what seems more likely, showing that there are no solutions? Is there a more general way to show $\frac{n^2+1}{m^2+1} = d \cdot r^2$ can have solutions only for specific $d$.
Thank you
AI: As leshik points out, there are infinitely many solutions to
$$n^2 - 2m^2 = 1$$
Each of these gives $\frac{n^2 + 1 } { 2(m^2 + 1) } = 1$
I'm not certain if requiring an integer perfect square is possible, but your question states that rational numbers are fine.
Consider the Pell equation $x^2 - 13 y^2 = -1$. It has the solution $(18, 5)$, hence it has infinitely many solutions.
Observe that for each solution,
$$\frac{x^2+1} { 2 (5^2 + 1) } = \left( \frac{y}{2} \right) ^2 $$
Hence, we have infinitely many pairs of integers of the form $(x, 5)$ which work.
Since the smallest solution of $x^2- 13 y^2 = 1$ is $(649, 180)$, the next solution is quite large, and is $(x,y) = (23382, 6485)$.
This arose from realizing that we have a solution $(18,5)$. To generalize this approach for any $d$ would require finding a specific solution first, before knowing what $D$ (in Pell's Equation) to use.
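To make this concrete, here is a small sketch (my own, not part of the original answer) that generates solutions of $x^2-13y^2=-1$ by multiplying $18+5\sqrt{13}$ by the fundamental unit $649+180\sqrt{13}$, and checks the rational-square identity for each resulting pair $(x,5)$:

    x, y = 18, 5
    for _ in range(4):
        assert x * x - 13 * y * y == -1
        # (x^2 + 1) / (2 (5^2 + 1)) == (y / 2)^2, cross-multiplied to stay in integers:
        assert 4 * (x * x + 1) == 2 * (5 * 5 + 1) * y * y
        print(x, y)                                   # (18, 5), (23382, 6485), ...
        x, y = 649 * x + 13 * 180 * y, 180 * x + 649 * y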
|
H: (Revisited$_2$) Injectivity Relies on The Existence of an Onto Function Mapping Back to Its Preimage
QUEST:
For any sets $X$ and $Y$, there exists an injective function $f:X\rightarrow Y$ if and only if there exists a surjective function $g:Y\rightarrow X$.
QUESTION$_1$:
How do you approach this problem? I mean, what is running through your mind when you look at each of the above statements?
KNOWN:
$\dagger\hspace{.5cm}$If $f : X \rightarrow Y$ is injective, then there exists a function $g: Y \rightarrow X$ such that $g \circ f = 1_X$.
$\dagger\hspace{.5cm}$If $f:X \rightarrow Y$ is surjective, then there must exist a function $g:Y \rightarrow X$ such that $f \circ g = 1_Y$.
THOUGHTS:
http://en.wikipedia.org/wiki/Cantor–Bernstein–Schroeder_theorem
ATTEMPT$_{Q1}$: $\leftarrow$ This attempt is wrong, just so you know...
Since $f$ is an injection it is a bijection onto its image, and so there exists an inverse $h:f(X)\rightarrow X$. Now, let $x$ be an arbitrary element in $X$, and define $g:Y\rightarrow X$ by
$$g(y) =
\begin{cases}
h(y), & \text{if }y\in f(X) \\
x, & \text{otherwise }
\end{cases},$$
so $g$ is a bijection and therefore a surjection.
QUESTION$_2$:
Let $\precsim$ be a relation defined by
$$X\precsim Y~\iff~\exists~f:X\rightarrow Y~(1-1).$$
Let $\succsim$ be a relation defined by
$$X\succsim Y~\iff~\exists~f:X\rightarrow Y~(\text{onto}).$$
How are $\precsim$ and $\succsim$ related in the context of QUEST's proof?
ATTEMPT$_{Q2}$: $\leftarrow$ Maybe somebody will check this...
By the Cantor–Bernstein–Schroeder theorem, if $X\precsim Y$ and $Y\precsim X$, then $X\cong Y$, so we can define a relation $\leq$ on cardinalities as follows:
$$\lvert X \rvert \leq \lvert Y \rvert~~~\text{if}~~~X\precsim Y,$$
namely $\exists~f~\text{s.t.}~f:X\rightarrow Y~(1-1)$, which suggests that $\leq$ is anti-symmetric since
$$\lvert X \rvert \leq \lvert Y \rvert~\text{and}~\lvert Y \rvert \leq \lvert X \rvert \iff X\precsim Y~\text{and}~Y\precsim X,$$
if and only if there exist injective maps $f:X\rightarrow Y$ and $g:Y\rightarrow X$, in which case there exists a bijection $h:X\rightarrow Y$ by C-B-S, and so $X\cong Y\iff \lvert X \rvert = \lvert Y \rvert$, where $\lvert * \rvert$ denotes cardinality.
AI: You already have all the ingredients in your question.
If there is an injective function $f\colon X\to Y$, then your fact 1 gives you a function $g\colon Y\to X$; is it what you're looking for?
If there is a surjective function $g\colon Y\to X$, then your fact 2 gives you a function $f\colon X\to Y$ such that $g\circ f=1_X$ (just reverse the role of $f$ and $g$ and of $X$ and $Y$); is it what you're looking for?
Complete solution. I accept your two facts as known, but stated in a slightly different way:
Every injective function has a left inverse
Every surjective function has a right inverse
Suppose there exists an injective function $f\colon X\to Y$. By fact 1, $f$ has a left inverse, that is, a function $g\colon Y\to X$ such that $g\circ f=1_X$. I claim that the function $g$ is surjective; indeed, if $x\in X$, we have
$$
x = g\circ f(x) = g(f(x)) = g(y)
$$
where $y=f(x)$.
Suppose conversely that there exists a surjective function $g\colon Y\to X$. By fact 2, $g$ has a right inverse, that is, a function $f\colon X\to Y$ such that $g\circ f=1_X$. I claim that the function $f$ is injective; indeed, if $f(x_1)=f(x_2)$, then
$$
x_1=g\circ f(x_1) = g(f(x_1))=g(f(x_2))=g\circ f(x_2)=x_2.
$$
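A toy finite example (my own illustration, not from the original exchange) showing both constructions at once:

    X = [0, 1]
    Y = ['a', 'b', 'c']
    f = {0: 'a', 1: 'b'}            # an injective f : X -> Y
    g = {'a': 0, 'b': 1, 'c': 0}    # a left inverse of f; 'c' may go anywhere in X
    assert all(g[f[x]] == x for x in X)    # g o f = 1_X
    assert set(g.values()) == set(X)       # so g : Y -> X is surjective, as claimed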
|
H: Solve the equation about matrix
The equation is $x^2 = x$, where $x$ is a $2\times2$ matrix. Can anyone give me a hint? Thanks!
AI: Two different hints:
1) This is a question with 2-by-2 matrices... why not just write down good ol' $x=\begin{pmatrix}a & b\\c & d\end{pmatrix}$, compute $x^2-x$, and see what happens?
2) You could also notice that for such a matrix $x$, we must have $x^2-x=0$; hence the minimal polynomial for $x$ must divide $t^2-t=t(t-1)$. What does that tell you about the matrix?
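Following hint 1, a small symbolic sketch (my own, using sympy; the trace-$1$ parametrization below is just one convenient slice, not the full solution set):

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    # Trace-1 candidates: d = 1 - a.  Then x^2 - x vanishes iff b*c = a - a**2.
    x = sp.Matrix([[a, b], [c, 1 - a]])
    print((x * x - x).expand())                    # diag(a**2 + b*c - a, a**2 + b*c - a)

    x0 = x.subs(b, (a - a**2) / c)                 # enforce b*c = a - a**2 (assuming c != 0)
    print((x0 * x0 - x0).applyfunc(sp.simplify))   # the zero matrix: a family of idempotents

    # Besides x = 0 and x = 1 (the identity), every idempotent 2x2 matrix
    # has trace 1 and determinant 0 (eigenvalues 0 and 1).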
|