H: Prove $\det(kA)=k^n\det A$
Let $A$ be an $n \times n$ matrix and $k$ be a scalar. Prove that $\det(kA)=k^n\det A$.
I really don't know where to start. Can someone give me a hint for this proof?
AI: First, let's recall what multiplication of a matrix by a scalar means:
When we multiply $\;kA$, where $A$ is an $n\times n$ matrix, and $k$ a scalar, then
every entry $a_{ij}$ of matrix $A$ is multiplied by $k$: i.e. $a_{ij}\mapsto ka_{ij}$ for each $a_{ij}$. That means for each row $i,\;1\leq i\leq n,\;$ we can factor out $k$.
Now...Recall the elementary row operations you've learned, and how each one of them affects the determinant of the matrix on which it is operating. Specifically, when any one row is multiplied by the scalar $k$, the determinant of $A$ becomes $\;k\det A$. So given that scalar multiplication of a square $n\times n$ matrix is equivalent to "row operating" on $n$ rows (by multiplying each row by the scalar $k$), we can conclude that $$\large\det (kA) = \underbrace{k\cdot k\cdot\cdots \cdot k}_{\large n \;\text{times}}\det A = k^n\det A$$
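Not part of the proof, but if you ever want a quick numerical sanity check of the identity, a minimal Python sketch (assuming NumPy is available) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3.0
A = rng.standard_normal((n, n))     # a random 4x4 matrix

lhs = np.linalg.det(k * A)          # det(kA)
rhs = k**n * np.linalg.det(A)       # k^n det(A)
assert np.isclose(lhs, rhs)         # agrees up to floating-point error
print(lhs, rhs)
```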
|
H: Who realized $\int \frac 1x dx =\ln(x)+c$?
Who discovered the non-obvious $\int \frac 1x dx=\ln(x)+c$ ? Were power series involved? The series look similar on opposite sides of 1:
$$ \frac 1x =\sum_{n=0}^\infty (-1+x)^n \text{ for } |x-1|<1 $$
$$ \ln(x) = \ln(-1+x)-\sum_{k=1}^\infty \frac{(-1)^k}{k(-1+x)^k} \text{ for } |x-1|>1 $$
$$ \ln(x) = -\sum_{k=1}^{\infty}\frac{(-1)^k(-1+x)^k}k \text{ for } |x-1|<1 $$
AI: By about 1640, the solution to the "area problem" for curves with equation $Y^n = aX^m$ was known by Fermat for all integer cases except when $n = 1, m = -1$.
I.e., the only unsolved area problem was for $Y = \frac 1X$ - the standard equation for the graph of a hyperbola.
In 1647, Gregoire de St. Vincent showed the following special property for hyperbolas:
If: $\frac ab = \frac cd$ then $\int_a^b\frac 1x dx=\int_c^d\frac 1x dx$
In 1649 Alfonso Antonio de Sarasa recognized this feature in Gregoire's work and connected it to the properties of logarithms. In particular he recognized the following additive property of these measurements, which had previously been a key feature in the study of common logarithms (base 10): the area determined by a product of two numbers, $bd$, is equal to the sum of the areas determined by $b$ and $d$ separately.
If $\frac ab = \frac cd$ with $a=1$ and $c=1$, then $b$ and $d$ need not be equal, but the area determined by $b$ plus the area determined by $d$ equals the area determined by $bd$.
This is the same as saying what RGB did:
$$\int_{1}^{bd}\frac{\text{d}t}{t}=\int_{1}^{b}\frac{\text{d}t}{t}+\int_{1}^{d}\frac{\text{d}t}{t}$$
$$f(bd)=f(b)+f(d)$$
$$ A^b A^d=A^{b+d}$$
Thus a logarithmic relationship was implied and A was found to be e.
$$ \int_1^b \frac 1x dx = \ln(b)$$
$$ \int_1^d \frac 1x dx = \ln(d)$$
$$ \int_1^{bd} \frac 1x dx = \ln(b)+\ln(d)$$
Source
The Hyperbolic Angle "$u$" is defined in relation to the function $\frac 1x$:
The magnitude of the hyperbolic angle "$u$" is the area of the corresponding hyperbolic sector, which is $\ln x$, since it is defined via the integral of $\frac 1x$ over the projection onto the axis from $a=1$ (equivalently, from the line $y=x$) to $b$: $\int_1^b \frac 1x dx=\ln(b)$
While power series were not involved in the discovery of the natural logarithm, they can be used to show the relationship between $\frac 1x$ and e:
If a function is its own derivative then: $ y=f(x), \frac{dy}{dx}=y$
Such a function is a series: $ y=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\dots$
$$\frac{dy}{dx}=0+1+\frac{2x}{2!}+\frac{3x^2}{3!}+\frac{4x^3}{4!}\dots=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+ \dots$$
If we differentiate $y^p$: $ \frac{d(y^p)}{dx}=py^{p-1}\frac{dy}{dx}$
Since $ \frac{dy}{dx}=y$ it follows: $ \frac{d(y^p)}{dx}=py^{p-1}y=py^p$
Given: $y_1=1+z+\frac{(z)^2}{2!}+ \frac{(z)^3}{3!}+\dots$
$$\frac{dy_1}{dz}= 1+z+\frac{(z)^2}{2!}+ \frac{(z)^3}{3!}+\dots=y_1$$
Let $z=ax$. Then $\frac{dy_1}{d(ax)}=y_1$, so $\frac{dy_1}{dx}=ay_1$.
Taking $a=p$, $y_1$ and $y^p$ satisfy the same equation with the same value at $x=0$, so
$$y_1=y^p= 1+px+\frac{(px)^2}{2!}+ \frac{(px)^3}{3!}+\dots$$
If we let $p=\frac 1x$ then $y^{\frac 1x}=1+1+\frac{(1)^2}{2!}+ \frac{(1)^3}{3!}+\dots=e$
Hence replacing $e^1$ with $e^x$ yields:
$e^x=1+x+\frac{(x)^2}{2!}+ \frac{(x)^3}{3!}+\dots$
This all led to the development of the slide rule as well as the hyperbolic trig functions.
|
H: Integrate $\int e^{-x} \cos x \,\mathrm{d}x$
I know that integration by parts leads to an infinite loops of sin and cos so what do I do?
I can't do $u$ substitution because I can't get rid of all the variables.
$$\int e^{-x} \cos x \,\mathrm{d}x$$
AI: That's the entire point - you want to go through a loop once and use the "feedback" to get the answer.
Integrate by parts as follows:
$$\begin{align}\int dx \, e^{-x} \, \cos{x} &= -e^{-x} \cos{x} - \int dx \, e^{-x} \, \sin{x} \\ &=-e^{-x} \cos{x} + e^{-x} \sin{x} - \int dx \, e^{-x} \, \cos{x} \end{align}$$
Now do you see that you could go on forever...or you can just combine the like terms in the equation. (That is the feedback aspect.) When you do this, you find that
$$\int dx \, e^{-x} \, \cos{x} = \frac12 e^{-x} \left ( \sin{x} - \cos{x} \right ) + C$$
where $C$ is a constant of integration.
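As an aside, a hedged way to double-check the result symbolically (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.exp(-x) * sp.cos(x), x)
print(sp.simplify(F))                                        # e^{-x}(sin x - cos x)/2, up to form
print(sp.simplify(sp.diff(F, x) - sp.exp(-x) * sp.cos(x)))   # 0, so F really is an antiderivative
```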
|
H: Error calculating the limit $ \lim_{x \to 0}{\frac{x-\tan(x)}{x^2 \cdot \sin(x)}} $
Given this limit:
$$ \lim_{x \to 0}{\frac{x-\tan(x)}{x^2 \cdot \sin(x)}} $$
Wolfram says the result is $\frac{1}{3}$ , but I tried to solve it and I get 0:
$$ \lim_{x \to 0}{\frac{x \cdot (1-\frac{\tan(x)}{x})}{x \cdot (x \cdot \sin(x))}} = $$
$$ \lim_{x \to 0}{\frac{1-\frac{\tan(x)}{x}}{x \cdot \sin(x)}} = $$
$$ \lim_{x \to 0}{\frac{1}{x \cdot \sin(x)}} - \frac{1}{x \cdot \sin(x)} = 0$$
This because $ \lim_{x \to 0}{\frac{\tan(x)}{x}} = 1$
But the result is wrong.
I would like to know not only it's right solution, but also (and specially) what's wrong in this attempt to solve it.
AI: By the Taylor series we have
$$\tan x=x+\frac{x^3}{3}+O\left(x^5\right)$$
and
$$\sin x=x+O(x^2)$$
so
$$\lim_{x\to0}\frac{x-\tan(x)}{x^2 \cdot \sin(x)}=\lim_{x\to0}\frac{-\frac{x^3}{3}+O\left(x^5\right)}{x^3+O(x^4)}=-\frac{1}{3}$$
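If you want an independent check of both the limit and the Taylor expansion used here, a small SymPy sketch (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((x - sp.tan(x)) / (x**2 * sp.sin(x)), x, 0))   # -1/3
print(sp.series(sp.tan(x), x, 0, 6))                          # x + x**3/3 + 2*x**5/15 + O(x**6)
```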
|
H: Please help me to prove that $|f(x)| \le M \Vert x\Vert$ around $0$ when $|f(x)| \le \Vert x \Vert^\alpha$ around $0$
Question:
Suppose that $0<r<1$ and that $f\colon B_1(0) \to \Bbb R$ is continuously differentiable.
If there is an $\alpha>0$ such that $|f(x)| \le \Vert x \Vert^{\alpha}$ for all $x\in B_r(0)$, then prove that there is an $M>0$ such that $|f(x)| \le M\Vert x \Vert$ for $x\in B_r(0)$
Solution trial: let $x_k\in B_r(0)$ be a sequence with $x_k\to 0$ as $k\to \infty$
Since $f$ is continuously differentiable, $f$ is continouos as well.
Since $f$ is continouos and $x_k\to 0$, it must be that $f(x_k)\to f(0)$.
That is, $$\lim_{k\to\infty}|f(x_k)|\le \lim_{k\to\infty}\Vert x_k \Vert^{\alpha}=0.$$
This is just an idea that may not work. Please help me find a valid proof. Thank you.
AI: Lemma: There exist $\delta > 0$ and $M>0$ such that whenever $\Vert x \Vert < \delta$, $|f(x)| \le M \Vert x \Vert$.
Warning: I have not mucked about with this sort of mathematics in a number of years, so please check my argument carefully.
Since $f$ has continuous first partial derivatives at $0$, it is differentiable at $0$.
That is,
$$\lim_{x \to 0} \frac {f(x) - f(0) - x \cdot \nabla f(0)}{\Vert x \Vert} = 0.$$
Since $|f(0)| \le \Vert 0 \Vert^\alpha = 0$, $f(0)=0$.
Thus $$\lim_{x \to 0} \frac {f(x) - x \cdot \nabla f(0)}{\Vert x \Vert} = 0.$$
That is, for any $\epsilon > 0$ there is a $\delta > 0$ such that whenever $\Vert x \Vert < \delta$, $$\left\vert \frac {f(x) - x \cdot \nabla f(0)}{\Vert x \Vert} \right\vert < \epsilon.$$
Rearranging the fraction,
$$\left\vert \frac {f(x)}{\Vert x \Vert} - \frac {x \cdot \nabla f(0)}{\Vert x \Vert} \right\vert < \epsilon.$$
Now the quantity $$q(x) := \frac{x\cdot\nabla f(0)}{\Vert x \Vert}$$ does not depend on the magnitude of $x$, so by a continuity and compactness argument, it is bounded.
Thus the quantity $$\frac {f(x)}{\Vert x \Vert}$$ is also bounded.
$\square$
|
H: Mergelyan's theorem from Runge's theorem?
From Conway, A course in functional analysis, page 85. Corollary 8.5.
I want to ask for a hint how to deduce Mergelyan's theorem from Runge's theorem, assuming a functional-analysis-style proof. This is listed as a corollary after Runge's theorem with no proof given. I thought about it, but I cannot remove the key construction of a measure $$\overline{\mu}(w)=\int_{K}\frac{d|\mu|(z)}{|z-w|}$$ in the book, because later one needs a variation of Cauchy's integral formula to prove that if $\int gd\mu=0$ for each $g$ in the closure in $C(K)$ of rational functions with poles in $E$, then $\int fd\mu=0$ for any $f$ analytic on an open set containing $K$. In order to show that, in a certain step (8.3) one needs to prove that $$(\frac{d}{dw})^{n}\overline{\mu}(w_{0})=0$$ for any $n$, hence $\overline{\mu}(w_{0})$ must be $0$. This relies on the assumption that $g$ is a rational function.
The key assumption in Mergelyan's theorem is $\mathbb{C}-K$ is connected. Therefore I need to use the assumption somehow. It is not correct that every meromorphic function on $\mathbb{C}-K$ must be analytic. So if I do not modify the author's proof significantly, I do not know how to prove Mergelyan's theorem based on his work. I feel I must be thinking in the wrong way somehow.
AI: You are overthinking this. Let me quote Runge's theorem from the book:
8.1. Runge's theorem. Let $K$ be a compact subset of $\mathbb C$ and let $E$ be a subset of $\mathbb C_\infty\setminus K$ that meets each component of $\mathbb C_\infty\setminus K$. If $f$ is analytic in a neighborhood of $K$, then there are rational functions $f_n$ whose only poles lie in $E$ such that $f_n\to f$ uniformly in $K$.
(Here $\mathbb C_\infty$ is the extended complex plane.)
If $\mathbb C \setminus K$ is connected, you can choose $E=\{\infty\}$, thus forcing all $f_n$ to be polynomials.
|
H: Next step in solving an equations...
I have managed so far to break down an the following equation:
$x^n+y^n=1$
to
$x^n=1-y^n$
but what is the next step to get $x$ on its own?
I have hopped over here from StackOverflow, where I am trying to draw a superellipse where $a$ and $b$ are always 1. So apologies for my lack of terminology! I have a very beginner understanding of mathematics and almost no understanding of mathematical notation for equations!
AI: $$x^n+y^n=1$$
$$\iff x^n = 1 - y^n$$
$$\implies \large (x^n)^{\frac 1n} = x = \left(1 - y^n\right)^{\frac 1n}$$
Another way of expressing "to the $1/n$th power" is "the nth root of", which is denoted on the right-hand side below:
$$x = \left(1 - y^n\right)^{\large \frac 1n} = \sqrt[\Large n]{1 - y^n}$$
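Since the motivation was drawing a superellipse with $a=b=1$, here is a minimal sketch of how the rearranged formula can be used to generate points; it assumes NumPy and only covers the first quadrant, where $x, y \ge 0$ (the other quadrants follow by symmetry):

```python
import numpy as np

def superellipse_points(n_exp, num=200):
    """First-quadrant points on x^n + y^n = 1, using x = (1 - y^n)^(1/n)."""
    y = np.linspace(0.0, 1.0, num)
    x = (1.0 - y**n_exp) ** (1.0 / n_exp)
    return x, y

x, y = superellipse_points(4)     # n = 4 gives a rounded-square shape
print(x[:3], y[:3])
```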
|
H: Poisson Estimators
Consider a simple random sample of size $n$ from a Poisson distribution with mean $\mu$. Let $\theta=P(X=0)$.
Let $T=\sum X_{i}$. Show that $\tilde{\theta}=[(n-1)/n]^{T}$ is an unbiased estimator of $\theta$.
AI: In this calculation, we assume that you are familiar with the moment generating function, and know in particular that the moment generating function of a Poisson $X$ with parameter (mean) $\mu$ is given by
$$M_X(t)=E(e^{tX})=e^{\mu(e^t-1)}. \tag{1}$$
Temporarily, for typing ease, let $c=\frac{n-1}{n}$. We want to find the expectation of $c^T$, that is, the expectation of
$$c^{X_1+X_2+\cdots+X_n}.$$
Thus we want
$$E(c^{X_1}c^{X_2}\cdots c^{X_n}).\tag{2}$$
By independence, this is the product of the expectations, which is
$$\left(E(c^{X_1})\right)^n.\tag{3}$$
So now we go after $E(c^{X_1})$.
Putting $t=\ln c=\ln((n-1)/n)$ in (1) we find that
$$E(c^{X_1})=e^{\mu ((n-1)/n-1)}=e^{-\mu/n}.\tag{4}$$
Expression (3) tells us to take the $n$-th power of this. We get $e^{-\mu}$, which is $\theta$. This completes the proof.
Remark: For completeness, we sketch the calculation of the moment generating function. We have
$$E(e^{tX})=\sum_{k=0}^\infty e^{tk} e^{-\mu}\frac{\mu^k}{k!}.$$
Reorganize this as
$$e^{-\mu} \sum_{k=0}^\infty \frac{w^k}{k!},$$
where $w=\mu e^t$.
But we recognize $\sum_0^\infty \frac{w^k}{k!}$ as the series expansion of $e^w$, and now it is just a matter of putting pieces together.
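A Monte Carlo sanity check of the unbiasedness claim (a sketch, assuming NumPy; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, reps = 10, 2.0, 200_000

samples = rng.poisson(mu, size=(reps, n))
T = samples.sum(axis=1)
theta_tilde = ((n - 1) / n) ** T     # the estimator [(n-1)/n]^T

print(theta_tilde.mean())            # Monte Carlo estimate of E(theta_tilde)
print(np.exp(-mu))                   # theta = P(X = 0) = e^{-mu}; the two should be close
```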
|
H: Integrate $\int e ^ \sqrt{x} \, dx$
$$\int e ^ \sqrt{x} \, dx$$
I don't even know how to begin. I tried u sub but obviously doesn't work, no variable to cancel. I tried the formula I memorized, $$\int ue^{au} \, du = \frac{1}{a^2} (au - 1)e^{au},$$ but that didn't work and I think maybe it isn't supposed to.
AI: HINT: Let $u=\sqrt x$; $du=\frac1{2\sqrt x}dx$, so $dx=2\sqrt x du=2u\,du$. Now you have an integral that you can compute by integrating by parts.
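A quick numerical check that the substitution really does turn the integral into $\int 2u e^u\,du$ (a sketch using midpoint sums over $[0,1]$, assuming NumPy; this is not the by-parts step itself):

```python
import numpy as np

N = 100_000
t = (np.arange(N) + 0.5) / N              # midpoints of [0, 1]
lhs = np.mean(np.exp(np.sqrt(t)))         # approximates the integral of e^{sqrt(x)} over [0, 1]
rhs = np.mean(2 * t * np.exp(t))          # approximates the integral of 2u e^u over [0, 1]
print(lhs, rhs)                           # both approximately 2
```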
|
H: Is the Fibonacci number-like function too trivial to investigate about?
To make some interesting recursive function, I generalized Fibonacci numbers to a function $f(x)$ such that satisfies the following condition:
Given a function $g(x)$, such that $g(0)=0$ and $g(1)=1$, defined on the interval $x\in [0,2)$, set $f(x)=g(x)$ on the interval $x\in [0,2)$ and require $f(x+2)=f(x+1)+f(x)$ for all $x\in \mathbb{R}$.
I believe that this function has a boring mechanism and there is not so interesting feature in it. Do you think the function is worth investigating deeply?
AI: You did not define a function, but rather a family of functions: for each $g$ you get a different $f$. Functionally, there is nothing deep going on here. For each pair of initial values $g(x)$ and $g(x+1)$ given by $g$, you generate recursively a Fibonacci-like sequence. So what is going on here is that you have a whole family of such sequences, given by the initial values specified by $g$. Since $g(0)=0$ and $g(1)=1$, one of those sequences is actually the Fibonacci sequence. The others may or may not be.
|
H: Find the number of ways of picking the following cards from a standard, 52-card deck.
a) a king and a queen
b) a king or queen
c) king and a red card
d) king or a red card
For a), I can see that, since there are 4 kings and 4 queens in a deck, the number of ways of picking both would be $(4)(4) = 16$. This is the multiplication rule? Why does this work, and how would I know to use it?
For b), I can see that the ways of picking a king OR a queen is, instead of multiplying, I must add $4+4=8$ ways. Addition rule? When does this work, and how do I know to use it?
c) and d) are where things start to get even fuzzier for me. How would I go about starting these?
I'm thinking that if a) is similar to c), then couldn't I use the multiplication rule? As in, 4 kings times 26 total red cards? But this produces an incorrect answer. Next, I thought that maybe I should exclude the kings from the set of red cards, but 4 times 22 is still incorrect.
AI: The reason you can use the multiplication rule in the first one is that you are computing the number of ways to take one item each from two sets which are disjoint. There may be a more elegant way of explaining this, but I am still an amateur. :)
Presumably, by "a king and a red card" they mean "two cards, one of which is a king, and the other of which is a red card." E.g. a single red king would not suffice. If this is the case, you must take $2(26)$, which is 2 for the black kings and 26 for the red cards (same rule as before). Now you start counting the cases involving sets that are not disjoint. Take the king of diamonds...there are 25 red cards you can pair with it. Now take the king of hearts. Again there are 25 red cards you can pair with it, but one of the possibilities (drawing the two red kings) was already considered. So the answer should be $$2(26) + 1(25) + 1(24).$$ In part d, you just have to take the number of elements in the set $\{\text{King or red card}\}$, which is 28.
Edit: I fixed an error, but then my correction was wrong, so I changed it back!
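If it helps, here is a brute-force count that matches the reasoning above; the deck encoding is made up for the sketch, and part (c) is interpreted as choosing two distinct cards:

```python
from itertools import combinations

suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(rank, suit) for rank in range(13) for suit in suits]   # rank 0 plays the role of "king"

def is_king(card): return card[0] == 0
def is_red(card): return card[1] in ('hearts', 'diamonds')

# (c) unordered pairs in which one card is a king and the other is red
c = sum(1 for a, b in combinations(deck, 2)
        if (is_king(a) and is_red(b)) or (is_king(b) and is_red(a)))
print(c)   # 101 = 2(26) + 25 + 24

# (d) single cards that are a king or a red card
print(sum(1 for card in deck if is_king(card) or is_red(card)))   # 28
```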
|
H: How to graph the trigonometric function when period is more than the range?
I have to graph this trigonometric equation for a given range,
$y = -\sin \left(\dfrac{x}{3}\right) - 2, ~-\dfrac{\pi}{2} \le x \le \dfrac{\pi}{2}$
But the period is coming out to be $6 \pi$. So the question is how?
I would appreciate any help.
AI: Lightly sketch two vertical lines, each perpendicular to the $x$-axis: one at $x = \pi/2$, the other at $x = -\pi/2$.
Then sketch the portion of the curve's period that lies between those vertical lines. Feel free to sketch in the whole period, lightly, but then darken the portion contained on and between the two lines. The points where the graph intersects the two vertical lines should be darkened in, too, to illustrate that they are endpoints which are included in your closed interval.
From Wolfram Alpha: below is the full period of the curve $\;y = \sin\left(\frac x3\right) - 2$:
And what follows is a graph of the two vertical lines at $x = \pi/2$, $x = -\pi/ 2$, with the portion of the sine curve $\;y = \sin\left(\frac x3\right) - 2$ that lies between (and intersects) those lines. Note that the portion of the curve that lies within the given range resembles a line: if you look at the vertical line $x = 0$ as a common reference point for both graphs, you can see which portion of the curve is the focus of the second graph: Indeed, the curve below which is sandwiched between our two vertical lines includes an inflection point of the given sine curve, and explains why it "looks like a straight line."
|
H: Uniformly continuous function on a disconnected domain
$f:A=\mathbb{Q}\cap (0,7)\to\mathbb{R}$ be an uniformly continuous function
can anyone tell me which of the following are correct?
$1. f \text{ is bounded}$
$2. f$ must be constant
$3. f$ is differentiable at all rational points
$4. f$ is differentiable in $(0,7)$
Since a uniformly continuous function sends bounded sets to bounded sets, $1$ is true; I have no counterexample for $2$; and since the domain is totally disconnected, I believe $3$ and $4$ are false. Am I right?
AI: I don't think $f$ has to be constant. $f(x) = x$ seems to be uniformly continuous on $A$. In fact, if $g: [0, 7] \to \mathbb R$ is continuous, its restriction to $A$ is uniformly continuous. (To see this, suppose $\epsilon > 0$ is given, then there exists $\delta > 0$ such that $|x - y| < \delta$ implies $|g(x) - g(y)| < \epsilon$, where $x, y \in [0, 7]$. Obviously the part of the sentence about $x, y$ is true for $x, y \in A \subseteq [0, 7]$ also, so $f$ is uniformly continuous.)
That $f$ is not necessarily differentiable at rational points is not a result of the domain being disconnected.
You can just pick $f(x) = \max\{1, x\}$. $f$ is uniformly continuous but not differentiable at $1$ because the left limit and the right limit of $\frac{f(x) - f(1)}{x - 1}$ as $x \to 1$ are different.
|
H: derivative as a linear map
$f:\mathbb{R}^n\to\mathbb{R}$ is given by $f(x_1,\dots,x_n)=a_1x_1+\dots+a_nx_n$
where $a=(a_1,\dots,a_n)$ is a fixed nonzero vector, $Df(0)$ denote the derivative of $f$ at $0$ could anyone tell me which of the following are correct?
$1. Df(0)$ is a linear map from $\mathbb{R}^n\to\mathbb{R}$
$2. [(Df)(0)]a=\|a\|^2$.
$3. [(Df)(0)]a=0$
$4. [(Df)(0)]b=a_1b_1+\dots+a_nb_n$ for some $b=(b_1,\dots,b_n)$
I am sure $1$ is true; $(Df)(x)$, where $x=(x_1,\dots,x_n)$, will be a $1\times n$ column which is $(a_1,\dots,a_n)^T$, am I right? So $2$ is true, and $4$ is true. Am I right?
AI: 1) is true by definition.
Since $f$ is linear, it is easy to see that $Df(x)\delta = a^T \delta$. (Hence, in this case, $Df(0) = Df(x)$ for all $x$.)
2) $Df(0) a = a^T a = \|a\|^2$.
3) $Df(0) a = 0$ iff $ a = 0$.
4) $Df(0)b = a^T b$.
|
H: Learning to read mathese - how do I interpret $c_j = a + (j - \frac{1}{2}) \Delta x$?
$c_j = a + \left(j - \frac{1}{2}\right) \Delta x$
I am trying to learn the midpoint rule for approximation of the area under a curve but I can't translate this into something I can work with.
The formula to memorize is $M_n = \Delta x ( f(c_1) ... f(c_n))$
simple enough
So
$\int_0^2 x^2 dx$
So my $\Delta x$ is $\frac{1}{2}$
$c_1 = 0 + (1 - \frac{1}{2}) * \frac{1}{2}$
$c_2 = 0 + (2 - \frac{1}{2}) * \frac{1}{2}$
$c_3 = 0 + (3 - \frac{1}{2}) * \frac{1}{2}$
Is that correct, or is it really supposed to be $j= \Delta x$?
$c_{\frac{1}{2}} = 0 + (\frac{1}{2} - \frac{1}{2}) * \frac{1}{2}$
This also doesn't make sense to me, where do I begin? And how do I interpret this.
AI: $$M_n = \Delta x \left( f(c_1) + \cdots + f(c_n)\right)$$
$$c_j = 0 + \left(j - \dfrac 12\right)\Delta x = j\Delta x - \dfrac {\Delta x}{2}$$
Now, why would you put $j =1/2$?
$\;j$ is either $\;1, \,2, \,3,\;\text{or}\;4.$ You've computed $c_1$ through $c_3$, but still need to compute $$c_4=0+(4-(1/2))(1/2).$$
Once you have all the values for $c_1, c_2, c_3, c_4$, you need to compute $M_n$ accordingly, where I am guessing $f$ refers to the integrand $x^2$, and $f(c_j)$ being the function $x^2$ evaluated at $x = c_j$.
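Putting the pieces together as a tiny program may make the indexing clearer; this is just a sketch of the midpoint rule described above:

```python
def midpoint_rule(f, a, b, n):
    dx = (b - a) / n
    # c_j = a + (j - 1/2) * dx for j = 1, ..., n
    return dx * sum(f(a + (j - 0.5) * dx) for j in range(1, n + 1))

print(midpoint_rule(lambda x: x * x, 0.0, 2.0, 4))   # 2.625, close to the exact value 8/3
```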
|
H: (Qual Question) Example of a non-measurable function $a_{ij}:\mathbb{Z}\times\mathbb{Z}\to\mathbb{R}$
The title says it all. The question arises from a qualifying exam question in which it asks to provide an example in which we may have $A\neq B$ where
$A=\sum_{i\geq1,j\geq1}^{\infty}a_{ij}$ and $B=\sum_{j\geq1,i\geq1}^{\infty}a_{ij}$. The iterated inner/outer sums in each expression are assumed to be absolutely convergent.
The only instance that this could fail is when $a_{ij}$ is non-measurable, by Tonelli's theorem since we have by hypothesis
$$\int_{\mathbb{Z}}\;d\#_{i}\int_{\mathbb{Z}}|a_{ij}|\;d\#_{j}<\infty,$$
i.e. the iterated integral of $|a_{ij}|$ exists and is finite and the product $\sigma$-algebra on $\mathbb{Z}\times\mathbb{Z}$ is $\sigma$-finite with respect to counting measure (see below).
But the $\sigma$-algebra on $\mathbb{Z}$ is just $2^{\mathbb{Z}}$, and since the product $\sigma$-algebra on $\mathbb{Z}\times\mathbb{Z}$ is just the smallest $\sigma$-algebra which contains the sets $A\times B$ ($A\subset\mathbb{Z}$, $B\subset\mathbb{Z}$), the $\sigma$-algebra on $\mathbb{Z}\times\mathbb{Z}$ is also just $2^{\mathbb{Z}\times\mathbb{Z}}$, and this would imply all complex-valued functions on $\mathbb{Z}\times\mathbb{Z}$ are measurable, no?
So is the question incorrectly stated, or am I missing something here? Part (b) asks for an additional assumption on $a_{ij}$ so as to guarantee the interchange of summation does not affect the sum, but it seems all "reasonable" conditions have already been imposed in part (a).
AI: You're misunderstanding
The iterated inner/outer sums in each expression are assumed to be absolutely convergent.
It does not mean that
$$\int_{\mathbb{Z}}\;d\#_{i}\int_{\mathbb{Z}}|a_{ij}|\;d\#_{j}<\infty,$$
or equivalently $\sum\limits_{i,\,j} \lvert a_{ij}\rvert < \infty$. If it did, you'd have a summable family, and all ways to sum the family would lead to the same sum.
It means that
$$\bigl(\forall i\bigr) \left(\sum_{j} \lvert a_{ij}\rvert < \infty\right),$$
and, with $b_i := \sum\limits_j a_{ij}$, you have
$$\sum_i \lvert b_i\rvert < \infty.$$
(And similarly for the other nesting.)
For an example, see Brian's answer
|
H: Partial Derivatives on Manifolds - Is this conclusion right?
I'm self-studying Differential Geometry and I've asked here about how to describe functions on a manifold, and now that I'm pretty sure that my conclusions about that are correct I've started to think on how do we compute partial derivatives. Well, Spivak defines on his book that if $(x,U)$ is a chart on a smooth manifold $M$ and if $f : U \to \Bbb R$ is differentiable, then we can define the $i$-th partial with respect to this chart as:
$$\frac{\partial f}{\partial x^i}(p)=D_i(f\circ x^{-1})(x(p))$$
This is very natural and very good, but I started to think on ways to compute with this. Indeed, I'm studying Spivak's Differential Geometry books, but in addition to the theory, I'm trying to get the way to compute things. So my thought was: following the question I've refered, if I can express any function $f : U \to \Bbb R$ as combination of the coordinate functions with usual functions defined on the real line, then I can take any partials if I know the partials of the coordinate functions. Indeed, I computed as follows:
$$\frac{\partial x^i}{\partial x^j}(p)=D_j(x^i \circ x^{-1})(x(p))$$
But I've defined $x^i=I^i \circ x$ where $I:\Bbb R^n \to \Bbb R^n$ is the identity. So we have that:
$$x^i\circ x^{-1}=(I^i\circ x)\circ x^{-1}$$
But composition is associative, and since $x : U\to \Bbb R^n$ and $x^{-1} : x(U)\subset \Bbb R^n \to U $ we have that $x \circ x^{-1} : x(U)\subset \Bbb R^n \to \Bbb R^n$ and this is just the identity $I$, so that $(I^i \circ x)\circ x^{-1} = I^i$ and so
$$\frac{\partial x^i}{\partial x^j}(p)=D_jI^i(x(p))$$
But we know that $D_jI^i(q) = \delta_j^i$ independent of the point, where $\delta^i_j$ is the Kronecker Delta. So we have that:
$$\frac{\partial x^i}{\partial x^j}(p)=\delta_j^i$$
I've shown similarly that the partial derivative on a manifold is linear, obeys the product rule and that it obeys the chain rule. So suppose now $M$ is a manifold of dimension $2$, $U\subset M$ and that $(x,U)$ is a chart. Then we have coordinate functions $x^1$ and $x^2$ and we can express for instance the following function $f : U \to \Bbb R$ as:
$$f = \sin \circ (x^1 x^2)$$
Then we have that the partial with respect to $x^1$ for instance is:
$$\frac{\partial f}{\partial x^1}(p)=\cos(x^1(p)x^2(p))\frac{\partial (x^1 x^2)}{\partial x^1}(p)=\cos(x^1(p)x^2(p))\left(\frac{\partial x^1}{\partial x^1}(p)x^2(p)+x^1(p)\frac{\partial x^2}{\partial x^1}(p)\right)$$
And using the result I've shown above I would get:
$$\frac{\partial f}{\partial x^1}(p)=x^2(p)\cos(x^1(p)x^2(p))$$
So is really like this that we compute partial derivatives in practice on Manifolds? Are all my conclusions correct?
Thanks very much!
AI: Your calculations appear on the mark. The idea of differentiating with respect to manifold coordinates is necessarily abstract. However, at the end of the discussion the point to remember is simply this:
a manifold is a curved space which allows a local calculus
The interesting thing about the derivatives you're working through is that we may take them on a sphere, cylinder, projective space, three-dimensional Thurston model geometry, whatever. The method to calculate is the same. The partial derivative of $f$ with respect to $x^j$ measures the change in $f$ along the $x^j$-th coordinate direction on the manifold at the point in question. Notice this is the same as it was in calculus, just now the space is (possibly) curved.
Furthermore, yes, once you have manifold coordinates and functions expressed in manifold coordinates it is just like what we already know from multivariate calculus. It has to be. Remember $\mathbb{R}^n$ is the essential and trivial example of an $n$-dimensional manifold. Therefore, you should expect to recover all the usual theorems of calculus for manifolds; linearity, the Leibniz rule, the chain rule, implicit and inverse function theorems and so forth. Even integration, once given a suitable way of patching things together can be discussed. Of course, sometimes it's easy to get lost in all the notation, but it seems you're finding your way just fine.
|
H: $x \sin x=2$ why is my proof that there no solutions wrong?
$\frac 12 x \sin x=1$. Let's look at a right triangle with base $x$ and altitude $\sin x$. Then our equation is for the area of this triangle. Let the sides of the triangle be $a=x$, $b=\sqrt {x^2+\sin^2 x}$, and $c= \sin x$. According to wikipedia, Heron's formula can be written as $$A=\large \frac { \sqrt {4a^2c^2-(a^2+b^2-c^2)^2}}{4}$$
Plugging in:
$4=\large \sqrt{4x^2 \sin^2 x-(x^2+x^2+\sin^2 x-\sin ^2 x)^2}$
$4=x^2 \sin^2 x -x^4$
$x^2(x^2- \sin^2 x)=-4$
$x^2$ will always be positive, and $\sin^2 x$ is never greater than $x^2$ , so this equation can have no real solutions. The original has solutions, so why is this wrong?
AI: This particular wikipedia formula is wrong.
It should be correctly either
$$A=\large \frac { \sqrt {4a^2b^2-(a^2+b^2-c^2)^2}}{4}$$
or
$$A=\large \frac { \sqrt {4a^2c^2-(a^2-b^2+c^2)^2}}{4}\,.$$
Mind the symmetry..
|
H: Interest Calculation
A university student receives his statement for his tuition and notices that he doesn't have enough money to pay it all off at once. The student inquires about interest rates at his university and is told the following information:
You will be subject to interest charges of one per cent monthly on the amount owing from your last statement. The annual interest rate is $12.7$ per cent.
After paying all he has, he still owes his university $\$2420.$ Since he has a part time job in which he earns $\$600$ monthly, he decides to pay off $\$600$ every month on his balance.
How much money will he owe the university each month? Assume school starts in September. What month will the balance be completely paid off?
AI: After the first month, the student will owe
$$2420 (1.01) = 2444.20$$
Then the student pays 600 dollars and owes 1844.20. With another compounding period, the student owes about 1862.64. The student pays 600 dollars and owes 1262.64. And so on.
Algebraically, the amount the student owes looks like, where $P$ is the principal and $m$ is the monthly payment,
$$[[[P (1+i) - m](1+i)-m](1+i)-m]....$$
which, when rearranged, becomes, for n compounding periods
$$P(1+i)^n - m \sum_{k=0}^{n-1} (1+i)^k = P (1+i)^n - m \frac{(1+i)^n-1}{i}$$
Let's say $n=4$ in your example; then the student owes
$$2420 (1.01)^4 - 600\frac{(1.01)^4-1}{0.01} \approx 82.02$$
The university would then compound interest once more, and the payoff amount would be $82.84$.
Here's a complete table of what the student will owe each month on this schedule:
$$\left(
\begin{array}{cc}
1 & 1844.20 \\
2 & 1262.64 \\
3 & 675.27 \\
4 & 82.02 \\
\end{array}
\right)$$
ADDENDUM
You should know that the university charges $12.7\%$ interest annually, as $(1.01)^{12}-1 \approx 0.127$
Also, the student may pay off the loan completely in $n=4$ months by setting the above expression to zero to get a monthly payment that allows this. The result is
$$m = P \frac{i}{1-(1+i)^{-n}} = 620.20 $$
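A small loop reproducing the schedule above (a sketch; it simply compounds at 1% and then subtracts the payment each month):

```python
def balances(principal, rate, payment, months):
    owed, table = principal, []
    for month in range(1, months + 1):
        owed = owed * (1 + rate) - payment   # compound, then pay
        table.append((month, round(owed, 2)))
    return table

for month, owed in balances(2420.00, 0.01, 600.00, 4):
    print(month, owed)
# 1 1844.2
# 2 1262.64
# 3 675.27
# 4 82.02
```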
|
H: Meaning of Problem in Evaluation
In Algebra, a good rule-of-thumb I saw was If solving an equation leads to a contradiction, there is no solution. And this makes sense to me, particularly in the following case:
$x-1 = x +1 \Rightarrow -1 = 1$
We are being asked to find a number whose predecessor is the same as its successor. That would require 1 and -1 to have the same effect on $x$. Impossible! But I was shown another case of no solution -- an equation that seemed to have a valid solution up until checking:
$\frac{2}{x+2} + \frac{x}{x-2} = \frac{x^2+4}{x^2-4} \Rightarrow 2(x-2)+x(x+2) = x^2+4$
Solving this for $x$, we can get an answer of $x=2$. But we see in the original equation, $x\neq2$ because of division by zero. So there is no solution.
My question is: What does it mean (geometrically / algebraically / logically) if solving an equation does not halt in a contradiction, but causes a contradiction on evaluation?
AI: $$\frac{2}{x+2} + \frac{x}{x-2} = \frac{x^2+4}{x^2-4} \quad \overset{\large x\neq \pm 2}{\Longrightarrow} \quad2(x-2)+x(x+2) = x^2+4$$
In moving from the left-hand side to the right-hand side of the "implication" above, you implicitly multiplied both sides of the equation by $$x^2 - 4 = (x + 2)(x - 2).$$ If we do not qualify that "operation" by the provision $x\neq \pm 2$, we are allowing for the possibility of multiplication by zero. Any equation, when multiplied by zero, may change in ways that introduce extraneous solutions which aren't actually solutions to the original equation. Without excluding $x = \pm 2$ (to disallow multiplication by zero), we dilute the original equation, and information about the original equation is lost.
So the apparent solution $x = 2$ was introduced by the manipulations performed on the original equation, and in particular by forgetting the provision I've made explicit above the "implication" connective. I wouldn't necessarily say that we arrived at a contradiction, but rather that we found an apparent solution that doesn't actually apply to the original equation, which has no solution.
|
H: Calculation for absolute value pattern
I have a weird pattern I have to calculate and I don't quite know how to describe it, so my apologies if this is a duplicate somewhere..
I want to solve this pattern mathematically. When I have an array of numbers, I need to calculate a secondary sequence (position) based on the index and the size, i.e.:
1 element:
index: 0
position: 1
2 elements:
index: 0 1
position: 2 1
3 elements:
index: 0 1 2
position: 2 1 3
4 elements:
index: 0 1 2 3
position: 4 2 1 3
5 elements:
index: 0 1 2 3 4
position: 4 2 1 3 5
6 elements:
index: 0 1 2 3 4 5
position: 6 4 2 1 3 5
etc....
The array can be 1-based as well if that would make it easier. I wrote this out to 9 elements to try and find a pattern but the best I could make out was that it was some sort of absolute value function with a variable offset...
AI: For an array with $n$ elements, the function is:
$$f(n,k)=1+2(k-[n/2])$$
when $k\geq [n/2]$ and
$$f(n,k)=2+2([n/2]-k-1)$$
when $k<[n/2]$, where $[\cdot]$ is the floor function. Example:
$$f(5,4)=1+2(4-2)=5$$.
$$f(5,1)=2+2(2-1-1)=2$$
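Since the question came from code, here is a direct Python translation of the formula, checked against the patterns listed in the question (a sketch; the function name is made up):

```python
def position(n, k):
    """Position value for index k (0-based) in an array of n elements."""
    half = n // 2                     # floor(n / 2)
    if k >= half:
        return 1 + 2 * (k - half)
    return 2 + 2 * (half - k - 1)

for n in range(1, 7):
    print(n, [position(n, k) for k in range(n)])
# 1 [1]
# 2 [2, 1]
# 3 [2, 1, 3]
# 4 [4, 2, 1, 3]
# 5 [4, 2, 1, 3, 5]
# 6 [6, 4, 2, 1, 3, 5]
```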
|
H: Invariant subspace
Suppose that $v = v_1 + iv_2$, where $v_1$ and $v_2$ are real vectors. Show that if we view $A$ as defining a map $α$ of $\mathbb{R^3}$ into itself, then $α$ leaves the subspace spanned by $v_1$ and $v_2$ invariant.
I'm looking for hints of how I should approach this problem. I mean, do I need to go directly by somehow showing that $\alpha(v_1)$ and $\alpha(v_2)$ lies in subspace spanned by $v_1$ and $v_2$ or differently?
Note, this problem appeared on chapter on eigenvalues and eigenvectors (how to find them), but I cannot see how I can use them here. Help?
AI: We need to interpret the context a little. Presumably $A$ is a $3\times 3$ matrix, and induces a linear transformation on $V=\mathbb{R}^3$. This linear transformation has a possibly non-real eigenvalue $\lambda=a+ib$.
Let $v$ be an eigenvector for eigenvalue $\lambda$, and let $v=v_1+iv_2$.
We want to show that $Av_1$ and $Av_2$ are each in the (real) subspace spanned by $v_1$ and $v_2$. So we want to show that $Av_1$ is a (real) linear combination of $v_1$ and $v_2$, and that $Av_2$ is a linear combination of $v_1$ and $v_2$.
Calculate. We have $Av=\lambda v$. Thus
$$A(v_1+iv_2)=Av_1+iAv_2.\tag{1}$$
But also
$$A(v_1+iv_2)=(a+ib)(v_1+iv_2)= av_1-bv_2 +i(av_2+bv_1).\tag{2}$$
Look at the right-hand sides of (1) and (2). The real parts must be equal, and the imaginary parts must be equal.
We conclude that
$$Av_1=av_1-bv_2 \qquad\text{and}\qquad Av_2=bv_1+av_2.$$
This yields the desired result.
Note that the argument works for any vector space over the reals.
|
H: Find the number of bytes that begin with 10 or end with 01.
A sequence of digits where each digit is 0 or 1 is called a binary number. Each digit in a binary number is a component of the number. A binary number with eight components is called a byte. Find the number of bytes that begin with 10 or end with 01.
I start with the binary numbers that start with 10, but don't end in 01.
$$10000001$$ $$10000000$$ $$10000011$$
The first two digits are fixed. The middle 4 digits have two possibilities each, so $2^4 = 16$. The final two digits have three possibilities, $00,\ 11,$ and$\ 10$. So $(16)(3) = 48$, the number of bytes that begin with 10 but don't end with 01.
Then, I do the binary numbers that don't start with 10.
$$11000001$$ $$00000001$$ $$01000001$$
The last two digits are fixed, so following the same logic as above, $(16)(3) = 48$, the number of bytes that end with 01 but don't begin with 10.
Now, this is where I am stuck. I tried $48+48 = 96$, but the answer is 112.
AI: You forgot the numbers that start with 10 and end with 01 - 10xxxx01 - which gives you 16 numbers. 96 + 16 = 112.
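A brute-force confirmation over all 256 bytes (just a sanity check of the counting argument):

```python
count = sum(1 for b in range(256)
            if format(b, '08b').startswith('10') or format(b, '08b').endswith('01'))
print(count)   # 112
```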
|
H: Partition problem
While doing some programming, I found that I needed to design or implement an algorithm to do basically the same as the Partition Problem, which consists of partitioning a set of integers into two groups with the most equitable sums, but with floating point numbers. Is there any "efficient" solution? I know it is NP-hard, but I at least would like a "not-so-exponential" algorithm. Any help or ideas is greatly appreciated.
AI: Well, the best answer depends on the parameters of the problem. It is only nasty-nasty-nasty exponential if the numbers get large - large as in n bits for a large value of n, not large as in the number itself is large. For instance, if you had a thousand files that were all less than 10 megabytes in size and you wanted (for some reason) to put them on two DVDs and you wanted both DVDs to have the exact same amount of file space used, THAT could be solved very quickly. It would only take billions of billions of years if you threw out the constraint that they were all less than 10 megabytes in size and had the sizes capable of being things like 10^100 megabytes. This is the difference between strongly exponential and just exponential - because 10^100 is only a 100-digit number you see, and only takes 300ish bits to express.
One thing you could do - is use the fourier transform, and the property that convolution in time domain is equivalent to multiplication in frequency domain. So the dual possibilities that an element is in one set of the partition or the other, means it is either added or subtracted. So basically, if your numbers are 3, 7, 37, 22, and 101, you would have a function which is an impulse at 3 and -3, and another function which is an impulse at 7 and -7, etc., and then you convolve them all together; and by that I mean, you take the fourier transform, you multiply them, and then you just take the average of that function - if it is 0, then there is a frequency-0 component of 0, meaning there is no solution. And if it is nonzero, then there is a solution.
If however you do have some fairly large numbers (numbers like 10^100 taking 300 bits, not numbers like 10^300000 taking a megabyte), and you weren't concerned with getting an EXACT solution but just wanted to see if there was a NEAR solution, you could do the same concept - except just divide all the numbers by a power of 2 and round to the nearest integer, or perhaps round up AND down and have impulses at both integers, perhaps have the one it is closer to be an appropriately larger impulse. Then you wouldn't be doing a fourier transform on an unruly large quantity of points.
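For completeness, here is a sketch of the standard pseudo-polynomial subset-sum dynamic program alluded to above ("only exponential if the numbers get large"), adapted to floats by scaling and rounding as suggested; it is not the Fourier-transform approach, and the `scale` value and sample data are made up:

```python
def best_partition_difference(values, scale=1000):
    """Smallest achievable |sum(A) - sum(B)| after rounding values to 1/scale."""
    nums = [round(v * scale) for v in values]
    total = sum(nums)
    reachable = {0}                       # subset sums reachable so far
    for x in nums:
        reachable |= {s + x for s in reachable}
    best = min(reachable, key=lambda s: abs(total - 2 * s))
    return abs(total - 2 * best) / scale

print(best_partition_difference([3.2, 7.1, 37.0, 22.4, 101.5]))   # 31.8 for this toy data
```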
|
H: Does $\lim_{t\to 0}\frac{x \sin(xt)}{1+x^2}=0$ uniformly over $x\in \mathbb{R}$?
Consider the following limit: $$\lim_{t\to 0}\frac{x \sin(xt)}{1+x^2}.$$The question is to determine whether it exists, and if it does, whether it is uniform, i.e. whether there is $\delta_\epsilon>0$ not depending on $x$ such that $\left|\frac{x\sin(xt)}{1+x^2} - L\right|<\epsilon$ for all $0<t<\delta$, for all $x\in S$, for some set $S$ (say $S=\mathbb{R}$).
I think it is clear that the limit is zero for all $x$. In looking at the uniformity, certainly it converges uniformly on a bounded set $\{|x|<B\}$. I conjecture that it doesn't converge uniformly on $x\in \mathbb{R}$. To show that it doesn't, I'm trying to make convenient choices for $t$ to show that as $x$ gets large, a particular nonzero value of $\frac{x \sin(xt)}{1+x^2}$ occur arbitrarily close to $t=0$. For instance, I might choose $t = 1/x$, which gives $\frac{x \sin(1)}{1+x^2}$. But this goes to zero as $x\to \infty$, so this doesn't help.
I notice that $\sin(xt)=xt + O(x^3t^3)$, so $$\frac{x\sin(xt)}{1+x^2}=\frac{x^2 t}{1+x^2}+\frac{O(x^4 t^3)}{1+x^2},$$ which doesn't seem to shed any light.
Any ideas?
AI: Note that
$$
\left\lvert\frac{x}{1+x^2}\right\rvert\rightarrow0\text{ as }\lvert x\rvert\rightarrow\infty.
$$
So, for any $\epsilon>0$, you can find $X>0$ such that
$$
\left\lvert\frac{x\sin(tx)}{1+x^2}\right\rvert\leq\left\lvert\frac{x}{1+x^2}\right\rvert<\epsilon\text{ whenever }\lvert x\rvert\geq X,
$$
regardless of the value of $t$.
So, if you can show that convergence is uniform on $\{\lvert x\rvert\leq X\}$, you are golden!
|
H: Norm of integral operator
Consider the operator $T(f(t)) = \int_0^t f(s)ds$, where $t \in [0,1]$, and $f(t) \in C[0,1]$.
To prove $$\|T^n\| = \frac{1}{n!}$$
Thanks for suggestions.
AI: HINT:
$$
\|T^n\| = \sup_{\|f\|=1} \|T^nf\| = \sup_{\|f\| =1}\sup_{s_{n+1}\in [0,1]} \left|\int^{s_n=s_{n+1}}_0\cdots\int^{s_2}_0 f(s_1) \,ds_1\cdots ds_n\right|
\\
\leq \sup_{\|f\| =1}\sup_{s_{n+1}\in [0,1]} \int^{s_n=s_{n+1}}_0\cdots\int^{s_2}_0 \sup_{s_1\in[0,s_2]}|f(s_1)| \,ds_1\cdots ds_n
\\
\underbrace{\leq}_{\text{Why this inequality is true?}} \sup_{s_{n+1}\in [0,1]} \int^{s_n=s_{n+1}}_0\cdots\int^{s_2}_0 1\,ds_1\cdots ds_n = \sup_{s_{n+1}\in [0,1]} \frac{s_{n+1}^n}{n!}.
$$
Now try to find some $f\in C[0,1]$ such that the bound is achieved.
|
H: Explanation/How to use the Lattice isomorphism theorem
I am having trouble understanding some of the wordings of the Lattice isomorphism theorem (Also known as 4th isomorphism theorem) in group theory. I quote here the theorem as in Dummit and Foote
Let $G$ be a group and let $N$ be a normal subgroup of $G$. Then there is a bijection from the set of subgroups $A$ of $G$ which contain $N$ onto the set of subgroups $ \overline{A}=A/N$ of $G/N$. In particular every subgroup of $\overline{G}$ is of the form $A/N$ for some subgroup $A$ of $G$ containing $N$.
I'm a bit lost on the part where it says "Then there is bijection from the set of ......"
What exactly do they mean there is a bijection?. Do they mean this in the sense there is literally a bijection or are they using this word loosely?
How can one use this particular property to prove something useful? :)
My Professor used to say that this is the most natural of the isomorphism theorems but I just don't see/get it.
Can someone be kind enough to explain this better?. If you can include a typical example where this theorem is used can clear things up a bit.
Thanks
Edit
Question: Let $N$ be a order $7$ normal subgroup of $G$. Suppose that $G/N \cong D_{10}$ the dihedral group of order $10$. I want to prove that $G$ has a normal subgroup of order $35$.
Here is how I proceed. Since the given dihedral group has a normal subgroup (The cyclic subgroup of order $5$), $G/N$ must also have such a subgroup $M$. Now by the lattice isomorphism theorem, $M=A/N$ for some subgroup $A$ of $G$ containing $N$. Can I immediately conclude that $A$ is normal in $G$ here without referring to the index of $A$ in $G$?. (It turns out that $A$ in this instance has index $2$ so normal in $G$).
AI: Let $S$ be the set of all subgroups $A$ of $G$ such that $N\subseteq A$. Let $T$ be the set of all subgroups of $G/N$. The claim is that there is a bijection between the sets $S$ and $T$. The use of the word bijection is the usual one: there exists a function $\psi:S\to T$ which is both injective and surjective.
Side remark: The theorem is much stronger. Each of the sets $S$ and $T$ above are in fact lattices and the bijection is a lattice isomorphism. Moreover, the isomorphism preserves and reflects normality.
Applications:
1) A normal subgroup $N\subseteq G$ is maximal (i.e., no subgroup $H$ exists with $N\subsetneq H \subsetneq G$) iff the index of $N$ in $G$ is a prime number.
2) Let $N$ be normal in $G$. Then $G/N$ is simple iff there exists no normal subgroups $K$ of $G$ with $N\subset K \subset G$.
A similar result holds for rings, giving the classical and very important application
3) If $I$ is an ideal in a commutative ring $R$, then $R/I$ is a field iff $I$ is maximal in $R$. This gives a very efficient way of building fields.
Answer to you later edit: Yes, normality is both preserved and reflected by the bijection. That means that if $\psi(A)=H$, then $A$ is normal in $G$ iff $H$ is normal in $G/N$. Most other properties of groups though will not be preserved or reflected in this way.
|
H: I have to determine which of the following define a metric on $\Bbb R \,\,$?
I am stuck on the following problem:
Determine which of the following define a metric on $\Bbb R$:
$d(x,y)=\frac{|x-y|}{1+|x-y|}$
$d(x,y)=|x-2y|+|2y-x|$
$d(x,y)=|x^2-y^2|$
MY ATTEMPT:
In each of the aforementioned cases, $d(x,y) \ge 0 $ and $d(x,y)=d(y,x).$ So, I have to check the triangle inequality.
For option 1, $$d(x,y)=\frac{|x-y|}{1+|x-y|} \implies d(x,y) \le \frac {|x-y|}{|x-y|}=1$$ and hence $d(x,y) =1 \le d(x,z)+d(z,y)=2.$ So, option 1 defines a metric on $\Bbb R$.
For option 2, I can not prove triangle inequality and I need help here.
For option 3, we see that
$$\begin{align*}
d(x,y)&=|x^2-y^2|\\\\
& =|(x^2-z^2)+(z^2-y^2)| \\\\
&\le |x^2-z^2|+|z^2-y^2|\\\\
&=d(x,z)+d(z,y)
\end{align*}$$ and so option 3 defines a metric on $\Bbb R$.
Am I right? Thanks in advance for your time.
AI: For 2) $d(1,2)=6$ while $d(2,1)=0$, so it fails to be a metric on two accounts of the definition.
For 1) your proof is incorrect as clearly $d(x,y)$ does not have to be equal to $1$ for all $x,y$ (in fact, it's never equal to $1$). You need to show that $d(x,y)\le d(x,z)+d(z,y)$, so knowing that $d(u,v)\le 1$ does not help at all. Try proving it in general: for any metric $d$ whatsoever, the function $\rho(x,y)=\frac{d(x,y)}{1+d(x,y)}$ is a metric.
For 3) your proof of the triangle inequality is correct but $d(1,-1)=0$, thus it is not a metric space.
|
H: $F(u)= \frac{2}{\pi}\int_{0}^\infty \frac{uf(x)}{u^2 + x^2}dx.$ Show that $\lim\limits_{u\downarrow0}F(u)=f(0)$.
This is an problem from p. 296 of Buck's Advanced Calculus: Let $f$ be continuous on the interval $0\leq x < \infty$ with $|f(x)|\leq M$. Set $$F(u)= \frac{2}{\pi}\int_{0}^\infty \frac{uf(x)}{u^2 + x^2}dx.$$ Show that $\lim\limits_{u\downarrow0}F(u)=f(0)$.
A heavy theme of the preceding chapter has been uniform convergence of improper integrals with parameters, and the operations that uniform convergence justifies. For instance, we might want to express the integrand as itself being an integral in $u$, and then try to switch the order. Or we might try to differentiate with respect to $u$ underneath the integral (and hope that $\int_{0}^\infty \phi_{1}(u,x)dx$ converges uniformly, so we are justified in doing so).
The integral converges uniformly on $u\in [0, L]$ for $L>0$, but I don't think it converges uniformly on $u\in \mathbb{R}$ (perhaps I am wrong).
At first I thought of integrating $F(u)du$ on $[0,L]$, and switching the order (which is justified by uniform convergence), and then seeing what we can come up with there. If indeed it is true that $\lim\limits_{u\downarrow0}F(u)=f(0)$, then $F$ should be integrable on $[0,L]$.
$$\int_0^L F(u)du = \int_0^L\frac{2}{\pi}\int_0^\infty\frac{uf(x)}{u^2+x^2}dx\,du=\frac{2}{\pi}\int_0^\infty f(x)\int_{0}^L \frac{u}{u^2+x^2}du \, dx$$ $$=\frac{1}{\pi}\int_0^\infty f(x) \ln\left(\frac{L^2+x^2}{x^2}\right) \, dx,$$ but I'm not sure that's getting me anywhere.
Any ideas?
AI: Given $u>0$, letting $x=ut$ in the integral expression of $F(u)$, then
$$F(u)=\frac{2}{\pi}\int_0^\infty\frac{f(ut)}{1+t^2}dt.$$
Since
$$\int_0^\infty\frac{1}{1+t^2}dt=\frac{\pi}{2},$$
it follows that
$$F(u)-f(0)=\frac{2}{\pi}\int_0^\infty\frac{f(ut)-f(0)}{1+t^2}dt.$$
Then for any $a>0$,
$$|F(u)-f(0)|\le \frac{2}{\pi}\left(\int_0^a\frac{|f(ut)-f(0)|}{1+t^2}dt+\int_a^\infty\frac{|f(ut)-f(0)|}{1+t^2}dt\right)\le \frac{2}{\pi}\left(\int_0^a\frac{|f(ut)-f(0)|}{1+t^2}dt+2M\int_a^\infty\frac{1}{1+t^2}dt\right).$$
Letting $u\to 0^+$, we have
$$\limsup_{u\to 0^+}|F(u)-f(0)|\le \frac{4M}{\pi}\int_a^\infty\frac{1}{1+t^2}dt.$$
Letting $a\to\infty$, the conclusion follows.
|
H: Where does $\sin 3° =3\sin 1° -4 \sin^3 1°$ come from?
Wikipedia makes the claim:
"Though a complex task, the analytical expression of $\sin 1°$ can be obtained by analytically solving the cubic equation $\sin 3° =3\sin 1° -4 \sin^3 1°$ from whose solution one can analytically derive trigonometric functions of all angles of integer degrees."
Where did this equation come from? I did a quick google search and I didn't find much.
P.S. If possible do not answer this using series.
AI: It comes from the identity $\sin 3x=3\sin x \cos^2 x-\sin^3 x$ by applying $\cos^2 x=1-\sin^2 x$. Nothing special about $1^\circ$ or $3^\circ$
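Numerically, for instance (just a check, using the standard library):

```python
from math import sin, radians

s1 = sin(radians(1))
print(sin(radians(3)))        # sin 3 degrees
print(3 * s1 - 4 * s1**3)     # 3 sin 1 - 4 sin^3 1, the same value
```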
|
H: Are there infinitely many rational outputs for sin(x) and cos(x)?
I know this may be a dumb question but I know that it is possible for $\sin(x)$ to take on rational values like $0$, $1$, and $\frac {1}{2}$ and so forth, but can it equal any other rational values? What about $\cos(x)$?
AI: There are infinitely many primitive Pythagorean triples, that is, triples $(a,b,c)$ of positive integers with no common factor such that $a^2+b^2=c^2$.
Any such triple determines a right triangle. The sines and cosines of the two non-right angles are the rationals $\frac{a}{c}$ and $\frac{b}{c}$. So there are infinitely many angles between $0$ and $\frac{\pi}{2}$ such that $\sin x$ and $\cos x$ are both rational.
You are undoubtedly familiar with the triples $(3,4,5)$ and $(5,12,13)$. There are infinitely many more. The Wikipedia article linked to above gives a detailed description.
We can already get infinitely many examples by letting $n$ be any integer $\gt 1$, and setting $a=n^2-1$, $b=2n$, and $c=n^2+1$. Make the right-triangle $ABC$, with the right angle at $C$, and $a,b,c$ as above.
Note that $\triangle ABC$ really is a right-triangle, since $(n^2-1)^2+(2n)^2=(n^2+1)^2$.
Let $x=\angle A$. Then $\sin x=\frac{n^2-1}{n^2+1}$ and $\cos x=\frac{2n}{n^2+1}$. Thus both are rational.
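A short loop listing a few of these rational pairs (a sketch of the construction above):

```python
for n in range(2, 7):
    a, b, c = n * n - 1, 2 * n, n * n + 1
    assert a * a + b * b == c * c                      # (a, b, c) is a Pythagorean triple
    print(f"n = {n}:  sin x = {a}/{c},  cos x = {b}/{c}")
```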
|
H: Is the tensor product of 2 free Abelian groups free?
Ok, basically, I think that is true. Let's consider $A$, and $B$ are both free $\mathbb{Z}-$modules, i.e $A = \bigoplus_{i \in I} \mathbb{Z}$, and $B = \bigoplus_{j \in J} \mathbb{Z}$, so we'll have:
$$\begin{align}A \otimes B &= \left(\bigoplus_{i \in I} \mathbb{Z} \right) \otimes \left(\bigoplus_{j \in J} \mathbb{Z} \right) \\
&= \bigoplus_{(i, j) \in I \times J} \left( \mathbb{Z} \otimes \mathbb{Z} \right) \\
&= \bigoplus_{(i, j) \in I \times J} \mathbb{Z} \end{align}$$
So $A \otimes B$ is also free. Does my proof look correct?
My second question is, I wonder if I can prove it directly, i.e, say $A$ is free with base $\left\{ \alpha_i \right\}_{i \in I}$, and $B$ is free with base $\left\{ \beta_j \right\}_{j \in J}$, I'll try to prove that $A \otimes B$ is also free with base $\left\{ \alpha_i \otimes \beta_j \right\}_{i \in I, j \in J}$.
I can prove that, the set $\left\{ \alpha_i \otimes \beta_j \right\}_{i \in I, j \in J}$ does indeed, generate $A \otimes B$, but how can I prove that they are linearly independent? The hardest part I'm facing is that I don't know what $0 \in A \otimes B$'s representations are, i.e under what conditions will $\sum\limits_{\text{finite}} a_i \otimes b_i$ equal to $0$. Is there some way to do it?
Thank you guys very much,
And have a good day,
AI: The first part looks fine, assuming you've shown that $\oplus$ distributes over $\otimes$ in the way you have written. For the second approach, here is a useful function: with your notation, define a map $A \times B \rightarrow \mathbb{Z}$ by fixing indices $i_0, j_0$ and writing each $(a,b)$ as $a = \sum_i n_i \alpha_i$ and $b = \sum_j m_j \beta_j$. Since the integers appearing in the sum are unique, we can define $\phi(a,b) = n_{i_0}m_{j_0}$. This map is then $\mathbb{Z}$-multilinear and so gives a map $A \otimes B \rightarrow \mathbb{Z}$ that does nice things for you.
|
H: Taylor series expansion for $f(x)=\sqrt{x}$ for $a=1$
I seem to be stuck defining an alternating sequence of terms in this series because $f^{(0)}(x)=f(x)$ is positive, as well as $f'(x)$, but then every other term starting with $f''(x)$ is negative. How can I define $f^{(n)}(x)$ given this?
\begin{array}{ll}
f(x)=x^{\frac{1}{2}} & f(1)=1 \\
f'(x)=\frac{1}{2}\cdot x^{-\frac{1}{2}} & f'(1)=\frac{1}{2} \\
f''(x)=(-1)^1\cdot\left(\frac{1}{2}\right)^{2}\cdot x^{-\frac{3}{2}} & f''(1)=(-1)^1\cdot\left(\frac{1}{2}\right)^{2} \\
f'''(x)=(-1)^2\cdot 3\cdot\left(\frac{1}{2}\right)^{3}\cdot x^{-\frac{5}{2}} & f'''(1)=(-1)^2\cdot3\cdot\left(\frac{1}{2}\right)^{3} \\
f^{(4)}(x)=(-1)^3\cdot3\cdot 5\cdot\left(\frac{1}{2}\right)^{4}\cdot x^{-\frac{7}{2}} & f^{(4)}(1)=(-1)^3\cdot3\cdot 5\cdot\left(\frac{1}{2}\right)^{4} \\
f^{(n)}(x)=(-1)^{n-1}\left(\frac{1}{2}\right)^{n}\cdot x^{\frac{1-2n}{2}} & f^{(n)}(1)=(-1)^{n-1}\left(\frac{1}{2}\right)^{n}
\end{array}
I thought I had the right answer until I realized that I'd be defining $f(x)$ to be negative.
AI: This is a formula which won't display so well in a comment.
$$1\cdot3\cdot5\cdot7=\frac {1\cdot2\cdot3\cdot4\cdot5\cdot6\cdot7}{2\cdot4\cdot6}=\frac {7!}{2^33!}$$ You should be able to work out the general term from there.
Note also that there is no reason that every term of the sum has to fit the same neat formula. You can always write it as $$a_0+\sum_{r=1}^\infty a_rx^r$$ where $a_0$ is the term which does not fit the pattern, and $a_r$ has a general form.
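If you want to check the coefficients you end up with, a hedged SymPy verification (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.sqrt(x).series(x, 1, 5))
# expansion about x = 1; the coefficients of (x - 1)^k should be 1, 1/2, -1/8, 1/16, -5/128,
# whose numerators show the 1*3*5*... pattern discussed above
```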
|
H: A proof in circles.
I need help proving this problem:
$AB$ is a diameter of a circle. $CD$ is a chord parallel to $AB$ and $2CD = AB$. The tangent at B meets the line $AC$ produced at $E$. Prove that $ AE = 2AB $.
What I've got so far is this:
on extending the line $CD$ to the tangent at $B$ such that $CD$ and the tangent meet at some point $H$, I know that $CH = \dfrac 34 AB$. So from this I know that $CE = \dfrac 3 4 AE.$
How to go further?
AI: Name the center of the circle $O$. Since $OC$, $OD$, and $CD$ all equal the radius, triangle $OCD$ is equilateral, so $\angle OCD = 60°$; because $CD \parallel AB$, the alternate angle $\angle COA$ is also $60°$. Then $OCA$ is an isosceles triangle ($OA$ and $OC$ are both radii, so they must be the same length) with a $60°$ vertex angle, hence equilateral, which means the angle $CAB$ is 60 degrees. The tangent at $B$ is perpendicular to the diameter $AB$, so the triangle $EAB$ is a "30-60-90" triangle, which means its sides are in the ratio $1 : \sqrt{3} : 2$, with $AB$ being the "$1$" side and $AE$ being the "$2$" side. Hence $AE = 2AB$.
|
H: Heaviside step function squared
I have a question about the Heaviside step function $\theta(\xi)$, defined by
$$\theta(\xi):=\begin{cases}
1, & \xi\geq0\\
0, & \xi<0
\end{cases}$$
I need to evaluate the square of the Heaviside step function, i.e.
$$[\theta(\xi)]^2$$
So, my question is: does the relation
$$[\theta(\xi)]^2=\theta(\xi)$$
holds?
AI: If you take $\theta(\xi)$ as you said, (with $\theta(0)=1$):
$$\theta(\xi):=\begin{cases}
1, & \xi\geq0\\
0, & \xi<0
\end{cases}$$
It will be a regular function (not a generalized one, like $\delta(x)$), and there is no reason to treat it unusually. So the relation holds: $$[\theta(\xi)]^2=\theta(\xi)$$
For the case of $\theta(0)={1\over2}$ as other answers say, $\theta^2(0)\neq \theta(0)$.
But if you define it as :
$$\theta(\xi):=\begin{cases}
1, & \xi>0\\
\text{undefined},& \xi=0\\
0, & \xi<0
\end{cases}$$
I will show that again $[\theta(\xi)]^2=\theta(\xi)$. As a generalized function, $\theta(\xi)$ will be defined through the following relation (where $g(t)\equiv\theta(\xi)$):
$$\int_{-\infty}^{+\infty}\phi(t)g^{(n)}(t) dt=(-1)^n \int_{-\infty}^{+\infty}\phi^{(n)}g(t)dt \, \,\,\,\,\,\,**$$
where $\phi$ vanishes at $\pm \infty$.
If $g^2(t)=g(t)$, you must have:
$$\int_{-\infty}^{+\infty}\phi(t) {\frac{d^n g^2(t)}{dt^n}}dt=(-1)^n\int_{-\infty}^{+\infty}\phi^{(n)}(t)g(t)dt$$
We denote ${\frac{d^n g^2(t)}{dt^n}}$ with $f(t)$ for simplicity:
$$\int_{-\infty}^{+\infty}\phi(t) f(t)dt=(-1)^n\int_{-\infty}^{+\infty}\phi^{(n)}(t)g(t)dt\, \,\,\,\,\,\,***$$
The right hand side is simply equal to $(-1)^n\int_{0}^{+\infty}\phi^{(n)}(t)dt=(-1)^{n-1}\phi^{(n-1)} (0)$.
Now, we use the $**$ equation with $g(t)=\delta (t)$:
$$\int_{-\infty}^{+\infty}\phi(t)\delta^{(n)}(t) dt=(-1)^{n}\phi^{(n)} (0)$$
we conclude that in the left hand side of $***$, $f(t)$ must be the $(n-1)\text{th}$ derivative of $\delta (t)$, which implies ${\frac{d^n \theta^2(\xi)}{d\xi ^n}}={\frac{d^n \theta (\xi)}{d\xi ^n}}$ or $\theta^2(\xi)=\theta(\xi)$.
|
H: How is a sequentially compact space not compact?
Let us assume the space $X$ is not compact. Then there exists a covering with no finite subcovering, such that every set contains at least one point that no other does.
Select a countable number of such points assuming the axiom of countable choice. We have an infinite sequence $\langle x_{n}\rangle$.
There is no point in $X$ such that every open set containing it contains infinite points of $\langle x_{n}\rangle$, as for every point in $X$ there is at least one set which contains only one or no point of $\langle x_{n}\rangle$- namely the open set part of the infinite cover of $X$. Hence, there is no accumulation point in $X$, making $X$ not sequentially compact.
I know for a fact that there exist topological spaces which are sequentially compact but not compact. It would be great if someone pointed out the (perhaps glaring) flaw in the argument?
Thanks in advance!
AI: The flaw is in the clause such that every set contains at least one point that no other does. The ordinal space $\omega_1$ with the order topology is sequentially compact but not compact. The most obvious open cover with no finite subcover is the one consisting of the sets $V_\alpha=\{\xi:\xi<\alpha\}$ for $\alpha<\omega_1$: this is an uncountable nest of increasing open sets, so each point of the space is actually contained in uncountably many of the $V_\alpha$.
|
H: Hyperbolic integration solving
$$ \therefore x-x_0 = \pm \int_{\phi(x_0)}^{\phi(x)} \frac{d \phi}{\sqrt\frac{\lambda}{2}\left( \phi^2-(\frac{m}{\sqrt \lambda})^2\right)} $$
How can we rewrite the above equation as
$$
\phi(x) = \pm \frac{m}{\sqrt \lambda} \tanh\left[\frac{m}{ \sqrt 2} (x-x_0)\right]$$
Do I need to use the hyperbolic function $\tanh$ ?
I tried but got a different result!
AI: Note that, for $|x|<a$,
$$\int \frac{dx}{x^2 - a^2} = -\frac{\operatorname{arctanh}(x/a)}{a},$$
where the overall sign gets absorbed into the $\pm$ below.
Define $a= m / \sqrt{\lambda},\ b = \sqrt{\lambda / 2}$:
$$\pm ab(x-x_0) = \rm{arctanh}(\phi(x) / a) - \rm{arctanh}(\phi(x_0) / a)$$
Since we don't know what $\phi(x_0)$ is, call $\rm{arctanh}(\phi(x_0) / a)$, $c$, and get:
$$\phi(x) = a \tanh\left(c\pm ab(x-x_0)\right)$$
$$\phi(x) = \frac{m}{\sqrt{\lambda}} \tanh\left(c\pm \frac{m}{\sqrt{2}}(x-x_0)\right)$$
So if you assume that $\phi(x_0)=0$, you can get the desired result.
|
H: For which values of $m$ are the vectors $A(m-4,2,2m-12)$, $B(2,m-12,2)$ orthogonal?
I want to find for which values of $m$ the vectors $A(m-4,2,2m-12)$, $B(2,m-12,2)$ are orthogonal.
What I did is set $A\cdot B=0$, and the result was $m=7$; then I inserted $7$ and tried to check that they are orthogonal, but it didn't give me $0$.
Is there something wrong with the way I did it?
Thanks!
my calculations:
$A*B = 2m-8+2m-24+4m-24=0 \rightarrow 8m=56 \rightarrow m=7$
AI: $$
A \cdot B = 2(m-4) + 2(m-12) + 2(2m-12) = 2m - 8 + 2m - 24 + 4m - 24 = 8m - 56 = 0
$$
from which you can find that $m = 7$ is indeed a solution. To check it just substitute it back
$$
A \cdot B = (3, 2, 2) \cdot (2, -5, 2) = 6 - 10 + 4 = 0
$$
|
H: $x+y> \epsilon$ then $x>\frac{\epsilon}{2}$ or $y>\frac{\epsilon}{2}$
Since the statement is true,
$P(x+y>\epsilon) \le P(x>\frac{\epsilon}{2} \text{ or } y>\frac{\epsilon}{2})$
Why does the inequality hold there?
AI: The statement
$$x+y > \epsilon \Longrightarrow x > \frac \epsilon 2 \ \text{ or } \ y > \frac \epsilon 2$$
is equivalent to
$$\{ x+y > \epsilon \} \subseteq \{ x > \frac \epsilon 2 \} \cup \{ y > \frac \epsilon 2 \}.$$
The latter set contains the former, so its probability is at least as large.
|
H: Series expansion of $\sqrt{\log(1+x)}$ at $x=0$
Mathematica gives the following series expansion of $\sqrt{\log(1+x)}$ at $x=0$.
$$
x^{1/2}-\frac{1}{4}x^{3/2}+\frac{13}{96}x^{5/2}-\cdots
$$
You can find it from Wolfram alpha too.
How can I obtain the expansion?
Obviously Taylor expansion is impossible because $\sqrt{\log(1+x)}$ is not analytic at $x=0$.
Taylor expansion of $\log(1+x)$ at $x=0$ is possible. But I don't know how to take the square root of the expanded series.
I think I have not learned about taking the square root of a series in any calculus or analysis course.
From what material can I study about such things?
AI: We have
$$\log(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots$$
and recall that
$$(1+x)^\alpha=1+\alpha x+\frac{\alpha(\alpha-1)}{2!}x^2+\frac{\alpha(\alpha-1)(\alpha-2)}{3!}x^3+\cdots$$
so for $\alpha=\frac{1}{2}$ we have
$$\sqrt{\log(1+x)}=\left(x-\frac{x^2}{2}+\frac{x^3}{3}+O(x^4)\right)^{1/2}=\sqrt{x}\Big(1+\underbrace{\big(-\tfrac{x}{2}+\tfrac{x^2}{3}+O(x^3)\big)}_{=u}\Big)^{1/2}\\=\sqrt{x}\left(1+\frac{1}{2}u-\frac{1}{8}u^2+O(u^3)\right)=\sqrt{x}\left(1-\frac{x}{4}+\frac{x^2}{6}-\frac{1}{8}\cdot\frac{x^2}{4}+O(x^3)\right)\\=\sqrt{x}\left(1-\frac{x}{4}+\frac{13x^2}{96}+O(x^3)\right)$$
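If you want to double-check the coefficients, a short sympy sketch (assuming sympy is available; it expands the analytic factor $\log(1+x)/x$ and then multiplies by $\sqrt x$) reproduces the same expansion:
import sympy as sp

x = sp.symbols('x', positive=True)
inner = sp.log(1 + x) / x                       # analytic at x = 0, with value 1
s = sp.series(sp.sqrt(inner), x, 0, 3).removeO()
print(sp.expand(sp.sqrt(x) * s))                # sqrt(x) - x**(3/2)/4 + 13*x**(5/2)/96, up to ordering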
|
H: Find $x$ such that $\frac{1}{x} > -1 $
How do I solve this type of inequality analytically?
I know the answer is $ x<-1 $ and $ x> 0 $ but:
$$\frac{1}{x} > -1 $$
$$1>-x $$
$$ x>-1 $$
What am I doing wrong?
AI: You can't multiply both sides of an inequality by an unknown (which is your first step). This is because when you multiply by a negative, you have to flip the inequality, but when you multiply by a positive, you don't. If you multiply by an unknown you don't know which to do, because you don't know its sign.
What you can do is separate the case into "Assuming $x > 0$..." and "Assuming $x < 0$...".
|
H: Apparent equivalent notations for the axiom of infinity
I'm just begining to build the systems of numbers based on the axioms of set theory ($\mathsf{ZF}$). Accordingly the axiom of infinity is no more than assuming the existence of $\mathbb{N}$ (of course the axiom is formulated in terms of the existence of an inductive set). Now the original statement in terms of logic is
$\exists I(\emptyset \in I\wedge(\forall x(x\in I \longrightarrow x^{+}\in I)))$, understanding $x^{+}:=x\cup\{x\}$.
This is the way I learnt it first. In the days after learning it I was mostly concerned with understanding the concept. But now that I try to use it again I just came up (I don't know why; maybe because of its similarity to the notation of the other axioms, in my attempt to remember the original notation) with these apparently equivalent statements:
$1) \exists I\forall x(x\in I\longrightarrow (x=\emptyset \vee x^{+}\in I))$
$2) \exists I\forall x(x\in I\longrightarrow (\emptyset \in I\wedge x^{+}\in I))$
If I put the original statement in the equivalent form
$\exists I\forall x(\emptyset \in I\wedge(x\in I \longrightarrow x^{+}\in I))$
Then to prove the equivalence I probably should deal with the main part that is bound with the quantifiers. So I want to ask how to prove that my statements are equivalent or not, logically speaking.
Note: Intuitively both statements are false because I could take $I=\emptyset$, and this is not an inductive set.
AI: You are correct in your note. Both the formulations are susceptible to fail because taking $I=\varnothing$ satisfies the inner formula vacuously.
The first formulation is incorrect as well because $\{\varnothing\}$ also satisfies it.
The second formulation is correct, but note that $\varnothing\in I$ can be moved out of the implication and we have the same formulation as the last form of the axiom of infinity.
|
H: Name for $X^\infty=\bigcup\limits_{k=0}^\infty X^k$
I'm making structures associated with groups, rings and so on in OCaml, and in order to do so I started by defining sets and a few operations (intersection, union, difference, Cartesian product, Cartesian power). Now I want to define a set $X^\infty=\bigcup\limits_{k=0}^\infty X^k\subsetneq X^\Bbb N$, which I would represent with arrays, but I couldn't find a proper name for it. Is there one?
AI: This is the free monoid on the set $X$. A monoid is a set together with a binary operation which is associative and has a unit. The monoid structure on the set you call $X^{\infty }$ is concatenation and the unit is the unique $0$-tuple (the only element in the set $X^0$). The more common notation for the free monoid is $X^*$.
|
H: The least value of $4x^2-4ax +a^2-2a+2$ on $[0,2]$ is $3$. What is the integer part of $a$?
The least value of $4x^2-4ax +a^2-2a+2$ on $[0,2]$ is $3$. What is the integer part of $a$?
We know that minimum value of a quadratic is $-\cfrac{b}{2a}$.
We will get one condition from here and $-\cfrac{b}{2a}$ should be equal to $3$.
But the problem is that this limit is for the whole function, not for an interval, and it might not apply to the interval we have been given.
AI: Hint: there are three possibilities, given the shape of the graph of a quadratic
Let $f(x)=4x^2-4ax +a^2-2a+2$, then
either $f(x)$ is increasing on $[0,2]$, in which case the minimum value occurs at $x=0$
or $f(x)$ is decreasing on $[0,2]$, in which case the minimum value occurs at $x=2$
or $f(x)$ has minimum value within $[0,2]$
Test each of the three possibilities and see which gives you a consistent solution.
Added later
If $x=0$ we need to solve $a^2-2a+2=3$, so $a=1\pm\sqrt2$. But then we have to check those two values of $a$ to see whether $f$ is increasing (which is the condition for using $x=0$).
If $x=2$ we have $16-8a+a^2-2a+2=a^2-10a+18=3$ whence $a=5\pm\sqrt{10}$, and we need to test the decreasing condition.
|
H: Probability Question : Pick a ball.
Here's the problem statement.
Given a bag with three types of balls: winning balls, losing balls and try-again balls. If a person picks a winning ball then he wins, if he picks a losing ball then he loses, and if he picks a try-again ball, he tries again and that try-again ball is discarded. Now, if there are $w$, $l$ and $t$ winning, losing and try-again balls in the bag, respectively, what is the probability that the person wins?
Here's the simple solution.
Either pick a winning ball, or pick a try-again ball and then pick a winning ball, or pick try-again balls two times and then pick a winning ball, and so on, until there are no more try-again balls available.
The equation would be of the type
Let Total (o) = w+l+t
Prob = w/o + (t/o)*(w/(o-1)) + (t/o)*((t-1)/(o-1))*(w/(o-2)) + ...
Now, this is actually equivalent to w/(w+l), i.e. as if there were no try-again balls at all, and the try-again balls actually play no role.
Now, I am not able to get my head around the fact of why the try-again balls play no role at all.
AI: The conditional probability of winning or losing on a given draw, given he does not choose a try-again ball, only depends on the relative number of win or lose balls. Therefore, the last draw he makes, which determines the outcome of the game, only depends on the number of win and lose balls.
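If the result still feels surprising, a small Monte Carlo sketch (pure Python; the counts 3, 5, 7 are arbitrary) lets you compare the empirical win frequency with $w/(w+l)$:
import random

def play(w, l, t):
    balls = ['W'] * w + ['L'] * l + ['T'] * t
    random.shuffle(balls)              # drawing without replacement = reading a shuffle in order
    for b in balls:
        if b == 'W':
            return True
        if b == 'L':
            return False
        # a 'T' ball is discarded and we simply move on to the next draw
    return False                       # not reached when w + l > 0

w, l, t, trials = 3, 5, 7, 200000
wins = sum(play(w, l, t) for _ in range(trials))
print(wins / trials, w / (w + l))      # the two numbers should be close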
|
H: Will a point moving on a sphere always at an angle x (0 deg. < x < 90 deg.) to the "equator" reach a "pole"?
Formulating my question seems to have given me the answer: that the point will continue getting closer to the pole but never reach it. Am I correct?
Edit in response to Martin Argerami:
I see your point. So let's make the point always move at an angle x to the latitude line that includes the point. Then will the point reach a pole?
AI: Spherical coordinates $(a,b)$ of a point on the unit sphere are related to its Cartesian coordinates $(x,y,z)$ through the relations
$$
(x,y,z)=(\cos a\cos b,\sin a\cos b,\sin b).
$$
In the Northern hemisphere, the latitude $b$ is an angle in $[0,\frac\pi2]$ and the longitude $a$ is an angle in $\mathbb R/2\pi\mathbb Z$. Assume that the point is at $(a(0),b(0))=(0,0)$ at time $0$, is at $(a(t),b(t))$ at time $t\geqslant0$, and moves at constant speed, crossing parallels at a constant angle.
Then, writing down the direction of the parallel at point $(x,y,z)$ as $(-y,x,0)$, that is, the direction of the parallel at point $(a,b)$ as $(-\sin a,\cos a,0)$, and the speed vector at time $t$ as
$$
(-\sin a,\cos a,0)\cos b\cdot a'+(-\cos a\sin b,-\sin a\sin b,\cos b)b',
$$
one gets the conditions that
$$
(\cos b)^2(a')^2+(b')^2,\qquad\cos b\cdot a',
$$
should both be constant. That is, $b(t)=\beta t$ for some $\beta\gt0$ and $a'(t)=\alpha/\cos(\beta t)$ for some positive $\alpha$, the ratio $\alpha/\beta$ characterizing the angle the particle crosses parallels at.
Thus, the point is at the North pole at time $t_N=\pi/(2\beta)$, which is finite. Since the point moves at constant speed, the total distance it moved when it reaches the North pole is finite. But the number of turns around the North-South axis is described by
$$
a(t_N)=\int_0^{t_N}\frac\alpha{\cos(\beta t)}\mathrm dt=\frac\alpha\beta\int_0^{\pi/2}\frac{\mathrm dt}{\cos t},
$$
which diverges, hence it is infinite.
|
H: Find the vector that meets the following criteria
I want to find the vector $X$ satisfying the following equations:
$$(1,-3,5) \cdot X=49$$
$$(4,1,-1) \cdot X = 0$$
$$(2,0,-3)\cdot X=-9$$
I would like to get some advice on how to find it.
Thanks!
AI: Hint: Assume the vector $X=(x,y,z)$ and then solve the system of equations.
|
H: Complex logarithm vs real logarithm
Suppose we are given the function
$$\theta = \ln |z| \quad \text{defined on the upper half plane $\{ z \in \mathbb{C} \colon \Im( z) > 0\}$}$$
Naively, I would go and manipulate
$$
\ln | z| = \ln (z\bar z)^{\frac{1}{2}} = \frac{1}{2}(\ln z + \ln \bar z + 2\pi i k(z)) \quad \text{(for some integer - valued function $k$)}
$$
and so, because of the last term being integer-valued and non-constant, one cannot take partial derivatives $\partial \theta / \partial z$, $\partial \theta / \partial \bar z$.
On the other hand, the complex logarithm is holomorphic in $\mathbb{C} \setminus (-\infty,0)$ so I ought to be able to differentiate after all since I am only working on the upper half plane.
But one of the problems I have with this reasoning is that $\theta$ is a real - valued function, so if it is holomorphic then it must be constant. There must be an error in my reasoning, what am I doing wrong?
My guess is that I am erroneously mixing up the complex logarithm on the right hand side and the real logarithm on the left hand side, and that $\log \bar z$ is also not the complex logarithm but the composition of the latter with complex conjugation .. Unfortunately I am out of depth with the complex logarithm, my course in Complex Analysis barely scratched the surface of the study of this interesting function.
Many thanks for your help!!
AI: $f\colon z\mapsto \ln \bar z$ is not holomorphic because $f(z+h)=f(z)+\frac1{\bar z}\cdot \bar h + o(h)$ whereas holomorphic needs $f(z+h)=f(z)+a\cdot h+o(h)$.
|
H: Replacing Axiom of Extensionality with a logical formalism
Is it possible to replace the Axiom of Extensionality with a formalism from logic, namely the following one: $\forall a \forall b (a=b\Leftrightarrow \forall P (P (a)\Leftrightarrow P (b)))$ ($ P $ is obviously a predicate)?
AI: In general, the axiom
$$
(\forall P)[P(a) \Leftrightarrow P(b)] \Rightarrow a=b
$$
is an extensionality axiom. It goes by the name "Leibniz' law".
The first problem, though, is how to quantify over $P$. Usually this is done in second-order logic, rather than first-order logic.
In first-order logic, the kind used to study ZFC, the only quantification possible is over sets. One cannot quantify over formulas, predicates, or anything else. Without the axiom of extensionality, it is possible that two sets could contain all the same elements, but still be nonequal. This could happen, for example, if each set also has a "label" which is visible to us but not visible to the language of ZFC. For a real-life analogy, a math class and history class could contain the same students without being the same class.
In second-order logic, if we take the quantifier over predicates to range over all predicates, then as the answer by Hagen von Eitzen indicates we can consider the predicate $P_a(x) \equiv x = a$. However, as explained at Wikipedia, philosophers studying Leibniz' law may reject the predicate $P_a$ because it makes the law trivial. There is also an article on Leibniz' law at the Stanford Encyclopedia of Philosophy.
|
H: matrix product with trace zero
$D$ is a positive definite matrix, $A$ and $B$ are both positive semidefinite matrices, and $c$ is a positive integer. I want to know whether $\operatorname{trace}\{(A+B+cI)^{-1}ABD\}=0$ implies that $AB=0$?
AI: No. Let
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\!, \quad B = \begin{bmatrix} 1 \\ & 0 \end{bmatrix}\!, \quad D = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\!, \quad c = 1.$$
Then
$$(A + B + cI)^{-1} A B D = \frac{1}{5}\begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}\!,$$
so $\mathop{\rm tr}((A + B + cI)^{-1} A B D) = 0$, but
$$AB = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}.$$
Acknowledgment: Example fixed by Sebastien B's comments.
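A quick NumPy check of the counterexample (purely a numerical sanity check, assuming numpy is available):
import numpy as np

A = np.array([[1., 1.], [1., 1.]])
B = np.array([[1., 0.], [0., 0.]])
D = np.array([[2., -1.], [-1., 2.]])
c = 1.0

M = np.linalg.inv(A + B + c * np.eye(2)) @ A @ B @ D
print(np.trace(M))   # 0.0 (up to rounding)
print(A @ B)         # [[1. 0.] [1. 0.]], not the zero matrix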
|
H: $f:\Bbb{R}^n\to\Bbb{R}^m$ is differentiable iff all coordinate functions $f_i:\Bbb{R}^n\to\Bbb{R}$ are differentiable
What I tried:
Since $f$ is differentiable, there is a linear function $L$ such that
$$f(x)=f(a)+L(x-a)+(\text{remainder})$$
Let $f_i$ be a coordinate function. Is it true that $\pi_i\circ L:\Bbb{R}^n\to \Bbb{R}$ is a linear function such that $$f_i(x)=f_i(a)+(\pi_i\circ L)(x-a)+(\text{remainder})? $$ For the converse:
For every coordinate function $f_i$ there is a linear function $L_i:\Bbb{R}^n\to\Bbb{R}$. We need to compose a linear $F:\Bbb{R}^n\to\Bbb{R}^m$. Define $F$ as $(x-a)\mapsto (L_1(x-a),...,L_m(x-a))$. Is this linear? Do these make sense?
AI: You're just saying that you know an $m\times n$ matrix iff you know all its rows. What you have is correct.
Of course, to be complete, we need to include discussion of the remainder/error (which I'll call $\epsilon$). If each coordinate $f_i$ is differentiable, given $\varepsilon>0$, we know there is $\delta_i$ so that $\|x-a\|<\delta_i \implies |\epsilon_i|<\varepsilon/\sqrt m$, so $\|x-a\|<\delta=\min(\delta_1,\dots,\delta_m) \implies \|\epsilon\|=\|(\epsilon_1,\dots,\epsilon_m)\|<\varepsilon$. Conversely if $f$ is differentiable, given $\varepsilon>0$, there is a $\delta$ so that $\|x-a\|<\delta\implies \|\epsilon\|<\varepsilon \implies |\epsilon_i|\le \|\epsilon\|<\varepsilon$.
|
H: Does smooth section of a quotient space $G/H$ define an immersion?
Question 1:
Let $G$ be a Lie group and $H<G$ a Lie subgroup of $G$, given the projection $p:G\to G/H$
and a smooth (local) section $s:G/H\to G$ s.t. $p\circ s=id$, then does this imply that $s$ is an immersion? So a section can in general be considered a submanifold of $G$? When will $s$ be an embedding?
Question 2:
If the first case is true, denote the image of $s$ by $M$. Then is it true that the pointwise multiplication $M\cdot H=G$ (at least locally)? On the other hand, if $M$ is given that satisfies $M\cdot H=G$, is it true that $M$ is always the image of a smooth section? Is there a standard way to construct and classify submanifolds of $G$ that satisfies $M\cdot H=G$?
AI: Question 1: $p:G\to G/H$ is a principal fiber bundle with structure group $H$. Thus $s$ is an immersion
(since $p\circ s =Id$, $s$ has rank $=\dim(G/H)$, and $s$ is injective and a closed embedding).
Question 2: Yes, $M.H=G$. But not every $M$ with this property is the image of a section. $M$ might be a leaf of the horizontal foliation given by a flat principal connection, then $p:M\to G/H$ would be a covering mapping.
|
H: Calculate the inner angles of the triangle $A(2,-3,5),B(0,1,4),C(-2,5,2)$
I want to calculate the inner angles of this triangle.
$$A(2,-3,5),B(0,1,4),C(-2,5,2)$$
I know that to calculate the angle I need to do the following:
$$\cos(\alpha)=\frac{A\cdot B}{|A||B|}$$
Do I need to calculate the angle between AB and BC, between AC and AB, and between AC and BC?
Thanks!
EDIT
$$AB(-2,4,-1),AC(-4,8,-3),BC(-2,4,-2)$$
I found the angle between AB and AC = $5.94$
the angle between AC and BC = $5.55$
the angle between AB and BC is = $11.49$
the two other angles are right but the third is not; what did I do wrong?
AI: First calculate the vectors $\overrightarrow{AB}$, $\overrightarrow{BC}$, and $\overrightarrow{AC}$, and their norms. After that use the formula you posted.
For example:
$$\overrightarrow{AB} = (0, 1,4)- (2,-3,5) = (-2, 4, -1)$$
$$|\overrightarrow{AB}|=\sqrt{(-2)^2+4^2+(-1)^2}.$$
I think you can conclude now.
Edit
The third angle you got is in fact an exterior angle. The dot product formula gives you the angle between two vectors when they are drawn sharing a common origin (or a common end). Pairing $\overrightarrow{AB}$ with $\overrightarrow{BC}$ places them head-to-tail at $B$, so you get the exterior angle there; use $\overrightarrow{BA}$ and $\overrightarrow{BC}$ instead for the interior angle at $B$.
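If it helps, here is a small NumPy sketch (just a numerical check) that computes the three interior angles by always using the two vectors that point away from the vertex in question:
import numpy as np

A = np.array([2., -3., 5.]); B = np.array([0., 1., 4.]); C = np.array([-2., 5., 2.])

def angle(u, v):
    return np.degrees(np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))

print(angle(B - A, C - A))   # interior angle at A
print(angle(A - B, C - B))   # interior angle at B (not AB vs BC, which gives the exterior angle)
print(angle(A - C, B - C))   # interior angle at C
# the three printed values should add up to 180 degrees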
|
H: Tangent map of the inclusion map of a submanifold
Let $M$ be the Minkowski spacetime, let $f\in C^{\infty}(M)$ be defined as $f(m)=x^{0}(m)$, with $\{x^{\mu}\}$ being a global Cartesian coordinates system, and let $M\supset F_{t}=f^{-1}(t)$ be the submanifold relative to a regular value $t\in\mathbb{R}$ of $f$. How can the inclusion map $\iota_{t}:F_{t}\longrightarrow M$ and the tangent map $T\iota_{t}:TF_{t}\longrightarrow TM$ be visualized?
AI: The inclusion map is given by the identity (if you see $F_t$ as a subspace of $M$), the tangent map $T\iota$ is the inclusion of the tangent space of $F_t$ in $TM$
|
H: Need an easy CDF for Inverse transform sampling
I want to use inverse transform sampling to generate some random numbers, which all fall into a given interval $(0,x_{max})$. The numbers are not necessarily distributed evenly but can be "skewed". I do not know the true distribution, all I know is that there can be some skew.
So I need a cumulative distribution function, which reaches 1 at $x_{max}$ and which has an inverse which can be written down as a formula. It should have a parameter, which allows to adjust the "skew", and which generates a uniform distribution (a straight line) as a special case.
These are quite humble requirements, but I cannot find such a function.
AI: You might want to try the family $(F_\alpha)$ indexed by $\alpha\gt0$ and defined by $F_\alpha(x)=(x/x_{\mathrm{max}})^\alpha$ for every $x$ in $[0,x_{\mathrm{max}}]$.
Exercise: For every real number $\gamma$ there exists some value of $\alpha$ such that the skewness of $F_\alpha$ is $\gamma$.
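For what it's worth, here is a minimal Python sketch of the sampling step (the names and numbers are just placeholders): since $F_\alpha(x)=(x/x_{\mathrm{max}})^\alpha$, the inverse is $F_\alpha^{-1}(u)=x_{\mathrm{max}}\,u^{1/\alpha}$, so inverse transform sampling is one line.
import random

def sample(x_max, alpha, n):
    # u uniform on (0,1), then x = F^{-1}(u) = x_max * u**(1/alpha)
    return [x_max * random.random() ** (1.0 / alpha) for _ in range(n)]

print(sample(x_max=10.0, alpha=1.0, n=5))   # alpha = 1 is the uniform special case
print(sample(x_max=10.0, alpha=3.0, n=5))   # alpha > 1 skews the mass toward x_max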
|
H: Isn't it a correct observation that no norm on $B[0,1]$ can be found to make $C[0,1]$ open in it?
There's a problem in my text which reads as:
Show that $C[0,1]$ is not an open subset of $(B[0,1],\|.\|_\infty).$
I've already shown in a previous example that for any open subspace $Y$ of a normed linear space $(X,\|.\|),~Y=X.$ Even though this result makes the problem immediate, the sup-norm becomes immaterial.
And I can't believe what I'm left with:
No norm on $B[0,1]$ can be found to make $C[0,1]$ open in it.
Is this a correct observation?
AI: This statement is actually true under more general settings. It seems convenient to talk about topological vector spaces, of which normed spaces are a very special kind.
So let $X$ be a topological vector space and $Y$ be an open subspace. So we know $Y$ contains some open set.
Since the topology on topological vector spaces are translation invariant (that is, $V$ is open if and only if $V+x$ is open for all $x\in X$, you can check this in normed spaces), we know $Y$ contains some open neighborhood of the origin, say $0\in V\subset Y$.
Another interesting fact about topological vector spaces is that for any open neighborhood $W$ of the origin, one has \begin{equation}
X=\cup_{n=1}^{\infty}nW.
\end{equation} Again you might check this for normed spaces. Applying this to our $V$, and noting that $Y$ is closed under scalar multiplication, we have \begin{equation}
X=\cup nV\subset \cup nY=Y.
\end{equation}
So we have just proved
The only open subspace is the entire space.
Note: If $S$ is a subset of a vector space, for a point $x$ and a scalar $\alpha$ we define \begin{equation}
x+S:=\{x+s|s\in S\}
\end{equation} and \begin{equation}
\alpha S:=\{\alpha s|s\in S\}.
\end{equation}
|
H: Check if $u + v\sqrt 2 > u' + v'\sqrt 2$ without computing $\sqrt 2$
I'm building an algorithm that performs some computations on two inputs, m and n. These are numbers of the form $u + v\sqrt 2$, where $u$ and $v$ are integers.
I'm asking here because at a certain point the algorithm checks if m $>$ n, and, in order for the algorithm to be effective, it must not use the infinite decimal expansion of $\sqrt 2$.
So what can I say about the possibility of performing that check exactly and in a finite amount of time? Is there a way to do that?
Just to give an example, if it had been to check m $=$ n, that is $u + v\sqrt 2 = u' + v'\sqrt 2$, it would have been much simpler, because one would just need to check that $u = u'$ and $v = v'$, since $\sqrt 2$ is irrational.
What about, instead, the inequality? How to get rid of $\sqrt 2$?
AI: The only non obvious case is when $a=u-u'$ and $b=v'-v$ are nonzero and have the same sign. Assume for definiteness that $a$ and $b$ are positive. To check whether $a\gt b\sqrt2$ or $a\lt b\sqrt2$, compute $a^2$ and $2b^2$ and compare them.
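In code this becomes pure integer arithmetic; a possible Python sketch (the function name and the test values are mine):
def gt(u, v, up, vp):
    """Return True iff u + v*sqrt(2) > up + vp*sqrt(2), using integers only."""
    a, b = u - up, vp - v          # the question becomes: is a > b*sqrt(2)?
    if a > 0 and b <= 0:
        return True
    if a <= 0 and b >= 0:
        return False
    if a > 0:                      # here b > 0 as well
        return a * a > 2 * b * b
    return a * a < 2 * b * b       # here a < 0 and b < 0

print(gt(1, 1, 2, 0))   # True, since 1 + sqrt(2) is about 2.414 > 2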
|
H: Are there algebraic structures with more than one neutral element and/or more than one inverse element?
I was reading a book on groups; it points out the uniqueness of the neutral element and the inverse element. I got curious: are there algebraic structures with more than one neutral element and/or more than one inverse element?
AI: They do exist but they are algebraic structures with partial operations, i.e. the multiplication $a*b$ is not defined for all $a,b$. Typical examples are "journeys": you can compose a journey from $x$ to $y$ with a journey from $z$ to $w$ if and only if $y=z$. Standard mathematical examples are categories and groupoids. So a groupoid is sometimes thought of as a "group with many identities", and a group is a groupoid with only one identity.
These ideas lead to double categories and groupoids, which have compositions thought of as in different directions. Double groups are just abelian groups, by what is called the Eckmann-Hilton argument, or interchange law, but double groupoids are quite complicated!
Other interesting examples are inverse semigroups. Consider a set $X$ and the set $I(X)$ of all bijections between subsets of $X$. Clearly there is a composition $f \circ g$ of any $f,g \in I(X)$, but the domain of $f \circ g$ may be smaller than expected and even empty. See the wiki entry for more information. In particular the identity $1_A$ on a subset $A$ of $X$ is associated with the domain $A$.
Sept 21, 2016 I'd like to add the point that my definition of "higher dimensional algebra" is that it is the study of algebraic structures with partial operations whose domains are defined by geometric conditions. This allows for a combination of algebra and geometry, which is exploited in our 2011 book Nonabelian Algebraic Topology.
|
H: Ordering Relation
Prove the following theorem:
Suppose $A$ is a set, $F \subseteq P (A)$, and $F \neq \varnothing.\;$ Then the least
upper bound of $F$ (in the subset partial order) is $\bigcup F.$
AI: Hint: Show that for every $B\in F$ we have $B\subseteq\bigcup F$, so it is an upper bound; and if $C$ is such that for all $B\in F$ we have $B\subseteq C$ then $\bigcup F\subseteq C$.
|
H: Prove that $\sum\limits_{k=0}^{n-1}\dfrac{1}{\cos^2\frac{\pi k}{n}}=n^2$ for odd $n$
In an old popular science magazine for school students I've seen the problem:
Prove that $\quad $
$\dfrac{1}{\cos^2 20^\circ} +
\dfrac{1}{\cos^2 40^\circ} +
\dfrac{1}{\cos^2 60^\circ} +
\dfrac{1}{\cos^2 80^\circ} = 40. $
How can one prove the more general identity:
$$
\begin{array}{|c|}
\hline \\
\sum\limits_{k=0}^{n-1}\dfrac{1}{\cos^2\frac{\pi k}{n}}=n^2 \\
\hline
\end{array}
, \qquad \mbox{ where } \ n \ \mbox{ is odd.}$$
AI: Since $n$ is odd, the numbers $u_k=\cos k\pi/n$ are the same as the numbers $\cos 2k\pi/n$, i.e. the distinct angles $\theta$ satisfying $n\theta=0$ (mod $2\pi$). We think of them as the roots of the equation $\cos n\theta=1$. Writing
$$\cos n\theta = \cos^n\theta - \binom{n}{2}\cos^{n-2}\theta\sin^2\theta \cdots \pm n\cos\theta \sin^{n-1}\theta$$
and using $\sin^2\theta=1-\cos^2\theta$, we see that the $u_k$'s are the roots of the polynomial
$$p(u)=u^n - \binom{n}{2}u^{n-2}(1-u^2) + \cdots \pm n u(1-u^2)^{(n-1)/2} - 1.$$
Note that all powers of $u$ which occur are odd (except for the constant term).
The reciprocals $1/u_k$ are the roots of the "reverse polynomial"
$$r(u)=-u^n p(1/u) = u^n + a_{n-1} u^{n-1} + \cdots,$$
where $a_{n-1}=\pm n$ and $a_{n-2}=0$.
The sum in question is the sum of the squares of the roots of $r(u)$, i.e. $a_{n-1}^2 - 2a_{n-2} = n^2$.
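A quick numerical sanity check (not a proof, of course) in Python:
from math import cos, pi

for n in (3, 5, 7, 9):
    s = sum(1.0 / cos(pi * k / n) ** 2 for k in range(n))
    print(n, s)   # the second number equals n**2 up to rounding
# for n = 9 this also recovers the magazine problem: the k = 0 term is 1,
# and the remaining eight terms are twice the sum over 20, 40, 60, 80 degrees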
|
H: Eigenvalues of a specific $9\times9$ matrix - a simpler way?
Let $\rho$ be the permutation of $\{1, \dots , 9\}$ given by $$\rho=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ 2 & 3 & 4 & 1 & 6 & 7 & 5 & 9 & 8 \end{pmatrix}$$ and let $\alpha : C^9 \to C^9$ be the linear map defined
by $\alpha(e_j) = e_{\rho(j)}$, $j = 1, \dots , 9$. What are the
eigenvalues of $\alpha$?
I know how to find them directly, but I was wondering if there's a way of doing it without calculating the characteristic polynomial?
AI: To compute the characteristic polynomial is not that difficult in this case: if you draw the matrix of $\alpha$, you'll see that it's a block-diagonal matrix, with three blocks along the main diagonal. So the characteristic polynomial is the product of the characteristic polynomials of these blocks, which are
$$
t^2-1 \ , \quad -t^3 + 1 \quad \text{and}\quad t^4 -1 \ .
$$
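A quick NumPy check (indices shifted to $0$-based; the matrix is built column by column from $\alpha(e_j)=e_{\rho(j)}$):
import numpy as np

rho = [2, 3, 4, 1, 6, 7, 5, 9, 8]      # rho(j) for j = 1, ..., 9
P = np.zeros((9, 9))
for j, pj in enumerate(rho, start=1):
    P[pj - 1, j - 1] = 1               # column j has a 1 in row rho(j)
print(np.sort_complex(np.round(np.linalg.eigvals(P), 6)))
# the 4th roots of unity, the 3rd roots of unity, and +1, -1, as the three blocks predict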
|
H: Eigenvectors of a $2 \times 2$ matrix when the eigenvalues are not integers
How can I calculate the eigenvectors of the following matrix?
$$\begin{bmatrix}1& 3\\3& 2\end{bmatrix}$$
I calculated the eigenvalues. I got
$$\lambda_1 = 4.541381265149109$$
$$\lambda_2 = -1.5413812651491097$$
But now I don't know how to get the eigenvectors. When I create a new matrix by subtracting the value $\lambda$ from all the entries on the main diagonal and try to solve the homogeneous system of equations, I get only the null vector for both $\lambda_1$ and $\lambda_2$....
When I used this website for calculating eigenvalues and eigenvectors, I got these eigenvectors:
$(0.6463748961301958, 0.7630199824727257)$ for $\lambda_1$
$(-0.7630199824727257, 0.6463748961301957)$ for $\lambda_2$
.... but have no idea how to calculate them by myself...
Is it even possible? Or is it only possible to calculate them numerically?
AI: Indeed, as Chris Eagle and Michael pointed out to you, calculators are not always your best friend.
Instead, if you do your maths with the characteristic equation, you'll find out that the eigenvalues look nicer this way:
$$
\lambda = \frac{3 \pm \sqrt{37}}{2}
$$
And it's not at all impossible to find the eigenvectors. For instance, for the one with the $+$ sign, you could start like this:
$$
\begin{pmatrix}
1 - \dfrac{3 + \sqrt{37}}{2} & 3 & \vert & 0 \\
3 & 2 - \dfrac{3 + \sqrt{37}}{2} & \vert & 0
\end{pmatrix}
$$
Hint: After some easy simplifications, you'll find out that it's very useful to multiply one of the rows by $1 - \sqrt{37}$ and that you can write the corresponding eigenvector as simple as this: $(1 - \sqrt{37}, -6)$.
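You can sanity-check that vector numerically, e.g. with NumPy:
import numpy as np

A = np.array([[1., 3.], [3., 2.]])
lam = (3 + np.sqrt(37)) / 2
v = np.array([1 - np.sqrt(37), -6.])
print(A @ v - lam * v)   # approximately [0, 0], so v is an eigenvector for (3 + sqrt(37))/2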
|
H: Transverse a single point
I got very confused with understanding this theorem. So $\{y\}$ is a point, how could it be transversed by $f$?
Proof: Given any $y \in Y.$ alter $f$ homotopically to make it transversal to $\{y\}$.
Thank you for your help~~
AI: The tangent space to a point is trivial, so to say that $f$ is transverse to $\{y\}$ is just to say that $y$ is a regular value of $f$, i.e. that $f'(x):T_xX\to T_yY$ is surjective for all $x\in f^{-1}(y)$.
|
H: MLE of Poisson Variable
Consider a random sample of size $n$ from a Poisson distribution with mean $\mu$. Let $\theta=P(X=0)$. Find the MLE of $\theta$ and show that it is a consistent estimator.
--We have $\theta=P(X=0)=e^{-\mu}$. To find the MLE, I took the log likelihood, $\ell(\mu,\mathbf{x})=-n\mu$, which has a derivative $-n$ with respect to $\mu$. Therefore the MLE would be $0$. Is this calculation correct? It seems too simple...
AI: You have to take the joint likelihood of the $n$ samples. If $X_1,X_2,\ldots,X_n$ are the samples you write (with $\lambda$ denoting the mean $\mu$) $$\log P(X_1,X_2,\ldots,X_n)=\log\prod P(X=X_i)\\=\sum\log P(X=X_i)\\=\sum (X_i\log\lambda -\lambda-\log(X_i!)).$$ To find the MLE of $\theta$ you can write the above expression in terms of $\theta$, and the maximizer with respect to $\theta$ is the desired MLE.
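Carrying this out gives $\hat\lambda=\bar X$, hence $\hat\theta=e^{-\bar X}$ by invariance of the MLE; a tiny simulation sketch (the numbers are arbitrary, numpy assumed) illustrates the consistency:
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0
theta = np.exp(-mu)                   # true P(X = 0)
for n in (10, 100, 10000):
    x = rng.poisson(mu, size=n)
    theta_hat = np.exp(-x.mean())     # MLE of theta
    print(n, theta_hat, theta)        # theta_hat approaches theta as n grows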
|
H: A cardinality of a graph
If I have a graph $G=(V,E)$,
what is the meaning of $|G|$ (the cardinality of $G$)?
I'd like to hear a few words about it.
Thank you!
AI: Generally, for a given graph $\,G=(V,E),\;$ the standard meaning of $|G|$ is simply $$|G| = |V|$$
|
H: Differentiation solving
Referring to the question answered by martini, I am failing to recall how we can write this: $$\sum_i \partial_i^2\rho = \frac{D\rho^2 - \sum_i x_i^2}{\rho^3} = \frac{D-1}{\rho}$$
AI: Well, $\rho^2 = \sum_i x_i^2$, hence $D \rho^2-\sum_i x_i^2 = (D-1)\rho^2$; then the $\rho^2$ cancels against the $\rho^3$ in the denominator, leaving the requested expression $\frac{D-1}{\rho}$.
The other equality arises from the observation that
$$ \frac{\partial \rho}{\partial x_j} = \frac{\partial }{\partial x_j} \sqrt{ \sum_i x_i^2} = \frac{1}{2\rho}\frac{\partial }{\partial x_j}\sum_i x_i^2 = \frac{x_j}{ \rho}$$
then differentiate once more,
$$ \frac{\partial }{\partial x_j} \frac{x_j}{ \rho} = \frac{\rho-\frac{\partial \rho}{\partial x_j}x_j}{\rho^2}=\frac{\rho^2-x_j^2}{\rho^3}$$
then change the $j$ to $i$ and sum to see the first equality.
|
H: Does there exist such a function?
Can we find a function $ f: \mathbb{R} \to \mathbb{R} $ that is continuous only at the points $ 1,2,\ldots,100 $?
AI: Take $$f(x) = \prod_{i=1}^{100}(x-i)1_{\mathbb{Q}}(x)$$ where $1_{\mathbb{Q}}(x) = 1$ if $x$ is rational and $0$ otherwise. Can you show that it has the desired properties?
|
H: A step in computing the cohomology ring of $\mathbb{C}P^n$
On page 250 of Hatcher's Algebraic Topology, he uses a certain corollary to compute the cohomology ring of $\mathbb{C}P^n$. The relevant section is below for convenience:
I understand the proof except for his statement that one can deduce $H^{2i}(\mathbb{C}P^n)$ is generated by $\alpha^i$ for all $i<n$. The base case follows trivially from the induced isomorphism between $H^2(\mathbb{C}P^1)$ and $H^2(\mathbb{C}P^2)$. I tried working everything out after that, but in the end I arrive at $H^{2i} (\mathbb{C}P^n)$ is generated by $\alpha^i$ for all $i<n-1$, not $n$, as Hatcher says. I really don't see how one could use the fact that $H^2(\mathbb{C}P^2)$ is generated by $\alpha$ to show that $H^4(\mathbb{C}P^3)$ is generated by $\alpha^2$. How does one see this?
AI: To take the example at the end of your question, the fact that $H^4(\mathbb CP^3)$ is generated by $\alpha^2$ is deduced from the fact (part of the induction hypothesis) that $H^4(\mathbb CP^2)$ is generated by $\alpha^2$. That's why Hatcher first recalls that the inclusion of $\mathbb CP^2$ in $\mathbb CP^3$ induces an isomorphism of cohomology up to dimension $4$, so that the full result for $\mathbb CP^2$ (including $H^4$) is available when proving the result for $\mathbb CP^3$. (Analogous comments apply for general $n$ in place of $3$.)
|
H: Irrational sum to integers?
Is it possible for $(a-b)k + bf$ to be an integer if $k,f$ are irrational numbers and $a,b$ are integers? What about $(a-b)k - bf$?
AI: $$(0-1)\sqrt2+1\cdot\sqrt2=0$$
$$(2-1)\sqrt2-1\cdot\sqrt2=0$$
In general this is possible, if and only if the set $\{1,k,f\}$ is linearly dependent over $\mathbb{Q}$. This is close to being a tautology, as seen by the following argument.
Assume the set $\{1,k,f\}$ is linearly dependent over $\mathbb{Q}$, then by clearing the denominators we get a linear dependency relation with integral coefficients $c_i,i=1,2,3$:
$$
c_1k+c_2f=c_3.
$$
Set $c_2=\pm b$ (the sign depending on the case you are interested in) and solve $a$ from the equation $a-b=c_1\implies a=b+c_1$.
On the other hand, if
$$
(a-b)k\pm bf=c
$$
with integers $a,b,c$, then clearly $\{1,k,f\}$ is linearly dependent over $\mathbb{Z}$, and hence also over the rationals.
So for example you can do this with $k=1+\sqrt2$ and $f=\sqrt2$. Or $k=45+17\sqrt{17}$ and $f=-4+\sqrt{17}$. OTOH you cannot do it with $k=\sqrt2$ and $f=\sqrt3$.
Linear dependence over the rationals may not sound like a very useful tool here unless you are familiar with elementary theory of field extensions. That theory will give you useful criteria for deducing linear independence over $\mathbb{Q}$.
|
H: Larger circuit design for same boolean function?
I've designed this circuit with 4 logic gates, and did Karnaugh-map simplification and the Quine-McCluskey method. However, I found out that my circuit design is already optimized, so I can't really compare how the simplifications offer a less expensive circuit.
I'd like to add a few more gates to the circuit without changing its boolean function, which is $(X_2'\cdot X_0)+(X_2\cdot X_1)$.
AI: You are correct in your desire to add extra gates, because as it's currently constructed it is prone to a logical hazard. Particularly there is a risk of the output briefly transitioning to a $0$ when the input changes between $111$ and $011$ when it should remain $1$.
Adding the term $X_0X_1$ will not change the logical function of the circuit, and will eliminate the hazard. So your expression will become:
$$F=X_0X_2'+X_1X_2+X_0X_1$$
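A brute-force truth-table check in Python (just a sanity check) confirms that the consensus term $X_0X_1$ does not change the function:
from itertools import product

for x2, x1, x0 in product((0, 1), repeat=3):
    f = ((not x2) and x0) or (x2 and x1)                   # original expression
    g = ((not x2) and x0) or (x2 and x1) or (x0 and x1)    # with the consensus term added
    assert bool(f) == bool(g)
print("the two expressions agree on all 8 inputs")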
|
H: How many quartic polynomials have single-digit integer coefficients?
Let $X$ be the set of all polynomials of degree 4 in a single variable $t$ such that every coefficient is a single-digit nonnegative integer. Find the
cardinality of $X$.
This is a question from Balakrishnan's Intro. Discrete Mathematics, and it's an even-numbered exercise, so I can't see if my solution is correct.
Let $$t_i\ (i = 1, 2,\ldots,k)$$ be the coefficients in a polynomial with $k$ terms. Then, $$t_k>0\in\mathbb Z$$
The first polynomial can be expressed as:
$$t_1^4 + t_2^3 + t_3^2 + t_4$$
the second
$$t_1^4 + t_2^2 + t_3$$
and so on
I think I can recognize and define a pattern here in that since the polynomials must all be of degree $4$, each polynomial must have $t_k^4$ as a coefficient, correct? So that means that, for each coefficient of the polynomials, there are $9$ possible nonnegative integers. Wouldn't that mean:
$$(1)(9)(9)(9) + (1)(9)(9) + (1)(9) + (1)$$
is the cardinality of $X$?
AI: Your solution is off a little bit; and your notation is off a little more! So, let's see if we can fix that.
A polynomial of degree four in $t$ is an expression of the form
$$
at^4+bt^3+ct^2+dt+f,
$$
where we require that $a\neq 0$.
The choices for $a$, $b$, $c$, $d$, and $f$ don't affect each other; $a$ must be in $1,2,\ldots,9$, and the rest of the coefficients must be in $0,1,2,\ldots,9$. (Notice that it said nonnegative, not positive!)
So, there are 9 choices for $a$, and 10 choices each for $b$, $c$, $d$, and $f$; that gives us $9\cdot 10^4=90000$ different polynomials of degree 4.
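If you want to convince yourself, a brute-force count in Python (a throwaway sketch) gives the same number:
from itertools import product

count = sum(1 for a, b, c, d, f in product(range(10), repeat=5) if a != 0)
print(count)   # 90000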
|
H: $\int_{0}^\infty e^{-xu}\sin x \, dx$: Where is my mistake?
Below find a scan of a page out of Buck's Advanced Calculus. Right below the line marked (6-33), he seems to claim that $\int_{0}^\infty e^{-xu}\sin x \,dx = \frac{1}{1+u^2}$ (for $u>0$). I keep getting $\frac{u^2}{1+u^2}$, and I'm wondering if I'm making a mistake, or if there's a typo.
$$\int_{0}^\infty e^{-xu}\sin x \, dx = \int_{0}^\infty e^{-xu} d(-\cos x)=-\cos x e^{-xu}]_{0}^{\infty}-\int_{0}^\infty(-\cos x)d(e^{-xu})$$ $$=1 + \frac{-1}{u}\int_{0}^\infty e^{-xu}\cos x \, dx=1 - \frac{1}{u}\int_{0}^\infty e^{-xu}d(\sin x)$$ $$= 1 - \frac{1}{u}\left([e^{-xu}\sin x]_{0}^{\infty} - \int_{0}^\infty \sin x \,d (e^{-xu})\right)= 1 - \frac{1}{u}\left(\frac{1}{u}\int_{0}^\infty e^{-xu}\sin x \, dx\right)$$ so
$$I = 1 - \frac{1}{u^2}I \Rightarrow \frac{u^2 + 1}{u^2}I = 1 \Rightarrow I = \frac{u^2}{u^2+1}.$$
What do you think?
AI: It would seem you made a careless error. What is $\dfrac{\partial}{\partial x}e^{ux}$?
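For the record, $\frac{\partial}{\partial x}e^{-xu}=-u\,e^{-xu}$ (a factor of $u$, not $\frac1u$), and a sympy sketch (assuming sympy is available) confirms the book's value:
import sympy as sp

x, u = sp.symbols('x u', positive=True)
I = sp.integrate(sp.exp(-x * u) * sp.sin(x), (x, 0, sp.oo))
print(sp.simplify(I))   # 1/(u**2 + 1)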
|
H: A sentence that has infinite models, finite model, but no finite model above certain cardinality
Let $T$ be a theory and $\sigma$ a sentence, such that
there exists infinite $\mathfrak{A} \models T + \sigma$.
there exists finite $\mathfrak{A} \models T + \sigma$.
there exists $n \in \mathbb{N}$ such that no finite $\mathfrak{A}$ with $|\mathfrak{A}| > n$ satisfies $\mathfrak{A} \models T + \sigma$.
Is this possible?
AI: Certainly.
Consider $\cal L$ to be the language containing one binary relation symbol $<$.
$T$ is the theory stating that $<$ is a linear order (irreflexive, transitive and total).
$\sigma$ is the statement that if there are $n$ different elements in the universe, then $<$ is unbounded. That is: $$\Big(\exists x_1\ldots\exists x_n\big(\bigwedge_{i<j\leq n}x_i\neq x_j\big)\Big)\rightarrow\forall x\exists y(x<y)$$
It's easy to see that $T$ has finite models of any cardinality, as well as infinite models. But if $\frak A\models\sigma$ then its universe is infinite or has fewer than $n$ different objects.
|
H: Real variable lemma
Can someone help with the following lemma?
Lemma: Let $\theta'(r)\geq0,\theta(r)>0$ and $\dfrac{\theta'(r)}{\theta(r)}$ decreasing for $r>0$. Then
$$\frac{\displaystyle\int^r_0\theta'(s)\,ds}{\displaystyle\int^r_0\theta(s)\,ds}$$
is decreasing.
Thanks!
AI: I'll assume $\theta(r)$ doesn't vanish in $[0, \infty)$. Now for every $\displaystyle 0 \leq s < r$, $\displaystyle \frac{\theta'(r)}{\theta(r)} \leq \frac{\theta'(s)}{\theta(s)}$. Cross multiply and integrate from $0$ to $r$ to get $\displaystyle \theta'(r) \int_0^r \theta(s) ds - \theta(r) \int_0^r \theta'(s)ds \leq 0$. But this means, $\displaystyle \frac{d}{dr}$ of your quotient of integrals is $\leq 0$.
|
H: Is this a vertical asymptote?
I have this function:
$$ f(x) = (x+1) \cdot e^{\frac{1}{x}} $$
I have the two side limits:
$$ \lim_{x \to 0^-} { (x+1) \cdot e^{\frac{1}{x}} } = 0 $$
$$ \lim_{x \to 0^+} { (x+1) \cdot e^{\frac{1}{x}} } = +\infty $$
So the left side limit is not infinity. Is that still considered an asymptote?
AI: Yes, an asymptote exists at $\;\;x = 0$. An asymptote can exist even if a limit does not approach infinity, as in the case here, where the function is asymptotic to the $x$ axis ($y = 0$) near the origin.
See the graph of $f(x) = (x + 1)\, e^{1/x}\,$ below, compliments of Wolfram Alpha. The graph below is "zoomed in" near the origin. As $x \to 0^-$, $f(x) \to 0$, but never reaches zero. Hence, the function becomes horizontally asymptotic to $y = 0$.
Here is a more "global" version of the graph of the same function, where you can see that the function is vertically asymptotic to $x = 0$ as $x\to 0^+$.
|
H: To show an analytic function is one-to-one on the unit disk
Let $\displaystyle f(z) = \sum_{n=0}^\infty a_nz^n$ be analytic in the unit disk $D_1(0)$ with $f(0) = 0$ and $f'(0) = 1$. Prove that if $\displaystyle \sum_{n=2}^\infty n|a_n| \le 1$, then $f$ is one-to-one in $D_1(0)$.
I am able to show that $f$ has a unique zero in $D_1(0)$ and $f$ is locally one-to-one, but I cannot go any further.
We may write $\displaystyle f(z) = z + \sum_{n=2}^\infty a_nz^n$. As $\displaystyle \sum_{n=2}^\infty n|a_n| \le 1$, we have $\displaystyle \sum_{n=2}^\infty |a_n| < 1$. So
$$
\left|\sum_{n=2}^\infty a_n z^n \right|_{C_1} < 1
$$
By Rouché's Theorem, $f$ and $z$ have the same number of zeros inside $C_1$. So 0 is the unique zero of $f$.
Next, we let $g(z) = f'(z) -1$. So $\displaystyle g(z) = \sum_{n=2}^\infty na_nz^{n-1}$. Again, we use the assumption that $\displaystyle \sum_{n=2}^\infty n|a_n| \le 1$ to conclude that $g$ maps the unit disk into itself. Moreover, $g(0) = 0$. Apply the Schwarz's Lemma, we have
$$
|f'(z)-1|\le |z|, z \in D_1(0)
$$
which means that $f'(z) \ge 1-|z| >0$ for all $z \in D_1(0)$, i.e., $f$ is locally injective.
AI: Hints:
Show that $\operatorname{Re}f'>0$ in $D$.
Check that $\displaystyle \frac{f(b)-f(a)}{b-a} = \int_0^1 f'(a+t(b-a))\,dt$ for any $a,b\in D$ with $a\ne b$.
Combine 1 and 2.
|
H: Evaluate $\int_2^\infty{\frac{3x-2}{x^2(x-1)}}$
To be shown that $\int_2^\infty{\dfrac{3x-2}{x^2(x-1)}}=1-\ln2$
My thought: $\dfrac{3x-2}{x^2(x-1)}=\dfrac{3x}{x^2(x-1)}-\dfrac{2}{x^2(x-1)}$
• $\dfrac{3x}{x^2(x-1)}=\dfrac{3}{x(x-1)}=\ldots=-\dfrac{3}{x}+\dfrac{3}{x-1}$
• $\dfrac{2}{x^2(x-1)}=\ldots=-\dfrac{2}{x^2}+\dfrac{2}{x-1}$
But developing the sum of the integrals of the above two gives a logarithm of infinity in my results. I don't know what I am doing wrong. Can you please confirm the dots above? I have checked them several times. Thanks a lot.
AI: Hint
Write the integrand as $$\frac{3x-2}{x^2(x-1)}=\frac{Ax+B}{x^2}+\frac{C}{x-1}$$ in which $A,B$ and $C$ are unknown constants, and then find the proper values for them. I think this way you can find them more easily than the way you noted. Indeed, $$\frac{Ax+B}{x^2}+\frac{C}{x-1}=\frac{(A+C)x^2+x(B-A)-B}{x^2(x-1)}$$ and therefore you get $$A+C=0, ~~B-A=3,~~-B=-2$$ and so...
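As a sanity check on whatever values you find, sympy's apart does the decomposition (assuming sympy is available):
import sympy as sp

x = sp.symbols('x')
print(sp.apart((3*x - 2) / (x**2 * (x - 1))))   # -1/x + 2/x**2 + 1/(x - 1), up to ordering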
|
H: Are cross-references context-free?
First, a little background: In XML there is the ability for one part of an XML document to reference another part of the document (i.e., a cross-reference). Below is an example. The BookSigning element references a Book element:
<Library>
<BookCatalogue>
...
<Book isbn="0-440-34319-4">
<Title>Illusions</Title>
<Author>Richard Bach</Author>
</Book>
...
</BookCatalogue>
<BookSignings>
...
<BookSigning isbn_ref="0-440-34319-4" />
...
</BookSignings>
</Library>
That is, isbn_ref points to isbn. The value of isbn_ref and isbn match.
If there is no matching isbn value then the isbn_ref is dangling and the XML is invalid.
I want to know if cross-references in XML can be expressed using a Context-Free Grammar (CFG)? Or, does the use of cross-references make XML Context-Sensitive?
Dealing with the XML syntax is much too complicated, so I would like to abstract the problem to something more manageable. I believe that cross-referencing in XML is analogous to this: Let x represent any XML element and a represent one end of a cross-reference. We could have an XML document without any cross-references, which corresponds to a sequence of x's:
xxxxxxxxxxxxx
Suppose that the XML document has a cross-reference, then it has an a and somewhere else in the document there must be exactly one other a. So amongst the x's there must be zero a's or exactly two a's:
xxxxxxaxxxxxxaxxxx
I wrote a CFG for that language:
S -> X | A
X -> xX | empty
A -> XaB | BaX
B -> XaX
So, if I have a correct abstraction of cross-referencing in XML, then I have proven that cross-references in XML are context-free. The problem is that I am not convinced that my abstraction faithfully represents cross-referencing in XML. Do I have a correct abstraction? Are cross-references context-free?
AI: I think there's a pretty convincing but informal argument that cross-references are not context-free. A context-free language can be matched by a push-down automaton. However, if you have multiple cross-references and the IDs are drawn from an unbounded set, you need to "remember" all of the IDs you've seen, so you can't guarantee that the one you need will be at the top of the stack when you encounter a reference.
(For what it's worth, I think your abstraction also breaks down on the possibility of multiple references to the same ID, but that's by the by).
|
H: Calculating Log-likelihood using Raphson and Jacobian matrices?
I am reading the following paper:
http://www.ntuzov.com/Nik_Site/Niks_files/Research/papers/stat_arb/Ahmed_2009.pdf
and in particular page 13. I want to try and calculate lambda_t(p) = exp^(Beta^T x_t) but I am having problems with the fact beta is derived from a log-likelihood estimation and also jacobian matrices, which require newton raphson:
Now I never did a degree in maths (I did engineering) but I am reasonably mathematical. I need therefore quite a bit of help trying to work out how to actually implement the above in code. I am trying to calculate the order rate, lambda_t(p) = exp^(Beta^T x_t).
I found this, which I think may be answering my question:
http://en.wikipedia.org/wiki/Jacobi_method#Example
If the above link is no good, would someone be able to give me a really basic example of how I could calculate beta, so that I can then calculate lambda_t(p) = exp^(Beta^T x_t)?
Does anyone know what T is representing?
I wasn't sure if I need to implement newton raphson, as well as the jacobian method, or whether the latter encompasses that.
Help most appreciated,
Thanks
AI: The wiki link above is okay, but when using Newton-Raphson and Jacobi with functions and vectors like yours, I would actually read this one instead. A little more complex, but that's because it's more generally applicable then the link you referenced. Your trying to find a $\beta$ that solves the MLE equation:
For the NR iteration, your iterating from $\beta$-value to $\beta$-value along the function until you reach the solution, which is the desired $\beta$-value. The Jacobian represents the derivative of the function you need to solve, in this case the MLE; it allows you to iterate properly along the curve (derivative = a function's rate of change). On the wiki page, $\beta: MLE(\beta)=0$ analogous to $x: f(x)=0$, and $f'(x)$ is analogous to $J(\beta)$.
Does anyone know what T is representing?
T is simply the transpose operation (note that it says $\beta\in\mathbb{R}^p$)- the vector $\mathbf{x_t}$ looks to me like it should be $\in\mathbb{R}^{n\times p}$, otherwise the next equation (the summation) would not have enough $\mathbf{x_i}$ column vectors.
I wasn't sure if I need to implement newton raphson, as well as the jacobian method, or whether the latter encompasses that.
Other way around: Newton-Raphson (your last equation) encompasses the Jacobian (denoted $J$; it's inverse is used).
Note The Newton Raphson technique doesn't iterate until the error goes to zero: it takes an infinite amount of time to do so. Instead, set an upper-bound (a given tolerance $\epsilon$) on the error between $\beta$-values (using the 2-norm of the difference), and use that as a while-loop condition for your $\beta_n$ iterations.
The first thing you'll need to do is come up with an "educated guess" of an initial value, call it $\beta_0$. Whenever possible, try to think of typical $\beta$ values for the given scenario, and pick something around there.
From here, you need to calculate the (1) Jacobian matrix and (2) partial derivative for that $\beta$-value, where each element is given by Eq3 and Eq2 (respectively). You can use for-loops to iterate through $jac$ and $mle$ and create the matrix/vector element-by-element (since each element requires a summation). Define functions where the only parameter you have to pass to them is $\beta$, and let them output $J(\beta_n)$ and $\frac{\partial\log L}{\partial\beta}$.
Note the following relationships:
MLE (Eq 2) is the partial derivative of Eq 1; it's one element of a vector.
Jacobian (Eq 3) is the partial derivative of Eq 2; it's one element of a matrix.
The jacobian is size $p\times p$ and $mle$ is a $p$-vector, since the
$\beta$-vector is $\in\mathbb{R}^p$.
From there, you have all the values you need to get $\beta_1$ (a.k.a. $\beta_{n+1}$) from Eq4. Using this new $\beta$ value, compare the change in beta to the tolerance, and iterate through the equations with the new $\beta$ value if needed.
In terms of code:
define $mle(\beta_n)$ and $jac(\beta_n)$ using equations 2 and 3 above
Initialize $err=10^8$
while $err\geq\epsilon$
{
$\Delta\beta = [jac(\beta_n)]^{-1}*mle(\beta_n)$;
$err = ||\Delta\beta||$;
$\beta_{n+1} = \beta_{n} + \Delta\beta$;
}
Just a quick note: although the iteration above shows the inverse of the jacobian being used, for large matrices this becomes computationally expensive. You didn't specify what language you were using, but if it's something like MATLAB, use $jac(\beta_n)\backslash mle(\beta_n)$, as gaussian elimination saves loads of time.
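If you end up working in Python rather than MATLAB, a rough NumPy sketch of the same loop could look like this (mle and jac are placeholders you would implement from Eqs. 2 and 3 of the paper; np.linalg.solve plays the role of the backslash):
import numpy as np

def newton_raphson(mle, jac, beta0, tol=1e-8, max_iter=100):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        delta = np.linalg.solve(jac(beta), mle(beta))   # solve J * delta = mle, no explicit inverse
        beta = beta + delta                             # same sign convention as the pseudocode above
        if np.linalg.norm(delta) < tol:
            break
    return beta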
|
H: (Revisted) Surjectivity: Examples for Compositions
I'm asked to give an example where $f$ is surjective, but $g\circ f$ is not. I suspect that $f(x)=x$ and $g(x)=\frac{1}{x}$ will do the trick, namely for $f,g : \mathbb{R}\rightarrow \mathbb{R}$, right?
Well, what about where $g$ is surjective...
AI: Since $\operatorname{f}$ is surjective, the surjectivity of $\operatorname{g} \circ \operatorname{f}$ depends on $\operatorname{g}$ alone. In fact the image of $\operatorname{g} \circ \operatorname{f}$ is just the image of $\operatorname{g}$. You can choose any non-surjective function $\operatorname{g}$ and $\operatorname{g}\circ \operatorname{f}$ will fail to be surjective.
Your example $\operatorname{g}(x) = \frac{1}{x}$ works fine since $\operatorname{g}$ misses $0$, i.e. there is no $x$ for which $\frac{1}{x}=0$. Other simple examples include $\operatorname{g}(x) = x^2$, $\operatorname{g}(x) = \operatorname{e}^x$ or even the constant function $\operatorname{g}(x) = 0$.
|
H: Running time of adding $n$ items
I am trying to calculate how many binary additions it takes to add $n$ items. I see that with each iteration of binary addition, I am left with $n/2$ items so I see that it would take $\log_2 n$ iterations.
By writing out a few examples, I see that it is $n-1$ binary additions to add together $n$ items but I'm wondering how I can prove this.
Is there a formula which shows $\Sigma (\log_2 \log_2 (\log_2 ...(n)))$ or something?
AI: You have $n$ items when you start. Each time you do an addition, instead of the two items, you have only one item (their sum): so the total number of items decreases by $1$.
So as you have $n$ items initially and each addition decreases the number of items by $1$, you need to make $n-1$ additions for the total number of items to reach the final value $1$.
Bonus: In a knockout tournament with $n$ players, how many matches do you need?
|
H: How to go from a sum to a product and a product to a sum?
I have read here (third post down) that exponentials turn sums into products and logarithms turn products into sums. Can someone please further explain this?
AI: Consider the set $\mathbb R$ of all real numbers together with the operation of addition and consider the set $\mathbb R_+$ of all positive real numbers with the operation of multiplication. Both of these are examples of groups. The properties that $e^{a+b}=e^a\cdot e^b$ and $\ln(ab)=\ln(a)+\ln(b)$, show that $\exp:(\mathbb R,+)\to (\mathbb R_+,\cdot)$ is a group isomorphism. Its inverse is $\ln$.
|
H: How to study positivity of $x^3 -x -1$?
Studying this inequality:
$$x^3 -x -1 \ge 0 $$
Since I can't apply Ruffini's rule, I cannot see a method to study the function's positivity. I could decompose it to:
$$ x \cdot (x+1) \cdot (x-1) -1 \ge 0$$
But the $-1$ is a problem, how do I go on?
AI: You can study its derivative. Note that $f'(x)=3x^2-1$. This has roots at $x_1,x_2=\pm\dfrac 1{\sqrt 3}$, and one can see it is positive on $(x_1,\infty)$ and $(-\infty,x_2)$, and negative on $(x_2,x_1)$. Then, your function will be increasing on $(-\infty,x_2)\cup(x_1,\infty)$ and decreasing on $(x_2,x_1)$. Study its roots to conclude. As computing the roots is not easy, one can argue as follows: $x^3-x-1=0$ is equivalent to $x^3=x+1$. Because of the steep decrease of $x^3$ for negative values, we can argue there are no negative roots. On the other hand, $x^3\leq x+1$ for example for $x=0$, but $x^3\geq x+1$ for say $x=2$. So there must be a positive root in $[0,2]$.
Using Cardano's method on the depressed cubic one can find the positive root to be $$x=\left(\frac 1 2+\sqrt{\frac 1 4-\frac 1 {27}}\right)^{1/3}+\left(\frac 1 2-\sqrt{\frac 1 4-\frac 1 {27}}\right)^{1/3}$$
|
H: Eigenvectors from different eigenspaces of an operator are orthogonal?
Let $V$ be a finite-dimensional inner product space over $\mathbb C$, and $T : V \to V$ an operator (not necessarily normal), with eigenvectors $\{v_1,...,v_k\}$ for different eigenvalues $\{\lambda_1,...,\lambda_k\}$. Are $\{v_1,v_2,...,v_k\}$ necessarily an orthogonal set of vectors?
AI: No, you really need normality. Consider, for instance, the matrix
$$
A = \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix};
$$
viewed as a linear operator on $\mathbb{C}^2$ with the standard inner product, $A$ is not normal. Then for
$$
v_1 := \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad v_2 := \begin{pmatrix} 1 \\ -2 \end{pmatrix},
$$
$Av_1 = v_1$, $Av_2 = -v_2$, but $\langle v_1, v_2 \rangle = 1 \neq 0$.
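A two-line NumPy check of this example:
import numpy as np

A = np.array([[1., 1.], [0., -1.]])
v1 = np.array([1., 0.]); v2 = np.array([1., -2.])
print(A @ v1, A @ v2)     # [1. 0.] and [-1. 2.]: eigenvectors for +1 and -1
print(np.dot(v1, v2))     # 1.0, so they are not orthogonal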
|
H: Show that: $\lim\limits_{r\to\infty}\int\limits_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta=0$
I would like to show $\lim\limits_{r\to\infty}\int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta=0$.
Now, of course, the integrand does not converge uniformly to $0$ on $\theta\in [0, \pi/2]$, since it has value $1$ at $\theta =0$ for all $r\in \mathbb{R}$.
If $F(r) = \int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta$, we can find the $j$th derivative $F^{(j)}(r) = (-1)^j\int_{0}^{\pi/2}\sin^{j}(\theta)e^{-r\sin\theta}\text d\theta$, but I don't see how this is helping.
The function is strictly decreasing on $[0,\pi/2]$, since $\partial_{\theta}(e^{-r\sin\theta})=-r\cos\theta e^{-r\sin \theta}$, which is strictly negative on $(0,\pi/2)$.
Any ideas?
AI: It's only enough to show that
$$ \int\limits_{0}^{\pi/2}{e^{-r\sin\theta}\text d\theta}\le \int\limits_{0}^{\pi/2}{e^{-r\frac{2}{\pi}\theta}\text d\theta}=\frac{\pi}{2r}\left(1-e^{-r}\right) \to 0 \quad (r \to +\infty)$$
|
H: Commutant of bounded linear operators on a Hilbert space
Given a Hilbert space $H$, denote by $\mathcal{A}=\mathcal{B}(H)$ the C*-algebra of bounded linear operators on $H$. Denote further by
$$\mathcal{B}(H)' := \{A\in \mathcal{B}(H) : [A,B]=0 \;\forall B \in \mathcal{B}(H)\}$$
the commutant of $\mathcal{B}(H)$.
I think $\mathcal{B}(H)' = \{\lambda \mathbb{1}:\lambda \in \mathbb{C}\}$, but how can I prove this?
Are the elements of $\mathcal{B}(H)'$ invertible in $\mathcal{B}(H)'$? (Then my claim would follow by the Gelfand-Mazur theorem...)
AI: If $A$ is in the commutant and $x$ is any non-zero vector, then the fact that $A$ commutes with the orthogonal projection to the line spanned by $x$ implies that $Ax$ is a scalar multiple $\lambda x$ of $x$. The scalar has to be the same for all $x$ because if $x$ and $y$ are multiplied by different scalars, then $x+y$ would not be sent to a scalar multiple of itself.
|
H: Finite set on compact manifolds
I feel blocked with this claim - it sounds intuitively true, just thinking of a jellyfish entering a real line, the intersection of her legs with the real line is certainly finite since the jellyfish is compact - but I am stuck on why.
$X$ and $Z$ are closed submanifolds inside $Y$ with complementary dimension. If at lease one of them, say $X$, is compact, and $X \pitchfork Z$, then $X \cap Z$ must be a finite set of points.
I understand that $X \cap Z$ is a zero-dimensional manifold. So it must be a series of disjoint points.
Then I start to guess: this conclusion is perhaps related to the fact that every sequence in a compact set has a convergent subsequence? So take a sequence in $X \cap Z$, and conclude that the set needs to be finite?
The statement is from Guillemin and Pollack's Differential Topology.
AI: As you say, $X\cap Z$ is zero dimensional. If $X$ is compact and $Z$ is closed, then $X\cap Z$ is a closed subset of $X$, hence compact. If it is not finite, then there is a point $x\in X\cap Z$ and a sequence $(x_n)_{n\geq1}$ with values in $X\cap Z$, all distinct from $x$, such that $x_n\to x$ as $n\to\infty$.
Can you reach a contradiction from this?
Notice that $x$ is a point of transverse intersection, so you know what the intersection looks like near $x$.
|
H: Show that $f(x)=g(x)$ for all $x \in \mathbb R$
Let $f,g:\mathbb R \rightarrow \mathbb R$ both be continuous. Suppose that $f(x)=g(x)$ for all $ x \in D$, where $D\subseteq\mathbb R$ is dense. Show that $f(x)=g(x)$ for all $x \in \mathbb R$. I'd like a hint to solve this question.
AI: One more hint:
Define $h(x) = f(x)-g(x)$; then $h$ is continuous and $h(x)=0$ for every $x \in D$. Fix $y\in\mathbb R$ and $\epsilon>0$. By continuity of $h$ at $y$, there is a $\delta>0$ such that $|h(y)-h(x)|<\epsilon$ whenever $|x-y|<\delta$. Since $D$ is dense, there exists $x \in D$ with $|x-y| < \delta$, and for such an $x$ we get $|h(y)| = |h(y)-h(x)| <\epsilon$. As $\epsilon>0$ was arbitrary, $h(y)=0$.
|
H: Why does $\lim\limits_{x\to0}\sin\left(\left|\frac{1}{x}\right|\right)$ not exist?
Can someone explain, in simple terms, why the following limit doesn't exist?
$$\lim \limits_{x\to0}\sin\left(\left|\frac{1}{x}\right|\right)$$
The function is even, so the left hand limit must equal the right hand limit. Why does this limit not exist?
AI: The function is indeed even, but as we will see, this does not imply that the limit exists.
Assume that the limit exists.
$$ \lim_{x\to 0^+} \sin\left(\frac1x \right) $$
This is the same as your limit, because the function is even, so the left-hand and right-hand limits agree (when they exist); and since $x > 0$, the absolute value sign can be dropped. Now consider
$$ \lim_{u\to\infty} \sin u $$
Using $u = 1/x$ (so that $u\to\infty$ as $x\to0^+$), this is the same limit, and it certainly does not exist: for instance $\sin(n\pi)=0$ for every integer $n$, while $\sin\left(2n\pi+\frac{\pi}{2}\right)=1$. This contradicts our assumption, so the original limit does not exist.
|
H: questions about total derivative
I am learning some stuff about the total derivative and got these two questions:
1) I was wondering if a linear map is totally differentiable. So let $A$ be linear, $A\in\mathcal L(\mathbb R^n,\mathbb R^m)$. Then
$$\lim_{h\rightarrow 0}\frac{A(x_0+h)-A(x_0)-A(h)}{\|h\|}=0$$ since $A(x_0+h)=A(x_0)+A(h)$. So $A$ is totally differentiable at $x_0$.
Now does it have to be $A'(x_0)=A$ (constant) or $A'(x_0)=A(x_0)$?
2) For a function $f:\mathbb R^n\rightarrow\mathbb R^m$ why is it enough to consider all components $f_1,\dots,f_m$ to show total differentiability?
Thanks!
AI: As for #2, we can show the equivalence "totally differentiable in each component $\iff$ totally differentiable" as follows: $\Leftarrow$ is immediate, since the absolute value of each component of the remainder term is bounded by the norm of the remainder. $\Rightarrow$ can be shown by taking, for a given $\epsilon>0$, a $\delta_i>0$ such that the $i$th component of the remainder is less than $\epsilon/\sqrt{m}$ in absolute value; then the norm will be less than $\epsilon$ if we choose $\delta \leq \min\limits_{1\leq i\leq m}\delta_i$.
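The inequality underlying both directions (an added remark for reference): for any $v=(v_1,\dots,v_m)\in\mathbb R^m$,
$$|v_i|\;\le\;\|v\|\;\le\;\sqrt m\,\max_{1\le i\le m}|v_i|,$$
so the remainder term in the definition of total differentiability, divided by $\|h\|$, tends to $0$ if and only if each of its components does.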
|
H: Indefinite integral $\int{\frac{dx}{x^2+2}}$
I cannot manage to solve this integral:
$$\int{\frac{dx}{x^2+2}}$$
The problem is the $2$ in the denominator; I am trying to turn it into something like $\int{\frac{dt}{t^2+1}}$:
$$t^2+1 = x^2 +2$$
$$\int{\frac{dt}{2 \cdot \sqrt{t^2-1} \cdot (t^2+1)}}$$
But it's even harder than the original one. I also cannot try partial fraction decomposition because the polynomial has no real roots. How do I go on?
AI: Hint:
$$x^2+2 = 2\left(\frac{x^2}{\sqrt{2}^2}+1\right)$$
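One possible way to continue from this hint (added for illustration, not part of the original answer): substituting $t = x/\sqrt 2$, $dx=\sqrt2\,dt$, gives
$$\int\frac{dx}{x^2+2}=\frac12\int\frac{\sqrt2\,dt}{t^2+1}=\frac1{\sqrt2}\arctan t+C=\frac1{\sqrt2}\arctan\frac{x}{\sqrt2}+C.$$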
|
H: Glueing morphisms of sheaves together - can I just do this?
While trying to solve a certain exercise in Hartshorne I realized that I need to use the following result:
Let $X,Y$ be two ringed topological spaces. Suppose we have a covering $\{U_i\}$ of $X$ and morphisms $f_i :U_i \to Y$ such that $f_i|_{U_i \cap U_j} = f_j|_{U_i \cap U_j}$ for every $i,j$. The bar just denotes ordinary restriction of morphisms to a subspace. Then there is a morphism $f : X \to Y$ such that $f|_{U_i} = f_i$.
Now the obvious choice for a morphism $f(x) = f_i(x)$ if $x \in U_i$ should work. Indeed for every $i$ we have maps $f_i^\sharp : \mathcal{O}_Y \to {f_i}_\ast(\mathcal{O}_X|_{U_i})$ and using this I want to build a morphism $f^\sharp : \mathcal{O}_Y \to f_\ast \mathcal{O}_X$. It suffices to define $f^\sharp$ open set by open set so choose $V \subseteq Y$ open. Choose $s \in \mathcal{O}_Y(V)$ and define $$s_i := f^\sharp_i(s) \in {f_i}_\ast (\mathcal{O}_X|_{U_i})(V).$$
My question is: Why should it be the case that $s_i|_{f_i^{-1}(V) \cap f_j^{-1}(V)} = s_j|_{f_i^{-1}(V) \cap f_j^{-1}(V)}$? I ask this because the $\{f_i^{-1}(V)\}$ cover $f^{-1}(V)$ and then I can invoke the gluing property of a sheaf to give me a section of $f_\ast \mathcal{O}_X(V)$. I am tempted to say from the diagram
(diagram not reproduced here)
that I can apply the functor $\mathcal{O}_Y(-)$ to the bottom right corner and $\mathcal{O}_X(-)$ to the other three corners and still obtain a commutative square, but is this legal?
AI: You are defining maps $$ \mathcal{O}_Y(V) \stackrel{s_i^V}{\longrightarrow} \mathcal{O}_X(f^{-1}(V) \cap U_i) = \mathcal{O}_X(f_i^{-1}(V))$$ and then wanting to use the fact that the sets $f^{-1}(V)\cap U_i$ cover $f^{-1}(V)$ to define the unique value of $s^V$ by virtue of $\mathcal{O}_X$ being a sheaf. So we have two restriction maps $$\mathcal{O}_X(f^{-1}(V)\cap U_i) \stackrel{\mbox{res}_{ij}}{\longrightarrow} \mathcal{O}_X(f^{-1}(V)\cap U_i \cap U_j) \\ \mathcal{O}_X(f^{-1}(V)\cap U_j) \stackrel{\mbox{res}_{ji}}{\longrightarrow} \mathcal{O}_X(f^{-1}(V)\cap U_i \cap U_j) $$ and we want these maps to agree on the outputs of $s_i$ and $s_j$. But what is the sheaf morphism for $f_i |_{U_i \cap U_j}$? It restricts $f_i$ to $U_i\cap U_j$, so it's exactly the composite $$\mathcal{O}_Y(V) \stackrel{s_i^V}{\longrightarrow} \mathcal{O}_X(f^{-1}(V)\cap U_i) \stackrel{\mbox{res}_{ij}}{\longrightarrow} \mathcal{O}_X(f^{-1}(V)\cap U_i \cap U_j). $$ The assumed agreement of the restrictions of the $f_i$ then gives the desired agreement of the restrictions of the $s_i^V$.
|
H: Is $\{ a-b=y, a \oplus b=x \}$ solvable?
Is the system
$$ a \oplus b = x$$
$$ a-b = y $$
Where $a,b$ are variables and $x,y$ known, and $\oplus$ denotes bitwise xor, solvable?
I've tried to substitute $b=a\oplus x$ in the second equation but it didn't yield anything.
I need this in order to identify the 2 lone items in a list in which every other item appears twice. Thanks!
AI: It is not always solvable. For example, if $x$ and $y$ have opposite parities (one odd, the other even), then there cannot be a solution. This is because the least significant bits of $a\oplus b$ and $a-b$ are the same (assuming $a,b$ are natural numbers; otherwise it may depend on how you define $\oplus$ for negative integers).
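A small brute-force illustration (an added sketch in Python, not part of the original answer; it only checks small non-negative integers):

```python
# The least significant bits of a ^ b and a - b always agree, so a pair such as
# x = 1 (odd), y = 2 (even) admits no solution.
pairs = [(a, b) for a in range(64) for b in range(64) if (a ^ b) == 1 and (a - b) == 2]
print(pairs)  # []
print(all(((a ^ b) & 1) == ((a - b) & 1)
          for a in range(64) for b in range(64)))  # True
```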
|
H: Confusion regarding transversal for a partition in Smith Introductory Mathematics: Algebra and Analysis
In Smith's Introductory Mathematics: Algebra and Analysis, I came across the definition of a transversal for a partition along with examples. Either I don't understand one of the examples, or it is simply wrong. Since I haven't authored any mathematics textbooks, I tend to think it's the former. Can someone clarify?
Here's the quote from the book:
Finally, when you have a partition of a set $S$, it is often useful to
have one representative from each set comprising the partition. A set
of these representatives is called a transversal for the partition.
[...] We view a plane as a set, the elements of which are the
geometric points comprising the plane. The plane can be partitioned
into parallel straight lines, and then any straight line which is skew
to the lines in the partition will suffice as a transversal. Such a
skew straight line will intersect each of the family of parallel lines
in exactly one point, as required. One could think of crazy
transversals, where a point is selected from each of the family of
parallel lines in some fashion, but our transversal has the virtue of
being geometrically pleasant.
Here's my thoughts.
Let $L$ be the set of lines in the plane. There is an equivalence relation $\parallel \> : L \times L$ (does that functional notation properly describe relations, too?) on $L$. It gives rise to a partition of the set of lines into parallel lines, but I'm dubious that it partitions the plane because any point in the plane, say $(0,0)$, will be on infinitely many lines.
Wikipedia agrees that a transversal "is a set containing exactly one element from each member of the collection". It seems to me that if the collection in question is a partition of lines in the plane into (disjoint) sets of parallel lines, then an element of a member of the collection must be a line (i.e. an element of $L$ and not an element of $\mathbb{R}^2$). So if, as stated above, "a skew straight line will intersect each of the family of parallel lines in exactly one point" then I don't see how it is a transversal for the partition.
Based on my understanding, I propose that a transversal of the partition induced by $\parallel$ on lines in the plane is the set of lines in the plane through a given point, say $(0,0)$.
FYI, this is self-study not homework. Feel free to edit the tags if something is more appropriate.
AI: First a minor notational point: no, the notation $\|:L\times L$ is not correct. Just say that the relation of being parallel is an equivalence relation on $L$. (Formally it’s a subset of $L\times L$ with certain properties.)
I suspect that you’re thinking of the partition of $L$ into equivalence classes of the relation $\|$; this is not what the author is talking about. He’s pointing out that each individual equivalence class of $\|$ is a partition of the plane into parallel lines.
If $\ell\in L$, let $[\ell]$ be the equivalence class of $\ell$, i.e., the set of all lines in the plane that are parallel to $\ell$. What the author is saying is that for each $\ell\in L$, the equivalence class $[\ell]$ is a partition of the plane. For example, for each $r\in\Bbb R$ let $\ell_r$ be the line whose equation is $x=r$, the set of all points $\langle r,y\rangle$ with first coordinate $r$. (E.g., $\ell_0$ is the $y$-axis.) Then $\{\ell_r:r\in\Bbb R\}$ is one of the equivalence classes: for each $t\in\Bbb R$, $[\ell_t]=\{\ell_r:r\in\Bbb R\}$. And as you can see, this family is a partition of the plane: if $\langle x,y\rangle\in\Bbb R^2$, $\ell_x$ is the unique member of the family containing $\langle x,y\rangle$.
What does a transversal for this partition look like? Since the members of $\{\ell_r:r\in\Bbb R\}$ are the lines parallel to the $y$-axis, a transversal for this partition is a subset $T$ of the plane that contains exactly one point on each line parallel to the $y$-axis. This means that for each $x\in\Bbb R$ there is exactly one $y_x\in\Bbb R$ such that $\langle x,y_x\rangle\in T$. This is actually a very familiar idea: this just says that $T$ is the graph of a function $f:\Bbb R\to\Bbb R$ given by $f(x)=y_x$. The graph of the function $f(x)=x^2$, for instance, is a transversal for this partition. So is the graph of $f(x)=2x$, or $f(x)=-3x+7$.
The last two graphs, of course, are straight lines that are skew to the parallel lines of the partition: the lines forming the partition are vertical, and the lines $y=2x$ and $y=-3x+7$ are not. Any non-vertical line in the plane — i.e., any line skew to the family of parallel vertical lines — has an equation of the form $y=ax+b$. For each value of $x$, that equation picks out a unique $y_x$, namely, the real number $ax+b$. In other words, for each $x$ the equation $y=ax+b$ picks out exactly one element of $\ell_x$, so its graph is a transversal for the partition $\{\ell_r:r\in\Bbb R\}$. This is what the author is getting at when he says that any straight line skew to the members of the partition is a transversal for it. But as we saw in the previous paragraph, the graph of every function $f:\Bbb R\to\Bbb R$ is a transversal for this particular partition, and most of these graphs are geometrically much less pleasant than a simple straight line.
In the foregoing discussion I worked with just one of the equivalence classes of the relation $\|$, the one consisting of the lines parallel to the $y$-axis, because it’s the easiest one to visualize and talk about. The other equivalence classes behave similarly, however.
Let me conclude by giving a complete enumeration of the equivalence classes of the relation $\|$. It’s convenient to change notation, however: for each $\theta\in[0,\pi)$ let $\ell_\theta$ be the line through the origin at an angle $\theta$ with the $x$-axis. Every point in the plane except the origin is on exactly one of these lines. Then $[\ell_\theta]$, the equivalence class of $\ell_\theta$, is the set of all lines in the plane parallel to $\ell_\theta$. If $\theta=\frac{\pi}4$, for instance, these are the lines with slope $1$, the lines whose equations are of the form $y=x+a$.
More generally, if $\theta\ne\frac{\pi}2$, then $[\ell_\theta]$ is the family of lines with slope $\tan\theta$, the lines whose equations are of the form $y=(\tan\theta)x+a$; if $\theta=\frac{\pi}2$, they are the vertical lines, the lines whose equations are of the form $x=a$.
Every line in the plane is parallel to exactly one of the $\ell_\theta$ with $0\le\theta<\pi$, so $\{[\ell_\theta]:0\le\theta<\pi\}$ is a complete enumeration of the set of equivalence classes of $\|$.
|
H: Show $\int_{0}^\infty\frac{\sin^2(xu)}{u^2}du=\frac{\pi}{2}|x|$ for all $x\in \mathbb{R}$
I would like to show $$\int_{0}^\infty\frac{\sin^2(xu)}{u^2}du=\frac{\pi}{2}|x| \,\,\,\,\,\,(\forall x\in \mathbb{R}).$$If I ask whether the integral converges uniformly, of course $\int_{\epsilon}^{\infty}\frac{\sin^2(xu)}{u^2}$ converges uniformly on $x\in \mathbb{R}$ for $\epsilon>0$, but I am having trouble finding a bound near $u=0$ which does not depend on $x$.
I've tried expressing the integrand as an integral in $x$ (hoping to then switch the order), without success. If I try differentiating under the integral with respect to $x$ I get $\frac{2\sin(xu)\cos(xu)u}{u^2}=\frac{\sin(2xu)}{u}$, whose integral converges only conditionally, I believe, which leaves me with no justification for differentiating under the integral.
Any tips?
AI: For $x=0$, the statement clearly holds.
Start off with the substitution $w=ux$, $dw=x\,du$. This gives (assuming $x>0$),
$$
\int_0^{\infty}\frac{\sin^2(w)x^2}{w^2}\frac{dw}{x}=x\int_0^{\infty}\frac{\sin^2(w)}{w^2}\,dw
$$
On the other hand, if $x<0$, then we get
$$
x\int_0^{-\infty}\frac{\sin^2(w)}{w^2}\,dw=-x\int_{-\infty}^{0}\frac{\sin^2(w)}{w^2}\,dw=-x\int_{0}^{\infty}\frac{\sin^2(w)}{w^2}\,dw,
$$
since the function is even. Notice that the front term is precisely $\lvert x\rvert$ in either case! So, all that is left is to compute the integral.
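For completeness (an added remark, not part of the original answer), the remaining integral can be evaluated by integrating by parts and reducing to the Dirichlet integral:
$$\int_0^\infty\frac{\sin^2 w}{w^2}\,dw=\left[-\frac{\sin^2 w}{w}\right]_0^\infty+\int_0^\infty\frac{2\sin w\cos w}{w}\,dw=\int_0^\infty\frac{\sin(2w)}{w}\,dw=\int_0^\infty\frac{\sin t}{t}\,dt=\frac\pi2,$$
which gives exactly the claimed value $\frac\pi2\lvert x\rvert$.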
|
H: Zech's logarithms - Why are they called "Zech"?
Zech's logarithms are defined in here.
I couldn't find a reason why they are called "Zech". The only thing a dictionary suggests is that Zech is an abbreviation for Zechariah, which doesn't seem relevant, right?!
So, could you please shed light on this naming?
AI: Apparently,
The reason for the name is that Julius Zech (1849) published a table
of these logarithms (which he called 'addition logarithms') for doing
arithmetic in $\mathbb{Z}/p$. These were, I think, intended for
number-theoretical calculations.
From Oliver Pretzel - "Error-correcting codes and finite fields".
Edit: Gerhard Betsch up at "Math Forum" has written up a piece about his personal history and research. Among other things, he writes:
... Zech's tables were designed as a tool for calculations in
theoretical astronomy. The preface makes no reference to Jacobi, and of
course no reference to finite fields.
|
H: Looking at the intermediate fields of $\Bbb{Q}_7 = \Bbb{Q}(\omega)/\Bbb{Q}$ where $\omega = e^{i2\pi/7}$ .
From Basic Abstract Algebra (Robert Ash):
The question that I'm concerned with is number 3, but I will write problems 1 and 2 as well, since they are all related...
We now do a detailed analysis of subgroups and intermediate fields associated with the cyclotomic extension $\Bbb{Q}_7 = \Bbb{Q}(\omega)/\Bbb{Q}$ where $\omega = e^{i2\pi/7}$ is a primitive $7^{th}$ root of unity. The Galois group G consists of automorphisms $\sigma_i, i= 1,2,3,4,5,6,$ where $\sigma_i(\omega) = \omega^i$.
Show that $\sigma_3$ generates the cyclic group G.
Show that the subgroups of G are $\langle 1 \rangle$ (order 1), $\langle \sigma_6 \rangle$ (order 2), $\langle \sigma_2 \rangle$ (order 3), and $G = \langle \sigma_3 \rangle$ (order 6).
The fixed field of $\langle 1 \rangle$ is $\Bbb{Q}_7$ and the fixed field of G is $\Bbb{Q}$. Let $K$ be the fixed field of $\langle \sigma_6 \rangle$. Show that $\omega + \omega^{-1} \in K$, and deduce that $K = \Bbb{Q}(\omega + \omega^{-1}) = \Bbb{Q}(\cos 2\pi/7)$
Answer
$\sigma_6(\omega + \omega^6) = \omega^6 + \omega^{36} = \omega + \omega^6$, so $\omega + \omega^6 \in K$. Now $\omega + \omega^6 = \omega + \omega^{-1} = 2 \cos(2 \pi/7)$, so $\omega$ satisfies a quadratic equation over $\Bbb{Q}(\cos 2\pi/7)$. By (3.1.9),
$$[\Bbb{Q}_7 : \Bbb{Q}] = [\Bbb{Q}_7 : K][K : \Bbb{Q}(\cos 2\pi/7)][\Bbb{Q}(\cos 2\pi/7) : \Bbb{Q}]$$
where the term on the left is 6, the first term on the right is $| \langle \sigma_6 \rangle|=2$, and the second term on the right is (by the above remarks) 1 or 2. But $[K : \Bbb{Q}(\cos 2\pi/7)]$ cannot be 2 (since 6 is not a multiple of 4), so we must have $K= \Bbb{Q}(\cos 2\pi/7).$
My Question
1) We know that $\omega^k = \cos( 2\pi k/n) + i\sin(2 \pi k/n)$, so
$$\omega + \omega^6 = \cos( 2\pi/7) + i\sin(2\pi/7) + \cos( 12\pi/7) + i\sin(12\pi/7)$$
I was trying to figure out how this was equal to $2 \cos 2\pi/7$. I tried looking up the basic properties of cosine and sine in case I had forgotten some of them...but I still can't see how these are equal.
2) How do we know that $\omega$ satisfies a quadratic equation over $\Bbb{Q}(\cos 2\pi/7)$? We can only know that if we have $$\cos( 4\pi/7) + i\sin(4\pi/7) \in \Bbb{Q}(\cos 2\pi/7),$$ right?
AI: For your first question, use the fact that $2\pi=\frac{14\pi}7,$ together with the results (obtained through periodic and even/odd properties of the sine and cosine functions):$$\cos(2\pi-z)=\cos(-z)=\cos(z)\\\sin(2\pi-z)=\sin(-z)=-\sin(z)$$
For the second, note that $\omega+\omega^{-1}=2\cos\frac{2\pi}7$ is equivalent to $$\omega^2-\left(2\cos\frac{2\pi}7\right)\omega+1=0.$$ Thus, $\omega$ satisfies $$x^2-\left(2\cos\frac{2\pi}7\right)x+1=0,$$ a quadratic with coefficients in $\Bbb Q(\cos\frac{2\pi}7).$
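Concretely, for the first question (an added worked step): since $\frac{12\pi}7=2\pi-\frac{2\pi}7$,
$$\omega^6=\cos\tfrac{12\pi}7+i\sin\tfrac{12\pi}7=\cos\tfrac{2\pi}7-i\sin\tfrac{2\pi}7,$$
so the imaginary parts cancel and $\omega+\omega^6=2\cos\tfrac{2\pi}7.$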
|
H: Show that every finite group of order n is isomorphic to a group of permutation matrices
Show that every finite group of order $n$ is isomorphic to a group consisting of $n \times n$ permutation matrices under matrix multiplication.
(A permutation matrix is one that can be obtained from an identity matrix by reordering its rows.)
I can't even understand the question. I think a group of $n \times n$ permutation matrices is not isomorphic to a finite group of order $n$.
I think it is isomorphic to $S_n$.
Help me plz!
AI: If you can realise $S_n$ as a group of permutation matrices, then you can realise any subgroup of $S_n$ as a subgroup, which happens to be a group of permutation matrices.
You get a representation of $G$ as a subgroup of $S_n$ (Cayley's theorem) by associating each element $g\in G$ with the permutation of the elements $h\in G$ which sends $h\mapsto hg$.
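To make this concrete (an added sketch in Python with hypothetical helper names, not part of the original answer), here is the regular representation of $\Bbb Z/3$ realised as $3\times 3$ permutation matrices:

```python
import numpy as np
from itertools import product

n = 3
elements = range(n)

def perm_matrix(g):
    # Row h has a single 1 in column h + g (mod n): right translation by g.
    M = np.zeros((n, n), dtype=int)
    for h in elements:
        M[h, (h + g) % n] = 1
    return M

matrices = {g: perm_matrix(g) for g in elements}
# With this matrix convention, g -> M_g is a homomorphism: M_{g1} M_{g2} = M_{g1 + g2 mod n}.
print(all(np.array_equal(matrices[g1] @ matrices[g2], matrices[(g1 + g2) % n])
          for g1, g2 in product(elements, repeat=2)))  # True
```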
|
H: Flaw with proving that $S$ is a basis of $V$ iff $\|v\|^2=\sum_{i=1}^{n}{c_i}^2$
Let $V$ be a vector space over $\mathbb{R}$ equipped with a positive-definite scalar product $(\cdot,\cdot)$. Let $S = \left\lbrace v_1,...,v_n \right\rbrace$ be a set of orthogonal vectors of unit length in $V$ and $c_i =(v,v_i), \quad i=1,...n$ be the associated Fourier coefficients.
Prove that $S$ is a basis for $V$ if and only if $$\|v\|^2=(v,v)=\sum_{i=1}^{n}{c_i}^2, \quad \forall v \in V$$
Attempt:
If $S$ is a basis of $V$, then $\forall v \in V$ $$v = a_1v_1+...+a_nv_n$$ and $$c_i = (v,v_i)=(a_1v_1+...+a_nv_n,v_i) = a_i$$ $$\therefore \|v\|^2=(v,v)=(a_1v_1+...+a_nv_n,a_1v_1+...+a_nv_n) = \sum_{i=1}^{n}{a_i}^2 = \sum_{i=1}^{n}{c_i}^2$$
Conversely, if $$\|v\|^2=(v,v)=\sum_{i=1}^{n}{c_i}^2, \quad \forall v \in V$$ we need to show that $v_i$ are linearly independent. To this end, consider $$b_1v_1+...+b_nv_n=0$$ $$\therefore (b_1v_1+...+b_nv_n,b_1v_1+...+b_nv_n) =\sum_{i=1}^{n}{b_i}^2= 0$$ $$\therefore b_i = 0$$ One can also show, by the same argument presented before, that $$b_i=c_i$$
My question is, does the above argument suffice or is there any flaw with it?
AI: Looking at the second direction (i.e., assuming that $\|v\|^2 = \sum_{i=1}^n \langle v,v_i \rangle^2$ for all $v \in V$), all you've proved is that $\{v_1,\dotsc,v_n\}$ is linearly independent, but this is a general fact about orthogonal sets, proved by precisely your argument. The essential point is proving that $\{v_1,\dotsc,v_n\}$ spans $V$. So, let $v \in V$, define
$$
v_0 := \sum_{i=1}^n \frac{\langle v_i,v\rangle}{\langle v_i,v_i\rangle} v_i = \sum_{i=1}^n \langle v_i,v\rangle v_i, \quad w := v - v_0,
$$
and observe that $v = v_0 + w$ with $\langle v_0,w \rangle = 0$; what you need to show is that $w = 0$. But then, $$\|v\|^2 = \langle v,v \rangle = \langle v_0+w,v_0+w\rangle,$$ so what do you get when you expand out $\langle v_0+w,v_0+w \rangle$?
|
H: Show that $\int_{0}^\infty \frac{1-\cos x}{x^2}dx=\pi/2$.
I am trying to show that $$\int_{0}^\infty \frac{1-\cos x}{x^2}dx=\pi/2.$$The hint is "try simple substitution", and not incidentally, the previous problem has shown that $\int_0^\infty \frac{\sin^2(xu)}{u^2}du=\frac{\pi}{2}|x|$. This looks an awful lot like we'd like to reduce it to the earlier case, for $x=1$.
What shall we try to substitute for? I think we'd have some problems subbing for cosine, since it does not approach a limit at infinity (correct me if there is a way to make this substitution). Subbing for $x^2$ hasn't gotten me anywhere.
We might want to try to split it up, and see if anything better comes out of trying to integrate $\int_0^\infty \frac{\cos x}{x^2}dx$. No luck there so far.
Any ideas?
AI: Hint: Use the identity
$$ \cos(x)=1-2\sin^2(x/2). $$
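One possible way to finish from the hint, using the previous problem (added for illustration, not part of the original answer): since $1-\cos x=2\sin^2(x/2)$,
$$\int_0^\infty\frac{1-\cos x}{x^2}\,dx=2\int_0^\infty\frac{\sin^2\left(\tfrac12 x\right)}{x^2}\,dx=2\cdot\frac\pi2\left|\tfrac12\right|=\frac\pi2,$$
where the middle step applies $\int_0^\infty\frac{\sin^2(xu)}{u^2}\,du=\frac\pi2|x|$ with $x=\tfrac12$ (and the integration variable renamed).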
|