H: ring without zero-divisors
Suppose we have a ring (could be infinite) without zero-divisors. I have to prove that if $xy=1$ then also $yx=1$ for some $x$ and $y$ in the ring. I really need hints for this, because it seems I just can't figure it out. Thank you.
AI: $$x=(xy)x=x(yx)$$
Then $x(1-yx)=0$.
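One way to finish, for completeness: if $x=0$ then $1=xy=0$ and the ring is trivial, so $yx=1$ holds trivially; otherwise $x\ne0$, and since there are no zero-divisors, $x(1-yx)=0$ forces $1-yx=0$, i.e. $yx=1$.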
|
H: Different notions of differentiability
The following is somewhat unclear to me. Let $X$, $Y$ be locally convex vector spaces, let $f: X \supseteq U \longrightarrow Y$ be a (nonlinear) continuous map. Then one can say that $f$ is $C^1$ if
a) the difference quotient
$$ Df(x)\xi := \lim_{\epsilon \longrightarrow 0} \frac{f(x + \epsilon \xi) - f(x)}{\epsilon}$$
exists for all $x \in U$, $\xi \in X$ and if
$$Df: U \times X \longrightarrow Y$$
is continuous; or
b) $f$ maps $C^1$ curves in $X$ to $C^1$ curves in $Y$.
Question: Are the notions a), b) equivalent?
As far as I understand, they are not in general, right? But for which spaces are they equivalent?
Addition: If $X$ and $Y$ are not Banach spaces, it seems too strong a notion to require the map $Df$ to be continuous from $X$ to $L(X, Y)$. Why is this so?
AI: b) does not imply a) even when $X=Y=\mathbb R^2$. It's easier to describe the counterexample in complex notation:
$$f(r\exp(i\theta)) = r\exp(i\theta+i\sin 2\theta),\quad r,\theta\in\mathbb R \tag1$$
It is important to notice that (1) applies both when $r<0$ and when $r>0$, giving a consistent result because $\sin 2\theta$ is $\pi$-periodic.
The map (1) is not differentiable at the origin, because it is degree 1 homogeneous but not linear.
On the other hand, it maps $C^1$ curves to $C^1$ curves. Indeed, let $\Gamma:(-1,1)\to \mathbb R^2\approx \mathbb C$ be a $C^1$-curve. For definiteness assume $\Gamma(0)=0$. Let $\rho e^{i\phi}=\Gamma'(0)$; here $\rho>0$. Write $\Gamma(t)=t \rho e^{i\phi}+o(t)$ and plug this into $f$. Since $f$ is Lipschitz, we have $$f\circ \Gamma(t)=f(t \rho e^{i\phi})+o(t) = t\rho e^{i\phi+i\sin 2\phi}+o(t) \tag2 $$ Hence, $f\circ \Gamma$ has a tangent at $t=0$. Its tangent at the points where $\Gamma\ne 0$ has argument $\arg \Gamma'(t)+2\sin \arg \Gamma(t)$, which is consistent with (2): that is, the tangent direction is continuous along the curve.
|
H: Computing maximum of an expression
What is the maximum of the expression?
$$x_1x_2x_3+x_2x_3x_4+\cdots+x_{2011}x_{2012}x_{2013}$$
If $x_1,x_2,\ldots,x_{2013} \in [0,\infty)$ and $x_1+x_2+\cdots + x_{2013}=2013$.
AI: Hint: Compare with
$$ ( x_1 + x_4 + \ldots + x_{2011} ) ( x_2 + x_5 + \ldots + x_{2012} ) ( x_3 + x_6 + \ldots + x_{2013} ) $$
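A sketch of how the hint finishes the problem (my own elaboration, not part of the original hint): writing $S_1,S_2,S_3$ for the three grouped sums, every term $x_kx_{k+1}x_{k+2}$ uses one index from each residue class mod $3$, so it occurs in the expansion of $S_1S_2S_3$, and all the other terms there are nonnegative. Hence by AM-GM
$$\sum_{k=1}^{2011}x_kx_{k+1}x_{k+2}\le S_1S_2S_3\le\left(\frac{S_1+S_2+S_3}{3}\right)^3=671^3,$$
with equality attained e.g. at $x_1=x_2=x_3=671$ and all other $x_k=0$.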
|
H: Pooled Estimate of the Variance
Suppose $X_{1},\dots,X_{m}$ is iid $N(\mu_{1},\sigma^{2})$ and $Y_{1},\dots,Y_{n}$ is iid $N(\mu_{2},\sigma^{2})$. Is it true that the pooled estimate of the variance, $S_{p}^{2}$, has the property $\frac{(n-1)S_{p}^{2}}{\sigma^{2}}\sim\chi^{2}_{n-1}$, as is the case for a single sample variance $S^{2}$? I know that $S_{p}^{2}$ multiplied by a constant has a $\chi^{2}$ distribution, but am not sure if this property holds here.
AI: You omitted to say that the sequence $X_1,\ldots,X_m$ is independent of $Y_1,\ldots, Y_n$. I will take that to be assumed below.
I will take $S_p^2$ to be defined as follows:
$$
S_p^2 = \frac{\sum_{i=1}^m (X_i-\bar X)^2 + \sum_{i=1}^n (Y_i-\bar Y)^2}{n+m-2}
$$
where $\bar X$ and $\bar Y$ are the sample means.
Then
$$
\frac{\sum_{i=1}^m (X_i-\bar X)^2}{\sigma^2} \sim \chi^2_{m-1}
$$
and
$$
\frac{\sum_{i=1}^n (Y_i-\bar Y)^2}{\sigma^2} \sim \chi^2_{n-1}.
$$
The sum of two chi-square random variables that are independent of each other is a chi-square random variable, and the number of its degrees of freedom is just the sum of the numbers of degrees of freedom of the two terms in the sum.
Apply that to $(n+m-2)S_p^2$ (not to $(n-1)S_p^2$ as your question states).
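If you would like to convince yourself numerically, here is a small Monte Carlo sketch (illustrative only; the sample sizes, means, and $\sigma$ are arbitrary choices):

```python
import numpy as np

# (n + m - 2) * S_p^2 / sigma^2 should behave like a chi-square
# random variable with n + m - 2 degrees of freedom.
rng = np.random.default_rng(0)
m, n, sigma = 8, 12, 1.5
reps = 100_000

x = rng.normal(1.0, sigma, size=(reps, m))   # mu_1 = 1
y = rng.normal(3.0, sigma, size=(reps, n))   # mu_2 = 3
ssx = ((x - x.mean(axis=1, keepdims=True))**2).sum(axis=1)
ssy = ((y - y.mean(axis=1, keepdims=True))**2).sum(axis=1)
stat = (ssx + ssy) / sigma**2                # = (n + m - 2) S_p^2 / sigma^2

df = n + m - 2
print(stat.mean(), df)       # mean of chi^2_df is df
print(stat.var(), 2 * df)    # variance of chi^2_df is 2*df
```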
|
H: Volume of a wine barrel
This is a famous calculus problem and is stated like this
Given a barrel with height $h$, and a small radius of $a$ and
large radius of $b$. Calculate the volume of the barrel
given that the sides are parabolic.
Now I seem to have solved the problem incorrectly, because here it seems that the volume should be
$ \displaystyle \hspace{1cm}
V(a,b,h) = \frac{h\pi}{3}\left(2b^2 + a^2\right)\,.
$
Below is my attempt. As in the picture I view the barrel from the side, and try to find a formula for the parabola. So I solve
$ \displaystyle \hspace{1cm}
f(x) := A x^2 + B x + C
$
given $f(0) = f(h) = a/2$ and $f(h/2) = b/2$. This yields
$ \displaystyle \hspace{1cm}
f(x) = \frac{2(a-b)}{h^2} \cdot x^2 -
\frac{2(a-b)}{h} \cdot x +
\frac{a}{2}
$
Using the disk method, integrating now gives the volume as
$ \displaystyle \hspace{1cm}
V(a,b,h) := \pi \int_0^h \bigl[f(x)\bigr]^2\,\mathrm{d}x
= \frac{\pi}{60} \cdot h (a+2b)^2 + \frac{\pi}{30} \cdot h(a^2+b^2)
$
Alas according to the formula above this seems incorrect! Where is my mistake?
AI: Let $k=h/2$, and put the origin in the middle, where symmetry asks it to be.
Then the equation of the upper parabola is
$$y=b-\frac{b-a}{k^2}x^2.$$
The integral of $\pi y^2\,dx$ from $0$ to $k$ is
$$\pi k\left(b^2-\frac{2}{3}(b-a)b+\frac{1}{5}(b-a)^2\right).$$
This simplifies to
$$\frac{\pi k}{15}(3a^2+4ab+8b^2)$$
Replace $k$ by $h/2$ and multiply by $2$.
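If you would like to double-check the algebra by machine, here is a small sympy sketch (variable names mirror the answer; illustrative only):

```python
import sympy as sp

# Barrel volume: revolve y = b - (b-a)/k^2 * x^2 about the x-axis,
# integrate from 0 to k = h/2, and double.
x, a, b, h = sp.symbols('x a b h', positive=True)
k = h / 2
y = b - (b - a) / k**2 * x**2
V = 2 * sp.integrate(sp.pi * y**2, (x, 0, k))
print(sp.factor(sp.expand(V)))   # should print pi*h*(3*a**2 + 4*a*b + 8*b**2)/15
```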
|
H: Smooth approximation
How one can show, that if $f(x_1,\ldots,x_n)$ is a continuous function on an open subset $U\subset \mathbb{R}^n$, then for every $\varepsilon > 0$ and every open $V\subset U$, such that $\bar V \subset U$, there exists a function $g(x_1,\ldots,x_n)$, such that:
1) $g$ is smooth on $V$;
2) $g|_{U-V}=f|_{U-V}$;
3) $\max_{x \in \bar V} |f(x)-g(x)|\leq \varepsilon$;
4) $g$ is smooth in all points, where $f$ is smooth.
This lemma is used in the book of Dubrovin, Novikov, Fomenko, but they don't prove it and don't give a reference where it can be found.
AI: The proof can be found in several places. For instance, see Thm 2.5 in Hirsch, M. W., Differential Topology (Springer-Verlag, 1976) or §6.7 in Steenrod, N., The Topology of Fibre Bundles (PUP, 1951). Unfortunately, these sources don't provide a name or a historical note about this result.
A recent paper, where I looked up these references is Wockel, C., A Generalisation of Steenrod's Approximation Theorem (2006, arXiv, dmlcz), which supplies an obvious name.
|
H: Prove that if the coefficients of $ax^3+bx+c$ are odd then it is irreducible over $\mathbb{Q}$
Let $a,b,c$ be odd integers. Prove that $p(x)=ax^3+bx+c$ has no rational root.
AI: Hint: Suppose to the contrary that there is a rational root $\dfrac{p}{q}$. We can assume that $p$ and $q$ are relatively prime. So at least one of them is odd. Substituting and clearing denominators, we get $ap^3+bpq^2+cq^3=0$.
This is impossible. If one of $p$ or $q$ is odd and the other even, then $ap^3+bpq^2+cq^3$ is odd. This is also the case if $p$ and $q$ are both odd. So in neither case can $ap^3+bpq^2+cq^3$ be equal to $0$.
|
H: Calculus Implicit Differentiation
I'm learning implicit differentiation and I've hit a snag with the following equation.
$$
f(x, y) = x + xy + y = 2
$$
$$
Dx(x) + Dx(xy) + Dx(y) = Dx(2)
$$
$$
1 + xy' + y + y' = 0
$$
$$
xy' + y' = -1 - y
$$
$$
y'(x + 1) = 1 + y
$$
$$
y' = \dfrac{(1 + y)}{(x + 1)}
$$
$$
y'' = \dfrac{(x + 1)y' - (1 + y)}{(x + 1)^2}
$$
$$
y'' = \dfrac{(x + 1)\dfrac{(1 + y)}{(x + 1)} - (1 + y)}{(x + 1)^2}
$$
OK, now what? According to this, $y'' = 0$, which is wrong.
AI: Your mistake seems to originate when moving from this:
$$
xy' + y' = -1 - y
$$
...to this, where you "lost the sign":
$$
y'(x + 1) = 1 + y$$
We need $$y'(x + 1) = -(1 + y)$$
Let's back up: $$\begin{align} 1 + xy' + y + y' & = 0 \\ \\ xy' + y' & = -1 - y\\ \\ & = -(1 + y)\end{align}$$
Then we factor out $y'$ on the left-hand side, giving us:
$$y'(x + 1) = -(1 + y)$$
Fixing for that, then, we get:
$$\begin{align}
y' &= \dfrac{-(1 + y)}{(x + 1)} \\ \\
y'' & = \frac{(x + 1)(-y') - [-(1 + y)]}{(x + 1)^2} \\ \\
& = \frac{-(x + 1)\dfrac{-(1 + y)}{(x + 1)} + (1 + y)}{(x + 1)^2} \\ \\
& = \frac{(1 + y) + (1+y)}{(x+1)^2}\\ \\
& = \frac{2(y+1)}{(x+1)^2}
\end{align}$$
|
H: sketch the region R bounded by the graphs of the equations and find the volume of the solid generated by revolving R about the indicated axis
Let $R$ be the region bounded by
$$y=1/x,\quad y=1, \quad y=2,\quad x=0$$
Consider the solid generated by rotating $R$ about the y-axis
Sketch the region, the solid, and a typical disk/washer/shell (your choice)
I know that I have to slice horizontally and integrate with respect to $y$ but I can't figure out where I am going wrong.
So far I got $\displaystyle \pi\int_0^1 \frac1y\, dy$.
Am I going about this correctly so far?
AI: Hint: The volume is equal to
$$\int_{y=1}^2 \pi \frac{1}{y^2}\,dy.$$
Remark: You forgot the squaring part: note that a cross-section at height $y$ has area $\pi x^2$, that is, $\frac{\pi}{y^2}$.
In your solution, there is also an issue with the limits of integration. Note that the problem says that the region is bounded in particular by the lines $y=1$ and $y=2$.
Your sketch may not be correct. The region of interest is bounded above by the line $y=2$, below by the line $y=1$, on the left by $x=0$ (the $y$-axis), and on the right by $y=\frac{1}{x}$.
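For reference, carrying out the integration:
$$\int_{y=1}^2 \pi\frac{1}{y^2}\,dy = \pi\left[-\frac1y\right]_1^2 = \pi\left(1-\frac12\right) = \frac{\pi}{2}.$$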
|
H: Difference Between Imagespace and Columnspace of a Matrix?
I don't understand the difference between the columnspace of a matrix and the imagespace of a matrix. They are both the spanning sets of the columns of a matrix. Are they just different words for each other or is there a difference?
AI: The columnspace of a matrix is the vector space spanned by the columns of the matrix.
The image space of a linear transformation is the set of all vectors that the transformation maps onto.
When you are interpreting the matrix as a linear transformation that multiplies column vectors on the left, a column vector input combines the columns of the matrix. If you put in all possible inputs, you get all possible combinations of columns of the matrix. In this case, the columnspace is the same as the image space of the linear transformation.
But if you are using the matrix to multiply row vectors on the right instead, then the image of this transformation no longer matches the columnspace: it matches the rowspace! For now we will stick with the situation in the first paragraph, though.
I think a little more appreciation of the difference between transformations and matrices is called for. The image space of a transformation exists independently of whatever matrix you use to represent the transformation. No matter what matrix you compute for the transformation, the columns will always span the same space: the image space of the transformation.
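A tiny numerical illustration of the column-vector situation (my own example, not from the original discussion):

```python
import numpy as np

# For A acting on column vectors, every output A @ v is a linear
# combination of the columns of A, so the image equals the columnspace.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([2.0, -1.0])
print(A @ v)                   # [0. 2.]
print(2*A[:, 0] - A[:, 1])     # the same combination of the columns
```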
|
H: Sketch the region, the solid, and a typical disk/washer/shell (your choice)
$y=x^2, y=1$; about the line $y=-1$
Alright so I have the sketch drawn but I cannot figure out if I'm doing this correctly because the $y=-1$ is throwing me off. The answer I got is $4/5\pi$.
I got that answer by using horizontal slices $R=\sqrt{y}=x$ and $r=y$.
$$
\begin{align}
V&=2\pi \int_0^1 y(\sqrt{y})= \int_0^1(y)^{3/2}\\
&=2\pi \frac{2}{5}y^{5/2} \bigg|_0^1=\frac{4}{5}\pi
\end{align}
$$
Help if I went about this the wrong way. This stuff is just completely confusing.
AI: The region being rotated is not hard to sketch. The parabola $y=x^2$ meets the line $y=1$ at $x=\pm 1$. The region is symmetric about the $y$ axis. We take advantage of that symmetry. So we will find the volume when the part from $x=0$ to $x=1$ is rotated, and then double the result.
You can probably see that the solid has a hole in it. The solid is in fact a cylinder with a hole drilled in it. The hole is quite wide at the two ends, and narrower in the middle.
Take a typical cross-section "at" $x$. The cross-section is a circle with a circular hole in it.
The radius of the outer circle is $2$. The radius of the inner circle is $x^2-(-1)$, though I visualize it as $x^2+1$.
So the area of cross-section at $x$ is $\pi(2^2-(x^2+1)^2)$.
Integrate from $x=0$ to $x=1$, then double.
Remark: Alternately, we could find the volume of the cylinder by basic geometry, and concentrate on finding the volume of the hole.
For the volume of the hole, if you are initially uncomfortable at rotating about the line $y=-1$, and prefer to rotate about the $x$-axis, you could raise everything by $1$. So then we would be dealing with the region below the line $y=2$ and above the curve $y=x^2+1$, rotated about the $x$-axis.
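Either way, for checking your final answer (my own evaluation of the integral set up above):
$$2\int_0^1 \pi\bigl(2^2-(x^2+1)^2\bigr)\,dx = 2\pi\int_0^1\bigl(3-2x^2-x^4\bigr)\,dx = 2\pi\left(3-\frac23-\frac15\right) = \frac{64\pi}{15}.$$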
|
H: Find a basis for the range and kernel of $T$.
Find a basis for the range and kernel of $T$.
$$A =\begin{bmatrix}
2 & 0 & -1\\
4 & 0 & -2\\
0 & 0 & 0
\end{bmatrix} $$
Attempt at Solving for Basis of Range:
On finding the basis for the range, I know that the range is the same thing as the column space. So, finding the basis of the column space should be equivalent to finding the basis of the range. I got the following after reducing:
$$\mathrm{Rref}(A) =\begin{bmatrix}
1 & 0 & -\frac 12\\
0 & 0 & 0\\
0 & 0 & 0
\end{bmatrix} $$
Because I only have a pivot in column $1$, my corresponding column in the original $A$ (thus my basis for the range), would be:
$$\begin{bmatrix}
2\\
4\\
0
\end{bmatrix} $$
The solution given in the text says that this should actually be:
$$\begin{bmatrix}
1\\
2\\
0
\end{bmatrix}$$
I can see that the book solution only differs from mine in that their solution seems to have been divided by $2$. However, my question is - why are they dividing by 2? I don't understand why the original column is being altered.
Attempt at Solving for Basis of Kernel:
In solving for the kernel, I know that the basis of the kernel should be the same as the basis for the nullspace. From the $\mathrm{Rref}(A)$ above, I got the following equation:
$(X_1) = (\frac12X_3)$. Letting $X_3$ equal one, I got the following matrix for the basis of the kernel:
$$\text{Basis of Kernel} =\begin{bmatrix}
\frac12\\
0\\
1
\end{bmatrix}$$
This answer checks out with my solution in the text, but the text also provides the following solution:
$$\begin{bmatrix}
0\\
1\\
0
\end{bmatrix}$$
I'm guessing this comes from the zero row in my $\mathrm{Rref}(A)$. But why is this done? I thought the number of basis vectors came from the number of free variables in the $\mathrm{Rref}(A)$...
AI: Notice that if a vector $v$ spans a subspace, then for all $\lambda\ne 0$, $\lambda v$ also spans this subspace, so in your example we have
$$\mathrm{Im}(A)=\mathrm{span}((2,4,0)^T)=\mathrm{span}((1,2,0)^T)=\mathrm{span}((-100,-200,0)^T)$$
For the kernel: By the rank-nullity theorem we have $\dim \ker(A)=2$, and we can see easily from the matrix $A$ (since the second column is zero) that $(0,1,0)^T\in \ker(A)$, so we should find another vector in the kernel. Since the first column of the matrix is $(-2)$ times the third column, we get $(1,0,0)^T+2(0,0,1)^T=(1,0,2)^T\in\ker(A)$.
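A quick numerical check of both claims (illustrative):

```python
import numpy as np

A = np.array([[2, 0, -1],
              [4, 0, -2],
              [0, 0,  0]])
print(A @ np.array([0, 1, 0]))   # [0 0 0]: the second column is zero
print(A @ np.array([1, 0, 2]))   # [0 0 0]: col1 + 2*col3 = 0
```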
|
H: Transversal functions are smooth?
This sounds intuitively true. However, I have some counter claims:
Although transversality is defined on smooth manifolds, which implies the image of $df_x$ is smooth, this does not say whether the function $f$ itself is smooth, and the existence of $df_x$ only assumes $f$ is $\mathcal{C}^1$.
So can this imply that $f$ is smooth, and is $f$ therefore generally smooth as long as it is transversal?
AI: If $f$ is not smooth, then you cannot define $df$ on a local chart. As a result it is impossible to judge the rank of the Jacobian, and transversality would not make much sense. I suspect you confused whether $df_{x}$ is non-singular with whether $f$ is smooth at $x$.
|
H: Designing a casino cashback program
Let's say a casino is considering offering a cashback program whereby it would return 50% of player losses twice a month. The casino has a house edge of 1% on each game.
What steps could the casino take to ensure that they remain profitable?
One way would be to enforce a minimum number of plays per user per cashback period- what would that minimum number of plays be?
Let's say the casino can't ban or ID users.
AI: Suppose the casino offers only one game: you bet £1, then with probability 49% you win £2 (otherwise you lose your wager). Your expected net is -£0.02. Now if you know that half your total losses will be refunded, here's how it breaks down.
One bet: Win £1 with 49% probability, lose £0.50 with 51% probability; expected gain £0.235.
Two bets: Win £2 with $.49^2$ probability, wash with $2\cdot.49\cdot.51$ probability, lose £1 with $.51^2$ probability. Expected gains £0.2201.
Three bets: Expected gains £0.33015.
...
47 bets: Expected gains £0.6824276...
It seems that for this simple game 47 is the best you can do. The specifics will vary, but probably the rough result will remain the same: you can expect to win back a fraction of a single bet.
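The numbers above can be reproduced exactly; here is a short sketch of the computation (the game and the refund rule are as modeled in this answer):

```python
from math import comb

def expected_gain(n, p=0.49):
    """Exact expected net gain for n unit bets when half of a net loss is refunded."""
    total = 0.0
    for k in range(n + 1):       # k = number of winning bets
        net = 2 * k - n          # net result before cashback
        if net < 0:
            net /= 2             # 50% of the loss is returned
        total += comb(n, k) * p**k * (1 - p)**(n - k) * net
    return total

print(expected_gain(1))    # 0.235
print(expected_gain(2))    # 0.2201
print(expected_gain(47))   # about 0.6824
```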
|
H: Use the shell method to find the volume of the solid generated by revolving the region bounded by:
$y=12x-11, y=\sqrt{x}$, and $x=0$.
I know I will be using $V=2\pi\int (\sqrt{x})^2-(12x-11)^2 dx$
It's the setting up of the problem I am having difficulty with. If anyone can help me without giving away the answer, I would greatly appreciate it.
AI: It looks, from your title, as though you need to use the cylindrical shell method for finding the volume of revolution about (I'm assuming here) the $y$-axis. The work you show is more consistent with the disk method (except you'd use $\pi$ in that case).
With the shell method, since volume will be of the cylinder obtained when revolving the region, we need to use as factors:
$2\pi$, since we revolve the region $360^\circ = 2\pi$ radians (all the way around the $y$-axis);
the radius of the "cylinder" $r(x)$ which will be given simply by "$x$" (the distance of $x$ from the y-axis), and
the height of the "cylinder" $h(x)$, which in this case will be the
distance between the "top curve" and the "bottom curve". So height will be given by $$h(x) = [\sqrt x - (12x - 11)] = \sqrt x - 12x + 11.$$
One of our bounds of integration will be $x = 0$. The other will be when $x$ is equal to the x-coordinate at which the two curves $y = \sqrt x$ and $y = 12x - 11$ intersect. This will be at $x = 1$. (Recall, we can solve for the $x$ of intersection(s) by equating the two curves: $\sqrt x = 12x - 11$. We can square both sides to get a quadratic equation, solve for the roots, and throw out the negative value. Please confirm that the curves intersect at $(1, 1)$.)
This gives us the following integral to evaluate: $$\int_0^1 2\pi\, r(x) \,h(x)\,dx = 2\pi\int_0^1 x(\sqrt x - 12x + 11)\,dx$$
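In case you want to check your final answer afterwards (my evaluation of the integral set up above):
$$2\pi\int_0^1 \bigl(x^{3/2} - 12x^2 + 11x\bigr)\,dx = 2\pi\left(\frac25 - 4 + \frac{11}2\right) = \frac{19\pi}{5}.$$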
|
H: What is the definition of "formal identity"?
In Ahlfors' Complex Analysis he remarks that harmonic $u(x,y)$ can be expressed as
$$
u(x,y) = \frac{1}{2}[f(x + i y) + \overline{f}(x - i y)]
$$
when $x$ and $y$ are real. He then writes
"It is reasonable to expect that this is a formal identity,
and then it holds even when x and y are complex".
What does he mean in this context by "formal identity"?
Edit: This entire page (p.27 of my edition) comes with what is a caveat, as far as I can tell:
We present this procedure with an explicit warning that it is purely formal and does
not possess any power of proof.
In the same page he uses the phrases "formal procedure", "formal reasoning", "formal arguments", and "formal identity".
Is he more or less saying that he's embarking on something that could be considered suspect, at least at this point in the book?
Thank you very much!
AI: The word "formal" as it's being used here doesn't have an entirely rigorous meaning. The archetypal example of a formal argument is manipulating a power series without worrying about convergence, which gives rise to the notion of formal power series. In general, a formal argument is one based on the "form" of the mathematical objects involved without thinking about their "substance" (e.g. a power series is a form, a function it's a Taylor series of is a substance).
In this case I agree with RGB that a possible interpretation is that the identity might hold on the level of power series, in which case it should hold for even complex $x, y$.
|
H: An image manifold that is a diffeomorphic copy of $X$ adjacent to the original.
The entire content is rather drafty, but I am especially baffled with the last comment "and thus produces an image manifold that is a diffeomorphic copy of $X$ adjacent to the original." This sentence does not make sense to me, like why we suddenly discuss original, why it is this case, and why this matter?
$\quad$We must deal with the necessity of deforming $X$ in a mathematically precise manner. Attempting to define deformations of arbitrary point sets in $Y$ is hopeless, so we shift our point of view somewhat. Considering $X$ as an abstract manifold and its inclusion mapping $i:X\hookrightarrow Y$ simply as an embedding, we know how to deform $i$, namely by homotopy. Since embeddings form a stable class of mappings, any small homotopy of $i$ gives us another embedding $X\to Y$ and thus produces an image manifold that is a diffeomorphic copy of $X$ adjacent to the original.
AI: If $f:X\to Y$ is a smooth embedding, then the image $f(X)$ is diffeomorphic to $X$.
Let $F:X\times (-\epsilon,\epsilon)\to Y$ be a smooth map, and write $f_t(x)$ for $F(x,t)$. Thus, we have a family of smooth maps $f_t:X\to Y$. Figuratively speaking, when $t,s\in (-\epsilon,\epsilon)$ are "close", the images $f_t(X)$ and $f_s(X)$ can be said to be "adjacent". Consider this gif of a homotopy (from wikipedia):
For times $t$ and $s$ that are close together, the image curves at time $t$ and time $s$ are "adjacent".
"Embeddings form a stable class of mappings" means that if $f_t$ is an embedding for some $t$, then for some neighborhood $S\subseteq(-\epsilon,\epsilon)$ of $t$, we will have that $f_s$ is an embedding for all $s\in S$.
Thus, if $X\subseteq Y$ is an embedded submanifold of $Y$ (so that $i:X\hookrightarrow Y$ is a smooth embedding), then a homotopy of $i$ (that is, a map $F:X\times(-\epsilon,\epsilon)\to Y$ with $f_0=i$) will, for $t$ sufficiently close to $0$, produce embeddings $f_t:X\to Y$, whose image manifolds $f_t(X)$ are "adjacent" to $X$ and diffeomorphic to $X$.
|
H: Is there an operator whose non-zero commutants are always injective?
Let $H$ be an infinite dimensional separable Hilbert space.
Is there an operator $T \in B(H)$ such that, if
$TA=AT$ with $0 \ne A \in B(H)$, then $A$ injective ?
Bonus question : what is the set of all such operators ?
AI: In finite dimension, the invariant subspaces of $T$ are exactly the nullspaces of the operators that commute with $T$. So there is no such example in finite dimension $\geq 2$.
If $T$ is normal, it admits a handful of non-injective commuting projections (=reducing subspaces).
So this is a question about non-normal operators in infinite dimension.
As often when looking for a counterexample to a finite-dimensional property, the answer is the shift. Not the bilateral one, which is unitary, hence normal. But the unilateral shift
$$
S:(x_1,x_2,x_3,\ldots)\longmapsto (0,x_1,x_2,x_3,\ldots)\qquad x\in \ell^2.
$$
For every nonzero $A\in B(\ell^2)$ commuting with $S$, $A$ is injective.
Proof: the unilateral shift is conveniently realized as (unitarily equivalent to) the multiplication operator by $z$ on the Hardy space $H^2(\mathbb{D})$. In this context, it is well-known that the commutant of $S$ is exactly the algebra of analytic Toeplitz operators on $H^2$, i.e. multiplication operators by a bounded analytic function $\phi\in H^\infty$. Such nonzero multiplication operators are injective, since a nonzero analytic function on a domain has isolated zeros. QED.
Reference: "The commutant of analytic Toeplitz operators", Trans. AMS 1973, by Deddens and Wong.
|
H: Show that the LU decomposition of matrices of the form $\left[\begin{smallmatrix}0& x\\0 & y\end{smallmatrix}\right]$ is not unique
How can I show that every matrix of the form $\begin{bmatrix}0& x\\0 & y\end{bmatrix}$ has an $LU$ factorization and that even if $L$ is unit lower triangular there is not a unique factorization?
I am completely stuck. Any ideas would be greatly appreciated
AI: Suppose $L = \begin{bmatrix}1 & 0 \\ \ell & 1\end{bmatrix}$ and $U = \begin{bmatrix}u_1 & u_2 \\ 0 & u_3 \end{bmatrix}$. Then
$$
LU = \begin{bmatrix}u_1 & u_2\\u_1\ell & u_2\ell + u_3\end{bmatrix}.
$$
You want this to match with $\begin{bmatrix}0 & x\\0 & y\end{bmatrix}$. That means
\begin{align*}
u_1 & = 0 \\
u_1\ell & = 0 \\
u_2 & = x \\
u_2\ell + u_3 & = y.
\end{align*}
There are four unknowns and four equations, but the second equation is dependent on the first. That leaves one degree of freedom. $u_1$ and $u_2$ are fixed, but $u_3$ and $\ell$ can vary. $\ell$ can be chosen arbitrarily, and $u_3 = y - x\ell$ will work.
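For instance (a concrete illustration of the freedom in $\ell$): taking $\ell=0$ and $\ell=1$ gives two different factorizations with $L$ unit lower triangular,
$$\begin{bmatrix}0&x\\0&y\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}0&x\\0&y\end{bmatrix}=\begin{bmatrix}1&0\\1&1\end{bmatrix}\begin{bmatrix}0&x\\0&y-x\end{bmatrix}.$$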
|
H: Does This Condition Characterize $e^z$?
The following is a question from a Complex Analysis qualifying exam I was studying from:
Does there exist an entire function $f$, distinct from $e^z$, such that $f(0)=1$ and $f'(n)=f(n)$ for all $n\geq 1$?
My instinct is that such a function should exist, although I have tried cooking one up and I have been unable to do so. I then decided my instinct may have been incorrect, so I tried proving $e^z$ is the only such function, but I have been unable to do so.
The roadblock to one potential method: A common way to show two analytic functions are the same is showing they agree on a set in their domains with a limit point. If I could show any function $f$ and $e^z$ agreed on the positive integers, then they would agree with $\infty$ as the limit point (under the transformation $z\mapsto 1/z$, this is the same as saying the limit point is zero) but $e^z$ is not a function on $\mathbb{C}_{\infty}$ (i.e., $e^{1/z}$ has an essential singularity at $z=0$).
Another method I haven't been able to make work: Showing the coefficients of their respective power series are equal.
AI: When you want to find out whether a condition uniquely determines an entire function, it is often fruitful to assume one has two (not necessarily different) functions satisfying the condition and find out what one can determine about their difference or quotient. In particular, if one function satisfying the condition is explicitly known.
Here, we have a condition that is satisfied by the exponential function, a function that is rather well known and has several very convenient properties. One of the convenient properties is that it has no zeros, hence the quotient $\dfrac{f(z)}{e^z}$ is entire with $f$. The fact that the exponential function satisfies the differential equation $y' = y$ is also often very useful.
So suppose $f$ is an entire function with
$$f(0) = 1\quad \text{and }\quad f'(n) = f(n),\; n \in \mathbb{Z}^+\tag{1}$$
To find out whether necessarily $f(z) = e^z$, let us consider $h(z) = f(z)e^{-z}$ and see what we can say about that. First, since $f(0) = e^0 = 1$, we have $h(0) = 1$. Differentiating, we find
$$h'(z) = f'(z)e^{-z} + f(z)\bigl(-e^{-z}\bigr) = \bigl(f'(z) - f(z)\bigr)e^{-z},$$
so $h'(n) = 0$ for all positive integers $n$.
On the other hand, if $g(0) = 1$ and $g'(n) = 0$ for all positive integers $n$, then we find for $f(z) = g(z)e^z$ that $f(0) = 1$ and since $f'(z) = g'(z)e^z + g(z)e^z = \bigl(g'(z) + g(z)\bigr)e^z$, that $f'(n) = \bigl(g'(n) + g(n)\bigr)e^n = g(n)e^n = f(n)$ for positive integers $n$. So the solutions to $(1)$ are in correspondence to the entire functions $h$ with $h(0) = 1$ and $h'(n) = 0,\; n \in \mathbb{Z}^+$.
Given only $h'(n) = 0,\; n \in \mathbb{Z}^+$, we can always achieve $h(0) = 1$ by adding a constant, so the question is:
Are there entire functions $g$ with $g(n) = 0,\, n \in \mathbb{Z}^+$ other than $g \equiv 0$?
Any primitive of such a function (since $\mathbb{C}$ is simply connected, all entire functions have a primitive) gives rise to a solution of $(1)$.
A non-constant function that vanishes in all $n \in \mathbb{Z}$ (not only in the positive integers, but that doesn't hurt of course) is $\sin (\pi z)$ - of course all (nonzero) multiples of that have the same property. Choosing the factor $-\pi$, we see that $h(z) = \cos (\pi z)$ gives rise to the solution $f(z) = \cos (\pi z)e^z$ of $(1)$ which evidently is different from $e^z$.
There are many more solutions to $(1)$, but few can be expressed as simply as the above.
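As a sanity check on this particular solution, here is a brief sympy verification (illustrative):

```python
import sympy as sp

# f(z) = cos(pi z) e^z satisfies f(0) = 1 and f'(n) = f(n) for integers n.
z = sp.symbols('z')
f = sp.cos(sp.pi * z) * sp.exp(z)
print(f.subs(z, 0))                                  # 1
n = sp.symbols('n', integer=True, positive=True)
print(sp.simplify((sp.diff(f, z) - f).subs(z, n)))   # 0, since sin(pi n) = 0
```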
|
H: Help in a proof in Hungerford's book
I'm trying to understand the end of this proof:
The theorem 6.1 is:
I need help in this point.
Thanks in advance.
AI: If $f\in F[x]$ is a unit, then $f$ is non-zero because $F[x]$ is not the trivial ring, and of course by definition there is some $g\in F[x]$ such that $fg=1$. The latter fact, by Theorem 6.1(iv), implies
$$\deg(f)+\deg(g)=\deg(fg)=\deg(1)=0.$$
For any non-zero $h\in F[x]$, we have $\deg(h)\geq 0$, so the above forces $\deg(f)=\deg(g)=0$, i.e. $f$ and $g$ must both have been constants. Hence any unit $f$ is a non-zero constant.
|
H: Reasoning about random variables and instances
Sorry if this question is so basic, it hurts. I feel like not understanding this topic well enough is holding me back. If any of this language is wrong in some fundamental way, please correct me! Moving on..
Say I make a probabilistic statement about a random variable $X$ drawn from some unknown continuous distribution:
$$Pr(X > t) < t$$
Assume that statement holds for all $t > 0$.
Now let's say I draw a sample $Y$ from the distribution of $X$, but I don't tell you what $Y$ is. Is there any real difference between reasoning about $Y$ and reasoning about $X$? Is $Y$ a random variable? Does $X = Y$? Does the following implicitly hold?
$$Pr(Y > t) < t$$
Now let's say we do know the value of $Y$, e.g. $Y = 2$. Would probabilistic statements about random variable $Y$ still apply? Or would they go out the window with knowing the variable? Specifically, would the following hold?
$$Pr(2 > t) < t$$
AI: Okay so if we select X randomly from a set Q such that any randomly selected X from Q generally follows the statistical rule:
Probability(X > t) < t
Then your question is: if we select some Y randomly from the set Q does Y also obey the same law?
Yes, because selecting an X and selecting a Y is the same thing if both of them are randomly selected and have no difference in how they are selected.
Now if you did know the value of Y ahead of time would it change things? Most certainly!
The way the formula works is as follows:
you have the statement Probability(X > t) < t
You first give it a value of t (Ex: t = 1/2)
And then you get some useful expression:
Probability(X > 1/2) < 1/2
If you now define the value of X...
Probability(X$_{defined}$ > 1/2) < 1/2
The statement above is meaningless since you can just compare the two values and get a definite yes or no whether X is greater than or less than t.
On the other hand let us say t is undefined and x is defined:
Probability(X$_{defined}$ > t) < t
You have a new probability law altogether, which concerns a new variable $t$ and takes a constant value $X$.
So I guess it depends on how you interpret the formula you gave to be honest.
|
H: Is the derivative of a function the secant line?
I am just learning derivatives and I found the derivative of $4x-x^2$ to be $4-2x$. At point $(1,3)$ the tangent line is $2x+1$. Now when I graph this, the derivative $4-2x$ cuts through the function $4x-x^2$. Does that mean the derivative is the secant line?
AI: The answer generally speaking is no. The derivative of a function at a point always represents the slope of the tangent line at that point. Sometimes the tangent line (by coincidence) crosses the curve somewhere else technically making it also a secant line, but there is no special meaning in this.
Based on your comment, I see I misunderstood your question. The answer is still no in general. Try plotting the derivative of $y=x^3$.
|
H: Swinging Pendulum ODE
The problem is a pendulum with the ability to swing freely
I have a system of first order differential equations of the following form:
$\dfrac {d\theta} {dt}=\omega$
$\dfrac {d\omega} {dt}=-\dfrac{g}{l}\sin\theta-\dfrac{r(\omega)}{lm}$
we also know that $r(0)=0$ and $r'(0)>0$
where $\theta$ is the angle between the string and the vertical position and $\theta=0$ is the pendulum's natural resting position, $l$ is the length of the string, and $m$ is the mass of the object at the end of the string.
I am trying to find the equilibrium points of the ODE system above and to describe what position of the pendulum they correspond to.
I imagine that to find the equilibrium points you set both equations equal to $0$ and solve. Just from an intuitive standpoint I would guess that the equilibrium points are the exact moments when the pendulum reaches its maximum height in each swing, before it starts to fall again, but I am having trouble getting that from the equations.
AI: The equilibrium position would correspond to the location for which your angular acceleration is zero and the angular velocity is zero. The angular velocity being zero allows $\theta$ to remain constant. The angular acceleration being zero allows $\omega$ to stay zero.
So we know $\omega=0$ and $\frac{d \omega}{dt}=0$ which makes the second equation,
$$ 0=-\frac{g}{l} \sin(\theta) -\frac{r(0)}{ml}$$
Since $r(\omega)$ is the frictional force the pendulum experiences we know this must be $0$ when $\omega=0$ (friction doesn't cause motion it impedes it). This gives the equation,
$$ 0 = -\frac{g}{l} \sin(\theta) \quad \Rightarrow \quad \theta=0,\pi$$
Therefore the equilibrium position is at the bottom of the swing. The object will stay in that equilibrium provided that its velocity is zero.
There is also an equilibrium at $180^\circ$ from the bottom (i.e., the top), but this is only physically realizable if the pendulum is a rigid rod. A ball on a string couldn't stay there in equilibrium.
|
H: numbers between two real numbers
From my intuition, I believe that between two real different numbers ($a<b$), there are infinity many:
(1) rational numbers,
(2) irrational numbers,
(3) algebraic numbers and
(4) transcendental numbers.
But the question is, if I am correct, how to prove it?
I think the proof for (1) may be easy: first, find one rational number between $a$ and $b$ by comparing their integral parts. If they are different, $[b]$ is the rational number. If they are the same, compare their first decimal place, second decimal place... until finding a "difference" (it must be found, otherwise they are the same number). Delete the decimal places of $a$ and $b$ after the 'difference'. Adding the results up and dividing by two yields a rational number. After finding a rational number $c_0$ between $a$ and $b$, since $b-c_0> 0$, $\exists n\in\mathbb{N},\frac1n<b-c_0$, so we can find another rational number $c_1=c_0+\frac1n$ such that $c_0<c_1<b$. In the same way we can find rationals $c_2,c_3,c_4...$ with $a<c_0<c_1<c_2...<b$.
Is this proof correct? Or there is a simpler proof, since this is an intuitive perspective. How about the proof for (2),(3) and (4)? And if there are 4 sets which each of them contains the different kinds of number between $a$ and $b$, are they countable sets?
AI: There are proofs that don't use decimal expansions. One approach is to use the fact that every real number has an integer greater than it, and thus there is a number of the form $\frac1n$ between $0$ and any positive real number.
Note that the density of the algebraic numbers follows immediately from the density of the rationals.
Now note that there is at least one transcendental number (the simplest proof uses the fact that the reals are uncountable and that the algebraic numbers are countable), and a transcendental number plus a rational number is transcendental. From this, use the density of the rationals to prove the density of the transcendentals.
Now note that the density of the transcendentals immediately implies the density of the irrationals.
Note: the same sort of addition trick will also let you show that the algebraic irrationals are dense.
Edit: I missed something in my original reading of the question, which is resolved by the following lemma.
Lemma: Let $(S,\le)$ be a totally ordered set with at least two elements.
Let $T \subseteq S$.
Suppose that for any $a,b \in S$ with $a<b$, there is a $t\in T$ such that $a<t<b$.
Then for any $a,b\in S$ with $a<b$, $(a,b)\cap T$ is infinite.
Proof: Suppose for the sake of contradiction that it is finite.
Every finite, totally ordered set is well-ordered.
Thus … you should try to finish the proof.
To address another concern:
Lemma 2: Let $T$ be a nonempty set of real numbers such that for any $t\in T$ and any $q\in \Bbb Q$, $t+q\in T$. Then every non-degenerate real interval contains an element of $T$ (and hence infinitely many elements of $T$).
Can you prove that this holds also if you use, instead of $\Bbb Q$, the set $M_b$ of rationals of the form $\frac k {b^n}$?
|
H: Does there exist a field which has infinitely many subfields?
Does there exist a field which has infinitely many subfields? Does there exist an enormous supply of such fields?
I don't know how to begin.
AI: The field of complex numbers $\mathbb{C}$ is an example of such a field. It has infinitely many subfields, since you can adjoin families of irrational numbers (pick your favourite ones!) to $\mathbb{Q}$. My favourites would be the roots $\sqrt[n]{2}$ for each $n\in\mathbb{N}$. So in this case, the infinite family of subfields would be $\{\mathbb{Q}(\sqrt[n]{2})\}_{n=1}^{\infty}$
|
H: is the Sudoku puzzle NP-complete?
In general Sudoku on $n^2 \times n^2$ boards of $n \times n$ blocks is NP-complete.
Is the common Sudoku on $9 \times 9$ board NP-complete?
AI: The 9x9 board cannot be NP-complete, because there are finitely many instances of the problem.
|
H: Uniform Continuity of a Function
The question is as follows:
Fix any $a>0$ and any $m \in \Bbb N$. Prove that $f\colon \Bbb Q \cap [-m,m] \to \Bbb R$ given by $f(x)=a^x$ is uniformly continuous.
Thanks for any help.
AI: Given $\epsilon \gt 0$ we want to find a $\delta$ such that $\vert a^x - a^t \vert \lt \epsilon$ whenever $0 \leq \vert x - t \vert \lt \delta$ for all $x,t\in[-m,m]$.
Consider the following,
$$\vert a^x - a^t \vert = \vert a^t \vert \vert a^{x-t}-1 \vert.$$
Because $a^x$ is continuous and $[-m,m]$ is compact, we know it has a maximum value, which we will denote by $M$. Note that $M \gt 0$. We will break the problem into 2 cases: (I) when $a$ is larger than $1$ and (II) when $a$ is less than $1$.
Case (I):
The following hold,
$a^t \leq M \quad \forall t \in [-m,m]$
$a^{x-t} \leq a^{\vert x-t \vert} \leq a^\delta $
There exists a $\delta'\gt 0$ such that $\vert a^{\delta'}-1 \vert \lt \epsilon/M$; this follows from the continuity of the exponential function at $0$.
Therefore we have
$$\vert a^x - a^t \vert = \vert a^t \vert \vert a^{x-t}-1 \vert \leq M \vert a^{\delta}-1 \vert \lt M \epsilon/M = \epsilon$$
Where we chose $\delta \lt \delta'$.
I leave Case (II) to the reader.
|
H: Prove that at a party with at least two people, there are two people who know the same number of people.
Okay, now, I really want to solve this on my own, and I believe I have the basic idea, I'm just not sure how to put it as an answer on the homework. The problem in full:
"Prove that at a party with at least two people, that there are two
people who know the same number of people there (not necessarily the
same people - just the same number) given that every person at the
party knows at least one person. Also, note that nobody can be his or
her own friend. You can solve this with a tricky use of the
Pigeonhole Principle."
First of all, I'm treating the concept of "knowing" as A can know B, but B doesn't necessary know A. e.g. If Tom Cruise walks into a party, I "know" him, but he doesn't know me.
So what I did first was proved it to myself using examples of a party with 2 people, 3 people, 4 people, and so on. Indeed, under any condition, there is always at least a pair of people who know the same number of people.
So if we define $n$ as the number of party goers, then we can see that this is true under any circumstance if we assume that the first person knows the maximum number of people possible, which is $(n-1)$ (as a person can't be friends with himself). Then since we're not interested in a case where the second person knows the same number of people (otherwise there's nothing to prove), we want the second person to know one less than than the first, or $(n-2)$, and so on.
Eventually we reach a contradiction where the last person knows $(n-n)$, or 0. Since 0 is not a possible value as defined by the problem, that last person must know any number of people from 1 to $(n-1)$, which equals the number of people that at least one other person knows.
Now...I hope that this is the "right idea." But how can I turn this "general understanding" into an answer for a problem that begins with the word "prove?"
Let me note that we only very briefly touched on the concepts of induction and the pigeonhole principle, and did not go into any examples of how to formally "prove" anything with the pigeonhole principle. We did touch on proving the sum of numbers by induction, but that's all as far as induction goes.
Also: Combinatorics question: Prove 2 people at a party know the same amount of people does not really work for me, because
A) we've not talked about "combinatorics", and
B) that question allows for someone to know 0 people.
AI: Let $n$ be the number of party-goers. The maximum number of people a person can know is $n-1$ and the minimum number he/she can know is 1 (by assumption), giving us $n-1$ possibilities for the number of people someone can know. Every single person must be assigned one of these $n-1$ possible numbers but since there are $n$ party-goers one of these numbers must be used twice due to the pigeonhole principle i.e. two party-goers know the same number of people.
|
H: Making an inequality true
$n > 10$ implies $n + 3 \leq \Box\times n$
Possible answers:
$1$
$2$
$3$
$4$
I answered "$2$" and got it wrong. Why? When $n=2$, $(11) + 3 \le 2(11)$.
$n > 1$ implies $n + 3 \leq \Box\times n$
Possible answers:
$1$
$2$
$3$
$4$
I answered "$4$" and got it right.
Why did I get the 2nd problem right, and the 1st wrong?
AI: My first guess was that this was a question of the sort where you need to select every correct option, but if that were the case, then you would not have gotten the 2nd question correct. So I assume it is a mistake on the part of whoever is grading.
The statement
$$n>10\implies n+3\leq \Box\times n$$
is true for all values of $\Box$ that are greater than or equal to $\frac{14}{11}$. Out of the provided options, this means that the correct ones are $\Box=2$, $\Box=3$, and $\Box=4$.
The statement
$$n>1\implies n+3\leq \Box\times n$$
is true for all values of $\Box$ that are greater than or equal to $\frac{5}{2}$. Out of the provided options, this means that the correct ones are $\Box=3$ and $\Box=4$.
|
H: Find $ \int \frac {\tan 2x} {\sqrt {\cos^6x +\sin^6x}} dx $
Problem: Find $\displaystyle\int \frac {\tan 2x} {\sqrt {\cos^6 x +\sin^6 x}} dx $
Solution: $\tan 2x= \dfrac{2\tan x}{1-\tan^2 x}$
Also I can factor $\cos^6x$ out of $\sqrt {\cos^6x +\sin^6x}$
I don't know whether this is a good approach to the question.
Please help.
AI: HINT:
$$\cos^6x+\sin^6x=(\cos^2x+\sin^2x)^3-3\cos^2x\sin^2x(\cos^2x+\sin^2x)$$
$$=1-3\cos^2x\sin^2x=1-\frac34(\sin2x)^2=1-\frac34(1-\cos^22x)=\frac{1+3\cos^22x}4$$
Use $\cos2x=u$
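Carrying the hint a little further (a sketch of where the substitution leads): with $u=\cos 2x$, $du=-2\sin 2x\,dx$, so
$$\int \frac {\tan 2x} {\sqrt {\cos^6x +\sin^6x}}\, dx=\int\frac{\sin 2x}{\cos 2x}\cdot\frac{2\,dx}{\sqrt{1+3\cos^22x}}=-\int\frac{du}{u\sqrt{1+3u^2}},$$
which is a standard form.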
|
H: Continuous uniform distribution over a circle with radius R
I started to do this problem with the standard integration techniques, but I can't help but think that there has got to be something I am not seeing. Since it is a uniform distribution, even though $x$ and $y$ are not independent, it seems like there should be some shortcut. Here is the problem:
Take a random point (x, y) which is uniformly distributed over the circle with radius R
$f(x,y) = \begin{cases} \frac{1}{\pi R^2} & x^2 + y^2 \le R^2, \\ 0 & \text{otherwise} \end{cases}$
calculate $g(x)$, $h(y)$, the mean of $x$, $y$, and $xy$.
AI: Let the pair $(X,Y)$ of random variables have uniform distribution on the disk with centre the origin and radius $R$. Then the joint density function $f_{X,Y}$ is $\frac{1}{\pi R^2}$ on the disk, and $0$ elsewhere.
For the density function of $x$, "integrate out" $y$. So we are integrating a constant from $-\sqrt{R^2-x^2}$ to $\sqrt{R^2-x^2}$. The result is $\frac{2\sqrt{R^2-x^2}}{\pi R^2}$ (for $-R\le x\le R$, and $0$ elsewhere).
The density function of $Y$ can be read off by symmetry.
For the mean of $X$, we don't need to calculate at all. By symmetry it is $0$.
The mean of $XY$ is also $0$. The second and fourth quadrant parts cancel the first and third quadrant parts.
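A quick Monte Carlo check of the symmetry arguments (illustrative only; the radius and sample size are arbitrary):

```python
import numpy as np

# Sample uniformly from the disk of radius R by rejection from the square.
rng = np.random.default_rng(1)
R = 2.0
pts = rng.uniform(-R, R, size=(200_000, 2))
inside = pts[(pts**2).sum(axis=1) <= R**2]
x, y = inside[:, 0], inside[:, 1]
print(x.mean())        # close to 0
print((x * y).mean())  # close to 0
```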
|
H: Is $t^4+7$ reducible over $\mathbb{Z}_{17}$?
Is $f=t^4+7$ reducible over $\mathbb{Z}_{17}$?
Attempt: I checked that $f$ has no roots in $\mathbb{Z}_{17}$, so the only possible factorization is with quadratic factors.
Assuming $f=(t^2+at+b)(t^2+ct+d)$, we have $bd=7$, $a+c=0$ and $ac+b+d=0.$ But it is cumbersome to find if there are solutions for these equations.
I know certain tools to prove the irreducibility of a polynomial over rings like $\mathbb{Z}$ or $\mathbb{Q}$ such as Gauss and Eisenstein's criteria. But over finite rings like $\mathbb{Z}_n$, I don't know how to prove the irreducibility of a polynomial with degree higher than $3$. Finding roots allows me to find linear factors, but I cannot use this technique to find quadratic factors. What kind of tool may I use in this case?
AI: It seems that we need only elementary tools to show the irreducibility in the comments. But let me introduce one interesting approach, called the Berlekamp algorithm, that one can carry out by hand, in a systematic way.
Firstly, let $\beta=\{1,x,\cdots,x^3\}$ be a basis for $F_{17}[x]/(x^4+7)$. Then we consider the automorphism $\sigma_{17}=\sigma$ which takes $f$ to $f^{17}$, with respect to the basis $\beta$.
$$\sigma: \begin{pmatrix}1&0&0&0\\0&4&0&0\\0&0&-1&0\\0&0&0&-4\end{pmatrix}.$$
Now denote the matrix of $\sigma-\iota$ by $B$, which is $$B=\begin{pmatrix}0&0&0&0\\0&3&0&0\\0&0&-2&0\\0&0&0&-5\end{pmatrix}.$$
Since this matrix has kernel of dimension $1$, we conclude that $f$ is irreducible!
Why? It depends upon the following lemma:
Lemma
If $h\in F_q[x]$ is monic and such that $h^q\equiv h\pmod f$, then $f=\prod_{c\in F_q}\gcd(f(x),h(x)-c)$.
Since the proof is easy, we omit it here.
Furthermore, by definition, we know that $h$ must belong to the kernel of $\sigma_q-\iota$. Apparently, $h=1$ is always such a polynomial, and hence $\rho:=\sigma-\iota$ always has a kernel of dimension $\ge1$; this is called the trivial factorisation.
Now let $f$ have $k$ irreducible factors $f_i$. If $h$ satisfies the conditions in the lemma, then each $f_i$ divides one of $h(x)-c$, so that $h\equiv c\pmod {f_i}$. As a consequence, we find that the dimension of the space of such $h$ is exactly $k$. Since this space is just the kernel of $\sigma_q-\iota$, we now see how one can conclude that our $f$ is irreducible as above.
In general, if there are non-trivial solutions to $h^q\equiv h\pmod f$, then we shall write out the factorisation $f(x)=\prod_{c\in F_q}\gcd(f(x),h(x)-c)$, and examine each factor therein respectively.
Summary
The key idea here is the matrix denoted by $B$ above. Its nullity (dimension of its kernel) is exactly the number of irreducible factors of $f$. If this number is just $1$, then $f$ is irreducible. If there are non-trivial polynomials in that kernel, write out the corresponding factorisation to reduce further things.
Note, however, that the nullity of $B$ is never $0$ (the constants always lie in the kernel), while $f$ may still be irreducible; what matters is whether the nullity exceeds $1$. When it does, examine each greatest common divisor $\gcd(f(x),h(x)-c)$.
If any inappropriate points occur, tell me, so that I can correct them. Thanks in advance.
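If you would like to reproduce the matrix of $\sigma$ by machine, here is a minimal self-contained sketch (hand-rolled arithmetic in $F_{17}[x]/(x^4+7)$; all names are mine, not from the original answer):

```python
p = 17  # we work in F_17[x] modulo f = x^4 + 7

def polymul_mod(a, b):
    # multiply two degree-<4 polynomials (coefficient lists, low degree first)
    res = [0] * 7
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    # reduce using x^4 = -7 (from f = x^4 + 7)
    for i in range(6, 3, -1):
        res[i - 4] = (res[i - 4] - 7 * res[i]) % p
        res[i] = 0
    return res[:4]

def polypow_mod(a, e):
    # binary exponentiation in F_17[x]/(f)
    result = [1, 0, 0, 0]
    base = list(a)
    while e:
        if e & 1:
            result = polymul_mod(result, base)
        base = polymul_mod(base, base)
        e >>= 1
    return result

# Columns of sigma: the images of 1, x, x^2, x^3 under h -> h^17.
for v in ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]):
    print(polypow_mod(v, p))
# prints [1,0,0,0], [0,4,0,0], [0,0,16,0], [0,0,0,13],
# i.e. diag(1, 4, -1, -4) mod 17, matching the matrix above
```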
|
H: Prove that if $x,y\in\alpha$, then $x<y$, $x>y$, or $x=y$.
I refer to pg.4, Lemma 12 of this article on ordinal numbers.
It says "If $x,y\in \alpha$, then $x<y, x>y, \text { or }x=y$".
My question: As $\alpha$ is an ordinal, we know $x<\alpha$ and $y<\alpha$. However, how do we know that $x$ and $y$ can be compared in this manner? We know that as $\alpha$ is transitive, $x,y\subset\alpha$. However, that does not necessarily imply $x\subset y$, $y\subset x$ or $y=x$! Moreover, $x,y$ are not real numbers such that we could assume that one of the three relations $x<y,x>y$ or $x=y$ should hold.
Although $x$ and $y$ are ordinal numbers themselves by the fact that $x,y\in\alpha$, we can't assume this now as the lemma I am currently studying will be used to establish this fact.
Thanks in advance!
AI: Note that $<$ is merely a binary relation symbol. It is not inherently the order of the real numbers, or some other partial order.
I claim that a well-order is a linear order. Consider Definition 6, given $(A,\prec)$ which is a well-order, let $a,b\in A$ then $\{a,b\}$ is a non-empty subset of $A$ and therefore has a least element, therefore $a$ and $b$ are comparable.
Now it's easy to deduce that ordinals are linear orders by Definition 8 which requires them to be well-ordered by $\in$.
|
H: determine position of circle inside square
I need to determine the position of a circle inside a square. Let us suppose that we have the following picture.
We have the following information:
1. $ABCD$ is a square.
2. All the small figures, $KMCE$, $PKEF$, $NPFD$, are squares as well.
3. The diameter of the small circle is equal to $6$ cm.
Problem:
We have to find the area of the square $ABCD$.
The problem I have is that I don't know how to express the sides of the small squares in terms of the diameter or radius, or even find the relationship between the big square and the circle. It is not obvious that the center of the circle is located on the intersection point of the diagonals. Please help me; maybe it is very simple and I just don't see the key fact. Please give me a hint and I will try to get the main idea of the solution. Thanks in advance.
AI: Call the sidelength of the small square $a$. Then, $|CD| = 3a$. This implies $|AD| = 3a$, and $|AN| = |AD|-|DN| = 2a$. The diameter of the small circle is thus $2a$. Now, solve $2a = 6$ to find $a$, and ...
|
H: Pushforward Filter
While reading some notes I came across the notion of a pushforward pre-filter. If $g:X\rightarrow Y$ is a continuous map of topological spaces and $F$ is a pre-filter on $X$, then the author claims that $g(F) = \{g(f) : f\in F\}$ is a pre-filter on $Y$. I am having trouble understanding why $g(F)$ must have the finite intersection property. Am I missing something obvious?
AI: Suppose given $g(f_1)$ and $g(f_2)$ in $g(F)$. You need to show that $g(f_1)\cap g(f_2)$ contains some element of the form $g(f)$, with $f\in F$. Since $F$ is a pre-filter, there is some $f\in F$ with $f\subseteq f_1\cap f_2$. It is immediate to show that for this $f$ it holds that $g(f)\subseteq g(f_1)\cap g(f_2)$. And this has nothing to do with topology.
|
H: Are irrational numbers completely random?
As far as I know, the decimal digits of an irrational number appear randomly, showing no pattern. Hence it may not be possible to predict the $n^{th}$ decimal place without any calculations. So I was wondering if there is a chance that somewhere down the line, the infinite list of decimal digits of an irrational could reveal something like our date of birth in order (e.g. 19901225) or even a paragraph in binary that would reveal something meaningful. Since this is an infinite sequence of digits:
Is it possible to calculate the probability of a birthday (say 19901225) appearing in order inside the sequence?
Does the probability approach $1$, since this is an infinite sequence?
Any discussions and debate will be welcomed.
AI: Here are two examples of irrational numbers that are not 'completely random':
$$.101001000100001000001\ldots\\.123456789101112131415\ldots$$
Notice the string $19901225$ does not appear in the first number, and appears infinitely many times in the second.
Now, as to your question of probability, let's consider the interval $[0,1]$. Using a modified version of the argument in this question, it can be shown that given any finite string of digits, the set of all numbers containing the string in their decimal expansion is measurable, and has measure $1$.
So, as you suspect, if we choose a number at random between $0$ and $1$, the probability that it has the string $19901225$ in its decimal expansion is indeed $1$. Also, more surprisingly, and perhaps a bit creepy, the probability that we choose a number that contains the story of your life in binary is also $1$.
|
H: Extension degree of residue field.
Let $k$ be a field, and $A$ be a finitely generated $k$-algebra with $\text{dim}(A)\leq 1$. Then for any maximal ideal $\mathfrak{m}$ of $A$, does this inequality $[A/\mathfrak{m}:k]<\infty$ hold?
Actually, what I really want to know is the following: for a scheme $X$ of finite type over $k$ with $\text{dim}(Z)=1$ for any irreducible component $Z$ of $X$, let $x\in X$ be a closed point, and $k(x)$ be the residue field at $x$. Then $[k(x):k]<\infty$.
AI: Yes, $[A/\mathfrak{m}:k]<\infty$.
This follows from Zariski's version of the Nullstellensatz: a finitely generated algebra $B$ over a field $k$ which is itself a field satisfies $[B:k] \lt \infty$ . Apply to $B=A/\mathfrak m$.
Notice that the hypothesis $\text{dim}(A)\leq 1$ is irrelevant.
The generalization to schemes about which you "really want to know" immediately follows by taking an affine open neighbourhood of the closed point $x$ you are interested in.
(Once again considerations on the dimension of the scheme are irrelevant)
Reference
Zariski's result is Corollary 5.24, page 67 of Atiyah-Macdonald's Introduction to Commutative Algebra.
Edit
In case you want your very own copy of Atiyah-Macdonald's book, the good people of INTERNATIONAL__BOOKS/DVD'S will send you one, used but in very good condition, at the bargain price of \$6,785.78
Ah yes, you have to add \$3.99 for delivery.
|
H: $\sqrt{\sum_1^\infty(x_k+y_k)^2}\le \sqrt{\sum_1^\infty x_k^2}+\sqrt{\sum_1^\infty y_k^2}$ provided the limits exist.
Using the fact that (for $x_k,y_k\in\mathbb R~\forall~k\in\mathbb N$) $$\sqrt{\sum_1^n(x_k+y_k)^2}\le \sqrt{\sum_1^n x_k^2}+\sqrt{\sum_1^n y_k^2}$$ I need to show that $$\sqrt{\sum_1^\infty(x_k+y_k)^2}\le \sqrt{\sum_1^\infty x_k^2}+\sqrt{\sum_1^\infty y_k^2}$$ provided $\sum_1^\infty(x_k+y_k)^2,~\sum_1^\infty x_k^2,~\sum_1^\infty y_k^2$ exist. Please tell me whether my attempt is correct!
Let for $n\in\mathbb N,$ $$A_n=\sum_1^n(x_k+y_k)^2,B_n=\sum_1^n x_k^2,~C_n=\sum_1^n y_k^2,\\a_n=\sqrt{\sum_1^n(x_k+y_k)^2},b_n=\sqrt{\sum_1^n x_k^2},~c_n=\sqrt{\sum_1^n y_k^2}$$
Then $\lim a_n=\sqrt{\lim A_n},~\lim b_n=\sqrt{\lim B_n},~\lim c_n=\sqrt{\lim C_n}$ since $\sqrt{}\in \mathcal C[0,\infty).$
$a_n\le b_n+c_n~\forall~n\in\mathbb N.$
Here the sequence $(b_n+c_n)_n$ is convergent since $(b_n)_n,(c_n)_n$ are so and $\lim(b_n+c_n)=\lim b_n+\lim c_n.$
Therefore
$\lim a_n\le \lim b_n+\lim c_n$$\implies\sqrt{\lim A_n}\le \sqrt{\lim B_n}+\sqrt{\lim C_n}\\\implies \sqrt{\sum_1^\infty(x_k+y_k)^2}\le \sqrt{\sum_1^\infty x_k^2}+\sqrt{\sum_1^\infty y_k^2}$
AI: If by $\sqrt{}\in\mathcal C[0,\infty)$ you mean that the square root is continuous, then I think you are OK. Maybe your "therefore" needs a little justification. Do you know from your course that if $u_n \leq v_n$ and both converge, then $\lim u_n \leq \lim v_n$?
|
H: How to rotate two vectors (2d), where their angle is larger than 180.
The rotation matrix
$$\begin{bmatrix} \cos\theta & -\sin \theta\\ \sin\theta & \cos\theta \end{bmatrix}$$ cannot process the case that the angle between two vectors is larger than $180$ degrees. (counter-clockwise rotation).
AI: The matrix $M(\theta)$, where
$$M(\theta) = \left[ \begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array}\right]$$
Can take any value of $\theta$ whatsoever. For example, let $\theta = 270^{\circ}$ then we have $\sin\theta = -1$ and $\cos\theta =0$ giving
$$M(270^{\circ}) = \left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right]$$
The vector $[1,0]^{\top}$ gets sent to $[0,-1]^{\top}$ while the vector $[0,1]^{\top}$ to $[1,0]^{\top}$. Notice, moreover, that $\det(M(\theta)) = 1$ for all $\theta$. This is exactly an anti-clockwise rotation of $270^{\circ}$.
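A quick numerical check of this example (illustrative):

```python
import numpy as np

# The same rotation matrix handles angles beyond 180 degrees.
theta = np.deg2rad(270)
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(M @ np.array([1.0, 0.0]))   # approximately [0, -1]
print(M @ np.array([0.0, 1.0]))   # approximately [1, 0]
```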
|
H: A conservative (2D) vector field that is perpendicular to the unit square on the boundary of unit square?
I'm trying to test a PDE solver and I'm wondering if there is any 2D vector field that satisfies the following on the domain $\Omega = [0,1] \times [0,1]$:
$$\text{curl} \;\mathbf{u} = 0 \;\;\;\forall \mathbf{x} \in \Omega$$
$$\mathbf{u}\cdot\mathbf{t} = 0 \;\;\;\forall \mathbf{x} \in \partial\Omega$$
where $\mathbf{t}$ is the tangential vector to $\partial\Omega$.
i.e. I'm looking for a vector field that is conservative and also, on the boundary of the domain (the unit square) is perpendicular to the boundary.
Is this possible? I can come up with several examples for each separate condition, but none for both.
AI: Consider the function
$$f(x,y):=\sin(\pi x)\>\sin(\pi y)\qquad\bigl((x,y)\in\Omega\bigr)$$
and put
$${\bf u}(x,y):=\nabla f(x,y)=\bigl(\pi\cos(\pi x)\>\sin(\pi y),\ \pi\sin(\pi x)\>\cos(\pi y)\bigr)\ .$$
Then ${\rm curl}({\bf u})\equiv0$ and
$${\bf u}(0,y)=\bigl(\pi\sin(\pi y),0\bigr)\ne{\bf 0}\qquad(0< y<1)\ ,$$
and similarly for the other three edges of $\Omega$.
|
H: Taylor remainder of $f(x,y)=\sin x\cdot \cos y$
Given $f\colon \mathbb R^2\rightarrow\mathbb R,(x,y)\mapsto\sin x\cdot\cos y$ I want to show that there exists $M>0$ such that $$|f(x,y)-T_2(x,y)|\leq M(|x|+|y|)$$ for all $(x,y)\in\mathbb R^2$. $T_2$ is the taylor-polynomial of order $2$.
I know I have to consider the remainder of $T_2$. So I calculated all the third partial derivatives:
\begin{align*}
\frac{\partial^3 f}{\partial x^3}(x,y)&=-\cos(x)\cos(y)\\
\frac{\partial^3 f}{\partial y^3}(x,y)&=\sin(x)\sin(y)\\
\frac{\partial^3 f}{\partial y^2\partial x}(x,y)&=\frac{\partial^3 f}{\partial x\partial y^2}(x,y)=\frac{\partial^3 f}{\partial y\partial x\partial y}(x,y)=-\cos(x)\cos(y)\\
\frac{\partial^3 f}{\partial x^2\partial y}(x,y)&=\frac{\partial^3 f}{\partial x\partial y\partial x}(x,y)=\frac{\partial^3 f}{\partial y\partial x^2}(x,y)=\sin(x)\sin(y)
\end{align*}
But now I am really stuck. I looked up the formula for the remainder on wikipedia but I never get the term $|x|+|y|$. So how can you show that $|R_3(x,y)|\leq M(|x|+|y|)$?
AI: First compute $T_2(x,y)$ which is given by
$$ T_2(x,y) = x. $$
Now, we have
$$ \Big| f(x,y)-T_2(x,y)\Big|= \Big| \sin(x)\cos(y)-x\Big|= \Big| \sin(x)\cos(y)-\sin(x)+\sin(x)-x\Big|$$
$$ \leq |\sin(x)(\cos(y)-1)|+|\sin(x)|+|x| $$
$$ \leq|(\cos(y)-1)|+2|x| $$
$$ \leq |y|+2|x| $$
$$ \leq 2(|x|+|y|). $$
Note: The following inequalities
$$ |\sin(x)|\leq |x|,\quad |\cos(y)-1|\leq |y| $$
can be proved using the mean value theorem.
|
H: find $\mathbb E[X|Y]$ of the following joint pmf
$$
\begin{array}{c|ccc}
X\backslash Y & 1 & 2 & 3\\
\hline
1 & \frac{2}{15}& \frac{4}{15}& \frac{3}{15}\\
2 & \frac{1}{15}& \frac{1}{15}& \frac{4}{15}
\end{array}
$$
$\mathbb E[X|Y]=\sum_xxP(X|Y)=\sum_xx\frac{P(X,Y)}{P(y)}\ldots(1)$
where $P(y)=\sum_xP(X,Y)=$
$$
\begin{array}{c|ccc}
Y & 1 & 2 & 3\\
\hline
P(y) & \frac{3}{15}& \frac{5}{15}& \frac{7}{15}
\end{array}
$$
Now which value of $P(y)$ i will substitute in equation $(1)$ ?
AI: Your formula $(1)$ is actually not the formula for the conditional expectation ${\rm E}[X\mid Y]$ (which in fact is a random variable, and not a fixed number) but instead it is the formula for the conditional expectation ${\rm E}[X\mid Y=y]$. Using your formula, you can calculate ${\rm E}[X\mid Y=y]$ for $y=1,2,3$ and then use that
$$
{\rm E}[X\mid Y]=\sum_{i=1}^3{\rm E}[X\mid Y=i]\mathbf{1}_{\{Y=i\}},
$$
where $\mathbf{1}_A$ denotes the indicator or characteristic function of the set $A$, i.e. it is $1$ on the set $A$ and $0$ outside of $A$. This is just a short way of saying that
$$
{\rm E}[X\mid Y](\omega)={\rm E}[X\mid Y=i]\quad \text{if }\,\omega\in \{Y=i\}.
$$
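Concretely, using the marginal computed above,
$$
{\rm E}[X\mid Y=1]=\frac{1\cdot\frac{2}{15}+2\cdot\frac{1}{15}}{\frac{3}{15}}=\frac{4}{3},\qquad
{\rm E}[X\mid Y=2]=\frac{1\cdot\frac{4}{15}+2\cdot\frac{1}{15}}{\frac{5}{15}}=\frac{6}{5},\qquad
{\rm E}[X\mid Y=3]=\frac{1\cdot\frac{3}{15}+2\cdot\frac{4}{15}}{\frac{7}{15}}=\frac{11}{7}.
$$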
|
H: Convergence of $\varphi_n(x):=\frac{\varphi(nx)}{n}$ in Schwartz space
I want to find all $\varphi\in\mathcal S(\mathbb R)$ for which the sequence $\varphi_n(x):=\frac{\varphi(nx)}{n}$ converges in $\mathcal S(\mathbb R)$.
The first step, I have already managed to do by myself:
$\forall n\in\mathbb N, \varphi_n\in\mathcal S(\mathbb R)$. I used the substitution $x'=nx$, so that $\frac{\partial }{\partial x}=n\frac{\partial }{\partial x'}$. Then we have
$|\varphi_n|_{l,m}=\sup_{x\in\mathbb R}{|x^l\partial^m\frac{\varphi(nx)}{n}|}\\=\sup_{x'\in\mathbb R}{|\frac{1}{n}(\frac{x'}{n})^l (n\frac{\partial }{\partial x'})^m \varphi(x')}|\\
=\sup_{x'\in\mathbb R}{|n^{m-l-1}x'^l (\frac{\partial }{\partial x'})^m\varphi(x')}|<\infty,\ \forall n\in\mathbb N$
But now I'm stuck on showing the convergence:
My guess is, that above statement holds for all $\varphi\in\mathcal S(\mathbb R)$. Has anyone an idea, how to show that?
Any help is appreciated.
AI: Assume that the sequence $(\varphi_n,n\geqslant 1)$ converges in the Schwartz space. Since $\varphi$ is bounded, $\varphi_n\to 0$ pointwise, so the limit function is necessarily $0$. In particular, we have $\sup_{x\in\mathbb R}|\varphi''_n(x)|\leqslant M$ for some $M$ independent of $n$; since $\varphi_n''(x)=n\,\varphi''(nx)$, this gives $n\sup_{x\in\mathbb R}|\varphi''(x)|\leqslant M$ for every $n$, hence $\varphi''=0$ and $\varphi(x)=ax+b$. Since $\varphi\in\mathcal S(\mathbb R)$, we must have $a=b=0$. Conversely, for $\varphi=0$ the sequence trivially converges, so $\varphi=0$ is the only such function.
|
H: Question on proof of deformation lemma
On page 479 of Partial Differential Equations (Evans), how is condition (iii) of the deformation lemma satisfied?
AI: After $(12)$, Evans concluded that for $u\in H$ and $t\in [0,1]$ $$\tag{1}\frac{d}{dt}I[\eta_t(u)]\leq 0,\ $$
$(1)$ tells us that the function $v(t)=I[\eta_t(u)]$, for $u$ fixed, is non-increasing, which implies that $v(t)\leq v(0)$ for all $t\in [0,1]$, or equivalently (remember that $\eta_0(u)=u$) $$I[\eta_t(u)]\leq I[u],\ \forall\ t\in [0,1]$$
|
H: Convergence of partial sums of real sequences
For all $i\in\mathbb{N}$, let $(a_{i,n})_{n\in\mathbb{N}}$ be a real sequence that tends to $0$ for $n\rightarrow\infty$. It holds also that $|a_{i,n}|\leq1$ for all $i,n\in\mathbb{N}$. Is it possible to show that \begin{align*}
c_n:=\frac{1}{n}\sum_{i=1}^{n}a_{i,n}\xrightarrow{n\rightarrow\infty}0?\end{align*} Thanks!
AI: Counter-example:
$$a_{i,n}=\left\{\begin{array}{ll} 1&n< 2i\\0 & n\ge 2i\end{array}\right.\ .$$
It is easy to see that $\lim\limits_{n\to\infty}c_n=\frac{1}{2}$.
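In detail: for each fixed $i$ we have $a_{i,n}=0$ as soon as $n\ge 2i$, so indeed $a_{i,n}\to0$; on the other hand $a_{i,n}=1$ exactly when $i>n/2$, so
$$c_n=\frac1n\,\#\{1\le i\le n: 2i>n\}=\frac{\lceil n/2\rceil}{n}\xrightarrow{\ n\to\infty\ }\frac12.$$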
|
H: Let $f:[0,1]\to[0,1]$ be continuous then $f$ assumes the value $\int_0^1 f^2(t)dt$ somewhere in $[0, 1].$
True/False test: Let $f:[0,1]\to[0,1]$ be continuous then $f$ assumes the value $\int_0^1 f^2(t)dt$ somewhere in $[0, 1].$
$$f:[0,1]\to[0,1]\implies f^2:[0,1]\to[0,1]\implies 0\le\int_0^1 f^2(t)dt\le1$$
So it's true.
but the paper says the statement is false.
Please help.
AI: Read the task carefully. You proved that $\int_0^1 f^2(t)\,dt\in[0,1]$. The task, however, asks whether $f$ assumes the value of the integral, that is: is there an $x\in[0,1]$ such that $f(x)=\int_0^1 f^2(t)\,dt$? This is indeed false in general; you can find a counterexample using the following hint:
Hint: Let $f$ be a constant function, $f(x)=c$ for some $c\in[0,1]$. Then $f$ assumes the value $\int_0^1 f^2(t)dt$, iff $\int_0^1 f^2(t)dt=c$. Think about how to choose $c$ to get a counterexample.
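(Spoiler, in case you want to check your answer: any $c$ with $c^2\neq c$ works, e.g. $c=\frac12$, for which $\int_0^1 f^2(t)\,dt=\frac14$ while $f$ only ever takes the value $\frac12$.)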
|
H: Why is $\overline{\mathbb{C}\setminus\left\{0\right\}}=\overline{\exp(\mathbb{C})}$?
Why is $\overline{\mathbb{C}\setminus\left\{0\right\}}=\overline{\exp(\mathbb{C})}$?
I do not know how one can see that...
AI: Hint: $\exp(\Bbb C)=\Bbb C\setminus\{0\}$.
|
H: Basic First Order Linear Difference Equation(non-homogeneous)
I have this as my homework and I am not sure how to start:
Solve the first-order linear difference equation $$(k+1)x_{n+1}+x_n=k$$ for some constant $k.$ [Hint: The general solution of inhomogeneous linear difference equations also consists of a complementary function and a particular solution.]
Not sure how to start.
Need some guidance on this...
AI: As this is an inhomogeneous equation, we can subtract two successive instances of it:
$$
(k+1)x_{n+1}+x_n = k\\
(k+1)x_{n+2}+x_{n+1} = k
$$
We get a homogeneous linear difference equation:
$$
(k+1)x_{n+2} - k x_{n+1} - x_n = 0
$$
Then you should solve the characteristic (quadratic) equation:
$$
(k+1)r^2 - k r - 1 = 0
$$
The solutions of this equation are $ r_1 = 1 $ and $ r_2 = -1/(k+1) $.
Then $$ x_n = A + B (\frac{-1}{k+1})^n $$
which gives you the form of the solution.
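To pin down $A$, substitute this form back into the original equation: the $B$-terms cancel, since $(k+1)\left(\frac{-1}{k+1}\right)^{n+1}=-\left(\frac{-1}{k+1}\right)^{n}$, and we are left with
$$(k+1)A+A=k\implies A=\frac{k}{k+2}\qquad(k\ne-2),$$
while $B$ is determined by the initial value $x_0$.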
For more reading on how to solve such recurrences, check http://en.wikipedia.org/wiki/Recurrence_relation
|
H: Compactly supported continuous function is uniformly continuous
Let $f:\mathbb R \rightarrow \mathbb R$ be continuous and compactly supported. How can I prove that $f$ is uniformly continuous? I was trying to prove it by contradiction but got stuck. My attempt was as follows:
Let $E$ be the compact support of $f$. On $E$ we know that $f$ is uniformly continuous. If we assume that $f$ is not uniformly continuous we know
$$
\exists \epsilon > 0 \forall \delta > 0 \exists x,y \in \mathbb R: |x-y|<\delta \wedge |f(x)-f(y)| \geq \epsilon
Let $\delta_n := \frac 1n$ and fix this $\epsilon > 0$. Compute a $\delta > 0$ s.t. $\forall x,y \in E:|x-y| < \delta \rightarrow |f(x)-f(y)| <\epsilon$. Let $n \geq N$ s.t. $\frac 1N < \delta$. Then we may assume wlog that $x_n \in E$ and $y_n \in \mathbb R \setminus E$ where $x_n,y_n$ are the points corresponding to $\delta_n$. This gives a sequence of points where $|x_n-y_n| \rightarrow 0$ and $x_n \in E$ and $y_n \in \mathbb R \setminus E$.
I now want to use somehow the continuitiy of $f$. How can I do this ? Is this approach a good one ? Can it be more simple ?
New idea:
I know that $E$ is compact so $(y_n)_{n=0}^\infty$ has a convergent subsequence $(y_{n_j})_{j=0}^\infty$ with limit, say, $y \in E$. Now
$$
|x_{n_j}-y| \leq |x_{n_j}-y_{n_j}|+|y_{n_j}-y| \rightarrow 0
$$ So I can take $x_{n_j}$ close to $y$ to get $|f(x_{n_j})-f(y)| <\frac \epsilon 2$ by continuity. Note also that $f(y_{n_j})=0$, since $y_{n_j}\notin E$ and $E$ is the support of $f$. We further have
$$
|f(x_{n_j})| \leq |f(x_{n_j})-f(y)| + |f(y)|
$$ s.t.
$$
|f(y)| \geq |f(x_{n_j})| - |f(x_{n_j})-f(y)| \geq \frac \epsilon 2
$$ because $|f(x_{n_j})| = |f(x_{n_j})-f(y_{n_j})| \geq \epsilon$ per construction. On the other hand, $f(y_{n_j})\to f(y)$ by continuity, so $f(y)=0$, i.e. $|f(y)| < \frac \epsilon 2$, which contradicts $|f(y)| \geq \frac \epsilon 2$.
New solution: Let $\epsilon > 0$. Let $\delta_1>0$ be as given by the uniform continuity of $f$ on $E$. Further
$$
\forall x \in E,\ \exists \delta_x > 0\ \forall y \in \mathbb{R}: |x-y|< \delta_x \rightarrow |f(x)-f(y)| < \epsilon
Compute a finite open cover of $E$
$$
E \subseteq \bigcup_{i=1}^N B\left(x_i,\frac{\delta_{x_i}}2\right)
$$ Write $\delta_i := \delta_{x_i}$. Let $\delta_2 := \min_{i=1,\cdots,N} \frac {\delta_i} 2$ and $\delta := \min(\delta_1,\delta_2)$.
Assume $|x-y|< \delta$. If $x,y \in E$ or $x,y \notin E$ we are done. Otherwise assume $x \in E$ and $y \notin E$. Then $x \in B\left(x_i,\frac{\delta_i}{2}\right)$ for some $i$. Further
$$
|y-x_i| \leq |y-x|+|x-x_i|\leq \delta_2 + \frac{\delta_i}2 \leq \delta_i
$$ thus $y \in B(x_i,\delta_i)$ which proves the claim.
AI: Let us first see what continuity is. Given a positive $\epsilon$, continuity of $f$ means that at each $x$ we can find a small $\delta$ such that $x'\in (x-\delta,x+\delta)$ implies $f(x')\in (f(x)-\epsilon,f(x)+\epsilon)$. So around each $x$, we have a small open set that is mapped very close to $f(x)$. Also note that points in the same small open set are mapped to points close to each other, since they are both mapped to points close to $f(x)$. But the problem here is that this radius $\delta$ of the open sets depends on the point $x$.
Now compactness means you can find a finite collection of such small open sets that covers the support of the function, and this is good enough for us, because among these finitely many open sets you can determine the smallest radius, say $r$. If the distance between $x$ and $x'$ is less than $r/10$, then they must lie in a common open set, and then $f(x)$ and $f(x')$ must be close. Since this $r$ does not depend on $x$, you have uniform continuity.
|
H: Equivalence relation and restriction
This is a HW question
Suppose $B \subseteq A$ and $R_a$ is an equivalence relation on $A$. Let $R_b$ be the restriction of $R_a$ to $B$; that is, $R_b = \{(a,b) \in R_a : a,b \in B\}$. Is $R_b$ an equivalence relation on $B$?
I know that for an equivalence relation I need the relation to be reflexive, symmetric and transitive.
To me this seems too simple, and usually when I think of a HW question that way I am wrong. My reasoning is that if $(a,a) \in A$ then $(a,a) \in B$ because $B \subseteq A$, therefore $R_b$ is reflexive. Then similarly the symmetric and transitive properties can be shown.
AI: Your reasoning is not quite right. In order to show that $R_b$ is an equivalence relation on $B,$ you must show the following:
For every $x$ in $B,$ $(x,x)$ is in $R_b$.
If $x,y$ are in $B$ and $(x,y)$ is in $R_b,$ then $(y,x)$ is in $R_b$.
If $x,y,z$ are in $B$ and $(x,y),(y,z)$ are in $R_b,$ then $(x,z)$ is in $R_b$.
It is fairly straightforward to show each of these using the definition of $R_b,$ since $R_a$ is an equivalence relation on $A$ and $B\subseteq A$. Let me prove the second property as an example, and leave the other two to you.
Suppose $x,y$ are in $B$ and $(x,y)$ is in $R_b$. Since $B\subseteq A$ and $x,y$ are in $B,$ then $x,y$ are in $A$. Since $(x,y)$ is in $R_b,$ then by definition of $R_b,$ we have $(x,y)$ is in $R_a.$ Since $x,y$ are in $A$ with $(x,y)$ in $R_a,$ and $R_a$ is an equivalence relation on $A,$ then $(y,x)$ is in $R_a$ by symmetry of $R_a$ on $A$. Since $y,x$ are in $B$ and $(y,x)$ is in $R_a,$ then by definition of $R_b,$ we have that $(y,x)$ is in $R_b,$ as desired.
Note that no mention was made of ordered pairs in $A$ or $B$. (1) There's no reason to suspect that there are any such things. (2) Even if there are ordered pairs in $A$ or $B$, that is not connected to ordered pairs in $R_a$ or $R_b,$ so has nothing to do with the problem at hand. (3) Even if there is some ordered pair in $A,$ we can't conclude that that ordered pair is in $B$ ($B$ is the subset here).
|
H: Upper bound for the quotient of gamma functions?
I am looking for an upper bound for
$$ \frac{\Gamma(x+\beta)}{\Gamma(x)},\,\,\,\beta>0.$$
In this question it was shown that
$$ \frac{\Gamma(x+\beta)}{\Gamma(x)} \approx x^\beta. $$
Then, I believe that there must be some sort of polynomial upper bound but I have failed to come up with one. This is true for the case when $\beta$ is an integer. Any suggestion would be appreciated.
AI: If $\beta$ is an integer then from $\Gamma(z+1) = z\Gamma(z)$ we have
$$
\frac{\Gamma(x+\beta)}{\Gamma(x)} = \prod_{k=0}^{\beta-1} (x+k).
$$
Otherwise we can use the monotonicity of $\Gamma$ to obtain the crude bound
$$
\frac{\Gamma(x+\beta)}{\Gamma(x)} = \frac{\Gamma(x + \beta - \lceil\beta\rceil)}{\Gamma(x)} \prod_{k=1}^{\lceil\beta\rceil} (x+\beta-k) \leq \prod_{k=1}^{\lceil\beta\rceil} (x+\beta-k)
$$
which holds for $x + \beta - \lceil\beta\rceil \geq x_0$, where $x_0 \approx 1.46163$ is the location of the minimum of the gamma function on $\mathbb{R}^+$.
|
H: equivalence relation and lexicographic order
This is a HW question
Let $A = \mathbb{Z}^+ \times \ \mathbb{Z}^ +$. Define $R$ on $A$ by $(x_1,x_2)R(y_1,y_2)$ iff $x_1+x_2=y_1+y_2$. Is $R$ an equivalence relation on A.
I don't think it is as simple as showing that reflexivity, symmetry and transitivity hold for $x_1+x_2=y_1+y_2$ on all of $\mathbb{Z}^+$, but rather something to do with lexicographic order (but I could be wrong).
I am mostly looking for a hint on how to proceed.
AI: Hint: (After you correct a probable typo in your definition,) the relation is of the type $aRb\iff f(a)=f(b)$, and any relation of this form inherits reflexivity, symmetry and transitivity from the corresponding properties of equality.
|
H: Combinatorics mmo problem
I've recently come across a problem that I came up with that is related to an MMO (World of Warcraft). Basically, let's say one player deals a random number between $100$ and $200$ of damage each strike and the opponent has $1000$ health. What would be the probability of killing my opponent in $5,6,\dots,10$ strikes? I've tried solving this problem, but there seem to be too many possibilities, which makes it take too much time to solve by hand. So I was wondering if anyone is able to solve this easily? Also, any sources on how to solve these kinds of problems would be appreciated.
AI: I don't know if there's a good way to calculate this explicitly, but it's not hard to get the computer to do it for you:
# (Perl)
use strict;
use warnings;

# $prob[$health][$strikes] is the probability of killing a monster
# with $health health in exactly $strikes strikes
my @prob;
$prob[0][0] = 1;
my $MAX_HEALTH = 1000;
my @possible_damage = (100 .. 200);   # equally likely damage values
my $denom = @possible_damage;         # their count, used as the denominator

for my $health (1 .. $MAX_HEALTH) {
    $prob[$health][0] = 0;
    for my $damage (@possible_damage) {
        if ($health <= $damage) {
            # this strike finishes the monster off
            $prob[$health][1] += 1/$denom;
        } else {
            # condition on the first strike dealing $damage
            for my $strikes (1 .. $#{$prob[$health-$damage]}+1) {
                $prob[$health][$strikes] +=
                    $prob[$health-$damage][$strikes-1] / $denom;
            }
        }
    }
}

for my $strikes (0 .. $#{$prob[$MAX_HEALTH]}) {
    my $prob = 100 * $prob[$MAX_HEALTH][$strikes];
    printf "%2d %5.2f\n", $strikes, $prob if $prob;
}
The output:
$$
\begin{array}{rr}
\text{strikes} & \text{probability }(\%)\\
\hline
5 & 0.00 \\
6 & 8.38 \\
7 & 65.59 \\
8 & 25.38 \\
9 & 0.65 \\
10 & 0.00 \\
\end{array}
$$
(Lines with "$0.00$" indicate a positive but negligible probability.)
This assumes that the probability of each damage value between 100 and 200 is distributed uniformly, which may not actually be the case.
|
H: Strange Partial Fractions Decomposition
I am trying to get from
$$\frac{z^7 + 1}{z^2(z^4+1)}$$
to
$$\frac{1}{z^2} + z - \frac{z+z^2}{1+z^4}.$$
The author did this by doing a partial fractions decomposition. I don't see how, however. If I compute the partial fractions decomposition, I first find the roots of the denominator, but that's not what's done here. What he does is something that he calls "partial" partial fractions decomposition.
Any help?
AI: This is another way of doing what is in Paul Garret's answer, avoiding computing more than we are looking for.
You get the polynomial part $z$ by doing long division of $z^7+1$ by $z^2(z^4+1)$.
You get the principal part corresponding to the factor $z^2$ in the denominator by computing the first two steps of long division of $z^7+1$ by $(z^4+1)$, but organizing the terms in increasing degrees before dividing. The first two coefficients you get are the coefficients of $\frac{A}{z^2}+\frac{B}{z}$, in that order (here $A=1$ and $B=0$). The other fraction you can get by subtracting $\frac{1}{z^2}+z$ from $\frac{z^7+1}{z^2(z^4+1)}$.
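As a check, recombining over the common denominator $z^2(z^4+1)$ gives
$$\frac{1}{z^2}+z-\frac{z+z^2}{1+z^4}=\frac{(z^4+1)+z^3(z^4+1)-z^2(z+z^2)}{z^2(z^4+1)}=\frac{z^7+1}{z^2(z^4+1)},$$
since the $z^3$ and $z^4$ terms cancel.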
|
H: Integral $ \dfrac { \int_0^{\pi/2} (\sin x)^{\sqrt 2 + 1} dx} { \int_0^{\pi/2} (\sin x)^{\sqrt 2 - 1} dx} $
I have this difficult integral to solve.
$$ \dfrac { \int_0^{\pi/2} (\sin x)^{\sqrt 2 + 1} dx} { \int_0^{\pi/2} (\sin x)^{\sqrt 2 - 1} dx} $$
Now my approach is this: split $(\sin x)^{\sqrt 2 + 1}$ and $(\sin x)^{\sqrt 2 - 1}$ as $(\sin x)^{\sqrt 2}.(\sin x)$ and $(\sin x)^{\sqrt 2 - 2}.(\sin x)$ respectively, and then apply parts. But that doesn't seem to lead anywhere. Hints please!
Edit:
This is what I did (showing just for the numerator)
$$ \int_0^{\pi/2} (\sin x)^{\sqrt 2 + 1} dx $$
$$ = \int_0^{\pi/2} (\sin x)^{\sqrt 2}.(\sin x) dx $$
$$ = (-\cos x)(\sin x)^{\sqrt 2}\Bigg|_0^{\pi/2} + \sqrt 2\int_0^{\pi/2}(\sin x)^{ \sqrt 2 - 1 }(\cos^2 x)\, dx $$
(taking $ v = \sin x $ and $ u = (\sin x)^{\sqrt 2} $ in the $ \int uv $ formula)
$$ = \sqrt 2\int_0^{\pi/2}\left( (\sin x)^{ \sqrt 2 - 1 } - (\sin x)^{ \sqrt 2 + 1 } \right) dx $$
Similarly for the denominator. This does give a reduction formula but then I don't see how to really use it for finding the answer.
AI: Let the top integral be $I$, and the bottom one $J$. To make typing simpler, let $a=\sqrt{2}$.
Integrate the top one by parts, letting $du=\sin x\,dx$ and $v=\sin^{a} x$. This is the standard way to get a reduction formula for $\int \sin^n x\,dx$.
So $dv=a\cos x\sin^{a-1}x\,dx$ and we can take $u=-\cos x$. Then
$$I=\left. -\cos x \sin^{a}x\right|_0^{\pi/2}+\int_0^{\pi/2}a\cos^2 x\sin^{a-1} x\,dx.$$
The first part dies at both ends. Rewrite $\cos^2 x$ as $1-\sin^2 x$. Then
$$I=aJ -aI.$$
Now we get
$$I=\frac{a}{a+1}J$$
and it's over.
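Explicitly, with $a=\sqrt2$ the requested ratio is
$$\frac IJ=\frac{\sqrt2}{\sqrt2+1}=\frac{\sqrt2(\sqrt2-1)}{(\sqrt2+1)(\sqrt2-1)}=2-\sqrt2\approx0.586.$$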
|
H: When are free modules extended
I am looking for help to understand the following:
Let $R$ be a commutative ring and $P$ a projective $R[x]$-module. If $P_{\mathfrak m}$ (the localization at the maximal ideal $\mathfrak m$ of $R$, i.e. with the elements of $R-{\mathfrak m}$ inverted) is a free $R_{\mathfrak m}[x]$-module, then $P_{\mathfrak m}$ is extended from $R_{\mathfrak m}$.
Recall that an $R[x]$-module $P$ is called extended from $R$ if $P\cong P'\otimes_R R[x]$ for some $R$-module $P'$.
Thank you!
AI: Unless I'm misunderstanding the question, this has nothing to do with localizations or projectivity of the original $P$. If $R$ is any ring and $P$ is a free $R[X]$-module, say $P\cong\bigoplus_{i\in I}R[X]$ as $R[X]$-modules for some set $I$, then, taking $P^\prime=\bigoplus_{i\in I}R$, $P^\prime\otimes_RR[X]\cong\bigoplus_{i\in I}R[X]\cong P$ as $R[X]$-modules.
|
H: partial orders homework
This is a HW question
Let $R_1$ be a partial order on a set $A_1$ and $R_2$ be a partial order on a set $A_2$. Define a relation $R_3$ on the set $A_3 = A_1 \times A_2 = \{(a_1,a_2) : a_1 \in A_1, a_2 \in A_2\}$ by $(x_1,x_2) R_3 (y_1,y_2)$ iff $x_1R_1y_1$ and $x_2R_2y_2$. Show that $R_3$ is a partial order on $A_3$.
I am looking for some hint on how to start. Does this have something to do with lexicographic order ? If so then do I need to be showing $a_1 = b_1$ and $a_2 R_2 b_2$ ?
AI: You just need to verify all the axioms of partial orders hold. Different books define them differently, but a standard definition would be a reflexive, antisymmetric, transitive relation. I'll prove one of them and you can do the rest as the proof is the same.
Reflexivity: Let $(a,b)\in A_1\times A_2$ then $aR_1a, bR_2b$ as $R_1,R_2$ are partial orders and hence reflexive. Thus by definition $(a,b)R_3(a,b)$.
|
H: Is it possible to pass functions into other functions in maths?
I wanna be frank with you guys and say my mathematical education was a bit... bleh, so I'm teaching myself a lot of stuff lately, a question that has come up for me: "Is it possible to pass functions into other functions?"
Like say I have a function $g$ that takes $2$ parameters $f$ and $a$, where $f$ is a function, and $a$ is any natural or real number (doesn't really matter in this case), I'd then use both of those later in $g$'s definition... The function passed into $g$ would then slightly alter $g$'s behavior, making $g$ more general by letting me pass other functions into it whenever I'd need it.
Is this doable at all? And would there even be any use for this in mathematics?
Note that I'm aware of how Higher Order Functions work in programming, I just haven't gotten the faintest clue if or how they're used in maths
AI: Yes, it's doable, and it's useful too.
Take for example the following 'indefinite integral function', which I'll denote by $I$. It takes three arguments:
A real number $a$
A real number $b$, with $a < b$
A real-valued function $f$ which is defined and bounded on the interval $a < x < b$
Then define
$$I(a,b,f) = \int_a^b f(x)\, dx$$
This is as good a function as any, and takes a function as an argument.
You can even have functions which both take in functions as arguments and spit out functions afterwards. For example, there is a function on the class of differentiable real functions defined by
$$\dfrac{d}{dx} : f \mapsto f'$$
It takes in a function $f$ and spits out its derivative.
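Since you know higher-order functions from programming, here is a sketch of how the derivative operator might look as code (a Perl sketch; note this is only a numerical central-difference approximation of the exact operator, and all names are made up):
use strict;
use warnings;

# D takes a code ref $f and returns a new code ref that
# approximates the derivative f' by a central difference.
sub D {
    my ($f) = @_;
    my $h = 1e-6;    # step size of the approximation
    return sub {
        my ($x) = @_;
        return ( $f->($x + $h) - $f->($x - $h) ) / (2 * $h);
    };
}

my $square  = sub { my ($x) = @_; $x * $x };   # f(x) = x^2
my $dsquare = D($square);                      # approximates x -> 2x
printf "%.4f\n", $dsquare->(3);                # prints roughly 6.0000
Here D both consumes and produces a function, exactly like $\dfrac{d}{dx}$ above.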
For the nitpickers: some of the conditions given above aren't necessary, such as the stipulation that $a<b$ or even that $f$ be real-valued or whatever. But hopefully this is a simple example that illustrates the use of a function of the kind the OP wants.
|
H: Derivative inequality for a twice continuously differentiable function.
This is a question from a past exam. I thought that this was easy, but found no way of solving it.
Let $f: \mathbb R\rightarrow\mathbb R$ be twice continuously differentiable with $f''(x)\gt0\ \forall x\in\mathbb R$. Then
(a) if $a\lt b$, then $f'(a)\lt\dfrac{f(b)-f(a)}{b-a}\lt f'(b)$.
(b) if $a\lt x\lt b$, then $f(x)\lt f(a)+\dfrac{f(b)-f(a)}{b-a}(x-a)$.
For $(a)$, I thought that the mean-value theorem suffices, but later I realised that if $\dfrac{f(b)-f(a)}{b-a}=f'(a)$, the strict inequality would fail. How can I rule this out?
For $(b)$, I also tried using mean-value theorem, but to no avail. Also, I tried rewriting it as $\dfrac{f(x)-f(a)}{x-a}\lt\dfrac{f(b)-f(a)}{b-a}$. However, I got nothing from this expression either, though I think it has something to do with the solution.
Thanks for any help in advance.
AI: Hint: If $f$ is convex then $\displaystyle g:x \mapsto \frac{f(x)-f(a)}{x-a}$ is monotonically non-decreasing on $(a,+ \infty)$.
Let $a<x<y$. There exists $\lambda \in (0,1)$ such that $x= (1-\lambda)a+\lambda y$, hence $\displaystyle g(x)= \frac{1}{\lambda}\, \frac{f((1-\lambda)a+\lambda y)-f(a)}{y-a} \leq \frac{1}{\lambda}\,\frac{(1-\lambda)f(a)+\lambda f(y) -f(a)}{y-a} = \frac{f(y)-f(a)}{y-a} = g(y)$.
|
H: Norm map in Ideles
If $L/K$ is a finite extension, then there is a natural norm map from $\mathbf{A}^*_L$ to $\mathbf{A}^*_K$. This is a continuous homomorphism
$$\text{N}^L_K: \mathbf{A}^*_L\rightarrow \mathbf{A}^*_K$$
defined by the prescription that the $v$-component of $\text{N}^L_K$ is
$$\prod_{w\mid v}\text{N}^{L_w}_{K_v}x_w\;.$$
I'm trying to do an example with $L=\mathbb{Q}[i]$ and $K=\mathbb{Q}$. Consider
$$x=(1,3,1,1,\dots)\in\mathbf{A}^*_L$$
Then
$$\text{N}^L_K(x)=\left(\text{N}^{\mathbb{Q}[i]_{1+i}}_{\mathbb{Q}_2}1,\, \text{N}^{\mathbb{Q}[i]_{3}}_{\mathbb{Q}_3}3,\, \text{N}^{\mathbb{Q}[i]_{2+i}}_{\mathbb{Q}_5}1,\, \text{N}^{\mathbb{Q}[i]_{2-i}}_{\mathbb{Q}_5}1,\dots\right)=(1,\text{N}^{\mathbb{Q}[i]_{3}}_{\mathbb{Q}_3}3,1,1,\dots)$$
So what is $\text{N}^{\mathbb{Q}[i]_{3}}_{\mathbb{Q}_3}3$? Is it $9$? I thought it maybe $9$ because, to me, the extension $\mathbb{Q}[i]_{3}/\mathbb{Q}_{3}$ is a quadratic extension, so generalising the definition of $N^L_K$ for number fields to be the determinant of the multiplication matrix, we should get $9$.
EDIT (generalisation):
In a more general setting, if we have $\mathfrak{p}$ a prime in $K$ which is inert in $L$, and we consider
$$x=(\dots,1,1,\pi,1,1,\dots)\in\mathbf{A}^*_L$$
with $\pi$ the uniformiser of $\mathfrak{p}$ in the place induced by $\mathfrak{p}$. So we have
$$\text{N}^L_K(x)=(\dots,1,1,\text{N}^{L_{\mathfrak{p}}}_{K_{\mathfrak{p}}}\pi,1,1,\dots)$$
Now, what is $\text{N}^{L_{\mathfrak{p}}}_{K_{\mathfrak{p}}}\pi$? Is it $\text{N}^L_K\mathfrak{p}$?
Thanks for the help!
AI: By Dedekind's theorem on the factorisation of primes, and since $x^2+1\pmod3$ is irreducible, we find that $3\mathbb Z[i]$ is a prime ideal. This tells us that the inertia degree of $3$ is $2$. Hence the local field extension has degree $ef=2$, where $e$ is the ramification index and $f$ the inertia degree. Consequently, yes, that norm is indeed $9$.
Edit
I cannot understand the reason for taking a uniformiser $\pi$, but, in any case, I think the following suffices.
If by $\pi$ you refer to a uniformiser in $K_\mathfrak p$, then certainly $\text{N}_{K_\mathfrak p}^{L_\mathfrak p}(\pi)=\pi^n$, where $n$ is the degree $[L_\mathfrak p:K_\mathfrak p]$. Since $\mathfrak p$ is inert, we find that $n=[L:K]$.
Hope this helps.
|
H: find scalar product of vectors in rectangular
let us consider the following problem and picture:
we have a rectangle $ABCD$ with $AB=3$ and $BC=5$; $F$ and $E$ are midpoints of sides of the rectangle ($F$ of $AB$ and $E$ of $AD$, per the picture), and we should find the scalar product $\vec{EF}\cdot\vec{EC}$.
my question is: can I locate the point $A$ arbitrarily, i.e. can I choose coordinates for the points arbitrarily so long as they satisfy the length properties $AB=3$ and $BC=5$? Clearly it seems that the angle $FEC$ is not a right angle, but how can I prove it?
AI: By Chasles relation we have
$$\vec{EF}\cdot\vec{EC}=(\vec{EA}+\vec{AF})\cdot(\vec{ED}+\vec{DC})=\vec{EA}\cdot\vec{ED}+\vec{AF}\cdot\vec{DC}\\
=-\frac{1}{4}(BC)^2+\frac{1}{2}(AB)^2=-\frac{7}{4}$$
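Note that this computation uses only the side lengths and the perpendicularity of the sides, so the result does not depend on where you place the point $A$: any coordinates with $AB=3$ and $BC=5$ give the same value. And since $\vec{EF}\cdot\vec{EC}=-\frac74\ne0$, the angle $FEC$ is indeed not a right angle.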
|
H: Fundamental Theorem of Calculus Q
The FTC is often written as:
If $F(x) = \int_a^x f(t)\,\mathrm{d}t$ then $F'(x) = f(x)$.
Is it not also true that:
If $F(x) = \int f(x)\,\mathrm{d}x$ then $F'(x) = f(x)$?
What is the difference? Is it just the same thing written in a different way?
AI: The second case is a definition of the anti-derivative.
The first one is a relationship between geometric interpretation (typically area under the curve $f$) to the anti-derivative defined in the second case. This relationship is known as the Fundamental Theorem of Calculus.
|
H: Fibres in a power series ring versus fibres in a polynomial ring (a simple question)
Let $A$ be a commutative ring and $p \in \operatorname{Spec} A$.
In Matsumura's Commutative Ring Theory p. 118 it is mentioned that even though $A[x] \otimes \kappa(p) = \kappa(p)[x]$, it is not true in general that $A[[ x ]] \otimes \kappa(p) = \kappa(p)[[x]]$.
I tried to see why the equality fails for the power series ring, and it seems to me that the problem is this: an element of $\kappa(p)[[x]]$ has, in general, infinitely many denominators, so we cannot write it in the form "some element of $(A/p)[[x]]$ divided by a single element of $A-p$".
Am I right, is that the reason? Also, can this fact be expressed more rigorously than I am expressing it?
AI: Well, maybe a better approach from your point of view is the following: is $A[[x]]\otimes A/I$ (canonically) isomorphic to $(A/I)[[x]]$? This holds when $IA[[x]]=I[[x]]$, and now one can take a look here.
|
H: Techniques of counting
Suppose you need to answer 10 out of 13 questions at an examination. How many choices do you have if you must answer the first two questions?
I think out of 13, 2 questions must be answered, so 11 remain.
AI: $ 11 \choose 8 $+ $ 11 \choose 9 $+ $ 11 \choose 10$+ $11 \choose 11$ as while you need to answer 10, there isn't anything stated about answering 11, 12, or all 13 questions and thus you need to cover the cases where one is answering 8 or more of the remaining questions.
To compute those values, I can take the other combinations and compute as follows:
$ 11\choose3 $+$ 11\choose2$+$11\choose1$+$11\choose0$ = $\frac{11\cdot10\cdot9}{2\cdot3}+\frac{11\cdot10}{2}+11+1= 165+55+11+1=232$
Thus, there are 232 possible choices if you are answering at least 10 questions on the exam.
If you are answering exactly 10 questions, then the answer 165 as the other cases can be discarded.
To work this out a bit more to show why 11 is simply too low, consider which 3 questions are being skipped in the simple 10 questions being answered case. Let's take a few examples if we just write out which questions are being skipped:
{3,4,5}, {3,4,6}, {3,4,7}, {3,4,8}, {3,4,9}, {3,4,10}, {3,4,11}, {3,4,12}, {3,4,13},
{3,5,6}, {3,5,7}, {3,5,8}, {3,5,9}, {3,5,10}, {3,5,11}, {3,5,12}, {3,5,13}
Would be 17 possible combinations where I'm assuming the third question is skipped and at least one of the next 2 is skipped. This can be computed by taking a few combinations and putting them together:
The case where 3, 4, and 5 are all skipped is one case to put aside.
Otherwise, one of 4, 5 is skipped together with one of the last 8 questions, which gives $2\times8=16$ possibilities.
Total: 17 possibilities there and that is just scratching the surface here to my mind.
What may help is to re-frame the question. Instead of having 13 questions with 10 to answer, the first 2 questions are answered and that means there are now 11 questions left where 8 are to be answered. How many ways could you choose 8 elements from a set of 11?
|
H: find symmetric equation along line $y=-x$
generally I know that to find the equation of the curve symmetric to a given one about the line $y=x$, we should exchange $x$ and $y$ and solve back, but what about $y=-x$? Should I repeat the same procedure, but take $-x$ instead of $x$? Let us consider the following problem:
consider equation of circle
$(x+3)^2+(y-3)^2=9$
find it's symmetric circle equation along line $y=-x$
so should I put $-x$ instead of $x$? Thanks in advance.
AI: When we reflect a point $(x_0,y_0)$ across the line $y=-x,$ our new $x$-coordinate will be $-y_0$ and our new $y$- coordinate will be $-x_0.$ Consequently, our general transformation in this case is to make $x\mapsto-y,$ $y\mapsto-x$.
What about the more general case of reflecting about a line $\ell$ through the origin, though? Well, first, see where the point $(1,0)$ is reflected to--say $(x_1,y_1)$--and where the point $(0,1)$ is reflected to--say $(x_2,y_2).$ Then in general, a point $(x,y)$ will be reflected about $\ell$ to $$(x_1x+x_2y,y_1x+y_2y).$$ How can we see this, though? It comes down to the fact that a reflection about a line through the origin is a linear transformation, and looking at it in terms of matrices shows us that $$\left[\begin{array}{c}x\\ y\end{array}\right]\mapsto \left[\begin{array}{cc}x_1 & x_2\\ y_1 & y_2\end{array}\right] \left[\begin{array}{c}x\\ y\end{array}\right].$$ That may be beyond what you'll encounter anytime soon. Consider it a sneak preview.
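Applied to your circle: substituting $x\mapsto-y$ and $y\mapsto-x$ in $(x+3)^2+(y-3)^2=9$ gives $(-y+3)^2+(-x-3)^2=9$, i.e. $(x+3)^2+(y-3)^2=9$ again. The reflected circle coincides with the original, as it should: the centre $(-3,3)$ lies on the line $y=-x$.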
|
H: What log law justifies $(\lg n)^{\lg n} = n^{\lg \lg n}$?
I was reading the solution to 3.2-4 on this blog (cropped image pasted here)
notice the person says $\frac{(\lg n)^{\lg n}}{n} = \frac{n^{\lg \lg n}}{n}$
What log law justifies that?
Also, is it correct that it's an error where they simplify $e^{\lg n}$ to $n$ in the denominator?
AI: It's a chain of log simplifications. I assume by $\lg$ the author means natural logarithm, by which $e^{\lg n} = n$ as a defining property (to answer your second question).
$$(\lg n)^{\lg n} = \left(e^{(\lg \lg n)}\right)^{\lg n} = e^{(\lg \lg n)(\lg n)}= e^{(\lg n)(\lg \lg n)} = (e^{\lg n})^{(\lg \lg n)} = n^{\lg \lg n}$$
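The same chain works verbatim for any fixed base $b$ (in many algorithms texts $\lg$ means $\log_2$): replace $e$ by $b$ throughout, since the only property used is $b^{\log_b n}=n$. In particular, simplifying $b^{\lg n}$ to $n$ in the denominator is not an error, as long as the base of the exponential matches the base of the logarithm.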
|
H: black and white balls in the box
A box contains $731$ black balls and $2000$ white balls. The following process is to be repeated as long as possible.
(1) arbitrarily select two balls from the box. If they are of the same color, throw them out and put a black ball into the box. (We have sufficient black balls for this).
(2) if they are of different colors, place the white ball back into the box and throw the black ball away.
What will happen at last? Will the process stop with a single black ball in the box or a single white ball in the box or with an empty box?
I am unable to decide how to start and in which direction. Should we apply probability or what?
AI: First, the number of balls in the box decreases by one at each step. Suppose $B(t)$ and $W(t)$ are the number of black and white balls present after $t$ steps. Since we start with $B(0)+W(0)=2731$ and $B(t+1)+W(t+1)=B(t)+W(t)-1$, we have that $B(2730)+W(2730)=1$. At this point we can no longer continue the process. So at the end, there is either a single white ball or a single black ball left.
Notice that the number of white balls present at any time is even. For instance, suppose we have just done step $t$. If we choose two black balls, then $B(t+1)=B(t)-1$ and $W(t+1)=W(t)$. If we choose a white ball and a black ball, then $B(t+1)=B(t)-1$ and $W(t+1)=W(t)$. If we choose two white balls, then $B(t+1)=B(t)+1$ and $W(t+1)=W(t)-2$. Necessarily, since $W(0)$ is even, it stays even throughout the process.
Hence $W(2730)=0$ and $B(2730)=1$. Even though what happens throughout the process is random, by parity, the process must end with only one black ball left.
|
H: Is sum and product of $k$ natural numbers always different from that of some other $k$ natural numbers?
Suppose $k$ is, say, $3$. Let $A,B$ be sets of $3$ natural numbers with $A$ not equal to $B$. Can the sum and product of the elements of $A$ be the same as those of $B$?
If the numbers are primes then the conditions should hold. But I couldn't prove it if they are not primes. Is there a proof or counter example?
EDIT:
What if every number is greater than 1?
AI: The solutions of
$$X^3+aX^2+bX+c=0$$
have sum $-a$ and product $-c$. Since $b$ can still vary, your conjecture is definitely not true with complex or real numbers instead of naturals. On the other hand, the same argument with a quadratic polynomial shows that the case with $2$ numbers is true even if we allow real or complex numbers.
For naturals start with an arbitrary almost-solution (e.g. found by pushing around prime factors) such as $1\cdot 8\cdot 9=3\cdot 4\cdot 6$ but $1+8+9\ne 3+4+6$. Then try to find $r,s$ such that $1r+8+9s=3r+4+6s$, i.e. $3s=2r-4$, say $s=2, r=5$.
This gives us a solution $(1r,8,9s)$ and $(3r,4,6s)$, i.e.
$$ 5\cdot 8\cdot 18 = 15\cdot 4\cdot 12\qquad \text{and}\qquad 5+8+18=15+4+12.$$
|
H: Sequence of Polynomials and Weierstrass's Approximation Theorem
I've been stuck on the following problem for some time: Let $f$ be a continuous function on $[a,b]$. Show that there exists a sequence $(p_n)$ of polynomials such that $p_n \to f$ uniformly on $[a,b]$ and such that $p_n(a) = f(a)$ for all $n$.
Since $f$ is a continuous function on $[a,b]$, I know by the Weierstrass Approximation Theorem that there exists a sequence $(p_n)$ of polynomials such that $p_n \to f$ uniformly on $[a,b]$.
But I had trouble trying to show that $p_n(a) = f(a)$ for all $n$.
This is what I tried:
Direct Approach: Since I know $p_n \to f$ uniformly on $[a,b]$, by definition $(\forall \varepsilon > 0)(\exists N_0 \in \mathbb{N})$ such that $(\forall n > N_0)(\forall x \in [a,b])$ we have $|p_n(x) - f(x)| < \varepsilon$. Then for $n > N_0$ if I plug in $x=a$, I would be done. For the case $n \leq N_0$, I thought about bounding $|p_n(x) - f(x)|$ but got stuck.
Proof by Contradiction: Suppose that there exists an $n$ such that $p_n(a) \neq f(a)$, say $f(a) > p_n(a)$, and let $\varepsilon = f(a) - p_n(a)$. I tried using the definition of $p_n \to f$ uniformly on $[a,b]$, but didn't see how to get a contradiction.
I've thought about the contrapositive as well but didn't think that approach would be successful.
Are all of my approaches dead ends? Any hints would be appreciated.
AI: Consider a sequence of polynomials $p_n$ which converges uniformly to $f$. Then define $q_n(x)=p_n(x)+f(a)-p_n(a)$. Then $q_n(a)=f(a)$, $q_n$ is obviously a polynomial and
$$ |q_n(x)-f(x)| \leq |p_n(x)-f(x)|+|f(a)-p_n(a)|$$
which proves that $q_n$ converges uniformly to $f$.
|
H: Are there any function which's derivate is the scaled one of the original?( $f'(x) = f(cx)$ )
which function could satisfy the following, for a certain $c\ne1$
$f'(x) = f(cx)$
...beyond the trivial $f=0$
I've been thinking about it for a while.
For the simpler case
$f'(x)= f(x+c)$
I've found $e^{xe^v}$, where $v$ is a solution of $c=\frac{v}{e^v}$.
AI: We get that $$f^{(k)}(x)=c^{\frac{k(k-1)}{2}}f(c^kx).$$
From this we get that $$f(x)=\sum_{k=0}^{\infty}\frac{c^{\frac{k(k-1)}{2}}}{k!}f(0)x^k.$$
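To justify the first identity (assuming a solution analytic at $0$), induct on $k$: differentiating $f^{(k)}(x)=c^{k(k-1)/2}f(c^kx)$ gives $f^{(k+1)}(x)=c^{k(k-1)/2}\cdot c^k f'(c^kx)=c^{k(k+1)/2}f(c^{k+1}x)$. Note also that the ratio of successive coefficients in the series is $c^k/(k+1)$, so the radius of convergence is infinite when $|c|\le1$ but zero when $|c|>1$; nontrivial solutions of this form therefore exist only for $|c|\le1$.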
|
H: Conditions for multivariable differentiability
For a nonlinear system of equations $x' = f(x)$, why is the following condition equivalent to $f$ being differentiable at $x_0$?
Condition:
\begin{equation}
f(x) = A(x-x_0) + g(x)
\end{equation}
where $A$ is an $n \times n$ constant matrix and $g$ satisfies:
\begin{equation}
\lim_{x\to x_0} \dfrac{||g(x)||}{||x-x_0||} = 0
\end{equation}
Maybe I am confused because I do not understand the term differentiable for a multivariable function.
AI: "Differentiable" in a multivariable context means the same thing as it does in a single-variable context: namely, that the function can, locally, be accurately approximated by a linear function.
In this case, the linear function is $x\mapsto A(x-x_0)$; the $g(x)$ is simply the error in that approximation; and the condition
$$
\lim_{x\rightarrow x_0}\frac{\|g(x)\|}{\|x-x_0\|}=0
$$
just means that the error in the linear approximation goes to 0 "fast enough".
|
H: Set partitions of pairs
Suppose I am given a set of $n$ pairs of items (so I have $2n$ items in the set). I wish to partition the set into 2 disjoint sets such that at least one pair of items has a member in each set. I want to know how many 2-partitions there are of the $2n$ items where the smallest set has $k$ elements.
For example, if I have two pairs of items $\{a_1,a_2,b_1,b_2\}$ I can partition them into the following subsets:
$\{a_1,a_2,b_1\}$$\{b_2\}$;
$\{a_1,a_2,b_2\}$$\{b_1\}$;
$\{a_1,b_1,b_2\}$$\{a_2\}$;
$\{a_2,b_1,b_2\}$$\{a_1\}$;
$\{a_1,b_1\}$$\{a_2,b_2\}$;
$\{a_1,b_2\}$$\{a_2,b_1\}$;
So there are 4 partitions with the smallest set of size 1, and 2 partitions with the smallest set of size 2.
I have exhaustively computed the numbers for 1, 2, and 3 pairs, but I'm looking for a general formula. I know if $k=1$ there are $2n$ partitions, but I haven't found a closed form for a general solution, or even a recursive formula.
edit: I can show that if $k=2$ there are $4n-4$ partitions.
AI: First count the total number of partitions where the smallest set has $k$ elements, and then count the bad partitions, in which all pairs are together. Subtract bad from total.
The situation is a little different for $k=n$ than for $k\lt n$. This is because in the case $k=n$ there are $\frac{1}{2}\binom{2n}{n}$ partitions, and for $k\lt n$ there are $\binom{2n}{k}$.
Let us deal with $k\lt n$. If $k$ is odd, there is no bad partition. At least one couple must be separated, else our $k$-set would be made up of couples, so would have an even number of elements.
Now we deal with the case $k$ even, say $k=2l$. To make a bad partition, we choose $l$ couples. This can be done in $\binom{n}{l}$ ways.
A similar calculation takes care of the case $k=n$.
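As a sanity check against the $n=2$ enumeration in the question: for $k=1$ the total is $\binom41=4$ with no bad partitions ($k$ odd), giving $4$; for $k=n=2$ the total is $\frac12\binom42=3$ with exactly one bad partition, $\{a_1,a_2\}\{b_1,b_2\}$, giving $2$. Both match the list above.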
|
H: Is there a result on the behaviour of power series with positive integer coefficients on their boundary?
I have a power series whose coefficients are all positive integers and whose radius of convergence $r$ is $<1$ and I wish to prove that it has a pole at $r$, or at least an infinite radial limit. Is there a general result that could help in this situation or should I look for an ad-hoc proof?
AI: I can give you the following proof of what is an extension of Abel's Theorem.
THM Suppose $\alpha_n$ is a sequence of real numbers such that $\sum_{n\geqslant 0}\alpha_n$ diverges to $+\infty$, and such that its power series converges for $|x|<1$. Then $$\lim_{x\to 1^{-}}\sum_{n\geqslant 0}\alpha_nx^n=+\infty$$
Proof Let $M>0$ be given. Note that $$\frac{1}{1-x}f(x)=\sum_{n\geqslant 0}\sum_{k=0}^n\alpha_k x^n$$
By hypothesis this is true for $|x|<1$. Also, there exists $N>0$ such that $n>N$ implies $\sum_{k=0}^n\alpha_k>M$. Then we have that
\begin{align*}
\frac{1}{1 - x}f(x) &= \sum\limits_{n \geqslant 0} \sum\limits_{k = 0}^n \alpha _k x^n \\
&= \sum\limits_{n = 0}^N \sum\limits_{k = 0}^n \alpha _k x^n + \sum\limits_{n > N} \sum\limits_{k = 0}^n \alpha _k x^n \\
&> \sum\limits_{n = 0}^N \sum\limits_{k = 0}^n \alpha _k x^n + M\sum\limits_{n > N} x^n \\
&> M x^{N + 1}\frac{1}{1 - x}
\end{align*}
It follows that $f(x) > M{x^{N + 1}}$ so $$\mathop {\lim \inf }\limits_{x \to {1^ - }} f(x) \geqslant M$$ for each $M>0$, whence it must be the case
$$\mathop {\lim }\limits_{x \to {1^ - }} f(x) = + \infty $$
ADD Note that the last inequality follows from the fact that we can assume the partial sums are all positive.
|
H: Simple Question on Calculating percentage
All -
It has been a while since I last used math. This is a very simple problem: how do I calculate the percentage in the following?
Suppose I have 30 apples, and out of the 30 apples, 10 have become rotten. What is the percentage of apples that have become rotten?
Thanks.
AI: Simply divide $10$ by $30$ and multiply by $100$ to get the percentage.
$$\frac{10\;\text{rotten apples}}{30\;\text{apples in all}} \times 100\% = \text{percentage of rotten apples}$$
That gives you that $\;\dfrac 13 \times 100\% = 33.\overline{33}\%\;$ of the apples are rotten.
|
H: Sum of geometric series $\frac{7}{8} - \frac{49}{64} + \frac{343}{512}$
$$\frac{7}{8} - \frac{49}{64} + \frac{343}{512}...$$
I make the guess that this is supposed to be represented as
$$(-1)^{n+1} \left(\frac{7}{8}\right)^n$$
Now I use the formula my book gives
$$\sum_{n=M}^\infty cr^n = \frac{cr^M}{1 - r}$$
$$ \sum_{n=1}^\infty (-1)^{n+1} \left(\frac{7}{8}\right)^n$$
$$\frac{(-1)\cdot\frac{7}{8}}{1 - \frac{7}{8}}$$
Why is this wrong? I copied the formula exactly and my representation of the series is correct.
AI: Define
$$a=\frac78\;,\;\;q=-\frac78$$
Thus
$$|q|<1\implies \sum_{n=0}^\infty aq^n=\frac a{1-q}=\frac{\frac78}{1+\frac78}=\frac7{15}$$
|
H: What does $\frac{\partial}{\partial x}(\frac{\partial f}{\partial u})$ mean when $f(x,t), u=x+ct, v=x-ct $?
I'm trying to transform the wave equation $\frac{\partial^2f}{\partial t^2}-c^2\frac{\partial^2f}{\partial x^2}=0$ using the substitution:
$$u=x+ct,\qquad v=x-ct$$
and using the chain rule for the partial derivatives $\frac{\partial f}{\partial x}$ and$\frac{\partial f}{\partial t}$ I get:
$$
\frac{\partial f}{\partial x}=\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}=\frac{\partial f}{\partial u}\cdot1+\frac{\partial f}{\partial v}\cdot1
$$
and similarly:
$$
\frac{\partial f}{\partial t}=\frac{\partial f}{\partial u}\cdot c+\frac{\partial f}{\partial v}\cdot(-c)
$$
To express the second partial derivative $\frac{\partial^2f}{\partial x^2}$:
$$
\frac{\partial^2f}{\partial x^2}=\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial x}\right)=\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial u}\right)+\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial v}\right)
$$
Now the notation says I should take the derivative with respect to $x$ of $\frac{\partial f}{\partial u}$, but what then is $\frac{\partial f}{\partial u}$? If $f$ is a function of $x$ and $t$, which are both expressed in terms of both $u$ and $v$, how does the chain rule apply here (as compared to the first step above, which I can handle)?
Thanks!
Alexander
AI: We have
$$f(x,t)=g(u,v)=g(x+ct,x-ct)$$
so
$$\frac{\partial f}{\partial x}=\frac{\partial g}{\partial u}+\frac{\partial g}{\partial v}\quad\text{and}\quad \frac{\partial^2 f}{\partial x^2}=\frac{\partial^2 g}{\partial u^2}+2\frac{\partial^2 g}{\partial u\partial v}+\frac{\partial^2 g}{\partial v^2}$$
and
$$\frac{\partial f}{\partial t}=c\frac{\partial g}{\partial u}-c\frac{\partial g}{\partial v}\quad\text{and}\quad \frac{\partial^2 f}{\partial t^2}=c^2\frac{\partial^2 g}{\partial u^2}-2c^2\frac{\partial^2 g}{\partial u\partial v}+c^2\frac{\partial^2 g}{\partial v^2}$$
hence the wave equation
$$\frac{\partial^2f}{\partial t^2}-c^2\frac{\partial^2f}{\partial x^2}=0$$
becomes
$$\frac{\partial^2 g}{\partial u\partial v}=0$$
so
$$g(u,v)=\phi(u)+\psi(v)$$
and finally we have
$$f(x,t)=\phi(x+ct)+\psi(x-ct)$$
|
H: What does the following symbol mean? (direct sum? o-plus? -- subject: matrix theory)
In equation 11 of this paper, the author uses a symbol that is a cross in a circle. I believe I have seen it referred to as a direct sum, but I am not completely sure what that is.
$$\bigoplus$$
AI: This is an example of, what I call, a two-pass paragraph (I try to avoid these when writing). It's a situation in which you need to keep reading to understand the previous material.
If you keep reading, you'll find:
"In Eq. (11) the symbol $\oplus$ means matrix direct sum."
So $\phi(y)$ is the $4 \times 4$ diagonal matrix with the diagonal entries as listed in (11).
|
H: Prove a limit using the epsilon-delta definition
$\lim_{x\to-2}\frac{4x-1}{x+1} = 9$
Given $\epsilon>0$, $$(\exists \delta(\epsilon)>0) \left( |x+2|<\delta \implies \left|{\frac{4x-1}{x+1} - 9}\right| < \epsilon \right)$$
So, if $|x+2|<\delta$, then:
$$\left|{\frac{4x-1}{x+1} - 9}\right| = \left|{\frac{4x-1-9(x+1)}{x+1}}\right| = \left|{\frac{-5x-10}{x+1}}\right| = \left|{\frac{-1 (5x+10)}{x+1}}\right| = {\frac{|-1||5x+10|}{|x+1|}} = {\frac{5|x+2|}{|x+1|}}$$
I've found the expression $5|x+2|$ in the numerator:
$|x+2| < \delta = 1/2$
$-1/2<x+2<1/2$
$-5/2<x<-3/2$
$-3/2<x+1<-1/2 \implies$ ?
A colleague told me to leave both members of this inequality as positives. Why do we need both sides to be positives?
AI: You're right to want to bound values of $x$ away from $-1$, and to do so by $\frac{1}{2}$ is fine. The inequality you obtain by requiring $x$ to be within $\frac{1}{2}$ of $-2$ is
$$-\frac{5}{2}<x<-\frac{3}{2}$$
which you have written. Then, as you've written, we have
$$-\frac{3}{2}<x+1<-\frac{1}{2}$$
Notice that the above inequality implies that $|x+1|>\frac{1}{2}.$ It follows that
$$\frac{5|x+2|}{|x+1|}<10|x+2|$$
So let $\delta(\epsilon)<\min\{\frac{1}{2},\frac{\epsilon}{10}\}$.
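As for your colleague's remark: the reason to keep track of the signs is that from $-\frac32<x+1<-\frac12$ you may conclude $|x+1|>\frac12$ only because the whole interval lies on one side of $0$. An interval containing $0$ would give no positive lower bound on $|x+1|$, and it is precisely such a lower bound that lets you bound the fraction from above.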
|
H: If every nonzero element of $R$ is either a unit or a zero divisor then $R$ contains only finitely many ideals?
Let $R$ be a commutative ring with $1$. If $R$ contains only finitely many ideals, then every nonzero element of $R$ is either a unit or a zero divisor; I know this is true.
How about the converse? i.e. If every nonzero element of $R$ is either a unit or a zero divisor, then $R$ contains only finitely many ideals.
Is it true? Can you give me an example of a commutative ring with $1$ that has infinitely many zero divisors? Thanks.
AI: The converse is not true. Consider the ring $R=\mathbb{C}[x,y]/(x^2,y^2,xy)$. Notice that every element has a representative of the form $a+bx+cy$. If $a=0$, this is a zero-divisor. If $a\ne 0$, then $a^{-1}-a^{-2}bx-a^{-2}cy$ is an inverse. So this ring satisfies the hypotheses of the question. Can you exhibit infinitely many ideals of this ring?
$R$ is also an example of an infinite commutative ring with infinitely many zero divisors.
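(One natural family to try: the ideals $(x+\lambda y)$ for $\lambda\in\mathbb C$. Since every degree-two product vanishes in $R$, such an ideal is just the line $\mathbb C(x+\lambda y)$, so distinct $\lambda$ give distinct ideals.)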
|
H: Avoiding primefactors in reducible polynomials
Take distinct pairs $(c_i,d_i) \in \mathbb Z^2$, the entries being coprime. Put $f(x) = \prod_{i=1}^k (c_i x + d_i)$. Let $\mathbb P$ denote the rational prime numbers. Which conditions (if any) need to be imposed on $f$ in order to satisfy
$$\forall \textrm{ finite sets } S \subseteq \mathbb P: \exists n \in \mathbb N : \forall p \in S :\gcd(f(n),p)=1$$
If $k=1$ Dirichlet's theorem on primes in arithmetic progressions does the job.
AI: One need only test the condition for single primes because $f(x)\pmod p$ depends only on $x\pmod p$; then multiple primes can be combined using the Chinese remainder theorem.
Let $p$ be a prime.
A factor $c_ix+d_i$ is a multiple of $p$ for at most one residue mod $p$ and we can ignore the factors with $c_i\equiv 0\pmod p$.
From each remaining $i$, we get the single residue $-c_i^{-1}d_i\pmod p$ at which the factor $c_ix+d_i$ vanishes. If and only if these residues do not cover all of $\mathbb Z/p\mathbb Z$, a solution of $f(x)\not\equiv 0\pmod p$ exists. This is trivially the case for all $p>k$, but needs to be tested for all smaller $p$. A simple criterion is that $\gcd(f(0),f(1),\ldots,f(k-1))$ must have only prime divisors $> k$.
For example, with $f(x)=(x+5)(2x+3)(3x-1) $ we have $\gcd(f(0),f(1),f(2))=5$, which is ok.
|
H: Surjectivity of a restriction map on distributions
I'm reading Kudla's exposition of Tate's thesis in the book "An Introduction to the Langlands Program" and have gotten stuck on some analytic details. Here's the setup: let $F$ be $\mathbb{R}$ or $\mathbb{C}$, so that there is an inclusion $C^{\infty}_c(F^{\times}) \subset \mathcal{S}(F)$, where $\mathcal{S}(F)$ is the Schwartz space of rapidly decreasing functions. Kudla claims that this induces a surjection $\mathcal{S}(F)^{\vee} \to C^{\infty}_c(F^{\times})^{\vee}$ on the respective spaces of distributions, which is not clear to me.
Can we perhaps invoke a form of the Hahn-Banach theorem? But $C^{\infty}_c(F^{\times})$ doesn't have the subspace topology coming from $\mathcal{S}(F)$, does it?
AI: Your misgivings are justified: the test functions do not have the topology from Schwartz functions, and not every distribution (dual of test fcns) is tempered (dual of Schwartz). There is a natural continuous injection because test functions are dense in Schwartz.
Edit: to clarify, and compare to non-archimedean case: tempered distributions do not surject to all distributions, because many/typical distributions are not tempered ("at infinity"). In the non-archimedean case, these two things are identical. When the test functions are required to be supported away from $0$, in the archimedean case the map of tempered to the dual has Dirac delta and derivatives in its kernel, but is still not surjective, because the non-temperedness is a condition at infinity. In the non-archimedean case, the support-away-from-$0$ only puts Dirac delta in the kernel of the map from tempered to the dual, because that's the only distribution supported at a point, in the non-archimedean case.
So the map from tempered distributions to the dual of test functions supported away from $0$ is not surjective, no. E.g., $\sum_{1\le n\in \mathbb Z} e^n\cdot \delta_n$ is a distribution that is not tempered. And the kernel of linear combinations of Dirac delta and derivatives at $0$ does not put this into "tempered", either.
One more time, in symbols: let $V$ be the (ind-finite) space of all linear combinations of Dirac delta and derivatives. Let $S'$ be tempered distributions. Let $X$ be test functions supported away from $0$, and $X'$ its dual. Then, sure, $X\subset S$ with dense image, but/and $S'\rightarrow X'$ factors through $S'/V$, but/and is not surjective. (Not even "modulo $V$").
But surely this mis-statement doesn't really matter to Kudla's argument.
|
H: algebraic manipulation question
$M_{z_n}(t)$ is a particular moment generating function, and it is given that $\lambda_n$ approaches $\infty$ as $n$ approaches $\infty$:
Could someone help me see how the above was derived?
AI: The big idea here is that
$$
e^x=1+x+\frac{x^2}{2}+O(x^3)\text{ as }x\rightarrow0.
$$
So, since $t/\sqrt{\lambda_n}\rightarrow0$, we have
$$
e^{t/\sqrt{\lambda_n}}=1+\frac{t}{\sqrt{\lambda_n}}+\frac{t^2}{2\lambda_n}+O\left(\frac{1}{\lambda_n^{3/2}}\right)\text{ as }n\rightarrow\infty,
$$
so that
$$
-t\sqrt{\lambda_n}+\lambda_n(e^{t/\sqrt{\lambda_n}}-1)=-t\sqrt{\lambda_n}+t\sqrt{\lambda_n}+\frac{t^2}{2}+O\left(\frac{1}{\lambda_n^{1/2}}\right)=\frac{t^2}{2}+O\left(\frac{1}{\lambda_n^{1/2}}\right).
$$
Since $\lambda_n\rightarrow\infty$, this last expression converges to $\frac{t^2}{2}$ as $n\rightarrow\infty$.
(We assume here, of course, that $t$ does not change with $n$.)
|
H: Question about implication
I have a question about something I don't understand.
$\alpha$, $\beta$, and $\gamma$ are statements.
if $\alpha\implies\beta\lor\gamma$, then is it necessary that $\alpha\implies\beta$ or $\alpha\implies\gamma$? The answer is "no", but I can't understand why.
if it's given that $\alpha\implies\beta\lor\gamma$ then it means that when $\alpha$ is true then $\beta\lor\gamma$ must also be true, which means that either $\beta$ is true or $\gamma$ is true. So it means that $\alpha\implies\beta$ or $\alpha\implies\gamma$, doesn't it? Because the only case where it might fail is when $\alpha$ is true while $\beta$ and $\gamma$ are both false.
So i can't understand why the answer to this question is "no".
I hope someone can explain it.
Thank you. (P.S. sorry for my bad English).
AI: Consider the following statements:
$\alpha$: $n$ is a natural number.
$\beta$: $n$ is even.
$\gamma$: $n$ is odd.
Certainly $\alpha\implies \beta\lor\gamma$, since if $n$ is a natural number, $n$ is even or $n$ is odd.
But we don't have $\alpha \implies \beta$ or $\alpha \implies \gamma$, since not every natural number is even and not every natural number is odd.
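The point is that which of $\beta$, $\gamma$ holds may vary from instance to instance ($\beta$ for even $n$, $\gamma$ for odd $n$), whereas $\alpha\implies\beta$ would require the same disjunct to work in every instance.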
|
H: Proving an Identity involving $4^N$
I am trying to prove the following identity:
$$\sum_{k=0}^N\left({2 \, N - 2 \, k \choose N - k}{2 \, k \choose k}\right)=4^N$$
I have tried writing $4^N=2^{2N}=(1+1)^{2N}=(1+1)^N(1+1)^N$, and expanding each of these as a binomial expansion, but I have found nothing but dead ends so far. Any ideas?
I am currently working through a Ch. 3 "Generating Functions" from Analysis of Algorithms by Sedgewick/Flajolet. This is problem #30.
Thanks.
AI: Let $f(x)=\displaystyle\sum_{n=0}^{\infty}{2n \choose n}x^n=(1-4x)^{-1/2}$. By the Cauchy product, your LHS is the coefficient of $x^N$ in $f^2(x)=(1-4x)^{-1}=\sum_{n\geq 0}4^nx^n$, and that coefficient is $4^N$, the RHS.
|
H: A 'complicated' integral: $ \int \limits_{-\infty}^{\infty}\frac{\sin(x)}{x}$
I am calculating the integral $\displaystyle \int \limits_{-\infty}^{\infty}\dfrac{\sin(x)}{x}\,dx$ and I don't seem to be getting an answer.
When I integrate by parts twice, I get:
$$\displaystyle \int \limits _{-\infty}^{\infty}\frac{\sin(x)}{x}dx = \left[\frac{\sin(x)\ln(x) - \frac{\cos(x)}{x}}{2}\right ]_{-\infty}^{+\infty}$$
What will be the answer to that ?
AI: Hint: From the viewpoint of improper integrals, or in the sense of the Cauchy principal value, the integral is legitimate. Integrate by parts:
\begin{align}
\int \limits_{-\infty}^{\infty}\dfrac{\sin(x)}{x} \mbox{d} x
=
&
\lim_{t\to\infty}\int \limits_{-t}^{\frac{1}{t}}\dfrac{\sin(x)}{x} \mbox{d} x
+
\lim_{t\to\infty}\int \limits_{\frac{1}{t}}^{t}\dfrac{\sin(x)}{x} \mbox{d} x
\\
=
&
\lim_{t\to\infty}\int \limits_{-t}^{\frac{1}{t}}\sin(x)(\log |x|)^\prime \mbox{d} x
+
\lim_{t\to\infty}\int \limits_{\frac{1}{t}}^{t}\sin(x)(\log x)^\prime\mbox{d} x
\\
\end{align}
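(For reference, the value of the integral is $\pi$: by symmetry it is twice the Dirichlet integral $\int_0^\infty\frac{\sin x}{x}\,\mbox{d}x=\frac\pi2$, which is usually evaluated by contour integration or by differentiation under the integral sign; the antiderivative of $\frac{\sin x}{x}$ is not an elementary function, which is why integrating by parts as you did leads nowhere.)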
|
H: Generate Correlated Normal Random Variables
I know that for the $2$-dimensional case: given a correlation $\rho$ you can generate the first and second values, $ X_1 $ and $X_2$, from the standard normal distribution. Then from there make $X_3$ a linear combination of the two $X_3 = \rho X_1 + \sqrt{1-\rho^2}\,X_2$ then take
$$ Y_1 = \mu_1 + \sigma_1 X_1, \quad Y_2 = \mu_2 + \sigma_2 X_3$$
So that now $Y_1$ and $Y_2$ have correlation $\rho$.
How would this be scaled to $n$ variables? With the condition that the end variables satisfy a given correlation matrix? I'm guessing at least n variables will need to be generated then a reassignment through a linear combination of them all will be required... but I'm not sure how to approach it.
AI: If you need to generate $n$ correlated Gaussian distributed random variables
$$
\bf Y \sim \mathcal N(\bf \mu, \Sigma)
$$
where $\textbf{Y} = (Y_1,\dots,Y_n)$ is the vector you want to simulate, $\mu =(\mu_1,\dots, \mu_n)$ the vector of means and $\Sigma$ the given covariance matrix,
you first need to simulate a vector of uncorrelated Gaussian random variables $\bf Z$,
and then find a square root of $\Sigma$, i.e. a matrix $\bf C$ such that $\bf C \bf C^\intercal = \Sigma$.
Your target vector is given by
$$
\bf Y = \bf \mu + \bf C \bf Z.
$$
A popular choice to calculate $\bf C$ is the Cholesky decomposition.
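A minimal numpy sketch of this recipe (the mean vector and covariance matrix below are just illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0, 0.5])            # example mean vector
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 2.0, 0.5],
                  [0.3, 0.5, 1.5]])        # must be symmetric positive definite

C = np.linalg.cholesky(Sigma)              # C @ C.T == Sigma
Z = rng.standard_normal((3, 100_000))      # uncorrelated standard normal draws
Y = mu[:, None] + C @ Z                    # columns are draws of the target vector

print(np.cov(Y))                           # should be close to Sigma
```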
|
H: Do I have enough iMac boxes to make a full circle?
My work has a bunch of iMac boxes and because of their slightly wedged shape we are curious how many it would take to make a complete circle.
We already did some calculations and also laid enough out to make 1/4 of a circle so we know how many it would take, but I'm curious to see how others would approach this problem mathematically and see if you came up with the same answer.
Our stock consists of 12 of the 27" iMac boxes and 16 of the 21.5" iMac boxes.
The 21.5" box has the following dimensions
top: 5"
height: 21.25"
bottom: 8.75"
and the 27" box is
top: 5.75"
height: 23.75"
bottom: 9.5"
21.5" iMac box picture for reference
AI: Suppose we have $n$ wedge-shaped boxes wrapped around to make a circle, with an inner length (per box) of $C_1$, an outer length of $C_2$, and a 'diametric' length of $D$. Then putting the $n$ boxes together will yield an inner circle with circumference $n\times C_1$, and an outer circle with circumference $n\times C_2$. By the formula for the circumference of a circle, the diameter of the inner circle is then $\frac{nC_1}{\pi}$, and similarly for the outer circle, so the difference in diameters is $\frac{n}{\pi} (C_2-C_1)$ - but this has to be exactly equal to twice the diametric length, $2\times D$ (since we add an additional distance of $D$ on either side of the diameter). In other words, we have $\frac{n}{\pi}(C_2-C_1)=2\times D$, or solving for $n$, $n=\dfrac{2\pi D}{C_2-C_1}$.
For instance, for your first case, we have $C_1=5$ (inches), $C_2=8\frac34$, and $D=21\frac14$, so we get $n=\dfrac{2\cdot\frac{85}{4}\cdot\pi}{\frac{35}{4}-\frac{20}{4}} = \frac{34}{3}\pi\approx 35.6$ boxes. Unfortunately, since this isn't very close to an integer, any circle you make will be either a little tight or a little loose. I'll let you solve the second case yourself...
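If you'd rather let a machine do the arithmetic, the formula is a one-liner; here is a small Python helper (dimensions in inches):

```python
from math import pi

def boxes_for_full_circle(top, height, bottom):
    """n = 2*pi*D / (C2 - C1): top edge C1, bottom edge C2, height D."""
    return 2 * pi * height / (bottom - top)

print(boxes_for_full_circle(5, 21.25, 8.75))    # 21.5" box: ~35.6
print(boxes_for_full_circle(5.75, 23.75, 9.5))  # 27" box: try it yourself
```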
|
H: Singular Value Decomposition & Compression
What happens when we cancel some singular values in order to compress something?
How do we compress it in this way?
AI: Let $A = U \Sigma V^*$ be an SVD of $A$ and let $\widetilde{A} = U \widetilde{\Sigma} V^*$. Take a look at Frobenius norm of $A - \widetilde{A}$:
$$\|A - \widetilde{A}\| = \sqrt{\sum_k(\Sigma_k - \widetilde{\Sigma}_k)^2}.$$
If we want $\widetilde{A}$ to have rank $\widetilde{r} < \mathop{\rm rank} A$, while minimizing $\|A - \widetilde{A}\|$, how do we achieve that?
We create $\widetilde{\Sigma}$ from $\Sigma$ by replacing the smallest singular values by zero, thus getting $\|A - \widetilde{A}\|$ to be the square root of the sum of squares of those.
The idea here is to lose some information by dropping the rank, which requires less data to be saved, while disturbing the input as little as possible (in terms of the Frobenius norm).
I suggest reading Martin and Porter, "The Extraordinary SVD".
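A minimal numpy illustration of this truncation (the random matrix and the target rank are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 40))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 10                                  # target rank
s_trunc = s.copy()
s_trunc[r:] = 0                         # drop the smallest singular values
A_r = (U * s_trunc) @ Vt                # rank-r approximation

# the Frobenius error is the root of the sum of squares of the dropped values
print(np.linalg.norm(A - A_r, 'fro'), np.sqrt(np.sum(s[r:]**2)))
```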
|
H: What is $x$ in $-x^2+2x+3 > 0$
I'm busy with a homework assignment and I do not understand how I can factorize $-x^2+2x+3$.
I can't find two numbers that when multiplied make $3$, and when added make $2$. How do I solve this problem? Also one thing that confuses me is the minus sign in front of $x^2$. All the assignments in my book so far have a positive $x^2$.
AI: Your two confusions should cancel each other if you go from
$$ -x^2+2x+3>0$$
to
$$ x^2-2x-3<0$$
|
H: How to show if $A \subseteq B$, then $A \cap B = A$?
Hi I'm new to set theory. I need to prove that if $A \subseteq B$, then $A \cap B = A$. I would like to do this the formal way, without a Venn diagram. How should I proceed?
AI: Like this: show the two inclusions. The inclusion $A \cap B \subseteq A$ holds for any sets, since every element of $A \cap B$ is in particular an element of $A$. Conversely, if $x \in A$, then $x \in B$ because $A \subseteq B$, hence $x \in A \cap B$; this gives $A \subseteq A \cap B$. Together, the two inclusions yield $A \cap B = A$.
I hope this helps ;-)
|
H: Quickest way to determine a polynomial with positive integer coefficients
Suppose that you are given a polynomial $p(x)$ as a black box (i.e. some oracle, to which you feed $x$ and it returns $p(x)$). It is known that the coefficients of $p(x)$ are all positive integers. How do you determine what $p(x)$ is in the quickest way possible?
(There are 2 metrics for quickness: the number of calls to the oracle and total number of operations. The relationship between the two is not given so we try to minimize both.)
AI: Ask for $m=p(1)$. Then all coefficients of $p$ are $\le m$.
Ask for $M=p(m+1)$. Expand $M$ in base $m+1$, done.
That's two oracle queries and roughly $\deg p$ integer div/mod operations.
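A direct transcription of this procedure in Python (the sample black box at the bottom is just for demonstration):

```python
def coeffs_from_oracle(p):
    """Recover positive-integer coefficients with two oracle calls."""
    m = p(1)                    # sum of the coefficients, so each one is <= m
    M = p(m + 1)
    coeffs = []                 # base-(m+1) digits of M, lowest degree first
    while M:
        M, digit = divmod(M, m + 1)
        coeffs.append(digit)
    return coeffs

# hypothetical black box: p(x) = 3x^2 + 5x + 2
print(coeffs_from_oracle(lambda x: 3*x*x + 5*x + 2))   # [2, 5, 3]
```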
|
H: Proving that a function has a removable singularity at infinity
I'm having trouble with the following exercise from Ahlfors' text (not homework)
"If $f(z)$ is analytic in a neighborhood of $\infty$ and if $z^{-1} \Re f(z) \to0$ as $z \to \infty$, show that $\lim_{z \to \infty} f(z)$ exists. (In other words, the isolated singularity at $\infty$ is removable)
Hint: Show first, by use of Cauchy's integral formula, that $f = f_1 +f_2$
where $f_1(z) \to 0$ for $z \to \infty$ and $f_2(z)$ is analytic in the whole plane."
My attempt:
$f$ is analytic in the exterior of some disk $\{|z|\geq R_0 \}$. For large enough $|z|$ the circle $C_1(z)$ around $z$ with radius one lies in the domain of analyticity, and we can apply Cauchy's integral formula:
$$f(z)=\frac{1}{2\pi i}\oint_{C_1(z)}\frac{f(\zeta)}{\zeta-z} \mathrm{d} \zeta=\frac{1}{2 \pi i} \oint_{C_1(z)}\frac{\Re f(\zeta)}{\zeta-z} \mathrm{d} \zeta+\frac{1}{2\pi} \oint_{C_1(z)}\frac{\Im f(\zeta)}{\zeta-z} \mathrm{d} \zeta. $$
I want to say that the last expression is exactly the decomposition $f_1+f_2$. Indeed, using Cauchy's estimate (and the fact about the real part) one can show that the first integral tends to zero for large $|z|$, but I can't understand how to define the other one for all $z \in \mathbb C$.
In fact, even if I have such a decomposition, I don't think that it will suffice: For instance $f(z)=f_1(z)+f_2(z)$ with $f_1(z)=0,f_2(z)=\exp(z)$ has an essential singularity at $\infty$.
Thank you for any directions.
AI: The existence of the decomposition does not depend on the assumption on the real part of $f$. The shortest way to get it is to expand $f$ in Laurent series in some neighbourhood of infinity $W=\{ \vert z\vert >a\}$ :
$$f(z)=\sum_{n=-\infty}^{-1} c_n z^n + \sum_{n=0}^\infty c_n z^n\, .$$
The first sum is $f_1(z)$. The second series is a power series converging in $W$, hence in fact in the whole complex plane, so it defines an entire function $f_2(z)$.
If you want to follow the hint to get the decomposition, you can proceed as follows. Fix $R_0$ such that $f$ is holomorphic in an open set containing $\{\vert z\vert\geq R_0\}$. For each $R\geq R_0$, denote by $\Gamma_R$ the circle $\{\vert z\vert=R\}$ and define
$$f_R(z)=\frac{1}{2i\pi}\int_{\Gamma_R} \frac{f(\xi)}{\xi -z}d\xi\, ,$$
which is holomorphic in $\mathbb C\setminus\Gamma_R$. By Cauchy's formula, we have
$$f(z)=f_R(z)-f_{R_0}(z)$$
whenever $R_0<\vert z\vert <R$. This shows that if $R_0<R< R'$, then the functions $f_R$ and $f_{R'}$ agree in $\{ R_0<\vert z\vert<R\}$, and hence on the disk $D(0,R)$ by analytic continuation. In other words (letting $R\to\infty$), we get a single entire function $f_2$ equal to $f_R$ on $D(0,R)$ for any $R>R_0$. Finally, put $f_1=-f_{R_0}$.
Having done this, observe that the function $f_2$ satisfies the same assumption as $f$, since $f$ and obviously $f_1$ do. Moreover, $f_1$ has a limit at $\infty$ (!) Thus, to prove that $f$ has a limit at $\infty$, it is in fact enough to assume that $f$ is an $entire$ function.
At this point, I don't see how to conclude without using the Borel–Carathéodory inequality, which allows one to control $\vert f\vert$ by ${\rm Re} f$. A special case of this inequality reads as follows: for any $r>0$
$$\sup_{\vert z\vert=r} \vert f(z)\vert\leq 2\sup_{\vert z\vert=2r} {\rm Re}f(z)+3\vert f(0)\vert\, .$$
From this, it follows that $z^{-1} f(z)\to 0$ as $z\to\infty$. Then the usual proof of Liouville's theorem shows that $f$ is constant (recall that we are assuming that $f$ is entire).
|
H: Mental estimate for tangent of an angle (from $0$ to $90$ degrees)
Does anyone know of a way to estimate the tangent of an angle in their head? Accuracy is not critically important, but staying within $5\%$ would probably be good; $10\%$ may be acceptable.
I can estimate sines and cosines quite well, but I consider division of/by arbitrary values to be too complex for this task. Multiplication of a few values is generally acceptable, and addition and subtraction are fine.
My angles are in degrees, and I prefer not have to mentally convert to radians, though I can if necessary. Also, all angles I'm concerned with are in the range of [0, 90 degrees].
I am also interested in estimating the arc tangent under the same conditions; to within about $5$ degrees would be good.
Backstory
I'm working on estimating the path of the sun across the sky. I can estimate the declination pretty easily, but now I want to estimate the amount of daylight on any given day and latitude. I've got it down to the arc cosine of the product of two tangents, but resolving the two tangents is now my sticking point. I also want to calculate the altitude of the sun for any time of day, day of the year, and latitude, which I have down to just an arc tangent.
AI: If you want to stay within 10%, the following piecewise linear function satisfies $$.9\tan\theta \le y \le 1.1\tan\theta$$ for $0\le\theta\le60$ degrees:
$$y={\theta\over60}\text{ for }0\le\theta\le20$$
$$y={2\theta-15\over75}\text{ for }20\le\theta\le45$$
$$y={\theta-20\over25}\text{ for }45\le\theta\le60$$
It might help to rewrite them as
$$y={5\theta\over300}\text{ for }0\le\theta\le20$$
$$y={8\theta-60\over300}\text{ for }20\le\theta\le45$$
$$y={4\theta-80\over100}\text{ for }45\le\theta\le60$$
so that you really don't have to divide by anything other than $3$. The line segment approximations lie above $\tan\theta$ from $\theta\approx25$ to $\theta=45$ and below it elsewhere, so you should round down and up accordingly when doing the mental arithmetic. It's obviously possible to extend this for angles greater than $60$ degrees, but whether (or how far) you can do so with formulas that use only "simple" multiplications and divisions is unclear.
A word of explanation: What I tried to do here was take seriously the OP's request for estimates you can calculate in your head. The ability to do mental arithmetic, of course, varies from person to person, so I used myself as a gauge. As for where the formulas came from, my starting point was the observation that the conversion factor between degrees and radians, $180/\pi$, is approximately $60$, so the estimate $\tan\theta\approx\theta/60$ should be OK for a while. A little trial and error showed it's good up to $\theta=20$ degrees (since $.9\tan20\approx.328$). It was easy to see that connecting $(0,0)$ to $(20,1/3)$ and $(20,1/3)$ to $(45,1)$ with straight lines would stay within the prescribed bounds. Finally, noting that $.9\tan60\approx1.55$, I saw that the line connecting $(45,1)$ to $(60,1.6)$ would have a nice slope and stay within the prescribed bounds as well.
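A quick spot-check of the piecewise approximation at integer degrees (a small Python sketch, assuming the three formulas above):

```python
import numpy as np

def approx_tan(theta):               # theta in degrees, 0 < theta <= 60
    if theta <= 20:
        return theta / 60
    if theta <= 45:
        return (2 * theta - 15) / 75
    return (theta - 20) / 25

ratios = [approx_tan(t) / np.tan(np.radians(t)) for t in range(1, 61)]
print(min(ratios), max(ratios))      # stays within [0.9, 1.1]
```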
|
H: Find the matrix for $T$ with respect to the standard bases $B = \{1,x,x^2\}$ for $P_2$.
Let $T:\ P_2 \to P_2$ be a linear operator defined by
$$T(a_0 + a_1 x + a_2 x^2) = a_0 + a_1 (x-1) + a_2 (x-1)^2.$$
Find the matrix $T$ with respect to the standard basis $B = \{1, x, x^2\}$ for $P_2$.
I know that the solution to this problem is the following matrix, but I don't understand how to find it. I tried to let the $a$ variables be $a$, $b$ and $c$ within my equation. Then, I plugged in the basis but that didn't work.
Answer:
$$T = \begin{bmatrix}
1 & -1 & 1\\
0 & 1 & -2\\
0 & 0 & 1
\end{bmatrix}.$$
AI: Note that
$$\begin{bmatrix}
1 & -1 & 1\\
0 & 1 & -2\\
0 & 0 & 1
\end{bmatrix} \cdot \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} =
\begin{bmatrix} a_0 - a_1 + a_2 \\ a_1 - 2a_2 \\ a_2 \end{bmatrix}.$$
At the same time,
\begin{align}
a_0 + a_1(x-1) + a_2(x-1)^2 &= a_0 + a_1x - a_1 + a_2x^2 - 2a_2x + a_2 \\
&= (a_0 - a_1 + a_2) + (a_1 - 2a_2)x + a_2x^2.
\end{align}
Equivalently, the columns of the matrix are the coordinate vectors of $T(1)=1$, $T(x)=x-1$, and $T(x^2)=(x-1)^2=1-2x+x^2$ with respect to $B$; that is exactly how such a matrix is found in practice. I hope it is more clear this way.
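If you want to check this mechanically, here is a short sympy sketch that builds the matrix column by column (my own wrapper around the computation above):

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2]
images = [sp.expand(b.subs(x, x - 1)) for b in basis]   # T acts by x -> x - 1

# coordinates of each image in the basis {1, x, x^2} become the columns
cols = []
for im in images:
    c = sp.Poly(im, x).all_coeffs()[::-1]               # lowest degree first
    cols.append(c + [0] * (3 - len(c)))                 # pad up to degree 2
print(sp.Matrix(cols).T)   # Matrix([[1, -1, 1], [0, 1, -2], [0, 0, 1]])
```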
|
H: Calculate the projection of $g(x)=\exp(-2x^2)$ onto the subspace $S$
I'm having trouble getting started on this one:
"Let $f_1(x) = \exp(-x^2)$, $f_2(x) = xf_1(x)$, $S$ the subspace of $L^2(\mathbb{R})$ spanned by $\{f_1,f_2\}$, and $P$ the projector onto $S$. Find $Pg$, where $g(x) = \exp(-2x^2)$."
AI: We have
\begin{eqnarray}
\langle f_1,f_2\rangle_{L^2}&=&\int_\mathbb{R}xf_1^2(x)\,dx=0,\\
\|f_1\|_{L^2}^2&=&\int_\mathbb{R}e^{-2x^2}\,dx=\sqrt{\frac{\pi}{2}},\\
\|f_2\|_{L^2}^2&=&\int_\mathbb{R}x^2e^{-2x^2}\,dx=\frac14\sqrt{\frac{\pi}{2}}.
\end{eqnarray}
We set
$$
\phi_1=\left(\frac{2}{\pi}\right)^{1/4}f_1,\ \phi_2=\left(\frac{32}{\pi}\right)^{1/4}f_2.
$$
We may choose an orthonormal basis $\{\phi_i\}_{i\ge 3}$ of $S^\perp$ in such a way that $\{\phi_i\}_{i\ge 1}$ is an orthonormal basis of $L^2(\mathbb{R})=S\oplus S^\perp$. Then
$$
g=\sum_{i=1}^\infty \langle g,\phi_i\rangle_{L^2}\phi_i
$$
and
$$
Pg=\alpha_1\phi_1+\alpha_2\phi_2,
$$
with
$$
\alpha_i=\langle g,\phi_i\rangle_{L^2},\ i=1,2.
$$
Since $g$ is even and $f_2$ is odd, the second coefficient vanishes:
$$
\alpha_2=\langle g,\phi_2\rangle_{L^2}=\|f_2\|_{L^2}^{-1}\int_\mathbb{R}xe^{-3x^2}\,dx=0.
$$
For the first coefficient, setting $x=\sqrt{\frac23}y$, we have
$$
\alpha_1=\langle g,\phi_1\rangle_{L^2}=\|f_1\|_{L^2}^{-1}\int_\mathbb{R}e^{-3x^2}\,dx=\sqrt{\frac23}\|f_1\|_{L^2}^{-1}\|f_1\|_{L^2}^2=\sqrt{\frac23}\|f_1\|_{L^2}.
$$
After replacing these values you get that
$$
Pg=\alpha_1\phi_1=\sqrt{\frac23}\,f_1.
$$
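A quick numerical cross-check with scipy (since $f_1\perp f_2$, the projection coefficients onto the unnormalized pair are $\langle g,f_i\rangle/\|f_i\|^2$):

```python
import numpy as np
from scipy.integrate import quad

f1 = lambda x: np.exp(-x**2)
f2 = lambda x: x * np.exp(-x**2)
g  = lambda x: np.exp(-2 * x**2)

# L^2 inner product over the real line
ip = lambda u, v: quad(lambda x: u(x) * v(x), -np.inf, np.inf)[0]

print(ip(g, f1) / ip(f1, f1), np.sqrt(2/3))   # both ~0.8165
print(ip(g, f2) / ip(f2, f2))                 # ~0, by odd symmetry
```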
|
H: Showing that a point lies in the intersection of the closure of some subsets of $\mathbb R^d$
Let $I$ be an index set and $D_\iota\subseteq \mathbb R^d$ for $\iota\in I$ and $x\in\mathbb R^d.$ Assume that for every $\iota\in I$ there exists a sequence $(x^\iota_n)_{n\in\mathbb N}\subseteq D_\iota$ such that $x^\iota_n\rightarrow x$ for $n\rightarrow\infty$. Does it follow that $x\in\bigcap_{\iota\in I}\overline{D_\iota}$?
AI: Since for each $i\in I$, there is a sequence in $D_i$ converging to $x$, this means $x\in\overline{D_i}$ for each $i$ (this follows easily since $\Bbb R^d$ is a metric space); in particular, $x\in\bigcap_{i\in I} \overline{D_i}$. Note we cannot say $x\in\overline{\bigcap_{i\in I}D_i}$ since we can take the rationals and irrationals in $\mathbb{R}$, which are both dense in $\Bbb R$ but have empty intersection.
|
H: Math logic - What does $X\vdash a, a \in X$ mean?
Lets take, for example, the deduction theorem:
For any set $\Sigma$ of well-formed formulas and for any two formulas $\alpha, \beta$,
$$\Sigma \cup \{\alpha\} \vdash \beta\iff\Sigma \vdash (\alpha\to\beta)$$
Question:
What does $\vdash$ mean, and what does it mean in this specific example (the deduction theorem)?
Please help.
AI: The "$\vdash$" symbols reads as "proves". That is, $\Sigma \vdash \alpha$ means that the set of sentences $\Sigma$ proves $\alpha$. Usually, this is in some proof system that's already set out, or implicitly understood in the background.
In this case, the deduction theorem just says "$\Sigma$ together with the assumption $\alpha$ can prove $\beta$ if and only if $\Sigma$ (alone) can prove $\alpha \rightarrow \beta$."
|
H: Uniform Convergence and Injectivity
Let $f$ be a continuous function and let $\phi: [0,1] \to [a,b]$ where $\phi(x) = (b-a)x + a$. Clearly $\phi$ is injective and $f \circ \phi$ is continuous on $[0,1]$.
A Theorem in my book says that there exists a sequence of polynomials $(p_n)$ such that $p_n \to f \circ \phi$ uniformly.
Then my book defines $q_n = p_n \circ \phi^{-1}$ and claims that $q_n \to f$ on $[a,b]$ uniformly.
I have the following 2 questions:
I don't understand why injectivity is important in this case. I know injective $\iff$ left inverse, but here $\phi^{-1}$ is a right inverse which I know exists since $\phi$ is surjective.
Also why is it true that $q_n \to f$ uniformly? It seems like the author just applied $\phi^{-1}$ to both sides, but why is uniform convergence maintained?
AI: The idea of the proof is as follow:
Given a function $g$ defined on the interval $[0,1]$, we construct explicitly a sequence of polynomials $(q_n)$ (for example the Bernstein polynomials) defined on the same interval and converging uniformly to the function $g$.
Now $\phi$ is clearly bijective, so given a function $f$ defined on the interval $[a,b]$, the function $g=f\circ \phi$ is defined on $[0,1]$; by the foregoing, there's a sequence of polynomials $(q_n)$ convergent uniformly to $g$, so
$$\sup_{y\in[0,1]}|q_n(y)-g(y)|=\sup_{y=\phi^{-1}(x)\in[0,1]}|q_n(\phi^{-1}(x))-g(\phi^{-1}(x))|\to0$$
and clearly $p_n=q_n\circ \phi^{-1}$ is a polynomial defined on $[a,b]$ and $f=g\circ \phi^{-1}$ so
$$\sup_{x\in[a,b]}|p_n(x)-f(x)|\to0$$
so the sequence of polynomials $(p_n)$ is uniformly convergent to $f$.
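To see the rescaling trick in action, here is a small Python demo with Bernstein polynomials (the target function and interval are arbitrary choices of mine):

```python
import numpy as np
from math import comb

def bernstein(g, n, y):
    """Degree-n Bernstein polynomial of g on [0, 1], evaluated at y."""
    return sum(g(k / n) * comb(n, k) * y**k * (1 - y)**(n - k)
               for k in range(n + 1))

a, b = 2.0, 5.0
f = np.cos                                    # function to approximate on [a, b]
phi = lambda y: (b - a) * y + a               # [0, 1] -> [a, b]
phi_inv = lambda x: (x - a) / (b - a)

xs = np.linspace(a, b, 200)
for n in (10, 50, 200):
    err = max(abs(bernstein(lambda y: f(phi(y)), n, phi_inv(xi)) - f(xi))
              for xi in xs)
    print(n, err)                             # sup-error decreases with n
```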
|
H: Is there any arc-connected set $X\subset\mathbb{R}^n$ such that $\overline{X}$ is not arc-connected?
Could someone give me an example of an arc-connected set $X\subset\mathbb{R}^n$ such that $\overline{X}$ is not arc-connected?
Thanks.
AI: The topologist's sine curve $S:=\left\{\left(x,\sin\left(\frac1x\right)\right)\mid x>0\right\}$ is path-connected and thus arc-connected since every path-connected Hausdorff space is arc-connected (although in this case it is trivial to show arc-connectedness directly). Its closure $\overline S=S\cup(\{0\}\times[-1,1])$ is not path-connected, but still connected, as is any set between a connected set and its closure.
Remark: The closure $\overline S$ is also not locally connected. The proof looks somewhat similar to the proof of path-disconnectedness (in the comments). Indeed, path-connected and local connectedness are related in a certain way:
Once you know that $\overline S$ is not path-connected, you can conclude that $S\cup A$, where $\emptyset\neq A\subset\{0\}\times[-1,1]$, is not path-connected and thus not locally path-connected. This is because a connected, locally path-connected space must be path-connected. This follows from the fact that path-components in locally path-connected spaces are open and at the same time closed.
There is sort of a converse: Assume that you know already that $S\cup A$, where $\emptyset\neq A\subseteq\{0\}\times[-1,1]$, is not locally connected at the points of $A$. This can be used to show that $S\cup A$ is not path-connected. For if $p:I\to S\cup A$ connects a point $s\in S$ to a point $t\in A$, then $p$ is a closed map. Now quotient maps preserve local (path-)connectedness, so the path $p[I]$ would have to be locally connected. But $p[I]$ covers everything of $S\cup A$ to the left of the vertical line through $s$, and $t$ does not have arbitrarily small connected neighborhoods, contradicting the local connectedness of $p[I]$.
|