H: Find conditional extrema of $z = x+2y$
I need to find conditional extrema of a function.
Here is function:
$z = x + 2y$ when $x^2 + y^2 = 5.$
AI: By Lagrange multipliers, the gradient of $z$, which is $(1,2)$, must be parallel to the constraint gradient $(2x,2y)$, i.e. $y=2x$. So the extrema are attained on the intersection between $x^2+y^2=5$ and $y=2x$, giving us $(x, y) = (1,2)$ and $(x, y) = (-1,-2)$. Thus, the extreme values of $z$ are $5$ and $-5$.
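As a quick numerical sanity check (a minimal sketch assuming numpy is available), one can parametrize the constraint circle and scan $z$ directly:

```python
import numpy as np

# Parametrize the constraint x^2 + y^2 = 5 by an angle t
t = np.linspace(0, 2 * np.pi, 100001)
x, y = np.sqrt(5) * np.cos(t), np.sqrt(5) * np.sin(t)
z = x + 2 * y

print(z.max(), z.min())  # ~5.0 and ~-5.0, matching the extreme values above
```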
|
H: For a complex number $\alpha $ which is algebraic over $\Bbb Q$, determining whether $\bar{\alpha}\in \Bbb Q(\alpha)$ or not
Let $\alpha =3^{1/3}+3^{5/4}i$, which is clearly algebraic over $\Bbb Q$. How can we determine whether $\Bbb Q(\alpha)$ contains $\bar{\alpha}$ or not?
This would be certainly true if $\Bbb Q(\alpha)$ is normal (hence Galois) over $\Bbb Q$, because $\bar{\alpha}$ is a root of the minimal polynomial of $\alpha$ over $\Bbb Q$. But in this case it doesn't seem easy to determine whether $\Bbb Q(\alpha)$ is normal over $\Bbb Q$, nor to compute the minimal polynomial of $\alpha$ over $\Bbb Q$.
AI: Consider the extension $K = \mathbb{Q}(\sqrt[3]{3}, \sqrt[4]{3}, i)$. This is a normal extension of $\mathbb{Q}$, because it is the splitting field of $(X^{3}-3)(X^{4}-3)$. This is evident from the fact that the primitive third roots of unity are $(-1 \pm i\sqrt{3})/2$, which clearly belong to $K$. Hence, $K$ is a Galois extension of $\mathbb{Q}$. Its degree over $\mathbb{Q}$ is $24$, since $\mathbb{Q}(\sqrt[3]{3}, \sqrt[4]{3})$ has degree $12$ over $\mathbb{Q}$ (it contains subextensions of degree $3$ and $4$), and adjoining $i$ to any subfield of $\mathbb{R}$ gives an extension of degree $2$.
Moreover, we can list the elements of $\mathrm{Gal}(K/\mathbb{Q})$ explicitly in terms of the generators of $K$ over $\mathbb{Q}$. Indeed, $\sqrt[3]{3}$ must go to a root of $X^{3}-3$; $\sqrt[4]{3}$ must go to a root of $X^{4}-3$; and $i$ must go to a root of $X^{2}+1$. This gives $24$ possible automorphisms of $K$ over $\mathbb{Q}$, all of which must be realized since $[K:\mathbb{Q}] = 24$.
Back to the problem at hand, if $\overline{\alpha} \in \mathbb{Q}(\alpha)$, then $\sqrt[3]{3}$ belongs to $\mathbb{Q}(\alpha)$. From here, it is not hard to see that $i\sqrt[4]{3}$ belongs to $\mathbb{Q}(\alpha)$, and so one may deduce that $\mathbb{Q}(\alpha) = \mathbb{Q}(i\sqrt[12]{3})$. On the other hand, if we have this equality of fields, then clearly $\overline{\alpha} \in \mathbb{Q}(\alpha)$.
We always have the containment $\mathbb{Q}(\alpha) \subset \mathbb{Q}(i\sqrt[12]{3})$, and $\mathbb{Q}(i\sqrt[12]{3})$ has degree $12$ over $\mathbb{Q}$ because $i\sqrt[12]{3}$ is a root of $X^{12}-3$, so it suffices to show that $\mathbb{Q}(\alpha)$ has degree $12$ over $\mathbb{Q}$. One can do so by counting the distinct Galois conjugates of $\alpha$ by the elements of $\mathrm{Gal}(K/\mathbb{Q})$ enumerated above, and one sees that there are indeed 12 unique conjugates.
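The key identity behind the containment $\mathbb{Q}(\alpha) \subseteq \mathbb{Q}(i\sqrt[12]{3})$ can be made concrete: with $\beta = i\sqrt[12]{3}$ we have $\beta^4 = \sqrt[3]{3}$ and $-3\beta^3 = 3^{5/4}i$, so $\alpha = \beta^4 - 3\beta^3$. A quick numerical sketch (assuming Python's built-in complex arithmetic):

```python
# Check numerically that alpha = beta^4 - 3*beta^3 for beta = i * 3^(1/12),
# which exhibits alpha as an element of Q(i * 3^(1/12)).
beta = 1j * 3 ** (1 / 12)
alpha = 3 ** (1 / 3) + 3 ** (5 / 4) * 1j

print(abs(beta**4 - 3 * beta**3 - alpha))  # ~1e-15, i.e. zero up to rounding
```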
|
H: Question about ratio word problem
Here is the question and solutions:
It is possible that there are 10 cups of peanuts, 6 cups of walnuts, 4 cups of cashews. So total number of cups in this mix is 20. So 10:6:4 can be reduced to 5:3:2.
Another combination of the mix could be 20 cups of peanuts, 12 cups of walnuts, and 8 cups of cashews. So the total number of cups in this mix is 20+12 +8= 40 So 20:12:8 could be reduced to 5:3:2.
So I do not know how many cups of party mix based on the ratio 5:3:2 as this ratio could be reduced.
Without knowing how many cups of party mix there are, how is the answer between 3 and 4?
Thanks!
AI: The question says you are making $12$ cups of party mix. Since $5:3:2$ accounts for $5+3+2=10$ parts, all the quantities in $5:3:2$ need to be multiplied by $\frac{12}{10}$.
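Working this out explicitly (a small arithmetic supplement using only the numbers already given): $$5+3+2 = 10 \text{ parts}, \qquad 1 \text{ part} = \frac{12}{10} = 1.2 \text{ cups},$$ $$\text{peanuts} = 5 \times 1.2 = 6, \quad \text{walnuts} = 3 \times 1.2 = 3.6, \quad \text{cashews} = 2 \times 1.2 = 2.4,$$ so the quantity that lands between $3$ and $4$ is the $3.6$ cups of walnuts.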
|
H: Question about proof of Riemann removable singularity theorem.
Theorem:
Let $f:D^*(z_0,r)=D(z_0,r)-\{z_0\}\to \Bbb C$ be holomorphic and bounded. Then $\lim_{z\to z_0}f(z)$ exists and the function $\hat{f}:D(z_0,r)\to \Bbb C$ defined by $$\hat{f}(z) =
\begin{cases}
f(z) & \text{if }z\in D^*(z_0,r) \\
\lim_{z\to z_0}f(z) & \text{if }z=z_0
\end{cases}$$ is holomorphic on $D(z_0,r)$.
Proof:
Define $g:D(z_0,r)\to \Bbb C$ by
$$g(z) =
\begin{cases}
(z-z_0)^2f(z) & \text{if }z\in D^*(z_0,r) \\
0 & \text{if } z=z_0
\end{cases}$$
After proving that $g\in H(D(z_0,r))$, the author says that since $f$ is bounded by some positive number $M$, $\lvert g\rvert\le M\lvert z-z_0\rvert^2$ on $D(z_0,r)$, it follows that the power series expansion of $g$ at $z_0$ has the form $$g(z)= \sum_{n=2}^\infty a_n(z-z_0)^n \;\;\forall z\in D(z_0,r)$$
But I have no idea about the relation between the bound on $g$ and the form of its power series expansion.
(The power series starts from 2 because $g(z_0)=0$ and $\frac{\partial g}{\partial z}(z_0)=0 $)
I know that $g$ is holomorphic on $D(z_0,r)$ and the radius of convergence of the power series is at least $r$. I think $g$ has power series expansion at $z_0$ because $g$ is holomorphic on $D(z_0,r)$. Do I miss something?
Thanks for your help!
AI: Yes, since $g$ is holomorphic in a neighbourhood of $z_0$ it has a Taylor series about $z_0$, and that series converges to $g(z)$ in a neighbourhood of $z_0$.
Let that series be $a_0 + a_1 (z-z_0) + a_2 (z-z_0)^2 + \ldots$. We then have
$a_0 = \lim_{z \to z_0} g(z)$. But we know that limit is $0$ because $|g(z)| \le M |z - z_0|^2$, so $a_0 = 0$. Next,
$$g'(z_0) = \lim_{z \to z_0} \frac{g(z) - g(z_0)}{z - z_0} = \lim_{z \to z_0} \frac{g(z)}{z - z_0} = 0$$
because $$\left| \frac{g(z) }{z - z_0} \right| \le M |z - z_0|$$
and since the series of $g'(z)$ is the term-by-term derivative of the series of $g$, this says $a_1 = g'(z_0) = 0$.
|
H: Topology on Reals generated by Predicates
What is the topology on the real numbers generated by the sets of the form $\{x\in \mathbb{R}:P(x)=T\}$, where $P$ is a predicate in the language of ordered fields? (This is a predicate calculus with the constants $1,0$; the unary functions $-$ and $()^{-1}$, the latter defined only on nonzero elements; the binary operations of addition and multiplication; and the relation $<$.) The sets clearly form a basis, since closure under finite conjunctions of predicates ensures that sets of the form mentioned above are closed under finite intersection.
Each rational number can be expressed in this language, so the set of rationals would have to be discrete. (The predicate $x=q$ is satisfied only by $q$, and so the singleton $\{q\}$ is open.) By the same reasoning, the algebraic numbers would need to be a discrete set in this topology. On the other hand, the entire real line could not be discrete, since that would imply an uncountable number of predicates, which cannot be the case since they are countable.
This topology would seem to be finer than the Euclidean one, since one can define the balls with rational radius centered at a rational with a predicate, which means this topology contains a basis of the Euclidean one. What other kinds of sets would be open?
Edit: the topology must also be totally disconnected, since the rationals are dense and the predicates of the form $x<q$, where $x$ is a variable and $q$ is a rational, together with the negations of such predicates, mean that no pair of distinct reals can lie in the same connected component.
AI: The theory of real closed fields admits quantifier elimination (over just the language of ordered rings, without the multiplicative inverse operation). It follows that any definable (without parameters) subset of $\mathbb{R}$ is a finite Boolean combination of sets of the form $\{x:p(x)\geq 0\}$ where $p$ is a polynomial with integer coefficients. Any such set is a finite union of intervals whose endpoints are algebraic numbers. Since, as you have observed, every algebraic number is definable, it follows that the topology they generate is just the topology generated by open intervals and singletons whose elements are algebraic. In other words, an open set in this topology is just a union of an open set in the usual topology and a subset of the real algebraic numbers.
This topology may seem pathological but it can in fact be embedded in $\mathbb{R}^2$. Let $A$ be the set of real algebraic numbers and enumerate $A=\{a_n\}_{n\in\mathbb{Z}_+}$. Consider the set $$X=(\mathbb{R}\setminus A)\times\{0\}\cup\{(a_n,1/n):n\in\mathbb{Z}_+\}\subset\mathbb{R}^2.$$ Then identifying $X$ with $\mathbb{R}$ via the first projection, the topology on $X$ as a subspace of $\mathbb{R}^2$ is the same as your topology on $\mathbb{R}$.
|
H: Question about approximating an area based on a graph
Here is the question and image:
Hi,
here is my attempt:
Each block has an area of $(1/2)^2 = 1/4$ miles$^2$. This grey region covers $14$ blocks. So $14*1/4 = 3.5$. So the approximate area is $3.5$. However, the answer is $2.75$ miles. I do not know why the answer is $2.75$. Thanks!
AI: Perhaps have a recount - there are approximately $11$ shaded squares (give or take). You are right in that each square is $0.25$ miles$^2$. $0.25$ miles$^2 \times 11$ gives $2.75$ square miles.
Remember it's an estimate, so the answer closest to yours would still be $2.75$ square miles.
|
H: Can any pattern be a sequence?
Can any pattern be a sequence?
For example, can $2$, $e$, $\pi$, $13$, $12$, $67$, ... be considered a sequence?
I've read that we consider only those patterns as sequences in which we can predict the next term.
Please elaborate on this in detail.
AI: A sequence is simply a function from $\Bbb{N}$ to some set (in this case, $\Bbb{R}$). That is, every number in $\Bbb{N}$ needs to be assigned a single number in $\Bbb{R}$. There's no requirement that there be a nice formula for these sequences. In fact, some sequences can be defined, but are mathematically impossible to compute (e.g. the sequence of digits of an uncomputable number).
With that in mind, I'd say that $2, e, \pi, 13, 12, 67 \ldots$ is not really a sequence, because not every natural number is assigned an element of $\Bbb{R}$. Assuming you start $\Bbb{N}$ at $1$, you've only assigned values for $1, 2, 3, 4, 5, 6$. There are, in fact, infinitely many sequences that start in this same way, and you haven't sufficiently specified which one.
Defining sequences with ellipses in this way, e.g. $1, 2, 5, 10, 17, 26, 37, 50, 65, \ldots$, is not exactly kosher. By specifying enough terms for a sufficiently simple formula, you decrease the risk that the reader will fail to grasp the sequence you're trying to specify, but it's much better to simply write $n^2 + 1$ instead, or more precisely, $(n^2 + 1)_{n=0}^\infty$. With a general formula, there's no chance that whatever pattern you're trying to convey could be missed.
If you don't have a general formula, then it's very important to still be able to properly identify the sequence that you're talking about. Essentially, you need to be talking about a sequence which satisfies certain conditions, and for which you know for a fact that no other sequence satisfies these conditions.
The most straightforward example of this is recursively defined sequences. For example, $a_0 = 1$ and for all $n \ge 1$, $a_n = \sin(a_{n-1}) + n$. This doesn't give me a closed formula for the terms of the sequence, but I know that there is one and only one sequence that satisfies these conditions. In this way, I don't have a general formula, but I am referring to a single sequence.
EDIT: Looking over this answer again, I'm not particularly happy with it, but I don't know how to improve it. I just want to acknowledge how confusingly fine the line is between:
1. Specifying a few of the terms, but failing to specify a single sequence (as in what you did),
2. Clearly specifying a single sequence, but having no formula for the terms, and
3. Specifying a sequence by giving a general formula for the terms.
Whoever told you that sequences had to have a pattern was probably trying to warn you that an ellipsis "definition" like the one you gave needs to have a pattern, otherwise it's entirely unclear which of the many sequences with those numbers at the start you are referring to.
But, mathematically speaking, ellipsis "definitions" are dodgy, as they require some degree of mind-reading by the reader. Also, "patterns" are not a mathematical concept, but a psychological phenomenon. The reader takes in information, then fits it into a hierarchy of concepts in their mind, based on a personal aesthetic judgement of "simplicity".
For example, if somebody defined the terms $0, 0, 0, 0, \ldots$ (starting from $0$), you would instantly think it was the constant sequence $0$, because in your mind, that would be the "simplest" pattern that fit the sequence so far. Your mind would be less likely to think $n(n - 1)(n - 2)(n - 3)$, which also begins with $0, 0, 0, 0$, simply because your mind judges it to be less simple. It's somehow less of a "pattern" for it, despite the fact that both sequences are equally valid mathematically. There's no mathematical sense in which $n(n - 1)(n - 2)(n - 3)$ is "simpler" than the constantly $0$ sequence.
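To see this concretely, here is a small sketch (assuming Python) comparing the two sequences just mentioned; they agree on the first four terms and then diverge:

```python
# Two "patterns" that both start 0, 0, 0, 0: the constant sequence
# and n(n-1)(n-2)(n-3). The first four terms cannot distinguish them.
constant = [0 for n in range(8)]
poly = [n * (n - 1) * (n - 2) * (n - 3) for n in range(8)]

print(constant)  # [0, 0, 0, 0, 0, 0, 0, 0]
print(poly)      # [0, 0, 0, 0, 24, 120, 360, 840]
```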
My point is, the idea of patterns and pattern-matching are psychological phenomena. Mathematical definitions don't care about psychological phenomena. There's no requirement that there be a pattern. As I said earlier, there's no need for it to be computable. That is, it may be the case that there is no possible algorithm/method to compute the terms of the given sequence. It's not quite the same as leaving them unspecified (this is the brutally fine line I was talking about); they exist, but we literally cannot figure out what they are (and we are certain that there is no way to do so). If you take one of these sequences and ask, is the hundredth term $65$, there is a true/false answer to this question (because the sequence is well-defined), but there may be no possible way for anyone to know this answer.
That's really the difference here: there is a correct answer as to what is the next number, but it may not be known, or even knowable. The start of your sequence $2, e, \pi, 13, 12, 67 \ldots$ doesn't indicate any one number next. Mathematically speaking, there is no answer to "what is the next number?". It's not that there is an unknown answer, or that there is an answer but it isn't expressible by a formula; in fact, there is no answer. That is what makes your sequence undefined.
|
H: Why is a 3-D sphere/cube centered at the origin not a subspace?
My class notes states that: a 3-D cube or sphere centered at the origin is not a subspace as it is not closed under addition and scalar multiplication. Why is this the case? I am having trouble understanding how it is not closed under addition.
AI: Let's say that this cube is the cube $[-1,1]^3 \subset \mathbb{R}^3$ centered at the origin. Notice, $(0,1,0)$ is a vector in the cube, but a scalar multiple of such a vector, e.g. $2\,(0,1,0)=(0,2,0)$, is not in the cube. Hence, it is not closed under scalar multiplication.
Likewise, take the vector $(1,0,0)$ and add it to the vector $(1,0,0)$: the resulting vector $(2,0,0)$ is not in the cube, so the cube is not closed under addition either. The same idea works for the sphere: the vectors $(1,0,0)$ and $(0,1,0)$ lie in the closed unit ball, but their sum $(1,1,0)$ does not (since it has a length of $\sqrt{2}$).
|
H: Proof Verification: $M^t\in SO_3$
By Euler's theorem, the set $SO_3$ of all $3\times3$ orthogonal matrices $M$ such that $\det(M)=1$, which is called the special orthogonal group of $3\times3$ matrices, is the set of all $3\times3$ rotation matrices. A rotation $\rho$ of $\mathbb{R}^3$ can also be represented by a spin $(\mu, \theta)$, where $\rho$ fixes a unit vector $\mu$ (which is called a pole of $\rho$) and rotates the two-dimensional subspace $W$ orthogonal to $\mu$ by $\theta$.
I come to think that if $M\in SO_3$ represents the spin $(\mu, \alpha)$, then $M^t$ represents the spin $(\mu, -\alpha)$. Is this correct?
My proposition originates from the observation that $M$ is similar to
$R=\begin{bmatrix}
1 & & \\
& \cos\alpha & -\sin \alpha \\
& \sin\alpha & \cos \alpha \\
\end{bmatrix}$
with some $B \in SO_3$, that is $M=BRB^t$.
$M^t = BR^tB^t$, and
$R^t=\begin{bmatrix}
1 & & \\
& \cos\alpha & \sin \alpha \\
& -\sin\alpha & \cos \alpha \\
\end{bmatrix}
=\begin{bmatrix}
1 & & \\
& \cos\alpha & -\sin (-\alpha) \\
& \sin(-\alpha) & \cos \alpha \\
\end{bmatrix}$.
Here $B$ can be interpreted as an ordered set of orthonormal vectors $(\mu, w_1, w_2)$ where $\mu$ is the pole of this rotation and $\{w_1, w_2\}$ is an ordered basis of the two dimensional subspace of $R^3$ orthogonal to $\mu$.
I want to verify that my proposition, together with the proof I suggested, is correct.
Thank you for reading.
AI: I believe you're right (both your end result and your derivation), but I don't know if the similarity is the most direct way to "see" (or convince a skeptic) that $M^t$ is spin $(\mu, -\theta)$.
The most relevant fact here, it seems to me, is that if $M$ is orthogonal then $M^t = M^{-1}$ (this is actually another way to define or characterize the orthogonal matrices), and $\det(M) = 1$ implies $M(\mu) = \mu$ (where $\mu$ represents the pole vector). That means $$M^t(\mu) = M^t(M(\mu)) = I_3(\mu) = \mu,$$ so $\mu$ is also the pole of $M^t$.
Once you have that $M^t$ also fixes $\mu$, we know because $M^t = M^{-1}$ that it must "reverse" the action of $M$ on vectors in $\Bbb{R}^3$. Specifically, vectors normal to $\mu$ in the plane $W := \operatorname{span}\{ \mu \}^\perp$ get rotated by the angle $\theta$ in $W$ by $M$. So these vectors must be rotated in an equal and opposite direction in $W$ by $M^{-1} = M^t$, which implies $M^t$ = spin $(\mu, -\theta)$.
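Here is a numerical sketch of this claim (assuming numpy; the Rodrigues-formula helper below is mine, not part of the original argument):

```python
import numpy as np

def rotation(axis, theta):
    # Rotation matrix about the unit vector `axis` by angle `theta`
    # (Rodrigues' formula).
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

mu, theta = [1.0, 2.0, 2.0], 0.7
M = rotation(mu, theta)

# M^t should be the rotation about the same pole mu by -theta.
print(np.allclose(M.T, rotation(mu, -theta)))  # True
```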
|
H: Evaluation of Limit involved in the proof of Asymptotic Unbiasedness of S
We know that $S^{2}$ is an unbiased estimator of $\sigma^{2}$ and $S$ is a biased estimator of $\sigma$. But if $n\rightarrow\infty$, then $S$ is an asymptotically unbiased estimator of $\sigma$. I found a proof here (first answer). But in the last step, instead of solving the limit, a graphical visualization has been given. Can we somehow algebraically prove that bias$\rightarrow0$ as $n\rightarrow\infty$? I was trying to solve it with the Gamma function definition but couldn't find a suitable formula to apply here.
AI: The following formula would be useful. $$ \Gamma(n+\frac{1}{2})=\frac{(2n)!\sqrt{\pi}}{2^{2n}n!}$$
Using this, you can show that $\sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma(\frac{n-1}{2})} \to 1$ as $n \to \infty$.
You can find a proof for the formula here.
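A quick numerical check of this limit (a sketch assuming scipy is available; `gammaln` is used to avoid overflow for large $n$):

```python
import numpy as np
from scipy.special import gammaln

for n in [10, 100, 1000, 10000]:
    ratio = np.sqrt(2 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))
    print(n, ratio)  # approaches 1 as n grows
```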
|
H: Find real numbers r and s so that $a_{n+2}+ra_{n+1}+sa_n = 0$ and $b_{n+2}+rb_{n+1}+sb_n = 0$
I already know that $b_{n+1} = a_n +3b_n$ and $a_{n+1} = 3a_n - b_n$. So
$a_{n+2} = 3(3a_n-b_n)-(3b_n+a_n) = 9a_n-3b_n-3b_n-a_n = 8a_n-6b_n$ and
$b_{n+2} = 8b_n+6a_n$.
So we can rewrite the whole thing as
$8a_n-6b_n+r(3a_n-b_n)+sa_n = 8a_n-6b_n+r(3b_n+a_n)+sb_n$ which, in turn is:
$(-4r+s)a_n = (4r+s)b_n$.
The original problem states that $(3+i)^n = a_n+ib_n$ so I tried using n=1 so a = 3 and b = 1
so then I have $-12r+3s=4r+s$ so
$2s=16r$.
From here I'm stuck. Do I just try random options? Honestly, I don't feel like I've done it right so far.
AI: It looks like you made a substitution error when writing
$$8a_n−6b_n+r(3a_n−b_n)+sa_n=8a_n−6b_n+r(3b_n+a_n)+sb_n$$
Here, you substituted $8a_n-6b_n$ for $b_{n+2}$ instead of $8b_n+6a_n$. This in turn leads you to the incorrect conclusion that $2s=16r$. Regardless, you'll be left with a single equation for $r$ and $s$, which will always have infinitely many solutions. This is why you run into a dead end - you have to find a second equation for $r$ and $s$ to get something conclusive (doable, but a little tedious and roundabout).
Here's an alternate approach. Since $b_{n+2}+rb_{n+1}+sb_n=0$, $ib_{n+2}+irb_{n+1}+isb_n=0$. Then, we can write:
$$\left(a_{n+2}+ib_{n+2}\right)+r\left(a_{n+1}+ib_{n+1}\right)+s\left(a_n+ib_n\right)=0$$
Then, we use $(3+i)^n=a_n+ib_n$ to write:
$$(3+i)^{n+2}+r(3+i)^{n+1}+s(3+i)^n=0$$
This can be reduced to:
$$(3+i)^2+r(3+i)+s=0$$
$$9+6i-1+3r+ir+s=0$$
$$(8+3r+s)+(6+r)i=0$$
This gives $r=-6$ and $s=10$.
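A short numerical sketch (assuming Python's built-in complex type) confirming that $r=-6$, $s=10$ makes both recurrences hold:

```python
# a_n + i*b_n = (3+i)^n; check that w_{n+2} - 6 w_{n+1} + 10 w_n = 0,
# which covers the real parts (a_n) and imaginary parts (b_n) at once.
r, s = -6, 10
w = [(3 + 1j) ** n for n in range(12)]

for n in range(10):
    assert abs(w[n + 2] + r * w[n + 1] + s * w[n]) < 1e-6
print("recurrence holds for the first terms")
```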
|
H: $E[1/(1+e^X)] = 1/2$ for standard normal $X$
I have normally distributed $X\sim\mathcal{N}(0, 1)$, and I want to compute
\begin{equation*}
\mathbb{E}[1/(1+e^X)] = \int_{-\infty}^\infty \frac{e^{-x^2/2}/\sqrt{2\pi}}{1+e^x} dx
\end{equation*}
I found numerically (and confirmed with Mathematica) that $\mathbb{E}[1/(1+e^X)] = 1/2$; this result continues to hold for arbitrary variances but breaks down once I select non-zero mean for $X$.
How can I prove this result? The integration trick to use is not jumping out to me.
AI: \begin{equation}
\frac{2}{1+e^X} = 1 + \frac{1-e^X}{1+e^X} = 1 - \tanh(X/2)
\end{equation}
Hyperbolic tangent is odd, hence $\mathbb{E}[\tanh(X/2)]=0$ whenever the distribution of $X$ is symmetric about $0$ (as any centered normal is), and so $\mathbb{E}[1/(1+e^X)] = 1/2$.
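A quick Monte Carlo sketch of this fact (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10**7)  # standard normal; rescaling x keeps the result

print(np.mean(1 / (1 + np.exp(x))))  # ~0.5
```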
|
H: Analytical way to find the third root of $x^{2}=2^{x}$ other than 2 and 4
How can I analytically find the third (negative) root of $x^{2}=2^{x}$, other than $2$ and $4$?
Would differentiating the equation be a sensible approach?
I tried with $\log_a$ for different values of $a$, but I couldn't find the root.
AI: Looking for the negative solution, we get
$$
x^2=2^x\\
x^2=e^{x\ln 2}\\
-x=e^{x/2\cdot\ln 2}\\
-xe^{-x/2\cdot\ln 2}=1\\
-\frac x2\ln 2e^{-x/2\cdot\ln 2}=\frac12\ln 2\\
-\frac x2\ln 2=W\left(\frac{\ln 2}2\right)\\
x=-\frac{2W\left(\frac{\ln 2}2\right)}{\ln 2}\approx-0.766665
$$
where the sign on the left-hand side in line 3 comes from knowing that $x$ is negative, and $W$ is the Lambert $W$ function. Which is to say, $W\left(\frac{\ln 2}{2}\right)$ is the solution to
$$
ye^y=\frac{\ln 2}2
$$
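Numerically (a sketch assuming scipy, whose `lambertw` returns the principal branch):

```python
import numpy as np
from scipy.special import lambertw

x = -2 * lambertw(np.log(2) / 2).real / np.log(2)
print(x)           # ~ -0.766665
print(x**2, 2**x)  # the two values agree, so x^2 = 2^x indeed
```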
|
H: Well-definedness of $\mu$-integral
The paragraph above is from: Foundations of Modern Probability (1st edition) - Kallenberg (Page 11)
I am struggling to show that the extended integral as described in the attached paragraph is independent of the choice of representation. I can see that if $f = g-h$ as written, that $g \geq f^{+}$ and $h \geq f^{-}$ necessarily, and I have also derived the following formulae for $f^{+}$ and $f^{-}$ in terms of $g$ and $h$:
$$f^{+} = \frac{\max(g,h) - \min(g,h) + g - h}{2}$$
and $$f^{-} = \frac{\max(g,h) - \min(g,h) + h - g}{2}$$
but don't see an obvious application of any of these considerations to showing independence.
AI: Uniqueness means $f=f_1-f_2=f_3-f_4$ with $f_i$'s non-negative and $\mu f_i <\infty$ for all $i$ implies $\mu f_1- \mu f_2=\mu f_3-\mu f_4$. To prove this just observe that $f_1+f_4=f_2+f_3$ and $\mu$ is linear for non-negative measurable functions. Hence $\mu f_1+\mu f_4= \mu f_2+\mu f_3$ from which the result follows.
|
H: What's the sum of all possible values of a number which is the sum of the digits of another number in this question
I came across a question in an exercise booklet for Mathematic Olympiad for primary school students in Australia. The question is described as follows:
Let A be a 2018-digit number which is divisible by 9. Let B be the sum of all digits of A and C be the sum of all digits of B. Find the sum of all possible values of C.
I could put the above question into algebraic formulae, but I still couldn't work out the solution. I find it hard to tackle the different possibilities of the number C. Can anyone please help solve this problem? Thank you very much.
AI: First say if $A = 999 \cdots 999$, then $B = 2018 \cdot 9 = 18162$. And this $B$ has the max value possible given $A$ only has $2018$ digits and is divisible by $9$.
Notice if $B$ has $5$ digits, then $C$ at most can only be $35$ (e.g. $B = 17999 \Leftarrow A$ has $1999$ digits of $9$, $1$ digit of $8$ and $18$ digits of $0$), but not $36$ (e.g. $B = 18999$).
Next if $B$ has $4$ digits, then max of $B = 9999$ is possible when $A$ has $1111$ digits of $9$, other digits $0$, so $C = 36$. Quite clearly $37$ or above is impossible if we further reduce the value of $B$.
Also for $A$ to be a $2018$ digit no., $\min B = 9$ $(e.g. A = 10,000, \cdots ,008) \Rightarrow \min C = 9$.
Thus $C$ ranges from $9$ to $36$. The required sum is $45 \cdot 14 = 630$.
[Edit: I missed the fact that divisibility by $9$ $\Leftrightarrow$ the sum of digits is a multiple of $9$, so $C$ itself must be a multiple of $9$. Hence $C$ can only be $9, 18, 27$ or $36$ (each attainable, e.g. via $B = 9, 99, 999, 9999$), and the required sum is $9+18+27+36=90$.]
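A small empirical sketch (assuming Python) of the mod-$9$ constraint from the edit: sampling random $2018$-digit multiples of $9$ only ever produces values of $C$ that are multiples of $9$:

```python
import random

def digit_sum(n):
    return sum(map(int, str(n)))

values = set()
for _ in range(1000):
    # a random 2018-digit multiple of 9
    a = 9 * random.randrange(10**2017 // 9 + 1, 10**2018 // 9)
    values.add(digit_sum(digit_sum(a)))  # C = digit sum of B, B = digit sum of A
print(values)  # a subset of {9, 18, 27, 36}; random sampling mostly sees 18 and 27
```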
|
H: Why does the fact that $a_n \equiv 3 \pmod 5$ and $b_n\equiv 1 \pmod 5$ imply $\frac{1}{\pi}\arctan(1/3)$ is irrational
So since I've started this multi-part question I've learned:
$(3+i)^n = a_n+ib_n$
$a_{n+1} = 3a_n-b_n$
$b_{n+1} = 3b_n+a_n$
$a_n \equiv 3 \pmod 5$
$b_n \equiv 1 \pmod 5$
Now I am asked why the fact that, for $n\geq 1$, $a_n \equiv 3 \pmod 5$ and $b_n \equiv 1 \pmod 5$ implies that $\frac{1}{\pi} \arctan(\frac{1}{3})$ is irrational.
I notice the similarities between $\frac{b_n}{a_n}$ and $\frac{1}{3}$, but I honestly have no clue how to go about this.
AI: Let $z=3+i$. Then $\arg z = \arctan\frac13 =: t$.
Now, since $b_n \neq 0$, we have that $nt = m\pi$ is impossible for any integer $n,m$.
(Note: $nt=\arg z^n$ comes from repeatedly taking powers of $z$. Suppose it could equal $m\pi$ for some integer $m$; then $z^n$ would be real, so we must have $b_n=0$ for some $n$.)
Thus, $t/ \pi = m/n$ has no solutions over the integers, so this number is irrational.
|
H: If $A^m = 0$, then $\mbox{rank}(A) \leq \frac{m-1}{m}{n}$
Let $A$ be an $n \times n$ real matrix. Show that if $A^m = 0$, then
$\mbox{rank}(A) \leq \frac{m-1}{m}n$
My attempt:
If $m=1$, then $A=0$ so $\mbox{rank}(A)=0$.
If $m=2$, we have $\mbox{im}(A) \subset \ker(A)$ so $2\operatorname{rank}(A) \leq \dim \mbox{im}(A) + \dim \ker(A)=n$
For arbitrary $m$, I want to use induction. $B=A|_{\mbox{im}(A)}$ satisfies $B^{m-1}=0$ so $\mbox{rank}(B) \leq \frac{m-2}{m-1}\mbox{rank}(A)$. Thus $\dim \ker B \geq \mbox{rank}(A)- \frac{m-2}{m-1}\mbox{rank}(A) =\frac{1}{m-1}\mbox{rank}(A)$ by rank-nullity theorem. Thus $n=\dim \ker A + \mbox{rank}(A) \geq \dim \ker B +\mbox{rank}(A)\geq \frac m {m-1}\mbox{rank}(A)$
Is this ok?
AI: This is essentially correct. You should clarify that by $B=\left.A\right\rvert_{\operatorname{im}A}$ you mean the linear map $\operatorname{im}A\to\operatorname{im} A$ and not the map $\operatorname{im}A\to\Bbb R^n$. Depending on how much the person whom you are speaking to values your ability to fill in details, you might want to address the fact that:
1. your inductive hypothesis is that the inequality holds $(\forall k<m,\forall n,\forall A,\cdots)$ as opposed to, say, $n$ being fixed;
2. the inductive hypothesis speaks of matrices, but you have chosen the notation in the inductive step to speak of linear maps to a substantial length. This may be addressed easily in three ways that I can think of: (a) saying that in this specific instance the problem may be reformulated matricially; (b) describing the matrix of $B$ in some basis; (c) starting all over again by saying that you are considering the linear-map formulation of the problem.
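For a concrete sanity check of the bound itself (a sketch assuming numpy), two nilpotent Jordan blocks of size $3$ give $n=6$, $m=3$ and rank exactly $\frac{m-1}{m}n = 4$, so the inequality is sharp:

```python
import numpy as np

def jordan_nilpotent(k):
    # k x k nilpotent Jordan block: ones on the first superdiagonal
    return np.eye(k, k, 1)

# Block-diagonal sum of two 3x3 nilpotent blocks: A^3 = 0 with n = 6.
A = np.block([[jordan_nilpotent(3), np.zeros((3, 3))],
              [np.zeros((3, 3)), jordan_nilpotent(3)]])

print(np.allclose(np.linalg.matrix_power(A, 3), 0))  # True: A^3 = 0
print(np.linalg.matrix_rank(A))                      # 4 = (3-1)/3 * 6
```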
|
H: Why does the formula to get $(e^{x})'s$ slope differ from itself?
I've noticed that when the professor shows the formula to get the slope of $e^x$, it's different from the formula for $e^x$ itself. Why does this happen when the slope of $e^x$ is $e^x$? There's no reason it should be different.
The formula to get $e^x$ is shown as $\frac{x^n}{(n)(n-1)\cdots}$ but the formula to get $dy/dx$ of $e^x$ is $\frac{x^{n-1}}{(n-1)\cdots(1)}+\frac{x^n}{n!}$.
AI: They are, in fact, the same.
By definition, we have $$e^x=\sum_{n=0}^\infty\frac{x^n}{n!}=1+x+\frac12x^2+\cdots+\underbrace{\frac{x^{n-1}}{(n-1)!}}_{(n-1)\text{th term}}+\underbrace{\frac{x^n}{n!}}_{n\text{th term}}+\underbrace{\frac{x^{n+1}}{(n+1)!}}_{(n+1)\text{th term}}+\cdots$$
Under the linearity of differentiation, the derivative of a sum is the sum of the derivatives of each term. By differentiating the $n$th term in the series, we get the $(n-1)$th term of the original series. For example, differentiating the $n$th term in the above series using the power rule, we get $$\frac{d}{dx}\left(\frac{x^n}{n!}\right)=\frac{nx^{n-1}}{n!}=\frac{x^{n-1}}{(n-1)!}$$ which we can see is the original $(n-1)$th term. Differentiating the $(n+1)$th term, we get $$\frac{d}{dx}\left(\frac{x^{n+1}}{(n+1)!}\right)=\frac{(n+1)x^n}{(n+1)!}=\frac{x^n}{n!}$$ which we can see is the original $n$th term.
As an aside, in case you had trouble seeing why the rational expression cancels out so nicely, recall the definition of the factorial $$n!=n(n-1)(n-2)\cdots(3)(2)(1)=n(n-1)!.$$
As such, the last step in the above section looks something like this:
$$\require{cancel}\frac{(n+1)x^n}{(n+1)!}=\frac{\cancel{(n+1)}x^n}{\cancel{(n+1)}(n!)}=\frac{x^n}{n!}$$
|
H: How many 6 digit numbers can be formed from two sets of digits?
There are two sets of digit :
$ \text{set 1 :} \{~0,1,2,3,4~\}$
$ \text{set 2 :} \{~5,6,7,8,9~\}$
Now how many 6 digit numbers can we make by taking digits from these two sets? From $\text{set 1}$ repetition is permitted but from $\text{set 2}$ repetition is prohibited.
Solution :
The possible combinations are :
\begin{array}{c|cccc}
\ combo &\text{group 1}& \text{group 2} & \text{without considering 0} &\text{considering 0}&\text{result}\\
\hline
1 & 6 & \fbox0 & 5^6 & 5^6 - 5^5 & 12,500\\
2 & 5 & \fbox1 & 5 \times 5 \times 5 \times 5 \times 5 \times \fbox 5& 5 \times 5 \times 5 \times 5 \times 5 \times \fbox 5 - 5^5 & 12,500\\
3 & 4 & \fbox2 & 5 \times 5 \times 5 \times 5 \times \fbox {$5 \times 4 $}& (5 \times 5 \times 5 \times 5 \times \fbox {$5 \times 4 $})- (5 \times 5 \times 5 \times \fbox {$5 \times 4 $}) & 10,000\\
4 & 3 & \fbox3 & 5 \times 5 \times 5 \times \fbox {$5 \times 4 \times 3 $}& (5\times 5 \times 5 \times \fbox {$5 \times 4 \times 3 $})-(5 \times 5 \times \fbox{$5 \times 4 \times 3 $}) & 6000 \\
5 & 2 & \fbox4 & 5 \times 5 \times \fbox {$5 \times 4 \times 3 \times 2$} & (5 \times 5 \times \fbox {$5 \times 4 \times 3 \times 2$}) - ( 5 \times \fbox {$5 \times 4 \times 3 \times 2$}) & 2400\\
6 & 1 & \fbox5 & 5 \times \fbox {$5 \times 4 \times 3 \times 2 \times 1$} & (5 \times \fbox {$5 \times 4 \times 3 \times 2 \times 1$}) - (\fbox {$5 \times 4 \times 3 \times 2 \times 1$}) & 480\\
\end{array}
So, the total number of possible combinations is $43880$.
Is this procedure correct ?
AI: Yes, but I don't see you taking into account the positioning of the digits. For example, in combo 2, the digit from group 2 can be in any one of the 6 places, while you let it be only in the last place (this also affects the handling of $0$).
Sanity check: assume we don't allow the usage of any digit twice. There are $9\cdot9\cdot8\cdot7\cdot6\cdot5 = 136{,}080$ possible numbers like that. If you allow using some of the digits twice, you should get more options. Your answer, $43{,}880$, is too small.
Solution:
You can write an explicit formula in the following way: $k$ out of the $6$ digits should be from group 1. We need to choose where they will be placed - ${6 \choose k}$, and in each location choose a digit - $5^k$.
For the rest $6-k$ locations - you need to choose which digits will be placed there and in what order: ${5 \choose 6-k}(6-k)!=\tfrac{5!}{(k-1)!}$.
Now, sum over all possible $k$ to obtain
$$\sum\limits_{k=1}^6 {6 \choose k} 5^k \tfrac{5!}{(k-1)!}=495475$$
From that, we need to remove the ones that start with a $0$ (the $0$ is fixed at place 1):
$$\sum\limits_{k=1}^6 {5 \choose k-1} 5^{k-1} \tfrac{5!}{(k-1)!}=61870$$
Total: $495475 - 61870 = 433605$.
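The count is small enough to verify by brute force over all six-digit numbers (a quick sketch in Python):

```python
# Count 6-digit numbers in which each digit from {5,...,9} appears at most
# once, while digits from {0,...,4} may repeat freely.
count = 0
for n in range(100000, 1000000):
    digits = str(n)
    if all(digits.count(d) <= 1 for d in "56789"):
        count += 1
print(count)  # 433605, matching the formula above
```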
|
H: Sequence without average density
How can one (if it's possible) build an infinite binary sequence in which the density of 0-elements does not converge to any value? For example, in a random sequence with $30\%$ ones and $70\%$ zeros, the (average) density of 0-elements converges to $70\%$; or in $11010010001000010000010000001000\ldots$ (I don't know the formula) the density will be $100\%$ (correct me if I'm wrong).
AI: $$01001100001111000000001111111100000000000000001111111111111111\dots$$
|
H: Show that $\sum_{n=1}^{\infty}\left(1+\frac{1}{n}-\frac{1}{n^2}\right)^{-n^2}x^n$ does not converge for $x=\pm e$
Show that $$\sum_{n=1}^{\infty}\left(1+\frac{1}{n}-\frac{1}{n^2}\right)^{-n^2}x^n$$ does not converge for $x=\pm e$. Mathematica says that $$a_n:=\left(1+\frac{1}{n}-\frac{1}{n^2}\right)^{-n^2}e^n\xrightarrow{n\to\infty}e^\frac{3}{2}$$ so $a_n$ does not converge to $0$, which means that $\sum a_n$ does not converge, and neither does $\sum (-1)^na_n$. But how does Mathematica obtain this result? I failed to show that. Also I can't show that $a_n$ is strictly increasing (which would also suffice). I'm a bit lost.
AI: $$
a_n = \exp\left[n -n^2 \log\left(1+\frac{1}{n}-\frac{1}{n^2}\right)\right]
= \exp\left[n -n^2\left(\frac{1}{n}-\frac{1}{n^2}-\frac{1}{2n^2} + o\left(\frac{1}{n^2}\right)\right)\right] \sim e^{3/2}.
$$
|
H: Write down an expression in the form $ax^n$ for: $\lim_{h\to 0} \frac{\sqrt{x+h}-\sqrt{x}}{h}$
Write down an expression in the form $ax^n$ for
$$\lim_{h\to 0} \frac{\sqrt{x+h}-\sqrt{x}}{h}$$
What I have tried so far:
multiplying by the conjugate to give:
$$\lim_{h\to 0} \frac{\sqrt{x+h}-\sqrt{x}}{h} \cdot \frac{h}{\sqrt{x+h}+\sqrt{x}}$$
so we cancel out $h$:
$$\lim_{h\to 0} \frac{\sqrt{x+h}-\sqrt{x}}{\sqrt{x+h}+\sqrt{x}}$$
as $\lim_{h\to 0}$:
$$\lim_{h\to 0} \frac{\sqrt{x+0}-\sqrt{x}}{\sqrt{x+0}+\sqrt{x}}$$
giving:
$$\lim_{h\to 0} \frac{\sqrt{x}-\sqrt{x}}{\sqrt{x}+\sqrt{x}}$$
So I'm not sure how you get the numerator to equal $1$ which would give the correct given answer: $0.5x^{-0.5}$
I think I'm missing a simpler method to obtain this answer - any hints would be much appreciated!
AI: Using the concept of derivatives:
We know that:
$$f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$$
Comparing what we have in the question to this formula, it will be clear to us that what is being asked is essentially:
$$\frac{\mathrm{d}}{\mathrm{d}x}\sqrt{x}$$
Hence, since $\sqrt{x}=x^{1/2}$, the power rule gives us:
$$\boxed{0.5x^{-0.5}}$$
And we're done!
|
H: Equivalence of two elements in a quotient $C^*$-algebra
Let $A$ be a unital $C^*$-algebra, and $I_1$ and $I_2$ closed (two-sided) $C^*$-ideals of $A$ such that $A=I_1+I_2$. Suppose we have two positive elements $a_1\in I_1$ and $a_2\in I_2$ such that
$$a_1^2+a_2^2=1.$$
Question: Then is it true that $a_1$ and $1$ define the same class in the quotient $A/I_2$?
Thoughts: I want to say that $a_1 = \sqrt{1-a_2^2}$, and then expand this as a power series and cancel the leading $1$, whence the remaining terms all lie in $I_2$. But I am not sure if this series converges in $A$. Perhaps there is some basic fact that I am missing here about power series in $C^*$-algebras.
AI: You don't need the assumption $A=I_1+I_2$.
You have $\overline{a_1}^2=1$ in the quotient $A/I_2$. But the quotient is again a unital $C^*$-algebra and $\sigma_{A/I_2}(\overline{a_1}) \subseteq \sigma_A(a_1)\subseteq [0,\infty[$ so also $\overline{a_1}$ is positive in $A/I_2$. The assumption $a_1^2+a_2^2=1$ also implies that $1$ is a positive element. Thus the same reasoning gives that $1$ is positive in $A/I_2$. Since every positive element in a unital $C^*$-algebra has a unique positive square root (Theorem 2.2.1 in Murphy's "$C^*$-algebras and operator theory"), we conclude $1=\overline{a_1}$.
|
H: Average speed of train
Distance between two stations $A$ and $B$ is $778$km. A train covers the journey from $A$ to $B$ at a uniform speed of $84$km per hour and returns back to $A$ with a uniform speed of $56$km per hour. Find the average speed of the train during the whole journey?
The correct answer is:
Let distance between $A$ and $B$ be $x$
Time taken for travelling from $A$ to $B$ is $\frac{x}{84}$
Time taken for travelling from $B$ to $A$ is $\frac{x}{56}$
Total distance travelled is $x+x=2x$
Total time taken is $\frac{x}{84}+\frac{x}{56}$
Average speed is $67.2$
I know speed is distance/time, and
I don't know if it may sound stupid, but
what I thought of was
$$avg=\frac{\text{speed}_1+\text{speed}_2}{2} \tag{This is how we calculate average}$$
$$\frac{84+56}{2}=70$$
Why is it giving the wrong answer?
Also do I need to revise my physics concepts or maths or both?
AI: The formula for the average speed is: $\text{Average Speed}= \frac{\text{Total Distance}}{\text{Total Time}}$
To obtain the total distance you simply add the distances. $778+778$.
Then, you must obtain the total time.
That's where the "Time taken for travelling from B to A" in your solution comes in.
Since you have the speeds in kilometers per hour, if you divide the distance by the speed, you will find out how many hours it took to travel that distance, so:
$\frac{778}{56} = 13.89$ and $\frac{778}{84}=9.26$
Adding the total hours and using the average speed formula, you obtain: $\text{Average Speed} = \frac{1556}{23.15}=67.2$
In your answer you simply took the arithmetic average of the two speeds, but that is not the average speed for the journey.
So, you added $56=\frac{778}{13.89}$ and $84=\frac{778}{9.26}$
$\frac{778}{13.89}+\frac{778}{9.26} = \frac{778*13.89 + 778*9.26}{13.89*9.26} = \frac{18010.7}{128.6214} = 140$
Then, dividing it by $2$, you obtained 70.
What you should have done was simply adding: $\frac{778+778}{13.89+9.26} = \frac{1556}{23.15} = 67.2$
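More generally (a standard identity, stated here for reference): when two equal distances $x$ are covered at speeds $v_1$ and $v_2$, the average speed is the harmonic mean of the speeds, not the arithmetic mean: $$\text{Average Speed} = \frac{2x}{\frac{x}{v_1}+\frac{x}{v_2}} = \frac{2v_1v_2}{v_1+v_2} = \frac{2\cdot 84\cdot 56}{84+56} = 67.2 \text{ km/h}.$$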
|
H: Geometric Interpretation of a question related to complex numbers..
QUESTION: I just encountered that $$\sum_{n=0}^8e^{in\theta}=0$$where $\theta=\frac{2\pi}9$
First I prove it, and then ask my question :)..
($i=\sqrt{-1}$)
MY ANSWER: We know $e^{i\theta}=\cos(\theta)+i\sin(\theta)$ and that $e^{in\theta}=\cos(n\theta)+i\sin(n\theta)$, but writing that out in this summation really makes it nasty. Let's consider the series:
$1+e^{i\theta}+(e^{i\theta})^2+......+(e^{i\theta})^8$
If we use the geometric progression formula, we get that the summation is $$\frac{(e^{i\theta})^9-1}{e^{i\theta}-1}$$
Now, $$e^{9i\theta}=\cos(9\theta)+i\sin(9\theta)=1$$ considering $\theta=\frac{2π}9$ as given.
Therefore the summation is zero..
Q.E.D. $\square$
Now, if we observe, we find out that for any $\theta=\frac{2\pi}k$ we have $$\sum_{n=0}^{k-1}e^{in\theta}=0$$
Isn't that great ?
Forgive me if this looks silly to you :P, but I was so elated after finding this out..
My question is: since almost every concept involving complex numbers can be chalked out geometrically,
what is the geometric interpretation of the pattern we discovered?
AI: We can think of $e^{in\theta}$ as a vector $v_n$ from $0$ which ends at $e^{in\theta}$.
Now, let $v=v_0+v_1+\dots+v_8$. Rotation about the origin by $\theta$ takes $v$ to itself (it just permutes the $v_n$ cyclically). But the only vector fixed by a nontrivial rotation is the zero vector, and thus $v=0$.
|
H: Give an example of distribution $u \in \mathcal{D}'((0,+\infty)) $ that is not extendible to $\mathbb{R}$
i.e.
find a $u \in \mathcal{D}'((0,+\infty))$, such that for any $v \in \mathcal{D}'(\mathbb{R})$,
$v|_{(0, +\infty)} \neq u$. In order to find such an example,
my question:
I tried to prove $e^{1/x^2}$ is an example, i.e. I assumed there is a distribution $u$ such
that $ u|_{(0, +\infty)} = e^{1/x^2}$ and tried to derive a contradiction, but I cannot find one.
AI: An example is $u=\sum\limits_{n=1}^\infty \delta^{(n)}_{1/n}$, that is, $u(\varphi)= \sum\limits_{n=1}^\infty (-1)^n \varphi^{(n)}(1/n)$ for $\varphi\in\mathscr D((0,\infty))$. A neat argument that it cannot be extended to $\mathscr D'(\mathbb R)$ (despite the intuitive but not very precise idea that the series "does not make sense" for $\varphi\in\mathscr D(\mathbb R)$) is that distributions with compact support in $\mathbb R$ have finite order.
For open $A\subseteq B$ the restriction map $\mathscr D'(B)\to \mathscr D'(A)$ is surjective if and only if $A$ is some union of connected components of $B$.
|
H: Caratheodory's theorem for vectors in a cone
I am studying the book "matching theory" by Lovasz and Plummer, and I found the following statement (page 257):
Comparing it with Caratheodory's theorem in Wikipedia reveals two differences:
The book speaks about vectors in a cone, particularly, in the conic hull of some given vectors. Wikipedia speaks about vectors in the convex hull of some given vectors.
The book says that $n$ vectors are sufficient, where $n$ is the dimension of the space. Wikipedia says that $n+1$ vectors are sufficient.
I could not find an explicit statement of Caratheodory's theorem for the conic hull elsewhere. Does it easily follow from the theorem for convex hull?
AI: I presume that it states that if a cone $C$ in $\Bbb R^n$ (or any $n$-dimensional real vector space) is generated by a set $S$, then each element of $C$ is a non-negative linear combination of at most $n$ vectors in $S$.
I'm not sure anyone else uses the term conic hull, but it makes sense to define it as the set of non-negative linear combinations of the vectors in question.
One can prove this Caratheodory-type result by the same method as the usual one. Let $u\in C$ and suppose $u=\sum_{i=1}^m a_i v_i$ is a minimal representation of $u$ with $a_i>0$ and $v_i\in S$. If $m>n$ then there is a linear dependence $0=\sum_{i=1}^m b_i v_i$, and then for a suitable choice of $\lambda$, $u=\sum_{i=1}^m (a_i+\lambda b_i) v_i$ has non-negative coefficients one of which vanishes, contradicting minimality.
|
H: How does the indicator probability stay stable throughout the experiment
I've been dealing with this question-
An urn has 12 blue balls and 8 red balls. You extract balls one after the other. What is the expected value of the number of blue balls that are immediately followed by a red ball?
This can be solved with indicators: if we define $X_i=1$ when the ball drawn right after the $i$-th blue ball is red, and say that the probability for each $X_i$ is $8/20$, we get that the expected value is $12\cdot(8/20)=4.8$, which is the correct answer.
However, I don't understand how the probability for EVERY $X_i$ stays the same. After the $i$-th extraction there are fewer than $20$ balls in the urn, and the number of red balls might also differ, so why can we assume the probability is the same?
AI: You can think of the experiment as a permutation of the balls. The one in place #1 is taken first, the one in place #2 taken second, and so on. $X_i$ considers only one particular place and, due to symmetry, a red ball is equally likely to end up in each place.
You are correct that as the experiment unfolds and you learn what happened, the probability changes. But we compute probabilities and expectations before the experiment starts.
Another intuitive approach: there is less probability for the 5th ball to be red if we've taken out many red balls already. BUT if we've taken out many blue balls - there is more probability for the 5th ball to be red. You can actually compute these probabilities and see how it balances out.
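A simulation sketch (assuming numpy) that estimates the expectation directly:

```python
import numpy as np

rng = np.random.default_rng(1)
balls = np.array([1] * 12 + [0] * 8)  # 1 = blue, 0 = red

trials = 100_000
count = 0
for _ in range(trials):
    p = rng.permutation(balls)
    # count blue balls immediately followed by a red ball
    count += np.sum((p[:-1] == 1) & (p[1:] == 0))
print(count / trials)  # ~4.8
```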
|
H: Fundamental group of the torus minus a point (Van Kampen thm)
I had the exercise to compute the fundamental group of the torus minus one point $p$.
I know that the fundamental group of the torus is $\pi_1(T^2) = \pi_1(S^1) \times \pi_1(S^1) = \Bbb Z \times \Bbb Z$. So:
$U :=$ open neighborhood of p
$V := T^2 \backslash \{p\}$
Then U and V are both path-connected and open in $T^2$, U $\cap$ V is path connected and $T^2 = U \cup V$.
So from my understanding we should have: $\pi_1(T^2) = \pi_1(U) * \pi_1(V)$ (where I use $*$ for the free product)
but since $\pi_1(U) = 0$ (because U is contractible) this implies $\Bbb Z \times \Bbb Z = \pi_1(T^2) = \pi_1(V) = \pi_1(T^2 \backslash \{p\})$
But I looked it up and $\pi_1(T^2 \backslash \{p\}) = \Bbb Z * \Bbb Z $, which is not the same as $\Bbb Z \times \Bbb Z$, right? Does somebody know where I made a mistake?
AI: The mistake is that in Van Kampen you don't get the free product, you get the amalgamated product $\pi_1(T^2\setminus\{p\})*_{\pi_1(U\cap V)} \pi_1(U)$.
Here, $U\cap V$ retracts onto a circle so it has nontrivial $\pi_1$; in the amalgamated product it will kill the commutator $[a,b]$ in $\mathbb Z* \mathbb Z$, and that's why the torus has $\pi_1(T^2) = \mathbb Z^2$.
To get the correct result, I think you have to get a geometric handle on $T^2\setminus \{p\}$ (find a nice space which it is homotopy equivalent to; it deformation retracts onto a wedge of two circles, which gives $\pi_1(T^2\setminus\{p\}) = \mathbb Z * \mathbb Z$).
|
H: Prove that for every real number $x$, if $|x-3|>3$ then $x^2>6x$.
Not a duplicate of
Prove that for every real number $x$, if $|x − 3| > 3$ then $x^2 > 6x$.
This is exercise $3.5.10$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$:
Prove that for every real number $x$, if $|x-3|>3$ then $x^2>6x$.
Here is my proof:
Let $x$ be an arbitrary element of $\Bbb R$. Suppose $|x-3|>3$. Now we consider two different cases.
Case $1.$ Suppose $x-3\geq0$. Ergo $x-3>3$ and so $x>6$. Multiplying both sides of $x>6$ by $x$ we obtain $x^2>6x$.
Case $2.$ Suppose $x-3<0$. Ergo $3-x>3$ and so $x<0$. Since $x<0$, then clearly $x<6$. Multiplying both sides of $x<6$ by $x$ we obtain $x^2>6x$.
Since the above cases are exhaustive, $x^2>6x$. Therefore if $|x-3|>3$ then $x^2>6x$. Since $x$ is arbitrary, $\forall x(|x-3|>3\rightarrow x^2>6x)$. $Q.E.D.$
Is my proof valid$?$
One other question: In the above linked-post, there is a clear mistake in the proof:
Since by one of the cases we have $x^2 > 6x$ then $|x − 3| > 3$
$\Rightarrow$ $x^2 > 6x$.
But the cases must be exhaustive and from both of them we should reach the conclusion that $x^2>6x$. Is my reasoning correct$?$ If yes, then why no one mentioned it in the comments or the answers to the above post$?$
Thanks for your attention.
AI: \begin{align}
|x-3|>3 & \Rightarrow (x-3)^2>9 \\
& \Rightarrow x^2-6x+9>9 \\
& \Rightarrow x^2-6x>0 \\
& \Rightarrow x^2>6x \\
\end{align}
|
H: If every subsequence of a sequence has a convergent subsequence, then is the sequence bounded or not?
In this question, for statement 1 I've proved that the latter sequence is bounded and hence has a convergent subsequence.
For statement 2, I tried to construct a counter-example but I couldn't. So I suspect it is true, and if it is, how can it be proved?
AI: $S_2$ is true. Let us prove this by contradiction. If $(a_n)$ is not bounded then there is a subsequence which tends to $\infty$ or $-\infty$. And this subsequence does not have any convergent subsequence.
More details: for any $n$ there exists $k_n$ such that $|a_{k_n}| >n$ and we may assume that $k_n$ are increasing. Either $a_{k_n}>n$ for infinitely many $n$ or $a_{k_n}<-n$ for infinitely many $n$ so (by going to a subsequence) we may assume that $a_{k_n}>n$ for all $n$. Now consider $(a_{k_n})$. This is a subsequence of $(a_n)$. Every subsequence of this tends to $\infty$ so it has no convergent subsequence.
|
H: Integrate $\operatorname{PV}\int_0^{\infty}\frac{x\tan(\pi x)}{(1+x^2)^2}dx$
A friend of mine sent me the problem to integrate
$$\operatorname{PV}\int_0^{\infty}\frac{x\operatorname{tan}(\pi x)}{(1+x^2)^2}dx$$ where $\operatorname{PV}$ is the Cauchy principal value.
I'm getting $\frac{1}{2}\psi^{(1)}\left(\frac{1}{2}+i\right)$, which is the trigamma function at a complex argument; however, he has got the answer in the real closed form $\frac{\pi^2}{(e^{-\pi}+e^{\pi})^2}$.
My work
I evaluated the integral as follows.
Recalling the result due to the Weierstrass factorization theorem,
$$ \operatorname{cos}(\pi x)=\prod_{n\geq 0}\left(1-\frac{4 x^2}{(2n+1)^2}\right)$$ taking $\log$ and differentiating with respect to $x$ we have $$\operatorname{tan}(\pi x)=\frac{8}{\pi}\sum_{n\geq 0}\frac{x}{(2n+1)^2-4x^2}$$ and thus subbing this result into the integral, we have $$\frac{8}{\pi}\sum_{n\geq 0}\int_0^{\infty}\frac{x^2dx}{((2n+1)^2-4x^2)(1+x^2)^2}$$
Doing the partial fraction decomposition and integrating gives us $$\sum_{n\geq 0}\frac{2}{(2n+2i+1)^2}=\frac{1}{2}\psi^{(1)}\left(\frac{1}{2}+i\right)$$
As per WA, $$\frac{1}{2}\Re\left(\psi^{(1)}\left(\frac{1}{2}+i\right)\right)=\frac{\pi^2}{(e^{-\pi }+e^{\pi})^2}\tag{1}\label{mainfm}$$
My question is: how do I prove the relation \eqref{mainfm}?
I searched for references on the relationship between hyperbolic functions and the trigamma function; however, I could not find any such relationship.
Any sort of help, reference, or different approach will be appreciated. Thank you.
Interestingly, changing $\operatorname{tan}(\pi x)$ to $\operatorname{tanh}(\pi x)$, I came up with the following closed form:
$$\int_0^{\infty}\frac{x\tanh(\pi x)}{(1+x^2)^2}dx=\frac{\pi^2}{4}-2$$
AI: $(1)$ follows immediately from the "conjugation property" and the reflection formula for $\psi^{(1)}$:
$$\psi^{(1)}(\bar{z})=\overline{\psi^{(1)}(z)};\qquad\psi^{(1)}(z)+\psi^{(1)}(1-z)=\frac{\pi^2}{\sin^2\pi z}.$$
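A numerical confirmation of $(1)$ (a sketch assuming mpmath, whose `polygamma` accepts complex arguments):

```python
from mpmath import mp, mpc, polygamma, pi, exp

mp.dps = 30
lhs = (polygamma(1, mpc(0.5, 1)) / 2).real
rhs = pi**2 / (exp(-pi) + exp(pi))**2
print(lhs, rhs)  # both ~0.01836..., confirming (1)
```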
|
H: The algebraic structure of ring $\mathbb{C}[x^2,xy,y^2]$
$1.$ What are maximal ideals of this ring?
$2.$ Is this ring local?
$3.$ Is this ring regular?
EDIT:
I tried to construct some ring that is isomorphic to $\mathbb{C}[x^2,xy,y^2]$, but I cannot find one.
In addition, I think ideal $(x^2,xy,y^2)$ is a maximal ideal in $\mathbb{C}[x^2,xy,y^2]$.
AI: I'm afraid I don't know the answer to whether this ring is regular. I have a few thoughts on it at the end of the post and if I come up with something I'll edit this post, but I can answer your first two questions.
(EDIT: Please see the comments for a solution on regularity.)
A more familiar way to write this ring would be $\mathbb C[r, s, t]/(rt - s^2)$. Consider the map $\mathbb C[r, s, t] \longrightarrow \mathbb C[x^2, xy, y^2]$ via $r \mapsto x^2$, $t \mapsto y^2$, $s \mapsto xy$. This is onto and vanishes on $rt - s^2$ so it yields a well defined surjection $\phi: \mathbb C[r, s, t]/(rt - s^2) \longrightarrow \mathbb C[x^2, xy, y^2]$. We now have to prove injectivity. Essentially, the idea is that the only relation $x^2$, $xy$, and $y^2$ satisfy is that $x^2 y^2 = (xy)^2$, so the relation $rt = s^2$ will suffice.
Anyway, take some $\overline{f} \in \mathbb C[r, s, t]/(rt - s^2)$ that is in the kernel of this map $\phi$. We have the relation $\overline{r}\overline{t} = \overline{s}^2$. Hence, any $s^2$ term in $f$ reduces to $\overline{r}\overline{t}$. Thus, we can without loss of generality take $f$ to be of the form $f_1(r, t) + f_2(r, t) s$, since we only care about $f$ modulo $(rt - s^2)$. Now, we assumed that $\phi(\overline{f}) = 0$ so $\overline{f}(x^2, xy, y^2) = 0$. Remember that this map was well defined on the quotient ring, so that just means that $f_1(x^2, y^2) + f_2(x^2, y^2) xy = 0$. The question then is: when is this possible? Well, consider the $x$ terms in both summands of this expression. All $x$-terms in $f_1(x^2, y^2)$ must appear with even degree and all $x$ terms in $f_2(x^2, y^2)xy$ must appear with odd degree. Thus, for their sum to be $0$, we need $f_1(x^2, y^2) = f_2(x^2, y^2) = 0$. Hence, each coefficient of $f_1$ and $f_2$ must be zero. Well, since $\overline{f} = \overline{f_1(r, t) + f_2(r, t) s}$, we therefore have that $\overline{f} = 0$ and $\phi$ is injective, hence an isomorphism.
Now, recall the correspondence between ideals of a quotient ring and of the ring itself. The (maximal, prime) ideals of $\mathbb C[r, s, t]/(rt - s^2)$ correspond to the (maximal, prime) ideals of $\mathbb C[r, s, t]$ containing $(rt - s^2)$. We can use a little geometry. As $\mathbb C$ is algebraically closed, the Nullstellensatz tells us that all maximal ideals of $\mathbb C[r, s, t]$ are of the form $(r - a, s - b, t - c)$ (that is, they correspond to points in $\mathbb C^3$). Furthermore, $(r - a, s - b, t - c)$ is the ideal of polynomials vanishing on $(a, b, c)$. Hence, $(rt - s^2) \subseteq (r - a, s - b, t - c)$ iff $rt - s^2 \in (r - a, s - b, t - c)$ iff $ac - b^2 = 0$. Thus, the locus of points in $\mathbb C^3$ satisfying $ac - b^2$ correspond precisely to the maximal ideals of $\mathbb C[r, s, t]/(rt - s^2)$ via $(r - a, s - b, t - c)/(rt - s^2)$. This is a complete description of the maximal ideals of this ring. There are clearly many points on this locus so there are many maximal ideals.
I would also like to point out an interesting geometric note about this ring. An important concept used here is that polynomial rings (and their quotients) acts as functions on affine space (and subsets cut out by polynomials - Zariski closed subsets). This dualism between algebra and geometry is fundamental to, you guessed it, algebraic geometry. So I'd like to point out a seemingly unrelated question. How can we describe $\mathbb C^2 / \pm$, where by $/ \pm$ I mean modding out by $(a, b) \sim (-a, -b)$?. Well as algebraic geometry suggests, let's think about the ring of polynomial functions on $\mathbb C^2$ and the quotient $\mathbb C^2 / \pm$. The former is, of course, $\mathbb C[x, y]$. The latter is the subring $\{f \in \mathbb C[x, y] : f(x, y) = f(-x, -y)\}$. I won't prove this here but it's not too hard to see that this is actually exactly the ring in question - $\mathbb C[x^2, xy, y^2]$.
Well we just showed that $\mathbb C[x^2, xy, y^2] \cong \mathbb C[r, s, t]/(rt - s^2)$ via $r \mapsto x^2$, $s \mapsto xy$, $t \mapsto y^2$. We saw also that $\mathbb C[r, s, t]/(rt - s^2)$ has an intimate relation with the zero set of $rt - s^2$, which I will denote $Z(rt - s^2)$. Explaining this is great detail is the job of an algebraic geometry course, so I'll leave some depth aside and just give you some things to ponder. The ring of polynomials acting on this zero set $Z(rt - s^2)$ is best described as $\mathbb C[r, s, t]/(rt - s^2)$. Since this is isomorphic to $\mathbb C[x^2, xy, y^2]$, which is the ring of polynomials acting on $\mathbb C^2 / \pm$, this suggests a relation between $\mathbb C^2/\pm$ and $Z(rt - s^2)$. Well remarkably these are "isomorphic". I'm really thinking of this as an isomorphism of varieties, but again, that's a discussion for an algebraic geometry class. Anyway, here's the map: $\mathbb C^2/ \pm \longrightarrow Z(rt - s^2)$ via $(a, b) \mapsto (a^2, ab, b^2)$. This is a bijection. There's something remarkable about this too - it's in the opposite direction of the map I described between the rings of polynomials on these two respective spaces but it's still very similar to that map.
All of this rambling was not in the scope of your question, but I promised something about regularity so here it is. If you plot $rt - s^2 = 0$ in $\mathbb R^3$ you get a double cone meeting at the origin. Of course, we should really be in $\mathbb C^3$ but I can't visualize this so this is the best I got. The origin is notably different here - it's the one point that stops this thing from being a manifold. More specifically, you can't define a tangent space at the origin, but you can everywhere else. That signifies that this ring is not regular and that the culprit is at the origin. Via this correspondence between algebra and geometry I keep bringing up, this suggests that you should localize at the prime ideal $(r, s, t)/(rt - s^2)$ and see if what you get is a regular local ring. My suspicion, due to this picture, is that it is not and that every other localization by a maximal ideal will be. If I figure this out rigorously I'll edit the post, but if not I hope this rambling helped.
|
H: Meromorphic continuation of $1+z+z^2+z^3+\ldots$
Consider the function $f(z)=1+z+z^2+z^3+\ldots$. This series is absolutely convergent on the disc $|z|<1$ and is equal to $1/(1-z)$ in this region. Now, $1/(1-z)$ is a meromorphic function on $\mathbb{C}$ with a simple pole at $z=1$. Is this sufficient to show that $f$ itself has a meromorphic continuation to the whole complex plane with a simple pole at $z=1$?
Intuitively and "formally" this seems correct. But I am not sure if there are any analytic arguments hidden in the background that I am missing.
Thank you.
AI: The function $$f(z)=1+z+z^2+z^3+\ldots$$ is well defined for $D = \{ z \in \mathbb C | |z|<1 \}$.
The function $$g(z)=1/(1-z)$$ is analytic on $\mathbb C \setminus \{1\}$.
On the domain $D$, $f(z) = g(z)$, so $g$ satisfies the definition of an analytic continuation of $f$ to $\mathbb C \setminus \{1\}$; this continuation is unique by the identity theorem (consider $f(z)-g(z)=0$ on $D$).
|
H: Determining whether the set of lines in $\Bbb R^2$ with at least two points whose coordinates are rational is a countable set
Let $A$ denote the subset $\Bbb Q\times\Bbb Q$ of $\Bbb R^2$ and $U$ denote the set of all lines in $\Bbb R^2$ that intersect with $A$ in at least two points.
Now I know $\Bbb Q\times\Bbb Q$ is countable. How does that help to determine whether $U$ is countable or not? Is $U = (\Bbb Q\times\Bbb Q)\times(\Bbb Q\times\Bbb Q)$?
AI: Hint: The slopes of these lines are rational (apart from the vertical lines $x = q$ with $q \in \mathbb Q$, which form a countable family that can be handled separately). The map $L \to (x,y,m)$, where $(x,y)$ is some fixed point of $\mathbb Q \times \mathbb Q$ in $L$ and $m$ is the slope of $L$, gives a one-to-one map of the non-vertical lines in $U$ into $\mathbb Q \times \mathbb Q \times \mathbb Q$.
|
H: Let $X$ denote the closed unit interval, and let $R$ be the ring of continuous functions $X \to \mathbb{R}$.
$\textbf{Problem}$
$\bullet~$ Let $X$ denote the closed unit interval, and let $R$ be the ring of continuous functions $X \to \mathbb{R}$.
$\textbf{(a)}~$ Let $f_{1}, f_{2}, \dots, f_{n}$ be functions with no common zero on $X$. Prove that the ideal generated by these functions is the unit ideal.
$\boxed{\textit{Hint: }~ \text{ Consider the sum } \sum_{i = 1}^{n}f_{i}^{2}}$
$\textbf{(b)}~$ Establish a bijective correspondence between maximal ideals of $R$ and points on the interval.
Any sort of idea?
AI: Maximal ideals of $C(X) = R$ are given by $$I_{t_0} = \{f \in C(X) : f(t_0) = 0\}$$
for $t_0 \in X$. Namely, every such ideal is maximal since considering the map $C(X) \to \Bbb{R} : f\mapsto f(t_0)$ gives the isomorphism
$$C(X)/I_{t_0} \cong \Bbb{R}$$
since $I_{t_0}$ is the kernel of this map. This implies that $I_{t_0}$ is maximal since in general $m \subset R$ is a maximal ideal iff $R/m$ is a field.
Conversely, let $I$ be a maximal ideal in $C(X)$ and assume that $I \ne I_{t_0}$ for all $t_0$. Then for every $t \in X$ there is a function $f_t \in I$ such that $f_t(t) \ne 0$. By continuity there is an open neighbourhood $U_t$ of $t$ such that $|f_t| > 0$ on $U_t$. Since $X = [0,1]$ is compact and $(U_t)_{t\in X}$ is an open cover, there are
$t_1, \ldots, t_n \in X$ such that $X \subseteq U_{t_1} \cup \cdots \cup U_{t_n}$. Now the function
$$f = \sum_{i=1}^n f_{t_i}^2$$
is in $I$ since all of the $f_{t_i}$ are in $I$. Furthermore, $f$ is positive everywhere and hence invertible, which implies that $I = C(X)$. This contradicts maximality of $I$, so we conclude $I=I_{t_0}$ for some $t_0 \in X$.
Hence $t \mapsto I_t$ is a bijection between points of $X$ and maximal ideals of $C(X)$.
|
H: Complex logarithm and the residue
The integral expression for the complex logarithm is defined by:
$$\int_{\gamma} \frac{1}{z}\,dz$$
where $\gamma$ represents a rectifiable path in $\mathbb{C}\setminus\{0\}$. The above integral defines $\text{log}(z)$, which has a branch cut emanating from $0$. And this is the source of my confusion when it comes to the calculation of the residue at $0$. I mean, are we supposed to insert a branch cut at zero and then consider a circular path around the branch point?
Edit
I will rephrase the above question. The logarithm is defined as:
$$\text{log}(z)=\text{log}|z| + i(\theta + 2\pi k)$$
Here, the integer $k$ defines the Riemann sheet, in other words how many times you traverse the branch cut emanating from $0$ in the positive sense. In order to cross the branch cut we must however trace around the zero. The question is then: does the factor $2\pi i k$ in fact represent the residue contribution?
AI: The reason that it is interesting to calculate the residue of $\frac 1z$ at $0$ (or equivalently, the counterclockwise integral $\oint_{|z| = 1} \frac 1z dz$) is that it cannot be (directly) calculated using an antiderivative.
If $\gamma$ is a curve in $\Bbb C$ that starts at $a \in \Bbb C$ and ends at $b \in \Bbb C$ and if there exists a function $F(z)$ such that $F$ is differentiable at all points in $\gamma$ with $f(z) = \frac d{dz}F(z)$, then we have
$$
\int_\gamma f(z)\,dz = F(b) - F(a).
$$
It follows that if $\gamma$ is a closed contour (so that $a = b$), then we have $\int_\gamma f(z)\,dz = F(a) - F(a) = 0$. In other words: if we know that there is an antiderivative of $f$ that is globally defined along the contour $|z| = 1$, then it necessarily follows that its residue at $0$ is $0$.
That said, we can use the antiderivative to compute the residue if we split the desired integral into parts. Let $\gamma_1$ denote the path along $|z| = 1$ from $1$ to $-1$, and let $\gamma_2$ denote the path along $|z| = 1$ from $-1$ to $1$, both taken counterclockwise. We have
$$
\oint_{|z| = 1}\frac 1{z}\,dz = \int_{\gamma_1} \frac 1zdz + \int_{\gamma_2} \frac 1z dz.
$$
We now consider two different antiderivatives for $\frac 1z$ corresponding to different branch cuts. Define $\log^1,\log^2$ such that
$$
\log^1(e^{i \theta}) = i\theta, \quad \theta \in [-\pi/2,3 \pi/2);\\
\log^2(e^{i \theta}) = i\theta, \quad \theta \in [\pi/2, 5\pi/2).
$$
We then have
$$
\int_{\gamma_1} \frac 1zdz = \log^1(-1) - \log^1(1) = \pi i - 0 = \pi i,\\
\int_{\gamma_2} \frac 1z dz = \log^2(1) - \log^2(-1) = 2 \pi i - \pi i = \pi i.
$$
Regarding your edited question: your definition
$$
\log(re^{i\theta}) = \{\log r + i(\theta + 2 \pi k) : k \in \Bbb Z\}
$$
is consistent with the definition
$$
\log(z) = \left\{\int_{\gamma}\frac 1w \,dw : \gamma \text{ is a contour from } 1\ \text{to}\ z\right\}.
$$
It is indeed the case that the term $2 \pi i k$ corresponds to the contribution of the residue at $z = 0$, picked up once for each time the contour winds around the origin.
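As a numerical illustration (my own addition, not part of the original answer), discretizing the counterclockwise integral of $1/z$ over $|z|=1$ recovers $2\pi i$, which is $2\pi i$ times the residue at $0$ and also matches $\pi i + \pi i$ from the two half-circle computations above:

```python
import numpy as np

# Left Riemann sum for the counterclockwise integral of 1/z over |z| = 1,
# parametrized by z = e^{it}, dz = i e^{it} dt, t in [0, 2*pi].
t = np.linspace(0.0, 2.0 * np.pi, 200001)
z = np.exp(1j * t)
integrand = (1.0 / z[:-1]) * (1j * z[:-1])   # (1/z) * dz/dt
integral = np.sum(integrand * np.diff(t))

print(integral)   # ~ 6.2832j, i.e. 2*pi*i
```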
|
H: How to prove divisibility by $7$?
I am currently doing some preparatory maths for which I have an oral examination at the end of August, and am currently completely stuck trying understand how to solve a problem.
The problem is as follows:
Two three digit numbers, $\overline {abc}$ and $\overline {def}$, are such that $\overline {abc}-\overline {def}$ is divisible by $7$. Show that the six digit number $\overline{abcdef}$ is also divisible by $7$.
I have run through most all divisibility rules for dividing by $7$ I have come across, apart from brute-forcing, but I cannot grasp how to solve this problem. Any ideas or help for how to crack this nut would be very helpful and appreciated.
AI: I think that the comments suffice to solve this particular exercise, but more generally, any time you want to prove divisibility criteria for integers, the solution usually lies in manipulating the decimal expansion of numbers, as in (using your notation):
$$\overline{abcdef}= 10^5 a+10^4 b+10^3 c +10^2 d+10^1 e + 10^0 f$$
As you can see from the comments, you can also manipulate "bigger chunks" of the expansion, as in
$$\overline{abcdef}=10^4\cdot\overline{ab}+10^2\cdot\overline{cd}+10^0\cdot\overline{ef}$$
In this case, the solution comes from simply noticing that $\overline{abcdef}=10^3\cdot\overline{abc}+10^0\cdot\overline{def}$, thus giving:
$$ \overline{abcdef}=1000\overline{abc}+\overline{def}=1001\overline{abc}+(\overline{def}-\overline{abc}) $$
Since $1001$ is divisible by $7$, you get the characterization you were looking for: $\overline{abcdef}\equiv\overline{def}-\overline{abc}\mod 7$, or in other words, $\overline{abcdef}$ is divisible by $7$ if and only if $\overline{def}-\overline{abc}$ is (sign doesn't matter in this case).
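For a quick empirical check of the congruence (a sketch I'm adding, not a substitute for the proof above), we can sample pairs of three-digit numbers with $7 \mid \overline{abc}-\overline{def}$ and verify that $7 \mid \overline{abcdef}$:

```python
import random

random.seed(0)
for _ in range(10**4):
    abc = random.randint(100, 999)
    d = random.randint(100, 999)
    defn = d - (d - abc) % 7       # force defn ≡ abc (mod 7)
    if not 100 <= defn <= 999:
        continue
    assert (abc - defn) % 7 == 0
    assert (1000 * abc + defn) % 7 == 0   # abcdef = 1000*abc + def
print("all sampled cases consistent")
```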
|
H: Certain open subsets of $L^1$ for $\sigma$-finite measure
Let $\mu$ be a $\sigma$-finite Borel measure on a metric space $X$, and let $K$ be a Borel subset of $X$ of positive $\mu$-measure. Then when does the set
$$
\left\{
I_K g:\, g \in L^1_{\mu}(X)
\right\}\subseteq L^1_{\mu}(X)
$$
define a subset of $L^1_{\mu}(X)$ with non-empty interior (besides when $K=X$)?
More generally(ish), what about when $I_K$ is replaced by a strictly positive $\mu$-integrable function?
AI: Your subset is the range of the multiplication operator (with multiplier $I_K$ or any other measurable function). In particular, it is a subspace. As such, it is open if and only if it equals $L^1_\mu(X)$. For a multiplication operator with nonnegative multiplier this is the case if and only if the multiplier is bounded away from zero, i.e., there exists a constant $c>0$ such that $f\geq c$ a.e. If $f=1_K$, then this is equivalent to $\mu(X\setminus K)=0$.
|
H: Differentiation + integration: how to solve for acceleration and displacement given a specific velocity time graph?
The velocity-time graph shown below is for a particle moving in a straight line, from rest at A, through B to C and then back to rest at B.
I have a few questions below regarding this velocity-time graph that I am unsure how to solve, any hints would be much appreciated!
My attempts so far:
What is the acceleration of the particle 13 seconds after leaving A?
I know that differentiating velocity with respect to time gives acceleration - however I am unsure of how to apply this in the context of a graph. I'm also thinking I can apply this formula that I found where:
acceleration ($m/s^2$) = change in velocity ($m/s$) ÷ time taken ($s$)
Using the formula I got: $(10+7) ÷ 13 = 1.3077$ - but this is wrong. The correct answer is $-2.5m/s^2$ - I don't know how to get there though.
What is the particle's displacement from A 28 seconds after leaving A?
I think I have confused this with distance in the next question - the answer for displacement given is $85$m but how do you get there?
How far does the particle move in the first 28 seconds?
Distance is given from area under the given curve (from $0$ to $28$ seconds)
So I've added the area like so:
$(5 \times 10)/2 = 25$
$7 \times 10 = 70$
$(4 \times 10)/2 = 20$
$(12 \times 5)/2 = 30$
$25 + 70 + 20 + 30 = 145$m
This is from a Year 12 Methods textbook.
Thanks very much in advance for any tips!
AI: First question:
acceleration ($m/s^2$) = change in velocity ($m/s$) ÷ time taken ($s$).
The acceleration at $t=13$ s is the slope of the graph there, i.e. the slope of the segment from $12$ s to $16$ s, over which the velocity drops from $10$ m/s to $0$ m/s:
$$a=\frac{\Delta v}{\Delta t}=\frac{0-10}{16-12}=-2.5 \ m/s^2.$$
(Your attempt divided a total change in speed by the full $13$ s, but acceleration is the local slope of the velocity-time graph, not an overall average.)
Second question:
In a velocity-time graph the displacement equals the signed area between the graph and the time axis, because displacement is velocity integrated over time. We can divide this graph into four parts:
$a_1$ ($0$ to $5$ s): $\frac{5 \times 10}{2} = 25$ m
$a_2$ ($5$ to $12$ s): $7 \times 10 = 70$ m
$a_3$ ($12$ to $16$ s): $\frac{4 \times 10}{2} = 20$ m
$a_4$ ($16$ to $28$ s): $\frac{12 \times (-5)}{2} = -30$ m
Total displacement = total signed area $= a_1 + a_2 + a_3 + a_4 = 25 + 70 + 20 - 30 = 85$ m.
(For the total distance, the area below the axis counts as positive, which recovers your $25 + 70 + 20 + 30 = 145$ m.)
|
H: Are these two products of random variables independent?
Assume $a,b_1$ and $b_2$ are independent random variables. I am wondering whether the $x=ab_1$ and $y=ab_2$ are independent.
Under the condition $a$ is known, these two random variables are obviously independent. However, when $a$ is also a random variable, how can we justify the independence?
AI: Assume that the random variables have finite second moments. If $ab_1$ and $ab_2$ were independent, then $E[(ab_1)(ab_2)]=E(ab_1)E(ab_2)$, which gives $E[a^{2}]=(E[a])^{2}$ (assuming that $Eb_1$ and $Eb_2$ are not zero). This is false unless $a$ is a constant random variable. Thus there are plenty of counterexamples.
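For a concrete numerical illustration (my own sketch): with standard normal $a, b_1, b_2$ the means $Eb_i$ vanish, so the moment test above is inconclusive, but the same idea applied to squares works, since $E[x^2y^2]=E[a^4]=3\neq 1=E[x^2]E[y^2]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
a = rng.standard_normal(n)
b1 = rng.standard_normal(n)
b2 = rng.standard_normal(n)
x, y = a * b1, a * b2

print(np.cov(x, y)[0, 1])         # ~ 0: x and y are uncorrelated here
print(np.cov(x**2, y**2)[0, 1])   # ~ 2: x^2 and y^2 are correlated,
                                  # so x and y are not independent
```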
|
H: How many roots does $(x+1)\cos x = x\sin x$ have in $(-2\pi,2\pi)$?
So the nonlinear equation that I need to find the number of its roots is
$$(x+1)\cos x = x\sin x \qquad \text{with } x\in (-2\pi,2\pi)$$
Using the intermediate value theorem I know that the equation has at least one root on this interval, and if I use drawing I see that $x\sin x$ and $(x+1)\cos x$ intersect in three points, but from drawing I can't know if they might intersect again somewhere.
And the problem is that the number of zeroes is definitely not 3, the options are 4, 5, 6, 7 based on my textbook.
I tried the Fixed point method but $\{x\}$ didn't converge, either my starting point or the function I chose were inappropriate.
Can you help?
AI: From the hint given by @zkutch, it is evident from the graph that the equation has five roots when $x\in[-2\pi,2\pi]$. As suggested by @Claude Leibovici I've posted the original graph, which is indeed nicer than the second one. However, students are more familiar with the second one. Third graph if necessary. :-)
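Since the graphs themselves cannot be reproduced here, a short numerical sketch (my own addition) confirms the count by detecting sign changes of $f(x)=(x+1)\cos x - x\sin x$ on a fine grid:

```python
import numpy as np

f = lambda x: (x + 1) * np.cos(x) - x * np.sin(x)
x = np.linspace(-2 * np.pi, 2 * np.pi, 200001)
y = f(x)
sign_changes = np.sum(np.sign(y[:-1]) != np.sign(y[1:]))
print(sign_changes)   # 5, matching the five roots visible in the graph
```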
|
H: Proof by Induction: Prove that $2^n > n^2$, for all natural numbers greater than or equal to $5$
Problem: $2^n > n^2, \forall n \in \mathbb{N} , n \geq 5$
Base: $2^5 > 5^2$
Induction Hypothesis: Assume for $n = k \geq 5$ that $2^k>k^2$
Inductive Step:
$$2^k > k^2$$
$$2^k \times 2 > k^2 \times 2$$
$$2^{k+1} > 2k^2$$
From there I can finish the proof by asserting that $k^2 > 2k+1, \forall k \in \mathbb{N} , k \geq 3$.
Do I need to prove that $k^2 > 2k+1, \forall k \in \mathbb{N} , k \geq 3$. Or can I substitute it into my inequality?
AI: You need to prove $k^2\color{blue}{\ge}2k+1$ for $k\ge\color{blue}{5}$ (unless your teacher thinks that's obvious enough to skip), although the stronger version you quoted is also correct. The proof does not, however, need induction, because the inequality is equivalent to $(k-1)^2\ge2$.
|
H: If $V_{1} \subset V \subset V_1 + V_2\subset \mathbb{R}^{n}$. Is it true $V = V \cap V_{1} + V \cap V_2$?
Given subspaces $V$, $V_{1}$, $V_2$ of $\mathbb{R}^{n}$such that $V_1 \subset V \subset V_1 + V_2$. Is it true $V = V \cap V_{1} + V \cap V_2$?
Could you please verify my proof or propose other ways to solve it?
My proof: since $V_1 \subset V$, $V_1 \cap V = V_1$.
From $V \subset V_1 + V_2$ we know that for $v \in V$, $v_i \in V_i$ the following is true $v = v_1 + v_2$.
Rewrite it as $v - v_1 = v_2$. Hence, $v_2$ should belong to $V$ (meaning $V_2 \subset V)$ and, therefore, $V_1 + V_2 \subset V$.
Combining it with $V \subset V_1 + V_2$, we get $V = V_1 + V_2 = V \cap V_1 + V \cap V_2$.
AI: You reach the wrong conclusion $V_2 \subset V$ as an intermediate step.
Your approach can be fixed though: The inclusion
$$
V \supset V\cap V_1 + V\cap V_2
$$
is trivial. For the other inclusion, let $v\in V$ and since $V\subset V_1+V_2$ there are $v_1\in V_1\subset V$ and $v_2\in V_2$ such that $v=v_1+v_2$. As you said, this yields $v_2\in V$ and hence $v=v_1+v_2$ with $v_1\in V_1 = V\cap V_1$ and $v_2\in V\cap V_2$. Since this works for any $v\in V$ we get
$$
V \subset V\cap V_1 + V\cap V_2.
$$
Putting together both inclusions we get the desired identity.
|
H: A wrong law of large numbers for dependent variables
Suppose we are given $Y, X_1, X_2,\ldots$ i.i.d. standard normal random variables and define
$$Z_i = \sqrt{\rho}Y + \sqrt{1-\rho}X_i$$
for some given $\rho\in[0, 1)$. The random variables $Z_i$ are not independent if $\rho > 0$. Fix some threshold $T\in\mathbb{R}$ and let $L_i$ take value $1$ if $Z_i < T$ and $0$ otherwise.
Below I will give a proof of the fact that a.s.
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^nL_i = \Phi(T)$$
where $\Phi$ is the CDF of the standard normal distribution. However, I know that this result cannot be true: for example, if $\rho$ is very close to $1$ (full correlation) then intuitively we expect all $L_i$ to take value $1$ with probability $\Phi(T)$ and all of them to be $0$ with probability $1-\Phi(T)$.
Question: What am I doing wrong in my proof? What step or statement doesn't hold?
Proof
I will follow quite closely the easy proof in Section 7.2 of Probability with Martingales by D. Williams since the $L_i$ have finite moments. Writing $p=\Phi(T)$ and since $Z_i$ is standard normal, $L_i$ is Bernoulli with parameter $p$ and we have $E[L_i] = p$.
Now we look at the variables $Z_i$ conditioned on $Y$:
\begin{align}
P[Z_i\le z\mid Y] ={}&P[\sqrt{\rho}Y + \sqrt{1-\rho}X_i\le z\mid Y]\\
={}&P\left[X_i\le\frac{z-\sqrt{\rho}Y}{\sqrt{1-\rho}}\mid Y\right]\\
={}&\Phi\left(\frac{z-\sqrt{\rho}Y}{\sqrt{1-\rho}}\right).
\end{align}
In particular, we have that the variables $Z_i$ conditioned on $Y$ are i.i.d. (since the $X_i$ are). It follows that the variables $L_i$ conditioned on $Y$ also are i.i.d., they are Bernoulli, and
$$P[L_i=1\mid Y] = \Phi\left(\frac{T-\sqrt{\rho}Y}{\sqrt{1-\rho}}\right).$$
Now write $A_i = L_i - p$ and $T_n = \sum_{i=1}^nA_i$. Then
\begin{align}
E[T_n^4] ={}&E[E[T_n^4\mid Y]]\\
={}&E[nE[A_1^4\mid Y] + 3n(n-1)E[A_1^2A_2^2\mid Y]]\\
={}&nE[A_1^4] + 3n(n-1)E[A_1^2A_2^2]\\
\le{}&Kn^2
\end{align}
where in the first line we used the law of total expectation, in the second line we just used the fact that the conditional random variables are independent and proceeded as in the book, the third line is the law of total expectation again, and the last line holds for some $K$ since all moments are finite.
Now we can just follow the rest of the proof. We have that
$$E\left[\sum_{n=1}^\infty\left(\frac{T_n}{n}\right)^4\right]\le\sum_{n=1}^\infty K\frac{1}{n^2}<\infty$$
which implies
$$\sum_{n=1}^\infty\left(\frac{T_n}{n}\right)^4<\infty$$
a.s. which in turn implies that the sequence of terms a.s. goes to $0$ as $n$ goes to $\infty$, concluding the proof.
AI: (Migrated from the comment)
For the computation of $E[T_n^4\mid Y]$ to make sense, $A_i$ should have zero conditional mean, which leads to
$$p=\Phi\left(\frac{T-\sqrt{\rho}Y}{\sqrt{1-\rho}}\right).$$
Using this choice, all the other computations now make sense, proving
$$ \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{\{Z_i \leq T\}} = P(Z_1 \leq T \mid Y) = \Phi\left(\frac{T-\sqrt{\rho}Y}{\sqrt{1-\rho}}\right). $$
(Although an overkill, this may also be viewed as a consequence of the Birkhoff-Khinchin ergodic theorem.)
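To see the corrected conclusion numerically, here is a short simulation (my own sketch, with hypothetical parameter values $\rho=0.5$, $T=0$). For each fresh draw of $Y$, the empirical mean of the indicators matches $\Phi\bigl((T-\sqrt{\rho}\,Y)/\sqrt{1-\rho}\bigr)$ rather than the constant $\Phi(T)$:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)
rho, T, n = 0.5, 0.0, 10**6
for trial in range(3):
    Y = rng.standard_normal()
    X = rng.standard_normal(n)
    Z = np.sqrt(rho) * Y + np.sqrt(1.0 - rho) * X
    # empirical average of L_i vs. the Y-dependent limit
    print(np.mean(Z < T), Phi((T - np.sqrt(rho) * Y) / np.sqrt(1.0 - rho)))
```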
|
H: Can anyone solve this Pell equation?
I have solved the Pell equation $ p^2 - 95 q^2 =1$ . By looking at the convergents corresponding to the simple continued fraction of $\sqrt{95}$ I was able to find the fundamental solution $p=39$ and $q=4$ .
I found the five smallest pairs of positive integers $p,q$ that satisfy the above Pell equation. They are : $$\begin{align*} p=39 \quad& q=4 \\
p=3041 \quad&q=312\\
p=237159 \quad& q=24332 \\
p=18495361\quad& q=1897584 \\
p=1442400999 \quad& q=147987220 \end{align*}$$
However I am having difficulty solving the related Pell equation $$ p^2 - 95 q^2 =-1 , +1 , -1 , +1 , -1 , +1 , .....$$
The only difference now is that the right hand side of the equation is alternatively $-1$ and $+1$ , instead of just $+1$. One obvious trivial solution is $p=1$ and $q=0$ , or $p= \sqrt{-1}$ and $q=0$ but these trivial solutions do not count.
I am trying to find the five smallest pairs of positive integers $ p,q$ that satisfy this equation. I would appreciate your help.
AI: There are no solutions to $p^2-95q^2=-1$, because they would imply $p^2\equiv-1\bmod95$, which would imply $p^2\equiv-1\bmod19$, and there are no solutions to $p^2\equiv-1\bmod19$, because there are no solutions to $p^2\equiv-1\bmod n$ for prime $n\equiv-1\bmod4.$
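A brute-force check (my own addition, not a proof, since the mod $19$ argument already settles the question) confirms both the non-existence of small solutions to the $-1$ equation and the fundamental solution of the $+1$ equation:

```python
import math

# No p with p^2 = 95*q^2 - 1 for small q, consistent with the mod 19 obstruction.
for q in range(1, 10**6):
    s = 95 * q * q - 1          # would need to be a perfect square p^2
    p = math.isqrt(s)
    assert p * p != s, (p, q)
print("no solutions with q < 10^6")
print(39**2 - 95 * 4**2)        # 1: fundamental solution of the +1 equation
```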
|
H: Theorem 2.9 Rudin functional analysis - Inferring exists $n$ such that $K \cap nE \neq \emptyset$
Follow up to this question.
I realized that question, which I've asked, explains "why" we can apply Baire's Theorem to $K$. It doesn't address however why $\exists n$ such that $K \cap nE \neq \emptyset$, so this question it's just a check (as I'm reviewing my knowledge of Functional Analysis).
According to Baire's theorem such a $K$ is of second category, which means it is not a countable union of nowhere dense (so it's of second category) or equivalently the countable intersection of open dense in $K$ is not empty.
However I'm not able to reach the conclusion I want (or maybe I'm just not convinced). I guess I can pick a collection of open dense in $K$ therefore (collection is $\left\{ V_i \right\}$)
$$
K = \overline{\bigcap V_i} = \bigcup K \cap nE
$$
the bit that is confusing me is when I say "I can pick", can I actually pick such a collection as a consequence of Baire's theorem?
AI: Note that $$K = \bigcup_{n=1}^\infty K \cap nE$$
and for all $n \geq 1$ we have that $K \cap nE$ is closed in $K$ (since $E$ is closed in $X$). By the Baire category theorem (applied to the compact Hausdorff space $K$), there is $n \geq 1$ such that $K \cap nE$ has non-empty interior. In particular, $K \cap nE \neq \emptyset$.
|
H: Show $U_1 \cup U_2=V \implies U_1=V$ or $U_2=V$
Let $V$ be a vectorspace over the field $K$ and $U_1, U_2$ subspaces of $V$.
Show $U_1 \cup U_2 = V \implies U_1=V$ or $U_2=V$
my thoughts:
Let $x_1 \in U_1$ and $x_2 \in U_2$, then $x_1+x_2 \in U_1 \cup U_2$. But this would mean
$x_1+x_2 \in U_2$ or $x_1+x_2 \in U_1$. This would mean $U_1=V$ or $ U_2=V,$ since $x_1$ and $x_2$ are any elements from $U_1 \cup U_2 = V.$
Could someone give me a feedback if its correct?
AI: The original proof is problematic because you go from the statement "for every $x_1,x_2,x_1 + x_2 \in U_1$ or $x_1 + x_2 \in U_2$" to the statement " $x_1 + x_2 \in U_1$ for every $x_1,x_2,$ or $x_1+x_2 \in U_2$ for every $x_1,x_2$" without justification.
Here is an alternative way to structure the proof that you might find helpful. It suffices to show that if $U_1 \cup U_2 = V$ and $V \neq U_1$, then $V = U_2$. To that end, suppose (for the purpose of contradiction) that there exists an element $x \in U_1 \setminus U_2$.
|
H: If $L_M:(\mathbb{R}^m, \|\cdot\|_p) \to (\mathbb{R}^m, \|\cdot\|_q)$ is an isometry where $p\neq q$, must $M$ be orthogonal?
Given any $m\times m$ square matrix $M$, let $L_M:(\mathbb{R}^m, \|\cdot\|_p) \to (\mathbb{R}^m, \|\cdot\|_q)$ be defined by $L_M(x) = Mx$ where $1\leq p,q< \infty$ and
$$\|(x_1,...,x_m)\|_p= \left( \sum_{k=1}^m |x_k|^p \right)^{1/p}.$$
It is well-known that if $p=q=2$ and $L_M$ is an isometry, then the matrix $M$ is orthogonal.
I am interested to know whether the same holds if we assume that $p\neq q$.
More precisely,
Assume that $L_M$ is an isometry. If $p\neq q$, must $M$ be an orthogonal matrix?
Clearly each column of $M$ has norm $1$.
So, it remains to show that every two columns of $M$ has zero dot product.
But I got stuck here.
AI: For $m\geq 2$, the unit spheres in $\mathbb R^m$ with respect to $\|\cdot\|_p$ and $\|\cdot\|_q$ aren't linear images of each other when $p\neq q$, hence there are no linear maps preserving norms in this case.
So the answer is trivially "yes": every linear map $(\mathbb{R}^m, \|\cdot\|_p) \to (\mathbb{R}^m, \|\cdot\|_q)$ preserving norms is also orthogonal (since there are none). For $m=1$ all these norms coincide with $|\cdot|$, the norm-preserving maps are $x\mapsto\pm x$, and these are orthogonal, so the answer is "yes" there as well.
|
H: How to show that $\sum_{n=1}^{N} \cos(2n-1)x = \frac {\sin(2Nx)}{2\sin(x)} $
I am studying Fourier analysis and have been given the following question:
Show that $$\sum_{n=1}^{N} \cos(2n-1)x = \frac {\sin(2Nx)}{2\sin(x)} $$
I used the formula for a finite geometric sum and Euler's formula to get to the following:
$\sum_{n=1}^{N} \cos(2n-1)x = Re (\sum_{n=1}^{N} e^{i(2n-1)x}) = Re (\sum_{n=0}^{N-1} e^{i(2n+1)x}) = Re (e^{ix} \sum_{n=0}^{N-1} (e^{i2x})^n) = .... = Re(\frac{i}{2 \sin{x}}(1-e^{i2xN}))$
I have been stuck here for a while and am unsure how to get to the required $\frac {\sin(2Nx)}{2\sin(x)}$.
What do I do next?
AI: Use the standard trick: factor out $\mathrm e^{iNx}$:
$$ i(1-\mathrm e^{2iNx})=i\mathrm e^{iNx}(\mathrm e^{-iNx}-\mathrm e^{iNx})=\mathrm e^{iNx}\frac{\mathrm e^{iNx}-\mathrm e^{-iNx}}i=2\sin Nx\,\mathrm e^{iNx},$$
whence
$$\operatorname{Re}\Bigl(\frac{i}{2 \sin{x}}(1-\mathrm e^{i2xN})\Bigr)=
\operatorname{Re}\Bigl(\frac{2\sin Nx\:\mathrm e^{iNx}}{2 \sin x}\Bigr)
=\frac{2\sin Nx\cos Nx}{2\sin x}=\frac{\sin 2Nx}{2\sin x}.$$
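A quick numerical spot-check of the identity (my own addition), comparing both sides at random $N$ and $x$:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    N = int(rng.integers(1, 20))
    x = rng.uniform(0.1, 3.0)
    lhs = sum(np.cos((2 * n - 1) * x) for n in range(1, N + 1))
    rhs = np.sin(2 * N * x) / (2 * np.sin(x))
    print(N, abs(lhs - rhs))   # differences at machine precision
```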
|
H: Prove that for all real numbers $a$ and $b$, $|a|\leq b$ iff $-b\leq a\leq b$.
Not a duplicate of
If $a\leq b$ and $-a\leq b$, then $|a|\leq b$.
if $-a\leq b\leq a$, then $|b|\leq a$
Is my proof of $|a| \leq b \iff -b \leq a \leq b$ correct?
Prove that for all real numbers $a$ and $b$, $|a| \leq b$ iff $-b \leq a \leq b$
This is exercise $3.5.12.a$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$:
Prove that for all real numbers $a$ and $b$, $|a|\leq b$ iff $-b\leq a\leq b$.
I am familiar with the routine proof of the above theorem but I was wondering whether we could prove the right-to-left direction of the above theorem in the following simple way:
Suppose $-b\leq a\leq b$. Since $a\leq b$ then $a\leq b$ or $-a\leq b$ and thus by definition $|a|\leq b$. Therefore if $-b\leq a\leq b$ then $|a|\leq b$. $Q.E.D.$
I am suspicious of my proof! Is it correct$?$ If not, then why$?$
Thanks for your attention.
AI: When you said "Since $a\le b$ then $a\le b$ or $-a\le b$", the disjunction is too weak: since $|a|$ equals either $a$ or $-a$, to conclude $|a|\le b$ you need both $a\le b$ and $-a\le b$.
You have both available: $a\le b$ is given, and since $-b\le a$ you get $b\ge -a$.
|
H: Diffeomorphism theorem for Lie Groups?
The integral lattice $\Bbb Z^n$ is a discrete subgroup of the Lie group $\Bbb R^n$. Therefore, it acts freely and properly discontinuously on $\Bbb R^n$ and the orbit space $\Bbb R^n/\Bbb Z^n$ has a smooth manifold structure. I wanted to show that $\Bbb R^n/\Bbb Z^n$ is diffeomorphic to the toral group $T^n = S^1\times\dots\times S^1$. I realized that I can show something more general:
Let $G$ be a Lie group and let $F:G \to H$ be a surjective Lie group homomorphim. If $\Gamma=\ker F$ is a discrete subgroup, then the orbit space $G/\Gamma$ is diffeomorphic to $H$.
Partial Proof: Let $\pi:G \to G/\Gamma$ be the quotient map. Define $\tilde{F}: G/\Gamma \to H$ by $\tilde{F}(\Gamma x) = F(x)$. This is a well defined bijection which is also a homeomorphism. Now since $\pi$ is a covering map, for each $p \in G/\Gamma$ there exists a connected neighbourhood $U$ of $p$ and a connected neighbourhood $\tilde{U}$ in $G$ such that $\pi|\tilde{U}: \tilde{U} \to U$ is a diffeomorphism. Therefore $\tilde{F}|U = F\circ\pi^{-1}$ and hence is a smooth map. Therefore $\tilde{F}$ is smooth because it's locally smooth.
This proof isn't complete since I haven't shown that $\tilde{F}^{-1}$ is smooth. This is where I'm stuck. It'll be helpful if someone gives a hint as to how to show that $\tilde{F}$ is a diffeomorphism.
AI: One way to see this is as follows : you proved that $G/\Gamma \to H$ was a homeomorphism. $\Gamma$ is discrete so this implies that $G$ and $H$ have the same dimension.
Also, $G\to H$ is a submersion (you have to prove that), therefore on tangent spaces it is surjective, so by a dimension argument it is an isomorphism on tangent spaces, therefore $G\to H$ is a local diffeomorphism (by the local inversion theorem).
This should be enough to conclude
(By the way, you probably know this but of course the $T^n \cong \mathbb{R^n/Z^n}$ case is completely elementary and easier than the general case)
|
H: Length of line segment at intersection of three spheres
For laying out a grid of spheres, I need to calculate the the length of the line segment (highlighted in red) at the intersection of three spheres:
Each sphere has an equal radius and is centred on the corners of an equilateral triangle; the $y$-coordinate for each sphere's center is $0$.
So, my question is: given the radius and $z$ and $x$ coordinates of each sphere, how do I calculate the length of the line segment highlighted in red?
AI: It is clear from the sketch that the segment lies on the line through the centroid of $\triangle ABC$ (the triangle of centers) perpendicular to its plane; since the centers all have $y=0$, this line is parallel to the $y$-axis. Now, let $E$ be one end of the segment and $F$ be the centroid of the triangle (for an equilateral triangle the centroid is equidistant from the vertices). Since $E$ lies on the sphere centered at $B$, we have $EB=r$, the common radius, and in $\triangle EFB$, by the Pythagorean theorem, $$EB^2=EF^2+FB^2,$$ so $EF=\sqrt{r^2-FB^2}$. The length of the segment is $2EF$.
In the question we are given the $x$ and $z$ coordinates of the respective centers. We get the coordinates of the centroid $F$ of the $\triangle ABC$ as $$F=\left(\frac{x_1+x_2+x_3}{3},0,\frac{z_1+z_2+z_3}{3}\right)$$ Now, from this you get $BF$.
Here is the figure drawn in Geogebra of the situation.
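Here is a small numerical sketch of the recipe (my own addition; the radius and the coordinates of the centers are hypothetical placeholders, chosen to form an equilateral triangle in the $y=0$ plane):

```python
import numpy as np

r = 2.0
centers = np.array([[0.0, 0.0, 0.0],
                    [2.0, 0.0, 0.0],
                    [1.0, 0.0, np.sqrt(3.0)]])   # equilateral, side 2

F = centers.mean(axis=0)                  # centroid of the triangle
BF = np.linalg.norm(centers[0] - F)       # distance from a center to the centroid
EF = np.sqrt(r**2 - BF**2)                # half-length, by the Pythagorean theorem
print(2 * EF)                             # length of the red segment
```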
|
H: Integratethe following function: $\int \frac{1}{x(x+1)(x+2)\cdot\cdot\cdot(x+n)}dx, n \in \mathbb{N}$
Integrate the following function:
$$\int \frac{1}{x(x+1)(x+2)\cdot\cdot\cdot(x+n)}dx, n \in \mathbb{N}$$
I saw this question as an exercise on a University of Colorado website, and I'm not quite sure how to solve this. My first instinct was do this integral step-by-step. So first determine $\int\frac{1}{x}dx$, then $\int\frac{1}{x(x+1)}dx$, so on in order to notice a pattern and then use induction to prove it but I'm hopping there is a more concrete way to find the solution.
AI: Let us use the identity that
$$f(x)=\frac{1}{(x+1)(x+2)(x+3)...(x+n)}=\frac{1}{n!}\sum_{k=1}^{n} (-1)^{k+1} \frac{k{n \choose k}}{x+k}.$$
Let
$$g(x)=\frac{1}{x(x+1)(x+2)(x+3)...(x+n)}=\frac{1}{n!}\sum_{k=1}^{n} (-1)^{k+1} \frac{k{n \choose k}}{x(x+k)}.$$
$$\implies g(x)=\frac{1}{n!}\sum_{k=1}^{n} (-1)^{k+1} {n \choose k} \left(\frac{1}{x}-\frac{1}{x+k}\right)$$
$$\implies g(x)=\frac{1}{n! x}-\frac{1}{n!}\sum_{k=1}^{n}(-1)^{k+1} \frac{{n \choose k}}{x+k},$$ where we used $\sum_{k=1}^{n}(-1)^{k+1}{n \choose k}=1$ to collect the $\tfrac1x$ terms.
Finally, $$\int g(x)\, dx=\frac{1}{n!}\left(\ln x+\sum_{k=1}^{n} (-1)^{k} {n \choose k} \ln(x+k)\right)+C$$
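For a small $n$, the antiderivative can be sanity-checked with a computer algebra system (a sketch I'm adding, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = 4   # small test case; the formula is claimed for all natural n

g = 1 / sp.Mul(*[x + k for k in range(0, n + 1)])   # 1/(x(x+1)...(x+n))
F = (sp.log(x) + sum((-1)**k * sp.binomial(n, k) * sp.log(x + k)
                     for k in range(1, n + 1))) / sp.factorial(n)

print(sp.cancel(sp.diff(F, x) - g))   # prints 0, confirming F' = g for n = 4
```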
|
H: Do properties in linear algebra proved by using matrix transformations hold true irrespective of the choice of the bases for the vector spaces?
Let us say I am required to prove that V (dimension $= n$) and $\Bbb{R} ^ n$ are isomorphic and have chosen the matrix representation way of doing this.
Assume a linear transformation of
$T :V \rightarrow \Bbb{R} ^n$ that is injective.
The matrix representation is going to be a square matrix and as injectivity implies that the rank of $[T]$ is the dimension of $V$ ($= n$) which in turn means that the matrix is full rank. The rank of $[T]$ will also be equal to the dimension of the domain space and thus it is onto.
I think using this method I have proved that there exists a $T$ such that it is both injective and onto.
My only problem is that the $[T]$ here comes from fixing an ordered basis and I think the proof holds true only for these bases. How can I be sure that changing the basis would keep the proof intact?
PS: If the proof is wrong please include that too in the answer.
AI: There is no problem in proving something with the use of a basis, because a finite dimensional space does have one.
What's $[T]$? If we assume to take the canonical basis for the codomain, the columns of $[T]$ are precisely the vectors $T(v_1),T(v_2),\dots,T(v_n)$, where $\{v_1,v_2,\dots,v_n\}$ is the chosen basis.
Note that $\{w_1=T(v_1),w_2=T(v_2),\dots,w_n=T(v_n)\}$ is also a basis of $\mathbb{R}^n$, because the set is linearly independent, due to $T$ being injective. So we can define a linear map $S\colon\mathbb{R}^n\to V$ by declaring that
$$
S(w_i)=v_i,\qquad i=1,2,\dots,n
$$
This map is the inverse of $T$: can you see it? So you actually don't need to use $[T]$ at all. Nor do you need the codomain to be $\mathbb{R}^n$: it can be any $n$-dimensional vector space.
But you must use bases, because for infinite dimensional spaces it's false that injectivity implies surjectivity.
|
H: Find $g(x)$ from the following condition: ${g(x)}=(\int_{0}^{1}{e}^{x+t}{g(t)}dt)+x$
Find $g(x)$ from the following condition: $${g(x)}=\left(\int_{0}^{1}{e}^{x+t}{g(t)}dt\right)+x$$ I have tried to solve it by applying Newton-Leibnitz formula and solving the linear differential equation with the help of integrating factor, but I am getting $g(x)$ as $1+x$, it does not satisfy the original equation,
Please tell me where I have gone wrong and suggest a better approach to this question.
AI: I suggest another approach. We know:
\begin{equation}
{g(x)}=\left(\int_{0}^{1}{e}^{t}{g(t)}dt\right){e}^{x}+x
=C{e}^{x}+x,
\end{equation}
where $C=\int_{0}^{1}{e}^{t}{g(t)}dt$. We only need to discover the value of $C$. But:
\begin{equation}
C=\int_{0}^{1}{e}^{x}{g(x)}dx
=\int_{0}^{1}(C{e}^{2x}+xe^x)dx
=C\int_{0}^{1}{e}^{2x}dx+\int_{0}^{1}xe^xdx
\end{equation}
Therefore:
\begin{equation}
C\left(1-\int_{0}^{1}{e}^{2x}dx\right)=\int_{0}^{1}xe^xdx
\end{equation}
which can be easily solved: $\int_{0}^{1}e^{2x}dx=\frac{e^{2}-1}{2}$ and $\int_{0}^{1}xe^{x}dx=1$, so $C\cdot\frac{3-e^{2}}{2}=1$, i.e. $C=\frac{2}{3-e^{2}}$.
|
H: Find the PDF of $U = {XY \over \sqrt{X^2+Y^2}}$ where $X, Y \sim N(0, 1)$ where X,Y are iid RV. Hence find the mean and variance of $U$.
The task is to find the PDF of $U = {XY \over \sqrt{X^2+Y^2}}$ where $X, Y \sim N(0, 1)$ where $X,Y$ are iid RV.
I approached this question as first finding the joint distribution of $g_{UV}(u, v)$ where $V = Y$ and then calculate the marginal distribution of $g_U(u) = \int_{-\infty}^{\infty} g_{UV}(u, v) dv$. But the final step becomes so messy as follow
$$
g_U(u)
= {1 \over 2\pi}
\int_{-\infty}^{\infty}
\exp\bigg(-\frac{v^3 + u^2v^2-u^2v+v}{2(v^2 - u^2)}\bigg)
\bigg(\frac{v}{\sqrt{v^2 - u^2}}
- \frac{uv^2}{(v^2 - u^2)^{3/2}}\bigg) dv.
$$
How can I proceed from here? Or is there a better or more efficient method to approach this question.
AI: Assuming $X,\,Y$ are independent, the joint infinitesimal probability of $X,\,Y$ is $\frac{1}{2\pi}\exp\frac{-x^2-y^2}{2}dxdy=\frac{d\theta}{2\pi}r\exp\frac{-r^2}{2}dr$. Since $U=\tfrac12R\sin2\Theta$, define $V:=\tfrac12R\cos2\Theta$ so$$\frac{dudv}{drd\theta}=\left|\begin{array}{cc}
u_{r} & v_{r}\\
u_{\theta} & v_{\theta}
\end{array}\right|=\left|\begin{array}{cc}
u/r & v/r\\
2v & -2u
\end{array}\right|=-2\frac{u^{2}+v^{2}}{r}=-\frac{r}{2}.$$Taking the absolute value of the Jacobian, and noting that $(r,\theta)$ and $(r,\theta+\pi)$ map to the same $(u,v)$, so the change of variables is two-to-one (contributing an extra factor of $2$), the joint PDF of $U,\,V$ is$$2\cdot\frac{1}{2\pi}re^{-r^{2}/2}\frac{2}{r}\,dudv=\frac{2}{\pi}e^{-2\left(u^{2}+v^{2}\right)}dudv.$$Hence $U,\,V$ are $N\left(0,\,\tfrac{1}{4}\right)$ IIDs.
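A quick Monte Carlo check of the conclusion (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
U = X * Y / np.sqrt(X**2 + Y**2)

print(U.mean())   # ~ 0
print(U.var())    # ~ 0.25, consistent with U ~ N(0, 1/4)
```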
|
H: Invertbility of an element in a subalgebra.
Let $A$ be a unital algebra over the complex numbers and $B$ be a subalgebra of $A$ with $A=B + \Bbb{C}1_A$. Suppose that $B$ has a unit $1_B \neq 1_A$ and that $\lambda \in \Bbb{C}\setminus \{0\}$. The book I'm reading claims:
$$b + \lambda 1_A \mathrm{\ invertible \ in \ A} \iff b + \lambda 1_B \mathrm{\ invertible \ in \ B} $$
I have trouble showing the implication $\implies $ . Suppose $b+ \lambda 1_A$ has inverse $c$ in $A$. What is the inverse of $b+ \lambda 1_B $ in $B$, or if an explicit inverse can't be written down how can one show it exists?
AI: Suppose that $b+u1_A$ is invertible in $A$. Since $A=B+\Bbb{C}1_A$, we may write its inverse as $b'+u'1_A$ with $b'\in B$ and $u'\in\Bbb{C}$, so that $(b+u1_A)(b'+u'1_A)=1_A$, i.e. $bb'+u'b+ub'+uu'1_A=1_A$. Note that $1_A\notin B$ (otherwise $1_A$ would act as a unit for $B$, forcing $1_B=1_A$), so the sum $B+\Bbb{C}1_A$ is direct; comparing components gives $bb'+u'b+ub'=0$ and $uu'=1$. Hence $(b+u1_B)(b'+u'1_B)=bb'+u'b+ub'+uu'1_B=1_B$, and the same computation with the factors reversed shows that $b'+u'1_B$ is a two-sided inverse in $B$.
|
H: 2 questions in text of Lesson : Inner Product spaces of Hoffman Kunze Linear Algebra( Related to Orthogonality)
I am self studying linear Algebra from Hoffman Kunze and I have 2 questions in text given just after Corollary of Theorem 3 whose image I am adding below.
It's image:
Question (1): Why, in the last line of page 281, must we have $\alpha_{m+1}=0$?
Question (2): How does, in the second-to-last line, $\alpha_{1},\ldots,\alpha_{m}$ being different from $0$ and $\alpha_{m+1}=0$ imply that the $\beta_{i}$'s are linearly dependent?
For (8-9 ) see image :
I shall be really thankful for any explanations provided.
AI: For Question 1: since $\alpha_{m+1}$ is in the span of the orthogonal basis $\alpha_1,\ldots,\alpha_m$, you can express $\alpha_{m+1}$ using (8-8):
$$\alpha_{m+1}=\sum_{k=1}^m\frac{(\alpha_{m+1}|\alpha_k)}{\|\alpha_k\|^2}\alpha_k$$
But $(\alpha_{m+1}|\alpha_k)=0$ for all $k$, so $\alpha_{m+1}=0$.
For Question 2: under these assumptions, equation (8-9) gives you a nontrivial linear dependence relation between the $\beta_i$'s, so they are linearly dependent.
|
H: How to express the tail bound of this series as a function of $N$
I encountered a problem which asks to show how the tail bound of a series converges to zero, where the tail bound has to be expressed as a function of $N$. For example, given the series:
$$
\sum_{k=0}^{\infty} \frac{1}{2^k}
$$
I can derive an estimate for its tail bound from $N$ to $\infty$ as:
$$
\sum_{k\geq N}\frac{1}{2^k}=\sum_{k=0}^{\infty}\frac{1}{2^k} - \sum_{k=0}^{N-1}\frac{1}{2^k}
$$
or that:
$$
\sum_{k\geq N}\frac{1}{2^k}=\frac{1}{1-(1/2)} - \frac{1-(1/2)^N}{1-(1/2)} = \frac{1}{2^{N-1}}
$$
Since $\frac{1}{2^{N-1}} \rightarrow 0$ as $N \rightarrow \infty$, the tail bound converges to zero.
In the problem, I would have to derive a similar tail bound for the series:
$$
\sum_{k=N}^{\infty} \frac{1}{k^2}
$$
If I use a similar technique as above, I would get:
$$
\sum_{k=N}^{\infty} \frac{1}{k^2} = \sum_{k=1}^{\infty} \frac{1}{k^2} - \sum_{k=1}^{N-1} \frac{1}{k^2}
$$
From Euler, $\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$, which means that:
$$
\sum_{k=N}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6} - \sum_{k=1}^{N-1} \frac{1}{k^2}
$$
Since $\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$, I am sure that the series converges, which implies that the tail bound $\sum_{k=N}^{\infty} \frac{1}{k^2}$ should approach $0$ as $N \rightarrow \infty$; however, I could not think of a way to express this decreasing tail bound as a function of $N$ ...
Any help ?
AI: There are many ways to show this using various series convergence tests.
Imo the simplest bound would be to use an integral:
$$\sum_{k\ge N}\frac1{k^2}\le\frac1{N^2}+\int_N^\infty\frac{\mathrm dx}{x^2}=\frac1{N^2}+\frac1N$$
Cauchy's condensation test may also be applied to give a series you already know, using $2^n\le N$:
$$\sum_{k\ge N}\frac1{k^2}\le\sum_{k\ge2^n}\frac1{k^2}\le\sum_{k\ge n}\frac{2^k}{(2^k)^2}=\sum_{k\ge n}\frac1{2^k}$$
Another standard approach would be to use telescoping series:
$$\sum_{k\ge N}\frac1{k^2}\le\sum_{k\ge N}\frac1{k(k-1)}=\sum_{k\ge N}\left(\frac1{k-1}-\frac1 k\right)=\frac1{N-1}$$
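Numerically (a sketch I'm adding; the infinite tails are truncated at $10^6$, which is harmless at this scale), the actual tail indeed sits below the telescoping bound $\frac{1}{N-1}$ and behaves like $\frac1N$:

```python
import numpy as np

for N in [10, 100, 1000]:
    k = np.arange(N, 10**6, dtype=float)
    tail = np.sum(1.0 / k**2)          # truncated tail, truncation error < 1e-6
    print(N, tail, 1.0 / (N - 1))      # tail <= 1/(N-1), and both ~ 1/N
```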
|
H: Alternating series where $0<x<1$
If $0<x<1$, then $1-x+x^2-\dots = L$ $\quad(i)$
This is as much as I can say about this series:
Rewriting the series as $$\sum_{n=0}^{\infty} (-1)^{n}x^{n} $$
Ignoring the sign, I know that (i) is a geometric series and it would converge to $\dfrac1{1-x}$. Also, the terms are nonincreasing and they converge to $0$. Hence by the alternating series test, (i) is convergent.
It should satisfy this (I even checked by taking the two subsequences $S_{2n+1}$ and $S_{2n}$ of the partial sum sequence)
$$ \displaystyle| S_k - L | \leq a_{k+1} $$
I still can't understand how to use this information to find $L$ which is given to be $\dfrac1{1+x}$
AI: If $L=1-x+{x}^2-\cdots$ converges for $|x|<1$
then $xL= x-x^2+{x}^3-\cdots$ also converges for $|x|<1$
and $L+xL = 1$
so $L=\frac{1}{1+x}$ for $|x|<1$
|
H: Proper punctuation of cases in statement of Lemma
I found a very similar question asked on the TeX StackExchange here but the answer was not as definitive as I hoped for.
My question is in regard to how one properly punctuates the following Lemma. Should a comma be used at the end of each case? If not, what would be the proper way to punctuate the cases environments?
Lemma 6. Under the specified parameters constraints in Definition 3
\begin{array}{*3{>{\displaystyle}l}}
&\lim_{z\to 0}\tilde g_{n,\omega}(z,\nu)%
&=%
\begin{cases}
(\omega+\nu)^{(n)}, &\nu<0,\\
(\omega)^{(n)}, &\nu>0,
\end{cases}\\[1em]
&\lim_{z\to\infty}\tilde g_{n,\omega}(z,\nu)%
&=%
\begin{cases}
(\omega-1)^{(n)}, &\nu<0,\\
(\omega-1+\nu)^{(n)}, &\nu>0.
\end{cases}
\end{array}
AI: There are not really any hard rules for these situations that I am aware of, but I would pick either
Option One:\begin{array}{*3{>{\displaystyle}l}}
&\lim_{z\to 0}\tilde g_{n,\omega}(z,\nu)%
&=%
\begin{cases}
(\omega+\nu)^{(n)}, &\nu<0\\
(\omega)^{(n)}, &\nu>0,
\end{cases}\\[1em]
&\lim_{z\to\infty}\tilde g_{n,\omega}(z,\nu)%
&=%
\begin{cases}
(\omega-1)^{(n)}, &\nu<0\\
(\omega-1+\nu)^{(n)}, &\nu>0.
\end{cases}
\end{array}
or
Option Two:\begin{array}{*3{>{\displaystyle}l}}
&\lim_{z\to 0}\tilde g_{n,\omega}(z,\nu)%
&=%
\begin{cases}
(\omega+\nu)^{(n)}, &\nu<0\\
(\omega)^{(n)}, &\nu>0
\end{cases}\\[1em]
&\lim_{z\to\infty}\tilde g_{n,\omega}(z,\nu)%
&=%
\begin{cases}
(\omega-1)^{(n)}, &\nu<0\\
(\omega-1+\nu)^{(n)}, &\nu>0.
\end{cases}
\end{array}
If you are using commas instead of, for example, writing $$(\omega-1)^{(n)} \ \text{ for }\ \nu<0$$ then you definitely do want the first set of commas and you definitely don't want commas after every inequality, but you should have the period at the end, and it's a matter of taste whether the comma at the end of the first limit is necessary.
|
H: Integration: find as an exact value the enclosed area between $y=\frac{3x}{5π}$ and the curve $y=\sin x$ for $0≤x≤π$ shown shaded in the diagram.
The diagram shows the line $y=\frac{3x}{5\pi}$ and the curve $y=\sin$
$x$ for $0\le x\le \pi$.
Find (as an exact value) the enclosed area shown shaded in the diagram.
I'm not sure where I've made an error, but this is my working out so far:
Area under $y=\frac{3x}{5\pi}$ from $0$ to $\frac{5\pi}{6}$ is:
$$\int_{0}^{5\pi/6} \frac{3x}{5\pi}dx=\frac{5\pi}{24}$$
The remaining unshaded "triangle" area from $\frac{5\pi}{6}$ to $\pi$:
$$\frac{1}{2} \cdot \frac{\pi}{6} \cdot \frac{1}{2} = \frac{\pi}{24}$$
Area under $y=\sin x$ for $0\le x\le\pi$:
$$\int_{0}^{\pi} \sin{x}dx=2$$
Hence, shaded area = $$2-\frac{5\pi}{24} - \frac{\pi}{24} = \frac{-\pi}{4} + 2$$
The correct answer is:
$$1+\frac{\sqrt3}{2} - \frac{5\pi}{24} units^2$$
If someone could explain how to correctly solve this, it would be greatly appreciated!
AI: The area under $\sin(x)$ between $x=5\pi/6$ and $x=\pi$ is not a triangle: the top boundary is the curve $y=\sin(x)$, which is curved, not a straight line. To find that area, integrate:
$$
\int_{5\pi/6}^{\pi} \sin(x)\, dx = \Bigl[-\cos(x)\Bigr]_{5\pi/6}^{\pi} = 1-\frac{\sqrt3}{2}
$$
Using this in place of your triangle estimate $\frac{\pi}{24}$ gives the shaded area $2-\frac{5\pi}{24}-\left(1-\frac{\sqrt3}{2}\right)=1+\frac{\sqrt3}{2}-\frac{5\pi}{24}$.
|
H: Algebra structure of $\mathbb{k}^S$
I'm trying to write explicitly how the algebra structure of $\mathbb{k}^S$ works on its basis elements, where $\mathbb{k}$ is a field and $S$ any finite set.
Let's call its basis $B:=\{e_s:s \in S\}$, where the elements are defined as $e_s: t \mapsto \delta_{s,t}$. Now, I would like to show that
$e_s \cdot e_t = \delta_{s,t}e_s$
$1_{\mathbb{k}^S} = \sum_{s \in S}e_s$
given that, if $f,g \in \mathbb{k}^S, (f\cdot g)(s) =f(s)g(s)$ and $(f+g)(s) =f(s)+g(s)$.
My attempt for the first equality is that $e_s \cdot e_t (x) = e_s(x)e_t(x)=\delta_{s,x}\delta_{t,x}$ which is not zero if $x=s$, so $e_s(x)e_t(x)=\delta_{t,s}$ but it's not right.
For the second equality I can't link the basis elements with the unit. Shouldn't it be $1_{\mathbb{k}^S}=1_{\mathbb{k}}$?
Can anyone help me?
AI: Let's name your algebra $A:=\Bbb{k}^S$. For any $f\in A$ we have, by definition, $f=\sum_{s\in S}f(s)\,e_s$, that is, $f(t)=\sum_{s\in S}f(s)\,\delta_{s,t}\in\Bbb{k}$.
(Just think of $f\in A$ as a vector, and $f(t)$ as its $t$-th component.)
Thus, for any $x\in S$, $$(e_s\cdot e_t)(x)=e_s(x)e_t(x)=\delta_{s,x}\delta_{x,t}=\delta_{s,t}\delta_{s,x}=\delta_{s,t}e_s(x),$$
that is, $e_s\cdot e_t=\delta_{s,t}e_s$.
For the second equality, notice that $(f\cdot e_s)(x)=f(x)e_s(x)=f(x)\delta_{x,s}=f(s)\delta_{x,s}$. Repeat for all $s\in S$ and apply the distributive property to check that
$$f\cdot\sum_{s\in S}e_s\ (x)=\sum_{s\in S}(f\cdot e_s)(x)= \sum_{s\in S}f(x)\delta_{x,s}=f(x)\sum_{s\in S}\delta_{x,s}=f(x)=(f\cdot1_A)(x),$$
that is, $1_A=\sum_{s\in S}e_s$.
|
H: How to find $k$ from $f(x)\;=\;\frac c{1+a\cdot b^x}$?
The population of a culture of bacteria is modeled by the logistic equation:
$P(t)\;=\;\frac{14,250}{1+29\cdot e^{-0.62t}}$
To the nearest tenth, how many days will it take the culture to reach 75% of its carrying capacity?
What is the carrying capacity?
What is the initial population for the model?
$P(0)\;=\;\frac{14,250}{1+29\cdot e^{-0.62\cdot0}}\\P(0)\;=\;\frac{14,250}{30}=475\\P(0)\;=\;P_0\;=\;475\\$
Now, we got $P_0$.
75% is $\frac{3}{4}$.
So,
$P(t)\;=\;P_0e^{k\cdot t}\\\frac34\;=\;475\cdot e^{k\cdot t}$
I'm stuck at finding $k$.
Not sure I'm approaching in the right way.
AI: The carrying capacity is the limiting (supremum) value of $P(t)$. Maximising $P$ requires minimising the denominator, which is $$1+29e^{-0.62t},$$ and this is approached as $t\to \infty$, where the denominator tends to $1$. The carrying capacity hence is $$14250$$ Now, you need to find $t$ for which $P(t)=75\%$ of $14250$, i.e. $$\frac 34 \times 14250 =\frac{14250}{1+29e^{-0.62 t}} \\ \implies29e^{-0.62t}=\frac 13 \\ \implies e^{0.62t}=87 \\ \implies t=\frac{\ln 87}{0.62}\approx 7.2$$
|
H: Let $ABC$ be a triangle and $M$ be the midpoint of $BC$. Squares $ABQP$ and $ACYX$ are erected. Show that $PX = 2AM$.
$\textbf{Question:}$ Let $ABC$ be a triangle and $M$ be the midpoint of $BC$. Squares $ABQP$ and $ACYX$ are erected. Show that $PX = 2AM$.
I could solve this problem using computational techniques but I am looking for purely synthetic solution. I tried drawing some extra midpoints, connected them. But still couldn't find the solution.
Any kind of hint or full solution both are appreciated.
AI: Rotate triangle $AXP$ by $90^{\circ}$ around $A$ so that $P$ goes to $B$ and $X$ goes to a new point $X'$. Then $PX$ goes to $BX'$.
But $X',A,C$ are collinear, and $A$ is the midpoint of $X'C$; so $AM$ is a medial line in triangle $X'BC$. Hence $X'B = 2AM$.
Notice that we prove also $PX\bot AM$.
|
H: Order of multiplication of matrices $A$ and $A^n$
I've come across a problem where I need to find a matrix $A^{n+1}$, where I was given matrices $A$ and $A^n$. I multiplied them like this: $A^n\cdot A$, but I was obviously wrong since the result is adequate for $A\cdot A^n$.
Now, I know that $A\cdot B\neq B\cdot A$, but how would I decide the order of multiplication if I was "only" given the exponent, as in this case? Should I always put a matrix with a smaller exponent first? If so, why?
AI: Powers of a single object commute.
\begin{align*}
A^m A^n &= \underbrace{A \times \cdots \times A}_{\text{$m$ copies of $A$}} \times \underbrace{A \times \cdots \times A}_{\text{$n$ copies of $A$}} \\
&= \underbrace{A \times \cdots \times A}_{\text{$m+n$ copies of $A$}} \\
&= \underbrace{A \times \cdots \times A}_{\text{$n+m$ copies of $A$}} \\
&= A^n A^m \text{.}
\end{align*}
In your particular example,
\begin{align*}
A \times A^n &= A \times \underbrace{A \times \cdots \times A}_{\text{$n$ copies of $A$}} \\
&= \underbrace{A \times \cdots \times A}_{\text{$n+1$ copies of $A$}} \\
&= \underbrace{A \times \cdots \times A}_{\text{$n$ copies of $A$}} \times A \\
&= A^n \times A \text{.}
\end{align*}
|
H: $\frac 1 {1 + \epsilon} \le 1 - \frac \epsilon 2$ for $\epsilon \in (0, \frac 1 2)$
How can we show that the following holds for $\epsilon \in (0, \frac 1 2)$?
$$
\frac 1 {1 + \epsilon} \le 1 - \frac \epsilon 2
$$
I thought, maybe it would be more convenient to try to show somehow that $\frac 1 {1 + \epsilon} + \frac \epsilon 2 \le 1$. And maybe use the fact that $\frac {1 + \epsilon} {1 - \epsilon} > 1 + 2 \epsilon$.
Best regards
AI: You can show this directly (and even for a wider interval for $\epsilon$) by rearranging the inequality:
$$\frac 1 {1 + \epsilon} \le 1 - \frac \epsilon 2 \Leftrightarrow 1\leq 1+\epsilon - \frac \epsilon 2(1+\epsilon) = 1 + \frac 12 \epsilon(1-\epsilon)$$
Since $\epsilon(1-\epsilon) \geq 0$ for $\epsilon \in [0,1]$, you are done.
|
H: A set of $n$ distinct items divided into $r$ distinct groups
A set of $n$ distinct items is to be divided into $r$ distinct groups of respective sizes $n_1, n_2, \ldots, n_r$, where $\sum_{i=1}^{r}n_i=n$.
How many different division are possible ?
Because every permutation yields a division of the items and every possible division results from some permutation, it follows that the number of divisions of $n$ items into $r$ distinct groups of sizes $n_1, n_2, ... , n_r$ is the same as the number of permutations of $n$ items of which $n_1$ are alike, and $n_2$ are alike, ..., and $n_r$ are alike.
Can somebody explain why and how every permutation yields a division of the items and every possible division results from some permutation part?
(This question is taken from Sheldon.M.Ross First Course in Probability book)
AI: You can permute all the $n$ distinct objects in a line. To make $r$ groups of these, take the first $n_1$ elements in this permutation and put them in group 1, the next $n_2$ elements and put them in group 2, etc. So, each permutation of $n$ elements is division of those elements in $r$ such groups. However, this division isn't unique, since the number of permutations of $n$ elements also includes permutations within the group's elements, even if they have the same elements. You divide $n!$ with the number of permutations of the first group, then second, and so on.
Another way of looking at it is first choose $n_1$ items from $n$, assign them to group 1, then choose $n_2$ items from $n-n_1$, assign those to group 2, and so on.
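In formula form, both arguments give the multinomial coefficient $\frac{n!}{n_1!\,n_2!\cdots n_r!}$. Here is a small sketch of that count (my own addition, with an illustrative example):

```python
from math import factorial

def divisions(n, sizes):
    # n! divided by the permutations within each group, i.e. the
    # multinomial coefficient n! / (n_1! n_2! ... n_r!)
    assert sum(sizes) == n
    count = factorial(n)
    for s in sizes:
        count //= factorial(s)
    return count

print(divisions(10, [5, 3, 2]))   # 10!/(5! 3! 2!) = 2520
```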
|
H: How to calculate the triple integral $\iiint_{\Omega_t}\frac{1}{(x^2+y^2+z^2)^\frac32}$?
$\iiint_{\Omega_t}\frac{1}{(x^2+y^2+z^2)^\frac32}$, where $\Omega_t$ is the ellipsoid $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}\leq 1$
I want to use the change variable for $u=\frac xa, v=\frac yb,w=\frac zc$, but after this, then we may have a complicated form for inside thing.
How can I do for this integral. Should I think about the Gauss Theorem?
AI: Hint
Do the substitution $$(x,y,z)=(ar\sin(\theta )\cos(\varphi ),br\sin(\theta )\sin(\varphi ),cr\cos \theta ),$$
where $\theta \in (0,\pi)$ and $\varphi \in (0,2\pi)$ and $r\in(0,1)$.
|
H: Is this Hilbert space construction legal?
Let $X=\ell^2=\{(\xi_i)_{i\in\mathbb{N}}\in\mathbb{R}^\mathbb{N}\, | \, \sum_{i\in\mathbb{N}}|\xi_i|^2<\infty\}$ be the real Hilbert space of square-summable sequences, and for every $j\in\mathbb{N}$ set
$$ X_j=\{(\xi_i)_{i\in\mathbb{N}}\in X\, | \, \forall i\neq j, \xi_i=0\}.$$
It follows that $(X_j)_{j\in \mathbb{N}}$ are closed mutually orthogonal vector subspaces of $X$, and hence they are also real Hilbert spaces in their own right using the subspace topology inherited from $\ell^2$ (same norm, inner product, etc). Now, consider the "direct sum" of the $X_i$s: $(\hat{X}, \langle\cdot\, |\, \cdot \rangle)$ where
$$\hat{X}=\{ (x_j)_{j\in \mathbb{N}} \in \times_{j\in\mathbb{N}}X_j \, | \,\|(x_j)_{j\in J}\|^2_{\hat{X}}= \sum_{j\in\mathbb{N}} \|x_j\|^2_{\ell^2} < +\infty\}.\tag{1}$$
and
$$\langle (x_j)_{j\in J} \, | (y_j)_{j\in J} \rangle = \sum_{j\in\mathbb{N}} \langle x_j \, | \, y_j\rangle_{\ell_2}.\tag{2}$$
Is $(\hat{X}, \langle\cdot\, |\, \cdot \rangle)$ a valid real Hilbert space?
My colleague claims that it is not: The elements of $\hat{X}$ are a countable product of countable sequences, so in a sense it is impossible to sum them formally. Essentially they're saying that we can not append a sequence onto another sequence countably many times as done in (1). However, I have seen this direct sum construction in several books, and proofs showing it is a valid Hilbert space do not rely on the structure of the underlying $X_i$'s at all (other than the fact that they are real Hilbert spaces to start with). What do you think?
AI: I guess you have a typo and want to have
$$\hat X = \{(x_j) \mid \sum_j \lVert x_j \rVert^2_{\ell^2} < \infty \}.$$
Then $\hat X$ is a Hilbert space. Your construction works for any collection of Hilbert spaces $X_j$, not only for the subspaces $X_j \subset \ell^2$ defined in your question.
See for example section "Direct sums" in https://en.wikipedia.org/wiki/Hilbert_space#Direct_sums.
Note that if you define
$$\tilde X = \{(x_j) \mid \sum_j \lVert x_j \rVert_{\ell^2} < \infty \},$$
then $\tilde X \subset \hat X$, thus your definition
$$\langle (x_j), (y_j) \rangle = \sum_{j\in\mathbb{N}} \langle x_j , y_j\rangle_{\ell_2}$$
defines an inner product on $\tilde X$. However, $\tilde X$ is not a Hilbert space - it lacks completeness.
By the way, in your example the $X_j$ are one-dimensional subspaces of $\ell^2$ and therefore isomorphic to $\mathbb R$. Thus $\hat X$ is nothing else than a copy of $\ell^2$.
|
H: Extension linear independent set to Hamel basis
If we have a linearly independent set, then it is well known that, by using Zorn's lemma, it can be extended to a Hamel basis. My question: I have a linearly independent set, call it $B_{0}$, and I want to extend it by transfinite induction to a Hamel basis $B$ in such a way that the set $B_{00}=B\setminus B_0$ satisfies a specific condition. (I am using "perfect" as an example, but in my case the condition is not perfectness.) I have not seen a construction like this before; we usually use Zorn's lemma to extend any linearly independent set. Any help will be useful.
AI: Let me repeat the proof that you want to do. Assume that we have a linearly independent set $X$, and extend it by adding elements. We may iterate adding elements, and it results in some linearly independent set. (We take a union in limit case.) Formally, we can think of it as follows:
$B_0= X$,
$B_{\alpha+1} = B_\alpha\cup\{v\}$, where $v$ is not generated by elements of $B_\alpha$, and
$B_\delta=\bigcup_{\alpha<\delta}B_\alpha$.
The iterating process will stop when we cannot choose any $v$ which is not generated by $B_{\alpha}$: in that case, $B_\alpha$ is a basis.
We may ask there is such an $\alpha$. How can we assure the existence? Mathematicians use proof by contradiction when they have no idea how to start. Let us do it in that way.
Assume that for each $\alpha$, we can find $v$ which is not generated by $B_\alpha$. Then we can repeat the iteration for all ordinals. Let $\langle B_\alpha \mid \alpha\in\mathrm{Ord}\rangle$ be the resulting sets.
We can see that $B_\alpha\subsetneq B_\beta$ if $\alpha<\beta$.
From this, you can see that the vector space $V$ contains $\mathrm{Ord}$-many elements: that is, for each $\alpha$, we have $v_\alpha$ such that $B_{\alpha+1}= B_\alpha\cup\{v_\alpha\}$. $\{v_\alpha\mid \alpha\in\mathrm{Ord}\}$ is a collection of pairwise different elements of $V$, and it makes the collection a proper class.
Is it possible? No, this is not possible, a set (in this case, $V$) cannot contain a proper subclass. Thus we have a contradiction.
|
H: Finding the Center of Mass of a disk when a part of it is cut out.
From a uniform disk of radius $R$ a circular disk of radius $\frac{R}{2}$ is being cut out.
The center of the "cut out" disk is at $R/2$ from the venter of the original disk. We have to find the center of mass of leftover body.
I thought that we should set up a coordinate system with the center of original disk as the origin. The formula for center of mass is
$$
\vec{R}_{CM} = \frac{1}{M_{tot}} \int \vec{r} ~dm$$
I thought of creating another identical region (identical to what is being cut out) on the left of $O$. Like this
By symmetry, any position vector $\vec{r}$ outside the encircled region (on the left) will have it's counter-part and hence it will cancel up. So, we need to worry only about the integral inside the circular region, even there by symmetry we know that $\vec{R}_{CM}$ will lie on the axis joining their centers (let's call the line joining all three centers as $x$-axis and the line perpendicular to this line as $y$-axis). If we use the polar coordinate then we have
$$
R_{CM} = \frac{1}{M_{tot}} 2\int \int r \cos \theta \sigma dA\\
\text{(I have written $2r\cos \theta$ because that's the thing we would get when we add any two}\\
\text{vectors in that encircled region, $\sigma$ is the mass per unit area, and $dA$ is the area element)}\\
R_{CM} = \frac{1}{M_{tot}}2 \sigma \int_{r=0}^{R} \int_{\theta=0} r \cos \theta ~dA
$$
But the problem is that I don't know the upper limit of $\theta$, I worked hard but it seemed a little different in this case.
We can use our ordinary cartesian system,
$$
R_{CM} = \frac{1}{M_{tot}} 2\sigma \int \int x dx dy$$
the limit of $x$ will be (I think) $0$ to $R$ and we can get $y$ as
$$
(x-R/2)^2 + y^2 = R^2/4 \\
y= \sqrt{ x^2 - Rx}$$
So, we have
$$
R_{CM} = \frac{1}{M_{tot}} 2\sigma \int_{x=0}^{R} \int_{y=0}^{\sqrt{x^2-Rx}} x dy dx\\
R_{CM} = \frac{1}{M_{tot}} 2\sigma \int_{x=0}^{R} x\sqrt{x^2-Rx} ~dx
$$
I don't know how to carry out that last integral.
The answer to this question is "$R/6$ to the left of O" but where am I mistaking? Can someone help me out?
AI: Given the symmetry you noted, it seems not worth it to do the integral in polar coordinates when you know the center of mass will only have an $x$. Let's rotate your picture upside down. Notice that computing the integral will give us
$$\iint\limits_{\text{shaded region}}^{} x\:dA = \iint\limits_{\text{small circle on right side}}^{} x\:dA$$
by symmetry because $x$ is an odd function. Next, you want integrate w.r.t. $x$ first because the square roots will cancel
$$\bar{x} =\frac{4}{3\pi R^2}\int_{-\frac{R}{2}}^\frac{R}{2} \int_{\frac{R}{2}-\sqrt{\frac{R^2}{4}-y^2}}^{\frac{R}{2}+\sqrt{\frac{R^2}{4}-y^2}} x\:dx \:dy = \frac{4}{3\pi R}\int_{-\frac{R}{2}}^\frac{R}{2} \sqrt{\frac{R^2}{4}-y^2}\:dy = \frac{R}{6}$$
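A Monte Carlo sanity check of the $R/6$ answer (my own sketch; rejection sampling from the square $[-R,R]^2$, with the hole on the left to match the rotated picture above):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0
n = 10**6

# Uniform points in the big disk of radius R, with the cut-out disk of
# radius R/2 centered at (-R/2, 0) removed.
pts = rng.uniform(-R, R, size=(n, 2))
in_big = pts[:, 0]**2 + pts[:, 1]**2 <= R**2
in_hole = (pts[:, 0] + R / 2)**2 + pts[:, 1]**2 <= (R / 2)**2
kept = pts[in_big & ~in_hole]

print(kept[:, 0].mean())   # ~ R/6 ≈ 0.1667
```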
|
H: Isometry that translates a geodesic has constant displacement
In a Riemannian manifold $M$ we are given an isometry $\alpha$ that translates a geodesic $\gamma$, meaning that $\alpha(\gamma) \subseteq \gamma$, I would like to show that $d(\gamma(t), \alpha\gamma(t))$ is constant for all $t$.
I tried letting $s > 0$ such that $\alpha\gamma(0) = \gamma(s)$ and trying to prove that $d(\gamma(t), \alpha\gamma(t)) = d(\gamma(0), \gamma(s)) = s$ (supposing unit length parametrization) but so far I'm quite stuck, any idea on how to proceed?
AI: This is in general not true. Think of $M = \mathbb R$ with the standard metric and $\alpha (x) = -x$. If $\gamma$ is the geodesic given by $\gamma(t) = t$, then
$$ d(\gamma (t), \alpha(\gamma(t))) = 2|t|$$
depends on $t$.
|
H: $\int_0^{\infty} \frac {x.dx}{(x^2+a^2)^{\frac 32}(x^2+b^2)} $
I am solving a problem where I need to find the charge distribution on a conducting plate, and the field due to it. I'm stuck on this integral.
$$\int_0^{\infty} \frac {x.dx}{(x^2+a^2)^{\frac 32}(x^2+b^2)} $$
AI: Hint:
(I leave the intermediate steps for you as an exercise)
First substitute $u = \sqrt{x^2 + a^2}\implies\mathrm dx = \dfrac{\sqrt{x^2 + a^2}}x\,\mathrm du$.
$$\int\dfrac{x}{\left(x^2 + a^2\right)^{3/2}\left(x^2 + b^2\right)}\,\mathrm dx\equiv\int\dfrac{1}{u^2\left(u^2 + b^2 - a^2\right)}\,\mathrm du$$
Next, perform partial fraction decomposition.
$$\int\dfrac{1}{u^2\left(u^2 + b^2 - a^2\right)}\,\mathrm du\equiv\dfrac{1}{b^2 - a^2}\int\dfrac{1}{u^2}\,\mathrm du - \dfrac{1}{b^2 - a^2}\int\dfrac{1}{u^2 + b^2 - a^2}\,\mathrm du$$
Finally, substitute $v = \dfrac{u}{\sqrt{b^2 - a^2}}\implies\mathrm du = \sqrt{b^2 - a^2}\,\mathrm dv$ (assuming $b^2 > a^2$; if $b^2 < a^2$ the second term integrates to an inverse hyperbolic function instead).
Can you take it from here?
|
H: What is the limit of $\sum\limits_{i=1}^N (\frac{i}{N})^l\frac{1}{N}$?
I would like to know the limit of the above summation as $N$ tends to infinity. Thank you guys for hints or solutions. Cheers.
AI: It is the definition of the Riemann integral of $x^l$ over $[0,1]$:
$$
\int_{0}^{1}x^l\,dx=\frac{1}{l+1}
$$
|
H: How many unordered pairs of positive integers $(a,b)$ are there such that $\operatorname{lcm}(a,b) = 126000$?
How many unordered pairs of positive integers $(a,b)$ are there such that $\operatorname{lcm}(a,b) = 126000$?
Attempt:
Let $h= \gcd(A,B)$ so $A=hr$ and $B=hp$, and $$phr=\operatorname{lcm}(A,B)=3^2\cdot 7\cdot 5^3 \cdot 2^4\,.$$ Let $p = 3^a5^b7^c2^d$ and $r = 3^e 5^f 7^g 2^s$. Notice, that given $p$ and $r$, $h$ is determined, so we can count $p$ and $r$. Multiplying $p$ and $r$ we get $$pr = 3^{(a+e)} 5^{(b+f)} 7^{(c+g)} 2^{(d+s)}\,,$$ and so $a+e = 0,1,2$.
For the first case we have $0+1 = 1$ possibility, similarly $2$ and $3$ for the other cases, so the total number is $6$. For $b+f$ we have $b + f = 0,1,2,3$ giving $10$ options. Similarly for $c + g$ we have $3$ choices and for $d + s$ we have $$1+2+3+4+5 = 15$$ choices. Multiplying these together, we get $$15\cdot 3\cdot 6\cdot 10 = 60\cdot 45 = 2700\,,$$ which is not equal to the given answer of $473$.
Edit: sorry for the weird variables. I think I've fixed everything, if not, please do point it out
AI: I have explained why the OP did not get a correct solution. See my comment here. Below is a generalization of the OP's problem.
For positive integers $k$ and $l$, let $f_k(l)$ denote the number of ordered $k$-tuples $(n_1,n_2,\ldots,n_k)\in\mathbb{Z}^k_{>0}$ such that $$\text{lcm}(n_1,n_2,\ldots,n_k)=l\,.$$
Observe that $f_1(l)=1$ always.
Write
$$l=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_r^{\alpha_r}\,,$$
where $p_1,p_2,\ldots,p_r$ are pairwise distinct prime natural numbers and $\alpha_1,\alpha_2,\ldots,\alpha_r\in\mathbb{Z}_{>0}$. Then,
$$n_i=p_1^{\beta_{i,1}}p_2^{\beta_{i,2}}\cdots p_r^{\beta_{i,r}}$$
for some integers $\beta_{i,j}$ such that
$$0\leq \beta_{i,j}\leq \alpha_j$$
for $j=1,2,\ldots,r$. However, for each $j=1,2,\ldots,r$, at least one $i\in\{1,2,\ldots,k\}$ must satisfy $\beta_{i,j}=\alpha_j$. For a fixed $j=1,2,\ldots,r$, the number of ways to make $\beta_{i,j}<\alpha_j$ for all $i=1,2,\ldots,k$ is $\alpha_j^k$. Hence, the number of ways to make $\beta_{i,j}=\alpha_j$ for some $i=1,2,\ldots,k$ is $$\big(\alpha_j+1\big)^{k}-\alpha_j^k\,.$$
This means
$$f_k(l)=\prod_{j=1}^r\,\big((\alpha_j+1)^k-\alpha_j^k\big)\,.$$
In particular,
$$f_2(l)=\prod_{j=1}^r\,(2\alpha_j+1)\,.$$
Now, calculate $f_2(126000)$.
Now, let $\tilde{f_k}(l)$ denote the number of unordered $k$-tuples $(n_1,n_2,\ldots,n_k)\in\mathbb{Z}^k_{>0}$ such that $$\text{lcm}(n_1,n_2,\ldots,n_k)=l\,.$$
In the case $k=2$, we have
$$\tilde{f_2}(l)=\dfrac{1+f_2(l)}{2}\,.$$
Why is that so? What is $\tilde{f_2}(126000)$?
For a general value of $k$, counting unordered $k$-tuples is a tricky combinatorial problem. I think the easiest way might be using Burnside's Lemma. Using Burnside's Lemma, we have
$$\tilde{f_k}(l)=\sum_{\substack{(t_1,t_2,\ldots,t_k)\in \mathbb{Z}_{\geq 0}^k\\ \sum\limits_{\mu=1}^k\,\mu\,t_\mu=k}}\,\left(\frac{f_{\sum\limits_{\mu=1}^k\,t_\mu}(l)}{\prod\limits_{\mu=1}^k\,\big(\mu^{t_\mu}\cdot t_\mu!\big)}\right)\,.$$
For example,
$$\tilde{f_3}(l)=\frac{2+3\,f_2(l)+f_3(l)}{6}\,.$$
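For readers who want to check the numbers, here is a short Python sketch (my own addition) confirming $f_2(126000)=945$ by brute force, and hence $\tilde{f_2}(126000)=473$:

```python
from math import gcd

L = 126000  # = 2^4 * 3^2 * 5^3 * 7

# f_2(L) from the formula: product over the prime exponents of (2*alpha + 1)
f2 = 1
for alpha in (4, 2, 3, 1):
    f2 *= 2 * alpha + 1

# brute-force cross-check: count ordered divisor pairs (a, b) with lcm(a, b) = L
divs = [d for d in range(1, L + 1) if L % d == 0]
ordered = sum(1 for a in divs for b in divs if a * b // gcd(a, b) == L)

print(f2, ordered)    # both 945
print((f2 + 1) // 2)  # 473 unordered pairs
```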
|
H: What exactly is a constant angle?
I previously asked a question about what a non constant angle is but it was closed due to lack of clarity and hence, I'm posting a new question. The notation $x^c$ will be used in this question to represent $x$ radians.
So, a proof in my Mathematics textbook is about proving that radian is a constant angle and it has been proved by expressing $1^c$ in terms of two constants, i.e. $1^c = \dfrac{\text {a straight angle}}{\pi}$.
As I understood it at first, a constant angle measurement is one that does not depend on the length of its initial or terminal sides. But I realized that this is vacuous: if some measurement depended on the lengths of the initial or terminal sides, then it could not qualify as an angle measurement at all, because an angle measurement is only supposed to measure the degree of rotation from the initial to the terminal side, which does not depend on the length of the sides.
User Somos stated, in my previous question, that "In many physical applications, an angle varies in time, and hence is not a constant angle. Think of a rotating propeller, for example." I didn't quite get what he meant. The angle might change with respect to time (for example, the angle between the arms of a compass may be $30^\circ$ at one point in time and $45^\circ$ at another), but that doesn't mean that under certain circumstances the value of an angle of $30^\circ$ would change, right?
Did I misunderstand something in this?
Now, the two questions I have are :
What was the meaning of Somos' statement?
When we say that an angle measurement is constant, with respect to what factor are we declaring it constant?
Thanks!
AI: I don't know which mathematics textbook you are using, but the angle equation
is essentially relating two angles by giving their ratio, namely the well-known
real constant $\,\pi\,$. Of course, the two angles are also
well-known constant angles themselves. In this context, the reference to the
adjective constant seems unhelpful to me. It does not add any useful
information for the reader, and can be a potential cause of confusion.
In the context of physics, there are many physical constants such
as the speed of light. These constants come with units of measure
such as meters per second. Originally they were measured but more
recently some of them are defined to have exact values. Refer to
the Wikipedia article Metric system for details. One exceptional case is the
measure of angles.
The Wikipedia article Degree states
It is not an SI unit, as the SI unit of angular measure is the radian, but it is mentioned in the SI brochure as an accepted unit.[4] Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians.
So the measure of an angle is regarded as the ratio of two
quantities with the same units although a selected angle can
be regarded as a unit angle. This is the case for degrees and
radians which have exact values.
|
H: Let $f:A \rightarrow B$ be a bijective map, and let $P,Q \subseteq A$ be any sets. Then $f(P-Q)=f(P)-f(Q)$
Can you please check my proof of the following theorem?
Theorem: Let $f:A \rightarrow B$ be a bijective map, and let $P,Q \subseteq A$ be any sets. Then $f(P-Q)=f(P)-f(Q)$.
Proof: Let $x \in f(P)-f(Q)$. Hence $x \in f(P)$ and $x \notin f(Q)$. From the former, we deduce that there exists some $a \in P$ such that $f(a)=x$. Let $a_0 \in P$ be that element. We know that $f(a_0) \notin f(Q)$, so $a_0 \notin Q$. Then we have that $a_0 \in P-Q$, and it follows that $x=f(a_0) \in f(P-Q)$. Therefore $f(P)-f(Q) \subseteq f(P-Q)$.
Let $y \in f(P-Q)$. Since $f$ is bijective, there is only one $b \in P-Q$ such that $f(b)=y$. Let $b_0 \in P-Q$ be that element. We know that $b_0 \in P-Q$, so $b_0 \in P$ and $b_0 \notin Q$. From that we conclude that $f(b_0) \in f(P)$ and $f(b_0) \notin f(Q)$. Hence $f(b_0) \in f(P)-f(Q)$, so $y \in f(P)-f(Q)$. Therefore $f(P-Q) \subseteq f(P)-f(Q)$. $\square$
I'm not sure about the second part of the proof. My doubts are:
I'm not quite sure if we can deduce that $f(b_0) \notin f(Q)$ from $b_0 \notin Q$.
The fact that $f$ is bijective allows us to conclude that such a $b_0$ is unique, and (for example) it could never be the case that one element of $P$ and another element of $Q$ have the same image (which would imply that such an image could be in $f(P-Q)$ but not in $f(P)-f(Q)$), right?
Thank you for your attention!
AI: Using the definition of image, you have $f(b_0) \in f(Q)$ if and only if there exists an element $b \in Q$ such that $f(b) = f(b_0)$. Since $f$ is injective, such a $b$ must equal $b_0$, so $f(b_0) \in f(Q)$ implies $b_0 \in Q$. Taking the contrapositive of this statement, you get: if $b_0 \notin Q$ then $f(b_0) \notin f(Q)$ ($a \implies b$ if and only if $\neg b \implies \neg a$).
In general, for an arbitrary function $f:A\to B$ you do not have this property. A very trivial example is the function $f:\{0,1\}\to \{0\}$: we have $f(\{0\})-f(\{1\})=\emptyset$, but $f(\{0\}-\{1\})=f(\{0\})=\{0\}$.
|
H: Proving combinatorial identities
Prove $\displaystyle\sum_{k=1}^n kx^k{x\choose k}=nx(1+x) ^{n-1}$
This question could be solved easily (by taking the derivative of the binomial theorem formula) if there were an $\binom{n}{k}$ instead of $\binom{x}{k}$. I mean, the presence of $\binom{x}{k}$ seems a bit fishy. So, is this question correct? If yes, could you give a hint on how to solve it?
Source: A first course in probability theory, Sheldon Ross (9th edition)
AI: It's probably just a typo.
this question can be solved easily (by taking the derivative of the binomial theorem formula)
and multiplying both sides by $x$ you get
$nx(1+x)^{n-1} = \sum\limits_{k=1}^n k x^k{n\choose k}$.
If what is typed were also true we'd have
$ \sum\limits_{k=1}^n k x^k{n\choose k} = \sum\limits_{k=1}^n k x^k{x\choose k}$
.... for every possible value of $n$ and $x$. That's surely nuts!
A simple counter-example would be $x=1$.
The left hand side $n(1+1)^{n-1} = \sum\limits_{k=1}^n k {n\choose k}= n2^{n-1}$ for all $n$. (Mildly interesting... so $5*2^4 = 80$ is equal to ${5\choose 1}+2{5\choose 2} + 3{5\choose 3}+4{5\choose 4}+5{5\choose 5}=5+20 + 30 + 20+5$.... cool.... I guess.....That's a funny side shift I never noticed... well, I'll play with that later... but I digress)
But the right hand side is $\sum\limits_{k=1}^n k\,1^k{1\choose k}= 1\cdot 1^1{1\choose 1} + \sum\limits_{k=2}^n k\cdot 1^k{1\choose k} = 1 + \sum\limits_{k=2}^n 0 = 1$.
Obviously $n2^{n-1} =1$ is not a universal identity for all $n$ so the statement is false.
It's probably just a typo.
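For what it's worth, the corrected identity (with $\binom{n}{k}$) is easy to confirm numerically; a quick sketch with arbitrary sample values of $x$:

```python
from math import comb, isclose

for n in range(1, 10):
    for x in (0.5, 1.0, 2.0, -0.3):  # arbitrary sample values
        lhs = sum(k * x**k * comb(n, k) for k in range(1, n + 1))
        rhs = n * x * (1 + x)**(n - 1)
        assert isclose(lhs, rhs)
print("the identity with binom(n, k) checks out")
```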
|
H: In Halmos' Naive Set Theory, how can there be more than 1 successor set?
The axiom of infinity clearly states that there exists a set $A$ containing $0$ and the successor of its elements.
Shortly after introducing this axiom, Halmos goes to say:
Since the intersection of every non-empty family of successor sets is a successor set itself...
But by the axiom we're only guaranteed one successor set - $A$. How can we then have a family of the same repeated set?
Edit:
Recently today I asked a question which I since deleted about the definition of successor sets. Mainly - Halmos defines the successor set:
We shall say, temporarily, that a set $A$ is a successor set if $0 \in A$ and if $x^+ \in A$ whenever $x \in A$
This gave me the illusion that there's only one such set.
AI: There are, in fact, lots of successor sets. Start with any set $B$, then
$$
A \cup \{B, B^+, B^{++}, B^{+++}, \dots\}
$$
is a successor set. You are correct that the axiom guarantees only one successor set. But it leaves open the possibility that there are many of them.
|
H: Given two polynomials, determine two other degrees through polynomial division
I read in a proof, where it says something in this direction
Divide any polynomial $Q$ of degree $2n-2$ by $P$ (of degree $n$) and get an equation $Q=SP+R$ with $S$ and $R$ polynomials of degree at most $n-2$ and $n-1$, respectively.
If two polynomials $P,Q$ are given with $\deg Q=2n-2$ and $\deg P=n$, it is well-known that we can write $Q=SP+R$ for some unique polynomials $S$ and $R$ with $\deg R<\deg P$. The question is, how do you see that $\deg S\leq n-2$ and $\deg R\leq n-1$, as stated above?
AI: The sum of the degrees of $S$ and $P$ must equal the degree of $Q$, so $$\deg S +n = 2n-2\implies \deg S = n-2$$
Since $\deg R < \deg P=n$ we have $\deg R\leq n-1$.
|
H: Example of $X_n$ which converge a.s. but not in mean
Provide an example of a sequence of random variables which converge a.s. but not in mean.
I know that the random variables $X_n=n\cdot\mathbb{1}_{(0,\frac{1}{n})}$ (defined on $(0,1)$ with the uniform probability) converge in probability to $0$, as given any $\varepsilon>0$
\begin{align*}
P(|X_n-0|>\varepsilon)=P(X_n>\varepsilon)\le P(X_n>0)=P\big(\big(0,\frac{1}{n}\big)\big)=\frac{1}{n}\to 0
\end{align*}
However, they do not converge in mean as
\begin{align*}
E|X_n-0|=E\big(n\cdot\mathbb{1}_{(0,\frac{1}{n})}\big)=n\cdot P\big(\big(0,\frac{1}{n}\big)\big)=n\cdot\frac{1}{n}=1\,\,\text{for all } n
\end{align*}
So, my question here is: do these $X_n$ converge to $0$ a.s.? And if so, how does one show this rigorously with an $\varepsilon$ proof? I know we need to find $N\in\mathbb{N}$ such that $X_n<\varepsilon$ for all $n\ge N$. To this end, we can make the lengths of the intervals $\big(0,\frac{1}{n}\big)$ arbitrarily small, but the multiplication by $n$ stops $X_n$ from being arbitrarily small, so I am sort of thinking that these $X_n$ do not converge a.s. If that's the case, what's an example that would work here?
AI: You want to prove that the event $\lbrace X_n(\omega) \rightarrow 0 \rbrace$ has probability $1$.
I claim that this event is the universe $\Omega=(0,1)$ itself. In fact, for any $\omega \in (0,1)$ we have $\omega \notin \big(0,\frac1n\big)$ as soon as $n > 1/\omega$, so the sequence $X_n(\omega)$ eventually vanishes.
|
H: Expected number of coin side changes in a sequence of coin tosses with unfair coin
Suppose we have an unfair coin with probability $p$ of heads and $1-p$ of tails. In a series of $n$ coin tosses, what is the expected number of times that the coin side changes? For instance, if we toss the coin 5 times and the following sequence comes up: $$THHHT$$
Then the coin side has changed 2 times (from tails to heads at the beginning and from heads to tails at the end).
If $X$ is the random variable counting the number of side changes, we need $E[X]$. However, I'm really struggling to find the probability of a side change. I tried to use conditionals but had no luck.
AI: Suppose we flipped the coin $n$ times.
In order to help organize our thoughts, let's define several random variables. (With more practice, you can easily skip defining so many random variables, but I think it will be beneficial to help explain what is going on).
Let $H_1,H_2,H_3,\dots,H_n$ be the indicator random variable that takes value $1$ if the corresponding coin flip was heads and $0$ otherwise.
Let $T_1,T_2,T_3,\dots,T_n$ be the indicator random variable that takes value $1$ if the corresponding coin flip was tails and $0$ otherwise. (Note: $T_i = 1 - H_i$)
Let $X_1,X_2,X_3,\dots,X_{n-1}$ be the indicator random variable that takes value $1$ if there was a "coin side change" that occurred from the corresponding numbered coin to the next. (Note: $X_i = H_iT_{i+1}+T_iH_{i+1}$) (Note also: we stop here at $n-1$ because there is no coin after the $n$'th that we could change to)
Finally, let $X = X_1+X_2+\dots+X_{n-1}$. Recognize that $X$ is precisely the random variable counting the total number of side changes that we were asked to find the expected value of.
Now... by our convenient choices of random variables we have $$E[X] = E[X_1+X_2+X_3+\dots+X_{n-1}]$$
Then, from the linearity of expectation this continues further as $$\dots = E[X_1]+E[X_2]+\dots+E[X_{n-1}]$$
Now... again, by our convenient choices of random variables, this expands further as
$$\dots = E[H_1T_2+T_1H_2]+E[H_2T_3+T_2H_3]+\dots+E[H_{n-1}T_n+T_{n-1}H_n]$$
Which by linearity of expectation again and properties of independent random variables
$$\dots = E[H_1]E[T_2]+E[T_1]E[H_2]+E[H_2]E[T_3]+E[T_2]E[H_3]+\dots+E[T_{n-1}]E[H_n]$$
Finally, since the expected value of an indicator random variable is precisely the probability of said indicator random variable as having occurred, this all simplifies to:
$$\dots = p(1-p)+(1-p)p+p(1-p)+(1-p)p+\dots+p(1-p)+(1-p)p$$
and collecting like terms and noting how many occurrences of each there were simplifies to the final answer of:
$$E[X] = 2(n-1)p(1-p)$$
It is worth emphasizing that $X_i,X_j$ might not be independent of each other, but the strength of linearity of expectation is that that doesn't matter! Addition inside of expected value can be turned into addition outside of expected value, regardless of the dependence or independence of the respective random variables involved. The only events that we cared about the independence of was the individual results of the coinflips which by the very nature of what coin flips are we know to be independent (or more accurately, we always model the situation mathematically as to assume that they are).
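A quick Monte Carlo sketch confirming the formula (the values of $n$, $p$ and the trial count are arbitrary choices):

```python
import random

def avg_side_changes(n, p, trials=200_000):
    total = 0
    for _ in range(trials):
        flips = [random.random() < p for _ in range(n)]        # True = heads
        total += sum(flips[i] != flips[i + 1] for i in range(n - 1))
    return total / trials

n, p = 5, 0.3
print(avg_side_changes(n, p), 2 * (n - 1) * p * (1 - p))  # both close to 1.68
```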
|
H: Show $f$ can be extended to be analytic in $\mathbb{C}$ except at finitely many poles.
I have attempted the following problem but I am stuck on one part:
Suppose $f$ is analytic on the unit disk and continuous on the boundary of the disk. Also, suppose $|f(z)|=1$ for $|z|=1$. Show that $f$ can be extended to be analytic in $\mathbb{C}$, except for finitely many poles, by defining $F(z) = \left(\overline{f(\frac{1}{\overline{z}})}\right)^{-1}$.
My attempt: Let $z, z_0$ be in $\mathbb{C}\setminus\mathbb{D}$. Then $\frac{1}{\overline{z}}$ and $\frac{1}{\overline{z_0}}$ are in $\mathbb{D}$ so there is a power series centered at $\frac{1}{\overline{z_0}}$ convergent in a neighborhood contained in $\mathbb{D}$. Then we can write $f(\frac{1}{\overline{z}}) = \sum_{n=0}^{\infty}a_n(\frac{1}{\overline{z}} - \frac{1}{\overline{z_0}})^n$ so that $\overline{f(\frac{1}{\overline{z}})}= \sum_{n=0}^{\infty}\overline{a_n}(\frac{1}{{z}} - \frac{1}{z_0})^n$.
Now I am stuck on writing $F$ as a power series in $z$. Once this is done, I know that I can use that fact that $|f(z)| = 1$ on the boundary of $\mathbb{D}$ to show that $F$ and $f$ agree on the boundary, and therefore invoke the symmetry principle to get my result. Regarding the finiteness of the poles, I know this follows from the fact that $f$ must have finitely many zeros in the disk.
Any help is appreciated!
AI: Hint: Use Cauchy-Riemann equations to show that if $g$ is analytic on $U$, $\overline{g(\overline{z})}$ is analytic on $\{z: \overline{z} \in U\}$.
Using the fact that $z \to 1/z$ is analytic on $\mathbb C \backslash \{0\}$,
$F(z)$ is analytic on $\{z:|z|>1, f(1/\overline{z}) \ne 0\}$.
|
H: Is there a simple function $f(x)$ that follows $2$ rules when $x$ is rational?
Is there a simple function $f(x)$ that follows $2$ rules when $x$ is rational?
Here $\frac{a}{b}$ denotes $x$ written in lowest terms when $x$ is a rational number.
$$f(x) \in \begin{cases} \mathbb{R} \setminus \mathbb{Q}, & x=\frac{a}{b} \text{ and } a + b \text{ even} \\ \mathbb{Q}, & x=\frac{a}{b} \text{ and } a + b \text{ odd} \end{cases}$$
For irrational $x$ it doesn't need to follow any specific rule, besides $f$ being continuous.
And for $x=0$ the reduced fraction is $\frac{0}{1}$, so $f(0)$ is rational.
AI: How about
$$f(x)=
\begin{cases}
0\quad\text{if }x\not\in\mathbb{Q}\\
\displaystyle{(-1)^a+(-1)^b\over b}\pi\quad\text{if }x=a/b\in\mathbb{Q}\text{ (with }\gcd(a,b)=1\text{ and }b\ge1)
\end{cases}$$
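To see the two cases play out, here is a small sketch (my addition) that evaluates $f$ at a few sample rationals, relying on Python's Fraction to reduce to lowest terms:

```python
from fractions import Fraction
from math import pi

def f(x: Fraction) -> float:
    a, b = x.numerator, x.denominator  # Fraction stores x in lowest terms, b >= 1
    return ((-1)**a + (-1)**b) / b * pi

for x in (Fraction(1, 3), Fraction(3, 5), Fraction(2, 5), Fraction(0, 1)):
    parity = "even" if (x.numerator + x.denominator) % 2 == 0 else "odd"
    print(x, parity, f(x))
# a + b even  ->  -2*pi/b, irrational;  a + b odd  ->  0, rational
```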
|
H: Matrix of the linear transformation $T$
To the following theorem,
Let $V$ and $W$ be finite-dimensional vector spaces having ordered bases $\beta$ and $\gamma$, respectively, and let $T : V \rightarrow W$ be linear. Then, for each $u \in V$, we have $[T(u)]_\gamma=[T]_\beta^\gamma[u]_\beta$.
the textbook gives the following example:
Let $T:P_3(\mathbb{R}) \rightarrow P_2(\mathbb{R})$ be the linear transformation defined by $T(f(x))=f'(x)$, and let $\beta$ and $\gamma$ be the standard ordered bases for $P_3(\mathbb{R})$ and $P_2(\mathbb{R})$, respectively. If $A=[T]_\beta^\gamma$, then we have
$$A=
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 3
\end{bmatrix}
$$
Now, the textbook tries to illustrate the previous theorem by verifying that $[T(p(x))]_\gamma=[T]_\beta^\gamma[p(x)]_\beta$, where $p(x)\in P_3(\mathbb{R})$ is the polynomial $p(x)=2-4x+x^2+3x^3$. Then, it takes $q(x)=T(p(x))$ and $q(x)=p'(x)=-4+2x+9x^2$ and he gets that $$[T(p(x))]_\gamma=[q(x)]_\gamma=
\begin{bmatrix}
-4\\
2\\
9\\
\end{bmatrix}
$$
The thing that I really don't understand is why does it take $q(x)=p'(x)$?
AI: The standard basis of $P_{3}(\mathbb{R})$ is $1, x, x^{2}, x^{3}$. The example shown is the polynomial $p(x) = 2 -4x + x^2 + 3x^3$. The coordinates of $p$ with respect to the standard basis are thus
$$ [p(x)]_{\beta} = \begin{pmatrix}
2 \\
-4 \\
1 \\
3
\end{pmatrix}.$$
If we now apply the linear transformation $T$ to $p$, we get the polynomial $q(x) = T(p(x)) = -4 + 2x + 9x^{2} $ in $P_{2}(\mathbb{R})$. Now with respect to the standard basis of $P_{2}(\mathbb{R})$, the coordinates of $T(p(x))$ are
$$ [T(p(x))]_{\gamma} = \begin{pmatrix}
-4 \\
2 \\
9
\end{pmatrix}.$$
If you do the calculations, $[T(p(x))]_{\gamma}$ can also be obtained as
$$ [T]_{\beta}^{\gamma} [p(x)]_{\beta} = \begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 2 & 0\\
0 & 0 & 0 & 3 \\
\end{pmatrix} \begin{pmatrix}
2 \\
-4 \\
1 \\
3
\end{pmatrix} = 2 \begin{pmatrix}
0 \\
0 \\
0
\end{pmatrix} -4 \begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix} + \begin{pmatrix}
0 \\
2 \\
0
\end{pmatrix} + 3 \begin{pmatrix}
0 \\
0 \\
3
\end{pmatrix} = \begin{pmatrix}
-4 \\
2 \\
9
\end{pmatrix}. $$
The theorem quoted implies that we can recover the coordinates of $q(x) = T(p(x))$ for any polynomial $p$ in $P_{3}(\mathbb{R})$ via matrix multiplication.
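The whole verification is a one-line matrix-vector product; a quick numpy sketch:

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])  # the matrix of T w.r.t. the standard bases
p = np.array([2, -4, 1, 3])   # p(x) = 2 - 4x + x^2 + 3x^3

print(A @ p)  # [-4  2  9], the coordinates of p'(x) = -4 + 2x + 9x^2
```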
|
H: Permutation of a number yields a prime?
Given a number $N$ that is constructed only by using these digits: $\{1,3,7,9\}$,
It is not divisible by $3$ (The sum of digits are not divisible by $3$) and thus $3 \nmid N$.
And - it has at least $3$ different digits (maybe it uses only $1,3,9$ or $1,3,7$ or $1,3,7,9$ ...)
Theory ("Conclusion") : The number will have at least one permutation of a prime number.
For example: $N = 1337$ is not a prime, but does satisfy the conditions, and thus it has a permutation that is also a prime: $\text{Prime} = 1373$
I couldn't find any number that satisfies these conditions and does not have at least one permutation which is a prime number.
Some more information:
There is the prime-counting function $\pi(x)$, approximated by $\frac{x}{\ln(x)}$, which roughly tells us how many primes there are below $x$. And so, for example, below $x = 10^7$ there are roughly $\frac{10^7}{\ln(10^7)} \approx 6.2 \cdot 10^5$ primes, i.e. about $6.2 \%$ of the numbers.
And thus, if we look at the numbers which do satisfy these conditions, we are left with only a few numbers, and if we then check each permutation we will have a greater chance of finding a prime.
BUT - this is only probabilistically speaking. I wrote a slow program that lists all the numbers that are constructed using only $\{1,3,7,9\}$, have at least $3$ distinct digits (from that set) and are not divisible by $3$, and checked whether each has a prime permutation; for every such number, I found one. So could a number which satisfies these rules still fail to have a prime permutation? (Because the argument is only probabilistic, it doesn't mean such a number could not exist.)
What is it good for?
I don't really know; it is just my cute little theory. I do not study mathematics in the academia (I am studying engineering). But one thing I thought about: if this conclusion is correct, then we will be able to find the biggest prime number in a very clever way, by just saying that $N = 137777777\dots 7777$, because it satisfies these conditions (it has at least $3$ unique digits from the set $\{1,3,7,9\}$, and we choose the number of $7$'s, say $n$, such that $3 \nmid (1+3+7\cdot n)$, so the sum of the digits is not divisible by $3$), and thus this huge number will most certainly have a prime permutation.
Sorry for the huge post, I am really interested in prime numbers :)
Thank you very much!
AI: Let's say you have an $n$-digit number with $a$ $1$'s, $b$ $3$'s, $c$ $7$'s and $d$ $9$'s. The number of permutations is $n!/(a! b! c! d!)$.
Heuristically, unless there is a good reason for these not to be prime (e.g. if they are all divisible by $3$) each has probability on the order of $1/n$ of being prime.
If the number of permutations is much larger than $n$, it is very likely
that at least one of those is prime. For example, if there is one $1$, one $3$ and the rest are $7$, the number of permutations is $n!/(1! 1! (n-2)! 0!) = n(n-1)$ which is much larger than $n$ if $n$ is large. So I wouldn't be surprised if your conjecture is true; or if it is false, there is some
fairly small example.
I don't understand your idea to "find the biggest prime number". Of course there is no "biggest prime number", but apart from that,
even if we accept that there is some prime consisting of one $1$, one $3$ and, say, $10^6$ $7$'s, if we don't know which of the permutations it is we're not done. We'd still have to check a bunch of possibilities until we find one that's prime.
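For the curious, here is a brute-force sketch (my addition; it uses sympy's primality test) that confirms the conjecture for all qualifying numbers with up to six digits:

```python
from itertools import combinations_with_replacement, permutations
from sympy import isprime

def has_prime_permutation(digits):
    return any(isprime(int("".join(p))) for p in set(permutations(digits)))

for n in range(3, 7):
    for digits in combinations_with_replacement("1379", n):
        if len(set(digits)) < 3:            # need at least 3 distinct digits
            continue
        if sum(map(int, digits)) % 3 == 0:  # excluded: digit sum divisible by 3
            continue
        assert has_prime_permutation(digits), digits
print("no counterexample with up to 6 digits")
```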
|
H: Can the interval $[0,1]$ be made into a field?
After some cups of coffee with a friend, we came up with a question that is non-trivial to our knowledge, and it reads as follows:
Is it possible to define the operations of sum and product on $[0,1]$ so as to make it a field?
As mentioned, this question is most likely beyond my reach. So, could you give me some lead to solve this problem?
AI: Is there a field with the same cardinal as $[0,1]$? Yes: take $(\Bbb R,+,.)$, for instance. So, take a bijection $b\colon[0,1]\longrightarrow\Bbb R$ and define on $[0,1]$:
the additive identity is $b^{-1}(0)$;
the multiplicative identity is $b^{-1}(1)$;
$x+y=b^{-1}\bigl(b(x)+b(y)\bigr)$;
$x.y=b^{-1}\bigl(b(x).b(y)\bigr)$.
And now you have a field $([0,1],+,.)$, which is isomorphic to $(\Bbb R,+,.)$.
|
H: What formula could I use to find out how many paperclips my factories could produce?
I'm playing Universal Paperclips and I'm near the end of the game. I currently have 2.3 quadrillion factories. Each factory un-upgraded can produce 100 billion clips per second. The first upgrade increases each factory's performance by 100x. The second increases performance by 1000x. The last upgrade says "Each factory added to the network increases every factory's output 1,000x".
I'm trying to figure out what formula I can use to determine the theoretical maximum number of paperclips my factories can produce per second with all three upgrades.
If I'm understanding this right, after the first two upgrades the "base" value of 100 billion per second becomes (100 billion * 100 * 1000 =) 10 quadrillion per second. But how do I express the part where each factory added increases every factory's output 1000x?
AI: If each of the upgrades stack, you're looking at a paperclip rate formula of
$$
\text{total clips/sec from factories} = (n \text{ factories}) \times \left(\left(10^{11} \frac{\text{clips}/\text{sec}}{\text{factory}}\right) \times 100 \times 1000 \times \left(\frac{1000}{\text{factory}} \times n\text{ factories}\right)\right)
$$
Increasing your number of factories from $n$ to $n + 1$ would lead to an increase of:
$$
10^{19} \times (2n+1) \text{ clips/sec}
$$
This can be calculated by taking your rate at $n+1$ factories and subtracting your rate at $n$ factories.
You should confirm with your findings in-game to determine if the upgrades indeed stack in the above way. Otherwise, you'll be making as much as $10^5$ times less.
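Assuming the stacking interpretation above, a tiny script makes the numbers concrete (the marginal gain is computed from the expanded formula to avoid floating-point cancellation):

```python
def clips_per_sec(n):
    # base 1e11 clips/sec, x100 and x1000 flat upgrades, x(1000 * n) network bonus
    return 1e19 * n**2

def marginal_gain(n):
    # clips_per_sec(n + 1) - clips_per_sec(n) = 1e19 * (2n + 1), expanded form
    return 1e19 * (2 * n + 1)

n = 2.3e15  # 2.3 quadrillion factories
print(clips_per_sec(n))   # ~5.3e49 clips/sec
print(marginal_gain(n))   # ~4.6e34 extra clips/sec per added factory
```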
|
H: Prove that $B\cup(\bigcap \mathcal F)=\bigcap_{A\in \mathcal F}(B\cup A)$.
Not a duplicate of
$\cap_{A \in \mathcal{F}}(B \cup A) \subseteq B \cup (\cap \mathcal{F})$
This is exercise $3.5.16.b$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$:
Suppose $\mathcal F$ is a nonempty family of sets and $B$ is a set. Prove that $B\cup(\bigcap \mathcal F)=\bigcap_{A\in \mathcal F}(B\cup A)$.
Here is my proof:
$(\rightarrow)$ Let $x$ be an arbitrary element of $B\cup(\bigcap\mathcal F)$. Let $A$ be an arbitrary element of $\mathcal F$. Now we consider two different cases.
Case $1.$ Suppose $x\in B$ and so $x\in B\cup A$.
Case $2.$ Suppose $x\in\bigcap\mathcal F$. From $x\in\bigcap\mathcal F$ and $A\in \mathcal F$, $x\in A$ and so $x\in B\cup A$.
Since the above cases are exhaustive, $x\in B\cup A$. Thus if $A\in\mathcal F$ then $x\in B\cup A$. Since $A$ is arbitrary, $\forall A(A\in\mathcal F\rightarrow x\in B\cup A)$ and so $x\in\bigcap_{A\in\mathcal F}(B\cup A)$. Therefore if $x\in B\cup(\bigcap\mathcal F)$ then $x\in\bigcap_{A\in\mathcal F}(B\cup A)$. Since $x$ is arbitrary, $\forall x\Bigr(x\in B\cup(\bigcap\mathcal F)\rightarrow x\in\bigcap_{A\in\mathcal F}(B\cup A)\Bigr)$ and so $B\cup(\bigcap \mathcal F)\subseteq\bigcap_{A\in \mathcal F}(B\cup A)$.
$(\leftarrow)$ Let $x$ be an arbitrary element of $\bigcap_{A\in\mathcal F}(B\cup A)$. We consider two different cases.
Case $1.$ Suppose $x\in\bigcap\mathcal F$. Therefore $x\in B\cup(\bigcap\mathcal F)$.
Case $2.$ Suppose $x\notin \bigcap\mathcal F$. So we can choose some $A_0$ such that $A_0\in\mathcal F$ and $x\notin A_0$. From $x\in\bigcap_{A\in\mathcal F}(B\cup A)$ and $A_0\in\mathcal F$, $x\in B\cup A_0$. From $x\in B\cup A_0$ and $x\notin A_0$, $x\in B$. Therefore $x\in B\cup(\bigcap\mathcal F)$.
Since the above cases are exhaustive, $x\in B\cup(\bigcap\mathcal F)$. Therefore if $x\in\bigcap_{A\in\mathcal F}(B\cup A)$ then $x\in B\cup(\bigcap\mathcal F)$. Since $x$ is arbitrary, $\forall x\Bigr(x\in\bigcap_{A\in\mathcal F}(B\cup A)\rightarrow x\in B\cup(\bigcap\mathcal F)\Bigr)$ and so $\bigcap_{A\in \mathcal F}(B\cup A)\subseteq B\cup(\bigcap \mathcal F)$.
Ergo $B\cup(\bigcap \mathcal F)=\bigcap_{A\in \mathcal F}(B\cup A)$. $Q.E.D.$
Is my proof valid$?$
Thanks for your attention.
AI: It’s correct, but Case $1$ of the second part is incomplete: given the level of detail that you’re using elsewhere in the proof, you really should justify the unstated assumption that $\bigcap\mathcal{F}\subseteq\bigcap_{A\in\mathcal{F}}(B\cup A)$. I would reorganize the second part altogether (and shorten it!):
Let $x\in\bigcap_{A\in\mathcal{F}}(B\cup A)$ be arbitrary; then $x\in B\cup A$ for each $A\in\mathcal{F}$. If $x\in B$, then certainly $x\in B\cup\bigcap\mathcal{F}$. If $x\notin B$, then $x\in A$ for each $A\in\mathcal{F}$, so $x\in\bigcap\mathcal{F}$, and again $x\in B\cup\bigcap\mathcal{F}$. Thus, $\bigcap_{A\in\mathcal{F}}(B\cup A)\subseteq B\cup\bigcap\mathcal{F}$.
Further explanation as requested: To begin the second part you assume that $x\in\bigcap_{A\in\mathcal{F}}(B\cup A)$, which is fine. You then consider the cases $x\in\bigcap\mathcal{F}$ and $x\notin\bigcap\mathcal{F}$, but it’s not immediately clear why these are relevant. If there is to be a division into cases at this point, one would expect the cases to derive fairly straightforwardly from the assumption that $x\in\bigcap_{A\in\mathcal{F}}(B\cup A)$, just as in the first part your two cases derive naturally from the assumption that $x\in B\cup\bigcap\mathcal{F}$.
That’s why I first drew the immediate conclusion from $x\in\bigcap_{A\in\mathcal{F}}(B\cup A)$ that $x\in B\cup A$ for each $A\in\mathcal{F}$. Now, because we’re dealing with a union, it’s easy to see what the natural cases are: either $x\in B$, or $x\in A$ for each $A\in\mathcal{F}$. And those two cases match up perfectly with the structure of the target set $B\cup\bigcap\mathcal{F}$,
|
H: Does $A \rightarrow B$ imply $P(A) \le P(B)$?
My intuition is that if you have two events, $A$ and $B$, and you can show that event $A$ implies event $B$, then you should have $P(A) \le P(B)$ because any time A happens so does B, but not necessarily the other way around. Similarly then $A \leftrightarrow B$ implies $P(A) = P(B)$ because they'd just be different ways of talking about the same event.
I'm an undergraduate CS student who has recently started assisting with some machine learning research. As such I'm a little bit in over my head with some of the math, especially with probability theory. Even though I feel pretty confident about this I have no way to show it formally and my set-theoretic intuitions aren't always perfect. Could someone out there give me a more formal grounding for why this does or doesn't work?
AI: $A$ implies $B$ is equivalent to $A\subseteq B$. The inequality then follows immediately from the probability axioms: writing $B = A \cup (B \setminus A)$ as a disjoint union gives $P(B) = P(A) + P(B \setminus A) \ge P(A)$.
|
H: Question about the proof of Theorem 14 of Hoffman and Kunze
While self studying Linear Algebra from Hoffman and Kunze, I have a question in Theorem 14 in the section on unitary operators from Chapter 8.
Here are the relevant images.
My question is in highlighted line of the image. I am unable to get what reasoning is behind the line that $M_{2}^{-1} $, $ M_{1}M_{2}^{-1}, $ and $ {(M_{1} M_{2})}^{-1}$ are all in $T^{+}(n).$
Edit: I am adding a related question on normal operators here.
Suppose $U$ is any normal operator on $V$ and $\alpha$ is a vector in $V.$ Why should $||U \alpha || = || U^{*} \alpha ||$ always hold true?
AI: Essentially, your first question is answered by the fact that $T^+(n)$ is a multiplicative group. By hypothesis, we have that $M_1$ and $M_2$ are both elements of $T^+(n),$ hence their inverses $M_1^{-1}$ and $M_2^{-1}$ and any product of these four matrices are also in $T^+(n),$ i.e., $M_2^{-1},$ $M_1 M_2^{-1},$ and $(M_1 M_2)^{-1}$ are all in $T^+(n).$
Perhaps a better question is to understand why $T^+(n)$ is a multiplicative group. Observe that the $n \times n$ identity matrix $I$ is an element of $T^+(n),$ and it also functions as the identity element of $T^+(n).$ Further, associativity of matrix multiplication holds, so we need only establish that $T^+(n)$ is closed under multiplication and that the inverse of a matrix in $T^+(n)$ is also in $T^+(n).$ For the latter, I suggest you check out this answer; the former is a matter of routine computation. By writing down the matrices $A$ and $B$ of $T^+(n)$ in terms of their respective elements $a_{ij}$ and $b_{ij}$ in the $i$th row and $j$th column, we have that $$[AB]_{ij} = \sum_{k = 1}^n a_{ik} b_{kj}.$$ Use the fact that $a_{ik} = 0$ whenever $k \geq i + 1$ and $b_{kj} = 0$ whenever $j \geq k + 1$ to see that $[AB]_{ij} = 0$ whenever $j \geq i + 1.$ Likewise, we have that $[AB]_{ii} > 0.$ Both of these together say that $AB \in T^+(n).$
On the matter of your second question, observe that for any vector $\alpha$ and any normal operator $U,$ $$||U \alpha||^2 = \langle U \alpha, U \alpha \rangle = \langle \alpha, U^* U \alpha \rangle = \langle \alpha, UU^* \alpha \rangle = \langle U^* \alpha, U^* \alpha \rangle = ||U^* \alpha||^2,$$ and since norms are non-negative, $||U\alpha|| = ||U^*\alpha||.$
|
H: Proving the connection between limit points and neighborhoods
I am trying to prove the following proposition stated in my general topology textbook:
Let $A$ be a subset of a topological space $(X ,\tau).$ A point $x \in X$ is a limit point of $A$ if and only if every neighborhood of $x$ contains a point of $A$ different than $x$.
So, I already proved that if $x$ is a limit point of $A$, then every neighborhood of $x$ contains a point of $A$ different than $x$, But I'm having trouble proving that the other way is also true.
My approach:
Let's assume that every neighbourhood of $x$ contains a point of $A$ different from $x$.
Let $N$ be a neighbourhood of $x$; then:
$$\exists U\in \tau:x \in U \subseteq N$$
We do know that $\exists a \in A: a\neq x \wedge a \in N$. But that does not imply that $a \in U$, which is what I would need to conclude that $x$ is a limit point.
How can I prove this?
AI: From the definition of neighborhood, any open set $U$ containing $x$ is itself a neighborhood of $x$. So, given an open set $U$ with $x \in U$, apply the hypothesis directly to $U$: it contains a point of $A$ different from $x$, which is exactly what is needed for $x$ to be a limit point. This proves the converse.
|
H: Does every continuous random variable have a pdf?
Does every continuous random variable have a pdf?
Is there any random variable which is neither discrete nor continuous?
Here, by continuous random variable I meant those random variables for which the probability of a singleton set is 0.
AI: If $X$ is a random variable with a Cantor distribution (i.e. the uniform distribution on the Cantor set $\subset [0, 1]$) then $X$ is a continuous r.v. without a pdf.
Lebesgue's decomposition theorem describes how any probability measure on $\Bbb{R}$ can be broken up into three parts with well-defined properties: a discrete part, a "pdf" part, and a singular part (one that's neither discrete nor has a pdf). So there's actually lots and lots of examples of continuous random variables that don't have pdfs.
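As an aside, the Cantor distribution is easy to sample from: draw random ternary digits restricted to $\{0,2\}$. A quick sketch (truncating the expansion at an arbitrary depth):

```python
import random

def cantor_sample(depth=40):
    # a ternary expansion using only digits 0 and 2 lands in the Cantor set
    return sum(2 * random.randint(0, 1) / 3**i for i in range(1, depth + 1))

samples = [cantor_sample() for _ in range(10_000)]
# no sample falls inside the removed middle third (1/3, 2/3):
print(any(1 / 3 < s < 2 / 3 for s in samples))  # False
```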
|
H: Do functions with the same gradient differ by a constant?
Let $f,g:\mathbb{R}^n\to\mathbb{R}$ be such that $\nabla f=\nabla g$.
I believe this implies that $f$ and $g$ only differ by a constant, like in the one-dimensional case. But I'm not sure how to prove it. If it's indeed true, can you give me a hint?
Thanks!
AI: [Spoiler warning, this is more than a hint. I wanted to show this method because it avoids working with components.]
First suppose that $h:\mathbb R^n \to \mathbb R$ is differentiable and that $\nabla h(x) = 0$ for all $x \in \mathbb R^n$. I'll prove that $h$ is constant. Suppose (for a contradiction) that there exist points $a$ and $b$ in $\mathbb R^n$ such that $h(a) \neq h(b)$. Let $z:[0,1] \to \mathbb R$ be the function defined by
$$
z(t) = h(a + t(b - a)).
$$
Note that $z$ is continuous on $[0,1]$ and differentiable on $(0,1)$ and that $z(0) \neq z(1)$. By the mean value theorem, there exists a number $c$ such that $0 < c < 1$ and
$$
z'(c) = z(1) - z(0) \neq 0.
$$
But, by the chain rule,
$$
z'(c) = \langle \nabla h(a + c(b -a)), b - a \rangle
$$
which is $0$ because we are assuming that $\nabla h(x) = 0$ for all $x$ in $\mathbb R^n$.
This is a contradiction. Therefore $h$ is constant.
Next, to solve the original problem, let $h = f - g$ and apply the above result.
|
H: When is the Lagrange interpolation polynomial exact?
Find the Lagrange interpolation polynomial for data points $x_k=k$ and $f(k)=k^2$, where $k=0,1,2,3$.
Also, find the Lagrange interpolation polynomial for the same data points but with $g(k)= k^4$.
I would like to say that in both cases the Lagrange polynomial (L.p.) is just the given function, as the number of data points is greater than or equal to the degrees of $f$ and $g$ and the L.p. is unique.
Is this true? More generally when is the Lagrange interpolation polynomial exact? Many thanks!
AI: Given the collocation points at $x_k=k$ where $k=0,1,2,3$ with $\mathcal{N}=4$ your Lagrange polynomial $\mathcal{L}(k)$ is of order $\mathcal{P}=3$. This means:
For the first function $f_1(k)=k^2$ the Lagrange interpolation polynomial $\mathcal{L_{P=3}}(k)$ is exact.
For the second function $f_2(k)=k^4$ the Lagrange interpolation polynomial $\mathcal{L_{P=3}}(k)$ is not exact and only an approximation.
You mentioned:
"I would like to say that in both cases the Lagrange polynomial (L.p.) is just the given function."
You have to distinguish between the Lagrange interpolation problem itself and the representation of the given polynomial.
In both cases the given functions are not in a Lagrange representation
$$\mathcal{L}(k)=\sum_{i=0}^{\mathcal{P}} a_i \phi_i(k).$$ In a plot of the basis functions one can see that the $i$'th basis function $\phi_i$ intersects $f_1, f_2$ at the $i$'th collocation point.
The functions $f_1, f_2$ are in a Monomial representation
$$\mathcal{T(k)}=\sum_{i=0}^{\mathcal{P}} b_i \psi_i(k),$$ where $b_2=1,~\psi_2=k^2$ and $b_4=1,~\psi_4=k^4$
Regards
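A short numerical illustration of the two cases (my addition, using scipy's Lagrange interpolation):

```python
import numpy as np
from scipy.interpolate import lagrange

x = np.array([0.0, 1.0, 2.0, 3.0])   # the collocation points
L2 = lagrange(x, x**2)               # interpolant of f1(k) = k^2
L4 = lagrange(x, x**4)               # interpolant of f2(k) = k^4

t = 1.5                              # a point away from the collocation points
print(L2(t) - t**2)                  # ~0: the degree-3 interpolant reproduces k^2 exactly
print(L4(t) - t**4)                  # nonzero: k^4 is only approximated
```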
|
H: Is there a story proof behind the combinatorial identity $(n-2k)\binom{n}{k} = n\left[ \binom{n-1}{k} - \binom{n-1}{k-1} \right]$?
Is there a "story proof"/combinatorial proof for the following combinatorial identity:
$$(n-2k)\binom{n}{k} = n\left[ \binom{n-1}{k} - \binom{n-1}{k-1} \right]\tag1$$
I know that this identity can be proved by using the following identities:
$$k\binom{n-1}{k} = (n-k)\binom{n-1}{k-1}\tag2$$
$$k\binom{n}{k} = n\binom{n-1}{k-1}\tag3$$
but is there a "story proof" for equation $(1)$?
Edit 1: I do know the story proofs for equations 2 and 3. But 'sewing them together' is the problem!
$$\text{RHS} \stackrel{\text{i}}{=} n\left[ \binom{n-1}{k} - \binom{n-1}{k-1} \right] \stackrel{\text{ii}}{=} \frac{n}{k}\left[ k\binom{n-1}{k} - k\binom{n-1}{k-1} \right] \stackrel{\text{iii}}{=} \frac{n}{k}\left[ (n-k)\binom{n-1}{k-1} - k\binom{n-1}{k-1} \right] \stackrel{\text{iv}}{=} \frac{n}{k}\binom{n-1}{k-1}\left[ (n-k) - k \right] \stackrel{\text{v}}{=} (n-2k)\binom{n}{k}$$
Precisely, how do you formulate a story proof for step (iv)? I mean, the term $\binom{n-1}{k-1}$ is being factored out in step (iv). What could a story proof for factoring out a common term be?
AI: I can come up with a combinatorial argument if I rearrange the identity a little. We’re starting with
$$(n-2k)\binom{n}k=n\left[\binom{n-1}k-\binom{n-1}{k-1}\right]\;,$$
which is clearly the same as
$$(n-k)\binom{n}{n-k}-k\binom{n}k=n\binom{n-1}k-n\binom{n-1}{k-1}\;.$$
Transposing the two negative terms yields
$$(n-k)\binom{n}{n-k}+n\binom{n-1}{k-1}=n\binom{n-1}k+k\binom{n}k\;.\tag{1}$$
Now suppose that we have a group of $n$ athletes, and we want to form a team of either $k$ or $n-k$ players and choose one member of the team to be its captain; in how many different ways can we do this?
We can choose a team of $n-k$ in $\binom{n}{n-k}$ ways; having done that, we can choose its captain in $n-k$ ways, so there are $(n-k)\binom{n}{n-k}$ ways to choose this team and its captain. To form a team of $k$ players we can first choose one of the $n$ athletes to be its captain, after which there are $\binom{n-1}{k-1}$ ways to choose the other $k-1$ players from the remaining $n-1$ athletes, so there are altogether $n\binom{n-1}{k-1}$ ways to choose this team and its captain. Thus, the lefthand side of $(1)$ is the number of ways to choose a team of $k$ or $n-k$ players and appoint its captain.
Alternatively, we can choose a team of $k$ players in $\binom{n}k$ ways, after which we can select its captain in $k$ ways, so there are $k\binom{n}k$ ways to choose a team of $k$ and its captain. To form a team of $n-k$ players, we can first choose any one of the $n$ athletes to be its captain. Then to fill out the rest of the team we can choose the $k$ athletes among the remaining $n-1$ who will not be on the team in $\binom{n-1}k$ ways. Thus, there are $n\binom{n-1}k$ ways to form a team of $n-k$ and choose its captain, and the righthand side of $(1)$ is also the number of ways to choose a team of $k$ or $n-k$ players and appoint its captain.
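For reassurance that the rearranged identity $(1)$ (and hence the original one) holds, a quick check over small values:

```python
from math import comb

def lhs(n, k):  # (n-k) * C(n, n-k) + n * C(n-1, k-1)
    return (n - k) * comb(n, n - k) + n * comb(n - 1, k - 1)

def rhs(n, k):  # n * C(n-1, k) + k * C(n, k)
    return n * comb(n - 1, k) + k * comb(n, k)

assert all(lhs(n, k) == rhs(n, k) for n in range(1, 25) for k in range(1, n + 1))
print("identity (1) verified for all 1 <= k <= n < 25")
```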
|
H: Order of a subgroup generated by two elements in $S_5$
Let $G = \langle(12)(34), (15)\rangle$ be a subgroup of $S_5$.
Then I need to show that $G$ has order $12$ and has a non-trivial centre.
I have found these elements so far: $$I,\;(12)(34),\; (15),\; (12)(34)(15),\; (15)(12)(34).$$
If I just keep computing compositions, the whole process becomes really cumbersome.
Please, help!
AI: Let $a = (12)(34)$ and $b = (15)$. Clearly $a^2 = b^2 = e$, where $e$ is the identity.
Define $u = ab$. It is easy to check that $u$ has order six, and also that $auau = e$. I can add detail if you need me to, but this seems like a good exercise to me.
The dihedral group of order $12$ (the symmetries of a regular hexagon) has presentation
$$
D_{12} = \langle \sigma, \tau \mid \sigma^6 = \tau^2 = \sigma \tau\sigma\tau = e \rangle
$$
We have verified that $a, u$ satisfy the defining relations of $D_{12}$, which means that the group generated by $a$ and $u$ has size at most $12$ (intuitively, this is because any combination of them that equals the identity in $D_{12}$ will also equal the identity in $G$, so there can be at most twelve distinct combinations. This follows very easily from some slightly more advanced theory but I'll try to avoid that).
Since $b = a^{-1}u$, the group generated by $a$ and $u$ is actually $G$, so $G$ has size at most twelve. But $u$ is an element of order six, so the order of $G$ is a multiple of six. The cyclic group of order six has only one element of order $2$, but $a$ and $b$ are distinct elements of order $2$, which means that $G \neq \langle u \rangle \cong C_6$, and hence $\lvert G \rvert > 6$, so $\lvert G \rvert = 12$. For the non-trivial centre, note that the relation $auau = e$ gives $a u^3 a^{-1} = u^{-3} = u^3$, so $u^3$ commutes with both generators; concretely $u^3 = (34)$, which is a non-identity central element.
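If you want a machine check of both claims, sympy's permutation groups make it a few lines (sympy uses 0-indexed points, so $(12)(34)\mapsto(0\,1)(2\,3)$ and $(15)\mapsto(0\,4)$):

```python
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([[0, 1], [2, 3]], size=5)  # (12)(34)
b = Permutation([[0, 4]], size=5)          # (15)
G = PermutationGroup([a, b])

print(G.order())     # 12
Z = G.center()
print(Z.order())     # 2, so the centre is non-trivial
print(Z.generators)  # the central element u^3 = (34), written 0-indexed as (2 3)
```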
|
H: How prove that the elementary operations don't change the rank of a matrix
One considers certain operations, called elementary row operations, that are applied to a matrix $A$ to obtain a new matrix $B$ of the same size.
These are the following:
exchange rows $i_1$ and $i_2$ of $A$ (where $i_1\neq i_2$);
replace row $i_1$ of $A$ by itself plus the scalar $c$ times row $i_2$ (where $i_1\neq i_2$);
multiply row $i$ of $A$ by the non-zero scalar $\lambda$.
Naturally these operations can also be performed on columns, and we would call the analogous operations on the columns "elementary column operations".
Theorem
If $B$ is the matrix obtained by applying an elementary row/column operation to $A$, then these two matrices have the same rank.
Unfortunately I'm not able to prove the previous theorem, so could someone help me, please?
AI: The most important fact is that elementary row operations are realized as multiplication (on the left) by an invertible matrix.
Once you know this fact, you can proceed as follows. Suppose $A$ and $B$ are $m\times n$ and there exists an invertible $m\times m$ matrix $F$ such that $A=FB$. Denote by $a_1,a_2,\dots,a_n$ and $b_1,b_2,\dots,b_n$ the columns of $A$ and $B$ respectively.
Consider indices $i_1,i_2,\dots,i_k$ such that $1\le i_1<i_2<\dots<i_k\le n$. Then the columns $a_{i_1},a_{i_2},\dots,a_{i_k}$ are linearly independent if and only if $b_{i_1},b_{i_2},\dots,b_{i_k}$ are linearly independent.
It's sufficient to prove one implication, because $B=F^{-1}A$. So, suppose the columns $a_{i_1},a_{i_2},\dots,a_{i_k}$ are linearly independent and that
$$
\alpha_1b_{i_1}+\alpha_2b_{i_2}+\dots+\alpha_kb_{i_k}=0
$$
Then we can multiply both sides by $F$ and get
$$
\alpha_1Fb_{i_1}+\alpha_2Fb_{i_2}+\dots+\alpha_kFb_{i_k}=0
$$
Since $Fb_i=a_i$, by definition of matrix product, we obtain
$$
\alpha_1a_{i_1}+\alpha_2a_{i_2}+\dots+\alpha_ka_{i_k}=0
$$
so $\alpha_1=\alpha_2=\dots=\alpha_k=0$.
In a similar way, we see that a column $a_i$ of $A$ is a linear combination of the columns $a_{i_1},a_{i_2},\dots,a_{i_k}$ if and only if $b_i$ is a linear combination of $b_{i_1},b_{i_2},\dots,b_{i_k}$, with the same coefficients.
As a consequence, $a_{i_1},a_{i_2},\dots,a_{i_k}$ is a basis of the column space of $A$ if and only if $b_{i_1},b_{i_2},\dots,b_{i_k}$ is a basis of the column space of $B$.
In particular, the column space of $A$ has the same dimension as the column space of $B$. Therefore $A$ and $B$ have the same column rank (the dimension is the maximum number of linearly independent columns, of course, because the columns are, by definition, generators of the column space).
This has other important consequences. When you find a row echelon form $U$ for $A$, it's easy to see that the pivot columns of $U$ form a basis of the column space of $U$. Therefore, the columns of $A$ corresponding to the pivot columns form a basis of the column space of $A$. This provides an algorithm for extracting a basis from the columns of $A$.
Not only this. If $U$ is the reduced row echelon form, we see that a nonpivot column is the linear combination of the pivot columns with lower column index and the coefficients in the nonpivot column are exactly those needed to write it as a linear combination.
Thus the same coefficients can be used to express the columns of $A$ corresponding to nonpivot columns as linear combination of the already found basis for the column space of $A$. Thus the reduced row echelon form of $A$ is unique, because its entries only depend on the linear relations between the columns of $A$.
Elementary row operations also preserve the row rank (dimension of the row space or maximum number of linearly independent rows). This is easier, because the row space is unchanged by elementary row operations.
This is obvious if the operation is swapping two rows. If the operation is multiplying a row by a nonzero constant, then the original row is a multiple of the new row, and conversely.
If the operation is of the form $r_i+kr_j$, then $r_i=(r_i+kr_j)-kr_j$, and conversely.
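A quick numerical illustration of the invariance (my addition; a random integer matrix, with one operation of each type applied):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 5)).astype(float)

B = A.copy()
B[[0, 2]] = B[[2, 0]]  # swap rows 0 and 2
B[1] += 5 * B[0]       # add 5 times row 0 to row 1
B[2] *= -2.0           # scale row 2 by a non-zero constant

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # equal
```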
|