H: How to check if matrices form a basis for a subspace?
I have worked out a solution to a hw problem, but it felt too easy. That always means I'm wrong.
Here's the question...
Consider the subspace S of 2×2 symmetric matrices. Show that the following three matrices form a basis for S
a = [1 0 0 0], b = [0 1 1 0], c = [0 0 0 1], where each is read as a 2×2 matrix whose first row is the first two numbers and whose second row is the last two numbers.
From what I can understand, you check for independence, and if they're linearly independent (not equaling the zero vector) they form a basis.
So I added them. I did not get the zero vector. So based off that, I would answer they form a basis.
I'm not confident about this at all though.
Here's my work.
Reworked.
AI: To prove linear independence, you should introduce variables for the coefficients (which you implicitly assumed to be $1$), and deduce that each must be zero.
$$x\pmatrix{1&0\\0&0} +y\pmatrix{0&1\\1&0} +z\pmatrix{0&0\\0&1}=0 \ \implies \ x=y=z=0\,.$$
Then, you still have to prove that these three matrices form a generating system for the space of all symmetric matrices.
A more straightforward approach is to show both at once, i.e. that every symmetric matrix can be uniquely written as a linear combination of the given matrices.
Hint: Just start off with an arbitrary symmetric matrix $\pmatrix{x&y\\y&z}$.
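Carrying the hint through (an added worked line, not part of the original answer): $$\pmatrix{x&y\\y&z}=x\pmatrix{1&0\\0&0}+y\pmatrix{0&1\\1&0}+z\pmatrix{0&0\\0&1}\,,$$ where the coefficients $x,y,z$ are forced entry by entry, giving both spanning and uniqueness at once.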
|
H: Ordered pairs with 2 sets with some conditions
$\langle B,C\rangle$ such that $ |B|=|C|=2$ and $ B,C\subseteq A $ and $B \cap C=\emptyset $
where $A=\{1,2,3,4,5,6\}$. Show that the number of ordered pairs $\langle B,C\rangle$
is equal to the number of words of length $6$ in which each of the numbers $0,1,2$ is repeated exactly twice.
I suppose: $\binom{6}{2} \times 2 \times 4\times 3$ (updated**)
Is it true?
How can I generalize it?
AI: No, it is not true. Assuming $B,C$ are each ordered pairs, you can select $B$ in $6 \cdot 5$ ways, which matches your ${6 \choose 2}\cdot 2$. You can then select $C$ in $4 \cdot 3=12$ ways, not $2$. It is equivalent to choosing four items from $A$ in order without replacement. The first two make $B$ and the other two make $C$.
If $B,C$ are two-element subsets of $A$, divide by $4$ to ignore the order within each of $B$ and $C$.
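For a sanity check, here is a brute-force count in Python (my own added sketch; it treats $B,C$ as unordered two-element subsets, matching the divided-by-$4$ count):
import itertools
A = range(1, 7)
pairs = sum(1 for B in itertools.combinations(A, 2)
            for C in itertools.combinations(A, 2)
            if set(B).isdisjoint(C))   # ordered pairs (B, C) of disjoint 2-subsets
words = sum(1 for w in itertools.product('012', repeat=6)
            if all(w.count(c) == 2 for c in '012'))   # words with each of 0,1,2 exactly twice
print(pairs, words)   # both print 90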
|
H: Does the proof of uniqueness of a solution depend on a proof that the solution is actually a solution?
To prove: The equation $a + x = b$ has the unique solution $x = b - a$
The proof I am looking at has two parts.
There is at first the proof that $x = b - a$ solves the equation by plugging it into the equation and after some steps using the axioms of addition arriving at equivalence.
The second part is a proof of the uniqueness of the solution showing that any $y$ with $a + y = b$ is equal to $b - a$. It subtracts $a$ from each side and arrives at equivalence.
I think that the second part does prove both parts, that $x=b-a$ is a solution and that it is unique. My argument is that it finds a solution for arbitrary y with $a + y = b$, namely $b - a$, so that we can see that $x = b - a$ is a solution.
I like that there are two parts and think that proving existence is not a bad start for proving uniqueness, but in this example it just feels like something is proven twice.
AI: Short answer, no. As soon as you say let $y$ solve $a+y=b$, you're assuming existence, and therefore you can't also be proving it.
This is sometimes hard to see because these vector space proofs are so elementary. But suppose I asked you to prove that for any $A,\;b$ with $A$ invertible, any solution to $Ax=b$ is of the form $x=A^{-1}b$. If you were a beginner to linear algebra and I argued "let $x$ be a solution to $Ax=b$; then $A^{-1}Ax=A^{-1}b$, so $x=A^{-1}b$, hence a solution exists", you'd pull me up on assuming existence.
|
H: Operator norm of a matrix in terms of its coefficients
Let $M:\mathbb{C}^n\to \mathbb{C}^n$ be a matrix and equip $\mathbb{C}^n$ with the norm
$$\|x\|_\infty=\max_{1\le j\le n}|x_j|.$$
If the operator norm is given by
$$\|M\|=\sup_{\|x\|_\infty=1}\|Mx\|_\infty,$$ is it possible to compute the operator norm exactly in terms of the matrix entries?
Since $(Mx)_i=\sum_{j=1}^nm_{ij}x_j$, we have
$$\|M\|=\sup_{\|x\|_\infty=1}\max_{1\le i\le n}\bigg|\sum_{j=1}^nm_{ij}x_j\bigg|.$$ From here, it is clear how one might bound this norm, but it is not clear to me how to compute it exactly without knowledge of the matrix.
AI: Hint: Using the triangle inequality, show that if $\|x\|_\infty = 1$, then
$$
|(Mx)_i| \leq \sum_{j=1}^n |m_{ij}|.
$$
This gives you an upper bound for $\|M\|$, i.e. a value $C$ that depends on the entries of $M$ for which $\|M\| \leq C$. Using the entries of $M$, find a vector $x$ for which $\|x\|_\infty = 1$ and $\|Mx\|_\infty = C$.
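For reference, the value $C$ this hint produces is the maximum absolute row sum, $\max_i \sum_j |m_{ij}|$. A quick numerical check (my own sketch, assuming NumPy; np.linalg.norm with ord=np.inf computes exactly this matrix norm):
import numpy as np
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
row_sum = np.abs(M).sum(axis=1).max()        # C = max absolute row sum
print(row_sum, np.linalg.norm(M, np.inf))    # the two values agree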
|
H: Show that $ \Phi(x,z) \le x^{\delta} \cdot \prod_{p \le z} \bigg(1-\frac{1}{p^{\delta}}\bigg)^{-1}$
I am stuck at the following exercise:
Let $\Phi(x, z)$ be the number of $n \le x$ all of whose prime factors are less than or equal to $z$. Prove that for any $\delta > 0$ it holds that
$$ \Phi(x,z) \le x^{\delta} \cdot \prod_{p \le z} \bigg(1-\frac{1}{p^{\delta}}\bigg)^{-1}.$$
I recognise the similarity of $\prod_{p \le z} \bigg(1-\frac{1}{p^{\delta}}\bigg)^{-1}$ to the Euler Product and if I am not mistaken it should thus hold:
$$\prod_{p \le z} \bigg(1-\frac{1}{p^{\delta}}\bigg)^{-1} = \prod_{p \le z} \frac{1}{1-p^{-\delta}}$$
But I do not see how this could help me.
AI: $$\prod_{p \le z} \bigg(1-\frac{1}{p^{\delta}}\bigg)^{-1}=\sum_{\substack{n\ge 1\\ P^{+}(n)\le z}} n^{-\delta},$$ where $P^{+}(n)$ denotes the largest prime factor of $n$; this follows by expanding each factor as a geometric series and multiplying out.
If $n\le x$ then $x^{\delta} n^{-\delta}\ge 1$, so $$\Phi(x,z)=\sum_{\substack{n\le x\\ P^{+}(n)\le z}} 1 \;\le\; \sum_{\substack{n\le x\\ P^{+}(n)\le z}} x^{\delta}n^{-\delta} \;\le\; x^{\delta}\sum_{\substack{n\ge 1\\ P^{+}(n)\le z}} n^{-\delta}.$$
|
H: How to solve this integral with multiple variables
$\int_{-1}^{-2}\int_{-1}^{-2}\int_{-1}^{-2}\frac{x^2}{x^2+y^2+z^2}dxdydz$
I've tried looking it up, and as far as I can tell I probably need to use cylindrical coordinates, but I haven't been able to solve it.
I also tried those sites that calculate integrals for you and give you steps, but they say that they can't solve it (one says it is not possible and the other says the computation time was exceeded).
AI: $$I_x=\int\limits_{-1}^{-2}\int\limits_{-1}^{-2}\int\limits_{-1}^{-2}\frac{x^2}{x^2+y^2+z^2}\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=
\iiint\limits_V\frac{x^2}{x^2+y^2+z^2}\,\mathrm{d}v$$ where $V:\,\{(x,y,z)|-2\le x\le -1,-2\le y\le -1,-2\le z\le -1\}$, $\mathrm{d}v=\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$.
Let $I_y=\iiint\limits_V\frac{y^2}{x^2+y^2+z^2}\,\mathrm{d}v$ and $I_z=\iiint\limits_V\frac{z^2}{x^2+y^2+z^2}\,\mathrm{d}v$; then it's obvious that $I_x=I_y=I_z$ because of the symmetry of $V$ with respect to permutations of $x,y,z$. So
$$I_x+I_y+I_z=\iiint\limits_V\frac{x^2+y^2+z^2}{x^2+y^2+z^2}\,\mathrm{d}v=\iiint\limits_V\,\mathrm{d}v=(-2-(-1))^3=-1$$
and thus $I_x=-\frac{1}{3}$ (Thanks to Jason Helman and Integrand for mentioning the sign).
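As a numerical cross-check (my addition, assuming SciPy is available; tplquad expects the integrand as func(z, y, x) and accepts the reversed limits exactly as written):
from scipy.integrate import tplquad
f = lambda z, y, x: x**2 / (x**2 + y**2 + z**2)   # the integrand of I_x
val, err = tplquad(f, -1, -2, -1, -2, -1, -2)     # each variable from -1 down to -2
print(val)   # approximately -1/3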
|
H: Using Uniqueness Result for Analytic Functions
I am reviewing for an Analysis qual and stumbled upon this question. In particular, I am having difficulties with part (ii). My attempt is the following:
Using the hint, let $\Omega = \mathbb{C}$, $S=\{1/n : n\in \mathbb{N}\}$, and $g(z)=z^2$. We have that since $S \subset \Omega$ and both $f$ and $g$ are entire, then $f$ and $g$ are analytic on $S$. Per the uniqueness result, if $g(z)=f(z)$ for all $z\in S$, we know that since $0$ is a limit point of $S$ that is in $\Omega$, then it must be the case that $g(z)=f(z)$ for all $z\in \Omega$. However, we are given that $|f(i)| =2$, yet $|g(i)| = 1$. So in this case, just because $|g(z)| = |f(z)|$ for all $z\in S$, we don't have $g(z)=f(z)$. My strategy is then to find different functions $g$ such that $|g(z)| = |f(z)|$ for all $z\in S$ and $|g(i)|=2$. After finding all these different $g's$, I should have all the possible values of $|f(-i)|$ by just calculating $|g(-i)|$. However, I'm having trouble finding even a single function $g$ that satisfies these two conditions, much less finding all of them. Is there some systematic way I can go about finding these different $g$ functions?
AI: First, if $f$ is any entire function, $\overline{f(\bar{z})}$ is always entire, because $\overline{\sum_{n=0}^\infty a_n \bar{z}^n} = \sum_{n=0}^\infty \bar{a_n}z^n,$ which converges exactly when $\sum_{n=0}^\infty a_n z^n$ does.
As $f$ is holomorphic at $0$ and not identically $0$, there is a unique integer $n \geq 0$ such that $\lim_{z \to 0} \frac{f(z)}{z^n}$ is a nonzero complex number ($n$ is the order of the zero of $f$ at $0$). Our hypotheses show that $f$ has a zero of order $2$ at $0$. So we can express $f(z) = z^2 h(z)$ for some entire function $h$, and our hypotheses show that $|h(z)| = 1$ for $z = 1/n, n \geq 1$. In particular, $|h(0)| = 1$. So there's a neighborhood of $0$ on which $h(z)$ is nonzero.
Now put $g(z) = \frac{1}{\overline{h(\bar{z})}}$, which is holomorphic on a neighborhood of $0$, by part i. We observe that for $z = 1/n$, $n\geq 1$, $$\frac{h(z)}{g(z)} = h(z) \overline{h(z)} = |h(z)|^2 = 1.$$ So by the identity principle, $g$ and $h$ agree in a neighborhood of $0$, and we discover that in fact for all $z \in \mathbb{C}$, $h(z) = \frac{1}{\overline{h(\bar{z})}},$ or equivalently that $h(\bar{z}) = \frac{1}{\overline{h(z)}}$. For $|z| = 1$, $|f(z)| = |h(z)|$ and we see that for $|z| = 1$, $|f(\bar{z})| = \frac{1}{|f(z)|}$. So we conclude $|f(-i)| = 1/|f(i)| = 1/2.$
|
H: What is $i^j$ for quaternions?
Given complex numbers, we can calculate e.g. $i^i$.
Given quaternions, how can we calculate something like $i^j$? Wolfram Mathematica choked on that and googling did not produce any useful results. My guess is that this could be something ill defined, similar to quaternion derivative or, perhaps, even worse.
AI: $$
i^j
= (e^{i\pi/2})^j
= e^{ij\pi/2}
= e^{k\pi/2}
= k
$$
|
H: If a function $f$ is $L$-periodic and even, then $f'$ has $2$ zeros in $[0,L)$?
Let $f: \mathbb{R} \longrightarrow \mathbb{R} $ be a differentiable and even function. If $f$ is periodic with (minimal) period $L>0$, must $f'$ have exactly $2$ zeros in $[0,L)$?
For example, this occurs if we consider $f(x)=\cos(x)$, for all $x \in \mathbb{R} $, since in this case $L=2\pi$.
Is this true in general?
AI: No. Define the function $f\colon[-2,2]\to\mathbb{R}$ by
\begin{align}
f(x)=\begin{cases}
-x^2+2\,, &\text{if $|x|\leq 1$}\,, \\
(|x|-2)^2\,, &\text{if $|x|>1$}\,.
\end{cases}
\end{align}
Then $f'(-2)=f'(2)=0$. We can extend $f$ to $\mathbb{R}$ by shifting it by integral multiples of some $T>4$ and setting it to $0$ in between. The resultant graph is a function $F$ as desired with period $T$, such that $F'(x)=0$ for all $x\in[2,T-2]$.
Edit: I forgot halfway through writing that you were asking for the zeros of $F'$. $F'$ must have at least two zeros, for there must be at least one local maximum and one local minimum in each period.
|
H: Approximation of a function’s derivative
I came across this result when I was tinkering with some summations and integrals. Any ideas if it could be useful?
$\frac{d}{dx}f(x)\approx f(x+\frac{1}{2})-f(x-\frac{1}{2})$
AI: This centered difference is often used to estimate $f'(x)$. More generally, $f'(x)$ can be approximated as
$$
f'(x) \approx \frac{f(x + \frac{\Delta x}{2}) - f(x - \frac{\Delta x}{2})}{\Delta x},
$$
where $\Delta x$ is a small number. In your case, you're taking $\Delta x = 1$.
Visually, we are approximating $f'(x)$ by the slope of the line through the points $(x - \Delta x/2, f(x - \Delta x/2))$ and $(x + \Delta x/2, f(x + \Delta x/2))$.
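A tiny numeric illustration of this (my own sketch, using NumPy only for the test functions):
import numpy as np
def central_diff(f, x, dx=1e-5):
    # slope of the secant through (x - dx/2, f(x - dx/2)) and (x + dx/2, f(x + dx/2))
    return (f(x + dx/2) - f(x - dx/2)) / dx
print(central_diff(np.sin, 1.0), np.cos(1.0))   # the two values agree closely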
|
H: Prove that if $f(0)=f(1)=0$ and $M:=\max _{[0,1]}\left|f^{\prime}\right|$, then $\int_{0}^{1}|f| \leq \frac{M}{4}$.
Let $f:[0,1] \rightarrow \mathbb{R}$ is continuously differentiable, $M:=\max _{[0,1]}\left|f^{\prime}\right| .$ Prove the following statement.
(a) If $f(0)=f(1)=0,$ then $\int_{0}^{1}|f| \leq \frac{M}{4}$.
Attempt:
By taking $f(x)=\int_0^xf'(t)dt$, I am getting $|f(x)|\leq M x,~x \in [0,1]$. This gives $\int_{0}^{1}|f| \leq \frac{M}{2}$. But how to strengthen the estimate?
AI: Note that $|f(x)| = |f(x)-f(0)| \le Mx$ and
$|f(x)-f(1)| \le M(1-x)$, so
$|f(x)| \le M\min(x,1-x)$. Now integrate the upper bound.
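Spelling out that last integral (an added step): $$\int_{0}^{1}\min(x,1-x)\,dx = 2\int_{0}^{1/2}x\,dx = \frac{1}{4},$$ which gives $\int_{0}^{1}|f| \leq \frac{M}{4}$.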
|
H: Set of random numbers with uniform distribution - justify the distribution of the differences
Make a set of random integers, uniformly distributed between 0 and n=10^4. The size of the set is n^2.
import pandas as pd
import numpy as np
n = 10 ** 4
seq = pd.DataFrame(np.random.randint(0, n, n ** 2))
Plot the histogram and indeed the distribution of values is uniform:
seq.hist(bins=n)
Now, from each element k in the set, subtract the element k-1 and keep the absolute value of the difference. Drop the first element since there's no previous element to that.
diff = seq.diff().dropna().abs().astype(int)
The result is also a set of random numbers with values between 0 and n. But what is the histogram of this new set? Let's see:
diff.hist(bins=n)
Why is the new distribution skewed so much towards small values? Intuitively I expected a bell curve, maybe centered on the average.
What this distribution says is - the most likely difference between any k-1 and k is 0.
Can you provide an intuitive explanation?
AI: Here is an intuitive explanation:
Let's call the two consecutive samples $X, Y$. Since $X$ and $Y$ are drawn independently and from a uniform distribution, we have $P((X,Y) = (i,j)) = P(X=i)P(Y=j) = \frac{1}{(n+1)^2}$ fixed for all pairs $0 \leq i,j \leq n$. So, for a fixed $z \in \{0,1,...,n\}$, $$P(|X-Y| = z)=\sum_{\substack{0 \leq i,j \leq n \\ |i-j|=z}} P((X,Y) = (i,j))$$ boils down to counting how many different pair differences leads to the given $z$ value, since all pairs are equally likely.
For $z=n$, there are only two pairs: $(n,0)$ and $(0,n)$. For $z=n-1$, there are four pairs: $(n, 1), (n-1, 0), (1, n)$ and $(0, n-1)$. And so on.
If you work out the number of such pairs using simple combinatorics, you'll see that, moving from $z=n$ to $z=1$, the number of pairs increases linearly, which matches what you are seeing in your last plot.
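To make this concrete, here is a small simulation against the exact counts ($n+1$ pairs for $z=0$ and $2(n+1-z)$ pairs for $z\geq 1$); this is my own added sketch, assuming NumPy and the inclusive range $0,\dots,n$ used above:
import numpy as np
n, m = 100, 10**6
x = np.random.randint(0, n + 1, m)
y = np.random.randint(0, n + 1, m)
z = np.abs(x - y)
exact = np.where(np.arange(n + 1) == 0, n + 1, 2 * (n + 1 - np.arange(n + 1)))
print(np.bincount(z, minlength=n + 1)[:5] / m)   # empirical P(|X-Y| = z) for z = 0..4
print(exact[:5] / (n + 1) ** 2)                  # exact P(|X-Y| = z)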
|
H: Connection between trigonometric identities and secant/tangent lines
Assuming the relationship I am asking about is obvious to most students, I hope this post is an opportunity for some to have fun exploring a basic question. What I'm wondering about is the relationship between the trigonometric identities I learned about in PreCalculus and the secant/tangent lines that are used to estimate a rate of change at the start of Differential Calculus (or Calc I).
While I can solve problems using the secant identity, $\sec=\frac{r}{x}$, and I understand what it is (the reciprocal of cosine), I am having trouble connecting the relationship this identity has with the line I draw between two points on a curve, $m_{sec}=\frac{f(x)-f(a)}{x-a}$, also known as the difference quotient.
The same question comes up when I find the slope of a tangent line using the secant line. What is the relationship between the tangent I know from Trigonometry, $tan = \frac{y}{x}$, and the slope of the tangent line that I find in Calc I, $m_{tan} = \lim_{x \to a} \frac{f(x)-f(a)}{x-a}$?
I'm having trouble finding resources that address my questions directly online. So, any help would be greatly appreciated! I'll put in the time if you can point me in the right direction. Thank you!
AI: The term "secant" and "tangent" have a more general meaning than just the trig function names.
A secant line is a line that intersects a curve in at least 2
distinct points.
A tangent line is a line that only "touches" a curve once.
In the pre-calc definitions you give, the term $m_{\text{sec}}$ is called this way because it represents the slope of a line which intersects the curve given by the function $y=f(x)$ in 2 points: $(x,f(x))$ and $(a,f(a))$.
Similarly, the term $m_{\text{tan}}$ is called this way because it represents the slope of a line which only touches the curve at the point $(x,f(x))$.
In the diagram below you can visually see this. Here the red dot is the point $(x,f(x))$, the blue point is $(a,f(a))$, $m_{\text{sec}}$ would correspond to the slope of the purple line, and $m_{\text{tan}}$ corresponds to the slope of the orange line.
As far as the relation of these definitions of tangent and secant with how they're used in trigonometry, pyon's answer gives the diagram of the visual representation of secant and tangent functions as lines. Here we see that the secant function can be seen as a line that intersects the unit circle in 2 points, and similarly, the tangent function can be seen as another line which only touches the unit circle once.
|
H: Limitation of eigenvalues and eigenvectors
Consider a simple example of a 2x2 matrix. Let's say we assign two numbers $\lambda_1$ and $\lambda_2$ and, for each of these numbers, corresponding vectors $x_1,x_2$ with two entries each, and then assume that these are the eigenvalues and corresponding eigenvectors of some 2x2 square matrix.
How can we go about proving that for arbitrary values of $\lambda_1, \lambda_2, x_1,x_2$ a corresponding square matrix does not always exist? Also, a general proof would be a lot better than one for a fixed size.
AI: You can lean fairly heavily on diagonalisability here. Just let $V$ have columns $x_1$, $x_2$, $D$ have diagonal $\lambda_1$, $\lambda_2$, and let $A=VDV^{-1}$ be your matrix with these eigenvalues and eigenvectors. It works in all cases except those where $x_1$, $x_2$ are parallel, which you shouldn't be considering. It even works when $\lambda_1=\lambda_2$, and generalises beyond the 2x2 case.
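A concrete instance of this construction (my own sketch, assuming NumPy):
import numpy as np
lam = np.array([2.0, -1.0])                # prescribed eigenvalues
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # prescribed eigenvectors as columns (not parallel)
A = V @ np.diag(lam) @ np.linalg.inv(V)    # A = V D V^(-1)
print(A @ V[:, 0] - lam[0] * V[:, 0])      # ~ [0, 0]
print(A @ V[:, 1] - lam[1] * V[:, 1])      # ~ [0, 0]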
|
H: Sorted extraction
I have numbered balls from $1$ to $N$ in an urn and take out $n$ balls, one at a time, putting it back each time. I want to calculate the probability of getting a strictly growing sequence. I thought about doing it with success cases over total cases.
If I'm not mistaken, total cases would be $N^n$ and success cases would be $\frac{\binom{N}{n}}{n!}$. I think the total cases are like this since I can choose between $N$ balls in $n$ opportunities, and for the success cases I thought I have to choose $n$ different numbers among the $N$ possibilities, and since they have to be sorted increasingly, I have to remove the repeats, so I divide by $n!$.
The thing I'm insecure about is that I've also programmed this, and the result looks more like $\frac{\binom{N}{n}}{N^n}$, which I thought should be wrong; but looking at the results I can't find any bug, so I'm not sure if I have a bug or I'm thinking about it incorrectly.
Thanks
AI: There are two things going on. The first is that you have to draw $n$ distinct balls, and the second is that they have to come in increasing order. Given that the balls are distinct, the probability that they come in increasing order is $\frac1{n!}$.
Now there are $n!\binom{N}{n}$ ways to draw $n$ distinct balls, and $N^n$ equally likely ways to draw the balls, so the probability is $$\frac{\binom{N}{n}}{N^n}$$ as your simulations confirm.
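A small Monte Carlo check of this formula (my own added sketch, in plain Python):
import random
from math import comb
N, n, trials = 10, 3, 100_000
def strictly_increasing(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))
hits = sum(strictly_increasing([random.randint(1, N) for _ in range(n)])
           for _ in range(trials))
print(hits / trials, comb(N, n) / N**n)   # simulation vs formula, both about 0.12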
|
H: How does one handle series generating functions with multiple equals signs?
How would somebody walk through this equation? I'm looking for $q(n)$. If I'm given an input of 10, for example, how would this play out? The two equals signs are throwing me off. Does the result of the far-right expression serve as input to the middle one? If so, where?
I haven't "mathed" in a very long time. Any help and patience would be BEYOND appreciated.
$$\sum_{n=0}^\infty q(n) x^n = \prod_{k=1}^\infty (1 + x^k) = \prod_{k=1}^\infty \frac{1}{1-x^{2k-1}}$$
This is a link to where I found this equation on Wikipedia => https://en.wikipedia.org/wiki/Partition_(number_theory)
AI: The "two equal signs" are saying that all 3 expressions are the same. As a simpler example, consider the following statement:
$$x^2 + 2x + 1 = (x+1)^2 = (x+1)(x+1)$$
We are saying that all 3 of the above are the same.
Now, as for actually computing the $q(n)$, you have two options. You can expand out $\prod (1+x^k)$ or you can expand out $\prod (1 - x^{2k-1})^{-1}$. Either will give you the right answer.
As an example, notice
$$(1+x)(1+x^2)(1+x^3) = 1 + x + x^2 + 2x^3 + x^4 + x^5 + x^6$$
Also notice $q(0) = 1$, $q(1) = 1$, $q(2)=1$, and $q(3)=2$. So the first few coefficients are exactly the partition numbers!
The reason the $x^4$ coefficient is $1$ instead of $2$ is because we haven't used enough terms! If we multiply by $(1+x^4)$ then we find the correct coefficient. Indeed, if we were to multiply all of the $(1+x^k)$ instead of stopping at some finite point, then the coefficients would be correct for every $n$.
For fun, you should (by hand) compute
$$\prod_{k=1}^7 (1+x^k)$$
that is, you should actually expand out
$$(1+x)(1+x^2)(1+x^3)(1+x^4)(1+x^5)(1+x^6)(1+x^7)$$
You'll find that for each $n \le 7$ the coefficient of $x^n$ is exactly $q(n)$! The idea is that multiplying by $(1+x^8)$ cannot possibly impact the $x^7$ term. Do you see why?
So, to actually do this in "the real world" you would get a computer to do it for you. A computer algebra system (like sage) will happily tell you the $n$th coefficient of $\prod(1+x^k)$ by doing the annoying foiling faster than you or I ever could! There are plenty of tutorials online, but if you want me to say how to do it here, feel free to ask and I'll be happy to.
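For instance, with SymPy (my own sketch, one of many ways to do it):
from sympy import symbols, prod, Poly
x = symbols('x')
f = prod([1 + x**k for k in range(1, 8)])   # (1+x)(1+x^2)...(1+x^7)
coeffs = Poly(f, x).all_coeffs()[::-1]      # coefficients of x^0, x^1, ...
print(coeffs[:8])                           # [1, 1, 1, 2, 2, 3, 4, 5], i.e. q(0)..q(7)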
I hope this helps ^_^
|
H: Basic ODE with Initial Value
old guy here working through an old ODE book (separable equations):
$(\ln y^x)\,dy = 3x^2y\,dx$ which rearranges to:
$\frac{\ln y}{y}\,dy = 3x\,dx$
solving this yields:
$(\ln y)^2 = \frac{3x^2}{2} + C$
initial boundary: $y(2) = e^3$
My answer: $(\ln y)^2 = \frac{3x^2}{2} + 3$
Book's answer: $(\ln y)^2 = 3x^2 + 3$ <- I also copied this wrong, it should be $-3$, but I forgot to bring the $1/2$ when I did the integration.
I don't see how they just removed the 2 from $3x^2/2$. Did I forget everything in 20 years about just lumping that value in with the integration constant C?
AI: You made a mistake in your integral on the LHS:
$\int\frac{\ln y}{y}\,dy = \frac{1}{2}(\ln y)^2$, so the correct general solution is $(\ln y)^2 = 3x^2 + C$, and the condition $y(2)=e^3$ gives $C = 9 - 12 = -3$, the book's answer.
|
H: Is it possible to show $(\lnot p \implies p) \implies p \vdash (\lnot \lnot p \implies p)$ in constructive logic?
I was given the task of showing that $(\lnot p \implies p) \vdash p$ cannot be proven in constructive logic (that is, a system with no excluded middle, double negation, or $\lnot$-elimination).
I'm trying to assume that a proof for it exists and use that to show the law of excluded middle, or double negation to arrive at a contradiction. However, I'm a bit stuck and I'm not sure if this is the right approach.
Any input appreciated!
AI: Yes. We can reduce as $$(\lnot p\to p)\to p,\lnot\lnot p\vdash p,$$ and then it suffices to show $$ (\lnot p\to p)\to p, \lnot\lnot p\vdash \lnot p \to p,$$ and again we can reduce that to $$ (\lnot p\to p)\to p,\lnot\lnot p, \lnot p\vdash p,$$ and we see we have a statement and its negation to the left of the turnstile and we're done.
Observe that the left-hand side of the turnstile in your title has the schematic form of proof by contradiction (note, nothing would change with the above proof if we replaced with $(\lnot p \to \bot)\to p$, and indeed this does not change the semantics of the statement, since we can prove $\bot$ from $\lnot p$ if and only if we can prove $p$ from $\lnot p$). Thus, we might expect this to be an equivalence. And it is... try to prove the other direction.
|
H: Show that $\lim_{n\rightarrow\infty} \frac{\binom{n}{k}}{2^n} =0$
Show that the following limit holds
$$
\lim_{n\rightarrow\infty} \frac{\binom{n}{k}}{2^n} =0
$$
for a fixed value of $k$
I really am just stuck at the first step here. Normally I would consider tackling this using L'Hôpital's rule; however, $\binom{n}{k}$ is not differentiable. I was considering using the binomial theorem, but that is for sums, not just this single term. Any help appreciated!
AI: If you accept as known $\binom{n}{k} \leqslant \frac{n^k}{k!}$, then you obtain it by estimation.
Addendum: I have added the lower bound as well, as it may be helpful for somebody.
$$\frac{n^k}{k^k} \leqslant \binom{n}{k} \leqslant \frac{n^k}{k!}$$
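Spelling out the estimation (an added step): $$0 \leqslant \frac{\binom{n}{k}}{2^n} \leqslant \frac{n^k}{k!\,2^n} \xrightarrow[n\to\infty]{} 0,$$ since for fixed $k$ the numerator grows only polynomially in $n$ while $2^n$ grows exponentially.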
|
H: Understanding why the hyperplane is perpendicular to the vector
I am following along Stephen Boyd's lectures on convex optimization and am having trouble understanding the diagram in this screenshot.
I have read through a few answers such as this one and this one.
My question is that I am having trouble understanding why $$a^Tx = b$$ implies that $a$ and $x$ are perpendicular.
One way I understood it, following this post's answer is $$a^Tx = b \implies a^T x - a^T x_b = 0 $$ for some $x_b$ and so $a^T (x - x_b) = 0$. I understand why $a$ will be perpendicular to $x - x_b$ but am having trouble seeing why $a$ will be perpendicular to $x$.
One way I think about it is that $x_b$ is a particular solution to the equation $a^Tx = b$, and since this is a linear program, any $a' = ka$ and $x' = x_b/k$ for non-zero $k$ will satisfy $a'^T x' = b$. Hence, $x_b$ is just one of the solutions, and the other solutions are of the form $x_b/k$ for non-zero $k$. Note, the other solutions are all parallel to $x_b$, and I think a linear combination of parallel vectors is parallel to any of them.
Hence, since $x$ and $x_b$ are parallel, $x - x_b$ is parallel to $x$, and so $a^T$, which is perpendicular to $x - x_b$, is perpendicular to $x$.
However, I'm having trouble seeing this for fixed $a$. In particular, is $a$ fixed?
AI: The diagram in the screenshot is not showing that $a$ and $x$ are perpendicular. Indeed, by definition two vectors are perpendicular (aka orthogonal) iff their dot product is zero. Rather, it is showing that if $x_0$ satisfies $a^\top x_0 =b$, then every element $y$ in the hyperplane $H = \{y: a^\top y = b\}$ may be written in the form $y = x_0 + x$, where $a^\top x = 0$.
|
H: Find the minimum perimeter of the triangle.
Consider the point $A(5, 2)$ and variable points $B(a, a)$ and $C(b,0)$, $a\in \mathbb{R}$, $b\in \mathbb{R}$. If the perimeter of $\triangle ABC$ is minimum, find $a$ and $b$.
My attempt:
$\begin{align}
P&=AB+BC+CA\\
&=\sqrt{(a-5)^2+(a-2)^2}+\sqrt{(a-b)^2+(a)^2}+\sqrt{(5-b)^2+(2)^2}
\end{align}$
I tried partially differentiating the equation w.r.t. $a$ and $b$ and solving for $a$ and $b$, but I obviously didn't get the answer, as differentiating the square roots makes things much more complicated. I didn't show the calculations here as they are very tedious.
I also know that this is not the expected method to solve this problem. It has something to do with "For minimum length, path followed by light ray is to be traced". But I don't know how to apply it here.
Any hints are appreciated!
AI: Suppose we draw $\triangle ABC$. Reflect $A$ about the line $y = x$, to the point $A' = (2,5)$. Also, reflect $A$ about the line $y = 0$ to the point $A'' = (5,-2)$. Then the side $AB$, when reflected about $y = x$, becomes $A'B$, and the side $AC$, when reflected about $y = 0$, becomes $A''C$, and the path $A'BCA''$ has the same length as the perimeter of $\triangle ABC$ because $A'B = AB$ and $A''C = AC$. For what choice of $B$ and $C$ do we have the shortest possible distance between $A'$ and $A''$, and therefore the least possible perimeter?
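Carrying this through (my own added completion of the hint): the minimum occurs when $B$ and $C$ lie on the straight segment from $A'$ to $A''$. The line through $A'=(2,5)$ and $A''=(5,-2)$ is $y-5=-\frac{7}{3}(x-2)$; intersecting it with $y=x$ gives $a=\frac{29}{10}$, and intersecting it with $y=0$ gives $b=\frac{29}{7}$.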
|
H: Unable to prove an exercise in Continuous functions in Topology
I am self-studying topology from C. Wayne Patty and I am unable to solve the following question in Exercise 1.7.
Adding image -> (the statement, as reconstructed from the answer below: if $f,g: X\to Y$ are continuous, $Y$ is Hausdorff, and $f=g$ on a set $A$, show that $f(x)=g(x)$ for all $x\in\overline{A}$).
I tried assuming a sequence $x_n \in A$ which converges to $x$. I got $f(x_n) = g(x_n)$, but I am not able to move forward.
Please give some hint. No need to fully answer it.
AI: The proof writes itself if you use a proof by contradiction:
Suppose, for a contradiction, that $f(x) \neq g(x)$ for some (now fixed) $x \in \overline{A}$.
Then as $Y$ is Hausdorff, there are open, disjoint sets $U,V$ in $Y$ such that $f(x) \in U$ and $g(x) \in V$.
As $f$ is continuous at $x$, there is some open neighbourhood $U_x$ of $x$ such that $$f[U_x] \subseteq U\tag{1}$$
As $g$ is continuous at $x$, there is some open neighbourhood $V_x$ of $x$ such that $$g[V_x] \subseteq V\tag{2}$$
Now $U_x \cap V_x$ is an open neighbourhood of $x$ and as $x \in \overline{A}$, there exists some point $a \in (U_x \cap V_x) \cap A$.
$(1)$ implies (as $a \in U_x$) that $f(a) \in U$. Also, $(2)$ implies that $g(a) \in V$. But then $$f(a) = g(a) \in U \cap V$$
contradicts the disjointness of $U$ and $V$. This contradiction shows that our initial assumption was false and so $f(x)=g(x)$ for all $x \in \overline{A}$.
|
H: Applying the mean value theorem to sine function
Prove that $ \pi x\cos(\frac{1}{2}\pi x^2) = c$ has a solution in $0 < x< 1$ for $c=1$?
Is this true for all positive values of $c$?
I said define $f:[0,1] \to \mathbb{R}$ by $f(x) =\sin(\frac{1}{2}\pi x^2)-cx$. Then $f$ is (in particular) continuous on $[0,1]$ and differentiable on $(0,1)$ so by the MVT there exists $\xi \in (0,1)$ s.t. $f'(\xi) = \frac{f(1)-f(0)}{1-0} = 1-c$. But also since $f$ is differentiable we can write $f'(\xi) = \pi \xi \cos(\frac{1}{2}\pi \xi^2)-c$.
So there does exist a solution when $c=1$ since then $f'(\xi) =0$.
Is it right to say that since $f'(\xi) = 1-c \ne 0$ for any $c \ne 1$, the equation has no solutions for all other positive values of $c$?
AI: No, that is not correct. $f(x) =\pi x \cos (\frac {\pi x^{2}} 2)$ is a nonnegative continuous function on $[0,1]$ vanishing at $0$ and taking strictly positive values at other points. So it attains all values between $0$ and the maximum value. So $f(x)=c$ has a solution in $(0,1)$ for all sufficiently small positive numbers $c$. But there is no solution when $c $ exceeds the maximum value. Actually it is easy to calculate the maximum value but I will leave that to you.
|
H: Asking about the number of parallelograms in a figure
This question was asked by my younger brother and I couldn't solve it.
So, I am asking it here.
Question is ->
I think directly counting is a bit lengthy and could lead to errors.
Can someone please suggest a method for such questions.
I am a master's student in mathematics, so the answer can include proper mathematical techniques.
AI: I've divided the parallelograms into two basic types: right-sided (whose two parallel sides are inclined to the right) and left-sided. By symmetry we can say that the numbers of parallelograms of each type are equal, so we will just count the right-sided ones.
$1×1$: There are $5$ parallelograms with sides of one unit. $3$ below the middle line and $2$ above.
$1×2$: This also includes $5$ parallelograms. $3$ horizontal and $2$ vertical.
$1×3$: There is only $1$ parallelogram of these side type which is horizontal below the middle line.
$2×2$: It also has $1$ parallelogram.
Hence, this gives us $12$ parallelograms. That means the total number of right-sided and left-sided parallelograms is $24$.
But there is also a third type which we missed, which is neither right nor left: notice the three kite-shaped parallelograms whose diagonal is the middle line.
So, there are total $27$ parallelograms.
|
H: Consider the function$ f : \mathbb Z \to \mathbb Z$ given by$ f(n) = n^2.$ Write down the set $f ^{−1} (\{9, 10\}).$
How can I solve this?
I think for $f$ inverse of $9$ the answer is $3$. And for $f$ inverse of $10$ I am confused. Is it possible to find the solution for $f$ inverse of $10$?
AI: By definition $f^{-1} (\{9,10\})$ is the set of all integers $n$ such that $n^{2}=9$ or $n^{2}=10$. Since no integer has square $10$, the answer is therefore $\{-3,3\}$.
|
H: Prove the theorem about a function $f$ that is differentiable at only one point
Let $f:\mathbb{R}\to\mathbb{R}$. I understand, and I have seen via functions of the form $f(x)=x^2g(x)$, that a function can be differentiable at only one point. If such an $x_0$ exists at which $f$ is differentiable, then:
$\lim_{\Delta x \to 0} \frac{f(x_0+\Delta x) - f(x_0)} {\Delta x} = f'(x_0)$
How can I prove it?
AI: As $f(x)=x^2 g(x)$, you have $f(0) = 0$. Then
$$\left\vert \frac{f(x)-f(0)}{x} \right\vert \le \vert x \vert \vert g(x) \vert.$$
So at least if $g$ is bounded in a neighborhood of $0$ this proves that $f^\prime(0) = 0$ as $\lim\limits_{x \to 0} x g(x)=0$.
|
H: Show that a sequence of PDF of normal distribution with running mean and unit variance is not bounded by an integrable function
I am trying to show that the condition of being bounded by an integrable function is crucial in the Dominated Convergence Theorem.
Consider a sequence of functions $(f_n)$ on $\mathbb{R}, $ which is equipped with the Lebesgue measure, defined by
$$f_n(x) = \frac{1}{\sqrt{2\pi}} e^{- \frac{(x-n)^2}{2}}.$$
So, each $f_n$ is the probability density function of the normal distribution with mean $n$ and variance $1.$
Clearly
$$\lim_{n\to\infty} \int_{-\infty}^\infty f_n(x) \, dx = 1 \neq 0 = \int_{-\infty}^\infty \lim_{n\to\infty} f_n(x) \, dx.$$
In this case, the Dominated Convergence Theorem fails because $(f_n)$ is not bounded by an integrable function.
Intuitively, this is clear as $f_n$ is 'running' towards infinity.
However, I have difficulty showing it.
AI: Suppose there is an integrable function $g$ such that $e^{-(x-n)^{2}/2} \leq g(x)$ for all $n$ and $x$. Then $g(x) \geq e^{-1/8}$ on the interval $(n-\frac 1 2, n+\frac 1 2)$ for each $n$. Hence $\int g(x)\, dx \geq \sum_n \int\limits_{n-\frac 1 2}^{n+\frac 1 2} g(x)\, dx \geq \sum_n e^{-1/8} =\infty$.
|
H: Meaning of probability of the intersection of two events
Suppose $X$ is a random variable representing the roll of a 6-sided die. Suppose $A$ is the event that the outcome is even.
The example asks for the conditional probability of X=k, given A.
The answer to this is 1/3. We also know that P(A) = 1/2.
If we write down the formula of the conditional probability and solve for the probability of the intersection of the two events, the outcomes is 1/6.
I can't understand why the probability of the intersection is 1/6. What does it actually mean for the two events to happen at the same time? How could we find this probability without using the conditional probability?
AI: The probability of the intersection $\{X=k\}\cap A$ depends of the value of $k$:
if $k$ is odd, then the outcome cannot be odd and even at the same time, so the probability would be zero; in other words, $P(\{X=k\}\cap A)=P(\emptyset )=0$.
if $k$ is even then $\{X=k\}\subset A$ so $\{X=k\}\cap A=\{X=k\}$, therefore $P(\{X=k\}\cap A)=P(\{X=k\})=\frac1{6}$.
|
H: Prove that $n^2>n+1$ for all $n\geq 2\in\mathbb{Z}$
I'm a bit unsure of my reasoning here:
Clearly the proposition $P_2$ which states that $2^2>2+1$ is true.
Suppose that $k^2>k+1$ holds true for some $k\geq 2$.
We define $d_k=k^2-(k+1)=k^2-k-1$
Suppose we also have $(k+1)^2>(k+1)+1$.
As before we define $d_{k+1}=(k+1)^2-((k+1)+1)$
Now we have
$\begin{align} d_{k+1}=k^2+k-1&>k^2-k-1=d_k\\
\iff 2k&>0 \end{align}$
which must be true since $k\geq 2$.
My intuition was that by showing that the difference between the terms is increasing with each iteration, it would prove that the inequality must always hold. However I feel a bit uneasy that I assumed that both $P_k$ and $P_{k+1}$ were true. Is this line of reasoning valid?
Any advice helps, thanks!
AI: $P_k$ is the statement that $d_k >0$. You showed that $d_{k+1} >d_k$ so if you assume $P_k$ you get $d_{k+1} >d_k>0$ which shows that $P_{k+1}$ is true.
|
H: Proving a function to be continuous in Topology
I am trying exercises of section 1.7 of C. Wayne Patty and I am unable to think about solution of this question.
Note that I am asking about part (b) only.
My attempt -> The definition of continuity is the inverse of the definition of the open sets given here. So I don't see how the definition of continuity can be derived from the definition of open sets here.
Kindly help.
AI: Let $U$ be an open subset of $Y$. You want to prove that $f^{-1}(U)$ is an open subset of $X$. But, by the definition of the topology on $X$, this is true indeed, since $A\in\mathcal T$ if and only if $A=f^{-1}(B)$ for some $B\in\mathcal U$.
|
H: Choice of a vector for supporting hyperplane theorem
I'm having trouble relating the content in these notes to these ones from MIT OCW here.
Specifically, my question is that the first set of notes describes the specific half space where $a^Tx \leq a^T x_0$, while the second concludes that any one of the half spaces must include $C$.
Generally, I'm having trouble visualizing this for all $a$. Does the theorem assume the same $a$ for all boundary points?
AI: Both statements are equivalent. For any given point on the boundary, there is a hyperplane $a^Tx=k=a^Tx_0$ such that your set is in one of the half-spaces defined by it. You can always assume it is the "positive" half-space $a^Tx\ge k$, otherwise you replace $a$ by $-a$. Yes, the linear form $a$ depends on the point, it is different at every point, in general.
|
H: Prove that $\frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\frac{1}{\sqrt{4}}+\frac{1}{\sqrt{5}}+\frac{1}{\sqrt{6}}+\cdots=\infty$.
I need a fresh solution, but here is mine:
We have that
$$\begin{align*}
\frac{1}{\sqrt{1}} &+ \frac{1}{\sqrt{2}}+ \frac{1}{\sqrt{3}}+ \frac{1}{\sqrt{4}}+ \frac{1}{\sqrt{5}}+ \frac{1}{\sqrt{6}}+ \frac{1}{\sqrt{7}}\cdots \\
&> \frac{1}{1}+ \frac{1}{2}+ \frac{1}{3}+ \frac{1}{4}+ \frac{1}{5}+ \frac{1}{6}+ \frac{1}{7}\cdots \\
&> \frac{1}{1}+ \frac{1}{2}+ \left ( \frac{1}{4}+ \frac{1}{4} \right )+ \left ( \frac{1}{8}+ \frac{1}{8}+
\frac{1}{8}+ \frac{1}{8} \right )\cdots \\
&= 1+ \frac{1}{2}+ \frac{1}{2}+ \frac{1}{2}+ \frac{1}{2}+ \frac{1}{2}\cdots \\
&= \infty
\end{align*}$$
AI: Because $$\frac{1}{\sqrt{n}}>\frac{2}{\sqrt{n}+\sqrt{n+1}}=2(\sqrt{n+1}-\sqrt{n})$$ and the telescopic summation.
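Spelling out the telescoping (an added line): $$\sum_{n=1}^{N}\frac{1}{\sqrt{n}} > 2\sum_{n=1}^{N}\left(\sqrt{n+1}-\sqrt{n}\right) = 2\left(\sqrt{N+1}-1\right)\to\infty \text{ as } N\to\infty.$$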
|
H: the example that CLT holds, but LLN does not
When a sequence of random variables $(X_k)$ have different probability distributions, CLT is not necessarily stronger than LLN. The image above is captured from Feller Vol.1, P.255. This shows one example that CLT holds but LLN does not. The author says that the $\bf{sufficient}$ condition for LLN is $s_n/n \to 0$, where $s_n^2 = \sigma_1^2 + \cdots + \sigma_n^2$ ($Var(X_k) = \sigma_k^2$). Also, the Lindeberg theorem indicates that when $s_n \to \infty$ and $\frac{1}{s_n^2}\sum_{k=1}^n E(U_k^2) \to 1$, CLT can be applied. Here, $U_k$ is a truncated random variable, which is equal to $X_k$ if $|X_k| \le s_n \epsilon$ for any $\epsilon >0$, otherwise $U_k =0$.
I understand the example given here except for the statement which I highlighted with the purple line. I am confused whenever the author says "the order of magnitude ...". What does it mean exactly in this context? In addition, I can't see how the author concludes that the law of large numbers cannot apply for $\lambda \ge 1/2$.
I would greatly appreciate if you give some help.
AI: About the order of magnitude: it just means that the quotient
$$
S_n/n^{\lambda+1/2}
$$
is asymptotically (as $n\to\infty$) between two positive constants in probability. Therefore the LLN cannot hold if $\lambda>1/2$, because in that case you would have $n^{\lambda+1/2}/n=n^{\lambda-1/2}\to \infty$ as $n\to \infty$, which prevents $S_n/n$ from remaining close to the mean.
|
H: Let $A = \{1, 2, 3\}$. Find distinct functions $f:A\to A$ and $g:A\to A$ such that $g\circ f\neq f\circ g$.
This question appeared in my textbook while I was solving exercises. I am not super sure about my solution to it. Can anyone please check my solution? I can't consult my professor right now (by mail) because it's midnight here.
I am unable to attach a picture of my work but my answer is:
$$f=\{(1, 2), (2, 2), (3, 1)\} \quad\text{and}\quad g=\{(2,1),(1,3),(3,3)\}$$
It will be very helpful if you tell me whether it is correct or not.
AI: Yours is correct. Another solution would be to take $f$ and $g$ to be constant functions $1$ and $2$. Then
$$f\circ g=f\ne g=g\circ f$$
|
H: Domain of $\arccos(x)$
Is there an explanation of why the domain of $\arccos(x)$ is $[-1, 1]$?
AI: Because it is meant to be the inverse function to $\cos(x)$, which has a range of $[-1,1]$. The inverse $f^{-1}$ of a function $f$ has $f$'s codomain as its domain by definition; i.e., if $f: A \to B$ then $f^{-1} : B \to A$.
|
H: Question about a proof related to the preimage of a measurable function.
I have a question regarding the proof of Thm A. I know that $f$ is measurable if the set $\{x: f(x)<c\}$ is measurable for every real value $c$.
I have a hard time understanding (filling in the details of) how the set $A$ transforms into a union of intersections of two sets involving $r$. My current guess is that we want to use the rational numbers to get a countable union? But why is that? And why is it the intersection of two sets?
AI: Here $A=\{x: f(x)<g(x)+c\}$. Also write $A'=\bigcup_{r\in\Bbb Q}\big[\{x:f(x)<r\}\cap\{x:r-c<g(x)\}\big].$
Now, $y\in A\implies f(y)<g(y)+c$, since between two real numbers we have a rational, there is $r\in\Bbb Q$ such that $f(y)<r<g(y)+c$. So, $y\in A'$.
Conversely if $z\in A'$, then $z\in \{x:f(x)<r_1\}\cap\{x:r_1-c<g(x)\}$ for some $r_1\in\Bbb Q$. That is $f(z)<r_1$ and $r_1-c<g(z)$, adding these we get $f(z)<g(z)+c$. So that $z\in A$.
|
H: When can you switch the limits of integration of a line integral?
I was looking into why the property that $\int_a^b f(x) \ dx = -\int_b^a f(x) \ dx$ holds true. I found that 2 common answers were that
It comes from $\int_a^b f(x) dx + \int_b^c f(x) dx = \int_a^c f(x) dx$ for arbitrary $a \le b \le c$ (for example, in this answer).
It comes from the fundamental theorem of calculus $\int_a^b f(x)\,dx = F(b) - F(a)$ (for example, in this answer).
From my understanding of these answers, the first one has a more lenient hypothesis, since to apply F.T.C. we need the function to have an antiderivative, which is not always the case.
Knowing this, I was wondering about the extension of this question to a line integral. Let's say that $C$ is a path that starts at point $p$ and ends at point $q$. If I define $C^*$ to be the same path but starting at $q$ and ending at $p$, is it generally true that
$$
\int_{C} \mathbf{F} \cdot d\mathbf{r} = - \int_{C^*} \mathbf{F} \cdot d\mathbf{r} \quad ?
$$
where here $r:[t_0, t_f]\subset \mathbb{R} \to C$, with $r(t_0) = p$ and $r(t_f) = q$, is a bijective parametrization of our path.
I know that I can show this to be true if $\mathbf{F}$ happens to be a conservative field using the gradient theorem, in a similar manner as the 1D case can be shown by F.T.C., but since this is not always true I don't know if I can say that this holds in general.
Is there a way to show that this always holds? Or alternatively, is there a counterexample where this fails? Thank you!
AI: You can deduce this property from the one dimensional case:
Suppose we have a $C^1$ path $\gamma:[0,1]\to C$, where $C=\gamma([0,1])$.
Now we consider a $C^1$ 'reparametrization' $\varphi:[0,1]\to [0,1]$ with $\varphi(0)=1$, $\varphi(1)=0$ and $\varphi'(s)<0$ for all $s\in [0,1]$. Let's define a new path $\gamma^*:[0,1]\to C$ by $\gamma^*(s)=\gamma(\varphi(s))$. (Note that the new path 'runs through $C$ in the opposite direction')
Then by definition we have
\begin{align}
\int_\gamma F\cdot dr=\int_0^1 F(\gamma(t))\cdot \dot{\gamma}(t)dt
\end{align}
If we make the substitution $t=\varphi(s)$ then $dt=\varphi'(s)ds$ and
\begin{align}\int_0^1F(\gamma(t))\cdot \dot{\gamma} (t)\,dt&=\int_1^0F(\gamma(\varphi(s)))\cdot\dot{\gamma}(\varphi(s))\,\varphi'(s)\,ds\\
&=\int_1^0F(\gamma^*(s))\cdot\dot{\gamma}^*(s)\,ds\\
&=-\int_0^1F(\gamma^*(s))\cdot\dot{\gamma}^*(s)\,ds\\
&=-\int_{\gamma^*}F\cdot dr\end{align}
(The second line follows from the chain rule and the third line by the identities for the 1-d case that you mentioned)
Hope this helps!
|
H: What can we say about the probability of two events when $A$ implies event $B$?
Suppose we have two events $A$ and $B$ where $A$ implies $B$. What can we say about their probabilities?
My try:
I can come up with two events
$A=\{\text{rainy weather}\}$
and
$B=\{\text{cloudy weather}\}$
where $A \rightarrow B$.
Also, we all know intuitively $\Pr\{\text{rainy weather}\} \leq \Pr\{\text{cloudy weather}\}$.
Can you prove my argument rigorously using probability laws? I do not want this fact shown merely in words.
AI: $A\Rightarrow B$ means that $A\subseteq B$
this is the situation of $A\Rightarrow B$ (picture $A$ as a region contained in $B$):
if $A$ is true, $B$ is true;
if $B$ is true, $A$ can be true or false.
Then, writing $B = A \cup (B\setminus A)$ as a disjoint union, additivity gives $\mathbb{P}[B] = \mathbb{P}[A] + \mathbb{P}[B\setminus A] \geq \mathbb{P}[A]$.
|
H: If a convex combination of conformal matrices is conformal, are they all proportional?
$\newcommand{\CO}{\text{CO}}$
$\newcommand{\SO}{\text{SO}}$
$\newcommand{\dist}{\text{dist}}$
Let
$\CO(2) =\{\lambda R : R \in \SO(2),\ \lambda > 0\} $ be the set of $2 \times 2$ conformal matrices.
Let $A_i \in \CO(2)$ be a finite number of conformal matrices, and suppose that $\sum \lambda_i A_i \in \CO(2)$, where $\lambda_i \ge 0$ and $\sum \lambda_i=1$.
Is it true that all the $A_i$ are multiples of the same conformal matrix?
I don't even know the answer for the special case where we have only two matrices in the convex combination.
AI: Counterexample:
$$\frac{1}{2}\begin{pmatrix}1&0\\0&1\end{pmatrix}+\frac{1}{2}\begin{pmatrix}0&-1\\1&0\end{pmatrix}=\frac{1}{\sqrt2}\begin{pmatrix}\frac{1}{\sqrt2}&-\frac{1}{\sqrt2}\\\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{pmatrix}$$
|
H: How to write multiplication of series of numbers in factorial form?
I want to write
$$1 \times 2 \times 3 \times \dotsb \times n \times 1 \times 5 \times 9 \times 13 \times \dotsb \times (4m+1)$$
in factorial form, but I don't know how.
AI: You could use multifactorial notation:
$1 \times 2 \times 3 \times \dotsb \times n \times 1 \times 5 \times 9 \times \dotsb \times (4m+1) = n! \times (4m+1)!!!!$
|
H: Passage missing in simple proof of orthogonal decomposition
I am reading this nice book on linear algebra. Specifically, I am reading the proof of the Theorem for Orthogonal decomposition of a vector $x\in \mathbb{R}^n$, given a subspace $W$. I think there is a step missing on the proof of its uniqueness.
Theorem:
Let $W$ be a subspace of $\mathbb{R}^{n}$ and let $x$ be a vector in $\mathbb{R}^{n}$. Then we can write $x$ uniquely as
$$x = x_W + x_{W^\perp}$$
where $x_W$ is the closest vector to $x$ on $W$ and $x_{W^\perp}$ is in $W^\perp$.
If I am not wrong, the proof of uniqueness is usually carried out by contradiction, i.e. by assuming that the decomposition is not unique and deriving a contradiction.
Proof:
We assume that
$$x = x_W + x_{W^\perp}=y_W + y_{W^\perp}$$
Rearranging gives:
$$x_W - y_W = y_{W^\perp} - x_{W^\perp} $$
This is the passage I don't fully understand:
Since $W$ and $W^\perp$ are subspaces, the left side of the equation is in $W$ and the right side is in $W^\perp$. Therefore, $x_W - y_W$ is in $W$ and $W^\perp$, so it's orthogonal to itself.
I don't understand why there is a "therefore" there, and why we can say that $x_W - y_W$ is in $W^\perp$ as a linear combination of vectors in $W$ should stay in $W$, and not in $W^\perp$.
AI: Rearranging you get that$$x_W-y_W=y_{W^\perp}-x_{W^\perp}.\tag1$$So, since $x_W-y_W\in W$, $y_{W^\perp}-x_{W^\perp}\in W$, too. But $y_{W^\perp}-x_{W^\perp}\in W^\perp$. So, $y_{W^\perp}-x_{W^\perp}=0$, and it follows from $(1)$ that $x_W-y_W=0$ too. So, $x_W=y_W$, and $x_{W^\perp}=y_{W^\perp}$.
|
H: Is there a precise definition of "arbitrary union"?
Is there a precise formulation of what "arbitrary union" means? For example, for a topology, we require that it be closed under arbitrary unions. Do we mean the union of any subcollection of the topology is also in the topology?
In general, do we define "arbitrary union" in terms of the union of a family? If so, how do we know this captures the idea of "arbitrary union" completely?
AI: Yes, if $\mathcal{T}$ is a collection of sets then it is closed under "arbitrary unions" if
$$\forall \mathcal{T}' \subseteq \mathcal{T}: \bigcup \mathcal{T}' \in \mathcal{T}$$
so in words: the union of any subfamily of the family is also in the family.
Note that this includes the finite unions: if $O_1, O_2 \in \mathcal{T}$ we can take $\mathcal{T}'=\{O_1,O_2\}\subseteq \mathcal{T}$ and then $\bigcup \mathcal{T}' = O_1 \cup O_2 \in \mathcal{T}$ e.g. And likewise for countable unions: if $O_n, n \in \Bbb N$ are in $\mathcal{T}$ , take $\mathcal{T}'=\{O_n\mid n \in \Bbb N\}$ and then $\bigcup_n O_n = \bigcup \mathcal{T'} \in \mathcal{T}$ etc.
Often we just write an arbitrary union as $\bigcup_{i \in I} O_i$ where $i \in I$, $I$ is some index set, and all $O_i \in \mathcal{T}$. Then we leave unspecified whether $I$ is finite, countable or whatever.
|
H: List of operations on a set to make it a 2-dimensional vector space
This question might be very silly. I was working on two examples from Friedberg-Insel-Spence's Linear Algebra. In Example $6$, in $\mathbb{R}^2$ (not the 2-dimensional real vector space; consider it just as the set $\mathbb{R} \times \mathbb{R}$), scalar multiplication was defined as usual, but vector addition was defined as follows:
$(a_1, b_1) +(a_2, b_2)= (a_1+b_2, a_1- b_2)$ for any $(a_1, b_1), (a_2, b_2) \in \mathbb{R}^2.$ The set $\mathbb{R}^2$ is closed under this addition and scalar multiplication, but it's not a vector space over $\mathbb{R}$, because $\mathbb{R}^2$ is not an abelian group under this addition; for instance, the operation is neither commutative nor associative. Moreover, there is an issue with distributivity. Is there any complete list of ways (addition + scalar multiplication) to make the set $\mathbb{R}^2$ a 2-dimensional vector space?
AI: I think that you are hoping for too much.
Let $\pi:\mathbb{R}^2\to\mathbb{R}^2$ be any $1-1$ and onto map. (Any map, not necessarily linear or even additive.)
Now define on $\mathbb{R}^2$ a new addition and scalar multiplication as follows:
$$
a\oplus b:=\pi^{-1}(\pi(a)+\pi(b)),
\
\lambda\odot a:=\pi^{-1}(\lambda\cdot\pi(a)).
$$
Then $\mathbb{R}^2$ is an $\mathbb{R}$ vector space of dimension 2 with respect to these operations. You can check the axioms, but as we have just re-labelled all the vectors it is probably "clear" that this is the case.
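Checking one axiom as an illustration (my addition): commutativity of $\oplus$ is inherited directly, since $$a\oplus b=\pi^{-1}\big(\pi(a)+\pi(b)\big)=\pi^{-1}\big(\pi(b)+\pi(a)\big)=b\oplus a,$$ and the remaining axioms transfer in the same way.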
|
H: Lower limit and subsequence
Let $x_n$ be a convergent sequence $x_n \to x$ in a topological space $X$ and $F:X\longrightarrow \mathbb R$ any function (not necessarily continuous). If there exist a subsequence $x_{n_k}$ such that
$$F(x) \leq \liminf_k F(x_{n_k})$$
then can we conclude that
$$F(x) \leq \liminf_n F(x_n) \,\,\, ?$$
AI: Let $X = \mathbb{R}$. Let $x_n$ be a sequence converging to $0$ such that $x_n$ is rational if $n$ is odd, and irrational if $n$ is even. Let $F = \mathbb{1}_{\Bbb{Q}}$ be the rational indicator function. Finally, let $(x_{n_k})$ be the subsequence of all odd indices, so $x_{n_k} \in \Bbb{Q}$ for all $k$. Then:
$$
\liminf_k F(x_{n_k}) = \liminf_k 1 = 1 \geq F(x)
$$
But:
$$
\liminf_n F(x_n) = 0 \not\geq F(x)
$$
as $F(0) = 1$.
|
H: Proof verification for $a > b \implies -a < -b$
I am aware that this question asks for the verification of a proof of (almost) the same problem but my proof is different and in my opinion, a bit simpler and more intuitive. Here's how it goes :
Let us assume that $a > b$.
We can write this inequality in the form of an equation as follows :
$$a = b + x \text{, where } x > 0$$
On multiplying the LHS and RHS by $-1$, we obtain :
$$-a = -(b+x) = -b-x \implies -b = -a+x$$
We have already mentioned that $x > 0$. From this, we can say that $-b$ is obtained when we add a positive number ($x$) to $-a$. Hence $-b > -a$, that is, $-a < -b$.
Thanks!
AI: If you are allowed to use $a>b\iff a+c>b+c$, then the theorem can be reduced to
$$a>0\iff -a<0,$$ which seems easier.
Update:
Using the above lemma, you could as well write the whole proof as
"WLOG $b=0$ (translate as required) and $a>0\iff -a<0$".
|
H: Max and Min value of a function on a circle
Find the maximum and minimum values of the function $f(x,y) = 5x^2 + 2xy + 5y^2$ on the circle $x^2 + y^2 = 1$.
After substituting the equation of the circle into that of the function and then equating $f'(x) = 0$, I get the values of $y$ to be $\pm\frac{1}{\sqrt{2}}$. Plugging these into the function, the resulting values are $\max(f) = 6$ and $\min(f) = 4$. However, the answer given is $\max(f) = \min(f) = 5$. I would like to know where I went wrong.
AI: The given answer is wrong. For example $f(x,y) \leq 5x^{2}+(x^{2}+y^{2})+5y^{2} \leq 6$ and the value $6$ is attained when $x=y=\frac 1 {\sqrt 2}$. Hence the correct maximum value is $6$. Note also that minimum and maximum cannot be the same for a non-constant function.
|
H: Consider the system $\dot{x}=4x^{2}-16$ find an analytical solution.
I am working through the textbook Nonlinear Dynamics and Chaos by Strogatz. In Chapter 2, Question 2.2.1, I am looking for an analytical solution. I have the question's answer but would like to ask how a certain step was performed.
Question
Consider the system $\dot{x}=4x^{2}-16$. Find an analytical solution to the problem.
Answer
\begin{equation}
\dot{x}=4x^{2}-16
\end{equation}
\begin{equation}
\int \frac{1}{x^{2}-4} dx = \int 4 dt \\
\frac{1}{4} \ln(\frac{x-2}{x+2}) = 4t + C_{1} \\
x = 2 \frac{1 + C_{2}e^{16t}}{1 - C_{2}e^{16t}}
\end{equation}
\begin{equation}
C_{2} = \frac{x(0)-2}{x(0)+2}
\end{equation}
where $C_{1}$ and $C_{2}$ are constants.
Summary
In the first step to get to
$\int \frac{1}{x^{2}-4} dx = \int 4 dt $ how does this happen? There is an intermediary step/result that is not clear. Any help would be really appreciated.
Edit 1:
In other words, is this step okay?
\begin{equation}
\frac{\dot{x}}{x^{2}-4} = 4\\
\int \frac{1}{x^{2}-4} dx = \int 4 dt
\end{equation}
Edit 2:
Can I then denote my solution as:
$x(t) = \frac{2(e^{4c_{1}+16t})}{(e^{4c_{1}-16t})}$
AI: The equation ${\dot{x}\over x^2-4}=4$ is actually equivalent to $${{dx\over dt}\over x^2-4}=4$$which, by multiplying both sides by $dt$, leads to $${dx\over x^2-4}=4dt$$or$$\int{dx\over x^2-4}=\int 4dt$$
|
H: A simple question about conditional expectation
If I have
$$E\left(\min\left(X,Y\right)\right),$$
why is it equal to
$$E\left(\min\left(X,Y\right)\right)=E\left(\min\left(X,Y\right)\mid\min\left(X,Y\right)=X\right)P\left(X\le Y\right)+E\left(\min\left(X,Y\right)\mid\min\left(X,Y\right)=Y\right)P\left(X>Y\right)$$
AI: Let $A$ be the event $(X \leq Y)$ and $B=(X>Y)$. Then $E(X\wedge Y)=E((X\wedge Y) I_A)+E((X\wedge Y) I_B)=E(X\wedge Y : A) P(A)+E(X\wedge Y : B) P(B)$.
Note: $E(X:A)$ is defined as $\frac 1 {P(A)} EXI_A$.
|
H: $0 \cdot \infty$ object. Does it make sense?
There may be a mistake in the question, but suppose that someone asks you to calculate something like this: $$0\cdot \lim_{x\to0}(\log(x))$$ with no further information. The assumptions one makes are just that $\log$ is the natural logarithm, $x\in\mathbb{R}$, and generally whatever a first-year calculus course would assume. Nothing too complicated.
The question is the following: How do you somewhat rigorously attack this thing?
My thoughts:
If you see this as a whole is an undefined quantity of the type: $0 \cdot \infty$.
If you see it as parts you have a number $0$ and a limit that diverges. Since the limit does not exist (of course we implicitly assume that $\lim_{x\to0^{+}}$) we could not use the multiplication rule.
Hypothetically, if we could use the multiplication rule, we run into the problem of which function's limit we should represent $0$ with: $x$? $x^2$? $x^{1/10}$?
What do I say then about this object? Does it even make sense to ask something like that?
AI: An algebraic expression needs to have all its terms defined to have a meaning.
As
$$\lim_{x\to0}\log x$$ is undefined, the whole expression $0\cdot\lim_{x\to0}\log x$ is undefined.
And we also have $$(\lim_{x\to0} x)(\lim_{x\to0}\log x)$$
undefined, while
$$\lim_{x\to0}(x\log x)=0.$$
Also note that $0\cdot\infty$ is not an expression but an expression pattern which describes a limit of the form
$$\lim_{x\to a}(f(x)g(x))$$ where
$$\lim_{x\to a}f(x)=0\text{ and }\lim_{x\to a}g(x)=\infty,$$ as in my third example only.
|
H: Lie group has nondegenerate two-form
Show that
Every even-dimensional Lie group has a nondegenerate two-form.
How does one answer this question?
I can see it's true for $\mathbb{R}^{2n}$ and I was thinking about pulling back the forms on $\mathbb{R}^{2n}$ on coordinate patches. But I don't know how to show they will agree on intersections (if they will at all).
AI: Since Lie groups act smoothly and transitively on themselves, there's a more elegant approach (which works for constructing all kinds of invariant tensors).
Let $G$, be a Lie group with identity $e$, and $T_eG$ be its Lie algebra. For each $g\in G$, there is a left translation operator $L_g:G\to G$ defined by $L_g(h)=gh$. These are all diffeomorphisms, which we can use to take a tensor defined at a point and transport it to every other point.
Let $F$ be a tensor on $T_eG$ (e.g. a nondegenerate two-form, a inner product, volume form, etc.). We can define an extension $\widetilde{F}$ of $F$ to a tensor field on all of $G$, using the pushforward of left translations:
$$
\widetilde{F}|_g=(L_g)_*F
$$
(It might be worth proving that this field is smooth.) This tensor field is automatically invariant under left translation, and retains any pointwise properties of $F$ (e.g. (anti)symmetry, nondegeneracy, nonvanishing, etc.). It is in fact the unique left-invariant tensor field with $\widetilde{F}|_e=F$. This reduces your problem to finding a nondegenerate $2$-form at the identity.
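And at the identity this is easy in even dimension (an added final step): if $e^1,\dots,e^{2n}$ is any basis of $T_e^*G$, then $$\omega = e^1\wedge e^2 + e^3\wedge e^4 + \cdots + e^{2n-1}\wedge e^{2n}$$ is a nondegenerate two-form on $T_eG$.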
|
H: What went wrong in the evaluation of $\int \frac{1}{3-2\sin(x)}dx$?
I tried to evaluate the following integral:
$$\int \frac{1}{3-2\sin(x)}dx\,\, $$ with universal substitution, using the fact that
$t(x):=\tan\left(\frac{x}{2}\right)$
$\sin(x)=\frac{2t(x)}{1+t(x)^2}$ and $t'(x)=\frac{1+t(x)^2}{2}$
$\displaystyle\int \frac{1}{3-2\sin(x)}dx=$$\displaystyle\int \frac{1}{3-2(\frac{2t(x)}{1+t(x)^2})}dx=\displaystyle\int \frac{1}{3-\frac{4t(x)}{1+t(x)^2}}dx=\displaystyle\int \frac{1}{\frac{3+3t(x)^2-4t(x)}{1+t(x)^2}}dx=\displaystyle\int \frac{1+t(x)^2}{3+3t(x)^2-4t(x)}dx$
Now I substituted: $u:=t(x) \Longrightarrow dx= \frac{du}{t'(x)}=\frac{2du}{1+t(x)^2}$
So:
$\displaystyle\int \frac{1+u^2}{3+3u^2-4u}\frac{2}{1+u^2}du=\displaystyle\int \frac{2}{3+3u^2-4u}du=2\displaystyle\int \frac{1}{3u^2-4u+3}du=2\displaystyle\int \frac{1}{(\sqrt{3}u-\frac{2}{\sqrt{3}})^2+\frac{5}{3}}du$
Here I used the formula:
$\displaystyle\int \frac{1}{t^2+m^2}dt=\left[\frac{1}{m}\arctan\left(\frac{t}{m}\right)\right]$
So:
$2\displaystyle\int \frac{1}{(\sqrt{3}u+\frac{2}{\sqrt{3}})^2+\frac{5}{3}}du=2\left[\frac{\sqrt{3}}{\sqrt{5}}\arctan\left(\frac{(\sqrt{3}u-\frac{2}{\sqrt{3}})\sqrt{3}}{\sqrt{5}}\right)\right]=2\left[\frac{\sqrt{3}}{\sqrt{5}}\arctan\left(\frac{3u-2}{\sqrt{5}}\right)\right]$
Resubstituting:
$2\left[\frac{\sqrt{3}}{\sqrt{5}}\arctan\left(\frac{3\tan\left(\frac{x}{2}\right)-2}{\sqrt{5}}\right)\right]$
Here is the issue:
Wolfram tells:
$\displaystyle\int \frac{1}{3-2\sin(x)}dx=2\left[\frac{1}{\sqrt{5}}\arctan\left(\frac{3\tan\left(\frac{x}{2}\right)-2}{\sqrt{5}}\right)\right]$
I cannot figure out where I went wrong... it's only one factor off.
could someone maybe show me what went wrong? Thank you :)
AI: You made a mistake when applying
$$\displaystyle\int \frac{1}{t^2+m^2}dt=\left[\frac{1}{m}\arctan\left(\frac{t}{m}\right)\right].$$
Define $t := \sqrt{3}u - \frac{2}{\sqrt{3}}$ as you did implicitly, but make sure to rework $du$ into $c \cdot dt$.
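Carrying the corrected substitution through (a sketch): with $t = \sqrt{3}u-\frac{2}{\sqrt{3}}$ we have $dt=\sqrt{3}\,du$, so
$$
2\int \frac{du}{\left(\sqrt{3}u-\frac{2}{\sqrt{3}}\right)^2+\frac{5}{3}}
=\frac{2}{\sqrt{3}}\int\frac{dt}{t^2+\frac{5}{3}}
=\frac{2}{\sqrt{3}}\cdot\frac{\sqrt{3}}{\sqrt{5}}\arctan\left(\frac{\sqrt{3}\,t}{\sqrt{5}}\right)
=\frac{2}{\sqrt{5}}\arctan\left(\frac{3\tan\left(\frac{x}{2}\right)-2}{\sqrt{5}}\right),
$$
which supplies the missing factor of $\frac{1}{\sqrt{3}}$ and matches Wolfram's answer.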
|
H: Algorithm to find orthonormal eigenvectors of a symmetric matrix
I have a symmetric matrix $S$ and I'm trying to implement the following algorithm to find the first $k$ orthonormal eigenvectors.
Note: The picture is from http://www.wisdom.weizmann.ac.il/~harel/papers/highdimensionalGD.pdf
I use a very simple $2\times 2$ matrix for tests:
$$
\begin{matrix}
1 & 2 \\
2 & 3 \\
\end{matrix}
$$
The code finds the first eigenvector without problems, but it gets stuck on the second eigenvector.
The Gram–Schmidt process makes the second vector orthogonal to the previous eigenvector, but then the matrix multiplication "flips" the candidate around, and they fight in a never-ending loop.
Here the gray line is the first eigenvector, and the thick red one is the next candidate $\hat{u_{i}}$.
I spent one night debugging it and can't spot anything obviously wrong. It must be something trivial, but I don't understand what. Can you please help me? What am I missing?
https://jsbin.com/zufejir/5/edit?js,output - the code. Each click advances algorithm to the next state.
AI: Eigenvalues can be non-positive real numbers or even complex numbers.
eig([1,2;2,3]) in Octave gives $[-0.23..., 4.23...]$
so the smallest one is indeed negative.
So what happens is that $Su$ will have the opposite direction compared to $u$, and then you normalize it, so the vector will be flipped and of the same length. You can circumvent this problem by changing the stop condition to, for example, $(u_i^T \hat u_i)^2<(1-\epsilon)^2$.
A better condition is probably to require $$\text{var}(\hat u_i/u_i) < \epsilon$$ element-wise, as each pointwise division should estimate $\lambda_i$, the eigenvalue, which could be a negative or even complex number.
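To illustrate, here is a minimal Python sketch of the iteration with a sign-insensitive stop test (the function and variable names are mine, not from the paper):

```python
import numpy as np

def top_k_eigenvectors(S, k, eps=1e-12, max_iter=10000):
    """Power iteration with Gram-Schmidt deflation; the convergence test
    uses the squared dot product, so a flip of direction caused by a
    negative eigenvalue still counts as convergence."""
    n = S.shape[0]
    vectors = []
    for _ in range(k):
        u = np.random.rand(n)
        u /= np.linalg.norm(u)
        for _ in range(max_iter):
            # orthogonalize against the eigenvectors already found
            for v in vectors:
                u -= (u @ v) * v
            u_hat = S @ u
            u_hat /= np.linalg.norm(u_hat)
            converged = (u @ u_hat) ** 2 > (1 - eps) ** 2  # sign-insensitive
            u = u_hat
            if converged:
                break
        vectors.append(u)
    return np.column_stack(vectors)

# on the test matrix from the question this should recover both eigenvectors
print(top_k_eigenvectors(np.array([[1., 2.], [2., 3.]]), 2))
```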
|
H: Find residues at the singularities
I have a function $f(z) = \frac{\cos(z)}{z^6}$. I have to find the singularities and the corresponding residues. I think there is a single pole at $z=0$, which has order 6.
For the residue, I did this:
$\text{Res}(f,0) = \frac{1}{5!} \lim_{z\to 0}\frac{d^5}{dz^5}\left(z^6 \cdot \frac{\cos(z)}{z^6}\right) = \frac{1}{5!}\lim_{z \to 0} (-\sin(z)) = 0$
Am I right, or am I missing something?
AI: That is correct, but it is much simpler to say that, since $f$ is an even function, the residue at $0$ is $0$.
|
H: Contour integration $\frac{e^{iz}}{2\sqrt{z}}$
When $z=u+iv$,
I would like to compute the integral of $\frac{e^{iz}}{2\sqrt{z}}$ along the curve above.
The imaginary axis
$$\frac{1}{2}\int_{R}^{0} \frac{e^{-v}}{\sqrt{iv}}d(iv)$$
$R$ goes to $\infty$.
Here, because of the $\sqrt{iv}$, I am confused about how to use the change of variable $v=y^{2}$.
Can anyone help me with this?
Also, the integral over the circular arc goes to zero as $R\rightarrow \infty$. How can I show this?
AI: The integral converges, so you can already take the limit $R\rightarrow \infty$. As you mention, the change of variables $y=\sqrt{v}$ yields a Gaussian integral you can simply evaluate.
Generally for the function $\frac{e^{iz}}{z^s}$ with $s>0$ you can proceed as follows: Parametrize $z=Re^{it}$ with $t\in[0,\pi/2]$. Then
$$\left| \int_C \frac{e^{iz}}{z^s} \, {\rm d}z \right| \leq R^{1-s} \int_0^{\pi/2} e^{-R\sin(t)} \, {\rm d}t \stackrel{\sin(t)\geq 2t/\pi}{\leq} R^{1-s} \int_0^{\pi/2} e^{-2Rt/\pi} \, {\rm d}t \, .$$
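For completeness, the remaining integral can be evaluated exactly, which finishes the estimate:
$$R^{1-s} \int_0^{\pi/2} e^{-2Rt/\pi} \, {\rm d}t = R^{1-s}\cdot\frac{\pi}{2R}\left(1-e^{-R}\right)=\frac{\pi}{2}\,R^{-s}\left(1-e^{-R}\right)\xrightarrow[R\to\infty]{}0$$
for every $s>0$; your integrand corresponds to $s=\tfrac12$.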
|
H: Binomial coefficient sum: $\sum_{k=0}^{i-1}{i-1 \choose k}{j-1\choose k} = {i+j-2\choose j-1}$
I'm having problems proving this identity; I hope someone here can help me:
$$\sum_{k=0}^{i-1}{i-1 \choose k}{j-1\choose k} = {i+j-2\choose j-1}$$
where $1 \leq i\leq j $.
AI: Note that $\binom{j-1}{k} = \binom{j-1}{j-1-k}.$
Then both sides are different ways of counting how to take $j-1$ elements from $\{1, 2, \ldots , i+j-2\}$.
Edit to clarify:
Imagine there are $i-1$ boys and $j-1$ girls, and you want to form a committee of $j-1$ people. How many ways to do this are there? Obviously, one way of expressing this number is the RHS. On the other hand, you can pick $k$ boys first, then pick $j-1-k$ girls. This gives one term on the LHS. We sum this for all possible values of $k$ to give the LHS. (Note: the expression works if there are at least as many girls as boys. Otherwise, the range of $k$ will overshoot.)
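For an algebraic alternative, the same substitution turns the left side into a coefficient extraction (Vandermonde's identity):
$$\sum_{k=0}^{i-1}\binom{i-1}{k}\binom{j-1}{j-1-k}=[x^{j-1}]\,(1+x)^{i-1}(1+x)^{j-1}=[x^{j-1}]\,(1+x)^{i+j-2}=\binom{i+j-2}{j-1}.$$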
|
H: Finding a sequence $x_n$ with $\lim \sup(x_n)= 2$ and $\lim \inf(x_n)=-5$
Find a sequence $x_n$ with $\lim \sup(x_n)= 2$ and $\lim \inf(x_n)=-5$.
I am stuck on this one. I am trying to find a sequence $x_n$ with the given information.
AI: What about $$X_n=\begin{cases}-5&n\text{ even}\\2&n\text{ odd} \end{cases}\ \ ?$$
|
H: $\sum_{i=1}^{\infty} v_i $ converges?
Suppose I have a sequence of vectors $(v_i)_{i\in \mathbb{N}}$ in a normed vector space $V$. Can I say that if $\sum_{i=1}^{\infty} \| v_i \| < \infty$ then the series $\sum_{i=1}^{\infty} v_i$ converges? Or do I need the vector space to be complete, or some other conditions?
AI: Consider the space of all sequences $(a_n)$ of real numbers such that $a_n=0$ for $n$ sufficiently large, with the norm defined by $\|(a_n)\|=\sum|a_n|$. Let $v_i$ be the sequence which has $\frac 1 {i^{2}}$ in the $i$-th position and $0$ elsewhere. Then $\sum\|v_i\| <\infty$ but $\sum v_i$ does not converge. Can you verify this?
But the implication holds in any Banach space since the partial sums form a Cauchy sequence.
|
H: Sorting socks after laundry.
Yesterday, while sorting socks after laundry, I came across the following problem. Assume we have a bin with $n$ pairs of objects. We draw an object from the bin. If there is a paired object on the table, we put both objects in another bin. Otherwise we lay the unpaired object on the table.
What is the probability that after $k$ draws the second bin contains $m$ pairs of objects?
I took the following approach. Assume we already have $m-1$ pairs in the second bin after $k-1$ draws. This means we have $k-1-2(m-1)=k-2m+1$ unpaired objects on the table. Similarly one can treat the case if there are already $m$ pairs in the second bin before the $k$-th draw.
The considerations boil down to the following recurrence relation for the probability in question:
$$
P_n(k,m)=P_n(k-1,m-1)\frac{k-2m+1}{2n-k+1}+P_n(k-1,m)\frac{2(n+m-k+1)}{2n-k+1};\quad
P_n(0,m)=\delta_{0m}.\tag1
$$
The calculations based on the expression (1) show reasonable values (here for $n=7$):
$$
\begin{array}{r|cccccccc}
&0&1&2&3&4&5&6&7\\
\hline
0& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2& \frac{12}{13} & \frac{1}{13} & 0 & 0 & 0 & 0 & 0 & 0 \\
3& \frac{10}{13} & \frac{3}{13} & 0 & 0 & 0 & 0 & 0 & 0 \\
4& \frac{80}{143} & \frac{60}{143} & \frac{3}{143} & 0 & 0 & 0 & 0 & 0 \\
5& \frac{48}{143} & \frac{80}{143} & \frac{15}{143} & 0 & 0 & 0 & 0 & 0 \\
6& \frac{64}{429} & \frac{80}{143} & \frac{40}{143} & \frac{5}{429} & 0 & 0 & 0 & 0 \\
7& \frac{16}{429} & \frac{56}{143} & \frac{70}{143} & \frac{35}{429} & 0 & 0 & 0 & 0 \\
8& 0 & \frac{64}{429} & \frac{80}{143} & \frac{40}{143} & \frac{5}{429} & 0 & 0 & 0 \\
9& 0 & 0 & \frac{48}{143} & \frac{80}{143} & \frac{15}{143} & 0 & 0 & 0 \\
10& 0 & 0 & 0 & \frac{80}{143} & \frac{60}{143} & \frac{3}{143} & 0 & 0 \\
11& 0 & 0 & 0 & 0 & \frac{10}{13} & \frac{3}{13} & 0 & 0 \\
12& 0 & 0 & 0 & 0 & 0 & \frac{12}{13} & \frac{1}{13} & 0 \\
13& 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
14& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
$$
Moreover the values look so nice and symmetric $$P_n(k,m)=P_n(2n-k,m+n-k)$$ that I would assume the existence of a closed form expression for the probability. However I could not find it. Is there possibly a way to avoid the recursion altogether?
AI: There are $\binom{2n}k$ ways to choose $k$ socks and $\binom nm$ ways to choose $m$ pairs. Now we have $n-m$ unchosen pairs remaining, and we have to draw an unmatched sock from $k-2m$ of them. These pairs can be chosen in $\binom{n-m}{k-2m}$ ways, and for each pair, we have a choice of two socks.
This gives $$\frac{\binom nm\binom{n-m}{k-2m}2^{k-2m}}{\binom {2n}k}$$ for the probability. I confirmed that for $n=7$ this gives the values shown in the table in the question.
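As a sanity check, here is a short Python script (the function names are mine) comparing the closed form against recurrence (1) using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def p_closed(n, k, m):
    """Closed form: C(n,m)*C(n-m,k-2m)*2^(k-2m) / C(2n,k)."""
    if m < 0 or k - 2 * m < 0 or k - 2 * m > n - m:
        return Fraction(0)
    return Fraction(comb(n, m) * comb(n - m, k - 2 * m) * 2 ** (k - 2 * m),
                    comb(2 * n, k))

def p_rec(n, k, m, memo={}):
    """Recurrence (1) from the question."""
    if k == 0:
        return Fraction(1) if m == 0 else Fraction(0)
    if m < 0:
        return Fraction(0)
    if (n, k, m) not in memo:
        memo[n, k, m] = (p_rec(n, k - 1, m - 1) * Fraction(k - 2 * m + 1, 2 * n - k + 1)
                         + p_rec(n, k - 1, m) * Fraction(2 * (n + m - k + 1), 2 * n - k + 1))
    return memo[n, k, m]

# reproduces the n = 7 table from the question
assert all(p_closed(7, k, m) == p_rec(7, k, m) for k in range(15) for m in range(8))
```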
|
H: Solving Boundary Value Problem using the Finite Difference Method - What Values to Substitute for $y'$?
I am given a boundary value problem of the form $y'' = f(y', y, x)$ and asked to solve the system subject to boundary conditions using the finite difference method.
I proceed to develop a system of equations with each equation representing the equation for a given node. If I use node $x_1$ as an example, I know that I have to:
Replace all instances of $y''$ with $\dfrac{y_0 - 2y_1+y_2}{h^2}$
Replace all instances of $y$ with $y_1$
My question is, what do I replace the instance(s) of $y'$ with?
AI: To remain second order in the space approximation, you would use the central difference quotient, replacing $y'$ at node $x_1$ with $\frac{y_2-y_0}{2h}$. You would then subtract a linear approximation of the right side from both sides, so that you get an iterative fixed-point scheme $A\vec y^{\rm new}=G(\vec y^{\rm old})$, where $\vec y$ is the vector of sample points. Ideally this would be Newton-like so that you get super-linear convergence.
If you want to generalize this scheme you get to a multiple shooting approach with a collocation method, which is for first order systems, so set $v=y'$, $v'=f(v,y,x)$. Staying in second order this would be the implicit trapezoidal method,
\begin{align}
\frac{y_{i+1}-y_i}{\Delta x}&=\frac{v_{i+1}+v_i}2\\
\frac{v_{i+1}-v_i}{\Delta x}&=\frac{f(v_{i+1},y_{i+1},x_{i+1})+f(v_i,y_i,x_i)}2\\
\end{align}
You could also try to base your system on Numerov's method (wiki)
|
H: What does $x\in (0,1)$ mean?
What does $ x \in (0,1)$ mean? Does it mean $x = 0$ or $x=1$ or does it mean $0<x<1$?
I know $x \in \{0,1\}$ but in this case it has parentheses instead of curly brackets which confuses me. Sorry for the very easy question I am just a bit confused what the different kinds of brackets mean.
AI: It (typically) means $0<x<1$.
The parentheses $(a,b)$ refer to the "open interval", whereas the hard brackets $[a,b]$ refer to the closed interval. Sometimes the open interval is also denoted by reversed hard brackets $]a,b[$. See e.g., from Wikipedia:
An open interval does not include its endpoints, and is indicated with parentheses. For example, $(0,1)$ means greater than $0$ and less than $1$. This means $(0,1) = \{x \mid 0 < x < 1\}$. A closed interval is an interval which includes all its limit points, and is denoted with square brackets. For example, $[0,1]$ means greater than or equal to $0$ and less than or equal to $1$. A half-open interval includes only one of its endpoints, and is denoted by mixing the notations for open and closed intervals. $(0,1]$ means greater than $0$ and less than or equal to $1$, while $[0,1)$ means greater than or equal to $0$ and less than $1$.
|
H: Two integrating factors: same solution to differential equation
If we have two valid integrating factors for a differential equation, must we get the same general solution using each of them, even though we get two new equations by multiplying the original by each of the integrating factors?
So if the original equation is not exact, and we verify that the two new equations we get by multiplying it by each integrating factor are exact, then we will get the same solution in both cases. Is that right? I say "new equations" because I read it in the bibliography: first you call the functions $P(x,y)$ and $Q(x,y)$ when the equation is not exact, and after multiplying by an integrating factor you call the functions of the new exact equation $M(x,y)$ and $N(x,y)$. The new exact equations will have the same solution, which will also be a solution of the original equation.
AI: The two integrating factors differ by a factor that is a (possibly trivial) integral of the differential equation. That is, if $\mu(x,y)$ is an integrating factor giving the integral or implicit solution $F(x,y)=C$, then any product $g(F(x,y))\mu(x,y)$ with a differentiable (and bijective) function $g$ will also be an integrating factor, leading to the integral $G(F(x,y))$, where $G'=g$.
|
H: What is the coefficient of $a^6b^6$ in $(a+b)^{12}$?
I can't use the binomial theorem; how should I solve it?
AI: From the $21$ factors of $(x+y+z)^{21}$, you have to choose $7$ that you pick the $x$ from, and from the remaining $14$ factors, you have to choose $9$ to pick an $y$ from. From the remaining $5$ factors you will pick the $z$. The total number of summands is:
$$
\binom{21}{7}\cdot\binom{14}{9}\cdot\binom{5}{5}=\frac{21!}{7!\cdot9!\cdot5!}=\binom{21}{7, 9, 5}
$$
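Applying the same counting to the title question: from the $12$ factors of $(a+b)^{12}$, choose the $6$ that contribute an $a$; the remaining $6$ factors contribute a $b$. Hence the coefficient of $a^6b^6$ is
$$\binom{12}{6}=924.$$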
|
H: How to find the volume by triple integral?
I'm new to triple integrals, and this triple-integral volume problem seems impossible to solve; I have no idea where to start or how to solve it. Could someone have a look at it please?
Let $G$ be the wedge in the first octant that is cut from the cylindrical solid $y^2 + z^2 < 1$ by the planes $y = x$ and $x = 0$. Evaluate $\iiint_G dV$
AI: In the first octant the cylinder gives $0\le z\le 1$ and $0\le y\le\sqrt{1-z^2}$, while the planes $x=0$ and $y=x$ give $0\le x\le y$. Hence
$$\int_0^1\int_0^{\sqrt{1-z^2}}\int_0^y dx\,dy\,dz=\int_0^1\int_0^{\sqrt{1-z^2}}y\,dy\,dz=\int_0^1\frac{1-z^2}{2}\,dz=\left[\frac z2\right]_0^1-\left[\frac{z^3}6\right]_0^1=\frac12-\frac16=\frac13$$
|
H: Union and Difference in Set Theory
Let x ∈ A. {x,x,x,x,x} ∪ {x,x} = {x,x,x,x,x,x,x}
This statement is false right? Because the union of two sets is a set of the first and second set's elements with no elements repeating. I guess I am confused because x seems to be an arbitrary value.
Let x ∈ A. {x,x} / {x} = {x}
Would this also be false? Because If the set difference is removing every element of the second set from the first, x is seen in both sets so the result would have to be an empty set. Again, I am confused because x seems to be arbitrary.
AI: The first statement is, in a sense, true: $\{x, x, x, x, x\}= \{x\}$ and $\{x,x\}=\{x\}$, so that $\{x, x, x, x, x\}\cup \{x, x\}= \{x\}\cup\{x\}= \{x\}$, and that is, indeed, equal to $\{x, x, x, x, x, x, x\}$! That's just a very silly way of writing "$\{x\}\cup\{x\}= \{x\}$".
But you are correct that $\{x, x\}\setminus\{x\}= \{x\}$ is false. $\{x, x\}\setminus\{x\}$ is the same as $\{x\}\setminus\{x\}$, which is the empty set, not $\{x\}$.
|
H: If $a:b:c=d:e:f$, how to show that $\frac{(a+b+c)^2}{(d+e+f)^2}=\frac{(ab+bc+ca)}{(de+ef+df)}$?
If $a:b:c=d:e:f$, how to show that $\frac{(a+b+c)^2}{(d+e+f)^2}=\frac{(ab+bc+ca)}{(de+ef+df)}$?
AI: Let $\dfrac ad=\dfrac be=\dfrac cf=k$(say)
$$\dfrac{ab+bc+ca}{de+ef+fd}=\dfrac{k^2(de+ef+df)}{de+ef+df}=k^2\text{ if } de+ef+df\ne0$$
$$k=\dfrac ad=\dfrac be=\dfrac cf=\dfrac{a+b+c}{d+e+f}\text{ if } d+e+f\ne0,$$ and squaring this gives $\dfrac{(a+b+c)^2}{(d+e+f)^2}=k^2=\dfrac{ab+bc+ca}{de+ef+df}$.
|
H: Primitive Wreath Product Example
I'm trying to make sense of the primitive wreath product action by looking at an example.
I took $S_3$ acting on a triangle $\Delta=\{1,2,3\}$, $C_2$ acting on $\Gamma=\{1,2\}$ and constructed the wreath product $S_3 \wr C_2$. I looked at the action of $S_3 \wr C_2$ on $\Delta\times \Gamma$ (two copies of the triangle) and found generators $\langle(1\ 2\ 3),(2\ 3),(4\ 5\ 6),(5\ 6),(1\ 4)(2\ 5)(3\ 6) \rangle$ in $S_6$ for this action on the set $\Delta\times \Gamma$ of cardinality 6.
The problem I found is that when I considered the action of $S_3 \wr C_2$ on $\Delta^{\Gamma}$ (three copies of the triangle) I found the generators $$\langle (1\ 2\ 3),(2\ 3),(4\ 5\ 6),(5\ 6),(7\ 8\ 9),(8\ 9),(1\ 4\ 7)(2\ 5\ 8)(3\ 6\ 9),(4\ 7)(5\ 8)(6\ 9),(2\ 4)(3\ 7)(6\ 8) \rangle$$
in $S_9$ of the action on the set $\Delta^{\Gamma}$, (this is simply the action of $S_3$ on each coordinate and swapping coordinates with $C_2$).
These generators generate the whole group $S_9$, whereas generators in $S_6$ generate a group of order 72, which is the right order for the Wreath product.
What got me even more confused is that $\langle (1\ 2\ 3),(4\ 5\ 6),(7\ 8\ 9),(4\ 7)(5\ 8)(6\ 9)\rangle = S_9$, so if I used $C_3$ instead of $S_3$ I apparently get a primitive action, and this should be impossible.
I am clearly not understanding this action correctly, but I don't know where I am mistaken exactly.
AI: It should be
$$\langle (1\ 2\ 3)(4\ 5\ 6)(7\ 8\ 9),(2\ 3)(5\ 6)(8\ 9),(1\ 4\ 7)(2\ 5\ 8)(3\ 6\ 9),(4\ 7)(5\ 8)(6\ 9),(2\ 4)(3\ 7)(6\ 8) \rangle.$$
You got the final three generators correct, but you must have got confused and split the first two generators up into separate cycles.
|
H: If I take limit, the strict inequality may become equal.
Suppose $$x \lt 1+\frac{1}{n} \;\; \forall n\in \Bbb N$$
If I let $n\to \infty$, I get $$x \le 1$$
But I am wondering why the $\lt$ should turn into $\le$.
This is just an example from my analysis class, and this one is simple, since $x$ can be $1$. But I want to know why, if I take a limit of a strict inequality, it becomes weaker. It looks simple but I have no idea how to prove it.
Thanks to anyone who can help me.
AI: Note that $x<a_n\,\forall n\in\Bbb N$ is equivalent to saying that $\{a_n-x\}_{n\in\Bbb N}$ is a strictly positive sequence. Is it true that the limit of a strictly positive sequence is always strictly positive?
|
H: Limit of a Function Definition (Why restrict domain?)
The definition I use for a limit of a function is the following:
$\lim_{x\rightarrow c}f(x):=L$ if $\forall \epsilon>0, \exists \delta>0, \forall x\in \mathbb{R}^{\neq c}$, [ $|x-c|<\delta\implies|f(x)-L|<\epsilon$]
However, the video here gets rid of the requirement that $x\neq c$ and just writes $\forall x\in \mathbb{R}$. It even proves that the limit is well-defined, in the sense that the limit value under this definition is unique.
$\textbf{Question:}$ What is the reason that we require $x\neq c$ in the first definition and not the second? It seems unnecessary and I feel like I am missing something. There has got to be a reason why $x\neq c$ here.
AI: Deletion is necessary for the following reasons:
If you don't delete the point of consideration, you cannot define the limit of a function at a point outside its domain. For example, consider
\begin{align*}
f\colon \mathbb R\setminus\{1\}&\to\{1\}\\
x&\mapsto 1\end{align*}
We want to say that $\lim\limits_{x\to1}f(x)=1$, but we can't say that using the undeleted definition.
Also, if you use the undeleted definition, the limit of a function at a point has to be the value of the function at that point, if it exists. Suppose the limit exists, then
$$\forall \varepsilon>0\exists\delta>0:|x-c|<\delta\implies |f(x)-L|<\varepsilon$$
Taking $x=c$, the first condition is satisfied, and therefore by the implication, we get
$$|f(c)-L|<\varepsilon\quad\forall\varepsilon>0$$
which implies $L=f(c)$.
|
H: Show that the sum function $f(x) = \sum_{n=1}^\infty \frac{1}{ \sqrt{n} } (\exp(-x^2/n)-1)$ is continuous
Consider for $x \in \mathbb{R}$ the sum function defined as
$$
f(x) = \sum_{n=1}^\infty \frac{1}{ \sqrt{n} } \left(\exp(-x^2/n)-1\right)
$$
I have shown that the series converges pointwise by using that
$$
|\exp(-x^2/n)-1| \leq |-x^2/n| = x^2/n
$$
from an earlier question. The problem is now that, to show that $f$ is continuous on $\mathbb{R}$, by a theorem in my book I would have to show that $f$ has a convergent majorant series satisfying
$$
|f_n(x)| \leq M_n
$$ for all $x \in \mathbb{R}$, but I know that this is not possible unless $x$ is restricted to a compact set. I thought about letting
$x \in [-K,K]$, but I don't think I am allowed to, as I have to show that $f$ is continuous for all $x \in \mathbb{R}$, which $x \in [-K,K]$ doesn't cover, I suppose.
What can I do?
AI: Showing that $f$ is continuous on $[-K, K]$ for all $K>0$ is enough, because for a given $x \in \mathbb{R}$, you can let $K=2|x|$, and have that $f$ is continuous on $[-2|x|, 2|x|]$, hence it's continuous at $x \in [-2|x|, 2|x|]$.
|
H: Complex polarization identity proof getting stuck towards the end w.r.t. the imaginary part
I'm working on a homework problem regarding the proof of the polarization identity for complex scalars. I've taken a look at another question on this community (Polarization Identity for Complex Scalars) and have tried working it out on my own, but am getting stuck towards the end, particularly when dealing with the imaginary part. I'll elaborate on my approach.
Starting with the definition:
$$
\langle x, y \rangle = \frac{1}{4} \left( \Vert x + y \Vert^2 - \Vert x - y \Vert^2 - i\Vert x - iy \Vert^2 + i \Vert x + iy \Vert^2 \right)
$$
Focusing only on the imaginary part (i.e. $i \Vert x + iy \Vert^2 - i \Vert x - iy \Vert^2$):
$$
\begin{align}
i \Vert x + iy \Vert^2 - i \Vert x - iy \Vert^2 & = i \left( \Vert x + iy \Vert^2 - \Vert x - iy \Vert^2 \right) \\
& = i \left( \langle x + iy, x + iy \rangle - \langle x -iy, x- iy \rangle \right) \\
& = i \left[ (\langle x, x \rangle + \langle x, iy \rangle + \langle iy, x \rangle + \langle iy, iy \rangle ) - ( \langle x, x \rangle + \langle x, -iy \rangle + \langle -iy, x \rangle + \langle -iy, -iy \rangle ) \right] \\
& = 2i \left( \langle x, iy \rangle + \langle iy, x \rangle \right)
\end{align}
$$
Using $\langle x, iy \rangle = -i \langle x, y \rangle$ and $\langle iy, x \rangle = \overline{\langle x, iy \rangle} = \overline{-i\langle x, y \rangle} = i\overline{\langle x, y \rangle}$,
$$
\begin{align}
2i\left( \langle x, iy \rangle + \langle iy, x \rangle \right) & = 2i \left( -i\langle x, y \rangle + i \overline{\langle x, y \rangle} \right) \\
& = 2 \langle x, y \rangle - 2\overline{\langle x, y \rangle}
\end{align}
$$
I'm not sure how to proceed from here. I believe that I should end up with something like $-2(-2 \mathfrak{I} \langle x, y \rangle )$ but how does the last line become expressed as this? Thanks.
AI: Hint: For $z \in \mathbb{C},$ $$z-\overline{z}=2i\,\Im{z},$$ where $\Im z$ denotes the imaginary part of $z.$
Btw, once you have shown that $$ \Vert x + y \Vert^2 - \Vert x - y \Vert^2=4\Re \langle x,y\rangle,$$ then replacing $y$ by $iy:$
$$ \Vert x + iy \Vert^2 - \Vert x - iy \Vert^2=4\Re\langle x,iy\rangle=4\Im \langle x,y\rangle.$$
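Applying the hint to your last line, with $z=\langle x, y \rangle$:
$$2 \langle x, y \rangle - 2\overline{\langle x, y \rangle}=2\left(z-\overline{z}\right)=4i\,\Im \langle x, y \rangle,$$
so together with the real part the whole bracket is $4\Re\langle x,y\rangle+4i\,\Im\langle x,y\rangle$, and dividing by $4$ recovers $\langle x,y\rangle$.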
|
H: Verify the identity $\frac{\tan(a+b)}{\tan(a-b)}$ = $\frac{\sin(a)\cos(a)+\sin(b)\cos(b)}{\sin(a)\cos(a)-\sin(b)\cos(b)}$
I've been asked to verify the following identity but I don't know how to do it.
$$\frac{\tan(a+b)}{\tan(a-b)} = \frac{\sin(a)\cos(a)+\sin(b)\cos(b)}{\sin(a)\cos(a)-\sin(b)\cos(b)}$$
When I try I get
$$\frac{\tan(a+b)}{\tan(a-b)} = \frac{\dfrac{\sin(a+b)}{\cos(a+b)}}{\dfrac{\sin(a-b)}{\cos(a-b)}} = \frac{\dfrac{\sin(a)\cos(b)+\cos(a)\sin(b)}{\cos(a)\cos(b)-\sin(a)\sin(b)}}{\dfrac{\sin(a)\cos(b)-\cos(a)\sin(b)}{\cos(a)\cos(b)+\sin(a)\sin(b)}}$$ But I don't know really where to go from here.
AI: Hint:
Use $$2\sin x\cos x=\sin2x$$
Then Prosthaphaeresis Formulas $$\sin C-\sin D=?\text{ and }\sin C+\sin D=?$$
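Explicitly (a sketch using those formulas):
$$
\frac{\tan(a+b)}{\tan(a-b)}=\frac{\sin(a+b)\cos(a-b)}{\cos(a+b)\sin(a-b)}
=\frac{\tfrac12\left(\sin 2a+\sin 2b\right)}{\tfrac12\left(\sin 2a-\sin 2b\right)}
=\frac{\sin(a)\cos(a)+\sin(b)\cos(b)}{\sin(a)\cos(a)-\sin(b)\cos(b)},
$$
since $\sin(a+b)\cos(a-b)=\tfrac12(\sin 2a+\sin 2b)$ and $\cos(a+b)\sin(a-b)=\tfrac12(\sin 2a-\sin 2b)$.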
|
H: Consider $\dot{x}=4x^{2}-16$.
I am solving the ODE above; it is a question from Strogatz's Nonlinear Dynamics and Chaos, Chapter 2, Exercise 2.2.1.
Question
\begin{equation}
\dot{x}=4x^{2}-16
\end{equation}
Answer
\begin{align}
\frac{\dot{x}}{4x^{2}-16} &= 1\\
\frac{\dot{x}}{x^{2}-4} &= 4\\
\frac{\frac{dx}{dt}}{x^2-4} &= 4 \\
\frac{\frac{dx}{dt}}{x^2-4}\cdot dt &= 4\cdot dt \\
\frac{dx}{x^2-4} &= 4\,dt \\
\int \frac{1}{x^{2}-4}\, dx &= \int 4\, dt \\
\frac{1}{4} \ln\left(\frac{x-2}{x+2}\right) &= 4t + C_{1} \\
x &= 2\,\frac{1 + C_{2}e^{16t}}{1 - C_{2}e^{16t}}
\end{align}
\begin{equation}
C_{2} = \frac{x(0)-2}{x(0)+2}
\end{equation}
Summary
I am looking to understand the intermediary step in the proof above. How do we get to the step $\frac{1}{4} \ln\left(\frac{x-2}{x+2}\right) = 4t + C_{1}$ from the previous one? Can we factor out the constant $\frac{1}{4}$ and then integrate the remaining portion?
AI: $$I=\int \frac{dx}{x^{2}-4}$$
Partial fraction decomposition gives us:
$$I=\int \left ( \frac A {x-2}-\dfrac B {x+2} \right)dx$$
$$I=\int \frac{x(A-B)+2(A+B)}{x^{2}-4}dx$$
$$\implies A=B=\dfrac 14$$
$$I=\dfrac 14\int \left ( \frac 1 {x-2}-\dfrac 1 {x+2} \right)dx$$
Then integrate, using the $\ln$ function:
$$I=\dfrac 14 \ln |{x-2}|-\dfrac 14\ln |{x+2}|+C$$
$$I=\dfrac 14 \ln \left | \dfrac {x-2}{x+2} \right |+C$$
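For the final step of the question, exponentiate and absorb constants (with $C_{2}=\pm e^{4C_{1}}$):
$$\ln \left | \dfrac {x-2}{x+2} \right |=16t+4C_{1}
\;\Longrightarrow\;
\dfrac {x-2}{x+2}=C_{2}e^{16t}
\;\Longrightarrow\;
x = 2\,\frac{1 + C_{2}e^{16t}}{1 - C_{2}e^{16t}}.$$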
|
H: Solving $k(k+1)(k-1)=k$
The function $y=e^{kx}$ satisfies the equation
$$\left(\frac{d^{2}y}{dx^{2}}+\frac{dy}{dx}\right)\left(\frac{dy}{dx}-y\right)=y\frac{dy}{dx}$$
I found the derivative and the second derivative of $y$, resulting in
$$\left(k^{2}e^{kx}+ke^{kx}\right)\left(ke^{kx}-e^{kx}\right)=ke^{2kx}$$
Dividing by $e^{2kx}$:
$$k(k+1)(k-1)=k \tag{$\star$}$$
I don't understand this part. How do you simplify $(\star)$ to find the values of $k$?
I know one of the roots is $0$, but not the other two.
AI: $k(k+1)(k-1)=k$
$k(k+1)(k-1) - k = 0$
$k(k^2-1) - k = 0$
$k(k^2-1 - 1)= 0$
$k(k^2 - 2)=0$
$k =0$ or $k^2 = 2 \implies k = \pm\sqrt 2$
$\therefore k = 0, - \sqrt 2, \sqrt 2$
|
H: Uniform continuity of continuous real-valued function from space of trace-class operators on a Hilbert space
Let $\phi$ be a function from the space of trace-class operators on a separable Hilbert space $\mathcal{H}$ into the reals. Assume that $\phi$ is continuous and consider the restriction of $\phi$ to an open ball of radius $M$. Is this restriction uniformly continuous? I know of the Heine-Cantor theorem but since the ball is not compact, I'm not sure it is useful here.
AI: Let $(X,d)$ be a metric space without isolated points. If every continuous function on $X$ is uniformly continuous then $X$ is necessarily compact.
This shows that the answer to your question is NO (take $X$ to be the open ball from your question).
Proof of the above theorem: Suppose $X$ is not compact. Let $\{x_{n}\}$ be a sequence with no convergent subsequence. Since $X$ has no isolated points, there exists a sequence $\{y_{n}\}$ such that $0<d(x_{n},y_{n})<\frac{1}{n}$. The set $\{x_{n}:n\geq 1\}\cup \{y_{n}:n\geq 1\}$ has no limit points, hence is closed. Define $f(x_{n})=n,\ f(y_{n})=2n,\ n=1,2,\dots$, and extend $f$ to a continuous function on $X$ (e.g. by the Tietze extension theorem). The extended function is obviously not uniformly continuous.
|
H: Use the laws of logic to show that $[a\Rightarrow(b\lor c)]\Leftrightarrow[(a\land\lnot b)\Rightarrow c]$
I am trying to prove that $[a\Rightarrow(b\lor c)]\Leftrightarrow[(a\land\lnot b)\Rightarrow c]$.
My proof is the following:
$a\Rightarrow(b\lor c)~$ Premise
$(a\Rightarrow b)\lor c~$ Associative Law
$(\lnot a\lor b)\lor c~$ Material Implication
$\lnot(a\lor\lnot b)\lor c~$ De Morgan's Law
$(a\land\lnot b)\Rightarrow c~$ Material Implication
I'm having doubts about my second step. I tried to check the validity of my step using a truth table, and the statements in the first and second steps are logically equivalent. Is my application of the associative law legal?
AI: Yes, it is logically equivalent. See that you can go directly from the first line to the third line by material implication:
$$ a \rightarrow (b \lor c) \iff \neg a \lor (b \lor c) \iff (\neg a \lor b) \lor c $$
Assuming you know how to go from step 2 to 3, then you should know how to go from step 3 to step 2.
Also, see Material Implication. As a rule of thumb when working with Boolean algebra, remove that arrow!
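If you ever want to double-check such an equivalence mechanically, a brute-force truth-table scan settles it (a small illustrative script of my own):

```python
from itertools import product

for a, b, c in product([False, True], repeat=3):
    lhs = (not a) or b or c         # a => (b or c), via material implication
    rhs = (not (a and not b)) or c  # (a and not b) => c
    assert lhs == rhs               # holds for all 8 assignments
```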
|
H: Complex conjugations
I'm stumped on an equation from a Coursera course (Intro to DSP) that has to do with complex exponential multiplication:
In line 2, by simple rules of complex multiplication, it should be $(h+k)$. I understand that its being $(h-k)$ has to do with complex exponential conjugation, but I'm not really sure why/how.
Any help understanding would be appreciated.
AI: The complex conjugate of a complex number written as $re^{i\theta}$ where $r,\theta \in \mathbb{R}$ is given by $(re^{i\theta})^* = re^{-i\theta}$. Since we're taking the complex conjugate before multiplying, we have $$(e^{j\frac{2\pi}{N}nk})^*e^{j\frac{2\pi}{N}nh} = e^{-j\frac{2\pi}{N}nk}e^{j\frac{2\pi}{N}nh} = e^{j\frac{2\pi}{N}(h-k)n}. $$
|
H: An example of a field $F$ such that $F^n$ uses element-wise operations, but $F$ is not a subfield of $F^n$?
I'm going through The Linear Algebra a Beginning Graduate Student Ought to Know, and I came across this idea that I can't seem to understand. Suppose $F$ is a field; then he asks "is it possible to define multiplication in such a manner that $F^n$ will become a field naturally containing $F$ as a subfield?" The answer seems easy: I could just do elementwise multiplication (like we do addition), so for $(a_1,...,a_n),(b_1,...,b_n)\in F^n$, we have that $(a_1,...,a_n)(b_1,...,b_n)=(a_1b_1,...,a_nb_n)$. Clearly multiplication is commutative and associative, every element has a multiplicative inverse except the additive identity, and it is distributive because $F$ is a field. However, he says "in general, the answer is negative."
Can you help me convince myself as to why this is true?
AI: If one of the entries of $x \in F^n$ is zero, then that element has no multiplicative inverse.
For an example of a field such that $F^n$ has no field structure, consider $\mathbb{C}^2$. If there is a field structure on $\mathbb{C}^2$ such that $\mathbb{C}$ is a subfield of $\mathbb{C}^2$, then it would be a degree 2 field extension. However, since $\mathbb{C}$ is algebraically closed, it has no degree 2 field extension.
|
H: Differentiability of the modulus function
$f(x) = (x-1)|(x-1)(x-2)|$. My teacher explained that since the effective power of $(x-1)$ is $2$, which is greater than $1$, the function will be differentiable at $x = 1$. But since the effective power of $(x-2)$ is $1$, it won't be differentiable at $x = 2$.
$f(x) = |x-a|^n$
if $n < 0:$ discontinuous, non-differentiable
if $0<n\le 1:$ continuous, non-differentiable
if $n>1:$ continuous, differentiable
Can somebody help me prove it and explain the deeper implications of this concept?
AI: As you may know, $$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$$ so if we consider $f(x)=|x-a|^n,\;n\in\mathbb{N}$ we have $$f'(a)=\lim_{h\to 0}\frac{|a+h-a|^n-|a-a|^n}{h}=\lim_{h\to 0}\frac{|h|^n}{h}.$$
If $n\in(0,1]$ then $$\lim_{h\to 0^+}\frac{|h|^n}{h}=\lim_{h\to 0^+}h^{n-1}>0$$ but $$\lim_{h\to 0^-}\frac{|h|^n}{h}=-\lim_{h\to 0^+}h^{n-1}<0$$ so $$\lim_{h\to 0^+}\frac{|h|^n}{h}\neq \lim_{h\to 0^-}\frac{|h|^n}{h},$$ therefore, it is not differentiable.
On the other hand, if $n>1$ you don't face that problem.
|
H: Doubt on strong law of large number theorem
Suppose $\{X_1,X_2,\dots\}$ is a sequence of independent and identically distributed random variables.
Let $\mathbb{E}(X_1^{+})<\infty$, i.e. the expectation of the positive part of the random variable $X_1$ is finite, instead of assuming $\mathbb{E}|X_1|<\infty$.
From here, can I conclude that
$$
\frac{1}{n}\sum_{i=1}^{n}X_i \xrightarrow{a.s.} \mathbb{E}(X_1)
$$
(a.s. = almost surely)
Thanks in advance
AI: I suppose you can! If $E|X_i| <\infty$, then the regular SLLN is in force. If not, then $X_i=X_i^{+}-X_i^{-}$ with $EX_1^+ < \infty$ and $E[X_1^{-}] = \infty$. Let $X_{i,M}^{-} = X_i^{-}1_{\{X_i^{-}\le M\}} \le X_i^{-}$. Then $EX_{1,M}^{-} \to \infty$ as $M\to \infty$. Hence
$\frac{1}{n}\sum_{i=1}^n (X_i^{+}-X_i^{-}) \le \frac{1}{n}\sum_{i=1}^n (X_i^{+}-X_{i,M}^{-}) \stackrel{a.s.}{\to} EX_1^+ - EX_{1,M}^{-}$.
Since $EX_1^+ < \infty$, the right-hand side decreases to $-\infty$ as $M\to \infty$, so $\frac{1}{n}\sum_{i=1}^n X_i \to -\infty$ a.s., which is exactly $\mathbb{E}(X_1)$ in this case.
|
H: If $T$ is a bounded linear map and $\sum x_n$ is an absolutely convergent series, then $T(\sum x_n) = \sum T(x_n)$
Is the following true? If so, how to prove it?
If $T:X \to Y$ is a bounded linear map between the Banach space $X$ and the normed vector space $Y$ and $\sum x_n$ is an absolutely convergent series, then $T(\sum x_n) = \sum T(x_n)$.
I need this claim to finish a proof that if $X$ is a Banach space and $M$ is a proper closed subspace, then $X/M$ is a Banach space, but I am not able to show it.
Thanks in advance and kind regards.
AI: You can find detailed proofs of both facts as Lemma 4.4 and Theorem 4.5 in:
http://www.pitt.edu/~hajlasz/Notatki/Functional%20Analysis2.pdf
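For reference, a condensed version of the argument for the claim itself: since $X$ is Banach and $\sum_n\|x_n\|<\infty$, the partial sums $s_N=\sum_{n=1}^N x_n$ form a Cauchy sequence and hence converge to some $s\in X$; continuity of $T$ then gives
$$T(s)=\lim_{N\to\infty}T(s_N)=\lim_{N\to\infty}\sum_{n=1}^{N}T(x_n)=\sum_{n=1}^{\infty}T(x_n).$$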
|
H: Connected components of free loop space
Let $X$ be a topological space, and let $\Lambda X=\mathrm{Top}\left[S^1,X\right]$ be the space of continuous loops in $X$. Then how do we calculate $\pi_0(\Lambda X)$, the set of connected components of $\Lambda X$? Is it related to $\pi_1\left(X,x_0\right)$ for some $x_0$?
For simplicity, assume $X$ is a path-connected manifold, but I would prefer a general answer.
AI: Suppose $X$ is connected. Since the inclusion of a point into a circle is a cofibration, the map from based loops to free loops is surjective on homotopy classes.
Moreover, the fundamental group of $X$ acts on based loops by conjugation. Hatcher, in his appendix, proves that the orbits of this action are in bijection with unpointed homotopy classes.
|
H: Probability of a Gaussian random variable
I have the following problem: a tank is represented by a Gaussian random variable $X\sim N(6;\,0.25)$, and the request is:
Find the probability that the sum of two independent measurements is greater than $10$ meters.
How can I check my solution? Thank you
AI: Hint: the sum of two independent measurements is still Gaussian:
$Z=X+Y \sim N(12; 0.5)$
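From there (assuming, as is usual for this notation, that the second parameter is the variance), with $\Phi$ the standard normal CDF:
$$P(Z>10)=1-\Phi\left(\frac{10-12}{\sqrt{0.5}}\right)=\Phi\left(2\sqrt{2}\right)\approx 0.9977.$$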
|
H: Construct a Reed Solomon code: find the parity check matrix
I am trying to solve the following exercise, but I need a check/opinion on how to solve it.
Construct a Reed-Solomon code with parameters $[12,7]$ over $\mathbb{F}_{13}$ and find a parity check matrix for the code $C$. Hint: $2$ is a primitive element of $\mathbb{F}_{13}$.
First thing: I have $\delta=12-7+1=6$, so the minimum distance is exactly $6$. Also, I choose to build a narrow-sense code, so the defining set is $T = \mathcal{C}_1 \cup \ldots \cup \mathcal{C}_{5}$.
As $12=n=13-1$, then $\mathcal{C}_i=\{ i \}$, so the generator polynomial is $$g(x)=(x-2)(x-2^2)(x-2^3)(x-2^4)(x-2^5)=(x-2)(x-4)(x-8)(x-3)(x-6)$$
Now, I can work out the computations and find $h(x)$, the check polynomial, by dividing $x^{12}-1$ by $g(x)$, but it seems a bit heavy to me. Is there any other way to compute the check polynomial faster? And from it, also the parity check matrix?
AI: Why would you need to divide? You already know its structure as well as you know $g$'s:
It's equal to $(x-1)(x-5)(x-7)(x-9)(x-10)(x-11)(x-12)$
After you have this, you can use its corresponding word (with the coefficients reversed), then do cyclic shifts to find the rest of the parity-check matrix.
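A quick computational check (a sketch of my own; the helper names are hypothetical) confirms $g(x)h(x)=x^{12}-1$ over $\mathbb{F}_{13}$ and assembles the $5\times 12$ parity-check matrix from cyclic shifts of the reversed check polynomial:

```python
p = 13  # we work over F_13

def polymul(a, b):
    """Multiply polynomials given as coefficient lists (increasing degree), mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_from_roots(roots):
    poly = [1]
    for r in roots:
        poly = polymul(poly, [(-r) % p, 1])  # multiply by (x - r)
    return poly

g = poly_from_roots([2, 4, 8, 3, 6])           # generator polynomial
h = poly_from_roots([1, 5, 7, 9, 10, 11, 12])  # check polynomial
assert polymul(g, h) == [p - 1] + [0] * 11 + [1]  # g(x)h(x) = x^12 - 1 mod 13

# rows of H: cyclic shifts of h with its coefficients reversed
h_rev = h[::-1]
H = [[0] * 12 for _ in range(5)]
for i in range(5):
    for j, c in enumerate(h_rev):
        H[i][i + j] = c
```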
|
H: Finding the p-value between two numbers
I am trying to solve this example in statistics, but my results are different from the ones in the solution and I do not understand why. The example says:
A study finds a test statistic $t$-value of $1.03$ for a t-test on a single population mean. The sample size is $11$, and the alternative hypothesis is $H_a:\mu \neq 5$. Using Table A-2 in the appendix, what range of values is sure to include the p-value for this value of $t$?
And the solution for this exercise is
$$0.20 < p\text{ -value} < 0.50$$
But the solution I get is
$$0.30 < p\text{ -value} < 0.40$$
Because if I go to the t-table, search for $ df=10 $ and look for the values that bracket $1.03$, it lies between $0.879$ and $1.093$, whose one-tail areas are $0.2$ and $0.15$. After multiplying them by two, because it's a two-tailed test, I get my result.
My t-table:
AI: The result depends on which table you read the result from.
With my table
I get exactly the result you are looking for:
$0.2 < p <0.5$
Perhaps your table is more detailed
|
H: Value of $\alpha$ for which $x^5+5\lambda x^4-x^3+(\lambda\alpha-4)x^2-(8\lambda+3)x+\lambda\alpha-2=0$ has roots independent of $\lambda$
Consider the equation $$x^5 + 5\lambda x^4 -x^3 + (\lambda \alpha -4)x^2 - (8\lambda +3)x + \lambda\alpha - 2 = 0$$ The value of $\alpha$ for which the roots of the equation are independent of $\lambda$ is _______
My approach: The equation can be rewritten as:
$$\underbrace{(x-2)(x^4 + 2x^3 + 3x^2 + 2x + 1)}_{f(x)} + \lambda\underbrace{(5x^4 + \alpha x^2 -8x + \alpha)}_{g(x)} = 0$$
For this equation to be valid independent of $\lambda$, $f(x) = g(x) = 0$. $f(x)$ has $2$ as one of its roots. Solving $g(2) = 0$, the value of $\alpha$ comes out to be
$$\alpha = -\frac{64}{5}$$
which is unfortunately not the correct answer. Where is my approach breaking down?
AI: The question may be phrased incorrectly, as it is not possible to make the set of all roots independent of $\lambda$. The question I will answer is: For what value of $\alpha$ does the equation have some roots which are independent of $\lambda$?
As demonstrated in the question, $\alpha=-\frac{64}{5}$ is one possibility, which gives the root $x=2$, independent of $\lambda$.
But there is one other possibility that we can find by further factoring: $f(x) = (x-2)(x^2+x+1)^2$. Is there a value of $\alpha$ for which $g(x)$ shares a root with $x^2+x+1$? Setting $x=\omega$ with $\omega^3=1$, $\omega\neq 1$, and $g(\omega)=0$ gives us $\alpha \omega^2 - 3\omega + \alpha = 0$. Reducing further using $\omega^2+\omega+1 = 0$ gives $-\alpha\omega - 3\omega = 0$, or $\alpha=-3$.
Indeed, we can verify that if $\alpha=-3$, the original equation is divisible by $x^2 + x+1$ regardless of the value of $\lambda$.
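A quick symbolic check (a sketch; assuming SymPy is available) confirms the divisibility for $\alpha=-3$:

```python
import sympy as sp

x, lam = sp.symbols('x lam')
alpha = -3
poly = (x**5 + 5*lam*x**4 - x**3 + (lam*alpha - 4)*x**2
        - (8*lam + 3)*x + lam*alpha - 2)
# the remainder on division by x^2 + x + 1 vanishes identically in lam
_, remainder = sp.div(sp.expand(poly), x**2 + x + 1, x)
assert sp.simplify(remainder) == 0
```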
|
H: If $H$ and $K$ are normal subgroups of $G$ and $G/H \cong K$, does this imply that $G/K \cong H$?
If $H$ and $K$ are normal subgroups of $G$ and $G/H \cong K$, does this imply that $G/K \cong H$?
I wasn't able to find a counterexample or to prove that the implication is true. I would appreciate any help with this question. Thank you!
AI: Counterexample: take $G=Q=\{ 1, -1, i, -i, j, -j, k, -k\}$, the quaternion group of order $8$. Take $H=\langle i \rangle$ and $K=Z(Q)=\{1, -1\}$. Then $G/H\cong K$, but $G/K$ is non-cyclic, whereas $H$ is.
|
H: Fixed points of crossed products
Let $(A,G,\alpha)$ be a $C^*$-dynamical system with $G$ amenable (so that I need only consider the crossed product, since the reduced and full crossed products coincide). Let $\theta$ be an automorphism of $A \rtimes G$. Is it always true that $A$ is in the fixed point set of $\theta$? In my case I have the $C^*$-algebra $C(X)\rtimes_{\phi}\mathbb{Z}$, where $X$ is a compact Hausdorff space and $\phi$ is a homeomorphism of $X$.
AI: Here’s an example arising from a non-trivial homeomorphism $\phi$ that acts non-trivially both on $C(X)$ and on the unitaries corresponding to elements of $\mathbb{Z}$.
Let $X = \mathbb{T}$ and let $\phi : \mathbb{T} \to \mathbb{T}$ be rotation by $2\pi\alpha$ radians, so that $C(X) \rtimes_\phi \mathbb{Z}$ is the rotation algebra/noncommutative $2$-torus generated by unitaries $U$ (corresponding to the Fourier mode $(z \mapsto z) \in C(X)$) and $V$ (corresponding to the automorphism $\phi$) satisfying the commutation relation $$VU = e^{-2\pi i \alpha}UV.$$ Then the flip automorphism $\theta : C(X) \rtimes_\phi \mathbb{Z} \to C(X) \rtimes_\phi \mathbb{Z}$ defined on generators by $$\theta(U) := U^\ast, \quad \theta(V) := V^\ast$$ doesn’t fix $U \in C(X) \subset C(X) \rtimes_\phi \mathbb{Z}$. Indeed, $\theta$ restricts to the $\ast$-automorphism of $C(X)$ corresponding to reflection in the real axis on $X = \mathbb{T} \subset \mathbb{C}$.
|
H: Question regarding specific steps of Fatou's lemma proof.
I have a question regarding the following two steps of the proof.
How can we get $\int g_n \leq \int f_n$, i.e. $g_n \leq f_n$ for all $n$? By the definition of $\liminf$, we only know $g_n \leq f_m$ for all $m \geq n$. Where did the $m$ go?
Why did $\int f_n$ become $\liminf\int f_n$ suddenly?
AI: For your first question: since $g_n \leq f_m$ for all $m \geq n$, we may in particular take $m = n$ (as $n \geq n$), which gives $g_n \leq f_n$.
For the second question: what happened is that he took the $\liminf$ of both sides of the inequality:
$$\int g_n\,d\mu \leq \int f_n\, d\mu \implies
\liminf_n \int g_n\,d\mu \leq \liminf_n\int f_n\, d\mu $$
Now, since $g_n$ is increasing,
$$\liminf_n \int g_n\,d\mu =\lim_n \int g_n\,d\mu $$ Therefore, you get
$$\lim_n \int g_n\,d\mu \leq \liminf_n\int f_n\, d\mu $$
|
H: Definition of basis in topology
In the definition of a basis on a set $X$ in topology, one of the properties is that, "for any two basis elements $B_1, B_2$ and any point $x \in B_1 \cap B_2$, there is a third basis element $B_3$ containing $x$ and contained in $B_1 \cap B_2$". I am wondering what happens if we change this property to the stronger property of requiring closure under pairwise intersections (and hence, inductively, by finite intersections)? In other words, what if we changed the requirement to "for all basis elements $B_1, B_2$, their intersection $B_1 \cap B_2$ is a basis element"?
Are there any simple/important examples in which the original (i.e. weaker) property holds but the latter (i.e. stronger) property doesn't?
AI: There are easy examples of bases that have the first property but not the second; open balls in $\Bbb R^n$ for any $n>1$ will do. On the other hand, if you take the closure of any base under finite intersections, you get a base for the same topology that has the stronger property, so in that sense it doesn’t really matter which property you use to define bases. The main advantage to using the weaker property, apart from the fact that it is necessary as well as sufficient, is that it often allows simpler descriptions of a base.
|
H: integrals in terms of gamma function
$\displaystyle I_1 = \int^1_0 \sqrt[3]{\log( x)}\,dx$
I am trying to write this in terms of gamma functions.
After I substitute
$\displaystyle y = \log( x)$ , $x = e^y$ , $dx = e^y dy$
Then,
$\displaystyle I_1 = \int^1_0 e^yy^{\frac{1}{3}}\,dy$
but the upper limit isn't infinite.
Can anyone please explain or help me?
AI: The limits are not correct. With $y=\log(x)$, note that when $x\to 0^+$, $y\to -\infty$ and when $x=1$, $y=0$. With that substitution we find that
$$\int_0^1 \log^{1/3}(x)\,dx=\int_{-\infty}^0 y^{1/3}e^y\,dy$$
A slightly more direct substitution is to let $x=e^{-y}$ so that as $x\to 0^+$, $y\to \infty$, and for $x=1$, $y=0$, and $dx=-e^{-y}\,dy$.
Then, we can write
$$\int_0^1 \log^{1/3}(x)\,dx=-\int_0^\infty y^{1/3}e^{-y}\,dy=-\Gamma(4/3)$$
|
H: Calculating $\cos\frac\pi4$ from the half-angle formula gives $\sqrt{\frac12}$ instead of $\frac{\sqrt{2}}2$. What went wrong?
I am using the formula $$\cos\left(\frac x2\right)=\sqrt{\frac{1+\cos(x)}2}$$ to find $\cos\left(\frac{\pi}{4}\right)$, but it does not give me the correct result.
$$
\cos\left(\frac{\pi}{4}\right)
= \sqrt{\frac{1+\cos\left(\frac{\pi}{2}\right)}2}
= \sqrt{\frac{1+0}2}
= \sqrt{\frac12}
$$
This contradicts $\cos\left(\frac{\pi}{4}\right) = \frac{\sqrt{2}}{2} $. What did I do wrong?
AI: Nothing: $\dfrac{\sqrt2}2=\dfrac{\sqrt2}{\sqrt4}=\sqrt{\dfrac24}=\sqrt{\dfrac12}$.
|
H: Markov chains: showing $P$ has unique eigenvalue $1$
I have a $4\times 4$ matrix and I tried solving for the determinant of $P-\lambda I$. This came out really messy and when I put the matrix into a matrix calculator my solution was $1,0$ and $-1$. Does this still mean $1$ is a unique eigenvalue solution? Is there a quicker method of proving $1$ is a unique eigenvalue?
AI: You correctly computed the eigenvalues: they are (with multiplicity), $\{-1,0,0,1\}$. So $1$ is not a unique eigenvalue.
Looking at the matrix, you can see that several rows are equal. Each time this happens, you get $0$ as an eigenvalue.
|
H: Vanishing derivative at infinity implies slowly varying
The functions $$f(x)=e^{(\ln x)^{1/3} \cos((\ln x)^{1/3}) } \quad g(x)=e^{\sqrt{\ln x} \cos((\ln x)^{1/3}) }$$
are oscillating but slowly varying at infinity, that is for all $\lambda >0$, we have
$$\lim_{x\to \infty} \frac{f(\lambda x)}{{f(x)}}=\lim_{x\to \infty} \frac{g(\lambda x)}{{g(x)}}=1$$
this leads me to ask the following question:
Suppose $h$ is differentiable hence continuous (or maybe uniformly?) and positive on
$[A,\infty)$ with $h'(x)\to 0$ as $x\to \infty$, then
$$f(x)=e^{h(\log(x))}$$
is slowly varying at infinity.
For the concrete functions I had, namely $f$ and $g$, I simply used L'Hôpital to show the exponent difference goes to zero; I'm not sure if this would hold for arbitrary $h$.
(The reason probability theory is tagged is because this is from extreme value theory)
AI: Assume $h$ is differentiable on an interval $[A,\infty)$ and $h'(x)\to 0$ as $x\to \infty$. Then by the Mean Value Theorem,
$$\frac{f(\lambda x)}{f(x)} = e^{h(\log \lambda + \log x)-h(\log x)} = e^{\log(\lambda) h'(\xi(x))},
$$
where $\log x\le \xi(x)\le \log x + \log\lambda$. So, $h'(\xi(x))\to 0$ as $x\to \infty$, which shows that your conjecture is correct. You don't need to assume $h$ is positive.
|
H: What does this definition of a polynomial mean?
In a book I am reading, there is the following definition for a polynomial:
A function $p: \mathbb{F} \rightarrow \mathbb{F}$ is called a
polynomial with coefficients in $\mathbb{F}$ if there exists $a_0, \ldots, a_m \in \mathbb{F}$ such that
$p(z) = a_0 + a_1z + a_2z^2 + \ldots + a_mz^m$
for all $z \in \mathbb{F}$.
However, there are aspects of this that do not make sense to me. For example, I cannot think of any examples where you have a polynomial $p(z) = a_0 + a_1z + a_2z^2 + \ldots + a_mz^m$, with coefficients $a_0, \ldots, a_m \in \mathbb{F}$, for only some $z \in \mathbb{F}$. Does that even make sense?
What precisely is this definition saying, as it seems to differ slightly (at least in terms of wording) from other definitions, e.g. here.
AI: The definition does not define what a polynomial is; it defines what it means for a function to be polynomial. The definition above could be written as follows:
A function $f: \mathbb{F} \rightarrow \mathbb{F}$ is called a
polynomial with coefficients in $\mathbb{F}$ if there exists a polynomial $p$ with coefficients in $\mathbb{F}$ such that $f(x) = p(x)$ for all $x \in \mathbb{F}.$
The function $p(z) = \sin z$ satisfies the condition that there are $a_0, \ldots, a_m \in \mathbb{R}$ such that $p(z) = a_0 + a_1 z + a_2 z^2 + \cdots + a_m z^m$ for only some $z \in \mathbb{R}.$ One can for example take $m=0,\ a_0=0$ and those "some" $z$ to be $\{n\pi \mid n\in\mathbb{Z}\}.$ But there are no $a_0, \ldots, a_m \in \mathbb{R}$ such that $p(z) = a_0 + a_1 z + a_2 z^2 + \cdots + a_m z^m$ for all $z \in \mathbb{R}.$ Therefore $p$ is not a polynomial function.
|
H: Covering $\Bbb RP^\text{odd}\longrightarrow X$, what can be said about $X$?
I am looking for any argument related to the following fact, which may or may not be true.
Let $f:\Bbb RP^n\longrightarrow X$ be a covering space, where $n\geq 2$. Then, $X=\Bbb RP^n$.
Now, for $n$ even, this is surely true, and a proof is given below:
Let $f:\Bbb RP^n\to X$ be a covering space, where $n=2m$ for some $m\in\Bbb N$. Then, $X$ is a compact connected $2m$-manifold. So, $X$ is a finite CW-complex. Also, $f$ is a finite sheeted covering as fibres are discrete subsets of the compact space $\Bbb RP^n$. Let $f$ be a $k$-sheeted covering. Then, $$1=\chi(\Bbb RP^n)=k\cdot \chi(X)\implies k=1=\chi(X).$$ Now, a one-sheeted covering is a homeomorphism, so we are done.
So, my question is: what about $n$ odd? Here $\chi(\Bbb RP^{\text{odd}})=0$, so we probably cannot modify the above argument. Is there any alternative argument to prove the above? Or is there any $X$ not homeomorphic to $\Bbb RP^\text{odd}$ with a covering $\Bbb RP^\text{odd}\longrightarrow X$?
AI: If $n = 1$, then $\mathbb{RP}^1 = S^1$ which only covers itself.
If $n > 1$, the manifold $\mathbb{RP}^{2n-1}$ covers infinitely many manifolds which are pairwise non-homotopy equivalent. To see this, we use the fact that $\mathbb{RP}^{2n-1}$ is a lens space.
Recall, if we identify $S^{2n-1}$ with the unit vectors in $\mathbb{C}^n$, then for positive integers $m, l_1, \dots, l_n$ with $(m, l_i) = 1$, the lens space $L(m; l_1, \dots, l_n)$ is the quotient of $S^{2n-1}$ by $\mathbb{Z}_m$, where the action of $\mathbb{Z}_m$ is generated by $(z_1, \dots, z_n) \mapsto (e^{2\pi i l_1/m}z_1, \dots, e^{2\pi i l_n/m}z_n)$. In particular, when $m = 2$ and $l_1 = \dots = l_n = 1$, the $\mathbb{Z}_2$ action on $S^{2n-1}$ is given by $(z_1, \dots, z_n) \mapsto (-z_1, \dots, -z_n)$ which is the antipodal map, and hence $L(2; 1, \dots, 1) = \mathbb{RP}^{2n-1}$.
For $k > 0$, consider the lens space $L(2k; 1, \dots, 1)$. The $\mathbb{Z}_{2k}$-action on $S^{2n-1}$ has an index $k$ subgroup which is just the antipodal action: if $g$ is a generator of the $\mathbb{Z}_{2k}$-action, then $g^k$ is the antipodal map and $\{\operatorname{id}, g^k\}$ is the desired subgroup. It follows that $L(2k; 1, \dots, 1)$ has $L(2; 1, \dots, 1) = \mathbb{RP}^{2n-1}$ as a $k$-sheeted cover.
The manifolds $L(2k; 1, \dots, 1)$ are homotopically distinct for every $k$ because $\pi_1(L(2k; 1, \dots, 1)) \cong \mathbb{Z}_{2k}$.
|
H: Why is it necessary to exclude empty set to accomplish this relation proof?
Working on the book: Daniel J. Velleman. "HOW TO PROVE IT: A Structured Approach, Second Edition" (p. 201)
∗20. Suppose $R$ is a relation on $A$. Let $B = \{X \in P (A) \colon X\neq\emptyset \}$, and define a relation $S$ on $B$ as follows:
$S = \{(X,Y) \in B \times B \colon \forall x \in X, \forall y \in Y(xRy)\}$.
Prove that if $R$ is transitive, then so is $S$. Why did the empty set have to
be excluded from the set $B$ to make this proof work?
I am trying to find a counterexample to justify the exclusion of the empty set from this proof. So, I define:
$R = \{(1,2), (2,1), (1,1), (2,2)\}$
$A = \{1,2\}$
$B = \{X \in P (A)\} = \{\emptyset, \{1\}, \{2\}, \{1,2\}\}$
Can I find a suitable counterexample with these sets? Why does $S$ lose the transitivity property if the empty set is not excluded?
AI: The problem is that if $X=\varnothing$, then $\forall x\in X\,\forall y\in Y\,(xRy)$ is vacuously true for every $Y\subseteq A$. What could make it false? There would have to be some $x\in X$ and some $y\in Y$ such that $x\not Ry$. But if $X=\varnothing$, there isn’t any $x\in X$ at all, so it’s impossible for such a pair of $x$ and $y$ to exist.
Now take $X$ and $Y$ to be two subsets of $A$ such that $X\not S Y$. Then $XS\varnothing$ and $\varnothing SY$, but $X\not SY$, so $S$ is not transitive. Your example won’t work, because all subsets of your $A$ are related by $S$, but it could be changed easily enough; try $R=\{\langle 1,1\rangle,\langle 1,2\rangle,\langle 2,2\rangle\}$, the $\le$ relation on $A$, and $X=\{2\}$. (I’ll let you find a $Y$ such that $X\not SY$.)
|
H: Prove that $\int_2^x \frac{dt}{\log(t)^n} = \mathcal{O}\bigg(\frac{x}{\log(x)^n}\bigg)$
I am stuck at the following exercise:
Show that for $n \in \mathbb{N}$ holds
$$\int_2^x \frac{dt}{\log(t)^n} = \mathcal{O}\bigg(\frac{x}{\log(x)^n}\bigg).$$
I do not see how I could prove this. I know that the following identity holds:
$$\int_2^x \frac{dt}{\log(t)^n} = \frac{t}{\log(t)^n} \bigg\vert^x_2 + n \int_2^x\frac{dt}{\log(t)^{n+1}},$$
but I do not see how this could help here. Could you give me a hint?
AI: You could use the following theorem: if $f$ and $g$ are positive, $\int_a^x f$ and $\int_a^x g$ both have infinite limit, and $f=o(g)$, then $\int_a^x f = o\left(\int_a^x g\right)$.
So $\int_2^x \frac{{\rm d}t}{(\log(t))^{n+1}} = o\left(\int_2^x \frac{{\rm d}t}{(\log(t))^n}\right)$, and your proof gives you the result.
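Spelling out the last step: combining $\int_2^x \frac{{\rm d}t}{(\log(t))^{n+1}} = o\left(\int_2^x \frac{{\rm d}t}{(\log(t))^n}\right)$ with the integration-by-parts identity from the question,
$$\left(1-o(1)\right)\int_2^x \frac{dt}{\log(t)^n}=\frac{x}{\log(x)^n}+O(1),$$
so $\int_2^x \frac{dt}{\log(t)^n}\sim \frac{x}{\log(x)^n}$, which is even stronger than the required $\mathcal{O}$-bound.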
|
H: When is the supremum of a function on a subset equal to the supremum on a set?
Let $f$ be a continuous, real function on a set $X$. Suppose the set $D \subset X$ is dense in $X$, and the set $S \subset X$ is closed and has empty interior, i.e. it is nowhere dense. Moreover, $S\cap D$ is nonempty.
Are there any known conditions for when $\sup(f(S))=\sup(f(S\cap D))$?
I would like to treat $X$ as a general topological space, but feel free to make any assumptions you need.
AI: Your conditions allow $S\cap D$ to be a singleton, which isn’t much better than allowing $S$ and $D$ to be disjoint; you’ll need a lot more interaction between $S$ and $D$ to get equality of the suprema. Suppose, in fact, that $S\cap D$ is not dense in $S$, and let $H=\operatorname{cl}(S\cap D)\subsetneqq S$. Fix $p\in S\setminus H$. If $X$ is Tikhonov, there is a continuous real-valued function $h:X\to[0,1]$ such that $h(p)=1$ and $h[H]=\{0\}$; clearly $\sup h[S]=1>0=\sup h[S\cap D]$.
This shows that if $X$ is at all nice in terms of the existence of real-valued functions, you’ll need $S\cap D$ to be dense in $S$, and in that case you do of course get equality of the suprema.
|
H: Is there a solution to $ \lim_{x\to1^+} \sin\frac{\sqrt{x+1}}{{x^2-1}} $
Is there a solution to the limit below? Does it exist?
$
\lim_{x\to1^+} \sin\frac{\sqrt{x+1}}{{x^2-1}}
$
When I use conjugate and factor it, I get:
$$
\sin(\sqrt2)
$$
$$
x \neq 0
$$
I found this question asked before, and the answer there was that it doesn't exist.
Also, I found another similar question which does have a solution, so now I'm getting mixed up:
Does the Squeeze Theorem apply to $\lim_{x\to\infty}\sin(\frac{\pi x}{2-3x})$?
$
\sin(\frac{\pi x}{2-3x}) \to \sin(-\frac{\pi}{3}) = -\frac{\sqrt3}{2}
$
AI: As $x \to 1+$, $\sqrt{x+1} \to \sqrt{2}$, but $x^2-1 \to 0$, so $\sqrt{x+1}/(x^2-1) \to \infty$. $\lim_{t \to \infty} \sin(t)$ does not exist.
|
H: Property of decreasing functions
I was reading the proof of a theorem, and I got stuck on this sentence:
"Since $f(k)=e^{-\frac{k^2}{m}}$ is a decreasing function, we have that
$$\int_k^{k+1}e^{-\frac{x^2}{m}}\,dx\le e^{-\frac{k^2}{m}}\le\int_{k-1}^ke^{-\frac{x^2}{m}}\,dx.$$"
I cannot understand how to prove that this is true, or whether it's true only in this case or for all decreasing functions and all integration intervals (in this case $k \in \mathbb{N}$).
AI: It's an application of the fact that if $f$ is continuous on $[a,b]$, then
$$
m(b-a)\leq \int_{a}^bf(x)\, dx\le M(b-a)
$$
where $M$ is the maximum and $m$ is the minimum of $f$ on $[a,b]$. Here both intervals have length $1$, and since the integrand is decreasing, its value at $k$ is its maximum on $[k,k+1]$ and its minimum on $[k-1,k]$; the same argument works for any decreasing function and any unit-length intervals.
|