H: Calculating a limit with exponent and trig function
I got this limit to calculate:
$$
\lim_{x\to\frac{\pi}{2}}(\tan x)^\frac{1}{x-\frac{\pi}{2}}
$$
I'm trying to solve it with l'Hôpital's rule, and the first step should be this, I guess:
$$
\lim_{x\to\frac{\pi}{2}}e^\frac{\ln(\tan x)}{x-\frac{\pi}{2}}
$$
Then I'm trying to solve the limit of the exponent:
$$
\lim_{x\to\frac{\pi}{2}}\frac{\ln(\tan x)}{x - \frac{\pi}{2}}
$$
In the last step I inversed the function in the denominator of the exponent. Next I do:
$$
\lim_{x\to\frac{\pi}{2}}\frac{\frac{1}{\tan x}\cdot\frac{1}{\cos^2x}}{1} = \lim_{x\to\frac{\pi}{2}}\frac{1}{\tan x\cos^2x}= \cdots
$$
Skipping a few calculations, in the end I get
$$
\lim_{x\to\frac{\pi}{2}}\frac{1}{2\cos x\sin^3x}
$$
Which would mean the limit of the exponent = infinity, but the answer sheet says it's 2. I have a strong feeling I did something wrong in one of the first steps, however I'm unable to find out what exactly...
AI: Something's broken. First, if $x$ is a bit bigger than $\pi/2$, then $\tan x$ is negative which makes the exponential really hard to deal with. So the limit only makes sense "from the left."
In that case, there's something odd with the steps you skipped. $\tan x \cos^2 x = \sin x \cos x =\frac{1}{2}\sin 2x.$
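As a quick sanity check on that simplification, here is a small SymPy sketch (the variable name is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
# the simplification used above: tan(x)*cos(x)^2 = sin(x)*cos(x) = sin(2x)/2
print(sp.simplify(sp.tan(x) * sp.cos(x)**2 - sp.sin(2*x) / 2))  # expected: 0
```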
|
H: Explanation of a scalar product calculation
Let $f_1, f_2, \ldots, f_n$ be continuous real valued functions on
$[a,b]$. Show that the set $\{f_1,\ldots,f_n\}$ is linearly dependent
on $[a,b]$ if and only if $$\det\left(\int\limits_a^b
f_i(x)f_j(x)dx\right)=0$$
I can't understand the solution to the sufficient condition of this problem. The solution says
Let $G$ be the matrix with entries $$G_{ij}=\int\limits_a^b
f_i(x)f_j(x)dx.$$
If the determinant of $G$ vanishes, then $G$ is singular; let $a$ be a
non-zero $n$-vector with $Ga=0$. Then
$$0=a^TGa=\sum\limits_{i=1}^n\sum\limits_{i=j}^n\int\limits_a^b
a_if_i(x)a_jf_j(x)dx=\int\limits_a^b\left(\sum\limits_{i=1}^n
a_if_i(x)\right)^2dx,$$ so, since the $f_i$'s are continuous
functions, the linear combination $\sum a_if_i$ must vanish
identically.
I have troubles replicating the scalar product calculation above. Let $a_i$ be the $i$-th component of $a$, then the $i$-th component of $Ga$ is $\sum\limits_{j=1}^n\int\limits_a^b f_i(x)a_jf_j(x)dx$ and so
$$a^TGa=\sum\limits_{i=1}^n\sum\limits_{j=1}^n\int\limits_a^b a_if_i(x)a_jf_j(x)dx,$$
i.e. my summation is $\sum\limits_{i=1}^n\sum\limits_{j=1}^n$ rather than $\sum\limits_{i=1}^n\sum\limits_{i=j}^n$. Can someone please spot my mistake and explain the calculation in more details?
AI: I see nothing wrong in your computations. This is probably due to a typo in the original proof. But yours is fine.
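For intuition (not a replacement for the proof), here is a small numerical sketch of the "determinant vanishes $\Rightarrow$ dependent" direction; the three functions are just an arbitrary linearly dependent example:

```python
import numpy as np
from scipy.integrate import quad

# linearly dependent family on [0, 1]: f3 = f1 + f2
fs = [lambda x: 1.0, lambda x: x, lambda x: 1.0 + x]

G = np.array([[quad(lambda x, fi=fi, fj=fj: fi(x) * fj(x), 0, 1)[0]
               for fj in fs] for fi in fs])
print(np.linalg.det(G))              # ~0: the Gram matrix is singular

w, V = np.linalg.eigh(G)
a = V[:, 0]                          # null vector a with G a ~ 0
xs = np.linspace(0, 1, 5)
print(max(abs(sum(ai * f(x) for ai, f in zip(a, fs))) for x in xs))  # ~0: sum a_i f_i vanishes
```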
|
H: Probability selecting three cards out of a deck
In Introduction to Probability by Blitzstein & Hwang, Chapter 2 Problem 5:
Three cards are dealt from a standard, well-shuffled deck. The first two cards are flipped
over, revealing the Ace of Spades as the first card and the 8 of Clubs as the second card.
Given this information, find the probability that the third card is an ace in two ways:
using the definition of conditional probability, and by symmetry.
Solution:
Let A be the event that the first card is Ace of Spades, B be the event that second card is 8 of Clubs, and C be the event that third card is an Ace.
$P(C|A,B) = \dfrac{P(A,B,C)}{P(A,B)}$
Numerator: Having the first card as the Ace of Spades, the second as the 8 of Clubs and the third as an Ace is like choosing three cards out of 52 without replacement. However, there are 3 ways for the third card to be an Ace, since there are three Aces left: the Ace of Hearts, Diamonds, and Clubs.
$P(A,B,C) = 3\cdot(\dfrac{1}{52})(\dfrac{1}{51})(\dfrac{1}{50})$
Denominator: This is the same as choosing two cards out of 52 without replacement.
$P(A,B) = (\dfrac{1}{52})(\dfrac{1}{51})$
Therefore, $P(C|A,B) = \dfrac{P(A,B,C)}{P(A,B)} = \dfrac{3\cdot(\dfrac{1}{52})(\dfrac{1}{51})(\dfrac{1}{50})}{(\dfrac{1}{52})(\dfrac{1}{51})} = \dfrac{3}{50}$
Is this solution correct? By the way I don't get it as how to use symmetry to view this problem...
AI: Symmetry: there are 50 cards left. Each has the same probability so the probability to get an ace is $\tfrac{3}{50}$ as there are 3 aces left.
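To see the definition-of-conditional-probability route concretely, here is a small brute-force sketch (the card encoding is my own):

```python
from itertools import permutations
from fractions import Fraction

deck = [(r, s) for r in range(1, 14) for s in 'SHDC']   # rank 1 = Ace

n_ab = n_abc = 0
for c1, c2, c3 in permutations(deck, 3):                # all 52*51*50 ordered deals
    if c1 == (1, 'S') and c2 == (8, 'C'):               # events A and B occur
        n_ab += 1
        if c3[0] == 1:                                  # event C: third card is an ace
            n_abc += 1

print(Fraction(n_abc, n_ab))   # 3/50
```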
|
H: Calculate $\int x^m \ln(x)\,dx,\,\,\, m \in \mathbb{Z}$
Calculate
$$\int x^m \ln(x)\,dx,\,\,\, m \in \mathbb{Z}$$
My attempt:
First suppose $m\ne-1$
$$\int x^m \ln(x)\,dx=\left[\frac{1}{(m+1)}x^{m+1}\ln(x)\right]-\int \frac{1}{(m+1)}x^{m+1}x^{-1}\,dx$$
$$=\left[\frac{1}{(m+1)}x^{m+1}\ln(x)\right]-\int \frac{1}{(m+1)}x^mdx=\left[\frac{1}{(m+1)}x^{m+1}\ln(x)\right]-\left[\frac{1}{(m+1)^2}x^{m+1}\right]$$
$$=\left[\frac{1}{(m+1)}x^{m+1}\ln(x)-\frac{1}{(m+1)^2}x^{m+1}\right]$$
Now suppose $m=-1$
$$\int x^{-1} \ln(x)\,dx=\left[(\ln(x))^2\right]-\int \ln(x)x^{-1}\,dx$$
$$\Longleftrightarrow 2\int x^{-1} \ln(x)\,dx=\left[(\ln(x))^2\right]$$
$$\Longrightarrow \int x^{m} \ln(x) \, dx=\frac{1}{2} \left[(\ln(x))^2\right], m=-1$$
Hey it would be great if someone could check my attempt to solve the task :) thank you
AI: You solution looks correct.
I obtain the same result using an alternative approach.
Substitute $ x = e^y $, the integral will transform to,
$$I = \int ye^{(m+1)y} dy$$
For $m \neq -1$, by using integration by parts we get,
$$I = \frac{ye^{(m+1)y}}{m+1} - \frac{e^{(m+1)y}}{(m+1)^2} $$
Replacing $y$ gives,
$$I = \frac{\ln(x)}{m+1}x^{(m+1)} - \frac{x^{(m+1)}}{(m+1)^2} $$
For $ m = -1$,
$$ I = \int y dy = \frac{1}{2}y^2$$
Replacing $y$ gives,
$$ I = \frac{1}{2} [\ln(x)]^2 $$
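Both cases can also be spot-checked by differentiating the antiderivatives; here is a quick SymPy sketch with an arbitrary sample value of $m$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

m = 3   # any integer other than -1
F = x**(m + 1) * sp.log(x) / (m + 1) - x**(m + 1) / (m + 1)**2
print(sp.simplify(sp.diff(F, x) - x**m * sp.log(x)))    # expected: 0

G = sp.log(x)**2 / 2                                    # the m = -1 case
print(sp.simplify(sp.diff(G, x) - sp.log(x) / x))       # expected: 0
```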
|
H: Does this situation indicate that $g(x)>f(x)$?
I have two functions $f(x)$ and $g(x)$ for $0<x<5$. By calculating the first and second derivatives I see that both functions are increasing and concave (not strictly concave), i.e. $f''<0$ and $g''<0$. If I also have $$f(0)=g(0)=0$$ and $$f(5)=10\quad ,\quad g(5)=11,$$ does this prove that $g$ is always greater than $f$?
AI: No. Take $f(x)=10\sqrt[3]{x/5}$ and $g(x)=11\sqrt{x/5}$.
You get something like this
You can easily come up with the same type of plot if $f''$ and $g''$ are positive
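A quick numerical look at this counterexample (sample points chosen arbitrarily) shows $f>g$ near $0$ even though $g(5)>f(5)$:

```python
f = lambda x: 10 * (x / 5) ** (1 / 3)   # increasing, concave, f(0)=0, f(5)=10
g = lambda x: 11 * (x / 5) ** 0.5       # increasing, concave, g(0)=0, g(5)=11

for x in (0.01, 0.1, 1.0, 5.0):
    print(x, round(f(x), 3), round(g(x), 3))   # f(x) > g(x) near 0
```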
|
H: Bounded idempotent on Hilbert space has closed range.
Let $H$ be a Hilbert space and $p\in B(H)$ an idempotent on $H$, i.e. a continuous linear map satisfying $p^2 = p$. Is it true that $p(H)$ is a closed subspace of $H$?
Attempt:
Consider a sequence $p(h_n) \to h $. We show $h \in p(H)$. By continuity, $$p(h_n) = p^2(h_n) = p(p(h_n)) \to p(h)$$ but also $p(h_n) \to h$ so we obtain $h = p(h) \in p(H)$ and we are done.
Is this correct?
AI: Yes, your proof is correct.
As an alternative approach, we could note that $p(H) = \ker(\operatorname{id}_H - p)$, and the kernel of any bounded linear map is closed.
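For completeness, here is the short verification that $p(H) = \ker(\operatorname{id}_H - p)$, which uses only $p^2 = p$:
$$
h = p(x) \implies (\operatorname{id}_H - p)h = p(x) - p^2(x) = 0,
\qquad
(\operatorname{id}_H - p)h = 0 \implies h = p(h) \in p(H).
$$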
|
H: Solution to trigonometric equation
I wonder about the solution to the following equation:
If $0\le x,y\le\pi$, solve the equation $\cos(x-y)=\cos(2x)$ for $x$.
My attempted solutions are $x=\frac{y+2k\pi}{3}$ for $k=0,1$.
Am i right? I'm not sure about that ;(
Give some comment. Thank you!
AI: Use https://mathworld.wolfram.com/ProsthaphaeresisFormulas.html
$$\cos(x-y)-\cos2x=2\sin\dfrac{2x+x-y}2\sin\dfrac{2x-(x-y)}2$$
Finally $\sin z=0\implies z=n\pi$ for any integer $n$
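A quick symbolic check of that factorisation (a SymPy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
expr = sp.cos(x - y) - sp.cos(2*x) - 2*sp.sin((3*x - y)/2)*sp.sin((x + y)/2)
print(sp.simplify(expr))                        # expected: 0
print(expr.subs({x: 0.37, y: 1.21}).evalf())    # numeric spot check, ~0
```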
|
H: What is the largest number of row operations required to row reduce an $n \times n$ matrix efficiently?
Given an $n \times n$ matrix, and row operations shear, switch, and scale, what is the largest number of efficient row operations required to row reduce to echelon form?
For reference, a shear is defined as $R_i \rightarrow R_i + \lambda R_j$, a switch $R_i \leftarrow \rightarrow R_j$ and a scale, $R_i \rightarrow \lambda R_i$ for non-zero $\lambda$.
I've claimed that $n^2$ is an upper-bound for efficient operations, but never proven. So, I'm curious, how would one determine the best set of moves? I would ask about the least amount, but that is highly dependent on the composition of the matrix. Specifically, if the matrix is already in reduced-row-echelon form then no moves are required. Even supposing that the matrix isn't, the ideal scenario is we have to perform 1 operation. Hence, a greater motivation for the maximum of efficient moves instead of least amount of moves.
A possible application of this result could aid students in understanding if row-reduction of an arbitrary $n \times n$ matrix is incorrect based on length. For example, if $n^2$ is held true, a $2 \times 2$ matrix should take no more than 4 efficient operations to solve.
AI: If the matrix elements are algebraically independent, each row operation can only produce one more $0$ or $1$. For an $n \times n$ nonsingular matrix, you want $n(n-1)/2$ $0$'s and $n$ $1$'s, so this will take at least $n(n+1)/2$ row operations.
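To see how a routine reduction compares with that bound, here is a small sketch that counts operations while reducing a random (generically nonsingular) matrix to echelon form with leading $1$s, assuming no row switches are needed: it uses exactly $n$ scales and $n(n-1)/2$ shears, i.e. $n(n+1)/2$ operations, matching the lower bound above.

```python
import numpy as np

def echelon_op_count(A):
    """Count scale/shear operations in a naive reduction to row echelon form."""
    A = A.astype(float).copy()
    n = A.shape[0]
    ops = 0
    for i in range(n):
        A[i] /= A[i, i]              # scale (assumes the pivot is nonzero)
        ops += 1
        for j in range(i + 1, n):    # shear away the entries below the pivot
            A[j] -= A[j, i] * A[i]
            ops += 1
    return ops

n = 6
print(echelon_op_count(np.random.rand(n, n)), n * (n + 1) // 2)   # both 21
```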
|
H: High dimensional "eigenvector"
I have come across an oddity in the solution to a problem and I am unsure with how to proceed.
For an $n\times n$ (symmetric and real-valued, usually poorly conditioned and low-rank) matrix $A$, and an $n\times m$ matrix $B$ (with $m \leq n$), and a scalar $c \neq 0$, the solution comes where:$$AB = cB$$
If $B$ were a vector, this would be a straightforward eigenvalue problem. However, $B$ is in this case a tall skinny matrix.
My intuition says that each column of $B$ must be an eigenvector of $A$ associated with the eigenvalue $c$. But as far as I can tell the only repeated eigenvalue of $A$ is $0$. Is there a solution to this problem?
AI: It is indeed the case that $AB = cB$ means that every column of $B$ is an eigenvector of $A$ associated with eigenvalue $c$. Indeed, if $B_1,\dots,B_n$ denote the columns of $B$, then we have
$$
AB = cB \implies A[B_1 \ \ \cdots \ \ B_n] = c[B_1 \ \ \cdots \ \ B_n] \implies
\\
[AB_1 \ \ \cdots \ \ AB_n] = [cB_1 \ \ \cdots \ \ cB_n].
$$
The column span of $B$ must be a subspace of the eigenspace associated with $c$, so the rank of $B$ is at most equal to the multiplicity of the eigenvalue $c$.
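A small numerical illustration (with arbitrarily chosen data): build a symmetric $A$ whose eigenvalue $c$ has multiplicity $2$ and take $B$ with columns in that eigenspace; then $AB=cB$ and each column is an eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthonormal eigenvector basis
c = 3.0
A = Q @ np.diag([c, c, -1.0, 0.5]) @ Q.T           # symmetric; eigenvalue c has multiplicity 2

B = Q[:, :2] @ rng.standard_normal((2, 2))         # columns lie in the c-eigenspace
print(np.allclose(A @ B, c * B))                   # True
print(np.allclose(A @ B[:, 0], c * B[:, 0]))       # each column is an eigenvector for c
```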
|
H: If $\sum a_{n} x^{n}$ converges in $( -1,1)$, then $\sum a^{k}_{n} x^{n}$ also converges in $( -1,1)$
I have to prove or disprove the following statement:
If $\sum a_{n} x^{n}$ converges in $( -1,1)$, then $\sum a^{k}_{n} x^{n}$ also converges in $( -1,1)$ , for $k\in \mathbb{N} $.
I actually already proved it. But I'm not 100% sure about my proof. Could someone confirm or find a way to disprove it?
My solution:
$\displaystyle \sum a_{n} x^{n}$ is a power series with center $\displaystyle 0$; it's given that it converges in $\displaystyle ( -1,1)$, so its radius of convergence is at least $\displaystyle 1$. Let's denote the radius $\displaystyle r$.
By definition $\displaystyle r\ =\ \frac{1}{\lim _{n\rightarrow \infty }\sqrt[n]{|a_{n} |}}$. Since $\displaystyle 1\leqslant r\leqslant \infty $, it follows that $\displaystyle 0\leqslant \lim _{n\rightarrow \infty }\sqrt[n]{|a_{n} |} \leqslant 1$.
Now let's look at $\displaystyle \sum a^{k}_{n} x^{n}$. This is also a power series with center $\displaystyle 0$; in order to prove it converges in $\displaystyle ( -1,1)$ we have to prove its radius is at least $\displaystyle 1$. Let's denote its radius $\displaystyle r_{k}$. We need to prove that $\displaystyle 1\leqslant r_{k} \leqslant \infty $, meaning $\displaystyle 1\leqslant \frac{1}{\lim _{n\rightarrow \infty }\sqrt[n]{|a^{k}_{n} |}} \leqslant \infty $, meaning $\displaystyle 0\leqslant \lim _{n\rightarrow \infty }\sqrt[n]{|a^{k}_{n} |} \leqslant 1$.
We know that $\displaystyle 0\leqslant \lim _{n\rightarrow \infty } |a_{n} |^{\frac{1}{n}} \leqslant 1$; it follows that also $\displaystyle 0\leqslant \lim _{n\rightarrow \infty }\left( |a_{n} |^{\frac{1}{n}}\right)^{k} \leqslant 1$, since $\displaystyle k\in \mathbb{N}$.
From this, $\displaystyle 1\leqslant r_{k} \leqslant \infty $: the radius is at least $\displaystyle 1$, so the series converges in $\displaystyle ( -1,1)$.
Thanks a lot
AI: My solution:
$\displaystyle \sum a_{n} x^{n}$ is a power series with center $\displaystyle 0$; it's given that it converges in $\displaystyle ( -1,1)$, so its radius of convergence is at least $\displaystyle 1$. Let's denote the radius $\displaystyle r$.
By definition $\displaystyle r\ =\ \frac{1}{\lim _{n\rightarrow \infty }\sqrt[n]{|a_{n} |}}$. Since $\displaystyle 1\leqslant r\leqslant \infty $, it follows that $\displaystyle 0\leqslant \lim _{n\rightarrow \infty }\sqrt[n]{|a_{n} |} \leqslant 1$.
Now let's look at $\displaystyle \sum a^{k}_{n} x^{n}$. This is also a power series with center $\displaystyle 0$; in order to prove it converges in $\displaystyle ( -1,1)$ we have to prove its radius is at least $\displaystyle 1$. Let's denote its radius $\displaystyle r_{k}$. We need to prove that $\displaystyle 1\leqslant r_{k} \leqslant \infty $, meaning $\displaystyle 1\leqslant \frac{1}{\lim _{n\rightarrow \infty }\sqrt[n]{|a^{k}_{n} |}} \leqslant \infty $, meaning $\displaystyle 0\leqslant \lim _{n\rightarrow \infty }\sqrt[n]{|a^{k}_{n} |} \leqslant 1$.
We know that $\displaystyle 0\leqslant \lim _{n\rightarrow \infty } |a_{n} |^{\frac{1}{n}} \leqslant 1$; it follows that also $\displaystyle 0\leqslant \lim _{n\rightarrow \infty }\left( |a_{n} |^{\frac{1}{n}}\right)^{k} \leqslant 1$, since $\displaystyle k\in \mathbb{N}$.
From this, $\displaystyle 1\leqslant r_{k} \leqslant \infty $: the radius is at least $\displaystyle 1$, so the series converges in $\displaystyle ( -1,1)$.
|
H: If $f\cdot g$ is differentiable at $x_0$, what conditions on $f$ guarantee the differentiability of $g$ at $x_0$?
Justify: If $f\cdot g$ is differentiable at $x_0$, what conditions on $f$ guarantee the differentiability of $g$ at $x_0$?
I tried coming up with an example, like $f(x)=x$ and $g(x)=|x|$. $g$ is not differentiable at $x_0=0$, but $f$ is and $f\cdot g$ is differentiable at $x_0=0$. Now, let $f(x)=g(x)=|x|$. Neither functions are differentiable at $x_0$, but $f\cdot g$ is. So, can I conclude $f$ just has to be continuous at $x_0=0$? Any hint, please?
Edit: $f\cdot g=f(x)g(x)$.
AI: It suffices for $f$ to be differentiable and $f(x_0) \ne 0$. Let $h(x)=f(x) g(x)$. Then
$$g'(x) = \left(\frac {h(x)} {f(x)}\right)'
= \frac {h'(x) f(x) - f'(x) h(x)} {f^2(x)}$$
On the other hand, this doesn't have to hold when $f(x_0)=0$. One example is $f(x) \equiv 0$ and $g$ any (non-differentiable) function.
|
H: Determine whether the given function $F(x,y) = x/y$ is as binary operation on the given set $A$
The given function is $F(x,y) = x/y$ for all $x$ and $y$ in the set $A$.
where $A = \{a+b \sqrt[]{2} : a,b \in \mathbb{Q}\}$
and $a$ and $b$ can't both be $0$.
So what I need is to prove $F(x,y) \in A$ for every $x,y \in A$
Here is what I tried:
Let $x = \frac{p}{q} +\frac{m}{n} \sqrt[]2$ and $y =\frac{u}{v} +\frac{k}{j} \sqrt[]2$
Then $$F(x,y) = \frac{\frac{p}{q} +\frac{m}{n} \sqrt[]2}{\frac{u}{v} +\frac{k}{j} \sqrt[]2}$$
$$F(x,y) = \frac{\frac{pn+mq \sqrt[]{2}}{qn}}{\frac{uj+kv \sqrt[]{2}}{vj}}$$
$$F(x,y) = \frac{pn+mq \sqrt[]{2}}{qn}\cdot\frac{vj}{uj+kv \sqrt[]{2}}$$
$$F(x,y) = \frac{vjpn+vjmq \sqrt[]{2}}{qnuj+qnkv \sqrt[]{2}}$$
I got stuck here. It doesn't seem that the result of this function will always be a number of the form of the elements of $A$; how can I show this, if that is the case?
AI: For $y=0=0+0\sqrt{2}\in A$, the function $F$ is not well defined, but I assume you just missed that.
In general, if you have a fraction $\frac{a+b\sqrt{2}}{c+d\sqrt{2}}$ you can multiply above and below by $c-d\sqrt{2}$, which gives you
$$\frac{(ac-2bd)+(-ad+bc)\sqrt{2}}{c^2-2d^2}$$
which is clearly in $A$.
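A quick symbolic check of that rationalisation (a SymPy sketch):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', rational=True)
lhs = (a + b*sp.sqrt(2)) / (c + d*sp.sqrt(2))
rhs = ((a*c - 2*b*d) + (b*c - a*d)*sp.sqrt(2)) / (c**2 - 2*d**2)
print(sp.simplify(lhs - rhs))   # expected: 0
```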
|
H: Let $f(x) \in \mathbb{Q}[x]$ be an irreducible polynomial of degree $4$ with exactly $2$ real roots.
Show that the Galois group of $f$ over $\mathbb{Q}$ is either $S_4$ or the dihedral group of order $8$.
So I'm studying for quals remotely, in a state hundreds of miles away from my university; the state that my university is in is also under quarantine due to the pandemic. Before the pandemic one of my friends gave me this solution.
Since $f$ is degree $4$ we know that the Galois group is either $Z_4, V_4, D_8, A_4$, or $S_4$. Since it has $2$ real roots it has a pair of complex conjugate roots. Since complex conjugation swaps these two roots and fixes the others, it acts as a transposition on the roots, and since the only transitive subgroups containing a transposition are $D_8$ and $S_4$, we are done.
So I understand why we have $2$ complex roots but I don't understand why they have to be complex conjugate to one another. I am also struggling to figure out an element in $D_8$ that is equivalent to a transposition.
Any help is appreciated.
AI: As Theo says, the key for seeing that the complex roots come as conjugate pairs is that $f$ has real coefficients.
For finding transpositions in $D_8$, why not think of $D_8$ as the symmetry group of the square? By this analogy, the elements of $D_8$ are permuting the four corners of the square (whereas in your original problem, the elements of $D_8$ are permuting the four roots of $f(x)$).
Numbering the vertices of the square in clockwise order, the elements of $D_8$ are:
The identity, $e$.
The 90-degree rotations, $(1234)$ and $(1432)$.
The 180-degree rotation, $(13)(24)$,
The reflections across the horizontal and the vertical, $(14)(23)$ and $(12)(34)$
The reflections across the diagonals, $(13)$ and $(24)$.
The reflections across the diagonals are your transpositions.
|
H: Rescaling multivariable normal pdf and normalizing constant
I am trying to understand change of variables for a random variable and how it changes the pdf and the normalizing constant.
Let $\mathbf{w}$ be $N$-dimensional normal variable and let $\mathbf{S}$ be covariance matrix. I can diagonalize that matrix: $\mathbf{S} =\mathbf{O \Sigma O^T} $ and change original variable: $\mathbf{ x = O^T w} $. Now I have diagonal covariance matrix that I call $\mathbf{\Sigma_x}$, and probability density function for $x$ is
$$
P_x(\mathbf{x}) = \frac{\exp\{-\frac12 \mathbf{x}^T \mathbf{\Sigma^{-1}_x} \mathbf{x}\}}{\int d\mathbf{x} \exp\{-\frac12 \mathbf{x}^T \mathbf{\Sigma^{-1}_x} \mathbf{x}\}} = (2\pi)^{-N/2} (\det \mathbf{\Sigma_x})^{-\frac12} \exp\{-\frac12 \mathbf{x}^T \mathbf{\Sigma^{-1}_x} \mathbf{x}\}
$$
Now I make the change of variable to $\mathbf{y} = a \mathbf{x}$ and calculate Jacobian. Function for change of variable is $g(x) = a x$ and $g^{-1}(y) = a^{-1} y$ so according to the changing variable method new pdf must be
$$
P_y(\mathbf{y}) = P_x(g^{-1}(y)) \begin{vmatrix} \frac{d g^{-1}(y)}{dy}\end{vmatrix}
$$
where $\begin{vmatrix} \frac{d g^{-1}(y)}{dy}\end{vmatrix}$ is Jacobian. For case of rescaling of the variable this is just $\begin{vmatrix} a^{-1}\end{vmatrix}$. So pdf for $y$ is
$$
P_y(\mathbf{y}) = (2\pi)^{-N/2} [\det (a\ \mathbf{\Sigma_x} \ a)]^{-\frac12} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\} \\= a^{-N} (2\pi)^{-N/2} [\det (\mathbf{\Sigma_x})]^{-\frac12} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}
$$
Now I try to calculate this result by making substitution $\mathbf{x}=a^{-1}\mathbf{y}$ in $P_x(\mathbf{x})$, but I get different answer for the integral in normalizing constant.
$$
P_y(\mathbf{y}) = \frac{\exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}}{\int d\mathbf{y} a^{-N} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}}
$$
The denominator above is normalizing constant and is a gauss integral
$$
\begin{align}
\mathcal{N}&=a^{-N} \int d\mathbf{y}\exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\} = a^{-N} (2\pi)^{N/2} [\det a^2 \mathbf{\Sigma_x}]^{1/2} \\&= a^{-N}a^{N} (2\pi)^{N/2} [\det \mathbf{\Sigma_x}]^{1/2} = (2\pi)^{N/2} [\det \mathbf{\Sigma_x}]^{1/2}
\end{align}
$$
So the $a$ is cancelling out here and new pdf is
$$
P_y(\mathbf{y}) = (2\pi)^{-N/2} [\det (\mathbf{\Sigma_x})]^{-\frac12} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}
$$
I don't understand why this is different from answer I got when using Jacobian formula? I must have made mistakes in the integral. Thank you.
AI: Your expression for $P_y(\mathbf y)$ should be
$$
P_y(\mathbf{y}) = \frac{a^{-N} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}}{\int d\mathbf{y} \ a^{-N} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}}.
$$
i.e. the factor of $a^{-N}$ should appear in the numerator too.
Remember, the whole point of the normalization factor $\int d\mathbf{y} \ a^{-N} \exp\{-\frac12 \mathbf{y}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{y}\}$ in the denominator is to ensure that $\int d\mathbf y P_y(\mathbf y) = 1$.
Staring at my expression, it's clear that $\int d\mathbf y P_y(\mathbf y) = 1$.
But with your expression in your question, it's clear that you'd get $\int d\mathbf y P_y(\mathbf y) = a^N$, which isn't right.
To address point raised in comments...
It's legitimate to substitute variables inside the integral. There, we're substituting one dummy variable for another dummy variable, which is fine. For example, it's legitimate to write
$$
P_x(\mathbf{x}) = \frac{\exp\{-\frac12 \mathbf{x}^T \mathbf{\Sigma^{-1}_x} \mathbf{x}\}}{\int d\mathbf{x'} \ \exp\{-\frac12 \mathbf{x'}^T \mathbf{\Sigma^{-1}_x} \mathbf{x'}\}} = \frac{\exp\{-\frac12 \mathbf{x}^T \mathbf{\Sigma^{-1}_x} \mathbf{x}\}}{\int d\mathbf{u} \ a^{-N} \exp\{-\frac12 \mathbf{u}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{u}\}}.
$$
where we've performed a variable substitution $\mathbf {x'} = a^{-1} \mathbf u$ inside the integral.
But to derive $P_y(\mathbf {y})$ from $P_x(\mathbf {x})$, you must use the Jacobian. That is the only legitimate way to get the expression for $P_y(\mathbf {y})$.
$$ P_y(\mathbf y) = P_x(\mathbf x) \left| \frac{\partial \mathbf x}{\partial \mathbf y}\right| = a^{-N} P_x(\mathbf x) = \frac{a^{-N}\exp\{-\frac12 \mathbf{x}^T \mathbf{\Sigma^{-1}_x} \mathbf{x}\}}{\int d\mathbf{u} \ a^{-N} \exp\{-\frac12 \mathbf{u}^T \frac{\mathbf{\Sigma^{-1}_x}}{a^2} \mathbf{u}\}}$$
NB The correct way to change variables for the pdf is to use the Jacobian formula,
$$ P_y(\mathbf y) = P_x(\mathbf x) \left| \frac{\partial \mathbf x}{\partial \mathbf y}\right| = a^{-N} P_x(\mathbf x) .$$
It is incorrect to say that
$$ P_y(\mathbf y) = P_x(\mathbf x) $$
Why is this?
Consider a infinitesimal volume element $\delta \mathbf x$ in $\mathbf x$-space, which maps to an infinitesimal volume element $\delta \mathbf y$ in $\mathbf y$-space. What is the probability that the sample lies in this infinitesimal volume element?
We can calculate this probability in two ways - using the $\mathbf x$-space representation or using the $\mathbf y$-space representation. The two answers must be equal. In other words,
$$ P_x(\mathbf x) \delta \mathbf x = P_y(\mathbf y) \delta \mathbf y.$$
But the volumes of the volume elements are different! They are related by
$$ \delta \mathbf x = \left| \frac{\partial \mathbf x}{\partial \mathbf y}\right| \times \delta \mathbf y.$$
Hence it follows that
$$ P_x(\mathbf x) \left| \frac{\partial \mathbf x}{\partial \mathbf y}\right| = P_y(\mathbf y).$$
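Here is a small numerical sketch of that last point, with arbitrary choices $N=2$, $\Sigma_x=\operatorname{diag}(2,\,0.5)$ and $a=3$: with the $a^{-N}$ factor the transformed density integrates to $1$, while without it the integral is $a^N$.

```python
import numpy as np

N, a = 2, 3.0
Sigma = np.diag([2.0, 0.5])
Sinv = np.linalg.inv(Sigma)
norm = (2 * np.pi) ** (-N / 2) * np.linalg.det(Sigma) ** (-0.5)

t = np.linspace(-25, 25, 1201)                  # wide enough for covariance a^2 * Sigma
Y1, Y2 = np.meshgrid(t, t, indexing="ij")
quad_form = (Y1**2 * Sinv[0, 0] + Y2**2 * Sinv[1, 1]) / a**2
dens = norm * np.exp(-0.5 * quad_form)

integral = np.trapz(np.trapz(dens, t, axis=1), t)
print(a ** (-N) * integral)   # ~1        (with the Jacobian factor a^{-N})
print(integral)               # ~a**N = 9 (without it)
```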
|
H: The signature of the tensor product of skew-symmetric non-singular matrices
Let $X$ be a non-singular (real) symmetric matrix which is the tensor product of two (real) $n\times n$ skew symmetric non-singular matrices, i.e. $X=A\otimes B$. Then how to see the number of negative eigenvalues of $X$ equals to the number of positive eigenvalues of $X$.
What I know is the eigenvalues of $X$ are the pairwise products of the eigenvalues of $A$ and $B$ and the eigenvalues of $X$ are purely imaginary. I wonder if it suffices to show the claim? If so, is that possible to show me a proof?
AI: Hint: In addition to the fact that eigenvalues of $X$ are the pairwise products of the eigenvalues of $A$ and $B$ and that the eigenvalues of $A$ and $B$ are purely imaginary, note that the eigenvalues of $A$ (and $B$) come in conjugate pairs. That is, both $A$ and $B$ have eigenvalues that come in pairs $\pm \lambda i$ (with $\lambda \in \Bbb R$).
Alternatively: note that $A$ is similar to $A^T = -A$. That is, there exists an invertible matrix $S$ such that $SAS^{-1} = -A$. It follows that
$$
(S \otimes I)(A \otimes B)(S \otimes I)^{-1} =
(S AS^{-1}) \otimes B = (-A)\otimes B= -(A \otimes B).
$$
That is, $A \otimes B$ is a non-singular symmetric matrix that is similar to $-(A \otimes B)$. This is enough to show the claim.
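A quick numerical illustration of the claim (random skew-symmetric matrices of even size, so they are generically nonsingular):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_skew(n):
    M = rng.standard_normal((n, n))
    return M - M.T                              # skew-symmetric

A, B = random_skew(4), random_skew(6)
X = np.kron(A, B)
print(np.allclose(X, X.T))                      # True: A ⊗ B is symmetric
eig = np.linalg.eigvalsh(X)
print((eig > 0).sum(), (eig < 0).sum())         # equal counts (12 and 12 here)
```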
|
H: Function with a parameter controlling its growth
I am looking for a mathematical function with growth controlled by a parameter.
It would have two inputs:
A growth scale, further called $w$
An input ranging from $0$ to $1$, further called $x$
The function $f(x, w)$ should behave according to the following pattern.
If $w$ equals $2$, it would be a linear function (I think):
if $x = \frac{1}{2}$, then $f(x,w)$ should be $\frac{1}{2}$
if $x = 1$, then $f(x, w)$ should be $\frac{2}{2}$
If $w$ equals $3$, the growth increases:
If $x = \frac{1}{2}$, $f(x, w)$ should be $\frac{1}{3}$
If $x = \frac{3}{4}$ (or $\frac{1}{2} + \frac{1}{2} * \frac{1}{2}$), $f(x, w)$ should be $\frac{2}{3}$
If $x = 1$, $f(x, w)$ should be $\frac{3}{3}$
If $w$ equals $4$, the growth increases further:
If $x = \frac{1}{2}$, $f(x, w)$ should be $\frac{1}{4}$
If $x = \frac{3}{4}$, $f(x, w)$ should be $\frac{2}{4}$
If $x = \frac{7}{8}$, $f(x, w)$ should be $\frac{3}{4}$
If $x = 1$, $f(x, w)$ should be $\frac{4}{4}$
This behaviour should continue infinitely as $w$ increases.
How would the function for this behaviour look like?
AI: For $\;\displaystyle x=1-\frac 1{2^n},\ \ 1\le n<w\;$ you want the result $\displaystyle \frac nw\,$ while $x=1$ should have the image $1$.
$\displaystyle f(x,w)=-\frac{\log_2(1-x)}w\;$ for $x<1$ may be a starting point but probably not what you really wished...
The logarithm will indeed produce a singularity for $x=1$ instead of the $1$ value.
To illustrate the problem let's complete DMcMor's clear illustration in the case $w=5$ :
The way your points are produced becomes pretty clear :
the middle of $\;(0,0)-(1,2/w)\;$ for the first point $\;(1-1/2,1/w)\;$
the middle of the previous point and $\,(1,3/w)\;$ for the second one
and so on until getting to $(1,w/w)$ and producing $(1-1/2^{w-1},(w-1)/w)$
Clearly we could continue the process as illustrated and produce the next $\displaystyle (1-1/2^n,n/w)$ points up to infinity.
Since you want the value $1\,$ for $\,x=1\,$ I would suggest the function :
$$f(x,w)=\begin{cases}
-\dfrac{\log_2(1-x)}w & x\le 1-\dfrac 1{2^{w-1}}\\
1-\dfrac {2^{w-1}(1-x)}w & x>1-\dfrac 1{2^{w-1}}\\
\end{cases}$$
with a linear interpolation for the last segment as in the illustration.
You could also repeat the previous segment, shifted vertically by $\dfrac 1w$, to get the last one (your suggestion, I think):
$$f\left(x-\frac 1{2^{w-1}},\,w\right)+\frac 1w$$
or, for a smoother derivative at $\,1-1/2^{w-1}\,$, prefer the symmetric:
$$2-\frac 2w-f\left(2-\frac 1{2^{w-2}}-x,\,w\right)$$
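For reference, here is a small code sketch of the suggested function (the first, linearly interpolated variant), checking the anchor points for $w=4$:

```python
import math

def f(x, w):
    """Logarithmic part, with linear interpolation on the last segment."""
    cut = 1 - 1 / 2 ** (w - 1)
    if x <= cut:
        return -math.log2(1 - x) / w
    return 1 - 2 ** (w - 1) * (1 - x) / w

w = 4
for x, expected in [(1/2, 1/4), (3/4, 2/4), (7/8, 3/4), (1.0, 1.0)]:
    print(x, f(x, w), expected)    # the computed values match the expected ones
```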
|
H: What does it mean to say that a measure is supported on a set?
What does it mean to say that a measure $\mu$ is supported on set $E$ ? I keep coming across this phrase, but its meaning is always assumed which can get somewhat confusing.
Does it simply mean that for each (open) neighborhood $N \subset E$, we have that $\mu(N) \neq 0$ ?
AI: It simply means that for any measurable set $A$, $\mu(A) = \mu(A \cap E)$.
(We can't talk about open neighbourhoods, since our measurable space doesn't necessarily have the structure of a topological space.)
|
H: $\omega$ a continuous $m$-form in $M$, $\dim M=m$, $M$ is orientable $\iff \omega \neq 0$
Let $M$ be a surface, $\dim M=m$, and let $\omega$ be a continuous $m$-form on $M$. I want to prove that $M$ is orientable $\iff \omega(x)\neq 0\; \forall x\in M$.
The $(\Leftarrow)$ part is OK to me, since I saw this post. However, I don't know exactly how to prove the $(\Rightarrow)$ part, even if it looks kind of obvious.
I tried taking two parametrizations $\phi,\psi$ of $x\in M$ in an orientation of $M$ and writing $\omega$ in each basis, but I don't know exactly how to proceed.
Any tips?
AI: The statement as written is false. On any manifold (orientable or otherwise), there exists a continuous $m$-form that is equal to zero everywhere. That is a trivial counterexample.
I think the statement should say, if $M$ is orientable then there exists a continuous $m$-form $\omega$ such that $\omega(x) \neq 0$ for all $x \in M$.
To construct this $m$-form, I suggest you take a positively-oriented atlas, and construct positive-valued $m$-forms locally in each coordinate chart. Then multiply these local $m$-forms by a partition of unity (so that they can be extended continuously to the entire manifold), and add them together. Use the fact that the Jacobian determinants are positive (since the charts are positively-oriented with respect to each other) to conclude that the sum of the local $m$-forms is also positive in each chart.
|
H: How is the "diagonal proof" valid?
Years ago I was taught a proof that there are more irrational numbers than rational numbers, i.e. that it is a bigger infinity, that went like this:
Take the set of all rational numbers. Now construct a new number as follows: Take the first rational number, and choose a digit for the first digit of our constructed number that is different from the first digit of this number. Then make the second digit different from the second digit of the second number. Make the third digit different from the third digit of the third number. Etc. If you did this for an infinite number of digits, you would then have a number that is different from all the numbers in the original set. Therefore, there are numbers that are not in the set of rational numbers. Therefore, there are more irrational numbers than there are rational numbers.
I see the proof up to the last sentence. How does this prove that there are more irrational numbers? Just because I can find a number in set B that is not in set A doesn't prove that set B is bigger than set A.
I mean ... suppose I take the set of all even integers. There are an infinite number of such numbers. (I assume you agree with that and I don't need to go into a proof.) Now consider the set of all integers, odd and even. There are members of the set of all integers that are not in the set of even integers. But there are not more integers than there are even integers. It is easily provable that there are the same number: I can establish a one-to-one correspondence between the two sets.
So how does the diagonal proof prove that there are bigger infinities? I presume I'm missing something. What?
AI: Short version: that's not actually the diagonal proof.
That argument is indeed incorrect, for precisely the reason you say - as written, all it shows is that there is an irrational number, and so $\mathbb{R}\supsetneq\mathbb{Q}$.
However, it does contain the germ of the right idea. Note that in the first part we didn't actually use anything about rationals: what we really showed was the following.
Suppose $(a_i)_{i\in\mathbb{N}}$ is any sequence of real numbers. Then there is a real number $b$ such that $b\not=a_i$ for any $i$.
This does ultimately imply that $\mathbb{R}$ cannot be put in bijection with $\mathbb{Q}$. We already know that there is a sequence $(q_i)_{i\in\mathbb{N}}$ such that every rational appears in the sequence. Now suppose $f:\mathbb{Q}\rightarrow\mathbb{R}$ were a bijection. Then letting $a_i=f(q_i)$ we would get a counterexample to the result above.
So basically the argument in the OP has the right idea, but doesn't implement it correctly.
|
H: Stein and Shakarchi, Complex Analysis, Chapter 2, Problem 1(b)
Fix $\alpha$ with $0 < \alpha < \infty$. Show that the analytic function $f$ defined by $f(z) = \sum_{n=0}^\infty 2^{-n\alpha} z^{2^n}$ for $|z| < 1$ extends continuously to the unit circle, but cannot be analytically continued past the unit circle.
The problem also provides a hint concerning nowhere differentiable functions. In Stein and Shakarchi, Fourier Analysis, p. 118, it is shown that $\sum_{n=0}^\infty 2^{-n\alpha} \cos(2^n x)$ is nowhere differentiable for $0 < \alpha < 1$. On p. 174 of that book a problem with hints shows that this result is also true for $\alpha = 1$.
I see how to do the original problem using the hint for $0 < \alpha \le 1$. My question is how does one prove that $f(z)$ cannot be analytically continued for $\alpha > 1$?
AI: Hint: $z f'(z) = \sum_{n=0}^\infty 2^{n(1-\alpha)} z^{2^n}$. If that can't be continued, then $f(z)$ can't.
|
H: Finding the Centre of Gravity (Using Single Integral) of an Arc of the Astroid $x^{2/3} + y^{2/3} = a^{2/3}$ in the First Quadrant
My Attempted Solution: First off, the curve will intersect the x and y axis at $(a, 0)$ and $(0, a)$. We can also rewrite the function: $y = (a^{2/3}-x^{2/3})^{3/2}$. To find the y-coordinate of the centre of mass the equation is : $\frac{\int y_{avg} \ dm}{\int dm}$. For our purposes, since we are working with a line (arc length) we can rewrite the equation to $$y_{cm} =\frac{\int y \ dl}{\int dl}$$ in other words, the $y$ value at $dl$ divided by the total arc length. Now, I need to relate $dl$ to $x$ in order to solve the integrals. By definition we know that a small arc length can be calculated by:
$$dl = \sqrt{1+y'^{2}}dx = \sqrt{1 + [\frac{3}{2}(a^{2/3}-x^{2/3})^{1/2}(-\frac{2}{3}x^{-1/3})]^2} dx = \sqrt{ 1+ [(a^{2/3}-x^{2/3})x^{-2/3}]} \ dx = \sqrt{(\frac{a}{x})^{2/3}} \ dx = (\frac{a}{x})^{1/3} \ dx$$.
Hence the top part of the $y_{cm}$ can be written as $\int_{0}^{a} (a^{2/3} - x^{2/3})^{3/2}(\frac{a}{x})^{1/3} dx$, which, when put through a calculator, produces an answer of $\frac{3}{5}a^2$. Furthermore, the length of this arc is $\frac{3}{2}a$. When you divide the two values you get $\frac{2}{5}a$. Is this correct?
AI: A cleaner approach is to let $ x = a\cos^3t$ and $y=a\sin^3t$. Then, $dl = \sqrt{(x_t’)^2+(y_t’)^2} dt = 3a\sin t\cos t dt$ and
$$y_c = \frac{\int y \ dl}{\int dl}
= \frac{\int_0^{\pi/2} a\sin^3t\cdot 3a\sin t\cos t dt}{\int_0^{\pi/2} 3a\sin t\cos t dt}
=\frac{\frac35 a^2}{\frac32a}= \frac25 a
$$
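A quick SymPy check of those two integrals (sketch):

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)
y = a * sp.sin(t)**3
dl = 3 * a * sp.sin(t) * sp.cos(t)              # arc-length element on the astroid

num = sp.integrate(y * dl, (t, 0, sp.pi / 2))   # 3*a**2/5
den = sp.integrate(dl, (t, 0, sp.pi / 2))       # 3*a/2
print(num, den, sp.simplify(num / den))         # ratio: 2*a/5
```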
|
H: solve matrix equation
Given symmetric positive definite (SPD) matrices $A$ and $B$, I was wondering whether there is a solution to
$XAX=B$,
where $X$ is also a SPD matrix.
Thank you!
AI: Your question can be rewritten as
$$ (A^{1/2} X A^{1/2})^2 = A^{1/2} B A^{1/2}. $$
So
$$ X = A^{-1/2} (A^{1/2} B A^{1/2})^{1/2} A^{-1/2} .$$
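A quick numerical sanity check of that formula (a sketch with random SPD matrices; the square root is taken via the eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

def spd_sqrt(M):
    """Symmetric square root of an SPD matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B = random_spd(4), random_spd(4)
Ah = spd_sqrt(A)
Ahi = np.linalg.inv(Ah)
X = Ahi @ spd_sqrt(Ah @ B @ Ah) @ Ahi            # A^{-1/2} (A^{1/2} B A^{1/2})^{1/2} A^{-1/2}

print(np.allclose(X @ A @ X, B))                         # True
print(np.all(np.linalg.eigvalsh((X + X.T) / 2) > 0))     # X is (numerically) SPD
```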
|
H: Why are these dependent and independent events respectively.
Case #1.
P(A) is the probability of rolling a fair standard dice and getting numbers less than or equal to 4.
P(B) is the probability of rolling it and getting even numbers.
Case #2.
P(A) is the probability of rolling a fair standard dice and getting numbers less than or equal to 3.
P(B) is the probability of rolling it and getting even numbers.
Here, in the case #1, A and B are independent while in the case #2 they are dependent. I know how to calculate both of them, but I don't understand it intuitively. How come they come out so differently just by adjusting a number? Shouldn't independent cases be like the relation between winning a lottery and having a bug bite? And shouldn't dependent cases be like the relation between eating rotten food and getting a stomach ache?
AI: Dependence doesn't imply causation. Intuitively, two events are independent if knowing that one occurs gives you no information about whether the other will occur. If we know that the number is $\leq4$, well, half the numbers from $1$ to $4$ are even, so there's still a $50\%$ chance of getting an even number; the information doesn't change our assessment in any way. But suppose we learn that the number is $\leq3$. Now there are two odd numbers possible, and only one even number, so the probability of getting an even number has changed to $\frac13.$
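A direct enumeration of the two cases makes this concrete (a small sketch):

```python
from fractions import Fraction

outcomes = range(1, 7)

def prob(event):
    return Fraction(sum(1 for x in outcomes if event(x)), 6)

for thresh in (4, 3):
    A = lambda x: x <= thresh
    B = lambda x: x % 2 == 0
    pA_and_B = Fraction(sum(1 for x in outcomes if A(x) and B(x)), 6)
    print(thresh, pA_and_B == prob(A) * prob(B))   # True for 4 (independent), False for 3
```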
|
H: How to determine solutions to homogenous systems Ax = 0?
I have a 4x4 matrix which I need to determine the solutions for Ax = 0 as well as one basis of the space of solutions.
What I've done so far was to reduce the matrix to the echelon form and I ended up with:
Then I wrote:
(x = -2y)
(y = z)
(w = 0)
How do I find the solutions now? And how do I find a basis among the solutions?
AI: We can turn the fourth row into a zero row by subtracting $R_3$ from $R_4$,
which gives the matrix: $$\begin{pmatrix}1&2&0&1\\ 0&1&-1&\frac{3}{5}\\ 0&0&0&\frac{6}{5}\\ 0&0&0&0\end{pmatrix}$$ From here we can see that $z$ is a free variable, so we can write the following:
$$w=0$$ $$y=z$$ $$x=-2z$$
So a general vector in the null space is $(x,y,z,w) = (-2z,z,z,0)= z(-2,1,1,0)$, where $z\in \mathbb{F}$, the field you are working over.
From this we conclude that the null space is spanned by the vector $(-2,1,1,0)$.
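One can confirm this with a quick numerical check (a sketch using SciPy):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2, 0, 1],
              [0, 1, -1, 3/5],
              [0, 0, 0, 6/5],
              [0, 0, 0, 0]])

ns = null_space(A)           # orthonormal basis of the null space
print(ns.shape)              # (4, 1): the null space is one-dimensional
print(ns[:, 0] / ns[1, 0])   # [-2.  1.  1.  0.], i.e. a multiple of (-2, 1, 1, 0)
```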
|
H: How many different ways are there to write $2016$ as the difference of two squares?
How many different ways are there to write $2016$ as the difference of two squares?
So the question is asking for the integer solutions for $2016 = x^2-y^2$. The prime factorization of $2016$ is $2016=2^5\cdot3^2\cdot 7^1$. I’m not sure how should I continue from here, what would be the proper way?
AI: Rewriting $x^2 - y^2$ as $(x - y)(x + y)$, the problem is to count the integer solutions $(x,y)$ to all systems $x - y = a, x + y = b$, where $a, b$ are integers satisfying $ab = 2016$.
This is a nonsingular $2 \times 2$ linear system, so a unique choice of $x, y$ is always determined by $a, b$. The solutions $x = (a + b)/2, y = (b - a)/2$ are integers if and only if $a, b$ have the same parity.
Since $2016$ is even, $a, b$ must both be even, say $a = 2a', b = 2b'$. Then the problem is equivalent to counting the number of ways to write $a'b' = 504$.
Since $504 = 2^3 3^2 7$, the number $504$ has $4 \times 3 \times 2 = 24$ positive divisors $a'$. Taking into account negative divisors, we see that there are $48$ possible choices of $a'$, hence $48$ possibilities for the pair $(x,y)$.
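A brute-force check of that count (sketch):

```python
from math import isqrt

count = 0
for x in range(-600, 601):              # |x| <= 505 for any solution, so this range suffices
    d = x * x - 2016
    if d >= 0 and isqrt(d) ** 2 == d:
        count += 1 if d == 0 else 2     # count both y and -y
print(count)   # 48
```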
|
H: Show that for every integer $n$ there is a multiple of $n$ that has only $0s$ and $1s$ in its decimal expansion.
Can anyone please explain this example as I tried a lot to understand it but I can't!
The problem:
Show that for every integer n there is a multiple of n that has only
0s and 1s in its decimal expansion.
The Solution of the book:
Let $n$ be a positive integer. Consider the $n + 1$ integers $1, 11,$
$111, ..., 1111, ...$ (where the last integer in this list is the
integer with $n + 1$ $\ 1s$ in its decimal expansion). Note that there
are $n$ possible remainders when an integer is divided by $n$. Because
there are $n + 1$ integers in this list, by the pigeonhole principle
there must be two with the same remainder when divided by $n$. The
larger of these integers less the smaller one is a multiple of $n$,
which has a decimal expansion consisting entirely of $0s$ and $1s$.
This problem from Discrete Mathematics and its application's for Rosen
AI: Suppose, say, that $n=3$. Consider the four numbers $1$, $11$, $111$, and $1\,111$. What are the remainders of the division of these numbers by $3$? They are $1$, $2$, $0$, and $1$ respectively. The remainder $1$ appears twice (corresponding to the numbers $1$ and $1\,111$). So, $1\,111-1(=1\,110)$ is a multiple of $3$.
The same idea works with every $n$.
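The argument is constructive, so it is easy to turn into a short program (a sketch):

```python
def zero_one_multiple(n):
    """Return a positive multiple of n whose decimal digits are only 0s and 1s."""
    seen = {}                                  # remainder mod n -> repunit with that remainder
    for k in range(1, n + 2):                  # the repunits 1, 11, 111, ... (at most n+1 of them)
        r = int('1' * k)
        if r % n == 0:
            return r
        if r % n in seen:
            return r - seen[r % n]             # of the form 1...10...0, divisible by n
        seen[r % n] = r

for n in (3, 7, 12, 99):
    m = zero_one_multiple(n)
    print(n, m, m % n == 0, set(str(m)) <= {'0', '1'})
```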
|
H: calculate the Integer solutions of an equation
How does anyone calculate the "integer solutions" of an equation in this example:
$$
x(x+1) + 62 = y^2
$$
https://www.wolframalpha.com/input/?i=x*%28x%2B1%29%2B62%3Dy%5E2
here the Integer solutions are :
$x = -62$, $y= 62$
$x = -2$, $y= 8$
$x = 1$, $y= 8$
$x = 61$, $y= 62$
How do they get $8$?
AI: Hint
The equation can be written as
$$(2x+1)^2-(2y)^2=-247 \implies (2x+1-2y)(2x+1+2y)=-247.$$
Note that $247=13 \cdot 19$. So factor pairs are $(1,247), \, (13,19)$.
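A brute-force search recovers exactly the solutions listed above (a sketch, restricted to $y\ge 0$ as in the WolframAlpha output):

```python
from math import isqrt

solutions = []
for x in range(-200, 201):        # the factorisation bounds |2x+1| by 124, so this range suffices
    v = x * (x + 1) + 62
    y = isqrt(v)
    if y * y == v:
        solutions.append((x, y))
print(solutions)   # [(-62, 62), (-2, 8), (1, 8), (61, 62)]
```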
|
H: Are fully faithful functors stable under pullback?
Let $A$, $B$ and $C$ be locally small categories, let $I:A\to C$ be a fully faithful functor and let $F:B\to C$ be any functor. Is the pullback $F^*I: A \times_C B\to B$ still fully faithful?
If yes, is there an underlying orthogonal factorization system?
Any reference would also be welcome.
AI: There is a bijective-on-objects/fully faithful orthogonal factorisation system on Cat. Hence, as a right class of a factorisation system, fully faithful functors are closed under pullback in Cat.
For another reference, see also Riehl's notes on Factorisation Systems.
|
H: The geometry meaning of Unitary matrix/operator
After studying unitary, self-adjoint and normal matrices and operators, I can say that they have pretty interesting characteristics, but I do not know how to visualize them in low-dimensional spaces. I have seen some YouTube clips about visualizing a linear transformation and matrix multiplication, but they were for the general case; I still can't imagine a visualization for those operators with all of their characteristics.
AI: Long story short: there is no good way to completely "visualize" complex matrices with any useful generality. Even in the smallest non-trivial case, we are looking at a transformation over $\Bbb C^2$, which from a "geometric" standpoint is really a $4$-dimensional space.
With that said: with matrices and with other "complicated" mathematical objects, "visualization" in the usual sense is not always necessary to get a feeling for a mathematical object, and this includes complex matrices. As an analogy, I suggest you watch this video from 3Blue1Brown about 10-dimensional spheres and boxes, which are "visualized" (in a limited sense) in terms of "sliders". Note that there is really nothing geometric about a row of 10 sliders. Nevertheless, we can leverage our understanding of this representation to get a "feeling" for the fact that the volume of a box grows faster than the volume of the box's inscribed sphere as the number of dimensions is increased.
Similarly, here is a limited way in which normal matrices (which include unitary, self-adjoint, and skew-adjoint operators) can be visualized. When a real matrix is diagonalized with real eigenvalues, the picture associated with the diagonalization of a linear transformation is one of space being "stretched, squished, or flipped" along the directions corresponding to the eigenvectors of the transformation.
In the case where a complex matrix can be diagonalized with real eigenvalues (e.g. a self-adjoint operator), the span of a single vector in $\Bbb C^n$ is something that would normally be visualized as $2$-dimensional, so that the stretch/squish/flip occurs uniformly across the entirety of this "2-dimensional" complex line. With that established, we can say that the complex eigenvalue $\lambda = re^{i \theta}$ encodes an expansion by factor $r>0$ followed by a rotation by angle $\theta$ within this complex line. The spectral theorem tells us that this is enough to visualize any normal operator, and that for any normal operator, these eigenspaces will be mutually orthogonal.
With that, we can still understand the notion of independent directions and the action of the linear transformation along each of these direction, and often this is enough. What we lose, however, is our ability to visualize these directions at the same time.
|
H: Fibrantly generated model category
An important concept in the study of model categories is that of "cofibrantly generated model categories". These are nice because all morphisms can be obtained from a small subset of them and in general these specific categories are often easier to work with.
Now I was wondering why I can find almost nothing about "fibrantly generated model categories". Since duality is everything in category theory I expected to find at least something about them.
I probably don't know enough about these structures to realise some obvious stuff, but I do find it curious. Is there a solid reason for their non-existence?
AI: The most important cofibrantly generated model categories are the combinatorial ones, which are also locally presentable; by Dugger’s theorem these are all Quillen equivalent to left Bousfield localizations of the projective model structure on some category of simplicial presheaves. Such categories are a lot like the classical model category of simplicial sets: you build things out of cell complexes, most notably.
Certainly the opposite of a cofibrantly generated model category is fibrantly generated, but this corresponds roughly to taking presheaves valued in the opposite category of simplicial sets, which is not as important. This is a homotopy theory analogue of the fact that locally presentable categories are more practically important than their opposites, though the notions are formally equivalent by duality. A particular problem with getting fibrant generation is the shortage of cosmall objects in most “natural” categories.
|
H: What is the conjunctive normal form for $(\neg Q\land P) \lor (\neg Q\land R) \lor (\neg P \land \neg R)$
$(\neg Q\land P) \lor (\neg Q\land R) \lor (\neg P \land \neg R)$
I have calculated this using WolframAlpha and the output CNF was $(\neg Q \lor\neg P) \land (\neg Q \lor\neg R) $,
but all I can reach, after using the absorption rule, is $(\neg Q\land P) \lor (\neg Q\land R) \lor (\neg P \land \neg R)$, and using distribution the final result is $\neg Q\land(R \lor P)\lor \neg P \land \neg R $; so can someone help me see what is wrong?
AI: The Redundancy Law or Reduction might be useful here
$\def\box#1#2{\boxed{\underline{\text{#1}}\\#2\\}}$
$$\box{The Redundancy Law/Reduction}
{(P\land \neg Q)\lor Q\equiv P\lor Q\\
\underline{\text{Proof.}}\\
\begin{align}
&(P\land \neg Q)\lor Q\\
\equiv&(P\lor Q)\land (\neg Q\lor Q)\hspace{5ex}\text{Distributive law}\\
\equiv&(P\lor Q)\land \top\hspace{13ex}\text{Negation law}\\
\equiv&P\lor Q\hspace{19.3ex}\text{Identity law}
\end{align}
}$$
Here is a possible approach for your question, which gives the Minimal CNF
\begin{align}
&(¬Q∧P)∨(¬Q∧R)∨(¬P∧¬R)\\
\equiv&(\neg Q\land(P\lor R))∨(¬P∧¬R)\tag*{Distributive law}\\
\equiv&(\neg Q\land\neg(\neg P\land\neg R))∨(¬P∧¬R)\tag*{De Morgan's law}\\
\equiv&\neg Q\lor (\neg P\land \neg R)\tag*{Redundancy law}\\
\equiv&(\neg Q\lor \neg P)\land(\neg Q\lor\neg R)\tag*{Distributive law}
\end{align}
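A quick truth-table check that the derived CNF is equivalent to the original formula (sketch):

```python
from itertools import product

orig = lambda p, q, r: ((not q) and p) or ((not q) and r) or ((not p) and (not r))
cnf  = lambda p, q, r: ((not q) or (not p)) and ((not q) or (not r))

print(all(orig(p, q, r) == cnf(p, q, r)
          for p, q, r in product([False, True], repeat=3)))   # True
```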
|
H: Smoothness of quotient map
I'm a beginner in differential topology and was studying: Introduction to Smooth Manifolds by John M. Lee. And came across an example (Example 2.13 e ) :
Could someone please explain how one could deduce that the quotient map, used to define the projective space, is smooth based on the above example.
Additionally here is the Example 1.33:
AI: In both examples, the maps are smooth simply by definition: map $f\colon (x_1,\ldots,x_n) \mapsto (f_1(x_1,\ldots,x_n), \ldots, f_m(x_1, \ldots,x_n))$ is called smooth if for all possible $j, k$ there exists derivative $\frac{\partial f_j}{\partial x_k}$, which is a continuous function in $x_1, x_2, \ldots, x_n$. In the first example, $\frac{\partial}{\partial x_j}(\frac{x_j}{x_i})=\frac{1}{x_i}$ and $\frac{\partial}{\partial x_i}(\frac{x_j}{x_i})=-\frac{x_j}{x_i^2}$ which are both continuous functions. Similarly for example $1.33$.
Edit. The definition given above is actually the definition of a $C^1$-smooth map. But I did not take into account that the word 'smooth' commonly means '$C^{\infty}$-smooth', i. e. there exist all possible higher partial derivatives $\frac{\partial^N f}{\partial x_{j_1}\partial x_{j_2} \ldots \partial x_{j_N}}$ (which are also automatically continuous). It is easy to see that in your example this condition holds: by differentiation of $\frac{x_j}{x_i}$ one can obtain only rational functions with some power of $x_i$ in the denominator. Such functions are defined and differentiable if $x_i \ne 0$.
|
H: Inequality involving two convergent sequences
Let $(x_n)$ and $(y_n)$ be sequences of real numbers that satisfy $\lim x_n =a$ and $\lim y_n=b$.
Prove that if $a<b$ then $\exists n_0 \in \mathbb{N}$ such that $\forall n > n_0$, $x_n < y_n$
I found this exercise at Elon Lages Lima's book Real Analysis vol. 1, chapter 3, and wrote the following proof, I would like to know if it is correct:
First of all, we already know from the definition of limit that
\begin{align}
\forall \epsilon >0, \exists n_1 \in \mathbb{N}\; &\text{s.t.}\; \forall n>n_1, |x_n-a|<\epsilon\\
\forall \epsilon >0, \exists n_2 \in \mathbb{N}\; &\text{s.t.}\; \forall n>n_2, |y_n-b|<\epsilon
\end{align}
Now let's suppose that $x_n>y_n$ for sufficiently big $n$ and try to prove it by absurd. We get the following inequality for $\forall n > \max\{n_1,n_2\}:$
\begin{align}
b-\epsilon < y_n < x_n < a + \epsilon
\end{align}
Choosing $\epsilon = a-b$, we get:
\begin{align}
b-(a-b)<y_n&<x_n<a+(a-b)\\
2b-a &< 2a-b\\
3b &< 3a \\
b &<a
\end{align}
which contradicts the first premise that $a<b$, thus proving our proposition.
Thanks in advance.
AI: Direct approach
Assume that $a<b$.
With $$\epsilon=\frac{b-a}{2}$$
we can say
$$(\exists N_1\in \Bbb N) \; : \;(\forall n\ge N_1)$$
$$
x_n<a+\frac{b-a}{2}$$
and
$$(\exists N_2\in \Bbb N)\;:\;(\forall n\ge N_2)\;$$
$$ b-\frac{b-a}{2}<y_n$$
then with $ N=\max(N_1,N_2)$, we have
$$(\forall n\ge N)\; \; x_n<\frac{a+b}{2}<y_n$$
Done.
|
H: How to show that $(3,x-1)\not=(3,x+1)$ as ideals in $Z[x]$
How to show that $(3,x-1)\not=(3,x+1)$ as ideals in $Z[x]$
Both are maximal.
I think all I need to do is show that $x+1 \not \in (3,x-1)$, but I do not know how to show that. I wanted to assume towards a contradiction that $3f(x)+(x-1)g(x)=x+1$. It is clear that $\deg(f(x))\geq1$ and $\deg(g(x))=\deg(f(x))-1$. I was thinking of induction on the degree, but it seems messy, with a weird system of equations.
AI: Hint: If $3f(x)+(x-1)g(x)=x+1$, then what happens when $x=1$?
|
H: Prove that there is a point $c \in (0,1)$ such that $f(c) \int_0^1 g(x)dx =g(c) \int_0^1 f(x)dx$.
Let $I = [0,1]$ and $f,g : I \to \Bbb{R}$ be real-valued functions such that $f$ and $g$ are continuous on $I$ and differentiable on $(0,1)$. Prove that there is a point $c \in (0,1)$ such that
$$
f(c) \int_0^1 g(x)dx =g(c) \int_0^1 f(x)dx.
$$
Is it true if $f$ and $g$ are complex valued function?
Since $f$ and $g$ are continuous on $I$ and differentiable on $(0,1)$, there exists $ c \in (0,1)$ such that
$$
g'(c)[f(1)-f(0)]=f'(c)[g(1)-g(0)]
$$
I tried to consider the function $y(x) = (\int_0^1 f(x) dx) (\int_0^1 g(x) dx) $ so that I can apply the mean value theorem. I am not sure how to proceed from here.
AI: Consider $G(x) = \int_0^x g(t)dt, F(x) = \int_0^x f(t)dt$. Then $F,G$ are differentiable functions. By the generalized mean value theorem we have
$$\begin{align*}G'(c)[F(1)-F(0)] &= F'(c)[G(1)-G(0)]\\
g(c)\int_0^1 F'(t)dt &= f(c)\int_0^1 G'(t)dt\\
g(c)\int_0^1 f(t)dt &= f(c)\int_0^1 g(t)dt \end{align*}$$
for some $c\in (0,1)$.
|
H: If the symmedian at $A$ of $\triangle ABC$ meets the circumcircle at $K$, then $KA$ is a symmedian of $\triangle KBC$.
Let the symmedian of $\triangle ABC$ at $A$ pass through the circumcircle of $\triangle ABC$ at $K$. Prove that $KA$ is a symmedian of $\triangle KBC$.
The book says that it is very obvious, but I seem to be too tunnel visioned on angle chasing. I can't find anything that proves it.
AI: It is indeed obvious. Recall
Fact: The symmedian from $A$ of $\triangle ABC$ is the line $AX$ where $X$ is the intersection of tangents to the circumcircle at $B,C$.
Now $A,K,X$ are collinear, $\triangle KBC$ and $\triangle ABC$ have the same circumcircle, so the line $KX$ (same as line $KA$) is the symmedian of $\triangle KBC$ through $K$.
|
H: Is there a constant $c$ such that $\sigma_1(A)\leq c\rho(A)$ for all $n\times n$ matrices $A$?
Let $A$ be an $n\times n$ matrix. Denote by $\sigma_1(A)$ the largest singular value of $A$, and by $\rho(A):=\max_\mu|\lambda_\mu|$ the spectral radius, where the $\lambda_\mu$ are the eigenvalues of $A$.
As is well-known, there is
\begin{eqnarray}
\rho(A)\leq\sigma_1(A).
\end{eqnarray}
My question is the converse:
Is there a constant $c$ such that
\begin{eqnarray}
\sigma_1(A)\leq c\rho(A),
\end{eqnarray}
for all $n\times n$ matrices $A$?
AI: No. Consider for instance $$A=\begin{bmatrix}0&1\\0&0\end{bmatrix}.$$ The answer is yes, with $c=1$, when $A$ is selfadjoint.
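Numerically, for that matrix (a quick sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
sigma1 = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value = 1
rho = max(abs(np.linalg.eigvals(A)))             # spectral radius = 0
print(sigma1, rho)   # 1.0 0.0 -- so no constant c can give sigma1 <= c*rho
```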
|
H: Notation for annihilator with ring and module interchanged?
Let $M$ be an $R$-module. For a subset $S\subseteq R$, the set
$$\{m\in M\mid (\forall s\in S)[sm=0]\}$$
is clearly a submodule of $M$. Is there a name or notation for it? It is essentially the definition of annihilator with ring and module interchanged.
AI: The notations $(0:_M S)$ and $\operatorname{Ann}_M(S)$ are both commonly used for this submodule. I'm not sure there's a standard name, but I've heard it informally referred to as "the annihilator of $S$ in $M$" before. More generally, if $N$ is a submodule of $M$ and $S \subseteq R$ (of course, we can always replace $S$ by the ideal it generates), we write $(N:_M S):=\{m \in M \mid Sm \subseteq N\}$.
|
H: $f_{*}$ is surjective if and only if $f$ is surjective
I’m having trouble proving the following theorem. I’ll show my proof so far. I would really appreciate if you could help me, so I can improve my proof and correct some mistakes.
Theorem: Let $f:A \rightarrow B$ be a map. Think of this map as inducing the map $f_{*}:\mathcal{P}(A) \rightarrow \mathcal{P}(B)$, and the map $f^{*}:\mathcal{P}(B) \rightarrow \mathcal{P}(A)$. Then, $f_{*}$ is surjective if and only if $f$ is surjective.
Proof: $\implies.$ Suppose that $f_{*}$ is surjective. Then, for all $Y \subseteq B$, there exists some $X \subseteq A$, such that $f_{*}(X)=Y$. In particular, for $Y=B$, there is some set $X \subseteq A$, such that $f_{*}(X)=B$. Let $X_0$ be that set. Since $X_{0} \subseteq A$, we know that $f_{*}(X_{0}) \subseteq f_{*}(A)$. Observe that $f_{*}(A) \subseteq B$. Given that $f_{*}(X_{0}) = B$, then $f_{*}(X_{0}) \subseteq B$. From $f_{*}(X_{0}) \subseteq f_{*}(A)$ we conclude that $B \subseteq f_{*}(A)$. So we have that $f_{*}(A)=B$, therefore $f$ is surjective.
$\Longleftarrow.$ Suppose that $f$ is surjective. From a former theorem, I know that $F = f_{*}(f^{*}(F))$ for all $F \subseteq B$. Let $S \subseteq B$ be any set. Observe that $S \in \mathcal{P}(B)$. We define the set $R \subseteq A$ as $R=f^{*}(S)$. Note that $R \in \mathcal{P}(A)$. So $f_{*}(R) = f_{*}(f^{*}(S)) = S$. Therefore $f_{*}$ is surjective. $\square$
Is this proof right? I think this could be better and I want to improve it! Any help is welcome. Thank you in advance!
AI: The "$\implies$" part is very well written, but you introduced $X_0$ and kept writing it as $X$
For the "$\Longleftarrow$" part, $f_*(f^*(R))$ is not defined, since $R\in \mathcal P(A)$, and nothing implies that $R\in \mathcal P(B)$ or $f_{*}(f^{*}(R)) = f_{*}(f^{*}(S))$ . However, correcting this detail and writing $f_*(R)=f_*(f^*(S))=S$ will do.
It seems that the only mistakes are in the notation, and besides that the proof is ok.
|
H: Can I get the derivation for the geodesic equation in Euclidean 3D?
Where can I find the derivation of the following formula for a geodesic in 3D Euclidean space expressed as:
Calling $\vec u$ the tangent unit vector $\mathrm dr/\mathrm d s$ along the geodesic, and $\vec n$ the unit vector normal to the surface, the resulting equation of a geodesic reads
$$\frac{\mathrm d \vec u}{\mathrm d s} = -\left ( \vec u \cdot \frac{\mathrm d \vec n}{\mathrm ds} \right ) \vec n=\vec u\times \left(\frac{\mathrm d\vec n}{\mathrm ds} \times \vec n\right)$$
?
I see a dot product between the tangent of the geodesic curve $\vec u$, and the tangent to the surface $\vec u \cdot \frac{\mathrm d \vec n}{\mathrm ds},$ that is to say, the projection of the vector representing the change in the normal vector to the surface (tangent to the surface) onto the unit tangent to the curve times the unit normal must be equal and opposite to the change in the tangent to the curve $\frac{\mathrm d \vec u}{\mathrm d s}$ in the middle expression. In the RHS there are two cross products.
AI: There's nothing too interesting going on here. For ease of typing, I will use $'$ for $d/ds$ and let $\vec v = \vec n\times\vec u$.
The first equation just follows from $0 = (\vec u\cdot\vec n)' = \vec u'\cdot\vec n + \vec u\cdot\vec n{}'$.
The second equation is just vector algebra: Write $\vec n{}' = \lambda\vec u + \mu\vec v$, where, of course, $\lambda = \vec n{}'\cdot\vec u$. Then
\begin{align*}
\vec u\times(\vec n'\times\vec n) &= \vec u\times\big((\lambda\vec u + \mu\vec v)\times\vec n\big) \\
&= -\lambda\vec u\times\vec v + \mu\vec u\times\vec u = -\lambda\vec n,
\end{align*}
as desired.
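A quick numerical spot check of the identity (a sketch with a random unit normal and a unit tangent $\vec u\perp\vec n$; $\mathrm d\vec n/\mathrm ds$ is an arbitrary vector):

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.standard_normal(3); n /= np.linalg.norm(n)                      # unit normal
u = rng.standard_normal(3); u -= (u @ n) * n; u /= np.linalg.norm(u)    # unit tangent, u ⟂ n
ndot = rng.standard_normal(3)                                           # stands in for dn/ds

lhs = -(u @ ndot) * n
rhs = np.cross(u, np.cross(ndot, n))
print(np.allclose(lhs, rhs))   # True whenever u ⟂ n
```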
|
H: How many five letter words can be made from the letters in SUCCESSFUL?
Question from this video: https://www.youtube.com/watch?v=nU2NrXOCdwk
Actually the word there was SUCCESSES, I will just change it to SUCCESSFUL.
So let me partially do the solution because I don't have a problem doing exactly the entire thing. From my understanding, you have to create integer partitions and those are represented by the cases.
SUCCESSFUL
Letter Count
S - 3
U - 2
C - 2
E - 1
F - 1
L - 1
All - 10
Integer Partition Cases
Case 1: {1, 1, 1, 1, 1}
Case 2: {2, 1, 1, 1}
Case 3: {2, 2, 1}
Case 4: {3, 1, 1}
Case 5: {3, 2}
To give an explanation, the integer partitions can be interpreted as
Case 1 has all letters different from each other
Case 2 has 2 letters identical to each other while the rest are all different
Remember that each set has to be taken from a pool of more letters.
In all honesty, I only know how to do well probably Case 1 and 2 and even then I'm not sure.
Case 1: {1, 1, 1, 1, 1} - 5!
Case 2: {2, 1, 1, 1} - (5C2 * 3C1) * 3!
Case 1 is pretty straight forward.
For case 2, I have used 5C2 to take two positions. Because those position will be occupied by identical letters, the order does not really matter. But I had to multiply by 3C1 because there are three possible letters.
Yeah the tricky part is that you don't consider just letters with two repetitions but those with at least 2, so S is included.
I'm at a loss on how to do Case 3 to 5. I have an idea but still not sure about it.
If anyone can comment on my solution if anything is wrong or suggest a better way then they may feel free to do so. Thank you.
AI: The integer partition method will work, but you aren't counting it correctly.
Case 1. We are selecting 5 different letters from 6 choices of
letters. 5! is what you get when you select 5 letters from 5 letters
in a permutation, but there are 6 letters. Hence this should be 6
permute 5. Note that having multiple copies of 'C' or 'S' doesn't matter
because the resulting word is the same no matter which copy of 'C' or
'S' you choose. Number of words = 6P5.
Case 2. 3C1 is indeed how to choose the 2 in {2,1,1,1}. But you
also have to choose the 1's in {1,1,1}. After choosing the first number, there will be 5 numbers left. Hence, this will be equal to 5C3.
There is one more step, which is to permute the word. The word is of
length 5 with a repeated letter. The number of ways to permute this is 5!/2!. You can find more about this here: https://brilliant.org/wiki/permutations-with-repetition/.
Number of words = (5C3 * 3C1) * 5!/2!.
In case 3, to choose the {2,2} from {2,2,1}, this will be 3C2. After selecting 2 numbers, there are 4 numbers left. Hence to select the 1 from {2,2,1}, this will be 4C1. Then, to permute a 5-letter word with 2 repeated letters, this is equal to 5!/(2! x 2!). Number of words = 3C2 * 4C1 * 5!/(2!2!).
Extending this method to case 4 and 5, in case 4 the tripled letter must be S, so we should have 5C2 * 5!/3!; in case 5 we should have 2C1 * 5!/(3!2!). The total should be 720 + 1800 + 360 + 200 + 20 = 3100.
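If you want a sanity check on the case-by-case total, a small brute-force script (a sketch in Python, not part of the original argument) can enumerate the distinct 5-letter words directly:

```python
# Brute-force count of distinct 5-letter words formed from the multiset SUCCESSFUL.
from itertools import permutations

letters = "SUCCESSFUL"                 # S:3, U:2, C:2, E:1, F:1, L:1
distinct_words = {p for p in permutations(letters, 5)}
print(len(distinct_words))             # should agree with the case-by-case total of 3100
```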
|
H: Prove $A \setminus (A \setminus (A \setminus B)) = A \setminus B.$
$\renewcommand{\backslash}{\setminus}$
The question is as is in the title. I am able to show that $A \setminus (A \setminus (A \setminus B)) \subseteq A \setminus B$ but I am stuck on showing that $A \setminus B \subseteq A \setminus (A \setminus (A \setminus B))$.
Am I doing something wrong? Any suggestions will help.
Here is what I have so far:
$\textbf{Proof:}$
To show that $A \backslash (A \backslash (A \backslash B)) = A \backslash B$, we need to show that $A \backslash (A \backslash (A \backslash B)) \subseteq A \backslash B$ and $A \backslash B \subseteq A \backslash (A \backslash (A \backslash B))$.
We first show that $A \backslash (A \backslash (A \backslash B)) \subseteq A \backslash B$.
Let $a \in A \backslash (A \backslash (A \backslash B))$.
$\implies a \in A \cap (A \backslash (A \backslash B))^c$
$\implies a \in A \cap (A \cap (A \backslash B)^c)^c$
$\implies a \in A \cap (A \cap (A \cap B^c)^c)^c$
$\implies a \in A \cap (A^c \cup (A \cap B^c))$
$\implies a \in A \cap ((A^c \cup A) \cap (A^c \cup B^c))$
$\implies a \in A \cap (U \cap (A^c \cup B^c))$
$\implies a \in A \cap (A^c \cup B^c)$
$\implies a \in (A \cap A^c) \cup (A \cap B^c)$
$\implies a \in \emptyset \cup (A \cap B^c)$
$\implies a \in (A \cap B^c)$
$\implies a \in (A \backslash B)$
Next, we show that $A \backslash B \subseteq A \backslash (A \backslash (A \backslash B))$.
Let $a \in A \backslash B$.
$\implies a \in A \backslash (A \backslash B)^c$
$\implies a \in A \backslash (A \backslash (A \backslash B)^c)^c$
$\implies a \in A \backslash (A \cap (A \backslash B))^c$
$\implies a \in A \backslash (A^c \cup (A \backslash B)^c)$
$\implies a \in A \backslash (A^c \cup (A^c \cup B))$
AI: Note that you can reverse the whole thing, i.e. if in the first half you simply change every $\Rightarrow$ into a $\Leftrightarrow$, you're there!
In fact, with algebra it's pretty simple:
$$A \setminus (A \setminus (A \setminus B)) = $$
$$A \cap (A \cap (A \cap B^c)^c)^c =$$
$$ A \cap (A^c \cup (A \cap B^c)) =$$
$$ (A \cap A^c) \cup (A \cap A \cap B^c) =$$
$$ \emptyset \cup (A \cap B^c) =$$
$$ A\setminus B$$
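As a quick sanity check (not a substitute for the proof), the identity can be spot-checked on concrete finite sets, for instance in Python:

```python
# Spot check of A \ (A \ (A \ B)) = A \ B on small concrete sets.
A, B = {1, 2, 3, 4}, {3, 4, 5}
assert A - (A - (A - B)) == A - B
print(A - B)   # {1, 2}
```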
|
H: Twice differentiable but not infinitely differentiable
I want an example of a function satisfying:
(1) $f:\mathbb R\to\mathbb R$ twice continuously differentiable
(2) $f''(x)$ $\ge 0$ for every x$\in \mathbb R$
(3)$\int\limits_0^x{f(t)dt}$ is NOT infinitely differentiable with respect to x.
I find it very difficult to get a function satisfying conditions 2 and 3. If I try $e^t$ it violates 3. If I try a function involving $t^5$sin(1/t), it violates condition 2. Please help me with a correct example. Thank you.
AI: This should do it: Define $g(s) = 1-s^2, |s|\le 1,$ $g=0$ elsewhere. Set
$$f(x) = \int_0^x\int_0^t g(s)\,ds\,dt.$$
Note $f''(x) = g(x),$ which is not differentiable at $\pm 1.$
|
H: Finding the equation of a circle given three points on the circle.
Problem:
Find an equation for the circle though the points $(2,3)$, $(3,2)$ and $(-4,3)$
Answer:
One of the general form of a circle is:
$$ x^2 + y^2 + ax + by + c = 0 $$
Hence we have:
\begin{align*}
4 + 9 + 2a + 3b + c &= 0 \,\,\, \text{ This is for the first point. } \\
9 + 4 + 3a + 2b + c &= 0 \,\,\, \text{ This is for the second point. }\\
16 + 9 - 4a + 3b + c &= 0 \,\,\, \text{ This is for the third point. }\\
\end{align*}
Now, I have a system of 3 linear equations. We put them in standard form.
\begin{align*}
2a + 3b + c &= -13 \\
3a + 2b + c &= -13 \\
-4a + 3b + c &= -16 \\
\end{align*}
Now, we will solve for $a$ using Cramer's rule.
\begin{align*}
\begin{vmatrix}
2 & 3 & 1 \\
3 & 2 & 1 \\
-4 & 3 & 1 \\
\end{vmatrix} &=2 \begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} - 3 \begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} +
\begin{vmatrix}
3 & 2 \\
-4 & 3 \\
\end{vmatrix} \\
\begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} &= 2 - 3 = -1 \\
\begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} &= 3 + 4 = 7 \\
\begin{vmatrix}
3 & 2 \\
-4 & 3 \\
\end{vmatrix} &= 9 + 8 = 17 \\
\begin{vmatrix}
2 & 3 & 1 \\
3 & 2 & 1 \\
-4 & 3 & 1 \\
\end{vmatrix} &= 2(-1) -3(7) + 17 = -2 - 21 + 17 = -6 \\
%
\end{align*}
\begin{align*}
\begin{vmatrix}
-13 & 3 & 1 \\
-13 & 2 & 1 \\
-16 & 3 & 1 \\
\end{vmatrix} &= -13 \begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} - 3 \begin{vmatrix}
-13 & 1 \\
-16 & 1 \\
\end{vmatrix} +
\begin{vmatrix}
-13 & 2 \\
-16 & 3 \\
\end{vmatrix} \\
\begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} &= 2 - 3 = -1 \\
\begin{vmatrix}
-13 & 1 \\
-16 & 1 \\
\end{vmatrix} &= -13 + 16 = 3 \\
\begin{vmatrix}
-13 & 2 \\
-16 & 3 \\
\end{vmatrix} &= -39 + 32 = -7 \\
\begin{vmatrix}
-13 & 3 & 1 \\
-13 & 2 & 1 \\
-16 & 3 & 1 \\
\end{vmatrix} &= -13( -1) - 3(3) - 7 = 13 - 9 - 7 = -3 \\
a &= \frac{-3}{-6} \\
a &= \frac{1}{2} \\
\end{align*}
\begin{align*}
\begin{vmatrix}
2 & -13 & 1 \\
3 & -13 & 1 \\
-4 & -16 & 1 \\
\end{vmatrix} &= 2
\begin{vmatrix}
-13 & 1 \\
-16 & 1 \\
\end{vmatrix} + 13 \begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} + \begin{vmatrix}
3 & -13 \\
-4 & -16 \\
\end{vmatrix} \\
\begin{vmatrix}
-13 & 1 \\
-16 & 1 \\
\end{vmatrix} &= -13 + 16 = 3 \\
\begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} &= 3 + 4 = 7 \\
\begin{vmatrix}
3 & -13 \\
-4 & -16 \\
\end{vmatrix} &= -48 - 52 = 100 \\
\begin{vmatrix}
2 & -13 & 1 \\
3 & -13 & 1 \\
-4 & -16 & 1 \\
\end{vmatrix} &= 2(3) + 13(7) - 100 = 6 + 91 - 100 \\
\begin{vmatrix}
2 & -13 & 1 \\
3 & -13 & 1 \\
-4 & -16 & 1 \\
\end{vmatrix} &= -3 \\
b &= \frac{-3}{-6} \\
b &= \frac{1}{2}
\end{align*}
Now we take the first equation and solve for $c$.
\begin{align*}
2a + 3b + c &= -13 \\
2\left( \frac{1}{2}\right) + 3\left( \frac{1}{2}\right) + c &= -13 \\
1 + \frac{3}{2} + c &= -13 \\
c &= - 13 - \frac{3}{2} - \frac{2}{2} \\
c &= -\frac{31}{2}
\end{align*}
Now we substitute back into the general form of the circle we started with:
\begin{align*}
x^2 + y^2 + ax + by + c &= 0 \\
x^2 + y^2 + \left( \frac{1}{2} \right) x + \left( \frac{1}{2} \right)y - \frac{31}{2} &= 0 \\
2x^2 + 2y^2 + x + y - 32 &= 0 \\
\end{align*}
However, the book gets:
$$ 2x^2 + 2y^2 + x + y - 23 = 0 $$
I claim that the point $(2,3)$ goes through my equation but not the book's equation. Am I right? Is the book right?
I copied the book's answer incorrectly. The book's answer is:
$$ x^2 + y^2 + 2x + 2y - 23 = 0 $$
In addition, after correcting for the wrong
constant (e.g. $-16$ should have been $-25$), I was able to get the correct answer. Here is my correct answer.
Answer:
One of the general form of a circle is:
$$ x^2 + y^2 + ax + by + c = 0 $$.
Hence we have:
\begin{align*}
4 + 9 + 2a + 3b + c &= 0 \,\,\, \text{ This is for the first point. } \\
9 + 4 + 3a + 2b + c &= 0 \,\,\, \text{ This is for the second point. }\\
16 + 9 - 4a + 3b + c &= 0 \,\,\, \text{ This is for the third point. }\\
\end{align*}
Now, I have a system of 3 linear equations. We put them in standard form.
\begin{align*}
2a + 3b + c &= -13 \\
3a + 2b + c &= -13 \\
-4a + 3b + c &= -25 \\
\end{align*}
Now, we will solve for $a$ using Cramer's rule.
\begin{align*}
\begin{vmatrix}
2 & 3 & 1 \\
3 & 2 & 1 \\
-4 & 3 & 1 \\
\end{vmatrix} &=2 \begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} - 3 \begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} +
\begin{vmatrix}
3 & 2 \\
-4 & 3 \\
\end{vmatrix} \\
\begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} &= 2 - 3 = -1 \\
\begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} &= 3 + 4 = 7 \\
\begin{vmatrix}
3 & 2 \\
-4 & 3 \\
\end{vmatrix} &= 9 + 8 = 17 \\
\begin{vmatrix}
2 & 3 & 1 \\
3 & 2 & 1 \\
-4 & 3 & 1 \\
\end{vmatrix} &= 2(-1) -3(7) + 17 = -2 - 21 + 17 = -6 \\
\end{align*}
\begin{align*}
\begin{vmatrix}
-13 & 3 & 1 \\
-13 & 2 & 1 \\
-25 & 3 & 1 \\
\end{vmatrix} &= -13 \begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} - 3 \begin{vmatrix}
-13 & 1 \\
-25 & 1 \\
\end{vmatrix} +
\begin{vmatrix}
-13 & 2 \\
-25 & 3 \\
\end{vmatrix} \\
\begin{vmatrix}
2 & 1 \\
3 & 1 \\
\end{vmatrix} &= 2 - 3 = -1 \\
\begin{vmatrix}
-13 & 1 \\
-25 & 1 \\
\end{vmatrix} &= -13 + 25 = 12 \\
\begin{vmatrix}
-13 & 2 \\
-25 & 3 \\
\end{vmatrix} &= -39 + 50 = 11 \\
\begin{vmatrix}
-13 & 3 & 1 \\
-13 & 2 & 1 \\
-25 & 3 & 1 \\
\end{vmatrix} &= -13(-1) - 3(12) + 11 = 13 - 36 + 11 = -12 \\
a &= \frac{-12}{-6} \\
a &= 2 \\
\end{align*}
\begin{align*}
\begin{vmatrix}
2 & -13 & 1 \\
3 & -13 & 1 \\
-4 & -25 & 1 \\
\end{vmatrix} &= 2
% continue here
\begin{vmatrix}
-13 & 1 \\
-25 & 1 \\
\end{vmatrix} + 13 \begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} + \begin{vmatrix}
3 & -13 \\
-4 & -25 \\
\end{vmatrix} \\
\begin{vmatrix}
-13 & 1 \\
-25 & 1 \\
\end{vmatrix} &= -13 + 25 = 12 \\
\begin{vmatrix}
3 & 1 \\
-4 & 1 \\
\end{vmatrix} &= 3 + 4 = 7 \\
\begin{vmatrix}
3 & -13 \\
-4 & -25 \\
\end{vmatrix} &= -75 - 4(13) = -127 \\
\begin{vmatrix}
2 & -13 & 1 \\
3 & -13 & 1 \\
-4 & -25 & 1 \\
\end{vmatrix} &= 2(12) + 13(7) - 127 = 24 + 91 - 127 \\
\begin{vmatrix}
2 & -13 & 1 \\
3 & -13 & 1 \\
-4 & -25 & 1 \\
\end{vmatrix} &= -12 \\
b &= \frac{-12}{-6} \\
b &= 2
\end{align*}
Now we take the first equation and solve for $c$.
\begin{align*}
2a + 3b + c &= -13 \\
2( 2) + 3( 2 ) + c &= -13 \\
10 + c &= -13 \\
c &= -23
\end{align*}
Now we substitute back into the general form of the circle we started with:
\begin{align*}
x^2 + y^2 + ax + by + c &= 0 \\
x^2 + y^2 + 2x + 2y - 23 &= 0 \\
\end{align*}
AI: An alternate approach.
Let's label our points
A = (2,3)
B = (3,2)
C = (-4,3)
The equation of the line containing AB is $x+y = 5$
Its perpendicular bisector is $x=y$
The equation of $AC$ is $y = 3$
And the perpendicular bisector is $x = -1$
The center of the circle then is $(-1,-1)$
And the square of the distance from the center to any of these points is $3^2 + 4^2 = 5^2$
$(x+1)^2 + (y+1)^2 = 25$
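A quick numerical check (a sketch, not part of the construction above) confirms that all three given points satisfy $x^2+y^2+2x+2y-23=0$:

```python
# Verify x^2 + y^2 + 2x + 2y - 23 = 0 at the three given points.
for x, y in [(2, 3), (3, 2), (-4, 3)]:
    print(x**2 + y**2 + 2*x + 2*y - 23)   # prints 0 for every point
```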
|
H: Theorem $8.38$ - Elliptic Partial Differential Equations by Gilbarg and Trudinger
I am trying understand why if $u$ is an eigenfunction for the first eigenvalue $\sigma_1$, then $|u|$ is one also.
Theorem 8.38. Let $L$ be a self-adjoint operator satisfying $(8.5)$ and $(8.6)$. Then the minimum eigenvalue is simple and has a positive eigenfunction.
Proof. If $u$ is an eigenfunction of $\sigma_1$, then it follows from the formula $(8.92)$ that $|u|$ is one also.
Relevant informations:
$(8.92)$ is the variational characterization of $\sigma_1$. More precisely, if the operator $L$ has the form
$$Lu = D_i(a^{ij}D_j u + b^i u) - b^i D_i u + c u,$$
where $[a^{ij}]$ is symmetric,
then
The associated quadratic form on $H = W_0^{1,2}(\Omega)$ is then given by
$$\mathscr{L}(u,u) = \int_{\Omega} (a^{ij} D_i u D_j u + 2 b^i u D_i u + c u^2) dx.$$
The ratio
$$J(u) = \frac{\mathscr{L}(u,u)}{(u,u)}, u \not\equiv 0, u \in H$$
is called the Rayleigh quotient.
$$(8.92) \ \sigma = \inf_H J.$$
The conditions satisfied by $L$ are
$$ \quad a^{i j}(x) \xi_{i} \xi_{j} \geqslant \lambda|\xi|^{2}, \quad \forall x \in \Omega, \xi \in \mathbb{R}^{n} \tag{8.5}$$
We also assume (unless stated otherwise) that $L$ has bounded coefficients; that is for some constants $\Lambda$ and $v \geqslant 0$ we have for all $x \in \Omega$
$$
\sum\left|a^{i j}(x)\right|^{2} \leqslant \Lambda^{2}, \quad \lambda^{-2} \sum\left(\left|b^{i}(x)\right|^{2}+\left|c^{i}(x)\right|^{2}\right)+\lambda^{-1}|d(x)| \leqslant v^{2}
\tag{8.6}$$
Transcribed from this screenshot.
Thanks in advance!
AI: Let’s try to evaluate $\mathscr{L}(|u|,|u|)$ keeping in mind that $|u| = u^+ + u^-$ where:
$u^+ (x) = \max \{ u(x), 0\}$ and $u^- (x) := \max \{-u(x), 0\}$ for $x \in \Omega$,
$u^\pm \geq 0$ a.e. in $\Omega^\pm := \{ x \in \Omega : \pm u(x) \geq 0\}$,
$u^\mp = 0$ in $\Omega^\pm$,
and $u = u^+ - u^-$;
we get:
\begin{split} \mathscr{L}(|u|,|u|) &= \mathscr{L}(u^+ + u^-, u^+ + u^-) \\
&= \int_{\Omega^+} (a^{ij} D_i u^+ D_j u^+ + 2 b^i u^+ D_i u^+ + c (u^+)^2) dx + \int_{\Omega^-} (a^{ij} D_i u^- D_j u^- + 2 b^i u^- D_i u^- + c (u^-)^2) dx \\
&= \int_{\Omega^+} (a^{ij} D_i u^+ D_j u^+ + 2 b^i u^+ D_i u^+ + c (u^+)^2) dx + \int_{\Omega^-} (a^{ij} D_i (-u^-) D_j (-u^-) + 2 b^i (-u^-)D_i (-u^-) + c (-u^-)^2) dx \\
&= \mathscr{L}(u^+ - u^-, u^+ - u^-) \\
&= \mathscr{L} (u,u)\;. \end{split}
Since $\| |u|\|_2^2 = ( |u|, |u|) = (u,u) = \|u\|^2$, we have equality between the Rayleigh quotients $\frac{\mathscr{L} (|u|, |u|)}{\| |u|\|^2}$ and $\frac{\mathscr{L} (u,u)}{\|u\|^2}$, therefore if $u$ is an eigenfunction for the first eigenvalue then so is $|u|$.
|
H: Error in Kleene "Introduction to Metamathematics" parentheses lemmas?
In the classic book, Introduction to Metamathematics by Steven Kleene, Lemma 2 of Section 7 (chapter 2), seems to me to be false. I am wondering if I am missing something. Here is the context:
The following definitions are used:
Proper pairing - a one-to-one pairing between n left parentheses "(" and n right parentheses ")" such that in each pair the left parenthesis lies to the left of the right parenthesis, and no two pairs separate each other.
Pairs of parentheses that separate each other - two pairs separate each other if they occur in the order $(_i(_j)_i)_j$ (ignoring everything else).
Then Kleene gives Lemma 1, stated below, which I agree with and find easy to prove using strong induction. Note that the lemma states "an" innermost pair, not "precisely one" innermost pair.
Lemma 1: A proper pairing of $2n$ parentheses ($n>0$ and $n$ is a natural number) contains an innermost pair, i.e. a pair which includes no other of the parentheses between them.
Then Kleene gives Lemma 2 as follows, which I disagree with.
Lemma 2: A set of $2n$ parentheses admits at most one proper pairing.
Kleene gives the following explanation: "Prove by a (simple) induction on $n$. (HINT: Under the induction step by Lemma 1 the given parentheses contain an innermost pair. Withdrawing this, the hypothesis of the induction applies to the set of the parentheses remaining."
I have a problem with this. Why? Consider $(^1_1(^2_2)^3_2)^4_1$ and $(^1_1)^2_1(^3_2)^4_2$. Each of these examples contains $2n$ parentheses and is a proper pairing, but they are not the same pairing. The last sentence of Kleene's explanation does not hold because, just because an innermost pair is removed, you can include a set of parentheses around the outside or concatenated with the current pair.
Am I missing something?
AI: Your example does not support your objection, since $(())$ and $()()$ are not the same string of parentheses. They are different sets, and each of them does in fact admit a single proper pairing, illustrated by the colors here: $\color{red}(\color{blue}(\color{blue})\color{red})$, and $\color{red}(\color{red})\color{blue}(\color{blue})$. Moreover, for each of them it is true that if we remove an innermost pair, what is left is a single pair that has a unique proper pairing: there is only one innermost pair in $(())$, and there are two innermost pairs in $()()$, but removing any of these innermost pairs leaves the properly paired string $()$.
|
H: Other way to evaluate $\int \frac{1}{\cos 2x+3}\ dx$?
I am evaluating
$$\int \frac{1}{\cos 2x+3} dx \quad (1)$$
Using the Weierstrass substitution $v=\tan x$ for the angle $2x$ (so $\cos 2x=\frac{1-v^2}{1+v^2}$ and $dx=\frac{dv}{1+v^2}$):
$$ (1)=\int \frac{1}{\frac{1-v^2}{1+v^2}+3}\cdot \frac{1}{1+v^2}dv =\frac{1}{2}\int \frac{1}{v^2+2}dv \quad (2) $$
And then $\:v=\sqrt{2}w$
$$ (2) = \frac{1}{2}\int \frac{1}{\left(\sqrt{2}w\right)^2+2}\sqrt{2}\, dw$$$$= \frac{1}{2\sqrt{2}} \int \frac{1}{w^2+1}dw$$$$ = \frac{1}{2\sqrt{2}}\arctan \left(w\right) + C$$$$= \frac{1}{2\sqrt{2}}\arctan \left(\frac{\tan \left(x\right)}{\sqrt{2}}\right)+C$$
Therefore,
$$\int \frac{1}{\cos 2x+3} dx = \frac{1}{2\sqrt{2}}\arctan \left(\frac{\tan \left(x\right)}{\sqrt{2}}\right)+C $$
That's a decent solution but I am wondering if there are any other simpler ways to solve this (besides Weierstass). Can you come up with one?
AI: $$\int \frac{1}{\cos2x+3}dx=\int \frac{1}{\frac{1-\tan^2x}{1+\tan^2x}+3}dx$$
$$=\int \frac{1+\tan^2x}{2\tan^2x+4}dx$$
$$=\frac12\int \frac{\sec^2x\ dx}{\tan^2x+2}$$
$$=\frac12\int \frac{d(\tan x)}{(\tan x)^2+(\sqrt2)^2}$$
$$=\frac12\frac{1}{\sqrt2}\tan^{-1}\left(\frac{\tan x}{\sqrt2}\right)+C$$
$$=\bbox[15px,#ffd,border:1px solid green]{\frac{1}{2\sqrt2}\tan^{-1}\left(\frac{\tan x}{\sqrt2}\right)+C}$$
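If SymPy is available, the result can also be checked symbolically (a sketch: differentiate the antiderivative and compare with the integrand):

```python
# Differentiate the antiderivative and check it matches 1/(cos 2x + 3).
import sympy as sp

x = sp.symbols('x')
F = sp.atan(sp.tan(x) / sp.sqrt(2)) / (2 * sp.sqrt(2))
print(sp.simplify(sp.diff(F, x) - 1 / (sp.cos(2*x) + 3)))   # expected: 0
```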
|
H: Which irrationals become rational for some positive integer power?
Related to Irrationals becoming rationals after being raised to some power. Let $r \in \mathbb{R} \setminus \mathbb{Q}$. True or false: there exists an $n \in \mathbb{N}$ (positive integers) such that $r^n = r \cdot \dots \cdot r \in \mathbb{Q}$. This is clearly true for some irrationals like $\sqrt{2}$ or $a^{1/n}$ (positive integer $a$ such that $a^{1/n} \notin \mathbb{Z}$; see How to prove: if $a,b \in \mathbb N$, then $a^{1/b}$ is an integer or an irrational number?). But is it true for ALL irrationals? If not, can we classify all the irrationals for which the statement is true?
AI: Every number whose $n$'th power is rational is algebraic, with a minimal polynomial that divides $X^n-q$ for some positive integer $n$ and rational $q$. In particular, all its conjugates are $n$'th roots of the same $q$.
EDIT: The minimal polynomial (over the rationals) of an algebraic number $\alpha$ is a polynomial $P(X)$ of lowest possible degree such that the coefficients are rational numbers, the leading coefficient (i.e. the coefficient for the highest power of $X$) is $1$, and $P(\alpha) = 0$.
For example, the minimal polynomial of $\sqrt{2}$ is $X^2 - 2$. The conjugates of $\alpha$ are all the roots of the minimal polynomial: in this example they are $\alpha$ and $-\alpha$. The minimal polynomial of $\alpha$ is the product of $X - r$ for each conjugate $r$ of $\alpha$.
|
H: Prove that $ab$ is a cube of a positive integer.
This is a question from Advanced Math Examination of Vietnam:
Let $a, b, c$ be the three positive integers such that $c+\frac{1}{b}=a+\frac{b}{a}$. Prove that $ab$ is a cube of a positive integer.
First solution I thought about is form the hypothesis to a more similar one by move $\frac{1}{b}$ to the other side to make an inverse expression.
I have tried to form $a.b=c^{3}$ using the hypothesis $c+\frac{1}{b}=a+\frac{b}{a}$ but I can't find a good way to simplify the equation. It's too complicated.
AI: Note that
$$\frac{b}{a}-\frac{1}{b} \in \mathbb Z$$
and hence $ab|b^2-a$. This implies that $b|a$ and $a|b^2$.
This implies that there exist integers $d,e$ such that
$$a=bd \\
b=de \Rightarrow a=d^2e$$
(for the second line, $a\mid b^2$ gives $bd\mid b^2$, hence $d\mid b$).
Simplification suggested by Will Jagy
Therefore, $ab|b^2-a$ means $d^3e^2|d^2e(e-1)$ and hence $de|e-1$. This implies $e|e-1$ from where it follows that $e=1$.
Then: $ab=d^3$.
Note: The simplification suggested by Will Jagy actually proves the following much stronger claim: if $\frac{b}{a}-\frac{1}{b} \in \mathbb Z$ then $a=b^2$.
It is easy to see that the converse is also true.
|
H: Probability questions involving percentages
Lola is obsessed by the colour of her hair. On any given day there is
an 80% chance she will change the colour of her hair for the next day.
Her hair is blond 40% of the time, brown 30% , red 20% and purple for
the remainder. Given Lola has red hair on Friday, what is the probability that
Tomorrow her hair is brown ?
Her hair is not red on Saturday and Sunday AND her hair is a different
colour on Saturday and Sunday.
I have trouble with this probability question, even though I read the answers but I still don't understand their solution. In the solutions for Q1, they simply just wrote $0.8 \times \frac{3}{8}=0.3$, I understand where the $0.8$ came from since there is a 80% chance that she will change her hair. But where did the $\frac{3}{8}$ come from? Thanks.
AI: I think an interesting way to think about your first question is to imagine Lola without hair on Friday. On Saturday we know there is an 80% chance of changing so the first (0.8) is clear. Now if she doesn't have hair, there is a 30% chance of getting brown hair, so (0.8)*(0.3) or $\frac{8}{10}$ * $\frac{3}{10}$, which is likely what you originally thought.
However, we know Lola does have hair and red hair at that. Red hair has a 20% chance of occurring so our formula looks like
$\frac{8}{10} \times \frac{30}{100-20}$, or equivalently, $\frac{8}{10} \times \frac{30}{40+30+10}$.
The $100-20$ means we start from the full $100\%$ but remove the $20\%$ that belonged to red, and $40+30+10$ is just the sum of the remaining probabilities. Either way the second factor is $\frac{30}{80}=\frac{3}{8}$, which is exactly where the $\frac{3}{8}$ comes from!
|
H: about prisoners and selection of numbers
Each of the three prisoners had a natural number written on their foreheads: 1, 2 or 3. Numbers can be repeated. The prisoners see all numbers except their own. After that, everyone tries to guess their number. If someone succeeds, the prisoners will be released, otherwise they will be executed. Before the trial, the prisoners can agree. How can they get out?
(prisoners cannot hear each other during the trial)
I thought that 1st prisoner can do this.
if he sees two identical numbers in front of him, then he will say the same number and thus one of the prisoners will definitely be right if all the numbers are the same, and if the first prisoner sees different numbers, then he will say the third number which was not there and thus the prisoners will be right in the case of three different digits. but after that I don't know how the other 2 prisoners should act, they already know that in the case with three different numbers and the case with three identical numbers, the first one has already taken over, so it is necessary to choose one number from the two that they see. But how can they choose to be right in all cases?
AI: Your solution does not work. If the people get numbers $a,a,b$ the two people who got $a$ will see $a,b$ and guess $c$, while the person who got $b$ will see $a,a$ and guess $a$. Nobody is right.
One way to think about the problem is that if each person guesses randomly, on average there will be $1$ correct guess. You need one correct guess, so you need to make sure you only get one. In your solution, if the numbers are $a,a,a$ or $a,b,c$ everybody will guess right. You can't afford that because you are using up too many right guesses in those cases and you don't have enough left for the other case.
Instead, let one person assume that the sum of the numbers is a multiple of $3$, one assume that the sum is one more than a multiple of $3$, and the last assume that the sum is two more than a multiple of $3$. Now exactly one person will be right however the numbers are distributed. If the numbers are $1,2,3$ in the order given, the person with $1$ will see $2,3$ and assume that the sum is a multiple of $3$, so guess $1$ correctly. The person with $2$ will see $1,3$ and guess $3$, while the person with $3$ will see $1,2$ and guess $2$. I leave it to you to verify this works in all cases.
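For the skeptical, the "sum mod 3" strategy can be verified exhaustively with a few lines of code (a sketch; prisoner $i$ assumes the total is $\equiv i \pmod 3$):

```python
# Exhaustive check: with the mod-3 strategy, exactly one prisoner guesses right in every case.
from itertools import product

for numbers in product([1, 2, 3], repeat=3):
    correct = 0
    for i in range(3):
        others = sum(numbers) - numbers[i]
        guess = next(g for g in (1, 2, 3) if (others + g) % 3 == i)
        correct += (guess == numbers[i])
    assert correct == 1
print("exactly one correct guess in each of the 27 cases")
```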
|
H: Length of coefficient vector of linear combination of unit lengths vector is 1
In Prasolov's "Geometry", problemset 1, problem 1.10 in the solution the author states:
Suppose that a set of unit-length vectors $a_1, \ldots, a_n$ is linearly dependent. Then there exist coefficients $y_1, \ldots, y_n$ such that $\sum_{k=1}^{n}y_ka_k=0_n$ and the sum of the squares of the coefficients equals 1 (i.e. $(y_1,\ldots,y_n)$ is also a vector of unit length).
The space we are talking about is $\mathbb{R}^n$. I'm having trouble proving for myself why such coefficients should exist.
AI: Since the vectors are linearly dependent, you can always find a non-trivial solution to the following system
$$\sum_{i=1}^ny_i\bar{a_i} = \bar{0}$$
Now, let a non-trivial solution be expressed as follows
$$\bar{y_{0}} = (y_1, y_2, y_3..., y_n)$$
Then the following will also be a non-trivial solution
$$\bar{y} = \left(\frac{y_1}{\sqrt{\sum y_i^2}}, \frac{y_2}{\sqrt{\sum y_i^2}}, ...\frac{y_n}{\sqrt{\sum y_i^2}}\right)$$
|
H: If you know 2 of 8 questions but the test has only 4 questions. What is the probability you will know at least one of the questions on the test?
Background:
This is a hypothetical question that and my friend and I have been arguing over all day. We're both upper-year mathematics students and should reasonably be able to come up with the solution, however, we've been arguing about it non stop.
Situation:
There's a test bank with $8$ questions in it. The test will randomly give you $4$ of these questions. If you happen to know $2$ of the $8$ questions ahead of time,
A) What is the probability your test would have at least one of the questions you know?
B) What is the probability your test would have both of the questions you know?
So far we've come up with the solutions for part A) of:
simply $25\%$ because you know $25$ percent of the questions
$39 \%$ because you have a $\frac{2}{8}$ chance of knowing the first question and a $\frac{1}{7}$ of the second and $\frac{2}{8}+ \frac{1}{7} \approx 0.39$
$\frac{1}{16}$ because the test has $\frac{1}{2}$ of the total questions and you have a $\frac{1}{8}$ chance of getting your question
$\frac{1}{8}$ similarly, because you know two of the questions so $2 \times \frac{1}{16}= \frac{1}{8}$
For part B), we think it could be
$\frac{2}{8} \times \frac{1}{7} \approx 3\% $
$\frac{1}{16}$?
We're really lost and at this point, I think we need a third party to arbitrate our discussion haha.
AI: I'd like to offer an alternative approach. I will instead find the probability that neither of the two questions you know is selected, then subtract that from 1. There are $6\choose 4$ ways to pick a set of 4 questions from the 6 questions you don't know, and there are $8\choose 4$ ways to pick a set of 4 questions without restrictions. So, the probability that you know none of the 4 questions is
$$\frac{6\choose 4}{8\choose 4}=\frac{3}{14}\approx 21.43\%$$
Subtracting this from 1 gives a probability of $\frac{11}{14}$ of knowing at least 1 question on the test, or about a $78.57\%$ chance.
I checked this with python, and it numerically lines up.
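The Python check isn't shown above, but something along these lines works (a sketch, with the two known questions labelled 0 and 1):

```python
# Enumerate all 4-question tests from a bank of 8 and count those containing a known question.
from itertools import combinations

known = {0, 1}
tests = list(combinations(range(8), 4))
p = sum(1 for t in tests if known & set(t)) / len(tests)
print(p)   # 11/14 ≈ 0.7857
```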
|
H: Construct a triangle, given the angle at the vertex, the altitude, and the ratio in which its foot divides the base.
The problem is from Kiselev's Geometry exercise 392:
Construct a triangle, given the angle at the vertex, the altitude, and
the ratio in which its foot divides the base.
The chapter is about homothety. My attempt was to first omit the altitude condition and start from an arbitrary segment that is divided by the ratio. From the division point, erect a perpendicular line. The problem will be solved by using homothety if an angle congruent to the given angle is constructed whose vertex lies on the perpendicular line and which passes through the endpoints of the segment. Unfortunately, I could not achieve this.
Any help would be much appreciated.
AI: I assume that you want a purely geometrical construction (not analytical). So you are given an angle $\alpha$. Let's call the vertex $O$.
You can easily create an angle $2\alpha$, with the vertex at $O$.
Draw a circle with the center at $O$, of any radius, say $R$, and call $A$ and $B$ the intersections of the circle with the sides of the angle $2\alpha$. Now any point on the circle on the right side of $AB$ will see the segment at an angle $\alpha$ (on the other side it will be $180^\circ-\alpha$).
Divide the segment $AB$ in the correct ratio. I assume that you have a different segment already divided in this ratio, let's call it $A'B'$, with $C'$ between $A'$ and $B'$, so it is divided in the correct ratio. Move $A'$ to $A$, such that $B'$ is not along $AB$. Draw the $BB'$ line, and a line through $C'$ parallel to it. It will intersect the $AB$ segment at $C$, which will divide $AB$ in the desired ratio.
Draw the perpendicular to $AB$ at $C$, and the intersection with the circle will be point $D$. You have $\angle ADB=\alpha$, $AC/CB$is the correct ratio, and $DC\perp AB$.
The last step is to mark the point $P$ on the ray $DC$ at distance from $D$ equal to the desired altitude. Draw a parallel to $AB$ through $P$. The intersections of this line with $AD$ and $BD$ will give you the other two vertices of your triangle.
|
H: Sets of measure zero on the real line
Is there anything wrong with what I have written here?
Let $D$ be a non-empty subset of the real numbers, with Lebesgue measure $0$.
Say $x$ is in $D$. Since $D$ has measure $0$, it cannot contain any intervals. Therefore,
for any $d > 0$, we know $(x - d,x + d)$ is not contained in $D$. Hence, for any $d > 0$, there is a real number $z$ in $(x - d,x + d)$ so that $z$ is not in $D$.
AI: Yes, the statement is correct: a set $D$ of measure $0$ (a null set) in the reals (or any so-called strictly positive measure space, where $\mu(U) >0$ for any $U$ that is open and non-empty) has empty interior, or $D^\complement$ is dense.
|
H: Given $A,B$ symmetric positive definite matrices, is $B^{-1}A$ always symmetric positive definite?
Given $A$ and $B$ to be symmetric positive definite matrices. Is the product $B^{-1}A$ always positive definite and symmetric?
AI: No for symmetry. Symmetry is not true because:
$(B^{-1} A)^T = A^T (B^{-1})^T = A (B^T)^{-1} = A B^{-1}$
The above only equals the original equation if $A$ and $B^{-1}$ are commutative, which isn't an inherent property of positive definite matrices. The only case I know of in which they are commutative is when they can both be diagonalized and they have the same eigenvector basis. Only the first one is guaranteed for these matrices as symmetric implies diagonalizable.
Further evidence: https://math.vanderbilt.edu/sapirmv/msapir/jan22.html#:~:text=If%20the%20product%20of%20two,BA%20then%20AB%20is%20symmetric.
A question that should address the eigenvalue part: Is the product of symmetric positive semidefinite matrices positive definite?
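A concrete $2\times 2$ example (a sketch, with matrices chosen arbitrarily) makes the failure of symmetry easy to see:

```python
# B^{-1} A need not be symmetric even when A and B are both symmetric positive definite.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # SPD
B = np.array([[1.0, 0.0], [0.0, 4.0]])   # SPD
P = np.linalg.inv(B) @ A
print(P)                                 # [[2.   1.  ] [0.25 0.5 ]]
print(np.allclose(P, P.T))               # False
```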
|
H: Tautology propositions
Anyone knows, considering $a$, $b$ and $c$ as propositions, if those propostions below are both tautologies? What I found out yet is that they are, but im not totally sure. $((a∨b)∧((a→c)∨(b→c)))→c$
$a→((¬b→c)∨(¬b→¬c))$
AI: The first statement is not necessarily true: consider $a$ false, $b$ true and $c$ false. Then $a∨b$ holds and $(a→c)∨(b→c)$ holds (since $a→c$ is vacuously true), yet $c$ is false.
The second one is true, since the consequent $(¬b→c)∨(¬b→¬c)\equiv (b∨c)∨(b∨¬c)$ is always true.
|
H: Is there a one-word term for "grows by square root"?
We can describe a growth relationship $y = x$ as linear (e.g. "linear growth").
We can describe $y = \log(x)$ as logarithmic (e.g. "logarithmic growth").
We can describe $y = x^2$ as quadratic (e.g. "quadratic growth").
We can describe $y = 2^x$ as exponential (e.g. "exponential growth").
Is there a one-word term to describe a growth relationship where $y = \sqrt{x}$ ?
The best suggestion I've seen is "radical growth", which does not seem to be a standard term and I think would cause confusion if used without explanation.
AI: The asymptotic growth of a root is known as "sublinear growth". Another term is "fractional-power" growth but it sounds odd imo.
The sublinear term originates from linear algebra. It can be used not only for square-root but for any $n$-th root complexity.
In strict terms as sublinear function is a function that satisfies the following properties:
Positive Homogeneity: $f(kx) = k^n f(x) \quad n\in\mathbb{R^+_0}$ and
Subadditivity: $f(x+y) \leq f(x) + f(y)$
Roots satisfy both. The proof of positive homogeneity is trivial. As far as the proof of subadditivity is concerned you can check this post
Other seemingly sublinear functions (such as log) do not satisfy these properties therefore, in a sense you can separate them from sublinears.
|
H: existence of a function such that it is equivalent to cosine.
Prove there exist continuous functions $Cos : \mathbb{R} \to \mathbb{R}$ and $Sin : \mathbb{R} \to \mathbb{R}$ such that $Cos(q) = \cos(q)$ and $Sin(q) = \sin(q)$ for all $q \in \mathbb{Q}$.
I have attempted this by noting that cosine and sine are continuous for all real numbers and therfore for the set of $q\in\mathbb{Q}$ we can hold that $Cos(q)=\cos(q)$ and that $Sin(q)=sin(q)$. I am not sure if this is sufficient however. Is correct or is there another way of showing this?
AI: Define $Cos(x):= \cos (x)$ and $Sin(x):= \sin (x)$ for $x \in \mathbb R$ and you are done.
|
H: X uniformly distributed, is X and X to the power of n got correlation or not
Basically I got this question where:
$X\sim U[-C,C]$ where $C$ is a natural number $> 1$.
And the question that asks, name the correct answer:
$X$ and $X^2$ are independent
$X$ and $X^7$ are uncorrelated
$X^5$ and $X^6$ are uncorrelated
And I got an option for "none of above are true".
How to approach this kind of questions? I have no idea how to start.
AI: If $X$ and $X^{n}$ were independent for some $n >1$ then any measurable function of $X$ would also be independent of $X^{n}$, so $X^{n}$ would be independent of itself. This implies that $X^{n}$ is a constant, which is obviously not true, so 1) is false. For the correlation statements, use the symmetry of $U[-C,C]$: every odd moment of $X$ vanishes. Hence $\operatorname{Cov}(X,X^{7})=E[X^{8}]-E[X]E[X^{7}]=E[X^{8}]>0$, so $X$ and $X^{7}$ are correlated and 2) is false, whereas $\operatorname{Cov}(X^{5},X^{6})=E[X^{11}]-E[X^{5}]E[X^{6}]=0$, so $X^{5}$ and $X^{6}$ are uncorrelated and 3) is true.
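A quick Monte Carlo sanity check is consistent with this (a sketch; the choice $C=2$ is arbitrary):

```python
# Sample covariances: Cov(X, X^7) is clearly positive, Cov(X^5, X^6) is near zero.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 1_000_000)
print(np.cov(x, x**7)[0, 1])     # clearly positive -> correlated
print(np.cov(x**5, x**6)[0, 1])  # close to 0 -> consistent with uncorrelated
```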
|
H: How to solve polynomial rational relations for $y$ (e.g $\sqrt{4-3y-y^2} = x(y+4)$)?
From time to time, I struggle to solve polynomial relations for $y$.
A trivial example is :
$$ \frac{y}{x} = x \iff y = x^2$$
Easy.
But consider this relation:
$$ \sqrt{4-3y-y^2} = x(y+4)$$
No matter how much I mess around it, seems impossible to bring it in $y = f(x)$ form.
$ \frac{\sqrt{4-3y-y^2}}{(y+4)}= x $
$ 4 - 3y - y^2 = x^2(y^2 + 8y + 16) \iff (x^2+1)y^2+(8x^2+3)y+(16x^2-4) = 0$
Is there a trivial methodology that I am missing or is it indeed impossible to inverse some relations?
AI: \begin{align*}
\sqrt{4-3y-y^2} &= x(y+4)\\
\sqrt{(1-y)(y+4)} &= x(y+4)\Rightarrow y\in[-4,1],x\in[0,+\infty)\\
\sqrt{y+4}\left(\sqrt{1-y}-x\sqrt{y+4}\right) &=0\Rightarrow \boxed{y=-4}\\
\sqrt{1-y}&=x\sqrt{y+4}\\
\sqrt{\frac{5}{y+4}-1}&=x\\
\boxed{y=\frac5{x^2+1}-4}\in(-4,1]\,\forall\,x\in[0,+\infty)
\end{align*}
|
H: Find the domain and range of $f(x)=\sqrt{(16-x^2)}$
I came up with a wrong solution, can someone tell me where I went wrong?
AI: $\sqrt{x}$ is the principal square root function; it takes nonnegative values.
Hence $y \ge 0$.
I don't see how you get $16-y^2 \ge 16$.
We have $$-4 \le x \le 4$$
$$0 \le x^2 \le 16$$
$$16-16 \le 16-x^2 \le 16-0$$
$$0 \le \sqrt{16-x^2} \le 4$$
Let $y \in [0,4]$, we let $x=\sqrt{16-y^2}$, we can check that $\sqrt{16-x^2}=\sqrt{16-(16-y^2)}=y.$
|
H: Definition for Hermitian inner product?
I have found two different papers (both are well cited) Classification of Self-Orthogonal Codes over $\mathbb{F}_3$ and $\mathbb{F}_4$ (pg.3) and Convolutional and Tail-Biting Quantum Error-Correcting Codes (end of pg.4) that give different definitions for Hermitian inner product in $\mathbb{F}_4$.
The first paper (which has 380+ cites) gives the following:
$$ (\boldsymbol u, \boldsymbol v)_H = u_1v^2_1 + u_2v^2_2 + \ldots +
u_nv^2_n, $$ where $\boldsymbol u = \begin{pmatrix}u_1 & u_2 & \ldots
& u_n\end{pmatrix}$ and $\boldsymbol v = \begin{pmatrix}v_1 & v_2 &
\ldots & v_n\end{pmatrix}$.
The takeaway is that the second parameter (i.e., $\boldsymbol v$) is conjugated and $\boldsymbol u$ is left the same.
The second paper (which has 90 cites) gives the following definition:
The Hermitian inner product of two elements $a, b \in \mathbb{F}_4$ is
defined as $\langle a, b\rangle = a^\dagger b \in \mathbb{F}_4$ where
${}^{\dagger}$ denotes conjugation.
In other words, $a^\dagger = a^2$. The takeaway is that the first argument is conjugated, not the second.
If we try $\langle \bar{\omega}, \omega\rangle$ using the first definition (assuming $u = \begin{pmatrix}\bar{\omega}\end{pmatrix}$ and $\boldsymbol v = \begin{pmatrix}\omega\end{pmatrix}$):
$$\langle \bar{\omega}, \omega\rangle = \bar{\omega}\omega^2 = \bar{\omega}\bar{\omega}=\omega.$$
Using the second definition, we get:
$$\langle \bar{\omega}, \omega\rangle = \bar{\omega}^2\omega = \omega\omega=\bar{\omega}$$
Why are the two definitions different? Are they even describing the same inner product? Which definition should I follow and why not the other?
AI: First of all, please note that the number of citations is not a measure of how correct a paper is.
That said, both definitions are "correct" and they describe the same inner product up to notational differences, but they follow different conventions. In a general mathematical context, it is more common to define Hermitian forms $\langle \cdot, \cdot \rangle$ to be linear in the first argument. In a physics context (especially where quantum mechanics is involved), it is more common to follow the opposite convention whereby $\langle \cdot, \cdot \rangle$ is linear in the second argument (in analogy to "bra-ket" notation).
An analogous conflict occurs with the usage of spherical coordinates: in different subfields of mathematics and the sciences, different conventions are used. All of these are "correct", but in either case the convention that one follows should be made explicit.
I would say that you should follow whichever convention you are more comfortable with. If you are not significantly more comfortable with one than the other, then you could also choose the convention that is more common in the relevant subfield. For instance, I would follow the second convention in the context of quantum-mechanics related fields (such as quantum information theory) and follow the first convention otherwise.
|
H: Forming a 3x3 magic square with digits 1-9, subject to the constraint that sum of digits in each row, column and diagonal must be equal.
The Problem (Chapter 0, Problem 16 from Fomin's Mathematical Circles):
Form a magic square with the digits 1-9; that is, place them in the boxes of a 3x3 table so that all the sums of the numbers along the rows, columns, and two diagonals are equal.
The sum of numbers from 1-9 is 45, which when partitioned into 3(for each row and column) would be 15. Thus, I felt that the sum of all entries in a row/column/diagonal might be 15(this is certainly not a justified argument, just naive intuition). Continuing upon this, I tried several combinations in a trial-and-error fashion. Despite trying more than two dozen combinations, I've failed to reach a satisfactory answer(one time I was able to prove all the sums to be equal to 15, except a single diagonal).
Next, I tried solving it as a system of linear equations. Considering the magic square to be a $3X3$ matrix(I tried to format a matrix here, but there are some rendering issues so assume a standard matrix), I could devise the following system of equations:
Corresponding to the sum of entries in rows,
$$1) a_{11}+a_{12}+a_{13}=k$$
$$2) a_{21}+a_{22}+a_{23}=k$$
$$3) a_{31}+a_{32}+a_{33}=k$$
Corresponding to the sum of entries in columns,
$$4) a_{11}+a_{21}+a_{31}=k$$
$$5) a_{12}+a_{22}+a_{32}=k$$
$$6) a_{13}+a_{23}+a_{33}=k$$
Finally, corresponding to the sum of entries in diagonals:
$$7) a_{11}+a_{22}+a_{33}=k$$
$$8) a_{13}+a_{22}+a_{31}=k$$
I have 8 equations containing 9 unknown variables(although $k$ has an unknown value, it is still a constant). I'm not sure if these equations alone are sufficient to solve the problem. Beyond this point, I'm clueless about proceeding with either a different approach to solving this problem or continuing with this one. I'd like to know if a solution can be obtained using this approach, and if yes, then how? Any other methods which do not involve a lot of higher mathematics will also be appreciated.
EDIT: I've found one possible solution by trial-and-error method. \begin{matrix} 8 & 1 & 6 \\ 3 & 5 & 7\\ 4 & 9& 2\end{matrix}
AI: The solution you found can be shown to be unique.
The first step is to identify the constant $k$. The three rows (or, alternatively, the three columns) add up to $3k$; together, they comprise all nine digits without repetition, so $3k = 1 + 2 + \cdots + 9 = 45$, so $k = 15$.
The digit $5$ must occur in the middle. Only $5$ can participate in a three-way sum to $15$ in as many as four different ways:
$$
1+5+9 = 15 \\
2+5+8 = 15 \\
3+5+7 = 15 \\
4+5+6 = 15
$$
Since the center square participates in four such sums, it must contain the digit $5$.
$$
\begin{array}{|c|c|c|} \hline
\phantom{8} & \phantom{1} & \phantom{6} \\ \hline
\phantom{3} & 5 & \phantom{7} \\ \hline
\phantom{4} & \phantom{9} & \phantom{2} \\ \hline
\end{array}
$$
That means that the other eight digits must occur in pairs on opposite sides of the central $5$: $1$ opposite $9$, $2$ opposite $8$, $3$ opposite $7$, and $4$ opposite $6$.
Of those digits, $1$ and $9$ must occur on opposite sides, because they can each only participate in a three-way sum to $15$ in two different ways, and a corner square would be involved in three such sums. Without loss of generality, put $1$ at the center top, and $9$ at the center bottom.
$$
\begin{array}{|c|c|c|} \hline
\phantom{8} & 1 & \phantom{6} \\ \hline
\phantom{3} & 5 & \phantom{7} \\ \hline
\phantom{4} & 9 & \phantom{2} \\ \hline
\end{array}
$$
The only other sum that $1$ can be involved in is $1+6+8 = 15$. Again without loss of generality, put $8$ at upper left, and $6$ at upper right. That puts $4$ at lower left, and $2$ at lower right.
$$
\begin{array}{|c|c|c|} \hline
8 & 1 & 6 \\ \hline
\phantom{3} & 5 & \phantom{7} \\ \hline
4 & 9 & 2 \\ \hline
\end{array}
$$
That leaves room only for $3$ at center left, and $7$ at center right.
$$
\begin{array}{|c|c|c|} \hline
8 & 1 & 6 \\ \hline
3 & 5 & 7 \\ \hline
4 & 9 & 2 \\ \hline
\end{array}
$$
All other $3$-by-$3$ magic squares involving the digits $1$ through $9$ are identical to this one, up to rotation and reflection.
As to whether a solution can be found with your eight equations in nine unknowns: It requires a method more involved than solution of simultaneous equations, because that will not enforce the rule that all nine digits must be used, exactly once each.
|
H: Suppose that $N$ and $r$ are positive integers. Prove or disprove that if $N$ is an even integer and $r$ is odd, then $\binom{N}{r}$ is even.
Suppose that $N$ and $r$ are positive integers. Prove or disprove that if $N$ is an even integer and $r$ is odd, then $\binom{N}{r}$ is even.
My attempt:
Let $N=2m$ and $r=2k+1$. Then $$\binom{N}{r}=\binom{2m}{2k+1}=\dfrac{(2m)!}{(2(m-k)-1)!(2k+1)!}$$
Also, we know that $\binom{N}{r}$ is always an integer. How do I proceed to show it is even or odd?
Also note that $n!$ is even for all $n\ge2$.
AI: By Legendre's formula, the power of $2$ dividing $n!$ is $$v_2(n!)=\sum_{i=1}^\infty \left\lfloor\frac{n}{2^i}\right\rfloor.$$
So the power of $2$ dividing $\binom{n}{r}=\frac{n!}{r!(n-r)!}$ is
$$\sum_{i=1}^\infty\left( \left\lfloor\frac{n}{2^i}\right\rfloor-\left\lfloor\frac{r}{2^i}\right\rfloor-\left\lfloor\frac{n-r}{2^i}\right\rfloor\right).$$
For each $i$, the term in parentheses is nonnegative since $\lfloor a\rfloor + \lfloor b\rfloor \leq \lfloor a+b\rfloor$ for any $a,b$. Do you see why the term for $i=1$ must be strictly positive, which would imply that the sum is also positive and give the conclusion you want?
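A small empirical check of the statement itself (a sketch, independent of the Legendre-formula argument):

```python
# For even N and odd r with 0 < r < N, C(N, r) should always be even.
from math import comb

assert all(comb(N, r) % 2 == 0
           for N in range(2, 101, 2)
           for r in range(1, N, 2))
print("C(N, r) is even for every even N <= 100 and odd r < N")
```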
|
H: Conjugate in $S_n$ do not imply conjugate in its subgroup.
$(12)(34)$ and $(13)(24)$ are conjugate in $S_4$, but they are not conjugate in $\mathbf V_4$, the Klein four group, since $\mathbf V_4$ is abelian.
I know that
$\alpha ,\beta \in S_n , \alpha \text{ and } \beta \text{ are conjugate in $S_n$}\iff$ $\alpha$ and $\beta$ have the same number of $r$-cycles.
But I'm confused about why $(12)(34)$ and $(13)(24)$ do not have the same number of $2$-cycles in $\mathbf V_4$.
Thanks for helping.
AI: The definition is that $a,b\in G$ are conjugated in $G$ if and only if there is some $g\in G$ such that $b=gag^{-1}$. In specific instances of $G$ this condition may be equivalent to other conditions which, outside of the specific context, are not equivalent to being conjugate. For instance, in an abelian group, two elements are conjugate if and only if they are equal.
|
H: why are $e^{2x}$ and $e^{x^2}$ inequal?
From the index rules I learned from school, $a^{x^2}=a^{2x}$
Does it work the same for the natural constant?
Why is it?
AI: There is a difference between ${(a^b)}^c$ and $a^{(b^c)}$.
The first simplifies to ${(a^b)}^c=a^{bc}$ but the second does not.
If no brackets are used, it is assumed that it means $a^{b^c}=a^{(b^c)}$, the one that does not simplify.
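A tiny numeric example makes the difference concrete:

```python
# (a^b)^c versus a^(b^c): the two towers differ.
print((2**3)**2)   # 64  = 2^(3*2)
print(2**(3**2))   # 512 = 2^9
```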
|
H: The equivalence relation defined by normal form on Von Neumann algebra, its support and the it's representation .
Let M be a von Neumann algebra and $\varphi$ a positve normal form on M.
$N = \lbrace x\in M | \varphi(x^*x)=0\rbrace $ . We denote $M_{\varphi} := M/N$ as the pre-Hilbert space defined by the inner product: $\langle a_{\varphi},b_{\varphi}\rangle = \varphi(b^*a)$, when $a_{\varphi}$ the equivalence class of $a\in M$.
We also denote $s :=s(\varphi)$ , the support of $\varphi$.
Q : I would like to know if $(sa)_{\varphi} = a_{\varphi}$ as equivalence classes.
I do know how to prove that $(as)_{\varphi} = a_{\varphi}$ since
$\varphi((as)^*as) = \varphi(s^*a^*as) = \varphi(sa^*as) = \varphi(a^*a) $ from the definition of the support. but if I am trying to do the same with $(sa)_\varphi$, I have: $\varphi((sa)^*sa) = \varphi(a^*sa)$ which I dont know what to do with.
If I am wrong and $(sa)_{\varphi} \neq a_{\varphi}$ then I have another question about the representation defined by $\varphi$ which we denote as $ \pi :=\pi_{\varphi} \colon M \to \pi_\varphi(M)$.
Q: is $\pi|_{M_s} \colon M_s \to \pi(M)$ surjective?(when $M_s = sMs$).
If $(sa)_\varphi = a_\varphi $ then the proof is immediate since
$\langle \pi(sxs)a_\varphi,a_\varphi\rangle = \langle\pi(s)\pi(x)\pi(s)a_\varphi,a_\varphi \rangle = \langle \pi(x)\pi(s)a_\varphi,\pi(s)a_\varphi \rangle = \langle \pi(x)(sa)_\varphi,(sa)_\varphi \rangle = \langle \pi(x)a_\varphi,a_\varphi \rangle$
and we can conclude that $\pi(sxs) = \pi(x)$ and we have that $\pi|_{M_s}$ is surjective.
For the definition of the support of normal form:
Stratila, Serban; Zsidó, Lászlo, Lectures on von Neumann algebras. (Lectii de algebre von Neumann.), Bucuresti: Editura Academiei Republicii Socialiste Romania. 409 p. Lei 34.00 (1975). ZBL0324.46068.:
AI: Your concern in the first question is not true, let $M=M_2(\Bbb C)$ and $\varphi(a) = a_{11}$ so that $s=|e_1\rangle\langle e_1|$ is the projection onto the first component. Now let $x= |e_2\rangle\langle e_1|$, then:
$$\|[x]-[sx]\|^2= \varphi\big(((1-s)x)^*(1-s)x\big) =\varphi(x^*(1-s)x)=\varphi(|e_1\rangle\langle e_2|e_2\rangle\langle e_2|e_2\rangle\langle e_1|)=\varphi(|e_1\rangle\langle e_1|)=1$$
Then your second question also cannot be true. $\pi(s)$ must be a projection, but it cannot be the identity so it admits a kernel in $\overline{M_\varphi}$.
Since the above example is finite dimensional you have that $M_\varphi$ is already complete, so there must be some $x\in M$, $[x]\neq0$ but $[sx]=0$. So $[x]$ cannot be in the image of any $\pi(sys)$, but $\pi(x)[1]=[x]$, implying that $\pi(x)$ cannot be of the form $\pi(sys)$ for any $y\in M$.
|
H: Differentiation Product Rule Question on $x^2(x-1)^3$
I have a question:
$x^2(x-1)^3$
Which I am supposed to solve using the product rule but it seems that I have hit a dead end with the algebraic part.
Here's my working so far:
$u = x^2$
$u' = 2x$
$v = (x-1)^3$
$v' = 3(x-1)^2$
$(uv)'= (x-1)^32x+x^23(x-1)^2$
This is the part I'm stuck on.
How can I move on?
AI: Now, $$2x(x-1)^3+3x^2(x-1)^2=x(x-1)^2(2(x-1)+3x)=x(x-1)^2(5x-2).$$
|
H: Solve a optimization problem with Lagrange multipliers
Trying to solve the following problem: $$ \begin{cases} \sum_{i=1}^n x_i(\ln x_i - c_i) \to \min
\\
\sum_{i=1}^n x_i = 1
\end{cases}$$
where
$$
x, c \in \mathbb{R}^n, \quad c_i = \text{const} > 0 \ \forall i
$$
I tried Lagrange method:
$$
L(x,\lambda) = \sum_{i=1}^n x_i(\ln x_i - c_i) + \lambda\left(\sum_{i=1}^n x_i - 1\right)
$$
$$
\begin{cases}
\ln x_i =c_i-(1+\lambda)
\\
\sum_{i=1}^n x_i - 1 = 0
\end{cases}
$$
so,
$$
x_i = e^{c_i - (1+\lambda)}
$$
and $\lambda$ is obtained from the equation
$$
\sum_{i=1}^n e^{c_i - (1+\lambda)} = 1
$$
The second derivative matrix is positive definite for all $x_i >0$, therefore this point is the solution.
Could you suggest how to solve the equation for $\lambda$?
AI: From $\sum_{i=1}^n e^{c_i - (1+\lambda)} = 1$ we get $\frac{1}{e^{1+ \lambda}}\sum_{i=1}^n e^{c_i}=1$ thus $e^{1+ \lambda}=\sum_{i=1}^n e^{c_i}$, therefore
$$1 +\lambda= \ln (\sum_{i=1}^n e^{c_i}).$$
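Substituting $\lambda$ back gives the closed form $x_i = e^{c_i}/\sum_{j} e^{c_j}$. If a numerical cross-check is wanted, a SciPy sketch along these lines (with arbitrarily chosen $c$ values) should reproduce it:

```python
# Minimize sum_i x_i (ln x_i - c_i) subject to sum_i x_i = 1 and compare with the closed form.
import numpy as np
from scipy.optimize import minimize

c = np.array([0.5, 1.0, 2.0])
objective = lambda x: np.sum(x * (np.log(x) - c))
constraint = {"type": "eq", "fun": lambda x: np.sum(x) - 1}
x0 = np.ones_like(c) / len(c)
res = minimize(objective, x0, bounds=[(1e-9, 1.0)] * len(c), constraints=constraint)
print(res.x)
print(np.exp(c) / np.exp(c).sum())   # closed-form solution; should agree closely
```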
|
H: Unbounded on every interval except null set but finite a.e
This question originates from Folland's problem 2.25. In this problem, first given $f(x)=x^{-1/2}$ when $0<x<1$, and $0$ otherwise.Then consider $g(x)=\sum_{n}2^{-n}f(x-r_{n})$, where sequence $r_{n}$ is the all the rational number. Then it requires us to prove $g$ is integrable, discontinuous everywhere and unbounded on every interval. We need to show that discontinuity and unboundedness hold even removing a Lebesgue null set.
While I can show $g$ is integrable (thus finite a.e.) and unbounded on every interval, I have doubts about why $g$ is finite a.e. Consider any irrational number $x$: there always exists a sequence of $r_{n}$ such that $x-r_{n}<3^{-2n}$, so that $2^{-n}f(x-r_{n})>(3/2)^{n}$. This shows the unboundedness of $g$; however, since this holds for any irrational number, this function should not be finite a.e., as $\sum_{n}(3/2)^{n}=\infty$ for every irrational number $x$, thus not finite a.e.
I know there must be something wrong in my argument, but I can't spot it. Can anyone tell what is the problem?
AI: The problem with your argument is your $r_n$ is different from the $r_n$'s in the definition of $g$. You have to arrange all rationals in a sequence $(r_n)$ once and for all before defining $g$.
|
H: Critical points of a nonnegative quadratic form on a subspace
Let $Q(x)=x^tAx$ for some square symmetric matrix $A\in R^{n\times n}$, such that $Q(x)\geq0$ for each $x\in R^n$. Let $S$ be an affine subspace of $R^n$. How can I show that if $y$ is a critical point of $Q$ on $S$, then $y$ is a point of global minimum of $Q$ on $S$?
AI: Notice that you have a strictly convex function in the subspace.
|
H: Jordan normal form of $\;\begin{pmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & a & b \end{pmatrix},\; a,b\in\mathbb{R}$
If possible, compute the Jordan normal form of
$\begin{pmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & a & b \end{pmatrix}\in\mathbb{R}^{3\times 3}$ with $a,b\in\mathbb{R}$.
In the case that $a,b=0$ the matrix already has Jordan normal form. However, the case that one or both, $a,b\neq0$ seems more complicated. How do I continue?
Edit: The eigenvalues are $b/2\pm\sqrt{b^2/4+a}$. Using this in order to find the kernel of $(A-\lambda_i)^j$ using Gaussian elimination doesn't seem like the intended approach. That is what I meant.
AI: We can avoid computing generalized eigenvectors. Separate this into $3$ cases.
Case 1: $a = b = 0$. As you said, the matrix is already in Jordan form
Case 2: $a=0$, $b \neq 0$. The matrix is upper triangular, so we quickly see that its only eigenvalues are $0$ and $b$. The block associated with $b$ has size $1$, and because the matrix $M$ has rank $2$, the block associated with $0$ has size $2$.
Now, if $a \neq 0$, then $M$ is block upper triangular with the form
$$
M = \left[\begin{array}{c|cc} 0&1&0\\ \hline 0&0&1\\0&a&b\end{array}\right]
$$
Case 3: If $a = -(b/2)^2 \neq 0$, then the lower-right block has a repeated eigenvalue, and its Jordan form consists of a single block. Thus, the overall Jordan form has a $0$ followed by a size-$2$ block.
Case 4: In the remaining cases, the lower-right block has eigenvalues that are distinct and non-zero. Thus, $M$ has distinct eigenvalues, which means that its Jordan form is diagonal.
As an alternative, it would suffice to note that $M$ is the transpose of the companion matrix associated with the polynomial $p(t) = t^3 - bt^2 - at$. It follows that its Jordan form consists of a single block of maximal size for each of the roots of $p$.
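If you want to spot-check the case analysis, SymPy can compute the Jordan form for particular values of $a$ and $b$ (a sketch; the values below are chosen to land in Cases 3 and 4):

```python
# Jordan forms of the matrix for sample (a, b), matching Cases 3 and 4 above.
import sympy as sp

def jordan(a, b):
    M = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, a, b]])
    return M.jordan_form(calc_transform=False)

print(jordan(-1, 2))   # a = -(b/2)^2: a 1x1 block for 0 and a 2x2 block for 1
print(jordan(1, 2))    # distinct eigenvalues 0, 1±sqrt(2): diagonal Jordan form
```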
|
H: Derivative of random normal times indicator function
I have to find $\frac{dE[f(X)]}{dX}$ where $f(X) = X1_{X>a}$ where $X \sim N(0,1)$ , $1_{X>a}$ is an indicator function taking value 1 if $X>a$ and $0$ otherwise, and $a$ is some constant. I have trouble understanding how to differentiate a random variable (standard normal in this case). On the top of it, an indicator function of random variable is involved.
My approach: Use simple chain rule first and get $\frac{dE[f(X)]}{dX} = E[X\delta_{X} + 1_{X>a}]$ where $\delta_{X}$ is the delta function. Is this in the right direction? Any hint is appreciated.
AI: $E(f(X))=\int_a^{\infty} x \phi (x)\,dx$ where $\phi$ is the standard normal density. The derivative of this w.r.t. $a$ is $-a\phi(a)=-\frac{a}{\sqrt{2\pi}} e^{-a^{2}/2}$.
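A numerical cross-check of this formula (a sketch using SciPy, with an arbitrary value of $a$):

```python
# Compare a central-difference derivative of E[X 1_{X>a}] with the closed form -a*phi(a).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

E = lambda a: quad(lambda x: x * norm.pdf(x), a, np.inf)[0]
a, h = 0.7, 1e-4
print((E(a + h) - E(a - h)) / (2 * h))   # numerical derivative
print(-a * norm.pdf(a))                  # closed form from the answer
```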
|
H: Vector notation for $k>n$ vectors in $\Bbb{R^n}$
I'm reading Advanced calculus of several variables by Edwards, C.Henry. On page 6, the author is proceeding with the proof that any $n+1$ vectors in $\Bbb{R^n}$ are linearly dependent and writes :
"Suppose that $v_1,...,v_k$ are $k>n$ vectors in $\Bbb{R^n}$ and write $$v_j = (a_{1j},...,a_{nj})$$
where $j=1,...,k$."
I have understood vectors of $\Bbb{R^n}$, as $n$-tuples of real numbers as in $(x_1, ... , x_n)$. What is the purpose of introducing row column notation to describe them? I think it might have something to do with the next part ;
"...We want to find real numbers $x_1,...,x_k$, not all zero, such that
$\bar0=x_1v_1 + x_2v_2 + ... + x_kv_k$."
Afterwards this vector equation can be written as a system of linear equations with more unknowns than equations ; a non-trivial solution exists and the proof is complete.
AI: I don't think the notation means row column vector. It's just a way to uniquely identify each real number. In other words, $a_{ij} \in \mathbb{R}$ is just the $i$-th entry of the $j$-th vector. This is useful because it easily lets you see the total number of unknowns.
If we have $k$ vectors in $\mathbb{R}^n$, then this notation implies $i \in \mathbb{N}\cap [1, n]$ and $j \in \mathbb{N}\cap [1, k]$. Since in the proof we want to see how many equations vs unknowns we have, using this notation we can expand the equation $\vec{0} = \sum_{i=1}^{k} x_i v_i$, and obtain $n$ equations of the form
$$
0 = x_1 a_{i 1} + x_2 a_{i 2} + ... + x_k a_{i k} \quad 1\le i \le n
$$
but since $k>n$, here we see that indeed we have more unknowns (the $x_k$'s) than equations.
|
H: Definition of addition and multiplication on $ℕ$ using recursion
My book, Classic Set Theory: For Guided Independent Study, gives me this theorem:
"let $y_0$ be any element of $ℕ$ and $h"ℕ×ℕ×→ℕ$ a function on pairs $(x,y)∈ℕ×ℕ$. Then there exists a unique function $f:ℕ→ℕ$ such that $f(0)=y_0$ and $f(n^+)=h(n,f(n))$ for all $n∈ℕ$"
($n^+$ is the successor of the number $n$, which in this case, since the natural numbers are defined as sets, is $n^+=n∪\{n\}$)
then it says:
"for defining addition, the trick is to use recursion to define $m+n$ for a fixed $m$ and all $n$. The $f$ in the theorem will be defined so that $f(n)$ is to be regarded as $m+n$. To emphasize this, we shall refer to this $f$ as $f_m$."
then it defines addition in this way:
"$f(0)=m$ and $f_m(n^+)=(f_m(n))^+$
I understand this last part, since $m+n^+=m+n+1=(m+n)+1=(m+n)^+$, but I don't understand what $h(n,f(n))$ is.
It also gives this exercise (you don't need to read this part, but maybe it can help you answer the question), which I actually don't understand...
"Define $h(x,y)$ so that $f_m(n^+)=h(m,f_m(n))$"
the solution is:"$h(x,y)=y^+$"(which comes from the definition above: $f(n^+)=h(n,f(n))$ but since $f_m(n^+)=(f_m(n))^+$ we have $h(n,f(n))=(f_m(n))^+$)
What I don't get about this exercise is the first "$m$" in $h(m,f_m(n))$; I don't know if this is a typo or not. What I thought is that since $m$ is fixed and the theorem says "for all $n$", you can write it like that, but that doesn't really make sense, since if $n$ appears multiple times in $h$ you have to replace it throughout the equation.
If it can be useful, multiplication is defined using the definition of addition and that theorem as $f_m(0)=0$ and $f_m(n^+)=f_m(n)+m$ where $f_m(n)$ means $m·n$
Could you guys help me out please? Could you tell me what $h(n,f(n))$ means in the definition of addition (and if you want also multiplication)?
AI: First, the Theorem you gave is a particular case of the Recursion theorem
Let $E$ be a set. If there exist $x_0\in E$ and $h:E\to E,$ then there exists a unique $f: \Bbb N\to E$ with $f(0)=x_0$ and $f(n^+)=h(f(n))$.
Notice that there is $S:\Bbb N\to \Bbb N$ defined by $S(n)=n^+$ for all $n\in \Bbb N$ (the successor function).
For any $m \in \Bbb N$. Addition to $m$ is usually defined as $f_m:\Bbb N\to \Bbb N$ by $f_m(0)=m$ and $f_m(n^+)=S(f_m(n))=f_m(n)^+$ (i.e $m+0=m, m+1=m+1, m+2=(m+1)+1.... )$
In your book, the author defined $h:\Bbb N\times \Bbb N\to \Bbb N$ by $h(n,f(n))=f(n)^+$ for all $n$, which uses $\Bbb N\times \Bbb N$ as the domain to keep track of $n$, instead of just $\Bbb N$. I think it is not necessary, and the $S$ defined above is good enough.
For multiplication to $m$:
there is $M_m:\Bbb N\to \Bbb N$ defined by $M_m(n)=n+m$ (with $+$ as defined above).
Multiplication is defined as $g_m:\Bbb N\to \Bbb N$ by $g_m(0)=0$ and $g_m(n^+)=M_m(g_m(n))=g_m(n)+m$ $(m\times 0=0,\ m\times 1=(m\times0)+m,\ m\times 2=(m\times1)+m,\ldots)$
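For example, just unfolding these definitions for $m=2$: $f_2(0)=2$, $f_2(0^+)=f_2(0)^+=3$ and $f_2\big((0^+)^+\big)=f_2(0^+)^+=4$, i.e. $2+2=4$; similarly $g_2(0)=0$, $g_2(0^+)=g_2(0)+2=2$ and $g_2\big((0^+)^+\big)=g_2(0^+)+2=4$, i.e. $2\cdot 2=4$.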
|
H: Define a linear functional $T$ on $V$ by $Tv$ = $\langle v, u\rangle$. What is $T^∗ (\alpha)$ for a scalar $\alpha$ where $T^*$ is the adjoint.
Now I do understand that the question might involve using Riesz representation as it involves a linear functional and we know that it can be written using an inner product. So $u$ is the representer of the linear functional in this case. So I think I might be able to find $u$ iff I know the orthonormal basis of $V$(which is not given). That brings me to my current roadblock.
Also even if I were to assume the orthonormal basis how would I use it to figure out $T^*$($\alpha$). Is there some kind of relation between them.
PS: I know what adjoint is but how do I use it to figure out the adjoint of $T^*(\alpha)$.
AI: By definition $\langle T^{*}\alpha, x \rangle= \alpha Tx$ since the inner product of two scalars is just their ordinary product. Hence $\langle T^{*}\alpha, x \rangle= \alpha \langle x, u \rangle$ which shows that $T^{*}\alpha=\alpha u$. [This proof is when the scalar field is $\mathbb R$. I will let you show that the answer is same when the scalar field is $\mathbb C$].
|
H: How to show given function in $L^r(\Omega)$?
I wanted to show that $h(x-y)=|x-y|^{n(\mu-1)}\in L^r(\Omega)$ where $\Omega$ is bounded.
$\Omega\subseteq B(x,R)$ for some $R$.
So I have to show that $\int_{B(x,R)}|h|^r<\infty $.
$\int_{B(x,R)}|h|^r=\int_0^R\int_{\partial B(x,k)}|k|^{nr(\mu-1)}dSdk=\int_0^R k^{nr(\mu-1)}nw_nk^{n-1}dk$
$=\int_0^R k^{nr(\mu-1)+n-1}nw_ndk=R^{nr(\mu-1)+n}nw_n<\infty$
Is this true ? How to make sure $nr(\mu-1)+n-1>0$.
Any Help will be appreciated
AI: $\int_0^{R}k^{t}\,dk=\frac{R^{t+1}}{t+1} <\infty$ whenever $t >-1$. So you only need $nr(\mu -1)+n-1 >-1$, i.e. $nr(\mu -1)+n >0$. This follows from the definition of $r$ and the fact that $\mu >\frac 1 p-\frac 1 q$.
|
H: Solving the quadratic equation $a X^2 + a Y^2 + 2 b X + 2 c Y + d = 0$.
I'm trying to solve the following quadratic equation, but I have no idea how:
$$a X^2 + a Y^2 + 2 b X + 2 c Y + d = 0,$$
where the coefficients are real numbers. Can someone give me a hint?
AI: Find solutions of $ax^2+ay^2+2bx+2cy+d=0$, where $a,b,c,d\in\mathbb R$.
Meant to be a comment
Assuming $b^2+c^2\ge ad$ and $a\neq 0$, you can always choose $a>0$ and then
\begin{align*}
\left(x+\frac ba\right)^2+\left(y+\frac ca\right)^2&=r^2\\
x^2+y^2+\frac {2b}{a}x+\frac {2c}{a}y+\frac {b^2+c^2}{a^2}&=r^2\\
ax^2+ay^2+2bx+2cy+\underbrace{\left(\tfrac{b^2+c^2}{a}-ar^2\right)}_{d}&=0\\
\end{align*}
is a real circle with center $\left(-\frac ba,-\frac ca\right)$ and $r=\frac{\sqrt{b^2+c^2-ad}}a$.
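For example, taking $a=1$, $b=-1$, $c=-2$, $d=1$ (so that $b^2+c^2\ge ad$), the equation reads $x^2+y^2-2x-4y+1=0$, i.e. $(x-1)^2+(y-2)^2=4$: the circle with center $\left(-\frac ba,-\frac ca\right)=(1,2)$ and radius $r=\frac{\sqrt{b^2+c^2-ad}}a=2$.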
|
H: To find supremum of this
How do I find supremum of set ?
$(0,1) \cap\Bbb{Q}$, where $\Bbb Q$ is the set of all rationals. How will the answer change if the rationals are replaced by the irrationals?
I know the supremum of $(0,1)$ is $1$. How do I proceed?
Thank you.
AI: $(0,1)\cap Q$ is the set of rational numbers between 0 and 1 (not including 0 and 1). We can find both rational numbers and irrational numbers arbitrarily close to 1 so the supremum, in both cases, is 1 (the answer doesn't change). 1 is not an irrational number, but the supremum of a set does not have to be in that set.
|
H: Value of $\frac{\partial }{\partial x}\left(f\left(x,y\right)\right)$ at $(0,1)$
$f\left(x,y\right)=e^{x+y}\left(x^{\frac{1}{3}}\left(y-1\right)^{\frac{1}{3}}+y^{\frac{1}{3}}\left(x-1\right)^{\frac{2}{3}}\right)$
What's the value of $\frac{\partial }{\partial x}\left(f\left(x,y\right)\right)$ at $(0,1)$? There are two solutions below. Why are the answers different?
Answer 1:
$f\left(x,1\right)=e^{x+1}\left(x-1\right)^{\frac{2}{3}}$
$\frac{\partial }{\partial \:x}\left(f\left(x,1\right)\right)=e^{x+1}\left(x-1\right)^{\frac{2}{3}}+\frac{2}{3}e^{x+1}\left(x-1\right)^{-\frac{1}{3}}$
so $\frac{\partial}{\partial x}\left(f\left(0,1\right)\right)=\frac{e}{3}$
Answer 2:
$\lim_{\Delta x\to 0}\frac{f\left(0+\Delta x,1\right)-f\left(0,1\right)}{\Delta x}$
$=\lim_{\Delta x\to 0}\frac{e^{\Delta x+1}\left(\Delta x-1\right)^{\frac{2}{3}}-e}{\Delta x}$
$\lim_{\Delta x\to 0}\frac{e\left(\left(\Delta x-1\right)^{\frac{2}{3}}-1\right)}{\Delta x}$
$=\lim_{\Delta x\to 0}\frac{e\left(\left(1-\Delta x\right)^{\frac{2}{3}}-1\right)}{\Delta x}=\lim_{\Delta x\to 0}\frac{e\left(1-\frac{2}{3}\Delta x-1\right)}{\Delta x}$
$=-\frac{2}{3}e$
AI: In answer 1 you mistakenly go from
$$\require{color}
f(x,1)=e^{x+1}(x-1)^{{\color{red}2}/3}
$$
to
$$
\frac{\mathrm{d}}{\mathrm{d}x}[f(x,1)]=\frac{\mathrm{d}}{\mathrm{d}x}[e^{x+1}(x-1)^{{\color{red}1}/3}]
$$
(plus you forgot the exponential factor).
Doing it correctly would give answer 1 as
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x}[f(x,1)]&=\frac{\mathrm{d}}{\mathrm{d}x}[e^{x+1}(x-1)^{2/3}]\\
&=e^{x+1}(x-1)^{2/3}+\frac23e^{x+1}(x-1)^{-1/3}\\
\therefore\left.\frac{\mathrm{d}}{\mathrm{d}x}\right\vert_{x=0}[f(x,1)]&=e^{0+1}(0-1)^{2/3}+\frac23e^{0+1}(0-1)^{-1/3}=\dots
\end{align*}
Similarly, in answer 2, you forget the factor $e^{\Delta x}$ when you go from
$$
\lim_{\Delta x\to 0}\frac{e^{\Delta x+1}(\Delta x-1)^{\frac{2}{3}}-e}{\Delta x}$$
to
$$\lim_{\Delta x\to 0}\frac{e\big((\Delta x-1)^{\frac{2}{3}}-1\big)}{\Delta x}$$
when it is, in fact,
$$\lim_{\Delta x\to 0}\frac{e\big(e^{\Delta x}(\Delta x-1)^{\frac{2}{3}}-1\big)}{\Delta x}.$$
|
H: How to compute the series: $\sum_{n=0}^{\infty} (-1)^{n-1}\binom{1/2}{n}$
I'm wondering about how to show compute this series: $$\sum_{n=0}^{\infty}(-1)^{n-1}\binom{1/2}{n}$$
My approach was to use the general formula of the binomial series, which is: $$(1+z)^r=\sum_{k=0}^{+\infty}z^{k}\binom{r}{k}$$
Yet this can't be used because in this case, we have $|z|=1$.
Thus, is there any method that I can use for this?
AI: You 'incorporate' it into the binomial coefficient, e.g. for
$$
(1+z)^{-\frac{1}{2}} = \sum_{k=0}^{\infty} \binom{-\frac{1}{2}}{k}z^k
$$
Here $\binom{-\frac12}{k}=(-1)^{k}\frac{(2k-1)!!}{k!\,2^k}$.
|
H: $F= \bigcap_{i=1}^{\infty} F_i$ isn't necessarily connected where $F_{i+1} \subseteq F_i$ and $F_i \subseteq \mathbb{R}^2$ are closed and connected
In my attempt, I first show that $F$ is closed, this is since we can write $F= \bigcap_{i=1}^{\infty} F_i = (\bigcup_{i=1}^{\infty} F_i^C)^C$ and $\bigcup_{i=1}^{\infty} F_i^C$ is a union of open sets, hence it is open, hence $F=(\bigcup_{i=1}^{\infty} F_i^C)^C$ must be closed.
From here, I am trying to construct an example. I was thinking of $F_i$ being two disjoint disks that are connected with a thick line between them, such that as $i$ grows the line becomes thinner and its thickness tends to zero. The problem is that in this case $F$ still has a one-point width line connecting the disks and this seems not good enough.
I will be happy for any help on this, or for a better example (assuming the standard topology on $\mathbb{R}^2$).
AI: Consider taking $F_i$ as the closed upper half-plane with $(0,1)\times [0,i)$ removed.
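Spelling this out: $F_i=\{(x,y):y\ge 0,\ x\le 0\}\cup\{(x,y):y\ge 0,\ x\ge 1\}\cup\{(x,y):y\ge i\}$, which is closed and connected (the first two pieces both meet the third), and $F_{i+1}\subseteq F_i$. The intersection is $F=\{(x,y):y\ge 0,\ x\le 0\}\cup\{(x,y):y\ge 0,\ x\ge 1\}$, a union of two non-empty disjoint closed sets, hence not connected.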
|
H: Why does $\gcd(c_1,c_2,\ldots ,c_k)=1\Rightarrow \gcd(c_1-c_2,c_2, \ldots ,c_k)=1$?
Let $c_i\in \mathbb{N}$ and $c_1\geq c_2$. Why does $\gcd(c_1,c_2,\ldots ,c_k)=1\Rightarrow \gcd(c_1-c_2,c_2, \ldots ,c_k)=1$?
AI: Hint: Suppose $\gcd(c_1-c_2,c_2,\cdots ,c_k)>1$; then there exists a prime $p$ such that $p\mid c_1-c_2$ and $p\mid c_j$ for all $2\leq j\leq k.$ Conclude that $p\mid c_1.$
|
H: plot of $\sin(x) + \sin(y)= \cos(x) + \cos(y)$
I was playing around with implicit plots of the form $f(x,y) = g(x,y)$, and I noticed that if you plot in the plane the following equation: $\sin(x) + \sin(y)= \cos(x) + \cos(y)$ you get the following graph:
My question is: why do these trigonometric functions give us these squares spanning the entire plane?
AI: Using Prosthaphaeresis Formulas
$$2\sin\dfrac{x+y}2\cos\dfrac{x-y}2=\cos\dfrac{x+y}2\cos\dfrac{x-y}2$$
If $\cos\dfrac{x-y}2=0\implies\dfrac{x-y}2=(2n+1)\dfrac\pi2, x-y=(2n+1)\pi$
else $\sin\dfrac{x+y}2=\cos\dfrac{x+y}2\iff\tan\dfrac{x+y}2=1\implies\dfrac{x+y}2=m\pi+\dfrac\pi4\iff x+y=\dfrac{(4m+1)\pi}2$
So we are getting continuous perpendicular & equidistant straight lines.
In the first case, the distance between two consecutive lines is $$\dfrac{(2(n+1)+1)-(2n+1)}{\sqrt2}\cdot\pi$$
and in the second, $$\dfrac{2\pi}{\sqrt2}$$
So, we get infinite number of squares with each side $=\sqrt2\pi$
|
H: Convolution: Integral vs. Discrete sum
I recently stumbled across a question which really confused my understanding of convolution. It's the relation between the continuous integral and the discrete counterpart I don't get.
What I learned in school was:
$\int_{a}^{b} f(x) dx = \lim_{\Delta x\to 0} \sum_{i=1}^{n} f(x_i) \cdot \Delta x_i$
Why is the discrete version of a convolution integral for two series $f$ and $g$ defined as $(f * g)(n) = \sum_{k} f(k) \cdot g(n-k)$ and not some kind of "area under the curve" $\sum_{k} f(k) \cdot g(n-k) \cdot \Delta x_k$?
So why can't I say for a given $t$: $\int_{0}^{t} h(\tau) d\tau = \lim_{\Delta \tau\to 0} \sum_{i=1}^{n} h(\tau_i) \cdot \Delta \tau_i$ with $h(\tau_i) = f(\tau_i) \cdot g(t - \tau_i)$?
AI: In fact, the Convolution Integral between two functions $f(t)$ and $g(t)$ is defined as
$$(f*g)(t)=\int_{-\infty}^\infty f(\tau)g(t-\tau)\,d\tau$$
If both $f(t)$ and $g(t)$ are causal functions (i.e., $f(t)$ is causal if $f(t)=0$ for $t<0$), then
$$\begin{align}
(f*g)(t)&=\int_{0}^t f(\tau)g(t-\tau)\,d\tau\\\\
&=\lim_{\max_{i\in[1,n]}(\Delta \tau_i)\to 0}\sum_{i=1}^n f(\tau_i)g(t-\tau_i)\Delta \tau_i\tag1
\end{align}$$
For the discrete case, we sample $f(t)$ and $g(t)$ at integer values of $t$. So, in $(1)$, we set $t=n$.
Moreover, we define $f(t)$ and $g(t)$ in terms of a train of Dirac Deltas $\delta(t-k)$ with weights $f(k)$ and $g(k)$ so that $f(t)=\sum_{m=1}^n f(m)\delta(t-m)$, and $g(t)=\sum_{\ell=1}^n g(\ell)\delta(t-\ell)$ in the integral in $(1)$. Proceeding, we find that
$$\begin{align}
(f*g)(n)&=\int_{0}^t f(\tau)g(t-\tau)\,d\tau\\\\
&=\int_{0}^n \sum_{m=1}^n f(m)\delta(\tau-m)\sum_{\ell=1}^n g(\ell)\delta(n-\tau-\ell)\,d\tau\\\\
&=\sum_{m=1}^n f(m)\sum_{\ell=1}^n g(\ell)\underbrace{\int_0^n \delta(\tau-m)\delta(n-\tau-\ell)\,d\tau}_{=1\,\,\text{for}\,\,\ell=n-m\,\,\text{and otherwise}\,\,=0}\\\\
&=\sum_{m=1}^n f(m)g(n-m)
\end{align}$$
|
H: Convergence of $\sum\limits_{n=1}^\infty\left\{\frac{1\cdot 3\dots 2n-1 }{2\cdot 4\dots 2n}\cdot\frac{4n+3}{2n+2}\right\}^2$
Show whether the infinite series
$\sum\limits_{n=1}^\infty\left\{\frac{1\cdot3\dots2n-1 }{2\cdot 4\dots2n}\cdot\frac{4n+3}{2n+2}\right\}^2$
converges.
My thought:
When $2n=2^k$, $\frac{1\cdot 3\dots2n-1 }{2\cdot 4\dots2n}\cdot \frac{4n+3}{2n+2}
\approx (1-1/4)^1\cdot (1-1/8)^2\dots(1-1/(2^k))^{2^{k-2}} \approx (1-1/4)^{k-1},$
and so the $n$th item =
$((1-1/4)^{k-1})^2=(9/16)^{k-1}$.
When $2n \neq 2^k$, we have $2^{k-1}<2n<2^k$,
\begin{align*}
\frac{1\cdot 3\dots2n-1 }{2\cdot 4\dots2n}\cdot \frac{4n+3}{2n+2}
\approx (1-1/4)^1\cdot (1-1/8)^2\dots(1-1/(2^{k-1}))^{2^{k-3}}\cdot(1-1/(2^k))^{n-(2^{k-2})}\\
\approx (1-1/4)^{k-2}\cdot (1-1/(2^k))^{n-(2^{k-2})},
\end{align*}
where $0<n-2^{k-2}<2^{k-2}$.
Then summing the $(2^{k-2}+1)$-th to the $n'$-th terms, for $2^{k-1}<2n'\le 2^k$,
\begin{align*}
\sum\limits_{n=2^{k-2}+1}^{n'}\left\{\frac{1\cdot3\dots2n-1 }{2\cdot 4\dots2n}\cdot \frac{4n+3}{2n+2}\right\}^2
\approx \sum\limits_{n=2^{k-2}+1}^{n'} (1-1/4)^{2(k-2)}\cdot(1-1/(2^k))^{2(n-(2^{k-2}))}\\
=(1-1/4)^{2(k-2)}\cdot \frac{(1-1/(2^k))^2}{1-(1-1/(2^k))^2}\cdot(1-(1-1/(2^k))^2)^{n'-(2^{k-2})}\\
=(9/16)^{k-2}\cdot (2^{k-1}-1)\cdot \left(1-\frac{n'-(2^{k-2})}{2^{k-1}}\right),
\end{align*}
which, when $2n'=2^k$, approximates
$(9/16)^{k-2}\cdot (2^{k-1}-1)\cdot(1-1/2)\approx (9/16)^{k-2}\cdot (2^{k-2}).$
(All the terms approximate $(9/16)^{k-1}$, and so the sum is roughly $(9/16)^{k-1}\cdot2^{k-2}$. This gives a somewhat larger estimate.)
Therefore the infinite series equals
$$\sum\limits_{k=2}^{\infty}\sum\limits_{n=2^{k-2}+1}^{n'} \approx \sum\limits_{k=2}^{\infty} (9/16)^{k-2}\cdot (2^{k-2})=\sum\limits_{k=2}^{\infty} (9/8)^{k-2},$$
which diverges, and so the series possibly diverges.
Given that the final approximation is of the form $\sum a^n$ with $a$ near $1$, a small difference in the approximations above could change the convergence. If the series were in fact convergent and the approximations made it look divergent, which approximation would be responsible? Besides, are there other methods?
AI: $1\times3\times\ldots\times (2n-1)=\frac{1\times2\times\ldots\times 2n}{2\times 4\times\ldots\times 2n}=\frac{(2n)!}{2^nn!}$. Thus
$$ \frac{1\times3\times\ldots\times(2n-1)}{2\times4\times\ldots\times 2n}=\frac{(2n)!}{4^n(n!)^2} $$
Now, using Stirling's approximation $n!\underset{n\rightarrow +\infty}{\sim}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, we have
$$ \frac{1\times3\times\ldots\times(2n-1)}{2\times4\times\ldots\times 2n}\frac{4n+3}{2n+2}\underset{n\rightarrow +\infty}{\sim} 2\frac{\sqrt{4\pi n}\left(\frac{2n}{e}\right)^{2n}}{4^n\times 2\pi n\left(\frac{n}{e}\right)^{2n}}\underset{n\rightarrow +\infty}{\sim}\frac{2}{\sqrt{n\pi}} $$
and thus the terms of your series behave like $\frac{4}{\pi n}$, so the series diverges by comparison with the harmonic series. Your approximation does not work since you end up with a sum of the form $\sum a^n$, which cannot lead to partial sums of order $\frac{4\ln n}{\pi}$ (using $\sum_{k=1}^n\frac{1}{k}\underset{n\rightarrow +\infty}{\sim} \ln n$).
|
H: Convergence of martingales is a martingale
I am reading the book "Introduction to Stochastic Integration (Second Edition)" by K.L. Chung and R.J. Williams. I have a question about the proof of Proposition 1.3 (on page 13) in that book. First, here are two definitions just so we are on the same page:
Definition. A collection $\{M_t, \mathcal{F}_t, t\in\mathbb{R}_+\}$ is called a martingale iff
$M_t\in L^1$ for each $t$,
$M_s = E(M_t\vert \mathcal{F}_s)$ for all $s<t$.
Definition. For $p\in [1,\infty)$, $M=\{M_t,\mathcal{F}_t, t\in\mathbb{R}_+\}$ is called an $L^p$ martingale iff it is a martingale and $M_t\in L^p$ for each $t$.
With these definitions in mind, the following is the statement and the proof of Proposition 1.3 (verbatim from the book):
Proposition 1.3. Let $p\in [1,\infty)$. Suppose $\{M_t^n, \mathcal{F}_t, t\in\mathbb{R}_+\}$ is an $L^p$-martingale for each $n\in\mathbb{N}$, and for each $t$, $M_t^n$ converges in $L^p$ to $M_t$ as $n\rightarrow\infty$. If $\mathcal{F}_0$ contains all of the $P$-null sets in $\mathcal{F}$, then $\{M_t, \mathcal{F}_t, t\in\mathbb{R}_+\}$ is an $L^p$-martingale.
Proof. It suffices to verify condition (2) in the definition of a martingale. Fix $s<t$ in $\mathbb{R}_+$. For each $n$, $$M_s^n = E(M_t^n\vert\mathcal{F}_s).$$ The left hand side above converges in $L^p$ to $M_s$, by hypothesis, and by Proposition 1.2, the right side converges to $E(M_t\vert \mathcal{F}_s)$ in $L^p$. Hence $$M_s = E(M_t\vert \mathcal{F}_s)\quad\text{a.s.}$$ If $\mathcal{F}_0$ contains all of the $P$-null sets in $\mathcal{F}$, it follows that $M_s\in\mathcal{F}_s$ and then the a.s. above may be removed. QED.
I understand the first sentence (condition (1) follows since $L^p\subseteq L^1$ for $1\le p\le \infty$ in probability spaces). The second sentence holds by definition of a martingale. The first half of the third sentence is clear, and the second half of the third sentence is true directly from Proposition 1.2 in the book, which is the following:
Proposition 1.2. Suppose $\{X_n\}$ converges in $L^p$ to $X\in L^p$ for some $p\in [1,\infty)$. Then for any sub-$\sigma$-field $\mathcal{G}$ of $\mathcal{F}$, $\{E(X_n\vert\mathcal{G})\}$ converges in $L^p$ to $E(X\vert\mathcal{G})$.
The fourth sentence doesn't make perfect sense to me. In particular, I do not understand exactly why
$$M_s^n\rightarrow M_s\text{ in } L^p\quad \text{and} \quad E(M_t^n\vert\mathcal{F}_s)\rightarrow E(M_t\vert\mathcal{F}_s)\text{ in } L^p\implies M_s = E(M_t\vert \mathcal{F}_s)\text{ a.s.}$$
I guess I do not really understand what the "a.s." is referring to or why we get that. Is it saying that for any representative of the equivalence class of $\mathcal{F}_s$-measurable functions satisfying the martingale property, it only equals $M_s$ almost surely? (The weird thing to me is that any two representatives of $E(M_t\vert\mathcal{F}_s)$ only equal each other almost surely, so is the a.s. in the proof "more off" in some sense?) It seems to suggest that $M_s$ is not necessarily measurable with respect to $\mathcal{F}_s$, but I do not understand why $M_s$ may not be. On a similar note, $M_s^n$ is a representative of $E(M_t^n\vert\mathcal{F}_s)$, but the limit of $E(M_t^n\vert\mathcal{F}_s)$ is $\mathcal{F}_s$-measurable while the limit of $M_s^n$ isn't? Finally, I do not really understand the fifth sentence. Why would completing the filtration fix this?
I apologize for rambling, and I am not sure if I articulated my question(s) well or not, but the point is I do not know exactly what is going on in the fourth and fifth sentences. I want to understand it down to the last $\omega$, so clarity and details, rather than brevity and terseness, would be very much appreciated.
AI: Since $M_t^n\to M_t$ in $L^p$, there is a subsequence $(M_t^{n_k})$ s.t. $M_{t}^{n_k}\to M_t$ a.s. Since $\mathbb E[M_t^{n_k}\mid \mathcal F_s]\to \mathbb E[M_t\mid \mathcal F_s]$ in $L^p$, there is a subsequence $(\mathbb E[M_t^{n_{k_\ell}}\mid \mathcal F_s])$ s.t. $$\mathbb E[M_t^{n_{k_\ell}}\mid \mathcal F_s]\to \mathbb E[M_t\mid \mathcal F_s]\quad a.s.$$
Therefore $$M_s=\lim_{\ell\to \infty }M_{s}^{n_{k_\ell}}=\lim_{\ell\to \infty }\mathbb E[M_t^{n_{k_\ell}}\mid \mathcal F_s]=\mathbb E[M_t\mid \mathcal F_s]\quad a.s.$$
and thus $M_s=\mathbb E[M_t\mid \mathcal F_s]$ a.s.
|
H: Analytic and bounded in a region with two holes is constant
Let $p,q\in\mathbb{C}, p\neq q$ and $G = \mathbb{C}\setminus\left\{p,q\right\}$ and
$f: G\to \mathbb{C}$ analytic and bounded. Show that $f$ is constant.
Hi. How could I get started on this problem? I suspect it can be done with Cauchy's modification ...
Update:
$\lim_{z\to p}|z-p|\,|f(z)|\leq \lim_{z\to p} |z-p|\,M=0$ because $f$ is bounded,
therefore $p$ is a removable singularity, and analogously for $q$.
Therefore $f$ extends to an analytic and bounded $g:\mathbb{C}\to\mathbb{C}$. By Liouville's theorem, $g$ is constant, and then $f$ is constant. Thanks to José Carlos Santos.
AI: By Riemann's theorem on removable singularities, you can extend $f$ to an entire function $F$, which will still be bounded. Therefore, $F$ is constant (by Liouville's theorem) and so, in particular, $f$ itself is constant.
|
H: A variant version of Euler's phi function
The original Euler's phi function goes like this.
$$\phi(n)=n\prod_{p|n} (1-1/p)$$
But I want to prove a modified version of it.
$\psi(n)$: the number of $x$ with $1\le x \le n$, $x\bot n$ and $(x+1) \bot n$. Then, for $n \ge 3$,
$$\psi(n)=n\prod_{p|n} (1-2/p)$$ where $p$ are distinct primes.
Until now I go by, if $n=p^k$ for some prime $p$, then
$$1\cdot p ,\ 2\cdot p,\ ...,\ p^{k-1}\cdot p \quad $$ and
$$1\cdot p-1 ,\ 2\cdot p-1,\ ...,\ p^{k-1}\cdot p-1 \quad $$
so total $2\cdot p^{k-1}$ of numbers are not coprime to $n$, therefore $$\psi(n)=p^k-2p^{k-1}$$
But I cannot manage to prove that $\psi$ is multiplicative with respect to coprime arguments.
How can I prove this?
AI: We have that $\Psi (n)=|\{x\in [n]:(x,n)=1 \wedge (x+1,n)=1\}|.$
Let $n=p_1^{\alpha _1}\cdots p_k^{\alpha _k}.$ Consider the set $$A_j=\{x\in [n]:p_j|x\},$$
notice that this captures the property $(x,n)\neq 1$. Notice also that the condition on $(x+1,n)$ is captured by these same sets, because $(n,n+1)=1$. Now imagine that we want the opposite: all the numbers minus the ones that do not have that property, where the negation of the property is $(x,n)>1$ or $(x+1,n)>1$. Then we have that
$$\Psi (n)=n-\sum _{l=1}^k(-1)^{l-1}2^l\sum _{1\leq a_1<a_2<\cdots <a_l\leq k}\left |\bigcap _{i=1}^lA_{a_i}\right |$$
By the inclusion-exclusion principle. Notice that the $2^j$ comes from either $x$ or $x+1$ by the comment above. Notice also that $$\left |\bigcap _{i=1}^lA_{a_i}\right |=\frac{n}{\prod _{i=1}^lp_{a_i}}$$ and so we have that
$$\Psi(n)=\sum _{l=0}^k(-1)^{l}2^l\sum _{1\leq a_1<a_2<\cdots <a_l\leq k}\frac{n}{\prod _{i=1}^lp_{a_i}}=n\prod _{l=1}^k\left (1-\frac{2}{p_l}\right ).$$
Edited: Notice that when you have an expression of the form $(1-a_1x)(1-a_2x)\cdots (1-a_nx)$ and you unfold it, what you are doing is choosing either the $1$ or the $a_jx.$ So, to be able to understand the coefficient corresponding to $x^k$ you have to choose a set of $k$ indices in which you have chosen the $a_j$ in other words you will be adding over the following
$$(-1)^kx^k\sum _{\substack{X\subseteq \{a_1,\cdots ,a_n\}\\|X|=k}}\prod _{y\in X}y.$$ This is exactly what is happening with the $a_i$ they are the elements of the set $X$ of size $k.$
Notice that this process is a similar one to the inclusion exclusion process of combinatorics.
Notice that because the negation of the proposition is $(x,n)>1$ or $(x+1,n)>1$ what we are doing is choosing a prime, call it $p_{a_j}$ that is in the prime decomposition of $n,$ and making $x$ or $x+1$ be divisible by it. So we have all the time $2$ options(notice that $p_{a_j}$ can not divide both at the same time!), because in the sum we are assuming $l$ primes of that form then $\underbrace{2\times 2\times \cdots \times 2}_{\text{l times}}=2^l$ is there by the multiplication principle.
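As a quick sanity check of the formula with a small example: for $n=15=3\cdot 5$ it gives $\Psi(15)=15\left(1-\frac23\right)\left(1-\frac25\right)=3$, and indeed the only $x\in[15]$ with $(x,15)=1$ and $(x+1,15)=1$ are $x=1,7,13$.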
|
H: What am I misunderstanding about this isomorphism?
From Rotman's Algebraic Topology:
If $K$ is a connected simplicial complex with basepoint $p$, then $\pi(K,p) \simeq G_{K,T}$, where $T$ is a maximal tree in $K$ and $G_{K,T}$ represents the group with presentation $$ \langle \text{all edges $(p,q)$ in $K$} | (p,q) = 1 \text{ if $(p,q)$ is an edge in $T$ and $(p,q)(q,r) = (p,r)$ if $\{p,q,r\}$ is a simplex in $K$}\rangle$$
The isomorphism is defined as $\bar \phi : G_{K,T} \rightarrow \pi (K,p)$ by $(u,v)R \mapsto \phi(u,v) = [\alpha_u(u,v)\alpha_v^{-1}]$ where $\alpha_x$ is the unique path from $p$ to $x$ in $T$.
$(1.)$ Let $K$ be defined as follows: Draw a hexagon. Add the center point (p). For each vertex (not the center) draw a line from the vertex through the center point (p) and connect it to the opposing vertex. Let $K$ be the simplicial complex consisting of all the verticies, lines, and faces created from triangulating this hexagon.
$(2.)$ Define $T$ in $K$ as the tree created by drawing a line from $p$ to each vertex of the hexagon. This is maximal as it reaches all vertices of $K$.
Let $(u,v)$ and $(v,w)$ be edges along the outside of the hexagon. Then $(u,v)R \neq (v,w)R$ but $\bar \phi (u,v)R = \bar \phi (v,w)R$ because $\bar \phi (u,v)R = [\alpha_u (u,v) \alpha_v^{-1}] = [\alpha_v (v,w) \alpha_w^{-1}] = \bar \phi (v,w)R$. This is true because $\alpha_u (u,v) \alpha_v^{-1}$ and $\alpha_v (v,w) \alpha_w^{-1}$ are homotopic in $K$.
But then it must be the case that $(u,v)R = (v,w)R$ since $\bar \phi$ is an isomorphism.
Why is it the case then that $(u,v)R = (v,w)R$? Or am I missing something else in my understanding of this isomorphism?
AI: For an edge $(u,v)$ along the outside of the hexagon, you always have the simplex $\{u,v,p\}$ in $K$. Hence $(u,v)(v,p)R = (u,p)R$. Now both $(v,p)$ and $(u,p)$ are edges in $T$ so that $(v,p)R=(u,p)R=1$ which also results in $(u,v)R=1$.
Indeed this implies that any edge $(u,v)$ of $K$ satisfies $(u,v)R=1$ (either because it is in $T$ or because of the above argument) so that $G_{K,T} = \{1\}$ is the trivial group.
|
H: Evaluate $\lim _{n \rightarrow \infty} \sum_{i=1}^{n} \frac{1}{i-3 n}$
Evaluate $\lim _{n \rightarrow \infty} \sum_{i=1}^{n} \frac{1}{i-3 n}$
Here $\sum_{i=1}^{n} \frac{1}{i-3 n}=\frac{1}{1-3 n}+\frac{1}{2-3 n}+\cdots+\frac{1}{n-3 n}$.
I tried to squeeze the sum
between convergent sequences, but only managed to get
$\frac{-1}{2} \leq \frac{1}{-3 n+1}+\frac{1}{-3 n+2}+\cdots+\frac{1}{-3 n+n} \leq \frac{-1}{3}$
AI: Put your sum as
$$\frac{b-a}{n}\sum_{i=1}^nf(a+i\frac{b-a}{n})$$
If $ f $ is integrable or continuous on $[a,b]$, the limit will be
$$\int_a^bf(x)dx$$
Here, $a=0$, $b=1$, and the sum reduces to
$$\frac 1n\sum_{i=1}^nf(\frac in)=\frac 1n\sum_{i=1}^n\frac{1}{\frac in-3}$$
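In this case $f(x)=\frac{1}{x-3}$, which is continuous on $[0,1]$, so the limit is $$\int_0^1\frac{dx}{x-3}=\Big[\ln|x-3|\Big]_0^1=\ln 2-\ln 3=\ln\frac 23.$$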
|
H: How to prove something is a metric.
Let $X$ be a set and show that the following function defines a metric.
$$f(x, y) = \begin{cases} 0 & \text{if } x = y\\ 1 & \text{if } x \neq y\end{cases}$$
I'm especially having trouble with the symmetry and triangle inequality steps. Thanks so much!
AI: Let's do symmetry first. We just need to show that $f(x, y)=f(y, x)$. We have
$f(x, y)= \mathbf 1_{\{x \neq y\}} = \mathbf 1_{\{y \neq x\}} = f(y,x)$.
Now for the triangle inequality. We have to show that for points $x,y, z$ in the space $f(x, z)\leq f(x, y) + f(y, z)$. Note that $f$ is nonnegative. So, if $x=z$, $f(x, z) =0$, so right away $0\leq f(x, y) + f(y, z)$. Otherwise $x\neq z, f(x, z) =1$. Note that it is impossible for both $x=y$ and $y=z$, since that implies that $x=z$. So at least one of $f(x, y),f(y, z)$ is 1. Since $f$ is nonnegative it follows again that $f(x, z)\leq f(x, y)+f(y, z)$, and the triangle inequality holds.
Note also that the other metric property, namely $f(x, y)=0 \Longleftrightarrow x=y$, holds by the definition of the function $f$.
|
H: What's the best alternative to using the $\sin^2x$ notation?
Although I understand that $\sin^2x$ refers to $(\sin x)^2$, and not $\sin(\sin(x))$, I find this notation to be confusing—often it can hamper my thinking. For instance, if I was solving the following equation:
$$
\sin^2x-\frac{7}{2}\sin x-2=0
$$
I might not realise that there is a squared term and a linear term, and hence it is a quadratic in $\sin x$. If on the other hand I wrote:
$$
(\sin x)^2-\frac{7}{2}\sin x-2=0
$$
then it would be much clearer to me what is going on. So what is the best alternative to writing $\sin^2x$? The obvious answer is to write $(\sin x)^2$, but at times this can feel cluttered, particularly if there are other bracketed terms. So is there a better solution that avoids this notational issue?
AI: One idea is to get goofy with $\sin(x)$ and denote it $\sin_x$, which isn't confusing since sine isn't taking any argument besides $x$ in that equation. You could terse-ify this even more to $\mathrm{s}_x$. So you get
$$
\mathrm{s}_x^2-\frac{7}{2}\mathrm{s}_x-2=0
$$
But this is all just a complicated way to do basically what Nate did here and say "let $\mathrm{s}_x = \sin(x)$ ..."
|
H: What is the number of ways you can build a sequence of positive integers with each number divisible by all the previous numbers?
I am looking for a solution to count the number of ways I can create a sequence of $n$ numbers, repetition allowed, where each number is divisible by all the previous numbers in the sequence, and the sequence is bounded.
For example, what is the number of ways you can create a sequence of 6 numbers $a_1$, $a_2$, $a_3$, $a_4$, $a_5$, $a_6$, all positive numbers, where $a_1 = 1$ and $a_6$ = 2000, and for each $2 \leq n \leq 6$, we have that $a_n$ is divisible by $a_{n-1}$?
I was thinking about using Stirling numbers of the second kind, but I am not getting anywhere with this idea.
AI: Suppose you have $a_n$ and $a_1$; then dividing everything by $a_1$ we see that this is equivalent to going from $1$ to a number $m.$ Let's say that $m=p_1^{\alpha _1}\cdots p_k^{\alpha _k}$ and just extract the exponents (the primes are not important), so you get $(\alpha _1,\cdots ,\alpha _k).$ Now you want to construct $k$ increasing chains, the $i$-th going from $0$ to $\alpha _i$, and this is equivalent to having $n-1$ non-negative numbers $a_{i,1},\cdots ,a_{i,n-1}$ such that their sum is $\alpha _i.$ Using stars and bars you probably can get that this number would be $$\prod _{i=1}^k\binom{\alpha _i+n-2}{n-2}.$$
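Applied to the example in the question: $a_6/a_1=2000=2^4\cdot 5^3$ and $n=6$, so the formula gives $\binom{4+4}{4}\binom{3+4}{4}=70\cdot 35=2450$ such sequences.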
Added later: I came across this paper in which this problem, Thm 3.10 in the paper, can be embedded. So a natural generalization of this problem is in the context of mixed Stirling numbers.
|
H: need help solving a quartic equation
The question I am asking is to solve the equation $x^4-4x-1=0$,
I need an exact answer. What I have done is to find that it equals $(x^2+1)^2 - 2(x+1)^2 =0$.
Anybody help me, please?
AI: So you know
$$\begin{align}
x^4-4x-1&=(x^2+1)^2-2(x+1)^2\\
&=(x^2+1)^2-[\sqrt2(x+1)]^2.
\end{align}$$
Factorise this difference of two squares to get two quadratic factors, hence the four roots.
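Carrying this out, the factorisation is $$\left(x^2-\sqrt2\,x+1-\sqrt2\right)\left(x^2+\sqrt2\,x+1+\sqrt2\right)=0,$$ so the quadratic formula gives the two real roots $x=\frac{\sqrt2\pm\sqrt{4\sqrt2-2}}{2}$ and the two complex roots $x=\frac{-\sqrt2\pm i\sqrt{4\sqrt2+2}}{2}$.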
|
H: Convergence of $\sum \limits_{n=1}^{\infty}\sqrt{n^3+1}-\sqrt{n^3-1}$
Hello, I am a high school student from Germany and I am starting to study math this October. I am trying to prepare myself for the analysis class which I will attend, so I got some analysis problems from my older cousin, who also studied maths. But I am stuck on this problem.
Check the following series for convergence/divergence: $$\sum \limits_{n=1}^{\infty}\sqrt{n^3+1}-\sqrt{n^3-1}$$ I tried to prove convergence by the comparison test: $$\sqrt{n^3+1}-\sqrt{n^3-1}= \frac{2}{\sqrt{n^3+1}+\sqrt{n^3-1}}=\frac{1}{n^2} \cdot \frac{2\sqrt{n}}{\sqrt{1+\frac{1}{n^3}}+\sqrt{1-\frac{1}{n^3}}}$$ and then compare it with $$\sum \limits_{n=1}^{\infty}\frac{1}{n^2}$$ But in order to do that, I need to prove that $$\frac{2\sqrt{n}}{\sqrt{1+\frac{1}{n^3}}+\sqrt{1-\frac{1}{n^3}}} \leq 1$$ But I am having problems proving that. Does anyone have a tip on how to solve this problem?
AI: You cannot prove
$$\frac{2\sqrt{n}}{\sqrt{1+\frac{1}{n^3}}+\sqrt{1-\frac{1}{n^3}}} \leq 1
$$
because the left-hand side tends to $+\infty$ for $n \to \infty$.
You should compare your series with $\sum_{n=1}^{\infty}\frac{1}{n^{3/2}}$ instead:
$$
\frac{2}{\sqrt{n^3+1}+\sqrt{n^3-1}}=\frac{2}{n^{3/2}} \cdot \frac{1}{\sqrt{1+\frac{1}{n^3}}+\sqrt{1-\frac{1}{n^3}}} <\frac{2}{n^{3/2}} \, .
$$
|
H: Recover $f$ if we know that $\frac{d}{dx} \log f(x)$ and $f(x) \to 0$ as $x \to \infty$
Let me set up the question by stating something that is well-known. Suppose $f$ is an unknown function, but we are given $f'$ and the fact that
\begin{align}
\lim_{x \to \infty} f(x)=0.
\end{align}
Then, by using fundamental theorem of calculus we have that
\begin{align}
f(a)-f(x)= \int_x^a f'(t) dt
\end{align}
and by using that $f(a) \to 0$ as $a\to \infty$ we arrive at
\begin{align}
f(x)= \int_x^\infty -f'(t) dt.
\end{align}
In other words, we can recover $f$ based on this information.
Here is the question: Suppose instead we know $\frac{d}{dx} \log f(x)=\frac{f'(x)}{f(x)}$ and $\lim_{x \to \infty} f(x)=0$. Can we still recover $f(x)$?
If we use, the previous approach we arrive at
\begin{align}
\frac{f(x)}{f(a)}= e^{-\int_x^a \frac{f'(t)}{f(t)} dt}.
\end{align}
Clearly, we have a problem with this approach as we get a division by zero in the limit.
My guess is that in general, it is not possible to recover $f(x)$ based on this information. (See one of the answers) However, can we do this with some extra minimal assumptions on $f$?
One trivial assumption that I would like to avoid making, is that we know $f(a)$ at some point $a$ which would make the FTC approach work.
AI: Given one of your functions $f$, we can always write it as $f(x) = e^{g(x)}$ where $f$ and $g$ determine each other uniquely. The condition $\lim_{x \to \infty} f(x) = 0$ translates to $\lim_{x \to \infty} g(x) = -\infty$. Also, $\frac{d}{dx} \log(f(x)) = g'(x)$, so knowing the logarathmic derivative of $f$ equates to knowing the ordinary derivative of $g$.
Having made this translation, we see that the basic issue is that, given a function $g(x)$ with $\lim_{x \to \infty} g(x) = -\infty$, we cannot recover $g(x)$ from $g'(x)$. For example $g(x)=-x+c$ has $g'(x) = -1$ and $\lim_{x \to \infty} g(x) = -\infty$, regardless of the value of the constant $c$. Translating back to $f$, we can say that $f(x) = k e^{-x}$ has logarithmic derivative $-1$ and $\lim_{x \to \infty} f(x) = 0$, regardless of the value of the constant $k$.
|
H: Is every function on the plane that satisfies this mixed convexity condition convex?
$\newcommand{\half}{\frac{1}{2}}$
Let $g:\mathbb R^2\to \mathbb R$ be a smooth symmetric function, i.e. $g(x,y)=g(y,x)$ for every $x,y$. Suppose that
$$
g\left( \half(x,y)+\half(y,x) \right) \le \half g(x,y)+\half g(y,x) =g(x,y), \tag{1}
$$
for every $x,y$ or equivalently
$$
g(x,y) \ge g(\frac{x + y}{2}, \frac{x + y}{2}) .
$$
Is $g$ midpoint convex?
That is, does
$$
g(\frac{u + v}{2}) \le \frac{g(u) + g(v)}{2} \tag{2}
$$
for every $u,v \in \mathbb R^2$?
Condition $(1)$ is more specific than $(2)$ since in inequality $(1)$, $u=(x,y),v=(y,x)$ are related to each other.
AI: Counterexample: let $g(x, y) = (x - y)^2 + (x + y)^3$. Then $g$ is clearly smooth, and $g(x, y) = g(y, x)$. We have,
$$g(x, y) - g\left(\frac{x + y}{2}, \frac{x + y}{2}\right) = (x - y)^2 + (x + y)^3 - (x + y)^3 \ge 0,$$
hence $g$ satisfies the premises of the question. But $g$ is not midpoint convex. If we take $u = (0, 0)$ and $v = (-2, -2)$, then,
\begin{align*}
g\left(\frac{u + v}{2}\right) &= g(-1, -1) = -8 \\
\frac{g(u) + g(v)}{2} &= \frac{0 - 64}{2} = -32 < g\left(\frac{u + v}{2}\right).
\end{align*}
|
H: Finding the parametric equation of a line in linear algebra
I was given the above question to answer, and I do not know how to do it. If $t$ is the parameter, then I suppose we can create a relationship between $a$ and $b$ using $t$, but I don't see how that is helpful. Any guidance?
Thank you!
AI: $l:\Bbb R \to \Bbb R^2$ is given by $l(t)=a+bt$: it is based at $a$ and goes in the direction $b$.
Note that $b$ is a vector, which has a direction. Intuitively, you can think of the line $l$ as the track of a point moving from $(-1,2)$ along the direction of $(-8,4)$.
Another way is more algebraic: any point $(x,y)$ on $l$ satisfies $\frac{y-2}{x-(-1)}=\frac{4}{-8}$, so you can write $(x,y)=\left(x,\ \frac{4}{-8}(x-(-1))+2\right)=a+bt$ for some $t\in \Bbb R$.
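Concretely, with $a=(-1,2)$ and $b=(-8,4)$ this is $l(t)=(-1-8t,\ 2+4t)$ for $t\in\Bbb R$: at $t=0$ you are at $(-1,2)$, and increasing $t$ moves the point in the direction $(-8,4)$.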
Of course, this is just some explanation, not a proof, since the line passing through $a$ along $b$ is not formally defined. (you can take $a+bt$ as the formal definition)
|