H: Asymptotics of the solution of $x^x=n$
I need to find asymptotics of the solution of the equation
$$
x^x=n
$$
as $n\to\infty$. The only thing I understand is that this solution grows very slowly. I can't find $x$ explicitly; I think this is impossible. So what is the necessary trick?
AI: How about
$$
x =
\frac{\ln n}{\ln \ln n} + \frac{\ln n \,\ln \ln \ln n}{(\ln \ln n)^{2}} - \frac{\ln n \,\ln \ln \ln n}{(\ln \ln n)^{3}} +\dots
$$
You can find a computation of the asymptotics for the Lambert W function as Problem 4.2 in my paper Transseries for Beginners
H: Matrix raised to a matrix: $M^N$, is this possible? with $M,N\in M_n(\Bbb K).$
I was wondering if there is such a valid operation as raising a matrix to the power of a matrix, e.g. vaguely, if $M$ is a matrix, is
$$
M^N
$$
valid, or is there at least something similar? Would it be the components of the matrix raised to each component of the matrix it's raised to, resulting in again, another matrix?
Thanks,
AI: It is possible to define the exponential
$$\exp(M) = \sum_{n \ge 0} \frac{M^n}{n!}$$
of any matrix using power series. Similarly, it is possible to define the logarithm
$$\log(I + M) = \sum_{n \ge 1} \frac{(-1)^{n-1} M^n}{n}$$
when the above series converges (this is guaranteed for example if the largest singular value of $M$ is less than $1$). We can therefore define
$$M^N = \exp(N \log M)$$
by imitating the identity $a^b = e^{b \log a}$ for, say, positive reals, but this won't have good properties unless $N$ and $M$ commute, I think. It's better to consider the exponential and logarithm separately.
As I have discussed elsewhere on math.SE, the fact that the ordinary exponential takes two inputs which are the same type is misleading. Most (but not all) "exponential-type" operations in mathematics take two inputs which are different types.
H: If $f_n\colon [0, 1] \to [0, 1]$ are nondecreasing and $\{f_n\}$ converges pointwise to a continuous $f$, then the convergence is uniform
Suppose that $\{f_n\}$ is a sequence of nondecreasing functions which map the unit interval into itself. Suppose that $$\lim_{n\rightarrow \infty} f_n(x)=f(x)$$ pointwise and that $f$ is a continuous function. Prove that $f_n(x) \rightarrow f(x)$ uniformly as $n \rightarrow \infty$, $0\leq x\leq1$. Note that the functions $f_n$ are not necessarily continuous.
This is one of the preliminary exam from UC Berkeley, the solution goes like this:
Because $f$ is continuous on $[0,1]$, which is compact, it is then uniformly continuous. Hence there exists $\delta >0$ such that if $|x-y|<\delta$ then $|f(x)-f(y)|<\epsilon$.
We then partition the interval with $x_0=0, \cdots ,x_m=1$ such that the distance $x_{i}-x_{i-1}$ is less than $\delta$.
Note that since there are only finitely many points $x_i$, there is $N\in \mathbb{N}$ such that if $n\geq N$ then $|f_n(x_i)-f(x_i)|<\epsilon$ for $i=0,\cdots, m$.
Now if $x\in[0,1]$, then $x\in[x_{i-1},x_i]$ for some $i\in\{1, \cdots m\}$.
My question is how to use the nondecreasing hypothesis to arrive at this inequality for $n\geq N$:
$f(x_{i-1})-\epsilon<f_n(x)<f(x_{i-1})+2\epsilon$
Can someone please help, I have been staring at the inequality for about a day now. Thanks.
AI: BenjaLim has already dealt with the first half of the inequality. For the second half, note
$$f_n(x)-f(x_{i-1})\leq f_n(x_i)-f(x_{i-1})= (f_n(x_i)-f(x_i))+(f(x_i)-f(x_{i-1}))<\epsilon+\epsilon=2\epsilon.$$
H: Composition of measurable functions
Consider a locally bounded function $f: X \times W \rightarrow X$, where $X \subseteq \mathbb{R}^n$, $W \subseteq \mathbb{R}^m$, such that
for all $x \in X$ the function $w \mapsto f(x,w)$ is (Borel) measurable;
Consider a locally bounded, (Borel) measurable, function $g: W \rightarrow X$.
Say if the function
$$ (w,v) \mapsto f( g(w), v ) $$
is (Borel) measurable as well.
Notes: this question differs from both this and that post.
AI: What you are trying to prove does not hold. Let $V \subset \mathbb{R}$ be a Vitali set, which is not Borel measurable. Let $f: \mathbb{R}^{2} \to \mathbb{R}$ be defined by:
$$ f(x,y) = \left\{ \begin{array}{ll}
1 & \mbox{$x \in V$};\\
0 & \mbox{$x \notin V$}.\end{array} \right. $$
Note that $f$ is locally bounded and for any $x \in \mathbb{R}$, $y \mapsto f(x,y)$ is Borel measurable, since it is constant. Let $g$ be the identity function. Observe that $f(g(x),y) = f(x,y)$, which is not measurable in $(\mathbb{R}^{2},\mathcal{B}(\mathbb{R}^{2}))$.
H: Acceleration along a 2D plane
It's been a long time since I've done trig, and I never knew it very well. I have a problem I don't know how to solve.
I have an object on a 2D plane that I want to move. I have this object's x and y coordinates; I also have its velocity, and the angle of the velocity in radians.
How can I take this information and calculate the velocity along the X axis and Y axis? I.e., how can I convert the angle and velocity to the delta of X and Y?
AI: $v_x = v\cdot \cos(\theta)$
$v_y = v\cdot \sin(\theta)$
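These two formulas translate directly into code; a minimal Python sketch (the function name is illustrative):

```python
import math

def velocity_components(speed, angle):
    """Decompose a speed and heading (in radians) into x and y components."""
    return speed * math.cos(angle), speed * math.sin(angle)

# Example: speed 10 at 30 degrees (pi/6 radians)
vx, vy = velocity_components(10.0, math.pi / 6)
assert abs(vx - 10.0 * math.sqrt(3) / 2) < 1e-12  # v * cos(theta)
assert abs(vy - 5.0) < 1e-12                      # v * sin(theta)
```

Per frame, the position deltas are then `dx = vx * dt` and `dy = vy * dt`.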
H: Difference between power law distribution and exponential decay
This is probably a silly one. I've read on Wikipedia about power laws and exponential decay, and I really don't see any difference between them. For example, if I have a histogram or a plot that looks like the one in the power law article, which looks the same as the one for $e^{-x}$, how should I refer to it?
AI: $$
\begin{array}{rl}
\text{power law:} & y = x^{(\text{constant})}\\
\text{exponential:} & y = (\text{constant})^x
\end{array}
$$
That's the difference.
As for "looking the same", they're pretty different: Both are positive and go asymptotically to $0$, but with, for example $y=(1/2)^x$, the value of $y$ actually cuts in half every time $x$ increases by $1$, whereas, with $y = x^{-2}$, notice what happens as $x$ increases from $1\text{ million}$ to $1\text{ million}+1$. The amount by which $y$ gets multiplied is barely less than $1$, and if you put "billion" in place of "million", then it's even closer to $1$. With the exponential function, it always gets multiplied by $1/2$ no matter how big $x$ gets.
Also, notice that with the exponential probability distribution, you have the property of memorylessness.
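The "multiplier per unit step" described above is easy to check numerically; a small Python sketch (the sample values are illustrative):

```python
# Ratio y(x+1)/y(x) for the two kinds of decay discussed above.
def exp_ratio(x):            # exponential: y = (1/2)**x
    return 0.5 ** (x + 1) / 0.5 ** x

def power_ratio(x):          # power law: y = x**(-2)
    return (x + 1) ** -2 / x ** -2

assert exp_ratio(50) == 0.5                  # halves at every step, always
assert 0.999 < power_ratio(10**6) < 1.0      # barely shrinks for large x
```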
H: How to disprove this fallacy that derivatives of $x^2$ and $x+x+x+\dots\quad(x\text{ times})$ are not same.
Possible Duplicate:
Where is the flaw in this argument of a proof that 1=2? (Derivative of repeated addition)
\begin{align*}
x^2 &= \underbrace{x + x + x + \dots + x}_{x \text{ times}}, \\
\therefore \frac{\mathrm{d}}{\mathrm{d}x} (x^2)
&= \frac{\mathrm{d}}{\mathrm{d}x} (\underbrace{x + x + x + \dots + x}_{x \text{ times}}) \\
&= \underbrace{1 + 1 + 1 + \dots + 1}_{x \text{ times}} \\
&= x.
\end{align*}
But we know that
$$ \frac{\mathrm{d}}{\mathrm{d}x} (x^2) = 2x. $$
So what is the problem?
My take is that
we cannot differentiate both sides because $\underbrace{{x+x+x+\cdots+x}}_{x \text{ times}}$ is not fixed and thus $1$ is not equal to $2$.
AI: Simply because "$x \text{ times}$" is itself a function of $x$. The mistake is failing to account for that when differentiating.
H: Is there a function where it actually converges on the real line?
I am trying to come up with a function such that
$\int_{-\infty}^{\infty} f(x) dx$ converges, or $\int_{-a}^{a} g(x) dx$ converges where $g(x)$ is not defined at $x = -a$ or $x = a$ (so both will be improper integrals)
The integrand cannot have complex numbers, be $0$ (or some constant, not that it would work), must be real, must also be continuous on $x \in (-a,a)$ (so piecewise functions don't count), and if possible be symmetric (if the function is odd, only look at the infinity domain case)
AI: You mean something like $\int_{-\infty}^\infty \frac{dx}{x^2+1}$ or $\int_{-1}^1 \frac{dx}{\sqrt{1-x^2}}$?
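Both of these integrals do converge (each equals $\pi$). A quick numerical check of the first one with a composite midpoint rule (the cutoff and tolerance are illustrative):

```python
import math

def midpoint(f, a, b, n=200000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The integral of 1/(x^2+1) over [-R, R] is 2*arctan(R) -> pi as R -> infinity.
approx = midpoint(lambda x: 1 / (x * x + 1), -1000, 1000)
assert abs(approx - math.pi) < 0.01
```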
H: Coin Arrangement Puzzle
Disclaimer: I'm not sure how math related this puzzle is (it could potentially be), but I thought it was an interesting puzzle, and I also don't know how to solve it so I wanted to see if anyone had any ideas.
You have a board divided in quarters and a coin is in each spot. You
do not know whether each is facing heads or tails upwards. In each
turn, you can choose to flip any number of coins. Specify a sequence of
turns that guarantees that at some point all coins will be facing the
same direction.
Follow up: Between each of your turns, the board is rotated an arbitrary
amount (90, 180, or 270 degrees). Specify a sequence of moves that
guarantees that at some point all coins will be facing the same
direction.
AI: For the follow-up you have to assume that success will be announced after each try if you manage. We assume that a malevolent adversary controls the rotation, but you can flip a single coin, an opposite pair, or an adjacent pair at your option. You just can't keep track of anything except relative position between flips. You start knowing they are not all heads or all tails. Flip two opposite coins. If that doesn't work, you either have an odd number of heads or two adjacent heads. Flip two neighboring coins. If that doesn't work, you either have two opposite heads or an odd number of heads. Flip two opposite coins. If that doesn't work, you have an odd number of heads. Note that so far, we have always flipped an even number, so the parity hasn't changed. Flip one coin. If that doesn't work, you have two heads and two tails. Flip two opposite coins. If that doesn't work, you have two neighboring heads. Flip two adjacent. If that doesn't work, you have two opposite heads. Flip two opposite. Guaranteed to work.
If success means all heads instead of all the same, add a flip of all four coins at the start and after every step of the above.
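The seven-move strategy in the first paragraph can be verified by brute force: track the set of board states still possible (success is announced, so solved states drop out), and let the adversary rotate arbitrarily before each flip. A Python sketch (the encoding with corners numbered cyclically is an assumption of this sketch):

```python
from itertools import product

SOLVED = {(0, 0, 0, 0), (1, 1, 1, 1)}

def rotations(state):
    """All four rotations of the board (corners numbered cyclically)."""
    return {tuple(state[(i + r) % 4] for i in range(4)) for r in range(4)}

def flip(state, positions):
    return tuple(b ^ 1 if i in positions else b for i, b in enumerate(state))

# The seven moves from the answer: opposite pair, adjacent pair, opposite pair,
# single coin, opposite pair, adjacent pair, opposite pair.
# {0, 2} is an opposite pair, {0, 1} an adjacent pair.
MOVES = [{0, 2}, {0, 1}, {0, 2}, {0}, {0, 2}, {0, 1}, {0, 2}]

def remaining_states():
    possible = set(product((0, 1), repeat=4)) - SOLVED
    for move in MOVES:
        # Adversary rotates arbitrarily, then we flip; solved states are
        # announced and drop out of the set of still-possible states.
        possible = {flip(r, move) for s in possible for r in rotations(s)}
        possible -= SOLVED
    return possible

assert remaining_states() == set()   # every starting state is solved in 7 moves
```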
H: What is the inverse of $f(n)=\frac{n^2+n}{2}$?
I'm building an algorithm to determine whether a value is inside a series. To speed it up, I need the inverse function of the following series:
$$1 + 2 + 3+\cdots +n$$
What is the inverse function of $f(n) = \frac{n(n + 1)}{2}$?
AI: I don't understand much of your question, but if $$f(n)={n(n+1)\over2}$$ then $$8f(n)+1=(2n+1)^2$$ so $$n={\sqrt{8f(n)+1}-1\over2}$$
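For the algorithmic use case in the question, the inverse (and a membership test for the series) can be written with exact integer arithmetic to avoid floating-point error; a Python sketch (names are illustrative):

```python
import math

def triangular(n):
    return n * (n + 1) // 2

def inverse_triangular(t):
    # n = (sqrt(8t + 1) - 1) / 2, computed with an exact integer square root
    return (math.isqrt(8 * t + 1) - 1) // 2

def is_triangular(t):
    """True iff t = 1 + 2 + ... + n for some n."""
    return triangular(inverse_triangular(t)) == t

assert inverse_triangular(triangular(100)) == 100
assert is_triangular(5050) and not is_triangular(5051)
```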
H: Some case when the central limit theorem fails
If I understand correctly, for various versions of the central limit theorems (CLT), when applying to a sequence of random variables, each random variable is required to have finite mean and finite variance, plus some other conditions depending on the version of the CLT.
Months ago, I heard of something, probably about some case when the classical CLT fails, which I haven't been able to understand. I am not sure if my following description is correct, but that is perhaps the best I can recall:
if one random variable in the sequence dominates (in some sense, such as in terms of magnitude?) the other random
variables, then the central limit theorem doesn't hold.
I was wondering if someone is able to figure out what the quote is trying to say?
Thanks and regards!
PS: A paper named Asymptotic Distribution Theory for the Kalman Filter State Estimator was mentioned regarding the above quote. I don't quite understand the paper, so cannot figure out how it helps to clarify the quote. But I guess Section "3.2 Remarks on Theorems and Corollaries" on page "1999" might be related.
AI: There are various ways in which the CLT can "fail", depending on which hypotheses are violated. Here's one. Suppose $X_k$ are independent random variables with $E[X_k] = \mu_k$ and variances $\sigma_k^2$, and let $s_n^2 = \sum_{k=1}^n \sigma_k^2$ and $S_n = \sum_{k=1}^n (X_k - \mu_k)$. Suppose
also that $\max_{k \le n} \sigma_k/s_n \to 0$ as $n \to \infty$ (so in that sense no $X_k$ is "dominant" in $S_n$). Then Lindeberg's condition is both necessary and sufficient for
$S_n/s_n$ to converge in distribution to ${\mathscr N}(0,1)$.
EDIT: Here's a nice example where the Central Limit Theorem fails. Let $X_n$ be independent with $P(X_n = 2^n) = P(X_n = -2^n) = 2^{-2n-1}$, $P(X_n = 0) = 1 - 2^{-2n}$. Thus $E[X_n] = 0$ and
$\sigma_n = 1$. But
$$P(S_n = 0) \ge P(X_j = 0 \text{ for all }j) > 1 - \sum_{j=1}^\infty 2^{-2j} = 2/3$$
H: Is it safe to assume that the altitude of a triangle always cuts the base in half
While solving different questions, I realized that whenever I constructed an altitude it always bisected the base. From what I deduced from Wikipedia, this is only true if the triangle is either isosceles or a right triangle. What I really want to know is: sometimes the type of triangle is not specified in a question, yet I am still required to draw altitudes and assume they cut the emerging angle in half and also cut the opposite side in half, as shown in red in the figures. Is there any safe way to make sure that the altitude I am drawing will definitely cut the base in half, or do I have to first make a wise estimate of the type of triangle before constructing an altitude?
AI: Firstly, your statement "whenever I constructed an altitude it always bisected the base in half. From what I deduced from Wikipedia is that this is only true if the triangle is either isosceles or a right triangle" is not fully correct. An altitude from a vertex bisects the opposite base if and only if the two sides emerging from that particular vertex are equal (which need not hold in a right triangle). Therefore, you need to verify this condition before assuming that the altitude cuts the opposite base in half.
H: Proof that Gauss-Jordan elimination works
Gauss-Jordan elimination is a technique that can be used to calculate the inverse of matrices (if they are invertible). It can also be used to solve simultaneous linear equations.
However, after a few google searches, I have failed to find a proof that this algorithm works for all $n \times n$, invertible matrices. How would you prove that the technique of using Gauss-Jordan elimination to calculate matrices will work for all invertible matrices of finite dimensions (we allow swapping of two rows)?
Induction on $n$ is a possible idea: the base case is very clear, but how would you prove the inductive step?
We are not trying to show that an answer generated using Gauss-Jordan will be correct. We are trying to show that Gauss-Jordan can apply to all invertible matrices.
Note: I realize that there is a similar question here, but this question is distinct in that it asks for a proof for invertible matrices.
AI: This is one of the typical cases where the most obvious reason something is true is because the associated algorithm cannot possibly fail.
Roughly speaking, the only way Gauss-Jordan can ever get stuck is if (at any intermediate point) there is a column containing too many zeroes, so there is no row that can be swapped in to produce a non-zero entry in the expected location. However, if this does happen, it is easy to see that the matrix is non-invertible, and since the row operations did not cause this, it must have been the original matrix that is to blame.
H: How to approximate $\sum_{k=1}^n k!$ using Stirling's formula?
How to find summation of the first $n$ factorials,
$$1! + 2! + \cdots + n!$$
I know there's no direct formula, but how can it be estimated using Stirling's formula?
Another question: why can't we find a closed form for the summation of $n!$? Why is there no direct formula?
AI: Stirling's formula gives us that $$n! \sim \sqrt{2 \pi n} \left( \dfrac{n}e\right)^n$$ i.e. $$\lim_{n \to \infty} \dfrac{n!}{\sqrt{2 \pi n} \left( \dfrac{n}e\right)^n} = 1$$
It is not hard to show that your sum, $$\sum_{k=1}^{n} k! \sim n!$$ and hence $$\sum_{k=1}^{n} k! \sim \sqrt{2 \pi n} \left( \dfrac{n}e\right)^n$$
EDIT To see that $\displaystyle \sum_{k=1}^{n} k! \sim n!$, note that
\begin{align}
\sum_{k=1}^{n} k! & = n! \left( 1 + \dfrac1n + \dfrac1{n(n-1)} + \dfrac1{n(n-1)(n-2)} + \cdots + \dfrac1{n!}\right)\\
& \leq n! \left( 1 + \dfrac1n + \dfrac{n-1}{n(n-1)}\right)\\
& = n! \left( 1 + \dfrac2n\right)
\end{align}
Hence, $\displaystyle \sum_{k=1}^{n} k! \sim n!$.
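Both claims above are easy to check numerically; a short Python sketch (the choice $n = 20$ is illustrative):

```python
import math

def sum_factorials(n):
    return sum(math.factorial(k) for k in range(1, n + 1))

def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

n = 20
# The sum is dominated by its last term: the bound 1 + 2/n from the answer.
assert 1 < sum_factorials(n) / math.factorial(n) < 1 + 2 / n
# Stirling's formula approximates that last term (a slight underestimate).
assert 0.99 < stirling(n) / math.factorial(n) < 1.0
```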
H: For how many integral values of $R$ is $R^4 - 20R^2+ 4$ a prime number?
For how many integral values of $R$ is $R^4 - 20R^2+ 4$ a prime number?
I tried factorizing but couldn't conclude anything concrete.
Factorizing it, gives $(R^2 - 10)^2 - 96$. What should be my approach now?
AI: HINT $$R^4 - 20R^2 + 4 = (R^2 + 4R - 2)(R^2 - 4R - 2)$$ If this is to be a prime, then at least one of the factors has to be $\pm 1$. Can you finish it from here?
$$(R^2 + 4R - 2) = 1 \implies R^2 + 4R - 3 = 0 \implies R \notin \mathbb{Z}$$ $$(R^2 + 4R - 2) = -1 \implies R^2 + 4R - 1 = 0 \implies R \notin \mathbb{Z}$$ $$(R^2 - 4R - 2) = 1 \implies R^2 - 4R - 3 = 0 \implies R \notin \mathbb{Z}$$ $$(R^2 - 4R - 2) = -1 \implies R^2 - 4R - 1 = 0 \implies R \notin \mathbb{Z}$$ Hence, $R^4 - 20R^2 + 4$ is not prime for any $R \in \mathbb{Z}$.
Also, note that completing the square as $R^4 - 20R^2 + 4 = (R^2-10)^2 - 96$ is correct, but it is not useful here since $96$ is not a perfect square; the factorization above instead comes from $R^4 - 20R^2 + 4 = (R^2-2)^2 - (4R)^2$.
H: Why is $\sqrt{\sum_{i=1}^n |v_i|^2} \leq \sum_{i=1}^n |v_i|$ true?
Sorry if this is very basic but here's a question.
Let $\mathbf{v}=(v_1,\ldots, v_n)\in k^n$ where $k=\bar{k}$.
Why do we have
$$
\sqrt{\sum_{i=1}^n |v_i|^2} \leq \sum_{i=1}^n |v_i|,
$$
where the left-hand side can be thought of as the $2$-norm $\|\mathbf{v}\|_2$ on $L^2(k^n)$?
$\mathbf{General \; case}$: If this is true for $p=2$-norm, I am guessing that this is true for all $p\geq 1$:
$$
\left( \sum_{i=1}^n |v_i|^p\right)^{1/p}\leq \sum_{i=1}^n |v_i|.
$$
In fact, does this inequality hold when $p$ is a rational number?
AI: Square both sides to get:
$$
\sum_{i=1}^n |v_i|^2 \leq \left(\sum_{i=1}^n |v_i|\right)^2
$$
Expand the RHS using the multinomial theorem to see that it's equal to the LHS plus a number of non-negative terms. Hence the inequality holds.
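A quick numerical sanity check of the general inequality for several values of $p$ (the sample vector is illustrative):

```python
v = [3.0, -4.0, 12.0]
one_norm = sum(abs(x) for x in v)

for p in (1.5, 2, 3, 7):
    p_norm = sum(abs(x) ** p for x in v) ** (1.0 / p)
    assert p_norm <= one_norm  # the p-norm never exceeds the 1-norm for p >= 1

# For p = 2 the left side is the Euclidean length: sqrt(9 + 16 + 144) = 13 <= 19
assert abs(sum(abs(x) ** 2 for x in v) ** 0.5 - 13.0) < 1e-12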
H: Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$
I want to expand and test this $\{(n^3+1)^{1/3} - n\}$ for convergence/divergence.
The edited version is: Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$
AI: By direct inspection, for every pair of real numbers $A$ and $B$,
$$
A^3 - B^3 = (A-B) (A^2+AB+B^2).
$$
Choose now $A=\sqrt[3]{n^3+1}$ and $B=n$. Then
$$
(n^3+1)^{1/3} - n = \frac{n^3+1-n^3}{(n^3+1)^{2/3} + n (n^3+1)^{1/3}+n^2} \sim \frac{1}{n^2}
$$
as $n \to +\infty$. Since the general term behaves like $1/n^2$, the series converges by comparison with $\sum 1/n^2$.
H: Largest modulus for Fermat-type polynomial
Motivated by this question, I wonder:
Given $k\in\mathbb N, k\ge2$, what is the largest $m\in\mathbb N$ such that
$n^k - n$ is divisible by $m$ for all $n\in\mathbb Z$ ?
AI: To find the highest power of a prime $p$ dividing $m$, we find the highest $r$ such that $n\mapsto n^k-n$ is identical to the zero map on $\Bbb Z/p^r\Bbb Z$. If $r>1$ then $p^k\equiv p~(p^r)$, which is impossible, so $r\in\{0,1\}$.
Therefore it suffices to find primes $p$ such that $n^k\equiv n$ on all of $\Bbb Z/p\Bbb Z$. This occurs if and only if
$$(p-1)\mid(k-1),$$
because $(\Bbb Z/p\Bbb Z)^\times$ is cyclic. Hence $m$ is the product of all primes $p$ such that $(p-1)|(k-1)$.
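The characterization is easy to test: compute $m$ empirically as a gcd of values $n^k - n$ and compare with the product over primes $p$ with $(p-1)\mid(k-1)$. A Python sketch (function names are illustrative):

```python
from functools import reduce
from math import gcd

def m_empirical(k, limit=60):
    """gcd of n^k - n over many n: the largest m dividing all of them."""
    return reduce(gcd, (n ** k - n for n in range(2, limit)))

def m_formula(k):
    """Product of the primes p with (p - 1) | (k - 1)."""
    result = 1
    for p in range(2, k + 1):           # p - 1 <= k - 1 forces p <= k
        is_prime = all(p % d for d in range(2, p))
        if is_prime and (k - 1) % (p - 1) == 0:
            result *= p
    return result

for k in (2, 3, 5, 7, 13):
    assert m_empirical(k) == m_formula(k)

assert m_formula(5) == 30      # n^5 - n is always divisible by 30
assert m_formula(13) == 2730   # the classical n^13 - n example
```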
H: How to obtain the number of digits in n!?
How to obtain the number of digits in $n!$ ?
My approach :
I Used Stirling's formula to find out the approximate value of $n!$
Let the approximate value be $S$
Thus, the number of digits in $n!$ is $\left \lfloor \log_{10} S \right \rfloor + 1$,
where $\left \lfloor \cdot \right \rfloor$ is the floor function.
AI: The question came up here on MathOverflow.
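For what it's worth, the approach sketched in the question checks out numerically; a Python sketch using a sum of logs (which is exactly what Stirling's series approximates):

```python
import math

def digits_of_factorial(n):
    """Number of decimal digits of n!, via floor(log10(n!)) + 1."""
    if n <= 1:
        return 1
    log10_fact = sum(math.log10(k) for k in range(2, n + 1))
    return math.floor(log10_fact) + 1

assert digits_of_factorial(5) == 3                       # 5! = 120
assert digits_of_factorial(100) == len(str(math.factorial(100)))
assert digits_of_factorial(1000) == len(str(math.factorial(1000)))
```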
H: product of the numbers in each subset is equal
Possible Duplicate:
product of six consecutive integers being a perfect square
Find all positive integers $n$ such that
the set {$n, n + 1, n + 2, n + 3, n + 4, n + 5$} can be partitioned into two subsets
so that the product of the numbers in each subset is equal.
One possible way I think to solve this is to consider this set $\bmod 5$ and check for the partitions for which the products modulo $5$ come out equal, and then solve for those partitions to get the integer values of $n$. Is this a fine approach? If yes, are there other methods to solve this more easily, as I think my procedure is time consuming?
AI: The condition implies that the product of the six consecutive numbers is a square. But it's known that the product of two or more consecutive numbers can't be a square. For references, and discussion of the 6-case, see this earlier question.
H: Chain rule and gradient
Let $\Gamma \subset \mathbb{R}^2$ be a curve. Define for a smooth function $f$, $$\nabla_\Gamma f = \nabla f - (\nabla f \cdot N)N$$ where $N$ is the unit normal.
Let $X:S \to \Gamma$ be a smooth regular parameterisation with $|\partial_s X(s) | > 0$.
Let $\tilde{f}(s) = f(X(s))$. How do I show that
$$\nabla_\Gamma f = \frac{1}{|\partial_s X|} \partial_s \tilde{f}\frac{\partial_s X}{|\partial_s X(s)|}$$
?
I don't know where to start. The notation is confusing..
AI: We don't have to look far: It's the chain rule in disguise.
Let $$T:={\partial_s X\over|\partial_s X|}$$ be the unit tangent vector. Then the definition of $\nabla_\Gamma f$ amounts to $$\nabla_\Gamma f=(\nabla f\cdot T)T\ .$$ This is just two dimensional vector algebra: For any two orthogonal unit vectors $T$, $N$ and an arbitrary vector $V$ one has $V=(V\cdot T)T+(V\cdot N)N$. It follows that
$$\nabla_\Gamma f=(\nabla f\cdot\partial_s X){\partial_s X\over|\partial_s X|^2}\ .$$
Now by the chain rule $\partial_s\tilde f=\nabla f\cdot\partial_s X$, and plugging this into the last formula you get the claim.
H: Asymptotics for sums of the form $\sum \limits_{\substack{1\leq k\leq n \\ (n,k)=1}}f(k)$
How can we find an asymptotic formula for
$$\sum_{\substack{1\leq k\leq n \\ (n,k)=1}}f(k)?$$
Here $f$ is some function and $(n,k)$ is the gcd of $k$ and $n$. I am particularly interested in the case
$$\sum_{\substack{1\leq k\leq n \\ (n,k)=1}}\frac{1}{k}.$$
I know about the result
$$\sum_{\substack{1\le k\le n\\(n,k)=1}}k=\frac{n\varphi(n)}{2}$$
which was discussed here, but I don't know if I can use it in the case of $f(k)=1/k$.
AI: Hint: Try using the fact that $\sum_{d|n} \mu(d)$ is an indicator function for when $n=1$. This allows us to do the following for any function $f$:
$$\sum_{n\leq x}\sum_{k\leq n,\ \gcd (k,n)=1} f(k,n)=\sum_{n\leq x}\sum_{k\leq n} f(k,n) \sum_{d|k, \ d|n} \mu (d) =\sum_{d\leq x} \mu(d) \sum_{n\leq \frac{x}{d}}\sum_{k\leq n} f(dk,dn).$$
This method is very general, and works in a surprisingly large number of situations. I encourage you to try it.
Remark: Using this approach I get $$\sum_{n\leq x}\sum_{k\leq n,\ \gcd(k,n)=1} \frac{1}{k}=\frac{6x}{\pi^{2}}\log x+\left(-\frac{\zeta^{'}(2)}{\zeta(2)^2}+\frac{6\left(\gamma-1\right)}{\pi^{2}}\right)x+O\left(\log^{2}x\right).$$
Edit: I made a slight miscalculation in my remark, missing the factor of $\zeta(2)^2$ in the $\zeta^{'}(2)$ term, and have updated the asymptotic.
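A rough numerical check of the leading term $\frac{6}{\pi^2}x\log x$ of the asymptotic (small $x$ and a loose tolerance, since the $O(x)$ term is still visible at this scale):

```python
import math

def S(x):
    """Sum over n <= x of sum over k <= n with gcd(k, n) = 1 of 1/k."""
    total = 0.0
    for n in range(1, x + 1):
        total += sum(1.0 / k for k in range(1, n + 1) if math.gcd(k, n) == 1)
    return total

x = 1000
leading = 6 * x * math.log(x) / math.pi ** 2
assert abs(S(x) / leading - 1) < 0.15   # leading term dominates, slowly
```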
H: Show there exists a positive, strictly increasing measurable function
Let $f$ be a nonnegative measurable function on $[0,\infty)$ such that $\int_0^\infty f(x)dx < \infty$ is finite. Show that there is a positive, strictly increasing measurable function $a(x)$ on $[0,\infty)$ with $\lim_{x\to\infty} a(x)=\infty$ and such that
$$\int_0^\infty a(x)f(x)dx<\infty.$$
I am just looking for a hint, since I'm pretty stuck. I thought about arguing by contradiction and assuming all such functions cause the last integral to diverge, but I'm not sure that's correct. Maybe I could use Chebychev's inequality to arrive at a contradiction?
AI: Ok, no solution, just 2 hints:
1) Since $\int_0^\infty f$ is finite you can choose any strictly decreasing sequence $a_n>0$ you like, tending to $0$, and there will be $R_n>0$ with $\int_{R_n}^\infty f < a_n/n^2$. So $1/a_n \int^\infty_{R_n}f \le 1/n^2 $. Wlog $R_n<R_{n+1}$. Then you can obviously also estimate $\int_{R_n}^{R_{n+1}} f $
2) it suffices to find $a$ as a step function, increasing strictly with each step, then interpolate.
Edit: Since the OP said a full solution is welcome: let $a_0 > a_1 > a_2 > \dots > 0$ be any strictly decreasing sequence tending to zero. Since for any $\varepsilon >0$ we can find $R$ such that $\int_R^\infty f(x)\, dx < \varepsilon$, we can find a sequence $R_n$, wlog increasing, such that the inequality in 1) holds -- simply choose $\varepsilon = a_n/n^2$.
Now define $\bar a(x) = 1/a_n$ for $R_n \le x < R_{n+1}$. Clearly $\bar a$ is increasing and tends to $\infty$ when $x$ does. Then
$$\int_{R_1}^\infty \bar a(x) f(x)\, dx = \sum_i \int_{R_i}^{R_{i+1}}\frac{f(x)}{a_i}\, dx < \sum_i\frac{1}{i^2} <\infty $$
(Exchanging sum and integral can be easily justified by looking at finite sums first).
Now define $a$ as a piecewise affine linear function such that $a(0)=0$, $a(R_1) = 1/a_0$, furthermore $a(R_2)= \bar a(R_1)$ and in general $a(R_i)=\bar a(R_{i-1})$. Because $a_n$ is strictly decreasing, $ a$ will be strictly increasing, and clearly $a\le \bar a$ for $x\ge R_1$ (draw a picture). Since $f\ge 0$,
$$ \int_{R_1}^\infty a(x) f(x) dx \le \int_{R_1}^\infty \bar a(x) f(x) dx < \infty$$
and $\int_0^{R_1} a f\, dx$ is clearly finite.
H: Find the indefinite integral of $1/(16x^2+20x+35)$
Here are my steps for finding the integral. The result is wrong, but I don't know where I made a mistake, or whether I used the wrong method.
$$
\begin{align*}
\int \frac{dx}{16x^2+20x+35}
&=\frac{1}{16}\int \frac{dx}{x^2+\frac{20}{16}x+\frac{35}{16}} \\
&=\frac{1}{16}\int \frac{dx}{x^2+\frac{20}{16}x+\frac{10}{16}+\frac{25}{16}} \\
&=\frac{1}{16}\int \frac{dx}{(x+\frac{\sqrt{10}}{4})^2+(\frac{5}{4})^2}\\
&=\frac{1}{16}\frac{4}{5}\textstyle\arctan ((x+\frac{\sqrt{10}}{4})\cdot \frac{4}{5}) \\
&=\frac{1}{20}\textstyle\arctan(\frac{4x+\sqrt{10}}{5})
\end{align*}
$$
AI: Your problem is this step:
$$\frac{1}{16}\int \frac{dx}{x^2+\frac{20}{16}x+\frac{10}{16}+\frac{25}{16}}
=\frac{1}{16}\int \frac{dx}{(x+\frac{\sqrt{10}}{4})^2+(\frac{5}{4})^2}$$
for which you use this equality:
$$\textstyle x^2+\frac{20}{16}x+\frac{10}{16}+\frac{25}{16}
=(x+\frac{\sqrt{10}}{4})^2+(\frac{5}{4})^2$$
but that's just not true! The right-hand side expands into $x^2 + 2\frac{\sqrt{10}}{4}x + \frac{10}{16} + \frac{25}{16}$: as you can see the $x$ term is wrong, and the square root is unnecessary. Just to remind you, the general rule for completing the square is:
$$\textstyle x^2 + bx + c = (x + \frac{b}{2})^2 + c - \frac{b^2}{4}$$
No square roots anywhere!
H: the sum $\sum \limits_{n>1} f(n)/n$ over primes
Let
$$
f(n)=\begin{cases}-1&\text{if $n$ is a prime integer},\\
1&\text{otherwise}.
\end{cases}
$$
Then, does the series
$$
\sum_{n>1} f(n)/n
$$
converge or diverge?
AI: Let $f(n)=-1$ if $n$ is prime, and $f(n)=1$ otherwise. We show that the sum
$$\sum_{n=6}^\infty \frac{f(n)}{n}$$
does not converge.
Arrange the integers $\ge 6$ in consecutive groups of $6$. In the set $\{6k,6k+1,6k+2,6k+3,6k+4,6k+5\}$, the numbers $6k+1$ and $6k+5$ may be prime. The other four are definitely composite. It follows that
$$\sum_{i=0}^5 \frac{f(6k+i)}{6k+i} \ge \frac{1}{6k}-\frac{1}{6k+1}+\frac{1}{6k+2}+\frac{1}{6k+3}+\frac{1}{6k+4}-\frac{1}{6k+5}.$$
The sum on the right is $\gt \frac{1}{6k+2}+\frac{1}{6k+3}$, which in turn is $\gt \frac{2}{6k+3}$.
But $\sum_{k=1}^\infty \frac{2}{6k+3}$ diverges. So the sequence of partial sums of the shape $\sum_{n=6}^{6k+5} \frac{f(n)}{n}$ is unbounded, and therefore the series of the question does not converge.
The argument proves a little more: the series in fact diverges to $\infty$.
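The divergence is visible numerically: the partial sums keep growing. A brute-force Python check (trial division is enough at this scale):

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def partial_sum(N):
    """Sum of f(n)/n for 2 <= n <= N, with f(n) = -1 on primes, +1 otherwise."""
    return sum((-1.0 if is_prime(n) else 1.0) / n for n in range(2, N + 1))

# The partial sums are positive and increasing toward infinity.
assert 0 < partial_sum(10**3) < partial_sum(10**4) < partial_sum(10**5)
```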
H: linear-algebra bases polynomials
The answer that was provided previously was a great help, thank you. However, I once had a similar type of question, used the same method, and was marked incorrect: the question didn't ask to compute the inverse, and my answer was marked wrong. Is there another way of doing it? Thanks in advance.
Let $B := [p_0, p_1, p_2]$ denote the natural ordered basis for $P_2(\mathbb R)$, the
vector space of real polynomial functions of degree less than or equal
to $2$. Define $f_1, f_2, f_3\in P_2(\mathbb R)$ by $f_1(x) = 1 − x$, $f_2 = x − x^2$ and
$f_3(x) = 1 + 2x + x^2$. Define $C := [f_1, f_2, f_3]$. Verify that $C$ is an
ordered basis for $P_2(\mathbb R)$. Compute the change of coordinates matrix $A$
which converts $B$-coordinates to $C$-coordinates. Define $f \in P_2(\mathbb R)$ by
$f(x) = 3 − 4x + 2x^2$. Compute $f_C$.
AI: Assuming you mean $\,B:=\{1,x,x^2\}\,$ , and since
$$a(1-x)+b(x-x^2)+c(1+2x+x^2)=0\,\,,\,a,b,c\in\mathbb R\Longrightarrow $$
$$\Longrightarrow(a+c)+(b-a+2c)x+(c-b)x^2=0\Longrightarrow b=c=-a\,\, (\text{first and last coefficients})\,\,,$$
$$\,b-a+2c=-a-a-2a=-4a=0\Longrightarrow a=b=c=0$$
and thus $\,C\,$ is a basis.
Since
$$\begin{align}1-x&=&1\cdot 1&+&(-1)\cdot x&+&0\cdot x^2\\x-x^2&=&0\cdot 1&+&1\cdot x&+&(-1)\cdot x^2\\1+2x+x^2&=&1\cdot 1&+&2\cdot x&+&1\cdot x^2\end{align}$$
the wanted matrix is
$$A=\begin{pmatrix}1&0&1\\\!\!\!\!-1&1&2\\0&\!\!\!\!-1&1\end{pmatrix}^{-1}=\frac{1}{4}\begin{pmatrix}3&-1&-1\\1&\;\;1&-3\\1&\;\;1&\;\;1\end{pmatrix}$$
Finally, since
$$f(x)=3-4x+2x^2\stackrel{coord. wrt B}\longrightarrow \begin{pmatrix}3\\\!\!\!\!-4\\2\end{pmatrix} \,\,,\,\text{we get}$$
$$[Af]=\frac{1}{4}\begin{pmatrix}3&-1&-1\\1&\;\;1&-3\\1&\;\;1&\;\;1\end{pmatrix}\begin{pmatrix}3\\\!\!\!\!-4\\2\end{pmatrix}=\frac{1}{4}\begin{pmatrix}11\\\!\!\!\!-7\\1\end{pmatrix}$$
so $$\,f_C=\frac{11}{4}(1-x)-\frac{7}{4}(x-x^2)+\frac{1}{4}(1+2x+x^2)$$
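The computation above checks out; an exact-arithmetic Python sketch using `fractions` (helper names are illustrative):

```python
from fractions import Fraction as F

# Columns are the B-coordinates of f1 = 1 - x, f2 = x - x^2, f3 = 1 + 2x + x^2.
P = [[1, 0, 1],
     [-1, 1, 2],
     [0, -1, 1]]

# The claimed change-of-coordinates matrix A = P^{-1} (entries over 4).
A = [[F(3, 4), F(-1, 4), F(-1, 4)],
     [F(1, 4), F(1, 4), F(-3, 4)],
     [F(1, 4), F(1, 4), F(1, 4)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

identity = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(A, P) == identity          # A really is the inverse of P

# f(x) = 3 - 4x + 2x^2 has B-coordinates (3, -4, 2); apply A to get f_C.
f_B = [3, -4, 2]
f_C = [sum(A[i][j] * f_B[j] for j in range(3)) for i in range(3)]
assert f_C == [F(11, 4), F(-7, 4), F(1, 4)]
```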
H: $T^2=I$ implies that $T$ is a normal operator
I need to show that if $T$ is an operator in an inner product space over the complex field and if $T^2=I$, then $T$ has to be normal.
AI: This is false. Let
$$T = \left[ \begin{array}{cc} 1 & -2 \\\ 0 & -1 \end{array} \right].$$
$T$ has eigenvalues $1, -1$ with eigenvectors $(1, 0), (1, 1)$ respectively, so satisfies $T^2 = I$. But the eigenspaces of $T$ are not orthogonal, so $T$ cannot be normal.
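A direct check of this counterexample (the matrix is real, so the adjoint is just the transpose):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

T = [[1, -2],
     [0, -1]]

assert matmul(T, T) == [[1, 0], [0, 1]]          # T^2 = I
# Normality would mean T T* == T* T; here it fails.
assert matmul(T, transpose(T)) != matmul(transpose(T), T)
```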
H: Integral of $1/x$ times a decaying function
Let $f:[1,\infty)\to\mathbb{R}$ be a measurable function with $\lim_{t\to\infty} f(t)=0$. I want to show that the function $x\mapsto \int_1^x \frac{f(t)}{t} dt$ is asymptotically sublogarithmic, i.e.
$$\lim_{x\to\infty}\frac{1}{\log x} \int_1^x \frac{f(t)}{t} dt = 0.$$
Although I think I should be able to prove this, I am not.
AI: Since $\lim_{t\to\infty} f(t) = 0$ you have that for every $\epsilon > 0$ there exists $M > 0$ such that $|f(t)| < \epsilon$ for all $t > M$. For $y > M$ you then have
$$ \frac{1}{\log y} \int_1^y \frac{f(t)}{t}\mathrm{d}t = \frac{1}{\log y} \int_1^M \frac{f(t)}{t} \mathrm{d}t + \frac{1}{\log y} \int_M^y \frac{f(t)}{t}\mathrm{d}t $$
The first integral contributes a constant term (only depending on $M$). The second integral can be bounded by
$$ \lvert\int_M^y \frac{f(t)}{t} \mathrm{d}t\rvert \leq \epsilon \log \frac{y}{M} $$
Hence we have that
$$ \left\lvert \frac{1}{\log y}\int_1^y \frac{f(t)}{t} \mathrm{d}t\right\rvert \leq \frac{C_M}{\log y} + \epsilon $$
By choosing $y > M$ large enough we can make the first term also $< \epsilon$, using that $\log y$ grows unboundedly. This means that for every $\epsilon$ you can choose $Y_0$ sufficiently large such that for every $y > Y_0$,
$$ \left\lvert\frac{1}{\log y}\int_1^y \frac{f(t)}{t} \mathrm{d}t \right\rvert < 2\epsilon $$
as desired.
H: Limsup of continuous functions between metric spaces
Let me start with a simple example:
Let $f_n:[0,1]\to[-1,1],x\mapsto \sin 2\pi nx$. For each $x\in[0,1]$, consider the sequence $\lbrace f_n(x):n\ge1\rbrace$ and denote by $F(x)$ the set of points of accumulation of $\lbrace f_n(x):n\ge1\rbrace$. This induces a map $F:[0,1]\to\mathcal{K}([-1,1])$, where $\mathcal{K}(X)$ is the set of nonempty compact subsets of $X$.
For example $F(x)=[-1,1]$ if $x$ is irrational, $F(q/p)=\lbrace \sin\frac{2\pi k}{p}:0\le k\le p-1 \rbrace$ if $(q,p)$ is coprime. In particular $F$ is measurable (in fact lower semicontinuous).
As suggested by Nate, $\mathcal{K}(X)$ is equiped by the Hausdorff distance and the induced topology.
More precisely, $D(K,L)=\inf\lbrace\delta>0: K\subset B(L,\delta)\text{ and }L\subset B(K,\delta)\rbrace$.
About limsup: for a sequence of compact sets $K_n\subset X$, $\limsup\limits_{n\to\infty} K_n=\bigcap_{N\ge1}\overline{\bigcup_{n\ge N}K_n}$.
Now let $X,Y$ be two compact metric spaces and $f_n:X\to Y$ be continuous functions. This induces a map $F:X\to \mathcal{K}(Y)$, where $F(x)$ is the set of points of accumulation of $\lbrace f_n(x):n\ge1\rbrace$. We also denote $F=\limsup f_n$.
What can we say about such $F$? Is it still measurable?
Assume that there exists a Borel subset $X_0\subset X$ on which the limit $f_n(x)$ exists, say $f(x)$. Then we can land on earth and define $f:X_0\to Y$. In this case we can ask
Is $f$, defined on $X_0$, measurable?
Thank you!
Let's put this in a more general frame:
Let $X,Y$ be compact metric spaces and $F_n:X\to \mathcal{K}(Y)$ be a sequence of continuous maps (w.r.t. the topology induced by Hausdorff distance). Let $F(x)=\limsup\limits_{n\to\infty} F_n(x)$ be defined as above (2). Then $F(x)$ is well defined for every $x\in X$ and hence we get a map $F:X\to \mathcal{K}(Y)$.
Is $F$ a Borel measurable map with respect to the induced topology?
I am trying to mimic Leonid's approach and characterize $F^{-1}\mathcal{U}$ for open sets $\mathcal{U}\subset\mathcal{K}(Y)$.
$F(x)\in \mathcal{U}$ iff $D(F(x),\mathcal{U}^c)>1/k$ for some $k\ge1$.
And $D(F(x),\mathcal{U}^c)>1/k$ iff $\exists N\ge1$ such that $D(\overline{\bigcup_{n\ge N}F_n(x)},\mathcal{U}^c)>1/k$ for all $n\ge N$.
Then I am stuck..
AI: The second part is easy: we are looking at the pointwise limit of $f_n$ restricted to $X_0$. For every open subset $U\subset Y$ we have
$$f^{-1}(U)=\bigcup_{k=1}^\infty \bigcup_{N=1}^\infty \bigcap_{n=N}^\infty \{x\in X_0 : \forall n\ge N \ d(f_n(x),U^c)>1/k\} $$
and since the sets under the intersection are Borel, $f^{-1}(U)$ is a Borel set.
For the first part, I don't recall the definition of a Borel-measurable set-valued functions: could you add it to the post? |
H: Find $\lim \limits_{y\rightarrow\infty}\left (\ln^2y\,-2\int_{0}^y\frac{\ln x}{\sqrt{x^2+1}}dx\right)$
I have difficulty with this limit. Where to start?
$$\lim_{y\rightarrow\infty}\left (\ln^2y\,-2\int_{0}^y\frac{\ln x}{\sqrt{x^2+1}}dx\right)$$
AI: By simple integration by parts, we have
$$ \int_{0}^{y} \frac{\log x}{\sqrt{1+x^2}} \; dx = \log y \, \sinh^{-1}y - \int_{0}^{y} \frac{\sinh^{-1}x}{x} \; dx. $$
Now by the substitution
$$x = \frac{u^2-1}{2u} \quad \Longleftrightarrow \quad u = x + \sqrt{x^2+1},$$
and the easy equality $ \sinh^{-1} y = \log \left( y + \sqrt{y^2+1} \right)$, we have
$$ \int_{0}^{y} \frac{\sinh^{-1}x}{x} \; dx = -\int_{1}^{y + \sqrt{y^2+1}} \left( \frac{2u}{1-u^2} + \frac{1}{u} \right) \log u \; du = -F\bigg(y+\sqrt{y^2+1}\bigg),$$
where
$$F(s) := \int_{1}^{s} \left( \frac{2u}{1-u^2} + \frac{1}{u} \right) \log u \; du.$$
Now simple observation shows that for $s > 0$ we have
$$F\left( \frac{1}{s} \right) = -F(s).$$
Since $y + \sqrt{y^2+1} \gg 1$ whenever $y \gg 1$, in view of the identity above, we may calculate $F(s)$ for $s = \left( y + \sqrt{y^2+1} \right)^{-1} \in (0, 1)$ instead since
$$ F\left(y+\sqrt{y^2+1}\right) = -F\left(\frac{1}{y+\sqrt{y^2+1}}\right) = -F(s) .$$
Now we introduce the dilogarithm function, defined by
$$ \mathrm{Li}_{2} (x) = \sum_{n=1}^{\infty} \frac{x^n}{n^2} = -\int_{0}^{x} \frac{\log(1-t)}{t} \; dt.$$
Then
$$ \begin{align*}
F(s)
&= \frac{1}{2} \int_{1}^{s} \frac{\log(u^2)}{1-u^2} \; (2udu) + \int_{1}^{s} \frac{\log u}{u} \; du \\
&= \frac{1}{2} \int_{1}^{s^2} \frac{\log v}{1-v} \; dv + \frac{1}{2} \log^2 s \qquad (v = u^2) \\
&= - \frac{1}{2} \int_{0}^{1-s^2} \frac{\log (1-w)}{w} \; dw + \frac{1}{2} \log^2 s \qquad (w = 1-v) \\
&= \frac{1}{2} \mathrm{Li}_{2}(1-s^2) + \frac{1}{2} \log^2 s.
\end{align*}$$
Thus plugging back, we have
$$ \int_{0}^{y} \frac{\sinh^{-1}x}{x} \; dx = \frac{1}{2} \left[ \mathrm{Li}_{2} \left( \frac{2y}{y+\sqrt{y^2+1}}\right) + \log^2 \left(y+\sqrt{y^2+1}\right) \right].$$
This shows that
$$ \begin{align*}
\log^2 y - 2\int_{0}^{y} \frac{\log x}{\sqrt{x^2+1}}\;dx
&= \log^2 y - 2 \log y \, \log \left( y + \sqrt{y^2+1} \right) \\
& \qquad + \mathrm{Li}_{2} \left( \frac{2y}{y+\sqrt{y^2+1}}\right) + \log^2 \left(y+\sqrt{y^2+1}\right) \\
&= \mathrm{Li}_{2} \left( \frac{2y}{y+\sqrt{y^2+1}}\right) + \log^2 \left(1+\sqrt{1+y^{-2}}\right),
\end{align*}$$
which clearly converges to
$$ \mathrm{Li}_{2}(1) + \log^2 2 = \zeta(2) + \log^2 2 = \frac{\pi^2}{6} + \log^2 2.$$
Numerical experiment shows that this converges to its limit relatively fast.
In fact, the reflection formula for the dilogarithm gives the following estimate.
$$ \log^2 y - 2\int_{0}^{y} \frac{\log x}{\sqrt{x^2+1}}\;dx = \frac{\pi^2}{6} + \log^2 2 - \frac{\log y}{2y^2} + O\left(\frac{1}{y^2}\right).$$ |
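As a sanity check (a numerical sketch, not part of the derivation), one can evaluate the closed form $\mathrm{Li}_2\!\big(2y/(y+\sqrt{y^2+1})\big) + \log^2\!\big(1+\sqrt{1+y^{-2}}\big)$ obtained above, computing the dilogarithm from its defining series, and compare with $\pi^2/6+\log^2 2$:

```python
import math

def li2(x, terms=200000):
    # dilogarithm via its power series, valid for |x| <= 1
    s, p = 0.0, 1.0
    for n in range(1, terms + 1):
        p *= x
        s += p / (n * n)
    return s

def expr(y):
    # closed form of  log^2(y) - 2 * int_0^y log(x)/sqrt(x^2+1) dx
    arg = 2 * y / (y + math.sqrt(y * y + 1))
    return li2(arg) + math.log(1 + math.sqrt(1 + y ** -2)) ** 2

limit = math.pi ** 2 / 6 + math.log(2) ** 2
print(expr(100.0), limit)  # agree to about 3 decimal places
```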
H: Proof that $\frac{(2n)!}{2^n}$ is integer
I am trying to prove that $\dfrac{(2n)!}{2^n}$ is integer. So I have tried it by induction, I have took $n=1$, for which we would have $2/2=1$ is integer. So for $n=k$ it is true, so now comes time to proof it for $k+1$, $(2(n+1))!=(2n+2)!$, which is equal to $$1 \times 2 \times 3 \times \cdots \times (2n) \times (2n+1) \times (2n+2),$$ second term would be $$2^{n+1}=2 \times 2^n$$
Finally if we divide $(1 \times 2 \times 3 \times \cdots \times (2n) \times (2n+1) \times (2n+2)$ by $2^{n+1}=2 \times 2^n$,and consider that,$(2n)!/(2^n)$ is integer, we get $(2n+1) \times (2n+2)/2=(2n+1) \times 2 \times (n+1)/2$, we can cancel out $2$, we get $(2n+1)(n+1)$ which is definitely integer.
I am curious this so simple? Does it means that I have proved correctly?
AI: Yes, you have proved it correctly. Indeed the proof is not difficult. If you need to do a formal induction, fine. But the result becomes obvious if you just expand, say, $(2\cdot 5)!$. It is clear that you pick up at least five $2$'s.
If you do need to write out a formal induction, it could be written out somewhat more clearly. For example, the phrase "so for $n=k$ it is true" is not clear. I assume you mean that "so if for $n=k$ it is true." We now write out a proof.
The result is obviously true for $n=1$. We show that if it is true for $n=k$, it is true for $n=k+1$.
Note that
$$(2\cdot (k+1))!=(2k)!(2k+1)(2k+2).$$
By the induction assumption, $2^k$ divides $(2k)!$. It follows that $2^{k+1}$ divides $(2k)!(2)$, and therefore $2^{k+1}$ divides $(2k)!(2k+1)(2k+2)$. |
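A brute-force check of the statement for small $n$ (a sanity test only, not a proof):

```python
from math import factorial

def divides(n):
    # 2^n should divide (2n)!
    return factorial(2 * n) % 2 ** n == 0

print(all(divides(n) for n in range(1, 60)))  # True
```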
H: Is there a pattern for reducing exponentiation to sigma sums?
The other day I was trying to find a method for cubing numbers similar to one I found for squaring numbers. I found that to find the square of a positive integer n, just sum up the first n odd integers.
$\sum_{t=1}^n 2t-1 = n^2$
Similarly, I found a method for cubing numbers
$\sum_{t=1}^n 3t^2-3t+1 = n^3$
Inside that, I realized I could condense 3t^2 to my sum I found earlier for squaring numbers, and I'd have a nested sigma sum. What I noticed at this point was that all I was doing was writing out in long hand the reduction of multiplication (and exponentiation) to the sum of 1, n times, which makes sense because after all, multiplication is just repeated addition. Also, the number of nested sigma sums was related to the power I was raising the original number to, which is also intuitive because it's just another series of additions.
What I'm curious about is if there is a pattern to this "reduction to summation" that I did. If I wanted to reduce a^b to a summation with terms that are at most of degree (b-1), how is there a repeating pattern that I could follow/extrapolate from the given sums that I have so far?
AI: You exploited the fact that
$$\sum_{t=1}^n (t^3-(t-1)^3)=n^3.$$
This result is clear, when you add up there is wholesale cancellation (telescoping). Your term $3t^2-3t+1$ is $t^3-(t-1)^3$.
Exactly the same idea works for any positive integer $b$. Use the fact that
$$\sum_{t=1}^n (t^b-(t-1)^b)=n^b.$$
Expand $(t-1)^b$ using the Binomial Theorem to get the analogue of your results for general $b$. The polynomial $t^b-(t-1)^b$ has degree $b-1$, precisely what you wanted.
For example, with $b=4$ we end up with $\sum_{t=1}^n (4t^3-6t^2+4t-1)$. With $b=5$ we get $\sum_{t=1}^n (5t^4-10t^3+10t^2-5t+1)$.
The procedure can indeed be used to build up to a formula for the sum of the first $n$ $b$-th powers.
The problem of summing consecutive powers has a long history. You might be interested in the Wikipedia article on Faulhaber's Formula. |
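The telescoping identity is easy to test numerically. This sketch expands $t^b-(t-1)^b$ with the Binomial Theorem (the function names are mine) and checks that summing the resulting degree-$(b-1)$ polynomial recovers $n^b$:

```python
from math import comb

def telescoped_term(t, b):
    # t^b - (t-1)^b = sum_{j=0}^{b-1} (-1)^(b-1-j) * C(b, j) * t^j
    return sum((-1) ** (b - 1 - j) * comb(b, j) * t ** j for j in range(b))

def power_as_sum(n, b):
    # n^b written as a sum of values of a degree-(b-1) polynomial
    return sum(telescoped_term(t, b) for t in range(1, n + 1))

print(power_as_sum(7, 5), 7 ** 5)  # both 16807
```

For $b=4$ the expansion reproduces the $4t^3-6t^2+4t-1$ given above.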
H: Steps to get Inverse of Pentagonal
I have solved http://projecteuler.net/problem=44 by getting the inverse equation from Wikipedia http://en.wikipedia.org/wiki/Pentagonal_number:
Pentagonal:
$f(n) = \frac{n(3n - 1)}{2}$
Inverse Pentagonal:
$n = \frac{\sqrt{24f(n) + 1}+1}{6}$
I am interested in the steps from the pentagonal equation (a quadratic?) to the inverse.
I note that it is similar to What is the inverse of $f(n)=\frac{n^2+n}{2}$? and I've tried the same strategy:
$f(n) = \frac{n(3n - 1)}{2}$
*6 + 1 on each side
$6f(n) + 1 = 9n^2 -3n + 1$
but this isn't correct because I want:
$6f(n) + 1 = 9n^2 -6n + 1$
to give:
$(3n-1)^2$ on the right hand side
AI: To imitate the procedure used in the solution of the other problem, starting from
$$2f(n)=3n^2-n,$$
multiply both sides by $12$. We get
$$24f(n)=36n^2-12n.$$
Note that $36n^2-12n=(6n-1)^2-1$. The rest is easy. We get $(6n-1)^2=24f(n)+1$, then $6n-1=\sqrt{24f(n)+1}$, then $6n=\sqrt{24f(n)+1}+1$.
Another way to solve the same problem is to write our equation as
$$3n^2-n-2f(n)=0,$$
and use the Quadratic Formula.
Remark: Look at the quadratic equation $ax^2+bx+c=0$, where $a\ne 0$. Multiply both sides by $4a$. We get the equivalent equation
$$4a^2x^2+4abx+4ac=0.$$
Note that $4a^2x^2+4abx=(2ax+b)^2-b^2.$
So quickly we arrive at the equation
$$(2ax+b)^2=b^2-4ac.$$
From this we conclude that
$$2ax+b=\pm\sqrt{b^2-4ac},$$
and then straightforward algebra yields
$$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a},$$
the important Quadratic Formula. |
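In the Project Euler setting, the derivation above turns directly into a pentagonality test. A minimal sketch (the function names are mine; for very large inputs an integer square root such as `math.isqrt` would be safer than floating point):

```python
import math

def pentagonal(n):
    return n * (3 * n - 1) // 2

def inverse_pentagonal(p):
    # positive root of 3n^2 - n - 2p = 0, i.e. n = (sqrt(24p + 1) + 1) / 6
    return (math.sqrt(24 * p + 1) + 1) / 6

def is_pentagonal(p):
    n = inverse_pentagonal(p)
    return n == int(n)

print(pentagonal(4), is_pentagonal(22), is_pentagonal(23))  # 22 True False
```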
H: For prime $p>2: 1^23^25^2\cdot\cdot\cdot(p-2)^2 \equiv (-1)^{\frac{p+1}{2}} \pmod p$
Possible Duplicate:
Why is the square of all odds less than an odd prime $p$ congruent to $(-1)^{(p+1)/(2)}\pmod p$?
If p is an odd prime, prove that $1^2 \times 3^2 \times 5^2 \cdots \times (p-2)^2 \equiv (-1)^{(p+1)/2}\pmod{p}$
I'd love your help with proving the following claim:
For prime $p>2$:
$$1^23^25^2\cdot\cdot\cdot(p-2)^2 \equiv (-1)^{\frac{p+1}{2}} \pmod p.$$
I instantly thought of Wilson Theorem which says that $1\cdot2\cdot3\cdot\cdot\cdot\cdot(p-1) \equiv (-1) \pmod p$, but I can't see how to use it.
I also tried to divide it to two cases, for $p \equiv 1 \pmod4$, and $p \equiv 3 \pmod4$, but again I didn't reach the conclusion.
Thanks a lot!
AI: You know that
$(p-1)! \equiv -1 \mod p$
Then, since $p-k\equiv -k \mod p$ we have
$$-1=(p-1)! =[1 \cdot 3 \cdot 5 \cdot ... \cdot (p-2)] \cdot [ 2 \cdot 4 ... \cdot (p-1)]$$
$$=[1 \cdot 3 \cdot 5 \cdot ... \cdot (p-2)] \cdot [ (p-2) \cdot (p-4) ... \cdot (1) \cdot (-1)^\frac{p-1}{2}] = [1 \cdot 3 \cdot 5 \cdot ... \cdot (p-2)]^2 \cdot (-1)^\frac{p-1}{2} $$
Multiplying both sides by $(-1)^\frac{p-1}{2}$ and using $\left((-1)^\frac{p-1}{2}\right)^2=1$ gives
$$1^2 3^2 5^2\cdots(p-2)^2 \equiv (-1)\cdot(-1)^\frac{p-1}{2} = (-1)^\frac{p+1}{2} \pmod p.$$
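A quick computational check of the claimed congruence (a sanity test only):

```python
def lhs(p):
    # product of squares of the odd numbers 1, 3, ..., p-2, reduced mod p
    prod = 1
    for k in range(1, p - 1, 2):
        prod = prod * k * k % p
    return prod

def rhs(p):
    return (-1) ** ((p + 1) // 2) % p

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29]
print(all(lhs(p) == rhs(p) for p in primes))  # True
```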
H: When is Laplace variable $s =j\omega$?
Having an exam next week!
I've searched a lot, couldn't find anything I could understand.
When is the Laplace variable $s$ equal to $j\omega$? Because I know that, by definition, $s = \sigma +j\omega$
Thank you!
AI: $s=\sigma+j\omega $ means that $s$ is a complex variable with real part $\sigma$ and imaginary part $\omega$. When the real part is equal to zero, we have $s=j\omega$. |
H: Order of integration
I am reading a book by L. D. Landau titled Mechanics and there is a "changing order of the integral" step on page 28 that I don't get:
$$\int_0^a\int_0^E \left[{dx_2\over dU}-{dx_1\over dU}\right]{dUdE\over \sqrt{(a-E)(E-U)}}\\=\int_0^a\left[{dx_2\over dU}-{dx_1\over dU}\right] dU \int_U^a{dE\over \sqrt{(a-E)(E-U)}}$$
I am not very good with changing orders of integrals (or summations for that matter). Could someone please explain? (And if at all possible, offer an intuitive way of understanding it?)
Thanks.
AI: First sketch the area over which you are performing the integral. In the first integral, note that your $U$ goes from $0$ to $E$ and then $E$ goes from $0$ to $a$. This is shown in the figure below. The $X$ axis is the $E$ axis and the $Y$ axis is the $U$ axis. The inclined line is the line $U=E$. The vertical line at $E = 5$ is in general the line $E = a$. The small shaded vertical strip in the middle represents $U$ going from $0$ to $E$. Integrating over the strip corresponds to the inner integral of the first integral you have i.e. $\displaystyle \int_{U=0}^{U=E}$. For the outer integral, $\displaystyle \int_{E=0}^{E=a}$, $E$ goes from $0$ to $a$. Hence, in essence you are integrating over the triangle shown below by integrating over vertical strips and moving the vertical strips horizontally.
Now changing the order of integration essentially is the same as integrating over horizontal strips and moving the horizontal strips vertically as shown in the figure below.
Now note that to integrate over a horizontal strip, you integrate $E$ from $U$ to $a$. Then move the horizontal strip vertically which corresponds to $U$ going from $0$ to $a$.
Hence, $$\int_{E=0}^{E=a} \int_{U=0}^{U=E} (\cdot) dU dE = \int_{U=0}^{U=a} \int_{E=U}^{E=a} (\cdot) dE dU$$
You should also look here to see how the double integral is evaluated by changing the order of integration. |
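The swap can also be probed numerically. This sketch sums a sample integrand over the triangle by vertical strips ($E$ outer) and by horizontal strips ($U$ outer) using cell midpoints; the two orderings visit the same cells, so they agree, and both approximate the exact integral:

```python
def E_outer(f, a, m=400):
    # vertical strips: for each E-cell, sum over U-cells with U <= E
    h = a / m
    return h * h * sum(f((i + 0.5) * h, (j + 0.5) * h)
                       for i in range(m) for j in range(i + 1))

def U_outer(f, a, m=400):
    # horizontal strips: for each U-cell, sum over E-cells with E >= U
    h = a / m
    return h * h * sum(f((i + 0.5) * h, (j + 0.5) * h)
                       for j in range(m) for i in range(j, m))

def f(E, U):
    return E * U

print(E_outer(f, 2.0), U_outer(f, 2.0))  # both close to a^4/8 = 2
```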
H: Sumset using first moment method
This is problem 1.1.6 from the book "Additive Combinatorics" by Tao and Vu.
Suppose $A$ is a subset of an additive group $Z$. We need to show that there exists a $d$-element subset of $Z$, denoted $B = \{ v_1, v_2, \ldots, v_d \}$, with $d = O(\log |Z|/|A|)$ such that
$A + FS(B)$ has size at least $|Z|/2$. (Here $FS(B)$ is the set of subset sums of $B$.)
So I tried taking a random $d$-element subset of $Z$ and a specific $z \in Z$ so that I can bound the probability that $z$ is contained in $A + FS(B)$, and then apply the first moment method to get a bound on the average size of the set.
But I am unable to get a bound. Please help if possible. (This is not part of any homework.)
AI: Hint: try an inductive construction rather than a random construction. Picking $z\in Z$ uniformly, the expected order of $A \cup (A+z)$ is what? |
H: Poisson integral in $R^2$
This question arises out of confusion about some aspects of the Poisson integral in $R^2$, simplified here to a function f on the boundary of the unit circle $ \partial C$ and on C,
$$u(r,\phi) = \frac{1}{2\pi} \int_0^{2\pi} \frac{ f (\theta) (1-r^2) d\theta}{1-2r \cos(\theta-\phi)+r^2}$$
The intuition behind the derivation of the integral is described nicely here
and based on this sort of intuition I wondered whether there might be an approach that did not involve the use of "inversive" geometry.
The article cited mentions that we would typically be considering harmonic functions (temperature, e.g.) and that given an arbitrary piecewise continuous assignment of values to the boundary $\partial C$, "there always exists a harmonic function in R that takes on these values as the boundary is approached."
Well, suppose $f(\phi) = \phi^2 $? There is a discontinuity at $(r = 1, \phi = 0, 2 \pi )$. According to the article, if we are approaching $ \mathit{continuous }$ boundary points this is not an issue.
So for the question(s). If I guess--naively-- that the influence of a point on the boundary on a point inside the circle is inversely proportional to the square of the distance between the two points, I might approximate the integral as
$$ u \approx \frac{\sum_{i=1}^n \frac{f(\phi_i) }{(d_i)^2}}{\sum_{i=1}^n \frac{1}{(d_i)^2}}$$
This approximation is not great, but it yields reasonable values near the center of the circle.
Is there a straightforward Riemann-sum approximation of the integral along these lines? Or are we stuck with the geometry of the article? Is the partial agreement I get for my guess fortuitous?
We could assign pretty outrageous piecewise-continuous values to the boundary--does every one of these correspond to a theoretically possible (for example) heat distribution?
Thanks for any insights, edits.
AI: The Poisson kernel $$P(r,\theta,\phi)=\frac{ 1-r^2 }{1-2r \cos(\theta-\phi)+r^2}$$ can be written in the more intuitive form $$P(z,\zeta)=\frac{1-|z|^2}{|z-\zeta|^2}$$ where $z=re^{i\theta}$ and $\zeta=e^{i\phi}$. This form clearly shows both the inverse square relation (in the denominator) and penalty for being close to the boundary (in the numerator). Notice that $1-|z|^2=(1+|z|)(1-|z|)$ where the first factor does not change much: $1\le 1+|z|\le 2$ for all $z$ in the disk. Thus, the penalty for proximity to the boundary is essentially linear, $(1-|z|)$.
Your sum with squared distances ignores the numerator, which of course leads to poor results near the boundary. If you want a Riemann sum approximation, why not just take the Riemann sum of the original Poisson integral itself: that is, split $[0,2\pi]$ into subintervals and pick a point in each?
Concerning (2): Anything piecewise continuous will work as you expect as long as the boundary values are integrable. For example, the boundary values $1/\theta$ for $\theta\in (0,2\pi]$ will not give you any harmonic function inside in the disk: the integral will simply diverge everywhere. But for any integrable boundary values you get a harmonic function, which can be thought of as a stationary heat distribution (under the assumption that the temperature of the boundary is somehow maintained by an outside device). |
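A numerical sketch of the Riemann-sum suggestion above: the midpoint rule in $\phi$ applied to the Poisson integral itself reproduces known harmonic extensions, e.g. boundary values $f(\phi)=\cos\phi$ extend to $u(r,\theta)=r\cos\theta$:

```python
import math

def poisson(r, theta, f, m=2000):
    # midpoint Riemann sum of the Poisson integral over [0, 2*pi]
    h = 2 * math.pi / m
    total = 0.0
    for i in range(m):
        phi = (i + 0.5) * h
        kernel = (1 - r * r) / (1 - 2 * r * math.cos(theta - phi) + r * r)
        total += f(phi) * kernel
    return total * h / (2 * math.pi)

print(poisson(0.5, 0.3, math.cos), 0.5 * math.cos(0.3))  # nearly equal
```

Because the integrand is smooth and periodic, this equispaced rule converges extremely fast for $r$ not too close to $1$.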
H: Quadratic minimizing a certain maximum ratio
In connection with a CompSciSE question about largest eigenvalue of PSD matrics, I'd like to know which (nonzero) quadratic polynomial $f(x)$ minimizes the ratio:
$$\frac{\max_{0 \le x \le 0.8} |f(x)|}{|f(1)|}$$
[The goal is making a robust improvement on the simple power method's rate of convergence without having to code a Lanczos-like algorithm.]
Of course the method of solution is of greater interest than the specific polynomial. I see that the answer is not unique, insofar as any nonzero multiple of $f(x)$ gives the same ratio. So perhaps one wants to restrict attention to the cases $f(1) = 1$.
AI: You want to minimize $t$ such that $-t \le f(x) = a_0 + a_1 x + a_2 x^2 \le t$ for all $x \in [0,0.8]$ and $f(1)=1$. Think of this as a linear programming problem with infinitely many constraints and four (sign-free) variables $t$, $a_0$, $a_1$, $a_2$. At an optimal solution, four of the constraints should be binding. One must be $f(1) = 1$ (otherwise we could scale everything), so there should be three points $x_1 < x_2 <x_3 \in [0,0.8]$ with $f(x_i) = \pm t$. Since the graph of your quadratic is a parabola, it's clear that $x_1 = 0$, $x_2 = 0.4$, $x_3 = 0.8$, and $f(x) = - t + 2 t (x - 0.4)^2/0.4^2$. This would have
$f(1) = 3.5 t$, so the optimal answer is $t = 1/3.5 = 2/7$, and the quadratic is
$-2/7 + 4/7 (x - 0.4)^2/0.4^2 = (25 x^2 - 20 x + 2)/7$. |
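A quick grid check of this optimum (a sketch only): $f(x)=(25x^2-20x+2)/7$ satisfies $f(1)=1$ and $|f(x)|\le 2/7$ on $[0,0.8]$, with equality at $x=0,\,0.4,\,0.8$:

```python
def f(x):
    return (25 * x * x - 20 * x + 2) / 7

# sample |f| on a fine grid over [0, 0.8]
grid_max = max(abs(f(i * 0.8 / 10000)) for i in range(10001))
print(f(1), grid_max, 2 / 7)
```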
H: Implicit differentiation for $y\cos x = 4x^2 + 3y^2$
I am stuck on doing this implicit differentiation problem below.
$$y \cos x = 4x^2 + 3y^2$$
I am now stuck at the following equality and I don't know how to proceed. Can someone help me?
$$y(−\sin x) + (\cos x)y' = 8x + 6yy' $$
AI: Group the $y'$ terms to one side to get $$(\cos(x)-6y)y' = 8x + y \sin(x) \implies y' = \dfrac{8x + y \sin(x)}{\cos(x) - 6y}$$ |
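One can sanity-check the formula numerically by solving the original relation for $y$ (it is quadratic in $y$, with real solutions only for small $|x|$) and comparing a finite-difference slope against the implicit result. This is an illustrative sketch, not part of the answer:

```python
import math

def y_branch(x):
    # one real branch of  3y^2 - y*cos(x) + 4x^2 = 0
    c = math.cos(x)
    return (c + math.sqrt(c * c - 48 * x * x)) / 6

def slope_formula(x):
    y = y_branch(x)
    return (8 * x + y * math.sin(x)) / (math.cos(x) - 6 * y)

def slope_fd(x, h=1e-6):
    return (y_branch(x + h) - y_branch(x - h)) / (2 * h)

print(slope_formula(0.1), slope_fd(0.1))  # both about -1.16
```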
H: Implicit Differentiation $y''$
I'm trying to find $y''$ by implicit differentiation of this problem: $4x^2 + y^2 = 3$
So far, I was able to get $y'$ which is $\frac{-4x}{y}$
How do I go about getting $y''$? I am kind of lost on that part.
AI: You have $$y'=-\frac{4x}y\;.$$ Differentiate both sides with respect to $x$:
$$y''=-\frac{4y-4xy'}{y^2}=\frac{4xy'-4y}{y^2}\;.$$
Finally, substitute the known value of $y'$:
$$y''=\frac4{y^2}\left(x\left(-\frac{4x}y\right)-y\right)=-\frac4{y^2}\cdot\frac{4x^2+y^2}y=-\frac{4(4x^2+y^2)}{y^3}\;.$$
But from the original equation we know that $4x^2+y^2=3$, so in the end we have
$$y''=-\frac{12}{y^3}\;.$$ |
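A finite-difference check of $y''=-12/y^3$ on the explicit branch $y=\sqrt{3-4x^2}$ (illustrative only):

```python
import math

def y(x):
    # upper branch of 4x^2 + y^2 = 3
    return math.sqrt(3 - 4 * x * x)

def ypp_fd(x, h=1e-4):
    # central second difference approximates y''
    return (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)

x0 = 0.3
print(ypp_fd(x0), -12 / y(x0) ** 3)  # nearly equal
```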
H: Prove that $\sum_{n=1}^\infty\frac{\sin(nz)}{2^n}$ is analytic on $\{z\in\mathbb{C}:|\operatorname{Im}(z)|<\log(2)\}$
Prove that $f(z)=\sum_{n=1}^\infty\frac{\sin(nz)}{2^n}$ is analytic on $A=\{z\in\mathbb{C}:|\operatorname{Im}(z)|<\log(2)\}$
I tried expanding $\sin(nz)$ in terms of $e^{inz}$ but that did not help me unless I am doing something wrong. I know Weierstrass's M-test comes in to play.
AI: Using $\sin nz=\frac{1}{2i}(e^{inz}-e^{-inz})$ works. The constant is irrelevant for the convergence.
Deal with the two exponentials separately. Let $z=x+iy$. Then $|e^{inz}|=e^{-ny}$, and $|e^{i(n+1)z}|=e^{-(n+1)y}$.
Thus, remembering about the $2^n$ in the denominator, we see that the norm of the ratio of two consecutive terms is $\frac{e^{-y}}{2}$. This norm is $\lt 1$ precisely if $e^{-y} \lt 2$, that is, if $y\gt -\log 2$.
In the same way, for the term in $e^{-iz}$, the norm of the ratio of two consecutive terms is $\lt 1$ precisely if $y \lt \log 2$. Thus, by the Weierstrass $M$-test, we have analyticity if $-\log 2\lt y\lt \log 2$. |
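The convergence claim is easy to probe with partial sums (a numerical sketch): inside the strip, say $y=0.6<\log 2$, successive partial sums stabilize, while outside, say $y=0.8$, the terms blow up:

```python
import cmath

def partial(z, n):
    return sum(cmath.sin(k * z) / 2 ** k for k in range(1, n + 1))

inside = 1 + 0.6j    # |Im z| = 0.6 < log 2 ~ 0.693
outside = 1 + 0.8j   # |Im z| = 0.8 > log 2

print(abs(partial(inside, 400) - partial(inside, 200)))        # tiny
print(abs(partial(outside, 400)) > abs(partial(outside, 200)))  # True
```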
H: Left and Right Vector bundles
I am reading a paper that starts talking about 'left vector bundles' and I'm having trouble figuring out what they mean. The specific setup is as follows:
A quaternionic line bundle $L$ over manifold $M$ is a real smooth rank 4 vector bundle with fibers 1-dimensional quaternionic right vector spaces (*).
A complex quaternionic vector bundle is a pair $(L,J)$ with a quaternionic linear endomorphism $J$ such that $J^2=-1$.
A complex quaternionic vector bundle is thus a rank 2 left complex vector bundle whose complex structure is compatible with the right quaternionic structure (**).
(*) Is a right vector space a vector space $V$ with scalar multiplication only defined on the right? So $V \times F \to V$ for field $F$ but $F \times V \to V$ is not defined?
(**) If what I say above is correct, what does the compatibility mean? If this complex quaternionic vector bundle is left, then the compatibility means
$a(QJ)=(aQ)J,~a\in F, Q\in \mathbb{H}$
or something? Or does the "rank 2" say the compatibility is something like
$\mathbb{H}\oplus J\mathbb{H}=\mathbb{H}J\oplus \mathbb{H}$
The paper I am reading is "Quaternionic Analysis on Riemann Surfaces and Differential Geometry" by Pedit and Pinkall (1998). It seems like my confusion is not related at all to the quaternion structure, would apply to any vector space. Also, I have found some other references to the left- and right- vector bundles in double vector bundles, but that also seems not related to this. Does anyone have any clarity?
AI: (*) Is a right vector space a vector space V with scalar multiplication only defined on the right? So V×F→V for field F but F×V→V is not defined?
This is correct. The main point is the associativity rule: if $v$ is a vector and $a,b$ scalars, then in a left-vector space we have
$$ a \cdot (b \cdot v) = (a \cdot b) \cdot v$$
and in a right-vector space we have
$$ v \cdot (a \cdot b) = (v \cdot a) \cdot b.$$
One could always just swap the order of the factors to write scalar multiplication on the left in a right-vector space, but that is potentially very confusing, because the associativity rule would be
$$\color{red}{ b \cdot (a \cdot v) = (a \cdot b) \cdot v.}$$
I've colored this equation red because it's a bad idea! Among other things, it would mean that we are not allowed to write $abv$, because the two different interpretations $(ab)v$ and $a(bv)$ can be different.
However, for a commutative field (like the complex numbers), $ab = ba$ so we don't have to develop separate notions of left and right vector spaces in that context.
It is possible to talk about a left-$F$ right-$E$ vector space, where $F$ and $E$ are skew fields: this is a vector space that is a left $F$-vector space and a right $E$-vector space that are "compatible": they satisfy an additional associativity constraint:
$$ f \cdot (v \cdot e) = (f \cdot v) \cdot e. $$
Unfortunately, I'm not familiar with your context, so I can't answer your questions directly. In fact, I don't think I've ever done linear algebra over skew fields before -- all of these ideas I'm familiar with from module theory. But, at least, module theory is a generalization: a left vector space over a (skew) field $F$ is the same thing as a left module over $F$ (and the same on the right), so I'm assuming all of the notions you're talking about have the same meaning as the module-theoretic version. |
H: Weakly compact operators on $\ell_1$
Is the following assertion true/known?
Let $V$ be a Banach space and let $T\colon \ell_1\to V$ be a bounded linear operator. Is it true that $T$ is not weakly compact if and only if there is a complemented subspace $X$ of $\ell_1$ (thus, isomorphic to $\ell_1$) such that $T|_X\colon X\to T(X)$ is an isomorphism?
Of course, the part 'only if' is trivial.
I've got some evidences that it might be true, yet I am not sure one thing in my proof.
AI: It is not true. Consider a continuous linear surjection of $\ell_1$ onto $c_0$; such a map exists and is strictly singular (every operator from $\ell_1$ to $c_0$ is strictly singular).
What is true is the following result of Pelczynski:
Theorem. Let $\mu$ be a non-trivial measure on the field of all Borel subsets of some topological space and $T: X\longrightarrow L_1(\mu)$ a bounded linear operator. The following are equivalent:
$1.$ $T$ is not weakly compact.
$2.$ $T$ factors the identity operator of $\ell_1$.
$3.$ There exists a complemented subspace $Y$ of $X$ such that $T|_Y$ is an isomorphism, $Y$ (and hence $T(Y)$ also) is isomorphic to $\ell_1$ and $T(Y)$ is complemented in $L_1(\mu)$.
$4.$ $T$ is strictly cosingular.
Moreover, if $X$ has the Dunford-Pettis property, then 1.-4. above are equivalent to:
$5.$ $T$ is strictly singular.
This result is proved in Pelczynski's paper On strictly singular and strictly cosingular operators. II. Strictly singular and strictly cosingular operators in $L_1(\nu)$ spaces, Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys. 13 (1965) 37-41. It generalises earlier joint work of Kadets and Pelczynski that showed that a nonreflexive subspace of an $L_1(\mu)$ space contains a subspace that is isomorphic to $\ell_1$ and complemented in the ambient space $L_1(\mu)$.
Pelczynski's theorem above for non-weakly compact operators into $L_1(\mu)$ spaces is, in a sense, dual to his theorem asserting that an operator from a $C(K)$ space is non-weakly compact if and only if it fixes a copy of $c_0$. As Pelczynski points out in the beginning to part I. of the above paper, "these results are closely connected with criteria of weak compactness of linear operators in $C(S)$ and $L_1(\nu)$ spaces due to Grothendieck". |
H: Derivative of square root
What would be the derivative of square roots? For example if I have $2 \sqrt{x}$ or $\sqrt{x}$.
I'm unsure how to find the derivative of these and include them especially in something like implicit.
AI: $\sqrt x=x^{1/2}$, so you just use the power rule: the derivative is $\frac12x^{-1/2}$. |
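The power-rule answer matches a finite-difference slope, as a trivial numeric sanity check:

```python
import math

def deriv_formula(x):
    # d/dx sqrt(x) = (1/2) * x^(-1/2)
    return 0.5 * x ** -0.5

def deriv_fd(x, h=1e-6):
    # central difference approximation of the derivative
    return (math.sqrt(x + h) - math.sqrt(x - h)) / (2 * h)

print(deriv_formula(4.0), deriv_fd(4.0))  # both ~0.25
```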
H: Explanation of passage in Atiyah-MacDonald
On page 52 they write "...By (4.3) we can achieve (i)..."
where (4.3) is the lemma on the previous page that states that if $q_i$ are all $p$-primary then $\bigcap_i q_i$ is $p$-primary and (i) is the property of a minimal primary decomposition that $r(q_i)$ are pairwise distinct.
I'm confused about how $r(q_i)=p$ for all $i$ helps us to get $r(q_i) \neq r(q_j)$ for all $i \neq j$. If $r(q_i)=p$ for all $i$ then the primary decomposition would only consist of one ideal because we'd have to throw all the others away to get (i). Or not?
AI: Let $I = q_1 \cap q_2 \cap q_3 \cap q_4 \cap q_5$.
Suppose $q_1, q_2$ are $p_1$-primary and $q_3, q_4, q_5$ are $p_2$-primary, where $p_1 \neq p_2$.
Let $q_1' = q_1 \cap q_2$, $q_3' = q_3 \cap q_4 \cap q_5$.
Then $q_1'$ is $p_1$-primary and $q_3'$ is $p_2$-primary.
And $ I = q_1 \cap q_2 \cap q_3 \cap q_4 \cap q_5 = q_1' \cap q_3'$. |
H: Can someone help me with this conic?
$$\frac{(x+1)^2}{16} + \frac{(y-2)^2}{9} = 1.$$
I just started conics, but I thought you would multiply both sides by $16$ and then $9$ and then expand, which would get you $x^2 +y^2+2x-4y+5$.
Both signs are the same, but the foci are supposed to be $(-1 \pm \sqrt{7},2)$, which I don't get.
I thought it was supposed to be a circle. both squared variables are multiplied by the same number and have the same signs.
I tried to identify it off of:
Are both variables squared?
No: It's a parabola.
Yes: Go to the next test....
Do the squared terms have opposite signs?
Yes: It's an hyperbola.
No: Go to the next test....
Are the squared terms multiplied by the same number?
Yes: It's a circle.
No: It's an ellipse.
Gotten from purplemath: http://www.purplemath.com/modules/conics.htm
AI: If you take $$\frac{(x+1)^2}{16} + \frac{(y-2)^2}{9} = 1$$ and multiply both sides by $16$ and then $9$ you get $$9(x+1)^2 + 16(y-2)^2 = 144.$$ If you expand you get $$9x^2+18x+16y^2-64y-71 =0$$ which is not what you had.
In fact your equation is that of an ellipse centred at $(-1,2)$ with a horizontal semi-major axis of $\sqrt{16} = 4$ and a vertical semi-minor axis of $\sqrt{9} = 3$. The foci lie on the major axis at distance $c=\sqrt{a^2-b^2}=\sqrt{16-9}=\sqrt{7}$ from the centre, which gives the $(-1 \pm \sqrt{7},\,2)$ you quoted.
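The foci the asker quotes come from $c=\sqrt{a^2-b^2}$ for an ellipse. A small numeric sketch of that, together with the defining property that distances to the two foci sum to $2a$ on the boundary:

```python
import math

cx, cy = -1.0, 2.0            # centre
a, b = 4.0, 3.0               # semi-major (horizontal), semi-minor (vertical)
c = math.sqrt(a * a - b * b)  # focal distance: sqrt(16 - 9) = sqrt(7)
foci = [(cx - c, cy), (cx + c, cy)]
print(foci)  # (-1 - sqrt(7), 2) and (-1 + sqrt(7), 2)

def on_ellipse(t):
    # parametrize the boundary
    return (cx + a * math.cos(t), cy + b * math.sin(t))

x, y = on_ellipse(0.7)
d = math.dist((x, y), foci[0]) + math.dist((x, y), foci[1])
print(d, 2 * a)  # equal
```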
H: Implicit Differentiation of $2\sqrt{x} + \sqrt{y} = 3$
I am trying to implicitly differentiate this problem below but I am stumped because of the square-roots.
$$2\sqrt{x} + \sqrt{y} = 3$$
AI: Our original is:
$$2\sqrt{x} + \sqrt{y} = 3 \tag{1}$$
Taking the derivative with respect to $x$ and recalling that the derivative of a constant is zero, we get:
$$\frac{1}{\sqrt{x}} + \frac{1}{2\sqrt{y}} \cdot y' = 0$$
Multiply through by $2\sqrt{x}\sqrt{y}$ and you get
$$2\sqrt{y} + y'\sqrt{x} = 0$$
Now subtract the $2\sqrt{y}$ term so we are dealing with the $y'$ term on one side only to get:
$$y'\sqrt{x} = -2\sqrt{y}$$
Divide through by $\sqrt{x}$ to get $y'$ by itself and you find that
$$y' = \frac{-2\sqrt{y}}{\sqrt{x}} \tag{2}$$
To show how this is equivalent to the answer provided by Marvis, look at equation $(1)$ and solve for $\sqrt{y}$ directly and you find that:
$$\sqrt{y} = 3 - 2\sqrt{x} \tag{3}$$
Moving along to equation $(2)$ and replacing the $\sqrt{y}$ term with $3-2\sqrt{x}$, we get:
$$y' = \frac{-2\cdot (3-2\sqrt{x})}{\sqrt{x}}$$
$$y' = \frac{-6 + 4\sqrt{x}}{\sqrt{x}}$$
$$y' = \frac{-6}{\sqrt{x}} + 4 \equiv 4 -\frac{6}{\sqrt{x}} \tag{4}$$
Remark: I would argue implicit differentiation is simple enough here and I would stop at $(2)$. If needed to simplify further though, I would have gone the route Marvis took by rewriting the original and squaring to find $y = \text{something}$ and then taking the derivative of both sides. |
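Both forms of the derivative agree with a finite-difference check on the explicit branch $y=(3-2\sqrt{x})^2$, valid while $3-2\sqrt{x}\ge 0$ (a sketch only):

```python
import math

def y(x):
    # explicit solution from the original relation
    return (3 - 2 * math.sqrt(x)) ** 2

def implicit_form(x):
    # y' = -2*sqrt(y)/sqrt(x), equation (2)
    return -2 * math.sqrt(y(x)) / math.sqrt(x)

def simplified_form(x):
    # y' = 4 - 6/sqrt(x), equation (4)
    return 4 - 6 / math.sqrt(x)

def fd(x, h=1e-6):
    return (y(x + h) - y(x - h)) / (2 * h)

print(implicit_form(1.0), simplified_form(1.0), fd(1.0))  # all -2
```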
H: Prove that $\int_0^1t^{p-1}(1-t)^{q-1}\,dt=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}$ for positive $p$ and $q$
I'm trying to prove that for $p,q>0$, we have $$\int_0^1t^{p-1}(1-t)^{q-1}\,dt=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}.$$
The hint given suggests that we express $\Gamma(p)\Gamma(q)$ as a double integral, then do a change of variables, but I've been unable thus far to express it as a double integral.
Can anyone get me started or suggest an alternate approach?
Note: This wasn't actually given to me as the $\Gamma$ function, just as a function $f$ satisfying $$f(p)=\int_0^\infty e^{-t}t^{p-1}\,dt$$ for all $p>0$, but I recognized that. This is in the context of an advanced calculus practice exam.
AI: Indeed, to show this identity, one can start from the double integral
$$
\Gamma(p)\cdot\Gamma(q)=\int_0^\infty \mathrm e^{-x}x^{p-1}\,\mathrm dx\cdot\int_0^\infty \mathrm e^{-y}y^{q-1}\,\mathrm dy=\iint_{[0,+\infty)^2} \mathrm e^{-x-y}x^{p-1}y^{q-1}\,\mathrm dx\,\mathrm dy.
$$
The change of variable $x=ts$, $y=(1-t)s$, with $0\leqslant t\leqslant1$ and $s\geqslant0$, whose Jacobian yields $\mathrm dx\,\mathrm dy=s\,\mathrm ds\,\mathrm dt$, shows that
$$
\Gamma(p)\cdot\Gamma(q)=\int_0^{+\infty}\int_0^1\mathrm e^{-s}t^{p-1}s^{p-1}(1-t)^{q-1}s^{q-1}s\,\mathrm ds\,\mathrm dt.
$$
Finally,
$$\Gamma(p)\cdot\Gamma(q)=\int_0^{+\infty}\mathrm e^{-s}s^{p+q-1}\,\mathrm ds\cdot\int_0^1t^{p-1}(1-t)^{q-1}\,\mathrm dt=\Gamma(p+q)\cdot\int_0^1t^{p-1}(1-t)^{q-1}\,\mathrm dt,
$$
and this is it. The last integral above is the Beta function $\mathrm B(p,q)$.
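A numerical confirmation of the identity (midpoint rule for the Beta integral against `math.gamma`; just a sanity check):

```python
import math

def beta_integral(p, q, m=100000):
    # midpoint Riemann sum of  t^(p-1) * (1-t)^(q-1)  over [0, 1]
    h = 1.0 / m
    return h * sum(((i + 0.5) * h) ** (p - 1) * (1 - (i + 0.5) * h) ** (q - 1)
                   for i in range(m))

def beta_gamma(p, q):
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

print(beta_integral(2.5, 3.0), beta_gamma(2.5, 3.0))  # ~0.0508 both
```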
H: Does $\lim_{h\rightarrow 0}\ [f(x+h)-f(x-h)]=0$ imply that $f$ is continuous?
Suppose $f$ is a real function defined on $\mathbb{R}$ which satisfies
$$\lim_{h\rightarrow 0}\ [f(x+h)-f(x-h)]=0.$$
Does this imply that $f$ is continuous?
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 4, exercise 1.
AI: No. Consider the function $f$ that is equal to $0$ on the entire real line, except at $x=0$, where $f(0)=1$. Because $\mathbb{R}-\{0\}$ is open, around any nonzero point we can find a neighborhood where $f$ is identically $0$, so the condition clearly holds there. And the condition also holds at $x=0$: for $h\neq 0$, the points $x\pm h=\pm h$ are never $0$, so $f(x+h)-f(x-h)=0-0=0$ (and in the definition of the limit we never consider $h=0$).
But $f$ is manifestly discontinuous, so we are done. |
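A quick numerical illustration of this counterexample (the sample points below are arbitrary):

```python
def f(x):
    return 1 if x == 0 else 0       # 0 everywhere except f(0) = 1

# the symmetric difference f(x+h) - f(x-h) is 0 at these sample points,
# including x = 0, since x + h and x - h are both nonzero there
for x in [0, 0.3, -2]:
    for h in [1e-1, 1e-3, 1e-6]:
        assert f(x + h) - f(x - h) == 0

# ... but f is not continuous at 0
assert f(0) == 1 and f(1e-9) == 0
```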
H: If $K$ is an extension field of $\mathbb{Q}$ such that $[K:\mathbb{Q}]=2$, prove that $K=\mathbb{Q}(\sqrt{d})$ for some square free integer $d$
I think I have the later parts of this proof worked out pretty well but what's really stumping me is how to go from knowing $[K:\mathbb{Q}]=2$ to knowing that $K = \mathbb{Q}[x]/(a_2x^2 + a_1x + a_0)$.
I mean all I know from $[K:\mathbb{Q}]=2$ is that every element of $K$ can be written in the form $bk_1 + ck_2$ for $b,c\in \mathbb{Q}$. As far as I can tell I don't yet have any theorems at my disposal that say if $[K:\mathbb{Q}]$ is finite than $K$ must be algebraic over $\mathbb{Q}$, or anything like that. How do I go from this premise about $K$ as a 2-dimensional vector space over $\mathbb{Q}$ to knowing something about elements of $K$ as roots of polynomials in $\mathbb{Q}[x]$? Thanks.
AI: Let $\alpha \in K - \mathbb{Q}$.
Since $1 , α, α^2$ are linearly dependent over $\mathbb{Q}$, $aα^2 + bα + c = 0$, where $a, b, c \in \mathbb{Q}$ and not all of $a, b, c$ are zero.
By multiplying a suitable nonzero integer, we can assume $a, b, c \in \mathbb{Z}$.
If $a = 0$, we get $bα + c = 0$.
Since $b$ or $c$ is not zero, this can't happen.
Hence $a \neq 0$.
By the quadratic formula, $\alpha = \dfrac{-b \pm (b^2 - 4ac)^{1/2}}{2a}$, hence $\mathbb{Q}(\alpha) = \mathbb{Q}((b^2 - 4ac)^{1/2})$.
Since $1, \alpha$ are linearly independent over $\mathbb{Q}$, $[\mathbb{Q}(\alpha) : \mathbb{Q}] = 2$. Hence $K = \mathbb{Q}(\alpha)$ and we are done.
H: codimension of "jumping" of the dimension of fibers
Let $f:X\rightarrow Y$ be a dominant morphism of projective (and smooth if you like) varieties over an algebraically closed field $k$ such that $n=\dim(X)=\dim(Y)$. Then $f$ is proper, so by Chevalley's upper semi-continuity theorem, $\dim(X_y)$ is upper semi-continuous on $Y$. Since $f$ is dominant, on an open subset $U\subseteq Y$, $X_y$ is 0 dimensional for all $y\in U$. My question is, can we say $Y\setminus U$ must have codimension at least 2?
We can construct an example where the codimension is exactly two by taking the blowup $B$ of $\mathbb{P}^2$ at a point and looking at the natural map $B\rightarrow\mathbb{P}^2$. If we did not impose that $X$ is irreduicible this need not be true, for example $(xy)\subseteq \mathbb{P}^2$ mapping to $\mathbb{P}^1$ via projection has a one dimensional fiber above the origin. I cannot seem to find an irreducible example and am hoping the answer to the above is positive.
Thanks
AI: Suppose there is a codimension one locus $Z$ in $Y$ such that each fibre over a point $z \in Z$ is of positive dimension. Then $f^{-1}(Z)$ has dimension $\geq 1 + \dim Z = \dim Y = \dim X$, and thus $f^{-1}(Z)$ has the same dimension as $X$, and so must be a component of $X$. Hence, if $X$ is irreducible, no such $Z$ exists. In conclusion, the answer is positive.
H: Finding the equation of the line tangent to $y^2(y^2-4)=x^2(x^2-5)$
I am looking to find the equation of the line tangent to
$$y^2(y^2-4)=x^2(x^2-5)$$
at the point $(0,-2)$.
I have a feeling I need to implicitly differentiate here?
Am I on the right track?
What do I do after finding $y'$ to actually find the solution? Like what steps do I take to find the tangent line?
Not asking for the solution but a push in the right direction would be helpful, although a solution would be nice to look over.
AI: Yes you do need to use implicit differentiation. When finding a tangent line, you nearly always need the point-slope formula:
$$y_2 - y_1 = m(x_2 - x_1)$$
Solution to the implicit differentiation is below (to check your work).
Starting from $y^2(y^2 - 4) = x^2(x^2 -5)$, multiplying out the polynomials gets us to $y^4 - 4y^2 = x^4 - 5x^2$. Taking the derivative with respect to $x$ gets us: $4y^3y' - 8yy'=4x^3 - 10x$. Factoring to get $y'$ by itself: $y'(4y^3 - 8y) = 4x^3 - 10x$. Divide through to get $y'$ by itself: $y' = \dfrac{4x^3 - 10x}{4y^3-8y}$. You could make your life a bit easier by factoring this into $y' = \dfrac{2x(2x^2 - 5)}{4y(y^2-2)}$, and cancelling a common factor of $2$ gives $y' = \dfrac{x(2x^2 - 5)}{2y(y^2-2)}$. To find the slope, plug the given point $x = 0, y = -2$ into this expression for $y'$; note that the slope is $0$. Then use that value $m=0$ and the given point in the point-slope formula to find that the tangent line is $y=-2$.
Added:
Our expression for $y'$ is:
$$y' = \dfrac{2x(2x^2 - 5)}{4y(y^2-2)}$$
We were given a point $(0,-2)$. So, $x = 0, y = -2.$ Plugging this into the expression above yields:
$$y' = 0.$$
So, our slope of the line tangent to the curve $y^4-4y^2 = x^4-5x^2$ is zero. Now, using the point-slope formula, we have:
$$y - -2 = 0(x-0)$$
$$y+2 = 0 \implies y = -2$$
So, the tangent line is simply the horizontal line $y=-2.$ |
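If you want to double-check the computation mechanically, here is a short snippet verifying that $(0,-2)$ lies on the curve and that the implicit-derivative formula gives slope $0$ there:

```python
def on_curve(x, y):
    # does (x, y) satisfy y^2(y^2 - 4) = x^2(x^2 - 5)?
    return abs(y ** 2 * (y ** 2 - 4) - x ** 2 * (x ** 2 - 5)) < 1e-12

def slope(x, y):
    # y' from implicit differentiation: (4x^3 - 10x) / (4y^3 - 8y)
    return (4 * x ** 3 - 10 * x) / (4 * y ** 3 - 8 * y)

assert on_curve(0, -2)
assert slope(0, -2) == 0.0      # horizontal tangent, so the line is y = -2
```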
H: Issues computing a directional derivative
I am trying to do a problem which wants me to compute the directional derivatives at $(0, 0)$ of $$f(x, y) = \frac{xy}{\sqrt{x^2 + y^2}}, \quad f(0, 0) = 0.$$
There are two equations I know for computing the directional derivative, and they seem to be inconsistent for some reason.
Here are the two formulas I have for directional derivatives, $(a, b)$ a unit vector
$$D_{(a, b)}f(x, y) = \lim_{t \to 0} \frac{f(x + ta, y + tb) - f(x, y)}{t}.$$
$$D_{(a, b)}f(x, y) = aD_xf(x, y) + bD_yf(x, y).$$
where $D_x$ and $D_y$ are the partial derivatives.
Computing the partials derivatives I get
$$D_x(0, 0) = \lim_{t \to 0} \frac{f(t, 0) - f(0, 0)}{t}
= \lim_{t \to 0} \frac{\displaystyle\frac{t\cdot 0}{\sqrt{t^2 + 0^2}} - 0}{t} = 0$$
$$D_y(0, 0) = \lim_{t \to 0} \frac{f(0, t) - f(0, 0)}{t}
= \lim_{t \to 0} \frac{\displaystyle\frac{0\cdot t}{\sqrt{0^2 + t^2}} - 0}{t} = 0$$
This in turn should imply $D_{(a, b)}f(0, 0) = a\cdot 0 + b\cdot 0 = 0$.
But computing with the limit formula
$$D_{(a, b)}f(0, 0) = \lim_{t \to 0} \frac{f(ta, tb) - f(0, 0)}{t}
= \lim_{t \to 0}\frac{\displaystyle\frac{(ta)(tb)}{\sqrt{(ta)^2 + (tb)^2}} - 0}{t} = \lim_{t \to 0}\frac{t^2 ab}{t^2\sqrt{a^2 + b^2}} = \frac{ab}{\sqrt{a^2 + b^2}} = ab \neq 0$$
if $a \neq 0$ and $b \neq 0$.
I imagine I am doing something silly, or I have a formula wrong or something. But I haven't been able to correct myself yet. Can anyone tell me where my reasoning is going wrong?
AI: For $u=(a,b) \ne 0$ and $t \ne 0$ one has
$$
\frac{f(0+tu)-f(0)}{t}=\frac{f(tu)}{t}=\frac{t^2ab}{t\sqrt{t^2(a^2+b^2)}}=\frac{t}{|t|}f(u).
$$
If $ab=0$, then $D_uf(0)=0$. If $ab \ne 0$, since the limit of $t/|t|$ as $t$ tends to 0 doesn't exist, then the directional derivative does not exist. |
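Numerically, one can watch the two one-sided quotients settle on opposite values (the unit vector below is an arbitrary choice with $ab\neq0$):

```python
import math

def f(x, y):
    return x * y / math.sqrt(x ** 2 + y ** 2) if (x, y) != (0, 0) else 0.0

a, b = 0.6, 0.8                                    # unit vector with ab != 0
for t in [1e-2, 1e-4, 1e-6]:
    right = (f(t * a, t * b) - f(0, 0)) / t        # difference quotient, t -> 0+
    left = (f(-t * a, -t * b) - f(0, 0)) / (-t)    # difference quotient, t -> 0-
    assert abs(right - a * b) < 1e-9
    assert abs(left + a * b) < 1e-9                # opposite signs: no two-sided limit
```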
H: KL divergence between Bernoulli Distribution with parameter $p$ and Gaussian Distribution
I am trying to find the Kullback–Leibler divergence between Bernoulli Distribution on two points $T, -T$ with parameter $p$ and Gaussian Distribution with mean $\mu$ and variance $\sigma^2$. My attempt is as follows:
Let
$$
b(x) = q\delta(x-T)+p\delta(x+T) \sim \text{Bernoulli}(p) \\ g(x) \sim N(\mu, \sigma^2).
$$
$$
\begin{align}
D(b||g) &= \int_{-\infty}^{\infty}b(x)\log \left( \frac{b(x)}{g(x)}\right) dx \\
&=\int_{-\infty}^{\infty}b(x)\log \left( b(x) \right) dx - \int_{-\infty}^{\infty}b(x)\log \left( g(x) \right) dx \\
&=A-B
\end{align}
$$
My questions are as follows:
Can I use the continuous representation of Bernoulli RV with the help of $\delta(.)$ functions where $\delta(.)$ is Dirac Delta function?
Does $A$ exist? Because, on the set $\mathbb{R}-\{\pm T\}$, $\log(\delta(x \mp T))$ is $-\infty$.
If we cannot calculate the KLD between a continuous and a discrete random variable, what is the KLD analogue for this case? My thought was that $B$ alone can serve as a distance. For example, if we want to measure the distance of $b(x)$ from two different Gaussian distributions $g_1(x), g_2(x)$, only $B$ depends on $g_1(x)$ or $g_2(x)$, and thus can contribute to KLD.
AI: No, you cannot do this. The Kullback-Leibler divergence $D_{KL}(P\|Q)$ is defined only if $P\ll Q$. This means that no set of positive $P$-measure can have zero $Q$-measure. In your case it will not work because the point masses of the Bernoulli distribution have zero measure under the Gaussian distribution.
The integral $A$ blows up. For a continuous distribution, $A$ would be the negative of the continuous version of the Shannon entropy (the differential entropy).
I suspect that you might be looking for the mutual information between parameter space and observation space. It is a common technique to try to maximize the mutual information in such settings. The mutual information is then equal to the expected Kullback-Leibler divergence of the posterior distribution on a parameter space (given the observations) from the prior distribution. Here, the requirement that $P\ll Q$ simply means that one is not allowed to make any conclusions that are a priori impossible! |
H: Is there a domain without unity in which every element is a product?
In this answer, Fortuon Paendrag provides an example of a ring without unity such that every element is a product of some two elements. The example has zero divisors. Can a ring without a unity and without non-zero zero divisors satisfy this condition if it's
(a) commutative,
(b) non-commutative?
Added: A related question.
AI: Here is a useful commutative example that one actually meets in the wild. Let $M$ be a non-standard model of analysis, and let our ring $I$ be the collection of infinitesimals in $M$, together with $0$.
One can make the example sound more explicit by constructing $M$ via the ultrapower.
One can also construct many function ring examples. One type of example is the unitless ring of all finite sums $\sum a_i x^{e_i}$, where the $a_i$ range over the reals, and the $e_i$ range over the positive reals. Or else we can restrict the $e_i$ to positive rationals, or positive dyadic rationals.
H: Explicitly write down $g\in GL(n,\mathbb{C})$ so that $gAg^{-1}$ is upper triangular, where $A\in M_n(\mathbb{C})$
This is an elementary question which is do-able by hand but I am actually looking for suggestions or book references since I am sure that someone did this somewhere:
suppose
$$
A = \left( \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
\end{array}
\right) \in M_2(\mathbb{C}).
$$
Find $g=(g_{ij})\in GL(2,\mathbb{C})$ so that $gAg^{-1}$ is upper triangular.
One method is to explicitly write down $gAg^{-1}$ and set the function in 2nd row, 1st column equal to zero (which is $-a_{12} g_{21}^2 + g_{22} (a_{11} g_{21} - a_{22} g_{21} + a_{21} g_{22}) = 0 $) and attempt to find $g$ this way, while a second method is to find the eigenvalues of $A$ (the two eigenvalues may or may not be distinct) and find their eigenvectors.
Wiki recommends Linear Algebra Done Right by Sheldon Axler and I think Sheldon proves that any $A\in M_n(\mathbb{C})$ can be put into an upper triangular form using induction.
Either of the methods that I mentioned above seems to be quite messy if I want to explicitly write down such $g$ for any $A\in M_2(\mathbb{C})$, or even for any $A\in M_n(\mathbb{C})$.
Do you have any recommended approach or references because I would like to explicitly write down $g\in GL(n,\mathbb{C})$ so that $gAg^{-1}$ is upper triangular, where $A\in M_n(\mathbb{C})$.
AI: An algorithm for finding $g$, with the added condition of taking $g$ to be unitary, is given in Hogben's Handbook of linear algebra. |
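For the $2\times 2$ case, the eigenvector method in the question can be written out completely explicitly: find one eigenvalue, an eigenvector for it, and complete to a basis. Here is a self-contained Python sketch (the function names are mine, and no care is taken over numerical conditioning):

```python
import cmath

def triangularize(A):
    """Return g (2x2 nested tuples) with g A g^{-1} upper triangular."""
    (a, b), (c, d) = A
    # one eigenvalue, from the characteristic polynomial x^2 - (a+d)x + (ad - bc)
    lam = ((a + d) + cmath.sqrt((a + d) ** 2 - 4 * (a * d - b * c))) / 2
    # an eigenvector v for lam (both rows of A - lam*I annihilate it)
    v = (b, lam - a)
    if v == (0, 0):
        v = (lam - d, c)
    if v == (0, 0):                  # A is a scalar matrix; anything works
        v = (1, 0)
    # extend v to a basis by a standard basis vector; P has columns v, w
    w = (1, 0) if v[1] != 0 else (0, 1)
    det = v[0] * w[1] - v[1] * w[0]
    # g = P^{-1}; then (g A g^{-1}) e_1 = lam e_1, so the result is upper triangular
    return ((w[1] / det, -w[0] / det), (-v[1] / det, v[0] / det))

def matmul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

A = ((1, 2), (3, 4))
g = triangularize(A)
dg = g[0][0] * g[1][1] - g[0][1] * g[1][0]
g_inv = ((g[1][1] / dg, -g[0][1] / dg), (-g[1][0] / dg, g[0][0] / dg))
T = matmul(matmul(g, A), g_inv)
assert abs(T[1][0]) < 1e-12          # lower-left entry vanishes
```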
H: Some digit summation problems
What is the sum of the digits of all numbers from 1 to 1000000?
In general, what is the sum of all digits between 1 and N?
f(n) is a function counting all the ones that show up in 1, 2, 3, ..., n. so f(1)=1, f(10)=2, f(11)=4 etc. When is the first time
f(n)=n.
So for the first question, I tried thinking about it such that between 000000 and 999999, each digit will appear the same number of times, so if I find out how many times one digit appears I can just apply that to the other 9 digits (then add 1 for the last number 1000000):
(the number of times 1 digit appears)*(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9) = ...
1 Appears once between 1 and 9
1 appears 11 times between 1 and 99
1 appears 11 * 10 + 10 = 120 times between 1 and 999
...I'm not sure how to find the pattern
But firstly I'm not so sure of my approach, secondly I'm not sure about how to find how many times one particular number appears, and third if this method worked it doesn't seem very good for solving the second part of the question.
Lastly, I had a similar question to the first 2 (question 3) so I just grouped it with those. I hope they are related, and if not I can make a seperate question for that one.
Thanks.
AI: For question three, we can simply compute. Besides n=1, the next solution is 199981, as generated by this PARI/GP code:
Define $r(n)$ to be the number of ones in the digits of $n$:
r(n) = nn=n;cc=0;while(nn>0,if((nn%10)==1,cc=cc+1);nn=floor(nn/10));return(cc)
Then run a loop and output $i$ if the sum of $r(i)$ from 1 up to $i$ is equal to $i$:
yy=0;for(i=1,199981,yy=yy+r(i);if(yy==i,print(i)))
The output is this:
1
199981
This shows $f(1)=1$ and $f(199981)=199981$ and there are no other solutions less than 199981.
Similarly for question one, we define $g(n)$ to be the sum of the digits of $n$:
g(n) = nn=n; cc=0; while(nn>0,cc=cc+(nn % 10);nn=floor(nn/10));return(cc)
then calculate:
sum(i=1,1000000,g(i))
which yields the sum 27000001. |
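The same computations are easy to reproduce in Python (this is just a re-implementation of the PARI/GP loops above):

```python
def digit_sum(n):
    return sum(int(ch) for ch in str(n))

def count_ones(n):
    return str(n).count('1')

# Question 1: the sum of the digits of all numbers from 1 to 1000000
assert sum(digit_sum(i) for i in range(1, 1000001)) == 27000001

# Question 3: f(n) = number of 1s appearing in 1, 2, ..., n
f = 0
hits = []
for i in range(1, 199982):           # same range as the PARI loop above
    f += count_ones(i)
    if f == i:
        hits.append(i)
assert hits == [1, 199981]           # f(1) = 1, f(199981) = 199981, nothing in between
```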
H: Show $x+\epsilon g(x)$ is 1-1 if $g'$ is bounded and $\epsilon$ is small enough.
Problem: Suppose $g$ is a real function on $\mathbb{R}$ with bounded derivative (say $|g'|<M$). Fix $\epsilon>0$, and define $f(x)=x+\epsilon g(x)$. Prove that $f$ is one-to-one if $\epsilon$ is small enough.
(A set of admissible values of $\epsilon$ can be determined which depends only on $M$.)
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 5, exercise 3.
AI: Suppose not. Then for every $\epsilon>0$, there exists $a,b\in\mathbb{R}$, $a\neq b$, with $f(a)=f(b)$. Using the mean value theorem, we see there exists $x\in(a,b)$ with $f'(x)=0$. Note that $f$ is differentiable, as it is the sum of two differentiable functions. For such an $x$, we have
$$f'(x)=1+\epsilon g'(x)=0 \Rightarrow g'(x)=-\frac{1}{\epsilon}.$$
Taking $\epsilon$ small enough, we can force $|g'(x)|=1/\epsilon$ to be arbitrarily large. This is a contradiction, as $g'$ is bounded. (Reading the argument contrapositively, any $\epsilon < 1/M$ is admissible: then $f'(x)=1+\epsilon g'(x) \ge 1-\epsilon M > 0$, so $f$ is strictly increasing and hence one-to-one.)
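To illustrate the admissible range $\epsilon<1/M$ concretely (the choice of $g$ below is mine, not part of the problem): take $g=\sin$, so $|g'|\le 1$, and $\epsilon=1/2$; then $f(x)=x+\tfrac12\sin x$ is strictly increasing, hence one-to-one.

```python
import math

eps, M = 0.5, 1.0                   # |g'| = |cos| <= 1 = M, and eps < 1/M
f = lambda x: x + eps * math.sin(x)

xs = [i * 0.01 for i in range(-1000, 1001)]
vals = [f(x) for x in xs]
# f'(x) = 1 + eps*cos(x) >= 1 - eps*M = 1/2 > 0, so f is strictly increasing
assert all(u < v for u, v in zip(vals, vals[1:]))
```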
H: Derivative of $f(x)= (\sin x)^{\ln x}$
I am just wondering if i went ahead to solve this correctly?
I am trying to find the derivative of $f(x)= (\sin x)^{\ln x}$
Here is what i got below.
$$f(x)= (\sin x)^{\ln x}$$
$$f'(x)=\ln x(\sin x) \Rightarrow f'(x)=\frac{1}{x}\cdot\sin x + \ln x \cdot \cos x$$
Would that be the correct solution?
AI: It's instructive to look at this particular logarithmic-differentiation situation generally:
$$\begin{align}
y&=u^{v}\\[0.5em]
\implies \qquad \ln y &= v \ln u & \text{take logarithm of both sides}\\[0.5em]
\implies \qquad \frac{y^{\prime}}{y} &= v \cdot \frac{u^{\prime}}{u}+v^{\prime}\ln u & \text{differentiate}\\
\implies \qquad y^{\prime} &= u^{v} \left( v \frac{u^{\prime}}{u} + v' \ln u \right) & \text{multiply through by $y$, which is $u^{v}$} \\
&= v \; u^{v-1} u^{\prime} + u^{v} \ln u \; v^{\prime} & \text{expand}
\end{align}$$
Some (most?) people don't bother with the "expand" step, because right before that point the exercise is over anyway and they just want to move on. (Plus, generally, we like to see things factored.) Even so, look closely at the parts you get when you do bother:
$$\begin{align}
v \; u^{v-1} \; u^{\prime} &\qquad \text{is the result you'd expect from the Power Rule if $v$ were constant.} \\[0.5em]
u^{v} \ln u \; v^{\prime} &\qquad \text{is the result you'd expect from the Exponential Rule if $u$ were constant.}
\end{align}$$
So, there's actually a new Rule here: the Function-to-a-Function Rule is the "sum" of the Power Rule and Exponential Rule!
Knowing FtaF means you can skip the logarithmic differentiation steps. For example, your example:
$$\begin{align}
\left( \left(\sin x\right)^{\ln x} \right)^{\prime} &= \underbrace{\ln x \; \left( \sin x \right)^{\ln x - 1} \cos x}_{\text{Power Rule}} + \underbrace{\left(\sin x\right)^{\ln x} \; \ln \sin x \; \frac{1}{x}}_{\text{Exponential Rule}}
\end{align}$$
As I say, we generally like things factored, so you might want to manipulate the answer thusly,
$$
\left( \left(\sin x\right)^{\ln x} \right)^{\prime} = \left( \sin x \right)^{\ln x} \left( \frac{\ln x \cos x}{\sin x} + \frac{\ln \sin x}{x} \right) = \left( \sin x \right)^{\ln x} \left( \ln x \cot x + \frac{\ln \sin x}{x} \right)
$$
Another example:
$$\begin{align}
\left( \left(\tan x\right)^{\exp x} \right)^{\prime} &= \underbrace{ \exp x \; \left( \tan x \right)^{\exp x-1} \; \sec^2 x}_{\text{Power Rule}} + \underbrace{ \left(\tan x\right)^{\exp x} \ln \tan x \; \exp x}_{\text{Exponential Rule}} \\
&= \exp x \; \left( \tan x \right)^{\exp x} \left( \frac{\sec^2 x}{\tan x} + \ln \tan x \right) \\
&= \exp x \; \left( \tan x \right)^{\exp x} \left( \sec x \; \csc x + \ln \tan x \right)
\end{align}$$
Note. Be careful invoking FtaF in a class --especially on a test-- where the instructor expects (demands) that you go through the log-diff steps every time. (Of course, learning and practicing those steps is worthwhile, because they apply to situations beyond FtaF.) On the other hand, if you explain FtaF to the class, you could be a hero for saving everyone a lot of effort with function-to-a-function derivatives. |
H: Explanation for why $1\neq 0$ is explicitly mentioned in Chapter 1 of Spivak's Calculus for properties of numbers.
During the first few pages of Spivak's Calculus (Third edition) in chapter 1 it mentions six properties about numbers.
(P1) If $a,b,c$ are any numbers, then $a+(b+c)=(a+b)+c$
(P2) If $a$ is any number then $a+0=0+a=a$
(P3) For every number $a$, there is a number $-a$ such that $a+(-a)=(-a)+a=0$
(P4) If $a$ and $b$ are any numbers, then $a+b=b+a$
(P5) If $a,b$ and $c$ are any numbers, then $a\cdot(b\cdot c)=(a\cdot b)\cdot c$
(P6) If $a$ is any number, then $a\cdot 1=1\cdot a=a$
Then it further states that $1\neq 0$. In the book it says that it was an important fact to list because there is no way that it could be proven on the basis of the $6$ properties listed above - these properties would all hold if there were only one number, namely $0$.
Questions:
1) How does one rigorously prove that $1\neq0$ cannot be proven from the $6$ properties listed?
2) It says that "these properties would all hold if there were only one number, namely $0$." Is a reason as to why this is explicitly mentioned is to avoid this trivial case where we only have the number $0$? Is there another deeper reason as to why this sentence was mentioned in relation to $1\neq 0$?
NB: Can someone please check if the tags are appropriate and edit if necessary? Thanks.
AI: To show that $1\neq 0$ cannot be proven from the other six properties, consider the set that contains only one element, $\{\bullet\}$. Define $+$ by $\bullet+\bullet = \bullet$ and $\cdot$ by $\bullet\cdot\bullet = \bullet$. Then letting $0=\bullet$, $-\bullet =\bullet$, and $1=\bullet$, all six axioms are satisfied, but $1=0$. Thus, $1\neq 0$ cannot be proven from the first six axioms, since you have a model in which the first six axioms are true, but $1\neq 0$ is not.
Yes: the reason we need to specify it is so that we don't just have the one-element "field". Basically, the condition that $1\neq 0$ is formally undecidable from the first six properties, so it needs to be specified.
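The one-element model can even be checked mechanically; the snippet below simply verifies P1 through P6 for the structure $\{\bullet\}$ (encoded as $\{0\}$), in which $1=0$:

```python
# the one-element model: 0 + 0 = 0, 0 * 0 = 0, -0 = 0, and "1" is 0
elems = [0]
add = lambda a, b: 0
mul = lambda a, b: 0
neg = lambda a: 0
zero = one = 0

for a in elems:
    assert add(a, zero) == a == add(zero, a)                # P2
    assert add(a, neg(a)) == zero == add(neg(a), a)         # P3
    assert mul(a, one) == a == mul(one, a)                  # P6
    for b in elems:
        assert add(a, b) == add(b, a)                       # P4
        for c in elems:
            assert add(a, add(b, c)) == add(add(a, b), c)   # P1
            assert mul(a, mul(b, c)) == mul(mul(a, b), c)   # P5

assert one == zero      # ... and yet 1 = 0 in this model
```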
H: Show $f$ is constant if $|f(x)-f(y)|\leq (x-y)^2$.
Problem: Let $f$ be defined for all real $x$, and suppose that
$$|f(x)-f(y)|\le (x-y)^2$$
for all real $x$ and $y$. Prove $f$ is constant.
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 5, exercise 1.
AI: Here's a more elementary proof.
Let $c=f(0)$, we have to prove that $f(x)=c$ whenever $x\neq0$. Supposing that $n$ is an arbitrary positive integer, we have
$$\left|f\left(\frac{m+1}nx\right)-f\left(\frac mnx\right)\right|\le\left(\frac{m+1}nx-\frac mnx\right)^2=\frac{x^2}{n^2}$$
Hence
\begin{align*}
|f(x)-f(0)|
\;&=\;\left|\,\sum_{m=0}^{n-1}\left(f\left(\frac{m+1}nx\right)-f\left(\frac mnx\right)\right)\,\right|\\
&\le\;\sum_{m=0}^{n-1}\,\left|f\left(\frac{m+1}nx\right)-f\left(\frac mnx\right)\right|\\
&\le\;\frac{x^2}n
\end{align*}
Let $n\to\infty$, we have $|f(x)-f(0)|=0$, thus $f(x)=c$. |
H: Prove existence of a real root.
Problem: If
$$C_0+\frac{C_1}{2}+\cdots + \frac{C_{n-1}}{n}+\frac{C_n}{n+1} =0,$$
where $C_0,...,C_n$ are real constants, prove that the equation
$$C_0+C_1x+\cdots +C_{n-1}x^{n-1}+C_nx^n=0$$
has at least one real root between $0$ and $1$.
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 5, exercise 4.
AI: Note that
$$g(x)=C_0x+\frac{C_1}{2}x^2+\cdots + \frac{C_{n-1}}{n}x^n+\frac{C_n}{n+1}x^{n+1} $$
is an antiderivative of $f(x)=C_0+C_1x+\cdots+C_nx^n$. Note further that $g(0)=0$ and $g(1)=0$ by hypothesis. By Rolle's theorem there exists $t\in(0,1)$ with $g'(t)=0$, that is, $f(t)=0$.
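A concrete instance (the coefficients are an arbitrary choice satisfying the hypothesis): with $C_0=1$, $C_1=0$, $C_2=-3$ we have $C_0+\frac{C_1}{2}+\frac{C_2}{3}=0$, and indeed $1-3x^2$ has the root $1/\sqrt3\in(0,1)$, which bisection locates:

```python
def bisect_root(f, lo, hi, iters=60):
    # simple bisection; assumes a sign change on [lo, hi]
    assert f(lo) * f(hi) < 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# C0 = 1, C1 = 0, C2 = -3 satisfies C0 + C1/2 + C2/3 = 0
f = lambda x: 1 - 3 * x ** 2
r = bisect_root(f, 0.0, 1.0)
assert 0 < r < 1 and abs(f(r)) < 1e-9
assert abs(r - 3 ** -0.5) < 1e-9     # the root is 1/sqrt(3)
```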
H: Derivative goes to $0$, then function goes to $0$.
Problem: Suppose $f$ is defined and differentiable for $x>0$, and $f'(x)\rightarrow 0$ as $x\rightarrow +\infty$. Put $g(x)=f(x+1)-f(x)$. Prove that $g(x)\rightarrow 0$ as $x\rightarrow +\infty$.
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 5, exercise 5.
AI: From the mean value theorem, we know that
$$f(x+1)-f(x)=g(x)=f'(t)$$
for some $t\in (x, x+1)$. Then, because $f'\rightarrow 0$, so does $g$. |
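For instance (an arbitrary choice of $f$ satisfying the hypotheses): with $f=\log$ we get $g(x)=\log(1+1/x)$, and the bounds coming from the mean value theorem are visible numerically:

```python
import math

f = lambda x: math.log(x)           # f'(x) = 1/x -> 0 as x -> +infinity
g = lambda x: f(x + 1) - f(x)

for x in [1e2, 1e4, 1e5]:
    # by the MVT, g(x) = f'(t) = 1/t for some t in (x, x+1)
    assert 1 / (x + 1) < g(x) < 1 / x
assert g(1e5) < 2e-5                # g(x) -> 0
```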
H: How to prove that the space $ \omega_1\times R $ has countable extent?
How to prove that the space $ \omega_1\times R $ has countable extent? Here the topological space $\omega_1$ is the first uncountable ordinal with the order topology.
A space $X$ has countable extent if every uncountable subset of $X$ has a limit point in $X$.
Thanks for any help:)
AI: By $R$ I assume you are denoting the real line.
Let $A \subseteq \omega_1 \times R$ be uncountable. If there is an $\alpha < \omega_1$ such that $A_\alpha = \{ x \in R : (\alpha , x ) \in A \}$ is uncountable, then $A_\alpha$ has a limit point $x$ (as $R$ has countable extent), and it is easy to show that $(\alpha , x )$ is a limit point of $A$.
So assume that $A_\alpha$ is countable for each $\alpha < \omega_1$. We may then recursively construct a sequence $\langle (\alpha_i , x_i ) \rangle_{i \in \omega}$ in $A$ such that:
$\alpha _i < \alpha_{i+1}$ for all $i \in \omega$; and
$\langle x_i \rangle_{i \in \omega}$ is a convergent sequence in $R$.
Let $\alpha = \sup_{i \in \omega} \alpha_i < \omega_1$ (and note that $\alpha$ is a limit ordinal). Let $x = \lim_{i \in \omega} x_i$. It is easy to show that $( \alpha , x )$ is a limit point of $A$ in $\omega_1 \times R$. |
H: How to prove an integer is (not) a power of some other integer?
I assume that is nigh-impossible to prove when the conditions on the integers are very general. However, my algebra professor told me that the following is true:
If $n$ is a composite positive integer, $(n - 1)! + 1$ is not a power of $n$.
I assume this is an easy number theory problem, but I don't know how to approach it. The form of $(n-1)!+1$ makes me think of somehow splitting the term into its primes and using Wilson's, however improbable it is to do so. And for a proof by contradiction, finding parameters on $x$ such that $n^x=(n-1)!+1$ gets me nowhere, since it would just translate to a discrete log problem.
I appreciate any hints or input!
AI: HINT Since $n$ is composite, consider a prime dividing $n$. Will it divide $(n-1)! + 1$? (Remember that the prime dividing $n$ will occur in $(n-1)!$ and hence will divide $(n-1)!$). |
H: How to match a discrete distribution to a continuous distribution in information theoretic sense?
Let
$$
S \sim N(\mu, \sigma^2)
$$
be a normally distributed random variable with known $\mu$ and $\sigma^2$. Suppose, we observe
$$
X = \begin{cases} T & \text{if $S \ge 0$}, \\ -T & \text{if $S<0$},\end{cases}
$$
where $T \in \mathbb{R}$. The probability distribution of $X$ is given by:
$$
p(x) = Q\left(\frac{-\mu}{\sigma}\right)\delta(x-T)+Q\left(\frac{\mu}{\sigma}\right)\delta(x+T)
$$
I want to optimize the value of $T$ such that $X$ conveys as much information about $S$ as possible.
My Attempt:
a. I tried minimizing the Kullback–Leibler divergence between the distribution of $X$ and $S$, but as mentioned here, it is not possible.
b. I tried to calculate the mutual information between the two distributions; it turned out to be independent of $T$.
Is there any other way of formulating this problem? I feel quite confident that there must be such $T$ for which $X$ explains $S$ better, e.g., assume $\mu=10000$ then a value of $T$ near $10000$ will better explain $S$ than say $T=2$? One method in my mind was to match the moments of the two distributions but I am not sure if it is the optimal way in the sense of maximizing the information?
AI: $P(S|X)$ is the same for any value of $T$. Hence, $X$ conveys the same information about $S$, no matter the value of $T$. Whatever $T$ you choose, $X$ only tells you whether $S \geq 0$ or not.
A way for $X$ to be more informative about $S$ is to define it as:
$$ X = \left\{ \begin{array}{ll}
1 & \mbox{if $S \geq \mu$};\\
-1 & \mbox{if $S < \mu$}.\end{array} \right. $$
From your questions, you might be interested in Rate-distortion theory. |
H: What is the meaning of "countable spread"?
I encountered an example that said:
A Tychonoff 2-starcompact space of countable spread which is not $1\frac{1}{2}$-starcompact.
My question is this: What's the meaning of "countable spread" ?
AI: The question was answered in comments. Since we don't like leaving questions unanswered, I'll copy here the definitions from Wikipedia.
The cellularity of a space $X$ is
$${\rm c}(X)=\sup\{|{\mathcal U}|:{\mathcal U}\text{ is a family of mutually disjoint non-empty open subsets of }X \}+\aleph_0.$$
The hereditary cellularity (sometimes spread) is the least upper bound of cellularities of its subsets:
$$s(X)={\rm hc}(X)=\sup\{ {\rm c} (Y) : Y\subseteq X \}$$
or
$$s(X)=\sup\{|Y|:Y\subseteq X \text{ with the subspace topology is discrete}\}+\aleph_0.$$ |
H: If $f(0)=0$ and $f'$ is increasing, then $\frac{f(x)}{x}$ is increasing.
Problem: Suppose $f$ is continuous for $x\ge 0$, differentiable for $x>0$, $f(0)=0$, and $f'$ is monotonically increasing.
Define $g(x)=\frac{f(x)}{x}$ for $x>0$. Prove that $g$ is monotonically increasing.
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 5, exercise 6.
AI: $${f(x)\over x}=\int_0^1 f'(t\, x)\ dt\qquad(x>0)\ .$$ |
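Unpacking this a bit: the right-hand side is increasing in $x$ because $f'$ is increasing (for fixed $t\in(0,1)$, $x\mapsto f'(tx)$ is nondecreasing). A numerical check with the arbitrary choice $f(x)=e^x-1$, which satisfies all the hypotheses:

```python
import math

f = lambda x: math.exp(x) - 1       # f(0) = 0, and f'(x) = e^x is increasing
fp = lambda x: math.exp(x)

def integral_form(x, n=100000):
    # midpoint rule for the integral of f'(t*x) over t in [0, 1]
    return sum(fp((i + 0.5) / n * x) for i in range(n)) / n

# the identity f(x)/x = integral of f'(t x) dt
for x in [0.5, 1.0, 3.0]:
    assert abs(f(x) / x - integral_form(x)) < 1e-6

# and g(x) = f(x)/x is indeed monotonically increasing
g = lambda x: f(x) / x
xs = [0.1 * k for k in range(1, 50)]
assert all(g(u) < g(v) for u, v in zip(xs, xs[1:]))
```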
H: reference in Montgomery/ Zippin
In a paper, the authors use the reference [M–Z, Theorem in 4.13] where [M-Z] denote the book
D. Montgomery and L. Zippin, Topological Transformation Groups. Interscience
Publishers, New York–London, 1955.
Unfortunately, I have no access to the edition of 1955 of this book. I would be very
grateful if you would help me to know the theorem.
AI: The part 4.13 has 3 pages, here are the scans:
188, 189, 190. |
H: How to find the eccentricity of this conic?
How to find the eccentricity of this conic?
$$4(2y-x-3)^2-9(2x+y-1)^2=80$$
My approach :
I rearranged the terms and, by comparing with the general equation of the second degree, I found that it's a hyperbola. Since this hyperbola is not in the standard form $x^2/a^2-y^2/b^2= 1$, I don't know how to find its eccentricity.
Please guide me.
AI: First make the following change of coordinates:
$$
u=\frac{x-2y}{\sqrt{5}}, \ v=\frac{2x+y}{\sqrt{5}}.
$$
These are the coordinates with respect to the basis $e_1'=\frac{(1,-2)}{\sqrt{5}}, e_2'=\frac{(2,1)}{\sqrt{5}}$, which is clearly orthonormal (note $\|(1,-2)\|=\|(2,1)\|=\sqrt{5}$). Since $x-2y=\sqrt{5}\,u$ and $2x+y=\sqrt{5}\,v$, the equation now reads:
$$
4(-\sqrt{5}\,u-3)^2-9(\sqrt{5}\,v-1)^2=80,
$$
i.e.
$$
\frac{(u+3/\sqrt{5})^2}{a^2}-\frac{(v-1/\sqrt{5})^2}{b^2}=1
$$
with
$$
a^2=\frac{80}{20}=4>b^2=\frac{80}{45}=\frac{16}{9}.
$$
So, the eccentricity is
$$
e=\sqrt{1+b^2/a^2}=\sqrt{1+4/9}=\sqrt{13}/3.
$$
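As a cross-check, the same eccentricity can be read off from the eigenvalues of the quadratic part of the original equation, without carrying out the change of coordinates by hand:

```python
import math

# quadratic part of 4(2y - x - 3)^2 - 9(2x + y - 1)^2:
#   4(2y - x)^2 - 9(2x + y)^2 = -32x^2 - 52xy + 7y^2,
# i.e. the symmetric matrix [[-32, -26], [-26, 7]]
tr = -32 + 7
det = (-32) * 7 - (-26) * (-26)
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert (lam1, lam2) == (20.0, -45.0)

# in orthonormal coordinates the conic becomes 20 u^2 - 45 v^2 = 80 (after
# translating away the linear terms), so a^2 = 80/20, b^2 = 80/45
a2, b2 = 80 / lam1, 80 / (-lam2)
e = math.sqrt(1 + b2 / a2)
assert abs(e - math.sqrt(13) / 3) < 1e-12
```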
H: Multivariable Calculus - Gradient And Laplacian
I'm currently reading a couple of papers, which uses the following identity, which I can't figure out how to prove or see:
$$\int F(x,t)\, \Delta_x F(x,t)\, dx = -\int \left| \nabla _x F(x,t) \right| ^2 dx $$
Can someone help me figure out why this equality is true?
Thanks!
AI: The divergence theorem is lurking here somewhere, since $\mbox{div} (F \nabla F) = F\Delta F + |\nabla F|^2$. You just need some means to conclude $\int \mbox{div} (F \nabla F) = 0$ (for instance, $F(\cdot,t)$ having compact support or decaying suitably at infinity), which makes the identity equivalent to the statement in your posting.
H: Existence of Consecutive Quadratic residues
For any prime $p\gt 5$, prove that there are consecutive quadratic residues of $p$ and consecutive non-residues as well (excluding $0$). I know that there are equal numbers of quadratic residues and non-residues (if we exclude $0$), so if there are two consecutive quadratic residues, then certainly there are two consecutive non-residues; therefore, effectively I am seeking a proof only for the existence of consecutive quadratic residues. Thanks in advance.
AI: Since 1 and 4 are both residues (for any $p\ge 5$), then to avoid having consecutive residues (with 1 and 4), we would have to have both 2 and 3 as non-residues, and then we have 2 consecutive non-residues.
Thus, we must have either 2 consecutive residues or 2 consecutive nonresidues.
i.e.: 1 and 4 are both residues, so we have R * * R for the quadratic character of 1, 2, 3 and 4. However we fill in the two blanks with Rs or Ns, we will get either 2 consecutive Rs or 2 consecutive Ns.
Edited:
To show that we must actually get both RR and NN for $p\gt 5$, we consider 2 cases:
$p\equiv -1 \pmod 4$: then the second half of the list of $p-1$ Ns and Rs is the inverse of the first half (Ns become Rs and the order is reversed), so that if we have NN or RR in the first half (using the argument above) then we get the other pattern in the second half.
$p\equiv 1 \pmod 4$: then the second half of the list is the reverse of the first half. Then if there is no RR amongst the first 4, then there must be an appearance of NN, i.e. sequence begins RNNR..., and if we fill in the dots (this is where we need $p>5$ - to ensure there ARE some dots!) with Ns and Rs trying to avoid an appearance of RR, then we have to alternate ...NRNR...NR. However the sequence then ends with R, and the second half begins with R, so we eventually get RR.
(The comments about the second half of the list in the 2 cases are easy consequences of -1 being a residue or a nonresidue of p). |
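A brute-force check of both claims for a few primes, and of the failure at $p=5$:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in [7, 11, 13, 17, 19, 23, 29, 101, 103]:
    assert is_prime(p)
    residues = {k * k % p for k in range(1, p)}
    s = ''.join('R' if a in residues else 'N' for a in range(1, p))
    assert 'RR' in s and 'NN' in s       # both patterns occur for every p > 5

# p = 5 shows why the hypothesis p > 5 is needed: the pattern is RNNR, no 'RR'
s5 = ''.join('R' if a in {k * k % 5 for k in range(1, 5)} else 'N'
             for a in range(1, 5))
assert s5 == 'RNNR'
```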
H: $|2^x-3^y|=1$ has only three natural pairs as solutions
Consider the equation $$|2^x-3^y|=1$$ in the unknowns $x \in \mathbb{N}$ and $y \in \mathbb{N}$. Is it possible to prove that the only solutions are $(1,1)$, $(2,1)$ and $(3,2)$?
AI: Yes. Levi ben Gerson (1288-1344), also known as Gersonides, proved this. The Gersonides proof can be found here.
EDIT: miracle notes that the link is broken. This link works (for now). But to keep everyone happy, I'll paste in the relevant parts of what's there:
In 1342, Levi ben Gerson, also known as Gersonides (1288-1344), proved that the original four pairs of harmonic numbers are the only ones that differ by 1. Here's how his proof went (using modern notation). [note --- "harmonic numbers" are those that can be written as $2^n3^m$]
If two harmonic numbers differ by 1, one must be odd and the other even. The only odd numbers are powers of 3. Hence one of the two harmonic numbers must be a power of 3 and the other a power of 2. The task then involves solving two equations:
$2^n = 3^m + 1$ and $2^n = 3^m - 1$.
Gersonides had the idea of looking at remainders after division of powers of 3 by 8 and powers of 2 by 8. For example, 27 divided by 8 gives a remainder of 3. For powers of 3, the remainders all turn out to be 1 or 3, depending on whether the power is even or odd. The remainders for powers of 2 are 1, 2, 4, then 0 for all powers higher than 2.
For $2^n = 3^m + 1$, when $m$ is odd, $3^m$ has remainder 3, and $2^n = 3^m + 1$ must then have the remainder 4, so $n = 2$ and $m = 1$. That gives the consecutive harmonic numbers 3, 4. When $m$ is even, the equation gives the consecutive harmonic numbers 1, 2.
For $2^n = 3^m - 1$, when $m$ is odd, $3^m$ has remainder 3, so $2^n = 3^m - 1$ has remainder 2; as a result, $n = 1$ and $m = 1$, to give the consecutive harmonic numbers 2, 3. The final case, when $m$ is even, is a little trickier and requires substituting $2k$ for $m$, then solving the equation $2^n = 3^{2k} - 1 = (3^k - 1)(3^k + 1)$. That gives the consecutive harmonic numbers 8, 9. QED. |
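A brute-force search over a small range (my own sanity check; the bounds 40 and 25 are arbitrary) turns up exactly the three solutions claimed:

```python
# All (x, y) with |2^x - 3^y| = 1 in a small search window
solutions = [(x, y) for x in range(1, 40) for y in range(1, 25)
             if abs(2 ** x - 3 ** y) == 1]
print(solutions)  # [(1, 1), (2, 1), (3, 2)]
```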
H: combinatoric problem related to drug
I want to choose the optimal decision in the following problem.
Imagine having been bitten by an exotic, poisonous snake. Suppose the ER
physician estimates that the probability you will die is $1/3$ unless you receive
effective treatment immediately. At the moment, she can offer you a choice of
experimental antivenins from two competing ‘‘snake farms.’’ Antivenin $X$ has
been administered to ten previous victims of the same type of snake bite and
nine of them survived. Antivenin $Y$, on the other hand, has only been
administered to four previous patients, but all of them survived. Unfortunately,
mixing the two drugs in your body would create a toxic substance much
deadlier than the venom from the snake. Under these circumstances, which
antivenin would you choose, and why?
So first of all I have concluded that, for substance $X$, I would have a $90\%$ chance to survive, and for $Y$, I would have $100\%$, so maybe that should be an indicator for me to choose $Y$. On the other hand, if we consider it as a combinatoric problem, then we have $p=2/3$ if we don't die when we get medical treatment, and $q=1/3$ if we do, so for substance $X$ we would have
$(10!/9!)*((p)^{10})*q=0.05202459$
while for substance $Y$,it would be
$4!/4!*(2/3)^4*(1/3)^0=16/84=0.19047619$
so it means that I have a better chance with $Y$; does that mean I should choose $Y$?
AI: This question cannot be answered without knowledge of your prior assessment of the effectiveness of the antivenins. If X was produced by a company that usually produces effective drugs and Y was produced by a quack, the result will be different than if it was the other way around. If you don't have any prior information on the likely effectiveness of the antivenins, you'll need to make some assumption that will be to some degree arbitrary. For instance, since you'd survive with probability $2/3$ without the antivenin, you could assume a uniform distribution between $2/3$ and $1$ for the chance of surviving after taking the antivenin, for each of the two antivenins (assuming you're confident that they're not harmful). Then with $j$ trials successful and $k$ trials unsuccessful for antivenin $i$, the posterior distribution for the survival probability $p_i$ of antivenin $i$ would be
$$\frac{p_i^j(1-p_i)^k}{\int_{2/3}^1p^j(1-p)^k\mathrm dp}\;,$$
and your survival probability if you take antivenin $i$ would be
$$\int_{2/3}^1\frac{p_i^j(1-p_i)^k}{\int_{2/3}^1p^j(1-p)^k\mathrm dp}p_i\mathrm dp_i\;,$$
which comes out as $502769/589806\approx0.852$ for X and $3325/3798\approx0.875$ for Y. |
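Since the integrands are polynomials, both posterior survival probabilities can be computed exactly with rational arithmetic; the sketch below (my own verification of the quoted fractions) expands $(1-p)^k$ by the binomial theorem:

```python
from fractions import Fraction
from math import comb

def beta_integral(j, k, lo=Fraction(2, 3), hi=Fraction(1)):
    # Exact integral of p^j (1-p)^k over [lo, hi], via binomial expansion
    total = Fraction(0)
    for i in range(k + 1):
        n = j + i
        total += comb(k, i) * (-1) ** i * (hi ** (n + 1) - lo ** (n + 1)) / (n + 1)
    return total

def survival(j, k):
    # Posterior-mean survival probability after j successes and k failures
    return beta_integral(j + 1, k) / beta_integral(j, k)

print(survival(9, 1))  # 502769/589806, about 0.852 (antivenin X)
print(survival(4, 0))  # 3325/3798, about 0.875 (antivenin Y)
```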
H: Example of non-decomposable ideal
An ideal $I$ of a commutative unital ring $R$ is called decomposable if it has a primary decomposition.
Can you give an example of an ideal that is not decomposable?
All the examples I can think of are decomposable. Thanks.
AI: The zero ideal in $C[0,1]$ is not decomposable.
More generally,
If $X$ is an infinite compact Hausdorff space then the zero ideal of $C(X)$, the ring of real valued continuous functions on $X$, is not decomposable.
Proof : First let us note that every maximal ideal of $C(X)$ is of the form $M_x=\lbrace f \in C(X) : f(x)=0\rbrace$, for some $x \in X$. Note that, if the zero ideal of $C(X)$ were decomposable, then there
would be only finitely many minimal prime ideals of $C(X)$. This certainly
sounds strange since every maximal ideal $M_x$ of $C(X)$ contains a minimal
prime ideal as every maximal ideal is also prime ideal. Hence to show that the zero ideal of $C(X)$ is not decomposable it is enough to show that if $x\not= y\in X$ then any two minimal prime ideals
$P_1\subseteq M_x, P_2\subseteq M_y$ are different: Since $X$ is Hausdorff and normal, there is an open set $U$
such that $x\in U$ and $y\not\in\overline{U}$. By Urysohn's lemma there are $f,g\in C(X)$
such that $f(U) = 0, f(y) = 1, g(x) = 1,$ and $g(X \setminus U) = 0.$ So $fg = 0$. Therefore $fg \in P_1$ but $g \not\in P_1$, because $g(x)\not=0$, hence $f \in P_1$. But $f \not\in P_2$, because $f(y)\not=0$. Hence $P_1\not=P_2$. |
H: Cech cohomology of $\mathbb A^2_k\setminus\{0\}$
I'm trying to prove, via the Cech cohomology, that $S=\mathbb A^2_k\setminus\{0\}$ with the induced Zariski topology is not an affine variety. Consider the structure sheaf $\mathcal O_{\mathbb A^2_k}\big|_S:=\mathcal O_S$ (which is quasi-coherent); I must show that $\exists n$ such that $\check H^n(S,\mathcal O_S)\neq 0$. It is enough to prove that $\check H^n(\mathcal U,\mathcal O_S)\neq0$ for a certain affine cover of $S$ (and a certain $n$); so let's choose $\mathcal U=\{D(X), D(Y)\}$ where $D(X)=\{(x,y)\in S\,:\, x\neq 0\}$ and $D(Y)=\{(x,y)\in S\,:\, y\neq 0\}$. Clearly for $n\ge 2$ we have that $\check H^n(S,\mathcal O_S)=0$, so I must show that $\check H^1(\mathcal U,\mathcal O_S)\neq0$. The Cech complex is:
$$\mathcal O_S(D(X))\times\mathcal O_S(D(Y))=\Gamma(S)_X\times\Gamma(S)_Y\longrightarrow \mathcal O_S(D(X)\cap D(Y))=\Gamma(S)_{XY}\longrightarrow 0\cdots$$
with the homomorphism: $d^0: (f,g)\mapsto g|_{{D(X)\cap D(Y)}}-f|_{{D(X)\cap D(Y)}}$. To complete the proof i should conclude that $d^0$ is not surjective, but why is this true?
thanks
AI: First notice that the restriction morphism $\Gamma(\mathbb A^2_k,\mathcal O_{ \mathbb A^2_k})\to \Gamma(S, \mathcal O_S)$ is bijective because the affine plane $\mathbb{A}^2_k$ is normal ("Hartogs phenomenon").
Hence we may identify $\Gamma(S, \mathcal O_S)$ with the polynomial ring $k[X,Y]$
a) The open set $D(X)$ is isomorphic to $\mathbb G_m\times \mathbb A^1_k$ where $\mathbb G_m=\operatorname {Spec} k[T,T^{-1}]$, the affine line with origin deleted.
Hence $\Gamma(D(X),\mathcal O_{\mathbb A^2_k})=k[X,X^{-1},Y]$.
b) Similarly $D(Y)$ is isomorphic to $\mathbb A^1_k \times \mathbb G_m$.
Hence $\Gamma(D(Y),\mathcal O_{\mathbb A^2_k})=k[X,Y, Y^{-1}]$.
c) Finally the open set $D(X)\cap D(Y)$ is isomorphic to the product $\mathbb G_m\times_k \mathbb G_m$ .
Hence $\Gamma(D(X)\cap D(Y),\mathcal O_{\mathbb A^2_k})=k[X,X^{-1}]\otimes _k k[Y,Y^{-1}]= k[X,X^{-1},Y,Y^{-1}]$.
d) With these identifications established, the first cohomology group $\check H^1(\mathcal U,\mathcal O_S)$ of the structural sheaf is the cohomology of the complex
$$ k[X,X^{-1},Y]\times k[X,Y,Y^{-1}] \to k[X,X^{-1},Y,Y^{-1}] \to 0 $$ where the non trivial map is $$(f(X,X^{-1},Y),g(X,Y,Y^{-1}))\mapsto g(X,Y,Y^{-1})-f(X,X^{-1},Y)$$
e) Hence we see that the required cohomology is the following infinite-dimensional $k$-vector space, which spectacularly violates the vanishing of higher cohomology of quasi-coherent sheaves on affine schemes; therefore $S$ is not affine.
Final result
$$ \check H^1(\mathcal U,\mathcal O_S)=\check H^1(S,\mathcal O_S)=\oplus _{i,j\gt 0} \; k\cdot X^{-i} Y^{-j} $$ |
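Since $d^0$ acts monomial-by-monomial, the cokernel can be seen in a finite truncation: a Laurent monomial $X^aY^b$ lies in the image precisely when $a\ge 0$ or $b\ge 0$. A quick sketch (my own illustration, with an arbitrary cutoff $N$):

```python
# Identify a monomial X^a Y^b with its exponent pair (a, b), truncated to
# |a|, |b| <= N.  On monomials d^0 is a difference of sections, so a Laurent
# monomial is in the image iff it appears in at least one of the two factors.
N = 5
laurent = {(a, b) for a in range(-N, N + 1) for b in range(-N, N + 1)}
# k[X, X^-1, Y]  contributes exponents with b >= 0
# k[X, Y, Y^-1]  contributes exponents with a >= 0
image = {(a, b) for (a, b) in laurent if a >= 0 or b >= 0}
cokernel = sorted(laurent - image)
print(len(cokernel), cokernel[:3])  # 25 monomials X^{-i} Y^{-j} with i, j > 0
```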
H: Is locally free sheaf of finite rank coherent?
Let $\mathcal{F}$ be a locally free sheaf of finite rank of scheme $X$, is $\mathcal{F}$ coherent?
By the definition of locally free sheaf, there exists an open cover $\{U_i\}$ of $X$ such that $\mathcal{F}|_{U_i}$ is isomorphic to the sheaf $\widetilde{\mathcal{O}(U_i)^n}$. But we don't know whether each $U_i$ is affine or not!
So, is that true or not?
How about $X$ being locally noetherian? If $X$ is, we can find affine opens $V_{ij} \subset U_i$ s.t. $V_{ij} = \operatorname{Spec}(A_{ij})$. And $\mathcal{F}|_{V_{ij}} = \mathcal{F}|_{U_i}|_{V_{ij}}$...?
For example, $X(\Delta)$ is a toric variety with $\Delta$ consisting of strongly convex polyhedral cones.
Thank you very much!!
AI: The exact condition for locally free sheaves on a ringed space $(X,\mathcal O_X)$ to be coherent is that $\mathcal O_X$ itself be coherent.
a) The condition is clearly necessary since $\mathcal O_X$ is locally free.
b) It is sufficient because, once the structure sheaf is coherent, coherence is a local property and a finite direct sum of coherent sheaves is coherent: apply this to $\mathcal F \mid U_i \cong (\mathcal O\mid U_i)^{\oplus r}$.
And when is $\mathcal O_X$ coherent?
There is, to my knowledge, no very good non-tautological criterion.
However, for locally noetherian schemes, it is the case that $\mathcal O_X$ is coherent, so for these schemes, yes, locally free sheaves are coherent. |
H: Generalized Laplacian operator?
Suppose a surface $S$ is endowed with a metric given by the matrix $$M=\begin{pmatrix} E&F\\F&G\end{pmatrix}$$
And $f,g$ are scalar functions defined on the surface. What then is the (geometric) significance of the scalar function given by ${1\over \sqrt{\det(M)}}{\partial \over \partial x_i}\left(f\sqrt{\det(M)} (M^{-1})_{ij} {\partial \over \partial x_j} g\right)$?
I have been told that if we set $f=1$, we get an operator equivalent to the Laplacian acting on the function $g$. Why does the Laplacian become this form? Is there an intuitive geometric explanation of what is going on?
Thank you.
AI: Yes, there is a more intuitive geometric explanation, though it is kind of difficult to see if you just get to see this formula. In differential (Riemannian) geometry one looks at curved (in contrast to flat, like Euclidean space) surfaces or higher dimensional manifolds. The metric you are looking at (more precisely: the pointwise norm associated with the pointwise scalar product it defines) is kind of an infinitesimal measure for distances in these surfaces.
It turns out that in this context you can set up differential calculus, the basic operation of which, when applied to vector fields, is the so-called covariant differentiation. If you are looking at surfaces embedded in Euclidean 3-space this is basically a differentiation in that surrounding space followed by an orthogonal projection onto the tangent plane to the surface, but one can define this in an abstract manner, too (without an ambient manifold). If you then take a function and its gradient (a concept which is also to be defined and depends on the metric) and take the covariant derivative of this object, then the trace of the result (wrt the metric) is the Laplacian of the function (as in Euclidean space, the Laplacian is the trace of the Hessian). In local coordinates this happens to look like the object you wrote down (when $f=1$). While in this form it looks a bit arbitrary, it turns out to have some interesting properties. In particular it is invariant under coordinate changes, that is, it is well defined as a differential operator on the surface.
If you want to learn more about this you should fetch some basic textbook on Riemannian Geometry, like do Carmo. |
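A numerical sanity check of the displayed formula (my own sketch, using the round-sphere metric $M=\operatorname{diag}(1,\sin^2\theta)$ and $f=1$): the spherical harmonic $\cos\theta$ should come out as an eigenfunction with eigenvalue $-2$.

```python
import math

def laplace_beltrami_sphere(f, th, ph, h=1e-3):
    # (1/sqrt(det M)) * d_i( sqrt(det M) (M^{-1})_{ij} d_j f ) by central
    # differences, for the metric M = diag(1, sin^2 theta) on the sphere
    def flux_th(t):  # sqrt(det) * g^{theta theta} * df/dtheta at (t, ph)
        return math.sin(t) * (f(t + h, ph) - f(t - h, ph)) / (2 * h)
    def flux_ph(p):  # sqrt(det) * g^{phi phi} * df/dphi at (th, p)
        return (f(th, p + h) - f(th, p - h)) / (2 * h) / math.sin(th)
    div = ((flux_th(th + h) - flux_th(th - h)) / (2 * h)
           + (flux_ph(ph + h) - flux_ph(ph - h)) / (2 * h))
    return div / math.sin(th)

# f = cos(theta) satisfies  Laplacian f = -2 f
f = lambda th, ph: math.cos(th)
val = laplace_beltrami_sphere(f, 1.0, 0.5)
print(val, -2 * math.cos(1.0))  # both approximately -1.0806
```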
H: Introductory (online) texts on Bayesian Network.
I would like to ask for some recommendation of introductory online texts on Bayesian Network.
What I am searching for is some accessible and instructive text not necessarily covering the subject in great depth, but explaining the main ideas. Simply an accessible introductory text (possibly online) for a fast orientation in the subject.
AI: Here is a small list of references:
Kevin Murphy's A Brief Introduction to Graphical Models and Bayesian Networks
Kevin Murphy's Phd Thesis
Michael Jordan's Graphical Models
Collection of papers appearing in Michael Jordan's Learning in Graphical Models |
H: cardinality: The cardinality of the set of all relations over the natural numbers.
I have to find the cardinality of the set of all relations over the natural numbers, without any limitations.
It seems to be א, but I can't find a function/other way to prove it.
help anyone?
thanks.
AI: Recall that $R$ is a relation over $A$ if $R\subseteq A\times A$.
The definition above tells us that every subset of $\mathbb{N\times N}$ is a relation over $\mathbb N$, and vice versa - every relation over $\mathbb N$ is a subset of $\mathbb{N\times N}$.
Thus the set of all relations over $\mathbb N$ is exactly $\mathcal P(\mathbb{N\times N})$, that is the power of $\mathbb{N\times N}$.
We know that $\mathbb N$ and $\mathbb{N\times N}$ have the same cardinality, $\aleph_0$. So their power sets also have the same cardinality. Therefore $|\mathcal P(\mathbb{N\times N})|=\aleph=|\mathbb R|=2^{\aleph_0}$.
Note, however, that every particular relation is a subset of a countable set and is thus countable (or finite).
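The step $|\mathbb{N\times N}|=\aleph_0$ can be witnessed by an explicit bijection, for example the Cantor pairing function; a small sketch of the idea:

```python
def cantor_pair(m, n):
    # Bijection N x N -> N: enumerate pairs along the anti-diagonals m + n = s
    return (m + n) * (m + n + 1) // 2 + n

values = sorted(cantor_pair(m, n) for m in range(50) for n in range(50))
print(len(set(values)), values[:6])  # 2500 [0, 1, 2, 3, 4, 5] -- no collisions
```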
H: All Bipartite Graphs on n number of vertices
I need to find a list of all connected bipartite graphs on 15 vertices.
http://mapleta.maths.uwa.edu.au/~gordon/remote/graphs/index.html#bips lists all graphs on 14 or fewer vertices.
http://oeis.org/A005142 says there are 575 252 112 such graphs.
AI: Try
geng -bc 15 conbip.g6.txt
with the program geng from Brendan McKay's nauty package, available from http://cs.anu.edu.au/~bdm/nauty/.
The list of connected bipartite graphs with n = 14 vertices is 74MB compressed and requires a few minutes to generate. The list for n = 15 may take a while to complete and the resulting file will be large. |
H: How to derive the equation for a bézier curve
So, I remember a while back there was a maths competition and we were given a curve that we needed to write an equation for. I just skipped the question since I didn't even know where to begin. I remember it was one among the last few questions of the paper and it was worth a lot of points.
I don't really remember what the curve looked like; it was something spirally, but I can't recall it to save my life right now.
So, I drew this curve in Inkscape (it's a Bézier curve. Or a few of them linked together, according to Wikipedia. If it's required I will post the whole path). And I would like to write the equation for it (with someone's help, obviously).
I was always a bit bad with curves, graphs and lines, but I want to understand them better. So, I was hoping someone could explain the process of deriving the equation for a curve.
P.S: I'd like it if you could use another curve (it can be something simpler, but try avoiding something overly complicated) so I can crack this one on my own, but if you feel like using this curve as an example I won't mind.
EDIT
So I have been browsing the internet, read a few Wikipedia entries about Bézier curves, and I understand how they're drawn (mostly the GIFs helped, haha), but I am still stumped when it comes to mathematically representing a Bézier curve. Also, I will add this image, which is the path and its control points (at the end of the blue lines; I didn't paint them in):
And also, the contents of the .tex file for the shape.
%LaTeX with PSTricks extensions
%%Creator: 0.48.2
%%Please note this file requires PSTricks extensions
\psset{xunit=.5pt,yunit=.5pt,runit=.5pt}
\begin{pspicture}(451.46875,34.25392151)
{
\newrgbcolor{curcolor}{1 0 0}
\pscustom[linewidth=3,linecolor=curcolor]
{
\newpath
\moveto(450.48448,1.10834551)
\curveto(404.89404,41.45133951)(333.34998,42.21654151)(281.90128,9.03018551)
\curveto(258.09407,-6.32636849)(228.42388,9.91159551)(202.75741,15.38398551)
\curveto(145.68728,27.55199551)(85.852286,40.32786151)(28.08402514,26.23698551)
\curveto(18.5710181,23.91656551)(9.403556,20.24334551)(0.681686,15.78116551)
}
}
\end{pspicture}
Thanks!
AI: A linear Bézier curve is simply a line segment given by the parametric equation $R(t) = A + t\,(AB)$, where $(AB)$ is the vector from the initial point $A$ to the final point $B$.
For Quadratic Bézier curve, take a look at the following picture.
Let the point between $P_1$ and $P_0$ be $Q_1$ and $P_1$ and $P_2$ be $Q_2$. Let our path be traced by $Q_0$. Then from above figure.
$$ \frac{P_0Q_1}{P_0P_1} = \frac{P_1Q_2}{P_1P_2} = \frac{Q_1Q_0}{Q_1Q_2} = t \text{ (say)} $$
$$Q_1 = P_0 + t(P_0P_1), Q_2 = P_1 + t(P_1P_2)$$
So we have
$$Q_0 = Q_1 + t(Q_1Q_2) = P_0 + t(P_0P_1) + t(P_1 + t(P_1P_2) - (P_0 + t(P_0P_1)))$$
Have a look at more elaborate article on Wikipedia. |
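The repeated linear interpolation above is de Casteljau's algorithm; expanding it for the quadratic case gives the Bernstein form $Q_0(t) = (1-t)^2P_0 + 2t(1-t)P_1 + t^2P_2$. A small sketch comparing the two (the control points are made-up examples):

```python
def de_casteljau(points, t):
    # Repeatedly interpolate between consecutive control points until one remains
    pts = list(points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def quadratic_bezier(p0, p1, p2, t):
    # Bernstein closed form for degree 2
    s = 1 - t
    return tuple(s * s * a + 2 * t * s * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

P0, P1, P2 = (0.0, 0.0), (1.0, 2.0), (3.0, 0.0)
for t in (0.0, 0.25, 0.5, 1.0):
    print(de_casteljau([P0, P1, P2], t), quadratic_bezier(P0, P1, P2, t))
```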
H: Are there rngs whose rngs of matrices are commutative?
If $R$ is a unital ring and $M_{2\times 2}(R)$ is a commutative ring, then $R$ is a trivial ring because if $$\begin{pmatrix}0 & 1 \\ 0 & 0\end{pmatrix}=\begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}\begin{pmatrix}0 & 1 \\ 0 & 1 \end{pmatrix}=\begin{pmatrix}0 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}=\begin{pmatrix}0 & 0\\0 & 0\end{pmatrix}, $$ then $0=1.$
However, $M_{2\times 2}(R)$ can be commutative for possibly non-unital rings of any size. That is, for any cardinal number $\kappa$ (finite or not), there exists an abelian group of order $\kappa.$ Equipping this group with the zero multiplication gives a rng $R$ such that $M_{2\times 2}(R)$ has zero multiplication and so is commutative. The trivial ring is also in this class.
Are there any examples of rngs $R$ whose multiplication is non-zero and such that $M_{2\times 2}(R)$ is commutative?
AI: No. Expanding the first cell of the product of arbitrary matrices, we would have to have $aa'+bc'=a'a+b'c$ for any $a,b,c,a',b',c'$, so it suffices to look at the particular case $a=a'=b'=c=0$:
$$\begin{pmatrix}0&x\\0&0\end{pmatrix}\begin{pmatrix}0&0\\y&0\end{pmatrix}=\begin{pmatrix}xy&0\\0&0\end{pmatrix}\\
\begin{pmatrix}0&0\\y&0\end{pmatrix}\begin{pmatrix}0&x\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&yx\end{pmatrix}$$
Therefore $xy=0$ for any $x,y\in R$, and the family of such rngs is exactly the abelian additive groups equipped with zero multiplication. |
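A concrete check (my own example): the rng $R=2\mathbb Z/8\mathbb Z$ is non-unital with nonzero multiplication ($2\cdot 2=4$), and the witness matrices from the argument above already fail to commute over it:

```python
def mul(A, B):
    # 2x2 matrix product over the rng 2Z/8Z = {0, 2, 4, 6}, arithmetic mod 8
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 8
                       for j in range(2)) for i in range(2))

X = ((0, 2), (0, 0))
Y = ((0, 0), (2, 0))
print(mul(X, Y), mul(Y, X))  # ((4, 0), (0, 0)) vs ((0, 0), (0, 4))
```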
H: Logical function plotting software in Linux
I want to plot function like this.
if $x > 6000$, plot the function $y = 6000 + \frac{x}{15}$; otherwise plot $y = 6000$.
Suggest me any open source plotting software that has linux version. I have tried KmPlot, KAlgebra, kst they does not seem to has such option.
KAlgebra supports the ? ... : notation, but when I enter it, it says invalid syntax. If you know how to draw this function in
AI: You should check out gnuplot. It is in almost any Linux repository. |
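For the specific piecewise function in the question, gnuplot handles the conditional directly with its ternary `?:` operator (the same notation the asker tried in KAlgebra); a minimal script might look like this, where the x-range is just an arbitrary example:

```gnuplot
f(x) = x > 6000 ? 6000 + x/15.0 : 6000
set xrange [0:12000]
plot f(x) title 'piecewise'
```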
H: Calculating Distance - Issue
I am having difficulty with the following problem.
A man covers a distance on a scooter. Had he moved 3 kmph faster he would have taken 40 min less. If he had moved 2 kmph slower he would have taken 40 min more. What is the distance (Ans = 40)?
Here is what I came up with:
$s = (v+3)(t - 40/60)$
$s = (v-2)(t+ 40/60)$
Three variables, two equations? Any suggestions on what I am missing, or an easier way to solve this?
AI: You have $s=vt$, which is the missing third equation. Substituting it into your two equations and simplifying gives $0=-\frac{4}{6}(v+3)+3t$ and $0=\frac{4}{6}(v-2)-2t$, so it follows that $t=\frac{10}{3}$, $v=12$, and hence $s=40$.
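To double-check the arithmetic with exact rational numbers (a quick sketch; the two linear equations come from expanding $(v+3)(t-\frac23)=vt$ and $(v-2)(t+\frac23)=vt$):

```python
from fractions import Fraction as F

# -(2/3) v + 3 t = 2      (40 min = 2/3 h, faster case)
#  (2/3) v - 2 t = 4/3    (slower case)
# Adding the two equations cancels v:  t = 2 + 4/3 = 10/3
t = F(2) + F(4, 3)
v = (3 * t - 2) * F(3, 2)   # back-substitute into the first equation
s = v * t
print(t, v, s)  # 10/3 12 40
```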
H: Does this weird sequence have a limit?
Yesterday I was trying to come up with an example of the weirdest sequence I could think of, and I came up with this. I'm not even sure if this could be called a sequence, but here it goes:
We'll define the sequence $a_n$ in the following way. We'll keep a list of previously calculated terms. Whenever we want to know $a_k$ for some $k$, we'll check if $a_k$ is in the list. If it is, we're done: that's its value. If it isn't, we roll a die, assign the number we get to $a_k$, and write that down in the list. This way, we can find out $a_k$ for any $k$ we want. For example, I just calculated some terms of it: $a_1 =1, a_2=4, a_{27}=1, a_{googol} = 5$.
There is an obvious problem with this definition, namely, that the sequence will be different each time someone different calculates it. I think a way to get past this would be to say that there will be a file on the Internet or something that keeps a list of all previously calculated terms of $a_n$. Another way would be, maybe, to talk not about the sequence itself (because it isn't unique, in a way) but of the probability of it having certain properties each time someone different calculates it.
My main question is: does this sequence have a limit? Or, to put it in the second way I just mentioned: what is the probability it will have a limit? What would happen if, instead of rolling a die, we pick a random real number for each term?
I'm sorry if this question doesn't make sense; I know practically nothing about the kind of math that would be needed to answer it. I don't even know how one would approach a problem like this, and I'm not sure this even counts as a sequence. This is something I randomly thought of, and I'm wondering if there is any previous work on problems like this.
AI: This is very similar to the definition of a random oracle in cryptography.
In any case, it's pretty obvious that your sequence almost surely (i.e. with probability 1) does not converge.
In particular, since your sequence $(a_k)$ only takes on values from a discrete set, in order to converge it would have to be constant from some point $k_0 \in \mathbb N$ onwards. But that can only happen if $a_k = a_{k_0}$ for all $k > k_0$, which is the intersection of infinitely many independent events, each occurring with probability less than $1-\epsilon$ for some fixed $\epsilon > 0$, and thus their intersection only occurs with probability 0.
Ps. Essentially the same result holds even if the sequence can take values from an infinite set, such as $\mathbb R$ or any (non-singleton) subset of it. If the sequence converged to some $x \in \mathbb R$, then for any $\epsilon>0$, there would have to be some $k_0$ such that, for all $k>k_0$, $a_k\in[x-\epsilon,x+\epsilon]$. As long as we can choose an $\epsilon>0$ such that each $a_k$ has a non-zero probability of not belonging to $[x-\epsilon,x+\epsilon]$ (i.e. as long as the probability distribution is not concentrated at $x$), the conclusion still follows. |
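The shrinking probabilities in that argument are easy to see numerically: the chance that the next $m$ rolls all equal a fixed value is $(1/6)^m$, and a quick Monte Carlo estimate (my own illustration, seeded for reproducibility) matches:

```python
import random

random.seed(0)
trials = 200_000
estimates = {}
for m in (1, 2, 4):
    hits = sum(all(random.randint(1, 6) == 1 for _ in range(m))
               for _ in range(trials))
    estimates[m] = hits / trials
    print(m, estimates[m], (1 / 6) ** m)  # estimate vs exact (1/6)^m
```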
H: Open math problems which high school students can understand
I request people to list some moderately and/or very famous open problems which high school students, perhaps with enough contest math background, can understand, classified by categories as on arxiv.org. Please include statements of the problems, if possible, and if there are specific terms, please state what they mean.
Thank you. I am quite inquisitive to know about them, and I asked this question after seeing how Andrew J. Wiles was fascinated by Fermat's last theorem back in high school.
AI: Goldbach's conjecture (math.NT)
An even integer is a positive integer which is divisible by $2$.
Goldbach's conjecture states that
$$\text{"Every even integer greater than $2$ can be expressed as the sum of two primes."}$$
For instance, $4= 2 + 2$, $6 = 3 + 3$, $8 = 5 + 3$, $10 = 7 + 3$, $12 = 7 + 5$ and so on.
Twin prime conjecture (math.NT)
A prime positive integer is one, which is divisible only by $1$ and itself. Twin primes are the primes which differ by $2$. For instance, $(5,7)$, $(11,13)$, $(17,19)$, $(101,103)$ are all examples of twin primes.
The twin prime conjecture asks the following question
$$\text{"Are there infinitely many twin primes?}"$$
Mersenne prime (math.NT)
A Mersenne prime is a prime of the form $2^n-1$. For instance, $31$ is a Mersenne prime, since $31 = 2^5-1$. Similarly, $127 = 2^7-1$ is also a Mersenne prime.
It is easy to show that if $2^n-1$ is a prime, then $n$ has to be a prime. However, the converse is not true.
The Mersenne prime conjecture asks the following question
$$\text{"Are there infinitely many Mersenne primes?"}$$
Perfect numbers (math.NT)
A perfect number is a positive integer that is equal to the sum of its proper positive divisors, that is, the sum of its positive divisors excluding the number itself.
The first few perfect numbers are $$6 = 1 + 2 + 3$$ $$28 = 1 + 2 + 4 + 7 + 14$$ $$496 = 1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248$$
There are two interesting conjectures on perfect numbers. The first one asks
$$\text{"Are there infinitely many perfect numbers?"}$$
The second one asks
$$\text{"Are there any odd perfect numbers?"}$$
Euler proved that $2^{p-1} \left(2^p-1 \right)$, where $2^p-1$ is a Mersenne prime, generates all the even perfect numbers. Note that if one proves that the Mersenne prime conjecture is true, then this will also imply the first conjecture on perfect numbers.
EDIT
This MO thread is also relevant. |
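All four statements are easy to experiment with; a small script (my own illustration) checks Goldbach decompositions, lists twin primes, finds the Mersenne exponents below 14, and recovers the first four (all even) perfect numbers:

```python
def is_prime(n):
    # Trial division, fine for these small ranges
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    # Smallest prime p with n - p also prime (n even, n > 2)
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p

def divisor_sum(n):
    # Sum of the proper divisors of n
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

goldbach = [goldbach_pair(n) for n in range(4, 21, 2)]
twins = [p for p in range(2, 110) if is_prime(p) and is_prime(p + 2)]
mersenne_exps = [p for p in range(2, 14) if is_prime(p) and is_prime(2 ** p - 1)]
perfect = [n for n in range(2, 10_000) if divisor_sum(n) == n]
print(goldbach)
print(twins)           # each p gives the twin pair (p, p+2)
print(mersenne_exps)   # 11 is missing: 2^11 - 1 = 2047 = 23 * 89
print(perfect)         # [6, 28, 496, 8128]
```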
H: Is this kind of simplicial complex necessarily homotopy equivalent to a wedge of spheres?
Suppose $d \ge 2$ and $S$ is a finite simplicial complex of dimension $2d$, such that
(1) $S$ is simply connected, i.e. $\pi_1 ( S) = 0$, and
(2) all the homology of $S$ is in middle degree, i.e. $\widetilde{H}_i ( S, \mathbb{Z}) = 0,$ unless $i = d$, and
(3) homology $\widetilde{H}_d(S, \mathbb{Z})$ is torsion-free.
Does it necessarily follow that $S$ is homotopy equivalent to a wedge of spheres of dimension $d$?
If $S$ can be shown to be homotopy equivalent to a $d$-dimensional cell complex, for example, this would follow by standard results.
AI: Your space is what is called a Moore space (a space with a unique nontrivial homology group), and these are determined up to homotopy by the nontrivial homology group $G$ and the nontrivial dimension (see Hatcher Example 4.34). It follows that your space is homotopy equivalent to a wedge of spheres whenever its nontrivial homology group is a direct sum of $\mathbb Z$'s. |
H: Inspecting Direction field
This question is from Boyce and Diprima, page no 38, question 22.
Draw a direction field for the given differential equation. How do
solutions appear to behave as $t$ becomes large? Does the behavior
depend on the choice of the initial value $a$? Let $a_0$ be the value
of $a$ for which the transition from one type of behavior to another
occurs. Estimate the value of $a_0$.
The equation is
\begin{align} 2y'- y = e^{\frac{t}{3}}, \quad y(0)=a\end{align}
I used "Maxima" to draw the direction field, but cannot figure out where/how to spot the change in the behavior of the solutions just by observing the plot.
AI: Looking at your curves one is tempted to say the following: If the value $a:=y(0)>0$ then the solution $x\mapsto y(x)$ is increasing for all $x>0$. If $a<0$ then the solution is first decreasing, then reaches a minimum at a certain point $x_a$, and for $x>x_a$ increases definitely to infinity.
But this is not the whole truth.
In order to get a full view one has to determine the general solution of the given ODE. Using standard methods one obtains
$$y(x)=C e^{x/2}-3 e^{x/3}\ ,\qquad C\ \ {\rm arbitrary}\ ,$$
and introducing the initial condition gives
$$y(x)=e^{x/2}\bigl(a+3-3e^{-x/6}\bigr)\ .$$
Now you can see that the really crucial value is $a_0=-3$. I leave the details of the further discussion to you. |
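The claim can be checked numerically: integrating the ODE with a standard RK4 scheme (my own sketch; the step count is arbitrary) reproduces the closed-form solution and shows the change of behavior across $a_0=-3$:

```python
import math

def y_exact(x, a):
    # Closed-form solution of 2y' - y = e^(t/3), y(0) = a
    return math.exp(x / 2) * (a + 3 - 3 * math.exp(-x / 6))

def rk4(a, x_end, n=20_000):
    # Classical Runge-Kutta for y' = (y + e^(t/3)) / 2, y(0) = a
    h, t, y = x_end / n, 0.0, a
    f = lambda t, y: (y + math.exp(t / 3)) / 2
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

for a in (-4, -3, -2):  # transition at a0 = -3
    print(a, rk4(a, 10.0), y_exact(10.0, a))
```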
H: Find the probability of the given event.
Team $A$ is playing team B in a series of three games. Team $A$'s probability of winning any particular game is nonzero, independent of other games, and is $1.6$ times as large as its probability of winning the series. What is the probability of team $A$ winning the series?
My approach :
Let the probability of winning any game be $x$, then
$$x = 1.6 \left( x^3 + 3x(1 -x) \right)$$
because the series can be won as WWW, WLW, LWW, WWL.
Please help.
AI: Yes your approach is indeed correct. However, the equation you have obtained is incorrect. You will obtain $$x = 1.6(x^3 + 3\color{red}{x^2}(1-x)).$$ Since you are given that the probability of $A$ winning a game is non-zero, we can cancel off the $x$ to get $$5 = 8 (x^2 + 3x -3x^2) = -16x^2 + 24x \implies 16x^2 - 24x + 5 = 0$$
Now solve the above quadratic equation, with the constraint the $0 <x < 1$.
$$16x^2 - 24x + 5 = 0 \implies x = \dfrac{24 \pm \sqrt{24^2 - 4 \times 16 \times 5}}{2 \times 16} = \dfrac34 \pm \dfrac12 = \dfrac14, \dfrac54.$$ Since $x$ denotes, a non-zero probability, we get that the desired probability is $x = \dfrac14$. |
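A quick exact check of the result (my own verification):

```python
from fractions import Fraction

def series_win(x):
    # P(team A wins the 3-game series): WWW, WWL, WLW, LWW
    return x ** 3 + 3 * x ** 2 * (1 - x)

x = Fraction(1, 4)
print(series_win(x))                        # 5/32
print(x == Fraction(8, 5) * series_win(x))  # True: x = 1.6 * P(series)
```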
H: Recurrence relation, error in generating function - where did I go wrong?
This is the homework. If you are interested in the precise formulation, it is as follows: write a recurrence relation and a generating function that counts the sequences of trites (elements of the set $\{0, 1, 2\}$) in which the substrings 01 and 00 are not allowed.
I've recognized this as the following recurrence relation: $f(n) = 2f(n - 1) + f(n - 2)$. I've built a model of it and tested it; it looks like this is correct. Now, below is my effort at solving it, which does not give me the expected result, but I cannot find the error.
$$\begin{align*}
a_n = 2a_{n-1} + a_{n-2}\\
s^2 = 2s + 1\\
s^2 - 2s - 1 = 0\\
\end{align*}$$
$$\begin{align*}
s = \frac{ 2 \pm \sqrt{ 4 + 4 } } { 2 }\\
s_0 = \frac{ 2 + 2 \sqrt{ 2 } } { 2 } = 1 + \sqrt{ 2 }\\
s_1 = \frac{ 2 - 2 \sqrt{ 2 } } { 2 } = 1 - \sqrt{ 2 }\\
\end{align*}$$
$$\begin{align*}
a_n = \alpha s_0^n + \beta s_1^n\\
a_0 = \alpha + \beta = 1\\
a_1 = (1 + \sqrt{ 2 }) \alpha + (1 - \sqrt{ 2 }) \beta = 3\\
\alpha = 1 - \beta\\
(1 + \sqrt{ 2 }) (1 - \beta) + (1 - \sqrt{ 2 }) \beta = 3\\
(1 + \sqrt{ 2 }) - (1 + \sqrt{ 2 }) \beta + (1 - \sqrt{ 2 }) \beta = 3\\
\beta ((1 - \sqrt{ 2 }) - (1 + \sqrt{ 2 })) = 3 - (1 + \sqrt{ 2 })\\
\beta = \frac{ 3 - (1 + \sqrt{ 2 }) } { (1 - \sqrt{ 2 }) - (1 + \sqrt{ 2 }) }\\
\beta = \frac{ 3 - (1 + \sqrt{ 2 }) } { 1 - \sqrt{ 2 } - 1 - \sqrt{ 2 } }\\
\beta = \frac{ 2 - \sqrt{ 2 } } { -2 \sqrt{ 2 } }\\
\alpha = 1 - \frac{ 2 - \sqrt{ 2 } } { -2 \sqrt{ 2 } }\\
\alpha = \frac{ -2 \sqrt{ 2 } - 2 - \sqrt{ 2 } } { -2 \sqrt{ 2 } }\\
\alpha = \frac{ -3 \sqrt{ 2 } - 2 } { -2 \sqrt{ 2 } }\\
\end{align*}$$
$$\begin{align*}
a_n = \frac{ -3 \sqrt{ 2 } - 2 } { -2 \sqrt{ 2 } } \times (1 + \sqrt{ 2 }) + \frac{ 2 - \sqrt{ 2 } } { -2 \sqrt{ 2 } } \times (1 - \sqrt{ 2 })\\
a_n = \frac{ (-3 \sqrt{ 2 } - 2)(1 + \sqrt{ 2 })^n + (2 - \sqrt{ 2 })(1 - \sqrt{ 2 })^n }{ -2 \sqrt{ 2 } }\\
a_n = \frac{ (-3 \sqrt{ 2 } - \sqrt{ 2 } \sqrt{ 2 })(1 + \sqrt{ 2 })^n + (\sqrt{ 2 } \sqrt{ 2 } - \sqrt{ 2 })(1 - \sqrt{ 2 })^n } { -2 \sqrt{ 2 } }\\
a_n = \frac{ \sqrt{ 2 } (-3 - \sqrt{ 2 })(1 + \sqrt{ 2 })^n + \sqrt{ 2 } (\sqrt{ 2 } - 1)(1 - \sqrt{ 2 })^n } { -2 \sqrt{ 2 } }\\
a_n = \frac{ \sqrt{ 2 } ((-3 - \sqrt{ 2 })(1 + \sqrt{ 2 })^n + (\sqrt{ 2 } - 1)(1 - \sqrt{ 2 })^n) } { -2 \sqrt{ 2 } }\\
a_n = \frac{ (-3 - \sqrt{ 2 })(1 + \sqrt{ 2 })^n + (\sqrt{ 2 } - 1)(1 - \sqrt{ 2 })^n } { -2 }
\end{align*}$$
Sorry, my TeX-fu isn't strong enough to make this look good. Feel free to edit it to make it look more comprehensible.
You may find the model to test my calculations here: http://pastebin.com/R1aRmeL7
AI: You could have made things a little easier for yourself after you found $s_0=1+\sqrt2$ and $s_1=1-\sqrt2$, starting with finding $\alpha$ and $\beta$. You have $\alpha+\beta=1$ and $(1+\sqrt2)\alpha+(1-\sqrt2)\beta=3$. The latter can be written $(\alpha+\beta)+\sqrt2(\alpha-\beta)=3$, and since $\alpha+\beta=1$, this simplifies immediately to give you the system
$$\left\{\begin{align*}
&\alpha+\beta=1\\
&\alpha-\beta=\sqrt2\;.
\end{align*}\right.$$
Clearly $2\alpha=1+\sqrt2=s_0$ and $2\beta=1-\sqrt2=s_1$, so the general term is $$a_n=\frac12\left(s_0^{n+1}+s_1^{n+1}\right)\;,$$ or, if you prefer,
$$a_n=\frac12\left((1+\sqrt2)^{n+1}+(1-\sqrt2)^{n+1}\right)\;.$$ |
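As a numerical cross-check: the roots $1\pm\sqrt2$ together with $a_0=\alpha+\beta=1$ and $a_1=3$ suggest the underlying recurrence is $a_n=2a_{n-1}+a_{n-2}$ (an assumption, since the recurrence itself isn't quoted above). A short script comparing it with the closed form:

```python
from math import sqrt

# Closed form derived above: a_n = ((1 + sqrt(2))**(n+1) + (1 - sqrt(2))**(n+1)) / 2
def closed_form(n):
    s0, s1 = 1 + sqrt(2), 1 - sqrt(2)
    return (s0 ** (n + 1) + s1 ** (n + 1)) / 2

# Recurrence inferred from the characteristic roots 1 +/- sqrt(2):
# a_n = 2*a_{n-1} + a_{n-2}, with a_0 = 1, a_1 = 3 (an assumption).
def by_recurrence(n):
    a, b = 1, 3  # a_0, a_1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

for n in range(15):
    assert round(closed_form(n)) == by_recurrence(n)
print([by_recurrence(n) for n in range(6)])  # [1, 3, 7, 17, 41, 99]
```

The second root satisfies $|1-\sqrt2|<1$, so its contribution dies off and `round(closed_form(n))` recovers the integer sequence exactly for moderate $n$.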
H: Taking stalk of a product of sheaves
Let $(\mathscr{F}_\alpha)_\alpha$ be a family of sheaves on $X$, and $\prod_\alpha\mathscr{F}_\alpha$ the product sheaf. If $x\in X$, is it true that
$$\left(\prod_\alpha\mathscr{F}_\alpha\right)_x\simeq\prod_\alpha(\mathscr{F}_\alpha)_x \ ?$$
I think $(\oplus_\alpha\mathscr{F}_\alpha)_x\simeq\oplus_\alpha(\mathscr{F}_\alpha)_x$ may be true, but not the product sheaf.
AI: You are right. It is true for direct sums, even arbitrary colimits (since colimits commute with colimits, and remember the description of the stalk as a colimit; or represent it as a pullback functor to the point), and also for finite products, even more generally for finite limits (since these commute with filtered colimits in, say, algebraic categories, in which the sheaves should live). But it is not true for infinite products.
There is always a canonical map $(\prod_{\alpha} F_{\alpha})_x \to \prod_{\alpha} (F_{\alpha})_x$. But it doesn't have to be injective, even for very nice spaces $X$ and sheaves $F_{\alpha}$. Take $X=\mathbb{R}$, let $F_{\alpha}$ be the sheaf of continuous functions for each $\alpha \in \mathbb{N}$, and let $x=0$. Let $f_{\alpha} : \mathbb{R} \to \mathbb{R}$ be a continuous function which vanishes on $]-1/(\alpha+1),+1/(\alpha+1)[$ but does not vanish at $2/(\alpha+1)$. Then each germ $(f_{\alpha})_0$ is zero, but no single neighbourhood of $0$ kills every $f_{\alpha}$, so $(f_{\alpha})_{\alpha}$ represents a nontrivial element of the kernel of the canonical map.
H: Is there any famous number theory conjecture that has been proven undecidable?
Is there any famous number theory conjecture that has been proven undecidable, i.e. impossible to determine whether it is true or false?
Is there any history surrounding such a result?
I would like to know of any number-theoretic conjectures of this undecidable type.
AI: Perhaps Hilbert's Tenth Problem and Matiyasevich's Theorem are what you have in mind. Here are some particular points taken from the two wikipages:
Corresponding to any given consistent axiomatization of number theory, one can explicitly construct a Diophantine equation which has no solutions, but such that this fact cannot be proved within the given axiomatization. Hilbert's tenth problem asks for a general algorithm deciding the solvability of Diophantine equations. The conjunction of Matiyasevich's theorem with earlier results known collectively as the MRDP theorem implies that a solution to Hilbert's tenth problem is impossible.
Harold N. Shapiro and Alexandra Shlapentokh prove that Hilbert's tenth problem is unsolvable for the ring of integers of any algebraic number field whose Galois group over the rationals is abelian.
Later work has shown that the question of solvability of a Diophantine equation is undecidable even if the equation only has 9 natural number variables (Matiyasevich, 1977) or 11 integer variables (Zhi Wei Sun, 1992). |
H: Is $3^n - 2^n$ composite for all integers $n \geq 6$?
I made a conjecture about the values of n for which $3^n - 2^n$ is not prime, but I didn't succeed in proving the conjecture. My conjecture is the following: "Suppose n is an integer greater than or equal to 6. Then $3^n - 2^n$ is not prime". I tried to prove that by induction, but I got stuck. Any ideas?
AI: $n=17$ gives us $3^n-2^n = 129009091$, which is a prime number. Below is a list of all $n \leq 100000$ for which $3^n-2^n$ is prime.
$$
\begin{array}{|c|c|}
\hline
n& (3^n-2^n)\\
\hline
2 & 5\\
3 & 19\\
5 & 211\\
17 & 129009091\\
29 & 68629840493971\\
31 & 617671248800299\\
53 & 19383245658672820642055731\\
59 & 14130386091162273752461387579\\
101 & 1546132562196033990574082188840405015112916155251\\
277 & \text{A $133$ digit prime number}\\
647 & \text{A $309$ digit prime number}\\
1061 & \text{A $507$ digit prime number}\\
2381 & \text{A $1137$ digit prime number}\\
2833 & \text{A $1352$ digit prime number}\\
3613 & \text{A $1724$ digit prime number}\\
3853 & \text{A $1839$ digit prime number}\\
3929 & \text{A $1875$ digit prime number}\\
5297 & \text{A $2528$ digit prime number}\\
7417 & \text{A $3539$ digit prime number}\\
90217 & \text{A $43045$ digit prime number}\\
\hline
\end{array}$$
It is clear that if $3^n - 2^n$ is a prime, then $n$ has to be a prime but the converse is not true. In general, questions like these are likely to be hard.
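For what it's worth, the small entries of the table can be reproduced with a short script (this uses SymPy's `isprime`, assuming SymPy is available):

```python
from sympy import isprime

# Find all n <= 40 for which 3**n - 2**n is prime; per the table above,
# these should be exactly 2, 3, 5, 17, 29, 31.
prime_n = [n for n in range(2, 41) if isprime(3**n - 2**n)]
print(prime_n)  # [2, 3, 5, 17, 29, 31]

# Every such n is itself prime, but the converse fails, e.g. n = 7:
assert all(isprime(n) for n in prime_n)
assert not isprime(3**7 - 2**7)  # 2059 = 29 * 71
```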
For instance, replacing the $2$ by $1$ and $3$ by $2$ in your question, we get the Mersenne prime i.e. a prime of the form $2^n-1$. For instance, $31$ is a Mersenne prime, since $31 = 2^5-1$. Similarly, $127 = 2^7-1$ is also a Mersenne prime.
The Mersenne prime conjecture asks the following question
$$\text{"Are there infinitely many Mersenne primes?"}$$ |
H: Find the minimum number of tests
Sorry for the title, but I couldn't think of anything better. It's not actually homework, but rather a question from a maths question book that I am currently stuck on; still, I've tagged it as homework.
The question is as follows:
An institute holds 32 mock tests, students have the option to appear
for any number of mocks, even 0. Student A gives 16, student B gives
18, student C gives 20. What is the minimum number of mocks which were
written by more than one among A, B, and C.
The way I approached was as follows:
Since the mocks written by A, B, and C sum to $16+18+20 = 54$, and the total mocks held by the institute were 32, at least $54 - 32 = 22$ of these writings must have been repeats.
The problem is I am stuck here, and can't move ahead.
I think this problem may be solved by set theory, but being an aptitude problem, I think there's more to it and it can be solved logically without going into much mathematics.
AI: We will minimize the number of common exams written if we make sure that each exam written by at least two people is written by all three. (Of course it is not yet clear that this can be arranged).
If it can be arranged, let $n$ be the number of exams written by all three. Then A will write $16-n$ exams that no one else writes, B will write $18-n$, and C will write $20-n$. Thus
$$n+(16-n)+(18-n)+(20-n)\le 32.$$
Solve: the left-hand side simplifies to $54-2n$, and $54-2n\le 32$ gives $n\ge 11$.
We can arrange for $n$ to be exactly $11$ by giving the first $11$ exams to everybody. In addition, A writes exams $12$ to $16$, B writes $17$ to $23$, and C writes $24$ to $32$. |
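The explicit schedule in the last paragraph can be sanity-checked in a few lines (using the exam numbering given above):

```python
# Exams 1..11 are written by all three; A additionally writes 12..16,
# B writes 17..23, and C writes 24..32.
A = set(range(1, 12)) | set(range(12, 17))
B = set(range(1, 12)) | set(range(17, 24))
C = set(range(1, 12)) | set(range(24, 33))

assert len(A) == 16 and len(B) == 18 and len(C) == 20
assert len(A | B | C) <= 32  # fits within the 32 mocks held

# Exams written by more than one of A, B, C:
shared = (A & B) | (A & C) | (B & C)
print(len(shared))  # 11
```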
H: Find max vertical distance
What is the maximum vertical distance between the line
$y = x + 20$
and the parabola
$y = x^2$ for $−4 ≤ x ≤ 5?$
What steps do I take to solve this? Do I have to use the distance formula and what do I do with the points it gave me?
If anyone could just bounce me in the right direction that would be neat. I can probably work an answer from there!
Also what's the distance formula to use here?
AI: The vertical distance at $x=a$ is the difference in $y$-coordinates at $x=a$, so it’s $|(x+20)-x^2|$. Now $x^2-x-20=(x+4)(x-5)$, so it’s negative between $x=-4$ and $x=5$. Thus, on the interval $[-4,5]$ we have $|(x+20)-x^2|=x+20-x^2$, not $x^2-x-20$.
Now let $f(x)=x+20-x^2$ and find the maximum of $f(x)$ on the interval $[-4,5]$. |
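(Spoiler for the final step, as a check: $f'(x)=1-2x=0$ at $x=\tfrac12$, which lies inside $[-4,5]$, so the maximum is $f(\tfrac12)=\tfrac{81}{4}$. A quick grid search agrees:)

```python
# f(x) = (x + 20) - x^2 is the vertical distance on [-4, 5].
def f(x):
    return x + 20 - x * x

# Calculus gives the critical point x = 1/2, with f(1/2) = 81/4 = 20.25.
best = max(f(i / 1000) for i in range(-4000, 5001))
print(best)  # 20.25
assert best == f(0.5) == 20.25
```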
H: Sheaf of rings with vanishing stalk?
How common is that a sheaf of rings has a vanishing stalk? To define the rank of a locally free sheaf of $\mathscr{O}$-modules, for instance, $\mathscr{O}_x=0$ may cause some problem, since the rank of a free $A$-module is not well-defined if $A=0$. It would make life easier if $\mathscr{O}_x\neq0\ \forall x\in X$, but is this condition somehow incorporated in the definition of sheaf of rings?
AI: I will presume that by "a sheaf of rings" you mean "a sheaf of rings with $1$" (since this is what is usually so meant). If the stalk of $\mathcal O_X$ vanishes at $x$, then this means that $1 = 0$ in the stalk, and hence in $\mathcal O_X(U)$ for some
neighbourhood $U$ of $x$. Thus the stalk of $\mathcal O_X$ vanishes at $x$ if and only if the sheaf $\mathcal O_X$ restricts to the zero sheaf in some neighbourhood of $x$.
If e.g. $X$ is not only ringed, but locally ringed, then the stalks of $\mathcal O_X$ (which are then local rings by definition) never vanish at a point (since local rings are non-zero, again by definition).
Added: If $U$ is an open subset of $X$ with complement $Z$, and $i: Z \to X$ is the
inclusion, then for any sheaf of rings $\mathcal O_Z$ on $Z$, the pushforward $i_* \mathcal O_Z$ will be a sheaf of rings on $X$ whose stalks vanish on $U$. Thus we can always find examples realizing the discussion of the first paragraph. More generally, if we let $Z$ be the support of any sheaf of rings $\mathcal O_X$ on $X$, this will coincide with the support of the identity section $1 \in \mathcal O_X(X)$, and hence will be a closed subset of $X$, and we will have that $\mathcal O_X = i_* i^{-1} \mathcal O_X,$ with $i^{-1}\mathcal O_X$ a sheaf of rings on $Z$, none of whose stalks vanish. Consequently, any sheaf of rings on $X$ can be obtained by a sheaf of rings with non-vanishing stalks on a closed subspace by pushing forward. |
H: Minimum value of $p^2x + q^2y + r^2z$ if $pqxyz = 54r$
What is the minimum value of $p^2x + q^2y + r^2z$ if $pqxyz = 54r$, where $p, q, r, x, y$ and $z$ are positive real numbers?
I tried applying Cauchy's here but it didn't yield any significant result. Please help.
AI: The problem might be incorrect, since as stated $p^2x + q^2 y + r^2z$ can be made arbitrarily close to $0$. (I assume the constraint is $pqxyz = 54r$, as in your title.)
A way to see that is to take $p = \dfrac1x$, $q = \dfrac1y$ and $z = 54r$. The objective function now becomes $\dfrac1x + \dfrac1y + 54 r^3$. Now let $x,y \to \infty$ and $r \to 0$ to see that the objective function can be made arbitrarily small.
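This degeneration is easy to check numerically; a minimal sketch of the substitution above:

```python
# Take p = 1/x, q = 1/y, z = 54*r; then p*q*x*y*z = 54*r for any x, y, r > 0,
# and the objective p^2 x + q^2 y + r^2 z becomes 1/x + 1/y + 54*r**3.
def objective(x, y, r):
    p, q, z = 1 / x, 1 / y, 54 * r
    assert abs(p * q * x * y * z - 54 * r) < 1e-9  # constraint satisfied
    return p * p * x + q * q * y + r * r * z

print(objective(1e6, 1e6, 1e-3))  # about 2e-6: arbitrarily small
```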
H: Find $(x, a, b, c)$ if $x! = a! + b! + c!$
Find $(x, a, b, c)$ if $$x! = a! + b! + c!$$
I want to know if there are more solutions to this apart from $(x, a, b, c) = (3, 2, 2, 2)$.
AI: First note that $a,b,c < x$, since $1 \leq n!$. This means that $a,b,c \leq x-1$. This implies that $$x! = a! + b! + c! \leq 3 (x-1)! \implies x \leq 3$$
Further, $x! = a! + b! + c! \geq 3 \implies x >2$. Hence, $x=3$ is the only possibility left. |
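The bound $x \le 3$ makes an exhaustive check trivial; here is a brute-force sketch over positive integers, searching slightly beyond the bound and taking $a \le b \le c$ to avoid counting permutations:

```python
from math import factorial

# All solutions of x! = a! + b! + c! with 1 <= x, a <= b <= c <= 7.
solutions = [
    (x, a, b, c)
    for x in range(1, 8)
    for a in range(1, 8)
    for b in range(a, 8)
    for c in range(b, 8)
    if factorial(x) == factorial(a) + factorial(b) + factorial(c)
]
print(solutions)  # [(3, 2, 2, 2)]
```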