If $B$ is an invertible matrix, then the row spaces of $A$ and $BA$ are the same I have seen a proof that $B$ is row equivalent to $A$ iff there exists an invertible matrix $C$ such that:
$B = CA$, because $C$ is a product of elementary matrices, but I can't find the next step. Can $B$ be used as the invertible matrix that leads from $A$ to $BA$ without changing the row space of $A$, so that $A$ and $BA$ have the same row space?
|
Well, since $B=CA$, you see that rows of $B$ can be written as linear combinations of rows of $A$. (To be more explicit, write
$$B=\begin{bmatrix}\mathbf{b_1\\b_2\\ \vdots \\ b_m}\end{bmatrix},\quad A=\begin{bmatrix}\mathbf{a_1\\a_2\\ \vdots \\ a_m}\end{bmatrix},\quad C=\begin{bmatrix}c_{11} &\cdots & c_{1m}\\
\vdots & \ddots&\vdots\\ c_{m1} & \cdots & c_{mm}
\end{bmatrix}$$
and perform partitioned matrix multiplication. Here letters in boldface denote row vectors comprising matrices.)
Likewise $A=C^{-1}B$, so rows of $A$ are linear combinations of rows of $B$. The two row spaces therefore contain each other, hence coincide.
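As a quick sanity check, here is a small Python sketch (the matrices are arbitrary choices, and `rank` is a hypothetical exact-arithmetic helper): the row spaces of $A$ and $BA$ coincide exactly when stacking $BA$ onto $A$ does not raise the rank.

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 0], [0, 1, 1]]          # arbitrary 2x3 matrix
B = [[2, 1], [1, 1]]                # invertible: det = 1
BA = matmul(B, A)

# Row spaces coincide iff stacking BA onto A does not raise the rank.
assert rank(A) == rank(BA) == rank(A + BA) == 2
```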
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can two vectors of 3-Tuples span $\mathbb R^3$? I'm just checking, but when we have $2$ vectors
$$
V_1=\begin{pmatrix}a\\b\\c\end{pmatrix}\text{ and }V_2=\begin{pmatrix}e\\f\\g\end{pmatrix}
$$
We could theoretically span the real space $\mathbb R^3$ with just these two vectors, right?
|
While skyking gave a very elegant explanation for $\mathbb{R}^3$, there is a general fact: $n$ $(n+1)$-tuples can never span $\mathbb{R}^{n+1}$. One possible proof is this:
1) For $n=1$ this is obviously true.
2) Suppose any element of $\mathbb{R}^{n+1}$ can be represented as a linear combination of your tuples; then you have $e_{n+1} = (0, 0, ..., 0, 1)^T = a_1V_1+a_2V_2+...+a_nV_n$. Some coefficient is nonzero, say $a_n \neq 0$ (relabel if necessary); then $V_n = b_1V_1 + b_2V_2 + ... + b_{n-1}V_{n-1} + b_ne_{n+1}$. Thus, the vectors $e_1, e_2, ..., e_n$ can be expressed using only $V_1, ..., V_{n-1}$ and $e_{n+1}$. So, ignoring the last component, you get that $n-1$ $n$-tuples span $\mathbb{R}^n$.
3) By induction, from 1) and 2) we conclude that the statement holds for all $n$.
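For a concrete check, here is a small Python sketch with two arbitrarily chosen vectors: their cross product is orthogonal to every linear combination of them, yet nonzero, so it is a vector of $\mathbb R^3$ outside their span.

```python
V1, V2 = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)

# Any linear combination a*V1 + b*V2 lies in the plane with normal V1 x V2.
n = (V1[1]*V2[2] - V1[2]*V2[1],
     V1[2]*V2[0] - V1[0]*V2[2],
     V1[0]*V2[1] - V1[1]*V2[0])          # cross product, nonzero here

dot = lambda u, v: sum(x*y for x, y in zip(u, v))

# n is orthogonal to V1 and V2 (hence to their span) but not to itself,
# so n cannot be a linear combination of V1 and V2.
assert abs(dot(n, V1)) < 1e-12 and abs(dot(n, V2)) < 1e-12
assert dot(n, n) > 0
```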
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Solutions to $100a+10b+c=11a^2+11b^2+11c^2$ and $100a+10b+c=11a^2+11b^2+11c^2+ k$ The problem states that
Find all three-digit natural numbers such that, when the number is divided by $11$, the quotient equals the sum of the squares of its digits.
Since there is no information about whether the remainder is $0$ or not, I first assumed that the question is about numbers perfectly divisible by $11$.
Now I have $80$ numbers left; I could check them separately, but that would be lengthy.
I made an equation $$100a+10b+c=11a^2+11b^2+11c^2$$
Rearranged and got $$a(11a-100)+b(11b-10)+c(11c-1)=0$$
I have $10$ possible values for each of $a,b,c$, namely $\{0,1,2,3,4,5,6,7,8,9\}$ (with $a\neq 0$ for a three-digit number).
I made a table of $a(11a-100)$, $b(11b-10)$ and $c(11c-1)$ and found that only $a=b=5,c=0$ and $a=8,b=0,c=3$ give sum $0$; hence $550$ and $803$ are the only numbers satisfying the given property that are divisible by $11$.
Now I have two questions:
$1.)$ Is my approach and my answer correct? If not, where have I gone wrong?
$2.)$ What about the numbers which are not divisible by $11$?
As mentioned by an answerer of this post, there are six such numbers which are not divisible by $11$ but still give as quotient the sum of the squares of their digits. But the answerer found them using a computer program, which is not suitable for pen-and-paper mathematics. So how can I find those six numbers?
|
This might not be the best way but here is how I would do it.
Assume the remainder to be $\lambda$. Remember that $\lambda$ can take any value from 0 to 10.
Note that $100a + 10b +c = 11a^2 + 11b^2 + 11c^2 + \lambda$, multiplied through by $11$ and with the squares completed, can be rewritten as
$$(11a - 50)^2 + (11b - 5)^2 + \left(11c - \frac{1}{2}\right)^2 = 2525 + \frac{1}{4} - 11\lambda$$
For the remainder of the answer, I'll assume $\lambda = 0.$ The other cases for $\lambda$ can be handled similarly.
Since each of the terms on the left-hand side is nonnegative, we get that $c \leq 4$.
We can take 5 cases based upon each of the five possible values for $c$ and solve for $a$ and $b$.
I won't go through all of the cases but in each one of them we should be able to form upper bounds on the values of $a$ and $b$. Furthermore, the search space can be pruned even further by making use of the parity of $a$ and $b$.
This method is not very different from your "table generating" method. The only difference is that the search space is pruned quite a bit.
Also, if the number is itself divisible by 11, we can use the property mentioned in the comments to prune the search space even more.
The process can be repeated for other values of $\lambda$. The upper bounds should not differ a lot but the individual solutions will.
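For reference, a brute-force search (a quick Python sketch, not the pen-and-paper route the question asks for) confirms both the two multiples of $11$ found above and the six further numbers with nonzero remainder.

```python
# Brute-force check over all three-digit numbers: quotient on division
# by 11 equals the sum of squared digits.
matches = []
for N in range(100, 1000):
    a, b, c = N // 100, (N // 10) % 10, N % 10
    if N // 11 == a*a + b*b + c*c:        # quotient, remainder ignored
        matches.append(N)

divisible = [N for N in matches if N % 11 == 0]
assert divisible == [550, 803]
# The remaining six: 131, 241, 324, 624, 900, 910.
assert len(matches) - len(divisible) == 6
```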
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
convergence of $\int_1^{\infty}\frac{\ln x}{\sqrt[3]x(x+1)}$ I need help checking whether the integral
$$\int_1^{\infty}\frac{\ln x}{\sqrt[3]x(x+1)}\,dx$$
converges or diverges.
I tried comparing with larger integrals using $\ln x \lt x$, or making the denominator smaller, but without success.
Any suggestions?
|
The substitution $x\mapsto z^3$ leads to an absolutely convergent integral:
$$ I = \int_{1}^{+\infty}\frac{\log x}{\sqrt[3]{x}(1+x)}\,dx \stackrel{x\mapsto z^3}{=} \color{blue}{9\int_{1}^{+\infty}\frac{z \log z}{1+z^3}\,dz}\stackrel{z\mapsto t^{-1}}{=}9\int_{0}^{1}\frac{-\log(t)}{1+t^3}\,dt\tag{1}$$
that can be easily computed: since $\int_{0}^{1}t^k(-\log t)\,dt = \frac{1}{(k+1)^2}$,
$$ I = 9\sum_{k\geq 0}\frac{(-1)^k}{(3k+1)^2}\approx\color{blue}{8.56365942}.\tag{2}$$
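The numerical value in $(2)$ can be checked by summing the alternating series directly (Python sketch; by the alternating series estimate, the truncation error after $10^6$ terms is below $10^{-12}$):

```python
# Partial sum of the alternating series 9 * sum (-1)^k / (3k+1)^2.
s = 0.0
for k in range(10**6):
    s += (-1)**k / (3*k + 1)**2
I = 9 * s
assert abs(I - 8.56365942) < 1e-5
```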
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Prove the normal approximation of beta distribution How should I prove the normal approximation of beta distribution as follows:
Let $\mathrm B_{r_1, r_2}\sim \mathrm{Beta}(r_1, r_2)$, then prove that
$\sqrt{r_1+r_2} (\mathrm B_{r_1, r_2}- \dfrac{r_1}{r_1+r_2}) \to \mathrm N(0, \gamma(1-\gamma))$
where $r_1, r_2 \to \infty$ and $\frac{r_1}{r_1+r_2}\to \gamma$ $(0<\gamma<1)$.
My attempt: By some calculation, I figured out that $\mathbb E(\mathrm B_{r_1, r_2})=\dfrac{r_1}{r_1+r_2}$ and $\operatorname{Var}(\mathrm B_{r_1, r_2})=\dfrac{r_1 r_2}{(r_1 + r_2)^2 (r_1 +r_2 +1)}$.
Therefore, it remains to prove that $\sqrt{r_1 + r_2}\dfrac{\mathrm B_{r_1, r_2}-\mathbb E(\mathrm B_{r_1, r_2})}{\sqrt{\operatorname{Var}(\mathrm B_{r_1, r_2})}}$ converges to $Z \sim \mathrm N(0, 1)$.
Here I think I have to apply CLT, but I don't know how to because the given quantity does not contain sample mean. Does anyone have ideas?
Thanks for your help!
|
I finally figured out the answer:
Let $\mathrm B_{r_1, r_2} = \dfrac{V_1}{V_1+V_2}$ where $V_1 \sim \chi^2(2r_1 ) = \mathrm{Gamma}(r_1, 2)$ and $V_2 \sim \chi^2(2r_2 ) = \mathrm{Gamma}(r_2, 2)$ and $V_1, V_2$ are independent.
Now we know that $\dfrac{V_1-2r_1}{\sqrt{4r_1}} \to \mathrm N(0, 1)$ as $r_1\to\infty$ and $\dfrac{V_2-2r_2}{\sqrt{4r_2}} \to \mathrm N(0, 1)$ as $r_2\to\infty$. Therefore, $\sqrt{r_1+r_2}(\dfrac{V_1-2r_1}{\sqrt{4r_1(r_1+r_2)}}, \dfrac{V_2-2r_2}{\sqrt{4r_2(r_1+r_2)}})^t \to \mathrm N_2(0, I)$ where $I$ is the identity matrix.
Also, $\sqrt{r_1+r_2}\left(\dfrac{V_1-2r_1}{\sqrt{4r_1(r_1+r_2)}}, \dfrac{V_2-2r_2}{\sqrt{4r_2(r_1+r_2)}}\right)^t\sim \sqrt{n}\left(\dfrac {V_1}{\sqrt{4\gamma}n}-\sqrt{\gamma}, \dfrac {V_2}{\sqrt{4(1-\gamma)}n}-\sqrt{1-\gamma} \right)^t$
where $r_1+r_2=n$, $r_1 \sim \gamma n$, $r_2 \sim (1-\gamma)n$.
Now let $g(x, y)=\dfrac{\sqrt{\gamma}x}{\sqrt{\gamma}x+\sqrt{1-\gamma}y}$, then $g'(x, y)=\left(\dfrac{\sqrt{\gamma (1-\gamma)}y}{\left(\sqrt{\gamma}x+\sqrt{1-\gamma}y\right)^2}, -\dfrac{\sqrt{\gamma (1-\gamma)}x}{\left(\sqrt{\gamma}x+\sqrt{1-\gamma}y\right)^2} \right)^t$.
Therefore, $\sqrt{n}\left( g\left(\dfrac {V_1}{\sqrt{4\gamma}n}, \dfrac {V_2}{\sqrt{4(1-\gamma)}n} \right)-g\left(\sqrt{\gamma}, \sqrt{1-\gamma}\right) \right)=\sqrt{n}(\mathrm B_{r_1, r_2}-\gamma)\to g'\left(\sqrt{\gamma}, \sqrt{1-\gamma}\right)^t\mathrm N(0, I) = \mathrm N(0, \gamma(1-\gamma))$
Am I right?
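A quick Monte Carlo sanity check of the limit (Python sketch; `random.betavariate` draws the Beta variates, and the tolerances are deliberately loose statistical bounds):

```python
import random

random.seed(0)
r1, r2 = 500, 500
gamma = r1 / (r1 + r2)                 # gamma = 0.5 here
n = r1 + r2

draws = [random.betavariate(r1, r2) for _ in range(20000)]
scaled = [(n ** 0.5) * (b - gamma) for b in draws]

mean = sum(scaled) / len(scaled)
var = sum((x - mean) ** 2 for x in scaled) / len(scaled)

# Limiting distribution is N(0, gamma*(1-gamma)) = N(0, 0.25).
assert abs(mean) < 0.05
assert abs(var - gamma * (1 - gamma)) < 0.05
```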
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Question on adjunctions Let $\mathcal{A},\mathcal{B}$ and $\mathcal{C}$ be three categories. Let $F:\mathcal{A}\to \mathcal{B}$ be a functor, and let $G\dashv H$ be an adjunction, where $G:\mathcal{B}\to\mathcal{C}$ and $H:\mathcal{C}\to\mathcal{B}$.
If $G\circ F$ is to have a right adjoint, is it necessary that
i) there exists a functor $K:\mathcal{B}\to\mathcal{A}$? And, if yes,
ii) is $K$ right adjoint to $F$?
|
i) Your assumptions imply that all three categories are either empty or nonempty simultaneously, so that there certainly exists some functor $B\to A.$ But that's not very interesting.
ii) There's no reason $F$ should have a right adjoint. For instance, if $A,C$ are both copies of the category with a single object and arrow, and if $B$ is a category with a terminal object, then your assumptions all hold. But $F$ will only have a right adjoint if it happens to send the object of $A$ to an initial object of $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Value of this expression If $\alpha$ and $\beta$ are the roots of the equation $$x^2 + x − 3 = 0$$ find the value of the expression $4\beta^2 − \alpha^3$.
I tried using the sum-of-roots and product-of-roots formulas but could not get the answer.
|
We have $$\beta^2=3-\beta\\4\beta^2=12-4\beta$$
Also $$\alpha^2=3-\alpha\\\alpha^3=3\alpha-\alpha^2=3\alpha-3+\alpha=4\alpha-3$$
Thus, $$4\beta^2-\alpha^3=15-4(\alpha+\beta)=15+4=19$$
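A quick numerical check (Python sketch): since the expression reduces to $15-4(\alpha+\beta)$, it equals $19$ for either labelling of the roots.

```python
# Roots of x^2 + x - 3 = 0; the expression 4*beta^2 - alpha^3 equals 19
# for either assignment of the roots, since it reduces to 15 - 4*(alpha+beta).
r = 13 ** 0.5
alpha, beta = (-1 + r) / 2, (-1 - r) / 2

for a, b in [(alpha, beta), (beta, alpha)]:
    assert abs(4 * b**2 - a**3 - 19) < 1e-9
```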
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2362954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Trace of product of positive matrices Let $A$, $B$ be symmetric matrices over $\mathbb{R}$ of the same dimension. If $A$ has only positive eigenvalues and $B$ has only nonnegative eigenvalues, is $\text{trace}(AB)\ge 0?$
If yes, prove it. If not, give a counterexample.
|
Yes. In fact all eigenvalues of $A B$ are nonnegative. This is because $A^{1/2} B A^{1/2}$ is positive semidefinite, where
$A^{1/2}$ is the positive definite square root of $A$, and
$A B = A^{1/2} (A^{1/2} B)$ and $(A^{1/2} B) A^{1/2}$ have the same
eigenvalues (the products of two matrices in either order always have the same nonzero eigenvalues).
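A quick numerical check (Python sketch, with $A = MM^T + I$ positive definite and $B = NN^T$ positive semidefinite for randomly generated $M$, $N$):

```python
import random

random.seed(1)

def matmul(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

for _ in range(100):
    M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    N = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    # A = M M^T + I is positive definite; B = N N^T is positive semidefinite.
    A = matmul(M, transpose(M))
    for i in range(3):
        A[i][i] += 1.0
    B = matmul(N, transpose(N))
    tr = sum(matmul(A, B)[i][i] for i in range(3))
    assert tr >= 0.0
```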
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is there any geometric demonstration that proves $m= \frac{y_2 -y_1}{x_2 - x_1}$ (the slope formula)? Is there any geometric demonstration that proves $$m= \frac{y_2 -y_1}{x_2 - x_1}?$$
I know one algebraic demonstration. It's:
1) If you have a function $y=mx+b$ and two points $P_1(x_1,y_1)$ and $P_2(x_2,y_2)$, you can make two equations and solve for $m$:
$$1) \: y_1=mx_1+b, $$
$$2) \: y_2=mx_2+b. $$
If you subtract equation $1$ from equation $2$, you get:
$$y_2-y_1=mx_2+b-(mx_1+b)$$
$$y_2-y_1=mx_2+b-mx_1-b$$
$$y_2-y_1=mx_2-mx_1$$
$$y_2-y_1=m(x_2-x_1)$$
$$ \frac{y_2 -y_1}{x_2 - x_1}=m.$$
That's the one I know, but since it's more of a geometry topic I'd rather have a geometric demonstration. Thanks.
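A small numerical illustration of the fact a geometric proof makes precise (Python sketch; the line below is an arbitrary choice): any two distinct points on the same line recover the same ratio, which is exactly the similar-triangles picture.

```python
# Any two distinct points on y = m*x + b recover the same slope m.
m, b = 1.5, -2.0
line = lambda x: m * x + b

pairs = [(0.0, 1.0), (-3.0, 2.5), (10.0, 10.1)]
for x1, x2 in pairs:
    slope = (line(x2) - line(x1)) / (x2 - x1)
    assert abs(slope - m) < 1e-9
```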
I figured out this demonstration:
I am not an English speaker so I translated as best I could. Tell me what you think about it and whether it's valid. Thanks.
NOTE: In the second picture it should have said: "I'll take a third point..."
|
Why did mathematicians decide that the slope was important?
It's important for a lot of reasons, but they mostly come down to differentiation. The wiki page has some beautiful geometrical demonstrations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Surface measure $\mathrm{d}S$ of $M$ corresponding to parametrisation I'm reading these notes, and there's a part I don't understand on page 72 (76 in the PDF), right after the equation marked with two stars.
Note that $\sqrt{1+|Dg(y')|^2}dy'$ is the surface measure $\mathrm{d}S$ of $M$ corresponding to the parametrisation $y'\mapsto(g(y'),y')$ of $M$
A reference is then given, but I can only find it in German and can't understand it.
$y'$ is some point in $\mathbb{R}^{n-1}$, $M$ is a hypersurface in an open subset of $\mathbb{R}^n$. $g$ is a function from $\mathbb{R}^{n-1}$ to $\mathbb{R}$.
|
This hypersurface $S$ is parametrized by $$y'\mapsto (y',g(y'))\in{\mathbb R}^n\qquad(y'\in{\mathbb R}^{n-1})\ ,$$
hence is considered as graph of the scalar function $g$ defined on ${\mathbb R}^{n-1}$.
If $S$ were a hyperplane $y_n=a_1y_1+a_2y_2+\ldots+a_{n-1}y_{n-1}$ then the surface measure on $S$ would be ${1\over\cos\phi}{\rm d}(y')$, where ${\rm d}(y')$ is the $(n-1)$-dimensional euclidean (or Lebesgue) measure on the coordinate hyperplane $y_n=0$, and $\phi$ is the angle between the normal of $S$ and the $y_n$-axis. Now
$${1\over\cos\phi}={1\cdot\sqrt{1+a_1^2+a_2^2+\ldots+a_{n-1}^2}\over\langle{\bf e}_n,(-a_1,-a_2,\ldots,-a_{n-1},1)\rangle}=\sqrt{1+a_1^2+a_2^2+\ldots+a_{n-1}^2}=\sqrt{1+|{\bf a}|^2}\ .$$
In the case at hand the tangent plane $T_p$ to the curved surface $S$ at some point $p\in S$ is given by
$$T_p:\quad y_n-p_n=\nabla g(p')\cdot (y'-p')\ .$$
The local factor ${1\over\cos\phi}$ therefore is given by $\sqrt{1+|\nabla g(y')|^2}$, as indicated in your source.
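As a concrete check of the factor $\sqrt{1+|\nabla g(y')|^2}$ (Python sketch): for $g(x,y)=\sqrt{1-x^2-y^2}$ the graph is the upper unit hemisphere, whose area is $2\pi$; in polar coordinates the surface element reduces to $r/\sqrt{1-r^2}\,dr\,d\theta$.

```python
import math

# Midpoint rule for the (improper but convergent) radial integral of
# r / sqrt(1 - r^2) on [0, 1]; multiplying by 2*pi gives the hemisphere area.
N = 10**6
h = 1.0 / N
integral = sum((i + 0.5) * h / math.sqrt(1 - ((i + 0.5) * h) ** 2) * h
               for i in range(N))
area = 2 * math.pi * integral
assert abs(area - 2 * math.pi) < 0.01
```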
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Expressing variable $q$ in terms of $p$, where $p$ and $q$ are constants in a quadratic. Suppose that $p$ and $q$ are constants such that the smallest possible value of $x^2+px+q$ is $0$. Express $q$ in terms of $p$.
I am unsure what it is asking; I feel it is something very simple, but I can't see it. Any solutions or walk-throughs would be appreciated.
|
Hint: Apply completing the square
$$x^2+px+q=\left(x+\frac{p}2 \right)^2-\frac{p^2}{4}+q$$
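Following the hint, the minimum value is $-\frac{p^2}{4}+q$, so it equals $0$ exactly when $q=\frac{p^2}{4}$; a quick numerical check (Python sketch):

```python
# With q = p^2/4 the vertex value of x^2 + p*x + q is exactly 0 and the
# quadratic is nonnegative everywhere near the vertex.
for p in [-3.0, 0.0, 2.5, 10.0]:
    q = p * p / 4
    f = lambda x: x * x + p * x + q
    vertex = -p / 2
    assert f(vertex) == 0.0
    assert all(f(vertex + d) >= 0 for d in (-1.0, -0.1, 0.1, 1.0))
```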
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Lemma used in Proof of L'Hôpital's Rule for Indeterminate Types of $\infty/\infty$ This question arises from an unproved assumption made in a proof of L'Hôpital's Rule for Indeterminate Types of $\infty/\infty$ from a Real Analysis textbook I am using. The result is intuitively simple to understand, but I am having trouble formulating a rigorous proof based on limit properties of functions and/or sequences.
Statement/Lemma to be Proved:
Let $f$ be a continuous function on an interval $A=(a,b)\!\subset\!\mathbb{R}$. If $\displaystyle{\lim_{x\rightarrow a+}\!f(x)\!=\!\infty}$, then, given any $\alpha\!\in\!\mathbb{R}$, there exists $c\!>\!a$ such that $x\!\in\!A\cap(a,c)$ implies $\alpha\leq f(c)<f(x)$.
Relationship with Infinite Limit Definition:
At first glance, this may appear to simply be the definition of right-hand infinite limits:
*
*$\displaystyle{\lim_{x\rightarrow a+}\!f(x)\!=\!\infty}$ is defined to mean: given any $\alpha\!\in\!\mathbb{R}$, there exists $\delta\!>\!0$, such that $x\!\in\!A\cap(a,a+\delta)$ implies $\alpha<f(x)$.
However, the main difference is that the result I am interested in forces an association between the "$\delta$" and the "$\alpha$" (where $\alpha=f(a+\delta)$, i.e., it forces $c\!\equiv\!a+\delta$ to be in the domain of $f$).
EDIT: The statement to be proved that I originally presented did not require $f$ to be continuous on $A$. However, this was added to address the comment and counterexample below.
EDIT #2: Again, a helpful user (@DanielFischer) commented that the statement after the first edit needed yet an additional limitation--i.e., that $A$ must also be an interval--for it to hold.
|
Assuming that $a$ is a lower limit point of $A\setminus\{a\}$, the statement $\lim_{x\to a+}f(x)=\infty$ is equivalent to $$\lim_{c\to a+}\inf \{f(x): x\in (a,c)\cap A\}=\infty.$$ And the statement you wish to derive from $\lim_{x\to a+}f(x)=\infty$ is equivalent to $$\lim_{c\to a+}\sup \{f(x): x\in (a,c)\cap A\}=\infty.$$ Which is obvious because $\sup S\geq \inf S$ for any $S\subset \mathbb R.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Lengthy integration problem $$ \int\frac{x^3+3x+2}{(x^2+1)^2(x+1)} \, dx$$
I managed to solve the problem using partial fraction decomposition.
But that approach is pretty long, as it creates five unknowns. Is there any shorter method to solve this problem (other than partial fractions)?
I also tried trigonometric substitution and creating the derivative of the denominator in the numerator... but it becomes even longer. Thanks in advance!!
Just to clarify: (My partial fraction decomposition)
$$\frac{x^3+3x+2}{(x^2+1)^2(x+1)}= \frac{Ax+B}{x^2+1} + \frac{Cx+D}{(x^2+1)^2} + \frac{E}{x+1}$$
|
There are much better ways to find the coefficients in partial fractions than solving five equations in five variables.
Writing your function as
$$ \frac{x^3 + 3 x + 2}{(x^2+1)^2 (x+1)} = \frac{Q(x)}{(x^2+1)^2} + \frac{E}{x+1} $$
multiply both sides by $x+1$ and substitute $x=-1$. We get
$$ \frac{-2}{2^2} = 0 + E$$
so $E = -1/2$, and
$$ \eqalign{\frac{Q(x)}{(x^2+1)^2} &= \frac{x^3 + 3 x + 2}{(x^2+1)^2 (x+1)} + \frac{1/2}{x+1}\cr &= \frac{x^3 + 3 x + 2 + (1/2)(x^2+1)^2}{(x^2+1)^2(x+1)}\cr
&= \frac{x^3 + x^2 + x + 5}{2(x^2 + 1)^2} }$$
Now you want this in the form
$$\frac{Ax+B}{x^2+1} + \frac{Cx+D}{(x^2+1)^2} $$
Multiply by $(x^2 + 1)^2$ and substitute $x=i$ (yes, $\sqrt{-1}$).
We get $$ 2 = C i + D $$
But since we want $C$ and $D$ to be real, we must have $C = 0$, $D = 2$. This leaves
$$ \eqalign{\frac{Ax+B}{x^2+1} &= \frac{x^3 + x^2 + x + 5}{2(x^2 + 1)^2} - \frac{2}{(x^2+1)^2} \cr
&= \frac{x + 1}{2(x^2 + 1)}} $$
and we're done: $A = 1/2$, $B = 1/2$, $C = 0$, $D = 2$, $E = -1/2$.
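The five coefficients can be verified numerically by comparing both sides of the decomposition at a few sample points (Python sketch):

```python
# Check the decomposition at several sample points, with the coefficients
# found above: A = 1/2, B = 1/2, C = 0, D = 2, E = -1/2.
A, B, C, D, E = 0.5, 0.5, 0.0, 2.0, -0.5

def lhs(x):
    return (x**3 + 3*x + 2) / ((x**2 + 1)**2 * (x + 1))

def rhs(x):
    return ((A*x + B) / (x**2 + 1)
            + (C*x + D) / (x**2 + 1)**2
            + E / (x + 1))

for x in [0.0, 1.0, 2.0, -0.5, 3.7]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
```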
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
How can I express $\frac{dy}{dx}$ if I make the change of variable $x-1 = t$? If $y$ is a function of $x$ and I make the change of variable $x-1=t$, what does $\frac{dy}{dx}$ equal in terms of $\frac{dy}{dt}$?
|
Since $x-1=t$, we have $x(t)=t+1$, so $\frac{dx}{dt}=1$.
Now: $$y(x(t))'=\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}=\frac{dy}{dx}\times1=\frac{dy}{dx}$$
Therefore: $$\frac{dy}{dx}=\frac{dy}{dt}$$
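A finite-difference check (Python sketch, using the sample function $y(x)=\sin x$):

```python
import math

# dy/dx = dy/dt when t = x - 1, checked by central differences.
h = 1e-6
x0 = 0.8
t0 = x0 - 1

y_of_x = math.sin
y_of_t = lambda t: math.sin(t + 1)       # the same y, viewed as a function of t

dydx = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)
dydt = (y_of_t(t0 + h) - y_of_t(t0 - h)) / (2 * h)
assert abs(dydx - dydt) < 1e-8
```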
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Composition of exponential with an isometry I have trouble understanding the following equation.
We are given an isometry $$F: M \rightarrow M$$ on a Riemannian manifold $M$.
Why does the following hold true?
$$
F \circ \exp_p = \exp_{F(p)} \circ dF_p
$$
Does someone have any ideas? Thank you.
|
Let $M$ be a Riemannian manifold and for $p \in M$, $D(p)$ be the open subset of the tangent space $T_{p}M$ such that :
$$ D(p) = \lbrace v \in T_{p}M, \; \gamma_{v}(1) \; \text{exists} \rbrace $$
where $\gamma_{v}$ is the unique maximal geodesic of $M$ with initial conditions: $\gamma_{v}(0) = p$ and $\dot{\gamma_{v}}(0) = v$. $D(p)$ is the definition domain of $\mathrm{exp}_{p}$.
Let $p \in M$ and $v \in D(p)$. Since $F$ is an isometry, it is distance-preserving and it sends geodesics of $M$ onto geodesics of $M$ (this can be seen using the length-minimizing property of geodesics). Hence, $\eta \, : t \, \mapsto \, F\big( \mathrm{exp}_{p}(tv) \big)$ is a geodesic of $M$. It satisfies: $\eta(0) = F(p)$ and $\dot{\eta}(0) = dF(p)(v)$. The curve $t \mapsto \mathrm{exp}_{F(p)}\big( t\,dF(p)(v) \big)$ is another geodesic of $M$ which satisfies the same set of initial conditions. By uniqueness of geodesics, these two curves are equal.
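In the flat case the identity can be checked directly (Python sketch): on $\mathbb{R}^2$, $\exp_p(v)=p+v$, a rotation $F$ about the origin is an isometry, and $dF_p=F$ since $F$ is linear, so the identity reduces to linearity of $F$.

```python
import math

# F(exp_p(v)) = exp_{F(p)}(dF_p(v)) for a rotation of the Euclidean plane,
# i.e. F(p + v) = F(p) + F(v).
theta = 0.7
def F(w):
    c, s = math.cos(theta), math.sin(theta)
    return (c*w[0] - s*w[1], s*w[0] + c*w[1])

p, v = (1.0, 2.0), (0.3, -0.4)
left = F((p[0] + v[0], p[1] + v[1]))
right = tuple(a + b for a, b in zip(F(p), F(v)))
assert all(abs(l - r) < 1e-12 for l, r in zip(left, right))
```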
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the limit $ \lim_{x \to (\frac{1}{2})^{-}} \frac{\ln(1 - 2x)}{\tan \pi x} $ Question
$$
\lim_{x \to (\frac{1}{2})^{-}} \frac{\ln(1 - 2x)}{\tan \pi x}
$$
I'm not sure how to go about this limit; I've tried to apply L'Hôpital's rule
(as shown).
It seems that the form will be forever indeterminate. Unless I'm missing
something, but the limit is (apparently) zero.
Working
As this is in the indeterminate form of $- \infty / + \infty$, apply L'Hopital's rule.
Let $\frac{\ln(1 - 2x)}{\tan \pi x} = f/g$, then
$f' = \frac{-2}{1 - 2x}$ , and $f'' = \frac{-4}{(1 - 2x)^2}$
$g' = \pi \sec^2 \pi x$ (which is equal to $\pi (\tan^2 (\pi x )+ 1)$ ) and
$g'' = 2 \pi^2 \sec^2 (\pi x) \tan \pi x $ ( or
$2 \pi^2 (\tan^2 (\pi x) + 1) \tan \pi x$
)
Using the first derivatives gives
$$
\lim_{x \to (\frac{1}{2})^{-}}
\frac{\frac{-2}{1 - 2x}}{\pi \sec^2 \pi x}
$$
Which is in the form
$$
\frac{- \infty}{+ \infty}
$$
So that I would now use the second derivative, which is
$$
\lim_{x \to (\frac{1}{2})^{-}}
\frac{
\frac{-4}{(1 - 2x)^2}
}{
2 \pi^2 \sec^2 (\pi x) \tan \pi x
}
$$
Or
$$
\lim_{x \to (\frac{1}{2})^{-}}
\frac{
\frac{-4}{(1 - 2x)^2}
}{
2 \pi^2 (\tan^2 (\pi x) + 1) \tan \pi x
}
$$
But this is still in an indeterminate form?
|
You do not need de l'Hospital rule for the evaluation of such limit:
$$\lim_{x\to(1/2)^{-}}\frac{\log(1-2x)}{\tan(\pi x)}=\lim_{z\to 0^+}\frac{\log(2z)}{\cot(\pi z)}=\lim_{z\to 0^+}\frac{z\log(2z)}{z\cot(\pi z)} =\frac{0}{\frac{1}{\pi}}=0.$$
It is enough to exploit a substitution $x\mapsto \frac{1}{2}-z$ and the well-known limits (that should be studied before approaching de l'Hospital rule, the Stolz-Cesàro theorem and so on)
$$ \lim_{x\to 0}\frac{\sin x}{x}=1,\qquad \forall\varepsilon>0,\;\;\lim_{x\to 0^+} x^{\varepsilon}\log(x)=0.$$
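A numerical check of the limit (Python sketch): the logarithm diverges much more slowly than the tangent, so the quotient shrinks as $x\to(1/2)^-$.

```python
import math

# Sample x just below 1/2: the quotient ln(1-2x)/tan(pi x) tends to 0.
values = [math.log(1 - 2*x) / math.tan(math.pi * x)
          for x in (0.49, 0.4999, 0.499999)]
assert all(abs(v) > abs(w) for v, w in zip(values, values[1:]))
assert abs(values[-1]) < 1e-4
```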
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Find number of real solutions of $3x^5+2x^4+x^3+2x^2-x-2=0$
Find the number of real roots of $$f(x)=3x^5+2x^4+x^3+2x^2-x-2$$
I tried using differentiation:
$$f'(x)=15x^4+8x^3+3x^2+4x-1=0$$ and I found the number of real roots of $f'(x)=0$ by drawing the graphs of $g(x)=-15x^4$ and $h(x)=8x^3+3x^2+4x-1$; from the graphs there are clearly two real solutions, since $h(x)$ is an increasing function.
But now how to proceed?
|
$f'(x)=15x^4+8x^3+3x^2+4x-1>0$ for $x>\frac{1}{2}$ and $f\left(\frac{1}{2}\right)<0$.
Hence, since $\lim\limits_{x\rightarrow+\infty}f(x)=+\infty$, we see that there is unique root for $x>\frac{1}{2}$.
Now, prove that $f(x)<0$ for all $x\leq\frac{1}{2}$.
For example, for $0\leq x\leq\frac{1}{2}$ we have
$$3x^5+2x^4+x^3+2x^2-x-2=$$
$$=\left(3x^5-\frac{3}{4}x^3\right)+\left(2x^4-x^3\right)+(2x^2-x)+\left(\frac{11}{4}x^3-2\right)<0.$$
For $x<0$ we can replace $x$ by $-x$, and it's enough to prove that
$$-3x^5+2x^4-x^3+2x^2+x-2<0$$ for all $x>0$ or
$$3x^5-2x^4+x^3-2x^2-x+2>0$$ or
$$3x^5-3x^4-3x^3+3x^2+x^4+4x^3-5x^2-x+2>0$$ or
$$3x^2(x+1)(x-1)^2+x^4-2x^2+1+4x^3-3x^2-x+1>0$$ or
$$3x^2(x+1)(x-1)^2+(x^2-1)^2+4x^3-3x^2-x+1>0$$ or
$$3x^2(x+1)(x-1)^2+(x^2-1)^2+(x-0.5)^2+4x^3-4x^2+0.75>0$$ and it remains to prove that
$$4x^3-4x^2+0.75>0,$$
which is AM-GM:
$$4x^3-4x^2+0.75=2x^3+2x^3+0.75-4x^2\geq3\sqrt[3]{(2x^3)^2\cdot0.75}-4x^2=\left(3\sqrt[3]3-4\right)x^2>0,$$
which gives that our equation has one real root.
Done!
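A quick numerical cross-check (Python sketch): by the Cauchy bound all real roots satisfy $|x|\le 1+\frac{2}{3}<2$, and counting sign changes of $f$ on a fine grid over $[-2,2]$ finds exactly one.

```python
# Count sign changes of f on a fine grid; all real roots lie in [-2, 2]
# by the Cauchy bound 1 + max|a_i|/|a_5| = 1 + 2/3 < 2.
def f(x):
    return 3*x**5 + 2*x**4 + x**3 + 2*x**2 - x - 2

xs = [-2 + i * 1e-4 for i in range(40001)]
signs = [f(x) > 0 for x in xs]
changes = sum(s != t for s, t in zip(signs, signs[1:]))
assert changes == 1
```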
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2363898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Expected value of the maximum of binomial random variables Let $X = \{X_1, ..., X_k\}$ be a set of $k$ iid variables drawn from a binomial distribution: $X_i \sim B(n, p)$. How can one obtain an upper bound on the expected value of $\max_i X_i$?
Several related questions (such as: Bounds for the maximum of binomial random variables or Maximum of Binomial Random Variables) give such estimates for the case $n = k$. I am, however, interested in the general case.
|
$\newcommand\P{\mathbb{P}}\newcommand\E{\mathbb{E}}\newcommand\ol{\overline}$Write $\ol X_{\max} = \max\{\ol X_i\} = \max\{\tfrac{1}{n} X_i\}$. We can compute
$$
\P(\ol X_{\max} > p + t) = \P(\ol X_i > p + t \text{ for some } i=1,\ldots,k) \leq k\P(\ol X_1 > p + t) \leq k\,e^{-2nt^2}
$$
by the union bound and Hoeffding's inequality. Denoting $(x)_+ = \max\{x,0\}$,
this implies
$$
\E \ol X_{\max} \leq p + \E(\ol X_{\max} - p)_+ = p + \int_0^\infty \P(\ol X_{\max} > p + t)\,dt \leq p + k\sqrt{\frac{\pi}{8n}}
$$
In terms of the original variables $X_{\max} = n\ol X_{\max}$, we have
$$
\E X_{\max} \leq np + k\sqrt{\frac{n\pi}{8}} \leq np + \tfrac{2}{3}k\sqrt{n}.
$$
Note that this is much weaker in terms of $k$ than the asymptotically correct bound
$$
\E X_{\max} \asymp np + \sqrt{2p(1-p)} \sqrt{n\log k}
$$
based on a normal approximation and the fact that if $Z_i\sim\mathcal{N}(0,1)$, $\E Z_{\max} \asymp \sqrt{2\log k}$, but it does give the right dependence on $n$.
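A Monte Carlo sanity check (Python sketch; the parameter choices are arbitrary): the derived bound holds with plenty of room, while the normal-approximation scale is much closer to the observed mean.

```python
import math
import random

random.seed(0)
n, p, k, trials = 100, 0.3, 10, 1000

def binom():
    # One Binomial(n, p) draw as a sum of Bernoulli trials.
    return sum(random.random() < p for _ in range(n))

emp = sum(max(binom() for _ in range(k)) for _ in range(trials)) / trials

upper = n * p + (2 / 3) * k * math.sqrt(n)            # the derived bound
normal_scale = n * p + math.sqrt(2 * p * (1 - p)) * math.sqrt(n * math.log(k))
assert n * p < emp <= upper
# The normal-approximation scale is far closer to the empirical mean.
assert abs(emp - normal_scale) < 5.0
```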
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to remember which function is concave and which one is convex? I always struggle to remember when a function is convex and concave:
Do you have a particular trick to help you remember this?
My trick is based on the Spanish phrase "No cabe", pronounced nô ˈka.βe, which sounds just like "concave". "No cabe" means "it does not fit". Thus, whilst you can put something into a convex function (e.g. think of a bowl), you cannot put something into a concave function. Hence the relation.
I am curious on what other, perhaps more efficient methods people use.
|
conVex - V looks like the convex function :)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 2
}
|
Put $f(t) =\int_0^{t^2} \sin(x^4)\, dx$ and give a formula for $f'(t)$ $$f\left(t\right) = \int_{0}^{t^2} \sin\left(x^4\right)\ {\rm d}x$$
My answer:
The chain rule gives
$$(f \circ g)'(t) = g'(t)f'(g(t))$$
So if
$$g(t) = t^2$$ and
$$f'(x) = \sin(x^4)$$
Then $f'(t)$ must be
$$f'(t) = 2t \sin(t^4)$$
Which was wrong. What is the correct way to approach this problem?
|
Hint: Another method is to try to find a change of variables so that you can write
$$f(t) = \int_c^t g(x)\; dx$$
and then you have $f'(t) = g(t)$. The important thing is that you end up with "$t$" as the upper limit.
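The correct derivative is $f'(t)=2t\sin\big((t^2)^4\big)=2t\sin(t^8)$, the upper limit being $t^2$; a numerical check (Python sketch, approximating the integral with Simpson's rule and the derivative with a central difference):

```python
import math

def integral(upper, steps=2000):
    """Composite Simpson rule for sin(x^4) on [0, upper]."""
    h = upper / steps
    s = math.sin(0.0) + math.sin(upper ** 4)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * math.sin((i * h) ** 4)
    return s * h / 3

def f(t):
    return integral(t * t)

t, h = 1.0, 1e-4
numeric = (f(t + h) - f(t - h)) / (2 * h)
exact = 2 * t * math.sin(t ** 8)       # chain rule with the upper limit t^2
assert abs(numeric - exact) < 1e-6
```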
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
integer solutions for $d^2 + 4kT=S^2$ for a given $T$ Consider the following equation:
$d^2 + 4kT=S^2$
We are interested in nonzero integers $d,k,T,S$ that satisfy the above equation. Specifically, we are interested in values of $T$ for which there exist multiple solutions for $d,k,S$. For example, do there exist $d_1,d_2,k_1,k_2,S_1,S_2,T$ such that $d_1^2 + 4k_1 T=S_1^2$ and $d_2^2 + 4k_2 T=S_2^2$?
|
The discriminant of the equation
$$kX^2-dX-T=0$$
is $d^2+4kT$.
Since $k,d,T$ are integers, this discriminant is the square of an integer if and only if the equation has rational solutions, let them be $u$ and $v$.
We know that $uv=-T/k$ and $u+v=d/k$. Then, given $T$, you can just assign values to $k$ and choose a pair of rational numbers $u,v$ such that $uv=-T/k$, keeping in mind only that the denominator of $u+v$ must be a divisor of $k$ (so that $d=k(u+v)$ is an integer).
Example: Pick $T=12$, $k=10$. Choose $u,v$ such that $uv=-T/k=-6/5$. For example: $u=3$, $v=-2/5$. Then $u+v=13/5$, so $d=26$. Then $26^2+4\cdot12\cdot10=676+480=1156=34^2$. The denominators of $u$ and $v$ should be both divisors of $k=10$: if we picked $u=42$, $v=-1/35$ then the denominator of $u+v$ is $35$ and $d$ would not be integer.
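A small search (Python sketch) confirms the worked example and shows that a single $T$ admits many solutions $(d,k,S)$:

```python
import math

# For T = 12, search small d, k for perfect-square values of d^2 + 4*k*T.
T = 12
solutions = []
for d in range(1, 30):
    for k in range(1, 30):
        val = d*d + 4*k*T
        S = math.isqrt(val)
        if S * S == val:
            solutions.append((d, k, S))

assert (26, 10, 34) in solutions      # the worked example: 676 + 480 = 34^2
assert len(solutions) > 1             # many choices of (d, k, S) for T = 12
```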
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Graded vector space conditions Wikipedia defines the graded (ring, module, vector space, ...) as here.
I noted that for rings and modules it requires an inclusion condition, but for vector spaces it does not: just the direct sum condition. Why?
Any hint?
|
Any inclusion $V \to W$ of vector spaces gives a direct sum decomposition $W \cong V \oplus V'$ for some complement $V'$ (for instance $V^{\perp}$ after choosing an inner product), so for vector spaces a chain of inclusions carries no more information than a direct sum decomposition.
A graded vector space $W = V_1 \oplus V_2$ would have filtered piece in degrees $\leq 2$ equal to $W$, not $V_2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove $\{f(x,y) \in \mathbb{C}[[x,y]] \mid f(\zeta_n x,\zeta_n^{-1}y) = f(x,y)\}$ is not isomorphic to the formal series ring
Suppose that $\zeta_n$ is a primitive n-th root of $1$.
Let $R$ be $\{f(x,y) \in \mathbb{C}[[x,y]] \mid f(\zeta_n x,\zeta_n^{-1}y) = f(x,y)\}$.
Try to prove that $R$ is not isomorphic to $\mathbb{C}[[x,y]]$ as $\mathbb{C}$-algebras.
My Work: Assume that $f(x,y)=\sum a_{i,j}x^iy^j$. Then $f(\zeta_n x,\zeta_n^{-1}y)=f(x,y)$ implies that $a_{i,j}=0$ when $n\not\mid i-j$. Thus $f$ is a series in the monomials $(xy)^jx^{kn}$ and $(xy)^jy^{kn}$ with $j,k\geq 0$. Then I can't go any further. I think we need some properties of $\mathbb{C}[[x,y]]$.
Any one has some advice or ideas?
And there is another question, let $S=\{f[x,y]\in \mathbb{C}[[x,y]]\mid f[\zeta_nx,\zeta_ny]=f[x,y]\}$. Try to prove that $S$ is not isomorphic to $R$ as $\mathbb{C}$-algebras.
|
Each of the three rings $\mathbb{C}[[x,y]]$, $R$, and $S$ is local, with a unique maximal ideal consisting of power series with constant term $0$. For each ring, I will compute the vector space dimension of the quotient of the ring by the square of the maximal ideal. For $n\geq 3$ the three dimensions will be distinct, which proves the three rings are pairwise non-isomorphic as $\mathbb{C}$-algebras. (For $n=2$ the conditions $n\mid i-j$ and $n\mid i+j$ coincide, so $R=S$; the computations below still distinguish each of them from $\mathbb{C}[[x,y]]$.)
For $\mathbb{C}[[x,y]]$, the maximal ideal $\mathfrak{m}$ is the set of power series whose monomials $x^iy^j$ satisfy $i+j\geq 1$, so $\mathfrak{m}^2$ is power series whose monomials $x^iy^j$ satisfy $i+j\geq 2$. The quotient $\mathbb{C}[[x,y]]/\mathfrak{m}^2$ has a basis $1$, $x$, $y$, so is $3$-dimensional.
The ring $R$ consists of power series whose monomials $x^iy^j$ satisfy $n|i-j$. The maximal ideal $\mathfrak{m}$ is the set of power series whose monomials $x^iy^j$ satisfy $i+j\geq 1$ and $n|i-j$. By taking pairwise products of such monomials, we can get monomials $x^iy^j$ satisfying either $i$, $j\geq 2$, or $i\geq 1$, $j\geq n$, or $j\geq 1$, $i\geq n$. The quotient $R/\mathfrak{m}^2$ has a basis $1$, $xy$, $x^n$, $y^n$, so is $4$-dimensional.
The ring $S$ consists of power series whose monomials $x^iy^j$ satisfy $n|i+j$. The maximal ideal $\mathfrak{m}$ is the set of power series whose monomials $x^iy^j$ satisfy $n|i+j$ and $i+j\geq n$, so $\mathfrak{m}^2$ is the set of power series whose monomials $x^iy^j$ satisfy $n|i+j$ and $i+j\geq 2n$. The quotient $S/\mathfrak{m}^2$ has a basis
$$
1,x^n, x^{n-1}y, x^{n-2}y^2,\ldots,xy^{n-1},y^n,
$$
so is $(n+2)$-dimensional.
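The three dimension counts can be verified by brute force, since all the ideals involved are monomial ideals. A sketch in Python (the exponent bound is my own choice, taken large enough that every basis monomial and every degree-$2n$ product appears):

```python
from itertools import product

def quotient_dim(in_ring, bound):
    """Dimension of ring/m^2 for a subring of C[[x,y]] spanned by the
    monomials x^i y^j with in_ring(i, j) true; m is the ideal spanned by
    such monomials of positive degree.  Brute force over exponents <= bound."""
    ring = [(i, j) for i, j in product(range(bound + 1), repeat=2) if in_ring(i, j)]
    m = [p for p in ring if p[0] + p[1] >= 1]
    m2 = {(a + c, b + d) for (a, b) in m for (c, d) in m}
    # a basis of ring/m^2 is given by the ring monomials not lying in m^2
    return len([p for p in ring if p not in m2])

# C[[x,y]] itself: dimension 3 (basis 1, x, y)
assert quotient_dim(lambda i, j: True, 6) == 3
for n in (2, 3, 4):
    # R: monomials with n | i - j  ->  dimension 4 (basis 1, xy, x^n, y^n)
    assert quotient_dim(lambda i, j: (i - j) % n == 0, 2 * n + 2) == 4
    # S: monomials with n | i + j  ->  dimension n + 2
    assert quotient_dim(lambda i, j: (i + j) % n == 0, 2 * n + 2) == n + 2
```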
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
If the adjoint of an operator is bounded is the operator as well? Let $X$ and $Y$ be normed vector spaces. Let $T: X \to Y$ be a linear operator. Let $T^* : Y^* \to X^*$ be the adjoint of $T$ defined by $T^*(f) =f \cdot T$. Show that if $T^*$ is bounded then $T$ is bounded.
I know that the converse of this statement is true but I'm not sure about this direction. If it is not true, I am wondering if there is some extra hypothesis which makes it true.
|
For any $x\in X$ and $\phi\in Y^*$, we have
$$|T^*\phi(x)|=|\phi(Tx)|\leq\|\phi\|\|T\|\|x\|.$$
It follows that
$$\|T^*\phi\|\leq\|\phi\|\|T\|,$$
thus by definition of the operator norm we have
$$\|T^*\|\leq \|T\|.$$
Similarly we have
$$\|T^{**}\|\leq\|T^*\|,$$
where $T^{**}:X^{**}\to Y^{**}$ is the adjoint operator of $T^*$. Now let $J_X:X\to X^{**}$ be the canonical isometry given by
$$(J_Xx)(\psi)=\psi(x),\quad\forall x\in X,\psi\in X^*.$$
Then for any $x\in X, \phi\in Y^*$ we have
$$(T^{**}(J_Xx))(\phi)=(J_Xx)(T^*\phi)=(T^*\phi)(x)=\phi(Tx)=(J_Y(Tx))(\phi).$$
It follows that
$$|J_Y(Tx)(\phi)|\leq \|T^{**}\|\|J_Xx\|\|\phi\|=\|T^{**}\|\|x\|\|\phi\|,$$
thus
$$\|T^{**}\|\|x\|\geq \|J_Y(Tx)\|=\|Tx\|,$$
and hence
$$\|T^{**}\|\geq\|T\|.$$
Consequently we have
$$\|T\|\leq\|T^*\|,$$
which means $T$ is bounded. In fact we can see that $\|T\|=\|T^*\|$.
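The equality $\|T\|=\|T^*\|$ can be illustrated concretely in the finite-dimensional Euclidean case, where the adjoint is the transpose and the operator norm is the largest singular value. The sketch below (pure Python; the particular matrix and the power-iteration scheme are my own choices, not part of the question) estimates $\|T\|$ and $\|T^{\mathsf T}\|$ by two independent iterations and checks that they agree:

```python
import math

# a fixed 3x2 matrix T : R^2 -> R^3; with Euclidean norms, T* is the transpose
T = [[1.0, 2.0],
     [0.0, 3.0],
     [4.0, -1.0]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def op_norm(M, n_in, iters=1000):
    """Largest singular value of M (its Euclidean operator norm), by power
    iteration on M^T M starting from a generic vector."""
    v = [1.0] * n_in
    Mt = transpose(M)
    for _ in range(iters):
        w = matvec(Mt, matvec(M, v))
        s = math.sqrt(sum(x * x for x in w))
        v = [x / s for x in w]
    return math.sqrt(sum(x * x for x in matvec(M, v)))

# ||T|| and ||T^T|| agree; for this matrix both equal sqrt(18)
assert abs(op_norm(T, 2) - op_norm(transpose(T), 3)) < 1e-9
assert abs(op_norm(T, 2) - math.sqrt(18.0)) < 1e-9
```

Here $T^{\mathsf T}T=\begin{pmatrix}17&-2\\-2&14\end{pmatrix}$ has eigenvalues $18$ and $13$, so both operator norms equal $\sqrt{18}$.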
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
$\epsilon$ $\delta$ again! Is the following proof correct?
Theorem. If $\lim_{x\to c}f(x)=L$ and $L>0$ then there is some $\delta>0$ such that for all $x$ such that $0<|x-c|<\delta$, $f(x)>0$.
Proof. Let $\epsilon=\frac{L}{2}$ and since $\lim_{x\to c}f(x)=L$ it follows that for some $\delta>0$ it is the case that
$$\forall x(0<|x-c|<\delta\implies |f(x)-L|<\epsilon)\tag{1}$$
It is now apparent that given any arbitrary $x$ such that $0<|x-c|<\delta$ it follows that $|f(x)-L|<\epsilon$.
Consequently $-\frac{L}{2}<f(x)-L<\frac{L}{2}$, and so $\frac{L}{2}<f(x)<\frac{3L}{2}$. But $0<L$, and therefore $0<\frac{L}{2}$, and thus $0<\frac{L}{2}<f(x)$.
$\blacksquare$
|
It is correct, but too long, since you write$$\forall x(0<|x-c|<\delta\implies |f(x)-L|<\epsilon)$$and, right after that, “It is now apparent that given any arbitrary $x$ such that $0<|x-c|<\delta$ it follows that $|f(x)-L|<\epsilon$.” Therefore, you wrote the same thing twice.
Besides, after making $\epsilon=\frac L2$, there is no need to use the symbol $\epsilon$ again. But using it is not an error, of course.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Convex Ideal quadrilaterals are not all the same In hyperbolic geometry, ideal triangles are all congruent to each other.
Convex ideal quadrilaterals (quadrilaterals where all 4 vertices are ideal points) all have the same area $(2\pi)$.
But I think that beyond area they are not all congruent; for example, the angle the diagonals make can be different.
That made me wonder: if we have two ideal quadrilaterals with the same angle between the diagonals, are they congruent? Or is there more to consider?
|
I'd suggest thinking about this in the context of the Poincaré disk model. An isometry of the hyperbolic plane is uniquely determined by mapping three ideal points to their images. That's because that isometry is a Möbius transformation, which is uniquely determined by three points and their images. (Actually if the order of ideal points changes, then you'd have to compose this with an inversion in the unit circle, because the Möbius transformation would exchange inside and outside.)
So you can map any three points to any three points in an isometric way. Conversely, if you fix three corners, the position of the fourth uniquely determines the shape of the quadrilateral.
So what parameters can you use to describe that shape? The angle between the diagonals does seem like an intuitive choice. Coming from a background of projective geometry, I'd probably pick the cross ratio of the four ideal vertices. You can compute this e.g. as $\lambda=\frac{(a-c)(b-d)}{(a-d)(b-c)}$ with $a,b,c,d\in\mathbb C$ having absolute value $1$. Both approaches have benefits and drawbacks.
One thing worth considering is whether you allow for self-intersecting quadrilaterals. If two edges may intersect, then the diagonals would not intersect. Extending them to infinite lines, one could see them intersect in the Beltrami-Klein model, forming an imaginary angle of intersection. So you may want to forbid self intersection, or allow for complex angles. Cross ratios cover the self-intersecting case using real numbers (and $\infty$ for the special case of two certain points coinciding). To forbid them you could restrict the value of the cross ratio to $1<\lambda<\infty$ for $(a,b,c,d)$ cyclic in that order, or to $0<\lambda<1$ for $(a,b,d,c)$ cyclic in that order.
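As a quick numerical illustration of the cross-ratio description (Python; the four sample angles and the particular disk automorphism are arbitrary choices of mine): $\lambda$ is unchanged by a Möbius transformation preserving the unit disk, and it is real for four concyclic points.

```python
import cmath

def cross_ratio(a, b, c, d):
    # the formula from the answer: ((a-c)(b-d)) / ((a-d)(b-c))
    return (a - c) * (b - d) / ((a - d) * (b - c))

# four ideal points on the unit circle
pts = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.5, 4.0)]

# a Moebius map preserving the unit disk (a hyperbolic isometry):
# z -> e^{i theta} (z - w)/(1 - conj(w) z)  with |w| < 1
w, theta = 0.4 + 0.2j, 0.7
def isometry(z):
    return cmath.exp(1j * theta) * (z - w) / (1 - w.conjugate() * z)

lam1 = cross_ratio(*pts)
lam2 = cross_ratio(*[isometry(z) for z in pts])
assert abs(lam1 - lam2) < 1e-10   # invariance under the isometry
assert abs(lam1.imag) < 1e-10     # concyclic points give a real cross ratio
```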
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2364989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Improper integral $\int_{0}^{\infty} \frac{x}{e^{x}-1} \ dx$
$$\int_{0}^{\infty} \frac{x}{e^{x}-1}\ dx$$
Is there a way to evaluate this integral without using the zeta function?
|
$$\text{If } \mathcal{L}\{f\}(s) = F(s), \text{ then } \int_0^{\infty} \frac{f(t)}{e^t-1}\,dt = \sum_{s=1}^{\infty} F(s).$$
$$\mathcal{L}\{t\}(s) =\frac 1{s^2}\implies\int_0^{\infty} \frac t{e^t-1}\,dt = \sum_{s=1}^{\infty}\frac 1{s^2} = \zeta(2) = \frac{\pi^2}{6}$$
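A quick numerical sanity check (Python; the integration interval $[0,40]$, the step count, and the use of Simpson's rule are my own choices — the tail beyond $40$ is below $10^{-15}$): the integral should match $\zeta(2)=\pi^2/6$.

```python
import math

def integrand(x):
    # x/(e^x - 1), with the removable singularity at x = 0 filled in by its limit 1
    return 1.0 if x == 0 else x / math.expm1(x)

# composite Simpson's rule on [0, 40]
a, b, n = 0.0, 40.0, 40000          # n must be even
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
approx = s * h / 3

assert abs(approx - math.pi ** 2 / 6) < 1e-8   # zeta(2) = pi^2/6 ~ 1.6449
```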
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Showing a function can be continuously extended to the unit circle Fix $0 < a < \infty.$ Define for $|z| < 1$ the function $$ f(z) = \sum_{n=0}^\infty 2^{-na} z^{2^n}$$
and show that $f$ extends continuously to the unit circle but can not be analytically continued past the unit circle.
The second part of the problem is a whole new can of worms but I'm even having difficulty with the first part: it's easy to show that $|f|$ is bounded if $z = e^{i\theta}$ but I can't quite figure out the continuity piece.
Let $z_0 = re^{i\phi}$ be a point in the unit disk. Then
$$f(z) - f(z_0) = \sum_{n=0}^{\infty} 2^{-na}(e^{i\theta 2^n} - r^{2^n} e^{i\phi 2^n})
\\= \sum_{n=0}^\infty 2^{-na}[(1+r^{2^n})(e^{i\theta 2^n} - e^{i\phi2^n}) +
(e^{i\phi 2^n} - r^{2^n} e^{i\theta 2^n})]$$
and I tried finding some bounds for $|f(z) - f(z_0)|$ given bounds for $|z-z_0|$ in this manner (and some other ways) but I wasn't able to find any results.
|
Finding explicit bounds for $\lvert f(z) - f(z_0)\rvert$ is unpleasant. It is easier to use the absolute and uniform convergence of the series on the closed unit disk, and then invoke the theorem that the uniform limit of continuous functions is continuous.
To see that the series converges absolutely and uniformly on the closed unit disk, note that the geometric series
$$\sum_{n = 0}^{\infty} 2^{-na} = \frac{1}{1 - 2^{-a}}$$
converges and consists of positive terms, so for $\lvert z\rvert \leqslant 1$ we have
$$\Biggl\lvert f(z) - \sum_{n = 0}^k 2^{-na} z^{2^n}\Biggr\rvert \leqslant \sum_{n = k+1}^{\infty} 2^{-na} \lvert z\rvert^{2^n} \leqslant \sum_{n = k+1}^{\infty} 2^{-na} = \frac{2^{-ka}}{2^a-1}.$$
For the second part of the exercise, note that
$$f(z^2) = \sum_{n = 0}^{\infty} 2^{-na} (z^2)^{2^n} = \sum_{n = 0}^{\infty} 2^{-na} z^{2^{n+1}} = 2^a \sum_{m = 1}^{\infty} 2^{-ma} z^{2^m} = 2^a\bigl(f(z) - z\bigr).$$
Thus a hypothetical analytic continuation to a neighbourhood of $z_0 = e^{i \varphi_0}$ would imply an analytic continuation to a neighbourhood of $z_0^2$. Continuing the argument, we'd have an analytic continuation to a larger disk, and that contradicts the fact that the radius of convergence of the series is $1$.
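The functional equation $f(z^2)=2^a\bigl(f(z)-z\bigr)$ and the tail bound can be checked numerically with truncated sums (Python; the sample point, the choice $a=1$, and the number of terms are my own; with $N$ terms the tail is below $2^{-Na}/(2^a-1)$, so $60$ terms are far more than enough):

```python
def f(z, a=1.0, terms=60):
    # partial sum of f(z) = sum_n 2^{-na} z^{2^n}
    return sum(2.0 ** (-n * a) * z ** (2 ** n) for n in range(terms))

z = 0.7 + 0.2j            # any point with |z| <= 1 works
lhs = f(z * z)
rhs = 2.0 * (f(z) - z)    # 2^a (f(z) - z) with a = 1
assert abs(lhs - rhs) < 1e-10
```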
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is the boundary of a manifold always compact? Coming from a background of undergrad physics training, I know that "the boundary of a boundary is zero".
Does this mean that the boundary of a manifold is always compact(or maybe just closed)?
Keep in mind that some of the terminology that mathematicians use might not be familiar to me due to my aforementioned background.
Thank you.
Note: This question arises from my reading of the anti De-Sitter spacetime which has a compactified Minkowski spacetime as its boundary.
EDIT: How about the boundary of a compact manifold?
|
No, consider $\mathbb R^n$ itself, embedded in the "standard way" in $\mathbb R^{n+1}$ as $\{(x_1,x_2,\ldots,x_n,0): x_i \in \mathbb R\}$. It is itself a manifold with boundary -- and every point is a boundary point -- but it is not compact, since it is not bounded: you can let a single coordinate grow as much as you want to argue for unboundedness.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Showing the following sequence of functions are uniformly convergent
Let $f_n:[0,1]\to \mathbb{R}$ be a sequence of continuously differentiable functions such that
$$f_n(0)=0,\:\: |f_n'(x)|\leq 1, \text{for all }n\geq 1, x\in (0,1).$$
Suppose further that $f_n(.)$ is convergent to some function $f(.)$. Show that $f_n(.)$ converges to $f(.)$ uniformly.
I tried to prove this, but I'm lost and I cannot find a correct approach.
Firstly, I don't know if $f(.)$ is continuous. There must be some way to show $f$ is continuous, but I can't, and I'm not sure if I need it. Secondly, I've obtained that $(f_n(.))$ is uniformly bounded, because
$$\text{for some }z\in(0,x)\subset(0,1),\:\:|f'_n(z)|=\left|\frac{f_n(x)-f_n(0)}{x-0}\right|\leq 1\implies |f_n(x)|\leq x< 1.$$
I tried to show $(f_n(.))$ is equicontinuous. Here is my work.
Let $x\in (0,1)$, and let $\epsilon>0$. As $|f_n'|\leq 1$ on $(0,1)$, the mean value theorem gives $|f_n(x)-f_n(y)|\leq |x-y|$. So if we take $\delta=\epsilon$, then for any $y$ with $|x-y|<\delta$, we have $|f_n(x)-f_n(y)|<\epsilon.$ So $(f_n)$ is equicontinuous on $(0,1)$.
Also we know $f_n(0)=0$, so this sequence is equicontinuous at $x=0$. If I'm correct in each of these steps, I'm probably done with the proof once I show that $(f_n(.))$ is equicontinuous at $x=1$.
Also, in my proof I didn't use the fact that the $f_n(.)$ are continuously differentiable. I also saw the following proof for this question, but honestly I think it's not totally correct, since it uses the continuity of $f(.)$ without proving it. Here is the link "Show that $f_n(\cdot)$ is uniformly convergent."
|
Let $\varepsilon>0$ be given, and set $\|\, f_k'\| = \sup \left[|\,f_k'(x)|: x \in (0,1) \right]$ $(k=1,2,\ldots)$. Since the collection of open balls $\mathcal{B}: = \{B(\, x, \frac{\varepsilon}{3}) : x \in [0,1] \}$ is a cover for $[0,1]$, we may find a finite subcover, say $B(\,x_1, \frac{\varepsilon}{3}), \, \ldots, \, B(\,x_M, \frac{\varepsilon}{3})$ (Heine-Borel Theorem). Since $f_n$ converges pointwise on $[0,1]$, for each point $x_j \: \left(\,j=1,\ldots, M \right)$ we may find a positive integer $N_j$ so that
\begin{equation} \left|\, f_n(x_j) -
f_m(x_j) \right| < \frac{\varepsilon}{3} \text{ whenever } n, m \geq N_j \,.
\end{equation}
Setting $N = \max [N_1, \ldots, N_M]$ shows that
\begin{aligned}
\left|\,f_n(x)- f_m(x) \right| & \leq \left| \,f_n (x)- f_n(x_j) \right| + \left|\, f_n (x_j)- f_m(x_j) \right| + \left|\, f_m(x_j)- f_m(x) \right| \\
& < \| \,f_n'\||x-x_j| + \frac{\varepsilon}{3} + \|\,f_m'\| |x_j-x|
\\
& \leq \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3}= \varepsilon
\; \: \text{ whenever } \, n,m \geq N \text{ and } x \in [0,1] .
\end{aligned}
Since $\mathbb{R}$ is complete (or rather $f(x):=\lim_{n \to \infty} f_n(x)$ exists for $x \in [0,1]$ ), it follows that the sequence of functions $\{\,f_n\}_{n=1}^\infty$ converges uniformly on $[0,1]$ (Cauchy Criterion).
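A concrete illustration (not part of the proof): the sequence $f_n(x)=\sin(nx)/n$ satisfies the hypotheses ($f_n(0)=0$ and $|f_n'(x)|=|\cos(nx)|\leq 1$) and converges pointwise to $0$; as the theorem predicts, the convergence is uniform, since $\sup_{[0,1]}|f_n|\leq 1/n\to 0$. A quick check in Python (the grid size is my own choice):

```python
import math

def f(n, x):
    return math.sin(n * x) / n

grid = [i / 1000 for i in range(1001)]   # sample points in [0, 1]
ns = (10, 100, 1000)
sups = [max(abs(f(n, x)) for x in grid) for n in ns]

# the sup norm is bounded by 1/n, so it shrinks to 0 uniformly in x
assert all(s <= 1.0 / n + 1e-12 for s, n in zip(sups, ns))
assert sups[0] > sups[1] > sups[2]
```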
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Let S be an infinite set and A a finite subset. Prove that $|S| = |S -A|$. I don't know if my solution is correct. This is what I have so far:
Let $S = \{ s_1, s_2,\ldots\}$ and $S-A = \{a_1, a_2,\ldots\}$, where all the elements are arranged in a fixed order.
Define $f: S \to S-A$ by $f(s_i) = a_i$.
If I prove $f$ is bijective, will my solution be correct?
|
Let $\mathfrak{a}=|A|$ and $\mathfrak{s}=|S|$, and let $\alpha$ denote an ordinal. We then have that $|S\setminus A|=|\{\alpha:\mathfrak{a}<\alpha<\mathfrak{s}\}|$, and from here you should be able to construct a bijective function as you suggest.
As a hint for constructing such a bijection, observe that $$\mathbb{F}=\langle\alpha+1:\alpha\in\omega\rangle\cup\{(\omega,0)\}$$ is an injective function mapping $\omega+1$ onto $\omega$ -- pretty much all 'finite collapse' bijections between infinities and natural numbers can be constructed in this fashion, and all transfinite cardinalities contain $\omega$ as a subset to be safely used for this purpose.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the number of 4 digit positive integers if the product of their digits is divisible by 3. Find the number of 4 digit positive integers if the product of their digits is divisible by 3.
Let $abcd$ be the required number. For the product of the digits of this number to be divisible by 3, at least one digit has to be 3, 6 or 9.
In total there are 9000 four-digit numbers.
The possible combinations are: 3 and three other digits, 33 and two other digits, 333 and one other digit, 3333; 6 and three other digits, 66 and two other digits, 666 and one other digit, 6666; 9 and three other digits, 99 and two other digits, 999 and one other digit, 9999.
I don't know how to solve further.
|
How many have a digit product that is not divisible by $3$? For those you can use only $6$ digits (namely $1,2,4,5,7,8$) in every position, since $0$ is also forbidden: a digit $0$ makes the product $0$, which is divisible by $3$. That gives $6^4$ such numbers. In total, as you said, there are $9000 = 9\times 10^3$ four-digit numbers, so the answer is $9000 - 6^4 = 7704$.
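A brute-force check of the complement count $9000-6^4=7704$ (Python):

```python
from math import prod

# count 4-digit numbers whose digit product is divisible by 3
# (a digit 0 makes the product 0, which counts as divisible by 3)
count = sum(1 for n in range(1000, 10000)
            if prod(int(d) for d in str(n)) % 3 == 0)

assert 9000 - 6 ** 4 == 7704
assert count == 7704
```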
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate $\lim_{ x\to \infty} \left( \tan^{-1}\left(\frac{1+x}{4+x}\right)-\frac{\pi}{4}\right)x$
Evaluate
$$\lim_{ x\to \infty} \left( \tan^{-1}\left(\frac{1+x}{4+x}\right)-\frac{\pi}{4}\right)x$$
Substituting $x=\frac{1}{y}$, we get
$$L=\lim_{y \to 0}\frac{\left( \tan^{-1}\left(\frac{1+y}{1+4y}\right)-\frac{\pi}{4}\right)}{y}$$
using L'Hopital's rule we get
$$L=\lim_{y \to 0} \frac{1}{1+\left(\frac{1+y}{1+4y}\right)^2} \times \frac{-3}{(1+4y)^2}$$
$$L=\lim_{y \to 0}\frac{-3}{(1+y)^2+(1+4y)^2}=\frac{-3}{2}$$
Is it possible to do this without L'Hopital's rule?
|
Well, we can start as you, by setting $y=\frac{1}{x}$. Now, our limits transforms to:
$$L=\lim_{y\to0}\frac{\tan^{-1}\left(\frac{1+y}{1+4y}\right)-\frac{\pi}{4}}{y}$$
Now, let $f:\mathbb{R}\to\mathbb{R}$ with
$$f(x)=\tan^{-1}\left(\frac{1+x}{1+4x}\right)$$
Note that $f(0)=\tan^{-1}(1)=\frac{\pi}{4}$. So, we have:
$$L=\lim_{x\to0}\frac{f(x)-f(0)}{x-0}=f'(0)$$
since $f$ is differentiable.
Now, $f$'s formula is:
$$f(x)=\int_0^\frac{1+x}{1+4x}\frac{1}{1+t^2}dt$$
So, we have:
$$f'(x)=\frac{1}{1+\left(\frac{1+x}{1+4x}\right)^2}\left(\frac{1+x}{1+4x}\right)'=\frac{(1+4x)^2}{(1+4x)^2+(1+x)^2}\cdot\frac{-3}{(1+4x)^2}=\frac{-3}{(1+4x)^2+(1+x)^2}$$
So
$$L=f'(0)=-\frac{3}{2}$$
As you have already proved.
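A quick numerical check of the limit (Python; the sample values of $x$ are my own choices):

```python
import math

def g(x):
    # the original expression (arctan((1+x)/(4+x)) - pi/4) * x
    return (math.atan((1 + x) / (4 + x)) - math.pi / 4) * x

# the expression approaches -3/2 as x grows
assert abs(g(1e6) + 1.5) < 1e-4
assert abs(g(1e8) + 1.5) < 1e-6
```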
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2365914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 8,
"answer_id": 3
}
|
Finding the maximum value from a derivative function I am having problems understanding how to find the maximum value of a rate of change (derivative) function. The rate of change of volume with respect to time is $\frac{dv}{dt}=1000- 30t^2 +2t^3$, $0 \le t \le 15$.
How do I find the maximum rate of change? The answer is $t=0$ and $t=15$, but I can't see how this is done.
|
Relative extrema occur at endpoints or critical points. Critical points occur where a function's derivative is $0$ or undefined.
Candidates for the maximum of $\frac{dv}{dt}$ are the endpoints and the points where $\frac{d^2v}{dt^2}=0$ or is undefined.
\begin{align*}
\frac{d^2v}{dt^2} &= -60t + 6t^2 =0 \\
0 &= 6t(t - 10) \\
t&=0, \ t=10
\end{align*}
So $\frac{dv}{dt}$ has critical points at $t=0,10$. Now you have to find $\frac{dv}{dt}(0)$, $\frac{dv}{dt}(10)$, and $\frac{dv}{dt}(15)$ and see which one has the highest value to find the max.
\begin{align*}
\frac{dv}{dt}(0)\ \ &= 1000 \\
\frac{dv}{dt}(10) &= 0 \\
\frac{dv}{dt}(15) &= 1000
\end{align*}
On the domain $[0,15]$, $\frac{dv}{dt}$ has a maximum value of $1000$ at $t=0,15$.
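A quick check in Python (the sampling grid is my own choice) that confirms the values at the critical points and endpoints, and that $1000$ is indeed the maximum of $\frac{dv}{dt}$ on $[0,15]$:

```python
def rate(t):
    # dv/dt = 1000 - 30 t^2 + 2 t^3
    return 1000 - 30 * t ** 2 + 2 * t ** 3

assert rate(0) == 1000 and rate(10) == 0 and rate(15) == 1000

# sample the whole interval: the maximum is attained at the endpoints
grid = [i * 15 / 3000 for i in range(3001)]
assert max(rate(t) for t in grid) == 1000
```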
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2366010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
When is a measure the pushforward of another measure? Let $(X,\Sigma)$ be a measurable space. Let $\mu$ and $\nu$ be two measures on thereon. Are there any reasonable restrictions on these measures to ensure the existence of a (measurable) function, $f: X \to X$ such that
$$ \nu = f_*\mu$$
where $f_*\mu$ is the push forward of $\mu$ with respect to $f$, i.e.,
$$\nu(A) = \mu(f^{-1}(A))$$
for all $A \in \Sigma$.
I understand this is a pretty unstructured question. Perhaps it helps to require some structure on $f$: for example, if $X = \mathbb{R}$, we could require $f$ to be monotone or linear.
|
If $X$ is any separable completely metrizable space, such as $\mathbb{R}$, a sufficient condition is that $\mu$ is atomless and $\mu(X)=\nu(X)<\infty$, see here for the argument. It should be quite clear that handling atoms of $\mu$ is not easy: they will lead to atoms in $\nu$.
Alternatively, it is sufficient that both $\mu$ and $\nu$ are infinite but $\sigma$-finite and $\mu$ is atomless. This case follows from the previous one.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2366159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Linear Independence I've come across a question in linear algebra that I can't quite figure out. I've tried a multitude of things that either don't work or aren't enough to convince me I understand linear independence well enough.
I know that a set of vectors $S$ in a vector space $V$ is linearly independent if the only way to have
$\lambda_1 \mathbf{v}_1 + ... + \lambda_n \mathbf{v}_n = \mathbf{0}$
is with all scalars equal to $0$,
$\lambda_1 = ... = \lambda_n = 0.$
I can also show a set of vectors S is linearly independent if I'm given a set of vectors with numerical values - by creating a matrix and reducing it to row echelon form. However, my understanding isn't great enough that I can expand on this and answer questions such as the following:
Assume the vectors u, v and w are linearly independent elements of a vector space V.
For each of the following sets decide whether it is linearly independent.
A. {u + v + w, v - 2w, 2u + 3w}
B. {u + 2w, v + 2w, 2w}
C. {x, y, z}
where,
x = u + 2v - w,
y = 2x + u + 2v - w,
z = 3x - 2y.
If anyone can explain to me the connection between this type of question and the definition of linear independence by answering A or providing a guideline of how to answer A then hopefully I can tackle B and C and any related questions. Thanks.
|
You just need to use the definition. For the first case you can write:
$$\lambda_1(u+v+w)+\lambda_2(v-2w)+\lambda_3(2u+3w)=0$$
now rewrite like:
$$(\lambda_1+2\lambda_3)u+(\lambda_1+\lambda_2)v+(\lambda_1-2\lambda_2+3\lambda_3)w=0$$
Now use that $u,v,w$ are independent, solve the system and find $\lambda_1,\lambda_2,\lambda_3$.
Can you finish?
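For reference, the system has only the trivial solution precisely when its coefficient matrix has nonzero determinant, which a short script confirms (Python; writing out the $3\times 3$ cofactor expansion by hand is my own choice):

```python
# Coefficient matrix of the system in (lambda_1, lambda_2, lambda_3):
#   lambda_1             + 2*lambda_3 = 0
#   lambda_1 + lambda_2              = 0
#   lambda_1 - 2*lambda_2 + 3*lambda_3 = 0
M = [[1, 0, 2],
     [1, 1, 0],
     [1, -2, 3]]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# nonzero determinant -> only the trivial solution -> the set A is independent
assert det3(M) == -3
```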
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2366240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Limit of $p_{k+1}(z)/p_k(z)$ with $p_k(z)=\sum\limits_{i=0}^{\lceil zk \rceil}{k\choose i} p^i(1-p)^{k-i}$ and $z
How to find the limit of $\dfrac{p_{k+1}(z)}{p_k(z)}$ as $k$ tends to infinity, where, for every $k$, $$p_k(z)=\sum_{i=0}^{\lceil zk \rceil}{k\choose i} p^i(1-p)^{k-i}$$ for some $z$ and $p$ in $(0,1)$ such that $z<p$?
The top and bottom both converge to $0$ by the law of large numbers.
I was thinking of using L'Hopital with respect to $k$, but I am not sure how that would work because of the ceiling function.
|
If $X\sim\operatorname{Bin}(n,p)$, then $\frac{X-np}{\sqrt{np(1-p)}} \xrightarrow{d} N(0,1)$, and one can use this to estimate the numerator and denominator. Basically, both are $\Phi(\frac{\sqrt{n}(z-p)}{\sqrt{p(1-p)}})(1+O(\frac1n))$, so the limit is $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2366368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What does $L(n,\chi_4)$ mean? I was reading some articles related to Euler sums and the Riemann zeta function, when I came across this definition:
$$
L(n,\chi_4) = \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)^n}
$$
What is this function called and how is it related to the zeta function?
|
That author is writing $\chi_4$ for a certain "character", defined by
$$
\chi_4(n) = \begin{cases}
1, &n \equiv 1\pmod 4,\\
-1,&n\equiv 3\pmod 4,\\
0, &\text{otherwise}
\end{cases}
$$
and then
$$
L(s,\chi_4) = \sum_{n=1}^\infty \frac{\chi_4(n)}{n^s} =
\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)^s}
$$
is the corresponding "$L$-function" of that character.
The simple connection with the zeta function is that, taking a different character $\chi(n) = 1$ for all $n$, we get
$$
L(s,\chi) = \sum_{n=1}^{\infty}\frac{1}{n^s} = \zeta(s)
$$
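The two expressions for $L(s,\chi_4)$ can be compared numerically at $s=2$ (Python; the truncation points are my own choices; the common value is Catalan's constant $\approx 0.9159656$):

```python
def chi4(n):
    # the character: 1 if n = 1 (mod 4), -1 if n = 3 (mod 4), 0 otherwise
    return {1: 1, 3: -1}.get(n % 4, 0)

s1 = sum(chi4(n) / n ** 2 for n in range(1, 4001))
s2 = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(2000))

assert abs(s1 - s2) < 1e-12           # the two expressions agree term by term
assert abs(s1 - 0.9159655941) < 1e-6  # ~ Catalan's constant
```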
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2366504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How can I solve this limit without L'Hopital's rule? I have found this interesting limit and I'm trying to solve it without using L'Hopital's rule.
$$\lim\limits_{x\rightarrow 0}\frac{\sinh^{-1}(\sinh(x))-\sinh^{-1}(\sin(x))}{\sinh(x)-\sin(x)}$$
I solved it with L'Hopital's rule and I found that the solution is $1$. But if I try without this rule, I can't solve it. Any ideas?
|
Using the Mean Value Theorem, and the fact that $\frac{\mathrm{d}}{\mathrm{d}x}\sinh^{-1}(x)=\frac1{\sqrt{1+x^2}}$, we get that
$$
\frac{\sinh^{-1}(\sinh(x))-\sinh^{-1}(\sin(x))}{\sinh(x)-\sin(x)}=\frac1{\sqrt{1+\xi^2}}\tag{1}
$$
for some $\xi$ between $\sin(x)$ and $\sinh(x)$.
Therefore, since both $\sinh(x)$ and $\sin(x)$ tend to $0$, the $\xi$ in $(1)$ tends to $0$; that is,
$$
\begin{align}
\lim_{x\to0}\frac{\sinh^{-1}(\sinh(x))-\sinh^{-1}(\sin(x))}{\sinh(x)-\sin(x)}
&=\lim_{\xi\to0}\frac1{\sqrt{1+\xi^2}}\\[3pt]
&=1\tag{2}
\end{align}
$$
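A quick numerical check (Python; the sample points are my own choices) that the ratio approaches $1$, consistent with $\tfrac1{\sqrt{1+\xi^2}}\to 1$:

```python
import math

def ratio(x):
    num = math.asinh(math.sinh(x)) - math.asinh(math.sin(x))
    den = math.sinh(x) - math.sin(x)
    return num / den

assert abs(ratio(0.01) - 1.0) < 1e-3
assert abs(ratio(0.001) - 1.0) < 1e-4
```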
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2366608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Why is convolution with Fejer kernel enough to prove this result? Exercise 2.9 in Katznelson's Introduction to Harmonic Analysis reads as follows.
Show that for $f \in L^{1}(\mathbb{T})$ the norm of the operator $ F: g \mapsto f * g$ on $L^{1}(\mathbb{T})$ is $||f||_{L^{1}}$.
I can see easily enough that $\|f * g\|_{1} \leq \|f\|_{1}\|g\|_{1}$ for all $g \in L^{1}(\mathbb{T})$, and thus that $\|F\|_{\text{operator}}$ is bounded, by using Fubini's theorem, but I was struggling to prove the actual equality.
Then my professor told me that the solution was just to recall that for the Fejer summability kernel $K_{n}$,
$$ f * K_{n} \to f \text{ as } n \to \infty$$
and so $||f * K_{n}||_{L^1} \to ||f||_{L^1}$, therefore $||F||_{\text{operator}} = ||f||_{L^1}$.
My question is, why is it enough to just show this? I have looked at my definitions of the Fejer kernel and summability kernels, and I can't make the logical step from showing "$\|f * K_{n}\|_{L^1} \to \|f\|_{L^1}$" to showing "$\sup_{\|g\|_{L^{1}} \leq 1} \|f * g\|_{L^{1}} = \|f\|_{1}$", so I think I must be missing something very obvious.
|
Note that $\|K_n\|_{L^1}=1$ (using Katznelson's normalisation). Let the
operator norm of $g\mapsto f*g$ be $A$. We know $A\le \|f\|_{L^1}$.
Then $\|f\ast K_n\|_{L_1}\le A\|K_n\|_{L^1}=A$. But $\|f\ast K_n\|_{L^1}\to
\|f\|_{L^1}$, so in the limit, $\|f\|_{L^1}\le A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2367704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$\textbf Z[\sqrt{pq}]$ is not a UFD if $\left( \frac{q}p \right) = -1$ and $p \equiv 1 \pmod 4$.
Let $p$ and $q$ be primes such that $p \equiv 1 \pmod 4$ and $\left( \frac q p \right) = -1$. Show that $\textbf Z[\sqrt {pq}]$ is not a UFD.
I tried some examples like $p=5$ and $q = 2$. But I have no clue about the general case. Any hint?
|
If $$\left(\frac{q}{p}\right) = -1,$$ that means that $q$ is not a quadratic residue modulo $p$. That much is obvious, right? It also means that $q$ is not a quadratic residue modulo $pq$ either. Thus, in your example, since 2 is not a quadratic residue modulo 5, it can't be a residue modulo 10 either, and is therefore irreducible.
Since $q$ is irreducible, it's not divisible by $\sqrt{pq}$. So we conclude that $pq$ has two distinct factorizations in $\mathbb Z[\sqrt{pq}]$: $$(\sqrt{pq})^2 = pq.$$
Well, that might be slightly wrong if $p$ can be broken down but $q$ can't. In order to present this answer first, I have not actually pondered the significance of quadratic reciprocity to your question.
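For the concrete case $p=5$, $q=2$ from the question (the ring $\mathbb Z[\sqrt{10}]$), one standard way to make "$2$ is irreducible" precise is via the norm $N(a+b\sqrt{10})=a^2-10b^2$: a proper divisor of $2$ would have norm $\pm 2$, which is impossible, since $a^2-10b^2\equiv a^2 \pmod 5$ and neither $2$ nor $-2\equiv 3$ is a square mod $5$. A short script checks this (Python; the finite search range is only illustrative):

```python
# squares mod 5 are {0, 1, 4}, so a^2 - 10 b^2 = a^2 (mod 5) is never +-2
squares_mod5 = {a * a % 5 for a in range(5)}
assert squares_mod5 == {0, 1, 4}
assert 2 not in squares_mod5 and 3 not in squares_mod5   # 3 = -2 mod 5

# brute-force confirmation on a finite range of (a, b)
assert all(a * a - 10 * b * b not in (2, -2)
           for a in range(-100, 101) for b in range(-100, 101))
```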
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2367796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
}
|
What is the formal adjoint? Let $L$ be a differential operator and consider the equation $Lu=f$ for $f\in L^2$. Then $u$ is a weak solution if $$\left<u,L^*\varphi\right>=\left<f,\varphi\right>,$$
for all $\varphi\in \mathcal C_0^\infty $ where $L^*$ is the formal adjoint.
What is the "formal adjoint" ?
I recall that $\left<f,g\right>=\int fg.$
|
I assume we work with real-valued functions. If $L = \sum_{\alpha} k_{\alpha} D^{\alpha}$ (using multi-index notation), where $k_{\alpha}$ are constants, then $L^{*}$ is given by
$$
L^{*} = \sum_{\alpha} k_{\alpha} (-1)^{|\alpha|} D^{\alpha}.
$$
To see why it makes sense, one may check that for $\phi, \psi \in C_0^{\infty}$ equality $\langle L \phi, \psi \rangle = \langle \phi, L^{*} \psi \rangle$ is just integration by parts.
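For the simplest case $L=\frac{d}{dx}$ (so $L^*=-\frac{d}{dx}$), the integration-by-parts identity $\langle L\phi,\psi\rangle=\langle\phi,L^*\psi\rangle$ can be verified exactly on polynomial test functions vanishing at the endpoints of $[0,1]$ (Python with exact rational arithmetic; the interval and the test functions are my own choices, standing in for compactly supported $\phi,\psi$):

```python
from fractions import Fraction

# polynomials as coefficient lists [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def pdiff(p):
    # derivative: coefficient of x^{k-1} is k * c_k
    return [Fraction(k) * Fraction(c) for k, c in enumerate(p)][1:]

def pint01(p):
    # exact integral over [0, 1]: sum c_k / (k + 1)
    return sum(Fraction(c) / (k + 1) for k, c in enumerate(p))

# test functions vanishing at both endpoints of [0, 1]
phi = pmul([0, 1, -1], [0, 1, -1])                      # x^2 (1-x)^2
psi = pmul(pmul([0, 1, -1], [0, 1, -1]), [0, 1, -1])    # x^3 (1-x)^3

# <L phi, psi> = <phi, L* psi> for L = d/dx, L* = -d/dx
assert pint01(pmul(pdiff(phi), psi)) == -pint01(pmul(phi, pdiff(psi)))
```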
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2367884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $X$ be a connected scheme. Then $X$ irreducible iff $\forall x, \text{ Spec}(\mathcal{O}_{X,x})$ is I could use some help with the second part of exercise I-34 from Eisenbud and Harris' Geometry of Schemes:
Let $X$ be a connected scheme. Show that $X$ is irreducible if and only if
for all $x \in X$, the stalk local ring has a unique minimal prime ideal.
I guess it must be related to the first part of the exercise, that states:
An arbitrary scheme is irreducible iff every open affine subset is irreducible.
My ideas
I know that an affine scheme $\text{Spec }R$ is irreducible precisely when $R$ has a unique minimal prime ideal, so that leaves us with the statement to be proven:
$$
X \text{ is irreducible}
\quad\iff \quad
\forall x \in X, \text{ Spec}(\mathcal{O}_{X,x}) \text{ is irreducible}.
$$
but the RHS seems even harder to prove: $\text{ Spec}(\mathcal{O}_{X,x})$ doesn't really seem to be "accessible" in some way. For example, this result cannot be applied since $\text{ Spec}(\mathcal{O}_{X,x})$ cannot be identified with affine patches as one would like to do. Do you have ideas?
|
This is false. Indeed, there exists a ring $A$ which has no nontrivial idempotent, which is not a domain, and such that the localization of $A$ at any prime ideal is a domain. See http://stacks.math.columbia.edu/tag/0568 for details of the construction. Note that such a ring automatically is reduced, since all of its localizations at prime ideals are reduced.
Since $A$ has no nontrivial idempotent elements, $\operatorname{Spec} A$ is connected, and since every localization of $A$ at a prime ideal is a domain, every stalk has a unique minimal prime. But $\operatorname{Spec}A$ is reducible, since any nonzero $f,g\in A$ with $fg=0$ give proper closed subsets whose union is all of $\operatorname{Spec} A$ (the vanishing sets of $f$ and $g$ are proper subsets since $f$ and $g$ are not nilpotent).
On the other hand, here are some positive results. First, the forward direction is always true. Indeed, if $X$ is irreducible and $x\in X$, let $U=\operatorname{Spec} A$ be an affine open subset containing $x$. Then $U$ is irreducible, so $A$ has a unique minimal prime. The same is then true of any (nonzero) localization of $A$, in particular the local ring $\mathcal{O}_{X,x}$.
Second, the reverse direction is true if $X$ is Noetherian. Indeed, in that case $X$ can be written as a finite union of irreducible components. If $X$ is not irreducible, it has more than one irreducible component, and if $X$ is connected, there must be two distinct irreducible components $A$ and $B$ of $X$ which intersect (otherwise, each irreducible component would be clopen). Let $x\in A\cap B$. Then the generic point of $A$ and the generic point of $B$ give distinct minimal primes in the local ring $\mathcal{O}_{X,x}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Closed form for $\sum_{0\le x\lt\infty}n^{2^{-x}}-1$ Is there a closed form for the expression below?
$$f(n)=\sum_{0\le x\lt\infty}n^{2^{-x}}-1=(n-1)+(\sqrt{n}-1)+(\sqrt{\sqrt{n}}-1)+\cdots$$
Approximations are good as well. This appeared to me while analyzing an algorithm.
|
Let $g(x) := f(e^{x/2})$ so that $g(x) = g(x/2) + e^{x/2}-1$. Expanding $g(x)$ in power series we have
$$ g(x) = \sum_{k>0} \frac{x^k}{k!(2^k-1)} =
x + \frac{x^2}{6} + \frac{x^3}{42} + \frac{x^4}{360} + \frac{x^5}{3720} +\cdots $$ which converges everywhere but is unlikely to have closed form.
Approximations of $f(x)$ around $x=1$ where $f(1)=0$ are $x-1+\log(x)$ and $x-3+2\sqrt{x}$ and the average of the two is even better.
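A numerical cross-check of the series against the original sum, using the relation $f(n)=g(2\log n)$ from the substitution above (Python; the truncation points are my own choices):

```python
import math

def f_direct(n, terms=200):
    # f(n) = sum_{k>=0} (n^{2^{-k}} - 1); expm1 keeps the tiny tail accurate
    return sum(math.expm1(2.0 ** (-k) * math.log(n)) for k in range(terms))

def g_series(x, terms=80):
    # g(x) = sum_{k>=1} x^k / (k! (2^k - 1))
    total, term = 0.0, 1.0
    for k in range(1, terms):
        term *= x / k                # term = x^k / k!
        total += term / (2.0 ** k - 1)
    return total

for n in (1.5, 2.0, 5.0):
    assert abs(f_direct(n) - g_series(2 * math.log(n))) < 1e-9
```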
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
How do you calculate the expected value of the geometric distribution without differentiation? Is there any way I can calculate the expected value of the geometric distribution without differentiation? All other ways I saw here use differentiation.
Thanks in advance!
|
The problem can be viewed in a different perspective to understand more intuitively. Let's see the following definition.
"A person tosses a coin, if head comes he stops, else he passes the coin to the next person. For the next person, he follows the same process: If head comes he stops, else passes the coin to next person and so on."
So the process can be modelled as,
$$
X =
\begin{cases}
1, & \text{if $head$ occurs} \\
1 + Y, & \text{if $tail$ occurs}
\end{cases}
$$
where $Y$ denotes the next person.
Similarly for $Y$, the equation can be written as,
$$
Y =
\begin{cases}
1, & \text{if $head$ occurs} \\
1 + Y', & \text{if $tail$ occurs}
\end{cases}
$$
where $Y'$ is the next after $Y$ and so on.
We see that there is no difference between $X$ and $Y$: both toss the coin and stop if head comes, else pass the coin to the next person, who does the same. If both events were occurring separately and independently (i.e. $Y$ had a coin already and didn't know it came from $X$), their average (or expected) values would be the same.
Let's calculate average value of $X$ using the formula of expected values.
Here $p$ is the probability of head and $q$ is the probability of tail.
$$
E(X) = p \times 1 + q \times (1 + Y) \tag{1}\label{eq1}
$$
Let's see the formula in an intuitive way. Suppose we ask $X$, What is the average number of tosses that you need to get heads?. He will be like, My expected value is 1 into p if head occurs in the first toss else (1 + no. of tosses that $Y$ had to make) into q. And $Y$ is like so on an average I had to do $E(Y)$ tosses to get heads. So $X$'s final statement is 1 into p if head comes in first toss else 1 + $E(Y)$ tosses tail comes.
So equation can be reduced as,
$$
\begin{align}
&E(X) = p \times 1 + q \times (1 + E(Y)) && \text{} \tag{2}\label{eq2} \\
\Rightarrow \ &E(X) = p \times 1 + q \times (1 + E(X)) && \text{Since $E(X) = E(Y)$} \\
\Rightarrow \ &E(X) = 1/p
\end{align}
$$
Note that, $\eqref{eq1}$ is very different from $\eqref{eq2}$.
$(2)$ can be solved as seen above, but $(1)$ just continues to unfold further like $E(X) = 1 \times p + q \times (1 + 1) \times p + q \times (1 + Y') \times q$ and so on. Although this can also be reduced to $E(X) = 1 \times p + q \times (1 + E(X))$, but I wanted to follow the process intuitively.
Since it can seem off, to just put the average value of $Y$ in $(1)$, improvements with strong reasoning are welcomed.
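For what it's worth, a direct numerical check (my own sketch) that the truncated sum $\sum_{k\ge1} k\,p\,q^{k-1}$ agrees with the $1/p$ obtained from the self-similarity argument:

```python
def expected_tosses(p, terms=10_000):
    """Truncation of E[X] = sum_{k>=1} k * p * (1-p)^(k-1)."""
    q = 1.0 - p
    return sum(k * p * q ** (k - 1) for k in range(1, terms))

# The self-similarity argument gives E(X) = p + q*(1 + E(X)), i.e. E(X) = 1/p.
for p in (0.5, 0.25, 0.1):
    assert abs(expected_tosses(p) - 1 / p) < 1e-6
```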
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 4
}
|
Is there any other function $f(x)$ such that $f$ is discontinuous at $x=a$ but $f\circ f$ is continuous? Is there any other function $f(x)$ such that $f$ is discontinuous at $x=a$ but $f\circ f$ is continuous?
I know the well-known example for this, Dirichlet's function. Is there any other?
|
The function
$$
f(x) = \begin{cases}a - 1 & x\ge a \\ a - 2 & x < a \end{cases}
$$
is discontinuous at $x = a$, but $f(f(x)) = a - 2$ is a continuous (constant) function.
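A tiny sanity check of this example (my own sketch, taking $a=2$):

```python
a = 2.0

def f(x):
    # Discontinuous at x = a: jumps from a - 2 to a - 1.
    return a - 1 if x >= a else a - 2

# f only takes the values a-1 and a-2, both strictly below a,
# so f(f(x)) = a - 2 for every x.
for x in (-10.0, a - 1e-9, a, a + 1e-9, 10.0):
    assert f(f(x)) == a - 2
```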
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why is $\bigcup_{n\geq 1}[0,1-1/n] \neq [0,1]$? Sorry but aren't we taking limits $\lim_{m \to \infty} \cup_{n =1 }^{m}[0,1-1/n] = [0,1]$? Why is this supposed to be equal to $[0,1)$?
|
This isn't a silly question. It's just that you've discovered a false belief.
Conjecture. Suppose $a : \mathbb{N} \rightarrow \mathbb{R}$ is an order-reversing sequence that's bounded below and $b : \mathbb{N} \rightarrow \mathbb{R}$ is an order-preserving sequence that's bounded above. Then:
$$\bigcup_{i \in \mathbb{N}}[a_i,b_i] = \left[\lim_{i \in \mathbb{N}} a_i,\lim_{i \in \mathbb{N}} b_i \right]$$
Disproof. Assume toward a contradiction this were true. Then
$$\bigcup_{i \in \mathbb{N}}\left[\frac{1}{i+1},1 \right] = \left[\lim_{i \in \mathbb{N}} \frac{1}{i+1},\lim_{i \in \mathbb{N}} 1 \right]$$
So $$\bigcup_{i \in \mathbb{N}}\left[\frac{1}{i+1},1 \right] = \left[0,1 \right]$$
So
$$[0,1] \subseteq \bigcup_{i \in \mathbb{N}}\left[\frac{1}{i+1},1 \right]$$
Hence $$0 \in \bigcup_{i \in \mathbb{N}}\left[\frac{1}{i+1},1 \right]$$
Hence $$\mathop{\exists}_{i \in \mathbb{N}}\left(\frac{1}{i+1} \leq 0 \leq 1\right)$$
Hence $$\mathop{\exists}_{i \in \mathbb{N}}\left(\frac{1}{i+1} \leq 0\right)$$
Hence $$\mathop{\exists}_{i \in \mathbb{N}} \bot$$
Hence $$\bot,$$ a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 6
}
|
Nim game variant Statment is as follow
Given a number of piles in which each pile contains some numbers of stones/coins. In each turn, a player can choose only one pile and remove any number of stones (at least one) from that pile. The player who cannot move is considered to lose the game (i.e., one who take the last stone is the winner).
We can find solution by Xoring all the values of piles. But we have constraint In each turn, a player can choose only one pile and remove any number of stones between 1 to H. How to solve this modified one ?
From my point of view it will remain unchanged.. We have to calculated the xor values only.
|
We can use the theory of nim-values. Each pile has a nim-value
and the value of the game is the nim-sum (XOR of the binary representations) of the nim-values of the piles. In this variant
the nim-value of a pile is periodic with period $H+1$. The nim-value
of a pile with $k$ stones, $0\le k\le H$, is $k$ (so in general it is $k \bmod (H+1)$), and adding $H+1$
stones to a pile leaves its nim-value invariant.
As ever the game is a second player win iff its nim-value is zero.
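A brute-force check of these nim-values, computing the Grundy value of a single pile from the mex rule (the helper name is mine) and comparing with the periodic pattern $k \bmod (H+1)$:

```python
def grundy_bounded_pile(k, H):
    """Grundy value of one pile of k stones when a move removes 1..H stones,
    computed from first principles via the mex rule."""
    g = [0] * (k + 1)
    for n in range(1, k + 1):
        reachable = {g[n - t] for t in range(1, min(H, n) + 1)}
        m = 0
        while m in reachable:
            m += 1
        g[n] = m
    return g[k]

H = 3
for k in range(40):
    assert grundy_bounded_pile(k, H) == k % (H + 1)   # periodic with period H+1
```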
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
An arithmetic sequence “The sum of the first $13$ terms of an arithmetic sequence is $-234$ and the sum of the first $41$ terms is $-780$. Find the $27^\text{th}$ term of the sequence.”
The $S_n=\frac n2(2a+(n-1)d)$ formula ($S_n$ represents the sum of the first to $n^\text{th}$ terms) only gives me fractional values for $a$ (which is the first term) and $d$, which is the common difference. Are there other viable methods?
|
$a_1$ and $d$ are unknown, but we know:
$\frac{13}{2}(a_1 + a_{13}) = -234 \implies a_1 + a_{13} = - \frac{234\times 2}{13} = -36$
and also $\frac{41}{2}(a_1 + a_{41}) = -780 \implies a_1 + a_{41} = -38\frac{2}{41}$
and also $a_{13}= a_1 + 12d$
and $a_{41} = a_1 + 40d$.
So we get two linear equations in two variables, by substituting the last two into the first two. There is no problem with fractions, or even arbitrary reals, being used as $a_1$ or $d$. That the sums happen to be integers is fine; it doesn't mean $a_1$ and $d$ must be integers.
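For what it's worth, here is a small exact-arithmetic sketch (my own, with a hypothetical `solve2` helper) that solves the resulting $2\times2$ system $13a+78d=-234$, $41a+820d=-780$ and evaluates the $27$th term:

```python
from fractions import Fraction

# S_13 = -234 and S_41 = -780 with S_n = n/2*(2a + (n-1)d) give
#   13a + 78d  = -234
#   41a + 820d = -780.
def solve2(a11, a12, b1, a21, a22, b2):
    """Cramer's rule for a 2x2 system, in exact rational arithmetic."""
    det = a11 * a22 - a12 * a21
    return Fraction(b1 * a22 - b2 * a12, det), Fraction(a11 * b2 - a21 * b1, det)

a, d = solve2(13, 78, -234, 41, 820, -780)
assert (a, d) == (Fraction(-720, 41), Fraction(-3, 41))
a27 = a + 26 * d
assert a27 == Fraction(-798, 41)          # the 27th term is indeed fractional
assert sum(a + k * d for k in range(13)) == -234
assert sum(a + k * d for k in range(41)) == -780
```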
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Unable to reach the desired answer in trigonometry. The question is:
If $\sin x + \sin y = \sqrt3 (\cos y - \cos x)$
show that $\sin 3x + \sin 3y= 0 $
This is what I have tried:
*
*Squaring of the first equation (Result: Failure)
*Tried to use the $\sin(3x)$ identity but got stuck in the middle steps because I couldn't simplify it any further.
Can someone provide any hint/ suggestion?
|
I was trying to find out how the condition was conceived.
$$\sin3x+\sin3y=0\implies\sin3x=\sin(-3y)$$
$\implies3x=180^\circ n+(-1)^n(-3y)$ where $n$ is any integer
$\iff x=60^\circ n+(-1)^{n+1}y$
If $n$ is even $=2m$(say), $x=120^\circ m-y$
$$\implies x+y\equiv\begin{cases}0 &\mbox{if }3\mid m\\120^\circ& \mbox{if } n \equiv1\pmod3\\240^\circ& \mbox{if } n\equiv2\pmod3 \end{cases}\pmod{360^\circ}$$
$\implies\sin\dfrac{x+y}2=\tan\dfrac{x+y}2=0$ or $\tan\dfrac{x+y}2=\pm\sqrt3$
Similarly for odd $n=2m+1$(say),
$\cos\dfrac{x-y}2=\cot\dfrac{x-y}2=0$ or $\cot\dfrac{x-y}2=\pm\sqrt3$
Here the condition chosen $$0=\sin\dfrac{x+y}2\left(\cot\dfrac{x-y}2-\sqrt3\right)=\dfrac{2\sin\dfrac{x+y}2\cos\dfrac{x-y}2-\sqrt3\cdot2\sin\dfrac{x+y}2\sin\dfrac{x-y}2}{2\sin\dfrac{x-y}2}=\dfrac{\sin x+\sin y-\sqrt3(\cos y-\cos x)}{2\sin\dfrac{x-y}2}$$ with $\sin\dfrac{x-y}2\ne0$ as $\cot\dfrac{x-y}2=\sqrt3$
which could easily be $$\sin\dfrac{x+y}2\left(\cot\dfrac{x-y}2+\sqrt3\right)=0\iff\sin x+\sin y=-\sqrt3(\cos y-\cos x)$$
Or $$\cos\dfrac{x-y}2\left(\tan\dfrac{x+y}2\pm\sqrt3\right)=0\iff\sin x+\sin y=\pm\sqrt3(\cos y+\cos x)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2368879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What does it mean to say that a statement is independent of a theory T? Let $\phi$ = $\forall x \forall y (x \cdot y = y \cdot x)$. It is sometimes said that $\phi$ is "independent" of the axioms of group theory. What does 'independent' mean $\textit{exactly}$? Is it:
(1) That we can find models of group theory $\mathfrak{M}$ and $\mathfrak{N}$ such that $\mathfrak{M} \models \phi$ but $\mathfrak{N} \models \lnot\phi $
Or:
(2) We cannot derive $\phi$ from the axioms of group theory.
Or: is it the case that (1) and (2) are equivalent?
|
A sentence $\phi$ is independent of $T$ if there are $M,N\models T$ such that $M\models\phi$ and $N\models\neg\phi$. [This is your (1).]
It is a theorem (Gödel completeness theorem) that $\phi$ is independent of $T$ if and only if neither $\phi$ nor $\neg\phi$ can be derived from $T$ in some (here not defined) syntactic calculus.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Area of rectangle knowing diagonal and angle between diagonal and edge I found on the web that the area of a rectangle with the diagonal of length $d$, and inner angle (between the diagonal and edge) $\theta$ is $d^2\cos(\theta)\sin(\theta)$. However, I wasn't able to deduce it myself. I tried applying law of sines or generalised Pythagorean theorem but I couldn't derive the area using only the length of the diagonal and the angle between diagonal and edge. How might I get to this result ?
|
If you use the formulas for sine and cosine in right-angled triangles, the formula can be proved rather easily: If the width and the height of the rectangle are resp. $w$ and $h$, then the formulas say $\cos(\theta)=w/d$ and $\sin(\theta)=h/d$. If you isolate $w$ and $h$ in these formulas and substitute in the formula "area $=wh$", then the formula you mention appears.
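A quick numerical cross-check of the formula (my own sketch) against a 3-4-5 rectangle:

```python
import math

def rect_area_from_diagonal(d, theta):
    """Width = d*cos(theta), height = d*sin(theta), so area = w*h = d^2*cos*sin."""
    return d * d * math.cos(theta) * math.sin(theta)

w, h = 3.0, 4.0
d = math.hypot(w, h)                # diagonal of a 3-by-4 rectangle is 5
theta = math.atan2(h, w)            # angle between diagonal and the width edge
assert abs(rect_area_from_diagonal(d, theta) - w * h) < 1e-12
```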
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Different ways to state the motivation of the definition of the product topology Suppose for every $i\in\mathscr I,$ $X_i$ is a topological space.
The product space has as its underlying set the product set $X =\prod \limits_{i\,\in\,\mathscr I} X_i$ and as its open sets product sets of the form $\prod\limits_{i\,\in\,\mathscr I} G_i$ where for every $i\in\mathscr I,$ $G_i$ is open and for all except finitely many $i\in\mathscr I,$ $G_i=X_i.$
Now suppose one is asked why the definition is that rather than something else ‒ for example, omitting the restriction to finitely many factors.
The answer that I know instantly is this: This is the same as the topology of pointwise convergence. That is, a net of points in $X$ converges to a point in $X$ if and only if for every $i\in\mathscr I,$ the projection of the net onto the $i$th factor space is a net that converges to the projection of the limit point onto that factor space.
However, there may be other and maybe even better ways of stating the motivation. What are they?
|
For the sake of definiteness, I will refer to the name I have seen most commonly used: the product topology is, as others have mentioned, the Initial Topology https://en.wikipedia.org/wiki/Initial_topology with respect to the projections.
A dual concept is that of the Final Topology, which is the finest topology on the codomain of a map $f: Y \rightarrow X$ that makes $f$ continuous.
https://en.wikipedia.org/wiki/Final_topology.
As an example, the quotient topology is the final topology with respect to quotient maps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 2
}
|
upper bound for $L^1$ norm of Dirichlet kernel I showed there exists a constant $c$ such that $\|D_N\|_1 \geq c \log N$ and $c$ is independent to $N$
using the fact that $$\| D_N\|_1= \frac 1 \pi \int_{[0,\pi]} \left|\frac{\sin(2N+1)y}{\sin y}\right|\,dy \geq \frac{1}{\pi}\int_{[0,\pi/2]} \left| \frac{\sin(2N+1)y}{\sin y} \right| \, dy$$
and $$\frac 1 {| y |} \leq \frac{1}{\sin(y)} \leq \frac{\pi}{2|y|} \text{ for } y \in [0, \frac{\pi}{2}].$$
Now I want to show that there is a upper bound (i.e there exists a constant $c'$ such that $\|D_N\|_1 \leq c' \log N$ for $N\geq2$)
but this time I can't deduce the interval and the function diverges to infinity near $\pi$, so i have no clue how to start. Am I sppose to divide $[0, 2\pi]$ into three subintervals such as $[0, \delta], [\delta, 2\pi-\delta],$ and $[2\pi-\delta, 2\pi]$ and show on each interval $L^1$ norm of Dirichlet kernel converges to $0$ or multiple of $\log N$ as $\delta \rightarrow 0$? I saw this trick a lot in the other examples.
|
$D_{n}(x)$ has the form $$D_{n}(x)=\dfrac{1}{2\pi}\dfrac{\sin\left(n+\frac{1}{2}\right)x}{\sin\frac{x}{2}},$$ so that $$D_{n}(2x)=\dfrac{1}{2\pi}\dfrac{\sin(2n+1)x}{\sin x}.$$
Since $\sin n\alpha\leq n\sin\alpha$, we know that $\sin(2n+1)x\leq (2n+1)\sin x$ and thus
\begin{align*}
(1)\ \ |D_{n}(2x)|=\dfrac{1}{2\pi}\dfrac{|\sin(2n+1)x|}{|\sin x|}&\leq \dfrac{1}{2\pi}\dfrac{(2n+1)|\sin x|}{|\sin x|}\\
&=\dfrac{2n+1}{2\pi}\leq 2n+1<4n\ \text{for all}\ n\geq 1.
\end{align*}
On the other hand, note that $$|\sin\frac{x}{2}|>\dfrac{|x|}{\pi}\geq\dfrac{|x|}{2\pi}\ \text{for}\ 0<|x|<\pi,$$ and thus $$(2)\ \ |D_{n}(x)|=\dfrac{1}{2\pi}\dfrac{|\sin(n+\frac{1}{2})x|}{|\sin\frac{1}{2}x|}\leq\dfrac{1}{2\pi}\dfrac{1}{|\sin \frac{1}{2}x|}\leq\dfrac{1}{|x|}.$$
Now, let's compute:
\begin{align*}
\|D_{n}\|_{1}=\int_{-\pi}^{\pi}|D_{n}(x)|dx&=2\int_{0}^{\pi}|D_{n}(x)|dx\\
&=2\int_{0}^{\frac{\pi}{n}}|D_{n}(x)|dx+2\int_{\frac{\pi}{n}}^{\pi}|D_{n}(x)|dx.
\end{align*}
For the first term, inequality $(1)$ (applied with $x$ replaced by $x/2$) gives the uniform bound $|D_{n}(x)|\leq\frac{2n+1}{2\pi}$, and we apply inequality $(2)$ to the second term:
\begin{align*}
\|D_{n}\|_{1}\leq 2\int_{0}^{\frac{\pi}{n}}\dfrac{2n+1}{2\pi}dx+2\int_{\frac{\pi}{n}}^{\pi}\dfrac{1}{x}dx&=\dfrac{2n+1}{n}+2(\log(\pi)-\log(\pi/n))\\
&\leq 3+2\log n\\
&<12\log n\ \text{for all}\ n\geq 2,
\end{align*}
where the last inequality follows from $3<10\log(n)$ for all $n\geq 2$.
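A crude numerical check (my own sketch) that the bound $\|D_n\|_1 \le 12\log n$ holds for a few values of $n$, integrating $|D_n|$ with the midpoint rule:

```python
import math

def dirichlet_l1_norm(n, steps=100_000):
    """Midpoint-rule approximation of the integral of |D_n| over [-pi, pi],
    with D_n(x) = sin((n + 1/2) x) / (2 pi sin(x/2)); the integrand is even."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += abs(math.sin((n + 0.5) * x) / (2 * math.pi * math.sin(0.5 * x)))
    return 2 * total * h

for n in (2, 5, 20):
    assert dirichlet_l1_norm(n) < 12 * math.log(n)
```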
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Find hyperbolic area of hyperbolic triangle Given $u=(0,0)$, $v=(\frac12,0)$, $w=(\frac12,\frac12)$. Find the hyperbolic area of the hyperbolic triangle $[u,v,w]$ in $\mathbb{H}^2$.
My approach:
One of the angles, say $\alpha$, is a right angle, so we have that $\alpha=\frac{\pi}2$, so that $[u,v,w]$ is a right triangle. Now, this triangle is also isosceles, so two of its sides, say $b$ and $c$, are equal.
Now, $$b=c=\cosh^{-1}\left(\frac53\right)$$
Also, the other side, say $a$, is determined as follows:
$$a=\cosh^{-1}\left(\frac73\right)$$
Now, we get
$$\frac73 = \cosh b=\frac{\cos\beta}{\sin\alpha}=\cot\beta$$
$$\implies \beta = \cot^{-1}\left(\frac73\right)=\alpha$$
$$\implies A=\frac{\pi}2-2\cot^{-1}\left(\frac73\right)$$
I'd appreciate if someone would let me know whether or not my reasoning is correct in this solution.
|
The area of a hyperbolic triangle is equal to the angle deficit, i.e. to the difference between the hyperbolic sum of interior angles and the Euclidean sum of $\pi$.
Poincaré half plane
Draw the triangle, and it will look like this:
This triangle has two ideal points. I'm not sure I'd call it isosceles, since two legs have infinite length, but you might as well. It has two angles of $0$ and one of $\frac\pi2$, so the sum is $\frac\pi2$, which is $\frac\pi2$ less than the Euclidean sum of a triangle. Thus the area is $\frac\pi2\approx1.5708$.
Poincaré disk
Draw the triangle, and it will look like this:
No right angle here, nor is it obviously isosceles. The segment $vw$ belongs to a circle with center at $\left(\frac54,\frac14\right)$. You can verify that by checking that inverting $v$ or $w$ results in a point on that circle, too. You can use that to compute the lines connecting that center to $v$ resp. $w$, then have the tangents in these points orthogonal to that, and finally compute the angle between tangent and the line to the origin. Sum the angles, subtract it from $\pi$ and you have the area $\pi-2\arctan3\approx0.6435$.
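Since the disk model is conformal, the three angles can be computed as Euclidean angles between tangent directions. A numerical sketch of the computation described above (my own; all three angles happen to be acute here, so the acute line-line angle suffices):

```python
import math

def line_angle(d1, d2):
    """Acute angle between two lines given by direction vectors d1, d2."""
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    a = math.acos(max(-1.0, min(1.0, dot / (math.hypot(*d1) * math.hypot(*d2)))))
    return min(a, math.pi - a)

v, w, c = (0.5, 0.0), (0.5, 0.5), (1.25, 0.25)   # c: center of the arc through v, w

# The arc is a geodesic iff its circle meets the unit circle at right angles,
# i.e. |c|^2 = r^2 + 1; check that, and that v, w lie on the same circle.
r2 = (v[0] - c[0])**2 + (v[1] - c[1])**2
assert abs((w[0] - c[0])**2 + (w[1] - c[1])**2 - r2) < 1e-12
assert abs(c[0]**2 + c[1]**2 - r2 - 1) < 1e-12

# Tangent to the arc at p is perpendicular to the radius p - c.
tangent = lambda p: (-(p[1] - c[1]), p[0] - c[0])
ang_u = line_angle((1, 0), (1, 1))           # two straight geodesics through 0
ang_v = line_angle((-1, 0), tangent(v))      # chord v -> u vs. arc at v
ang_w = line_angle((-1, -1), tangent(w))     # chord w -> u vs. arc at w

area = math.pi - (ang_u + ang_v + ang_w)     # angle deficit
assert abs(area - (math.pi - 2 * math.atan(3))) < 1e-9
```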
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
3 interior points in a grid based polygon Given a polygon with vertices on a grid with 3 interior grid points and no 3 vertices lying on the same line.
Is it true that all vertices are on the same circle?
EDITED
There is also another counter example with convex points:
|
Would this qualify as a counterexample (J - I - H - G - L and then the red sides)? It is a polygon, it has vertices on a grid, 3 interior points, and no 3 vertices lying on the same line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Confusion regarding the domain of a function Let $f(x)=x^2$ and $g(x)=x$. What is the domain of $\frac{f(x)}{g(x)}$?
Evaluating $\frac{f(x)}{g(x)}$ gives us $x$. Does that mean that its domain is all real numbers?
If we evaluate the function at $x=0$, $\frac{f(0)}{g(0)}$, then $g(0)$ will gives us zero. Does that mean zero is not in the domain of $\frac{f(x)}{g(x)}$?
From what I understand, the domain of $x$ is the set of all real numbers, but the domain of $\frac{f(x)}{g(x)}$ is the set of all real numbers except zero. Am I right?
Edit: $f(x)=2x^2$. Sorry. I forgot to add the two. To make it less confusing, I'm just gonna remove the "2" in $2x$
|
The answer to your question is: It depends. Namely on the domain specified for $f$ and the domain specified for $g$.
Attention: A function is not fully specified as long as the domain of the function is not given.
At first you have to check the domain of the function $f$ and the domain of the function $g$. Although it seems natural that the domain is the largest possible set for which a function gives reasonable values, this has always to be clarified before doing some calculation.
The domain of $f/g$ is the intersection of the domain of $f$ and $g$ minus all points where $g(x)=0$.
Example 1:
\begin{align*}
&f:\mathbb{R}\rightarrow\mathbb{R}&\qquad &g:\mathbb{R}\rightarrow\mathbb{R}\\
&f(x)=2x^2&\qquad &g(x)=x\\
\\
&\frac{f(x)}{g(x)}=2x&\qquad &x\in\mathbb{R}\setminus\{0\}
\end{align*}
Example 2:
\begin{align*}
&f:\mathbb{R^+}\rightarrow\mathbb{R}&\qquad &g:\mathbb{R^+}\rightarrow\mathbb{R}\\
&f(x)=2x^2&\qquad &g(x)=x\\
\\
&\frac{f(x)}{g(x)}=2x&\qquad &x\in\mathbb{R}^+
\end{align*}
Example 3:
\begin{align*}
&f:\{0\}\rightarrow\mathbb{R}&\qquad &g:\mathbb{R}\rightarrow\mathbb{R}\\
&f(x)=2x^2&\qquad &g(x)=x\\
\\
&\frac{f(x)}{g(x)} &\qquad&\text{ is not defined}
\end{align*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Solving $\sqrt{8+2x-x^2} > 6-3x$.
So I was solving this inequality.
$$\sqrt{8+2x-x^2} > 6-3x$$
*
*First things first, I obtained that the common domain of definition is actually $[-2,4]$.
*Next we would square and solve the quadratic that follows.
But the "solution" seems to have a part, where they took $6-3x \geq 0$,
which gave another restriction for $x$ as $(-\infty,2]$.
I did not understand this. Why was this necessary?
|
Let's say we have an inequality $\sqrt a>b$. We're often interested in getting rid of the square root, so we want to do something along the lines of 'squaring both sides'. But squaring both sides doesn't necessarily preserve the inequality.
Example:
$$\sqrt5>-3$$
is true, but squaring both sides gives
$$(\sqrt5)^2>(-3)^2 \implies 5\gt9,$$
which is clearly false.
If you want a general rule, then you should use
$$|a|\gt|b|\iff a^2>b^2$$
This explains why you need to consider separate cases here
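A brute-force scan (my own sketch) illustrating the point on the inequality $\sqrt{8+2x-x^2}>6-3x$ from the question: the solution set turns out to be $(1,4]$, and for $x>2$ (where $6-3x<0$) no squaring is needed at all:

```python
import math

def holds(x):
    """True iff sqrt(8+2x-x^2) is defined and exceeds 6-3x."""
    rad = 8 + 2 * x - x * x
    return rad >= 0 and math.sqrt(rad) > 6 - 3 * x

for i in range(-300, 501):           # scan x in [-3, 5] in steps of 0.01
    x = i / 100
    assert holds(x) == (1 < x <= 4)  # solution set is (1, 4]
```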
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2369972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
}
|
Picard group of $\Bbb R[x_1,\dots,x_n]/(x_1^2+\dots+x_n^2-1)$ and $\Bbb C[x_1,\dots,x_n]/(x_1^2+\dots+x_n^2-1)$ What is the Picard group of $\Bbb R[x_1,\dots,x_n]/(x_1^2+\dots+x_n^2-1)$, i.e. the coordinate ring of real sphere $S^{n-1}$, and $\Bbb C[x_1,\dots,x_n]/(x_1^2+\dots+x_n^2-1)$?
As $\Bbb R[x_1,x_2]/(x_1^2+x_2^2-1)$ is not a UFD while $\Bbb C[x_1,x_2]/(x_1^2+x_2^2-1)$ is, maybe there is some difference between two results.
|
For the reals, $n=2$, Picard group is $\mathbb{Z}/2\mathbb{Z}$ and $n>2$, it is trivial. For complex numbers, Picard group is trivial for $n= 2$, equal to $\mathbb{Z}$ when $n=3$ and trivial for $n>3$.
Let us first look at the case of the reals. For $n=2$, it is easy to check that $I=(x_1, 1-x_2)$ generates the Picard group and it is not trivial, but it is 2-torsion.
Most of the proofs will depend upon a useful result due to Nagata, which I state in slightly restricted form.
If $A$ is an integral domain and $p\in A$ is a prime (this means $pA$ is a non-zero prime ideal) then $A$ is a UFD if and only if $A_p$, the localization at $p$ is a UFD.
Given this, let us show that for $n>2$, the ring in question is a UFD. Since $n>2$, we see that $x_n\in A=\mathbb{R}[x_1,\ldots, x_n]/(x_1^2+\cdots+x_n^2-1)$ is a prime. Inverting $x_n$, $A_{x_n}$ can be written as $\mathbb{R}[u_1,\ldots, u_n, u_n^{-1}]/(u_1^2+\cdots+u_{n-1}^2-u_n^2+1)$ where $u_i=x_i/x_n, i<n, u_n=x_n^{-1}$. Thus, it suffices to show that $B=\mathbb{R}[u_1,\ldots, u_n]/(u_1^2+\cdots-u_n^2+1)$ is a UFD. Now, change variables again with $v_{n-1}=u_{n-1}+u_n, v_n=u_{n-1}-u_n$ and then our ring is defined by the equation $u_1^2+\cdots+u_{n-2}^2+v_{n-1}v_n+1=0$. Now, again $v_{n-1}\in B$ is a prime and so it suffices to show that $B_{v_{n-1}}$ is a UFD. This ring is just $\mathbb{R}[u_1,\ldots, u_{n-2}, v_{n-1}, v_{n-1}^{-1}]$, since we can write $v_n$ in terms of the others. This is just the localization of a polynomial ring and thus a UFD.
Similar arguments can be made for complex numbers, the only slightly difficult case is $n=3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Finding the number of elements of a set
Let $S$ be the set of all integers from $100$ to $999$ which are neither divisible by $3$ nor divisible by $5$. The number of elements in $S$ is
*
*$480$
*$420$
*$360$
*$240$
My answer is coming out as $420$, but in the actual answer-sheet the answer is given as $480$. Why is my answer not correct? Please, someone help.
|
Solve[{100 <= n <= 999, ! Element[n/3, Integers], ! Element[n/5, Integers]}, n, Integers] // Length
(* 480 *)
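For a cross-check without a CAS, the same count by direct enumeration and by inclusion-exclusion (the helper name is mine):

```python
def multiples_in(lo, hi, d):
    """How many multiples of d lie in [lo, hi]."""
    return hi // d - (lo - 1) // d

direct = sum(1 for n in range(100, 1000) if n % 3 and n % 5)
by_inclusion_exclusion = (900
                          - multiples_in(100, 999, 3)
                          - multiples_in(100, 999, 5)
                          + multiples_in(100, 999, 15))
assert direct == by_inclusion_exclusion == 480
```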
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Compilation of proofs for the summation of natural squares and cubes I want to know different proofs for the following formulas,
$$
\sum_{i=1}^n{i^2} = \frac{(n)(n+1)(2n+1)}{6}
$$
$$
\sum_{i=1}^n{i^3} = \frac{n^2(n+1)^2}{2^2}
$$
Please do not mark this as duplicate, since what I specifically want is to be exposed to a variety of proofs using different techniques (I did not find such a compilation anywhere on the net)
I am only familiar with two proofs, one which uses expansion of $(x+1)^2 - x^2$ and $(x+1)^3 - x^3$ and the other which uses induction. I have provided link for the induction proof in a self-answer.
I am particularly interested in proof without words and proofs which use a unrelated mathematical concept (higher level math upto class 12 level is acceptable).
Also,
(1) Don't think I am being rude or anything, it is out of genuine interest that I am asking this question.
(2) Someone marked this as a duplicate of Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula
My question is different in three ways:
(i) I want to focus only on these two summation and not the general case,
(ii) Hence, it follows that the proofs which I am looking for a simpler than the ones provided in that link and are simpler (using images, pictures or high school algebra). What I want is to study new proofs. I believe it is a good practice when learning math to so this.
(iii) Since the proofs in the link are given for the general case, they are complicated and I am finding it hard to understand them. If someone is able to use the same method to the two cases in my question, then it would probably become much simpler and easier to digest.
Appendix
Feel free to make use of these topics in your answers,
Calculus
Basic Binomial Expansion
Coordinate Geometry
Algebra (upto what 18 year olds learn)
Taylor Series Expansions
Geometry (18 year old level)
basically.............math which 18 year old's learn on Earth.
If you want to err, then err on the higher math side:)
Answers Compilation List
*
*By Newton series
*By Sterling Numbers
*By Induction
*From the book Generatingfunctionology by Herbert Wilf
*By generalizing the following pattern
$$\begin{align}
&\ \,4\cdot5\cdot6\cdot7\\=&(1\cdot2\cdot3\cdot4+2\cdot3\cdot4\cdot5+3\cdot4\cdot5\cdot6+4\cdot5\cdot6\cdot7)\\-&(0\cdot1\cdot2\cdot3+1\cdot2\cdot3\cdot4+2\cdot3\cdot4\cdot5+3\cdot4\cdot5\cdot6)\\
=&(1\cdot2\cdot3\cdot4+2\cdot3\cdot4\cdot4+3\cdot4\cdot5\cdot4+4\cdot5\cdot6\cdot4)\\
\end{align}$$
*By Lagrangian Interpolation
*By Formal Differentiation
*By the Euler-Maclaurin Summation Formula
*By Assuming that the expression is a polynomial of degree $2$.
*A Proof Without Words for the cube case
*By integrating and assuming a error term.
*SimplyBeatifulArt's Personal Approach
|
By Lagrangian interpolation.
The expression of the sum of the cubes must be a quartic polynomial, because its first order difference is cubic. Furthermore, it has no constant term because the sum of no numbers is zero.
Hence, the average of the cubes is a cubic polynomial, by the four points
$$(1,1),\left(2,\frac{1+2^3}2\right),\left(3,\frac{1+2^3+3^3}3\right),\left(4,\frac{1+2^3+3^3+4^3}4\right).$$
The requested polynomial is the Lagrangian interpolant.
Average of ones (constant, by $(1,1)$):
$$\overline S_0(n)=1$$
Average of naturals (linear, by $(1,1)$ and $(2,\frac32)$):
$$\overline S_1(n)=\frac{\frac32-1}{2-1}(n-1)+1=\frac{n+1}2$$
Average of squares (quadratic, by $(1,1)$, $(2,\frac52)$ and $(3,\frac{14}3)$):
$$\overline S_2(n)=\cdots=\frac{2n^2+3n+1}{6}$$
Average of cubes (cubic, by $(1,1)$, $(2,\frac92)$, $(3,12)$ and $(4,25)$):
$$\overline S_3(n)=\cdots=\frac{n^3+2n^2+n}4$$
An explicit formula can be obtained for any degree, by means of the Lagrangian interpolation formula for equidistant points.
Newtonian interpolation could be easier.
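A small exact-arithmetic check of the interpolation argument (my own sketch): the Lagrange interpolant through the four data points reproduces $\overline S_3(n)$, hence the sum of cubes, for every $n$:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the Lagrange interpolant through `points` at x, exactly."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Four data points: average of the first n cubes, n = 1..4.
pts = [(n, Fraction(sum(k**3 for k in range(1, n + 1)), n)) for n in range(1, 5)]

for n in range(1, 50):
    avg = lagrange_eval(pts, n)
    assert avg == Fraction(n**3 + 2 * n**2 + n, 4)        # = n(n+1)^2 / 4
    assert n * avg == Fraction(n**2 * (n + 1)**2, 4)      # sum of first n cubes
```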
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 13,
"answer_id": 8
}
|
Does $3$ divide $2^{2n}-1$? Prove or find a counter example of : $3$ divide $2^{2n}-1$ for all $n\in\mathbb N$.
I compute $2^{2n}-1$ until $n=5$ and it looks to work, but how can I prove this ? I tried by contradiction, but impossible to conclude. Any idea ?
|
HINT: $$2^{2n}-1=4^n-1\equiv 1^n-1=1-1\equiv 0\mod 3$$
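A one-liner check of the hint for many $n$ (Python's three-argument `pow` does modular exponentiation):

```python
for n in range(200):
    assert pow(4, n, 3) == 1            # 4 = 1 (mod 3), hence 4^n = 1 (mod 3)
    assert (2**(2 * n) - 1) % 3 == 0    # so 3 divides 2^(2n) - 1
```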
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
}
|
Does $\partial_t u(x,t) - \partial_x u(x,t) = 0$ have a non-trivial solution? Consider the linear PDE $$\partial_t u(x,t) - \partial_x u(x,t) = 0$$ on the domain $$\Omega=\{(x,t)\;\;|\;\;0\leq x \leq 1,\;0\leq t \leq T\}$$
with boundary conditions $$u(0,t)=u(1,t)=c$$ and the initial condition $$u(x,0)=u_0(x).$$
This is a special case of the advection-diffusion equation or the heat equation with additional terms. In trying to solve this PDE using separation of variables and specifying $u(0,t)=u(1,t)=0$, I find only the trivial solution. In trying to use Fourier methods to solve the more general problem \begin{align}\partial_{t} u(x,t)&=k\,\partial_{xx}u(x,t)+A\,\partial_x u(x,t)+B\,u(x,t)\\u(0,t)&=u(L,t)=0\\u(x,0)&=u_0(x)\end{align}
on the domain $$\Omega=\{(x,t)\;\;|\;\;0\leq x \leq L,\;0\leq t \leq T\},$$ I find the solution to be $$u(x,t)=e^{-\frac{A}{2k}x+(-\frac{A^{2}}{4k}+B)t}\sum_{n=1}^{\infty}b_{n}sin\left(\frac{n\pi x}{L}\right)e^{-n^{2}\pi^{2}kt/L^{2}}$$where$$
b_{n}=\int_{0}^{L}e^{\frac{A}{2k}\xi}f(\xi)\,sin\left(\frac{n\pi\xi}{L}\right)\,d\xi$$
and $f(x)=u(x,0)$ (see P. 52 of O'Neil, Beginning Partial Differential Equations, 3rd Edition, Wiley (2014)). However, for the case of $k=B=0$, the Fourier coefficients blow up.
Does this equation have a non-trivial solution?
|
Given the boundary conditions, there is only the trivial solution, $u\equiv c$ if $T\ge 1$. However, for $T<1$, non-trivial solutions are possible. This can be seen by using the method of characteristics:
Here, the characteristics are easily found, they are the lines parametrized by $s\mapsto (-s, s)$, along these lines the solution of your PDE has to be constant. By ignoring the particular choice of $T$ and the boundary condition at $x=0$, we get the unique solution:
$$u(x,t) = \begin{cases}u_0(x+t),\quad &\text{for }0<x<1-t, \\
c,\quad &\text{for }1-t\le x<1 \end{cases}.$$
Now we need to also take the B.C. at $x = 0$ into account, which is fulfilled iff (according to the above representation of $u$) $c = u(0,t) = u_0(t)$ for $t<T$. If $u_0$ doesn't fulfill this condition, there is no solution to the problem at all.
If $T\ge 1$, this means that $u_0 \equiv c$ is necessary, and $u\equiv c$ is the only solution, which is trivial.
If $T<1$, we are free to choose $u_0(x)\neq c$ for $1>x>T$, without conflicting the left boundary condition.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
}
|
Why is this inference invalid? So I purchased a book on logic (for beginners) as the subject interests me, and the author presents the following statement as an example of an invalid inference:
"Everyone wanted to win the prize; so the person who won the race wanted to win the prize."
Symbolised as follows, where xP is 'x wanted to win the prize' and xR is 'x won the race': $$\frac{\forall x\;xP}{(|x\;xR)P }$$ where |x means "the object x, such that" (a notation I cannot seem to find anywhere else?).
The author states this is invalid because there is potentially a situation s in which everyone satisfies P but nobody satisfies R. But I do not understand how this could be true, due to the structure of the sentence - surely, xR in the conclusion means the race was ran and there was a winner?
|
The notation is difficult to understand; the way I read it is:
$$\frac{\forall x\;xP}{(|x\;xR)P }$$
is equivalent to:
$$\forall x P(x) \rightarrow \exists x (R(x)\land P(x))$$
Which implies:
$$\forall x P(x) \rightarrow \exists x R(x)$$
And this is indeed not the case as $R(x)$ could never be true, for any x.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
}
|
Symbolic Notation for $\theta "=" \arcsin(-.5)$? I'm teaching PreCalculus and the following issue has always bugged me.
Problem: Solve $\sin\theta = -.5$ for $0 \le \theta \le 2\pi$.
Solution:
\begin{align*}
\sin\theta &= -.5\\
\theta &= \arcsin(-.5) = -\frac\pi6
\end{align*}
But to get this into our desired domain, our solutions are $\boxed{\theta = \frac{7\pi}6 \text{ or }\frac{11\pi}6}$.
So my objection is the line $\theta = \arcsin(-.5)$, because that's really not true. $\theta$ can be a whole lot of things! So does there exist some symbol or notation that expresses this? Something a-la "If $x^2=9$, then $x = \pm 3$." Like
$$\theta \stackrel{\text{is related to}}{\sim} \arcsin(-.5) = -\frac\pi6 ?$$
|
The way I always explained it to my students was "First, you find a solution, then you find the solution."
Let $\theta_0 = \arcsin(-\frac 12)$.
The range of $\arcsin$ is $-\frac{\pi}{2} \le \theta_0 \le \frac{\pi}{2}$. Since, on the unit circle, $\sin \theta = y=-\frac 12$, we see quickly that
$\theta_0 = -\frac{\pi}{6}$. The two red dots indicate the two points on the unit circle for which $\sin \theta = -\frac 12$. One way to express this is
$\theta \in \{ -\frac{\pi}{6} + 2n\pi : n \in \mathbb Z\} \cup
\{ \frac{7\pi}{6} + 2n\pi : n \in \mathbb Z\}$
To achieve $0 \le \theta \le 2\pi$ we find that
$\theta \in \{\frac{7\pi}{6}, \frac{11\pi}{6} \}$
where $ \frac{11\pi}{6} = -\frac{\pi}{6} + 2\pi$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Third hardest question on a hospital IQ test(Fill the blank)
Place the numbers 1 through 9 in the available input boxes so that the
results of performing the mathematical operations in any row and
column (from left to right and top to bottom) always equals 13.
\begin{bmatrix}\square&+&\square&-&\square\\
-& &+& &\times\\ \square&\times&\square&+&\square\\
+&&-&&+\\ \square&\times&\square&-&\square\\ \end{bmatrix}
It took me an hour to solve this. Do you have an easier way to solve it? Let's explore.
Original image here.
|
Label spaces A through I.
A+B, B+E,A+G are all at least 14. So A,B,E, and G are all greater than or equal to 5.
D x E must be less than 13 and E is 5 or more so D = 1 or 2.
If D=2 then E=5 or E=6 (since DxE must leave room for F, which is at least 1). If E=5 then B+5-H=13 so B=9, H=1, and then Gx1-I=13 is impossible. If E=6 then F=1, and B+6-H=13 forces B=9, H=2 or B=8, H=1, but 2 and 1 are already used by D and F.
So D isn't 2. So D=1. So E+F=13, so F is at least 4. Now consider CxF+I=13: since 1 is taken, C is at least 2 and I is at least 2.
If C is at least 3 then CxF+I is at least 3x4+2=14, so C is at most 2 and at least 2. So C is 2. Then 2xF+I=13 makes I odd, so I is at least 3.
2xF + I = 13 and F is at least 4, so we have either 2x4 + 5 or 2x5 + 3.
If F=4, I=5, E=9. GxH = 18 and neither G nor H is 9, so G and H are 3 and 6. And G is at least 5, so G = 6, H = 3, and B+9-3=13 so B=7. That leaves A = 8. And that works, as 8+7-2=13 and 8-1+6=13.
That's a solution.
Any other solution would have F=5, I=3, and E=8.
So A+B=15, GxH=16, A+G=14, B-H=5. Digits 1, 2, 3 are already assigned, so G and H are at least 4; then GxH=16 would force G=H=4, which is impossible.
So only solution is
A=8,B=7,C=2,D=1,E=9,F=4,G=6,H=3,I=5.
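The case analysis can be cross-checked by brute force. Here is a short Python sketch (added for verification; not part of the original answer) that tries all $9!$ digit placements:

```python
from itertools import permutations

def solve():
    """Return every digit placement satisfying all six row/column equations,
    with each expression evaluated left to right as in the grid."""
    sols = []
    for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        if (a + b - c == 13 and d*e + f == 13 and g*h - i == 13 and
                a - d + g == 13 and b + e - h == 13 and c*f + i == 13):
            sols.append((a, b, c, d, e, f, g, h, i))
    return sols
```

Running it confirms that $(A,\dots,I)=(8,7,2,1,9,4,6,3,5)$ is the only placement satisfying all six equations.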
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Rationalising the denominator surds I have been struggling on this question. I don't understand how to change a negative surd fraction to a positive surd fraction.
Question: Rationalise and simply
$$\frac{2}{1+{\sqrt 6}}$$
What I did:
$\frac{2(1-{\sqrt 6})}{(1+{\sqrt 6)(1-{\sqrt 6})}}$
= $\frac{2{-2\sqrt 6 }}{-5}$
When I checked the answers It said the answer was
$$\frac{2({\sqrt 6} -1)}{5}$$
What did I do wrong
Thank you and help is appreciated
|
Nothing went wrong
If you take your result $\dfrac{2-2\sqrt 6 }{-5}$
Multiply the numerator and denominator by $-1$ and factor out $2$ in the numerator; you get
$$\dfrac{2(\sqrt 6 -1)}{5}$$ which is the result of the book
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2370915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Cross product in higher dimensions Suppose we have a vector $(a,b)$ in $2$-space. Then the vector $(-b,a)$ is orthogonal to the one we started with. Furthermore, the function $$(a,b) \mapsto (-b,a)$$ is linear.
Suppose instead we have two vectors $x$ and $y$ in $3$-space. Then the cross product gives us a new vector $x \times y$ that's orthogonal to the first two. Furthermore, cross products are bilinear.
Question. Can we do this in higher dimensions? For example, is there a way of turning three vectors in $4$-space into a fourth vector, orthogonal to the others, in a trilinear way?
|
Yes. It is just like in dimension $3$: if your vectors are $(t_1,t_2,t_3,t_4)$, $(u_1,u_2,u_3,u_4)$, and $(v_1,v_2,v_3,v_4)$, compute the formal determinant:$$\begin{vmatrix}t_1&t_2&t_3&t_4\\u_1&u_2&u_3&u_4\\v_1&v_2&v_3&v_4\\e_1&e_2&e_3&e_4\end{vmatrix}.$$ You then see $(e_1,e_2,e_3,e_4)$ as the canonical basis of $\mathbb{R}^4$. Then the previous determinant is $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ with\begin{align*}\alpha_1&=t_4u_3v_2-t_3u_4v_2-t_4u_2v_3+t_2u_4v_3+t_3u_2v_4-t_2u_3v_4\\\alpha_2&=-t_4u_3v_1+t_3u_4v_1+t_4u_1v_3-t_1u_4v_3-t_3u_1v_4+t_1u_3v_4\\\alpha_3&=t_4u_2v_1-t_2u_4v_1-t_4u_1v_2+t_1u_4v_2+t_2u_1v_4-t_1u_2v_4\\\alpha_4&=-t_3u_2v_1+t_2u_3v_1+t_3u_1v_2-t_1u_3v_2-t_2u_1v_3+t_1u_2v_3\end{align*}It's a vector orthogonal to the other three.
I followed a suggestion taken from the comments on this answer: to put the entries $e_1$, $e_2$, $e_3$, and $e_4$ at the bottom. It makes no difference in odd dimension, but it produces the natural sign in even dimension.
Following another suggestion, I would like to add this remark:$$\alpha_1=-\begin{vmatrix}t_2&t_3&t_4\\u_2&u_3&u_4\\v_2&v_3&v_4\end{vmatrix}\text{, }\alpha_2=\begin{vmatrix}t_1&t_3&t_4\\u_1&u_3&u_4\\v_1&v_3&v_4\end{vmatrix}\text{, }\alpha_3=-\begin{vmatrix}t_1&t_2&t_4\\u_1&u_2&u_4\\v_1&v_2&v_4\end{vmatrix}\text{ and }\alpha_4=\begin{vmatrix}t_1&t_2&t_3\\u_1&u_2&u_3\\v_1&v_2&v_3\\\end{vmatrix}.$$
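As a sanity check of the cofactor formula above, here is a small self-contained Python sketch (an illustration added here, not from the original answer); it builds each component from a $3\times3$ minor with alternating sign:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cross4(t, u, v):
    """Vector orthogonal to t, u, v in R^4, obtained by expanding the
    formal determinant along its bottom row of basis vectors."""
    comps = []
    for j in range(4):
        minor = [[row[k] for k in range(4) if k != j] for row in (t, u, v)]
        comps.append((-1) ** (j + 1) * det3(minor))
    return comps
```

For any three vectors the result is orthogonal to all of them; for example `cross4((1,0,0,0), (0,1,0,0), (0,0,1,0))` returns `[0, 0, 0, 1]`.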
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 4,
"answer_id": 0
}
|
Prove $ 2<(1+\frac{1}{n})^{n}$ How to prove that $ 2<(1+\frac{1}{n})^{n}$ for every integer $n>1$. I was thinking by induction, it works for $n=2$ but then I couldn't move forward.
|
Approach using AM-GM Inequality
First we show that
$$f(n)=\left(1+\frac{1}{n}\right)^n$$
is monotone increasing. To see this, apply the AM-GM inequality to the following $n+1$ terms
$$\left\{1,\ \underbrace{1+\frac{1}{n},\ \dots,\ 1+\frac{1}{n}}_{n\text{ times}}\right\}$$
You get
$$\frac{1+n\left(1+\frac{1}{n}\right)}{n+1}=\frac{n+2}{n+1} \geq \left(1+\frac{1}{n}\right)^{\frac{n}{n+1}}$$
Rearranging gives you $f(n) < f(n+1)$ since the terms in AM-GM are not equal.
Since $f(1)=2$, the result follows.
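These two facts, $f(1)=2$ and $f$ strictly increasing, are also easy to check numerically; a small Python sketch (added here as illustration, not from the original answer):

```python
def f(n):
    """The sequence (1 + 1/n)**n discussed above."""
    return (1 + 1 / n) ** n

# f(1) == 2 exactly, and the sequence increases toward e ~ 2.71828
values = [f(n) for n in range(1, 200)]
```

Numerically the values climb from $2$ toward $e$, so every term past the first exceeds $2$.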
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 5
}
|
Limit for entropy of prime powers defined by multiplicative arithmetic function This question is related to my other question ( Entropy of a natural number ).
Let $f \ge 0$ be a multiplicative arithmetic function and $F(n) = \sum_{d|n}f(d)$.
Define the entropy of $n$ with respect to $f$ to be
$H_f(n) = -\sum_{d|n} \frac{f(d)}{F(n)}\log(\frac{f(d)}{F(n)}) = \log(F(n)) - \frac{1}{F(n)}\sum_{d|n} f(d)\log(f(d))$
For instance in the last question we had $f=id$.
Then I can prove that $H_f(mn) = H_f(m)+H_f(n)$ if $\gcd(m,n)=1$, hence $H_f$ is an additive function.
Is it true that $\lim_{\alpha \rightarrow \infty} H_f(p^\alpha)$ always exists, where $p$ is a prime?
In the last question we had $\lim_{\alpha \rightarrow \infty} H_{id}(p^\alpha) = \frac{p \log(p)}{p-1}-\log(p-1)$
If $f=\phi$ is the Euler totient function, then I can prove, that
$\lim_{\alpha \rightarrow \infty} H_{\phi}(p^\alpha) = \frac{ \log(p)}{p-1}+\log(\frac{p}{p-1})$
Edit:
I found a counterexample: $f\equiv 1$,$F(n) = \tau(n)$, where $\tau$ counts the divisors of $n$, then $H_f(n)=\log(\tau(n))$ and $H_f(p^\alpha)=\log(\alpha+1)$ is unbounded.
Hence the question might be phrased like this:
What properties must $f$ have such that the above limit exists?
Edit Why is $H_f$ additive?:
First $H_f(n) = \log(F(n)) - \frac{1}{F(n)} \sum_{d|n} f(d) \log(f(d))$
Denote by $E_f(n) = \sum_{d|n} f(d) \log(f(d))$
Then using the multiplicativity of $f$ one can show that $E_f(mn) = F(m)E_f(n)+F(n)E_f(m)$ when $\gcd(m,n)=1$.
Using this one can show that $H_f$ is additive.
If you have a counterexample $f$ and $m,n$ where this is not true, please post it.
|
If $f(n)$ is multiplicative and non-zero then let $F(n) = \sum_{d | n} f(d)$ and
$$ h(n) = \frac{\sum_{d | n} f(d) \log f(d)}{F(n)}$$
If $gcd(n,m)=1$ then
$h(n)+h(m)$ $ = \frac{F(m)\sum_{d | n} f(d) \log f(d)+F(n)\sum_{d | m} f(d) \log f(d)}{F(n)F(m)}$ $=\frac{\sum_{d ' | m,d | n} f(dd') \log f(dd')}{F(nm)}=h(nm)$
Thus $$h(n) = \sum_{p^k \| n} h(p^k)=\sum_{p^k \| n}\frac{\sum_{m=0}^k f(p^m)\log f(p^m)}{\sum_{l=0}^k f(p^l)}$$
Now $\log F(n)$ is additive too, as well as
$$H_f(n) = \log F(n)-h(n) = \sum_{p^k \| n}\sum_{m=0}^k \log f(p^m) \left(1-\frac{f(p^m)}{\sum_{l=0}^k f(p^l)}\right)$$
If $f(n) \ge 1$ then $H_f(n)$ small implies that
$$\sum_{m=0}^k \left(1-\frac{f(p^m)}{\sum_{l=0}^k f(p^l)}\right)=\frac{k}{\sum_{l=0}^k f(p^l)}-1$$
is small. It happens for example when $f(n) = \phi(n)$.
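The claimed limit for $f=\phi$ can be checked numerically (a Python sketch added for illustration; the function names are mine): compute $H_\phi(p^\alpha)$ directly from the divisor weights $\phi(p^m)$ and compare with $\frac{\log p}{p-1}+\log\frac{p}{p-1}$:

```python
import math

def H_phi_prime_power(p, alpha):
    """Entropy H_f(p**alpha) for f = Euler phi: the divisors of p**alpha
    are p**m, with phi(1) = 1 and phi(p**m) = (p-1)*p**(m-1) for m >= 1."""
    weights = [1] + [(p - 1) * p ** (m - 1) for m in range(1, alpha + 1)]
    F = sum(weights)
    return math.log(F) - sum(w * math.log(w) for w in weights) / F

def claimed_limit(p):
    """The limit stated in the question for f = phi."""
    return math.log(p) / (p - 1) + math.log(p / (p - 1))
```

For $p=2,3,5,7$ and $\alpha=60$ the two values agree to many decimal places.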
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Explanation of this theorem on combinations There is this theorem in my book in the chapter of permutations and combinations which states: The total number of combinations of n different things taken any number of them at a time is $2^n$.
Its proof is given as: Each thing may be disposed of in two ways-it may or may not be chosen. Therefore, the total number of ways of disposing of all the n things $=2 \times 2\times 2 \times ...\text{n times}=2^n$. Hence, the number of combination (selections)=$2^n$.
I searched the internet for this theorem but could not find it. Can someone please explain the meaning of this theorem to me as well as its proof. What does "disposed" mean here?
|
Suppose you have some set $S=\{s_1,...,s_n\}$.
You're essentially trying to see how many subsets of $S$ there are (ie. the cardinality of the powerset of $S$).
For each subset $S_i$, assign it the ordered tuple $((x_i)_1,...,(x_i)_n)$ where $(x_i)_k=\begin{cases}0 & s_k\not\in S_i\\1 & s_k\in S_i \end{cases}$
Clearly, the ordered tuples are different for each subset and all possible tuples in $\{0, 1\}^n$ are assigned to some subset. Therefore counting the number of subsets is equivalent to counting the number of possible ordered tuples.
Each ordered tuple is a sequence of $0$s and $1$s and therefore counting the number of possible tuples is equivalent to counting the number of different length $n$ binary strings.
The answer is $2^n$.
What I've laid out here is essentially the idea which your book explains. Each item in the set can be 'on' or 'off' and this is like binary.
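The bijection with binary strings is easy to make concrete. A short Python sketch (added as illustration): bit $k$ of the mask plays the role of the "chosen or not chosen" decision for the $k$-th item:

```python
def subsets(items):
    """All subsets of `items`: bit k of the mask says whether items[k] is chosen."""
    n = len(items)
    return [[items[k] for k in range(n) if mask >> k & 1]
            for mask in range(2 ** n)]
```

`subsets(['a', 'b', 'c'])` has $2^3=8$ entries, from the empty selection up to the full set.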
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proof for corresponding eigenvalue If x is an eigenvector of a matrix A, then show that its corresponding eigenvalue is given by $\lambda=\dfrac{Ax\cdot x}{x\cdot x}$
I tried starting from $(A-\lambda{I})x=0$.
$Ax-\lambda{Ix}=0$
$\lambda{Ix}=Ax$
$\lambda=\dfrac{Ax}{Ix}$. This now is a bit confusing. Any help?
|
Your intuition starts off right, it is a good idea to begin with the definition of an eigenvalue, i.e. $A x = \lambda x$. Now the problem is that on both sides of the equation you have vectors, so you cannot divide by $x$. The idea is to "transform" the equation into an equation that only involves scalar quantities. A good way to do this is to take the scalar product with another vector $y$, i.e. (I will write the scalar product $x \cdot y$ as $\langle x, y \rangle$ to avoid confusing it with the normal multiplication of real numbers)
$$
\langle Ax, y \rangle = \lambda \langle x,y \rangle.
$$
Now the question arises: which $y$ should we choose? To solve the equation for $\lambda$, we obviously would like to divide by $\langle x,y \rangle$ (which is a real number now!). But this number might be zero if we choose the wrong vector $y$ (e.g. if $y = 0$ or $y \perp x$). So we need to make sure that $\langle x,y \rangle \neq 0$. The only thing we know is that $x$ is an eigenvector. But by definition this means that $x \neq 0$, and by the properties of scalar products, you know that $\langle x, x \rangle \neq 0$ if and only if $x \neq 0$.
So $y = x$ looks like a good choice!
Then we get
$$
\langle Ax, x \rangle = \lambda \langle x,x \rangle \Rightarrow \lambda = \frac{\langle Ax, x \rangle}{\langle x,x \rangle}
$$
or in your notation
$$
\lambda = \frac{Ax \cdot x}{x \cdot x}.
$$
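A tiny numerical illustration (added here; plain Python, not from the original answer): for $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ the eigenvectors $(1,1)$ and $(1,-1)$ have eigenvalues $3$ and $1$, and the quotient recovers both:

```python
def rayleigh(A, x):
    """Return (Ax . x) / (x . x) for a matrix A (nested lists) and a vector x."""
    Ax = [sum(row[j] * x[j] for j in range(len(x))) for row in A]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(Ax, x) / dot(x, x)
```

Note that the formula only returns the eigenvalue when `x` really is an eigenvector; for other vectors it is just the Rayleigh quotient.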
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Minimum radius of curvature of a sinusoidal curve Intuitively, the minimum radius of curvature is obtained at the point of highest amplitude. How do I prove it mathematically?
I proceeded by assuming the curve, $y=A\sin\omega x$. We know that the radius of curvature is given by, $$\rho= \frac{\left[1+\left(\frac{dy}{dx}\right)^2\right]^{3/2}}{\left|\frac{d^2y}{dx^2}\right|}$$
So I proceeded by substituting the respective values and differentiating $\rho$ with respect to $x$ and putting it equal to zero but each time I got vague results instead of getting $T/4$ or $3T/4$, assuming $\omega= 2\pi/T$.
Any help is appreciated.
|
In full form (taking $A=\omega=1$, i.e. $y=\sin x$, and working on an interval where $\sin x>0$):
$${d \rho \over d x} = -\,\frac{2\cos x\,\sqrt{1+\cos ^2 x}\,\left(1+\sin ^2 x\right)}{\sin ^2 x}$$
Set it to zero: the only factor that can vanish is $\cos x$, so for $x\in(0,\pi)$ we get $x = \pi/2 \approx 1.5708$, and the obvious periodic repeats.
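A quick numerical check (added for illustration) for $y=\sin x$: evaluate $\rho(x)=(1+\cos^2 x)^{3/2}/|\sin x|$ on a fine grid over $(0,\pi)$ and locate the minimum:

```python
import math

def rho(x):
    """Radius of curvature of y = sin(x)."""
    return (1 + math.cos(x) ** 2) ** 1.5 / abs(math.sin(x))

xs = [k * math.pi / 10000 for k in range(1, 10000)]  # grid over (0, pi)
x_min = min(xs, key=rho)
```

The minimum sits at $x=\pi/2$, the crest of the wave, where $\rho=1$.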
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Exercises in comparison geometry After taking an introductory graduate differential geometry course last year and doing a bit of reading about the Ricci flow, I was considering reading Cheeger and Ebin's book on comparison geometry to get some exposure to classical results in Riemannian geometry.
However, the book doesn't really have any exercises. Is there by any chance a nice bank of exercises in Riemannian/comparison geometry (say, from a course that uses the book) with which it would be good to follow along?
|
Petersen's Riemannian Geometry has a few chapters on comparison geometry with quite a few exercises, so it might be worth a look.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that there exist constants $u,v$ such that $uA+vB$ is positive definite. $A, B$ are $n$ by $n$ symmetric real matrices where $x^TAx=x^TBx=0$ implies $x=0$.
Prove that there are real numbers $u, v$ making $uA+vB$ positive-definite.
I am not sure whether it is true or not. I tried to disprove it but found it hard to find an $x$ on the unit sphere killing both forms. Any help is appreciated. Thx.
|
It isn't true. Counterexample:
$$
A=\pmatrix{1&0\\ 0&-1},\ B=\pmatrix{0&1\\ 1&0}.
$$
Clearly, $x^TAx=0$ if and only if $x=(t,\pm t)^T$, and $x^TBx=0$ if and only if $x=(t,0)^T$ or $(0,t)^T$. So, the only solution to $x^TAx=x^TBx=0$ is the zero vector. However, $uA+vB$ is never positive definite because it has a zero trace.
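The counterexample can also be confirmed exhaustively over a grid of coefficients (a Python sketch added here, using Sylvester's criterion for $2\times2$ symmetric matrices):

```python
def is_positive_definite_2x2(m):
    """Sylvester's criterion: both leading principal minors must be positive."""
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

def combo(u, v):
    """u*A + v*B for A = diag(1, -1) and B with ones off the diagonal."""
    return [[u, v], [v, -u]]
```

Since $\det(uA+vB)=-(u^2+v^2)\le 0$, the second leading minor is never positive, so no choice of $(u,v)$ works.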
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2371918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to find the coordinates of points on a line perpendicular to a given plane I am given a plane equation $Ax+By+Cz+D=0$ and coordinates $(x,y,z)$ of a point $P$ lying on a plane. I need to determine coordinates $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2)$ of two points $P_1$ and $P_2$ which are located on a line that is passing through $P$ perpendicular to the plane. Furthermore I need $P_1$ and $P_2$ to be located at the same distance $d$ from the plane.
The illustration below shows everything better than I explained. Is it possible to find out these coordinates? I've always been bad with analytic geometry so I hope someone could put this to me simply. Thanks in advance.
|
I'll give you an example, hoping it will be better understood
Let the plane be $\pi:\;x+2 y+3 z+4=0$ and the point on it $P(1;\;2;-3)$
The line $r$ passing through $P$ and perpendicular to $\pi$ has the plane's normal $(1,2,3)$ as direction, so its parametric equation is $r=(1 + t,\; 2 + 2 t,\; -3 + 3 t)$
To find a pair of points $P_1$ and $P_2$ having distance $d=\sqrt{14}$ from $P$ you write the equation of the sphere with centre in $P$ and radius $\sqrt{14}$
$(x-1)^2+(y-2)^2+(z+3)^2=14$
and find the intersection with $r$ plugging $x=1+t;\;y=2+2t;\;z=-3+3t$ in the previous equation
$14 t^2=14\to t=\pm1$
therefore the points are $P_1(2;\;4;\;0),\;P_2(0;\;0;\;-6)$
Hope this helps
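The same recipe works for any plane, point, and distance: move from $P$ by $\pm d$ along the unit normal $(A,B,C)/\lVert(A,B,C)\rVert$. A Python sketch (added as illustration; the function name is mine):

```python
import math

def offset_points(A, B, C, D, P, d):
    """Two points at distance d from the plane, on the normal line through P.
    P is assumed to lie on the plane A*x + B*y + C*z + D = 0."""
    norm = math.sqrt(A * A + B * B + C * C)
    n = (A / norm, B / norm, C / norm)            # unit normal
    p1 = tuple(P[i] + d * n[i] for i in range(3))
    p2 = tuple(P[i] - d * n[i] for i in range(3))
    return p1, p2
```

For the plane $x+2y+3z+4=0$ with $P(1,2,-3)$ and $d=\sqrt{14}$ this gives $(2,4,0)$ and $(0,0,-6)$, both at distance $\sqrt{14}$ from the plane.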
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can we Clamp and Clip numbers mathematically? While programming I came up with a the following equation / workflow
clampValue = (value - minValueofRange) / (maxValueofRange - minValueofRange);
clippedValue = min(1, max(0, clampValue));
finalValue = clippedValue * scale;
Now the clipping ultimately causes an if and else in programming. Can we convert it completely to a mathematical equation?
Edit:
max and min are functions in c++ defined like:
int max(int v1, int v2){
if(v1 > v2) return v1;
else return v2;
}
Similar for the min() function
|
Using a piecewise function as in T. Linnell's answer or simply writing something like $\min(1,\max(0,c))$ would be clear and absolutely fine in mathematics.
If you just happen to be curious about how to represent $\min$ and $\max$ in a different way, $\max(a,b)=\dfrac{|a-b|+a+b}{2}$ and $\min(a,b)=\dfrac{a+b-|a-b|}{2}$.
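Combining the two identities gives a branch-free clamp. A Python sketch (added as illustration; `abs` hides the comparison internally, but no explicit `if`/`else` remains in your code):

```python
def max_(a, b):
    """max(a, b) via (|a - b| + a + b) / 2."""
    return (abs(a - b) + a + b) / 2

def min_(a, b):
    """min(a, b) via (a + b - |a - b|) / 2."""
    return (a + b - abs(a - b)) / 2

def clip01(c):
    """Clamp c into [0, 1] without writing an if/else."""
    return min_(1, max_(0, c))
```

This mirrors the `min(1, max(0, clampValue))` line in the question, just with the two helper identities substituted in.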
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Filling a cone part way, find formula for height expressed in V I have an ice cream cone, the pointy side downwards. It's 6 cm tall and the diameter of the opening is 3 centimeter. I fill it to some height $h$ . Now how do I find a formula for its height as a function of volume.
I understand the volume of a cone is calculated by $V = \frac{\pi}{3}r^2 h$ and I know how to isolate $h$. But I'm totally in the dark as to how to approach this. If I only fill it partly, let's just say I fill it to 5 centimeters, the diameter of the surface of the ice cream is smaller than the diameter of the opening of the cone itself. How do I calculate the volume then? Since I don't know the diameter of the cone at that point...
How does the diameter of the surface of the ice cream depend on the amount of ice cream in the cone? How do I approach this?
|
Draw a right angle triangle representing half of the vertical cross section of the cone. Draw a smaller triangle inside the previous triangle; this represents the partially filled ice cream. By using the ratio $\text{opening radius}:\text{cone height}=\text{radius of ice cream surface}:\text{height of the ice cream}$, you'll get the radius of the ice cream surface, and hence the volume.
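Carrying the ratio through: $r(h)=\frac{R}{H}h$, so $V=\frac{\pi}{3}\left(\frac{R}{H}\right)^2 h^3$, which inverts to $h=\sqrt[3]{3VH^2/(\pi R^2)}$. A Python sketch (added for illustration) with the question's cone, $H=6$ cm and $R=1.5$ cm, as defaults:

```python
import math

def height_from_volume(V, R=1.5, H=6.0):
    """Invert V = (pi/3) * (R*h/H)**2 * h for the fill height h."""
    return (3 * V * H ** 2 / (math.pi * R ** 2)) ** (1 / 3)
```

A full cone ($V=\frac{\pi}{3}R^2H$) returns $h=H$, and filling to half the height returns $h=H/2$, as expected from the similar-triangles scaling.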
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Expression for the highest power of 2 dividing $3^a\left(2b-1\right)-1$
Question: I am wondering if an expression exists for the highest power of 2 that divides $3^a\left(2b-1\right)-1$, in terms of $a$ and $b$, or perhaps a more general expression for the highest power of 2 dividing some even $n$?
EDIT: This is equivalent to finding an expression for the highest power of 2 dividing some even $n$, however finding it in terms of $a$ and $b$ may make it more useful for what I have described below.
Motivation For Asking:
The reason for this particular question is because, having had a look at the Collatz conjecture, I have found that positive odd integers $n$ of the form $2^a\cdot\left(2b-1\right)-1$ will iterate to the even value of $3^a\cdot\left(2b-1\right)-1$ after $a$ iterations of $\frac{3n-1}{2}$, then being divided by the highest power of 2 dividing it. Since I have also found that numbers of the form ${\left(\frac{2}{3}\right)}^c\cdot\left(2^{3^{c-1}\cdot(2d-1)}+1\right)-1$ become powers of 2 after $c$ iterations, I am hoping to find an expression for the number of iterations for $n$ to become a power of 2, and hence reach the 4-2-1 loop.
Updates:
EDIT 1: I have found a post on mathoverflow at https://mathoverflow.net/questions/29828/greatest-power-of-two-dividing-an-integer giving a rather long expression for the highest power of 2 dividing any odd positive integer $n$ which may be useful.
EDIT 2: Here is an image displaying patterns I find quite interesting. It is from a program I wrote which displays the exponent of the highest power of 2 dividing each value of $3^a\left(2b-1\right)-1$, represented by lighter pixels for higher values and darker pixels for lower values. It is plotted on a 2d grid such that each pixel on the x-axis represents the $b$ value, starting from one, and the same for $a$ on the y-axis:
This seems to suggest that if $a$ and $b$ are both either odd or even, the highest power of 2 dividing $3^a\left(2b-1\right)-1$ is 2 (I am currently having a look at this further).
EDIT 3: Defining $2^{b_s}$ as the highest power of 2 dividing $1.5^{a_s}(n_{s}+1)-1$, I know that any odd $n_s=2^{a_s}(2x_s-1)-1$ will reach:
$$n_{s+1}=2^{a_{s+1}}(2x_{s+1}-1)-1=\frac{3^{a_s}(2x_s-1)-1}{2^{b_s}}=\frac{1.5^{a_s}(n_{s}+1)-1}{2^{b_s}}$$
This has led me to find an expression for the $z$th iteration, $T_z(n_1)$, starting from some odd $n_1$, of:
$$T(n_s)=\frac{1.5^{a_s}\left(n_s+1\right)-1}{2^{b_s}}=n_{s+1}$$
The expression I found is as follows:
$$T_z(n_1)=\frac{1.5^{\sum_{c=2}^{z}a_c}\left(1.5^{a_1}\left(n_1+1\right)-1\right)}{2^{\sum_{d=1}^{z}b_d}}+\frac{1.5^{a_z}-1}{2^{b_z}}+\sum_{e=2}^{z-1}\frac{1.5^{\sum_{f=e+1}^{z}a_f}\left(1.5^{a_e}-1\right)}{2^{\sum_{g=e}^{z}b_g}}$$
Hence, for the Collatz conjecture to be true:
$$T_z(n_1)=\frac{1.5^{\sum_{c=2}^{z}a_c}\left(1.5^{a_1}\left(n_1+1\right)-1\right)}{2^{\sum_{d=1}^{z}b_d}}+\frac{1.5^{a_z}-1}{2^{b_z}}+\sum_{e=2}^{z-1}\frac{1.5^{\sum_{f=e+1}^{z}a_f}\left(1.5^{a_e}-1\right)}{2^{\sum_{g=e}^{z}b_g}}=1$$
must have a unique solution for any odd $n_1$ such that $\left\{z,a_{1...z},b_{1...z}\in\mathbb{N^+}\right\}$
This raises a separate question - namely what can be said of $a_{s+1}$ and $b_{s+1}$ from the values of $a_{s}$ and $b_{s}$? If there turns out to be some connection, the values of $\sum_{i=1}^{z}a_{i}$ and $\sum_{i=1}^{z}b_{i}$ may be expressed simply in terms of $a_1$ and $b_1$, hence turning the problem into a (diophantine equation? I have not yet studied Mathematics beyond secondary school, so I am unsure of correct Mathematical terminology or notation - please feel free to correct me).
EDIT 4: As suggested by Gottfried Helms, when $3^a\left(2b-1\right)-1$ is written as $3^ab-\left(3^a+1\right)$, factoring out the highest power of 2 shows that for $a \equiv b \pmod 2$, the highest power of 2 dividing $3^a\left(2b-1\right)-1$ is 2, and for $a \equiv 1 \pmod 2$ where $b \equiv 0 \pmod 4$, or $a \equiv 0 \pmod 2$ where $b \equiv 3 \pmod 4$ it must be 4. In other cases it seems to be no longer dependant on $a$ or $b$ and becomes pseudo-random. This helps to explain the patterns found above, but not all of them.
|
This is only a reply to Daniel's comment but too long for the box
Legend: In the following I mean
* $ \{expression,p \} $ denotes the exponent to which the prime factor $p$ occurs in $expression$
* $[ m : a ]$ equals $1$ if $a$ divides $m$, otherwise it equals $0$
By analysis of the cyclicity, due to "little Fermat", of the exponents of the prime factor $2$:
* (1) given $\qquad \displaystyle \{3^n-1,2\} = 1 + [n:2] + \{n,2\} $
Then in general
* (2) because $ \qquad \displaystyle 3^n+1 = { 3^{2n}-1 \over 3^n-1} $
it follows
* (3) $ \displaystyle \qquad \qquad \{3^n+1,2\} = \{3^{2n}-1,2\} - \{3^{n}-1,2\} $
Algebraic reformulation of (3) using (1) and (2):
$ \displaystyle \qquad
\{3^n+1,2\} \;=
\left(1 + [2n:2] + \{2n,2\}\right) - \left(1 + [n:2] + \{n,2\}\right) \\
\qquad \qquad \qquad \quad = \left([2n:2] + \{2n,2\} \right) - \left([n:2] + \{n,2\} \right) \\
\qquad \qquad \qquad \quad = \left(1 + 1+\{n,2\}\right) - \left([n:2] + \{n,2\}\right) \\
\qquad \qquad \qquad \quad = 2 - [n:2]
$
Result:
* (4) $ \displaystyle \qquad \implies \{3^n+1 ,2 \} = 2 - [n:2] $
which agrees with your formulation in your comment.
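Both identities, $\{3^n-1,2\} = 1 + [n:2] + \{n,2\}$ and $\{3^n+1,2\} = 2 - [n:2]$, are easy to check by machine. A Python sketch (added as illustration):

```python
def v2(n):
    """The exponent of the highest power of 2 dividing n (for n > 0)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k
```

Checking `v2(3**n - 1)` and `v2(3**n + 1)` against the closed forms for a few hundred values of $n$ confirms both formulas.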
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
}
|
Counting the Number of Reps From a Rep Height I'm trying to calculate the number of reps from a rep height.
Example #1
Input: n = 2
Output: Result = 6
Because 1+2+2+1 = 6
--
Example #2
Input: n = 3
Output: Result = 12
Because 1+2+3+3+2+1 = 12
What formula could I use to get from input to output?
|
My way before is unnecessarily long. When you add $(1 + 2 + \dots + n) + (n + (n-1) + \dots + 1)$, line up the pairs $(1, n)$, $(2, n-1)$, $(3, n-2)$, and so on, and you'll notice that they all sum to $n+1$. Since you have $n$ pairs, the sum is $n(n+1)$.
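The pairing argument, as a one-line check (added for illustration):

```python
def reps(n):
    """Total count 1 + 2 + ... + n + n + ... + 2 + 1 by direct summation."""
    return sum(range(1, n + 1)) + sum(range(n, 0, -1))
```

This matches the closed form $n(n+1)$: `reps(2)` gives $6$ and `reps(3)` gives $12$, exactly the two examples in the question.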
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Describing all holomorphic functions such that $f(n)=n$ for $n \in \mathbb{N}$ This question is inspired by a somewhat simpler one.
The question is: how can we classify all holomorphic functions $f:\mathbb{C}\rightarrow\mathbb{C}$ satisfying the property $\forall n \in \mathbb{N} \quad f(n)=n $?
If we have $g:\mathbb{C}\rightarrow\mathbb{C}$ such that $g\big|_\mathbb{N}\equiv 0$, then $f(z)=z+g(z)$ satisfies the criterion. Conversly, given such $f$ and defining $g(z)=f(z)-z$, we get $g\big|_\mathbb{N}\equiv 0$. So, the question boils down to classifying such $g$.
The set $I$ of such $g$, which is $I=\{g:\mathbb{C}\rightarrow\mathbb{C}, g\big|_\mathbb{N}\equiv 0\}$, is an ideal of the algebra of holomorphic functions, so we can ask for its generators. Obviously, $\forall k\in\mathbb{Z}\quad 1-e^{2\pi i kz}\in I$, but I am not able to prove that they generate $I$.
|
Let $f$ be any such function. Then $g(z)= f(z)/(ze^{2\pi iz})$ is entire and equal to $1$ at the natural numbers.
Therefore $g(z)-1$ is zero at the naturals. Let $\Pi(z)$ be a Weierstrass product giving you an entire function vanishing exactly at the natural numbers.
Therefore all the $g$ are of the form $\Pi(z)h(z)$ for any entire $h(z)$.
Hence $g(z)=\Pi(z)h(z)+1$ and $$f(z)=ze^{2\pi i z}(\Pi(z)h(z)+1)$$
Two things change here to get all the functions. The product $\Pi(z)$ for all the orders that you may want the zeros to have, and the entire function $h$, which is arbitrary and you may just consider it part of $\Pi(z)$ anyway.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Solving linear equation modulo two pi I have an equation I'd like to solve given as $$s \cdot \alpha \equiv p (\mathrm{mod} 2 \pi) $$ The numbers $s$ and $p$ are known, while $\alpha$ is to be solved for, and additionally is between $0$ and $2 \pi$. I have found a solution using other information, so I know solutions exist, but I'd like to know if solutions are unique and more importantly a general way of calculating them from the given information.
|
$$\begin{align} s\alpha\, &\equiv\, p\!\!\pmod{\!2\pi \Bbb Z}\\[.3em]
\iff\ s\alpha\, &=\, p\ +\ 2\pi n,\quad {\rm for\ some}\ \ n\in\Bbb Z\\[.3em]
\iff\ \ \ \alpha\, &=\, \dfrac{p}s + \dfrac{2\pi}s\, n,\,\ \ {\rm for\ some}\ \ n\in\Bbb Z\\[.3em]
\iff\ \ \ \alpha\, &\equiv\, \dfrac{p}{s}\!\!\pmod{\!\dfrac{2\pi}s\, \Bbb Z}
\end{align}$$
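So the solutions form an arithmetic progression with step $2\pi/s$; for $s>1$ there are several of them in $[0,2\pi)$, and there is at most one when $0<s\le 1$. A Python sketch (added as illustration, assuming $s>0$):

```python
import math

def solutions(s, p, lo=0.0, hi=2 * math.pi):
    """All alpha in [lo, hi) with s*alpha congruent to p (mod 2*pi), for s > 0."""
    base, step = p / s, 2 * math.pi / s
    n = math.ceil((lo - base) / step)   # smallest n putting alpha >= lo
    out = []
    while base + n * step < hi:
        out.append(base + n * step)
        n += 1
    return out
```

For example, $s=3$, $p=\pi/2$ yields the three angles $\pi/6$, $5\pi/6$, $3\pi/2$, so uniqueness fails in general.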
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Evaluate the integral $\int\limits_{-\infty}^\infty \frac{\cos(x)}{x^2+1}dx$. Evaluate the integral $\displaystyle\int\limits_{-\infty}^\infty \frac{\cos(x)}{x^2+1} dx$.
Hint: $\cos(x) = \Re(\exp(ix))$
Hi, I am confused that if I need to use the Residue Theorem in order to solve this, and I am not sure where I should start.
|
We may also see that $$I=\int_{-\infty}^{\infty}\frac{\cos\left(x\right)}{1+x^{2}}dx=2\int_{0}^{\infty}\frac{\cos\left(x\right)}{1+x^{2}}dx$$ $$ =\int_{0}^{\infty}\frac{e^{ix}+e^{-ix}}{1+x^{2}}dx=\frac{e^{-1}}{2}\left(\int_{0}^{\infty}\frac{e^{1+ix}+e^{1-ix}}{1+ix}dx+\int_{0}^{\infty}\frac{e^{1+ix}+e^{1-ix}}{1-ix}dx\right)$$ $$ =\frac{e^{-1}}{2i}\left(\int_{0}^{\infty}\frac{1}{x}\left(\frac{ixe^{1+ix}}{1+ix}+\frac{ixe^{1-ix}}{1-ix}\right)dx+\int_{0}^{\infty}\frac{1}{x}\left(\frac{ixe^{1-ix}}{1+ix}+\frac{ixe^{1+ix}}{1-ix}\right)dx\right)$$ and now applying the complex version of Frullani's theorem to the functions $$f\left(x\right)=\frac{xe^{1-x}}{1-x},\,g\left(x\right)=\frac{xe^{1-x}}{1+x}$$ we get $$I=\frac{e^{-1}}{i}\log\left(-1\right)=\color{red}{\pi e^{-1}}.$$
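The value $\pi e^{-1}\approx 1.15573$ is easy to confirm numerically. A Simpson's-rule sketch in Python (added as illustration; truncating at $|x|=200$ changes the value by only about $10^{-4}$ thanks to the oscillating tail):

```python
import math

def integrand(x):
    return math.cos(x) / (1 + x * x)

R, N = 200.0, 40000                    # symmetric cutoff, even panel count
h = 2 * R / N
total = integrand(-R) + integrand(R)
for k in range(1, N):
    total += (4 if k % 2 else 2) * integrand(-R + k * h)
approx = total * h / 3                 # Simpson's rule
```

The computed value agrees with $\pi/e$ to well within the truncation error.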
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2372949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
$x^5 + y^2 = z^3$ While waiting for my döner at lunch the other day, I noticed my order number was $343 = 7^3$ (surely not the total for that day), which reminded me of how $3^5 = 243$, so that $$7^3 = 3^5 + 100 = 3^5 + 10^2.$$
Naturally, I started wondering about nontrivial integer solutions to $$x^5 + y^2 = z^3 \tag{*}$$
("nontrivial" meaning $xyz \ne 0$). I did not make much progress, though apparently there are infinitely many solutions: this was Problem 1 on the 1991 Canadian Mathematical Olympiad. The official solutions (at the bottom of this page) only go back to 1994. A cheap answer is given by taking $x = 2^{2k}$ and $y = 2^{5k}$ so that the l.h.s. is $2^{10k + 1}$. This is a cube iff $10k + 1 \equiv 0 \,(3)$ i.e. $k \equiv 2\,(3)$ thus giving an arithmetic progression's worth of solutions, starting with $$(x, y, z) = (16, 1024, 128)$$ corresponding to $k = 2$
and $$(x, y, z) = (1024, 33554432, 131072)$$
coming from $k = 5$.
What else is known about the equation $(*)$? In particular, are there infinitely many solutions with $x$, $y$, $z$ relatively prime? The one that caught my attention was $(x, y, z) = (3, 10, 7)$. Another one is $(-1, 3, 2)$ because $-1 + 9 = 8$. By Catalan's conjecture (now a theorem), this is the only solution with $x = \pm 1$ or $y = \pm 1$ or $z = 1$. Are there any solutions with $z = -1$? In this case, $(*)$ reduces to $x^5 + y^2 = -1$ and Mihăilescu's theorem does not apply.
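All the small solutions mentioned above check out with exact integer arithmetic (a sanity script of mine, not from the original post):

```python
from math import gcd

sols = [(3, 10, 7), (-1, 3, 2), (16, 1024, 128), (1024, 33554432, 131072)]
for x, y, z in sols:
    assert x**5 + y**2 == z**3, (x, y, z)

# (3, 10, 7) and (-1, 3, 2) are the ones with x, y, z relatively prime.
assert gcd(gcd(3, 10), 7) == 1 and gcd(gcd(1, 3), 2) == 1
```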
Update. This question was essentially already asked here, since the equation $a^2 + b^3 = c^5$ is equivalent to $(-c)^5 + a^2 = (-b)^3$.
|
There is a beautiful connection between $a^5+b^3=c^2$ and the icosahedron. Consider the unscaled icosahedral equation,
$$\color{blue}{12^3u v(u^2 + 11 u v - v^2)^5}+(u^4 - 228 u^3 v + 494 u^2 v^2 + 228 u v^3 + v^4)^3 = (u^6 + 522 u^5 v - 10005 u^4 v^2 - 10005 u^2 v^4 - 522 u v^5 + v^6)^2\tag1$$
By scaling $u=12x^5$ and $v=12y^5$ (or various combinations thereof like $u=12^2x^5$, etc), we then get a relation of form,
$$12^5a^5+b^3=c^2$$
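Identity $(1)$ can be spot-checked exactly with big integers; for instance $u=v=1$ gives $12^3\cdot 11^5 + 496^3 = 20008^2$. A small sweep (my own check):

```python
# Exact-integer spot check of the icosahedral identity (1) on a grid of points.
ok = True
for u in range(-3, 4):
    for v in range(-3, 4):
        lhs = 12**3 * u * v * (u**2 + 11*u*v - v**2)**5 \
            + (u**4 - 228*u**3*v + 494*u**2*v**2 + 228*u*v**3 + v**4)**3
        rhs = (u**6 + 522*u**5*v - 10005*u**4*v**2
               - 10005*u**2*v**4 - 522*u*v**5 + v**6)**2
        ok = ok and (lhs == rhs)
assert ok
```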
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
}
|
Can we use chi-square distribution and central limit theorem to find the approximate normal distribution? Suppose $X_1,\ldots,X_i,\ldots,X_n$ all follow the same normal distribution, $X_i \sim \operatorname{Normal}(0,σ^2)$,
and they are independent.
$$
Z = \frac{\sum_{i=1}^n X_i^2} n.
$$
What is the distribution of the square of a normal random variable, like $X_i^2$, and what are its mean and variance?
I am trying to turn this $Z$ into a normal distribution.
Can we use the chi-square distribution and the central limit theorem to find the approximate normal distribution?
How should I do it?
I do not quite understand the chi-square distribution and central limit theorem,
could you answer this question in detail?
Any help would be much appreciated!
re-edit:
I do this works:
$$
Z = \frac{\sum_{i=1}^n X_i^2} n= σ^2\sum_{i=1}^n \left(\frac{X_i}{σ}\right)^2.
$$
this is a chi-square distribution,and mean $= nσ^2$, var${}=2nσ^2$.
is this right?
and how to use CLT to find the approximate normal distribution?
|
If $X_i/\sigma\overset{iid}{\sim}N(0,1)$, then $W_i = (X_i/\sigma)^2\overset{iid}{\sim}\chi^2_1$ and $\mathbb{E}(W_i) = 1$, $\mathbb{Var}(W_i) = 2$. Therefore, if $\bar{W}_n$ is the sample mean, we have by the classic CLT
$$
\sqrt{n}(\bar{W}_n-1)\overset{d}{\rightarrow} N(0,2).
$$
Based on the above we obtain that for large $n$,
$$
\bar{W}_n \overset{approx}{\sim}N(1,\dfrac{2}{n}).
$$
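A minimal Monte Carlo sketch of this approximation (my own illustration; the sample sizes `n`, `reps` and the tolerances are arbitrary choices, not from the answer):

```python
import random

random.seed(0)
n, reps, sigma = 500, 2000, 2.0

# Simulate many realizations of the sample mean W-bar_n of W_i = (X_i/sigma)^2.
means = [sum((random.gauss(0, sigma) / sigma) ** 2 for _ in range(n)) / n
         for _ in range(reps)]

m = sum(means) / reps
v = sum((x - m) ** 2 for x in means) / reps
assert abs(m - 1) < 0.01         # E[W-bar_n] = 1
assert abs(v - 2 / n) < 0.001    # Var[W-bar_n] = 2/n
```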
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Ignoring the constant of integration $C$ in the integrating factor method for solving Linear ODE $$\dfrac{dy}{dx} + p(x)y = f(x)$$
Solving the linear dif. equation, we can use integrating factor method.
We know the integrating factor: $\exp\left(\int p(x)\, dx\right) = \exp(P(x) + C)$.
But we ignore the constant of integration $C$. How can we explain why the constant was ignored?
|
The integrating factor is $\exp(P(x)+C)= K\exp(P(x)),$ where $K=\exp(C) > 0$.
Multiplying to the equation $$K\exp(P(x))(\dfrac{dy}{dx} + p(x)y) = K\exp(P(x))f(x)$$
$$K\frac{d}{dx}(\exp(P(x)) y)=K\exp(P(x))f(x)$$
We can always divide by $K$ anyway, hence there isn't a need to include the constant.
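A numeric illustration that the constant really cancels (a sketch of mine: solve $y'+y=1$, $y(0)=0$, whose exact solution is $y=1-e^{-x}$, with several values of $K$):

```python
import math

def solve(K, x, steps=50_000):
    # y(x) = (1/mu(x)) * integral_0^x mu(t)*f(t) dt with mu(t) = K*exp(P(t));
    # here p(t) = 1, f(t) = 1, so P(t) = t.  Midpoint-rule quadrature.
    mu = lambda t: K * math.exp(t)
    h = x / steps
    integral = sum(mu((i + 0.5) * h) * h for i in range(steps))
    return integral / mu(x)

exact = 1 - math.exp(-2.0)
results = [solve(K, 2.0) for K in (1.0, 7.3, 1e6)]
assert all(abs(r - exact) < 1e-4 for r in results)   # K is irrelevant
```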
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
What is an intuitive approach to solving $\lim_{n\rightarrow\infty}\biggl(\frac{1}{n^2} + \frac{2}{n^2} + \frac{3}{n^2}+\dots+\frac{n}{n^2}\biggr)$?
$$\lim_{n\rightarrow\infty}\biggl(\frac{1}{n^2} + \frac{2}{n^2} + \frac{3}{n^2}+\dots+\frac{n}{n^2}\biggr)$$
I managed to get the answer as $\frac{1}{2}$ by standard methods of solving, learned from teachers, but my intuition says that the denominator of every term grows much faster than the numerator, so the limit must equal zero.
Where is my mistake? Please explain very intuitively.
|
"grows much faster than the numerator": you are disregarding the fact that the numerator is actually
$$1+2+3+\cdots+n = \frac{n(n+1)}{2}, $$ which grows just as fast as $n^2$.
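Numerically (a quick check, not part of the answer): the partial sum has the closed form $\frac{n(n+1)/2}{n^2} = \frac{n+1}{2n}$, which tends to $\frac12$.

```python
for n in (10, 1000, 10**6):
    s = sum(range(1, n + 1)) / n**2
    assert abs(s - (n + 1) / (2 * n)) < 1e-12   # closed form matches

limit_gap = abs(sum(range(1, 10**6 + 1)) / 10**12 - 0.5)
assert limit_gap < 1e-6                          # approaching 1/2
```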
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 11,
"answer_id": 5
}
|
How would I express the series $|1+1+1|+|1+1-1|+|1-1+1|+|1-1-1|+|-1+1+1|+|-1+1-1|+|-1-1+1|+|-1-1-1|$ in summation notation? I tried putting the series
$$
|1+1+1|+|1+1-1|\\+|1-1+1|+|1-1-1|\\+|-1+1+1|+|-1+1-1|\\+|-1-1+1|+|-1-1-1|
$$
into Wolfram Alpha and typing "in summation notation" but it wouldn't tell me what it is in summation notation. I tried to figure it out on my own but I can't figure out how to put this series into summation notation.
How would I express this sequence in summation notation?
|
$\sum_{i,j,k=0}^1 | (-1)^i + (-1)^j + (-1)^k |$
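A quick check (mine) that the summation-notation form matches the written-out sum; both equal $12$:

```python
from itertools import product

literal = sum(abs(a + b + c)
              for a in (1, -1) for b in (1, -1) for c in (1, -1))
notation = sum(abs((-1)**i + (-1)**j + (-1)**k)
               for i, j, k in product(range(2), repeat=3))
assert literal == notation == 12
```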
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 2
}
|
coins on chessboard, who has the winning strategy The game begins with empty $n\times n$ chessboard and a fixed number $m\in\{1,2,\dots,n\}$.
Two players are making moves alternately, each move is placing a coin on one empty square, each row and column can contain at most $m$ coins, the guy who cannot put a coin when he is to play, loses.
Who has the winning strategy?
In the original problem there was $n=2011$ and $m=1005$.
My solution:
The first guy wins. First move: a coin in the centre, then symmetrical reflections of the opponent's moves.
After solving the problem, I generalised it.
My above solution works for all $n,m$ both odd.
If $n$ is even, then the second guy wins by symmetrical reflections.
What about remaining cases?
|
The remaining case is $n$ odd, $m$ even. My first intuition was the second player wins but my proof fails.
The maximum number of coins that can fit on an $n \times n$ chessboard with at most $m$ coins in each row and column is $nm$, which is even.
If fewer than $nm$ coins have been placed, you can always find at least one row and one column with fewer than $m$ coins. If their intersection is free, you can play there. Alas, we cannot be sure that this is the case, for instance with $n=3$, $m=2$:
| _ | _ | X |
| X | X | _ |
| X | X | _ |
Second player cannot play anymore.
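One can verify mechanically that the position above is indeed dead: a square is playable iff it is empty and its row and column each hold fewer than $m$ coins. A small script of mine:

```python
board = [[0, 0, 1],
         [1, 1, 0],
         [1, 1, 0]]   # X = 1, _ = 0, as in the diagram above
m = 2
playable = [(r, c) for r in range(3) for c in range(3)
            if board[r][c] == 0
            and sum(board[r]) < m
            and sum(board[i][c] for i in range(3)) < m]
assert playable == []        # no legal move remains
```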
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Natural numbers large enough can be written as $ab+ac+bc$ for some $a,b,c>0$ Conjecture: Any integer $n>462$ can be written as $n=ab+ac+bc$, where $a,b,c\in\mathbb Z_+$.
Tested for all $n\leq 100,000$, but I would like to see a proof.
The exceptions seems to be $\{1,2,4,6,10,18,22,30,42,58,70,78,102,130,190,210,330,462\}$
It's funny: I played around with BigZ and found that every odd prime seemed to be of this form. I was going to post about that, but then I found that almost every number is of this form. And now it turns out that it depends on the Generalized Riemann Hypothesis.
|
Not an answer, just a reduction to one particular class of cases.
Claim: If there is a prime $p<2\sqrt{n}$ such that $p\not\mid n$ and $-n$ is a square modulo $p$, then such $a,b,c$ exist.
When $n>1$ is odd, we have $(a,b,c)=\left(1,1,\frac{n-1}{2}\right)$.
When $n>4$ is divisible by $4$, we have $(a,b,c)=\left(2,2,\frac{n}{4}-1\right)$.
So we only need to solve for $n\equiv 2\pmod{4}$.
As JG pointed out, if $n+1$ is not prime, with factorization $n+1=uv$, $u,v>1$, then we get $(a,b,c)=(1,u-1,v-1)$.
So we've reduced to when $n+1\equiv 3\pmod{4}$ is a prime.
[I suppose that all these cases can be seen via Jaap's comment about seeking $c$ so that $n+c^2=rs$ with $r,s>c$. The case $c=1$ gives us the odd case or $n+1$ composite; the case $c=2$ gives the case $n$ divisible by $4$.]
We will proceed using Jaap's comment:
If there exists $c$ such that $n+c^2=rs$ for some $r,s>c$, then we can choose $(a,b,c)=(r-c,s-c,c)$.
Let $p$ be a prime relatively prime to $n$ such that $-n$ is a square modulo $p$.
Then $n+c^2$ is divisible by $p$ for some $\frac{p-1}{2}<c<p$.
If $d=\frac{n+c^2}{p}$, we want $d>c$ to ensure a solution.
So we want $$pc<n+c^2$$
But $c^2-pc+n=\left(c-\frac{p}{2}\right)^2+n-\frac{p^2}{4}$.
So if $p^2\leq 4n$ then this value is positive, and we get $d>c$ and hence $(a,b,c)=(p-c,d-c,c)$.
This explains why $462$ is a good candidate to fail - $462=2\cdot 3\cdot 7\cdot 11$, so $-462$ needs to not be square modulo $5,13,17,19,23,29,31,37,41$. Essentially, for large $n$ we need a lot of distinct prime factors.
It also explains why $n$ tends to be square-free in the examples.
If $p^2\mid n$ then $n+p^2=p^2\left(1+\frac{n}{p^2}\right)$ so if $p^2\mid n$ then $1+\frac{n}{p^2}\leq p$, and it must be prime. If it is not prime, then $n+p^2=(pa)(pb)$ where $ab=1+\frac{n}{p^2}$. That gives us that if $n$ has a square factor then $n=p^2(q-1)$ for some prime $q\leq p$, and there must be no solution to $ab+ac+bc=q-1$.
If $d\mid n$ then $d+\frac{n}{d}$ must not be factorizable as $ab$ with $a>1,b>d$, or otherwise, $n+d^2$ is factorizable as $(ad)(b)$.
But if $d+\frac{n}{d}\geq d^2$ this means that $d+\frac{n}{d}$ must be prime.
So, if $d\mid n$ and $d\leq \sqrt[3]{n}$, you must have $d+\frac{n}{d}$ prime.
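A brute-force search (my own, using the parametrization $c = (n-ab)/(a+b)$ with $a \le b \le c$) reproduces exactly the exception set quoted in the question:

```python
def representable(n):
    a = 1
    while 3 * a * a <= n:               # a <= b <= c forces n >= 3a^2
        b = a
        while 2 * a * b + b * b <= n:   # c >= b forces n >= ab + b(a+b)
            rem = n - a * b             # need c = rem/(a+b) integral, >= b
            if rem % (a + b) == 0 and rem // (a + b) >= b:
                return True
            b += 1
        a += 1
    return False

exceptions = [n for n in range(1, 500) if not representable(n)]
assert exceptions == [1, 2, 4, 6, 10, 18, 22, 30, 42, 58, 70, 78,
                      102, 130, 190, 210, 330, 462]
```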
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Injective ring homomorphism between finite direct product of rings All rings below are commutative with unity.
Let $R$ be a Noetherian ring such that there is an injective ring homomorphism from $R^m$ to $R^n$ , then is it true that $m\le n $ ? If not true in general then can we impose any condition (apart from Artinian) on $R$ to make the statement true ?
( NOTE : I know the statement to be true for any ring if we consider injective module homomorphism ... )
|
Suppose $\alpha:R^n\times R\to R^n$ is an injective ring homomorphism. Extend the codomain to get a (non-unital) endomorphism $\beta:R^n\times R\to R^n\times R$, where $\beta(x)=(\alpha(x),0)$.
Consider the idempotent element $e_0=(0,1)\in R^n\times R$ and its images $e_n=\beta^n(e_0)$ under powers of $\beta$. Then $\{e_n\vert n\geq0\}$ is an infinite set of orthogonal idempotents ($e_ie_j=0$ for $i\neq j$) since $e_0\beta(e_0)=0$.
The ideal generated by $\{e_n\vert n\geq0\}$ is infinitely generated, so $R^n\times R$ is not noetherian. Since a finite direct product of noetherian rings is noetherian, $R$ is not noetherian.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
The length of a gap between the rationals In Terence Tao's book Analysis I, he says
there are still an infinite number of “gaps” or “holes” between the
rationals, although this denseness property does ensure that these
holes are in some sense infinitely small.
I think a gap between the rationals should have zero length.
Supposing $A_{1}=\{a\in {\mathbb {Q}}:a^{2}<2{\text{ or }}a<0\},
{\displaystyle A_{2}=\{a\in \mathbb {Q} :a^{2}>2{\text{ and }}a>0\}} $,
I define the "length" of the gap between $A_{1}$ and $A_{2}$ to be the greatest lower bound of $A =\left\{ a \middle| a = a_{2} - a_{1}, {\ a}_{1} \in A_{1}{,a}_{2} \in A_{2} \right\}$, so how to prove the greatest lower bound is $0$ ? especially using the density property of rational numbers to prove it?
Maybe I have a lack of understanding in the density property of rational numbers , so I am unable to give a proof to my question here.
|
It is possible to prove, within the rational number system, that the greatest lower bound of $A =\left\{ a \middle| a = a_{2} - a_{1},\ a_{1} \in A_{1},\ a_{2} \in A_{2} \right\}$ is $0$: if some positive number $b$ were a greater lower bound, then by the density of the rationals we could always find a positive number $c = a_{2} - a_{1} < b$, a contradiction.
I think what is important here is that the question won't make much sense unless we prove the greatest lower bound of $A$ is $0$ in a complete number system ($\mathbb R$). We may use the Archimedean property of $\mathbb R$ to prove the conclusion.
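Concretely (my own sketch, using exact integer arithmetic): with $p=\lfloor q\sqrt{2}\rfloor$ we get $p/q \in A_1$ and $(p+1)/q \in A_2$ with difference exactly $1/q$, so no positive $b$ can be a lower bound of $A$.

```python
from fractions import Fraction
from math import isqrt

gaps = []
for q in (10, 1000, 10**6):
    p = isqrt(2 * q * q)                 # floor of q*sqrt(2)
    assert p * p < 2 * q * q             # (p/q)^2 < 2, so p/q is in A_1
    assert (p + 1) ** 2 > 2 * q * q      # ((p+1)/q)^2 > 2, so (p+1)/q is in A_2
    gaps.append(Fraction(p + 1, q) - Fraction(p, q))

assert gaps == [Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**6)]
```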
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2373864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Does anyone know the name of this conjecture? Given $p$ and $q$ are two different prime numbers.
Does there exist a positive integer $n$ such that
$p^n \equiv 1 \pmod q$
Is this conjecture true? If so, is there any source for the proof? What is the name of this conjecture or theorem (if it is true)?
|
Independently of Fermat, it simply results from the fact that the group of units $(\mathbf Z/q\mathbf Z)^\times$ is finite, hence the (multiplicative) subgroup generated by the congruence class $\bar p=p+q\mathbf Z$ is finite.
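Computationally this is easy to see (a quick check of mine): by Fermat, $n = q-1$ always works, and the smallest such $n$ is the multiplicative order of $p$ mod $q$, which divides $q-1$.

```python
def order(p, q):
    # Multiplicative order of p modulo q (assumes q does not divide p).
    n, x = 1, p % q
    while x != 1:
        x = (x * p) % q
        n += 1
    return n

checks = [(3, 7), (2, 11), (10, 13), (7, 2)]
for p, q in checks:
    assert pow(p, q - 1, q) == 1       # Fermat's little theorem
    assert (q - 1) % order(p, q) == 0  # the order divides q - 1
```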
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Involution that brings sets to disjoint sets Let $A$ be a collection of subsets of $\{1,2,\dots,n\}$ that is closed under taking subsets (that is, if $U\in A$ and $V\subseteq U$ then $V\in A$). Is there always an involution $f:A\to A$ such that $f(V)\cap V=\emptyset$ for all $V\in A$? I'm guessing yes.
Note that if $A=\mathcal P(\{1,2,\dots,n\})$, then taking the complement works. Also note that if $|A|$ is odd, we can send the empty set to itself (the empty set is disjoint from itself, isn't that weird?).
I tried working through a few small examples. I haven't found a counterexample, but I also haven't found a proof.
|
For any $A$ start with the smallest powerset that includes $A$ (e.g. if $A=\{\emptyset,\{1\},\{2\}\}$ start with $\mathcal P(\{1,2\})$). Use the trivial involution you suggested for this power set. Then transform this involution to a new involution by eliminating elements one by one to reach $A$ (start with bigger elements). In doing so alternate between reassigning $\emptyset$ to itself and the element that is mapped to the eliminated element. This way you can construct an involution for any $A$. You need to prove that the required conditions are preserved under these transformations. Note that each transformation is applied on the previous transformed involution.
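For small $n$ the conjecture can also be confirmed by exhaustive search (my own verification, independent of the construction above): every downward-closed family on $\{1,2,3\}$ admits such an involution, found by backtracking with the empty set as the only allowed fixed point.

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def has_involution(family):
    # Pair off elements into disjoint pairs; only the empty set may be fixed.
    def pair(rest):
        if not rest:
            return True
        x, rest = rest[0], rest[1:]
        if x == frozenset() and pair(rest):
            return True
        return any(not (x & y) and pair(rest[:i] + rest[i + 1:])
                   for i, y in enumerate(rest))
    return pair(list(family))

ground = subsets({1, 2, 3})
checked = 0
for bits in range(1 << len(ground)):
    A = {ground[i] for i in range(len(ground)) if (bits >> i) & 1}
    if all(v in A for u in A for v in subsets(u)):   # downward-closed?
        checked += 1
        assert has_involution(A), A
assert checked == 20   # number of order ideals of the Boolean lattice B_3
```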
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
}
|
Effects of a transformation which resembles a projection For a given symmetric and positive definite matrix $A \in \mathbb{R}^{n\times n}$ having its columns being a basis in $\mathbb{R}^n$ we have:
$$A(A^T A)^{-1} A^T$$
$$ = A(A^2)^{-1} A $$
being a projection matrix. The term $(A^TA)^{-1}$ represents a normalizing factor since the columns of $A$ are not an orthonormal basis. What happens if the normalizing constant is defined by another symmetric and positive definite matrix $B$, so we have:
$$A(B^T B)^{-1} A^T$$
$$ = A(B^2)^{-1}A^T $$
Does this new form mimic a projection matrix? Does it have eigenvalues close to 0 or 1?
|
Let $P_n$ denote the set of positive definite $n\times n$ real matrices. The set of matrices of the form you describe,
$$
X=\{AB^{-2}A\mid A,B\in P_n\},
$$
is equal to $P_n$. Indeed if $A,B\in P_n$ then
$$
(AB^{-2}A)^T=A^T(B^T)^{-2}A^T=AB^{-2}A
$$
and
$$
v^T(AB^{-2}A)v=(B^{-1}Av)^T(B^{-1}Av)>0
$$
for any nonzero $v$ (since $A,B$ are nonsingular). This shows $X\subseteq P_n$.
Conversely any $C\in P_n$ is diagonalizable via an orthogonal matrix, and has positive eigenvalues. That is,
$$
C=O^TDO
$$
where $O^TO=I$ and $D=\mathrm{diag}(d_1,\ldots,d_n)$ with $d_i>0$. Let $D'=\mathrm{diag}(\sqrt{d_1},\ldots,\sqrt{d_n})$. Let $A=O^TD'O$ and $B=I$. Then
$$
AB^{-2}A=A^2=O^T(D')^2O=O^TDO=C.
$$
This shows $P_n\subseteq X$.
So the only thing you can say about the eigenvalues of such a matrix is that they are all positive and real.
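A concrete $2\times 2$ instance (hand-rolled, no libraries) illustrating that $M = AB^{-2}A$ is symmetric positive definite but nothing like a projection; its eigenvalues need not be near $0$ or $1$:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]               # symmetric positive definite
Binv2 = [[1.0, 0.0], [0.0, 0.25]]          # B = diag(1, 2) => B^{-2} = diag(1, 1/4)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = matmul(matmul(A, Binv2), A)            # M = A B^{-2} A
assert abs(M[0][1] - M[1][0]) < 1e-12      # symmetric

tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
lam2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2
assert lam1 > 0 and lam2 > 0               # positive real eigenvalues
assert min(abs(lam1 - 1), abs(lam2 - 1)) > 0.1 and min(lam1, lam2) > 0.1
```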
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Reference request for complete probability text without measure theory I'm looking for a complete probability reference text, covering the majority of standard probability and stochastic process topics that can be covered without the use of measure theory.
I've already had a basic course in probability and in stochastic processes, so I'm looking for more of a desk reference type of book.
The three that have been recommended to me are
*
*Probability, Random Variables, and Stochastic Processes by Papoulis
*Probability for Statistics and Machine Learning by DasGupta
*Probability By Feller
I'm not too interested in Feller, since it seems that vol 2 has a fair bit of measure theory and vol 1 is only discrete. Any other recommendations?
|
A First Course in Probability & Introduction to Probability Models by Sheldon Ross. The first one is an introduction to probability without stochastic processes, while the second one delves into probability models ("stochastic models") after a relatively short brief on basic probability notions, random variables, etc.
In my university, these two books are the standard references for the three mandatory undergraduate-level courses in probability and stochastic processes. Absolutely no measure theory is required (or even mentioned in these books).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Why are module preserving elements units? Let $K$ be a real quadratic field and $M \subset K$ be a subgroup of the additive group of $K$ of rank $2$. Why is each $\varepsilon \in K$ with $\varepsilon M=M$ a unit, so an element of $\mathcal O_K^\times$?
What I tried: We can find a basis $b_1,b_2 \in M$ for $M$ such that
$M=b_1 \mathbb Z + b_2 \mathbb Z$. Then there is an integer valued matrix of determinant $\pm 1$ with
$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}=\begin{pmatrix} \varepsilon b_1 \\ \varepsilon b_2 \end{pmatrix}.$
From that you can deduce equations like $\varepsilon = a+b b_2/b_1$. But I couldn't find a monic polynomial with root $\varepsilon$ and integer coefficients. When I have that I would be done since this shows $\varepsilon \in \mathcal O_K$ and we get with the same argument $\varepsilon^{-1} \in \mathcal O_K$, so we are done.
EDIT: Now I found the monic polynomial (thanks to Barry Smith). It is the characteristic polynomial of the matrix because $\varepsilon$ is an eigenvalue of it.
|
In case that $M$ is a fractional ideal this answer is simple: $ M = \varepsilon M =(\varepsilon) M $, so the principal ideal generated by $\varepsilon$ has to be $\mathcal O_K$. Hence, $\varepsilon \in \mathcal O_K$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Functions that are always less than their derivatives I was wondering if there are functions for which $$f'(x) > f(x)$$ for all $x$. Only examples I could think of were $e^x - c$ and simply $- c$ in which $c > 0$. Also, is there any significance in a function that is always less than its derivative?
Edit: Thank you very much for all the replies. It seems almost all functions that apply are exponential by nature...
Are there more examples like $-1/x$?
Again are there any applications/physical manifestations of these functions? [for example an object with a velocity that is always greater than its position/acceleration is always greater than its velocity]
|
The inequality $$f'(x) > f(x)$$ is equivalent to $$\left[ f(x) e^{-x} \right]' > 0.$$
So the general solution is to take any differentiable function $g(x)$ with $g'(x) > 0$ and put $f(x) = g(x) e^x$.
Note that nothing is assumed about $f$ except differentiability, which is necessary to ask the question in the first place.
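For example, $g(x) = x$ gives $f(x) = xe^x$ with $f'(x) = (1+x)e^x$, and $f'(x) - f(x) = e^x > 0$ everywhere. A quick numerical check (mine):

```python
import math

f = lambda x: x * math.exp(x)
h = 1e-6
points = (-5.0, -1.0, 0.0, 1.0, 3.0)
for x in points:
    deriv = (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative
    assert deriv > f(x)                        # f' > f at every sample point
```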
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 14,
"answer_id": 2
}
|
Minimum of the given expression
For all real numbers $a$ and $b$ find the minimum of the following expression.
$$(a-b)^2 + (2-a-b)^2 + (2a-3b)^2$$
I tried expressing the entire expression in terms of a single function of $a$ and $b$. For example, if the entire expression reduces to $(a-2b)^2+(a-2b)+5$ then its minimum can be easily found. But nothing seems to get this expression in such a form, because of the third unsymmetric square.
Since there are two variables here we can also not use differentiation.
Can you please provide hints on how to solve this?
|
Let $a=\frac{17}{15}$ and $b=\frac{4}{5}$.
Hence, we get a value $\frac{2}{15}$.
Thus, it remains to prove that
$$(a-b)^2 + (2-a-b)^2 + (2a-3b)^2\geq\frac{2}{15}$$ or
$$10(3a-3b-1)^2+3(5b-4)^2\geq0$$
Done!
I got my solution by the following way.
We need to find a maximal $k$ for which the following inequality is true for all reals $a$, $b$ and $c$.
$$(a-b)^2 + (2-a-b)^2 + (2a-3b)^2\geq k$$ or
$$6a^2-4(3b+1)a+11b^2-4b+4-k\geq0,$$ for which we need
$$4(3b+1)^2-6(11b^2-4b+4-k)\leq0$$ or
$$15b^2-24b+10-3k\geq0,$$
for which we need $$12^2-15(10-3k)\leq0$$ or
$$k\leq\frac{2}{15}.$$
The equality occurs for $k=\frac{2}{15}$, $b=\frac{24}{2\cdot15}$, which is $b=\frac{4}{5}$ and for these values we obtain
$$(a-b)^2 + (2-a-b)^2 + (2a-3b)^2\geq \frac{2}{15}$$ it's
$$6a^2-4(3b+1)a+11b^2-4b+4-\frac{2}{15}\geq0$$ or
$$90a^2-60(3b+1)a+165b^2-60b+58\geq0$$ or
$$10(9a^2-6(3b+1)a+(3b+1)^2)-10(3b+1)^2+165b^2-60b+58\geq0$$ or
$$10(3a-3b-1)^2+75b^2-120b+48\geq0$$ or
$$10(3a-3b-1)^2+3(5b-4)^2\geq0.$$
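Both the minimizer and the sum-of-squares identity can be confirmed with exact rational arithmetic (a check of mine):

```python
from fractions import Fraction as F
import random

def Q(a, b):
    return (a - b)**2 + (2 - a - b)**2 + (2*a - 3*b)**2

assert Q(F(17, 15), F(4, 5)) == F(2, 15)       # value at the minimizer

random.seed(1)
for _ in range(500):
    a = F(random.randint(-50, 50), random.randint(1, 20))
    b = F(random.randint(-50, 50), random.randint(1, 20))
    # The SOS identity:  15*Q(a,b) - 2 = 10(3a-3b-1)^2 + 3(5b-4)^2
    assert 15 * Q(a, b) - 2 == 10*(3*a - 3*b - 1)**2 + 3*(5*b - 4)**2
    assert Q(a, b) >= F(2, 15)
```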
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Marble Probability Problem (Not Putting Marbles Back In Bag) A friend of mine asked this question to me recently. It has been ages since I did probability, but it seems interesting mathematically regardless. Here it is:
Suppose you have $5$ yellow marbles and $135$ green marbles in a bag. $10$ marbles will be pulled out and not put back in the bag, i.e. one marble will be pulled out and not placed in the bag followed by another marble being pulled out and not placed in the bag and so forth. What is the probability of pulling a yellow marble?
|
Let $P$ be the probability of pulling at least one yellow marble, and let $P^*$ be the probability of pulling no yellow marbles. Then
$$P=1-P^*$$
The probability of pulling no yellow marbles is
$$p_1p_2...p_{10}$$
where $p_i$ is the probability of pulling a green marble on the $i$th draw. The probability on the first draw is
$$p_1=\frac{135}{140}$$
and on the second, it is
$$p_2=\frac{134}{139}$$
and so on, so that
$$p_n=\frac{136-n}{141-n}$$
and so their product is equal to
$$\frac{\frac{135!}{125!}}{\frac{140!}{130!}}$$
$$=\frac{135!130!}{125!140!}$$
which is $P^*$, the probability of drawing no yellow marble. The desired probability is therefore
$$P = 1 - \frac{135!\,130!}{125!\,140!} \approx 0.3135.$$
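An exact computation (mine, using `math.comb`) confirming that the product, the factorial expression, and a binomial-coefficient ratio all agree:

```python
from fractions import Fraction
from math import comb, factorial

p_none = Fraction(1)
for n in range(1, 11):
    p_none *= Fraction(136 - n, 141 - n)   # p_n = (136-n)/(141-n)

assert p_none == Fraction(factorial(135) * factorial(130),
                          factorial(125) * factorial(140))
assert p_none == Fraction(comb(135, 10), comb(140, 10))

p_yellow = 1 - p_none                      # at least one yellow marble
assert 0.31 < float(p_yellow) < 0.32
```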
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|