Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
How to interpret this Mathematica command input as given in book? I am unable to understand what the book means by the given command shown as Mathematica input, given at its Google Books link here.
I mean the line given by:
$\lim_{x\rightarrow \infty} \cos^2(2x)\,2x -3.$
I looked into Mathematica's input syntax and found that subscripts and superscripts are allowed, as is entering fractions directly in input.
So, how should I interpret the same:
as: $\lim_{x\rightarrow \infty} \cos^{2(2x)}2x -3$, or not?
| I searched in the book, Exploring Calculus: Labs and Projects with Mathematica by Crista Arangala and Karen A. Yokley, for Lab 2, and here is what those functions were intending.
In Mathematica syntax, the last item $i$ is entered as follows (of course they want you to use the items listed and not this approach), so you can see how to input that function:
Limit[Cos[2 x]^2/(2 x - 3), x -> Infinity]
Here is a Google Books link.
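Under that reading, the limit is $0$, since the numerator $\cos^2(2x)$ is bounded while the denominator grows. A quick numerical sanity check in Python (the function name is mine, just for illustration):

```python
import math

def f(x):
    # The reading suggested above: Cos[2 x]^2 / (2 x - 3)
    return math.cos(2 * x) ** 2 / (2 * x - 3)

# cos^2 is bounded by 1, so |f(x)| <= 1/(2x - 3), which shrinks to 0
samples = [abs(f(10.0 ** k)) for k in range(3, 7)]
bounds = [1.0 / (2 * 10.0 ** k - 3) for k in range(3, 7)]
```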
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3267863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integration using substitution - applying integral of symmetric functions properties or second substitution? I need help with this integral: $\int_0^2 (x-1)e^{(x-1)^2}\;\mathrm{d}x$
Okay. I'm choosing $u=x-1$, so $du=dx$.
$a=0$ and $b=2$
$u=g(x)=x-1$
$g(a)=g(0)=0-1=-1$
$g(b)=g(2)=2-1=1$
$$\int_0^2 (x-1)e^{(x-1)^2}\;\mathrm{d}x$$
$$=\int_{-1}^1 ue^{u^2}\;\mathrm{d}u$$
I'm stuck here. I was wondering if I can use one of the integral of symmetric functions properties there:
*If $f$ is even, then $\int_{-a}^a f(x)\;\mathrm{d}x=2\int_{0}^a f(x)\;\mathrm{d}x$
*If $f$ is odd, then $\int_{-a}^a f(x)\;\mathrm{d}x=0$
Let $h(u)=ue^{u^2}$; then $h(-u)=-ue^{(-u)^2}=-ue^{u^2}$. Therefore $h$ is odd. So I can use the second property and conclude that $\int_0^2 (x-1)e^{(x-1)^2}\;\mathrm{d}x=\int_{-1}^1 ue^{u^2}\;\mathrm{d}u=0$?
Or shall I make a second substitution with $\int_{-1}^1 ue^{u^2}\;\mathrm{d}u$?
$v=u^2$, so $dv=2u du$, $\frac{1}{2}dv=u du$
$v=(1)^2=1$ and $v=(-1)^2=1$
Then,
$$\int_0^2 (x-1)e^{(x-1)^2}\;\mathrm{d}x$$
$$=\int_{-1}^1 ue^{u^2}\;\mathrm{d}u$$
$$=\int_{1}^1 e^{v}\cdot\frac{1}{2}\mathrm{d}v$$
$$=\frac{1}{2}\int_{1}^1 e^{v}\mathrm{d}v$$
$$= \tfrac{1}{2} [e^{v}] \Big|_{1}^1$$
$$= \tfrac{1}{2} [e^{1}-e^{1}]$$
$$= \tfrac{1}{2} [e-e]$$
$$= \tfrac{1}{2} [0]$$
$$=0$$
Are these analysis all right? If so, which one: integral of symmetric functions properties or second substitution or both? If not how can I work with this? Thanks in advance.
UPDATE! I found another way to do it. Faster.
$\int_0^2 (x-1)e^{(x-1)^2}\;\mathrm{d}x$
I choose $u=(x-1)^2$, so $du=2(x-1)dx$ $\Rightarrow$ $\frac{1}{2}du=(x-1)dx$.
$a=0$ and $b=2$
$u=g(x)=(x-1)^2$
$g(a)=g(0)=(0-1)^2=(-1)^2=1$
$g(b)=g(2)=(2-1)^2=(1)^2=1$
$$\int_0^2 (x-1)e^{(x-1)^2}\;\mathrm{d}x$$
$$=\int_0^2 e^{(x-1)^2}(x-1)\;\mathrm{d}x$$
$$=\int_1^1 e^{u}\frac{1}{2}\;\mathrm{d}u$$
$$=\frac{1}{2}\int_1^1 e^{u}\;\mathrm{d}u$$
$$=0$$
| note:
$$I=\int_0^2(x-1)e^{(x-1)^2}dx$$
by letting $u=x-1$ like you suggested this can be turned into:
$$I=\int_{-1}^1ue^{u^2}du$$
now notice that splitting this up gives:
$$\int_{-1}^0f(u)du+\int_0^1f(u)du=\int_0^1f(u)du+\int_0^1f(-u)du=\int_0^1f(u)du-\int_0^1f(u)du=0$$
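Both approaches agree with a direct numerical check. A sketch using composite Simpson's rule (plain Python, helper names are mine):

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integrand is antisymmetric about x = 1, so the integral should vanish
integral = simpson(lambda x: (x - 1) * math.exp((x - 1) ** 2), 0.0, 2.0)
```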
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3268272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to calculate Net Present Value? So I have this question:
Your boss asked you to evaluate a project with an infinite life.
Sales and costs are projected to be $\$1{,}000$ and $\$500$ per year, respectively. (Assume sales and costs occur at the end of the year, i.e., a profit of $\$500$ at the end of year one.)
There is no depreciation and the tax rate is $30 \% $. The real required rate of return is $10\%$. The inflation rate is $4\%$ and is expected to be $4 \%$ forever. Sales and costs will increase at the rate of inflation. If the project costs $\$3,000$, what is the NPV?
*$\$500.00$
*$\$1629.62$
*$\$365.38$
*$\$472.22$
On the answer sheet it states that
$$NPV = -3000+ \cfrac{(1000-500)(1-0.30)}{0.10}$$,
which will give me a result of $ \$500$.
The only thing I do not understand is why it is divided by the real rate of return, $0.10$, and not by $1+r$.
I have done various exercises where I always divide by $1+r$.
Somebody please explain! Thanks!
| The present value of the series of cash flows is as follows:
$$-3000+\frac{(1000-500)\cdot (1-0.3)}{1.1^1}+\frac{(1000-500)\cdot (1-0.3)}{1.1^2}+\frac{(1000-500)\cdot (1-0.3)}{1.1^3}+\frac{(1000-500)\cdot (1-0.3)}{1.1^4}+\ldots$$
$$=-3000+\sum_{k=1}^{\infty}\frac{(1000-500)\cdot (1-0.3)}{1.1^k}$$
For simplicity let $C=(1000-500)\cdot (1-0.3)$. The infinite sum is $\sum\limits_{k=1}^{\infty}\frac{C}{1.1^k}$
We can look at the partial sum of the geometric series $$\sum\limits_{k=1}^{n}\frac{C}{1.1^k}=C\cdot \frac{1}{1.1}\cdot \frac{1-\left(\frac{1}{1.1} \right)^n}{1-\frac{1}{1.1}}$$
Now we can expand the fraction by $1.1$, i.e. multiply numerator and denominator by $1.1$ (blue terms):
$$=C\cdot \frac{\color{blue}{1.1}}{1.1}\cdot \frac{ 1-\left(\frac{1}{1.1} \right)^n}{\color{blue}{1.1}\cdot \left(1-\frac{1}{1.1}\right)}=C\cdot \frac{ 1-\left(\frac{1}{1.1} \right)^n}{1.1-1}=C\cdot \frac{ 1-\left(\frac{1}{1.1} \right)^n}{0.1}$$
Finally, let $n$ go to infinity. $\left(\frac{1}{1.1} \right)$ is smaller than $1$, so $\left(\frac{1}{1.1} \right)^n$ decreases as $n$ increases. Therefore
$$\lim_{n \to \infty} C\cdot \frac{ 1-\left(\frac{1}{1.1} \right)^n}{0.1}= C\cdot \frac{ 1-0}{0.1}=\frac{C}{0.1}$$
I think it is clear from where the $0.1$ comes.
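The same computation can be checked numerically: the partial sums of the discounted cash flows converge to the perpetuity value $C/0.1$ (a small Python sketch, variable names are mine):

```python
# After-tax cash flow of the perpetuity, discounted at the 10% real rate
C = (1000 - 500) * (1 - 0.30)                  # = 350
npv_closed_form = -3000 + C / 0.10             # = 500

# Truncating the infinite sum after many years gives (almost) the same value
npv_partial = -3000 + sum(C / 1.1 ** k for k in range(1, 1001))
```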
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3268435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$dxdy=rdrd\theta$ I'm trying to show that $dx\,dy=r\,dr\,d\theta$ using differentials.
$x=r\cos(\theta)$ and $y=r\sin(\theta)$
thus $dx=\cos(\theta)dr-r\sin(\theta)d\theta$ and $dy=\sin(\theta)dr+r\cos(\theta)d\theta$
$\begin{align}dx\,dy&=(\cos(\theta)dr-r\sin(\theta)d\theta)(\sin(\theta)dr+r\cos(\theta)d\theta)\\&
=\cos(\theta)\sin(\theta)
dr^2+r\cos^2(\theta)drd\theta-r\sin^2(\theta)d\theta dr-r^2\cos(\theta)\sin(\theta)d\theta^2\\&=\cos(\theta)\sin(\theta)
dr^2-r^2\cos(\theta)\sin(\theta)d\theta^2+rdrd\theta(1-\sin(\theta)^2-\sin(\theta)^2)\end{align}$
If my calculations are correct, $\cos(\theta)\sin(\theta)
dr^2-r^2\cos(\theta)\sin(\theta)d\theta^2-2 \sin(\theta)^2rdrd\theta=0$ but how am I supposed to show that?
| You are missing an important point about the difference between the area elements in the Cartesian and polar systems.
While $dx\,dy$ is the area of a rectangle, $r\,dr\,d\theta$ is the area of the curved section between the circles of radii $r$ and $r+dr$ with central angle $d\theta$.
The so-called Jacobian gives you the multiplier of your transformation. The Jacobian is the determinant of the matrix whose entries are the partial derivatives of $x$ and $y$ with respect to $r$ and $\theta$.
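That Jacobian determinant can be checked numerically with finite differences; it comes out as $r$ at every sample point (a stdlib-Python sketch, function names are mine):

```python
import math

def to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian_det(r, theta, h=1e-6):
    # Central differences for the entries of the 2x2 Jacobian matrix
    x_r = (to_cartesian(r + h, theta)[0] - to_cartesian(r - h, theta)[0]) / (2 * h)
    y_r = (to_cartesian(r + h, theta)[1] - to_cartesian(r - h, theta)[1]) / (2 * h)
    x_t = (to_cartesian(r, theta + h)[0] - to_cartesian(r, theta - h)[0]) / (2 * h)
    y_t = (to_cartesian(r, theta + h)[1] - to_cartesian(r, theta - h)[1]) / (2 * h)
    return x_r * y_t - x_t * y_r

checks = [(r, jacobian_det(r, t)) for r in (0.5, 1.0, 2.5) for t in (0.3, 1.2, 2.9)]
```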
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3268573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
proof of binomial identity involving double sum We are asked to simplify the sum
$$\displaystyle\sum_{0\le i <j\le n+1} \binom{n}{i} \times \binom{n+1}{j} $$
Inspecting the first values of $n$ shows the sum is apparently equal to $4^n$.
I tried re-writing the sum as
$$\displaystyle\sum_{j=1}^{n+1} \binom{n+1}{j} \displaystyle\sum_{i=0}^{j-1} \binom{n}{i} $$ but that doesn't seem to lead to the result.
Any suggestions are welcome.
Thanks.
| Starting from $$\sum_{0\le i <j\le n+1} \binom{n}{i} \binom{n+1}{j}$$
we split the second binomial to
$$\sum_{0\le i <j\le n+1} \binom{n}{i} \binom{n}{j} + \sum_{0\le i <j\le n+1} \binom{n}{i} \binom{n}{j-1}$$ and reindex to
$$\sum_{0\le i <j\le n} \binom{n}{i} \binom{n}{j} + \sum_{0\le i \le k \le n} \binom{n}{i} \binom{n}{k}$$
Now we can expand both of those terms by symmetry to
$$\frac{\left[\sum_{0\le i\le n} \binom{n}{i}\right]\left[\sum_{0\le j\le n} \binom{n}{j}\right] - \sum_{0\le \iota\le n} \binom{n}{\iota}^2}{2} + \frac{\left[\sum_{0\le i\le n} \binom{n}{i}\right]\left[\sum_{0\le k\le n} \binom{n}{k}\right] + \sum_{0\le \kappa\le n} \binom{n}{\kappa}^2}{2}
$$
and the rest is easy.
Or even more straightforwardly, rename variables in the second sum to get
$$\sum_{0\le i <j\le n} \binom{n}{i} \binom{n}{j} + \sum_{0\le j \le i \le n} \binom{n}{j} \binom{n}{i} = \sum_{0\le i\le n \\ 0\le j\le n} \binom{n}{i} \binom{n}{j}$$
This points the way to a bijective proof: interpret the original sum as the number of ways of putting a red hat on $i$ out of $n$ people and a green hat on $j > i$ out of ($n$ people and one dressmaker's dummy). Then if there's a green hat on the dummy, take the green hat away from the dummy and each person who has one and give a green hat to each person who doesn't have one. There are at most $4^n$ resulting hat distributions (every person can have no hats, a red hat, a green hat, or both hats), every one is possible (if there are more green hats than red hats then we know that the dummy didn't receive a hat; otherwise we know that the dummy did receive a hat), and every one is obtained in precisely one way.
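A brute-force check of the identity for small $n$ (Python, using `math.comb`; helper name is mine):

```python
from math import comb

def double_sum(n):
    # sum over 0 <= i < j <= n+1 of C(n, i) * C(n+1, j)
    return sum(comb(n, i) * comb(n + 1, j)
               for j in range(1, n + 2) for i in range(j))

results = [(double_sum(n), 4 ** n) for n in range(8)]
```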
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3268712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Determine if there exist rational number a and irrational number A such that $A^3+aA^2+aA+a=0$. Determine if there exist a rational number a and irrational number A such that $A^3+aA^2+aA+a=0$. If so, can we say something about them? Are there infinitely many of them?
| For any integer $a$ except $0$ or $1$, the polynomial $x^3 + a x^2 + a x + a$ has no rational roots. Any rational root $A$ would have to be an integer (by Gauss's lemma, or the Rational Root Theorem). Now $A^3 + a A^2 + a A + a = 0 $ means
$$a = - \frac{A^3}{A^2 + A + 1} = -A + 1 - \frac{1}{A^2 + A + 1}$$
which, if $A$ is an integer, is not an integer unless $A = 0$ (corresponding to $a=0$) or $A = -1$ (corresponding to $a = 1$): otherwise $A^2 + A + 1 = (A + 1/2)^2 + 3/4 > 1$.
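Since the cubic is monic, the rational root theorem reduces everything to testing the integer divisors of $a$; a brute-force confirmation over a small range (Python sketch, helper names are mine):

```python
def signed_divisors(a):
    pos = [k for k in range(1, abs(a) + 1) if a % k == 0]
    return pos + [-k for k in pos]

def has_rational_root(a):
    # Monic integer cubic: any rational root is an integer dividing the
    # constant term a (rational root theorem)
    return any(A**3 + a*A**2 + a*A + a == 0 for A in signed_divisors(a))

# Only a = 0 (root A = 0) and a = 1 (root A = -1) should admit a rational root
exceptions = [a for a in range(-50, 51) if a != 0 and has_rational_root(a)]
```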
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3268806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Does $\forall i\in \mathbb Z^+:\left \lfloor log_{2}(i) \right \rfloor = \left \lfloor log_{2}(i+0.999999) \right \rfloor$ Is it true that
$\forall i\in \mathbb Z^+:\left \lfloor log_{2}(i) \right \rfloor = \left \lfloor log_{2}(i+0.999999) \right \rfloor$ ?
The following is false:
$\forall i\in \mathbb Z^+:\left \lfloor log_{2}(i) \right \rfloor = \left \lfloor log_{2}(i+1) \right \rfloor$
(R code):
> for (i in 0:1e9) if ( floor(log2(i)) != floor(log2(i+1))) print(i)
[1] 0
[1] 1
[1] 3
[1] 7
[1] 15
[1] 31
[1] 63
[1] 127
[1] 255
[1] 511
[1] 1023
[1] 2047
[1] 4095
[1] 8191
[1] 16383
[1] 32767
[1] 65535
[1] 131071
[1] 262143
[1] 524287
[1] 1048575
[1] 2097151
[1] 4194303
[1] 8388607
[1] 16777215
[1] 33554431
However
> for (i in 0:1e9) if ( floor(log2(i)) != floor(log2(i+0.999999))) print(i)
[1] 0
It is true for every positive $i$ within the checked range.
| Suppose the two floors differ; then $$\lfloor\lg(i)\rfloor\le n\land n<\lfloor\lg(i+e)\rfloor$$ for some integer $n$.
This is equivalent to
$$\lg(i)<n+1\land n+1\le\lg(i+e)$$
or, when $i$ is an integer
$$ i\le2^{n+1}-1\land 2^{n+1}\le i+e.$$
By subtraction of the inequalities,
$$1\le e.$$
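So with $e = 0.999999 < 1$ no counterexample can exist. Replicating the R experiment in Python over a smaller range:

```python
import math

e = 0.999999
# For positive i the two floors should always agree, since e < 1
mismatches = [i for i in range(1, 100001)
              if math.floor(math.log2(i)) != math.floor(math.log2(i + e))]
```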
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3268998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A strange result if $|G/\mathrm{Z}(G)|=p$ I came across something strange, which I would like to share.
Let's take a group $G$ such that $|G/\mathrm{Z}(G)|=p$, where $p$ is a prime number.
Then, we can show that $G$ is abelian $\iff \mathrm{Z}(G)=G$.
But then $|G/\mathrm{Z}(G)|=|G/G|=1$ and we have a contradiction.
What do I miss?
Thanks
| We have that if $G/Z(G)$ is cyclic then $G$ is abelian. But since $G$ abelian means $G=Z(G)$, this forces $|G/Z(G)|=1$. Now, if $|G/Z(G)|$ were a prime number then $G/Z(G)$ would be cyclic and then $|G/Z(G)|$ would be 1, which is impossible. Hence we can never have $|G/Z(G)|$ prime. We have proved that either $G$ is abelian or $|G/Z(G)|$ has at least two prime factors (which may be equal).
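One can confirm the conclusion on a few small non-abelian groups by brute force; the index $|G/Z(G)|$ is never prime (a self-contained Python sketch; the permutation-group helpers are my own):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens):
    # Closure of the generators under composition
    identity = tuple(range(len(gens[0])))
    elems, frontier = {identity}, [identity]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = compose(g, x)
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return elems

def center_order(group):
    return sum(1 for z in group
               if all(compose(z, g) == compose(g, z) for g in group))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

s3 = set(permutations(range(3)))                  # S3, order 6, trivial center
d4 = generate([(1, 2, 3, 0), (3, 2, 1, 0)])       # dihedral group of order 8
a4 = {p for p in permutations(range(4))           # A4: even permutations
      if sum(p[j] > p[i] for i in range(4) for j in range(i)) % 2 == 0}

indices = [len(g) // center_order(g) for g in (s3, d4, a4)]
```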
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3269102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Continuous mapping of Cantor Set to $[0,1]$ I read that $[0,1]$ is a continuous image of the Cantor set using the dyadic expansion $f$ of real numbers. $f$ is an onto function, since all $x \in [0,1]$ can be represented by an element in the Cantor set.
But $f$ should not be injective, right? Since otherwise we would have a homeomorphism between the Cantor set and $[0,1]$, but the Cantor set is disconnected while $[0,1]$ is connected. Still, I could not picture how the Cantor set fails to admit a continuous one-to-one map onto $[0,1]$. How does $f$ fail to be injective (aside from the homeomorphism argument)?
| The Cantor set $K$ and $[0,1]$ are both compact Hausdorff spaces. A continuous bijection from one compact Hausdorff space to another must be a homeomorphism. So if any $f:K\to [0,1]$ was continuous, surjective, and injective then $f$ would be a homeomorphism, implying that $f^{-1}:[0,1]\to K$ is continuous and surjective, implying that the disconnected space $K$ is a continuous image of the connected space $[0,1],$ which is impossible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3269258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove or disprove $\sum_{i=0}^\infty \frac{1}{i+j+1}\frac{1}{\sqrt{i+\frac{1}{2}}}<\frac{\pi}{\sqrt{j+1}}$ In Example 2.3.5 of the Functional Analysis book by S. Kesavan, it was shown that for $j\gt -\frac{1}{2}$
$$s(j) := \sum_{i=0}^\infty \frac{1}{i+j+1}\frac{1}{\sqrt{i+\frac{1}{2}}}<\frac{\pi}{\sqrt{j+\frac{1}{2}}}\tag{1}$$
Numerically, even the stronger inequality seems to hold
$$s(j) <\frac{\pi}{\sqrt{j+1}}\tag{2}$$
I have proved $(1)$ by comparing the sum with an integral, but I didn't succeed to prove $(2)$. Can you find a proof?
| In S. Kesavan's example, we do not need the summation to begin from $i=0$, as $a_{ij}$ is not even defined there. Therefore, I was able to prove a relaxed version of the inequality.
$$\sum_{i=1}^{\infty}\frac{1}{(i+j+1)\sqrt{i+1/2}} = \frac{1}{(j+2)\sqrt{3/2}}+\sum_{i=2}^{\infty}\frac{1}{(i+j+1)\sqrt{i+1/2}} \tag{1}$$
Further,
$$\sum_{i=2}^{\infty}\frac{1}{(i+j+1)\sqrt{i+1/2}} < \sum_{i=2}^{\infty}\frac{1}{(i+j+1)\sqrt{i}}<\sum_{i=2}^{\infty}(\int_{i-1}^{i}\frac{dx}{(x+j+1)\sqrt{x}})$$
$$\sum_{i=2}^{\infty}(\int_{i-1}^{i}\frac{dx}{(x+j+1)\sqrt{x}})=\int_{1}^{\infty}\frac{dx}{(x+j+1)\sqrt{x}} \tag{2}$$
Also,
$$\frac{1}{(j+2)\sqrt{3/2}}<\frac{1}{(j+2)}<\int_{0}^{1}\frac{dx}{(x+j+1)\sqrt{x}} \tag{3}$$
From $(1),(2)$ and $(3)$ we have,
$$ \sum_{i=1}^{\infty}\frac{1}{(i+j+1)\sqrt{i+1/2}} < \int_{0}^{\infty}\frac{dx}{(x+j+1)\sqrt{x}} = \frac{\pi}{\sqrt{j+1}} $$
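A numerical check of the stronger inequality $(2)$ with the full sum from $i=0$: the partial sum plus a safe tail bound ($\sum_{i>N} i^{-3/2} < 2/\sqrt N$) stays below $\pi/\sqrt{j+1}$ for several values of $j$ (Python sketch, names are mine):

```python
import math

def s_partial(j, N):
    return sum(1.0 / ((i + j + 1) * math.sqrt(i + 0.5)) for i in range(N + 1))

N = 200000
tail_bound = 2.0 / math.sqrt(N)   # terms beyond N are bounded by i**(-3/2)
checks = [s_partial(j, N) + tail_bound < math.pi / math.sqrt(j + 1)
          for j in (0, 1, 5)]
```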
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3269420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to check if a set is compact? I am struggling to find a way to check whether a set is compact. I know that in $\mathbb R^n$ a set is compact if and only if it is closed and bounded, but what about practice?
Especially if I have something like that:
$$\{(x,y)\mid x^2+y^2 < 2\}$$
Do you know how to approach the problem?
| Sure. That specific set is not compact since it is not a closed set: $\lim_{n\to\infty}\left(\sqrt{2}-\frac1n,0\right)=(\sqrt{2},0)$, which does not belong to your set, whereas each $\left(\sqrt{2}-\frac1n,0\right)$ does.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3269512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are there algebraic functions with infinitely many roots? For example, a rational function is zero if and only if its numerator (which is a polynomial) is zero. Thus, a rational function which is not identically zero has only a finite number of roots.
Is the same conclusion valid for smooth algebraic functions? If so, what would be a proof or a source?
Edit (in response to the comments). I'm particularly interested in a real-valued function of a real variable given explicitly by a formula obtained from the elementary algebraic operations (addition, subtraction, multiplication, division, roots).
| There are even non-zero polynomials $f(x)$ having infinitely many roots. This can happen when we consider polynomials not over fields but, say, over the real algebra of quaternions $\mathbb{H}$. The polynomial
$$
f(x) = x^2+1
$$
has infinitely many roots in $\mathbb{H}$.
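Concretely, every pure quaternion $ai+bj+ck$ with $a^2+b^2+c^2=1$ squares to $-1$; a check with exact rational arithmetic (the Hamilton product helper is mine):

```python
from fractions import Fraction as F

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# A few unit vectors (a, b, c); each gives a distinct root of x^2 + 1
units = [(F(1), F(0), F(0)), (F(0), F(1), F(0)),
         (F(3, 5), F(4, 5), F(0)), (F(2, 7), F(3, 7), F(6, 7))]
squares = [qmul((F(0), a, b, c), (F(0), a, b, c)) for a, b, c in units]
```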
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3269621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding shaded triangle areas in a parallelogram There is the following parallelogram involving two shaded triangles.
If I computed correctly, the angles $\angle AMD$, $\angle BMN$ and $\angle CDM$ are $45^\circ$. But I can't go further.
| $\angle AMD$, $\angle BMN$ and $\angle CDM$ are not necessarily $45^\circ$.
$[\triangle AMD]=\dfrac12\times\dfrac23\times[ABCD]$
$[\triangle BMN]=\dfrac12\times\dfrac13\times\dfrac13\times[ABCD]$
$[\triangle CND]=\dfrac12\times\dfrac23\times[ABCD]$
$[\triangle DMN]=\left(1-\dfrac13-\dfrac1{18}-\dfrac13\right)\times[ABCD]=\dfrac5{18}[ABCD]$
So, $\dfrac5{18}[ABCD]=\dfrac12\times6\times10$
$[ABCD]=108$
$\textrm{shaded area}=\left(\dfrac13+\dfrac1{18}\right)\times 108 = 42$
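The bookkeeping above can be replayed with exact fractions (assuming, as in the answer, that $\triangle DMN$ has legs $6$ and $10$):

```python
from fractions import Fraction as F

area_AMD = F(1, 2) * F(2, 3)            # as a fraction of [ABCD]
area_BMN = F(1, 2) * F(1, 3) * F(1, 3)
area_CND = F(1, 2) * F(2, 3)
area_DMN = 1 - area_AMD - area_BMN - area_CND   # = 5/18

area_ABCD = F(1, 2) * 6 * 10 / area_DMN         # [DMN] = 30 fixes [ABCD]
shaded = (area_AMD + area_BMN) * area_ABCD
```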
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3269897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Arrangements of the word $ABCDEFGGGG$ If we consider the word $ABCDEFGGGG$. To find the number of arrangments for that word, we just calculate: $\frac{10!}{4!}$.
But now suppose we want to find the total number of arrangements of that word such that two $G$'s come together and the other two $G$'s are separated. One of the arrangements is, for example: $ABGGCDGEFG$.
*Note that the two $G$'s that are separated must also be separated from the two $G$'s that are together.
How can we think about this problem?
Any help will be very appreciated.
| The answer is not $75600$. The reason is that some of the arrangements are repeated.
For example, when the $G$'s are placed as $G\ G\ GG$ they are separated, but permuting the first and the second $G$ gives the same arrangement. So the solution is $2! \cdot \binom{7}{3} \cdot 6!$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Let $(\textbf{a}_{n})_{n = 1}^{\infty}$ be a sequence in $\mathbb{R}^k$. Show $\{\textbf{a}_{n} : n \geq 1 \} \cup \{\textbf{a}\}$ is closed. Let $(\textbf{a}_{n})_{n = 1}^{\infty}$ be a sequence in $\mathbb{R}^k$ and $$\lim_{n \rightarrow \infty}\textbf{a}_{n} = \textbf{a}.$$ Show $B = \{\textbf{a}_{n} : n \geq 1 \} \cup \{\textbf{a}\}$ is closed.
Edit: I would specifically like to use the limit point definition of closed. i.e:
A set $A$ is considered closed if it contains all of its limit points
EDIT: I think I've gotten a way to prove it.
Attempt 2:
We will show $B^{c}$ is open. This means I have to construct an open ball $B_{\delta}(x)$ s.t $B_{\delta}(x) \subset B^{c}$.
Given that the sequence $a_{n}$ converges then for all $\epsilon > 0$ there exists a $N_{\epsilon} > 0$ such that for all $n > N_{\epsilon}$, $\|a_{n} - a \| < \epsilon$.
Let $x \in B^{c}$ and set $\epsilon = \frac{|x-a|}{2} > 0$.
As $a_{n} \rightarrow a$, there is an $N$ such that $\|a_{n} - a\| < \frac{|x-a|}{2}$ for all $n > N$; let $\omega = \min \{\frac{|x-a|}{2}, \frac{|x-a_{1}|}{2}, \dots, \frac{|x-a_{N}|}{2}\}$.
This then means $\|x-a_{n}\| > \omega \\ \Rightarrow \ B_{\omega}(x) \subset B^{c}$
Therefore $B^{c}$ is open and this means $B$ is closed.
Comment:
I'm not sure if the ball I constructed makes sense. I see it visually what I would like to accomplish but I may not be articulating it correctly.
| Suppose there is a limit point $\mathbf b \neq \mathbf a$ of $B$. Set $\varepsilon=\frac12|\mathbf b-\mathbf a|$. Then by convergence of the sequence there are only finitely many points outside $B_ε(\mathbf a)$.
However at the same time, as $\mathbf b$ is a limit point, there need to be infinitely many points of $\mathbf B$ inside $B_ε(\mathbf b)$, which is contained in that "outside".
This is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Using Weierstrass theorem to prove having a finite optimal solution I was wondering if anyone has used the Weierstrass theorem to prove that we have a finite optimal solution, or has any reference for this claim.
I read in a paper that if an objective function is convex, then, using the Weierstrass theorem, we can conclude that the optimal objective value is finite and the optimal set is non-empty. However, they have not provided a reference for that.
Thanks a lot for your help in advance,
| The Weierstrass extreme value theorem asserts that if you minimize a continuous function over a closed and bounded set in $\mathbb R^{n}$, then the minimum will be achieved at some point in the set.
I read in a paper that if an objective function is convex, then, using
the Weierstrass theorem, we can conclude that the optimal objective
value is finite and the optimal set is non-empty. However, they
have not provided a reference for that.
To use the version of the Weierstrass theorem that I summarized above, you need the set under consideration to be closed and bounded and the function to be continuous. On a subset of $\mathbb R^{n}$, if $f$ is convex, then it is also continuous. However, there's nothing in your statement that says that the set of points is closed and bounded.
It's easy to construct examples of convex functions on sets that are either not closed or not bounded where no minimum is achieved. For example, minimize $f(x)=e^{x}$ on $\mathbb R$.
The OP has now cited the paper that they were reading. In this paper, a convex function $f$ is being minimized over a closed and bounded (compact) set $X$ in $\mathbb R^{n}$, so the extreme value theorem applies.
Statements and proofs of the Weierstrass extreme value can be found in many undergraduate analysis textbooks. See for example theorem 4.16 in the third (1976) edition of Rudin's Principles of Mathematical Analysis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Cycle type of a permutation in $S_n$ and its relation to partition of $n$ and its Young diagram I know that it's possible to assign to each permutation its cycle type. I found two definitions of the cycle type and its relation to a partition of $n$:
First definition
Given $\sigma \in S_n$ written as product of $l$ cycles of lengths $(k_1,\dots,k_l)$, the cycle type is just $(k_1,\dots,k_l)$. Then by ordering the product so that $k_1 \geq k_2 \geq \dots \geq k_l$ and seeing that $\sum_i k_i = n$, one obtains that $(k_1,\dots,k_l)$ is a partition of $n$.
Second definition
Given $\sigma \in S_n$ the cycle type is the list $(w_1,...,w_n)$ where $w_i$ is the number of $i$-cycles in the product. Then it's possible to build a partition $\lambda = (\lambda_1,\dots,\lambda_n)$ of $n$ in the following manner:
*$\lambda_1 = \sum_{i=1}^{n} w_i$
*$\lambda_2 = \sum_{i=2}^{n} w_i$
*$\dots$
*$\lambda_n = w_n$
since $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$ and $\sum_{i=1}^{n} \lambda_i = n$ we know that $\lambda$ is indeed a partition of $n$.
Young Diagram
I know that we can show a partition graphically using a Young diagram where each row, starting from the top (in the English style), contains a number of boxes that is equal to the i-th number listed in the partition.
So, by the previous definitions, a Young diagram would have $k_1$ or $\lambda_1$ boxes in its first row, then $k_2$ or $\lambda_2$ boxes in its second row, and so on.
Question
I'm getting confused because when I apply these two definitions, given a $\sigma \in S_n$ I get two different partitions and hence two different Young diagrams.
For example let $\sigma = (123)(45) \in S_5$. If we use the first definition, the cycle type would be $(3,2)$ and the corresponding Young diagram would have $3$ boxes in the first row and $2$ boxes in the second.
However, if we use the second definition the cycle type ($w$) would be $(0,1,1,0,0)$ while the partition ($\lambda$) would be $(2, 2, 1, 0, 0)$ so that the Young diagram would have $2$ boxes in the first row, $2$ boxes in the second row and $1$ box in the third row.
Which one is the correct definition?
Thanks.
| I have never seen the second definition, to be honest. For me the cycle type is given by the first one, and at least for me that is the classical one. Most often one defines the cycle type to see that it determines the conjugacy class, i.e. two permutations are conjugate iff they have the same cycle type, and that is more intuitive via the first definition. The two partitions are, however, conjugate to each other: the Young diagram of the second is the transpose of the Young diagram of the first, so there is a way to convert them into each other.
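Indeed, the two constructions give conjugate partitions: $\lambda_k=\sum_{i\ge k} w_i$ counts the cycles of length at least $k$, which is exactly the height of the $k$-th column of the first diagram. A quick check for $\sigma=(123)(45)\in S_5$ (Python, names are mine):

```python
def conjugate(partition):
    # Transpose of the Young diagram: column lengths become row lengths
    return [sum(1 for part in partition if part >= k)
            for k in range(1, max(partition) + 1)]

cycle_lengths = [3, 2]                                  # first definition
n = 5
w = [cycle_lengths.count(i) for i in range(1, n + 1)]   # second definition
lam = [sum(w[k - 1:]) for k in range(1, n + 1)]         # (2, 2, 1, 0, 0)
lam_nonzero = [part for part in lam if part > 0]
```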
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is it obvious that the plane $z=0$ is tangent to the surface $z=x^{2}+y^{2}$ Why is it obvious that the plane $z=0$ is tangent to the surface $z=x^{2}+y^{2}$
I don't quite understand; is this obvious? I have a problem with the background knowledge; I don't even know how to deal with the surface $z=x^{2}+y^{2}$.
| In fact, at $z=0$ the equation $x^2+y^2=0$ defines only one point, namely $(0,0)$.
So the plane $z=0$ and the surface intersect in a single point.
But since the surface is smooth (it is the graph of a polynomial), it has no singular points, and the touching point is automatically a tangent point.
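Equivalently, both partial derivatives of $f(x,y)=x^2+y^2$ vanish at the origin, so the tangent plane $z=f(0,0)+f_x\,x+f_y\,y$ is exactly $z=0$; a minimal finite-difference check:

```python
def f(x, y):
    return x ** 2 + y ** 2

h = 1e-6
fx = (f(h, 0) - f(-h, 0)) / (2 * h)   # partial derivative in x at the origin
fy = (f(0, h) - f(0, -h)) / (2 * h)   # partial derivative in y at the origin
# Tangent plane at (0, 0): z = f(0, 0) + fx * x + fy * y, i.e. z = 0
```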
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Generating function of a parametrized binomial coefficient Let $m$ be an integer and $A_p(m) = \binom{mp}{p}$.
I'd like to know more about $B_m(z) = \sum_{p \geq 0} A_p(m) z^p$.
At least, I'd love to be able to compute $B_m\left(\dfrac{1}{q}\right)$ for some integers $q$.
What I tried:
*Look at Fuss-Catalan numbers and their generating function to derive a relation with $B_m$.
*But as there is no closed form for that generating function, I cannot derive an interesting enough relation here.
Intuitively, I could try to interpret $A_p(m)$ as something combinatorial and look for a recurrence relation to make $B_m$ explicit, but I have no idea.
| This is not an answer.
For $m=1,2,3$ there are closed forms for $B_m(z)$.
For $m\geq 4$ one again obtains hypergeometric functions, with an interesting pattern:
$$B_m(z)=\,
_{m-1}F_{m-2}\left(\frac{1}{m},\frac{2}{m},\cdots,\frac{m-1}m; \frac{1}{m-1},\frac{2}{m-1},\cdots,\frac{m-2}{m-1};\frac{m^m}{(m-1)^{m-1}}z\right)$$
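For instance, for $m=2$ the closed form is the classical central-binomial generating function $B_2(z)=1/\sqrt{1-4z}$, valid for $|z|<1/4$; a partial-sum check at $z=1/10$ (helper name is mine):

```python
from math import comb

def B2_partial(z, terms=60):
    # Partial sum of sum_p C(2p, p) z^p
    return sum(comb(2 * p, p) * z ** p for p in range(terms))

z = 0.1                              # inside the radius of convergence 1/4
closed_form = (1 - 4 * z) ** -0.5    # = 1/sqrt(0.6)
approx = B2_partial(z)
```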
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
If $T$ is a terminal object, is $\text{hom}(X,T)$ also a terminal object? In a category with internal Hom and terminal object $T$, is it true that $\text{hom}(X,T)$ is also a terminal object for any object $X$?
It is definitely true for $\mathbf{Set}$ or $\mathbf{Vec}$, but I'm not sure if it is true in general.
I'm probably stuck in thinking about sets, and I do not even know where to start in proving or disproving it.
| By definition of the terminal object, $\hom(X,T)$ is a one-element set for every $X$.
And one-element sets are exactly the terminal objects in $\mathbf{Set}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Does $\mathbf{N}$ with the reverse divisibility order form a Heyting algebra?
Consider the nonnegative integers $\mathbf{N}$ with the reverse divisibility order (i.e. $a \leq b$ $\iff$ $b \mid a$). Is this a Heyting algebra?
One advantage of the reverse ordering is that the elements we usually call 0 and 1 in a lattice are really 0 and 1 respectively, rather than vice versa. It may help to instead consider the isomorphic lattice of subgroups or ideals of the group or ring of integers $\mathbf{Z}$.
| Yes, the lattice of subgroups of the infinite cyclic group is relatively pseudocomplemented.
It is easy to check that the relative pseudocomplement $(p^m)\to(p^n)$ is $(p^n)$ if $m<n$ and $\mathbb{Z}$ if $m\geq n$. So
$$
((p_1^{m_1}\dots p_k^{m_k})\to(p_1^{n_1}\dots p_k^{n_k})) = \biggl(\prod_{\substack{j\\ m_j<n_j}} p_j^{n_j}\biggr).
$$
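This formula can be sanity-checked by brute force. In the reverse divisibility order the meet is the lcm, and "greatest" means "divides all the others", so $a\to b$ is the gcd of all $c$ with $b \mid \operatorname{lcm}(a,c)$ (a Python sketch over positive integers, helper names are mine):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def heyting_imp(a, b, limit=1000):
    # a -> b = greatest c with meet(a, c) <= b, i.e. with b | lcm(a, c);
    # "greatest" in reverse divisibility is the gcd of the candidates
    candidates = [c for c in range(1, limit + 1) if lcm(a, c) % b == 0]
    return reduce(gcd, candidates)

examples = [
    (heyting_imp(4, 8), 8),    # (2^2) -> (2^3) = (2^3), since m < n
    (heyting_imp(8, 4), 1),    # m >= n: the top element (all of Z)
    (heyting_imp(12, 18), 9),  # only the prime 3 has m_j < n_j, so 3^2
]
```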
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3270998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Clarification of L'Hopital Proof Pugh I am self-studying Real Analysis right now via Pugh's Real Mathematical Analysis but am having trouble understanding a step of the author's proof of L'Hopital's rule.
The theorem is stated as:
If $f$ and $g$ are differentiable functions defined on an interval $(a,b)$, both of which tend to $0$ at $b$, and if the ratio of their derivatives $f'(x)/g'(x)$ tends to a finite limit $L$ at $b$, then $f(x)/g(x)$ also tends to $L$ at $b$, where $g(x),g'(x) \neq 0.$
His proof reads as follows:
Given $\epsilon > 0$ we must find a $\delta > 0$ such that if $|x-b| < \delta$ then $|f(x)/g(x) - L|< \epsilon.$ Since $f'(x)/g'(x)$ tends to $L$ as $x$ tends to $b$ there does exist a $\delta > 0$ such that if $x \in (b-\delta, b)$ then $$\left\vert \frac{f'(x)}{g'(x)}-L \right\vert < \frac \epsilon 2.$$ For each $x \in (b-\delta, b)$ determine a point $t \in (b-\delta, b)$ which is so near to $b$ that \begin{align}|f(t)+g(t)| &< \frac{g(x)^2\epsilon}{4(|f(x)|+|g(x)|)} \\ |g(t)| &< \frac{|g(x)|}{2}.\end{align} Since $f(t)$ and $g(t)$ tend to $0$ as $t$ tends to $b$, and since $g(x) \neq 0$ such a $t$ exists. It depends on $x$, of course. By this choice of $t$ and the Ratio Mean Value Theorem we have, for some $\theta \in (x,t),$ \begin{align*}\left\vert \frac{f'(x)}{g'(x)}-L \right\vert &= \left\vert \frac{f(x)}{g(x)}-\frac{f(x)-f(t)}{g(x)-g(t)}+\frac{f(x)-f(t)}{g(x)-g(t)} - L \right\vert \\ &\le \left\vert \frac{g(x)f(t)-f(x)g(t)}{g(x)(g(x)-g(t))} \right\vert + \left\vert \frac{f'(\theta)}{g'(\theta)}-L \right\vert < \epsilon, \end{align*} which completes the proof that $f(x)/g(x) \to L$ as $x \to b.$
The part I didn't get was the last inequality
$$\left\vert \frac{g(x)f(t)-f(x)g(t)}{g(x)(g(x)-g(t))} \right\vert + \left\vert \frac{f'(\theta)}{g'(\theta)}-L \right\vert < \epsilon,$$
which I'm sure relates to his constraints on $|f(t) + g(t)|$ and $g(t)$. I understood his general point about $f(t)/f(x), g(t)/g(x)$ getting arbitrarily small so that $$\frac{f(x)}{g(x)} \approx \frac{f(x)-f(t)}{g(x)-g(t)}$$ but don't really understand the finer details of the proof.
Any help is greatly appreciated. :)
| This is a good question. I actually don't think it follows from what he has written. Take, for example, $g(t) = 1/2, f(t) = -1/2, g(x) = 1, f(x) = 1$. Then $|f(t)+g(t)| < \frac{g(x)^2\epsilon}{4(|f(x)|+|g(x)|)}$ and $|g(t)| < \frac{|g(x)|}{2}$, but $|\frac{g(x)f(t)-f(x)g(t)}{g(x)(g(x)-g(t))}| = |\frac{-1/2-1/2}{1/2}| = 2$.
I can't figure out what he was going for. He already said $x \in (b-\delta,b)$ implies $|\frac{f'(x)}{g'(x)}-L| < \frac{\epsilon}{2}$. He fixed an $x \in (b-\delta,b)$ and $t \in (x,b)$, so since $\theta \in (x,t)$, we know $\theta \in (b-\delta,b)$ and thus $|\frac{f'(\theta)}{g'(\theta)}-L| < \frac{\epsilon}{2}$. So we just need to show that $|\frac{g(x)f(t)-f(x)g(t)}{g(x)(g(x)-g(t))}| < \frac{\epsilon}{2}$. But I don't see how the two chosen conditions on $t$ would help (indeed, the first part of my answer shows that more is needed).
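To make the counterexample concrete, here is a small numeric check (Python; the values $f(t)=-0.49,\ g(t)=0.49$ are a slight perturbation of the ones above so that both of Pugh's inequalities hold strictly, and $\epsilon=1$ is an arbitrary illustrative choice):

```python
# Hypothetical numeric values illustrating the counterexample idea:
# both of Pugh's conditions on t hold, yet the "error" term
# |g(x)f(t) - f(x)g(t)| / |g(x)(g(x) - g(t))| is not small.
eps = 1.0
fx, gx = 1.0, 1.0
ft, gt = -0.49, 0.49   # perturbed from -1/2, 1/2 so |g(t)| < |g(x)|/2 strictly

cond1 = abs(ft + gt) < gx ** 2 * eps / (4 * (abs(fx) + abs(gx)))  # 0 < 1/8
cond2 = abs(gt) < abs(gx) / 2                                     # 0.49 < 0.5
error_term = abs(gx * ft - fx * gt) / abs(gx * (gx - gt))         # ~1.92 > eps/2
```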
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 2
} |
Proving that an integral is a holomorphic function Let $U\in \mathbb C$. Let $f:U\to\mathbb C$ be analytic on $U$ and continuous on the boundary of $U$. I want to prove that, for each $a\in U$, and sufficiently small $r>0$,
$$
g(w)=\frac{1}{2\pi i}\int_{|z-a|=r}\frac{zf'(z)}{f(z)-w}dz
$$
defines a holomorphic function.
My attempt: Assume for a moment that $f$ is injective (not actually valid). Using the substitution $x=f(z)$, we get
$$
g(w)=\frac{1}{2\pi i}\int_{|f^{-1}(x)-a|=r}\frac{f^{-1}(x)}{x-w}dx=f^{-1}(w)
$$
by Cauchy integration formula, since $f^{-1}$ is analytic (because $f'(z)\neq0$).
Is this valid? I feel something missing. For example, I have not addressed the condition "sufficiently small r". What's wrong?
| Let $n$ be the order of the zero of $f(z)-f(a)$ at $z=a$.
$$f(z)- f(a)= f^{(n)}(a) (z-a)^n+O((z-a)^{n+1})$$
For $r$ small enough then $f(z)-f(a)-w, |z-a|=r$ doesn't vanish on $|w| < R= \frac12 |f^{(n)}(a)| r^n$ so that $$g(f(a)+w)=\frac{1}{2\pi i}\int_{|z-a|=r}\frac{zf'(z)}{f(z)-f(a)-w}dz$$ is analytic on $|w| < R$.
If $n=1$ then $g(w) = f^{-1}(w)$ (proving the latter is locally analytic). If $n=2$ then the substitution $u = f(z)$ in the integral transforms the simple loop $|z-a| = r$ into a double loop around $f(a)$.
You can use the residue theorem to express $g(f(a)+w)$ in term of $f^{-1}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\sum_{n=1}^\infty \int_{-\infty}^\infty\cos(n^2x)I(x)dx$ converges absolutely.
Let $I$ be a measurable subset of $\mathbb R$. We define
$$ I(x)=\int_I\frac{\chi_{(-1\le x-y\le 1)}}{1+y^2}dy. $$
For $n\ge 1$ we define
$$a_n=\int_{-\infty}^\infty\cos(n^2x)I(x)dx.$$
Prove that $\sum_{n=1}^\infty a_n$ converges absolutely.
My attempt:
I know that $I(x)$ is a nonnegative bounded $L^1$ function on $\mathbb R$. In order to show that $\sum_{n=1}^\infty a_n$ converges absolutely, I tried to estimate the integral:
\begin{align}
\sum_{n=1}^\infty |a_n|&\le\sum_{n=1}^\infty\int_{-\infty}^\infty\left|\cos(n^2x)[\arctan(x+1)-\arctan(x-1)]\right|dx\\
\end{align}
However, I don't know how to proceed without sacrificing the term $\cos(n^2x)$. I have the intuition that $\arctan(x+1)-\arctan(x-1)$ goes to zero as $x$ goes to infinity. Meanwhile, $|\cos(n^2x)|$ should be magnified to something regarding $n$ from where we compare each term with a convergent series to conclude. But I am not sure how to carry it out explicitly. Any help? Thank you.
| We need resort to oscillatory nature of the integrand. By Fubini's theorem1),
$$ a_n
= \int_{I}\int_{\mathbb{R}} \frac{\cos(n^2 x)\mathbf{1}_{\{\left|x-y\right|\leq 1\}}}{1+y^2}\,\mathrm{d}x\mathrm{d}y
= \int_{I} \frac{\sin(n^2(y+1)) - \sin(n^2(y-1))}{n^2(1+y^2)}\,\mathrm{d}y, $$
and so, $\left|a_n\right| \leq c/n^2$ for some constant $c > 0$. This is enough to conclude that $\sum_{n\geq 1} a_n$ converges absolutely.
1) Fubini's theorem is applicable because
$$ \int_{I}\int_{\mathbb{R}} \left| \frac{\cos(n^2 x)\mathbf{1}_{\{\left|x-y\right|\leq 1\}}}{1+y^2}\right| \,\mathrm{d}x\mathrm{d}y
\leq \int_{I}\int_{\mathbb{R}} \frac{\mathbf{1}_{\{\left|x-y\right|\leq 1\}}}{1+y^2} \,\mathrm{d}x\mathrm{d}y
\leq \int_{\mathbb{R}} \frac{2}{1+y^2} \, \mathrm{d}y = 2\pi < \infty. $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$ 5r + 4s + 3t + 6u = 100, \:\: r \ge s \ge t \ge u \ge 0 $ maximum and minimum possible of $r + s + t + u$? We have
$$ 5r + 4s + 3t + 6u = 100, \:\: r \ge s \ge t \ge u \ge 0 $$
What is the sum of the maximum and minimum possible of $r + s + t + u$?
Attempt:
Assume that $r'+s'+t'+u'$ is the maximum. Now if $u' > 0$, then we can have
$$ (r' + K) + s' + t' + (u' - \frac{5}{6} K), \:\: K > 0 $$
is bigger than the claimed maximum, with ($r=r'+K, s=s', t=t', u = u' - \frac{5}{6}K$). We have a contradiction. So $u'=0$.
Now see
$$ 5r + 4s + 3t = 100 $$
Similarly, let $r^{*} + s^{*} + t^{*}$ be the maximum. Then
$$ r^{*} + (s^{*} - \frac{3}{4}C) + (t^{*} + C), \:\: C > 0 $$
is bigger than the claimed maximum, provided that $s = s^{*} -(3/4)C \ge t = t^{*} + C$
with
$$ C \le \frac{4}{7}(s^{*}-t^{*})$$
After this I have no idea.
| Hint: We get $$20\le r+s+t+u\le 25,$$ where the minimum will be attained for $$r=20,\ s=t=u=0$$ and the maximum by $$r=s=t=\frac{25}{3},\ u=0.$$
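These claimed bounds are easy to probe numerically. The sketch below (Python, a spot check rather than a proof) confirms both claimed extremal points are feasible with sums $20$ and $25$, and that a few thousand random feasible points all land inside $[20,25]$:

```python
import random
from fractions import Fraction as F

def feasible(r, s, t, u):
    return 5*r + 4*s + 3*t + 6*u == 100 and r >= s >= t >= u >= 0

min_pt = (F(20), F(0), F(0), F(0))               # claimed minimizer, sum 20
max_pt = (F(25, 3), F(25, 3), F(25, 3), F(0))    # claimed maximizer, sum 25
ok_min, ok_max = feasible(*min_pt), feasible(*max_pt)
sum_min, sum_max = sum(min_pt), sum(max_pt)

def random_feasible():
    # sorted decreasing nonnegative values, scaled onto the constraint plane
    vals = sorted((random.random() for _ in range(4)), reverse=True)
    scale = 100 / (5*vals[0] + 4*vals[1] + 3*vals[2] + 6*vals[3])
    return [v * scale for v in vals]

random.seed(0)
sums = [sum(random_feasible()) for _ in range(5000)]
lo, hi = min(sums), max(sums)     # observed range stays inside [20, 25]
```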
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Two sided normal p-value question From Statistical Inference by Casella and Berger:
Let $X_1 , \dots, X_n$ be a random sample from a $n(\mu, \sigma^2)$ population. Consider testing $:H_0: \mu = \mu_0$ verses $H_1 : \mu \neq \mu_0$. $W(X) = |\bar X - \mu_0| / (S / \sqrt n)$ is a test statistic that rejects $H_0$ for large values which has a Student's $t$ distribution with $n-1$ degrees of freedom.
To calculate $p(x) = \sup_{(\mu, \sigma^2) \in \Theta_0} P_{(\mu, \sigma^2)}(W(X) \ge W(x))$ we recognize that the supremum is the same regardless of what value of $\sigma$ is chosen and therefore $p(x) = 2 P (T_{n-1} \ge |\bar x - \mu_0| / (s / \sqrt n))$.
How is $p(x) = 2 P (T_{n-1} \ge |\bar x - \mu_0| / (s / \sqrt n))$? I'm not following the logic of how the supremum is twice the probability of any $(\mu, \sigma)$ chosen.
| The distribution of $T_{n-1}$ is independent of $\sigma$, hence for every value of $\sigma$ the p-value equals the probability of $T_{n-1}$ being larger than $|\bar{x} - \mu_0|/(s/\sqrt{n})$. Due to the symmetry of $T_{n-1}$ around $0$, it suffices to calculate only the one-sided probability and multiply it by $2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
if the sum of two numbers $\alpha$ and $\beta$ is algebraic, and their product is transcendental, what do we know about these numbers? These are elements of a field. My intuition says that $\alpha=a+b$, $\beta=a-b$, where, $a$ is algebraic and $b$ is transcendental, but I can't prove it. I don't even know where to start.
Thanks in advance!
| If $\alpha + \beta$ is algebraic and $\alpha \beta$ is transcendental, then
$(\alpha - \beta)^2 = (\alpha+\beta)^2 - 4 \alpha \beta$ is transcendental, so $\alpha - \beta$ is transcendental.
Thus $\alpha = ((\alpha + \beta) + (\alpha - \beta))/2$ is transcendental, and so is
$\beta = ((\alpha + \beta) - (\alpha - \beta))/2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why does the congruence hold? Let $p$ be a prime number and $\mathbb{Q}(\zeta)$ be the pth cyclotomic number field where $\zeta$ is any primitive pth root of unity.
Writing $$t=b_0+b_1\zeta+...+b_{p-2}\zeta^{p-2} $$ with $b_j \in \mathbb{Z}$ , we get $$t^p \equiv b_0^p+b_1^p+...+b_{p-2}^p \pmod{p\mathbb{Z}[\zeta]}$$
I understand the congruence if I consider this modulo p .
| First prove that $t^p = b_0^p + (b_1\zeta)^p + \cdots + (b_{p-2}\zeta^{p-2})^p$.
Hint: write it out and note what terms get a coefficient divisible by $p$, then note that all those coefficients are in your ideal $(p)$.
Another hint: Maybe start small and show $(a + b)^p = a^p + p(\cdots) + b^p$.
Next, use $\zeta^p = 1$ to arrive at your final answer.
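A quick computational illustration of the first hint: every middle binomial coefficient $\binom{p}{k}$, $0<k<p$, is divisible by the prime $p$, which is why $(a+b)^p \equiv a^p + b^p \pmod p$ (the "freshman's dream"):

```python
from math import comb

# middle binomial coefficients C(p, k), 0 < k < p, are divisible by p
divisible = all(comb(p, k) % p == 0
                for p in (3, 5, 7, 11, 13)
                for k in range(1, p))

# consequence, spot-checked for p = 5, a = 7, b = 9
p, a, b = 5, 7, 9
lhs = (a + b) ** p % p
rhs = (a ** p + b ** p) % p
```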
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$F$ is algebraically closed $\iff$ $\nexists$ $K$ s.t. $F \leq K$, $K \neq F$ and $[K:F] < \infty$
$F$ is algebraically closed $\iff$ $\nexists$ $K$ s.t. $F \leq K$, $K \neq F$ and $[K:F] < \infty$.
Proof:
($\implies$) assume by way of contradiction that $F$ is an algebraically closed field and there does exist such a field $K$, say $[K:F] = n$.
Let $a_1,....a_n$ be a basis for $K$ over $F$.
Okay, now I'm lost, haha
$(\impliedby)$
I need help with this direction also.
Thanks all! You're the best!
| The correct statement is :
$F$ is algebraically closed if and only if there does not exist a finite field extension $K$ of $F$, i.e. there does not exist $K$ such that $F\leq K$ and $[K:F]<\infty$.
Proof : $(\Rightarrow)$ Say $K$ is a finite field extension of $F$ of degree $n$. Let $\alpha\in K-F$. Then $\{1,\alpha,\alpha^2,\dots,\alpha^n\}$ is linearly dependent over $F$. Then there exists $\lambda_i\in F$ such that $\sum_i\lambda_i\alpha^i=0$, i.e. $\alpha$ satisfies a polynomial with coefficients in $F$, which contradicts the fact that $F$ is algebraically closed.
$(\Leftarrow)$ Say $F$ is not algebraically closed. Then there exists an irreducible polynomial $f\in F[x]$ such that $f$ has no root in $F$. Then $K=F[x]/(f)$ is a finite field extension of $F$, which is a contradiction. Hence $F$ is algebraically closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3271975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Calculate the sum of series with square roots Calculate the sum of the following series using partial sums:
$$\sum_{n=1}^\infty \frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n} \sqrt{n+1}} $$
I rationalized the upper part of the fraction but I got lost. Could you please help me showing the steps of the how to transform the fraction into a partial sum? Thanks in advance.
| HINT:$$\frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n} \sqrt{n+1}} = \frac{1}{\sqrt{n}}-\frac{1}{\sqrt {n+1}}$$
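Since the terms telescope, the $N$-th partial sum is $S_N = 1 - 1/\sqrt{N+1}$, so the series sums to $1$. A short numeric check of this (Python):

```python
from math import sqrt

def partial_sum(N):
    return sum((sqrt(n + 1) - sqrt(n)) / (sqrt(n) * sqrt(n + 1))
               for n in range(1, N + 1))

# telescoping identity: S_N = 1 - 1/sqrt(N+1)
errors = [abs(partial_sum(N) - (1 - 1 / sqrt(N + 1)))
          for N in (10, 1000, 100000)]
estimate = partial_sum(100000)   # close to the limit 1
```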
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3272127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is $\sqrt [3]{-1}$ and how does one obtain its value? I know that $i=\sqrt{-1}$. I was wondering what the $\sqrt [3] {-1}$ is.
I went on wolfram alpha, and it gave me values for $a$ and $b$ such that $\sqrt [3] {-1}=a+bi$. After some experimenting, I am almost absolutely certain we have:
$$\sqrt [3] {-1}=\frac12+\frac{\sqrt{3}}{2}i$$
I didn't know how to get to the equation above, so now that I knew the equation, I tried to work backwards. $$2\sqrt [3]{-1}=1+i\sqrt{3}$$ Square both sides:
$$4\sqrt [3]{(-1)}^2=(1+i\sqrt{3})^2$$ Now I know that $\sqrt [3]{-1}=(-1)^{1/3}$ therefore $\sqrt [3] {-1}^2=\big((-1)^{1/3}\big)^2=\big((-1)^2\big)^{1/3}=1^{1/3}=1.$
Also, using the binomial theorem (or Pascal's triangle) on the RHS, and then simplifying a little, $$4=-2+2i\sqrt{3}$$ or $3=i\sqrt{3}$ which means $i=\sqrt {3}$. Uhhh... I did something wrong, didn't I :\
Could someone please help me? I went to this question, but it didn't help, albeit similar.
EDIT:
Wait, the question there actually did help on second thought. I just let $\sqrt [3]{-1}=1$! which means $-1=1$. Aha!
So... now I know that step is wrong, what do we do, still?
Any help would be much appreciated. Thanks! :)
P.S. Apologies if this question is a duplicate.
| Assuming you're not familiar with polar coordinates as suggested by @J.W.Tanner, this algebraic way might be simpler to understand.
Define a complex number to represent the cube root:
$\sqrt [3] {-1} = a + b i$
so
$-1 = (a + b i)^3$
which simplifies to
$-1 = a^3 - 3 a b^2 + (3a^2 b - b^3 )i$
Therefore:
$-1 = a^3 - 3 a b^2 $
$ 0 = 3a^2 b - b^3 $
which has solutions $a = \frac {1}{2}; b = \frac{\sqrt{3}}{2}$
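A numeric double-check that this solution really is a cube root of $-1$; note it is one of three complex cube roots, the others being $-1$ itself and $\frac12-\frac{\sqrt3}{2}i$:

```python
import cmath

z = 0.5 + (3 ** 0.5 / 2) * 1j        # the root found above
cube = z ** 3                        # should be -1 up to rounding

# all three cube roots of -1: exp(i*pi*(2k+1)/3), k = 0, 1, 2
roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]
```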
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3272281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
$A=\{\frac ab | a,b \in Z^+ , \frac{a^2}{b^2}<2 \}$
Show that the set $$A=\left\{\frac ab | a,b \in Z^+ , \frac{a^2}{b^2}<2 \right\}$$ has a least upper bound $L$
My try:
$$\frac{a^2}{b^2}<2$$
$$\frac{a^2}{b^2}<(\sqrt2)^2$$
$$-\sqrt2<\frac{a}{b}<\sqrt2$$
But $a,b >0$ so $\frac ab >0$
Thus, we get
$$0<\frac ab<\sqrt2$$
which implies
$$\sup A = \sqrt 2 = L$$
So, $L$ exists. Hence proved.
I wonder is my proof correct or not ? Or is there any other specific way to solve it using definitions in the real analysis
| The least upper bound $L$ exists by definition of the real numbers, which have as their defining axiom that every set which has some upper bound actually has a least upper bound.
This is, in fact, how one defines $\sqrt{2}$ - so strictly speaking your proof is incorrect, since you're invoking the existence of some real number called $\sqrt{2}$ which is by definition exactly the least upper bound $L$ you're trying to construct.
A proof should go as follows: the set $A$ is a set of rational numbers and hence is a subset of $\mathbb{R}$ (by definition of $\mathbb{R}$). It has an upper bound - for example, 2 is an upper bound, since either $\lvert a/b\rvert \leq 1$, or $a/b\leq a^2/b^2 <2$ by definition of the set.
Since $A$ has some upper bound, and is a set of real numbers, by the axiom of completeness of the real numbers there exists a least upper bound (usually denoted $\sqrt{2}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3272383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Are countable topological spaces second-countable? Are countable spaces (i.e. $\mathbb{N}$ with any topology) second-countable? A countable space can have at most $2^\omega$ open subsets which suggests that a counterexample may exist. On the other hand both discrete and anti-discrete (or more generally with countable topology) spaces are second-countable. Also note that obviously a countable space is separable. So if it is additionally metrizable then it is second-countable.
But I couldn't prove that in general. Or is there a counterexample?
| Consider $ω$ many convergent sequences, and glue their limits. The resulting space won't have countable base at the common limit point.
Also note that a countable space is second-countable if and only if it is first-countable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3272545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Extraction of coefficient from Generating Function Determine the coefficient of $x^{12}$ in:
$(1+x^2+x^4+x^6+x^8+x^{10}+x^{12})(1+x^4+x^8+x^{12})(1+x^6+x^{12})(1+x^8)(1+x^{10})(1+x^{12})$
How should one proceed with this type of question when there is a product of more than two factors?
| The coefficient of $x^{12}$ is equal to the number of partitions of $12$ in which all summands are even.
Given a partition of $12$ in which all summands are even we can divide each summand by $2$ to get a partition of $6$. And given a partition of $6$ we can find a partition of $12$ with even summands by doubling each summand. So there is a one-to-one correspondence between the partitions of $12$ in which all summands are even and the partitions of $6$.
Therefore the coefficient of $x^{12}$ is equal to the number of partitions of $6$.
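This can be verified by brute force: multiplying the six truncated polynomials and reading off the $x^{12}$ coefficient gives $11$, the number of partitions of $6$ (a Python sketch):

```python
def poly_mul(p, q, deg=12):
    # multiply coefficient lists, truncating above x^deg
    r = [0] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                r[i + j] += a * b
    return r

# exponent sets of the six factors, as 0/1 coefficient lists up to x^12
exps = [range(0, 13, 2), range(0, 13, 4), range(0, 13, 6),
        (0, 8), (0, 10), (0, 12)]
prod = [1] + [0] * 12
for e in exps:
    prod = poly_mul(prod, [1 if k in e else 0 for k in range(13)])
coeff_x12 = prod[12]

def partitions(n, max_part=None):
    # number of partitions of n into parts of size <= max_part
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))
```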
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3272781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Implicit Function Theorem Intersection of Hyperbolas Let $$M:=\{(x,y,z)^T\in\mathbb{R}^3:x^2+2yz=3, x^2+y^2+yz=z^2+5\}$$ and $(x_0,y_0,z_0)\in M,\ y_0z_0 \neq 0$
Show that there is an open neighborhood $U \subseteq \mathbb{R}$ around $x_0$ and continuously differentiable functions $g,h:U\rightarrow \mathbb{R}$ with $g(x_0)=y_0,\ \ h(x_0)=z_0$
The only examples I have ever seen of the implicit function theorem are in $\mathbb{R}^2$, and am finding it hard to translate. How does it translate to higher dimensions, i.e. this problem?
| Set $F:(x,y,z)\mapsto (u(x,y,z),v(x,y,z))=(x^2+2yz,x^2+y^2+yz-z^2)\ $ so $F(x_0,y_0,z_0)=(3,5).$ To apply the implicit function theorem, we use the Jacobian (in $y$ and $z$) and check that $(x_0,y_0,z_0)$ is a regular point. This follows by hypothesis and the fact that
$$\det\begin{bmatrix}
2z & 2y\\
2y+z & y-2z
\end{bmatrix}
= -4(y^2+z^2) = 0 \Rightarrow z=y=0.$$
Now, the result is an immediate consequence of the implicit function theorem: there is an open set $U\subseteq \mathbb R$ and a function $G:U\to \mathbb R^2:x\mapsto (g(x),h(x))$ such that $F(x,G(x))=(3,5)$ for all $x\in U$. In particular, $g(x_0)=y_0$ and $h(x_0)=z_0.$
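For a concrete check, the Jacobian determinant works out to $-4(y^2+z^2)$, which vanishes only at $y=z=0$ (Python spot check):

```python
import random

def jac_det(y, z):
    # det of [[2z, 2y], [2y+z, y-2z]]
    return (2 * z) * (y - 2 * z) - (2 * y) * (2 * y + z)

random.seed(1)
matches_closed_form = True
for _ in range(1000):
    y, z = random.uniform(-5, 5), random.uniform(-5, 5)
    if abs(jac_det(y, z) - (-4 * (y * y + z * z))) > 1e-9:
        matches_closed_form = False

# nonzero everywhere except the origin (integer-grid spot check)
nonzero_off_origin = all(jac_det(y, z) != 0
                         for y in range(-3, 4) for z in range(-3, 4)
                         if (y, z) != (0, 0))
```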
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The image of a functor need not be a subcategory Warning 1.2.19 gives an example when the image of a functor is not a subcategory:
But I'm confused: the author defines a functor $F$ right away without saying what the codomain category is. This raises the question: the image of that functor is not a subcategory of which category? If the codomain is the category depicted on the right (which is the same as the image of $F$), then that is a subcategory of itself.
| The codomain is the category depicted on the right, and the image of $F$ is not a subcategory because it contains the morphisms $p$ and $q$ but not their composition $qp$. This is explained under the diagram.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Why is this matrix invertible? I'm following Intro to stochastic processes by Lawler, page 27.
It says if we have a matrix Q such that $Q^n \rightarrow0$, then the eigenvalues of Q have absolute value less than $1$. That part I understand.
Then it says: "Hence, $I-Q$ is invertible." How does that follow?
P.S. I understand the conditions for invertibility, like det can't be zero etc.
| Hint: If $Q$ has no eigenvalue of $1$ then $I - Q$ has no eigenvalue of $0$.
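To see this concretely in the Markov-chain context, here is a small self-contained check (Python, plain 2×2 lists; the particular $Q$ is an arbitrary example with spectral radius below $1$) that $I-Q$ is invertible and its inverse equals the Neumann series $\sum_{n\ge0}Q^n$:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

Q = [[0.5, 0.2], [0.1, 0.3]]          # row sums < 1, so spectral radius < 1
I = [[1.0, 0.0], [0.0, 1.0]]

M = [[I[i][j] - Q[i][j] for j in range(2)] for i in range(2)]   # I - Q
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]                     # nonzero
inv = [[M[1][1] / det, -M[0][1] / det],
       [-M[1][0] / det, M[0][0] / det]]

# Neumann series: I + Q + Q^2 + ... converges to (I - Q)^(-1)
series, power = I, I
for _ in range(200):
    power = mat_mul(power, Q)
    series = mat_add(series, power)
max_err = max(abs(series[i][j] - inv[i][j]) for i in range(2) for j in range(2))
```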
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
A definite integral: $\int_{0}^{\pi/2} \frac{\sin x~\mathrm dx}{\sin x+\cos x+ e^x}$ Mathematica can do this integral,
$$\int_{0}^{\pi/2} \frac{\sin x~ \mathrm dx}{\sin x+\cos x+ e^x}\,,$$
the question is: how to do it by hand?
| $$\int_{0}^{\pi/2} \frac{\sin x}{\sin x+\cos x+ e^x}\,dx$$
$$=\int_{0}^{\pi/2} \frac{e^{-x}\sin x}{e^{-x}(\sin x+\cos x)+ 1}\,dx$$
Put $1+e^{-x}(\sin x+\cos x)=t$. Then, $-2e^{-x}\sin x dx=dt$.
The integral changes to
$$=\int_{2}^{1+e^{-\pi/2}} \frac{-1}{2t}dt$$
$$=\frac{1}{2}\ln\left(\frac{2}{1+e^{-\pi/2}}\right)$$
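A numeric cross-check of this closed form with composite Simpson's rule (Python):

```python
from math import sin, cos, exp, pi, log

def f(x):
    return sin(x) / (sin(x) + cos(x) + exp(x))

def simpson(g, a, b, n=2000):        # composite Simpson's rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

numeric = simpson(f, 0.0, pi / 2)
closed_form = 0.5 * log(2 / (1 + exp(-pi / 2)))   # about 0.2522
```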
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Dimension of a subspace of $2\times2$ Matrices The question is asking to find the dimension of the subspace $W$, where, $V = M_{2,2}$,
$$
W = \{A \in V: AB= BA\}
$$
where
$$B=\begin{bmatrix}1&2\\3&4\\\end{bmatrix}$$
I defined an arbitrary matrix $A$ which contains the entries $a,b,c,d$. Then I considered the equailty and multiplied the matrices in both sides then got:
$$\begin{bmatrix}a+3b&2a+4b\\c+3d&2c+4d\\\end{bmatrix} = \begin{bmatrix}a+2c&b+2d\\3a+4c&3b+4d\\\end{bmatrix}$$
Then I set:
1)$a+3b = a+2c$
2) $2a+4b = b+2d$
3) $c+3d = 3a+4c$
4) $2c+4d = 3b+4d$
The problem is that things got complicated to find $a,b,c$ and $d$, which allow us to find the dimension! What is the next step?
| Your equations give
\begin{align*}
3b&=2c\\
a+c&=d\\
\end{align*}
Meaning that once $a$ and $c$ are known you can deduce $b$ an $d$.
Consequently the matrix $A$ has the following form
\begin{pmatrix}
a & 2c/3 \\
c & a+c
\end{pmatrix}
Can you conclude from there ?
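Yes: the free parameters are $a$ and $c$, so $\dim W = 2$. A quick check with exact rationals (Python) that every matrix of the form above commutes with $B$:

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[F(1), F(2)], [F(3), F(4)]]

def make_A(a, c):
    # the general solution found above: [[a, 2c/3], [c, a+c]]
    return [[a, F(2, 3) * c], [c, a + c]]

samples = [(F(1), F(0)), (F(0), F(1)), (F(2), F(-3)), (F(5, 7), F(1, 2))]
all_commute = all(mul(make_A(a, c), B) == mul(B, make_A(a, c))
                  for a, c in samples)
dim_W = 2   # basis: make_A(1, 0) = I and make_A(0, 1) = [[0, 2/3], [1, 1]]
```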
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $\tan x=3$, then what is the value of ${3\cos{2x}-2\sin{2x}\over4\sin{2x}+5\cos{2x}}$?
If $\tan x=3$, then what is the value of
$${3\cos{2x}-2\sin{2x}\over4\sin{2x}+5\cos{2x}}$$
So what I did was change all the $\sin{2x}$ and $\cos{2x}$ with double angle formulas, getting
$${3\cos^2{x}-3\sin^2{x}-4\sin{x}\cos{x}\over5\cos^2{x}-5\sin^2{x}+8\sin{x}\cos{x}}$$
Now I thought of changing the top part to $\sin{x}$ and bottom part to $\cos{x}$ hoping to somehow get $\tan{x}$ in this way, but I ultimately got just
$${3-6\sin^2{x}-4\sin{x}\cos{x}\over-5+10\cos^2{x}+8\sin{x}\cos{x}}$$
Had really no ideas what to either do after this, seems pretty unusable to me. Was there possibly a mistake I made in the transformation or maybe another way of solving this?
| The answer is $\displaystyle \frac 94$.
Alternative method.
I like this half angle identity: $\displaystyle \tan \frac 12 y = \frac{\sin y}{1 + \cos y}$
So $\displaystyle 3 = \tan x = \frac{\sin 2x}{1 + \cos 2x}$, giving $\displaystyle \sin 2x = 3 + 3\cos 2x$.
Substituting that into the original expression transforms it into:
$\displaystyle \frac{-3\cos 2x - 6}{17\cos 2x + 12}$
and using $\displaystyle \cos 2x = 2\cos^2x - 1$, that can be re-written:
$\displaystyle \frac{-6\cos^2x - 3}{34\cos^2x -5}$
Finally, going back to $\tan x = 3$, note that $\displaystyle 1+ \tan^2x = 10$, so $\displaystyle \sec^2x = 10$, so $\displaystyle \cos^2x = \frac 1{10}$.
Putting that into the expression yields the value $\displaystyle \frac 94$, which is the required answer.
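A one-line numeric confirmation (Python):

```python
from math import atan, sin, cos

x = atan(3)                      # any x with tan x = 3 works
value = (3 * cos(2 * x) - 2 * sin(2 * x)) / (4 * sin(2 * x) + 5 * cos(2 * x))
# value is 9/4 = 2.25 up to rounding
```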
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Analytical expression for the shape of the rounded pyramid. I'm searching for an analytical equation approximating the pyramid with rounded tip.
In particular, I have a pyramid whose base is an equilateral triangle with side "a", and height "h". The tip of the pyramid is rounded by some radius "r". (The point is to model the shape of the Berkovich indenter).
Could you suggest any (possibly simple) function that could approximate the shape of this pyramid?
| Since I was not satisfied with existing formulations, I decided to approximate the pyramid with the rounded cone:
where a describes the slope of the cone's side and delta is the rounding parameter. Inside the square root it is responsible for the roundness; outside, for placing the tip of the approximated indenter at the origin (0,0,0).
This is a very crude approximation, however I have shown that it led to satisfactory results in my case.
If you ever find this information useful and use it in your publication, please cite:
K. Frydrych, CRYSTAL PLASTICITY FINITE ELEMENT SIMULATIONS OF THE INDENTATION TEST, Computer Methods in Materials Science, 19(2), 41-49, 2019.
https://www.researchgate.net/publication/337768801_CRYSTAL_PLASTICITY_FINITE_ELEMENT_SIMULATIONS_OF_THE_INDENTATION_TEST
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Radical equation solve $\sqrt{3x+7}-\sqrt{x+2}=1$. Cannot arrive at solution $x=-2$ I am to solve $\sqrt{3x+7}-\sqrt{x+2}=1$ and the solution is provided as -2.
Since this is a radical equation with 2 radicals, I followed suggested textbook steps of isolating each radical and squaring:
$\sqrt{3x+7}-\sqrt{x+2}=1$
$3x+7=(1-\sqrt{x+2})^2$ # square both sides
(Use perfect square formula on right hand side $a^2-2ab+b^2$)
$3x+7=1^2-2(1)(-\sqrt{x+2})+x+2$ # lhs radical is removed, rhs use perfect square formula
$3x+7=1+2(\sqrt{x+2})+x+2$ # simplify
$3x+7=x+3+2\sqrt{x+2}$ # keep simplifying
$2x+4=2\sqrt{x+2}$ # simplify across both sides
$(2x+4)^2=(2\sqrt{x+2})^2$
$4x^2+16x+16=4(x+2)$ # now that radical on rhs is isolated, square both sides again
$4x^2+12x+14=0$ # a quadratic formula I can use to solve for x
For use in the quadratic formula, my parameters are: a=4, b=12 and c=14:
$x=\frac{-12\pm\sqrt{12^2-(4)(4)(14)}}{2(4)}$
$x=\frac{-12\pm{\sqrt{(144-224)}}}{8}$
$x=\frac{-12\pm{\sqrt{-80}}}{8}$
$x=\frac{-12\pm{i\sqrt{16}*i\sqrt{5}}}{8}$
$x=\frac{-12\pm{4i*i\sqrt{5}}}{8}$
$x=\frac{-12\pm{-4\sqrt{5}}}{8}$ # since $4i\cdot i\sqrt{5}=-4\sqrt{5}$, as $i^2=-1$
This is as far as I get:
$\frac{-12}{8}\pm\frac{4\sqrt{5}}{8}$
I must have gone off course somewhere further up since the solution is provided as x=-2.
How can I arrive at -2?
| The big error is that $4x^2+16x+16=4(x+2)$ is the same as $4x^2+12x+8=0.$ You somehow got $4x^2+12x+14=0.$ Did you treat $4(x+2)$ as the same as $4x+2?$ The equation $4x^2+12x+8=0$ has $x=-1$ and $x=-2$ as roots.
There's an earlier error where you write: $3x+7=(1-\sqrt{x+2})^2.$ The right side should be $(1+\sqrt{x+2})^2,$ but your later expansion somehow yields the correct value - so two errors led to a correct expression.
It's easier, when you have $2x+4=2\sqrt{x+2},$ if you divide by $2$ before squaring, and get: $x+2=\sqrt{x+2}.$
One quick way to simplify it from the start is to set $y=x+2.$ Then $3y+1=3x+7.$ So you have a slightly simpler equation:
$$\sqrt{3y+1}-\sqrt{y}=1\\
\sqrt{3y+1}=1+\sqrt{y}\\
3y+1 = 1+2\sqrt{y}+y\\
2y=2\sqrt{y}\\
y=\sqrt{y}\\
y^2=y\\
y=0,1$$
You have to go back and check each $y$ in the original equation, then take $x=y-2$ for each solution $y.$
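Plugging both candidates back into the original equation (squaring can introduce extraneous roots, so this check is essential) shows they are genuine solutions — a Python check:

```python
from math import sqrt, isclose

def lhs(x):
    # left-hand side of the original equation sqrt(3x+7) - sqrt(x+2)
    return sqrt(3 * x + 7) - sqrt(x + 2)

ok_minus2 = isclose(lhs(-2), 1.0)   # sqrt(1) - sqrt(0) = 1
ok_minus1 = isclose(lhs(-1), 1.0)   # sqrt(4) - sqrt(1) = 1
```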
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3273876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 9,
"answer_id": 6
} |
Integral from infinity to infinity My physics professor today wrote on the blackboard:
$$ \int_{\infty}^{\infty} f(x) dx = 0 $$
for every function $f$.
And the proof he gave was:
$$ \int_{\infty}^{\infty} f(x) dx = \int_{\infty}^{a} f(x) dx + \int_{a}^{\infty} f(x)dx = - \int_{a}^{\infty} f(x) dx + \int_{a}^{\infty}f(x)dx = 0$$
However I'm still not convinced, for me an integral from infinity to infinity has no meaning. Therefore, what I'm asking is: does the above equations make sense? If not, are there cases where they do make sense? I'm thinking about functions that converge to 0 in $+\infty$.
EDIT: Actually, the function f considered was a density, i.e.:
$$ \int_{-\infty}^{+\infty} f(x)dx = 1 $$
and $f(x) \geq 0$ for all $x$.
| This is not necessarily true. Take the following example;
$$\int_a^{2a}\frac1x\mathrm{d}x=[\ln{|x|}]_a^{2a}=\ln{(2)}$$
If we take $a\to\infty$ then the integral becomes
$$\int_\infty^\infty\frac1x\mathrm{d}x=\ln{(2)}$$
as the integral is constant for all $a\neq0$. What I guess your professor meant was that
$$\lim_{a\to\infty}\int_a^a f(x)\mathrm{d}x=0$$
which is trivially true as the LHS is constantly zero.
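The first display is easy to confirm numerically with a midpoint rule: $\int_a^{2a}x^{-1}\,dx=\ln 2$ at every scale of $a$ (Python):

```python
from math import log

def integral_inv_x(a, n=100000):
    # midpoint rule for the integral of 1/x over [a, 2a]
    h = a / n
    return sum(h / (a + (i + 0.5) * h) for i in range(n))

values = [integral_inv_x(a) for a in (1.0, 10.0, 1e6)]   # each is about ln 2
```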
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 1
} |
How to prove this integration is not zero Let {$f_{n}$}$_{n=1}^{\infty}$ be a sequence of non-zero elements of $L^{2}[0,1]$. Prove that there is a function $g\in L^{2}[0,1]$ such that for all $n\ge1$ we have $\int_{0}^{1}g(x)f_{n}(x)dx\neq0$.
I tried assuming there is no such function $g$ and looking for a contradiction, but I couldn't find one. I can show all $f_{n}=0 \quad a.e.$
Maybe my idea is not correct and maybe we can construct a function $g$ satisfying $\int_{0}^{1}g(x)f_{n}(x)dx\neq0$.
| Note that
$$U_n = \left\{ f\in L^2([0,1]) : \int_0^1 f f_n dx\neq 0 \right\}$$
is a nonempty (since $f_n$ is nonzero) open set which is dense in $L^2 ([0,1])$. The Baire Category theorem says that
$$ \bigcap U_n$$
is nonempty. Thus there is $g\in L^2([0,1])$ so that
$$ \int_0^1 g f_n dx\neq 0$$
for all $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to integrate $\int_0^{\infty} \frac{\sin(x^{-p})}{x^2}dx$ where $p>1$? How to integrate the following integral:
$$\int_0^{\infty} \dfrac{\sin(x^{-p})}{x^2} dx, p>1 ?$$
Thank you for any help.
Attempt: I have tried a simple sub: $x^{-p} =u \implies du=-p\,x^{-p-1}\,dx.$
$$\int_0^{\infty} \dfrac{\sin(x^{-p})}{x^2} dx = \dfrac{-1}{p}\int_{\infty}^{0} \sin(u)x^{p-1}du .$$
However, I can not proceed further with this sub.
From the change of variable $u=x^{-p}$ (which reverses the limits of integration, cancelling the minus sign), the integral becomes
$$
\frac{1}{p}\int_{0}^\infty \sin(u)u^{1/p-1}du=\frac{1}{p}\mathcal{M}\{\sin(u)\}(1/p),
$$
where $\mathcal{M}$ denotes the Mellin transform.
Since $0<1/p<1$, one can easily infer from http://mathworld.wolfram.com/MellinTransform.html
that
$$
\frac{1}{p}\Gamma\left(\frac{1}{p}\right)\sin\left(\frac{\pi}{2p}\right)=\Gamma\left(\frac{1}{p}+1\right)\sin\left(\frac{\pi}{2p}\right)
$$
is the value of the aforementioned integral.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculating inverse trigonometric values without a calculator (AEA 2016)
Find the value of
$$\arccos(1/\sqrt2) + \arcsin (1/3) + 2 \arctan(1/\sqrt2).$$
Give your answer as a multiple of $\pi$.
This was the least well answered question on Edexcel's Advanced Extension Award annual paper in 2016. The next paper is tomorrow and I will be sitting it.
The examiner reports tells us that, out of the 7 marks that this question is worth, almost everyone got 2 marks.
*
*$\arccos (1/\sqrt2) = \pi/4$
*$\sin x = 1/3$
However, the rest of the mark scheme's solution is almost incomprehensible and doesn't explain anything at all well. There are no worked solutions online, so could someone please tell me how to come to the correct solution of $3\pi/4$?
| $$\sin x=\frac13\implies \tan x=\frac{\frac13}{\sqrt{1-(\frac13)^2}}=\frac1{2\sqrt2}\implies
x=\arctan\frac1{2\sqrt2}\tag1$$
$$\tan(y/2)=\frac1{\sqrt2}\implies\tan y=\frac{2\frac1{\sqrt2}}{1-(\frac1{\sqrt2})^2}=2\sqrt2\implies y=\arctan(2\sqrt2)\tag2$$
$$(1)\& (2) \implies x+y=\frac\pi2.$$
Somewhat shorter approach:
$$
\cos y=\frac {1-\tan^2\frac y2}{1+\tan^2\frac y2}=\frac {1-\frac 12}{1+\frac 12}=\frac13.$$
$$\arcsin\frac13+\arccos\frac13=\frac\pi2.$$
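A numeric confirmation of the final value (Python):

```python
from math import acos, asin, atan, sqrt, pi

total = acos(1 / sqrt(2)) + asin(1 / 3) + 2 * atan(1 / sqrt(2))
# total is 3*pi/4, about 2.35619
```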
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$p^2 - 2 q^2 = 5039$ for primes $p, q$
Are there primes $p$ and $q$ for which $p^2 - 2 q^2 = 5039$?
This is the least prime $r$ for which I don't know whether $p^2 - 2 q^2 = r$ has a solution in primes.
The solutions of the Pell-type equation $x^2 - 2 y^2 = 5039$ are $x_n, y_n$ given by the recurrences
$x_{n+4} = 6 x_{n+2} - x_n$ with initial values $x_0 = 71, x_1 = 209, x_2 = 217, x_3 = 1183$ and
$y_{n+4} = 6 y_{n+2} - y_n$ with initial values $y_{{0}}=1,y_{{1}}=139,y_{{2}}=145,y_{{3}}=835$. Both $x_n$ and $y_n$ have lots of prime values. I haven't found any cases where they are both prime for the same $n$ (having tested up to $n=10000$). There are some "near misses", e.g. neither $x_{179}$ nor $y_{179}$ is prime but they have no small factors. Thus there doesn't appear to be any modular reason for solutions not to exist.
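These recurrences are easy to reproduce by machine. Below is an illustrative Python sketch (the naive `is_prime` helper is mine, not from the question) that checks the Pell-type identity for every term and repeats the double-prime search on a small range:

```python
def is_prime(m):
    # naive trial division; adequate for the moderate sizes below
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

xs = [71, 209, 217, 1183]
ys = [1, 139, 145, 835]
for n in range(4, 20):
    xs.append(6 * xs[n - 2] - xs[n - 4])
    ys.append(6 * ys[n - 2] - ys[n - 4])

# every term of both interleaved orbits solves x^2 - 2 y^2 = 5039
assert all(x * x - 2 * y * y == 5039 for x, y in zip(xs, ys))
hits = [n for n, (x, y) in enumerate(zip(xs, ys))
        if is_prime(x) and is_prime(y)]
print(hits)  # [] -- no index in this range has both values prime
```

As reported above, no index in this small range (and, per the search described, none up to $n=10000$) makes both values prime.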
Heuristically, since $x_n$ and $y_n$ increase exponentially, each has probability $O(1/n)$ of being prime, so the probability of both being prime
is $O(1/n^2)$, and since $\sum_n 1/n^2 < \infty$, we might expect finitely
many $n$ with both $x_n$ and $y_n$ prime. So maybe there just happen to be none, but there's no way to actually prove that. Still, I thought I'd put this to MSE in the hope that there's something clever that I'm missing.
EDIT: I might mention that in order for $p^2 - 2 q^2 = r$ to have prime solutions, where $r$ is prime, either $q=2$ or $3$ (so $r + 8$ or $r + 18$ is the square of a prime) or $r \equiv 23 \mod 24$. Of the primes $\equiv 23 \mod 24$
less than $10000$, the only ones for which I haven't found prime solutions are
$4079$, $5039$ and $7703$, but I can prove there are no prime solutions for $4079$ and $7703$ (all solutions to $x^2 - 2 y^2 = r$ in those cases have $x$ or $y$ divisible by $5$, $7$ or $11$).
EDIT: See OEIS sequence A308816.
Miracles do happen. For $r = 96431$ the least primes $p$ and $q$ for which $p^2 - 2 q^2 = r$ have $685$ digits each.
| COMMENT.- I do not have access to powerful calculators, but I want to suggest what seems to me a more convenient way to compute solutions, or to show that none exist (I am not sure of this!).
The complementary formulas of the law of quadratic reciprocity allow to say that $2$ is a square module the prime $5039$. In effect, we have
$71^2=4968^2=2$ in $\mathbb F_{5039}$.
Then we have in $\mathbb F_{5039}$ the equations $$p ^ 2 - (71q) ^ 2 =0\\ p ^ 2 - (4968q) ^ 2=0$$
Taking now, for example the equation
$$p+71q=5039t$$ we have the integer solutions
$$\begin{cases}q=5039n+2484p\\t=71n+35p\\n\in\mathbb Z\end{cases}$$
There are severe restrictions that could suggest impossibility. We must have $p$ and $q$ primes and in the zero of $\mathbb F_{5039}$ i.e. $5039\mathbb Z$ the only $n\in\mathbb Z$ to be used at the end is $n=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
} |
Find $ \frac{1}{2^2 -1} + \frac{1}{4^2 -1} + \frac{1}{6^2 -1} + \ldots + \frac{1}{20^2 -1} $
Find the following sum
$$
\frac{1}{2^2 -1} + \frac{1}{4^2 -1} + \frac{1}{6^2 -1} + \ldots + \frac{1}{20^2 -1}
$$
I am not able to find any short trick for it.
Is there any short trick or do we have to simplify and add it?
| Alternatively to the telescoping sum decomposition, there is an easy pattern
$$\frac13$$
$$\frac13+\frac1{15}=\frac25$$
$$\frac13+\frac1{15}+\frac1{35}=\frac37$$
$$\frac13+\frac1{15}+\frac1{35}+\frac1{63}=\frac49$$
$$\cdots$$
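The pattern above (partial sums $\frac{n}{2n+1}$) is easy to confirm with exact rational arithmetic in Python; in particular the requested ten-term sum is $\frac{10}{21}$:

```python
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 11):
    partial += Fraction(1, (2 * n) ** 2 - 1)    # add 1/((2n)^2 - 1)
    assert partial == Fraction(n, 2 * n + 1)    # matches the pattern n/(2n+1)
print(partial)  # 10/21
```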
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Suppose $B_j = \sum_{i=1}^{r} a_{ij} A_i,\ j= 1,2,\ldots,t$. How does showing that the $B_i$'s are dependent prove that $r \geq n$?
I am reading 'Galois Theory' by Emil Artin, and while reading the proof of Theorem $2$ on page $5$, I couldn't grasp the following step:
Now, let $B_1,. . ., B_t$ be any system of vectors in $V$ where $t > r$,
then there exist $a_{ij}$ such that $B_j = \sum_{i=1}^{r} a_{ij}A_i , j=1,2,...,t$, since $A_{i}^{'}s$ form a generating system. If we can show that $B_1, . . ., B_t$ are dependent, this will give us $r \geq n$.
Here, $A_1 , . .., A_m$ are a generating system of a vector space V of dimension $n$, $r$ is the maximum number of independent elements in the generating system.
I don't get how the above argument gives us $r \geq n$. I now add the neccessary definitions :
Definition $1$ : The dimension of a vector space $V$ over a field $F$ is the maximum number of independent elements of $V$.
Definition $2$: A system $A_1, . .., A_m$ of elements in $V$ is called a
generating system of $V$ if each element $A$ of $V$ can be expressed linearly in terms of $A_1, . .., A_m$.
I feel this is something very basic, but at this point I can't make out what I am missing.
| Suppose we can show that $B_1,\ldots,B_t$ are dependent for all choices of $t>r$ and $B_1,\ldots, B_t$. The claim is that we can conclude $r\ge n$.
Indeed, assume $r<n$. Then by definition 1, we can find $t:=n$ independent vectors $B_1,\ldots, B_t$ - and can show that they are dependent!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Writing a matrix as a product of two matrices
Consider the matrix
$$ A = \begin{pmatrix}
0 & y & -x\\
y & y^2 & -xy\\
-x & -xy & x^2
\end{pmatrix}. $$
Is it possible to find matrices $X = X(x)$ and $Y=Y(y)$ such that $A = XY$ (or $A = YX$)?
A possibly unrelated observation of mine is that if we consider the vector $v = \begin{pmatrix}y\\-x\end{pmatrix}$, then we can write $A$ in block form as
$$ A = \begin{pmatrix}
0 & v^t\\
v & vv^t
\end{pmatrix}, $$
which allows us to write
$$ A = \begin{pmatrix}
1 & 1\\
0 & v
\end{pmatrix}\cdot
\begin{pmatrix}
-1 & 0\\
1 & v^t
\end{pmatrix}, $$
but this is not really what I want since now the factors depend on both $x$ and $y$.
EDIT: As suggested in the comments, setting $z = -x$ yields
$$ A = \begin{pmatrix}
0 & y & z\\
y & y^2 & yz\\
z & yz & z^2
\end{pmatrix}. $$
| Because of the symmetry, if it is possible with $A=YX$ then it is possible with $A=XY$ too. So without loss of generality let us assume that we can write $A(x,y)=X(x)Y(y)$ for some matrix-valued functions $X$ and $Y$.
Now setting $x=-1$ we get
$$ X(-1)Y(y)\begin{pmatrix}1-w\\0\\w\end{pmatrix} = \begin{pmatrix} 0 & y & 1 \\ y & y^2 & y \\ 1 & y & 1 \end{pmatrix} \begin{pmatrix}1-w\\0\\w\end{pmatrix} =
\begin{pmatrix} w \\ y \\ 1\end{pmatrix}$$
and these vectors clearly span $\mathbb R^3$ when we allow $y$ and $w$ to vary. So $X(-1)$ must have full rank.
A similar argument from the other side shows that $Y(1)$ must have full rank too.
But $A(-1,1)$ has rank $2$ and cannot be the product of invertible $X(-1)$ and $Y(1)$.
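The last step can be checked concretely: at $x=-1$, $y=1$ two rows of $A$ coincide, so the rank is $2$. A small Python sketch with exact fractions (the `rank` helper is an illustrative Gaussian elimination, not a library routine):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    rows = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def A(x, y):
    return [[0, y, -x], [y, y * y, -x * y], [-x, -x * y, x * x]]

print(rank(A(-1, 1)))  # 2
```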
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Expanding random N(0,1) variable
If I have an expression
$$\frac{1}{1+\sigma m(z/l)}$$
where $m(z/l)$ is a random $N(0,1)$ variable, $\sigma$ is dimensionless, can I rewrite this via an expansion to bring up the random variable on the numerator?
| If you want to use $$\frac1{1+x}=1-x+x^2-x^3+\cdots$$
then you need to remember that this only works for $|x|\lt 1$
and that a random variable with a normal distribution has a positive probability of falling outside this interval.
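A quick Python illustration of this point (names are mine): the truncated series tracks $\frac1{1+x}$ closely for $|x|<1$, while the partial sums blow up once $|x|>1$:

```python
def partial_sum(x, terms):
    # 1 - x + x^2 - x^3 + ... truncated after `terms` terms
    return sum((-x) ** k for k in range(terms))

inside = abs(partial_sum(0.4, 50) - 1 / 1.4)   # tiny: |x| < 1, series converges
outside = abs(partial_sum(1.5, 50))            # huge: |x| > 1, series diverges
print(inside, outside)
```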
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3274944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What values of $\alpha$ make this improper integral convergent?
I'm having trouble discussing what values of $\alpha$ make
$$\int_{0}^{+\infty}\frac{1-\cos{x}}{x^{\alpha}}dx$$
convergent. The problem explicits that $\alpha \gt 1$.
I've seen that the integral can be written like
$$\lim_{a \to 0^+}\int_{a}^{b}\frac{1-\cos{x}}{x^{\alpha}}dx \,\,\,+ \,\,\, \lim_{c \to +\infty}\int_{b}^{c}\frac{1-\cos{x}}{x^{\alpha}}dx,$$
where b $\gt 0$. From this point I don't have a clue on how to continue. Can someone give me a hand? Thank you so much.
| Regarding your first integral, for $x$ near zero, we know $1-\cos x \approx x^2/2$ so your integrand is close to $x^2/x^\alpha$. This puts a condition on $\alpha$ that I will let you figure out.
In your second integral, the first term of the integrand is $1/x^\alpha$. I think you can see that this is divergent for the "wrong" $\alpha$. There is no hope that the second term will help cancel out the divergence because $\cos x$ oscillates around zero and will not produce a value large enough to "help". So $\alpha$ has to be such that the integral of $1/x^\alpha$ will not diverge.
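As a concrete sanity check, $\alpha=2$ satisfies both conditions and the integral then converges (its value is known to be $\frac\pi2$, since $1-\cos x = 2\sin^2(x/2)$). A crude midpoint-rule estimate in Python, truncating the upper limit at $B$ (which introduces an error of order $1/B$):

```python
import math

def integrand(x):
    # for alpha = 2 the integrand tends to 1/2 as x -> 0, so no singularity there
    return (1 - math.cos(x)) / (x * x)

B, n = 200.0, 200_000
h = B / n
est = h * sum(integrand((k + 0.5) * h) for k in range(n))
print(est, math.pi / 2)  # close; the truncated tail accounts for roughly 1/B
```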
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
How to find the week day of (any) given date?
How can one find the week day of any given date?
Say we need to know in which week-day was June $25,2019$?
| To determine the week-day of a given date, we need to:
*
*find out whether the given year is "common" or "leap".
*know $\mod(a,b)$.
*know $\left \lfloor a \right \rfloor$.
To find out whether the given year is "common" or "leap", recall the standard rule: a year is a leap year when it is divisible by $4$, except for century years, which are leap only when divisible by $400$; every other year is a common year.
$\mod(a,b)$ means the remainder when dividing $a$ by $b$. For example, when we divide $17$ by $3$, we get $5$ and the remainder is $2$. Therefore, $\mod(17,3)=2$.
For convenience, $\mod(a,100)=$ the number formed by the last two digits of $a$. For example, $\mod(13527,100)=27$.
$\left \lfloor a \right \rfloor$ means the nearest integer less than or equal to $a$. For example,
$\left \lfloor 6.97 \right \rfloor=6$, $\left \lfloor -2.8 \right \rfloor=-3$, $\left \lfloor \frac{20}{4} \right \rfloor=5$.
Suppose that the given date is of the form: MONTH $d, y$
We have to calculate the following:
*
*$A=\mod(y,100)$
*$B=\left \lfloor \frac{A}{4} \right \rfloor$
*$C=\frac{y-A}{100}$
*$D = d$ which is the given date.
*$E =\left \lfloor \frac{C}{4} \right \rfloor$
*$F=0,3,2,5,0,3,5,1,4,6,2,$ or $4$ depending on the given month (Jan, Feb, March, ..., or Dec) respectively.
*$G=\left\{\begin{matrix}
0 & \text{if the month is not Jan nor Feb}\\
1 & \text{for Jan or Feb in a common year}\\
2 & \text{for Jan or Feb in a leap year}
\end{matrix}\right.$
*$H=\mod(A+B-2C+D+E+F-G,7)$
*The week-day depends on the $H$ value,
$0$ for Sunday, $1$ for Monday, $2$ for Tuesday, $3$ for Wednesday, $4$ for Thursday, $5$ for Friday, and $6$ for Saturday.
Consider the example, June $25,2019$
Since $2019$ is not divisible by $4$, then $2019$ is a common year.
$A=\mod(2019,100)=19$
$B= \left \lfloor \frac{19}{4} \right \rfloor=4$
$C=\frac{2019-19}{100}=20$
$D= 25$ as given.
$E= \left \lfloor \frac{20}{4} \right \rfloor=5$
$F=3$ for June.
$G=0$ since the given month is neither Jan nor Feb.
$H=\mod(19+4-2\times20+25+5+3-0,7)=\mod(16,7)=2=$ Tuesday.
I noticed that many people ask about this. I posted it this way because I think it is the simplest method for any given date, whatever the century.
There are simpler ways, but they work only for years between 2000 and 2099, so this is a general method.
If you know any simpler way than this, please leave a comment or just post it as an answer. Thanks!
This may be a useful page for you: https://en.wikipedia.org/wiki/Determination_of_the_day_of_the_week
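The steps above translate directly into Python (function and variable names are mine, mirroring steps 1–9; the leap-year test uses the standard Gregorian rule; $0=$ Sunday, ..., $6=$ Saturday):

```python
def weekday(month, d, y):
    # month: 1..12; returns 0 = Sunday, ..., 6 = Saturday
    A = y % 100                       # last two digits of the year
    B = A // 4
    C = (y - A) // 100                # the century part
    E = C // 4
    F = [0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4][month - 1]
    leap = (y % 4 == 0 and y % 100 != 0) or y % 400 == 0
    G = 0 if month > 2 else (2 if leap else 1)
    return (A + B - 2 * C + d + E + F - G) % 7

print(weekday(6, 25, 2019))  # 2, i.e. Tuesday
```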
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
absolutely continuous and increasing function
I'm having a really hard time trying to solve this problem, which appears in Royden's book. Any help would be extremely appreciated.
Let $f:[a,b]\rightarrow \mathbb{R}$ an absolutely continuous and increasing function. Show that $\lambda(f(A)) = \int_{A}f'd\lambda$ for all $A\subset [a,b]$.
So far I've shown that $f$ sends null sets to null sets and $G_{\delta}$ sets to $G_{\delta}$ sets. But I'm pretty lost in the handling of the integrals D:
Any help would be extremely appreciated! :)
| It suffices to prove this for Borel sets. Without loss of generality, $a=0,\ b=1.$ Set $\mathscr S = \{A\in\mathscr B([0,1]) : \lambda(f(A)) = \int_A f' \}.$ Absolute continuity of $f$ implies that $f(b)-f(a)=\int^b_af'$ for all $0 \le a\le b\le 1$ and this in turn implies that $\mathscr S$ contains the intervals, and unions and intersections of finitely many intervals. Let $\{A_n\}$ be an increasing sequence of sets in $\mathscr S$ and set $A=\bigcup_n A_n.$ Then, $\lim \int_{A_n} f'=\int_Af'$ by the monotone convergence theorem. On the other hand, $\lim \int_{A_n} f'=\lim \lambda(f(A_n))=\lambda(f(A))$ because $\{f(A_n)\}$ is increasing. Thus, $A\in \mathscr S.$ A similar argument shows that $\mathscr S$ is closed under decreasing sequences, and now by the monotone class theorem, $\mathscr S=\mathscr B([0,1]).$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof of Bound for Growth of Divergent Trajectory in $3x+1$ Problem
In this paper, Lagarias makes the following claim in section 2.7 (Do divergent trajectories exist?).
Context
$$T(x) = \left\{ \begin{array}{rl} \dfrac{3x + 1}{2}, & 2 \nmid x \\ \dfrac{x}{2}, & 2 \mid x \end{array} \right.$$
$$\begin{align*} \tag{2.30} \lim_{k \to \infty} |T^{(k)}(n_0)| = \infty \end{align*}$$
Claim
If a divergent trajectory $\{T^{(k)}(n_0) : 0 \leq k < \infty\}$ exists, it cannot be equidistributed $\pmod{2}$. Indeed if one defines
$N^*(k) = |\{j : j \leq k \mathrm{\ and\ } T^{(j)}(n_0) \equiv 1 \pmod{2}\}|$,
then it can be proved that the condition (2.30) implies that
$$\begin{align*} \tag{2.31} \liminf_{k \to \infty} \dfrac{N^*(k)}{k} \geq (\log_2 3)^{-1} \approx .63097 \end{align*}$$
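For concreteness, the map $T$ and the counting function $N^*$ are easy to put into code; the following Python snippet is a small illustration (not part of any proof):

```python
def T(x):
    # the accelerated Collatz map from (2.30)'s context
    return (3 * x + 1) // 2 if x % 2 else x // 2

def N_star(n0, k):
    # number of indices j <= k with T^(j)(n0) odd (j starting at 0)
    x, count = n0, 0
    for j in range(k + 1):
        if x % 2 == 1:
            count += 1
        x = T(x)
    return count

# sample trajectory from n0 = 7
traj = [7]
for _ in range(11):
    traj.append(T(traj[-1]))
print(traj)  # [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```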
Question
How can this statement be proved?
Difficulty
It seems like the author may be ignoring the $+1$ term under the assumption that the factors will dominate. (I've seen this assumption made often for heuristic arguments for the truth of the Collatz conjecture.) I don't see how such an assumption can be justified.
Given any length $n$ sequence of $n - k$ zeros and $k$ ones, we can find an $x \in \mathbb{N}$ such that
$$T^n(x) = \dfrac{3^k x + m}{2^n}$$
where
$$3^k - 2^k \leq m \leq 2^{n-k}(3^k - 2^k)$$
Now, suppose, for example, $n = 2k$. Then, we have the bound
$$T^n(x) \leq \dfrac{3^k x + 2^{n-k}(3^k - 2^k)}{2^n} = \left(\dfrac{3}{4}\right)^k x + \left(\dfrac{3}{2}\right)^k - 1$$
and the exponential "$+1$" term dominates for large $n$. Now, of course, $m$ won't always be as large as possible, but even if we look at "random" $m$, that only introduces a constant factor in front of the exponential.
Additional Questions
Is the proof of this statement difficult? Is that why the author doesn't include it? Is there a paper containing a proof?
| I contacted the author, and he was kind enough to write up a proof for me. I have attempted to simplify his proof for presentation here. I also use some notation without explanation to reduce clutter; the meanings should be clear. The trick is to use an apparently well known result from lattice theory.
Proposition 1 (Lattice Theory rotation trick)
If $b_1, b_2, \ldots, b_\ell$ are real numbers such that
\begin{align*}
b_1 + b_2 + \cdots + b_\ell = r \ell
\end{align*}
($\ell \geq 2$) then the lattice path
\begin{align*}
(0, 0), (1, b_1), (2, b_1 + b_2), \ldots, (\ell, b_1 + b_2 + \cdots + b_\ell) = (\ell, r \ell)
\end{align*}
has a cyclic forward shift by some $k$ with $0 \leq k \leq \ell - 1$
\begin{align*}
(0, 0), (1, b_{k+1}), (2, b_{k+1} + b_{k+2}), \ldots, (\ell, b_{k+1} + b_{k+2} + \cdots + b_\ell + b_1 + b_2 + \cdots + b_k) = (\ell, r \ell)
\end{align*}
so that
\begin{align*}
b_{\overline{k+1}} + b_{\overline{k+2}} + \cdots + b_{\overline{k+j}} \leq jr
\end{align*}
for all $1 \leq j \leq \ell$, where $\overline{k+i} \equiv k + i \pmod{\ell}$ and $1 \leq \overline{k+i} \leq \ell$.
Proof
Let $k$ be the smallest index such that the point $(k, b_1 + b_2 + \cdots + b_k)$ is not above the line $y = rx$ and the distance between this point and the line is maximum. Notice that $k \leq \ell - 1$ by the extreme value theorem.
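Proposition 1 can also be stress-tested by brute force over random sequences; the following illustrative Python sketch confirms that a valid cyclic shift always exists (a small tolerance handles floating-point equality):

```python
import random

def good_shift_exists(b, eps=1e-9):
    l = len(b)
    r = sum(b) / l                      # so that b_1 + ... + b_l = r * l
    for k in range(l):
        rot = b[k:] + b[:k]             # cyclic forward shift by k
        prefix, ok = 0.0, True
        for j in range(1, l + 1):
            prefix += rot[j - 1]
            if prefix > j * r + eps:    # every partial sum must stay <= j*r
                ok = False
                break
        if ok:
            return True
    return False

random.seed(0)
ok = all(good_shift_exists([random.uniform(-5, 5)
                            for _ in range(random.randint(2, 10))])
         for _ in range(200))
print(ok)  # True, as Proposition 1 guarantees
```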
Corollary 2
If $n_1 < n_2 < \ldots < n_k$ and $r = n_k/k$, then there is some $\hat{k}$ with $1 \leq \hat{k} \leq k - 1$ such that
\begin{align*}
n_{k-\hat{k}+1} - n_j \geq (k - \hat{k} + 1 - j) r
\end{align*}
for all $1 \leq j \leq k - \hat{k}$.
Proof
Apply Propositon 1 to the sequence
\begin{align*}
b_1 = n_k - n_{k-1}, b_2 = n_{k-1} - n_{k-2}, \ldots, b_{k-1} = n_2 - n_1, b_k = n_1
\end{align*}
Lemma 3 (bound on additive term)
For odd $x \in \mathbb{N}$, suppose that
\begin{align*}
T^n(x) = \dfrac{3^k}{2^n}x + e(x, k)\;\;\;\;\; e(x, k) = \sum_{i=0}^{k-1} \dfrac{3^i}{2^{n_k - n_{k-1-i}}}
\end{align*}
where $r = n/k \geq \log_2 3$. Then there is a $1 \leq \hat{k} < k$ such that
\begin{align*}
e(x,k-\hat{k}) \leq \dfrac{1}{2^r - 3}
\end{align*}
Proof
Let $r = n/k = (\log_2 3)(1 + \delta)$, where $\delta > 0$, and apply Corollary 2 to $0 = n_0 < n_1 < \cdots < n_{k-1}$ to find an index $\hat{k}$ such that $1 \leq \hat{k} < k$ and
\begin{align*}
n_{k-\hat{k}} - n_{j-1} \geq (k - \hat{k} - j + 1)r
\end{align*}
for all $1 \leq j \leq k - \hat{k}$. Then,
\begin{align*}
2^{n_{k-\hat{k}} - n_{i-1}} \geq 2^{(k-\hat{k}-i+1)r} = 3^{(k-\hat{k}-i+1)(1+\delta)}
\end{align*}
and
\begin{align*}
e(x, k - \hat{k}) & = \sum_{i=1}^{k-\hat{k}} \dfrac{3^{k-\hat{k}-i}}{2^{n_{k-\hat{k}}-n_{i-1}}} \\
& \leq \sum_{i=1}^{k-\hat{k}} \dfrac{3^{k-\hat{k}-i}}{3^{(k-\hat{k}-i+1)(1 + \delta)}} \\
& = \dfrac{1}{3} \sum_{i=1}^{k-\hat{k}} \dfrac{1}{3^{(k-\hat{k}-i+1)\delta}} \\
& \leq \dfrac{1}{3^{1+\delta} - 3} \\
& = \dfrac{1}{2^r - 3}
\end{align*}
Remark
The key fact about Lemma 3 is that the bound is independent of both $x$ and $e(x,k)$, depending only on the value of $r = n/k$. It achieves this by making the choice $\hat{k}$ that depends on $x$ and showing that such a choice must exist for every $x$ with $r \geq \log_2 3$. This is the ``trick."
Theorem 4
If $x, T(x), T^2(x), \ldots$ is a divergent trajectory with
\begin{align*}
k(x, n) = |\{j : 0 \leq j < n \text{ and } T^j(x) \equiv 1 \pmod{2}\}|
\end{align*}
then
\begin{align*}
\liminf_{n \to \infty} \dfrac{k(x, n)}{n} \geq \log_3 2
\end{align*}
Proof
Since $x$ has a divergent trajectory, there must be a sequence $y_0 < y_1 < y_2 < \cdots$ such that $y_j = T^{n_j}(x)$, for some natural numbers $n_0 < n_1 < n_2 < \cdots$, and such that $T^n(y_j) > y_j$ for all $n \in \mathbb{N}$. For each (fixed) $y_j$, we have
\begin{align*}
\liminf_{n \to \infty} \dfrac{k(x, n)}{n} = \liminf_{n \to \infty} \dfrac{k(x, n) - k(x, n_j)}{n - n_j} = \liminf_{n \to \infty} \dfrac{k(y_j, n)}{n}
\end{align*}
Now, suppose that
\begin{align*}
\liminf_{n \to \infty} \dfrac{k(x, n)}{n} < \log_3 2
\end{align*}
Then, there is some constant $c$ such that
\begin{align*}
\dfrac{k(x, n)}{n} \leq \dfrac{1}{c} < \log_3 2
\end{align*}
infinitely often, and, in particular, for each $y_j$, there is always an $n$ such that
\begin{align*}
\dfrac{k(y_j, n)}{n} \leq \dfrac{1}{c} < \log_3 2
\end{align*}
Let $d = c - \log_2 3 > 0$. Then, $n \geq k(y_j, n)(\log_2 3 + d)$ implies
\begin{align*}
2^n \geq 3^{k(y_j, n)} 2^{kd} \geq 3^{k(y_j, n)} 2^d \iff \dfrac{3^{k(y_j, n)}}{2^n} \leq 2^{-d}
\end{align*}
Let $r = n/k(y_j, n) \geq c$. Applying Lemma 3, we have an $n^*$ such that
\begin{align*}
T^{n^*}(y_j) \leq 2^{-d} y_j + \dfrac{1}{2^r - 3} \leq 2^{-d} y_j + \dfrac{1}{2^c - 3}
\end{align*}
where the values $c$ and $d$ are constant across all the $y_j$ (i.e., for each $y_j$, there is an $n^*$ such that the bound holds). Since $2^{-d} < 1$ and $y_j$ grow unbounded, there is a
\begin{align*}
y_j > \dfrac{1}{(2^c - 3)(1 - 2^{-d})}
\end{align*}
at which point
\begin{align*}
T^{n^*}(y_j) < y_j
\end{align*}
contradicting our construction of the $y_j$ and proving
\begin{align*}
\liminf_{n \to \infty} \dfrac{k(x,n)}{n} \geq \log_3 2
\end{align*}
Remark
If you find any errors in the above, it is almost certainly from my attempt to simplify the proof I was given and not from the author.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Solving a Fractional Equation Involving a Logarithm
I may be being stupid right now, so I've come to Stack to see if this elementary algebra holds up.
Suppose I have the equation $$\frac{\ln x}{(1+ \ln x)^2} = \frac{1}{4}$$
My chosen way to solve this would be to cross multiply and expand brackets, solve the quadratic and get the value of $x$.
However, a student I am helping got this by saying $\ln x = 1$ gives $x = \mathrm{e}$ and at $x= \mathrm{e}$, the denominator $(1+ \ln x)^2 = 4$.
Hence $x= \mathrm{e}$.
Is this approach always correct or is it just luck here?
In general if I have $\frac{f(x)}{g(x)} = \frac{m(x)}{n(x)}$, can I solve it by finding the common solutions of $f(x) = m(x)$ and $g(x) = n(x)$?
[Edit: clearly not because if I have $\frac{x}{x+2} = \frac{1}{x+3}$, then $x= 1$ and $x+2 = x + 3$ don't give you anything..., so why does it work in this case?]
| $$\frac{\ln x}{(1+\ln x)^2}=\frac{1}{4}$$
with $u=\ln(x)$ we get:
$$\frac{u}{(1+u)^2}=\frac{1}{4}$$
$$4u=1+2u+u^2$$
$$u^2-2u+1=0\Rightarrow (u-1)^2=0$$
$$\therefore u=1$$
$$x=e^u\Rightarrow x=e,$$
Since $(u-1)^2=0$ has only the double root $u=1$, this is the only solution.
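A quick numerical check in Python: $x=e$ satisfies the equation, and sampled values of $u=\ln x$ other than $1$ stay strictly below $\frac14$ (consistent with $4u-(1+u)^2=-(u-1)^2\le 0$):

```python
import math

lhs = math.log(math.e) / (1 + math.log(math.e)) ** 2   # value at x = e
below = all(u / (1 + u) ** 2 < 0.25
            for u in [0.0, 0.5, 0.9, 1.1, 2.0, 10.0])  # sample u != 1
print(lhs, below)
```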
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Calculus, water poured into a cone: Why is the derivative non-linear?
If water is poured into a cone at a constant rate and if $\frac {dh}{dt}$ is the rate of change of the depth of the water, I understand that $\frac {dh}{dt}$ is decreasing. However, I don't understand why $\frac {dh}{dt}$ is non-linear. Why can't it be linear?
I am NOT asking whether or not the height function is linear. Many are telling me that the derivative of height is not a constant so thus the height function is not linear, but this is not what I am asking.
This is my mistake, because I had used $h(t)$ originally to denote the derivative of height which is what my book used. Rather I am asking if $\frac {dh}{dt}$ is linear or not and why. It would be nice if someone could better explain what my book is telling me:
At every instant the portion of the cone containing water is similar to the entire cone; the volume is proportional to the cube of the depth of the water. The rate of change of depth (the derivative) is therefore not linear.
| The notion by Mike is nearly correct. Christian points out the correct result without being overly specific.
If you consider the volume of a cone of maximum height $h$ and maximum radius $R$ but only calculate it to the height $h'<h$, you can shuffle the equation to give that height depending on the volume of that fraction of the cone:
$$h'=\left(\frac{3V h^2}{\pi R^2}\right)^{1/3}$$
We assume the tip of the cone pointing downwards and the opening in positive $z$-direction with the water pouring in from above that, so no sideway filling.
If you then put in $\frac{dV}{dt}\cdot t$ as the volume as a function of time, where $\frac{dV}{dt}$ is the constant rate at which water is added to the cone, you get:
$$\frac{dh'}{dt}\sim t^{-2/3}$$
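Since $h'(t)\propto t^{1/3}$, doubling $t$ should multiply $\frac{dh'}{dt}$ by $2^{-2/3}$. A small finite-difference check in Python, with arbitrary illustrative constants:

```python
import math

R, h, dVdt = 2.0, 5.0, 3.0   # arbitrary cone dimensions and fill rate

def depth(t):
    # h'(t) = (3 V h^2 / (pi R^2))^(1/3) with V = (dV/dt) * t
    return (3 * dVdt * t * h ** 2 / (math.pi * R ** 2)) ** (1 / 3)

def d_depth(t, eps=1e-6):
    # central finite difference for dh'/dt
    return (depth(t + eps) - depth(t - eps)) / (2 * eps)

ratio = d_depth(2.0) / d_depth(1.0)
print(ratio, 2 ** (-2 / 3))  # the two agree
```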
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 8,
"answer_id": 5
} |
Is there a positive integer $n \ge 2$ for which $\frac{k}{\pi(k)} = n$ has no solution?
For a given positive integer $n \ge 2$ let $a_n$ be the number of integers $k$ such that $\dfrac{k}{\pi(k)} = n$ where $\pi(x)$ is the prime counting function. The first few values of $(n,a_n)$ are
$$(2, 4), (3, 3), (4, 3), (5, 6), (6, 7), (7, 6), (8, 6), (9, 3), (10, 9), (11, 1), (12, 18),$$
$$(13, 11),(14, 12),(15, 21),(16, 3),(17, 10), (18, 33), (19, 31), (20, 32), (21, 24)$$
In the example above we see that for $n \ge 2$, $a_n \ge 1$. Intuitively this is expected because, by the prime number theorem, $\dfrac{k}{\pi(k)} \sim \log k$. Hence as $k$ increases, the integer part of $\log k$ is expected to run through all positive integers after a certain point. However, we notice that $a_n$ is not strictly increasing and that $a_{11} = 1$. This brings up the question:
Question: Is there a positive integer $n \ge 2$ for which $\dfrac{k}{\pi(k)} = n$ has no solution?
| Not an answer but too long for comment: here is a Mathematica script to look for solutions systematically:
solve[] := Module[
{i, n},
i = 2;
n = 2;
While[True,
While[i/PrimePi[i] != n,
i++
];
Print["solve(", n, ")=", i];
n++;
];
];
The first few solutions for $n=2,3,4,...$
solve(2)=2
solve(3)=27
solve(4)=96
solve(5)=330
solve(6)=1008
solve(7)=3059
solve(8)=8408
solve(9)=23526
solve(10)=64540
solve(11)=175197
solve(12)=480852
solve(13)=1304498
solve(14)=3523884
solve(15)=9557955
solve(16)=25874752
solve(17)=70115412
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Null of Quotient Map is the Subspace
Consider a subspace $U$ of $V$, where $V$ is a finite-dimensional vector space over a generic field, $F$.
Let the quotient map $\pi$ be the linear map $\pi:V\rightarrow V/U$ such that $\pi(v)=v+U$ for $v\in V$. Note $V/U$ is the quotient space such that $V/U=\{v+U:v\in V\}$.
$\textbf{My Question:}$
*
*Why is $\operatorname{null}\pi=U$?
By the definition, $\operatorname{null}\pi=\{v\in V:\pi(v)=0\}$. So does this mean that, for any $w\in U$, $\pi(w)=w+U=0$? How?
*Why is the representation of an affine subset parallel to $U$ not $\textbf{unique}$? In other words, why is it the case, we have for $v,v'\in V$, $v+U=v'+U$?
Reference:
Axler, Sheldon J. $\textit{Linear Algebra Done Right}$, New York: Springer, 2015.
| Remember that addition in the quotient space is defined as
$$(a+U) + (b+U) := (a+b) + U$$
This also corresponds to addition of sets $a+U = \{a+u : u \in U\}$ and $b+U = \{b+u : u \in U\}$ obtaining $(a+b)+U = \{a+b+u : u \in U\}$.
Hence the zero element in $V/U$ is $0+U = \{0+u : u \in U\} =U$.
We have
$$v \in \ker \pi \iff \pi(v) = U \iff v+U = U \iff v \in U$$
For the last equivalence, if $v \in U$ then $v+U = \{v+u : u \in U\} = \{v+(u-v) : u \in U \} = U$. Conversely, if $v+U = U$, then in particular $0 \in U = v+U$ so there exists $u \in U$ such that $0 =v+u$, which implies $v = -u \in U$.
For the second question, consider:
$$v+U= v'+U \iff U= (v+U) - (v'+U) = (v-v')+U \iff v-v' \in U$$
so $v+U \ne v'+U$ as long as $v-v' \notin U$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove $H\circ N = H\cap N$, where $H$ and $N$ are two subgroups of a group $G$.
I was trying to prove another theorem where I thought the above result could be helpful and started trying to prove it.
I am not sure whether the above statement is true or not, but I am unable to prove it. I'll be very thankful if somebody can help me out.
[NOTE: $H\circ N = \{h\circ n|h\in H, n\in N\}$].
| Thanks to Mindlack, the doubt is now clear, and it was very silly.
$H\circ N$ as defined above contains all elements of $H\cup N$, so $H\circ N = H\cap N$ can hold only in the case $H=N$, where it is indeed true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3275945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Derivative when $(\sqrt{x})^2$ is involved?
Problem: If $f(x)=\frac{1}{x^2+1}$ and $g(x)=\sqrt{x}$, then what is the derivative of $f(g(x))$?
My book says the answer is $-(x+1)^{-2}$. This answer seems flawed because $(\sqrt{x})^2$ is being simplified to $x$ when it should really be simplified to $|x|$. If $(\sqrt{x})^2$ is simplified to $|x|$ then the answer I get is instead $-\frac{x}{|x|(|x|+1)^2}$. But the book is probably right so is there something I'm overlooking?
| The domain of $f(g(x))$ is $[0,\infty)$, and on that domain $|x| = x$, so the absolute value changes nothing here.
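Numerically, $f(g(x))=\frac1{x+1}$ on $[0,\infty)$ and its derivative matches the book's $-(x+1)^{-2}$; a quick finite-difference check in Python:

```python
import math

def fg(x):
    return 1 / (math.sqrt(x) ** 2 + 1)   # f(g(x)); only defined for x >= 0

x, eps = 2.3, 1e-6
numeric = (fg(x + eps) - fg(x - eps)) / (2 * eps)   # central difference
book = -(x + 1) ** -2                               # the book's answer
print(numeric, book)
```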
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
$\cos^2 \alpha + \cos^2 \beta + \cos^2 \gamma =1$
Let $\alpha, \beta, \gamma$ be the angles between a generic direction in 3D and the axes $x,y,z$, respectively.
Prove that
$\cos^2 \alpha + \cos^2 \beta + \cos^2 \gamma =1$.
PS: the 2D case is trivial. But I can't prove the 3D case.
| Let $$\vec{v}=[v_1,v_2,v_3]$$ then we get
$$\cos(\alpha)=\frac{\vec{v}\cdot\vec{e_1}}{|\vec{v}|\cdot|\vec{e_1}|}=\frac{{v_1}}{|\vec{v}|}=\frac{v_1}{\sqrt{v_1^2+v_2^2+v_3^2}}$$
and similarly $\cos(\beta)=\dfrac{v_2}{|\vec{v}|}$ and $\cos(\gamma)=\dfrac{v_3}{|\vec{v}|}$, so
$$\cos^2\alpha+\cos^2\beta+\cos^2\gamma=\frac{v_1^2+v_2^2+v_3^2}{v_1^2+v_2^2+v_3^2}=1.$$
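A numerical illustration in Python, for an arbitrary direction vector:

```python
import math

v = (1.0, -2.0, 3.5)                    # any nonzero direction vector
norm = math.sqrt(sum(c * c for c in v))
cosines = [c / norm for c in v]         # cos(alpha), cos(beta), cos(gamma)
total = sum(c * c for c in cosines)
print(total)  # 1.0 up to rounding
```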
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Reference for basic result on algebraic dimension of a complex manifold
In his book Complex Geometry, Huybrechts states and proves a classical theorem of Siegel that a compact complex manifold of complex dimension $n$ has algebraic dimension at most $n$. Unfortunately, I don't understand his proof, so I'd like another reference - but it's proving surprisingly difficult to turn up. I can't find it in Griffiths-Harris or Voisin. Did I just miss it?
| Try Shafarevich's Basic Algebraic Geometry II: Schemes and Complex Manifolds. The result you're looking for is Theorem 3 on page 175.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prime Numbers. Show that if $a \mid 42n + 37$ and $a \mid 7n +4$, for some integer $n$, then $a = 1$ or $a = 13$
Show that if $a \mid 42n + 37$ and $a \mid 7n +4$, for some integer $n$, then $a = 1$ or $a = 13$.
I know most of the rules of divisibility and that any integer number can be expressed as the product of primes.
Given $a = \prod_{i=1}^\infty p_i^{\alpha_i}$ and $b = \prod_{i=1}^\infty p_i^{\beta_i}$
$a \mid b$ if and only of $\alpha_i \le \beta_i$ for every $i = 1, 2, 3, \ldots$
Even though i have this information, I cannot prove the statement. Help me please.
| $$a|7n+4 \implies a|6(7n+4)=42n+24$$
$$ a|42n+37 \text{ and } a|42n+24 \implies a|(42n+37)-(42n +24) = 13 $$
$$ a|13 \implies a=\pm 1, \text {or, } a=\pm 13 $$
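The conclusion is easy to spot-check in Python: over a range of $n$, $\gcd(42n+37,\,7n+4)$ only ever takes the values $1$ and $13$ (the latter occurring, e.g., at $n=5$):

```python
import math

# all gcds of the two expressions over a sample of n
gcds = {math.gcd(42 * n + 37, 7 * n + 4) for n in range(200)}
print(sorted(gcds))  # [1, 13]
```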
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Burnside's Lemma on octagon, using one of two colors on each side
I am trying to see why the number of colorings of a regular octagon in which each side is colored either red or blue is 10. In addition, both colors must be used equally: out of the 8 edges, 4 are red and 4 are blue. If an octagon can be rotated to become another octagon, then they are considered the same.
My work on this is such: there are ${8 \choose 4} = 70$ possible combinations, forgetting about rotation. If a possible octagon is rotated clockwise once, there is no way it can be considered the same. If it is rotated twice, there are two ways it can be considered the same - a RBRBRBRB or BRBRBRBR color scheme. Similarly, if it is rotated $3$, $5$, or $7$ times, there are $0$ "fixed" combinations. If it is rotated $4$ or $6$ times, there are $2$ fixed points. The answer should then be $\frac{70+0+2+0+2+0+2+0}{8} = 9.5$. However, this is obviously not right, being a non-integer. The correct answer is $10$, found through writing a quick computer program.
Can anyone help me find where I messed up?
| We may apply PET here since we require the cycle index $Z(C_8)$ of the
cyclic group $C_8$ anyway in order to apply Burnside. We have
$$Z(C_n) = \frac{1}{n} \sum_{d|n} \varphi(d) a_d^{n/d}$$
With $n=8$ this works out to
$$Z(C_8) = \frac{1}{8} a_1^8 + \frac{1}{8} a_2^4
+ \frac{1}{4} a_4^2 + \frac{1}{2} a_8.$$
We get
$$[R^4 B^4] Z(C_8; R+B) \\ =
\frac{1}{8} [R^4 B^4] (R+B)^8
+ \frac{1}{8} [R^4 B^4] (R^2+B^2)^4
+ \frac{1}{4} [R^4 B^4] (R^4+B^4)^2
\\ + \frac{1}{2} [R^4 B^4] (R^8+B^8)
\\ = \frac{1}{8} {8\choose 4}
+ \frac{1}{8} [R^2 B^2] (R+B)^4
+ \frac{1}{4} [R B] (R+B)^2
\\ = \frac{1}{8} {8\choose 4} + \frac{1}{8} {4\choose 2}
+ \frac{1}{4} {2\choose 1}
= \frac{35}{4} + \frac{3}{4} + \frac{1}{2} = 10.$$
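A brute-force check in Python (assuming nothing beyond the problem statement) confirms the count of $10$, and incidentally shows that the rotation by $4$ steps fixes $\binom42=6$ colorings rather than $2$, which is where the attempt in the question went wrong:

```python
from itertools import combinations

def rotations(t):
    return [t[i:] + t[:i] for i in range(len(t))]

# All 8-edge colorings with exactly four red (1) and four blue (0) edges,
# identified up to rotation via a canonical (lexicographically minimal) form.
colorings = set()
for reds in combinations(range(8), 4):
    t = tuple(1 if i in reds else 0 for i in range(8))
    colorings.add(min(rotations(t)))
print(len(colorings))  # -> 10

# Number of colorings fixed by rotating k steps, for k = 0..7.
fixed = [sum(1 for reds in combinations(range(8), 4)
             for t in [tuple(1 if i in reds else 0 for i in range(8))]
             if t == t[k:] + t[:k]) for k in range(8)]
print(fixed)  # -> [70, 0, 2, 0, 6, 0, 2, 0]
```

Burnside then gives $(70+0+2+0+6+0+2+0)/8 = 10$, agreeing with the PET computation above.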
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
I have a question about partial differential equations How do I find $\frac{\partial^2 e^u}{\partial y^2}$ when $u = u(x,y)$?
| $$\frac{\partial}{\partial y}e^u=e^u\frac{\partial u}{\partial y},$$
$$\frac{\partial^2}{\partial y^2}e^u
=\frac{\partial}{\partial y}\left(e^u\frac{\partial u}{\partial y}\right)
=\left(\frac{\partial}{\partial y}e^u\right)\frac{\partial u}{\partial y}+e^u\frac{\partial}{\partial y}\frac{\partial u}{\partial y}
=e^u\left(\frac{\partial u}{\partial y}\right)^2+e^u\frac{\partial^2u}{\partial y^2}.$$
Or more concisely,
$$\frac{\partial}{\partial y}e^u=e^uu_y,$$
$$\frac{\partial^2}{\partial y^2}e^u=\frac{\partial}{\partial y}\left(e^uu_y\right)=\left(\frac{\partial}{\partial y}e^u\right)u_y+e^u\frac{\partial}{\partial y}u_y=e^uu_y^2+e^uu_{yy}.$$
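As a sanity check (my own addition, using an arbitrary test function $u(x,y)=xy^2$), a finite-difference approximation in Python agrees with the formula $e^u\left(u_y^2+u_{yy}\right)$:

```python
import math

x, y, h = 0.7, 1.3, 1e-4

def f(yy):
    # e^u with the hypothetical test function u(x, y) = x * y^2
    return math.exp(x * yy * yy)

# Central second difference approximates d^2/dy^2 of e^u at (x, y).
numeric = (f(y + h) - 2 * f(y) + f(y - h)) / h**2

# Formula from above: e^u * (u_y^2 + u_yy), with u_y = 2xy and u_yy = 2x.
u, u_y, u_yy = x * y * y, 2 * x * y, 2 * x
exact = math.exp(u) * (u_y**2 + u_yy)

assert abs(numeric - exact) / exact < 1e-6
```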
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Normal distribution: Weight of a package of cookies Suppose the mass of a cookie is a normal random variable X. Let's say a cookie weighs $20g$ on
average with a standard deviation of $2g$. A packet contains exactly $25$ cookies, with the weight
of the packaging also being a normal random variable Y with mean $100g$ and standard
deviation $6g$. Assume all random variables are independent.
(i) Calculate the variance of the weight of a packet of cookies.
(ii) If you buy $3$ packets, what is the probability that the total weight
of this purchase is less than $1.82$ kg?
My attempt:
(i) $6^{2}$ $=$ $36$
(ii) $X \sim N(20,4)$ and $Y \sim N(100,36)$. Let $W$ be the total mass
of the package. So, $W=X+3Y\sim N(20+3(100),4+3(36))=N(320,112)$. We are
required to find $P_r[W<1.82]$ but the z-score I am getting does not
make sense at all. I got a z-score of 141.7366. I converted everything to grams. Please help.
| $(i)\ \ X \sim N(20,4), \ \ Y \sim N(100,36)$. A packet contains $25$ cookies, so the weight of the packet $W \sim N(100+25\times20,36+25\times 4)=N(600,136).$
So the variance of the packet is $136$.
$(ii)$ If you buy $3$ packets, their combined weight follows $K \sim N(1800,408)$, so $\mathbb{P}(K<1820)=\Phi\left(\frac{1820-1800}{\sqrt{408}}\right)\approx\Phi(0.99)\approx 83.9\%$.
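This probability can be computed in Python with the standard normal CDF $\Phi(z)=\tfrac12\left(1+\operatorname{erf}(z/\sqrt2)\right)$; note that the $z$-score divides by the standard deviation $\sqrt{408}$, not the variance:

```python
import math

mean, var = 3 * 600, 3 * 136           # three packets: N(1800, 408)
z = (1820 - mean) / math.sqrt(var)     # about 0.99
p = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(p)  # about 0.839, i.e. roughly 83.9%
```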
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3276914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find the shortest path between several nodes in a particular order without ever using any edge twice? Given are a number of ordered nodes in a bidirectional graph with known only positive cost for each edge.
I need to find the shortest path through the given nodes in the particular order that are given while never using any edge twice.
So in contrast to Travelling Salesman problems, I know the order of traversal.
However, the last requirement is key here. Due to not being able to use any edge twice, the locally optimal path between node 1 and 2 could create a suboptimal path between 2 and 3 and so on or even make a complete path impossible.
Right now, I am using A-Star in succession to build the total path, but it is clear that this is not optimal.
So, is there a way to find the global optimum over all given nodes (if it exists at all) by looking at all of them at the same time to find the optimal path?
| Even finding out whether there is a solution at all is NP-complete. I will show this by reducing from 3SAT.
Given a 3SAT instance, create a copy of the following 18-node network for each 3-literal clause:
The edges $Y_1\leftrightarrow Z_1$, $Y_2\leftrightarrow Z_2$, and $Y_3\leftrightarrow Z_3$ represent the three literals; the connections to the $Y_i$ and $Z_i$ nodes will be made later.
The nodes $A, B, C, D, E, F$ must be visited in that order. The edge going to the right from $F$ connects to the $A$ of the next clause.
The part of the path that visits $ABCDEF$ must use at least one of the $Y_i\leftrightarrow Z_i$ edges. Because $C$ must be visited, at least two of $X_1 X_2 X_3$ must be visited by this part of the path; this prevents any part of the path outside $ABCDEF$ from entering the $BCX_1X_2X_3$ network. (The rules imply that a node of degree $\le 3$ can be used at most once). Similarly on the other side of the fragment, no part of the path outside $ABCDEF$ can enter the $W_1W_2W_3DE$ network.
In addition to these clause networks, for each variable $x_n$ in the 3SAT problem add nodes $P_n, Q_n$ that must be visited in this order. Add a path from $P_n$ to $Q_n$ that passes through all the $Y_iZ_i$ edges for $x_n$ literals (connecting each $Z_i$ to the $Y_i$ of the next instance of $x_n$), and another path from $P_n$ to $Q_n$ that passes through all of the $\neg x_n$ literals. Because the $X_i$ and $Y_i$ nodes are not available outside the $ABCDEF$ segments, these two full paths will be the only ways to get from $P_n$ to $Q_n$.
Finally add connecting edges from each $Q_n$ to $P_{n+1}$, from the last $F$ to $P_1$, etc.
Now a path that passes through everything in order and does not reuse edges will correspond to a satisfying truth assignment, with each variable's truth value determined by the path from $P_n$ to $Q_n$ we don't take. And in each clause there must be at least one literal that has the right truth value.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simplifying $\sqrt\frac{\left(a^2\cos^2t+b^2\sin^2t\right)^3}{\left(b^2\cos^2t+a^2\sin^2t\right)^3}$ I am looking to simplify this term [I forgot the 3 :( ]
$$\sqrt\frac{\left(a^2\cos^2t+b^2\sin^2t\right)^3}{\left(b^2\cos^2t+a^2\sin^2t\right)^3}$$
where $a$ and $b$ are two non-negative reals.
(This is not homework. I am just trying to make my expression easy, but I didn't find a way.)
Thanks for your help.
| The form is
$$\sqrt{\frac{f^3}{g^3}}$$
which we can write as
$$\left(\frac{f}{g}\right)^{3/2}$$
so let's just worry about that inner quotient, $f/g$.
The quotient is definitely not constant; different $t$ values give different results:
$$t = 0 \;\to\; \frac{a^2}{b^2} \qquad\qquad t= \frac{\pi}{2}\;\to\;\frac{b^2}{a^2}$$
Having to set aside this dream case, we're left to consider a few alternatives and decide which might be considered least-bad.
*
*Leave it as-is.
$$\frac{a^2\cos^2 t + b^2 \sin^2 t}{a^2 \sin^2 t+b^2\cos^2 t} \tag{1}$$ That expression isn't terribly complicated.
*Divide-through by $\cos^2t$ in the numerator and denominator, to get
$$\frac{a^2+b^2\tan^2t}{b^2+a^2\tan^2t} \tag{2}$$ This may not be appreciably better, though ... and it introduces unnecessary concern about $t=\pi/2$.
*@Dr.SonnhardGraubner's answer invokes the double-angle formula to get something I'll write as
$$\frac{\left(a^2+b^2\right)+\left(a^2-b^2\right)\cos 2t}{\left(a^2+b^2\right)-\left(a^2-b^2\right)\cos 2t} \tag{3}$$
which is "simpler" in that the degree of the trig functions is lower, at the cose of adding some complexity to the coefficients.
*@Andrei's suggestion to trade $a$ and $b$ for trig functions is a good one, although I'd choose to swap the sine and cosine assignment to write $a = \sqrt{a^2+b^2} \cos u$ (and $b = \sqrt{a^2+b^2} \sin u$). I'd also use the double-angle formulas to simplify the resulting quotient to
$$\frac{1 + \cos 2t \cos 2u}{1 - \cos 2t \cos 2u} \tag{4}$$ (I'd also probably choose to associate $a$ with $\cos u$ (and $b$ with $\sin u$), which changes the above slightly.) Note that $(3)$ seems to be crying-out for us to make such a substitution.
*Since the quotient appears in the context of ellipses, we could use $a^2-b^2=c^2$ (where $c$ is the center-to-focus distance) and $c = ea$ (where $e$ is the eccentricity) to write
$$\frac{ 1 -e^2 \sin^2 t}{1 -e^2 \cos^2 t} \tag{5}$$ I like this one, personally.
*One can also combine re-writing in terms of $e$ with re-writing in terms of $\cos 2t$ (left as an exercise to the reader), but this just seems to re-complicate things.
One can imagine other variations, too. How useful any one version is depends upon how it's intended to be used.
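As a numerical spot check (my own addition), forms $(1)$ and $(5)$ agree for random $a>b>0$ and $t$, using $e^2=(a^2-b^2)/a^2$:

```python
import math, random

random.seed(0)
for _ in range(1000):
    b = random.uniform(0.1, 5.0)
    a = b + random.uniform(0.1, 5.0)      # ensure a > b > 0
    t = random.uniform(0.0, 2 * math.pi)
    s, c = math.sin(t), math.cos(t)
    # Form (1): the original quotient.
    form1 = (a*a*c*c + b*b*s*s) / (a*a*s*s + b*b*c*c)
    # Form (5): rewritten via the eccentricity e, with e^2 = (a^2 - b^2)/a^2.
    e2 = (a*a - b*b) / (a*a)
    form5 = (1 - e2*s*s) / (1 - e2*c*c)
    assert math.isclose(form1, form5, rel_tol=1e-9)
```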
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Poincaré inequality for Lipschitz functions with bounded domain Let $u\in W^{1,\infty}(B_h(0),\mathbb R^n)$, where $B_h(0)=\{x\in\mathbb R^n:|x|<h\}$.
From the Poincaré inequality we know that
$$
\|u-\mathrm{Id}-\frac{1}{\mathrm{Vol}(B_h(0))}\int_{B_h(0)}(u-\mathrm{Id})\|_{L^2}
\leq C\|du-\mathrm{Id}\|_{L^2}
$$
for some constant $C>0$ independent of $u$.
Now I want to bound $\|u-\mathrm{Id}\|_{W^{1,2}}$ in terms of $du$.
Is it also true that there exists a constant $C'>0$ such that
$$
\|u-\mathrm{Id}\|_{L^2}\leq C'\|du-\mathrm{Id}\|_{L^2}
$$
so that $\|u-\mathrm{Id}\|_{W^{1,2}}\leq C''\|du-\mathrm{Id}\|_{L^2}$ independent of $u$?
| It is not true. Consider a constant function $u(x)=N$. The estimate would lead to a contradiction for sufficiently large $N$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the midsection of a frustum and how do you calculate its perimeter? A doubt while reading "How to solve it" by George Polya.
Given the figure below:
What is the midsection of this figure and how would you calculate its perimeter? (Would be great if you could tell me how to find it on the diagram).
Quoting from the book the midsection is defined as :
We call here mid-section the intersection of the frustum with a plane
which is parallel both to the lower base and to the upper base of the
frustum and bisects the altitude.
So is it the dotted area or the area with the solid line in the figure? You see, this does not make sense to me because he says the midsection bisects the altitude. There is no such construct on the given figure, so it must be something else. Also, how can a plane bisect the altitude? A planar figure is 2D, right? And the height is a 3D aspect, right?
How would you also calculate the perimeter of that midsection ?
If can please do briefly describe what a midsection is? Is it something that only exists in solid objects or is present in objects of planar geometry as well?
| Just $$2\pi\cdot\frac{R+r}{2}=\pi(R+r).$$
Because the perimeter of a circle with radius $x$ is $2\pi x$.
The needed midsection is a circle whose diameter is the midline of the trapezoid with bases $2R$ and $2r$, and this midline is equal to $\frac{2R+2r}{2}=R+r.$
Id est, the radius of the circle is equal to $\frac{R+r}{2}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Compute the following sum in closed form : $\sum_{n=1}^{\infty}\frac{n\binom{2n}{n}}{4^{n}(2n+1)(2n-1)(4n+1)}$ $$\text{Find : }\sum_{n=1}^{\infty}\frac{n\binom{2n}{n}}{4^{n}(2n+1)(2n-1)(4n+1)}$$
I know that $\displaystyle\sum_{n=0}^{\infty}\binom{2n}{n}x^{2n}=\frac{1}{\sqrt{1-4x^{2}}}$ so $\displaystyle\sum_{n=1}^{\infty}n\binom{2n}{n}x^{2n}=\frac{2x^2}{(1-4x^{2})^{3/2}}$.
But I don't know how to complete this work because I end up with a hypergeometric function.
| As remarked in the OP,
\begin{equation}
\sum_{n=0}^{\infty}\binom{2n}{n}x^{2n}=\frac{1}{\sqrt{1-4x^{2}}}
\end{equation}
or, by changing $x\to x/2$,
\begin{equation}
\sum_{n=0}^{\infty}\frac{1}{4^n}\binom{2n}{n}x^{2n}=\frac{1}{\sqrt{1-x^{2}}}
\end{equation}
We use the decomposition
\begin{equation}
\frac{n}{\left( 2n-1 \right)\left( 2n+1 \right)\left( 4n+1 \right)}=\frac{1}{12}\frac{1}{2n-1}-\frac{1}{4}\frac{1}{2n+1}+\frac{1}{3}\frac{1}{4n+1}
\end{equation}
Then, for $\left|x\right|<1$, as the three series converge absolutely
\begin{equation}
\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{nx^{2n}}{\left( 2n-1 \right)\left( 2n+1 \right)\left( 4n+1 \right)}=\frac{1}{12}S_--\frac{1}{4}S_++\frac{1}{3}S_2
\end{equation}
where
\begin{align}
S_+&=\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{1}{ 2n+1}\\
S_-&=\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{1}{2n-1}\\
S_2&=\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{1}{4n+1}
\end{align}
With the notation
\begin{equation}
f(x)=\frac{1}{\sqrt{1-x^{2}}}
\end{equation}
by integrating the above series, we have
\begin{align}
\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}x^{2n}&=f(x)-1\\
S_+=\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{1}{2n+1}&=\int_0^1 \left[f(x)-1\right]\,dx\\
&=\frac{\pi}{2}-1
\end{align}
A simple transformation is necessary for evaluating $S_-$:
\begin{align}
\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}x^{2n-2}&=\frac{f(x)-1}{x^2}\\
S_-=\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{1}{2n-1}&=\int_0^1\frac{f(x)-1}{x^2}\,dx\\
&=1
\end{align}
For the third term, changing $x\to x^2$ in the series before integrating
\begin{align}
\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}x^{4n}&=f(x^2)-1\\
S_2=\sum_{n=1}^{\infty}\frac{1}{4^n}\binom{2n}{n}\frac{1}{4n+1}&=\int_0^1\frac{dx}{\sqrt{1-x^4}}-1\\
&=\frac{\Gamma(\frac{5}{4})\sqrt{\pi}}{\Gamma(\frac{3}{4})}-1\\
&=\frac{\left[\Gamma\left( 1/4\right)\right]^2 }{4\sqrt{2\pi}}-1
\end{align}
This classical integral can be found for example here.
Putting all the results together gives
\begin{equation}
\sum_{n=1}^{\infty}\frac{n\binom{2n}{n}}{4^{n}(2n+1)(2n-1)(4n+1)}=-\frac{\pi}{8}+\frac{\left[\Gamma\left( 1/4\right)\right]^2 }{12\sqrt{2\pi}}
\end{equation}
which is also identical to the result given by @ChipHurst in the comments.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Having trouble finding the range of this function. $$f(x)=\frac{e^{2x}-e^x+1}{e^{2x}+e^x+1}$$
Let, $e^x=t$ . Then,
$$f(x)=\frac{t^{2}-t+1}{t^{2}+t+1}=y\quad \text{where } t>0$$
$$(y-1)t^2+(y+1)t+(y-1)=0$$
so from the discriminent of the quadratic equation of $t$ I get,
$$(y+1)^2-4(y-1)^2\ge0$$
$$(3y-1)(y-3)\le0$$
$$\frac{1}{3}\le y\le 3$$
But from the graph I can see the range is, $$\frac{1}{3}\le y<1$$
So how can I calculate the range?
| Note that
$$h(t)=\frac{t^{2}-t+1}{t^{2}+t+1}=1-\frac{2t}{t^{2}+t+1}=1-\frac{2}{t+\frac{1}{t}+1}$$
Now by the AM-GM inequality, for $t=e^x>0$, $t+\frac{1}{t}\in [2,+\infty)$ and therefore
$$f(\mathbb{R})=h((0,+\infty))=[1/3,1).$$
P.S. By solving the quadratic equation
$$(1-y)t^2-(1+y)t+(1-y)=0$$
(for $y=1$, we have that $t=0$), we get
$$t=\frac{1+y\pm \sqrt{-3+10y-3y^2}}{2(1-y)}$$
where the discriminant is non-negative when $\frac{1}{3}\le y\le 3$, but you should remember that $t=e^x$ has to be POSITIVE.
If, for example, $y=3$, we get $t=-1<0$. So we have a further condition: at least one of the real solutions is positive.
Since their product is $1$, such condition is equivalent to the positivity of their sum, i. e. $(1+y)/(1-y)>0$, or $-1<y<1$.
Again we find that $y\in [1/3,1).$
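Numerically (a quick check of the conclusion, not part of the argument), $h(t)=1-\frac{2}{t+1/t+1}$ attains its minimum $1/3$ at $t=1$ and approaches, but never reaches, $1$:

```python
import math

def h(t):
    # The rewritten form of f with t = e^x > 0.
    return 1 - 2 / (t + 1 / t + 1)

assert math.isclose(h(1.0), 1 / 3)        # minimum: t + 1/t >= 2, equality at t = 1
samples = [h(10.0**k) for k in range(-6, 7)]
assert all(1 / 3 - 1e-12 <= v < 1 for v in samples)
assert h(1e6) > 0.999                     # sup = 1 is approached, not attained
```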
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Commutativity of $\bigcup$ and $\wp$ In his Naive Set Theory, Halmos in Section 5, Complements and Powers, asks the following.
Show that $E$ is always equal to $\bigcup_{X\in\wp (E)} X$ (that is $E=\bigcup\wp (E)$), but that the result of applying $\wp$ and $\bigcup$ to $E$ in the other order is a set that includes $E$ as a subset, typically a proper subset.
I’ve been able to show the first part. For the second part, I don’t think that $E$ can be a subset of $\wp\bigl(\bigcup E\bigr)$.
$\wp (X)$ means power set of $X$.
| A simple illustration of the second fact: let $E=\{\{\emptyset\}\}$, then $\bigcup E = \{\emptyset\}$ (all elements of elements of $E$ together) and $\mathscr{P}\left(\bigcup E\right) = \{\emptyset, \{\emptyset\}\}$, which indeed properly contains $E$ as a subset.
That $E \subseteq \mathscr{P}(\bigcup E)$ is clear:
suppose that $x \in E$. Then $x \subseteq \bigcup E$ (because every $y \in x$ will be in the union $\bigcup E$ by definition) and so $x \in \mathscr{P}\left(\bigcup E\right)$ (a subset of the union is an element of the powerset of the union).
As Henning Makholm pointed out, iff $E$ is a powerset of some set, we will have $E= \mathscr{P}\left(\bigcup E\right)$.
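The inclusion $E \subseteq \mathscr{P}\left(\bigcup E\right)$ can also be illustrated concretely in Python with frozensets (my own toy example, not from the text):

```python
from itertools import combinations

def powerset(s):
    # All subsets of s, as a set of frozensets.
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

E = {frozenset({1, 2}), frozenset({2, 3})}
U = frozenset().union(*E)        # union of the elements of E: {1, 2, 3}
P = powerset(U)

assert all(x in P for x in E)    # E is a subset of P(union E)
assert E < P                     # and, here, a proper subset
```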
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
An exercise on Möbius transformations I had an exam on Complex Analysis and I could not solve the following exercise on Möbius transformations:
Let $f(z)=\frac{z+1}{z-1}$ and $A=\{z : \operatorname{Im}(z) >0\}\setminus \{|z|<1\}$. Find $f(A)$
I know this probably is not that difficult but I don't know how to solve these kind of exercises and I want to learn in case I want to contest my grade. Any help?
| Moebius transformations map circles* (meaning circles or lines) to circles*. The problem tells about two circles* in the $z$-plane. Compute $f(-1)$, $f(0)$, $f(1)$, and $f(i)$ in order to obtain three points of each of the two image circles*. It turns out that $f(A)$ is one of the quadrants in the $w$-plane. In order to determine which one compute $f(2i)$ as well.
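Carrying out the suggested computations gives $f(-1)=0$, $f(0)=-1$, $f(i)=-i$, and $f(2i)=\frac{3-4i}{5}$; a quick numerical check in Python (my own addition) confirms that interior points of $A$ land in the quadrant $\{\operatorname{Re} w>0,\ \operatorname{Im} w<0\}$:

```python
import cmath

def f(z):
    return (z + 1) / (z - 1)

assert f(-1) == 0 and f(0) == -1
assert abs(f(1j) - (-1j)) < 1e-12
assert abs(f(2j) - (0.6 - 0.8j)) < 1e-12

# Sample interior points of A = {Im z > 0} \ {|z| < 1} with |z| > 1.
for r in (1.001, 1.5, 3.0, 50.0):
    for k in range(1, 40):
        z = r * cmath.exp(1j * cmath.pi * k / 40)   # Im z > 0, |z| = r > 1
        w = f(z)
        assert w.real > 0 and w.imag < 0            # fourth quadrant
```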
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3277894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Determine the character of the singularity at $z=-2$ for function$\frac{1}{(z+2)^{2} \sin z}.$ Determine the character of the singularity at $z=-2$ for function
$$\frac{1}{(z+2)^{2} \sin z }.$$
Since the function $ z \mapsto \frac{1}{\sin z} $ is holomorphic in some neighbourhood of $z=-2$, its Laurent series expansion there equals its Taylor series expansion. Using that expansion I could read off the character of the singularity $z=-2$, but I got stuck right after the first step ($\sin z$ has a known Taylor expansion, so I have put that in the denominator). Is that a good way of solving this? I am a beginner in this area, so I am not sure what exactly I am "allowed" to do in $\mathbb{C}$. Any hint helps!
| What you say is correct.
The function $1/\sin z$ is analytic in a suitable neighborhood of $z = -2$. Therefore, in such a neighborhood we have:
\begin{align}
f(z) & = \frac{1}{(z+2)^{2}}\frac{1}{\sin z} \\ & = \frac{1}{(z+2)^{2}} \left\{ a_0 +a_1 (z+2)+a_2 (z+2)^{2}+ a_3 (z+2)^{3} + \cdots \right\} \\ & = \frac{a_0}{(z+2)^2} + \frac{a_1}{z+2} + a_2 + a_3 (z+2) + \cdots
\end{align}
Notice that $ a_{0} = 1/\sin(-2) \neq 0$, which implies that $z =-2$ is a pole of order 2.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3278012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$G\leqslant GL(n,\mathbb R)\cap\operatorname{Sym}(n,\mathbb R)$ and $|G|=m<\infty$. Prove that $G\cong (\mathbb Z/2\mathbb Z)^k$ for some $k\ge 0$.
Let $G$ be a finite subgroup of the group of real $n\times n$ matrices with nonzero determinant such that all elements of $G$ are symmetric matrices. Prove that $G$ is isomorphic to $(\mathbb Z/2\mathbb Z)^k$ for some $k\ge 0$.
My attempt:
Since $G\leqslant GL(n,\mathbb R)\cap\operatorname{Sym}(n,\mathbb R)$ and $|G|=m<\infty$, we have $\forall g\in G$, $g^m=I$ which implies that the only possible eigenvalues of $g$ are $\pm 1$.
Moreover, we need to show that $g^2=I$ and $g_1g_2=g_2g_1$ for every $g_1,g_2\in G$, which will guarantee that $G$ is a finite abelian group with the only orders of elements in $G$ is $1$ or $2$. Consequently, we can conclude that $G\cong (\mathbb Z/2\mathbb Z)^k$ for some $k\ge 0$.
Indeed, suppose $PgP^{-1}$ is in its Jordan form for some $P\in GL(n,\mathbb R)$; then we can easily conclude that $g^2=I$. But I am stuck at showing the second fact. Any help? Thanks.
| It is an exercise in first courses in group theory to prove $g^2=e$ for all $g\in G$ implies $gh=hg$ for all pairs of elements $g,h\in G$. The way to do it is expand out $(gh)^2=g^2h^2$ and cancel $g,h$ on the sides. (So, actually, in general it's sufficient for $x\mapsto x^2$ to be a homomorphism for $G$ to be commutative.)
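A concrete instance (my own illustration): the diagonal sign matrices give such a group with $k=n$; the sketch below checks the defining properties for $n=2$, where $G=\{\pm I,\operatorname{diag}(1,-1),\operatorname{diag}(-1,1)\}\cong(\mathbb Z/2\mathbb Z)^2$:

```python
from itertools import product

# 2x2 diagonal sign matrices, represented by their diagonal entries;
# all are symmetric and invertible.
G = [tuple(s) for s in product((1, -1), repeat=2)]   # 4 elements

def mul(g, h):
    # Multiplication of diagonal matrices is entrywise.
    return tuple(a * b for a, b in zip(g, h))

identity = (1, 1)
for g in G:
    assert mul(g, g) == identity          # every element squares to I
for g in G:
    for h in G:
        assert mul(g, h) == mul(h, g)     # hence the group is abelian
print(len(G))  # -> 4 = 2^2
```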
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3278288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
On closed forms for the binomial sum $\sum_{n=1}^\infty \frac{z^n}{n^p\,\binom {2n}n}$ for general $p$? Define the function,
$$A_p(z)=\sum_{n=1}^\infty \frac{z^n}{n^p\,\binom {2n}n}$$
I've asked about the special case $z=1$ of this function before. At the end of this post, we find for $p\geq 2$ a closed-form in terms of a log sine integral. A variant is,
$$A_p(1)=\sum_{n=1}^\infty \frac{1}{n^p\,\binom {2n}n} = \frac{(-2)^{p}}{(p-2)!}\int_0^{\color{red}{\pi/6}} x\,\ln^{p-2}\big(\sqrt4\sin x\big)dx\tag1$$
and some experimentation shows,
$$A_p(2)=\sum_{n=1}^\infty \frac{2^n}{n^p\,\binom {2n}n} = \frac{(-2)^{p}}{(p-2)!}\int_0^{\color{red}{\pi/4}} x\,\ln^{p-2}\big(\sqrt2\sin x\big)dx\tag2$$
However, another post is about the case $z=4$ and we have the similar,
$$A_p(4)=\sum_{n=1}^\infty \frac{4^n}{n^p\binom{2n}{n}}
=\frac{(-2)^p}{(p-2)!}\int_0^{\color{red}{\pi/2}} x\ln^{p-2}(\sin x)\,dx\tag3$$
Q: What is the formula for $A_p(3)$? And what other $A_p(z)$ are formulas (whether as log sine integrals or other) known for general $p$?
Edit: As I suspected, there is a formula for $z=3$. Courtesy of nospoon's answer below, we have,
$$A_p(3)=\sum_{n=1}^\infty \frac{3^n}{n^p\binom{2n}{n}}
=\frac{(-2)^p}{(p-2)!}\int_0^{\color{red}{\pi/3}} x\ln^{p-2}\big(\tfrac2{\sqrt3}\sin x\big)\,dx\tag4$$
| Hoping that you enjoy hypergeometric functions,
$$ A_p(z)=\sum_{n=1}^\infty \frac{z^n}{n^p\,\binom {2n}n}=\frac{z}{2} \, \, _{p+1}F_p\left(1,\cdots,1;\frac{3}{2},2,\cdots,2;\frac{z}{4}\right)$$ and what you wrote in comments is correct.
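For what it's worth, the $p=2$, $z=3$ instance of formula $(4)$ in the question can be verified numerically with the standard library: since $\ln^0=1$ and $0!=1$, the right side reduces to $4\int_0^{\pi/3}x\,dx=\frac{2\pi^2}{9}$, and the series agrees:

```python
import math

# A_2(3) = sum over n >= 1 of 3^n / (n^2 * C(2n, n)); terms shrink
# roughly like (3/4)^n, so a modest truncation suffices.
S = sum(3**n / (n * n * math.comb(2 * n, n)) for n in range(1, 300))
assert abs(S - 2 * math.pi**2 / 9) < 1e-10
print(S)  # about 2.19325
```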
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3278448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Evaluate $\int_0^1\frac{\ln(1-x)\ln(1+x)}{1+x^2}dx$
How to prove $$\int_0^1\frac{\ln(1-x)\ln(1+x)}{1+x^2}\ dx=\text{Im}\left(\operatorname{Li}_3(1+i)\right)-\frac{\pi^3}{32}-G\ln2 \ ?$$
where $\operatorname{Li}_3(x)=\sum\limits_{n=1}^\infty\frac{x^n}{n^3}$ is the trilogarithm and $G=\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^2}$ is Catalan's constant
Trying the algebraic identity $\ 4ab=(a+b)^2-(a-b)^2\ $ where $\ a=\ln(1-x)$ and $b=\ln(1+x)\ $is not helpful here and the integral will be more complicated.
Also, applying IBP or substituting $x=\frac{1-y}{1+y}$ is not that useful either.
All approaches are appreciated.
| Different approach:
Start with subbing $x\mapsto \frac{1-x}{1+x}$
$$\small{\int_0^1\frac{\ln(1-x)\ln(1+x)}{1+x^2}dx=\ln2\underbrace{\int_0^1\frac{\ln\left(\frac{1-x}{1+x}\right)}{1+x^2}dx}_{-G}-\int_0^1\frac{\ln x\ln(1+x)}{1+x^2}dx+\int_0^1\frac{\ln^2(1+x)}{1+x^2}dx}\tag1$$
where
$$\int_0^1\frac{\ln^2(1+x)}{1+x^2}dx=\int_0^\infty\frac{\ln^2(1+x)}{1+x^2}dx-\underbrace{\int_1^\infty\frac{\ln^2(1+x)}{1+x^2}dx}_{x\mapsto 1/x}$$
$$=\underbrace{\int_0^\infty\frac{\ln^2(1+x)}{1+x^2}dx}_{2\ \text{Im}\operatorname{Li}_3(1+i)}-\int_0^1\frac{\ln^2(1+x)}{1+x^2}dx+2\int_0^1\frac{\ln x\ln(1+x)}{1+x^2}dx-\underbrace{\int_0^1\frac{\ln^2x}{1+x^2}dx}_{\pi^3/16}$$
$$\Longrightarrow \int_0^1\frac{\ln^2(1+x)}{1+x^2}dx=\int_0^1\frac{\ln x\ln(1+x)}{1+x^2}dx+\text{Im}\operatorname{Li}_3(1+i)-\frac{\pi^3}{32}\tag2$$
Plug $(2)$ in $(1)$ we obtain
$$\int_0^1\frac{\ln(1-x)\ln(1+x)}{1+x^2}\ dx=\text{Im}\left(\operatorname{Li}_3(1+i)\right)-\frac{\pi^3}{32}-G\ln2$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3278573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Existence of infinite set of positive integers s.t sum of reciprocals is rational and set of primes dividing an element is infinite Does there exist a sequence $(a_i)_{i \geq 0}$ of distinct positive integers such that
$\sum_{i\geq 0}\frac{1}{a_i} \in \mathbb{Q}$ and
$$\{ p \in \mathbb{P} \text{ }|\text{ } \exists\text{ } i\geq 0 \text{ s.t.}\text{ } p | a_i\}$$
is infinite?
Motivation:
All geometric series (corresponding to sets $\{ 1,n,n^2,n^3,... \}$) are rational and the terms obviously contain finitely many primes. The same is true for say, sums of reciprocals of all numbers whose prime factiorisation contains only the primes $p_1, p_2, ...,p_k$ : the sum is then $\prod_{i=1}^k\left(\frac{p_i}{p_i-1}\right)$
On the other side, series corresponding to sets $\{1^2, 2^2, 3^2, ...\}, \{1^3,2^3,3^3,...\},\{1!,2!,3!,...\}$ converge to $\frac{\pi^2}{6}$, Apery's constant and $e$ respectively, which are all known to be irrational.
I am aware of the fact that if this statement is true then it has not been proven yet (since it implies that the values of the zeta function at positive integers are irrational, which to my knowledge has not been shown yet).
Any counterexamples or other possible observations (such as, instead of requiring the set of primes to be infinite, requiring that it contains all primes except a finite set)?
| Let $f=(f_1,f_2): \mathbb{N} \rightarrow \mathbb{N}^2$ be a bijection.
Define $a_n=(2^{3+f_1(n)}+1)^{f_2(n)}$.
By Bang’s theorem (https://en.wikipedia.org/wiki/Zsigmondy%27s_theorem), if $n > m > 3$, then there exists some prime $p$ dividing $2^{2n}-1$ but neither $2^{2m}-1$ nor $2^n-1$, thus $p|2^n+1$ but not $p|2^m+1$.
As a consequence, no integer can be both a power of $2^m+1$ and $2^n+1$, hence the $a_n$ are pairwise distinct.
Besides, $$\sum_n{a_n^{-1}}=\sum_{n \geq 1}{\sum_{m \geq 1}{(1+2^{n+3})^{-m}}}=\sum_{n \geq 1}{2^{-3-n}} \in \mathbb{Q}.$$
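The geometric-series collapse can be confirmed numerically in Python (truncating both indices; my own check):

```python
# Sum over n >= 1, m >= 1 of (1 + 2^(n+3))^(-m); each inner geometric
# series sums to 2^-(n+3), and the total is 2^-4 + 2^-5 + ... = 1/8.
S = sum((1 + 2.0**(n + 3))**(-m)
        for n in range(1, 40) for m in range(1, 60))
assert abs(S - 1 / 8) < 1e-9
print(S)  # about 0.125
```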
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3278677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 2
} |
For $0\lt\theta\lt1$, $\frac 1\theta\notin\mathbb Z$, there exists $f\in C[0, 1]$ such that $f(0)=f(1)$ and $f(x+\theta)-f(x)\ne0$ Prove that for each $0\lt\theta\lt1$ such that $\dfrac{1}{\theta}$ isn't an integer, there exists $f \in C[0, 1]$ such that $f(0)=f(1)$, and $ \forall x\in[0,1-\theta] , f(x+\theta)-f(x)\ne0 $
(If $\theta\gt\frac12, $ it is obvious that such an $f$ exists.)
| Parting from predicates, as $0 < \theta < 1$ in strict order, we can establish a function $\phi : ]0,1[ \rightarrow \mathbb{R}$ such that $\phi(\theta) = \frac{1}{\theta}$. As we know that $\phi(\theta)$ tends to $1$ as $\theta$ tends to $1$, and $+ \infty$ as $\theta$ tends to $0$, we can restrict the function as follows:
$$ \phi : ]0,1[ \rightarrow ]1,+\infty[ \\
\theta \mapsto \frac{1}{\theta} $$
Note that as $\mathbb{R}$ is a dense set, $]0,1[$ is homeomorphic to $ ]1,+\infty [ \subset \mathbb{R}$.
*
*If such an $f$ existed for all such $\theta$, we'd only need continuity to conclude by Rolle's Theorem that a subset of all these functions is indeed defined such that $f(0) = f(1)$.
*Also, if such an $f$ existed for all such $\theta$, we'd have:
$$ \forall x \in [0,1-\theta], \exists a \ne 0 : f(x+\theta) - f(x) = a $$
then, as $\theta \ne 0$,
$$ \frac{f(x+\theta) - f(x)}{x+\theta-x} = f’(\theta) = \frac{a}{\theta} \ne 0 $$
Thus, $f$ is differentiable on $[0,1 - \theta]$ and non-constant.
Let $\alpha \in ]1,+\infty[$,
Then there exists $\theta \in ]0,1[$ such that $\phi (\theta) = \frac{1}{\theta} = \alpha$.
Indeed, we can consider the function:
$$ f(x) = \sin(2 \alpha \pi x) $$
The function $f$ defined above luckily serves as an example for each choice of $\theta \in ]0,1[$ such that $\alpha = \phi(\theta)$, whose existence is guaranteed by construction. That is why I won't discuss the property of $\frac{1}{\theta}$ being a non-integer. It is a continuous function and clearly not constant over $[0,1-\theta]$, which we will extend by continuity to its values at $0$ and $1$ too.
We then have the existence for them all.
Furthermore, the mere fact that any interval $ ]n,n+1[ $ with $n \in \mathbb{N}$ is homeomorphic to $ ]1,+\infty[ $ leads to the existence of a family $F$ of such functions.
Notice that, we can generalize the subset of $F$ generated by these $f$ above:
$$ \forall n \in \mathbb{N}, f_{n}(x) = \sin(2 (n+1) \alpha \pi x) $$
Observe that each element of $ \{ f_{i} \}_{i \in I_n} $ is in $F$.
Thus the generalized form.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3278772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Develop into Laurent series around $0$: $\frac{e^{1/z}}{z - 3i} .$ Develop into Laurent series around $0$:
$$\frac{e^{\frac{1}{z}}}{z - 3i} .$$
I was thinking of developing $e^{\frac{1}{z}}$ first and then $\frac{1}{z - 3i}$, but I got stuck while writing their product as one sum. Is that a good way of solving this?
I'm new to developing to Laurent series so any hint helps!
| If you have two functions $f$ and $g$ holomorphic on an annulus $$\mathcal{A}=\{z\in\mathbb{C}: r_0<|z-z_0|<r_1\},$$ then given Laurent series $$f(z)=\sum\limits_{n=-\infty}^\infty a_n (z-z_0)^n$$ and $$g(z)=\sum\limits_{n=-\infty}^\infty b_n (z-z_0)^n$$ valid on $\mathcal{A}$, we will have $fg$ holomorphic on $\mathcal{A}$ as well, with Laurent series $$f(z)g(z)=\sum\limits_{n=-\infty}^\infty c_n (z-z_0)^n, $$ where $$c_n=\sum\limits_{k=-\infty}^\infty a_kb_{n-k},$$ valid on $\mathcal{A}.$ This is analogous to taking the product of two power series with the same radius of convergence.
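For finite (polynomial) truncations the coefficient rule is just a convolution; a toy Python illustration (my own, with one-sided series for simplicity):

```python
def convolve(a, b):
    """Coefficients of the product a(z) * b(z) of two polynomials,
    given as coefficient lists in increasing powers of z."""
    c = [0] * (len(a) + len(b) - 1)
    for k, ak in enumerate(a):
        for j, bj in enumerate(b):
            c[k + j] += ak * bj
    return c

# (1 + z) * (1 + 2z + z^2) = 1 + 3z + 3z^2 + z^3
print(convolve([1, 1], [1, 2, 1]))  # -> [1, 3, 3, 1]
```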
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Writing $a^2b^0c^0+a^0b^2c^0+a^0b^0c^2+a^1b^1c^0+a^1b^0c^1+a^0b^1c^1$ using $\sum$? $$a^2+b^2+c^2+ab+ac+bc$$
$$=a^2b^0c^0+a^0b^2c^0+a^0b^0c^2+a^1b^1c^0+a^1b^0c^1+a^0b^1c^1$$
Been messing around with some probability stuff and that popped up. I couldn't figure out how to write it in summation form so I can generalize it for when the sum of the exponents isn't just 2.
| $$\sum_{\substack{i,j,k\ge0\\ i+j+k=2}}\mkern-9mua^i b^j c^k$$
seems to be what you're after.
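To double-check, a small Python snippet enumerates the exponent triples with $i+j+k=2$ and compares against the expanded expression (the values of $a,b,c$ are chosen arbitrarily):

```python
a, b, c = 2.0, 3.0, 5.0
deg = 2

# Sum a^i b^j c^k over all nonnegative (i, j, k) with i + j + k = deg.
S = sum(a**i * b**j * c**k
        for i in range(deg + 1)
        for j in range(deg + 1)
        for k in range(deg + 1)
        if i + j + k == deg)

assert S == a*a + b*b + c*c + a*b + a*c + b*c
```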
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Find all primes $(p,q)$ such that $p|q+6$ and $q|p+7$ Find all primes $(p,q)$ such that $p|q+6$ and $q|p+7$
I haven't found any. I initially started from $p,q\gt 3$ since, by a simple substitution, if $p=2$ then $q|8$ and $q=2$, but then $2|9$, which is a contradiction. Similarly with $3$. Then $p,q$ are odd and greater than $3$. Also, I tried using $p,q\equiv \pm1\pmod 6$ and linear combinations, but I haven't gotten anything and I don't know how to proceed.
I would prefer a suggestion rather than an answer, if possible without congruences, thanks beforehand.
| From $q\mid p+7$ we have $p+7=qk$ for some positive integer $k$.
*
*If $k=1$ then $p+7=q$, so one of $p,q$ is even, so $p=2$ and $q=9$. Not good.
*If $k=2$ then $p+7=2q$ and since $p\mid 2q+12$, then $p\mid 19$, so $p=19$ and
$q=13$.
*If $k=3$ then $p+7=3q$, so one of $p,q$ is even, so $p=2$ and $q=3$, which doesn't work in the second relation.
*If $k\geq 4$ then $p+7\geq 4q$. But $p\leq q+6$ so $3q\leq 13\implies q\leq 3$. If $q=2$ then $p=2$ which doesn't work and if $q=3$ then $p= 3$ which also doesn't work.
Conclusion: $p= 19$ and $q=13$.
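A brute-force check supporting the conclusion (the search bound of 1000 is an illustrative assumption; the case analysis above rules out anything larger):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# the search bound is an illustrative assumption; the case analysis
# above shows no solutions exist beyond it
solutions = [(p, q)
             for p in range(2, 1000) if is_prime(p)
             for q in range(2, 1000)
             if is_prime(q) and (q + 6) % p == 0 and (p + 7) % q == 0]
print(solutions)  # [(19, 13)]
```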
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Prove by epsilon-delta that $\lim \limits_{x \to 1} x^3-4=-3$ I need to prove that $\lim \limits_{x \to 1} x^3-4=-3$ with epsilon-delta.
My work
$\forall \varepsilon > 0 ,\exists \space \delta > 0: 0<|x-1|< \delta \implies |x^3-4+3| < \varepsilon$
Working with the consequent:
$|x^3-4+3| < \varepsilon \iff |x^3-1| < \varepsilon \iff |x-1||x^2+x+1| < \varepsilon$
Multiplying the antecedent by $|x^2+x+1|:$
$|x-1|< \delta$ $/\cdot |x^2+x+1| \iff |x-1||x^2+x+1|< \delta |x^2+x+1|$
Here I found a relation but I don't know how to proceed, and I don't know if this is the best way to prove this. Any hints?
| Assume that $\delta < 1/2$, then $1/2<x<3/2$.
$$|x^2+x+1|=x^2+x+1 < 5$$
$$ |x-1||x^2+x+1|< \delta |x^2+x+1|<5\delta \le \epsilon, \qquad \text{taking } \delta=\min\left(\tfrac12,\tfrac{\epsilon}{5}\right)$$
You can take it from here.
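A quick sampling sketch of the bound behind the choice $\delta\le 1/2$, checking $|x^3-1|\le 5|x-1|$ on a grid over $[1/2,3/2]$ (an illustration, not a proof):

```python
# sampling check of |x^3 - 1| <= 5 |x - 1| on [1/2, 3/2]; on this
# interval x^2 + x + 1 < 5, which is what the bound above uses
ok = all(abs(x**3 - 1) <= 5 * abs(x - 1)
         for k in range(1001)
         for x in [0.5 + k * 0.001])
print(ok)  # True
```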
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How can a transformation be linear transformation without linearity? My teacher at the University gave me a question I could not understand completely. Here is the question:
Let $T: \mathbb R^3 \to P[x]$ be a linear transformation with $$T([1, 0,
0])=x+1, \quad T([0, 1, 0])=x^2-x, \quad T([0, 0, 1])=x^2,$$ find
$T([a, b, c])$, also find the standard matrix $A$ for the
transformation.
The part that I did not understand is that how can $T([0, 1, 0])$ and $T([0, 0, 1])$ be linear since they have $x^2$. Also the $T([1, 0, 0])$ term has a constant. Those violate the linear transformation rules. Don't they?
| Linearity is nothing more or less than requiring $T(u+v)=T(u)+T(v)$ and $T(au)=aT(u)$. This does not rule out
$T(u)=x+1$.
In fact consider the function $M\colon P[x]\to P[x]$ which has the effect of multiplying anything by $x^2+3x+1$ (or any random, fixed polynomial).
$M(f(x)+g(x))=M(f(x))+M(g(x))$ and $M(af(x))=aM(f(x))$.
We can see $M(1)=x^2+3x+1$, a quadratic polynomial! Yet $M$ satisfies the requirements of a linear transformation. Perfectly alright.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Minimize the Sum of Reciprocal of Probabilities I need a probability distribution over $n$ events such that the sum of the reciprocal probabilities is minimized.
The problem is given by:
$$\begin{aligned}
\arg \min_{ {p}_{i} } & \; && \sum_{i = 1}^{n} \frac{1}{ {p}_{i} } \\
\text{subject to} & \; && \sum_{i = 1}^{n} {p}_{i} = 1
\end{aligned}$$
I guess it's a basic easy question. I would appreciate any hint.
| Here is an argument showing that the minimum value can only be attained when the $p_i$'s are equal. Suppose the minimum value is attained when $p_i=q_i, 1 \leq i \leq n$. If possible let $q_i \neq q_j$. Note that $\frac 1 {q_i} +\frac 1 {q_j} >\frac 1 {\frac {q_i+q_j} 2}+\frac 1 {\frac {q_i+q_j} 2}$. [This is simply a rewriting of the inequality $(q_i-q_j)^{2} >0$.] Thus there is a choice of $p_i$'s for which we get a value lower than the minimum. This contradiction shows that the $p_i$ are all equal when the minimum is attained.
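A small numerical illustration (not part of the proof, with an arbitrary perturbed distribution): any deviation from the uniform distribution increases the objective, whose minimum is $n^2$.

```python
# numerical illustration (not part of the proof): perturbing away
# from the uniform distribution increases the objective
n = 5
uniform = [1 / n] * n
perturbed = [0.1, 0.15, 0.2, 0.25, 0.3]  # also sums to 1

def obj(p):
    return sum(1 / pi for pi in p)

print(obj(uniform))    # n**2 = 25, the minimum
print(obj(perturbed))  # strictly larger
```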
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solve the equation $|2x^2+x-1|=|x^2+4x+1|$ Find the sum of all the solutions of the equation $|2x^2+x-1|=|x^2+4x+1|$
I tried solving it on desmos.com and got the requisite answer, but solving it manually is getting very lengthy.
I tried to construct the two parabolas and mirror the region below the y-axis, but it is still getting complicated.
Is there any easy method to solve it and get the sum of all the solutions ?
| The expressions between the absolute value bars have the same or opposite signs. Hence there are two independent cases (by addition and subtraction):
$$3x^2+5x=0$$ and $$x^2-3x-2=0.$$
Then by Vieta,
$$-\frac53+3.$$
For complete rigor, one should show that no root is repeated. This is true, because the polynomials have no double root, and their $\text{gcd}$ is $1$.
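A numerical verification sketch of the four roots and their sum:

```python
import math

# roots of 3x^2 + 5x = 0 and x^2 - 3x - 2 = 0
roots = [0.0, -5 / 3, (3 + math.sqrt(17)) / 2, (3 - math.sqrt(17)) / 2]

# every root satisfies the original absolute-value equation ...
for x in roots:
    assert abs(abs(2 * x * x + x - 1) - abs(x * x + 4 * x + 1)) < 1e-9

# ... and their sum is -5/3 + 3 = 4/3
print(sum(roots))
```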
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Asymptotic formula for $\sum_{k=N}^\infty \frac{x^k}{k!}$ as $N \to \infty$ This seems like a weird question, because the series has good convergence and there's no need to use other methods to estimate it for $N \to \infty$.
However, after seeing this question, I tried to come up with some approximation which could allow us to treat integrals containing this sum in the denominator.
Regardless of the particular application, I just wanted to ask if there's a way to approximate this sum using a finite combination of elementary or special functions?
Not counting the obvious $e^x-\sum_{k=0}^{N-1} \frac{x^k}{k!}$ of course.
Euler-Maclaurin formula doesn't seem very promising, because the resulting integral is even more complicated $$\int_N^\infty \frac{x^y}{\Gamma(y+1)}dy$$ And the derivatives containing various combinations of polygamma functions are also a pain to write.
If there's no better asymptotic expression than the sum itself, so be it.
| Note that
$$
\frac{n!}{x^n}\sum_{k=n}^\infty\frac{x^k}{k!}
=1+\sum_{k=n+1}^\infty\frac{x^{k-n}n!}{k!}\\
$$
and for $n\ge|x|$,
$$
\begin{align}
\left|\sum_{k=n+1}^\infty\frac{x^{k-n}n!}{k!}\right|
&\le\sum_{k=1}^\infty\frac{|x|^k}{(n+1)^k}\\
&=\frac{|x|}{n+1-|x|}
\end{align}
$$
Thus,
$$
\sum_{k=n}^\infty\frac{x^k}{k!}
=\frac{x^n}{n!}\left(1+O\!\left(\frac1n\right)\right)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3279850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Interesting topological spaces to calculate the homology groups. Interesting topological spaces to calculate the homology groups.
I am calculating homology groups of several topological spaces to learn, and I have already calculated the homology groups of $\mathbb{S}^m$, $\mathbb{R}P^2$, the Klein bottle, and $\mathbb{R}^2$ minus a finite number of points. I am going to calculate the homology groups of the Möbius band next, and I was wondering for what other interesting topological spaces I can easily calculate the homology groups. Thank you very much.
| Not sure whether you consider them to be interesting, but these are some spaces whose homology groups I once computed in the past when I was studying algebraic topology:
1) The torus $T^2 = S^1 \times S^1$ or more generally $T^n = S^1 \times \dots \times S^1$
2) The space you get when you take $S^2$ and identify the north and south poles
3) Take $T^2 = S^1 \times S^1$ and quotient out the circle $S^1 \times \lbrace x \rbrace$ for some point $x \in S^1$
4) Take $T^2 = S^1 \times S^1$ and quotient out two different circles $S^1 \times \lbrace x \rbrace$ and $S^1 \times \lbrace y \rbrace$
5) The space $X$ one gets by glueing two solid tori $S^1 \times D^2$ along their boundaries via the identity $S^1 \times S^1 \rightarrow S^1 \times S^1$
6) The dunce hat/cap (Take a solid triangle and identify the sides by the edge word $a^3$ (or $a^2a^{-1}$ - definition varies a bit in the literature)
7) If you managed to compute your examples plus these and you still want more, then I suggest that you just construct some examples and try to compute the homology groups. Sometimes it will work out and otherwise you can still ask questions here.
I will not be able to remember all the homology groups for you to compare though.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How to write recursive functions in mathematics Okay this is a really stupid question, but anyways.
Action isn't just the effect of motivation; it is also the cause of
it.
~ "The subtle art of not giving a f*ck", Mark Manson
If I take variable action as a and variable motivation as m, I want to write a simple equation f(a, m) to state the above fact mathematically. Here my confusion is that we know cause leads to effect and is its precursor, but the above statement makes the action f(a) as both cause and effect. So, how do I write f(a, m)?
The second part of this question is what would happen if I add a third variable, inspiration f(i).
The thing about motivation is that it's not only a three-part chain,
but an endless loop:
Inspiration -> Motivation -> Action -> Inspiration -> Motivation ->
Action -> ...
I want to reorient above as
Action -> Inspiration -> Motivation
So, how do I write or hypothesize f(a,i,m)?
| $$a_t=m(a_{t-1})$$
This recursive formula treats motivation as the function and actions as both the input and output to that function. In other words, motivation has a predefined relationship with action, whereby actions cause motivation which produces the next action.
With inspiration:
$$a_t = m(i(a_{t-1}))$$
or in English, action inspires motivation which produces the next action. Your next action is determined by motivation, which is a function of inspiration, which is a function of your previous action.
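A toy simulation sketch of the loop $a_t=m(i(a_{t-1}))$; the particular affine forms of $i$ and $m$ below are invented purely for illustration:

```python
# toy simulation of the loop a_t = m(i(a_{t-1})); the affine forms of
# inspiration and motivation below are invented purely for illustration
def inspiration(action):
    return 0.5 * action + 1.0

def motivation(insp):
    return 0.8 * insp

a = 0.0
history = [a]
for _ in range(20):
    a = motivation(inspiration(a))
    history.append(a)
print(history[-1])  # settles at the fixed point 0.8 / (1 - 0.4) = 4/3
```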
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that MLE of $\theta$ is consistent for $N(\theta, \theta)$ distribution I want to show that the MLE of $N(\theta, \theta) $, namely :
$$\theta_1 = \frac { \sqrt{1+\frac 4 n \sum^n x_i^2} } 2 $$ converges in probability towards the true parameter $ \theta$. I thought about showing that the mean square error converges to zero but I don't know the law of $\theta_1$.
What can I do? I want to show that it converges in probability and if possible also to find the law when $n\to \infty$ of $\theta_1$.
| By the WLLN
$$
1/n \sum X_i^2 \xrightarrow{p} \mathbb{E}X^2=Var(X)+\mathbb{E}^2X=\theta+\theta^2.
$$
as $n \to \infty$,
and
$$
g(x) = \frac{\sqrt{1 + 4 x}}{2}
$$
is a continuous transformation. Hence, by the continuous mapping theorem,
$$
g\left( \sum X_i^2/n \right) \xrightarrow{p}g(\theta + \theta^2)=\frac{ \sqrt{(1+2\theta)^2}}{2} = 1/2+\theta,
$$
as $n \to \infty$. Note that this limit is $\theta+1/2$, not $\theta$: the consistent estimator is $\hat\theta_n=\tfrac12\left(\sqrt{1+\tfrac 4n\sum x_i^2}-1\right)$, i.e. with a $-1$ inside, and the same argument then gives $\hat\theta_n \xrightarrow{p} \theta$.
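A simulation sketch of this limit (the sample size, seed, and value of $\theta$ are arbitrary choices):

```python
import math
import random

# simulation sketch (sample size and seed are arbitrary): for
# N(theta, theta) data, g(sum x_i^2 / n) approaches theta + 1/2
random.seed(0)
theta = 2.0
n = 200_000
xs = [random.gauss(theta, math.sqrt(theta)) for _ in range(n)]
m2 = sum(x * x for x in xs) / n
g = math.sqrt(1 + 4 * m2) / 2
print(g)  # close to theta + 1/2 = 2.5
```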
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proof verification: the angle subtended by a chord can never be 90 degrees I couldn't find any sources of this online, so I would like to ask if what I'm proposing below is correct, or if a similar theorem has been proven before.
We know that the angle subtended by the diameter of a circle is always $90^\circ$ (Thales' Theorem). In the image below, the angle at any point $C$ on the highlighted arc will be $90^\circ$ if $AB$ is the diameter.
Suppose now that $AB$ wasn't the diameter. Is there a theorem that says that there does not exist a point $C$ on the circle such that the angle at $C$ is $90^\circ$?
I will use a diagram to explain my reasoning.
My reasoning is as follows: if you could find me a point in the minor arc highlighted above such that $\angle ACB=90^\circ$, then if you 'push' $AB$ down to the diameter to get $DE$, we would have $\angle DCE < \angle ACB = 90^\circ$, which contradicts Thales' Theorem!
A similar reasoning can be used to explain for the major arc of the circle.
I'm posting this on MSE as I want to know:
*
*Is this a valid proof? I know it's not rigorous but is the way I'm going about it correct?
*Has this already been proven? Is there a name for this theorem or is it simply an obvious corollary of Thales' theorem (that I wasn't aware of)?
| Not the prettiest solution, but perhaps this helps:
Basically, I assumed a $90^\circ$ inscribed angle without assuming the chord passes through the center, and obtained $y=x$, so the chord must indeed pass through the center, proving back Thales' theorem.
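For what it's worth, here is a numerical sketch of the underlying fact (inscribed angles over a fixed chord are constant along an arc, and equal $90^\circ$ only for a diameter); the chord endpoints and sample points below are arbitrary:

```python
import math

# numerical sketch: inscribed angles over a fixed chord are constant
# on an arc and equal 90 degrees only for a diameter; the chord
# endpoints below are arbitrary
def pt(t):
    return (math.cos(t), math.sin(t))

def angle_at(C, A, B):
    ux, uy = A[0] - C[0], A[1] - C[1]
    vx, vy = B[0] - C[0], B[1] - C[1]
    dot = ux * vx + uy * vy
    return math.degrees(math.acos(dot / (math.hypot(ux, uy) * math.hypot(vx, vy))))

A, B = pt(0.3), pt(2.0)                                          # a non-diameter chord
angles = [angle_at(pt(t), A, B) for t in (2.5, 3.0, 4.0, 5.0)]   # points on the major arc
print(angles)  # all equal (inscribed-angle theorem) and far from 90
```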
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $S=\sum_{n=2}^\infty\frac{_nC_2}{(n+1)!}.$
Prove that $$S=\sum_{n=2}^\infty\frac{_nC_2}{(n+1)!}=\frac{e}{2}-1.$$
$$
S=\sum_{n=2}^\infty\frac{_nC_2}{(n+1)!}=\sum_{n=2}^\infty\frac{n!}{2(n-2)!(n+1)!}=\sum_{n=2}^\infty\frac{1}{2(n+1)(n-2)!}\\
=\frac{1}{2}\bigg[\frac{1}{3\cdot0!}+\frac{1}{4\cdot1!}+\frac{1}{5\cdot2!}+\frac{1}{6\cdot3!}+\dots\bigg]=\frac{1}{2}\bigg[\Big(\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\dots\Big)-\Big(\frac{2}{3\cdot0!}+\frac{5}{4\cdot1!}+\dots\Big)\bigg]\\=\frac{e}{2}-\frac{1}{2}\Big(\frac{2}{3\cdot0!}+\frac{5}{4\cdot1!}+\dots\Big)
$$
How do I proceed further as I am stuck with the last infinite series.
| First, it must be:
$$S=\sum_{n=2}^\infty\frac{^nC_2}{(n+1)!}=\sum_{n=2}^\infty\frac{n!}{2(n-2)!(n+1)!}=\sum_{n=2}^\infty\frac{1}{2(n+1)(n-2)!}=\\
\color{blue}{=\sum_{n=2}^\infty\frac{(n+1)-n}{2(n+1)(n-2)!}=\frac12\left[\sum_{n=2}^\infty\frac{1}{(n-2)!}-\sum_{n=2}^\infty\frac{n}{(n+1)(n-2)!}\right]=}\\
=\frac{1}{2}\bigg[\Big(\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\dots\Big)-\Big(\frac{2}{3\cdot0!}+\frac{\color{red}3}{4\cdot1!}+\dots\Big)\bigg]\\=\frac{e}{2}-\frac{1}{2}\Big(\frac{2}{3\cdot0!}+\frac{\color{red}3}{4\cdot1!}+\dots\Big).$$
Second, evaluating the second series is more difficult:
$$\sum_{n=2}^\infty\frac{n}{(n+1)(n-2)!}=\sum_{n=2}^\infty\frac{n^2(n-1)}{(n+1)!}=\\
\sum_{n=2}^\infty\frac{(n+1)(n^2-2n+2)-2}{(n+1)!}=\sum_{n=2}^\infty\frac{(n+1)n(n-1)-(n+1)n+2(n+1)-2}{(n+1)!}=\\
\sum_{n=2}^\infty\left[\frac{1}{(n-2)!}-\frac{1}{(n-1)!}\right]+2\sum_{n=2}^\infty\left[\frac{1}{n!}-\frac{1}{(n+1)!}\right]=\\
\left[\frac1{0!}-\frac1{1!}+\frac1{1!}-\frac1{2!}+\frac1{2!}-\frac1{3!}+\cdots\right]+2\left[\frac1{2!}-\frac1{3!}+\frac1{3!}-\frac1{4!}+\frac1{4!}-\frac1{5!}+\cdots\right]=2.$$
Thus, following lab bhattacharjee's method is more efficient.
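A quick numerical check of the closed form (truncating at 40 terms, far more than enough for machine precision):

```python
import math

# partial sum of sum_{n>=2} C(n,2) / (n+1)! versus e/2 - 1
s = sum(math.comb(n, 2) / math.factorial(n + 1) for n in range(2, 40))
print(s, math.e / 2 - 1)  # the two agree to machine precision
```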
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
What does P@1 mean in this scientific article? In this scientific article: Label Filters for Large Scale Multilabel Classification by Alexandru Niculescu-Mizil and Ehsan Abbasnejad, they use the notations P@1, P@5 and P@10 as the following table shows
I was thinking that they were $p$-values at the beginning, but it somehow doesn't make sense.
Could someone explain me what this notations mean?
| On the same page as the figure (Section 4) of the article, the notation is defined as follows:
"Following previous work on large scale multilabel classification (Weston et al., 2013; Prabhu and Varma, 2014; Bhatia et al., 2015) we use precision at k (P@k) as the
evaluation metric. Precision at k is defined as the fraction of true labels among the top k predictions made by the classifier."
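A minimal sketch of the metric for a single instance (the labels below are toy data invented for illustration):

```python
# minimal sketch of precision at k for a single instance; the labels
# are toy data invented for illustration
def precision_at_k(ranked_labels, true_labels, k):
    # fraction of the top-k predicted labels that are true labels
    return sum(1 for lbl in ranked_labels[:k] if lbl in true_labels) / k

ranked = ["cat", "dog", "fish", "bird", "cow"]  # classifier's ranking
truth = {"cat", "bird"}
print(precision_at_k(ranked, truth, 1))  # 1.0
print(precision_at_k(ranked, truth, 5))  # 0.4
```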
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$f$ is holomorphic in $B(z_0,r)\setminus\{z_0\}$ and does not take real values. Then $z_0$ is a removable singularity $f$ is holomorphic on $B(z_0,r)\setminus\{z_0\}$ and $f$ doesn't take real values, i.e. $f(z)\notin\mathbb{R}$ for all $z\in B(z_0,r)\setminus\{z_0\}$. Then $z_0$ is a removable singularity ($f$ can be extended holomorphically to $z_0$).
Well, I tried to use Riemann's theorem and show that $\exists\, 0<r'\le r$ such that $f$ is bounded in $B(z_0,r')$, but didn't succeed. Earlier I solved a similar question which demanded that $\Re(f)>0$: defining $e^{-f(z)}$, which is holomorphic and bounded, and then taking a holomorphic branch of $\log$ guarantees $f$ is holomorphic. Is there any manipulation or composition I may apply to $f$ to get a similar result?
I also tried to assume that $f$ is not bounded, so in $B(z_0,r)$ one may find $z$ such that $|f(z)|$ is arbitrary big. However, is there any kind of intermediate value principle which assures that $f$ must "cross" the real line in case $|f(z)|$ is not bounded in $B(z_0,r)$?
| EDITED: If $f$ has a pole at $z_0$, $1/f$ has a zero there, and by the Open Mapping Theorem $1/f$ would take all values in some interval near $0$.
If $f$ has an essential singularity at $z_0$, Picard says it can omit at most one value near $z_0$.
Removable is all that's left.
EDIT If you don't want to use the heavy artillery of Picard, note that if $f$ takes no real values, then since $B(z_0,r)\setminus\{z_0\}$ is connected, the values it does take lie either all in the upper or all in the lower half plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3280930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Average relative to quantity Not really sure how to ask my question. I have a list of average transaction wait times which are averaged over the count of the transactions for each zone, as shown in the list below which is sorted by the avg wait highest to lowest. When I look at the top entry I think "yuck, they waited 177 seconds and it was only 1293 transactions." When I look at the entry for zone:22 I think "wooHoo! This zone had 2600 transactions and only waited 47 seconds!"
I want to give each row a score based on the number of transactions and the wait time that they experienced. What would be the best way to do that? (I'm sure I learned how to do this 40 years ago when I was in High School, but I don't remember today what formula to apply. ;-) ) Maybe I'm overthinking this??
| The question has many answers. It depends on what is important. I can give a score as number of transactions plus number of seconds wait time, but that is probably not what you want. I would suggest giving separate scores for wait time and number of transactions (how you score these is also subjective), then take an average (or some weighted average) of the scores. Here is an example:
*
*Score for transactions is #of transactions/2609 *100, where 2609 is the maximum number of transactions in a zone
*Score for wait times is 100*28.6/wait time, where 28.6 is the minimum wait time
*Weighted score is (2*Wait time score+transactions score)/3
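The steps above can be sketched in code; the zone data below is loosely based on the numbers mentioned in the question and answer, and is illustrative only:

```python
# sketch of the suggested scoring; the zone data is loosely based on
# the numbers mentioned in the question and is illustrative only
zones = [
    {"zone": 7,  "transactions": 1293, "wait": 177.0},
    {"zone": 22, "transactions": 2609, "wait": 28.6},
]
max_tx = max(z["transactions"] for z in zones)
min_wait = min(z["wait"] for z in zones)

for z in zones:
    tx_score = 100 * z["transactions"] / max_tx      # more transactions -> higher
    wait_score = 100 * min_wait / z["wait"]          # shorter waits -> higher
    z["score"] = (2 * wait_score + tx_score) / 3     # wait weighted double
    print(z["zone"], round(z["score"], 1))
```

The 2:1 weighting is just the example above; adjust it to taste.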
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3281013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expression for ( or approximation of ) series of odd terms in series representation of Bessel function of first kind? We know:
$$J_v(z) = \sum_{k=0}^{\infty}\frac{(-1)^k}{\Gamma(k+v+1)k!}\bigr(\frac{z}{2}\bigl)^{2k+v} \ \ (Eq. 1)$$
$$s.t.\ v,k\in \mathbb N,\ z\in \mathbb R$$
courtesy of Introduction to Bessel Functions and that there are quite a few ways of approximating Bessel functions of the first kind (written above) asymptotically.
To move on, if we take $v=1$, then:
$$J_1(z) = \sum_{k=0}^{\infty}\frac{(-1)^k}{\Gamma(k+2)k!}\bigr(\frac{z}{2}\bigl)^{2k+1} \ \ \ \ (Eq.\ 2) $$
But I am interested in approximations for a series composed of the odd terms of the series representation above, $(Eq. 2)$. Namely $k \in \mathbb N_{odd}$.
| This is a generalized hypergeometric function, with
\begin{align}\sum_{k=0}^\infty\frac{(-1)^{2k+1}}{\Gamma(2k+v+2)(2k+1)!}\left(\frac z2\right)^{4k+v+2}&=-\left(\frac z2\right)^{v+2}\sum_{k=0}^\infty\frac1{\Gamma(2k+v+2)(2k+1)!}\left(\frac z2\right)^{4k}\\&=-\frac{(z/2)^{v+2}}{\Gamma(v+2)}\,{}_0F_3\!\left(;\frac32,\frac{v+2}2,\frac{v+3}2;\frac{z^4}{256}\right)\end{align}
using $(2k+1)!=4^k\left(\tfrac32\right)_k k!$ and $\Gamma(2k+v+2)=\Gamma(v+2)\,4^k\left(\tfrac{v+2}2\right)_k\left(\tfrac{v+3}2\right)_k$.
One can also view this as a difference of the alternating and non-alternating series given by
$$J_v(z)=\sum_{k=0}^\infty\frac{(-1)^k}{\Gamma(k+v+1)k!}\left(\frac z2\right)^{2k+v}$$
$$I_v(z)=\sum_{k=0}^\infty\frac1{\Gamma(k+v+1)k!}\left(\frac z2\right)^{2k+v}$$
$$\frac{J_v(z)-I_v(z)}2=\sum_{k=0}^\infty\frac{(-1)^{2k+1}}{\Gamma(2k+v+2)(2k+1)!}\left(\frac z2\right)^{4k+v+2}$$
where $I$ is the modified Bessel function.
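A numerical check of the identity $(J_v-I_v)/2=\sum_{k\text{ odd}}$, computing both sides directly from the truncated series (the 60-term cutoff and the values of $v,z$ are arbitrary choices):

```python
import math

# numerical check of (J_v - I_v) / 2 = sum of the odd-k terms, using
# the truncated series directly (60 terms is an arbitrary cutoff)
def series(v, z, sign, terms=60):
    return sum(sign**k / (math.gamma(k + v + 1) * math.factorial(k))
               * (z / 2) ** (2 * k + v) for k in range(terms))

v, z = 1, 1.7
J = series(v, z, -1)
I = series(v, z, +1)
odd = sum((-1)**k / (math.gamma(k + v + 1) * math.factorial(k))
          * (z / 2) ** (2 * k + v) for k in range(1, 60, 2))
print(abs((J - I) / 2 - odd))  # ~ 0 up to floating error
```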
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3281135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve this multivariable exponential equation? I searched if this was asked before but couldn't find a solution. I have this equation
$y^{70} = x + 500 $
$y^{50} = x + 1 $
Is it possible to solve this equation? The only thing I could do is to bring it into this form and then cross-multiply which didn't yield many results.
$y^{20} = \frac{x+500}{x+1} $
| The best I can think of at the moment is
$$
\eqalign{
& \left\{ \matrix{
50\ln y = \ln \left( {1 + x} \right) \hfill \cr
20\ln y = \ln \left( {{{x + 500} \over {x + 1}}} \right) = \ln \left( {1 + {{499} \over {x + 1}}} \right) \hfill \cr} \right. \cr
& \ln y = {1 \over {50}}\ln \left( {1 + x} \right) = {1 \over {20}}\ln \left( {1 + {{499} \over {x + 1}}} \right) \cr
& \left( {1 + x} \right)^{\,2/5} = 1 + {{499} \over {x + 1}} \cr
& \left( {1 + x} \right)^{\,7/5} = x + 1 + 499 \cr
& 1 + x = u^{\,5} \cr
& u^{\,7} - u^{\,5} = u^{\,5} \left( {u^{\,2} - 1} \right) = 499 \cr}
$$
which clearly has only one real solution, and that is easy to solve numerically.
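A numerical sketch of that last step, solving $u^5(u^2-1)=499$ by bisection and recovering $x$ and $y$ (the bracket $[1,3]$ is chosen by inspection, since $f(1)<0<f(3)$):

```python
# bisection for u^5 (u^2 - 1) = 499; the bracket [1, 3] is chosen by
# inspection (f(1) < 0 < f(3))
f = lambda u: u**7 - u**5 - 499

lo, hi = 1.0, 3.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

u = (lo + hi) / 2
x = u**5 - 1             # since 1 + x = u^5
y = (1 + x) ** (1 / 50)  # since y^50 = x + 1
print(x, y)
print(abs(y**70 - (x + 500)))  # ~ 0, so the original system holds
```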
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3281237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How do I find x as a function of x? Sorry for the title, I don't know how else to put this into words.
Basically I wanted to know how to get the result below:
I have no idea about why the graph is showing X as a diagonal line. How can X by itself be a line which is not constant?
| $$x=462+0.085x$$
$$(1-0.085)x=462$$
$$0.915x=462$$
$$x=\frac{462}{0.915}=\frac{30800}{61}$$
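Since the slope $0.085$ has magnitude less than $1$, one can also view the answer as the fixed point of the iteration $x \mapsto 462+0.085x$, which is why the graph of $x$ against itself pins down a single value (a sketch):

```python
# fixed-point iteration for x = 462 + 0.085 x; the slope 0.085 has
# magnitude < 1, so the iteration converges to 462 / 0.915
x = 0.0
for _ in range(200):
    x = 462 + 0.085 * x
print(x, 462 / 0.915)
```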
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3281372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |