| Q | A | meta |
|---|---|---|
Show that $Z(G)=\prod\limits_{i=1}^n Z(N_i)$
Lemma 7.7 (From Isaacs's book): Suppose that $G= \prod\limits_{i=1}^n\; N_i$ . Then, $Z(G)=\prod\limits_{i=1}^n Z(N_i).$
Let $Z_i=Z(N_i)$ and let $C_G(X)$ denote the centralizer of $X$ in $G$. Since $N_i \trianglelefteq G$ for all $i$ and $N_i \cap N_j = \{1\}$ for $i\neq j$, we get
$$ N_j \subset C_G(N_i)\subset C_G(Z_i) $$
and since also $N_i\subset C_G(Z_i)$, we have
$$ G= \prod N_j \subset C_G(Z_i)$$
and $Z_i \subset Z(G)$ for each $i$. Thus, $\prod Z_i \subset Z(G) ...$
Question: I'm trying to understand how to show that $Z_i \subset Z(G)$ from that, but I'm not seeing how. Thanks in advance for any hint.
| You are making a meal of this
You don't need normal subgroups or $C_G$'s to answer this problem. I am not sure what $C_G$ even means. Your notation $N_i \cap N_j$ is also imprecise but I gather you are writing $N_1$ for $N_1 \times \prod_{i=2}^{n}\{1_i\}$ for example.
I will do half of the problem using elementary methods and leave the second half to you.
Consider the element $a= (g,1,\ldots, 1)$ for $g \in Z_1$. For any $b = (g_1,g_2,\ldots, g_n) \in G$ we have $g g_1 = g_1 g$ and so $$ab = (g g_1,g_2,\ldots, g_n) = (g_1 g ,g_2,\ldots, g_n) = ba$$
It follows that $a \in Z(G)$. Likewise we see that the elements $(1,\ldots, 1, g, 1, \ldots, 1)$ are in $Z(G)$, where $g$ is in the corresponding $Z_i$. Since $Z(G)$ is a group, we can multiply all these elements to get everything in $\prod_i Z_i$.
Thus we have proved $\prod_i Z_i \subset Z(G)$.
Now use similar methods to show $Z(G) \subset \prod_i Z_i $.
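As a computational sanity check of the lemma, one can verify the counting consequence $|Z(G_1\times G_2)| = |Z(G_1)|\cdot|Z(G_2)|$ on small permutation groups (a sketch using SymPy; the choice of $D_4\times C_6$ is arbitrary):

```python
# Verify |Z(G1 x G2)| = |Z(G1)| * |Z(G2)| on a small example.
from sympy.combinatorics.named_groups import DihedralGroup, CyclicGroup
from sympy.combinatorics.group_constructs import DirectProduct

D4, C6 = DihedralGroup(4), CyclicGroup(6)
G = DirectProduct(D4, C6)

# Z(D4) = {e, r^2} has order 2; C6 is abelian, so Z(C6) = C6.
assert D4.center().order() == 2
assert C6.center().order() == 6
assert G.center().order() == D4.center().order() * C6.center().order()
```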
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4463298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Intersection of free submodules over Noetherian ring In general, it is not true that intersections of free submodules of a free module are free, see here. What if our ring is particularly nice?
Over $k[x_1, x_2]$, the intersection of free submodules $F_1, F_2 \subseteq F$ of a free module can be written as the pull-back $F_1 \times_F F_2$, which is free by the syzygy theorem.
What about more general polynomial rings? Are intersections of free submodules of a free module free for $k[x_1,\dotsc,x_n]$-modules, or for other Noetherian rings?
| The answer is no. Here is an example.
Let the ring be $k[x,y,z]$. Let $F$ be the free module of rank $4$ on $e_1,e_2,e_3,e_4$. Let $F_1$ be the free submodule generated by $e_1,e_2,e_3$ and $F_2$ be the free submodule generated by $e_1+xe_4,e_2+ye_4,e_3+ze_4$. I will let you check that $F_1\cap F_2$ is not free.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4463460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
System of derivative equations: what is the best way to solve $$
f_x = 2x\sin(z)\\
f_y = 3y^2\sin(z)\\
f_z = (x^2+y^3)\cos(z)\\
$$
where the subscripts denote the partial derivative with respect to that variable.
What we did in the solutions was the following:
We integrated $f_x$ and got $f = x^2 \sin(z)+g(y,z)$
Then we also integrated $f_y$ and got $g = y^3\sin(z)+h(z)$
Why, in the second case, is there only $h(z)$ and not $h(y,z)$ as in the first case?
And then for $f_z$ we did: $(x^2+y^3)\cos(z) \implies x^2\cos(z)+g_z = (x^2+y^3)\cos(z) \implies g_z = y^3\cos(z)$
And then we somehow got: $f(x,y,z) = (x^2+y^3)\sin(z)+c $
However, I do not understand this procedure. Can somebody explain it, or suggest a better procedure?
| $$f_x = 2x\sin(z)$$
Integration gives:
$$f = x^2 \sin(z)+g(y,z)$$
Differentiate with respect to the variable $y$:
$$f_y = \dfrac {\partial g(y,z)}{\partial y}$$
Now use $f_y = 3y^2\sin(z)$:
$$ 3y^2\sin(z)=\dfrac {\partial g(y,z)}{\partial y}$$
After integration:
$$ y^3\sin(z)+h(z)=g(y,z)$$
Since $g=g(y,z)$ the variable $x$ is not involved.
In the first case you have that:
$$f_x = 2x\sin(z)\\$$
$f=f(x,y,z)$ that's why you have the $g(y,z)$ after integration:
$$f(x,y,z) = x^2\sin(z)+g(y,z)$$
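One can confirm with a CAS that the resulting potential $f(x,y,z) = (x^2+y^3)\sin(z)+c$ really does satisfy all three equations of the system (a quick SymPy sketch, added here as a sanity check):

```python
# Verify that f = (x^2 + y^3) sin(z) + c solves f_x, f_y, f_z simultaneously.
import sympy as sp

x, y, z, c = sp.symbols('x y z c')
f = (x**2 + y**3) * sp.sin(z) + c

assert sp.simplify(sp.diff(f, x) - 2*x*sp.sin(z)) == 0
assert sp.simplify(sp.diff(f, y) - 3*y**2*sp.sin(z)) == 0
assert sp.simplify(sp.diff(f, z) - (x**2 + y**3)*sp.cos(z)) == 0
```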
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4463595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Characteristic function of $\frac{1-\cos(x)}{x^2}$. I am trying to compute the characteristic function of the density $f(x)=\frac{1-\cos(x)}{x^2}$. But I do not know how to do it, I was trying to use the residue theorem to compute
$$\phi(t)=\int_{-\infty}^\infty e^{itx}\frac{1-\cos(x)}{x^2}dx$$ but in this case it seems impossible because there are no singularities, or I don't know how to use it in this case. Any help is welcome; thank you!
| Let us write
$$
\phi(t) =\int e^{it x} \frac{2- e^{i x} - e^{-ix} }{2x^2} \,dx =
\lim_{\eta\to0^+}\int e^{it x} \frac{2- e^{i x} - e^{-ix} }{2 (x-i\eta)(x+i\eta)} \,dx
\,.
$$
The second step is important, as we will separate the integral into parts and we want each of them to converge individually.
Then we need the integrals ($\eta>0$)
$$ \int e^{i t x} \frac{1}{(x+i \eta) (x-i\eta)} \,dx= \frac{\pi}{\eta}e^{-\eta |t|},$$
$$ \int e^{i t x} \frac{e^{ix}}{(x+i \eta) (x-i\eta)}\,dx = \frac{\pi}{\eta}e^{-\eta |t+1|},$$
$$ \int e^{i t x} \frac{e^{-ix}}{(x+i \eta) (x-i\eta)}\,dx = \frac{\pi}{\eta}e^{-\eta |t-1|},$$
which can be obtained via residue theorem (please indicate, if you need help with that).
Combining these results, we obtain
$$ \phi(t) = \lim_{\eta\to0^+}\frac{\pi}{\eta} \left(e^{-\eta |t|} - \frac12 e^{-\eta |t+1|}-\frac12 e^{-\eta |t-1|}\right) = \frac{\pi}{2} ( |t+1| +|t-1|-2|t| ) \,.$$
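As a numerical cross-check of this closed form (an added sketch, not part of the derivation): since the density is even, only the cosine part of $e^{itx}$ contributes, and a truncated trapezoidal integration should agree with $\frac{\pi}{2}(|t+1|+|t-1|-2|t|)$ up to a tail error of order $2/L$.

```python
# Compare the closed form with direct numerical integration.
import numpy as np

def phi_numeric(t, L=400.0, n=1_000_001):
    x = np.linspace(-L, L, n) + 1e-7      # tiny shift avoids x = 0 exactly
    y = np.cos(t * x) * (1 - np.cos(x)) / x**2
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx   # trapezoidal rule

def phi_closed(t):
    return (np.pi / 2) * (abs(t + 1) + abs(t - 1) - 2 * abs(t))

for t in (0.0, 0.3, 0.8, 1.5):
    # tails contribute at most ~2/L, so agreement to ~0.01 is expected
    assert abs(phi_numeric(t) - phi_closed(t)) < 0.02
```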
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4463777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does $\exp(f(x)) = e^{f(x)}$? I've rarely seen the notation $\exp(f(x))$ but whenever I do I just replace it with $e^{f(x)}$. Is this correct, or do these mean something different? Also, in computer science, should these be replaced rather with $2^{f(x)}$ since the base mostly considered is $2$?
|
Is this correct, or do these mean something different?
Yes, it is correct.
And yes, it is different, but only typographically. If the exponent gets complicated, typesetting it can result in tiny symbols that might be hard to read. Using $\exp$ makes the exponent "one level" bigger. The notation $e^x$ is preferred, IMHO, for simple exponents because it is shorter and needs fewer parentheses.
Also, in computer science, should these be replaced rather with $2^{f(x)}$ since the base mostly considered is $2$?
No, of course not. Some math libs provide functions like $\operatorname{exp2}$ and $\operatorname{exp10}$ for bases 2 and 10, respectively, but $\exp$ is still base $e$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4463896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove or disprove $\sum\limits_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac18$ for $\sum\limits_{i=1}^n x_i = \frac12$($x_i\ge 0, \forall i$)
Problem 1: Let $x_i \ge 0, \, i=1, 2, \cdots, n$ with $\sum_{i=1}^n x_i = \frac12$. Prove or disprove that
$$\sum_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac18.$$
This is related to the following problem:
Problem 2: Let $x_i \ge 0, \, i=1, 2, \cdots, n$ with $\sum_{i=1}^n x_i = \frac12$. Prove that
$$\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}\le \frac{n(n-1)}{2(2n-1)^2}.$$
Problem 2 is in "Problems From the Book", 2008, Ch. 2, which was proposed by Vasile Cartoaje.
See: Prove that $\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)} \le \frac{n(n-1)}{2(2n-1)^2}$
Background:
I proposed Problem 1 when I tried to find my 2nd proof for Problem 2.
It is not difficult to prove that
$$\frac{1}{(2n-1)^4} + \frac{16n^2(n-1)^2}{(2n-1)^4}\cdot \frac{x_ix_j}{1-x_i-x_j}
\ge \frac{x_ix_j}{(1-x_i)(1-x_j)}.$$
(Hint: Use $\frac{x_ix_j}{(1-x_i)(1-x_j)}= 1 - \frac{1}{1 + x_ix_j/(1-x_i-x_j)}$
and $\frac{1}{1+u} \ge \frac{1}{1+v} - \frac{1}{(1+v)^2}(u-v)$
for $u = x_ix_j/(1-x_i-x_j)$ and $v=\frac{1}{4n(n-1)}$. Or simply
$\mathrm{LHS} - \mathrm{RHS} = \frac{(4x_ix_jn^2 - 4x_ix_j n + x_i + x_j - 1)^2}{(2n-1)^4(1-x_i-x_j)(1-x_i)(1-x_j)}\ge 0$.)
To prove Problem 2, it suffices to prove that
$$\frac{1}{(2n-1)^4}\cdot \frac{n(n-1)}{2} + \frac{16n^2(n-1)^2}{(2n-1)^4}\sum_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac{n(n-1)}{2(2n-1)^2} $$
or
$$\sum_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac18.$$
For $n=2, 3, 4$, the inequality is true.
For $n=5, 6$, numerical evidence supports the statement.
Any comments and solutions are welcome and appreciated.
| Write $p_i = 2x_i$ and note that $\sum_i p_i = 1$. Then
\begin{align*}
1 + \sum_i \frac{p_i^2}{1 - p_i}
&= \sum_i \frac{p_i}{1 - p_i} \\
&= \sum_{i,j} \frac{1}{2} \left( \frac{1}{1 - p_i} + \frac{1}{1 - p_j} \right) p_i p_j \\
&\geq \sum_{i,j} \left( \frac{2}{2-p_i-p_j} \right) p_i p_j. \tag{by AM–HM}
\end{align*}
Rearranging this inequality, we get
$$ 1 \geq \sum_{i \neq j} \frac{2p_i p_j}{2 - p_i - p_j} = 8 \sum_{i < j} \frac{x_i x_j}{1 - x_i - x_j},$$
completing the proof.
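A quick Monte-Carlo sanity check of the inequality (added; not a proof, and the trial parameters are arbitrary). Note the extremal case $n=2$, $x=(\tfrac14,\tfrac14)$ attains $\tfrac18$ exactly:

```python
# Random nonnegative x_i summing to 1/2 should never exceed 1/8.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 10))
    x = rng.random(n)
    x *= 0.5 / x.sum()                     # normalize so sum(x) = 1/2
    s = sum(x[i] * x[j] / (1 - x[i] - x[j])
            for i in range(n) for j in range(i + 1, n))
    assert s <= 0.125 + 1e-12

# Equality case n = 2, x = (1/4, 1/4):
assert abs(0.25 * 0.25 / (1 - 0.5) - 0.125) < 1e-15
```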
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4464073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Hard problem from elementary geometry - how do I discard this option for x? I came across this problem in a FB group.
Seems like a math competition problem.
See the drawing below.
First we draw the big quarter circle,
then the blue semi-circle, then the blue full circle.
Finally we draw the small yellow circle.
In this order we do the drawing.
We need to prove $r/R = 3/29$.
OK, I denote $x=r/R$.
The easy part is to compute the radius of the blue full circle. It turns out to be $3R/8$. Then it took me a lot of time and pain (about 8 hours in total maybe) but I was finally able to come up with an equation for $x=r/R$.
Here it is
WA Equation
I can explain how I got this equation (I use twice the cosine rule, other things, etc. etc.), but this is not really my issue.
Now I have one final issue to overcome. The algebraic equation has 2 roots for x.
So how do I prove $x$ cannot be $3/5$?
From the drawing it's kind of obvious $r/R$ cannot be $3/5$ but how do I discard this option in a more rigorous way? I know that "kind of obvious" doesn't count in maths.
|
Hint: As you see in the figure, there is a second circle (5) tangent to circles 1, 2 and 3. In the figure:
$R_1=100$
$R_2=\frac {R_1}4=25$
$R_3=\frac{3R_1}8 =37.5$
$R_4=\frac{3R_1}{29}$
$R_5=\frac {3R_1}5=60$
To find the equation you may use the Descartes circle theorem, taking the curvature of the enclosing circle $R_1$ with a negative sign (the other circles are internally tangent to it):
$$\big({-\frac1{R_1}}+\frac 1{R_2}+\frac 1{R_3}+\frac 1 r\big)^2=2\big(\frac1{R_1^2}+\frac 1{R_2^2}+\frac 1{R_3^2}+\frac 1 {r^2}\big)$$
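As a sanity check (an added sketch, not part of the original hint): solving the Descartes relation numerically, with the enclosing quarter circle's curvature taken negative because the other circles sit inside it, recovers both candidate values $3R_1/29$ and $3R_1/5$:

```python
from math import sqrt, isclose

R1 = 100.0
k1, k2, k3 = -1 / R1, 4 / R1, 8 / (3 * R1)   # enclosing circle: negative curvature
# Descartes: k4 = k1 + k2 + k3 +- 2*sqrt(k1*k2 + k2*k3 + k3*k1)
root = sqrt(k1 * k2 + k2 * k3 + k3 * k1)
k4_small = k1 + k2 + k3 + 2 * root
k4_big = k1 + k2 + k3 - 2 * root

assert isclose(1 / k4_small, 3 * R1 / 29)    # the yellow circle, r = 3R/29
assert isclose(1 / k4_big, 3 * R1 / 5)       # the rejected root, r = 3R/5
```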
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4464402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
The complex number $(1+i)$ is root of polynomial $x^3-x^2+2$. Find the other two roots. The complex number $(1+i)$ is root of polynomial $x^3-x^2+2$.
Find the other two roots.
$(1+i)^3 -(1+i)^2+2= (1-i-3+3i)-(1-1+2i) +2= (-2+2i)-(2i) +2= 0$.
The other two roots are found by division.
$$
\require{enclose}
\begin{array}{rll}
x^2 && \hbox{} \\[-3pt]
x-1-i \enclose{longdiv}{x^3 -x^2 + 2}\kern-.2ex \\[-3pt]
\underline{x^3-x^2- ix^2} && \hbox{} \\[-3pt]
2 + ix^2
\end{array}
$$
$x^3-x^2+2= (x-1-i)(x^2) + ix^2 + 2$
How to pursue by this or some other approach?
| Maybe all has been said already.
Polynomial with real coefficients:
1) Complex roots occur in conjugate pairs (complex conjugate root theorem):
$x_1=1+i,$ $x_2=1-i;$
2) By inspection, the real root is $x_3=-1.$
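Both claims are immediate to verify numerically (an added check):

```python
# Substitute all three claimed roots into the polynomial x^3 - x^2 + 2.
p = lambda v: v**3 - v**2 + 2

for root in (1 + 1j, 1 - 1j, -1):
    assert abs(p(root)) < 1e-12
```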
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4464516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
What is $\sum_{k = 1}^n (k \log k)\binom{n}{k}$? If the exact answer is difficult to find, what is the tightest asymptotic upper bound? While trying to work out the complexity of my program I came across the following summation:
$$\sum_{k = 1}^n (k \log k)\binom{n}{k}$$
Could you please provide a solution to this sum? If it is difficult to obtain the exact solution, could you please provide an asymptotic upper bound that is as tight as possible?
I was able to obtain the following asymptotic upper bound:
\begin{align*}
\sum_{k = 1}^n (k \log k)\binom{n}{k}
&= \mathop{O}\left(\sum k(k-1) \binom{n}{k} \right) \\
&= \mathop{O}\left(\sum n(n-1) \binom{n-2}{k-2} \right) \\
&= \mathop{O}\left(n^2 \sum \binom{n-2}{k-2} \right) \\
&= \mathop{O}(n^2 2^n)
\end{align*}
Is it possible to get a smaller upper bound, for example $O(2^n n \log n)$?
If we write $(1+x)^n=\sum\limits_{k=0}^n\binom{n}{k}x^k$, differentiate, and evaluate at $x=1$, we get the identity
$$ \sum_{k=0}^n k\binom{n}{k}=n2^{n-1}. $$
If we also use $\log k\le \log n$, we get the upper bound of $n2^{n-1}\log n$.
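A quick numerical sketch (added) comparing the exact sum with these bounds; the lower comparison uses $\log k \ge \log 2$ for $k \ge 2$ (the $k=1$ term vanishes):

```python
from math import comb, log

for n in (5, 10, 20, 40):
    s = sum(k * log(k) * comb(n, k) for k in range(2, n + 1))
    lower = (n * 2**(n - 1) - n) * log(2)   # log k >= log 2, minus the k=1 term
    upper = n * 2**(n - 1) * log(n)         # log k <= log n
    assert lower <= s <= upper
```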
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4464680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Solving equation involving roots and powers . I'm trying to solve this equation :
$\sqrt{3}\sqrt{237x^2 + \frac{224}{x^2}}x^7 + \frac{35293}{222}x^8 + \frac{2}{999}\sqrt{3}{(\sqrt{237x^2 + \frac{224}{x^2}})}^3x^5 - \frac{44968}{111}x^4 + \frac{12544}{333}=0$.
It looks pretty complicated to me, but the computer gives me exact solutions. For example one real solution is $\frac{2}{\sqrt{3}}$, and another : $-\frac{2}{3^{\frac{3}{4}}\sqrt[4]{7}}$ and so on.
Does anyone know how to obtain the exact solutions to the above equation? Thanks in advance!
EDIT:
This came from fooling around with functions that describe surfaces spanned by the roots of polynomials $x^4+c_2x^2+c_3x+c_4$ in 3 dimensions. So here $r_1+r_2+r_3+r_4=0$. Newton's identity $r_1^4+r_2^4+r_3^4+r_4^4 = 4$ (the 4 here is randomly chosen) can thus be represented as a 2D surface in 3D. With some other transformations this gives :
$-\frac{1}{3}\sqrt{6}\sqrt{3}x_1x_2^2x_3 + \frac{1}{9}\sqrt{6}\sqrt{3}x_1x_3^3 + \frac{7}{12}x_1^4 + \frac{1}{2}x_1^2x_2^2 + \frac{1}{2}x_2^4 + \frac{1}{2}x_1^2x_3^2 + x_2^2x_3^2 + \frac{1}{2}x_3^4 = 4$ .
This surface looks somewhat like an octahedron. To find the extrema on the surface I used Lagrange multipliers, leading to the equation in question. So I guess it's not very surprising that this has an exact solution, but I was surprised that the computer could find them so easily.
|
how to obtain the exact solutions to the above equation?
Not something you'd want to do by hand, but the equation can be reduced to a quartic in $\,x^4\,$, which just "happens" to factor nicely.
Let $\,y = \sqrt{237x^2 + \dfrac{224}{x^2}}\,$ then, after eliminating the denominators, the original equation can be written as the following system (with the restriction $\,y \ge 0\,$ for the real solutions):
$$
\begin{cases}
\begin{align}
p(x,y) &= 317637\,x^8 + 1998\sqrt{3}\,x^7 y + 4\sqrt{3}\,x^5 y^3 - 809424\,x^4 + 75264 &= 0
\\ q(x,y) &= 237\,x^4 - x^2y^2 + 224 &= 0
\end{align}
\end{cases}
$$
Eliminating $\,y\,$ between the two equations using resultants gives the following, courtesy WA:
$$
\begin{align}
0 = \text{res}_y(p,q) &= -7203 x^6 (13150431 x^{16} - 72718560 x^{12} + 97023744 x^8 \\ &\quad\quad\quad\quad\quad - 16990208 x^4 + 786432)
\\ &= -7203 x^6 (3 x^2 - 4) (3 x^2 + 4) (9 x^4 - 32) (189 x^4 - 16) (859 x^4 - 96)
\end{align}
$$
[ EDIT ] $\;$ The same result can be derived directly by eliminating the radicals from the original equation (just move the two radicals to one side, then square the equation and collect the terms), but the calculations are laborious, which is what I meant by "wouldn't want to do it by hand".
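For reassurance, the claimed exact roots can be checked directly against the original equation in floating point (a quick added sketch):

```python
# Plug the reported exact roots into the original equation; both residuals
# should vanish to machine precision.
from math import sqrt

def F(x):
    s = sqrt(237 * x**2 + 224 / x**2)
    return (sqrt(3) * s * x**7 + 35293 / 222 * x**8
            + 2 / 999 * sqrt(3) * s**3 * x**5
            - 44968 / 111 * x**4 + 12544 / 333)

# x = 2/sqrt(3) comes from the factor 3x^2 - 4;
# x = -2/(3^(3/4) 7^(1/4)) comes from the factor 189x^4 - 16.
for x in (2 / sqrt(3), -2 / (3**0.75 * 7**0.25)):
    assert abs(F(x)) < 1e-8
```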
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4464932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Defining the n-glob as a HIT I'm trying to define n-globs, for each n. I'm trying to do this in terms of an indexed family of higher inductive types. I think I have a working definition but it is almost intractable and hard to work with. Here is what I have so far:
Define 0-glob via:
G0 : 0-glob
Define 1-glob via:
in$_1$ : 0-glob $\to$ 1-glob
G0$_2$ : 1-glob
G1 : in$_1$(G0) = G0$_2$
Assuming we have the n-glob and the n+1-glob, define (n+2)-glob via:
in$_{n+2}$ : (n+1)-glob $\to$ (n+2)-glob
G(n+1)$_2$ : in$_{n+2}$(in$_{n+1}$(Gn)) = in$_{n+2}$(Gn$_2$)
G(n+2) : in$_{n+2}$(G(n+1)) = G(n+1)$_2$
The idea behind this is that an n+1-glob contains "a copy" of the n-glob, so we should have in$_{n+1}$:n-glob $\to$ n+1-glob. But, the n+1 glob contains another n-morphism parallel to the copied n-morphism. This is why I have the G(n+1)$_2$; it is supposed to be that parallel n-morphism. Finally, the n+1-glob has a n+1-morphism filling the frame of the two parallel n-morphisms. I think everything is well typed in the definition.
My problem is that this definition is, well, gross. The elimination principle for even just the 2-glob will not be nice to work with. The main difficulty is the "in" constructor at each level, but I do not know how I could remove this constructor. So, my question is: how are n-globs defined as HITs? I could not find anything elsewhere online.
| I doubt you'll find anyone defining n-globes as HITs because they are not homotopically very interesting: they are all contractible!
That said, your definition looks like it might be sensible. But I think an easier definition would be to induct at the bottom: the $(n+1)$-globe has two points $s,t$ and a map from the $n$-globe into $s=t$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4465099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A small category equivalent to the category of all sets. Assuming Grothendieck universes, is it possible to have a small category equivalent to the category of all sets?
Mac Lane, in Categories for the Working Mathematician, assumes the existence of a Grothendieck universe set, which forms
a category $\mathbf {Set}$ of all small sets. Skimming the book, Mac Lane never states this, but he seems to assume or suggest that the category of sets is somehow equivalent to $\mathbf {Set}$. But Mac Lane here only speaks of metacategories as some sort of pre-foundational mathematical objects (for instance he has a metacategory of all classes), so maybe one has to be more precise here. So:
Taking as a basis an axiomatic set theory with a solid notion of classes, such as von Neumann-Gödel-Bernays or Morse-Kelley, together with the existence of a (sufficiently large) Grothendieck universe set, is there a small category equivalent to the category of all sets?
No. If $C$ and $D$ are equivalent categories, then they must have the same "cardinality" of isomorphism types (i.e. they must either both have set-many isomorphism types, and then these must have the same cardinality, or both have proper-class-many isomorphism types). One way of seeing this is to note that, assuming global choice, two categories are equivalent if and only if they have isomorphic skeletons. Since $\mathbf{Set}$ has proper-class-many isomorphism types (one for each cardinal), it thus cannot be equivalent to a small category.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4465336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
p-adic formal series, evaluations and invertible elements Take an invertible formal series $f\in \mathbb{Z}_p[[T]]$ of inverse $g\in \mathbb{Z}_p[[T]]$ and let $x\in \mathbb{Z}_p$ such that the value of $f$ evaluated at $x$ exists. I have a few questions:
*
*Is $f(x)$ a $p$-adic integer? Since $\mathbb{Z}_p$ is a closed set I thought it was the case, but I have some doubts.
*Does $g(x)$ also exist?
*If $g(x)$ does exist, is it the inverse of $f(x)$? I was thinking of evaluating the identity $f(T) g(T) = 1$ at $x$, but I am still not sure of myself.
I'm new to the p-adic world and to the formal series world, so thanks a lot.
You meant the multiplicative inverse (the compositional inverse is another notion that often exists for formal series).
If the series $f(x)$ converges then it does so to an element of $\Bbb{Z}_p$, yes.
For question 2, try $f(T)=1-T$ and $x=1$: here $g(T)=\sum_{n\ge 0}T^n$, and $g(1)$ does not converge, since the terms do not tend to $0$.
If $g(x)$ converges as well then $f(x)g(x)=1$, yes (consider the truncated series, change the order of summation to make $\sum_n a_n b_{k-n}=0$ appear, show that the remainder is small).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4465504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Generalised binomial coefficients make sense when characteristic is not 2. $\newcommand{gbin}[1]{\binom{\frac{1}{2}}{#1}}\let\ge\geqslant$Consider the generalised binomial coefficient defined as
$$\gbin{n} := \frac{\left(\frac{1}{2}\right)\left(\frac{1}{2} - 1\right) \cdots \left(\frac{1}{2} - (n - 1)\right)}{n!}$$
for $n \ge 0$. (The value is $1$ when $n = 0$ by usual conventions.)
Is the following true: when $\gbin{n}$ is written in reduced form as $p/q$ with $q > 0$, then the only possible prime factor of $q$ is $2$.
(I have not picked up the question from some source, the reason why I think that the above is actually true is listed below.)
Motivation: If the above is true, then one would be able to define $\gbin{n}$ in any field $k$ whose characteristic is not $2$. Consequently, we would have the identity
$$\sqrt{1 + X} = \sum_{n \ge 0} \gbin{n} X^{n}$$
in the ring $k[\![X]\!]$ just like we have in $\Bbb Q[\![X]\!]$. Using Hensel lifts, we actually do have a square root of $1 + X$ in $k[\![X]\!]$. I believe the "canonical" square root obtained by starting with $1$ should be the one that we get when $k = \Bbb Q$.
Attempt:
Recursively, we have
$$\gbin{n + 1} = \frac{\frac{1}{2} - n}{n + 1}\gbin{n}.$$
However, it is not the case that the fraction $\frac{\frac{1}{2} - n}{n + 1}$ is always of the form $a/2^k$ and so this seems useless.
With some more effort, this answer shows that we have
$$\gbin{n} = \frac{(-1)^{n-1}}{2^{2n-1}n}\binom{2n-2}{n-1}.$$
This seems more promising as the question is then reduced to showing that
$$n \mid \binom{2n-2}{n-1}.$$
| Yes, the denominators are always powers of $2$.
For a quick answer, you should see OEIS A046161. The $n$th entry of this sequence is the denominator in the $n$th term of the power series of $(1+x)^{k/2}$ for any odd $k$. In particular, when $k=1$ we get your series of interest.
Now, in the "formulas" section of this OEIS entry, it's shown that the $n$th entry of this sequence is
$2^{b_n}$, where $b_n$ is A005187. In particular, they're always powers of $2$.
In general, given any sequence of integers, you should just check if the OEIS has information that will solve your problem without you needing to think (as indeed it did here). If your sequence isn't in the OEIS, or if you find something new about your sequence, then you should add that information so that future mathematicians can have an easier job!
For a more direct answer, though, we can show that $n \mid \binom{2n-2}{n-1}$, which will give a proof based on what you've already noticed in your question. Indeed, it's well known that the Catalan Numbers (which are all integers) are given by the formula
$$
C_n = \frac{1}{n+1} \binom{2n}{n}
$$
so that $C_{n-1} = \frac{1}{n} \binom{2n-2}{n-1}$ is always an integer, as desired.
I hope this helps ^_^
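For the skeptical reader, both facts are easy to confirm with exact rational arithmetic (an added sketch):

```python
# Check that the reduced denominator of C(1/2, n) is a power of 2,
# and that the closed form via C(2n-2, n-1) holds.
from fractions import Fraction
from math import comb, factorial

def gbin(n):
    prod = Fraction(1)
    for i in range(n):
        prod *= Fraction(1, 2) - i
    return prod / factorial(n)

for n in range(1, 40):
    q = gbin(n).denominator
    assert q & (q - 1) == 0              # denominator is a power of 2
    assert gbin(n) == Fraction((-1)**(n - 1) * comb(2*n - 2, n - 1),
                               2**(2*n - 1) * n)
```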
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4465623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving contour integral equal to zero Let $G$ be the path traversed once as shown:
Show that $\displaystyle{\int_{G}{\dfrac{1}{v^4-1} \text{d}v} = 0}$.
By partial fraction decomposition,
$\dfrac{1}{v^4 -1} = \dfrac{1}{4} \left( \dfrac{1}{v-1} - \dfrac{1}{v+1} + \dfrac{i}{v-i} - \dfrac{i}{v+i} \right)$
The singular points $v = \pm 1, \pm i$ all lie inside the contour $G$. Thus, from this theorem (*), we have
\begin{align*}
\int_{G}{\dfrac{1}{v^4-1} \text{d}v} &= \dfrac{1}{4} \left( \int_{G}{\dfrac{1}{v-1}\text{d}v} - \int_{G}{\dfrac{1}{v+1}\text{d}v} + \int_{G}{\dfrac{i}{v-i}\text{d}v} - \int_{G}{\dfrac{i}{v+i}\text{d}v} \right) \\
&= \dfrac{1}{4}\left( 2\pi i - 2\pi i + i\left( 2\pi i \right) - i \left( 2\pi i \right) \right) \\
&= \dfrac{1}{4} \left( 0 \right) \\
&= 0
\end{align*}
(*) Theorem: Let $C$ be a simple closed contour with a positive orientation such that $v_0$ lies interior to $C$, then $\displaystyle{\int_{C} {\dfrac{dv}{(v-v_0)^n}} = 2\pi i}$ for $n =1$ and $0$ when $n \neq 1$ is an integer.
Is that proof correct? If so, could you also point out if there are still theorems I have to mention to make it more accurate?
I'm trying to solve (perhaps overthink) this with the other approach:
We see that it is analytic except at $\pm 1$ and $ \pm i$.
Also, we can apply deformation of the contour $G$ by forming a leaf-like contour and forming the respective circles $C_1, C_2, C_3,$ and $C_4$. As shown here:
The integration can then be evaluated as
$$ \int_{G}{\dfrac{1}{v^4-1} \text{d}v} = \int_{C_1}{\dfrac{1}{v^4-1} \text{d}v} + \int_{C_2}{\dfrac{1}{v^4-1} \text{d}v} + \int_{C_3}{\dfrac{1}{v^4-1} \text{d}v} + \int_{C_4}{\dfrac{1}{v^4-1} \text{d}v} $$
And,
$$\int_{C_n}{\dfrac{1}{v^4-1} \text{d}v} = \dfrac{1}{4} \left( \int_{C_n}{\dfrac{1}{v-1}\text{d}v} - \int_{C_n}{\dfrac{1}{v+1}\text{d}v} + \int_{C_n}{\dfrac{i}{v-i}\text{d}v} - \int_{C_n}{\dfrac{i}{v+i}\text{d}v} \right) $$
Note that when $v_n$ lies exterior to $C_n$, then by Cauchy-Goursat theorem, $\displaystyle{\int_{C_n}{\dfrac{dv}{v-v_n}} = 0}$.
Thus, for $n = 1$,
$$\int_{C_1}{\dfrac{1}{v^4-1} \text{d}v} = \dfrac{1}{4}(0-0 + i(2\pi i)- 0) = \dfrac{- \pi }{2} $$
for $ n = 2,$
$$\int_{C_2}{\dfrac{1}{v^4-1} \text{d}v} = \dfrac{1}{4}( 2\pi i - 0 + 0-0) = \dfrac{ \pi i}{2}$$
for $ n = 3,$
$$\int_{C_3}{\dfrac{1}{v^4-1} \text{d}v} = \dfrac{1}{4}(0 - 0 + 0 - i (2\pi i) ) = \dfrac{\pi }{2}$$
for $ n = 4,$
$$\int_{C_4}{\dfrac{1}{v^4-1} \text{d}v} = \dfrac{1}{4}(0 -(2\pi i) + 0-0 ) = \dfrac{ - \pi i }{2}$$
Therefore, $$ \int_{G}{\dfrac{1}{v^4-1} \text{d}v} = \dfrac{- \pi }{2} + \dfrac{ \pi i}{2} + \dfrac{\pi }{2} + \dfrac{ - \pi i }{2} = 0$$
Did I just overcomplicate it? Is my first proof already enough? If any of these proofs are correct, could you also point out if there are still theorems I have to mention for them to make it more accurate?
| Since OP's first solution works just fine, I will provide yet another solution:
We "inflate" the contour $G$ so it becomes a CCW-oriented circle of radius $r > 1$.
Then
$$ \int_{G} \frac{\mathrm{d}z}{z^4 - 1}
= \int_{|z| = r} \frac{\mathrm{d}z}{z^4 - 1}
\stackrel{(w=1/z)}{=} \int_{|w|=\frac{1}{r}} \frac{-\mathrm{d}w/w^2}{w^{-4} - 1}
= - \int_{|w|=\frac{1}{r}} \frac{w^2}{1 - w^4} \, \mathrm{d}w $$
In the last integral, $\frac{w^2}{1-w^4}$ has no poles inside the circle $|w| = \frac{1}{r}$ since $\frac{1}{r} < 1$. Therefore the integral evaluates to $0$ by Cauchy's integral theorem.
Remark. This is an example of the "residue at infinity".
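The inflation argument is also easy to confirm numerically by parametrizing the circle $|z|=2$, which encloses all four poles (an added sketch; the trapezoidal rule on a smooth periodic integrand converges extremely fast):

```python
# Numerically integrate 1/(z^4 - 1) over |z| = 2 traversed CCW.
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 20_001)
z = 2 * np.exp(1j * theta)
vals = (1 / (z**4 - 1)) * (2j * np.exp(1j * theta))   # f(z(t)) * z'(t)
# trapezoidal rule
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(theta))
assert abs(integral) < 1e-10
```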
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4465771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Proof $ \lim_{x \to \frac{\pi}{2}} \frac{1}{\cos^{2}x} - 4 = \infty $ I want to prove that
$$ \lim_{x \to \frac{\pi}{2}} \left(\frac{1}{\cos^{2}x} - 4\right) = \infty ,$$
is my proof correct?
Proof:
Given $ M \ge 1$, choose $ \delta = \arccos\left(\sqrt{\frac{1}{M}}\right) - \frac{\pi}{2}$.
Suppose $ 0 \lt \left|x - \frac{\pi}{2}\right| \lt \delta $.
Therefore:
$ x - \frac{\pi}{2} \lt \arccos(\sqrt{\frac{1}{M}}) - \frac{\pi}{2} $
$ x \lt \arccos(\sqrt{\frac{1}{M}}) $
$ \cos\left(x\right) \lt \sqrt{\frac{1}{M}}$
$ \cos^{2}\left(x\right) \lt \frac{1}{M}$
$ \frac{1}{\cos^{2}\left(x\right)} \gt M $
$ \frac{1}{\cos^{2}\left(x\right)} - 4 \gt M $
| Given $M > 0$ we solve
$\vert\frac{1}{\cos^2 x} - 4\vert \geqslant M$ with $x \in [0, 2\pi]$
$$\frac{1}{\cos^2 x} \geqslant M + 4 \quad\vee\quad \frac{1}{\cos^2 x} \leqslant 4 - M$$
Since it suffices to consider large $M$, assume $M > 4$; then the second alternative is impossible, and $$\cos^2 x \leqslant \frac{1}{M + 4} \quad\wedge\quad x \neq \frac{\pi}{2}$$
$$-\frac{1}{\sqrt{M + 4}} \leqslant \cos x \leqslant \frac{1}{\sqrt{M + 4}}$$
Set $k = \arccos\left(\frac{1}{\sqrt{M + 4}}\right)$ and find $k \leqslant x < \frac{\pi}{2} \quad\vee\quad \frac{\pi}{2} < x \leqslant \pi - k$
So $A = [k, \frac{\pi}{2}) \cup (\frac{\pi}{2}, \pi - k]$ with $0 < k < \frac{\pi}{2}$.
$A$ is a punctured neighborhood of $\frac{\pi}{2}$, and any $\delta$ with $0 < \delta \leqslant \frac{\pi}{2} - k$ works.
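A small numerical illustration of this neighborhood (added; $M=100$ is an arbitrary choice):

```python
# With k = arccos(1/sqrt(M+4)), every x in (k, pi/2) u (pi/2, pi - k)
# satisfies 1/cos(x)^2 - 4 >= M.
import numpy as np

M = 100.0
k = np.arccos(1 / np.sqrt(M + 4))
x = np.concatenate([np.linspace(k, np.pi / 2 - 1e-6, 1000),
                    np.linspace(np.pi / 2 + 1e-6, np.pi - k, 1000)])
assert np.all(1 / np.cos(x)**2 - 4 >= M - 1e-9)
```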
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4465933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Column vectors in polar coordinates? So, we can represent the Cartesian vector $r= x\hat{x}+ y\hat{y}$ as the column vector where the entries are:
$$\begin{bmatrix}
x\\
y\\
\end{bmatrix} $$
How would the polar coordinates position vector :$r= r\hat{r}$ be represented as the entries of a column vector? Like this?
$$\begin{bmatrix}
r\\
0\\
\end{bmatrix} $$
I want to know this because I wish to discuss linear maps in different coordinates. For instance a rotation in Cartesian coordinates around the z axis is
$$r' = R\begin{bmatrix}
x\\
y\\
\end{bmatrix} $$
But the same operation in polar coordinates doesn't make much sense in this representation, since only the angle changes.
So the polar coordinates $(r,\theta)$ are a parametrization of $\mathbb R^2$, i.e. $$g:\mathbb R_+\times[0,2\pi)\to\mathbb R^2,\;(r,\theta)\mapsto(r\cos\theta,r\sin\theta)$$
Definitely, you can represent the polar coordinate as a column vector. However, this is not a linear basis so you cannot represent a point by linear summation of the basis vectors like this
$$
\hat r =x \hat x+y \hat y
$$
As you said, a linear transform $T$ viewed in a nonlinear parametrization is not linear, so it's not surprising that it cannot be represented as a matrix.
$$
[x',y']^T=T[x,y]^T\\
[r',\theta']^T=g^{-1}(T\,g([r,\theta]^T))
$$
Nevertheless, if you study differential geometry, even if the whole map is nonlinear, you could represent local infinitesimal changes (tangent vectors) by a linear transform (aka Jacobian, pushforward, differential map)
$$
d\mathbb r'=dg^{-1}\circ T\circ dg\,d\mathbb r
$$
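As a concrete illustration (an added sketch; the angle and the test point are arbitrary), viewing a rotation $T$ through the polar parametrization shows that in polar coordinates the map is simply $(r,\theta)\mapsto(r,\theta+\alpha)$:

```python
import numpy as np

def g(rt):                       # polar -> Cartesian
    r, th = rt
    return np.array([r * np.cos(th), r * np.sin(th)])

def g_inv(xy):                   # Cartesian -> polar (r > 0 assumed)
    x, y = xy
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

alpha = 0.7
T = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

r, th = 2.0, 0.3
out = g_inv(T @ g(np.array([r, th])))    # conjugate T by the parametrization
assert np.allclose(out, [r, th + alpha])
```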
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4466117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is it possible to characterize the set of $k$-periodic matrices $\in \mathbb{R} ^{n \times n}$? A $k$-periodic matrix $M$ is an $\mathbb{R}^{n \times n}$ matrix such that $M^{k+1} = M$
Is it possible to characterize this set of $k$-periodic matrices? My goal is to exploit this characterization to randomly generate an arbitrary number of these that are all different from one another. Permutation matrices are periodic but otherwise don't fit the bill: there is only a finite set of them ($n!$), and they are not necessarily $k$-periodic. I am looking for an infinite set.
| For $k=1$, every matrix is $k$-periodic, and so you can simply pick some arbitrary $n\times n$ matrices. We'll henceforth assume $k>1$.
Here is a characterization over $\mathbb C$. A matrix is $k$-periodic if and only if its Jordan normal form is, i.e. if and only if every Jordan block in its Jordan normal form is.
For blocks with nonzero eigenvalue, this eigenvalue must satisfy $\lambda^{k-1}=1$, and thus must be a $(k-1)$st root of unity. For a diagonal block, this is also sufficient; for non-diagonal blocks $J$ with nonzero eigenvalue, $J^{k-1}$ will have nonzero entries above the diagonal, and thus $J^k$ will not equal $J$ (since $J$ is invertible). So, the only possible blocks with nonzero eigenvalue are the blocks $[\lambda]$ where $\lambda^{k-1}=1$.
For a block $J$ of size $m$ with eigenvalue $0$, the matrix $J^k$ has entries of $1$ on the diagonal $k$ entries above the main diagonal, and zeros everywhere else (or is $0$ for $k\geq m$); this does not equal $J$ if $m>1$ and $k>1$. So, the only possible Jordan block with $k>1$ and eigenvalue $0$ is the block $[0]$.
In other words, a $k$-periodic matrix must be diagonalizable, with eigenvalues $\lambda$ that satisfy $\lambda^k=\lambda$. This means that one can generate $k$-periodic $n\times n$ matrices $M$ by first choosing an arbitrary invertible $n\times n$ matrix $P$, then choosing an arbitrary $n\times n$ diagonal matrix $D$ with diagonal entries satisfying $\lambda^k=\lambda$, and finally set $M=PDP^{-1}$.
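A quick numerical sketch of this recipe (written here for the question's convention $M^{k+1}=M$, so the diagonal entries must satisfy $\lambda^{k+1}=\lambda$, i.e. $\lambda=0$ or $\lambda^{k}=1$; the concrete $2\times 2$ matrices $P$, $D$ below are an arbitrary choice for illustration):

```python
import cmath

k = 5
lam = cmath.exp(2j * cmath.pi / k)     # a k-th root of unity
P, Pinv = [[1, 1], [1, 2]], [[2, -1], [-1, 1]]   # det P = 1, so Pinv is exact
D = [[lam, 0], [0, 0]]                 # admissible eigenvalues: lam and 0

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

M = matmul(matmul(P, D), Pinv)         # M = P D P^{-1}
Mk1 = M
for _ in range(k):                     # Mk1 = M^(k+1)
    Mk1 = matmul(Mk1, M)
ok = all(abs(Mk1[i][j] - M[i][j]) < 1e-9 for i in range(2) for j in range(2))
print(ok)                              # True
```

Varying $P$, $k$, and the admissible eigenvalues gives as many distinct examples as desired.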
If you're looking only for real matrices, you can do this by more carefully constructing $P$ to be a matrix defined by eigenvectors, so that the eigenvector $v$ corresponding to a complex entry $\lambda$ of $D$ is the conjugate of the eigenvector $v'$ corresponding to an entry $\overline\lambda$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4466308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the expression for the distribution of $Z = -\min\{X,0\}$ which I found is correct? To begin with, if $z < 0$, then $\mathbb{P}(Z \leq z) = 0$.
We can then consider that $z\geq 0$ in order to proceed.
Here is my attempt (edit)
\begin{align*}
\mathbb{P}(Z\leq z) & = \mathbb{P}(-\min\{X,0\} \leq z) = \mathbb{P}(\min\{X,0\} \geq -z)\\\\
& = \mathbb{P}((\min\{X,0\} \geq -z)\cap(X > 0)) + \mathbb{P}((\min\{X,0\} \geq -z)\cap(X\leq 0))\\\\
& = \mathbb{P}(X > 0 \geq -z) + \mathbb{P}(-z\leq X \leq 0)\\\\
& = \mathbb{P}(X > 0) + \mathbb{P}(-z \leq X \leq 0)\\\\
& = 1 - \mathbb{P}(X\leq 0) + \mathbb{P}(-z \leq X\leq 0)\\\\
& = 1 - F_{X}(0) - F_{X}(-z^{-}) + F_{X}(0)\\\\
& = 1 - F_{X}(-z^{-})
\end{align*}
Can you anyone critique my solution, so that I can tell if it is wrong or right?
| This was an answer to a previous version of the question
Try $X$ uniformly distributed on $[-2,-1]$ and $z=3$.
Then $Z=-\min\{X,0\}$ is uniformly distributed on $[1,2]$ and $\mathbb P(Z \le 3)=1$.
Your expression seems to suggest $\mathbb{P}(Z\leq 3) = 1 - F_{X}(-3) + F_{X}(0) - F_{X}(-3^{-}) $ $=1-0+1-0=2$, which is clearly impossible.
One possible cause is that $\mathbb{P}(X > 0 \geq -z)$ need not be equal to $\mathbb{P}(X > -z)$ and is not in this example.
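A quick Monte Carlo confirmation of this counterexample (sampling $X$ uniform on $[-2,-1]$, so that $Z=-X$ is uniform on $[1,2]$):

```python
import random

random.seed(1)
xs = [random.uniform(-2, -1) for _ in range(10_000)]
zs = [-min(x, 0) for x in xs]          # here min(X, 0) = X, so Z = -X
frac = sum(z <= 3 for z in zs) / len(zs)
print(frac)                            # 1.0, matching P(Z <= 3) = 1
```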
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4466394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\omega$-limit set of a recurrent point of a planar flow is a periodic orbit.
Let $f:U\rightarrow\mathbb{R}^2$ a $C^1$ vector field in an open set $U\subseteq\mathbb{R}^2$ and $p\in U$ a regular point of $f$. Show that if $p\in \omega_p(f)$, then $\omega_p(f)$ is a periodic orbit of $f$.
I think... using the Poincaré–Bendixson theorem, it's enough to prove that $\omega_p(f)$ has no singular points, am I right? Any hints?
| Hint: It seems much easier to argue using some standard lemmas leading to the Poincaré–Bendixson Theorem. First, since $p$ is regular, there is a flowbox around it. Further, since $f$ is continuous, being regular is an open condition, so by taking a small enough flowbox we may assume it contains no singular point. We also have an embedded compact interval $T$ passing through $p$, transverse to every orbit segment contained in the flowbox. Since $p$ is recurrent (i.e. it is contained in its own $\omega$-limit set) there is a time $t>0$ such that $\phi_{t}(p)$ is in $T$. If $\phi_{t}(p)$ is not $p$, then again by recurrence there must be another time $t'>t$ such that $\phi_{t'}(p)$ lies in the segment of $T$ strictly between $\phi_{t}(p)$ and $p$, which is impossible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4466568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can integration in which a variable is linked to (itself + constant) be solved?: $\frac{df(x)}{dx}=f(x+5)$ $$\frac{df(x)}{dx}=f(x+5)$$
I am unable to solve this kind of integration using high school mathematics.
Please help.
| Just an ansatz or guess: Because differentiating $a^x$ gives a multiple of $a^x$, and also $a^{x+\mathrm{const}}$ is a multiple of $a^x$: Try $f(x)=k\exp(ax)$, then:
$$\frac d{dx} f(x) = kae^{ax} \stackrel!= f(x+c) = ke^{a(x+c)} = ke^{ca}e^{ax}$$
with $c=5$. Dividing out $\exp(ax)$:
$$ ka \stackrel!= k\exp(ca)$$
so $k=0$ or $a = \exp(ca)$. To solve the latter, divide by $\exp(ca)$ and rewrite as
$$(-ca)\exp(-ca) = -c$$
and then apply Lambert-W:
$$-ca = W(-c) \quad \implies\quad a = -\frac1cW(-c)$$
Now for $c = 5$ there is no (real) solution because the minimum $-1/e$ of $x\mapsto xe^x$ is greater than $-5$. But I am not familiar with $W$ and its branches, so there is likely a different branch of $W$ that gives (complex) solutions.
Otherwise, either there is no solution (except the trivial $k=0$) or the ansatz was too restrictive.
That said, if you had used $c=-5$, there would be the real solution
$$a = \frac15W(5) \approx1.326725/5 = 0.265345 \approx \ln 1.30388$$
so that $f(x) = k\cdot 1.30388^x$ where you can pick $k$ as you like.
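A numerical check of the $c=-5$ case, solving $a=e^{-5a}$ (i.e. $a=\frac15W(5)$) by bisection using only the standard library:

```python
import math

# bisection for a = exp(-5a): the map a - exp(-5a) is increasing on [0, 1]
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid - math.exp(-5 * mid) < 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2
print(round(a, 6))                     # 0.265345

# check the functional equation f'(x) = f(x - 5) for f(x) = e^{a x}
f = lambda x: math.exp(a * x)
x, h = 1.0, 1e-6
deriv = (f(x + h) - f(x - h)) / (2 * h)
print(abs(deriv - f(x - 5)) < 1e-5)    # True
```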
Note: Here is a complex solution for $c=5$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4466733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there a rule for integrating $\int f(x)^{g(x)} dx$? When learning integration there are rules for different combinations of functions. For $f(x)\pm g(x)$ you apply linearity, $f(g(x))$ can be handled by substitution rule and $f(x)g(x)$ is integration by parts. But there seems to be nothing for the case where a function is raised to the power of another function.
$$\int f(x)^{g(x)} dx$$
Is there any rule or technique you can apply to the integral above or is it just a dead end?
| In the (rare) case where the integral can be calculated, the best way to begin is probably to write $f(x)^{g(x)}$ as $e^{g(x) \log(f(x))}$. After that there's not much that can be done in general.
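A tiny numerical check of the rewrite $f(x)^{g(x)}=e^{g(x)\log f(x)}$ (the particular $f$ and $g$ below are arbitrary, chosen with $f>0$):

```python
import math

f = lambda x: x**2 + 1                          # any positive function
g = lambda x: math.sin(x)
h1 = lambda x: f(x) ** g(x)                     # f(x)^g(x)
h2 = lambda x: math.exp(g(x) * math.log(f(x)))  # rewritten form
same = all(abs(h1(x) - h2(x)) < 1e-12 for x in (0.3, 1.0, 2.5))
print(same)                                     # True
```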
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4466927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $G$ be a group and let prime $p\mid \vert G\vert$. Suppose further that $\vert G\vert < p^2 $. Show that $G$ has a normal subgroup of order $p$. Suppose $G$ is a finite group and $p$ is a prime that divides $\vert G\vert$. Suppose further that $\vert G\vert < p^2 $. Show that $G$ has a normal subgroup of order $p$.
Attempt: We know there is an element of order $p$ by Cauchy's theorem. Say $x\in G$ has order $p$. Let $H$ be the cyclic subgroup generated by $x$.
My idea was to show that the only elements of order $p$ in $G$ are in $H$ and then, since conjugation preserves order of an element, $gHg^{-1} = H$.
However I am finding this difficult to prove. I would appreciate some help as to whether this is a sensible idea or if I should be doing something else?
| Let $G$ act on the set of cosets $G/H$ by left multiplication. You get a homomorphism $\varphi $ from $G$ into $S_{|G|/p}$.
The kernel is non-trivial since $|G|\not\mid (|G|/p)! $. That's because $p\mid |G|$ and $|G|/p\lt p$ by hypothesis.
Furthermore, if $x\not\in H$, then $xH\not=H$, so $x\not\in\rm{ker}\varphi $.
Thus $\rm{ker}\varphi \le H$.
Thus, since $|H|=p$, $\rm{ker}\varphi =H$.
Thanks to @Arturo Magidin
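Not needed for the proof, but here is a small sanity check of the statement for $|G|=10$, $p=5$ (so $p \mid |G|$ and $|G|<p^2$), using the dihedral group $D_5$ realized as permutations of $\{0,\dots,4\}$:

```python
def compose(p, q):                     # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

def closure(gens):                     # subgroup generated by gens
    elems, frontier = {tuple(range(5))}, [tuple(range(5))]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = compose(g, x)
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return elems

r = (1, 2, 3, 4, 0)                    # rotation, order 5
s = (0, 4, 3, 2, 1)                    # reflection
G, H = closure([r, s]), closure([r])   # |G| = 10, H = <r> of order 5
inv = lambda p: tuple(p.index(i) for i in range(5))
normal = all(compose(compose(g, h), inv(g)) in H for g in G for h in H)
print(len(G), len(H), normal)          # 10 5 True
```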
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4467055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Help to show that f is not differentiable. Let $ 0 < \alpha < 1 $. If $\vert f(x)\vert \geq \vert x\vert^\alpha$ for all $x$ and $f(0) =0$, then $f$ is not differentiable at $x=0$.
I would appreciate some help to prove this.
Edit: I supposed that $f$ is differentiable at $0$ and used the definition of the derivative as a limit. Since $f(0)=0$, I get $ \lim_{h \to 0} \left|\frac{f(h)}{h}\right| \geq \lim_{h \to 0} \frac{1}{|h|^{1- \alpha}} $ by hypothesis and the fact that limits preserve inequalities. Since the second limit goes to infinity, the first does too, contradicting the fact that $f'(0)$ exists. Is that right?
| Suppose it is. Then $\left|\dfrac{f(x) - f(0)}{x-0}\right|\ge \dfrac{1}{|x|^{1-\alpha}}\to\infty$ as $x\to 0$, so the difference quotient cannot have a finite limit, contradicting the assumption that $f$ is differentiable at $x = 0$ (where $f'(0)$ would have to be finite).
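Concretely, for $f(x)=|x|^{\alpha}$ with $\alpha=1/2$ (which satisfies the hypotheses with equality), the difference quotient at $0$ visibly blows up like $1/|h|^{1-\alpha}$:

```python
alpha = 0.5
f = lambda x: abs(x) ** alpha
qs = [f(h) / h for h in (1e-2, 1e-4, 1e-6)]
print(qs)          # approximately [10.0, 100.0, 1000.0]: no finite limit
```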
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4467170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that $(1+\frac{a^2+b^2+c^2}{ab+bc+ca})^{\frac{(a+b+c)^2}{a^2+b^2+c^2}} \leq (1+\frac{a}{b})(1+\frac{b}{c})(1+\frac{c}{a})$
Assuming $a,b,c>0$, show that
$$\Big(1+\frac{a^2+b^2+c^2}{ab+bc+ca}\Big)^{\frac{(a+b+c)^2}{a^2+b^2+c^2}} \leq \Big(1+\frac{a}{b}\Big)\Big(1+\frac{b}{c}\Big)\Big(1+\frac{c}{a}\Big).$$
I know from CS that $ab+bc+ca \leq a^2+b^2+c^2$ and $(a+b+c)^2 \leq 3(a^2+b^2+c^2)$ so the exponent is less than 3 and the second term in parenthesis greater than $1$, but I can't manage to convert this information, might work on the right hand side but seems like I'm missing a classical inequality since I'm a very beginner in this.
I noticed this inequality is symmetrical and homogeneous, maybe assuming $a+b+c=1$ could be useful...
| I use the $uvw$ method.
Let $a+b+c=3u,ab+bc+ca=3v^2,abc=w^3$ then the problem is :
$$\frac{9uv^2}{w^3}-1\geq \left(1+\frac{9u^2-6v^2}{3v^2}\right)^{\frac{9u^2}{9u^2-6v^2}}$$
A bit of algebra shows the left-hand side is monotone in $w^3$ (the right-hand side does not involve $w$), and as the inequality is homogeneous we can assume $a=b=1$, so we need to show:
$$\Big(1+\frac{2+c^2}{1+2c}\Big)^{\frac{(2+c)^2}{2+c^2}} \leq 2\Big(1+\frac{1}{c}\Big)\Big(1+c\Big)$$
We use logarithm the inequality now is :
$$\frac{(2+c)^2}{2+c^2}\ln\Big(1+\frac{2+c^2}{1+2c}\Big)\leq \ln(2)+\ln(1+c)+\ln\Big(1+\frac{1}{c}\Big)$$
We introduce the function :
$$f\left(x\right)=\frac{(2+x)^{2}}{2+x^{2}}\ln\left(1+\frac{2+x^{2}}{1+2x}\right)-\left(\ln\left(2\right)+\ln\left(1+\frac{1}{x}\right)+\ln\left(x+1\right)\right)$$
Now I don't have an idea how to conclude, except that for $x\in(0,10]$ we have:
$$\ln\left(2\right)+\ln\left(1+\frac{1}{x}\right)+\ln\left(x+1\right)\geq\left(\left(\ln\left(2\right)+\frac{\ln\left(2\right)}{x}+\ln\left(2\right)\ln\left(xe\right)\right)\left(\frac{x}{x+1}\right)+\frac{1}{x+1}\cdot3\ln\left(2\right)\right)\geq \frac{(2+x)^{2}}{2+x^{2}}\ln\left(1+\frac{2+x^{2}}{1+2x}\right)$$
Which is easier, I think.
Edit :
the inequality is :
$$\left(1+\frac{2\left(1+2x\right)}{2+x^{2}}\right)\ln\left(1+\frac{\left(2+x^{2}\right)}{1+2x}\right)\leq \ln\left(2\right)+\ln\left(1+\frac{1}{x}\right)+\ln\left(x+1\right)$$
Setting $a=\frac{\left(1+2x\right)}{2+x^{2}},b=\frac{1}{x},c=1+x$ :
$$\left(1+2a\right)\ln\left(1+a^{-1}\right)\leq \ln\left(1+b\right)+\ln\left(2\right)+\ln\left(c\right)$$
With the constraint $$a=\frac{\left(c+b^{-1}\right)}{2+b^{-2}}$$
Which is easier with derivatives.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4467314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Exercise 6.6.5 Weibel Homological Algebra For any field $k$ and any $n\in \mathbb{N}$, let $\gamma$ denote the class in $H^{2}(PGL_{n}(k);k^{*})$ corresponding to the extension
$$1 \rightarrow k^{*} \rightarrow GL_{n}(k) \rightarrow PGL_{n}(k) \rightarrow 1$$
Let $\rho : G \rightarrow PGL_{n}(k)$ be a projective representation. Then we need to show that $\rho$ lifts to a linear representation $\rho': G\rightarrow GL_{n}(k)$ if and only if $\rho^{*}(\gamma)=0$ in $H^{2}(G;k^{*})$ for the restriction map $\rho^{*}:H^{2}(PGL_{n}(k);k^{*}) \rightarrow H^{2}(G;k^{*})$
I have not been able to do much other than unwinding the definitions. Kindly provide a solution.
| Consider any homomorphism $\rho: G \rightarrow PGL_n(k)$. Let $p_0: PGL_n(k) \rightarrow GL_n(k)$ be any set-theoretic lift, $p=p_0 \circ \rho$ is a set-theoretic lift of $\rho$. $\rho$ lifts iff there is some map $t: G \rightarrow k^{\times}$ such that $p/t$ is a group homomorphism.
$p/t$ is a group homomorphism iff for all $g,h \in G$, $p(g)^{-1}p(gh)p(h)^{-1}=t(g)^{-1}t(gh)t(h)^{-1}$.
Now, define $p’: (g,h) \in G^2 \longmapsto p(g)^{-1}p(gh)p(h)^{-1} \in k^{\times}$, $p_0’: (g,h)\in PGL_n(k)^2 \longmapsto p_0(g)^{-1}p_0(gh)p_0(h)^{-1} \in k^{\times}$, so that $p’=p_0’ \circ \rho^{\times 2}$.
Then it’s enough to show that:
1: $p’$ (and in particular $p’_0$ for $\rho$ being the identity) is a $2$-cocycle.
2: $p’$ is a coboundary iff $\rho$ lifts.
The second part follows from simply unwinding the definition of coboundary given the considerations above (then $p’=-dt$).
The first part is a computation using the fact that the image of $p’$ is central in $GL_n(k)$. We have \begin{align*}
dp’(g,h,k)&=p’(h,k)p’(gh,k)^{-1}p’(g,hk)p’(g,h)^{-1}\\
&=p(h)^{-1}p(hk)p(k)^{-1}p(k)p(ghk)^{-1}p(gh)p’(g,hk)p’(g,h)^{-1}\\
&=p(h)^{-1}p’(g,hk)p(hk)p(ghk)^{-1}p(gh)p’(g,h)^{-1}\\
&=p(h)^{-1}p(g)^{-1}p(ghk)p(hk)^{-1}p(hk)p(ghk)^{-1}p(gh)p’(g,h)^{-1}\\
&=p(h)^{-1}p(g)^{-1}p(gh)p’(g,h)^{-1}\\
&=p(h)^{-1}p’(g,h)p(h)p’(g,h)^{-1}\\
&=1.
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4467748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Understanding Euler's Identity (complex) For a complex number $z=a+bi$ and a positive real value $R$, we have $e^{Rbi}=\cos(Rb)+i\sin(Rb)$. I am struggling to understand this since no matter how large $b$ or $R$ is, we have $|e^{Rbi}| \in [-1, 1]$. What is the best way to understand this intuitively? For instance, in an applied sense, is it true that
$$\bigg|\sum_{z: \ a, b \geq 0}e^{Rbi}f(z)\bigg|\leq \bigg|\sum_{z: \ a, b \geq 0}f(z)\bigg|,$$
where $f$ is some generic function and $\sum_{z: \ a, b \geq 0}f(z)>0$? This makes sense to me because each $|e^{Rbi}|$ is no larger than $1$.
| Starting with the basics, $t\mapsto e^{it} = \cos t +i\sin t$ just describes the complex unit circle, which for $t\in\mathbb{R}$ is run through counter-clockwise with constant speed $1$. This just means that one cycle around the circle takes $2\pi$ time units, which is the length of the unit circle.
So what happens if we plug in $2$, i.e. $e^{i2t}$? Now the speed is $2$, which means one cycle takes $\pi$ time units.
So by increasing the factor $R\in\mathbb{R}$, the geometric shape of the curve traced out by $t\mapsto e^{iRt}$ is always the unit circle. The only thing that changes is the speed with which this circle is orbited.
This means that $|e^{Rit}| = 1$ for all $t,R\in\mathbb{R}$, since we are never able to leave the circle.
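A one-liner check of this fact (the particular values of $R$ and $t$ are arbitrary):

```python
import cmath

devs = [abs(abs(cmath.exp(1j * R * t)) - 1)
        for R in (1, 2, 17.5) for t in (0.0, 0.7, 100.0)]
print(max(devs) < 1e-12)   # True: e^{iRt} never leaves the unit circle
```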
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4467841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
A curve that intersects $2x^3y^3-30x^2y^2+11xy^3+2x^3-38x^2y+20xy^2-13y^3+16x^2+94xy+10y^2+301x-668y+662$ with multiplicity $3$ or more let $C=2x^3y^3-30x^2y^2+11xy^3+2x^3-38x^2y+20xy^2-13y^3+16x^2+94xy+10y^2+301x-668y+662$ be a curve in $\mathbb{C^2}$. Find a conic that intersects $C$ in $(2,2)$ with intersection multiplicity $\geq 3$.
Since $C_x(2,2) \neq 0$ and $C_y(2,2) \neq 0$ it is a nonsingular point for $C$. We have defined the multiplicity of intersection of two curves in the projective plane so I imagine I need to find a conic $D=a_{00}x^2+2a_{01}xy+2a_{02}xz+a_{11}y^2+2a_{12}yz+a_{22}z^2$ such that eliminating $z$ from $D$ and $C^h$, where $C^h$ is the homogenization of $C$, the factor $(x-y)^3$ divides such polynomial.
I computed such conditions but they are quite unusable in this form so I opted for a trial-and-error approach, but all of the conics I tried have i.m. $2$, like the circle $(x-\frac{7}{4})^2+(y-3)^2=\frac{17}{16}$.
Any idea or hint? Thank you.
| If the conic is not required to be smooth, you could take the union of the tangent line to $C$ at $(2,2)$ and any other line that passes through $(2,2)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4467978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do you determine the best solution out of a set of feasible basic solutions/extreme points? I'm working on one of my exam sets, surrounding linear optimization, and could really use some help.
The assignment is essentially $\to$ find the $10$ basic solutions $\to$ find the $5$ feasible solutions out of these $\to$ find the best feasible solution out of these.
Now I'm stuck on this last part, because as far as I know the best feasible solution is defined by all components of it being $\ge0$, aka negative or zero. Whilst as far as I can see the feasible solutions I found are all positive.
So my assumption is that I either got confused on picking the feasible solutions or I misunderstood how to find the best feasible solution.
Any help would be really appreciated, as this is the last assignment of my exam prep for tomorrow, and I'd like to understand it better than I currently do before then.
(The original post includes images: the assignment introduction, the actual assignment, and our answer — finding the basic solution set and the feasible solution set/extreme points.)
| All components being $\ge 0$ (positive or $0$) is just the condition that makes a basic solution feasible. It has nothing to do with being the worst or the best.
The best solution is determined by the objective function, which as far as I can tell you haven't considered at all yet. You should write down $\tilde c$, figure out $\tilde c \cdot \tilde x$ at each of your five basic feasible solutions, and then (since you're minimizing) pick the one that gives the lowest value.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4468183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Maximum $n$-dimensional volume of set satisfying $(\forall x) \: (\forall y) \: (\| x - y \| \le 1)$ Let $S \subseteq \mathbb{R}^n$ be a set satisfying the property:
$$P(S) \equiv (\forall x \in S) \: (\forall y \in S) \: (\| x - y \| \le 1) $$
where $\| \cdot \|$ is the Euclidean norm.
Let $\mathcal{S}$ be a set of all maximal sets $S$ (maximal by set inclusion) s.t. $S$ satisfies the property $P(S)$. Then what is the value of $V = \max_{T \in \mathcal{S}} \operatorname{vol}(T) $ where $\operatorname{vol}(\cdot)$ represents the $n$-dimensional volume of a set.
I can't figure out the shape of the set for even $n=2$, is it a circle with radius $\frac{1}{2}$, or a square with side $\frac{1}{\sqrt{2}}$, I can't tell.
| Unless I am making a stupid mistake, this seems to follow from the isodiametric inequality.
A closed ball $B$ of diameter one is a maximal set, since any set $S$ that
strictly contains $B$ would have diameter bigger than one.
Now, the isodiametric inequality says that if $E\subseteq\mathbb{R}^{n}$ is a
Lebesgue measurable set, then
$$
\operatorname*{vol}\left( E\right) \leq\alpha_{n}\left( \frac
{\text{$\operatorname*{diam}$}E}{2}\right) ^{n},
$$
where $\alpha_{n}$ is the volume of the ball of radius 1. A proof of this
inequality can be found in the book of Evans and Gariepy and uses Steiner
symmetrization. There is another proof that uses the Brunn-Minkowski
inequality. I can add this proof if you want, I have it typed up.
Now, if $S$ is any other maximal set, then $\operatorname*{diam}S\leq1$, and
so
$$
\operatorname*{vol}\left( S\right) \leq\alpha_{n}\left( \frac
{\text{$\operatorname*{diam}S$}}{2}\right) ^{n}\leq\alpha_{n}\left( \frac
{1}{2}\right) ^{n}=\operatorname*{vol}\left( B\right) .
$$
Hence, the maximum volume is $\alpha_{n}\left( \frac{1}{2}\right) ^{n}$ and
it is realized by $B$.
Remark: Fix $\theta\in\left( 0,1\right) $. By
replacing $E$ with $\theta E$ and $F$ with $\left( 1-\theta\right) F$ in the Brunn-Minkowski inequality and
using the $n$-homogeneity of the Lebesgue measure we obtain that
\begin{align*}
\theta\left( \operatorname*{vol}\left( E\right) \right) ^{\frac{1}{n}
}+\left( 1-\theta\right) \left( \operatorname*{vol}\left( F\right)
\right) ^{\frac{1}{n}} & =\left( \operatorname*{vol}\left( \theta
E\right) \right) ^{\frac{1}{n}}+\left( \operatorname*{vol}\left( \left(
1-\theta\right) F\right) \right) ^{\frac{1}{n}}\\
& \leq\left( \operatorname*{vol}\left( \theta E+\left( 1-\theta\right)
F\right) \right) ^{\frac{1}{n}}.
\end{align*}
Thus the function $f\left( t\right) :=\left( \operatorname*{vol}\left(
tE+\left( 1-t\right) F\right) \right) ^{\frac{1}{n}}$ is concave in
$\left[ 0,1\right] $.
Proof of the isodiametric inequality.
It is enough to prove the isodiametric inequality for bounded sets, since
otherwise the right-hand side is infinite. If $\lambda>0$, we have that
$\operatorname*{vol}\left( \lambda E\right) =\lambda^{n}\operatorname*{vol}\left( E\right) $ and $\left( \operatorname*{diam}\left( \lambda E\right) \right) ^{n}=\lambda^{n}\left( \operatorname*{diam}E\right) ^{n}$, so without loss of generality we may assume that $\operatorname*{diam}E=1$.
Also, since $\operatorname*{diam}\left( E\right) =\operatorname*{diam}\left( \overline{E}\right) $, we can replace $E$ with $\overline{E}$ and so
we can assume that $E$ is compact. Let
$$
F:=\left\{ -\boldsymbol{x}:\,\boldsymbol{x}\in E\right\} .
$$
Then $F$ is compact, $E+F$ is compact, and so it is Lebesgue measurable. By
the previous remark the function $f\left( t\right) :=\left( \operatorname*{vol}\left( tE+\left( 1-t\right) F\right) \right) ^{\frac{1}{n}}$ is
concave in $\left[ 0,1\right] $, and so
$$
\frac{1}{2}\left( \operatorname*{vol}\left( E\right) \right) ^{\frac{1}
{n}}+\frac{1}{2}\left( \operatorname*{vol}\left( F\right) \right)
^{\frac{1}{n}}\leq\left( \operatorname*{vol}\left( \frac{1}{2}E+\frac{1}
{2}F\right) \right) ^{\frac{1}{n}}.
$$
But $\operatorname*{vol}\left( F\right) =\operatorname*{vol}\left(
E\right) $, and so the previous inequality becomes
$$
\operatorname*{vol}\left( E\right) =\operatorname*{vol}\left( F\right)
\leq\operatorname*{vol}\left( \frac{1}{2}E+\frac{1}{2}F\right) .
$$
If $\boldsymbol{x}\in\frac{1}{2}E+\frac{1}{2}F$, then $\boldsymbol{x}
=\frac{\boldsymbol{x}^{\prime}-\boldsymbol{x}^{\prime\prime}}{2}$, where
$\boldsymbol{x}^{\prime},\boldsymbol{x}^{\prime\prime}\in E$, and so
$$
\Vert\boldsymbol{x}\Vert=\frac{1}{2}\Vert\boldsymbol{x}^{\prime}-\boldsymbol{x}^{\prime\prime}\Vert\leq\frac{1}{2},
$$
which shows that $\frac{1}{2}E+\frac{1}{2}F\subseteq\overline{B\left(
0,\frac{1}{2}\right) }$. Hence,
$$
\operatorname*{vol}\left( E\right) \leq\operatorname*{vol}\left( \frac
{1}{2}E+\frac{1}{2}F\right) \leq\operatorname*{vol}\left( \overline{B\left(
0,\frac{1}{2}\right) }\right) =\frac{\alpha_{n}}{2^{n}}.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4468371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $x$ and $y$ are positive integers such that $5x+3y=100$, what is the greatest possible value of $xy$? I first wrote $y$ in terms of $x$. in $5x + 3y = 100$, I subtracted $5x$ from both sides to get $3y = 100 - 5x$. Therefore, $y = \frac{100 - 5x}{3}$. I substituted this into $xy$ to get $x(\frac{100 - 5x}{3}$) – You want to find the maximum value of this. This can be simplified as $\frac{-5x^2 + 100x}{3}$. I factored out a $-5$ to get $\frac{-5(x^2 - 20x)}{3}$. Completing the Square, I got $\frac{-5((x - 10)^2 - 100)}{3}$, or $\frac{-5(x - 10)^2 + 500}{3}$. The maximum value of this is when $x = 10$, since it makes $-5(x - 10)^2$ equal $0$ ($0$ is the greatest value because otherwise it would be negative). So my answer was $\boxed{\frac{500}{3}}$, but I'm pretty certain that isn't correct because the product of two positive integers can't be a fraction. Can someone help me out?
~
EDIT: I found a case where $x=11$. Then, the product is $165$. Not sure if that is the maximum, though.
| Sometimes while finding maxima over positive integers you have to combine other techniques with the traditional ones. In your problem you get $xy=\frac{-5(x-10)^2+500}{3}$. Note that $500\equiv 2 \pmod 3$ and $-5\equiv 1 \pmod 3$, so for $xy$ to be an integer we must have $(x-10)^2\equiv 1 \pmod 3$, i.e. $x-10$ can't be a multiple of $3$. The best we can then do is $|x-10|=1$; of the two candidates, $x=11$ gives the integer $y=15$, while $x=9$ gives $y=55/3$, which is not an integer. Thus the maximum is $xy=165$.
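A brute-force check over all positive integer solutions of $5x+3y=100$ confirms this:

```python
# enumerate x with y = (100 - 5x)/3 a positive integer, track the max of x*y
best = max(
    (x * y, x, y)
    for x in range(1, 20)
    for y in [(100 - 5 * x) // 3]
    if (100 - 5 * x) % 3 == 0 and y > 0
)
print(best)  # (165, 11, 15)
```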
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4468673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
Infinitude of prime number of the form $x^2+14y^2$ In the book "Primes of the form $x^2+ny^2$", David Cox had shown that:
$$p=x^2+14y^2 \Longleftrightarrow (-14/p)=1 \;\text{and}\; (x^2+1)^2=8 \mod p \; \text{has an integer solution.} $$
Does this imply that there are infinitely many primes of the form $x^2+14y^2$? It is easy to see that there are infinitely many primes for which the equation $(x^2+1)^2=8 \mod p$ has an integer solution, but I don't have any clue how to check whether some of them can take $-14$ as their quadratic residue.
| COMMENT.- I bet there is an infinity of primes and I rely for this on the following:
for instance, the four identities $$(6a+3)^2+14(3b\pm2)^2=6(6a^2+6a+21b^2\pm28b+11)-1\\(6a+1)^2+14(3b)^2=6(6a^2+2a+21b^2)+1\\(6a+5)^2+14(3b)^2=6(6a^2+10a+21b^2+4)+1$$ There is an infinity of values for the factors of $6$ in the three $RHS$ of the four identities which give solutions of the equation $$x^2+14y^2=6z\pm1$$ and I find it very plausible to suppose that a lot of them are primes (every prime greater than $3$ is of the form $6z\pm1$).
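This of course doesn't prove infinitude, but a quick brute force (with a simple trial-division primality test) shows that primes of the form $x^2+14y^2$ are plentiful; the search bounds below are arbitrary:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

reps = sorted({x * x + 14 * y * y
               for x in range(1, 40) for y in range(1, 20)
               if x * x + 14 * y * y < 1000 and is_prime(x * x + 14 * y * y)})
print(reps[:5])    # [23, 127, 137, 151, 233], e.g. 23 = 3^2 + 14*1^2
```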
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4468883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Show markov property for a s.p. with independent increment I am reading a textbook and get confused on one example which shows $X_t$ with independent increments and $X_0=0$ is a Markov process. In the proof\begin{align*}\mathbb{P}(X_t\leq x\mid X_{t_1},\ldots ,X_{t_n})
& =\mathbb{P}(X_t-X_{t_n}+X_{t_n}\leq x\mid X_{t_1},\ldots ,X_{t_n}) \\
& =\mathbb{P}(X_t-X_{t_n}+X_{t_n}\leq x\mid X_{t_n}) \\
& =\mathbb{P}(X_t\leq x\mid X_{t_n})
\end{align*}
I can understand that since the increment $X_t-X_{t_n}$ is independent, we can remove $X_{t_1},\ldots ,X_{t_{n-1}}$ from the condition, but how do we deal with the $X_{t_n}$? Is it also independent of $X_{t_1},\ldots ,X_{t_{n-1}}$?
| After doing some small manipulations with the idea of independent increments, I think I have figured out a method:
\begin{align*}
\mathbb{P}(X_t \leq x | X_{t_1},\ldots ,X_{t_n})
&= \mathbb{P}(X_t-X_{t_n} \leq x-X_{t_n} | X_{t_1},\ldots ,X_{t_n}) \\
&= \mathbb{P}(X_t-X_{t_n} \leq x-X_{t_n} \mid X_{t_n}-X_{t_{n-1}}, X_{t_{n-1}}- X_{t_{n-2}}, \ldots, X_{t_2} - X_{t_1}, X_{t_1}-X_0) \\
&= \mathbb{P}(X_t-X_{t_n} \leq x-X_{t_n} |X_{t_{n}}) \\
&= \mathbb{P}(X_t \leq x | X_{t_{n}})
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
How can I prove that $\Bbb{P}(X>0)\geq \frac{\Bbb{E}(X)^2}{\Bbb{E}(X^2)}$ I have the following problem:
Let $X$ be a nonnegative random variable such that $\Bbb{E}(X^2)<\infty$ Then show that $$\Bbb{P}(X>0)\geq \frac{\Bbb{E}(X)^2}{\Bbb{E}(X^2)}$$
I would like to get some hints, because I did the following:
Let me define $U:=X$ and $V:=\Bbb{1}_{\{X>0\}}$. Then clearly $U\in L^2$. But also $V\in L^2$ since $$\int_\Omega (\Bbb{1}_{\{X>0\}})^2 \Bbb{P}(d\omega)=\int_\Omega \Bbb{1}_{\{X>0\}}\Bbb{P}(d\omega)=\Bbb{P}(X>0)\leq 1$$In addition we have that $\Bbb{E}(V^2)=\Bbb{E}((\Bbb{1}_{\{X>0\}})^2)=\Bbb{E}(\Bbb{1}_{\{X>0\}})=\Bbb{P}(X>0)$. Now applying the Cauchy-Schwarz inequality we get $\Bbb{E}(|UV|)^2\leq \Bbb{E}(U^2)\cdot\Bbb{E}(V^2)=\Bbb{E}(U^2)\cdot\Bbb{P}(X>0)$ so $$\Bbb{P}(X>0)\geq \frac{\Bbb{E}(|\Bbb{1}_{\{X>0\}}X|)^2}{\Bbb{E}(X^2)}$$
Now I see that $\Bbb{E}(|\Bbb{1}_{\{X>0\}}X|)=\Bbb{E}(\Bbb{1}_{\{X>0\}}X)=\Bbb{E}(X)$ since the lebesgue mesure doesn't see the nullsets.
Is this correct?
Thanks for your help.
| From Cauchy-Schwarz inequality:
$\Bbb{E}(X)=\Bbb{E}(1_{\{X>0\}}X)\leq \sqrt{\Bbb{E}(1_{\{X>0\}}^2)}\cdot \sqrt{\Bbb{E}(X^2)} = \sqrt{P(X>0)}\cdot \sqrt{\Bbb{E}(X^2)}$. Therefore
$\Bbb{E}(X)^2 \leq P(X>0)\cdot \Bbb{E}(X^2)$
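A quick numeric sanity check on a small discrete example (the distribution below is arbitrary):

```python
# X = 1 w.p. 0.2, X = 3 w.p. 0.1, X = 0 w.p. 0.7
p_pos = 0.2 + 0.1                  # P(X > 0)
EX = 1 * 0.2 + 3 * 0.1             # E[X]   = 0.5
EX2 = 1 * 0.2 + 9 * 0.1            # E[X^2] = 1.1
print(p_pos, EX**2 / EX2)          # ~0.3 and ~0.227: the inequality holds
```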
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Does triangle inequality imply Euclidean metric? We know that the distances between vertices in a Euclidean space satisfy the triangle inequality, but is the converse true? Specifically, given a complete graph $K_n$ of $n>2$ vertices, along with $\binom{n}{2}$ distances $\{d_{ij}>0,\;\;1\le i < j\le n\}$ that satisfy triangle inequality, can the graph be embedded into a Euclidean space (with an arbitrary finite dimensions)?
| No. Consider the situation $V=\{a,b,c,d\}$ with distances
$d(a,d)=2$ and $d(a,b)=d(a,c)=d(b,c)=d(b,d)=d(c,d)=1$.
The triangle inequality is satisfied: for pairwise distinct points the left-hand side of $d(x,z)\leq d(x,y)+d(y,z)$ is at most $2$ while the right-hand side is at least $2$, and the cases with repeated points are trivial.
We have $d(a,d)=d(a,b)+d(b,d)$ and $d(a,d)=d(a,c)+d(c,d)$. In Euclidean spaces these imply that $b$ and $c$ are midpoints of $[a,d]$. Therefore $b$ and $c$ must be equal (but they aren't).
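The triangle-inequality claim is also easy to verify exhaustively:

```python
from itertools import combinations, product

d = {frozenset(p): 1 for p in combinations("abcd", 2)}
d[frozenset("ad")] = 2
dist = lambda x, y: 0 if x == y else d[frozenset((x, y))]
triangle = all(dist(x, z) <= dist(x, y) + dist(y, z)
               for x, y, z in product("abcd", repeat=3))
print(triangle)    # True
```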
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$-2(-1)^n\zeta(n)=\lim_{m\to\infty}m^n\left(\displaystyle\prod_{k=1}^n\Gamma\left(1+\frac{\zeta_n^k}{m}\right)^{-2}-1\right).$
Theorem
Let $\zeta_n=e^{2\pi i/n}$ be the $n$-th root of unity, then
$$-2(-1)^n\zeta(n)=\lim_{m\to\infty}m^n\left(\displaystyle\prod_{k=1}^n\Gamma\left(1+\frac{\zeta_n^k}{m}\right)^{-2}-1\right).$$
Just wanted to share this - I thought this up a couple of years ago, but have no proof - it was a side thought. I'll post a proof when I get a moment, but if anyone can see how to do it that would be great :-)
| The series $\log\Gamma(1-z)=\gamma z+\sum_{m=2}^\infty\zeta(m)z^m/m$ (convergent for $|z|<1$) can be obtained from the Weierstrass product for $\Gamma$. Put $z=-t\zeta_n^k$ where $t\to0$ and $\color{red}{n>1}$, and sum over $k$: $$\sum_{k=1}^n\zeta_n^{km}=\begin{cases}n,&n\mid m\\0,&n\nmid m\end{cases}\implies\sum_{k=1}^n\log\Gamma(1+t\zeta_n^k)\underset{m=nk}{=}\sum_{k=1}^\infty\zeta(nk)(-t)^{nk}/k.$$ Thus $\prod_{k=1}^n\Gamma(1+t\zeta_n^k)^{-2}=\exp\big(-2\zeta(n)(-t)^n+o(t^n)\big)=1+2(-1)^{n-1}\zeta(n)t^n+o(t^n)$.
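A numerical check of the theorem for $n=2$ (where $\zeta_2=-1$, so the product is $\Gamma(1-1/m)^{-2}\Gamma(1+1/m)^{-2}$ and the claimed limit is $-2\zeta(2)=-\pi^2/3$):

```python
import math

m = 1000
prod = (math.gamma(1 - 1/m) * math.gamma(1 + 1/m)) ** -2
lhs = -2 * math.pi**2 / 6          # -2 * zeta(2) = -pi^2 / 3
rhs = m**2 * (prod - 1)
print(lhs, rhs)                    # both approximately -3.2899
```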
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove this summation of floor function I am supposed to show that
$$
\left\lfloor \frac{n+1}{2} \right\rfloor + \left\lfloor \frac{n+2}{2^2} \right\rfloor + \left\lfloor \frac{n+2^2}{2^3} \right\rfloor + \cdots + \left\lfloor \frac{n+2^k}{2^{k+1}} \right\rfloor + \cdots = n
$$
For all positive integers $n \geq 1$. (Note that this summation has only finitely many nonzero terms, since $0 < \frac{n+2^k}{2^{k+1}} < 1$ for large $k$.)
I can't seem to find a good way to tackle this exercise.
| So the first idea is to consider the base 2 expansion of $n$ (this is natural because you have the power of 2 everywhere). We then write
$$
n=\sum_{k\geq 0} a_k 2^k, \quad a_k\in\{0,1\}
$$
Then we can actually compute each of the terms in your sum:
$$
\lfloor \frac{n+2^s}{2^{s+1}}\rfloor=\lfloor \sum_k (a_k +\delta_{k,s})2^{k-(s+1)}\rfloor=\sum_{k\geq s+1} a_k 2^{k-(s+1)}+\lfloor \frac{a_s+1}{2}+\sum_{k< s} a_k 2^{k-(s+1)}\rfloor
$$
The last sum inside the floor function can be bounded by:
$$
\sum_{k< s} a_k 2^{k-(s+1)}\leq \frac{1}{2^{s+1}}\sum_{k<s} 2^k=\frac{1}{2^{s+1}}\frac{2^{s}-1}{2-1}=\frac{1}{2}-\frac{1}{2^{s+1}}<\frac{1}{2}
$$
Then we can just notice that if $a_s=0$ the whole term inside the floor function gives $1/2+\sum \dots<1/2+1/2$ hence the floor will be $0$. Similarly if $a_s=1$ one gets that the floor term is $1$ hence we simply have.
$$
\lfloor \frac{n+2^s}{2^{s+1}}\rfloor=\sum_{k\geq s+1} a_k 2^{k-(s+1)}+a_s
$$
Now the result follows from taking the sum over $s$, reversing the order of summation and using the geometric series (as I did above). I will leave you to fill in the details.
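Independently of the proof, the identity is easy to confirm by brute force:

```python
# Brute-force confirmation (not a substitute for the proof):
# sum over k >= 0 of floor((n + 2^k) / 2^(k+1)) equals n.
# Terms with 2^k > n vanish, since then 0 < (n + 2^k)/2^(k+1) < 1.
def floor_sum(n):
    total, k = 0, 0
    while (1 << k) <= n:                   # only the nonzero terms
        total += (n + (1 << k)) >> (k + 1)
        k += 1
    return total

assert all(floor_sum(n) == n for n in range(1, 10_000))
print("identity verified for n = 1 .. 9999")
```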
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
What is my error in this $\nabla_{\vec{v}} f(x,y,z)$ at $\vec{a} = (-1, -1, 4)$ and $\vec{v} = (\frac{\sqrt 2}{2}, \frac{1}{2}, \frac{1}{2})$ problem I want to find gradient of $f(x,y,z) = \sqrt{xyz}$ in the direction of $\vec{v}$ at a point $\vec{a}$.
That is, $\nabla_{\vec{v}} f(x,y,z)$ at $\vec{a} = (-1, -1, 4)$ and $\vec{v} = (\frac{\sqrt 2}{2}, \frac{1}{2}, \frac{1}{2})$
I computed the gradient of $f(x,y,z)$ to be
$\left(\begin{matrix} \sqrt{\frac{yz}{4x}} \\ \sqrt{\frac{xz}{4y}} \\ \sqrt{\frac{xy}{4z}} \end{matrix}\right)$.
So my answer for the value of gradient at $\vec{a}$ is $\left(\begin{matrix} 1 \\ 1 \\ \frac{1}{4}\end{matrix}\right)$.
But the answer given is
$\left(\begin{matrix} \frac{yz}{\sqrt{4xyz}} \\ \frac{xz}{\sqrt{4xyz}} \\ \frac{xy}{\sqrt{4xyz}} \end{matrix}\right)$.
So the accepted answer for value of gradient at $\vec{a}$ is $\left(\begin{matrix} -1 \\ -1 \\ \frac{1}{4}\end{matrix}\right)$.
Why is my answer wrong?
| You need to be careful to differentiate the form $\sqrt{ax}$ when $a$ can be negative (in this case $a=yz$)
In particular, $\sqrt{ax}=\sqrt{a}\sqrt{x}$ does not hold when $a$ is negative. I guess this is where you went wrong.
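A central-difference sanity check (not part of the original answer) reproduces the book's signs at the given point:

```python
import math

# Numerically differentiate f = sqrt(xyz) at (-1, -1, 4);
# note xyz = 4 > 0 there, so the square root is defined nearby.
def f(x, y, z):
    return math.sqrt(x * y * z)

def num_grad(p, h=1e-6):
    g = []
    for i in range(3):
        hi = list(p); hi[i] += h
        lo = list(p); lo[i] -= h
        g.append((f(*hi) - f(*lo)) / (2 * h))
    return g

g = num_grad((-1.0, -1.0, 4.0))
print([round(v, 6) for v in g])   # [-1.0, -1.0, 0.25]
```

The numerical gradient is $(-1,-1,\tfrac14)$, agreeing with the accepted answer rather than with $(1,1,\tfrac14)$.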
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
on the equivalence of notions of adjunctions let $C$ and $D$ be categories and $F : C \leftrightarrows D : G$ be a pair of functors. as is well known, there is a natural correspondence of
*
*pairs of inverse isomorphisms $α : \hom_D (FX, Y) \leftrightarrows \hom_C (X,GY) : β$ natural in $X$ and $Y$, and
*transformations $η \colon 1_C → GF$ and $ε \colon FG → 1_D$ such that $(εF)(Fη) = 1_F$ and $(Gε)(ηG) = 1_G$.
this correspondence essentially stems from the yoneda lemma, naturally in $X$ or $Y$ relating
*
*transformations $\hom_D(FX,–) → \hom_C(X,G–)$ to elements in $\hom_C(X,GFX)$, so to transformations $1_C → GF$,
*transformations $\hom_C(–,GY) → \hom_D(F–,Y)$ to elements in $\hom_D(FGY,Y)$, so to transformations $FG → 1_D$.
now, apparently, again by yoneda, the triangle identities for $1_F$ and $1_G$ relate to $βα = \mathrm {id}$ and $αβ = \mathrm {id}$ respectively.
– i’m having trouble seeing that!
let’s look at $βα = \mathrm {id}$: so $α$ and $β$ are morphisms of functors $C^\mathrm {op} × D → \mathrm {Set}$ with a composition
$$\hom_D (F–,–) → \hom_C(–,G–) → \hom_C(F–,–),$$
which by locally invoking the yoneda embedding for all $X$ in $C$ says that some morphism of functors $F → F$ is the identity. apparently this morphism is precisely $(εF)(Fη)$ in the correspondence? why is that?
| If you have a natural (in both variables) transformation $\Phi:\mathbf D(F-,-)\to \mathbf C(-,G-)$, and hence an adjunction $F\dashv G:\mathbf D\to \mathbf C$, observe that any component of the unit $\eta_C:C\to GF(C)$ can be recovered as $\Phi(\operatorname{id}_{F(C)})$, and similarly any component of the counit $\varepsilon_D=\Phi^{-1}(\operatorname{id}_{G(D)})$.
Hence from one hand $\Phi^{-1}\circ\Phi (\operatorname{id}_{F(C)})=\operatorname{id}_{F(C)}$, as they are inverses. If however you start with $\operatorname{id}_{F(C)}$ and apply $\Phi$ you get $\eta _C$; applying $\Phi^{-1}$, you get exactly $\epsilon_{F(C)}\circ F(\eta_C)$, since the universal element of the natural transformation $\Phi^{-1}_{-,GF(C)}: \mathbf C(-,GF(C))\to \mathbf D(F-,F(C))$ is $\varepsilon_{F(C)}$; in fact, this last assertion is contained in the proof of Yoneda's lemma. Thus this shows that $\epsilon_{F(C)}\circ F(\eta_C)=\operatorname{id}_{F(C)}$; to obtain the other triangular identity, you can reason dually.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4469892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that if $(X,p)$ is complete, then so is $(X,d)$, where $\frac{3}{2022}p(x,y) \le d(x,y) \le \min\{1,p(x,y)\}$, for any $x,y \in X$. Let $d$ and $p$ be two metrics on $X$ such that
$$\frac{3}{2022}p(x,y) \le d(x,y) \le \min\{1,p(x,y)\},$$
for all $x,y \in X$. Show that if $(X,p)$ is complete, then so is $(X,d)$.
What I think:
Let $(x_n)$ be any Cauchy sequence in $(X,d)$. I know that the goal is to show that $(x_n)$ is convergent in $(X,d)$. It can be done by showing that $(x_n)$ is a Cauchy sequence in $(X,p)$, since $(X,p)$ is complete. But, I didn't know yet how to apply to there.
Since $(x_n)$ is a Cauchy sequence in $(X,d)$, then for any $\epsilon>0$, there is $N \in \Bbb N$ such that for all $m,n \ge N$, we have $d(x_n,x_m)<\epsilon$. I got stuck when I want to show that $p(x_n,x_m)<\epsilon$. Is it true that by hypothesis, $d(x_n,x_m)<p(x_n,x_m)<\epsilon$? If yes, how to approach it?
Any help? Thanks in advance.
| Let $(x_n)$ be $d$-Cauchy.
Then $p(x_m, x_n) \le \frac{2022}{3}d(x_m, x_n)\to 0 \text{ as } m, n\to \infty$.
Hence $(x_n)$ is $p$-Cauchy.
Since $(X, p)$ is complete, $x_n\to x$ in $(X, p)$.
$d(x_n, x) \le \min\{1,p(x_n,x)\}\to 0 \text{ as } n\to \infty$.
Hence $(x_n) \to x$ in $(X, d)$.
The "$\min$" function is continuous: $(x_n) \to x$ and $(y_n) \to y$ imply $\min\{x_n,y_n\}\to \min\{x,y\}$.
$(x_n) =(1, 1,1,\ldots) \to 1$
$(y_n) =(p(x_n, x)) \to 0$
Hence $\min\{1,p(x_n,x)\}\to \min\{1,0\}=0$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4470054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that a compact Riemann surface of genus two is hyperelliptic
Suppose $M$ is a compact Riemann surface of genus 2. Then $M$ is hyperelliptic.
This is an old question and there have been some relevant posts on MSE. But I wonder if there is a more direct method. For example, I want to construct the meromorphic function giving the degree-two cover of $\Bbb P^1$. It seems if $\alpha_1, \alpha_2$ form a basis of the space of holomorphic differentials $H^0(M, \Omega^1)$, then $f=\frac{\alpha_1}{\alpha_2}$ is of degree 2. $f$ is obviously a meromorphic function since it is independent of chart choice. And it is nonconstant with degree $>1$, since degree $1$ would make $M$ isomorphic to the Riemann sphere. But I don't know how to carry on to show the degree is exactly 2; maybe by counting $f^{-1}(\infty)$?
I am just learning the Riemann-Roch theorem and I am unfamiliar with the language of algebraic geometry. Could you explain in the language of Riemann surfaces? I would appreciate any help or hint! A proper reference is also OK.
| Let $M$ be a genus $g$ compact Riemann surface. Then every abelian differential of first kind (i.e., an element of $H^0(M,\Omega_M^1)$) has $2g -2$ zeros (Gauss-Bonnet: $\int_M c_1(\Omega^{1}_M)= 2g - 2$). Therefore, when $g=2$, your function $f$ has two poles.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4470263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that if $(X_n,d_n)$ is complete, then $(Z,s)$ is complete. Let $\{(X_n,d_n):n \in \Bbb N\}$ be a sequence of complete metric spaces, and let $Z=\prod_{n=1}^\infty X_n$. For each $x=(x_n),y=(y_n) \in Z$, define a metric $s$ on $Z$ as follows:
$$s(x,y) = \sum_{n=1}^\infty \frac{1}{2^n} \cdot \frac{d_n(x_n,y_n)}{1+d_n(x_n,y_n)}.$$
Show that $(Z,s)$ is complete.
Attempt:
Let $(x_k)$ be arbitrary Cauchy sequence in $(Z,s)$, where $x_k = (x_{nk}:n \in \Bbb N)$. Then, for any $\epsilon>0$, there exists $N \in \Bbb N$ such that for all $j,k \ge N$, we have
$$d_n(x_{nj},x_{nk}) \le \sum_{n=1}^\infty \frac{1}{2^n} \cdot \frac{d_n(x_{nj},x_{nk})}{1+d_n(x_{nj},x_{nk})} =: s(x_j,x_k) < \epsilon. \qquad (1)$$
(Since $\sum \frac{1}{2^n}$ is convergent, then $\lim \frac{1}{2^n}=0$ and so, $(\frac{1}{2^n})$ is bounded (by $1$). Thus, since $\sum \frac{1}{2^n} \frac{d_n}{1+d_n}< \epsilon$, then $\frac{d_n}{1+d_n}<\epsilon$.)
Hence, $(x_{nk})$ is a Cauchy sequence in $(X_n,d_n)$ for all $n \in \Bbb N$. Since each $(X_n,d_n)$ is complete, $(x_{nk})$ converges to some $y_n \in X_n$. Define $y:=(y_n:n \in \Bbb N) \in Z$.
We want to show that $x_k \to y$ in $(Z,s)$.
Let $\epsilon>0$ be given. Since $x_{nk} \to y_n$ for each $n \in \Bbb N$, there exists $M_n \in \Bbb N$ such that for all $k \ge M_n$, we have
$d_n(x_{nk},y_n) < \epsilon$. Let $M:= \max\{M_n: n \in \Bbb N\}$. Then $M \in \Bbb N$ and for each $k \ge M$, we have
\begin{align*}
s(x_k,y) := \sum_{n=1}^\infty \frac{1}{2^n} \cdot \frac{d_n(x_{nk},y_n)}{1+d_n(x_{nk},y_n)} &\le \sum_{n=1}^\infty \frac{1}{2^n} \cdot d_n(x_{nk},y_n) \\
&< \frac{\epsilon}{2} + \frac{\epsilon}{4} + \frac{\epsilon}{8} + \cdots \\
&= \epsilon \cdot \left(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots \right) \\
&= \epsilon.
\end{align*}
Thus, any Cauchy sequence in $(Z,s)$ is convergent, so $(Z,s)$ is complete.
Is it correct? Also, I'm in doubt whether the (reason of) inequality on $(1)$ is correct. I got stuck here. Any help please? Thanks in advanced.
| The right idea, but a couple of comments:
*
*The second part of inequality (1) is true by definition of a Cauchy sequence in our new product space; however, the first inequality needs to be justified. This is the key step to show that each individual component is Cauchy.
*The proof of convergence uses the fact that $M = \max\{ M_n : n \in \mathbb{N} \}$. However, since we are dealing with an infinite sequence, this is not necessarily finite (i.e. $M = \sup \{ M_n : n \in \mathbb{N} \}$ could actually be $+\infty$).
To fix this, we just need to somehow split the problem to deal with the finite and infinite parts separately. Here are two hints for the two parts:
*
*Fix $n \in \mathbb{N}$ and $\epsilon_n > 0$. Let $\epsilon > 0$ be some number to be chosen later. Let's denote $d_n = d(x_{nj}, x_{nk})$ to simplify things. Clearly $2^{-n} \frac{d_n}{1 + d_n} \leq \sum_{n \geq 1} 2^{-n} \frac{d_n}{1 + d_n} < \epsilon$. Rearranging, we have $d_n < \frac{2^n \epsilon}{1 - 2^n \epsilon}$ (which makes sense as long as $2^n \epsilon < 1$). Now we can make the right-hand side less than $\epsilon_n$ by making $\epsilon$ small enough, and so $d_n < \epsilon_n$ for all $j, k > N_n$. This shows that the sequence is Cauchy in each of its $n$th components.
*Fix $\epsilon > 0$. Note that $\frac{d_n}{1 + d_n} \leq 1$, and $\sum_{n \geq 1} 2^{-n} = 1$. We can bound the metric part by 1 for $n$ large enough, say $n > N$, so that this half of the sum is less than $\epsilon / 2$. Now we only have to control the sum for $n \leq N$. These are now finitely many terms and we can control this sum to also be less than $\epsilon / 2$ by using the fact that each component is Cauchy (i.e. as you have shown, by choosing $\epsilon_n = \epsilon / 2$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4470680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Kuratowski's Maximal Principle and Tukey's Lemma Context: self-study.
Source work: Smullyan and Fitting: Set Theory and the Continuum Problem (rev. ed. 2010) chapter $4$: Superinduction, well ordering and choice: $\S 5$: Maximal principles.
We are given a form of Kuratowski's maximal principle, paraphrased for clarity:
Let $S$ be a set which is closed under chain unions. Then every element of $S$ is a subset of a maximal element of $S$ under the subset relation.
(We have proved previously in S&F that the above follows from Axiom of Choice.)
Then we are also given a form of Tukey's lemma:
Let $S$ be a non-empty set of finite character. Then every element of $S$ is a subset of a maximal element of $S$ under the subset relation.
In this context "of finite character" means:
Let $A$ be a class. $A$ is of finite character iff for all sets $x$, $x \in A$ iff every finite subset of $x$ is in $A$.
We are given this Lemma $5.4$:
Suppose $A$ is of finite character. Then: (1) $A$ contains, with each element $x$ all subsets of $x$ as well (that is, $x$ is swelled); (2) $A$ is closed under chain unions.
Now we are given Proposition $5.5$, where we have:
From this lemma we have the following: Kuratowski's maximal principle implies Tukey's lemma.
From the definition of Kuratowski's maximal principle and Tukey's lemma, it appears that all we need to do is invoke the result that a class which is closed under chain unions is of finite character.
But this is not what we have. Lemma $5.4$ says: "A class of finite character is closed under chain unions."
Hence this seems to me that this shows that Tukey's Lemma implies Kuratowski's Maximal Principle, not the other way round.
What have I missed?
| I'm not sure what is confusing you here. The proof of Proposition 5.5 is immediate.
Assume Kuratowski. We prove Tukey. So let $S$ be a non-empty set of finite character. By Lemma 5.4, $S$ is closed under chain unions. By Kuratowski, every element of $S$ is a subset of a maximal element of $S$ under the subset relation. Done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4470844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can we construct the complex numbers by using $i^2 = -1$ as an axiom? Informally we can construct the complex numbers by starting with the real numbers and adding a new element $i$ which satisfy $i^2 = -1$. If we want to formalize this construction we usually use quotient fields, which is analogous to the informal construction, but a bit different on a technical level. However the informal construction still works and is enough for us to start doing computations and algebra with complex numbers. I was wondering if it was possible to make it formal, e.g like this:
We define an algebraic structure $ℂ$ such that:
*
*$ℂ$ is a field.
*$ℝ$ is a subfield of $ℂ$.
*There exists an element $i$ in $ℂ$ such that $i*i=-1$
Does this work as a rigorous construction? To me this doesn't look any less rigorous than e.g. the Peano axioms, and it captures everything from the informal construction.
| Define the complex numbers as $\Bbb C =\{a+bi\mid a,b\in\Bbb R\}$, where the sums are just formal sums (without evaluation).
Define addition componentwise as
$$(a+bi)+(c+di) = (a+c)+(b+d)i$$
and multiplication (multiply out)
$$(a+bi)\cdot(c+di) = ac+adi+bci+bdi^2$$
which becomes with $i^2=-1$
$$(ac-bd)+(ad+bc)i$$
One can show that $\Bbb C$ is a field.
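A minimal sketch of this pair model in Python (using exact rationals; the helper names are mine, not standard):

```python
from fractions import Fraction

# A "complex number" is a pair (a, b) standing for a + bi, with the
# multiplication rule obtained by setting i*i = -1.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

def inv(u):                      # inverse of a nonzero pair (a, b)
    a, b = u
    n = a * a + b * b
    return (Fraction(a, 1) / n, Fraction(-b, 1) / n)

i = (0, 1)
assert mul(i, i) == (-1, 0)      # i*i = -1, as required
z = (3, 4)
assert mul(z, inv(z)) == (1, 0)  # every nonzero element is invertible
print("pair model: i*i = -1 and inverses exist")
```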
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4471025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is the following combinatorial relation correct? I am confused regarding the following problem in combinatorics ( statistical mechanics ).
Suppose I have the following relation : $$\sum_{i=1}^N n_i=\bar{N}$$
I have to find out the number of possible distribution sets, that satisfy this. For example, suppose, $n_1+n_2=\bar{N}=5.$ This is basically the number of partitions of $5$ into two elements. There are $6$ ways of doing this : (0,5);(5,0);(1,4);(4,1);(2,3);(3,2).
I'm told that this is equivalent to the number of ways of distributing the constant $\bar{N}$ among $N$ elements. Using star and bar (bose-einstein counting), this is equivalent to :
$$\Omega=\frac{(\bar{N}+N-1)!}{\bar{N}!(N-1)!}$$
So, this is comparable to the number of ways of putting $\bar{N}$ balls into $N$ boxes. Since these are just numbers, we are considering identical balls. For example $(n_1,n_2)=(0,5)$ can be thought of as $0$ balls in box $n_1$ and $5$ identical balls (1's) in box $n_2$.
But now suppose, we have distinguishable balls. So, now I want to know the number of ways of putting $\bar{N}$ distinguishable balls into $N$ boxes.
Suppose there is a combination set {$n_i$}$=${$n_1,n_2,n_3,.....,n_N$} that satisfies the above constraint. The number of ways of attaining this configuration would be given by :
$$\omega_i=\frac{\bar{N}!}{n_1!n_2!...n_N!}$$
Now, I need to sum over all possible configuration sets {$n_i$}'s.
This is given by : $$\sum_{\{n_i\}} \omega{\{n_i\}}$$
That would give me the total number of possible configurations for distinguishable balls.
In that case this summation would have exactly $\Omega=\frac{(\bar{N}+N-1)!}{\bar{N}!(N-1)!}$ terms in it, as $\Omega$ is the number of possible arrangements for the identical particles.
However, I also know that the total number of possible arrangements for $\bar{N}$ distinguishable balls into $N$ boxes would be $N^{\bar{N}}$. $\Omega$ is the number of distribution sets that satisfy the constraint $\sum_{i=1}^{N}n_i=\bar{N}$.
So can I then claim,
$$\sum_{\{n_i\}} \omega{\{n_i\}} =\sum_{j=1}^{\frac{(\bar{N}+N-1)!}{\bar{N}!(N-1)!}} \omega_j=N^{\bar{N}}$$
Is this claim correct ?
For example, in identical balls case, $\omega_i=1$ for all {$n_i$}'s and so, total number of possible configurations would be simply $\sum_{i=1}^{\Omega} 1 = \Omega=\frac{(N+\bar{N}-1)!}{\bar{N}!(N-1)!}$
| You are correct. This is a consequence of the multinomial theorem, which states that for all complex numbers $a_1,\dots,a_m$, that
$$
(a_1+\dots+a_m)^N=\sum_{\substack{k_1+\dots+k_m=N \\ k_1,\,\dots,\,k_m\ge 0}}\frac{N!}{k_1!\cdots k_m!}a_1^{k_1}\cdots a_m^{k_m}
$$
In particular, if you set $a_1\gets 1,a_2\gets 1,\dots, a_m\gets 1$, then the LHS is $m^N$, while the RHS is the sum of $N!/(k_1!\cdots k_m!)$ over all possible choices of the integer vector $(k_1,\dots,k_m)$.
This is exactly analogous to how you can prove the sum of all binomial coefficients $\binom{N}{k}$ for a fixed $N$ is equal to $2^N$, by taking the binomial theorem $(x+y)^N=\sum_{k=0}^N \binom{N}{k}x^ky^{N-k}$, and setting $x\gets 1,y\gets 1$.
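A brute-force check of this specialization for a few small $(m,N)$:

```python
from itertools import product
from math import factorial, prod

# Verify  sum over k1+...+km = N of N!/(k1!...km!)  equals  m**N.
def multinomial_sum(m, N):
    total = 0
    for ks in product(range(N + 1), repeat=m):
        if sum(ks) == N:
            total += factorial(N) // prod(factorial(k) for k in ks)
    return total

for m, N in [(2, 5), (3, 4), (4, 3)]:
    assert multinomial_sum(m, N) == m ** N
print("sum of multinomial coefficients equals m**N")
```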
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4471602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
There exist real numbers $x$ and $y$ such that $\cos(x+y)=\cos x+\cos y$ I’m trying to either prove or disprove the statement that there exist real numbers x and y such that $\cos(x+y)=\cos x+\cos y$, though I quickly encountered a brick wall after expanding the LHS:
$\cos x\cos y - \sin x\sin y = \cos x + \cos y$
My question is, is there a different approach to solving this problem, or is what I started doing the right way? I couldn’t find the problem online, so I would really appreciate your responses
| You can restrict yourself to the case $y=-x$.
Then the equation becomes $\cos 0=\cos x+\cos(-x)$, i.e. $1=2\cos x$, so you only have to find a value $x$ such that $\cos x = 1/2$; since $\cos$ is continuous with $\cos 0=1$ and $\cos\pi=-1$, such an $x$ exists (for instance $x=\pi/3$), and you are done.
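A one-line numeric check of this choice:

```python
import math

# With y = -x the equation reads cos(0) = 2*cos(x); x = pi/3 gives cos(x) = 1/2.
x = math.pi / 3
y = -x
lhs = math.cos(x + y)
rhs = math.cos(x) + math.cos(y)
print(lhs, rhs)   # both approximately 1.0
```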
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4471783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Integral of angle between tangent and line I have a problem like this but still haven't figured out how to solve it or what this concept is called in math.
Let's say I have a continuous and differentiable curve $S: y=f(x)$ from $A$ to $B$. $L$ is an arbitrary line with the formula $y=ax+b$. For every point $M$ within $A-B$ in $S$, $\alpha$ is the angle between the tangent of $S$ at $M$ and $L$. What is $\int \alpha(x) \, \mathrm{d}x$ from $A$ to $B$?
In fact I want to find the line(s) L where $\int \alpha(x) \, \mathrm{d}x$ is minimum. Thank you
I think this is unsolvable analytically, but easy to do with a computer.
$\int \alpha(x) \, \mathrm{d}x$ = $\int \arctan{a} \, \mathrm{d}x$ - $\int \arctan{f'(x)} \, \mathrm{d}x$
And the later part is complicated or unsolvable, I tried with wolfram Alpha.
| Since $\arctan(-x) = -\arctan(x)$ and $\arctan$ is strictly increasing, if
$$
\int_R\arctan(f(x))dx=0
$$
then it must be that
$$
\int_R f(x)dx=0
$$
So if you have a value of $a$ such that
$$
\int_A^B \arctan(a-f'(x)) dx = 0
$$
then you must also have
$$
\int_A^B (a-f'(x)) dx = 0
$$
so the solution is $a=\dfrac{f(B)-f(A)}{B-A}$, the slope of the chord from $A$ to $B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4472088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conditional probability with an extra term
Consider an experiment having three possible outcomes that occur with probabilities $p_1$, $p_2$, and $p_3$, respectively. Suppose n independent trials of the experiment are conducted and let $X_i$ denote the number of times the $i^{th}$ outcome occurs.
*
*What is the density of $X_1 + X_2?$
*Find $P(X_2 = y \space|\space X_1 + X_2 = z), y = 0, 1, 2, ... ,z$?
I have solved the first question correctly, but there is something wrong with my solution for the second part below; I have explained my approach for the second part.
My approach:
$P(X_2 = y \space|\space X_1 + X_2 = z) = \frac{P(X_1 + X_2 = z \space| \space X_2 = y) \space P(X_2 = y)} {P(X_1 + X_2 = z)} = \frac{P(X_1 = z-y\space| \space X_2 = y) \space P(X_2 = y)} {P(X_1 + X_2 = z)}$
Now, RHS terms:
$P(X_1 = z-y\space| \space X_2 = y) = {n-y \choose z-y} p_1^{z-y}p_3^{n-z}$
$P(X_2 = y) = {n \choose y} p_2^{y}(1-p_2)^{n-y}$
$P(X_1 + X_2 = z) = {n \choose z} (p_1+p_2)^{z}(p_3)^{n-z}$ (This term was calculated in the first part of the question and hence its verified.)
Substituting these terms in the equation, we get:
$P(X_2 = y \space|\space X_1 + X_2 = z) = \frac{{n-y \choose z-y} p_1^{z-y}p_3^{n-z} {n \choose y} p_2^{y}(1-p_2)^{n-y}}{{n \choose z} (p_1+p_2)^{z}(p_3)^{n-z}}$
On simplifying the RHS, we get:
$RHS = {z \choose y} (\frac{p_1}{p_1+p_2})^{z-y} (\frac{p_2}{p_1+p_2})^y (1-p_2)^{n-y}$
but the answer given in the book is: ${z \choose y} (\frac{p_1}{p_1+p_2})^{z-y} (\frac{p_2}{p_1+p_2})^y $
I have an extra term $(1-p_2)^{n-y}$ in my answer; I have rechecked it multiple times, and it doesn't seem like a calculation mistake. Am I making any conceptual mistakes?
PS.: The question is from Introduction to Probability Theory, Hoel Port Stone, Chapter-3 Q22.
Okay, it seems the first term of the RHS in the original equation is wrong, as I can't use $p_1$ and $p_3$ because now the sample space has reduced; changing it to the following gives the correct answer:
$P(X_1 = z-y\space| \space X_2 = y) = {n-y \choose z-y} (\frac{p_1}{p_1+p_3})^{z-y} (\frac{p_3}{p_1+p_3})^{n-z}$
Right?
Also, can we directly state the answer using some argument along the lines of conditional probability?
| It is immediate that $X_1+X_2$ has binomial distribution with parameters $n$ and $p_1+p_2$.
For the second question I would start with:
$$P\left(X_{2}=y\mid X_{1}+X_{2}=z\right)P\left(X_{1}+X_{2}=z\right)=P(X_2=y,X_1+X_2=z)=$$$$P\left(X_{1}=z-y,X_{2}=y,X_{3}=n-z\right)$$
leading to:
$$P\left(X_{2}=y\mid X_{1}+X_{2}=z\right)\frac{n!}{z!\left(n-z\right)!}\left(p_{1}+p_{2}\right)^{z}p_{3}^{n-z}=\frac{n!}{\left(z-y\right)!y!\left(n-z\right)!}p_{1}^{z-y}p_{2}^{y}p_{3}^{n-z}$$
then deleting factors that show up on both sides:
$$P\left(X_{2}=y\mid X_{1}+X_{2}=z\right)\frac{1}{z!}\left(p_{1}+p_{2}\right)^{z}=\frac{1}{\left(z-y\right)!y!}p_{1}^{z-y}p_{2}^{y}$$
then we find:
$$P\left(X_{2}=y\mid X_{1}+X_{2}=z\right)=\binom{z}{y}\left(\frac{p_{1}}{p_{1}+p_{2}}\right)^{z-y}\left(\frac{p_{2}}{p_{1}+p_{2}}\right)^{y}$$
Under condition $X_1+X_2=z$ random variable $X_2$ appears to have binomial distribution with parameters $z$ and $\frac{p_2}{p_1+p_2}$
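The identity can be confirmed exactly with rational arithmetic. This is a sketch with an arbitrarily chosen $n=6$ and $p=(1/2,1/3,1/6)$ — illustrative values, not from the question:

```python
from fractions import Fraction
from math import comb, factorial

n = 6
p1, p2, p3 = Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)

def joint(k1, k2, k3):        # multinomial pmf P(X1=k1, X2=k2, X3=k3)
    coeff = factorial(n) // (factorial(k1) * factorial(k2) * factorial(k3))
    return coeff * p1**k1 * p2**k2 * p3**k3

q = p2 / (p1 + p2)
for z in range(n + 1):
    denom = sum(joint(z - y, y, n - z) for y in range(z + 1))
    for y in range(z + 1):
        lhs = joint(z - y, y, n - z) / denom
        rhs = comb(z, y) * q**y * (1 - q)**(z - y)
        assert lhs == rhs      # exact equality of Fractions
print("conditional law of X2 given X1+X2=z is Binomial(z, p2/(p1+p2))")
```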
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4472246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Estimation of $\lvert f' \rvert$ via Cauchy-integral formula I am asked to prove that if $f: \mathbb{C} \rightarrow \mathbb{C}$ is entire and $\lvert f(z) \rvert \leq 1$ on $\overline{B_1(0)}$, then $\lvert f'(z) \rvert \leq 4$ on $\overline{B_\frac{1}{2}(0)}$.
What I did is to choose any $z \in \overline{B_\frac{1}{2}(0)}$. I then easily observed that $\overline{B_\frac{1}{2}(z)} \subseteq \overline{B_1(0)}$. Then:
$$
\lvert f'(z) \rvert = \left \lvert \frac{1}{2\pi i} \oint_{\partial K_\frac{1}{2}(z)} \frac{f(w)}{(w-z)^2}~\mathrm{d}w\right \rvert \leq \frac{1}{2\pi} \oint_{\partial K_\frac{1}{2}(z)} \frac{\overbrace{\lvert f(w) \rvert}^{\leq 1}}{\underbrace{\lvert w-z\rvert^2}_{=\frac{1}{4}}}~\mathrm{d}w \leq 2
$$
So I got an even better estimate... . Do you know if I made a mistake? Thank you.
I am also personally interested in what is the sharpest estimation of this kind. I know that $\lvert f' \rvert = 1$ is possible e.g. for $f(z) = z^2$.
| Your bound seems alright, but there is a better bound by using the Schwarz-Pick theorem. Assume that $f(z)$ is non-constant, so $|f(z)|=1$ does not occur in the interior of $|z|\leq 1.$ Using that and applying the theorem to the holomorphic map $f(z)$ from the unit disk to the unit disk, one gets that $$\frac{|f'(z)|}{1-|f(z)|^2}\leq \frac 1{1-|z|^2},$$ for $|z|<1$, which implies that $|f'(z)|\leq \frac 1{1-|z|^2}.$ With $|z|\leq \frac 1 2,$ this shows that $|f'(z)|\leq \frac 4 3.$ The bound is sharp as can be seen from the example $$f(z)=\frac {2z-1}{z-2}=\frac{z-\frac 1 2}{\frac 1 2z-1},$$ which maps unit disk to itself with $|f'(\frac 1 2)|=\frac 4 3.$
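A quick numerical illustration (not in the original answer) of the sharpness example:

```python
import cmath

# Check that f(z) = (2z-1)/(z-2) sends the unit circle to the unit
# circle, and that |f'(1/2)| is 4/3 = 1/(1 - |1/2|^2).
def f(z):
    return (2 * z - 1) / (z - 2)

for k in range(12):
    z = cmath.exp(2j * cmath.pi * k / 12)
    assert abs(abs(f(z)) - 1) < 1e-12   # boundary maps to boundary

h = 1e-6
fp = (f(0.5 + h) - f(0.5 - h)) / (2 * h)   # central difference for f'(1/2) = -4/3
print(abs(fp))
```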
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4472453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If you're watching a live stream and n number of minutes behind the live feed, how much time would it take while watching at s speed to catch up? Sometimes I'll tune into YouTube livestreams a few minutes late, but I want to watch the whole thing and catch up with the live feed, so I'll start from the beginning at 2x speed or 1x speed. I'd like to find a way to calculate exactly how much time it will take for me to reach the live feed.
So if I start watching 20 minutes behind at 2x speed, after 20 minutes of watching, I'll be 10 minutes behind, after 10 minutes, I'll be 5, etc.
I would guess this would look something like this
$ total = \frac{20}{2} + \frac{10}{2} + \frac{5}{2} + ... $
How would you create a general equation for this? Could you use summation, product, or something like a Taylor series? I recall an old VSauce Video about SuperTasks which feels relevant here as well.
| First, analogize to chasing a piece of driftwood that is in a stream, whose current is going $1$ mile per hour. Assume that you are running parallel to the driftwood, along the bank of the stream, at $x$ miles per hour. Also assume that you start out $k$ miles behind the driftwood.
Since you are gaining $(x-1)$ miles for every hour that you run, and since you start out $k$ miles behind the driftwood, it will take you $~\dfrac{k}{x-1}~$ hours to catch the driftwood.
The analogy is apt. Assume that you start out $k$ minutes behind the show. Also assume that you watch the show at $r$ times the normal speed. For example, $r=2$ implies that you are watching the show at twice the normal speed.
This means that for every minute of real time, the live feed advances by $1$ minute while you advance by $r$ minutes. This implies that you are catching up to the metaphoric driftwood at a rate of $(r-1)$ minutes per minute.
Note that here the distance to catch up is measured in minutes, rather than (for example) miles.
So, you are catching up by $(r-1)$ minutes, for every minute that you watch, and the driftwood starts out $k$ minutes ahead.
So, it will take you $~\dfrac{k}{r-1}~$ minutes to catch up.
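A short simulation of the pass-by-pass process from the question confirms the closed form $k/(r-1)$:

```python
# Watching a backlog of b minutes at speed r takes b/r real minutes,
# during which b/r new minutes of stream accumulate.
def catch_up_time(k, r, eps=1e-9):
    t, backlog = 0.0, float(k)
    while backlog > eps:
        t += backlog / r
        backlog /= r
    return t

for k, r in [(20, 2.0), (20, 1.5), (5, 1.25)]:
    print(k, r, round(catch_up_time(k, r), 6), k / (r - 1))
```

For $k=20$ minutes behind at $2\times$ speed this gives $20$ minutes, matching the geometric series in the question.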
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4472651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Two objects coming towards each and then one goes back. How to find the number of trips I saw a question in physics
A car is moving at a constant speed of 40 km/h along a straight road which heads towards a large vertical wall and makes a sharp 90° turn by the side of the wall. A
fly flying at a constant speed of 100 km/h, starts from the wall towards the car at an instant when the car is 20 km away, flies until it reaches the glasspane of the car and return to the wall at the same speed. It continues to fly between the car and the wall till the car makes the 90 degrees turn.
How many trips has it made between the car and the wall?
Solution
Suppose the car is at a distance x away when the fly is at the wall. The time taken by the fly to reach the car is x/(100+40) = x/140. The distance travelled by fly during that time is 100x/140 = 5x/7. Now the fly goes back to the wall which will take 5x/700 = x/140 hours time. By that time,the car travels 40x/140 = 2x/7 km.(3x/7 Km from the wall).
When the 2nd trip of fly towards the car starts, it is at 3x/7 Km from the car.
Now the book says
distance of car at start of 1st trip = $$20$$
distance of car at start of 2nd trip = $$(3/7)×20$$
distance of car at start of 3rd trip = $$(3/7)^2 × 20$$
distance of car at start of 4th trip = $$(3/7)^3×20$$
Distance of car at start of $n^{th}$ trip = $$(3/7)^{n-1}×20$$
Trips will go on till the distance becomes 0, which will happen when n becomes infinity.
I can't understand how we can establish the pattern between the trip number and the distance at the start of that trip.
(PS: The solution I wrote doesn't have the exact words used in the book)
I will be grateful if you could help.
Thank you
| As noted in another answer, the simplest way to find the total distance traveled by the fly is to first calculate the total flying time ($30$ minutes), then multiply that by the fly's (constant) speed of $100$ km/h; this gives the correct total distance, which is $50$ km.
Finding the distance per leg of the trip is a little trickier, since there are two types of flight legs: those toward the moving car, and those toward the stationary wall. To fly to the stationary wall from an initial distance $X$ takes time $X / v_f$, where $v_f=100$ km/h is the fly's speed. To fly to the moving car from an initial distance $X$, on the other hand, takes time $X/(v_f+v_c)$, where $v_c=40$ km/h is the car's speed. The distance between the car and the wall is decreased by $t \cdot v_c$ in either case. So for flights toward the car, $X \rightarrow X - X/(v_f + v_c)\cdot v_c=X\cdot(1- v_c/(v_c+v_f))=(5/7)X$ and the fly travels $(5/7)X$; and for flights toward the wall, $X \rightarrow X - X/v_f\cdot v_c=X\cdot(1-v_c/v_f)=(3/5)X$ and the fly travels $X$. Putting these together, for each round trip starting at the car, the distance is transformed by $X\rightarrow (3/7)X$ and the fly travels $(10/7)X$. So the fly's total travel distance is
$$
(20 \text{ km})\cdot\frac{10}{7}\cdot\left(1 + \frac{3}{7}+\left(\frac{3}{7}\right)^2+\left(\frac{3}{7}\right)^3+\ldots\right)=(20 \text{ km})\cdot\frac{10}{7}\cdot\frac{7}{4}=50 \text{ km}.
$$
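As a sanity check, the leg-by-leg computation can be simulated directly. This short Python sketch uses the stated values (fly 100 km/h, car 40 km/h, 20 km starting gap) and converges to the 50 km total:

```python
# Simulate the fly's legs explicitly: toward the car, then back to the wall.
vf, vc, gap = 100.0, 40.0, 20.0   # fly speed, car speed, starting distance (km)
fly_dist, toward_car = 0.0, True
for _ in range(200):              # 100 round trips; the gap shrinks by 3/7 each
    t = gap / (vf + vc) if toward_car else gap / vf
    fly_dist += vf * t            # the fly always moves at vf
    gap -= vc * t                 # the car closes on the wall at vc
    toward_car = not toward_car
print(fly_dist)                   # ≈ 50.0 km
```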
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4472961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
In a restaurant, 16 men & 10 women are seated on 26 chairs at a round table. How many possible ways men are always seated together? In a restaurant, 16 men and 10 women are seated on 26 chairs at a round table. Find the total number of possible ways such that 16 men are always sitting next to each other.
I've arrived at the solution $16!\cdot10!$. But I suppose the answer cannot be as simple as this; is my approach and answer correct?
It is challenging for me to frame the solution correctly.
| On a circular table, $n$ objects can be arranged in $(n-1)!$ ways.
First arrange all the women on the circular table.
$10$ women can be arranged in $(10-1)! = 9!$ ways on a circular table.
Now we need to arrange $16$ men such that all of them are together.
We can arrange these men in any of the $10$ gaps between the women. (Assume that we have tied up all the men together and are considering them as a single element).
For arranging this element (all the men) in $10$ gaps, there are $10$ ways.
But these men can also change their order, so multiply $10$ by the factorial of the number of men, i.e. $16!$.
So the required number of ways are,
$$9! \cdot 10 \cdot 16! = 10! \cdot 16!$$
Hence your answer is correct!
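To gain confidence in the formula, one can brute-force a small instance of the same problem. This hypothetical check uses 3 men and 2 women, where the count $(w-1)!\cdot w\cdot m! = 12$ can be verified by enumeration:

```python
from itertools import permutations
from math import factorial

# Brute-force check of the (w-1)! * w * m! count on a small instance:
# m = 3 men and w = 2 women at a round table, all men adjacent.
m, w = 3, 2
men = [("M", i) for i in range(m)]
women = [("W", i) for i in range(w)]
n = m + w

def men_together(seating):
    # on a circle the men sit together iff some rotation lists them first
    return any(all(p[0] == "M" for p in (seating[r:] + seating[:r])[:m])
               for r in range(n))

# fix woman 0 in seat 0 to factor out the n rotations of the round table
count = sum(men_together((women[0],) + perm)
            for perm in permutations(men + women[1:]))
print(count)                                  # 12
assert count == factorial(w - 1) * w * factorial(m)
```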
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4473140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
$T$ is continuous on $X$ iff there exists a $C>0$ such that $||Tx||\le C||x||$ Suppose that $X, Y$ are normed linear space and that $T:X\to Y$ is a linear map, then prove that
a) $T$ is continuous on $X$ iff $T$ is continuous at $0$.
and using a), prove that $T$ is continuous on $X$ iff there exists a $C>0$ such that $||Tx||\le C||x||$ for all $x\in X$.
I tried to prove it as follows:
a) ($\Rightarrow$) is straightforward.
($\Leftarrow$) Let $x\in X$ be arbitrary. Let $x_n\to x$. Suppose that $r_n:=x_n-x$. Clearly, $r_n\to 0$. $Tr_n+Tx=Tx_n$. By continuity of $T$ at $0$, $Tr_n\to 0$; and therefore, it follows that $Tx_n\to Tx$. Since $x$ is arbitrary, it follows that $T$ is continuous on $X$. This completes the proof of part $(a)$.
The second part is where I get stuck.
($\Leftarrow$) This direction is straightforward.
$(\Rightarrow)$ By a), $T$ is continuous at $0$ so there is a $\delta>0$ such that $||Tx||\lt 1$ for all $x: ||x||<\delta$. I don't know how to proceed from here to introduce $C$. Any hints on this? Thanks.
| Choose $x_0\in X$ .
$\|Tx-Tx_0\|=\|T(x-x_0)\|$
Since $T$ is continuous at $0$,
$\|(x-x_0)\|<\delta$ implies $\|T(x-x_0)\|<\epsilon$
Suppose, for contradiction, that for every $C>0$ there exists $x\in X$ with $\|Tx\|> C\|x\|$.
In particular, for every $n\in\Bbb{N}$ there exists $x_n\in X$ with $\|Tx_n\|> n\|x_n\|$ (so $x_n\neq 0$).
Then $y_n=\frac{x_n}{n\|x_n\|}\to 0$, but $\|Ty_n\|\ge 1$ for all $n$, so $\|Ty_n\|$ doesn't converge to $0$, which contradicts the continuity of $T$ at $0$.
In fact, if $T$ is a bounded linear map, then $\|Tx\|\le \|T\|_{op}\|x\|$,
Where $\|T\|_{op}=\sup\{\|Tx\|: \|x\|\le 1\}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4473310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find values of the parameter for the given conic Given $\mathbb{R}^2$ an affine space and the conics:
$Q_\alpha:3x_1^2-\alpha x_1x_2+3x_2^2+14x_1-2x_2+3=0$
$C_\beta:x_1^2+2x_2^2+2\beta x_1x_2-6x_1+5=0$
$i)$ Find $\alpha$ and $\beta$ such that $Q_\alpha$ is a hyperbola and the line
$(r):x_2=x_1-1$ is tangent to $C_\beta$
$ii)$ For the value found for $\beta$ at $i)$ find the canonical form of $C_\beta$ (i.e., classify the conic) using isometries
My Attempt:
I would like to know whether my approach is right and if otherwise, how to correct it.
$i)$
For the first parameter: $\alpha$
In order to find $\alpha$ i thought it would be useful to require that $\Delta≠0$ and $\delta<0$
Where if $Q_\alpha:x^\top A x+Bx+c=0, \delta=det(A)$ and $\Delta=det(A_1)$ where
$\begin{align*}A_1= \begin{bmatrix}
A & \frac{1}{2}B^\top \\
\frac{1}{2}B & c\\
\end{bmatrix}
\end{align*}, $ but it does not help me too much as I have already seen .
For the other parameter: $\beta$
I would just plug in the value of $x_2=x_1-1$ in the equation of $C_\beta$ and require the discriminant to be zero but this method does not always work as I have been told(I think that in dimension 3 things do not work this way? Given that we are working with the asymptote cone). How should I proceed in this situation?
$ii)$
In order to classify the conic using isometries I would try to find the rotation R(an orthogonal matrix with $det(R)=1$) of $\mathbb{R}^2$ making the change of coordinates $x=Rx'$ for x $\in\mathbb{R}^2$ in the equation of $C_\beta:x^\top A' x+B'x+c'=0$ using eigenvectors and eventually Completing the Squares if needed.
Another Question: How could this kind of problems be tackled in a simple way? I mean finding the parameter from the equation of a quadric surface knowing its nature(i.e. an ellipse, or a hyperboloid of one sheet for quadric surfaces) I have seen similar problems here but their solution seems too complicated, having to know some previous classifications and so on.
| i) Matrices associated with the first, resp. second, conic curve are:
$$A=\begin{pmatrix}3&-\frac12 \alpha & 7\\
-\frac12 \alpha & 3&-1\\7&-1&3\end{pmatrix} \ \& \ B=\begin{pmatrix}1&\beta&-3\\ \beta&2&0\\-3&0&5 \end{pmatrix} $$
The first conic, as you have said, will be a hyperbola when $\delta=9-\frac14 \alpha^2 <0$, i.e., when
$\alpha >6 $ or $\alpha < -6$.
The constraint of tangency for the second conic is dealt with by duality: compute the inverse of $B$, or more precisely the adjugate matrix of $B$, which is the matrix of the so-called dual conic:
$$B'=\begin{pmatrix}10&-5\beta&6\\-5\beta&-4&-3\beta\\6&-3\beta&(2-\beta^2)\end{pmatrix} $$
and express that the vector $V^T=(-1 \ \ 1 \ \ 1)^T$ of the coefficients of the straight line belongs to this dual conic by writing equation:
$$V^TB'V=0 \iff (-1 \ \ 1 \ \ 1)\begin{pmatrix}10&-5\beta&6\\-5\beta&-4&-3\beta\\6&-3\beta&(2-\beta^2)\end{pmatrix}\begin{pmatrix}-1\\1\\1\end{pmatrix}=0$$
giving a quadratic equation
$$ -\beta^2+4\beta-4=0$$
whose unique double root is
$$\beta=2.$$
Fig. : A graphical representation of the conic curves for the cases $\alpha=7$ and $\beta=2$, the tangency point (black dot) being $(1,0)$.
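The tangency for $\beta=2$ can also be double-checked without the dual conic, by restricting $C_2$ to the line $x_2=x_1-1$ and verifying a zero discriminant. The sketch below uses exact rational arithmetic; the three sample points are just a way to recover the coefficients of the restricted quadratic:

```python
from fractions import Fraction as F

def C(x1, x2, beta=F(2)):
    # C_beta: x1^2 + 2 x2^2 + 2 beta x1 x2 - 6 x1 + 5
    return x1**2 + 2 * x2**2 + 2 * beta * x1 * x2 - 6 * x1 + 5

# restrict to the line x2 = x1 - 1 and recover q(x1) = a x1^2 + b x1 + c
# from its values at three sample points (second differences)
q0, q1, q2 = (C(x, x - 1) for x in (F(0), F(1), F(2)))
a = (q2 - 2 * q1 + q0) / 2
c = q0
b = q1 - a - c
print(a, b, c)                    # 7 -14 7, i.e. q(x1) = 7 (x1 - 1)^2
assert b * b - 4 * a * c == 0     # zero discriminant: the line is tangent
```

The double root is $x_1=1$, matching the tangency point $(1,0)$.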
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4473482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Iso-parametric Dido problem in polar coordinates How to find maximum area arc between two points with fixed length, using polar coordinates?
I know that I can use Lagrange multipliers, but I cannot integrate the Euler-Lagrange equations. Is there a less messy solution when $ r(0)=r(\pi) $ and we want to maximize the area above the $x$-axis?
| HINTS:
The answer is brief and includes essentials in polar coordinates derivation.
In rectangular coordinates the tangent to the curve makes an angle $\phi$ with the $x$-axis. In polar coordinates $(r,\theta)$ with origin $O$, $\psi$ is the angle the curve's tangent makes with the radius vector.
Primes denote differentiation w.r.t. $\theta$, the independent polar coordinate. Filling in the detailed steps is left as an exercise for this polar-coordinates case:
$$ \text{Lagrangian: } \frac{r^2}{2}- \lambda \sqrt{r^2+r^{'2}}\tag1$$
Euler Lagrange Equation application when $\theta$ does not explicitly occur:
$$ \frac{r^2}{2}-\lambda \sqrt{r^2+r^{'2}}-r'( -\lambda \frac{r'}{\sqrt{r^2+r^{'2}}})= \text{const., set conveniently
} =\frac{c^2}{2} \tag2$$
Using differential triangle trig relation as in the sketch
$$ \sin \psi=\frac{r}{\sqrt{r^2+r^{'2}}} \tag3$$
Simplifying (2)
$$ r \sin \psi= \frac{r^2-c^2}{2 \lambda} \tag4 $$
which is the differential equation of a Circle radius $\lambda $ and power $c^2$ in polar coordinates.
Geometric property verification
Two-segment property of the circle at $OAT$: drawing triangles (not shown), the segment product is the square of the tangent, i.e. the power:
$$ r (r-2 \lambda \sin \psi) = c^2 \tag{4'} $$
When
$$ \psi=0, \text{there is tangential contact of radius vector at T i.e., when r=c. } $$
We can place the Dido circle anywhere in x-y plane but we choose $ \psi =\pm \pi/2$ on x-axis. It is seen that $ \lambda $ is Dido circle /semi-circle radius and $c^2$ its power. Negative value for power is also admissible, when the origin is within the Dido Circle/semi- circle. Eccentric displacement $\epsilon= \sqrt{\lambda^2\pm c^2}.$
When
$$ r= \lambda+{\sqrt{r^2+r^{'2}}}=\lambda+{\sqrt{\lambda^2+c^2}}, \sin \psi =1, \psi = \pi/2 \quad \text {at normal point Q on substitution in 3) and simplification} $$
When
$$ r={\sqrt{r^2+r^{'2}}}- \lambda={\sqrt{\lambda^2+c^2}} -\lambda , \sin \psi =-1, \psi = -\pi/2\; \text {at normal point P on substitution in 3) and simplification.}$$
Integration
Eliminate $\psi$ between (3), (4) to obtain differential equation $ r=f(\theta) $ form
$$ \frac{r'}{r}= \pm \frac{(r^2-c^2)} {\sqrt{2 r^2(2 \lambda^2+c^2) -r^4-c^4}} \tag 5 $$
which appears messier for analytic integration of / for the common Circle in polar coordinates compared to the Cartesian case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4473724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find $\int_0^1 \sqrt{\frac{1-t}{t(1-0.5t)}}dt$. Find $\int_0^1 \sqrt{\frac{1-t}{t(1-0.5t)}}dt$.
I think that we cannot use integration by parts or partial fractions. Is it possible to use a change of variable and properties of the definite integral? Or any other method? Thanks.
| $$I=\int_{0}^{1}\sqrt{\frac{1-t}{t(1-t/2)}}\,dt = 2\sqrt{2}\int_{0}^{1}\sqrt{\frac{1-u^2}{2-u^2}}\,du=2\sqrt{2}\int_{0}^{1/\sqrt{2}}\sqrt{\frac{1-2v^2}{1-v^2}}\,dv $$
$$ I = 2\sqrt{2}\int_{0}^{\pi/4}\sqrt{\cos(2\theta)}\,d\theta = \sqrt{2}\int_{0}^{\pi/2}\left(\cos\varphi\right)^{1/2}\,d\varphi = \frac{4\pi\sqrt{\pi}}{\Gamma\left(\frac{1}{4}\right)^2}$$
by Euler's Beta function and the reflection formula for the $\Gamma$ function.
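A quick numerical cross-check of the closed form, using the better-behaved transformed integral $\sqrt2\int_0^{\pi/2}\sqrt{\cos\varphi}\,d\varphi$ (the original integrand blows up at $t=0$):

```python
import math

# midpoint rule on I = sqrt(2) * integral of sqrt(cos(phi)) over [0, pi/2]
N = 200_000
h = (math.pi / 2) / N
num = math.sqrt(2) * h * sum(math.sqrt(math.cos((k + 0.5) * h))
                             for k in range(N))
closed = 4 * math.pi ** 1.5 / math.gamma(0.25) ** 2
print(num, closed)               # both ≈ 1.69443
assert abs(num - closed) < 1e-4
```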
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4473865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $0<e-\sum\limits_{k=0}^n\frac{1}{k!}<\frac{1}{n!n}$
Show that $0<e-\sum\limits_{k=0}^n\frac{1}{k!}<\frac{1}{n!n}$, where $n>0$.
Hint:
Show that $y_m:=\sum\limits_{k=n+1}^{m+n}\frac{1}{k!}$ has the limit $\lim\limits_{m\to\infty}y_m=e-\sum\limits_{k=0}^n\frac{1}{k!}$ and use that $(m+n)!y_m<\sum\limits_{k=1}^m\frac{1}{(n+1)^{k-1}}$.
If I use the hint then the problem is pretty straightforward.
However, I don't understand why the inequality of the hint is true. A simple example shows that the inequality doesn't hold! There is no further information given.
What am I missing? Is $(m+n)!y_m<\sum\limits_{k=1}^m\frac{1}{(n+1)^{k-1}}$ simply nonsense? Or did the professor forgot to mention some more assumptions?
| I think I got it right this time!!
$e-\sum_{0}^{n}\frac{1}{k!}<\frac{1}{n!n}$ is equivalent to $ e-e+\sum_{n+1}^{\infty }\frac{1}{k!}<\frac{1}{n!n}$ equivalent to
$\frac{1}{(n+1)!}+\frac{1}{(n+1)!(n+2)}+.....<\frac{1}{n!n}$ and hence to
$\frac{1}{n!}({\frac{1}{(n+1)}}+\frac{1}{(n+1)(n+2)}+\frac{1}{(n+1)(n+2)(n+3)}+....)<\frac{1}{n!n}$
But $\frac{1}{(n+1)(n+2)}<\frac{1}{(n+1)^{2}}, \,\,\frac{1}{(n+1)(n+2)(n+3)}<\frac{1}{(n+1)^{3}} $ etc so it suffices to show setting $a=\frac{1}{n+1}$
that $a+a^{2}+a^{3}+....\leq \frac{1}{n}\Leftrightarrow \frac{1}{1-a}-1\leq \frac{1}{n}$. But this is
$\frac{n+1}{n}-1=\frac{1}{n}$. QED
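The double inequality can also be checked numerically for small $n$ (floating-point precision limits how far one can push this, so the range stays modest):

```python
import math

# Check 0 < e - sum_{k=0}^n 1/k! < 1/(n! * n) for the first several n.
for n in range(1, 12):
    partial = sum(1 / math.factorial(k) for k in range(n + 1))
    gap = math.e - partial
    assert 0 < gap < 1 / (math.factorial(n) * n)
```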
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4474230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Power or polynomial function? According to the definition, $f(x) = a·x^n$ is a power function. If we shift it to $f(x) = a·(x - c)^n$ or, more general, to $f(x) = a·(x - c)^n + d$, it becomes a polynomial function (not a power function anymore). Is this just a matter of mere formalism or nomenclature?
| Power functions must have a single term, but are allowed to have fractional or negative exponent.
Polynomials can have more than one term ("poly", meaning "many", refers to exactly this), but all exponents must be natural numbers.
A single term with a natural number exponent is both a power function and a polynomial (and actually a monomial) simultaneously.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4474386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
I think the following property is necessary for the converse part of Theorem 4.2. (N-z) $\mathcal{U}_x\neq\emptyset$ for any $x\in X$. (Willard) I am reading "General Topology" by Stephen Willard.
I think the following property is necessary for the converse part of Theorem 4.2.
N-z) $\mathcal{U}_x\neq\emptyset$ for any $x\in X$.
I created a python program which computes the number of the functions $\mathcal{U}:X=\{0,1,\dots,n-1\}\to 2^X$ which satisfy N-a) through N-d).
But my python program didn't compute the correct answer.
Am I right?
| You are correct. The neighborhood system must be nonempty at each point.
If $\mathcal{U}_x$ were empty for some $x\in X$, it would mean that $x$ does not have any neighborhoods, and consequently, by condition N-e), $x$ cannot belong to any open set. In particular this means $X$ is not open, but then ''the result is a topology on $X$'' does not hold.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4474555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
sequence such that $x_{n+1}=n(x_n-n)$ Let $(x_n)$ be a sequence such that $x_{n+1}=n(x_n-n)$
Prove that $x_n=O(n)$ if and only if $x_1=2e$
If we have $x_n=O(n)$ then clearly $x_n=n+O(1)$ so they are equivalent.
I don't get how $x_1$ relates to this. Well it's a recurrent series but it's not of the form $x_{n+1}=f(x_n)$
By some experimentation, starting from the ansatz $x_n=an+b+y_n$ and substituting into the equation, I quickly found that $x_n=n+1+2y_n$ leads to the following simplification
$$y_{n+1}=ny_n-1$$
So we have transformed the $-n^2$ term into a constant $-1$ which is better.
Now we can set $y_n=(n-1)!\,z_n$ so as to make $n!$ appear on both sides of the equation.
$$z_{n+1}-z_n=\frac {-1}{n!}$$
This is a telescoping relation therefore $z_n=z_1-\sum\limits_{k=1}^{n-1}\frac 1{k!}$
We have $z_n\to z_1+(1-e)$ so to have $x_n=O(n)$ we need to cancel this with $z_1=e-1$
Which is equivalent to
$x_1=2+2y_1=2+2z_1=2+2(e-1)=2e$
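The telescoped solution can be checked numerically; note that high precision is essential, since the recursion multiplies any deviation from $x_1=2e$ by $n$ at every step, so errors grow factorially. A sketch using the standard `decimal` module:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
e, term = Decimal(0), Decimal(1)
for k in range(51):            # e = sum_{k=0}^{50} 1/k!, accurate enough here
    e += term
    term /= k + 1

def iterate(x1, last=30):
    x = x1
    for n in range(1, last):   # x_{n+1} = n (x_n - n)
        x = n * (x - n)
    return x                   # returns x_last

x30 = iterate(2 * e)
print(x30)                     # slightly above 31: x_n = n + 1 + O(1/n)
assert 0 < x30 - 31 < Decimal("0.1")
# any other starting value diverges factorially
assert abs(iterate(2 * e + Decimal("0.001"))) > 10**6
```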
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4474763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
what is the difference between a markov chain and a random walk? I am confused as to the relation between Markov chains and random walks. Some sources I look at claim all random walks are markov chains and then other sources say the opposite. What makes a markov chain a random walk?
Epistemologically speaking, a random walk does not possess the essential properties of a Markov chain: transition probabilities. An instantiation of a random walk is, to say the least, an utterly random path in mathematical space. From this path, we cannot go back and deduce the transition probabilities. However, we could technically compute the average transition probability from each state. On the other hand, a random walk specified a priori (not a particular instantiation) does seem to possess the qualities of a Markov chain. This is not to say that there are no distinctions to be made. Consider the most elementary random walk, the birth and death chain on the real line. With chances of going up and down the chain in single increments, this random walk is indeed a Markov chain. However, consider a situation where we have an animal foraging for food. Its path can be considered a random walk, though the direction it faces also figures into the places visited, so we must incorporate additional information beyond its current position.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4474923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Three-of-a-kind Poker Hand Problem Problem:
*
*Three-of-a-kind poker hand: Three cards have one rank and the remaining two cards have
two other ranks. e.g. {2♥, 2♠, 2♣, 5♣, K♦}
Calculate the probability of drawing this kind of poker hand.
My confusion: When choosing the three ranks, the explanation used $13 \choose 1$ and $12 \choose 2$. I used $13 \choose 3$ instead which ends up being wrong. I do not know why.
| Here is another way to solve it through unordered samples.
We are looking for hands of the kind $x_1$-$x_2$-$x_3$-$y$-$z$, where $x_1,x_2,x_3$ are all of the same face value (although of different suits), whereas $y,z$ are different face values.
To work with unordered hands, let's fix the order of the cards as above, i.e. three of a kind are the first three cards followed by two other different kinds.
There are 13 possible face values (2, 3, $\ldots$, K, A), and for each face value, there are ${4\choose 3}$ ways to select 3 cards out of 4, disregarding order and without replacement. This fills $x_1$-$x_2$-$x_3$.
For $y$, there are 48 possibilities: four cards are gone or forbidden, namely the three drawn cards of the triple's face value plus its remaining fourth card (which would give four of a kind). For $z$, there are 44 possibilities: among the 48 cards left after drawing $y$, the fourth card of the triple's value is still forbidden, and so are the three remaining cards of the face value chosen for $y$.
However, we are not done yet, i.e. $13{4\choose 3}48\cdot44$ is not quite right because this number also counts poker hands such as 4s-4c-4h-2s-3h and 4s-4c-4h-3h-2s, which are obviously indistinguishable since order doesn't matter.
The right number of poker hands with a three-of-a-kind is thus
$$13{4\choose 3}\frac{48\cdot44}{2!},$$
and the required probability is
$$
\frac{13 {4\choose 3}\frac{48\cdot44}{2!}}{{52 \choose 5}}.
$$
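The two ways of organizing the count (ordered side cards divided by $2!$, versus choosing the two side ranks together as in the textbook route) can be confirmed to agree:

```python
from math import comb

# side cards ordered, then divide by 2! (the route taken in this answer)
count_a = 13 * comb(4, 3) * 48 * 44 // 2
# choose the two side ranks together, then a suit for each (textbook route)
count_b = 13 * comb(4, 3) * comb(12, 2) * 4 * 4
assert count_a == count_b == 54912
prob = count_a / comb(52, 5)
print(prob)                      # ≈ 0.0211, about 1 hand in 47
```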
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4475082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Is there notation for subset containing the n smallest (or largest)? Suppose I have a discrete set $A=\{a_1,\ldots,a_k\}$ and I wish to write an expression for the minimum of this set. I would write:
$$
\min \{a_1,\ldots,a_k\}
$$
But what if rather than just an element, I want the subset of $A$ containing the $n$ smallest elements of $A$? How would I write that?
| I don't think there is a standard notation. It is likely to be most clear if you define your own notation in words before using it. Clear notation that springs to mind for me would be $\min_n (A)$ or $\min^{(n)} (A)$, but it needs explanation.
You can define the concept using only existing notation, but it is perhaps less clear than using words:
$$a_1=\min(A) \\ a_{i+1}=\min(A\setminus\{a_1,\dots,a_i\})\ \text{for}\ i\ge 1\\ \text{min}_n(A)=\{a_1,\dots,a_n\}$$
David Sheard's comment suggests another definition.
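If it helps, in code the recursive definition above agrees with simply taking the $n$ smallest elements (Python's standard library even has a named function for it):

```python
import heapq

A = {9, 4, 7, 1, 8, 3}
n = 3
n_smallest = heapq.nsmallest(n, A)   # [1, 3, 4]
assert n_smallest == sorted(A)[:n]
```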
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4475247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is the conditional expectation of a random variable measurable with respect to it? Suppose $X$ is an integrable random variable on $(\Omega,\mathcal G, P)$ and $\mathcal F\subseteq\mathcal G$ is a sub-$\sigma$-algebra. Is it true that
$$\sigma(E[X\mid\mathcal F])\subseteq\sigma(X)?$$
Intuitively I would say "yes" because $E[X\mid\mathcal F]$ collects all the information that $\mathcal F$ supplies to find out the value of $X$ and at most that can be $\mathcal F = \sigma(X)$ because then one has $X$ (i.e. $E[X\mid X]=X$) all the other information in $\mathcal F$ appears to be irrelevant.
| No. let $\Omega = [0,1]$, $X= 1_{[{1 \over 2},1]}$, ${\cal F}$ the field generated by $[0,{1 \over 3}), [{1 \over 3}, {2 \over 3}), [{2 \over 3}, 1]$.
Then $E[X|{\cal F}](\omega) = \begin{cases} 0, & \omega \in [0,{1 \over 3})\\
{1 \over 2}, & \omega \in [{1 \over 3}, {2 \over 3}) \\ 1, & \omega \in [{2 \over 3}, 1]\end{cases}$, so $\sigma(E[X|{\cal F}]) = {\cal F}$, but
$\sigma(X) = \sigma(\{ [0,{ 1\over 2}), [{1 \over 2},1] \})$.
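A small Monte Carlo illustration of this counterexample (uniform $\omega$ on $[0,1]$; the sample size is an arbitrary choice):

```python
import random

# omega ~ Uniform[0,1], X = 1_{[1/2,1]}, F generated by the thirds of [0,1].
random.seed(0)
sums, counts = [0.0, 0.0, 0.0], [0, 0, 0]
for _ in range(200_000):
    u = random.random()
    x = 1.0 if u >= 0.5 else 0.0
    b = min(int(3 * u), 2)          # index of the third containing u
    sums[b] += x
    counts[b] += 1
means = [s / c for s, c in zip(sums, counts)]
print(means)   # ≈ [0, 0.5, 1]: E[X|F] takes three values, X only two
```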
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4475590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does there exist a function which converts exponentiation into addition? A useful property of the logarithm is that it can "convert" multiplication into addition, as in
$\ln(a)+\ln(b)=\ln(ab) \text{ for all } a, b \in \mathbb{R}^+$
Does there exist a function $f$, which holds a similar property for exponentiation?
$f(a)+f(b)=f(a^b) \text{ for all } a, b \in \mathbb{R}^+$
If so, are there any closed-form expressions for such a function?
| As Stinking Bishop mentions in the comments, if $f(a)+f(b)=f(a^b)$, then it follows (by interchanging $a$ and $b$) that $f(b)+f(a)=f(b^a)$. Hence, $f(a^b)=f(b^a)$ for all $a,b\in\mathbb R^+$. Setting $b=1$, we see that $f(a)=f(1)$ for all $a\in\mathbb R^+$. Now $f(1)+f(1)=f(1^1)$, so $f(1)=0$. Hence, the only function $\mathbb R^+\to\mathbb R$ with the desired property is the zero function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4475785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 1
} |
An 'obvious' inequality in Mean curvature flow My question is from the mean curvature flow paper by Gerhard Huisken. First $H:=g^{ij}h_{ij}=tr(h)$ where $h$ is the second fundamental form of the manifold. Then $A:=\left\{h_{ij}\right\}$.
It states that there is an obvious inequality by the Codazzi equation which is \begin{equation}
|\nabla H|^2\le n |\nabla A|^2
\end{equation} $n$ is the dimension of the manifold. So I try to expand everything out and make use of the operator norm. $$|\nabla H|^2=g^{lk}\nabla_l g^{ij}h_{ij}\nabla_kg^{pq}h_{pq}=g^{lk}g^{pq}g^{ij}\nabla_l h_{ij}\nabla_k h_{pq} $$ For the gradient of the second fundamental form $$|\nabla A|^2=g^{lk}g^{pq}g^{ij}\nabla_l h_{qi}\nabla_k h_{pj}\le ng^{pq}g^{ij}\nabla_l h_{qi}\nabla_k h_{pj}$$ I notice that the inverse metric in $|\nabla H|$ represents a trace of $h_{ij}$, but the inverse metric in $|\nabla A|$ represents an inner product between two tensors. So I don't know how to connect the two magnitudes.
| Let $B_{ijk} = \nabla_iA_{jk}$. We want to maximize
$$
f(B) = \frac{g^{pq}B_{ipq}g_{jk}B^{ijk}}{B_{ijk}B^{ijk}} $$
First, note that it's obvious that the minimum value of $f$ is $0$. Therefore, the maximum occurs at a critical point of $f$.
Setting the directional derivative to $0$, we get, for any $2$-tensor $\dot{B}$ and some constant $c$,
\begin{align*}
0 &= g^{pq}g_{jk}B_{ipq}\dot{B}^{ijk} - c B_{ijk}\dot{B}^{ijk}\\
&= (g^{qr}g_{jk}B_{iqr}-cB_{ijk})\dot{B}^{ijk}.
\end{align*}
This implies that there is a vector $v$ such that
$$
B_{ijk} = v_ig_{jk}
$$
Therefore,
$$
g^{jk}B_{ijk} = nv_i.
$$
The last two equations imply that
$$
f(B) = \frac{n^2|v|^2}{n|v|^2} = n.
$$
The Codazzi equations were not used here. If you take them into account, you might get an even better constant. In general, an inequality like this is called an improved Kato inequality.
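As a numerical sanity check of the resulting inequality $|\nabla H|^2\le n|\nabla A|^2$ (taking $g$ to be the identity so indices can be raised and lowered freely), random 3-index arrays never violate the bound, and the critical configuration $B_{ijk}=v_ig_{jk}$ attains it:

```python
import random

n = 3
def gradH2(B):   # |grad H|^2 = sum_i (sum_j B_ijj)^2
    return sum(sum(B[i][j][j] for j in range(n)) ** 2 for i in range(n))
def gradA2(B):   # |grad A|^2 = sum of all entries squared
    return sum(B[i][j][k] ** 2
               for i in range(n) for j in range(n) for k in range(n))

random.seed(1)
for _ in range(500):
    B = [[[random.gauss(0, 1) for _ in range(n)]
          for _ in range(n)] for _ in range(n)]
    assert gradH2(B) <= n * gradA2(B) + 1e-9

# the answer's critical point B_ijk = v_i g_jk attains equality
v = [1.0, -2.0, 0.5]
B = [[[v[i] * (j == k) for k in range(n)] for j in range(n)]
     for i in range(n)]
assert abs(gradH2(B) - n * gradA2(B)) < 1e-9
```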
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4475926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Minimum value of $ab+bc+ca$ Given that $a,b,c \in \mathbb{R^+}$ and $(a+b)(b+c)(c+a)=1$
Find the Minimum value of $ab+bc+ca$
My try: Letting $x=a+b, y=b+c, z=c+a$ we get
$$a=\frac{x+z-y}{2}, b=\frac{x+y-z}{2},c=\frac{y+z-x}{2}$$
$$xyz=1$$
Now the problem is to minimize $$ab+bc+ca=\frac{xy+yz+zx}{2}-\frac{x^2+y^2+z^2}{4}$$
Now by $A.M-G.M$ we have
$$\frac{xy+yz+zx}{2}\geq \frac{3}{2}\times \sqrt[3]{x^2y^2z^2}=\frac{3}{2}$$
But I am stuck on the term $\frac{-(x^2+y^2+z^2)}{4}$.
With this separation you can't find a minimum, since $x^2+y^2+z^2$ is not bounded. Just take the sequence $(x_n,y_n,z_n)=(n,n,1/n^2)$, $n\in\mathbb{N}^\ast$, so that $x_ny_nz_n=1$; then $x_n^2+y_n^2+z_n^2=2n^2+1/n^4$ is not bounded above. You may instead maximize the 'whole' function $x^2+y^2+z^2-2(xy+yz+zx)$ by a Lagrange multiplier (for example).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4476385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
$\limsup$ of product of sequences with finite $\limsup$ Assume we have three seqeuences of positive real numbers, say $(a_n)_{n\in \mathbb{N}}$, $(b_n)_{n\in \mathbb{N}}$ and $(c_n)_{n\in \mathbb{N}}$, such that $\displaystyle \limsup \limits _{n\to \infty}\frac{a_n}{b_n}<\infty$ and $\displaystyle \lim \limits _{n\to \infty}\frac{b_n}{c_n}=0$. Do we then have$$\limsup \limits _{n\to \infty}\frac{a_n}{c_n}=\limsup \limits _{n\to \infty}\frac{a_n}{b_n}\frac{b_n}{c_n}=0?$$
Hint: since $\dfrac{a_n}{b_n}$ is bounded and $\dfrac{b_n}{c_n}\to0$, you have the usual case of "bounded times something that tends to $0$", whose limit is $0$. Also, since $\dfrac{b_n}{c_n}\to0$, its limit is finite and so $\limsup \dfrac{b_n}{c_n}=\lim\dfrac{b_n}{c_n}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4476541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
simple pendulum equation, why it cannot be solved with laplace transform (the general solution) Usually to solve the simple pendulum equation:
$\qquad \ell {\ddot \theta }+g\sin \theta =0\,$
the first term of the Taylor series is used as an approximation. But although $\sin \theta$ can be transformed into Laplace "space", I can't use it to find a general solution.
Why is there no general solution via the Laplace transform?
| In general, Laplace transform can be helpful in solving linear differential equations. But this one is nonlinear.
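To see the nonlinearity matter quantitatively, one can compare the linearized solution $\theta_0\cos(\sqrt{g/\ell}\,t)$ (the case a Laplace transform handles) against a numerical integration of the full equation. The parameter values below are just an illustrative choice:

```python
import math

# Linearized solution theta0*cos(sqrt(g/l) t) vs RK4 on the full equation.
g, ell, theta0 = 9.81, 1.0, 1.0       # theta0 = 1 rad is not a small angle
w = math.sqrt(g / ell)

def deriv(th, om):
    return om, -(g / ell) * math.sin(th)

th, om, t, dt = theta0, 0.0, 0.0, 1e-3
max_gap = 0.0
while t < 2.0:
    k1 = deriv(th, om)
    k2 = deriv(th + dt / 2 * k1[0], om + dt / 2 * k1[1])
    k3 = deriv(th + dt / 2 * k2[0], om + dt / 2 * k2[1])
    k4 = deriv(th + dt * k3[0], om + dt * k3[1])
    th += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    om += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += dt
    max_gap = max(max_gap, abs(th - theta0 * math.cos(w * t)))
print(max_gap)   # a few tenths of a radian: sin(theta) ~ theta fails at 1 rad
```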
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4476752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the smallest $n\in \mathbb Z^+$ that makes $\sqrt{100+\sqrt n}+\sqrt{100-\sqrt n}$ integer. Find the smallest $n\in \mathbb Z^+$ that makes $\sqrt{100+\sqrt n}+\sqrt{100-\sqrt n}$ an integer.
Clearly if $n=0$ then we get $20$, but I couldn't figure out how to find the other integers. Any hint?
If I say that $x=\sqrt{100+\sqrt n}+\sqrt{100-\sqrt n}$ then we have $200+2\sqrt{10^4-n}=x^2$
Your thinking is absolutely right:
$x=\sqrt{100+\sqrt n}+\sqrt{100-\sqrt n}$ then we have $200+2\sqrt{10^4-n}=x^2$
For the purposes of this question,
$200+2\sqrt{10^4-n}$ has to be a perfect square, and we can also see that $x$ is even.
Therefore let $g^2=10^4-n$, where $g\in\Bbb Z^+$.
We can also see that $g\leq100$, therefore $x_{max}=20$.
Since $x^2\geq200$
$\implies x\in {\{16,18,20\}}$
$\implies 200+2g \in {\{16^2,18^2,20^2\}}$
$\implies g \in {\{28,62,100\}}$
$\implies n\in {\{9216,6156,0\}}$
There you go: the smallest $n\in\mathbb Z^+$ is $6156$.
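A brute-force pass over all admissible $n$ (we need $\sqrt n\le 100$, i.e. $n\le 10^4$) lists every $n$ for which the expression is an integer:

```python
import math

def s(n):
    r = math.sqrt(n)
    return math.sqrt(100 + r) + math.sqrt(100 - r)

# need sqrt(n) <= 100 for the second square root, i.e. n <= 10^4
hits = [n for n in range(10_001) if abs(s(n) - round(s(n))) < 1e-9]
print(hits)                      # [0, 6156, 9216]
smallest_positive = min(n for n in hits if n > 0)
print(smallest_positive)         # 6156
```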
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4476904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Prove $p(x,y) = q^2p^{y-2}$ is a probability mass function Question: Prove $p(x,y) = q^2p^{y-2}$ is a pmf
where $q = 1 - p$ $$x = 1,2,3,...,y-1$$ $$y = 2,3,..., \infty $$
My attempt:
I am trying to prove $p(x,y)$ must add up to 1
$$\sum_{y=2}^{\infty} \sum_{x=1}^{y-1} q^2p^{y-2} = \sum_{y=2}^{\infty} q^2p^{y-2} \sum_{x=1}^{y-1} 1 = \sum_{y=2}^{\infty} q^2p^{y-2} (y-1) $$
to me this is kinda similar to a summation of a binomial however there is no way I can get $y-1$ to equal $y \choose 2$ (also, the range of summation is not the same as of binomial's!!!)
I am stuck there and any help would be appreciated. Thanks!
| $$\begin{split}\sum_{y=2}^\infty \sum_{x=1}^{y-1}(1-p)^2p^{y-2} &= \sum_{y=2}^\infty \color{red}{(y-1)}(1-p)^2p^{y-2}\\
&=\frac{(1-p)^2}{p}\sum_{y=2}^\infty (y-1)p^{y-1}\\
&=\frac{(1-p)^2}{p}\sum_{y=1}^\infty yp^y\\&=(*)\end{split}$$
Now, looking at this post:
$$\sum_{y=1}^\infty yp^y = \frac p {(1-p)^2}$$
Thus $$(*)=\frac{(1-p)^2}{p} \frac{p}{(1-p)^2}=1$$
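The computation can be confirmed numerically by truncating the double sum (the tail is geometrically small); $p=0.37$ below is an arbitrary test value:

```python
p = 0.37          # arbitrary test value in (0, 1)
q = 1 - p
# the double sum over the pmf's support, truncated in y
total = sum(q**2 * p**(y - 2) for y in range(2, 500) for x in range(1, y))
print(total)      # ≈ 1.0
assert abs(total - 1) < 1e-9
```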
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Galois Group of $(x^2-5)(x^2-7)$ over $ \mathbb{Q}$ I want to calculate the Galois Group of $p= (x^2-5)(x^2-7) \in \mathbb{Q} $. Because I saw a solution to this problem in a German textbook and think it is incorrect.
So: $p$ is irred. over $\mathbb{Q}$, and $\operatorname{char}( \mathbb{Q} )=0$ therefore the Galois group is a subgroup of $S_4$. Furthermore $ |\operatorname{Gal}(p)|=| \mathbb{Q} ( \sqrt{5}, \sqrt{7}): \mathbb{Q} | = | \mathbb{Q} ( \sqrt{5} ): \mathbb{Q} (\sqrt{7}) | \cdot| \mathbb{Q} ( \sqrt{7}): \mathbb{Q} |= 2 \cdot 2 = 4 $. Since $ \mathbb{Q}( \sqrt{5} ,\sqrt{7}) $ is the splitting field of $p$.
Now we have only $C_4$ (cyclic Group) or $V_4$ (Klein four-group) as possibilities.
If the discriminant of a polynomial is a square (over the the field $K = \mathbb{Q} $) then the Galois group is subgroup of $A_n = A_4$. It is $\operatorname{disc}(p)= 8960 $ which is not a square of $\mathbb{Q}$ so we are left with $G= C_4$.
Does this look right to you? or did I miss something?
I apologize for my English and thank you in advance!
| You have factored $p$, so it is apparently false that $p$ is irreducible over $\mathbb Q$. But your computation of $|\text{Gal}(p)|=4$ is more or less correct, except $|\mathbb Q(\sqrt 5, \sqrt 7):\mathbb Q(\sqrt 7)|=2$ (mistakenly as $\mathbb Q(\sqrt 5):\mathbb Q(\sqrt 7)|$) needs some justification.
It must be isomorphic to $V_4$, not $C_4$, as $C_4$ contains only one proper subgroup of order $2$, but the splitting field has at least two distinct intermediate subfields: $\mathbb Q(\sqrt 5)$ and $\mathbb Q(\sqrt 7)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Evaluate $\int_{-\infty}^\infty \frac{1}{(x^2 + D^2)^2}dx$. $$\int_{-\infty}^\infty \frac{1}{(x^2 + D^2)^2}dx$$
Edit : $D> 0$.
My work:
Let $x = D\tan \theta$
$$\int_{-\infty}^\infty \frac{1}{(x^2 + D^2)^2}dx=\int_{-\infty}^\infty \frac{1}{(D^2\sec^2 \theta)^2}dx$$
$$=\int_{-\infty}^\infty \frac{1}{(D^2\sec^2 \theta)^2}D\sec^2\theta d\theta
= \frac{1}{D^3}\int_{-\infty}^\infty \cos^2 \theta d\theta$$
$$=\frac{1}{D^3}[\frac{\theta}{2} + \frac{\sin {2\theta}}{4} + C]_{-\infty}^\infty$$
Put $\theta = \arctan{\frac{x}{D}}$;
$$=\frac{1}{D^3}[\frac{\arctan{\frac{x}{D}}}{2} + \frac{\sin {(2\arctan{\frac{x}{D})}}}{4} + C]_{-\infty}^\infty$$
We get the integral as $\frac{\pi}{2D^3}$. Since $\lim_{x \to +\infty} \arctan(x) = \frac{\pi}{2}$ and $\lim_{x \to -\infty} \arctan(x) = \frac{-\pi}{2}$
I feel something isn't right here. Can anyone point out the mistake, please? Thank you very much.
| Do you know about complex analysis? Integrals like this become trivial:
We consider the contour integral
$$\oint_C \frac{1}{(z^2+D^2)^2}dz=\int_{\Gamma_R}\frac{1}{(z^2+D^2)^2}dz +\int_{[-R,R]}\frac{1}{(z^2+D^2)^2}dz $$
where $C$ is a semicircle in the upper half plane. By the residue theorem, we need only consider the pole at $z=Di$. As the radius of the semicircle goes to infinity, we find that
$$\lim_{R\to\infty}\int_{\Gamma_R}\frac{1}{(z^2+D^2)^2}dz=0$$
so
$$\lim_{R\to\infty}\int_{[-R,R]}\frac{1}{(z^2+D^2)^2}dz=\int_{-\infty}^{\infty}\frac{1}{(z^2+D^2)^2}dz=\oint_C \frac{1}{(z^2+D^2)^2}dz=2\pi i Res(f,Di)=2\pi i\lim_{z\to Di}\frac{d}{dz}((z-Di)^2f(z))=\frac{\pi}{2D^3}$$
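Not part of the original answer: a quick numeric sanity check of the residue result in plain Python (the choice $D=2$ below is arbitrary; the expected value is then $\pi/(2D^3)=\pi/16$).

```python
import math

def integral_approx(D, L=200.0, N=200_000):
    """Midpoint-rule approximation of the integral of 1/(x^2 + D^2)^2
    over [-L, L]; the discarded tail beyond |x| = L is below 2/(3 L^3)."""
    h = L / N
    # the integrand is even, so integrate over [0, L] and double
    return 2.0 * h * sum(
        1.0 / (((j + 0.5) * h) ** 2 + D * D) ** 2 for j in range(N)
    )

D = 2.0
approx = integral_approx(D)
exact = math.pi / (2 * D ** 3)
print(approx, exact)  # both ~ 0.19635
```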
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
How to express a set in words Is it correct to say $M$ is the set of all $x$ that are elements of $\mathbb Z$, such that $x $ is greater than $-5$ ?
$ M=${$x\in \mathbb Z :x$ is even, $x > -5$}
| Instead of "the set of all $x$ is an element of $\Bbb Z$" (which doesn't make grammatical sense), just say "the set of all $x$ in $\Bbb Z$". You also missed the "$x$ is even" part. I'd translate "$M=\{x\in\Bbb Z:x\text{ is even},x>-5\}$" as "$M$ is the set of all even integers greater than $-5$". So $M=\{-4,-2,0,2,4,6,8,...\}$.
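Purely as an aside (not from the original answer): the set-builder notation maps directly onto a set comprehension, shown here on a small finite slice of $\mathbb Z$.

```python
# a finite slice of the integers, for illustration only
M_slice = {x for x in range(-10, 11) if x % 2 == 0 and x > -5}
print(sorted(M_slice))  # [-4, -2, 0, 2, 4, 6, 8, 10]
```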
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Definition of measurable function - connection with being measurable f is measurable if for all Borel sets B, the preimage is in the sigma-algebra.
This is basically saying that for all elements in the sigma algebra, f of it is a borel set.
What part of this definition explains the measurability of f— is it that borel sets are measurable? Thus, taking the function f of anything in the sigma algebra will produce a measurable output, hence the function is “measurable”?
The definition I refer to is as follows.
$f$ is measurable with respect to a measurable space $(X, A)$ if for every borel set B, $$f^{-1}(B)=\{x\in X:f(x)\in B\}\in A$$
This basically means: if $f^{-1}(B) \in A$ (the $\sigma$-algebra) for every Borel set $B$, then $f$ is measurable.
Thus, $f$ is not measurable if its preimage of any Borel set is not contained in the $\sigma$-algebra. My question is as follows.
What is the meaning of this definition?
* We have a $\sigma$-algebra mapping to another set that contains Borel sets. If those Borel sets have preimages all of which lie in the $\sigma$-algebra, then $f$ is measurable. Is it because the Borel sets are measurable that $f$ is measurable?
| If we associate $f$ with its range $f(X) =: R$, then for any Borel subset $B$ of $R$ we can assign it the measure $\mu_X (f^{-1}(B))$, so the function (in terms of its range) can be measured.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What Is The Most Efficient Way To Tile A Page With Cube Nets? I'm trying to print out nets of a cube on a sheet of paper, and I'm hoping to fit as many as I can on single sheets. The squares that make up the net are $\frac{1}{2}$ an inch wide, and I'm printing on standard 8.5" x 11" printer paper. I know that all of the 11 nets of a cube can tile 2D space, but I want to find an arrangement of (possibly mixed and matched) cube nets that will cover the page as efficiently as possible with as many cubes as possible.
The best I've gotten is 55 cubes, using the arrangement below:
My hope is that there is an arrangement that can fit 60 cube nets (or some multiple of 20) on the sheet, since I plan to cut them out and tape them into a Menger Sponge. Is there such an arrangement out there?
| Maybe not the most elegant arrangements, but they work.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why is the reduced Chi-squared around 1, not zero? It is mentioned here that the reduced Chi-squared $\chi^2/\text{dof}$ is always compared to 1 and that it determines the goodness of fit, where $\chi^2$ is defined as: $$\chi^2=\sum{\frac{(\text{model}-\text{data})^2}{\sigma^2}}$$ Meanwhile, as I understand it, $\chi^2$ represents the (weighted) sum of squared residuals, so if the model is the best fit, doesn't this mean the sum of the squared residuals tends to zero, and hence that the value for the best fit should be zero?
I know that $\sigma$ can be used for scaling, but does it mean that, even if we have one set of data and perform fitting with several models, we should use a different $\sigma$ for each model?
I think that I might have a misconception.
| I think you confuse "true model" and "no standard deviation." We have an assumption that the observations come from a Gaussian generative process $y_i\sim N(X_i^T\beta, \sigma_i^2)$ for each $i$. If this true model is found, the $\chi^2$ statistic $\sum_i \left(\frac{(y_i-f(x_i|\beta))^2}{\sigma_i^2}\right)$ will be $\chi^2$-distributed with $n$ degrees of freedom, because it is the sum of $n$ standard Gaussian variates squared. The expectation is $n$, so dividing by the degrees of freedom, $n$, we get 1 for the correct model.
If the statistic is greater than 1, generally it implies underfitting (residuals too large), while less than 1 means overfitting. This is done logically by comparison with the fact that it is 1 under the true model, but in practice it can be complicated.
On the other hand, if the standard deviations were all $0$, we’d get a perfect fit with no residuals, but we cannot have a zero standard deviation in the Gaussian model. The model would be deterministic with no randomness. A chi-square statistic would be inappropriate.
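Not part of the original answer, but a small simulation of the first paragraph's claim: under the true model the reduced chi-squared averages to $1$. The sample size, noise levels, and seed below are arbitrary choices.

```python
import random

random.seed(1)
n, trials = 500, 400
sigmas = [0.5 + (i % 5) * 0.3 for i in range(n)]  # heteroscedastic errors

avg = 0.0
for _ in range(trials):
    # residuals drawn from the true model: y_i - model_i ~ N(0, sigma_i^2)
    chi2 = sum((random.gauss(0.0, s) / s) ** 2 for s in sigmas)
    avg += chi2 / n  # reduced chi-squared of this synthetic data set
avg /= trials
print(avg)  # ~ 1
```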
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4477977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question on a proof that there exist $\alpha \in \mathbb R$ such that $\alpha^2=b$ for $b \in \mathbb R$ and $b\geqslant0$ I am aware there are a couple of other questions related to this topic. I could formulate a proof but I have a particular question regarding a lecturer's proof I found in the web.
The proof goes like this:
Define $T=\{t\in\mathbb R\mid t^2<b\}$. Let $\alpha=\sup T$. We have to prove that $\alpha^2=b$ by ruling out $\alpha^2<b$ and $\alpha^2>b$. I will omit the case $\alpha^2<b$ since it was clear to me.
For the case $\alpha^2>b$, let's write:
$\left( \alpha - \frac{1}{n} \right)^2 = \alpha^2 - \frac{2\alpha}{n} + \frac{1}{n^2} > \alpha^2 - \frac{2\alpha}{n}$ where $n \in \mathbb N$
Then, we pick a $n_0 \in \mathbb N$ satisfying $\frac{1}{n_0}<\frac{\alpha^2-b}{2\alpha}$ or $\frac{2\alpha}{n_0}<\alpha^2-b$. We know $n_0$ exists from the archimedean property. Then we would have:
$\left( \alpha - \frac{1}{n_0} \right)^2 > \alpha^2 - \frac{2\alpha}{n_0}>\alpha^2-(\alpha^2-b)=b > t^2 \: \: \: \forall t \in T \tag{1}\label{1}$
From the last inequality, the proof concludes that $\left( \alpha - \frac{1}{n_0} \right)$ is an upper bound of $T$ which is less than $\alpha$, thus contradicting $\alpha=\sup T$.
And here is my reflection and question. I think that the only thing we can conclude from the inequality $\eqref{1}$ is that $\left( \alpha - \frac{1}{n_0} \right) \notin T$.
If $\left( \alpha - \frac{1}{n_0} \right)>0$, then the conclusion would hold true since $\left( \alpha - \frac{1}{n_0} \right) > t \: \: \: \forall t \in T$.
However, it can also be true that $\left( \alpha - \frac{1}{n_0} \right)<0$ and in this case I cannot find any contradiction.
Am I right or am I missing something? And how can I finish the proof from $\eqref{1}$?
| I think the proof is a bit sloppy. You can start by considering the case $b=0$. In that case $\alpha=0$ does the job. If $b>0$ your set $T$ is not empty. You can also notice that $T$ is symmetric, that is whenever $t\in T$, $-t\in T$. Therefore $\alpha\ge 0$.
In case $\alpha^2>b$ you necessarily have $\alpha>0$. Then, when you choose $n$ you can also require $1/n<\alpha$ (Archimedean property) and then the proof works.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Train Traveling from Aytown to Beetown Accident (AHSME 1955) I was working on the question: A train traveling from Aytown to Beetown meets with an accident after 1 hour. The train is stopped for 30 minutes, after which it proceeds at four-fifths of its usual rate, arriving at Beetown 2 hours late. If the train had covered 80 miles more before the accident, it would have been just one hour late. What is the usual rate of the train?
I thought my answer was correct, but it isn't. What was wrong with my work?
For the first scenario, I came up with this equation:
$d=r+(\frac{4}{5}r \cdot \frac{3}{2})$
The r represents the distance travelled in one hour, since the train meets the accident after one hour. The $\frac{3}{2}$ shows the time left after the accident (30 min); 2-(1/2) = 3/2.
For the second scenario, I came up with the equation $d=r+80+(\frac{4}{5}r \cdot \frac{1}{2})$. The $\frac{1}{2}$ is because the train was only 1 hour late and the accident took up 30 minutes, leaving 30 minutes (half an hour) to travel with $\frac{4}{5}$ speed.
I set the equations equal to each other and solved for r. I got that r is 100, but r is actually 20. What did I do wrong?
| There is a very simple way of doing such problems, not always taught.
The difference in "lateness" decreases by $1$ hr. if the train covers $80$ more miles before the accident, thus
$\dfrac{80}{\frac{4}{5}v} - \dfrac{80}{v} = 1,\; \rightarrow v = 20\; mph$
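Not in the original answer: a quick check of this with $v=20$ mph. The total distance $D=140$ miles used below is my own back-solved value from the first scenario (the problem never states it).

```python
v = 20.0    # usual rate (mph), the claimed answer
D = 140.0   # total distance (miles); back-solved, not given in the problem

def hours_late(miles_before_accident):
    t = miles_before_accident / v                  # before the accident
    t += 0.5                                       # 30-minute stop
    t += (D - miles_before_accident) / (0.8 * v)   # remainder at 4/5 speed
    return t - D / v                               # versus the usual time

print(hours_late(v * 1.0), hours_late(v * 1.0 + 80))  # 2.0 1.0
# the one-line identity from the answer:
print(80 / (0.8 * v) - 80 / v)  # 1.0
```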
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Suppose $\phi: \mathbb{Q} \to \mathbb{Z}$ is a homomorphism, where both $\mathbb{Q}$ and $\mathbb{Z}$ are under addition. Prove $\phi$ is the zero map
Suppose $\phi: \mathbb{Q} \to \mathbb{Z}$ is a homomorphism, where
both $\mathbb{Q}$ and $\mathbb{Z}$ are under addition. Prove $\phi$ is
the zero map.
Suppose $\phi$ is a non zero map. Meaning there is at least one $x \in \mathbb{Q}$ such that $\phi(x) = a$ for some non zero integer a. Since the definition of homomorphism involves two elements from the domain, I tried taking another element $y \in \mathbb{Q}$ such that $\phi(y) = b$ for some integer b. Since $\phi$ is homomorphism, $\phi(x+y) =\phi(x) + \phi(y)= a + b$.
My intention is to arrive at a contradiction that $\phi$ is not a homomorphism. But I am stuck.
| Let $k$ be the smallest/largest integer $>0$/$<0$ such that for some $x\in\mathbb{Q}$, $\phi(x)=k$. Then $\phi(\frac{x}{2})+\phi(\frac{x}{2})=k$, so $\phi(\frac{x}{2}) = \frac{k}{2}$, which is smaller/greater than $k$ and greater/smaller than $0$, contradicting the choice of $k$. (In fact, if $k$ is odd, $\frac{k}{2}$ is not even an integer.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is it possible to maximize $\frac{3t^2}{t^3+4}$ (where $t>0$) without taking derivative? To find the maximum of $f(t)=\dfrac{3t^2}{t^3+4}$ (for $t>0$) we can simply equate the derivative with zero,
$$f'(t)=0\Rightarrow 6t(t^3+4)-3t^2(3t^2)=0\Rightarrow -3t^4+24t=0\Rightarrow t=2$$
And $f_{max}=f(2)=1$.
I'm wondering is it possible to find the maximum without taking derivative? I'm eager to see other methods to maximize the function.
| You can use AM-GM as follows:
\begin{eqnarray*} \frac{3t^2}{t^3+4}
& = & \frac{3t^2}{\frac 12 t^3 + \frac 12 t^3 +4} \\
& \stackrel{AM-GM}{\leq} & \frac{3t^2}{3\sqrt[3]{\frac 12 t^3 \cdot \frac 12 t^3 \cdot 4}} \\
& = & 1
\end{eqnarray*}
Equality holds iff
$$\frac 12 t^3 = 4 \Leftrightarrow t= 2$$
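Not part of the original answer: a brute-force grid search confirming the maximum value $1$ near $t=2$.

```python
def f(t):
    return 3 * t * t / (t ** 3 + 4)

step = 1e-4
best_t = max((k * step for k in range(1, 100_001)), key=f)  # t in (0, 10]
print(best_t, f(best_t))  # ~ 2.0 and ~ 1.0
```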
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
A definition of Grothendieck fibrations without cartesian morphisms. Is this possible? I am thinking of a definition of (cartesian) fibration using the classical example of fibrations: Commutative squares in a category with pullbacks C fibred over C.
Definition: Given a functor $p:X \to C$, we say $p$ is a fibration if given any object $x \in X$ and a morphism $\alpha: c \to p(x)$, there exists a "universal lift" $\beta: x' \to x$ in $X$ such that $p(\beta) = \alpha$ with the property that for any other lift $\gamma: x'' \to x$ in $X$ (i.e. $p(\gamma) = \alpha$), the morphism $\gamma$ factors through a unique morphism $\theta: x'' \to x'$ (i.e. $\beta \circ \theta = \gamma$).
It seems to me that we should be able to define it using "universal lifts".
If this definition works, then morphisms in $X$ that arise as universal lifts will be cartesian morphisms.
I am not able to show the above definition is equivalent to the notion of Grothendieck fibrations. Maybe it is not! So please prove me wrong.
If there are references along this direction, I appreciate that as well.
| If you require that the lift factors through a unique morphism $\theta$ such that $p(\theta)$ is the identity, then you would have asserted that every morphism has a unique weak Cartesian lift. According to Exercise 1.1.6 in Jacobs' book Categorical Logic and Type Theory, the functor is a fibration if and only if every morphism has a unique weak Cartesian lift and weak Cartesian morphisms are closed under composition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this function continuous on rationals? Let $f: \mathbb{Q}\to \mathbb{Q}$ be given by $p \mapsto p+1$.
I know a function is continuous iff for any open set in $Y$ the inverse image is open. But I don't think $\mathbb{Q}$ has any open sets. I am very confused and a beginner in analysis and topology.
Please explain in detail if it is not continuous, and how do I construct continuous maps between Q to Q?
| If the topology of $\mathbb{Q}$ is the induced topology of $\mathbb{R}$ on $\mathbb{Q}$, then it is true.
Take the translation (and so a continuous map) $t\colon \mathbb{R}\to \mathbb{R}$ sending $x\mapsto x+1$.
Then, if $i\colon \mathbb{Q}\to \mathbb{R}$ is the inclusion, your map is $f=t\circ i$ (with codomain restricted to $\mathbb{Q}$), which is continuous (because it is the composition of continuous maps).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
find the $Var[X(t)]$, where $X(t)=\sum_{i=0}^{N(t)}Y_i$ , $N(t)$ is Poisson process and $Y_i$ is IIDRV the answer is $E[N(t)]Var[Y_1]+E^2(Y_1)Var[N(t)]$.
but I get this:
$$
Var[X(t)]=E[X^2(t)]-(E[X(t)])^2=E[N^2(t)]E[Y_i^2]-E^2[N(t)]E^2[Y_1]$$
where:
$$
E[X(t)]=E[E[\sum_{i=0}^{N(t)}Y_i|N(t)=n]]=E[N(t)]E[Y_1]\\
E[X^2(t)]=E[E[(\sum_{i=0}^{N(t)}Y_i)^2|N(t)=n]]=\sum_{n=0}^{\infty}n^2E[Y_i^2]P(N(t)=n)=E[N^2(t)]E[Y_i^2]
$$
Am I wrong? How do I get the correct result?
| Your calculation for $\mathbb{E}[X^2(t)]$ is not quite right: when you expand out the square, you would also get the cross terms, e.g. $\mathbb{E}[Y_i Y_j]$, $i \ne j$.
Analogous to the computation of $\mathbb{E}[X(t)]$, which uses the law of total expectation, there is a version for variances (the law of total variance):
$$
\mathrm{Var}(X(t)) = \mathbb{E}[\mathrm{Var}(X(t) \mid N(t))] + \mathrm{Var}(\mathbb{E}[X(t) \mid N(t)]) .
$$
(Can you prove this?) If you use this result instead, you should be able to obtain $\mathrm{Var}(X(t))$ using similar arguments that you already have used.
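Not part of the original answer: a Monte Carlo sketch of the target formula $Var[X(t)]=E[N(t)]Var[Y_1]+E^2(Y_1)Var[N(t)]$. The rate, the jump distribution ($Y_i\sim U(0,2)$), and the seed are arbitrary choices; Poisson sampling uses Knuth's classic multiplicative method, since the standard library has no Poisson sampler.

```python
import math
import random

random.seed(7)
lam_t = 4.0  # E[N(t)] = Var[N(t)] = lambda * t for a Poisson process

def poisson(lam):
    """Knuth's multiplicative method for one Poisson variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

xs = []
for _ in range(100_000):
    n = poisson(lam_t)
    xs.append(sum(random.uniform(0.0, 2.0) for _ in range(n)))

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# Y ~ U(0,2): E[Y] = 1, Var[Y] = 1/3
predicted = lam_t * (1 / 3) + 1.0 ** 2 * lam_t  # = 16/3
print(var, predicted)
```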
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4478900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving the limit $\lim_{y \to \infty}2y(\cos(x/y^{1/2})-1)=-x^2$ I am wondering how one could prove the following limit:
$$\lim_{y \to \infty}2y(\cos(x/y^{1/2})-1)=-x^2$$
I tried using a series expansion:
$$\begin{aligned}2y(\cos(x/y^{1/2})-1)&=2y\sum_{n=1}^\infty\frac{(-1)^n}{(2n)!}x^{2n}y^{-n}\\
&=2\sum_{n=1}^\infty\frac{(-1)^n}{(2n)!}x^{2n}y^{-(n-1)}\\
&=-x^2+2\sum_{n=2}^\infty\frac{(-1)^n}{(2n)!}x^{2n}y^{-(n-1)}\end{aligned}$$
But I don't see how (or whether) the second term goes to $0$.
| Since we can interchange limit and infinite sum when the series of functions converges uniformly (as it does here on, say, $y\ge 1$), we have that
$$\lim_{y\to\infty}2y(\cos(x/\sqrt y)-1)=\lim_{y\to\infty}\left(-x^2+2\sum_{n=2}^\infty\frac{(-1)^n}{(2n)!}x^{2n}y^{-(n-1)}\right)=-x^2+2\sum_{n=2}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}\lim_{y\to\infty}y^{-(n-1)}=-x^2$$
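A numeric sanity check (not part of the original answer), written with $\cos t-1=-2\sin^2(t/2)$ to avoid catastrophic cancellation at large $y$:

```python
import math

def g(x, y):
    # 2*y*(cos(x/sqrt(y)) - 1), in a numerically stable form
    return -4.0 * y * math.sin(x / (2.0 * math.sqrt(y))) ** 2

x = 3.0
for y in (1e2, 1e4, 1e8):
    print(y, g(x, y))  # tends to -x^2 = -9
```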
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
} |
Complex Analysis to solve this integral? $\int_0^{\pi/2} \frac{\ln(\sin(x))}{\sqrt{1 + \cos^2(x)}}\text{d}x$ Complex Analysis time! I need some help in figuring out how to proceed to calculate this integral:
$$\int_0^{\pi/2} \frac{\ln(\sin(x))}{\sqrt{1 + \cos^2(x)}}\text{d}x$$
I tried to apply what I have been studying in complex analysis, that is stepping into the complex plane. So
$$\sin(x) \to z - 1/z$$
$$\cos(x) \to z + 1/z$$
Obtaining
$$\int_{|z| = 1} \frac{\ln(z^2-1) - \ln(z)}{\sqrt{z^4 + 3z^2 + 1}} \frac{\text{d}z}{i}$$
I found out the poles,
$$z_k = \pm \sqrt{\frac{-3 \pm \sqrt{5}}{2}}$$
But now I am confused: how to deal with the logarithms? Also, what when I have both imaginary and real poles?
I am a rookie in complex analysis so please be patient...
| @Hans-André-Marie-Stamm, I hope you don't mind that I was unable to solve this problem using Complex Analysis, but here's a method that relies on the Beta Function and some algebraic work.
$$\begin{align}I&=\int_{0}^{\pi/2}\frac{\log\left(\sin(x)\right)}{\sqrt{1+\cos^2(x)}}dx;\ \cos(x)\rightarrow y\\&=\frac{1}{2}\int_{0}^{1}\frac{\log\left(1-y^2\right)}{\sqrt{1-y^4}}dy =\underbrace{\frac{1}{4}\int_{0}^{1}\frac{\log\left(1-y^4\right)}{\sqrt{1-y^4}}dy}_{I_1}+\underbrace{\frac{1}{4}\int_{0}^{1}\frac{\log\left(\frac{1-y^2}{1+y^2}\right)}{\sqrt{1-y^4}}dy}_{I_2}\end{align}$$
$$\begin{align}I_1=&\frac{1}{4}\underbrace{\int_{0}^{1}\frac{\log\left(1-y^4\right)}{\sqrt{1-y^4}}dy}_{y=z^{1/4}}=\frac{1}{16}\int_{0}^{1}z^{1/4-1}\frac{\log\left(1-z\right)}{\sqrt{1-z}}dz\\=&\frac{1}{16}\lim_{t \rightarrow 1/2}\frac{d}{dt}\mathfrak{B}\left(\frac{1}{4},t\right)=\frac{1}{16}\mathfrak{B}\left(\frac{1}{4},\frac{1}{2}\right)\left[\psi^{(0)}\left(\frac{1}{2}\right)-\psi^{(0)}\left(\frac{3}{4}\right)\right]\end{align}$$
$$\begin{align}I_2=&\frac{1}{4}\underbrace{\int_{0}^{1}\frac{\log\left(\frac{1-y^2}{1+y^2}\right)}{\sqrt{1-y^4}}dy}_{y=\sqrt{\frac{1-\sqrt{z}}{1+\sqrt{z}}}}=\frac{1}{32}\int_{0}^{1}z^{1/4-1}\frac{\log\left(z\right)}{\sqrt{1-z}}dz\\=&\frac{1}{32}\lim_{t \rightarrow 1/4}\frac{d}{dt}\mathfrak{B}\left(\frac{1}{2},t\right)=\frac{1}{32}\mathfrak{B}\left(\frac{1}{2},\frac{1}{4}\right)\left[\psi^{(0)}\left(\frac{1}{4}\right)-\psi^{(0)}\left(\frac{3}{4}\right)\right]\end{align}$$
Gathering both results:
$$\begin{align}I&=\frac{1}{32}\mathfrak{B}\left(\frac{1}{2},\frac{1}{4}\right)\left[\psi^{(0)}\left(\frac{1}{4}\right)+2\psi^{(0)}\left(\frac{2}{4}\right)-3\psi^{(0)}\left(\frac{3}{4}\right)\right]\\&=\frac{1}{32}\frac{\Gamma\left(1/4\right)\Gamma\left(1/2\right)}{\Gamma\left(3/4\right)}\left[\psi^{(0)}\left(\frac{1}{4}\right)+2\psi^{(0)}\left(\frac{2}{4}\right)-3\psi^{(0)}\left(\frac{3}{4}\right)\right]\end{align}$$
This result can be simplified if one applies Gamma's Reflection Formula, and Digamma's Reflection and Multiplication Formulas, obtaining:
$$I=\int_{0}^{\pi/2}\frac{\log\left(\sin(x)\right)}{\sqrt{1+\cos^2(x)}}dx=\frac{\log(2)-\pi}{16\sqrt{2\pi}}\Gamma^2\left(\frac{1}{4}\right)$$
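Not part of the original answer: a direct numeric cross-check of the closed form (the midpoint rule copes with the integrable logarithmic singularity at $x=0$).

```python
import math

def integrand(x):
    return math.log(math.sin(x)) / math.sqrt(1.0 + math.cos(x) ** 2)

N = 200_000
h = (math.pi / 2) / N
numeric = h * sum(integrand((j + 0.5) * h) for j in range(N))
closed = (math.log(2) - math.pi) / (16 * math.sqrt(2 * math.pi)) \
         * math.gamma(0.25) ** 2
print(numeric, closed)  # both ~ -0.8025
```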
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Why can't we solve $\lim\limits_{(x, y) \to (0, 0)} \frac{xy}{\sqrt{x^2 + y^2}}$ through dividing by xy in both numerator and denominator? When solving this limit, we use methods like $\epsilon$ - $\delta$ definition and polar coordinate convertion. And I wonder if we can solve it by dividing by $xy$ in both numerator and denominator like this
$\lim\limits_{(x, y) \to (0, 0)} \frac{xy}{\sqrt{x^2 + y^2}} = \lim\limits_{(x, y) \to (0, 0)} \frac{1}{\sqrt{\frac{1}{y^2} + \frac{1}{x^2}}} = 0$
Thanks in advance!
| No, not really. The reason is the usual one: you can't divide by zero, so you cannot perform that division when $xy=0$. But $xy=0$ holds on both coordinate axes; in particular, that set includes lots of points other than just $(0,0)$.
You can see that the first expression is only undefined when both $x$ and $y$ are equal to zero, i.e. only at the point $(0,0)$ whereas the expression that you have written is undefined when either $x$ or $y$ is equal to zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Hand calculation of divisor summatory function Maybe this question is stupid, but there was a problem in a math competition (not even in the highest stage) in my county which asked to
Find $ \sum_{n\leq390} d(n)$, where $d(n)$ is the number of positive divisors of $n$.
It is well known that this is equal to $ \sum_{n\leq390}\lfloor\frac{390}{n}\rfloor$.
The problem is that at that competition calculators are not allowed, and problems shouldn't require any non-elementary knowledge, so how could you get the exact value of this sum (it was not a multiple choice test, so you can't just estimate it) without brutally evaluating every single fraction (which would almost surely mean making some mistake along the way)? Maybe $390$ is a special number in this context?
| It's well known that
$$s(x)=\sum_{n\leq x} d(n)=\sum_{n\leq x}\left\lfloor\frac{x}{n}\right\rfloor,$$ indeed. But I thought a more efficient version of that was general knowledge as well:
$$s(x)=2\,\sum_{n\leq u}\left\lfloor\frac{x}{n}\right\rfloor-u^2,$$
where $u=\lfloor\sqrt{x}\rfloor$ (that's the famous hyperbola method invented by Dirichlet, ask a search engine or any textbook of elementary number theory).
In this case, we'd have $x=390$ and $u=19$, and a sum of just 19 easily computable summands can be done with pencil and paper (I've done it). But it's still tedious, and doesn't make a lot of sense as a contest problem. After that, we know the result is $2393$, great!
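Both formulas are one-liners to verify mechanically (not part of the original answer):

```python
x = 390
naive = sum(x // n for n in range(1, x + 1))  # sum of d(n) for n <= x
u = int(x ** 0.5)                             # u = floor(sqrt(390)) = 19
hyperbola = 2 * sum(x // n for n in range(1, u + 1)) - u * u
print(naive, hyperbola)  # 2393 2393
```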
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding connected components of a topology space I would like to understand how to find the connected components of a given topological space.
I have this example, that I understand:
If I have $\mathbb{Q}$ with the euclidean topology, the connected components are just the singleton elements of $\mathbb{Q}$.
Proof: If I want to show that the only connected sets are the singletons, I take a set that is not a singleton and call it $C$. There exist $x,y\in C$ such that $x \neq y$. Then take $(-\infty,i)$ and $(i,\infty)$ as a separation for $C$, where $i$ is an irrational number between $x$ and $y$.
But if I have for example $\mathbb{Q}$ with another topology, e.g. the cofinite or cocountable topology, or just $\mathbb{R}$ with the cofinite or cocountable topology, how should I proceed?
| You have to study case by case. For example, the cofinite topology on an infinite set is connected simply because there are no two disjoint nonempty open subsets in it. As you can see, this is a very different approach from the one you used for the standard $\mathbb{Q}$.
There is no general method, unfortunately. There are some tools (e.g. continuous image of a connected space is connected), but ultimately it boils down to some level of cleverness.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Root in $(1,2]$ of Equation $x^n-x-n=0$ Consider the equation $x^n-x-n=0$ with $n\in\mathbb{N},n\geq2.$
$a)$ Show that this equation has exactly one solution $u_n\in(1,2].$
$b)$ Show that the sequence $\left\{u_n\right\}$ is decreasing.
$c)$ Determine $L=\lim_{n\rightarrow\infty}u_n.$
Here is what I've done so far on this problem.
$a)$
Let $f_n(x)=x^n-x-n,$ then clearly $f_n$ is continuous and $f_n(1)=-n<0.$
Let $g(n)=f_n(2)=2^n-2-n,$ then $g'(n)=2^n\cdot\ln2-1>0$ as $n\geq2,$ hence
$$f_n(2)=g(n)\geq g(2)=2^2-2-2=0.$$
Furthermore, $\frac{d}{dx}f_n(x)=nx^{n-1}-1\geq1-1=0$ as $n\geq2$ and $x\geq1.$
Therefore, $f_n$ is non$-$decreasing on $[1,2]$ and $f_n(1)<0\leq f_n(2),$ and the Intermediate Value Theorem implies the unique root $u_n$ as desired.
$c)$ Note that $u_n^n-u_n-n=0\Leftrightarrow u_n=\sqrt[n]{u_n+n}$ and for $a>0$ fixed,
$$\lim_{n\rightarrow\infty}\ln(\sqrt[n]{a+n})=\lim_{n\rightarrow\infty}\frac{\ln(a+n)}{n}=\lim_{n\rightarrow\infty}\frac{1}{a+n}=0$$
hence
$$1=\lim_{n\rightarrow\infty}\sqrt[n]{1+n}\leq\lim_{n\rightarrow\infty}\sqrt[n]{u_n+n}\leq\lim_{n\rightarrow\infty}\sqrt[n]{2+n}=1$$
so $L=1.$
I'm currently stuck with part $b)$, so any hints/ideas/comments are appreciated. Thank you!
| First, I shall prove a stronger claim.
Claim: $u_n \geq 1 + \frac{2}{n}$ for $n \geq 2$.
Proof: Applying IVT on $\left(1 + \frac{2}{n}, 2\right)$, we simply have to evaluate $f_n\left(1 + \frac{2}{n}\right)$. Now,
$$
f_n\left(1 + \frac{2}{n}\right) = \left(1 + \frac{2}{n}\right)^n - \left(1 + \frac{2}{n}\right) - n
$$
For $n \geq 7$, notice that $f_n\left(1 + \frac{2}{n}\right) \leq e^2 - (n + 1)$, so the inequality holds. For $2\leq n\leq 6$, simply substitute the values in.
Claim: $\frac{\sqrt{n^2 + 4} - (n - 2)}{2} \leq 1 + \frac{2}{n}$.
Proof: The inequality is equivalent to
$$\begin{align*}
n\sqrt{n^2 + 4} - n(n - 2) &\leq 2n + 4 \\
n\sqrt{n^2 + 4} &\leq n^2 + 4 \\
n^2(n^2 + 4) &\leq (n^2 + 4)^2 \\
0 &\leq 4(n^2 + 4)
\end{align*}$$
which holds for all $n > 0$.
Now, apply the IVT on $(1, u_{n - 1})$. In particular, let $u = u_{n - 1}$. Then,
$$\begin{align*}
u^{n - 1} - u - (n - 1) = 0 &\implies u^{n - 1} = u + (n - 1) \\\\
u^n - u - n &= u(u + (n - 1)) - u - n \\
&= u^2 + (n - 2)u - n
\end{align*}$$
Now, note that this quadratic has positive root
$$
u = \frac{-(n - 2) + \sqrt{(n - 2)^2 + 4n}}{2} = \frac{-(n - 2) + \sqrt{n^2 + 4}}{2} \leq 1 + \frac{2}{n}
$$
Since $u \geq 1 + \frac{2}{n}$, we have that $f(u) = u^n - u - n > 0$. Clearly, $f(1) = -n < 0$, so applying IVT gives part (b).
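Not part of the original answer: a bisection check of monotonicity, and of the auxiliary bound $u_n \geq 1 + \frac{2}{n}$, for small $n$.

```python
def u(n, iters=80):
    """Root of x^n - x - n in (1, 2], by bisection:
    f(1) = -n < 0 and f(2) = 2^n - 2 - n >= 0 for n >= 2."""
    lo, hi = 1.0, 2.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid ** n - mid - n < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ns = range(2, 13)
roots = [u(n) for n in ns]
print(roots[0], roots[-1])  # u_2 = 2 and u_12 ~ 1.24
```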
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Solve the ordinary differential equation $\frac{d^2}{dx^2}F(x)=\frac{1}{F(x)^2}$ I've been trying to solve the ordinary differential equation
$$\frac{d^2}{dx^2}F(x)=\frac{1}{F(x)^2}$$
I tried simplifying, and then simplifying even further, and found that this function has to be of the form $e^{-cx}$ where $c$ is expressed in terms of the function itself; therefore it is a recurring (self-referential) function... So I need to know a function (that is not recurring) that is inversely proportional to its second derivative. I've been trying to solve this for a long time but couldn't find anything online that was much help either. Help would be appreciated. Thanks in advance.
| It is a VERY difficult equation. If you are just interested in a solution, try this from Wolfram:
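Not in the original answer, but it may help to see why nothing elementary comes out: the standard first reduction (multiply by $F'$ and integrate, an "energy" integral) already leads only to an implicit solution. A sketch:

```latex
F'F'' = \frac{F'}{F^2}
\;\Longrightarrow\;
\tfrac{1}{2}\bigl(F'\bigr)^2 = C - \frac{1}{F}
\;\Longrightarrow\;
x = \pm \int \frac{dF}{\sqrt{2C - 2/F}} + C_2 ,
```

and the remaining quadrature has no elementary antiderivative in general, which matches the complicated implicit answer Wolfram returns.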
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Square root of a complex number with infinitesimal imaginary part I am going through lecture in Quantum Field Theory and I am encountering the square root of a complex number with infinitesimally small imaginary part: $$\lim_{y\rightarrow0^+}\sqrt{-x+iy}=-i\sqrt{x}$$ and $$\lim_{y\rightarrow0^+}\sqrt{-x-iy}=i\sqrt{x}$$
But I don't understand how these equalities are established.
The standard way of taking the square root is by writing the complex number in polar form and then the result is just the square root of the magnitude times the exponential of the imaginary unit times the argument of the complex number divided by two in addition to multiples of $\pi$. But when I take the limit of the imaginary part going to zero I just get a vanishing argument and I don't get the minus or plus sign in both the square roots. What am I not getting right here?
| Ok I think I got it now thanks to peterwhy and Cyclotomic Field. Correct me if I am wrong.
$$\lim_{y\rightarrow 0^+} \sqrt{-x\pm iy}=\lim_{y\rightarrow 0^+} (x^2+y^2)^{1/4}e^{i\,\operatorname{atan2}(\pm y,\,-x)/2}$$
For the plus case: $$\lim_{y\rightarrow 0^+} \operatorname{atan2}(y,-x)=\pi$$ and for the minus case: $$\lim_{y\rightarrow 0^+} \operatorname{atan2}(-y,-x)=-\pi$$
Therefore $$\lim_{y\rightarrow 0+} \sqrt{-x\pm iy}=\mp i \sqrt{x}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of integer solutions of a multivariate polynomial Given a single-variable polynomial, we all know that the number of its roots is bounded in terms of its degree. A polynomial here is a polynomial with integer coefficients, and a root of a polynomial is a real root.
Given a multivariate polynomial, this phenomenon, in its general form, stops working. However, by the well-known theorem of Bézout, we can still say that either it has infinitely many roots, or it has only finitely many, their number bounded above in terms of its degree.
My question is about the number of integer roots: Does there exist a function $B=B_n:\mathbb N\rightarrow \mathbb N$ such that, given a polynomial $P(x_1,...,x_n)$ of degree $d$, we can say that the set of INTEGER roots of $P$ either contains at most $B(d)$ elements or is infinite?
Thanks in advance.
| There is no such bound.
Given an equation $X$ over $\mathbf{Q}$ with infinitely many rational points, you can always scale the coefficients so that any finite subset of the rational points all become integral points. (Put them all under a common denominator $N$, then replace each variable $x$ by $x/N$.) Take an elliptic curve $E$ with positive rank, that is, $E(\mathbf{Q})$ is infinite. (They exist: for example, $E: y^2 = x^3 - 2$.) After scaling, you can find an elliptic curve with as many integral points as you like. (You can take $y^2 = x^3 - 2 N^6$ for a common denominator $N$ of the finite set of rational points.) If such a bound as in your question existed then, since these curves all have the same degree, at some point these scaled elliptic curves would have to have infinitely many integral points. But $E(\mathbf{Z})$ is always finite for any elliptic curve, by a theorem of Siegel.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is $\pi$ the radius of convergence of the taylor series of $f(z)=\frac{1}{1+e^z}$? I'm having trouble with the following exercise:
is it true that the radius of convergence of the Taylor series of the function $$f(z)=\frac{1}{1+e^z}$$ centered at $z=0$ is $\pi$?
I tried to evaluate the Taylor series combining the two identities: $$e^z=\sum_{n\geq0}\frac{z^n}{n!}$$
$$\frac{1}{1-z} = \sum_{n\geq0}z^n,|z|<1$$
but it got really messy very quickly and I wasn't able to conclude anything about the radius of convergence.
Given that this is a True/False type question, is it possible to conclude anything without actually evaluating the Taylor series? If so, how?
| Intuitively the radius of convergence is the radius of the largest disk around $0$ that you can find in which $1/(1+e^z)$ is analytic, that's the distance to the closest points where $1 + e^z = 0$. These points are $\pm i\pi$, so the radius of convergence is $\pi$.
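A numerical sketch supporting this (not needed for the True/False answer): the Taylor coefficients can be computed exactly from the identity $f(z)(1+e^z)=1$, and the ratio $|a_{n-2}/a_n|$ of consecutive nonzero coefficients tends to $R^2$.

```python
from fractions import Fraction
from math import factorial, pi, sqrt

# Taylor coefficients a_n of f(z) = 1/(1 + e^z) at z = 0, obtained by
# comparing coefficients of z^n in f(z) * (1 + e^z) = 1:
#   2*a_0 = 1,  and  2*a_n + sum_{k<n} a_k / (n-k)! = 0  for n >= 1.
N = 22
a = [Fraction(1, 2)]
for n in range(1, N):
    s = sum(a[k] * Fraction(1, factorial(n - k)) for k in range(n))
    a.append(-s / 2)

# f(z) - 1/2 is odd, so the even coefficients vanish for n >= 2; the
# ratio of consecutive nonzero terms estimates R^2, hence R itself.
est = sqrt(abs(a[19] / a[21]))
print(est)  # ≈ 3.14159...
```

The next singularities sit at $\pm 3i\pi$, so this ratio estimate converges geometrically fast and agrees with $\pi$ to many digits already at this order.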
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Lemma 10.32 of Lee's Introduction to Smooth Manifolds I am reading over the proof of Lemma 10.32 (Local Frame Criterion for Subbundles) in Lee's Introduction to Smooth Manifolds.
The lemma says
Let $\pi: E \rightarrow M$ be a smooth vector bundle and suppose that for each $p\in M$ we are given an $m$-dimensional linear subspace $D_p \subseteq E_p$. Then $D = \cup_{p \in M} D_p \subseteq E$ is a smooth subbundle of $E$ iff each point of $M$ has a neighborhood $U$ on which there exist smooth local sections $\sigma_1, \cdots, \sigma_m: U \rightarrow E$ with the property that $\sigma_1(q), \cdots, \sigma_m(q)$ form a basis for $D_q$ at each $q \in U$.
Overall I understand the proof of this lemma, besides the part where we need to show that $D$ is an embedded submanifold with or without boundary of $E$. Professor Lee's proof says that
it suffices to show that each $p \in M$ has a neighborhood $U$ such that $D \cap \pi^{-1}(U)$ is an embedded submanifold (possibly with boundary) in $\pi^{-1}(U) \subseteq E$.
It is not very obvious to me why showing this is sufficient. Could someone explain the logic to me?
Edit: Here's my attempt to reason it: By Theorem 5.8, if $D ∩ \pi^{-1}(U)$ is an embedded submanifold in $\pi^{-1}(U)$, it satisfies the local k-slice condition. Now because $D$ is a union of $D ∩ \pi^{-1}(U)$ over different neighborhoods of $p \in M$, it satisfies the local k-slice condition as well, and hence again by Theorem 5.8, $D$ is an embedded submanifold.
Please let me know if anything is wrong and how it can be corrected.
Thank you very much.
Here's a screenshot of the Lemma and its (partial) proof:
| Unfortunately, Theorem 5.8 only works for smooth manifolds without boundary. But here $E$ probably has boundary. A better theorem is Theorem 5.51, but it requires that $M$ be a smooth manifold without boundary.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Do Steenrod squares have naturality with homomorphisms that don't come from continuous maps I was reading about the Steenrod squaring operations in Milnor and Stasheff's Characteristic Classes. There is an axiom regarding naturality which says that given a continuous map $f:(X,Y)\rightarrow (X',Y')$, with $f^*$ the induced map on cohomology groups, then $Sq^i\circ f^*=f^*\circ Sq^i$.
Given a map $g^*$ between cohomology groups that does not come from a continuous map between topological spaces, do we still get naturality with the Steenrod squares?
The motivation behind this is that, assuming coefficients in $\mathbb{Z}_2$, there is a monomorphism from the cohomology of the real Grassmannian to the cohomology of $\mathbb{R}P^\infty\times ...\times\mathbb{R} P^\infty$. As the Grassmannian is the base of the universal bundle and the squaring operations are easily computable on $\mathbb{R}P^\infty$, this would allow a description of the action of the Steenrod squares on any real vector bundle. Unfortunately, I am not sure there is a continuous map that induces said monomorphism.
| No, this is certainly not true. For instance, take any space $X$ for which $Sq^i:H^n(X)\to H^{n+i}(X)$ is nontrivial for some $i$ and $n$, and consider $g^*:H^*(X)\to H^*(X)$ which is $0$ in degree $n$ but the identity in degree $n+i$. Then $g^*\circ Sq^i$ is nonzero in degree $n$ but $Sq^i\circ g^*$ is $0$ in degree $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the image of Skolemization? For simplicity, consider first-order logic with one binary relation in the signature.
Any formula $\forall x \exists y. \phi(x,y)$ can be Skolemized, converting it into $\forall x. \phi(x,f(x))$ and thereby adding a function symbol to the signature.
My question is: what precisely are the cases in which we can go the other way around? So we are given a formula in first-order logic with one binary relation and several function symbols. In which cases can we write a logically equivalent formula with no function symbols and one binary relation symbol? Maybe it's impossible to capture all such cases, but I feel that the inverse of Skolemization is easier to get a handle on.
For more [probably unnecessary] rigor, one might say that two formulas over different signatures are never logically equivalent. So clearly I mean to have the same set of models after ignoring the interpretation of the function symbols in the functional signature.
| One thing to think about is that a function $f$ is a binary relation that is left-total and right-unique (functional).
So, if you are given $\forall x. \phi(x,f(x))$, and you want to translate that back to some 2-place predicate $\phi(x,y)$, then you'll have to do something like: $$\forall x \exists y (\phi(x,y) \land \forall z (\phi(x,z) \to z = y))$$
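A brute-force sketch of this on a small finite domain (illustrative only; the helper names are mine): over a 3-element set, the displayed sentence holds exactly when the binary relation $\phi$ is the graph of a function, i.e. total and right-unique.

```python
from itertools import product

D = range(3)  # a small finite domain

def is_function_graph(rel):
    # A function graph assigns exactly one y to every x (total + right-unique).
    return all(len([y for y in D if (x, y) in rel]) == 1 for x in D)

def sentence_holds(rel):
    # forall x exists y ( phi(x,y) and forall z (phi(x,z) -> z = y) )
    return all(
        any((x, y) in rel and all((x, z) not in rel or z == y for z in D)
            for y in D)
        for x in D)

# Exhaustively check the equivalence over all 2^9 binary relations on D.
for bits in product([0, 1], repeat=9):
    rel = {(x, y) for (x, y), b in zip(product(D, D), bits) if b}
    assert sentence_holds(rel) == is_function_graph(rel)
print("checked all", 2**9, "relations on a 3-element domain")
```

This is of course only a finite model check, but it makes the intended reading of the formula concrete.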
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4480883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integrating $\frac{1}{z^4}$ over the unit circle I need to integrate
$$\int_{\vert z \vert = 1} \frac{1}{z^4}dz$$
I appeal to
$$\int_\gamma f(z)\, dz = \int_a^b f(\gamma(t))\,\gamma'(t)\, dt$$
And I get
$$\int_0^{2 \pi} \frac{1}{e^{4 i t}} ie^{it}dt$$
this equals
$$i\int_0^{2 \pi}e^{-3 i t} dt=-\frac{1}{3}(e^{-3 i t})\bigg\vert_0^{2 \pi}=0$$
Did I do this correctly?
| That's perfect. Another way to solve it that can be useful for complicated integrals is the residue theorem. Since the only pole of the function inside your contour is at $z=0$, and $Res(f,0)=0$, we get that
$$\int_\gamma \frac{1}{z^4}dz=2\pi i Res(f,0)=0$$
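A quick numerical sanity check of this value (a plain Riemann-sum sketch, not part of the argument):

```python
import cmath

# Approximate the contour integral of 1/z^4 over |z| = 1 using the
# parametrization z = e^{it}, dz = i e^{it} dt, t in [0, 2*pi].
n = 10000
total = 0j
for k in range(n):
    t = 2 * cmath.pi * k / n
    z = cmath.exp(1j * t)
    total += (1 / z**4) * 1j * z * (2 * cmath.pi / n)
print(abs(total))  # ≈ 0, matching the residue computation
```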
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4481015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can I use the mapping notation to describe this vector space? So, I recently learned that the statement $v \in \mathbf F^S$, where $\mathbf F$ is a field and $S$ is a set, can be rewritten as $v : S \rightarrow \mathbf F$.
Does this imply that $ x\in \mathbf F^n$ can be rewritten as $x: n\rightarrow \mathbf F$?
Is $x$ a mapping from $n$ to $\mathbf F$?
| As @peek-a-boo pointed out, there's a way to. The caveat is that $n$ refers to an $n$ element set.
The vector space $\Bbb F^n$ can then be considered as the space of functions from $\{1,2,\dots,n\}$ to $\Bbb F$.
This is a sort of functional analysis approach.
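A small illustrative sketch (the dict-based encoding is just one way to model such functions): vectors in $\Bbb F^3$ as functions from the index set $\{1,2,3\}$ to $\Bbb F$, with the pointwise operations of $\mathbf F^S$.

```python
from fractions import Fraction as F

# A vector in F^3 (here F = the rationals) viewed as a function
# v : {1, 2, 3} -> F, encoded as a dict keyed by the index set.
v = {1: F(1, 2), 2: F(0), 3: F(-3)}
w = {1: F(1), 2: F(2), 3: F(3)}

# Vector addition and scalar multiplication are pointwise, exactly as
# they are defined for the function space F^S.
add = {i: v[i] + w[i] for i in v}
scale = {i: F(2) * v[i] for i in v}
print(add, scale)
```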
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4481333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
biholomorphic function $f\colon \mathbb{C}\to\mathbb{C}\setminus \{0\}$ Does there exist a biholomorphic function $f\colon\mathbb{C}\to\mathbb{C}\setminus \{0\}$?
My idea:
Suppose, there exists a biholomorphic function $f\colon\mathbb{C}\to\mathbb{C}\setminus \{0\}$. Because of $f(z)\neq 0 \ \forall z\in\mathbb{C}$ there exists $\delta>0$ with $|f(z)|\geq \delta\ \forall z \in\mathbb{C}$.
Define $g\colon\mathbb{C} \to \mathbb{C}, g(z)=\frac{1}{f(z)}.$ Then $g$ is holomorphic and bounded by $\frac{1}{\delta}$. Because of Liouville $g$ (and so $f$) is constant: contradiction!
Is this correct?
Edit:
$\mathbb{C}$ is simply-connected.
Suppose, $f$ is biholomorphic.
Then $f(\mathbb{C})=\mathbb{C}\setminus\{0\}$ is simply-connected.
This is a contradiction because $\mathbb{C}\setminus\{0\}$ is not simply-connected.
| Let $\gamma$ be the image of the unit sphere $\partial B_1(0)$ under the proposed inverse $g: \mathbb C^\times \longrightarrow \mathbb C$ of your biholomorphic map $f : \mathbb C\longrightarrow \mathbb C^\times$. Then prove and study the consequences of the equality of integrals
$$
\frac 1 {2\pi i}\int_\gamma \frac{f'(w)}{f(w)} dw =\frac 1 {2\pi i}\int_{\partial B_1(0)}\frac {dz} z
$$
by observing that $f'/f$ is holomorphic on $\mathbb C$.
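A numerical sketch of the right-hand integral (illustrative only, and it spoils part of the hint): it computes the winding number of the unit circle about $0$, which is $1$.

```python
import cmath

# Approximate (1 / 2*pi*i) * integral over |z| = 1 of dz / z, the
# winding number of the unit circle around the origin, by a Riemann sum.
n = 10000
total = 0j
for k in range(n):
    t = 2 * cmath.pi * k / n
    z = cmath.exp(1j * t)
    total += (1 / z) * 1j * z * (2 * cmath.pi / n)
winding = total / (2j * cmath.pi)
print(winding.real)  # ≈ 1
```

Since $f'/f$ would be holomorphic on all of $\mathbb C$, Cauchy's theorem forces the left-hand integral to vanish, which is incompatible with this value.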
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4481454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.