Why does changing the placement of the $i$ switch the fraction from positive to negative? I am doing an exercise on Khan Academy; I answered a question, asked for the explanation, and I don't understand this step. Why does changing the placement of the $i$ change the fraction from negative to positive?
| It didn't. The expression is $$\pm \color{red}{-}\frac{i \sqrt{3}}{2},$$ where I have highlighted the extra $-$ sign in red. This is what changes the $\pm$ symbol to $\mp$, not the placement of $i$ after $\sqrt{3}/2$. For instance, we have $$\pm (-1) = \mp 1,$$ although in the absence of another $\pm$ symbol in the expression, the choice of $\mp$ or $\pm$ is not important. What I mean is that we typically only write $\mp$ if there is another choice of sign elsewhere in the expression; e.g., in the identity
$$\sin(a \pm b) = \sin a \cos b \pm \cos a \sin b,$$
we must choose on both sides of the equation the top sign $+$ or the bottom sign $-$, but cannot choose $+$ on the left and $-$ on the right. So in the corresponding cosine identity
$$\cos(a \pm b) = \cos a \cos b \mp \sin a \sin b,$$
the use of $\mp$ on the right is necessary in order for the sign choice to be correct: if we choose the top symbol $+$ on the left, we must also choose the top symbol $-$ on the right.
But when there is no such restriction, e.g.,
$$\frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$ then the expression $$\frac{-b \mp \sqrt{b^2 - 4ac}}{2a}$$ is equivalent, and $\pm$ is typically preferred instead of $\mp$, because the use of $\mp$ implies that there exists some other $\pm$ in the expression for which the choice of sign must be made "top-top" or "bottom-bottom" as exemplified by the cosine identity above.
Note that on occasion, some authors might violate the "top-top"/"bottom-bottom" convention when writing an expression, for instance
$$\pm 1 \pm x \pm x^2 \pm x^3 \pm \cdots$$ could imply that the choice of sign is independent for each term in the sum. For if the restriction were to apply, the author would probably have written instead
$$\pm (1 + x + x^2 + x^3 + \cdots),$$
which would force all signs to be the same. As you can see, the use of $\pm$ and $\mp$ is sometimes not clear, in which case it may be necessary for the author to specify or explain what is meant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
separation property for schemes Let $A$ be a ring, then we have a scheme $T=\mathrm{Spec}A[y_i; i\in I]$ where $I$ is an index set. Let $B$ be a closed subset of $T$ and $t\in T-B$ be a point of $T$, then I'd like to ask how to find $g\in A[y_i; i\in I]$ such that $B\subset V(g)$ and $t\notin V(g)$.
This is used in a proof due to B. Poonen that a universally closed morphism of schemes is quasi-compact.
Any help would be appreciated.
| The $D(g)$ (for $g\in A[y_i:i\in I]$) form a basis for the topology on $T$, and $T - B$ is open. So there is a $g$ such that $t\in D(g)\subset T - B$. This means $t\notin V(g)$ and $B\subset V(g)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find range of values of $a,b,c$ such that $ax^2+bx+c$ satisfies given conditions Let $y = f(x) = ax^2+bx+c$ where $a\neq0$. Find the range of values $a,b$ and $c$ such that it satisfies the following:
*
*$0 \le f(x) \le 1 \quad \forall \space x \in [0,1]$
What I have found so far:
*
*For $x=0$, we get $0 \le c \le 1$
*For $x=1$, we get $0 \le a+b+c \le 1$
*If roots exist, then they should lie outside the interval $[0,1]$.
Therefore, ${{-b - \sqrt{D}}\over{2a}} < 0$ and ${{-b + \sqrt{D}}\over{2a}} >1$. I tried to solve these but didn't get any fruitful results.
EDIT 1: As explained in comments by @insipidintegrator, this is only for distinct roots.
*Vertex of quadratic $f(x)$ is $\big( \frac{-b}{2a}, -\frac{D}{4a}\big)$. If the x-coordinate of the vertex lies between 0 and 1, then y-coordinate should also be between 0 and 1. But I have no idea how to proceed from here.
I want to find some lower/upper limits for $a,b,c$ or any kind of relation between them. Any solution or hints for solving this problem is greatly appreciated.
| We may obtain the necessary and sufficient conditions for
$$a, b, c \in \mathbb{R}, ~ 0 \le ax^2 + bx + c \le 1, \forall x \in [0, 1].$$
Fact 1: Let $A, B, C \in \mathbb{R}$. Then
$$Ax^2 + Bx + C \ge 0, ~ x \ge 0
\iff A \ge 0, ~ C \ge 0, ~ B \ge -\sqrt{4AC}.$$
(The proof is given at the end.)
Fact 2: Let $A, B, C \in \mathbb{R}$. Then
$$Ax^2 + Bx + C \ge 0, ~ x\in [0, 1] \iff
\left\{\begin{array}{l}
A + B + C \ge 0\\[5pt]
C \ge 0\\[5pt]
B + 2C \ge - \sqrt{4(A + B + C)C}.
\end{array}
\right.
$$
(The proof is given at the end.)
Now, using Fact 2, we have
$$ax^2 + bx + c \ge 0, ~ x \in [0, 1]
\iff
\left\{\begin{array}{l}
a + b + c \ge 0\\[5pt]
c \ge 0\\[5pt]
b + 2c \ge - \sqrt{4(a + b + c)c}
\end{array}
\right.$$
and
$$ - ax^2 - bx - c + 1 \ge 0,~ x \in [0, 1]$$
$$\iff
\left\{\begin{array}{l}
-a - b - c + 1 \ge 0\\[5pt]
- c + 1 \ge 0\\[5pt]
- b + 2(-c + 1) \ge - \sqrt{4(-a - b - c + 1)(-c + 1)}.
\end{array}
\right.$$
Thus, we have
$$0 \le ax^2 + bx + c \le 1, ~ x\in [0, 1]$$
$$\iff \left\{\begin{array}{l}
0 \le a + b + c \le 1\\[5pt]
0 \le c \le 1\\[5pt]
b + 2c \ge - \sqrt{4(a + b + c)c}\\[5pt]
b + 2c \le 2 + \sqrt{4(-a - b - c + 1)(-c + 1)}.
\end{array}
\right.$$
$\phantom{2}$
Proof of Fact 1:
“$\Longleftarrow$”: $\quad$ For $x\ge 0$, we have $Ax^2+Bx+C\ge Ax^2 - \sqrt{4AC}\ x + C = (x\sqrt{A} - \sqrt{C})^2\ge 0$.
“$\Longrightarrow$”: $\quad$ Clearly, $A \ge 0$ and $C \ge 0$.
If $B < - \sqrt{4AC}$ and $AC > 0$,
then $Ax_0^2 + Bx_0 + C < 0$ where $x_0 = \sqrt{\frac{C}{A}} > 0$. Contradiction.
If $B < - \sqrt{4AC}$ and $AC = 0$,
there exists $x_1 > 0$ such that $Ax_1^2 + Bx_1 + C < 0$ (easy). Contradiction.
We are done.
$\phantom{2}$
Proof of Fact 2:
By Fact 1, it suffices to prove that
$$Ax^2 + Bx + C \ge 0, ~ x\in [0, 1]
\iff (A + B + C)s^2 + (B + 2C)s + C \ge 0, ~ s \ge 0.$$
“$\Longleftarrow$”: $\quad$ Clearly, $A+B+C \ge 0$.
If $x = 1$, then $Ax^2 + Bx + C = A + B + C \ge 0$.
If $x\in [0, 1)$, letting $s = \frac{x}{1 - x} \ge 0$, we have
$x = \frac{s}{1+s}$ and
$$Ax^2 + Bx + C = \frac{1}{(1+s)^2}[(A+B+C)s^2+(B+2C)s+C] \ge 0.$$
“$\Longrightarrow$”: $\quad$ For $s\ge 0$, letting $x = \frac{s}{1+s}\in [0,1]$, we have
$$Ax^2+Bx+C = \frac{1}{(1+s)^2}[(A+B+C)s^2+(B+2C)s+C] \ge 0$$
which results in
$(A+B+C)s^2+(B+2C)s+C\ge 0$.
We are done.
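As a sanity check, the boxed characterization can be tested numerically. The sketch below (function names and the sampling range are my own choices) draws random $(a,b,c)$ and verifies that whenever the four conditions hold, $ax^2+bx+c$ indeed stays in $[0,1]$ on a grid over $[0,1]$:

```python
import math
import random

def satisfies_conditions(a, b, c):
    # The four boxed conditions
    s = a + b + c
    if not (0 <= s <= 1 and 0 <= c <= 1):
        return False
    if b + 2*c < -math.sqrt(4*s*c):
        return False
    if b + 2*c > 2 + math.sqrt(4*(1 - s)*(1 - c)):
        return False
    return True

def in_unit_band(a, b, c, pts=101, tol=1e-9):
    # 0 <= ax^2 + bx + c <= 1 on a grid over [0, 1]
    return all(-tol <= a*x*x + b*x + c <= 1 + tol
               for x in (i/(pts - 1) for i in range(pts)))

random.seed(0)
for _ in range(5000):
    a, b, c = (random.uniform(-4, 4) for _ in range(3))
    if satisfies_conditions(a, b, c):
        assert in_unit_band(a, b, c)
print("ok")
```

For example $f(x)=-4x^2+4x$ (which attains both $0$ and $1$ on $[0,1]$) satisfies all four conditions, with equality in the last one.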
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Is there a unicursal octagram? Tl;dr
Is there such a thing as a unicursal octagram?
Long question
The unicursal hexagram is the figure formed when you link all six vertices of a regular hexagon using a single continuous trace, rather than a set of overlaid triangles. Here is what it looks like:
It even has some occult meanings and other nice symbolism.
For reasons of arbitrary RPG system symmetries, I would like to know whether anyone has ever heard of a unicursal octagram, that is, the figure formed when you trace all the vertices of a regular octagon with a single trace. The rules are:
*
*The trace has a direction. That is, if the octagram is modelled as a graph, each vertex will have exactly one edge going towards it and exactly one edge going out of it.
*Crossings in the middle of the surrounding octagon are allowed, but more symmetric structures are preferred. Beauty is a goal.
I have googled it several times, but the word "unicursal" seems to be very tightly bound to the word "hexagram" in Google's mind, so the octagram's more famous brother is always stealing the spotlight and my searches weren't very successful.
Alternatively, I tried looking for questions here on MathSE about unicursal octagrams, but there weren't any. I figured I could ask one, then.
| The reason for the "unicursal" in "unicursal hexagram" is that $n=6$ is one of the only numbers of points ($n=4$ being the other) for which no nondegenerate regular star polygon exists. So unicursal octagrams, dodecagrams, etc. exist, but they need not be qualified as such.
The number of nondegenerate $n$-point regular star polygons for $n\ge4$ is $\frac{\varphi(n)}2-1$, which is zero only for $n=4,6$.
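A quick computational check of the count $\frac{\varphi(n)}2-1$: a nondegenerate regular star polygon $\{n/k\}$ corresponds to $2 \le k < n/2$ with $\gcd(n,k)=1$. The sketch below (function names are mine) confirms the count vanishes only for $n=4,6$ in a sample range; for $n=8$ the single unicursal octagram is $\{8/3\}$.

```python
from math import gcd

def phi(n):
    # Euler's totient by direct counting (fine at this size)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def star_polygons(n):
    # Nondegenerate regular star polygons {n/k}: 2 <= k < n/2, gcd(n, k) = 1
    return [k for k in range(2, (n + 1) // 2) if gcd(n, k) == 1]

for n in range(4, 13):
    assert len(star_polygons(n)) == phi(n) // 2 - 1

print(star_polygons(8))                                   # [3] -- the octagram {8/3}
print([n for n in range(4, 31) if not star_polygons(n)])  # [4, 6]
```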
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is there always a nontrivial tangent spherically symmetric vector field on an odd sphere? Let's look at an $n$-dimensional sphere embedded in $\mathbb{R}^{n+1}$. Let's consider a vector field $\vec{E}$ which is:
a. Tangent to the sphere.
b. Spherically symmetric (in the ambient space).
c. It is nontrivial: $\vec{E} \neq 0$ (somewhere).
By the Hairy Ball theorem, if $n$ is even, then $\vec{E}$ vanishes somewhere, and so by spherical symmetry it must vanish everywhere.
For $n=1$, there is such a field: $\vec{E}=a\hat{\theta}$.
Is it true that generally in the odd case we may find such a field? Or is there a condition on $n$?
| Per the comments, "spherically symmetric" here means invariant under the set of rotations of $\Bbb R^{n + 1}$ (fixing $0$), i.e., the standard action of $SO(n + 1)$.
For $n > 1$ the answer is no: Pick a point $p$ at which the vector field $E$ does not vanish. Then, there is a nontrivial rotation that fixes $p$ but sends $E_p$ to $-E_p$, hence $E$ is not invariant. (For $n = 1$ the only rotation that fixes $p$ is the identity.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve this integral of degree $2043$
Find the value of $$\frac{\displaystyle\int_0^1(x+1)^{1010}\:\:dx}{\displaystyle\int_0^1(x^{2043}+1)^{1010}\:\:dx}$$
I evaluated the value of the numerator as $$\frac{2^{1011}-1}{1011}$$ But can't do the same for the denominator.
Any help is greatly appreciated.
| Using the Gaussian hypergeometric function
$$\int \left(x^p+1\right)^n \,dx=x \, _2F_1\left(-n,\frac{1}{p};1+\frac{1}{p};-x^p\right)$$
$$\int_0^1 \left(x^p+1\right)^n \,dx=\, _2F_1\left(-n,\frac{1}{p};1+\frac{1}{p};-1\right)$$
$$I_{n,p}=\frac{\displaystyle\int_0^1(x+1)^{n}\:\:dx}{\displaystyle\int_0^1(x^{p}+1)^{n}\:\:dx}=\frac{2^{n+1}-1}{(n+1)\, _2F_1\left(-n,\frac{1}{p};1+\frac{1}{p};-1\right)}$$
What is interesting to notice is that
$$I_{n,2n+k}\sim \frac{2 \,i^{\frac{1}{n}}\, \left(2^{n+1}-1\right) n}{(n+1) \,B_{-1}\left(\frac{1}{2 n},n+1\right)}+k+O(k^2)$$ which, for $n=1010$ and $k=23$ gives $2039.00$ while the result is $2038.96$.
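The closed forms above can be checked numerically with mpmath (a sketch; since $-n$ is a negative integer the $_2F_1$ series terminates, so the evaluation is a finite sum). The small cases are cross-checked against direct quadrature, and the large case reproduces the quoted $\approx 2038.96$:

```python
from mpmath import mp, mpf, hyp2f1, quad

mp.dps = 30

def ratio(n, p):
    # I_{n,p} via the closed forms above
    num = (mpf(2)**(n + 1) - 1) / (n + 1)
    den = hyp2f1(-n, mpf(1)/p, 1 + mpf(1)/p, -1)
    return num / den

# Cross-check the hypergeometric closed form against direct quadrature
for n, p in [(2, 3), (5, 7), (10, 11)]:
    direct = quad(lambda x: (x**p + 1)**n, [0, 1])
    assert abs(direct - hyp2f1(-n, mpf(1)/p, 1 + mpf(1)/p, -1)) < mpf('1e-20')

print(ratio(1010, 2043))   # approximately 2038.96
```

For $(n,p)=(2,3)$ the exact ratio is $\frac{7/3}{23/14}=\frac{98}{69}$, which the code reproduces.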
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Deriving the formula of the volume of a sphere using integration I'm trying to derive the formula for the volume of a sphere, using integration :
$\int_{0}^{\pi r}\pi r^{2}dc$
$\pi r^{2}$ is constant, so :
$\pi r^{2}\int_{0}^{\pi r}dc$
Integrating, I get only c, and plug $\pi r$ for c.
So I'm getting $\pi^{2}r^{3}$, instead of $\frac{4}{3}\pi r^{3}$.
My reasoning is that I start with a full disc ($\pi r^{2}$), then rotating the disc for half the circumference of the sphere ($\pi r$) to get the final volume.
What's wrong with my reasoning ?
| Your integral gives the volume of a cylinder of height $\pi r$; your $dc$ would be better called $dh$. You do not rotate the circle! Integrating the constant cross-section $\pi r^2$ over a length $\pi r$ just stacks identical discs along a straight line. Under rotation the cross-sections are not constant, so the volume is not cross-section times path length.
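Both computations can be confirmed with a CAS (a sketch using sympy): the standard disc method, stacking discs of radius $\sqrt{r^2-x^2}$ along a diameter, gives $\frac43\pi r^3$, while the question's constant-cross-section integral gives the cylinder volume $\pi^2 r^3$.

```python
import sympy as sp

x, r, c = sp.symbols('x r c', positive=True)

# Disc method: stack discs of radius sqrt(r^2 - x^2) along a diameter
sphere = sp.integrate(sp.pi * (r**2 - x**2), (x, -r, r))
assert sp.simplify(sphere - sp.Rational(4, 3) * sp.pi * r**3) == 0

# The question's integral: constant cross-section pi*r^2 over length pi*r,
# i.e. a cylinder of height pi*r
cylinder = sp.integrate(sp.pi * r**2, (c, 0, sp.pi * r))
assert sp.simplify(cylinder - sp.pi**2 * r**3) == 0
```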
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4516875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to draw a search tree, given its structure and the keys? I'm trying to solve the following question:
The keys $1,2,\ldots,17$ are stored in a search tree. In the current situation, the root of the search tree has exactly 2 children. From the right side of the root begins a simple route to the leaf that begins with a sequence of 4 edges from the left white parent, and ends with a sequence of 3 edges from the right parent. From the left son of the root begins a simple path to the leaf that begins with a sequence of 4 edges from the right white parent, and ends with a sequence of 3 edges from the left parent white. Draw the tree.
Now, I understand what the structure looks like:
But I can't seem to figure out what the keys should look like in the graph. If it's a search tree, is there only one way to fill this tree with keys? If so, what should the mapping look like?
In the solution, they draw it like so:
Can you please explain which algorithm/method they followed to fill the graph with the right keys? How would I solve this question if another structure was given?
| There are in fact different kinds of search tree.
Presumably you have only encountered one type so far, which is why you are somehow expected to understand how the tree works without being told what kind of tree it is.
Specifically, you are working with a binary search tree.
Starting at any node $N$ with key $k$ in this tree, the key of the left child (if there is one) and all keys that are reachable through the left child are less than $k$,
and the key of the right child (if there is one) and all keys that are reachable through the right child are greater than $k$.
Starting at the top node, by counting the nodes reachable through the left branch we see that there must be eight keys less than the key of the top node,
and by counting the nodes reachable through the right branch we see that there must be eight keys greater than the key of the top node.
Out of all the numbers $1, 2, 3, \ldots, 17$, which one has eight keys smaller than it and eight keys larger? It can only be $9$. That's the number to write in the top node.
Now looking at the left child of the top node, we know it and all nodes below it have keys less than $9$;
they must be $1, 2, 3, \ldots, 8.$
But since this new node has only a right child, all the keys in the chain below it must be greater than its key. That is, this node has the smallest of all those numbers;
therefore its key is $1.$
Just continue like that to fill in the rest of the diagram.
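In general, the subtree-counting above amounts to this: a binary search tree on keys $1,\dots,n$ is filled by an in-order traversal, which visits nodes in sorted-key order. A sketch (the tree shape at the end is a small hypothetical example, since the original figure is not reproduced here):

```python
from itertools import count

def fill(shape, counter=None):
    """Assign keys 1..n to a BST shape by in-order traversal.

    A shape node is a pair (left, right); an absent child is None.
    Returns nodes as (left, key, right)."""
    if counter is None:
        counter = count(1)
    if shape is None:
        return None
    left, right = shape
    filled_left = fill(left, counter)
    key = next(counter)              # in-order position = sorted rank
    filled_right = fill(right, counter)
    return (filled_left, key, filled_right)

# Hypothetical shape: root, each child having only a right grandchild
shape = ((None, (None, None)), (None, (None, None)))
print(fill(shape))
# ((None, 1, (None, 2, None)), 3, (None, 4, (None, 5, None)))
```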
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4517033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Logarithmic integral, complex, argument, several answers? I am reading Prime Obsession by John Derbyshire. Towards the end he offers a detailed calculation of $J(x)$ (following Riemann). He uses $x=20$ as his example.
He calculates $20^\rho$ for the first couple of zeros (and draws the circle with radius $\sqrt{20}$). In particular: He writes that $20^{0.5+14.134725i} = -0.302303-4.46191i$.
But then he proceeds to write, and we are on the top of page 340 for those with a copy at hand, that $\textrm{Li}(-0.302303-4.46191i) = -0.105384+3.14749i$, where Li is the classic logarithmic integral.
I use Python, and the problem is that sympy.li and mpmath.li both returns $1.99796923684748 - 3.91384242586457i$.
I wrote the author, who responded "The problem is that raising a number to a complex power does not give a single indisputable result. Different software picks different results. I used Mathematica & the values it gave me that best make my case. Other packages deliver different answers -- all correct!".
However feeding Li(20^(1/2+14.134725*i)) into WolframAlpha returns "my" result, not the author's.
My goal is to "unwrap" the $\sqrt{20}$ circle into the beautiful double-spiral on page 337. The plot I get does not match, and I assume it is because I apply Li incorrectly.
Any help in explaining the use of Li in Python would be highly appreciated.
| You need to evaluate the non-trivial zeta zero terms as $\text{Ei}\left(\log(x)\ \rho\right)$ (see WolframAlpha evaluation).
On page 335 of "Prime Obsession" the author indicates the following:
I shall not go into detail, only say that, yes, ${Li}(x)$ is defined$^{128}$ for complex numbers $z$.
Note $128$ on page $390$ explains the author used $\text{Ei}\left(\log(x)\ \rho\right)$ instead of $Li(x^\rho)$.
The Mathematica expression
Show[ParametricPlot[{Re[ExpIntegralEi[Log[20] (1/2 + I t)]],
Im[ExpIntegralEi[Log[20] (1/2 + I t)]]}, {t, -20, 20},
PlotRange -> Full, GridLines -> Automatic],
ListPlot[{{Re@ExpIntegralEi[Log[20] ZetaZero[1]],
Im@ExpIntegralEi[Log[20] ZetaZero[1]]}, {Re@
ExpIntegralEi[Log[20] ZetaZero[-1]],
Im@ExpIntegralEi[Log[20] ZetaZero[-1]]}}, PlotStyle -> Red]]
generates the following plot where the red discrete evaluation points represent $\left\{\Re\left(\text{Ei}\left(\log(20)\ \rho_1\right)\right),\Im\left(\text{Ei}\left(\log(20)\ \rho_1\right)\right)\right\}$ and $\left\{\Re\left(\text{Ei}\left(\log(20)\ \rho_{-1}\right)\right),\Im\left(\text{Ei}\left(\log(20)\ \rho_{-1}\right)\right)\right\}$:
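In Python the same distinction can be reproduced with mpmath (the target values below are those quoted in the question and answer): $\operatorname{Ei}(\rho\log 20)$ matches the book, while $\operatorname{li}(20^\rho)$ lands on a different branch because $20^\rho$ only remembers its argument modulo $2\pi$.

```python
from mpmath import mp, mpc, mpf, ei, li, log, sqrt

mp.dps = 15

rho = mpc('0.5', '14.134725')      # first nontrivial zeta zero (approximate)
via_ei = ei(log(20) * rho)         # what the book actually uses (note 128)
via_li = li(mpc(20) ** rho)        # naive Li(20^rho): principal branch only

# 20^rho only retains its argument modulo 2*pi, so li() evaluates Ei on a
# different branch of the logarithm and the two values disagree
assert abs(via_ei - via_li) > 0.1

# On the positive real axis they agree: li(x) = Ei(log x)
assert abs(ei(log(20) / 2) - li(sqrt(20))) < mpf('1e-10')

print(via_ei)   # about -0.105384 + 3.14749i, the book's value
print(via_li)   # the "wrong branch" value from the question
```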
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4517601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\sum_{k=0}^n(-1)^{\frac{k(k+1)}2}k$ I asked this question a few days ago, where I noticed $$\left\lfloor\frac{n}4\right\rfloor+\left\lfloor\frac{n+1}4\right\rfloor-\left\lfloor\frac{n+2}4\right\rfloor-\left\lfloor\frac{n+3}4\right\rfloor=\cos\left(\frac{n\pi}{2}\right)-1,\quad n\in\mathbb N$$
We can also write $\cos\left(\frac{n\pi}{2}\right)-1=\sum_{k=1}^n(-1)^{\frac{k(k+1)}2}$, and so I decided to try and evaluate $$\sum_{k=1}^n(-1)^{\frac{k(k+1)}2}k$$
to perhaps arrive at another equivalence between trigonometric and floor functions (I do realise that we are only using trivial values of the cosine, but nevertheless find it rather intriguing).
I have arrived at a trigonometric relation as follows:
Since $\sum_{k=1}^nka_k=n(a_1+a_2+\dots+ a_n)-\sum_{k=1}^n(a_1+a_2+\dots+a_{k-1})$, we have (for $a_k=(-1)^{\frac{k(k+1)}2}$ )
$$\begin{align}\sum_{k=1}^nk(-1)^{\frac{k(k+1)}2}&=n\left(\cos\left(\frac{n\pi}{2}\right)-1\right)-\sum_{k=1}^{n-1}\left(\cos\left(\frac{k\pi}{2}\right)-1\right)\\
&=n\cos\frac{n\pi}{2}-1-\frac{\sin\frac{(n-1)\pi}4}{\sin\frac\pi4}\cdot\cos\frac{n\pi}4\\
&=n\cos\frac{n\pi}{2}-\frac{1}{\sqrt2}\cdot\sin\frac{(2n-1)\pi}4-\frac12
\end{align}$$
While not as concise as I had hoped, it is still a simple trigonometric equation. My problem however, lies in somehow generating a floor function evaluation for the series. Could somebody please help? Thanks in advance!
| We can write the series as $$\sum_{k=1}^n(-1)^{\frac{k(k+1)} 2}k=\sum_{4j\le n}4j+\sum_{4j-1\le n}(4j-1)-\sum_{4j-2\le n}(4j-2)-\sum_{4j-3\le n}(4j-3)$$
Since $$\sum_{4j\le n}4j=\left(4+8+\dots+4\left\lfloor\frac n4\right\rfloor\right)=\frac12\left\lfloor\frac n4\right\rfloor\left(2\cdot4+\left(\left\lfloor\frac n4\right\rfloor-1\right)\cdot4\right)$$
$$\implies\sum_{4j\le n}4j=2\left\lfloor\frac n4\right\rfloor^2+2\left\lfloor\frac n4\right\rfloor$$
We can use this result to evaluate the remaining terms as :
$$\sum_{4j-1\le n}(4j-1)=\sum_{4j\le n+1}4j-\sum_{4j\le n+1}1=2\left\lfloor\frac {n+1}4\right\rfloor^2+2\left\lfloor\frac {n+1}4\right\rfloor-\left\lfloor\frac {n+1}4\right\rfloor$$
$$\implies\sum_{4j-1\le n}(4j-1)=2\left\lfloor\frac {n+1}4\right\rfloor^2+\left\lfloor\frac {n+1}4\right\rfloor$$
Similarly,
$$\sum_{4j-2\le n}(4j-2)=2\left\lfloor\frac {n+2}4\right\rfloor^2,\sum_{4j-3\le n}(4j-3)=2\left\lfloor\frac {n+3}4\right\rfloor^2-\left\lfloor\frac {n+3}4\right\rfloor$$
Thus,
$$\sum_{k=1}^n(-1)^{\frac{k(k+1)} 2}k=2\left(\left\lfloor\frac {n}4\right\rfloor^2+\left\lfloor\frac {n+1}4\right\rfloor^2-\left\lfloor\frac {n+2}4\right\rfloor^2-\left\lfloor\frac {n+3}4\right\rfloor^2\right)\\+2\left\lfloor\frac {n}4\right\rfloor+\left\lfloor\frac {n+1}4\right\rfloor+\left\lfloor\frac {n+3}4\right\rfloor$$
$$\bbox[5px,border:2px solid #C0A000]{\begin{align} n\cos\frac{n\pi}{2}-\frac{1}{\sqrt2}\cdot\sin\frac{(2n-1)\pi}4-\frac12=&2\left(\left\lfloor\frac {n}4\right\rfloor^2 +\left\lfloor\frac {n+1}4\right\rfloor^2-\left\lfloor\frac {n+2}4\right\rfloor^2-\left\lfloor\frac {n+3}4\right\rfloor^2\right)\\&+2\left\lfloor\frac {n}4\right\rfloor+\left\lfloor\frac {n+1}4\right\rfloor+\left\lfloor\frac {n+3}4\right\rfloor \end{align}}$$
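Both the trigonometric closed form and the boxed floor-function identity can be machine-checked (a sketch; the tested range is an arbitrary choice):

```python
import math

def lhs(n):
    return sum((-1) ** (k * (k + 1) // 2) * k for k in range(1, n + 1))

def rhs(n):
    f = lambda m: m // 4        # floor(m/4) for nonnegative integers
    return (2 * (f(n)**2 + f(n + 1)**2 - f(n + 2)**2 - f(n + 3)**2)
            + 2 * f(n) + f(n + 1) + f(n + 3))

def trig(n):
    return (n * math.cos(n * math.pi / 2)
            - math.sin((2 * n - 1) * math.pi / 4) / math.sqrt(2) - 0.5)

for n in range(1, 401):
    assert lhs(n) == rhs(n)
    assert abs(trig(n) - lhs(n)) < 1e-9
print("identity verified for n = 1..400")
```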
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4517736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does this particular inequality hold after applying (term to term)different monotone functions? Suppose that $A, B, C, D$ are finite subsets of $\mathbb{N}\cup\{0\}$.
I have defined a similarity measure $sim_f$ for any monotonically strictly increasing positive function $f:\mathbb{N}\cup \{0\}\rightarrow \mathbb{R}^+_0$ as $$sim_f(A,B) := \frac{\sum\limits_{e\in A\cap B}f(e)}{\sum\limits_{e\in A\cup B}f(e)}$$
Having said that, I've been struggling for days to prove that for any two monotonically strictly increasing positive functions $f,g$ such that $f,g:\mathbb{N}\cup \{0\}\rightarrow \mathbb{R}^+_0$,
$$sim_f(A,B)\leq sim_f(C,D) \hspace{1cm}\text{implies}\hspace{1cm} sim_g(A,B)\leq sim_g(C,D) $$
I think this statement holds, because I can't find any counter example, but I can't prove it either. Is it true?
| This statement turns out to be false, and here is a counterexample.
\begin{align}
A &= \{10\} \\
B &= \{1,2,3,4,5,6,7,8,9,10\} \\
C &= \{1,2\} \\
D &= \{2,3\} \\
f(x) &= x \\
g(x) &= \begin{cases}
x & x < 10 \\
90 + x & x \geq 10
\end{cases}
\end{align}
A direct calculation shows that
\begin{align}
\text{sim}_f(A,B) &= \frac{10}{55} = \frac{2}{11} \\
\text{sim}_f(C,D) &= \frac{2}{6} = \frac{1}{3} \\
\text{sim}_g(A,B) &= \frac{100}{145} > \frac{2}{3} \\
\text{sim}_g(C,D) &= \frac{2}{6} = \frac{1}{3}
\end{align}
The trick is to find sets $A,B,C,D$ and a function $f$ with the following properties:
*
*$\text{sim}_f(A,B) < \text{sim}_f(C,D) < 1$
*The largest element $e^* \in A \cup B \cup C \cup D$ is in $A \cap B$ and $e^* \notin C \cup D$.
*Choose $g$ to match $f$ up to $e^* - 1$ and $g(x) = g(e^*) >> f(e^*)$ for all $x \geq e^*$.
with this set up, by taking $g(e^*)$ large enough you can drive $\text{sim}_g(A,B) \rightarrow 1$ while $\text{sim}_g(C,D) = \text{sim}_f(C,D)$, since $e^* \notin C \cup D$ and it's the largest element amongst the four sets.
After that it's just a bit of tinkering to find a concrete example.
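The counterexample is easy to verify exactly (a sketch using Python's `Fraction` for exact arithmetic):

```python
from fractions import Fraction

def sim(f, X, Y):
    # similarity measure from the question, as an exact rational
    return Fraction(sum(map(f, X & Y)), sum(map(f, X | Y)))

A, B = {10}, set(range(1, 11))
C, D = {1, 2}, {2, 3}
f = lambda x: x
g = lambda x: x if x < 10 else 90 + x

assert sim(f, A, B) == Fraction(2, 11)
assert sim(f, C, D) == Fraction(1, 3)
assert sim(g, A, B) == Fraction(100, 145)   # = 20/29 > 2/3
assert sim(g, C, D) == Fraction(1, 3)

# The ordering flips between f and g:
assert sim(f, A, B) < sim(f, C, D)
assert sim(g, A, B) > sim(g, C, D)
```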
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4517870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rewrite this equation using cartesian coordinates $x$ and $y$ In this question the $x$ and $y$ coordinates are given; however, I do not know what to substitute in
$x=5t-2\;\quad y=-5t+7$
So far I have rearranged for $t$
$x+2=5t$
$\dfrac{x+2}5=t$
therefore would the next step by $y=-5\left(\dfrac{x+2}5\right)+7$ ?
| $x=5t-2$
$x+2=5t$
$\dfrac{x+2}5=t$
$y=-5t+7$
$y=-5\left(\dfrac{x+2}5\right)+7$
$y=-(x+2)+7$
$y=-x-2+7$
$y=-x+5$
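The elimination can also be confirmed with a CAS (a sketch using sympy):

```python
import sympy as sp

t, x = sp.symbols('t x')
t_of_x = sp.solve(sp.Eq(x, 5*t - 2), t)[0]    # t = (x + 2)/5
y_of_x = sp.expand(-5*t_of_x + 7)             # the line y = 5 - x
assert sp.simplify(y_of_x - (5 - x)) == 0
```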
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4518027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show there is an eigenvalue within a certain value of $\lambda$. Here is the problem: Let $V$ be a finite dimensional inner product space with $T:V\to V$ a self adjoint transformation. Suppose there exists $v\in V$, a unit vector, such that
$$\|Tv-\lambda v\|<\epsilon$$
Show that there must exist an eigenvalue $\lambda'$ of $T$ such that $$|\lambda -\lambda'|<\epsilon$$
Here's my attempt: Let $v$ be as supposed, then
$$\|Tv-\lambda v\|^2=\langle Tv-\lambda v, Tv-\lambda v\rangle=\|Tv\|^2-2\lambda \langle v,Tv\rangle +\lambda^2 \|v\|^2=\|Tv\|^2-2\lambda \langle v,Tv\rangle +\lambda^2$$
Here we use the fact that $T$ is self adjoint, so its eigenvalues are real and there's no need to worry about conjugates, as well as the fact that $v$ is a unit vector.
Now, let $v=\sum a_i v_i$ where $v_1,...,v_i,...,v_n$ are orthonormal eigenvectors ($T$ is self adjoint so we use the spectral theorem):
Continuing from above:
$$\|Tv\|^2-2\lambda \langle v,Tv\rangle +\lambda^2=\|\sum a_i \lambda_i v_i\|^2-2\lambda \sum a_i\lambda_i +\lambda^2$$
where we use orthonormality of the eigenvectors.
Now I get stuck, however. Is this on the right track? Any help is appreciated.
| Continue to use orthonormality of the orthonormal basis (i.e. the Pythagorean theorem) from what you have so far to show that
$$
\Vert Tv - \lambda v \Vert^2 = \sum_i |a_i|^2 \lambda_i^2 - 2 \lambda \sum_i |a_i|^2 \lambda_i + \lambda^2 = \sum_i |a_i|^2 (\lambda_i - \lambda)^2,
$$
where we use the fact that $\sum_i |a_i|^2 = \Vert v \Vert^2 = 1$. (Also with a minor correction where the cross term should also have $|a_i|^2$ instead of $a_i$.)
Now, suppose that for all eigenvalues, we have $|\lambda_i - \lambda| \geq \epsilon$. (We have to assume that the relevant norm is being used.) Then the above formula shows that
$$
\Vert Tv - \lambda v \Vert^2 \geq \epsilon^2 \sum_i |a_i|^2 = \epsilon^2 ,
$$
a contradiction to our initial assumption. Hence we conclude that there must exist some eigenvalue $\lambda_i$ such that $|\lambda_i - \lambda| < \epsilon$.
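A numerical illustration of the statement (a sketch with NumPy; the matrix size and number of trials are arbitrary choices): for random self-adjoint $T$, unit $v$, and $\lambda$, some eigenvalue always lies within $\Vert Tv - \lambda v \Vert$ of $\lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
for _ in range(200):
    M = rng.standard_normal((n, n))
    T = (M + M.T) / 2                       # self-adjoint
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)                  # unit vector
    lam = rng.standard_normal()             # candidate approximate eigenvalue
    eps = np.linalg.norm(T @ v - lam * v)
    # Some true eigenvalue lies within eps of lam
    assert np.min(np.abs(np.linalg.eigvalsh(T) - lam)) <= eps + 1e-12
print("ok")
```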
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4518165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Direct sum of injections is injection, categorically Given $R$-module maps $\{f_i \colon M_i \to N_i\}_{i \in I}$, the universal properties of the direct sums $\oplus M_i$ and $\oplus N_i$ give a unique map $\oplus f_i \colon \oplus M_i \to \oplus N_i$ such that $M_j \xrightarrow{f_j} N_j \to \oplus N_i$ agrees with $M_j \to \oplus M_j \xrightarrow{\oplus f_j} \oplus N_i$ for each $j \in I$. (The unlabeled maps are the inclusions.)
If we use the explicit constructions of $\oplus M_i$ and $\oplus N_i$ as sets of tuples $\{(m_i)_{i \in I} \mid m_i \in M_i\}$ and $\{(n_i)_{i \in I} \mid n_i \in N_i\}$, then the map $\oplus f_i$ is given explicitly by $(m_i) \mapsto (f_i(m_i))$. Now it's easy to see that if $f_i$ is an injection for all $i$, then $\oplus f_i$ is, too.
However, I was wondering if there's a way to reason about injectivity of $\oplus f_i$ directly from the universal properties, without using any particular construction for the direct sum.
| If $C$ is a category with infinite coproducts, taking $I$-indexed coproducts is a functor $\bigoplus : C^I \to C$ which is left adjoint to the diagonal functor $\Delta : C \to C^I$, and hence which preserves colimits. This implies that it preserves epimorphisms, since those are the morphisms with trivial cokernel pair (in an abelian category substitute "cokernel"). However, there's no abstract nonsense reason it should preserve monomorphisms; those are the morphisms with trivial kernel pair (again, in an abelian category substitute "kernel"), so this would be a limit-commuting-with-colimit type scenario, which usually requires special hypotheses.
Here you can find a counterexample by Zhen Lin in the abelian category $\text{Sh}([0, 1])^{op}$, the opposite of the category of sheaves of abelian groups on the interval.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4518344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find natural number $x,y$ satisfy $x^2+7x+4=2^y$
Find natural number $x,y$ satisfy $x^2+7x+4=2^y$
My try: I think $(x;y)=(0;2)$ is the only solution. So I tried to prove that $y\ge3$ has no solution, via $(x+1)(x+6)-2=2^y$,
which gives $2\mid (x+1)(x+6)$.
But this approach is wrong and leads nowhere.
Anyone has an idea? Please help, thank you!
| HINT:
We have
$$x^2+7x+(4-2^y)=0\\
\Delta_x=2^{y+2}+33=z^2\\
$$
Let, $y=2k$, then
$$\left(z-2^{k+1}\right)\left(z+2^{k+1}\right)=33$$
and so
$$
\begin{cases}z- 2^{k+1}=\pm 1,3,11,33\\
z+ 2^{k+1}=\pm 33,11,3,1\end{cases}
$$
Now let $y=2k-1$; then we have
$$2^{2k+1}+1=(z^2+1)-33$$
Since $2^{2k+1}+1 \equiv 0 \pmod 3$, while putting $z=3n\pm 1$ or $z=3n$ gives $z^2+1 \not\equiv 0 \pmod 3$, this case is impossible.
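A brute-force search complements the hint (the search bound is an assumption for illustration; completeness comes from the factorization above). Note that it finds not only $(0,2)$ but also $(5,6)$, since $5^2+7\cdot 5+4=64=2^6$; this matches the case $z-2^{k+1}=1$, $z+2^{k+1}=33$, i.e. $z=17$, $k=3$.

```python
solutions = []
for x in range(10**5):                 # search bound: an assumption
    v = x * x + 7 * x + 4
    if v & (v - 1) == 0:               # v >= 4 here, so this tests "power of 2"
        solutions.append((x, v.bit_length() - 1))
print(solutions)                       # [(0, 2), (5, 6)]
```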
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4518526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Why is $f(x)=\frac{x}{e^x-1}$ continuous? My exercise says:
$$f(x)=\frac{x}{e^x-1},\quad x\neq0$$
$$f(0)=1$$
Can someone explain whatever this means? And why does the graph not cut off at $x=0$?
Edit : Apologies ,it's not $f(0)=0$ but $f(0)=1$.
And to clarify the exercise asks to confirm that $f'(0)$ exists (But that's not what puzzles me right now ).
| $$f(x)= \begin{cases}
0 & x= 0 \\
\frac{x}{e^x-1} & x\neq0
\end{cases}$$
In order for the function to be continuous at a point $a$, this condition must be satisfied:
$\lim_{x \to a} f(x)=f(a)$
but $\lim_{x \to 0} \frac{x}{e^x-1}=1$, which is not equal to $f(0)$, so $f$ is discontinuous at $x=0$.
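A CAS confirms the limit (a sketch with sympy). With the question's edited value $f(0)=1$ the function is continuous at $0$; with $f(0)=0$, as treated above, it is not.

```python
import sympy as sp

x = sp.symbols('x')
f = x / (sp.exp(x) - 1)
assert sp.limit(f, x, 0, '+') == 1
assert sp.limit(f, x, 0, '-') == 1     # same from both sides: the limit is 1
```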
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4518652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
how to solve the limit of an integral with an unknown function? With $f \in L^1(\mathbb{R})$, solve :
$$
\lim_{n\rightarrow \infty} \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} f(x)\tan^n(x)\,\mathrm{d}x
$$
But I have no idea where to start. I think I have to do an integration by parts and apply a convergence theorem, but I don't see how to compute anything concrete when $f$ is not explicitly given.
| Notice that for all $x\in\left(-\frac{\pi}{4},\frac{\pi}{4}\right)$ we have that $\tan x\in(-1,1)$, and so in particular also that $\tan^n x\in(-1,1)$. This means that for all $x\in\left(-\frac{\pi}{4},\frac{\pi}{4}\right)$,
$$\left\lvert f(x)\tan^n x\right\rvert\leq\left\lvert f(x)\right\rvert.$$
Since $f$ is $L^1$, we can apply the Dominated Convergence Theorem, which yields that
$$\lim_{n\to\infty}\int_{\left(-\frac{\pi}{4},\frac{\pi}{4}\right)}f(x)\tan^n x~\mathrm{d}\lambda(x)=\int_{\left(-\frac{\pi}{4},\frac{\pi}{4}\right)}f(x)\cdot0~\mathrm{d}\lambda(x)=0.$$
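To see the convergence concretely, take for instance $f \equiv 1$ (certainly in $L^1$ of the bounded interval); a sketch with mpmath (the choice of $f$ and the sample exponents are mine):

```python
from mpmath import mp, quad, tan, pi

mp.dps = 15

def I(n):
    # the integral from the question with f = 1
    return quad(lambda x: tan(x) ** n, [-pi / 4, pi / 4])

for n in (1, 2, 10, 50, 200):
    print(n, I(n))
# Odd n give 0 by symmetry; even n are positive but shrink toward 0
```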
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4518909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bounds on spectral radius of $2\text{diag}(h)+h \cdot 1^T$ How can I get lower/upper bounds on the largest eigenvalue of the following sum of diagonal and rank-1 matrices for vector $h$ with $h_i>0\ \forall i$:
$$A=2\text{diag}(h)+h \cdot 1^T$$
For instance, for $d=3$ it would be matrix below
$$2 \left(
\begin{array}{ccc}
h_1 & 0 & 0 \\
0 & h_2 & 0 \\
0 & 0 & h_3 \\
\end{array}
\right)+\left(
\begin{array}{ccc}
h_1 & h_1 & h_1 \\
h_2 & h_2 & h_2 \\
h_3 & h_3 & h_3 \\
\end{array}
\right)
$$
The following has been observed to be an upper bound empirically
$$2\max_i h_i+\sum_i h_i\ge\lambda_\text{max}(A)$$
If we let $h=1,\frac{1}{2},\frac{1}{3},\ldots,\frac{1}{d}$, then for $d=4000$, the answer is $\approx 9.29455$, proposed upper bound is 10.8714. Furthermore, relative difference between bound and true value seems bounded as we vary $h$
Motivation: $\alpha<\lambda_1(A)$ is necessary and sufficient for the iteration $w=w-\alpha \langle w, x\rangle x$ to converge when $x$ is sampled from centered Normal with diagonal covariance and $h_i$ on the diagonal (derivation)
| Some thoughts:
We deal with the case $h_1 > h_2 > \cdots > h_n$.
Consider the equation $Ax = \lambda x$ ($x\ne 0$) which is written as
$$(\lambda - 2h_k) x_k = h_k\sum_{i=1}^n x_i, \quad k=1, 2, \cdots, n. \tag{1}$$
We claim that $\lambda \ne 2h_j, \forall j$. Indeed,
if $\lambda = 2h_j$ for some $j$, then $\sum_{i=1}^n x_i = 0$ and $(2h_j - 2h_k)x_k = 0, \forall k\ne j$ which results in $x_k = 0, \forall k \ne j$. Then we get $x = 0$. Contradiction.
From $\lambda \ne 2h_j, \forall j$, we have $\sum_{i=1}^n x_i \ne 0$.
From (1), we have
$$x_k = \frac{h_k}{\lambda - 2h_k}\sum_{i=1}^n x_i, \quad k=1, 2, \cdots, n. $$
Thus, we have
$$\sum_{k=1}^n \frac{h_k}{\lambda - 2h_k} = 1. \tag{2}$$
Fact 1: The equation (2) has exactly $n$ distinct real solutions $\lambda_1 > \lambda_2 > \cdots > \lambda_n$
with $\lambda_1 > 2h_1$ and $\lambda_k \in (2h_k, 2h_{k-1}), k=2, 3, \cdots, n$.
(The proof is easy and thus omitted here.)
Let us give a lower bound of $\lambda_1$.
We have
$$\frac{h_k}{\lambda_1 - 2h_k} = \frac{h_k}{\lambda_1}\cdot \frac{1}{1 - 2h_k/\lambda_1} > \frac{h_k}{\lambda_1} \cdot \left(1 + \frac{2h_k}{\lambda_1}\right), \quad \forall k.$$
Thus, we have
$$\frac{\sum_{i=1}^n h_i}{\lambda_1} + \frac{2\sum_{i=1}^n h_i^2}{\lambda_1^2} < 1$$
which results in
$$\lambda_1 > \frac12 \sum_{i=1}^n h_i
+ \frac12\sqrt{\left(\sum_{i=1}^n h_i\right)^2 + 8 \sum_{i=1}^n h_i^2}. \tag{3}$$
When $h = 1, \frac12, \frac13, \cdots, \frac1d$ and $d = 4000$, (3) gives $\lambda_1 > 9.227851206$. Using Maple, from (2), we get $\lambda_1 \approx 9.294554415$.
A better lower bound:
We have
$$\frac{h_1}{\lambda_1 - 2h_1} + \frac{\sum_{i=2}^n h_i}{\lambda_1} + \frac{2\sum_{i=2}^n h_i^2}{\lambda_1^2} < 1. \tag{4}$$
When $h = 1, \frac12, \frac13, \cdots, \frac1d$ and $d = 4000$, (4) gives $\lambda_1 > 9.284803103$.
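Fact 1 makes $\lambda_1$ computable without touching any matrix: it is the unique root of equation (2) to the right of $2\max_i h_i$, so bisection suffices. The pure-Python sketch below (the tolerance and the test vector $h_i = 1/i$ are just the example from the question) reproduces the numbers above.

```python
import math

def secular(lam, h):
    # g(lam) = sum_k h_k / (lam - 2 h_k) - 1, strictly decreasing for lam > 2*max(h)
    return sum(hk / (lam - 2 * hk) for hk in h) - 1.0

def largest_eigenvalue(h, tol=1e-12):
    """Largest root of equation (2), by bisection on (2*max(h), 2*max(h) + sum(h)]."""
    lo = 2 * max(h) + 1e-12          # g -> +infinity here
    hi = 2 * max(h) + sum(h)         # g <= 0 here (each term is at most h_k / sum(h))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if secular(mid, h) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = 4000
h = [1.0 / i for i in range(1, d + 1)]
lam1 = largest_eigenvalue(h)

S1, S2 = sum(h), sum(x * x for x in h)
lower = 0.5 * S1 + 0.5 * math.sqrt(S1 * S1 + 8 * S2)   # bound (3)
upper = 2 * max(h) + S1                                 # conjectured upper bound

assert lower < lam1 < upper
assert abs(lam1 - 9.294554415) < 1e-5
assert abs(lower - 9.227851206) < 1e-5
```

Note that at the right bracket endpoint $\lambda = 2\max_i h_i + \sum_i h_i$ each term of (2) is at most $h_k/\sum_i h_i$, so $g \le 0$ there, with equality exactly when all $h_i$ are equal; this suggests the empirically observed upper bound is in fact always valid.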
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Logarithm and absolute value $$y' - y \tan x = 2x \sec x,\quad y(0)=0\tag1$$
integrating factor $= e^{-\int \tan x\ dx} = e^{\ln|\cos x|} = \cos x$
Can we write $|\cos x|$ as $\cos x$ above?
$$I.F. y = \int I.F.\ 2x\ \sec x\ dx\\(\cos x) y = \int \cos x \ 2x\ \sec x\ dx = \int 2x\ dx$$
If we had taken the integrating factor to be $ |\cos x|$, then in the above line $|\cos x|$ and $\sec x$ wouldn't have cancelled.
| Alternative : $\begin{align}&y' - y \tan x = 2x \sec x\\&y'\cos x-y\sin x=2x\\&(y\cos x) '=2x\\&y\cos x=x^2+C\end{align}$
$y(0) =0\implies C=0$
Hence solution : $y(x)=x^2 \sec x$
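A quick way to gain confidence in the closed form is to check the ODE residual numerically. The sketch below (the central-difference step $h=10^{-6}$ and the sample points are arbitrary choices) verifies that $y(x)=x^2\sec x$ satisfies $y'-y\tan x-2x\sec x=0$ and the initial condition.

```python
import math

def y(x):
    return x * x / math.cos(x)               # proposed solution y = x^2 sec x

def ode_residual(x, h=1e-6):
    # y' - y*tan(x) - 2x*sec(x), with y' approximated by a central difference
    yprime = (y(x + h) - y(x - h)) / (2 * h)
    return yprime - y(x) * math.tan(x) - 2 * x / math.cos(x)

assert y(0.0) == 0.0                          # initial condition y(0) = 0
for x in (0.3, 0.7, 1.0, 1.3, -0.9):
    assert abs(ode_residual(x)) < 1e-5
```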
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
When the mean of a non-negative, integer-valued random variable goes to zero, does this imply anything about the other raw moments? When the mean of a non-negative, integer-valued random variable $X_n$, for example counting paths in a random graph on $n$ vertices between two fixed vertices, goes to zero,
$$\lim_{n \to \infty}\mathbb{E}[X_n] =0$$
can we have $\lim_{n \to \infty}\mathbb{E}[X_n^3] = \infty$, or under what conditions does this occur? This seems reasonable given we just need two different sequences to have different limits, but a bit counter-intuitive, since we're counting objects, and the raw moments give the typical number of ordered pairs, triples, etc.
| Here is a concrete example about counting things in random graphs.
Let $H$ be the $5$-vertex, $7$-edge graph obtained by attaching a pendant edge to $K_4$.
Suppose that $X_n$ counts the number of copies of $H$ in $G_{n,p}$, for some carefully chosen $p$. Then $\mathbb E[X_n] = \Theta(n^5 p^7)$. So if we choose $p = o(n^{-5/7})$, then $\mathbb E[X_n] \to 0$ as $n \to \infty$.
However, for such values of $p$, we have $\mathbb E[X_n^2] = \Theta(n^6 p^8)$: $X_n^2$ counts ordered pairs of copies of $H$, and the $\Theta(n^6 p^8)$ term comes from two copies that share $K_4$ but have different edges coming out of the $K_4$. Whenever $p$ is (asymptotically) between $n^{-3/4}$ and $n^{-5/7}$, such as $p = n^{-0.74}$, we have $\mathbb E[X_n] \to 0$ but $\mathbb E[X_n^2] \to \infty$ as $n \to \infty$.
Similarly, there is a $\Theta(n^7 p^9)$ contribution to $\mathbb E[X_n^3]$ that comes from three copies of $H$ that all share the same $K_4$. So if we take a value of $p$ such as $n^{-0.77}$, then we get $\mathbb E[X_n] \to 0$, $\mathbb E[X_n^2] \to 0$, and $\mathbb E[X_n^3] \to \infty$ as $n \to \infty$.
Essentially, when you see a situation where the mean goes to $0$ but higher moments go to infinity, this tells you that $X_n$ counts objects that rarely appear, but appear in bunches when they do. That's what we see here: it is rare to find a copy of $H$ in $G_{n,p}$ for this range of $p$, but if you find one copy, you find many copies.
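The exponent bookkeeping above can be mechanized: with $p = n^{-c}$, a quantity $\Theta(n^a p^b) = \Theta(n^{a-bc})$ tends to $0$ or to $\infty$ according to the sign of $a - bc$. A tiny check (the exponents $0.74$ and $0.77$ are the sample values used above):

```python
# With p = n^{-c}, a quantity Theta(n^a p^b) = Theta(n^{a - b*c}) tends to 0
# iff a - b*c < 0 and to infinity iff a - b*c > 0.
def net_exponent(a, b, c):
    return a - b * c

# p = n^{-0.74}: mean -> 0 while the second moment blows up
assert net_exponent(5, 7, 0.74) < 0    # E[X_n]   = Theta(n^5 p^7)
assert net_exponent(6, 8, 0.74) > 0    # Theta(n^6 p^8) term inside E[X_n^2]

# p = n^{-0.77}: first two moments -> 0, third moment -> infinity
assert net_exponent(5, 7, 0.77) < 0
assert net_exponent(6, 8, 0.77) < 0
assert net_exponent(7, 9, 0.77) > 0    # Theta(n^7 p^9) term inside E[X_n^3]
```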
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
absolute value of sum vs sum of absolute values I know that if $w$ satisfies $\lvert f_1(w)+f_2(w)\rvert>b$ then $\lvert f_1(w)\rvert>b/2$ or $\lvert f_2(w)\rvert>b/2$, given $b>0$ and $f_1,f_2$ two functions. This can be viewed geometrically.
If $f_1, f_2$ are measurable functions, this implies the well known result that $$P(\lvert f_1+f_2\rvert>b)\leq P(\lvert f_1\rvert>b/2)+P(\lvert f_2\rvert>b/2). \quad (1)$$
Question. Is it true that $$P(\lvert \sum_{k=1}^n f_k\rvert>b)\leq \sum_{k=1}^n P(\lvert f_k\rvert>b/n)? \quad (2)$$
How to prove it?
Comment. For $n=3$, I could use $(1)$ repeatedly, as
\begin{align}
P(\lvert f_1+f_2+f_3\rvert>b)&\leq P(\lvert f_1+f_2\rvert>b/2)+P(\lvert f_3\rvert>b/2)\\
&\leq P(\lvert f_1\rvert>b/4)+P(\lvert f_2\rvert>b/4)+P(\lvert f_3\rvert>b/2).
\end{align}
Clearly, this inequality is different from $(2)$. This example suggests me something like
$$P(\lvert \sum_{k=1}^n f_k\rvert>b)\leq \sum_{k=1}^n P(\lvert f_k\rvert>b/2^{\lfloor (n-1)/2\rfloor +1}). \quad (4)$$
But $(2)$ is a better bound than $(4)$.
| Suppose $x \in \{\sum_{k\leq n} |f_k|>b\}$ but $x \notin \cup_{k\leq n}\{|f_k|>b/n\}$. Then $|f_k(x)|\leq b/n,\,\forall k\leq n$. This would imply $\sum_{k\leq n}|f_k(x)|\leq b$, which is a contradiction. Therefore $\{\sum_{k\leq n} |f_k|>b\}\subseteq \cup_{k\leq n}\{|f_k|>b/n\}$. Since $\lvert \sum_{k\leq n} f_k\rvert \leq \sum_{k\leq n}\lvert f_k\rvert$, we also have $\{\lvert\sum_{k\leq n} f_k\rvert>b\}\subseteq\{\sum_{k\leq n} |f_k|>b\}$, and $(2)$ follows by subadditivity of $P$.
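Here is a hedged empirical illustration of both the pointwise inclusion and the resulting bound (2), using uniform random values as a stand-in for arbitrary measurable $f_k$ (an arbitrary test distribution, not part of the proof):

```python
import random

random.seed(0)
n, b, N = 5, 1.0, 20000
lhs = rhs = 0
for _ in range(N):
    f = [random.uniform(-1, 1) for _ in range(n)]
    if abs(sum(f)) > b:
        lhs += 1
        # pointwise inclusion: |sum f_k| > b forces |f_k| > b/n for some k
        assert any(abs(fk) > b / n for fk in f)
    rhs += sum(1 for fk in f if abs(fk) > b / n)

# empirical version of (2): P(|sum f_k| > b) <= sum_k P(|f_k| > b/n)
assert lhs <= rhs
```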
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Conjugacy of p-subgroups in $GL_{5}(\mathbb{F}_{p})$?. Let $U_{5}$ denote the unitriangular group of $5\times 5$ upper triangular matrices with ones on the diagonal, over the finite field $\mathbb{F}_{p}$. Let $H=\left. \left\{
A=\begin{pmatrix}
1 & 0 & 0 & a &d \\
0 & 1 & 0 & b &e \\
0 & 0 & 1 & c &f \\
0 & 0 & 0 & 1 &0 \\
0 & 0 & 0 & 0 &1%
\end{pmatrix}%
\right| a,b,c,d,e,f \in \mathbb{F}_{p}\right\}$.
and
$K=\left. \left\{B=
\begin{pmatrix}
1 & 0 & a' & b' &c' \\
0 & 1 & d' & e' &f' \\
0 & 0 & 1 & 0 &0 \\
0 & 0 & 0 & 1 &0 \\
0 & 0 & 0 & 0 &1%
\end{pmatrix}%
\right| a',b',c',d',e',f' \in \mathbb{F}_{p}\right\}$ be two subgroups of $GL_{5}(\mathbb{F}_{p})$. The subgroups $H$ and $K$ are maximal abelian normal subgroups of $U_{5}$ (see, for example, Exercise 3, p. 94 of M. Suzuki, Group Theory I).
Are the subgroups $H$ and $K$ conjugate in $GL_{5}(\mathbb{F}_{p})$?
I think the answer is no, but I am not sure how to show it.
My attempt at this question:
Let $V$ denote the vector space $\mathbb{F}_{p}^{5}$. $H$ and $K$ are not conjugate, since
$I(\mathbb{F}_{p}[H])V$ is a 3-dimensional vector space but $I(\mathbb{F}_{p}[K])V$ is only 2-dimensional. Here, $I$ denotes the augmentation ideal.
Could anyone please tell me if my attempt is correct or provide a different approach?
Thank you in advance.
| Prove that the fixed point space of $H$ on $\mathbb{F}_p^5$ has dimension $3$.
Prove that the fixed point space of $K$ on $\mathbb{F}_p^5$ has dimension $2$.
Conclude that $H$ and $K$ cannot be conjugate in $GL_5(\mathbb{F}_p)$.
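For a concrete check of the two dimensions, one can do linear algebra over $\mathbb{F}_p$ directly: the fixed-point space of each subgroup is the common kernel of $g-I$ over a generating set, and both $H$ and $K$ are generated by the elementary matrices $I+E_{r,c}$ at the six positions where their free entries sit. The sketch below (with the illustrative choice $p=5$; the modular inverse via Fermat's little theorem assumes $p$ prime) computes $5-\operatorname{rank}$ of the stacked matrices $g-I$:

```python
def rank_mod_p(rows, p):
    """Row rank over F_p via Gauss-Jordan elimination (p must be prime)."""
    rows = [[x % p for x in r] for r in rows]
    rank = 0
    for col in range(5):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)          # Fermat inverse
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def g_minus_I(r, c):
    """g - I for the elementary generator g = I + E_{r,c} (0-indexed)."""
    m = [[0] * 5 for _ in range(5)]
    m[r][c] = 1
    return m

# free-entry positions of H and K, read off from the matrix shapes above
H_pos = [(0, 3), (1, 3), (2, 3), (0, 4), (1, 4), (2, 4)]
K_pos = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]

def fixed_dim(positions, p):
    # fixed space = intersection of ker(g - I) over generators
    stacked = [row for (r, c) in positions for row in g_minus_I(r, c)]
    return 5 - rank_mod_p(stacked, p)

p = 5   # illustrative prime
assert fixed_dim(H_pos, p) == 3
assert fixed_dim(K_pos, p) == 2
```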
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to prove if the statement "If A is a nonsingular matrix, then the homogeneous system Ax = 0 has a nontrivial solution" is true or false? Statement: If $A$ is a nonsingular matrix, then the homogeneous system $Ax = 0$ has a nontrivial solution.
We know that if A is an n × n non–singular
matrix, then the homogeneous system
AX = 0 has only the trivial solution X = 0.
Hence if the system AX = 0 has a non–trivial
solution, A is singular.
Example:
By solving the row echelon form of A, we get:
Because its reduced row echelon form contains a zero row, we can say that $A$ is singular,
and consequently AX = 0 has a non-trivial
solution x = −1, y = −1, z = 1
More generally, if A is
row–equivalent to a matrix containing a zero
row, then A is singular. For then the
homogeneous system AX = 0 has a
non–trivial solution.
Now, my issue is that I hesitate to conclude whether the given statement is true or false, because it seems the system could have either a trivial or a non-trivial solution. What is the final verdict: is the statement true or false?
Your responses would be highly appreciated as this would help me a lot to get a clearer context.
Thank you very much!
| If $A$ is a square matrix, being full rank and being invertible are equivalent.
Therefore we can use the rank–nullity theorem and see that the kernel of your matrix is a vector space whose dimension is necessarily $0$, meaning that it only contains the null vector.
So there is only the trivial solution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
A sequence has no limit point in unit disk, then $D_1 \setminus \{a_n\}$ is open. I need to show that if $\{a_n\} \subset \Bbb{C}$ is a sequence with no limit point in the unit disk, then the unit disk minus $\{a_n\}$ is open.
So I want to go about it by contradiction. So if $D_1 \setminus \{a_n\}$ is not open, then it contains a boundary point. Say some $z_0$ such that $B_\epsilon(z_0)$ contains points both inside and outside of $D_1 \setminus \{a_n\}$. Any hints greatly appreciated.
| Given: the sequence $(a_n)\subset \mathbb{C}$ has no limit point in $D_1=D(0, 1)$.
Let $A=\{a_n:n\in\Bbb{N}\}$.
Claim : $D_1\setminus A$ is open.
Proof: Assume to the contrary that $D_1\setminus A$ is not open.
Then $\exists x_0\in D_1\setminus A$ such that $\forall r>0, D(x_0, r) \not\subset D_1\setminus A$.
Since $D_1$ is open, $D(x_0, \frac{1}{n})\subset D_1$ for all large $n$, so the offending points must lie in $A$: for each such $n$ there exists $x_n\in A$ with $x_n\in D(x_0, \frac{1}{n})$.
Since $(x_n)\subset A$, $x_n\to x_0\in D_1$, and $x_n\ne x_0$ (because $x_0\notin A$), the point $x_0\in D_1$ is a limit point of $A$.
Contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Homogeneous linear differential equation $y'+a(x)y=0$ where $a$ is continuous and periodic. $y'+a(x)y=0$ is an homogeneous linear differential equation, where $a$ is continuous in $-\infty<x<\infty$ with periodicity $\xi>0$, i.e. $a(x+\xi)=a(x)~\forall x$.
I have to show three things:
*
*If $\phi$ is a non-trivial solution and $\psi(x)=\phi(x+\xi)$, show that $\psi$ is also a solution.
*Show that there exists a constant $c$ such that $\phi(x+\xi)=c\phi(x)~\forall x$. Also prove that $$c=e^{-\int\limits_0^\xi a(t)dt}.$$
*What condition must $a$ satisfy so that there exists a non-trivial solution with period $\xi$?
I could manage to prove item 1, we know
$$\phi'(x+\xi)+a(x+\xi)\phi(x+\xi)=0$$
$$\phi'(x+\xi)+a(x)\phi(x+\xi)=0$$
$$\psi'(x)+a(x)\psi(x)=0$$ Hence $\psi$ is a solution.
I'm struggling with item 2. Not sure where to start, the only thing I tried is the following:
$$y'+a(x)y=0$$
$$\dfrac{1}{y}\dfrac{dy}{dx}=-a(x)$$
$$|y|=e^{-\int^x a(t)dt}e^c$$
which leads me to nowhere, but it has a similar form of the $c$ given.
I also thought about using Mean Value Theorem but not sure how to apply it here because there is a $c$ multiplying $\phi(x)$. Tried working with $\dfrac{\phi(x+\xi)}{\phi(x)}$ but still failed.
| Let $\phi$ be a nontrivial solution of $y'+a(x)y=0$ on $\mathbb{R}$. Then $\phi$ is a fundamental system of this first order homogeneous linear differential equation, that is each solution on $\mathbb{R}$ is of the form $c \phi$ for some $c \in \mathbb{R}$. You already know that $x \mapsto \phi(x+ \xi)$ is a solution on $\mathbb{R}$. Thus $\psi(x):=\phi(x+ \xi)=c\phi(x)$ for some $c\in \mathbb{R}$. Moreover, $\phi(x)=\exp(\int_0^x -a(t) dt)\phi(0)$. Thus
$$
c=\psi(0)/\phi(0)=\phi(\xi)/\phi(0)=\exp(-\int_0^\xi a(t) dt).
$$
Now, you see that if $\int_0^\xi a(t) dt=0$, then $\psi(x)=\phi(x)$, that is $\phi$ is periodic with period $\xi$.
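Here is a concrete check of item 2, using the closed form $\phi(x)=\exp(-\int_0^x a(t)\,dt)$ with the sample coefficient $a(x)=\cos x + c_0$ of period $\xi = 2\pi$ (an arbitrary illustrative choice, picked because its antiderivative is explicit). Then $c=e^{-\int_0^\xi a}=e^{-2\pi c_0}$, and $\phi(x+\xi)=c\,\phi(x)$ holds to rounding error:

```python
import math

xi = 2 * math.pi
c0 = 0.3                                 # a(x) = cos x + c0 has period xi = 2*pi

def phi(x):
    # phi(x) = exp(-int_0^x a(t) dt) solves y' + a(x) y = 0 with phi(0) = 1
    return math.exp(-(math.sin(x) + c0 * x))

c = math.exp(-c0 * xi)                   # c = exp(-int_0^xi a(t) dt)
for x in (-3.0, 0.0, 1.5, 7.0):
    assert abs(phi(x + xi) - c * phi(x)) <= 1e-9 * phi(x + xi)

# with c0 = 0 we would have int_0^xi a = 0, c = 1, and phi itself xi-periodic
```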
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4519983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof verification: if each $R_i$ is reflexive/complete and transitive, then so is $R$, defined by $xRy$ if and only if $xR_iy,\forall i \in \{1,\ldots,n\}$. Suppose $R_1, R_2,\ldots,R_n$ are binary relations on $X$. Define the binary relation $R$ by
$$xRy\quad\text{if and only if}\quad xR_iy,\forall i \in \{1,\ldots,n\}.$$
Prove or provide counterexamples to the following statements:
(a) If each $R_i$ is reflexive and transitive, then $R$ is reflexive and transitive.
(b) If each $R_i$ is complete and transitive, then $R$ is complete and transitive.
I couldn't think of a counter example to these two statements so I have tried to use the definitions to prove, but I am not yet very confident in my abilities so I am asking for proof verification.
My attempt:
(a) Suppose $R$ is not reflexive or transitive, then $x\not Rx$ for some $x$ s.th. $x \not R_i x$ for some $i\in\{1,\ldots,n\}$. But this is not possible as $xR_iy \forall i\in\{1,\ldots,n\}$. Hence R must be reflexive and transitive.
(b) Let $X=\{x,y,z\}$, then if $xR_iy, yR_iz, zR_ix$ $\forall R_i$ then by definition of $R$, $xRy, yRz$ and $xRz$, s.th. $R$ is complete and transitive.
| (a) This is true. You want to prove that $R$ is reflexive and transitive. In fact:
*
*If $x\in X$, then, for each $i\in\{1,2,\ldots,n\}$, $x\mathrel{R_i}x$, and therefore $x\mathrel Rx$. So, $R$ is reflexive.
*If $x,y,z\in X$, and $x\mathrel Ry$ and $y\mathrel Rz$, you want to prove that $x\mathrel Rz$. You are assuming that, for each $i\in\{1,2,\ldots,n\}$, $x\mathrel{R_i}y$ and $y\mathrel{R_i}z$. Therefore, again for each $i\in\{1,2,\ldots,n\}$, $x\mathrel{R_i}z$. It follows that $x\mathrel Rz$. So, $R$ is indeed transitive.
(b) This is false. Take $X=\{0,1\}$, $R_1=\leqslant$, and $R_2=\geqslant$. Then both $R_1$ and $R_2$ are complete and transitive. However, the binary relation $R$ is simply the equality on $X$, which is not complete.
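The counterexample in (b) is small enough to verify exhaustively; here is a sketch encoding relations as sets of ordered pairs (using the convention that completeness requires $x\mathrel{R}y$ or $y\mathrel{R}x$ for every pair, including $x=y$):

```python
X = {0, 1}
R1 = {(x, y) for x in X for y in X if x <= y}   # <=, complete and transitive
R2 = {(x, y) for x in X for y in X if x >= y}   # >=, complete and transitive
R = R1 & R2                                      # xRy iff x R1 y and x R2 y

def complete(rel):
    return all((x, y) in rel or (y, x) in rel for x in X for y in X)

def transitive(rel):
    return all((x, z) in rel
               for (x, y) in rel for (w, z) in rel if y == w)

assert complete(R1) and transitive(R1)
assert complete(R2) and transitive(R2)
assert R == {(0, 0), (1, 1)}     # R is just equality on X
assert transitive(R) and not complete(R)
```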
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4520243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving $\frac{dx}{dt}=\frac{xt}{x^2+t^2},\ x(0)=1$ I have started self-studying differential equations and I have come across the following initial value problem
$$\frac{dx}{dt}=\frac{xt}{x^2+t^2}, \quad x(0)=1$$
Now, since $f(t,x)=\frac{xt}{x^2+t^2}$ is such that $f(rt,rx)=f(t,x)$ for every $r\in\mathbb{R}\setminus\{0\}$, we can use the change of variables $y=\frac{x}{t}$ to rewrite it in the form
\begin{align}
y+t\frac{dy}{dt} &=\frac{t^2 y}{t^2(1+y^2)} \\
&=\frac{y}{1+y^2} \\
\implies t\frac{dy}{dt} &=\frac{y}{1+y^2}-y \\
&=-\frac{y^3}{1+y^2} \\
\implies \frac{dy}{dt} &= \left(-\frac{y^3}{1+y^2}\right)\cdot\frac{1}{t}
\end{align}
which is separable, and becomes:
\begin{align}
\left(\frac{1+y^2}{y^3}\right)dy &= -\frac{dt}{t} \\
\implies \int_{y_1}^{y_2} \left(\frac{1+y^2}{y^3}\right) dy &= -\int_{t_1}^{t_2} \frac{dt}{t} \\
\implies -\frac{1}{2y_2^2}+\ln \left\lvert \frac{y_2}{y_1} \right\rvert + \frac{1}{2y_1^2} &= -\ln \left\lvert \frac{t_2}{t_1} \right\rvert
\end{align}
but now I don't see how to go forward and find $y(t)$. Also, I integrated from a generic time $t_1$ to a generic time $t_2$ because the right hand side wouldn't have converged otherwise.
So, I would appreciate any hint about how to go forward in solving this IVP.
Thanks
| You got that
$$\dfrac{1+y^2}{y^3}dy=-\frac{1}{t}dt$$
This implies that, putting $y'(t)=\frac{dy(t)}{dt}$,
$$\dfrac{1+y(t)^2}{y(t)^3}y'(t)=-\frac{1}{t}$$
where this equality is as functions of $t$. In particular, since they are the same function, both sides of the equality must have the same primitives up to a constant, i.e.
$$\int \dfrac{1+y(t)^2}{y(t)^3}y'(t) dt=\int -\frac{1}{t} dt$$
(but not a definite integral!)
On the left side, you can split the fraction and integrate (substituting $u=y(t)$) to get that $$\int \dfrac{1+y(t)^2}{y(t)^3}y'(t) dt=-\frac{1}{2}y^{-2}(t)+\ln(|y(t)|)+C$$
And the right side remains $$\int -\frac{1}{t} dt=-\ln(|t|)+C$$
Therefore the solution satisfies the implicit equation $$-\frac{1}{2}y^{-2}(t)+\ln(|y(t)|)=-\ln(|t|)+C$$
The problem is that the initial data is at $t=0$ where this equality is not defined. Note that the substitution $x=ty$ is not valid for the initial data you have because $1\neq x(0)=0\cdot y =0 $
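One can still check the implicit relation numerically along the trajectory through $(t,x)=(0,1)$, where the right-hand side of the ODE itself is perfectly well defined: writing the relation in terms of $x$ gives $F(t,x)=\ln x - t^2/(2x^2) = C$ with $C=0$ at the initial point, and $F$ is conserved along solutions (its total derivative along the ODE vanishes). A small RK4 sketch (the step size $10^{-3}$ is arbitrary):

```python
import math

def f(t, x):
    return x * t / (x * x + t * t)        # dx/dt; well defined at (0, 1)

def F(t, x):
    return math.log(x) - t * t / (2 * x * x)   # implicit relation with C = 0

# classical RK4 from the initial data (t, x) = (0, 1) up to t = 1
t, x, h = 0.0, 1.0, 1e-3
for _ in range(1000):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

assert F(0.0, 1.0) == 0.0                 # the constant C is 0
assert abs(F(t, x)) < 1e-8                # F is conserved along the solution
```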
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4520385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Why is $\Sigma=\{A:|A|<\infty\;\text{or}\,|A^c|<\infty\}$ not a sigma-algebra? I don't see much difference between these two sets
1. $$\Sigma=\{A:|A|<\infty\;\text{or}\,|A^c|<\infty\}$$
2 $$\Sigma=\{A:\text{either}\,A\,\text{or}\,A^c\,\text{is finite or countable}\}$$
Why is the first one not a sigma-algebra, while the second one is?
My attempt: I can prove $\emptyset,\Omega\in\Sigma$ for both cases, and they are both closed under complement. Whenever $A\in\Sigma,A=(A^c)^c\in\Sigma$. I think the only differences only occur for closed under countable union, but I cannot go further.
Thanks for any help.
| If the underlying set is finite, then $\Sigma$ is a $\sigma$-algebra.
Otherwise, let $x_i$, $i \in \mathbb N$, be distinct elements of the underlying set.
Take $A_i = \{x_{2i}\}$ and $A = \cup_{i=1}^\infty A_i$. Then each $A_i \in \Sigma$, but $A \notin \Sigma$: $A$ is infinite, and $A^c$ contains the infinite set $\{x_{2i + 1} \mid i \in \mathbb N\}$, so $A^c$ is infinite too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4520534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this a valid integral to prove area of circle? Similar to the original poster of the question Is this a valid proof for the area of a circle?, I am a high school AP Calc BC student using the idea of Riemann sums to add an infinite number of isosceles triangles of area $\frac{R^{2}\sin(d\theta)}{2}$ to determine the area of the circle. This technique produces the sum $\lim_{n \to \infty } \sum_{i=1}^{n}\frac{R^{2}\sin(\frac{2\pi}{n})}{2}$, which does evaluate to $\pi R^{2}$. However, I would like to convert the sum to the integral $\int_{0}^{2\pi}\frac{R^{2}\sin(d\theta)}{2}$ and then evaluate it. Using nonstandard analysis, my guess is to treat $\sin(d\theta)$ as another infinitesimal $d\omega$; naively keeping the bounds the same and evaluating $\int_{0}^{2\pi}\frac{R^{2}}{2}d\omega$ does yield the wanted answer $\pi R^{2}$. I fear this is an abuse of logic and/or notation and would like to know how to approach integrals like the one in question.
| That's a good intuition: as $n \to \infty$ we can heuristically substitute $\sin (d\theta)$ with $d\theta$.
More rigorously, we have
$$\sum_{i=1}^{n}\frac{R^{2}\sin(\frac{2\pi}{n})}{2} = \sum_{i=1}^{n}\frac{R^{2}}{2}\frac{\sin(\frac{2\pi}{n})}{\frac{2\pi}{n}}\frac{2\pi}{n} $$
and since
$$\lim_{n \to \infty } \frac{\sin(\frac{2\pi}{n})}{\frac{2\pi}{n}} =1$$
we obtain
$$\lim_{n \to \infty } \frac{2\pi}{n}\sum_{i=1}^{n}\frac{R^{2}}{2}\frac{\sin(\frac{2\pi}{n})}{\frac{2\pi}{n}} = \int_{0}^{2\pi}\frac{R^{2}}{2} d\omega = \frac{R^{2}}{2}\int_{0}^{2\pi} d\omega =\frac{R^{2}}{2}\cdot 2\pi=\pi R^2$$
Refer also to:
*
*Perfect understanding of Riemann Sums
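A numeric companion to the limit: the inscribed-triangle sum is just $n R^2 \sin(2\pi/n)/2$, and its error against $\pi R^2$ shrinks like $1/n^2$. A minimal sketch (the radius $R=3$ is an arbitrary choice):

```python
import math

def triangle_sum(R, n):
    # sum of n isosceles triangles with apex angle 2*pi/n: n * R^2 sin(2*pi/n) / 2
    return n * R * R * math.sin(2 * math.pi / n) / 2

R = 3.0
exact = math.pi * R * R
errors = [abs(triangle_sum(R, n) - exact) for n in (10, 1000, 100000)]
assert errors[0] > errors[1] > errors[2]   # error shrinks like 1/n^2
assert errors[-1] < 1e-6
```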
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4520943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convert the following statements into predicate logic. G(x) x has a portal gun
R(x) x is a rick
M(x) x is a Morty
Convert the following statements into predicate logic.
1.) There is a Rick.
2.) Everything is a Morty
3.) No morty has a portal gun
How do I even start this? I've watched some videos and it still doesn't click.
*
*Rick exists.
R(x)
*Everything is Morty.
∃x∈M(x)
*No Morty has a portal gun.
∈M(x)¬G(x)?
| I think it should be...
$$1.) \space\space\space\exists x : R(x)$$
$$2.) \space\space\space\forall x : M(x)$$
$$3.) \space\space\space\forall x : M(x)\implies \neg G(x) $$
In plain english these statements would read;
1.) $\space\space$There exists an $x$ such that $x$ has the property of being a Rick.
2.)$\space\space$For all $x$, $x$ has the property of being a Morty.
3.) $\space\space$For all $x$, if $x$ has the property of being a Morty, then $x$ does not have the property of having a portal gun.
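These formulas can also be model-checked mechanically: pick a small universe, encode the predicates, and let `any`/`all` play the roles of $\exists$/$\forall$. The two inhabitants below are an invented toy model, not part of the question:

```python
# a small invented model: things with boolean properties
universe = [
    {"name": "Rick", "rick": True, "morty": False, "gun": True},
    {"name": "Morty", "rick": False, "morty": True, "gun": False},
]

R = lambda x: x["rick"]          # x is a Rick
M = lambda x: x["morty"]         # x is a Morty
G = lambda x: x["gun"]           # x has a portal gun

stmt1 = any(R(x) for x in universe)                      # exists x: R(x)
stmt2 = all(M(x) for x in universe)                      # forall x: M(x)
stmt3 = all(not M(x) or not G(x) for x in universe)      # forall x: M(x) -> not G(x)

assert stmt1 and not stmt2 and stmt3
```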
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4521032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
inverse functions when solving trig equations. I am in Pre-calc this semester, and I have been given the problem:
$\cos(\theta) = -\frac12$
The answers I get are $30^o$ and $300^o$ but the answers I get from the homework key are $120^o $ and $240^o$. Why do the inverse functions give me answers that aren't correct? Is there a better way to solve these problems? My work is as follows:
$\cos(\theta)=x$
$x= -\frac12$
$x^2+y^2=1$
$(-\frac12)^2+y^2=1$
$\frac14+y^2=1$
$y^2=\frac34$
$\sqrt{y^2}=\pm\sqrt{\frac34}$
$y=\pm\frac{\sqrt3}{2}$
$\sin(\theta)=y$
$\sin(\theta)=\frac{\sqrt3}{2}$
$\sin(\theta)=-\frac{\sqrt3}{2}$
$\sin^{-1}(\frac{\sqrt3}{2})=\theta$
$\sin^{-1}(-\frac{\sqrt3}{2})=\theta$
$\theta=60^o,300^o$
| Since $\cos\theta=-\frac{1}{2}$ and $\sin\theta=\pm\frac{\sqrt 3}{2}$, $\theta$ lies in the second or third quadrant, so $\theta$ is $\pi\mp\frac{\pi}{3}=\frac{2\pi}{3}$ or $\frac{4\pi}{3}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4521270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to transform a nonlinear constraint to linear constraint using big M? Given a nonlinear constraint
$$xy=0$$
where $x$ is a continuous variable and $y$ is a binary variable.
Question: Using the Big-M method, how do I get to change the constraint above to a linear constraint below:
$$x \leq M(1-y)?$$
Probably by showing a theorem.
What I know: I know that $M$ is a large constant chosen to bound the continuous variable $x$, so that $0 \leq x \leq M$ holds.
Thank you for the help.
| Because $y$ is a binary variable, you just need to check the two cases.
*
*If $y=0$, then $xy=0$, as desired, and the linear constraint reduces to $x \le M$, which is redundant.
*If $y=1$, you want to enforce $x=0$ so that $xy=0$. Because $x \ge 0$, it is enough to enforce $y = 1 \implies x \le 0$. The big-M constraint $x \le M(1-y)$ does exactly that.
More generally, to enforce $y=1 \implies f(x) \le b$, impose $f(x) - b \le M(1-y)$, where $M$ is an upper bound on $f(x) - b$ when $y=0$.
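Because $x$ is bounded and $y$ is binary, the equivalence can be checked by brute force on a grid. A minimal sketch (the bound $M=10$ and the grid spacing are arbitrary choices):

```python
# Brute-force check over a grid: with 0 <= x <= M, the linear constraint
# x <= M*(1 - y) accepts exactly the points where x*y = 0.
M = 10.0
xs = [i * 0.5 for i in range(21)]        # 0, 0.5, ..., 10
for y in (0, 1):
    for x in xs:
        linear_ok = x <= M * (1 - y)
        nonlinear_ok = x * y == 0
        assert linear_ok == nonlinear_ok
```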
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4521434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Improper integral of an even power of $x$ times $e^{-x^2}$ In Donald McQuarrie’s Mathematical Methods for Scientists and Engineers, he has a problem I would like to assign to my class, but I am having trouble solving it. It states
Show that
$$\int_0^\infty e^{-x^2} \cos\alpha x \,dx = \frac{\sqrt{\pi}}{2} e^{-\alpha^2/4},$$
by expanding $\cos\alpha x$ in a Maclaurin series and integrating term by term.
Doing as suggested, you get
$$
\int_0^\infty e^{-x^2} \left( 1 - \frac{\alpha^2}{2!} x^2 + \frac{\alpha^4}{4!} x^4 + \dots \right)dx.
$$
I know that the first integral is $\sqrt{\pi}/2$ and that the remaining integrals can be evaluated using the following result from a table of integrals that I have
$$
\int_0^\infty x^{2n} e^{-x^2} dx = \frac{1 \cdot 3 \cdot 5 \dots (2n-1) }{2^{n+1}} \sqrt{\pi}.
$$
Doing this, I can complete the problem, the issue is that I do not know how to evaluate the above integral to obtain the result I found in the table of integrals.
Can anyone point me in the right direction?
| Evaluate the table integral as follows
$$
\int_0^\infty x^{2n} e^{-x^2} dx =
(-1)^n \frac{d^n}{da^n} \bigg(\int_0^\infty e^{-ax^2} dx\bigg)_{a=1} \\=
(-1)^n \frac{d^n}{da^n} \bigg(\frac{\sqrt{\pi}}2 a^{-1/2} \bigg)_{a=1} =\frac{1 \cdot 3 \cdot 5 \cdots (2n-1) }{2^{n+1}} \sqrt{\pi}.
$$
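The closed form can be double-checked numerically: truncate the integral at a point where the Gaussian tail is negligible and apply composite Simpson's rule. A sketch (the truncation at $x=12$ and the step count are arbitrary choices):

```python
import math

def formula(n):
    # (1*3*5***(2n-1)) / 2^{n+1} * sqrt(pi)   (empty product = 1 for n = 0)
    odd_prod = 1
    for k in range(1, 2 * n, 2):
        odd_prod *= k
    return odd_prod / 2 ** (n + 1) * math.sqrt(math.pi)

def moment_integral(n, upper=12.0, steps=20000):
    """Composite Simpson's rule for the integral of x^{2n} e^{-x^2} over [0, upper];
    the tail beyond 12 is of order e^{-144} and hence negligible."""
    h = upper / steps
    g = lambda x: x ** (2 * n) * math.exp(-x * x)
    s = g(0.0) + g(upper)
    s += 4 * sum(g((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(g(2 * i * h) for i in range(1, steps // 2))
    return s * h / 3

for n in range(4):
    assert abs(moment_integral(n) - formula(n)) < 1e-9
```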
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4521553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Solving $\cos3x=-\sin3x$, for $x \in [0,2\pi]$ For my first step, I would choose to either divide both sides or add sin3x to both sides. My professor in Pre-Calc told me I shouldn't divide any trig functions from either side. Despite this, I can only find a solution this way.
$$\frac{\cos3x}{-\sin3x}=1$$
$$-\tan3x=1$$
$$\tan3x=-1$$
Which would lead me to my method of solving trig equations which is:
$$\tan\text{ is negative in Q2 and in Q4}$$
$$\tan x=1 \text{ at } \frac{\pi}{4}$$
$$Q2:\pi-\frac{\pi}{4}=\frac{3\pi}{4}$$
$$Q4:2\pi-\frac{\pi}{4}=\frac{7\pi}{4}$$
However, this is only for $\tan x=-1$; my professor told me to apply the "modifications of x" after.
$$3x=\frac{3\pi}{4}\implies x=\frac{3\pi}{4}\div3=\frac{\pi}{4}$$
$$3x=\frac{7\pi}{4}\implies x=\frac{7\pi}{4}\div3=\frac{7\pi}{12}$$
Is this the correct way to solve this equation? I believe dividing out $\sin3x$ leads to domain or range errors. However, I tried solving by adding $\sin3x$ to both sides first but could not figure out how to isolate the trig functions. I will clarify anything needed in the comments.
| Since the sine and cosine functions can't both be zero at the same point, in your given equation they are non-zero. Thus, you can divide by either of them without any loss of generality.
Also, if $x\in[0,2\pi]$ then $3x\in[0,6\pi]$. Thus, you are missing some solutions.
Also, if you are dividing $\cos3x$ by $\sin3x$, you are getting $\cot3x$. If you want $\tan3x$, you need to divide $\sin3x$ by $\cos3x$.
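As a numeric cross-check of "you are missing some solutions": from $\tan 3x=-1$ with $3x\in[0,6\pi]$ one gets the six values $x=\frac{\pi}{4}+\frac{k\pi}{3}$, $k=0,\dots,5$, and a sign-change scan of $\cos 3x+\sin 3x$ on $[0,2\pi]$ finds exactly those six roots (the grid size is an arbitrary odd number chosen so that no grid point lands exactly on a root):

```python
import math

# from tan 3x = -1 with 3x in [0, 6*pi]: 3x = 3*pi/4 + k*pi, so x = pi/4 + k*pi/3
candidates = [math.pi / 4 + k * math.pi / 3 for k in range(6)]

g = lambda x: math.cos(3 * x) + math.sin(3 * x)   # zero iff cos 3x = -sin 3x
for x in candidates:
    assert 0 <= x <= 2 * math.pi and abs(g(x)) < 1e-12

# sign-change scan over [0, 2*pi]: exactly the six roots above, no more
N = 99991
signs = [g(2 * math.pi * i / N) > 0 for i in range(N + 1)]
changes = sum(signs[i] != signs[i + 1] for i in range(N))
assert changes == len(candidates) == 6
```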
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4521724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Solving the equation $\overline z-z^2=i(\overline z+z^2)$ in $\mathbb{C}$ Let $\overline z$ denote the complex conjugate of a complex number z and let $i= \sqrt{-1}$. In the set of complex numbers, the number of distinct roots of the equation $\overline z-z^2=i(\overline z+z^2)$ is _____________.
My approach is as follow
$z = r{e^{i\theta }}$& $\overline z = r{e^{ - i\theta }}$
$r{e^{ - i\theta }} - {r^2}{e^{i2\theta }} = i\left( {r{e^{ - i\theta }} + {r^2}{e^{i2\theta }}} \right) \Rightarrow r{e^{ - i\theta }} - {r^2}{e^{i2\theta }} = {e^{i\frac{\pi }{2}}}\left( {r{e^{ - i\theta }} + {r^2}{e^{i2\theta }}} \right)$
$ \Rightarrow r{e^{ - i\theta }} - {r^2}{e^{i2\theta }} = \left( {r{e^{ - i\left( {\theta - \frac{\pi }{2}} \right)}} + {r^2}{e^{i\left( {2\theta + \frac{\pi }{2}} \right)}}} \right)$
$ \Rightarrow r\left( {\cos \theta - i\sin \theta } \right) - {r^2}\left( {\cos 2\theta + i\sin 2\theta } \right) = \left( {r\left( {\sin \theta + i\cos \theta } \right) + {r^2}\left( { - \sin 2\theta + i\cos 2\theta } \right)} \right)$
$ \Rightarrow r\cos \theta - {r^2}\cos 2\theta - r\sin \theta + {r^2}\sin 2\theta - i\left( {r\sin \theta - {r^2}\sin 2\theta - r\cos \theta - {r^2}\cos 2\theta } \right) = 0$
Not able to proceed further
| We have
$$\bar{z} - z^2 = i(\bar{z} + z^2)$$
and multiplying by $i$
$$-\bar{z} - z^2 = i(\bar{z} -z^2)$$
then summing
$$z^2=-i\bar z\iff r^2e^{i2\theta}=re^{i\left(-\theta+\frac 3 2\pi\right)}$$
from which we can conclude that $r=0$ or $r=1$ with
$$2\theta=-\theta+\frac 3 2\pi+2k\pi \iff \theta =\frac \pi 2 +\frac 2 3 k\pi$$
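So the distinct roots are $z=0$ together with the three unit-modulus values $e^{i(\pi/2+2k\pi/3)}$, $k=0,1,2$; that is, four roots in all. A quick numeric confirmation:

```python
import cmath, math

def satisfies(z, tol=1e-12):
    # original equation: conj(z) - z^2 = i*(conj(z) + z^2)
    return abs(z.conjugate() - z * z - 1j * (z.conjugate() + z * z)) < tol

roots = [0j] + [cmath.exp(1j * (math.pi / 2 + 2 * math.pi * k / 3))
                for k in range(3)]       # r = 0, and r = 1 with the three angles

assert all(satisfies(z) for z in roots)
assert all(abs(a - b) > 1e-9             # pairwise distinct
           for i, a in enumerate(roots) for b in roots[i + 1:])
assert len(roots) == 4
```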
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4522066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Notation for a string of objects Is there a commonly used notation for a string of objects? The particular situation I am interested in is a string of elements from a Boolean algebra. The elements in the string may be expressions using the Boolean operations $\wedge$ and $\vee$. When this situation occurs I enclose such an element in parentheses. A typical example of such a string is
$$(a_{1} \wedge b_{1}) \ldots (a_{m} \wedge b_{1}) (a_{1} \wedge b_{2}) \ldots (a_{m} \wedge b_{2}) \ldots (a_{1} \wedge b_{n}) \ldots (a_{m} \wedge b_{n})$$
What I would like is something like a summation sign, so that I could save space and improve clarity, and the string could be written in a manner similar to
$$\sum_{i = 1}^{m}\sum_{j = 1}^{n} (a_{i} \wedge b_{j}) \text{,}$$
but without the implied operation of addition.
| Hint: A commonly used notation for creating strings is concatenation of letters taken from an alphabet $V$.
We could then write
\begin{align*}
&(a_{1} \wedge b_{1}) \ldots (a_{m} \wedge b_{1}) (a_{1} \wedge b_{2}) \ldots (a_{m} \wedge b_{2}) \ldots (a_{1} \wedge b_{n}) \ldots (a_{m} \wedge b_{n})\\
&\qquad=\prod_{l=1}^n\prod_{k=1}^m(a_{k} \wedge b_{l})
\end{align*}
where non-commutativity of concatenation has to be appropriately addressed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4522252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove or disprove: convergence in distribution of continuous uniformly distributed variables Let $(X_n)_{n\in\mathbb{N}}$ be a sequence of random variables, with $X_n\sim U[-n,n]$ for all $n\in\mathbb{N}$.
Prove or disprove: this sequence converges in distribution.
I am not sure if this is true. My first attempt is to work with characteristic function. For $n\in\mathbb{N}$,
$$\varphi_{X_n}(t)=\frac{\sin(nt)}{nt}$$ which approaches $0$ for $t\neq0$ as $n\rightarrow\infty$, and equals $1$ at $t=0$. I think I need to use
Lévy's continuity theorem: If a sequence of characteristic functions converges pointwise to a limiting function $\varphi$ which is continuous at zero then $\varphi$ is a characteristic function and the sequence of random variables converges in distribution.
The problem is: what I am not quite sure about is whether or not the limiting function is discontinuous at $0$.
Thanks in advance!
| There are a few ways to do this. One way is to work with the ChF (characteristic function) and appeal to the theorem you quoted. For $t\ne 0$, as $n\to\infty$ we have
$$|\varphi_{X_n}(t)|=\left|\frac{\sin(nt)}{nt}\right|\le \frac{1}{n|t|}\to 0$$
while for $t=0$, recall that the ChF of any random variables at $t=0$ is $1$. Thus, $\varphi_{X_n}(0)=1$ by definition for all $n\in\mathbb{N}$ and so $\varphi_{X_n}(0)\to 1$ as $n\to\infty$. This means that $\varphi_{X_n}(t)\to \boldsymbol{1}_{\{0\}}(t)$, namely the function which is $1$ at $t=0$ and zero everywhere else. This function is discontinuous at $t=0$ since $\boldsymbol{1}_{\{0\}}(t)\to0 \ne \boldsymbol{1}_{\{0\}}(0)$ as $t \to 0$. If you were to draw the graph out, you'll see that there is a sudden jump at $t=0$. In any case, the sequence does not converge in distribution.
The second way to do this is to work directly with the DF (distribution function). Let $F_n$ be the DF of $X_n$ and suppose otherwise that $X_n\overset{d}{\to}X$ for some $X$ with DF $F$ as $n\to\infty$. See that $F_n(x)\to 1/2$ for all $x\in\mathbb{R}$ since $F_n(x)=1$ for $x>n$, $F_n(x)=0$ for $x<-n$ and for $x\in [-n,n]$, $$F_n(x)=\frac{1}{2n}x+\frac{1}{2}$$ Therefore, $F(x)=1/2$ for all $x\in\mathbb{R}$ (this may not be true for at most countably many points, but since $F$ is right-continuous, this is true everywhere). But the constant function $1/2$ is not a DF and so $F$ is not a DF, contradicting the fact that $F$ is the DF of $X$. Therefore, $(X_n)_{n=1}^\infty$ does not converge in distribution.
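The pointwise collapse of the DFs in the second argument is easy to see numerically. A small sketch (my own illustration, not part of the original argument), using the closed form $F_n(x)=\frac{x}{2n}+\frac12$ on $[-n,n]$:

```python
import numpy as np

def F(n, x):
    # distribution function of U[-n, n]
    return float(np.clip(x / (2 * n) + 0.5, 0.0, 1.0))

# for every fixed x, F_n(x) -> 1/2; the pointwise limit is the constant 1/2,
# which is not a distribution function
for x in (-5.0, 0.0, 7.5):
    assert abs(F(10_000, x) - 0.5) < 1e-3
assert F(10, 50.0) == 1.0 and F(10, -50.0) == 0.0   # each F_n is a genuine DF
```

The mass of $U[-n,n]$ escapes to $\pm\infty$, which is exactly why no limiting distribution exists.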
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4522413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The Lie bracket of two horizontal vector fields is vertical? $P(M, G, \pi)$ is a principal bundle, $u\in P$, vectors $X, Y\in H_u P$ where $H_u P$ is the horizontal subspace at $u$.
A theorem that says $[X, Y] \in V_u P$ where $V_u P$ is the vertical subspace. [Reference: Nakahara p. 387, and p. 388 (10.34) for intuition, where the book is proving Cartan's structure equation].
The argument given in p. 388 suggests that the integral curves of the projections of two horizontal vectors $V=\pi_* X$ and $W=\pi_* Y$ must form a coordinate basis as $[V, W]=0$. How could this be true? We could have started with non-coordinate basis vectors in $T_p M$ and lifted them into $P$. Furthermore, why could $\pi_*$ and the commutator swap order in (10.34)?
| Consider $\pi: P \to M$, your principal bundle projection, which is a submersion. Also, you have that
$$
D_u \pi: H_uP \to T_{x}M
$$
is a vector space isomorphism. As a result, you can conclude that $X \sim \epsilon V$ and $Y \sim \delta W$ are $\pi$-related (often called $f$-related in the literature). This is a step the OP's textbook leaves out; it is not obvious. This implies that $\pi_*$ can be "swapped" with the commutator, in this case restricted to horizontal vector fields, and gives us
$$
\pi_*([X,Y])=[\pi_*X,\pi_*Y]=[\epsilon V, \delta W]=\epsilon \delta [V,W] \to 0.
$$
We can do this since the leftmost term does not depend on $\epsilon$ or $\delta$. Hence the commutator has no horizontal part and is therefore entirely vertical.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4522595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
maximize $\det(I+\Lambda Q\Lambda)$, where $Q$ is p.s.d. with $\mathrm{rank}(Q)\le 2$ and $\mathrm{tr}(Q)\le 2$ Suppose $\Lambda=\left[ \begin{array}{ccc} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{array}\right]$ is diagonal, and $Q$ is a 3-by-3 positive semi-definite (psd) matrix, with $\mathrm{rank}(Q)\le 2$ and $\mathrm{tr}(Q)\le 2$. How should we choose $Q$ to maximize $\det(I+\Lambda Q\Lambda)$?
Without the rank constraint, the answer is simply a diagonal $Q$, with the diagonal elements being $Q_{ii}=\mu-\frac{1}{\Lambda_{ii}^2}$, where $\mu$ is determined by $\mathrm{tr}(Q)=2,$ i.e. $\mu=1\frac{5}{48}.$ This can be proved using the Hadamard's inequality, i.e. $\det(I+\Lambda Q\Lambda)\le \underset{i}\Pi (1+\Lambda_{ii}^2Q_{ii}),$ and maximizing the RHS with Lagrange multipliers. (This is essentially the "beamforming" problem in wireless communication.)
However, when we constrain the rank of $Q$ to be 2 or less, the answer is not so clear to me. Is the optimal $Q$ still diagonal (with $Q_{33}=0$)?
| 1st argument. I removed my first attempt to show this using the Hadarmard's Inequality. As user8675309 pointed out in the comments below, we cannot conclude that $Q$ is still diagonal with $Q_{3,3} = 0$ from Hadarmard's Determinant Inequality.
2nd argument. Denote $X = I + \Lambda Q \Lambda$ and consider the problem of maximizing the determinant of $X$.
The off-diagonal entries of $X$ should be equal to $0$ as a necessary condition, since they decrease the objective value; this implies that the optimal $Q$ is a diagonal matrix.
To satisfy $\text{rank}(Q) \leq 2$, at least one diagonal entry of $Q$ should be $0$. Since the determinant of $X$ is a monotone increasing function of its diagonal entries, and by looking at the values in $\Lambda = \text{diag}([4, 2, 1])$, the maximum is achieved at $Q_{3, 3} = 0$.
All in all, the optimal $Q$ is the solution of the relaxed problem where $\text{rank}(Q)$ is replaced by constraint $Q_{3,3} = 0$.
Solution. Instead of maximizing the determinant of $X$, we can maximize the SDP representable concave function log determinant of $X$ and rewrite the problem as:
\begin{align*}
& \text{maximize} \ \text{log} \ \text{det}\left(I + \Lambda Q \Lambda\right) \\
& \text{subject to:} \\
& \qquad \text{trace}(Q) \leq 2 \\
& \qquad Q_{3,3} = 0 \\
& \qquad \, Q \succeq 0
\end{align*}
I got the same result as @RiverLi by solving the corresponding relaxation with added constraint $Q_{3,3} = 0$, here is the code, using CVXPY in Python:
import numpy as np
import cvxpy as cp
I = np.identity(3)
L = np.diag(np.array([4, 2, 1]))
Q = cp.Variable((3, 3), symmetric=True)
objective = cp.log_det(I + L @ Q @ L)
constraints = [cp.trace(Q) <= 2]   # trace budget
constraints += [Q[2, 2] == 0]      # replaces the rank constraint
constraints += [Q >> 0]            # Q positive semidefinite
prob = cp.Problem(cp.Maximize(objective), constraints)
prob.solve()
Q = np.round(Q.value, 2)
print('rank(Q) = %.2f' % np.linalg.matrix_rank(Q))
print('objective(Q) = %.2f' % np.linalg.det(I + L @ Q @ L))
print('trace(Q) = %.2f' % np.trace(Q))
print('Q = '); Q
Gives:
rank(Q) = 2.00
objective(Q) = 85.56
trace(Q) = 2.00
Q =
array([[1.09, 0. , 0.],
[0. , 0.91, 0.],
[0. , 0. , 0.]])
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4522753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Are all set lattices isomorphic to a power set? A set lattice is a lattice where the meet is the intersection, the join is the union, and the partial order is $\subseteq$. The standard example is $P(A)$ where $A$ is a set. I’ve thought about other possible examples, yet have failed to come up with any. Are there any set/distributive lattices that aren’t isomorphic to $P(A)$ for some $A$?
| Well, not every lattice is isomorphic to $P(A)$. For example the $P(A)$ lattice has $2^{|A|}$ elements, and there are lattices of any finite order, e.g.
$$K_n=\big\{\{1\},\{1,2\},\ldots,\{1,2,\ldots,n-1,n\}\big\}$$
with union and intersection is a lattice of exactly $n$ elements.
So the more interesting question would be: can every lattice be embedded into $P(A)$ for some set $A$? The answer is still no. To see that we need to look at some property of (a sublattice of) $P(A)$ that other lattices don't have to have. One of those properties is the distributivity of $\vee$ over $\wedge$ (and vice versa). Union is always distributive over intersection (and vice versa), and so any sublattice of $P(A)$ is distributive. But there are non-distributive lattices, e.g. the $M_3$:
$$M_3=\{0,a,b,c,1\}$$
$$0< a<1$$
$$0< b<1$$
$$0< c<1$$
In fact this is an "if and only if": a lattice is isomorphic to a lattice of sets (with union and intersection), and thus embeddable into some $P(A)$, if and only if it is distributive. Note that this is a non-trivial theorem (see the wiki I linked earlier).
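To see concretely that $M_3$ fails distributivity, here is a quick brute-force check (my own sketch; the `join`/`meet` functions encode the $M_3$ order described above):

```python
from itertools import product

# M3 = {0, a, b, c, 1}: 0 below everything, 1 above everything,
# a, b, c pairwise incomparable
elems = '0abc1'

def join(x, y):
    if x == y: return x
    if x == '0': return y
    if y == '0': return x
    return '1'          # join of two incomparable middle elements (or anything with 1)

def meet(x, y):
    if x == y: return x
    if x == '1': return y
    if y == '1': return x
    return '0'          # meet of two incomparable middle elements (or anything with 0)

# search for violations of x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
violations = [(x, y, z) for x, y, z in product(elems, repeat=3)
              if meet(x, join(y, z)) != join(meet(x, y), meet(x, z))]
assert ('a', 'b', 'c') in violations   # a ∧ (b ∨ c) = a, but (a ∧ b) ∨ (a ∧ c) = 0
```

Since distributivity fails, $M_3$ cannot be embedded into any $P(A)$.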
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4523363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why is it wrong to solve the limit this way? Consider the following limit:
\begin{align*}
&\lim_{x\to +\infty}\frac{e^x}{\left(1+\frac1x\right)^{x^2}}\\
&=e^{\lim_{x\to +\infty}(x-x^2\ln\left(1+\frac1x\right))}\\
&=e^{\lim_{x\to +\infty}x^2(\frac1x-\ln\left(1+\frac1x\right))}\\
&=e^{\lim_{x\to 0^+ }\frac1{x^2}(x-\ln(1+x))}\\
&=e^{\lim_{x\to 0^+ }\frac{\frac12x^2}{x^2}}\\
&=e^{\frac12}
\end{align*}
The above solution is correct and another solution is following:
\begin{align*}
&\lim_{x\to +\infty}\frac{e^x}{(1+\frac1x)^{x^2}}\\
&=\lim_{x\to +\infty}\frac{e^x}{[(1+\frac1x)^{x}]^x}\\
&=\lim_{x\to +\infty}\frac{e^x}{e^x}\\
&=1
\end{align*}
Why is that wrong? Is that because we didn't take the limit of both the numerator and the denominator?
| (I've colored equals signs below as red when the argument is faulty. Apologies to the color-blind.)
You've done this:
$$\lim_{x\to\infty} \frac{f(x)}{g(x)^x}\color{red}=\lim_{x\to\infty}\frac{f(x)}{(\lim_{y\to\infty} g(y))^x}$$
If you could do that kind of substitution, you could solve a lot of problems this way:
$$\lim_{x\to \infty}\left(1+\frac1x\right)^x\color{red}=\lim_{x\to\infty}\left(\lim_{y\to\infty} 1+\frac 1y\right)^x=\lim_{x\to\infty} 1^x=1.$$
The real question is why you'd think you could let one tiny part of the limit go to infinity first.
One way to see this is wrong is to compute:
$$\lim_{x\to\infty}\frac{g(x)^x}{(\lim_{y\to\infty} g(y))^x}$$
In your technique, this limit is $1.$
If $L=\lim_{y\to\infty} g(y),$ then let $h(x)=g(x)-L.$ $h(x)$ is the "error."
Since $g(x)=L+h(x),$ we can compute the limit:
$$\lim_{x\to\infty} \frac{g(x)^x}{L^x}=\lim\left(\frac{g(x)}{L}\right)^x=\lim\left (1+\frac{h(x)}{L}\right)^x.$$
All we really know is that $h(x)\to 0.$ So, for example, if $h(x)=\frac 1x,$ then the right side of the limit would be $e^{1/L}.$
In your case, you need an estimate for $h(x)=e-(1+1/x)^x.$ I'm not sure how to do that, but I'd bet, from the actual answer, that $h(x)=\frac{e}{2x}+o\left(\frac1x\right).$
Another way to see the error is to take the first approach, but apply the second answer's logic. We'll just take the logarithms. Your second logic is:
$$\lim_{x\to\infty} (x-x^2\log(1+1/x))\color{red}=\lim_{x\to\infty} x-x\left(\lim_{y\to\infty} y\log(1+1/y)\right)=0.$$
since $y\log(1+1/y)\to 1.$
As you can see here, the common factor of $x$ is the problem. It is true that $1-x\log(1+1/x)\to 0,$ but it is not true that $x(1-x\log(1+1/x))\to 0.$
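A quick numerical sanity check of the correct computation (my own addition): the exponent $x - x^2\ln(1+1/x)$ really tends to $\tfrac12$, not $0$. `log1p` is used to avoid catastrophic cancellation for large $x$:

```python
import math

# logarithm of the ratio e^x / (1 + 1/x)^(x^2) is x - x^2 * log(1 + 1/x)
for x in (1e3, 1e5, 1e7):
    val = x - x**2 * math.log1p(1 / x)
    # tends to 1/2, so the original ratio tends to e^{1/2}, not 1
    assert abs(val - 0.5) < 1 / x + 1e-6
```

The error behaves like $\frac{1}{3x}$, consistent with the Taylor expansion $\ln(1+1/x)=\frac1x-\frac1{2x^2}+\frac1{3x^3}-\cdots$.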
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4523531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Solve $xdx-ydy=y^2(x^2-y^2)dy$ Question:
$$xdx-ydy=y^2(x^2-y^2)dy$$
I'm having trouble matching the solution in the book, which is $\frac{1}{2}\ln(x^2-y^2)=\frac{1}{3}y^3+C$.
I'm getting an integral that requires the incomplete gamma function.
My attempt:
Rewrite the equation:
$x + (-(x^2 - y^2) y^2 - y)\frac{dy}{dx} = 0$
This is not an exact equation, but I found the integrating factor: $μ(y) = e^{-(2 y^3)/3}$
Multiply both sides of $x + \frac{dy}{dx} (-(x^2 - y^2) y^2 - y) = 0$ by $μ(y):$
$xe^{-\frac{2}{3}y^3} + (e^{-\frac{2}{3} y^3} (y^3 - x^2 y - 1) y) \frac{dy}{dx} = 0$
Let $R(x,y) =xe^{-\frac{2}{3}y^3} $ and $S(x,y) = (e^{-\frac{2}{3} y^3} (y^3 - x^2 y - 1) y)$. So I want to seek $f(x,y)$ such that $\frac{\partial f(x,y)}{\partial x} = R(x,y)$ and $\frac{\partial f(x,y)}{\partial y} = S(x,y)$
Integrating w.r.t $x$:
$f(x,y) = \int xe^{-\frac{2}{3}y^3}\,dx = \frac{1}{2}x^2 e^{-\frac{2}{3}y^3} + g(y)$
$\frac{dg(y)}{dy} = e^{-\frac{2}{3}y^3} y (y^3 - 1)$
Integrating w.r.t $y$:
$g(y) = \int e^{-\frac{2}{3}y^3} y (y^3 - 1)\, dy = ?$
I'm stuck here.
| Your way is correct.
https://www.wolframalpha.com/input?i=Integral+e%5E%7B-2%2F3y%5E3%7D%28y%5E4-y%29dy
I saw this trick: the equation says $2x\,dx-2y\,dy=2y^2(x^2-y^2)\,dy$, i.e. $d(x^2-y^2)=2y^2(x^2-y^2)\,dy$, so
$$\frac{d(x^2-y^2)}{x^2-y^2}=2y^2dy$$
$$\ln(x^2-y^2)=\frac{2}{3}y^3+2c$$
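The book's implicit solution can be confirmed with a CAS; a sketch with SymPy (my own check, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
C = sp.symbols('C')
y = sp.Function('y')(x)

# implicit solution: (1/2) ln(x^2 - y^2) = y^3/3 + C
sol = sp.Rational(1, 2) * sp.log(x**2 - y**2) - y**3 / 3 - C

# differentiate implicitly with respect to x and solve for y'
yprime = sp.solve(sp.diff(sol, x), sp.Derivative(y, x))[0]

# ODE: x dx - y dy = y^2 (x^2 - y^2) dy, i.e. x - y y' = y^2 (x^2 - y^2) y'
residual = x - y * yprime - y**2 * (x**2 - y**2) * yprime
assert sp.simplify(residual) == 0   # the implicit solution satisfies the ODE
```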
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4523805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Why $x(t)$ should be 1 in $(\frac{1}{2}, a_m]$ in proving the non convergence of the sequence in $C[0,1]$? Here is the part of the book I am speaking about:
But I am wondering about the equalities before the last three lines (the one on the right): why should $x(t)$ be $1$ in $(\frac{1}{2}, a_m]$ in proving the non-convergence of the sequence in $C[0,1]$? I know that $x_m$ is the line $m(t - \frac{1}{2})$ in this part of the interval.
Could anyone help me answer this question, please?
| For every $s \in \left(\frac12, 1\right]$ there exists some natural $k$ such that $a_m \le s$ for $m \ge k$; that is, $s$ eventually ends up being an element of $[a_m, 1]$ as $m$ progresses. Now $$\underbrace{0 = \lim_{m \to +\infty} \int_{a_m}^1 \lvert 1-x(t) \rvert \mathrm d t}_{\text{as your book has proved}} \ge \int_s^1 \lvert 1-x(t) \rvert \mathrm d t \ge 0 .$$ Thus, if $x$ has to be a continuous function, then it has to be equal to $1$ on the interval $[s, 1]$, in particular $x(s) = 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4523954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Visually intuitive proof that set of invertible operators is open Let $H$ be a Hilbert space, and let $B(H)$ be the set of all bounded operators of $H$, equipped with the operator norm. It is well-known that the set of all invertible bounded operators forms an open subset of $B(H)$. The proof (which also applies to Banach algebras in general) seems to rely on mostly symbolic manipulation, in which given an operator $S + T$, with $S$ invertible and $T$ small, we consider the infinite sum $\sum_{n=0}^\infty ( S^{-1}T)^n$.
I would like to know if there is a intuitive (perhaps geometric) proof of why bounded operators which are close to an invertible bounded operator must also be invertible. My line of thought is something of the following: If $S$ is invertible and $T$ is close to $S$, then the behaviour of $T$ "can't differ too much" from that of $S$, and therefore must be invertible.
| The picture I see is the following: Given two points $x, y \in H$, there exist balls of the same radius centered at $x$ and $y$ whose images under $S$ are disjoint and therefore a positive distance apart. Therefore, if you perturb $S$ by a map $T$ with sufficiently small operator norm, then $(S+T)x$ and $(S+T)y$ remain in small disjoint balls centered at $Sx$ and $Sy$ respectively. In particular, $(S+T)x \ne (S+T)y$.
If $S^{-1}$ is bounded, then the sufficiently small bound on $\|T\|$ is independent of $x$ and $y$. In other words, if $\|T\|$ is sufficiently small, then $S+T$ is injective.
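For completeness, the Neumann-series argument mentioned in the question can also be watched in action numerically in finite dimensions. A sketch with NumPy (the matrices, sizes and seed are my own choices): since $S+T = S(I - M)$ with $M = -S^{-1}T$, the inverse is $(I-M)^{-1}S^{-1} = \left(\sum_k M^k\right) S^{-1}$ whenever $\|M\| < 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # invertible S
T = 0.05 * rng.standard_normal((3, 3))              # small perturbation

Sinv = np.linalg.inv(S)
M = -Sinv @ T
assert np.linalg.norm(M, 2) < 1    # spectral norm < 1, so the series converges

# (S + T)^{-1} = (I - M)^{-1} S^{-1} = (sum_k M^k) S^{-1}
series = np.zeros((3, 3))
term = np.eye(3)
for _ in range(60):
    series += term
    term = term @ M
approx_inv = series @ Sinv

assert np.allclose(approx_inv, np.linalg.inv(S + T))
```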
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4524126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Expressing $2e^{i\pi/2}+4e^{4i\pi/3}$ in $a+bi$ form I am asked to express
$$2e^{i\pi/2}+4e^{4i\pi/3}$$ in the form $a + bi$.
If I use Euler's theorem, then $e^{iz}=\cos(z)+i \sin(z)$
which gives me $$2e^{i\pi/2}+4e^{4i\pi/3}=2\cos(\pi/2)+i2\sin(\pi/2)
+4\cos(4\pi/3)+i4\sin(4\pi/3) \tag1$$
which simplifies to
$$0+2i-2-2i\sqrt3 \tag2$$
which simplifies to
$$-2+2i-2i\sqrt3 \tag3$$
Is this correct, or could this be further simplified?
| That's correct so far. The last step is to factor out an $i$ to get:
$$-2 + (2 - 2 \sqrt3) i$$
in the form $a + bi$.
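A one-line numerical check (my own addition) using Python's `cmath`:

```python
import cmath
import math

# 2 e^{i pi/2} + 4 e^{4 i pi/3} should equal -2 + (2 - 2 sqrt(3)) i
z = 2 * cmath.exp(1j * math.pi / 2) + 4 * cmath.exp(4j * math.pi / 3)
expected = complex(-2, 2 - 2 * math.sqrt(3))
assert abs(z - expected) < 1e-12
```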
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4524487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$G$ acts transitively on a set $X$, where $|X|=m\in\mathbb{Z}_{>0}$, if and only if there exsits $H\leq G$ with index $m$. I am trying to prove that a group $G$ acts transitively on a set $X$, where $|X|=m\in\mathbb{Z}_{>0}$, if and only if there exsits $H\leq G$ with index $m$.
Notation: $Gx$ is the orbit containg $x\in X$, $G_x$ is the stabilizer for $x$.
First, suppose that $G$ acts transitively on $X$; then there is only $1$ orbit, say the one containing $x\in X$, denoted by $Gx$. Since $X=\cup_{x\in X} Gx$, then $m=|X|=|Gx|$. By the orbit-stabilizer theorem, $m=|G:G_x|$, and since $G_x \leq G$, letting $H=G_x$ proves the first part of the theorem.
However, I am struggling with the other side; any hint would be appreciated.
| Given $H \le G$ let $G$ act on the set of cosets of $H$ in $G$ by multiplication. The number of such cosets is the index of $H$ in $G.$
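To see the hint in action, here is a small sketch (my own illustration) with $G=S_3$ and a subgroup $H$ of index $3$, checking that the left-multiplication action on cosets is transitive:

```python
from itertools import permutations

# G = S_3 as permutation tuples (i -> g[i]); H = <(0 1)> has index 3
G = list(permutations(range(3)))
compose = lambda a, b: tuple(a[b[i]] for i in range(3))   # a after b
H = [(0, 1, 2), (1, 0, 2)]                                # identity and the transposition

coset = lambda g: frozenset(compose(g, h) for h in H)     # left coset gH
cosets = {coset(g) for g in G}
assert len(cosets) == 3                                   # index m = 3

# action: g . (wH) = (gw)H; the orbit of one coset is everything
x0 = next(iter(cosets))
orbit = {frozenset(compose(g, w) for w in x0) for g in G}
assert orbit == cosets                                    # transitive
```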
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4524632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can two different matrices have the same eigenvectors? Assume that we have two symmetric and non-negative matrices $A$ and $B$. They are not the same, even if they might look the same to the untrained eye. If I want to find the eigenvectors, I could use the $V$ matrix from the SVD
$$A = USV^T$$
where matrix $V$ holds the eigenvectors in its columns.
Are there any situations where the eigenvectors of $V$ from the $A$ matrix, can be the same as $V$ matrix from the $B$ matrix?
| Take as $A=\begin{bmatrix}
1 & 0 \\
0& 1 \\
\end{bmatrix}$ and $B= \begin{bmatrix}
0 & 0 \\
0& 1 \\\end{bmatrix}$. $A$ has eigenvalue $1$ and eigenvector
$v=\begin{bmatrix}
v_{1} \\v_{2}
\end{bmatrix}=\begin{bmatrix}
0 \\1
\end{bmatrix}$ which is also an eigenvector of $B$; the other eigenvector $\begin{bmatrix}
1 \\0
\end{bmatrix}$ of $A$ is shared with $B$ as well.
Now $B$ also has eigenvalue $0$, and $B\begin{bmatrix}
1 \\0
\end{bmatrix}$=$\begin{bmatrix}
0 \\0
\end{bmatrix}$.
So $A,B$ are two different non-negative symmetric matrices which have the same eigenvectors! Therefore the answer is: yes, they can!
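The example can be verified mechanically; a short NumPy sketch (my own check):

```python
import numpy as np

A = np.eye(2)
B = np.diag([0.0, 1.0])
e1 = np.array([1.0, 0.0])   # A e1 = 1*e1 and B e1 = 0*e1
e2 = np.array([0.0, 1.0])   # A e2 = 1*e2 and B e2 = 1*e2

for v in (e1, e2):
    for M in (A, B):
        w = M @ v
        lam = v @ w                     # Rayleigh quotient (v has unit norm)
        assert np.allclose(w, lam * v)  # v is an eigenvector of M
```

The eigenvalues differ between $A$ and $B$, but the eigenvectors coincide.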
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4524950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given $3x+4y=15$, $\min(\sqrt{x^2+y^2})=?$ (looking for other approaches)
Given, $(x,y)$ follow $3x+4y=15$. Minimize $\sqrt{x^2+y^2}$.
I solved this problem as follows,
We have $y=\dfrac{15-3x}{4}$,
$$\sqrt{x^2+y^2}=\sqrt{x^2+\frac{(3x-15)^2}{16}}=\frac{\sqrt{25x^2-90x+225}}4=\frac{\sqrt{(5x-9)^2+144}}{4}$$Hence $\min(\sqrt{x^2+y^2})=3$.
I'm wondering is it possible to solve this problem differently?
|
$3x+4y=15$ represents a straight line and $\sqrt{x^2+y^2}$ represents the distance of the point $(x,y)$ from the origin. So the question is basically telling us to find the minimum distance of any point lying on the line $3x+4y=15$, from the origin.
This shortest distance must be the perpendicular distance from the origin to the line.
The perpendicular distance of a point $(h,k)$ from the line $ax+by+c=0$ is $\Bigg|\dfrac{ah+bk+c}{\sqrt{a^2+b^2}}\Bigg|$. Replacing $(h,k)$ with $(0,0)$ and the line with $3x+4y-15=0$ gives us the minimum value as 3.
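A brute-force check of the answer (my own addition), sampling points on the line:

```python
import numpy as np

x = np.linspace(-10, 10, 200_001)   # grid containing x = 1.8, the minimizer
y = (15 - 3 * x) / 4                # points on the line 3x + 4y = 15
d = np.hypot(x, y)                  # distance to the origin

assert abs(d.min() - 3.0) < 1e-6              # = |-15| / sqrt(3^2 + 4^2)
assert abs(x[d.argmin()] - 1.8) < 1e-3        # foot of the perpendicular (9/5, 12/5)
```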
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4525324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Why this test statistics for these distributions For a random sample $X_1,\ldots,X_n$ from the Poisson distribution I get where the T-statistic comes from because for a Poisson $E(X) = \lambda$ and $V(X)=\lambda$ so it becomes:
$$T= \frac{\bar{X}-E(X)}{\sqrt{\frac{V(X)}{n}}} =\frac{\bar{X}-\lambda}{\sqrt{\frac{\lambda}{n}}},$$
where $\bar{X} = (X_1+\ldots,X_n)/n$.
But for the geometric distribution I have $E(X) = \frac{1}{p}$ and $V(X) = \frac{1}{p^2}$. And if I substitute:
$$T = \frac{\bar{X}-E(X)}{\sqrt{\frac{V(X)}{n}}} = \frac{\bar{X}-\frac{1}{p}}{\sqrt{\frac{1}{np^2}}} \neq \frac{\bar{X}-\frac{1}{p}}{\sqrt{\frac{1-p}{np^2}}}$$
The expression on the right is the real expression. Where does that $1-p$ in the denominator come from? How would I proceed with the binomial and exponential?
| It is not clear what the purpose of your statistic is, but note that if $X\sim \text{Geometric}(p)$, then $\text{var}(X) = \frac{1-p}{p^2}$.
Furthermore, if
$$X\sim\text{Binomial}(m,p)$$
then $E(X)=mp$ and $\text{var}(X)=mp(1-p)$. And if
$$X\sim\text{Exponential}(p),$$
then $E(X)=1/p$ and $\text{var}(X)=1/p^2$. So to get the expression of the statistic you can just replace the moments by their expressions.
As a side note, your $T$ statistic coincides with the Wald statistic based on the maximum likelihood estimator and its asymptotic variance. The nice thing about the Wald statistic is that it has asymptotic standard normal distribution, which you can use to do inference on your parameter of interest.
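The geometric moments can be checked by simulation; a sketch with NumPy (my own addition; note that NumPy's geometric distribution counts trials, so its support is $\{1,2,\dots\}$, matching $E(X)=1/p$):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
x = rng.geometric(p, size=2_000_000)   # support {1, 2, ...}

assert abs(x.mean() - 1 / p) < 0.01           # E(X) = 1/p
assert abs(x.var() - (1 - p) / p**2) < 0.1    # var(X) = (1-p)/p^2, not 1/p^2
```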
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4525540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
why is the empty set linearly independent? For context, I am reading P.R Halmos's Finite-Dimensional Vector Spaces's section on linear dependence. The book wrote a lot of explanation for why the empty set is linearly independent around the definition of linear dependence
Here's the definition provided in the text:
Definition. A finite set $\{x_i \}$ of vectors is linearly dependent if there exists a corresponding set $\{a_i \}$ of scalars, not all zero, such that $$\sum_i a_i x_i = 0 $$ If, on the other hand, $\sum_i a_i x_i = 0 $ implies that $a_i = 0 $ for each $ i $, the set $\{x_i \}$ is linearly independent
And the explanation for why the empty set is linearly independent as I've understood is as follows: Since there is no indices $ i $ at all for an empty set, you cannot assign to some of them a non-zero scalar, thus it's not linearly dependent.
But what I'm confused about is that the negation of "some scalars are non-zero" is "all scalars are zero". Then I can use the same argument to say that since there is no indices $ i $ at all for an empty set, you cannot assign to all the vectors a zero scalar, thus it's not linearly independent.
Especially when the text, for the sake of intuition, tries to rephrase the definition of linear independence to "If $\sum_i a_i x_i = 0 $ then there is no index $ i $ for which $ a_i \neq 0 $". Here, equivalently, we can say "If $\sum_i a_i x_i = 0 $ then for all indices $ i $ , $ a_i = 0 $". I feel like this is just playing with words and does not address the problem.
| Say that a "dependence relation" for a finite set of vectors is a linear combination of elements from the set that is equal to zero. As you say, the empty set of vectors satisfies a unique dependence relation, namely, the empty sum is equal to zero. In this dependence relation, there are no coefficients. Therefore, all the coefficients are equal to zero. In other words: in every dependence relation satisfied by the empty set, all coefficients are equal to 0. This is the definition of linear independence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4525614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Algebraic characterisation of geometrically integral affine schemes of finite type over a field Let $k$ be a field, not necessarily algebraically closed, and let $A$ be a $k$-algebra of finite type.
Recall that $\operatorname{Spec} (A)$ is geometrically integral (over $k$) if $\operatorname{Spec} (A) \times_{\operatorname{Spec} (k)} \operatorname{Spec} k'$ is integral for every field extension $k'$ of $k$.
Translated back to algebra, that is the same as requiring that $A \otimes_k k'$ be an integral domain for every field extension $k'$ of $k$.
In particular this is so for $k' = k$, so henceforth we assume $A$ is an integral domain.
Let $K = \operatorname{Frac} (A)$, the fraction field of $A$.
Then $K \otimes_k k'$ is also an integral domain for every field extension $k'$ of $k$, so in particular $K$ is a separable (but possibly transcendental) field extension of $k$.
Also, $A \otimes_k k'$ is an integral domain for every algebraic extension $k'$ of $k$, so $k$ is algebraically closed inside $A$.
Question.
Now, suppose $A$ is a $k$-algebra of finite type with the following properties:
1. $A$ is an integral domain.
2. $k$ is algebraically closed inside $A$.
3. $\operatorname{Frac} (A)$ is a separable field extension of $k$.
Does it follow that $A \otimes_k k'$ is an integral domain for every field extension $k'$ of $k$?
| No. For instance, let $k=\mathbb{Q}$ and $A=\mathbb{Q}[x,y]/(x^2+y^2)$. Since $x^2+y^2$ is irreducible over $\mathbb{Q}$, $A$ is a domain, and the separability condition is trivial since the characteristic is $0$. To see that $\mathbb{Q}$ is algebraically closed in $A$, note that $\operatorname{Frac}(A)$ can be identified with $\mathbb{Q}(i)(x)$ with $i$ mapping to $y/x$ and then $A$ is the subring $\mathbb{Q}[x,ix]$ which is just the subring of $\mathbb{Q}(i)[x]$ consisting of polynomials with constant term in $\mathbb{Q}$. No nonconstant polynomial is algebraic over $\mathbb{Q}$ so $\mathbb{Q}$ is algebraically closed in $A$.
However, $x^2+y^2$ factors as $(x+iy)(x-iy)$ over $\mathbb{Q}(i)$ so $A\otimes \mathbb{Q}(i)$ is not a domain.
(More generally, you need $k$ to be algebraically closed not just in $A$ but in $\operatorname{Frac}(A)$. This suffices, since it implies $\operatorname{Frac}(A)\otimes_k k'$ is a domain and thus $A\otimes_k k'$ is a domain since it is a subring of $\operatorname{Frac}(A)\otimes_k k'$.)
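The key factorization can be confirmed with a CAS; a sketch with SymPy (my own check):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

# irreducible over Q, so A = Q[x, y]/(x^2 + y^2) is a domain
assert sp.factor(f) == f

# but it splits over Q(i), so A tensor_Q Q(i) has zero divisors
g = sp.factor(f, extension=sp.I)
assert g != f and sp.expand(g) == f   # a genuine factorization of x^2 + y^2
```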
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4525992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Show that $\frac{1-\sin2\alpha}{1+\sin2\alpha}=\tan^2\left(\frac{3\pi}{4}+\alpha\right)$ Show that $$\dfrac{1-\sin2\alpha}{1+\sin2\alpha}=\tan^2\left(\dfrac{3\pi}{4}+\alpha\right)$$
I am really confused about that $\dfrac{3\pi}{4}$ in the RHS (where it comes from and how it relates to the LHS). For the LHS:
$$\dfrac{1-\sin2\alpha}{1+\sin2\alpha}=\dfrac{1-2\sin\alpha\cos\alpha}{1+2\sin\alpha\cos\alpha}=\dfrac{\sin^2\alpha+\cos^2\alpha-2\sin\alpha\cos\alpha}{\sin^2\alpha+\cos^2\alpha+2\sin\alpha\cos\alpha}=\dfrac{\left(\sin\alpha-\cos\alpha\right)^2}{\left(\sin\alpha+\cos\alpha\right)^2}$$ I don't know if this is somehow useful as I can't get a feel of the problem and what we are supposed to notice to solve it.
| \begin{align*}
\tan^2\left(\dfrac{3\pi}{4}+\alpha\right)&=\dfrac{\sin^2\left(\dfrac{3\pi}{4}+\alpha\right)}{\cos^2\left(\dfrac{3\pi}{4}+\alpha\right)}\\
&=\dfrac{\left(\sin\dfrac{3\pi}{4}\cos\alpha+\cos\dfrac{3\pi}{4}\sin\alpha\right)^2}{\left(\cos\dfrac{3\pi}{4}\cos\alpha-\sin\dfrac{3\pi}{4}\sin\alpha\right)^2}\\
&=\dfrac{\left(\dfrac{1}{\sqrt2}\cos\alpha-\dfrac{1}{\sqrt2}\sin\alpha\right)^2}{\left(\dfrac{-1}{\sqrt2}\cos\alpha-\dfrac{1}{\sqrt2}\sin\alpha\right)^2}\\
&=\dfrac{\dfrac{1}{2}\cos^2\alpha+\dfrac{1}{2}\sin^2\alpha-\sin\alpha\cos\alpha}{\dfrac{1}{2}\cos^2\alpha+\dfrac{1}{2}\sin^2\alpha+\sin\alpha\cos\alpha}\\
&=\dfrac{1-\sin2\alpha}{1+\sin2\alpha}
\end{align*}
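A numerical spot-check of the identity (my own addition), at a few values of $\alpha$ away from the poles:

```python
import math

for a in (0.1, 0.7, 1.3, 2.9, -1.1):   # values away from sin(2a) = -1
    lhs = (1 - math.sin(2 * a)) / (1 + math.sin(2 * a))
    rhs = math.tan(3 * math.pi / 4 + a) ** 2
    assert abs(lhs - rhs) < 1e-9
```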
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4526177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 8,
"answer_id": 1
} |
What is the probability that this happens: P(A)+P(B)−2P(A∩B)? I think that my question is badly structured, but it is based on the following.
Let A and B be any sets. Show that the probability that exactly
one of the events A or B occurs is:
$$P(A)+P(B)−2P(A\cap B)$$
I thought that this is possible with the inclusion-exclusion principle; is that right, or do I need something else?
| The inclusion exclusion principle (for any two events, $A,B$) is that:$$\mathsf P(A\cup B)=\mathsf P(A)+\mathsf P(B)-\mathsf P(A\cap B)\\~\\\text{also}\\~\\\mathsf P(A\cap B)=\mathsf P(A)+\mathsf P(B)-\mathsf P(A\cup B)$$
You may indeed use this to show that: $$\mathsf P((A\cup B)\cap (A\cap B)^\complement)=\mathsf P(A)+\mathsf P(B)-2\,\mathsf P(A\cap B)$$
Also note:
$(A\cup B)\cap (A\cap B)^\complement= (A\cap B^\complement)\cup(A^\complement\cap B)$
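A tiny enumeration check of the identity (my own illustration) on a four-point sample space:

```python
from itertools import product
from fractions import Fraction

omega = list(product('HT', repeat=2))          # two fair coin flips
P = lambda E: Fraction(len(E), len(omega))     # uniform probability

A = {w for w in omega if w[0] == 'H'}          # first flip is heads
B = {w for w in omega if w[1] == 'H'}          # second flip is heads

exactly_one = (A | B) - (A & B)                # symmetric difference
assert P(exactly_one) == P(A) + P(B) - 2 * P(A & B)
```

Here $P(A)=P(B)=\frac12$, $P(A\cap B)=\frac14$, and exactly one of the events occurs with probability $\frac12$.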
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4526244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
When the discriminant of the quadratic is a root A friend of mine gave me this problem
A quadratic of the form $ax^2+bx+c=0$ has its discriminant as one of its roots. What is the maximum possible value of $ab$?
I tried Vieta's, Quadratic Formula, even plugging it in, always ending up with complicated expressions that have, as far as I can tell, nothing to do with $ab$.
Is there another method of doing this? Am I missing something obvious? My friend said it shouldn't be too hard.
Thanks!
| You have: $\Delta=\frac{-b-\sqrt {\Delta}}{2a}\,\vee \,\Delta =\frac{-b+\sqrt {\Delta}}{2a}$
Let's write this statement shorter as follows:
$$-b\pm\sqrt {\Delta}=2a\Delta$$
Let $\sqrt {\Delta}=u\ge 0$, then you get
$$
\begin{aligned}&2au^2\pm u+b=0\\ \implies &\Delta_u=1-8ab\ge 0\\
\implies &1\ge 8ab\\
\implies &ab\leq\frac 18.\end{aligned}
$$
Explanatory note: Because $u$ is real (assuming $a,b,c$ are real), the discriminant of the quadratic $\Delta_u=1-8ab$ must be non-negative; therefore $\max\{ab\}=\dfrac 18.$
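The bound is in fact attained. For instance (my own example, computed from the equality case $\Delta_u=0$), take $a=1$, $b=\frac18$, $c=-\frac{3}{256}$:

```python
from fractions import Fraction

a, b = Fraction(1), Fraction(1, 8)      # ab = 1/8, the claimed maximum
c = Fraction(-3, 256)                   # chosen so that the discriminant is a root
disc = b * b - 4 * a * c                # = 1/16

assert a * disc**2 + b * disc + c == 0  # the discriminant is indeed a root
assert a * b == Fraction(1, 8)
```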
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4526349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Alternate Optimal Solution Is there a linear optimization problem where there is an alternate optimal solution (i.e. $z_j-c_j=0$) but all the $y_{ij}$ in the simplex tableau are negative, i.e. $y_{ij}<0$?
In a linear maximization problem, we say an optimal solution has been achieved if $z_j-c_j>0$; but if for some $j$ we have $z_j-c_j=0$, then we have an alternate optimal solution.
| They shouldn't all be negative; otherwise we wouldn't have any basic variables left in the tableau, as they are required to have a column of the form:
$$\begin{bmatrix}
0\\\vdots\\1\\\vdots\\0
\end{bmatrix}$$
Thus, in this type of negative-column tableau, we would be forced to solve for basic variables, and that would eventually make all of the $y_{ij}$ non-negative.
Not only that, but if we were to be in such a hypothetical situation, then the other solution we would find after pivoting would be infeasible, because it would violate the non-negativity constraint $x_j\ge0$ for the basic variables.
Thus, this situation would never arise unless some grave miscalculations took place.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4526599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the range of $f(x)=\frac1{1-2\sin x}$ Question:
Find the range for $f(x)= 1/(1-2\sin x)$
Answer :
$ 1-2\sin x \ne 0 $
$ \sin x \ne 1/2 $
My approach:
For range :
$ -1 ≤ \sin x ≤ 1 $
$ -1 ≤ \sin x < 1/2$ and $1/2<\sin x≤1 $ , because $\sin x≠1/2$
$ -2 ≤ 2\sin x <1 $ and $ 1< 2\sin x≤2 $
$ -1 < -2\sin x ≤2 $ and $ -2≤ -2\sin x<-1 $
$ 0 < 1-2\sin x ≤3 $ and $ -1≤ 1-2\sin x<0 $
I am stuck at the last step. If I take reciprocal i.e., $\frac 1{1-2\sin x}$ I get :
$ \frac10 < \frac1{1-2\sin x} ≤\frac13 $ and $ -1≤ \frac1{1-2\sin x} < \frac 10 $
$1/0$ can be interpreted as infinity so the second equation gives range of $f(x)$ as $[-1,∞)$. But the correct answer is : $(−∞,−1]∪[\frac13,∞)$
Any alternative solutions are welcome :)
P.S. : Also $\frac1{1-2\cos x}$ will also have the same range, right? As $\sin x$ and $\cos x$ both lie between $[-1,1]$. So the answer will proceed in similar fashion to the given one.
|
because $\sin x\neq 1/2$
That is not valid as it's circular reasoning, which is a fallacy in argument.
So, you had:
$-1\leq\sin x \leq1$
$\Rightarrow-2\leq2\sin x \leq2$
$\Rightarrow2\geq-2\sin x \geq-2$, that is, $-2\leq-2\sin x \leq2$
$\Rightarrow-1\leq1-2\sin x \leq3$
Now, when we take reciprocals, similar to what we do when we multiply the whole inequality by "$-$", the inequality reverses. But since the interval $[-1,3]$ contains $0$, we must first split it there: either $0<1-2\sin x\leq3$ or $-1\leq1-2\sin x<0$. So:
$\Rightarrow\frac1{1-2\sin x}\geq\frac13$ on the first piece, and $\frac1{1-2\sin x}\leq\frac1{-1}$ on the second
$\therefore\frac1{1-2\sin x}\geq\frac13 \ $ also, $ \ \frac1{1-2\sin x} \leq -1$
Thus, we get our range as $(-\infty,-1]\cup[\frac13,\infty)$
But, note that $1/0$ is not defined so that means $(1-2\sin x)$ shouldn't be $0$.
So, $\sin x\neq \frac12$.
Yup, replacing $\sin x$ by $\cos x$ won't change the answer, it'll behave the same way.
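As a numerical sanity check on the final range (a plain-Python sketch, not part of the algebraic argument), we can sample $f(x)=\frac1{1-2\sin x}$ over one period and confirm every value lands in $(-\infty,-1]\cup[\frac13,\infty)$, with the extremes $-1$ and $\frac13$ actually approached:

```python
import math

# Sample f(x) = 1/(1 - 2 sin x) over one period, avoiding the poles.
values = []
for k in range(1, 100000):
    x = k * 2 * math.pi / 100000
    d = 1 - 2 * math.sin(x)
    if abs(d) > 1e-6:            # skip points too close to sin x = 1/2
        values.append(1 / d)

pos_min = min(v for v in values if v > 0)   # should be about 1/3
neg_max = max(v for v in values if v < 0)   # should be about -1
print(pos_min, neg_max)
print(all(v <= -1 + 1e-6 or v >= 1/3 - 1e-6 for v in values))   # True
```

The number of sample points and the pole cutoff `1e-6` are arbitrary choices; any fine grid gives the same picture.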
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4526772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
How to prove that $ \int_0^1 \sqrt{\ln(\frac{1}{x})} \ \frac{\vartheta _3(0,x)-1}{x} dx = \sqrt{\pi}\zeta(3) $? Where $ \vartheta _3(0,x) $ is the elliptic theta function.
I first tried to expand the series since the integrand is within the radius of convergence of the series to no avail.
Also, I'm not sure an exponential substitution would make the Theta function any simpler though it would simplify the $\ln$ part.
Any suggestions would be appreciated.
| For simplicity, define the function
$$\psi(x):=\sum_{n=1}^{\infty}e^{-n^2\pi x}=\frac{1}{2}(\theta_3(0,e^{-\pi x})-1)$$
Then the integral can be equivalently rewritten as
$$I=2\int_0^1\frac{dx}{x}\sqrt{\ln \frac{1}{x}}\psi\left(\frac{\ln 1/x}{\pi}\right)$$
which, after substituting $u=\ln 1/x$ can be shown to be
\begin{align}
I&=2\int_0^\infty\sqrt{u}\psi(u/\pi)du \\
&=2\sum_{n=1}^\infty\int_0^\infty\sqrt{u}e^{-n^2 u}\,du \\
&=2\Gamma(3/2)\sum_{n=1}^\infty\frac{1}{n^3}\\&=\sqrt{\pi}\zeta(3)
\end{align}
There are no special Jacobi identities that need to be used for this result, but that doesn't mean it isn't cute!
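The two key steps — the termwise integral $\int_0^\infty\sqrt u\,e^{-n^2u}\,du=\Gamma(3/2)/n^3$ and the final value $2\,\Gamma(3/2)\,\zeta(3)=\sqrt\pi\,\zeta(3)$ — can be checked numerically. A plain-Python sketch (the quadrature parameters and the truncation of $\zeta(3)$ are arbitrary choices):

```python
import math

def tail_integral(n, upper=40.0, steps=200000):
    """Trapezoidal approximation of the integral of sqrt(u)*exp(-n^2 u)
    over [0, infinity), truncated at `upper` (integrand is negligible there)."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.sqrt(u) * math.exp(-n * n * u)
    return total * h

# Termwise: each integral should equal Gamma(3/2)/n^3 = sqrt(pi)/(2 n^3).
for n in range(1, 4):
    print(n, tail_integral(n), math.gamma(1.5) / n**3)

# Final step: 2*Gamma(3/2)*zeta(3) = sqrt(pi)*zeta(3).
zeta3 = sum(1 / k**3 for k in range(1, 200001))   # truncated zeta(3)
print(2 * math.gamma(1.5) * zeta3)                # approximately 2.1306
```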
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4526929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to build a protractor without a protractor? We all know how to use a protractor, it is taught in elementary school. However, I was wondering what type of knowledge is required to build one from scratch.
For instance, was the understanding of $\pi$ and a compass first required before the first protractor, and if so how can I draw a full protractor on paper with just a compass, a ruler and some understanding of $\pi$?
I guess my point is, if we can draw a semi-circle on paper, then how can we fill up the degrees without the help of a protractor?
| I think there are two questions here: the practical question of what is actually done at a protractor factory, and the theoretical question of can you decompose a circle into $360$ equal pieces given only a straight-edge and compass.
I'll focus on the latter since the former is not really about mathematics. We know that $360 = 2^3\cdot3^2\cdot5$. Now, $72=2^3\cdot3^2$ degrees is a constructible angle, because a pentagon is constructible. Bisection is always possible, so that leaves angles that need to be trisected twice. This isn't possible with a straight-edge and compass (in general), BUT arbitrary trisection is possible with a ruler and compass (i.e. putting distances on your straight-edge is enough to overcome this hurdle). Wikipedia says this was already known to Archimedes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4527082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Inequality $xy+yz+zx-xyz \leq \frac{9}{4}.$ Currently I try to tackle some olympiad questions:
Let $x, y, z \geq 0$ with $x+y+z=3$. Show that
$$
x y+y z+z x-x y z \leq \frac{9}{4}.
$$
and also find out when the equality holds.
I started by plugging in $z=3-x-y$ on the LHS and got
$$
3y-y^2+3x-x^2-4xy+x^2y+xy^2 = 3y-(y^2+x^2)+3x-4xy+x^2y+xy^2\leq 3y-((y+x)^2)+3x-4xy+x^2y+xy^2
$$
But this got me nowhere.
Then I started again with the left hand side
$$
x y+y z+z x-x y z \Leftrightarrow yz(1-x)+xy+zx
$$
and $x+y+z=3 \Leftrightarrow y+z-2=1-x$ so
$$
yz(y+z-2)+x(y+z)
$$
But this also leaves no idea. Do I have to use a known inequality?
COMMENT.- Following what was said by @Erik Satie (artistic nickname), we give here another, different proof.
We consider $z=a$ fixed. Then the segment with positive $x$ and $y$ of the line $x+y=3-a$ must be “under” the curve in the first quadrant of equation $f(x)=y=\dfrac{2.25-ax}{(1-a)x+a}$, which is equivalent to the proposed inequality for all $x,y$ when $z=a$. Since $f(x)$ is convex in the first quadrant (because $f''(x) \gt0$) and symmetric with respect to the diagonal $y=x$, considering the two fixed points $(x_1,x_1)$ and $(x_2,y_2)$ of the line and the curve respectively, it is enough to verify that $x_1\le x_2$, which reduces the inequality to one in a single variable $a$ that is easy to verify. You then have to verify the inequality
$$\dfrac{3-a}{2}\le\dfrac{-a+\sqrt{a^2+2.25(1-a)}}{1-a}$$ in which you do have to consider the cases $0\le a\lt1,\space a=1, \space 1\lt a\le3$. Note that for $a=1$ the curve $f(x)$ is a line and the inequality $x_1\le x_2$ is evident.
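A brute-force grid search over the simplex (a quick numerical sketch, not a proof) supports both the bound and the equality case at permutations of $(\frac32,\frac32,0)$:

```python
# Grid search over the simplex x+y+z=3, x,y,z >= 0 for the maximum of
# g(x,y,z) = xy + yz + zx - xyz.  The grid contains the conjectured
# equality points, the permutations of (3/2, 3/2, 0).
N = 300                      # grid resolution (arbitrary choice)
best, argbest = -1.0, None
for i in range(N + 1):
    for j in range(N + 1 - i):
        x, y = 3 * i / N, 3 * j / N
        z = 3 - x - y
        g = x * y + y * z + z * x - x * y * z
        if g > best:
            best, argbest = g, (x, y, z)
print(best, argbest)         # maximum 2.25, at a permutation of (1.5, 1.5, 0)
```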
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4527271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
Find all values of a so that the circle $x^2 - ax + y^2 + 2y = a$ has the radius 2 My goal is to find all values of "a" so that the circle $x^2 - ax + y^2 + 2y = a$ has the radius 2
The correct answer is: $a = -6$ and $a = 2$
I tried solving it by doing this:
$x^2 - ax + y^2 +2y=a$
$x^2 - ax + (y+1)^2-1=a$
$(x - \frac a2)^2 - (\frac a2)^2 + (y+1)^2-1=a$
$(x - \frac a2)^2 - {a^2\over 4} + (y+1)^2-1=a$
$(x - \frac a2)^2 + (y+1)^2=a + {a^2\over 4} + 1$
$(x - \frac a2)^2 + (y+1)^2={a^2+4a + 4\over 4}$
We want the radius to be 2 so set this ${a^2+4a + 4\over 4}$ equal to 2
${a^2+4a + 4\over 4}=2$
$a^2+4a + 4=8$
$a^2+4a -4=0$
Solve for a:
$a=-2 \pm \sqrt{4+4}$
$a=-2 \pm \sqrt{8}$
This is not correct as you can see. I don't understand what I do wrong, I'm not sure if there is one of those tiny mistakes somewhere in my solving process or if I'm completely wrong from the beginning. Thanks in advance.
| $\frac{a^2+4a+4}{4}$ is not a radius.
Actually, it is the square of radius.
So, you should solve $\frac{a^2+4a+4}{4}=2^2$
And its solutions are $a=-6$ and $a=2$.
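To double-check both values numerically (a small sketch using the squared radius $r^2(a)=\frac{a^2+4a+4}{4}$ derived above):

```python
import math

def r2(a):
    """Squared radius from the completed square
    (x - a/2)^2 + (y + 1)^2 = (a^2 + 4a + 4)/4."""
    return (a * a + 4 * a + 4) / 4

for a in (-6, 2):
    print(a, math.sqrt(r2(a)))   # both radii equal 2.0

# Roots of a^2 + 4a - 12 = 0, i.e. r2(a) = 4:
roots = sorted([(-4 + math.sqrt(64)) / 2, (-4 - math.sqrt(64)) / 2])
print(roots)                     # [-6.0, 2.0]
```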
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4527455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How do I use the following difference root (LDE) to extract a partial sum of the harmonic series below? I asked WolframAlpha about some partial sum formula for the following divergent series:
$$\sum_{x=2}^\infty \frac{H_x}{ax\pm1} = ??$$
Where $H_x$ is the $x$th harmonic number.
It informed me that it is indeed divergent, then spat out the following for the partial sum formula:
$$DifferenceRoot[\{y, n\}, Function \{(n + 1) (a n \pm 1) y(n) - (n + 1) (2 a n + a \pm 2) y(n + 1) + (n + 1) (a n + a \pm 1) y(n + 2) - 1 = 0, y (0) = 0, y (1) = 0\}](m + 1) \mp \frac{1}{1 \pm a}$$
What am I supposed to do with that? I checked the documentation, it's not entirely clear how you can use this without a Wolfram Notebook product license. What techniques should I even be thinking about here to solve this, something regarding linear differential equations?
You have a problem because of the $\pm 1$. If the denominator were just $an$, you would have
$$\sum_{n=2}^p \frac {H_n}{an}=\frac{6 \left(H_p\right){}^2-6 \psi ^{(1)}(p+1)+\pi ^2-12}{12 a},$$ which is an upper or lower bound for your summation.
Assuming that $a$ is large, what you could do is to write
$$\sum_{n=2}^p \frac {H_n}{an\pm 1}\sim \sum_{n=2}^m \frac {H_n}{an\pm 1}+\sum_{n=m+1}^p \frac {H_n}{an}$$ Compute the first summation and the second one is
$$\sum_{n=m+1}^p \frac {H_n}{an}=\frac{\left(H_p\right){}^2-\left(H_m\right){}^2+\psi ^{(1)}(m+1)-\psi^{(1)}(p+1)}{2 a}$$
Edit
For the case of
$$S_k=\sum_{n=2}^p \frac {H_n}{n+k}$$ where $k$ is an integer, the formulae are quite simple
$$S_1=\frac{1}{2} \left(\left(H_{p+1}\right){}^2-H_{p+1}^{(2)}-1\right)$$
$$S_2=\frac{1}{6} \left(3 \left(H_{p+2}\right){}^2-3 H_{p+2}^{(2)}+\frac{6}{p+2}-8\right)$$
$$S_3=\frac{1}{2} \left(\left(H_{p+3}\right){}^2+\frac{1}{p+2}+\frac{3}{p+3}+\psi^{(1)}(p+4)-\frac{\pi ^2}{6}-4\right)$$
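The closed forms for $S_1$ and $S_2$ involve only the harmonic numbers $H_p$ and $H_p^{(2)}$, so they can be verified exactly with rational arithmetic. A small sketch (the choice $p=30$ is arbitrary):

```python
from fractions import Fraction

def H(p, r=1):
    """Generalized harmonic number H_p^{(r)}, computed exactly."""
    return sum(Fraction(1, n**r) for n in range(1, p + 1))

p = 30
S1 = sum(H(n) / (n + 1) for n in range(2, p + 1))
S2 = sum(H(n) / (n + 2) for n in range(2, p + 1))

S1_closed = Fraction(1, 2) * (H(p + 1)**2 - H(p + 1, 2) - 1)
S2_closed = Fraction(1, 6) * (3 * H(p + 2)**2 - 3 * H(p + 2, 2)
                              + Fraction(6, p + 2) - 8)
print(S1 == S1_closed, S2 == S2_closed)   # True True
```

($S_3$ involves $\psi^{(1)}$ and $\pi^2$, so an exact rational check does not apply to it.)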
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4527636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can every problem in Theory of Computation be stated as undecidable, by writing a reduction from Halting Problem? Let us consider the problem that
Whether a given Turing Machine M, has at least 481 states.
Since the number of states of M can be read off from the encoding of M. We can build a Turing machine that, given the encoding of M written on its input tape, counts the number of states of M and accepts or rejects depending on whether the number is at least 481.
(Source : Discussion in Automata and Computability by Dexter Kozen)
But, let us forget this for some time, and try to write reduction from a Halting problem.
HP # x -> Halting problem instance
We are constructing a machine 'R'
Input to R: A machine description M
What 'R' does::
First runs HP on x, if halts, then check whether M has 481 states, if yes accept, else reject.
L(R) = {
Set of all descriptions M, which has at least 481 states :::: If HP halts on x
Phi ::: If HP doesn't halt on x
}
Doesn't this hold as a reduction from Halting Problem to the problem mentioned above?
Or I am doing some wrong in the process of the reduction?
Thanks in advance!!
Just because a problem can be reduced from the halting problem does not make the original problem undecidable. A problem is decidable if there exists an algorithm that solves the problem. That means that a problem is undecidable if there does not exist any algorithm that solves it. In other words, a problem is undecidable if, for every algorithm, it is true that the algorithm does not solve the problem.
The existence of some algorithm that does not solve the problem in no way shows that the problem is undecidable.
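To make "counting the states is a terminating computation" concrete, here is a minimal sketch of such a decider. The encoding is entirely invented for illustration (a machine given as a list of transition tuples); the only point is that the procedure always halts with a correct yes/no answer, so no appeal to the halting problem is needed:

```python
def has_at_least_481_states(transitions):
    """Sketch of a decider.  The 'encoding' of M is hypothetical: a list of
    tuples (state, symbol, new_state, new_symbol, direction).  Counting the
    states that appear always terminates, so the property
    'M has at least 481 states' is decidable outright."""
    states = set()
    for (q, s, q2, s2, d) in transitions:
        states.update((q, q2))
    return len(states) >= 481

small = [(0, '0', 1, '1', 'R'), (1, '1', 0, '0', 'L')]
big = [(i, '0', i + 1, '0', 'R') for i in range(500)]
print(has_at_least_481_states(small), has_at_least_481_states(big))  # False True
```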
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4527797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Difficulty in proving that the sum of two measurable functions is a measurable function Let $(\Omega, \Sigma)$, $(\mathbb{R},\mathcal{B})$ be two measurable spaces. Let $f:\Omega\rightarrow \mathbb{R}$ and $g:\Omega\rightarrow \mathbb{R}$ be two measurable functions. In order to prove that $f+g$ is a measurable map, I need to show that
$$\{\omega\in\Omega: f(\omega)+g(\omega)< x\}\in \Sigma, \forall x\in\mathbb{R}$$
I was reading a proof, which says that:
$$\{\omega\in\Omega: f(\omega)+g(\omega)< x\}= \bigcup_{r\in\mathbb{Q}}\Big[\{\omega : f(\omega)<r\}\cap\{\omega: g(\omega)< x-r\}\Big].$$
I have no clue about how the above two quantities are equal. Can somebody simplify it?
| In the formula $$\{\omega\in\Omega: f(\omega)+g(\omega)< x\}= \bigcup_{r\in\mathbb{Q}}\Big[\{\omega : f(\omega)<r\}\cap\{\omega: g(\omega)< x-r\}\Big] \tag{*}$$
Clearly the RHS is contained in the LHS: if $f(\omega)<r$ and $g(\omega)<x-r$ for some rational $r$, then adding the two gives $f(\omega)+g(\omega)<x$.
For the converse, suppose $\omega$ is in the LHS. Write $\delta=x-f(\omega)-g(\omega)>0$ and find a rational number $r$ such that $f(\omega)<r<f(\omega)+\delta$. Then
$$ g(\omega)=x-f(\omega)-\delta<x-r \,,$$
so $\omega$ is in the RHS of $(*)$, as needed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4527926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\operatorname{Hilb}^8(\mathbb{P}^4_k)$ not irreducible (Ex. in Hartshorne's Deformation Theory book) Exercise 1.5.8 from Robin Hartshorne's Deformation Theory:
5.8. $\operatorname{Hilb}^8(\mathbb{P}^4_k)$ is not irreducible.
Consider the Hilbert scheme of zero-dimensional
closed subschemes of $\mathbb{P}^4_k$
of length $8$, the ground field $k$ is assumed to be algebraically closed. There is one component of dimension $32$ that
has a nonsingular open subset corresponding to sets of eight distinct points. (I suppose that the author uses it as nontrivial fact)
We will
exhibit another component containing a nonsingular open subset of dimension $25$.
The Exercise comprises of four parts and I have problems with the first part:
(a) Let $R := k[x, y, z,w]$, let $\mathfrak{m}$ be a maximal ideal in this ring, and let $I = V + \mathfrak{m}^3$, where
$V$ is a $7$-dimensional subvector space of $\mathfrak{m}^2/\mathfrak{m}^3$. Let $B = R/I$, and let $Z$ be the
associated closed subscheme of $\mathbb{A}^4 \subset \mathbb{P}^4 $. Show that the set of all such $Z$, as the
point of its support ranges over $\mathbb{P}^4$, forms an irreducible $25$-dimensional subset of
the Hilbert scheme $H = \operatorname{Hilb}^8(\mathbb{P}^4)$.
How to show that the "set" of the $Z$'s as defined in (a) is irreducible?
Let call it $S \subset H$. The Hilbert scheme $H$ is constructed as closed subscheme of the Grassmanian defined by the vanishing of various determinants and is therefore we can endow the "set" $S$ as subscheme of $H$ with unique reduced scheme structure.
On the set level / on $k$-valued points $S(k)$ we can define canonically the map $p(k): S(k) \to \mathbb{P}^4(k)$ sending $Z$ the the unique maximal ideal $\mathfrak{m}_Z \subset k[x, y, z,w]$ associated to it as described in the construction above.
How can this idea be converted into a 'honest' map $p:S \to \mathbb{P}^4$? As soon as it is possible to construct such map $p$ we can use a result (reference ?) that for a proper surjective map $f: X \to Y$ with $Y$ and all fibers irreducible of same dimension, the scheme $X$ is irreducible, too.
Therefore the question reduces to 'How to construct $p:S \to \mathbb{P}^4$ from set map $p(k): S(k) \to \mathbb{P}^4(k)$?'
In addition note that that's just my suggestion how roughly I wanna to tackle this exercise. Maybe there are more effective ways to do it. All suggestions for alternative approaches are of course welcome!
| Honestly, I have no idea what you are on about connections and such…
Let $(a_{i,j})$ be an arbitrary $7\times 3$ matrix of scalars, and let $f_1,\dots,f_7$ be the linear combinations of monomials indicated in the rows of the following table:
$$\begin{array}{*{12}{c}}
x^2 & y^2 & z^2 & w^2 & xy & xz & xw & yz & yw & zw \\ \hline
1 & & & & & & & a_{1,1} & a_{1,2} & a_{1,3} \\
& 1 & & & & & & a_{2,1} & a_{2,2} & a_{2,3} \\
& & 1 & & & & & a_{3,1} & a_{3,2} & a_{3,3} \\
& & & 1 & & & & a_{4,1} & a_{4,2} & a_{4,3} \\
& & & & 1 & & & a_{5,1} & a_{5,2} & a_{5,3} \\
& & & & & 1 & & a_{6,1} & a_{6,2} & a_{6,3} \\
& & & & & & 1 & a_{7,1} & a_{7,2} & a_{7,3} \\
\end{array}$$
Now let $a$, $b$, $c$, $d$ be four scalars and consider the ideal generated by the $7$ polynomials
$$
f_1(x-a,y-b,z-c,w-d), \dots, f_7(x-a,y-b,z-c,w-d)
$$
and all the polynomials $(x-a)^i(y-b)^j(z-c)^k(w-d)^l$ with $i+j+k+l=3$.
This gives you a $25$-dimensional family of ideals of colength $8$, parametrized by a point in $k^4\times M_{7,3}(k)$.
Viewing the entries of the matrix and the coordinates of the point $(a,b,c,d)$ as variables now, the ideals generated by those polynomials in $k[x,y,z,w,a,b,c,d,a_{1,1},\dots,a_{7,3}]$ define a subscheme $Z$ in $k^4\times M_{7,3}(k)\times k^4$. The restriction to $Z$ of the map $p:k^4\times M_{7,3}(k)\times k^4\to k^4\times M_{7,3}(k)$ projecting onto the first two factors is a map $Z\to k^4\times M_{7,3}(k)$ which is a flat family of subschemes of $k^4$, the fibers of $p$. The universal property of the Hilbert scheme then tells you that to this flat family corresponds a regular map into the Hilbert scheme.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4528284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Doubts in solving $\int_0^1 x\sqrt{x+2} dx$ I was solving the following integral and got stuck in finding the new limits after the substitution. $$\int_0^1 x\sqrt{x+2} dx$$
Here's my work so far:
Putting $(x+2) = t^2$ so that $dx = 2t\ dt$ and $x = t^2 - 2$
Thus the original integral changes to,
$$\int_{?}^{?} (t^2 - 2) (t) (2t )\ dt = \boxed{2\int_{?}^{?} t^4 - 2t^2\ dt}$$
The above boxed integral can be solved easily using power rule of integral but I'm totally out in finding the limits.
When $x =0, t^2= 2$ and when $x = 1, t^2 = 3$ so $t$ is going from $\pm \sqrt{2}$ to $\pm \sqrt{3}$.
So we need to evaluate the integral $$2\int_{t = \pm \sqrt{2}}^{t = \pm \sqrt{3}} t^4 - 2t^2\ dt$$
which is not meaningful I think.
So, I need help in finding the sign convention of the limits of the integral.
Here’s my take on the problem: When you write $t^2=x+2$ and then substitute $\sqrt{x+2}$ with $t$, you are essentially writing that $$\sqrt {t^2}=t,$$ implying that $$|t|=t,$$ which means that $t\geq0$. Thus I would prefer the limits be from $\sqrt2$ to $\sqrt3$.
As a side note, the problem can be circumvented by assuming $t=\sqrt{x+2}$, which is an equivalent substitution but easily tells us that $t\geq0$.
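As a numerical cross-check (a plain-Python sketch) that the limits $\sqrt2$ to $\sqrt3$ with $t\geq0$ are the right choice, compare the original integral with the antiderivative of the substituted integrand evaluated between those limits:

```python
import math

# Left-hand side: the original integral of x*sqrt(x+2) over [0,1], midpoint rule.
N = 200000
orig = sum(((k + 0.5) / N) * math.sqrt((k + 0.5) / N + 2) for k in range(N)) / N

# Right-hand side: antiderivative of 2(t^4 - 2t^2), from sqrt(2) to sqrt(3).
def F(t):
    return 2 * (t**5 / 5 - 2 * t**3 / 3)

sub = F(math.sqrt(3)) - F(math.sqrt(2))
print(orig, sub)   # both approximately 0.8157
```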
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4528442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Papa Rudin theorem $1.40$ There is the theorem:
Suppose $\mu(X) < \infty$, $f \in L^1(\mu)$, $S$ is a closed set in the complex plane, and the averages $$A_E(f) = \frac{1}{\mu(E)} \int_E f \, d\mu$$ lie in $S$ for every $E \in \mathfrak M$ with $\mu(E)>0$. Then $f(x)\in S$ for almost all $x \in X$.
There is the proof:
Let $Δ$ be a closed circular disc (with center at $\alpha$ and radius $r>0$, say) in the complement of $S$. Since $S^{c}$ is the union of countably many such discs, it is enough to prove that $\mu(E)=0$, where $E=f^{-1}(Δ)$.
I don't understand why it is enough to prove that $\mu(E)=0$, and I also don't understand why we took $E=f^{-1}(Δ)$.
Any help would be appreciated.
| You want to show that $f(x) \in S$ for almost every $x \in X$. This is equivalent to saying that the set $$\{x \in X : f(x) \in S^c\} = f^{-1}(S^c)$$ has measure $0$. To show this, he decomposes $S^c$ in a countable union of disks $\Delta$ and shows that each of the $f^{-1}(\Delta)$ have measure $0$. Then, by $\sigma$-additivity, $f^{-1}(S^c)$ also has measure $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4528602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all possible values of $a$ such that $4x^2-2ax+a^2-5a+4>0$ holds $\forall x\in (0,2)$
Problem: Find all possible values of $a$ such that
$$4x^2-2ax+a^2-5a+4>0$$
holds $\forall x\in (0,2)$.
My work:
First, I rewrote the given inequality as follows:
$$
\begin{aligned}f(x)&=\left(2x-\frac a2\right)^2+\frac {3a^2}{4}-5a+4>0\end{aligned}
$$
Then, we have
$$
\begin{aligned}
0<x<2\\
\implies -\frac a2<2x-\frac a2<4-\frac a2\end{aligned}
$$
Case 1: $\,a≥0 \wedge 4-\frac a2≤0$.
This leads,
$$
\begin{aligned}\frac {a^2}{4}>\left(2x-\frac a2\right)^2>\left(4-\frac a2\right)^2\\
\implies \frac {a^2}{4}+\frac {3a^2}{4}-5a+4>f(x)>\left(4-\frac a2\right)^2+\frac {3a^2}{4}-5a+4\end{aligned}
$$
For $f(x)>0$, it is enough to take $\left(4-\frac a2\right)^2+\frac {3a^2}{4}-5a+4>0$ with the restriction $a≥0\wedge 4-\frac a2≤0$.
Case 2: $\,a≤0 \wedge 4-\frac a2≥0$.
We have:
$$
\begin{aligned}\frac {a^2}{4}<\left(2x-\frac a2\right)^2<\left(4-\frac a2\right)^2\\
\implies \frac {a^2}{4}+\frac {3a^2}{4}-5a+4<f(x)<\left(4-\frac a2\right)^2+\frac {3a^2}{4}-5a+4\end{aligned}
$$
Similarly, for $f(x)>0$, it is enough to take $\frac{a^2}{4}+\frac {3a^2}{4}-5a+4>0$ with the restriction $a≤0\wedge 4-\frac a2≥0$.
Case 3: $\,a≥0 \wedge 4-\frac a2≥0$.
This case implies $\left(2x-\frac a2\right)^2≥0$. This means $f(x)≥\frac {3a^2}{4}-5a+4$.
Thus, for $f(x)>0$, it is enough to take $\frac {3a^2}{4}-5a+4>0$ with the restriction $a≥0\wedge 4-\frac a2≥0$.
Finally, we have to combine all the solution sets we get.
I haven't done the calculation, because I want to make sure that the method I use is correct. Do you see any flaws in the method?
| Suggestion
Case 1, $a≥0 \wedge 4-\frac a2≤0$, simply means $a \ge 8$.
After rewriting the function in vertex form, plugging in some valid values of $a$ (like $8$, $10$, etc.) gives a graph lying in quadrant I for $x$ in $(0, 2)$. This means the function is greater than $0$ for all $x$ in that domain under that restriction.
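A numerical exploration (emphatically not a proof, and the grid resolution is an arbitrary choice) shows which values of $a$ survive; it confirms, for instance, that every $a\le 0$ and every $a\ge 8$ works, while intermediate values such as $a=3$ fail:

```python
# Numerical exploration, not a proof: scan candidate values of a and test
# f(x) = 4x^2 - 2ax + a^2 - 5a + 4 > 0 at interior grid points of (0, 2).
def holds(a, M=4000):
    return all(4 * x * x - 2 * a * x + a * a - 5 * a + 4 > 0
               for x in (2 * k / M for k in range(1, M)))

good = [a / 10 for a in range(-30, 101) if holds(a / 10)]
print(good[0], good[-1])                 # both tails of a values survive the scan
print(holds(0.5), holds(3), holds(8))    # True False True on this grid
```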
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4528746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 11,
"answer_id": 6
} |
Subseries of $p$-series Let $A \subseteq \mathbb{N}$. Let $0 \leq p \leq 1$ be the unique number such that $\sum_{n \in A} n^{-p-\epsilon}$ converges and $\sum_{n \in A} n^{-p+\epsilon}$ diverges for every $\epsilon > 0$.
Can we find, for every $0 \leq q < p$, a set $B_q \subseteq A$ such that $\sum_{n \in B_q} n^{-q-\epsilon}$ converges and $\sum_{n \in B_q} n^{-q+\epsilon}$ diverges for every $\epsilon > 0$?
| For any set $A \subset \mathbb{N}$, let $A(n) := A \cap \{\ell \in \mathbb{N} \colon 2^n \leq \ell < 2^{n+1}\}$. Then, for $\ell \in A(n)$ and $\alpha \in \mathbb{R}$, we have $2^{-\alpha} 2^{-\alpha n} \leq \ell^{-\alpha} \leq 2^{-n \alpha}$, and this easily implies
$$
\sum_{\ell \in A}
\ell^{-\alpha}
= \sum_{n=1}^\infty \,\, \sum_{\ell \in A(n)} \ell^{-\alpha}
\asymp \sum_{n=1}^\infty 2^{-\alpha n} \#A(n)
.
\tag{$\ast$}
$$
Now, define (somewhat similar to the formula for the radius of convergence of a power series)
$$
p^* (A)
:= \limsup_{n\to\infty} \frac{\log_2 (\#A(n))}{n}
,
$$
noting that $0 \leq p^\ast(A) \leq 1$ (why?!).
I now claim that
$$
\sum_{\ell \in A} \ell^{-(p^\ast(A) + \epsilon)}
< \infty
\quad \text{and} \quad
\sum_{\ell \in A} \ell^{-(p^\ast(A) - \epsilon)}
= \infty
\qquad \forall \, \epsilon > 0
.
\tag{$\lozenge$}
$$
To see this, note that there exists $N_0$ such that $\frac{\log_2(\# A(n))}{n} - p^*(A) \leq \frac{\epsilon}{2}$ for all $n \geq N_0$ and thus
$$
\log_2 \big( 2^{-(p^*(A) + \epsilon/2) n} \cdot \# A(n) \big)
= n \cdot \Big( \frac{\log_2 (\# A(n))}{n} - p^*(A) - \epsilon/2 \Big)
\leq 0
,
$$
which means that $2^{-(p^*(A) + \epsilon/2) n} \cdot \# A(n) \leq 1$ for all $n \geq N_0$. This easily implies that $\sum_{n=1}^\infty 2^{-(p^* (A) + \epsilon)n} \# A(n) < \infty$. By $(\ast)$, this implies that $\sum_{\ell \in A} \ell^{-(p^\ast(A) + \epsilon)} < \infty$, proving the first part of $(\lozenge)$.
To prove the second part, we show that actually $2^{-(p^*(A) - \epsilon) n} \# A(n) \not\to 0$, which easily implies $\sum_{n=1}^\infty 2^{-(p^*(A) - \epsilon) n} \# A(n) = \infty$, which by $(\ast)$ will imply the second part of $(\lozenge)$.
To show $2^{-(p^*(A) - \epsilon) n} \# A(n) \not\to 0$, assume towards a contradiction that $2^{-(p^*(A) - \epsilon) n} \# A(n) \to 0$. Thus, there exists $C > 0$ satisfying $2^{-(p^*(A) - \epsilon) n} \# A(n) \leq C$ and thus
$$
\log_2 (\# A(n)) - (p^*(A) - \epsilon) n \leq \log_2 (C)
.
$$
Hence,
$$
\frac{\log_2 (\# A(n))}{n} \leq p^*(A) - \epsilon + \frac{\log_2(C)}{n}
,
$$
which is impossible by definition of $p^*(A)$. This completes the proof of $(\lozenge)$.
Finally, let $0 \leq q < p^* (A)$ be arbitrary. For each $n \in \mathbb{N}$, pick $B_n \subset A(n)$ with $\# B_n = \min \{ \# A(n), \lceil 2^{q n}\rceil \}$ and set $B = \bigcup_{n=1}^\infty B_n$. It is then not hard to see
$$
p^*(B)
= \limsup_{n\to\infty}
\frac{\log_2 (\# B(n))}{n}
= \limsup_{n\to\infty}
\min \Big\{ \frac{\log_2 (\# A(n))}{n}, \frac{\log_2 (\lceil 2^{qn}\rceil)}{n} \Big\}
= \min \{ p^*(A), q\}
= q
.
$$
By the above considerations, this easily shows that $B$ is as required.
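As a concrete sanity check on the definition of $p^*(A)$ (a small Python sketch): for $A$ the set of perfect squares, $\sum_{\ell\in A}\ell^{-\alpha}=\sum_k k^{-2\alpha}$ has convergence exponent $\frac12$, and indeed the dyadic count estimate $\log_2(\#A(n))/n$ approaches $\frac12$:

```python
import math

def count_in_block(n):
    """#A(n) for A = perfect squares: the number of squares k^2
    with 2^n <= k^2 < 2^(n+1)."""
    return math.isqrt(2**(n + 1) - 1) - math.isqrt(2**n - 1)

for n in (10, 30, 60):
    print(n, math.log2(count_in_block(n)) / n)   # tends to p* = 1/2
```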
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4529008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
iterated square roots convergence I suspect that
$$
\forall (x, a) \in \mathbb{N}_*^2,\ \lim_{k \rightarrow \infty} \underbrace{\sqrt{x + \sqrt{x + \sqrt{x + \sqrt{\dots \sqrt{x + a}}}}}}_k > \frac{x}{2}
$$
However, I have no idea how to prove it.
| Define
$$f(x)=\lim_{k \rightarrow \infty} \underbrace{\sqrt{x + \sqrt{x + \sqrt{x + \sqrt{\dots \sqrt{x + a}}}}}}_k$$
Assuming that $f(x)$ converges,
\begin{align}
f(x)&=\sqrt{x+f(x)}\\
\left\{f(x)\right\}^2&-f(x)-x=0\\
f(x)&=\frac{1+\sqrt{1+4x}}{2}
\end{align}
And then
$$f(x)\le\frac x2\quad(x\ge6),$$
so the conjectured strict inequality fails for every $x\ge6$.
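The fixed-point formula is easy to probe numerically (a small sketch): iterating the radical shows the limit is independent of $a$ and agrees with $\frac{1+\sqrt{1+4x}}{2}$, which equals $\frac x2$ exactly at $x=6$ and falls below it beyond:

```python
import math

def nested(x, a, k):
    """k-fold iterate sqrt(x + sqrt(x + ... + sqrt(x + a))), innermost term a."""
    v = a
    for _ in range(k):
        v = math.sqrt(x + v)
    return v

for x, a in [(1, 1), (6, 10), (20, 1)]:
    fixed = (1 + math.sqrt(1 + 4 * x)) / 2
    # the iterate matches `fixed`; `fixed - x/2` is <= 0 once x >= 6
    print(x, a, nested(x, a, 80), fixed, fixed - x / 2)
```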
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4529209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $e^z$ is 1-1 onto from $\{x+iy: 0<y<\pi\}$ onto $\{u+iv: v>0\}$ and find images under $\{x+iy: \text{$x$ is constant}\}$ and $y$ constant. Show $e^z$ is 1-1 onto from $\{x+iy: 0<y<\pi\}$ onto $\{u+iv : v>0\}$ and find images under $\{x+iy: \text{$x$ is constant}\}$ and $\{x+iy: \text{$y$ is constant}\}$.
For $1-1$ do I consider say $z_1=x_1+iy_1, z_2 = x_2+iy_2$ such that either $x_1 \neq x_2$ or $y_1 \neq y_2$? Then compute $e^{z_1}$ and $e^{z_2}$ and show they're not equal? And for onto how do I show every element of the upper half plane gets hit by $e^z$?
For the second part I know $x$ is the radius and $y$ is the angle so is it asking what the image is when the radius is fixed and when the angle is fixed? Any help greatly appreciated.
| For completeness here's a solution following my comments.
Suppose $z_1=x_1+iy_1,z_2=x_2+iy_2\in\{x+iy:y\in(0,\pi)\}$. Then $e^{z_1}=e^{z_2}$ gives
$$
e^{x_1}e^{iy_1}=e^{x_2}e^{iy_2}
$$
with both sides in polar form. Since polar form is unique up to adding integer multiples of $2\pi$ to the argument, we deduce $x_1=x_2$ and $y_1=y_2+2k\pi$ for some integer $k$. But $\lvert y_1-y_2\rvert < \pi$, so $y_1=y_2$ and injectivity follows.
Now write $e^z=e^xe^{iy}$ for $z$ in range, in polar form. As $x$ ranges over $\mathbb{R}$, $e^x$ (our polar radius) ranges over $(0,\infty)$. As $y$ ranges over $(0,\pi)$, our polar angle ranges over $(0,\pi)$. This is enough to deduce that the mapping is onto to the upper-half plane.
It should be clear from this construction that fixing $x$ and varying $y$ gives the upper-half semicircle of radius $e^x$ with centre at the origin, and fixing $y$ and varying $x$ gives the half-line $\arg{z}=y$.
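Both directions (injectivity into the strip, surjectivity onto the upper half-plane) can be illustrated numerically: for $w$ with $\operatorname{Im} w>0$, the explicit preimage $z=\ln\lvert w\rvert+i\arg w$ lies in the strip $0<\operatorname{Im}z<\pi$ and satisfies $e^z=w$. A quick sketch using Python's `cmath`, whose principal logarithm is exactly this preimage:

```python
import cmath
import math
import random

random.seed(0)
for _ in range(1000):
    # a random point in the upper half-plane {u+iv : v > 0}
    w = complex(random.uniform(-10, 10), random.uniform(0.01, 10))
    # principal log: z = ln|w| + i*arg(w), and arg(w) is in (0, pi) when Im w > 0
    z = cmath.log(w)
    assert 0 < z.imag < math.pi                   # z lies in the strip
    assert abs(cmath.exp(z) - w) < 1e-9 * abs(w)  # and exp maps it back to w
print("every sampled w in the upper half-plane has a preimage in the strip")
```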
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4529511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Can a smooth manifold be embedded into its tangent bundle? Given a smooth $n$-dimensional manifold $M$, one can always find an immersion into its tangent bundle $TM$ by looking at the zero-section, i.e. the map that sends $p\in M$ to $(p,0)\in TM$. One can see this is an immersion for example by writing everything in local coordinates and checking that this is locally an inclusion.
Can it be proved that this is also an embedding? (i.e. a homeomorphism onto its image). It seems intuitively clear to me that this should be true because "the tangent bundle has a copy of $M$ inside" but I can't find a way to prove or disprove it. Could you help me?
| (EDIT: This is false! leaving it up cause there's a useful comment.) Let $\phi: X \to Y$ be an injective map of topological spaces which you wish to check is a homeomorphism onto its image. You can check this locally: if $\{U_i\}$ is a cover of $X$ such that $\phi|_{U_i}$ is a homeomorphism onto its image for all $i$, then $\phi$ is a homeomorphism onto its image.
This fact works out great in your case, where $X$ is a manifold, since you can take $\{U_i\}$ to be a coordinate chart, and you are reduced to proving that $\mathbb{R}^n$ is homeomorphic to its image in its (trivial) tangent bundle $\mathbb{R}^n \times \mathbb{R}^n$, which is true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4529618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Functions of One or Two Variables I know that a function of two variables is written in the form $f(x,y)$ = ....., where $x$ and $y$ don't have to appear explicitly.
The function $z$ = $93x^5 + 2y - 7x$, is that a function of one or two variables ? I know the domain must be a subset of the $x-y$ plane (real axis), so I would say that it is a function of two variables.
| Yes your answer is correct, $z$ is a (real-valued) function of two variables indeed:
*
*the value for $z$ is determined by the two variables $x$ and $y$, that is $z=z(x,y)$,
*and moreover to any pair $(x,y)$ there corresponds one and only one value
for $z$.
Both conditions are crucial for the definition of a function.
As a third ingredient, we also need to specify its domain and codomain, as for example (without restriction for the domain):
$$z: (x,y)\in \mathbb R^2 \to 93x^5 + 2y - 7x \in\mathbb R$$
Note that in this case the codomain corresponds also to the range.
Refer also to the related:
*
*What is a function?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4529770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Connectivity of random regular multigraphs For even integers $n$ and integers $3\le r < n$, is it true that a random $r$-regular multigraph on $n$ labelled vertices, obtained as the union of $r$ independent uniformly random perfect matchings, is asymptotically almost surely connected?
I know this is true if one conditions on there being no multiple edges, but does connectivity still hold without this condition?
| In any perfect matching, there are $\binom{n/2}{k}$ vertex sets of size $2k$ with no edges to their complement: just take any $k$ edges. To phrase this probabilistically: a fixed set of $2k$ vertices has a probability of exactly $\frac{\binom{n/2}{k}}{\binom{n}{2k}}$ of having no edges to its complement in a uniformly random perfect matching. So the expected number of such sets in a random union of three matchings is
$$
\sum_{k=1}^{n/2-1} \binom{n}{2k} \left(\frac{\binom{n/2}{k}}{\binom{n}{2k}}\right)^3 = \sum_{k=1}^{n/2-1} \frac{\binom{n/2}{k}^3}{\binom{n}{2k}^2}.
$$
The $k=1$ term of this sum is $\frac{(n/2)^3}{\binom n2^2} = O(\frac1n)$. The $k=2$ term is $O(\frac1{n^2})$.
To deal with the other terms, figure out the ratio between consecutive terms. We have
$$
\frac{\binom{n/2}{k+1}}{\binom{n/2}{k}} = \frac{n/2-k}{k+1} \qquad \text{ and } \qquad \frac{\binom n{2k+2}}{\binom n{2k}} = \frac{(n-2k)(n-2k-1)}{(2k+1)(2k+2)}
$$
and therefore
$$
\frac{\binom{n/2}{k+1}^3 / \binom{n}{2k+2}^2}{\binom{n/2}{k}^3 / \binom n{2k}^2} = \frac{(n/2-k)(2k+1)^2}{(n-2k-1)^2(k+1)} < \frac{(n-2k)(2k+1)}{(n-2k-1)(n-2k-1)}
$$
which is less than $1$ provided that $2k+1 \le n-2k-2$, or $4k+3 \le n$.
We could get worried at that point, except that the sum is symmetric around $k = \frac n4$, and when $k$ is close to $\frac n4$, the ratio $\binom{n/2}{n/4}^3 / \binom{n}{n/2}^2$ is exponentially small: close to $2^{-n/2}$. So the first and last terms are $O(\frac1n)$, and the other $n$ terms are $O(\frac1{n^2})$ at worst; therefore the whole sum is $O(\frac1n)$.
In particular, the expected number of such sets is $O(\frac1n)$, so whp there are no such sets, and the multigraph is connected.
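The expectation sum is easy to evaluate exactly for moderate $n$, which illustrates the claimed $O(\frac1n)$ decay. A small sketch using `math.comb`:

```python
from math import comb

def expected_cut_sets(n):
    """Expected number of nonempty proper vertex subsets (necessarily of even
    size 2k) with no edge to their complement, in a union of three independent
    uniformly random perfect matchings on n (even) vertices."""
    return sum(comb(n // 2, k)**3 / comb(n, 2 * k)**2 for k in range(1, n // 2))

for n in (20, 40, 80, 160):
    print(n, expected_cut_sets(n))   # decays as n grows, consistent with O(1/n)
```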
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4530001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two presentations of the Klein bottle In Hatcher's Algebraic Topology, Chapter 1, page 51, he describes two presentations of the Klein bottle:
The first one is the usual one, a square with opposite sides identified via the word $aba^{−1}b$, then Hatcher says that if one cuts the square along a diagonal and reassembles the resulting two triangles as shown in the figure, one obtains the other representation as a square with sides identified via the word $a^2c^2$.
Question: Does this process, which involves cutting and gluing, automatically imply the two presentations are equivalent?
| Just to summarize the comments:
The Klein bottle is obtained from two triangles with edges identified as follows:
We recognize the Klein bottle solely by the cycles on the boundaries of the two triangles, which are $cab$ and $cb^{-1}a$. It doesn't matter how we arrange the two triangles: they can be kept separate as above, or glued along the two $a$'s, the two $b$'s, or the two $c$'s.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4530101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why are absolutely continuous measures called that way? Let $X\subseteq \Bbb{R}^n$ and $\mathcal{B}(X)$ the Borel $\sigma$-algebra of $X$. Let $\mu$ and $\nu$ be two measures on $(X,\mathcal{B}(X))$. We say that $\mu$ is absolutely continuous with respect to $\nu$ (denoted by $\mu\ll \nu$) if $\forall A\in\mathcal{B}(X),\; \nu(A)=0\implies \mu(A)=0$.
Generally, continuity refers to some sort of smoothness of a function: small variations in the domain give small variations in the co-domain. I don't see how this definition fits that description, which leads me to wonder: why are absolutely continuous measures called that way? I'm looking for either a historical answer (the reason why it started being called that way) or an answer that appeals to the definition itself (something in the definition which makes the name reasonable).
| Comment. I guess you would need to consult G. Vitali. (1905, coined the term "absolutely continuous function"; later "absolutely continuous measure" was modeled on that.)
From Mathwords:
ABSOLUTE CONTINUITY. The concept was introduced in 1884 by E. Harnack "Die allgemeinen Sätze über den Zusammenhang der Functionen einer reellen Variabelen mit ihren Ableitungen. II. Theil." Math. Ann. 21, (1884), 217-252. The term was introduced in 1905 by G. Vitali. In the meantime several mathematicians had used the concept. See T. Hawkins Lebesgue's Theory of Integration.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4530247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Whether a $n-1$ dimensional subspace of $\mathbb{F}_{q^n}$ can be written as a multiple of the kernel of the trace map Let $q$ be a prime power and $n$ be a positive integer. Let $U$ be a $\mathbb{F}_q$-vector subspace of the finite field $\mathbb{F}_{q^n}$ of dimension $n-1$. Let $V$ be the kernel of the trace map $Tr: \mathbb{F}_{q^n} \rightarrow \mathbb{F}_q$. It is known that $V$ is also a vector subspace of dimension $n-1$. Then can we say that $U=aV$ for some non-zero $a\in\mathbb{F}_{q^n}$?
| Yes, simply say that the $x\mapsto Tr(ax)$ give $q^n$ distinct $\Bbb{F}_q$-linear maps $\Bbb{F}_{q^n}\to \Bbb{F}_q$, so they represent all of them.
Take an $\Bbb{F}_q$-linear isomorphism $h:\Bbb{F}_{q^n}/U \to \Bbb{F}_q$ (viewed, via the quotient map, as a linear map on $\Bbb{F}_{q^n}$ with kernel $U$); then $h = Tr(a\,\cdot)$ for some $a\in \Bbb{F}_{q^n}^*$, so $U=\ker h =\ker(Tr(a\,\cdot))=a^{-1} \ker(Tr)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4530429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Different definitions of ring of sets I am studying measure theory now and I have encountered three different definitions of ring of sets (from different sources), here is the list of them:
*
*$\mathcal{R}$ is called ring of sets if $\mathcal{A}, \mathcal{B} \in \mathcal{R}$ implies $\mathcal{A}\cup\mathcal{B} \in \mathcal{R}$ and $\mathcal{A}\cap\mathcal{B} \in \mathcal{R};$
*$\mathcal{R}$ is called ring of sets if $\mathcal{A}, \mathcal{B} \in \mathcal{R}$ implies $\mathcal{A}\cup\mathcal{B} \in \mathcal{R}$ and $\mathcal{A}\setminus\mathcal{B} \in \mathcal{R};$
*$\mathcal{R}$ is called ring of sets if $\mathcal{A}, \mathcal{B} \in \mathcal{R}$ implies $\mathcal{A}\cap\mathcal{B} \in \mathcal{R}$ and $\mathcal{A}\;\triangle\;\mathcal{B} \in \mathcal{R};$
It was easy for me to prove that the $2$ and the $3$ are equivalent (if we have $\mathcal{A}\cup\mathcal{B} \in \mathcal{R}$ and $\mathcal{A}\setminus\mathcal{B} \in \mathcal{R}$ then we have $\mathcal{A}\cap\mathcal{B} \in \mathcal{R}$ and $\mathcal{A}\;\triangle\;\mathcal{B} \in \mathcal{R}$ and vice versa), however, it seems like the $1$ isn't equivalent to them. From standard identities of set theory it looks like using only $\cap, \cup$ we can't obtain $\triangle, \setminus$. But I have a problem in providing a more rigorous proof.
May be there are some examples of collection of sets which satisfy the $1$ definition but not the $2$ (or vice versa)? Or may be there is some general proof?
| (Sorry this is an answer; I don't have enough reputation to comment.)
To my understanding the first definition is used in order theory and the latter two in measure theory.
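A concrete family separating the definitions can be checked mechanically. The family below is my own illustrative choice (not from the answer): it is closed under union and intersection (definition 1) but not under difference or symmetric difference (definitions 2 and 3).

```python
# {∅, {1}, {1,2}} is closed under ∪ and ∩, but {1,2} \ {1} = {2} and
# {1} △ {1,2} = {2} are not in the family, so definitions 2 and 3 fail.
R = [frozenset(), frozenset({1}), frozenset({1, 2})]

def closed_under(family, op):
    return all(op(a, b) in family for a in family for b in family)

union = lambda a, b: a | b
inter = lambda a, b: a & b
diff = lambda a, b: a - b
symdiff = lambda a, b: a ^ b

assert closed_under(R, union) and closed_under(R, inter)   # definition 1 holds
assert not closed_under(R, diff)                           # definition 2 fails
assert not closed_under(R, symdiff)                        # definition 3 fails
```

So the chain, as well as any other family closed under $\cup,\cap$ but not under $\setminus$, witnesses that definition 1 is strictly weaker.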
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4530671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\left|\frac{x}{1+x^2}\right| \leq \frac{1}{2}$ Prove that $$\left|\frac{x}{1+x^2}\right| \leq \frac{1}{2}$$ for any number $x$.
My attempt:
$$\left|\frac{x}{1+x^2}\right| \leq \frac{1}{2} \\ \iff \frac{x}{1+x^2} \geq -\frac{1}{2} \land \frac{x}{1+x^2} \leq \frac{1}{2}$$ $$\iff (x+1)^2 \geq 0 \land (x-1)^2 \geq 0$$ the last two inequalities are obviously true, which concludes my proof attempt.
Not sure if this is a correct way to prove the inequality, also it's clearly not very elegant.
Could someone please verify my solution, and maybe suggest a more elegant or efficient approach?
| Your solution looks fine. As an alternative, by a single inequality, we have
$$\left|\frac{x}{1+x^2}\right| \leq \frac{1}{2} \iff \left|1+x^2\right|\ge 2|x|
\iff x^4-2x^2+1\ge 0\iff (x^2-1)^2 \ge 0$$
or also
$$\left|\frac{x}{1+x^2}\right| \leq \frac{1}{2} \iff 1+x^2\ge 2|x|
\iff x^2-2|x|^2+1\ge 0\iff (|x|-1)^2 \ge 0$$
Another way by AM-GM
$$\frac{1+x^2}{2}\ge \sqrt{x^2}=|x|$$
Another way, by $x=\tan \theta$ we have
$$\left|\frac{x}{1+x^2}\right|= \left|\frac{\tan \theta}{1+\tan^2 \theta}\right|=\frac12|\sin 2\theta|\le \frac12$$
Another way, by rearrangement
$$\left|\frac{1+x^2}{x}\right|=\frac{1+x^2}{|x|}=\frac1{|x|}\cdot 1+|x|\cdot 1\ge \frac1{|x|}\cdot |x|+1\cdot 1= 2$$
Another one
$$\left|\frac{x}{1+x^2}\right|\le \frac12 \iff \frac{2|x|}{(|x|-1)^2+2|x|}\le 1$$
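As a numerical sanity check of the inequality (an editorial addition, not one of the proofs above), a grid evaluation confirms the bound, with equality exactly at $x=\pm1$:

```python
# Check |x / (1 + x^2)| <= 1/2 on a grid, with equality attained at x = ±1.
xs = [i / 100 for i in range(-500, 501)]
assert all(abs(x / (1 + x * x)) <= 0.5 + 1e-12 for x in xs)
assert abs(1 / (1 + 1 ** 2)) == 0.5
assert abs(-1 / (1 + (-1) ** 2)) == 0.5
```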
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4530875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\sum_{k=a}^{15} \frac{k!}{(k-a)!}$ for $0 \leq a \leq 15$ I am trying to simplify the sum $\sum_{k=a}^{15} \frac{k!}{(k-a)!}$ for a given integer $a \in [0, 15]$.
Plugged into WolframAlpha, I see that the expression is equivalent to $\frac{16!}{(a+1) \cdot (15 - a)!}$ but I have no clue as to how to get there.
Any help would be appreciated!
| In general: $$\sum_{k=a}^n\binom{k}{a}=\binom{n+1}{a+1}$$
(For a proof see here)
As @Jean Marie remarked this is the so-called hockey stick identity.
Applying that we find:$$\sum_{k=a}^{15}\frac{k!}{\left(k-a\right)!}=a!\sum_{k=a}^{15}\binom{k}{a}=a!\binom{16}{a+1}=\frac{16!}{\left(a+1\right)\left(15-a\right)!}$$
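A direct computational check of the closed form (my own addition; the proof is the hockey stick identity above) confirms it for every $a \in [0,15]$:

```python
from math import factorial

# Verify sum_{k=a}^{15} k!/(k-a)! == 16! / ((a+1) * (15-a)!) for 0 <= a <= 15.
def lhs(a):
    return sum(factorial(k) // factorial(k - a) for k in range(a, 16))

def rhs(a):
    return factorial(16) // ((a + 1) * factorial(15 - a))

for a in range(16):
    assert lhs(a) == rhs(a)
```

All divisions here are exact because $a!\binom{16}{a+1}=\frac{16!}{(a+1)(15-a)!}$ is an integer.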
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4531110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to prove that the sum of reciprocals of one plus perfect powers is $\frac{\pi^{2}}{3}-\frac{5}{2}$ Let $S$ be Set of perfect powers without duplicates $1,4,8,9,\dots$ (http://oeis.org/A001597 ) How to prove the following?
$$\sum_{s\in S}\frac{1}{s+1}=\frac{\pi^{2}}{3}-\frac{5}{2}$$ (starting with $s=4$ ) I found this formula in the book "Mathematical Constants" by Steven R. Finch on page 113.
| Let $S$ be the set of perfect powers without $1$ . That is, $S = \{4,8,9,16,25,27,\ldots\}$. Consider the sum $$
S_{N} = \sum_{s \in S} \frac 1{s^{N}-1}
$$
which converges for $N\geq 2$ because $\sum_{n=1}^\infty \frac 1{n^N}$ converges, and the above series is termwise dominated by a constant multiple of it. For $N=1$, a separate argument can be made by "working backwards" from what we do below, so I won't really emphasize that point.
We can evaluate this sum using some clever ideas. The first is to consider the set of non-powers $T$ (insist on $1 \notin T$) and its relation to $S$. Of course it is the complement of $S$, but there is a deeper relation.
Indeed, let $s \in S$. We can find $k \geq 2$ such that $s$ is a perfect $k$-th power. Let $K$ be the largest number such that $s$ is a perfect $K$-th power. Then, $s^{\frac 1K}$ is a positive integer that has to be a non-power by maximality of $K$. Thus, every $s \in S$ is uniquely of the form $t^K$ where $K \geq 2$ and $t \in T$. On the other hand, if $t \in T$ and $K \geq 2$, obviously $t^K \in S$. Therefore, we may write $$
\sum_{s \in S} \frac 1{s^N-1} = \sum_{K \geq 2} \sum_{t \in T} \frac{1}{t^{NK}-1}
$$
Now we use a very nice trick : the identity $\frac{1}{n-1} = \frac 1{n} + \frac 1{n(n-1)}$ gives that $$
\sum_{K \geq 2} \sum_{t \in T} \frac{1}{t^{NK}-1} = \sum_{K \geq 2} \sum_{t \in T} \frac 1{t^{NK}} + \sum_{K \geq 2} \sum_{t \in T} \frac 1{t^{NK}(t^{NK} - 1)} \tag{*}
$$
However, observe that $$
\sum_{K \geq 2} \sum_{t \in T} \frac 1{t^{NK}} = \sum_{t \in T} \sum_{ K \geq 2} \frac 1{t^{NK}} = \sum_{t \in T} \frac 1{t^N(t^N-1)}
$$
Therefore, combining this with $(*)$ gives $$
\sum_{K \geq 2} \sum_{t \in T} \frac{1}{t^{NK}-1} = \sum_{t \in T} \frac 1{t^N(t^N-1)} + \sum_{K \geq 2} \sum_{t \in T} \frac 1{t^{NK}(t^{NK} - 1)}
$$
However, take a very careful look at the RHS here. We are actually summing the quantity $\frac 1{v^N(v^N-1)}$, first for $v \in T$, and then for numbers of the form $v^K$ for $v \in T, K \geq 2$ : which we know to be equal to $S$!
That is, we in fact, have $$
\sum_{t \in T} \frac 1{t^N(t^N-1)} + \sum_{K \geq 2} \sum_{t \in T} \frac 1{t^{NK}(t^{NK} - 1)} = \sum_{t \in T} \frac 1{t^N(t^N-1)} + \sum_{s \in S}\frac 1{s^N(s^N-1)} = \sum_{k=2}^{\infty} \frac 1{k^N(k^N-1)}
$$
We have obtained the identity $$
S_N = \sum_{k=2}^{\infty} \frac{1}{k^N(k^N-1)}
$$
Let's put $N=1$ first. Then, we get by telescoping, $$
S_1 = \sum_{k=2}^{\infty} \frac{1}{k(k-1)} = \sum_{k=2}^{\infty} \left(\frac 1{k-1} - \frac 1{k} \right)\\ = 1 - \frac 12 + \frac 12 - \frac 13+ \ldots = 1
$$
This is a proof of the first identity in Finch's book. The proof of the second identity follows by the evaluation of $S_2$. We write by the telescoping identity $$
S_2 = \sum_{k=2}^{\infty} \frac{1}{k^2(k^2-1)} = \sum_{k=2}^{\infty} \left(\frac 1{k^2-1} - \frac 1{k^2}\right) = \sum_{k=2}^{\infty} \frac 1{k^2-1} - \sum_{k=2}^{\infty} \frac 1{k^2}
$$
We know that $\sum_{k=2}^{\infty} \frac 1{k^2} = \frac{\pi^2}{6}-1$. What about $\sum_{k=2}^{\infty} \frac 1{k^2-1}$? For that, perform partial fractions and notice yet another telescoping occurring.
$$
\sum_{k=2}^{\infty} \frac 1{k^2-1} = \frac 12\sum_{k=2}^{\infty} \frac 2{k^2-1} = \frac 12\sum_{k=2}^{\infty} \left(\frac{1}{k-1} - \frac 1{k+1}\right) \\ = \frac 12 \left(1 - \frac 13 + \frac 12 - \frac 14 + \frac 13 - \frac 15 + \frac 14 - \frac 16 + \ldots\right) \\ = \frac 12\left(1+\frac 12\right) = \frac 34
$$
That is, we obtain $$
S_2 = \frac{3}{4} + 1 - \frac{\pi^2}{6} = \frac{7}{4} - \frac{\pi^2}{6}
$$
We are finally in a position to finish (and I need to, because merely typing the word telescoping has made my voice hoarse): $$
\sum_{s \in S} \frac 1{s+1} = \sum_{s \in S} \left(\frac{1}{s-1} - \frac{2}{s^2-1}\right) = S_1 - 2S_2 = \frac{\pi^2}{3} - \frac 72 + 1 = \frac{\pi^2}{3} - \frac 52
$$
as desired.
Note that the evaluation of higher $S_N$ is possible, because $$
S_N = \sum_{k=2}^{\infty} \frac 1{k^N-1} - \zeta(N) + 1
$$
One uses partial fraction decomposition, and the definition of the Digamma function like has been done here, to obtain $$
S_N = 1 - \zeta(N) - \frac 1N \sum_{\omega^N = 1}\omega \psi(2-\omega)
$$
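As a numerical cross-check of the main identity (an editorial addition, not part of the derivation), one can enumerate perfect powers up to a bound and compare the partial sum with $\frac{\pi^2}{3}-\frac52$; the tail is dominated by $\sum_{k>\sqrt N}\frac1{k^2}$, so it is small:

```python
from math import pi

# Enumerate perfect powers s = m^k <= N (m >= 2, k >= 2), without duplicates,
# and compare sum 1/(s+1) with pi^2/3 - 5/2.
N = 10 ** 6
powers = set()
m = 2
while m * m <= N:
    v = m * m
    while v <= N:
        powers.add(v)
        v *= m
    m += 1

partial = sum(1 / (s + 1) for s in powers)
target = pi ** 2 / 3 - 5 / 2
assert partial < target               # partial sums approach from below
assert abs(partial - target) < 2e-3   # tail bound ~ sum_{k>1000} 1/k^2
```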
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4531627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Proving using Mathematical Induction I'm Stuck on the last step, this is proving using mathematical induction, a lecture from my Elementary number theory class.
The question goes to,
Prove that $\sum_{k=1}^n \frac{1}{k^2}=\frac{1}{1^2}+\frac{1}{2^2}+\cdots+\frac{1}{n^2}\le 2-\frac{1}{n}$ whenever $n$ is a positive integer.
This is my Attempt,
Step 1: Base Case ($n=1$)
$$\sum_{k=1}^1 \frac{1}{k^2}=\frac{1}{1^2}=1\le2-\frac{1}{1}=1,$$
therefore the base case is true.
Step 2: Induction Hypothesis
Suppose $\sum_{k=1}^n \frac{1}{k^2}\le2-\frac{1}{n}$ is true for $n=m$,
i.e. $\sum_{k=1}^m \frac{1}{k^2}\le2-\frac{1}{m}$ for some fixed $m \in \mathbb{N}$.
Step 3: $n = m+1$
$\sum_{k=1}^{m+1} \frac{1}{k^2}=\sum_{k=1}^m \frac{1}{k^2}+\frac{1}{(m+1)^2}\le2-\frac{1}{m}+\frac{1}{(m+1)^2}$
I'm Stuck on this step
| Starting from the induction hypothesis which is $\sum_{k=1}^m \frac{1}{k^2}\le2-\frac{1}{m}$:-
Thus:-
$$\sum_{k=1}^m \frac{1}{k^2}\le2-\frac{1}{m}$$
$$\sum_{k=1}^{m+1} \frac{1}{k^2}\le2-\frac{1}{m}+\frac{1}{(m+1)^{2}}$$
As $m\ge1$ so:-
$$\Rightarrow m^2+2m\le m^2+2m+1$$
$$\Rightarrow \frac{m(m+2)}{(m+1)^2}\le 1$$
$$\Rightarrow \frac{(m+1)+1}{(m+1)^2}\le \frac{1}{m}$$
$$\Rightarrow \frac{1}{(m+1)^2} + \frac{1}{m+1}\le \frac{1}{m}$$
$$\Rightarrow \frac{1}{(m+1)^2} -\frac{1}{m} \le -\frac{1}{m+1}$$
$$\Rightarrow 2-\frac{1}{m}+\frac{1}{(m+1)^2} \le 2 -\frac{1}{m+1}$$
Now making use of the 2nd equation:-
$$\sum_{k=1}^{m+1} \frac{1}{k^2}\le2-\frac{1}{m}+\frac{1}{(m+1)^{2}}\le2 -\frac{1}{m+1}$$
Thus completing the induction step.
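A brute-force numerical check of the inequality itself (my own addition; it does not replace the induction) runs the partial sums directly:

```python
# Check sum_{k=1}^n 1/k^2 <= 2 - 1/n for n = 1, ..., 2000.
s = 0.0
for n in range(1, 2001):
    s += 1 / n ** 2
    assert s <= 2 - 1 / n   # equality holds at n = 1, strict afterwards
```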
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4531805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Uniqueness of the nodes for Gauss-Legendre quadrature Gauss-Legendre quadrature approximates $\int_{-1}^{1}f(x)\,dx$ by $\sum_{i=1}^nw_if(x_i)$.
Wikipedia says that
This choice of quadrature weights $w_i$ and quadrature nodes $x_i$ is the unique choice that allows the quadrature rule to integrate degree $2n − 1$ polynomials exactly.
https://en.wikipedia.org/wiki/Gauss–Legendre_quadrature
The uniqueness of $w_i$ satisfying the condition for fixed $x_i$ comes from the invertibility of the Vandermonde matrix.
How to prove the uniqueness of $x_i$?
| It is known that there are nodes $x_1<x_2<\ldots < x_n$ and the weights $w_i>0$ such that
$$\int\limits_{-1}^1p(x)\,dx =\sum_{i=1}^n w_ip(x_i),\qquad \deg p\le 2n-1$$
Assume
there are other nodes $x_1'<x_2'<\ldots <x_n'$ and quantities $w_i'$ (not necessarily nonnegative) such that
$$\sum_{i=1}^n w_ip(x_i)=\sum_{i=1}^n w_i'p(x_i'),\qquad \deg p\le 2n-1\quad (*)$$
Assume there exists a polynomial $q,$ $\deg q\le 2n-1,$ $q(x_i)\ge 0$ for $i=1,2,\ldots, n$ and
$q(x)=0$ iff $x\in \{x_1',x_2',\ldots, x_n'\}.$
Then the formula $(*)$ implies $q(x_i)=0$ for $i=1,2,\ldots, n.$ Therefore $x_i'=x_i$ for $i=1,2,\ldots, n.$
Now we are going to construct a polynomial $q$ with the properties described above. Assume one of the intervals $[x_k,x_{k+1}]$ contains more than one element of $\{x_j'\}_{j=1}^n$
$$x_k\le x'_{l} <x'_{l+1}\le x_{k+1}$$
Then
$$q(x)=(x-x'_l)(x-x'_{l+1})\prod_{j\neq l,l+1}^n(x-x'_j)^2$$
In the opposite case every interval $[x_i,x_{i+1}]$ contains at most one element $x_j'.$ By the pigeonhole principle either $x_1'<x_1$ or $x_n'>x_n.$
In the first case let
$$q(x)=(x-x_1')\prod_{j=2}^n(x-x'_j)^2$$ and in the second case
$$q(x)=(x_n'-x)\prod_{j=1}^{n-1}(x-x'_j)^2$$
In all three cases we have $\deg q\le 2n-1,$ $q(x_i')=0$ and $q(x_i)\ge 0.$
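A small numeric illustration of the exactness statement (my own addition): the $n=2$ Gauss-Legendre rule, with nodes $\pm1/\sqrt3$ and weights $1$, integrates every polynomial of degree $\le 2n-1=3$ exactly, but fails at degree $4$:

```python
from math import sqrt

# 2-point Gauss-Legendre rule on [-1, 1]: nodes ±1/sqrt(3), weights 1.
nodes = [-1 / sqrt(3), 1 / sqrt(3)]
weights = [1.0, 1.0]

def rule(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

for deg in range(4):
    exact = 0.0 if deg % 2 == 1 else 2 / (deg + 1)   # integral of x^deg
    assert abs(rule(lambda x: x ** deg) - exact) < 1e-12

# Degree 4 is not exact: the integral of x^4 is 2/5, but the rule gives 2/9.
assert abs(rule(lambda x: x ** 4) - 2 / 9) < 1e-12
```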
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4531960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve the following multivariable optimization? I found a question in the CSIR NET exam which was an optimization problem. We have to find the maximum and minimum value of $f(x_1,x_2,x_3,x_4)=30x_1+90x_2+100x_3+120x_4$ subject to the constraints $x_1,x_2\geq 1,x_3,x_4\geq 1/2$ and $x_1+x_2+x_3+x_4=5$. I have done Linear programming problems but this seems to be a multivariable linear programming problem. Can someone give me an idea how to solve it?
| The solution $x=(3,1,0.5,0.5)$ obtained via inspection by @callculus42 is indeed optimal, with objective value $30\cdot3+90\cdot1+100\cdot0.5+120\cdot0.5 = 290$. For a short certificate of optimality (from LP duality), note that
\begin{align}
30 x_1 + 90 x_2 + 100 x_3 + 120 x_4
&= 30 \sum_{j=1}^4 x_j + 60 x_2 + 70 x_3 + 90 x_4 \\
&\ge 30 \cdot 5 + 60\cdot 1 + 70\cdot0.5 + 90\cdot0.5 \\
&= 290.
\end{align}
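A quick sampling check of the bound (my own addition; the duality argument above is the actual certificate): every randomly generated feasible point has objective value at least $290$, which is attained at $x=(3,1,0.5,0.5)$:

```python
import random

def objective(x1, x2, x3, x4):
    return 30 * x1 + 90 * x2 + 100 * x3 + 120 * x4

assert objective(3, 1, 0.5, 0.5) == 290

# Sample feasible points: x2 >= 1, x3, x4 >= 1/2, x1 = 5 - x2 - x3 - x4 >= 1.
random.seed(0)
for _ in range(10_000):
    x2 = random.uniform(1, 3)
    x3 = random.uniform(0.5, 2)
    x4 = random.uniform(0.5, 2)
    x1 = 5 - x2 - x3 - x4          # enforce the equality constraint
    if x1 < 1:                     # discard infeasible samples
        continue
    assert objective(x1, x2, x3, x4) >= 290 - 1e-9
```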
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4532105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $\sum_{k =1}^n \Big(1 - \frac{1}{\lambda_n \mu_k} \Big)_+$ converge when $\mu_k \to 0$? Let $\{\mu_k\}$ be a positive, decreasing sequence, such that $\mu_k \to 0$ as $k \to \infty$.
For notation, $x _+ = \max\{0, x\}$ denotes the positive part.
$\underline{\text{Series:}}$ $\quad$ We are going to define two series based on a sequence of numbers $\{\lambda_n\}$ and $\lambda$.
The series are
$$
S_n = \sum_{k =1}^n \frac{1}{\lambda_n} \Big(\lambda_n - \frac{1}{ \mu_k} \Big)_+,
\quad \mbox{and} \quad
S = \sum_{k =1}^\infty \frac{1}{\lambda} \Big(\lambda - \frac{1}{\mu_k} \Big)_+.
$$
$\underline{\text{Choice of $\lambda_n, \lambda:$}}\quad$ The scalars $\lambda_n$ and $\lambda$ solve the following equations:
$$
\sum_{k = 1}^n \frac{1}{\mu_k}\Big(\lambda_n - \frac{1}{ \mu_k} \Big)_+ = 1,
\quad \mbox{and} \quad
\sum_{k = 1}^\infty \frac{1}{\mu_k}\Big(\lambda - \frac{1}{ \mu_k} \Big)_+ = 1.
$$
Question: Is it true that $S_n \to S$ as $n \to \infty$? The main difficulty is that the terms of the series $S_n$ depend on $\lambda_n$ rather than $\lambda$.
| Following a suggestion of Steven Stadnicki.
$\mu_k$ are decreasing and so $1/\mu_k$ are increasing to $+\infty$.
Consequently, there exists $\kappa$ for which $\lambda < 1/\mu_\kappa$,
and thus $(\lambda - 1/\mu_k)_+ = 0$ for all $k \geq \kappa$.
Consequently,
$$
\sum_{k=1}^\infty \mu_k^{-1} (\lambda - \mu_k^{-1})_+ =
\sum_{k=1}^{n} \mu_k^{-1}(\lambda - \mu_k^{-1})_+
$$
for all $n \geq \kappa - 1$.
This implies $\lambda = \lambda_n$ for all $n \geq \kappa - 1$.
Moreover, for $n \geq \kappa - 1$, it can be verified that
$$
S = S_n = \lambda^{-1} \sum_{k < \kappa} (\lambda - \mu_k^{-1}).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4532250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove there exists a $c$ such that $g(c) = 1$ in the interval $[-5, 5]$ I've this problem where I am asked to use the Intermediate Value Theorem:
A function $f$ is continuous where $f(-5) = -1$ and $f(5) = 6$, and $g(x) = 1 - (f(x))^2$.
Is there a value $c$ for $-5 \leq{c} \leq{5}$ such that $g(c) = 1$? Why, or why not?
I understand that the intermediate value theorem guarantees an $f(c) = 0$ so that $g(c) = 1 - 0^2$, but I cannot figure out a way to succinctly say there is a $g(c) = 1$ by this fact, since applying IVT to $-5, 5$ yields $g(-5) = 0$ and $g(5) = -35$, which does not guarantee a $g(c) = 1$.
Is there a theorem I am unaware of that proves this from the fact that there is an $f(c) = 0$?
| Since $f$ is continuous on $[-5,5]$ and $f(-5)f(5) = -6 < 0$, by IVT there is a number $c \in (-5,5)$ such that $f(c) = 0 \implies f(c)^2 = 0 \implies g(c) = 1 - f(c)^2 = 1 - 0^2 = 1 - 0 = 1$.
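To make the argument concrete (an editorial addition), here is a sketch with a hypothetical $f$ of my own choice meeting the hypotheses $f(-5)=-1$, $f(5)=6$; bisection locates the $c$ with $f(c)=0$, and then $g(c)=1$:

```python
# Hypothetical continuous f with f(-5) = -1 and f(5) = 6 (my own choice;
# any continuous function with these endpoint values works the same way).
def f(x):
    return 0.7 * x + 2.5

def g(x):
    return 1 - f(x) ** 2

# Bisection on the sign change of f over [-5, 5].
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
assert -5 <= c <= 5
assert abs(g(c) - 1) < 1e-12   # g(c) = 1 - f(c)^2 with f(c) ≈ 0
```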
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4532415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$G = HN$ a group, $N$ normal and $N \cap H =1$. If the conjugacy action of $H$ on $N$ has an orbit with all of the non-trivial elements, every non-trivial element of $N$ has prime order Let $G$ be a finite group, $H, N \leqslant G $ with $N$ normal in $G$ such that $G=HN$ and $N \cap H =\{1\}$. Suppose that all of the non-trivial elements of $N$ are in a single orbit for the conjugacy action of $H$ on $N$, then: there exists a prime $p$ such that $\forall x \in N$, $x \neq 1$ we have $|x|=p$.
So I can deduce that the conjugacy action of $H$ on $N$ has two orbits: $\{1\}$ and $N \setminus \{1\}$ and if $x \in N \setminus \{1\}$, for the orbit-stabilizer theorem we have that $|N|-1 = [H:H_x]$, so $|N|-1$ divides the order of $H$, hence $|G| = |N|(|N|-1)k$. Now, to show that $|x| = p\,\,$, I can imagine that the strategy would be to show that $|N|$ is the prime $p$. Any hint will be appreciated since I am stuck here, thank you.
| From the hypotheses we obtain: $$N=\{1_G\}\cup\{h n_0 h^{-1}\mid h\in H\},\;\;\text{ for any } n_0\in N\backslash{\{1_G\}}.$$
Let $n\in N\backslash{\{1_G\}}$; then for a suitable $h\in H$ : $$n^{|n_0|}=(h n_0 h^{-1})^{|n_0|}=h n_0^{|n_0|}h^{-1}=1_G. $$ $$\implies |n|\bigg{|} {|n_0|}\stackrel{\text{symmetry}}{\implies} {|n_0|}\bigg{|} |n|\implies {|n_0|}= |n|.\;\;\; (1)$$
By Cauchy's theorem there exists $n\in N\backslash{\{1_G\}}$ s.t. $|n|=p$ for some prime $p$ (dividing the order of $N$); hence from (1), every non-trivial element of $N$ has order $p$.
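A sanity check in the smallest example (my own addition): take $G=S_3$, $N=A_3$ (normal), $H=\langle(1\,2)\rangle$, so $G=HN$ and $H\cap N=\{1\}$. The $H$-conjugation action has a single orbit on $N\setminus\{1\}$, and every non-trivial element of $N$ has order $3$:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples: p maps i to p[i].
def compose(p, q):                 # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def order(p):
    e = tuple(range(len(p)))
    q, k = p, 1
    while q != e:
        q, k = compose(q, p), k + 1
    return k

def sign(p):
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

e = (0, 1, 2)
N = [p for p in permutations(range(3)) if sign(p) == 1]   # A_3
H = [e, (1, 0, 2)]                                        # <(1 2)>

# Orbit of the nontrivial elements of N under conjugation by H.
orbit = {compose(h, compose(n, inverse(h))) for h in H for n in N if n != e}
assert orbit == set(N) - {e}                  # a single orbit on N \ {1}
assert all(order(n) == 3 for n in N if n != e)  # all of prime order p = 3
```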
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4532543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Spivak, Ch. 20, Problem 10c: How to compute the Taylor polynomial of $\frac{1}{\cos}$?
*(c) Find $P_{5,0,\tan}(x)$. Hint: first use Problem $9$(f) and the value of $P_{5,0,\cos}(x)$ to find $P_{5,0,1/\cos}(x)$.
As a note on notation, $P_{5,0,\tan}(x)$ is the fifth order Taylor polynomial at $0$ of $\tan$.
For this question, I am interested in the computation of $P_{n,0,\frac{1}{1-\cos}}(x)$ specifically.
In Problem $9$(f), it was shown that if $g(a)=0$ then
$$P_{n,a,\frac{1}{1-g}}(x)=\left [ \sum_{i=0}^n \left ( P_{n,a,g}(x) \right )^i \right ]_n$$
where $[P]_n$ denotes the truncation of a polynomial $P$ to degree $n$, that is, it is the sum of all terms of $P$ of degree $\leq n$.
The solution manual does the following
$$P_{n,0,\frac{1}{1-\cos}}(x)=\left [ \frac{1}{1-(\frac{x^2}{2!}-\frac{x^4}{4!})} \right ]_5\tag{1}$$
$$=\left [ 1+\left (\frac{x^2}{2!}-\frac{x^4}{24}\right ) +\left (\frac{x^2}{2!}-\frac{x^4}{24}\right )^2 \right ]_5$$
$$=1+\frac{x^2}{2}+\frac{5x^4}{24}\tag{2}$$
My question is simply: where does $(1)$ come from?
I will also add my own two attempts below.
In my first attempt, I tried letting $f(x)=\frac{1}{x}$ and $g(x)=\cos{x}$ and using the relationship
$$P_{n,a,f\circ g}(x)=[ P_{n,g(a),f}\circ P_{n,a,g}]_n=[ P_{n,g(a),f}(P_{n,a,g}(x))]_n$$
which was proved in the problem $9$(e) just before.
Then, computing the Taylor polynomial of $f$ we have
$$P_{n,a,f}(x)=\sum\limits_{i=0}^n \frac{(x-a)^i}{a^{i+1}}$$
And since $a=0$ in this problem, we have $g(a)=g(0)=1$. Thus
$$P_{n,1,f}(x)=\sum\limits_{i=0}^n (x-1)^i$$
$$P_{n,0,g}(x)=\sum\limits_{i=0}^n (-1)^i \frac{x^{2i}}{(2i)!}$$
and
$$P_{n,0,f\circ g}(x)=[ P_{n,1,f}(P_{n,0,g}(x))]_n$$
$$=\left [ \sum\limits_{i=0}^n \left ( \sum\limits_{j=1}^n (-1)^j \frac{x^{2j}}{(2j)!} \right )^i \right ]_n\tag{3}$$
Thus
$$P_{5,0,f\circ g}(x)=P_{5,0,\frac{1}{\cos}}(x)=\left [ \sum\limits_{i=0}^5 \left ( \sum\limits_{j=1}^5 (-1)^j \frac{x^{2j}}{(2j)!} \right )^i \right ]_5\tag{4}$$
$(4)$ is quite a complicated expression.
In my second attempt, I used $\cos{x}=1-g(x)$, so $g(x)=1-\cos{x}$ and
$$P_{2n+1,0,g}(x)=1-\sum\limits_{i=0}^n (-1)^i \frac{x^{2i}}{(2i)!}$$
$$=1-\left [ 1+\sum\limits_{i=1}^n (-1)^i \frac{x^{2i}}{(2i)!}\right ]$$
$$=-\sum\limits_{i=1}^n (-1)^i \frac{x^{2i}}{(2i)!}$$
$$=\sum\limits_{i=1}^n (-1)^{i+1} \frac{x^{2i}}{(2i)!}$$
Thus
$$P_{5,0,\frac{1}{\cos}}(x)=P_{5,0,\frac{1}{1-g}}(x)=\left [ \sum\limits_{i=0}^5[P_{5,0,g}(x) ]^i \right ]_5$$
$$=\left [ \sum\limits_{i=0}^5\left ( \sum\limits_{j=1}^2 (-1)^{j+1} \frac{x^{2j}}{(2j)!} \right )^i \right ]_5\tag{5}$$
Now, $(4)$ and $(5)$ seem to be different expressions, both involving lots of complicated terms. I asked a question on MaplePrimes about how to truncate polynomials in Maple, but for now I can't tell if they are different, or indeed if one of them is equal to what Spivak has as the correct answer for the Taylor polynomial of $\frac{1}{\cos}$ that we see in $(2)$.
In addition to my first question, it would be nice to get feedback on these two attempts. It seems at least one of them is incorrect.
|
My question is simply: where does (1) come from?
(1) comes from an error in notation. Look back at the problem and you see that it is $P_{5,0,\frac 1{\cos}}$ you need to calculate for the hint, not $P_{5,0,\frac 1{1-\cos}}$. You cannot calculate $P_{5,0,\frac 1{1-\cos}}$ at all, as $\frac 1{1-\cos x}$ is not even defined at $x = 0$, much less differentiable. And the singularity is a pole, not removable. Someone accidentally replaced $g$ with $\cos$ in the Taylor polynomial notation, when they meant to replace $\frac 1{1-g}$ with $\frac 1\cos$.
A further issue is that (1) also doesn't quite follow the hint, but instead follows pretty much the same path you did in your first attempt, except they shortcutted a bit, and most importantly instead of writing out frightening summation notations, they substituted in the much less frightening simple expressions those summations represent:
$$P_{5,0,\cos}(x) = \sum_{j=0}^2 (-1)^{j} \frac{x^{2j}}{(2j)!} = 1 - \frac{x^2}{2!}+\frac{x^4}{4!}$$
(In writing up your first attempt here, you incorrectly have $5$ as the upper limit of the sum, but in the second attempt, you wrote the correct upper limit of $2$, so I assume the first was a typo.)
So the final formula of your first attempt can be expressed more simply as
$$P_{5,0,\frac1{\cos}}(x) = \left[1 + \left(\frac{x^2}{2!}-\frac{x^4}{4!}\right) + \left(\frac{x^2}{2!}-\frac{x^4}{4!}\right)^2 +\ \dots\right]_5$$
Which is also what they would have gotten by directly applying 9(f) instead of reverting to earlier formulas. The next thing to notice is that the "$+\ \dots$" at the end can be dropped (which is why I didn't bother to write it out). Any power $\left(\frac{x^2}{2!}-\frac{x^4}{4!}\right)^i$ for $i>2$ will, when multiplied out, consist only of terms of degree $6$ or greater, and we are throwing out anything with a degree greater than $5$. Thus your "quite a complicated expression" actually isn't that complicated after all. Even the $\left(\frac{x^2}{2!}-\frac{x^4}{4!}\right)^2$ only contributes $\left(\frac{x^2}{2!}\right)^2$ to the Taylor polynomial, the other terms being of too high degree. So
$$P_{5,0,\frac1{\cos}}(x) = 1 + \frac{x^2}{2!}-\frac{x^4}{4!} + \frac{x^4}{2!^2} = 1 + \frac 12 x^2 +\frac 5{24}x^4$$
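The truncated-composition computation can also be replayed mechanically (an editorial addition): with $u = \frac{x^2}{2!}-\frac{x^4}{4!}$, computing $[1+u+u^2]_5$ with exact rational coefficients recovers $1+\frac12x^2+\frac5{24}x^4$:

```python
from fractions import Fraction as F

# Polynomials as coefficient lists (index = degree), truncated at degree 5.
TRUNC = 5

def mul(p, q):
    r = [F(0)] * (TRUNC + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= TRUNC:      # drop terms of degree > 5
                r[i + j] += a * b
    return r

def add(p, q):
    return [a + b for a, b in zip(p, q)]

u = [F(0), F(0), F(1, 2), F(0), F(-1, 24), F(0)]   # x^2/2! - x^4/4!
one = [F(1)] + [F(0)] * TRUNC

result = add(add(one, u), mul(u, u))               # [1 + u + u^2]_5
assert result == [F(1), F(0), F(1, 2), F(0), F(5, 24), F(0)]
```

Higher powers of $u$ contribute only degree $\ge 6$ terms, which is exactly why the "$+\ \dots$" could be dropped above.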
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4532731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Example of an infinite-dimensional geodesic NPC Space I just started reading Ballmann's book on non-positive curvature spaces. In it most, non-linear, examples of NPC spaces are negatively curved manifolds or specific graphs/discrete metric spaces, or buildings.
So, to gain intuition, what is a "down-to-earth"/interesting example of a metric space $(X,d)$ which:
*
*Has non-positive curvature (in the sense of Ballmann)
*X is not a topological vector space but it is bi-Lipschitz equivalent to an infinite-dimensional separable Hilbert space $H$.
*"X is not only a toy example but is interested in other areas of math...has reasonable "roots""
| I'll briefly sketch the construction of an infinite dimensional hyperbolic space mentioned by Moishe. Let $H$ be an infinite dimensional separable Hilbert space, and consider the space $V:=\mathbb R\oplus H$. We can equip $V$ with the symmetric bilinear form $\langle(\lambda_1,h_1),(\lambda_2,h_2)\rangle_V:=\lambda_1\lambda_2-\langle h_1,h_2\rangle_H$, a Minkowski-type form (indefinite, so not an inner product), with associated quadratic form $Q(v)=\langle v,v\rangle_V$. $V$ has a natural cone, $C=\{(\lambda,x)\in V: \|x\|_H<\lambda\}$. We can consider a slice of this cone $\mathcal H:=\{v\in C: Q(v)=1\}$. Finally, we equip $\mathcal H$ with a metric, $d(u,v):=\operatorname{arcosh}(\langle u,v \rangle_V).$ The metric space $(\mathcal H,d)$ is known as the hyperboloid model of infinite dimensional hyperbolic space, and is the natural extension of Minkowski's model of hyperbolic space in finite dimensions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4532910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Area function is continuous on a set of compact sets in $[0,1]^2$ Consider $X=[0,1]^2\subset \mathbb{R}^2$. If $H_X$ is the set of all nonempty compact subsets of $X$, then we can define a metric $d$ on $H_X$, the Hausdorff metric:
For $A, B\in H_X$, $d(A,B)=\inf\{ r\ge 0 : A\subseteq U_r(B) \text{ and } B\subseteq U_r(A)\}$, where $U_r(C)=\{ a\in X : |a-c|\leq r \text{ for some } c\in C\}$ and $|\cdot|$ is the Euclidean distance.
If $A_n =\{ (x,y)\in X \mid y=\frac{i}{n} \text{ for some integer } 0\leq i\leq n\}$, then $A_n$ goes to $X$ in this metric. Hence if ${\rm Area}$ is two-dimensional Lebesgue measure, $$ \lim_n\ {\rm Area}\ (A_n)=0 < {\rm Area}\ X=1 \ (1)$$
Question : I want to know whether or not there is an example opposite to $(1)$. Is there an example $A_n$ with $A_n\rightarrow A$ s.t. ${\rm Area}\ A <\lim_n\ {\rm Area} \ A_n\ (2)$ ?
Remark : a. If we consider a length function on a set of continuous maps from
unit interval to $X$, then we have $(2)$ but not $(1)$.
b. Note that area function is continuous on a set of all convex subsets in $X$.
| For $A$ a subset of the metric space $X$ we have
$$A_{\epsilon} \colon = \{ x \in X \ | \ d(x, A) \le \epsilon\}$$
is a closed subset of $X$ and
$$\bigcap_{n \ge 1} A_{\frac{1}{n}} = \bar A$$
In your case you also have
$$\lim_{n \to \infty} \mu(A_{\frac{1}{n}}) = \mu(\bar A)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4533043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving $P[\{X_n ≤ x\} ∩ \{|X_n − X| < \epsilon\}] \leq P[X ≤ x + \epsilon]$ I'm studying Theorem 5.2.1 from Hogg, Craig Introduction to Mathematical Statistics 8th Edition. I'm adding a few intermediate steps to the proof for clarity and want to know if I'm doing it correctly. Thanks in advance!
My goal is to prove:
$P[\{X_n ≤ x\} ∩ \{|X_n − X| < \epsilon\}] \leq P[X ≤ x + \epsilon]$
where $X_n$ is a sequence of random variables.
My attempt:
$P[\{X_n ≤ x\} ∩ \{|X_n − X| < \epsilon\}]$
$= P[\{X_n ≤ x\} \cap \{X_n − X < \epsilon\} \cap \{X − X_n < \epsilon\}]$
$= P[\{X_n ≤ x\} \cap \{X_n − X < \epsilon\} \cap \{X \lt X_n + \epsilon\}]$
$\leq P[\{X_n ≤ x\} \cap \{X \lt X_n + \epsilon\}]$
$\leq P[X \lt x + \epsilon]$
$\leq P[X \leq x + \epsilon]$
| Note that, on the event $\{|X_n-X|<\epsilon\}$,
$$
X_n\leqslant x\implies X \leqslant X_n+|X-X_n| < x+\epsilon,
$$
so
$$
\{X_n\leqslant x\}\cap\{|X_n-X|<\epsilon\}\subset \{X\leqslant x+\epsilon \}.
$$
Now, as a probability measure is monotone, we have that
$$
P[\{X_n\leqslant x\}\cap\{|X_n-X|<\epsilon\}]\leqslant P(X\leqslant x+\epsilon )
$$
∎
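The set inclusion driving the inequality can be checked sample by sample (my own addition; the joint distribution below is an arbitrary illustrative choice):

```python
import random

# Per-sample check of {X_n <= x} ∩ {|X_n - X| < eps} ⊂ {X <= x + eps}.
random.seed(1)
x, eps = 0.3, 0.1
left = right = 0
for _ in range(100_000):
    X = random.gauss(0, 1)
    Xn = X + random.gauss(0, 0.05)   # any joint law works; this is arbitrary
    in_left = (Xn <= x) and (abs(Xn - X) < eps)
    in_right = X <= x + eps
    if in_left:
        assert in_right              # the inclusion holds for every sample
        left += 1
    if in_right:
        right += 1
assert left <= right                 # so the empirical frequencies agree too
```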
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4533297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can this equation be simplified to give $y$?: $x = \frac{(-1)^y ( 5 (-1)^y y - y + (-1)^y - 1)}4$ I'm trying to convert this equation to the form $y = ...$, but I am stuck. It seems the $y$-root of $(-1)^y$ is not $-1$, but is instead a beast. Here is the overall equation:
$$x = \frac{(-1)^y ( 5 (-1)^y y - y + (-1)^y - 1)}4$$
Note: y is an integer, x is an integer.
I could be open to x needing to be a complex number as long as there are solutions where x ∈ { 3+0i, 5+0i, 7+0i, ...} and y ∈ { 5, 8, 11, ...}.
Note: the point is to avoid using separate equations for even vs. odd, but to have one equation that handles both. That's why the first equation has (-1)^n in it; it makes the equation = y when y is even, and (3y+1)/2 when y is odd. However that trick is not as helpful when we only care about every third number instead of every second number.
Context: I'm an old man trying to refresh my math skills by learning about groups and branch groups. I'm not sure how much extra context you want. Trying to build a map between 2n+1 and 3n+2, kind of.
| You can make a case decision. Let $y$ be an even integer; then
$$x= \frac{1\cdot ( 5 \cdot 1\cdot y - y + 1 - 1)}4=\frac{ 4y }4=y.$$
Let $y$ be an odd integer; then
$$x=\frac{-( -5 y - y -1 - 1)}4=\frac{-(-6y-2)}{4}=\frac{6y+2}{4}=\frac{3y+1}{2}$$
Solving for y gives $y=\frac{4x-2}{6}=\frac23x-\frac13$. So the function is
$$y=\begin{cases} x, & \text{if $y$ is an even integer} \\[4pt] \frac23x-\frac13, & \text{if $y$ is an odd integer}\end{cases}$$
So you calculate both cases, and then you make the decision which one case is right. IMHO it is the easiest approach. I don't see a handy way for non-integers.
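A quick script (my own check, using exact rational arithmetic; the helper name is arbitrary) confirming that the original expression reduces to these two cases for integer $y$:

```python
from fractions import Fraction

def x_of(y):
    """Evaluate x = (-1)^y * (5*(-1)^y*y - y + (-1)^y - 1) / 4 exactly."""
    s = -1 if y % 2 else 1  # s = (-1)^y
    return Fraction(s * (5 * s * y - y + s - 1), 4)

for y in range(-20, 21):
    if y % 2 == 0:
        assert x_of(y) == y                       # even case: x = y
    else:
        assert x_of(y) == Fraction(3 * y + 1, 2)  # odd case: x = (3y+1)/2
print("both cases verified for y in [-20, 20]")
```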
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4533640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Find $x$ such that $\sqrt[x]9=81$ Find $x$ such that $\sqrt[x]9=81$
If I simplified this to $9^{\frac1x}=81$, then we have $x={\frac12}$
I'm stuck here,
My question: can we rewrite $\sqrt[\frac{a}{b}] c\quad$ as $\sqrt[a]{c^b}\quad$ for all positive real number $c$?
| Yes you can! Note that $$\sqrt[\frac a b]c=c^{\frac {1} {\frac a b}}=c^{\frac b a}=\sqrt[a]{c^b}$$
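A one-line numerical illustration (my own, using the question's values $c=9$ and index $a/b=\tfrac12$):

```python
import math

c, a, b = 9.0, 1, 2          # index a/b = 1/2, the x from the question
lhs = c ** (1 / (a / b))     # c^(b/a), i.e. the (a/b)-th root of c
rhs = (c ** b) ** (1 / a)    # the a-th root of c^b
assert math.isclose(lhs, rhs)
print(lhs)  # 81.0
```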
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4533842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Using $\varepsilon$-$\delta$, prove that $\lim\limits_{(x,y) \rightarrow (3,2)} (xy^2+\frac{3}{x})=13$ I started working on factoring $|xy^2+\frac{3}{x}-13|$ and got to $|xy^2-12-\frac{x-3}{x}|$. I have to get to $(y-2)$ somehow, but am confused how to get that from $xy^2-12$. Can anyone give even a slight hint? :)
| Let $\varepsilon>0$ be fixed.
We can write
\begin{align}
\big|xy^2+\frac 3x-13\big| &= \big|xy^2-3\cdot2^2+\frac3x-1\big| =\\[1.5ex]
&= \big|(x-3)\,4+x(y^2-2^2)+\frac {3-x}x\big|\leq\\[1.5ex]
&\leq \big|x-3\big|\,4+\big|x\big|\cdot\big|y-2\big|\cdot\big|y+2\big|+\frac{\big|3-x\big|}{\big|x\big|}\leq\ldots\\[1.5ex]
\end{align}
$\Big[$Because $(x,y)$ converges to $(3,2)$, we can suppose that $2\leq x\leq 4,\quad 1\leq y\leq 3$, i.e. $(x,y)$ belongs to the closed square $S\Big((3,2),1\Big)$ centered at $(3,2)$ with half-side $1\Big]$
\begin{align}
\phantom{ll}\ldots&\leq\big|x-3\big|\,4+4\,\big|y-2\big|\,5+\frac12\,\big|3-x\big|=\\[1.5ex]
&=\frac92\,\big|x-3\big|+20\,\big|y-2\big|\leq\\[1.5ex]
\end{align}
\begin{align}
&\leq 20\Big(\big|x-3\big|+\big|y-2\big|\Big)\leq\ldots\\[1.5ex]
\end{align}
$\bigg[$ By using that $\;a,b\in\mathbb R \implies a+b\leq\sqrt2\sqrt{a^2+b^2}\bigg]$
\begin{align}
\phantom{l}\ldots&\leq 20\,\sqrt2\,\sqrt{\big|x-3\big|^2+\big|y-2\big|^2}\leq\varepsilon
\end{align}
when
$$ (x,y)\in B\Big((3,2), \frac{\varepsilon}{20\sqrt2}\Big)\cap S\Big((3,2),1\Big), $$
where $B\Big((\xi,\eta),r\Big)$ denotes the closed disk centered at $(\xi,\eta)$ with radius $r$.
Hence a possible choice for $\delta$ is
$$\delta=\min\Big(\frac\varepsilon{20\sqrt2},1\Big). $$
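To see the bound in action, here is a sampling check (a sketch of my own; it draws random points in the disk of radius $\delta$ around $(3,2)$ and verifies $|f(x,y)-13|\le\varepsilon$):

```python
import math
import random

random.seed(1)
f = lambda x, y: x * y**2 + 3 / x
eps = 0.01
delta = min(eps / (20 * math.sqrt(2)), 1)
for _ in range(100_000):
    r = delta * math.sqrt(random.random())  # uniform over the disk
    t = random.uniform(0, 2 * math.pi)
    x, y = 3 + r * math.cos(t), 2 + r * math.sin(t)
    assert abs(f(x, y) - 13) <= eps
print("bound verified on 100000 sampled points")
```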
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4534047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does convergence of arithmetic mean imply convergence of geometric mean? Let $x_n$ be a sequence of positive real numbers, which is not convergent. ($x_n$ does not converge to a finite number, nor to infinity). Define
$$ A_n = \frac{x_1 + x_2 + \cdots + x_n}{n} \quad \text{ and }\quad G_n = \sqrt[n]{x_1x_2...x_n}.$$
Does the convergence of $A_n$ imply the convergence of $G_n$ or vice versa?
I know that if $x_n \to L \in \mathbb{R}\cup\{\infty\}$, then both $A_n,G_n$ converge to $L$. But here I assume $x_n$ is not convergent.
I also wonder if adding a boundedness assumption on $x_n$ changes anything.
| Here's an example to show that the convergence of $A_n$ does not imply the convergence of $G_n$.
Let $\mathbb{Z}_0$ denote the set of nonnegative integers, and let
$S=\{2^k{\,\mid\,}k\in\mathbb{Z}_0\}$.
Let the sequence $x_1,x_2,x_3,...$ be defined by
$$
x_n=
\begin{cases}
{\large{\frac{1}{2^n}}}&\text{if}\;n\in S\\[4pt]
1&\text{otherwise}
\end{cases}
$$
Then the sequence $(x_n)$ is a bounded, non-convergent sequence of positive real numbers.
For $A_n$ we have
$$
\frac{n-\bigl(\log_2(n)+1\bigr)}{n} < A_n < 1
$$
hence ${\displaystyle{\lim_{n\to\infty}}}A_n=1$.
For $G_n$ we have
$$
G_n
=
\begin{cases}
{\large{\frac{1}{2}}}&\text{if}\;n+1\in S\\[4pt]
2^{-\left(2-\frac{1}{n}\right)}&\text{if}\;n\in S\\
\end{cases}
$$
hence the sequence $(G_n)$ has limit points ${\large{\frac{1}{2}}}$ and ${\large{\frac{1}{4}}}$, so $(G_n)$ is non-convergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4534513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Computing real minima of positive univariate polynomials (of degree $\le 6$) Let $p : \mathbb R \rightarrow \mathbb R$ be a non-constant positive polynomial with $\deg(p) \leq 6$. We know that $p$ has between $1$ and $3$ minima. How can an approximation to the global minimum be found numerically?
This problem occurs naturally when trying to compute the closest point to a cubic spline.
The approaches I've seen so far all solve the generic quintic equation $p'(t) = 0$. But this gives up a lot of structure, since now we're computing (up to) $5$ roots instead of $3$. Are there any numerical methods more tailored to this kind of situation?
The kind of structure I'm thinking of: since we know that $p''(t) \geq 0$ at any local minimum $t$, we could factor the (non-constant, leading-coefficient-positive) quartic $p''(t)$ and obtain (up to) $3$ intervals where the roots must lie, of which only one can be finite.
| This doesn’t answer the question you asked, but it might solve the problem that motivated your question.
Here’s what I’d do to find the closest point on a cubic curve …
*
*Calculate $n$ points on the curve, and use these to construct a polyline approximation.
*Find the closest point on the polyline, and its corresponding parameter value, $t$.
*Do $m$ iterations of Newton-Raphson to polish the value of $t$.
Suggested values are: $n$ should be 3 to 10 times the degree of the curve, and $m$ should be 3 or 4. You’ll need to change those values to match your speed/accuracy/reliability requirements.
All of this should work fine on a GPU.
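A minimal sketch of the method (all names and the sample curve are my own assumptions; the curve is taken as polynomial coordinate functions $x(t), y(t)$ on $[0,1]$):

```python
import numpy as np

def closest_param(cx, cy, p, n=30, m=4):
    """Polyline search + Newton polish for the parameter of the closest point.

    cx, cy: polynomial coefficients (highest power first) of x(t), y(t).
    p: the query point. n: number of polyline segments. m: Newton steps.
    """
    dcx, dcy = np.polyder(cx), np.polyder(cy)
    d2cx, d2cy = np.polyder(dcx), np.polyder(dcy)
    # Steps 1-2: coarse search over a polyline approximation.
    ts = np.linspace(0.0, 1.0, n + 1)
    d2 = (np.polyval(cx, ts) - p[0]) ** 2 + (np.polyval(cy, ts) - p[1]) ** 2
    t = ts[np.argmin(d2)]
    # Step 3: Newton-Raphson on g(t) = (C(t) - p) . C'(t) = 0.
    for _ in range(m):
        rx, ry = np.polyval(cx, t) - p[0], np.polyval(cy, t) - p[1]
        vx, vy = np.polyval(dcx, t), np.polyval(dcy, t)
        g = rx * vx + ry * vy
        dg = vx * vx + vy * vy + rx * np.polyval(d2cx, t) + ry * np.polyval(d2cy, t)
        if dg != 0:
            t = min(1.0, max(0.0, t - g / dg))
    return t

# Example: the cubic x(t) = t, y(t) = t^3, query point (0.5, 0).
t_star = closest_param([0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0], (0.5, 0.0))
```

Note that Newton only polishes the minimum found by the coarse search, so $n$ must be large enough that the polyline lands in the right basin.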
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4534853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $\lim_{n \to \infty} \log(n!)-\sqrt{n} > 0$ Show
$$\lim_{n \to \infty} \log(n!)-\sqrt{n} > 0$$
or in other words:
$$\exists N\in \mathbb{N} \text{ s.t. } \forall n>N,\ \log(n!)-\sqrt{n} > 0$$
I tried recursion:
Finding $m'$ s.t. $\sqrt{m'+1}-\sqrt{m'}\leq \log(m'+1)$ and $m\geq m'$ s.t. $\log(m!)>\sqrt{m}$
Therefore we know it is true for $m$ and want to show it is true for $m+1$. We have $log((m+1)!)=log(m+1)+log(m!)$ and $\sqrt{m+1}=(\sqrt{m+1}-\sqrt{m})+\sqrt{m}$.
By condition we set and the property of $m$, we have it true for $m+1$. However, I'm struggling with the process of finding $m'$.
Thank you very much!
| Base case: $\log (3!) = \log 6 > \sqrt 3$, since $1.792\ldots > 1.732\ldots$
$$
\begin{gather}
n \ge 1 \implies \sqrt {n+1} - \sqrt{n} < 1 \\
n > e \implies \log n > 1 > \sqrt {n+1} - \sqrt{n} \\
\log ((n+1)!) - \log (n!) = \log n \implies \log ((n+1)!) - \log (n!) > \sqrt {n+1} - \sqrt{n}
\end{gather}
$$
For $n > e$, the left-hand side $\log(n!)$ thus increases by more than the right-hand side $\sqrt n$ at each step, so by induction from the base case the inequality $\log (n!) > \sqrt n$ holds for all $n \ge 3$. Hence $\log (n!) - \sqrt n > 0$ for all $n \ge 3$, and in particular the quantity stays positive as $n \to \infty$.
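A quick numerical confirmation of the claim (my own script; `math.lgamma(n + 1)` computes $\log(n!)$ with the natural logarithm, without building huge factorials):

```python
import math

def gap(n):
    return math.lgamma(n + 1) - math.sqrt(n)  # log(n!) - sqrt(n)

assert all(gap(n) > 0 for n in range(3, 10_000))
print(gap(3), gap(100))  # strictly positive, and growing with n
```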
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4535051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is there a general definition of Maximal function? I see this term coming up quite a lot, but I have not figured if there's a general definition.
For example wikipedia https://en.wikipedia.org/wiki/Maximal_function shows several examples but does not provide a general definition (if there's any).
In my understanding a maximal function is any function involving a $ \sup $. For example in the proof of Caratheodory theorem, derivatives of measures or Riesz representation theorem they all involve supremum of something.
Is there a more general definition rather than per example based?
| A good definition of a maximal function would be a function using a supremum. Since maximal and maximize have the same root word, I think this is what people mean by "maximal function."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4535204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question about what is a separable polynomial Here is the definition of separable polynomial as I understand it.
Let $K$ be a splitting field for a polynomial $f(x)\in F[x]$. If $f(x)$ is irreducible, then $f(x)$ is separable if the roots in $K$ all have multiplicity $1$. If $f(x)$ is reducible, then $f(x)$ is separable if all the irreducible factors of $f(x)$ are separable.
My question is whether the irreducible factors can share roots. For example, if $g(x)$ is a separable irreducible polynomial, then if $g(x)^2$ separable?
| There are in fact two definitions of a separable polynomial.
In the notes Fields and Galois Theory by J.S. Milne, on page 33, we find:
Definition 2.22 A polynomial is separable if it is nonzero and satisfies the equivalent conditions of (2.21).
The conditions of (2.21) amount to the polynomial having only simple roots.
But there is also a footnote on this definition (on page 33 as well):
This is Bourbaki's definition. Often, for example, in the books of Jacobson and in earlier versions of these notes, a polynomial is said to be separable if each of its irreducible factors has only simple roots.
So the answer to your question depends on the convention: under Bourbaki's definition $g(x)^2$ is not separable (each of its roots has multiplicity $2$), while under the second definition it is separable, since its only irreducible factor $g(x)$ is separable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4535460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to solve this 2nd order ODE with Dirac delta? I need to solve the following ODE
$$ f''(x) - \zeta f(x) + \zeta\delta(x-b) = 0,$$
where $x\in(-\infty,\infty)$ and where $f(x)\rightarrow 0$ as $x \rightarrow \mp \infty$. The solution ignoring the Dirac impulse is given by
$$f(x) = c_1 e^{\sqrt{\zeta}x} + c_2 e^{-\sqrt{\zeta}x}.$$
Since I have a Dirac impulse at $x=b$, I should be solving for two ODEs, one below $x=b$ and another above $x=b$. Then I have to put together both solutions at $x=b$. This is where I am confused, how can I do this part?
A bit more on the intuition behind the problem. The ODE in question is a steady state Fokker-Planck (or Kolmogorov Forward) Equation. Mass is injected at $x=b$ and dissipates both to the left and right of $x=b$. Then, mass anywhere in $x\in(-\infty,\infty)$ is taken out at a rate $\zeta$ and reinjected back to $x=b$.
| You have two solutions as you noted for:
$$ f''(x) - \zeta f(x) + \zeta\delta(x-b) = 0,$$
given by the following (denoting $\phi^2 = \zeta$); for simplicity, I will take $b = 0$ (the general case follows by a shift).
$$ f(x) = \begin{cases} A e^{\phi x} & \text{if } x \le 0 \\
B e^{-\phi x} & \text{if } x > 0. \end{cases} $$
Continuity is required and so we must have $A = B$. Finally, we will see what is required in the neighborhood of $x = 0$ by looking at a small integral containing $0$ of the ODE and take the limit
$$ 0 = \lim_{\epsilon \to 0} \int_{-\epsilon}^\epsilon f''(x) - \phi^2 f(x) + \phi^2\delta(x) \ dx = f'(0^+) - f'(0^-) + \phi^2 $$
Hence, $0 = -\phi B - \phi A + \phi^2$. The two conditions imply that $A = B = \phi/2$. Hence the solution:
$$ f(x) = \frac{\phi}{2} e^{-\phi |x|}. $$
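The solution can be double-checked with finite differences (a sketch of my own; it verifies $f'' = \phi^2 f$ away from the impulse and the derivative jump $f'(0^+) - f'(0^-) = -\phi^2$):

```python
import math

phi = 1.7
f = lambda x: 0.5 * phi * math.exp(-phi * abs(x))
h = 1e-5

# ODE away from the impulse: central second difference at x = 0.8.
x0 = 0.8
fpp = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2
assert abs(fpp - phi**2 * f(x0)) < 1e-4

# Jump of f' across x = 0, via one-sided difference quotients.
jump = (f(2 * h) - f(h)) / h - (f(-h) - f(-2 * h)) / h
assert abs(jump + phi**2) < 1e-3
print("ODE and jump condition verified")
```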
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4535605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Quantum plane is a bialgebra I am reading 'Hopf algebras and their actions on rings' by Susan Montgomery. She gives the quantum plane as Example 1.3.9: $B = k \langle x,y \mid xy = qyx \rangle$, $0 \neq q \in k$, with coalgebra structure
$$
\Delta(x) = x \otimes x \,,
\quad
\Delta(y) = y \otimes 1 + x \otimes y \,,
\quad
\epsilon(x) = 1 \,,
\quad
\epsilon(y) = 0 \,.
$$
But when I checked the bialgebra conditions, I met a question: What does $\Delta(xy)$ (or $\Delta(xx)$, $\Delta(yy)$) look like? I mean, the precise expression? Can the definition of $\Delta$ on $x$, $y$ induce $\Delta(xy)$?
| What Montgomery writes in their book needs to be understood as follows:
*
*There exists a unique homomorphism of $k$-algebras $Δ$ from $B$ to $B ⊗ B$ such that $Δ(x) = x ⊗ x$ and $Δ(y) = y ⊗ 1 + x ⊗ y$.
*There exists a unique homomorphism of $k$-algebras $ε$ from $B$ to $k$ such that $ε(x) = 1$ and $ε(y) = 0$.
*The $k$-algebra $B$, together with the two homomorphisms $Δ$ and $ε$, becomes a $k$-bialgebra (with $Δ$ serving as comultiplication and $ε$ as counit).
Therefore, $Δ$ and $ε$ are homomorphisms of $k$-algebras by their construction.
The same principle has already been used in previous examples:
*
*In Example 1.3.2, the action of $Δ$ and $ε$ is only specified on basis elements, i.e., on elements of the group $G$.
To apply both maps to arbitrary elements of the group algebra $kG$, the given formulas first need to be extended linearly.
(And it then needs to be checked that these linear maps are indeed homomorphisms of $k$-algebras.)
*In Example 1.3.3, the action of $Δ$ and $ε$ are only specified on elements of the Lie algebra $\mathfrak{g}$.
To apply both maps to arbitrary elements of the universal enveloping algebra $U(\mathfrak{g})$, the given formulas first need to be extended as homomorphisms of $k$-algebras.
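To make this concrete for the question about $\Delta(xy)$: since $\Delta$ is an algebra homomorphism, its values on products are forced (the following short computation is my own, not quoted from the book):

```latex
\begin{align*}
\Delta(xy) &= \Delta(x)\,\Delta(y)
  = (x \otimes x)\bigl(y \otimes 1 + x \otimes y\bigr)
  = xy \otimes x + x^2 \otimes xy, \\
\Delta(x^2) &= (x \otimes x)(x \otimes x) = x^2 \otimes x^2, \\
\Delta(y^2) &= \bigl(y \otimes 1 + x \otimes y\bigr)^2
  = y^2 \otimes 1 + (1+q)\,yx \otimes y + x^2 \otimes y^2,
\end{align*}
```

where the last line uses $xy = qyx$ in the first tensor factor. One also checks consistency with the defining relation: $q\,\Delta(y)\Delta(x) = q\,(yx \otimes x + x^2 \otimes yx) = xy \otimes x + x^2 \otimes xy = \Delta(x)\Delta(y)$, so $\Delta$ is well defined on $B$.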
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4535735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |