Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Proving sum of negative powers of 2 equals one before becoming greater than one Let $x_1 \geq x_2 \geq x_3\geq...\geq x_n$ be negative powers of 2 with sum greater than one. Then there exists $l$ such that $x_1 + x_2 + ... + x_l = 1$. It seems obvious from examples, but I'm finding it a bit difficult to prove. Any help would be highly appreciated.
| Your comment is close.
Let $S_i = x_1+x_2+\cdots + x_i$.
For the first case $l=1$, the first sum is $S_1 = x_1<1$ (a negative power of $2$ is at most $\tfrac12$). For the last case $l=n$, $S_n > 1$ (as given). Since the sums $S_l$ are increasing, there must be a largest $l$ that satisfies $S_l < 1$.
Assume that $k$ is the greatest integer that satisfies $$S_k = \sum_{i=1}^kx_i < 1$$
Then the goal is to prove that the next sum
$$S_{k+1}=\sum_{i=1}^{k+1}x_i = 1.$$
As you pointed out, $S_k = \sum_{i=1}^k x_i$ is an integer multiple of $x_{k+1}$, so let $S_k = Ax_{k+1}$. Then from the hypothesis,
$$\begin{align*}
Ax_{k+1} &< 1\\
A &< x_{k+1}^{-1}
\end{align*}$$
Since $x_{k+1}$ is a negative power of $2$, its reciprocal $x_{k+1}^{-1}$ is a positive integer power of $2$. Both sides are therefore integers, so
$$\begin{align*}
A + 1 &\le x_{k+1}^{-1}\\
Ax_{k+1} + x_{k+1} &\le 1\\
S_{k+1} &\le 1
\end{align*}$$
Also by the definition of $k$, $S_{k+1} \not< 1$.
Then the only possibility is that $$S_{k+1} = \sum_{i=1}^{k+1}x_i = 1$$
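For what it's worth, the claim can also be machine-checked with exact rational arithmetic. The sketch below is mine, not part of the original post; the helper name `first_prefix_equal_one` is invented for illustration:

```python
from fractions import Fraction
import random

def first_prefix_equal_one(xs):
    """Return the first l with x_1 + ... + x_l == 1, or None if the
    running sum jumps past 1 without ever hitting it."""
    s = Fraction(0)
    for l, x in enumerate(xs, start=1):
        s += x
        if s == 1:
            return l
        if s > 1:
            return None
    return None

# the claim: for non-increasing negative powers of 2 with total > 1,
# some prefix sums to exactly 1 (the running sum can never skip over 1)
random.seed(0)
for _ in range(500):
    xs = sorted((Fraction(1, 2 ** random.randint(1, 6)) for _ in range(40)),
                reverse=True)
    if sum(xs) > 1:
        assert first_prefix_equal_one(xs) is not None
```

The assertion never fires, which is exactly what the divisibility argument above guarantees: since $S_k$ is an integer multiple of $x_{k+1}$, the running sum cannot jump over $1$.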
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3977891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
When does function iteration converge to a fixed point? After watching a YouTube video by blackpenredpen (only 54 seconds long) showing how to write 5 as an infinite nested square root, I got to thinking.
This is what he showed in the video:
$
\begin{align}
x &= 5
\\x-5& = 0
\\(x-5)(x+4)&=0
\\x^2-x-20&=0
\\x^2&=20+x
\\x&=\sqrt{20+x}
\\x&=\sqrt{20+\sqrt{20+x}}
\\x&=\sqrt{20+\sqrt{20+\sqrt{20+\sqrt{20+\sqrt{20+x}}}}}
\end{align}
$
$x$ converges to 5 with the above infinite square root. Nothing weird about that. I tried multiplying by other factors instead of $(x+4)$ and got other fun infinite square roots: $x=\sqrt{10+3\sqrt{10+3x\cdots}}$ and $x = \sqrt{30-\sqrt{30-\sqrt{30-x\cdots}}}$. Both converge to 5 too; all good.
However I wanted to see if it would work without using square roots everywhere. So this is what I tried:
$
\begin{align}
x^2&=20+x
\\x&=x^2-20
\\x&=(x^2-20)^2-20
\end{align}
$
Both $x=x^2-20$ and $x=(x^2-20)^2-20$ diverge (tested by iterating with code), and I have no clue why, but I want to know why. My guess would be that it has something to do with the derivative. The derivative at the interesting point, $x=5$, is $\frac{d}{dx}(x^2-20)=2x=10$, and that one does not converge. The derivative of one that does converge is $\frac{d}{dx}\sqrt{20+x}=\frac{1}{2\sqrt{20+x}}=0.1$, which is two orders of magnitude smaller than the diverging one. Am I on the right track? Is there some kind of theorem that states that the derivative has to be less than 1?
I know about Newton's method, a proven technique for finding the zeros of functions. I am curious about why using equations of the form $x=\sqrt{x}$ converges and why using $x=x^2$ diverges.
| You are exactly right. There is a simple rule coming from chaos theory that tells you whether certain fixed points are sinks or sources for a function being iterated. Basically, suppose $f$ is some continuously differentiable function such that $f(x_0)=x_0$. If $|f'(x_0)|>1$ then $x_0$ is a source. If $|f'(x_0)|<1$ then $x_0$ is a sink. If $|f'(x_0)|=1$ then it can go either way.
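A quick numerical sketch (mine, not the answer's) illustrating the sink/source rule on the two maps from the question; the `iterate` helper and its blow-up cutoff of $10^{12}$ are arbitrary illustrative choices:

```python
import math

def iterate(f, x0, steps=60):
    """Iterate x <- f(x); return the final value, or None if it blows up."""
    x = x0
    for _ in range(steps):
        x = f(x)
        if abs(x) > 1e12:
            return None  # treat as divergence
    return x

# |d/dx sqrt(20+x)| at x=5 is 0.1 < 1: the fixed point 5 is a sink
assert abs(iterate(lambda x: math.sqrt(20 + x), 4.0) - 5.0) < 1e-9
# |d/dx (x^2-20)| at x=5 is 10 > 1: the fixed point 5 is a source
assert iterate(lambda x: x * x - 20, 4.9) is None
```

Starting even very close to $5$, the second map escapes, while the first one is pulled in from far away.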
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3978230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What are the positive/negative inertial exponents of the quadratic form $q(A)=tr(A^2)$ for $A\in \Bbb R^{n\times n}$? What are the positive/negative inertial exponents of the quadratic form $q(A)=tr(A^2)$ for $A\in \Bbb R^{n\times n}$?
Choosing the canonical base $E_{ij}$, then ...
| Let $\mathcal H$ and $\mathcal K$ be respectively the subspace of all symmetric matrices and the subspace of all skew-symmetric matrices. Then $q$ is positive on $\mathcal H\setminus0$ and negative on $\mathcal K\setminus0$ (for symmetric $A$, $tr(A^2)=tr(A^\top A)>0$; for skew-symmetric $A$, $tr(A^2)=-tr(A^\top A)<0$). Also, $\mathbb R^{n\times n}=\mathcal H\oplus\mathcal K$. Therefore $n_+(q)=\dim\mathcal H=\frac{n(n+1)}{2}$ and $n_-(q)=\dim\mathcal K=\frac{n(n-1)}{2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3978371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $X$ is separable then so is $B_X.$
Let $X$ be a separable normed linear space. Then show that $B_X = \{x \in X\ |\ \|x\| = 1\}$ is also separable.
Since $X$ is separable it has a countable dense subset. Let $D = \{x_n\ |\ n \in \Bbb N \}$ be a countable dense subset of $X.$ WLOG we may assume that $0 \not\in D$ for otherwise we could take $D \setminus \{0\}$ which continues to be dense in $X.$ Then I guess that $D' = \left \{\frac {x_n} {\|x_n\|}\ \bigg |\ n \in \Bbb N \right \}$ is a dense subset of $B_X.$ Let $y \in X$ with $\|y\| = 1.$ Let us choose $\varepsilon \gt 0$ arbitrarily. Then I need to show that $B(y,\varepsilon)$ intersects $D'.$ If $y = x_n$ then we are through. If not then there exists a sequence $\{x_{k_n}\}_{n \geq 1}$ in $D$ converging to $y.$ Since norm is a continuous function it follows that $\|x_{k_n}\| \to \|y\| = 1$ as $n \to \infty.$ But this shows that $\frac {x_{k_n}} {\|x_{k_n}\|} \to y$ as $n \to \infty,$ proving that $D'$ is dense in $B_X.$
Does my proof make sense? Can anybody please check my argument above?
Thanks in advance.
| A shorter proof could be: if $X$ is metric and separable then it has a countable basis. Any subspace of a space with a countable basis itself has a countable basis (take the trace of the basis on the subspace), and finally recall that a space with a countable basis is separable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3978484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of ways to choose ordered quadruples $(A,B,C,D)$ of subsets of $\{1,2,...,n\}$ ($n\geq 2$)
Number of ways to choose ordered quadruples $(A,B,C,D)$ of subsets of
$\{1,2,...,n\}$ ($n\geq 2$) such that $A\subseteq B\cup C\cup D$.
I think in following manner:
For every $x\in \{1,2,...,n\}$:
if $x \in A$ then $x$ must be in at least one of $B$, $C$, $D$, and if $x \notin A$ it can be in any of $B$, $C$, $D$; this gives $7^n$. Can anyone verify my solution?
| Let's count the $(A,B,C,D)$ for a given choice of $S=B\cup C\cup D$ (where $k=|S|$).
Draw a Venn diagram for $B,C,D$. We can distribute the elements of $S$ among the $7$ regions, and that corresponds to a choice of $(B,C,D)$. Labelling the regions $1$-$7$, this is the same as a function from $S$ to the set $\{1,\cdots,7\}$, of which there are $7^k$. Also there are $2^k$ choices for $A\subseteq S$.
There are $\binom{n}{k}$ choices for $S\subseteq\{1,\cdots,n\}$, so summing over $k$ we get
$$ \sum_{k=0}^n \binom{n}{k}\cdot7^k\cdot2^k. $$
Do you see why this is $15^n$?
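If it helps, the count can be brute-forced for small $n$ by encoding subsets of $\{1,\dots,n\}$ as bitmasks; this is my own sanity check, not part of the answer:

```python
from itertools import product

def count_quadruples(n):
    """Brute force: count quadruples (A,B,C,D) of subsets of {1..n},
    encoded as n-bit masks, with A a subset of B union C union D."""
    total = 0
    for a, b, c, d in product(range(1 << n), repeat=4):
        if a & ~(b | c | d) == 0:  # A has no element outside B|C|D
            total += 1
    return total

for n in range(1, 4):
    assert count_quadruples(n) == 15 ** n
```

Per element there are $2^4=16$ membership patterns, and exactly one (in $A$ but in none of $B,C,D$) is forbidden, giving $15^n$, in agreement with the binomial-sum answer above.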
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3978616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $a_n=100a_{n-1}+134$, find the least value of $n$ for which $a_n$ is divisible by $99$
Let $a_{1}=24$ and form the sequence $a_{n}, n \geq 2$ by $a_{n}=100 a_{n-1}+134 .$ The first few terms are
$$
24,2534,253534,25353534, \ldots
$$
What is the least value of $n$ for which $a_n$ is divisible by $99$?
So, in this post, the answer there gets to the result that $a_n \equiv a_1 + (n - 1)35 \equiv 35n - 11 \equiv 0 \pmod{99} \tag{1}\label{eq1A}$
After that, what I did was to multiply both sides of the congruence by $9$:
$35n - 11 \equiv 0 \pmod{99}$
$315n \equiv 0 \pmod{99}$
$18n \equiv 0 \pmod{99}$
Dividing by $18$, we get
$n \equiv 0 \pmod{11}$, but I did not get the other part of the congruence which was arrived at in that answer by this process.
Thank you.
| You need to solve $35n \equiv 11 \pmod{99}$. This can be done via a variation of the (Extended) Euclidean Algorithm.
$$
\begin{array}{ll}
99n \equiv 0 \pmod{99} & (1)\\
35n \equiv 11 \pmod{99} & (2)\\
-6n \equiv -33 \pmod{99} & (3) = (1)-3\times(2)\\
-n \equiv -187 \equiv -88 \pmod{99} & (4) = (2) + 6\times(3)
\end{array}
$$
Therefore $n\equiv 88 \pmod{99}$.
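A direct computation (my own check, not part of the answer) confirms that $n=88$ is the least index, by reducing the recurrence modulo $99$ at every step:

```python
def a_mod(n, m=99):
    """a_1 = 24, a_k = 100*a_{k-1} + 134, reduced mod m at every step."""
    a = 24 % m
    for _ in range(n - 1):
        a = (100 * a + 134) % m
    return a

# the least n with a_n divisible by 99 should be 88
first = next(n for n in range(1, 200) if a_mod(n) == 0)
assert first == 88
```

Reducing at each step keeps the numbers small even though $a_n$ itself grows by two digits per term.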
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3978727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Probability problem on Unif(0,1) random variables Given three iid Unif(0,1) random variables a, b, and c, what is the probability that c is the maximum if a+b < 1?
Attempt:
The probability density of $s=a+b$ is $s$ for $s < 1$ and $2-s$ for $1 < s < 2$. Therefore, the events $a+b < 1$ and $a+b > 1$ both have probability $1/2$. Also, $P(c>a \cap c>b) = 1/3$. This means that
$$1/3 = P(c>a \cap c>b | a+b > 1)P(a+b > 1) + P(c>a \cap c>b | a+b < 1)P(a+b < 1) $$
$$2/3 = P(c>a \cap c>b | a+b > 1) + P(c>a \cap c>b | a+b < 1) $$
I am not sure how to continue. Can someone explain how to proceed with the solution?
| Representing the possible outcomes of sampling $a,b,c$ as three coordinates in a unit cube, the volume where $c>a,c>b,a+b<1$ is a triangular dipyramid with vertices $(0,0,0),(0,1,1),(1,0,1),(0,0,1),(1/2,1/2,1/2)$, which has base area $\frac{\sqrt3}2$ and combined height $\frac{\sqrt3}2$. Thus $P(c>a,c>b,a+b<1)=\frac13\cdot\frac{\sqrt3}2\cdot\frac{\sqrt3}2=\frac14$. Meanwhile, $P(a+b<1)=\frac12$ (that region is a right triangular prism). Hence the final answer is $\frac{1/4}{1/2}=\frac12$.
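A quick Monte Carlo sketch (mine, not the answer's) agrees with the conditional probability $\tfrac12$; the sample size and seed are arbitrary choices:

```python
import random

def p_c_max_given_sum_small(trials=200_000, seed=1):
    """Monte Carlo estimate of P(c > a and c > b | a + b < 1), a,b,c iid Unif(0,1)."""
    random.seed(seed)
    hits = cond = 0
    for _ in range(trials):
        a, b, c = random.random(), random.random(), random.random()
        if a + b < 1:
            cond += 1
            if c > a and c > b:
                hits += 1
    return hits / cond

# should be close to 1/2
assert abs(p_c_max_given_sum_small() - 0.5) < 0.02
```

With about $10^5$ conditioning samples the standard error is roughly $0.002$, so the estimate pins the answer down well away from the unconditional $1/3$.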
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3978890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
There exists a differentiable function for which $f'\left(x\right)=\begin{cases} \frac{\cos x-1}{x^{2}} & x\ne0\\ 1 & x=0 \end{cases}$ Prove or disprove: There exists a differentiable function for which $$f'\left(x\right)=\begin{cases} \frac{\cos x-1}{x^{2}} & x\ne0\\ 1 & x=0 \end{cases}$$
My way:
False. Assume towards a contradiction that there exists such a function.
Then we have:
$$f'\left(0\right)=1$$ $$f'\left(2\pi\right)=0$$
By Darboux's theorem, there exists $$0<c<2\pi$$ such that:
$$f'\left(c\right)=\frac{1}{2}$$
Thus:
$$f'\left(c\right)=\frac{1}{2}=\frac{\cos c-1}{c^{2}}\Rightarrow$$ $$\cos c=1+\frac{c^{2}}{2}>1$$
Contradiction!
My question is: would it have been easier to disprove this by integrating and trying to find the original function?
Even if not, could I see the shortest method for this case, please?
Thank you.
| Your proof looks perfectly fine, great job!
The function given for $f'(x)$ has a removable discontinuity at $x=0$. There's a theorem in analysis stating that the derivative of a differentiable function can't have removable or jump discontinuities. But if you look at a typical proof of this theorem, it's pretty much the same argument as your solution here — such a proof would either refer to Darboux's Theorem or would reproduce the Intermediate Value Theorem argument that lies behind Darboux's Theorem.
So short of stating that "there's a theorem for this", your solution is the best approach, imho.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3979029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
inner semidirect and outer semidirect relationship https://kconrad.math.uconn.edu/blurbs/grouptheory/group12.pdf.
In the text above, the author explains how to find all groups of order 12.
He does so by showing that a group $G$ (of order $12$) is isomorphic to the semi-direct product of the $2$-sylow ($P_2$) and $3$-sylow ($P_3$) subgroups.
Then, he splits into cases and goes through all the possible combinations of semidirect products of $P_2$ ($\Bbb Z_4$ or $\Bbb Z_2 \times \Bbb Z_2$) with $P_3$ ($\Bbb Z_3$).
Now my problem arises: he also looks for all possible actions $P_3 \to {\rm Aut}(P_2)$ (or $P_2 \to {\rm Aut}(P_3)$).
But why is this necessary?
He proved that $P_2 \cap P_3$ is trivial, that $|P_2P_3| = |G|$, and that $P_2$ or $P_3$ is normal. This means that $G$ is isomorphic to the inner semidirect product, so why could there be several nonisomorphic semidirect products?
An example of what I'm trying to ask:
Let's say we're in the case that $P_2$ is normal and isomorphic to $\Bbb Z_2 \times \Bbb Z_2$, he found that $G$ is isomorphic to $\Bbb Z_2 \times \Bbb Z_2 \rtimes \Bbb Z_3$ or $G$ isomorphic to $\Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_3$.
But we already know that $G$ is isomorphic to the inner semidirect product, so how is it possible that there are two options rather than one?
| For any action $\varphi:G\to {\rm Aut}(H)$ we obtain a semidirect product by defining multiplication on elements of $G\times H$ by
$$(g,h)(g',h'):=(gg',\,\varphi_{g'}(h)h')$$
where conjugation $g^{-1}hg$ becomes $\varphi_g(h)$ because
$$(g^{-1},1)(1,h)(g,1)=(1,\varphi_g(h))\,.$$
For distinct actions, we can indeed obtain nonisomorphic semidirect products of the same two groups. For instance, if $\varphi$ is the trivial action (the constant map sending every $g$ to ${\rm id}_H$), then we obtain the ordinary direct product as a special case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3979280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Non-negative random variables (a question about the definition) As I understand it, a non-negative random variable is one which takes non-negative values. But I've seen in many books that a non-negative random variable is defined as a random variable $ X $ such that
$ \mathbb{P}\left(X<0\right)=0 $.
How are those two definitions equivalent? $ \mathbb{P}\left(X<0\right)=0 $ does not imply that $ X(\omega) \geq 0 $ for every $ \omega \in \varOmega $.
Which of the two definitions that I mentioned is the accepted one?
| You have probably heard about Murphy's law. Aside from all the rhetoric and myths around it, Murphy's law actually is quite important. An event can be possible or impossible. Probability is only defined over possible events. A possible event can be improbable, i.e., its probability can be zero, but that doesn't mean it cannot occur.
But as you mentioned, it is customary to assign zero probability to impossible events. Even though this is ultimately a bad practice, it usually works. The good practice, however, is to always make a clear distinction between impossible and improbable events.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3979807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two definitions of the injectivity radius For a Riemannian manifold $M$, usually, the injectivity radius of $p \in M$ is defined as the supremum of positive numbers $r$ such that $\exp_p|_{B(0, r)}$ is a diffeomorphism onto its image. Here, $B(0,r)$ is the geodesic ball of radius $r$. However, in Klingenberg's book, the injectivity radius is defined as the supremum of positive numbers $r$ such that $\exp_p|_{B(0, r)}$ is injective. Are these two definitions equivalent?
Considering the global rank theorem, I tried to prove that the exponential map at $p$ is of constant rank, but it seems that this attempt does not work. In fact, Klingenberg frequently says that the exponential map is just injective instead of saying the map is a diffeomorphism. To guarantee that the exponential map at $p$ is a diffeomorphism on a geodesic ball, is it enough to show that it is injective?
Thanks!
| Edit: According to this answer, Klingenberg's Riemannian Geometry shows that $\exp: TM \to M$ is non-injective in a neighborhood of any critical point in Chapter 2.1, "Completeness and Cut Locus:"
The following result is from Klingenberg's Riemannian geometry book (Theorem 2.1.12):
Theorem. On a complete Riemannian manifold $(M,g)$, if $\mathrm{ker}(d\exp_p\vert_v)\neq 0$ for some $(p,v)\in TM$, then $\exp_p$ fails to be injective in every neighbourhood of $v$.
(Thus if injectivity fails infinitesimally, it also fails in every neighbourhood. This is of course false for a general smooth map.)
That being proven, Klingenberg is then free to give a weaker-looking, yet equivalent, alternative definition of injectivity radius, and apply it for the remainder of the text.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3979936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Given a scalene triangle ABC and D, E, F the midpoints of BC, CA, AB respectively, prove that certain lines are concurrent Given a scalene triangle $\triangle ABC$ and $D,E,F$ the midpoints of $BC, CA, AB$, and $A_1, B_1, C_1$ points such that $\angle AA_1C=90^{\circ}$, $\angle BB_1C=90^{\circ}$ and $\angle CC_1A=90^{\circ}$. $A_2$ is the intersection of $BC$ and $B_1C_1$. We define $B_2$ and $C_2$ in the same manner. Prove that the lines through $D,E,F$ which are perpendicular to $AA_2, BB_2, CC_2$ respectively are concurrent.
We can easily prove that they all pass through the orthocenter $H$ of the triangle with the use of Brocard's theorem. My question is: is there any other way to prove this?
| Brocard's theorem is the natural way, but here's a more elementary approach.
Let $(AB_1C_1)$ meet $(ABC)$ again at $G$. The radical axis theorem on $(AB_1C_1)$, $(ABC)$ and $(BCB_1C_1)$ implies that $G$ lies on $AA_2$.
It is known that the reflection $H'$ of $H$ over $D$ is diametrically opposite $A$ on $(ABC)$ (prove it!). So $\angle AGH=\angle AGH'=90^\circ$, which implies $G$, $H$, $D$, $H'$ are collinear.
Similarly, the other two lines also pass through $H$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Problem with the proof of $\mathbf{u} \cdot \mathbf{v}=\|\mathbf{u}\|\|\mathbf{v}\| \cos \theta$ in Simon, Blume 1994 In Simon, Blume 1994, p. 215, we find the following proof of the identity
$$
\mathbf{u} \cdot \mathbf{v}=\|\mathbf{u}\|\|\mathbf{v}\| \cos \theta
$$
Without loss of generality, we can work with $\mathbf{u}$ and $\mathbf{v}$ as vectors with tails at the origin $\mathbf{0}$; say $\mathbf{u}=\overrightarrow{O P}$ and $\mathbf{v}=\overrightarrow{O Q}$. Let $\ell$ be the line through the vector $\mathbf{v}$, that is, the line through the points $O$ and $Q$. Draw the perpendicular line segment $m$ from the point $P$ (the head of $\mathbf{u}$) to the line $\ell$, as in Figure 10.20. Let $R$ be the point where $m$ meets $\ell$. Since $R$ lies on $\ell$, $\overrightarrow{O R}$ is a scalar multiple of $\mathbf{v}=\overrightarrow{O Q}$. Write $\overrightarrow{O R}=t \mathbf{v}.$ Since $\mathbf{u}, t \mathbf{v},$ and the segment $m$ are the three sides of the right triangle $O P R,$ we can write $m$ as the vector $\mathbf{u}-t \mathbf{v}$. Since $\mathbf{u}$ is the hypotenuse of this right triangle,
$$
\cos \theta=\frac{\|t \mathbf{v}\|}{\|\mathbf{u}\|}=\frac{t\|\mathbf{v}\|}{\|\mathbf{u}\|}
$$
On the other hand, by the Pythagorean Theorem and Theorem $10.2,$ the square of the length of the hypotenuse is:
$\|\mathbf{u}\|^{2}=\|t \mathbf{v}\|^{2}+\|\mathbf{u}-t \mathbf{v}\|^{2}$
$$
\begin{array}{l}
=t^{2}\|\mathbf{v}\|^{2}+(\mathbf{u}-t \mathbf{v}) \cdot(\mathbf{u}-t \mathbf{v}) \\
=t^{2}\|\mathbf{v}\|^{2}+\mathbf{u} \cdot \mathbf{u}-2 \mathbf{u} \cdot(t \mathbf{v})+(t \mathbf{v}) \cdot(t \mathbf{v}) \\
=t^{2}\|\mathbf{v}\|^{2}+\|\mathbf{u}\|^{2}-2 t(\mathbf{u} \cdot \mathbf{v})+t^{2}\|\mathbf{v}\|^{2}
\end{array}
$$
or
$$
2 t(\mathbf{u} \cdot \mathbf{v})=2 t^{2}\|\mathbf{v}\|^{2}
$$
It follows that
$$
t=\frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|^{2}}
$$
Plugging equation (5) into equation (4) yields
$$
\cos \theta=\frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\|\|\mathbf{v}\|}
$$
This identity is then used to prove the following theorem
Theorem 10.4 The angle between vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbf{R}^{\mathbf{n}}$ is
(a) acute, if $\mathbf{u} \cdot \mathbf{v}>0$
(b) obtuse, if $\mathbf{u} \cdot \mathbf{v}<0$,
(c) right, if $\mathbf{u} \cdot \mathbf{v}=0$.
However, I don't understand this proof fully since it seems that
$$
\cos \theta=\frac{\|t \mathbf{v}\|}{\|\mathbf{u}\|}=\frac{|t|\|\mathbf{v}\|}{\|\mathbf{u}\|}\neq\frac{t\|\mathbf{v}\|}{\|\mathbf{u}\|}
$$
Especially, this would imply that
$$
\cos \theta=\frac{|\mathbf{u} \cdot \mathbf{v}|}{\|\mathbf{u}\|\|\mathbf{v}\|}
$$
which would contradict Theorem 10.4. Am I missing something, or is there an error in this proof?
Thank you in advance for your help!
| The quoted proof was careless at one point. The generally correct expression is $t\Vert v\Vert/\Vert u\Vert$ rather than $|t|\Vert v\Vert/\Vert u\Vert$: the intermediate step $\cos \theta=\|t \mathbf{v}\|/\|\mathbf{u}\|$ is valid only when $\theta$ is acute, i.e., when $t>0$. The sign of $t$ boils down to whether the multiple of $v$ onto which $u$ is projected is parallel or antiparallel to $v$.
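To see the sign issue concretely, here is a small sketch (mine, not from the book or the answer) computing the signed cosine for an obtuse pair of vectors, where the unsigned $|t|\,\|\mathbf v\|/\|\mathbf u\|$ would give the wrong, positive, value:

```python
import math

def cos_angle(u, v):
    """Signed cosine of the angle between u and v via the dot product."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# obtuse pair: the signed formula gives a negative cosine,
# which |t|*||v||/||u|| (always nonnegative) could never produce
u, v = (1.0, 0.0), (-1.0, 1.0)  # angle between them is 135 degrees
assert cos_angle(u, v) < 0
assert abs(cos_angle(u, v) - math.cos(3 * math.pi / 4)) < 1e-12
```

This matches Theorem 10.4: the sign of $\mathbf u\cdot\mathbf v$ is exactly the sign of $\cos\theta$.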
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Infinite minimal left ideals Let $R$ be an infinite field and let $S$ be the ring
$$S = \begin{pmatrix}
R & R & R\\
0 & R & 0\\
0 & 0 & R
\end{pmatrix}$$
Show that there are infinitely many minimal left ideals of $S$.
I tried to consider the possible minimal left ideals of $S$. Given a nonzero matrix $A$ in one of these ideals, say $I$, I think I could multiply on the left once, or maybe twice, by matrices of $S$ to get a matrix $B$ whose nonzero entries are in the same positions as $A$'s, but are all $1$'s. Since $B$ is in the ideal $I$, we can then get any matrix whose nonzero entries are in the same positions as $A$'s but are arbitrary elements of $R$. Hence, since there are finitely many possible choices for the positions of the nonzero entries of a $3 \times 3$ matrix, it seems to follow that $S$ has finitely many minimal left ideals... contradicting what I'm supposed to prove.
Any help would be appreciated.
| It appears that you are misinterpreting the use of the symbol $R$ in the definition of $S$.
Each instance of the symbol $R$ in that definition is intended as an independent, arbitrary choice of an element of $R$.
With that understanding, we can proceed as follows . . .
For each $r\in R$, let $I_r$ be the left ideal of $S$ generated by the matrix
$$
T_r
=
\pmatrix{
0&1&r\\
0&0&0\\
0&0&0\\
}
$$
Then for any $A\in S$ we have
$$
AT_r=
\pmatrix{
0&a&ar\\
0&0&0\\
0&0&0\\
}
$$
where $a=A_{1,1}$.
It follows that each of the left ideals $I_r$ is minimal, and that no two of the left ideals $I_r$ are equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Show that $\int_0^\pi \log( 1 - 2r\cos(t) + r^2)\, dt=0$ Show that for $r \in (-1,1)$
$$ \int_0^\pi \log( 1 - 2r\cos(t) + r^2)\, dt = 0$$
Here's what I did so far:
$$f(r,t) = \log(1 - 2r\cos(t) + r^2) = \log( (1-re^{it})(1-re^{-it}))$$
The Leibniz rule states that $$\dfrac{d}{dr} \int_0^\pi f(r,t)\ dt = \int_0^\pi \dfrac{\partial}{\partial r} f(r,t) \ dt$$
After calculating the right-hand side I found $2\pi r$, which would mean that $\displaystyle\int_0^\pi f(r,t)\ dt = \pi r^2$, when it should be $0$.
Thanks in advance.
| By substitution $(z=e^{it})$ and computing residues, we have the well known integrals:
$$J_1 = \int_0^{2\pi} \frac{dt}{1-2r\cos t+ r^2} = \frac{2\pi}{1-r^2}, \quad (|r|<1),$$
$$J_2 = \int_0^{2\pi} \frac{\cos t\, dt }{1-2r\cos t+ r^2} = \frac{2\pi r}{1-r^2}, \quad (|r|<1).$$
Let $$J_0(r) = \int_0^{2\pi} \log \left( {1-2r\cos t+ r^2} \right) dt$$
$$\frac{dJ_0}{dr} = \int_0^{2\pi} \frac{\partial \log \left( {1-2r\cos t+ r^2} \right)}{\partial r}dt=2 \int_0^{2\pi} \frac{r-\cos t}{ {1-2r\cos t+ r^2} }dt=rJ_1-J_2=0.$$
$$J_0(r) = \text{const}.$$
Since $J_0(0)=0,$ we have $$J_0(r) = 0,$$ but this is twice the integral we want, which is thus zero.
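The result is easy to check numerically; the sketch below (mine, not part of the answer) approximates the integral with a simple midpoint rule:

```python
import math

def J0(r, n=100_000):
    """Midpoint-rule approximation of the integral of
    log(1 - 2 r cos t + r^2) over t in [0, pi]."""
    h = math.pi / n
    return h * sum(math.log(1 - 2 * r * math.cos((k + 0.5) * h) + r * r)
                   for k in range(n))

# should vanish for every |r| < 1
for r in (0.0, 0.3, -0.5, 0.9):
    assert abs(J0(r)) < 1e-4
```

For $|r|>1$ the same code gives $2\pi\log|r|$ instead, consistent with factoring $r^2$ out of the logarithm.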
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 8,
"answer_id": 6
} |
How do even and odd functions relate to even and odd numbers? How do the notions of oddness and evenness apply to both functions and numbers?
If even and odd functions have nothing in common with even and odd numbers, then why were "even" and "odd" adopted for functions? Why not use other adjectives?
James Stewart, Calculus 7th ed. 2011. This isn't the Early Transcendentals version.
| For an even function, $f(-x)=f(x).$ Now you know that $(-x)^n=x^n$ if $n$ is even.
For an odd function, $g(-x)=-g(x)$. You also know that $(-x)^n=-x^n$ if $n$ is odd. So the terminology for functions is borrowed from the power functions $x^n$: those with even exponents are even functions, and those with odd exponents are odd functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Prove this functional inequality Given $f: \mathbb R \to [0,\infty)$, twice differentiable, with $f(0) = 0$, $f'(0) = 1$, and $1 + f = 1/f''$,
Prove that $f(1) < 3/2$
I found some (I believe useless) facts about $f$, but nothing that gives me a lead to the answer.
*$f''(x) > 0$
*$f'(x) \ge 1$
*$e^{((f')^2-1)/2} = f+1$ is another way to write the differential equation
| As you've noticed, this can be written as
$$\frac12(f'^2-1)=\ln(1+f).$$ If we want a parametric equation, we might choose $f'=t$ as a parameter, and then we have (denoting the derivative with respect to $t$ by a dot)
$$f(t)=e^{(t^2-1)/2}-1$$ and $$\dot x=\dot f/t=e^{(t^2-1)/2},$$ and since $x=0$ corresponds to $t=f'(0)=1$, $$x(t)=\int^t_1 e^{(u^2-1)/2}\,du.$$ The latter may not be expressible by "elementary" functions, but there are a lot of rather common special functions one can use for that (Dawson's integral, error function or incomplete gamma function at some imaginary argument). So we can numerically solve for the value of $t$ that corresponds to $x=1$, finding $t\approx1.6517245235869685402116665754535807279$, and there, $f\approx1.3728623070052740075133959835820792705$.
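The numerical values quoted above can be reproduced with a short sketch (my own, not part of the answer): integrate $x(t)$ by the midpoint rule and bisect for the parameter value $t_1$ at which $x(t_1)=1$.

```python
import math

def x_of_t(t, n=4000):
    """x(t) = integral from 1 to t of exp((u^2-1)/2) du, midpoint rule."""
    h = (t - 1.0) / n
    return h * sum(math.exp(((1.0 + (k + 0.5) * h) ** 2 - 1.0) / 2.0)
                   for k in range(n))

def f_of_t(t):
    """f along the parametrisation, from exp((t^2-1)/2) = 1 + f."""
    return math.exp((t * t - 1.0) / 2.0) - 1.0

# bisect for the t with x(t) = 1
lo, hi = 1.0, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if x_of_t(mid) < 1.0:
        lo = mid
    else:
        hi = mid
t1 = 0.5 * (lo + hi)
assert abs(t1 - 1.6517245) < 1e-4
assert abs(f_of_t(t1) - 1.3728623) < 1e-4
assert f_of_t(t1) < 1.5  # the claimed bound f(1) < 3/2
```

The computed $f(1)\approx 1.3729$ sits comfortably below the bound $3/2$ to be proved.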
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is $\sum_{k=1}^{\infty} \frac{(-1)^{k}}{(2 k) !}=\left.\left(\sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2 k) !} x^{2 k}-1\right)\right|_{x=1}$? In order to sum up the series we have to realize that it is almost the expansion of the cosine function. But we need $k=0$ in the sum. I don't know how to do this. The solution is $\sum_{k=1}^{\infty} \frac{(-1)^{k}}{(2 k) !}=\left.\left(\sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2 k) !} x^{2 k}-1\right)\right|_{x=1}$, but where does $-1$ on the right hand side come from and how did we achieve $k=0$?
My approach was to do the following, which leads to a dead end...(right?)
$\sum_{k=1}^{\infty} \frac{(-1)^{k}}{(2 k) !}=-\sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{(2 k) !}=\cdots$
| As @Chaos & @Fred said, it's because of the first ($k=0$) term added to the right-hand side. Also notice the benefit of the relation:
$$f(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}x^{2k} \Longrightarrow
f''(x) = -f(x) \Longrightarrow f(x) = a \sin (x) + b \cos (x)$$
Maybe this is the purpose of the relation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3980907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
For odd $n$, an $n\times n$ matrix with real entries has at least one real eigenvalue. Reading a linear algebra textbook, I encountered:
For odd $n$, an $n\times n$ matrix with real entries has at least one real eigenvalue.
I noticed that in Determinant-free proof that a real $n \times n$ matrix has at least one real eigenvalue when $n$ is odd., proofs without determinants are posted.
It seems to me that with the trick of determinants, this problem might be easy. The first thing I came up with is $\det(A_n-\lambda I_n)$, but this alone does not guarantee at least one real eigenvalue; i.e., writing the characteristic polynomial as $f_{A}(\lambda) =\lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_0$ with $a_i \in \mathbb{R}$, we do not immediately know that one of the roots $\lambda$ must be real.
What can be the simple proof with determinants?
| The characteristic polynomial is of odd degree. Since complex roots come in conjugate pairs, and by the fundamental theorem of algebra there are an odd number of roots over $\Bbb C$ (counting multiplicity), there must be a real root.
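Equivalently, one can argue by the intermediate value theorem: a monic odd-degree real polynomial is negative for very negative arguments and positive for very positive ones. The bisection sketch below is my own illustration (not part of the answer), with `real_root` and its bounds being arbitrary choices suited to monic cubics with small coefficients:

```python
import random

def p_eval(coeffs, x):
    """Horner evaluation of a polynomial, leading coefficient first."""
    r = 0.0
    for c in coeffs:
        r = r * x + c
    return r

def real_root(coeffs, lo=-1e6, hi=1e6):
    """Bisection: a monic odd-degree real polynomial changes sign between
    very negative and very positive arguments, so it has a real root."""
    assert p_eval(coeffs, lo) < 0 < p_eval(coeffs, hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_eval(coeffs, mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(3)
for _ in range(20):
    coeffs = [1.0] + [random.uniform(-5.0, 5.0) for _ in range(3)]  # monic cubic
    x = real_root(coeffs)
    assert abs(p_eval(coeffs, x)) < 1e-3  # residual check
```

Applied to a characteristic polynomial, this real root is a real eigenvalue of the matrix.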
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3981070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Analytic variety can be defined by finitely many equations Let $(f_i)_{i\in I}$ be a family of holomorphic functions on $\mathbb C^n$, and let $V:=\{z\in\mathbb C^n\mid f_i(z)=0,\forall i\}$. I know that locally we can cut out $V$ by finitely many equations, because the local ring $\mathscr{O}_0$ of the analytic sheaf is Noetherian.
But is this also true globally? First, we know that the ring of all holomorphic functions on $\mathbb C^n$ is not Noetherian.
I attempted the following: the set $I_0(V):=\{f\in \mathscr{O}_0\mid f|_V=0\}$ is an ideal of $\mathscr{O}_0$, so it is finitely generated, say by $(g_1,...,g_m)$. Then when we localize any $f_i$ to $\mathscr{O}_0$, we can write $f_{i,x}=r_1g_1+...+r_mg_m$. This means they coincide on an open neighborhood; then apply the identity theorem. But later, when I reviewed it, it was not quite correct, because the functions $g_1,r_1,...,g_m,r_m$ might not actually be globally defined. So maybe this claim is wrong? Is there any counterexample?
| Yes, this is true, even if you replace ${\mathbb C}^n$ by an $n$-dimensional complex manifold $M$. The precise statement is that if $V\subset M$ is the zero-set of a family of holomorphic functions $f_j, j\in J$, on $M$ then there are holomorphic functions $g_1,...g_n$ on $M$ such that
$$
V=\{z\in M: g_i(z)=0, i=1,...,n\}.
$$
See Proposition 5.7 in
E.M. Chirka, "Complex analytic sets." Translated from the Russian by R. A. M. Hoksbergen, Mathematics and Its Applications: Soviet Series, 46. Dordrecht etc.: Kluwer Academic Publishers. xix, 372 p. Dfl. 195.00; (1989).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3981349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve the recurrence $T(n)=5T(n/2)+n^3\log(n)$ using the iteration method How do we solve the recurrence $T(n)=5T(n/2)+n^3\log(n)$ using the iteration method?
I solved the recurrence using the master method.
Now using the iteration method
$T(n)=5T(n/2)+n^3\log n = 5\bigl(5T(n/4)+(n/2)^3\log(n/2)\bigr)+n^3\log n = \cdots = 5^i\,T(n/2^i)+\sum_{k=0}^{i-1}5^k\,(n/2^k)^3\log(n/2^k) = 5^i\,T(n/2^i)+n^3\sum_{k=0}^{i-1}\frac{5^k}{8^k}\log(n/2^k)$
How is this equal to $\Theta(n^3\log(n))$?
| Let $u_k:=T(2^k)$. Then substituting $n=2^k$ in the given equation, $$u_k=5u_{k-1}+8^kk$$
This can be solved in the usual way, $$u_k=\tfrac{1}{3} 8^{k+1} (k-\tfrac{5}{3}) + C5^k$$ Substituting $k=\log_2n$, $5^k=5^{\log_2n}=n^{\log_25}$, $$T(n)=\tfrac{8}{3}n^3(\log_2 n-\tfrac{5}{3})+Cn^{\log_2 5}=O(n^3\log_2 n)$$
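The closed form can be checked against the recurrence exactly (a sketch of my own, not part of the answer), using rational arithmetic and fixing $C$ from the initial value $u_0$:

```python
from fractions import Fraction

def u_rec(k, u0=Fraction(1)):
    """u_k = 5 u_{k-1} + 8^k * k, with u_0 given (u_k stands for T(2^k))."""
    u = u0
    for j in range(1, k + 1):
        u = 5 * u + Fraction(8) ** j * j
    return u

def u_closed(k, u0=Fraction(1)):
    """Closed form (8/3) * 8^k * (k - 5/3) + C * 5^k, with C fixed by u_0."""
    C = u0 + Fraction(40, 9)
    return Fraction(8, 3) * 8 ** k * (k - Fraction(5, 3)) + C * 5 ** k

for k in range(10):
    assert u_rec(k) == u_closed(k)
```

Since $5^k = n^{\log_2 5} = o(n^3)$, the $8^k k = n^3\log_2 n$ term dominates, giving $\Theta(n^3\log n)$.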
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3981472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Can a ring without unity have a non-trivial subring with unity? The original problem was to prove that for any commutative ring $R$, there is a canonical (unique) ring homomorphism from $\mathbb Z$ to $R$. The problem would be simple if $R$ has $1_R$: we can let the homomorphism $\phi$ send $0$ to $0_R$ and $1$ to $1_R$. But if $R$ doesn't have an identity, and $\phi(\mathbb Z)$ forms a subring of $R$ with unity $\phi(1)$, then the rng $R$ has a subring with unity. If a ring without unity can only have a trivial subring with unity, then $\phi$ must be the trivial map, and the proof would be complete. But I'm not sure of this part. Any help would be appreciated!
| Many "commutative rings" are assumed from context to contain the identity, and homomorphisms of these rings must take the identity to the identity. If that is the case, define $\phi:\mathbb{Z}\to R$ so that $\phi(n)=1_R+...+1_R$ (n times).
If this is not assumed, it is false that for any commutative ring $R$ there exists a unique ring homomorphism $\phi:\mathbb{Z}\to R.$ For example if $R=\mathbb{Z}$ either the trivial map or the identity map is a homomorphism in the non-unital ring sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3981795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Inequality of sum of error functions I am stuck with proving the following. Let $A\geq0
, B>0$, and $\alpha \in (0,1)$. I would like to show that
$$
\text{erf}\left(\frac{\alpha A+x}{\sqrt{2} \alpha B}\right)+\text{erf}\left(\frac{A-x}{\sqrt{2} B}\right)-\text{erf}\left(\frac{\alpha A-x}{\sqrt{2} \alpha B}\right)-\text{erf}\left(\frac{A+x}{\sqrt{2} B}\right)\geq 0,\quad \mbox{for all } x\geq0,$$ where erf($\cdot$) denotes the error function.
In my numerical experiments I have not found a single counterexample for this inequality, but I have not been able yet to prove it (e.g., via known bounds for the error function). Anyone a clue?
| @Yves Daoust gave a good hint.
Keeping your notations, at least for small values of the arguments, we have for the expression
$$f(x)=2 \sqrt{\frac{2}{\pi }}\,\frac {(1-\alpha)}{\alpha B}\,e^{-\frac{A^2}{2 B^2}}\, \Bigg[1+\sum_{n=1}^\infty\frac{ a_n}{(2 n+1)! \,\left(\alpha B^2\right)^{2 n} }\, x^{2n}\Bigg]\,x$$ with
$$a_1=\left(\alpha ^2+\alpha +1\right) (A^2-B^2) $$
$$a_2=\left(\alpha ^4+\alpha ^3+\alpha ^2+\alpha +1\right) \left(A^4-6 A^2 B^2+3 B^4\right)$$
$$a_3=\left(\alpha ^6+\alpha ^5+\alpha ^4+\alpha ^3+\alpha ^2+\alpha +1\right) \left(A^6-15 A^4 B^2+45 A^2 B^4-15 B^6\right)$$
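The conjecture itself (not just its series expansion near $x=0$) can be spot-checked numerically with the standard library's `math.erf`; this is only a coarse grid search, not a proof:

```python
import math

def F(x, A, B, alpha):
    # the left-hand side of the conjectured inequality
    s = math.sqrt(2.0)
    return (math.erf((alpha * A + x) / (s * alpha * B))
            + math.erf((A - x) / (s * B))
            - math.erf((alpha * A - x) / (s * alpha * B))
            - math.erf((A + x) / (s * B)))

# minimum of F over a coarse grid of x >= 0, A >= 0, B > 0, alpha in (0,1)
worst = min(F(0.1 * i, A, B, alpha)
            for i in range(100)
            for A in (0.0, 0.5, 1.0, 3.0)
            for B in (0.5, 1.0, 2.0)
            for alpha in (0.1, 0.5, 0.9))
assert worst >= -1e-12
```

Note that $F(0,A,B,\alpha)=0$ exactly, so the inequality is tight at $x=0$.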
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3982126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How could I interpret $\delta$ in $\mathbf{R} \in \mathbb{R}^{|S|}/\delta$? I'm trying to implement a Paper and when looking at the pseudocode I encountered the following:
*
*Pick a random reward vector $\mathbf{R} \in \mathbb{R}^{|S|}/\delta$
*...
*Repeat:
*
*Pick a reward vector $\tilde{\mathbf{R}}$ uniformly at random from the neighbours of $\mathbf{R}$ in $\mathbb{R}^{|S|}/\delta$
*...
Suppose e.g. $\delta = 0.05$ and $|S| = 5$.
What's the meaning of "$/\delta$" and how could I implement such sampling in code? I understood it has something to do with the Equivalence class but couldn't figure out how to apply this concept in practice.
The Paper I am referring to is Bayesian Inverse Reinforcement Learning, the pseudocode appears in Figure 3 specifically. Here we can find a (possibly incorrect) implementation:
import numpy as np

def sample_random_rewards(n_states, step_size, r_max):
    """
    Sample random rewards from the grid points of R^{n_states}/step_size.
    :param n_states: number of states
    :param step_size: grid length delta
    :param r_max: bound on the absolute value of each reward
    :return: sampled rewards
    """
    rewards = np.random.uniform(low=-r_max, high=r_max, size=n_states)
    # move these random rewards toward a gridpoint
    # add r_max to make the mod always positive
    # add step_size for easier clipping
    rewards = rewards + r_max + step_size
    for i, reward in enumerate(rewards):
        mod = reward % step_size
        rewards[i] = reward - mod
    # subtract the added values from the rewards
    rewards = rewards - (r_max + step_size)
    return rewards
Thanks
| As @johnny10 has mentioned in the comments, there is a definition at the beginning of section 5:
The sampling technique we use is an MCMC algorithm GridWalk (see [Vempala, 2005]) that generates a Markov chain on the intersection points of a grid of length $\delta$ in the region $\mathbb R^{|S|}$ (denoted $\mathbb R^{|S|}/\delta$).
Hence, the paper is referring to the set of all $|S|$-tuples of real numbers, where each entry is an integer multiple of $\delta$. For $\delta = 0.05$ and $|S|=3$ one example would be $(23.45, -8.10, 5.55)$, since each of the three numbers is a multiple of $0.05$. Numbers like $1.23$ would not be allowed.
The chosen notation is non-standard and a better notation would be $(\delta\mathbb Z)^{|S|}$.
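Given that reading, sampling a starting point for GridWalk amounts to picking each coordinate as a random integer multiple of $\delta$. Here is a minimal stdlib sketch; the helper name and the clipping of entries to $[-r_{\max}, r_{\max}]$ are my own assumptions, carried over from the implementation quoted in the question:

```python
import random

def sample_grid_reward(n_states, delta, r_max):
    # one point of (delta * Z)^{n_states} with entries in [-r_max, r_max]
    m = int(r_max / delta)
    return [delta * random.randint(-m, m) for _ in range(n_states)]

r = sample_grid_reward(5, 0.05, 1.0)
```

A neighbour of `r` in the grid is then obtained by moving a single coordinate by $\pm\delta$.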
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3982326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does this ring isomorphism hold for rings that aren't necessarily fields? The problem I'm working on is: Show that $R[x,y]/(x^2-y^3)\cong A$ where $A=\{\sum a_ix^i\in R[x]:a_1=0\}$. A similar question has been posted on the website multiple times, but the explanation is either confusing (to me at least) or requires $R$ to be a field (which is not assumed to necessarily be true in the problem).
I have constructed a homomorphism $\varphi:R[x,y]\to A; \varphi(x):=x^3;\varphi(y):=x^2;\varphi(r):=r,r\in R$. There is a unique homomorphism for which all of the previous equalities hold and it is surjective. To prove the claim it is sufficient to show that $\ker\varphi\subset(x^2-y^3)$. The other inclusion is trivial. Once proven, the hypothesis easily follows from the first isomorphism theorem. The inclusion above could be proven using division with remainder if $R$ were a field, however that isn't necessarily the case.
I've gotten this far on my own:
We can split the terms of any $f\in R[x,y]$ in the following way: $f=g_0+g_1+...+g_n$ where $\varphi(g_k)=c_kx^k$ for some $c_k\in R$ and all $k$. This partition of terms is clearly unique. Let $f\in \ker\varphi$. In this case $c_k=0$ for all $k$. The goal is to prove that $g_k(x,y)=(x^2-y^3)h_k(x,y)$ for some polynomial $h_k.$ It is easy to see that $g_0$ consists of only constant terms, therefore $g_0=0$. To find the others let us take a look at the following equation: $\varphi(x^iy^j)=x^k$. This equation clearly holds if and only if $3i+2j=k$. It's easy to find all of the solutions of this equation. If $k$ is even then $i=0,j=\frac{k}{2}$ is a solution. If, on the other hand $k$ is odd, then $i=1,j=\frac{k-3}{2}$ is a solution (we're obviously looking for solutions where $k\neq1$). All the other solutions can be found by subtracting $3$ from $j$ and adding $2$ to $i$. Using this method we can find that:
\begin{equation}
\begin{split}
&g_0=a_{0,0}\\
&g_1=0\\
&g_2=a_{2,0}y\\
&g_3=a_{3,0}x\\
&g_4=a_{4,0}y^2\\
&g_5=a_{5,0}xy\\
&g_6=a_{6,0}x^2+a_{6,1}y^3...
\end{split}
\end{equation}
Where $a_{i,j}\in R$. From here we can see why $g_0=g_1=...=g_5=0$. For example $0=\varphi(g_2)=a_{2,0}x^2\Rightarrow a_{2,0}=0\Rightarrow g_2=0$. Similarly, we can conclude that $g_k=0$ whenever the equation $3i+2j=k$ has a unique solution in the natural numbers (holds also for $k=7$ and $k=8$). If the equation has two solutions (like in case $k=6$) we can do the following:
$$
0=\varphi(g_6)=(a_{6,0}+a_{6,1})x^6
$$
This means that $g_6(x,y)=a_{6,0}x^2-a_{6,0}y^3=(x^2-y^3)a_{6,0}$. So the claim holds for $k=6$. Similar can be shown for all other $k$ for which the equation above has only two solutions (like for $k=9,10,11$) using the equality $cx^iy^j-cx^{i-2}y^{j+3}=(x^2-y^3)\cdot cx^{i-2}y^j$. Furthermore it can also be easily factored in the case where the equation has three solutions ($k=12$ for example). That's because if
$$
g_k(x,y)=c_0x^iy^j+c_1x^{i-2}y^{j+3}+c_2x^{i-4}y^{j+6}
$$
then $\varphi(g_k)=0\Rightarrow c_0+c_1+c_2=0$ therefore
\begin{equation}
g_k(x,y) = c_0x^iy^j - c_2x^{i-2}y^{j+3} - c_0x^{i-2}y^{j+3} + c_2x^{i-4}y^{j+6} = \\
(x^2-y^3)(c_0x^{i-2}y^j - c_2x^{i-4}y^{j+3})
\end{equation}
This is as far as I've come. I can't find a solution for when the equation with $k$ has four solutions. All of the attempts at factoring the general case have failed.
| We want to show that the kernel of $\varphi$ is $(x^2-y^3)=: I$, with $I$ being a subset of said kernel.
Let $h \in R[x,y] \backslash I$ with $\varphi(h)=0$ (assume there exists one for the sake of contradiction). We can write $h \in g+I$, with $\varphi(g)=0$, and $g(x,y)=g_1(x)+yg_2(x)+y^2g_3(x)$, $g_1,g_2,g_3 \in R[x]$, and thus $g \notin I$.
But $0=\varphi(g)=g_1(x^3)+x^2g_2(x^3)+xh_3(x^3)$ where $h_3(x)=xg_3(x)$.
Each term of the RHS only contributes to disjoint classes of powers of $x$: from left to right, the $x^{3d}$, the $x^{3d+2}$, the $x^{3d+1}$. As the RHS is zero, it follows that all three terms are zero and thus $g=0 \in I$, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3982467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Confirming a easy proof: the product of two consecutive numbers is always even. Can someone confirm if my prove is right?
Theorem. The product of two consecutive integers is always even.
Proof. Define a number $n$ such that $n:=2k$ where $k\in\mathbb{Z}$, this ensures that $n$ is an even number. Define a second number $p$ such that $p:=n+1=2k+1$ this ensures that $p$ is an odd number and the numbers $n$ and $p$ are consecutive because they differ by $1$. Their product is given by:
$$np=2k(2k+1)=2(2k^2+k)=4k^2+2k\tag1$$
Which is clearly a multiple of $2$.
Define a number $h$ such that $h:=2m+1$ where $m\in\mathbb{Z}$, this ensures that $h$ is an odd number. Define a second number $z$ such that $z:=h-1=2m$ this ensures that $z$ is an even number and the numbers $h$ and $z$ are consecutive because they differ by $1$. Their product is given by:
$$hz=2m(2m+1)=2(2m^2+m)=4m^2+2m\tag2$$
Which is clearly a multiple of $2$, this proves that $hz$ is an even number.
This proves this theorem.
If I am wrong how can I make my proof valid?
| Style and sufficiency of proof generally depends on where you take the "floor of certainty" to be - what facts and statements can be taken as true and what needs to be demonstrated.
With a simple theorem like this you might say that we should set the floor a little lower, that is provide an exposition of even simple steps in the proof. For example, the fact that alternating integers are odd and even might need to be supported - or not.
Commenting on your proof, then, in the knowledge that this is a little subjective:
*
*Breaking into the cases of lower number even and lower number odd looks good, and you could do this more clearly.
*You don't need to define a separate number for $n{+}1$. It doesn't add clarity and could easily confuse the matter.
*You can continue to use $n$ and $n{+}1$ as the successive numbers in the two cases - again limit your new defined numbers, for example to where they will interact with each other; there's no such interaction between your cases.
So using your method I would write:
Theorem. The product of two consecutive integers is always even.
Proof. Consider successive integers $n$ and $n{+}1$ with product $n(n{+}1)$ in two cases, (1) where $n$ is even and (2) where $n$ is odd.
Case 1, $n$ even:
Find $k\in\mathbb{Z}$ such that $n=2k$. Now the desired product is given by:
$$n(n+1)=2k(2k+1)=2(k(2k+1))\tag1$$
And since $k(2k+1)\in\mathbb{Z}$ this is thus a multiple of $2$ and even as required.
Case 2, $n$ odd:
Find $k\in\mathbb{Z}$ such that $n=2k+1$. Now the desired product is given by:
$$n(n+1)=(2k+1)(2k+2)=2(2k+1)(k+1)\tag 2$$
And since $(2k+1)(k+1)\in\mathbb{Z}$ the product is again a multiple of $2$ and even as required.
Since both cases demonstrate an even product, this proves this theorem.
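As a quick empirical complement to the proof (not a substitute for it), the claim is easy to check exhaustively on a range of integers:

```python
# both parity cases are covered as n runs over consecutive integers
assert all((n * (n + 1)) % 2 == 0 for n in range(-1000, 1000))
```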
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3982646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Solving for $g(y)$ in the following integral equation Given the following integral equation where $f, h$ are known:
$$ f(t) = \int_{t-1}^t{g(y)\cdot h(y)\ \textrm{d}y}$$
Is it possible to solve this for $g(t)$, i.e. get an equation of the form $g(t) = \dots$ (supposing $g, h$ are suitably nice)? I have no idea where to start. I acknowledge that an analytical solution may not even exist in the general case, but I remain hopeful!
| If $ g $ and $ h $ are continuous on $ \Bbb R $, then $ f $ will be differentiable on $ \Bbb R $ and, by the fundamental theorem of calculus,
$$f'(t)=g(t)h(t)-g(t-1)h(t-1)$$
So,
$$g(t)=g(t-1)\frac{h(t-1)}{h(t)}+\frac{f'(t)}{h(t)}$$
which is a kind of recursive relation.
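A concrete illustration (my own choice of functions, not from the question): with $g(y)=y^2$ and $h(y)=1$ we get $f(t)=t^2-t+\tfrac13$, and both the differentiated identity and the recursion can be checked numerically:

```python
def g(t): return t * t
def h(t): return 1.0
def fprime(t): return 2 * t - 1   # derivative of f(t) = t^2 - t + 1/3

for t in (0.3, 1.0, 2.5, -4.0):
    # f'(t) = g(t) h(t) - g(t-1) h(t-1)
    assert abs(fprime(t) - (g(t) * h(t) - g(t - 1) * h(t - 1))) < 1e-12
    # the recursion recovers g(t) from g(t-1)
    assert abs(g(t) - (g(t - 1) * h(t - 1) / h(t) + fprime(t) / h(t))) < 1e-12
```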
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3982823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Resources, references, or examples for logics with finitely many sentences Are there any interesting logics with only finitely many sentences total? Is there a reason why potential logics with this property would be trivial or badly behaved?
I'm looking for a reference to read more or an example of such a logic that's interesting as a mathematical object.
I was recently looking at some examples of interesting logics that are fragments of better-known logics, such as two-variable logic and a list of intermediate logics that are interesting logics between intuitionistic propositional logic and classical propositional logic.
One thing that I haven't come across yet, though, is a modern logic (possibly described as a fragment of another logic) with structural constraints on well-formed formulas that are so strong that only finitely many sentences are possible or only finitely many sentences up to renaming of variables are possible. Traditional term logic, even with the addition of non for modifying predicates, has finitely many sentences up to renaming of variables.
I can think of a few weird properties of such finite-number-of-sentences logics off the top of my head like the fact that ordinary introduction rules like $A, B \vdash A \land B$ will sometimes fail if the conclusion is too big to be well-formed.
NOTE: removed reference to $\alpha$-equivalence, replaced with up to renaming of variables which is more clear.
| All automated (computer based) theorem provers involve some logic with a finite number of sentences.
I would consider some of those automated theorem provers of interest, since they have been used to solve open problems in mathematics and logic, such as when the Robbins problem was solved. Some others might not have solved any open problem, but have the ability to solve open problems with human assistance (or one might think of the computer as the human's assistant). Also, model finders with only a finite number of possible sentences are, I would think, of interest, since they save people time, as computers work faster than humans in some situations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3982975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
$\left(\sum_{i=1}^{n} x_{i}^{2}\right)^{\frac{1}{2}} \leq \sum_{i=1}^{n} x_{i}$ I was wondering if for $x_i\in\mathbb{R}_{\geq0}$ the inequality
$$
\left(\sum_{i=1}^{n} x_{i}^{2}\right)^{\frac{1}{2}} \leq \sum_{i=1}^{n} x_{i}
$$
Holds. If so, is there a name for it?
My attempt
$$
\sum_i^n i = \frac{n(n+1)}{2}
$$
$$
\sum_i^n i^2 = \frac{n(n+1)(2n+1)}{6}
$$
Since the left hand side is $O(n^{3/2})$ and the right hand side is $O(n^2)$, and $n$ is a positive integer, the inequality should hold.
| I don't know whether there's a name for it, but note that\begin{align}\left(\sum_{i=1}^nx_i\right)^2&=\sum_{i=1}^nx_i^{\,2}+\overbrace{\sum_{i\ne j}x_ix_j}^{\geqslant0}\\&\geqslant\sum_{i=1}^nx_i^{\,2}\end{align}and that therefore$$\sum_{i=1}^nx_i\geqslant\sqrt{\sum_{i=1}^nx_i^{\,2}}.$$
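In vector language this is the inequality $\lVert x\rVert_2 \le \lVert x\rVert_1$ for nonnegative entries; a quick randomized check:

```python
import math
import random

random.seed(0)
for _ in range(1000):
    xs = [random.random() for _ in range(random.randint(1, 10))]
    norm2 = math.sqrt(sum(x * x for x in xs))
    norm1 = sum(xs)
    assert norm2 <= norm1 + 1e-12
```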
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3983098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does the function have a positive lower bound? Let $\zeta$ be an irrational number and $p, q, n$ be natural numbers. The function $g$ is defined as:
$$g_n(\zeta,p,q)=p^n|p\zeta-q|.$$
Does there exist a positive $r$ such that:
$$\forall p\in\mathbb{N}-\{0\},\ \forall q\in\mathbb{N},\ g_n(\zeta,p,q)\geq r?$$
Is the result relevant to $\zeta$ and $n$?
| There is no lower bound.
Consider an increasing sequence $e_i$ of positive integers and let $b$ be a positive integer. Then
$$\zeta = \sum_i b^{-e_i}$$
will be irrational if the $e_i$ are chosen not to result in a periodic tail for the base-$b$ expansion of $\zeta$.
But $$\min_q g_n(\zeta, b^{e_k}, q) = \sum_{i > k}b^{(n+1)e_k-e_i}$$
and no matter what $n, k, r$ we choose, there will be sequences with $e_{k+1}$ large enough to make that remainder smaller than $r$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3983251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solve the ODE $y'^{2}+y^2=1$ I can figure out solutions $y=\sin x$ or $\cos x$ and trivial solution $y=1$. But how to get all solutions?
| Failed attempt:
As the $(y,y')$ trajectory is a circle, we can write the solution as $$y(x)=\sin(u(x)),y'(x)=\cos(u(x)).$$ This requires
$$y'(x)=\cos(u(x))=u'(x)\cos(u(x))$$ and $u'(x)=1$ or $\cos(u(x))=0$.
The first case yields $u(x)=x+c$, or $y(x)=\sin(x+c)$, and the second is $y(x)=\pm1$.
Unfortunately, this method does not yield the most general solution because it does not cover the case
$$y(x)=\cos(u(x)),y'(x)=\sin(u(x))$$ that results in $u'(x)=-1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3983421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
$\cos (\tan \alpha)=\sin \alpha $ I can't find where is my mistake so if someone can enlighten me...
$\tan \alpha=\arccos (\sin \alpha) $
$\tan \alpha=\arccos (\cos (\alpha -\frac{\pi }{2}))= \alpha -\frac{\pi }{2}$
$\frac{d}{d\alpha }\tan \alpha=\frac{d}{d\alpha }(\alpha -\frac{\pi }{2}) $
$\frac{1}{\cos (\alpha)^{2}}=1 $
$\alpha =\pi n$
there is something wrong and I don't really know.
| The fact that $\alpha$ satisfies $F(\alpha)=G(\alpha)$ does not imply that $F'(\alpha)=G'(\alpha)$. For instance, $x=0$ is a solution to $x=x^2$, but not a solution to $1=2x$.
Added: There are of course a few other issues of purely algebraic nature, namely that $\cos t=c$ does not imply $t=\arccos c$, and that $\arccos \cos c\ne c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3983766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that if $(a + b + c)(ab + ac + bc) = abc$ then sum of some two numbers equals $0$ Prove that if $(a + b + c)(ab + ac + bc) = abc$ then sum of some two numbers equals $0$.
Without loss of generality let's suppose that $a=0$ then $(b + c)bc = 0 \Rightarrow b+c = 0$ or
$bc = 0 \Rightarrow b=0 \lor c = 0$ and $a+b=0 \lor a+c=0$ respectively. Q.E.D.
Now let's suppose $abc \not = 0$ and $a+b+c \not = 0$ then from initial equation we can get the following one that must hold $\dfrac{1}{a+b+c} = \dfrac{1}{a} + \dfrac{1}{b} + \dfrac{1}{c}$
As $a,b,c$ are some arbitrary numbers (but not non-negative integers for example) I've got completely stuck here
| Hint:
$$(a + b + c)(ab + ac + bc) - abc=(a + b)(b + c)(c + a)$$
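The hinted identity can be verified by expanding both sides, or checked exactly on random integers:

```python
import random

random.seed(1)
for _ in range(1000):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    lhs = (a + b + c) * (a * b + a * c + b * c) - a * b * c
    rhs = (a + b) * (b + c) * (c + a)
    assert lhs == rhs
```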
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3983909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unique maximal ideal and ring epimorphism kernel with prime numbers equivalence Let $A$ be a ring. Let $f: \mathbf{Z} \to A$ be a surjective ring homomorphism. Prove that $A$ has a unique maximal ideal iff there exists $n\in \mathbf{N}$ and $p\in\mathbf{N}$ a prime number such that $\ker f =(p^{n})$.
$(p^{n})$ denotes the ideal generated by $p^{n}$, I'm not sure if that's the standard notation.
We had to prove that in a recent exam and nobody could prove it. I haven't found it online neither. Thank you!
| I assume by 'ring epimorphism' you mean 'surjective ring homomorphism'. Since $f$ is surjective we have $A\cong \Bbb Z/\ker f$. We know that the maximal ideals of $\Bbb Z/\ker f$ correspond bijectively to the maximal ideals of $\Bbb Z$ containing $\ker f$, hence $A$ has a unique maximal ideal iff there is a unique maximal ideal of $\Bbb Z$ containing $\ker f$. Now use the fact that $\Bbb Z$ is a PID.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to interpret a two-sided graph?
What does the "right" Y-axis represent?
Taken from here: https://istheservicedown.co.uk/status/virgin-media/2654675-bristol-bristol-england-united-kingdom
| Looking around the provided site it looks like the blue curve represents the number of reports for Virgin Media outages in the UK, whereas the green curve is the number of reports specific to Bristol. The units on the left are for the blue (country-wide) curve, and the units on the right are for the green (local) curve. For example, consider the excerpted images below. Note that in each case the blue curve matches the graph of total Virgin Media outages, whereas the green curve (and corresponding scale on the right axis) change depending on locality.
Virgin Media outages in the UK:
Virgin Media outages in Bristol:
Virgin Media outages in London:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the maximum of $(x_1\cdots x_n)^2$ subject to $x_1^2+\ldots +x_n^2=1$ Find the maximum of $(x_1\cdots x_n)^2$ subject to $x_1^2+\ldots +x_n^2=1$ und show that $$\left (\prod_{k=1}^na_k\right )^{\frac{1}{n}}\leq \frac{1}{n}\sum_{k=1}^na_k$$
For the first part I applied the method of Lagrange multipliers.
We have the function $f(x_1, \ldots , x_n)=(x_1\cdot \ldots \cdot x_n)^2$ and the constraint $g(x_1, \ldots , x_n)=x_1^2+ \ldots + x_n^2-1=0$.
The Lagrange function is \begin{equation*}L(x_1, \ldots , x_n ,\lambda )=f(x_1, \ldots , x_n)+\lambda g(x_1, \ldots , x_n)=(x_1\cdot \ldots \cdot x_n)^2+\lambda \left (x_1^2+ \ldots + x_n^2-1\right )=\left (\prod_{j=1}^nx_j\right )^2+\lambda \left (\sum_{j=1}^nx_j^2-1\right )\end{equation*}
We calculate the partial derivatives of $L$ :
\begin{align*}&\frac{\partial}{\partial{x_i}}L(x_1, \ldots , x_n ,\lambda )=2\left (\prod_{j=1}^nx_j\right )\cdot \left (\prod_{j=1, j\neq i}^nx_j\right ) +2\lambda x_i \\ & \frac{\partial}{\partial{\lambda }}L(x_1, \ldots , x_n ,\lambda )=\sum_{j=1}^nx_j^2-1 \end{align*} with $1\leq i\leq n$.
To get the extrema we set the partial derivatives equal to zero.
Then we get the following system:
\begin{align*}&2\left (\prod_{j=1}^nx_j\right )\cdot \left (\prod_{j=1, j\neq i}^nx_j\right ) +2\lambda x_i =0 \Rightarrow x_i\cdot \prod_{j=1, j\neq i}^nx_j^2 +\lambda x_i =0 \Rightarrow x_i\cdot \left (\prod_{j=1, j\neq i}^nx_j^2 +\lambda \right ) =0 \\ & \sum_{j=1}^nx_j^2-1=0 \end{align*}
How can we continue?
| A bit late but I thought worth mentioning it.
Set $y_k = x_k^2$ for $k=1\ldots n$. So, to maximize is
$$\prod_{k=1}^n y_k \text{ subject to } \sum_{k=1}^ny_k =1\text{ with } y_1,\ldots ,y_n \geq 0$$
Since the product is $0$ if any of the $y_k$ is zero, we can assume $y_1,\ldots ,y_n > 0$.
Now, Jensen (or concavity of $\ln$) gives immediately
$$\sum_{k=1}^n \ln y_k \leq n \ln \sum_{k=1}^n \frac{y_k}n = \ln \frac 1{n^n}$$
Hence,
$$\prod_{k=1}^n x_k^2 = \prod_{k=1}^n y_k \leq \frac 1{n^n}$$
and equality is reached for $y_k = \frac 1n \Leftrightarrow x_k =\pm \frac 1{\sqrt n}$.
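A numerical sanity check of the bound: random points on the simplex never beat $n^{-n}$, and $y_k=1/n$ attains it:

```python
import random

n = 5
bound = n ** (-n)                    # = 1 / n^n, the claimed maximum
random.seed(2)
for _ in range(1000):
    ys = [random.random() for _ in range(n)]
    s = sum(ys)
    ys = [y / s for y in ys]         # normalize so that sum(ys) == 1
    p = 1.0
    for y in ys:
        p *= y
    assert p <= bound + 1e-15

# equality at y_k = 1/n
p_eq = 1.0
for _ in range(n):
    p_eq *= 1.0 / n
assert abs(p_eq - bound) < 1e-12
```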
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
How to calculate a combinatorial sum I have a combinatorial sum in hand which I suspect equals zero. But I do not know how to prove it. Can you guys help me? (I am even not sure if this is a hard question or not)
Is
$$
\sum_{k=0}^n (-1)^k k^{n-1} \left(\begin{array}{l}
n \\
k
\end{array}\right) = 0
$$
?
| HINT: Start with $$f(x)=(1+x)^n=\sum_{k=0}^n\binom{n}{k}x^k.$$ Differentiate to show that $f'(-1)=0$. Differentiate again to show that $f''(-1)=0$. Keep going (inductively) until the $(n-1)$-th derivative.
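The differentiation argument aside, the vanishing of the sum is easy to confirm for small $n$ (using the convention $0^0=1$, which Python follows):

```python
from math import comb

def S(n):
    # sum_{k=0}^{n} (-1)^k * k^(n-1) * C(n, k)
    return sum((-1) ** k * k ** (n - 1) * comb(n, k) for k in range(n + 1))

assert all(S(n) == 0 for n in range(1, 15))
```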
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
How many clopen subsets does the subspace [0,1]∪[3,4] have? Show that the subspace $[0,1]∪[3,4]$ of $\mathbb R$ has at least $4$ clopen subsets. Exactly how many clopen subsets does it have?
Just to clarify, this is the subspace induced by $[0,1]∪[3,4]$ from $\Bbb R$, so the space is just the intersection of all open sets in $\Bbb R$ with $[0,1]∪[3,4].$
Finding $4$ clopen sets is easy. Just form half open intervals with one of the $4$ endpoints, like $[0, 1/2).$ As for how many clopen sets there are, I think the answer is that there are uncountably many, since the intervals $(0,1)$ and $(3,4)$ have uncountably many elements.
| Clopen sets are both closed and open. For example, the empty set and the entire space are always clopen. Half-open intervals are not necessarily clopen; for example, $[0,1/2)$ is not closed, because it does not contain $1/2$.
One set that is clopen in the subspace topology is $[0,1]$. Even though it is not open in $\mathbb R$, it is open in the subspace topology, because it is the intersection of the subspace $[0,1]\cup[3,4]$ and $(-1,2), $ which is open in $\mathbb R$.
I have given you three clopen sets in the subspace topology, and I hope you can find a fourth.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proper definition of a function I'm sorry if this is a pedantic question, but I want to be sure I'm using terminology correctly.
Without a second thought, I would make statements of the following form: "Consider the function $f: A \to B$ defined by $f(a) = b$."
This isn't fully correct, however, because the full definition of the function $f$ should be the domain, codomain, and the rule. Without one, the definition is ambiguous.
Am I incorrect on this, or is this a commonly accepted shorthand? Would it make more sense to say "consider the function $f: A \to B$ given by" or "governed by"?
| Saying "given by", "verifying", "satisfying", etc. is more suitable for defining functions using words. Otherwise, remember that you can also use the notation:
\begin{align*}
f: & A \longrightarrow B \\
& a \longmapsto f(a)=b
\end{align*}
We say it is well defined if every point $a\in A$ has an image under $f$ that satisfies $f(a)\in B$, and if it gives no contradiction, that is: $a=a'$ implies $f(a)=f(a')$. If that is not verified, we say it is not well defined, and you had better change its definition to make clear where $f$ makes sense. For example, the function
\begin{align*}
f: & \mathbb{R} \longrightarrow \mathbb{R}_+ \\
& x \longmapsto f(x)=1/x
\end{align*}
isn't really well defined, since $x=0$ has no image, and not all images of points in $\mathbb{R}$ are positive, so a correct definition of it would be
\begin{align*}
f: \mathbb{R}\setminus\{0\} & \longrightarrow \mathbb{R} \\
x \ \ \ & \longmapsto \ f(x)=1/x
\end{align*}
though sometimes domain and codomain specifications are omitted because they can be deduced from the rule or the context
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Calculate $x^3 + \frac{1}{x^3}$ Question
$x^2 + \frac{1}{x^2}=34$ and $x$ is a natural number. Find the value of $x^3 + \frac{1}{x^3}$ and choose the correct answer from the following options:
*
*198
*216
*200
*186
What I have did yet
I tried to find the value of $x + \frac{1}{x}$. Here are my steps to do so:
$$x^2 + \frac{1}{x^2} = 34$$
$$\text{Since}, (x+\frac{1}{x})^2 = x^2 + 2 + \frac{1}{x^2}$$
$$\Rightarrow (x+\frac{1}{x})^2-2=34$$
$$\Rightarrow (x+\frac{1}{x})^2=34+2 = 36$$
$$\Rightarrow x+\frac{1}{x}=\sqrt{36}=6$$
I have calculated the value of $x+\frac{1}{x}$ is $6$. I do not know what to do next. Any help will be appreciated. Thank you in advanced!
| You wrote
$$(x+\frac{1}{x}) = x^2 + 2 + \frac{1}{x^2}.$$
But it should read
$$(x+\frac{1}{x})^2 = x^2 + 2 + \frac{1}{x^2}.$$
Then we obtain $x+\frac{1}{x}=6.$
From $x^2 + \frac{1}{x^2}=34$ we obtain
$(1) \quad x^3+\frac{1}{x}=34 x$
and
$(2) \quad x+\frac{1}{x^3}=\frac{34}{x}.$
If we add (1) and (2) we get
$$x^3+\frac{1}{x^3}=33(x+\frac{1}{x}).$$
And so
$$x^3+\frac{1}{x^3}=6 \cdot 33=198.$$
Remark : if $x^2 + \frac{1}{x^2}=34$, then $x$ can not be a natural number !
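A numerical check of the whole chain: solving $x+\frac1x=6$ gives $x=3+2\sqrt2$, which satisfies the original equation even though, as the remark notes, it is not a natural number:

```python
import math

x = 3 + 2 * math.sqrt(2)             # root of x^2 - 6x + 1 = 0
assert abs(x + 1 / x - 6) < 1e-9
assert abs(x ** 2 + 1 / x ** 2 - 34) < 1e-9
assert abs(x ** 3 + 1 / x ** 3 - 198) < 1e-9
```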
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3984824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Can we identify the pole of the Gamma function with the limit of the harmonic numbers? The expansion of the gamma function around $x=0$ is
$$\Gamma(x)=\frac{1}{x}-\gamma+O(x).$$
The Euler constant $\gamma$ is defined by $$\gamma=\lim_{n\to\infty}(H_n-\log n)$$ where $H_n$ is the $n$th harmonic number. On the other hand, $\gamma$ can also be defined as $$\gamma=\lim_{x\to 0}\left(\frac{1}{x}-\Gamma(x)\right).$$
However, $\lim_{n\to\infty} \log n$ and $\lim_{x\to 0} 1/x$ can be sort of identified in the following way:
$$\lim_{x\to 0^+} \frac{1}{x} = \lim_{x\to 0^+}\lim_{\delta\to 0}\int_\delta^1 t^{-1+x}\, dt$$
Now I will be sketchy and interchange the limits.
$$\lim_{x\to 0^+} \frac{1}{x} = \lim_{\delta\to 0}\lim_{x\to 0^+}\int_\delta^1 t^{-1+x} dt = \lim_{\delta\to 0}\int_\delta^1 t^{-1} dt
=-\lim_{\delta\to 0}\log\delta=\lim_{n\to\infty} \log n .$$
Then comparing to the two definitions of $\gamma$ above, it seems like one could again "identify" the pole of $\Gamma(x)$ with the limit of the harmonic numbers.
We can do a similar procedure to try to establish that.
$$\lim_{x\to 0^+} \Gamma(x)=\lim_{x\to 0^+} \lim_{\delta\to 0} \int_\delta^\infty t^{x-1} e^{-t} dt$$
Again being sketchy with the limits:
$$\lim_{x\to 0^+} \Gamma(x)=\lim_{\delta\to 0} \int_\delta^\infty t^{-1} e^{-t} dt = \lim_{\delta\to 0}\Gamma(0,\delta)$$
Here $\Gamma(s,z)$ is the upper incomplete gamma function, which for $s=0$ has the following expansion around $z=0$.
$$\Gamma(0,z)=-\gamma-\log z+O(z)$$
Comparing this to the definition of $\gamma$ we obtain
$$\lim_{x\to 0^+} \Gamma(x) = -\lim_{n\to\infty}H_n.$$
My questions are: How legitimate is this? Is this a well-known statement? (I haven't seen it anywhere.) If it is legitimate, what is its significance?
| What is true is that the two equations
$$ H_n = \log(n) + \gamma + \frac1{2n} + \cdots \tag{1}$$
and
$$ \Gamma(1/n) = n - \gamma + \left(\frac{\gamma^2}{2}+\frac{\pi^2}{12}\right)\frac1n + \cdots \tag{2} $$
together imply that
$$ \lim_{n\to\infty} \Gamma(1/n) + H_n - n - \log(n) = 0.
\tag{3} $$
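Equation (3) can be observed numerically; the residual below shrinks toward $0$ as $n$ grows:

```python
import math

def residual(n):
    # Gamma(1/n) + H_n - n - log(n)
    H_n = sum(1.0 / k for k in range(1, n + 1))
    return math.gamma(1.0 / n) + H_n - n - math.log(n)

assert abs(residual(1000)) < abs(residual(10)) < 0.2
assert abs(residual(1000)) < 0.01
```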
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3985210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can a generator of the ring of integers of local fields can be chosen so that it is also a uniformizer at the same time? Let $L/K$ be an extension of local fields. We can find $\alpha$ such that $\mathcal{O}_L=\mathcal{O}_K[\alpha]$. What do we know about this generating element? I think that this $\alpha$ can be selected in such a way that, in addition to the above property, it is also a uniformizing parameter at the same time. But I can not prove it.
| $O_L=O_K[\pi_L]$ iff $f(L/K)=1$ and $O_L=O_K[\zeta_{q_L-1}]$ iff $e(L/K)=1$. In general it is $$O_L=O_K[\zeta_{q_L-1}+\pi_L]$$
Hensel's lemma is needed to construct the root of unity; then (in the non-trivial case $e\ne 1$, $f\ne 1$) we need a closedness and density argument, investigating $\zeta_{q_L-1}+\pi_L-(\zeta_{q_L-1}+\pi_L)^q$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3985302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
People say that |x| is not differentiable, but isn't the derivative of |x|, |x|/x? I have heard that $|x|$ is not differentiable, but I found a derivative that meets all the requirements necessary.
Here is my proof that the derivative of $|x|$ is $|x|/x$. First, note that $|x|$ has a slope of $1$ when x is positive. $|x|/x$ also equals $1$ when $x$ is positive. $|x|$ has a slope of $-1$ when $x$ is negative. $|x|/x$ equals $-1$ because $|x|$ is positive and $x$ is negative. $|x|$ has an undefined slope at $x=0$ and |x|/x is also undefined at $x=0$. Therefore, $|x|$'s derivative is $|x|/x$.
Anything wrong with this?
| If $A\subset\Bbb R$ and $f$ is a function from $A$ into $\Bbb R$, we say that $f$ is differentiable if $f$ is differentiable at every $a\in A$. So, the absolute value function is not differentiable, since it is not differentiable at $0$. Otherwise, you are right, but I would simply say that if we differentiate it at a point $a\ne0$, then we get $1$ if $a>0$ and $-1$ if $a<0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3985434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
How many different equivalence relations with exactly two different equivalence classes are there on a set with $n$ elements I came across with this topic. It looks straight forward for $5$ elements, but what if I want to find how many different equivalence relations with exactly two different equivalence classes are there on a set with $n$ elements?
It says that the set of equivalence relations on a set are in direct bijection with the set of partitions on a set. This made me read about Bell numbers but I'm not sure how help us here. How to solve this problem?
| Pick an element $a\in S$. Then we have an obvious bijection between $$\{\text{equivalence relations on }S\text{ with exactly two equivalence classes}\}$$ and $$\{\text{proper subsets of }S\setminus\{a\}\,\}$$
given by
$$ {\sim}\mapsto \{\,x\in S\setminus\{a\}\mid x\sim a\,\}$$
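To illustrate the count this bijection gives ($2^{n-1}-1$ proper subsets of $S\setminus\{a\}$), here is a quick brute-force check in Python; this is my own addition, and the choice of $S=\{0,\dots,n-1\}$ with fixed element $0$ is arbitrary.

```python
from itertools import combinations

def two_class_partitions(n):
    """Count partitions of {0,...,n-1} into exactly two nonempty blocks:
    one block is {0} together with a proper subset T of {1,...,n-1}."""
    rest = list(range(1, n))
    count = 0
    for k in range(0, n - 1):          # |T| = 0, 1, ..., n-2 (T != whole rest)
        count += sum(1 for _ in combinations(rest, k))
    return count

print([two_class_partitions(n) for n in range(2, 7)])  # [1, 3, 7, 15, 31]
for n in range(2, 12):
    assert two_class_partitions(n) == 2 ** (n - 1) - 1
```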
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3985524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Limit of an integral For $f:[0,1]\rightarrow \mathbb{R}$ continuous, let $0 < a < b.$
\begin{eqnarray*}
L=\lim_{\varepsilon \rightarrow 0}\int_{\varepsilon a}^{\varepsilon b} \frac{f(x)}{x} dx
\end{eqnarray*}
Show that $L=f(0)\ln(\frac{b}{a})$. My first thought was the mean value theorem for integrals, but I don't know how to start; if you can give me some hint to start, I will be grateful.
| Let $0<a<b$. By direct calculation, for any $t>0$, $\int_{ta}^{tb}\frac{f(0)}{x}dx=f(0)\ln(b/a).$
Let $\varepsilon>0$ be given. We choose $\delta\in(0,1)$ such that $|f(x)-f(0)|<\varepsilon\cdot\frac{1}{\ln(b/a)}$
whenever $x\in[0,\delta]$. (Note that $\ln(b/a)>0$.) Let
$t_{0}=\frac{\delta}{b}.$ Let $t\in(0,t_{0})$ be arbitrary.
Note that $x\in[ta,tb]\Rightarrow x\in[0,\delta]$, so we have
estimation
\begin{eqnarray*}
& & \left|\int_{ta}^{tb}\frac{f(x)}{x}dx-f(0)\ln(b/a)\right|\\
& = & \left|\int_{ta}^{tb}\frac{f(x)}{x}dx-\int_{ta}^{tb}\frac{f(0)}{x}dx\right|\\
& \leq & \int_{ta}^{tb}\left|f(x)-f(0)\right|\cdot\frac{dx}{x}\\
& \leq & \varepsilon\cdot\frac{1}{\ln(b/a)}\cdot\int_{ta}^{tb}\frac{dx}{x}\\
& = & \varepsilon.
\end{eqnarray*}
This shows that $\lim_{t\rightarrow0+}\int_{ta}^{tb}\frac{f(x)}{x}dx=f(0)\ln(b/a).$
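As a numerical sanity check of this limit (my own addition; the choices $f=\cos$, $a=1$, $b=2$ and the midpoint rule are arbitrary):

```python
import math

def f(x):
    return math.cos(x)              # any continuous f works; here f(0) = 1

def integral(t, a, b, steps=100000):
    """Midpoint-rule approximation of the integral of f(x)/x over [t*a, t*b]."""
    lo, hi = t * a, t * b
    h = (hi - lo) / steps
    return h * sum(f(lo + (i + 0.5) * h) / (lo + (i + 0.5) * h)
                   for i in range(steps))

a, b = 1.0, 2.0
target = f(0) * math.log(b / a)     # = ln 2
for t in (1e-1, 1e-2, 1e-3):
    print(t, integral(t, a, b))     # approaches ln 2 ≈ 0.6931 as t -> 0+
assert abs(integral(1e-3, a, b) - target) < 1e-4
```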
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3985689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Question about notation used for planar vector fields and differential equations I have learned in my calculus courses that a planar vector field is just a continuous (or smooth) map $\mathbb{R}^2 \to \mathbb{R}^2$, say $V(x,y) = (f(x,y),g(x,y))$ where $f$ and $g$ are continuous (smooth) maps from $\mathbb{R}^2$ to $\mathbb{R}$.
But I've seen an example, in this wiki page on Hilbert's $16^{\text{th}}$ problem, where a planar vector field is written just as a system of $2$ ODEs, in this case
$$ \frac{dx}{dt}=P(x,y), \frac{dy}{dt} = Q(x,y) $$
for polynomials $P,Q$ in two variables.
Could one equivalently write the above vector field as $V(x,y)=(P(x,y),Q(x,y))$? Why is it written with ODEs instead of the other way? What differences are there between these two ways of constructing vector fields?
| Yes, they are equivalent. It is just two different ways to look at the same thing.
Using your notation $V=(P,Q)$, you have an ODE in $\mathbb{R}^2$:
$$
\frac{d\vec x}{dt}=V(\vec x)
$$
where $\vec x:=(x,y)$.
One can write an (evolutionary) equation in higher dimensional space component-wise and have a system of equations.
There are lots of such examples. For instance, on this page the system of Navier-Stokes equations is written in the vector form while the official problem description in the Clay Mathematics Institute writes it component-wise.
This is similar to how you write a linear equation. For example,
$$
\begin{pmatrix}
1&2\\
3&4
\end{pmatrix}
\begin{pmatrix}
x\\
y
\end{pmatrix}
=
\begin{pmatrix}
3\\
7
\end{pmatrix}
$$
is equivalent to
$$
x+2y=3,\quad 3x+4y=7.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3985823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
First-order formula implying the existence of an infinite branch in a tree I want to find a formula which would imply the existence of an infinite branch in a tree. I know beforehand that my tree has countably many vertices (in fact, each vertex can possibly have a countable number of sons, so we cannot directly apply the Kőnig's lemma).
My question is: quantifying only over vertices (and edges, for that matter), is it possible to write a logical formula showing that the tree has an infinite branch (i.e. a tree satisfies the formula iff it has an infinite branch) ?
It is not hard to find one implying the existence of branches of arbitrary length, but this is not what I am looking for - although I am not even sure such a property can be expressed in first-order logic.
If this is impossible, are there some other assumptions that would need to be made in order to be able to write such a formula ? (For example, if the tree had a finite branching factor - which can be expressed by a first-order formula - then we could apply the Kőnig's lemma. As I already said, this is not the case here, but does there exist other hypothesis that would achieve the same result ?)
| As it turns out, there is no easy way to answer my question. Indeed, one can find
here some sets of trees that are placed in the analytic hierarchy (rather than the arithmetic one), and I do not have any way of "discarding" those trees with the elements given in my question, meaning that the general problem is much harder than what can be dealt with in FO-logic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3986020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conjugation action on a semi-direct product $N\rtimes H$
Let $G$ be a finite group, with $H\le G$ and $N\trianglelefteq G$ such that $G=N\rtimes H$.
Suppose that the conjugation action of $H$ on $N$ induces $2$ orbits in $N$. Prove the following:
a) There exists a prime $p$ such that $o(n)=p$ for every $1\ne n\in N.$
b) Show that $N$ is abelian.
I said that if $G=N\rtimes H$, then we have the conditions $(*)\begin{cases}N \cap H=\{1_G\}\\NH=G \end{cases}$. For a generic element $n \in N$, the stabilizer of $n$ in the action $\varphi:H\to\operatorname{Aut}(N)$ is such that $h^{-1}nh=n\iff h\in C_H(n)$, whose order divides $|H|$.
Furthermore, if the number of orbits is two, the quotient $H/Ker(\varphi)\cong \varphi(H)\le\operatorname{Aut}(N)$ has order two, with $Ker(\varphi):=\underset{n\in N}\bigcap C_H(n)$. I'm confusing things because the map induced by the semi-direct product has the same form of the map induced by the conjugation action and I'm not sure about how using the Hypothesis $(*)$.
If $N$ is abelian it means that its centralizer is the entire $G$ (with $C_G(H)\le Z(G)$), and in particular, considering the action restricted to $H$, $C_H(N)$ should correspond to $H$.
Thank for the help and for your patience.
| Under conjugation one orbit is just the identity. So all other elements of $N$ are conjugate in $G$ and therefore have the same order.
Let this common order be divisible by a prime $p$. The $p$th power of a non-identity element of $N$ is still in $N$ but has strictly smaller order, so it cannot be one of the non-identity elements (which all share the same order); it has to be the identity. Hence every non-identity element of $N$ has order exactly $p$.
Since $N$ is a $p$-group, $Z(N)$ is non-trivial and is a characteristic subgroup of $N$. Therefore $Z(N)$ is invariant under the conjugation action of $H$; since the non-identity elements of $N$ form a single orbit and $Z(N)\ne 1$, all elements of $N$ are in $Z(N)$, so $N$ is abelian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3986151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $x_0$ be a transcendental number, $x_{n+1}=\frac{3-x_n}{x_n^2+3x_n-2}$. What is the limit of $x_n$? Let $x_0$ be a transcendental number, $$x_{n+1}=\frac{3-x_n}{x_{n}^{2}+3x_{n}-2}$$
What is the limit of $x_{n}$?
Choose $x_0=\pi$, and is seems that the limit of $x_n$ is $-1$. But what is the proof for this $\pi$ and other numbers? Let
$$f(x)=\frac{3-x}{x^{2}+3x-2}$$
The following may be helpful.
$$f'(x)=\frac{(x-7)(x+1)}{(x^{2}+3x-2)^2}$$
$$f(x)-x=\frac{-(x-1)(x+1)(x+3)}{x^{2}+3x-2}$$
$$f(x)+1=\frac{(x+1)^{2}}{x^{2}+3x-2}$$.
| Let $f(x) = \frac{3-x}{x^2+3x-2}$. If $\lim x_n$ exists, then $L = \lim x_{n+1}=\lim x_n$, so set $$L=f(L)$$
There are three solutions to this: $L = -3, -1, 1$. In order to find the correct one, note that for a small neighborhood around $-3$ you have $|f(x)+3|>|x+3|$, and around $1$ you have $|f(x)-1|>|x-1|$: near both $-3$ and $1$ the distance from the fixed point grows under iteration, so these points repel. Around $-1$, on the other hand, you have $|f(x)+1|<|x+1|$, so the distance shrinks (this is not a rigorous proof but more of an intuitive one).
Thus, for "most" $x_0$, it will converge to $-1$. The only way it will converge to $-3$ or $1$ is if it converges exactly in a finite number of iterations. But for that to be true, it has to be a solution to $$f^n(x_0) = -3$$
(or $1$)
for some $n$, meaning that it must be algebraic. Therefore, for all transcendental, the limit will be $-1$.
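A quick numerical experiment (my own addition, using $x_0=\pi$ as a floating-point stand-in for a transcendental starting value) illustrates the convergence to $-1$:

```python
def f(x):
    return (3 - x) / (x * x + 3 * x - 2)

x = 3.141592653589793      # x0 = pi
for _ in range(100):       # far more iterations than needed
    x = f(x)
print(x)                   # very close to -1
assert abs(x + 1.0) < 1e-9
```

Since $f(x)+1=\frac{(x+1)^2}{x^2+3x-2}$, the convergence near $-1$ is in fact quadratic, which is why so few iterations already give machine precision.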
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3986329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Are these two curves birationally equivalent? I'm trying to work out if the following two curves are birationally equivalent:
$$Y^2 = 2X^4 + 17X^2 + 12$$
$$2Y^2 = X^4 - 17$$
(I'm considering the above as the affine shorthands for the projective curves they represent)
I saw elsewhere online that I could use the following code in magma:
K<x,y>:=AffineSpace(Rationals(),2);
C1A:=Curve(K,2*x^4+17*x^2+12-y^2);
C2A:=Curve(K,x^4-17-2*y^2);
C1:=ProjectiveClosure(C1A);
C2:=ProjectiveClosure(C2A);
IsIsomorphic(C1,C2);
However this gives me a runtime error.
Can anyone see an error with my code/approach above? Alternatively, if there's another way I can check if two curves like the above are birationally equivalent either by hand or by using Magma / Sage or something similar that would be really helpful.
| In PARI/GP:
? ellinit(ellfromeqn(2*x^4+17*x^2+12-y^2)).j
%1 = 384200066/111747
? ellinit(ellfromeqn(x^4-17-2*y^2)).j
%2 = 1728
The curves are not birationally equivalent because they have different $j$-invariants.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3986554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solving a linear system of equations use row operations Can somebody help me find the simplest way to use elementary operations of multiplying one row by another and scalar multiplication of a row to find the solution to the system of equations?
$$2x_1+4x_2-x_3=7$$ $$x_1+x_2-x_3=0$$ $$3x_1-2x_2+3x_3=8$$
Every time I solve this it feels like it could have been done simpler... Thanks!
My attempt:
I first exchanged row 1 and row 2 to get:
$$x_1+x_2-x_3=0$$ $$2x_1+4x_2-x_3=7$$ $$3x_1-2x_2+3x_3=8$$
Then I did $-2R_1+R_2 \rightarrow R_2$ and $-3R_1+R_3 \rightarrow R_3$ to get:
$$x_1+x_2-x_3=0$$ $$0x_1+2x_2+x_3=7$$ $$0x_1-5x_2+6x_3=8$$
But now I can't use row 2 to eliminate the $x_2$ in row 3 without introducing fractions. I realize i could use back substitution, but I want to use row operations. Was there a different way I could have carried out the row operations to make this nicer? Thank you!
| We are given
$$\begin{pmatrix}
2 & 4 & -1\\
1 & 1 & -1\\
3 & -2 & 3
\end{pmatrix}
\begin{pmatrix}
x_1\\
x_2\\
x_3
\end{pmatrix}=
\begin{pmatrix}
7\\
0\\
8
\end{pmatrix}$$
Take $R_1 - 2R_2 \to R_2$ and $-3R_1+2R_3 \to R_3$ to form
$$\begin{pmatrix}
2 & 4 & -1\\
0 & 2 & 1\\
0 & -16 & 9
\end{pmatrix}
\begin{pmatrix}
x_1\\
x_2\\
x_3
\end{pmatrix}=
\begin{pmatrix}
7\\
7\\
-5
\end{pmatrix}$$
Then let $8R_2 + R_3 \to R_3$ to obtain
$$\begin{pmatrix}
2 & 4 & -1\\
0 & 2 & 1\\
0 & 0 & 17
\end{pmatrix}
\begin{pmatrix}
x_1\\
x_2\\
x_3
\end{pmatrix}=
\begin{pmatrix}
7\\
7\\
51
\end{pmatrix}$$
We then find that $x_1=1,x_2=2,x_3=3$.
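As a quick check (my own addition) that $(1,2,3)$ really solves the original system:

```python
A = [[2, 4, -1],
     [1, 1, -1],
     [3, -2, 3]]
b = [7, 0, 8]
x = [1, 2, 3]

# Residual of the proposed solution: each row should give exactly 0.
residual = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
print(residual)   # [0, 0, 0]
assert residual == [0, 0, 0]
```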
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3986688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Show that $2\le U_n \le 3$ for all $n$ Given $U_n = (1 + 1/n)^n$ , $n = 1,2,3,\ldots\;.$
Show that $2 \le U_n \le 3$ for all n
This is what I've done. Can anyone help?
$$\begin{align*}
a_n=\left(1+\frac1n\right)^n&=\sum_{r=0}^n{^nC_r}(1)^{n-r}\left(\frac1n\right)^{r}\\
&=1+\frac{n}n+\frac1{2!}\frac{n(n-1)}{n^2}+\frac1{3!}\frac{n(n-1)(n-2)}{n^3}\\
&\quad\,+\ldots+\frac1{n!}\frac{n(n-1)\ldots(n-n+1)}{n^n}
\end{align*}$$
Since $\forall k\in\{2,3,\ldots,n\}$: $\frac1{k!}\le\frac1{2^{k-1}}$, and $\frac{n(n-1)\ldots\big(n-(k-1)\big)}{n^k}<1$,
$$\begin{align*}
a_n&<1+\left(1+\frac12+\frac1{2^2}+\ldots+\frac1{2^{n-1}}\right)\\
&<1+\left(\frac{1-\left(\frac12\right)^n}{1-\frac12}\right)<3-\frac1{2^{n-1}}<3
\end{align*}$$
| Hint Let
$$U_n = (1 + 1/n)^n \\
V_n = (1 + 1/n)^{n+1}$$
It is clear that $U_n < V_n$ for all $n$. Use Bernoulli inequality to show that
$$\frac{U_{n+1}}{U_n} \geq 1 \\
\frac{V_{n}}{V_{n+1}} \geq 1$$
Find some $m$ so that $V_m \leq 3$.
Deduce from here that
$$U_1 \leq U_n \leq V_m$$ for all $n$.
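A numerical illustration of the hint (my own sketch; the monotonicity asserts are floating-point spot checks, not a proof):

```python
def U(n):
    return (1 + 1 / n) ** n

def V(n):
    return (1 + 1 / n) ** (n + 1)

# U_n is increasing, V_n is decreasing, and U_n < V_n for every n;
# m = 5 already works: V_5 = 1.2**6 = 2.985984... <= 3.
print(V(5))
for n in range(1, 1000):
    assert U(n) < U(n + 1) < V(n + 1) < V(n)
    assert 2 <= U(n) <= 3          # the desired bounds (U_1 = 2 exactly)
```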
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3986812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is it inuitively correct to say that $\lim_{n \to \infty} \frac{x}{n} = \mathrm{d}x$? We know that from the Riemann sum that $\Delta x = \mathrm{d}x$ where $n \to \infty$ and $\Delta x = \dfrac{b - a}{n}$. If, however, $x$ represents some length of interval, can we also say that $\displaystyle\lim_{n \to \infty}\dfrac{x}{n} = \mathrm{d}x$?
| I think this is quite a tricky topic to talk about because of how people interpret things vs how they are actually defined. I find it best to think of $dx$ as a separate value, rather than a function or scaling of the variable $x$. While we are talking about a small change in $x$, the key word here is change: rather than $x/n\to0$, what we really want is the change in $x$, i.e. the width of the interval we are looking at, to be very small, not the initial or end values.
One nice way to think of this is by looking at your average integral:
$$I_{a,b}[f]=\int_a^bf(x)dx$$
now think of the integral symbol as a stretched-out $S$, which is the first letter of sum. Inside the integral (the part we are summing) is $f(x)\times dx$. In other words, we can think of an integral as a sum of rectangles over our domain $[a,b]$, where the height of each one is $f(x)$ and the width is $dx$. However, for our error to tend to zero we require the rectangles to fit the function $f(x)$ well, and actually tend to it exactly. The best way for us to do this is to reduce the width $dx$ to a very small value close to zero, but it cannot actually equal zero or be "infinitely small". While this may seem strange, it does not actually matter, as we do not need a numerical value for $dx$ (unless using some approximation method, in which case the statement is not strictly true). Hope this helps
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3987107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find a generator of the ideal generated by $3$ and $2-2\sqrt{-2}$ in ${\mathbb{Z}\left[\sqrt{-2}\right]}$ Exercise: Find a generator of the ideal generated by $3$ and $2-2\sqrt{-2}$ in ${\mathbb{Z}\left[\sqrt{-2}\right]}$.
I know that such a generator exists since $\mathbb{Z}\left[\sqrt{-2}\right]$ is a Euclidean domain and thus a PID, with a generator being given by an arbitrary element of $I$ with minimal norm, where $I$ denotes the ideal generated by $3$ and $2-2\sqrt{-2}$. Furthermore, $3$ has norm $9$ and $2-2\sqrt{-2}$ has norm $12$.
How do I continue?
| We want to find $\alpha\in \mathbb{Z}[\sqrt{-2}]$ such that $I = (3, 2 - 2\sqrt{-2}) = (\alpha)$. For such $\alpha$, there exist $\beta, \gamma \in \mathbb{Z}[\sqrt{-2}]$ such that $3 = \alpha\beta, 2 - 2\sqrt{-2} = \alpha \gamma$. If we observe their norms, we have $9 = N(\alpha)N(\beta)$ and $12 = N(\alpha)N(\gamma)$, so $N(\alpha) = 1$ or $3$. When $N(\alpha) = 1$, $\alpha = \pm 1$, and one can show that $1 \not \in I$ in several ways (brute force actually works). There are only finitely many $\alpha \in \mathbb{Z}[\sqrt{-2}]$ with $N(\alpha) = 3$ (up to units there are only two, namely $1 \pm \sqrt{-2}$), and only one of them satisfies both $\alpha \mid 3$ and $\alpha \mid 2 - 2\sqrt{-2}$. Now we have $(3, 2 - 2\sqrt{-2}) \subseteq (\alpha)$, and you also need to show the opposite inclusion by finding $\beta_1, \beta_2$ with $\alpha = \beta_{1}\times 3 + \beta_{2} \times (2 - 2\sqrt{-2})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3987419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why are the values of $\sin (\theta)$ and $\csc(\theta)$ positive in the second quadrant? Why is the value of $\sin (\theta)$ and $\csc(\theta)$ positive in the second quadrant?
Similarly why the value of other trigonometric ratios so in other quadrants?
| Recall the picture of the unit circle.
Now, the unit circle definition tells us that $\cos \theta$ and $\sin\theta$ are the $x$ and $y$ coordinates of the point whose radius makes angle $\theta$ with the positive $x$-axis.
Therefore, in the first quadrant ($0<\theta<90^\circ$), both $\cos\theta$ and $\sin\theta$ are positive, and in the second quadrant ($90^\circ<\theta<180^\circ$), $\cos\theta$ is negative and $\sin\theta$ is positive, and so on for third and fourth quadrants.
Define:
$$\sec\theta=\frac{1}{\cos\theta}\text{ and }\csc\theta=\frac{1}{\sin\theta}$$
Thus, the sign of $\sec\theta$ is the same as $\cos\theta$ and the sign of $\csc\theta$ is the same as $\sin\theta$.
Define:
$$\tan\theta=\frac{\sin\theta}{\cos\theta}\text{ and }\cot\theta=\frac{\cos\theta}{\sin\theta}$$
Thus, the signs of $\tan\theta$ and $\cot\theta$ are positive, if $\sin\theta$ and $\cos\theta$ have same signs, and negative, if different.
For example, in the second quadrant, $\cos\theta$ is negative and $\sin\theta$ is positive. So $\sec\theta$ is negative, $\csc\theta$ is positive, and $\tan\theta$ and $\cot\theta$ are negative.
Hope this helps. Ask anything if not clear :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3987565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Number of possible matrices using given conditions I have a question, that goes as follows:
The number of all matrices $A=[a_{ij}], 1 \leq i,j \leq 4$ such that $a_{ij}= \pm1 $ and $\sum_{i=1}^4a_{ij}= \sum_{j=1}^4a_{ij}=0$ is?
So, I think this question means to say that the sum of all elements in a row, and all elements in a column is zero, and the possible value of the elements being only $+1,-1$. How do I form the combinations in this case?
I tried to find the number of solutions to the equation $a_{11}+a_{12}+a_{13}+a_{14}=0$ and then repeating the same process for every row and column but that didn't get me anywhere.
The answer says 90 and I cannot seem to be able to get there. I have seen another question asking the same, but the answer isn't very clear to me, and that question being 4 years old, I am posting again.
| The problem is the same as placing two $1$s and two $0$s in each row and column (writing $0$ for $-1$).
Put two $1$s in a row. $\binom{4}{2}$ ways. Pick one of the columns in which a $1$ is placed and place another $1$. $\binom{3}{1}$ ways.
There is a $2 \times 2$ submatrix in which three $1$s are placed now. If the fourth square also contains a $1$, the entire matrix is determined. Thus one matrix is obtained.
$$
\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & & & \\
1 & & (1,0)& \\
0 & & &
\end{bmatrix}
$$
If the fourth square contains a zero, there are two ways to place a $1$ in that particular row (the one containing the zero in place of the fourth $1$).
$$
\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & & & \\
1 & 1& 0& 0\\
0 & & &
\end{bmatrix}
$$
Now in the untouched column (fourth one above), there is only one way to put two $1$s. The remaining $2\times2$ submatrix can be filled in two ways by choosing a position for $1$.
$$
\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & & & 1 \\
1 & 1 & 0 & 0\\
0 & & & 1
\end{bmatrix}
$$
Hence final answer $$\binom{4}{2}\cdot\binom{3}{1}\cdot(1+2\cdot2)=90$$
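The count can also be confirmed by brute force (my own addition): enumerate all choices of four rows, each with two $+1$s and two $-1$s, and keep those whose column sums all vanish.

```python
from itertools import combinations, product

# All rows of length 4 with two +1s and two -1s (row sum 0): C(4,2) = 6 rows.
rows = [tuple(1 if j in pos else -1 for j in range(4))
        for pos in combinations(range(4), 2)]

count = 0
for mat in product(rows, repeat=4):          # 6^4 = 1296 candidate matrices
    if all(sum(mat[i][j] for i in range(4)) == 0 for j in range(4)):
        count += 1
print(count)   # 90
assert count == 90
```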
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3987669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cardinality of the equivalence class of a given braid word? The symmetric group $S_n$ has presentation $$S_n = \{ s_1,...,s_{n-1} | s_is_{i+1}s_i=s_{i+1}s_is_{i+1}, \text{ } s_i^2=1 , \text{ and } s_is_j = s_js_i \text{ for } |i-j| \geq 2\}$$
If we take away the relation $s_i^2=1$ we get the braid group:
$$B_n = \{ s_1,...,s_{n-1} | s_is_{i+1}s_i=s_{i+1}s_is_{i+1},\ \text{ } s_is_j = s_js_i \text{ for } |i-j| \geq 2\}$$
Thus elements $b \in B_n$ are equivalence classes, with $b \cong c$ if the braid word that represents $b$ differs from the braid word representing $c$ by finite many applications of the relations.
I am trying to understand, given an element $b \in B_n$, what is its cardinality as an equivalence class as an element of $B_n$?
For example, let $b=(1,2,3,1,2,3)$. Then I'm pretty sure, by brute force, I've calculated that there are $7$ different braid words that can represent $b$:
$(1,2,3,1,2,3)$
$(1,2,1,3,2,3)$
$(2,1,2,3,2,3)$
$(2,1,3,2,3,3)$
$(2,3,1,2,3,3)$
$(1,2,1,2,3,2)$
$(2,1,2,2,3,2)$
Does anyone know how I could figure this out in a more sophisticated way? I know word problems such as this have been well studied. Also, is there a systematic way to determine when two braid words are equivalent? Thank you.
| The equivalence classes are all countably infinite.
In fact, from the presentations one reads off a surjective homomorphism $B_n \to S_n$ taking each generator of $B_n$ to the same-named generator of $S_n$. Your equivalence classes are simply the cosets of the kernel of this homomorphism. Since $B_n$ is countably infinite and $S_n$ is finite, and since the cosets all have the same cardinality (a fact true of every group homomorphism), it follows that the cosets are all countably infinite.
By the way, I'll add that the notation of your "brute force calculation" does not make much sense to me; I don't know what those sequences of natural numbers in that calculation are supposed to represent, either as elements of $S_n$ or of $B_n$. Elements of $S_n$ or of $B_n$ can, by definition of a generating set, each be written as a finite word in the generators. For example each element of $B_3$ or of $S_3$ can all be written as a word in the letters $s_1,s_2$; a specific example of such a word is
$$s_1^2s_2^{-1}s_1^{4}s_2^3 = s_1 s_1 s_2^{-1} s_1 s_1 s_1 s_1 s_2 s_2 s_2
$$
Given two such words, to say they represent the same element of $B_3$ means, as you say, that you can get from one word to the other by applying the relations of $B_3$. On top of that, to say those two words represent the same element of $S_3$ means that you can get from one to the other by applying the larger set of relations of $S_3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3987798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proof that the canonical mapping of $V$ into $V^{**}$ is not bijective Let $V$ be an infinite vector space over a skew field $K$ and $V^{**}$ its bidual. Write $c_V$ for the canonical injection of $V$ into $V^{**}$. I want to show that $c_V$ is not surjective.
We can restrict to the case where $V:= K_s^{(L)}$ for some infinite set $L$. In that case there is an isomorphism $\varphi$ of $V^*$ onto $K^L$. Let $(a_\lambda)_{\lambda\in L}$ be the canonical basis of $V$ and $(a^*_\lambda)_{\lambda\in L}$ be the corresponding family of coordinate forms in $V^*$. Let $F'$ be the span of the family $(a^*_\lambda)_{\lambda\in L}$ in $V^*$. Then $V^*\ne F'$, otherwise $K^{(L)}_d=\varphi(F')=\varphi(V^*)=K^L$. This implies that there exists a hyperplane $H'$ of $V^*$ such that $F'\subset H'$. Clearly $H'\ne V^*$; it follows that $(V^*/H')^*\ne0$ and $(V^*/H')^*\cong H''$ where $H''$ is the orthogonal of $H'$ in $V^{**}$. Write $F$ for the orthogonal of $F'$ in $V$. At this point the author says that
$$(H''\cap c_V(V))\subset c_V(F)=0.$$
I don't understand why $c_V(F)=0$? By defintion
$$F:=\{x\in V\ |\ (\forall x^*)(x^*\in F'\implies\langle x,x^*\rangle=0)\}.$$
Take $x\in F$ and $x^*\in V^{*}$. Why would $c_V(x)(x^*)=\langle x,x^*\rangle=0$?
| This is just because $F$ itself is $0$. Indeed, if $x\in F$, then in particular $\langle x,a_\lambda^*\rangle=0$ for each $\lambda$, which means every coordinate of $x$ is $0$ so $x$ is $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3987905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is $\cos(x^2)$ a solution of a 2nd order linear ODE defined on all of $\mathbb R$? I've been asked in an assignment whether the function $\cos(x^2)$ can be the solution of a 2nd order homogeneous linear equation which satisfies the existence and uniqueness theorem on all of $\mathbb R$. On one hand I cannot find a reason why it wouldn't be a solution of such an equation (unlike $\sin(x^2)$, which was easy) since there is no point where both the function and its derivative vanish, on the other hand I cannot for the life of me find an example of such an equation. Assistance would be appreciated.
| Your function is a solution of $$xy''-y'+4x^3y=0.$$
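One can check this numerically (my own sketch; the derivative formulas $y'=-2x\sin(x^2)$ and $y''=-2\sin(x^2)-4x^2\cos(x^2)$ are computed by hand):

```python
import math

def y(x):   return math.cos(x * x)
def yp(x):  return -2 * x * math.sin(x * x)                             # y'
def ypp(x): return -2 * math.sin(x * x) - 4 * x * x * math.cos(x * x)   # y''

# x*y'' - y' + 4*x^3*y should vanish identically
for x in (-3.7, -1.0, 0.0, 0.5, 2.0, 10.0):
    r = x * ypp(x) - yp(x) + 4 * x ** 3 * y(x)
    assert abs(r) < 1e-9
print("ok")
```

Note that the leading coefficient $x$ of the equation vanishes at the origin, which is relevant to the question's requirement about the existence and uniqueness theorem.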
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to calculate $\int_{C}{}\frac{\bar{z}}{z+i} dz$ if $C$ is a circle: $|z+i|=3$? The task:
$$
\text{Calculate } \int_{C}{}f(z)dz\text{, where } f(z)=\frac{\bar{z}}{z+i}\text{, and } C \text{ is a circle } |z+i|=3\text{.}
$$
Finding the circle's center and radius:
$$
|z+i|=|x+yi+i|=|x+(y+1)i|=3
\\
x^2+(y+1)^2=3^2
$$
Parametrizing the circle:
$$
z(t)=i+3e^{-2\pi i t}
$$
Now I need to calculate this integral:
$$
\int_{0}^{1}{f(z(t))z'(t)dt}=
2\pi\int_{0}^{1}{
\frac{1-3ie^{-2\pi i t}}{e^{4 \pi i t}}dt
}
$$
Unfortunately I calculated this integral, and it's equal to $0$. Is this correct? I don't think so. Where did I go wrong? Maybe I made a mistake when calculating the integral - what would be the best way to calculate it?
| You cannot directly use the Residue Theorem in this case since $\frac{\overline{z}}{z+i}$ is not holomorphic, so parametrizing the circle is the right approach. Note, however, that the circle $|z+i|=3$ is centered at $-i$, not $i$, so the parametrization should be $z(t)=-i+3e^{2\pi i t}$. On the circle we have $\overline{z+i}=9/(z+i)$, hence $\bar z=\frac{9}{z+i}+i$ and $\int_{C}\frac{\bar z}{z+i}\,dz=\int_{C}\Big(\frac{9}{(z+i)^2}+\frac{i}{z+i}\Big)dz=2\pi i\cdot i=-2\pi$ for the counterclockwise orientation. The particular integral you wrote down does evaluate to $0$, since
$$\int_{0}^{1}e^{2k\pi ti}dt=\left.\frac{1}{2\pi ki}e^{2k\pi ti}\right|_0^{1}=\frac{1}{2\pi ki}(1-1)=0$$
for any nonzero integer $k$, and so
\begin{align*}
2\pi\int_{0}^{1}\frac{1-3ie^{-2\pi it}}{e^{4\pi i t}}dt&=2\pi\int_{0}^{1}e^{-4\pi i t}dt-6\pi i\int_{0}^{1}e^{-6\pi it}dt\\
&=2\pi(0)-6\pi i(0)\\
&=0
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
$\lim_{x \to \infty} \frac {\log(|\sin(x)|)}{\log(x)}$ Does $\lim_{x \to \infty} \frac {\log(|\sin(x)|)}{\log(x)}$ exist, considering that $\sin(x) = 0$ when $x=n\pi$?
| The limit inferior is $-\infty$, obtained near the points where $\sin=0$, while the limit superior is $0$, obtained at the points where $|\sin|=1$. The limit does not exist because the two are not equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is $A$ a perfect square? Consider
$$A=a^2+2ab^2+b^4-4bc-4b^3,$$
where $a,b,c\in\mathbb{Z}$ and $b\neq0$ such that $b|a$ and $b|c$, so $b|A$.
Now I want to know that Is $A$ a perfect square?
| No, $A$ is not a perfect square for such $a,b,c$ values. As a counterexample, take $a=b=c=1$, for which $A=-4$.
EDIT: The answer below corresponds to the original question "Can $A$ be a perfect square?".
Yes, it can be 0. Let $a=bn$ and $c=bm$. Then:
$$\begin{align}
A&=a^2+2ab^2+b^4-4bc-4b^3\\
&=b^2n^2+2b^3n+b^4-4b^2m-4b^3\\
&=b^2 \big((n+b)^2-4(m+b)\big)
\end{align}$$
So any values that make $(n+b)^2-4(m+b)$ a square are valid ones. So, for instance, any $n=-b$ and $m=-b$ are enough, but you can find many others.
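A quick computational check of the factorization and the counterexample (my own addition; variable names follow the answer):

```python
import random

def A(a, b, c):
    return a * a + 2 * a * b * b + b ** 4 - 4 * b * c - 4 * b ** 3

# With a = b*n and c = b*m, the identity A = b^2 * ((n+b)^2 - 4*(m+b)) holds.
random.seed(0)
for _ in range(1000):
    b = random.randint(-20, 20) or 1          # keep b nonzero
    n = random.randint(-20, 20)
    m = random.randint(-20, 20)
    assert A(b * n, b, b * m) == b * b * ((n + b) ** 2 - 4 * (m + b))

# The counterexample a = b = c = 1 from above:
print(A(1, 1, 1))   # -4, not a perfect square
assert A(1, 1, 1) == -4
```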
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
proving limits exist Prove $\lim\frac{ 4n^3+3n}{n^3-6}= 4$.
I basically need to determine how large $n$ has to be to imply
$$\frac{3n+24}{n^3-6}<\epsilon$$
the idea is to upper bound the numerator and lower bound the denominator.
For example, since $3n + 24 ≤ 27n$, it suffices for us to get $\frac{27n}{n^3-6} < ε$.
it is inferred, thereafter, that all we need is $\frac{n^3}2 ≥6$ or $n^3 ≥12$ or $n>2$.
this is where I have a problem. I don't understand how we got to the point where all we need is $\frac{n^3}2≥6$.
| If $\frac{n^3}2\ge 6$ then
$$n^3-6\ge n^3-\frac{n^3}2=\frac{n^3}2 $$
is your desired monomial lower bound for the denominator.
In fact, we are allowed to be wasteful and may begin right away with, say, assuming $n>1000$.
That makes $3n+24<3.024 n$ and $n^3-6>0.999999994n^3$ and so the error term certainly $<\frac{3.025}{n^2}$. But that doesn't matter for the sole purpose of proving the limit where any other bound of the form $<\frac C{n^2}$ (or even weaker) is good enough.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Does the standard normal distribution have a heavy right tail? I read about heavy right tail and I saw that a distribution is said to have a heavy right tail if its tail probabilities vanish slower than any exponential
$$\forall t>0: \lim_{x\to\infty}e^{tx}P(X>x)=\infty.$$
Does the standard normal distribution have a heavy right tail?
| The definition of a heavy right tailed distribution is that the moment generating function $M_X(t)$ is infinite for all $t > 0$ (see here). This is not the case for the standard normal distribution, where we have $$M_X(t)=\exp\left(\frac{t^2}{2}\right).$$
What you are writing about the limit $\lim_{x\to\infty}e^{tx}P(X>x)=\infty$ is only a necessary condition (an implication of the definition). But since it was part of your question: To calculate $\lim_{x\to\infty}e^{tx}P(X>x)$ you can use $P(X>x)=1-F_X(x)$ and then apply L'Hôpital's rule:
$$\lim_{x\to\infty}e^{tx}P(X>x)=\lim_{x\to\infty}\frac{-\exp(-x^2/2)/\sqrt{2\pi}}{-t\cdot e^{-tx}}=0.
$$
So by calculating this you also see that the standard normal distribution is not a heavy right tailed distribution.
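This can also be seen numerically (my addition): using $P(X>x)=\tfrac12\operatorname{erfc}(x/\sqrt2)$ for the standard normal tail and, say, $t=1$, the products $e^{tx}P(X>x)$ collapse toward $0$.

```python
import math

def tail(x):
    # P(X > x) for a standard normal variable
    return 0.5 * math.erfc(x / math.sqrt(2))

t = 1.0
vals = [math.exp(t * x) * tail(x) for x in (5, 10, 20, 30)]
# e^{tx} P(X > x) decreases toward 0: the tail is lighter than every exponential.
assert all(a > b for a, b in zip(vals, vals[1:]))
assert vals[-1] < 1e-50
```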
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An interesting problem on progression and series In my text book I found a problem which asked to sum of the following series $$1\times 2\times 3+2\times 3 \times 4+ 3\times 4\times 5+\cdots + n(n+1)(n+2)$$ which I found to be $\frac{n(n+1)(n+2)(n+3)}{4}$ which is indeed true.
Now as a general thought it came to my mind: what if the problem asks to find the closed form of this$-$ $$\sum_{k=1}^{n}\underbrace{k\times(k+1)\times(k+2)\times(k+3)\times(k+4)\cdots\times(k+m-1)}_{m \text { terms}}$$ where $m$ is any integer; for instance, in the aforementioned problem it was $3$. Now, following the pattern, our intuition suggests that the answer should be $$\boxed{\frac{n(n+1)(n+2)\cdots(n+m)}{m+1}}$$ Now, in order to check, one can easily say that for $m=1$ it's obviously true. It's also true for $m=2$. So now my natural question is: is the closed form true for all $m \in\mathbb{N}$? I have tried induction, but at the end I mess it up; the idea of proving it that way seems to have been in vain, and I have no further idea how to proceed. Any help? Thanks for your attention.
| The result that $$\frac{n(n+1)(n+2)\cdots(n+m)}{m+1}-\frac{(n-1)n(n+1)\cdots(n+m-1)}{m+1}$$ $$=n(n+1)\cdots(n+m-1)$$
can be seen easily by taking out (mentally) the common factor $n(n+1)\cdots(n+m-1)$ of all terms.
You can then prove the result either by induction or by the method of differences.
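A brute-force verification of the conjectured closed form for several values of $m$ and $n$ (added for illustration):

```python
from math import prod

def rising(k, m):
    # k (k+1) ... (k+m-1), a product of m consecutive integers
    return prod(range(k, k + m))

for m in range(1, 7):
    for n in range(1, 30):
        s = sum(rising(k, m) for k in range(1, n + 1))
        # closed form: n(n+1)...(n+m)/(m+1)
        assert s * (m + 1) == prod(range(n, n + m + 1))
```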
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
$ A=\begin{pmatrix} 0 & 2 & 0\\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}$, calculate $e^A$ I have encounter a question in my book , it was For $ A=\begin{pmatrix}
0 & 2 & 0\\
0 & 0 & 3 \\
0 & 0 & 0
\end{pmatrix}$, calculate $e^A$
My solution attempt: I tried to find its eigenvalues, and I found that the only eigenvalue is $0$ and the eigenspace is spanned by $(1,0,0)$. Hence, it cannot be diagonalized.
Then, I tried to use the Taylor series of the exponential and it gives me $ e^A=\begin{pmatrix}
1 & 2 & 0\\
0 & 1 & 3 \\
0 & 0 & 1
\end{pmatrix}$. However, the answer is $ e^A=\begin{pmatrix}
1 & 2 & 3\\
0 & 1 & 3 \\
0 & 0 & 1
\end{pmatrix}$.
What am I missing? Can you help me?
| Note that $$A^2=\begin{pmatrix}
0 & 0 & 6\\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix},$$ which is where the $3$ in the upper-right corner comes from.
Since $A^3$ is just the $3\times 3$ matrix of zeroes, $e^A=I+A+\frac12 A^2.$
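Since the series terminates, this is easy to check with a few lines of plain Python (my addition):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 2, 0],
     [0, 0, 3],
     [0, 0, 0]]
A2 = matmul(A, A)
assert A2 == [[0, 0, 6], [0, 0, 0], [0, 0, 0]]
assert matmul(A2, A) == [[0, 0, 0]] * 3  # A^3 = 0, so the series stops

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
expA = [[I[i][j] + A[i][j] + A2[i][j] / 2 for j in range(3)] for i in range(3)]
assert expA == [[1, 2, 3], [0, 1, 3], [0, 0, 1]]
```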
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3988984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is every normal subgroup of $G$ of this form?
Let $G = H \times K$. If $N$ is a normal subgroup of $H$ and $L$ is a normal subgroup
of $K$, show that $N \times L$ is a normal subgroup of $G$. Is every normal subgroup of $G$ of
this form?
My attempt:
Let $G = H \times K$. If $N$ is a normal subgroup of $H$ and $L$ is a normal subgroup
of $K$, show that $N \times L$ is a normal subgroup of $G$
Since $e\in N$ and $e\in L$, then $(e,e)\in N\times L$. If $(n_1,l_1),(n_2,l_2) \in N\times L$, then $(n_1,l_1)(n_2,l_2)=(n_1n_2,l_1l_2)\in N\times L$. Also, if $n\in N$, then $n^{-1}\in N$ and if $l\in L$, then $l^{-1}\in L$, so $(n,l)(n^{-1},l^{-1})=(nn^{-1},ll^{-1})=(e,e)$. Thus $N\times L$ has inverses and therefore, $N\times L \leq G$.
For normality, if $(h,k)$ is an element of $G$ and $(n,l)\in N\times L$, then $(h^{-1},k^{-1})(n,l)(h,k)=(h^{-1}nh,k^{-1}lk)$. Since $N$ is normal in $H$ and $L$ is normal in $K$, we have $h^{-1}nh\in N$ and $k^{-1}lk\in L$, so $(h^{-1}nh,k^{-1}lk) \in N\times L $ and therefore $N\times L$ is a normal subgroup of $G$.
Is every normal subgroup of $G$ of this form?
In this part, my intuition says yes, but I don't know how to formalize this, could you help me to formalize the answer, either positive or negative?
| No. Consider for example the group $G=\Bbb Z/2\Bbb Z\times \Bbb Z/2\Bbb Z$. It has the (normal) subgroup $\{(0,0),(1,1)\}$ which is not of the desired form. However under certain conditions it is true, for example when $H,K$ are finite of coprime order.
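The counterexample can be checked exhaustively (added here, not part of the original answer):

```python
from itertools import product

N = {(0, 0), (1, 1)}  # the diagonal of Z/2Z x Z/2Z

# Closed under componentwise addition mod 2, and contains the identity:
assert all(((a + c) % 2, (b + d) % 2) in N for (a, b) in N for (c, d) in N)

# Its projections to the two factors are both all of Z/2Z, so if N were a
# product H' x K' it would have to be the whole group of order 4:
H_proj = {a for a, b in N}
K_proj = {b for a, b in N}
assert set(product(H_proj, K_proj)) != N
```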
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Radius of curvature for a plane curve Given the plane curve $(x(t),y(t)) = (2 \cos t, \sin t)$, the task is to find the radius of curvature at $(0,1)$. (The given point corresponds to time $t=\pi/2$.)
The radius $R$ is given by $1/\kappa$ where $\kappa$ is the curvature,
$$ \kappa = \frac{|v \times a |}{ |v|^3}, $$
with $v$ and $a$ being the velocity and acceleration vectors, respectively. Taking the first and second derivative, one can obtain $v=(-2\sin t, \cos t)$ and $a=(-2\cos t, -\sin t)$. At the given point, $v = (-2,0)$ and $a=( 0,-1)$, so that $v\times a = (0,0,2)$ and
$$ \kappa = \frac{2}{8} = \frac{1}{4} \Rightarrow R = 4.$$
However, my solution manual says that $R = 2$ with no technical explanation. Can you help me figure out what's wrong?
| Let's do it in general: $\alpha(t) = (2\cos t, \sin t)$ implies $\alpha'(t) = (-2\sin t, \cos t)$ as well as $\alpha''(t) = -\alpha(t)$, and thus $$R(t) = \frac{1}{|\kappa(t)|} = \frac{\|\alpha'(t)\|^3}{\det(\alpha'(t),\alpha''(t))} = \frac{(4\sin^2t+\cos^2t)^{3/2}}{2\cos^2t +2\sin^2 t} = \frac{1}{2}(4\sin^2t+\cos^2t)^{3/2}.$$For $t = \pi/2$, we have $\cos \pi/2 = 0$ and $\sin \pi/2 = 1$, thus $$R(\pi/2) = \frac{4^{3/2}}{2} = 4.$$I think you have the right solution and the textbook has a mistake.
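A finite-difference check of $\kappa$ at $t=\pi/2$ (my addition) also agrees with $R=4$ rather than $2$:

```python
import math

def kappa(t, h=1e-4):
    x = lambda s: 2 * math.cos(s)
    y = lambda s: math.sin(s)
    vx = (x(t + h) - x(t - h)) / (2 * h)            # first derivatives
    vy = (y(t + h) - y(t - h)) / (2 * h)
    ax = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2  # second derivatives
    ay = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return abs(vx * ay - vy * ax) / (vx ** 2 + vy ** 2) ** 1.5

R = 1 / kappa(math.pi / 2)
assert abs(R - 4) < 1e-3
```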
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to integrate $\int_0^{\infty} \frac{x}{e^x+1} dx$ I want to integrate it explicitly. I looked up if this had an exact solution, and I got that it is $\frac{\pi^2}{12}$ here: https://www.integral-calculator.com/#expr=x%2F%28e%5Ex%2B1%29&lbound=0&ubound=inf
I thought it could be solved using geometric series, so I tried to turn my integral into something similar:
$$\int_0^{\infty} \frac{x}{e^x+1} dx= \int_0^{\infty} \frac{x}{e^x+1} \frac{e^x-1}{e^x-1} dx=\int_0^{\infty} \frac{x e^x }{e^{2x}-1}dx -\int_0^{\infty} \frac{x}{e^{2x}-1} dx= \int_0^{\infty} \frac{x e^x }{e^{2x}-1}dx -\frac{1}{4}\int_0^{\infty} \frac{x}{e^{x}-1} dx=\int_0^{\infty} \frac{x e^x}{e^{2x}-1} - \frac{1}{4}\frac{\pi^2}{6}$$
Where the second term is easy to solve with geometric series, and it's also defined with the Riemann zeta:
$$\int_0^{\infty} \frac{x}{e^x-1}=\zeta(2)=\frac{\pi^2}{6}$$
I think I can use geometric series also with the other integral, I've tried to solve it this way and I got:
$$\int_0^{\infty}\frac{xe^x}{e^{2x}-1}\,dx=\frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{\left(n-\frac{1}{2}\right)^2} $$
And I don't know how to obtain a value from this infinite sum, although if I'm not wrong it should be $\pi^2/2$.
Anyone knows how to do this? Also, I think I might be looping the loop and there's some easier way to proceed from the beggining, so I'd appreciate another point of view to solve this.
Thanks.
| Alternatively, let $t=e^{-x}$ to express the integral as
$$\int_0^{\infty} \frac{x}{e^x+1} dx= - \int_0^1 \frac{\ln t}{1+t}dt
\overset{IBP} = \int_0^1 \frac{\ln (1+t)}{t}dt
= \frac{\pi^2}{12}
$$
where the result
Finding $ \int^1_0 \frac{\ln(1+x)}{x}dx$ is used.
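A crude midpoint-rule check of the value (my addition; the integrand decays like $xe^{-x}$, so truncating at $50$ is harmless):

```python
import math

# midpoint rule on [0, 50] with 200000 panels
n, b = 200_000, 50.0
h = b / n
total = h * sum(
    (h * (i + 0.5)) / (math.exp(h * (i + 0.5)) + 1) for i in range(n)
)
assert abs(total - math.pi ** 2 / 12) < 1e-6
```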
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
How to find an upper bound of the set $A= \{ (1+\frac{1}{n})^n : n \in \mathbb{N}^{*} \} $ I have the set $A= \{(1+\frac{1}{n})^{n} : n \in \mathbb{N}^* \}$ and the exercise asks me to find at least $2$ upper bounds less than or equal to $\frac{14}{5}$.
My first question is: how do I know $\frac{14}{5}$ is an upper bound of $A$?
My attempt is to show that there does not exist $m \in \mathbb{N}^*$ such that $(1+\frac{1}{m})^{m} = \frac{14}{5}$, but I'm stuck. For the other upper bounds, could I reason the same way?
Thanks for suggestions
| First you should know that:
$$ (1) \ln(1+x)=x-\frac{x^2}{2}+O(x^3)$$
Also consider that ln is an injective function meaning if $\ln(A)=1$ then $A=e$
So:
$$ \lim_{n\to\infty} \ln(A) = \lim_{n\to\infty} \ln\left(\left(1+\frac{1}{n}\right)^n\right) = \lim_{n\to\infty} n\ln\left(1+\frac{1}{n}\right) $$
Applying (1):
$$ \lim_{n\to\infty} n\ln(1+\frac{1}{n}) = \lim_{n\to\infty} n(\frac{1}{n}-\frac{1}{2n^2}+O(\frac{1}{n^3}))$$
$$\lim_{n\to\infty} n(\frac{1}{n}-\frac{1}{2n^2}+O(\frac{1}{n^3})) = \lim_{n\to\infty} (1-\frac{1}{2n}+nO(\frac{1}{n^3})) = 1 $$
Since $$ \lim_{n\to\infty} \ln(A) = 1, $$ then $$ \lim_{n\to\infty} A = e. $$
Since the sequence is increasing with limit $e$, $e$ is the least upper bound, and any number $\ge e$ is an upper bound (note: $\frac{14}{5} > e $).
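Numerically (my addition), the sequence is increasing and stays below $e$, which in turn is below $\tfrac{14}{5}$:

```python
import math

seq = [(1 + 1 / n) ** n for n in range(1, 200)]
assert all(a < b for a, b in zip(seq, seq[1:]))  # increasing
assert all(x < math.e for x in seq)              # bounded above by e
assert math.e < 14 / 5                           # hence 14/5 is an upper bound
```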
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Gradient of dot product If $\vec A$ and $\vec B$ are two vectors such that $\vec A=<A_x,A_y,A_z>$ and $\vec B=<B_x,B_y,B_z>$ then
$$\nabla({\vec A}.{\vec B})=A \times (\nabla \times B)+B \times (\nabla \times A)+(A.\nabla)B+(B.\nabla)A$$
In this relation what does this $(A.\nabla)B$ imply? How does this work on vector? Isn't gradient meant to be operated on scalar function?
| The operator $\vec A\cdot\nabla$ is the directional derivative operator in the direction $\vec A$. If you have some experience with analysis, you might know that if $f:\mathbb R^n\to\mathbb R^n$ is totally differentiable with Jacobian $\mathrm Jf(x)$ in $x$, then all its directional derivatives exist and can be calculated as $\mathrm Jf(x)v$, where $v$ is the direction.
Sometimes this is also written as $(\nabla f)^T v$, using the nabla operator. We can write this as
$$(\nabla f)^Tv=(\partial_1 f)v_1+\dots+(\partial_n f)v_n.$$
Keep in mind that $\partial_i f$ are vector valued functions. This expression we could also write as $(v_1\partial_1+\dots+v_n\partial_n)f$. Formally, the expression in parentheses is just $v\cdot\nabla$, so we can write the whole thing as
$$(\nabla f)^T v=(v\cdot\nabla)f.$$
The takeaway is: $v\cdot\nabla$ is the directional derivative operator in direction $v$ because $(v\cdot\nabla)f=(\nabla f)^T v$, and the expression on the right side is known to give you the directional derivative.
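A small numerical check (my addition) that $(v\cdot\nabla)f$ agrees with $\mathrm Jf(x)\,v$; the test map $f(x,y)=(x^2y,\;x+y^3)$ and the point chosen below are arbitrary illustrations:

```python
def f(x, y):
    return (x * x * y, x + y ** 3)

def jacobian(x, y):
    return [[2 * x * y, x * x],
            [1.0, 3 * y * y]]

x0, y0, v, h = 1.2, -0.7, (0.3, 0.9), 1e-6

# (v . nabla) f via a central difference along the direction v
fp = f(x0 + h * v[0], y0 + h * v[1])
fm = f(x0 - h * v[0], y0 - h * v[1])
num = [(fp[i] - fm[i]) / (2 * h) for i in range(2)]

J = jacobian(x0, y0)
ana = [J[i][0] * v[0] + J[i][1] * v[1] for i in range(2)]  # Jf(x) v
assert all(abs(num[i] - ana[i]) < 1e-6 for i in range(2))
```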
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the union of two circles homeomorphic to an interval? Let $Y$ be the subspace of $\Bbb R^2$ given by $Y=\{(x,y): x^2+ y^2=1\}\cup \{(x,y): (x−2)^2+ y^2=1\}$. Is $Y$ homeomorphic to an interval?
I have previously already shown that the unit circle is not homeomorphic to any interval and I think this is also true for $Y$. Basically if we remove a point from $Y$ that is not the intersection, and remove a point from any interval that is not an endpoint, the interval becomes disconnected, but the union of two circles are still connected so there is no homeomorphisms. Is that correct?
| A cutpoint of a connected space $X$ is a $p \in X$ such that $X\setminus\{p\}$ is disconnected.
If $f:X \to Y$ is a homeomorphism of connected spaces $X$ and $Y$ and $p$ is a cut point of $X$ then $f(p)$ is a cutpoint of $Y$ (and vice versa).
If we take $X$ to be an interval, then $X$ has at most two non-cutpoints. (the endpoints in the case of a closed interval). $Y$ on the other hand has infinitely many non-cutpoints (all points except $(1,0)$). So there can be no homeomorphism between them by the observations in the second paragraph.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
What's For Multiplication... As summation symbol ($\sum$) is for summing up terms, is there a similar notation for multiplication (except factorial notation)?
| $\prod$ (\prod) is the multiplicative analogue of $\sum$. It multiplies things together, where multiplication is assumed commutative.
But there's more! For some other commutative binary operations, the result of applying that operation to a list of objects may be denoted in a $\sum$-like notation, but with an enlarged operator taking $\sum$'s place. For example, $\bigwedge$ performs a logical AND ($\wedge$) of whatever follows, $\bigvee$ logical OR ($\vee$) and $\bigoplus$ logical XOR ($\oplus$).
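The same analogy shows up directly in code; for instance, Python's standard library pairs `sum` with `math.prod`:

```python
from math import prod

xs = [1, 2, 3, 4]
assert sum(xs) == 10    # analogue of the Sigma notation
assert prod(xs) == 24   # analogue of the Pi notation
```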
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3989956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Uniform convergence of a functional series $\sum\limits _{n=1}^{\infty}\frac{x\sin(xn)}{n+x}$ Here's the question: Check series for uniform convergence: $$\sum\limits _{n=1}^{\infty}\frac{x\sin(nx)}{n+x}$$
a) $ x\in E_{1} = (0;\pi) ;$
b) $ x\in E_{2} = (\pi;+\infty) .$
For a) I used the fact that $ \left| \sum\limits_{n=1}^{N}x\sin(nx) \right| \le \left| x\sum\limits_{n=1}^{N}\sin(nx) \right| \le \left| \displaystyle{\frac{x}{\sin(\frac{x}{2})}} \right| \le \pi$ , which means we have bounded partial sums. The sequence $ \{\displaystyle{\frac{1}{n+x}}\} $ is monotone and uniformly goes to $0$ for every value of $ x\in E_{1} $ as $n$ goes to $\infty$. Thus, the series is uniformly convergent on $ E_{1} = (0;\pi) $. The question is: for b), can I perform the same thing, setting the partial sums as $ \left| \sum\limits_{n=1}^{N}\sin(nx) \right| $ and the sequence as $ \{\displaystyle{\frac{x}{n+x}}\} $, which tells us that it converges to $1$, and finally saying that the series does not converge uniformly?
| For part (b), consider a sequence $x_n = \frac{\pi}{4n} + 2n\pi \in E_2$. For $n < k \leqslant 2n$ we have
$$\frac{1}{\sqrt{2}} = \sin \left(\frac{\pi}{4}+ 2n^2 \pi \right) = \sin nx_n < \sin k x_n < \sin 2nx_n = \sin \left(\frac{\pi}{2}+ 2n^2 \pi \right) = 1$$
Thus,
$$ \left|\sum_{k = n+1}^{2n}\frac{x_n \sin kx_n}{k+x_n} \right| =\sum_{k = n+1}^{2n}\frac{x_n \sin kx_n}{k+x_n} > n \cdot \frac{x_n}{2n + x_n}\cdot \frac{1}{\sqrt{2}} = \frac{2n^2\pi + \frac{\pi}{4}}{\sqrt{2}(2n + 2n\pi + \frac{\pi}{4n})} $$
Since, the RHS tends to $+\infty$ as $n \to \infty$, the series cannot be uniformly convergent on $E_2$, by violation of the uniform Cauchy criterion.
Edit (9/16/2022):
The argument above is flawed because with $x_n = \frac{\pi}{4n} + 2n\pi$, we should have $\frac{\pi}{4} +2n^2\pi < kx_n \leqslant \frac{\pi}{2} + 4n^2\pi$ and we don't have $\sin kx_n > 0$ for all $n< k\leqslant 2n$.
It should be amended as follows.
Consider a sequence $x_n = \frac{\pi}{4n} + 2\pi \in E_2$. For $n < k \leqslant 2n$ we have $\frac{\pi}{4} < \frac{\pi k}{4n}\leqslant \frac{\pi}{2}$ and
$$\frac{1}{\sqrt{2}} = \sin \left(\frac{\pi}{4}\right) < \sin \frac{\pi k}{4n} =\sin \left(\frac{\pi k}{4n}+ 2k\pi\right) = \sin kx_n < \sin \left(\frac{\pi}{2} \right) = 1$$
Thus,
$$ \left|\sum_{k = n+1}^{2n}\frac{x_n \sin kx_n}{k+x_n} \right| =\sum_{k = n+1}^{2n}\frac{x_n \sin kx_n}{k+x_n} > n \cdot \frac{x_n}{2n + x_n}\cdot \frac{1}{\sqrt{2}} = \frac{2n\pi + \frac{\pi}{4}}{\sqrt{2}(2n + 2\pi + \frac{\pi}{4n})} $$
Since the RHS tends to $\frac{\pi}{\sqrt{2}}\neq 0$ as $n \to \infty$, the series cannot be uniformly convergent ...
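A numerical check of the amended argument (my addition): the block sums $\sum_{k=n+1}^{2n}$ at $x_n=\frac{\pi}{4n}+2\pi$ stay bounded away from $0$, so the uniform Cauchy criterion indeed fails.

```python
import math

def block(n):
    x = math.pi / (4 * n) + 2 * math.pi
    return sum(x * math.sin(k * x) / (k + x) for k in range(n + 1, 2 * n + 1))

blocks = [block(n) for n in (10, 100, 1000, 10000)]
assert all(t > 2 for t in blocks)  # bounded below, well above pi/sqrt(2)'s order
```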
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3990044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$S$ and $T$ are idempotent linear operators in $V$ which is a vector space in $\mathbb{C}$. Prove that if $S+T$ is idempotent then $ST=TS=0$. If $S+T$ is idempotent, then
$$(S+T)^2=S+T$$
$$\implies S^2+ST+TS+T^2=S+T$$
$$\implies S+T+ST+TS=S+T$$
$$\implies ST+TS=0$$
Now, from here how do I show that $ST=TS=0$?
| Essentially equivalent answer but more elegant solution: $ST = SST = -STS = TSS = TS$, so $2ST = ST+TS = 0$ and hence $ST = 0$. Note that you do not need the field $F$ underlying the vector space to be $ℂ$, but you do need $\text{char}(F) ≠ 2$ otherwise you cannot divide by $2$.
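A concrete instance (my addition, an illustration rather than a proof): for the coordinate projections in $2\times 2$ matrices, $S+T$ is idempotent and indeed $ST=TS=0$.

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = [[1, 0], [0, 0]]  # projection onto the first coordinate
T = [[0, 0], [0, 1]]  # projection onto the second coordinate
Z = [[0, 0], [0, 0]]
SpT = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]

assert matmul(S, S) == S and matmul(T, T) == T  # S, T idempotent
assert matmul(SpT, SpT) == SpT                  # S + T idempotent
assert matmul(S, T) == Z and matmul(T, S) == Z  # and ST = TS = 0
```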
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3990326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How can I see a visual representation of the difference between Euclidean and Affine geometry? I have just started studying different types of geometry, with the first two I have covered being Euclidean geometry and Affine geometry. I am aware that Euclidean geometry is what we have used the most up until degree-level maths and Affine geometry is to do with the preservation of parallel lines, but beyond this I am struggling to visualise the differences between these geometries. I was wondering if anybody knew a good way of explaining this or some resources to aid in my understanding.
| First of all, it can be very useful to know that there is an even more general geometry including Euclidean and affine geometry: Projective geometry (shortly said: the geometry of perspective).
This said, I think that there at least two ways to consider this issue.
First point of view attached to figures :
*
*Euclidean geometry: the image of a square can be a/any square.
*Affine geometry: the image of a square is a/any parallelogram.
*Projective geometry: the image of a square is a/any quadrilateral.
Second point of view attached to group of transformations (The "modern" point of view developed by Klein in the so-called Erlangen program):
*
*Euclidean geometry is characterized by the group of transformations generated by rotations, symmetries, translations, all of them preserving (Eucledean) norm:
$$\begin{cases}x'&= &a x - \varepsilon b y + e\\y'&=& b x + \varepsilon a y +f\ \end{cases} \ \text{with} \ a^2+b^2=1 \ \ \text{or} \ \ \begin{cases}x'&=&x+ e\\y'&=&y +f\ \end{cases}$$
(with $\varepsilon = +1$ for a rotation, $\varepsilon = -1$ for a symmetry with respect to a straight line ; the second case deals with pure translations).
*
*Affine geometry is characterized by any transformation of the form :
$$\begin{cases}x'&=&ax+by+e\\y'&=&cx+dy+f \end{cases}$$
*
*Projective geometry is characterized by any transformation of the form :
$$\begin{cases}x'&=&(ax+by+e)/(gx+hy+i)\\y'&=&(cx+dy+f)/(gx+hy+i) \end{cases}$$
(please note that if, in the common denominator, we take $g=h=0$ and $i=1$, we are back to an affine transformation).
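The first point of view can be illustrated in a few lines (my addition): an affine map sends the unit square to a parallelogram, because opposite edge vectors are carried to equal vectors.

```python
# Apply an affine map (x, y) -> (a x + b y + e, c x + d y + f).
def affine(p, a, b, c, d, e, f):
    x, y = p
    return (a * x + b * y + e, c * x + d * y + f)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
img = [affine(p, 2, 1, 0.5, 3, 1, -1) for p in square]  # arbitrary coefficients

def edge(p, q):
    return (q[0] - p[0], q[1] - p[1])

# opposite sides of the image are equal vectors, i.e. parallel
assert edge(img[0], img[1]) == edge(img[3], img[2])
assert edge(img[0], img[3]) == edge(img[1], img[2])
```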
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3990447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prerequisites for Neuwirth's 'Knot Groups' I have never studied knot theory before. I would like to get into the subject. I am interested in studying knots from a topological perspective (as opposed to a combinatorial one.) I am studying knot theory to eventually be able to read Morishita's Knots and Primes.
I would like to begin my studies by reading Neuwirth's Knot Groups. I have questions:
*
*For someone with little prior knowledge in knot theory, am I starting too big? Would reading Neuwirth be a fruitless endeavor? What articles or texts should I look into before reading Neuwirth, if any?
*Knot Groups was published in 1965. There has been much work in the area since then. Is there a newer superior to Neuwirth? Toward my goal of reading Morishita, what knot theory-related texts should I consider as substitutes and/or supplements to Neuwirth, if any? (The reason I want to start with Neuwirth is because I know knot groups are of fundamental importance to Morishita's work, so it seemed natural to start learning knot groups.)
*What are the most important prerequisites to Knot Groups? I understand this is broad. I don't need specifics, but broadly identifying the most important tools I will need going into Neuwirth would be very helpful. (This is similar to my first question.)
| I would say the only prerequisites are a bit of abstract algebra applied to cellular decompositions of 3-manifolds. Chapter III of the book is a reasonably self-contained explanation of the classical origins of knot theory -- with many illustrations fully described in words, but left to the imagination of the reader as pictures. This book is greatly underappreciated, perhaps because of its misleading title. It's really more about covering complexes than the knot group itself. --Ken Perko, lbrtpl@gmail.com
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3990592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving a linear ODE using Frobenius' method The given ODE is
$$2y'' - \frac{y'}{x-1} + \frac{y}{(x-1)^2} = 0.$$
Solving we get,
$$\sum_{n=0}^{\infty}\left[2(n+r)(n+r-1)a_n - (n+r)a_n + a_n\right](x-1)^{n+r-2} = 0.$$
Now I have few basic doubts. Please bear with me.
Can't we directly equate the coefficient of $(x-1)^{(n+r-1)}$ to zero?
Because the solution given here shows that we need to first expand to get:
$[2r(r-1)-r+1]a_0(x-1)^{r-2} + [2r(r+1)-(r+1)+1]a_1(x-1)^{r-1}+\cdots$
Then substituting the coefficient of $(x-1)^{r-2}$ to zero to get the indicial equation as:
$2r(r-1)-r+1=0$.
My second doubt is that, after solving, we get $r=1$ or $r=0.5$
Then what should we do after this?
The solution says that after substituting $r$, only the coefficient of $a_0$ goes to zero, which means $a_1,a_2,\dots = 0$.
Therefore, the general solution should be $c_1(x-1)+c_2(x-1)^{1/2}$.
Can someone elaborate on this please
| Yes, that looks correct. The equation is of Euler-Cauchy type,
$$
2(x-1)^2y''-(x-1)y'+y=0
$$
this implies that basis solutions $(x-1)^r$ exist and $r$ solves the characteristic equation
$$
0=2r(r-1)-r+1=(2r-1)(r-1)
$$
that is identical to the indicial equation. So indeed this gives a complete basis $(x-1)^{1/2}, (x-1)$.
To the first point, in a more general series expansion the first terms would have coefficients with negative indices. One can avoid that by treating these terms separately or by expanding the coefficient sequence with $a_{k}=0$ for $k<0$.
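Both claims can be spot-checked (my addition): the characteristic polynomial has roots $1$ and $\tfrac12$, and $(x-1)^r$ makes the ODE residual vanish exactly for those $r$.

```python
def char_poly(r):
    return 2 * r * (r - 1) - r + 1  # = (2r - 1)(r - 1)

assert char_poly(1) == 0 and char_poly(0.5) == 0

def residual(r, x):
    # plug y = (x-1)^r into 2(x-1)^2 y'' - (x-1) y' + y
    y = (x - 1) ** r
    yp = r * (x - 1) ** (r - 1)
    ypp = r * (r - 1) * (x - 1) ** (r - 2)
    return 2 * (x - 1) ** 2 * ypp - (x - 1) * yp + y

for r in (1, 0.5):
    for x in (1.5, 2.0, 3.7):
        assert abs(residual(r, x)) < 1e-12
```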
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3990981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given a bipartite graph $G=(U,V,E)$, what is the bound on $|V|$ given some conditions? I have a bipartite graph $G=(U,V,E)$ with the following conditions:
*
*Every node $u\in U$ has a degree of at most a constant $c$, i.e., $1\leq \text{deg}(u)\leq c$;
*Every node $v\in V$ has a degree of at most $d$;
*No two nodes in $U$ are connected to the same set of nodes in $V$, i.e., for any two distinct $u_1,u_2\in U$, $\text{neighbor}(u_1)\neq\text{neighbor}(u_2)$.
Is it possible to show that the $|V|=\Theta(|U|)$? Note that here $|U|,d\rightarrow\infty$, $d=o(|U|)$, and every $u$ is connected to at least one $v\in V$. Otherwise, is there a way to derive the minimum size of $|V|$ in terms of $|U|$ and $d$?
For the case of $c=1$, it is clear that we have $|V|=|U|$, but things are not so clear to me when $c>1$. The best I could do for a general $c=\Theta(1)$ is $|V|=\Omega(|U|/d)$: the $u$'s contribute at least 1 edge each, giving us $|E|\geq|U|$, while each $v$ absorbs at most $d$ edges, so $|E|\le d|V|$; dividing by $d$ gives the bound.
| In your situation, there is to every node $u \in U$ a set of nodes $S_u \subseteq V$. Your first condition implies that the cardinality of each $S_u$ is somewhere in between $1$ and $c$. A priori, this means that there are
$$
\binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{c}
$$
possible sets $S_u$, where $n = |V|$. When $d \to \infty$, this number can also be attained. From this we infer $|V| = \Omega(|U|^{1/c})$, and this is the best one can do.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3991112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A few questions on an application of the residue theorem I have got the following closed curve $\gamma$
and I am supposed to calculate
\begin{equation}
\int_\gamma\frac{\cos(z)}{z^3(z^2 +1)}dz
\end{equation}
with the help of the residue theorem.
Now, I've got a few questions
*
*$0$, i and -i are the isolated singularities of our given function. As far as I understand it, the winding number of $0$ is $-2$, the winding number of $-$i is $-1$ and the winding number of i is $0$. Is this correct?
*How can I determine the residue in this case? For a start, I tried finding the Laurent expansion for $z_0=0$, but I have no idea how to rearrange the equation further. I only got as far as
\begin{equation}
\frac{\cos(z)}{z^3(z^2 +1)}=\frac{1}{z^2+1}\sum\limits_{n=0}^{\infty}(-1)^n\cdot\frac{z^{2n-3}}{(2n)!}.
\end{equation}
Could someone give me a hint on how to find the Laurent expansion?
Thanks in advance!
| *
*You are right.
*Let $f(z)=\frac{\cos(z)}{z^2+1}$. Then\begin{align}\operatorname{res}_{z=0}\left(\frac{\cos(z)}{z^3(z^2+1)}\right)&=\operatorname{res}_{z=0}\left(\frac{f(z)}{z^3}\right)\\&=\frac{f''(0)}{2!}\\&=-\frac32.\end{align}
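The residue can be double-checked (my addition) by multiplying truncated Taylor series with exact rational arithmetic and reading off the $z^2$-coefficient of $f$:

```python
from fractions import Fraction as F

cos_c = [F(1), F(0), F(-1, 2), F(0), F(1, 24)]  # cos z = 1 - z^2/2 + z^4/24 - ...
inv_c = [F(1), F(0), F(-1), F(0), F(1)]         # 1/(1+z^2) = 1 - z^2 + z^4 - ...
f_c = [sum(cos_c[j] * inv_c[k - j] for j in range(k + 1)) for k in range(5)]

# res_{z=0} f(z)/z^3 is the z^2-coefficient of f, i.e. f''(0)/2!
assert f_c[2] == F(-3, 2)
```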
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3991236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Question from Sasane, How and Why of One Variable Calculus I've been reading through Amol Sasane "The How and Why of One Variable Calculus" and I've got a question from one of the proofs about the sum property of integrals. (Page 199)
I know this doesn't have all the details. We've chosen an arbitrary positive epsilon and found partitions to make this statement of the upper sums of two functions work.
So we have :
$$\overline{S}(f + g) - \epsilon < \overline{S}(f) + \overline{S}(g)$$
Which I can follow when doing the proof myself. Though Sasane's next statement confuses me a little, at least with the technical details. When looking at this proof on my own a thought I had was "Can I take the limit as $\epsilon \to 0$?". I thought about it and it seemed like it might a way of doing this. Though I can't see how it would work rigorously or with change to a non-strict inequality.
Would anyone be able to shed some like on the details or find me material explaining this property?
Edit: If anyone would be so inclined, is this something really obvious and I just didn't see it? Reading through this book I feel like many students starting off would need more explanation than what was given.
| Starting with the statement $$\overline{S}(f + g) - \varepsilon < \overline{S}(f) + \overline{S}(g)\tag{1}$$ where $\varepsilon > 0$ is arbitrary. Suppose now that $$\overline{S}(f+g) > \overline{S}(f) + \overline{S}(g)\tag{2}.$$ In this case, set $$\overline{S}(f+g) - \overline{S}(f) - \overline{S}(g) = m > 0$$ and let $\varepsilon' = \frac{m}{2} >0$. Then we have $$\overline{S}(f+g) -\varepsilon' -\overline{S}(f) - \overline{S}(g) = m - \varepsilon' = m - \frac{m}{2} = \frac{m}{2} > 0.$$ This, however, means that $$\overline{S}(f+g) - \varepsilon' > \overline{S}(f) + \overline{S}(g),$$ which contradicts $(1)$. Thus, our assumption $(2)$ was incorrect, and we have by contradiction that $$\overline{S}(f+g) \leq \overline{S}(f) + \overline{S}(g).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3991599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Closed form for $\sum_{k=0}^{n}\left(-1\right)^{k}F_{k}^{2}$ How to get a closed form for the following sum:
$$\sum_{k=0}^{n}\left(-1\right)^{k}F_{k}^{2}$$
Where $F_k$ is the $k$th Fibonacci number.
I tried to get a recurrence relation,but I failed and the closed form is given by $$\frac{\left(-1\right)^{n}F_{2n+1}-\left(2n+1\right)}{5}$$
| First let's take a look at the power series $ \sum\limits_{n\geq 0}{F_{n}^{2}x^{n}} $, it is easy to find what its radius of convergence would be using the ratio test, but for now let's denote it $ R<1 $, and let's define a function $ f:x\mapsto\sum\limits_{n=0}^{+\infty}{F_{n}^{2}x^{n}} $.
We have for all $ x\in\left(0,R\right) $ : \begin{aligned} \frac{1}{1-x}\sum_{n=0}^{+\infty}{F_{n}^{2}x^{n}}=\sum_{n=0}^{+\infty}{\left(\sum_{k=0}^{n}{F_{k}^{2}}\right)x^{n}}&=\sum_{n=0}^{+\infty}{F_{n}F_{n+1}x^{n}}\\&=\sum_{n=0}^{+\infty}{\left(F_{n+1}^{2}-F_{n}^{2}-\left(-1\right)^{n}\right)x^{n}}\\ \frac{f\left(x\right)}{1-x}&=\frac{f\left(x\right)}{x}-f\left(x\right)-\frac{1}{1+x}\\ \iff f\left(x\right)&=\frac{x\left(1-x\right)}{\left(1+x\right)\left(x^{2}-3x+1\right)}\end{aligned}
Now : \begin{aligned} \sum_{n=0}^{+\infty}{\left(\sum_{k=0}^{n}{\left(-1\right)^{n-k}F_{k}^{2}}\right)x^{n}}&=\left(\sum_{n=0}^{+\infty}{\left(-1\right)^{n}x^{n}}\right)\left(\sum_{n=0}^{+\infty}{F_{n}^{2}x^{n}}\right)\\ &=\frac{x\left(1-x\right)}{\left(1+x\right)^{2}\left(x^{2}-3x+1\right)}\\ &=\frac{1-x}{5\left(x^{2}-3x+1\right)}+\frac{1}{5\left(x+1\right)}-\frac{2}{5\left(x+1\right)^{2}} \\ &=\frac{1}{5}\sum_{n=0}^{+\infty}{F_{2n+1}x^{n}}+\frac{1}{5}\sum_{n=0}^{+\infty}{\left(-1\right)^{n}x^{n}}+\frac{2}{5}\sum_{n=1}^{+\infty}{n\left(-1\right)^{n}x^{n-1}}\\ &=\frac{1}{5}\sum_{n=0}^{+\infty}{\left(F_{2n+1}-\left(-1\right)^{n}\left(2n+1\right)\right)x^{n}}\end{aligned} where we used $ \frac{1-x}{x^{2}-3x+1}=\sum_{n\geq 0}{F_{2n+1}x^{n}} $ and $ \frac{1}{\left(1+x\right)^{2}}=-\sum_{n\geq 1}{n\left(-1\right)^{n}x^{n-1}} $.
Thus, for all $ n\in\mathbb{N} $ : $$ \sum_{k=0}^{n}{\left(-1\right)^{k}F_{k}^{2}}=\left(-1\right)^{n}\cdot\frac{F_{2n+1}-\left(-1\right)^{n}\left(2n+1\right)}{5}=\frac{\left(-1\right)^{n}F_{2n+1}-\left(2n+1\right)}{5}, $$ which is the claimed closed form.
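A direct check of the closed form stated in the question (my addition):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(60):
    s = sum((-1) ** k * fib(k) ** 2 for k in range(n + 1))
    assert 5 * s == (-1) ** n * fib(2 * n + 1) - (2 * n + 1)
```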
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3991692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove that $g(x)=(x^2−1)^x$ is increasing on $(1,+\infty)$ Let $A = \mathbb{R}\setminus[−1,1]$. Let $g : A\to\mathbb{R}$ be defined by
$g(x)=(x^2-1)^x$
for all $x\in A$.
Prove that $g(x)=(x^2-1)^x$ is increasing on $(1,\infty)$
I have currently attempted to prove this by showing $g'(x)\geq 0 $ for all $ x\in A $, which gives $g'(x)= (x^2-1)^{x-1}(2x^2+(x^2-1)\ln(x^2-1))$; this should be greater than or equal to $0$, however I am unsure how to show this or whether I have gone about this the right way.
Any help will be greatly appreciated.
| A. For $b>1$, $x\mapsto x^b$ is increasing on $(0,\infty)$.
B. For $a>1$, $x\mapsto a^x$ is increasing.
For any $x_1 < x_2$ where both are greater than $\sqrt{2}$:
From A above:
$$(x_1^2 - 1)^{x_1} < (x_2^2 - 1)^{x_1}$$
Then, since $x_2^2-1>1$, from B above:
$$(x_2^2 - 1)^{x_1} < (x_2^2 - 1)^{x_2}$$
Putting them together, the function is increasing for $x>\sqrt{2}$.
Only very close to $1$ do we need further analysis:
$$\frac{d}{dx}\left((x^2 - 1)^x\right) = (x^2 - 1)^x \left(\frac{2 x^2}{x^2 - 1} + \ln(x^2 - 1)\right)$$
The factor $(x^2 - 1)^x$ is always positive for $x>1$, so we only need to make sure that $\frac{2 x^2}{x^2 - 1} + \ln(x^2 - 1)$ is always positive for $x>1$.
So we want to show $2x^2 + (x^2-1)\ln(x^2-1) >0$ for $x>1$. Let $u = x^2 -1$; then $u>0$ for $x>1$, so it suffices to show $2 + 2u + u\ln u > 0$ for $u>0$.
Since $\frac{d}{du}\left(u \ln u\right) = \ln u + 1$, the function $u\ln u$ attains its minimum at $u= 1/e$, so $u\ln u \ge -1/e$ for $u>0$. Hence $2 + 2u + u\ln u \ge 2 - \tfrac{1}{e} + 2u > 0$, which proves the inequality:
$$2x^2 + (x^2-1)\ln(x^2-1) >0 \quad \text{for } x>1.$$
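As a quick numerical sanity check of the key inequality, one can evaluate it on a grid of points, including points very close to $1$ (a sketch; the sample points are arbitrary):

```python
import math

# Check that 2x^2 + (x^2 - 1) * ln(x^2 - 1) > 0 on sample points x > 1,
# including points extremely close to 1 where ln(x^2 - 1) -> -infinity
# but (x^2 - 1) * ln(x^2 - 1) stays bounded below by -1/e.
samples = [1 + 10.0 ** (-k) for k in range(1, 12)] + [1.5, 2.0, 10.0, 100.0]
for x in samples:
    u = x * x - 1
    value = 2 * x * x + u * math.log(u)
    assert value > 0, (x, value)
print("2x^2 + (x^2-1)ln(x^2-1) > 0 at all sampled x > 1")
```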
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3991819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Hints for two proofs Please do not give me the answers. I only want hints to approach these two proof problems I'm struggling with:
*
*There exists some differentiable function $f(x)$ such that $f'(x)\ne f(x)$ but $f''(x)=f(x)$
*There exists some $x\in \mathbb R$ such that $x^2\in \mathbb R \setminus \mathbb Q$ while $x^4\in \mathbb Q$
For the first one I started out defining $f'$ as a limit but I couldn't come up with any way to utilize this for my proof. For the second one I've tried doing it by cases, considering $x<0, x=0, x>0$. But I didn't get anywhere doing this.
| Hint
$1.$ Think about $\sin \alpha$;
$2.$ Think about $x^2=\sqrt{3}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3992000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Interesting Weighted Sum of Binomial Coefficients In my research, I've come across the following weighted sum of binomial coefficients:
$$\sum_{i=1}^k i\cdot{2k \choose k+i}.$$
According to Wolfram Alpha, this equals exactly the quantity
$$\frac{1}{2}(k+1){2k \choose k+1}.$$
After trying to prove this, I've come up shorthanded. I have two questions.
*
*Is there a direct procedure (or a combinatorial argument) that yields this expression?
*Is there a general class of weighted sums that could be solved in a similar way?
| I'll give a different solution from the two in the linked question.
Now to view it as computing $E|X-k|$ where $X \sim Bin\left(2k,\frac 12\right)$, it's quite natural to do the following:
$$\sum_{i=1}^k i\cdot{2k \choose k+i} = \sum_{i=1}^k (k+i-k) \binom{2k}{k+i} \\
= \sum_{i=1}^k (k+i) \binom{2k}{k+i} - k \sum_{i=1}^k \binom{2k}{k+i}\\
= 2k \sum_{i=1}^k \binom{2k-1}{k+i-1} - k \cdot \frac{2^{2k} - \binom{2k}{k}}{2}\\
= 2k \cdot \frac{2^{2k-1}}{2} - k\cdot \frac{2^{2k} - \binom{2k}{k}}{2}\\ = \frac{k}{2} \binom{2k}{k} = \frac{1}{2}(k+1){2k \choose k+1}. \blacksquare
$$
Note the above quantity is actually equal to $2^{2k-1}E|X-k|$, not $E|X-k|$.
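The closed form can be verified by brute force for small $k$ (a quick sketch in Python):

```python
from math import comb

# Numerically verify  sum_{i=1}^{k} i * C(2k, k+i) = (k/2) * C(2k, k)
#                                                  = ((k+1)/2) * C(2k, k+1).
# Both right-hand sides are compared after multiplying through by 2
# to stay in integer arithmetic.
for k in range(1, 30):
    s = sum(i * comb(2 * k, k + i) for i in range(1, k + 1))
    assert 2 * s == k * comb(2 * k, k)
    assert 2 * s == (k + 1) * comb(2 * k, k + 1)
print("identity verified for k = 1..29")
```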
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3992298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Range of $\frac{\cos\theta_1+\cdots+\cos\theta_{10}}{\sin\theta_1+\cdots+\sin\theta_{10}}$ given $\sin^2\theta_1+\cdots+\sin^2\theta_{10}=1$
Given that for $ \theta_i \in \left[0, \dfrac{\pi}{2}\right]$, where $1 \le i \le 10$ , $\sin^2\theta_1+\sin^2\theta_2+\cdots+\sin^2\theta_{10}=1$, find the minimum and maximum value of $$\dfrac{\cos \theta_1+\cos\theta_2+\cdots+\cos\theta_{10}}{\sin\theta_1+\sin\theta_2+\cdots+\sin\theta_{10}}$$
I tried using Cauchy Schwarz inequality which just gives $\displaystyle \sum_{i=1}^{10} \sin\theta_i \le \sqrt{10}$ and $\displaystyle \sum_{i=1}^{10} \cos \theta_i \le \sqrt{90}$ because $\displaystyle \sum_{i=1}^{10}\cos^2\theta_i =9$ but I don't think it would be helpful here.
Maybe some inequality can be applied by writing it as $$\dfrac{\displaystyle \sum_{i=1}^{10} \sqrt{1-\sin^2\theta_i}}{\displaystyle \sum_{i=1}^{10} \sin \theta_i}$$
Any hints would be appreciated!
| lower bound:
Notice that $$\cos ^2 \theta_1=\sin^2 \theta_2+\sin^2 \theta_3+\cdots+\sin^2 \theta_{10}$$ similarly $$\cos^2 \theta_i=\sum_{j=1 \rightarrow 10 , j\neq i }\sin^2 \theta_j$$
$$S_i=\sum_{j=1\rightarrow 10,j\neq i} \sin \theta_j \tag {say}$$
thus by power mean inequality
$$\cos \theta_i\ge \sqrt{\frac{S_i^2}{9}}=\frac{S_i}{3}$$ Hence $$\dfrac{\cos \theta_1+\cos\theta_2+\cdots+\cos\theta_{10}}{\sin\theta_1+\sin\theta_2+\cdots+\sin\theta_{10}}\ge \frac{S_1+S_2+S_3+\cdots+S_{10}}{3(\sin \theta_1+\sin \theta_2+\cdots+\sin \theta_{10})}=\boxed 3$$
Upper bound: see See Hai's beautiful proof!
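The lower bound can be probed numerically by sampling random angle tuples that satisfy the constraint (a sketch; the sampling scheme and seed are arbitrary choices):

```python
import math
import random

random.seed(0)

# Sample tuples (theta_1, ..., theta_10) in [0, pi/2] with sum of sin^2 = 1:
# draw a random positive vector, normalize it to a unit vector s, and set
# theta_i = arcsin(s_i). Then check the ratio (sum cos) / (sum sin) >= 3.
for _ in range(1000):
    v = [random.random() for _ in range(10)]
    norm = math.sqrt(sum(x * x for x in v))
    s = [x / norm for x in v]                 # sum of s_i^2 == 1
    thetas = [math.asin(x) for x in s]
    ratio = sum(math.cos(t) for t in thetas) / sum(math.sin(t) for t in thetas)
    assert ratio >= 3 - 1e-9, ratio
print("ratio >= 3 on all sampled tuples")
```

Equality holds when all ten angles are equal, i.e. $\sin\theta_i = 1/\sqrt{10}$ for every $i$.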
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3992475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
What does $w' := $ mean in the context of the Halting Problem? I am studying the Halting Problem and I came across the following notation. I am not sure what it means. The context is as follows:
To prove that the Halting Problem is undecidable, we employ proof by contradiction. Assume that the language $K = \{w \in \{0,1\}^* \mid M_w \ \text{stops for input } w\}$ is decidable. This means that the characteristic function $\chi_K$ is computable using a Turing Machine $M$. Now we can construct a Turing Machine $M'$ that simulates $M$. When the output is $0$, $M'$ stops, and when the output is $1$, $M'$ goes into an endless loop. Let $w' := <M'>$ ...
What does $w' := <M'>$ mean in this context? Is it just that $M'$ is a Turing Machine that is encoded by $w'$?
| Notice that Turing Machines are powerful enough to simulate other Turing Machines. We encode Turing Machines $M$ as words $\langle M \rangle$ and we can formulate the Halting Problem as a word problem for the language $L=\{\langle M \rangle \epsilon \langle \omega \rangle\ |\ M \text{ halts on input } \omega\}$ whereby $\langle M \rangle$ and $\langle \omega \rangle$ are encodings for $M$ and $\omega$ and $\epsilon$ is a separator.
First denote $(M, \omega)$, whereby $M$ is a Turing Machine and $\omega$ some word of our input alphabet of $M$. Then the encoding of such a pair is often denoted by $\langle M, \omega \rangle$.
Since $\omega' \in \{0,1\}^*$ (which is our word encoding for which we could also write $\langle \omega' \rangle$), and $\langle M' \rangle$ denotes the encoding of the Turing Machine $M'$, it is indeed encoded by the binary word $\omega'$.
To summarize, we write $\langle M' \rangle$ to denote the encoding of the Turing Machine $M'$, and since $\omega'$ is an encoded word, $\omega' := \langle M' \rangle$ says that the encoding of $M'$ (which is $\langle M' \rangle)$ is given by $\omega'$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3992630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding a contradiction in equations and inequalities. Question:-In the figure shown, Coefficient of friction between the blocks C and B is $0.4$. There is no friction between block C and ground.The system of blocks is released from rest in the shown situtation.Find the accelerations of masses.(Given $m_{B}=5kg , m_{C}=10kg , m_{A}=3kg$ )
I made two cases
$1)$ Block B and C move together.
$2)$ Block B and C will not move together.
When I solve the equations for Case $1$, I find that the blocks will move together. This implies that Case $1$ is true.
$$30-T=3a_{3}$$
$$T-f=5a_{5}$$
$$f=10a_{10}$$
$$f \leq 20$$
Note that if blocks B and C will move together then $a_{3}=a_{5}=a_{10}=a$
On solving we get $a=\frac {5}{3}m/s^2$ and $f=16.67N$
As case $1$ is true. So contradiction condition must arrive during solving equations for case $2$.
I got the following equations for case $2$. Despite my all efforts I was unable to find how equations will contradict.
\begin{align}
30-T&=3a_{3}\\
T-f&=5a_{5}\\
f&=10a_{10}\\
f&\leq {20}\\
a_{3}&>a_{10}\\
\end{align}
Note that $a_{3}=a_{5}$
My attempt
Adding equation $1$ and $2$, we get
$$30-f=8a_{3}$$
So, $30-8a_{3}\leq 20$ and $10a_{10}\leq 20$
This gives $a_{3}\geq \frac54$ and $a_{10}\leq 2$
Also, $$T-f \geq \frac{25}{4}$$
$$30-T\geq \frac{15}{4}$$
This gives $T\leq \frac{105}{4}$
Adding eqation $1$ and $3$ we get
$$30+f-T=10a_{10}+3a_{3}$$
Using $f-T\leq \frac {-25}{4}$
$$12a_{3}+40a_{10}\leq {95}$$
Adding equation $2$ and $3$,we get
$$T=5a_{3}+10a_{10}$$
Using $T\leq \frac {105}{4}$
$$4a_{3}+8a_{10}\leq 21$$
So We got following inequalities involving $a_{3}$ and $a_{10}$
$$a_{3}>0$$
$$a_{10}>0$$
$$a_{3}>a_{10}$$
$$a_{3}\geq \frac54$$
$$a_{10}\leq 2$$
$$12a_{3}+40a_{10}\leq {95}$$
$$4a_{3}+8a_{10}\leq 21$$
On plotting them on graph where x-axis represents $a_{3}$ and y-axis represents $a_{10}$ I got quadrilateral having coordinates $(1.25,0), (1.25,1.25), (1.75,1.75), (5.25,0)$.
So, $1.25\leq a_{3} \leq 5.25$ and $0\leq a_{10}\leq 1.75$
Can anybody help to find contradiction.
| I belive you have made a mistake. In either case we can calculate the frictional force directly, since we have friction at a maximum as we have motion:
$$F_\text{MAX}=\mu R=0.4\times5g=2g$$
I will calculate all accelerations below:
For B and A, let $a_1$ be their common acceleration. Resolve for A and then B:
$$\begin{align}
& 3g-T=3a_1\\
& T-2g=5a_1
\end{align}$$
Add the two equations together to eliminate $T$:
$$g=8a_1\implies a_1=\frac{g}{8}$$
This is actually equivalent to treating A and B as a single particle:
Using $F=ma$, it is clear that as the tensions cancel out we have
$$3g-2g=(3+5)a_1\implies g=8a_1\implies a_1=\frac{g}{8}$$
Now for box C. We have
$$2g=10a_2\implies a_2=\frac{g}{5}$$
where $a_2$ is the acceleration of C.
As an aside, we can prove that there is motion as follows:
First, note that the tension must be less than $3g$; otherwise A would move upwards, which is absurd. Now suppose that $T\le2g$. Then A will still accelerate downwards. But $F_\text{MAX}=2g$, so B will not move. This is a contradiction as B is connected to A. Hence $2g<T<3g$ and there is movement since $T>F_\text{MAX}$ as required.
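The numbers can be checked by plugging the accelerations back into the force equations (a sketch; the numeric value $g = 9.8\,\mathrm{m/s^2}$ is an assumption):

```python
g = 9.8                # assumed gravitational acceleration, m/s^2
f_max = 0.4 * 5 * g    # maximum friction between B and C: mu * m_B * g = 2g

a1 = g / 8             # common acceleration of A and B
a2 = g / 5             # acceleration of C
T = 2 * g + 5 * a1     # tension, from the equation for B: T - 2g = 5*a1

# Verify the equation for A: 3g - T = 3*a1
assert abs((3 * g - T) - 3 * a1) < 1e-9
# Verify C: friction 2g drives the 10 kg block, so 2g = 10*a2
assert abs(2 * g - 10 * a2) < 1e-9
# Motion really occurs: 2g < T < 3g
assert f_max < T < 3 * g
print(f"a1 = {a1:.3f} m/s^2, a2 = {a2:.3f} m/s^2, T = {T:.2f} N")
```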
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3992920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Showing weak convergence of a sequence in $L^p(R)$ I have a sequence of functions $f_k$ in $L_p(R)$, with $1<p<\infty$ and I'd like to show that it weakly converges to $0$.
This is the sequence, where $k\in N$
$$f_k = 1_{[k,k+1]}$$
What I've tried:
If $f_k$ converges to $0$, then we should have
$$\lim_{k\to \infty}\int_R(f_k-0)\phi dx =0$$
where $\phi \in L_{p'}(R)$ (the dual, or here, $L_q(R)$.
Putting in the function, one gets
$$\lim_{k\to \infty}\int_k^{k+1}1\cdot\phi \,dx$$
Now I need to show this goes to $0$ for all functions $\phi \in L_q(R)$, but that isn't necessarily true, right?
| With $q = p'$ you can use Hölder's inequality to say that
$$\int_{\mathbb{R}}1_{[k,k+1]}|\phi(x)|dx\leq \left(\int_{\mathbb{R}}1_{[k,k+1]}|\phi(x)|^qdx\right)^{1/q}$$
The question is therefore if
$$\lim_{k\rightarrow \infty}\int_{\mathbb{R}}1_{[k,k+1]}|\phi(x)|^qdx = 0$$
which you can show using the Dominated Convergence Theorem.
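For a concrete $\phi\in L_q(\mathbb R)$ the tail integrals can be computed explicitly and seen to vanish; for instance with $\phi(x)=e^{-|x|}$ and $q=2$ (an illustrative choice, not from the original question):

```python
import math

# With phi(x) = e^{-|x|} (which lies in L^2(R)) and q = 2, for k >= 0:
#   int_k^{k+1} |phi|^2 dx = int_k^{k+1} e^{-2x} dx = (1 - e^{-2})/2 * e^{-2k},
# which tends to 0 as k -> infinity, as Dominated Convergence predicts.
def tail(k):
    return (1 - math.exp(-2)) / 2 * math.exp(-2 * k)

values = [tail(k) for k in range(10)]
assert all(values[i + 1] < values[i] for i in range(9))   # strictly decreasing
assert tail(50) < 1e-40                                   # vanishes in the limit
print([f"{v:.2e}" for v in values[:5]])
```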
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3993258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Defining a real number No answers, please, hints only.
I want to express every nonzero $x\in \mathbb R$ as a product of two numbers that are not rational.
My attempt is $x=(a_1+b_1i)(a_2+b_2i)$ where $a_1, b_2\in \mathbb R$
Am I correct or do I need a hint?
| Hints:
*
*Why is addressing the case with positive $x\ne1$ the hard part?
*For such $x$, consider rational and irrational $x$ separately, and think about how $n$th roots would help.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3993380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Is the following definition of an elliptic curve correct? Im new to algebraic geometry so I want to make sure im getting my definitions right. I know there are a few ways to state what an elliptic curve is (ex a smooth projective curve of genus one with distinguished $K$-rational point). But I am wondering if the following is equivalent. For simplicity lets just work over $\mathbb{C}$.:
$\textbf{Definition:}$ An elliptic curve $E$ is a non-singular projective curve in $\mathbb{P}^2$ of the form
$$E: Y^2Z + a_1XYZ + a_3YZ^2 = X^3 + a_2X^2Z + a_4XZ^2 + a_6Z^3 $$
I am wondering if this is sufficient to define an elliptic curve?
| This is subtle. It is true that every elliptic curve can be written in this form, and that every smooth projective curve of this form has genus $1$.
But an elliptic curve is not a smooth projective curve of genus $1$. An elliptic curve (over a field $K$) is a smooth projective curve of genus $1$ together with a distinguished $K$-rational point, and this is key to the theory because it's the point that forms the identity for the group law. You need to say something about what this distinguished point is: with an equation of this form it's conventional to take it to be the point at infinity, with coordinates $(X : Y : Z) = (0 : 1 : 0)$. But this needs to be said explicitly in the definition; it is in fact possible to pick another point, which changes the group law. This is important for the following reasons among others:
*
*There are smooth projective curves of genus $1$ over non-algebraically closed fields $K$ which have no $K$-rational points, and hence there is no choice of point which turns them into elliptic curves.
*Elliptic curves have endomorphisms and automorphisms, and these are required to preserve the distinguished point (which turns out to imply that they preserve the group law). If you don't keep this in mind you will be confused when you read statements about endomorphisms and automorphisms of elliptic curves in the literature (which in fact happened recently on MO). For example, endomorphisms of an elliptic curve form a ring, but only if you require that endomorphisms preserve the distinguished point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3993570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
positive semidefinite: why does the inequality hold? What are the conditions on $\text{tr}(AB) \leq \text{tr}(A) \text{tr}(B)$ to be true?
I started the service today, so I don't have any reputation points and can't make any comments. So, let me ask a question here.
I don't understand the part of this question that says $\operatorname{tr}(B^{1/2}AB^{1/2})\leq \operatorname{tr}(B^{1/2}(aI)B^{1/2})$.
If $A\preceq aI$ is true, why does the inequality hold?
| The trace of a PSD matrix is non-negative. This means that the trace function is increasing with respect to the PSD partial order $\preceq$ (the Löwner order). If $A \preceq B$ then $B - A$ is PSD so
$$ \operatorname{tr}(B - A) = \operatorname{tr} B - \operatorname{tr} A \ge 0. $$
(The trace is also linear.) Therefore $\operatorname{tr} A \le \operatorname{tr} B$.
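A quick random-matrix check of this monotonicity (a sketch in pure Python; PSD matrices are generated as Gram matrices $M^{T}M$, and $B$ is built as $A$ plus a PSD matrix so that $A \preceq B$ by construction):

```python
import random

random.seed(0)

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def gram(M):
    # M^T M is always positive semidefinite
    return mat_mul(transpose(M), M)

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

# If A ⪯ B in the Löwner order (B - A PSD), then tr(A) <= tr(B).
for _ in range(100):
    n = 3
    A = gram([[random.gauss(0, 1) for _ in range(n)] for _ in range(n)])
    C = gram([[random.gauss(0, 1) for _ in range(n)] for _ in range(n)])
    B = [[A[i][j] + C[i][j] for j in range(n)] for i in range(n)]
    assert trace(A) <= trace(B) + 1e-12
print("tr(A) <= tr(B) held in all 100 trials")
```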
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3993712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
non unique imaginary units A professor told me that another way to define complex numbers is through the real matrices of the form $\begin{bmatrix}a & b\\-b &a\end{bmatrix}$. With this definition, the imaginary unit becomes the matrix $I=\begin{bmatrix}0 &1\\-1&0\end{bmatrix}$. By definition $i^2=-1$ and It's pretty easy to check that the square of the matrix $I$ is the negative of the identity matrix.
So apparently, any matrix with square equal to the negative of the identity can be taken as $I$. Why is that? When you think about the $2D$ plane, this corresponds to choosing a vector different from (1,0) as $\bar i$, the unit vector in the x-direction.
Then he says that by some general theorem all complex structures in dimension 2 are isomorphic to each other and that one can allow the imaginary unit to change from point to point.
Does anyone understand what this means and why we can do this?
| In a similar way, we can represent the real number $-1$ by the matrix: $\begin{bmatrix}-1 &0\\0&-1\end{bmatrix}$ which can also be thought of as rotation by $180^\circ$. Is it rotation clockwise or anticlockwise? Well for $180^\circ$ it doesn't matter.
However, $i$ represents rotation by $90^\circ$ and clockwise and anticlockwise make a difference. Which is right? It doesn't matter, just make a choice and stick to it.
Compare driving on the road. Should we drive on the left or the right? Clearly both work as some countries choose one and some choose the other. However, it is important for a country to make a choice and be consistent. Also, when visiting a country with the opposite convention, care is required to adjust. The UK and France have similar road rules except that left and right are flipped. For example, when joining a roundabout in the UK you give way (yield) to traffic already on the roundabout, this traffic is approaching from your right. In France, you also give way to traffic already on the roundabout but it is approaching from your left.
Please note that this is only a brief description of the differences between driving in the UK and France. If you actually plan to drive in France, please research the rules more thoroughly. Swapping left and right is just a first approximation.
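The matrix representation from the question can be verified directly: the map $a+bi\mapsto\begin{bmatrix}a&b\\-b&a\end{bmatrix}$ respects multiplication, and the matrix $I$ squares to minus the identity (a small sketch in Python):

```python
def to_matrix(z):
    # represent a + bi as [[a, b], [-b, a]]
    a, b = z.real, z.imag
    return [[a, b], [-b, a]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The "imaginary unit" matrix squares to minus the identity.
I = to_matrix(1j)
assert mat_mul(I, I) == [[-1.0, 0.0], [0.0, -1.0]]

# The representation is multiplicative: M(z*w) == M(z) M(w).
z, w = 2 + 3j, -1 + 4j
assert to_matrix(z * w) == mat_mul(to_matrix(z), to_matrix(w))
print("matrix model of C verified")
```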
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3993849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does the intersection of two parametric curves have different t results for each curve? I'm slightly confused after reading the problem posed here: Points of intersection of two parametric curves
Why is it that the "t" result at the intersection point of the two curves is not the same? I.e. the solution involves the answer "t1" for the first curve and "t2" for the second curve, but if the two curves are being made equal, why is t1 != t2?
Also, is there any way we can "manufacture" two different curves (say two parametric straight lines) that, at a specific t value for BOTH curves, yield the SAME point in space? How would you go about doing that? (given, for example, 3 points in space)
| As for your second question, given two parametric curves $$C_1 :\ (a(t),b(t)) \\ C_2 :\ (c(t), d(t)) $$
they can be made to intersect at $t=t_0$ by choosing functions $a,b,c,d$ such that $$a(t_0) = c(t_0) \\ b(t_0)= d(t_0) $$
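For instance, two distinct parametric lines can be made to pass through the same point at the same parameter value (a sketch; the point $(1,2)$ and parameter $t_0=3$ are arbitrary choices):

```python
# Two different parametric lines through the point (1, 2) at t0 = 3:
#   C1(t) = (1 + (t - 3),  2 + 2*(t - 3))
#   C2(t) = (1 - (t - 3),  2 + 5*(t - 3))
# Both satisfy C(3) = (1, 2), so they intersect at the SAME t value.
t0 = 3

def c1(t):
    return (1 + (t - t0), 2 + 2 * (t - t0))

def c2(t):
    return (1 - (t - t0), 2 + 5 * (t - t0))

assert c1(t0) == c2(t0) == (1, 2)
assert c1(0) != c2(0)      # but they are genuinely different curves
print("both curves pass through (1, 2) at t =", t0)
```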
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Confusion on the definition of the standard topology on $\mathbb{R}$ So I've been learning a bit about topology recently so I am a complete novice and there are a lot of definitions that confuse me my first confusion regards the topology on $\mathbb{R}$. I think that the standard topology consists of all open intervals $(a,b)$ and that the topology is generated by the basis that consists of all open balls.
However open balls are used only in metric spaces since we have a distance function so when using these open balls do we no longer have a topological space but a metric space?
My second confusion is that of half open intervals in the standard topology on $\mathbb{R}$ for example I know that singletons are closed when considered as a subset of the reals such as $[0]$ is closed however what is $[0,2)$ considered?
Thanks in advance.
| A metric space is a topological space (the converse is not always true). The set $ [0,2) $ is neither open nor closed. So are the set $ (0,1) \cup [4,5] $ or the set of rationals $ \mathbb{Q} $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find integral from a known integral
The correct answer is $7$. I figured out that if you substitute the inner argument of the second function, the end points become the same but flipped. That means we can just negate the integral and get the same endpoints. This is where I am now stuck. How do I get to the result with that knowledge?
| First, let $u = 7 - 8x$. Then, $u(0) = 7$ and $u(19) = -145$. Also, $dx = -\frac{du}{8}$. Then, we wish to calculate:
$$\int_{0}^{19}f(7-8x)\ dx = \int_{u(0)}^{u(19)}-f(u)\ \frac{du}{8} = -\frac{1}{8}\int_{7}^{-145}f(u)\ du = \frac{1}{8}\int_{-145}^{7}f(u)\ du =\frac{56}{8} = \boxed{7}$$
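The substitution can be checked numerically with any test function whose integral over $[-145,7]$ equals $56$; a constant $f\equiv 56/152$ is the simplest choice (an illustrative assumption, since the original $f$ is not given):

```python
import math

# Choose f constant so that the integral of f over [-145, 7] equals 56
# (the original f is not given; any f with that integral works).
c = 56 / 152                      # interval length: 7 - (-145) = 152

def f(u):
    return c

# Midpoint Riemann sum for the left-hand side, int_0^19 f(7 - 8x) dx.
n = 20000
h = 19 / n
riemann = math.fsum(f(7 - 8 * (i + 0.5) * h) * h for i in range(n))
assert abs(riemann - 7) < 1e-9
print(f"integral of f(7 - 8x) over [0, 19] is approximately {riemann:.6f}")
```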
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Topological groups: if $A$ is an open subset of $G$, then $A^{-1}$ is open. Let $G$ be a topological group with open subset $A$. I want to prove very thoroughly/precisely that $A^{-1}:=\{a^{-1}:a\in A\}$ is also open.
Obviously the key to this proof is the fact that the inversion map $\phi:G\longrightarrow G$ is continuous.
$\phi:G\longrightarrow G$ is continuous, so since $A\subseteq G$ is open in $G$, $\phi^{-1}(A)$ is open in $G$ (by definition of topological continuity).
$\phi^{-1}(A)$ consists of elements of the form $a^{-1}$, so $\phi^{-1}(A)=A^{-1}$. This looks like we are done.
I just want to check a small detail: we take $A$ to be in the codomain of $\phi$, which of course is $G$, but in particular, shouldn't the elements of $A$ be of the form "inverses of some elements of $G$", so $A=B^{-1}$ for some set $B\subseteq G$, say, and therefore $\phi^{-1}(A)=\phi^{-1}(B^{-1})=({B^{-1}})^{-1}=B$, So we do not necessarily obtain $A^{-1}$ back when we apply $\phi^{-1}$, unless $\phi^{-1}$ is actually an inverse of $\phi$, as opposed to a preimage?
I think I might be getting confused here. Could anyone help clarify on this?
Thanks
| The steps in your argument are correct, but I think you've confused yourself by not justifying the key step.
You say "$\phi^{-1}(A)$ consists of elements of the form $a^{-1}$" -- why is this? Let's prove it.
Let $x \in \phi^{-1}(A)$ be arbitrary. By definition, $x^{-1} = \phi(x) \in A$. Then $x = (x^{-1})^{-1}$ is the inverse of an element of $A$. Since $x$ was arbitrary, we have $\phi^{-1}(A) \subseteq A^{-1}$. Conversely, for any $a \in A$, $\phi(a^{-1}) = (a^{-1})^{-1} = a \in A$, so $a^{-1} \in \phi^{-1}(A)$. This shows $A^{-1} \subseteq \phi^{-1}(A)$, completing the proof.
Here's another way to do this: just note that $\phi$ is its own inverse! Thus $\phi^{-1}(A) = \phi(A) = A^{-1}$.
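In a finite group the set identity $\phi^{-1}(A)=A^{-1}$ can be checked by brute force (a sketch with $G=\mathbb Z_6$ under addition, where "inversion" is negation mod $6$):

```python
# G = Z_6 under addition mod 6; the inversion map is phi(x) = -x mod 6.
G = range(6)
phi = lambda x: (-x) % 6

A = {1, 2}
A_inv = {phi(a) for a in A}                 # A^{-1} = {5, 4}
preimage = {x for x in G if phi(x) in A}    # phi^{-1}(A)

assert preimage == A_inv == {4, 5}
print("phi^{-1}(A) == A^{-1} for A =", A)
```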
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the set of well formed formulas are defined as the smallest set of strings Consider this definition of Well formed formulas in mathematical logic - The set of all well formed formula is the smallest set of strings , WFF that satisfies
*
*All Boolean variables are in WFF, and so are the symbols T and F. We call
such formulae atomic.
*If A and B are any strings in WFF, then so are the strings (not A), (A and B),
(A ∨ B), (A → B), (A = B)
Wouldn't there be exactly one such set of strings satisfying these properties of WFF ?
I don't understand the need for using the word smallest.
| Consider first a simpler example:
The set of even numbers $E$ is the smallest $X\subseteq \mathbb{N}$ with the following two properties:
*
*$0\in X$, and
*if $n\in X$ then $n+2\in X$.
The two bulletpoints alone do not pin down $E$! For example, both $\mathbb{N}$ itself and $E\cup\{n\in\mathbb{N}: n\ge 17\}$ satisfy them. Basically, the minimality clause is required to make sure that no "unintended" elements enter the set we're defining.
Turning back to the example in the question, note that the set of all finite strings of symbols satisfies points $1$ and $2$ of your definition; it's only the minimality clause that tells us that that's not what we have in mind. See also here.
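The even-numbers example can even be made computational: the "smallest set" is exactly what a least-fixpoint iteration produces, while strictly larger sets also satisfy the closure rules (a sketch, truncated at an arbitrary bound):

```python
BOUND = 100

def closed_under_rules(X):
    # bullet 1: contains 0; bullet 2: closed under n -> n + 2 (within the bound)
    return 0 in X and all(n + 2 in X for n in X if n + 2 < BOUND)

# Least fixpoint: start from {0} and keep applying the rule until nothing new.
E = {0}
while True:
    new = E | {n + 2 for n in E if n + 2 < BOUND}
    if new == E:
        break
    E = new

assert E == set(range(0, BOUND, 2))          # the evens: the intended set
# Larger sets ALSO satisfy the rules -- hence the word "smallest" is needed.
assert closed_under_rules(set(range(BOUND)))            # all naturals
assert closed_under_rules(E | set(range(17, BOUND)))    # evens plus n >= 17
print("least fixpoint = even numbers below", BOUND)
```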
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Dimension of vector space and span I have the following problem
a) I say that the vectors are linearly independent because they are not scalar multiples of each other.
b) This one I'm struggling with. How do I find the dimension of the vector space? Is it $3$-dimensional due to the fact that the vectors only have $3$ coordinates? Is it that simple?
c) I'm not sure what to say here. I don't think it belongs to the span. If this is true, can it be explained by the same reasoning as in a)?
| Answering each of your questions:
a) True, they are linearly independent (it's easily seen that they're not scalar multiples of each other). You can see this from noticing that in the second component, $a_1$ has $-1$ and $a_2$ has $1$, but the other components aren't the same with opposite signs.
b) Since they are linearly independent, the vector space generated by both vectors (written $\langle a_1,a_2\rangle$) is of dimension $2$ (it would have been of dimension $1$ if they were linearly dependent).
c) Since $a_1$ is not a scalar multiple of $a_2$ (because they are linearly independent), you conclude that $a_1\notin \langle a_2\rangle$.
As you see, all of your questions have to do with the same important question: are those vectors linearly independent? The answer to that is yes, and the remaining statements follow from there.
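The independence test itself is easy to mechanize; since the original vectors come from a figure not reproduced here, the vectors below are hypothetical stand-ins with the property described above (second components $-1$ and $1$, other components not opposite):

```python
# HYPOTHETICAL stand-in vectors (the originals come from a figure not shown):
# second components -1 and 1, other components not negatives of each other.
a1 = (2, -1, 3)
a2 = (1, 1, 0)

def independent_2(u, v):
    # Two vectors in R^3 are linearly dependent iff their cross product is 0.
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return cross != (0, 0, 0)

assert independent_2(a1, a2)        # so dim span{a1, a2} = 2
assert not independent_2(a1, tuple(3 * x for x in a1))   # scalar multiple: dependent
print("a1, a2 independent; span has dimension 2")
```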
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
How to get the value of $2\left |\csc x \right | \sin x + 3\left | \cos y\right|\sec y$ given two constrains? The problem is as follows:
Given:
$\sqrt{\cos x}\cdot \sin y > 0$
and,
$\tan x\cdot \sqrt{\cot y} < 0$
Find:
$B=2\left |\csc x \right | \sin x + 3\left | \cos y\right|\sec y$
The alternatives given in my book are as follows:
$\begin{array}{ll}
1.&\textrm{-1}\\
2.&\textrm{5}\\
3.&\textrm{1}\\
4.&\textrm{-5}\\
\end{array}$
I'm confused on exactly how to use the given clues to solve the problem?
To me the source of confusion is how to use the absolute value in the question?
From the first given expression I'm getting this, assuming squaring both sides of the equation will not modify its order:
$\left(\sqrt{\cos x}\cdot \sin y\right)^2 > 0^2$
$\cos x \cdot \sin^2 y > 0$
$\left(\tan x\cdot \sqrt{\cot y}\right )^2 < 0^2$
$\tan^2 x\cdot \cot y < 0^2$
But the thing is, this is where I'm stuck; where do I go from here? There isn't a known relationship between those angles. If there were, then I believe trigonometric identities could be used to simplify the expression. Therefore, should these expressions be divided, or what?
Can someone help me here with what should be done, and more importantly, why? What would help me the most is an answer which explains how the absolute value is used here.
| $a<0$ doesn't necessarily mean $a^2<0$.
For example, $-2<0$ but $(-2)^2 = 4 >0$.
Now,
$\begin{align}B &= 2|\csc x|\sin x + 3 |\cos y|\sec y = 2 \frac{\sin x}{|\sin x|}+3\frac{|\cos y|}{\cos y}\\
\\& = 2\text{ sgn}(\sin x) + 3\text{ sgn}(\cos y)\end{align}$
Then, $\sqrt{\cos x} \sin y > 0 \Rightarrow \sin y > 0$ and $\cos x > 0$ as $\cos$ is under the square root.
Similarly, $\cot y>0$ and $\tan x<0$.
So we have,
$\sin y >0, \cot y = \frac{\cos y}{\sin y} > 0 \Rightarrow \boxed{\cos y >0}$
$\cos x>0, \tan x = \frac{\sin x}{\cos x}<0 \Rightarrow \boxed{\sin x<0}$
Thus,
$B = 2(-1) + 3(1) = 1$
Edit (based on comment)
As $\sin x<0 \Rightarrow |\sin x| = -\sin x$ and $\cos y >0 \Rightarrow |\cos y| = \cos y$.
So,
$\begin{align}B = 2 \frac{\sin x}{|\sin x|}+3\frac{|\cos y|}{\cos y} = 2 \frac{\sin x}{-\sin x}+3\frac{\cos y}{\cos y} = -2 + 3 = 1\end{align}$
Clarification for abs value:
Consider some numerical values. Let $x=1$, so $|x|=1=x$.
Again let $x=-1$. Now what is its absolute value? It's again $1$, right? $1=-(-1)$, i.e. $|x|=-x$.
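The sign analysis can be verified numerically by sampling $x,y$ that satisfy the two given inequalities (a sketch; the sampling ranges are the ones derived above: $\sin x<0,\cos x>0$ and $\sin y>0,\cos y>0$):

```python
import math
import random

random.seed(1)

# From the analysis: cos x > 0 and sin x < 0  (take x in (-pi/2, 0)),
#                    sin y > 0 and cos y > 0  (take y in (0, pi/2)).
for _ in range(1000):
    x = random.uniform(-math.pi / 2 + 0.01, -0.01)
    y = random.uniform(0.01, math.pi / 2 - 0.01)
    # the original constraints hold on these ranges
    assert math.sqrt(math.cos(x)) * math.sin(y) > 0
    assert math.tan(x) * math.sqrt(1 / math.tan(y)) < 0
    B = (2 * abs(1 / math.sin(x)) * math.sin(x)
         + 3 * abs(math.cos(y)) * (1 / math.cos(y)))
    assert abs(B - 1) < 1e-9
print("B = 1 on all sampled (x, y)")
```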
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3994917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |