Q
stringlengths 18
13.7k
| A
stringlengths 1
16.1k
| meta
dict |
|---|---|---|
Prove $(\vec A \times \vec B) \cdot (\vec C \times \vec D) = (\vec A \cdot \vec C)(\vec B \cdot \vec D) - (\vec A \cdot \vec D)(\vec C \cdot \vec B)$ Prove that $(\vec A \times \vec B) \cdot (\vec C \times \vec D) = (\vec A \cdot \vec C)(\vec B \cdot \vec D) - (\vec A \cdot \vec D)(\vec C \cdot \vec B)$.
The problem asks to prove this only using the properties:
$
\text{(i)}\space (\vec a \times \vec b) \times \vec c = (\vec a \cdot \vec c)\vec b - (\vec b \cdot \vec c)\vec a \\
\text{(ii)}\space \vec a \times (\vec b \times \vec c) = (\vec a \cdot \vec c)\vec b - (\vec a \cdot \vec b)\vec c \\
\text{(iii)}\space \vec u \cdot (\vec v \times \vec w) = \vec v \cdot (\vec w \times \vec u) = \vec w \cdot (\vec u \times \vec v) = -\vec u \cdot (\vec w \times \vec v) = -\vec w \cdot (\vec v \times \vec u) = -\vec v \cdot (\vec u \times \vec w)$
I've tried manipulating the left hand side in all the ways I could think of, and I can't seem to reach the right hand side.
Can someone please point me in the right direction?
|
Let us omit these horrible vector arrows. Then we have
$$
\begin{align}
( A \times B) \cdot ( C \times D) &\overset{(iii)}{=} D\cdot ((A\times B)\times C) \overset{(i)}{=}
D\cdot( (A\cdot C)B-(B\cdot C)A)\\
&=( A \cdot C)( B \cdot D) - ( A \cdot D)( C \cdot B),
\end{align}
$$
where in the last equality we have used the symmetry of the scalar product $A\cdot B=B\cdot A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Absolute convergence of the series $sin(a_n)$ implies...? Can someone please give me a counterexample for the following statement?
If $\sum _1 ^\infty sin(a_n) $ absolutely converges , then $\sum_1^\infty a_n$ also absolutely converges.
I tried working with $a_n = \frac{ (-1)^n } {n} $ and $a_n = \frac{(-1)^n lnn}{n}$ , but without any success...
Got any idea?
Thanks a lot !
|
If $a_n=\pi$, then $\sin a_n=0$, and thus $\sum_{n=1}^\infty\sin a_n$ converges absolutely, while $\sum_{n=1}^\infty a_n$ does not.
Hence, you need to make an assumption: $a_n\in(-\pi/2,\pi/2)$.
Then use the fact that
$$
\frac{2|x|}{\pi}\le|\sin x|\le |x|,
$$
for all $x\in(-\pi/2,\pi/2)$, which implies that:
$$
\sum_{n=1}^\infty a_n\,\,\,\text{converges absolutely}\,\,\,\Longleftrightarrow
\sum_{n=1}^\infty \sin a_n\,\,\,\text{converges absolutely}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove $\{x \mid x^2 \equiv 1 \pmod p\}=\{1, -1\}$ for all primes $p$ One way to prove that $\{x \mid x^2 \equiv 1 \pmod p\}=\{1, -1\}$ is to use the fact that $\{1, -1\}$ is the only subgroup of the cyclic group of primitive residue classes modulo $p$ that has the order 2. Is there a more basic way to prove this?
|
$$x^2\equiv1\mod p\iff p\text{ divides }(x-1)(x+1)$$
But a prime divides a product if and only if it divides one of the two factors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Is the Series : $\sum_{n=1}^{\infty} \left( \frac 1 {(n+1)^2} + ..........+\frac 1 {(n+n)^2} \right) \sin^2 n\theta $ convergent?
Is the Series :
$$\sum_{n=1}^{\infty} \left( \frac 1 {(n+1)^2} + \ldots+\frac 1 {(n+n)^2} \right) \sin^2 n\theta $$ convergent?
Attempt:
$$\sum_{n=0}^{\infty} \left( \frac 1 {(n+1)^2} + \ldots+\frac 1 {(n+n)^2} \right) \sin^2 n\theta $$
$$= \sum_{n=1}^{\infty} \left( \frac 1 {(n+1)^2} + \ldots+\frac 1 {(n+n)^2} \right) (1- \cos( 2n\theta)) \frac 1 2$$
$$=\frac 1 2 \left[\sum_{n=1}^{\infty} \left( \frac 1 {(n+1)^2} + \ldots+\frac 1 {(n+n)^2} \right)\right]- \frac 1 2\left[\sum_{n=1}^{\infty} \left( \frac 1 {(n+1)^2} + \ldots+\frac 1 {(n+n)^2} \right) \cos(2n\theta)\right] $$
The left part by sandwich theorem has limiting value to $0$
The right part by Dirichlets theorem is convergent as $ \left( \frac 1 {(n+1)^2} + \ldots+\frac 1 {(n+n)^2} \right)$ is a positive, monotonically decreasing sequence and the sequence of partial sum of $\sum_{n=1}^{\infty} \cos 2n\theta$ is bounded.
Hene, the given series is convergent. Am i correct?
|
Notice that
$$\sum_{k=1}^n\frac 1{(n+k)^2}=\frac 1{n}\frac 1n\sum_{k=1}^nf\left(\frac kn\right),$$
with $f(x):=\frac 1{1+x^2}$, a continuous positive function. Hence the convergence of the initial series reduces to the convergence of
$$\sum_{n\geqslant 1}\frac{\sin^2(n\theta)}n.$$
Write $\sin^2(A)=\frac{1-\cos(2A)}2$. If $\theta$ is not a multiple of $\pi$, the series is divergent because $\sum_{n\geqslant 1}\frac{\sin(2n\theta)}n$ is convergent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Identity of a Mathematician Mentioned in Euler I and several others are in the process of translating one of Euler's papers from Latin to English, in particular the one that the Euler Archive lists as E36. In it Euler proves the Chinese Remainder Theorem, but my question is this:
In Section 1, Euler refers to two (presumably mathematicians) named in Latin as Lahirii and Sauwerii. The Latin is as follows:
Ita quadratorum magicorum constructio iam pridem ab arithmeticis est tradita; qua autem cum esset insufficiens maiora ingenia Lahirii et Sauwerii ad perficiendum requisivit.
We are reasonably confident that Lahirii refers to Philippe de La Hire as he did in fact work on magic squares (quadratorum magicorum), but we have no clue as to the identity of Sauwerii and can find no mention of a mathematician named Sawyer who worked on magic squares.
Does anyone know who this might refer to?
|
There is also a reference to Sauveur in Sandifer's book, "The Early Mathematics of Leonhard Euler", in the chapter about E-36.
For a preview, see:
http://books.google.com/books?id=CvBxLr_0uBQC&pg=PA351
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 0
}
|
A Probability Problem about the $Trypanosoma$ parasite Considering the distribution of Trypanosoma (parasite) lengths shown below, suppose we take a sample of two Tryoanosomes. What is the probability that:
a) both Trypanosomes will be shorter than 20um?
b) the first Trypanosome will be shorter than 20um and the second will be longer than 25um?
c) exactly one of the Trypanosomes will be shorter than 20um and one will be longer than 25um?
I have no idea were to start. Even though it looks like an easy solution, this question is kind of confusing, especially the B and C part. Could someone please help me to get on the right track? Thank you for your help guys!
|
All three questions can be solved in a pretty similar fashion. The key to answering them is using the rule that, if $A$ and $B$ are independent events, then $P(A\cap B) = P(A)P(B)$ in other words, multiply the probabilities of each event. So for example, the probability of selecting a parasite shorter than 20 micrometers is $0.01 + 0.34 = 0.35$. The probability of selecting two such parasites, then, is $(0.35)(0.35) = 0.1225$.
There is a twist in the final question. The second question dictates what order you select the parasites in. But the third question states simply that you pick two parasites. Therefore, you could pick the shorter parasite followed by the longer one or the longer one followed by the shorter one. If $A$ is the first event and $B$ the second, you're seeking $P(A\cup B)$. If $P(A\cap B) = 0$ which is the case here; it's impossible to pick both the shorter parasite and the longer one first then $P(A\cap B) = P(A) + P(B)$.
Does that make sense?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/653928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
PMF function: joint pmf question For the PMF of $X$
$$P(X = x) = (0.5)^x \;\text{ for }\;x\in\mathbb{Z}^+,$$
would the PMF of $Y= X-1$ just be
$$P(Y = x) = (0.5)^{x-1}.$$
|
$$
\Pr(Y=x) = \Pr(X-1=x) = \Pr(X=x+1) = 0.5^{x+1}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Showing a set is open but not closed Let $X = \{ (x,y) \in \mathbb{R}^2 : x > 0, y \geq 0 \} $. I am claiming $X$ is open, but it is not closed.
My Try:
To show it is not closed, I found a sequence that converges to a point outside $X$. For instance, $(x_n, y_n) = (\frac{1}{n}, \frac{1}{n} ) \to (0,0) \notin X$. Hence, $X$ cannot be closed.
In trying to show it is open, I have diffictulty trying to find a suitable open ball contained in $X$. Any help would be greatly appreaciated.
Thanks.
|
The set is not open. Take for example the point $P=(5,0)$. There is no ball with centre $P$ which is fully contained in our set.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Pumping Lemma Squares Proof Explanation I'm looking for some help understand this perfect squares proof using the pumping lemma. Here is the proof:
I don't understand how n^2 + k < n^2 + n towards the end of the proof. Would anyone be able to explain this to me? I also think that the last line n^2 < n^2+k < (n+1)^2 confusing because of (n^2 + k < n^2 + n).
|
First from the statement of the pumping lemma we know that $|xy|\le n$. Also we have set $k$ to be the length of $y$, i.e. $|y|=k$. Moreover the length of $xy$ is certainly greater or equal than the length of $y$: $|xy|\ge |y|$. Combining these we get
$$n\ge |xy|\ge |y|=k$$
therefore $n^2+k\le n^2+n<n^2+2n+1=(n+1)(n+1)=(n+1)^2.$
Note that equality $k=n$ may occur if and only if $x$ is empty. Otherwise $k<n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
An Euler-Mascheroni-like sequence How does one compute the limit of the sequence:
$$\sum_{k = 0}^{n}\frac{1}{3k+1} - \frac{\ln(n)}{3}$$
I would apreciate a hint.
|
Hint: use a comparison series/integral, by writing $$\frac{\ln n}{3} = \int_{1}^n \frac{dx}{3x}=\sum_{k=1}^{n-1} \int_{k}^{k+1}\frac{dx}{3x}$$ and
$$\sum_{k=0}^n \frac{1}{3k+1} = \sum_{k=0}^n \int_{k}^{k+1}\frac{dx}{3k+1}$$
Edit: is not a straightforward approach -- I was thinking about proving convergence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Proof that if $A\subset B$ then $A^* = B^*$ I prove here that if $B$ is a Banach space, and $A$ is a closed subspace of $B$, $A\subset B$, then
$$A^* = B^*.$$
($A^*$ stands for the dual of $A$.)
There is obviously something wrong here but where?
We already have that $A^* \subset B^*$. The other inclusion comes from the following quick proof : Let $T\in B^*$, then there exists a constant $C$ s.t. for all $b\in B$, $|\langle T, b\rangle|\le C\|b\|$. Then for all $a\in A$, $|\langle T, a\rangle|\le C\|a\|$ thus $T$ belongs to $A^*$.
I already know counterexamples; where am I wrong in the proof?
Thanks.
|
This question has been answered in comments:
The mistake is that we already have $A^∗\subset B^∗$. Taking the dual acts contravariantly, i.e. it reverses inclusions and arrows of morphisms. – Your Ad Here Jan 28 at 16:09
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Check if a graph is Eulerian
Let $G=((2,3,4,5,6,7),E)$ be a graph such that {$x$,$y$} $\in E$ if and only if the product of $x$ and $y$ is even, decide if G is an Eulerian graph.
My attempt
I tried to plot the graph, this is the result:
So, if my deductions are true, this is not an Eulerian graph because it's connected but all the vertices doesn't have an even degree. For example $deg(2)=5$. Moreover, there is no trace of Eulerian trails.
I cannot figure out if this assumptions are presumably correct.
|
You don't need to draw the graph --- you can easily do this analytically.
You have 3 odd-numbered vertices and 3 even-numbered vertices. A product $xy$ is even iff at least one of $x,y$ is even.
A graph has an eulerian cycle iff every vertex is of even degree. So take an odd-numbered vertex, e.g. 3. It will have an even product with all the even-numbered vertices, so it has 3 edges to even vertices. It will have an odd product with the odd vertices, so it does not have any edges to any odd-numbered vertices. So $3$ has an odd degree, violating the necessary condition for an Eulerian graph. So G is not Eulerian.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proving that $\frac{e^x + e^{-x}}2 \le e^{x^2/2}$
Prove the following inequality:
$$\dfrac{e^x + e^{-x}}2 \le e^{\frac{x^2}{2}}$$
This should be solved using Taylor series.
I tried expanding the left to the 5th degree and the right site to the 3rd degree, but it didn't help.
Any tips?
|
Obscene overkill: since
$$ \cosh(z)=\prod_{n\geq 0}\left(1+\frac{4z^2}{(2n+1)^2\pi^2}\right) $$
we have
$$ \log\cosh(z)\leq \sum_{n\geq 0}\frac{4z^2}{(2n+1)^2\pi^2} =\frac{z^2}{2}$$
and the claim follows by exponentiating both sides. In other terms, it is sufficient to integrate both sides of $\tanh(z)\leq z$ over $[0,x\in\mathbb{R}^+]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
}
|
Is the Hausdorff condition redundant here? This is a question in Algebraic Topology by Hatcher, Chapter 0:
21) If $X$ is a connected Hausdorff space that is a union of a finite number of 2 spheres,
any two of which intersect in at most one point, show that $X$ is homotopy equivalent
to a wedge sum of $S^1$’s and $S^2$’s.
Is it necessary that Hatcher requires $X$ to be Hausdorff here? Is it possible to have a space $X$ as a finite disjoint union of spaces homeomorphic to $S^2$ where we identify at most one point of each pair and $X$ not be Hausdorff. It seems pretty obvious to me that this is not possible but maybe I'm overlooking something.
|
Note that
a union of a finite number of 2 spheres, any two of which intersect in at most one point
is not the same as
a space $X$ as a finite disjoint union of spaces homeomorphic to $S^2$ where we identify at most one point of each pair,
the union Hatcher speaks of need not arise as a quotient of a disjoint union (topological sum).
As a set, let $X$ be the union of four spheres of radius $1$ with centres in $(\pm 1, \pm1, 0)$ in $\mathbb{R}^3$. In each of the four points $(1,0,0),\, (-1,0,0),\, (0,1,0),\, (0,-1,0)$ two of the spheres intersect. Now endow that space with a coarser topology than the subspace topology, that is not Hausdorff, but nevertheless induces the standard topology on each of the spheres.
For a point $x$ that is not an intersection point of two neighbouring spheres, let the neighbourhoods of $x$ be the intersections of a neighbourhood of $x$ in $\mathbb{R}^3$ with $X$, a neighbourhood basis is then
$$\mathscr{N}(x) = \left\lbrace B_{\varepsilon}(x) \cap X : \varepsilon > 0 \right\rbrace.$$
For an intersection point $p$, let a neighbourhood basis of $p$ be
$$\mathscr{N}(p) = \left\lbrace \bigl(B_\varepsilon(p) \cup B_{\varepsilon'}(-p)\bigr) \cap X : \varepsilon > 0, \varepsilon' > 0\right\rbrace.$$
Verify that there is a unique topology on $X$ such that the above are neighbourhood bases for the respective points, and that topology is not Hausdorff (every neighbourhood of an intersection point $p$ meets every neighbourhood of $-p$).
Yet $X$ is the union of finitely many (four) two-spheres (subspaces homeomorphic to $S^2$), any two of which intersect in at most one point.
So it is not redundant to require $X$ to be Hausdorff.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
Why does f' = 0 gives the min or max? I understand how to calculate it, but I am just curious, why actually it works? Do we have a proof that it always works?
|
Consider the case where a differentiable function $f(x)$ has a minimum at the point $x_{0}$. By definition, there exists a $\epsilon-$neighborhood
$$
x = \left\{ x\in\mathbb{R} \colon \left| x - x_{0} \right| < \epsilon \right\}
$$
with $\epsilon>0$ where $f(x) > f(x_{0})$.
If we constrain $\delta < \epsilon$, then
$$
\frac{f(x+\delta) - f(x)} {\delta} > 0.
$$
So within the $\epsilon-$neighborhood, a $\delta-$sequence produces a limit which approaches 0 from above.
A similar argument holds for approaching the minimum from the left.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/654953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Lagrange's Theorem exercise Be $u : \mathbb{R} \rightarrow \mathbb{R}$ a $C^2$ function. Proof that exists a $x \in \mathbb{R}$ with $0<x<2$ for which $u(2)-2u(1)+u(0)=u''(x)$
Applying Lagrange's Theorem I showed:
$u(2)-u(1)=u'(x_1)$ with $0<x_1<2$
$u'(x_1) - u'(0) = u''(x_2) x_1$ with $0<x_2<x_1$
$u(2)-u(1)-u'(0) = u''(x_2)x_1$
How i can finish?
Thanks
|
If $f \in C^2(a,b)$ then for every $x\in (a,b)$ and suitable $h>0$
$$
f(x+h)=f(x)+hf^{\prime}(x) + \frac{h^2}{2}f^{\prime\prime}(\xi_+), \qquad \xi_+ \in (x,x+h)
$$
and
$$
f(x-h)=f(x)-hf^{\prime}(x) + \frac{h^2}{2}f^{\prime\prime}(\xi_-), \qquad \xi_- \in (x-h,x).
$$
Summing we get
$$
f(x+h)+f(x-h) - 2f(x) = h^2\frac{f^{\prime\prime}(\xi_+) + f^{\prime\prime}(\xi_-)}{2}
$$
and by intermediate value theorem for $f^{\prime\prime}$ there exists $\xi \in (x-h,x+h)$ such that
$$
f(x+h)+f(x-h) - 2f(x) = h^2 f^{\prime\prime}(\xi)
$$
Now just plug into the last equation $f=u$, $(a,b)=(0,2)$, $x=1$ and $h=1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is a good book for multivariate stochastic processes? I'm looking for a good book that introduces (preferred without measure-theoretic proofs though that may have to do) multivariate stochastic processes. So suppose you have $\{\mathbf{X}_n : n \in \mathbb{Z}_0^\infty\}$ where $\mathbf{X}_n \in \Re^k$ or $\in \mathfrak{C}^k$. I want a book that discusses things like power spectrums, optimal linear prediction, transfer of a power spectrum through a linear system (with a matrix transfer function), etc. in the context of this vector-valued stochastic process.
Every reference I check in the library or find online is for scalar stochastic processes mostly.
|
Spectral Analysis and Its Applications 1968 by G. M. Jenkins and D. G. Watts has all the information I requested. Looks like works from the 60s can often trump tons others from 2000s.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Kalman Filter Process Noise Covariance I want to model the movement of a car on a straight 300m road in order to apply Kalman filter on some noisy discrete data and get an estimate of the position of the car.
In a Kalman filter the matrix $A$ and process noise covariance $Q$ is what describes the system. I have chosen $A$ according to the following model:
\begin{align}
\begin{bmatrix}p_{k+1} \\v_{k+1}\end{bmatrix} &= A \begin{bmatrix}p_k \\v_k\end{bmatrix} \\
\begin{bmatrix}p_{k+1} \\v_{k+1}\end{bmatrix} &= \begin{bmatrix}1 & \Delta t \\0 & 1\end{bmatrix} \begin{bmatrix}p_k \\v_k\end{bmatrix}
\end{align}
where $p$ is position and $v$ is velocity.
My question is how to define the process noise covariance $Q$? The process details are unknown to me but they should be the same for any urban car movement. I have looked for articles that describe the process noise variance for forward and sidewise movement of a car but I haven't found anything.
I'm sorry if the question is not very mathematical but since Kalman filters are maths I figured someone here should have worked with car models in the past.
|
Process noise can be viewed as any disturbance that affect the dynamics, for example steep on the roads, wind affects, friction on the roads etc. In general you can just say it as environmental disturbances.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Equivalence and Order Relations I have the following problem: Provide an example of a set $S$ and a relation $\sim$ on $S$ such that $\sim$ is both an equivalence relation and an order relation. Conjecture for which sets and this is possible, and prove your conjecture.
Below is the example I came up with but I don't know how to prove my conjecture and I don't even know if I am correct, I would really appreciate some help!
The only set that satisfies the reflexive property of an equivalence relation and the nonreflexive property of an order relation is the $\emptyset$. For any relation on that set, it is vacuously true that is is both an equivalence relation and an order relation.
|
Assuming your order relations are required to be strict (that is, irreflexive), then you're almost done.
You have already shown that $\varnothing$ has an appropriate relation. What remains is to show that no nonempty set can have a relation that is both an equivalence and an order.
Thus assume that $A$ is a set, $x\in A$ (such an $x$ will exist if $A$ is nonempty), and $R\subseteq A^2$ is a relation on $A$, and that $R$ is a equivalence relation, and that $R$ is a (partial?) order. Use these assumptions to reach a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Help with inequality estimate, in $H^1$, Given a bilinear form on $H^1 \times H^1$, where $H^1 = W^{1,2}$
\begin{align*}
B[u,v] = \int_U \sum_{i,j}a^{i,j}(x)u_{x_i}v_{x_j} + \sum_ib^i(x)u_{x_i}v + c(x)uv \, \mathrm{d}x
\end{align*}
the book gives the inequality bound $\vert B[u,v] \vert \leq \alpha \Vert u \Vert_{H^1} \Vert v \Vert_{H^1}$.
I'm trying to verify this:
\begin{align*}
\vert B[u,v] \vert &\leq \sum_{i,j} \vert a^{i,j} \int u_{x_i}v_{x_j} \vert + \sum_i \vert b^i \int u_{x_i} v \vert + \vert \int c u v \vert \\
&\leq \sum_{i,j} \Vert a^{i,j} \Vert_{\infty}\Vert u_{x_i} \Vert_{L^2} \Vert v_{x_j}\Vert_{L^2} + \sum_i \Vert b^{i} \Vert_{\infty}\Vert u_{x_i} \Vert_{L^2} \Vert v\Vert_{L^2} + \Vert c \Vert_{\infty} \Vert u \Vert_{L^2} \Vert v \Vert_{L^2}
\end{align*}
And from here, I'm not sure how I should proceed. I know I need to eventually get something of the form
\begin{align*}
\Vert u \Vert_{H^1} \Vert v \Vert_{H^1} = (\Vert u \Vert_{L^2}^2 + \sum_i \Vert u_{x_i} \Vert_{L^2}^2)^{1/2}(\Vert v \Vert_{L^2}^2 + \sum_i \Vert v_{x_i} \Vert_{L^2}^2)^{1/2}
\end{align*}
but I'm just not sure how. If someone could help me with the missing steps, I'd really appreciate it.
|
The claim follows since both $\|u\|_{L^2}$ and $\|u_{x_i}\|_{L^2}$ are bounded by $\|u\|_{H^1}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
logarithm and exponent computation performance Using glibc on a x86 processor, which takes more CPU time? $a\ log\ b$ or $b^a$? For which values of $a$ is one faster than the other? Optional: Does the base used matter?
See also: What algorithm is used by computers to calculate logarithms?
Because I know someone will mention it, I know that $a\ log\ b$ does not equal $b^a$.
|
Look through the code used by the FreeBSD operating system:
http://svnweb.freebsd.org/base/head/lib/msun/src/
http://svnweb.freebsd.org/base/head/lib/msun/src/e_pow.c?view=markup
http://svnweb.freebsd.org/base/head/lib/msun/src/e_log.c?view=markup
It is claimed that these are rather high quality algorithms, better than cephes, and probably better than glibc.
http://lists.freebsd.org/pipermail/freebsd-numerics/
http://lists.freebsd.org/pipermail/freebsd-numerics/2012-September/000285.html
In one of these emails, someone describes an algorithm where they start with Taylor's series, and then run it through an optimization procedure to fine tune the coefficients. But there are so many emails that it would take a while to find where they describe it. These guys are really wanting to get every last digit accurate.
Update: I think the algorithm is called Remez algorithm. You can read about it on wikipedia.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Polar Integral Confusion Yet again, a friend of mine asked for help with a polar integral, we both got the same answer, the book again gave a different answer.
Question
Use a polar integral to find the area inside the circle $r=4 \sin \theta$ and outside $r=2$.
Proposed Solution
We see that the two circles intersect at $ \pm \frac{\pi}{6}$. Because of the symmetry of the region, we may write $$2 \int_{0}^{\frac{\pi}{6}} \int_{2}^{4 \sin \theta} r \, \,dr d\theta $$ and equivalently $$2 \int_{0}^{\frac{\pi}{6}} 8 \sin^2 \theta - 2 \, \,d\theta$$ and finally $$-4 \int_{0}^{\frac{\pi}{6}} 2\cos 2\theta -1 \, \,d\theta$$
Which is $\frac{2}{3} ( \pi - 3\sqrt{3})$. The book gives an answer of $\frac{2}{3} ( 2\pi - 3\sqrt{3})$, the same as evaluating the integral from $0$ to $\frac{\pi}{3}$.
Which answer is correct?
|
The two circles intersect when:
$$\sin \theta = 1/2 \ \to \ \theta = \pi/6, \pi-\pi/6$$
So the integral should be from $\pi/6$ until $5\pi/6$, and not as you wrote.
Note that you could have eliminated both answers by noting that the area should be larger than half of the circle, or $A>2\pi$, whereas your book's answer gives $\sim 1.3$, and your answer is negative.
The correct result should be:
$$\frac{2}{3}\left(2\pi\color{red}{+}3\sqrt{3}\right)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Intuition behind smooth functions. Smooth functions $f(t)$ are those such that $\frac{d^nf(t)}{dt^n}$ exists for all $n\in\Bbb{N}$.
I understand the intuition behind smoothness for functions like $f(t)=| t|$ and $f(t)=\sqrt{t}$. $f(t)$ has a "sharp" (and hence non-smooth) turn at $t=0$. Similarly, $f(t)=\sqrt{t}$ ends abruptly at $t=0$ (and hence is not "smooth").
However, functions like $f(t)=t^{\frac{1}{3}}$ seem "smooth" enough to me!
Why does the intuitive understanding of smoothness fail here? Is this just another case of extending a definition of a term to non-intuitive cases?
Thanks in advance!
|
The problem with the function $f(t) = t^{1/3}$ is not that it is something else. The problem is that in the point $0$, the slope of the curve is infinite as the function turns completely vertical. The curve drawn is actually smooth, as you can find a smooth parametrisation of it.
The parametrisation $$t\rightarrow (t, \sqrt[3]{t})$$ is not smooth, however, you can reparametrise the curve as $$t\rightarrow (t^3, t)$$ which is obviously smooth.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Example of quotient mapping that is not open I have the following definition:
Let ($X$,$\mathcal{T}$) and ($X'$, $\mathcal{T'}$) be topological spaces. A surjection $q: X \longrightarrow X'$ is a quotient mapping if $$U'\in \mathcal{T'} \Longleftrightarrow q^{-1}\left( U'\right) \in \mathcal{T} \quad \text{i.e. if } \mathcal{T'}=\{ U' \subset X' : q^{-1}\left( U' \right) \in \mathcal{T} \}$$
and the properties:
*
*$q$ is a bijective quotient mapping $\Leftrightarrow$ $q$ is a homeomorphism
*In general, $q$ quotient $\not \Rightarrow q$ open.
If $U \in \mathcal{T}, q(U)\subset X'$ is open if $q^{-1}\left( q\left( U \right) \right) \in \mathcal{T}$ but not in general.
I could not find an example of quotient mapping for which $q^{-1}\left( q\left( U \right) \right)$ is not open. I would understand the idea better if you could show me one.
|
An open map is:
for any open set in the domain, its image is open
A quotient map only requires:
for any open preimage in the domain, its image is open
A quotient map may not be an open map because there may exist a set such that it is open but is not a preimage, i.e., the preimage of the image of an open set may not be the open set itself.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 6,
"answer_id": 5
}
|
Evans PDE chapter 2 problem 4 Problem is
Give a direct proof that if $u \in C^{2}(U) \cap C(\overline{U})$ is harmonic within a bounded open set $U$, then
$\max_{\overline{U}} u =\max_{\partial U} u$.
What I think is that;
Let $u^{\epsilon} = u+ \epsilon |x|^{2}$, where $\epsilon >0 $. Then $\bigtriangleup u^{\epsilon} = \bigtriangleup u + 2\epsilon |x| = 2\epsilon |x| \geq 0$ since $u$ is harmonic. Equality holds if $x=0$. Suppose $x_{0} \in U$ is maximum point of $u^{\epsilon}(\overline{U})$. Then $\bigtriangleup u^{\epsilon}(x_0) = 2 \epsilon |x_0| \leq 0$.
I want to show that $x_0 \neq 0$, so this is contradiction, but I don't know how to do that. Could anyone have an idea?
|
If the maximum is attained inside the domain, at point $x_0$, then the Hessian $H$ of $u$ at $x_0$ is symmetric definite negative. You first have that
$$Tr(H) = \Delta u$$
and because H is definite negative,
$$Tr(H) < 0$$
This implies a contradiction with $u$ is harmonic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/655905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
When are complex conjugates of solutions not also solutions? I've heard that for "normal" equations (e.g. $3x^2-2x=0$), if $(a+bi)$ is a solution then $(a-bi)$ will be a solution as well.
This is because, if we define $i$ in terms of $i^2=-1$ then we might as well define $i^\prime=-i$. Since ${i^\prime}^2=-1$ we find $(a+bi)$ has the same algebraic behaviour as $(a+bi^\prime)$.
So what are non-"normal" equations? When are conjugates not also solutions?
|
If the polynomial has real coefficients, and there is a nonreal root, then its conjugate is also a root. Otherwise, there would be at least one nonreal coefficient.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
sequence defined by $u_0=1/2$ and the recurrence relation $u_{n+1}=1-u_n^2$ I want to study the sequence defined by $u_0=1/2$ and the recurrence relation $$u_{n+1}=1-u_n^2\qquad n\ge0.$$ I calculated sufficient terms to understand that this sequence does not converge because its odd and even subsequences converge to different limits.
In particular we have $$\lim_{n\to\infty}a_{2n+1}=0\text{ and}\lim_{n\to\infty}a_{2n+2}=1.$$
Now I can also prove that $u_n\leq1$ because the subsequent terms are related in this way $$f\colon x\mapsto1-x^2.$$
In order to prove that the subsequences converge I need to prove either that they are bounded (in order to use Bolzano-Weierstrass) or that they are bounded and monotonic for $n>N$. Proved that the subsequences converge to two different limits, I will be able to prove that our $u_n$ diverges.
Can you help me? Have you any other idea to study this sequence?
Thank you.
|
The function $f:[0,1]\longrightarrow[0,1]$ has a unique fixed point but is repellent. And we can't reach it in a finite number of steps because is irrational.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
supremum equals to infumum The question is:
What can you say about the set M if sup M = inf M.
I know that supremum is the lowest upper bound and infumum is the biggest lower bound. But I cant figure out what you can say about the set M. And we only had one lession about supremum and infumum.
|
You know that for any $x \in M$, we have $\inf M \leq x \leq \sup M$, by definition of upper and lower bounds (not necessarily least upper or greatest lower bound). Now, if $\inf M = \sup M$, what does that say about the possible values of $x$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Intuitively, why is the Euler-Mascheroni constant near $\sqrt{1/3}$? Questions that ask for "intuitive" reasons are admittedly subjective, but I suspect some people will find this interesting.
Some time ago, I was struck by the coincidence that the Euler-Mascheroni constant $\gamma$ is close to the square root of $1/3$. (Their numerical values are about $0.57722$ and $0.57735$ respectively.)
Is there any informal or intuitive reason for this? For example, can we find a series converging to $\gamma$ and a series converging to $\sqrt{1/3}$ whose terms are close to each other?
An example of the kind of argument I have in mind can be found in Noam Elkies' list of one-page papers, where he gives a "reason" that $\pi$ is slightly less than $\sqrt{10}$. (Essentially, take $\sum\frac1{n^2}=\pi^2/6$ as known, and then bound that series above by a telescoping series whose sum is $10/6$.)
There are lots of ways to get series that converge quickly to $\sqrt{1/3}$. For example, taking advantage of the fact that $(4/7)^2\approx1/3$, we can write
$$
\sqrt{\frac{1}{3}}=(\frac{16}{48})^{1/2}
=(\frac{16}{49}\cdot\frac{49}{48})^{1/2}=\frac{4}{7}(1+\frac{1}{48})^{1/2}
$$
which we can expand as a binomial series, so $\frac{4}{7}\cdot\frac{97}{96}$ is an example of a good approximation to $\sqrt{1/3}$. Can we also get good approximations to $\gamma$ by using series that converge quickly, and can we find the "right" pair of series that shows "why" $\gamma$ is slightly less than $\sqrt{1/3}$?
Another type of argument that's out there, showing "why" $\pi$ is slightly less than $22/7$, involves a particular definite integral of a "small" function that evaluates to $\frac{22}{7}-\pi$. So, are there any definite integrals of "small" functions that evaluate to $\sqrt{\frac13}-\gamma$ or $\frac13-\gamma^2$?
|
I have an interesting approach. The Shafer-Fink inequality and its generalization allow to devise algebraic approximations of the arctangent function with an arbitrary uniform accuracy. By a change of variable, the same holds for the hyperbolic arctangent function over the interval $(0,1)$ and for the logarithm function over the same interval. For instance,
$$ \forall x\in(0,1),\qquad \log(x)\approx\frac{90(x-1)}{7(x+1)+12\sqrt{x}+32\sqrt{2x+(x+1)\sqrt{x}}}\tag{A} $$
and $\approx$ holds as a $\leq$, actually. We have
$$ \gamma = \int_{0}^{1}-\log(-\log x)\,dx \tag{B}$$
hence:
$$ \gamma\leq 1-\log(90)+\int_{0}^{1}\log\left[7(x+1)+12\sqrt{x}+32\sqrt{2x+(x+1)\sqrt{x}}\right]\,dx\tag{C}$$
where the RHS of $(C)$ just depends on the (logarithms of the) roots of $7 + 32 x + 12 x^2 + 32 x^3 + 7 x^4$, which is a quartic and palindromic polynomial. The numerical approximation produced by $(C)$ allows to state:
$$ \gamma < 0.5773534 < \frac{\pi}{2e}.\tag{D}$$
Actually $(A)$ is not powerful enough to prove $\gamma<\frac{1}{\sqrt{3}}$, but we can achieve that too by replacing $(A)$ with the higher-order (generalized) Shafer-Fink approximation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 0
}
|
How do i prove that $\mathfrak{M}\oplus\Sigma$ is the sigma algebra generated by products of elements of generating sets? Let $(X,\mathfrak{M}),(Y,\Sigma)$ be measurable spaces.
Let $\mathscr{A},\mathscr{B}$ be sets such that $\sigma(\mathscr{A})=\mathfrak{M}$ and $\sigma(\mathscr{B})=\Sigma$.
How do i prove that $\mathfrak{M}\oplus\Sigma$ is the sigma-algebra generated by $\{A\times B : A\in \mathscr{A}, B\in\mathscr{B}\}$?
Below is what i tried:
Define $G=\{A\times B : A\in \mathscr{A}, B\in\mathscr{B}\}$.
Define $R=\{E\subset X : E\times Y \in \sigma(G)\}$.
I showed that $R$ is a sigma-algebra, but i cannot prove whether $\mathscr{A}\subset R$.
|
Define $H := \{ A \times B : A \in \mathfrak M, B \in \Sigma\}$. The goal is to show that $\sigma(H) = \sigma(G)$. The inclusion $\sigma(G) \subseteq \sigma(H)$ follows from $G \subseteq H$. For the other inclusion,
it suffices to prove that $H \subseteq \sigma(G)$. Note that for all $B \in \mathscr B$ the family $R_B := \{ E \subseteq X : E \times B \in \sigma(G)\}$ is a $\sigma$-algebra on $X$ which contains $\mathscr A$, so that $\mathfrak M = \sigma(\mathscr A) \subseteq R_B$, i.e., $A \times B \in \sigma(G)$ for all $A \in \mathfrak M$ and $B \in \mathscr B$.
Now we let $A \in \mathfrak M$ be arbitrary and consider $R_A := \{ F \subseteq Y : A \times F \in \sigma(G) \}$. This is a $\sigma$-algebra on $Y$ by the same reasoning as before. Moreover, by the above paragraph, it contains $\mathscr B$. Together this implies $\Sigma = \sigma(\mathscr B) \subseteq R_A$. Since $A \in \mathfrak M$ was arbitrary, we find $H \subseteq \sigma(G)$ as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to find the shortest distance from a line to a solid? The equation $x^2 + y^2 + z^2 - 2x + 6y - 4z + 5 = 0$ describes a sphere. Exactly
how close does the line given by $x = -1+t; y = -3-2t; z = 10+7t$ get to this sphere?
So the sphere is centered at $(1,-3,-2)$ and the radius is $3$.
I want to find the point where the segment from the center to that point is perpendicular to the line, and then minus the radius to get the answer. So how can I find that point? Or how should I solve this problem in other ways?
|
Let's just find a vector in the direction of the line, find a vector connecting a point on the line t0 the center, and then make sure they're perpendicular.
Any two points on the line will allow us to find a vector in the direction of the line. With,say, $t=0$ and $t=1$, we get $(-1,-3,10)$ and $(0,-5,17)$, yielding a vector $\vec{v}=(1,-2,7)$
Let $P=(t-1,-2t-3,7t+10)$ be an arbitrary point on the line. The vector between this point and $(1,-3,-2)$ is $\vec{w}=(t-2,-2t,7t+12)$.
Then we want $\vec{v}\cdot\vec{w}=t-2+4t+49t+84=54t+82=0$. So the point at $t=\frac{-41}{27}$ should be the base of a perpendicular dropped from the center to the line.
So this perpendicular has the length of the vector $\vec{w}=(\frac{-95}{27},\frac{82}{27},\frac{37}{27})$, which is $\sqrt{\frac{634}{27}}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
An identity involving the Pochhammer symbol I need help proving the following identity:
$$\frac{(6n)!}{(3n)!} = 1728^n \left(\frac{1}{6}\right)_n \left(\frac{1}{2}\right)_n \left(\frac{5}{6}\right)_n.$$
Here,
$$(a)_n = a(a + 1)(a + 2) \cdots (a + n - 1), \quad n > 1, \quad (a)_0 = 1,$$
is the Pochhammer symbol. I do not really know how one converts expressions involving factorials to products of the Pochhammer symbols. Is there a general procedure? Any help would be appreciated.
|
Pochhammer symbols (sometimes) indicate rising factorials, i.e., $n!=(1)_n$ . This is obviously the case here, since the left hand side is never negative, assuming natural n.
$$\bigg(\frac16\bigg)_n=\prod_{k=0}^{n-1}\bigg(\frac16+k\bigg)=\prod_{k=0}^{n-1}\bigg(\frac{6k+1}6\bigg)=6^{-n}\cdot\prod_{k=0}^{n-1}(6k+1)$$
$$\bigg(\frac12\bigg)_n=\prod_{k=0}^{n-1}\bigg(\frac12+k\bigg)=\prod_{k=0}^{n-1}\bigg(\frac{6k+3}6\bigg)=6^{-n}\cdot\prod_{k=0}^{n-1}(6k+3)$$
$$\bigg(\frac56\bigg)_n=\prod_{k=0}^{n-1}\bigg(\frac56+k\bigg)=\prod_{k=0}^{n-1}\bigg(\frac{6k+5}6\bigg)=6^{-n}\cdot\prod_{k=0}^{n-1}(6k+5)$$
Since $1728=12^3$, our product becomes $$2^{3n}\cdot\prod_{k=0}^{n-1}(6k+1)(6k+3)(6k+5)=\dfrac{2^{3n}\cdot(6n)!}{\displaystyle\prod_{k=0}^{n-1}(6k+2)(6k+4)(6k+6)}=$$
$$=\dfrac{2^{3n}\cdot(6n)!}{2^{3n}\cdot\displaystyle\prod_{k=0}^{n-1}(3k+1)(3k+2)(3k+3)}=\dfrac{(6n)!}{(3n)!}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Help me with this limit $$ \lim_{x\to0} {{xe^x \over e^x-1}-1 \over x}$$
I know it should equal ${1 \over 2}$ because when i calculate with number like $0.0001$ the limit $\approx {1 \over 2}$ but i can't prove it.
|
Divide the top and bottom by $x$ to clean stuff up:
$$\dots={ {{e^x \over e^x-1}-\frac{1}{x} }}\normalsize=\frac{x\cdot e^x-e^x+1}{x\cdot(e^x-1)}$$
Can you do it now?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
If $H \leq G$ and $H \subset Z(G)$, the center of $G$, is $H \trianglelefteq G$? This is probably a very dumb question. Is it true that, in general, if $H$ is a subgroup of a group $G$, and $H \subset Z(G)$, the center of $G$, does it follow that $H$ is normal in $G$?
What I know so far that could potentially be useful:
*
*$Z(G)$ is normal in $G$.
*$H$ is normal in $G \Leftrightarrow gHg^{-1} \subseteq H$ for all $g \in G$.
This is part of an intermediate step for a homework problem.
|
This can be understood intuitively with group actions. Say $G$ acts on a set $X$:
*
*A subset $Y\subseteq X$ is pointwise fixed if $gy=y$ for all $g\in G$.
*A subset $Y\subseteq X$ is setwise fixed if $gY:=\{gy:y\in Y\}=Y$ for all $g\in G$.
The group $G$ acts on itself by conjugation. Then:
*
*A subset $H\subseteq G$ is central if $~[G,H]=1 ~~\Leftrightarrow~H\subseteq Z(G)~ \Leftrightarrow~ H$ is pointwise fixed.
*A subset $H\subseteq G$ is normal if it is setwise fixed.
For general group actions, pointwise fixed is much stronger than ($\Rightarrow$) setwise fixed. Therefore we may directly conclude that central ($H\subseteq Z(G)$) implies normal ($H\trianglelefteq G$) in this case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Find $\exp(D)$ where $D = \begin{bmatrix}5& -6 \\ 3 & -4\end{bmatrix}. $ The question is
Find $\exp(D)$ where $D = \begin{bmatrix}5& -6 \\ 3 & -4\end{bmatrix}. $
I am wondering does finding the $\exp(D)$ requires looking for the canonical form... Could someone please help?
|
Hint:
*
*Write the Jordan Normal Form (it is diagonalizable) with unique eigenvalues.
*$e^{D} = P \cdot e^J \cdot P^{-1}$
The Jordan Normal Form is:
$$A = P J P^{-1} = \begin{bmatrix}1&2 \\ 1 & 1\end{bmatrix}~\begin{bmatrix}-1&0 \\0 & 2\end{bmatrix}~\begin{bmatrix}-1&2 \\ 1 & -1\end{bmatrix}$$
Now use the above formula to find the exponential of $D$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Are all metabelian groups linear Are all metabelian groups linear? (i.e. isomorphic to a subgroup of
invertible matrices over a field)
|
Every finitely generated metabelian group $\Gamma$ has a faithful representation over a finite product of fields (I think it's due to Remeslennikov). If $\Gamma$ is (virtually) torsion-free, one field of characteristic zero is enough. But for instance, denoting $C_n$ the cyclic group of order $n$, the wreath product $C_6\wr C_\infty$ is not linear over a single field (you need a field of char. 2 and a field of char. 3).
Note that the result is false for 3-solvable finitely generated groups, which can be far from linear or even residually finite: here are 2 examples (one residually finite and not the other):
1) the wreath product $\Lambda=\mathbf{Z}\wr\Gamma$, where $\Gamma$ is f.g. metabelian and not virtually abelian. Then $\Lambda$ is not linear over any commutative ring, although it is 3-solvable and residually finite.
2) (Hall) fix a prime $p$ and consider the group $A_3$ of matrices $M(k;x_{12},x_{13},x_{23})=\begin{pmatrix} 1 & x_{12} & x_{13}\\ 0 & p^k & x_{23}\\ 0 & 0 & 1\end{pmatrix}$ with $x_{ij}\in\mathbf{Z}[1/p]$ and $k\in\mathbf{Z}$. Then it is 3-solvable and finitely generated (namely by $M(1;0,0,0)$, $M(0;1,0,0)$ and $M(0;0,0,1)$); the element $M(0;0,1,0)$ is central and generates an infinite cyclic subgroup $Z$; the quotient $A_3/Z$ fails to be residually finite because it contains a (central) copy of the abelian group $\mathbf{Z}[1/p]/\mathbf{Z}$. Hence $A_3/Z$ is not linear over any commutative ring.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove ${\rm tr}\ (AA^T)={\rm tr}\ (A^TA)$ for any Matrix $A$ Prove ${\rm tr}\ (AA^T)={\rm tr}\ (A^TA)$ for any Matrix $A$
I know that each are always well defined and I have proved that, but I am struggling to write up a solid proof to equate them. I know they're equal.
I tried to show that the main diagonal elements were the same but if I say that $A$ is $n\times m$ then
$$(AA^T)_{ii} = (a_{11}^2+\dots +a_{1m}^2) + \dots + (a_{n1}^2+\dots +a_{nm}^2)$$
and
$$(A^TA)_{ii} = (a_{11}^2+\dots +a_{n1}^2) + \dots + (a_{1m}^2+\dots +a_{nm}^2)$$
|
To give you an idea of how to properly write these sort of proofs down, here's the proof.
For a matrix $X$, let $[X]_{ij}$ denote the $(i,j)$ entry of $X$. Let $A$ be $m\times n$ and $B$ be $n\times m$. Then
\begin{align*}
\mathrm{tr}\,(AB)
&= \sum_{i=1}^n[AB]_{ii} \\
&= \sum_{i=1}^n\sum_{k=1}^m[A]_{ik}\cdot[B]_{ki} \\
&= \sum_{k=1}^m\sum_{i=1}^n[B]_{ki}\cdot[A]_{ik} \\
&= \sum_{k=1}^m[BA]_{kk} \\
&= \mathrm{tr}\,(BA)
\end{align*}
Your question is a special version of this result with $B=A^\top$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/656918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
}
|
Exercice on a differential form Let $\omega$ be a $q$-form on $\mathbb{R}^2$ and let
*
*$Z_{\mathbb{R}^2}(dx_1)=\{p \in \mathbb{R}^2 \colon (dx_1)_{|p}=0\}$
*$Z_{S^1}(dx_1)=\{p \in S^1 \colon (dx1_{|S^1})_{|p}=0\}$
where $(dx1_{|S^1})$ is the pull-back of $q$. I have to show that $Z_{\mathbb{R}^2}(dx_1) \cap S^1 \ne Z_{S^1}(dx_1)$.
$\\$
I try to solve it, and for me $Z_{\mathbb{R}^2}(dx_1) \cap S^1=\{(0,1),(0,-1)\}$
(Is this because $dx_1$ is zero applied on a point of type (0,c)? in this case how is the tangent vector in a single point (0,c)?)
What about $Z_{S^1}(dx_1)$? I think it contains also the point $(1,0)$,$(-1,0)$ because in this point the tangent vector may be represent as $v=\frac {\partial}{\partial y}$ and $dx_1(v)=0$. Is it right?
|
The $1$-form $dx_1$ is the differential of the coordinate function $$x_1:\>{\mathbb R}^2\to{\mathbb R}, \quad (p_1,p_2)\mapsto p_1\ .$$
For a vector $X=(X_1,X_2)\in T_p$ one has
$$x_1(p+X)-x_1(p)= X_1\ .$$
Since the right side is obviously linear in $X$ we can already conclude that $$dx_1(p).X=X_1\ .$$
This means that $dx_1(p)$ computes the first coordinate of any tangent vector $X$ attached at $p$, and this for all points $p\in{\mathbb R}^2$. It follows that $dx_1(p)\ne0\in T_p^*$ for all $p$; whence your set $Z_{{\mathbb R}^2}(dx_1)$ is empty.
When $p=(\cos\phi,\sin\phi)\in S^1$ then $(TS^1)_p$ is spanned by the vector $X=(-\sin\phi,\cos\phi)$. From $dx_1(p).X=-\sin\phi$ we conclude that $dx_1|S^1(p)=0$ iff $\sin\phi=0$, i.e. at the points $(\pm1,0)\in S^1$. These two points constitute your $Z_{S^1}(dx_1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Real and distinct roots of a cubic equation The real values of $a$ for which the equation $x^3-3x+a=0$ has three real and distinct roots is
|
Here is a graphical approach. Note that '+ a' shifts the graph up or down.
1) Disregard '+ a'.
2) Graph $ f(x) = x^3 - 3x $
http://www.wolframalpha.com/input/?i=plot%28x%5E3+-+3x%29#
3) from the graph we see that we must find the y values at the local extrema. You did that already getting y = 2 and -2
4) It should be obvious now by inspection that a must be between -2 and 2
Final answer ,
-2 < a < 2
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Zeros of $e^{z}-z$, Stein-Shakarchi Complex Analysis Chapter 5, Exercise 13 This is an exercise form Stein-Shakarchi's Complex Analysis (page 155) Chapter 5, Exercise 13:
Prove that $f(z) = e^{z}-z$ has infinite many zeros in $\mathbb{C}$.
Attempt:
If not, by Hadamard's theorem we obtain $$e^{z}-z = e^{az+b}\prod_{1}^{n}(1-\frac{z}{z_{i}})$$ where $\{z_{i}\}$ are the zeros of $f$. How can we conclude ?
|
It is easy to see that $e^z-z$ has order of growth 1. So we can use Hadamard's theorem.
Assume that $f(z)=e^z-z$ has only finitely many zeroes, $a_1,a_2,\ldots, a_n$. Then by Hadamard's factorization theorem, for some $a\in \mathbb{C}$ we have $$e^z-z=e^{az}\prod_{i=1}^{n}\Big(1-\frac{z}{a_i}\Big).$$ Then by using the identity $$\frac{(f_1f_2\ldots f_n)'}{f_1f_2\ldots f_n}=\frac{f_1'}{f_1}+\cdots+\frac{f_n'}{f_n}.$$ We have for $z\notin \{a_1,a_2,\ldots,a_n\}$ $$\frac{e^z-1}{e^z-z}=a+\sum_{i=1}^{n}\frac{1}{z-a_i}.$$
The L.H.S has infinitely many zeroes $2\pi ni$ for $n \in \mathbb{Z}$, but the R.H.S is a quotient of two polynomial functions therefore has only has finitely many zeroes.
Therefore $f$ must have infinitely many zeores.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Continuosly differentation on composite functions Let $f: \mathbb{R}\rightarrow\mathbb{R}$ a $C^1$ function and defined $g(x) = f(\|x\|)$.
Prove $g$ is $C^1$ on $\mathbb{R}^n\setminus\{0\}$. Give an example of $f$ such that $g$ is $C^1$ at the origin and an exemple of $f$ such that $g$ is not. Find a necessary condition in $f$ for $g$ to be differentiable at the origin. Thanks in advance!
|
It is clear that $g$ is $C^1$ on $\mathbb{R}^n\backslash\{0\}$ as $x\mapsto \|x\|$ is also $C^1$ there. The derivative is given by
$$\nabla g(x)=\frac{x}{\|x\|} f^\prime(\|x\|)$$
for $x\not=0$.
Now let us investigate differentiability at $g$. For a real number $h\not=0$ we have
$$\frac{1}{h}(g(he_i)-g(0))=\frac{1}{h}(f(h)-f(0))\longrightarrow f^\prime(0)$$
as $h\rightarrow 0$. Therefore $\partial_i g(0)$ exists for every $i=1,\dots,n$ and is given by
$$\partial_i g(0) = f^\prime(0)$$
In other words, $g$ is differentiable at $0$. But to be $C^1$ at $0$ we still need continuity. That is, if $x^{(n)}\rightarrow 0$ for a sequence $(x^{(n)})_n$ in $\mathbb{R}^n$ (let us assume that $x^{(n)}\not=0$), we want
$$\frac{x_i^{(n)}}{\|x^{(n)}\|}f^\prime(\|x^{(n)}\|)\longrightarrow f^\prime(0)$$
as $n\rightarrow\infty$ for all $i=1,\dots,n$. This can only be true if $f^\prime(0)=0$. To see that, look for example at the sequence $x^{(n)}_i=-\frac{1}{n}$.
Therefore your necessary condition is
$$f^\prime(0)=0$$
Clearly, it is also sufficient.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove a square can't be written $5x+ 3$, for all integers $x$. Homework question, should I use induction?..
Help please
|
Hint $\ $ Let $\,n\,$ by any integer. $\ {\rm mod}\ 5\!:\ n \equiv 0,\pm1,\pm2\,\Rightarrow\, n^2\equiv 0,1,4,\,$ so $\ n^2\not\equiv 3\!\pmod{\!5}$
Remark $ $ It's easier if you know $\mu$Fermat: $\ 3 \equiv n^2\overset{\rm square}\Rightarrow\color{#0a0}{-1}\equiv \,n^4\ [\,\equiv \color{#c00}1$ by $\mu$Fermat]
This is a special case of Euler's Criterion: $\,{\rm mod}\ p=5\!:\,\ 3 \not\equiv n^2\, $ by $\ 3^{(p-1)/2}\equiv\color{#0a0}{-1}\not\equiv \color{#c00}1$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Show that f has a unique zero and prove that f′(λ)>0 with f being continuous and differentiable. Let $f: [a,b] \longrightarrow \mathbb{R}( a<b)$, $f$ is continuous and differentiable.
We assume that $f$ and $f'$ are increasing and $f(a)<0, 0<f(b)$.
Show that $f$ has a unique zero which we denote $\lambda$ and prove that $f'(\lambda)>0$
I have used IVT theorem to show that $\lambda$ exists.
*For the uniqueness if $f'$ is continuous, I can use
$$f(\mu)-f(\lambda)=\int_\lambda^\mu f'(x) \, \mathrm dx$$
Except that f is only differentiable..
Thank you in advance for your help
|
You don't need the fact that $f$ is increasing, only $f'$.
First, as you said, use IVT for existence.
Second, consider your integral $f(a)-f(0)$ where $f(a)=0$ and conclude that $f'(a)$ must be positive. (Edit: as per your comment, if you don't like integral, use MVT to show the same fact.)
Finally, given this, conclude that for $b>a$: $f(b)=f(b)-f(a)$ must be positive as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Generalization of principle of inclusion and exclusion (PIE) The PIE can be stated as $$|\cup_{i=1}^n Y_i| = \sum_{J\subset[n], J\neq \emptyset} (-1)^{|J|-1} |Y_J|$$ where $[n]=\{1,2,...,n\}$ and $Y_J=\cap_{i \in J} Y_i$.
Problems using it are usually reduced to counting problems and require finding a good union decomposition of the set you want to count. I would like to know if you have some applications of finding bounds on arbitrary integer sums or even evaluating them using setdecomposition and manipulation. The more vague question if the IEP can be generalized away from counting, for example some polynomial is certainly defined for integers so it sort of falls into the first question, but the real numbers are certainly not countable but maybe there is some connection.
references are appreciated, too.
|
The principle of inclusion and exclusion is intimately related to Möbius inversion, which can be generalized to posets. I'd start digging in this general area.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Shading in a simple closed curve I started thiking about this today and I have an answer I feel I can justify intuitively but not rigorously.
Let $\mathbb{S} = \{(x,y) \ | \ f(x,y)=c\}$ define the points on a simple closed curve (at least I think this is the right terminology; examples would be a circle, elipse or heart curve).
Is it always the case that set of points contained within the curve (i.e. the points that "shade it in") are equal to $\mathbb{S^*}=\{(x,y) \ | \ f(x,y) \lt c\}$
This seems to be true "intuitively" since you are making the $x, y$ values smaller and so $f(x,y)$ shrinks as well. Can we perform some type of derivative on the curve to prove this?
Also, any help with terminology/notation would be greatly appreciated. I think this falls under Topology but I'm not sure.
|
With a few additional assumptions your statement becomes true.
Suppose $f$ is continuous. You've assumed that $S$ is a simple closed curve, so by the Jordan curve theorem, the complement of $S$ in the plane consists of two components, one unbounded and one bounded. The bounded component is what you are calling the points inside the curve. Let's write $B$ for the bounded component and $U$ for the unbounded component.
Suppose that at least one point $(x,y)$ in $B$ satisfies $f(x,y)<c$. Then all points in $B$ satisfy this inequality, since $B$ is connected (definition of component), and $S$ by assumption contains all points with $f(x,y)=c$. And if one point in $U$ satisfies $f(x,y)>c$, then all points in $U$ do too (same reasoning).
So if (1) $f$ is continuous, and (2) one point in the bounded component satifies $f(x,y)<c$, and (3) one point in the unbounded component satifies $f(x,y)>c$, then your statement is true.
If any one of these assumptions is dropped, the statement no longer holds. Here's an example where (1) and (2) hold but not (3). Let $f(P)$ be the distance from $P$ to the origin if that distance is $\leq 1$. If the distance to the origin is $>1$, say $r$, then let $f(P)=1/r$. Then $S=\{P:f(P)=1\}$ is the unit circle, but the locus of $f(P)<1$ is everything else.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Find m.g.f. given $E(X^r)$ function?
"Let $X$ be a random variable with $E(X^r) = 1 / (1 + r)$, where $r = 1, 2, 3,\ldots,n$. Find the series representation for the m.g.f. of $X$, $M(t)$. Sum this series. Identify (name) the probability distribution of $X$?
As a hint, use the Taylor Formula."
The expectation is what is throwing me off here. So it's a summation? The summation doesn't converge, and I'm not aware of how to get the mgf without knowing the distribution or pdf/cdf.
|
The definition of mgf is
$$
M(t) = \mathbb{E}[e^{tX}]
$$
and so
$$
E[X^r] = M^{(r)}(0).
$$
Notice that if we represent $M(t)$ as a McLaurin series, say
$$
M(t) = \sum_{n=0}^\infty \frac{m_n}{n!} t^n
$$
then $M^{(n)}(0) = \frac{m_n}{n!}$. We can now equate them, getting
$m_n = \frac{n!}{1+n}$ and so
$$
M(t) = \sum_{n=0}^\infty \frac{m_n}{n!} t^n
= \sum_{n=0}^\infty \frac{t^n}{1+n}.
$$
Can you take it from here?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
ZF Set Theory Axiom of Infinity Could someone please state and explain the axiom of infinity in ZF set theory? This isn't homework, it's just something that has interested me for awhile.
|
The axiom of infinity says that there exists a set $A$ such that $\varnothing\in A$, that is the empty set is an element of $A$, and for every $x\in A$ the set $x\cup\{x\}$ is also an element of $A$.
The definable function $f(x)=x\cup\{x\}$ is an injection from $A$ into itself, and since $f(x)\neq\varnothing$ for every $x$, it follows that $f$ is not surjective. Therefore $A$ must be infinite.
Do note that $\{\varnothing\}$ is not the empty set, and so $\varnothing\cup\{\varnothing\}=\{\varnothing\}\neq\varnothing$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Need help with an integral. I am asked to show that
$\displaystyle \frac{1}{2\pi} \int_{0}^{2\pi} e^{2 \cos \theta} \cos \theta \, d\theta = 1 + \frac{1}{2!} + \frac{1}{2!3!} + \frac{1}{3!4!} + \cdots$
by considering $\displaystyle \int e^{z + \frac{1}{z}} \, dz$. I don't really know how to incorporate the hint, and any advice would be appreciated. Thanks!
|
Hint: the usual way to attempt a real integral from $0$ to $2\pi$ by using complex methods is to substitute $z=e^{i\theta}$. At some stage you may need to take the real part or imaginary part of a complex integral.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
'uniform approximation' of real in $[0,1]$ Good evening,
Prove that: For every $\varepsilon>0$, there exist an $n\in \mathbb{N}$, such that for every $x\in[0,1]$, there exist $(p,q)\in \mathbb{N^2}$, with $0\leq p\leq q\leq n$ and |$qx−p|≤\varepsilon$.
I have tried to prove the result like the proof of density but I didn't succeed.
Thanks for your help.
|
Hint: Look at Dirichlet's approximation theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
I'm trying to prove there is infinite rational numbers between any two rational number I understand how to explain but can't put it down on paper.
$\displaystyle \frac{a}{b}$ and $\displaystyle \frac{c}{d}$ are rational numbers. For there to be any rational between two numbers I assume $\displaystyle \frac{a}{b} < \frac{c}{d}$.
I let $\displaystyle x = \frac{a}{b}$ and $\displaystyle y = \frac{c}{d}$ so $x < y$. the number in the middle of $x$ and $y$ is definitely a rational number so $\displaystyle x < \frac{x+y}{2} < y$. I know that there is another middle between $x$ and $\displaystyle \frac{x+y}{2}$ and it keeps on going. How would I write it as a proof?
|
One natural way to complete your proof is by contradiction.
You noticed that, between any two rational numbers, there is a third.
Now, pick any two rational numbers $x < y$, and assume that there are a finite number of (say, only $n$) rational numbers between them. Call these numbers $a_0 < a_1 < \cdots < a_{n-1}$. Now, look between $x$ and $a_0$. We know there is some rational number, call it $z$ between $x$ and $a_0$, which then must also be between $x$ and $y$. But this cannot be true, because $z$ is less than $a_0$, so cannot equal any of the $a_i$, which were assumed to be every rational between $x$ and $y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/657992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Maple Input and string conversion In Maple how does one prompt a user to input an equation with variable x? Then convert that equation into a data type that will enable me to perform functions on said equation?
|
One way to do this is to use the readstat command, which prompts the user to enter a Maple statement whose value is returned. For example:
Test := proc()
local p, x;
x := readstat( "Enter Variable: " );
p := readstat( "Enter Polynomial: " );
diff( p, x )
end proc:
Then you can do:
> Test();
Enter Variable: x;
Enter Polynomial: x^2 - a*x + 2;
-a + 2 x
>
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
(Geometry) Proof type questions
Can someone please explain to me the given question and proof? otherwise I might just have to end up dropping my maths course because unfortunately I'm not understanding anything from my teacher. I'm not really 'seeing' the concept, if that makes sense.
Regards,
|
The thing you might be tripped up on is the 180. The 180 comes from the 'line' DE, which is a straight angle, you know, a 180. And since 180 is a constant, subtracting either $\angle CBD$ or $\angle ABD$ you'll end up with the same number.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The kernel of homomorphism of a local ring into a field is its maximal ideal? I have a question about the proof of Theorem 3.2. of Algebra by Serge Lang.
In the theorem $A$ is a subring of a field $K$ and $\phi:A \rightarrow L$ is a homomorphism of $A$ into an algebraically closed field $L$.
In the beginning of the proof
We may first extend $\phi$ to a homomorphism of the local ring $A_\mathfrak{p}$, where $\mathfrak{p}$ is the kernel of $\phi$. Thus without loss of generality, we may assum that $A$ is a local ring with maximal ideal $\mathfrak{m}$.
Then, later
Since $\phi$ and the canonical map $A \rightarrow A/\mathfrak{m}$ have the same kernel,
I understand that $\mathfrak{m}$ is the kernel of the extension of $\phi$ to $A_\mathfrak{p}$. But in the proof, $\mathfrak{m}$ seems to be assumed to be the kernel of $\phi$ even if $A$ itself is a local ring.
In general, is the kernel of homomorphism of a local ring to a field its maximal ideal? If so, why is that ?
|
I believe, Lang meant that one can consider only the case of a local ring and the kernel being $\mathfrak{m}$ (i.e. $A:=A_{\mathfrak{p}}$). For the kernel: let $R$ be a ring of germs of infintely-differentiable functions, and take a homomorphism of $R$ to formal power series. That is embeddeble into the field of Laurent series, but the kernel is obviously not maximal ideal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
give an example of algebraic numbers $\alpha, \beta$ such that.... Question is to find algebraic numbers $\alpha, \beta$ such that :
$$[\mathbb{Q}(\alpha):\mathbb{Q}]>[\mathbb{Q}(\beta):\mathbb{Q}]>[\mathbb{Q}(\alpha\beta):\mathbb{Q}]$$
It is not so difficult to find algebraic numbers $\alpha, \beta$ such that :
$$[\mathbb{Q}(\alpha):\mathbb{Q}]>[\mathbb{Q}(\beta):\mathbb{Q}]$$
but then the last relation $$[\mathbb{Q}(\beta):\mathbb{Q}]>[\mathbb{Q}(\alpha\beta):\mathbb{Q}]$$is getting disturbed all the time....
Please provide some hint for me to clear this.
Thank you :)
|
One can argument this way. Suppose that we can find some $\alpha$ with the following properties:
1) $[\mathbb Q(\alpha)\colon \mathbb Q]=n$ and there is some prime $p$ such that $p(p+1)\mid n$.
2) The minimal polynomial $m(x)$ of $\alpha$ is of the form $f(x^{p(p+1)})$ for some $f(x)\in \mathbb Z[x]$.
Then $[\mathbb Q(\alpha^p)\colon \mathbb Q]=n/p$. In fact clearly $[\mathbb Q(\alpha^p)\colon\mathbb Q]$ is at least $n/p$, but also $\alpha^p$ is a root of $f(x^{p+1})$ which has degree $n/p$, and so is the minimal polynomial of $\alpha^p$. In the same way, one checks that $[\mathbb Q(\alpha^{p+1})\colon \mathbb Q]=n/(p+1)$ and since $\alpha^{p+1}=\alpha\cdot\alpha^p$ your example is found since $n>n/p>n/(p+1)$.
This allows you to construct a lot of examples. Pick your favourite $n$ divided by $p(p+1)$ for some prime $p$, then take any irreducible polynomial $m(x)\in \mathbb Z[x]$ of degree $n$ of the form $f(x^{p(p+1)})$ for some $f(x)\in \mathbb Z[x]$. Then any root $\alpha$ of $m$ does the job. To construct $m(x)$ one can simply use Eisenstein's criterion.
To give an "explicit" example say that you choose $n=24=(3\cdot 4)\cdot 2$, so that $p=3$. Now fix your favourite prime number $q$ and set $m(x)=x^{24}+qx^{12}+q$. This polynomial is irreducible by Eisenstein's criterion and is of the form $f(x^{12})$ where $f(x)=x^2+qx+q$. If $\alpha$ is a root of $m$, then $[\mathbb Q(\alpha)\colon \mathbb Q]=24$, $[\mathbb Q(\alpha^3)\colon \mathbb Q]=8$ because $x^8+qx^4+q$ is the minimal polynomial of $\alpha^3$ and $[\mathbb Q(\alpha^4)\colon \mathbb Q]=6$ because the minimal polynomial of $\alpha^4$ is $x^6+qx^3+q$. Therefore $\alpha$ and $\beta=\alpha^3$ do the job.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Very silly permutation question Okay let me briefly explain my doubt.
I'll explain some easy problems,so that you can study easily my mind and you can guess what confusion i might be going through right now.
This may be silly.But please help me out.
How do you permute the letters ABC with no character can be repeated?
We can do $3!$ arrangements.
Like, 3 ways to choose the first character, 2 ways for the second one,one way for the last character.
Okay,so far so good.
Now,how do you permute $AAB$?
$$AAB,ABA,BAA,BAA,ABA,AAB.$$
The formula will be like dividing the number of times each letter repeats.
Like, $3!/2!$ (Since $A$ is repeated).
I don't understand this part.Why we need to divide by $2!$? Reason is being told as $A$ is repeated twice. So what? What happens internally? What happens when we divide ? Subtraction should make sense here since we are eliminating the redundancy. But dividing gives the actual answer! Can somebody explain me clearly why dividing with repeated terms makes sense?
Please and thanks.
|
This can be understood by renaming the second A to A'. This gives you obviously 3!=6 possible permutations:
AA'B
ABA'
A'AB
A'BA
BAA'
BA'A
Now you can group these into pairs whose elements are equal except of A and A' being swapped:
AA'B, A'AB
ABA', A'BA
BAA', BA'A
Since A and A' shall not be distinguished we only keep one permutation from each pair, effectively dividing by two.
The same argument works for any number of A, A', A'', ... (k times) grouping them into tuples of k! elements.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Integrating $\int_0^\infty\frac{1}{1+x^6}dx$ $$I=\int_0^\infty\frac{1}{1+x^6}dx$$
How do I evaluate this?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
$\ds{\bbox[5px,#ffd]{}}$
\begin{align}
&\bbox[5px,#ffd]{\int_{0}^{\infty}{\dd x \over 1 + x^{6}}} =
{1 \over 6}\int_{0}^{\infty}{x^{1/6 - 1} \over 1 + x}\,\dd x =
{1 \over 6}\,\
\overbrace{\Gamma\pars{1 \over 6}\Gamma\pars{1 - {1 \over 6}}}
^{\ds{\substack{\ds{Ramanujan's} \\[0.5mm] \ds{Master}\\[0.75mm] \ds{Theorem}}}}
\\[5mm] = &\
{1 \over 6}\,{\pi \over \sin\pars{\pi/6}} =
{1 \over 6}\,\pi\ \underbrace{\csc\pars{\pi \over 6}}_{\ds{2}}\
=\ \bbx{\pi \over 3} \approx 1.0472 \\ &
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 7,
"answer_id": 6
}
|
Functional analysis and physics Some branches of mathematics like functional analysis do not on first encounters seem to have any possible applications.Can someone please give me some examples of applications of functional analysis in phsics?I believe there can hardly be any absolutely unapplied branch of mathematics:it is just I am curious to know how this particular field of study in mathematics has been instumental in physics or possibly other sciences.Thank you.
|
Historically functional analysis emerged from applications; it is therefore obvious that it has "applications". In quantum mechanics the obserables are operators on a hilbert space, and one wants in particular to realize the "canonical commutation relation" $[q,p]=1$ for self-adjoint operators $q$ and $p$ (position and momentum) on a hilbert space; it is easily seen that this implies that at least one of them must be unbounded and hence cannot be defined on the whole space (Hellinger-Töplitz). From this it follows that one must take care of all technicalities which come along with unbounded operators.
Also the spectrum of the operator is related to the measurement of the observable which it represents.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
At a closed monoidal category, how can I derive a morphism $C^A\times C^B\to C^{A+B}$? Let $A$, $B$ and $C$ be objects of a closed monoidal category which is also bicartesian closed. How can I derive a morphism $C^A\times C^B\to C^{A+B}$?
$(-)\times (-)$ denotes the product, $(-)+(-)$ the coproduct and $(-)^{(-)}$ the exponentiation.
|
Cartesian closure applies to cartesian categories, i.e. categories which are (symmetric) monoidal with respect to the (binary) product bifunctor (basically any finitely complete category is cartesian). Cartesian closed categories are those categories where each functor $A\times-$ has a right adjoint $(-)^A$ realizing the binatural bijection
$$
{\cal C}(A\times B,C)\cong {\cal C}(B, C^A)
$$
In this setting you can exploit the fact that right adjoint preserve limits (being the bifunctor $(A,C)\mapsto C^A$ contravariant in $A$ this means that it sends colimits to limits): more precisely (I love these computations by nonsense!), in this particular case you have that
$$\begin{align*}
{\cal C}(X, C^{A\coprod B}) & \cong {\cal C}(X\times(A\amalg B),C)\\
&\cong {\cal C}\Big((X\times A)\amalg(X\times B),C\Big)\\
&\cong {\cal C}\big(X\times A, C\big)\times {\cal C}\big(X\times B,C\big) \\
&\cong {\cal C}(X,C^A)\times {\cal C}(X,C^B) \\
&\cong {\cal C}(X,C^A\times C^B)
\end{align*}$$
Now you can conclude, since the Yoneda lemma tells you that the two objects you wanted to link are isomorphic (since they give rise to canonically isomorphic hom-presheaves).
This in fact works in more generality, i.e. in a (let's suppose: symmetric) monoidal category $\cal C$ such that the tensor functor $\otimes\colon (A,B)\mapsto A\otimes B$ gives rise to functors $A\otimes -$, each of which has a right adjoint $[A,-]$ (the "internal hom" in the monoidal category.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Solve the following differential equation: $ty' + 2y = \sin(t)$ An exercise from the book:
Solve the following differential equation: $ty' + 2y = \sin(t)$
This is the first time I approch a differential equation, and the book doesn't provide an example how to solve an differential equation,
So I need your help to show me how to solve this differential equation.
It's not a homework. Thanks in advance!
|
Hint:
Consider the following ODE:
$$y'(t) + p(t) y(t) = F(t).$$
Suppose you are interested in rewriting this equation as follows:
$$ \frac{d}{dt}(I y) = I F,$$
for some function $I(t)\neq 0$ (called integrating factor). Expand the product of derivatives to find:
$$y' + \frac{I'}{I} y = F, $$
so it must hold:
$$\frac{I'}{I} = p(t) \iff I(t) =e^{\int p(t) dt},$$ being the constant of integration set to 1 since it's not relevant (both sides of the equation are multiplied by $I$). This leads you to the solution:
$$Iy = \int IF \, dt + A,$$
with $A$ a constant of integration.
Identify $p(t) = 2/t$ and $F(t) = \sin t/t$ in order to achieve the solution.
Cheers!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/658961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is it possible to swap vectors into a basis to get a new basis? Let $V$ be a vector space in $\mathbb{R}^3$. Assume we have a basis, $B = (b_1, b_2, b_3)$, that spans $V$. Now choose some $v \in V$ such that $v \ne 0$. Is is always possible to swap $v$ with a column in $B$ to obtain a new basis for $V$?
My initial thought is that it is not always possible since if $v$ is some linear combination of any 2 vectors in $B$ then if we were to swap $w$ into $B$, $B$ would no longer span $V$. Am I missing a nuance here?
|
If $(v_1,\cdots, v_n)$ is a basis for any vector space $V$, and $w\in V$ is an arbitrary vector, then swapping $w$ for $v_i$ will result in a basis iff $w$ is not in the span of $\{v_j\mid j\ne i\}$. The proof is a good exercise.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
How prove $\sum\limits_{cyc}(f(x^3)+f(xyz)-f(x^2y)-f(x^2z))\ge 0$ let $x,y,z\in (0,1)$, and the function
$$f(x)=\dfrac{1}{1-x}$$
show that
$$f(x^3)+f(y^3)+f(z^3)+3f(xyz)\ge f(x^2y)+f(xy^2)+f(y^2z)+f(yz^2)+f(z^2x)+f(zx^2)$$
For this problem simlar this Schur inequality:
http://www.artofproblemsolving.com/Wiki/index.php/Schur
we all know this third degree-Schur inequality,
$$a^3+b^3+c^3+3abc-ab(a+b)-bc(b+c)-ca(c+a)\ge 0$$
$a,b,c\ge0$
But my problem is $f(x)$ such this form.so How prove it? Thank you
|
In fact, this nice problem can indeed be solved using Schur's Inequality of 3rd degree, albeit indirectly. First we will prove a lemma:
Lemma: For $x, y, z>0$, we have:
$$e^{x^3}+e^{y^3}+e^{z^3}+3e^{xyz}\ge e^{x^2y}+e^{xy^2}+e^{y^2z}+e^{yz^2}+e^{z^2x}+e^{zx^2}$$
Proof: By Schur's inequality, for each $n\in\mathbb{N}$, we have:
$$G_n=\sum_{cyc} (\frac{x^n}{n!}-\frac{y^n}{n!})(\frac{x^n}{n!}-\frac{z^n}{n!})\ge 0 $$
Also, $\lim_{G_n\to\infty}=0$. Therefore:
$$\sum_{cyc}e^{x^3}+e^{xyz}-e^{x^2y}-e^{x^2z}=\sum_{n=1}^\infty G_n\ge 0$$
Where we here used $e^t=1+t+\frac{t^2}{2}+\cdots$. So the lemma is proven. $\Box$
Now take $(x, y, z):=(-x(\ln s)^{\frac{1}{3}}, -y(\ln s)^{\frac{1}{3}}, -z(\ln s)^{\frac{1}{3}})$ for a variable $s<1$. Then the lemma gives that:
$$g(s)=s^{-x^3}+s^{-y^3}+s^{-z^3}+3s^{-xyz}-( s^{-x^2y}+s^{-xy^2}+s^{-y^2z}+s^{-yz^2}+s^{-z^2x}+s^{-zx^2})\ge 0$$
for all $0\le s\le 1$ (the endpoints are true by inspection). Finally, integrating this over the interval $[0, 1]$ gives:
$$\int_0^1 g(s)=\sum_{cyc}\frac{1}{1-x^3}+\frac{1}{1-xyz}-\frac{1}{1-x^2y}-\frac{1}{1-x^2z}\ge 0$$
as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
A question about the action of $S_n$ on $K[x_1,...,x_n]$ Let ${K}$be the field ($\,Char\,K\not=0)$. Let $n\in \mathbb{Z}^{+}$. $S_n$ acts on $K[x_1,...,x_n]$in the following way:
If $p\in K[x_1,...,x_n]$ and $\sigma\in S_n$, then $\sigma p$ is the polynomial $p(x_{\sigma(1)},x_{\sigma(2)},...,x_{\sigma (n)})$.
Question: Let $H$ be a subgroup of $S_n$, must there exist a polynomial $p\in K[x_1,...,x_n]$ such that $stab(p)=H$ ?
(Where $stab(p)$ is defined to be $\{\sigma\in S_n|\sigma p=p\})$
Thank you
Edit: I also want to know the answer to the question in the case that our field $K$ is $\mathbb{C}$
|
Yes.
Let $f(x_1,\ldots, x_n)=\prod_{k=1}^n x_k^k$. Then
$p=\sum_{h\in H} h(f)$ has the desired property. (This works for all fields, also those of nonzero characteristic)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How prove this $x_{2^n}\ge 2^n-n+1$ the sequence $ (x_n)_{n\ge 1}$, $ x_n$ being
the exponent of 2 in the decomposition of the numerator of
$$\dfrac{2}{1}+\dfrac{2^2}{2}+\cdots+\dfrac{2^n}{n}$$
goes to infinity as $ n\to\infty$.
Even more, prove that $$x_{2^n}\ge 2^n-n+1$$
My idea: maybe
$$\dfrac{2}{1}+\dfrac{2^2}{2}+\cdots+\dfrac{2^n}{n}=\dfrac{2^n}{n}\cdot M'?$$
where $M'$ is postive numbers.
so we after we replacing $n$ by $2^n$, then
$$\dfrac{2}{1}+\cdots+\dfrac{2^{2^n}}{2^n}=\dfrac{2^{2^n}}{2^n}M''=2^{2^n-n}M''$$
then I can't,Thank you for your help
|
Now,I have solution This nice problem,
we only note that
$$\dfrac{2}{1}+\dfrac{2^2}{2}+\cdots+\dfrac{2^n}{n}=\dfrac{2^n}{n}\sum_{k=0}^{n-1}
\dfrac{1}{\binom{n-1}{k}}$$
This indentity proof can see:http://www.artofproblemsolving.com/Forum/viewtopic.php?p=371496&sid=e0319f030d85bf1390a8fb335fd87c9d#p371496
also I remember this indentity stackmath have post it,But I can't find this link.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
}
|
Combinatorial proof that $\sum_{j=0}^k (-1)^j {\binom n j}=(-1)^k \binom{n-1}{k}$
Prove $\sum_{j=0}^k (-1)^j {\binom n j}=(-1)^k \binom{n-1}{k}$.
It can be proven easily using induction and Pascal's identity, but I want some insight. The alternating sum reminds of inclusion-exclusion, but the RHS can be negative.
I've also tried the snake oil method, but don't know how to finish it:
$$\sum_n (\sum_{j=0}^k (-1)^j {\binom n j})x^n= \sum_{j=0}^k (-1)^j \sum_n {\binom n j}x^n$$
$$=\sum_{j=0}^k (-1)^j \frac{x^j}{(1-x)^{j+1}}=\frac{1}{1-x} \sum_{j=0}^k (\frac{-x}{1-x})^j=1-(\frac{-x}{1-x})^{k+1}$$
|
Reverse the terms in your LHS and multiply by $(-1)^k$ to see that your identity is equivalent to this: $\sum_{j=0}^k (-1)^{j} {\binom n {k-j}}= \binom{n-1}{k}$. The right hand side is the number of $k$-subsets of positive integers less than $n$. That equals the number of $k$-subsets of positive integers less than or equal to $n$ minus the number of $k$-subsets of positive integers less than or equal to $n$ that contain the integer $n$, or $\binom n k - \binom {n-1}{k-1}$. (This is the defining recurrence for binomial coefficients.) Continue with this idea:
$\begin{align*}
\binom{n-1}{k} &= \binom n k - \binom {n-1}{k-1}\\
&= \binom n k - \left(\binom {n}{k-1}- \binom {n-1}{k-2}\right)\\
&= \binom n k - \binom {n}{k-1} + \left(\binom {n}{k-2}-\binom {n-1}{k-3}\right)\\
&= \binom n k - \binom {n}{k-1} + \binom {n}{k-2}-\left(\binom {n}{k-3}-\binom{n-1}{k-4}\right)\\
&= \cdots\\
&= \sum_{j=0}^k (-1)^{\,j} {\binom n {k-j}}.
\end{align*}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How can you find the $x$-coordinate of the inflection point of the graphs of $f'(x)$ and $ f''(x)$? So I understand how to find the inflection points for the graph of $f(x)$.
But basically, I have been shown a graph of an example function $f(x)$ and asked the state the inflection points of the graph. (I am just shown a curve... not given an actual function in terms of $x$)
Easy enough... I can see by looking where the concavity of the curve changes. But here is the tricky part. I am then asked to find the inflection points of the curves of $f'(x)$ and $f''(x)$.
From the graph of $f(x)$, I can draw in the values it cuts the $x$-axis, and whether it is positive/negative, but I don't understand how you can also comment on the inflection points from it?
i.e. $f(x)$ has both absoulute and local max values at $x = 2$ and $x = 6$, and local min at $x = 4$. Hence, $f'(x)$ cuts $x$ axis at $2, 4$ and $6$. I would imagine the local min/max would therefore be $3$ and $5$ (middle of $2$ & $4$, and $4$ & $6$ respectively)
The book gives answers a) $(3,5)$ Got it! b) $2, 4, 6$ c)$1, 7$
How do you find the point of inflection for $f'(x)$ and $f"(x)$ in this case?
|
I've always thought that the points where the f(x) graph cuts the x-axis is where the points of inflection for f'(x) are.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
LCM of First N Natural Numbers Is there an efficient way to calculate the least common multiple of the first n natural numbers? For example, suppose n = 3. Then the lcm of 1, 2, and 3 is 6. Is there an efficient way to do this for arbitrary n that is more efficient than the naive approach?
|
If you don't want to assume having or constructing list if primes, you can just recursively apply the two argument LCM, resulting in something that is likely a bit worse than $O(n \cdot log n)$ time. You can likely improve to close to $O(n \cdot log n)$ by replacing the linear application with divide and conquer:
Assuming $f(n) = LCM(1, 2, ... n)$
Then define $f\prime(m,n)$ so that $f(n) = f\prime(1, n)$
Then
*
*If $m = n$ then $f\prime(m,n) = n$
*If $m < n$ then $f\prime(m,n) = LCM(f\prime(m, \lfloor{(m+n)/2}\rfloor), f\prime(\lfloor{(m+n)/2}\rfloor+1,n))$
This does the same $n-1$ LCM evaluations as the linear solution but it should do that majority of them with significant smaller numbers which should speed things up a bit. On the other hand it results in $log_2(n)$ values having to be held at some point in time as a trade off. If you are willing to accept an $O(n)$ space cost to do that, then it might be even better to do:
*
*allocate a min queue,
*populate it with 1 to $n$
*pop and LCM the smallest two values
*re-insert the result
*continue at step 3 until only one value remains.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 5
}
|
Graph with vertices having certain degrees Let $G$ be a simple graph with $n$ vertices, such that $G$ has exactly $7$ vertices of degree $7$ and the remaining $n-7$ vertices of degree $5$. What is the minimum possible value for $n$?
I have gotten that $n$ could equal $14$ with $G$ as the following graph:
i) $G=G_1 \cup G_2$
ii) $|G_1|=|G_2|=7$
iii) $G_1 \cong K_7$
iv) each vertex in $G_2$ is connected to one distinct vertex in $G_1$ and four more in $G_2$ (subject to the restraints)
How do I know this is the least value of $n$? If it is not, how can I compute the least value? Thank you!
|
Assuming you are concerned with simple graphs, you can use the Erdos-Gallai theorem that characterizes when does a graph sequence admit a graphical representation.
Using the above theorem you can verify that you need at least three vertices of degree $5$ thus giving you the degree sequence $ds = (7,7,7,7,7,7,7,5,5,5).$ Given this you can construct the following graph realizing $ds.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Find the differential of an n-variable function The problem goes like this:
If $f:\mathbb{R}^n\to\mathbb{R}, f(x)=\arctan||x||^4$, prove that $Df(x)(x)=\displaystyle\frac{4||x||^4}{1+||x||^8}$
Now, I've calculated each of the partial differentials (if that's the right word) and applied that $1\times n$ matrix to a vector $(x_1, ... ,x_n)$ and I get this:
$4(\displaystyle\frac{x_1^4}{1+x_1^8}+...+\frac{x_n^4}{1+x_n^8})$
Now, the similarity between those terms and the final solution is obvious but I just can't seem to get the sum to become the above.
Am I going about this the wrong way or am I just missing something?
($||\cdot||$ is the Euclidian norm)
|
$\mathrm{Arctan}$ is differentiable on $\mathbb{R}$ with :
$$ \big( \mathrm{Arctan}' \big)(x) = \frac{1}{1+x^{2}} $$
And $\varphi \, : \, x \in \mathbb{R}^{n} \, \longmapsto \, \Vert x \Vert^{4}$ is differentiable on $\mathbb{R}^{n}$ with :
$$ \mathrm{D}_{x}\varphi \cdot h = 4 \Vert x \Vert^{2} \left\langle x,h \right\rangle $$
where $\left\langle \cdot,\cdot \right\rangle$ denotes the usual inner product on $\mathbb{R}^{n}$. Since $f = \mathrm{Arctan} \circ \varphi$, the chain rule gives you :
$$
\begin{align*}
\mathrm{D}_{x}f \cdot h & = {} \frac{1}{1+\big( \Vert x \Vert^{2} \big)^{2}} \times \mathrm{D}_{x}\varphi \cdot h \\[2mm]
&= \frac{4 \Vert x \Vert^{2} \left\langle x,h \right\rangle}{1+\Vert x \Vert^{8}} \\
\end{align*}
$$
Which gives you the expected result for $\mathrm{D}_{x}f \cdot x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/659958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Confusion about a way to compute a residue at a pole Suppose I have a function of the form $$f(x)=\frac{1}{(x-a)(x-b)^2(x-c)^3}$$
Clearly, I have a simple pole at $a$, and poles of order 2,3 at $b,c$, respectively.
By definition, the residue at $x=\alpha$ is the coeffecient of the term $(x-\alpha)^{-1}$ in the Laurent expansion of $f(x)$ centered at $x=\alpha$. From my experience, this residue at $x=\alpha$ is equivalent to the coefficient of the term $(x-\alpha)^{-1}$ in the partial fraction decomposition of $f(x)$. However, the partial fraction of $f(x)$ will not be a Laurent series since it has no center (I.e., there are denominators of the form $x-\beta$ with $\beta \not = \alpha$). Is there a rule that says that the coefficient of $(x-\alpha)^{-1}$ in the partial fraction decompositon is the residue at $x=\alpha$, even if the partial fraction decompoition is not a Laurent expansion?
|
Yes, there is such a rule. If you have a partial fraction decomposition
$$f(z) = h_\alpha(z) + h_\beta(z) + h_\gamma(z) + g(z),$$
where $h_w$ is the principal part of $f$ in $w$, and $g$ is the remaining holomorphic part of $f$, then you can develop $h_\beta,\, h_\gamma$, and $g$ in Taylor series about $\alpha$ to obtain the Laurent series of $f$ in the annulus $0 < \lvert z-\alpha < d$, where $d$ is the distance to the nearest other singularity ot the boundary of the domain of $f$, whatever is smaller.
The point is that everything but the fractions $\frac{c_k}{(z-\alpha)^k}$ in the partial fraction decomposition of $f$ is holomorphic in a neighbourhood of $\alpha$, so can be expanded into a power series around $\alpha$ and therefore doesn't influence the coefficient of $(z-\alpha)^{-1}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Help solve simultaneous inequality that has $\leq$ and $\geq$ in it
I only have problems determining the values of $\alpha$ and $\beta$, so I will only show the solution used to derive their values:
So, I know how to get the inequalities at ★ and † but I don't know how use them to deduce $\alpha$ and $\beta$.
All I know is $\alpha \geq -1$ and I got this from ★ because if $\alpha=-1$ then ★ is 0 and if $\alpha > -1$, ★ is greater than 0.
Please show me the next steps (I don't understand the reasoning the solution provided).
|
The reasoning of the solution is the following.
Firstly, the inequalities $(\star)$ and $(\dagger)$ represent the two principle minors of the Hessian matrix of $f$. It is known that $f$ is convex iff it's Hessian matrix is positive semidefinite and the Hessian matrix is positive semidefinite if it's principle minors are positive. So, that explains the derivation of the two inequalities (which you said in your post that you have already understood).
Secondly, the two inequalities are solved simultaneously, by discriminating cases for $α$. We note that the roots of the polynomial in $(\star)$ $$α(α+1)=0$$ are $α=-1$ and $α=0$. Thus the term $α(α+1)$ is $\ge0$ if $α \notin (-1,0)$. So there are two basic cases (1.) $α\ge 0$ and (2.) $α\le -1$ in which the $(\star)$ inequality is satisfied.
*
*First case $α \ge 0$. Then $(\star)$ is satisfied and for $(\dagger)$ we have that $$\underbrace{(α+1)}_{\text{positive}}(β-1)(1-α-β)$$ Therefore in order for the whole term to be positive (so that $(\dagger)$ is satisfied) we need that $$(β-1)(1-α-β)\ge 0$$ The roots of this polynomial are $β_1=1$ and $β_2=1-α$, where the second, due to the selection of $α$, is less or equal than $1$ ($α\ge 0 \implies 1-a \le 1$). Obviously this polynomial (in $β$) is nonegative when $$β \in [1, 1-α]$$ (just let $β \to \pm \infty$ to see what happens and then take in account that in the roots the sign of the terms changes) therefore there is one admissible region for $β$ which is $$ 1 \le β \le 1-α$$ Combining the above results we have determined one region ("Region $A$" in the graph) where $f(x,y)$ is convex, in particular $$\tag{Region A} α\ge 0 \qquad \text{ and } \qquad 1 \le β \le 1-α$$ (Note that the name of the Region is selected in order to fit with the given graph). Similarly we will proceed with the case $α \le -1$ to determine the remaining regions where $f$ is convex.
*Second case $α \le -1$. Then $(\star)$ is satisfied and for $(\dagger)$ we have that $$\underbrace{(α+1)}_{\text{negative}}(β-1)(1-α-β)$$ Therefore in order for the whole term to be positive (so that $(\dagger)$ is satisfied) we need that $$(β-1)(1-α-β)\le 0$$ The roots of this polynomial are $β_1=1$ and $β_2=1-α$, where the second, due to the selection of $α$, is equal or bigger than $2$ ($α\le-1 \implies 1-a \ge 2$). Obviously this polynomial (in $β$) is negative when $$β \notin (1, 1-α)$$ therefore there are two admissible regions for $β$ which are $$β\le 1 \qquad $$ and $$β\ge 1-α$$ Combining the above results we have determined two regions ("Regions $B$ and $C$" in the graph)where $f(x,y)$ is convex, in particular $$\tag{Region C} α\le-1 \qquad \text{ and } \qquad β \le 1 \phantom{-α1}$$ and $$\tag{Region B} α\le-1 \qquad \text{ and } \qquad β \ge 1-α$$ which completes the explanation of the reasoning in the above solution.
To sum up, the author of the solution, initially determined the principal minors of the Hessian matrix of $f$ and thus came up with two inequalities $(\star)$ and $(\dagger)$ that depend on the values of the unknown parameters $α$ and $β$. Afterwards he started with the simpler equation of the two (that is $(\star)$ which depends only on $α$ and can thus be easily solved) and discriminated cases for the one parameter $α$. For each case, he plugged the value(s) of $α$ in the second inequality $(\dagger)$ and came up with the respective values of the other parameter $β$. The admissible results formed in total $3$ Regions (depending of course on the values of $α$ and $β$) which are depicted in the given graph with the names $A$, $B$ and $C$ as it was required.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Triple of powers is subvariety? Consider the set $B=\{(t^2,t^3,t^4)\mid t\in \mathbb{C}\}$. Is it a subvariety of $\mathbb{C}^3$? That is, is it the set of common zeros of some (finite number of) polynomials?
I'm thinking about $y^2-x^3$ and $z-x^2$. But suppose $x=t^2$, then we're allowing $y=\pm t^3$. How to get rid of $-t^3$?
|
That $x = t^2$ allows $y = \pm t^3$ is no problem, because $(t^2, -t^3, t^4)$ is also in $B$ (just replace $t$ by $-t$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Open and closed maps: What good for? I'm wondering what open mappings are actually good for (except for inverse becomes continuous)???
My irritation came since, people stress that an open mapping not necessarily preserves closed sets (well, sure, I mean closed maps are some totally different subject since they don't describe neigbborhoods).
I cannot imagine any other purpose despite quotient maps or homeomorphisms but thats again about some continuity issues; maybe you know some problem where this becomes really important...
|
Open and closed maps become useful when combined with continuity!
Open/Closed Maps:
For a continuous and open/closed map we have:
If it is injective then it is an embedding.
If it is surjective then it is a quotient map.
If it is bijective then it is a homeomorphism.
Note: Neither embeddings nor quotient maps are necessarily open/closed
but homeomorphisms are necessarily open/closed.
Closed Map Lemma:
Continuous maps from compact spaces to Hausdorff spaces are closed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Integration of $\sin(\frac{1}{x})$ How to find the value of the integral?
$$\begin{align}
(1)&&\int_{0}^{1}\sin\left(\frac{1}{x}\right)dx\\\\
(2)&&\int_{0}^{1}\cos\left(\frac{1}{x}\right)dx
\end{align}$$
|
The antiderivatives involve the sine of cosine integral functions. From their definition, the antiderivatives are respectively $$x \sin \left(\frac{1}{x}\right)-\text{Ci}\left(\frac{1}{x}\right)$$ and $$\text{Si}\left(\frac{1}{x}\right)+x \cos \left(\frac{1}{x}\right)$$ So, the integrals are $$\sin (1)-\text{Ci}(1)$$ and $$\text{Si}(1)-\frac{\pi }{2}+\cos (1)$$ and their numerical values are respectively $0.504067$ and $-0.084411$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Exercise about linear operator For $X$ Banach, I have to show that if $T\in\mathfrak{L}(X)$ and $||T||_{\mathfrak{L}(X)}<1$ then exists $(I-T)^{-1}$ and
$$
(I-T)^{-1}=\sum_{n=0}^\infty T^n.
$$
For the existence of $(I-T)^{-1}$ I proved that $\ker(I-T)=\{0\}$. But for the second point of the proof I don't know how to proceed. I know that this is the geometric series and $||T||_{\mathfrak{L}(X)}<1$, but I'm not sure this is enough.
|
Look at this $$(I-T)\circ \sum_{j=0}^{n} T^j =I-T^{n+1} $$ and $$\left(\sum_{j=0}^{n} T^j \right)\circ (I-T)=I-T^{n+1} .$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Union of ascending ideals is an ideal Could you tell me what I'm doing wrong in proving this proposition?
If $I_1 \subset I_2 \subset ... \subset I_n \subset ...$ is an ascending chain of ideals in $R$, then $I := \bigcup _{n \in \mathbb{N}} I_n$ is also an ideal in $R$.
So we need to prove that the group $(I, +)$ is abelian and if $a \in I$, then $ar, ra \in R$ (but I suppose we could assume that $R$ is commutative).
Here are my attempts: $a \in I \Rightarrow \exists n\in \mathbb{N} : a \in I_n \Rightarrow ar, ra \in I_n \Rightarrow ar, ra \in I$. But I think I don't use the fact that the chain is ascending here. Could you point out my mistake and help me correct it?
Thank you a lot!
|
The following is in the book Algebra, by T. Hungerford.
Let $b \in I_i$ and $c \in I_j$. It follows that $i \leq j$ or $j \leq i$. Say $i \geq j$, then $I_j \subset I_i$. Therefore, $b, c \in I_i$. Since $I_i$ is an ideal, $b-c \in I_i \subset I$.
Similarly, if $r \in R$ and $b \in I$, $b \in I_i$ for some $i$. Therefore $rb \in I_i \subset I, br \in I_i \subset I$. Hence $I$ is an ideal in $R$.
The result follows from a theorem previously proved in the book:
A nonempty subset I of a ring R is a left [resp. right] ideal if, and only if, for all $a,b \in I$, $r \in R$:
$a, b \in I \Rightarrow a - b \in I $ and $a \in I, r \in R \Rightarrow ra \in I$ [resp. $ar \in I$]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
A problem of subgroup of a group If $G$ be a finite group of order $pq$ where $p$ & $q$ are prime numbers such that $p>q$. Prove that $G$ has at most one subgroup of order $p$.Hence prove that a group of order 6 has at most one subgroup of order 3.
My try is that i let $G$ has two subgroup $H$ & $K$ of same order $p$.Then we have to prove $H=K$
since $p>q$ & $o(G)=pq$ & clearly $p>\sqrt{pq}$
hence $o(H)>\sqrt{o(G)}$ & also $o(K)>\sqrt{o(G)}$
so there must some elements in intersection of both of the subgroups with identity i.e
$H\cap K$$\neq ${e}
now since o($H\cap K$) must divide $o(H)$ & $o(G)$.And from above conclusion $o(H\cap K)\neq 1$ & $p$ is prime so $o(H\cap K)$ must be $p$ hence
$H=K$
but i am not sure about the line which is 'bold'.
plz suggest me.
|
This is correct, though you have not explained that.
To do that, consider the map $H\times K\to G$ defined as $(h,k)\mapsto hk$. What can you say about it if $H\cap K=\{e\}$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Normal Subgroup Counterexample Im having trouble with the second part of this question,
Let $H$ be a normal subgroup of $G$ with $|G:H| = n$,
i) Prove $g^n \in H$ $\forall g \in G$ (which i have done)
ii) Give an example to show that this all false when $H$ is not normal in $G$.(which I am having trouble with showing)
Any suggestions?
|
Hint: If $H$ is not normal in $G$, then $G$ is necessarily nonabelian. Consider the smallest nonabelian group you know.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Constructing sequence of functions I have to construct a sequence of $\{f_i\}$, where $f_i$ belongs to $C[0,1]$ such that:
$$
d(f_i,0) = 1 \\
d(f_i,f_j)=1, \forall i,j \\
\text{Using Sup-Norm metric, i.e.} \mathbb{\|}f\mathbb{\|} = \sup_x{\mathbb{|}f_i (x)\mathbb{|}}
$$
Thanks.
|
Set
$$
f_n(x)=\left\{
\begin{array}{lll}
0 & \text{if}& x\le \frac{1}{n+1}, \\
n(n+1)x-n& \text{if} & x\left[\frac{1}{n+1},\frac{1}{n}\right], \\
1 & \text{if}& x\ge \frac{1}{n}.
\end{array}
\right.
$$
Clearly,
$$
\sup_{x\in[0,1]}|f_m(x)-f_n(x)|=1,
$$
if $m\ne n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/660904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Limits of trigonometric function I know my answer is correct, but are my steps correct?
$$
\begin{align}
& \lim_{t \to 0} \frac{\tan(2t)}{t}\\[8pt]
& = \lim_{t \to 0} \frac{1}{t} \tan(2t)\\[8pt]
& = \lim_{t \to 0} \frac{\sin(2t)}{t \cdot \cos(2t)} \cdot \frac{2}{2}\\[8pt]
& = \lim_{t \to 0} \frac{2 \cdot \sin(2t)}{2t \cdot \cos(2t)}\\[8pt]
& = \lim_{t \to 0} \left(\frac{\sin(2t)}{2t} \cdot \frac{2}{\cos(2t)}\right)\\[8pt]
& = 1 \cdot \frac{2}{1}\\[8pt]
& = 2
\end{align}
$$
|
Yes exactly is correct congratulation
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Decide whether the following properties of metric spaces are topological or not I have to decide whether the following properties of metric spaces are 'topological' or not,
(a) every continuous function on M is bounded
(b) $(\forall x \in M)(\forall y \in M) d(x; y) > 0$
(c) $(\forall x \in M)(\forall y \in M) d(x; y) > 1$
Where M is a metric space with metric d.
but looking through my notes.. i can't find any distinct definition of what 'topological' means. Would anyone be able to clear this up?
|
In short, a property of a metric space is topological if it does not (really) depend on the metric. That is, if you have a different metric that produces the same notions of convergence, continuous functions, etc. (or even if you only have the notions of convergence, contnuou functions, etc.) then the property does not change.
Note that $b$ and $c$ can be greatly simplified to a truth value that holds completely independently of the metric $d$; which makes at least these properies topological.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
If $mv < pv < 0$, is $v > 0?$ (1) $m < p$ (2) $m < 0$ Is there a way to simplify this equation and not rely on testing numbers via trial and error?
If $mv < pv < 0, is v > 0$?
(1) $m < p$
(2) $m < 0$
We have to figure out if statement 1 by itself is sufficient to answer this question, or if statement 2 is sufficient by itself to answer this question, or if both statements combined are necessary to answer this question, or if both statements independently are needed to answer this question, or if neither statement is sufficient.
So, I know we can't divide by v since we do not know if v is negative or positive. but can we divide by v and evaluate two cases?
Can we simply the stem of the question to:
m < p < 0 and
m > p > 0?
My answer is D, they are both independently sufficient.
For statement 1, if m < p and they are positive numbers, then there is no way for the inequality to work. Say m = 2 and p = 3, there is no value of v that works... so we can't test this case right? However, if m = -3 and p = -2, then V = 1 and V is > 0. Sufficient.
statement 2: I used the same numbers. Sufficient. I can't find a negative number for V that makes inequality work.
|
You can try the two cases separately.
Assume $v$ is negative and then divide through by $v$ giving
$$ m > p > 0\;\;\; (*)$$
This is contradicted by both statements (1) and (2). Hence by contradiction if (1) or (2) are true, then $v$ cannot be negative.
On the other hand if you assume $v$ is positive and then divide by $v$ you get
$$ m < p < 0\;\;\; (\dagger) $$
This is consistent with both statements (1) and (2). Hence we conclude that either (1) or (2) or both will show that $v>0$. So I agree with your answer D they are both independently sufficient.
Finally if we have no information then we can't say whether (*) or $(\dagger)$ holds, so we do need at least statement (1) or (2) to be true. (and also note that $v$ can't be zero)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Probability distribution of dot product? Sorry if this is a basic question. I don't know much about statistics and the closest thing I found involved unit vectors, a case I don't think is easily generalizable to this problem.
I have a reference vector $\mathbf V$ in some $\mathbb R^n$.
I have another vector in $\mathbb R^n$ of independent random variables, each with Gaussian distribution, each with the same standard deviation $\sigma$. Let's call the vector $\mathbf X$.
What is the probability distribution of $\mathbf V\cdot \mathbf X$?
Surely this is a famous problem with a widely known solution. Or is there an elegant approach to the problem?
|
V.X will be Normal, as linear combination of normal rvs is again normal. Check this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
solve equation in positive integers Can anybody help me with this equation?
Solve in $\mathbb{N}$:
$$3x^2 - 7y^2 +1=0$$
One solution is the pair $(3,2)$, and i think this is the only pair of positive integers that can be a solution. Any idea?
|
There are infinitely many solutions in positive integers. $7y^2-3x^2=1$ is an example of a "Pell equation", and there are standard methods for finding solutions to Pell equations.
For example, the fact that $(x,y)=(2,3)$ is a solution to $7y^2-3x^2=1$ is equivalent to noting that $(2\sqrt7+3\sqrt3)(2\sqrt7-3\sqrt3)=1$. The fundamental unit in $\Bbb Q(\sqrt{21})$ is $55+12\sqrt{21}$; in particular, $(55+12\sqrt{21})(55-12\sqrt{21})=1$. Consequently, if we calculate $(2\sqrt7+3\sqrt3)(55+12\sqrt{21}) = 218 \sqrt{7}+333 \sqrt{3}$, it follows that $(218 \sqrt{7}+333 \sqrt{3})(218 \sqrt{7}-333 \sqrt{3})=1$, or $7\cdot218^2 - 3\cdot333^2=1$.
You can get infinitely many solutions $(x_n,y_n)$ to $7y^2 - 3x^2=1$ by expanding $(2\sqrt7+3\sqrt3)(55+12\sqrt{21})^n = y_n \sqrt{7}+x_n \sqrt{3}$. Your solution corresponds to $n=0$, while the previous paragraph is $n=1$; for example, $n=2$ yields $x_2=36627$ and $y_2=23978$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Construct polyhedron from edge lengths I'm interested in the following problem: I am given the combinatorial structure (vertices, edges, faces) and edge lengths of a polyhedron. From this I'd like to infer the vertex positions.
Now, I know that the above information does not uniquely determine the polyhedron. For instance, position and rotation are deliberate. But if I also pre-specify the positions of two connected vertices, then position and rotation of the polyhedron are determined.
My question is: Is the polyhedron completely determined then? If not, which additional information is needed? And is there a known algorithm for constructing at least one of the possible polyhedra from this information?
The application is to construct a 3D model of a house given the side lengths. If there is no algorithm for general polyhedra, maybe there is one for a subset of all possible house shapes? I assume simple house shapes here, i.e. all walls and roof sides are just single faces. The faces can, however, have 5 or more vertices and the house shapes do not have to be convex.
Thanks a lot in advance!
Daniel
|
You might extend this into three dimensions as a prism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Solving for the determinant of a matrix? I know the rules of solving for the determinants of a matrix and I totally thought I was doing this right, but the answer output is incorrect.
I took the approach that I was adding the $a, b$ and $c$, so nothing would change. However, I notice I am multiplying the second row by scalars, so I just figured it would be $-4^3 \cdot (1)$ for $-64$. Can someone explain the intuition behind the solution please. Thanks for the help.
Never mind, I realized my mistake was I was taking the value of $-4^3$, however, the entire row was just being multiplied by the value of $-4$. Therefore, the final answer came out to be $-4$.
Given $$\det\begin{bmatrix}
a&b&c\\
d&e&f\\
g&h&i
\end{bmatrix}=1$$
Find the determinat
$$\det\begin{bmatrix}
a&b&c\\
-4d+a&-4e+b&-4f+c\\
g&h&i
\end{bmatrix}$$
|
If ${\rm det}\ A=1$, can you calculate
$$ {\rm det}\ (\left( \begin{matrix} 1&0&0\\ 1 &-4 &0 \\ 0&0&1 \end{matrix} \right) A) $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluating $\int_\gamma |z|\,|dz|$ I'm updating my question to reflect changes that have occurred.
I'm kind of stuck trying to figure out this integral:
$$\int_\gamma |z|\,|dz|=\int_{0}^{\pi}|t||1+t||dt|$$
where $\gamma(t)=te^{it}$ where $0 \leq t \leq \pi$.
Since I have reduced this to $t$, do I approach this for $-t$ and $t$?
|
I understand the question as asking for
$$I = \int_\gamma |z| |dz|$$
where the curve $\gamma$ is defined to be the locus of $z(t)$ from $t=0$ to $t=\pi$ with $z(t) = t e^{it}$.
Then $|z| = t$ and $|dz| = |e^{it} + ite^{it}| = |1+it| = \sqrt{1+t^2}$, so
$$\begin{eqnarray}
I & = & \int_0^\pi t\sqrt{1+t^2} dt \\
& = & \frac12 \int_{t=0}^{t=\pi} \sqrt{1+t^2}d(1+t^2) \\
& = & \frac13 \left[(1+t^2)^\frac32\right]_{t=0}^{t=\pi} \\
& = & \frac{(1+\pi^2)^\frac32 - 1}{3} \\
& = & 11.612\dots
\end{eqnarray}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to solve $\log \sqrt[3]{x} = \sqrt{\log x} ?$ How to solve $$\log \sqrt[3]{x} = \sqrt{\log x} $$
|
Using $$m\log a=\log(a^m)$$ when both logs are defined
$$\log\sqrt[3] x=\sqrt{\log x}\implies\frac13 \log x=\sqrt{\log x}$$
$$\sqrt{\log x}(\sqrt{\log x}-3)=0$$
$$\sqrt{\log x}=0\iff \log x=0\iff x=1$$
$$\sqrt{\log x}-3=0\iff \log x=9\iff x=10^9$$
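A small sanity check of both roots (assuming base-$10$ logarithms, as the final step does; the snippet is my own illustration):

```python
import math

# Verify that x = 1 and x = 10**9 satisfy log(cbrt(x)) = sqrt(log(x)).
for x in (1, 10**9):
    lhs = math.log10(x) / 3            # log of the cube root of x
    rhs = math.sqrt(math.log10(x))
    print(x, lhs, rhs)                 # 1 -> 0.0 0.0 ; 10**9 -> 3.0 3.0
```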
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\frac{1}{1*3}+\frac{1}{3*5}+\frac{1}{5*7}+...+\frac{1}{(2n-1)(2n+1)}=\frac{n}{2n+1}$ Trying to prove the above statement for $n \geq 1$. A hint given is that you should use $\frac{1}{(2k-1)(2k+1)}=\frac{1}{2}(\frac{1}{2k-1}-\frac{1}{2k+1})$. Using this, I think I reduced it to $\frac{1}{2}(\frac{1}{n^2}-(\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots+\frac{1}{2n+1}))$. Just not sure if it's correct, and what to do with the second half.
|
$$\frac{1}{(2k-1)(2k+1)}=\frac{1}{2}(\frac{1}{2k-1}-\frac{1}{2k+1})
$$This implies that the sum is
$$\frac{1}{2}\big\{\frac{1}{1}-\frac{1}{3}+\frac{1}{3}-\frac{1}{5}+\frac{1}{5}-...-\frac{1}{2n-1}+\frac{1}{2n-1}-\frac{1}{2n+1}\big\}$$ cancel terms and complete
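For what it's worth, here is a quick exact-arithmetic check of the closed form for small $n$ (an illustration only, not a substitute for the induction the exercise asks for):

```python
from fractions import Fraction

# Compare the partial sums with the claimed closed form n/(2n+1).
for n in range(1, 7):
    s = sum(Fraction(1, (2*k - 1) * (2*k + 1)) for k in range(1, n + 1))
    print(n, s, Fraction(n, 2*n + 1))   # the last two columns agree
```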
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
A closed ball in a metric space is a closed set
Prove that a closed ball in a metric space is a closed set
My attempt: Suppose $D(x_0, r)$ is a closed ball. We show that $X \setminus D $ is open. In other words, we need to find an open ball contained in $X \setminus D$.
Pick $$t \in X-D \implies d(t,x_0) > r \implies d(t,x_0) - r > 0 $$ Let $B(y, r_1)$ be an open ball, and pick $z \in B(y,r_1)$. Then, we must have $d(y,z) < r_1 $. We need to choose $r_1$ so that $d(z,x_0) > r$. Notice by the triangle inequality
$$ d(x_0,t) \leq d(x_0,z) + d(z,t) \implies d(z,x_0) \geq d(x_0,t) - d(z,t) > d(x_0,t) - r_1.$$
Notice, if we pick $r_1 = d(t,x_0)-r$ then we are done.
Is this correct?
|
No I do not think this is correct. The idea seems correct, but the execution was poor. You should specify that $y\in X\backslash D$. I am also not sure how you justify your last inequality. If $t$ is arbitrary in $X\backslash D$ we cannot conclude $d(z,t)<r_1$ and $-d(z,t)>-r_1.$
Here is how I would solve the problem:
Let $X$ be a metric space, $p\in X$, $r>0$.
Let $A=\{q\in X :d(p,q)\leq r\}$. Let $\{q_k\}\subseteq A$ be a sequence with $d(q_k,q)\rightarrow 0$.
We want to show $q\in A$.
By the triangle inequality we have $$d(q,q_k)+d(q_k,p)\ge d(q,p).$$ Since $\{q_k\}\subseteq A$ we have $d(q_k,p)\le r$, so $$d(q,q_k)+r\ge d(q,p).$$
Now take the limit as $k\rightarrow \infty$ of both sides and we have $$0+r\ge d(p,q)$$
$$\Rightarrow q\in A,$$ so $A$ is closed. $$QED$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 1
}
|
Uniform convergence of Taylor series I am trying to show:
If $f: B(0,R) \to \mathbb C$ is analytic then the Taylor series of $f$ at $0$ converges uniformly to $f$ in $B(0,r)$ for all $r\in (0,R)$
But I got stuck with my proof. Please can somebody help me? Here is what I have so far:
From Cauchy's integral formula I have
$$ f(z) = \sum_{n=0}^\infty z^n {1\over 2 \pi i}\int_C {f(w) \over w^{n+1}}dw$$
Therefore the Taylor series of $f$ at $0$ is
$T_0(z) = \sum_{n=0}^\infty z^n c_n$
where $c_n = {1\over 2 \pi i}\int_C {f(w) \over w^{n+1}}dw$.
My idea is to apply the Weierstrass $M$-test here for $f_n(z) = z^n c_n$. It holds that $|z^n| < r^n$ and $|c_n|\le {M \over r^n}$ and therefore $|f_n(z)|\le M$. The problem is how to show
$\sum_{n=0}^\infty M < \infty$? It is obviously not true (consider $R=1$). Where is my mistake?
|
We have to show that $\sum_{n=0}^\infty z^n c_n$ is uniformly convergent for $|z|<r$. The trick is that $C\subset B(0,R)$ can be any smooth closed curve around the origin. So take $\varrho$ such that $r<\varrho<R$ and let $C:=\{z\,|\,|z|=\varrho\}=\{\varrho e^{it}\,|\,0\leq t\leq 2\pi\}$. Then
$$
|c_n| =\left|{1\over 2 \pi i}\int_C {f(w) \over w^{n+1}}dw\right|\leq \frac{1}{2\pi}\int_C\frac{|f(w)|}{\varrho^{n+1}}|dw|\leq \max_{w\in C}|f(w)|\frac{1}{2\pi}\int_0^{2\pi}\frac{1}{\varrho^{n+1}}\varrho\,dt=\max_{w\in C}|f(w)|\frac{1}{\varrho^n}.
$$
Thus for every $|z|\le r$ and every $n$,
$$
|z^n c_n|\leq \max_{w\in C}|f(w)|\left(\frac{r}{\varrho}\right)^n=:M_n,
$$
and since $0<(r/\varrho)<1$,
$$
\sum_{n=0}^{\infty} M_n=\max_{w\in C}|f(w)|\sum_{n=0}^{\infty} \left(\frac{r}{\varrho}\right)^n=\max_{w\in C}|f(w)|\frac{1}{1-(r/\varrho)}<\infty,
$$
so the Weierstrass $M$-test gives uniform convergence on $B(0,r)$.
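To make the uniform convergence concrete, here is an illustrative numeric check with the specific function $f(z)=1/(1-z)$ on $B(0,1)$ (so $c_n=1$), showing the sup-error of the Taylor partial sums over $|z|=r$ shrinking geometrically; the choice of $f$ and $r$ is mine:

```python
import numpy as np

r = 0.8                                           # any fixed r < R = 1
z = r * np.exp(1j * np.linspace(0, 2 * np.pi, 400))
f = 1 / (1 - z)
for N in (10, 20, 40, 80):
    partial = sum(z**n for n in range(N + 1))     # Taylor partial sum, c_n = 1
    print(N, np.abs(f - partial).max())           # ~ r**(N+1) / (1 - r), -> 0
```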
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to solve the following limit? $$\lim_{n\to\infty}{\bigg(1-\cfrac{1}{2^2}\bigg)\bigg(1-\cfrac{1}{3^2}\bigg) \cdots \bigg(1-\cfrac{1}{n^2}\bigg)}$$
This simplifies to $\prod_{n=1}^{\infty}{\cfrac{n(n+2)}{(n+1)^2}}$.
Besides partial fractions and telescoping, how else can we solve this? Thank you!
|
\begin{align}
L&=\lim_{n\to\infty}{\bigg(1-\cfrac{1}{2^2}\bigg)\bigg(1-\cfrac{1}{3^2}\bigg) \cdots \bigg(1-\cfrac{1}{n^2}\bigg)}\\
&=\lim_{n\to\infty}{\bigg(1-\cfrac{1}{2}\bigg)\bigg(1-\cfrac{1}{3}\bigg) \cdots \bigg(1-\cfrac{1}{n}\bigg)\bigg(1+\cfrac{1}{2}\bigg)\bigg(1+\cfrac{1}{3}\bigg) \cdots \bigg(1+\cfrac{1}{n}\bigg)}\\
&=\lim_{n\to\infty}\frac1n\frac{n+1}2\\
&=\frac12
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/661879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Area of a triangle The following problem in elementary geometry was proposed to me. As a mathematical analyst, I confess that I can't solve it. And I have no idea of what I could do. Here it is: pick a triangle, and draw the three medians (i.e. the segments that join a vertex with the midpoint of the opposite side). Use the three segments to construct a second triangle, and prove that the area of this triangle is $3/4$ times the area of the original triangle.
Any help is welcome.
|
Use vectors. It will be really helpful. Define the vertices of triangle $ABC$ as $A=\vec 0$, $B=\vec b$ and $C=\vec c$.
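In case a numeric confirmation helps before carrying out the vector computation, here is an illustrative sketch (the helper name and the random triangle are my own choices): build a triangle from the three median lengths and compare areas via Heron's formula.

```python
import numpy as np

def heron(a, b, c):
    # Area of a triangle from its three side lengths.
    s = (a + b + c) / 2
    return np.sqrt(s * (s - a) * (s - b) * (s - c))

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 2)) * 10                    # a random triangle
u, v = B - A, C - A
area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])          # cross-product area
medians = [np.linalg.norm((B + C) / 2 - A),
           np.linalg.norm((A + C) / 2 - B),
           np.linalg.norm((A + B) / 2 - C)]
print(heron(*medians) / area)                        # ~ 0.75
```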
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 8,
"answer_id": 2
}
|
Why is the notation for differentiation like this? Consider the notation for denoting the differentiation of a function $f(x)$.
$$\frac{d[f(x)]}{dx}$$
I mean, this notation doesn't make any sense. $dx$ means a vanishingly small $x$, which can be understood, if we take $x$ to mean $\Delta x$ in a loose sense. But what does $df(x)$ denote?
I mean, ideally, shouldn't differentiation be denoted as
$$\frac{f(x_0 + d\Delta x) - f(x_0)}{d\Delta x}$$
Is this done just to simplify things, or is there another reason?
|
The $d[f(x)]$ indicates the vanishingly small change in $f(x)$ corresponding to the vanishingly small change in $x$ indicated by $dx$. Their ratio is the derivative.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Continuous function between metric spaces Let $(X , d)$ and $(Y , \rho)$ be two metric spaces. Let $S = \{x_1, x_2, x_3, ...\}$ be a countable dense subset of $X$. Let $f : X \rightarrow Y$ be a continuous function. Then prove that:
For a closed set $F$ in $Y$, $f(x)$ belongs to $F$ if and only if for each $n \in \mathbb{N}$, there is some $s \in S$ with $d(x, s) < \frac{1}{n}$ and $\rho(f(s), F) < \frac{1}{n}$.
Thanks in advance.
|
"$\Rightarrow$" Let $f(x) \in F$, i.e. $x \in f^{-1}(F)$. Let $n \in \mathbb{N}^+$. Since $f$ is continuous at $x$, we find some $\delta>0$ such that $d(x,y) < \delta $ implies $d(f(x),f(y))< \frac{1}{n}$. Since $S$ is dense, there is some $s \in S$ such that $d(x,s) < \min(\delta,\frac{1}{n})$. Hence, $d(x,s) < \frac{1}{n}$ and $d(A,f(s)) \leq d(f(x),f(s)) < \frac{1}{n}$.
"$\Leftarrow$" It suffices to prove that $f(x) \in \overline{F}$, i.e. that for every $n \in \mathbb{N}^+$ there is some $y \in F$ such that $d(f(x),y)<\frac{1}{n}$. Since $f$ is continuous, there is some $m \in \mathbb{N}^+$ such that $d(x,y) < \frac{1}{m}$ implies $d(f(x),f(y)) < \frac{1}{2n}$. We may assume $m \geq 2n$. By assumption there is some $s \in S$ such that $d(x,s) < \frac{1}{m}$ and $d(f(s),F) < \frac{1}{m}$. Choose $y \in F$ such that $d(f(s),y) < \frac{1}{m}$. Then
$d(f(x),y) \leq d(f(x),f(s)) + d(f(s),y) < \frac{1}{2n} + \frac{1}{2n} = \frac{1}{n}.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof that the number of diagonals of a polygon is $\frac{n(n-3)}{2} $ For $n \geq 3$ proof that the number of diagonals of a polygon is $\frac{n(n-3)}{2} $ using induction.
I don't know how to start this problem, can you give me a hint?
|
From any vertex we cannot draw a diagonal to its $2$ adjacent vertices or to the vertex itself.
So from each vertex of a polygon with $n$ sides we cannot draw $3$ of the possible segments, which means we can draw $n-3$ diagonals from it.
For $n$ vertices we can draw $n(n-3)$ diagonals. But each diagonal has two ends, so this counts each one twice; therefore we divide $n(n-3)$ by $2$. So finally the number of diagonals of a polygon with $n$ sides is $\frac{n(n-3)}2$.
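Not an induction proof, but a brute-force check of the count is easy (an illustrative snippet of my own):

```python
from itertools import combinations

# Count vertex pairs that are not sides of the n-gon.
for n in range(3, 10):
    sides = {frozenset({i, (i + 1) % n}) for i in range(n)}
    diagonals = [p for p in combinations(range(n), 2)
                 if frozenset(p) not in sides]
    print(n, len(diagonals), n * (n - 3) // 2)   # the two counts agree
```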
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
}
|
If $X$ and $Y$ are uniformly distributed on $(0,1)$, what is the distribution of $\max(X,Y)/\min(X,Y)$? Suppose that $X$ and $Y$ are chosen randomly and independently according to the uniform distribution from $(0,1)$. Define $$ Z=\frac{\max(X,Y)}{\min(X,Y)}.$$
Compute the probability distribution function of $Z$.
Can anyone give me some hints on how to proceed?
I can only note that $\mathbb{P}[Z\geq 1]=1$ and $$F_Z(t)= \mathbb{P}[Z \leq t]=\mathbb{P}[X\leq Y, Y\leq tX]+\mathbb{P}[Y \leq X, X\leq tY]$$
|
We find the cdf $F_Z(x)$ of the random variable $Z$. By symmetry, it is enough to find $\Pr(Z\le z|Y\gt X)$, that is,
$$\frac{\Pr((Z\le z) \cap (Y\gt X))}{\Pr(Y\gt X)}.$$ The denominator is $\frac{1}{2}$. so we concentrate on the numerator.
The rest of the argument is purely geometric, and will require a diagram. Draw the square with corners $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$.
Draw the diagonal $y=x$ joining $(0,0)$ and $(1,1)$.
For a fixed $z$ (make it $\gt 1$) draw the line $y=zx$.
We will need labels. Let $O=(0,0)$ and $B=(1,1)$, and let $Q=\left(\frac{1}{z},1\right)$ be the point where the line $y=zx$ meets the line $y=1$. The triangle $OQB$ is the region where $Y\ge X$ and $Y\le zX$. So the probability that $Y\gt X$ and $Z\le z$ is the area of this region.
The side $QB$ lies on the line $y=1$ and has length $1-\frac{1}{z}$, and the height of $\triangle OQB$ with respect to this side is $1$. Thus $\triangle OQB$ has area $\frac{1}{2}\left(1-\frac{1}{z}\right)$.
Divide by $\frac{1}{2}$ and simplify. We find that $\Pr(Z\le z)=1-\frac{1}{z}$.
Thus $F_Z(z)=1-\frac{1}{z}$ for $z\gt 1$ (and $F_Z(z)=0$ for $z\le 1$, since $Z\ge 1$ almost surely). If we want the density function, differentiate: $f_Z(z)=\frac{1}{z^2}$ for $z>1$.
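A Monte Carlo sanity check of this cdf (illustrative only, not part of the derivation):

```python
import numpy as np

# Compare the empirical cdf of max/min with the exact formula 1 - 1/z.
rng = np.random.default_rng(0)
x, y = rng.random(10**6), rng.random(10**6)
z_samples = np.maximum(x, y) / np.minimum(x, y)
for z in (1.5, 2.0, 5.0, 10.0):
    print(z, (z_samples <= z).mean(), 1 - 1 / z)   # empirical vs. exact
```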
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
About the roots of a quadratic equation Let $m_1$ and $m_2$ be the real and different roots of the quadratic equation $ax^2+bx+c=0$. Do you know some way to write $m_1^k + m_2^k$ in a simpler form (linear, for example) using just $a,b,c,m_1$ and $m_2$?
Thanks for the attention!
|
You can use Newton's identities to calculate any value of $m_1^k + m_2^k$, without actually calculating the roots, using just the coefficients of the polynomial. Actually it works for a polynomial of any degree. Let $s_k = m_1^k + m_2^k$ and let $m_1$ and $m_2$ be the roots of $ax^2 + bx + c=0$. Then we have:
$$as_1 + b = 0$$
$$as_2 + bs_1 + 2c= 0$$
$$as_3 + bs_2 + cs_1 = 0$$
$$as_4 + bs_3 + cs_2 = 0$$
$$ \ldots$$
You can find more about the general formula under Newton's identities.
This may not be exactly what you are after, assuming that you want just a linear expression, but it's a recursive one and should satisfy your requirements.
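A minimal sketch of the recursion in code (the function name and the seed values $s_0=2$, $s_1=-b/a$ are my own framing of the identities above):

```python
from fractions import Fraction

def power_sums(a, b, c, k_max):
    # s_0 = 2 (two roots), s_1 = -b/a; then a*s_k + b*s_{k-1} + c*s_{k-2} = 0.
    s = [Fraction(2), Fraction(-b, a)]
    for _ in range(2, k_max + 1):
        s.append(-(b * s[-1] + c * s[-2]) / a)
    return s

# x^2 - 3x + 2 has roots 1 and 2, so s_k = 1 + 2**k.
print(power_sums(1, -3, 2, 5))   # [2, 3, 5, 9, 17, 33]
```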
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What's the difference between $A<\infty$ and $A<\aleph_0$? In my topology class the teacher gave some examples of topologies, and I'm trying to prove that they really are topologies. If $X$ is a set then:
* $\mathcal C=\{A:\# (X-A)<\infty\}$ is a topology in $X$.
* $\mathcal C'=\{A:\# (X-A)<\aleph_0\}$ is a topology in $X$.
I've already proved that $\mathcal C$ is a topology, but I don't know what to do for $\mathcal C'$. What's the difference?
|
There must have been some kind of misunderstanding (either on your part or your instructor's). $\aleph_0$ is the cardinality of the natural numbers which [assuming the Axiom of Choice...] is the smallest infinite cardinal. Thus the second expression is just a more formal way of writing the first one.
Maybe what was intended was $\{ A : \# (X \setminus A) \leq \aleph_0\}$, namely the subsets which are "cocountable": their complement is finite or countably infinite.
Note that there is another mistake: when $X$ is infinite, the empty set does not belong to $\mathcal{C}$, but it should in order to give a topology. Similarly, assuming the above correction, when $X$ is uncountably infinite the empty set does not belong to $\mathcal{C}'$, but it should.
One can fix and even generalize the question as follows: let $\kappa \geq \aleph_0$ be any infinite cardinal. Then for any set $X$, the family $\mathcal{C}_{\kappa}$ consisting of the empty set together with all subsets $A$ with $\# (X \setminus A) \leq \kappa$ gives a topology on $X$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proving a summation involving binomial coefficients. I need to prove the following inductively:
$$\sum_{i=1}^{n} i{n \choose i} = n2^{n-1}$$
And for the life of me I can't figure out how to express the formula for $n+1$ in terms of the one for $n$.
|
**I was going to withdraw this answer as it does not use induction, but decided to leave it anyway and take the criticism. Thanks lab bhattacharjee for setting me right.**
Start with
$$
(x+1)^n = \sum _i {n \choose i} x^i
$$
and differentiate with respect to $x$
$$
n(x+1)^{n-1} = \sum _i {n \choose i} i x^{i-1}
$$
Now set $x=1$ to get your identity
$$
n ~2^{n-1} = \sum _i i~ {n \choose i}
$$
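A quick check of the identity for small $n$ (illustrative only):

```python
from math import comb

# Verify sum_{i=1}^{n} i * C(n, i) == n * 2**(n-1).
for n in range(1, 8):
    lhs = sum(i * comb(n, i) for i in range(1, n + 1))
    print(n, lhs, n * 2 ** (n - 1))   # equal for every n
```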
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/662921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Coordinates rotation by $120$ degree If I have points on a standard grid with coordinates, say:
$A_1=(1000,0)$
$A_2=(707,707)$
Is there an easy way to rotate these points by $\pm 120$ degrees about the origin $(0,0)$, keeping the same distance from it?
So for $A_1$, the result should be something like:
$B_1\approx(-500,866);\ C_1\approx(-500,-866)$
I can make this with triangles and calculate it but there should be some formula. I need a formula to use with software.
|
In general, the rotation matrix of angle $\theta$ is $$r_\theta=\left(\begin{array}{cc}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{array}\right).$$
In your case, $\theta=\pm120°$, so $\cos\theta=-1/2$ and $\sin\theta=\pm\frac{\sqrt 3}2$.
You just have to apply the matrix to the vector to get the image of the vector by the rotation.
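Since the question asks for a formula usable in software, here is a minimal implementation of the rotation matrix above (the function name is my own choice):

```python
import math

def rotate(x, y, degrees):
    # Rotate the point (x, y) about the origin by the given angle.
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

print(rotate(1000, 0, 120))    # (-500.0, 866.02...)
print(rotate(1000, 0, -120))   # (-500.0, -866.02...)
print(rotate(707, 707, 120))   # (-965.85..., 258.82...)
```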
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/663064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|