| Q (string) | A (string) | meta (dict) |
|---|---|---|
Determine the number of homomorphisms from $D_5$ to $\mathbb{R}^*$ and from $D_5$ to $S_4$ I'm trying to do these three exercises for my math study:
a) Determine the number of homomorphisms from $D_5$ to $\mathbb{R}^*$
b) Determine the number of homomorphisms from $D_5$ to $S_4$
c) Give an injective homomorphism from $D_5$ to $S_5$
I think I solved a):
The number of homomorphisms is 1. The group $D_5$ is generated by rotations and reflections. Every reflection has order 2, and the rotations have order 1 (the identity rotation) or order 5 (the remaining 4 rotations). So we have to send each reflection to $-1$ or $1$, because the order of $f(x)$ has to divide the order of $x$, and the identity rotation can only be sent to $1$. Combining that gives us only 1 homomorphism.
For b), I think the answer is 6! = 720, because every reflection in $D_5$ has to go to a 2-cycle in $S_4$, because the 2-cycles generate $S_4$. There are six of them, so the number of different homomorphisms is 6!.
Can you tell me if this is correct, and explain c) to me?
Thanks in advance!
|
For a) the answer is two because:
$$\operatorname{Hom}(D_5,\mathbb{R}^*)=\operatorname{Hom}(D_5/[D_5,D_5],\mathbb{R}^*)=\operatorname{Hom}(\mathbb{Z}/2\mathbb{Z},\mathbb{R}^*)$$
The cardinality of the last one is $2$.
Edit : another way without abelianization.
Take $r$ a rotation and $s$ a reflection generating $D_5$. Take $f\in \operatorname{Hom}(D_5,\mathbb{R}^*)$. You have $f(r)^{5}=1$, and the only real number satisfying this is $1$.
Now every element of $D_5$ can be written as $r^k$ or $r^ks$. The image of $r^k$ by $f$ must be $1$. The image of $r^ks$ by $f$ is $f(r^ks)=f(r)^kf(s)=f(s)$.
We then see that the morphism $f$ is determined by its value at $s$. This value must satisfy $f(s)^2=1$, so it is either $-1$ or $1$. This shows that there are at most $2$ such morphisms. To justify that there exists a non-trivial morphism from $D_5$ to $\mathbb{R}^*$, one can find a non-trivial morphism from $D_5$ to $\mathbb{Z}/2\mathbb{Z}=\{\pm 1\}$ via:
$$D_5\rightarrow D_5/\langle r\rangle$$
For b) you see that if $f:D_5\rightarrow S_4$ is a morphism then $f(r)=\mathrm{Id}$ (because $S_4$ has no element of order $5$). Such morphisms then have image in a subgroup of order at most $2$ of $S_4$. It follows easily that the number of such morphisms is the number of elements of order dividing $2$ in $S_4$; you have:
$$\frac{4\times 3}{2}=6\text{ transpositions, } 3\text{ double transpositions, and the identity.}$$
You then get $10$ morphisms from $D_5$ to $S_4$.
Now for c), the idea is to find a subgroup $H$ of $S_5$ isomorphic to $D_5$ (the isomorphism from $D_5$ to $H$, composed with the inclusion $H\subset S_5$, gives the injective homomorphism from $D_5$ to $S_5$).
It suffices (because $D_5=\mathbb{Z}/5\mathbb{Z}\rtimes_{-1} \mathbb{Z}/2\mathbb{Z}$) to exhibit an element $\tau\in S_5$ such that:
$$\tau^2=Id\text{ and } \tau(1,2,3,4,5)\tau^{-1}=(1,2,3,4,5)^{-1}=(1,5,4,3,2) $$
Then you can verify that :
$$\langle(1,2,3,4,5)\rangle\rtimes_{\mathrm{conj}} \langle(2,5)(3,4)\rangle$$
is a subgroup of $S_5$ isomorphic to $D_5$.
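For the verification, recall the conjugation rule for cycles: for any $\tau\in S_5$, $\tau(1,2,3,4,5)\tau^{-1}=(\tau(1),\tau(2),\tau(3),\tau(4),\tau(5))$. With $\tau=(2,5)(3,4)$ this gives
$$\tau(1,2,3,4,5)\tau^{-1}=(1,5,4,3,2)=(1,2,3,4,5)^{-1},$$
and $\tau^2=\mathrm{Id}$ since $\tau$ is a product of disjoint transpositions.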
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1212788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Meaning of correlation In probability books it is sometimes mentioned that correlation is a measure of linearity of a relationship between random variables.
This claim is supported by the observation that $\left| \rho(X,Y)\right|=1 \iff X=b+aY$.
But let us consider a less extreme case: $X,Y,Z$ are random variables such that $\rho(X,Y)=0.5$ and $\rho(Y,Z)=0.8$. Does it mean that the relationship between $X$ and $Y$ is "less" linear than between $Y$ and $Z$? How can the degree of linearity of a relationship be defined at all (especially given the fact that a relationship between random variables can be non-functional, so one cannot always use the second derivative)?
Edit:
I am talking about Pearson correlation here.
|
It depends on what measure of correlation you use.
The Pearson correlation (the most commonly used one) measures the linearity of a relationship between two random variables.
The Spearman rank correlation however, also measures nonlinear (monotonic) relationships. This is defined as the Pearson correlation coefficient between the ranked variables.
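A minimal numerical illustration of the contrast (a sketch assuming NumPy and SciPy are available; the data is made up): on a monotonic but nonlinear relationship such as $y=x^3$, the Spearman correlation is exactly $1$ while the Pearson correlation is strictly smaller.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(-3, 3, 101)
y = x ** 3  # monotonic, but not linear

print(pearsonr(x, y)[0])   # < 1: the relationship is not linear
print(spearmanr(x, y)[0])  # exactly 1.0: the relationship is monotonic
```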
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1212859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 1
}
|
Eigenvalues of the sum of a diagonal matrix and the outer product of two vectors Does an $n \times n$ matrix $M = D + u v^T$, with $D$ diagonal, $u$ and $v$ two given vectors, and $n$ a positive integer, have some interesting properties in terms of its spectrum (all eigenvalues having positive real part, for example)?
I'm interested in the stability of the associated system of first order linear differential equations.
|
If $D$ is a positive definite diagonal matrix and $u:=[u_1,\ldots,u_n]^T$ and $v:=[v_1,\ldots,v_n]^T$ are positive vectors, a very useful fact is that $M:=D+uv^T$ is symmetrizable. That is, if
$$\tag{1}
S:=\mathrm{diag}\left(\sqrt{\frac{u_i}{v_i}}\right)_{i=1}^n,
$$
then
$$
S^{-1}MS=D+ww^T, \quad w:=S^{-1}u=Sv,
$$
is symmetric. Hence all eigenvalues of $M$ are real (and the associated eigenvectors are real as well).
Now since $ww^T$ is semidefinite and $D$ is positive definite, $D+ww^T$ is positive definite and the eigenvalues of $D+ww^T$ (and hence the eigenvalues of $M$) are positive.
Note that the assumption on the positivity of $u$ and $v$ can be relaxed. For each $i=1,\ldots,n$, we might require either that $u_i$ and $v_i$ have the same sign (in this case, (1) is well-defined) or they are both zero (in this case, the corresponding diagonal entry of $S$ can be arbitrary nonzero).
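A quick numerical sanity check of the symmetrization argument (a sketch assuming NumPy; the dimension and the distributions are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
D = np.diag(rng.uniform(0.5, 2.0, n))   # positive definite diagonal
u = rng.uniform(0.1, 1.0, n)            # positive vector
v = rng.uniform(0.1, 1.0, n)            # positive vector

M = D + np.outer(u, v)
S = np.diag(np.sqrt(u / v))             # the symmetrizer from (1)
sym = np.linalg.inv(S) @ M @ S          # should equal D + w w^T

print(np.allclose(sym, sym.T))          # True: S^{-1} M S is symmetric
print(np.linalg.eigvals(M))             # real (up to roundoff) and positive
```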
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1212981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Number of elements of $\mathbb{Z}_p$ that satisfy a certain property Let $S(n,p)=\{a\in \mathbb{Z}_p : a^n\equiv1 \pmod p\}$ where $p\geq3$ is a prime number and $1\leq n\leq p$. I am interested in finding a general formula for the cardinality of $S(n,p)$. For example, I know that $|S(1,p)|=1$ and $|S(p-1,p)|=p-1$ (by Fermat's little theorem).
|
This has already been answered, but I'd like to make two points: as $0$ is ruled out, we are looking at a subset of the group $\mathbf{Z}_p^*$; and $x\mapsto x^n$ is a homomorphism of this group, so the solution set $S(n,p)$ you are looking for is the kernel of that homomorphism. Since $\mathbf{Z}_p^*$ is cyclic of order $p-1$, this kernel has exactly $\gcd(n,p-1)$ elements.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Take seven courses out of 20 with requirement
To fulfill the requirements for a certain degree, a student can choose to take any 7 out of a list of 20 courses, with the constraint that at least 1 of 7 courses must be a statistics course. Suppose that 5 of the 20 courses are statistics courses.
From Introduction to Probability, Blitzstein, Hwang
Why is ${5 \choose 1}{19 \choose 6}$ not the correct answer?
|
There are $\binom{20}{7}$ possibilities, but $\binom{15}{7}$ of them contain no statistics course, hence the answer is $\binom{20}{7}-\binom{15}{7}$. Your count ${5 \choose 1}{19 \choose 6}$ overcounts: a selection containing $k$ statistics courses is counted once for each of the $k$ ways to designate the "required" one.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Proving that $\int_0^\infty\frac{J_{2a}(2x)~J_{2b}(2x)}{x^{2n+1}}~dx~=~\frac12\cdot\frac{(a+b-n-1)!~(2n)!}{(n+a+b)!~(n+a-b)!~(n-a+b)!}$
How could we prove that $$\int_0^\infty\frac{J_{2a}(2x)~J_{2b}(2x)}{x^{2n+1}}~dx~=~\frac12\cdot\frac{(a+b-n-1)!~(2n)!}{(n+a+b)!~(n+a-b)!~(n-a+b)!}$$ for $a+b>n>-\dfrac12$ ?
Inspired by this question, I sought to find $($a justification for$)$ the closed form expressions of
the following two integrals: $~\displaystyle\int_0^\infty\frac{J_A(x)}{x^N}~dx~$ and $~\displaystyle\int_0^\infty\frac{J_A(x)~J_B(x)}{x^N}~dx.~$ For the former,
we have $~\displaystyle\int_0^\infty\frac{J_{2k+1}(2x)}{x^{2n}}~dx~=~\frac12\cdot\frac{(k-n)!}{(k+n)!}~,~$ for $k>n>\dfrac14~,~$ which I was ultimately
able to “justify” $($sort of$)$ in a highly unorthodox manner, using a certain trigonometric integral expression for the Bessel function, and then carelessly $($and shamelessly$)$ exchanging the order of integration. Unfortunately, even such underhanded tricks have failed me when attempting to approach the latter. Can anybody here help me ? Thank you !
|
To save typing out the full derivation, only the highlights of the result are listed for now.

* The integral in question is a reduction of the more general integral
\begin{align}
\int_{0}^{\infty} \frac{J_{\mu}(at) \, J_{\nu}(bt)}{t^{\lambda}} \, dt = \frac{b^{\nu} \Gamma\left( \frac{\mu + \nu - \lambda +1}{2}\right)}{2^{\lambda} \, a^{\nu - \lambda +1} \, \Gamma(\nu+1) \, \Gamma\left( \frac{\lambda + \mu - \nu +1}{2} \right) } \, {}_{2}F_{1}\left( \frac{\mu+\nu-\lambda+1}{2}, \frac{\nu-\lambda-\mu+1}{2}; \nu+1; \frac{b^{2}}{a^{2}} \right).
\end{align}
When $\mu \rightarrow 2 \mu$, $\nu \rightarrow 2 \nu$, $a=b=2$, and $\lambda = 2n+1$, the result is obtained.
* A method to obtain the result listed above is to consider the integral as
\begin{align}
\int_{0}^{\infty} \frac{J_{\mu}(at) \, J_{\nu}(bt)}{t^{\lambda}} \, dt = \lim_{s \rightarrow 0} \, \int_{0}^{\infty} e^{-s t} \, t^{-\lambda} \, J_{\mu}(at) \, J_{\nu}(bt) \, dt
\end{align}
* See G. N. Watson's book on Bessel functions, Section 13.4, p. 401.
Edit:
The product of two Bessel functions, as required by this problem, is
\begin{align}
J_{\mu}(x) \, J_{\nu}(x) = \sum_{n=0}^{\infty} \frac{(-1)^{n} \, \Gamma(2n+\mu+\nu+1) \, \left(\frac{x}{2}\right)^{2n+\mu+\nu}}{n! \, \Gamma(\mu+\nu+n+1) \, \Gamma(\mu+n+1) \, \Gamma(\nu+n+1)}
\end{align}
When $x = 2t$ it is seen that
\begin{align}
\int_{0}^{\infty} e^{-st} \, t^{- \lambda} \, J_{\mu}(2t) \, J_{\nu}(2t) \, dt = \sum_{n=0}^{\infty} \frac{(-1)^{n} \, \Gamma(2n+\mu+\nu+1) \, \Gamma(2n+\mu+\nu-\lambda+1) \, \left(\frac{1}{s}\right)^{2n+\mu+\nu-\lambda+1}}{n! \, \Gamma(\mu+\nu+n+1) \, \Gamma(\mu+n+1) \, \Gamma(\nu+n+1)}
\end{align}
Reducing this series and a possible transformation of the resulting hypergeometric series along with the limiting value for $s$ will yield the desired result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $\frac{1}{(1-x)^k}$ is a generating function for $\binom{n-k-1}{k-1}$ On my discrete math lecture there was a fact that:
$\frac{1}{(1-x)^k}$ is a generating function for $a_n=\binom{n-k-1}{k-1}$
I'm interested in a combinatorial proof of this fact. Is there any simple proof of this kind available somewhere? (I saw a proof using a $k \times n$ board where we're thinking of $n-k-1$ roads available to the point $(k,n)$ where we're choosing $k-1$ fields to go right, but I didn't understand it - is there a simpler proof somewhere?)
|
Suppose we have a sequence of values of length $k$ where the values are arbitrary non-negative integers. This has the combinatorial specification
$$\mathfrak{S}_{=k}\left(\sum_{q\ge 0} \mathcal{Z}^q\right).$$
This gives the generating function
$$\frac{1}{(1-z)^k}.$$
On the other hand all such sequences can be obtained by stars-and-bars combining $n$ elements with $k-1$ dividers, giving
$$\sum_{n\ge 0} {n+k-1\choose k-1} z^n,$$
which was to be shown.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Which of these two factorizations of $5$ in $\mathcal{O}_{\mathbb{Q}(\sqrt{29})}$ is more valid? $$5 = (-1) \left( \frac{3 - \sqrt{29}}{2} \right) \left( \frac{3 + \sqrt{29}}{2} \right)$$
or
$$5 = \left( \frac{7 - \sqrt{29}}{2} \right) \left( \frac{7 + \sqrt{29}}{2} \right)?$$
$\mathcal{O}_{\mathbb{Q}(\sqrt{29})}$ is supposed to be a unique factorization domain, so the two above factorizations are not distinct. But I can divide factors in both of them by units to obtain yet more seemingly different factorizations.
The presence of the unit $-1$ in the first factorization does not trouble me, since for example on this page http://userpages.umbc.edu/~rcampbel/Math413Spr05/Notes/QuadNumbFlds.html the factorization of $3$ in $\mathcal{O}_{\mathbb{Q}(\sqrt{13})}$ is given as $(-1)(7 - 2 \sqrt{13})(7 + 2 \sqrt{13})$.
I honestly find rings of complex numbers far easier to understand!
|
Note that
$$ \frac{7-\sqrt{29}}2 \cdot \frac{5+\sqrt{29}}2 = \frac{3+\sqrt{29}}2 $$
and
$$ \frac{5+\sqrt{29}}2 \cdot \frac{-5+\sqrt{29}}2 = 1 $$
So the two factorizations are equally good, just related by unit factors. (Remember that even in a UFD, factorizations are only unique up to associates.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 4
}
|
Limit as $x$ tends to zero of: $x/[\ln (x^2+2x+4) - \ln(x+4)]$ Without making use of L'Hôpital's Rule, solve:
$$\lim_{x\to 0} {x\over \ln (x^2+2x+4) - \ln(x+4)}$$
$x^2+2x+4=0$ has no real roots, which seems to be the gist of the issue.
I have attempted several variable changes but none seemed to work.
|
An approach without L'Hopital's rule.
$$\lim_{x\to 0} {x\over \ln (x^2+2x+4) - \ln(x+4)}=\lim_{x\to 0} {1\over {1\over x}\ln {x^2+2x+4\over x+4}}=\lim_{x\to 0} {1\over \ln \big ({x^2+2x+4\over x+4}\big)^{1\over x}}$$
but
$$\left({x^2+2x+4\over x+4}\right)^{1\over x}=\left({x^2+x+x+4\over x+4}\right)^{1\over x}=\left(1+{x^2+x\over x+4}\right)^{1\over x}=\left(1+{ x(x+1)\over x+4}\right)^{1\over x}=\left(1+\color{red}{x}\left({x+1\over x+4}\right)\right)^{1\over \color{red}{x}}$$ which tends to $e^{1\over 4}$ as $x$ goes to $0$. So the original limit is ${1\over \ln{e^{1\over 4}}}=\color{red}{4}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
What is the limit of this sequence involving logs of binomials? Define $$P_{n} = \frac{\log \binom{n}{0} + \log \binom{n}{1} + \dots +\log \binom{n}{n}}{n^{2}}$$ where $n = 1, 2, \dots$ and $\log$ is the natural log function. Find $$\lim_{n\to\infty} P_{n}$$ Using the property of the $\log$, I am thinking to find the product of all the binomial coefficients. But it gets really messy.
|
The limit is $1/2$.
Here is a proof by authority:
This is copied from here:
Prove that $\prod_{k=1}^{\infty} \big\{(1+\frac1{k})^{k+\frac1{2}}\big/e\big\} = \dfrac{e}{\sqrt{2\pi}}$
$$\prod_{k=0}^n \binom{n}{k} \sim
C^{-1}\frac{e^{n(n+2)/2}}{n^{(3n+2)/6}(2\pi)^{(2n+1)/4}}
\exp\big\{-\sum_{p\ge 1}\frac{B_{p+1}+B_{p+2}}{p(p+1)}\frac1{n^p}\big\}\text{ as }n \to \infty
$$
where
$$\begin{align}
C
&= \lim_{n \to \infty}
\frac1{n^{1/12}}
\prod_{k=1}^n \big\{k!\big/\sqrt{2\pi k}\big(\frac{k}{e}\big)^k\big\}\\
&\approx 1.04633506677...\\
\end{align}
$$
and the $\{B_p\}$ are the Bernoulli numbers, defined by
$$\sum_{p \ge 0} B_p\frac{x^p}{p!} = \frac{x}{e^x-1}.$$
If the $n^2$-th root is taken, the only term that does not go to $1$ is $e^{1/2}$ (from $e^{n(n+2)/2}$), so that
$$\left(\prod_{k=0}^n \binom{n}{k}\right)^{1/n^2} \sim e^{1/2},$$
or
$$\dfrac{\sum_{k=0}^n \ln \binom{n}{k}}{n^2} \sim \frac12.$$
There is probably a relatively simple proof. It might be easier to find knowing the answer.
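One candidate for such a simple proof (a sketch, not taken from the quoted source): Stirling's formula gives $\ln \binom{n}{k} = nH(k/n)+O(\log n)$ uniformly in $0\le k\le n$, where $H(x)=-x\ln x-(1-x)\ln(1-x)$, so the sum is a Riemann sum:
$$\frac{1}{n^2}\sum_{k=0}^n \ln\binom{n}{k}=\frac1n\sum_{k=0}^n H\!\left(\frac kn\right)+O\!\left(\frac{\log n}{n}\right)\longrightarrow\int_0^1 H(x)\,dx=2\int_0^1(-x\ln x)\,dx=\frac12.$$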
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1213843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
A unique solution Find the sum of all values of $k$ so that the system
$$y=|x+23|+|x-5|+|x-48|$$
$$y=2x+k$$
has exactly one solution in real numbers. If the system has one solution, then one of the three $x$'s should be 0. Then there are 3 solutions, when each of the moduli becomes 0. Where am I going wrong?
|
$$y-2x$$ is a piecewise linear function and is convex. The slope is nowhere zero (the slope values are $-5,-3,-1,1$), so it achieves an isolated minimum at its lowest vertex, and that minimum is the requested value of $k$. Evaluating $y-2x$ at the three vertices:
$$\begin{align}x=-23&\to145\\x=5&\to61\\x=48&\to\color{green}{18}\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
How does the minus come in Geometric Series I'm looking into the geometric series and can't understand how the 1 - .01 comes in below:
0.272727... = 0.27 + 0.0027 + 0.000027 + 0.00000027 + ...
= 0.27 + 0.27(.01) + 0.27(.01)^2 + 0.27(.01)^3 + ...
= 0.27 / (1-.01)
= 0.27 / 0.99
= 27/99
= 3/11
|
The question is why you would solve it using geometric progressions at all. When you use the basic method taught in school, you're essentially deriving the sum of an infinite GP without realizing it. Here's the easiest way:
x = 0.272727272727...
and 100x = 27.272727272727...
Thus, 99x = 27, so
x = 27/99.
This is basically like deriving the sum of an infinite GP, only instead of $a$ and $r$ we're using concrete numbers!
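For completeness, the $1-0.01$ in the quoted computation comes from the general geometric-series sum, which is derived by exactly the same shifting trick:
$$S=a+ar+ar^2+\cdots\ \Longrightarrow\ S-rS=a\ \Longrightarrow\ S=\frac{a}{1-r}\qquad(|r|<1),$$
here with $a=0.27$ and $r=0.01$, giving $S=\dfrac{0.27}{1-0.01}$.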
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Prove $\{n^2\}_{n=1}^{\infty}$ is not convergent Definition of convergent sequence:
There exists an $x$ such that for all $\epsilon > 0$, there exists an $N$ such that for all $n \geq N$, $d(x_n, x) < \epsilon$.
So the negation is that there does not exist an $x$ such that for all $\epsilon > 0$, there exists an $N$ such that for all $n \geq N$, $d(x_n, x) < \epsilon$.
The negation doesn't really help me here, since we weren't given a specific limit point of the sequence to disprove, and we have to prove this for all $x \in \mathbb{R}$. I have never done this before since we were always given some limit point along with the sequence. What is the way to prove this for all $x$?
|
The definition of convergence is
"There exists a $x\in\mathbb{R}$, such that for all $\epsilon>0$, there exists an $N$ such that for all $n\ge N$, $d(x_n,x)<\epsilon$"
The negation is
"There is no $x\in\mathbb{R}$, such that for all $\epsilon>0$, there exists an $N$ such that for all $n\ge N$, $d(x_n,x)<\epsilon$"
Since $n^2$ is monotonically increasing and unbounded (there is no maximum), for every $x\in\mathbb{R}$ and every $\epsilon>0$ there are arbitrarily large $n$ with $d(x_n,x)=|n^2-x|>\epsilon$, so no $x$ can serve as the limit.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Parametric representation of a plane cut of a sphere at y=5 The sphere is given by $x^2+y^2+z^2=36$
Parametric Form:
$$x=6\sin t\cos u$$
$$y=6\sin t\sin u$$
$$z=6\cos t$$
If the sphere is 'cut' at $z=5$ this problem is trivial. ($0<t<\arccos(5/6),0<u<2\pi$), but although the sphere cut at $y=5$ seems to be just as simple, using the standard angle definitions of the parametric form of a sphere, we now have 2 unknowns instead of 1. How do I go about this?
|
Choose the parametrization:
$$\begin{gathered}
x = 6\cos(t)\cos(u) \\
y = 6\sin(t) \\
z = 6\cos(t)\sin(u) \\
\end{gathered}$$
This rotates a half circle around the $y$-axis.
Then solving
$$y = 6\sin(t) = 5$$
is done as before: choose the right parametrization for the cut you need.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Complex Number use in Daily Life What are the different properties of complex numbers?
I have a doubt about the real-life use of complex numbers. Where and in what conditions do we use complex numbers in our day-to-day life?
My main focus is to know where they are used apart from electrical engineering - daily-life uses which can be understood by a layman.
|
Why are real numbers useful? Most people can think of many reasons: they allow people to encode information into symbols that almost anyone can understand. It's the same case with complex numbers. Most examples give highly specific and niche uses for complex numbers, but in reality they could be used anywhere. The simplest way to understand complex numbers is to realize that $i \cdot i=-1$, $-1 \cdot i=-i$, and $-i \cdot i=1$. You'll notice that multiplying something by $i$ repeatedly eventually gives back the number you started with. In addition, note that complex numbers are made from both real and imaginary components. Replace real with x and imaginary with y, and it becomes apparent that complex numbers can be plotted on x-y graphs. This also means that repeatedly multiplying by $i$ corresponds to rotation. So complex numbers allow us to encode more "complicated" information. I'll leave you with a question: what are some uses of x-y graphs and rotation?
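A small illustration of the rotation point (Python has complex numbers built in; the starting point is arbitrary):

```python
# Multiplying by 1j (Python's i) rotates a point 90 degrees counterclockwise;
# four multiplications return to the start, matching i*i*i*i = 1.
z = complex(3, 1)
for _ in range(4):
    print(z)   # (3+1j), (-1+3j), (-3-1j), (1-3j)
    z *= 1j
print(z)       # back to (3+1j)
```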
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Linear subspace closed under all special orthogonal matrices
Let $n\in \mathbb N$ and $E$ be an $n$ dimensional vector space over $\mathbb R$.
Let $F$ be a linear subspace of $E$ such that $\forall f\in SO(E), f(F)\subset F$
Prove that $F=\{0\}$ or $F=E$
Although the result is quite intuitive, I haven't been able to write up a proof. I've noticed that the orthogonal complement is closed under $SO(E)$ as well.
I tried a proof by contradiction. If $\dim F\in\{1,\ldots,n-1\}$, one may consider some well-chosen reflection...
|
To prove the result, it will suffice to show that if $F$ contains some line$~L_0$ through the origin (in other words, if $F\neq\{0\}$) then $F$ contains every line through the origin (in other words, $F=E$). Let $L_1$ be an arbitrary line through the origin; one needs to find $g\in SO(E)$ such that $g(L_0)=L_1$; then the property of being $SO(E)$-stable will force $F$ to contain $L_1$, because it contains $L_0$.
Your idea of using reflections works. After choosing spanning unit vectors $v_0,v_1$ of $L_0,L_1$ respectively (and assuming $v_0\neq v_1$) one can apply the orthogonal reflection in the hyperplane orthogonal to $v_1-v_0$ to send $v_0$ to $v_1$, and therefore $L_0$ to $L_1$. The reflection is in $O(E)$ but not in $SO(E)$, but this can be remedied by composing with the orthogonal reflection in the hyperplane orthogonal to $v_1$, which fixes $L_1$. The composition of both reflections is your $g$.
Another approach is to use orthonormal bases, and notably the fact that for a given line one can always choose an orthonormal basis starting with a vector from this line.
After choosing one reference ordered orthonormal basis$\def\B{\mathcal B}~\B$, the elements $g\in SO(E)$ are in bijection with the set of ordered orthonormal bases of$~E$ with the same orientation as$~\B$; the bijection is simply taking the image of$~\B$ under$~g$. Now choose $\B$ to start with a vector from$~L_0$, and choose another orthonormal basis to start with a vector from$~L_1$; one can arrange (by flipping a sign in the latter basis if necessary) that the two bases have the same orientation. The unique $g\in SO(E)$ sending$~\B$ to the second basis will have $g(L_0)=L_1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the notation (if any) for series probability inclusion? In statistics, what is the notation to use for an event $A$ in $B$ in $C$ in $D$, etc., where the series may continue for a large number of events? The following works for a few events:
$$A\cap B\cap C\cap D\cap\cdots$$
...However, this can become long and arduous for many events that might be considered. Is there any way to succinctly note a lengthy series like this (something like $\sum$ for a sum or $\prod$ for product series)?
The key here is in looking at multiplication laws, such as the following, and how to best note the series of multiplication steps for each $A_n$ in it:
$$p(A_1\cap \cdots \cap A_n)$$
I'd expect something like the following for "a" terms, but am not sure if this is correct (I don't believe the first term would work out here):
$$\prod_{n=1}^a p(A_1\mid A_1\cap A_n)$$
|
For $A_1\cap\cdots\cap A_n$ you can write $\bigcap_{k=1}^n A_k$. This is coded in MathJax and LaTeX as \bigcap, and in a "displayed" as opposed to "inline" context, it puts the subscripts and superscripts directly below and above the symbol, just as with $\sum$, thus:
$$
\bigcap_{k=1}^n A_k
$$
If you want the subscripts and superscripts to be formatted that way in an inline context, as $\displaystyle\bigcap_{k=1}^n A_k$, just put \displaystyle before it (that also affects the size).
You can also write
$$
T=\bigcap_{x\in S} A_x
$$
and that means that $a\in T$ if and only if for all $x\in S$, $a\in A_x$. The set $S$ need not be countably infinite; it can be uncountably infinite. For example, the intersection of all open intervals $(-\varepsilon,\varepsilon)$ for $\varepsilon>0$ is
$$
\bigcap_{\varepsilon>0} (-\varepsilon,\varepsilon) = \{0\}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1214753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that all the intervals in $\mathbb{R}$ are uncountable
Question:
Show that all the intervals in $\mathbb{R}$ are uncountable.
I have already proven that $\mathbb{R}$ is uncountable by using the following:
Suppose $\mathbb{R}$ is countable. Then every infinite subset of $\mathbb{R}$ is also countable. This contradicts the fact that $\mathbb{I} \subset \mathbb{R}$ is uncountable. Consequently, $\mathbb{R}$ must be uncountable.
However, how can I show that ALL the intervals in $\mathbb{R}$ are uncountable?
|
Hint: Consider $\phi : \mathbb R \to (-1,1)$ defined by $$\phi(x) = \frac{x}{1 +|x|}$$ and notice that $f: (-1,1) \to (a,b)$ defined by $$f(x) = \frac{1}{2}\bigg((b-a)x + a+ b\bigg)$$
is a bijection.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
Is this a valid step in a convergence proof? I'm asked to say what the following limit is, and then prove it using the definition of convergence.
$\lim_{n\rightarrow\infty}$$\dfrac{3n^2+1}{4n^2+n+2}$.
Is it valid to say that the sequence behaves like $\dfrac{3n^2}{4n^2}$ for large n?
|
Just to expand a bit on the above solution: just to the left of the first (*) we need $\frac{5}{16n}< \epsilon$, which means that we need $\frac{5}{16\epsilon} < n$, in case it wasn't already plainly obvious. Then setting $N > \frac{5}{16\epsilon}$ guarantees the result for all $n\geq N$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$\mathbb{Q}(\sqrt{2+\sqrt{2}})$ is Galois over $\mathbb{Q}$? I need to show that the extension $\mathbb{Q}(\sqrt{2+\sqrt{2}})$ is Galois over $\mathbb{Q}$, and compute its Galois group.
I am learning Galois theory by myself and got stuck in this exercise. I know the fundamental theorem of Galois theory. Any help would be useful
Thanks
|
Let $\alpha = \sqrt{2 + \sqrt{2}}$.
An extension is Galois if it is separable and normal. Both of these properties can be resolved by considering the minimal polynomial $p$ of $\alpha$ (i.e. $p(x) = 0$).
Now $(\alpha^2 - 2)^2 - 2 = 0$ so the minimal polynomial is $x^4 - 4x^2 + 2$.
The extension is Galois since this polynomial doesn't have repeated roots. That is easy to verify by computing the gcd of $p$ with its derivative $p'$.
We can show the extension is normal by showing that $p$ splits in the field $\mathbb Q(\alpha)$. The four roots of $p$ are $\pm \sqrt{2 \pm \sqrt{2}}$, so if we can construct $\alpha' = \sqrt{2 - \sqrt{2}}$ from $\alpha$ we would have $p(x) = (x-\alpha)(x+\alpha)(x-\alpha')(x+\alpha')$.
Observe that:
* $\alpha^2 - 2 = \sqrt{2}$
* $1/\alpha = \frac{\sqrt{2 - \sqrt{2}}}{\sqrt{2 + \sqrt{2}}\sqrt{2 - \sqrt{2}}} = \frac{\sqrt{2 - \sqrt{2}}}{\sqrt{2^2 - 2}} = \frac{\sqrt{2 - \sqrt{2}}}{\sqrt{2}}$
so $\alpha' = (\alpha^2 - 2)/\alpha$.
This proves that the extension is Galois.
Now to compute the Galois group. The extension can be taken in two degree-two steps: first adjoin $\sqrt{2}$ (a root of $x^2-2$) to $\mathbb Q$, then a root of $x^2-(2+\sqrt{2})$. $$[\mathbb Q(\sqrt{2+\sqrt{2}}) : \mathbb Q] = [\mathbb Q(\sqrt{2+\sqrt{2}}) : \mathbb Q(\sqrt{2})]\,[ \mathbb Q(\sqrt{2}) : \mathbb Q] = 4,$$ so by Galois theory the group too should have order 4: the group must be $C_4$ or $C_2 \times C_2$.
Next we should look at the automorphisms in detail to determine which group type this is. From Galois theory we know the group acts transitively on the roots, i.e. there is an automorphism $\sigma$ that maps $\alpha$ to any of the other roots.
Suppose $\sigma (\alpha) = \alpha'$; then $\sigma (-\alpha) = -\alpha'$, and using the relation defining $\alpha'$ one computes that $\sigma (\alpha') = -\alpha$. So $\sigma$ has order four, generating $C_4$.
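Spelling out that computation: since $\alpha'=(\alpha^2-2)/\alpha$ and $\sigma(\alpha)=\alpha'$,
$$\sigma(\alpha')=\frac{\sigma(\alpha)^2-2}{\sigma(\alpha)}=\frac{\alpha'^2-2}{\alpha'}=\frac{(2-\sqrt2)-2}{\alpha'}=\frac{-\sqrt2}{\alpha'}=-\alpha,$$
where the last step uses $\sqrt2/\alpha'=\sqrt2\,\sqrt{2+\sqrt2}/\sqrt{(2-\sqrt2)(2+\sqrt2)}=\alpha$. Hence $\sigma:\alpha\mapsto\alpha'\mapsto-\alpha\mapsto-\alpha'\mapsto\alpha$ indeed has order four.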
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Minimize Product of Sums of Squared Distances The Question
Given two sets of vectors $S_1$ and $S_2$,we want to find a unit vector $s$ such that
$$\{\sum_{u\in S_1}(\|u\|^2-\langle u, s \rangle^2)\}
\cdot
\{\sum_{v\in S_2}(\|v\|^2 - \langle v, s \rangle^2)\}$$ is minimized,
where $\langle *, * \rangle$ denotes the inner product of two vectors, $\|*\|$ the length of a vector.
It is easy to notice that $\|u\|^2 - \langle u, s \rangle^2$ is just the squared distance from $u$ to the line specified by $s$. Consequently, for
$$\arg \min_s\sum_{u\in S_1}(\|u\|^2 - \langle u, s \rangle^2),$$ we can get the answer by doing an SVD. The same holds for the set $S_2$.
However, when the optimization objective becomes a form like the one in my question, how to get the answer? Rather than a closed-form solution, I'm more interested in the time complexity of solving the problem.
As the post below has figured out, the problem can be converted to a tensor problem on the unit sphere.
EDIT
As @user1551's answer points out, the question can be transformed to a form of product of two quadratic forms. Are there any materials covering this topic?
|
I don't think there is any (semi)closed-form solution, but you can simplify the problem quite a bit (when the dimension of the vector space is small). Presumably all the vectors here are real. Let $A=\sum_{u\in S_1}uu^T$ and $B=\sum_{v\in S_2}vv^T$. Since
$$
\|u\|^2-\langle u, s \rangle^2 = u^Tu - (s^Tu)(u^Ts)
= \operatorname{trace}(uu^T)-s^Tuu^Ts,
$$
it follows that
$$
\sum_{u\in S_1}\left(\|u\|^2-\langle u, s \rangle^2\right)
= \operatorname{trace}(A)-s^TAs
= s^T\left(\operatorname{trace}(A)I-A\right)s.
$$
Let $P=\operatorname{trace}(A)I-A$ and $Q=\operatorname{trace}(B)I-B$. These two matrices are positive semidefinite. Now the problem boils down to minimising $(s^TPs)(s^TQs)$ on the unit sphere.
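For practical purposes, a minimal numerical sketch (assuming NumPy/SciPy; the data and the random restarts are mine, and nothing here guarantees a global minimum, since a product of two convex quadratics need not be convex on the sphere):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 3
S1 = rng.normal(size=(5, n))        # rows are the vectors u
S2 = rng.normal(size=(4, n))        # rows are the vectors v

A = S1.T @ S1                       # sum of u u^T
B = S2.T @ S2                       # sum of v v^T
P = np.trace(A) * np.eye(n) - A
Q = np.trace(B) * np.eye(n) - B

def objective(s):
    s = s / np.linalg.norm(s)       # enforce the unit-sphere constraint
    return (s @ P @ s) * (s @ Q @ s)

best = min((minimize(objective, rng.normal(size=n)) for _ in range(20)),
           key=lambda r: r.fun)
s_opt = best.x / np.linalg.norm(best.x)
print(best.fun, s_opt)
```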
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Are all functions that have an inverse bijective functions? To have an inverse, a function must be injective, i.e. one-to-one.
Now, I believe the function must also be surjective, i.e. onto, to have an inverse, since if it is not surjective, the domain of the function's inverse will have some elements left out which are not mapped to any element in the range of the inverse.
So is it true that all functions that have an inverse must be bijective?
Thank you.
|
Yes. Think about the definition of a continuous mapping.
Let $f:X\to Y$ be a function between two spaces. Topologically, $f$ is continuous if $f^{-1}(G)$ is open in $X$ whenever $G$ is open in $Y$. In basic terms, this means that if $f:X\to Y$ is continuous, then $f^{-1}:Y\to X$ also has to be continuous, putting $X$ and $Y$ into one-to-one correspondence.
Thus, all functions that have an inverse must be bijective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 7,
"answer_id": 5
}
|
Nonlinear discontinuous convex function Let $X$ be a normed vector space. Construct nonlinear discontinuous convex function $f:X \rightarrow \mathbb{R}$.
I tried something with $\frac{-1}{\|x\|}$ but had no success.
|
Such an example does not always exist. For example, if $\dim X<\infty$, then $X$ is isomorphic to $\mathbb{R}^n$, and every convex function $f:\mathbb{R}^n\to \mathbb{R}$ is continuous.
For $\dim X=\infty$, recall that every infinite-dimensional normed space contains a discontinuous linear functional, say $f$. Let $g=f+c$, with constant $c\not=0$.
Then $g$ is convex and discontinuous but not linear, since $g(0)\not=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Determinant of matrix times a constant. Prove that $\det(kA) = k^n \det(A)$ for an ($n \times n$) matrix.
I have tried looking at this a couple of ways but can't figure out where to start. It's confusing to me since the equation for a determinant is such a weird summation.
|
To elaborate on the above answer, the formula for the determinant is
$$\operatorname{det}(A)=\sum_{\sigma\in S_n}\operatorname{sign}(\sigma)\prod_{i=1}^na_{i,\sigma_i},$$
so, since each of the $n$ factors in the product picks up a $k$,
$$\operatorname{det}(kA)=\sum_{\sigma\in S_n}\operatorname{sign}(\sigma)\prod_{i=1}^nk\,a_{i,\sigma_i}=k^n\sum_{\sigma\in S_n}\operatorname{sign}(\sigma)\prod_{i=1}^na_{i,\sigma_i}=k^n\operatorname{det}(A).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 3
}
|
Problem when $x=\cos (a) +i\sin(a),\ y=\cos (b) +i\sin(b),\ z=\cos (c) +i\sin(c),\ x+y+z=0$ Problem: If $$x=\cos (a) +i\sin(a),\ y=\cos (b) +i\sin(b),\ z=\cos (c) +i\sin(c),\ x+y+z=0$$ then which of the following can be true:
1) $\cos 3a + \cos 3b + \cos 3c = 3 \cos (a+b+c)$
2) $1+\cos (a-b) + \cos (b-c) =0$
3) $\cos 2a + \cos 2b +\cos 2c =\sin 2a +\sin 2b +\sin 2c=0$
4) $\cos (a+b)+\cos(b+c)+\cos(c+a)=0$
Try: I tried taking $x=e^{ia},y=e^{ib},z=e^{ic}$ and then I tried expressing each option in Euler form.
FOR EXAMPLE:
1) $-3/2 e^{-i a-i b-i c}-3/2 e^{i a+i b+i c}+1/2 e^{-3 i a}+1/2 e^{3 i a}+1/2 e^{-3 i b}+1/2 e^{3 i b}+1/2 e^{-3 i c}+1/2 e^{3 i c}$
2) $1/2 e^{i a-i b}+1/2 e^{i b-i a}+1/2 e^{i b-i c}+1/2 e^{i c-i b}+1$
3) $1/2 e^{-2 i a}+1/2 e^{2 i a}+1/2 e^{-2 i b}+1/2 e^{2 i b}+1/2 e^{-2 i c}+1/2 e^{2 i c}$
4) $1/2 e^{-i a-i b}+1/2 e^{i a+i b}+1/2 e^{-i a-i c}+1/2 e^{i a+i c}+1/2 e^{-i b-i c}+1/2 e^{i b+i c}$
Now after all this I'm stuck!! Please help!! How should I proceed?
|
HINT:
For $(1)$,
using the identity $x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)$ (cf. If $a,b,c \in R$ are distinct, then $-a^3-b^3-c^3+3abc \neq 0$.) together with $x+y+z=0$,
we have $x^3+y^3+z^3=3xyz$
Now $x^3=e^{i(3a)}=\cos3a+i\sin3a$
and $xy=e^{i(a+b)}=\cos(a+b)+i\sin(a+b),xyz=\cdots$
For $(3),(4)$
$x+y+z=0\implies\cos a+\cos b+\cos c=\sin a+\sin b+\sin c=0$
So, $x^{-1}+y^{-1}+z^{-1}=?$
Now $x^2+y^2+z^2=(x+y+z)^2-2(xy+yz+zx)=(x+y+z)^2-2xyz(x^{-1}+y^{-1}+z^{-1})=?$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the name of a number ending with zero's? I am currently writing a very specific graph of a function implementation. The graph can have min/max values e.g. $134$ and $1876$ respectively.
I'm calculating "nice" numbers. For min/max they are $100$ and $1900$ respectively.
Is there a commonly used name for such a number?
|
They're exact multiples of powers of ten; they're also rounded to fewer significant figures. The benefit is rapid human interpretation of the annotation.
I haven't heard of an established term for such a pair, but "reduced precision bracket" might be a good descriptor.
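As a sketch of how such values might be computed (the rounding rule and the function name here are hypothetical; the number of significant figures to keep is a free parameter):

```python
import math

def round_to_sig(x, sig, up):
    """Round x to `sig` significant figures, toward +inf if up, else -inf."""
    if x == 0:
        return 0
    mag = 10 ** (math.floor(math.log10(abs(x))) - sig + 1)
    f = math.ceil(x / mag) if up else math.floor(x / mag)
    return f * mag

print(round_to_sig(134, 1, up=False))   # 100
print(round_to_sig(1876, 2, up=True))   # 1900
```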
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1215923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Dimension of a subspace smaller than dimension of intersection Suppose I have a finite-dimensional vector space $V$, and $U_1, U_2$ are subspaces of $V$, such that $U_1\nsubseteq U_2$.
Is it possible that $\dim{U_1}\leq\dim{U_1 \cap U_2}$?
|
Fact: If $X$ is a subspace of the finite dimensional vector space $Y,$ and $\dim X = \dim Y,$ then $X=Y.$ Hence if $\dim U_1 \le \dim U_1\cap U_2,$ then $\dim U_1 = \dim U_1\cap U_2,$ hence $U_1\cap U_2$ equals $U_1.$ This implies $U_1 \subset U_2,$ contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find the Taylor series expansion (centered at $z_{0}=0$) of the function $f(z)=\sin(z^3)$. Find the Taylor series expansion (centered at $z_{0}=0$) of the function $f(z)=\sin(z^3)$.
Use this expansion: a) to find $f^{(69)}(0)$; b) to compute the integral, traversed once in the positive (with respect to the disk) direction:
$\oint\limits_{|z|=3}{\frac{f(z)}{z^{70}}}dz$
I know that if a function $f$ is analytic in a disk centered at $z_0$ of radius $r$, then it can be represented as the sum of its Taylor series. The disk here is centered at the origin and has a radius of 3.
Should I be using the formula for the Taylor coefficients at $z_0$: $a_n=\frac{f^{(n)}(z_0)}{n!}$?
I am looking for help with coming up with the solution to this problem. I feel that I have the correct pieces but am not sure how to connect them.
|
Use the Taylor series for sine (just plug in $z^3$ wherever a $z$ appears in the original series). Notice what this "does" to the coefficients: the coefficient of $z$ in the original series is now the coefficient of $z^3$, the coefficient of $z^2$ in the original now belongs to $z^6$, etc.
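Explicitly, substituting $z^3$ into $\sin w=\sum_{n\ge0}\frac{(-1)^n}{(2n+1)!}w^{2n+1}$ gives
$$\sin(z^3)=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}\,z^{6n+3},$$
so only the powers $z^3, z^9, z^{15},\dots$ appear, and $f^{(k)}(0)=k!\,a_k$ picks out exactly those coefficients.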
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find $z$ s.t. $\frac{1+z}{z-1}$ is real I must find all $z$ s.t. $\dfrac{1+z}{z-1}$ is real.
So, $\dfrac{1+z}{1-z}$ is real when the Imaginary part is $0$.
I simplified the fraction to $$-1 - \dfrac{2}{a+ib-1}$$
but for what $a,b$ is the imaginary part of the RHS $0$?
|
If $z+1 = r(z-1)$, then the complex numbers $0, z-1, z+1$ lie on a line. But the line through $z-1$ and $z+1$ can only contain $0$ if $z$ is real.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
$V=\mathcal{Z}(xy-z) \cong \mathbb{A}^2$. This question is typically seen in the beginning of a commutative algebra course or algebraic geometry course.
Let $V = \mathcal{Z}(xy-z) \subset \mathbb{A}^3$. Here $\mathcal{Z}$ is the zero locus. Prove that $V$ is isomorphic to $\mathbb{A}^2$ as algebraic sets and provide an explicit isomorphism $\phi$ and associated $k$-algebra isomorphism $\tilde{\phi}$ from $k[V]$ to $k[\mathbb{A}^2]$ along with their inverses. Is $V = \mathcal{Z}(xy-z^2)$ isomorphic to $\mathbb{A}^2$?
Here is what I have so far: let $V = \mathcal{Z}(xy-z) \subset \mathbb{A}^3$. Consider the map $\pi(x,y,z) = z$, where $\pi$ is a family of varieties, i.e. a surjective morphism. This map gives the hyperbola family $\{\mathcal{Z}(xy-z) \subset \mathbb{A}^2\}_{z \in \mathbb{A}^1}$ and is injective. Does this provide an explicit isomorphism $\phi$? I am not sure how to proceed for the coordinate rings and how to define the inverses.
Thank you!
|
Hint: Use the projection
\begin{align*}
\pi : V &\to \mathbb{A}^2\\
(x,y,z) &\mapsto (x,y) \, .
\end{align*}
Can you find its inverse?
To find the induced maps on the coordinate rings, recall that these maps are just given by precomposition, e.g., for $\pi : V \to \mathbb{A}^2$,
\begin{align*}
\widetilde{\pi} : k[\mathbb{A}^2] &\to k[V]\\
F &\mapsto F \circ \pi \, .
\end{align*}
To address the question in the comments, no, $V = \mathcal Z(xy-z^2)$ and $\mathbb A^2$ are not isomorphic. This can be seen on the level of coordinate rings. Note that the coordinate ring of $\mathbb{A}^2$ is $k[x,y]$ which is a UFD; we claim that the coordinate ring $k[V] = k[x,y,z]/(xy - z^2)$ is not a UFD. The relation $xy = z^2$ gives two different factorizations, since $x,y,z$ are non-associate irreducibles. (They all have degree $1$.) This thread has further explanation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sequence Word Problem General Question Whenever I do sequence word problems in my math homework I often end up accidentally adding 1 more term than needed, or subtracting 1 more term. Word problems seem ambiguous to me in wording a lot of the time and I don't know whether to do $(n-1)d$, or $nd$. Can anyone give some general guidelines as to when to use $(n-1)d$, or $nd$? I feel like I understand sequences well but once word problems come into play no matter how much practice I put in, I add or subtract a term extra.
Thank you.
|
An answer specific to the example in the comments:
To find the difference between each row we can use the information about rows 3 and 12. We know that row three has 41 boxes, and row 12 has 23 boxes. Since we have an arithmetic progression, this means that
$$
41+(12-3)n=23,
$$
or $n=-2$. Thus when you go up one row, you lose $2$ boxes. Note that we used $12-3=9$, because in going from row $3$ to row $12$ we moved upwards nine times. Even though there are 10 rows in this interval, we wish to keep track of the number of times we change rows. This I hope gets to the heart of your question. So now that we know $n=-2$, lets find the number of boxes in row $1$. Note that in going from row $3$ to row $1$, we move downwards twice. Thus our answer is $41+(1-3)*-2=45$.
A last piece of advice: when the number of rows or terms in your sequence is small, draw pictures! It is a great way to build intuition doing these problems, and can be nice to check your work.
Hope this helps!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What method was used here to expand $\ln(z)$? On Wikipedia's entry for bilinear transform, there is this formula:
\begin{align}
s &= \frac{1}{T} \ln(z) \\[6pt]
&= \frac{2}{T} \left[\frac{z-1}{z+1} + \frac{1}{3} \left( \frac{z-1}{z+1} \right)^3 + \frac{1}{5} \left( \frac{z-1}{z+1} \right)^5 + \frac{1}{7} \left( \frac{z-1}{z+1} \right)^7 + \cdots \right] \\[6pt]
&\approx \frac{2}{T} \frac{z - 1}{z + 1} \\[6pt]
&= \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}
\end{align}
What is the method that is used to expand $\ln(z)$? Taylor series? Laurent series? Some other techniques?
|
Taylor series show
Taylor series show
$$\log\frac{1+z}{1-z} = \log(1+z) - \log(1-z) = 2(z+z^3/3 + \cdots);$$
then let $z=(y-1)/(y+1)$, so that $(1+z)/(1-z)=y$.
The math works, though something doesn't jibe with my intuition. I guess what's confusing is that the Taylor series already looks like log but it is missing the even terms. With that, as $z$ goes to $1$ the series approaches the divergent (odd) harmonic series, which makes sense.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How do we know Euclid's sequence always generates a new prime? How do we know that $(p_1 \cdot \ldots \cdot p_k)+1$ is always prime, for $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, $\ldots$ (i.e., the first $k$ prime numbers)?
Euclid's proof that there is no maximum prime number seems to assume this is true.
|
That's not true. Euclid's proof does not conclude that this expression is a new prime; it concludes that there exist primes that were not in our original list, lying between the $k$-th prime and $p_{1}p_{2}\cdots p_{k} + 1$ (inclusive). Among those extra primes that must exist, $p_{1}p_{2}\cdots p_{k} + 1$ may be one of them, or it may not be. It doesn't matter; either way, there are more primes than listed in our initial list.
We have a counterexample as soon as $2\cdot3\cdot5\cdot7\cdot11\cdot13 + 1 = 30031 = 59\cdot 509$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
for f(x,y,z) find point on surface nearest to origin $f(x,y,z)=x^2+2y^2-z^2$, $S=\{(x,y,z): f(x,y,z)=1\}$ find point on S nearest to origin.
I thought I would use Lagrange multipliers to solve this problem, but when I use $f(x,y,z)=x^2+2y^2-z^2$ and $g(x,y,z)=x^2+2y^2-z^2-1$, along with,
$$g(x,y,z)=0$$
$$f_x=\lambda g_x$$
$$f_y=\lambda g_y$$
$$f_z=\lambda g_z$$
I am getting not-so-good answers. How does one solve this type of problem?
|
The distance from $(x,y,z)$ to the origin is $\sqrt{x^2+y^2+z^2}$.
We therefore want to minimize $\sqrt{x^2+y^2+z^2}$, or equivalently $x^2+y^2+z^2$, subject to the condition $x^2+2y^2-z^2-1=0$. So $x^2+y^2+z^2$ should be your $f$. Now the usual process should work well.
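Carrying this out (a sketch): with $f=x^2+y^2+z^2$ and $g=x^2+2y^2-z^2-1$, the conditions read
$$2x=2\lambda x,\qquad 2y=4\lambda y,\qquad 2z=-2\lambda z,\qquad x^2+2y^2-z^2=1.$$
From $z(1+\lambda)=0$: if $z\neq0$ then $\lambda=-1$, which forces $x=y=0$ and $-z^2=1$, impossible; so $z=0$. Then either $y=0$, $\lambda=1$, $x^2=1$ (distance $1$), or $x=0$, $\lambda=\tfrac12$, $y^2=\tfrac12$ (distance $\tfrac1{\sqrt2}$). The points nearest the origin are $\left(0,\pm\tfrac1{\sqrt2},0\right)$.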
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
About a polynomial with complex coefficients taking integer values for sufficiently large integers Let $f(x)$ be a polynomial with complex coefficients such that $\exists n_0 \in \mathbb Z^+$ such that $f(n) \in \mathbb Z , \forall n \ge n_0$, then is it true that $f(n) \in \mathbb Z , \forall n \in \mathbb Z$ ?
|
Proof by induction on the degree of the polynomial. It is trivially true for degree zero. Assume it is true for degree $n$. Let $f$ have degree $n+1$ and be integral for all sufficiently large integer arguments. Then the same is true for $g(x)=f(x+1)-f(x)$, a polynomial of degree $n$. By the induction hypothesis, $g$ is integer-valued at all integer arguments. Then, since $f(n)=f(n+1)-g(n)$, descending induction shows that $f$ is as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1216983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
The diophantine equation $x^2+y^2=3z^2$ I tried to solve this question but without success:
Find all the integer solutions of the equation: $x^2+y^2=3z^2$
I know that if a sum of two squares is divisible by $3$ then both numbers are divisible by $3$, hence if $(x,y,z)$ is a solution then $x=3a,y=3b$. I get $3a^2+3b^2=z^2$, and that implies
$$
\left(\frac{a}{z}\right)^2+\left(\frac{b}{z}\right)^2=\frac{1}{3}
$$
so I need to find the rational solutions of the equation $u^2+v^2=\frac{1}{3}$, and I think that there are no solutions because $\frac{1}{3}$ doesn't have a rational square root, but I don't know how to explain it.
Thanks
|
How about using infinite descent?
For any integer $a$, $a\equiv0,\pm1\pmod3\implies a^2\equiv0,1\pmod3$.
We have $x^2+y^2\equiv0\pmod3$, which forces $x\equiv y\equiv0\pmod3$.
So, $3|(x,y)$.
Let $x=3X,y=3Y$; then $3(X^2+Y^2)=z^2$, so $3\mid z$ as well, and writing $z=3Z$ gives $X^2+Y^2=3Z^2$ again, and so on: the descent forces $x=y=z=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Show that if we exchange elements in 2 different basis will still give us a basis. I came across a proof of the following theorem:
Let $V$ be a free module over a division ring $D$. Suppose $X$ and $Y$ are two finite bases of $V$; then $|X| = |Y|$.
that uses the fact that if we exchange elements between 2 different bases, we will still get a basis. Say $X = \{x_1, x_2, ..., x_n\}$ and $Y = \{y_1, ..., y_k\}$; then $\{x_1, x_2, ..., x_{n-1}, y_k\}$ is still a basis.
However, I am having trouble proving that; I can't show that if $0 = a_1 x_1 + ... + a_n y_k$, then all the coefficients are 0. Can someone teach me how to solve this? Thanks
|
Let $X=\{x_1, x_2\}$, $Y=\{y_1, y_2\}$ with $y_1=x_2$, $y_2=x_1$. Now $\{x_1, y_2\} = \{x_1\}$ is not a basis in general. Hence, your statement is wrong.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Is the optimum of this problem unique? I have a convex optimisation problem:
$$\min_{x_i} \sum_{i=1}^N a_i \exp(-x_i/b_i)\quad\text{ s.t. }\sum_{i=1}^N x_i=x\text{ and } x_i\ge 0$$
The KKT conditions are:
$$\lambda=\begin{cases}
a_i/b_i \exp(- x_i/b_i) &\text{ if }a_i/b_i>\lambda \\
a_i/b_i+\mu_i &\text{ else }\end{cases},$$
where $\lambda>0$ and $\mu_i\ge 0$.
I am able to characterise the interior solution ($x_i>0\ \forall i$), but I am unable to see if the solution is unique and under what circumstances. What is the unique optimum for this problem and how is it dependent on the parameters?
|
Hint: A strictly convex function has at most one (global) minimum.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove that equation $x^6+x^5-x^4-x^3+x^2+x-1=0$ has two real roots Prove that equation
$$x^6+x^5-x^4-x^3+x^2+x-1=0$$ has two real roots
and $$x^6-x^5+x^4+x^3-x^2-x+1=0$$
has two real roots
I think that:
$$x^{4k+2}+x^{4k+1}-x^{4k}-x^{4k-1}+x^{4k-2}+x^{4k-3}-..+x^2+x-1=0$$
and
$$x^{4k+2}-x^{4k+1}+x^{4k}+x^{4k-1}-x^{4k-2}-x^{4k-3}-..+x^2+x-1=0$$
have two real roots, but I don't have a solution.
|
HINT: you can write this equation as
$x^6+x^5-x^4-x^3=-x^2-x+1$
$y=x^6+x^5-x^4-x^3$
$y=-x^2-x+1$
and draw the two functions, noting that there are two points where they intersect.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
Geometry - Determine all points along a ray from starting coordinates and direction I am working on a video game. I need to determine each point along a ray with every x interval with the following information:
* X, Y, Z coordinates of the starting point of the ray, and
* X, Y, Z coordinates representing the angle of the ray (the game is 3D; from what I can tell the game engine uses Euclidean vectors)
Of course the points along a ray are infinite in number, so I need something like "get next point." My idea is to have the computer get the first point, check it, and if that is not successful, get the next point and check it, and so on until a specific point fits what I need. Of course, there must be some arbitrary upper limit to prevent the program from hanging due to continually checking points.
In my game (it's a Minecraft clone for learning purposes, for anyone who knows about that) there is a 3D grid; each cell in the game is filled completely with some material, e.g. grass or air. I need to determine the first non-air block in the direction the player is looking. The player can be looking in any direction and is not bound to the grid in any way other than their location being represented by floating point x, y, and z coordinates on the grid.
NOTE: My game engine does not have a built-in feature to do this.
|
Well if $A = (a_1,a_2,a_3)$ is the start point, and $d =(d_1,d_2,d_3)$ is the direction of the ray, you can parametrize the ray by
$$ p = a + td = \begin{pmatrix} a_1+td_1 \\ a_2 + td_2 \\ a_3+td_3\end{pmatrix}$$ for $t \in \mathbb R$, or for $t \in [0,1]$ if you want only the points between $a$ and $a+d$.
If your 'minecraft' blocks have an edge length of $e$, you can check whether the point
$$ p = a+t\frac{e}{|d|} d = \begin{pmatrix} a_1+t\frac{e}{|d|}d_1 \\ a_2 + t\frac{e}{|d|}d_2 \\ a_3+t\frac{e}{|d|}d_3\end{pmatrix}$$
is within a non-air block for $t = 0,1,2,3,\ldots$ (stopping once you are within one),
where $|d| = \sqrt{d_1^2+d_2^2+d_3^2}$ is the length of the vector $d$.
Here I used the idea that if you are outside a block, you can make a step of size $e$ without overshooting the first possible block.
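A sketch of that stepping loop in Python (the grid callback `is_air` is a hypothetical hook into the game's block grid; note that fixed-step marching can in principle clip past the corner of a block, and an exact voxel traversal such as Amanatides-Woo avoids that):

```python
import math

def first_solid_block(start, direction, is_air, edge=1.0, max_steps=1000):
    """March along the ray in steps of one block edge length."""
    ax, ay, az = start
    dx, dy, dz = direction
    norm = math.sqrt(dx*dx + dy*dy + dz*dz)
    sx, sy, sz = edge*dx/norm, edge*dy/norm, edge*dz/norm
    for t in range(max_steps):
        p = (ax + t*sx, ay + t*sy, az + t*sz)
        cell = tuple(math.floor(c) for c in p)   # grid cell containing p
        if not is_air(*cell):
            return cell
    return None  # nothing solid within max_steps blocks
```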
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to prove that $\lfloor a\rfloor+\lfloor b\rfloor\leq \lfloor a+b\rfloor$ We have the floor function $F(x)=[x]$ such that $F(2.5)=[2.5]=2, F(-3.5)=[-3.5]=-4, F(2)=[2]=2$.
How can I prove the following property of floor function:
$$[a]+[b] \le [a+b]$$
|
Consider the algebraic representation of $\mathrm{floor}(x)=\lfloor x\rfloor$ (usually the floor of $x$ is represented by $\lfloor x\rfloor$, not $[x]$):
$$
x-1<\color{blue}{\lfloor x\rfloor\leq x}.
$$
Hence, for arbitrary $a$ and $b$, we have
$$
\lfloor a\rfloor\leq a\tag{1}
$$
and
$$
\lfloor b\rfloor\leq b.\tag{2}
$$
Now simply add $(1)$ and $(2)$ together to get
$$
\lfloor a\rfloor+\lfloor b\rfloor\leq a+b.\tag{3}
$$
Finally, take the floor of both sides of $(3)$ (using that the floor function is nondecreasing and fixes integers):
$$
\lfloor a+b\rfloor\geq\Bigl\lfloor\lfloor a\rfloor+\lfloor b\rfloor\Bigr\rfloor=\lfloor a\rfloor+\lfloor b\rfloor.
$$
Hence, we have that $\lfloor a\rfloor+\lfloor b\rfloor\leq \lfloor a+b\rfloor$, as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
}
|
Dual of a matrix Lie algebra In fact I already calculated the dual space with a formula, but I didn't understand some steps of the formula.
So, I want to calculate the dual space of The lie algebra of $SL(2,R)$. Knowing that $B=\left(\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} , \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right)$, using the trace paring and that the matrix in the dual is the transpose of matrix in the lie algebra, we obtain that the dual basis is the B basis transpose.
The thing I did not understand was why the transposes give the dual basis.
|
This is a general fact from linear algebra. If $e_1, \ldots , e_n$ is a basis of a vector space $V$, then $V^{\ast}$ has the dual basis consisting of linear functions $f_1, \ldots, f_n$ defined by $f_i(e_j) = 0$ for all $i$ and $j$ such that $i\neq j$, and $f_i(e_i) = 1$ for all $i$. If a linear transformation $\phi$ is given by an $n \times n$ matrix $A$, $\phi(x) = Ax$, then the matrix of $\phi^{\ast}$ in the dual basis is the transpose $A^T$.
A representation $ρ : \mathfrak{g} \rightarrow \mathfrak{gl}(V )$ defines the dual (or contragredient) representation $\rho^{\ast} : \mathfrak{g}\rightarrow \mathfrak{gl}(V^{\ast})$ as follows: $a\in \mathfrak{g}$ sends a linear function $f(x)$ to $−f(\rho(a)x)$, or $a \mapsto −\rho(a)^T$ in the matrix form, with the transpose matrix. Hence for $\mathfrak{g}=\mathfrak{sl}_2$ and the natural representation $\rho$ of it (the identity map on trace zero matrices), the dual representation is just given by the negative of the transposed matrices.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is wrong with this putative proof? So I've spent about an hour trying to figure out what is wrong with this proof. Could somebody clearly explain it to me? I don't need a counterexample. For some reason I was able to figure that out.
Thanks.
Theorem. $\;$ Suppose $R$ is a total order on $A$ and $B\subseteq A$. Then every element of $B$ is either the smallest element of $B$ or the largest element of $B$.
Proof. $\;$ Suppose $b\in B$. Let $x$ be an arbitrary element of $B$. Since $R$ is a total order, either $bRx$ or $xRb$.
*
*Case 1. $bRx$. Since $x$ was arbitrary, we can conclude that $\forall x\in B(bRx)$, so $b$ is the smallest element of $B$.
*Case 2. $xRb$. Since $x$ was arbitrary, we can conclude that $\forall x\in B(xRb)$, so $b$ is the largest element of $B$.
Thus, $b$ is either smallest element of $B$ or the largest element of $B$. Since $b$ was arbitrary, every element of $B$ is either its smallest element or its largest element.
|
The proof shows that for an arbitrary $x$, either $bRx$ or $xRb$. Therefore $\forall x(xRb\lor bRx)$. So far so good.
But $\forall x(P(x)\lor Q(x))$ is not equivalent to $\forall xP(x)\lor\forall x Q(x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 7,
"answer_id": 6
}
|
Finding lines of symmetry algebraically How would you determine the lines of symmetry of the curve $y=f(x)$ without sketching the curve itself?
I was working on the problem of finding the lines of symmetry of the curve given by:
$x^{4}+y^{4}=u$, where $u$ is a positive real number,
but I had to resort to doing a quick sketch.
Is there an explicit condition for a line to be considered a line of symmetry of a curve?
Thanks.
|
A line L given by $ax+by+c=0$ is a line of symmetry for your curve, if and only if for any point $P_1(x_1,y_1)$ from the curve, its symmetrical point $P_2(x_2,y_2)$ (with respect to the line L) is also on the curve.
Now:
(1) express $x_2$ and $y_2$ as functions of $x_1, y_1$ (this can be done from the equation of L);
(2) substitute the $x_2$ and $y_2$ expressions in the curve equation (because they should satisfy it too); thus you'll get the "iff condition" which you're looking for.
In the general case i.e. for any arbitrary curve, it won't be a nice equation, I believe.
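Here is a sketch of steps (1) and (2) with sympy, for the curve $x^4+y^4=u$ from the question and the candidate line $y=x$ (the line coefficients $a,b,c$ are inputs you can change):

    # Reflect (x1, y1) across a*x + b*y + c = 0 and compare curve equations
    import sympy as sp

    x1, y1, u = sp.symbols('x1 y1 u', real=True)
    a, b, c = 1, -1, 0  # the line x - y = 0, i.e. y = x

    d = (a*x1 + b*y1 + c) / (a**2 + b**2)
    x2, y2 = x1 - 2*a*d, y1 - 2*b*d   # the reflected point

    curve = lambda X, Y: X**4 + Y**4 - u
    print(sp.simplify(curve(x2, y2) - curve(x1, y1)))  # 0: y = x is a line of symmetry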
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Is the sequence of $\mathbb Z$-modules below exact? I came across the following example in my algebra textbook:
Consider the sequence of $\mathbb Z$-modules: $$0\longrightarrow \mathbb Z_2\stackrel{f}{\longrightarrow}\mathbb Z_4\stackrel{g}{\longrightarrow}\mathbb Z_2\longrightarrow 0,$$
where $f$ is the isomorphism of $\mathbb Z_2$ onto the unique proper nontrivial subgroup $2\mathbb Z_4$ of $\mathbb Z_4$ and $g$ is multiplication by $2$. This is a short exact sequence.
I guess there is something strange here, for if the sequence were exact then $\textrm{im}(g)=\mathbb Z_2=\{\overline{0}, \overline{1}\}$; but if there were $\overline{a}\in \mathbb Z_4$ such that $2\overline{a}=\overline{1}$ in $\mathbb Z_2$ then $2\mid 2a-1$, which certainly can't happen.
Am I missing something?
Reference: Algèbre et Modules (I. Assem, pg. 31).
|
It seems to me that the second $\Bbb Z_2$ is actually $2\Bbb Z_4$, that is, it is:
$\{[0]_4,[2]_4\}$ (this is isomorphic to $\Bbb Z_2$).
It should be clear then, that $gf = 0$, as one would expect in an exact sequence.
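For a finite sanity check, one can verify exactness directly in Python, with $\Bbb Z_4$ modelled as $\{0,1,2,3\}$ under addition mod $4$:

    # f sends [a]_2 to [2a]_4; g is multiplication by 2, landing in 2Z_4 = {0, 2}
    f = lambda a: (2 * a) % 4
    g = lambda b: (2 * b) % 4

    im_f  = {f(a) for a in range(2)}
    ker_g = {b for b in range(4) if g(b) == 0}
    print(im_f == ker_g, im_f)          # True {0, 2}: exactness in the middle
    print({g(f(a)) for a in range(2)})  # {0}: g o f = 0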
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1217970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Finding the joint unconditional distribution of $X$ and $Y=N-X$ for $X\sim \mathrm{Bin}(N,p)$ and $N\sim \mathrm{Pois}(\lambda)$. The question asks to find the unconditional joint distribution of $X$ and $Y=N-X$, given that
*
*$N$ has a Poisson distribution with parameter $\lambda$, and
*$X$ has a conditional distribution $\mathrm{Bin}(N,p)$.
I have worked out that $X$ has an unconditional distribution $\mathrm{Pois}(p\lambda)$ though I'm not sure that is of much help.
I am having trouble beginning. The steps I used to find the single unconditional distribution were straightforward, but I'm not sure how to go about it for the joint distribution.
Thanks in advance.
|
You have been given the conditional probability mass of $X\mid N$, so why not use it?
$\begin{align}
\mathsf P(X=k, Y=h) & = \mathsf P(X=k, N-X=h)
\\ & = \mathsf P(X=k, N=h+k)
\\ & = \mathsf P(X=k\mid N=h+k)\cdot \mathsf P(N=h+k)
\\ & = \binom{h+k}{k}p^{k}(1-p)^{h}\cdot\frac{\lambda^{h+k}e^{-\lambda}}{(h+k)!}
\end{align}$
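As a remark, this joint mass factorizes as $\left(e^{-p\lambda}\frac{(p\lambda)^k}{k!}\right)\left(e^{-(1-p)\lambda}\frac{((1-p)\lambda)^h}{h!}\right)$, so $X$ and $Y$ are in fact independent Poisson variables (the classical thinning property; this also recovers your marginal $\mathrm{Pois}(p\lambda)$ for $X$). A Monte Carlo sketch to check the formula (the parameter values and test point are arbitrary choices):

    import numpy as np
    from math import comb, exp, factorial

    rng = np.random.default_rng(0)
    lam, p, k, h = 3.0, 0.4, 2, 3

    N = rng.poisson(lam, size=1_000_000)
    X = rng.binomial(N, p)
    Y = N - X

    empirical = np.mean((X == k) & (Y == h))
    theory = comb(h + k, k) * p**k * (1 - p)**h \
             * lam**(h + k) * exp(-lam) / factorial(h + k)
    print(empirical, theory)  # agree to Monte Carlo accuracy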
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1218081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that a subspace is proper dense in $l^1$ sequence space. Let $Y = L^1(\mu)$ where $\mu$ is counting measure on $\mathbb N$.
Let $X = \{f \in Y : \sum_{n=1}^{\infty} n|f(n)|<\infty\}$, equipped with the $L^1$ norm.
Show that X is a proper dense subspace of Y.
I don't know how to show that X is dense in Y.
I was thinking about constructing a sequence in Y so that every convergent sequence in X converges to the sequence in Y.
But then I think there might be another way to show denseness of X.
Thank you.
|
Let $e_i :\mathbb N\to\mathbb R$ defined as
$$
e_i(j)=\left\{\begin{array}{lll} 1 &\text{if}&i=j,\\ 0 &\text{if}&i\ne j.\end{array}\right.
$$
Then $e_i\in X$. If $f\in Y$, then setting
$$
f_n=\sum_{i=1}^n f(i)\, e_i\in X,
$$
we observe that
$$
\|f-f_n\|_{L^1}=\sum_{i>n}\lvert\,f(i)\rvert.
$$
Clearly, the right hand side of the above tends to zero, as $n\to\infty$, and hence $f_n\to f$ in $L^1(\mu)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1218291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Euler sum of a divergent series So I have a series $1+0+(-1)+0+(-1)+0+1+0+1+0+(-1)+...$
Is it correct to rearrange this as $1+0+(-1)+0+1+0+(-1)+0+1+0+(-1)+0...$
The second problem can be done as an Euler sum and the answer is $\frac{1}{2}$. In general, I know that absolutely convergent series can be rearranged but I'm not sure what the rule is for this case
|
A series converges, by definition, if the sequence of partial sums $S_n$ converges to some number. So, call your individual terms $a_n$; the sequence of partial sums is $\displaystyle \sum _{k=1}^n a_k$. If you look at your sequence of partial sums, you can see it doesn't converge to any number, because it varies from 1 to 0 and back to 1 infinitely often; therefore there is no number $S$ and no $N$ such that $\forall n>N$ we have $|S_n -S|<\frac 1 2$ (picking one half as our epsilon challenge, for instance).
Another way you can tell that this series diverges is that a necessary (but not sufficient) criterion for a series to converge is that the individual terms of the sequence go to 0, and here once again you do not have that.
Now, some people will play around with alternate theories of summation that are NOT standard analysis and that will assign a "sum" to normally divergent series. But that's not the default meaning of the value of an infinite sum.
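To see the oscillation of the partial sums concretely (for the rearranged series):

    # Partial sums of the rearranged series 1 + 0 + (-1) + 0 + 1 + ...
    from itertools import cycle, islice, accumulate

    terms = islice(cycle([1, 0, -1, 0]), 12)
    print(list(accumulate(terms)))  # [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]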
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1218664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why only the numerator is differentiated? Why is the derivative of $y = \frac{x^5}{a+b}-\frac{x^2}{a-b}-x$ found by differentiating just the numerators?
The solution is $\frac{dy}{dx}=\frac{5x^4}{a+b}-\frac{2x}{a-b}-1$.
|
Because the denominator does not depend on $x$. So if we were to formally use $\left(\frac{u}{v}\right)'=\frac{u'v-uv'}{v^2}$, then plugging in $v'=0$ ($v$ does not depend on $x$) gives $\left(\frac{u}{v}\right)'=\frac{u'}{v}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1218730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Second order linear ODE determine coefficient A vertical spring has the spring constant $32$ N/m. A body with the mass 2 kg is attached to its end and is at equilibrium. At the time $t=0$ a variable force $$F(t)=12\sin(\omega t)$$ is applied in the direction of gravity. Describe the motion of the body for different values of the force's angle frequency $\omega $.$$$$
My idea is to denote the springs vertical position with $y(t)$. If we assume that downwards is the positive direction then the acceleration should be described as
$$y''(t)=6\sin(\omega t) - 16y(t)$$
The homogenous solution is
$$y_h = A_1\sin(4t)+B_1\cos(4t)$$
Assume the particular solution is on the form
$$y_p = A_2\sin(\omega t) $$
Deriving this twice and plugging in to the original equation gives
$$16A_2-A_2\omega^2=6, A_2=\frac{6}{16-\omega^2}$$
and so the general solution would be
$$y=A_1\sin(4t)+B_1\cos(4t)+\frac{6\sin(\omega t)}{16-\omega^2}$$
but since $y(0)=0$ we can remove the cosine term and so
$$y=A_1\sin(4t)+\frac{6\sin(\omega t)}{16-\omega^2}$$
Now how do I determine $A_1$? And I understand that I need to consider the special case $\omega=4$.
|
The equation is:
$$y''=6\sin\omega t-16y$$
Let $y=a\sin(\omega t+\theta)$
Now:
$$-a\omega^2\sin(\omega t+\theta)=6\sin\omega t-16a\sin(\omega t+\theta)$$
Or:
$$\underbrace{(16-\omega^2)a}_{\alpha}\sin(\omega t+\theta)=6\sin\omega t$$
Now:
$$(\alpha\cos\theta-6)\sin\omega t+(\alpha\sin\theta)\cos\omega t=0\\
\sin\left(\omega t+\arctan\frac{\alpha\sin\theta}{\alpha\cos\theta-6}\right)=0$$
...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1218860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What does the solution of a PDE represent? So I took a course in PDE's this semester and now the semester is over and I'm still having issues with what exactly we solved for. I mean it in this sense: in your usual first or second calculus course you are taught the concept that your second derivative represents acceleration, your first derivative represents velocity and your function is some sort of position function. So in the world of PDE's, what does the function $U(x,t)$ really capture? Your derivatives model rates of change of a phenomenon in a broad sense, but what does the solution model? So for example the solution of the heat equation is your $U(x,t)$ function modelling temperature? Or what is the solution to the wave equation modelling? Are these some sort of "position functions"? I ask because I'm going over Green's functions and the relationship with the Dirac function, and I'm trying to wrap my head around what is being described. I understand the steps towards a solution, but it is the description of what is occurring which is faltering, and essentially that is the important part behind why we do mathematics. Also is there some sort of visual representation of these solutions? Thanks
|
There are tons of situations where the solution of a differential equation (ordinary or partial) describes the behaviour over time:
1) pendulum
2) (radio active) decay
3) mass-spring-damper systems
4) inductance-capacitance-resistance systems
5) heat distribution over time
6) water level in a bucket with a leak
7) Temperature of a heated house
8) (electromagnetic) waves
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1218955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
Quadratic Integers in $\mathbb Q[\sqrt{-5}]$ Can someone tell me if $\frac{3}{5}$, $2+3\sqrt{-5}$, $\frac{3+8\sqrt{-5}}{2}$, $\frac{3+8\sqrt{-5}}{5}$, $i\sqrt{-5}$ are all quadratic integers in $\mathbb Q[\sqrt{-5}]$? And if so, why are they in $\mathbb Q[\sqrt{-5}]$?
|
Only one of them is. There are a number of different ways to tell, and of these the easiest are probably the minimal polynomial and the algebraic norm.
In a quadratic extension of $\mathbb{Q}$, the minimal polynomial of an algebraic number $z$ looks something like $a_2 x^2 + a_1 x + a_0$, and if $a_2 = 1$, we have an algebraic integer.
If $z = m + n\sqrt{d}$, then the norm is $N(z) = m^2 - dn^2$. In $\mathbb{Q}(\sqrt{-5})$ for $z = m + n\sqrt{-5}$, this works out to $N(z) = m^2 + 5n^2$. If $z$ is an algebraic integer, then $N(z) \in \mathbb{Z}$.
Let's take the five numbers one by one:
*
*$\frac{3}{5}$ is obviously in $\mathbb{Q}$ so it's also in the extension $\mathbb{Q}(\sqrt{-5})$. But its minimal polynomial is $5x - 3$, so $a_1 = 5$ but $a_2 = 0$. Also, the norm is $\frac{9}{25}$. Clearly $\frac{3}{5}$ is not an algebraic integer.
*$2 + 3\sqrt{-5}$ has a minimal polynomial of $x^2 - 4x + 49$, so our $a_2$ here is indeed 1. Also, its norm is 49. Here we have an algebraic integer in $\mathbb{Q}(\sqrt{-5})$.
*$\frac{3}{2} + 4\sqrt{-5}$ has a minimal polynomial of $4x^2 - 12x + 329$, so $a_2 = 4$. And the norm is $\frac{329}{4}$, which is not an integer. So $\frac{3}{2} + 4\sqrt{-5}$ is not an algebraic integer in $\mathbb{Q}(\sqrt{-5})$. (Side note: $\frac{3}{2} + 4\sqrt{5}$ is not an algebraic integer in $\mathbb{Q}(\sqrt{5})$ either, but $\frac{3}{2} + \frac{7\sqrt{5}}{2}$ is).
*$\frac{3}{5} + \frac{8\sqrt{-5}}{5}$... you can do this one on your own.
*$i \sqrt{-5}$ works out to $-\sqrt{5}$, which is an algebraic integer, but it comes from a different domain: it lies in $\mathbb{Q}(\sqrt{5})$, not in $\mathbb{Q}(\sqrt{-5})$.
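If you want to double-check such computations, here is a sketch with sympy (assuming its `minimal_polynomial`, which returns a primitive polynomial with integer coefficients; the number is an algebraic integer iff that polynomial is monic):

    from sympy import sqrt, Rational, minimal_polynomial, symbols

    x = symbols('x')
    candidates = [Rational(3, 5),
                  2 + 3*sqrt(-5),
                  Rational(3, 2) + 4*sqrt(-5),
                  Rational(3, 5) + Rational(8, 5)*sqrt(-5),
                  sqrt(-1)*sqrt(-5)]

    for z in candidates:
        print(z, minimal_polynomial(z, x))  # monic <=> algebraic integer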
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
$\lim_{n\to\infty} \frac{1}{\log(n)}\sum _{k=1}^n \frac{\cos (\sin (2 \pi \log (k)))}{k}$ What tools would you gladly recommend me for computing precisely the limit below? Maybe a starting point?
$$\lim_{n\to\infty} \frac{1}{\log(n)}\sum _{k=1}^n \frac{\cos (\sin (2 \pi \log (k)))}{k}$$
|
Let $f(x) = \frac{\cos(\sin(2\pi x))}{x}$, we have
$$f'(x) = - \frac{2\pi\cos(2\pi x)\sin(\sin(2\pi x)) + \cos(\sin(2\pi x))}{x^2}
\implies |f'(x)| \le \frac{2\pi + 1}{x^2}
$$
By MVT, for any $x \in (k,k+1]$, we can find a $\xi \in (0,1)$ such that
$$|f(x) - f(k)| = |f'(k + \xi(x-k))|(x-k) \le \frac{2\pi+1}{k^2}$$
This implies
$$\left| \int_k^{k+1} f(x) dx - f(k)\right| \le \frac{2\pi+1}{k^2}$$
As a result,
$$|\sum_{k=1}^n f(k) - \int_1^n f(x) dx|
\le |f(n)| + \sum_{k=1}^{n-1} \left|f(k) - \int_k^{k+1} f(x)dx\right|\\
\le 1 + (2\pi + 1)\sum_{k=1}^{\infty} \frac{1}{k^2}
= 1 + \frac{(2\pi + 1)\pi^2}{6} < \infty$$
As a result,
$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}f(k)
= \lim_{n\to\infty}\frac{1}{\log n}\int_1^n f(x) dx
= \lim_{L\to\infty}\frac{1}{L}\int_0^L f(e^t) de^t\\
= \lim_{L\to\infty}\frac{1}{L}\int_0^L \cos(\sin(2\pi t)) dt
$$
Since the integrand is periodic with period $1$, the limit at the right exists
and equal to
$$\begin{align}
& \int_0^1\cos(\sin(2\pi t))dt = \frac{1}{2\pi}\int_0^{2\pi}\cos(\sin(t)) dt = J_0(1)\\
\approx & 0.765197686557966551449717526102663220909274289755325
\end{align}
$$
where
$J_0(x)$
is the Bessel function of the first kind.
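A numerical sanity check (a sketch using scipy's `j0`; since the error decays only like $1/\log n$, the agreement is rough even for a large cutoff):

    import numpy as np
    from scipy.special import j0

    n = 10**6
    k = np.arange(1, n + 1)
    s = np.sum(np.cos(np.sin(2 * np.pi * np.log(k))) / k) / np.log(n)
    print(s, j0(1.0))  # both near 0.765, up to O(1/log n) error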
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
}
|
$\max(a,b)=\frac{a+b+|a-b|}{2}$ generalization I am aware of an occasionally handy identity: $$\max(a,b)=\frac{a+b+|a-b|}{2}$$
However, I have found I'm unable to come up with a nice similar form for $\max(a,b,c)$. Of course I could always use the fact that $\max(a,b,c)=\max(\max(a,b),c)$ to write
$$\begin{align}\max(a,b,c)&=\frac{\frac{a+b+|a-b|}{2}+c+\left|\frac{a+b+|a-b|}{2}-c\right|}{2}
\\&=\frac{a+b+2c+|a-b|+|a+b-2c+|a-b||}{4}\end{align}$$
but this lacks elegance and in particular it is not clear to me just from the formula that if I permute the variables I get the same result.
The following doesn't work, but it would be nice if I could write $\max(a,b,c)$ in some form like
$$\frac{a+b+c+|a-b|+|a-c|+|b-c|}{3}$$
Is there a good generalization of this to $n$ variables? That is given $x_1,x_2,\dots,x_n\in\mathbb{R}$, is there a way to write $\max(x_1,x_2,\dots,x_n)$ in a clearly symmetric form using addition, subtraction, division, and the absolute value function?
|
Symmetrizing in the obvious way:
$$\frac{a+b+c}3+\frac{|a-b|+|b-c|+|a-c|}{12}+\\\frac{|a+b-2c+|a-b||+|a+c-2b+|a-c||+|b+c-2a+|b-c||}{12}$$
For four variables, I found:
$$\tfrac14\left(a+b+c+d+|a-b|+|c-d|+|a+b-c-d+|a-b|-|c-d||\right)$$
Of course you don't have to use absolute value signs; for positive $a,b,c$: $$\lim_{m\to\infty}\sqrt[m]{a^m+b^m+c^m}$$
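A randomized check that the symmetrized three-variable formula above really computes the maximum (the sampling range is arbitrary):

    import random

    def max3(a, b, c):
        return ((a + b + c) / 3
                + (abs(a - b) + abs(b - c) + abs(a - c)) / 12
                + (abs(a + b - 2*c + abs(a - b))
                   + abs(a + c - 2*b + abs(a - c))
                   + abs(b + c - 2*a + abs(b - c))) / 12)

    for _ in range(10_000):
        a, b, c = (random.uniform(-10, 10) for _ in range(3))
        assert abs(max3(a, b, c) - max(a, b, c)) < 1e-9
    print("formula matches max on all samples")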
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
}
|
Maximizing the area of a triangle with its vertices on a parabola. So, here's the question:
I have the parabola $y=x^2$. Take the points $A=(-1.5, 2.25)$ and $B=(3, 9)$, and connect them with a straight line. Now, I am trying to find out how to take a third point on the parabola $C=(x,x^2)$, with $x\in[-1.5,3]$, in such a way that the area of the triangle $ABC$ is maximized.
I have pretty good evidence by trial and error that this point is $(.75, .5625)$ but I have no idea how to prove it. I was trying to work with a gradient, and then Heron's formula, but that was a nightmare to attempt to differentiate. I feel like this is a simple optimization problem but I have no clue how to solve it. Any help is appreciated!
Thanks.
|
Assuming $A=\left(-\frac{3}{2},\frac{9}{4}\right),B=(3,9),C=(x,x^2)$, the area of $ABC$ is maximized when the distance between $C$ and the line $AB$ is maximized, i.e. when the tangent to the parabola at $C$ has the same slope of the $AB$ line. Since the slope of the $AB$ line is $m=\frac{9-9/4}{3+3/2}=\frac{3}{2}$ and the slope of the tangent through $C$ is just $2x$, the area is maximized by taking:
$$ C=\left(\frac{3}{4},\frac{9}{16}\right) $$
and the area of $ABC$ can be computed through the shoelace formula:
$$ [ABC] = \frac{729}{64}. $$
This area is just three fourths of the area of the parabolic segment cut by $A$ and $B$, as was already known to Archimedes.
Also notice that in a parabola the midpoints of parallel chords always lie on a line that is parallel to the axis of symmetry. That immediately gives that $C$ and the midpoint $M=\left(\frac{3}{4},\frac{45}{8}\right)$ of $AB$ have the same $x$-coordinate. Moreover, it gives that the area of $ABC$ is the length of $CM$ times the difference between the $x$-coordinate of $B$ and the $x$-coordinate of $C$, hence $\frac{729}{64}$ as already stated.
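A numerical cross-check of the maximizer and the area (a sketch; the grid resolution is an arbitrary choice):

    import numpy as np

    A, B = (-1.5, 2.25), (3.0, 9.0)

    def area(x):  # shoelace formula with C = (x, x^2)
        return 0.5 * abs(A[0]*(B[1] - x**2) + B[0]*(x**2 - A[1]) + x*(A[1] - B[1]))

    xs = np.linspace(-1.5, 3, 100_001)
    i = np.argmax(area(xs))
    print(xs[i], area(xs[i]), 729/64)  # ~0.75, ~11.390625, 11.390625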
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Showing that the function $f(x,y)=x\sin y+y\cos x$ is Lipschitz I wanted to show that $f(x,y)=x\sin y+y\cos x$ satisfies a Lipschitz condition, but I can't reduce it to the form $L|y_1-y_2|$.
According to my lecturer, the Lipschitz condition should be $$|f(x,y_1)-f(x,y_2)|\le L|y_2-y_1|$$
I was able to show that $x^2+y^2$ in the rectangle $|x|\le a$, $|y|\le b$ satisfies the Lipschitz condition, with $L=2b$. But I had problems showing this for $f(x,y)=x\sin y+y\cos x$.
|
Since $\frac {\partial f} {\partial y} =x \cos y+ \cos x$, it follows that
$$\left| \frac {\partial f} {\partial y} \right| = |x \cos y+ \cos x| \le |x \cos y|+|\cos x| = |x| \, |\cos y|+|\cos x| \le a+1$$
since $|x| \le a$, $|\cos y| \le 1$ and $|\cos x| \le 1$. By the mean value theorem (applied in $y$ for each fixed $x$), this means that $|f(x,y_1) - f(x,y_2)| \le (a+1) \, |y_1 - y_2|$.
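A randomized check of the constant (a sketch; the rectangle bounds $a=2$, $b=3$ are arbitrary choices):

    import math, random

    a, b = 2.0, 3.0
    f = lambda x, y: x*math.sin(y) + y*math.cos(x)
    worst = 0.0
    for _ in range(100_000):
        x  = random.uniform(-a, a)
        y1 = random.uniform(-b, b)
        y2 = random.uniform(-b, b)
        if y1 != y2:
            worst = max(worst, abs(f(x, y1) - f(x, y2)) / abs(y1 - y2))
    print(worst, a + 1)  # the worst observed ratio stays below a + 1 = 3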
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $X$, $Y$ are independent random variables and $E[X+Y]$ exists, then $E[X]$ and $E[Y]$ exist. I've been trying to show $E|X+Y| < \infty \Rightarrow E|X| < \infty$ by showing $E|X| \leq E|X+Y|$, but I'm stuck and cannot proceed from here.
Someone can help me, please?
-----[added]-----
$$E|X+Y| = \int\int|x+y|f_X f_Y\mathsf dx\mathsf dy = \int E|X+y|f_Y\mathsf dy < \infty.$$
So, can I say that $E|X+y| < \infty$ for almost every $y$ including $y=0$?
|
Your second approach works. If $X$ and $Y$ are independent and $|X+Y|$ is integrable, then
$$
E|X+Y| = \int|x+y|dP_{(X,Y)}(x,y)=\int\left[\int|x+y|dP_X(x)\right]dP_Y(y)
$$
by Fubini's theorem, and moreover the function $$y\mapsto\int|x+y|dP_X(x)=E|X+y|$$ is integrable with respect to $P_Y$, hence finite almost surely. Pick any $y$ for which $E|X+y|$ is finite and use
$$
|X|=|X+y-y|\le|X+y|+|y|$$
to conclude that $X$ is integrable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
CY 3-folds are $T^2 \times \mathbb{R}$ fibrations over the base $\mathbb{R}^3$. What does it mean? In this article at section 2. Toric geometry and Mirror Symmetry there is the statement that CY 3-folds are $T^2 \times \mathbb{R}$ fibrations over the base $\mathbb{R}^3$. Now, my questions refers to pages 3 and 4. Although I have some familiarity with fiber bundles I cannot figure out what this actually means.
Can you "dumb down" this for me? Explain what this fibration actually means (maybe using a familiar example) and tell me how they get this diagram of Fig. 1?
|
There is a map $F:\mathbb C^3 \to \mathbb R^3$ given by
$$F (z_1, z_2, z_3) = (|z_1|^2 - |z_3|^2, |z_2|^2 - |z_3|^2, Im(z_1z_2z_3))$$
Then one can think of $\mathbb C^3$ as a "fibration" of $\mathbb R^3$:
$$\mathbb C^3 = \bigcup_{x\in \mathbb R^3} F^{-1}(x).$$
One can check that when $x$ does not lie in the "degenerate locus" in figure 1, then $F^{-1}(x)$ is diffeomorphic to $T^2 \times \mathbb R$. Note also that all $F^{-1}(x)$ are special Lagrangian. Special Lagrangian fibration are of great interest in mirror symmetry.
It is not true that all CY 3 folds are of this form locally. I do not know if this is true for toric CY 3 folds though.
Note that "fibration" in this case is not the same as a fiber bundle, as some of the fibers are not homotopic to each other.
I am no expert in this field, but to get an introduction to special Lagrangian fibration, you may take a look at "Calabi-Yau Manifolds and Related Geometries"
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A level Central Limit Theorem question How many times must a fair die be rolled in order for there to be less than 1% chance that the mean of the scores differ from 3.5 by more than 0.1?
The answer is $n≥1936$, but how do you get to this answer?
|
Find mean and variance for one fair die. Let $X$ be the number showing on one fair die.
$$\mu = E(X) = (1 + 2 + \cdots + 6)/6 = [6(7)/2]/6 = 7/2 = 3.5,$$
where we have used the formula that the sum of the first $n$ integers
is $n(n+1)/2.$ Also,
$$E(X^2) = (1^2 + 2^2 + \cdots + 6^2)/6 = \frac{6(7)(13)/6}{6} = 91/6,$$
where we have used the formula that the sum of the first $n$ squares
is $n(n+1)(2n+1)/6.$ Then
$$ \sigma^2 = V(X) = E(X^2) - \mu^2 = \frac{91}{6} - \frac{49}{4}
= \frac{182 - 147}{12} = \frac{35}{12}.$$
There are, of course, other methods of finding $\mu$ and $\sigma^2.$
Find the mean and variance for the average of $n$ dice. For the average $\bar X_n$ on $n$ rolls of a die, we have
$$\mu_n = E[(X_1 + X_2 + \dots + X_n)/n] = n\mu/n = \mu.$$
and, because we assume the $n$ rolls of the die are independent,
$$\sigma^2_n = V[(X_1 + X_2 + \dots + X_n)/n] = n\sigma^2/n^2 = \sigma^2/n.$$
Assume the average is normally distributed. If $n$ is even moderately large
the Central Limit Theorem indicates that $\bar X_n$ has approximately
the normal distribution with mean $\mu_n = 7/2$ and variance
$\sigma_n^2 = \sigma^2/n = \frac{35}{12n}.$
Thus, upon standardizing (here, dividing by $\sigma_n$), we have
$$P\{-0.1 < \bar X_n - 7/2 < 0.1\} \approx
P\{-0.1\sqrt{12n/35} < Z < 0.1\sqrt{12n/35} \},$$
where $Z$ is standard normal.
Because we want the chance of a larger deviation to be below $.01 = 1\%,$ we want this probability to be $0.99,$ and we note that
$P\{-2.576 < Z < 2.576\} = 0.99.$
So in order to find the desired $n,$ we set $2.576 = 0.1\sqrt{12n/35},$
solve for $n$ and round up to the nearest integer. This gives the claimed answer $n = 1936$.
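As a check, here is the computation of $n$ in code (a sketch using scipy; `norm.ppf` is the standard normal quantile function):

    import math
    from scipy.stats import norm

    z = norm.ppf(1 - 0.01 / 2)             # ~2.5758
    n = math.ceil((z / 0.1)**2 * 35 / 12)
    print(z, n)                            # 2.5758..., 1936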
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Classifying a set closed under two unary operators Suppose we have a single square $S$ with distinct colors at its four vertices. We define two operators: the rotation operator $R$, which rotates the square $90^\circ$ clockwise, and the transpose operator $T$, which flips the square along its main diagonal (from the top left to bottom right vertex).
We apply $R$ and $T$ to the square repeatedly in any order we like and for as many times we like. We get a collection of squares, such that when either $R$ or $T$ is applied to any one element, we get back an element in the same collection.
The closest thing that comes into my mind as to what the above collection should be called, is "a group closed under $R$ and $T$ with generator $S$" - however the definition of a group only allows for one group law, and it also requires the group law to be a binary operator.
How should such a collection be classified?
|
$R$ and $T$ are actually elements of a group and the group they generate is the group of symmetries of the square. There are 8 elements of this group including the identity which leaves the entire square fixed. The binary operation here is function composition, which is to say, $R*T$ is the motion that results in applying $T$ to the square and then applying $R$ to the square.
This is an example of a group acting on a set. The elements of the group act on the square.
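To make this concrete, here is a small Python sketch enumerating the group generated by $R$ and $T$ as permutations of the four vertex positions (labelled $0,1,2,3$ clockwise from the top left; this encoding is one possible choice):

    def compose(p, q):                 # apply q first, then p
        return tuple(p[q[i]] for i in range(4))

    R = (3, 0, 1, 2)                   # rotate 90 degrees clockwise
    T = (0, 3, 2, 1)                   # flip across the main diagonal
    group = {(0, 1, 2, 3)}             # start from the identity

    while True:
        new = group | {compose(g, s) for g in group for s in (R, T)}
        if new == group:
            break
        group = new
    print(len(group))                  # 8: the symmetry group of the square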
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1219938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to compute $(a+1)^b\pmod{n}$ using $a^b\pmod{n}$? As we know, we can compute $a^b \pmod{n}$ efficiently using Right-to-left binary method Modular exponentiation.
Assume $b$ is a prime number.
Can we compute directly $(a+1)^b\pmod{n}$ using $a^b\pmod{n}$?
|
In general no, but yes in some special cases, e.g. if $\ n\mid a^k\!\mp\!(a\!+\!1)\,$ for small $\,k\,$ then
${\rm mod}\ n\!:\,\ \color{#c00}{a\!+\!1\,\equiv\, \pm a^k}\,\Rightarrow\, (\color{#c00}{a\!+\!1})^b \equiv (\color{#c00}{\pm a^k})^b\equiv (\pm1)^b (a^b)^k$
so you need only raise the known result $\,a^b\,$ to a small power $\,k\,$ to get the result.
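For a concrete instance, here is a small Python check (the values $n=7$, $a=2$, $k=2$ satisfy $n\mid a^k+a+1$, and $b=31$ is an arbitrary prime exponent):

    n, a, k, b = 7, 2, 2, 31
    assert (a**k + a + 1) % n == 0          # hypothesis: n | a^k + a + 1

    ab  = pow(a, b, n)                      # the known value a^b (mod n)
    lhs = pow(a + 1, b, n)                  # what we want
    rhs = (pow(-1, b, n) * pow(ab, k, n)) % n
    print(lhs, rhs)                         # equal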
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Infinite Product - Seems to telescope Evaluate
$$\left(1 + \frac{2}{3+1}\right)\left(1 + \frac{2}{3^2 + 1}\right)\left(1 + \frac{2}{3^3 + 1}\right)\cdots$$
It looks like this product telescopes: the denominators cancel out (except the last one) and the numerators all become 3.
What would my answer be?
|
we have the following identity (which affirms that the product telescopes):
$$\left (1+\frac{2}{3^n+1}\right)=3\cdot\frac{3^{n-1}+1}{3^n+1}=\frac{1+3^{-(n-1)}}{1+3^{-n}}$$
(as noted in the comment by Thomas Andrews) and as a result:
$$\prod_{k=1}^n \left (1+\frac{2}{3^k+1}\right)=\frac{2\cdot 3^n}{3^n+1}=\frac{2}{1+3^{-n}}$$
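An exact check of the closed form with Python's `fractions` (the cutoff $k\le 20$ is arbitrary):

    from fractions import Fraction

    prod = Fraction(1)
    for k in range(1, 21):
        prod *= 1 + Fraction(2, 3**k + 1)
        assert prod == Fraction(2 * 3**k, 3**k + 1)
    print(float(prod))  # ~2.0, approaching the limit 2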
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find circumcenter when distance between ABC points of triangle with two points's ratio given The complete problem is:
I have three points $A,B,C$, each of which has ratio of distances from the points $(1,0)$ and $(-1,0)$ equal to $1:3$. I need the coordinates of the circumcenter of the triangle formed by the points $A$, $B$ and $C$.
Can anybody tell me how to proceed?
|
I'd proceed like this:
*
*Prove that, in 2D, the set of all points defined by a fixed ratio
$\rho\neq1$ of their distances to two distinct given points
(which I will call poles for reasons not explained here)
is a circle $K$.
(I'd use equations with cartesian coordinates for that,
but if you find a coordinate-free reasoning, that would be interesting.)
*Conclude that the circumcenter of $\triangle ABC$ is the (yet unknown)
center $M$ of that circle $K$.
*Find two points $D,E$ on the line through the poles (here: the $x$-axis)
with the same given distance ratio $\rho$.
*Argue with symmetry and conclude that $M$ must be the midpoint of the
segment connecting $D$ and $E$. In particular, $M$ lies on the $x$-axis.
More could be said on this ratio-and-circle topic, but
I have kept that to the comments section.
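Here is a sketch of step 1 in coordinates for the configuration of this problem (poles $(1,0)$ and $(-1,0)$, ratio $1:3$), assuming sympy:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    # distance ratio 1:3 means 9*dist((x,y),(1,0))^2 = dist((x,y),(-1,0))^2
    eq = sp.expand(9*((x - 1)**2 + y**2) - ((x + 1)**2 + y**2))
    print(sp.expand(eq / 8))  # x**2 + y**2 - 5*x/2 + 1

    # completing the square: (x - 5/4)^2 + y^2 = 9/16, so the circumcenter is
    # M = (5/4, 0), the midpoint of D = (1/2, 0) and E = (2, 0)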
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many arrangements of letters in REPETITION are there with the first E occurring before the first T? The question is:
How many arrangements of letters in REPETITION are there with the first E occurring before the first T?
According the book, the answer is $3 \cdot \frac{10!}{2!4!}$, but I'm having trouble understanding why this is correct.
I figured there are $4$ positions for the $2$ E's and $2$ T's, and one of the E's must be placed before any of the T's. $P(3;2,1)$ -> remaining $3$ positions which can be filled with $1$ E and $2$ T's. -> $3!/2!$ -> $3$. So, I have this part correct I believe.
Where I'm having trouble is figuring out why $\frac{10!}{2!4!}$ is correct. Is it this $P(10; 4, 2, 1, 1, 1, 1)$? What are the different types of letters represented here?
|
First we'll arrange the letters like this:
$$\text{E E T T}\quad\text{R P I I O N}$$
Now replace $\text{EETT}$ with $****$:
$$\text{* * * *}\quad\text{R P I I O N}$$
The number of arrangements is $$\frac{10!}{4!2!}$$ because four symbols are the same, two other symbols are the same, and all the rest are different.
Now consider any one of those arrangements. We can replace the asterisks $****$, from left to right, with $\text{EETT}$ or $\text{ETET}$ or $\text{ETTE}$. There are three possibilities. This results in the answer given in the book: $$\frac{10!}{4!2!}\cdot3$$
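A brute-force confirmation (a sketch using sympy's `multiset_permutations`; it enumerates all $10!/(2!)^3 = 453600$ arrangements, so it takes a moment):

    from sympy.utilities.iterables import multiset_permutations

    def first_E_before_first_T(word):
        for ch in word:
            if ch in 'ET':
                return ch == 'E'

    count = sum(first_E_before_first_T(w)
                for w in multiset_permutations('REPETITION'))
    print(count)  # 226800 = 3 * 10!/(4!2!)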
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
The probability that the ratio of two independent standard normal variables is less than $1$ Let the independent random variables $X,Y\sim N(0,1)$. Prove that $P(X/Y < 1) = 3/4. $
Could anyone help me prove this analytically? Thanks.
Progress: My first thought was to integrate the joint density function: $\dfrac{e^{-\frac{x^2}{2}-\frac{y^2}{2}}}{2\pi}$ but I'm not sure where to go from here.
|
It is well-known that the random variable $X/Y$ has the standard Cauchy density $\frac{1}{\pi(1+x^2)}$ for $-\infty<x<\infty$. Thus
$P(X/Y<1)=\int_{-\infty}^{1}\frac{1}{\pi(1+x^2)}dx=0.5+\int_{0}^{1}\frac{1}{\pi(1+x^2)}dx$
and so
$P(X/Y<1)=0.5+\frac{1}{\pi}\arctan(1)=0.5+0.25=0.75$.
Note: A much simpler way is to consider the random vector $(X,Y)$ on the plane; by the rotational symmetry of its distribution, each conditional probability below equals $\frac12+\frac14$. Then,
$P(X/Y<1)=P(X/Y<1\mid Y>0)\times\frac12+P(X/Y<1\mid Y<0)\times\frac12\\
= (\frac12+\frac14) \times \frac12+ (\frac12+\frac14)\times\frac12=\frac34.$
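A Monte Carlo check (the sample size is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal(10_000_000)
    Y = rng.standard_normal(10_000_000)
    print(np.mean(X / Y < 1))  # ~0.75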
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Has anyone seen this combinatorial identity involving the Bernoulli and Stirling numbers? Does anyone know a nice (combinatorial?) proof and/or reference for the following identity?
$$\left( \frac{\alpha}{1 - e^{-\alpha}} \right)^{n+1} \equiv \sum_{j=0}^n \frac{(n-j)!}{n!} |s(n+1, n+1-j)| \alpha^j \bmod \alpha^{n+1}.$$
Here
$$\frac{\alpha}{1 - e^{-\alpha}} = \sum_{i=0}^{\infty} (-1)^i \frac{B_i \alpha^i}{i!}$$
is one of the generating functions for the Bernoulli numbers, and $|s(n+1, n+1-j)|$ is an unsigned Stirling number of the first kind.
Motivation (feel free to ignore): this identity comes from two different computations of the Todd class of $\mathbb{CP}^n$. One uses the Euler sequence. The other involves computing the holomorphic Euler characteristic $\chi(\mathcal{O}(k))$ of the line bundles $\mathcal{O}(k)$ using that the higher cohomology of $\mathcal{O}(k)$ vanishes for $k$ large enough and that for $k \ge 0$, $H^0(\mathcal{O}(k))$ is the dimension of the space of homogeneous polynomials of degree $k$ in $n+1$ variables, which is ${k+n \choose n}$, then working out what the Todd class must be using Hirzebruch-Riemann-Roch. This is a bit indirect to say the least, and I have no idea how to convert it into combinatorics.
|
The coefficients $B_j^{(r)}$ defined by$$\sum_{j = 0}^\infty B_j^{(r)} {{x^j}\over{j!}} = \left({x\over{e^x - 1}}\right)^r$$are usually called higher order Bernoulli numbers, so your identity is a formula for $B_j^{(r)}$ for $j < r$.
Let $c(n, k) = |s(n, k)| = (-1)^{n - k}s(n, k)$. This is a fairly standard notation, used, for example, in Stanley's "Enumerative Combinatorics". Then$${{(-\log(1 - x))^k}\over{k!}} = \sum_{n = k}^\infty c(n, k) {{x^n}\over{n!}}.$$Differentiating this equation gives$${{(-\log(1 - x))^k}\over{(1 - x)k!}} = \sum_{n = k}^\infty c(n + 1, k + 1) {{x^n}\over{n!}}.\tag*{$(1)$}$$For any formal Laurent series $f = f(\alpha)$ we define the residue of $f$, denoted $\text{res}\,f$, to be the coefficient of $\alpha^{-1}$ in $f$. So the coefficient of $\alpha^k$ in $f$ is $\text{res}\,f/\alpha^{j + 1}$.
We will apply Jacobi's change of variables formula for residues, which is a form of the Lagrange inversion formula. See e.g. Gessel's survey of Lagrange inversion at https://arxiv.org/abs/1609.05988, Theorem 4.1.1.
Suppose that $f(\alpha)$ is a formal Laurent series in $\alpha$ and that $g(\alpha) = g_1 \alpha + g_2\alpha^2 + \ldots$ is a formal power series in $\alpha$ with $g_1 \neq 0$. Then Jacobi's formula says that$$\text{res}\,f(\alpha) = \text{res}\,f(g(\alpha))g'(\alpha).$$We apply Jacobi's formula with$$f(\alpha) = \left({\alpha\over{1 - e^{-\alpha}}}\right)^{n + 1} \alpha^{-j - 1}$$and$$g(\alpha) = -\log(1 - \alpha).$$Then the coefficient of $\alpha^j$ in$$\left({\alpha\over{1 - e^{-\alpha}}}\right)^{n + 1}$$is\begin{align*}
\text{res}\,f(\alpha) & = \text{res}\,f(g(\alpha))g'(\alpha) \\ & = \text{res}\, {{(-\log(1 - \alpha)/\alpha)^{n + 1}}\over{(-\log(1 - \alpha))^{j + 1}(1 - \alpha)}} \\ & = \text{res}\,{{(-\log(1 - \alpha))^{n - j}}\over{\alpha^{n + 1}(1 - \alpha)}}.
\end{align*}This is the coefficient of $\alpha^n$ in$${{(-\log( 1- \alpha))^{n - j}}\over{1 - \alpha}}$$which by $(1)$ is$${{(n - j)!}\over{n!}} c(n + 1, n - j + 1).$$
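For the skeptical reader, here is a direct series check of the identity for small $n$ (a sketch assuming sympy's `stirling` with `kind=1, signed=False` for the unsigned Stirling numbers $c(n,k)$):

    import sympy as sp
    from sympy.functions.combinatorial.numbers import stirling

    a = sp.symbols('a')
    for n in range(1, 6):
        ser = sp.series((a / (1 - sp.exp(-a)))**(n + 1), a, 0, n + 1).removeO()
        for j in range(n + 1):
            lhs = ser.coeff(a, j)
            rhs = sp.factorial(n - j) / sp.factorial(n) \
                  * stirling(n + 1, n + 1 - j, kind=1, signed=False)
            assert sp.simplify(lhs - rhs) == 0
    print("identity verified for n = 1..5")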
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 1,
"answer_id": 0
}
|
limits involving a piecewise function, Prove that if $c \ne 2$, then f does not have a limit at $x = c$. $$f(x) = \begin{cases} (x-2)^3 & \text{if $x$ is rational } \\
(2-x) & \text{if $x$ is irrational }
\end{cases}$$
(i) Prove that if $c \ne 2$, then f does not have a limit at $x = c$.
(ii) Prove that $\lim_{x\to2} f (x)$ exists.
Hi all, i'm not very sure how to approach this question. For part i), i think i'm suppose to find a rational sequence $x_n$ and an irrational sequence $y_n$ such that $x_n \rightarrow c$ & $y_n\rightarrow c$. But $\lim_{n\to \infty}f(x_n) \ne \lim_{n\to \infty}f(y_n)$. Would letting $x_n = \frac{1}{n}$ and $y_n=\frac{1}{\sqrt{n}}$ suffice?
I'm clueless as to how to answer part ii). Would appreciate any hints or advice. Thanks in advance.
|
For part $(i)$ you're on the right track, but note that we are concerned with $x_n, y_n$ converging to any arbitrary $c$. Note that your $x_n, y_n$ both converge to $0$ (also, $y_n$ is not necessarily even a sequence of irrationals: consider $n = 4$). I'm guessing you are to come up with the sequences yourself, and I leave but a hint for you here:
*
*If $c$ is rational consider adding to $c$ some rational sequence that converges to $0$ i.e. find an $a_n$ such that $\{a_n\} \subset \mathbb{Q}$ and $a_n \to 0$.
*If $c$ is irrational consider first a sequence of decimal approximations for $x_n$ (that is the $n^{th}$ term of $x_n$ has $n$ decimal places expanded out) and for $y_n$ simply consider adding on the same sequence $a_n$ as above, will $c + a_n$ be irrational always?
Once you have done this then note that you will have $\lim_{n \to \infty} f(x_n) = (c - 2)^3$ and $\lim_{n \to \infty} f(y_n) = (2 - c)$. Now note $f$ has a limit at $c$ iff the limit is the same no matter how you approach it (meaning, no matter what sequence you use to approach $c$). If $c \neq 2$ what do you get above?
As for $(ii)$, we can use the sequential characterization of continuity. That is, $f : \mathbb{R} \to \mathbb{R}$ is continuous at $c$ iff for any $c_n \to c$ we have
$$
\lim_{n \to \infty} f(c_n) = f(c)
$$
Now theres three possible types of sequences when talking about rational and irrational sequences:
*
*Completely rational sequence (all elements of the sequence are themselves rational)
*Completely irrational sequence (all elements of the sequence are themselves irrational)
*Mixed: Some elements are rational, some are irrational
Now when considering the case of $c = 2$ what is the limit of a completely rational sequence? What about a completely irrational sequence? As for this third category of sequences, the proof really depends on the level of rigor your professor/teacher wants. If you clue me in on how precise you want this to be I can help you out.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Binomial Expansion without infinite series The variables here are $a$ and $b$. The question is to simplify $\sqrt [3] { (\frac{a}{\sqrt{b^3} })\times {\frac {\sqrt{a^6b^2}}{a} }+ \frac{a}{b^2}}$
So these are my steps
=$\left(\frac {a^3b} {b^\frac {3} {2} }+ \frac {a} {b^2}\right)^\frac {1} {3}$
=$\left({a^3b^{1-\frac {3} {2}} } + \frac {a} {b^2}\right)^\frac {1} {3}$
=$\left({a^3b^\frac {-1} {2} } + \frac {a} {b^2}\right)^\frac {1} {3}$
Now the real question is: how do I simplify this? Denoting the two simplified terms by $x$ and $y$, I get $(x+y)^\frac {1} {3}$, but my family said that it expands to this:
$(x+y)^\frac {1} {3}=x^\frac {1} {3}+y^\frac {1} {3}$
It should not be this, because $(x+y)^2=x^2+2xy+y^2$.
The thing is, I noticed that $(x+y)^n$ has a pattern called Pascal's triangle. But this can only be applied when $n\geq 1$ is an integer.
The problem with the binomial theorem is that the expansion is infinite, which I do not want. I need to get an answer which is not an infinite series and does not involve any roots. How do I simplify this?
I have checked Isaac Newton's generalized binomial theorem, but it gives an infinite series.
I have searched online and all I see is an infinite answer: Link
If it is impossible to simplify without an infinite series, is there another method to solve the question, $\sqrt [3] { (\frac{a}{\sqrt{b^3} })\times {\frac {\sqrt{a^6b^2}}{a} }+ \frac{a}{b^2}}$?
|
In general,
$$(x+y)^{1/3}\ne x^{1/3}+y^{1/3}$$
For example, take $x=y=1$. If this weren't true, we'd have $\sqrt[3]2=2$.
In fact, if $x$ and $y$ are positive, we have:
$$(x+y)^{1/3}<x^{1/3}+y^{1/3}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Calculation of real root values of $x$ in $\sqrt{x+1}-\sqrt{x-1}=\sqrt{4x-1}.$
Calculation of the real roots $x$ of $ y(x)=\sqrt{x+1}-\sqrt{x-1}-\sqrt{4x-1} $
$\bf{My\; Solution::}$ Here domain of equation is $\displaystyle x\geq 1$. So squaring both sides we get
$\displaystyle (x+1)+(x-1)-2\sqrt{x^2-1}=(4x-1)$.
$\displaystyle (1-2x)^2=4(x^2-1)\Rightarrow 1+4x^2-4x=4x^2-4\Rightarrow x=\frac{5}{4}.$
But when we put $\displaystyle x = \frac{5}{4}\;,$ We get $\displaystyle \frac{3}{2}-\frac{1}{2}=2\Rightarrow 1=2.$(False.)
So we get no solution.
My Question is: Can we solve the above question by using a comparison of expressions?
Something like $\sqrt{x+1}<\sqrt{x-1}+\sqrt{4x-1}\; \forall x\geq 1?$
If that way possible, please help me solve it. Thanks.
|
For $x\ge1$ we have $$\sqrt{4x-1}\ge \sqrt {3x} $$
and $$\sqrt{x+1}\le \sqrt {2x}$$
hence
$$\sqrt{x+1}-\sqrt{x-1}\le \sqrt{2x}<\sqrt{3x}\le\sqrt{4x-1} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
The number of zeros of a polynomial that almost changes signs Let $p$ be a polynomial, and let $x_0, x_1, \dots, x_n$ be distinct numbers in the interval $[-1, 1]$, listed in increasing order, for which the following holds:
$$
(-1)^ip(x_i) \geq 0,\hspace{1cm}i \in \{0, 1, \dots, n\}
$$
Is it the case that $p$ has no fewer than $n$ zeros, either distinct or coincident? (If the inequality were strict, the answer would be "yes", by the intermediate value theorem.)
|
Counting the zeros with multiplicity, the answer is still yes. If $p$ is a (real) polynomial, and $x_0 < x_1 < \dotsc < x_n$ are points such that $(-1)^i p(x_i) \geqslant 0$ for $0 \leqslant i \leqslant n$, then $p$ has at least $n$ zeros in the interval $[x_0,x_n]$ counted with multiplicity.
We prove that by induction on $n$, modifying the proof for the case of strict inequalities. The important fact is that at a simple zero, $p$ changes its sign. That allows to deduce the existence of further zeros or multiple zeros in certain configurations.
The base case $n = 1$ is direct, either (at least) one of $p(x_0)$ and $p(x_1)$ is $0$, or we have $p(x_0) > 0 > p(x_1)$ and the intermediate value theorem asserts the existence of a zero in $(x_0,x_1)$.
For the induction step, we have $x_0 < x_1 < \dotsc < x_n < x_{n+1}$, and the induction hypothesis asserts the existence of at least $n$ zeros of $p$ in the interval $[x_0,x_n]$. If $p(x_{n+1}) = 0$, we have found our $n+1^{\text{st}}$ zero and we're done. So in the following, we consider the case $(-1)^{n+1}p(x_{n+1}) > 0$. If $(-1)^np(x_n) > 0$, then $p$ has a further zero strictly between $x_n$ and $x_{n+1}$, and again we have our $n+1^{\text{st}}$ zero. If $p(x_0) = p(x_1) = \dotsc = p(x_n) = 0$, these are $n+1$ zeros, and we are done again.
Finally, we are looking at the situation where $(-1)^kp(x_k) > 0$, $(-1)^{n+1}p(x_{n+1}) > 0$ and $p(x_{k+1}) = \dotsc = p(x_n) = 0$. By hypothesis, we have at least $k$ zeros of $p$ in the interval $[x_0,x_k)$, and by assumption, we have $n-k$ distinct zeros of $p$ in the interval $(x_k,x_{n+1})$. We must see that there is an additional zero in that interval, or (at least) one of the $x_i$ is a multiple zero. If the $x_i,\; k < i \leqslant n$ were all simple zeros, and there were no other zero of $p$ in the interval $(x_k,x_{n+1})$, then there would be precisely $n-k$ sign changes of $p$ between $x_k$ and $x_{n+1}$, hence
$$(-1)^{n-k}p(x_k)p(x_{n+1}) > 0.\tag{A}$$
But in the situation we are considering, we have
$$(-1)^kp(x_k) > 0 \land (-1)^{n+1}p(x_{n+1}) > 0,$$
which implies
$$(-1)^{n-k+1}p(x_k)p(x_{n+1}) = \bigl((-1)^kp(x_k)\bigr)\cdot \bigl((-1)^{n+1}p(x_{n+1})\bigr) > 0.\tag{B}$$
But $(\text{B})$ contradicts $(\text{A})$, so we conclude the existence of a further zero of $p$ in $(x_k,x_{n+1})$ (either distinct from the $x_i$ or in the form of a multiple zero at one of the $x_i$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why $\max \left\{ {{x^T}Ax:x \in {R^n},{x^T}x = 1} \right\}$ is the largest real eigenvalue of A? Let $A \in {M_n}(\mathbb R)$ and suppose $A$ is symmetric. Why is $\max \left\{ {{x^T}Ax:x \in {\mathbb R^n},{x^T}x = 1} \right\}$ the largest real eigenvalue of $A$?
|
So, here's an intuitive and somewhat geometric explaination of what is happening.
$1:$ For $\lambda$ to be an eigenvalue, we need to be able to find a vector $v$ such that $Av=\lambda v$
$2:$ We observe that $v$ is an eigenvector iff $v\over{||v||}$ is an eigenvector. So, we reduce the search space to the unit sphere.ie., vectors with norm $=1$. So, now we have to find $\lambda$ and $v$ such that $Av=\lambda v$ and $||v||=1$
$3:$ Since we are talking about symmetric matrices existence of eigenvalues is guaranteed. Our claim now is that the largest eigenvalue is given by $max_{||x||=1} x^TAx$.
$4:$ Observe here that $y=Ax$ is a vector. It need not lie on the unit sphere, but we are projecting $y$ onto a vector $x$ in the unit sphere by taking $x^Ty$.
$5:$ When is this maximized? By the definition of the standard dot product, if $\theta$ is the angle between $x$ and $y$, then $x^Ty=\|x\|\,\|y\|\cos(\theta)=\|y\|\cos(\theta)$. Here, for a given $x$, this would be maximized when $\cos(\theta)=1$, i.e. when $Ax$ is a positive multiple of $x$, which happens exactly at eigenvectors (with positive eigenvalue). Since $A$ is symmetric, it is diagonalizable and the eigenvectors $v_1,\dotsc,v_n$ can be taken to form an orthonormal basis, so we can write $x=\alpha_1v_1 + \dotsb + \alpha_nv_n$ with $\sum_i\alpha_i^2=1$. Denote the largest eigenvalue by $\lambda_1$ and the corresponding eigenvector by $v_1$. Then $Ax=\alpha_1\lambda_1v_1 + \dotsb + \alpha_n\lambda_nv_n$, so $$x^TAx=\sum_i \lambda_i\alpha_i^2 \le \lambda_1\sum_i \alpha_i^2 = \lambda_1,$$ with equality at $x=v_1$. So vectors which are not eigenvectors for $\lambda_1$ do not exceed this value.
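A numerical illustration of the claim (a sketch with numpy; random sampling of unit vectors only approaches the maximum from below):

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((5, 5))
    A = (M + M.T) / 2                         # a random symmetric matrix

    x = rng.standard_normal((5, 100_000))
    x /= np.linalg.norm(x, axis=0)            # random unit vectors as columns
    quad = np.einsum('ij,ik,kj->j', x, A, x)  # x_j^T A x_j for each column
    print(quad.max(), np.linalg.eigvalsh(A).max())  # close, and max <= lambda_1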
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1220995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Proving real polynomials of degree greater than or equal to 3 are reducible this is the proof I'm given:
Example 1.20. Let $f \in \mathbb{R}[x]$ and suppose that $\deg(f) \geq 3$. Then $f$ is reducible.
Proof. By the Fundamental theorem of algebra there are $\lambda_1, \dots, \lambda_n \in \mathbb{C}$ such that $$f (x) = (x - \lambda_1) \dots (x - \lambda_n).$$
Note that $0 = \overline{f(\lambda_j)} = f(\bar\lambda_j)$ since the coefficients of $f$ are real. Thus if $\lambda_j \in \mathbb{C} \setminus \mathbb{R}$ there is $k$ such that $\lambda_k = \bar\lambda_j$. Moreover $$(x - \lambda_j)(x - \bar\lambda_j) = x^2 - (\lambda_j + \bar\lambda_j) x + \lambda_j \bar\lambda_j = x^2 - 2 \Re(\lambda_j)\, x + |\lambda_j|^2 \in \mathbb{R}[x].$$ Thus $f$ factorises into real polynomials of degree $1$ (corresponding to $\lambda_j \in \mathbb R$) and $2$ (corresponding to a pair $\lambda_j, \bar\lambda_j \in \mathbb C$). ❑
What I don't understand is the step "$0 = \overline{f(\lambda_j)} = f(\overline{\lambda_j})$ since the coefficients of $f$ are real."
Firstly, why do we need this step for the rest of the proof?
Secondly, I don't follow how $f$ having real coefficients gives $0 = \overline{f(\lambda_j)} = f(\overline{\lambda_j})$.
Thanks
|
For every 2 complex numbers $a, b \in \mathbb{C}$ it's true that
$$\overline{a+b} = \overline{a} + \overline{b}$$
$$\overline{ab} = \overline{a} \overline{b}$$
And of course $a = \overline{a}$ for every $a \in \mathbb{R}$. So if $f(x) \in \mathbb{R}[x]$, then $\overline{f(x)} = f(\overline x)$.
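A symbolic check of this property with sympy (the particular polynomial is an arbitrary real-coefficient example):

    import sympy as sp

    z = sp.symbols('z')
    f = lambda w: w**3 + 2*w + 5   # any polynomial with real coefficients
    # conjugate(f(z)) - f(conjugate(z)) simplifies to 0
    print(sp.simplify(sp.conjugate(f(z)) - f(sp.conjugate(z))))  # 0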
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Floating point number,Mantissa,Exponent In this computer, numbers are stored in $12$-bits. We will also assume
that for a floating point (real) number, $6$ bits of these bits are reserved for
the mantissa (or significand) with $2^{k-1}-1$ as the exponent bias (where
$k$ is the number of bits for the characteristic).
$011100100110010111110011$
What pair of floating point numbers could be represented by these
$24$-bits?
I have gone this far:
As described above, each number occupies $12$ bits, so we get the first number
$011100100110$
The first bit is $0$, so it is positive, and
the mantissa will be $100110$.
The exponent field will be $11100_2=28$,
so my unbiased exponent is $28-15=13$, giving a scale factor of $2^{13}$.
How do I find the floating point number from here?
|
Usually the mantissa is considered to have a binary point after the first bit, so your mantissa would be $1.00110_2=\frac{19}{16}=1.1875_{10}$. Sometimes a leading $1$ is assumed instead, so that all six stored bits are fractional; your mantissa would then be $(1).100110_2=\frac{51}{32}=1.59375_{10}$. This gives one more bit of precision. To find the exponent, you subtract the offset $2^{k-1}-1=15$ from the stored value, as you did. Can you combine the two to get the value?
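For completeness, a decoding sketch under the first convention above (binary point after the first stored mantissa bit, no assumed leading $1$); the layout of $1$ sign bit, $5$ exponent bits with bias $15$, and $6$ mantissa bits follows the question, but the convention itself is an assumption, so treat the printed values as illustrative:

    def decode(bits):
        sign = -1 if bits[0] == '1' else 1
        expo = int(bits[1:6], 2) - 15                  # remove the bias
        mant = int(bits[6]) + int(bits[7:], 2) / 2**5  # point after first bit
        return sign * mant * 2**expo

    print(decode('011100100110'))  # 1.1875 * 2**13 = 9728.0
    print(decode('010111110011'))  # 1.59375 * 2**8 = 408.0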
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Let $\pi$ be a prime element in $\Bbb{Z}[i]$. Then $N(\pi)=2$ or $N(\pi)=p$ s.t. $p$ is prime and $p\equiv 1\pmod 4$
Let $\pi$ denote a prime element in $\Bbb{Z}[i]$ such that $\pi\not\in \Bbb{Z},i\Bbb{Z}$. Prove that $N(\pi)=2$ or $N(\pi)=p$, where $p$ is a prime number $\equiv 1\pmod4$. Give a complete classification of the prime elements of $\Bbb{Z}[i]$ using the prime numbers in $\Bbb{Z}$.
This exact question has been asked here but I do not understand the answers given, so I will ask it again in hope of further explanation.
Here's everything I know that will probably help me with the problem:
*
*We know that $\pi=a+bi$ where $a,b\in \Bbb{Z}\setminus \{ 0\}$.
*Fermat's Two Square theorem: A prime number $p\equiv 1 \mod4$
satisfies $p=a^2+b^2$ where $a\neq b$ and $a,b\in \Bbb{N}\setminus
\{0\}$.
*$\pi$ is irreducible in $\Bbb{Z}[i]$
*If $\pi=xy$ then $\pi \mid x$ or $\pi \mid y$
*If $N(\pi)=2$ then $a=\pm 1, b=\pm 1$.
*If $N(\pi)=p$ where prime $p\equiv 1\pmod 4$ then we know $p=a+bi$ for some $a,b\in \Bbb{Z}\setminus \{0\}$ by Fermat's theorem.
I can't figure out where to start. I was thinking of starting with this:
Let $x,y\in \Bbb{Z}[i]$ such that $x=x_1+x_2 i$ and $y=y_1+y_2i$. Then $\pi\mid x$ or $\pi\mid y$. Suppose $\pi\mid x$.....
Can I have a hint on how to begin this?
|
It's easy to see that $1+i$ and $1-i$ are irreducible in $\mathbb{Z}[i]$ and the only elements $x\in\mathbb{Z}[i]$ with $N(x)=2$ are those two, up to multiplication by invertible elements (that is, $1$, $-1$, $i$ and $-i$). In particular $2=-i(1+i)^2$ is not irreducible in $\mathbb{Z}[i]$.
Suppose $p>2$ is a prime integer such that $p\equiv 1\pmod{4}$. By Fermat's two-square theorem, $p=a^2+b^2$ for some integers $a,b$. Then $a+bi$ and $a-bi$ are not associates in $\mathbb{Z}[i]$ and they are irreducible.
Indeed, suppose $a+bi=xy$ for some $x,y\in \mathbb{Z}[i]$. Then
$$
N(x)N(y)=N(a+bi)=a^2+b^2=p
$$
and so either $N(x)=1$ or $N(y)=1$, which is equivalent to the fact that either $x$ or $y$ is invertible.
The same holds of course for $a-bi$.
Note that if $z\in\mathbb{Z}[i]$ is irreducible, then $\bar{z}$ is irreducible as well, because conjugation is an automorphism of $\mathbb{Z}[i]$.
Now we do the converse.
Suppose $z\in\mathbb{Z}[i]$ is irreducible. Let $p\mid N(z)=z\bar{z}$, where $p$ is a prime integer. If $p$ is irreducible also in $\mathbb{Z}[i]$, then $p\mid z$ or $p\mid\bar{z}$, but this is the same because $p\in\mathbb{Z}$. Since $z$ is irreducible, then $z=pu$, for some invertible $u$.
If $p$ is not irreducible in $\mathbb{Z}[i]$, then $p=xy$, where neither $x$ nor $y$ is invertible, and so $p^2=N(p)=N(x)N(y)$ forces $N(x)=N(y)=p$. Therefore both $x$ and $y$ are irreducible, as seen before. Since $x\mid z\bar{z}=N(z)$ and $x$ is irreducible (hence prime, as $\mathbb{Z}[i]$ is a unique factorization domain), it's not restrictive to assume that $x\mid z$. Therefore $x=zu$ for some invertible $u$, so $\bar{x}=\bar{z}\bar{u}$ and $x\bar{x}=z\bar{z}$. Similarly, $y\bar{y}=z\bar{z}$, so
$$
p^2=N(p)=N(x)N(y)=x\bar{x}y\bar{y}
$$
and it follows that $z\bar{z}=p$. Thus $N(z)$ is a sum of two squares in $\mathbb{Z}$, so either $N(z)=2$ or $N(z)\equiv1\pmod{4}$.
A prime integer $p$ such that $p\equiv 3\pmod{4}$ is irreducible in $\mathbb{Z}[i]$. Indeed, if $p=xy$, then $p^2=N(x)N(y)$ with $N(x)>1$ and $N(y)>1$, which forces $N(x)=p$. But then $p$ would be the sum of two squares in $\mathbb{Z}$, which is impossible.
In conclusion, the prime elements in $\mathbb{Z}[i]$ are
*
*$a+bi$ and $a-bi$ where $a^2+b^2$ is a prime integer (so $a^2+b^2=2$ or $a^2+b^2\equiv 1\pmod{4}$)
*$p$, where $p$ is a prime integer with $p\equiv 3\pmod{4}$
and all associates thereof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why does this approximation of square roots using derivatives work this way? I came up with this way to estimate square roots by hand, but part of it doesn't seem to make sense.
Consider how $f(n) = \sqrt{n^2+\varepsilon} \approx n$ when $\varepsilon$ is small. Therefore, using the tangent line with slope $f'(n) = \frac{n}{\sqrt{n^2+\varepsilon}}$ to approximate $f$ gives
$f(n) \approx n +\varepsilon\cdot\frac{n}{\sqrt{n^2+\varepsilon}}$
However, if the original approximation is substituted in the denominator, this gives
$f(n) \approx n + \frac{\varepsilon n}{n} = n + \varepsilon$
Which obviously makes no sense. However, if the chain rule is skipped while taking the derivative, the approximation becomes
$f(n) \approx n + \frac{\varepsilon}{2n}$
Which is a good approximation. For example, when $\varepsilon = 1, \sqrt{n^2+1} \approx n+\frac{1}{2n},$ and when $\varepsilon = n, \sqrt{n^2+n} \approx n + \frac{1}{2}$
Why does this work only when the derivative is done incorrectly? I feel like I am missing something obvious, but I can't see why this works the way it does.
|
This seems like an improper application of the tangent line approximation. The usual approximation is
$$f(x+\varepsilon)\approx f(x)+\varepsilon f'(x)\tag{*}
$$
for $\varepsilon$ small. Your choice of $f$ doesn't match up with $(*)$. But using $f(x)=\sqrt x$ gives
$$
\sqrt{x+\varepsilon}=f(x+\varepsilon)\approx\sqrt x+{\varepsilon\over{2\sqrt x}}$$
and plugging $x=n^2$ yields
$$
\sqrt{n^2+\varepsilon}\approx n+\frac\varepsilon{2n},$$
the approximation you're seeking.
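A quick numerical comparison of the two formulas ($n$ and $\varepsilon$ are arbitrary choices):

    import math

    n, eps = 10, 1
    print(math.sqrt(n**2 + eps),  # 10.0498756...
          n + eps / (2 * n),      # 10.05  (the good approximation)
          n + eps)                # 11     (the incorrect one)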
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find basis and dimension of a subspace Problem: Let V be the subspace of all 2x2 matrices over R, and W the subspace spanned by:
\begin{bmatrix}
1 & -5 \\
-4 & 2 \\
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
-1 & 5 \\
\end{bmatrix}
\begin{bmatrix}
2 & -4 \\
-5 & 7 \\
\end{bmatrix}
\begin{bmatrix}
1 & -7 \\
-5 & 1 \\
\end{bmatrix}
Q: Find a basis and the dimension of W.
What I've done so far:
-I'll refer to the matrices as W1, W2, W3 and W4 (top-down). I've noticed so far that W3 = W1 + W2; does that mean that span(W1,W2,W3,W4) = span(W1,W2,W4)?
I know that a way to find a basis is by reducing a matrix of coefficients to echelon form, but how do I represent those matrices in a coefficient matrix?
|
You can consider each matrix to be a vector in $\mathbb{R}^4$ by flattening it; row reduce the matrix whose rows are these four vectors,
$$\begin{pmatrix} 1 & -5 & -4 & 2 \\ 1 & 1 & -1 & 5 \\ 2 & -4 & -5 & 7 \\ 1 & -7 & -5 & 1 \end{pmatrix}.$$
The only pivots are in the first two columns, so the first two matrices are linearly independent and form a basis for the subspace (hence $\dim W=2$). The last two are linear combinations of the first: notice that $M_3=M_1+M_2$, and $M_4=\frac{4}{3} M_1-\frac{1}{3}M_2$
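A quick numerical confirmation of the dimension (a sketch using numpy's rank computation):

    import numpy as np

    W = np.array([[1, -5, -4, 2],
                  [1,  1, -1, 5],
                  [2, -4, -5, 7],
                  [1, -7, -5, 1]])
    print(np.linalg.matrix_rank(W))  # 2, so dim W = 2 and {M1, M2} is a basis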
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
A finite abelian group has order $p^n$, where $p$ is prime, if and only if the order of every element of $G$ is a power of $p$ Suppose that G is a finite Abelian group. Prove that G has order $p^n$, where p is prime, if and only if the order of every element of G is a power of p.
I tried the following route, but got stuck. Using the fundamental theorem of finite Abelian groups, the problem reduces to proving Cauchy's theorem for a cyclic abelian group. If $G$ is a cyclic group and $p$ divides $|G|$, then $G$ has an element of order $p$ whether $p$ is prime or not. If we regard $G$ as the integers mod $kp$, where $|G| = kp$, then we can notice that the integer $k$ has order $p$ in $G$.
|
Assume the order of $G$ is not a power of a prime $p$: then some other prime $q$ also divides the order, and by Cauchy's theorem there is an element of order $q$, which is not a power of $p$. Therefore, if every element has order a power of the same prime $p$, the order of the group must be a power of $p$.
Conversely, assume that the order of the group is $p^n$. Then the order of any element $a$ must divide $p^n$ by Lagrange's theorem, hence is itself a power of $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Asymptotic expansion of exp of exp I am having difficulties trying to find the asymptotic expansion of $I(\lambda)=\int^{\infty}_{1}\frac{1}{x^{2}}\exp(-\lambda\exp(-x))\mathrm{d}x$ as $\lambda\rightarrow\infty$ up to terms of order $O((\ln\lambda)^{-2})$. How does $(\ln\lambda)^{-1}$ appear as a small parameter? Please help. Thank you.
|
If we substitute $e^{-x} = y$ and integrate by parts the integral becomes
$$
\begin{align}
\int_1^\infty x^{-2} \exp(-\lambda e^{-x})\,dx &= \int_0^{1/e} y^{-1} (\log y)^{-2} e^{-\lambda y}\,dy \\
&= e^{-\lambda/e} - \lambda \int_0^{1/e} (\log y)^{-1} e^{-\lambda y}\,dy. \tag{1}
\end{align}
$$
It was proved by Erdélyi in [1] that for $a$ real, $b>0$, and $0 < c < 1$,
$$
\int_0^c (-\log t)^a t^{b-1} e^{-\lambda t}\,dt \sim \lambda^{-b} \sum_{n=0}^{\infty} (-1)^n \binom{a}{n} \Gamma^{(n)}(b) (\log \lambda)^{a-n}
$$
as $\lambda \to \infty$. So, setting $a=-1$ and $b=1$ we get
$$
\int_0^{1/e} (-\log y)^{-1} e^{-\lambda y}\,dy \sim \lambda^{-1} \sum_{n=0}^{\infty} \Gamma^{(n)}(1) (\log \lambda)^{-n-1}.
$$
Since each term in this asymptotic series is larger than $e^{-\lambda/e}$, we conclude upon substituting this into $(1)$ that
$$
\int_1^\infty x^{-2} \exp(-\lambda e^{-x})\,dx \sim \sum_{n=0}^{\infty} \Gamma^{(n)}(1) (\log \lambda)^{-n-1}
$$
as $\lambda \to \infty$.
The first few terms of this expansion are
$$
\int_1^\infty x^{-2} \exp(-\lambda e^{-x})\,dx = (\log \lambda)^{-1} - \gamma(\log \lambda)^{-2} + \left(\gamma^2 + \tfrac{\pi^2}{6}\right)(\log \lambda)^{-3} + \cdots.
$$
[1] A. Erdélyi, General asymptotic expansions of Laplace integrals, Archive for Rational Mechanics and Analysis, 7 (1961), No. 1, pp. 1-20.
[Article page on SpringerLink]
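As a rough numerical check of the first few terms (a Python/mpmath sketch; splitting the quadrature at $x=\log\lambda$, where the integrand transitions, is just a numerical convenience):

```python
from mpmath import mp, quad, exp, log, inf, euler, pi

mp.dps = 30
lam = mp.mpf(10)**8
L = log(lam)

exact = quad(lambda x: x**-2 * exp(-lam * exp(-x)), [1, L, inf])
series = 1/L - euler/L**2 + (euler**2 + pi**2/6)/L**3
print(exact, series)   # agree to roughly 4 decimal places; error is O((log lam)^-4)
```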
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
The geometry of unit vectors that have specific angle with a given vector It is easy to see that for $S^2$ this space is nothing but a circle that is the intersection of a cone with aperture $2\alpha$ (where $\alpha$ is the predefined specific angle), and $S^2$. My question is: is this observation extendable to higher dimensions? Is it true that for any given vector $u\in\mathbb{R}^n$ and the unit vectors in $\mathbb{R}^n$, the geometry of the points over this unit sphere that have a specific angle with $u$ is $S^{n-1}$? Any hints to prove or disprove it.
DISCLAIMER: This is not a homework but rather a self observation (as it might be guessed from its silliness(?!))
|
General spherical coordinates are given by
$$ \begin{align*}
x_1 &=r\sin{\phi_1}\sin{\phi_2}\dotsm\sin{\phi_{n-2}}\sin{\phi_{n-1}} \\
x_2 &=r\sin{\phi_1}\sin{\phi_2}\dotsm\sin{\phi_{n-2}}\cos{\phi_{n-1}} \\
x_3 &=r\sin{\phi_1}\sin{\phi_2}\dotsm\cos{\phi_{n-2}} \\
&\vdots\\
x_{n-1} &=r\sin{\phi_1}\cos{\phi_2}\\
x_n &= r \cos{\phi_1}
\end{align*}$$
Setting $r=1$ and taking the dot product with $(0,\dotsc,0,1)$ gives $\cos\phi_1$. So if we hold $\phi_1$ constant, the first $n-1$ coordinates have the common factor $\sin\phi_1$ and otherwise look exactly like spherical coordinates in $\mathbb{R}^{n-1}$: we get a sphere of one dimension lower, and, by analogy with the role of $r$ in the full space, its radius is $\lvert \sin\phi_1 \rvert$.
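Here is a NumPy illustration of the claim (a sketch; the sampling construction below is my own way of producing unit vectors at a fixed angle):

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 5, 0.7
u = np.zeros(n); u[-1] = 1.0                 # u = (0, ..., 0, 1)

# Unit vectors at angle alpha from u: cos(alpha) u + sin(alpha) w,
# with w a unit vector orthogonal to u.
w = rng.normal(size=(1000, n)); w[:, -1] = 0.0
w /= np.linalg.norm(w, axis=1, keepdims=True)
x = np.cos(alpha) * u + np.sin(alpha) * w

# The component along u is the constant cos(alpha) ...
print(np.allclose(x @ u, np.cos(alpha)))     # True
# ... and the orthogonal part lies on a sphere of radius |sin(alpha)|
# inside an (n-1)-dimensional hyperplane, i.e. a copy of S^{n-2}.
r = np.linalg.norm(x - np.outer(x @ u, u), axis=1)
print(np.allclose(r, abs(np.sin(alpha))))    # True
```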
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
When does equality hold in this case? Give an example of two vectors $x$ and $y$ such that $$||x+y||_2^2 = ||x||_2^2+||y||_2^2$$
and
$$\langle x,y\rangle\neq0$$
I can't seem to find any two vectors $x$ and $y$ that satisfy both conditions at the same time.
|
In $\mathbb{C}$ as a $\mathbb{C}$-vector space, with $\langle z,w\rangle=z\overline{w}$:
$$|1+i|^2=2=1+1=|1|^2+|i|^2$$
and $$\langle 1,i\rangle=1\cdot\overline{i}=-i\neq0$$
Note that a complex space is essential here: since $\|x+y\|^2=\|x\|^2+\|y\|^2+2\operatorname{Re}\langle x,y\rangle$, the equality only forces $\operatorname{Re}\langle x,y\rangle=0$, and over $\mathbb{R}$ that already means $\langle x,y\rangle=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Clarification of the statement of the Pumping Lemma In class we were told that Pumping Lemma states:
"Let A be a regular language over $\Sigma$. Then there exists k such that for any words $x,y,z\in\Sigma^{*}$, such that $w=xyz\in A$ and $\lvert y\rvert\ge k$, there exists words $u,v,w$ such that $y=uvw$, $v\neq\epsilon$, and for all $i\ge0$ the word $xuv^{i}wz\in A$."
I was informed by the professor that one cannot make choices on what $x,y,z$ are in the above Theorem when you wish to prove a language is not regular. I'm probably missing something incredibly easy but if you assume (for the purpose of contradiction) that a language is regular does the above statement not guarantee for all choices $x,y,z$ such that $xyz\in A$ and $y$ is sufficiently long that the conclusion of the Pumping Lemma holds? Perhaps a quantifier is missing?
Any clarification to the statement of the Pumping Lemma would be appreciated. Thank you very much for your help.
|
Your understanding is correct. If you want to apply the lemma to a language you know (or assume) is regular, you can choose $x$, $y$, and $z$ any which way you fancy, as long as you make sure $y$ is long enough. (But since you get $k$ from the lemma, what you actually need to provide in the usual scenario is a procedure that explains how you're going to choose $x$, $y$, and $z$ after you know what $k$ is.)
In light of this, your professor's statement is rather peculiar. Perhaps he was referring to proving the lemma? If your task is to prove the lemma rather than apply it, the burden of proof switches around, and you now have to find a way to deal with whatever $xyz$ your adversary shows up with (after you tell him a $k$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1221968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
the number of inversions in the permutation "reverse" Suppose that the number of inversions in the following permutation is $k$:
$$\begin{pmatrix}
1& ...& n& \\
a_1& ...& a_n&
\end{pmatrix}$$
Find the number of inversions in the following permutation (let's call it the "reverse"):
$$\begin{pmatrix}
1& ...& n& \\
a_n& ...& a_1&
\end{pmatrix}
$$
First, the maximum number of inversions is $\frac{n(n-1)}{2}$, and if a permutation has $0$ inversions then its reverse has $\frac{n(n-1)}{2}$ of them.
My guess is that each pair of positions is an inversion in exactly one of $\sigma$ and its reverse, so the two inversion counts sum to $\frac{n(n-1)}{2}$. Thus the answer should be $\frac{n(n-1)}{2}-k$.
Please help me to prove that.
|
Hints.
* Think of some permutation $ι$ on $[n]$ so that for any permutation $σ$ on $[n]$, “$σ$-reverse” is $σι$.
* An inversion of any permutation $σ$ on $[n]$ is a pair $(i,j)$ with $i < j$ such that $\frac{σ(i) - σ(j)}{i-j} < 0$.
* For permutations $σ$, $τ$ on $[n]$ you have $\frac{(στ)(i) - (στ)(j)}{i - j} = \frac{σ(τ(i)) - σ(τ(j))}{τ(i) - τ(j)}·\frac{τ(i) - τ(j)}{i - j}$.
* What does this imply for the inversions of $στ$ in terms of the inversions of $σ$ and $τ$?
* How many inversions does $ι$ have?
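Once you have worked through the hints, a brute-force check of the resulting formula is easy (a Python sketch; the helper name is just for illustration):

```python
from itertools import combinations

def inversions(p):
    return sum(p[i] > p[j] for i, j in combinations(range(len(p)), 2))

p = [3, 1, 4, 5, 2]
n = len(p)
rev = p[::-1]                      # the "reverse" permutation
print(inversions(p), inversions(rev), n * (n - 1) // 2)
# 4 6 10: the two counts always sum to n(n-1)/2
```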
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Proving irreducibility of Markov chain I have a Markov chain:
* state: a permutation of $n$ cards
* transition: take the top card and move it to one of the $n$ possible positions, chosen at random
I know it is obviously irreducible because we can reach any permutation from any starting permutation, but I'm wondering how to express this in a mathematical way.
Thanks for any kind of help!
|
Look at the 'reverse' move, in which you pick a card from within the deck and move it to the top. These reverse moves can get you from starting state $(a_{\pi(1)},\ldots,a_{\pi(n)})$ to final state $(a_1,\ldots,a_n)$ by first locating card $a_n$ and moving it to the top, then locating card $a_{n-1}$ and moving it to the top, and so on, finally moving card $a_1$ to the top.
Now run the video backwards: you've got yourself an algorithm for getting from $(a_1,\ldots,a_n)$ to $(a_{\pi(1)},\ldots,a_{\pi(n)})$ by repeatedly taking the top card and moving it to a position deeper within the deck.
Example: You can get from DBCEA to ABCDE using moves-to-the-top via:
$${\rm DBCEA}\to{\rm EDBCA}\to{\rm DEBCA}\to{\rm CDEBA}\to{\rm BCDEA}\to{\rm ABCDE}$$
Therefore you can get from ABCDE to DBCEA using moves-from-the-top via:
$$
{\rm ABCDE}\to{\rm BCDEA}\to{\rm CDEBA}\to{\rm DEBCA}\to{\rm EDBCA}\to{\rm DBCEA}
$$
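Here is a small Python sketch of the same construction (the function name is illustrative); it reproduces the example above:

```python
def top_card_path(target):
    # 'Reverse' pass: from `target`, repeatedly move the next-largest
    # card to the top until the deck is sorted.
    deck, states = list(target), [tuple(target)]
    for card in sorted(target, reverse=True):
        deck.remove(card)
        deck.insert(0, card)
        states.append(tuple(deck))
    # Played backwards, every step takes the top card and reinserts it
    # somewhere in the deck, which is exactly the chain's transition.
    return [''.join(s) for s in reversed(states)]

print(' -> '.join(top_card_path('DBCEA')))
# ABCDE -> BCDEA -> CDEBA -> DEBCA -> EDBCA -> DBCEA
```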
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $R(G)\cong K_0(F[G])$ Let $F$ be a field of characteristic $0$, $G$ a finite group and let $R(G)$ be the additive group of functions $G\to F$ generated by characters of $G$ of degree $1$.
Question: How can we show that $R(G)\cong K_0(F[G])$ where $K_0(F[G])$ is the Grothendieck group of finitely generated projective $F[G]$-modules?
Many thanks in advance.
|
If $G$ is a finite group and $F$ is a field of characteristic zero, then the group algebra $F[G]$ is semisimple by Maschke's theorem (a fact proved in any book on representation theory, e.g. Fulton and Harris, Representation Theory), with one matrix-ring factor for each irreducible $F$-representation of $G$ (for $F$ algebraically closed, one for each conjugacy class of $G$). So $K_0(F[G])\cong \Bbb Z^n$ by the Artin-Wedderburn theorem and the Morita invariance of $K_0$, where $n$ is the number of these factors.
In other words, the set of finite-dimensional representations $\operatorname{Rep}_F(G)$ is a monoid under direct sum, and $\operatorname{Rep}_F(G)\cong \Bbb N^n$. Therefore $R(G)=K_0(\operatorname{Rep}_F(G))\cong \Bbb Z^n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
The Euler-Poisson equation $$\int_{0}^\pi (x''^2+4x^2) dt$$
$$ x(0)=x'(0)=0; x(\pi)=0;x'(\pi)=\sinh(\pi)$$
Applying the Euler-Poisson equation, I found:
$$\frac{\partial f}{\partial x}-\frac{d}{dt}\frac{\partial f}{\partial x'}+\frac{d^2}{dt^2}\frac{\partial f}{\partial x''}=0$$
$$8x+2x''''=0$$
$$x''''(t)=-4x;$$
how do I proceed from here?
|
$$x^{(4)}=-4x$$
The characteristic equation $r^4=-4$ has roots $r=\pm 1\pm i$, so
$$x(t)=C_1 e^{-t}\sin t+C_2 e^{t}\sin t+C_3 e^{-t}\cos t+C_4 e^{t}\cos t.$$
The boundary conditions give
$$C_1=\tfrac12,\quad C_2=-\tfrac12,\quad C_3=C_4=0,$$
so
$$x(t)=-\sin t\,\sinh t.$$
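A quick symbolic check of this solution (a SymPy sketch):

```python
import sympy as sp

t = sp.symbols('t')
x = -sp.sin(t) * sp.sinh(t)

print(sp.simplify(sp.diff(x, t, 4) + 4 * x))       # 0, i.e. x'''' = -4x
print(x.subs(t, 0), sp.diff(x, t).subs(t, 0))      # 0 0
print(sp.simplify(x.subs(t, sp.pi)),
      sp.simplify(sp.diff(x, t).subs(t, sp.pi)))   # 0 sinh(pi)
```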
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $f$ is a real continuously differentiable function defined on $[0,b]$ then $\int f^2\leq\int f'^2$
Suppose $f$ is a continuously differentiable real valued function defined on an interval $[0,b]$ with $f(0)=f(b)$. Prove that$$\int_0^bf'(x)^2dx\geq\int_0^bf(x)^2dx$$
I do have some doubts about this because if I consider $f\equiv C$ where $C$ is a constant then the inequality is false. So maybe it holds for non-constant functions only. I do not doubt the authenticity of this problem.
I am a bit stumped on how I should proceed. Please provide just a hint only to help me start. Thanks.
|
Suppose that $f(a)=f(b)=0$.
We have $$f(x) = \int_a^x f'(s)ds = - \int_x^bf'(s)ds,$$hence
$$|f(x)|\le \min\left(\int_a^x |f'(s)|ds,\int_x^b|f'(s)|ds\right)\le \frac 12 \int_a^b|f'(s)|ds.$$
We integrate with respect to $x$ to get
$$\int_a^b f^2(x)dx\le \frac 14 (b-a)\left(\int_a^b|f'(s)|ds\right)^2\le \frac{(b-a)^2}{4}\int_a^b|f'(s)|^2ds$$
by the Cauchy–Bunyakovsky–Schwarz inequality.
Why the multiplicative constant $\frac{(b-a)^2}{4}$? Come to think of it, we cannot avoid some dependence on the domain of integration: its measure has to come into play. In particular, when $b-a\le 2$ this yields the inequality $\int_a^b f^2\le\int_a^b f'^2$ asked about, under the extra hypothesis $f(a)=f(b)=0$.
Note that by a similar technique we can get the result with only the hypothesis $f(a)=0$. We will lose the factor $\frac 14$, however.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Sum of geometric and Poisson distribution Suppose I have $X \sim \mathrm{Geom}(p)$ and $Y=\mathrm{Pois}(\lambda)$.
I want to create $Z = X + Y$, where the $X$ begins at $0$ rather than $1$.
Is this possible? Then I would calculate the mean and variance.
|
Something I thought of: first write down the two pmfs, with the geometric shifted so that its support starts at $0$:
$Pr(X=k)= p(1-p)^{k}$
$Pr(Y=k)=\frac{\lambda^k e^{-\lambda}}{k!}$
Since $X$ and $Y$ are independent, the pmf of the sum is the convolution of the two pmfs (not their pointwise product):
$Pr(Z=k)=\sum_{j=0}^{k} p(1-p)^{j}\,\frac{\lambda^{k-j} e^{-\lambda}}{(k-j)!}$
For the mean and variance there is no need to sum this series: by linearity of expectation and by independence,
$E[Z] = E[X]+E[Y] = \frac{1-p}{p}+\lambda$
$Var(Z) = Var(X)+Var(Y) = \frac{1-p}{p^2}+\lambda$
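A quick simulation check of these values (a rough NumPy sketch; sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, lam, N = 0.3, 2.5, 10**6

X = rng.geometric(p, N) - 1      # shift so the support starts at 0
Y = rng.poisson(lam, N)
Z = X + Y

print(Z.mean(), (1 - p) / p + lam)        # both about 4.833
print(Z.var(),  (1 - p) / p**2 + lam)     # both about 10.278
```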
Thanks!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
vector field question Consider the vector field $$F(x,y,z)=(zy+\sin x, zx-2y, yx-z)$$ (a) Is there a scalar field $f:\mathbb{R}^3 \rightarrow \mathbb{R}$ whose gradient is $F$?
(b) Compute $\int_C F\cdot dr$, where the curve $C$ is given by $x=y=z^2$ between $(0,0,0)$ and $(0,0,1)$.
I have no idea how to do the first one, and for the second one: is there a typo in the curve equation? I have no idea how to parametrize it...
|
The way to approach this is to consider $F(x,y,z) = F_x \hat{x} + F_y \hat{y} + F_z \hat{z}$, and compute the following integrals:
$$
\int F_x(x,y,z)dx \hspace{3pc} \int F_y(x,y,z) dy \hspace{3pc} \int F_z(x,y,z)dz
$$
I'll compute the first one for you: $\int (yz + \sin x)dx = xyz - \cos x + c(y,z)$ where we observe that the "constant" term is actually a function of $y$ and $z$, since if we take the partial derivative of this function with respect to $x$ the $c$ term will drop out. You need to compute the corresponding integrals for $F_y$ and $F_z$ and ask yourself: can the constant terms I get from integrating each equation fit into the other equations? For instance, integrating the second equation will yield $xyz - y^2 + d(x,z)$. We therefore see that the first and second integrals have $xyz$ in common, and it's completely reasonable that $-y^2$ might be a part of $c(y,z)$. Additionally, it's also quite possible that $-\cos x$ is a part of $d(x,z)$.
Once you compute the scalar field, the second part can be done by the fundamental theorem of line integrals: $\int F\cdot \textbf{dr} = f(x,y,z) |_{\textbf{r}_0}^{\textbf{r}_1}$.
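Assembling the common pieces as described suggests the potential $f = xyz - \cos x - y^2 - z^2/2$; here is a SymPy sketch that verifies this guess and then applies the fundamental theorem (the explicit $f$ is my assembly of the pieces above, so the check is the authority):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x*y*z - sp.cos(x) - y**2 - z**2/2          # candidate potential (my assembly)

F = (y*z + sp.sin(x), x*z - 2*y, x*y - z)
print([sp.simplify(sp.diff(f, v) - Fi) for v, Fi in zip((x, y, z), F)])   # [0, 0, 0]

# Fundamental theorem of line integrals, endpoints (0,0,0) and (0,0,1):
print(f.subs({x: 0, y: 0, z: 1}) - f.subs({x: 0, y: 0, z: 0}))            # -1/2
```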
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Show that $\ln (x) \leq x-1 $ Show that $\ln (x) \leq x-1 $
I'm not really sure how to show this, it's obvious if we draw a graph of it but that won't suffice here. Could we somehow use the fact that $e^x$ is the inverse? I mean, if $e^{x-1} \geq x$ then would the statement be proved?
|
Yes, one can use $$\tag1e^x\ge 1+x,$$ which holds for all $x\in\mathbb R$ (and can be dubbed the most useful inequality involving the exponential function). This again can be shown in several ways.
If you defined $e^x$ as the limit $\lim_{n\to\infty}\left(1+\frac xn\right)^n$, then $(1)$ follows from Bernoulli's inequality: $(1+t)^n\ge 1+nt$ if $t>-1$ and $n\ge 1$.
To show that $\ln(x)\le x-1$ for all $x>0$, just substitute $\ln x$ for $x$ in $(1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 3
}
|
A generalization of the anti-automorphisms of a group Recently, there was a question about anti-automorphisms of a group $G$. In this case, the inversion map $\iota : G \rightarrow G : x \mapsto x^{-1}$ is an anti-automorphism and $\iota \mathrm{Aut}(G)$ is the coset of all anti-automorphisms.
Let $K$ be a permutation group of degree $n$ and let $\pi \in K$. A natural generalization is a $\pi$-automorphism of the group $G$, where we say that a bijection $\phi : G \rightarrow G$ is a $\pi$-automorphism if for all $x_1, \ldots, x_n \in G$,
$$\phi(x_1 \cdots x_n) = \phi(x_{\pi(1)}) \cdots \phi(x_{\pi(n)})$$
We'll say that $\phi$ is a $K$-automorphism if it is a $\pi$-automorphism for some $\pi \in K$.
Now it's clear that the $K$-automorphisms of $G$ form a group and it's not hard to tell what the cosets of $\mathrm{Aut}(G)$ are in this group.
What I'm wondering if there are examples of groups that have interesting $K$-automorphisms or if the structure of the $K$-automorphisms is understood.
|
An example of a $\langle (1\,2\,3)\rangle$-automorphism is $f(x)=ex$, where $e$ is some central involution: since $e$ is central and $e^3=e$, we have $f(x_1)f(x_2)f(x_3)=e^3\,x_1x_2x_3=e\,x_1x_2x_3=f(x_1x_2x_3)$, so $f$ satisfies the defining identity (already for the identity permutation in $\langle(1\,2\,3)\rangle$), yet it is not an automorphism when $e\neq 1$, since $f(1)=e$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1222972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What's the supremum of the following set $\{ n + \frac{(-1)^n}{n} : n \in \mathbb{N}\}$ What's the supremum of the following set $\{ n + \frac{(-1)^n}{n} : n \in \mathbb{N}\}$?
I know that the infimum is $0$, but what about the supremum? I have computed the first $1000$ terms with Maxima, and the terms seem to keep growing (the fraction we add or subtract gets ever smaller); could the supremum be $\infty$?
|
To see that it isn't bounded above, observe that for a positive integer $k$
$$2k + \frac{(-1)^{2k}}{2k} = 2k + \frac1{2k}>2k. $$
Given $M>0$, choose $k$ such that $2k>M$. Then $2k + \frac1{2k}$ is an element of the bracketed set which is greater than $M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Interpreting $\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \cdot h $ as a scalar or matrix multiplication As part of another problem I am working on, I have the following product to work out.
$\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \cdot h $
where $h$ is a scalar. My question: if I treat $h$ as a scalar, I can commute it with the row vector and just multiply it through. If I think of $h$ as a $1 \times 1$ matrix, however, it seems that this isn't allowed.
I know it sounds simple, but I want to understand the subtlety involved.
|
As a reminder, if we have $A$ and $B$ be $n\times m$ and $m \times p$ matrices (respectively) then $AB$ is a $n \times p$ matrix. Matrix multiplication will work if the number of columns of the first matrix matches the number of rows of the second matrix. This is also the reason why matrix multiplication is not commutative. If we let $[h]$ be a $1\times 1$ matrix and we have the $1 \times 3$ matrix $\begin{bmatrix} 1 & 2 & 3 \end{bmatrix},$ then we can certainly multiply the two. The product will be a $1 \times 3$ matrix equal to $\begin{bmatrix} h & 2h & 3h \end{bmatrix}$.
So that all works out fine if we treat $h$ as a $1\times 1$ matrix and we multiply from the left. It is clear we cannot multiply a $1\times 1$ matrix on the right of $\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}$, and we couldn't multiply at all if you had a matrix with more than one row. We can get around this if we scrap the idea that $h$ has to be a $1 \times 1$ matrix. Consider the $n \times n$ identity matrix $I_n$, except with a diagonal of $h$'s instead of $1$'s. Simply make $n$ the number that allows you to multiply with the matrix you are after. For example, if we want to multiply on the right of $\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}$ we need $n = 3$, and you should find that $$\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}\begin{bmatrix} h & 0 & 0 \\ 0 & h & 0 \\ 0 & 0 & h \end{bmatrix} = \begin{bmatrix} h & 2h & 3h \end{bmatrix}$$
The reasoning has gotten a bit circular now as $\begin{bmatrix} h & 0 & 0 \\ 0 & h & 0 \\ 0 & 0 & h \end{bmatrix} = hI_3$. So in conclusion, scalars are not matrices, and matrices are not scalars. Even if sometimes they can be coaxed in a way to make it look like they act the same. There are very nice, scalar-exclusive properties that you can take advantage of as you work through linear algebra, just as there are very nice matrix-exclusive properties.
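A small NumPy illustration of the same point (a sketch):

```python
import numpy as np

v = np.array([[1, 2, 3]])            # a 1x3 row vector
h = 5.0

print(h * v)                          # scalar multiplication: [[ 5. 10. 15.]]
print(v @ (h * np.eye(3)))            # right-multiplying by h*I_3: same result
print((h * np.eye(1)) @ v)            # the 1x1 matrix [h] does work on the left
# v @ (h * np.eye(1)) would raise an error: shapes (1,3) and (1,1) don't align
```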
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Find all the complex numbers $z$ satisfying
Find all the complex numbers $z$ satisfying
$$
\bigg|\frac{1+z}{1-z}\bigg|=1
$$
So far I've done this:
$$
z=a+bi \\
\bigg|\frac{(1+a)+bi}{(1-a)-bi}\bigg|=1 \\
\text{(multiplying numerator and denominator by the conjugate of the denominator)} \\
\bigg|\frac{1-a^2-b^2+2bi}{1-2a+a^2+b^2}\bigg|=1
$$
|
Hint: $|c/d|=|c|/|d|$ and your numerator and denominator then represent the distances from ??????
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Let $g: \mathbb{R} \rightarrow \mathbb{R}$ be a differentiable function satisfying $g'(x) > 0$ for all $x \neq 0$. Prove that $g$ is one-to-one. Let $g: \mathbb{R} \rightarrow \mathbb{R}$ be a differentiable function satisfying $g'(x) > 0$ for all $x \neq 0$. Prove that $g$ is one-to-one.
Proof: Case 1: Consider $x_{2} > x_{1}>0$. Then by the Mean Value Theorem, there exists a point $x \in (x_1,x_2)$ such that $g'(x) = \frac{g(x_{2}) - g(x_{1})}{x_{2}- x_{1}}$, which means that $g(x_{2}) > g(x_{1})$ since $g'(x)> 0$. Thus $g$ is $1-1$ in this case.
Case 2: Similarily if $x_{1}<x_{2}<0$, then $g(x_{1})<g(x_{2})$.
Case 3: The only case left to consider is that if $x_{1} \leq 0< x_{2}$
I am not sure how to approach case 3. Do I use the MVT again or something else. Also, am I missing any circumstances?
|
You need the function to be differentiable over the interval $(x_1,x_2)$, and that's why you're considering the case $x_1\leq 0 < x_2$ separately. However, the only thing you need to complete the proof is the continuity of $g$ at $0$. This is a necessary condition (and you get it from the differentiability hypothesis). For example, you could have the function
$$g(x)=\left\{ \begin{align} 2x &\ \ \mbox{ if } x>0\\ x &\ \ \mbox{ if } x<0\\
2 &\ \ \mbox{ if } x=0\end{align}\right.$$
This function satisfies your hypotheses (except for differentiability at $0$) but it's not continuous at $0$, and you can see it's not $1-1$ since $g(0)=g(1)=2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Orbit-Stabiliser Theorem Applied "The group S6 acts on the group Z6 via σ([a]) = [σ(a)], for σ ∈ S6 and a∈{1,...,6}.
A permutation that is also an isomorphism is called an automorphism. The set G of
automorphisms of Z6 is a group. Use the orbit-stabiliser theorem to find its order."
I am considering the element [1], and I have found that the size of its orbit is 2 (as isomorphisms preserve orders of elements, [1] may only be mapped to itself or [5]). However, I have no idea how to go about finding the size of the stabiliser of [1]. Some help would be greatly appreciated!
|
Hint: Recall that for any homomorphism $\pi$, the value of $\pi(g)$ determines the value of $\pi$ on all of $\langle g \rangle$. What does that mean in your situation?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to differentiate $y=\sqrt{\frac{1+x}{1-x}}$? I'm trying to solve this problem but I think I'm missing something.
Here's what I've done so far:
$$g(x) = \frac{1+x}{1-x}$$
$$u = 1+x$$
$$u' = 1$$
$$v = 1-x$$
$$v' = -1$$
$$g'(x) = \frac{(1-x) -(-1)(1+x)}{(1-x)^2}$$
$$g'(x) = \frac{1-x+1+x}{(1-x)^2}$$
$$g'(x) = \frac{2}{(1-x)^2}$$
$$y' = \frac{1}{2}(\frac{1+x}{1-x})^{-\frac{1}{2}}(\frac{2}{(1-x)^2})
$$
|
$$ y^2=\frac{1+x}{1-x}=\frac2{1-x}-1\implies \frac{d}{dx}\left(y^2\right)=\frac2{(1-x)^2} $$
$$ 2yy' = \frac2{(1-x)^2} $$
$$y'=\frac1{y(1-x)^2}=\sqrt\frac{1-x}{1+x}\cdot\frac1{(1-x)^2}$$
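A numerical double check of this derivative (a SymPy sketch comparing both expressions at sample points):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sqrt((1 + x) / (1 - x))
dy = sp.diff(y, x)
claimed = sp.sqrt((1 - x) / (1 + x)) / (1 - x)**2

for v in (0.3, -0.5, 0.9):
    print(float(dy.subs(x, v)), float(claimed.subs(x, v)))   # each pair agrees
```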
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 4
}
|
Proof of sum results I was going through some of my notes when I found both these sums with their results
$$
x^0+x^1+x^2+x^3+... = \frac{1}{1-x}, |x|<1
$$
$$
0+1+2x+3x^2+4x^3+... = \frac{1}{(1-x)^2}
$$
I tried but I was unable to prove or confirm that these results are actually correct, could anyone please help me confirm whether these work or not?
|
For $x\neq1$ we have $\sum_{i=0}^nx^i=\frac{x^{n+1}-1}{x-1}$ (multiply the sum by $x-1$ and watch the terms telescope), and as $n\to\infty$ the limit exists precisely when $|x|<1$, giving $\frac{1}{1-x}$. The second identity follows by differentiating the first term by term, which is legitimate for a power series inside its radius of convergence: $\frac{d}{dx}\sum_{i=0}^{\infty} x^i = \sum_{i=1}^{\infty} i x^{i-1} = \frac{d}{dx}\frac{1}{1-x} = \frac{1}{(1-x)^2}$.
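Both identities are easy to check numerically with partial sums (a small Python sketch; the truncation point is arbitrary):

```python
x, N = 0.4, 60

s1 = sum(x**i for i in range(N))                 # 1 + x + x^2 + ...
s2 = sum(i * x**(i - 1) for i in range(1, N))    # 1 + 2x + 3x^2 + ...

print(s1, 1 / (1 - x))        # both about 1.666667
print(s2, 1 / (1 - x)**2)     # both about 2.777778
```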
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 3
}
|
Is there an operation that takes $a^b$ and $a^c$, and returns $a^{bc}$? I know that multiplying exponents of the same base will give you that base to the power of the sum of the exponents ($a^b \times a^c = a^{b+c}$), but is there anything that can be done with exponents that will give you some base to the power of the product of those exponents: $f(a^b, a^c) = a^{bc}$?
|
The function as initially described fails to exist by the definition of a function: for any given input (or input "vector") $x$, there must exist a single output $f(x)$. Here, the quickest counterexample for any such function $f(a^b,a^c)=a^{bc}$ is to note that $2^4=4^2$ and then any combination of these produces multiple outputs for a single specified input:
$$f(2^4,2^8)=f(4^2,4^4)=f(16,16^2)=2^{32}\neq 4^{8}\neq 16^{2}$$
or more generally,
$$f(a^{bk},a^{ck})=f((a^k)^b,(a^k)^c)=a^{bck^2}\neq (a^k)^{bc}$$
Essentially, the problem is that with two inputs, our function does not have enough information to give us the result that we want. We must give the function an additional input, whether by specifying that input as a "parameter" that reduces $f$ to one member of a "family" of functions; or as a "variable" that is part of the input vector into $f$. Personally, since the $a$ referenced in the question looks a lot like it is intended to be a relatively "fixed" parameter, I would choose to solve the problem this way:
$$f_a(q,r) = a^{\log_a q\cdot\log_a r}$$
where you are declaring the function $f$ to be bound by the base $a$ that you intend to work in. Now, we would give our function some inputs like $f_a(a^b,a^c)$ and expect to get a value $a^{bc}$.
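A small Python sketch of $f_a$ (illustrative only; floating-point logarithms make the outputs approximate):

```python
import math

def f(q, r, a):
    # f_a(q, r) = a^(log_a(q) * log_a(r))
    return a ** (math.log(q, a) * math.log(r, a))

a, b, c = 2, 4, 8
print(f(a**b, a**c, a), a**(b * c))    # both 2^32 = 4294967296 (up to rounding)

# The same numeric inputs under different bases give different answers,
# which is exactly the counterexample above:
print(f(16, 256, 2), f(16, 256, 4), f(16, 256, 16))   # 2^32, 4^8, 16^2
```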
With the additional comments discussing the possibility of using this function to understand the quantity $e^{i\pi}$ by applying our function to $f_e(e^i,e^\pi)$, we would naively apply our function as
$$f_e(e^i,e^\pi)=e^{\log e^i\cdot \log e^\pi} = e^{i\pi}$$
and arrive at the correct answer, but this process fails to provide further meaning or understanding, and also ignores the possibility that our function may take on other values when supplied with complex inputs outside of what we intend.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1223920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Is there any other constant which satisfies Euler's formula? Everybody knows Euler's formula
$e^{ix}=\cos x +i\sin x$
Is there any other constant besides $i$ which satisfies the above equation?
|
If I understand your question correctly, you ask about the equation $e^{ax} = \cos x + a\sin x$ with $a \neq i$, either as an equality of functions or as an equality of numbers. We can look at the problem from both viewpoints.
As an equality of functions, it is clearly impossible for real $a$. Over the complex numbers we would have the identity $e^{ax} - e^{ix} = (a - i)\sin x$; taking the second derivative and adding the original identity, we get $(a^2 + 1)\, e^{ax} = 0$, so $a^2 = -1$. Hence the only constant besides $a=i$ is $a = -i$, which corresponds to the conjugate formula $e^{-ix} = \cos x - i\sin x$.
As an equality of numbers, on the other hand, for each negative real constant $a$ there are infinitely many positive $x$ such that the pair $(x,a)$ satisfies the equality, which is clear from the intersections of the two curves.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
differentiation of a matrix function In statistics, the residual sum of squares is given by the formula
$$ \operatorname{RSS}(\beta) = (\mathbf{y} - \mathbf{X}\beta)^T(\mathbf{y} - \mathbf{X}\beta)$$
I know differentiation of scalar functions, but how do I take derivatives of this with respect to $\beta$? By the way, I am trying to minimize RSS with respect to $\beta$, so I am setting the derivative equal to $0$.
I know somehow product rule has to hold. So here I have the first step
$$-\mathbf{X}^T(\mathbf{y}-\mathbf{X}\beta) + (\mathbf{y}-\mathbf{X}\beta)^T(-\mathbf{X})= 0$$
|
First, distribute the transpose over the first bracket:
$RSS=(\mathbf{y}^T - \beta ^T \mathbf{X} ^T)(\mathbf{y} - \mathbf{X}\beta)$
Multiplying out:
$RSS=y^Ty-\beta ^T \mathbf{X} ^Ty-y^TX\beta+\beta^TX^T X\beta $
$\beta ^T \mathbf{X} ^Ty$ and $y^TX\beta$ are equal. Thus
$RSS=y^Ty-2\beta ^T \mathbf{X} ^Ty+\beta^TX^T X\beta$
Now you can differentiate with respect to $\beta$:
$\frac{\partial RSS}{\partial \beta}=-2X^Ty+2X^T X\beta=0$
Dividing by 2 and bringing the first summand to the RHS:
$X^T X\beta=X^Ty$
Multiplying both sides by $(X^T X)^{-1}$
$(X^T X)^{-1}X^T X\beta=(X^T X)^{-1}X^Ty$
$(X^T X)^{-1}X^T X= I$ (Identity matrix).
Finally you get $\beta=(X^T X)^{-1}X^Ty$
Equality of $\beta ^T \mathbf{X} ^Ty$ and $y^TX\beta$
I make an example:
$\left( \begin{array}{c c} b_1 & b_2 \end{array} \right) \cdot \left( \begin{array}{c c c} x_{11} & x_{21} \\ x_{12} & x_{22}\end{array} \right) \cdot \left( \begin{array}{c c} y_1 \\ y_2 \end{array} \right) $
$=\left( \begin{array}{c c} b_1x_{11}+b_2x_{12} & b_1x_{21}+b_2x_{22} \end{array} \right) \cdot \left( \begin{array}{c c} y_1 \\ y_2 \end{array} \right)$
$=b_1 x_{11}y_1+b_2 x_{12}y_1+b_1x_{21}y_2+b_2x_{22}y_2\quad (\color{blue}{I})$
$\left( \begin{array}{c c} y_1 & y_2 \end{array} \right) \cdot \left( \begin{array}{c c c} x_{11} & x_{12} \\ x_{21} & x_{22}\end{array} \right) \cdot \left( \begin{array}{c c} b_1 \\ b_2 \end{array} \right) $
$=\left( \begin{array}{c c} y_1x_{11}+y_2x_{21} & y_1x_{12}+y_2x_{22} \end{array} \right) \cdot \left( \begin{array}{c c} b_1 \\ b_2 \end{array} \right)$
$=y_1 x_{11}b_1+y_2 x_{21}b_1+y_1x_{12}b_2+y_2x_{22}b_2 \quad (\color{blue}{II})$
$\color{blue}{I}$ and $\color{blue}{II}$ are equal.
Derivative Rules
$\frac{\partial \beta ^T X ^T y }{\partial \beta }=X^Ty$
$\frac{\partial \beta^T X^T X \beta }{\partial \beta }=2X^TX\beta$
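A quick NumPy check of the closed form (a sketch with random data; `lstsq` is used only for comparison, and `np.linalg.solve` would be preferred to an explicit inverse in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)

beta = np.linalg.inv(X.T @ X) @ X.T @ y                           # closed form above
print(np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0]))    # True

# beta is a stationary point: the gradient -2 X^T (y - X beta) vanishes.
print(np.allclose(-2 * X.T @ (y - X @ beta), 0))                  # True
```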
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|