Prove that the group $G$ with generators $x,y,z$ and relations $z^y=z^2$, $x^z=x^2$, $y^x=y^2$ has order $1$.
This is a problem on Page 56 of Derek J.S. Robinson's A Course in the Theory of Groups (GTM 80). I think $z^y$ means the result of $y$ acting on $z$, and may be defined as $y^{-1}zy$.
Suppose that $F$ is a free group generated by $x,y,z$. The epimorphism $\pi: F \rightarrow G$ has its kernel $K$ generated by $z^yz^{-2}$, $x^zx^{-2}$, $y^xy^{-2}$. How to prove $K=F$? Or, how to prove $x,y,z \in K$? I've tried but didn't find the right way.
Thank you very much.
| EDIT: I changed the steps to be somewhat simpler.
Since this is tagged homework, I will only give hints (I hope!).
*
*Look at $z^{yx}$, and write it in two fairly different ways; use the equality between these two words to write $x$ as a word in $y$ and $z$.
*Now look at $y^x$, and write it in two different ways; use the equality between these two words to show $x$ is a power of $z$.
*Use the fact $x$ is a power of $z$ to write $z^x$ in two different ways, and show $x$ is trivial; conclude your group is trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How do I prove the divergence of this series? How do I prove that $\displaystyle\sum_{n\geq 1}\frac {1}{\ln^2n}$ is a divergent series?
| Short answer: Since $n > \log n$ for all $n \geq 2$, we know that $\frac{1}{n \log n} \leq \frac{1}{(\log n)^2}$ for all integers $n \geq 2$. (The terms are undefined at $n = 1$, since $\log 1 = 0$; dropping that term does not affect divergence.)
We know that $\sum_{n=2}^\infty \frac{1}{n \log n}$ diverges, so by the comparison test, so does $\sum_{n=2}^\infty \frac{1}{(\log n)^2}$.
Longer answer: Here's some information that explains the motivation of comparing the series with $\sum_{n=1}^\infty \frac{1}{n (\log n)}$. It is a commonly known fact that all the following series diverge:
*
*$\displaystyle \sum_{n=1}^\infty \frac{1}{n}$
*$\displaystyle \sum_{n=1}^\infty \frac{1}{n (\log n)}$
*$\displaystyle \sum_{n=1}^\infty \frac{1}{n (\log n) (\log \log n)}$
*$\displaystyle \sum_{n=1}^\infty \frac{1}{n (\log n) (\log \log n)(\log \log \log n)}$
*and so on
You can prove those by using the integral test (or the Cauchy condensation test, if you prefer). When you apply either of these tests, each series reduces to the series above it. We know the first one (which is the harmonic series) diverges, so by induction, all the other series diverge as well.
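As a quick numerical sanity check of the termwise comparison used above (in Python, with the natural log):

```python
import math

# termwise: 1/(log n)^2 >= 1/(n log n) for n >= 2, since n > log n
target = lambda n: 1.0 / math.log(n) ** 2
lower = lambda n: 1.0 / (n * math.log(n))

assert all(target(n) >= lower(n) for n in range(2, 10_000))

# divergence is very slow, but the partial sums keep growing
s1 = sum(target(n) for n in range(2, 10**4))
s2 = sum(target(n) for n in range(2, 10**5))
```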
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What is the analogue of spherical coordinates in $n$-dimensions? What's the analogue to spherical coordinates in $n$-dimensions? For example, for $n=2$ the analogue are polar coordinates $r,\theta$, which are related to the Cartesian coordinates $x_1,x_2$ by
$$x_1=r \cos \theta$$
$$x_2=r \sin \theta$$
For $n=3$, the analogue would be the ordinary spherical coordinates $r,\theta ,\varphi$, related to the Cartesian coordinates $x_1,x_2,x_3$ by
$$x_1=r \sin \theta \cos \varphi$$
$$x_2=r \sin \theta \sin \varphi$$
$$x_3=r \cos \theta$$
So these are my questions: Is there an analogue, or several, to spherical coordinates in $n$-dimensions for $n>3$? If there are such analogues, what are they and how are they related to the Cartesian coordinates? Thanks.
| Just look at the n-sphere article. A lecture note from Stony Brook is also available.
You can find it in Fock's paper (Fock, V. (1935)) or in some recent papers, like Howard, S., "Fundamental Solution of Laplace's Equation
in Hyperspherical Geometry," or Jing-Jing, F., Ling, H., & Shi-Jie, Y. (2011), "Solutions of Laplace equation in n-dimensional spaces," Communications in Theoretical Physics, 56(4), 623, for advanced studies.
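For computational purposes, the standard hyperspherical parametrization generalizes the two cases in the question directly. A sketch in Python (the coordinate ordering here puts $x_1 = r\cos\varphi_1$, a convention that differs from the question's only by a permutation of the axes):

```python
import math

def nsphere_to_cartesian(r, angles):
    """Hyperspherical -> Cartesian in n = len(angles) + 1 dimensions.

    Convention: x_1 = r cos(phi_1), x_2 = r sin(phi_1) cos(phi_2), ...,
    x_n = r sin(phi_1) ... sin(phi_{n-1}).
    """
    n = len(angles) + 1
    coords = []
    sin_prod = 1.0
    for k in range(n - 1):
        coords.append(r * sin_prod * math.cos(angles[k]))
        sin_prod *= math.sin(angles[k])
    coords.append(r * sin_prod)
    return coords
```

For any choice of angles the Euclidean norm of the result is $r$, as it must be; for a single angle this reduces to polar coordinates.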
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46",
"answer_count": 3,
"answer_id": 2
} |
Uniqueness of the curve given the curvature and the torsion I was reading do Carmo's book on differential geometry, and I have a question about the end of the proof that, given the curvature and the torsion of a curve, the curve is unique. I only omitted the part showing that a rigid motion does not alter the curvature and torsion.
My question is about the equality in the red rectangle: why is it true? I did not understand it. Sorry for my questions...
| The author proved that the derivative of $|t-\bar{t}|^2+|n-\bar{n}|^2+|b-\bar{b}|^2$ is $0$. When the derivative of a function is $0$, the function is constant. Since the expression is $0$ at the initial point $s_0$, it must be identically $0$ for all $s$. The only way that's possible is if its constituent parts $|t-\bar{t}|^2,|n-\bar{n}|^2,|b-\bar{b}|^2$ are all $0$. The only way $|t-\bar{t}|^2$ is always $0$ is if $t=\bar{t}$ always holds. But $t=d\alpha/ds$ and $\bar{t}=d\bar{\alpha}/ds$, so we have the equality in red.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving $-u''(x) = \delta(x)$ A question asks us to solve the differential equation
$-u''(x) = \delta(x)$
with boundary conditions
$u(-2) = 0$ and $u(3) = 0$ where $\delta(x)$ is the Dirac delta function. But inside the same question, teacher gives the solution in two pieces as $u = A(x+2)$ for $x\le0$ and $u = B(x-3)$ for $x \ge 0$. I understand when we integrate the delta function twice the result is the ramp function $R(x)$. However elsewhere in his lecture the teacher had given the general solution of that DE as
$u(x) = -R(x) + C + Dx$
So I don't understand how he was able to jump from this solution to the two pieces. Are these the only two pieces possible, using the boundary conditions given, or can there be other solutions?
Full solution is here (section 1.2 answer #2)
http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/assignments/pset1.pdf
| This is a good example of a question to which one can answer at some very different levels of mathematical sophistication... Since you say nothing about this, let me try an elementary approach.
What you call the Dirac delta function (which is not a function, at least not in the sense of a function from $\mathbb R$ to $\mathbb R$) is a strange object but something about it is clear:
One asks that $\displaystyle\int_y^z\delta(x)\mathrm dx=0$ if $y\leqslant z<0$ or if $0<y\leqslant z$, and that $\displaystyle\int_y^z\delta(x)\mathrm dx=1$ if $y<0<z$.
We will not use anything else about the Dirac $\delta$.
If one also asks that $\displaystyle\int_y^zu''(x)\mathrm dx=u'(z)-u'(y)$ for every $y\leqslant z$, one can integrate once your equation $\color{red}{-u''=\delta}$, getting
that there exists $a$ such that
$$
u'(x)=a-[x\geqslant0],
$$
where we used Iverson bracket notation. Now let us integrate this once again.
Using the facts that $\displaystyle\int_y^zu'(x)\mathrm dx$ should be $u(z)-u(y)$ for every $y\leqslant z$, and the value of $\displaystyle\int_y^z[x\geqslant0]\mathrm dx$, one gets that for every fixed negative number $x_0$,
$$
u(x)=u(x_0)+a\cdot (x-x_0)-x\cdot[x\geqslant0].
$$
This means that $b=u(x_0)-a\cdot x_0$ does not depend on $x_0<0$, hence finally, for every $x$ in $\mathbb R$,
$$
\color{red}{u(x)=a\cdot x+b-x\cdot[x\geqslant0]}.
$$
(And, in the present case, the condition that $u(-2)=u(3)=0$ imposes that $a=3/5$ and $b=6/5$.)
This is the general solution of the equation $-u''=\delta$. Note that every solution $u$ is $C^\infty$ on $\mathbb R\setminus\{0\}$ but only $C^0$ at $0$ hence $u'$ and $u''$ do not exist in the rigorous sense usually meant in mathematics. Note finally that $u$ is also
$$
u(x)=a\cdot x+b-x\cdot[x\gt0].
$$
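A quick numerical check of the boundary conditions and of the unit jump in $u'$ at $0$ (in Python, using the values $a=3/5$, $b=6/5$ found above):

```python
def u(x, a=3/5, b=6/5):
    """u(x) = a*x + b - x*[x >= 0], with the constants found above."""
    return a * x + b - (x if x >= 0 else 0.0)

# boundary conditions u(-2) = u(3) = 0 hold, and the slope of u drops
# by exactly 1 across x = 0, matching -u'' = delta
```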
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Please help me to understand why $\lim\limits_{n\to\infty } {nx(1-x^2)^n}=0$ This is the exercise:
$$f_{n}(x) = nx(1-x^2)^n,\qquad n \in \mathbb{N}, \quad f_{n}:[0,1] \to \mathbb{R}.$$
Find ${f(x)=\lim\limits_{n\to\infty } {nx(1-x^2)^n}}$.
I know that $\forall x\in (0,1]$ $\Rightarrow (1-x^2) \in [0, 1) $ but I still don't know how to calculate the limit. $\lim\limits_{n\to\infty } {(1-x^2)^n}=0$ because $(1-x^2) \in [0, 1) $ and that means I have $\infty\cdot0$.
I tried transformation to $\frac{0}{0} $ and here is where I got stuck.
I hope someone could help me.
| Three ways for showing that, for any $a \in (0,1)$ fixed,
$$
\mathop {\lim }\limits_{n \to \infty } na^n = 0
$$
(which implies that $f(x)=0$).
1) Fix $b \in (a,1)$. Since $\lim _{n \to \infty } n^{1/n} = 1$, there exists $N_0$ such that $n^{1/n}a < b$ for all $n \geq N_0$. Hence,
$na^n = (n^{1/n} a)^n < b^n $ $\forall n \geq N_0$; the result thus follows from $\lim _{n \to \infty } b^n = 0$.
2) If $b \in (a,1)$, then
$$
\frac{{b - a}}{a}na^n = (b - a)na^{n - 1} = \int_a^b {na^{n - 1} \,dx} \le \int_a^b {nx^{n - 1} \,dx} = x^n |_a^b = b^n - a^n ,
$$
and so the result follows from $\lim _{n \to \infty } (b^n - a^n) = 0$.
3) Similarly to 2), if $b \in (a,1)$, then by the mean-value theorem
$$
b^n - a^n = nc^{n - 1} (b - a) \ge na^{n - 1} (b - a) = na^n \frac{{b - a}}{a},
$$
for some $c \in (a,b)$; hence the result.
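A numerical illustration of the limit (with the arbitrary choice $a = 0.9$):

```python
# n * a**n tends to 0: the geometric decay of a**n beats the linear factor n
a = 0.9
vals = {n: n * a**n for n in (10, 100, 1000)}
```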
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Is the language $L = \{0^m1^n: m \neq n - 1 \}$ context free? Consider the language: $L = \{0^m1^n : m \neq n - 1 \}$ where $m, n \geq 0$
I tried for hours and hours but couldn't find its context-free grammar. I was stuck on finding a rule that can check the condition $m \neq n - 1$. Could anyone help me out? Thanks in advance.
| We have three basic ways to check whether a language is context-free:
*
*draw a PDA
*use the pumping lemma (to show a language is not context-free)
*take a stack, and check whether the target can be achieved with the two operations (push/pop)
I will go with the stack method here.
We need zero or more 0s followed by zero or more 1s, given that the number of 0s must not be exactly one less than the number of 1s.
Solution:
*
*Push each 0 you read onto the stack.
*Once you start receiving 1s, pop a 0 for each 1 (if any remain); when no 0s are left, push the surplus 1s instead.
*At the end of the input,
if exactly one unmatched 1 remains on the stack (i.e., $n = m + 1$), reject the string;
else, accept the string.
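Here is a small Python simulation of the stack idea, cross-checked against the defining condition $m \neq n-1$. Note that the correct reject condition is that exactly one unmatched 1 remains at the end, i.e. $n = m + 1$:

```python
import re

def accepts(w):
    """Simulate the stack procedure for L = { 0^m 1^n : m != n - 1 }."""
    if not re.fullmatch(r"0*1*", w):
        return False                 # wrong shape: not 0s followed by 1s
    stack = []
    for c in w:
        if c == '0':
            stack.append('0')
        elif stack and stack[-1] == '0':
            stack.pop()              # match this 1 against a stored 0
        else:
            stack.append('1')        # surplus 1 (no 0s left to match)
    # reject exactly when one unmatched 1 remains, i.e. n = m + 1
    return stack != ['1']

# cross-check against the definition of L
for m in range(8):
    for n in range(8):
        w = '0' * m + '1' * n
        assert accepts(w) == (m != n - 1)
```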
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Evaluating Limits I am having trouble understanding this question on limits. Suppose that $r(x)$ is a function where
$$
\lim_{x\rightarrow 0} \dfrac{r(x)}{x^2} =0 \ .
$$
Can someone please explain how, from the first limit I can show that:
$$
\lim_{x\rightarrow 0} \dfrac{r(x)}{x} =0 \ .
$$
| Write
$$\frac{r(x)}{x}=\frac{r(x)}{x^{2}}\cdot x.$$
Both factors have a limit as $x\rightarrow 0$, so
$$\lim_{x\rightarrow 0}\frac{r(x)}{x}=\lim_{x\rightarrow 0}\frac{r(x)}{x^{2}}\cdot \lim_{x\rightarrow 0}x=0\cdot 0=0.$$
(Note that one cannot argue via $\lim_{x\rightarrow 0}\frac{r(x)}{x^{2}}=\lim_{x\rightarrow 0}\frac{r(x)}{x}\cdot \lim_{x\rightarrow 0}\frac{1}{x}$, since $\lim_{x\rightarrow 0}\frac{1}{x}$ does not exist, so the product rule for limits does not apply.)
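A concrete sanity check with a sample $r$ satisfying the hypothesis (here $r(x) = x^3$, chosen purely for illustration):

```python
# r(x)/x**2 = x -> 0 as x -> 0, and r(x)/x = x**2 -> 0 even faster
r = lambda x: x**3
xs = (0.1, 0.01, 0.001)
ratios_sq = [r(x) / x**2 for x in xs]   # r(x)/x^2
ratios = [r(x) / x for x in xs]         # r(x)/x
```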
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Connectedness of a certain subset of the plane Let $U$ be an open and connected subspace of the Euclidean plane $\mathbb{R}^2$ and $A\subseteq U$ a subspace which is homeomorphic to the closed unit interval. Is $U\setminus A$ necessarily connected?
| Every subset $A$ of $\mathbb{R}^{2}$ homeomorphic to $[0,1]$ is "tame", that
is, there is a self-homeomorphism of the plane $\varphi$, such that
$\varphi(A)=[0,1]$ (citation needed :)). Then it follows that $A$ may be
represented as an intersection $A=\cap_{i=1}^{\infty}D_{i}$ of a decreasing
sequence of (closed) topological disks, so we have $D_{i}\subset U$ for some
$i$ (sufficiently large). Now, to show that $U\backslash A$ is connected, let
$x,y\in U\backslash A$ and $\gamma$ be a topological arc in $U$ connecting $x$ and $y$;
then the set $L=(\gamma\backslash D_{i}^{0})\cup\partial D_{i}$ is connected.
To prove this, let $L=L_{1}\cup L_{2}$ where $L_{1}$, $L_{2}$ are disjoint
open subsets of $L$, then since $\partial D_{i}$ is connected, we may suppose
that $\partial D_{i}\subset L_{1}$, but then each component $K$ of
$\gamma\backslash D_{i}^{0}$ is also contained in $L_{1}$ ($K$ intersects
$\partial D_{i}$ since $\gamma$ is connected!); thus $L=L_{1}$, so $L$ is
connected. Now, since $L\subset U\backslash A$, it follows that any two points
of $U\backslash A$ are contained in a connected set and therefore $U\backslash
A$ is connected as well.
p.s. [Here $D_{i}^{0}$ is the interior of $D_{i}$.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Cyclic numbers are characterized by the reciprocals of full reptend primes? The number $142,857$ is widely known as a cyclic number, meaning consecutive multiples are cyclic permutations, i.e.
$1 × 142,857 = 142,857$
$2 × 142,857 = 285,714$
$3 × 142,857 = 428,571$
and so on.
142857 is the repeating unit of $\frac{1}{7} = 0.\overline{142857}$ and in fact, every prime for which $10$ is a primitive root will generate a cyclic number (if we allow $0$ as a first digit, for example $0588235294117647$ which is the repeating unit of $\frac{1}{17}$). These primes are called full reptend primes.
From what I've read, it seems that there is a bijection between full reptend primes and cyclic numbers: a number is cyclic if and only if it is the repeating unit for the reciprocal of a full reptend prime.
I was able to find a proof for the if part. I was wondering if anyone can provide a proof for the only if part.
Edit: It's been a while since I posed this question and I still haven't found a proof. I've added a bounty in hopes of prompting some interest.
| Here is a possible line of approach, lacking only the insight why
$(d+1)\frac{n}{b^d-1}$ must be $1$.
If the base $b$ representation of a number $n$ is cyclic of (exact) length $d\geq\lceil{\log_b n}\rceil$ (with inequality for leading zeros),
then the first $d$ consecutive multiples of $n$, $\{kn|1\leq k\leq d\}$,
exhaust (i.e. are in bijection with) all the cycles, which in turn
correspond bijectively with the repeating base-$b$ expansions of
$\frac{kn}{b^d-1}\in(0,1)$.
Their sum satisfies
$$
\frac{b^d-1}{b-1}s=
\frac{d(d+1)}{2}n
\qquad\text{where}\qquad
s=d\frac{b-1}{2}\frac{(d+1)n}{b^d-1}\in\mathbb{N}
$$
is the sum of the base-$b$ digits of $n$.
Since each digit is between $0$ and $b-1$,
$s$ must lie between $0$ and $d(b-1)$ inclusive.
However, for the sum $s$, the range must be exclusive (for $b>2$),
since otherwise the digits would all be the same ($0$ or $b-1$)
and $d$ would have to be $1$. Thus we have
$$0<\frac{s}{d}=\frac{b-1}{2}\frac{(d+1)n}{b^d-1}<b-1$$
$$0<0.\overline{n_{d-1}\dots n_0}=\frac{(d+1)n}{b^d-1}<2$$
where the middle quantities in the first and second lines are
the average value $\frac{1}{d}\sum a_i$ of the base-$b$ digits of $n$,
and the fraction obtained from repeating the digits of
$n=\sum_0^{d-1}a_ib^i$ after the base-$b$ decimal point,
respectively. These quantities are rational numbers, but
not guaranteed to be integers. What we want to show however, as we shall see,
is that the latter quantity is in fact an integer, and therefore $1$.
Let's assume that $d>1$, and that $d$ is minimal, in the
sense that the $n$ is not a repeated or decomposable cycle:
$$
\nexists c\vert\;d,
\quad 1<c<d
\quad(c\;\text{a proper divisor of}\;d)
\quad\text{with}
\quad\frac{b^{d}-1}{b^c-1}\vert\;n.
$$
We should also note that $s\equiv n\pmod{b-1}$,
i.e. that $\frac{n-s}{b-1}\in\mathbb{Z}$,
since each base-$b$ digit of $n$ remains fixed modulo $b-1$
when multiplied by its respective nonnegative power of $b$.
We actually want to show that
$$n=\frac{b^d-1}{d+1}
\qquad\text{and that}\qquad
t=\frac{b^d-1}{n}=d+1\in\mathbb{N}$$
is in fact prime with $b$ as primitive root.
Perhaps there is a good argument why $s=d\frac{b-1}{2}$ lies exactly in the middle (the expected value) of the prescribed range or, equivalently,
that the average value of the digits of $n$ base $b$ is $\frac{b-1}{2}$, or
that the first noncyclic multiple of $n$ (which we know is the $d+1^\text{th}$) satisfies $(d+1)n=b^d-1$ (which is $b-1$ times the repunit of length $d$).
Certainly we know that the sequence $\{kn\}_{k=1}^d$ is increasing and bounded by $b^d$ (since each term has at most $d$ digits base $b$). Therefore, they must correspond with the lexicographic ordering of cyclically shifted length-$d$ strings starting with $n$, zero-padded if necessary (i.e. if $b^{d-1}>n$). And since $(d+1)n=dn+n$ is the sum of the smallest and largest of these cyclic shifts, its leading digit must also be the sum of the smallest and largest digits of $n$ (unless a carry increases the sum to $\geq b^d$).
If we need to resort to an examination in more detail of the products $\{kn\}_{k=1}^d$ and their relation to base-$b$ shifts of $n$, we need not resort to naming $n$'s digits. We can in stead rely on the division algorithm, and note that if
$$
n=q_k b^k+r_k
\quad\text{with}\quad
\left\{\begin{matrix}
q_k=\lfloor{b^{-k}n}\rfloor,\\
r_k=n-b^kq_k
\end{matrix}\right.
\quad\text{for}\quad
0\leq k\leq d
\quad\text{and if}\quad
n_k=r_k b^{d-k}+q_k,
$$
then $\{n_k\}_{k=1}^{d-1}$
is a permutation of $\{nk\}_{k=1}^{d-1}$.
Note that if $n$ and $n_k$ are identified with
strings of $d$ letters in the alphabet $\{0,\dots,b-1\}$,
then $n_k=\text{right}(n,k)+\text{left}(n,d-k)$,
where the plus symbol here indicates string concatenation and the
right and left functions are familiar from some programming languages,
since $q_k$ and $r_k$ are the left $d-k$ and
right $k$ digits of $n$ base $b$ respectively.
Once we can establish that $n=\frac{b^d-1}{d+1}$,
we would have that $b^d\equiv 1\pmod t$.
From here we could argue that $b$ must have order $d$ modulo $t$
using the minimality of $d$: if $1<c=\text{ord}_t(b)<d$,
then we would have a nontrivial repunit factorization
$$
\frac{b^c-1}{t}\cdot\frac{b^d-1}{b^c-1}=n
$$
so that $n$ is a repetition of a shorter cycle of length $c$,
contradicting our assumption.
But this would prove the result, since then
$t-1=d=\text{ord}_t(b)\leq\phi(t)\leq t-1$,
i.e. we would have sandwiched Euler's totient function
$\phi(t)$ into attaining its theoretical maximum,
which only occurs when $t$ is prime, while on the other hand,
the order of any element $b$ mod $t$ only attains $\phi(t)$
when the element $b$ is a primitive root,
i.e. a generator of $(\mathbb{Z}/t\mathbb{Z})^*$.
Finally (just for fun), note that a start at factoring $n$ for $d>1$ is
$$
n=\frac{b^d-1}{t}=\frac{1}{t}\prod_{0<c|d}\Phi_c(b)
$$
where $\Phi_c(x)$ denotes the cyclotomic polynomial of degree $c$,
and the product above will be a partial factorization with $\tau(d)$ terms,
one of which must be divisible by the prime $t$,
where $\tau(d)$ is the number of positive divisors of $d$.
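The key identities used above can be checked numerically in base $10$ for the full reptend primes $7$ and $17$: the first $d = p-1$ multiples of the repeating unit $n$ are exactly its cyclic shifts, and the $(d+1)$-th multiple is $(d+1)n = 10^d - 1$ (all nines).

```python
# b = 10; p is a full reptend prime, d = p - 1 its period
for p in (7, 17):
    d = p - 1
    n = (10**d - 1) // p                       # repeating unit of 1/p
    s = str(n).zfill(d)                        # keep leading zeros
    shifts = {int(s[k:] + s[:k]) for k in range(d)}
    # the first d multiples of n are exactly its cyclic shifts ...
    assert {k * n for k in range(1, d + 1)} == shifts
    # ... and the (d+1)-th multiple is 10^d - 1
    assert (d + 1) * n == 10**d - 1
```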
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/56989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
} |
differential equations So I have to do this:
$$-\sum_{n=0}^{\infty} \left(\frac{n^2 \pi \ T}{L^2} a_n(t) + m_La^{''}_n\right)\sin\left(\frac{n \pi x}{L} \right) = F_0 \sin \omega_E t.$$
I'm supposed to multiply this equation through by $\sin(kx\pi/L)$, integrate over $x$ from $0$ to $L$, and use the fact that
$$\int_0^L \sin\left(\frac{n \pi x}{L} \right) \sin \left(\frac{k \pi x}{L} \right)dx = \begin{cases}\frac{L}{2}& \mbox{if }n=k\\
0&\mbox{otherwise} \end{cases}$$
to derive a set of ordinary differential equations that model each individual $a_k(t)$.
I tried to do this and got stuck, so I thought I'd come here for help.
This is what I have so far
If $n = k$
$$-\sum_{n=0}^{\infty} \left ( \int_0^L { \left(\frac{n^2 \pi \ T}{L^2} a_n(t) + m_La^{''}_n\right)}dx \times \frac L 2 \right ) = \int_0^L {F_0 \sin \omega_E t \sin \left(\frac{k \pi x}{L} \right)}dx$$
$$-\sum_{n=0}^{\infty} \left(\frac{n^2 \pi \ T}{L} a_n(t) + Lm_La^{''}_n \right) \times \frac L 2 = \int_0^L {F_0 \sin \omega_E t \sin \left(\frac{k \pi x}{L} \right)}dx$$
$$-\sum_{n=0}^{\infty} \left( 2n^2 \pi \ T a_n(t) + 2L^2 m_La^{''}_n \right) = \int_0^L {F_0 \sin \omega_E t \sin \left(\frac{k \pi x}{L} \right)}dx$$
$$-\sum_{n=0}^{\infty} \left( 2n^2 \pi \ T a_n(t) + 2L^2 m_La^{''}_n \right) = F_0 \sin \omega_E t \left(- \frac{L}{k \pi}\right) \left(\cos \left(k \pi \right) - 1\right)$$
and then i have no idea. If $n \neq k$, then the right side is the same and the left side is $0$.
| Is the sum from n=0 to infinity still? ;) Use the $n \neq k$ condition to drop it to a single term.
Edit: Thought I would be a bit more explicit.
$-\sum_{n=0}^\infty\Big(\frac{n^2\pi T}{L^2}a_n(t)+m_L\ddot{a}_n\Big)\sin(\frac{n\pi x}{L})=F_0 \sin(\omega_E t)$
$\int_0^L \Big[-\sum_{n=0}^\infty\Big(\frac{n^2\pi T}{L^2}a_n(t)+m_L\ddot{a}_n\Big)\sin(\frac{n\pi x}{L})\Big]\sin(\frac{k\pi x}{L})\,dx=\int_0^L F_0 \sin(\omega_E t)\sin(\frac{k\pi x}{L})\,dx$
Now look at each piece of the sum. Each term with $n \neq k$ integrates to $0$; the single term with $n=k$ contributes a factor of $\frac{L}{2}$. So:
$-\Big(\frac{k^2\pi T}{L^2}a_k(t)+m_L\ddot{a}_k\Big)\frac{L}{2}=\int_0^L F_0 \sin(\omega_E t)\sin(\frac{k\pi x}{L})\,dx$
Notice the complete substitution from n to k.
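The orthogonality relation doing the work here can be verified symbolically, e.g. with SymPy (shown for the representative values $n=2$, $k=3$):

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

# n = 2, k = 3 (different): the cross term vanishes
cross = sp.integrate(sp.sin(2*sp.pi*x/L) * sp.sin(3*sp.pi*x/L), (x, 0, L))
# n = k = 2: the integral equals L/2
same = sp.integrate(sp.sin(2*sp.pi*x/L)**2, (x, 0, L))
```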
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determining which values to use in place of x in functions When solving partial fractions for integrations, solving x for two terms usually isn't all that difficult, but I've been running into problems with three term integration.
For example, given
$$\int\frac{x^2+3x-4}{x^3-4x^2+4x}$$
The denominator factored out to $x(x-2)^2$, which resulted in the following formulas
$$
\begin{align*}
\frac{x^2+3x-4}{x(x-2)^2}=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2}\\
x^2+3x-4= A(x-2)^2+Bx(x-2)+Cx\\\\
\text{when } x=0,\ A=-1
\text{ and } x=2,\ C=3
\end{align*}
$$
This is where I get stuck, since nothing immediately pops out at me for values that would solve A and C for zero and leave some value for B. How do I find the x-value for a constant that is not immediately apparent?
| The point of these equations involving $A$, $B$, $C$, and $x$ is that there are unique values of $A$, $B$, and $C$ that make the decomposition work, but they are supposed to be valid for every value of $x$. In particular, they should be true when $x = 0$ and $x = 2$. These values happen to be convenient because they cause all but one of the unknowns to vanish, thus allowing you to easily solve for the remaining unknown. Once you know $A$ and $C$, however, your equation looks like
$$
\begin{align*}
x^2 + 3x - 4 = -(x-2)^2 + Bx(x-2) + 3x,
\end{align*}
$$
since $A = -1$ and $C = 3$ are the unique solutions. I re-emphasize that they do not depend on the choice of $x$; we just chose convenient values for $x$ to help us discover $A$ and $C$.
To finish, we can just use any value for $x$, leaving $B$ as the only unknown. I'll choose $x = 1$, simply because I'm not very imaginative:
$$
\begin{align*}
1^2 + 3 \cdot 1 - 4 &= -(1 - 2)^2 + B \cdot 1 (1 - 2) + 3 \cdot 1\\
0 &= -B + 2\\
B &= 2.
\end{align*}
$$
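For reference, the full decomposition can be checked with a computer algebra system; a sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 + 3*x - 4) / (x**3 - 4*x**2 + 4*x)
decomp = sp.apart(expr)   # partial fraction decomposition
```

The result agrees with $A=-1$, $B=2$, $C=3$ found above.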
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Optimum solution to a Linear programming problem If we have a feasible space for a given LPP (linear programming problem), how is it that its optimum solution lies on one of the corner points of the graphical solution? (I am here concerned only with those LPP's which have a graphical solution with more than one corner/end point.)
I was asked to take this as a lemma in the class, but got curious about the proof. Any help is sincerely appreciated.
|
In two dimensional case the linear optimization (linear programming) is specified as follows: Find the values $(x, y)$ such that the goal function
$$g(x, y) = a x + b y \;\;\; (Eq. 1)$$
is maximized (or minimized) subject to the linear inequalities
$$a_1 x + b_1 y + c_1 \ge 0 \;\; (or \le 0) $$
$$a_2 x + b_2 y + c_2 \ge 0 \;\; (or \le 0) $$
$$ ... $$
Each of these linear inequalities defines a half plane bounded by the line obtained by replacing the inequality by equality. The solution $(x, y)$ that maximizes the goal function must lie in the intersection of all these halfplanes which is obviously a convex polygon. This polygon is called the feasible region. Let the value of the goal function at a point $(x, y)$ of the feasible region be $m$
$$g(x, y) = a x + b y = m \;\;\; (Eq. 2)$$
The value $m$ of the goal function will obviously not change when we move $(x, y$) on the line
defined by (Eq. 2). But the value of $g()$ will be increased when we increase $m$. This leads to a new line which is parallel to (Eq. 2). We can do this as long as the line contains at least one point of the feasible region. We conclude that the maximum of the goal function is achieved at an extreme point of the feasible region which - for a convex polygon - is a vertex (or an edge when the goal line is parallel to the restriction line going through the extreme vertex).
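A small brute-force illustration in Python (the LP data below are made up for the example): evaluating the goal function on a grid over the feasible region never beats the best vertex.

```python
# made-up LP: maximize g(x, y) = 3x + 2y subject to
# x >= 0, y >= 0, x + y <= 4, x <= 3; the vertices of the polygon are:
vertices = [(0, 0), (3, 0), (3, 1), (0, 4)]
g = lambda p: 3 * p[0] + 2 * p[1]
best = max(g(v) for v in vertices)

# grid sample of the feasible region: no point beats the best vertex
feasible = [(i / 10, j / 10)
            for i in range(0, 31) for j in range(0, 41)
            if i / 10 + j / 10 <= 4 + 1e-9]
assert all(g(p) <= best + 1e-9 for p in feasible)
```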
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 2
} |
Proof of cardinality inequality: $m_1\le m_2$, $k_1\le k_2$ implies $k_1m_1\le k_2m_2$ I have this homework question I am struggling with:
Let k1,k2,m1,m2 be cardinalities. prove that if $${{m}_{1}}\le {{m}_{2}},{{k}_{1}}\le {{k}_{2}}$$ then $${{k}_{1}}{{m}_{1}}\le {{k}_{2}}{{m}_{2}}$$
Can anyone please help me prove this?
thanks
| First:
*
*What does it mean that $k_1\le k_2$? It means there exists a one-to-one function $f\colon k_1\to k_2$.
*What does $k_1 m_1$ mean? It is the cardinality of $A\times B$ where $|A|=k_1$ and $|B|=m_1$.
Suppose $k_1\le k_2$ and $m_1\le m_2$, we abuse the notation and assume that $k_i,m_i$ are also the sets given in the cardinalities at hand.
Now we need to find a function from $k_1\times m_1$ which is one-to-one, into $k_2\times m_2$. Since $k_1\le k_2$ there exists a one-to-one $f\colon k_1\to k_2$, and likewise $g\colon m_1\to m_2$ which is one-to-one.
Let $h\colon k_1\times m_1\to k_2\times m_2$ be defined as:
$$h(\langle k,m\rangle) = \langle f(k),g(m)\rangle$$
$h$ is well-defined, since for every $\langle k,m\rangle\in k_1\times m_1$ we have that $f(k)\in k_2$ and $g(m)\in m_2$, therefore $h(\langle k,m\rangle)\in k_2\times m_2$.
We need to show that $h$ is injective. Suppose $h(\langle a,b\rangle) = h(\langle c,d\rangle)$, then $\langle f(a),g(b)\rangle=\langle f(c),g(d)\rangle$. Therefore $f(a)=f(c)$ and $g(b)=g(d)$.
Since $f,g$ are both injective, we have that $a=c, b=d$ that is $\langle a,b\rangle=\langle c,d\rangle$.
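The construction can be illustrated on finite stand-ins (the particular sets and injections below are arbitrary choices):

```python
from itertools import product

# finite stand-ins for the cardinalities
k1, k2 = range(3), range(5)
m1, m2 = range(2), range(4)
f = lambda a: a + 2          # injective k1 -> k2
g = lambda b: 2 * b          # injective m1 -> m2

# h(<k, m>) = <f(k), g(m)>
h = {(a, b): (f(a), g(b)) for a, b in product(k1, m1)}
```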
It is a very standard exercise to prove the basics properties of the cardinals order, for example:
$A\le B$ and $C\le D$, then:
*
*$A+C\le B+D$,
*$A\cdot C\le B\cdot D$,
*$A^C\le B^D$.
And so forth. It is easily proved by the above method, of composing the injective functions witnessing $A\le B$ and $C\le D$ into functions witnessing these properties.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Definition of manifold From Wikipedia:
The broadest common definition of manifold is a topological space
locally homeomorphic to a topological vector space over the reals.
A topological manifold is a topological space locally homeomorphic to
a Euclidean space.
In both concepts, a topological space is homeomorphic to another topological space with richer structure than just topology. On the other hand, the homeomorphic mapping is only in the sense of topology without referring to the richer structure.
I was wondering: what is the purpose of mapping from a set to another with richer structure, when the mapping only preserves the less rich structure shared by both domain and codomain? How is the extra structure on the codomain going to be used? Is it to induce the extra structure from the codomain onto the domain via the inverse of the mapping? What does that induction look like for a manifold and for a topological manifold?
Thanks!
| Part of what is neglected by the seeming presupposition in the last paragraph is that it says "locally". It's only locally the same, not necessarily globally. Thus a sphere or a torus locally looks like a plane, but is not connected together in the same way.
One thing often added to the definitions is that a manifold is a Hausdorff space. This is not redundant. Some manifolds locally homeomorphic to Euclidean spaces are not Hausdorff spaces. For example take a line with one point missing, and then put two points where that one point was. Then define an open neighborhood of either of those two points to contain the point itself plus the other points of some open neighborhood of the missing point. Those two new points cannot be separated from each other by open sets, so it's not a Hausdorff space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Homology of $\mathbb{R}^3 - S^1$ I've been looking for a space on the internet for which I cannot write down the homology groups off the top of my head so I came across this:
Compute the homology of $X : = \mathbb{R}^3 - S^1$.
I thought that if I stretch $S^1$ to make it very large then it looks like a line, so $\mathbb{R}^3 - S^1 \simeq (\mathbb{R}^2 - (0,0)) \times \mathbb{R}$. Then squishing down this space and retracting it a bit will make it look like a circle, so $(\mathbb{R}^2 - (0,0)) \times \mathbb{R} \simeq S^1$. Then I compute
$ H_0(X) = \mathbb{Z}$
$ H_1( X) = \mathbb{Z}$
$ H_n(X) = 0 (n > 1)$
Now I suspect something is wrong here because if you follow the link you will see that the OP computes $H_2(X,A) = \mathbb{Z}$. I'm not sure why he computes the relative homologies but if the space is "nice" then the relative homologies should be the same as the absolute ones, so I guess my reasoning above is flawed.
Maybe someone can point out to me what and then also explain to me when $H(X,A) = H(X)$. Thanks for your help!
Edit $\simeq$ here means homotopy equivalent.
| Another way...
Consider $S^3$ as Alexandrov's compactification of $\mathbb{R}^3$: $S^3 = \mathbb{R}^3 \cup \{ \infty \}$. The set $X = \mathbb{R}^3 \setminus (S^1 \times \{ 0 \})$ can be seen as the complement in $S^3$ of the union of a circle $S^1$ and a point $P$. Since $S^3$ is homogeneous, we can suppose that $\infty$ is a point of $S^1$, so
$$
X = S^3 \setminus (S^1 \cup \{ P \} ) = \mathbb{R}^3 \setminus ((S^1 \setminus \{ \text{a point} \}) \cup \{ P \}).
$$
Therefore $X$ is homeomorphic to the complement $Y$ in $\mathbb{R}^3$ of the union of a line and of a point: $Y = \mathbb{R}^3 \setminus (\{ x=y=0 \} \cup \{(1,0,0) \})$. Probably applying Mayer-Vietoris to $Y$ is easier than to $X$.
However it can be seen that $Y$ is homotopically equivalent to the very simple CW-complex $S^1 \vee S^2$, so computing homology is easy using cellular homology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
prove cardinality rule $|A-B|=|B-A|\rightarrow|A|=|B|$ I need to prove this $|A-B|=|B-A|\rightarrow|A|=|B|$
I managed to come up with this:
let $f:A-B\to B-A$ while $f$ is bijective.
then define $g\colon A\to B$ as follows:
$$g(x)=\begin{cases}
f(x)& x\in (A-B) \\
x& \text{otherwise} \\
\end{cases}$$
but I'm not managing to prove this function is surjective.
Is it not? or am I on the right path? if so how do I prove it?
Thanks
| Your basic intuition is correct.
First prove that $g$ is injective.
Suppose $x,y\in A$ and $x\neq y$. Let us break this into four cases (two similar):
*
*If $x\in B$ and $y\notin B$ (or vice versa) then $g(x)=x$ while $g(y)=f(y)\notin A$, therefore $g(x)\neq g(y)$.

*If $x,y\notin B$ then $f(x)\neq f(y)$ since $f$ is injective, and therefore $g(x)\neq g(y)$.

*Similarly for $x,y\in B$, we have that $g(x)=x\neq y=g(y)$.
Therefore $g$ is an injective function.
To show $g$ is surjective, pick $x\in B$.
Either $x\in A$, in which case $g(x)=x$; or $x\notin A$, in which case $x\in B-A$, so $a=f^{-1}(x)$ is defined, $a\in A\setminus B$, and $g(a)=f(a)=x$ as needed.
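A quick finite sanity check of this construction (the concrete sets and the bijection $f$ below are my own illustrative choices, not part of the problem):

```python
# Finite illustration: |A-B| = |B-A|, build g: A -> B as in the proof.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
AmB = sorted(A - B)              # A - B = [1, 2]
BmA = sorted(B - A)              # B - A = [5, 6]
f = dict(zip(AmB, BmA))          # any bijection A-B -> B-A

def g(x):
    # g(x) = f(x) if x in A-B, else x (then x lies in A ∩ B)
    return f[x] if x in f else x

image = {g(x) for x in A}
print(image == B)                # surjective: True
print(len(image) == len(A))      # injective: True
```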
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Incorrect manipulation of limits Here's my manipulation of a particular limit:
$\displaystyle \lim\limits_{h\rightarrow 0}\Big[\frac{f(x+h)g(x) - f(x)g(x+h)}{h}\Big]$
Using the properties of limits:
$\displaystyle \begin{align*}
&=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)g(x) - f(x)g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\
&=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[f(x)g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\
&=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)\Big]\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[f(x)\Big]\lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\
&=\frac{f(x)\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - f(x)\lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\
&=\frac{f(x)\Big(\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]\Big)}{\lim\limits_{h\rightarrow 0}h}\\
&=\frac{f(x)\Big(\lim\limits_{h\rightarrow 0}\Big[g(x) - g(x+h)\Big]\Big)}{\lim\limits_{h\rightarrow 0}h}\\
&=f(x)\lim\limits_{h\rightarrow 0}\Big(\frac{g(x) - g(x+h)}{h}\Big)\\
&=-f(x)g'(x)\end{align*}$
I'm pretty sure that my end result is incorrect, as I've used arbitrary functions for $f(x)$ and $g(x)$ and it didn't support my conclusion. I think that the factoring of $f(x)$ might be what is incorrect in my manipulation, but I'm not 100% sure. Could someone explain to me what I did wrong and why it is wrong? Which one of the limit "axioms" did I use incorrectly? Thank you.
| Hint to solve your problem:
Let
$$
\varphi (h) = \frac{{f(x + h)g(x) - f(x)g(x + h)}}{h}.
$$
Then
$$
\varphi (h) = \frac{{[f(x + h) - f(x) + f(x)]g(x) - f(x)[g(x + h) - g(x) + g(x)]}}{h}.
$$
From this it follows straightforwardly that
$$
\mathop {\lim }\limits_{h \to 0} \varphi (h) = g(x)f'(x) - f(x)g'(x).
$$
I'll give more hints if you need.
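To see the hint land numerically, here is a small check with sample functions $f(x)=\sin x$, $g(x)=x^2$ (illustrative choices of mine):

```python
import math

def phi(h, x, f, g):
    # the difference quotient from the question
    return (f(x + h) * g(x) - f(x) * g(x + h)) / h

f, df = math.sin, math.cos                    # f' = cos
g, dg = (lambda t: t * t), (lambda t: 2 * t)  # g' = 2t

x = 1.0
approx = phi(1e-6, x, f, g)
exact = g(x) * df(x) - f(x) * dg(x)           # g(x)f'(x) - f(x)g'(x)
print(abs(approx - exact) < 1e-4)             # True
```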
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
Proof for divisibility rule for palindromic integers of even length I am studying for a test and came across this in my practice materials. I can prove it simply for some individual cases, but I don't know where to start to prove the full statement.
Prove that every palindromic integer in base $k$ with an even number of digits is divisible by $k+1$.
The second claim written below will prove your result. In general, you can mimic the divisibility tests for $9$ and $11$. $9$ and $11$ have simple divisibility rules since we deal with the decimal number system, i.e. base $10$. You can develop divisibility rules for $k$ on a similar note, by expressing the number either in base $k+1$ or in base $k-1$.
Claim 1:
One possible divisibility test for a number '$a$' by '$n$' is as follows. Express the number '$a$' in base '$n+1$'. Let '$s$' denote the sum of digits of '$a$' expressed in base '$n+1$'.
Now $n|a \iff n|s$. More generally, $a \equiv s \pmod{n}$
Example:
Before setting to prove this, we will see an example of this. Say we want to check if $13|611$. Express $611$ in base $14$.
$611 = 3 \times 14^2 + 1 \times 14^1 + 9 \times 14^0 = (319)_{14}$
where $(319)_{14}$ denotes the decimal number $611$ expressed in base $14$.
The sum of the digits $s = 3 + 1 + 9 = 13$. Clearly, $13|13$. Hence, $13|611$, which is indeed true since $611 = 13 \times 47$.
Proof:
The proof for this claim writes itself out.
Let $a = (a_ma_{m-1} \ldots a_0)_{n+1}$, where $a_i$ are the digits of '$a$' in the base '$n+1$'.
$a = a_m \times (n+1)^m + a_{m-1} \times (n+1)^{m-1} + \cdots + a_0$
Now, note that
$$n+1 \equiv 1 \pmod n$$
$$(n+1)^k \equiv 1 \pmod n$$
$$a_k \times (n+1)^k \equiv a_k \pmod n$$
$$a = a_m \times (n+1)^m + a_{m-1} \times (n+1)^{m-1} + \cdots + a_0$$
$$\equiv (a_m + a_{m-1} + \cdots + a_0) \pmod n$$
$$a \equiv s \pmod n$$
Hence proved.
Claim 2:
Another possible divisibility test for a number '$a$' by '$n$' is as follows. Express the number '$a$' in base '$n-1$'. Let '$s$' denote the alternating sum of digits of '$a$' expressed in base '$n-1$', i.e. if $a = (a_ma_{m-1} \ldots a_0)_{n-1}$, then $s = a_0 - a_1 + a_2 - \cdots + (-1)^{m-1}a_{m-1} + (-1)^m a_m$
Now $n|a$ if and only if $n|s$. More generally, $a \equiv s \pmod{n}$
Example:
Before setting to prove this, we will see an example of this. Say we want to check if $13|611$. Express $611$ in base $12$.
$611 = 4 \times 12^2 + 2 \times 12^1 + B \times 12^0 = (42B)_{12}$
where $(42B)_{12}$ denotes the decimal number $611$ expressed in base $12$; $A$ stands for the digit ten and $B$ stands for the digit eleven.
The alternating sum of the digits $s = B_{12} - 2 + 4 = 13$. Clearly, $13|13$. Hence, $13|611$, which is indeed true since $611 = 13 \times 47$.
Proof:
The proof for this claim writes itself out just like the one above.
Let $a = (a_ma_{m-1} \ldots a_0)_{n-1}$, where $a_i$ are the digits of '$a$' in the base '$n-1$'.
$a = a_m \times (n-1)^m + a_{m-1} \times (n-1)^{m-1} + \cdots + a_0$
Now, note that
$$n-1 \equiv (-1) \pmod n$$
$$(n-1)^k \equiv (-1)^k \pmod n$$
$$a_k \times (n-1)^k \equiv (-1)^k a_k \pmod n$$
$$a = a_m \times (n-1)^m + a_{m-1} \times (n-1)^{m-1} + \cdots + a_0$$
$$a \equiv ((-1)^m a_m + (-1)^{m-1} a_{m-1} + \cdots + a_0) \pmod n$$
$$a \equiv s \pmod n$$
Hence proved.
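Both claims (and the worked examples above) can be verified mechanically; a short script (the helper names are mine):

```python
def digits(a, base):
    # digits of a in the given base, least significant first
    ds = []
    while a:
        a, r = divmod(a, base)
        ds.append(r)
    return ds or [0]

def claims_hold(a, n):
    s1 = sum(digits(a, n + 1))                                        # Claim 1
    s2 = sum(d * (-1) ** i for i, d in enumerate(digits(a, n - 1)))   # Claim 2
    return (a - s1) % n == 0 and (a - s2) % n == 0

print(claims_hold(611, 13))                                           # True
print(all(claims_hold(a, n) for a in range(1, 300) for n in range(3, 15)))  # True
```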
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Integrate $\int\limits_{0}^{1} \frac{\log(x^{2}-2x \cos a+1)}{x} dx$ How do I solve this: $$\int\limits_{0}^{1} \frac{\log(x^{2}-2x \cos{a}+1)}{x} \ dx$$
Integration by parts is the only thing which I could think of, clearly that seems cumbersome. Substitution also doesn't work.
| Apply the identity $Li_2(z) + Li_2({1/z})=-\frac{\pi^2}6-\frac12\ln^2(-z)$ in the evaluation below
\begin{align}
\int_{0}^{1} \frac{\ln(x^{2}-2x\cos{a}+1)}{x} \ dx
=& \int\limits_{0}^{1} \frac{\ln(1-xe^{i a})+ \ln(1-xe^{-ia })}{x} \ dx \\=&-Li_2(e^{ia })- Li_2(e^{-ia }) \\
=& \frac{\pi^2}6 + \frac12\ln^2(-e^{ia})= \frac{\pi^2}6+\frac12[i (a-\pi)]^2\\
= &-\frac12a^2 +\pi a - \frac{\pi^2}3
\end{align}
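The closed form can be spot-checked with a crude Simpson rule (a numerical illustration only; the value at $x=0$ uses the limit $-2\cos a$ of the integrand):

```python
import math

def closed_form(a):
    return -a * a / 2 + math.pi * a - math.pi ** 2 / 3

def integrand(x, a):
    if x == 0.0:
        return -2 * math.cos(a)          # limit of the integrand as x -> 0+
    return math.log(x * x - 2 * x * math.cos(a) + 1) / x

def simpson(a, n=20000):
    # composite Simpson rule over [0, 1]
    h = 1.0 / n
    total = integrand(0.0, a) + integrand(1.0, a)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * integrand(i * h, a)
    return total * h / 3

for a in (0.7, 1.5, 2.5):                # 0 < a < 2*pi
    print(abs(simpson(a) - closed_form(a)) < 1e-4)   # True
```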
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Normalizer vs Centralizer In Group Theory, what is the difference between a normalizer and a centralizer of a set S? I'm having a bit of difficulty understanding it...
Thanks in advance!
| If $G$ is a group, and $H$ is a subgroup, then the normalizer of $H$ in $G$ is
$$N_G(H) = \{ g\in G \mid g^{-1}Hg = H\},$$
and the centralizer is
$$C_G(H) = \{g\in G \mid gh = hg\text{ for all }h\in H\}.$$
It is easy to see that $C_G(H)\subseteq N_G(H)$, but the converse need not hold.
For example, take $G=S_3$, and let $H = \{ I, (1,2,3), (1,3,2)\}$.
What is $C_G(H)$? It's the collection of all permutations that commute with $I$, with $(1,2,3)$, and with $(1,3,2)$. Since $(1,2)$ does not commute with $(1,2,3)$,
$$(1,2,3)(1,2) = (1,3)\neq (2,3) = (1,2)(1,2,3),$$
then $(1,2)\notin C_G(H)$. However, $(1,2)$ does normalize $H$:
$$\begin{align*}
(1,2)^{-1}I(1,2) &= I\in H;\\
(1,2)^{-1}(1,2,3)(1,2) &= (1,3,2)\in H;\\
(1,2)^{-1}(1,3,2)(1,2) &= (1,2,3)\in H.
\end{align*}$$
So $(1,2)\in N_G(H)$. Similarly, $(1,3)$ and $(2,3)$ are not in the centralizer, but are in the normalizer. $H$ is contained in both.
For another example, take $G=H=S_3$. Then the normalizer is all of $G$, because for every $x,g\in G$ we have $gxg^{-1}\in G$; but the centralizer is equal to the center (the set of things that commute with everything) and the center of $G$ is just the identity.
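The $S_3$ computation above can be replayed by machine. Below, permutations are tuples where p[i] is the image of i, and composition applies the right factor first (my encoding):

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)[i] = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

G = list(permutations(range(3)))           # S3 acting on {0, 1, 2}
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]      # {I, (1,2,3), (1,3,2)} in 0-based form

centralizer = [g for g in G if all(compose(g, h) == compose(h, g) for h in H)]
normalizer = [g for g in G
              if all(compose(inverse(g), compose(h, g)) in H for h in H)]

print(len(centralizer), len(normalizer))   # 3 6: C_G(H) = H, N_G(H) = S3
```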
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
No group of order 36 is simple
Fraleigh(7ed) Example37.14 No group of order 36 is simple. Such a group $G$ has either $1$ or $4$ subgroups of order $9$. If there is only one such subgroup, it is normal in $G$. If there are four such subgroups, let $H$ and $K$ be two of them. $H \cap K$ must have at least $3$ elements, or $HK$ would have to have $81$ elements, from $|HK|=|H||K|/|H\cap K|$. Thus the normalizer of $H \cap K$ has as order a multiple of $>1$ of $9$ and a divisor of $36$; hence the order must be either $18$ or $36$. If the order is $18$, the normalizer is then of index $2$ and therefore is normal in $G$. If the order is $36$, then $ H \cap K$ is normal in $G$.
I don't understand the highlighted sentence. It must be from that $N(H \cap K) \supset H$ (or $K$), but why $H\cap K$ is normal in $H$ or $K$? I guess it must be from the first Sylow thoerem(below). But the first Sylow theorem seems to state that $H\cap K$ is a normal subgroup of a subgroup of order $9$, not necessariliy $H$ or $K$, maybe other than $H$ and $K$. How can I conclude that $H \cap K$ is a normal subgroup of $H$ or $K$?
First Sylow Theorem Let $G$ be a finite group and let $|G|=p^n m$ where $n\ge1$ and where $p$ does not divide $m$. Then
1. $G$ contains a subgroup of order $p^i$ for each $i$ where $1 \le i \le n$
2. every subgroup $H$ of $G$ of order $p^i$ is a normal subgroup of a subgroup of order $p^{i+1}$ for $1\le i<n$
Edit: It was very easy. By the first Sylow theorem, $H \cap K$ is a normal subgroup of a subgroup of order $9$, not necessarily $H$ or $K$. But it is still true that $N(H \cap K)$ contains a subgroup of order $9$, so $N(H\cap K)$ has as order a multiple of $9$.
| Groups of order 9 must be commutative, so any subgroup of $H$ or $K$ is normal relative to it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
For this bilinear form: $q(v)=q(x_1,x_2,x_3)=x_1^2+x_2^2+9x_3^{2}+2x_1x_2-6x_1x_3-5x_2x_3$ find a base $B$ so that $[q]^B_B=D$ diagonalizable matrix For this bilinear form: $q(v)=q(x_1,x_2,x_3)=x_1^2+x_2^2+9x_3^{2}+2x_1x_2-6x_1x_3-5x_2x_3$ I need to find a base $B$ so that $[q]^B_B=D$ will be diagonalizable matrix. So, I tried to look for eigenvalues after writing this bilinear form as a matrix $\begin{pmatrix}
1 & 1 &-3 \\ 1&1 &-2.5 \\ -3& -2.5 & 9\end{pmatrix}$ and find eigenvalues but it's impossible mission, it's really messy. Is there any other method which with I can find it? maybe something with Jacobi method?
Thank you.
| It is messy because you have misunderstood the problem. While $q(\underline{v})$ is induced by the bilinear form $f(\underline{u}, \underline{v})=\underline{v}^TA\underline{u}$, where $A$ is your $3\times 3$ coefficient matrix, $q$ is quadratic, not bilinear, also not a linear transformation. So, what you are asked to do is to find a decomposition of the form $A = P^TDP$ (where $P$ is invertible and the diagonal of $D$ does not necessarily contain any eigenvalue of $A$), but you have confused this with an eigenvalue decomposition $A = P^{-1}DP$. Surely, as your matrix $A$ is real symmetric, you can do both by performing an orthogonal decomposition $A=Q^TDQ$ where $QQ^T=I$ and $D$ contains the eigenvalues of $A$, but this is simply not required.
In general, you can find a decomposition $A = P^TDP$ by using elementary row/column operations. This is somewhat akin to finding a row-reduced echelon form of a matrix, but here we need to perform both an elementary row operation and a corresponding elementary column operation at each step. In other words, if, in a certain step, you multiply $A$ by an elementary matrix $E$ on the left, you should also mutiply $A$ by $E^T$ on the right.
For the problem you describe, however, simple inspection plus some completing-square trick is enough. Note that
$$
\begin{eqnarray}
&&x_1^2 + x_2^2 + 9x_3^2 + 2x_1x_2 - 6x_1x_3 - 5x_2x_3\\
&=&(x_1 + x_2 - 3x_3)^2 + x_2x_3\\
&=&(x_1 + x_2 - 3x_3)^2 + \frac14[(x_2 + x_3)^2 - (x_2 - x_3)^2].
\end{eqnarray}
$$
So you may take $B=\{(x_1 + x_2 - 3x_3),\ (x_2 + x_3),\ (x_2 - x_3)\}$. You may verify that $A = P^TDP$ where
$$
P=\begin{pmatrix}
1&1&-3\\0&1&1\\0&1&-1
\end{pmatrix},
\ D=\begin{pmatrix}
1\\&\frac14\\&&-\frac14
\end{pmatrix}.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Improper Double Integral Counterexample Let $f: \mathbf{R}^2\to \mathbf{R}$. I want to integrate $f$ over the entire first quadrant, call $D$. Then by definition we have
$$\int \int_D f(x,y) dA =\lim_{R\to[0, \infty]\times[0, \infty]}\int \int_R f(x,y) dA$$
where $R$ is a rectangle.
I remember vaguely that the above is true if $f$ is positive. In other words, if $f$ is positive, then the shape of the rectangle does not matter.
So this brings me to my question: give a function $f$ such that the shape of the rectangles DO MATTER when evaluating the improper double integral.
| Observe the following, if $g$ is a function on $\mathbf{R}^2$ with $$g(x,0) = g(0,y) = 0$$ then you have that
$$ \partial_y g(x_0,y) = \partial_yg(0,y) + \int_0^{x_0} \partial^2_{xy}g(s,y) ds $$
So
$$ g(x,y) + g(0,0) - g(0,y) - g(x,0) = \int_0^x\int_0^y \partial^2_{xy} g(s,t) dtds $$
In other words, it suffices to find a twice continuously differentiable function $g$, vanishing on the coordinate axes, such that $\lim_{r\to\infty} g(r\cos\theta,r\sin\theta)$ is dependent on the angle $\theta$ chosen.
Let $\phi(r)$ be an arbitrary smooth function such that $\phi(r) = 0$ if $r < 1$ and $\phi(r) = 1$ if $r > 2$. Define
$$ g(x,y) = \frac{xy\,\phi(xy)}{x^2 + y^2} $$
(the factor $xy$ is what makes the limit of $g$ along a ray depend on the ray's angle). Then for $f(x,y) = \partial^2_{xy} g(x,y)$, you have that for the integrals
$$I(s; a) = \iint_{[0,s]\times [0,as]} f(x,y) dA = g(s,as) = \frac{a\,\phi(as^2)}{1 + a^2} $$
you have that for any fixed $a > 0$, the limit
$$ \lim_{s\to\infty} I(s;a) = \frac{a}{1+a^2} $$
is dependent on the aspect ratio of the rectangle chosen.
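A numerical illustration of the angle dependence. Here I take the bump $g(x,y)=xy\,\phi(xy)/(x^2+y^2)$ (note the extra $xy$ factor in the numerator, which makes the radial limit angle-dependent and the corner value equal $a\,\phi(as^2)/(1+a^2)$) with a concrete, merely $C^1$, choice of $\phi$; by the corner formula, the rectangle integral of $\partial^2_{xy}g$ equals $g(s,as)$:

```python
def phi(r):
    # cutoff: 0 for r <= 1, 1 for r >= 2, smoothstep in between
    # (only C^1, which is enough for this illustration)
    if r <= 1.0:
        return 0.0
    if r >= 2.0:
        return 1.0
    t = r - 1.0
    return t * t * (3 - 2 * t)

def g(x, y):
    if x == 0.0 or y == 0.0:
        return 0.0                       # g vanishes on the axes
    return x * y * phi(x * y) / (x * x + y * y)

def I(s, a):
    # iint over [0,s] x [0,as] of d^2 g / dx dy, by the corner formula
    return g(s, a * s)

for a in (0.5, 1.0, 3.0):
    print(I(100.0, a), a / (1 + a * a))  # the two columns agree
```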
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/57940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Determining if a quadratic polynomial is always positive Is there a quick and systematic method to find out if a quadratic polynomial is always positive or may have positive and negative or always negative for all values of its variables?
Say, for the quadratic inequality
$$3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2}>0$$
without drawing a graph to look at its shape, how can I find out if this form is always greater than zero or has negative results or it is always negative for all non-zero values of the variables?
I tried randomly substituting values into the variables but I could never be sure if I had covered all cases.
Thanks for any help.
| One of the methods when you don't know necessary and sufficient condition for the minimum of function of several variables - consider other as parameters. You know that for a function
$$
a_1x^2+b_1(y,z)x+c(y,z)
$$
the minimum is attained at $\frac{-b_1(y,z)}{2a_1}$ for $a_1>0$ and any fixed $y,z$. Then you should just substitute this into your equation and solve the minimum problem w.r.t. $y$ and then, on the third step, for $z$.
In your case: $a_1 = 3, b_1 = 8y+5z$, so you put
$$
x = -\frac{1}{6}(8y+5z)
$$
and obtain a function
$$
\frac{1}{12}(20 y^2-56 y z-z^2)
$$
which certainly can go below zero due to the negativity of the coefficient with $z^2$.
Finally, the strict inequality never holds, since any quadratic function is equal to zero in the origin.
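The indefiniteness can be exhibited with two explicit points; the negative witness $x=-5/6$, $y=0$, $z=1$ comes from the substitution above (giving $-1/12$):

```python
from fractions import Fraction as F

def q(x, y, z):
    # the quadratic form from the question
    return 3*x*x + 8*x*y + 5*x*z + 2*y*z + 7*y*y + 2*z*z

print(q(F(1), F(0), F(0)))      # 3      (positive)
print(q(F(-5, 6), F(0), F(1)))  # -1/12  (negative): the form is indefinite
```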
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 2
} |
Fermat's last theorem -- Google and PCMag.com In recognition of Fermat's 410th birthday, Google ha(s/d) a special google-doodle with Fermat's last theorem.
The first link point(s/ed) to an article on PCMag.com which states:
In time, Fermat was considered to be the founder of the modern number theory. He came up with Fermat's Last Theorem, which states that $x^n + y^n = z^n$.
Am I missing something or is the PCMag article missing something?
| PCMag is missing something. Fermat's Last Theorem is that for integers $x$, $y$, $z$, and $n$, with $n > 2$, $x^n + y^n \ne z^n$ (provided that $x$, $y$, and $z$ are nonzero).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Solve this Inequality I am not sure how to solve this equation. Any ideas
$$(1+n) + 1+(n-1) + 1+(n-2) + 1+(n-3) + 1+(n-4) + \cdots + 1+(n-n) \ge 1000$$
Assuming $1+n = a$
The equation can be made to looks like
$$a+(a-1)+(a-2)+(a-3)+(a-4)+\cdots+(a-n) \ge 1000$$
How to proceed ahead, or is there another approach to solve this?
| I will do it as the following,
$$
\begin{align}
&(1+n) + 1+(n-1) + 1+(n-2) + 1+(n-3) + 1+(n-4) + \cdots + 1+(n-n)\\
=&(1+n) + [1+(n-1) + 1+(n-2) + 1+(n-3) + 1+(n-4) + \cdots + 1+(n-n)]\\
=&(n+1) + [n+(n-1)+\cdots+2+1]\\
=&\frac{(n+1)[1+(n+1)]}{2}\\
=&\frac{(n+1)(n+2)}{2}
\end{align}
$$
Now you can go on with Rolando's answer.
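With the sum in closed form, the remaining step is just to find the least $n$ with $(n+1)(n+2)/2 \ge 1000$:

```python
# smallest n with (n+1)(n+2)/2 >= 1000
n = 0
while (n + 1) * (n + 2) // 2 < 1000:
    n += 1
print(n)   # 44, since 45*46/2 = 1035 >= 1000 while 44*45/2 = 990 < 1000
```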
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Parameterizing a curve Edited...
I have a Cartesian equation of a cycloid:
$$\arcsin\left(k\sqrt{y(x)}\right) - k\sqrt{y(x)-k^2y(x)^2} + c = x$$
where $k$ and $c$ are constants.
How might I parameterize it so that I get the usual parameterizations, i.e.
$$\begin{align*}
x&=r(t-\sin{t})\\
y&=r(1-\cos{t})
\end{align*}$$?
Thanks in advance!
I'll give you a way to check if something your friend gave you is a cycloid: the Whewell equation (the equation relating tangential angle $\phi$ and arclength $s$) for the cycloid is
$$s=k\,\sin\,\phi$$
where $k$ is a constant (which is proportional to the radius of the rolling circle). Derive the required expressions for arclength and tangential angle from your parametric equations, and see if the cycloid's Whewell equation holds.
Alternatively, the Cesàro equation (which relates curvature $\kappa$ and arclength) for the cycloid is
$$\frac1{\kappa^2}+s^2=c$$
where $c$ is a constant (that is also related to the radius of the rolling circle). Substitute the appropriate expressions for the curvature and arclength to verify if you have a cycloid.
Both equations are so-called intrinsic equations; equations that depend only on the nature of the curve, and not its orientation/position in the plane.
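For the standard parametrization $x=r(t-\sin t)$, $y=r(1-\cos t)$ the Cesàro relation can be checked numerically; with arclength measured from the arch's apex ($t=\pi$) the constant works out to $16r^2$, i.e. $\rho^2+s^2=16r^2$ where $\rho=1/\kappa$ (my choice of conventions):

```python
import math

r = 2.0

def speed(t):
    # |(dx/dt, dy/dt)| for x = r(t - sin t), y = r(1 - cos t)
    return math.hypot(r * (1 - math.cos(t)), r * math.sin(t))

def arclength_from_apex(t, n=4000):
    # signed arclength from t = pi, by the trapezoid rule
    h = (t - math.pi) / n
    return h * (0.5 * speed(math.pi) + 0.5 * speed(t)
                + sum(speed(math.pi + i * h) for i in range(1, n)))

def curvature_radius(t):
    # rho = (x'^2 + y'^2)^(3/2) / |x' y'' - y' x''|
    xp, yp = r * (1 - math.cos(t)), r * math.sin(t)
    xpp, ypp = r * math.sin(t), r * math.cos(t)
    return (xp * xp + yp * yp) ** 1.5 / abs(xp * ypp - yp * xpp)

for t in (1.0, 2.0, 2.5):
    s = arclength_from_apex(t)
    rho = curvature_radius(t)
    print(abs(rho ** 2 + s ** 2 - 16 * r ** 2) < 1e-3)   # True
```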
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Reduction modulo: $\mathbb{Z}[x]\to\mathbb{Z}/p\mathbb{Z}[x]$ Let
$$f(x)=\sum_{i}a_ix^i\in\mathbb{Z}[x],$$
and
$$[f]_{p}(x)=\sum_{i}[a_i]_{p}x^i\in\mathbb{Z}/p\mathbb{Z}[x].$$
Is it true that:
$$f(\xi)=0\Rightarrow [f]_{p}(\xi)=0,$$
where $\xi$ is some root of unity.
i.e. is it true proof:
$$[f]_{p}(\xi)=\sum_{i}[a_i]_{p}{\xi}^i=\sum_{i}{a}_i{\xi}^i-\sum_{i}(a_i-[a_i]_{p}){\xi}^i=0-p\cdot\alpha=0-0\cdot\alpha=0.$$
Thanks.
| If $\xi$ is any algebraic integer, then there is a ring homomorphism $\mathbb{Z}[\xi] \rightarrow \bar{\mathbb{F}}_p$ (actually taking values in $\mathbb{F}_{p^n}$ for some $n$).
Just take a maximal ideal $\mathfrak{m}$ of $\mathbb{Z}[\xi]$ containing $p$ (there is at least one thanks to Zorn's lemma because $p$ is not invertible), then $\mathbb{Z}[\xi]/\mathfrak{m} \simeq \mathbb{F}_{p^n}$ for some $n$.
This ring homomorphism lifts the one you described ($\mathbb{Z} \rightarrow \mathbb{F}_p$), and allows to give a meaning to your computation.
Note that there can be more than one such lift (i.e. more than one maximal ideal containing $p$).
In the special case that $\xi$ is a $p^{\text{something}}$-th or $k$-th ($k$ dividing $p-1$) root of unity, $n$ is equal to $1$.
In the first case, there is a unique lift, but in the second case there are several if $k>1$.
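A concrete instance of the reduction: let $\xi$ be a primitive cube root of unity, a root of $f(x)=x^2+x+1$. For primes $p\equiv 1 \pmod 3$, $\mathbb{F}_p$ contains elements of order $3$, and $f$ vanishes on each of them:

```python
def nontrivial_cube_roots(p):
    # elements of F_p with a^3 = 1 and a != 1
    return [a for a in range(2, p) if pow(a, 3, p) == 1]

def f(a):
    # minimal polynomial of a primitive cube root of unity
    return a * a + a + 1

for p in (7, 13, 31):                 # primes with p = 1 (mod 3)
    roots = nontrivial_cube_roots(p)
    print(p, roots, all(f(a) % p == 0 for a in roots))
# 7 [2, 4] True
# 13 [3, 9] True
# 31 [5, 25] True
```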
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Order of an element in a group
Explain why the external direct products $\mathbb{Z}_8 \times \mathbb{Z}_4$ and $\mathbb{Z}_{80000000}\times \mathbb{Z}_{4000000}$ have the same number of elements of order 4
I am thinking as for every divisor $d$ of the order of the group, there are $\phi(d)$ (Euler's $\phi$) elements of order $d$, in both the external direct products the number of elements of order $4$ are the same. Please suggest if this logic is ok.
| First:
Lemma 1. Let $A$ and $B$ be groups. An element $(a,b)\in A\times B$ has order $n$ if and only if $\mathrm{lcm}(\mathrm{order}(a),\mathrm{order}(b)) = n$.
Proof. If $(a,b)$ has exponent $n$, then $(1,1) = (a,b)^n = (a^n,b^n)$, so $a^n=1$, $b^n=1$, hence $\mathrm{order}(a)|n$ and $\mathrm{order}(b)|n$. Thus, $\mathrm{lcm}(\mathrm{order}(a),\mathrm{order}(b))|n$. So the order of $(a,b)$ is a multiple of $\mathrm{lcm}(\mathrm{order}(a),\mathrm{order}(b))$.
Conversely, if $k=\mathrm{lcm}(\mathrm{order}(a),\mathrm{order}(b))$, then $a^k=1$ and $b^k=1$ (since $k$ is a multiple of the orders), so $(a,b)^k = (1,1)$. Thus, the order of $(a,b)$ divides $\mathrm{lcm}(\mathrm{order}(a),\mathrm{order}(b))$. QED
Lemma 2. Let $G$ be a group, and let $g\in G$. If the order of $g$ is $n$ ($n=0$ if $g$ is of infinite order), and $k\gt 0$, then the order of $g^k$ is $n/\gcd(n,k)$.
Proof. Let $\gcd(n,k)=d$, and write $n=dm$, $k=d\ell$, $\gcd(m,\ell)=1$. Then $n/\gcd(n,k)=m$. Since $(g^k)^m = g^{km}$, and $km=d\ell m = n\ell$, then $(g^k)^m = (g^n)^{\ell} = 1$. So the order of $g^k$ divides $n/\gcd(n,k)$. On the other hand, if $(g^{k})^a = 1$, then $n|ka$, hence $dm|d\ell a$, hence $m|\ell a$. Since $\gcd(m,\ell)=1$, it follows that $m|a$, so $n/\gcd(n,k)|a$. Thus, the order of $g^k$ is a multiple of $n/\gcd(n,k)$. QED
Corollary. If $G$ is a cyclic group of order $n\gt 0$, then the number of elements of order $d$ in $G$ is $0$ if $d$ does not divide $n$, and $\varphi(d)$ (Euler's $\varphi$) if $d|n$.
Proof. Let $x$ be a generator of $G$. If $d$ doesn't divide $n$, then no element can have order $d$ and we are done. Suppose then that $d$ divides $n$. Then $n=dk$. By Lemma 2, an element $x^a\in G$ has order $d$ if and only if $d=n/\gcd(a,n)$, if and only if $\gcd(a,n)=k$. Thus the question reduces to asking how many $a$, $0\leq a\lt n$, satisfy $\gcd(a,n)=k$. Such an $a$ must be of the form $a=km$ with $0\leq m\lt n/k = d$, and $\gcd(m,n/k)=\gcd(m,d)=1$. Thus, the number is precisely $\varphi(d)$, the number of nonnegative integers smaller than $d$ and relatively prime to $d$. QED
Now, consider the case of $A\times B$ and elements of order $4$. $(a,b)\in A\times B$ has order $4$ if and only if the lowest common multiple of the orders of $a$ and $b$ is $4$, if and only if:
*
*$a$ and $b$ both have order $4$; or
*$a$ has order $2$ and $b$ has order $4$; or
*$a$ has order $4$ and $b$ has order $2$; or
*$a$ has order $1$ and $b$ has order $4$; or
*$a$ has order $4$ and $b$ has order $1$.
That is, one entry has order $4$, and the other entry has exponent $4$.
In both your cases, since $4$ divides the order of each of the two factors (in both products), you have $\varphi(4)$ elements of order $4$ in each factor group, and $\varphi(4)+\varphi(2)+\varphi(1)$ elements of exponent $4$ in each factor group. So the counts for elements of order $4$ in the products are the same: $2\varphi(4)(\varphi(4)+\varphi(2)+\varphi(1)) - \varphi(4)^2$.
(If you want $a$ to have order $4$, you have $\varphi(4)$ ways of choosing $a$; then you have $\varphi(4)+\varphi(2)+\varphi(1)$ ways of choosing $b$ of exponent $4$. The same analysis holds if you first decide that $b$ has order $4$ instead and $a$ has exponent $4$. However, this counts the case in which both $a$ and $b$ have order $4$ twice, which occurs in $\varphi(4)\times\varphi(4)$ ways, so we subtract it once to get the inclusion-exclusion count right.)
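The count $2\varphi(4)(\varphi(4)+\varphi(2)+\varphi(1)) - \varphi(4)^2 = 12$ can be confirmed by counting pairs of element orders $(d_1,d_2)$ with $\operatorname{lcm}(d_1,d_2)=4$, using the Corollary:

```python
from math import gcd

def phi(d):
    # Euler's phi by direct count (fine for small d)
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def count_of_order(n1, n2, target):
    # number of (a, b) in Z_n1 x Z_n2 with lcm(ord a, ord b) = target;
    # Z_n has phi(d) elements of order d for each divisor d of n
    total = 0
    for d1 in range(1, target + 1):
        for d2 in range(1, target + 1):
            if n1 % d1 == 0 and n2 % d2 == 0 and d1 * d2 // gcd(d1, d2) == target:
                total += phi(d1) * phi(d2)
    return total

print(count_of_order(8, 4, 4))               # 12
print(count_of_order(80000000, 4000000, 4))  # 12
print(2 * phi(4) * (phi(4) + phi(2) + phi(1)) - phi(4) ** 2)  # 12
```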
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finite Subgroups of GL(n,R) A nice result about $GL(n,\mathbb{Z})$ is that it has finitely many finite subgroups upto isomorphism; and also any finite subgroup of $GL(n,\mathbb{Q})$ is conjugate to a subgroup of $GL(n,\mathbb{Z})$.
Next, I would like to ask natural question, what can be said about finite subgroups of $GL(n,\mathbb{R})$, $GL(n,\mathbb{C})$; at least for $n=2$ . Does every finite group can be embedded in $GL(2,\mathbb{R})$? (I couldn't find some reference for this.)
Only thing I convinced about $GL(2\mathbb{R})$ is that it contains elements of every order and hence cyclic groups of all finite order.
| The relationship between Z and Q is very different from the relationship between R and C.
Not every finite subgroup of GL(2,C) can be embedded in GL(2,R).
Finite subgroups of GL(2,R) have a faithful real character of degree 2 whose irreducible components have Frobenius-Schur index 1, while finite subgroups of GL(2,C) have a faithful character of degree 2.
The group C4 × C4 for instance has no faithful real character of degree 2, so is isomorphic to a subgroup of GL(2,C) but not isomorphic to a subgroup of GL(2,R).
The smallest counterexample is the quaternion group of order 8.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 0
} |
When can a random variable be written as a sum of two iid random variables? Suppose $X$ is a random variable; when do there exist two random variables $X',X''$, independent and identically distributed, such that $$X = X' + X''$$
My natural impulse here is to use Bochner's theorem but that did not seem to lead anywhere. Specifically, the characteristic function of $X$, which I will call $\phi(t)$, must have the property that we can a find a square root of it (e.g., some $\psi(t)$ with $\psi^2=\phi$) which is positive definite. This is as far as I got, and its pretty unenlightening - I am not sure when this can and can't be done. I am hoping there is a better answer that allows one to answer this question merely by glancing at the distribution of $X$.
| This Wikipedia article addresses precisely this question, with some nice examples, but doesn't actually answer the question: http://en.wikipedia.org/wiki/Indecomposable_distribution
There is a book titled Characteristic Functions, by Eugene Lukacs, which addresses many questions of this kind.
If you want to address only infinite divisibility, rather than the (possibly more complicated) questions of finite divisibility, then one necessary condition for infinite divisibility is that all of the even-order cumulants must be non-negative. PS: That all of the even-order cumulants must be non-negative is true of compound Poisson distributions, but now I think probably not of infinitely divisible distributions in general.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 1
} |
Norm-Euclidean rings? For which integer $d$ is the ring $\mathbb{Z}[\sqrt{d}]$ norm-Euclidean?
Here I'm referring to $\mathbb{Z}[\sqrt{d}] = \{a + b\sqrt{d} : a,b \in \mathbb{Z}\}$, not the ring of integers of $\mathbb{Q}[\sqrt{d}]$.
For $d < 0$, it is easy to show that only $d = -1, -2$ suffice; but what about $d>0$?
Thanks.
| Your exact question has already been resolved and is today a simple matter of looking it up in the OEIS and sifting out the numbers of the form $4k + 1$.
$$d = 2, 3, 6, 7, 11 \textrm{ or } 19$$
But it was a long, slow process throughout the 20th century to arrive at this answer, as Ian Stewart and David Tall explain in their excellent book Algebraic Number Theory and Fermat's Last Theorem.
Here I'm referring to $\mathbb{Z}[\sqrt{d}] = \{a + b\sqrt{d} : a, b \in \mathbb{Z}\}$, not the ring of integers of $\mathbb{Q}[\sqrt{d}]$.
Because you said this, it's necessary to sift out the numbers of the form $4k + 1$. Stewart & Tall (and many other authors in other books) show that if a domain is Euclidean then it is a principal ideal domain and a unique factorization domain (the converse doesn't always hold, but that's another story).
So if $d > 5$, $d \equiv 1 \pmod 4$ and $$m = \frac{d - 1}{4},$$ (that's an integer) then $d - 1 = 2^2 m = (-1)(1 - \sqrt d)(1 + \sqrt d)$ represents two distinct factorizations of the same number, which means that $\mathbb Z[\sqrt d]$ is not a unique factorization domain, which in turn means it can't be Euclidean and certainly not norm-Euclidean.
If you had asked about Euclidean in general, you question would be significantly more difficult. The Euclidean function for $\mathcal O_{\mathbb Q(\sqrt{69})}$ was only recently (just a couple of decades ago) discovered: the norm function requires two adjustments and is somewhat frustrating for practical applications.
And it has been proven that $\mathbb Z[\sqrt{14}]$ is Euclidean but not norm-Euclidean, but no one on Math.SE seems to know what the Euclidean function might be (it has been asked).
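For $d=5$ the failure of unique factorization is concrete: $4 = 2\cdot 2 = (\sqrt5-1)(\sqrt5+1)$ in $\mathbb{Z}[\sqrt5]$, and $2$ is irreducible there because no element has norm $\pm 2$ (indeed $a^2-5b^2\equiv\pm2 \pmod 5$ is impossible, since $\pm2$ are not squares mod $5$). A brute-force confirmation:

```python
def norm(a, b):
    # N(a + b*sqrt(5)) = a^2 - 5 b^2
    return a * a - 5 * b * b

# the two factorizations of 4: N(2) = 4, N(sqrt5 - 1) = N(sqrt5 + 1) = -4
print(norm(2, 0), norm(-1, 1), norm(1, 1))   # 4 -4 -4

# no element of norm +-2 in a search window (true for all a, b by the mod-5 argument)
hits = [(a, b) for a in range(-60, 61) for b in range(-60, 61)
        if abs(norm(a, b)) == 2]
print(hits)                                   # []
```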
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 0
} |
Prove this number fact Prove that $x \neq 0,y \neq 0 \Rightarrow xy \neq 0$.
Suppose $xy = 0$. Then $\frac{xy}{xy} = 1$. Can we say that $\frac{xy}{xy} = 0$ and hence $1 = 0$ which is a contradiction? I thought $\frac{0}{0}$ was undefined.
Suppose that $xy=0$. From the basic axioms we know that a product is zero only when at least one of the factors is zero. So $x=0$, or $y=0$, or both, which contradicts the given condition that neither $x$ nor $y$ is zero. Hence their product is not zero either.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
Is $\mathbb{Q}[2^{1/3}]$ a field? Is $\mathbb{Q}[2^{1/3}]=\{a+b2^{1/3}+c2^{2/3};a,b,c \in \mathbb{Q}\}$ a field?
I have checked that $b2^{1/3}$ and $c2^{2/3}$ both have inverses, $\frac{2^{2/3}}{2b}$ and $\frac{2^{1/3}}{2c}$, respectively.
There are some elements with $a,b,c \neq 0$ that have inverses, such as $1+1*2^{1/3}+1*2^{2/3}$, whose inverse is $2^{1/3}-1$.
My problem is that I can't seem to find a formula for the inverse, but I also can't seem to find an element that doesn't have one.
Thanks for your time.
| It is a field, and you don't need to find an inverse for each element to prove that each element has an inverse. You can prove that if $\alpha$ is in the set then $\lbrace\,1,\alpha,\alpha^2,\alpha^3\,\rbrace$ is a linearly dependent set over the rationals, then deduce that $\alpha$ satisfies an equation of degree (at most) 3 over the rationals, then if $A\alpha^3+B\alpha^2+C\alpha+D=0$ you have $\alpha(A\alpha^2+B\alpha+C)=-D$, from which you can see an inverse to $\alpha$.
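To make that argument concrete, here is a small sketch of my own (function names are mine): elements are stored as coefficient triples $(a,b,c)$ with respect to the basis $1, 2^{1/3}, 2^{2/3}$, and an inverse is found by solving a $3\times 3$ linear system over the rationals.

```python
from fractions import Fraction as F

def mul(u, v):
    """Multiply a + b*t + c*t^2 elements, where t = 2^(1/3), so t^3 = 2."""
    a1, b1, c1 = u
    a2, b2, c2 = v
    return (a1*a2 + 2*(b1*c2 + c1*b2),
            a1*b2 + b1*a2 + 2*c1*c2,
            a1*c2 + b1*b2 + c1*a2)

def inverse(u):
    """Solve u*x = 1 as a 3x3 rational linear system (Gauss-Jordan)."""
    # columns of the matrix are u*1, u*t, u*t^2
    cols = [mul(u, e) for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    M = [[F(cols[j][i]) for j in range(3)] + [F(i == 0)] for i in range(3)]
    for i in range(3):
        piv = next(r for r in range(i, 3) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(3):
            if r != i:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return (M[0][3], M[1][3], M[2][3])

alpha = (1, 1, 1)                               # 1 + 2^(1/3) + 2^(2/3)
print(tuple(int(x) for x in inverse(alpha)))    # (-1, 1, 0)
```

Running `inverse((1, 1, 1))` recovers the inverse $2^{1/3}-1$ mentioned in the question.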
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
How to calculate Maximum or Minimum of two numbers without using if? How do you calculate the maximum or minimum of two numbers without using "if" (or anything equivalent to it)?
The above question is often asked in introductory computer science courses and is answered using this method.
Now although it is not obvious, using absolute value is also equivalent to using an if statement, e.g. defining Max(a,b) = a if a>b else b;
Besides using limits, is there another way of finding the maximum or minimum of two numbers?
| Whether you need an if statement to take an absolute value depends on the format in which your numbers are stored. If you're using IEEE floating point numbers, you can take the absolute value by masking out the sign bit.
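To illustrate (a sketch of my own; the helper names are mine, and this assumes IEEE-754 doubles), one can combine the sign-bit masking idea with the identity $\max(a,b)=\frac{a+b+|a-b|}{2}$:

```python
import struct

def abs_via_sign_bit(x: float) -> float:
    """|x| for an IEEE-754 double, computed by masking out the sign bit."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits &= (1 << 63) - 1              # clear bit 63, the sign bit
    (result,) = struct.unpack("<d", struct.pack("<Q", bits))
    return result

def branchless_max(a: float, b: float) -> float:
    """max(a, b) via (a + b + |a - b|) / 2, with no comparison or branch."""
    return (a + b + abs_via_sign_bit(a - b)) / 2

print(branchless_max(3.5, -2.0))       # 3.5
```

Note that $a+b$ and $a-b$ can overflow to infinity for inputs near the extremes of the double range, so this is a demonstration rather than a production routine.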
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 0
} |
Ratio of circumference to radius I know that the ratio of the circumference to the diameter is Pi - what about the ratio of the circumference to the radius? Does it have any practical purpose when we have Pi? Is it called something (other than 2 Pi)?
| The ratio of the circumference to the radius is $2\pi$, which some people call "One turn". I think you would enjoy reading this article: "$\pi$ is wrong!" by Bob Palais. Other people call $2\pi$ by the name of Tau. See this page: http://tauday.com
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
$T: V \to V$ non-negative linear transformation: There's $S: V \to V$ , so that $S^6=T$ Let $V$ be a finite-dimensional Euclidean space, and $T: V \to V$ a non-negative linear transformation.
I need to prove that there's another non-negative transformation, $S: V \to V$ , so that $S^6=T$.
If I could know that $T$ is diagonal, so It will be easy to prove, because $T$ is non-negative.
Thanks
| First of all, as the OP made precise in the comments, the assumption on $T$ means that its complex spectrum is included in $]0,+\infty[$.
By Jordan's theorem, we know that there exits non-negative real numbers $\lambda_1, \ldots, \lambda_r$, $\mu_1, \ldots, \mu_s$ such that the matrix of $T$ in some basis is given by $\mathrm{diag}(\lambda_1, \ldots, \lambda_r,J(\mu_1), \ldots, J(\mu_s))$, where
$J(\mu) = \begin{pmatrix} \mu & 1 & 0 & \ldots & 0 \\
0 & \mu & 1 & \ldots &0 \\
\vdots & & & &\vdots \\
0 & & \ldots && \mu \end{pmatrix}$
(the size of the matrix may vary with the index, of course).
We want to find a sixth-root of $T$, so it is enough to do it block by block. For the first part, $\mathrm{diag}(\lambda_1, \ldots, \lambda_r)$, it is clear. So we have to do it for $J(\mu)$ now.
The key observation is that $J(\mu)^2$ is similar to $J(\mu^2)$. Indeed, $J(\mu)^2=(\mu I+J)^2=\mu^2 I + (\mu J + J^2)$, where $J=J(0)$ with the previous notations. To see that the nilpotent matrices $J$ and $\mu J+J^2$ lie in the same conjugacy class, it suffices to check that $\dim(\mathrm{Ker}(J)^k)=\dim(\mathrm{Ker}(\mu J+J^2)^k)$ for all $k$ integer. But this is clear because $\mu J + J^2= J(\mu I + J))$ and $\mu I + J$ is invertible.
So by iteration $J(\mu^6)$ is similar to $J(\mu)^6$, and therefore $J(\mu)$ admits a sixth root, which will be conjugate to $J(\sqrt[6]{\mu})$.
Remark. Note that it is important to assume that no $\mu$ is zero. Else, the result fails to be true. The optimal assumption would thus be something like $0$ has maximal multiplicity as eigenvalue of $T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/58966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$A\in M_n(\mathbb C)$ invertible and non-diagonalizable matrix. Prove $A^{2005}$ is not diagonalizable $A\in M_n(\mathbb C)$ invertible and non-diagonalizable matrix. I need to prove that $A^{2005}$ is not diagonalizable as well. I am asked as well if Is it true also for $A\in M_n(\mathbb R)$. (clearly a question from 2005).
This is what I did: If $A\in M_n(\mathbb C)$ is invertible, then $0$ is not an eigenvalue. We can look at its Jordan form, since we are over $\mathbb C$; it is not nilpotent, because $0$ is not an eigenvalue, and it has at least one $1$ on its superdiagonal, because $A$ is not diagonalizable. Let $P$ be the matrix of a Jordan basis, so $P^{-1}AP=J$ and $P^{-1}A^{2005}P=J^{2005}$, but this leads me nowhere.
I tried to suppose that $A^{2005}$ is diagonalizable and than we have this $P^{-1}A^{2005}P=D$
When D is diagonal and we can take 2005th root out of each eigenvalue, but how can I show that this is what A after being diagonalizable suppose to look like, for as contradiction?
Thanks
| If $A^m$ is diagonalizable, then $A^m$ is cancelled by its minimal polynomial $P$, which has simple roots. Therefore $A$ is cancelled by $P(X^m)$ which has simple roots because $P(0)\neq 0$ ($A$ is invertible).
Indeed, if $P(X)=\prod (X-\lambda_i)$, then $P(X^m)=\prod (X^m-\lambda_i)$, whose roots are all the $m$-th roots of the $\lambda_i$, which differ from one another (if $\mu_i$ is an $m$-th root of $\lambda_i$, then $\mu_i\neq \mu_j$, else $\lambda_i=\mu_i^m=\mu_j^m=\lambda_j$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
A sufficient condition for linearity? If $f$ is a linear function (defined on $\mathbb{R}$), then for each $x$, $f(x) – xf’(x) = f(0)$. Is the converse true? That is, is it true that if $f$ is a differentiable function defined on $\mathbb{R}$ such that for each $x$, $f(x) – xf’(x) = f(0)$, then $f$ is linear?
| The following argument uses not much machinery.
Suppose that $f(0)=b$ and $f'(0)=m$. Let $g(x)=f(x)-(mx+b)$. Then $g(0)=0$ and $g'(0)=0$.
Note that
$$g(x)-xg'(x)=[f(x)-(mx+b)] -x(f'(x)-m)=0=g(0).$$
So $xg'(x)-g(x)=0$ for all $x$.
For $x \ne 0$, let $h(x)=g(x)/x$.
Then for any $x \ne 0$, we have
$$h'(x)=\frac{xg'(x)-g(x)}{x^2}=0.$$
It follows that $h(x)$ is a constant $p$ on $(0, \infty)$, and a constant $n$ on $(-\infty,0)$.
Thus $g(x)=px$ on $(0,\infty)$ and $g(x)=nx$ on $ (-\infty,0)$.
But $g'(x)=0$. So
$$\lim_{x\to 0+} \frac{px-0}{x-0}=0.$$
It follows that $p=0$. In the same way we can show that $n=0$. So $g(x)$ is identically $0$, and therefore $f(x)=mx+b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
Prove that all even integers $n \neq 2^k$ are expressible as a sum of consecutive positive integers How do I prove that any even integer $n \neq 2^k$ is expressible as a sum of positive consecutive integers (more than 2 positive consecutive integers)?
For example:
14 = 2 + 3 + 4 + 5
84 = 9 + 10 + ... + 15
n = sum (k + k+1 + k+2 + ...)
n ≠ 2^k
| The sum of the integers from $1$ to $n$ is $n(n+1)/2$. The sum of the integers from $k$ to $k+n$ is then
$$\begin{align*}
k+(k+1)+\cdots+(k+n) &= (n+1)k + 1+\cdots+n\\
& = (n+1)k + \frac{n(n+1)}{2} \\
&= \frac{(n+1)(2k+n)}{2}.\end{align*}$$
Therefore, $a$ can be expressed as the sum of consecutive integers if and only if $2a$ can be factored as $(n+1)(2k+n)$.
Suppose that $a$ is a power of $2$. Then $2a$ is a power of $2$, so $(n+1)(2k+n)$ must be a power of $2$. If we want to avoid negatives, and also avoid the trivial expression as a sum with one summand, we must have $n\geq 1$ and $k\gt 0$. But the parities of $n+1$ and of $2k+n$ are opposite, so this product cannot be a power of $2$ unless either $n+1=1$ (which requires $n=0$) or $2k+n=1$ (which requires $k=0$). Thus, no power of $2$ can be expressed as a sum of at least two consecutive positive integers. In particular, $8$, $16$, $32$, etc cannot be so expressed.
On the other hand, suppose that $a$ is even but not a power of $2$. If we can write $2a = pq$ with $p\gt 1$ and odd, $q$ even, and $q\geq p+1$, then setting $n=p-1$ and $k=(q-p+1)/2$ gives the desired decomposition. If this cannot be done, then every time we factor $2a$ as $pq$ with $p\gt 1$ odd, we have $q\lt p+1$. Then we can set $n=q-1$ and $k = (p+1-q)/2$.
Thus, the powers of $2$ are the only even numbers that are not expressible as the sum of at least two consecutive positive integers.
Added. The OP has now excluded powers of $2$, but has also required that the sum contains strictly more than two summands; i.e., $k\gt 0$ and $n\gt 1$. With the above decompositions, the only case in which we could have $n=1$ is if $2a=pq$ with $p$ odd, $p\gt 1$, and $q=2$. But this is impossible, since $a$ is assumed to be even, and this leads to $2a = 2p$ with $p$ odd.
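The constructive half of the argument translates directly into code. The following is my own sketch (names are mine); it returns a starting value $k$ and a length parameter $n$ with $a = k + (k+1) + \cdots + (k+n)$, or `None` for powers of $2$. The decomposition found need not match the ones in the question's examples.

```python
def consecutive_sum(a):
    """Follow the proof: factor 2a = (n+1)(2k+n) using an odd factor p > 1.
    Returns (k, n) with a = k + (k+1) + ... + (k+n), or None (powers of 2)."""
    for p in range(3, 2 * a + 1, 2):          # odd factor p > 1 of 2a
        if (2 * a) % p:
            continue
        q = 2 * a // p
        if q >= p + 1:
            n, two_k = p - 1, q - p + 1       # take n+1 = p, 2k+n = q
        else:
            n, two_k = q - 1, p + 1 - q       # take n+1 = q, 2k+n = p
        if two_k > 0 and two_k % 2 == 0 and n >= 1:
            return two_k // 2, n
    return None

print(consecutive_sum(14))    # (2, 3): 14 = 2 + 3 + 4 + 5
print(consecutive_sum(16))    # None: 16 is a power of 2
```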
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 3
} |
Formula for $1^2+2^2+3^2+...+n^2$ For example, to get a formula for $1^2+2^2+3^2+...+n^2$ they express $f(n)$ as:
$$f(n)=an^3+bn^2+cn+d$$ It is also known that $f(0)=0$, $f(1)=1$, $f(2)=5$ and $f(3)=14$.
Then these values are inserted into the function; we get a system of equations, solve it for the coefficients $a,b,c,d$, and obtain $$f(n)=\frac{n}{6}(2n+1)(n+1)$$
Then it's proven with mathematical induction that it's true for any n.
And the question is: why do they take 4 coefficients at the beginning? Why not $f(n)=an^2+bn+c$, or even more? How do they know that 4 will be enough to get the correct formula?
| As the comments indicate, there are many possibilities. One first assumes that it's a polynomial (perhaps for no good reason, you know?). One might consider that the polynomial grows asymptotically like a cubic from the integral.
Or one might repeatedly apply the difference operator. For example, the first few values are 0, 1, 5, 14, 30, 55... Then the first set of differences is 1, 4, 9, 16, 25... The second set of differences is 3, 5, 7, 9... The third set of differences is 2, 2, 2, 2, 2...
Thus we suspect a cubic polynomial will do.
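The repeated-difference heuristic is easy to mechanize; this is my own small sketch applied to the sequence $0, 1, 5, 14, 30, 55, \ldots$ from above:

```python
def difference_table(values):
    """Successive rows of finite differences of a sequence."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

f = [sum(i * i for i in range(1, n + 1)) for n in range(7)]
rows = difference_table(f)
print(rows[0])    # [0, 1, 5, 14, 30, 55, 91]
print(rows[3])    # [2, 2, 2, 2] -- constant third differences: degree 3
```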
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
A Combinatoric Probability Problem, my program seems to give wrong answer say, I have 3 red balls, 4 yellow balls, 5 blue balls. totally 12 balls
now I randomly arrange the balls, a1...a12
a good layout is such: a1 = a12 or a1 = a11 or a2 = a12 (= means same color)
what's the probability of gaining a good layout?
I believe it is about 66%=0.2879*3-0.06818*2-0.07273+0.01212 but my program (10000 random runs) gives 44%...
Update
My Maths is correct, my program has a bug..., now corrected.
| There are ${12\choose 2}=66$ ways to choose a pair of balls, and
${3\choose 2}+{4\choose 2}+{5\choose 2}=19$ ways for them to have
the same color. Thus,
$$\mathbb{P}(a_1=a_{11})=\mathbb{P}(a_1=a_{12})=\mathbb{P}(a_2=a_{12})={19\over 66}.$$
Similarly, we have
$$\mathbb{P}(a_1=a_{11}=a_{12})=\mathbb{P}(a_1=a_2=a_{12})={3\over 44},$$
and
$$\mathbb{P}([a_1=a_{11}]\cap [a_2=a_{12}]) = {14\over 165}.$$
Finally,
$$\mathbb{P}([a_1=a_{11}=a_2=a_{12}]) = {2\over 165}.$$
By inclusion-exclusion, the chance of getting a good configuration is
$$3\cdot {19\over66}-2\cdot {3\over44}-{14\over 165}+{2\over 165}={36 \over55}=.65454.$$
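Since the question arose from a buggy simulation, here is a minimal simulation of my own (names and seed are arbitrary) that reproduces $36/55 \approx 0.6545$:

```python
import random

def good(layout):
    # a1 = a12  or  a1 = a11  or  a2 = a12, with 0-based indices
    return (layout[0] == layout[11] or layout[0] == layout[10]
            or layout[1] == layout[11])

balls = ["r"] * 3 + ["y"] * 4 + ["b"] * 5
random.seed(42)
trials = 100_000
hits = sum(good(random.sample(balls, 12)) for _ in range(trials))
print(hits / trials)               # ≈ 36/55 ≈ 0.6545
```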
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Maximizing the sum of two numbers, the sum of whose squares is constant
How could we prove that if the sum of the squares of two numbers is a constant, then the sum of the numbers would have its maximum value when the numbers are equal?
This result is also true for more than two numbers. I tested the results by taking various values and brute-force-ish approach. However I am interested to know a formal proof of the same.
This is probably a mild extension to this problem. I encountered this result while searching for an easy solution for the same.
| You can see this quite readily in the theorem of Thales: An angle inscribed
in a semicircle is a right angle.
The constant in your problem is the square of the hypotenuse, and the sum of the other two sides is maximized when the altitude of the inscribed triangle is maximized.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 1
} |
Demonstrations - Calculating the area under a curve - integration - functions of one variable Please, I would like help with this issue. I tried the principle of mathematical induction but could not make it work.
Consider the following:
Let the function $$f:[a,b]\subset\mathbb{R}\to \mathbb{R},$$
The partition of the interval $ [a, b] $:
$$a={x_0} < {x_1} < {x_2} < ... < {x_{n - 1}} < {x_n} = b,$$
subinterval length:
$$\Delta x = \frac{{b - a}}{n},$$
the sum below:
$$A({R_n}) = \sum\limits_{i = 0}^{n - 1} {f({x_i})\Delta x} $$
and the upper sum
$$A({S_n}) = \sum\limits_{i = 1}^n {f({x_i})\Delta x}.$$
Well, now I want to resolve this problem:
Let $A_a^b(x^m)$ be the area under the curve $f(x) = x^m$ over the closed interval $[a, b]$.
Prove that $$A_a^b(x^m)=\frac{b^{m+1}}{m+1}-\frac{a^{m+1}}{m+1}$$ whenever $m \geqslant 0$.
| The wording of the problem suggests to me that he has been given the partition to work with; he cannot use another one. To OP: Below I give a lemma that is crucial for solving the problem with the particular choice of partition of the interval given in the question. I leave setting up the Riemann sums, learning Big-Oh notation and filling in any details for steps you do not see immediately to you.
Proof by Induction that $ \sum_{k=1}^n k^{p-1} = \frac{n^p}{p} + O\left( n^{p-1} \right) $ where $ p \in \mathbb{N} $:
The base case $p=1$ is easily verified to be true. We make an induction hypothesis that there exists $ m \in \mathbb{N} $ such that the above equality holds for $ p = 1,2,3, \cdots m $. From the binomial theorem we have $$ (k+1)^{m+1} - k^{m+1} = \sum_{t=1}^{m+1} \binom{m+1}{t} k^{m+1-t}. $$ Letting $ k = 1,2,3, \cdots n $ and summing gives $ (n+1)^{m+1} - 1 = (m+1) \sum_{k=1}^n k^m + O(n^m) $, where the lower order terms were $ O(n^m) $ by the induction hypothesis. Thus, $ \sum_{k=1}^n k^m = \frac{n^{m+1}}{m+1} + O(n^m) $ which completes the inductive step.
From the above result, $$ \frac{ \sum_{k=1}^n k^{p} }{n^{p+1}} = \frac{1}{p+1} + O\left( \frac{1}{n}\right) \to \frac{1}{p+1} $$ as $ n\to \infty .$
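As a numerical sanity check of this limit (my own code, using the left-endpoint sums $A(R_n)$ on the equally spaced partition from the question):

```python
def lower_sum(f, a, b, n):
    """A(R_n): left-endpoint Riemann sum on the equal partition of [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

m = 3
a, b = 1.0, 2.0
exact = (b ** (m + 1) - a ** (m + 1)) / (m + 1)     # 2^4/4 - 1^4/4 = 3.75
approx = lower_sum(lambda x: x ** m, a, b, 100_000)
print(abs(approx - exact) < 1e-3)                    # True
```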
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How do Equally Sized Spheres Fit into Space? How much volume do spheres take up when filling a rectangular prism of a shape?
I assume it's somewhere in between $\frac{3}{4}\pi r^3$ and $r^3$, but I don't know where.
This might be better if I broke it into two questions:
First: How many spheres can fit into a given space? Like, packed optimally.
Second: Given a random packing of spheres into space, how much volume does each sphere account for?
I think that's just about as clear as I can make the question, sorry for anything confusing.
| There are some links here (Rusin's known-math site). Also here (Eppstein's Geometry Junkyard).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Arithmetic progression I'm studying for a test, and I am having a hard time with this particular exercise.
The first member is equal to 7 and the fifth member is equal to 59.
How many members of the sequence should be taken so that their sum amounts to 24,217?
So far I have found out that d=13, but having trouble with the equation.
Any help would be appreciated. Thank you.
| If the first number in an arithmetic progression is $a$ and the increment is $d$, term $i$ is $a+(i-1)d\ \ $. So the sum of $n$ terms is $\sum_{i=1}^n a+(i-1)d=na+n(n-1)d/2\ \ $. In your case, $a=7, d=(59-7)/4=13\ \ \ $, so you can just solve the quadratic equation $7n+13(n-1)n/2=24217\ \ $.
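Carrying out that computation (my own sketch): clearing denominators turns $na + n(n-1)d/2 = S$ into the integer quadratic $dn^2 + (2a-d)n - 2S = 0$, which here has the integer root $n = 61$.

```python
from math import isqrt

a, d, S = 7, 13, 24217
A, B, C = d, 2 * a - d, -2 * S          # d*n^2 + (2a - d)*n - 2S = 0
disc = B * B - 4 * A * C
n = (-B + isqrt(disc)) // (2 * A)
print(n)                                 # 61
assert sum(a + i * d for i in range(n)) == S
```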
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Inserting numbers to create a geometric progression Place three numbers in between 15, 31, 104 in such way, that they would be successive members of geometric progression.
PS! I am studying for a test and need help. I would never ask anyone at math.stackexchange.com to do my homework for me. Any help at all would be strongly appreciated. Thank you!
| There is no solution. The terms of a geometric series are $ar^i$, where $r$ is the ratio and we will count from $i=0$ for convenience. This gives $a=15$. If there is one term between $15$ and $31$, it must be $\sqrt{15\cdot 31}\approx 21.56$ and the ratio would be about $1.4376$. Then the next terms would be about $44.565, 64.067, 92.103$ and we don't arrive at $104$. Other configurations of where to put the terms can be checked similarly.
Added: to avoid checking cases, from $15$ and $104$ the ratio must be $\sqrt[5]{\frac{104}{15}}\approx 1.47295$ and we don't hit $31$, though we come sort of close with about $32.544$
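A quick numerical check of the remark above (my own code): forcing the ratio from the endpoints $15$ and $104$ over five steps, no intermediate term lands on $31$.

```python
r = (104 / 15) ** (1 / 5)                 # ≈ 1.47295, as in the answer
terms = [15 * r ** i for i in range(6)]
print([round(t, 3) for t in terms])
print(min(abs(t - 31) for t in terms))    # ≈ 1.54: we never hit 31
```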
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Algebraic proof that collection of all subsets of a set (power set) of $N$ elements has $2^N$ elements In other words, is there an algebraic proof showing that $\sum_{k=0}^{N} {N\choose k} = 2^N$? I've been trying to do it for some time now, but I can't seem to figure it out.
| Another approach is identifying the powerset $\mathcal P \, X$ of a set $X$ with the set of functions $X \to 2$ (that is to say with the set of indicator functions of the subsets). Of course this is only useful if you have any previous results on cardinalities of sets of functions between (finite) sets.
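Both counts are easy to spot-check by brute force (my own code): enumerating subsets directly and summing binomial coefficients agree with $2^N$.

```python
from itertools import combinations
from math import comb

for N in range(9):
    n_subsets = sum(1 for k in range(N + 1)
                    for _ in combinations(range(N), k))
    assert n_subsets == sum(comb(N, k) for k in range(N + 1)) == 2 ** N
print("agrees for N = 0..8")
```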
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
Minimal area of a surface that splits a cube into two equal-size pieces I had read the following problem and its solution from one source problem which was the following:
You want to cut a unit cube into two pieces each with volume 1/2. What dividing surface, which might be curved, has the smallest surface area?
The author gave his first solution by this way:
When bisecting the equilateral triangle, an arc of a circle centered at a vertex had the shortest path. Similarly for this problem, the octant (one-eighth) of a sphere should be the bisecting surface with the lowest area. If the cube is a unit cube, then the octant has volume 1/2, so its radius is given by
$$\frac{1}{8}(\frac{4}{3} \pi r^3)=\frac{1}{2}$$
So the radius is $\displaystyle \left( \frac{3}{\pi} \right)^{\frac{1}{3}}$ and the surface area of the octant is
$$\text{surface area}=\frac{4 \pi r^2}{8}=1.52$$ (approximate)
But after this the author said that he made a mistake; the answer was wrong and the correct one is the simplest surface – a horizontal plane through the center of the cube – which has surface area 1, which is less than the surface area of the octant. But he has not given reasons why the horizontal surface area is the best solution and I need a formula or proof of this. Can you help me?
| Following up the discussion joriki and I had, rounding off the corners where the hexagon meets the edge of the cube does reduce the area. If we look at the cross section where the hexagon meets the wall it looks like this:
$\theta$ is the dihedral angle between the hexagon and the cube. I imagine rounding off the corner with a small radius R. The surface area is reduced by $\frac{a}{\sqrt{2}}R[\cot \theta-(\frac{\pi}{2}-\theta)]$. The area is increased by some small triangles, but their area is quadratic in $R$, being $R (\sec \theta -1)\sqrt{2}R(\frac{\pi}{2}-\theta)\frac{1}{2}$ so for small enough $R$ we have reduced the cut area. We need to do the mirror image on the other side to maintain the volume, but we win there as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The integral of the mean curvature vector over a closed immersed surface Suppose we have a closed, orientable, smooth surface $\Sigma$ immersed smoothly in $\mathbb R^n$ via $f:\Sigma \rightarrow \mathbb R^n$. Impose a Riemannian structure on $\Sigma$ by taking $g_{ij} = \partial_if\cdot\partial_jf$, the metric induced on $\Sigma$ by the immersion $f$. The inner product here is just the usual inner product from $\mathbb R^n$.
The mean curvature vector is
$$
\vec H = \Delta f,
$$
where $\Delta$ is the Laplace-Beltrami operator on $(\Sigma,g)$.
Consider the integral of the mean curvature vector over the surface $\Sigma$:
$$
\int_\Sigma \vec H\ d\mu.
$$
It seems rather plausible that this ought to be zero in the case where $\Sigma$ is closed, embedded, and has only one codimension. Is this known? Is it easy to prove?
If it is not zero in the generality above, as a surface immersed in $\mathbb R^n$, is it equal to some expression involving topological information of $\Sigma$?
| For immersions into $R^3$ (rather than arbitrary codimension) there is basically a one-line proof:
$$2\int_\Sigma HN dA = \int_\Sigma df \wedge dN = \int_\Sigma d(fdN) = \int_{\partial\Sigma = \emptyset} f dN = 0.$$
Here $N$ is the Gauss map associated with the immersion $f: M \to \mathbb{R}^3$. The relationship $df \wedge dN = 2HNdA$ is also not hard: let $\kappa_i$ be the curvatures associated with principal directions $X_i$, so that $dN(X_i)=\kappa_i df(X_i)$. Then
$$df \wedge dN(X_1,X_2) = df(X_1) \times dN(X_2) - df(X_2) \times dN(X_1) = (\kappa_1 + \kappa_2) df(X_1) \times df(X_2) = 2HNdA(X_1,X_2).$$
(Note that here the wedge product uses the cross product $\times$ to define an algebra on $\mathbb{R}^3$, i.e., $\alpha \wedge \beta(X,Y) := \alpha(X) \times \beta(Y) - \alpha(Y) \times \beta(X)$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Theorem of Steinhaus The Steinhaus theorem says that if a set $A \subset \mathbb R^n$ is of positive inner Lebesgue measure then $\operatorname{int}{(A+A)} \neq \emptyset$. Is it true that also $\operatorname{int}{(tA+(1-t)A)} \neq \emptyset$ for $t \in(0,1)$? It is clear for $t=\frac{1}{2}$? But in general?
Thanks.
| Yes.
If $A$ has positive inner measure then it contains a measurable set $B$ of positive finite measure.
Now for $0 \lt t \lt 1$ both $tB$ and $(1-t)B$ have positive measure, hence $tB + (1-t)B$ contains an open set by the result mentioned below. Therefore the interior of $tA + (1-t)A \supset tB + (1-t)B$ is non-empty.
In Corollary 20.17 on page 296 of Hewitt–Ross, Abstract Harmonic Analysis, I the following general result is shown:
Let $X$ and $Y$ be measurable sets of (finite) positive measure in a locally compact group with left Haar measure $\mu$. Then $XY$ contains an open set.
The proof relies on showing that the convolution $[X]\ast[Y](z)$ of the characteristic functions $[X]$ and $[Y]$ is equal to the function $z \mapsto \mu(X \cap zY^{-1})$; it is continuous; it vanishes outside $XY$; and it is non-zero because $\int [X]\ast[Y]=\mu(X) \mu(Y) \gt 0$, hence $XY$ must contain an interior point.
Some background, applications and further links are contained in this thread here.
If you read French you may want to have a look at Hugo Steinhaus's original article Sur les distances des points des ensembles de mesure positive, Fund. Math. 1 (1920), 93–104.
In fact, the entire first issue of Fundamenta Mathematicae is packed with gems of this sort. Fourteen (!) articles are by Wacław Sierpiński.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Why is the set of commutators not a subgroup? I was surprised to see that one talks about the subgroup generated by the commutators, because I thought the commutators would form a subgroup. Some research told me that it's because commutators are not necessarily closed under product (books by Rotman and Mac Lane popped up in a google search telling me). However, I couldn't find an actual example of this. What is one? The books on google books made it seem like an actual example is hard to explain.
Wikipedia did mention that the product $[a,b][c,d]$ on the free group on $a,b,c,d$ is an example. But why? I know this product is $aba^{-1}b^{-1}cdc^{-1}d^{-1}$, but why is that not a commutator in this group?
| Here is an explicit example, paraphrased from the paper of Martin Isaacs linked in the comments. Let $S_3$ be the group of permutations of a 3-element set, and consider the group ring $\mathbb Z_2[S_3]$, which is an abelian group under addition and admits an action of $S_3$ by right multiplication. Thus we can construct the semidirect product $G=S_3\ltimes\mathbb Z_2[S_3]$. Explicitly the operation is
$$
(w_1,x_1)(w_2,x_2)=(w_1w_2,x_1w_2+x_2).
$$
We have
$$\begin{eqnarray*}
[(w_1,x_1),(w_2,x_2)]
&=&(w_1,x_1)(w_2,x_2)(w_1^{-1},x_1w_1^{-1})(w_2^{-1},x_2w_2^{-1})\\
&=&([w_1,w_2],x_1(w_2+1)w_1^{-1}w_2^{-1}+x_2(w_1^{-1}+1)w_2^{-1})
\end{eqnarray*}$$
For any $w_1,w_2\in S_3$, the element
$$
(1,w_1+w_2)=[(1,w_1),(w_2^{-1}w_1,0)]
$$
is a commutator. For $S\subseteq S_3$, let $\bar S=\sum_{s\in S}s\in\mathbb Z_2[S_3]$. Let $T=\{1,(123)\}$. Then
$$
(1,\bar G+\bar T)=(1,(12)+(13))(1,(23)+(321))
$$
is a product of commutators. Suppose it equals $[(w_1,x_1),(w_2,x_2)]$. Then $[w_1,w_2]=1$ and
$$
\bar G+\bar T=x_1(w_2+1)w_1^{-1}w_2^{-1}+x_2(w_1^{-1}+1)w_2^{-1}.
$$
Now $w_1,w_2$ generate an abelian subgroup $H<S_3$, and $w_1\bar H=w_2\bar H=\bar H$. Thus
$$
(\bar G+\bar T)\bar H=0.
$$
By considering each possibility for $H$, we see that this is impossible.
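The group here has order $6\cdot 2^6 = 384$, small enough to verify the claim by brute force. The following is my own verification sketch, encoding elements of $\mathbb Z_2[S_3]$ as 6-bit masks (one bit per group element):

```python
from itertools import permutations

S3 = list(permutations(range(3)))
E = (0, 1, 2)
idx = {p: i for i, p in enumerate(S3)}

def pmul(p, q):                              # permutation product
    return tuple(p[q[i]] for i in range(3))

def pinv(p):
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def act_one(X, w):                           # right multiplication x -> x*w
    y = 0
    for i in range(6):
        if X >> i & 1:
            y |= 1 << idx[pmul(S3[i], w)]
    return y

ACT = {w: [act_one(X, w) for X in range(64)] for w in S3}

def gmul(a, b):                              # (w1,x1)(w2,x2) = (w1w2, x1*w2 + x2)
    (w1, X1), (w2, X2) = a, b
    return (pmul(w1, w2), ACT[w2][X1] ^ X2)

def ginv(a):
    w, X = a
    wi = pinv(w)
    return (wi, ACT[wi][X])

def comm(a, b):
    return gmul(gmul(a, b), gmul(ginv(a), ginv(b)))

G = [(w, X) for w in S3 for X in range(64)]  # |G| = 384
mask = lambda *ps: sum(1 << idx[p] for p in ps)

c3 = (1, 2, 0)                               # the 3-cycle "(123)"
target = (E, 63 ^ mask(E, c3))               # (1, G-bar + T-bar)

s12, s13, s23, c32 = (1, 0, 2), (2, 1, 0), (0, 2, 1), (2, 0, 1)
p1, p2 = (E, mask(s12, s13)), (E, mask(s23, c32))

assert gmul(p1, p2) == target                                   # product of two...
assert p1 == comm((E, mask(s12)), (pmul(pinv(s13), s12), 0))    # ...commutators
assert p2 == comm((E, mask(s23)), (pmul(pinv(c32), s23), 0))

invs = [ginv(g) for g in G]
commutators = {gmul(gmul(x, y), gmul(xi, yi))
               for x, xi in zip(G, invs) for y, yi in zip(G, invs)}
assert target not in commutators
print("target is a product of commutators but not itself a commutator")
```

The final assertion checks all $384^2$ commutators and confirms that $(1,\bar G+\bar T)$ never occurs, while the earlier assertions confirm it is a product of two commutators of the form $(1,\bar w_1+\bar w_2)=[(1,\bar w_1),(w_2^{-1}w_1,0)]$.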
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 5,
"answer_id": 4
} |
QQ plot explanation The figure shows the Q-Q plot of a theoretical and empirical standardized Normal distribution generated through the $qqnorm()$ function of the R statistical tool.
How can I describe the right tail (top right) that does not follow the red reference line? What does it mean when there is a "trend" that runs away from the line?
Thank you
| Looks like your data has a cutoff at $4$. You could probably fit the samples you plotted fairly well with a curve such as
$$y = \frac 1 2 \left( x + 4 - \sqrt{c+(x-4)^2} \right),$$
where $c > 0$ is a free parameter that describes the sharpness of the cutoff. Just by eyeballing the graph, I'd guess that $c \approx 0.1$ for your data.
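A quick evaluation of this curve (my own check) shows the two regimes: it tracks $y = x$ well below the cutoff and saturates at $y = 4$ above it.

```python
import math

def cutoff_curve(x, c=0.1):
    return 0.5 * (x + 4 - math.sqrt(c + (x - 4) ** 2))

print(round(cutoff_curve(-2.0), 2))   # close to -2: follows y = x
print(round(cutoff_curve(8.0), 2))    # close to 4: the cutoff
```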
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Finite differences of function composition I'm trying to express the following in finite differences:
$$\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx} \right].$$
Let $h$ be the step size and $x_{i-1} = x_i - h$ and $x_{i+ 1} = x_i + h$
If I take centered differences evaluated in $x_i$, I get:
$\begin{align*}\left\{\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx}\right]\right\}_i
&= \frac{\left[A(x)\frac{d\, u(x)}{dx}\right]_{i+1/2} - \left[A(x)\frac{d\, u(x)}{dx}\right]_{i-1/2}}{h} \\
&= \frac{A_{i+1/2}\left[\frac{u_{i+1}-u_{i}}{h}\right] - A_{i-1/2}\left[\frac{u_{i}-u_{i-1}}{h}\right]}{h}
\end{align*}$
So, if I use centered differences I would have to have values for $A$ at $i + \frac 12$ and $A$ at $i - \frac 12$; however those nodes don't exist (in my stencil I only have $i \pm$ integer nodes); is that correct? If I use forward or backward differences I need A values at $i$, $i + 1$, $i + 2$ and at $i$, $i -1$, $i -2$ respectively.
Am I on the correct path?
I would really appreciate any hint.
Thanks in advance,
Federico
| $$
\frac{\mathrm{d}}{\mathrm{d}x}\left[A\frac{\mathrm{d}u}{\mathrm{d}x}\right]=\frac{\mathrm{d}A}{\mathrm{d}x}\frac{\mathrm{d}u}{\mathrm{d}x}+A\frac{\mathrm{d}^2u}{\mathrm{d}x^2}
$$
So you might try using $\frac{1}{4h^2}(A_{i+1}-A_{i-1})(u_{i+1}-u_{i-1}) + \frac{1}{h^2}A_i(u_{i+1}-2u_i+u_{i-1})$
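A numerical sanity check of that stencil (my own test, with the overall $1/h^2$ scaling that makes it a derivative approximation), using $A(x)=\sin x$ and $u(x)=e^x$:

```python
import math

A, u = math.sin, math.exp
x, h = 0.7, 1e-4

# (1/4)(A_{i+1}-A_{i-1})(u_{i+1}-u_{i-1}) + A_i (u_{i+1}-2u_i+u_{i-1}), over h^2
approx = ((A(x + h) - A(x - h)) * (u(x + h) - u(x - h)) / 4
          + A(x) * (u(x + h) - 2 * u(x) + u(x - h))) / h ** 2
exact = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)   # A'u' + Au''
print(abs(approx - exact) < 1e-5)   # True
```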
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
An Exercise from Rotman (Carmichael) about Commutator Subgroups This question is inspired by a recent question by user Jaymes about the set of commutators not being a group.
Following a link to MO, Gerry Myerson posted an example from Carmichael. I quote
Let $G$ be a subgroup of $S_{16}$ generated by the following eight elements:
$$
\eqalign{(ac)(bd);&(eg)(fh);\cr(ik)(jl);&(mo)(np);\cr(ac)(eg)(ik);&(ab)(cd)(mo);\cr(ef)(gh)(mn)(op);&(ij)(kl).\cr}
$$
Then the commutator subgroup is generated by the first four elements above, and is of order $16$. Moreover,
$$
\alpha=(ik)(jl)(mo)(np)
$$
is in the commutator subgroup, but is not a commutator.
However, no proof is given. Can somebody offer a proof of this?
Regards,
| I am not sure if this is a good enough proof for you, but I can work it out in Mathematica and confirm it with explicit computations. If this is not helpful, I would be happy to delete the answer.
The copy and paste ready code follows:
Commutator[a_, b_] :=
PermutationProduct[a, b, PermutationPower[a, -1],
PermutationPower[b, -1]]
ru = {"a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4, "e" -> 5, "f" -> 6,
"g" -> 7, "h" -> 8, "i" -> 9, "j" -> 10, "k" -> 11, "l" -> 12,
"m" -> 13, "n" -> 14, "o" -> 15, "p" -> 16};
generators = {
Cycles[{{1, 3}, {2, 4}}], Cycles[{{5, 7}, {6, 8}}],
Cycles[{{9, 11}, {10, 12}}], Cycles[{{13, 15}, {14, 16}}],
Cycles[{{1, 3}, {5, 7}, {9, 11}}],
Cycles[{{1, 2}, {3, 4}, {13, 15}}],
Cycles[{{5, 6}, {7, 8}, {13, 14}, {15, 16}}],
Cycles[{{9, 10}, {11, 12}}]
};
g = PermutationGroup[generators];
GroupOrder[g]
g1 = PermutationGroup[Take[generators, 4]];
GroupOrder[g1]
el = Cycles[{{9, 11}, {10, 12}, {13, 15}, {14, 16}}];
MemberQ[GroupElements[g1], el]
AllCommutators =
DeleteDuplicates[Commutator @@@ Tuples[GroupElements[g], {2}]]
Complement[GroupElements[g1], AllCommutators]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/59981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How to obtain the Standard Deviation of a ratio of independent binomial random variables? X and Y are 2 independent binomial random variables with parameters (n,p) and (m,q) respectively.
(trials, probability parameter)
| If $n$ and $m$ go to infinity while $p$ and $q$ are fixed, then the ratio $R=X/Y$ is well defined on an event of probability $1-o(1)$.
On this event (or conditionally on this event, since the two are asymptotically equivalent), Edgeworth expansions of $X$ and $Y$ show that the expectation of $R$ behaves like $\dfrac{np}{mq}$ and that its variance behaves like
$$
\left(\frac{np}{mq}\right)^2\left(\frac{1-p}{np}+\frac{1-q}{mq}\right).
$$
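For reference, these delta-method approximations are easy to package as helper functions (a sketch with my own naming; note that since $Y$ has parameters $(m,q)$, the second term inside the parentheses is $\frac{1-q}{mq}$):

```python
def ratio_mean(n, p, m, q):
    """First-order (delta-method) approximation of E[X/Y]."""
    return (n * p) / (m * q)

def ratio_var(n, p, m, q):
    """Delta-method approximation of Var(X/Y) for X~Bin(n,p), Y~Bin(m,q)."""
    r = (n * p) / (m * q)
    return r * r * ((1 - p) / (n * p) + (1 - q) / (m * q))
```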
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Checking whether a point lies on a wide line segment I know how to check whether a point lies on a segment, but here the segment is wide: the line has a width.
I have $x_1$, $y_1$, $x_2$, $y_2$, the width, and the point $x_3$, $y_3$ that needs to be checked.
Perhaps someone can help; ideally with a function in C#.
| Assuming @Henry's picture summarizes the question asked, for a given thickness $T$ a necessary and sufficient condition is given by the following inequalities:
$$
0\le\overrightarrow{AC}\cdot\overrightarrow{AB}\le AB^2,\qquad\qquad
AB^2AC^2\le T^2AB^2+\left(\overrightarrow{AC}\cdot\overrightarrow{AB}\right)^2.
$$
The first condition ensures that the projection of $C$ on the line $(AB)$ lies between $A$ and $B$ and the second condition ensures that the distance between $C$ and the line $(AB)$ is at most $T$.
To prove this, note that $\overrightarrow{AC}$ must be $\overrightarrow{AC}=u\overrightarrow{AB}+t\overrightarrow{N}$ with $0\le u\le 1$, $t^2\le T^2$ and $\overrightarrow{N}$ a unitary vector orthogonal to $\overrightarrow{AB}$ and try to express the conditions on $u$ and $t$ in terms of $\overrightarrow{AC}$, $\overrightarrow{AB}$, their norms $AC$ and $AB$, and their scalar product $\overrightarrow{AC}\cdot\overrightarrow{AB}$ only.
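A direct translation of the two conditions into code might look like this (a Python sketch rather than the C# the asker mentioned; names are my own):

```python
def on_thick_segment(ax, ay, bx, by, cx, cy, T):
    """True if C lies within distance T of the segment AB
    (i.e. its projection falls between A and B)."""
    abx, aby = bx - ax, by - ay
    acx, acy = cx - ax, cy - ay
    dot = acx * abx + acy * aby          # AC . AB
    ab2 = abx * abx + aby * aby          # |AB|^2
    ac2 = acx * acx + acy * acy          # |AC|^2
    if not (0 <= dot <= ab2):
        return False                     # projection falls outside [A, B]
    return ab2 * ac2 <= T * T * ab2 + dot * dot  # distance to line <= T
```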
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to compute that the unit digit of $\frac{(7^{6002} − 1)}{2}$? The mother problem is:
Find the unit digit in LCM of $7^{3001} − 1$ and $7^{3001} + 1$
This problem comes with four options to choose the correct answer from. My approach: as the two numbers are consecutive even numbers, their GCD is $2$, hence the required LCM is $$\frac{(7^{3001} − 1)(7^{3001} + 1)}{2}$$
Using algebra that expression becomes $\frac{(7^{6002} − 1)}{2}$; now it is not hard to see that the unit digit of $(7^{6002} − 1)$ is $8$.
So the possible unit digit is either $4$ or $9$. As there was no $9$ among the options, I selected $4$, which is correct, but since this last step was something of a guess I am not sure my approach is sound. How can I be sure that the unit digit of $\frac{(7^{6002} − 1)}{2}$ is $4$?
| The elementary way: $7^2=50-1$ mod $100$ hence $7^4=1$ mod $100$ hence $7^{6000}=1$ mod $100$ because $6000$ is a multiple of $4$, hence $7^{6002}=7^2$ mod $100$ and your number is $\frac12(7^2-1)$ mod $50$. This is $24$ mod $50$ hence the last digit is $4$ (and a priori, the previous digit is either $2$ or $7$ but one just has to be a little more careful to prove that it is $2$).
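The modular arithmetic above is easy to double-check by machine (a quick sanity check, not part of the argument):

```python
# 7^6002 mod 100, via fast modular exponentiation; 7^4 = 2401 = 1 (mod 100)
last_two = pow(7, 6002, 100)               # 49, since 6002 = 4*1500 + 2
half = (last_two - 1) // 2                 # (7^6002 - 1)/2 mod 50, namely 24
unit_digit = ((7 ** 6002 - 1) // 2) % 10   # exact big-integer check of the last digit
```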
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
Determinantal formula for reproducing integral kernel How do I prove the following?
$$\int\det\left(K(x_{i},x_{j})\right)_{1\leq i,j\leq n}dx_{1} \cdots dx_{n}=\underset{i=1}{\overset{n}{\prod}}\left(\int K(x_{i},x_{i})\;dx_{i}-(i-1)\right)$$
where
$$K(x,y)=\sum_{l=1}^n \psi_l(x)\overline{\psi_l}(y)$$
and
$$\{\psi_l(x)\}_{l=1}^n$$
is an ON-sequence in $L^2$.
One may note that $$\int K(x_i,x_j)K(x_j,x_i) \; d\mu(x_i)=K(x_j,x_j)$$
and also that $$\int K(x_a,x_b)K(x_b,x_c)d\mu(x_b)=K(x_a,x_c).$$
| Iterated use of Lemma 5.27 (p. 103) in Orthogonal polynomials and random matrices: a Riemann-Hilbert approach by Percy Deift. (Google Books link.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Polynomial $p(x) = 0$ for all $x$ implies coefficients of polynomial are zero I am curious why the following is true. The text I am reading is "An Introduction to Numerical Analysis" by Atkinson, 2nd edition, page 133, line 4.
$p(x)$ is a polynomial of the form:
$$ p(x) = b_0 + b_1 x + \cdots + b_n x^n$$
If $p(x) = 0$ for all $x$, then $b_i = 0$ for $i=0,1,\ldots,n$.
Why is this true? For example, for $n=2$, I can first prove $b_0=0$, then set $x=2$ to get a linear system of two equations. Then I can prove $b_1=b_2 = 0$. Similarly, for $n=3$, I first prove $b_0=0$, then I calculate the rank of the resulting linear system of equations. That shows that $b_1=b_2=b_3=0$. But if $n$ is very large, I cannot keep manually solving systems of equations. Is there some other argument to show all the coefficients must be zero when the polynomial is always zero for all $x$?
Thanks.
| If you are willing to accept $\displaystyle \lim_{x \rightarrow \infty}\frac{1}{x^{m}}=0$ and $\displaystyle \lim_{x \rightarrow \infty}x^m=\infty$ for all $m>0$, then you can argue as follows. Suppose you have a polynomial $f(x)=b_nx^n+ \ldots b_0$ with $b_n \neq 0$ and $n\geq 1$, then
$$\lim_{x\rightarrow \infty}f(x)=\lim_{x \rightarrow \infty} x^n\left(b_n+ \ldots + \frac{b_0}{x^n}\right)=\lim_{x \rightarrow \infty} x^nb_n=\infty.$$
Since the constant function $0$ does not have this property, it cannot be equal to a polynomial with degree greater or equal to $1$.
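The asker's approach of evaluating at distinct points also generalizes cleanly: evaluating at $n+1$ distinct points yields a Vandermonde system whose matrix is invertible, so all coefficients must vanish. A small sketch of that argument in exact rational arithmetic (my own code and naming):

```python
from fractions import Fraction

def solve_vandermonde_zero(points):
    """Solve V c = 0 where V[i][j] = points[i]**j, by Gauss-Jordan elimination.

    Distinct points make V invertible, so the only solution is c = 0:
    a polynomial vanishing at n+1 distinct points has all-zero coefficients.
    """
    n = len(points)
    # augmented matrix [V | 0] in exact arithmetic
    M = [[Fraction(x) ** j for j in range(n)] + [Fraction(0)] for x in points]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]

coeffs = solve_vandermonde_zero([1, 2, 3, 4])   # the cubic (n = 3) case
```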
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 0
} |
Calculating points on a plane In the example picture below, I know the points $A$, $B$, $C$ & $D$. How would I go about calculating $x$, $y$, $z$ & $w$ and $O$, but as points on the actual plane itself (e.g. treating $D$ as $(0, 0)$, $A$ as $(0, 1)$, $C$ as $(1, 0)$ and $B$ as $(1, 1)$.
Ultimately I need to be able to calculate any arbitrary point on the plane so I'm unsure as to whether this would be possible through linear interpolation of the results above or whether I would actually just have to do this via some form of Matrix calculation? I don't really know matrix math at all!
Just looking for something I can implement in JavaScript (in an environment that does support matrices).
| This isn't so bad, so that's good. Your suggested set of points, however, do not submit to the same sort of analysis as what I'm about to give because it's a square - so the sides are parallel (so the f1 and f2 are... inconvenient, even in the projective plane; if you know projective geometry, mention it, and I'll update).
The general process is to find the equations of lines, find where they intersect, and then make more lines, and find where they intersect. Ok? Great.
First, the point $O$ is nothing more than the midpoint of DB. Perhaps the figure in general is not symmetric, in which case it's the intersection of DB and AC. Say the coordinates of A are
$\left[ \begin{array}{cc}
a_x \\
a_y
\end{array} \right]$,
the coordinates of B are $\left[ \begin{array}{cc}
b_x \\
b_y
\end{array} \right]$
and so on.
Then the lines DB and AC can be parameterized by the equations
$\overline{DB} = \left[ \begin{array}{cc}
b_x \\
b_y
\end{array} \right] + \left(
\left[ \begin{array}{cc}
d_x \\
d_y
\end{array} \right] - \left[ \begin{array}{cc}
b_x \\
b_y
\end{array} \right] \right)t$
$\overline{AC} = \left[ \begin{array}{cc}
a_x \\
a_y
\end{array} \right] + \left(
\left[ \begin{array}{cc}
c_x \\
c_y
\end{array} \right] - \left[ \begin{array}{cc}
a_x \\
a_y
\end{array} \right] \right)s$
Now you set equal and solve for s and t. How? Considering x and y components separately, you have two equations in 2 variables - use a matrix.
$\left[ \begin{array}{cc}
d_x - b_x & a_x - c_x \\
d_y - b_y & a_y - c_y
\end{array} \right]
\left[ \begin{array}{c}
t \\
s
\end{array} \right] =
\left[ \begin{array}{c}
a_x - b_x \\
a_y - b_y
\end{array} \right]$
And this procedure will give you the intersection of those two lines. In fact, this will give you the intersection between any two non-parallel lines. Good.
So with this, we found the point $O$. Do this with the lines AD and BC to find their focal point, f2. Repeat with the other pair of lines, AB and DC, to find their focal point, f1. Then we can repeat with the lines $\overline{f_1O}$ and $\overline{AD}$ to find z. $\overline{f_1O}$ and $\overline{BC}$ to find x. And because I've given almost everything away, rinse, wash, and repeat for w and y.
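The intersection step can be sketched in a few lines of Python (rather than the JavaScript the asker mentioned; helper names are my own). It solves the 2-by-2 system for the two line parameters by Cramer's rule:

```python
def intersect(p1, d1, p2, d2):
    """Intersection of the lines p1 + t*d1 and p2 + s*d2 (points as (x, y)).

    Returns None when the direction vectors are (near-)parallel.
    """
    # determinant of the matrix [d1 | -d2]
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det   # Cramer's rule for t
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

For a unit square A(0,1), B(1,1), C(1,0), D(0,0), intersecting DB with AC gives the center (0.5, 0.5).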
Does that make sense?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Fibonacci divisibility properties $ F_n\mid F_{kn},\,$ $\, \gcd(F_n,F_m) = F_{\gcd(n,m)}$ Can anyone give a generalization of the following properties in a single proof? I have checked the results, which I have given below, by trial and error. I am looking for a general proof which will cover all my results below:
*
*Every third Fibonacci number is even.
*3 divides every 4th Fibonacci number.
*5 divides every 5th Fibonacci number.
*4 divides every 6th Fibonacci number.
*13 divides every 7th Fibonacci number.
*7 divides every 8th Fibonacci number.
*17 divides every 9th Fibonacci number.
*11 divides every 10th Fibonacci number.
*6, 9, 12 and 16 divide every 12th Fibonacci number.
*29 divides every 14th Fibonacci number.
*10 and 61 divide every 15th Fibonacci number.
*15 divides every 20th Fibonacci number.
| I guess that the standard way to understand all these divisibility results in one single swoop is to observe that the Fibonacci sequence modulo any number N becomes periodic.
For instance, Fibonacci modulo 2 is 0, 1, 1, 0, 1, 1, 0, ...... proving the even-ness of $F_n$ for $n=0,3,6,9,...$.
Fibonacci modulo $3$ is 0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, ..... making obvious that $3$ divides $F_n$ for $n=0, 4, 8, 12, ...$ .
Try yourself the next ones!
NOTE: the same technique can be applied to any linear recursive sequence with constant coefficients.
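The periodicity modulo $N$ (the Pisano period) is easy to observe computationally; this sketch (my own naming) checks the first two examples above and a few of the listed divisibility facts:

```python
def fib(n):
    """n-th Fibonacci number, with fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def pisano(m):
    """Length of the period of the Fibonacci sequence modulo m."""
    a, b, k = 0, 1, 0
    while True:
        a, b, k = b % m, (a + b) % m, k + 1
        if (a, b) == (0, 1):
            return k
```

For instance `pisano(2)` is 3 (matching 0, 1, 1, 0, 1, 1, ...) and `pisano(3)` is 8.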
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 5,
"answer_id": 3
} |
Method to reverse a Kronecker product Let's say I have two simple vectors: $[0, 1]$ and $[1, 0]$.
Their Kronecker product would be $[0, 0, 1, 0]$.
Let's say I have only the Kronecker product. How can I find the two initial vectors back?
If my two vectors are written as : $[a, b]$ and $[c, d]$, the (given) Kronecker product is:
$$[ac, ad, bc, bd] = [k_0, k_1, k_2, k_3]$$
So I have a system of four non linear equations that I wish to solve:
$$\begin{align*}
ac &= k_0\\
ad&= k_1\\
bc&= k_2\\
bd &=k_3.
\end{align*}$$
I am looking for a general way to solve this problem for any number of initial vectors in $\mathbb{C}^2$ (leading my number of variables to $2n$ and my equations to $2^n$ if I have $n$ vectors).
So here are a few specific questions:
What is the common name of this problem?
If a general solution is known, what is its complexity class?
Does the fact that I have more and more equations when $n$ goes up compared to the number of variables help?
(Note: I really didn't know what to put as a tag.)
This problem (the reverse Kronecker product) has a known solution called the "Nearest Kronecker Product", and it is generalized to matrices as well.
Given $A\in \mathbb R^{m\times n} $ with $m = m_1m_2$ and $n = n_1n_2$, find $B\in \mathbb R^{m_1\times n_1}$ and $C\in \mathbb R^{m_2\times n_2}$ so
$\phi(B,C)$ = min $|| A- B\otimes C||_F$, where $F$ denotes Frobenius norm.
This is reformulated as:
$\phi(B,C)$ = min $|| R- vec(B)\otimes vec(C)'||_F$
$vec$ is the vectorization operator which stacks columns of a matrix on top of each other. A is rearranged into $R \in \mathbb R^{m_1n_1\times m_2n_2}$ such that the sum of squares in $|| A- B\otimes C||_F$ is exactly the same as $|| R- vec(B)\otimes vec(C)'||_F$.
Example for arrangement where $m_1=3,n_1=m_2=n_2=2$:
$$
\phi(B,C) = \left|
\left[
\begin{array}{cc|cc}
a_{11}& a_{12} & a_{13} & a_{14} \\
a_{21}& a_{22} & a_{23} & a_{24} \\ \hline
a_{31}& a_{32} & a_{33} & a_{34} \\
a_{41}& a_{42} & a_{43} & a_{44} \\ \hline
a_{51}& a_{52} & a_{53} & a_{54} \\
a_{61}& a_{62} & a_{63} & a_{64}
\end{array}
\right]
-
\begin{bmatrix}
b_{11}& b_{12} \\
b_{21}& b_{22} \\
b_{31}& b_{32}
\end{bmatrix}
\otimes
\begin{bmatrix}
c_{11}& c_{12} \\
c_{21}& c_{22}
\end{bmatrix} \right|_F \\
\phi(B,C) =
\left|
\begin{bmatrix}
a_{11}& a_{21} & a_{12} & a_{22} \\ \hline
a_{31}& a_{41} & a_{32} & a_{42} \\ \hline
a_{51}& a_{61} & a_{52} & a_{62} \\ \hline
a_{13}& a_{23} & a_{14} & a_{24} \\ \hline
a_{33}& a_{43} & a_{34} & a_{44} \\ \hline
a_{53}& a_{63} & a_{54} & a_{64}
\end{bmatrix}
-
\begin{bmatrix}
b_{11} \\
b_{21} \\
b_{31} \\
b_{12} \\
b_{22} \\
b_{32}
\end{bmatrix}
\begin{bmatrix}
c_{11}&c_{21} & c_{12} & c_{22}
\end{bmatrix} \right|_F
$$
Now the problem has turned into rank 1 approximation for a rectangular matrix. The solution is given by the singular value decomposition of $R = USV^T$ in [1,2].
$$
vec(B) = \sqrt{\sigma_1}u_1, \quad vec(C) = \sqrt{\sigma_1}v_1
$$
If $R$ is a rank 1 matrix the solution will be exact, i.e. $A$ is fully separable.[3]
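In the exactly separable case the recovery needs no SVD library: the rearranged $R$ has rank one, so a single nonzero row/column pair recovers $vec(B)$ and $vec(C)$ up to a scalar. A self-contained pure-Python sketch of that special case (my own naming; the orderings follow the column-major vec and block ordering of the displayed example, and a real implementation of the approximate problem would use an SVD as described above):

```python
def kron(B, C):
    """Kronecker product of 2-D lists."""
    m1, n1, m2, n2 = len(B), len(B[0]), len(C), len(C[0])
    return [[B[i1][j1] * C[i2][j2] for j1 in range(n1) for j2 in range(n2)]
            for i1 in range(m1) for i2 in range(m2)]

def rearrange(A, m1, n1, m2, n2):
    """R whose rows are column-major vec's of the m2-by-n2 blocks of A."""
    R = []
    for j1 in range(n1):            # blocks in column-major order
        for i1 in range(m1):
            R.append([A[i1 * m2 + i2][j1 * n2 + j2]
                      for j2 in range(n2) for i2 in range(m2)])
    return R

def rank1_factor(R):
    """Exact factorization R = u v^T, assuming R really has rank one."""
    pi, pj = max(((i, j) for i in range(len(R)) for j in range(len(R[0]))),
                 key=lambda t: abs(R[t[0]][t[1]]))
    u = [row[pj] for row in R]
    v = [x / R[pi][pj] for x in R[pi]]
    return u, v

def unvec(x, rows, cols):
    """Inverse of column-major vec."""
    return [[x[c * rows + r] for c in range(cols)] for r in range(rows)]

# recover factors (up to a scalar) from an exactly separable A
B = [[1, 2], [3, 4], [5, 6]]
C = [[7, 8], [9, 1]]
A = kron(B, C)
u, v = rank1_factor(rearrange(A, 3, 2, 2, 2))
Brec, Crec = unvec(u, 3, 2), unvec(v, 2, 2)
```

The recovered factors differ from B and C by reciprocal scalars, so their Kronecker product reproduces A exactly.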
[1] Golub G., Van Loan C., Matrix Computations, The Johns Hopkins University Press, 1996
[2] Van Loan C., Pitsianis N., Approximation with Kronecker Products, Cornell University, Ithaca, NY, 1992
[3] Genton MG. Separable approximations of space–time covariance matrices. Environmetrics 2007; 18:681–695.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 3,
"answer_id": 1
} |
Other awesome topology related videos like this one? Turning a sphere inside-out: http://www.youtube.com/watch?v=BVVfs4zKrgk
And part 2: http://www.youtube.com/watch?v=x7d13SgqUXg
This is really, really cool. They describe things in really simple terms though (like referring to curvatures with smiles and frowns) and sometimes I wish I could connect it to the math that I know if relevant terms were brought up. The animations are simply amazing, and I can't believe this was made in 1994!
I was wondering if anyone knows if these people made other videos like this using their techniques... So cool!
| Check out this Math Overflow thread! For instance, you may enjoy this video about the Möbius band.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
A sum involving the Möbius function I am trying to find some work done on the following: $$\sum_{d \vert n}\frac{2^{\omega(d)}}{d}\mu(d)$$ where $\omega(d)$ is the number of distinct prime factors of $d$ and $\mu$ is the Möbius function. I saw something about $$\sum_{d \vert n}\frac{\mu(d)}{d}=\phi(n)/n$$ (where $\phi$ is the Euler phi function) on planetmath, but I'm not entirely certain how to use it. Does anyone know of any work done on the first sum?
| Let $n=p_1^{a_1}\cdots p_k^{a_k}$. We can prove that
$$\sum_{d|n}\frac{m^{\omega(d)}}{d}\mu(d)=\prod_{i=1}^k\left(1-\frac{m}{p_i}\right)$$
(notice that for $m=1$ this is $\varphi(n)/n$). The proof is easy, just check the corresponding coefficients of $m^{r}$ on both sides.
Indeed, there is the well-known identity
$$\prod_{i=1}^k (1+tx_i)=\sum_{r=0}^k t^r e_r(x_1,\dots,x_k)$$
where the $e_r$'s are the elementary symmetric polynomials (with the convention $e_0=1$). If we let $S(n)$ denote the squarefree part of $n$, then by putting $x_i=\frac{1}{p_i}$ we get
$$\sum_{d|S(n)}\frac{t^{\omega(d)}}{d}=\sum_{r=0}^k t^r e_r(\frac{1}{p_1},\dots,\frac{1}{p_k})=\prod_{i=1}^k\left(1+\frac{t}{p_i}\right)$$
Now to get your identity just use $t=-m$, and the fact that $\mu(d)=(-1)^{\omega(d)}$ when $d$ is squarefree. For $t=-2$ we get
$$\left(1-\frac{2}{p_1}\right)\cdots\left(1-\frac{2}{p_k}\right)$$
which is always non-negative (it is $=0$ only when $n$ is even).
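The identity is straightforward to verify numerically for small $n$ (a sketch in exact rational arithmetic, my own naming; only squarefree divisors contribute, since $\mu$ vanishes on the rest):

```python
from fractions import Fraction
from itertools import combinations

def prime_factors(n):
    """Distinct prime factors of n, in increasing order."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def mobius_sum(n, m):
    """sum over d | n of m^omega(d) * mu(d) / d.

    Nonzero terms come from squarefree d, i.e. subsets of the prime
    factors, where mu(d) = (-1)^r and m^omega(d) = m^r.
    """
    ps = prime_factors(n)
    total = Fraction(0)
    for r in range(len(ps) + 1):
        for combo in combinations(ps, r):
            d = 1
            for p in combo:
                d *= p
            total += Fraction((-1) ** r * m ** r, d)
    return total
```

For example, $n=45$ has primes $3,5$, and $(1-\frac{2}{3})(1-\frac{2}{5})=\frac{1}{5}$; for even $n$ the factor $(1-\frac{2}{2})$ makes the sum zero.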
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Can someone explain consensus theorem for boolean algebra In boolean algebra, below is the consensus theorem
$$X⋅Y + X'⋅Z + Y⋅Z = X⋅Y + X'⋅Z$$
$$(X+Y)⋅(X'+Z)⋅(Y+Z) = (X+Y)⋅(X'+Z)$$
I don't really understand it? Can I simplify it to
$$X'⋅Z + Y⋅Z = X' \cdot Z$$
I don't suppose so. Anyways, why can $Y \cdot Z$ be removed?
Boolean Algebra has a very powerful metatheorem which says that if an equation holds in the 2-element "{0, 1}" Boolean Algebra, then it holds in all Boolean Algebras. So, if you just want an argument that should come across as convincing, you need only check all substitution instances of "0" and "1" in those equations. Here's a compact argument:
Suppose x=0. Then for the first equation we have 0.y+0'.z+y.z=0+1.z+y.z=z+y.z on the left-hand side, and 0.y+0'.z=0+1.z=z on the right-hand side. Well, z+y.z=z by absorption and commutation. Now suppose x=1. Then on the left-hand side we have 1.y+1'.z+y.z=y+0.z+y.z=y+y.z. On the right-hand side we have 1.y+1'.z=y+0.z=y. So, the two sides equal each other by absorption. So, the first equation holds. In other words, it qualifies as a theorem. The second equation follows by the De Morgan duality metatheorem. So, since the identities hold in the 2-element Boolean Algebra, the metatheorem gives that the consensus theorem holds for all Boolean Algebras. If anything doesn't come across as clear here, please don't hesitate to ask.
Why is this true? Well, one could argue that Boolean Algebra originally got skillfully set up as an algebraic system to behave like classical propositional logic, and in classical propositional logic, where "=" gets taken as logical equivalence, each equality in your question corresponds to a theorem. However, I suspect many people would find such an explanation contentious at best. Sometimes things in mathematics just hold true, because they do hold true... or many different explanations can get put forth to explain why something holds true.
You can't simplify it to x'.z+y.z=x'.z; that is not a theorem in Boolean Algebra. Suppose x=1, y=1, z=1. Then we have 1'.1+1.1=0+1=1 for the expression on the left-hand side, and 1'.1=0.1=0 on the right-hand side.
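Since identities over $\{0,1\}$ can be checked exhaustively, a brute-force check of the consensus theorem, and of the failed simplification, takes only a few lines (my own sketch):

```python
from itertools import product

def consensus_holds():
    """Check x.y + x'.z + y.z == x.y + x'.z over all 0/1 assignments."""
    for x, y, z in product((0, 1), repeat=3):
        nx = 1 - x
        lhs = (x & y) | (nx & z) | (y & z)
        rhs = (x & y) | (nx & z)
        if lhs != rhs:
            return False
    return True

def bad_simplification_holds():
    """Check the (false) claim x'.z + y.z == x'.z."""
    return all((((1 - x) & z) | (y & z)) == ((1 - x) & z)
               for x, y, z in product((0, 1), repeat=3))
```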
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Help with the proof of the characterization of linearly dependent sets In Lay - Linear Algebra, 3rd ed. ch 1.7, Theorem 7 states that
An indexed set $S = \{ \mathbf{v}_1, \dots , \mathbf{v}_p \}$ of two or more vectors is linearly dependent if and only if at least one of the vectors in $S$ is a linear combination of the others. In fact, if $S$ is linearly dependent and $\mathbf{v}_1 \neq \mathbf{0}$ , then some $\mathbf{v}_j$ (with $j > 1$) is a linear combination of the preceding vectors, $\mathbf{v}_1,\dots , \mathbf{v}_{j-1}$ .
The proof later on in the chapter proves the theorem from two "directions", starting with $\mathbf{v}_j$ in $S$ as a linear combination of the other vectors, and showing that if this is the case then $S$ must be linearly dependent. Then Lay proves the theorem from the other direction, namely supposing $S$ is linearly dependent first, and then showing that some $\mathbf{v}_j$ must be a linear combination of the other vectors.
The whole second part is shown here for context, but my problem is already in the second line (the rest is fine):
[...] suppose $S$ is linearly dependent. If $\mathbf{v}_1$ is zero, then it is a (trivial) linear combination of the other vectors in $S$. Otherwise, $\mathbf{v}_1 \neq \mathbf{0}$, and there exists weights $c_1 , \dots , c_p$ , not all zero, such that $$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_p \mathbf{v}_p = \mathbf{0}$$ Let $j$ be the largest subscript for which $c_j \neq 0$. If $j=1$ , then $c_1 \mathbf{v}_1 = \mathbf{0}$, which is impossible because $\mathbf{v}_1 \neq \mathbf{0}$. So $j>1$, and $$\begin{align} c_1 \mathbf{v}_1 + \cdots + c_j \mathbf{v}_j + 0 \mathbf{v}_{j+1} + \cdots + 0\mathbf{v}_p = \mathbf{0} \\ c_j \mathbf{v}_j = -c_1 \mathbf{v}_1 - \cdots -c_{j-1} \mathbf{v}_{j-1} \\ \mathbf{v}_j = ( - \frac{c_1}{c_j} \mathbf{v}_1 ) + \cdots + ( -\frac{c_{j-1}}{c_j} ) \mathbf{v}_{j-1} \end{align}$$
Now, my concern here is quite trivial really (pun sort of intended). In the second line of the proof above, if $\mathbf{v}_1$ is zero, why must it be a trivial linear combination of the other vectors in $S$? Since $S$ is here defined to be linearly dependent, can't we find weights $c_1, \dots , c_p$ not all zero so that $$c_1 \mathbf{v}_1 = c_2 \mathbf{v}_2 + \cdots + c_p \mathbf{v}_p $$, where $\mathbf{v}_1 = \mathbf{0}$? Or is it that $\mathbf{v}_1$ is the defining factor that determines the linear dependence of $S$? There is something here I just can't see. Maybe it's too late.
Sorry for the long post! I wasn't sure how much context would be appropriate. If somebody knows how to clean up the multiline code in the proof then I'd appreciate that.
| The point is not that $\mathbf v_1$ must be a trivial linear combination of the other vectors, as you rephrased it, but that it is. That's enough to show that at least one of the vectors is a linear combination of the others.
Also, we did find weights $c_1,\dotsc,c_p$, not all zero, so that $c_1 \mathbf{v}_1 = c_2 \mathbf{v}_2 + \cdots + c_p \mathbf{v}_p$, namely $c_1=1$, $c_2=\dotso=c_p=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How to show $\lim\limits_{t\to 0}\frac{|f+tg|^{p}-|f|^{p}}{t}=pfg|f|^{p-2}$ Could anyone help to compute this limit? Thank you!
This is part of a proof in Analysis by E. Lieb. Let $f$ and $g$ be real numbers and $p>1$, show the following limit:
$$\lim\limits_{t\to 0}\frac{|f+tg|^{p}-|f|^{p}}{t}=pfg|f|^{p-2}$$
| If $g=0$, both sides are constant $0$.
If $g\neq 0$ and $f=0$, we have
$$\lim_{t\to 0}\frac{|tg|^p}{t} = |g|^p\lim_{t\to 0}\frac{|t|^p}{t} = 0,$$
since $p\gt 1$ (it's the limit of $|g|^pt^{p-1}$ as $t\to 0^+$, which is $0$, and of $-|g|^pt^{p-1}$ as $t\to 0^-$, which is also $0$).
If $g\neq 0$ and $f\neq 0$, say $f\gt 0$. Using L'Hopital's Rule you have
$$\begin{align*}
\lim_{t\to 0}\frac{|f+tg|^p - |f|^p}{t} &= \lim_{t\to 0}\frac{(f+tg)^p - f^p}{t}\\
&= \lim_{t\to 0}\frac{p(f+tg)^{p-1}g - 0}{1} = pf^{p-1}g = pfgf^{p-2} = pfg|f|^{p-2}.
\end{align*}$$
If $f\lt 0$, then
$$\begin{align*}
\lim_{t\to 0}\frac{|f+tg|^p - |f|^p}{t} &= \lim_{t\to 0}\frac{(-f-tg)^p - (-f)^p}{t}\\
&= \lim_{t\to 0}\frac{p(-f-tg)^{p-1}(-g) - 0}{1} \\
&= p(-f)^{p-1}(-g)\\
&= -pg|f|^{p-1}\\
&= p(-|f|)g|f|^{p-2}\\
&= pfg|f|^{p-2}.
\end{align*}$$
In any of the four cases, we have equality.
(As to getting rid of the absolute value signs in $|f+tg|$, so long as $f\neq 0$, $f+tg$ has the same sign as $f$ for sufficiently small $t$).
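A numerical sanity check of the limit via difference quotients (my own sketch, not part of the proof):

```python
def diff_quotient(f, g, p, t):
    """The quotient (|f + t g|^p - |f|^p) / t for a given small t."""
    return (abs(f + t * g) ** p - abs(f) ** p) / t

def predicted(f, g, p):
    """The claimed limit p f g |f|^(p-2), read as 0 when f = 0."""
    return p * f * g * abs(f) ** (p - 2) if f != 0 else 0.0
```

With $f=2$, $g=3$, $p=3/2$ the predicted value is $\frac{9}{\sqrt 2}\approx 6.364$, and the quotient at $t=10^{-6}$ agrees to several digits; the $f<0$ and $f=0$ cases behave the same way.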
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Alperin p82 Lemma 11.3 My question is regarding Lemma 11.3 on p82 of Local representation theory by JL Alperin; the Google Books preview unfortunately does not contain this page. I need to prove the following claim:
Claim:
If $V$ is a relatively $Q$-projective and a relatively $\mathfrak{Y}$-projective $kL$-module then $V^G$ is relatively $\mathfrak{X}$-projective.
Proof. Let $W$ be an indecomposable summand of $V$. Then $W$ is relatively $Y$-projective for some $Y\in \mathfrak{Y}$. This is as far as I have got -- not sure why we need to bring vertices into it and definitely don't understand why if $W$ has vertex $P$ then $W^G$ is relatively $P$-projective.
Any help would be great!
Notation: $G$ is a finite group, $Q$ is any $p$-subgroup and $L=N_G(Q)$ is its normaliser. There are two collections of subgroups of $G$, $\mathfrak{X}=\{sQs^{-1}\cap Q\mid s\in G\setminus L\}$ and $\mathfrak{Y}=\{sQs^{-1}\cap L\mid s\in G\setminus L\}$. We say a module is relatively $\mathfrak{X}$- (resp. $\mathfrak{Y}$-) projective if it is a direct sum of relatively $X$- (resp. $Y$-) projective modules for $X\in \mathfrak{X}$ (resp. $Y\in \mathfrak{Y}$).
Cheers.
| As a rule of thumb, if the statement you want to prove involves at least one restriction and at least one induction, then Mackey decomposition is your best friend.
So you want to show that if $W$ has vertex $Q$ (I guess that's what you mean), i.e. if $W\;|\;M^L$ for some $Q$-module $M$ (and $Q$ is minimal with this property), then $W^G$ is relatively $Q$-projective, i.e. $W^G|((W^G)_Q)^G$.
Now, by Mackey, $(W^G)_Q = \bigoplus_{g\in L\backslash G/Q}((W^g)_{L^g\cap Q})^Q$.
The summand corresponding to the trivial coset representative is $W_Q$, so $((W^G)_Q)^G$ has $(W_Q)^G$ as a direct summand, since induction preserves direct sums. But also, as in your previous question, $M$ being a source of $W$ implies that $M|W_Q$ (this also uses Mackey).
Inducing both sides back to $G$ (and using again the fact that induction preserves direct sums), $M^G\;| \;(W_Q)^G$. Since $W|M^L$, you get $W^G|(M^L)^G = M^G$, and you are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/60864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $x^3 +1 = 2y^3$
Solve $x^3 +1 = 2y^3$ in integers.
(Actually the original question was solve $x^n +1 = 2^{n-2} y^n$ but I can't even solve particular case $n=3$.)
Thanks in advance.
| Mordell, Diophantine Equations, Chapter 23, Theorem 5 (page 203): If $d$ is an integer, $d\gt1$, there is at most one integer solution of $x^3+dy^3=1$ other than $x=1$, $y=0$.
Also, Chapter 24, Theorem 5 (page 220): The equation $x^3+dy^3=1$ ($d\gt1$) has at most one integer solution with $xy\ne0$. This is given by the fundamental unit in the ring when it is a binomial unit, i.e., when the fundamental unit takes the form $x+y\root3\of d$.
Both proofs are fairly long, and take some knowledge of Algebraic Number Theory. Maybe there's some elementary trick I'm not seeing for handling the case $d=2$.
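A brute-force search over a modest range is at least consistent with the theorem's conclusion that $(x,y)=(-1,0)$ and $(1,1)$ are the only integer solutions (a sketch, of course not a proof):

```python
def cube_root_exact(n):
    """Integer r with r**3 == n, or None if n is not a perfect cube."""
    if n == 0:
        return 0
    s = 1 if n > 0 else -1
    r = round(abs(n) ** (1 / 3))
    for c in (r - 1, r, r + 1):       # guard against float rounding
        if c ** 3 == abs(n):
            return s * c
    return None

# search x^3 + 1 = 2 y^3; note x must be odd for x^3 + 1 to be even
solutions = []
for x in range(-1000, 1001):
    v = x ** 3 + 1
    if v % 2 == 0:
        y = cube_root_exact(v // 2)
        if y is not None:
            solutions.append((x, y))
```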
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
name for a rational number between zero and one? I'm searching for a unified name to convey for the concept that a number will always be between zero and one.
Some info for context:
in probability we've got a number between 0 and 1. Percentages appear to be similar in that we've got a number between zero and one, but it is multiplied by one hundred.
The nearest names that I've got so far are "factor" or "coefficient", but both of these names could be larger than 1, or smaller than zero.
| Since the mantissa of a logarithm is a value between 0 and 1 (I'm just barely old enough to have used logarithm tables instead of calculators in high school) I thought maybe googling "mantissa" might suggest something (and if not, I was still going to suggest something like "a mantissa number" for a number between 0 and 1), and when I googled, I immediately found the following at Wolfram's Mathworld site:
http://mathworld.wolfram.com/Mantissa.html
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proof for $\max (a_i + b_i) \leq \max a_i + \max b_i$ for $i=1..n$ I know this question is almost trivial because the truth of this statement is completely intuitive, but I'm looking for a nice and as formal as possible proof for $$\max (a_i + b_i) \leq \max a_i + \max b_i$$ with $i=1,\ldots,n$
Thanks in advance,
Federico
| $$
\max_{1\le i \le n} (a_i + b_i) \le \max_{1\le i \le n}( \max_{1\le k \le n}(a_k) + b_i) = \max_{1\le k \le n}(a_k) + \max_{1\le i \le n}(b_i)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Transition matrix I have a directed graph $G_1$. I extract its transition matrix $T_1$.
Now I also have directed graph $G_2$, which is equal to $G_1$ with inverted edges.
If I get its transition matrix $T_2$, what is the relationship between $T_1$ and $T_2$?
What is the relationship between the adjancency matrices of $G_1$ and $G_2$?
Thanks for any hint, Mulone
Hint: the adjacency matrix of $G_1$ has a $1$ in the $i,j$ location if there is an edge from $V_i$ to $V_j$; reversing every edge moves that $1$ to the $j,i$ location.
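A concrete illustration of the hint (my own code): reversing every edge transposes the adjacency matrix, while the transition matrix of the reversed graph is the row-normalization of that transpose, which in general is not simply the transpose of $T_1$:

```python
def transition(adj):
    """Row-stochastic transition matrix from an adjacency matrix
    (rows with no outgoing edges are left as all zeros)."""
    return [[a / s if s else 0.0 for a in row]
            for row, s in ((r, sum(r)) for r in adj)]

def transpose(M):
    return [list(col) for col in zip(*M)]

A1 = [[0, 1, 1],
      [0, 0, 1],
      [0, 0, 0]]          # edges: 1->2, 1->3, 2->3
A2 = transpose(A1)         # reversing every edge transposes the adjacency matrix
T1, T2 = transition(A1), transition(A2)
```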
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the median always between the mode and the mean for a unimodal distribution? Is it ALWAYS the case that, for a unimodal probability distribution, the median is between the mode and mean?
| The answer is "No," as the article linked to by leonbloy indicates. Here's an example of the kind of situation in which you would get mean < mode < median. The black rectangle below contains the mean, the blue rectangle the mode, and the red rectangle the median.
The black rectangle is $5 \times 1/5$, the blue $1/4 \times 4$, and the red $1 \times 3$. The median is in the red rectangle because the areas of the three rectangles are 1, 1, and 3. The blue rectangle clearly contains the mode. Placing the origin at the lower left corner of the blue rectangle, we see that the mean is at $(-2.5)(1) + (1/8)(1) + (3/4)(3) = -1/8$, and so the black rectangle contains the mean.
Of course, this can be scaled to produce an actual probability density function or smoothed to obtain a continuous pdf without changing mean < mode < median.
Added: One of our own users, Henry, has written a detailed article on the relationship between the mean, median, mode, and standard deviation in a unimodal distribution. It's definitely worth a look.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Motivation of the Gaussian Integral I read on Wikipedia that Laplace was the first to evaluate
$$\int\nolimits_{-\infty}^\infty e^{-x^2} \, \mathrm dx$$
Does anybody know what he was doing that lead him to that integral? Even better, can someone pose a natural problem that would lead to this integral?
Edit: Many of the answers make a connection to the normal distribution, but then the question now becomes: Where does the density function of the normal distribution come from? Mike Spivey's answer is in the spirit of what I am looking for: an explanation that a calculus student might understand.
| The function $e^{-x^2}$ is natural for investigation for lots of different reasons. One reason is that, depending on your normalization, it is essentially a fixed point of the Fourier Transform. That is,
$$
\int_{\mathbb{R}^n} e^{-\pi x^2} e^{-2\pi ix\cdot t}\mathrm{d}x=e^{-\pi t^2}
$$
Another reason is tied to the Central Limit Theorem. Suppose that $f$ satisfies $\int_{\mathbb{R}^n}f(x)\;\mathrm{d}x=1$, $\int_{\mathbb{R}^n}x\;f(x)\;\mathrm{d}x=0$, and $\int_{\mathbb{R}^n}|x|^2\;f(x)\;\mathrm{d}x=1$ (these can be attained by translating and scaling the domain and scaling the range of $f$). Let $f^{\;*k}$ be the convolution of $f$ with itself $k$ times. Then $k^{n/2}f^{\;*k}(x\sqrt{k})\to \frac{1}{\sqrt{2\pi}^n}e^{-x^2/2}$ as $k\to\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 5
} |
How to represent XOR of two decimal Numbers with Arithmetic Operators Is there any way to represent XOR of two decimal Numbers using Arithmetic Operators (+,-,*,/,%).
| The answer is yes. Let us assume the numbers $a,b$ have the form
$a = (a_1,a_2,\ldots,a_n)$
$b = (b_1,b_2,\ldots,b_n)$
where $a_i,b_i \in \{0,1\}$. We can extract the lowest bit ($a_n,b_n$) with
$a_n = a\%2$,
$b_n = b\%2$.
Similarly, we have
$a_{n-1}=[(a-a_n)/2]\%2$, $b_{n-1}=[(b-b_n)/2]\%2$
Now when $c = a\text{ XOR }b$ we know that
$c_n = (a_n+b_n)\%2$ and so on and you can put that all together in a huge ugly formula :-)
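Putting that formula together mechanically, here is a sketch in Python (my own transcription of the scheme above). It uses only arithmetic operators; the floor division `//` stands in for the exact division step, since $a - a_n$ is always even:

```python
def xor_arith(a, b):
    # Combine the bits c_i = (a_i + b_i) % 2, weighting each by its power of 2.
    result, place = 0, 1
    while a > 0 or b > 0:
        abit, bbit = a % 2, b % 2                  # lowest bits, as in the answer
        result = result + place * ((abit + bbit) % 2)
        a, b = (a - abit) // 2, (b - bbit) // 2    # drop the lowest bit
        place = place * 2
    return result

print(xor_arith(12, 10))  # prints 6, i.e. 1100 XOR 1010 = 0110
```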
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 2
} |
How to set up this vector problem? Reading through my homework, I encountered this. I don't know what a picture of this would look like, and how to decompose into separate vectors. Once I figure out what I need to decompose into, I can do the rest. Can you help me?
A car is driven east for a distance of 48 km, then north for 27 km,
and then in a direction 32° east of north for 25 km. Determine (a) the
magnitude (in km) of the car's total displacement from its starting
point and (b) the angle (from east) of the car's total displacement
measured from its starting direction.
Draw a picture. Piece by piece. Describing what it looks like just involves me repeating the problem. Draw a line segment right and label it 48. Draw a line segment up and label it 27, connected to the end of the previous line. Draw a line segment at an angle, label the angle 32° from vertical and the length 25. Then find the overall x and y displacement for the big triangle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Deriving 2D Coordinate Rotation Formula I'm trying to write out the steps in code for deriving the 2D coordinate rotation formula so I can understand it.
x = radius * cos(angle)
y = radius * sin(angle)
x1 = radius * cos(angle + rotation)
y1 = radius * sin(angle + rotation)
So,
x1 = radius * cos(angle) * cos(rotation) - radius * sin(angle) * sin(rotation)
y1 = radius * sin(angle) * cos(rotation) + radius * cos(angle) * sin(rotation)
Therefore,
x1 = cos(rotation) * x - sin(rotation) * y
y1 = cos(rotation) * y + sin(rotation) * x
I hate to post a question just asking "Is this right?", but is this right? In particular, I'm unsure if I'm correctly representing the w (as listed in the link) in the first x1 and y1 assignment, and it's expansion. And no, this isn't homework. Thanks.
| Why don't you look at the rotation this way: what are the coordinates of the rotated $\hat{x}=(1,0)$ axis? From trigonometry you will find them to be $X=(\cos\theta,\sin\theta)$. Now what are the coordinates of the rotated $\hat{y}=(0,1)$ axis? Similarly, from trig you get $Y=(-\sin\theta,\cos\theta)$.
A general vector $\vec{v}=(A,B)=A \hat{x}+B \hat{y}$ is rotated by rotating the unit vectors $V = A*(\cos\theta,\sin\theta) + B(-\sin\theta,\cos\theta)$ or re-arranged
$$ V = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} $$
So the rotation matrix is $${\rm Rot}(\theta)=\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
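For reference, here is a minimal Python version of applying this matrix to a point (my own sketch; the sign convention matches the counterclockwise matrix above):

```python
import math

def rotate(x, y, theta):
    # [x'; y'] = Rot(theta) [x; y], i.e.
    # x' = x*cos(theta) - y*sin(theta),  y' = x*sin(theta) + y*cos(theta)
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# Rotating the x-axis unit vector by 90 degrees lands on the y-axis.
x1, y1 = rotate(1.0, 0.0, math.pi / 2)
```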
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What's the limit of the sequence $\lim\limits_{n \to\infty} \frac{n!}{n^n}$? $$\lim_{n \to\infty} \frac{n!}{n^n}$$
I have a question: is it valid to use Stirling's Formula to prove convergence of the sequence?
| You can prove that
$$
n! < \left( \frac{n+1}{2} \right)^n.
$$
Now observe that
$$
0 \leq \lim_{n\to\infty} \frac{n!}{n^n} <\lim_{n\to\infty} \frac{\left(\frac{n+1}{2}\right)^n }{n^n} = \lim_{n\to\infty} \frac{1}{2^n}\cdot\frac{(n+1)^n}{n^n}.
$$
We know that $1/2^n\to 0$ as $n\to\infty$. If you know that $[(n+1)/n]^n\to e$ as $n\to\infty$, then you're done... the limit is zero!
Definitely not as nice as anon's solution, but a different approach nonetheless.
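A quick numerical sanity check of both the bound $n! < \left(\frac{n+1}{2}\right)^n$ (strict for $n \ge 2$) and the decay of the ratio — just reassurance, not part of the proof:

```python
import math

for n in (5, 10, 20):
    ratio = math.factorial(n) / n**n
    bound = ((n + 1) / (2 * n))**n    # ((n+1)/2)^n divided by n^n
    assert 0 < ratio < bound          # the squeeze used in the answer
```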
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 6,
"answer_id": 2
} |
Proving that $ 2 $ is the only real solution of $ 3^x+4^x=5^x $ I would like to prove that the equation $ 3^x+4^x=5^x $ has only one real solution ($x=2$)
I tried to study the function $ f(x)=5^x-4^x-3^x $ (in order to use the intermediate value theorem) but I am not able to find the sign of $ f'(x)= \ln(5)\times5^x-\ln(4)\times4^x-\ln(3)\times3^x $ and I can't see any other method to solve this exercise...
| One direct method is to divide directly by $5^x$ and get $1=(3/5)^x+(4/5)^x$. From here it is clear that the RHS is strictly decreasing, and there is a unique solution. Almost all exponential equations can be treated this way, by transforming them to
*
*one increasing function equal to one decreasing function
*one increasing/decreasing function equal to a constant.
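A numerical illustration of the monotonicity argument (a sketch; the decreasing function $g(x) = (3/5)^x + (4/5)^x$ crosses $1$ only once, at $x=2$):

```python
def g(x):
    # After dividing by 5^x: the equation becomes g(x) = 1.
    return (3 / 5) ** x + (4 / 5) ** x

assert abs(g(2) - 1) < 1e-12                 # x = 2 solves g(x) = 1
values = [g(x / 10) for x in range(-20, 60)]
assert all(u > v for u, v in zip(values, values[1:]))   # strictly decreasing
```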
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 2,
"answer_id": 0
} |
Getting the name of combinatorial problems I'll often find myself with some combinatorial problem that's obviously been studied before. For example, "Find the smallest set(s) of positive integers such that every integer from 1 to n is the sum of at most two elements of the set." Without becoming an expert on combinatorics, is there some way of finding out the name such a problem goes by in the literature, say in some sort of catalog? Googling and related tactics don't seem to be very helpful here, as most questions of this sort just consist of the words "set, smallest, such that, ..." repeatedly -- there's generally no unique word or phrase to latch on to. For instance, when I tried to Google the problem above, I got back the subset sum problem (given a set, determine whether some subset sums to zero) and the knapsack problem (given a set of objects with specific weights and values, find the most valuable subset under a given total weight), neither of which have anything to do with what I was actually looking for.
I'm not looking for the name of the problem above in particular (although it wouldn't hurt if anyone knows it), but rather some clean way of looking such things up for myself. Does such a catalog exist?
EDIT: My basic idea here is that a large number of combinatorial problems fall into some basic, MADLIBS-style patterns, for instance:
Choices of $m$ elements of the set ___, (with|without) repetition, (with|without) ordering, satisfying the additional constraint ____.
(Paths|circuits) through a (directed|undirected), (vertex|edge) weighted graph, which visit each (edge|vertex), such that the total weight is (maximal|minimal), and such that _.
An index that listed things in this manner would be helpful to non-experts.
| One way is to calculate the terms for small $n$ (in your example the size of the smallest set with your desired property) and then look these up in the On-Line Encyclopedia of Integer Sequences
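For the example problem in the question, the first few terms are easy to brute-force (a sketch; I assume a sum may reuse an element, i.e. $s+s$ counts, which is one common reading):

```python
from itertools import combinations

def min_basis_size(n):
    # Smallest k such that some k-subset S of {1,...,n} represents every
    # m in 1..n as s or s + t with s, t in S (repeats allowed).
    for k in range(1, n + 1):
        for S in combinations(range(1, n + 1), k):
            reachable = set(S) | {s + t for s in S for t in S}
            if all(m in reachable for m in range(1, n + 1)):
                return k

terms = [min_basis_size(n) for n in range(1, 9)]
print(terms)  # [1, 1, 2, 2, 3, 3, 3, 3] -- a sequence one can search in the OEIS
```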
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 1
} |
Integral closure of p-adic integers in maximal unramified extension Let $\mathbb Q_p$ be the field of p-adic numbers, and let $\mathbb Q_p^{\text{unr}}$ be maximal unramified extension in some algebraic closure of $\mathbb Q_p$. My understanding is that $\mathbb Q_p^{\text{unr}}$ has a fairly explicit description:
$$ \mathbb Q_p^{\text{unr}} = \mathbb Q_p \left(\bigcup_{(n,p)=1} \mu_n \right)$$
where $\mu_n$ is a primitive $n$th root of unity, i.e. we adjoin all $n$th roots of unity with $n$ relatively prime to $p$.
My question is: Does the integral closure of $\mathbb Z_p$ in $\mathbb Q_p^{\text{unr}}$ have a similarly explicit description? For example, does it equal:
$$ \mathbb Z_p \left[\bigcup_{(n,p)=1} \mu_n \right] $$
perhaps?
| Yes, this is true. Since the integral closure of a directed union is the union of the integral closures, it suffices to establish this at every finite level: that is, for $n$ prime to $p$, the ring of integers in $\mathbb{Q}_p(\zeta_n)$ is $\mathbb{Z}_p[\zeta_n]$.
Here are two methods of proof:
First Proof (Local): This follows from the structure theory of unramified extensions of local fields. For instance, you can apply Proposition 4 of these notes on local fields to $\overline{f}$, the minimal polynomial over $\mathbb{F}_p$ of a primitive $n$th root of unity.
Second Proof (Global): Show that the discriminant of the order $\mathcal{O} = \mathbb{Z}[\zeta_n]$ -- or, in plainer terms, of $(1,\zeta_n,\ldots,\zeta_n^{\varphi(n)-1})$ -- is prime to $p$. Therefore the localized order $\mathcal{O} \otimes \mathbb{Z}_p$ is maximal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Expressing sums of products in terms of sums of powers I'm working on building some software that does machine learning. One of the problems I've come up against is that, I have an array of numbers:
$[{a, b, c, d}]$
And I want to compute the following efficiently:
$ab + ac + ad + bc + bd + cd$
Or:
$abc + abd + acd + bcd$
Where the number of variables in each group is specified arbitrarily. I have a method where I use:
$f(x) = a^x + b^x + c^x + d^x$
And then compute:
$f(1) = a + b + c + d$
$(f(1)^2-f(2))/2 = ab + ac + ad + bc + bd + cd$
$(f(1)^3 - 3f(2)f(1) + 2f(3))/6 = abc + abd + acd + bcd$
$(f(1)^4 - 6f(2)f(1)^2 + 3f(2)^2 + 8f(3)f(1) - 6f(4))/24 = abcd$
But I worked these out manually and I'm struggling to generalize it. The array will typically be much longer and I'll want to compute much higher orders.
| Besides using the Newton's identities as mentioned in @Soarer's comment, you could also consider an algorithm to generate all combinations. E.g. to compute
$$abc+abd+acd+bcd$$
you would generate 3-combinations out of 4 elements. Staying with this example, you already have an array of numbers
$a = [a_0, a_1, a_2, a_3]$
so you would generate all possible triples of indexes $(i, j, k), i < j < k$ and then multiply and add $a_i * a_j * a_k$. This method could be numerically more robust than Newton's identities because only additions and no subtractions/divisions are used. The efficiency could also be favourable, but this would require more analysis. There are perhaps still more efficient algorithms.
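A single left-to-right pass can also maintain all the elementary symmetric sums at once, avoiding both explicit combination generation and the subtractions in Newton's identities (my own sketch):

```python
def elem_sym(values, k):
    # e[j] holds the sum over all products of j distinct entries seen so far;
    # appending a new value v updates e[j] += e[j-1] * v (highest j first).
    e = [1] + [0] * k
    for v in values:
        for j in range(k, 0, -1):
            e[j] += e[j - 1] * v
    return e[k]

a = [2, 3, 5, 7]
print(elem_sym(a, 2))  # ab + ac + ad + bc + bd + cd = 101
print(elem_sym(a, 3))  # abc + abd + acd + bcd = 247
```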
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/61966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Bound 1D gaussian domain in the interval $[-3\sigma, 3\sigma]$ so it still is a probability density function I need to bound a 1D gaussian/normal (or similar) probability density function in the domain interval $[-3\sigma, 3\sigma]$ in a way that still integrates to 1.
I would need something like this:
$$
p(x) = \begin{cases} N(x;\mu, \sigma) &\text{if } -3\sigma \leq x \leq 3\sigma\\
0 & \text{otherwise }
\end{cases}
$$
This is NOT a probability density function but how could I get a bounded distribution that is similar to the gaussian case?
Thanks in advance,
Federico
| It seems you are not clear about what you want. To truncate any variable to a given range, you just restrict its density to that range and divide by its integral, so that it integrates to 1.
But if you want to generate a random variable that just "looks like" a gaussian, but has support on an interval, and its density is smooth, you can sum three (or more) uniforms. For example, if you sum three uniforms in $[-1,1]$, the result is a random variable that has support in $[-3,3]$, and its variance is $1$; you can multiply the result by $\sigma$ to get support $[-3 \sigma,3 \sigma]$ and standard deviation $\sigma$. The density is piecewise quadratic; it's continuous and differentiable (though not infinitely differentiable, of course).
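A sketch of this construction in Python (names are mine); sampling confirms the support is $[-3\sigma, 3\sigma]$ and the standard deviation is $\sigma$:

```python
import random

def bounded_bell(sigma):
    # Each uniform on [-1, 1] has variance 1/3, so the sum of three has
    # variance 1; scaling by sigma gives std dev sigma, support [-3s, 3s].
    return sigma * sum(random.uniform(-1.0, 1.0) for _ in range(3))

random.seed(0)
samples = [bounded_bell(2.0) for _ in range(100_000)]
```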
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How to show $a,b$ coprime to $n\Rightarrow ab$ coprime to $n$? Let $a,b,n$ be integers such that $\gcd(a,n)=\gcd(b,n)=1$. How to show that $\gcd(ab,n)=1$?
In other words, how to show that if two integers $a$ and $b$ each have no non-trivial common divisor with an integer $n$, then their product does not have a non-trivial common divisor with $n$ either.
This is a problem that is an exercise in my course.
Intuitively it seems plausible and it is easy to check in specific cases but how to give an actual proof is not obvious.
| $\gcd(a,n)=1$ implies $ar+ns=1$ for some integers $r,s$. $\gcd(b,n)=1$ implies $bt+nu=1$ for some integers $t,u$. So $$1=(ar+ns)(bt+nu)=(ab)(rt)+(aru+sbt+snu)n$$ so $\gcd(ab,n)=1$.
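The Bézout coefficients in this argument can be produced by the extended Euclidean algorithm; here is a sketch (names mine) verifying the combined identity for one instance:

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, n = 9, 10, 7
g1, r, s = ext_gcd(a, n)   # a*r + n*s = 1
g2, t, u = ext_gcd(b, n)   # b*t + n*u = 1
assert g1 == 1 and g2 == 1
# Expanding the product of the two identities, exactly as in the proof:
assert (a * b) * (r * t) + (a * r * u + s * b * t + s * n * u) * n == 1
```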
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Proof that $6^n$ always has a last digit of $6$ Without being proficient in math at all, I have figured out, by looking at series of numbers, that $6$ in the $n$-th power always seems to end with the digit $6$.
Anyone here willing to link me to a proof?
I've been searching google, without luck, probably because I used the wrong keywords.
| HINT $\rm\ \ 6-1\ |\ 6^k-1,\ $ so $\rm\:\ 2,5\ |\ 6^n-6\ \Rightarrow\ 10\ |\ 6^n - 6\:,\ $ i.e. $\rm\ 6^n\ =\ 6 + 10\ k\:$ for $\rm\:k\in\mathbb Z\:.$
Alternatively: $\rm\ mod\ 10:\ \ 6^n\equiv 6\ $ since it is $\rm\ 0^n \equiv 0\pmod 2,\ \ 1^n \equiv 1\pmod 5$
Similarly odd $\rm\:b\: \Rightarrow\: (b+1)^n\equiv b+1\pmod{2\:b}\:,\:$ so $\rm\:(b+1)^n\:$ has last digit $\rm\:b+1\:$ in radix $\rm\:2\:b\:.$
NOTE how modular arithmetic reduces the induction to the trivial inductions $\rm\ 0^n = 0,\ 1^n = 1\:.$ This is a prototypical example of the sort of simplification afforded by reducing arithmetical problems to their counterparts in the simpler arithmetical rings of integers $\rm\:(mod\ m)\:.\:$
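A one-line machine check of the two congruences used above (a sketch):

```python
# 6^n = 0 (mod 2) and 6^n = 1 (mod 5) together force 6^n = 6 (mod 10) for n >= 1.
for n in range(1, 100):
    assert pow(6, n, 2) == 0
    assert pow(6, n, 5) == 1
    assert pow(6, n, 10) == 6
```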
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 4
} |
Proving $1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$ using induction How can I prove that
$$1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$
for all $n \in \mathbb{N}$? I am looking for a proof using mathematical induction.
Thanks
It may be helpful to recognize that both the RHS and LHS represent the sum of the entries in a multiplication table: the LHS represents the summing of Ls (I'll outline those shortly), and the RHS the summing of the row [or column] sums.$$\begin{array}{lll}
\color{blue}\times&\color{blue}1&\color{blue}2\\
\color{blue}1&\color{green}1&\color{red}2\\
\color{blue}2&\color{red}2&\color{red}4\\
\end{array}$$
Lets begin by building our multiplication tables with a single entry, $1\times1=1=1^2=1^3$. Next, we add the $2$s, which is represented by the red L [$2+4+2 = 2(1+2+1)=2\cdot2^2=2^3$].
So the LHS (green 1 + red L) currently is $1^3+2^3$, and the RHS is $(1+2)+(2+4)=(1+2)+2(1+2)=(1+2)(1+2)=(1+2)^2$.
$$\begin{array}{llll}
\color{blue}\times&\color{blue}1&\color{blue}2&\color{blue}3\\
\color{blue}1&\color{green}1&\color{red}2&\color{maroon}3\\
\color{blue}2&\color{red}2&\color{red}4&\color{maroon}6\\
\color{blue}3&\color{maroon}3&\color{maroon}6&\color{maroon}9\\
\end{array}$$
Next, lets add the $3$s L. $3+6+9+6+3=3(1+2+3+2+1)=3\cdot3^2=3^3$. So now the LHS (green 1 + red L + maroon L) currently is $1^3+2^3+3^3$, and the RHS is $(1+2+3)+(2+4+6)+(3+6+9)=(1+2+3)+2(1+2+3)+3(1+2+3)=(1+2+3)(1+2+3)=(1+2+3)^2$.
By now, we should see a pattern emerging that will give us direction in proving the title statement.
Next we need to prove inductively that $\displaystyle\sum_{i=1}^n i = \frac{n(n+1)}{2}$, and use that relationship to show that $1+2+3+\dots+n+\dots+3+2+1 = \dfrac{n(n+1)}{2}+ \dfrac{(n-1)n}{2} = \dfrac{n((n+1)+(n-1))}{2}=\dfrac{2n^2}{2}=n^2$
Finally, it should be straightforward to show that:
$$\begin{array}{lll}
(\sum^n_{i=1}i+(n+1))^2 &=& (\sum^n_{i=1}i)^2 + 2\cdot(\sum^n_{i=1}i)(n+1)+(n+1)^2\\
&=& \sum^n_{i=1}i^3 + (n+1)(\sum^n_{i=1}i + (n+1) + \sum^n_{i=1}i)\\
&=& \sum^n_{i=1}i^3 + (n+1)(n+1)^2\\
&=& \sum^n_{i=1}i^3 + (n+1)^3\\
&=& \sum^{n+1}_{i=1}i^3\\
\end{array}$$
and, as was already pointed out previously, $$(\sum_{i=1}^1 i)^2 = \sum_{i=1}^1 i^3=1$$
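A quick machine check of the identity (not a proof, just reassurance for the induction above):

```python
# Verify 1^3 + 2^3 + ... + n^3 = (n(n+1)/2)^2 for many n.
for n in range(1, 200):
    lhs = sum(i ** 3 for i in range(1, n + 1))
    rhs = (n * (n + 1) // 2) ** 2   # n(n+1) is always even, so // is exact
    assert lhs == rhs
```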
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67",
"answer_count": 16,
"answer_id": 3
} |
Solve to find the unknown I have been doing questions from the past year and I come across this question which stumped me:
The constant term in the expansion of $\left(\frac1{x^2}+ax\right)^6$
is $1215$; find the value of $a$. (The given answer is: $\pm 3$ )
Should be an easy one, but I don't know how to begin. Some help please?
| Via the binomial theorem we know the $x^0$ factor (the coefficient of which you call the "constant term") is actually
$${6 \choose 2}\left(\frac{1}{x^2}\right)^2(ax)^{6-2}.$$
Since 6 choose 2 is equal to $5\cdot6/2=15$, we have the equality $15a^4=1215$. The prime factorization of 1215 is $5\cdot3^5$, so we arrive at the conclusion that $a^4=3^4$, or $a=\pm3$ (discarding imaginary solutions).
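The same computation spelled out for every term of the expansion (a sketch; here $k$ is the exponent of $ax$, so $k=4$ matches the $\binom{6}{2}=\binom{6}{4}$ term above):

```python
from math import comb

# Term k of (1/x^2 + a*x)^6 is C(6, k) a^k x^(k - 2(6-k)) = C(6, k) a^k x^(3k - 12).
constant_k = [k for k in range(7) if 3 * k - 12 == 0]
assert constant_k == [4] and comb(6, 4) == 15
a4 = 1215 / comb(6, 4)          # so a^4 = 81
assert a4 == 81.0               # hence a = +3 or a = -3
```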
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proof: The complement of an annulus embedded in a sphere has two connected components By the Jordan Curve Theorem we know that the complement of an $S^{n-1}$ embedded into the $S^n$ has exactly two connected components.
What if -- instead of a sphere -- we embed an annulus, i.e. $S^{n-1}\times [-1,1]$ into $S^n$. Intuitively I would say that the complement of this annulus should also have two components, but I couldn't think of or find an easy proof for this statement.
Can anyone here come up with a simple solution, maybe deducing that assertion from Jordan's theorem as a corollary? If not: what techniques could be used to prove the statement, or is it even false?
Note: One idea would be to use the Schoenflies theorem, which would allow me to show that the component of the complement of the image of $S^{n-1}\times \{1\}$ that contains the image of $S^{n-1}\times \{-1\}$ is homeomorphic to an open $n$-cell and thus to $\mathbb{R}^n$, allowing me to use the Jordan Curve theorem again on that component. However, I am actually trying to prove exactly that theorem using the above statement, so I cannot use it here.
| Let $f: S^{n-1}\times[-1,1]\to S^n$ be our embedding. My goal is to show $S^n-f(S^{n-1}\times\{-1,1\})$ has three connected components.
First look at the two connected components of $S^n-f(S^{n-1}\times\{-1\})$.
Take two points in one of the components.
If they both lie in the annulus we can choose a path in the annulus using our domain as a chart.
Suppose both are outside the annulus, though. Then there is a path between them in $S^n-f(S^{n-1}\times\{-1\})$, but it may go through the annulus. There must be a time the path first enters the annulus and a time it finally leaves. Moreover, we know these entry and exit points must be on $f(S^{n-1}\times \{1\})$. We can then use our chart to homotope this path, using radial projection, to lie either on $f(S^{n-1}\times \{1\})$ or completely off the annulus.
Now use small open balls and compactness to homotope the path completely off $f(S^{n-1}\times \{1\})$.
Thus, these points are still in the same connected component when we remove the boundaries of the annulus. Clearly, upon removing the boundaries of the annulus the interior is a connected component (any path would have to enter the annulus). So we have shown there are three connected components upon removing the boundaries so when we remove the interior of the annulus we are left with two.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integral around unit sphere of inner product For arbitrary $n\times n$ matrices M, I am trying to solve the integral
$$\int_{\|v\| = 1} v^T M v.$$
Solving this integral in a few low dimensions (by passing to spherical coordinates) suggests the answer in general to be
$$\frac{A\,\mathrm{tr}(M)}{n}$$
where $A$ is the surface area of the $(n-1)$-dimensional sphere. Is there a nice, coordinate-free approach to proving this formula?
| As a function of $M$ your integral is linear, and is invariant under conjugation by orthogonal transformations ($C_R: M \mapsto R^{T} M R$). Now the average of $C_R(M)$ over all orthogonal transformations $R$ (using Haar measure) is $\text{Tr}(M) I/n$ (it must be invariant under all $C_R$ so it is a multiple of $I$, and the trace is preserved). So the integral is the same as it would be for $\text{Tr}(M)I/n$, which is $\text{Tr}(M)/n$ times the area of the sphere.
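A Monte Carlo sanity check of the resulting formula — the *average* of $v^T M v$ over the sphere (the integral divided by the surface area) should equal $\mathrm{tr}(M)/n$. Uniform directions come from normalized Gaussians; this is my own sketch:

```python
import math, random

def sphere_average(M, n, samples=50_000):
    # Estimate the average of v^T M v over the unit sphere in R^n.
    random.seed(1)
    total = 0.0
    for _ in range(samples):
        g = [random.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in g))
        v = [x / norm for x in g]                  # uniform on the sphere
        total += sum(M[i][j] * v[i] * v[j] for i in range(n) for j in range(n))
    return total / samples

M = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 4.0]]
est = sphere_average(M, 3)    # trace(M)/n = 9/3 = 3, so est should be near 3
```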
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 1
} |
Limit of difference of two square roots I need to find the limit, not sure what to do.
$\lim_{x \to \infty} \sqrt{x^2 +ax} - \sqrt{x^2 +bx}$
I am pretty sure I have to divide by the largest degree, which is $x^2$, but that gets me some weird numbers that don't seem to help.
| Applying the formula $x^2-y^2=(x-y)(x+y)$ we get
$\sqrt{x^2+ax}-\sqrt{x^2+bx}= \frac{x^2+ax-x^2-bx}{\sqrt{x^2+ax}+\sqrt{x^2+bx}}=\frac{x(a-b)}{x\left(\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}\right) }=\frac{a-b}{\sqrt{1+\frac{a}{x}}+\sqrt{1+\frac{b}{x}}}$
now you can take the limit.
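Numerically, the rationalized form shows the limit is $(a-b)/2$; a quick check with $a=5$, $b=3$ (so the limit is $1$) is a useful sanity test:

```python
import math

a, b = 5.0, 3.0
for x in (1e4, 1e6, 1e8):
    val = math.sqrt(x * x + a * x) - math.sqrt(x * x + b * x)
    # The deviation from (a - b)/2 shrinks roughly like 1/x.
    assert abs(val - (a - b) / 2) < 10 / x
```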
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Proving $\cot(A)\cot(B)+\cot(B)\cot(C)+\cot(C)\cot(A)=1$ I was stumped by another past-year question:
In $\triangle ABC$, prove that $$\cot(A)\cot(B)+\cot(B)\cot(C)+\cot(C)\cot(A)=1.$$
Here's what I have done so far: I tried to replace $C$, using $C=180^\circ-(A+B)$. But after doing this, I don't know how to continue.
I would be really grateful for some help on this, thanks!
| $$\cot(A+B+C)=\frac{\cot(A)\cot(B)\cot(C)-(\cot(A)+\cot(B)+\cot(C))}{\cot(A)\cot(B)+\cot(C)\cot(B)+\cot(C)\cot(A)-1}$$
now use the fact that $\cot(\pi)$ is infinity and for that the denominator on the right hand side has to be 0
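A numerical spot-check over random triangles (a sketch; angles are sampled so that $A+B+C=\pi$ with all three positive):

```python
import math, random

def cot(t):
    return math.cos(t) / math.sin(t)

random.seed(0)
for _ in range(1000):
    A = random.uniform(0.1, 1.5)
    B = random.uniform(0.1, 1.5)
    C = math.pi - A - B          # third angle; positive for these ranges
    lhs = cot(A) * cot(B) + cot(B) * cot(C) + cot(C) * cot(A)
    assert abs(lhs - 1.0) < 1e-9
```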
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Why is a finite integral domain always field? This is how I'm approaching it: let $R$ be a finite integral domain and I'm trying to show every element in $R$ has an inverse:
*
*let $R-\{0\}=\{x_1,x_2,\ldots,x_k\}$,
*then, as $R$ is closed under multiplication, $\prod_{i=1}^k x_i=x_j$ for some $j$,
*therefore by canceling $x_j$ we get $x_1x_2\cdots x_{j-1}x_{j+1}\cdots x_k=1 $,
*by commuting any of these elements to the front we find an inverse for first term, e.g. for $x_m$ we have $x_m(x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k)=1$, where $(x_m)^{-1}=x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k$.
As far as I can see this is correct, so we have found inverses for all $x_i\in R$ apart from $x_j$ if I am right so far. How would we find $(x_{j})^{-1}$?
| In fact, we can go a bit farther, and say that if $R$ is a finite commutative ring that has elements that are not zero-divisors, then $R$ has an identity. Furthermore, every nonzero element of $R$ is either a unit or a zero-divisor.
To see why, pick $a\in R\setminus\{0\}$ with $a$ not a zero-divisor. As $R$ is finite, the set $\{a,a^2,a^3,...\}$ must also be finite, whence there exist $m,n\in \mathbb{N}$ with $m<n$ and $a^m=a^n$.
We will now show that $a^{n-m}$ serves as an identity for $R$. Pick any $x\in R$. Then $a^m=a^n$ implies $a^mx=a^nx$, whence $a^m(a^{n-m}x-x)=0$. Now, since $a$ is not a zero divisor, it is clear that $a^m$ is not a zero-divisor. Thus, the only way we can have $a^m(a^{n-m}x-x)=0$ is if $a^{n-m}x-x=0$ or $a^{n-m}x=x$. Therefore $a^{n-m}=1_R$, and $R$ has an identity.
In fact, the proof that any nonzero element which is not a zero-divisor is a unit essentially follows from the same argument as above (letting $x=1$ now that we know that $R$ has an identity): if $a\in R\setminus\{0\}$ is not a zero-divisor, then there exist $0<m<n$ with $a^m=a^n\,\Rightarrow\,a^m(a^{n-m}-1)=0\,\Rightarrow\,a^{n-m}=1$ (since, again, if $a$ is not a zero-divisor, then neither can $a^m$ be a zero-divisor). Therefore, every nonzero element of $R$ is either a zero-divisor or a unit.
From here, it directly follows that every finite integral domain is a field, since integral domains have no zero-divisors.
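A small illustration where the identity is not given in advance (my own example): the even residues $\{0,2,4,6,8\}$ form a commutative ring under arithmetic mod 10, and the argument above manufactures its identity as a power of a non-zero-divisor:

```python
# In R = 2Z/10Z, the element 2 is not a zero divisor, and since 2^1 = 2^5
# (mod 10), the proof says 2^(5-1) = 6 acts as the unit of R.
R = [0, 2, 4, 6, 8]
a = 2
assert all((a * x) % 10 != 0 for x in R if x != 0)   # a is not a zero divisor
assert pow(a, 1, 10) == pow(a, 5, 10)                # a^m = a^n with m=1, n=5
e = pow(a, 5 - 1, 10)                                # the proof's a^(n-m)
assert e == 6
assert all((e * x) % 10 == x for x in R)             # 6 is the identity of R
```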
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 7,
"answer_id": 3
} |
What is your favorite application of the Pigeonhole Principle? The pigeonhole principle states that if $n$ items are put into $m$ "pigeonholes" with $n > m$, then at least one pigeonhole must contain more than one item.
I'd like to see your favorite application of the pigeonhole principle, to prove some surprising theorem, or some interesting/amusing result that one can show students in an undergraduate class. Graduate level applications would be fine as well, but I am mostly interested in examples that I can use in my undergrad classes.
There are some examples in the Wikipedia page for the pigeonhole principle. The hair-counting example is one that I like... let's see some other good ones!
Thanks!
| Let $p=6k-1$ be a prime.
Theorem: Modulo $p$, if $a^3=b^3$ then $a=b$.
Proof. Suppose $x\ne 0$. Then, by Fermat's little theorem, $x^{p-1}=1$, so $x=(x^{p-1})^2x=x^{2p-1}=x^{12k-3}=(x^{4k-1})^3$ is a cubic residue mod $p$. 0 is a cubic residue, too, so every residue is a cubic residue. By the pigeonhole principle, every residue is a cubic residue exactly once. Thus $a^3=b^3$ implies $a=b$.
Here the $x^3$ mod $p$ are the pigeons and the $x$ mod $p$ are the holes.
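A quick machine check of both claims (a sketch): for primes $p \equiv 5 \pmod 6$, cubing permutes $\mathbb{Z}/p\mathbb{Z}$, and $x \mapsto x^{4k-1}$ is the cube-root map from the proof.

```python
# For p = 6k - 1, cubing is a bijection on Z/pZ, with x^(4k-1) a cube root of x.
for p in (5, 11, 17, 23, 29, 41):
    assert p % 6 == 5                                    # i.e. p = 6k - 1
    assert len({pow(x, 3, p) for x in range(p)}) == p    # every residue is a cube once
    k = (p + 1) // 6
    assert all(pow(pow(x, 4 * k - 1, p), 3, p) == x for x in range(p))
```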
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "89",
"answer_count": 23,
"answer_id": 21
} |
Are there any common practices in mathematics to guard against mistakes? It occurred to me that math is somewhat like programming (or vice-versa, if you prefer) because, in both, it is easy to make mistakes or overlook them, and the smallest error or misguided assumption can make everything else go completely wrong.
In programming, in order to make code less error-prone and easier to maintain, there are some principles such as don't repeat yourself and refactor large modules into smaller ones. If you ask any experienced programmer about these, he will tell you why these principles are so important and how they make life easier in the long run.
Is there anything that mathematicians frequently do to protect themselves from mistakes and to make things generally easier? If so, what are they and how do they work?
| An example:
After Doron Zeilberger was criticised repeatedly for errors in his long proof of the alternating sign matrix conjecture he rewrote it in a tree-like structure and had other people check each node of the tree:
http://arxiv.org/abs/math/9407211
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 4,
"answer_id": 3
} |
On factorizing and solving the polynomial: $x^{101} - 3x^{100} + 2x^{99} + x^{98} - 3x^{97} + 2x^{96} + \cdots + x^2 - 3x + 2 = 0$ The actual problem is to find the product of all the real roots of this equation; I am stuck with this factorization:
$$x^{101} - 3x^{100} + 2x^{99} + x^{98} - 3x^{97} + 2x^{96} + \cdots + x^2 - 3x + 2 = 0$$
By just guessing I noticed that $(x^2 - 3x + 2)$ is one factor, and then dividing that whole thing we get $(x^{99}+x^{96}+x^{93} + \cdots + 1)$ as the other factor, but I really don't know how to solve in cases where wild guessing won't work! Do we have any trick for factorizing this kind of big polynomial?
Also I am not sure how to find the roots of $(x^{99}+x^{96}+x^{93} + \cdots + 1)=0$,so any help in this regard will be appreciated.
| Note that
$$t^{34}-1=(t^{33}+t^{32}+\cdots+t+1)(t-1)$$
and so, substituting $t=x^3$, we get
$$x^{102}-1=(x^{99}+x^{96}+\cdots+x^3+1)(x^3-1)$$
So any real root of $x^{99}+x^{96}+\cdots+x^3+1$ will be a real root of $x^{102}-1$ (and those should be easy to find). But note that, for example, $1$ is a real root of $x^{102}-1$, but is not a root of $x^{99}+x^{96}+\cdots+x^3+1$, since $34=1+1+\cdots+1\neq0$. So, once you find the real roots of $x^{102}-1$ and determine which of them is in fact a root of $x^{99}+x^{96}+\cdots+x^3+1$, you can combine with the real roots of $x^2-3x+2=(x-1)(x-2)$ to get the answer.
To factorize $x^{99}+x^{96}+\cdots+x^3+1$ into irreducibles over $\mathbb{Z}$ (which, it turns out, is equivalent to factoring into irreducibles over $\mathbb{Q}$ in this case), we use the fact that
$$x^{99}+x^{96}+\cdots+x^3+1=\frac{x^{102}-1}{x^3-1}$$
combined with the fact that
$$x^{102}-1=\prod_{d\mid 102}\Phi_d(x)=\Phi_{102}(x)\Phi_{51}(x)\Phi_{34}(x)\Phi_{17}(x)\Phi_6(x)\Phi_3(x)\Phi_2(x)\Phi_1(x)$$
where $\Phi_d(x)$ is the $d$th cyclotomic polynomial. The cyclotomic polynomials are all irreducible over $\mathbb{Q}$. Any irreducible polynomial in $\mathbb{R}[x]$, though, is either a linear $x-a$ for $a\in\mathbb{R}$, or a quadratic $x^2+ax+b$ for which $a^2-4b<0$. The factorization of $x^{102}-1$ into irreducibles over $\mathbb{R}$ is just $(x-1)$, $(x+1)$, and then a bunch of quadratics $$x^2-(\zeta_{102}^k+\overline{\zeta_{102}}^k)x+1=(x-\zeta_{102}^k)(x-\overline{\zeta_{102}}^k)$$
where $\zeta_{102}$ is a primitive $102$nd root of unity and $0<k<51$.
Of course, the factorization into irreducibles over $\mathbb{C}$ is just
$$(x-1)(x-\zeta_{102})(x-\zeta_{102}^2)\cdots(x-\zeta_{102}^{50})(x+1)(x-\zeta_{102}^{52})\cdots(x-\zeta_{102}^{101})$$
Wolfram Alpha has a nice printout with the conjugate pairs, it may be helpful.
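None of this is hard to verify by machine. Below is a sketch in pure Python (all helper names are mine, not from the answer): it builds each $\Phi_d$ by the standard recursion $\Phi_n(x)=(x^n-1)/\prod_{d\mid n,\,d<n}\Phi_d(x)$ using exact integer polynomial division, then checks that the product over the divisors of $102$ really is $x^{102}-1$:

```python
from functools import lru_cache

# Polynomials are integer coefficient lists: index = degree.

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_divexact(num, den):
    # Long division; asserts the remainder is zero (cyclotomics are monic,
    # so integer division is exact here).
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for k in range(len(q) - 1, -1, -1):
        c = num[k + len(den) - 1] // den[-1]
        q[k] = c
        for j, dj in enumerate(den):
            num[k + j] -= c * dj
    assert not any(num), "division was not exact"
    return q

@lru_cache(maxsize=None)
def cyclotomic(n):
    # Phi_n(x) = (x^n - 1) / prod of Phi_d(x) over proper divisors d of n.
    p = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            p = poly_divexact(p, cyclotomic(d))
    return p

divisors = [d for d in range(1, 103) if 102 % d == 0]  # 1,2,3,6,17,34,51,102
prod = [1]
for d in divisors:
    prod = poly_mul(prod, cyclotomic(d))

assert prod == [-1] + [0] * 101 + [1]       # the product is x^102 - 1
print([len(cyclotomic(d)) - 1 for d in divisors])  # degrees = Euler totients
```

The printed degrees are the totients $\varphi(d)$ of the divisors, which sum to $102$ as they must.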
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Set Properties: Formal Semantics of Programming Languages What is the difference between the two notations below?
$\{x \mid x \in X \ \&\ P(x)\}$
vs.
$X = \{x \mid P(x)\}$
These both seem to say to me that $x$ is in $X$ as long as it abides by property $P$. The top one is defined as a comprehension, and the bottom one is used as a lead-in to Russell's paradox.
| The difference is that $X$ is required to be a set in the first, and in the second it may not be. It's kind of amazing, but this avoids the classical set-theoretic paradoxes. The classic Russell paradox $X=\{x|x\not \in x\}$ is a good example. Now in the first we know we have a set by the power set and separation axioms. In the second we don't.
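A programming-language aside, since the question comes from formal semantics (this analogy is my addition, not part of the answer): bounded comprehension, i.e. separation, always yields a set because it merely filters an existing one, which is exactly what set-builder syntax does in, say, Python. The unbounded form $\{x \mid P(x)\}$ has no analogue at all, because there is no ambient set to iterate over:

```python
# Separation: filter an existing set X by a property P.  Always well-defined,
# even for a "Russell-like" property, because X is already a set.
def separation(X, P):
    return {x for x in X if P(x)}

X = {1, 2, frozenset(), frozenset({1})}

# "x is not an element of x", restricted to X.
R = separation(X, lambda x: not (isinstance(x, frozenset) and x in x))
print(R == X)  # True: no element of this X contains itself

# The paradox would need the UNBOUNDED {x | P(x)}: there is no "set of all
# sets" to loop over, so that version simply cannot be written.
```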
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a solid where all triangles on the surface are isosceles? Are there any solids in $\mathbb{R}^{3}$ for which, for any 3 points chosen on the surface, at least two of the lengths of the shortest curves which can be drawn on the surface to connect pairs of them are equal?
| If one has a 3-connected plane graph all of whose faces are triangles, it is known that one cannot always realize this graph by a convex 3-dimensional polyhedron all of whose faces are congruent strictly isosceles triangles. There are exactly 8 types of such graphs which can be realized with equilateral triangles: the so-called convex deltahedra. It is however an open problem whether one can always realize such a graph so that all of the faces are isosceles but the faces have edges of different lengths. Your question goes beyond these considerations in its requirements.
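As a supplement (my addition; the face counts below are the standard list of convex deltahedra, which the answer mentions but does not enumerate): with $F$ triangular faces one has $E=3F/2$, since each face has 3 edges and each edge is shared by 2 faces, and Euler's formula then gives $V=E-F+2$. A few lines of Python confirm this bookkeeping for all eight cases:

```python
# Standard face counts of the 8 convex deltahedra (tetrahedron through icosahedron).
deltahedra_faces = [4, 6, 8, 10, 12, 14, 16, 20]

for F in deltahedra_faces:
    E = 3 * F // 2          # every face is a triangle, every edge shared twice
    V = E - F + 2           # Euler: V - E + F = 2
    assert V - E + F == 2
    print(f"F={F:2d}  E={E:2d}  V={V:2d}")
# e.g. F=4 gives E=6, V=4 (tetrahedron); F=20 gives E=30, V=12 (icosahedron)
```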
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does every Noetherian domain have finitely many height 1 prime ideals?
Let $A$ be a Noetherian domain. Is the set $\{P\subset A \mid P \mbox{ prime ideal, } \dim A_P=1\}$ always finite?
I can prove for $f \neq 0, f\in A$, the set $\{P\subset A \mid \dim A_P=1, f\in P\}$ is finite (by using the primary decomposition of $\sqrt{(f)}$). The above statement is just the case when $f=0$.
| No. Look at the ring $A=\mathbb{Z}$.
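To spell the counterexample out (this illustration is mine, not part of the answer): the height-$1$ primes of $\mathbb{Z}$ are exactly the ideals $(p)$ for $p$ prime, and there are infinitely many primes. The asker's finiteness result for $f\neq0$ matches the fact that only the prime divisors of $f$ can contain it. A short sketch, assuming Python:

```python
# In Z, the height-1 primes (p) containing a nonzero n are given by p | n:
# always a finite set.  With n = 0, every prime ideal qualifies -- and there
# are infinitely many primes, so the set of height-1 primes is infinite.
def prime_factors(n):
    n, p, out = abs(n), 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

print(prime_factors(60))   # {2, 3, 5}: the height-1 primes of Z containing 60
```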
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/62856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |