Linear codes and check matrices. Two linear codes are defined as the following subspaces of $\Bbb B_7$:
$C_1$ with dimension $2$ and basis $\{1010101,\ 0101010\}$
$C_2=\text{ker}(H)$ where $H=\left (\!\begin {array}{ccccccc} 0&0&0&1&1&1&1\\ 0&1&1&0&0&1&1\\ 1&0&1&0&1&0&1 \end {array}\!\right )$.
Just wanted to check, to find $C_1$ I'd take the word $0000000$ and add the two elements in the basis, right? For $C_2$ however, I get a bit bogged down in trying to get $(A\vert I_m)$ because I'm not sure what $m$ should be in this case.
| $C_1$ has dimension 2 and consists of the zero vector, the two basis vectors and the all-1 vector (by adding the two basis vectors).
$C_2$ has dimension $m=4 = 7-3$, since the check matrix $H$ has rank $3$ and the ambient space is $\Bbb Z_2^7$, so by rank-nullity the kernel has dimension $7-3=4$.
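A brute-force check of this dimension count (my own sketch, not part of the original answer): enumerate all of $\Bbb Z_2^7$ and keep the words annihilated by $H$.

```python
from itertools import product

# Check matrix H from the question, with rows over GF(2).
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def in_kernel(word):
    # H * word == 0 over GF(2), i.e. every row has even overlap with word.
    return all(sum(h * w for h, w in zip(row, word)) % 2 == 0 for row in H)

C2 = [w for w in product((0, 1), repeat=7) if in_kernel(w)]
print(len(C2))  # 16 = 2^4, so dim C2 = 4
```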
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding remainder of $123^{456}$ divided by 88 using Chinese Remainder Theorem I tried using the Chinese Remainder Theorem but I kept getting 19 instead of 9.
Here are my steps
$$
\begin{split}
M &= 88 = 8 \times 11 \\
x_1 &= 123^{456}\equiv 2^{456} \equiv 2^{6} \equiv 64 \equiv 9 \pmod{11} \\
y_1 &= 9^{-1} \equiv 9^9 \equiv (-2)^9 \equiv -512 \equiv -6 \equiv 5 \pmod{11}\\
x_2 &= 123^{456} \equiv 123^0 \equiv 1 \pmod{8}\\
y_2 &= 1^{-1} \equiv 1 \pmod{8} \\
123^{456}
&\equiv \sum_{i=1}^2 x_i\times\frac{M}{m_i} \times y_i
\equiv 9\times\frac{88}{11}\times5 + 1\times\frac{88}{8} \times1 \equiv 371
\equiv 19 \pmod{88}
\end{split}
$$
| $y_1$ should've been the inverse of $8\pmod{11}$, not of $9\pmod{11}$, so $y_1=7$.
Similarly, $y_2$ should've been the inverse of $11\pmod 8$, not of $1\pmod 8$, so $y_2=3$.
Therefore, the result is: $9\times\frac{88}{11}\times \color{red}{7}+1\times\frac{88}{8}\times \color{red}{3}=537\equiv 9\pmod{88}$
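The corrected computation can be checked directly (a sketch I added; the variable names mirror the ones above, and `pow(x, -1, m)` for the modular inverse needs Python 3.8+):

```python
# Verify the corrected CRT computation for 123^456 mod 88 = 8 * 11.
M1, M2 = 11, 8
x1 = pow(123, 456, M1)            # 9
x2 = pow(123, 456, M2)            # 1
y1 = pow(88 // M1, -1, M1)        # inverse of 8 mod 11, which is 7
y2 = pow(88 // M2, -1, M2)        # inverse of 11 mod 8, which is 3
result = (x1 * (88 // M1) * y1 + x2 * (88 // M2) * y2) % 88
print(result)  # 9
```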
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 1
} |
Does a curve (differentiable manifold of dim 1) always have a parametrization? I don't know much about this field, so this is a basic question. I think there are 2 similar basic concepts referring to curve:
*
*A $C^k$ differential curve, can be defined as a differentiable $C^k$ manifold of dimension 1. This is what one can read here: https://en.wikipedia.org/wiki/Curve#Differentiable_curve .
*A $C^k$-parametric curve is roughly a $C^k$ function $f$ from an interval $I$ of $\mathbb{R}$, to a normed vector space $E$.
This concept is studied here https://en.wikipedia.org/wiki/Differentiable_curve; surprisingly, that page does not mention the first definition despite its title.
Now I think it is obvious that the trajectory of a $C^k$-parametric curve (ie $f(I)$) is a $C^k$ differential curve, because one of the equivalent definitions of manifold says that, each point of it needs to have some local parametrization, so we can use the global parametrization provided by the parametric curve for it.
It naturally raises the converse question:
If I am given a set of points that is a manifold of dimension 1, is it the trajectory of at least one parametric arc?
A more accurate question: if I have a $C^k$ differential curve, is it the trajectory of at least one $C^k$ parametric arc?
I thought that maybe one can use the local parametrizations provided by the manifold and somehow stick them together as a chain, but I feel like there is no guarantee that I can cover the whole curve, because maybe the parametrizations are getting smaller and smaller...
| When you say
Now I think it is obvious that the trajectory of a $C^k$-parametric curve (ie $f(I)$) is a $C^k$ differential curve...
this is in fact not correct. Consider for example a lemniscate.
The converse is not true either. Consider a manifold of dimension one with two connected components.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is the product of source and target maps of a Lie Groupoid a submersion? Let $G:= (G_1 \rightrightarrows G_0)$ be a Lie Groupoid. By definition, we know that the source $s$ and target $t$ are submersions. Now define $(s,t):G_1 \rightarrow G_0 \times G_0$ as $\gamma \mapsto (s(\gamma),t(\gamma))$.
My Question is the following:
(1) Is $(s,t)$ a submersion?
(2) More generally, is the product of two submersions always a submersion?
I am expecting a positive answer.
My attempt in the context of my original question:
$(s,t)_{*,\gamma}:T_\gamma(G_1) \rightarrow T_{(s(\gamma),t(\gamma))}(G_0 \times G_0) \cong T_{s(\gamma)}(G_0) \times T_{t(\gamma)}(G_0)$ (Standard identification) where $\gamma \in G_1$ and $(s,t)_{*. \gamma}$ is the differential of $(s,t)$ at $\gamma \in G_1$.
Also, $s_{*,\gamma} \times t_{*,\gamma} : T _{\gamma} (G_1) \rightarrow T_{s(\gamma)}(G_0) \times T_{t(\gamma)}(G_0)$ defined by $\lambda \mapsto (s_{* ,\gamma}(\lambda), t_{* ,\gamma}(\lambda))$
Now I am guessing that $(s,t)_{*,\gamma} = s_{*,\gamma} \times t_{*,\gamma}$ ....(3)
But I am not able to prove it explicitly.
More generally, is there any result along the following line:
If $F:M \rightarrow N \times N$ is a smooth map of finite dimensional smooth manifolds given by $F(x)=(f(x) , g(x))$, where $f ,g :M \rightarrow N$ are smooth maps, then is $F_{*,p} = f_{{*,p}} \times g_{{*,p}}$? ... (4)
After this I am not able to proceed!
Summarising: I asked the questions marked (1), (2), (3), (4), with the ultimate goal of answering (1), that is
Is $(s,t)$ a submersion?
Thanks in advance.
| In general, the map $(s,t):G_1\rightarrow G_0\times G_0$ need not be a submersion.
Suppose $M$ is a positive dimensional manifold and consider the Lie groupoid $M \rightrightarrows M$ where we think of the objects as being $M$, and for each $m\in M$, the only arrow at $m$ is the identity arrow (so the collection of arrows is canonically identified with $M$).
Then $s = t = Id_M$. So, $(s,t):M\rightarrow M\times M$ is the diagonal map $m\mapsto (m,m)$. The dimension of $M\times M$ is twice the dimension of $M$, so for positive dimensional $M$, there can be no submersion $M\rightarrow M\times M$. Thus, $(s,t)$ is not a submersion in this case.
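A numerical illustration of this counterexample with $M=\mathbb{R}$ (my addition; the test point and step size are arbitrary): the differential of the diagonal map, estimated by central differences, sends $v$ to $(v,v)$, and every such image vector is orthogonal to $(1,-1)$, so the differential is never onto $\mathbb{R}^2$.

```python
# D is the diagonal map of the counterexample with M = R.
def D(m):
    return (m, m)

p, h = 0.3, 1e-6
J = ((D(p + h)[0] - D(p - h)[0]) / (2 * h),
     (D(p + h)[1] - D(p - h)[1]) / (2 * h))   # approximately (1, 1)
dot = J[0] * 1 + J[1] * (-1)                  # pairing with (1, -1)
print(J, dot)
```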
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simple space-filling curve Here are six iterations of Hilbert space-filling curve.
Isn't there a simpler space-filling curve?
For example $16$ iterations of this curve:
Isn't it also a space-filling curve?
*
*If it is not then why?
*If it is, then why did they (mathematicians) not use the simplest possible curve? Or what are the advantages of the Hilbert curve over mine?
| The images you provided don't lead to the construction of a space-filling curve. This is due to the fact that if you take $\{f_n\}$ to be the sequence of functions defining the iterations that you have provided, normally we'd call $f:= \lim_{n\rightarrow \infty} f_n$ our resulting space-filling curve. However, in this case this limit doesn't exist (so $f$ isn't well-defined).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Swapping modulus and argument in polar coordinates Let $y=f(x)$ in Cartesian coordinates. Swapping the $x$ coordinate with the $y$ coordinate has the effect of reflection about the axis $y=x$.
But if $r=f(\theta )$ in polar coordinates (where $r$ is the modulus and $\theta$ is the argument), what happens when we swap $r$ with $\theta$?
For example, what's the difference between the polar graphs of $r=\theta ^2$ and $\theta =r^2$? Do they look the same? Or is there some kind of reflection as well?
| By "polar coordinates" we mean the well-known mapping $\mathbb{R}^2 \to \mathbb{R}^2$, from $(x,y)$ to $(\theta, r)$, given by the formulas
$x = r\cos \theta$, $y = r\sin \theta$, $r \geqslant 0,\ \theta \in [0, 2\pi)$.
As for the plane $(\theta, r)$: it carries the usual Cartesian coordinates, it is the usual $\mathbb{R}^2$, and you can think about it exactly as you think about $(x,y)$. There $r=\theta^2$ is exactly a parabola, and $\theta=r^2$ is both branches of a square root.
We use polar coordinates when some function or curve looks "difficult" in $(x,y)$ and, after mapping to the $(\theta, r)$ plane, becomes an "easier" case. The best-known example is the unit circle $x^2+y^2=1$, which polar coordinates send to the segment $[0, 2\pi) \times \{1\}$. The disk $x^2+y^2\leqslant 1$ is mapped to the rectangle $[0, 2\pi) \times [0,1]$.
Addition.
Now about swapping variables. By definition, an axial symmetry is a non-identity orthogonal transformation with a line of fixed points; this line is called the symmetry axis. To obtain the point $M'$ symmetric to $M$ with respect to the axis, drop a perpendicular from $M$ to the axis and take $M'$ on this perpendicular, on the other side of the axis, at the same distance as $M$.
For example, if we take $y=x$ as the symmetry axis, then the point symmetric to $(a,b)$ is $(b,a)$.
So, on $\mathbb{R}^2$, swapping coordinates, i.e. passing from the graph $y=f(x)$ to $x=f(y)$, is exactly reflection in the line $y=x$. The same holds, of course, for $r=f(\theta)$ and $\theta=f(r)$: they are symmetric with respect to the line $r=\theta$.
Another question is what swapping the variables gives for $(x,y)$ versus $(\theta, r)$. Consider first the "polar plane". As stated above, swapping variables there means symmetry with respect to the line $r=\theta$. The latter is the well-known Archimedean spiral in the "Cartesian plane". So swapping the coordinates $\theta$ and $r$ gives, in the $(x,y)$ plane, graphs "symmetric" with respect to the spiral $r=\theta$, which is the same as $\sqrt{x^2+y^2}=\arctan \frac{y}{x}$. For example the parabola $r=\theta^2$, which is a kind of spiral in $(x,y)$, gives after swapping $\theta=r^2$, or, taking one branch, $r=\sqrt{\theta}$, again a kind of spiral in $(x,y)$.
Summing up:
*
*The parabola $y=x^2$ is axially symmetric to the square root $x=y^2$, with symmetry axis the line $y=x$.
*In "polar" language, the spiral $r=\theta^2$ is "spirally" symmetric to the spiral $\theta=r^2$, with symmetry "axis" the spiral $r=\theta$.
Second example. Let's take in the polar plane $r=\tan\theta$, i.e. the points $(\theta,\tan\theta)$. Swapping variables gives $\theta=\tan r$, i.e. the points $(\tan r,r)$. Obviously $(\theta,\tan\theta)$ is axially symmetric to $(\tan r,r)$ with respect to the symmetry axis $\theta=r$. If we now consider the corresponding points in the $(x,y)$ plane, the symmetry axis $\theta=r$ becomes a spiral, while $r=\tan\theta$ and $\theta=\tan r$ become the curves $\sqrt{x^2+y^2}=\frac{y}{x}$ and $\arctan \frac{y}{x}=\tan \sqrt{x^2+y^2}$. Obviously these $(x,y)$ curves are not axially symmetric.
If the term sounds acceptable, we can call points in the plane $(x,y)$ "spirally" symmetric when their preimages in the plane $(\theta, r)$ are axially symmetric with respect to the symmetry axis $\theta=r$.
Using this term, we can call $\sqrt{x^2+y^2}=\frac{y}{x}$ and $\arctan \frac{y}{x}=\tan \sqrt{x^2+y^2}$ "spirally" symmetric in the plane $(x,y)$.
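A small numerical sketch of the swap (my own illustration; the sampling grid is arbitrary): points of $r=\theta^2$, once their coordinates are swapped, land on $\theta=r^2$, and each curve can be pushed to the $(x,y)$ plane with $x=r\cos\theta$, $y=r\sin\theta$.

```python
import math

# Sample the curve r = theta^2 in the (theta, r) plane, swap coordinates,
# and confirm the swapped points satisfy theta = r^2.
thetas = [0.1 * k for k in range(1, 20)]
curve = [(t, t * t) for t in thetas]            # points (theta, r)
swapped = [(r, t) for (t, r) in curve]          # reflection about r = theta
assert all(abs(theta - r * r) < 1e-12 for (theta, r) in swapped)

# Push the original spiral to the (x, y) plane:
xy = [(r * math.cos(t), r * math.sin(t)) for (t, r) in curve]
print(len(xy), "points of the spiral r = theta^2 in Cartesian coordinates")
```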
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is relationship between beta and binomial distributions in Bayesian inference I came across this question:
Suppose we are giving two students a multiple-choice exam with 40 questions, where each question has four choices. We don't know how much the students have studied for this exam, but we think that they will do better than just guessing randomly. a: What is our likelihood? b: What prior should we use?
The solutions were: a: Likelihood is Binomial(40, theta); b: The conjugate prior is a beta prior.
Could someone please explain why beta is the conjugate prior of a binomial? I mean, how could one know that $\theta$ is distributed as a $Beta$? Could other distributions be used for a binomial likelihood, and what is the consequence of not using a $Beta$ prior?
Thanks in advance.
| The point is that if the prior is a beta distribution and the likelihood comes from a binomial distribution, then the posterior is again a beta distribution: a $Beta(a,b)$ prior together with $x$ successes in $n$ trials yields a $Beta(a+x,\ b+n-x)$ posterior, since the product of the two densities is proportional to $\theta^{a+x-1}(1-\theta)^{b+n-x-1}$.
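A numerical sketch of this conjugacy (my addition; the numbers $a$, $b$, $n$, $x$ are arbitrary, and the beta density is built from `math.gamma`): the normalized grid posterior coincides with the $Beta(a+x,\,b+n-x)$ density.

```python
import math

# Prior Beta(a, b); data: x successes in n trials.
a, b, n, x = 2.0, 3.0, 40, 27

def beta_pdf(t, p, q):
    B = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
    return t ** (p - 1) * (1 - t) ** (q - 1) / B

N = 2000
grid = [(k + 0.5) / N for k in range(N)]
unnorm = [beta_pdf(t, a, b) * math.comb(n, x) * t ** x * (1 - t) ** (n - x)
          for t in grid]
Z = sum(unnorm) / N                         # midpoint-rule normalizer
posterior = [u / Z for u in unnorm]
target = [beta_pdf(t, a + x, b + n - x) for t in grid]
print(max(abs(u - v) for u, v in zip(posterior, target)))  # tiny
```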
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3806844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Integrate $ \int \frac{1}{\sin^{4}x+\cos^{4}x}dx $ Show that$$ \int \frac{1}{\sin^{4}(x)+\cos^{4}(x)}dx \ = \frac{1}{\sqrt{2}}\arctan\left(\frac{\tan2x}{\sqrt{2}}\right)+C$$
I have tried using Weierstrass substitution but I can't seem to get to the answer... Should I be using the said method or is there another way I can approach the question? Since the integrand evaluates into an arctangent function I am assuming there is some trickery in the manipulation that can get me there. But I just can't seem to see it...
| \begin{align}
\int \frac{dx}{\sin^{4} x+\cos^{4} x}
&= \int \frac{dx}{\frac14(1-\cos2x)^2+\frac14(1+\cos2x)^2}\\
&= \int \frac{2dx}{1+\cos^22x}=\int\frac{2\sec^22xdx}{2+\tan^22x}\\
&=\frac1{\sqrt2}\int\frac{d(\frac{\tan2x}{\sqrt2})}{1+(\frac{\tan2x}{\sqrt2})^2}=\frac{1}{\sqrt{2}}\arctan\frac{\tan2x}{\sqrt{2}}+C
\end{align}
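A quick finite-difference check of the result (my addition; points are chosen inside $(-\pi/4,\pi/4)$, where $\tan 2x$ has no jump, so the antiderivative is smooth there):

```python
import math

# F is the claimed antiderivative; compare F' with the integrand.
def F(x):
    return math.atan(math.tan(2 * x) / math.sqrt(2)) / math.sqrt(2)

def integrand(x):
    return 1.0 / (math.sin(x) ** 4 + math.cos(x) ** 4)

h = 1e-6
errs = [abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
        for x in (0.1, 0.3, 0.6)]
print(max(errs))  # of the order of the finite-difference error
```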
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 9,
"answer_id": 0
} |
$M_n(k)$ is an indecomposable algebra Let $A$ be a $k$-algebra. The following are equivalent:
(i) $A$ is indecomposable as a $k$-algebra.
(ii) $A$ is indecomposable as an $A-A$-bimodule.
(iii) The idempotent $1_A$ is primitive in $Z(A)$.
Then the author says later that $M_n(k)$ is an indecomposable algebra. Why? Apparently, the identity matrix is not primitive (it can be written as the sum of two orthogonal idempotents).
Any help would be appreciated!
| The identity matrix isn’t primitive in $M_n(k)$, but it is primitive in $Z(M_n(k))=k$, which is what the third thing says.
Any simple $k$-algebra $A$ has to be indecomposable, of course, because a nontrivial central idempotent $e$ would create a nontrivial ideal $eA$.
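A brute-force illustration for $n=2$ (my addition, not from the original answer; the entry range is arbitrary): over small integer matrices, anything commuting with the elementary matrices $E_{12}$ and $E_{21}$ is scalar, so $Z(M_2(k)) = k\cdot I$ and $1$ is primitive there.

```python
from itertools import product

def mul(A, B):
    # 2x2 matrix product, matrices as tuples of row tuples.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

E12 = ((0, 1), (0, 0))
E21 = ((0, 0), (1, 0))
central = []
for w, x, y, z in product(range(-2, 3), repeat=4):
    A = ((w, x), (y, z))
    if mul(A, E12) == mul(E12, A) and mul(A, E21) == mul(E21, A):
        central.append(A)
print(len(central))  # 5: exactly the scalar matrices c*I, c in {-2,...,2}
```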
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Elements of quotient group with $\mathbb Z$-basis The free abelian groups $G$ and $H$ have rank $2$, and $G$ has $\Bbb Z$-basis $x, y$; if $H$ has $\Bbb Z$-basis
$$2x+y, 2x-3y$$ then what are the elements of $G/H$?
I am new to these topics, so I don't know how to start. The question is motivated by the following excerpt of the book Algebraic Number Theory by Ian Stewart and David Tall, on page 30 -
for example, if $G$ has rank $3$ and $\Bbb Z$-basis $x, y, z$; and if $H$ has $\Bbb Z$-basis
$$3x+y-2z,
4x-5y+ z,
x +7z,$$
then $|G/ H|$ is the absolute value of the determinant
$\begin{vmatrix}
3 & 1 & -2\\
4 & -5 & 1\\
1 & 0 & 7
\end{vmatrix}$,
namely 142.
I wanted to know the elements of the quotient group $G/H$; I originally asked about the book's example, but since it has a large order I have modified it to my present question.
| I assume that $G$ is a free Abelian group over a two-element set $\{x,y\}$ and $H$ is the subgroup of $G$ generated by $2x+y$ and $2x-3y$. It is easy to check that $H$ has rank $2$. The theorem on subgroups of a finitely generated free Abelian group (see, for instance, [§20, Kur]) implies that there exist bases $\{u_1,u_2\}$ and $\{v_1,v_2\}$ of the groups $G$ and $H$, such that $v_1=k_1u_1$ and $v_2=k_2u_2$ for some natural numbers $k_1|k_2$. It follows that $G/H$ is isomorphic to a direct product of cyclic groups of orders $k_1$ and $k_2$.
The numbers $k_1$ and $k_2$ can be found as follows. Let $u_1=a_{11}x+a_{12}y$, $u_2=a_{21}x+a_{22}y$, and $A=\|a_{ij}\|$, $1\le i,j\le 2$. Since $\{u_1,u_2\}$ is a basis of the group $G$, there exist integers $b_{ij}$, $1\le i,j\le 2$, such that $x=b_{11}u_1+b_{12}u_2$ and $y=b_{21}u_1+b_{22}u_2$. It follows that $BA=I$, where $B=\|b_{ij}\|$, $1\le i,j\le 2$, so the matrix $A$ is invertible.
The Cauchy-Binet formula implies that if $M$ is an integer $n\times n$ matrix and $A$ is an invertible integer $n\times n$ matrix, then the matrices $M$ and $MA$ have the same determinantal divisors $d_1,\dots, d_n$, where $d_i$ is the greatest common divisor of the minors of order $i$ of the matrix.
Since $$\begin{pmatrix}k_1 & 0\\ 0 & k_2\end{pmatrix} A=\begin{pmatrix}2 & 1\\ 2 & -3\end{pmatrix},$$
$k_1=\gcd (2,1,2,-3)=1$ and $k_1k_2=\left|\det \begin{pmatrix}2 & 1\\ 2 & -3\end{pmatrix}\right|=8$.
References
[Kur] A. G. Kurosh, Group theory, 3rd ed., Nauka, Moscow, 1967. (in Russian)
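The computation of $k_1, k_2$ can be sketched in code (my addition) using the determinantal-divisor formula from above: $k_1$ is the gcd of the entries and $k_1 k_2 = |\det|$.

```python
from math import gcd

# Invariant factors of a rank-2 integer 2x2 matrix via determinantal divisors.
def invariant_factors_2x2(M):
    (a, b), (c, d) = M
    k1 = gcd(gcd(abs(a), abs(b)), gcd(abs(c), abs(d)))
    det = abs(a * d - b * c)
    return k1, det // k1

k1, k2 = invariant_factors_2x2(((2, 1), (2, -3)))
print(k1, k2)  # 1 8, so G/H is cyclic of order 8
```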
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two inequalities for proving that there are no odd perfect numbers? Let $n$ be a natural number. Let $U_n = \{d \in \mathbb{N}| d|n \text{ and } \gcd(d,n/d)=1 \}$ be the set of unitary divisors, $D_n$ be the set of divisors and $S_n=\{d \in \mathbb{N}|d^2 | n\}$ be the set of square divisors of $n$.
The set $U_n$ is a group with $a\oplus b := \frac{ab}{\gcd(a,b)^2}$. It operates on $D_n$ via:
$$ u \oplus d := \frac{ud}{\gcd(u,d)^2}$$
The orbits of this operation "seem" to be
$$ U_n \oplus d = d \cdot U_{\frac{n}{d^2}} \text{ for each } d \in S_n$$
From this conjecture it follows (also one can prove this directly since both sides are multiplicative and equal on prime powers):
$$\sigma(n) = \sum_{d\in S_n} d\sigma^*(\frac{n}{d^2})$$
where $\sigma^*$ denotes the sum of unitary divisors.
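The displayed identity is easy to test numerically (my addition; the naive $O(n)$ divisor sums are just for illustration):

```python
from math import gcd

# Check sigma(n) = sum over d^2 | n of d * sigma*(n / d^2) for a range of n.
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def sigma_star(n):
    # Sum of unitary divisors: d | n with gcd(d, n/d) = 1.
    return sum(d for d in range(1, n + 1)
               if n % d == 0 and gcd(d, n // d) == 1)

def rhs(n):
    return sum(d * sigma_star(n // (d * d))
               for d in range(1, n + 1) if d * d <= n and n % (d * d) == 0)

checked = all(sigma(n) == rhs(n) for n in range(1, 301))
print(checked)  # True
```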
Since $\sigma^*(k)$ is divisible by $2^{\omega(k)}$ if $k$ is odd, where $\omega(k)$ counts the number of distinct prime divisors of $k$, for an odd perfect number $n$ we get (let $n$ now be an odd perfect number):
$$2n = \sigma(n) = \sum_{d \in S_n} d \sigma^*(\frac{n}{d^2}) = \sum_{d \in S_n} d 2^{\omega(n/d^2)} k_d $$
where $k_d = \frac{\sigma^*(n/d^2)}{2^{\omega(n/d^2)}}$ are natural numbers.
Let $\hat{d}$ be the largest square divisor of $n$. Then:
$\omega(n/d^2)\ge \omega(n/\hat{d}^2)$.
Hence we get:
$$2n = 2^{\omega(n/\hat{d}^2)} \sum_{d \in S_n} d l_d$$
for some natural numbers $l_d$.
If the prime $2$ does not divide the prime power $2^{\omega(n/\hat{d}^2)}$, we must have $\omega(n/\hat{d}^2)=0$, hence $n=\hat{d}^2$ is a square number, which contradicts Euler's theorem on odd perfect numbers.
So the prime $2$ must divide the prime power $2^{\omega(n/\hat{d}^2)}$ and we get:
$$n = 2^{\omega(n/\hat{d}^2)-1} \sum_{d \in S_n} d l_d$$
with $l_d = \frac{\sigma^*(n/d^2)}{2^{\omega(n/d^2)}}$. Hence the odd perfect number satisfies:
$$n = \sum_{d^2|n} d \frac{\sigma^*(n/d^2)}{2^{\omega(n/d^2)}}=:a(n)$$
Hence an odd perfect number satisfies:
$$n = a(n)$$
Edit:
This equation is wrong for odd perfect numbers.
So my idea was to study the function $a(n)$, which is multiplicative on odd numbers, on the right hand side and what properties it has to maybe derive insights into odd perfect numbers.
Conjecture: For all odd $n \ge 3$ we have $a(n)<n$. This would prove that there exists no odd perfect number.
This conjecture could be proved as follows:
Since $a(n)$ is multiplicative, it is enough to show that for an odd prime power $p^k$ we have
$$a(p^k) < p^k$$
The values of $a$ at prime powers are not difficult to compute and they are:
$$a(p^{2k+1})= \frac{p^{2(k+1)}-1}{2(p-1)}$$
and
$$a(p^{2k}) = \frac{p^{2k+1}+p^{k+1}-p^k-1}{2(p-1)}$$
However, I am not very good at proving inequalities, so:
If someone has an idea how to prove the following inequalities for odd primes $p$ that would be very nice:
$$p^{2k+1} > \frac{p^{2(k+1)}-1}{2(p-1)}, \text{ for all } k \ge 0$$
and
$$p^{2k} > \frac{p^{2k+1}+p^{k+1}-p^k-1}{2(p-1)}, \text{ for all } k \ge 1$$
Thanks for your help!
| *
*$p^{2k+1} > \dfrac{p^{2(k+1)} - 1}{2(p-1)}$ is equivalent to $(p-2)p^{2k+1} + 1 > 0$, which holds for $p \ge 2$ and $k \ge 0$
*$p^{2k} > \dfrac{p^{2k+1} + p^{k+1} - p ^ k - 1}{2(p-1)}$ is equivalent to $(p^k-1)((p-2)p^k-1) > 0$, which holds for $p > 2$ and $k \ge 1$
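A sanity check of both inequalities and their factored forms (my addition; exact rational arithmetic via `fractions`, sampled over small odd primes and exponents):

```python
from fractions import Fraction

# a(p^k) from the two formulas in the question.
def a_odd(p, k):    # a(p^(2k+1))
    return Fraction(p ** (2 * (k + 1)) - 1, 2 * (p - 1))

def a_even(p, k):   # a(p^(2k))
    return Fraction(p ** (2 * k + 1) + p ** (k + 1) - p ** k - 1, 2 * (p - 1))

ok = True
for p in (3, 5, 7, 11, 13):
    for k in range(0, 7):
        ok &= p ** (2 * k + 1) > a_odd(p, k)
        ok &= (p - 2) * p ** (2 * k + 1) + 1 > 0
        if k >= 1:
            ok &= p ** (2 * k) > a_even(p, k)
            ok &= (p ** k - 1) * ((p - 2) * p ** k - 1) > 0
print(ok)  # True
```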
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finding the limit: $\lim_{x\to \infty}\frac{1}{2}x\sin {\frac{180(x-2)}{x}}$ While investigating a problem, I came across a function: $$f(x) = \frac{1}{2}x\sin {\frac{180(x-2)}{x}}$$ When looking at the function in Desmos (I was checking my proof), I discovered that $$\lim_{x\to \infty}\frac{1}{2}x\sin {\frac{180(x-2)}{x}} = \pi$$ I double-checked Wolframalpha, and this limit is true. The only issue is that I can't seem to prove it by hand, and I'm really interested as to how $\pi$ pops out of nowhere. Please note I am working in degrees, so it is not 180 radians in the sin function. I would really appreciate it if someone could explain a solution.
| You really shouldn't work in degrees; more specifically, the sin function itself is defined (despite what you might have learned in high school) with 'radian' arguments. Formulae like $e^{ix}=\cos x+i\sin x$, or $\frac{d}{dx}\sin x=\cos x$, rely on it. There's another formula that relies on it that's the critical one here: $\lim_{x\to 0}\frac{\sin x}x=1$. (Incidentally, another way of thinking about this might be that since angles are 'dimensionless', unlike length or spans of time or mass, $180^\circ$ is literally just a fancy way of writing '$\pi$')
Now, writing your function 'properly', you have $f(x)=\frac12x\sin\left(\pi(1-\frac2x)\right)$ $= \frac12 x\sin(\pi-\frac{2\pi}{x})$. Using the symmetry of the $\sin$ function, this is equal to $\frac12x\sin(\frac{2\pi}x)$. Now, we can substitute $y=\frac1x$; taking the limit as $x\to\infty$ is the same as taking the limit as $y\to 0$ (technically only from positive $y$, but that's moot here), and so your limit is equal to $\frac12\lim_{y\to 0}\dfrac{\sin(2\pi y)}{y}$. But $\lim_{y\to 0}\frac{\sin(ay)}y$ $= a\lim_{y\to 0}\frac{\sin(ay)}{ay}$ $=a$; this gives your limit as $\frac12\cdot2\pi=\pi$.
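A numerical confirmation of the limit (my addition; degrees are converted with `math.radians`, matching the rewrite above):

```python
import math

# f is the function from the question, with the sine taken in degrees.
def f(x):
    return 0.5 * x * math.sin(math.radians(180.0 * (x - 2.0) / x))

for x in (1e2, 1e4, 1e6):
    print(x, f(x))
print(abs(f(1e6) - math.pi))  # very small
```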
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Eigenvectors of action span the representation Let $V$ be a complex finite dimensional vector space and $\rho:S_3\to \text{GL}(V)$ be a representation of the symmetric group $S_3$. Let $A_3$ be the alternating subgroup of $S_3$; let $\tau$ be a generator of $A_3$ (since $A_3$ is cyclic.). I want to prove that $V$ is spanned by the eigenvectors of the action of $\tau$ on $V$, and that their eigenvalues are powers of $\omega=\exp(2\pi i/3)$.
I have no idea of how to do this. I think it has to do with the fact that $A_3$ is Abelian, and thus the actions are G-module homomorphisms. Then some application of Schur's Lemma might follow, but I am not sure of how to proceed.
| Induction on $\dim(V)$: since $S_3$ is finite, there is a Hermitian inner product on $V$ invariant under the action of $S_3$. Let $c$ be an eigenvalue of $\rho(\tau)$ (one exists because $V$ is a finite dimensional complex vector space), and let $V_c$ be the associated eigenspace. The orthogonal complement $V'$ of $V_c$ is invariant under $\rho(\tau)$, so if $V_c\neq V$, apply the inductive hypothesis to $V'$. Finally, since $\tau^3=e$, every eigenvalue $c$ satisfies $c^3 = 1$, so $c$ is a power of $\omega$.
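A concrete instance (my addition, using the permutation representation of $S_3$ on $\Bbb C^3$ rather than a general $V$): $\tau$ acts by cyclically shifting coordinates, the vectors $v_k = (1, \omega^k, \omega^{2k})$ are eigenvectors, and the eigenvalues are cube roots of unity.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)

def tau(v):
    # Cyclic shift e1 -> e2 -> e3 -> e1.
    return (v[2], v[0], v[1])

eigenvalues = []
for k in range(3):
    v = (1, w ** k, w ** (2 * k))
    lam = tau(v)[0] / v[0]
    assert all(abs(tau(v)[i] - lam * v[i]) < 1e-12 for i in range(3))
    assert abs(lam ** 3 - 1) < 1e-12      # a cube root of unity
    eigenvalues.append(lam)
print(eigenvalues)  # the three powers of w, in some order
```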
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
recurrence relation where $c_n = c_{n-1} + 2c_{n-2}$
A sequence $(c_n)$ is defined recursively as follows: $c_0 = 1, c_1 = 1, $ and $c_n = c_{n-1} + 2c_{n-2}$ for $n\geq 2$. We use $[x^n]g(x)$ to denote the coefficient of $x^n$ of the polynomial $g(x).$ Show that $c_{2n} = [x^{2n}] \dfrac{1}2\left(\dfrac{1}{1-x-2x^2}+\dfrac{1}{1+x-2x^2}\right)$ and that $\sum_{n\geq 0} c_{2n}x^n = \dfrac{1-2x}{1-5x+4x^2}.$ From this, one can deduce that $c_{2n} = 5c_{2n-2} - 4c_{2n-4}.$ Obtain a similar equation for $c_{2n}, c_{2n-4}$ and $c_{2n-8}.$
I know that $\dfrac{1}2\left(\dfrac{1}{1-x-2x^2}+\dfrac{1}{1+x-2x^2}\right) = \dfrac{1-2x^2}{1-5x^2+4x^4},$ and so if I show that $c_{2n} = [x^{2n}] \dfrac{1-2x^2}{1-5x^2+4x^4},$ I can replace $x^2$ with $x$ and get that $\sum_{n\geq 0} c_{2n}x^n = \dfrac{1-2x}{1-5x+4x^2}$. But I'm not sure how to show that $c_{2n} = [x^{2n}] \dfrac{1-2x^2}{1-5x^2+4x^4}.$ I don't think I'll need to compute the exact coefficient, and it does not seem useful to manipulate the recurrence by substituting $2n$ for $n$. I tried showing that $(1-5x^2+4x^4) \sum_{n\geq 0} c_{2n} x^{2n} = 1-2x^2.$ Matching coefficients gives $c_0+(c_2-5c_0)x^2 + \sum_{n\geq 2} (c_{2n}-5c_{2n-2}+4c_{2n-4})x^{2n} = 1-2x^2,$ so it seems I'd need to prove something like $c_{2n}-5c_{2n-2}+4c_{2n-4}=0.$ I think figuring out how to come up with $\dfrac{1-2x}{1-5x+4x^2}$ should help me obtain a similar equation relating $c_{2n}, c_{2n-4}, c_{2n-8}.$
| Using generating functions, I obtained the expression for $G(z) = \sum_n c_n z^n$:
$$
G(z) = \frac{c_0(1-z)+c_1z}{(1+z)(1-2z)}
$$
Now expand the expression with partial fractions; on the RHS you will obtain two geometric series of the form
$$
G(z) = \lambda_1\sum_{k=0}^{\infty}(-1)^kz^k + \lambda_2\sum_{k=0}^{\infty}s_1^kz^k
$$
By taking the coefficient of the term $z^n$ you will get your closed-form expression. Here you will need to find the constants $\lambda_1, \lambda_2$ using partial fractions, and $s_1=2$ by expanding $\frac{1}{1-2z}$ (e.g. via the generalized binomial series). Can you handle it from here?
EDIT: In the partial fraction step it's better to group $c_0 + (c_1-c_0)z$
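A check of the resulting closed form (my computation: with $c_0=c_1=1$, partial fractions give $\lambda_1=\tfrac13$, $\lambda_2=\tfrac23$, $s_1=2$, hence $c_n=(2^{n+1}+(-1)^n)/3$):

```python
# c_n by the recurrence versus the closed form (2^(n+1) + (-1)^n) / 3.
def c_list(n_max):
    c = [1, 1]
    for n in range(2, n_max + 1):
        c.append(c[n - 1] + 2 * c[n - 2])
    return c

c = c_list(30)
closed = [(2 ** (n + 1) + (-1) ** n) // 3 for n in range(31)]
print(c == closed)  # True
# The even-index recurrence deduced in the question also holds:
print(all(c[2 * n] == 5 * c[2 * n - 2] - 4 * c[2 * n - 4]
          for n in range(2, 15)))  # True
```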
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is this discrete analogue of Fatou's lemma valid I was curious if this discrete analog of Fatou's lemma is valid:
\begin{align}
\sum_{j=1}^\infty \liminf_{k \rightarrow\infty} a_j(k) \leq \liminf_{k \rightarrow\infty} \sum_{j=1}^\infty a_j(k) ,
\end{align}
where $a_j(k)$ is a doubly indexed sequence of real numbers.
Does it hold in the general real case? What if $a_j(k) \geq 0 $, does it hold then? Thanks to all helpers.
| An infinite sum can be realized as an integral with respect to counting measure on the integers, so the inequality holds in the non-negative case. Fatou's lemma requires non-negativity, and the inequality is false without it.
$a_j(k)=-1$ for $j=k$ and $0$ for $j \neq k$ gives a counter-example.
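A truncated numerical version of this counterexample (my addition; the truncation levels stand in for "large $k$"): every column $a_j(\cdot)$ is eventually $0$, so each liminf is $0$, while every row sums to $-1$.

```python
def a(j, k):
    return -1 if j == k else 0

N = 50
# For fixed j, a_j(k) = 0 once k > j, so liminf_k a_j(k) = 0:
liminfs = [min(a(j, k) for k in range(j + 1, j + 20)) for j in range(1, N)]
lhs = sum(liminfs)
# For fixed k, the series over j sums to -1:
row_sums = [sum(a(j, k) for j in range(1, 10 * N)) for k in range(1, N)]
print(lhs, min(row_sums), max(row_sums))  # 0 -1 -1: the inequality fails
```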
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Let $ABC$ be triangle with sides that are not equal. Find point $X$ on $BC$ From following conditions. Let $ABC$ be triangle with sides that are not equal. Find point $X$ on $BC$ such that $$\frac{\text{area}\ \triangle{ABX}}{\text{area}\ \triangle{ACX}}=\frac{\text{perimeter}{\ \triangle{ABX}}}{\text{perimeter}{\ \triangle {ACX}}}$$
I am not sure how to start. Clearly the triangle $ABC$ is scalene, but what about the other triangles: is it necessary that they be scalene too? Please help me solve this. Thanks in advance
| Obviously, the stated condition means that
$\triangle ABX$ and $\triangle AXC$
must have the same radius of the inscribed circle, since area $=$ inradius $\times$ semiperimeter.
That means $AX$ must be the incircle bisector of $\triangle ABC$,
see
Incircle bisectors and related measures.
For $A$, $B$, $C$
ordered in positive (counterclockwise) direction,
$|BC|=a$, $|AC|=b$, $|AB|=c$, using expressions given in the link above,
we can find out that
\begin{align}|BX|&=\tfrac12\,a-\frac{b-c}{2a}\,\Big(b+c-\sqrt{(b+c)^2-a^2}\Big).\end{align}
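A numerical cross-check of this closed form (my sketch; the sample side lengths are arbitrary): bisect for the point $X$ on $BC$ giving equal inradii, which is equivalent to the stated ratio condition since area $=$ inradius $\times$ semiperimeter, and compare with the formula.

```python
import math

# Sample scalene triangle; B = (0, 0), C = (a, 0), A recovered from b and c.
a, b, c = 7.0, 6.0, 5.0
Ax = (a * a + c * c - b * b) / (2 * a)
Ay = math.sqrt(c * c - Ax * Ax)

def inradius(p, q, r):                  # area / semiperimeter, by Heron
    s = (p + q + r) / 2
    return math.sqrt(s * (s - p) * (s - q) * (s - r)) / s

def diff(d):                            # r(ABX) - r(ACX) for X = (d, 0)
    ax = math.hypot(Ax - d, Ay)
    return inradius(c, d, ax) - inradius(b, a - d, ax)

lo, hi = 1e-6, a - 1e-6                 # diff < 0 near B, > 0 near C
for _ in range(100):
    mid = (lo + hi) / 2
    if diff(lo) * diff(mid) <= 0:
        hi = mid
    else:
        lo = mid
d = (lo + hi) / 2
formula = a / 2 - (b - c) / (2 * a) * (b + c - math.sqrt((b + c) ** 2 - a ** 2))
print(d, formula)  # the two agree
```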
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3807936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Integrating floor functions without known limits Suppose we have$$\frac{\displaystyle\int_0^n{\lfloor x \rfloor}\,dx}{\displaystyle\int_0^n{\lbrace x \rbrace }\,dx}$$ $n \in I$
where $\lfloor \cdot\rfloor$ and $\lbrace \cdot\rbrace $ represent the floor function and fractional part function.
The usual method of splitting the function in intervals fails because the upper limit is not fixed. Is there any other way to approach this?
Edit:
I would also like to know a method to integrate $\displaystyle\int_0^n{\lbrace x \rbrace }\,dx$ without depending on floor function
| Note that
$$\int_{0}^{n}\lfloor x\rfloor\,dx=\sum_{k=0}^{n-1}k\bigg[\int_{k}^{k+1}\,dx\bigg]=\frac{n(n-1)}{2}$$
Also note that $\{x\}=x-\lfloor x\rfloor$ if $x\geq 0$. The case for $x<0$ is $\{x\}=x-\lceil x\rceil$ and is easily modified from this. So, $$\int_{0}^{n} \{x\}\,dx=\int_{0}^{n}x-\lfloor x\rfloor\,dx=\frac{n^2}{2}-\frac{n(n-1)}{2}=\frac{n}{2}$$
So I suppose to finish it off, $$\frac{\displaystyle\int_0^n{\lfloor x \rfloor}\,dx}{\displaystyle\int_0^n{\lbrace x \rbrace }\,dx}=\frac{\frac{n(n-1)}{2}}{\frac{n}{2}}=n-1$$
Furthermore, $n$ cannot be equal to $0$ in order for this formula to work.
Addressing the edit question: the graph of $y=\{x\}$ is a sawtooth, repeating the line $y=x$ on intervals of length $1$. So, $$\int_{0}^{n}\{x\}\,dx = n\int_{0}^{1}x\,dx=\frac{n}{2} $$
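A midpoint-rule check of both closed forms and of the resulting ratio $n-1$ (my addition; the step count is arbitrary):

```python
import math

def riemann(f, a, b, steps=100000):
    # Midpoint rule; midpoints avoid the jumps of floor at the integers.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

ok = True
for n in (2, 3, 5, 8):
    I_floor = riemann(math.floor, 0, n)
    I_frac = riemann(lambda x: x - math.floor(x), 0, n)
    ok &= abs(I_floor - n * (n - 1) / 2) < 1e-3
    ok &= abs(I_frac - n / 2) < 1e-3
    ok &= abs(I_floor / I_frac - (n - 1)) < 1e-2
print(ok)  # True
```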
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Spivak's Calculus Chapter $7$ $15$b) Prove $f$ is bounded below Q15) Suppose that $\phi$ is continuous and $\lim_{x \rightarrow \infty}(\frac{\phi(x)}{x^n})=0=\lim_{x \rightarrow -\infty}(\frac{\phi(x)}{x^n})$
b) Prove that if $n$ is even, then there is a number $y$ such that $y^n + \phi(y) \leq x^n + \phi(x)$ for all $x$.
I have been wrestling with the question all day but can't get the result. In fact not only did I not get the result, I'm becoming convinced it's false. For example, say $\phi(x) = \frac{1}{x}$. Then $\phi$ is continuous and satisfies the limits above, but $x^n + \frac{1}{x}$ doesn't have a lower bound because $\lim_{x \rightarrow 0^-}(x^n + \frac{1}{x}) = -\infty$. Unless $\frac{1}{x}$ is not continuous at $0$, ok, but when we say a function is continuous, don't we mean specifically it's continuous on the domain on which it's defined? I'm pretty sure I'm wrong somewhere. How do we solve this question?
| Hint : The function $$x \mapsto x^n + \phi(x)$$
is continuous and tends to $+\infty$ when $x$ tends to $\pm \infty$. It is sufficient to ensure that it has a minimum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving for $p$ binomcdf$(n,p,k)$=$x$ In Binomial Probability with Unknown p, this is done for a case in which $n=0$, which ends up being a simple solution. With $n > 0$, things aren't as simple, trying to naively directly solve gets you higher and higher power equations.
E.g. solving $\text{binomcdf}(17,p,8) = 0.3$ (the probability of 8 or fewer successes is exactly 0.3) is, written out,
$$\sum^{8}_{n=0}\binom{17}{n} p^n(1-p)^{17-n} = 0.3$$
That's a polynomial of a very high degree. What would be a good way to solve or in a numerically stable way approximate the solution to arbitrary precision of such an equation?
In particular, I'm motivated by the following problem:
Let there be $n$ members in a legislature. There are 3 measures suggested to this legislature. The first measure to pass invalidates the others. The legislature unanimously wants one measure to pass with each measure to have an equal chance of passing, but each member only has control over their own vote. In an even-numbered legislature a hung vote means a failed measure.
The members decide on a certain nash-probability $p_1$ for which the chance of passing a measure is $\frac{1}{3}$ and another $p_2$ for a chance of $\frac{1}{2}$, with the last measure always getting a unanimous vote. What are these values?
The solution would be to solve the equations
$$\text{binomcdf}(n,p_1,\lfloor\frac{n}{2}\rfloor) = \sum^{\lfloor\frac{n}{2}\rfloor}_{k=0} \binom{n}{k}p_1^k(1-p_1)^{n-k} = \frac{1}{3}$$
$$\text{binomcdf}(n,p_2,\lfloor\frac{n}{2}\rfloor) = \sum^{\lfloor\frac{n}{2}\rfloor}_{k=0} \binom{n}{k}p_2^k(1-p_2)^{n-k} = \frac{1}{2}$$
for respectively $p_1$ and $p_2$.
For odd values of $n$ in the case where the target probability is $\frac{1}{2}$, there is a simple solution, $p_2 = 0.5$, and this is the only answer.
So far I've simply applied a spreadsheet, fiddling with the value of $p_1$ until I get close to the desired chance, but I wonder if a more general method exists.
| As Felix Marin points out, $\operatorname{binomcdf}(n,p,k)-t$ changes sign between $p=0$ and $p=1$, so bracketing methods can safely be tried to get fast and guaranteed convergence. You may want to try Brent's method, Chandrupatla's method, or even the Newton-Raphson method since the derivative is known.
The only remaining issue is that you may get slow convergence if the desired value is close to the boundary, where the cdf becomes very flat. This can be handled using symmetry along with the asymptotic behavior near either $0$ or $1$. For $0<k<n-1$:
$$\operatorname{binomcdf}(n,p,k)\approx\binom nk(1-p)^{n-k},\quad p\approx1\\\operatorname{binomcdf}(n,p,k)\approx1-\binom n{k+1}p^{k+1},\quad p\approx0$$
For your example of $\operatorname{binomcdf}(17,p,8)=0.3$, these approximations can be solved to give
$$0.313\le p\le0.715$$
which brackets the more exact value $p=0.56241865$.
See here for an implementation using Chandrupatla's method.
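For reference, here is a minimal self-contained sketch using plain bisection (slower than Brent's or Chandrupatla's method, but dependency-free); `binomcdf` and `solve_p` are ad-hoc helper names:

```python
from math import comb

def binomcdf(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def solve_p(n, k, target, tol=1e-12):
    """Solve binomcdf(n, p, k) == target for p by bisection.

    The cdf is monotone decreasing in p, so [0, 1] always brackets the root."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binomcdf(n, mid, k) > target:
            lo = mid          # cdf still too large, so the root is to the right
        else:
            hi = mid
    return (lo + hi) / 2

p = solve_p(17, 8, 0.3)
print(round(p, 8))  # ~0.56241865
```

This converges in about $40$ iterations regardless of how flat the cdf is, at the cost of being slower than the superlinear methods above.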
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is the convex combination realizing the convex envelope unique? Let $F:[0,\infty) \to [0,\infty)$ be a continuous function satisfying $F(1)=0$, which is strictly increasing on $[1,\infty)$, and strictly decreasing on $[0,1]$. Suppose also that $F|_{(1-\epsilon,1+\epsilon)}$ is convex for some $\epsilon>0$. Suppose that $F$ is not affine on any subinterval.
Let $\hat F(x) = \sup \{ h(x) \mid \text{$h$ is convex on $[0, \infty)$}, h \le F \} \, $ be the convex envelope of $F$. Let $c\in (0,1)$, and suppose that $\hat F(c) < F(c)$.
Question: Let $x,y \in [0,\infty)$ and $\lambda \in [0,1]$ satisfy $c = \lambda \, x + (1-\lambda)\, y$ and $\hat F(c) = \lambda \, F(x) + (1-\lambda) \, F(y)$. Are such $x,y$ unique?
(Here is an argument for the existence of such $x$ and $y$, under slightly different conditions).
We always have
$
\hat F(c) \le \lambda \, \hat F(x) + (1-\lambda) \, \hat F(y) \le \lambda \, F(x) + (1-\lambda) \, F(y),
$
so $\hat F(c) = \lambda \, F(x) + (1-\lambda) \, F(y)$ if and only if $\hat F(x)=F(x), \hat F(y)=F(y)$, and $\hat F$ is affine on $[x,y]$.
| Take $c = 1/2$,
$$x_1=1/2 -a,\quad y_1=1/2 +a,\\ x_2=1/2 -b, \quad y_2=1/2 +b,$$
where $a<b<1/2$. You can find a function $F$ satisfying all the hypotheses such that $\hat F(c)=1-c< F(c)$, and
$$ \hat F(x) = F(x)=1-x,\quad x = x_1,x_2,y_1,y_2$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
prove the angle of sum of vectors is always in the middle For two planar vectors $v_1, v_2$, and they are in the same quadrant.
Define $\angle(v_1)$ as the angle between $v_1$ and the positive x-axis, and assume $\angle(v_1) \lt \angle(v_2)$.
Their sum $v_3 = v_1+v_2$, would this be true $\angle(v_1) \lt \angle(v_3) \lt \angle(v_2)$? can you prove it?
Here is what I have tried:
It should be easy to see $v_3$ is in the same quadrant with $v_1, v_2$, since $v_3= (a_3, b_3)=(a_1+a_2, b_1+b_2)$, which $\forall a_i$ have the same sign and $\forall b_i$ have the same sign.
Since all $v_1, v_2, v_3$ are in the same quadrant, the angle between any two of them is less than 90. The three of them would form an acute triangle.
| Hint (elaborating on Tassle's comment):
You have that
$$\tan\angle v_1=\frac{b_1}{a_1},\ \tan\angle v_2=\frac{b_2}{a_2},\ \tan\angle v_3=\frac{b_1+b_2}{a_1+a_2}.$$
Since the $\tan$ function is monotonically increasing, it suffices to show that
$$\frac{b_1}{a_1}<\frac{b_1+b_2}{a_1+a_2}<\frac{b_2}{a_2}.$$
Since we know that $b_1/a_1<b_2/a_2$ (because $\angle v_1<\angle v_2$), can you prove the above inequality? (It is the classical mediant inequality.)
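A quick numerical sanity check of the claim (the sample vectors are arbitrary first-quadrant choices), using `atan2` for the angle with the positive $x$-axis:

```python
from math import atan2

def angle(v):
    """Angle of v = (a, b) with the positive x-axis (a = x-component)."""
    return atan2(v[1], v[0])

v1, v2 = (4.0, 1.0), (1.0, 3.0)        # same (first) quadrant, angle(v1) < angle(v2)
v3 = (v1[0] + v2[0], v1[1] + v2[1])    # v3 = v1 + v2
assert angle(v1) < angle(v3) < angle(v2)
print(angle(v1), angle(v3), angle(v2))
```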
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do engineers use derivatives in discontinuous functions? Is it correct? I am a Software Engineering student and this year I learned about how CPUs work. It turns out that electronic engineers, and software engineers in my field too, use derivatives with discontinuous functions a lot. For instance, in order to calculate the optimal number of ripple adders so as to minimise the execution time of the addition process:
$$\text{ExecutionTime}(n, k) = \Delta(4k+\frac{2n}{k}-4)$$
$$\frac{d\,\text{ExecutionTime}(n, k)}{dk}=4\Delta-\frac{2n\Delta}{k^2}=0$$
$$k= \sqrt{\frac{n}{2}}$$
where $n$ is the number of bits in the numbers to add, $k$ is the number of adders in the ripple chain, and $\Delta$ is the gate delay (the time it takes a gate to operate).
Clearly you can see that the execution time function is not continuous at all because $k$ is a natural number and so is $n$.
This is driving me crazy because on the one hand I understand that I can analyse the function as a continuous one and get results in that way, and indeed I think that's what we do ("I think", that's why I am asking), but my intuition and knowledge about mathematical analysis tells me that this is completely wrong, because the truth is that the function is not continuous and will never be and because of that, the derivative with respect to $k$ or $n$ does not exist because there is no rate of change.
If someone could explain me if my first guess is correct or not and why, I'd appreciate it a lot, thanks for reading and helping!
| If you consider that $k$ and $n$ are continuous variables, you obtain a continuous function that can be differentiated and that agrees with the initial discontinuous function at the integer points. The real problem is whether the extreme point of this continuous function approximates the extreme point of the discontinuous one. The error may be evaluated with the Cauchy remainder formula for Newton or Lagrange interpolation, which has a $k!$ in the denominator and may be tolerable if $k$ is big enough.
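A small sketch comparing the continuous optimum $k=\sqrt{n/2}$ with a brute-force search over integer $k$ (helper names are made up; $\Delta$ is factored out). Because the cost is convex in $k$, the integer optimum is always a neighbour of the continuous one:

```python
from math import sqrt

def exec_time(n, k):
    """ExecutionTime / Delta = 4k + 2n/k - 4, treated as a function of real k."""
    return 4 * k + 2 * n / k - 4

for n in (16, 32, 64, 128):
    k_cont = sqrt(n / 2)                                          # continuous optimum
    k_int = min(range(1, n + 1), key=lambda k: exec_time(n, k))   # true integer optimum
    # convexity => the integer optimum is floor or ceil of the continuous one
    assert k_int in (int(k_cont), int(k_cont) + 1)
    print(n, round(k_cont, 3), k_int)
```

Note that for $n=64$ the continuous optimum is $\sqrt{32}\approx 5.66$, and it is the rounded-up neighbour $k=6$ that wins, so in practice one should check both neighbouring integers.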
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72",
"answer_count": 5,
"answer_id": 4
} |
How many $3$-digit numbers have a digit sum of $21$? I have the following question:
How many $3$-digit numbers have a digit sum of $21$?
I can count it by taking separate cases of three digit numbers such that their digit sum is $21$ & by calculating all possible numbers that can be made from each case and by adding them to get the result.
My question: Is there any direct combination formula to calculate this?
(I don't think stars and bars method works here)
| $$a+b+c=21\implies(9-a)+(9-b)+(9-c)=6$$
Apply stars and bars to $x+y+z=6$ to get $\binom{8}{2}=28$ solutions. (The digit bounds are automatic here: since the total is $6$, each of $x=9-a$, $y=9-b$, $z=9-c$ is at most $6$, so the constraints $x\le 8$ and $y,z\le 9$ never bite.)
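A brute-force check of the count, if you want to convince yourself:

```python
# count the 3-digit numbers whose digits sum to 21
count = sum(1 for n in range(100, 1000) if sum(map(int, str(n))) == 21)
print(count)  # 28
```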
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3808952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Differentiate two times a simple expression Let's say I have:
$$ y = -\frac{1}{\tan(x)} $$
How do you get a relation between $\mathrm{d}^2x$ and $\mathrm{d}^2y$ ?
What I know:
$$\mathrm{d}y = \frac{\mathrm{d}x}{\sin(x)^2} $$
| Hint: $$y^{(n)}(x)=\frac{d^ny(x)}{dx^n} \iff d^ny(x)=y^{(n)}(x)\space dx^n.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3809082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Finding a closed form expression of a sequence that is defined recursively via a definite integral Consider the following series function that is defined recursively by the following definite integral
$$
f_n(x) = \int_0^x u^n f_{n-1}(u) \, \mathrm{d}u \qquad\qquad (n \ge 1) \, ,
$$
with $f_0 (x) = \operatorname{erf}(x)$ being the error function.
Then, let us define for $n \ge 0$ the following sequence:
$$
u_n = \lim_{x\to 0} x^{-\frac{(n+1)(n+2)}{2}} f_n(x) \, .
$$
It can readily be checked that $0 < u_n < \infty$.
My questions are:
*
*Is there a way to find out the general term of the above number sequence?
*Is the series $\sum_{n=0}^{\infty} u_n$ convergent? If so, what is the limit of this series?
Any hint of help is highly appreciated and desirable.
Thanks and best,
Daddy
| Apparently you made some index mistakes...
If $$f_n(x)=u_n x^{k_n}+O\left(x^{k_n+1}\right)$$ then it can be easily seen
$$k_{n+1}=k_n+n+2$$
$$u_{n+1}=\frac{u_n}{k_n+n+2}$$
Assuming that you are using the unnormalised error function $\text{erf}(x)=x+O(x^3)$, we have the initial conditions $k_0=1$, $u_0=1$.
Solving the first recurrence relation we get $$k_n=\frac{(n+1)(n+2)}2$$
Obviously, the second recurrence relation gives $$u_n=\prod^{n-1}_{i=0}\frac1{k_i+i+2}=\prod^n_{i=1}\frac2{(i+1)(i+2)}=\frac{2^{n+1}}{(n+1)!(n+2)!}$$
Doing a little algebra one arrives at $$\sum^\infty_{n=0}u_n=\frac{I_1\left(2\sqrt2\right)}{\sqrt2}-1=1.394833...$$ where $I_1$ is the first order modified Bessel function of the first kind.
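A quick numerical check of the series value (truncating the sum):

```python
from math import factorial

# partial sums of u_n = 2**(n+1) / ((n+1)! * (n+2)!)
s = sum(2**(n + 1) / (factorial(n + 1) * factorial(n + 2)) for n in range(40))
print(round(s, 6))  # 1.394833
```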
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3809239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $P(x)=\sum_{i=0}^da_i\left(\prod_{j=i}^{d+i-1}(x+j)\right)$ is linear, what is its constant term? Question: Fix $d,m\in\mathbb{N}$ with $0\leq m\leq d$ and define
$$P(x)=\sum_{i=0}^da_i\left(\prod_{j=i}^{d+i-1}(x+j)\right),$$
where each $a_i$ is a constant, $a_m=0$. Suppose that, after expansion, $P(x)=c-x$ for some constant $c$. Show that $c=\frac{m}{d}-d$.
I obtained a rough solution by evaluating $P\left(-(d+k)\right)$ for each $0\leq k\leq m$, which yields $m+1$ linear relations on $a_0,\dots, a_{m-1}$ and $c$, from which one can then solve by scaling and subtracting. However, I am hoping for a cleaner, more succinct answer (in fact, one might even be able to use the above approach in a neater way than I did).
| Some thoughts
Clearly, $d\ge 1$.
Let $x = -(m-1), -m, -(m+1), \cdots, -(d-1)$ respectively to get
\begin{align}
P(1-m) &= c + m - 1, \\
P(-m) &= c + m, \\
P(-m - 1) &= c + m+1, \\
&\cdots\cdots\\
P(-d+1) &= c + d - 1.
\end{align}
Then we have (weighted sum of the equations above)
$$\sum_{k=0}^{d-m} P(-m - k + 1)(-1)^k\binom{d+1}{k} = \sum_{k=0}^{d-m} (c + m + k - 1) (-1)^k\binom{d+1}{k}. \tag{1}
$$
Claim 1: It holds that
$$\sum_{k=0}^{d-m} P(-m - k + 1)(-1)^k\binom{d+1}{k} = 0.$$
(The proof is given at the end.)
By (1) and Claim 1, we have
$$\sum_{k=0}^{d-m} (c + m + k - 1) (-1)^k\binom{d+1}{k} = 0$$
which results in
$$c = -m + 1 - \frac{\sum_{k=0}^{d-m} k (-1)^k\binom{d+1}{k}}{\sum_{k=0}^{d-m} (-1)^k\binom{d+1}{k}}
= -m + 1 - (d+1)\frac{d-m}{d} = \frac{m}{d} - d$$
where we have used the identity (see 26.3.10 in https://dlmf.nist.gov/26.3)
$$(-1)^N \binom{M}{N} = \sum_{k=0}^N (-1)^k \binom{M+1}{k}, \quad 0\le N \le M$$
to get
$$\sum_{k=0}^{d-m} (-1)^k\binom{d+1}{k} = (-1)^{d-m}\binom{d}{d-m}$$
and
$$\sum_{k=0}^{d-m} k (-1)^k\binom{d+1}{k} = (d+1)\frac{d-m}{d}(-1)^{d-m}\binom{d}{d-m}. \tag{2}$$
(The proof of (2) is given at the end.)
$\phantom{2}$
Proof of Claim 1: We have
\begin{align}
&\sum_{k=0}^{d-m} P(-m - k + 1)(-1)^k\binom{d+1}{k}\\
=\ & \sum_{k=0}^{d-m} \sum_{i=0}^d a_i\left(\prod_{j=i}^{d+i-1}(-m - k + 1+j)\right)(-1)^k\binom{d+1}{k}\\
=\ & \sum_{i=0}^d a_i \sum_{k=0}^{d-m} \left(\prod_{j=i}^{d+i-1}(-m - k + 1+j)\right)(-1)^k\binom{d+1}{k}\\
=\ & \sum_{i=0}^d a_i A_i
\end{align}
where
$$A_i = \sum_{k=0}^{d-m} \left(\prod_{j=i}^{d+i-1}(-m - k + 1+j)\right)(-1)^k\binom{d+1}{k}.$$
It suffices to prove that $A_i = 0$ for all $i \ne m$.
We split into three cases:
*
*$m = d$: For $0\le i < m$, we have
$$A_i = \prod_{j=i}^{d+i-1}(-d + 1+j) = 0.$$
*$m = 0$: For $1\le i\le d$, noting that $\prod_{j=i}^{d+i-1}(-m - k + 1+j) = 0$ for $i + 1 \le k \le d$, we have
\begin{align}
A_i &= \sum_{k=0}^{d} \left(\prod_{j=i}^{d+i-1}( - k + 1+j)\right)(-1)^k\binom{d+1}{k}\\
&= \sum_{k=0}^i \left(\prod_{j=i}^{d+i-1}( - k + 1+j)\right)(-1)^k\binom{d+1}{k}\\
&= \sum_{k=0}^i \frac{(d+i-k)!}{(i-k)!}(-1)^k\binom{d+1}{k}\\
&= d! \sum_{k=0}^i (-1)^k \binom{d+1}{k} \binom{d+i-k}{i-k}\\
&= 0
\end{align}
where we have used the identity (see @arindam mitra's answer:
Prove combinatorial identity using inclusion/exclusion principle)
$$\sum_{k=0}^M (-1)^k \binom{N}{k}\binom{N + r - k}{M - k} = 0, \quad 0 \le r \le M-1$$
to get (let $M = i$, $N = d + 1$, $r = i - 1$)
$$\sum_{k=0}^i (-1)^k \binom{d+1}{k} \binom{d+i-k}{i-k} = 0.$$
*$1 \le m \le d - 1$: If $0\le i < m$, clearly $\prod_{j=i}^{d+i-1}(-m - k + 1+j) = 0$ and hence $A_i = 0$.
If $m < i \le d$, I $\color{blue}{\textrm{GUESS}}$ $A_i = 0$.
Remark: With the help of Maple, $\color{blue}{\textrm{it appears that}}$
$$\sum_{k=0}^{d-m} \Big(\prod_{j=i}^{d+i-1}(-m - k + 1+j)\Big)(-1)^k\binom{d+1}{k}
= (-1)^{d-m}\binom{d}{m} \prod_{0\le k \le d, \, k\ne m} (i-k). \tag{3}$$
How to prove it?
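In place of the Maple computation, here is a numerical check of the guessed closed form for small $d$ (`lhs`/`rhs` are ad-hoc names):

```python
from math import comb, prod

def lhs(d, m, i):
    return sum((-1)**k * comb(d + 1, k)
               * prod(-m - k + 1 + j for j in range(i, d + i))   # j = i, ..., d+i-1
               for k in range(d - m + 1))

def rhs(d, m, i):
    return (-1)**(d - m) * comb(d, m) * prod(i - k for k in range(d + 1) if k != m)

for d in range(1, 7):
    for m in range(d + 1):
        for i in range(d + 1):
            assert lhs(d, m, i) == rhs(d, m, i)
print("identity verified for d <= 6")
```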
$\phantom{2}$
Proof of (2): If $d-m = 0$, it is obvious. If $d-m\ge 1$, we have
\begin{align}
\sum_{k=0}^{d-m} k (-1)^k\binom{d+1}{k}
&= \sum_{k=1}^{d-m} k (-1)^k\binom{d+1}{k}\\
&= (d+1) \sum_{k=1}^{d-m} (-1)^k \binom{d}{k-1}\\
&= -(d+1) \sum_{j=0}^{d-m-1} (-1)^j \binom{d}{j}\\
&= -(d+1)(-1)^{d-m-1}\binom{d-1}{d-m-1}\\
&= (d+1)\frac{d-m}{d}(-1)^{d-m}\binom{d}{d-m}.
\end{align}
We are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3809530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Locally countable implies countably tight?
*
*Locally countable: The space has a base of open sets, each with countable cardinality.
*Countably tight: for each A⊆X and each $p\in\overline{A}$ there is a countable subset $D\subseteq A$ such that $p\in \overline D$.
Does locally countable imply countably tight?
| Yes, if $p \in \overline{A}$ and $U$ is open containing $p$, then $p \in \overline{U \cap A}$. So if $U$ is chosen to be countable, $U \cap A$ is as required for showing countable tightness.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3809662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to prove the union bound with Markov's inequality? We have events $B_1$, $B_2$, $\dots$, $B_t$. Prove $\Pr\left(\bigcup_{i=1}^t B_i\right) \le \sum_{i=1}^{t}\Pr(B_i)$.
Wikipedia proves it by induction and I also understand this inequality intuitively: when summing over all the events you're counting the overlapping events multiple times. But I'm not sure how to prove this using Markov's inequality. Can someone give some insights into how to prove this? Thanks so much!
| Hint: take $X = 1_{B_1} + \dots + 1_{B_t}$. Note that $\bigcup_{i=1}^t B_i = \{X \ge 1\}$, so use Markov's inequality to estimate the probability of the latter event.
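To see the whole chain $\Pr\left(\bigcup_i B_i\right)=\Pr(X\ge 1)\le E[X]=\sum_i \Pr(B_i)$ on a concrete finite example (the sample space and events below are arbitrary):

```python
from fractions import Fraction

omega = range(8)                       # uniform sample space, P({w}) = 1/8
B = [{0, 1, 2}, {2, 3}, {3, 4, 5}]     # three overlapping events

P = lambda A: Fraction(len(A), len(omega))
X = {w: sum(w in Bi for Bi in B) for w in omega}   # X = sum of indicators

p_union = P({w for w in omega if X[w] >= 1})       # P(union) = P(X >= 1)
EX = sum(Fraction(X[w], len(omega)) for w in omega)
assert EX == sum(P(Bi) for Bi in B)                # linearity: E[X] = sum P(B_i)
assert p_union <= EX                               # Markov: P(X >= 1) <= E[X]
print(p_union, EX)  # 3/4 1
```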
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3809886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can a Normal Distribution be specified by its mean and cubic deviation? I recently read that:
a normal distribution is completely specified by its mean and standard deviation.
That makes a lot of sense. But I was wondering isn't it also true that it could be completely specified by its mean and the cubic deviation? Or quadratic one? Or even the mean deviation?
If we consider the standard deviation formula:
$$\sigma = (\frac{1}{N} \sum_i \lvert x_i - avg \rvert^\color{red}{p})^{1/\color{red}{p}}$$
Then:
*
*p = 1: mean deviation.
*p = 2: standard deviation.
*p = 3: cubic deviation. I just made this name up.
*p = 4: quartic deviation. I just made this name up.
*p = 2.3456789: any positive non-integer value of p.
Can any of those deviations completely specify a normal distribution, in addition to the mean value of course?
| From the Wikipedia entry, we know that the $p$-th central absolute moment of a $N(\mu,\sigma^2)$ random variable $X$ is
$$E[|X-\mu|^p]=\sigma^p\frac{2^{p/2}\Gamma(\frac{p+1}2)}{\sqrt\pi}.$$
If we know this number, and know $p$, we can determine $\tau=\sigma^p$
and then determine $\sigma=\tau^{1/p}$.
It might seem paradoxical, but even when $p=1$ this is sufficient to yield $\sigma$.
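A deterministic round-trip check of this inversion using the closed form (no simulation needed):

```python
from math import gamma, pi, sqrt

def abs_moment(sigma, p):
    """E|X - mu|^p for X ~ N(mu, sigma^2), from the closed form above."""
    return sigma**p * 2**(p / 2) * gamma((p + 1) / 2) / sqrt(pi)

def sigma_from_moment(m, p):
    """Invert the formula: recover sigma from the p-th central absolute moment."""
    tau = m * sqrt(pi) / (2**(p / 2) * gamma((p + 1) / 2))   # tau = sigma**p
    return tau**(1 / p)

for p in (1.0, 2.0, 3.0, 2.3456789):
    m = abs_moment(2.5, p)
    assert abs(sigma_from_moment(m, p) - 2.5) < 1e-12
```

In particular, $p=2$ reproduces $\sigma^2$ exactly, and even $p=1$ (the mean deviation, which equals $\sigma\sqrt{2/\pi}$) suffices to recover $\sigma$.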
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3809982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
random walk inside a square (probability of escape before returning) My question comes from Exercise 9.7. of the book "Markov Chains and Mixing Times (2nd edition)" written by David A.Levin and Yuval Peres. Specifically, let $B_n$ be the subset of $\mathbb{Z}^2$ in the box of side length $2n$ centered at $0$. Let $\partial B_n$ be the set of vertices along the perimeter of the box. The problem statement asks us to show that for simple random walk on $B_n$, $$\lim_{n\to \infty} \mathbb{P}_0\{\tau_{\partial B_n} < \tau^+_0\} = 0.$$ I think it is intuitively clear but have no idea on how this can be justified analytically. Since this problem is in Chapter 9, I guess the author wants us to use the theory of network reduction laws/rules developed in the context of "random walks on networks". Any hint or help are greatly appreciated!
| I think I have something, if I didn't make any mistake.
$\mathbb{P}_0\{\tau_{\partial B_n} < \tau^+_0\} = \sum_{k \in \mathbb{N} } \mathbb{P}\{\tau_{\partial B_n} < \tau^+_0 |\tau^+_0 =k \} \mathbb{P}\{ \tau^+_0 =k \} = \sum_{k \geq n } \mathbb{P}\{\tau_{\partial B_n} < \tau^+_0 |\tau^+_0 =k \} \mathbb{P}\{ \tau^+_0 =k \} \leq \sum_{k \geq n } \mathbb{P}\{ \tau^+_0 =k \} = \mathbb{P}\{ \tau^+_0 \geq n \}$
Overall $\mathbb{P}_0\{\tau_{\partial B_n} < \tau^+_0\} \leq \mathbb{P}\{ \tau^+_0 \geq n \}$, which says that if the random walk touches the boundary of $B_n$ and then comes back to $0$, the first return to $0$ takes at least $n$ steps.
Obviously $\mathbb{P}\{ \tau^+_0 \geq n \} \underset{n \to \infty}{\to} 0$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3810184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What category is the universal property of the Free Group a diagram in? wikipedia says that the free group is defined by a universal property:
The free group $F_S$ is the universal group generated by the set $S$. This can be formalized by the following universal property: given any function $f$ from $S$ to a group $G$, there exists a unique homomorphism $φ: F_S → G$ making the following diagram commute (where the unnamed mapping denotes the inclusion from $S$ into $F_S$):
My question is, in what category is this a diagram? Is it in Grp or Set? Either way I'm confused, because $S$ is not a group, suggesting it's in Set, but the uniqueness of $\phi$ only holds for homomorphisms, not general functions, suggesting this is in Grp.
| As the definition mentions, $f$ and the unnamed inclusions are just functions while $\varphi$ is a group homomorphism. Hence the diagram is not in $\mathbf{Grp}$, nor actually in $\mathbf{Set}$ (in the sense that the diagram in $\mathbf{Set}$ would not force $\varphi$ to be a group homomorphism).
The construction gives in fact a functor from $\mathbf{Set}$ to $\mathbf{Grp}$ assigning to each set $S$ the free group $F_S$, and to each function $g:S\to T$ the morphism $\varphi_g:F_S \to F_T$ associated to the map $f=\iota_T\circ g:S\to F_T$ by the universal property (where $\iota_T:T\to F_T$ is the inclusion).
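A toy illustration of the extension, with reduced words over $S=\{a,b\}$ and the target group $G=(\mathbb{Z},+)$ (the values of $f$ are arbitrary):

```python
# Free-group words as tuples of (generator, exponent +/-1); stack-based
# reduction cancels adjacent inverse pairs completely.
def reduce_word(word):
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()          # cancel g * g^-1
        else:
            out.append((g, e))
    return tuple(out)

# f : S -> G for the target group G = (Z, +); phi is its unique extension.
f = {'a': 2, 'b': 3}
def phi(word):
    return sum(e * f[g] for g, e in word)

w1 = (('a', 1), ('b', 1), ('a', -1))   # the word a b a^-1
w2 = (('b', -1), ('a', 1))
# phi is well defined on reduced words and is a group homomorphism:
assert phi(reduce_word(w1)) == phi(w1) == f['b']   # in Z: 2 + 3 - 2 = 3
assert phi(w1 + w2) == phi(w1) + phi(w2)
```

Note the diagram's data: `f` is just a function on the set $S$, while `phi` respects the group operations of $F_S$ and $G$.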
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3810300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
Minimum distance among a set of points in COVID-19 times Students going back to schools made me think of the following (non-trivial, I think) problem.
Problem: How to arrange students in a classroom so that they keep a distance of $1.5$ meters from each other in such a way that there is room for as many students as possible?
Formally, the problem consists of solving for $n$ points
$$p_0=(0,0), p_1=(x_1, y_1), \ldots p_{n-1}=(x_{n-1}, y_{n-1})$$
where $n$ is given
$$\begin{array}{ll} \text{minimize} & \displaystyle\sum_{i \neq j} \| p_i - p_j \|_2\\ \text{subject to} & \|p_i - p_j\|_2 \geq 1.5 \qquad \forall i \neq j\end{array}$$
How would you face this problem? Are there algorithms to solve it?
(I am using the Euclidean norm, so it is a non-linear problem)
| The usual formulation of this problem is that you are given a fixed shape of classroom and are supposed to maximize $n$. Imagine you extended the classroom by $0.75$ metres in each direction, then you would be able to place a circle of radius $0.75$ on each student without the circles overlapping. Vice versa, any nonoverlapping packing of circles of radius $0.75$ in the extended classroom corresponds to a valid positioning of students in the original classroom.
Alternatively, your task might be to fit $n$ nonoverlapping circles of radius $r$ into a given shape, and your goal is to make $r$ as large as possible.
This type of problem is known as circle packing problems. It's popular in recreational mathematics, and I believe the usual approach to find packings is still just to try a bunch of arrangements of circles (either manually or by computer) until you get a nice one. There's no known way to verify optimality of a packing in short time.
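As a cheap baseline (not optimal), placing students on a square grid with $1.5$ m spacing gives a valid arrangement, and hence a lower bound on the capacity of a rectangular room; the room dimensions below are made up:

```python
def grid_capacity(width, height, d=1.5):
    """Students on a square grid with spacing d: a valid (not optimal) arrangement."""
    return (int(width // d) + 1) * (int(height // d) + 1)

print(grid_capacity(6.0, 4.5))  # 5 * 4 = 20 students in a 6 m x 4.5 m room
```

Hexagonal (triangular-lattice) packings typically beat this grid by roughly $15\%$ in large rooms, which is why the hard part is proving optimality rather than finding decent arrangements.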
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3810415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expected number of days before all magical seeds become apple trees This is a question that I came across recently.
At the end of day $0$, $6$ magical seeds are planted. On each day following it, each seed has a chance to magically transform into an apple tree with a probability of $\frac{1}{2}$. The outcomes of the seeds are independent of each other.
What is the expected number of days for all six seed to have become apple trees?
My solution:
$E(n)$ - number of expected days from the point that there is only n seed(s) left.
So, $E(1)$ - number of expected days for the last seed to grow.
$E(1) = 1 + \frac{1}{2} E(1) \,or \,E(1) = 2$. This we anyway know from a coin flip analogy.
$E(2) = 1 + \frac{2}{4} E(1) + \frac{1}{4} E(2) \,or\, E(2) = \frac{8}{3}$.
This comes from the fact that if at the end of a day, two seeds are left, I have 3 possible events - i) both seeds become trees the next day ($+1$ day). ii) one seed becomes tree and one seed is left (probability $\frac{2}{4}$). So we further add expected number of days for $E(1)$. iii) None of the seeds become a tree (probability $\frac{1}{4}$). So we add further expected number of days for $E(2)$.
Similarly, $E(3) = 1 + \frac{3}{8} E(1) + \frac{3}{8} E(2) + \frac{1}{8} E(3)$
$E(4) = 1 + \frac{4}{16} E(1) + \frac{6}{16} E(2) + \frac{4}{16} E(3) + \frac{1}{16} E(4)$
$E(5) = 1 + \frac{5}{32} E(1) + \frac{10}{32} E(2) + \frac{10}{32} E(3) + \frac{5}{32} E(4) + \frac{1}{32} E(5)$
$E(6) = 1 + \frac{6}{64} E(1) + \frac{15}{64} E(2) + \frac{20}{64} E(3) + \frac{15}{64} E(4) + \frac{6}{64} E(5) + \frac{1}{64} E(6)$
This gives me an answer of $E(6) = \frac{55160}{13671}$. However the answer given is $(\log_2 6)$. I do not understand how the answer got into $\log$. When I calculate both, they are not same values.
Also, are there more generic and faster methods that I could use to get to the answer?
| Let $X_i$ be the r.v. corresponding to the number of the day where the $i$-th seed become an apple tree. Obviously, for all $i=1, ..., 6$ and for all $n \geq 1$,
$$\mathbb{P}(X_i = n) = \frac{1}{2^n}$$
What you are asked to compute is
$$\mathbb{E}(\max(X_i))$$
But for all integer $n \geq 1$,
$$\mathbb{P}(\max(X_i)=n) = \mathbb{P}(\max(X_i) \leq n) - \mathbb{P}(\max(X_i) \leq n-1) = \prod_{i=1}^6 \mathbb{P}(X_i \leq n) - \prod_{i=1}^6 \mathbb{P}(X_i \leq n-1)$$
$$=\left( 1 - \frac{1}{2^n} \right)^6-\left( 1 - \frac{1}{2^{n-1}} \right)^6$$
And $$\mathbb{E}(\max(X_i)) = \sum_{n=1}^{+\infty} n \left[\left( 1 - \frac{1}{2^n} \right)^6-\left( 1 - \frac{1}{2^{n-1}} \right)^6\right]$$
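Evaluating this series numerically confirms the recursive computation in the question: the truncated sum agrees with $\frac{55160}{13671}\approx 4.0348$ (so the expectation is not $\log_2 6$):

```python
# E[max] = sum over n of n * P(max = n), truncated; terms decay like n / 2^n
E = sum(n * ((1 - 2.0**-n)**6 - (1 - 2.0**-(n - 1))**6) for n in range(1, 200))
print(round(E, 6))  # 4.034818, which equals 55160/13671
```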
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3810512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to maximize $f(x,y) = 3\sin x + 4\cos y$? Hello, for a linear algebra course I have the question seen below.
*Among all the unit vectors
$$
u=\left(\begin{array}{l}
u_{1} \\
u_{2} \\
u_{3} \\
u_{4}
\end{array}\right) \in \mathbb{R}^{4}
$$
find the one for which the sum
$$
3 u_{1}-2 u_{2}+4 u_{3}-u_{4}
$$
is maximal.
To maximize the sum of the unit vector I have set $u_2$ and $u_4$ to $0$ since they do not contribute to the sum. Then we are left with $2$ variables that squared have to add up to $1$. That's when I thought of the unit circle, since its coordinates always square-sum to one. But trigonometry is not my strongest point and I am now trying to figure out how to maximize $3\sin x + 4\cos y$ since this would be a valid unit vector and maximize the sum in the question. Or is there another way to answer this question? Any help or suggestions are much appreciated.
Thanks in advance,
Thomas
| $u$ being a unit vector means $||u|| = \sqrt{u_1^2 +u_2^2 + u_3^2 + u_4^2} = 1$. Consider it as a constraint on the vector $u$ you seek, in the form
$$
g(u_1,u_2,u_3,u_4) = 0\qquad {\rm i.e. } \qquad \sqrt{u_1^2 +u_2^2 + u_3^2 + u_4^2} - 1 =0
$$
the function you have, $f(u_1,u_2,u_3,u_4) = 3u_1 - 2u_2 + 4u_3-u_4$, must be maximized subject to the constraint above.
Using the method of Lagrange multipliers you get the system
$$
\nabla f(u_1,u_2,u_3,u_4)\ - \lambda \nabla g(u_1,u_2,u_3,u_4) = \vec{0} \\
g(u_1,u_2,u_3,u_4) = 0
$$
Solving for $u_1,u_2,u_3,u_4$ and $\lambda$, you get two solutions (the maximizer and the minimizer).
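Solving that system gives $u$ proportional to $(3,-2,4,-1)$; a quick numeric check that the normalised vector attains the value $\sqrt{30}$:

```python
from math import sqrt

v = [3.0, -2.0, 4.0, -1.0]                # coefficients of the linear form
norm = sqrt(sum(x * x for x in v))        # ||v|| = sqrt(30)
u = [x / norm for x in v]                 # the maximizing unit vector
value = sum(c * x for c, x in zip(v, u))  # 3u1 - 2u2 + 4u3 - u4
assert abs(value - sqrt(30)) < 1e-12
print(round(value, 6))  # 5.477226, i.e. sqrt(30)
```

This is also exactly what Cauchy-Schwarz predicts: the linear form is at most $\|v\|$ on unit vectors, with equality at $u = v/\|v\|$.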
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3810717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For what values of $k$ is there a perfect $x^p$ in {$n, n+1, n+2, ... ,kn$}? I was thinking about this the other day, and I can't seem to find the answer.
It is a fairly simple proof by induction to show that for all $n ∈ ℕ$, there is a perfect square in $\{n, n+1, n+2, \ldots, 2n\}$.
I am trying to generalize this question. For what values of $k$ is there always a perfect cube in $\{n, n+1, n+2, \ldots, kn\}$?
More generally, for what values of $k$ is there always a perfect $x^p$ in $\{n, n+1, n+2, \ldots, kn\}$? I'm not sure how to approach a proof of this. Maybe it's way above me?
Thanks.
| For $n = 1$, since $1$ is a $p$'th power, any $k$ will do. For $n = 2$, the next largest $p$'th power is $2^p$, so
$$kn \ge 2^p \implies k \ge 2^{p-1} \tag{1}\label{eq1A}$$
I will show the minimum allowed value of $k = 2^{p-1}$ always works. For any $2 \le n \le 2^p$, this value of $k$ works. Also, if $n = m^p$ for any integer $m$, then this $k$ also works. Next, consider
$$m^p \lt n \lt (m+1)^p \tag{2}\label{eq2A}$$
for an integer $m \ge 2$.
For $p = 1$, we have $k = 1$, with this working since each value is its own first power. For $p = 2$, we have $k = 2$, which you've stated you can prove using induction. Thus, consider $p \ge 3$. Using \eqref{eq2A}, the Binomial theorem expansion with $x \gt 0$ giving $(1 + x)^p = 1 + px + \frac{p(p-1)}{2}x^2 + \ldots + x^p \gt 1 + px$, and $m \ge 2 \implies m + m \ge m + 2 \implies 2m - 1 \ge m + 1$, we then get
$$\begin{equation}\begin{aligned}
\left(2^{p-1}\right)n & \gt \left(2^{p-1}\right)m^p \\
& = \left(\frac{1}{2}\right)\left(2^{p}\right)m^p\left(\frac{(m+1)^p}{(m+1)^p}\right) \\
& = \left(\frac{1}{2}\right)\left(\frac{2m}{m+1}\right)^p(m+1)^p \\
& = \left(\frac{1}{2}\right)\left(1 + \frac{m - 1}{m+1}\right)^p(m+1)^p \\
& \gt \left(\frac{1}{2}\right)\left(1 + \frac{p(m - 1)}{m+1}\right)(m+1)^p \\
& \ge \left(\frac{1}{2}\right)\left(1 + \frac{3(m - 1)}{m+1}\right)(m+1)^p \\
& = \left(\frac{1}{2}\right)\left(\frac{4m - 2}{m+1}\right)(m+1)^p \\
& = \left(\frac{2m - 1}{m+1}\right)(m+1)^p \\
& \ge (m+1)^p
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$
Thus, $(m+1)^p \in [n,kn]$, confirming $k = 2^{p-1}$ (as well as any larger $k$) will always work.
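A brute-force check of the conclusion (and of the minimality of $k=2^{p-1}$) for small $p$:

```python
def has_pth_power(n, k, p):
    """Is there an integer m with n <= m**p <= k*n ?"""
    m = 1
    while m**p < n:
        m += 1                 # m is now the smallest integer with m**p >= n
    return m**p <= k * n

for p in (1, 2, 3, 4):
    k = 2**(p - 1)
    assert all(has_pth_power(n, k, p) for n in range(1, 2000))
    if p > 1:                  # k = 2**(p-1) is minimal: k - 1 already fails at n = 2
        assert not has_pth_power(2, k - 1, p)
print("verified for p = 1..4, n < 2000")
```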
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3810898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Steps for solution of $\frac{dy}{dx}-y^2=\frac{a}{x}+b$ According to Wolfram,
$$\frac{dy}{dx}-y^2=\frac{b}{x}+a$$
The equation above has the solution;
Where $a$ and $b$ are constants. $_1F_1(a;b;x)$ and $U(a,b,x)$ are the Kummer confluent hypergeometric function and confluent hypergeometric function of the second kind, respectively.
I don't know how Wolfram got this solution, can anyone here help me please?
Thank you guys
| What makes $$\frac{dy}{dx}-y^2=\frac{b}{x}+a$$ difficult is the $a$ term. For $a=0$, you would get the "simple"
$$y=\frac{c_1 J_1\left(2 \sqrt{b} \sqrt{x}\right)+\sqrt{b} \sqrt{x} \left(\left(c_1-2\right)
J_0\left(2 \sqrt{b} \sqrt{x}\right)-c_1 J_2\left(2 \sqrt{b}
\sqrt{x}\right)\right)}{2 \left(1-c_1\right) x J_1\left(2 \sqrt{b} \sqrt{x}\right) }$$ which can also be written
$$y=-\frac 1x\frac{\, _0\tilde{F}_1(;1;-b x)}{ \, _0\tilde{F}_1(;2;-b x)}$$ where appears the regularized confluent hypergeometric function.
Now, for $a \neq 0$, the "monster"
$$\frac{\sqrt{-a} \left(-c_1 (U(k,0,2 t)-2 U(k,1,2 t))-2 (t+1) \,
_1F_1(k+1;2;2 t)-2 (k-1) t \, _1F_1(k+1;3;2 t)\right)}{c_1 U(k,0,2
t)+2 t \, _1F_1(k+1;2;2 t)} $$ where $t=x\sqrt{-a}$ and $k=-\frac b{2\sqrt{-a}}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3811025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
prove or disprove: if $y'=y^2-\cos(x)$ then any solution diverges in a finite time given the following ode, prove or disprove:
*
*if $y'=y^2-\cos(x)$ then any solution diverges in a finite time.
"diverge" means, that there is a point $x$ where the solution isn't continuous (there and thereafter).
if $y(0)>1$ or $y(0)<-1$ it's clear how to show it.
for instance, take $y(0)>1$, and let's say $x$ is the first point where $y(x)=1$.
then according to the mean value theorem there is $t<x$ where $y'(t)<0$; from the ODE this gives $y(t)^2<\cos(t)\le 1$, hence $y(t)<1$, thus $x$ is not the first time we reached $y(x)=1$, therefore there is no such point.
The problem is showing that if $-1<y(0)<1$, the solution must leave this range in finite time.
Edit
I thought about the comparison $y'=y^2-\cos(x)>y^2-1$; the equation $y'=y^2-1$ has general solution $\frac{1-Ce^{2x}}{Ce^{2x}+1}$, which might have a pole, depending on $y(0)$.
| Gosh -- this is hard! I've tried plotting a streamline plot of the ODE, which seems to confirm finite-time blowup. The horizontal axis is the $x$-coordinate; the vertical axis is the $y$-coordinate.
As a rough first argument, you can say that $y^2 - \cos(x) \approxeq y^2$ for $y$ large. More precisely, the difference is bounded because $|\cos(x)| \leq 1$, so it is negligible next to $y^2$ as $y \to \infty$. (You can probably state this precisely using big $O$ notation, but I won't bother.) The solution to $y' = y^2$ is $y(x) = \frac{-1}{x+C}$, which diverges to infinity in finite time for $y>0$. That shows finite-time blowup if $y$ is large.
To show that $y$ blows up when $y(0)$ is between $-1$ and $1$, you have to show that the graph of $y(x)$ eventually exceeds $y=1$. Then, $y^2 > |\cos(x)|$, so $y'$ will be positive, causing $y$ to increase more, increasing $y^2$ more in turn, which increases $y'$ more, and so on. This is not a rigorous proof, but I'm fairly certain you can use the argument in the above paragraph to justify this claim precisely.
This reduction makes your task somewhat easier, but I have little clue how to show $y(x)$ eventually exceeds $y=1$ given $y(0)$ between $-1$ and $1$. Here's a possible direction for a proof.
Notice that on the $y$-axis, all slopes are negative, and as $x$ gets larger, $y'(x)$ increases until it diverges. In other words, the second derivative $y''$ (assuming it exists) appears to be positive in some region to the right of the $y$-axis. Differentiating both sides of the ODE with respect to $x$:
$$y'' = 2y\dfrac{dy}{dx} + \sin (x) > 0 $$
for all $x$ in some interval $[0,M)$. So we get $2y(x) \cdot y'(x) > -\sin (x)$. We can solve the comparison equation $2y(x) \cdot y'(x) = -\sin (x)$ exactly, and the solution is:
$$y(x) = \sqrt{\cos(x) + C} $$
This is where I'm stuck. Maybe you can say that $y(x) > \sqrt{\cos(x) + C}$ and try going from there?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3811152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
what is the purpose of number system conversions e.g. decimal to base 5? I'm learning number system conversion on YouTube. So far I know there are decimal, binary, octal, and hex numbers. There is a purpose behind converting decimal to binary, but what is the purpose of converting decimal to a base-5 number?
| Not much. The only argument for base $10$ is that we are used to it, which is very strong. Binary is useful for computer processing, but leads to very long expressions for numbers. Years ago some computers would compress binary into octal, which cuts the number of digits by a factor $3$. I worked on a CDC6400 that had $60$ bit words and the dumps were in octal. I haven't heard of that for a long time, now it is all hex, which cuts the number of digits by a factor $4$, but I am out of touch and there could be some systems that still use octal.
Long ago I did a math puzzle that had $6$ everywhere in layers. There was a bunch of computation to do, which I did in base $6$. The puzzle could be solved without that.
I think there is value in recognizing the distinction between a number and its representation in some particular way. Expressed in binary, $\frac 15$ does not terminate, while it does in base $10$. We get a number of questions involving terminating decimals that do not realize it depends on what base you are in.
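On the mechanical side, the conversion algorithm is the same repeated-division loop regardless of the target base; a minimal sketch (the value $2020$ and base $5$ are arbitrary examples):

```python
def to_base(n, base):
    # repeated division: collect least-significant digits first, then reverse
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append("0123456789abcdef"[r])
    return "".join(reversed(digits))

print(to_base(2020, 5))          # 31040
print(int(to_base(2020, 5), 5))  # 2020, round trip via int(s, base)
```

The built-in `int(s, base)` inverts the conversion, which makes round-trip checks easy.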
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3811340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Möbius transformations of finite order I saw this question, which brought up the question: can we classify all the Möbius transformations (with complex coefficients) of finite order? In particular, do these only consist of the rotations? I sense symbolic manipulation isn't the way to go about this—and indeed didn't get anywhere with this—and would appreciate some insight.
| Every Möbius transformation which is not the identity is conjugate to either a translation
$$
z \to z + a \, \quad (a \in \Bbb C, a \ne 0)
$$
or a complex-linear map (rotation/dilation)
$$
z \to \lambda z \, \quad (\lambda \in \Bbb C, \lambda \ne 0, 1)
$$
depending on whether it has one or two fixed points. Translations never have finite order, and the map $z \to \lambda z$ has finite order if and only if $\lambda$ is a root of unity.
It follows that all Möbius transformation of finite order are of the form
$$
T(z) = S^{-1}(\lambda S(z))
$$
with some Möbius transformation $S$ and $\lambda = e^{2 \pi i k/n} \ne 1$ for some integers $k, n$.
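As a numerical illustration (not part of the proof): representing each Möbius map by a $2\times 2$ coefficient matrix turns composition into matrix multiplication, so one can build $T = S^{-1}\circ R\circ S$ for an arbitrarily chosen $S$ and a rotation $R$ by a primitive $5$th root of unity, and check that the $5$th iterate of $T$ is the identity. The choice $S(z)=(z+2)/(3z+4)$ below is hypothetical.

```python
import cmath

def mobius(M, z):
    a, b, c, d = M
    return (a * z + b) / (c * z + d)

def compose(M, N):
    # matrix product M * N corresponds to applying N first, then M
    a, b, c, d = M
    e, f, g, h = N
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

S = (1, 2, 3, 4)         # hypothetical choice: S(z) = (z + 2)/(3z + 4)
S_inv = (4, -2, -3, 1)   # inverse matrix up to scale (scaling doesn't change the map)
lam = cmath.exp(2j * cmath.pi / 5)
R = (lam, 0, 0, 1)       # rotation z -> lam * z, lam a primitive 5th root of unity
T = compose(S_inv, compose(R, S))

z = 0.3 + 0.7j
w = z
for _ in range(5):
    w = mobius(T, w)
print(abs(w - z))  # ~ 0, so T has order 5
```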
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3811443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to prove that the elasticity of the revenue function is $E_R(p)=\frac{E_R(q)}{E_R(q)-1}$? The Problem
Let the demand function be $ap+bq=k$.
Prove that this equation (elasticity of revenue) is true:
$$E_R(p)=\frac{E_R(q)}{E_R(q)-1}$$
Definitions
Demand Function
The Demand Function is defined as the relation between the price $p$ of the good and the demanded quantity $q$ of the good which in our example is: $ap+bq=k$. Note that $D^{-1}(p) = G(q)$
Revenue Function
The Revenue Function is defined as $R = p q$, where R is the total revenue, $p$ is the selling price per unit of sales, and $q$ is the number of units sold
Elasticity of a function
The Elasticity of a function $f(x)$ approximates the change of $f$ given the change of $x$ and is defined as:
$$ E_f(x) = \frac{df}{dx} \frac{x}{f(x)}$$
My solution attempt
We need to prove $E_R(p)=\frac{E_R(q)}{E_R(q)-1}$
*
*The demand function can be written as: $ap+bq=k \iff \boxed{D(p) = q = \frac{k-ap}{b}} \:\:(1)$ and $\boxed{G(q) = p = \frac{k-bq}{a}}\:\: (2)$
*Therefore we can write the revenue function as $R(q) = pq = pD(p) \iff \boxed{ R(q) = \frac{kp-ap^2}{b} } \:\:(3)$ and $R(p) = pq = G(q)q \iff \boxed{R(p) = \frac{kq-bq^2}{a}}\:\: (4)$
Hence,
$E_R(p) = \frac{R(q)}{dq} \frac{q}{R(q)} \stackrel{(3)}{=} \left(\frac{kp-ap^2}{a}\right)'\cdot \frac{q}{\frac{kp-ap^2}{a}} = \frac{k-2ap}{a}\cdot \frac{q}{\frac{kp-ap^2}{a}} = \frac{\left(k-2ap\right)q}{kp-ap^2}$
$$ \boxed{ E_R(p) = \frac{\left(k-2ap\right)q}{kp-ap^2}}\:\: (5)$$
And,
$E_R(q) = \frac{R(p)}{dq} \frac{p}{R(p)} \stackrel{(4)}{=} \left( \frac{kq-bq^2}{a}
\right)' \cdot \frac{p}{\frac{kq-bq^2}{a}} = \frac{k-2bq}{a} \cdot \frac{p}{\frac{kq-bq^2}{a}} = \frac{p\left(k-2bq\right)}{kq-bq^2}$
$$ \boxed{E_R(q) = \frac{p\left(k-2bq\right)}{kq-bq^2}} \:\:(6)$$
So, at last:
$$E_R(p)=\frac{E_R(q)}{E_R(q)-1} \iff \\ \frac{\left(k-2ap\right)q}{kp-ap^2} = \frac{\frac{p\left(k-2bq\right)}{kq-bq^2}}{\frac{p\left(k-2bq\right)}{kq-bq^2}-1}$$
Which is overly complicated but it must hold, if there were no trivial calculation mistakes.
The Question
Given the fact that this was an exam subquestion, I am confident that there is an easier way to prove the elasticity equation (maybe by using elasticity function properties?), but if there is, I can't spot it.
Any ideas?
| Note you may equivalently define elasticity (using your notation) as
$$E_f(x)=\frac{d \log f(x)}{d \log x}.$$
Thus,
$$\small E_R(q)=\frac{d \log pq}{d \log q}=\frac{d \log pq}{d \log p}\frac{d \log p}{d \log q}=\frac{d \log pq}{d \log p}\left(\frac{d (\log p+\log q)}{d \log q}-1\right)=\frac{d \log pq}{d \log p}\left(\frac{d \log pq}{d \log q}-1\right)=E_R (p) (E_R(q)-1),$$
and rearranging gives us the desired result.
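A numerical spot check of the identity, with arbitrarily chosen demand parameters $a=2$, $b=3$, $k=30$ and price $p=4$ (the derivative is a central difference, so agreement is up to rounding):

```python
a, b, k = 2.0, 3.0, 30.0      # hypothetical demand line a*p + b*q = k
p = 4.0
q = (k - a * p) / b           # quantity demanded at that price

def elasticity(f, x, h=1e-6):
    # E_f(x) = f'(x) * x / f(x), derivative by central difference
    return (f(x + h) - f(x - h)) / (2 * h) * x / f(x)

R_of_p = lambda p: p * (k - a * p) / b   # revenue as a function of price
R_of_q = lambda q: q * (k - b * q) / a   # revenue as a function of quantity

Ep = elasticity(R_of_p, p)
Eq = elasticity(R_of_q, q)
print(Ep, Eq / (Eq - 1))      # both ~ 0.6364, as the identity predicts
```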
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3811532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
If a left module M is co-isosimple and semi-Hopfian, then M is simple. I have tried to prove following theorem;
The followings are equivalent for a left module M:
*
*M is simple,
*M is co-isosimple and co-regular,
*M is co-isosimple and semi-Hopfian,
*M is co-isosimple and discrete.
So far, I have proved (1$\Rightarrow$4$\Rightarrow$2$\Rightarrow$3), but I couldn't prove (3$\Rightarrow$1) which is the last step.
Definition 1: A non-zero module M is co-isosimple if it is isomorphic to all its non-zero quotients.
Definition 2: A left module M is a semi-Hopfian module if for every epimorphism p:M$\longrightarrow$M we have that ker(p) is a direct summand of M.
In addition, I've seen a theorem in another article stating that a left module $M$ is semi-Hopfian if and only if every submodule $N$ of $M$ with $M/N$ isomorphic to $M$ is a direct summand of $M$.
Using just this information, I can say that since $M$ is co-isosimple, $M/N$ is isomorphic to $M$ for every proper submodule $N$ of $M$; by the characterization above, every proper submodule of $M$ is then a direct summand, so $M$ is semisimple. However, what I am trying to show is that $M$ is simple, and I am stuck there. Can anyone help me?
| Let $M$ be co-isosimple and semi-hopfian, and $N$ be any submodule not equal to $M$.
Firstly of course, $M\cong M/N$. But that means the composition of the projection $\pi:M\to M/N$ with that isomorphism is an onto endomorphism of $M$. Therefore its kernel (which is $N$) is a summand of $M$. This shows that $M$ is a semisimple module.
If $M$ isn't finitely generated, then it is easy to come up with a submodule $N$ such that $M/N$ is finitely generated, and so $M\not\cong M/N$. Thus co-isosimplicity prevents $M$ from being infinitely generated.
So now suppose $M$ is finitely generated and has a nontrivial submodule $N$. Then the composition length of $M/N$ must be strictly smaller than the composition length of $M$, so $M\not\cong M/N$ in that case either. So co-isosimplicity precludes the existence of nontrivial submodules, too.
Therefore, the only thing left is for $M$ to be simple, which clearly works.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3811657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Limit of multivariable function $f(x,y) = {(x^2+y^2)}^{x^2y^2}$ $$f(x,y) = {(x^2+y^2)}^{x^2y^2}$$
I need to find the limit at the point $(0,0)$.
I applied the exponent rule and got $$e^{x^2y^2\ln(x^2+y^2)}$$
and now with chain rule, I need to find the limit of $${x^2y^2\ln(x^2+y^2)}$$
and how? :D
There isn't L'Hôpital's rule for multivariable function, right?
| We have that
$$ {(x^2+y^2)}^{x^2y^2}=e^{x^2y^2 \log(x^2+y^2)} \to 1$$
indeed
$$x^2y^2 \log(x^2+y^2)=(x^2+y^2) \log(x^2+y^2) \frac{x^2y^2}{x^2+y^2} \to 0\cdot 0=0 $$
since, putting $t=x^2+y^2 \to 0$, by standard limits
$$(x^2+y^2) \log(x^2+y^2) =t\log t \to 0$$
and since $x^2+y^2 \ge 2|xy|$,
$$0\le\frac{x^2y^2}{x^2+y^2} \le \frac{x^2y^2}{2|xy|} =\frac12 |xy| \to 0$$
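Numerically, the conclusion is easy to check; evaluating the function along a few arbitrarily chosen paths into the origin:

```python
def f(x, y):
    return (x * x + y * y) ** (x * x * y * y)

# sample along the paths y = x, y = 2x, y = x^2 as the parameter shrinks
for t in [1e-1, 1e-3, 1e-6]:
    print(f(t, t), f(t, 2 * t), f(t, t * t))  # every column tends to 1
```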
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Verification for a series of limits I ask kindly if my steps for these limits are right.
I always start by immediately checking the domain, and this is advice I give to my high-school students.
I have, for example this function where the domain of
$$\operatorname{dom}\left(\frac{(1-x)^{\sqrt x}}{x-2}\right)=\{x\in \Bbb R \colon 0\leq x \leq 1\}$$
and I think that it is not possible to write that $x\to +\infty$
If I have the function $g(x)=\left(\frac{x-1}{x-2}\right)^{\sqrt x}$, where it is possible to have $x\to +\infty$, my steps are:
$$\lim _{x\to +\infty }\left(\frac{x-1}{x-2}\right)^{\sqrt x}=\lim_{x\to +\infty}\left(1+\frac{1}{x-2}\right)^{\sqrt x}= \lim_{x\to +\infty}\left(1+\frac{1}{x-2}\right)^{\frac{(x-2)\sqrt x}{x-2}} $$
$$=\lim_{x\to +\infty}\left[\left(1+\frac{1}{x-2}\right)^{x-2}\right]^{\sqrt{\frac{x}{(x-2)^2}}}=e^0=1 $$
If the function is:
$$\psi(x)=\frac{(x-1)^{\sqrt x}}{x-2}$$
it is clear from the domain that $x\to +\infty$ is allowed,
$$\lim _{x\to +\infty }\frac{(x-1)^{\sqrt x}}{x-2}=\lim _{x\to +\infty }\frac{e^{\sqrt{x}\ln \left(x-1\right)}}{x-2}=\frac{e^{\lim _{x\to +\infty}x\sqrt{x}\,\frac{\ln (x-1)}{x}}}{x-2} \tag{A}$$
Being $$\lim _{x\to +\infty \:}\frac{\ln \left(x-1\right)}{x}=0 \tag{B}$$
I will have in the exponent $x\sqrt{x}\,\frac{\ln (x-1)}{x}$ (last step in (A)), an indeterminate form $(\infty\cdot0)$. Is there a strategy for (A) that avoids L'Hôpital's rule?
| Let's not use L'Hopital's rule in $(A)$.
$$\bbox[yellow,5px,border:2px solid red]{\frac{(x-1)^{\sqrt x}}{x-2}=\frac{x^{(\sqrt x-1)}(1-x^{-1})^{x(\sqrt x)^{-1}}}{1-2x^{-1}}=P(x)\cdot Q(x)}\tag{1}$$
where, $$P(x)=x^{(\sqrt x-1)}, \quad \text{and} \quad Q(x)=\frac{(1-x^{-1})^{x(\sqrt x)^{-1}}}{1-2x^{-1}}$$
Note that $\lim_{x\to \infty} Q(x)=1$ ($\,\text{Numerator}$ of $Q(x)$ tends to $e^0=1$).
Assume on the contrary that limit in $(1)$ exists(finitely) as $x\to \infty$ and is equal to $L\in \mathbb R$.
Hence by limit rules, $P(x)=\dfrac{P(x)Q(x)}{Q(x)}\implies \lim_{x\to \infty} P(x)=L$, which is a contradiction (do you see why?).
Hence, the limit doesn't exist finitely.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Weird looking ODE solution: How to verify this is indeed a solution? I will try to express my question using an example.
Consider this homogeneous ODE: $y' = \frac{y-x}{x+y }$
Its solution is: $\boxed{\frac12 \log\left( \frac{y^2(x)}{x^2} +1\right) + \log(x) + \arctan\left( \frac{y(x)}{x}\right) = c} \quad c\in \mathbb{R}$
As far as I am understanding solving a differential equation means finding a $y(x)$ that satisfies the differential equation.
But this particular solution cannot be expressed in terms of $y(x)$ (at least, I can't do it).
*
*If the previous argument is true how could one verify the solution, given the fact that there is no actual $y$ to plug into the ODE?
*If the previous argument is false, how could the solution be expressed in terms of $y$?
P.S: A previous question I've asked is How do $\arctan$ and $\ln$ relate?. Given the fact that the solution only contains those two functions, I suspect that it may be helpful to note this down. Could there be some complex analysis involved?
| Use the substitution $y = xv$
$$v+xv' = \frac{x(v-1)}{x(v+1)} \implies xv' = -\frac{v^2+1}{v+1}$$
which is separable
$$\int\frac{v}{v^2+1} + \frac{1}{v^2+1} \:dv = -\int\frac{dx}{x} \implies \frac{1}{2}\log\left(v^2+1\right)+ \tan^{-1}v = -\log|x| + C$$
which means the solution can be given as
$$\frac{1}{2}\log\left(x^2v^2+x^2\right) + \tan^{-1}v = C \implies \log(y^2+x^2) + 2 \tan^{-1}\left(\frac{y}{x}\right) = C$$
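To address the verification question directly: even without an explicit $y(x)$, one can integrate the ODE numerically and check that the implicit expression stays constant along the computed trajectory. A sketch with classic RK4 and the arbitrary initial point $(x,y)=(1,2)$, using the final form $\log(y^2+x^2)+2\tan^{-1}(y/x)=C$ above:

```python
import math

def slope(x, y):
    return (y - x) / (x + y)           # the ODE y' = (y - x)/(x + y)

def F(x, y):
    # candidate conserved quantity from the implicit solution
    return math.log(y * y + x * x) + 2 * math.atan(y / x)

x, y, h = 1.0, 2.0, 1e-3
c0 = F(x, y)
for _ in range(500):                    # classic RK4 steps
    k1 = slope(x, y)
    k2 = slope(x + h / 2, y + h * k1 / 2)
    k3 = slope(x + h / 2, y + h * k2 / 2)
    k4 = slope(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(abs(F(x, y) - c0))  # ~ 0: F is conserved along the numerical solution
```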
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Understanding the exactness of a sequence in Kummer Theory Here are the prerequisites and the parts which cause me trouble (taken from Milne's Fields and Galois Theory):
In particular, I don't quite understand the exactness at $H^1(G,\mu_n)$ and what it has to do with Hilbert 90 (as mentioned in the text).
If I understood the underlying maps correctly, the map $F^\times \cap E^{\times n} \to H^1(G,\mu_n)$ maps an element $z \in F^\times \cap E^{\times n}$ to the crossed homomorphism $f_z: G \to \mu_n$, $\sigma \mapsto \frac{\sigma(c)}{c}$ for an arbitrary $c \in E^\times$ with $z = c^n$ (one can show that the maps do not depend of the choice of $c$).
We can see here that $f_z$ is a principal crossed homomorphism, so the image of $F^\times \cap E^{\times n} \to H^1(G,\mu_n)$ is trivial. However, this seems to contradict the exactness because the kernel of $H^1(G,\mu_n) \to 1$ is obviously $H^1(G,\mu_n)$ which is not trivial.
And then, I still don't see what all of this has to do with Hilbert 90. Hilbert 90 says that if the norm of an element $\alpha \in E$ is $1$, then there exists a $\beta \in E$ such that $\alpha = \beta/\sigma(\beta)$ (where $\sigma$ is a generator of the Galois group of $E/F$).
Could you please resolve any misunderstandings I have here? Thank you!
| Hilbert 90 has two common formulations -- one is the form you are quoting (where it is only valid if $E/F$ is a cyclic extension, by the way), and the other says $H^1(\text{Gal}(E/F), E^\times) = 0$ (any $E/F$ Galois -- this can be translated to your cyclic example by using an explicit description of the cohomology of cyclic groups).
Now if we take your short exact sequence, we get a long exact sequence in cohomology. The next term after $H^1(G, \mu_n)$ is $H^1(G, E^\times)$, which vanishes by Hilbert 90.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Find a limit involving floor function I had to find the following limit: $\displaystyle{\lim_{x \to \infty}}\frac{x}{\lfloor x \rfloor}$ Where $x\in\mathbb{R}$ and $f(x)= {\lfloor x \rfloor}$ denotes the floor function.
This is what I did:
I wrote $x-1\leq \lfloor x \rfloor\leq x$. Then $\frac{1}{x}\leq\frac{1}{\lfloor x \rfloor}\leq\frac{1}{x-1}$. Multiplying by $x$ we get $\frac{x}{x}\leq\frac{x}{\lfloor x \rfloor}\leq\frac{x}{x-1}$. Since $\displaystyle{\lim_{x \to \infty}}\frac{x}{x}=\displaystyle{\lim_{x \to \infty}}\frac{x}{x-1}=1$, by the squeeze theorem we can say that $\displaystyle{\lim_{x \to \infty}}\frac{x}{\lfloor x \rfloor}=1$. Is this correct? If not, what should I fix?
| It is correct. Slightly easier: $\lfloor x\rfloor = x-\{x\}$, so $x/\lfloor x \rfloor= \frac{1}{1-\{x\}/x}$, and the limit is $1$ since $0\le\{x\}<1$
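A quick numerical illustration at a few arbitrary sample points:

```python
import math

for x in [10.5, 100.5, 1000.5, 10000.5]:
    print(x, x / math.floor(x))   # ratios 1.05, 1.005, 1.0005, 1.00005 -> 1
```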
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Question about group operation in Fundamental group I want to show that if the fundamental group $\pi_1(X,x_0)$ is abelian where $X$ is path-connected, then for any two paths $h_1,h_2: I\to X$ from $x_0$ to $x_1$, $\beta_{h_1}\equiv \beta_{h_2}$ where $\beta_h([f])=[h^{-1}*f*h]$. It seems that most of the solutions looks like the following: $\beta_{h_1}([f])=[h_1^{-1} *f*h_1]=[h_1^{-1}*f*h_2*h_2^{-1}*h_1]$ and as $X$ is path-connected, $\textit{we can say that $\pi_1(X,x_1)$ is also abelian}$. Hence, $[h_2^{-1}*h_1*h_1^{-1}*f*h_2]=[h_2^{-1}*f*h_2]=\beta_{h_2}([f])$. But in some solution, one proved this statement as the following: $[f]*[h_1*h_2^{-1}]=[h_1*h_2^{-1}]*[f]\iff [h_1*h_2^{-1}]^{-1}*[f]*[h_1*h_2^{-1}]=[f]\iff [h_2*h_1^{-1}]*[f]*[h_1*h_2^{-1}]=[f]\iff [h_1^{-1}]*[f]*[h_1]=[h_2^{-1}]*[f]*[h_2]$. Something like this form. I mean they do the concatenation operation even if $[h_1]$ is not an element of whatever fundamental group. So my question is, $\textit{is this operation possible?}$. In other words, even if we argue in fundamental group, is conatenation operation valid when the operation is well-defined not only as group operation but also as operation between paths.
|
The concatenation operation is, in general, defined for two paths such that the initial point of one path coincides with the terminal point of the other.
As you can see from the picture, paths have right and left identities as well as right and left inverses (up to homotopy). Hence, even though your $[h_1]$ does not belong to the fundamental group, it can be thought of as belonging to the fundamental groupoid, and those operations are valid.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Ideals with infinitely many generators As the title may suggest, I am not sure if I understand the ideals with infinitely many generators correctly. Let us take the ring $R=\mathbb{Z}[x_1,x_2,x_3,\dots]$ and consider the ideal $(x_1,x_2,x_3\dots)$ generated by all the indeterminates in the polynomial ring $R$. As a set
\begin{equation}
(x_1,x_2,x_3,\dots)=\{f_1x_1+f_2x_2+f_3x_3+\dots|f_1,f_2,f_3,\dots\in R\}.
\end{equation}
My understanding is that the $+$ in the above notation does not stand for the ring operation (we can't add infinitely many elements) and that $f_1x_1+f_2x_2+f_3x_3+\dots$ is an element of $R$ and so automatically only finitely many $f_i\neq0_R$. Is my understanding correct?
| Writing $$(x_1,x_2,x_3,\dots)=\{f_1x_1+f_2x_2+f_3x_3+\dots|f_1,f_2,f_3,\dots\in R\}$$ is rather imprecise because (as you said) you cannot add infinitely many elements in a ring (unless you have a topology). It would be better to write instead
$$
(x_1,x_2,x_3,\dots) = \left\{\sum_{i\geq 1} f_i x_i\mid f_i\in R\textrm{ and }f_i = 0\textrm{ for all but finitely many }i\right\}.
$$
The sum in the set on the right hand side is now well-defined, and this is the ideal that you want.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $X$ be any set $\tau : = \{X-\{a_1 , a_2 ,......, a_n\} \cup \phi\}$. Is $(X , \tau )$ $T_1$?
I cannot understand the topology $\tau$. Can anyone please help me understand it?
| The topology that is meant is most likely:
$$\tau = \{X\setminus F\mid F\subseteq X\text{ finite }\} \cup \{\emptyset\}$$
And a topology on $X$ is $T_1$ iff for all $x \in X$, $\{x\}$ is closed. Now decide... Whether $X$ is a $T_2$ space will depend on the size of $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3812944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to show that $ C := A^3-3A^2+2A = 0$?
Let $$A = \begin{pmatrix} -1 & 1 & 2 \\ 0 & 2 & 0 \\ -1 & 1 & 2 \end{pmatrix}$$
Let $C:= A^3-3A^2+2A $.
Show that $C=0$.
I know that $A$ is diagonalisable with $\operatorname{spec}(A)=\{0,1,2\}$.
I have no clue how to approach that problem. Any advice?
| Here is a way to handle such polynomial expressions using Cayley–Hamilton:
The characteristic polynomial of $A$ is $\:\chi_A=X(X-1)(X-2)=X^3-3X^2+2X$, so Cayley–Hamilton gives
$$C = \chi_A(A) = A^3-3A^2+2A = 0$$
directly, with no matrix computation at all. The same idea evaluates related polynomials cheaply: for $p(X)= X^3+3X^2+2X$ we have $p(X)=\chi_A(X)+6X^2$, so
$$p(A)=\chi_A(A)+6A^2=6A^2,$$
and only one matrix multiplication is needed.
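The identity $A^3-3A^2+2A=0$ can also be confirmed by direct computation; a minimal pure-Python check:

```python
def matmul(X, Y):
    # naive n x n matrix product
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[-1, 1, 2], [0, 2, 0], [-1, 1, 2]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
C = [[A3[i][j] - 3 * A2[i][j] + 2 * A[i][j] for j in range(3)] for i in range(3)]
print(C)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```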
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3813075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Is this type of 'cfrac' known $\ldots\frac{a_n}{b_n+}\ldots\frac{a_2}{b_2+}\frac{a_1}{b_1+a_0}$? I have looked in a few well known books on cfracs but did not encounter anything related to the following quantity:
$$
\ldots\frac{a_n}{b_n+}\ldots\frac{a_2}{b_2+}\frac{a_1}{b_1+a_0}=\ldots\cfrac{a_n}{b_n+\cfrac{a_{n-1}}{b_{n-1}+\ldots+\cfrac{a_2}{b_2+\cfrac{a_1}{b_1+a_0}}}}
$$
It has a structure similar to ordinary continued fractions but it goes to the upside and to the left (while ordinary cfrac goes to the downside and to the right).
Q: What is known about such a mathematical object?
| Your problem is convergence.
Ordinary continued fractions are well-behaved objects because we can truncate them at various points, getting a sequence of rational numbers, which (at least assuming all coefficients are positive integers) converge to some limit. Extending the truncation has a smaller and smaller effect on the final value.
In your backwards continued fractions, that is pretty much never going to be the case. If we truncate stopping at $b_n$, then for the $n^{\text{th}}$ approximation and the $(n-1)^{\text{th}}$ approximation to both be approximately equal to some value $x$, we want $x \approx b_n + \frac{a_n}{x}$, which is just a quadratic condition on $x$, $a_n$, and $b_n$ that ignores the rest of the continued fraction. Similar behavior happens if we truncate stopping at $a_n$. As a result, we don't get convergence unless $a_n$ and $b_n$ approach some specific relationship involving the limit; in particular, if the $a_n$ and $b_n$ are all integers, we can only converge to $x$ if $x$ is the root of a monic quadratic equation with integer coefficients, and we can only converge to it if $a_n$ and $b_n$ eventually are those integer coefficients.
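The quadratic fixed-point condition is easy to see numerically: with constant coefficients (the arbitrary choice $a_n=b_n=1$ below), adding one more layer is the iteration $x \mapsto b + a/x$, whose iterates converge to the positive root of $x^2 = bx + a$ (the golden ratio here):

```python
a, b = 1, 1            # constant coefficients, an arbitrary example
x = 1.0
for _ in range(60):
    x = b + a / x      # adding one more layer of the fraction
root = (b + (b * b + 4 * a) ** 0.5) / 2   # positive root of x^2 = b*x + a
print(x, root)         # both ~ 1.6180339887
```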
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3813221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of non negative integer solutions of $x+y+2z=20$
The number of non negative integer solutions of $x+y+2z=20$ is
Finding coefficient of $x^{20}$ in
$$\begin{align}
&\left(x^0+x^1+\dots+x^{20}\right)^2\left(x^0+x^1+\dots+x^{10}\right)\\
=&\left(\frac{1-x^{21}}{1-x}\right)^2\left(\frac{1-x^{11}}{1-x}\right)\\
=&\left(1-x^{21}\right)^2(1-x)^{-3}\left(1-x^{11}\right)
\end{align}$$
i.e.finding coefficient of $x^{20}$ in $$(1-x)^{-3}-x^{11}(1-x)^{-3}$$
Or, coefficient of $x^{20}$ in $(1-x)^{-3}-$ coefficient of $x^9$ in $(1-x)^{-3}=\binom{22}{20}-\binom{11}{9}=176$
The answer is given as $121$. What's my mistake?
EDIT (after seeing @lulu's comment):
Finding coefficient of $x^{20}$ in $$\begin{align}
&\left(x^0+x^1+...+x^{20}\right)^2\left(x^0+x^2+...+x^{20}\right)\\
=&\left(\frac{1-x^{21}}{1-x}\right)^2\left(\frac{1-x^{22}}{1-x^2}\right)\\
=&\left(1-x^{21}\right)^2(1-x)^{-2}\left(1-x^{22}\right)\left(1-x^2\right)^{-1}
\end{align}$$
i.e.finding coefficient of $x^{20}$ in $$(1-x)^{-2}(1-x^2)^{-1}$$
Not able to proceed next.
| Solving another way one has $0\le z \le10$ so we have in total the solutions of
$$x+y=20\space\space\text{ for } (x,y,0)\\x+y=18\space\space\text{ for } (x,y,1)\\x+y=16\space\space\text{ for } (x,y,2)\\x+y=14\space\space\text{ for } (x,y,3)\\x+y=12\space\space\text{ for } (x,y,4)\\x+y=10\space\space\text{ for } (x,y,5)\\x+y=8\space\space\text{ for } (x,y,6)\\x+y=6\space\space\text{ for } (x,y,7)\\x+y=4\space\space\text{ for } (x,y,8)\\x+y=2\space\space\text{ for } (x,y,9)\\x+y=0\space\space\text{ for } (x,y,10)$$
Thus we have
$$21+19+17+15+\cdots+5+3+1=121$$ solutions.
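A brute-force count confirms the total:

```python
# enumerate all non-negative solutions of x + y + 2z = 20
count = sum(1 for x in range(21) for y in range(21) for z in range(11)
            if x + y + 2 * z == 20)
print(count)  # 121
```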
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3813339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Is $\exists x(x=a)$ a valid formula in first order logic with equality? Under Tarskian semantics, is $\exists x(x=a)$ true in every interpretation in first order logic (FOL) with equality? If so, could $\exists x(x=a)$ be considered a tautology?
This question was motivated by another which was closed for lacking clarity. Here I have attempted to add that clarity myself so that the same question can be useful to others who are wondering a similar thing.
This is a self-answered question.
| Your question does not specify what $a$ is. The simpler case is if $a$ is a constant-symbol. Your posted answer is more or less correct for that case, but there are some issues. Firstly, it is not correct to write "$a≠a$" when referring to the element that $a$ is interpreted as in a structure $M$. Either write "$M ⊨ a≠a$" or "$a^M ≠ a^M$". Secondly, it is not a very good idea to prove that it is a tautology by proving that its negation is contradictory in every structure, because this relies on classical FOL semantics, which is unnecessary here. It is always better to rely on only the relevant aspects.
Here is a fully rigorous proof: Take any FOL structure $M$ over a language that has constant-symbol $a$. Then $M ⊨ a=a$, so $M[x:=a^M] ⊨ x=a$ and hence $M ⊨ ∃x ( x=a )$. Therefore, $∃x ( x=a )$ is an FOL tautology.
The other case is when $a$ is a variable. In this case, since $a$ is free in $∃x ( x=a )$, it is semantically interpreted as being universally quantified over the domain. That is, it is equivalent to $∀a ∃x ( x=a )$. That is how one should understand it, and indeed a deductive system for FOL that permits free variables in its theorems would allow you to prove the equivalence. But the completely semantic proof does not need to use that fact: Take any FOL structure $M$. For every element $c$ in $M$, we have $M[a:=c] ⊨ a=a$, so $M[a:=c][x:=c] ⊨ x=a$ and hence $M[a:=c] ⊨ ∃x ( x=a )$. Therefore, $∃x ( x=a )$ is an FOL tautology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3813639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many $3$-digit numbers are there whose sum of digits is $10$? I just counted them all in blocks of $100$ numbers. In $100-200$ there are $10$ numbers ($109,118,127, \ldots,181,190$), then $9$ numbers in $200-300$, $8$ in $300-400$, and so on. This way I get $10+9+\cdots+3+2=54$, but the answer given in India's JEE Mains exam is $55$. Am I missing one, or is the given answer wrong and I'm right? Please help.
| This problem is equivalent to partitioning ten items to three boxes.
For example $433$ whose sum of digits is 10. $433$ can be represented as partition of items as below:
$$\circ \circ \circ \circ \mid \circ \circ \circ \mid \circ \circ \circ$$
More examples:
$109$
$$\circ \mid \mid \circ \circ \circ \circ \circ \circ \circ \circ \circ$$
$019$, this is not a 3-digit number.
$$\mid \circ \mid \circ \circ \circ \circ \circ \circ \circ \circ \circ$$
$190$
$$ \circ \mid \circ \circ \circ \circ \circ \circ \circ \circ \circ \mid$$
Now we calculate the Number of three digit number with following three mutually exclusive cases.
Case I: Zero is not one of the digits.
Here we need to choose partition from the $9$ spaces between the items.
This is $= \binom{9}{2} = 36$
Case II: The middle digit is zero and the last digit is non-zero. (Eg: 109)
Both the partitions occupy the same position.
Number of choices = $\binom{9}{1} = 9$
Case III: The last digit is zero but other digits are non-zero. (Eg: 190)
One of the partitions is at the end and other partition can be chosen from one of the 9 places. The number of choices = $\binom {9}{1} = 9$
Therefore total number of 3-digit numbers whose sum of digits is $10 = 36 + 9 + 9 = 54$.
NOTE: $100, 200, \cdots, 900$ digits with two zeros are not included in any of these cases. All these numbers have a digit sum less than 10.
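A brute-force enumeration confirms $54$ (so the published answer of $55$ appears to be incorrect):

```python
# count three-digit numbers whose digits sum to 10
count = sum(1 for n in range(100, 1000) if sum(map(int, str(n))) == 10)
print(count)  # 54
```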
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3813775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Consequence of uniform continuity $f:\mathbb R \rightarrow \mathbb R$ is a uniformly continuous function such that $f(n) \rightarrow 0$ as $n \rightarrow \infty$ where $n \in \mathbb N$. I have to find a counterexample to show that $f(x)$ may not converge to $0$ as $x \rightarrow \infty$. If we assume that $f(x^2)$ is uniformly continuous, then $f(x) \rightarrow 0$ as $x \rightarrow \infty$.
I think $f(x) = \sin2\pi x$ works for the first part. What can I do about the second part?
| For easier notation let $g(x)=f(x^2)$. By assumption we know that $g(\sqrt{n})\to 0$ when $n\to\infty$. Now, we have:
$\sqrt{n+1}-\sqrt{n}=\frac{1}{\sqrt{n+1}+\sqrt{n}}\to 0$
So now let $\epsilon>0$. Since $g$ is uniformly continuous there is some $\delta>0$ such that $|x-y|<\delta$ implies $|g(x)-g(y)|<\frac{\epsilon}{2}$. Also, since $\sqrt{n+1}-\sqrt{n}\to 0$ there is some $n_1\in\mathbb{N}$ such that $n\geq n_1$ implies $\sqrt{n+1}-\sqrt{n}<\delta$. Finally, since $g(\sqrt{n})\to 0$ there is $n_2\in\mathbb{N}$ such that $n\geq n_2$ implies $|g(\sqrt{n})|<\frac{\epsilon}{2}$.
So now let $n_0=\max\{n_1,n_2\}$. Assume $x>\sqrt{n_0}$ is any real number. Let $n$ be the largest natural number such that $\sqrt{n}\leq x$. Then $n\geq n_0$ and $\sqrt{n}\leq x<\sqrt{n+1}$. Since $n\geq n_0\geq n_1$ we have:
$-\delta<0<x-\sqrt{n}<\sqrt{n+1}-\sqrt{n}<\delta$.
In other words $|x-\sqrt{n}|<\delta$. Hence $|g(x)-g(\sqrt{n})|<\frac{\epsilon}{2}$. But also, $n\geq n_0\geq n_2$ and so $|g(\sqrt{n})|<\frac{\epsilon}{2}$. It follows from the triangle inequality that $|f(x^2)|=|g(x)|<\epsilon$.
Alright, so we proved that there is a positive number $M:=\sqrt{n_0}$ such that $x>M$ implies $|f(x^2)|<\epsilon$. It follows that if $x>M^2$ then $|f(x)|=|f((\sqrt{x})^2)|<\epsilon$. So indeed $f(x)\to 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3813925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
T-annihilators and whether it kills the set Question : $V=F[t]v$ is T-cyclic, $g(t)$ is a polynomial. Show that if $g(T)v=0$, then $g(T)=0$.
Any ideas how to solve this?
plus, does this statement implies T-annihilators of v are exactly same as Ann(T)? I don't really get the differences between two concepts.
Firstly, note $V$ being $T$-cyclic means $V = \text{span}\{v, T(v), T^2(v), \cdots \}$, and that $\{v, T(v), T^2(v), \cdots \}$ form a basis for $V$. The moment you are given $g(T)v=0$, you can conclude that $V$ must be finite dimensional, since you may write $g(T)v= \sum_i g_i T^i v = 0$, so you have finitely many basis vectors. (Here, $g_i$'s are coefficients of the polynomial $g$).
Now, if $g(T)v = 0$, then $g(T)(Tv) = 0$, $g(T)(T^2v) = 0$, and repeat the argument for each of the finitely many basis vectors. The (linear) operator $g(T)$ maps each basis element $(T^iv)$ to $0$, so it is the zero operator on $V$.
$Ann(T)= \{f(x) \mid f(T) = 0\}$ (set of all polynomials which, when $T$ is "plugged in", gives the zero operator).
$T$-annihilators of $v$ = $\{ f(x) \mid f(T)v = 0 \}$ (set of all polynomials which, when $T$ is "plugged in" and acts on $v$, gives the zero vector).
Normally, you can only conclude that $Ann(T) \subseteq $ $T$-annihilators of $v$. However, in this case, when $V$ is cyclic and you are specifically looking at the $T$-annihilators of $v$, the cyclic vector, every $T$-annihilator of $v$ is also an annihilator of $T$ (by the result above), so you're right in saying the two sets are equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3814170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can this monstrous expression be simplified? $\sqrt{\left( r_d \cos\left(\frac{-4(C - X) \csc(2α)}{Z}\right) + r_p \left(\frac{2C \tan(α) - 4 (C - X) \csc(2α)}{Z}\right) \sin\left(\frac{-4 (C - X) \csc(2α)}{Z}\right) - m X \tan(α) \sin\left(\frac{-4 (C - X) \csc(2α)}{Z}\right)\right)^2 + \left( r_d \sin\left(\frac{-4(C - X) \csc(2α)}{Z}\right) - r_p \left(\frac{2C \tan(α) - 4 (C - X) \csc(2α)}{Z}\right) \cos\left(\frac{-4 (C - X) \csc(2α)}{Z}\right) + m X \tan(α) \cos\left(\frac{-4 (C - X) \csc(2α)}{Z}\right)\right)^2}$
*
*$r_d = r_p + m X - m C$
*$r_p = \frac {m Z}{2}$
*$m$ is positive
*$α$ is between $0$ and $\frac π 4$
*$Z$ is a positive integer
*$X$ is between -1 and +1
*$C$ is between 1 and 1.5
I've been staring at this until I'm cross-eyed, but I can't find any way to break it down. It's not for a class, so I don't have any resources to call upon.
Context:
I'm trying to find the radius of a point on the curve defined by the parametric expressions
$$x = r_d \cos(γ) + r_p \left(\frac{2C \tan(α)} Z + γ\right) \sin(γ) - m X \tan(α) \sin(γ),$$
$$y = r_d \sin(γ) - r_p \left(\frac{2C \tan(α)} Z + γ\right) \cos(γ) + m X \tan(α) \cos(γ)$$
Specifically, the point at $γ = \frac{-4(C - X) \csc(2α)}{Z}$. My instinct for solving that was to plug in the value and use the Pythagorean theorem, which created the expression that is the subject of this question. If there's a better way to find this radius, I would love to try it out.
Update: Looking to simplify the base expressions, I can expand the instances of $r_d$ and $r_p$ and then factor out the $m$ from all three terms, and I can factor out the $\sin$ and $\cos$ from the last two terms, but I can't see how to use that to any advantage...
| It's often much better to simplify as soon as possible. In this case, the parameterized $x$ and $y$ values at the specified value $\gamma_0 := -4(C-X)\csc(2\alpha)/Z$ reduces fairly nicely:
$$\begin{align}
x &=\tfrac12 mZ \left(\;\cos\gamma_0 + \gamma_0 \cos\alpha \sin(\alpha+\gamma_0)\;\right) \tag1\\[4pt]
y &=\tfrac12 mZ \left(\;\sin\gamma_0 - \gamma_0 \cos\alpha \cos(\alpha+\gamma_0)\;\right) \tag2
\end{align}$$
From there, we easily get
$$x^2+y^2 = \tfrac14m^2Z^2\left(\;1 + \gamma_0 \sin 2 \alpha + \gamma_0^2\cos^2\alpha\;\right) \tag3$$
(Conveniently, there are no $\gamma$s inside the trig functions.)
If you like, you can expand $1=\sin^2\alpha+\cos^2\alpha$ and $\sin2\alpha=2\sin\alpha\cos\alpha$, regroup, and write
$$x^2+y^2 =
\tfrac14m^2Z^2\left(\;\left(\gamma_0\cos\alpha+\sin\alpha\right)^2+\cos^2\alpha\;\right) \tag4$$
At this point, expanding $\gamma_0$ explicitly to $-4(C-X)\csc(2\alpha)/Z$ doesn't seem to give anything particularly pretty, so I'll leave that to the reader. $\square$
As a bit of a prequel, just substituting $r_d\to r_p+mX-mC$ and $r_p\to mZ/2$ into OP's parametric equations gives the simplification
$$\begin{align}
x &= \tfrac12 mZ \left(\;
\cos\gamma + \gamma \sin\gamma +\gamma_0 \sin\alpha\cos(\alpha+\gamma)
\;\right) \tag{0.1}\\[4pt]
y &= \tfrac12 mZ \left(\;
\sin\gamma - \gamma \cos\gamma +\gamma_0 \sin\alpha \sin(\alpha+\gamma)
\;\right) \tag{0.2}
\end{align}$$
with $\gamma_0$ as above. From these, we get
$$x^2 + y^2 = \tfrac14 m^2Z^2 \left(\;
1 + \gamma_0 \sin2\alpha + \gamma^2\cos^2\alpha + (\gamma-\gamma_0)^2 \sin^2\alpha \;\right) \tag{0.3}$$
When $\gamma=\gamma_0$, we have that $(0.1)$, $(0.2)$, $(0.3)$ reduce to $(1)$, $(2)$, $(3)$.
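As a numerical sanity check (not part of the original derivation), one can compare the original parametric expression at $\gamma=\gamma_0$ with the simplified radius formula $(4)$, for sample parameter values satisfying the stated constraints; a Python sketch:

```python
from math import sin, cos, tan

# Sample values satisfying the constraints:
# m > 0, Z a positive integer, 0 < alpha < pi/4, -1 < X < 1, 1 < C < 1.5.
m, Z, alpha, C, X = 1.5, 17, 0.5, 1.2, 0.3
r_p = m * Z / 2
r_d = r_p + m * X - m * C
g0 = -4 * (C - X) / (Z * sin(2 * alpha))   # gamma_0 = -4(C-X)csc(2a)/Z

# Original parametric point evaluated at gamma = gamma_0
coef = r_p * (2 * C * tan(alpha) / Z + g0) - m * X * tan(alpha)
x = r_d * cos(g0) + coef * sin(g0)
y = r_d * sin(g0) - coef * cos(g0)

# Simplified radius-squared, formula (4)
r2 = (m * Z / 2) ** 2 * ((g0 * cos(alpha) + sin(alpha)) ** 2 + cos(alpha) ** 2)

assert abs((x * x + y * y) - r2) < 1e-8
```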
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3814259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$\int \frac{2x^7+3x^2}{x^{10}-2x^{5}+1}dx$
$$\int \frac{2x^7+3x^2}{x^{10}-2x^{5}+1}dx$$
I have no idea how to approach this problem.
I just know that I have to express the top expression in some form of a derivative of the expression in the denominator. But I just couldn't figure out how.
Any help would be appreciated.
| Rewrite the expression as
$$\int\frac{2x^7+3x^2}{(x^5-1)^2}dx = \int \frac{2x+\frac{3}{x^4}}{\left(x^2-\frac{1}{x^3}\right)^2}dx = \frac{-1}{x^2-\frac{1}{x^3}}+C $$
by dividing top and bottom by $x^6$
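One can sanity-check the antiderivative numerically: the central difference quotient of $F(x)=-1/(x^2-x^{-3})$ should match the integrand away from the singularities. A quick Python sketch (not part of the original answer):

```python
# Check that F'(x) matches the integrand at sample points x > 1.
def F(x):
    return -1.0 / (x**2 - x**-3)

def integrand(x):
    return (2 * x**7 + 3 * x**2) / (x**10 - 2 * x**5 + 1)

for x in (1.5, 2.0, 3.0):
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(deriv - integrand(x)) < 1e-6
```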
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3814410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding solution of $y' = \frac{(y^2 - 4yt + 6t^2)}{t^2}$ with initial condition $y(2) = 4$ Concerning the ordinary differential equation
$$y' = \frac{(y^2 - 4yt + 6t^2)}{t^2}$$
with initial condition $y(2) = 4$.
My textbook gives the solution as $y = 2t$, and so does MATLAB.
However, I only see the following:
\begin{align}
v &= \frac{y}{t}\\
y' &= v^2 - 4v + 6\\
tv' + v &= v^2 - 4v + 6\\
tv'&= v^2 - 5v + 6\\
\end{align}
\begin{align}
\frac{dv}{(v-3)(v-2)} &= \frac{dt}{t}\\
\frac{dv}{(v-3)} - \frac{dv}{(v-2)} &= \frac{dt}{t}\\
\ln(|v-3|) - \ln(|v-2|) &= \ln(|t|) + c\\
\frac{v-3}{v-2} &= te^{c}\\
\end{align}
which then reduces to
$$v = \frac{3 - 2te^{c}}{1-te^{c}}$$
and
$$y = \frac{t(3 - 2te^{c})}{1-te^{c}}$$
There must be at least some logic to this, because several online calculators (for example WolframAlpha) give the same result. But nevertheless it seems wrong, as the initial condition implies
$4 - 8e^{c} = 6 - 8e^{c}$?
Can someone please explain to me how to arrive at $y=2t$, and where I would have gone wrong?
| You've shown
$$ tv' = v^2 - 5v + 6 = (v-2)(v-3).$$
and since $v = y/t$, the initial condition gives $v(2) =2$. Thus
$$\frac{dv}{(v-3)(v-2)} = \frac{dt}{t}$$
cannot be carried out here, since $v-2$ vanishes at the initial condition (you would be dividing by zero).
This is exactly the kind of situation in which separation of variables can fail: indeed, for a simple problem like
$$ y' = y, \ \ \ y(0) = 0,$$
you cannot use separation of variables to find the solution (which is $y(t) = 0$).
To "find" the solution $y = 2t$, you simply observe that $v(t) = 2$ is a solution, and then by the uniqueness theorem for ODEs, it is THE solution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3814506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Convergence of fixed points as a consequence of pointwise convergence? Setup: Let $p := p(n)$ be such that $\lim_{n \to \infty} pn = \lambda > 0$, and suppose $f_n(t) := (1 + (t-1)p)^n$, with domain $[0, 1]$.
Goal: I am trying to show that if $\theta_n$ is a fixed point of $f_n$:
$$
\theta_n = f_n(\theta_n), \quad \text{for all}~n,
$$
then $\theta_n$ converges and its limit $\theta$ solves $e^{\lambda(\theta - 1)} = \theta$.
What I tried: I can see that $\theta \mapsto e^{\lambda(\theta - 1)}$ is an increasing function of $\theta$ and so the fixed point is unique. Moreover, for $\epsilon > 0$ and $n$ large enough, we can provide the estimate
$$
(1 - \epsilon)\lambda \leq pn \leq (1 + \epsilon)\lambda.
$$
Therefore, for $n$ sufficiently large: we have the sandwich relation
$$
\left(1 + \lambda\frac{(t-1) - \epsilon}{n} \right)^n \leq f_n(t) \leq
\left(1 + \lambda\frac{(t-1) + \epsilon}{n} \right)^n, \quad \mbox{for}~t \in [0, 1].
$$
And so taking limits, $e^{\lambda((t-1) - \epsilon)} \leq \lim_n f_n(t) \leq e^{\lambda((t - 1) + \epsilon)}$, and by taking $\epsilon\downarrow 0$, we see that $f_n \to f$ pointwise on $[0, 1]$, where $f(t) := e^{\lambda (t- 1)}$.
Intuitively, it seems that since $\lambda > 0$, eventually $f_n$ have unique fixed points and so since $f_n \to f$, it seems that $\theta_n \to \theta$ should follow, but I do not know how to show this.
Comment regarding uniform convergence. Suppose that $f_n \to f$ uniformly on $[0, 1]$. Then let $\epsilon > 0$ and define
$$
\delta(\epsilon) := \inf_{t : |\theta - t| \geq \epsilon} |t - f(t)|.
$$
Since $f$ is continuous and $\theta$ is the unique fixed point of $f$, it follows that $\delta(\epsilon) > 0$. Now let $n$ be large enough such that
$\|f_n - f\|_{\infty} < \delta(\epsilon)$. Note that $|\theta_n - \theta| <\epsilon$. (If not, then:
$$
\delta(\epsilon) \leq
|f(\theta_n) - \theta_n| \leq |f_n(\theta_n) - f(\theta_n)| + |f_n(\theta_n) - \theta_n| <
\delta(\epsilon) + |f_n(\theta_n) - \theta_n|.
$$
So cancelling terms, we get a contradiction to the fact $\theta_n$ is a fixed point of $f_n$.) Putting the pieces together, we see that for each $\epsilon > 0$, we have $|\theta_n - \theta| <\epsilon$, for sufficiently large $n$, whence $\theta_n \to \theta$.
Consequence: it is sufficient to show that $f_n \to f$ uniformly.
| Your sandwich argument estimate is already essentially uniform. Namely,
$$e^{\lambda((t-1)+\epsilon)} - e^{\lambda((t-1)-\epsilon)} = e^{\lambda(t-1)} (e^{\lambda \epsilon} - e^{-\lambda \epsilon}).$$
As $\epsilon \to 0$, the right-hand-side becomes $1-1=0$.
Edit: Some more details. There's probably a cleaner way to arrange it.
Let
\begin{align*}
L_{\epsilon, n}(t) &:= (1+\lambda \frac{(t-1)-\epsilon}{n})^n, \\
U_{\epsilon, n}(t) &:= (1+\lambda \frac{(t-1)+\epsilon}{n})^n, \\
L_\epsilon(t) &:= \exp(\lambda((t-1)-\epsilon)), \\
U_\epsilon(t) &:= \exp(\lambda((t-1)+\epsilon)).
\end{align*}
For each fixed $\epsilon>0$, we have $L_{\epsilon, n}(t) \to L_\epsilon(t)$ uniformly as $n \to \infty$, since
\begin{align*}
\log\left(1+\lambda\frac{(t-1)-\epsilon}{n}\right)^n
&= n\log\left(1+\lambda\frac{(t-1)-\epsilon}{n}\right) \\
&= n\left(\lambda\frac{(t-1)-\epsilon}{n} + O\left(\left(\lambda\frac{(t-1)-\epsilon}{n}\right)^2\right)\right) \\
&= \lambda((t-1)-\epsilon) + O(1/n).
\end{align*}
Likewise $U_{\epsilon, n}(t) \to U_\epsilon(t)$ uniformly as $n \to \infty$. As noted, $U_\epsilon(t) - L_\epsilon(t) \to 0$ uniformly as $\epsilon \to 0$.
We have
\begin{align*}
|f_n - f|
&\leq |f_n - L_{\epsilon, n}| + |L_{\epsilon, n} - L_\epsilon| + |L_\epsilon - f| \\
&\leq |U_{\epsilon, n} - L_{\epsilon, n}| + |L_{\epsilon, n} - L_\epsilon| + |L_\epsilon - U_\epsilon| \\
&\leq |U_{\epsilon, n} - U_\epsilon| + |U_\epsilon - L_\epsilon| + |L_\epsilon - L_{\epsilon, n}| + |L_{\epsilon, n} - L_\epsilon| + |L_\epsilon - U_\epsilon|.
\end{align*}
For each $\delta>0$, there is some $\epsilon>0$ such that $|U_\epsilon - L_\epsilon| < \delta$, and for that $\epsilon$, for large enough $n$ we have $|L_{\epsilon, n} - L_\epsilon|, |U_{\epsilon, n} - U_\epsilon| < \delta$. Hence $f_n \to f$ uniformly on $[0, 1]$.
This last sequence of inequalities feels redundant, but I couldn't quickly find a cleaner way through, and this works.
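For illustration (taking $p=\lambda/n$ exactly, an assumption made only for this demo), the uniform convergence $f_n\to f$ on $[0,1]$ can be observed numerically by estimating the sup-norm gap on a grid; a short Python sketch:

```python
from math import exp

lam = 2.0  # sample value of lambda

def sup_diff(n, grid=1000):
    """Estimate sup_{t in [0,1]} |f_n(t) - f(t)| on a uniform grid."""
    worst = 0.0
    for i in range(grid + 1):
        t = i / grid
        fn = (1 + (t - 1) * lam / n) ** n    # f_n with p = lam/n
        f = exp(lam * (t - 1))
        worst = max(worst, abs(fn - f))
    return worst

d10, d1000 = sup_diff(10), sup_diff(1000)
assert d1000 < d10        # the sup-norm gap shrinks as n grows
assert d1000 < 1e-2
```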
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3814610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the correct expression about Units digit? (revised) I want to express it accurately.
For example, consider $1872_{(9)}$. I would like to refer $2$.
There are many expression such as 'units of digit of $1872_{(9)}$ is 2', 'unit digit of $1872_{(9)}$ is 2' so on.
What is the correct expression?
Moreover, consider $54_{(7)}$ and $32_{(7)}$. How can I refer to $4$ and $2$? In other words, what is the exact grammar referring to several unit digits?
Thanks.
-----------------------------------addition-----------------------------------------
For $1872_{(9)}$, what is the correct expression when I refer 2 in number theory sense?
(1) the units of digit of $1872_{(9)}$ is 2
(2) the unit of digit of $1872_{(9)}$ is 2
(3) the unit digit of $1872_{(9)}$ is 2
(4) the units digit of $1872_{(9)}$ is 2
Taken from computer science, usually the $2$ in $432$ is referred to as the Least Significant digit, or LSd, and the $4$ would be the Most Significant digit, or MSd.
In any base, the right-most digit is the one with the least weight, thus is the least significant.
This also comes from the fact that a number in base $b$ can be written as (taking $8762_b$ for instance)
$$8b^3+7b^2+6b^1+2b^0$$
the exponent of $b$ gives the order of the digits, then you can say
digit $0$ is $2$, digit $1$ is $6$ etc...
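The indexing convention above is easy to express in code; a small Python sketch (the helper `digits` is just illustrative, not a standard function):

```python
def digits(n, b):
    """Digits of n in base b, least significant (digit 0) first."""
    ds = []
    while n:
        n, r = divmod(n, b)
        ds.append(r)
    return ds or [0]

# For 8762 in base 10: digit 0 is 2, digit 1 is 6, digit 2 is 7, digit 3 is 8.
assert digits(8762, 10) == [2, 6, 7, 8]
# The units digit (digit 0) of 1872_(9) = 1*9^3 + 8*9^2 + 7*9 + 2 is 2.
assert digits(1 * 9**3 + 8 * 9**2 + 7 * 9 + 2, 9) == [2, 7, 8, 1]
```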
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3814890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Radius of the circumscribed circle of an isosceles triangle An isosceles triangle $ABC$ is given $(AC=BC).$ The perimeter of $\triangle ABC$ is $2p$, and the base angle is $\alpha.$ Find the radius of the circumscribed circle $R$.
$$R=\frac{p}{2\sin\alpha(1+\cos\alpha)}$$
Let $CD=2R.$ The triangle $BCD$ is a right triangle and we have $\angle BAC=\angle ABC=\angle BDC=\alpha.$
I am not sure how to approach the problem. It's really hard for me to solve problems like this. Can you give me a hint and some thoughts on the problem?
| Another simple approach. Let $x=AC=BC$. Then
$$2p=AC+BC+2AH\\=2x+2x\cos\alpha$$
and
$$R=\frac 12 CD=\frac 12 \frac{BC}{ \sin \alpha} = \frac{x}{2 \sin \alpha}$$
Now you can complete the solution by a simple substitution.
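The final substitution can be sanity-checked numerically: with $2p = 2x(1+\cos\alpha)$, the two expressions for $R$ must agree. A small Python sketch (illustration only):

```python
from math import sin, cos, pi

# Check R = x/(2 sin a) against R = p/(2 sin a (1 + cos a)).
for a in (pi/6, pi/4, pi/3):
    x = 3.0                      # arbitrary leg length AC = BC
    p = x * (1 + cos(a))         # from the perimeter: 2p = 2x(1 + cos a)
    R1 = x / (2 * sin(a))
    R2 = p / (2 * sin(a) * (1 + cos(a)))
    assert abs(R1 - R2) < 1e-12
```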
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
$\prod_{n=1}^\infty \frac{2n+1}{2n}$ diverges This is the Problem from the textbook "Intro to analysis" by Arthur Mattuck
Prove that $a_k := \prod_{n=1}^k \frac{2n+1}{2n}$ is strictly increasing and not bounded above.
Strict increasing is trivial: $\frac{a_{k+1}}{a_k} > 1$. But I am stuck at showing $a_k$ is not bounded above. If it is bounded above, then it must have a limit, so I think it suffices to show it diverges. But I also don't know how to show it's divergent.
Could anyone give me some hint on this?
| It is greater than $b_k=\prod_{n=1}^k\frac{2n+2}{2n+1}$ and $a_kb_k=k+1$ because the products telescope.
So $a_k\gt\sqrt{k+1}$
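Both the telescoping identity $a_k b_k = k+1$ and the resulting bound $a_k>\sqrt{k+1}$ are easy to confirm numerically; a quick Python sketch (not part of the proof):

```python
from math import sqrt

# Check a_k * b_k = k + 1 (telescoping) and a_k > sqrt(k+1) for k up to 1000.
a, b = 1.0, 1.0
for n in range(1, 1001):
    a *= (2*n + 1) / (2*n)
    b *= (2*n + 2) / (2*n + 1)
    assert abs(a * b - (n + 1)) < 1e-9 * (n + 1)
    assert a > sqrt(n + 1)
```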
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
About a subset of $\mathbb Q[x]$ of polynomials $f$ such that $f(n)=f(-n)$ for every $n$ in $\mathbb N$ Let $A=\{f \in \mathbb Q[x] : f(n)=f(-n)$, for every $n \in \mathbb N\}$.
Show that
*
*$A$ is a subring of $\mathbb Q[x]$.
*$A$ is a Euclidean Domain.
*For every $f \in A$ we have $f(r)=f(-r)$, for every $r \in \mathbb Q$.
I think I managed to prove 1 and 2, this is how I proceeded:
1
We have that $A \neq \emptyset$, since the constant polynomial $1$ is such that $1(n)=1=1(-n)$, for every $n \in \mathbb N$.
For every $f$, $g \in A$, for every $n \in \mathbb N$ we have:
$(f-g)(n)=f(n)-g(n)=f(-n)-g(-n)=(f-g)(-n)$
and
$(fg)(n)=f(n)g(n)=f(-n)g(-n)=(fg)(-n)$
These mean that both $f-g$ and $fg$ are in $A$. This proves that $A$ is a subring of $\mathbb Q[x]$.
2
For every $f \in A$, we have:
$f=a_nx^n+\cdots+a_0$
where, for each $i \in \{0,\ldots,n\}$, $a_i \in \mathbb Q$.
We have that, for each $q \in \mathbb Q$ seen as a polynomial in $\mathbb Q[x]$, $q(n)=q=q(-n)$, for every $n \in \mathbb N$. This implies that $\mathbb Q \subseteq A$.
If every rational number is in $A$, this implies that every polynomial $f$ in $A$ has a leading coefficient $a_n$ that is a unit in $A$.
I consider the function $deg : A \to \mathbb N$, which assigns to every $f \in A$ its degree. This function serves as a Euclidean evaluation, that means it satisfies:
For every $f$, $g \in A$, with $g \neq 0$, there exist $q$, $r \in A$ such that:
*
*$f=gq+r$
*$r=0$ or $deg(r)<deg(g)$
This evaluation, together with the fact that every polynomial in $A$ has a unit as leading coefficient, ensure that $A$ is a Euclidean domain.
Now, my questions are:
*
*Are my solutions correct?
*Can you help me with point 3? There should be some way to exploit the fact that every pair of polynomials can go through Euclidean division but I can't see where to begin.
| If $f(x)\in A$, let $g(x)=f(x)-f(-x)$. Then $(\forall n\in\Bbb N):g(n)=0$. Since $g(x)$ has infinitely many zeros, it is the null polynomial. Therefore, $(\forall r\in\Bbb Q):f(r)=f(-r)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Example of a cancellative power semigroup Let $ \, S \, $ be a semigroup such that $ \ |S| \geq 2 \ $. Its power semigroup is the power set $ \, \wp(S) \, $ together with the binary operation
$$XY = \{ xy \in S : x \in X, \ y \in Y \} \ \ .$$
I am interested in the semigroup $ \ Q_S = \wp(S) \setminus \{ \varnothing \} \ $, which I also call the power semigroup of $ \, S \, $.
A semigroup $ \, S \, $ is said to be left cancellative if, and only if, for all $ \ x,y,z \in S \ $, if $ \ xy=xz \ $, then $ \ y=z \ $.
A semigroup $ \, S \, $ is said to be right cancellative if, and only if, for all $ \ x,y,z \in S \ $, if $ \ yx=zx \ $, then $ \ y=z \ $.
I would like to see an example of a semigroup $ \, S \, $ such that $ \, Q_S \, $ is left cancellative and right cancellative.
I tested the most immediate and the most standard examples of semigroups, but none of them resulted in such a desired example. I don't know where to look anymore.
| Let $a \in S$ and let $a^+ = \{a^n \mid n > 0\}$ be the subsemigroup of $S$ generated by $a$. Since $aa^+ = a^+a^+$, one gets $a = a^+$ by right cancellation. Thus $a = a^2$ and $S$ is an idempotent semigroup. Furthermore, $aS = aaS$ implies $S = aS$ by left cancellation.
Consequently, $SS = \bigcup_{x \in S} xS = S = aS$, whence $S = a$ by right cancellation. Thus $S$ is the trivial semigroup.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Sets by Inclusion Clarification \begin{align}
A &= \lbrace 70, 210, 280\rbrace\\
B &= \mathbb{Z}\\
C &= \lbrace n\in\mathbb{N}\,|\, n=7 m \,\mbox{for some} \, m\in\mathbb{Q} \rbrace\\
D &= \lbrace n\in\mathbb{N}\,|\, n=35 m \,\mbox{for some} \, m\in\mathbb{N} \rbrace \\
E &= \lbrace n\in\mathbb{N}\,|\, n=7 m \,\mbox{for some} \, m\in\mathbb{N} \rbrace
\end{align}
$$ Letter ⊆ Letter ⊆ Letter ⊆ Letter ⊆ Letter $$
I'm starting discrete mathematics and encountered this problem where we are tasked with ordering the sets by inclusion. I understand the premise, but I think that my reasoning is wrong. Upon initial glance, I would sort it as $A ⊆ D ⊆ E ⊆ B ⊆ C$, as A only has the three elements, the elements of D can be made by E, but then things start to get confusing with B and C. B stipulates all integers, which I understand, but C would be able to "create" elements that are not integers by virtue of $m \in \Bbb{Q}$ allowing the use of all rational numbers, as well as all of the integers that could exist in $\Bbb{N}$. However, would this even matter as we are adding them to the set $n\in\Bbb{N}$, which is only natural numbers? Any help would be appreciated. I've also included a picture for clarification.
A screenshot of the problem
| $A$ by inspection and $C,D,E$ by definition are subsets of $\Bbb N$, and $\Bbb Z$ contains all of $\Bbb N$.
$C$ is in fact just equal to $\Bbb N$ as $n \in \Bbb N$ means $n = 7\cdot \frac{n}{7}$, so $7$ times a rational.
$A$ consists of multiples of $35$, so $A \subseteq D$.
A multiple of $35$ is always a multiple of $7$ of course, so $D \subseteq E$.
So in all we get $$A \subseteq D \subseteq E \subseteq C \subseteq B$$
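The chain can be spot-checked on an initial segment of $\Bbb N$; a small Python sketch (using a truncation bound `N`, an arbitrary choice for the demo):

```python
# Check the chain A ⊆ D ⊆ E ⊆ C on natural numbers below N; C ⊆ B = Z trivially.
N = 1000
A = {70, 210, 280}
D = {n for n in range(1, N) if n % 35 == 0}   # n = 35m for natural m
E = {n for n in range(1, N) if n % 7 == 0}    # n = 7m for natural m
C = set(range(1, N))                          # n = 7*(n/7) with n/7 rational
assert A <= D <= E <= C
```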
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Controlling $C^1$ norm by Hölder norm and $C^0$ norm I've been trying to do this problem for a little while and have not made much progress. I want to show that for any $\epsilon>0$, there exists some $C_{\epsilon}>0$ such that
$$
\Vert u'\Vert_{L^{\infty}} \le \epsilon\sup_{x\neq y}\frac{|u'(x)-u'(y)|}{|x-y|^{\alpha}}+C_{\epsilon}\Vert u\Vert_{L^{\infty}}
$$
holds for all $u\in C^{1,\alpha}([0,1])$ where $0<\alpha<1$. I've looked at a few special cases, such as when the function is linear (in which case you can take $C_{\epsilon}=2$) or when the function is highly oscillatory, such as $\sin(nx)$ in which case you can take $C_{\epsilon}=1/\epsilon$, but I haven't had much success in the general case.
| Suppose
$$ |u'(x) - u'(y)| \le \frac1\epsilon |x-y|^\alpha \tag 1$$
Suppose $u'(x) \ge 1$. Then from (1), for $|y-x| \le (\frac12 \epsilon)^{1/\alpha}$, we have that $|u'(x) - u'(y)| \le \frac12$, and hence $u'(y) \ge \frac12$.
Hence if $|x-y| = (\frac12 \epsilon)^{1/\alpha}$, then
$$ |u(x) - u(y)| = \left|\int_x^y u'(z) \, dz\right| \ge \tfrac12 (\tfrac12 \epsilon)^{1/\alpha} ,$$
thus ${\|u\|}_\infty \ge \frac14 (\frac12 \epsilon)^{1/\alpha}$. Similarly if $u'(x) \le -1$.
Thus if we set $C_\epsilon = 4 (2/\epsilon)^{1/\alpha}$, then
$$ \epsilon\sup_{x\neq y}\frac{|u'(x)-u'(y)|}{|x-y|^{\alpha}}+C_{\epsilon}{\Vert u\Vert}_{L^{\infty}} \le 1 $$
implies that
$$
{\Vert u'\Vert}_{L^{\infty}} \le 1 .
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Implicit Differentiation of $x+y= \arctan(y)$? I'm trying to verify that $x+y = \arctan(y)$ satisfies this differential equation: $1+(y^2)+(y^2)y'= 0$. To do so, I tried to differentiate $x+y = \arctan(y)$ to get $y'$, but only got so far:
$$1 = \left(\frac{1}{1+y^2}y' -1 \right) y'.$$
Now I'm not sure how I can isolate $y'$, or whether I am even taking the right first steps to solving the problem. Any hints/help about what direction I should take?
| You had differentiated correctly, but made an error when you re-arranged your equation:
$$ [ \ x \ + \ y \ ] \ ' \ \ = \ \ [ \ \arctan(y) \ ] \ ' \ \ \Rightarrow \ \ 1 \ + \ y' \ \ = \ \ \frac{1}{1 \ + \ y^2} · y' $$
$$ \Rightarrow \ \ 1 \ \ = \ \ \frac{1}{1 \ + \ y^2} · y' \ - \ y' \ \ \Rightarrow \ \ 1 \ \ = \ \ \left(\frac{1}{1+y^2} \ - \ 1 \right) · y' \ \ . $$
rather than $ \ \left(\frac{1}{1+y^2}y' -1 \right) y' \ \ . $ You will indeed verify the solution of the differential equation once you have the correct expression:
$$ 1 \ \ = \ \ \left(\frac{1}{1+y^2} \ - \ 1 \right) · y' \ \ \Rightarrow \ \ 1 \ - \ \left(\frac{1}{1+y^2} \ - \ 1 \right) · y' \ \ = \ \ 1 \ - \ \left(\frac{-y^2}{1+y^2} \right) · y' \ \ = \ \ 0 $$
[since $ \ 1 + y^2 \ \neq \ 0 \ \ , $ it is "safe" to multiply the equation through by it]
$$ \Rightarrow \ \ 1·(1+y^2) \ - \ (-y^2) · y' \ \ = \ \ 1 \ + \ y^2 \ + \ (y^2) · y'\ \ = \ \ 0 \ \ . $$
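A quick numerical cross-check (not part of the verification above): solving $1 = \left(\frac{1}{1+y^2} - 1\right) y'$ for $y'$ gives $y' = -(1+y^2)/y^2$ for $y\neq 0$, and this can be compared against a finite-difference slope along the curve $x = \arctan(y) - y$. A Python sketch:

```python
from math import atan

# The curve x + y = arctan(y) gives x as a function of y.
def x_of_y(y):
    return atan(y) - y

for y0 in (0.5, 1.0, 2.0):
    dy = 1e-6
    dx = x_of_y(y0 + dy) - x_of_y(y0 - dy)
    yprime = 2 * dy / dx                      # dy/dx along the curve
    closed = -(1 + y0**2) / y0**2             # y' solved from the equation
    assert abs(yprime - closed) < 1e-4
    # residual of the differential equation 1 + y^2 + y^2 * y' = 0
    assert abs(1 + y0**2 + y0**2 * yprime) < 1e-4
```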
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Tu's Manifolds Problem 20.10 I am trying to compute the Lie derivative $\mathcal{L}_X\omega$ of the 2-form
$$\omega:= xdy\wedge dz + (z-y)dx\wedge dy$$
against the vector field
$$X:=-y \frac{\partial}{\partial x} + x \frac{\partial}{\partial y}$$
on the 2 sphere $S^2 \subset \mathbb{R}^3$. I believe that the flow $F$ of $X$ is given by
$$F\left(t, \left[\begin{matrix} x \\ y \\ z \end{matrix} \right]\right) = \left[\begin{matrix} \cos(t) & -\sin(t) & 0\\ \sin(t) & \cos(t) & 0 \\ 0 & 0 & 1 \end{matrix}\right] \cdot \left[\begin{matrix} x \\ y \\ z \end{matrix}\right].$$
Using $\mathcal{L}_X \omega := \frac{d}{dt}|_{t=0} F^*\omega$, I get $$\mathcal{L}_X \omega = xdx\wedge dz - ydy\wedge dz - xdx\wedge dy$$
from $F^*\omega = \sum_i F^*(a_i)d(F^*x_{i_1})\wedge d(F^*x_{i_2})$. For Cartan's homotopy formula, I get
$d\omega = 2dx\wedge dy \wedge dz, \hspace{1cm}\text{ and }\\
\iota_X \omega = x^2 dz + (z-y)(-ydy-xdx)$
so that
$d \iota_X\omega = 2xdx\wedge dz - y dz\wedge dy - xdz\wedge dx + x dy\wedge dx \\ \hspace{1cm} = 3xdx\wedge dz - ydz\wedge dy + x dy\wedge dx$
and
$\iota_X(d\omega) =d\omega(X,-,-)\\
\hspace{1.3cm} = 2dx\wedge dy \wedge dz\left(\begin{matrix} -y & - & - \\ x & - & - \\ 0 & - & - \end{matrix}\right)\\
\hspace{1.3cm} = 2(-ydy\wedge dz - x dx\wedge dz) \\
\hspace{1.3cm} = -2y dy\wedge dz - 2x dx\wedge dz.$
This gives $\mathcal{L}_X\omega = d \iota_X\omega + \iota_X(d\omega)\\
= (3xdx\wedge dz - ydz\wedge dy + x dy\wedge dx) - (2y dy\wedge dz + 2x dx\wedge dz)\\
= x dx \wedge dz - y dy\wedge dz -x dx\wedge dy.$
This now agrees with what I got from $\frac{d}{dt}|_{t=0}F_t^* \omega$, though my question is: How to restrict this to $S^2$? For instance, in the upper hemisphere $U_{z>0}$ of $S^2$, we have coordinates $(x,y,\sqrt{1-x^2-y^2})$ so that $d(i^*z) = \frac{-x}{\sqrt{1-x^2-y^2}}dx - \frac{y}{\sqrt{1-x^2-y^2}}dy$ gives
$i^*(\mathcal{L}_X\omega):= xdx\wedge d(i^*z) - ydy\wedge d(i^*z) - xdx\wedge dy\\
= (\frac{-2xy}{\sqrt{1-x^2-y^2}} - x)dx\wedge dy \\
= \frac{-2xy - x\sqrt{1-x^2-y^2}}{\sqrt{1-x^2-y^2}} dx\wedge dy.$
Is this correct? I suppose one could also use spherical coordinates, rather than worry about charts.
| Here's what I got, although it's not 100% certain that I got the correct results. You should check your calculations and mine again and compare them.
So $$L_X \omega = d i_X \omega + i_X d \omega.$$
First, we calculate $$i_X \omega = \omega(X, \cdot) = x \cdot \begin{vmatrix} x & 0 \\ dy & dz \end{vmatrix} + (z-y)\cdot\begin{vmatrix} 0 & x \\ dz & dy \end{vmatrix} = x^2 dz - (z-y)xdz = x(x+y-z)dz,$$and $d i_X \omega = (2x+y-z)dx \wedge dz + x dy \wedge dz.$
Since $\omega$ is a two-form on the sphere $S^2$, its differential is zero, so the $i_X d \omega$ term vanishes. Note that, as it was written, $\omega$ is a form on $\mathbb{R}^3$, so in this calculation, whenever we write $\omega$, we mean $i_{S^2}^* \omega$, where $i_{S^2}$ is the inclusion $S^2 \hookrightarrow \mathbb{R}^3$. But since the pullback commutes with the interior and exterior derivatives, we can do the calculation in $\mathbb{R}^3$ and then pull everything back.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3815957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Spectral Theorem for unbounded self adjoint operators So I've been working on the Spectral Theorem for self adjoint unbounded Operators with the Book by Rudin and got to a problem:
Let $(X,\mathcal{A})$ be a measure space, $H$ a complex Hilbert space and $P:\mathcal{A}\rightarrow B(H)$ a resolution of the identity. Then, to every measurable function $f:X\rightarrow\mathbb{C}$ there exists a densely defined operator $\Psi(f)$ in $H$, with domain
$D(\Psi(f)) = \{x\in H: \int_{X} \vert f(\lambda) \vert^{2} \ d\langle P(\lambda)x,x\rangle < \infty\}$, which is characterized by $$\langle \Psi(f)x,y\rangle = \int_{X} f(\lambda) \ d\langle P(\lambda)x,y\rangle$$ for all $x\in D(\Psi(f))$ and $y\in H$.
My problem is the following theorem: In the above situation, if $D(\Psi(f)) = H$ then $f$ is essentially bounded.
$\textbf{Proof:}$ Since $\Psi(f)$ is a closed operator, the closed graph theorem implies $\Psi(f)\in B(H)$. If $f_{n} = f\chi_{A_{n}}$ for $A_{n} = \{x\in X : \vert f(x)\vert \leq n\}$ and $n\in\mathbb{N}$, then it follows $$\Vert f_{n}\Vert_{\infty} = \Vert \Psi(f_{n})\Vert = \Vert \Psi(f)\Psi(\chi_{A_{n}}) \Vert \leq\Vert \Psi(f) \Vert,$$
since $\Vert\Psi(\chi_{A_{n}})\Vert = \Vert \chi_{A_{n}} \Vert_{\infty}\leq 1$. Thus $\Vert f \Vert_{\infty} \leq \Vert\Psi(f)\Vert$ and $f$ is essentially bounded.
I don't know how we can conclude that $\Vert f \Vert_{\infty} \leq \Vert \Psi(f)\Vert$, since $f_{n}\rightarrow f$ only pointwise. I also know that $\Vert \Psi(f_{n}) \Vert$ converges to $\Vert\Psi(f)\Vert$ but I can't see how that could help me. I would appreciate any help.
| $|f_n(x)| \leq \|\Psi (f)\| $ for all $x$ outside some set $E_n$ of measure $0$. If $x \notin \cup_n E_n$ and $f_n(x) \to f(x)$ then $|f_n(x)| \leq \|\Psi (f)\| $ for all $n$ so $|f(x)| \leq \|\Psi (f)\| $. This proves that $\|f\|_{\infty} \leq \|\Psi (f)\| $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3816150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Bondarenkos counter-example in dimension $\geq 65$ to Borsuk's conjecture. Just to remind you, Borsuk conjectured that:
Every bounded subset $E\subset \mathbb{R}^d$ can be partitioned into $(d+1)$ sets with smaller diameter.
Even though this conjecture had been proven to be wrong, the search for the smallest dimension in which it doesn't hold is still on.
After a couple of super-high-dimensional counter-examples, Bondarenko showed in one of his papers that Borsuk's conjecture doesn't hold for dimension $64$, which is, as far as I know, the current record.
To keep things short I skip some definitions on strongly-regular graphs. All necessary information is contained in the paper (see section on strongly-regular graphs).
Bondarenko uses a representation of strongly regular graphs to construct a two-distance set in a given dimension. In detail, they show that the strongly-regular graph with parameters $G=(416,100,36,20)$ can be embedded into an $f=65$ dimensional space in such a way that $84$ parts are needed.
My question is: Why is $G=(416,100,36,20)$ the way to go? There are countless smaller strongly-regular graphs which can be embedded into smaller dimensions. How could they make sure that no smaller strongly-regular graph exists such that their proof works in smaller dimensions?
A list of (many) strongly-regular graphs can be found here.
| First:
The considered paper by Bondarenko gave counterexamples for (all integer) dimensions from 65 onwards but not for a smaller dimension.
Second:
The mentioned arXiv reference is for the preprint. The final paper appeared in the journal Discrete & Computational Geometry, volume 51, issue 3 (April 2014). Meanwhile, it is freely accessible via
https://link.springer.com/article/10.1007/s00454-014-9579-4
Third:
Your question should be answered by Remark 1 of the final paper. (That remark states in particular, that the two graphs mentioned/used in the article had been found by an extensive search, and that the smaller one "probably [...] has the smallest possible dimension for this approach. However, we can not prove this [...]").
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3816282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Locally constant group schemes Let $k$ be a field of characteristic $p$, then the group scheme $\mu_\ell$, if $p$ does not divide $\ell$, is étale-locally isomorphic to $\mathbb{Z}/\ell\mathbb{Z}$. I have two quick questions which I don't feel very certain. Is $\mu_p$ fppf locally isomorphic to $\mathbb{Z}/p\mathbb{Z} $? Also, is $\alpha_p $ fppf locally constant?
| I guess if $\mu_p$ were locally isomorphic to $\mathbb{Z}/p\mathbb{Z}$, there would be a $k$-algebra $A$ such that $\mu_{p,A}\simeq \mathbb{Z}/p\mathbb{Z}_A$. But for a field $K$ which is an $A$-algebra, $\mu_{p,A}(K)=1$, while $\mathbb{Z}/p\mathbb{Z}_A(K)$ is the abstract group $\mathbb{Z}/p\mathbb{Z}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3816400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Computing card probabilities. A deck of $52$ cards is numbered $1$ to $13$ in each of four colors: red, green, blue, and yellow. You draw the topmost three cards. What's the probability that:
*
*All three cards are green or blue. I think the answer is ${26\choose 3}/{52\choose 3}$ since there are $26$ green or blue cards, and we pick three.
*Exactly two of the cards are colored blue. I think the answer is $({13\choose 2}\cdot 39)/{52\choose 3}$ since we pick the blue cards in ${13\choose 2}$ ways and pick one of the last $39$ (non-blue) cards in $39$ ways.
*Exactly one card has the number $13$ written on it. I think the answer is ${4\choose 1}{48\choose 2}/{52\choose 3}$ since you pick the $13$ in four ways and pick the other two cards in ${48\choose 2}$ ways.
*There is at least one card with a $12$ written on it, but no cards with a $13$ written on it. I think the answer is $4\cdot {47\choose 2}/{52\choose 3}$ since we can place a $12$ in four ways and pick the last $2$ cards in ${47\choose 2}$ ways (remove the chosen $12$ and the four $13$'s).
*All of the cards are different colors. I think the answer is ${4\choose 3}13^{3}/{52\choose 3}$ since you pick the three colors in ${4\choose 3}$ ways and for each color you can permute the value in $13$ ways.
Are my solutions correct? If not, where did I go wrong?
| The first three and the last one are correct.
In the fourth one you’ve overcounted. Specifically, you’ve counted each set of $3$ that contains exactly two $12$s twice, once for each $12$, and each set of $3$ that includes exactly three $12$s three times, once for each $12$: any one of the $12$s can be the one that you singled out at the beginning, and you’re counting each of those possibilities separately.
It’s easier here to count the sets of $3$ that contain no $12$s or $13$s and subtract that from number of sets of $3$ that contain no $13$s. There are $\binom{44}3$ ways to choose $3$ cards that are neither $12$s nor $13$s, and there are altogether $\binom{48}3$ ways to choose $3$ cards that are not $13$s, so there are $\binom{48}3-\binom{44}3$ ways to choose $3$ cards that include at least one $12$ but no $13$s.
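As a numerical cross-check of this count (my addition, not part of the original answer; `math.comb` needs Python 3.8+), the complement count can also be obtained directly by conditioning on how many of the four $12$s are in the hand:

```python
from math import comb

# Hands with at least one 12 and no 13, counted two ways:
# (a) complement: (hands with no 13s) minus (hands with neither 12s nor 13s)
complement_count = comb(48, 3) - comb(44, 3)

# (b) directly, by the number of 12s (1, 2, or 3 of the four 12s) in the hand
direct_count = (comb(4, 1) * comb(44, 2)
                + comb(4, 2) * comb(44, 1)
                + comb(4, 3) * comb(44, 0))

assert complement_count == direct_count == 4052
```

Both methods give $4052$ hands, confirming the subtraction formula.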
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3816553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question about using the notation "$\pm$" to represent the answer briefly. I saw this problem, with its answer, in a book:
Solve the equation $z^4+z^2+1=0$.
The book gave the final answer as:
$$z_1=\frac{1}{2}+ \frac{\sqrt{3}}{2}i, \quad
z_2=\frac{-1}{2}- \frac{\sqrt{3}}{2}i,\quad
z_3=\frac{-1}{2}+ \frac{\sqrt{3}}{2}i,\quad
z_4=\frac{1}{2}- \frac{\sqrt{3}}{2}i.$$
My question isn't about how to solve the equation; rather, I want to know: is it possible to write the roots of the equation briefly as $$z=\pm \frac{1}{2}\pm \frac{\sqrt{3}}{2}i? $$
And does it represent the same $z_1$, $z_2$, $z_3$, $z_4$ I mentioned earlier? Finally, I want to know whether this is a common way to represent four answers.
| There are, I think, two questions implicit in the original post:
*
*Is it acceptable to write $\pm a \pm b$?
I think that the answer to this is a qualified "yes". As noted by Calum Gilhooley, there are notable texts and authors who have written $\pm a\pm b$ to denote a four element set, i.e.
$$ \pm a\pm b = \{ a+b, a-b, -a+b, -a-b\}. $$
Because this notation has appeared in the literature, if you use it in your own writing, you will likely be safe from criticism.
On the other hand, illustrious authors often get away with things that the rest of us can't, simply on the basis of their reputations. Thus I would be cautious about emulating the style of famous authors, as they are often famous for their mathematical discoveries, rather than their clear exposition or writing ability.
*Should one write $\pm a\pm b$?
This is a matter of opinion, but I would recommend against writing $\pm a \pm b$. This notation has the potential to be ambiguous. The notation $ \pm a \mp b $ unambiguously denotes the two-element set
$$ \{a-b, -a+b\}, $$
hence it is reasonable to infer that the notation $\pm a\pm b$ ought to denote the analogous two-element set
$$ \{a+b,-a-b\}. $$
However, as amWhy noted in chat, one can make a very good argument that $\pm a\pm b$ should denote the four-element set
$$ \{a+b, a-b, -a+b, -a-b\}. $$
As the notation has two very reasonable interpretations, it has the potential to be ambiguous and to cause confusion.
Because the notation has the potential to be ambiguous, I think that one should be careful about when one uses it. In one's own personal notes or on a blackboard, where the style of communication is informal and intentionally abbreviated, it can be perfectly reasonable to reduce the number of symbols written, and just write $\pm a\pm b$. In these settings, there are other channels of information which disambiguate: the verbal lecture, surrounding notes, and (perhaps) the context in which the notation is used.
On the other hand, I would suggest that, in formal writing, one really ought to avoid the notation $\pm a\pm b$. Off the top of my head, there are a few alternatives which might be better:
*
*Write "The four-element set $\pm a\pm b$..." or "The two-element set $\pm a\pm b$...", depending on what you mean.
*Similarly, in the specific case highlighted in the original question, one could write "The four roots are given by $\pm a\pm b$..."
*Write something more explicit, such as
$$ \bigl\{ (-1)^ma + (-1)^nb : m,n \in \mathbb{N} \bigr\}. $$
*Just list out all four elements, e.g. "The solutions are $a+b$, $a-b$, $-a+b$, and $-a-b$."
Remember that the goal of mathematical writing is to be clear and unambiguous. While one certainly can write $\pm a \pm b$, my feeling is that one shouldn't. Of course, at the end of the day, this is between the author, the reader, and (perhaps) the publisher, and one should do whatever one believes will satisfy the needs of all parties the best.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3816700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Integral involving composition of trig. and inverse trig. functions. For a minor class I am taking this year, I found the following integral in a problem set, and I had no luck evaluating it: $$\int \cos\left(2\cot^{-1}\sqrt{\frac{1-x}{1+x}}\right)dx.$$
I proceeded as follows:
First, let $x=\cos(2\theta)$ $\implies$ $dx=-2\sin 2\theta\, d\theta$, so the integral becomes: $$\int \cos\left(2\cot^{-1}\sqrt{\frac{1-\cos(2\theta)}{1+\cos(2\theta)}}\right)\cdot(-2\sin 2\theta)\, d\theta =\int \cos\left(2\cot^{-1}\sqrt{\frac{\sin^2\theta}{\cos^2\theta}}\right)\cdot(-2\sin 2\theta)\, d\theta\\=\int \cos\left(2\cot^{-1}(\tan\theta)\right)\cdot(-2\sin 2\theta)\, d\theta=\int \cos\left(\frac{2}{\theta}\right)\cdot(-2\sin 2\theta)\, d\theta$$
After this I am stuck. How do I proceed? I can't seem to find any mistakes with my substitution and the succeeding lines. Where did I make a mistake (if any), though? Can I still use this substitution? I tried looking this up for some guidance, and in this online integral solver I found, https://www.integral-calculator.com/ , they used some other method to get to the right answer which I didn't quite understand (refer to image). I wanted to keep using a substitution method if possible.
| Hint:
WLOG let $x=-\cos2t,0\le2t\le\pi$
$$\text{arccot}\sqrt{\dfrac{1-x}{1+x}}=\text{arccot}(\cot t)=t\text{ as } 0\le t\le\dfrac\pi2$$
$$\cos\left(2\,\text{arccot}\sqrt{\dfrac{1-x}{1+x}}\right)=\cos(2t)=-x$$
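A quick numerical check of this hint (my addition, not the answerer's): for $x\in(-1,1)$ one indeed has $\cos\big(2\,\mathrm{arccot}\sqrt{(1-x)/(1+x)}\big)=-x$, so the integrand is just $-x$, suggesting the antiderivative $-\frac{x^2}{2}+C$. The sketch below uses $\mathrm{arccot}(u)=\arctan(1/u)$ for $u>0$:

```python
import math

def integrand(x):
    # cos(2 * arccot(sqrt((1-x)/(1+x)))), using arccot(u) = atan(1/u) for u > 0
    u = math.sqrt((1 - x) / (1 + x))
    return math.cos(2 * math.atan(1 / u))

for x in [-0.9, -0.5, 0.0, 0.3, 0.75]:
    assert abs(integrand(x) - (-x)) < 1e-12
```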
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3816978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Proving $(a+b+c) \Big(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\Big) \leqslant 25$ For $a,b,c \in \Big[\dfrac{1}{3},3\Big]$, prove:
$$(a+b+c) \Big(\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c}\Big) \leqslant 25.$$
Assuming $a\equiv \text{mid}\{a,b,c\}$, we have:
$$25-(a+b+c) \Big(\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c}\Big) =\dfrac{2}{bc} (10bc-b^2-c^2) +\dfrac{c+b}{abc} (a-b)(c-a)\geqslant 0.$$
I wish to find a proof that does not assume $a={\rm mid}\left \{ a, b, c \right \}$, or another proof.
Actually, I also found a proof valid for all $a,b,c \in \Big[\dfrac{1}{3},3\Big]$, but it is very ugly.
After clearing the denominators, we need to prove:
$$f:=22abc-a^2c-a^2b-b^2c-ab^2-bc^2-ac^2\geqslant 0,$$
and indeed we have:
$$f=\dfrac{1}{32} \left( 3-a \right) \left( 3-b \right) \Big( c-\dfrac{1}{3} \Big) +
\left( 3-a \right) \left( a-\dfrac{1}{3} \right) \left( b-\dfrac{1}{3} \right) +\\+{
\frac {703}{32}}\, \left( a-\dfrac{1}{3} \right) \left( b-\dfrac{1}{3} \right) \left(
c-\dfrac{1}{3} \right) +{\frac {9}{32}} \left( 3-a \right) \left( 3-c
\right) \left( a-\dfrac{1}{3} \right) +\dfrac{1}{4} \left( 3-b \right) \left( 3-c
\right) \left( c-\dfrac{1}{3} \right) +\dfrac{5}{4} \left( 3-c \right) \left( c-\dfrac{1}{3}
\right) \left( a-\dfrac{1}{3} \right) +{\frac {49}{32}} \left( 3-c \right)
\left( b-\dfrac{1}{3} \right) \left( c-\dfrac{1}{3} \right) + \left( 3-b \right)
\left( b-\dfrac{1}{3} \right) \left( c-\dfrac{1}{3} \right) +\\+{\frac {21}{16}}\,
\left( 3-b \right) \left( a-\dfrac{1}{3} \right) \left( b-\dfrac{1}{3} \right) \\+\dfrac{5}{4}\,
\left( 3-a \right) \left( c-\dfrac{1}{3} \right) \left( a-\dfrac{1}{3} \right) +\dfrac{1}{32}
\, \left( 3-a \right) ^{2} \left( 3-c \right) +\dfrac{1}{4}\, \left( 3-b
\right) \left( b-\dfrac{1}{3} \right) ^{2}+\dfrac{1}{32} \left( 3-b \right) ^{2}
\left( a-\dfrac{1}{3} \right) +{\frac {9}{32}} \left( a-\dfrac{1}{3} \right) \left(
b-\dfrac{1}{3} \right) ^{2}+\dfrac{1}{4} \left( a-\dfrac{1}{3} \right) \left( c-\dfrac{1}{3} \right) ^{
2}+\dfrac{1}{4} \left( b-\dfrac{1}{3} \right) \left( 3-b \right) ^{2}+{\frac {9}{32}}
\, \left( b-\dfrac{1}{3} \right) \left( c-\dfrac{1}{3} \right) ^{2}$$
So we are done.
If you want to check my decomposition, please see the text here.
| Let $f(a,b,c)=(a+b+c)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)$. Note that $f$ is convex in each variable (when the other variables are fixed): as a function of $a$ alone it equals $a\left(\frac1b+\frac1c\right)+\frac{b+c}{a}+\text{const}$, whose second derivative $\frac{2(b+c)}{a^3}$ is positive. Hence, since a convex function on an interval $I$ attains its maximum at an endpoint of $I$ (here $I=[m,M]=\left[\frac{1}{3},3\right]$),
$$
\max_{(a,b,c)\in I^3} f=\max_{(a,b,c)\in\{m,M\}^3} f.
$$
Thus, we need only to compute these 8 values and choose the maximal one.
Details: consider any point $(a,b,c)$, fix $b$ and $c$ and consider $f$ as a function of $a$. We obtain
$$
f(a,b,c)\leq\max\{f(m,b,c),f(M,b,c)\},
$$
so we can assume that $a\in\{m,M\}$. Now repeat this argument for $b$ and $c$.
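To make the endpoint check concrete (my addition, not part of the original answer), here is an exact computation of the eight corner values using Python's `fractions`; the corner maximum turns out to be $\frac{209}{9}\approx 23.22$, so the bound $25$ holds (and is not attained):

```python
from fractions import Fraction
from itertools import product

m, M = Fraction(1, 3), Fraction(3)

def f(a, b, c):
    return (a + b + c) * (1 / a + 1 / b + 1 / c)

# evaluate f at all 8 corners of the cube [1/3, 3]^3, exactly
corner_values = [f(a, b, c) for a, b, c in product([m, M], repeat=3)]
corner_max = max(corner_values)   # attained at e.g. (1/3, 1/3, 3)
assert corner_max <= 25
```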
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
negative variance example I came across the following matrix.
I notice that it is symmetric and real and has an eigenvalue of $-0.01091786$.
This must mean, that there exists a vector $x \in \mathbb{R}^{10}$ for which it holds
$$ xVx^T<0$$
but I have no idea how to come up with an example of such a vector. I especially want to know if there is an example of a vector $x$ that has that property and also satisfies $\sum_{i=1}^{10}x_i =1$.
Thank you in advance.
| Here is one trick for finding eigenvectors associated with an eigenvalue. It is guaranteed to work if the characteristic polynomial is also the minimal polynomial, but may fail otherwise:
If $p$ is the characteristic polynomial of $A$, then by the Cayley-Hamilton theorem, $p(A) = 0$. If $\lambda$ is an eigenvalue of $A$, then $p(t) = (t - \lambda)q(t)$, where the polynomial $q(t)$ can be found by synthetic division.
Now $p(A) = (A - \lambda I)q(A) = 0$. If $q(A)$ has any non-zero columns, those columns have to be eigenvectors associated with $\lambda$.
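Here is a small illustration of the trick (my own $2\times2$ example, not the OP's $10\times10$ matrix): for $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ we have $p(t)=(t-1)(t-3)$, and for $\lambda=3$ synthetic division gives $q(t)=t-1$, so the non-zero columns of $q(A)=A-I$ are eigenvectors for $\lambda=3$:

```python
A = [[2.0, 1.0], [1.0, 2.0]]
lam = 3.0

# q(t) = p(t) / (t - lam) = t - 1, so q(A) = A - I
qA = [[A[0][0] - 1, A[0][1]],
      [A[1][0], A[1][1] - 1]]

# take the first (non-zero) column of q(A) as a candidate eigenvector
v = [qA[0][0], qA[1][0]]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]

# verify A v = lam * v
assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
```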
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to define $f: S \rightarrow T$ such that $f$ is continuous and onto for each of the following pairs of $S$ and $T ?$ Is it possible to define $f: S \rightarrow T$ such that $f$ is continuous and onto for each of the following pairs of $S$ and $T ?$ For each pair, provide an example of one such $f,$ if possible; otherwise, show that it is impossible to define one such $f$
$(i) S=(0,1) \times(0,1)$ and $T$ is the set of rational numbers.
$(ii) S=(0,1) \times(0,1)$ and $T=[0,1] \times[0,1]$
My attempt: the continuous image of a connected set is connected. $(0,1)\times(0,1)$ is connected but $\mathbb{Q}$ is not, so (i) is impossible.
Help me solve (ii). Thank you.
| The answer to (ii) is yes. For example, we could consider the function $f:S \to T$ defined by
$$
f(x,y) = (\sin^2(2 \pi x),\sin^2(2 \pi y)).
$$
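To see that this $f$ is indeed onto (a check I am adding, not part of the original answer): every $u\in[0,1]$ has a preimage $x=\frac{\pi+\arcsin\sqrt{u}}{2\pi}\in[\tfrac12,\tfrac34]\subset(0,1)$, since $\sin(\pi+s)=-\sin s$. A numerical sketch:

```python
import math

def f(x, y):
    return (math.sin(2 * math.pi * x) ** 2,
            math.sin(2 * math.pi * y) ** 2)

def preimage(u):
    # x in [1/2, 3/4] ⊂ (0,1) with sin^2(2*pi*x) = u, using sin(pi + s) = -sin(s)
    return (math.pi + math.asin(math.sqrt(u))) / (2 * math.pi)

for u in [0.0, 0.25, 0.5, 1.0]:
    for v in [0.0, 0.3, 1.0]:
        x, y = preimage(u), preimage(v)
        assert 0 < x < 1 and 0 < y < 1
        fu, fv = f(x, y)
        assert abs(fu - u) < 1e-12 and abs(fv - v) < 1e-12
```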
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving $6(x^3+y^3+z^3)^2 \leq (x^2+y^2+z^2)^3$, where $x+y+z=0$ Question :
Let $x,y,z$ be reals satisfying $x+y+z=0$. Prove the following inequality:$$6(x^3+y^3+z^3)^2 \leq (x^2+y^2+z^2)^3$$
My Attempts :
It’s obvious that $x,y,z$ are either one negative and two positive numbers or one positive and two negative numbers.
In addition, putting $(x,y,z)$ and $(-x,-y,-z)$ into the inequality has the same outcome.
Therefore, without loss of generality, I suppose $x,y \geq 0$ and $z\leq0$.
$x+y=-z\\ \Longrightarrow (x+y)^2=z^2 \\ \Longrightarrow (x^2+y^2+z^2)^3=8(x^2+xy+y^2)^3$
is what I have got so far, and from here I can’t continue.
Am I heading in the right direction? Any suggestions or hints will be much appreciated.
| Scaling, one may assume that $z = -1$, so that $x + y = 1$. Then the desired inequality is
$$6(x^3 + y^3 - 1)^2 \leq (x^2 + y^2 + 1)^3$$
Since $x^3 + y^3 = (x + y)^3 - 3xy(x + y) = 1 - 3xy$ and $x^2 + y^2 + 1 = (x + y)^2 - 2xy + 1 = 2 - 2xy$, the desired inequality is
$$54(xy)^2 \leq (2 - 2xy)^3$$
Thus it makes sense to look at $xy = x(1 - x) = {1 \over 4} - (x+{1 \over 2})^2$, whose range is $(-\infty, {1 \over 4}]$. Letting $r = xy$, we need that for $r \leq {1 \over 4}$ we have
$$54r^2 \leq (2 - 2r)^3$$
However, the polynomial $54r^2 - (2 - 2r)^3$ can be directly factorized into $2(r + 2)^2(4r - 1)$, which is nonpositive in the domain $(-\infty, {1 \over 4}]$. Hence the inequality holds.
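The factorization can be verified mechanically (my addition): both sides are cubic polynomials in $r$, so agreement at four or more points proves the identity, and checking integer points keeps everything exact:

```python
# 54 r^2 - (2 - 2r)^3  ==  2 (r + 2)^2 (4r - 1)  as polynomials in r:
# both sides have degree 3, so equality at >= 4 points proves the identity.
for r in [-3, -2, -1, 0, 1, 2]:
    lhs = 54 * r**2 - (2 - 2 * r) ** 3
    rhs = 2 * (r + 2) ** 2 * (4 * r - 1)
    assert lhs == rhs
```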
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
Mixed directional derivative of second order and the Hessian matrix The first-order directional derivative in the 2-dimensional $xy$-coordinate system is
$$D_{\bf{u}}=u_1\frac{\partial}{\partial x}+u_2\frac{\partial}{\partial y}$$
I thought to go further and analyse second order directional derivatives.
The formula for general
second order mixed directional derivative is
$$ D^2_{\bf{u}\bf{v}}=D_{\bf{v}}\left[u_1\frac{\partial}{\partial x}+u_2\frac{\partial}{\partial y}\right]=u_1D_{\bf{v}}\frac{\partial}{\partial x}+u_2D_{\bf{v}}\frac{\partial}{\partial y}=$$
$$=u_1(v_1\frac{\partial^2}{\partial x^2}+v_2\frac{\partial^2}{\partial y\partial x})+u_2(v_1\frac{\partial^2}{\partial x\partial y}+v_2\frac{\partial^2}{\partial y^2})=$$
$$=u_1v_1\frac{\partial^2}{\partial x^2}+u_1v_2\frac{\partial^2}{\partial y\partial x}+u_2v_1\frac{\partial^2}{\partial x\partial y}+u_2v_2\frac{\partial^2}{\partial y^2}$$
I see the formula we get has definitely something to do with the following two matrices:
$$U=\left[\matrix{u_1v_1 && u_1v_2\\ u_2v_1 && u_2v_2}\right]$$
$$H=\left[\matrix{\frac{\partial^2}{\partial x^2} && \frac{\partial^2}{\partial x\partial y} \\ \frac{\partial^2}{\partial y\partial x} && \frac{\partial^2}{\partial y^2}}\right]$$
Where the second of them is known as the Hessian operator in the 2-dimensional $xy$-plane.
Are there any matrix algebra masters who can find how to express $D^2_{\bf{u}\bf{v}}$ in terms of $U$ and $H$?
If we had defined an operation like $$\left[\matrix{a_{11} && a_{12} \\ a_{21} && a_{22}}\right]* \left[\matrix{b_{11} && b_{12} \\ b_{21} && b_{22}}\right]=a_{11}b_{11}+a_{12}b_{12}+a_{21}b_{21}+a_{22}b_{22}$$
$$A*B=\sum_{i, j} A_{ij}B_{ij},$$
then the formula for the mixed directional 2nd-order derivative would look like
$$D^2_{\bf{u}\bf{v}}=U^T*H$$
Any ideas how to express the $*$ operation using standard operations on matrices?
| I'm not sure if this is exactly what you're looking for, but if we write $\mathbf u=\begin{bmatrix} u_1\\u_2\end{bmatrix},\ \mathbf v=\begin{bmatrix} v_1\\v_2\end{bmatrix}$, then the formula becomes
$$D^2_{\bf{u}\bf{v}} = \mathbf v^T\mathbf H\mathbf u$$
Let us write $$\begin{align}
f_{11}&=\frac{\partial^2f}{\partial x^2}\\
f_{12}&=\frac{\partial^2f}{\partial y\partial x}\\
f_{21}&=\frac{\partial^2f}{\partial x\partial y}\\
f_{22}&=\frac{\partial^2f}{\partial y^2}
\end{align}$$
as usual. If the partials exist and are continuous, we have $f_{12}=f_{21}$ so we can write $$\mathbf H=\begin{bmatrix}f_{11}&f_{12}\\f_{21}&f_{22}\end{bmatrix}$$
and the formula takes the easily-remembered form
$$v_1f_{11}u_1+v_1f_{12}u_2+v_2f_{21}u_1+v_2f_{22}u_2=\begin{bmatrix}v_1&v_2\end{bmatrix}\begin{bmatrix}f_{11}&f_{12}\\f_{21}&f_{22}\end{bmatrix}\begin{bmatrix}u_1\\u_2\end{bmatrix}$$ Since the left-hand side is a scalar, and $\mathbf H$ is symmetric, we can transpose to get$$D^2_{\bf{u}\bf{v}} = \mathbf u^T\mathbf H\mathbf v$$ so we don't even have to remember which side the $\mathbf u$ goes on.
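A quick finite-difference sanity check of $D^2_{\mathbf u\mathbf v}f=\mathbf v^T\mathbf H\mathbf u$ (my addition; the test function $f(x,y)=x^3y+xy^2$, the point, and the directions are arbitrary choices):

```python
def f(x, y):
    return x**3 * y + x * y**2

# Hessian of f at p = (1, 2), by hand: f_xx = 6xy, f_xy = 3x^2 + 2y, f_yy = 2x
H = [[12.0, 7.0], [7.0, 2.0]]
u, v, p = (1.0, 2.0), (3.0, -1.0), (1.0, 2.0)

vHu = sum(v[i] * H[i][j] * u[j] for i in range(2) for j in range(2))  # v^T H u

# central difference for d^2/(ds dt) of f(p + s*u + t*v) at s = t = 0
h = 1e-4
def g(s, t):
    return f(p[0] + s * u[0] + t * v[0], p[1] + s * u[1] + t * v[1])

fd = (g(h, h) - g(h, -h) - g(-h, h) + g(-h, -h)) / (4 * h * h)
assert abs(fd - vHu) < 1e-3
```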
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
definite integral, regularized hypergeometric function I am currently stuck at the following equation: $\frac{\sqrt{m} \sec \left( (m+1)\pi \right)}{4\Gamma(m)} \int\limits_{0}^{\infty} \sqrt{z}\: {}_1\mathcal{M}_1\left(\frac{1}{2},\frac{3}{2}-m,\frac{-mz}{4} \right) dz=1$ $\forall$ $m \in \mathbb{Z}^+$, where ${}_1\mathcal{M}_1(a,b,z)=\frac{1}{\Gamma(b-a)\Gamma(a)}\int\limits_0^1e^{z\alpha}\alpha^{a-1}(1-\alpha)^{b-a-1}d\alpha$ is the regularized confluent hypergeometric function.
Given that I evaluated the integral using Mathematica, I'm currently interested in how Mathematica obtained this solution.
What I have been able to obtain after some manipulations and exploiting properties of the Gamma function is the following: $\frac{\sqrt{m} \sec \left( (m+1)\pi \right)}{4\Gamma(m)} \int\limits_{0}^{\infty} \sqrt{z}\: {}_1\mathcal{M}_1\left(\frac{1}{2},\frac{3}{2}-m,\frac{-mz}{4} \right) dz=\frac{2\Gamma(m-\frac{1}{2})}{m\pi\Gamma(m)}\int\limits_{0}^{\infty}t^{\frac{1}{2}}{}_1F_1\left(\frac{1}{2},\frac{3}{2}-m,-t \right) dt$, where we apply the transformation $\frac{mz}{4}\rightarrow t$ and ${}_1F_1(a,b,z)={}_1\mathcal{M}_1(a,b,z)\Gamma(b)$ is the standard confluent hypergeometric function.
There exists a standard integral in the book "Table of Integrals, Series, and Products" by I. S. Gradshteyn and I. M. Ryzhik, which is as follows: $\int\limits_{0}^{\infty}t^{b-1}{}_1F_1(a,c,-t)dt=\frac{\Gamma(b)\Gamma(c)\Gamma(a-b)}{\Gamma(a)\Gamma(c-b)}$ [7.612.1]. But the problem is that this result holds only when $b<a$, which is not the case in my problem. This is precisely the point where I am stuck and cannot arrive at the solution that is provided by Mathematica.
Any suggestion will be helpful.
| To evaluate
\begin{equation}
I=\frac{2\Gamma(m-\frac{1}{2})}{m\pi\Gamma(m)}\int\limits_{0}^{\infty}t^{\frac{1}{2}}{}_1F_1\left(\frac{1}{2},\frac{3}{2}-m,-t
\right)\,dt
\end{equation}
we replace the hypergeometric function by its representation in terms of
Laguerre polynomials (see
here)
\begin{equation}
{}_1F_1(a, a - n, z)=\frac{(-1)^nn!}{(1-a)_n}e^zL_n^{a-n-1}(-z)
\end{equation}
with $a=1/2,n=m-1,z=-t$, to express, after several simplifications,
\begin{equation}
I=\frac{2(-1)^{m-1}}{m\sqrt{\pi}}\int\limits_{0}^{\infty}t^{1/2}e^{-t}L_{m-1}^{1/2-m}(t)\,dt
\end{equation}
From the Rodrigues-type expression
\begin{equation}
L_n^\lambda(z)=\frac{e^zz^{-\lambda}}{n!}\frac{\partial^n}{\partial z^n}\left( z^{n+\lambda} e^{-z}\right)
\end{equation}
with $n=m-1,\lambda=1/2-m$,
\begin{equation}
L_{m-1}^{1/2-m}(z)=\frac{e^zz^{m-1/2}}{(m-1)!}\frac{\partial^{m-1}}{\partial z^{m-1}}\left( z^{-1/2} e^{-z}\right)
\end{equation}
Thus
\begin{equation}
I=\frac{2(-1)^{m-1}}{m!\sqrt{\pi}}\int\limits_{0}^{\infty}t^{m}\frac{\partial^{m-1}}{\partial t^{m-1}}\left( t^{-1/2} e^{-t}\right)\,dt
\end{equation}
By performing integrations by parts $m-1$ times,
\begin{equation}
\int\limits_{0}^{\infty}t^{m}\frac{\partial^{m-1}}{\partial t^{m-1}}\left( t^{-1/2} e^{-t}\right)\,dt=(-1)^{m-1}m!\int_0^\infty t^{1/2} e^{-t}\,dt
\end{equation}
Finally
\begin{equation}
I=1
\end{equation}
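As an independent numerical check (my addition, not part of the original answer), one can use Kummer's transformation ${}_1F_1(a,b,z)=e^z\,{}_1F_1(b-a,b,-z)$, under which ${}_1F_1(\tfrac12,\tfrac32-m,-t)=e^{-t}\,{}_1F_1(1-m,\tfrac32-m,t)$ is $e^{-t}$ times a polynomial of degree $m-1$; the integral then reduces to a finite sum of Gamma values:

```python
import math

def pochhammer(x, k):
    # (x)_k = x (x+1) ... (x+k-1), with (x)_0 = 1
    p = 1.0
    for j in range(k):
        p *= x + j
    return p

def check(m):
    # 1F1(1/2, 3/2-m, -t) = e^{-t} * sum_k c_k t^k  (polynomial, degree m-1),
    # with c_k = (1-m)_k / ((3/2-m)_k k!), so the integral of
    # t^{1/2} 1F1(1/2, 3/2-m, -t) over (0, inf) is sum_k c_k Gamma(k + 3/2).
    integral = sum(
        pochhammer(1 - m, k) / (pochhammer(1.5 - m, k) * math.factorial(k))
        * math.gamma(k + 1.5)
        for k in range(m)
    )
    prefactor = 2 * math.gamma(m - 0.5) / (m * math.pi * math.gamma(m))
    return prefactor * integral

for m in range(1, 8):
    assert abs(check(m) - 1.0) < 1e-9
```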
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the partial sum $\sum_{k=1}^n (-1)^{k-1}\frac{\left(\frac{1}{9}\right)^k}{2k-1}$? I have a problem where, after some work, I've arrived at
$$6 \times \lim_{n \to \infty} \sum_{k=1}^n (-1)^{k-1}\frac{\left(\frac{1}{9}\right)^k}{2k-1}$$
and I need to find the partial sum $$\sum_{k=1}^n (-1)^{k-1}\frac{\left(\frac{1}{9}\right)^k}{2k-1}.$$
to calculate the above limit, but I'm having trouble finding it.
I know that if this was simply a series with the $\left(\frac{1}{9}\right)^k$ term, I would just use the geometric series formula, but there's an elusive alternating term as well as the $2k-1$ term.
| Hint:
$$\arctan(x)=\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k+1}}{2k+1}$$ if $|x|< 1.$ Now consider the following shift of the summation index $k\rightarrow k-1.$
\begin{align} \arctan(x)&=\sum_{k=1}^{\infty}(-1)^{k-1}\frac{x^{2k-1}}{2k-1}. \space \text{Now let $x=\frac{1}{3}$.} \end{align}
\begin{align} \text{Your final answer should be $6\times\frac{1}{3}\arctan(\frac{1}{3})=2\arctan(\frac{1}{3})\approx 0.64350110879$} \end{align}
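A numerical check of the hint (my addition): the partial sums of $6\sum_{k=1}^n(-1)^{k-1}\frac{(1/9)^k}{2k-1}$ converge rapidly to $2\arctan(1/3)$, with the alternating-series error bounded by the first omitted term:

```python
import math

def partial(n):
    return 6 * sum((-1) ** (k - 1) * (1 / 9) ** k / (2 * k - 1)
                   for k in range(1, n + 1))

target = 2 * math.atan(1 / 3)   # ≈ 0.6435011
assert abs(partial(5) - target) < 1e-5
assert abs(partial(30) - target) < 1e-12
```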
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3817938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Quotient rule for multivariable functions $\newcommand{\mbf}{\mathbf}$
Let $f,g:\mathbb R^n\to \mathbb R$ be differentiable at $a$.
$Df(a)$ is the unique linear transformation $\mathbb R^n\to\mathbb R$ such that
$$\lim_{\mbf h\to\mbf 0}\frac{|f(\mbf a+\mbf h)-\mbf f(\mbf a)-Df(\mbf a)(\mbf h)|}{|\mbf h|}=0.$$
I want to show that if $g(a)\neq 0$, then $D(f/g)(a)=\dfrac{g(a)Df(a)-f(a)Dg(a)}{[g(a)]^2}$.
I've shown that $D(f\cdot g)(a)=g(a)Df(a)+f(a)Dg(a)$.
Let $h:\mathbb R^n\to\mathbb R$ be defined by $h(x)=\frac{1}{g(x)}$. Then
$$D(f/g)(a)=D(f\cdot h)(a)=h(a)Df(a)+f(a)Dh(a).$$
Since $h=q\circ g$ where $q:\mathbb R-\{0\}\to\mathbb R$ is defined by $q(x)=1/x$,
$$Dh(a)=D(q\circ g)(a)=Dq(g(a))\circ Dg(a)=D\frac1{g(a)}\circ Dg(a)$$
How can I simplify $D\frac 1{g(x)}$ to prove this?
| You only need to show that $D\frac{1}{g(a)}=-\frac{Dg(a)}{g^2(a)}$, and for this it is necessary and sufficient that
$$\lim_{h\to 0}\frac{|1/g(a+h)- 1/g(a)+D(g(a))(h)/g^2(a)|}{|h|}=0.$$
Indeed, note that
$$\frac{|1/g(a+h)- 1/g(a)+D(g(a))(h)/g^2(a)|}{|h|}=\frac{|g^2(a)-g(a)g(a+h)+g(a+h)Dg(a)(h)|}{|g^2(a)g(a+h)h|}\leq\frac{|g^2(a)-g(a)g(a+h)+g(a)Dg(a)(h)|}{|g^2(a)g(a+h)h|}+\frac{|g(a+h)Dg(a)(h)-g(a)Dg(a)(h)|}{|g^2(a)g(a+h)h|}=\frac{|g(a+h)-g(a)-Dg(a)(h)|}{|g(a)g(a+h)||h|}+\frac{|(g(a+h)-g(a))Dg(a)(h/|h|)|}{|g^2(a)g(a+h)|},$$
where, letting $h\to0$, the first summand $\to0$ by the differentiability of $g$, and the second summand $\to0$ because it is the product of a function which converges to $0$ and a bounded function.
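A small numerical illustration of the claim $D\frac{1}{g}(a)=-\frac{Dg(a)}{g^2(a)}$ (my addition, with an arbitrary smooth $g$ on $\mathbb R^2$), comparing the formula against central finite differences:

```python
def g(x, y):
    return 1.0 + x * x + y * y

def grad_g(x, y):
    return (2 * x, 2 * y)

# claimed: grad(1/g) = -grad(g) / g^2
x, y = 0.7, -1.3
gx, gy = grad_g(x, y)
claimed = (-gx / g(x, y) ** 2, -gy / g(x, y) ** 2)

# central finite differences for grad(1/g)
h = 1e-6
fd = (
    (1 / g(x + h, y) - 1 / g(x - h, y)) / (2 * h),
    (1 / g(x, y + h) - 1 / g(x, y - h)) / (2 * h),
)
assert abs(fd[0] - claimed[0]) < 1e-8 and abs(fd[1] - claimed[1]) < 1e-8
```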
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3818066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$\int_I f\,dm\geq\vert I\vert$ for any interval $I$, prove $f(x)\geq 1\text{ a.e.}$ Problem:
$f$ is Lebesgue integrable function defined on $[a,b]$, and $\int_I f\,dm\geq\vert I\vert$ for any interval $I\subset [a,b]$. Prove $f(x)\geq 1 \text{ a.e. } x\in [a,b]$
As open sets can be written as the union of countably many disjoint open intervals, $\int_G f\,dm\geq m(G)$ for any open set $G\subset[a,b]$. Then I tried to approximate the measurable set $\{x\in [a,b]:f(x)<1\}$ with an open set, hoping this would be helpful, but it does not work. I am afraid this may not be the right way to solve this problem.
|
Lebesgue's Theorem: Here one can use a lemma related to the Hardy–Littlewood maximal function, which says that for a locally Lebesgue integrable function $\varphi$ defined over $\Bbb R^n$ we have $$\lim_{r\to 0+}\frac{1}{m(B_r(x))}\int_{B_r(x)}\varphi(y)\,dy=\varphi(x)\text{ a.e. } x\in \Bbb R^n.$$
See Folland's Real Analysis Theorem 3.18 for proof.
Note that according to the statement we have $\frac{\int_I f\ dm}{|I|}\geq 1$ for any interval $I\subseteq(a,b)$, so that $$f(x)=\lim_{r\to 0+}\frac{1}{m\big((x-r,x+r)\big)}\int_{(x-r,x+r)}f\ dm \geq 1\text{ for almost every }x\in (a,b).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3818192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{m²n}{n3^m +m3^n}$ $\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{m²n}{n3^m +m3^n}$.
I replaced $m$ by $n$ and $n$ by $m$ and added the two sums, which gives the term $\frac{mn(m+n)}{n3^m +m3^n}$. How do I proceed further?
| A closely related summation doable by hand is when $3^m$ multiplies the denominator of OP:
$$S=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{m²n}{3^m(n3^m +m3^n)}~~~(1)$$ $$S=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{1}{a_m(a_m+a_n)},~~ a_k=3^k/k~~~(2)$$
Interchange $m$ and $n$, then
$$S=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{1}{a_n(a_n+a_m)}~~~(3)$$
Adding (2) and (3), we get
$$2S=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \frac{1}{a_m a_n} =\left(\sum_ {k=1}^{\infty} \frac{1}{a_k}\right)^2= \left(\sum_ {k=1}^{\infty} \frac{k}{3^k}\right)^2.$$
Next use $$\sum_{k=1}^{\infty} kx^k=\frac{x}{(1-x)^2}, \qquad |x|<1.$$
$$\implies S=\frac{1}{2} \frac{9}{16}=\frac{9}{32}$$
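A truncated numerical check of this closed form (my addition; the tail beyond $m,n\geq 40$ is far below machine precision):

```python
# S = sum over m,n >= 1 of m^2 n / (3^m (n 3^m + m 3^n)); claimed value 9/32
S_trunc = sum(m * m * n / (3**m * (n * 3**m + m * 3**n))
              for m in range(1, 40) for n in range(1, 40))
assert abs(S_trunc - 9 / 32) < 1e-9
```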
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3818304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Sum of 3 unit vectors being shorter than 1 What is the probability for the sum of three unit vectors to be shorter than 1?
The vectors' direction angles have uniform distribution on $[0, 2\pi]$.
I've made simulations and I also used Wolfram Alpha to solve the final inequality below, so I'm pretty sure that the result is
$\frac 1 4$. How can I prove that?
If I rotate all the vectors by the same angle so that the first one's direction angle is $0$, then the length doesn't change. So the result is the same as this value:
$$Prob(\vert e^0 + e^{i\alpha} + e^ {i\beta} \vert < 1)$$
($\alpha$ and $\beta$ is uniformly distributed on $[0, 2\pi]$)
I'll denote the sum by z, so
$$z = e^0 + e^{i\alpha} + e^ {i\beta} = 1 + e^{i\alpha} + e^ {i\beta}$$
$$\vert z\vert^2 = z \overline z = (1 + e^{i\alpha} + e^ {i\beta})(1 + e^{-i\alpha} + e^ {-i\beta})$$
$$\vert z\vert^2 = 3 + \left( e^{i\alpha} + e^{-i\alpha} \right)+ \left(e^ {i\beta}+ e^ {-i\beta}\right) + \left(e^ {i(\alpha - \beta)} + e^ {-i(\alpha - \beta)}\right)= 3 + 2 \cos \alpha + 2 \cos \beta+ 2 \cos (\alpha - \beta)$$
This is non-negative so $\vert z \vert < 1 \Leftrightarrow \vert z \vert^2 < 1 $, that gives us
$$3 + 2 \cos \alpha + 2 \cos \beta+ 2 \cos (\alpha - \beta) < 1$$
$$2 + 2 \cos \alpha + 2 \cos \beta+ 2 \cos (\alpha - \beta) < 0$$
$$1 + \cos \alpha + \cos \beta+ \cos (\alpha - \beta) < 0$$
I need to solve this inequality. I have the solution from Wolfram Alpha:
and this gives me the $\frac 14$ probability. But how can I get this solution?
| The first two points $z_+$, $z_-$ have an angle $\alpha'$ among themselves, where $\alpha'$ is uniformly distributed in $[0,\pi]$. We may assume them as $\cos\alpha\pm i \sin\alpha$, where $\alpha$ is uniformly distributed in $\bigl[0,{\pi\over2}\bigr]$. This gives $z_++z_-=2\cos\alpha=:z_*$. The unit circle with center $z_*$ has an arc of length $2\alpha$ within the unit circle of the $z$-plane. We have a success iff the third random point is lying on this arc. The total probability that this happens is
$$p={2\over\pi}\int_0^{\pi/2}{2\alpha\over 2\pi}\>d\alpha={1\over4}\ .$$
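The geometric argument can be cross-checked by simulation (my addition, echoing the OP's own simulations), directly sampling the two free angles:

```python
import cmath, math, random

random.seed(12345)          # fixed seed for reproducibility
N = 200_000
hits = 0
for _ in range(N):
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    # |1 + e^{ia} + e^{ib}| < 1 ?
    if abs(1 + cmath.exp(1j * a) + cmath.exp(1j * b)) < 1:
        hits += 1
p = hits / N
assert abs(p - 0.25) < 0.01   # ~10 standard errors at this sample size
```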
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3818454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Let the distribution of a random vector (X,Y) be given by density. Are X,Y independent? Let the distribution of a random vector $(X,Y)$ be given by density.
$$f(x,y)=\left\{\begin{matrix}
e^{-x-y} & x,y\geq 0\\
0 & x<0 \hspace{2mm} \vee y<0
\end{matrix}\right.$$
Are $X,Y$ independent?
My take:
So I have to calculate $f_X(x) \cdot f_Y(y)$ and check if it equals $f(x,y)$.
$\int_{0}^{\infty}e^{-x-y}dy=e^{-x}$
$\int_{0}^{\infty}e^{-x-y}dx=e^{-y}$
$f_X(x) \cdot f_Y(y)=f(x,y)$?
$e^{-x} \cdot e^{-y}=e^{-x-y}$
The equality holds, so $X,Y$ are independent.
(If, for example, $x,y \in[1,5]$, then I would have to calculate the integral with upper limit $5$ and lower limit $1$, correct?)
| You are mostly correct, but as Henry commented, you should include the support with the expressions (and using indicators to do so saves type space). This verifies that the support for the product of the marginal functions is identical to the support for the joint function as well as the product and the joint being equal everywhere in that support. $$\begin{align}\because\qquad f_{\small X,Y}(x,y)&=\mathrm e^{-x-y}\,\mathbf 1_{0\leqslant x}\mathbf 1_{0\leqslant y}\\[2ex]f_{\small X}(x)&=\mathrm e^{-x}\mathbf 1_{0\leqslant x}\int_0^\infty \mathrm e^{-y}\,\mathrm d y\\[1ex]&=\mathrm e^{-x}\mathbf 1_{0\leqslant x}\\[2ex]f_{\small Y}(y)&=\mathrm e^{-y}\mathbf 1_{0\leqslant y}\int_0^\infty \mathrm e^{-x}\,\mathrm d x\\[1ex]&=\mathrm e^{-y}\mathbf 1_{0\leqslant y}\\[2ex]\therefore\qquad f_{\small X}(x)\,f_{\small Y}(y)&=\mathrm e^{-x}\mathbf 1_{0\leqslant x}\cdot\mathrm e^{-y}\mathbf 1_{0\leqslant y}\\[1ex]&=f_{\small X,Y}(x,y)\end{align}$$
$\qquad\mathbf 1_{E}=\begin{cases}1&:& E\text{ is true}\\0&:& \text{otherwise}\end{cases}$
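A symbolic check of the computation above (an illustration I'm adding; it assumes SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
joint = sp.exp(-x - y)  # the joint density on x, y >= 0

# Marginals: integrate the joint density over the other variable on [0, oo)
f_x = sp.integrate(joint, (y, 0, sp.oo))
f_y = sp.integrate(joint, (x, 0, sp.oo))

# On the common support, the product of the marginals equals the joint
product_matches = sp.simplify(f_x * f_y - joint) == 0
```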
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3818584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are the 2-norm and infinity norm equivalent in an infinite-dimensional vector space? I am trying to answer the following question:
Are the 2-norm and the infinity norm equivalent in $l^2$, the space of real valued sequences which are square summable?
($l^2 := \big\{ x = (x_i)_{i \in \mathbb{N}} \ \ \big| \ \ \sum_{i=1}^{\infty} |x_i|^2 < +\infty \big\}$)
I am aware that in an n-dimensional vector space, we are able to obtain the following inequality (whose lower bound still stands for an infinite dimensional space):
$\|x\|_\infty \le \|x\|_2 \le \sqrt{n} \|x\|_\infty$
But I am unsure of how to proceed. Are there any obvious sequences to use as counterexamples if the norms are not equivalent? Otherwise can someone give me a hint for how I can find a constant to replace $\sqrt{n}$ in the upper bound? Thanks!
| Take $x_n = (1,1,1,...,\underset{n\text{-th}}{1}, 0,0,....)\in \ell^2.$ Then $$||x_n ||_{\infty} =1$$ but $$||x_n||_2 =\sqrt{n},$$ so these norms can't be equivalent on $\ell^2$.
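Numerically (an editorial illustration), the ratio $||x_n||_2/||x_n||_\infty$ for these truncated all-ones sequences is exactly $\sqrt n$ and grows without bound, so no constant can replace $\sqrt n$ in the upper bound:

```python
import math

def sup_and_l2(v):
    """Sup norm and Euclidean norm of a finitely supported sequence."""
    return max(abs(t) for t in v), math.sqrt(sum(t * t for t in v))

# x_n = (1, ..., 1, 0, 0, ...) with n ones: the ratio ||x_n||_2 / ||x_n||_oo
# equals sqrt(n), which is unbounded in n.
ratios = []
for n in (1, 4, 100, 10_000):
    sup, l2 = sup_and_l2([1.0] * n)
    ratios.append(l2 / sup)
```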
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3818715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A mathcounts question The question is this.
There are $240$ pairs of numbers such as $(7,5318)$ or $(17,358)$ that can be formed using each of the digits $1$, $3$, $5$, $7$ and $8$ exactly once. What is the largest possible product of two such numbers?
Why wouldn't the answer be $87531 \times 87513$?
The answer key says $62333$.
This is a Mathcounts problem, from the $2011-2012$ handbook.
| No digits can be repeated, so $87531 \times 87513$ is not an option. Look for a two-digit $\times$ three-digit combination.
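A brute-force check over all splits of the digits (my own illustration, not part of the original answer) confirms the answer key: the maximum is $83 \times 751 = 62333$, a two-digit by three-digit pair.

```python
from itertools import permutations

def best_split_product(digits="13578"):
    """Try every ordering of the digits and every split point, forming a
    pair of numbers that together use each digit exactly once."""
    best, best_pair = 0, None
    for p in permutations(digits):
        s = "".join(p)
        for cut in range(1, len(s)):
            a, b = int(s[:cut]), int(s[cut:])
            if a * b > best:
                best, best_pair = a * b, (a, b)
    return best, best_pair
```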
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3819126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Suppose $K_n$ decomposes into edge-disjoint triangles. Prove that $6 \mid n-1$ or $6 \mid n-3$. My problem is finding a way to prove this. I was thinking that any $K_n$ graph could have at least one triangle as long as $n$ is $3$ or higher. Other than that I do not know where to go with proving this is true.
| Hint: To show this, you want to show that $n$ is odd and that $n$ is not congruent to $2\bmod 3$. It's good to do those separately.
For each part, it's useful to look at some examples. Why can't you split the edges of $K_4$ into three edge-disjoint triangles? How about $K_5$? Try counting things, like degrees or the total number of edges.
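The two counting conditions the hint points toward — every vertex of $K_n$ must have even degree $n-1$ (each triangle through a vertex uses two of its edges), and $3$ must divide the edge count $n(n-1)/2$ — already single out $n \equiv 1, 3 \pmod 6$. A quick check of that equivalence (an added sketch):

```python
def passes_counting_tests(n):
    """Necessary conditions: n - 1 even (each triangle at a vertex uses two
    incident edges) and 3 | n(n-1)/2 (each triangle uses three edges)."""
    return n % 2 == 1 and (n * (n - 1) // 2) % 3 == 0

admissible = [n for n in range(3, 40) if passes_counting_tests(n)]
```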
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3819382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A bartender stole champagne from a bottle that contained 50% of spirit and he replaced what he had stolen with champagne having 20% spirit. The bottle then contained only 25% spirit. How much of the bottle did he steal?
My approach: Let the total quantity of champagne (containing $50\%$ spirit) be $x$.
$\therefore$ it will contain x/2 spirit and x/2 non-spirit.
Suppose he steals y% from it. Let the total quantity of champagne containing(20 % of spirit) be z.
$\therefore$ It will contain z/5 spirit.
Now I'm not able to form an equation please help.
| Without (much) math.
Must steal enough champagne so that weighted average goes 5/6 of the way from 50 to 20
(i.e. $[25-20] \times 5 = [50 - 25]$).
Therefore, he must steal 5/6 of the bottle.
I mention this approach not as a recommended approach for someone new to the problem, but rather as a way of developing intuition.
After enough experience with weighted average problems, they become solvable without (much) math.
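For readers who prefer the algebra behind the weighted average (my own addition): with $y$ the stolen fraction, $\tfrac12(1-y) + \tfrac15 y = \tfrac14$, solved exactly here:

```python
from fractions import Fraction

# y = stolen fraction of the bottle; the remainder keeps 50% spirit and the
# refill contributes 20% spirit, with the mixture ending at 25%.
strong, weak, target = Fraction(1, 2), Fraction(1, 5), Fraction(1, 4)
y = (strong - target) / (strong - weak)
```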
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3819503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Distribution of first $9$ natural numbers The question states
"The first $9$ natural numbers are to be divided into three groups $g_1$, $g_2$ and $g_3$ of equal size. In how many different ways can this be done if the sum of the numbers in each group is odd?"
What I thought was to split it into cases. The three numbers chosen for one group should be of the following types:(for the sum to to be odd)
*$2$ even, $1$ odd $\to$ $\binom{4}{2}\binom{5}{1}$
*All $3$ odd $\to$ $\binom{5}{3}$
Summing this gives $40$, and I proceeded to choose for the next group by considering that case $1$ had occurred (for the selection of the first group) and, with that new sample space, tried forming the same cases. Similarly, assuming that case $2$ had occurred, I tried making cases for the selection of group $2$. But this just led to a lot of chaos. The given answer is $360$. How do I approach this question?
| The first nine natural numbers contain exactly five odd numbers. Each group must contain an odd number of natural numbers if the sum of the numbers in each group is to be odd. The only way to do this is to have one group with three odd numbers and two groups with exactly one odd number.
Choose which three odd numbers are placed in the group with three odd numbers, which can be done in $\binom{5}{3}$ ways. Choose which two of the four even numbers will be placed in the group with the smaller of the two remaining odd numbers, which can be done in $\binom{4}{2}$ ways. The other two even numbers must be placed in the group with the remaining odd number. Hence, there are $$\binom{5}{3}\binom{4}{2}$$ unlabeled groups of three natural numbers, each having an odd sum, which can be formed from the first nine natural numbers. Since the groups are labeled, there are $$\binom{5}{3}\binom{4}{2}3!$$ labeled groups of three natural numbers, each of which has an odd sum, which can be formed from the first nine natural numbers.
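A brute-force count over all labeled partitions (an editorial check) agrees with $\binom{5}{3}\binom{4}{2}3! = 360$:

```python
from itertools import combinations

def count_odd_sum_partitions():
    """Count ordered partitions of {1,...,9} into three labeled groups of
    three numbers each, with every group sum odd."""
    numbers = set(range(1, 10))
    count = 0
    for g1 in combinations(sorted(numbers), 3):
        rest = numbers - set(g1)
        for g2 in combinations(sorted(rest), 3):
            g3 = rest - set(g2)
            if all(sum(g) % 2 == 1 for g in (g1, g2, g3)):
                count += 1
    return count
```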
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3819652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Ambient isotopy and isotopy In the lecture notes of my knot theory course, there is a very short explanation of why we need ambient isotopy instead of just isotopy. It uses the following figure to illustrate, without any explanation. I don't understand why it is OK to turn a knot into the unknot under isotopy, but not under ambient isotopy. Can anyone give an intuitive explanation?
Reference
| The idea of the diagram is to demonstrate that isotopy between two embeddings is an incorrect notion of knot equivalence. Note that a knot is an embedding:
$k:S^1\to \mathbb{R}^3$ (or more conveniently $S^3$)
Given two knots (embeddings) $k_0$ and $k_1$, we may construct an isotopy of embeddings:
$k_t:S^1\times [0,1] \to S^3$
such that for each $t\in [0,1]$, $k_t$ is an embedding.
For any tame knot we may construct an isotopy that intuitively "pulls" the knotted portion of a knot down to a point (as pictured). For any $t<1$ the knot is not self-intersecting, and for $t=1$ the embedding is that of the unknot and thus not self-intersecting. Therefore, the map described is an isotopy of embeddings. However, this map is not differentiable (smooth) around $t=1$. To avoid making all tame knots equivalent, a more sensitive measure of equivalence is adopted in which the isotopy of embeddings is smooth.
To reconcile this with the standard definition of knot equivalence via ambient isotopy, it should be noted that any smooth isotopy of embeddings lifts to an ambient isotopy.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3819811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Simplify $\sum_{i=1}^{n}\sum_{j=i+1}^{n} (i+j)$ $$\sum_{i=1}^n \sum_{j=i+1}^n (i+j)$$
I'm pretty sure it's somewhat simple to solve but I can't get it done.
I know that the given sum is equal to $\frac{n}{2} (n^2-1)$.
I've been on this for 3h+ now and I'm really hoping someone can give me a hint :)
| Here is something to help you get started:
\begin{align}
\sum_{i=1}^n \sum_{j=i+1}^n (i+j) &= \left( \sum_{i=1}^n \sum_{j=i+1}^n i\right)+\left( \sum_{i=1}^n \sum_{j=i+1}^n j\right) \\
&= \left( \sum_{i=1}^n i (n-(i+1)+1)\right)+\left( \sum_{i=1}^n \left(\sum_{j=1}^n j -\sum_{j=1}^i j \right)\right) \\
\end{align}
Try to simplify it. The formulas for $\sum i$ and $\sum i^2$ are useful as well.
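Whatever route you take, the claimed closed form can be checked numerically (an added sketch) against $\frac{n}{2}(n^2-1)$:

```python
def double_sum(n):
    """Evaluate the double sum directly."""
    return sum(i + j for i in range(1, n + 1) for j in range(i + 1, n + 1))

# Compare the direct evaluation with the closed form n*(n^2 - 1)/2
checks = [(double_sum(n), n * (n * n - 1) // 2) for n in range(1, 12)]
```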
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3819902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving two equivalent statements that come from Otto Hölder's theorem Let $a \in \ell_q$ and $x \in \ell_p$. We also have $1/p + 1/q = 1$.
I want to show that
$$
\cfrac{|a_ix_i|}{||x||_p ||a||_q}
\leq
\cfrac{1}{p}\left(\cfrac{|x_i|}{||x||_p} \right)
+\cfrac{1}{q}\left(\cfrac{|a_i|}{||a||_q} \right) \\
\implies
\cfrac{\sum\limits_{i=1}^{\infty}|a_ix_i|}{||x||_p||a||_q} \leq 1
$$
I started with
$$
\cfrac{\sum |a_ix_i|}{||x||_p ||a||_q}
\leq
\cfrac{1}{p}\left(\cfrac{\sum |x_i|}{||x||_p} \right)
+\cfrac{1}{q}\left(\cfrac{\sum |a_i|}{||a||_q} \right)
$$
and then I expanded the summations and wrote out the definitions of the $p$-norm and $q$-norm, but I don't see how to make the terms disappear. I know that the norms must converge for a valid element of the space, but this fact didn't help me reduce the RHS to $1$. I know that somehow the right-hand side should add to one, but I'm struggling to see it.
The comments make me think that there's a mistake in the homework. Here's an image of the problem.
| Hint:
Use Young's inequality
for each summand.
This will look similar to your assumed inequality (maybe you meant to add the exponents so that it becomes Young's inequality).
edit: To address the edit of the question: Yes, there does seem to be a mistake. In the second part the exponents $p$ and $q$ are missing over the parts in the parentheses.
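A numerical illustration of the hint (my addition): applying Young's inequality $uv \le u^p/p + v^q/q$ to $u = |x_i|/||x||_p$, $v = |a_i|/||a||_q$ and summing over $i$ gives exactly $1/p + 1/q = 1$, i.e. Hölder's inequality.

```python
import random

rng = random.Random(1)
p = 3.0
q = p / (p - 1)  # conjugate exponent, so 1/p + 1/q = 1
x = [rng.uniform(0.1, 2.0) for _ in range(50)]
a = [rng.uniform(0.1, 2.0) for _ in range(50)]
norm_p = sum(t ** p for t in x) ** (1 / p)
norm_q = sum(t ** q for t in a) ** (1 / q)
holder_lhs = sum(xi * ai for xi, ai in zip(x, a))
```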
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3820118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cauchy-Schwarz Inequality problems Let $a,$ $b,$ $c,$ $d,$ $e,$ $f$ be nonnegative real numbers.
(a) Prove that
$$(a^2 + b^2)^2 (c^4 + d^4)(e^4 + f^4) \ge (ace + bdf)^4.$$
(b) Prove that
$$(a^2 + b^2)(c^2 + d^2)(e^2 + f^2) \ge (ace + bdf)^2.$$
I'm not sure how I should start approaching both problems. I believe I should use Cauchy-Schwarz, but I'm not sure.
Any help would be appreciated! Thanks in advance.
| These are both Cauchy–Schwarz applications, which you should try yourself. Here is the first one:
*$(c^4+d^4)(e^4+f^4) \geqslant (c^2e^2+d^2f^2)^2$
*$(a^2+b^2)(c^2e^2+d^2f^2) \geqslant (ace+bdf)^2$
Now combine the two to get what you want.
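A randomized spot check of both inequalities on nonnegative inputs (an editorial addition):

```python
import random

rng = random.Random(2)

def both_hold():
    """Sample nonnegative a,...,f and test inequalities (a) and (b)."""
    a, b, c, d, e, f = (rng.uniform(0.0, 3.0) for _ in range(6))
    rhs = a * c * e + b * d * f
    ineq_a = ((a**2 + b**2) ** 2 * (c**4 + d**4) * (e**4 + f**4)
              >= rhs ** 4 * (1 - 1e-12))
    ineq_b = ((a**2 + b**2) * (c**2 + d**2) * (e**2 + f**2)
              >= rhs ** 2 * (1 - 1e-12))
    return ineq_a and ineq_b

results = [both_hold() for _ in range(1_000)]
```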
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3820233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $a^2+b^2-ab=c^2$ for positive $a$, $b$, $c$, then show that $(a-c)(b-c)\leq0$
Let $a$, $b$, $c$ be positive numbers. If $a^2+b^2-ab=c^2$. Show that
$$(a-c)(b-c)\leq0$$
I have managed to get the equation to $(a-b)^2=c^2-ab$, but I haven't been able to make any progress.
Can someone help me?
| By the law of cosines (with a $60^{\circ}$ angle), we have that
$$a^{2}+ b^{2}- 2ab\cdot\sin 60^{\circ}= c^{2}\Rightarrow c:={\rm med}\left \{ a, b, c \right \}\Rightarrow \left ( a- c \right )\left ( b- c \right )\leq 0$$
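A quick random check (added for illustration) that the positive $c=\sqrt{a^2+b^2-ab}$ always lies weakly between $a$ and $b$ — which is exactly the statement $(a-c)(b-c)\le 0$:

```python
import math
import random

rng = random.Random(3)
products = []
for _ in range(10_000):
    a, b = rng.uniform(0.01, 10.0), rng.uniform(0.01, 10.0)
    c = math.sqrt(a * a + b * b - a * b)  # the positive c with a^2+b^2-ab=c^2
    products.append((a - c) * (b - c))
```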
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3820511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
A differentiable function on $(a,b)$ with nonzero derivative over $(a,b)$ where $f'(c)>0$ for some $c\in(a,b)$ means $f'(x)>0$ for all $x\in(a,b)$. It certainly seems correct that if you have at least one point $c\in(a,b)$ such that $f'(c)>0$, where $f'(x)\ne0$ for all $x\in(a,b)$, then $f'(x)>0$ for all $x\in(a,b)$. At least from a calculus standpoint, if $f$ is well-defined (not necessarily continuous), it should be the case that this is true. But how does one show this formally?
The exact statement I am attempting to prove is this:
Suppose $f:(a,b)\to\mathbb{R}$ is a differentiable function such that $f'(x)\ne0$ for all $x\in(a,b)$. Suppose there exists a point $c\in(a,b)$ with $f'(c)>0$, then $f'(x)>0$ for all $x\in(a,b)$.
I thought it might be easiest to prove by contradiction, and then use the intermediate value property to derive a contradiction. Any pointers would be helpful on this one!
EDIT: My original direct proof went something like this:
Let $f:(a,b)\to\mathbb{R}$ be a differentiable function such that $f'(x)\ne0$ for all $x\in(a,b)$. Take a point $c\in(a,b)$ such that $f'(c)>0$. Since $f'(x)\ne0$ for any $x\in(a,b)$ there can be no extreme values on $(a,b)$. Then take some $y\in\mathbb{R}$ such that $f'(c)=y$ and by the intermediate value property, we are guaranteed that $f'(a)<y<f'(b)$, that is $f$ is strictly increasing on $(a,b)$. Hence $f'(x)>0$ for all $x\in(a,b)$.
I'm a little worried about some of the logic in this. It feels incomplete to me.
| According to Darboux's Theorem, if $f$ is a differentiable function on an interval $I$, then the range of its derivative function $f'$ must also be an interval. Using the given condition, we can easily get the conclusion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3820623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
When are eight integers entirely determined by their pairwise sums? Alice picks 8 numbers, which are not necessarily different. Once she has picked them, she writes out the sums of all the pairs on a piece of paper, which she gives to Basil. Basil wins if he can correctly guess the original eight numbers which Alice chose. Can Basil be certain that he will win?
After a lot of trial and error I found the case where Alice picks the numbers $1,5,7,9,12,14,16,20$ which have the same pairwise sums as the numbers $2,4,6, 10,11,15,17,19$. However the trial and error method is extremely laborious and tedious. Is there a more mathematical approach, which can immediately give you the solution?
| Hint: if the collections $(a_1, \dots, a_k)$ and $(b_1, \dots, b_k)$ have identical pairwise sums, then the collections $(a_1, \dots, a_k, b_1+m, \dots, b_k+m)$ and $(b_1, \dots, b_k, a_1+m, \dots, a_k+m)$ also have identical pairwise sums. (The number $m$ has to be such that $a_i \neq b_j \pm m$ for all $i,j$, so that the numbers in each collection would be different.)
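For the asker's concrete example, one way to obtain it is to iterate this construction starting from the base pair $(1,5)$ and $(2,4)$ (both with pairwise sum $6$), first with $m=5$ and then with $m=10$. A quick multiset check of the example (my addition):

```python
from collections import Counter
from itertools import combinations

def pair_sums(nums):
    """Multiset of all pairwise sums."""
    return Counter(a + b for a, b in combinations(nums, 2))

A = [1, 5, 7, 9, 12, 14, 16, 20]
B = [2, 4, 6, 10, 11, 15, 17, 19]
same_sums = pair_sums(A) == pair_sums(B)
```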
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3820737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Does this inequality hold true for all $\alpha\in\mathbb{R}$? Let $x$ and $y$ be two real and positive numbers. Let $\alpha\in\mathbb{R}$. I am trying to understand if the inequality
$$ x^{\alpha} + y^{\alpha} \leq (x+y)^{\alpha}$$
holds true. By attemps, I found that it holds true only if $\alpha \geq1$. Could anyone please tell me if it is true?
Moreover, is there a simple way to justify it instead of proceeding by trial?
Thank you in advance!
| If the inequality holds for all $x, y \geq 0$, then take $x=y=1$ to get $2 \leq 2^{\alpha}$, which implies $\alpha \geq 1$.
If $\alpha \geq 1$, consider the function $f(x)=(x+y)^{\alpha}-x^{\alpha}-y^{\alpha}$ for fixed $y$. Since $f'(x)=\alpha (x+y)^{\alpha -1} -\alpha x^{\alpha -1} \geq 0$, the function is increasing on $[0,\infty)$. Since $f(0)=0$ we get $f(x) \geq 0$ for all $x \geq 0$. This proves the inequality when $\alpha \geq 1$.
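An empirical cross-check (added illustration): the inequality survives random positive inputs for $\alpha \ge 1$ and fails immediately for $\alpha < 1$ (e.g. $x=y=1$, $\alpha=1/2$ gives $2 > \sqrt 2$):

```python
import random

rng = random.Random(4)

def inequality_holds(alpha, trials=2_000):
    """Test x^a + y^a <= (x+y)^a on random positive x, y."""
    for _ in range(trials):
        x, y = rng.uniform(0.01, 10.0), rng.uniform(0.01, 10.0)
        if x ** alpha + y ** alpha > (x + y) ** alpha * (1 + 1e-12):
            return False
    return True
```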
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3820921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |