Q | A | meta |
|---|---|---|
In a triangle, prove that $ \sin A + \sin B + \sin C \leq 3 \sin \left(\frac{A+B+C}{3}\right) $
Prove that for any $\Delta ABC$ we have the following inequality:
$$ \sin A + \sin B + \sin C \le 3 \sin \left(\frac{A+B+C}{3}\right) $$
Could you use AM-GM to prove that?
| I'm sure there's a different way to approach this question, but here's one way using the graph of $\sin x $:
Consider $3$ points on the graph for $x\in(0,\pi)$. They are $(A,\sin A)$, $(B,\sin B)$, and $(C,\sin C)$ as shown. $A,B,C$ are such that $A+B+C=\pi$.
(Ignore the fact that $A,B$ and $C$ are angles of a triangle, they are just some values of $x$ that satisfy the condition $A+B+C=\pi$)
$A, B$ and $C$ are plotted on the graph of $y=\sin x$ and are joined to form a triangle, as shown.
Consider the centroid of the triangle, $G$, given by:
$$G=\left(\dfrac{A+B+C}{3}, \dfrac{\sin A+\sin B+\sin C}{3}\right)$$
Draw a line $PG$ as shown at $x=\dfrac{A+B+C}{3}$ (or $\dfrac{\pi}{3}$). This line intersects the curve at the point $P$ given by:
$$P=\left(\dfrac{A+B+C}{3}, \sin \left(\dfrac {A+B+C}{3}\right)\right)$$
From the figure, it is evident that the $y$-value of $P$ is greater than the $y$-value of $G$: since $\sin x$ is concave on $(0,\pi)$, the centroid of any triangle inscribed in the curve lies below the curve.
So we obtain the inequality $$\sin \left(\dfrac {A+B+C}{3}\right) >\dfrac{\sin A+\sin B+\sin C}{3}$$
Note that, for the case where $A=B=C$, we can deduce that $A=B=C= \dfrac{\pi}{3}$ and therefore $A,B$ and $C$ will coincide with point $P$ on the curve. For this particular case, we can deduce that $$\sin \left(\dfrac {A+B+C}{3}\right) =\dfrac{\sin A+\sin B+\sin C}{3}$$
Combining both the inequalities obtained, we get the desired result:
$$\sin \left(\dfrac {A+B+C}{3}\right) \geq \dfrac{\sin A+\sin B+\sin C}{3}$$
or
$$3 \sin \left(\dfrac {A+B+C}{3}\right) \geq \sin A+\sin B+\sin C$$
Credit for the answer: "Play with Graphs" by Amit Agarwal.
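As a supplementary numerical check (my own addition, not from the book), one can sample random angle triples with $A+B+C=\pi$ and confirm that the sum of sines never exceeds $3\sin(\pi/3)$:

```python
import math
import random

random.seed(0)
bound = 3 * math.sin(math.pi / 3)  # = 3*sqrt(3)/2, the claimed maximum

for _ in range(10000):
    # sample A, B > 0 with A + B < pi, then set C so that A + B + C = pi
    A = random.uniform(0, math.pi)
    B = random.uniform(0, math.pi - A)
    C = math.pi - A - B
    total = math.sin(A) + math.sin(B) + math.sin(C)
    assert total <= bound + 1e-12  # equality only when A = B = C = pi/3
```

Equality is approached as the sampled triple approaches the equilateral case.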
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How come the time complexity of Binary Search is log n I am watching this professor's video on Binary Search, but when he reached this point, I got a bit lost. How did he come up with the time complexity being $\log n$ just by breaking the array into a binary tree and knowing its height is $\log n$?
https://youtu.be/C2apEw9pgtw?t=969 . And then the time complexity becomes $\log_2 16 = 4$... how is that $\log n$ time complexity?
| First, it is important to note that the running time of an algorithm is usually represented as function of the input size. Then, we 'measure' the complexity by fitting this function into a class of functions. For instance, if $T(n)$ is the function describing your algorithm's running time and $g\colon\mathbb{N}\to\mathbb{R}$ is another function then
$$T\in O(g) \iff \text { there exist } c,n_0\in\mathbb{R}_{++} \text{ such that } T(n)\leq c g(n) \text{ for each } n\geq n_0. $$
Similarly, we say that
$$T\in \Omega(g) \iff \text { there exist } c,n_0\in\mathbb{R}_{++} \text{ such that } T(n)\geq c g(n) \text{ for each } n\geq n_0. $$
If $T$ belongs to both $O(g)$ and $\Omega(g)$ then we say that $T\in\Theta(g)$. Let's conclude that for the binary search algorithm we have a running time of $\Theta(\log(n))$. Note that we always solve a subproblem in constant time and then we are given a subproblem of size $\frac{n}{2}$. Thus, the running time of binary search is described by the recursive function $$T(n)=T\Big(\frac{n}{2}\Big)+\alpha.$$
Solving the equation above gives us that $T(n)=\alpha\log_2(n)$. Choosing constants $c=\alpha$ and $n_0=1$, you can easily conclude that the running time of binary search is $\Theta(\log(n))$.
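To make the logarithmic bound concrete, here is a small Python sketch (my own illustration, not from the answer) that counts loop iterations of binary search on a 16-element array and checks the worst case against $\lfloor\log_2 n\rfloor + 1$:

```python
import math

def binary_search_steps(arr, target):
    """Return (index of target or -1, number of loop iterations used)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, steps
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

arr = list(range(16))
# worst case over every possible target, present or absent
worst = max(binary_search_steps(arr, t)[1] for t in range(-1, 17))
assert worst == math.floor(math.log2(len(arr))) + 1  # 5 probes for n = 16
```

Each iteration halves the remaining interval, which is exactly why the count grows like $\log_2 n$.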
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
$2f(x)=f(y) \Rightarrow 2f(tx)=f(ty)$ Find all continuous and strictly monotonic functions $f:[0,\infty)\to \Bbb R$ such that:
*
*If there is a pair $(x,y)\neq (0,0)$ such that $2f(x)=f(y)$ then $2f(tx)=f(ty)$ for all $t>0$;
*There is at least one pair $(x,y)$ where the above condition holds.
What I got so far:
*
*Check that $f(x)=ax^b$ is a solution. I want to discover whether there are any others;
*$t\to 0$ then $2f(0)=f(0) \Rightarrow f(0)=0$;
*We can prove that $y> x$;
*$t:=t/x$ and then $2f(t)=f(tk)$, for all $t>0$, with $k=y/x >1$;
*$f(tk^n)=2^nf(t)$ and setting $t=1$ we get $f(k^n)=2^nf(1)$. This gives us that if $f(1)>0$ then $f$ is increasing; otherwise, $f$ is decreasing.
| I found another solution and I think it is interesting to post it here. It was inspired by the change of variable suggested by @Yuri Negometyanov.
Let's set $tx=2^z$ and $ty=2^w$, where $z,w \in \Bbb R$ and $t>0$. We also call $g:\Bbb R \to \Bbb R$ such that $g(s)=f(2^s)$, and then we get
$$2f(tx)=f(ty)\to 2f(2^z)=f(2^w)\to 2g(z)=g(w)$$
but $\frac{ty}{tx} =\frac{y}{x}=2^{w-z}$ and then $w=z+\log_2 (y/x)=z+d$. So,
$$2g(z)=g(z+d)$$
which has solution
$$g(z)=2^{z/d}\phi(z)$$
where $\phi(z)=\phi (z+d)$, which means that $\phi$ is a periodic function with period $d$. But $\phi$ has to satisfy some constraints. For example, $\phi(z)=\sin^2\left(z\frac{\pi}{d}\right)+10$ works and $\phi(z)=\sin^2\left(z\frac{\pi}{d}\right)$ does not. So,
suppose that $f$ is increasing (if it is decreasing we work with $-f$), then $g$ is increasing, which means that for $p>q$ we have $g(p)>g(q)$, and
$$2^{p/d}\phi(p)>2^{q/d}\phi(q)\to 2^{(p-q)/d}>\frac{\phi(q)}{\phi(p)}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Easiest way to solve $2^{\sin^2x}-2^{\cos^2x}=\cos 2x$ I'm trying to find the best shortcut for this problem
$$2^{\sin^2x}-2^{\cos^2x}=\cos 2x$$
I tried first to go through using trigonometric identities
$$\cos^2x+\sin^2x=1 \quad\text{and}\quad 2\sin^2x=1-\cos 2x$$
After substitution I reached this equation
$$2^{1-\cos x}-2^{(1-\cos 2x)/2}\cos 2x -2=0$$
Later, I supposed to use substitution to obtain,
$$u^4-2u^2-2u=0$$
where $u=\sqrt{v}$, $v=\frac{2}{w}$, and $w=\cos 2x$
As a result, I obtained two roots in $\mathbb{R}$: one of them is zero and the other is $1.76929$. This leads me to doubt my process.
So, is there a hint or procedure that can be followed to obtain the desired result?
| HINT. You immediately have a solution if you note that for $x=\dfrac{\pi}{4}$ one has $\sin(x)=\cos(x)$ and $\cos(2x)=0$. The solutions are given by $$x=\dfrac{(2n+1)\pi}{4};\space\space n\in\mathbb Z$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Compute polynomial $p(x)$ if $x^5=1,\, x\neq 1$ [reducing mod $\textit{simpler}$ multiples] The following question was asked on a high school test, where the students were given a few minutes per question, at most:
Given that,
$$P(x)=x^{104}+x^{93}+x^{82}+x^{71}+1$$
and,
$$Q(x)=x^4+x^3+x^2+x+1$$
what is the remainder of $P(x)$ divided by $Q(x)$?
The given answer was:
Let $Q(x)=0$. Multiplying both sides by $x-1$:
$$(x-1)(x^4+x^3+x^2+x+1)=0 \implies x^5 - 1=0 \implies x^5 = 1$$
Substituting $x^5=1$ in $P(x)$ gives $x^4+x^3+x^2+x+1$. Thus,
$$P(x)\equiv\mathbf0\pmod{Q(x)}$$
Obviously, a student is required to come up with a “trick” rather than doing brute force polynomial division. How is the student supposed to think of the suggested method? Is it obvious? How else could one approach the problem?
| I would have thought that bright students, who knew $1+x+x^2+\cdots +x^{n-1}= \frac{x^n-1}{x-1}$ as a geometric series formula, could say
$$\dfrac{P(x)}{Q(x)} =\dfrac{x^{104}+x^{93}+x^{82}+x^{71}+1}{x^4+x^3+x^2+x+1}$$
$$=\dfrac{(x^{104}+x^{93}+x^{82}+x^{71}+1)(x-1)}{(x^4+x^3+x^2+x+1)(x-1)}$$
$$=\dfrac{x^{105}-x^{104}+x^{94}-x^{93}+x^{83}-x^{82}+x^{72}-x^{71}+x-1}{x^5-1}$$
$$=\dfrac{x^{105}-1}{x^5-1}-\dfrac{x^{104}-x^{94}}{x^5-1}-\dfrac{x^{93}-x^{83}}{x^5-1}-\dfrac{x^{82}-x^{72}}{x^5-1}-\dfrac{x^{71}-x}{x^5-1}$$
$$=\dfrac{x^{105}-1}{x^5-1}-x^{94}\dfrac{x^{10}-1}{x^5-1}-x^{83}\dfrac{x^{10}-1}{x^5-1}-x^{72}\dfrac{x^{10}-1}{x^5-1}-x\dfrac{x^{70}-1}{x^5-1}$$
and that each division at the end would leave zero remainder for the same reason, replacing the original $x$ by $x^5$.
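As a sanity check of the zero remainder (my own addition, not part of the quoted answer), one can carry out the polynomial division directly with exact integer coefficients, lowest degree first:

```python
def poly_rem(p, q):
    """Remainder of p(x) divided by q(x); coefficient lists, lowest degree
    first. Assumes q is monic, so all arithmetic stays in the integers."""
    p = list(p)
    while True:
        while p and p[-1] == 0:   # strip leading zeros
            p.pop()
        if len(p) < len(q):
            return p
        shift = len(p) - len(q)
        factor = p[-1]            # leading coefficient to cancel
        for i, c in enumerate(q):
            p[shift + i] -= factor * c

# P(x) = x^104 + x^93 + x^82 + x^71 + 1,  Q(x) = x^4 + x^3 + x^2 + x + 1
P = [0] * 105
for e in (0, 71, 82, 93, 104):
    P[e] = 1
Q = [1, 1, 1, 1, 1]

assert poly_rem(P, Q) == []             # zero remainder, as claimed
assert poly_rem([0]*5 + [1], Q) == [1]  # and indeed x^5 = 1 (mod Q)
```

The second assertion is the key fact behind the trick: $x^5 \equiv 1 \pmod{Q(x)}$.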
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63",
"answer_count": 6,
"answer_id": 5
} |
Show that $\min\{X_{1},X_{2},\ldots,X_{n}\}$ is sufficient for $\mu$ when $\sigma$ is fixed
Let $X_{1},X_{2},\ldots,X_{n}$ be a sample from a population with density $p(x,\theta)$ given by
\begin{align*}
p(x,\theta) = \frac{1}{\sigma}\exp\left\{-\left(\frac{x-\mu}{\sigma}\right)\right\}
\end{align*}
if $x\geq \mu$ and $0$ otherwise. Here $\theta = (\mu,\sigma)$ with $-\infty < \mu < \infty$ and $\sigma > 0$.
*
*(a) Show that $\min\{X_{1},X_{2},\ldots,X_{n}\}$ is sufficient for $\mu$ when $\sigma$ is fixed.
*(b) Find a one-dimensional sufficient statistic for $\sigma$ when $\mu$ is fixed.
*(c) Exhibit a two-dimensional sufficient statistic for $\theta$.
MY ATTEMPT
(b) In the first place, let us determine the likelihood function for this sample, considering that $\mu$ is fixed:
\begin{align*}
L(\textbf{x},\theta) = \prod_{i=1}^{n}\frac{1}{\sigma}\exp\left\{-\left(\frac{x_{i}-\mu}{\sigma}\right)\right\} = \frac{1}{\sigma^{n}}\exp\left\{-\frac{1}{\sigma}\left(\sum_{i=1}^{n}x_{i} - n\mu\right)\right\}
\end{align*}
In this case, we can factor $L(\textbf{x},\theta) = h(x)g_{\sigma}(T(\textbf{x}))$, where
\begin{align*}
h(x) = 1\quad\text{and}\quad g_{\sigma}(T(\textbf{x})) = \frac{1}{\sigma^{n}}\exp\left\{-\frac{1}{\sigma}\left(\sum_{i=1}^{n}x_{i} - n\mu\right)\right\}
\end{align*}
Therefore the statistic $T(\textbf{x}) = \sum_{i=1}^{n}x_{i}$ is sufficient for $\sigma$.
But I do not know if this is right neither do I know how to approach the other two items. Can somebody help me out? Thanks in advance!
| By the Neyman-Fisher factorization theorem, $T(\boldsymbol{x})$ is a sufficient statistic for $\mu$ if and only if there exist two nonnegative functions $g(T(\boldsymbol{x}), \mu)$ and $h(\boldsymbol{x})$ such that:
$$L(\boldsymbol{x}, \mu) = g(T(\boldsymbol{x}), \mu)h(\boldsymbol{x})$$
Note that I am directly considering only $\mu$ as a parameter, since $\sigma$ is known. We have that:
$$L(\boldsymbol{x}, \mu) = \frac{1}{\sigma^{n}}\exp \left \{ -\frac{1}{\sigma}\sum_{i = 1}^{n}x_{i} \right \}\exp\left \{ \frac{n\mu}{\sigma} \right \}I(x_{(1)} > \mu)$$
Hence, letting $T(\boldsymbol{x})$ $=$ $\min \left \{ X_{1},\ldots,X_{n} \right \}$ $\equiv$ $X_{(1)}$ and:
$$g(T(\boldsymbol{x}), \mu) = \exp\left \{ \frac{n\mu}{\sigma} \right \}I(x_{(1)} > \mu)$$
and:
$$h(\boldsymbol{x}) = \frac{1}{\sigma^{n}}\exp \left \{ -\frac{1}{\sigma}\sum_{i = 1}^{n}x_{i} \right \}$$
we can conclude that the minimum is a sufficient statistic. In general, when writing the joint distribution, remember to include its support. In particular, if the variables are iid and each of them has to be greater than the parameter, then in the support of the joint distribution the minimum of them has to be greater than the parameter, since this implies that all of them are. Similarly, if instead the support of each (iid) variable requires it to be smaller than the parameter, then in the joint distribution only the maximum has to be smaller than it. These two observations are useful for finding the minimal sufficient statistic as well.
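To illustrate part (a) numerically, here is a Python sketch of my own (made-up data, $\sigma = 1$): for fixed $\sigma$, the shape of the log-likelihood in $\mu$ depends on the data only through the sample minimum, so two samples of the same size sharing the same minimum give identical likelihood ratios in $\mu$:

```python
import math

def log_lik(mu, xs, sigma=1.0):
    """Log-likelihood of the shifted-exponential sample xs at location mu."""
    n = len(xs)
    if min(xs) < mu:        # density is zero unless every x_i >= mu
        return -math.inf
    return -n * math.log(sigma) - (sum(xs) - n * mu) / sigma

# two samples of size 3 with the same minimum but different sums
xs1 = [2.0, 5.0, 9.0]
xs2 = [2.0, 3.0, 4.0]

mus = [0.0, 0.5, 1.0, 1.5, 1.9, 2.5]
ratios1 = [log_lik(m, xs1) - log_lik(0.0, xs1) for m in mus]
ratios2 = [log_lik(m, xs2) - log_lik(0.0, xs2) for m in mus]
assert all(r1 == r2 or abs(r1 - r2) < 1e-12
           for r1, r2 in zip(ratios1, ratios2))
```

The $\mu$-dependent factor $e^{n\mu/\sigma}I(x_{(1)}>\mu)$ is the same for both samples, which is exactly what the factorization asserts.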
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to take the derivative of minimum of a norm? Suppose $f: \mathbb{R}^n \rightarrow \mathbb{R}$ where $f$ is the following:
$$
f(z) =
\begin{cases}
0, & z \in C \\
\min_{x\in C} \frac{1}{2} \|x-z\|_2^2, & z\notin C
\end{cases}
$$
where $C$ is a closed convex set in $\mathbb{R}^n$ and $x$ is a point in $C$.
How can we find the derivative of $f$?
I know the answer is
$$
f'(z) =
\begin{cases}
0, & z \in C \\
z-x, & z\notin C
\end{cases}
$$
but how would we find this derivative using the definition?
Is there any way to get that using:
$$
f'(z;d) = \lim_{t \rightarrow 0^{+}} \frac{\min_{x\in C} \frac{1}{2} \|x-(z+td)\|_2^2 - \min_{x\in C} \frac{1}{2} \|x-z\|_2^2}{t}
$$
| Actually the answer is
$$f^\prime(z) = z - x,$$
and not $x - z$. Here $x$ is a point of $C$ nearest to $z$.
First of all, let us denote by $x(u)$ a point of $C$ nearest to $z + u$. We will show that $\|x(u) - x\| \to 0$ as $u\to 0$ (i.e., the function mapping $z$ to its nearest point is continuous). Indeed, assume for contradiction that there is a limit point of $x(u)$ which is different from $x$, i.e., there is a sequence $u_m\in\mathbb{R}^n$ such that $u_m\to 0$ and $x(u_m) \to x^\prime \neq x$. Then $x^\prime\in C$ since $C$ is closed. On the other hand, $x^\prime$ is farther from $z$ than $x$ is (this is because the nearest point is unique). So assume that $\|x^\prime - z\| - \|x - z\| \ge \varepsilon > 0$. Let $m$ be such that $\| u_m\| \le \varepsilon/10$ and $\|x(u_m) - x^\prime\| \le \varepsilon/10$. Then $z + u_m$ is closer to $x$ than to $x(u_m)$. Indeed, by using the triangle inequality multiple times we get:
\begin{align*}
\| z + u_m - x\| &\le \|z - x\| + \|u_m\| \le \|z - x\| + \varepsilon/10\\
&\le \|z - x^\prime\| - \varepsilon + \varepsilon/10 = \|z - x^\prime\| - 9\varepsilon/10 \\
&\le \|z - x(u_m)\| + \|x(u_m) - x^\prime\| - 9\varepsilon/10 \le \|z - x(u_m)\| - 8\varepsilon/10 \\
&\le \|z + u_m - x(u_m)\| + \|-u_m\| - 8\varepsilon/10 \le \|z + u_m - x(u_m)\| - 7\varepsilon/10,
\end{align*}
and this contradicts the definition of $x(u_m)$.
By definition we have to show that
$$f(z + u) - f(z) - \langle u, z - x\rangle = o(\|u\|)$$
for every $z$ when $u\to 0$. Let us write it down:
\begin{align*}
f(z + u) - f(z) - \langle u, z - x\rangle = 1/2\langle z + u - x(u), z + u - x(u)\rangle - 1/2 \langle z - x, z - x\rangle - \langle u, z - x\rangle\end{align*}
Now, imagine that we had $x$ instead of $x(u)$ in the first term of the last expression. Then this expression would be equal to just $\langle u, u\rangle = o(\|u\|)$, as required. However, we have not $x$ but $x(u)$. So to finish the argument we have to show that the difference
\begin{align*}
\langle z + u - x(u), z + u - x(u)\rangle - \langle z + u - x, z + u - x\rangle
\end{align*}
is $o(\|u\|)$ as $u\to 0$. First of all, by definition $z + u$ is closer to $x(u)$ than to $x$, hence:
$$\langle z + u - x(u), z + u - x(u)\rangle \le \langle z + u - x, z + u - x\rangle.$$
On the other hand:
\begin{align*}
\langle z + u - x, z + u - x\rangle &= \langle z - x, z - x\rangle + 2 \langle u, z - x\rangle + \langle u, u\rangle \\
&\le \langle z - x(u), z - x(u)\rangle + 2 \langle u, z - x\rangle + \langle u, u\rangle
\end{align*}
The latter is because $z$ is closer to $x$ than to $x(u)$. Once again, imagine that we replace $x$ by $x(u)$ in the last expression. Then we obtain exactly $\langle z + u - x(u), z + u - x(u)\rangle$. However, the cost of that is $\langle u, z - x\rangle - \langle u, z - x(u)\rangle = \langle u, x(u) - x\rangle$. Fortunately, the absolute value of the last expression is at most $\|u\| \cdot \|x(u) - x\| = o(\|u\|)$, the latter due to the fact that $x(u) \to x$ as $u\to 0$, as we have proved.
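A quick numerical sanity check of the result (my own sketch, using the closed unit disk in $\mathbb{R}^2$, where the nearest-point map has a closed form) compares the claimed gradient $z - x$ against central finite differences:

```python
import math

def proj(z):
    """Nearest point of the closed unit disk to z."""
    r = math.hypot(z[0], z[1])
    return z if r <= 1.0 else (z[0] / r, z[1] / r)

def f(z):
    """f(z) = (1/2) * dist(z, C)^2 for C the unit disk."""
    p = proj(z)
    return 0.5 * ((z[0] - p[0]) ** 2 + (z[1] - p[1]) ** 2)

z = (2.0, 1.0)                      # a point outside the disk
p = proj(z)
grad = (z[0] - p[0], z[1] - p[1])   # claimed derivative z - x

h = 1e-6                            # central finite differences
num = ((f((z[0] + h, z[1])) - f((z[0] - h, z[1]))) / (2 * h),
       (f((z[0], z[1] + h)) - f((z[0], z[1] - h))) / (2 * h))
assert abs(num[0] - grad[0]) < 1e-5 and abs(num[1] - grad[1]) < 1e-5
```

For the disk, $x = z/\|z\|$ and $f(z) = \tfrac12(\|z\|-1)^2$, so the agreement can also be checked by hand.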
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3224980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Understanding numerical methods for nonlinear hyperbolic equation For the linear advection $u_t + au_x = 0$, we have the explicit Lax-Friedrichs scheme
$$ u_k^{n+1} = \frac{1}{2} (u_{k+1}^n + u_{k-1}^n) - a\frac{\Delta t }{2 \Delta x } (u_{k+1}^n - u_{k-1}^n) $$
But, if we replace this equation with $u_t + ( f(u) )_x = 0 $, the L-F method now reads
$$ u_k^{n+1} = \frac{1}{2} (u_{k+1}^n + u_{k-1}^n) - \frac{\Delta t }{2 \Delta x } (f( u_{k+1}^n) - f(u_{k-1}^n)) \tag{A}$$
If our $u_0(x)$ is smooth or piecewise smooth, then my ${\bf understanding}$ is that the method above will work just fine ${\bf unless}$ a shock or rarefaction forms, in which case the exact solution is no longer a classical solution but a weak one, and the numerical method may not converge. But I have learnt that if we can put our method in the form
$$ u_k^{n+1} = u_k^n - \frac{ \Delta t }{\Delta x} [ F(u_k^n, u_{k+1}^n) - F(u_{k-1}^n, u_k^n)] \tag{B}$$
and $F(u,w)$ is called the ${\bf numerical \; flux}$, then we guarantee our method won't converge to a non-solution. Now, my book claims that L-F can also be written in conservative form if we take
$$ F(u_k, u_{k+1}) = \frac{ \Delta x }{2 \Delta t} ( u_k - u_{k+1}) + \frac{1}{2} ( f(u_k) + f(u_{k+1}) ) $$
and I assume this is done by some manipulation of equation (A). But here is where my confusion arises. Aren't equations (A) and (B) just the same? What is special about equation (B)? Can someone clarify this for me?
| Both conservative and non-conservative Lax-Friedrichs schemes are identical. To see this, one injects the expression of the numerical flux in $(\text B)$, which gives $(\text A)$.
When considering smooth solutions, one can either use $u_t + f'(u) u_x = 0$ or $u_t + f(u)_x = 0$ to develop numerical methods that will converge towards the strong solution. However, in case of nonsmooth solutions it is advantageous to use only the conservative PDE $u_t + f(u)_x = 0$ to develop numerical methods, more precisely its integral form
$$
\frac{\text d}{\text d t} \int_{x_1}^{x_2} u\,\text d x = f (u|_{x=x_1})- f (u|_{x=x_2}) ,
$$
which leads to the definition of conservative methods. According to the Lax-Wendroff theorem, a stable conservative method will converge towards a weak solution of the PDE. Hence, discontinuities will satisfy the Rankine-Hugoniot condition, i.e., shock waves will propagate at the correct speed. Note that this is not necessarily true for non-conservative methods (see e.g. §12.9 pages 237-238 of (1)).
(1) R.J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge university press, 2002. doi:10.1017/CBO9780511791253
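To see the equivalence of forms (A) and (B) concretely, here is a Python sketch of my own for the particular choice of Burgers' flux $f(u)=u^2/2$ on a periodic grid (grid size, time step and initial data are arbitrary choices satisfying the CFL condition). It checks that the two update formulas agree to machine precision and that the conservative form preserves the total mass:

```python
import math

def f(u):                      # Burgers' flux (an arbitrary example choice)
    return 0.5 * u * u

def step_A(u, dt, dx):         # form (A) of the Lax-Friedrichs update
    n = len(u)
    return [0.5 * (u[(k+1) % n] + u[(k-1) % n])
            - dt / (2 * dx) * (f(u[(k+1) % n]) - f(u[(k-1) % n]))
            for k in range(n)]

def F(a, b, dt, dx):           # Lax-Friedrichs numerical flux
    return dx / (2 * dt) * (a - b) + 0.5 * (f(a) + f(b))

def step_B(u, dt, dx):         # conservative form (B)
    n = len(u)
    return [u[k] - dt / dx * (F(u[k], u[(k+1) % n], dt, dx)
                              - F(u[(k-1) % n], u[k], dt, dx))
            for k in range(n)]

n = 100
dx = 2 * math.pi / n
dt = 0.4 * dx                  # CFL: max |f'(u)| = max |u| = 1 here
u = [math.sin(k * dx) for k in range(n)]
mass0 = sum(u) * dx

for _ in range(50):
    ua, ub = step_A(u, dt, dx), step_B(u, dt, dx)
    assert max(abs(a - b) for a, b in zip(ua, ub)) < 1e-12  # (A) == (B)
    u = ub

assert abs(sum(u) * dx - mass0) < 1e-10  # flux differences telescope
```

With periodic boundaries the flux differences in form (B) telescope, so the discrete total mass is conserved exactly; that is the structural property form (A) hides.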
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Solving pde on a strip
Let $\Omega = \lbrace (x_1,x_2) \in \mathbb{R}^2 \mid x_1>0,\ 0<x_2<1\rbrace$. Solve $$
\begin{aligned}
u_t(\boldsymbol{x},t) &= \nabla^2 u(\boldsymbol{x},t) , \qquad \boldsymbol{x}\in \Omega, t> 0,\\
u(\boldsymbol{x},0) &= g(\boldsymbol{x}) , \\
u_{x_1}(0,x_2,t) &= 0 ,\\
u(x_1,0,t) &= u(x_1,1,t) = 0,
\end{aligned} $$
I'm trying to solve this equation but I'm stuck. I know how to use the Fourier transform, but that only works for an infinite domain. Here we have an infinite strip in the plane. How do we handle this heat equation with these boundary conditions?
| Ignoring the initial condition, the generic separated solution has the form
$$
A(n,s)\sin(n\pi x_2)\cos(s x_1)e^{-(s^2+n^2\pi^2)t} \\
n=1,2,3,\cdots,\;\;\; s \ge 0.
$$
These are required because $\sin(n\pi x_2)$ vanishes at $x_2=0,1$, and because $\cos'(sx_1)=0$ at $x_1=0$. $A(n,s)$ is an unknown coefficient function that is determined by the initial condition $u(x_1,x_2,0)=g(x_1,x_2)$.
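As a sketch-level check (my own addition, with arbitrarily chosen mode numbers and an interior point), one can verify by finite differences that a separated mode satisfies the heat equation and the stated boundary conditions:

```python
import math

N, S = 2, 1.3   # arbitrary mode numbers: n = 2, s = 1.3

def u(x1, x2, t):
    return (math.sin(N * math.pi * x2) * math.cos(S * x1)
            * math.exp(-(S * S + (N * math.pi) ** 2) * t))

x1, x2, t, h = 0.7, 0.3, 0.05, 1e-4
u_t  = (u(x1, x2, t + h) - u(x1, x2, t - h)) / (2 * h)
u_11 = (u(x1 + h, x2, t) - 2 * u(x1, x2, t) + u(x1 - h, x2, t)) / h**2
u_22 = (u(x1, x2 + h, t) - 2 * u(x1, x2, t) + u(x1, x2 - h, t)) / h**2
assert abs(u_t - (u_11 + u_22)) < 1e-3          # u_t = laplacian of u

assert abs(u(x1, 0.0, t)) < 1e-15               # u = 0 at x2 = 0
assert abs(u(x1, 1.0, t)) < 1e-12               # u = 0 at x2 = 1
u_x1_at_0 = (u(h, x2, t) - u(-h, x2, t)) / (2 * h)
assert abs(u_x1_at_0) < 1e-8                    # Neumann condition at x1 = 0
```

The decay rate $s^2+n^2\pi^2$ is exactly what makes the time derivative match the Laplacian for each mode.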
The general solution is an integral over $s$ and a sum over $n$ of the separated solutions:
$$
u(x_1,x_2,t)= \\ \int_0^\infty\left(\sum_{n=1}^{\infty}A(n,s)\sin(n\pi x_2)e^{-n^2\pi^2 t}\right)\cos(sx_1)e^{-s^2 t}ds
$$
The coefficients $A(n,s)$ are determined by the initial condition $u(x_1,x_2,0)=g(x_1,x_2)$, where $g$ is given:
$$
g(x_1,x_2) = \int_0^\infty\left(\sum_{n=1}^{\infty}A(n,s)\sin(n\pi x_2)\right)\cos(sx_1) ds
$$
The expression in parentheses on the right is the inverse cosine transform of $g$ in the variable $x_1$. That is,
$$
\frac{2}{\pi}\int_0^{\infty}g(x_1,x_2)\cos(sx_1)dx_1
= \sum_{n=1}^{\infty}A(n,s)\sin(n\pi x_2)
$$
The coefficients $A(n,s)$ are completely determined by the orthogonality of the $\sin(n\pi x_2)$ functions on $0 < x_2 < 1$ and the "orthogonality" of the $\cos(sx_1)$ functions on $[0,\infty)$ through the Fourier cosine transform:
$$
A(n,s) =
\frac{\int_0^1\left(\frac{2}{\pi} \int_0^\infty g(x_1,x_2)\cos(sx_1)dx_1 \right)\sin(n\pi x_2)dx_2}{\int_0^1\sin^2(n\pi x_2)dx_2}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Probability of X being a trick coin (heads every time) after heads is flipped k amount of times
A magician has 24 fair coins, and 1 trick coin that flips heads
every time.
Someone robs the magician of one of his coins, and flips it $k$ times
to check if it's the trick coin.
A) What is the probability that the coin the robber has is the trick
coin, given that it flips heads all $k$ times?
B) What is the smallest number of times they need to flip the coin to
believe there is at least a 90% chance they have the trick coin, given
that it flips heads on each of the flips?
Here is my approach:
Let $T$ be the event that the robber has the trick coin
Let $H$ be the event where the robber flips a heads k times in a row
$Pr(T) = 1/25$
$Pr(H|T) = 1$
$Pr(T') = 24/25$
$Pr(H|T') = 1/2$ when $k=1$, $1/4$ when $k=2$, $1/8$ when $k=3$... etc
$Pr(T|H) = (1 * 1/25) / (1 * 1/25 + Pr(H|T') * 24/25) = 1/13, 1/7, 1/4,...$ etc
So the Pr(T|H) answer changes for every k, do I answer with the formula? How can I answer A? How do I make a probability distribution when k can be infinite?
Also is B 8 flips? Since when k = 8, Pr(T|H) = 1/256.
Thanks for any help.
| $$P(trick|H_k)=\frac {P(trick \cap H_k)} {P(H_k)}=\frac {P(H_k|trick).P(trick)} {P(H_k)}.$$
Now,
$$\begin{align}P(trick)&= \frac{1}{25}\\
P(H_k|trick)&=1\\
P(H_k)&=P(H_k|trick)\cdot P(trick)+P(H_k|fair)\cdot P(fair)\\
&=1\cdot\frac{1}{25}+\frac{1}{2^k}\cdot\frac{24}{25}\end{align}$$
Hope this helps
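With the 25 coins of the problem (24 fair plus 1 trick), the formula can be evaluated directly; this Python sketch of my own also answers part B by searching for the smallest $k$ with posterior at least 0.9:

```python
def posterior_trick(k, n_fair=24):
    """P(trick | k heads in a row) with n_fair fair coins and one trick coin."""
    n = n_fair + 1
    p_trick = 1 / n
    p_heads = 1 * p_trick + (0.5 ** k) * (n_fair / n)
    return p_trick / p_heads

assert abs(posterior_trick(1) - 1/13) < 1e-12    # matches the OP's k = 1 value

k = 1
while posterior_trick(k) < 0.9:
    k += 1
assert k == 8                                    # part B: 8 heads suffice
assert posterior_trick(7) < 0.9 < posterior_trick(8)
```

Note that at $k=8$ the value $1/2^8 = 1/256$ is $P(H_k\mid fair)$, not the posterior; the posterior is $1/(1+24/256)\approx 0.914$.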
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that $2^{2^{2^{\cdot^{\cdot^{2}}}}} \mod 9 = 7$
Prove that $\underbrace{2^{2^{2^{\cdot^{\cdot^{2}}}}}}_{2016 \mbox{
times}} \mod 9 = 7$
I think that it can be done by induction:
Base:
$2^{2^{2^{2}}} \equiv 2^{16} \equiv 2^8 \cdot 2^8 \equiv 2^4 \cdot 2^4 \cdot2^4 \cdot2^4 \equiv 7^2 \cdot 7^2 \equiv 4 \cdot 4 \equiv 16 \equiv 7$
Assume that it is true for $n$ times.
Let $a_n = \underbrace{2^{2^{2^{\cdot^{\cdot^{2}}}}}}_{n \mbox{
times}}$
$$ a_{n+1} = 2^{a_n} = 2^{9k+7} = 2^7 \cdot 2^{9k} \equiv 128 \cdot 1 \equiv 2 \neq 7 $$
Where did I go wrong?
| Just because we are working in modulo $9$, that doesn't mean that the exponents work in modulo $9$. In general, $2^{9k}\not\equiv 1$. Actually, if $k$ is odd, $2^{9k}\equiv -1$. Which is exactly what goes wrong here: You are getting $-7\equiv 2$, rather than $7$.
You can, of course, try to take this into account in your induction proof. But you run into the problem of trying to keep track of even and odd modulo $9$, which means that you've basically gone up to modulo $18$, and if we're unlucky (I don't think we are in this case), this will just explode.
We have $2^6 = 64\equiv 1$, so looking at the exponent modulo $6$ seems like it would be more natural than modulo $9$.
To help keeping things straight, I will introduce colours to our "power tower":
$$
2^{\displaystyle\color{red}2^{\displaystyle\color{blue}2^{\displaystyle\color{orange}2^{\cdot^{\cdot^2}}}}}
$$
where the base is black, the first exponent is red, the second is blue, the third is orange, and after that it turns out that we can stop caring.
Now, looking at the red number modulo $6$, we have $\color{red}2^\color{blue}{3} \equiv \color{red}2^\color{blue}{1}$ and $\color{red}2^\color{blue}{4}\equiv \color{red}2^\color{blue}{2}$, so looking at the blue number modulo $2$ seems like it could be useful (just as long as we make certain that it is strictly positive).
And we see that the blue number is strictly positive and even (it's a $2$ raised to the orange number, and the orange number is strictly positive). So it may as well be a $2$ as far as the red number is concerned. Which means that the red number might as well be a $4$ as far as the black number is concerned. Thus we end up with $2^\color{red}4\equiv 7$ modulo $9$ for our final answer.
Working this way, for each level you go up in the exponent tower, the modulus we are interested in will go down. Finally, at some stage (usually pretty quickly), we are basically just interested in whether the result of the remaining tower is even or odd (and possibly whether it's larger than some relatively small number), and then we can work our way down again.
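The "modulus goes down as you climb the tower" idea can be mechanized. Here is a Python sketch of my own (not part of the quoted answer) that reduces the exponent modulo $\varphi(m)$ at each level, using the exponent shift $e \mapsto (e \bmod \varphi(m)) + \varphi(m)$, which is safe for the moduli and exponents that occur here:

```python
def phi(n):
    """Euler's totient via trial-division factorization."""
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(height, m):
    """(2^2^...^2 with `height` twos) mod m."""
    if m == 1:
        return 0
    if height == 1:
        return 2 % m
    t = phi(m)
    e = tower_mod(height - 1, t)
    # shift the exponent into [t, 2t): congruent to the true exponent mod t,
    # and large enough that the reduction stays valid even when gcd(2, m) > 1
    return pow(2, e + t, m)

assert tower_mod(3, 9) == 2**(2**2) % 9 == 7     # 2^4 = 16
assert tower_mod(4, 9) == pow(2, 2**16, 9) == 7  # 2^65536
assert tower_mod(2016, 9) == 7                   # the claim of the problem
```

The moduli collapse quickly ($9 \to 6 \to 2 \to 1$), so the recursion bottoms out after three levels regardless of the height 2016, mirroring the "stop caring" observation above.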
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
(Tridiagonal) Inverse of a matrix Given this $n \times n$ matrix:
$$ A= \left(\begin{matrix}a_1&a_1&...&a_1\\a_1&a_2&...&a_2\\\vdots& &\ddots &\vdots\\a_1&a_2&...&a_n\end{matrix}\right) $$
How can I show that the inverse of this type of matrix is tridiagonal?
Help would be nice! Thank you!
| I would prove explicit formulae. For $n>1$,
$$(A^{-1})_{k,k+1}=(A^{-1})_{k+1,k}=\frac{1}{a_k-a_{k+1}}\qquad(1\leqslant k<n)\\(A^{-1})_{k,k}=\frac{a_{k-1}-a_{k+1}}{(a_k-a_{k-1})(a_k-a_{k+1})}\qquad(1\leqslant k\leqslant n)$$
with $a_0=0,a_{n+1}=\infty$ in the latter one, so that
$$(A^{-1})_{1,1}=\frac{a_2}{a_1(a_2-a_1)},\qquad(A^{-1})_{n,n}=\frac{1}{a_n-a_{n-1}}.$$
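One can verify these formulae symbolically for a concrete size. The following Python sketch (my own, using exact rational arithmetic and the arbitrary distinct values $a = (1,3,4,7)$) multiplies $A$ by the claimed tridiagonal inverse and checks that the product is the identity:

```python
from fractions import Fraction

a = [Fraction(v) for v in (1, 3, 4, 7)]    # a_1, ..., a_n, distinct, a_1 != 0
n = len(a)

A = [[a[min(i, j)] for j in range(n)] for i in range(n)]

B = [[Fraction(0)] * n for _ in range(n)]  # claimed tridiagonal inverse
for k in range(n - 1):                     # (A^-1)_{k,k+1} = 1/(a_k - a_{k+1})
    B[k][k + 1] = B[k + 1][k] = 1 / (a[k] - a[k + 1])
B[0][0] = a[1] / (a[0] * (a[1] - a[0]))
for k in range(1, n - 1):                  # interior diagonal entries
    B[k][k] = (a[k - 1] - a[k + 1]) / ((a[k] - a[k - 1]) * (a[k] - a[k + 1]))
B[n - 1][n - 1] = 1 / (a[n - 1] - a[n - 2])

product = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
assert product == [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
```

Exact fractions avoid any floating-point ambiguity about which off-tridiagonal entries are truly zero.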
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Geometric interpretation of ranks of matrices gathering coefficients of 3 affine and associated vectorial planes Given three planes $\pi_{1}, \pi_{2},\pi_{3}$ $\subseteq \mathbb{R}^{3}$ with their respect cartesian equation of the form $A_{i}x + B_{i}y + C_{i}z + D_{i} = 0$, we can determine their relative position in space by comparing the rank of the coefficient matrix $M$, with the rank of the augmented matrix $M^{*}$, where $$M = \begin{pmatrix}
A_{1}& B_{1} & C_{1} \\
A_{2} & B_{2} & C_{2} \\
A_{3}& B_{3}& C_{3}
\end{pmatrix}$$ and $$ M^{*} = \begin{pmatrix}
A_{1}& B_{1} & C_{1} & D_{1}\\
A_{2} & B_{2} & C_{2} & D_{2} \\
A_{3}& B_{3}& C_{3} & D_{3}
\end{pmatrix}$$ What is the geometric interpretation of these two matrices, how do they affect the planes relatively to each other visually?
My guess has been that $rank(M)$ expresses if some of the planes have the same direction, while $rank(M^{*})$ also checks if they are separated by a distance or not. However, I have been struggling to understand why 3 separated parallel planes have $rank(M) = 1$ but $rank(M^{*}) = 2$, according to my textbook.
| This array displays the different possible cases:
Let us first recall that the rank of a matrix $M$ can be considered at least from 2 angles of attack: (a) rank of its columns' space or (b) max. size of a non zero determinant of a submatrix.
Remark: there is a third characterization: (c) by using the rank-nullity theorem: $rank(M^*)=4-\dim(\ker(M^*))$.
Now, let us explain some aspects of the array above.
4 squares are hatched due to the fact that
$$rank(M^*) = \begin{cases} rank(M)\\ or \ \ rank(M)+1\end{cases}$$
Indeed,the rank of $M$ is (characterization (a) above) the dimension of $E:=vect (columns(M))$; extending $M$ to $M^*$ amounts to add a column belonging already to $E$ (rank preserved) or not (rank increased by one unit).
I do not give a detail for each case. Let me only give a treatment of the case you have been struggling with, the case of 3 parallel planes.
Their equations
$$ax+by+cz=d_k \ \ k=1,2,3$$
involve matrix :
$$\begin{pmatrix}
a&b&c&d_1\\
a&b&c&d_2\\
a&b&c&d_3\\
\end{pmatrix}$$
on which we could discuss its rank (rank = 1 or rank = 2) using characterization (b). But it is simpler to consider it from the point of view of linear combinations:
$$\exists x,y,z \ s.t. \ x \begin{pmatrix}
a\\
a \\
a
\end{pmatrix}+y\begin{pmatrix}
b\\
b \\
b
\end{pmatrix}+z\begin{pmatrix}
c\\
c \\
c
\end{pmatrix}=\begin{pmatrix}
d_{1}\\
d_{2} \\
d_{3}
\end{pmatrix}\tag{2}$$
which will have a solution (in fact an infinite number of solutions) if and only if the last vector is in the span of the first three, i.e., if and only if all its entries are identical. If one of the entries is different from the others, the rank of $M^*$ becomes $2$.
Let us come back briefly to the case where $d_1=d_2=d_3$. In this case, (2) is equivalent to the fact that $ax+by+cz=d_1$ has a $2$-dimensional (affine) space of solutions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is $\langle (-y)/(x^2+y^2),x/(x^2+y^2) \rangle$ a conservative vector field? If it is, I found two potential functions, $\arctan(y/x)$ and $-\arctan(x/y)$.
But I don't know which one is correct.
| Briefly: no, it's not a conservative vector field on its entire domain $\mathbb{R}^2 \setminus \{0\}$. The vectors of the vector field form counterclockwise circles about the origin, and the line integral of any counterclockwise circle about the origin is $2\pi$.
On any simply connected subset of its domain, however, the vector field is conservative. You can define a potential function as follows: pick an arbitrary point $p$ in the domain and let the potential at any point $x$ be the angle in radians from $p$ to $x$, where positive means you can get from $p$ to $x$ by moving counterclockwise within the domain and negative means moving clockwise. One possible choice of potential function is $\arctan(y/x)$, as long as the domain lies entirely in the first and fourth quadrants (otherwise there's a discontinuity associated with crossing the $y$-axis). Of course, it's not possible to define these potential functions unambiguously if the domain wraps all the way around the origin.
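A numerical line integral around the unit circle (a sketch of my own, not from the answer) confirms the $2\pi$ value, which rules out a global potential:

```python
import math

def field(x, y):
    r2 = x * x + y * y
    return (-y / r2, x / r2)

N = 20000
total = 0.0
for i in range(N):
    t0 = 2 * math.pi * i / N
    t1 = 2 * math.pi * (i + 1) / N
    tm = 0.5 * (t0 + t1)                       # midpoint of the segment
    fx, fy = field(math.cos(tm), math.sin(tm))
    dx = math.cos(t1) - math.cos(t0)           # chord displacement
    dy = math.sin(t1) - math.sin(t0)
    total += fx * dx + fy * dy

assert abs(total - 2 * math.pi) < 1e-4         # nonzero circulation
```

A nonzero circulation around a closed loop is impossible for the gradient of any single-valued potential, which is the point of the answer.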
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3225917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
ABC triangle with $\tan\left(\frac{A}{2}\right)=\frac{a}{b+c}$ I have a triangle ABC and I know that $\tan\left(\frac{A}{2}\right)=\frac{a}{b+c}$, where $a,b,c$ are the sides opposite of the angles $A,B,C$. Then this triangle is:
a. Equilateral
b. Right triangle with $A=\pi/2$
c. Right triangle with $B=\pi/2$ or $C=\pi/2$ (right answer)
d. Acute
e. Obtuse
I tried to write $\frac{a}{\sin(A)}=2R\implies a=2R\sin(A)$ and substitute into the initial equation. Same for $b$ and $c$, but I didn't get too far.
| One way to see that c) is the correct answer is as follows. Draw your triangle $ABC$. Construct the angle $A/2$ by extending $BA$ to a point $M$ such that $MA=b$. Note that $MA=AC=b$, so $MAC$ is isosceles and you have $\angle CMA=A/2$. Look at the triangle $MBC$: you have $\angle CMB=A/2$ with $CB=a$ and $MB=b+c$. So, if $\angle B=\pi/2$ you certainly have that $\tan( A/2)=a/(b+c)$.
A "symmetric" construction proves that if $C=\pi/2$, then you also have $\tan A/2=a/(b+c)$.
Of course this just proves the converse of your statement, but in a multiple choice problem this is good enough.
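A quick numerical check (my own sketch, normalizing with the law of sines so that $a = \sin A$, etc.) shows that option c) satisfies the identity while, e.g., the equilateral triangle of option a) does not:

```python
import math

def sides(A, B, C):
    # law of sines with circumdiameter 2R = 1: a = sin A, b = sin B, c = sin C
    return math.sin(A), math.sin(B), math.sin(C)

# right angle at B: the identity tan(A/2) = a/(b + c) holds
A = 0.6                      # an arbitrary acute angle
B = math.pi / 2
C = math.pi - A - B
a, b, c = sides(A, B, C)
assert abs(math.tan(A / 2) - a / (b + c)) < 1e-12

# equilateral triangle: the identity fails, so option a) is wrong
A = B = C = math.pi / 3
a, b, c = sides(A, B, C)
assert abs(math.tan(A / 2) - a / (b + c)) > 0.07   # 0.577... vs 0.5
```

The right-angle case reduces to the half-angle identity $\tan(A/2)=\sin A/(1+\cos A)$, since $b = 1$ and $c = \cos A$.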
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Derivative equals 1 Consider the following curve:
$$
x^{2}+y^{2}=2 x y+16
$$
By differentiating both sides with respect to $x$
$$
\frac{d y}{d x}(2 y-2 x)=2 y-2 x
$$
$$
\frac{d y}{d x}=\frac{2 y-2 x}{2 y-2 x}=1
$$
Are there any reasons that explain why the derivative at each point is equal to 1?
Update: I think the reason is related to the fact that the curve resembles parallel lines? But how does this explain it?
| $$
x^2+y^2=2xy+16\implies\\
x^2-2xy+y^2=16\implies\\
(x-y)^2=16\implies\\
|x-y|=4\implies\\
x>y,\ y=x-4\\
x<y,\ y=x+4.
$$
So, the original implicit function $x^2+y^2=2xy+16$ is equivalent to the two functions $y=x-4$ and $y=x+4$ for different values of $x$ and $y$. Graphically, those are nothing more than two lines of slope $1$. That's why it's only natural to expect the derivative to be $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Gödel's theorem vs unprovable mathematical results As an answer to this question, Peter Smith wrote:
Indeed, it is a fairly gross misunderstanding of what Gödel's theorem says to
summarize it as asserting that "there exist mathematical results that
cannot be proven"
It made me realize that I don't understand the difference between Gödel's first incompleteness theorem and the assertion "there exist mathematical results that cannot be proven."
Since ZFC is a fixed set of axioms, aren't there mathematical sentences (e.g., the continuum hypothesis) which can neither be proved nor disproved, as long as we stick with ZFC? And isn't this what Gödel's first incompleteness theorem predicts?
| You're understanding Godel correctly - you're misunderstanding the misunderstanding. :P The following will just reaffirm things you already know, but readers may find it useful:
The issue is in understanding what "cannot be proven" means. All too frequently someone will make the implicit error of assuming that this is said with respect to every appropriate axiom system. This amounts to a quantifier mix-up:
"For every appropriate axiom system there is a statement undecidable in that system"
(which is correct) becomes
"There is a statement undecidable in any appropriate axiom system"
(which ... isn't).
More generally, one should never say "prove" without specifying an axiom system (or acknowledging that there's some handwaving going on). For example, somewhere on this site is at least one question about GIT which goes roughly: "GIT1 proves that ZFC is incomplete, which means ZFC is consistent, but doesn't that contradict GIT2?" The issue of course is that the first "proves" is with respect to a system including the hypothesis that ZFC is consistent, while GIT2 would only apply if we had been using ZFC itself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that $B(X,Y^*)$ and $B(Y,X^*)$ are isometrically isomorphic.
If $X$ and $Y$ are normed spaces then we define. $$B(X,Y)= \{ f:X\rightarrow Y | f :\text{ f is a linear operator and bounded }\}$$
$X^*= \{f:X \rightarrow \mathbb{R} | \text{ f is a linear operator and bounded }\}$
I only know the Hahn-Banach Theorem. And I don't know how to even start. Something that confuses me is how to send a function that maps a vector to a linear functional to another function that maps a vector to a functional.
This seems super hard. How should I think about this?
| Given a bounded operator $T:X \to Y^{*}$ define $S: Y \to X^{*}$ by $(Sy)(x)=(Tx)(y)$. It is fairly routine to verify that this is an isometric isomorphism. I will be glad to provide details if you get stuck.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Number of ones in Binary matrix multiplication Consider a binary matrix $\mathbf A_n$ corresponding to the values $0$ to $2^n-1$, where each row is the length-$n$ binary representation of one of those integers. For example, for $n=3$ we have
$\mathbf A_3=\begin{bmatrix}
0 & 0 & 0\\
1 & 0 & 0\\
0 & 1 & 0\\
1 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 1\\
0 & 1 & 1\\
1 & 1 & 1
\end{bmatrix}.$
Consider two arbitrary non-zero binary vectors $\mathbf v_1, \mathbf v_2$ of length $n$ (column vectors) such that $\mathbf v_1\neq\mathbf v_2$. Now, assume $\mathbf u_1=\mathbf A\mathbf v_1$ (mod $2$) and $\mathbf u_2=\mathbf A\mathbf v_2$ (mod $2$). I can verify for an arbitrary $n$ that $\mathbf u_1.^*\mathbf u_2$ (element-wise multiplication) always has $2^{n-2}$ ones. For example, for $\mathbf A_5$ and
$$
\mathbf v_1=\begin{bmatrix}0,\ 0,\ 1,\ 1,\ 1\end{bmatrix}^T,
\mathbf v_2=\begin{bmatrix}0,\ 1,\ 0,\ 0,\ 0\end{bmatrix}^T
$$
We have
$$
\mathbf u_1.^*\mathbf u_2=\begin{bmatrix} 0\ 0\ 0 \ 0 \ 0 \ 0\ 1 \ 1\ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0\ 1\ 1\ 0 \ 0 \ 0\ 0 \ 0\ 0 \ 0 \ 0 \ 0 \ 0\ 1\ 1\end{bmatrix}^T
$$
of length $2^5$ which has $2^{5-2}=8$ ones. I am looking for an analytical way to prove this.
| By induction:
You've checked the base case manually.
Suppose true for $n$. In the case $n+1$, we select nonzero $v \neq w$. $v$ differs from $w$ in at least one entry, say the $i$th entry. Consider the vectors $\tilde{v}, \tilde{w}$ that have their $i$th entry deleted:
\begin{align}
\tilde{v} &= (v_1, \dots, v_{i-1}, v_{i+1},\dots, v_{n+1})
\\
\tilde{w} &= (w_1, \dots, w_{i-1}, w_{i+1},\dots, w_{n+1})
\end{align}
Case1
Suppose $\tilde{v} \neq \tilde{w}$. Then the induction hypothesis applies and the element-wise product of $A_n \tilde{v}$ with $A_n \tilde{w}$ has $2^{n-2}$ entries that are 1.
$A_{n+1}$ can be created by forming the matrix
\begin{align}
\begin{pmatrix}
A_n \\
A_n
\end{pmatrix}
\end{align}
and then inserting an $i$-th column with $2^n$ entries that are 0 (for the top copy) and $2^n$ entries that are 1 (for the bottom copy). Convince yourself that this enumerates exactly all of the rows of $A_{n+1}$ (possibly permuted, but this won't change our conclusions).
For the $2^n$ rows where the $i$th entry of $A_{n+1}$ is 0, nothing has changed from the $n$ case when we calculate $A_n \tilde{v}$ and $A_n \tilde{w}$ - we will still get $2^{n-2}$ entries that are 1 in the element-wise product.
For the $2^n$ rows where the $i$th entry of $A_{n+1}$ is 1, we have to do a little more work. Since $v$ and $w$ differ in the $i$-th entry, without loss of generality, assume that $v_i = 0, w_i = 1$. Then we have for an arbitrary $j$th row of this part of $A_{n+1}$
\begin{align}
(A_n \tilde{v})_j &= 0 \implies (A_{n+1} v)_j = 0
\\
(A_n \tilde{v})_j &= 1 \implies (A_{n+1} v)_j = 1
\\
(A_n \tilde{w})_j &= 0 \implies (A_{n+1} w)_j = 1
\\
(A_n \tilde{w})_j &= 1 \implies (A_{n+1} w)_j = 0
\end{align}
Lemma: I claim that $2^{n-1}$ entries of $A_n \tilde{v}$ were 1.
Since we know that $2^{n-2}$ entries of $A_n \tilde{v}$ and $A_n \tilde{w}$ were simultaneously 1, this means that $2^{n-2}$ entries of $A_n \tilde{v}$ were 1 while $A_n \tilde{w}$ was 0. Then these are precisely the entries where $A_{n+1} v$ and $A_{n+1} w$ are simultaneously 1. This means that we have $2^{n-2}$ entries of the element-wise product that are 1, from the rows of $A_{n+1}$ where column $i$ is 1.
Altogether, if I can prove my Lemma, combining the count from the rows of $A_{n+1}$ where column $i$ is 0 and 1, we have $2^{n-2}+2^{n-2} = 2^{(n+1)-2}$ entries of the element-wise product that are 1, as desired.
Proof of Lemma:
$\tilde{v}$ has, say, $k$ entries that are 1. Then we are interested in rows of $A_n$ with an odd number of 1's in the positions where $\tilde{v}$ is 1. There are
\begin{align}
\begin{pmatrix}
k \\
1
\end{pmatrix}
\end{align}
$k$-tuples that have precisely a single 1 in them,
\begin{align}
\begin{pmatrix}
k \\
3
\end{pmatrix}
\end{align}
$k$-tuples that have precisely three 1's in them, etc. So there are
\begin{align}
2^{n-k}
\sum\limits_{j=0}^{2j+1 \leq k}
\begin{pmatrix}
k \\
2j+1
\end{pmatrix}
=
2^{n-1}
\end{align}
rows in $A_n$ that produce an odd number when multiplied by $\tilde{v}$. The factor $2^{n-k}$ appears because $n-k$ entries of $\tilde{v}$ are zero, so having a 0 or a 1 in the corresponding column of a row of $A_n$ doesn't affect the parity, meaning that all $2^{n-k}$ combinations of those columns are counted.
Case 2
Suppose $\tilde{v} = \tilde{w}$. Then $A_n \tilde{v} = A_n \tilde{w}$ and by the above lemma, $2^{n-1}$ entries of the element-wise product are 1. When we construct $A_{n+1}$ as in Case 1 above, again without loss of generality, $v_i = 0, w_i = 1$, because they must differ in this entry.
All the rows where $A_{n+1}$ has a 0 in the $i$th column reduce to the $A_n$ case and so we have $2^{n-1}$ entries of the element-wise product that are 1.
All the rows where $A_{n+1}$ has a 1 in the $i$th column cannot have $(A_{n+1} v)_j = (A_{n+1} w)_j$ (because all entries of $v$ and $w$ are equal except the $i$th, so the parity flips in exactly one of the two products) and so these rows cannot give 1's in the element-wise product.
This gives the total count of $2^{(n+1)-2}$ 1's in the element-wise product, as desired.
By the way, I believe your question is equivalent to the statement
Let $X$ be a set with $|X| = n$. Let $A,B \subseteq X$. Then there are
$2^{n-2}$ sets $Y\subseteq X$ such that $|A \cap Y|$ and $|B \cap Y|$
are odd.
Perhaps someone can find a shorter or more elegant proof using this observation.
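As a supplement to the induction proof (my addition, not part of the original answer), the claim is small enough to verify by brute force for a given $n$:

```python
from itertools import product

def count_common_ones(n, v, w):
    # Rows of A_n are all length-n binary vectors; count rows r for which
    # r.v and r.w are both odd mod 2, i.e. both u1 and u2 have a 1 there.
    count = 0
    for r in product((0, 1), repeat=n):
        u1 = sum(ri * vi for ri, vi in zip(r, v)) % 2
        u2 = sum(ri * wi for ri, wi in zip(r, w)) % 2
        count += u1 * u2
    return count

n = 5
vectors = [v for v in product((0, 1), repeat=n) if any(v)]
ok = all(count_common_ones(n, v, w) == 2 ** (n - 2)
         for v in vectors for w in vectors if v != w)
print(ok)  # True
```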
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to factorized this 4th degree polynomial? I need your help to this polynomial's factorization.
Factorize this polynomials which doesn't have roots in Q.
$ \ f(x) = x^4 +2x^3-8x^2-6x-1 $
P.S.) Are there any generalized method finding 4th degree polynomials factor?
| You can always find the roots of a fourth degree polynomial. The formulas are not pretty, but they do exist.
Using the formulas, or, like me, using Wolfram Alpha, can show you that the roots of the polynomial are $1\pm\sqrt{2}$ and $-2\pm \sqrt{3}$ so the polynomial is
$$\begin{align}f(x)&=(x-1-\sqrt{2})(x-1+\sqrt{2})(x+2-\sqrt{3})(x+2+\sqrt{3})\\&=(x^2-2x-1)(x^2+4x+1)\end{align}$$
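A quick coefficient-convolution check (not in the original answer) confirms the quadratic factorization:

```python
# Multiply (x^2 - 2x - 1)(x^2 + 4x + 1) via coefficient lists
# (index = power of x) and compare with x^4 + 2x^3 - 8x^2 - 6x - 1.
p = [-1, -2, 1]   # x^2 - 2x - 1
q = [1, 4, 1]     # x^2 + 4x + 1
prod = [0] * (len(p) + len(q) - 1)
for i, pi in enumerate(p):
    for j, qj in enumerate(q):
        prod[i + j] += pi * qj
print(prod)  # [-1, -6, -8, 2, 1], i.e. x^4 + 2x^3 - 8x^2 - 6x - 1
```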
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Whether $\Bbb{R}^3\setminus \{0\}$ and $\Bbb{R}^3 \setminus \{0,1 \}$ are homeomorphic or not? I am thinking about whether the two spaces $\Bbb{R}^3 \setminus \{0 \}$ and $\Bbb{R}^3 \setminus \{0,1 \}$ are homeomorphic or not?
I guess they are not homeomorphic but cannot find out the proper reason. Till now I have come to the following :
$S^2$ is a deformation retract of $\Bbb{R}^3 \setminus \{0\}$, whereas I think one can deform the space $\Bbb{R}^3 \setminus \{0,1 \}$ onto two spheres with a single common point, i.e. a wedge of two spheres (I tried to see this deformation visually). But this means both spaces have trivial first fundamental group, so I think this idea didn't work!
So how can I distinguish these to space topologically. Any suggestion is appreciated. Thank you.
P.S: Let me clear that I am very new to Algebraic topology. I recently started the first fundamental groups and its properties and try to use it to distinguish two spaces. The spaces in the question is a very random that I thought that it could be solved using fundamental groups. So if this two spaces cannot be distinguished using General Topology and tools in First Fundamental group then let me know. Thank you..
| $\Bbb R^3 - \{0\}$ deformation retracts to $S^2$, whereas $\Bbb R^3 - \{0,1\}$ is homotopy equivalent (it actually deformation retracts) to $S^2 \lor S^2$. But $S^2$ is clearly not homotopy equivalent to $S^2 \lor S^2$, since $H_2 (S^2) \cong \Bbb Z$ while $H_2 (S^2 \lor S^2) \cong \Bbb Z^2$. Thus $\Bbb R^3 - \{0\}$ is not homotopy equivalent to $\Bbb R^3 - \{0,1\}$, and hence in particular not homeomorphic!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3226849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Derivative rules I have some problems when I have to derive: I don't know where I have to begin and where I have to stop. For example, if I have $$f(x)=(x^2 +1)(x^2 +3)$$ I know that I have to use the product rule so I get $$f'(x)=(x^2 +1)'(x^2 +3)+(x^2 +1)(x^2 +3)'$$ and the resolution is $$f'(x)=4x^3 +8x$$. But why can't I derive the stuff inside the brackets, like $$f(x)'=(2x)(2x)$$ and then $$f'(x)=4x^2$$
And I always have that problem, I don't know what rule I should use first.
Thank you.
| Write $g(x) = x^2+1$ and $h(x) = x^2+3$.
Then your $f$ is
$$ f(x) = g(x)h(x)$$
So the product rule says
$$
f'(x) = g'(x)h(x) + g(x)h'(x).
$$
The second thing you wrote would be equivalent to
$$
f'(x) = g'(x)h'(x)
$$
and this is just not how the derivative works.
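A small numerical illustration (my addition) of why the product rule is needed, comparing it with the incorrect rule $f'=g'h'$ for the question's $f(x)=(x^2+1)(x^2+3)$:

```python
def f(x):
    return (x**2 + 1) * (x**2 + 3)

def deriv(fn, x, h=1e-6):
    # Central-difference numerical derivative
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 2.0
correct = 4 * x**3 + 8 * x   # product rule result
wrong = (2 * x) * (2 * x)    # g'(x) * h'(x), the incorrect rule
print(round(deriv(f, x), 3), correct, wrong)  # 48.0 48.0 16.0
```

The numerical derivative agrees with the product rule, not with $g'h'$.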
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 9,
"answer_id": 2
} |
$A\in M_2(\mathbb{R}), \lambda \in \mathbb{R}$ such that $A^{2}-\lambda A+\lambda ^{2}I_{2}=O_2$, $A^{2018}=?$ $A\in M_2(\mathbb{R}), \lambda \in \mathbb{R}$ such that $$A^{2}-\lambda A+\lambda ^{2}I_{2}=O_2$$
I need to find $A^{2018}$.
From that equation I know that $det(A)=\lambda^{2}$ and $tr(A)=\lambda$.
Also I get $A^{2}=\lambda\cdot A-\lambda^{2} I$
How to continue ?
| $$A^{2}-\lambda A+\lambda ^{2}I_{2}=O_2 \implies A^2=\lambda A-\lambda ^2 I \implies A^3=-\lambda ^3 I$$
Note that $$2018=3(672)+2$$
Thus $$A^{2018} = (-\lambda ^3 I)^{672} A^2 =\lambda ^{2017} A -\lambda^{2018} I = \lambda ^{2017}( A -\lambda I)$$
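A float sanity check of the closed form (my addition; the matrix below is a hypothetical example of mine, a scaled $60^\circ$ rotation, which satisfies $A^{2}-\lambda A+\lambda ^{2}I_{2}=O_2$ since its trace is $\lambda$ and its determinant is $\lambda^2$):

```python
import math

def mmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mpow(X, n):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = mmul(R, X)
    return R

lam = 1.1
c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
A = [[lam * c, -lam * s], [lam * s, lam * c]]  # trace = lam, det = lam^2

# Closed form from the answer: A^2018 = lam^2017 (A - lam I)
expected = [[lam ** 2017 * (A[i][j] - lam * (i == j)) for j in range(2)]
            for i in range(2)]
P = mpow(A, 2018)
close = all(math.isclose(P[i][j], expected[i][j], rel_tol=1e-9)
            for i in range(2) for j in range(2))
print(close)  # True
```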
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to get the probability of a defective item We are given three boxes as follows: Box $1$ has $10$ light bulbs of which $4$ are defective. Box $2$ has $6$ light bulbs of which $1$ is defective. Box $3$ has $8$ light bulbs of which $3$ are defective. If a box is selected at random and then a bulb is drawn. (a). What is the probability that the bulb is defective?
(b). Draw a tree diagram to show the above question.
I have tried solving it. First, I thought that to select a box at random, it's gonna be $3C1$ which is three $(3)$. Then the total number of bulbs $= 24$ and there are $8$ defective bulbs. So the probability is $3 \cdot\dfrac{8}{24} = 1$. Please am I right?
Someone please help including the tree diagram
| $P(\text{defective}) = P(b_1)P(\text{defective}|b_1) + P(b_2)P(\text{defective}|b_2) + P(b_3)P(\text{defective}|b_3)$
where $b_1$ means box 1 and so on. The idea is: you first have to chose a box $b_1,b_2,b_3$ at random, then you have to choose a bulb inside that box. Can you put together the numbers?
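Putting the numbers together with exact fractions (my computation, completing the hint above):

```python
from fractions import Fraction

# P(defective | box i) for the three boxes
p_given_box = [Fraction(4, 10), Fraction(1, 6), Fraction(3, 8)]
# Each box is chosen with probability 1/3 (the first level of the tree)
p_defective = sum(Fraction(1, 3) * p for p in p_given_box)
print(p_defective)  # 113/360
```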
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Topology Filters Let $X$ be a set and $\mathcal F$ a filter on $X$ with a countable filter basis $\mathcal B$. Prove that $\mathcal F$ has a countable filter basis $\mathcal B´$ which is nested, i.e. $\mathcal B´$={$B_n$} with $n\in \Bbb N$ such that $\ B_{n+1}\subset B_n$ for all n.
I am thinking of taking $B´_n=B_1\cap B_2\cap\dots\cap B_n$. Am I on a good road? How do I prove that both bases generate the same filter?
| You're quite right: define $C_n = \bigcap_{i=1}^n B_i$, which are all in $\mathcal{F}$ too (as filters are closed under finite intersections), and the family is clearly nested since $C_{n+1} \subseteq C_n$ for all $n$.
If $A \in \mathcal{F}$ then some $B_k \subseteq A$ (definition of the $\{B_n: n \in \mathbb{N}\}$ being a base) and then $C_k \subseteq B_k$ too. So the $\{C_n: n \in \mathbb{N}\}$ also forms a base for $\mathcal{F}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What formula to chose a nonlinear formula? Suppose I have a formula
$$f(x) = x,$$
where $0 \leq x \leq 255$
Now I want to have a formula
$$f(x) = y,$$
where $0 \leq x \leq 255$, where $f(0) = 0$, $f(255) = 255$ and e.g., $f(128) = 150$ (the value of $150$ might vary).
All other values should be interpolated.
So actually I want a function that is nonlinear (starts with $0$ and goes up to $255$, starting increasing fast and finishes increasing slow); opposite as a parabolic function.
What (kind of) function should I use?
| To satisfy the requirements:
$$f(128)=150, f(255)=255$$
However, $f(0)=0$ can't be satisfied by this method. I assumed that $f(1)=1$; you can change the numbers, but do not use zero, otherwise there would be no inverse.
An example of the curve is shown in the linked plot ("Curve").
I can provide more info about the derivation if you want.
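Since the answer's derivation and curve image are not preserved here, the following is a separate suggestion of mine, not the answerer's method: a power ("gamma") curve meets all three stated requirements, including $f(0)=0$.

```python
import math

# f(x) = 255 * (x/255)^g maps 0 -> 0 and 255 -> 255 for any g > 0.
# Choosing g < 1 makes the curve increase fast at first and slowly at
# the end (concave). Solve f(128) = 150 for g:
g = math.log(150 / 255) / math.log(128 / 255)

def f(x):
    return 255 * (x / 255) ** g

print(round(g, 3), round(f(128), 1))  # 0.77 150.0
```

Changing the target value at $x=128$ (150 here) just changes the exponent $g$.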
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Conic formulation with binary variables in Gurobi I have a constraint of the following form
$$x^2 \leq yz$$
where $z$ is binary, $y \geq 0$, and $x$ is free. Can Gurobi handle this constraint?
| This is a (mixed integer) rotated quadratic cone. This can be handled in Gurobi. See for instance http://www.gurobi.com/documentation/8.1/refman/c_grbaddqconstr.html and https://www.gurobi.com/documentation/8.1/examples/qcp_py.html .
Alternatively, presuming there is a known upper bound for y, the right-hand side can be linearized per section 2.8 of "FICO MIP formulations and linearizations Quick reference" https://www.gurobi.com/documentation/8.1/examples/qcp_py.html, in which case it can be handled as a quadratic inequality constraint, plus the linearization constraints.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Looking to create a ranked list for PUBG teams PUBG is a video game played with multiple teams in each game. I am looking to create a rating system so that as teams play each other, they will gain/lose rating based on their placement (1st being the best), how many teams were playing (making it more difficult to survive until the end), and the rating of the teams they were playing against.
For example, if Team A has a rating of 1400, and they get 1st place in a game of 16 teams that all have a combined average rating of 1100, Team A would receive less points than Team B would if Team B had a rating of 1200 and won that same game. The same would go for Team A placing last in a game of teams with a lower combined average rating and thus losing more points than Team B would.
I have been looking at the Elo rating system for ideas, but I am stumped at the moment.
If anyone has any idea of a formula that exists that does this, please let me know! Any suggestions or thoughts are welcome, as I am trying to get this done soon with the scene growing.
| The Weng-Lin rating systems are perfect for your needs. There are several free and open source implementations in different languages by the name openskill.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3227854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What is p(Evidence) exactly in a bayesian model? I'm having a hard time intuitively understanding what this means in a machine learning context. When using the variables $A$ or $B$ or some trivial example, it all makes sense, but when looking at machine learning formulas where there are real variables its harder to see exactly what is meant. For example, if $t$ is what I am trying to predict and $x$ is the training example or input...
$$
p(t|x) = \frac{p(x|t)p(t)}{p(x)}
$$
What is meant by $p(x)$? if $x$ is a training example, does it mean the probability of seeing $x$ out of all possible training examples (kind of like the probability of drawing $x$ from a hat)? the probability of seeing $x$ out of the previously known distribution of examples? or something else?
Sometimes I see this with model parameters such as $\theta$ as well which raises the same sort of questions.
| Let's take your dice example to try to illustrate the issue. Here $T$ is your uncertain parameter and $t$ a value it can take, while $X$ is your observation and $x$ a particular value it can take.
* Suppose you have a $t$-sided fair die, but you do not know what value $t$ has. You do have a prior distribution for $t$ of $P(T=t) = \frac{t}{2^{t+1}}$ for $t \in \{1,2,\ldots\}$.
* You roll the die and observe a value of $X=x$. Since this is a fair die, you know $P(X=x \mid T=t) = \frac{1}{t}$ for $x \in \{1,2,\ldots,t\}$
* You can at this stage ask what is the unconditional $P(X=x)$? In other words, at the start what do you think the probability is of rolling a particular value $x$ even though you do not know how many sides the die has? As a simple application of conditional probability $$P(X=x) = \sum P(X=x \mid T=t) P(T=t) = \sum\limits_{t=x}^\infty \frac{1}{2^{t+1}} = \frac{1}{2^{x}}$$
As examples, from the first bullet $P(T=6)=\frac{6}{128}$ and $P(T=7)=\frac{7}{256}$ etc. So the unconditional or marginal probability of rolling $X=6$ is $$P(X=6) = \frac{1}{6} \times \frac{6}{128} + \frac{1}{7} \times \frac{7}{256}+ \cdots = \frac{1}{64}= \frac{1}{2^6}$$
If you do roll a $6$ then you then know the number of sides $T \ge 6$, and you get a posterior probability mass function $$P(T=t \mid X=6) = \frac{\frac{1}{2^{t+1}}}{\frac{1}{2^{6}}} = \frac{1}{2^{t-5}}$$ for $t \ge 6$, so $P(T=6 \mid X=6)= \frac12$, $P(T=7 \mid X=6)= \frac14$, etc.
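A numerical cross-check of the marginal computation (my addition), truncating the infinite sum over $t$:

```python
from fractions import Fraction

def prior(t):          # P(T = t) = t / 2^(t+1)
    return Fraction(t, 2 ** (t + 1))

def likelihood(x, t):  # P(X = x | T = t) = 1/t for 1 <= x <= t
    return Fraction(1, t) if 1 <= x <= t else Fraction(0)

def marginal(x, tmax=60):
    # Marginal P(X = x) = sum_t P(X = x | T = t) P(T = t), tail truncated
    return sum(likelihood(x, t) * prior(t) for t in range(1, tmax + 1))

# Claim from the answer: P(X = x) = 1/2^x (up to the tiny truncated tail)
print(float(marginal(6)), 1 / 2**6)  # both ~0.015625
```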
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3228312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Rudin, theorem 2.30 intuition behind could you show 1-2 real examples that are related to this theorem?
2.30 Theorem: Suppose $Y \subset X$. A subset $E$ of $Y$ is open relative to $Y$ if and only if $E = Y \cap G$ for some open subset $G$ of $X$.
| Take $\mathbb{R}=Y\subset X=\mathbb{R}^2$. Then any open set $E$ in $\mathbb{R}$ is given by a countable union of open intervals, $E=\bigcup_{j\in J}(a_j,b_j)$.
Each $(a_j,b_j)$ can be rewritten as $B\left(\frac{a_j+b_j}{2},\frac{b_j-a_j}{2}\right)\cap \mathbb{R}$, i.e. a ball in $\mathbb{R}^2$ of radius half the length of the interval, centered at the middle of the interval.
Then,
$$
E=\bigcup_{j\in J}(a_j,b_j)=\bigcup_{j\in J}B\left(\frac{a_j+b_j}{2},\frac{b_j-a_j}{2}\right)\cap \mathbb{R}\\
=\mathbb{R}\cap\bigcup_{j\in J}B\left(\frac{a_j+b_j}{2},\frac{b_j-a_j}{2}\right)
$$
and $G=\bigcup_{j\in J}B\left(\frac{a_j+b_j}{2},\frac{b_j-a_j}{2}\right)$.
It might be fun to try the above out on $\mathbb{R}^k\subset \mathbb{R}^n$, $k<n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3228487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does $f_n\to0$ pointwise + $f_n$ integrable + $f_n$ uniformly bounded imply $\int f_n\to 0$? If $\{f_n:[0,1]\to\mathbb{R}\}_{n\ge 1}$ is a sequence of Riemann-integrable functions that converge pointwise to the zero function and $\{f_n\}_{n\ge 1}$ is uniformly bounded by $M$ can we prove that $\int_{[0,1]}f_n\to0$ ?
It seems difficult both to prove or disprove it. For a possible counterexample I came up with the sequence
$$f_1(x)=1-|2x-1|$$
$$f_{n+1}=1-2|f_n(x)-1/2|$$
which is the sequence of "spikes" at $x=j/2^n$ for $j=1,...,2^{n-1}$ so that $\int_{[0,1]}f_n=1/2$ for every $n\in\mathbb{N}$ but $f_n(x)$ it doesn't seem to converge to $0$ at some values like $x=1/3$. While we can get some $j/2^n$ arbitrarily close to every value, the closer we get, the crazier the spikes get.
For a possible proof, the set $A_{n,\varepsilon}:=\{\,x:|f_n(x)|\ge\varepsilon\,\}$ should in some sense be small so that we can separate the integral (or at least every lower/upper sum) of $f_n$ into the $A_{n,\varepsilon}$ with "large but bounded" $f_n$ but small interval width and the $[0,1]\setminus A_{n,\varepsilon}$ part with small $f_n$ and whatever interval. So let $P_m:=\{I_1,...,I_m\}$ be the partition of $[0,1]$ with $I_j=[\frac{j-1}{m},\frac{j}{m}]$. Now $J_1,...,J_r$ the $I_j$'s that contain points of $A_{n,\varepsilon}$ and $K_1,...,K_s$ the remaining $I_j$'s.
$$U(f_n,P_m)=\sum_{i=1}^rM_{J_i}\frac{1}{m}+\sum_{i=1}^sM_{K_i}\frac{1}{m}\le\frac{rM}{m}+\varepsilon$$
We at least know the limit $r/m$ exists as $m\to\infty$ and depends on $n$ because $f_n$ is integrable. Do you know of any proof or counterexample?
Thanks!
| It is true, by Lebesgue's dominated convergence theorem. Just take $M>0$ such that$$(\forall n\in\mathbb N)\bigl(\forall x\in[0,1]\bigr):\bigl\lvert f_n(x)\bigr\rvert\leqslant M.$$Then, if you define $g(x)=M$ for each $x\in[0,1]$, $g$ is integrable and$$(\forall n\in\mathbb N)\bigl(\forall x\in[0,1]\bigr):\bigl\lvert f_n(x)\bigr\rvert\leqslant g(x).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3228602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many inner products exist in $R^n$? I want to know how many inner products exist in $R^n$ making it a Hilbert space. I know they all can be corresponded to the Euclidean inner product by some Isometry/Unitary function, but I want to know more explicit formula for this function, and if it is twice differentiable or not ?
| Given an inner product $\langle \cdot, \cdot \rangle$ in $\mathbb{R}^n$, consider a basis $\{v_i\}_{i=1}^n$ of $\mathbb{R}^n$ and define $g_{ij} = \langle v_i, v_j \rangle$.
Take $u=\sum_{i=1}^n \alpha_i v_i$ and $v=\sum_{j=1}^n \beta_j v_j$.
Then
$$(1) : \langle u, v \rangle = \sum_{i, j = 1}^n g_{ij} \alpha_i \beta_j$$
The matrix $[g_{ij}]$ so defined is symmetric and positive-definite.
Conversely, given a symmetric positive-definite matrix $[g_{ij}]$, the bilinear form defined by $(1)$ is an inner product, as one can verify.
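A small numeric illustration (my addition) of formula $(1)$: pick a symmetric positive-definite $[g_{ij}]$ and check the inner-product properties directly.

```python
# Any symmetric positive-definite matrix g defines an inner product
# <u, v> = sum_ij g[i][j] u[i] v[j]; here a 2x2 example of mine.
g = [[2.0, 1.0],
     [1.0, 3.0]]  # symmetric, both eigenvalues > 0

def inner(u, v):
    return sum(g[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

u, v = [1.0, -2.0], [3.0, 0.5]
print(inner(u, v))                 # -2.5 (the bilinear form's value)
print(inner(u, v) == inner(v, u))  # True: symmetry
print(inner(u, u) > 0)             # True: positivity for u != 0
```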
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3228740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is H2 the same as the 4th alternating group
Would $H_2=A_4$? If we apply $\sigma^5$ to both sides we get $\sigma^{12}=1$, which is true for all elements of the fourth alternating group by Lagrange's theorem.
| $H_2\subset A_4$, by definition. Conversely, let $\sigma \in A_4$. Then $\sigma ^{12}=1$ (since $\mid A_4\mid=12$) $\implies \sigma ^7=\sigma ^{-5}\implies \sigma \in H_2$. Thus $A_4\subset H_2$. Thus $H_2=A_4$
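The definition of $H_2$ is missing from this excerpt, but the fact the answer relies on, that $\sigma^{12}=1$ for every $\sigma\in A_4$, is easy to verify by brute force (my addition):

```python
from itertools import permutations

def compose(p, q):      # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def power(p, n):
    r = tuple(range(4))  # identity permutation
    for _ in range(n):
        r = compose(p, r)
    return r

def parity(p):          # number of inversions mod 2
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    return inv % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
identity = tuple(range(4))
print(len(A4), all(power(p, 12) == identity for p in A4))  # 12 True
```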
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3228880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find $\frac{∂f}{∂y}$ for Functions of the form $f(x,y,z(y))$? This is the first time I'm using math.stackexchange. Please excuse me and correct me if I'm not doing things in the right format.
So my question is this: given a function of the form $f(x,y,z(y))$, and suppose we want to find $\frac{∂}{∂y} f(x,y,z(y))$. Then by the Chain Rule, we would have something like this
$$\frac{∂f}{∂y} = \frac{∂f}{∂y} + \frac{∂f}{∂z}\frac{dz}{dy}.$$
But this notation is really confusing since the two $\frac{∂f}{∂y}$ do not mean the same thing. Am I doing this correctly, or is there any better notation to clarify this expression? I would be much appreciated if someone could shed some light on me.
| Don't confuse the function $f(x,y,z)$ and the function $F(x,y)=f(x,y,z(y))$.
The first is a function of three variables $x,y,z$ while the second is a function of two variables $x,y$.
People familiar with multi variables calculus loosely use a common symbol for both without making mistake. For one not yet familiar with these symbolism it is suggested to use different symbols :
$$\frac{∂F}{∂y}=\frac{∂f}{∂y}+\frac{∂f}{∂z}\:\frac{dz}{dy}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3229134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Convergence (to zero) for PDF of normal distribution. I need to prove that the PDF converges to zero when $n\to\infty$; that is,
$$\lim_{n\to \infty}f_n(x) =\lim_{n\to\infty} \frac{1}{\sqrt{2\pi n^{-3}}}\exp\left(-\frac{(x-\frac{1}{n})^2}{2n^{-3}}\right)\to 0$$
I have tried using L'Hopital and differentiating $3$ times, as someone suggested on StackOverflow, but I seem to be getting $\infty\cdot0$, that is, $0$ for the exponential (as $n\to\infty$) and $\infty$ for the $n^6$ in the numerator, both by hand and in Maple.
Any other suggestions on how to show this would be much appreciated.
| $$\log f_n(x) = \tfrac{3}{2}\log n-\tfrac{1}{2}\log(2\pi) -\frac{(x-\frac{1}{n})^2}{2n^{-3}} \longrightarrow -\infty,$$ since the subtracted term grows like $\tfrac{n^3}{2}\left(x-\tfrac{1}{n}\right)^2$, which is at least of order $n$ (even at $x=0$) and so dominates $\tfrac{3}{2}\log n$.
So $$f_n(x) \longrightarrow e^{-\infty} = 0$$
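A quick numerical illustration (my addition) that $f_n(x)\to 0$ for every fixed $x$, including the delicate point $x=0$:

```python
import math

def f(n, x):
    # PDF of N(1/n, n^-3) evaluated at x
    return math.exp(-((x - 1 / n) ** 2) / (2 * n ** -3)) \
        / math.sqrt(2 * math.pi * n ** -3)

# The exponent behaves like -n^3 (x - 1/n)^2 / 2, which swamps the
# sqrt(n^3) prefactor, so the values shrink rapidly as n grows:
for x in (0.0, 0.5):
    print(x, [f(n, x) for n in (5, 20, 80)])
```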
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3229261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proof continuous function $g$ defined on $[0,1]$ has a fixed point $x \in [0,1]$ Claim: For every continuous function $g:[0,1]\rightarrow[0,1]$ there exists $x_1 \in [0,1]$ such that $g(x_1) = x_1$.
Proof:
Case 1: If $g(0) = 0$ and/or $g(1) = 1$ then $g$ has at least one fixed point at an endpoint of the interval. This means $\exists x_1\in[0,1]: g(x_1)= x_1$
Case 2: Suppose $g(0) \ne 0 $ and $g(1) \ne 1 \Rightarrow$ $g$ does not have a fixed point at either endpoint of the interval. Therefore we have $g(0) > 0$ (since $0 < g(0) < 1$) and $g(1) < 1$ (since $0 < g(1) < 1$). Now define:
$$h(x) := g(x) -x$$
$\Rightarrow$ $h$ is continuous since it is the difference of two continuous functions. Now:
$$h(0)= g(0)-0>0. \,\,By \,\, g(0)>0$$
$$h(1) = g(1) - 1 < 0. \,\, By \,\, g(1) < 1$$
By the intermediate value theorem, $\exists x_1 \in [0,1]: h(x_1) = 0$, which implies $g(x_1)-x_1= 0 \iff g(x_1) = x_1$
Thus $x_1$ is a fixed point of g. By this the claim is proven. q.e.d
Now I'm asking for verification for that proof.
| Your proof is correct!
Geometrically, this says, any continuous function $g$ from $[0,1]$ to itself MUST intersect the diagonal $y=x$, and that intersecting point is the fixed point
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3229453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Using chain rule to differentiate $f(x)=a(x)b(x)$? Why can I not apply the chain rule to a product in the following way?
If we have some product:
$$f(x)=a(x)b(x)$$
Consider the multiplication of $b$ by $a$ as another function, so that:
$$f(b(x))=ab$$
So that
$\frac{df}{dx} = f’(b)b’(x)$
Something feels very wrong. But I can’t put my finger on it.
| You can't use the chain rule because this is not a composition. Just calling it one does not make it one. For example, suppose $a(x)=x+1$ and $b(x)=x^2$. Then your first line says
$$
f(x) = x^2(x+1) = x^3+ x^2
$$
so
$$
f(b(x)) = f(x^2) = (x^2)^3 + (x^2)^2
$$
so not the same as "$ab$".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3229580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 4
} |
If $a^{m}+1\mid a^{n}+1$ then prove that $m\mid n$.
Let $a$ be an integer, $a\ge2$. If $a^{m}+1\mid a^{n}+1$ then prove that $m\mid n$.
Actually I know a similar proof which is, $a^{m}-1\mid a^{n}-1 \iff m\mid n$, but I can't prove this. I also need some examples of the question.
Can't seem to find any correlation between the two proofs.
I seem unable to find examples where $a$ is something different from $2$ except by taking $m=2$.
Please help. I think 4-5 examples might help me to see the proof.
| $U_{k} = a^{k} + 1$ are terms of a Lucas sequence; hence, $U_{m} | U_{n}$ iff $m | n$.
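Since the question explicitly asks for examples, here is a short brute-force scan (my own addition) that collects all cases $a^m+1 \mid a^n+1$ in a small range:

```python
examples = []
for a in range(2, 6):
    for m in range(1, 13):
        for n in range(m, 13):
            if (a**n + 1) % (a**m + 1) == 0:
                examples.append((a, m, n))

# every example found has m | n, as the claim predicts
print(all(n % m == 0 for a, m, n in examples))   # -> True
# a few concrete cases with m > 1, e.g. 3^2+1=10 divides 3^6+1=730
print([(a, m, n) for a, m, n in examples if m > 1 and n > m][:5])
```

Every hit satisfies $m \mid n$. The scan also suggests a stronger pattern: $n/m$ is always odd in these cases (e.g. $(2,2,6)$, $(2,2,10)$, $(2,3,9)$), which is in fact the precise divisibility rule for $a^m+1 \mid a^n+1$ when $a \ge 2$.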
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3229855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How to get sum of $\frac{1}{1+x^2}+\frac{1}{(1+x^2)^2}+...+\frac{1}{(1+x^2)^n}$ using mathematical induction Prehistory: I'm reading a book. Because of the exercises, the reading process is going very slowly. Anyway, I want to honestly complete all the exercises.
Theme in the book is mathematical induction. There were examples, where were shown how with mathematical induction prove equations like $(1+q)(1+q^2)(1+q^4)\dots(1+q^{{2}^{n}}) = \frac{1-q^{{2}^{n+1}}}{1-q}$.
Now I'm trying to complete exercise where I have to find sum of $\frac{1}{1+x^2}+\frac{1}{(1+x^2)^2}+...+\frac{1}{(1+x^2)^n}$.
I tried to do it with mathematical induction. Like this:
n=1: $\frac{1}{1+x^2}$
n=2: $\frac{1}{1+x^2}+\frac{1}{(1+x^2)^2}=\frac{1+1+x^2}{(1+x^2)^2}$
n=3: $\frac{1}{1+x^2}+\frac{1}{(1+x^2)^2}+\frac{1}{(1+x^2)^3}=\frac{(1+x^2)^2+2+x^2}{(1+x^2)^3}$
...
And so on (I've calculated up to $n=5$). But I don't see any consistent pattern for evaluating the sum of the progression.
After that I found formulas of geometrical progression:
$q=\frac{b_{n+1}}{b_n}$ and $S_n=\frac{b_1(1-q^n)}{1-q}$, so I've evaluated:
$q=\frac{\frac{1}{(1+x^2)^2}}{\frac{1}{1+x^2}}=\frac{1}{1+x^2}$
and
$S_n=\left(\frac{1}{1+x^2}\left[1-\left(\frac{1}{1+x^2}\right)^n\right]\right):\left(1-\left[\frac{1}{1+x^2}\right]\right)=\left[\frac{1}{1+x^2}-\left(\frac{1}{1+x^2}\right)^{n+1}\right]\frac{1+x^2}{x^2}=\frac{1-\left(\frac{1}{1+x^2}\right)^{n}}{x^2}$
First of all, I'd like to know how to find the sum of a geometric progression with mathematical induction.
Secondly, I'd like to know what is wrong with my evaluation.
| Hint:
For a geometric series with ratio $q$:
$$q+q^2+\dots+q^n=\frac{q(1-q^n)}{1-q}.$$
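A quick numeric sanity check of the hint (my addition), with $q = 1/(1+x^2)$, confirms that the closed form and its rewriting in terms of $x$ both agree with the direct sum:

```python
x, n = 1.7, 8
q = 1 / (1 + x**2)

direct = sum(q**k for k in range(1, n + 1))       # q + q^2 + ... + q^n
closed = q * (1 - q**n) / (1 - q)                 # the hint's closed form
simplified = (1 - (1 + x**2)**(-n)) / x**2        # the same sum written in terms of x

print(abs(direct - closed) < 1e-12, abs(direct - simplified) < 1e-12)  # -> True True
```

The last expression follows because $q/(1-q) = 1/x^2$ when $q = 1/(1+x^2)$.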
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3229983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Spivak Calculus Chapter 2,Number 18, Connection with Number 17 In Spivak's Calculus, Problem 17 has you $(1)$ verify that every natural number factors into a product of primes and $(2)$ show that $n^{\frac{1}{k}}$ is irrational unless $n=m^k$ for some natural number $m$.
Problem 18 asks
Prove that if $x$ satisfies
$$x^n+a_{n-1}x^{n-1}+...+a_0 = 0$$
then $x$ is irrational unless it is an integer. How is this a generalization of Problem 17?
I know the solution to Problem 18, but I do not see how it is a generalization of Problem 17. I feel that the main issue is that I do not understand whether it has to do primarily with the factorization into primes, the irrationality of $k$th roots, or relies on both.
I would very much appreciate a hint to get started on seeing the connection.
| If $k,n\in\mathbb N$, consider the polynomial $x^k-n$. Problem 17 is equivalent to the assertion that if this polynomial has a rational root, then it is actually an integer one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$x_1 x_2 x_3 x_4 + x_2 x_3 x_4 x_5 +......+ x_n x_1 x_2 x_3 = 0$ then what is $n$? Can anyone please help me understand what the following problem is saying?
Each of the numbers $x_1,x_2,\cdots,x_n,n>4$, is equal to $1$ or $-1$. Suppose
$$x_1x_2x_3x_4+x_2x_3x_4x_5+\cdots +x_nx_1x_2x_3=0$$
then,
$(1)$ $n$ is even, $(2)$ $n$ is odd, $(3)$ $n$ is odd multiple of $3$, $(4)$ $n$ is prime.
I am really having a hard time understanding this problem. What I understood is that I have to find a natural number $n$ for which the equation holds whatever the values of the $x_i$s are. That is impossible. That's why it seems that some conditions on the $x_i$s are necessary.
| Let $$I_1 = x_{1}x_{2}x_{3}x_4$$ $$I_2 = x_{2}x_{3}x_4x_5$$ $$\vdots $$ $$I_n = x_nx_{1}x_{2}x_{3}$$ Since each $I_k\in\{-1,1\}$ and $I_1+I_2+...+I_n=0$, we must have equally many $-1$s and $+1$s among the $I_k$, so $n$ must be even.
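As a sanity check (my own brute force, not part of the answer), one can enumerate all sign patterns for small $n$ and see which values admit a zero sum:

```python
from itertools import product

def has_zero_sum(n):
    for xs in product((1, -1), repeat=n):
        # cyclic sum of products of 4 consecutive entries
        s = sum(xs[i] * xs[(i+1) % n] * xs[(i+2) % n] * xs[(i+3) % n] for i in range(n))
        if s == 0:
            return True
    return False

solvable = [n for n in range(5, 14) if has_zero_sum(n)]
print(solvable)                              # -> [8, 12]
print(all(n % 2 == 0 for n in solvable))     # -> True
```

Only $n=8$ and $n=12$ turn up in this range, so the search is consistent with, and in fact stronger than, the evenness argument: among the given options, (1) is the right one.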
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Equilateral triangles on the sides of a triangle We have a triangle.
We then construct three points outside of the triangle by drawing three equilateral triangles on the sides of the original triangle.
Now we want to do the opposite: from the three points we constructed, we want to construct the original triangle. (With just a ruler and a compass)
What does that construction look like?
My idea was to use the Fermat point by drawing the same construction with the three points (the Fermat point is the same).
The construction of the Fermat point
| Denote the points like on this picture:
Consider the following composition of rotations: $I= R_{C',60^\circ}\circ R_{A',60^\circ}\circ R_{B',60^\circ}$. The classification of isometries says that $I$ is a central reflection, but also $I(A)=A$, so $I$ is the central reflection with respect to $A$: $I=S_A$. Thus $A$ can be constructed (e.g. take arbitrary $X$ and construct $X':=I(X)$; $A$ is the midpoint of $XX'$). In a similar way, by considering suitable compositions of rotations, you can construct $B$ and $C$.
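The rotation argument can be checked numerically with complex numbers. The sketch below is my own addition; it assumes the triangle $ABC$ is labelled counterclockwise and the equilateral triangles are erected outward, with each rotation taken by $+60^\circ$:

```python
import cmath

ROT60 = cmath.exp(1j * cmath.pi / 3)         # rotation by +60 degrees

def apex(p, q):
    # apex of the outward equilateral triangle on side p -> q
    # (triangle ABC assumed labelled counterclockwise)
    return p + (q - p) * ROT60.conjugate()   # rotate q about p by -60 degrees

def rot(center, z):
    return center + (z - center) * ROT60     # rotate z about center by +60 degrees

A, B, C = 0 + 0j, 4 + 0j, 1 + 3j             # a sample counterclockwise triangle
Cp, Ap, Bp = apex(A, B), apex(B, C), apex(C, A)

I = lambda z: rot(Cp, rot(Ap, rot(Bp, z)))   # I = R_{C',60} o R_{A',60} o R_{B',60}

print(abs(I(A) - A) < 1e-9)                  # I fixes A -> True
X = 10 + 7j
print(abs((X + I(X)) / 2 - A) < 1e-9)        # so I is the point reflection S_A -> True
```

For any test point $X$, the midpoint of $X$ and $I(X)$ comes out to $A$, which is exactly the construction of $A$ described in the answer.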
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
splitting accommodation costs between people when some of them stay for fewer days We're 4 people and we are staying 7 nights, for a total cost of 546.
One of us however is leaving 1 day earlier.
Initially I thought the problem was very simple.
I reasoned that the 3 of us staying for the full period should pay $C_1$, and the person leaving earlier should pay $C_2$, under the following conditions:
$3 \cdot C_1 + C_2 = 546$
$C_2 = \frac 6 7 \cdot C_1$
The result was:
$C_1 = \frac {1274} 9 \approx 141.56$
$C_2 = \frac {364} 3 \approx 121.33$
I was happy with this, but then I thought: how am I going to explain it to the group?
So I tried a more 'intuitive' approach, which I believed was equivalent.
I reasoned that the first 6 nights should be split equally between all 4 of us, whereas the last night should only be split between the 3 of us who were still staying at that point.
Cost of each night:
$\frac {546} 7 = 78$
Cost of the first 6 nights:
$78 \cdot 6 = 468$
Split equally between 4 of us:
$\frac {468} 4 = 117$
Cost of the last night split only between 3 of us:
$\frac {78} 3 = 26$
Which would give:
$C_1 = 117 + 26 = 143$
$C_2 = 117$
The first condition above is still met:
$3 \cdot 143 + 117 = 546$
but the second is not:
$\frac 6 7 \cdot 143 \approx 122.57 \ne 117$
I can see that the two approaches are algebraically different, but I don't understand why they are conceptually different.
How can looking at 1 day at a time be different from looking at the total? Shouldn't the person staying 1 day less always be paying 6/7 of what the others pay? Or should the 6/7 condition perhaps be applied not to the cost each of the 3 people staying 7 days pays, but (somehow) to the total?
I'm sure there is a simple explanation, but I can't immediately see it. Any idea?
And do you think either of the two approaches is 'fairer'?
Thanks!
| If we both buy a £10 meal every day that we are on holiday and you stay for 6 nights while I stay for 7 then your meal costs will be 6/7 of mine. But if the daily cost isn't constant, that doesn't apply.
The constant cost per day assumption doesn't apply to your situation: the daily cost goes up when there are fewer people sharing. That's why your first method overcharges the 6-night guest and why it's inconsistent with the second.
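Here are both computations side by side in exact arithmetic (a small sketch of the arithmetic above, my own addition):

```python
from fractions import Fraction as F

total, nights = 546, 7

# Method 1: impose C2 = (6/7)*C1 together with 3*C1 + C2 = 546
c1 = F(total) / (3 + F(6, 7))        # 3*C1 + (6/7)*C1 = 546
c2 = F(6, 7) * c1
print(c1, c2)                        # -> 1274/9 364/3  (about 141.56 and 121.33)

# Method 2: split each night among the people actually staying that night
per_night = F(total, nights)         # 78 per night
c1b = 6 * per_night / 4 + per_night / 3   # six shared nights plus the last night
c2b = 6 * per_night / 4
print(c1b, c2b)                      # -> 143 117

# both methods recover the full total, but give different shares
print(3 * c1 + c2 == total, 3 * c1b + c2b == total)   # -> True True
```

The gap between $1274/9$ and $143$ is exactly the effect described above: method 1 silently assumes a constant per-person cost per day, which fails on the night with only three people sharing.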
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to check if functions are linear? I know that a function is linear if $f(ax+by) = af(x)+bf(y)$, but whilst doing some exercises I came across this one, where it says that I have to check whether the following functions are linear. I don't really understand how I should proceed.
\begin{align*}
f: \mathbb{K}^{3} &\to \operatorname{Mat}(2,1 ; \mathbb{K})\\
(x, y, z) &\mapsto \begin{bmatrix}x-y\\ y-z\end{bmatrix}
\end{align*}
\begin{align*}
g: \mathbb{K}[x] &\to \mathbb{K}[x]\\
P(x) &\mapsto P'(x)+2P''(x)+1
\end{align*}
| The conditions for linearity of a function are $(1)$ $f(x+y)=f(x)+f(y)$ and $(2)$ $f(cx)=cf(x)$, where $c$ is a scalar. Alternatively we may write $f(cx+dy)=cf(x)+df(y)$, where $c$, $d$ are scalars. If a function satisfies these properties we declare it linear; for example, $f(x) = tx$ is linear.
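A numeric spot-check of the two given maps (my own sketch; polynomials are represented by coefficient lists $[a_0, a_1, \dots]$, a convention chosen just for this demo) shows $f$ passing both conditions on sample inputs while $g$ fails additivity because of the "$+1$" term:

```python
def f(v):
    x, y, z = v
    return (x - y, y - z)          # the column [[x-y],[y-z]] as a tuple

def deriv(p):
    # derivative of a polynomial given by coefficients [a0, a1, a2, ...]
    return [i * c for i, c in enumerate(p)][1:] or [0]

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def g(p):
    res = add(deriv(p), [2 * c for c in deriv(deriv(p))])
    return add(res, [1])           # the "+1" term

u, v, c = (1, 2, 3), (4, 0, -1), 5

# f respects both linearity conditions on these samples
print(f(tuple(a + b for a, b in zip(u, v))) == tuple(a + b for a, b in zip(f(u), f(v))))  # True
print(f(tuple(c * a for a in u)) == tuple(c * b for b in f(u)))                           # True

# g fails additivity: the constant 1 is added once on the left, twice on the right
p, q = [0, 0, 1], [3, 1]           # p = x^2, q = 3 + x
print(g(add(p, q)) == add(g(p), g(q)))                                                    # False
```

Of course a finite check never proves linearity, but a single failing pair, as for $g$, does disprove it; for $f$ one then verifies the two conditions symbolically.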
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inverse function theorem. Let $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ be $C^{1}$ such that $\det(f'(x))\neq 0$ for all $x\in \mathbb{R}^{n}$, and $f^{-1}(K)$ is a compact set for every compact $K\subset \mathbb{R}^{n}$. Prove that $f$ is a surjective function.
If anyone can help, I'll be grateful.
| You know $f$ is open, so $f(\mathbb{R}^n)$ is open. It suffices to prove $f(\mathbb{R}^n)$ is closed.
Let $y\in\overline{f(\mathbb{R}^n)}$. Choose $y_n\in f(\mathbb{R}^n)$ with $y_n\to y$. Fix $r>0$; we may assume $y_n\in\overline{B_r}(y)$ for all $n$. Since $\overline{B_r}(y)$ is compact, we know $f^{-1}(\overline{B_r}(y))$ is compact. Let $x_n\in\mathbb{R}^n$ be such that $f(x_n)=y_n$; then $x_n\in f^{-1}(\overline{B_r}(y))$ is a sequence in a compact subset of $\mathbb{R}^n$, hence has a subsequence converging to some $x$. By continuity, $f(x)=y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Degree 2 Recurring monic polynomials Consider a monic polynomial $x^2+ax+b=0$, with real coefficients. If it has real roots $p$ and $q$, such that $p\leq q$, then you construct a new monic polynomial as $x^2+px+q=0$.
If this polynomial has real roots $p_1$ and $q_1$, $p_1\leq q_1$ you again construct a monic polynomial as $x^2+p_1x+q_1=0$. You go on doing this until you get complex roots. Once you get complex roots, you stop and count the number of polynomials constructed so far.
So my question here is if you begin with any arbitrary monic polynomial with real coefficients, how many such polynomials can be created, will this process go on forever or will it give a finite number of polynomials? How do you go on to start with the solution?
| If you start with the polynomial with $a=b=0$, then the polynomial will always be $x^2$ and so the process goes on forever. However, in all other cases, it will terminate in finitely many steps.
First, suppose $b=0$ and $a\neq 0$. If $a<0$, the next polynomial is $x^2-a$ which does not have real roots. If $a>0$, the next polynomial is $x^2-ax$ and then the next polynomial is $x^2+a$ which does not have real roots.
Now suppose $b\neq0$. Note then that the initial polynomial does not have $0$ as a root, and consequently neither will any of the others (since the constant coefficient is never $0$). Let us suppose the process goes on forever and say the polynomials are $x^2+ax+b$, $x^2+p_0x+q_0$, $x^2+p_1x+q_1$, $x^2+p_2x+q_2$, and so on.
Note first that the signs of the coefficients $(p,q)$ go in a cycle $$(-,-)\to(-,+)\to (+,+)\to (-,-).$$ For instance, if $p_n$ and $q_n$ are both negative, then the roots of $x^2+p_nx+q_n$ have opposite sign (since their product $q_n$ is negative), so $p_{n+1}$ is negative and $q_{n+1}$ is positive.
Let us now analyze how the absolute value of the linear coefficient changes in this cycle. If $p_n$ and $q_n$ are negative, then $p_{n+1}<0<q_{n+1}$ and $p_{n+1}+q_{n+1}=-p_n>0$, so $|p_{n+1}|<|q_{n+1}|$. Thus $$|p_{n+1}|<\sqrt{|p_{n+1}q_{n+1}|}=\sqrt{|q_n|}\leq\sqrt{|p_n|}.$$
Now suppose $p_n$ is negative and $q_n$ is positive. Then $0<p_{n+1}\leq q_{n+1}$ and so $$|p_{n+1}|=p_{n+1}\leq \frac{p_{n+1}+q_{n+1}}{2}=\frac{-p_n}{2}=\frac{|p_n|}{2}.$$
Finally, suppose $p_n$ and $q_n$ are positive. Then $p_{n+1}\leq q_{n+1}<0$ so $$|p_{n+1}|<|p_{n+1}+q_{n+1}|=|-p_n|=|p_n|.$$
We thus see that in all cases, $|p_{n+1}|\leq\max(|p_n|,1)$. Moreover, in every third step, $|p_{n+1}|\leq\frac{|p_n|}{2}$. It follows that for all sufficiently large $n$, $|p_n|\leq 1$.
Let us now pick $n$ such that $|p_n|\leq 1$ and $p_n$ and $q_n$ are positive. We then have $$p_n^2-4q_n\leq p_n^2-4p_n<0.$$ Thus the discriminant of $x^2+p_nx+q_n$ is negative and it does not have real roots. This is a contradiction.
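The argument can be watched in action with a short simulation (my own sketch, not part of the answer), iterating the map from coefficients to sorted roots:

```python
import math, random

def run(a, b, max_steps=1000):
    """Iterate (a,b) -> (p,q) where p <= q are the real roots of x^2+ax+b.
    Returns the number of polynomials constructed before stopping."""
    count = 0
    for _ in range(max_steps):
        count += 1
        disc = a * a - 4 * b
        if disc < 0:
            return count                    # this polynomial has complex roots: stop
        r = math.sqrt(disc)
        a, b = (-a - r) / 2, (-a + r) / 2   # roots p <= q become the new coefficients
    return None                             # did not terminate within max_steps

print(run(1, -6))    # x^2+x-6 -> roots -3,2 -> x^2-3x+2 -> roots 1,2 -> x^2+x+2: stops at 3
random.seed(0)
trials = [run(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(200)]
print(all(t is not None for t in trials))   # every random start terminates -> True
```

Consistent with the answer, `run(0, 0)` never stops (it returns `None`), while random starting coefficients always terminate, and quickly, since $|p_n|$ is at least halved every third step.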
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Categorical general topology -- reference request I am looking for literature describing general topology in terms of category theory. I would prefer literature which does not assume too much familiarity with category theory, but would appreciate any coherent refrences.
| I recall G. Preuß' book Theory of Topological Structures: An Approach to Categorical Topology and some works by Horst Herrlich (references at that page) as well.
There have been some conferences on "categorical topology" of which there are proceedings (e.g. this one or this one etc.; this is all pretty old and in vogue in the 70's). This survey might interest you as well, being more recent.
Also, "pointless topology" and frames/locales might interest you as well. This has connections to logic and geometry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3230974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between these two combinatorics problems? So the first problem is
"In how many ways can we arrange the letters in the word Alabama."
and the second questions is
"In how many ways can we arrange three Mathematics books, five English
books, four Science books and a dictionary?"
I can solve the first question by dividing $7!$ (number of letters) by $4!$ (number of times letter repeats), but I cannot solve the second question by dividing $13!$ (number of books) by $4!$ (number of science books) $ \cdot 3!$ (number of Mathematics books) $\cdot 5!$ (number of English books). The solution to the second question provided is $3! \cdot 5! \cdot 4! \cdot 4! = 414720$
I don't understand this solution. To me it seems that the situations are identical, so the same solution should apply. Why does the second solution work and what is the difference between two questions?
| The second question implicitly asks for the books on each subject to be kept next to each other. Very very poorly worded, I know. Thus we have $3!$ ways for the maths, $5!$ for english, $4!$ for science, $1!$ for dictionary, and $4!$ ways to rearrange, and we multiply these.
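Both counts, plus the count you would get if the books were *not* kept in subject blocks, are quick to verify (my own check):

```python
from math import factorial

# "Alabama": 7 letters with 'a' repeated 4 times -> permutations of a multiset
alabama = factorial(7) // factorial(4)
print(alabama)        # -> 210

# Books: each subject's books stay together -> arrange within blocks, then the 4 blocks
books = factorial(3) * factorial(5) * factorial(4) * factorial(1) * factorial(4)
print(books)          # -> 414720

# If the books were NOT kept in blocks, the Alabama-style multiset count would apply
mixed = factorial(13) // (factorial(3) * factorial(5) * factorial(4))
print(mixed)          # -> 360360
```

Note the `mixed` count would only be right if books on the same subject were indistinguishable, which is a different reading again; the intended reading is the block arrangement.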
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $f:[a,\infty)\rightarrow \mathbb{R}$ be a uniformly continuous function. $\int_{a}^{\infty} f$ converges.Prove that $\lim_{x\to\infty} f(x)=0$
Let $f:[a,\infty)\rightarrow \mathbb{R}$ be a uniformly continuous function in that range. $\int_{a}^{\infty} f$ converges. Prove that $\lim_{x\to\infty} f(x)=0$
Hint: Use the sequence $F_n(x)=n\int_{x}^{x+\frac{1}{n}} f$.
Honestly I have been trying to solve this one for some time but the hint really confuses me.
I have tried to mess around with $F_n(x)$ a bit, for example by using the fundamental theorem but it still seems like such a random choice and I can't make anything out of it.
Any guidance/explanations will be appreciated.
Please use the hint in the question.
| First of all, note that for all $n\in\mathbb{N}$
$$\lim_{x\rightarrow\infty}F_n(x)=n\left[\lim_{x\rightarrow\infty}\left(\int_a^{x+\frac{1}{n}}f(t)\mathrm{d}t- \int_a^xf(t)\mathrm{d}t\right)\right]=n\left[\int_a^{\infty}f(t)\mathrm{d}t-\int_a^{\infty}f(t)\mathrm{d}t\right]=0.$$
Now let $\varepsilon>0$ be arbitrary. By uniform continuity, there is a $\delta>0$ such that for all $t,x\in\left[a,\infty\right)$, we have $|f(t)-f(x)|<\varepsilon$ whenever $|t-x|<\delta$. Pick $N\in\mathbb{N}$ such that $\frac{1}{n}<\delta$ for all $n\ge N$. Then for all $x\in\left[a,\infty\right)$ and all $n\ge N$, we have
$$\left|n\int_x^{x+\frac{1}{n}}f(t)\mathrm{d}t-f(x)\right|=\left|n\int_x^{x+\frac{1}{n}}\big(f(t)-f(x)\big)\,\mathrm{d}t\right|\le n\int_x^{x+\frac{1}{n}}|f(t)-f(x)|\,\mathrm{d}t\le\varepsilon.$$
Since $\varepsilon$ and $x$ were arbitrary, we conclude that $\lVert F_n-f\rVert\rightarrow0$ as $n\rightarrow\infty$, i.e. $F_n\rightarrow f$ uniformly. Thus,
$$\lim_{x\rightarrow\infty}f(x)=\lim_{x\rightarrow\infty}\lim_{n\rightarrow\infty}F_n(x)=\lim_{n\rightarrow\infty}\lim_{x\rightarrow\infty}F_n(x)=0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Surface integral of hyperboloid using polar coordinates fails? I am trying to find the surface area of the hyperboloid $x^2 + y^2 − z^2 = 1$ where $0\le z \le 1 $. My book goes ahead making hyperbolic substitutions, however I don't understand why the simple approach fails.
$$\mathbf n= \langle 2x, 2y, -2z \rangle$$
$$=\langle x, y, -z \rangle$$
$$=\langle x, y, -\sqrt{x^2+y^2-1} \rangle$$
$$||\mathbf n||=\sqrt{2x^2+2y^2-1}$$
$$=\sqrt{2r^2-1}$$
Now, since $z=\sqrt{x^2+y^2-1}$ and $0\le z \le 1 $, if $x^2+y^2=r^2$ then $1 \le r \le \sqrt{2}$. So I continue
$$\int_0^{2\pi}\int_1^\sqrt{2}r\sqrt{2r^2-1}\ dr \ d\theta$$
$$\approx 4.3942$$
My book comes out with $7.9665$, what did I do wrong?
| When doing a surface integral, as is mentioned above in the comments, it is necessary to choose the "correct" normal vector. You start by choosing a parameterization of your surface, of the form $$r(u,v) = (x(u,v), \, y(u,v), \, z(u,v)) $$
In other words, express the $x,y,$ and $z$ coordinates as functions of two parameters. Next, you compute the tangent vectors to the surface given by the partial derivative vectors:
$$ r_u = (x_u,y_u,z_u) $$
$$ r_v = (x_v,y_v,z_v) $$
The "correct" normal vector to use for the surface integral is the cross product $r_u \times r_v$. The area differential is then the magnitude $|r_u \times r_v|$.
One particular common situation is when the surface is a graph (i.e. it is given by $z=f(x,y)$). In this case, you can choose your parameterization to simply be $u=x$ and $v=y$, with $z=f(x,y)$. So it looks like
$$ r(x,y) = (x,y,f(x,y)) $$
If you take the cross product, you get
$$ r_x \times r_y = (-f_x,-f_y,1) $$
Therefore the area differential is $\sqrt{1+f_x^2+f_y^2}$.
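For a numeric comparison (my own sketch): the OP's integrand, which is missing the correct area factor, reproduces the value $\approx 4.3942$, while the proper differential $\sqrt{1+f_x^2+f_y^2}=\sqrt{(2r^2-1)/(r^2-1)}$ gives roughly the book's answer. The correct integrand has an integrable singularity at $r=1$, removed here with the substitution $r^2 = 1+t^2$:

```python
import math

# OP's integral: 2*pi * int_1^{sqrt 2} r*sqrt(2r^2-1) dr, closed form (2r^2-1)^{3/2}/6
wrong = 2 * math.pi * ((3**1.5 - 1) / 6)
print(round(wrong, 4))     # -> 4.3942, the OP's value

# Correct area: substituting r^2 = 1 + t^2 gives 2*pi * int_0^1 sqrt(1+2t^2) dt (smooth)
def simpson(g, a, b, n=1000):
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

area = 2 * math.pi * simpson(lambda t: math.sqrt(1 + 2 * t * t), 0.0, 1.0)
print(round(area, 3))      # -> 7.988
```

The corrected value ($\approx 7.99$) is in the same ballpark as the book's $7.9665$; small differences may come from rounding along the way, but the factor-of-two-ish gap from $4.3942$ is entirely the missing area differential.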
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that there exists an integer $k$ such that a polynomial $f(x)$ is strictly positive for all $x>k$? Hi, I am taking a number theory class, and so far I have been proving modular congruences, modular arithmetic, and prime properties. There is a theorem that came up in the textbook, and apparently it does not involve any modular arithmetic. The theorem is as follows:
Suppose $f(x)=a_nx^n+a_{n-1}x^{n-1}+\dots+a_0$ is a polynomial of degree $n>0$. Then there exists an integer $k$ such that if $x>k$ then $f(x)>0$.
I feel like this would come up in real analysis, but I have not come that far in my studies. I have an idea of applying induction and using the ceiling function somehow, but I have no idea how to start off this proof. Any help will do and thank you
| One should note that the field of interest here (presumably) is $\mathbb{R}$, so that $f(x)$ is a real polynomial. Notice first that for $a_n<0$, this fails completely, because we can just choose $f(x)=-x$ and it does not have the property.
If we assume that $a_n>0$, then write
$$ \frac{f(x)}{x^n}=a_n+a_{n-1}x^{-1}+\cdots +a_1 x^{1-n}+a_0 x^{-n}$$
so that $\lim_{x\to\infty} \frac{f(x)}{x^n}=a_n>0$. This implies that $\lim_{x\to\infty}f(x)=\infty$, so that $f(x)>0$ for $x\ge N$ for some $N\in \mathbb{R}$.
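The limit argument can be made effective with the classical Cauchy root bound (my own addition, not mentioned in the question): every real root $r$ satisfies $|r| < 1 + \max_{i<n}|a_i|/a_n$, so beyond that value $f$ has constant sign, namely the sign of $a_n>0$. A sketch assuming $a_n>0$:

```python
def poly(coeffs, x):
    # coeffs = [a0, a1, ..., an], evaluated by Horner's rule
    res = 0.0
    for c in reversed(coeffs):
        res = res * x + c
    return res

def positivity_bound(coeffs):
    # Cauchy bound: every real root r satisfies |r| < 1 + max|a_i|/a_n,
    # so beyond this k the sign of f equals the sign of a_n > 0
    an = coeffs[-1]
    return 1 + max(abs(c) for c in coeffs[:-1]) / an

f = [-30, 4, -11, 1]          # f(x) = x^3 - 11x^2 + 4x - 30
k = positivity_bound(f)
print(k)                       # -> 31.0
print(all(poly(f, k + j) > 0 for j in range(50)))   # -> True
```

This gives a concrete integer $k$ of the kind the theorem asserts, without any limit machinery.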
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$ \lim_{x\to \frac{1}{{\sqrt 2}^+}} \frac{\cos ^{-1} \left( 2x\sqrt{1-x^2}\right)}{x-\frac{1}{\sqrt{2}}}$
$\displaystyle \lim_{x\to {1\over \sqrt{2}^+}} \dfrac{\cos ^{-1} \left( 2x\sqrt{1-x^2}\right)}{x-\dfrac{1}{\sqrt{2}}}$
I have tried substituting $x$ for $\sin \theta$, doing the calculations, and ended up with $-2\sqrt{2}$. But the solution provided was $2\sqrt{2}$. Then I tried this question again, but this time used $\cos \theta$ instead of $\sin \theta$, and the answer did match. I don't understand why $x$ as $\sin \theta$ doesn't give the correct result. I have checked all my steps but couldn't find any flaw with $\sin \theta$ as the substitution. Can anyone tell me whether $\sin \theta$ is a wrong substitution for this question or not?
| One may use l'Hopital rule, then
$$\lim_{x\to {1\over \sqrt{2}^+}} \dfrac{\cos ^{-1} \left( 2x\sqrt{1-x^2}\right)}{x-\dfrac{1}{\sqrt{2}}} = \lim_{x\to {1\over \sqrt{2}^+}}\dfrac{-2(1-2x^2)}{\sqrt{1-x^2}\sqrt{1-4x^2-4x^4}} = \lim_{x\to {1\over \sqrt{2}^+}}\dfrac{-2(1-2x^2)}{\sqrt{1-x^2}(2x^2-1)}=2\sqrt{2}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Continuous function as difference of convex functions Can every continuous function $f:\mathbb{R} \rightarrow \mathbb{R}$ be written as the difference of two convex functions?
If not, can every twice continuously differentiable function $f:\mathbb{R} \rightarrow \mathbb{R}$ be written as the difference of two convex functions?
Is there an explicit decomposition?
My thoughts for the latter case are as follows:
Let
$$
g_1(x)=f(0) + \int_0^xf'(s)\chi_{\{f''(s)\geq 0\}}\,ds,
$$
$$
g_2(x)= \int_0^x-f'(s)\chi_{\{-f''(s) > 0\}}\,ds.
$$
We have $f(x) = g_1(x)-g_2(x)$ but I can't see how to prove/disprove $g_1,g_2$ being convex.
Is it enough to note that the second derivative of each of $g_1,g_2$ is non-negative?
This question is motivated as a positive answer would allow the Ito-Tanaka formula to be applied to continuous functions.
Thanks
| For $f \in C^{2}$ the result is true but your construction does not work. Start with $g_1(x)=\int_0^{x}(f''(t))^{+}dt$ and then take $g(x)=\int_0^{x} g_1(t) dt$. Similarly define $h_1$ and $h$ and see that $f(x)=f(0)+xf'(0)+g-h$. [Here $x^{+}=\max \{x,0\}$ and $x^{-}=-\min\{x,0\}$]. Note that $f(0)+xf'(0)+g$ and $h$ are convex.
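A numerical illustration of this construction (my own sketch) for $f=\sin$ on $[0,6]$, an interval chosen so that $f''=-\sin$ changes sign: integrate $(f'')^+$ twice to get $g$, integrate $(f'')^-$ twice to get $h$, and check the identity $f(x)=f(0)+xf'(0)+g(x)-h(x)$.

```python
import math

N = 6000
xs = [6 * i / N for i in range(N + 1)]
dx = xs[1] - xs[0]

def cumtrapz(ys):
    # cumulative trapezoid integral from 0, on the uniform grid xs
    out = [0.0]
    for i in range(1, len(ys)):
        out.append(out[-1] + 0.5 * (ys[i - 1] + ys[i]) * dx)
    return out

fpp = [-math.sin(x) for x in xs]                       # f'' for f = sin
g = cumtrapz(cumtrapz([max(v, 0.0) for v in fpp]))     # g'' = (f'')^+
h = cumtrapz(cumtrapz([max(-v, 0.0) for v in fpp]))    # h'' = (f'')^-

# reconstruction: f(x) = f(0) + x f'(0) + g(x) - h(x), with f(0)=0, f'(0)=1
err = max(abs(math.sin(x) - (x + gx - hx)) for x, gx, hx in zip(xs, g, h))
print(err < 1e-4)    # -> True
```

Both cumulative integrals of a nonnegative integrand are convex by construction, so this is exactly the decomposition `convex minus convex` described in the answer.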
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Balls between boxes equation I have a practice question I don't know how to answer past part (a)
Consider nonnegative integer solutions of the equation $x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 26$
(a) How many different solutions are there? [3 marks]
(b) How many solutions also satisfy: for every $i \in \{1, 2, 3\}, x_i > 0?$ [3 marks]
(c) How many solutions also satisfy: for every $i \in \{1, 2, 3, 4, 5, 6\}, x_i$ is even? [3 marks]
For (a) I have ${31}\choose{26}$ and I feel confident in that, but I am unsure of how to answer the next two parts as I can't find anything in my notes that helps me with it.
Thanks
Edit: I think I have solved the second two parts, but could someone tell me if they look right and if my logic is good?
I am considering the 6 unknown integers as boxes and the final answer of 26 as 26 balls.
Part (b): If I consider it as such, could it be that I just need to say that I place one ball in each of the first 3 boxes, leaving me with 23 to be split between all 6 of the boxes, so that the answer would be $C(28, 23)$?
Part (c): I would now glue the balls together so that there are 13 pairs (each considered as one distinct object) to split between the 6 boxes, leaving me with $C(18, 13)$?
Apologies for the balls in boxes analogy but this is how I was taught this initially and I am trying to relate this to that, so I can have a better grasp of this. Thank you to anyone who has read this :)
| The task in part b and part c is to find a way to reduce the problem to part a.
Say for part b, here's a hint. Let:
$$x_i^{'} = x_i - 1 \quad \text{for } i \in \{1,2,3\}, \qquad x_i^{'} = x_i \quad \text{for } i \in \{4,5,6\}$$
Now if we find the non-negative solutions to the problem below:
$$\sum_{i=1}^{6} x_{i}^{'} = 23$$
we get exactly the solutions of the original problem in which $x_1, x_2, x_3$ are all positive.
Which is very similar to part a.
Similarly for part c, replace
$$x_i = 2y_{i} \quad \forall i$$,
where $y_i \in \{0,1,2,..., 13\}$. can you finish now?
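All three counts can be confirmed by brute force (my own check), enumerating stars-and-bars configurations directly and comparing with the binomial formulas:

```python
from itertools import combinations
from math import comb

def solutions(total, k):
    # stars and bars: each choice of k-1 bar positions among total+k-1 slots
    # corresponds to one nonnegative integer solution of x_1+...+x_k = total
    for bars in combinations(range(total + k - 1), k - 1):
        prev, xs = -1, []
        for pos in bars:
            xs.append(pos - prev - 1)
            prev = pos
        xs.append(total + k - 1 - prev - 1)
        yield xs

sols = list(solutions(26, 6))
a = len(sols)
b = sum(1 for x in sols if x[0] > 0 and x[1] > 0 and x[2] > 0)
c = sum(1 for x in sols if all(v % 2 == 0 for v in x))
print(a, comb(31, 5))   # -> 169911 169911
print(b, comb(28, 5))   # -> 98280 98280
print(c, comb(18, 5))   # -> 8568 8568
```

So part (a) is $\binom{31}{5}$, part (b) is $\binom{28}{5}$, and part (c) is $\binom{18}{5}$, matching the substitutions above.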
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof verification for $(A-B)-C = (A-C)-(B-C)$
Let $A$, $B$, and $C$ be sets.
Prove that $(A-B)-C=(A-C)-(B-C)$.
I attempted to prove this with the following. I am very new to proof writing, and I feel I may have glossed over something or have done something just completely invalid. Any assistance with revisions or a superior method would be greatly appreciated.
Suppose $x \in (A-B)-C$. This would mean $x \in A$, $x \notin B$, $x \notin C$. The statements $x \in A$ and $x \notin C$ could be written as $(A-C)$. The statements $x \notin B$ and $x \notin C$ could be written as $-(B-C)$. Together these statements could be written as $(A-C)-(B-C)$. This shows that $(A-B)-C \subseteq (A-C)-(B-C)$.
Suppose $x \in (A-C)-(B-C)$. This would mean that $x \in A$, $x \notin B$, $x \notin C$. The statements $x \in A$ and $x \notin B$ could be written as $(A-B)$. In addition $x \notin C$, this combined with the earlier statement that $(A-B)$ would result in $(A-B)-C$. This shows that $(A-C)-(B-C) \subseteq (A-B)-C$.
Concluding the proof that $(A-B)-C = (A-C)-(B-C)$.
| The main idea is good, but the way you write it can be improved.
In the first part, after taking an element $x$, your goal is to show that $x \in (A - C) - (B - C)$.
$x \in A$ and $x \notin C$ could be written as $(A - C)$
I see what you are trying to say, but this is not mathematically correct. "$x \in A$ and $x \notin C$" is a statement and $A - C$ is a set. They are not equivalent.
However, what you tried to say is probably "$x \in A$ and $x \notin C$, so $x \in (A - C)$".
The statements $x$ $\notin$ $B$ and $x$ $\notin$ $C$ could be written as $-(B-C)$.
This is similar. The first problem with this statement is that you are equating a sentence and a set. The second problem is that $-(B - C)$ is not a correct expression. What you meant is probably $X - (B - C)$ where $X$ is the universe.
Instead of this, you can just say $x \notin B$, so $x \notin (B - C)$.
Putting these together will result in something like...
Since $x \in (A - B) - C$, we know that $x \in A, x \notin B, x \notin C$.
Since $x \in A$ and $x \notin C$, $x \in (A - C)$.
Since $x \notin B$, $x \notin (B - C)$.
Since $x \in (A - C)$ and $x \notin (B - C)$, $x \in (A - C) - (B - C)$.
Thus $(A - B) - C \subset (A - C) - (B - C)$.
The opposite direction has two issues:
*
*The first one is the same as above. You can't say $x \in A$ and $x \notin B$ could be written as $A - B$ because one is a statement and the other one is a set. You can say $x \in A$ and $x \notin B$, so $x \in A - B$.
*The second issue is that it is not obvious that $x \in (A - C) - (B - C)$ implies that $x \notin B$. (It is true, but it's not obvious.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Must every right-inverse of a linear transformation be a linear transformation? Let T be a linear transformation $\mathbb{R}^2 \rightarrow \mathbb{R}^3$. Let S be the right-inverse of T. Does S have to be linear transformation?
Thanks in Advance.
| No
(Actually a linear map $\mathbb R^2 \to \mathbb R^3$ cannot have a right inverse, since having a right inverse is equivalent to being surjective, and the dimension of a linear map's range is at most the dimension of its domain, so there are no surjective linear maps $\mathbb R^2 \to \mathbb R^3$)
BUT ANYWAY. . . .
Consider the projection map $P: \mathbb R^2 \to \mathbb R$ given by $P(x,y) = x$. Let $f : \mathbb R \to \mathbb R$ be your favourite nonlinear function and define the right-inverse $Q: \mathbb R \to \mathbb R^2$ by $Q(x) = (x,f(x))$.
Then we have $P\circ Q(x) = P(Q(x))=P(x,f(x))= x$ so this is indeed a right inverse.
To see $Q$ is nonlinear observe the image is the graph of the function $f$ which is not a linear subspace of $\mathbb R^2$. That means $Q$ is nonlinear.
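A concrete check of this example (my own sketch), taking $f(x)=x^2$ as the favourite nonlinear function:

```python
def P(v):                  # linear projection R^2 -> R
    x, y = v
    return x

def Q(x):                  # right inverse built from the nonlinear f(x) = x^2
    return (x, x * x)

print(all(P(Q(x)) == x for x in [-2.0, 0.0, 1.5, 7.0]))   # P o Q = id -> True

# Q is not linear: Q(1) + Q(2) = (3, 5) but Q(3) = (3, 9)
qa = tuple(a + b for a, b in zip(Q(1), Q(2)))
print(qa, Q(3), qa == Q(3))    # -> (3, 5) (3, 9) False
```

The image of $Q$ is the parabola $y=x^2$, which is not a subspace, matching the argument above.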
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
System of first order linear differential equations
The solutions of a homogeneous system of linear differential equations of the first order $$\frac{d \vec x}{d t} = A \vec x$$ are $$\vec x = e^{ \lambda t} \vec v $$ where $ \lambda $ are the eigenvalues of A and $\vec v$ are the associated eigenvectors.
So the first and second options are true. As for the others I'm not sure.
| Consider the matrix
\begin{align}
A =&\
\begin{pmatrix}
1 & -1 & 1 & 0\\
-1 & 2 & 0 & 1\\
1 & 2 & 0 & 0 \\
-2 & -1 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
-2 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
1 & -1 & 1 & 0\\
-1 & 2 & 0 & 1\\
1 & 2 & 0 & 0 \\
-2 & -1 & 0 & 0
\end{pmatrix}^{-1}\\
=&\ \frac{1}{3}\begin{pmatrix}
0 & 0 & 4 & 5\\
0 & 0 & -6 & -6\\
0 & 0 & -2 & 2 \\
0 & 0 &-2 & -7
\end{pmatrix}
\end{align}
then it's clear that $v=(-1, 2 ,2, -1)^T$ is an eigenvector with eigenvalue $-1$. Likewise, $w=(1, -1, 1, -2)^T$ is an eigenvector with eigenvalue $-2$. However, 3 is not an eigenvalue of $A$ so $(A-3I)\mathbf{x} = \mathbf{0}$ has only a trivial solution. It's also easy to check that the last statement is false.
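As a sanity check (mine, not part of the answer), the two eigenvector claims can be verified in exact integer arithmetic by working with $3A$:

```python
A3 = [[0, 0, 4, 5],       # 3*A, kept integer to avoid rounding
      [0, 0, -6, -6],
      [0, 0, -2, 2],
      [0, 0, -2, -7]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

v = [-1, 2, 2, -1]    # claimed eigenvector for eigenvalue -1
w = [1, -1, 1, -2]    # claimed eigenvector for eigenvalue -2

print(matvec(A3, v) == [3 * (-1) * x for x in v])   # (3A)v = 3*(-1)*v -> True
print(matvec(A3, w) == [3 * (-2) * x for x in w])   # (3A)w = 3*(-2)*w -> True
```

Since the remaining eigenvalues are $0$ (with multiplicity two), $3$ is indeed not an eigenvalue, as the answer states.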
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that a directed graph with no cycles has at least one node of indegree zero How would I show this? I know a directed graph with no cycles has at least one node of outdegree zero (because a graph where every node has outdegree one contains a cycle), but do not know where to go from here.
| Suppose, for contradiction, that there is a directed graph with no cycles in which no node has indegree $0$. Then every node has indegree at least $1$, so from any node we can step backwards to one of its parents (a node with an edge into the current node). Starting at an arbitrary node and repeating this step, we can keep walking backwards indefinitely; since the graph has only finitely many nodes, we must eventually revisit a node we have already seen. The portion of the walk between the two visits forms a cycle, which contradicts our initial assumption. So we have proved that every directed graph with no cycles has at least one node of indegree zero.
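This fact is what makes topological sorting possible: the indegree-zero nodes are exactly where a topological order can start. A small illustrative sketch (the example graphs are my own):

```python
def indegree_zero_nodes(nodes, edges):
    # count incoming edges per node; a finite DAG always leaves some count at 0
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        indeg[v] += 1
    return [v for v in nodes if indeg[v] == 0]

# a small acyclic example (edges point from parent to child)
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
assert indegree_zero_nodes(nodes, edges) == ["a"]

# a graph with a cycle may have no indegree-zero node at all
cyc_nodes = ["a", "b", "c"]
cyc_edges = [("a", "b"), ("b", "c"), ("c", "a")]
assert indegree_zero_nodes(cyc_nodes, cyc_edges) == []
```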
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Construct a rank-3 matroid using rank-2 flat Let $E$ be a finite set with size bigger or equal to 3. Let $L$ be a collection of subsets of $E$ such that:
*
*$2 \leq |A| < |E|$ for any $A \in L$
*$|A \cap B| \leq 1$ for any $A, B \in L$
Now show that there exists a unique simple rank-3 matroid $M$ on $E$ such that $L$ is exactly the collection of all rank-2 flat. Also, describe the set of basis of $M$ in terms of $L$.
I have difficulty in finding those independent sets and also do not know how to define a rank function such that all elements in $L$ has rank 2. I can come up with an example like $U_{3, 4}$. In $U_{3, 4}$, $L$ is just all the set with size 2 and it is unique up to isomorphism. At least now I know that there is a matroid that can be built up in this way but fail to prove the more general theorem.
Any responses will be appreciated.
Update_1: Hinted by Joshua, I consider the characterization of flats of a matroid but used the definition from here. This time I have trouble proving the third axiom and also do not know how to make these flats have rank 2.
| Because $M$ is supposed to be simple, every singleton $\{e\}$ for $e \in E$ must be a flat, and these are the flats of rank $1$. This shows uniqueness.
The conditions stated on $L$ are not enough to ensure that such an $M$ exists. Let $E = \{1,2,3,4\}$ and let $L = \{\{1,2\}, \{1,3\}\}$. The sets $E$ and $L$ satisfy your conditions. But there is no simple rank-3 matroid $M$ on $E$ whose rank-2 flats are exactly $L$: the rank-2 flats containing $3$ would have to partition the remaining elements $\{1,2,4\}$ (after removing $3$ from each flat), but the only member of $L$ containing $3$ is $\{1,3\}$, which covers only $1$ and leaves $2$ and $4$ uncovered.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question about associated primes and annihilator I am trying to solve the following exercise:
Let $R$ be noetherian and $M$ a finitely genererated $R$-module. Show that $\mathrm{Ass}(R/\mathrm{Ann}(M)) \subseteq \mathrm{Ass}(M)$ and both sets have the same minimal elements. Show that in general the inclusion is not an equality.
There is a hint that I should first prove $(\mathrm{Ann}(M):c)_R=\mathrm{Ann}(cM)$ for every $c \in R$. I have managed to do that but I have no clue where to use it. Thx for your time!
| Let me demonstrate the inclusion using the equality you proved.
Let $p \in Ass (R/ ann (M))$. Then there exists $x$ in $R$ such that $p = ann(M) :_R x$. You showed that $ann(M) :_R x = 0 :_R xM$. Since $M$ is finitely generated, you may choose a finite generating set $\{m_1,\dots,m_n\}$ of $M$. Then
$0:_R xM = \cap\,\, 0:_R xm_i$.
Summing up we have
$$p = \cap\,\, 0:_R xm_i.$$
Since the intersection on the RHS is finite and $p$ is prime, $p \supset 0:_R xm_i$ for some $i$. As $Rxm_i \subset xM$, you have the other inclusion. Thus $p$ is in $ass(M)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Nonvanishing vector field on an odd sphere
This is an exercise I am somewhat confused about. Here $X$ looks like a vector field on $\mathbb{R}^{2n}$, not $S^{2n-1}$. Then how should I interpret $X$ to make it a vector field on the sphere? Could anyone please explain?
| The sphere $S^n$ is an imbedded submanifold of $\mathbb R^{n+1}$ under the inclusion map $i$. Hence, at every point $p\in S^n$, the differential $di$ establishes a one-to-one correspondence between the tangent space $T_pS^n$ and a subspace of $T_p\mathbb R^{n+1}\equiv \mathbb R^{n+1}$. Suppose that $\gamma(t)=(x_1(t),...,x_{n+1}(t))$ is an arbitrary curve in $S^n$ with $\gamma(0)=p$, so that $\gamma'(0)$ is an arbitrary tangent vector to $S^n$ at $p$. Differentiating the relation $x_1^2(t)+...+x_{n+1}^2(t)=1$ at $t=0$ shows that the dot product $(x_1'(0),...,x_{n+1}'(0))\cdot(x_1(0),...,x_{n+1}(0))=0$. Conversely, if $y=(y_1,...,y_{n+1})$ is an arbitrary element of $\mathbb R^{n+1}$ whose dot product with a point of the sphere is $0$, then $y=di(\gamma'(0))$ for some curve $\gamma(t)$ in the sphere through that point.
Using the above considerations, you easily see why the restriction of your vector field to the sphere is a vector field on $S^n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Proving goal having the form $P \lor Q$, is it redundant to separate into two cases? In Velleman's How to Prove It, the strategy given for proving goal of the form $P \lor Q$ goes like this:
If $P$ is true, then clearly $P \lor Q$ is true. Now suppose $P$ is false.
[Proof of Q goes here]
Thus, $P \lor Q$ is true.
I feel like the first sentence is redundant. After all, when proving $P \lor Q$, it's equivalent to proving $\lnot P \to Q$, thus we only need to suppose $\lnot P$ to begin with.
| Any statement in any proof is redundant if you consider it ‘too obvious’.
In this case, maybe you consider it obvious that $P \implies P \lor Q$ (and I wouldn't blame you). It does follow directly from the definition of $\lor$, but it's nice to be especially clear when writing a proof, so it doesn't hurt to add it in. It is certainly true, though, that proving $\neg P \implies Q$ will suffice for most readers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3232998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For given $ab\leq n$, do there exist $a'\geq a$ and $b'\geq b$ such that $a'b'=n$? For example, given $a=3$, $b=3$, and $n=14$, no such $a',b'$ exists. On the other hand, for $a=3$, $b=3$, and $n=12$, we can use $a'=3$ and $b'=4$.
Is there a simple formula that can help determine the answer to this question, rather than simply searching through possible combinations of $a'$ and $b'$?
| In reverse, it's related to the formula for markup and margin. No such factorization with $a'b'=n$ can exist once both $a$ and $b$ exceed $\sqrt n$; in fact $$\sqrt{n}<a\ \land\ \sqrt{n}<b \implies n<ab\le a'b',$$
which is then a contradiction. One way would be: $${n\over ab}=(1+b_\text{markup})(1+a_\text{markup})$$ where you use percentage markups that become integers when multiplied by $b$ and $a$, respectively. But that's still more of a brute-force approach.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3233161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Closed form solution for constant exponent in sum I am trying to solve for $\alpha$ in the following equation:
$$ 0.80 = \frac{1}{3} \left( X_1^\alpha + X_2^\alpha + X_3^\alpha \right)$$
Right now I just use Excel and solver to find a numerical solution to problems like these. Seems like there should be a way to obtain a closed form solution.
This is an exponential curving example for grades that sets the average post-curved grade to 80%. Let me make it a bit more general:
$$ 0.80 = \frac{1}{n} \sum_i^n X_i^\alpha$$ where $n$ is the number of students that took the exam, $0 < X_i < 1$ is the exam grade for student $i$, and $0 < \alpha \le 1$ is a scaling factor. Everything is known except $\alpha$.
This type of curving "gives to the needy and not the greedy" in that lower scoring students receive more of a bump yet the rank ordering is unchanged.
| Considering that you look for the zero of function
$$f(\alpha)=\sum_{i=1}^n X_i^\alpha - nk$$ without loss of generality, suppose that the $X_i$ are such that $X_1\leq X_2\leq \cdots \leq X_n$.
So the solution is bounded
$$ \alpha_{min}=\frac{\log (k)}{\log (X_1)}\leq \alpha \leq \frac{\log (k)}{\log (X_n)} =\alpha_{max}$$ with $f( \alpha_{min})>0$ and $f( \alpha_{max})<0$.
Since $$f''(\alpha)=\sum_{i=1}^n X_i^\alpha \log ^2(X_i)$$ is positive, start Newton's method with $\alpha_0=\alpha_{min}$ and, by Darboux's theorem, you will reach the solution without any overshoot.
Let us try with $n=6$, $k=0.8$ and $\{0.123,0.234,0.235,0.456,0.567,0.678\}$ for the $X_i$. You should get the following iterates
$$\left(
\begin{array}{cc}
n & \alpha_n \\
0 & 0.106483 \\
1 & 0.198764 \\
2 & 0.205234 \\
3 & 0.205263
\end{array}
\right)$$
For the fun of it, let us use $n=1000$ the $X_i$'s being normally distributed between $0.2$ and $0.8$ $(\mu=\frac 23,\sigma=\frac 13)$. This would give the following iterates
$$\left(
\begin{array}{cc}
n & \alpha_n \\
0 & 0.165337 \\
1 & 0.348781 \\
2 & 0.362371 \\
3 & 0.362439
\end{array}
\right)$$
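For the curious, the Newton scheme above is easy to reproduce; a minimal sketch with the same six grades (function and variable names are mine):

```python
import math

def solve_alpha(X, k, tol=1e-12):
    # Newton's method for sum(x**a for x in X) = len(X)*k, started at
    # alpha_min = log(k)/log(min(X)) so convergence is monotone
    n = len(X)
    a = math.log(k) / math.log(min(X))     # alpha_min from the bound above
    for _ in range(100):
        f = sum(x ** a for x in X) - n * k
        fp = sum(x ** a * math.log(x) for x in X)
        step = f / fp
        a -= step
        if abs(step) < tol:
            return a
    raise RuntimeError("Newton did not converge")

X = [0.123, 0.234, 0.235, 0.456, 0.567, 0.678]
alpha = solve_alpha(X, 0.8)
assert abs(alpha - 0.205263) < 1e-4    # matches the table of iterates above
```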
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3233359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Integration of $\int_\pi^{+\infty}e^{-st}(t-\pi)\,dt$ How can I calculate this integral?
$$
\int_\pi^{+\infty}e^{-st}(t-\pi)\,dt
$$
I tried many different things but I got nowhere.
I know that integrating by parts may be the way to go but I am truly lost.
| Integration by parts will give you (choosing $f = t-\pi$ and $g' = e^{-st}$)
$${\displaystyle\int}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=-\dfrac{\left(t-{\pi}\right)\mathrm{e}^{-st}}{s}-{\displaystyle\int}-\dfrac{\mathrm{e}^{-st}}{s}\,\mathrm{d}t +C$$
which is
$${\displaystyle\int}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=-\dfrac{\left(t-{\pi}\right)\mathrm{e}^{-st}}{s}-\dfrac{\mathrm{e}^{-st}}{s^2} +C$$
Organizing, we get
$${\displaystyle\int}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=-\dfrac{\left(s\left(t-{\pi}\right)+1\right)\mathrm{e}^{-st}}{s^2}+C$$
Evaluating at the boundaries (the integral converges for $s>0$, and the boundary term at infinity vanishes), we get
$${\displaystyle\int_{\pi}^{\infty}}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=\dfrac{\mathrm{e}^{-{\pi}s}}{s^2}$$
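The closed form can be sanity-checked numerically, for instance with $s=1$; a sketch using composite Simpson's rule, truncating the integral at $t=\pi+40$ where the tail is negligible:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

s_par = 1.0
num = simpson(lambda t: math.exp(-s_par * t) * (t - math.pi),
              math.pi, math.pi + 40)
exact = math.exp(-math.pi * s_par) / s_par ** 2    # e^{-pi*s} / s^2
assert abs(num - exact) < 1e-8
```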
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3233498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Discrete Exponential Martingale - Properties This question is about the discrete exponential martingale.
Let $(Y_n)_n$ be a sequence of independent and identically distributed random variables
with $m_{Y}(t) :=\mathbb{E}\left[e^{t Y_{1}}\right]<\infty, t \in \mathbb{R}$. I want to show that
\begin{align*}
M_{n} :=\frac{1}{m_{Y}(t)} \exp \left(t \sum_{i=1}^{n} Y_{i}\right)
\end{align*}
is a martingale.
But I didn't know how to proof that \begin{align*}
\begin{aligned} \mathbb{E}\left[M_{n+1} | \mathcal{F}_{n}\right] &=\frac{1}{m_{Y}(t)} \mathbb{E}\left[\exp \left(t \sum_{i=1}^{n+1} Y_{i}\right) | \mathcal{F}_{n}\right] \\ &=\frac{1}{m_{Y}(t)}\left\{\mathbb{E}\left[\exp \left(t Y_{n+1}\right) | \mathcal{F}_{n}\right]+\sum_{i=1}^{n} \mathbb{E}\left[\exp \left(t Y_{i}\right) | \mathcal{F}_{n}\right]\right\} = ... = M_n\end{aligned}
\end{align*}
| The candidate martingale has perhaps been miscopied. As per the comment by @Sesame, if instead one defines $M_n$ by
$$M_n = \frac{\exp(t\sum_{i=1}^n Y_i)}{m_Y(t)^n},$$
then the usual computations carry through to show this is a martingale.
Summarizing the steps presented in the comment, we compute
$$\mathbb{E}(M_{n+1}|\mathscr{F}_n)=\frac{1}{m_Y(t)^{n+1}}\exp(t \sum_{i=1}^n Y_i)\mathbb{E}(\exp(tY_{n+1})|\mathscr{F}_n)$$
$$=\frac{M_n}{m_Y(t)}\mathbb{E}(\exp(tY_{n+1}))=M_n$$
where we have, first, used the "pulling out what is known" property since $\exp(t \sum_{i=1}^n Y_i) \in \mathscr{F}_n$ and, second, used that $\exp(t Y_{n+1})$ is independent of $\mathscr{F}_n$. Thus $\mathbb{E}(M_{n+1}|\mathscr{F}_n)=M_n$, so $M_n$ is a martingale as desired.
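The martingale property can be verified exactly for a toy distribution; here is a sketch with $Y_i=\pm 1$ equally likely (so $m_Y(t)=\cosh t$), enumerating all sample paths:

```python
import math
from itertools import product

t = 0.7
m = math.cosh(t)        # m_Y(t) = E[e^{tY}] for Y = +-1, each with prob. 1/2

def M(path):
    # M_n = exp(t * sum Y_i) / m_Y(t)**n along one sample path
    return math.exp(t * sum(path)) / m ** len(path)

n = 4
# E[M_{n+1} | F_n] = M_n: average over the two continuations of each path
for path in product([1, -1], repeat=n):
    cond_exp = 0.5 * (M(path + (1,)) + M(path + (-1,)))
    assert abs(cond_exp - M(path)) < 1e-12

# consequently E[M_n] = M_0 = 1
EMn = sum(M(p) for p in product([1, -1], repeat=n)) / 2 ** n
assert abs(EMn - 1.0) < 1e-12
```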
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3233622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tensor product of exact complexes is exact Let $M_\circ = \dots \to M_n \dots \to M_0 \to 0$ and $N_\circ = \dots \to N_n \dots \to N_0 \to 0$ be exact complexes of modules over a ring $A$ such that each module is flat.
Is it then true that $(M\otimes N)_\circ = \dots M_n\otimes_AN_n \dots \to M_0\otimes_A N_0 \to 0$ is exact?
I can get an exact double complex but I don't know how to use that to conclude what I want.
(The motivation is to show that if $P_\circ$ and $Q_\circ$ are polynomial simplicial resolutions of rings $B,C$ flat over $A$, then $P_\circ\otimes_AQ_\circ$ is a polynomial simplicial resolution of $B \otimes_A C$.
| No. For instance, let both complexes be $0\to A\to A\to 0$ but with one of them in degrees $0$ and $1$ and the other in degrees $1$ and $2$. Then the tensor product will be nonzero only in degree $1$ and so will not be exact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3233860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to compute the MLE of the zero-truncated Poisson? $$P(X = x ) =
\frac{\theta ^ x e^{- \theta} }{x ! \left ( 1 - e^{- \theta} \right )},\ x=1,2,\cdots,\ 0<\theta<\infty$$
Then the likelihood of $(x_1,\cdots,x_n)$ is
$$\mathcal L(\theta \mid \boldsymbol x) = \prod_{i=1}^n \frac{e^{-\theta}}{1-e^{-\theta}} \frac{\theta^{x_i}}{x_i!} \propto (e^{\theta} - 1)^{-n} \theta^{n \bar x}$$
so the log-likelihood is
$$\ell(\theta \mid \boldsymbol x) = -n \log(e^{\theta} - 1) + n\bar x \log \theta$$
and its derivative with respect to θ is
$$\frac{\partial \ell}{\partial \theta} = n\left(\frac{e^\theta}{1-e^{\theta}} + \frac{\bar x}{\theta}\right)$$
but I don't know how to compute $\partial \ell/\partial\theta = 0$, since this does not have a closed form expression.
Is there anyone can give me an idea?
| Your calculation is right so far. As you said, a closed form in elementary functions does not exist, but we can use the Lambert W function to obtain a solution for $\theta$.
$\frac{e^\theta}{1-e^{\theta}} + \frac{\bar x}{\theta}=0$
Multiplying both sides by $(1-e^{\theta})$ and $\theta$
$\theta\cdot e^\theta+\bar x\cdot (1-e^{\theta})=0$
$e^\theta \cdot (\theta-\bar x)+\bar x=0$
$e^\theta \cdot (\theta-\bar x)=-\bar x$
Multiplying both sides by $e^{-\bar x}$
$e^{\theta-\bar x} \cdot (\theta-\bar x)=-\bar x\cdot e^{-\bar x}$
${\theta-\bar x}=W(-\bar x\cdot e^{-\bar x})$
${\hat \theta}=W(-\bar x\cdot e^{-\bar x})+\bar x$
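As a cross-check, the Lambert-W expression can be compared with the first-order condition it came from; a sketch with a homemade principal-branch $W$ (Newton's method) and an assumed sample mean $\bar x = 2$ (note $\bar x \ge 1$ always holds for zero-truncated data):

```python
import math

def lambert_w0(z, tol=1e-14):
    # principal branch of Lambert W via Newton on w*e^w = z (needs z > -1/e)
    w = 0.0 if z >= 0 else -0.5      # rough start on the principal branch
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            return w
    raise RuntimeError("no convergence")

xbar = 2.0                           # assumed sample mean
theta = lambert_w0(-xbar * math.exp(-xbar)) + xbar
# theta satisfies the first-order condition: the zero-truncated Poisson mean
# theta / (1 - e^{-theta}) equals the sample mean xbar
assert abs(theta / (1 - math.exp(-theta)) - xbar) < 1e-10
```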
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Playing with squares Extending from particular examples I've found that $$n^2=\sum_{i=1}^{i=n-1} 2\, i+n$$
this is that for any square of side $n$ the area can be calculated in a simple way.
Example
For a square of side $7$, the result is: $2×1+2×2+2×3+\cdots+2×6+7=49$
Question
Is there a way to prove this is true in general? Is there more than one way? Can you show at least one?
| $$\sum_{i=1}^{i=n-1} 2\, i\,+n=2\left(\dfrac {n (n-1)}2\right)+n=n^2$$
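The identity, and the closed form for the sum used above, can also be confirmed by direct computation for small $n$:

```python
for n in range(1, 50):
    assert sum(2 * i for i in range(1, n)) + n == n * n   # the identity itself
    assert 2 * (n * (n - 1) // 2) + n == n * n            # via the closed form
```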
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
How many ways are there to produce the sum $21$ with $4$ different natural numbers? $0$ isn't a natural number, and the order of the summands matters (e.g. $3+5+6+7$ is different from $5+6+7+3$).
I found that $a+b+c+d = 21$ has $2024$ options.
I think I need to subtract the invalid options, so I need to consider the cases:
$a = 0$, $b = 0$, $c = 0$, $d = 0$,
$a = b$, $a = c$, $a = d$, $b = c$, $b = d$, $c = d$?
| $$
\eqalign{
& N = {\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {a + b + c + d = 21\quad \left| {\;0 < a \ne b \ne c \ne d} \right.} \right) \cr
& \quad \Downarrow \cr
& N = 4!\;{\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {a + b + c + d = 21\quad \left| {\;0 < a < b < c < d} \right.} \right) \cr
& \quad \Downarrow \cr
& 0 < a = x_{\,1} \quad b = x_{\,2} + 1\quad c = x_{\,3} + 2\quad d = x_{\,4} + 3 \cr
& \quad \Downarrow \cr
& N = 4!\;{\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {x_{\,1} + x_{\,2} + 1 + x_{\,3} + 2 + x_{\,4} + 3 = 21\quad
\left| {\;0 < x_{\,1} \le x_{\,2} \le x_{\,3} \le x_{\,4} } \right.} \right) = \cr
& = 4!\;{\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {x_{\,1} + x_{\,2} + x_{\,3} + x_{\,4} = 15\quad
\left| {\;0 < x_{\,1} \le x_{\,2} \le x_{\,3} \le x_{\,4} } \right.} \right) = \cr
& = 4!\;{\rm N}{\rm .}\,{\rm of}\,{\rm partitions}\,{\rm of}\,15\;{\rm into}\,4{\rm parts} = \cr
& = 4!\;27 = 648 \cr}
$$
which checks with direct computation.
To exemplify that, let's take a case with smaller values
$$
\eqalign{
& N = {\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {a + b + c = 9\quad \left| {\;0 < a \ne b \ne c} \right.} \right) \cr
& \quad \Downarrow \cr
& N = 3!\;{\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {a + b + c = 9\quad \left| {\;0 < a < b < c} \right.} \right) \cr
& \quad \Downarrow \cr
& N = 3!\;{\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {x_{\,1} + x_{\,2} + 1 + x_{\,3} + 2 = 9\quad \left| {\;0 < x_{\,1} \le x_{\,2} \le x_{\,3} } \right.} \right) = \cr
& = 3!\;{\rm No}{\rm .}\,{\rm of}\,{\rm sol}{\rm .}\left( {x_{\,1} + x_{\,2} + x_{\,3} = 6\quad \left| {\;0 < x_{\,1} \le x_{\,2} \le x_{\,3} } \right.} \right) = \cr
& = 3!\;{\rm N}{\rm .}\,{\rm of}\,{\rm partitions}\,{\rm of}\,6\;{\rm into}\,3\;{\rm parts} = \cr
& = 3!\;3 = 18 \cr}
$$
and in fact
$$
\underbrace {\left[ {1,1,4} \right],\left[ {1,2,3} \right],\left[ {2,2,2} \right]}_{partit.\;x_{\,1} \le x_{\,2} \le x_{\,3} }\quad \Rightarrow \quad
\underbrace {\left[ {1,2,6} \right],\left[ {1,3,5} \right],\left[ {2,3,4} \right]}_{ordered\;0 < a < b < c}
$$
are the only increasing triplets of distinct positive integers that sum to $9$.
Since each of them consists of distinct integers, you can permute each of them
to obtain that the number of ordered triplets is $3!\cdot 3 = 18$.
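Both counts ($18$ for the smaller example and $648$ for the original question) can be confirmed by brute force; an illustrative sketch:

```python
from itertools import product

def count_ordered_distinct(total, parts):
    # ordered tuples of pairwise-distinct positive integers with the given sum
    count = 0
    for tup in product(range(1, total), repeat=parts - 1):
        last = total - sum(tup)
        if last >= 1 and len(set(tup + (last,))) == parts:
            count += 1
    return count

assert count_ordered_distinct(9, 3) == 18      # the smaller example above
assert count_ordered_distinct(21, 4) == 648    # the main count
```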
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Manifold parallelizable equivalent condition
This is an exercise from Loring Tu's Introduction to Manifolds that I am stuck at. I know that a tangent bundle is trivial if it is isomorphic to the product bundle $M \times \mathbb{R}^{n}$. Here $n$ is the dimension of the (smooth of course) manifold $M$.
I think I have to construct a smooth frame from the isomorphism and vice versa, but I cannot find a way to do so... Could anyone please help me?
| If there is a smooth frame $X_1,\dots,X_n$, then each tangent vector $V$ is uniquely written as
$$V=v^1X_1+\dots+v^nX_n.$$
What can you say about the map $(x,V)\mapsto(x,v^1,\dots,v^n)$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Coordinate transformations I have two scalar functions of $x$ and $y$ that I can define:
$$f(x,y)=x^2+y^2\qquad
\text{and}\qquad
g(x,y)=x^2 + \sin^2(x) y^2.$$
Is it true that there is literally no coordinate change that will take one to the other?
| FWIW, note that $$\mathrm{d}f\wedge\mathrm{d}g ~=~ h(x,y)\mathrm{d}x\wedge\mathrm{d}y,$$ where $$h(x,y)~:=~ 2y\{2x(\sin^2(x)-1)-y^2\sin(2x) \} ~=~ -4y\cos(x)\{x \cos(x)+y^2\sin(x) \}.$$ The pair $(f,g)$ are by definition functionally independent within the set $$\Omega~:=~\{(x,y)\in \mathbb{R}^2| h(x,y)\neq 0\}.$$ By the inverse function theorem the pair $(f,g)$ constitutes local coordinates in sufficiently small neighborhoods of $\Omega$.
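The coefficient $h$ is precisely the Jacobian determinant $\det \partial(f,g)/\partial(x,y)$, which can be cross-checked by finite differences; an illustrative sketch (sample points are arbitrary):

```python
import math

def f(x, y):
    return x ** 2 + y ** 2

def g(x, y):
    return x ** 2 + math.sin(x) ** 2 * y ** 2

def h(x, y):
    # coefficient of dx ^ dy computed in the answer above
    return -4 * y * math.cos(x) * (x * math.cos(x) + y ** 2 * math.sin(x))

def jacobian_det(x, y, eps=1e-6):
    # det of d(f,g)/d(x,y) by central differences
    fx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    fy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    gx = (g(x + eps, y) - g(x - eps, y)) / (2 * eps)
    gy = (g(x, y + eps) - g(x, y - eps)) / (2 * eps)
    return fx * gy - fy * gx

for (x, y) in [(0.3, 1.1), (1.0, -0.4), (-0.8, 0.6)]:
    assert abs(jacobian_det(x, y) - h(x, y)) < 1e-4
```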
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculate $\int e^{2x}(\cos x)^3 dx$
Calculate $$\int e^{2x}(\cos x)^3 dx$$
My try:
*
*Firsty I tried to use integration by parts but then I got:
$$\int e^{2x}\cos^3(x) dx=...=\frac{1}{2}\cos^3(x) e^{2x}+\frac{3}{2}\left(\int e^{2x} \sin(x) dx-\int e^{2x} \sin^3(x) dx \right)$$ So my calculation doesn't get very far, because I am left with $\int e^{2x} \sin^3(x)\, dx$, a problem of the same kind as the original one.
*After that I tried to use integration by substitution:
$$u=\cos(x)$$
$$du=-\sin(x)dx$$
However, with this substitution I don't know how to express $e^{2x}$ in terms of $u$.
Do you have any idea how to do this?
| Linearise $\cos^3 x$ first: $\;\cos 3x=4\cos^3x-3\cos x$, so $\;\cos^3x=\frac14(\cos 3x+3\cos x)$, whence
$$\mathrm e^{2x}\cos ^3 x=\tfrac14\operatorname{Re}\Bigl(\mathrm e^{(2+3i)x}+3\mathrm e^{(2+i)x}\Bigl)$$
so calculate $\;\frac14\displaystyle\int\bigl(\mathrm e^{(2+3i)x}+3\mathrm e^{(2+i)x}\bigl) \mathrm dx$ and take its real part.
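An antiderivative obtained this way can be checked by differentiating it numerically; a sketch using Python's complex arithmetic (constant of integration dropped):

```python
import cmath, math

def F(x):
    # real part of (1/4) * (e^{(2+3i)x}/(2+3i) + 3*e^{(2+i)x}/(2+i))
    return (0.25 * (cmath.exp((2 + 3j) * x) / (2 + 3j)
                    + 3 * cmath.exp((2 + 1j) * x) / (2 + 1j))).real

def integrand(x):
    return math.exp(2 * x) * math.cos(x) ** 3

h = 1e-6
for x in [0.0, 0.5, 1.3, -0.7]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central-difference F'(x)
    assert abs(deriv - integrand(x)) < 1e-5
```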
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Identity in a composition algebra Let $A$ be a real composition algebra ($A=\mathbb{R}, \mathbb{C}, \mathbb{H}, \mathbb{O}$). I would like to prove that $$ |\lambda|=1 \implies(\lambda u) \overline{(\lambda v)}=u\overline{v}$$
In a composition algebra we have that $\lambda \bar{\lambda}=|\lambda|^2$ but $A$ may be non-associative and non-commutative.
$\bar{x}:=2(x,1)-x$.
Moufang identities hold:
$$(ax)(ya)=a((xy)a) $$
$$a(x(ay))=(a(xa))y$$
$$x(a(ya))=((xa)y)a$$
| This is not true (in the noncommutative case). For instance, in the quaternions, taking $\lambda=u=i$ and $v=j$ we get $$(\lambda u)\overline{(\lambda v)}=i^2\overline{(ij)}=k\neq -k=i\overline{j}=u\overline{v}.$$
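The counterexample is easy to verify with a minimal Hamilton-product implementation (the tuple convention $(w,x,y,z)$ is mine):

```python
def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

lam, u, v = i, i, j
lhs = qmul(qmul(lam, u), conj(qmul(lam, v)))   # (lambda*u) * conj(lambda*v)
rhs = qmul(u, conj(v))                          # u * conj(v)
assert lhs == k                                 # equals k
assert rhs == (0, 0, 0, -1)                     # equals -k
assert lhs != rhs
```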
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3234928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Characterization of interior point of convex set using normal cones There is a theorem saying that for any convex set $Q$, $x\in \text{int } Q \Leftrightarrow N_Q(x)=\{0\}$. I'm trying to prove the backward direction, and my argument is as follows: If $N_Q(x)=\{0\}$, then equivalently any nonzero vector cannot be in the normal cone at $x$, which means that any nonzero vector must make some acute angle with $y-x$ for some $y\in Q$. I'm trying to argue that this implies $\exists \epsilon>0$ such that $B_\epsilon (x)\subset Q$, but there seems to be a gap in the argument. How can I fill in this gap? Or are there other ways to prove this?
| Assume that $N_{Q}(x) = \{0\}$. For the sake of contradiction, take $x \in \mathbf{bd}(Q)$, the set's boundary. Then, the supporting hyperplane theorem implies that $\exists v \in \mathbb{R}^d, v \neq 0$ such that
$$
\langle v, x \rangle \geq \langle v, y \rangle, \; \forall y \in Q \implies
\langle v, y - x \rangle \leq 0, \; \forall y \in Q \Leftrightarrow v \in N_Q(x).
$$
This contradicts your assumption on $N_Q$, since $v$ is guaranteed to be nonzero. Hence $x$ must indeed be in $\mathrm{int}(Q)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Put trivariate PDF in terms of bivariate PDFs Let there be the random variables $X$, $Y$, and $Z$. Let all the bivariate PDFs $f_{X, Y}$, $f_{X, Z}$, and $f_{Y, Z}$ be known.
Can we write the unknown trivariate PDF $f_{X, Y, Z}$ in terms of the known bivariate PDFs?
| Here is an example, obtained by tweaking a 2D counterexample: Let $(X,Y,Z)$ be such that
$$
f_{X,Y,Z}(x,y,z)=
\begin{cases}
2 \phi(x)\phi(y)\phi(z) & xyz>0\\
0 & \text{otherwise}
\end{cases}
$$
where $\phi$ is the standard 1D Gaussian pdf. Then the bivariate marginals $f_{X,Y}, f_{Y,Z}, f_{Z,X}$ are all standard 2D Gaussians, but of course $(X,Y,Z)$ is not Gaussian. So the standard 3D Gaussian and this $f$ are different trivariate PDFs with the same bivariate marginals, which shows the bivariate PDFs do not determine $f_{X,Y,Z}$.
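One can check numerically that integrating out $z$ really recovers the standard 2D Gaussian marginal; a sketch (Simpson's rule, tail beyond $|z|=8$ assumed negligible):

```python
import math

def phi(t):
    # standard 1D Gaussian pdf
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def f(x, y, z):
    # the counterexample density defined above
    return 2 * phi(x) * phi(y) * phi(z) if x * y * z > 0 else 0.0

def marginal_xy(x, y, n=4000):
    # integrate f over z: f is 2*phi(x)*phi(y)*phi(z) on one half-line and 0
    # on the other, so apply Simpson's rule to the smooth branch only
    a, b = (0.0, 8.0) if x * y > 0 else (-8.0, 0.0)
    g = lambda z: 2 * phi(x) * phi(y) * phi(z)
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

for (x, y) in [(0.5, 1.0), (-0.5, 1.0), (1.2, -0.3)]:
    assert abs(marginal_xy(x, y) - phi(x) * phi(y)) < 1e-8
```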
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $\space$ $\forall$ $x \in \Bbb R$, $\space$ $f(f(x))=x^2-x+1$. Find the value of $f(0)$. If $\space$ $\forall$ $x \in \Bbb R$, $\space$ $f(f(x))=x^2-x+1$. Find the value of $f(0)$.
I thought that making $f(x)=0$ implies that $f(0)= 0^2 - 0 + 1 = 1$, but I think that this isn't correct, because the $x$ in $f(f(x))$ isn't equal to $f(x)$.
Any hints?
| Note that the only fixed point of $ff$ is $1$. Since $ff(0)=1$ is fixed by $ff$, we have $fff(0)$ must also be a fixed point of $ff$. This gives $fff(0)=1$ and so $f(0)\in(ff)^{-1}(1)=\{0,1\}$. But $0$ isn't a fixed point of $f$ (since it isn't fixed under $ff$), so $f(0)=1$.
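The two algebraic facts used here, that $x^2-x+1=x$ forces $(x-1)^2=0$ and that $x^2-x+1=1$ forces $x\in\{0,1\}$, can be spot-checked over a small integer range (the algebra shows these are the only real solutions as well):

```python
def ff(x):
    return x * x - x + 1

# fixed points of ff: x^2 - x + 1 = x  <=>  (x - 1)^2 = 0, so only x = 1
fixed = [x for x in range(-10, 11) if ff(x) == x]
# preimages of 1 under ff: x^2 - x + 1 = 1  <=>  x(x - 1) = 0, so x in {0, 1}
preimages_of_1 = [x for x in range(-10, 11) if ff(x) == 1]

assert fixed == [1]
assert preimages_of_1 == [0, 1]
assert ff(0) == 1          # ff(0) = 1, as used in the argument
```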
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can the minimum of two consecutive prime gaps become arbitrarily large? Here:
https://oeis.org/A023186
the so-called "Lonely primes" are shown.
Let $$[a,b,c]$$ be a triple of consecutive primes and define $$d:=\min(c-b,b-a)$$
My question:
Can we prove that $d$ can become arbitrarily large? In other words, can we prove that the minimum of two consecutive prime gaps can become arbitrarily large?
With the Chinese remainder theorem we can construct a positive integer $N$ such that each number in the range $$[N-d,N+d]$$ except $N$ (where $d$ is some positive integer) must have a prime factor not exceeding $p_{2d}$. But can we show that $N$ can be prime (perhaps by using Dirichlet's theorem) and that $N-d>p_{2d}$ can also be satisfied?
| If you read that OEIS page you linked to, it says
Erdős and Suranyi call these reclusive primes and prove that there are an infinite number of them.
(Emphasis mine.) Each lonely prime is the $b$ of a consecutive prime triple $[a, b, c]$ for which $d = \min(b-a, c-b)$ is larger than for any previous prime triple. Since this sequence of lonely primes continues infinitely, there must be arbitrarily large $d$.
In particular, two consecutive gaps of size at least $2k$ must occur at the latest on either side of lonely prime number $k$ (counting $2$ as the $0$th lonely prime). If you instead take sequence number 120937, then it duplicates the lonely primes such that it happens for the first time exactly at term number $k$.
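A quick computation illustrates the growth of $d$; for instance the consecutive primes $199, 211, 223$ already give $d = 12$ (sketch with a simple sieve):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

ps = primes_up_to(300)
# d = min(c - b, b - a) over consecutive prime triples [a, b, c]
best = max(min(b - a, c - b) for a, b, c in zip(ps, ps[1:], ps[2:]))
assert best >= 12      # achieved by the triple (199, 211, 223)
```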
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Partial Differentiation of an equation using implicit differentiation confusion I wanted to ask a question about implicit differentiation in partial differentiation.
When I was at school, I remember partial differentiation as something like this:
When you have a function composed of $x$ and $y$'s and you run into a $y$ term, differentiate with respect to $y$ and multiply by $\frac{dy}{dx}$ i.e.
$$\frac{d}{dx} = \frac{d}{dy} \times \frac{dy}{dx}$$
Now, I read a problem on the Physics section yesterday afternoon that is relevant to my Chemistry course and I couldn't work it out.
If I have the equation
\begin{aligned} \frac{F\left(N_{A}, N_{B}\right)}{k T}=& N_{A} \ln \left(\frac{N_{A}}{N}\right)+N_{B} \ln \left(\frac{N_{B}}{N}\right) \\ &+\left(\frac{z w_{A A}}{2 k T}\right) N_{A}+\left(\frac{z w_{B B}}{2 k T}\right) N_{B}+\chi_{A B} \frac{N_{A} N_{B}}{N} \end{aligned}
I can get a quantity called the Chemical potential $\mu_{A}$ by differentiating the above equation with respect to $N_A$ while keeping $N_B$ and $T$ constant.
$$\mu_{A}=\left(\frac{\partial F} {\partial N_{A}}\right)_{T, N_{B}}$$
(The above equation is called the Free Energy Equation)
$N$ is the total number of molecules in the system, $N_A$ and $N_B$ are the number of molecules of A and B respectively. $N$ is not constant. They are related quite simply as via the sum:
$$N = N_A + N_B$$
which makes sense.
This equation also shows us that a small change in $N_A$ will also yield a small change in $N$, which is why $N$ is not constant, as mentioned above.
So I wanted to get from this equation, $\mu_{A}$
$$\frac{\mu_{A}}{k T}=\left[\frac{\partial}{\partial N_{A}}\left(\frac{F}{k T}\right)\right]_{T, N_{B}}$$
which makes sense.
The result given from the book (page 7), however, confused me.
They gave the result as
$$=\ln \left(\frac{N_{A}}{N}\right)+1-\frac{N_{A}}{N}-\frac{N_{B}}{N}+\frac{z w_{A A}}{2 k T}+\chi_{A B} \frac{\left(N_{A}+N_{B}\right) N_{B}-N_{A} N_{B}}{\left(N_{A}+N_{B}\right)^{2}}$$
and only the last two terms made sense.
The OP asked where the terms
$$-\frac{N_{A}}{N}-\frac{N_{B}}{N}$$
came from and the answer given was as follows:
You missed the $N$ in the logarithm. Since $N_{B}$ is kept constant while changing $N_{A}$, the total number of particles $N=N_{A}+N_{B}$ changes as well. The missing term is
$$\dfrac{\partial N}{\partial N_{A}}\dfrac{\partial}{\partial N}\left[N_{A}\ln\left(\dfrac{N_{A}}{N}\right)+N_{B}\ln\left(\dfrac{N_{B}}{N}\right)\right]=-\dfrac{N_{A}}{N}-\dfrac{N_{B}}{N}$$
and in the comments the value of $\dfrac{\partial N}{\partial N_{A}}$ was clarified to be $1$, again using the equation $N = N_A + N_B$. I understood this step.
but I then thought, if that was the case, how did the terms $$\ln \left(\frac{N_{A}}{N}\right)+1$$ arise then?
My original thought, similar to the OP was to differentiate:
$$\dfrac{\partial}{\partial N_{A}}\left[N_{A}\ln\left(\dfrac{N_{A}}{N}\right)+N_{B}\ln\left(\dfrac{N_{B}}{N}\right)\right]$$
but as explained, this assumed $N$ was constant, which it is not.
How are the terms
$$\ln \left(\frac{N_{A}}{N}\right)+1$$
yielded by this application of implicit differentiation, if $\dfrac{\partial N}{\partial N_{A}} = 1$ using $\dfrac{\partial N}{\partial N_{A}}\dfrac{\partial}{\partial N}$?
| You're right that $N$ is not a constant; in fact, it's shorthand for a function $N(N_A, N_B) = N_A + N_B$.
$N$ is substituted for convenience. If you want to differentiate the expression $F/kT$ with respect to $N_A$, one option is to replace $N$ with $N_A+N_B$ wherever it occurs. $N$ is defined to be $N_A+N_B$, so this replacement is always okay, and when you differentiate the overall substituted expression with respect to $N_A$, you will get the right answer.
The full expression, with substitution, is:
$$\begin{aligned}F(a,b)/kT = & a \ln \left(\frac{a}{a+b}\right)+b \ln \left(\frac{b}{a+b}\right) \\ &+\left(\frac{z w_{A A}}{2 k T}\right) a+\left(\frac{z w_{B B}}{2 k T}\right) b+\chi_{A B} \frac{ab}{a+b} \end{aligned}$$
which consists of five terms. You can differentiate each of them separately with respect to $a$ then add up the results. For example, you can work out that the derivative of the first term is $\frac{b}{a+b} + \ln \left(\frac{a}{a+b}\right)$ and the derivative of the second term is $\frac{-b}{a+b}$.
Added together, these two terms give you $$\ln(\frac{a}{a+b}).$$ Actually this is equal to the first four mysterious terms $\ln(N_A/N)+1 - N_A/N - N_B/N$ in your answer—note that the four terms are actually simpler than they appear because the last three cancel:
$$1 - \frac{N_A}{N} - \frac{N_B}{N} = 1 - \frac{N_A+N_B}{N} = 1 - \frac{N}{N} = 1 - 1 = 0$$
so those four mysterious terms are equivalent to simply
$$\ln\left(\frac{N_A}{N}\right)$$
Instead of using the substitution route, you can also just apply the chain rule. Done properly, this must give you the same result.
Let's try it just on a simple term like $\ln(N_A/ N)$. We have:
$$\begin{align*}\partial_a \ln\left(\frac{a}{n}\right) &= \frac{1}{a/n} \cdot \partial_a \frac{a}{n} & \text{\{chain rule for ln\}}\\ &= \frac{1}{a/n} \cdot \frac{\partial_a(a)n - \partial_a(n)a}{n^2}&\text{\{quotient rule\}}\\ &= \frac{1}{a/n} \cdot \frac{1\cdot n - 1\cdot a}{n^2}\\& = \frac{n}{a}\frac{n-a}{n^2} \\&= \frac{1}{a}\frac{n-a}{n} \\&= \frac{1}{a}\frac{n}{n} - \frac{1}{a}\frac{a}{n} \\&= \frac{1}{a} - \frac{1}{n}\end{align*}$$
So when we compute the derivative of a more complex term like $N_A\ln(N_A/N)$, the product rule says that this is:
$$N_A \cdot \partial_{N_A} \ln\left(\frac{N_A}{N}\right) + \partial_{N_A}[N_A] \cdot \ln\left(\frac{N_A}{N}\right) $$
$$=N_A\left(\frac{1}{N_A} - \frac{1}{N}\right) + 1\cdot \ln\left(\frac{N_A}{N}\right) = \left(1 - \frac{N_A}{N}\right) + \ln\left(\frac{N_A}{N}\right)$$
in agreement with the substitution route above.
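As a quick numerical sanity check of the combined derivative of the two entropy terms, here is a minimal sketch with arbitrarily chosen values $a=3$, $b=5$ (my own illustration):

```python
import math

def mixing_terms(a, b):
    """The two entropy terms a*ln(a/N) + b*ln(b/N) with N = a + b."""
    n = a + b
    return a * math.log(a / n) + b * math.log(b / n)

a, b, h = 3.0, 5.0, 1e-6
# Central finite difference in a; b is held fixed, so N = a + b varies with a.
numeric = (mixing_terms(a + h, b) - mixing_terms(a - h, b)) / (2 * h)
analytic = math.log(a / (a + b))  # the two derivative terms added together
print(numeric, analytic)
assert abs(numeric - analytic) < 1e-8
```

The finite difference agrees with $\ln(a/(a+b))$ to high precision, confirming that the derivative really does treat $N$ as a function of $N_A$.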
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
If $f:A\rightarrow \mathbb{R}$ is an integrable function on $B$, does $\int_Af$ exist? Let $A$ and $B$ be open sets of $\mathbb{R}^n$ such that $A-B$ has measure zero. If $f:A\rightarrow \mathbb{R}$ is an integrable function on $B$, can we ensure that $\int_Af$ exists?
I am trying show that $\int_Af$ exists, where
$$A=\{(x, y)\in \mathbb{R}^2: x>0\text{ and } y>0\}$$
and the function $f:A\rightarrow \mathbb{R}$ is given by
$$f(x, y)=\frac{1}{(x^2+\sqrt{x})(y^2+\sqrt{y})}.$$
I proved that $\int_Bf$ exists, where
$$B=\{(x, y)\in A: x\neq 1\text{ and } y\neq 1\}.$$
We have that $A-B$ has measure zero, so can we ensure that $\int_Af$ exists?
| You are right. You can use that the integral over $A$ is equal to the integral over $A\cap B$ plus the integral over $A\setminus B$, which is zero (together with the fact that $f\geq 0$). So $f$ is an integrable function over $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine this limit $\lim_{n\to\infty} \frac{\ln\left(\frac{3\pi}{4} + 2n\right)-\ln\left(\frac{\pi}{4}+2n\right)}{\ln(2n+2)-\ln(2n)}.$ how can I determine the following limit? $$\lim_{n\to\infty} \frac{\ln\left(\frac{3\pi}{4} + 2n\right)-\ln\left(\frac{\pi}{4}+2n\right)}{\ln(2n+2)-\ln(2n)}.$$
This question stems from this question. The proof presented there is incorrect, and it would be trivial to show that the mentioned integral diverges if the above limit is $>0$ by using the comparison test (anyone feels free to do this by the way).
| Using the rules of logarithms, write $$\frac{\ln\left(\frac{\frac{3\pi}{4}+2n}{\frac{\pi}{4}+2n}\right)}{\ln\left(\frac{2n+2}{2n}\right)}$$ and then use $\ln(1+x)\sim x$ as $x\to 0$: the numerator is $\ln\left(1+\frac{\pi/2}{\frac{\pi}{4}+2n}\right)\sim\frac{\pi}{4n}$ and the denominator is $\ln\left(1+\frac{1}{n}\right)\sim\frac{1}{n}$, so the limit is $\frac{\pi}{4}$.
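Combining the logarithms this way also makes the limit easy to check numerically; a minimal sketch:

```python
import math

def ratio(n):
    num = math.log((3 * math.pi / 4 + 2 * n) / (math.pi / 4 + 2 * n))
    den = math.log((2 * n + 2) / (2 * n))
    return num / den

for n in (10, 1000, 10**6):
    print(n, ratio(n))  # tends to pi/4 = 0.785398...
```

In particular the limit is strictly positive, which is what the comparison-test argument in the question needs.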
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3235939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Index of Fibonacci primes and Lucas primes. For an integer $n\geq 0$ let $F_n$ denote the $n$th Fibonacci number and let
$L_n$ denote the $n$th Lucas number.
It is known
that $F_n$ is prime only if $n$ is prime or $n=4$.
According to Wikipedia
it is known that $L_n$ is prime only if $n$ is $0$, prime or a power of $2$.
The converses are not true. Indeed,
$$\begin{array}{ccc}
F_2 &= &1,\\
F_{19} &= &37\times 113,\\
L_{23} &= &139\times 461,\\
L_{64} &= &1087 \times 4481.
\end{array}$$
Let $S=\{n: F_n\text{ is prime}\}$ and $T=\{n: L_n\text{ is prime}\}$.
Q: Is the set $S\cap T$ finite?
Some remarks:
*
*Some computations in Mathematica suggest that $n=4, 5, 7, 11, 13, 17, 47$ are all the integers in
$S\cap T$ with $n\leq 10000$.
*It follows by the exposition above that an element of $S\cap T$ is either $4$ or prime.
*It is not known whether there are infinitely many Fibonacci prime numbers. So, either my question is an open problem or the answer is yes.
*My interest in the set $S\cap T$ arises from a question related to
giving approximations of $\sqrt{5}$ as the fraction of two prime numbers. Recall that $\frac{L_n}{F_n}$ tends to $\sqrt 5$ when $n$ tends to infinity.
*I have no strong background on Fibonacci numbers. All remarks are welcome.
| Your question may not yet be answerable, because we don't know whether there exist infinitely many Fibonacci primes, nor whether there exist infinitely many Lucas primes.
One thing (among many) that we do know is that $F_{2n} = F_{n}L_{n}$, with $\gcd(F_{n}, L_{n})$ being either $1$ or $2$.
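Both the identity and the small elements of $S\cap T$ are easy to confirm by brute force; a minimal sketch (trial-division primality, which is fine at this size):

```python
from math import gcd

def fib_lucas(n_max):
    """Return lists F[0..n_max], L[0..n_max] of Fibonacci and Lucas numbers."""
    F, L = [0, 1], [2, 1]
    for _ in range(n_max - 1):
        F.append(F[-1] + F[-2])
        L.append(L[-1] + L[-2])
    return F, L

def is_prime(m):
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

F, L = fib_lucas(50)
for n in range(1, 26):
    assert F[2 * n] == F[n] * L[n]      # the identity F_{2n} = F_n L_n
    assert gcd(F[n], L[n]) in (1, 2)
both = [n for n in range(2, 51) if is_prime(F[n]) and is_prime(L[n])]
print(both)  # [4, 5, 7, 11, 13, 17, 47]
```

This reproduces the list $n=4, 5, 7, 11, 13, 17, 47$ from the question for $n\le 50$.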
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Triangle inequality for angles in Euclidean space Is there any simple proof of the following statement: for all vectors $ v,w,u\in V\setminus\{0\} $, where $ V $ is a Euclidean space, inequality
$$ \angle(u,v)\le\angle(u,w)+\angle(w,v)$$
holds.
Unfortunately, I couldn't find anything useful in books or on Google. I've seen this post: Triangle inequality for angles, but I'm not sure whether the given answer is correct, or whether there is a clearer proof.
| That inequality is only true if you are careful about the numerical value assigned to an angle and how you add angles.
The dot product definition $\angle(u,v)=\arccos\frac{u\cdot v}{\lvert u\rvert\,\lvert v\rvert}$ gives unsigned angles in $[0,\pi]$.
If you measure angles by the (nonnegative) great circle arclength they cut off on the unit sphere what should happen when the sum wraps around to more than a full circle?
If all you care about are small unsigned angles then you can use that arclength - it's just the triangle inequality for great circle distances.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Fluids - vector identity confusion Trying to prove Kelvin's Circulation Theorem, but struggling to see why the following equality holds:
$$\textbf{u} \cdot (d\textbf{l}\cdot \nabla)\textbf{u} = d\textbf{l} \cdot \nabla \left(\frac{1}{2} u^2 \right)$$
I don't really have much idea where to start. I can take that gradient on the RHS, but that is of a scalar function and I'm not sure how I'll be able to get the vector $\textbf{u}$ out in the way that looks anything like how it should on the LHS...
Any hints would be appreciated (don't mind a solution either)
Edit: added picture of $d\mathbf{l}$ representation from notes.
| \begin{align}
d\textbf{l} \cdot \nabla \left(\frac{1}{2} u^2 \right)
&\;=\; \sum_{\mu\, \nu} dl_{\mu} \, \frac{\partial}{\partial x_{\mu}} \, \frac{1}{2} u_{\nu}u_{\nu}\\
&\;=\; \sum_{\mu\, \nu} dl_{\mu} \, u_{\nu}\, \frac{\partial}{\partial x_{\mu}} u_{\nu}\\
&\;=\; \sum_{\mu\, \nu} u_{\nu} \, dl_{\mu} \, \frac{\partial}{\partial x_{\mu}} u_{\nu}\\
&\;=\; \mathbf{u} \cdot \bigl[
\left(d\mathbf{l}\cdot \mathbf{\nabla}\right) \mathbf{u}
\bigr]
\end{align}
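The index computation above can also be checked numerically with finite differences; here is a minimal sketch, assuming an arbitrary smooth field $\mathbf u$ and constant $d\mathbf{l}$ (both chosen purely for illustration):

```python
import math

def u(p):
    """An arbitrary smooth velocity field, chosen purely for illustration."""
    x, y, z = p
    return (x * y + z, math.sin(x) + y * z, x * x - y)

def half_u_sq(p):
    return sum(c * c for c in u(p)) / 2.0

def ddir(f, p, d, h=1e-6):
    """Directional derivative (d . grad) f at p, by central differences."""
    pp = tuple(pi + h * di for pi, di in zip(p, d))
    pm = tuple(pi - h * di for pi, di in zip(p, d))
    return (f(pp) - f(pm)) / (2 * h)

p = (0.7, -0.3, 1.2)       # an arbitrary point
dl = (0.31, -0.12, 0.25)   # an arbitrary constant displacement direction

# LHS: u . [(dl . grad) u], summed component by component
lhs = sum(ui * ddir(lambda q, i=i: u(q)[i], p, dl) for i, ui in enumerate(u(p)))
# RHS: dl . grad(u^2 / 2), the directional derivative of |u|^2/2 along dl
rhs = ddir(half_u_sq, p, dl)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6
```

The two sides agree to finite-difference accuracy, as the index manipulation predicts.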
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $\{a_k\}$ is bounded, then $\sum_{k=0}^{\infty}{a_k}z^{k}$ defines an analytic function on the open unit disk. Is this conclusion true: if $\{a_k\}$ is bounded, then $\sum_{k=0}^{\infty}a_k z^{k}$ defines an analytic function on the open unit disk?
I tried to construct counterexamples but looks like the statement is true.
How can I prove it?
| Use M-test. $|z| <1$ implies $\sum |a_k||z^{k}|\leq M\sum|z|^{k} =M\frac 1 {1-|z|}<\infty$ where $M=\sup \{|a_k|:k\geq 1\}$. This implies that the sum is analytic in the open unit disk.
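A quick numerical illustration of the geometric bound, with coefficients drawn arbitrarily subject to $|a_k|\le 1$ (my own sketch):

```python
import cmath
import random

random.seed(1)
M = 1.0
a = [random.uniform(-1, 1) for _ in range(2000)]  # a bounded sequence, |a_k| <= M
z = 0.9 * cmath.exp(0.7j)                         # a point inside the unit disk
partial = sum(ak * z**k for k, ak in enumerate(a))
bound = M / (1 - abs(z))                          # the geometric-series bound
print(abs(partial), bound)
assert abs(partial) <= bound + 1e-9
```

However the bounded coefficients are chosen, the partial sums stay inside $M/(1-|z|)$, which is why the series converges absolutely on the open disk.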
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Cycle notation for cyclic orders Is there a convenient cycle notation for cyclic orders (https://en.wikipedia.org/wiki/Cyclic_order)?
For example:
Definition. A set of four elements $a, b, c, d$ of a cyclically ordered set is a 4-cycle $[a, b, c, d]$ if $[a,b,c] \land [c,d,a]$.
Using transitivity it is easy to show that
$[a, b, c, d] \implies [a,b,c]$, $[b,c,d]$, $[c,d,a]$, $[d,a,b]$, and all the cyclic equivalents:
$[b,c,a]$, $[c,a,b]$, $[c,d,b]$, $[d,b,c]$, $[a,c,d]$, $[d,a,c]$, $[a,b,d]$, $[b,d,a]$.
Which also means $[a, b, c, d] \iff [b, c, d, a] \iff [c, d, a, b] \iff [d, a, b, c]$.
It could simplify the ternary arithmetic.
| I am not aware of any standard notation, but your notation $[a, b, c, d]$ makes sense and could be extended to $[a_1, a_2, \dots, a_n]$ for any $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question regarding partially ordered sets I have encountered few questions while reading the book 'Modern Algebra'.
Let $\mathbb Q$ be the set of rational numbers.
Let
$B = \{ x : x\in\mathbb Q,\sqrt2 < x < \sqrt3 \}$.
How it can be shown that -
*
*$B$ has infinite number of upper and lower bounds.
*$\inf B$ and $\sup B$ do not exist .
Why can't $\sqrt2$ and $\sqrt3$ be taken as lower and upper bounds?
I am not able to understand that how can infimum and supremum can exist at the first place as the relation defined is not a binary relation.
A detailed proof would be helpful.
|
Why $\sqrt2$ and $\sqrt 3$ can not be taken as lower and upper bounds?
Because the exercise is implicitly asking about an inf and sup that are in $\mathbb Q$. The exercise is perhaps not stating this very clearly, but remember that infimum and supremum is always about how a set fits into a certain ordered superset.
I am not able to understand that how can infimum and supremum can exist at the first place as the relation defined is not a binary relation.
The order relation that is being considered is the ordinary comparison of rational numbers. $x\le y$ is certainly a binary relation. (This is the default assumption when you speak about order properties of sets of numbers).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How many toys can be chosen? There are 3 red, 5 blue, 2 yellow and 4 green toys in the box. In how many different ways can 6 toys be chosen if one of them should be blue and the other one - yellow?
I came up with a solution but I am not sure if it is right.
2 toys out of 6 should be of a specific colour. Then, only 4 toys can be of any other colour.
so, the number of different ways $= C^4_{12} = \frac{12!}{4!8!}=495$
| Given the clarifications in comments – the toys are distinct, order doesn't matter, exactly one is blue and exactly one is yellow – we have
*
*$5$ ways to choose the blue toy and $2$ ways to choose the yellow one
*$\binom74=35$ ways to choose the other four toys
Thus there are actually $5\times 2\times 35=350$ valid selections.
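The count is easy to confirm by brute force over all $\binom{14}{6}=3003$ selections; a minimal sketch:

```python
from itertools import combinations

# 14 distinct toys, identified by position: 3 red, 5 blue, 2 yellow, 4 green
toys = ['R'] * 3 + ['B'] * 5 + ['Y'] * 2 + ['G'] * 4
count = 0
for pick in combinations(range(len(toys)), 6):
    colours = [toys[i] for i in pick]
    if colours.count('B') == 1 and colours.count('Y') == 1:
        count += 1
print(count)  # 350
```

Exhaustive enumeration agrees with $5\times 2\times\binom74 = 350$.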
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3236949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Covering map from sphere with six points removed to doubly-punctured complex plane
$X$ is $S^2\subset \mathbf{R}^3$ with its intersection points with the coordinate axes removed.
Show that the following map is a covering map. $$\begin{align*}p:X&\longrightarrow \mathbf{C}-\{0,1\} \\ (x,y,z)&\longmapsto \left(\frac{x+iy}{1-z} \right)^4 \end{align*}$$
To check if it is a covering map, I should show that it is (1) continuous, (2) surjective and (3) that for all $w\in\mathbf{C}-\{0,1\}$ there is an open neighbourhood $U$ for which $p^{-1}(U)$ is the disjoint union of subsets $V_\alpha\subset X$ which are all homeomorphic to $U$.
We have never encountered such a difficult covering map before, the only one we have seen in class is the standard covering map $\mathbf{R}\to S^1:t\mapsto e^{it}$ and $S^1\to S^1:z\mapsto z^n$ for the circle $S^1$.
I know that the formula for stereographic projection $S^2-\{(0,0,1) \}\to \mathbf{C}$ from the north pole $(0,0,1)$ is $(x,y,z)\mapsto \frac{x+iy}{1-z}$. This is clearly a bijection $\phi:S^2\setminus \{(0,0,1)\}\leftrightarrow\mathbf{C}$. Under $\phi$, the missing points of $X$ are mapped to $0,1,-1,i$ and $-i$. The map $z\mapsto z^4$ then 'rotates' the plane such that the points $1,-1,i,-i$ are sent to $1$.
The function is continuous since it's a formula, and $z\neq 1$ since we removed the point $(0,0,1)$ from the sphere.
I tried proving that it is surjective by using spherical coordinates, but that did not work out. Since $X$ does not contain $(0,0,0)$ we never reach $0$ in the image, but then why don't we reach $1$?
Could someone provide any help?
| $\left(\dfrac{x+iy}{1-z} \right)^4 = 1$ means that $\dfrac{x+iy}{1-z} \in \{ 1, -1, i, -i \}$. Let us assume $\dfrac{x+iy}{1-z} = \pm1$. Then $x + iy = \pm(1-z)$ which implies $y = 0$ and $x = \pm(1-z)$. Since $x^2 + z^2 = 1$, we get $(1-z)^2 + z^2 = 1$ which implies $z =0$ or $z=1$. For $z=0$ we get $x = \pm 1$, hence $(x,y,z) = (\pm 1, 0,0) \notin X$, for $z=1$ we get $x = 0$, hence $(x,y,z) = (0, 0,1) \notin X$. Similarly $\dfrac{x+iy}{1-z} = \pm i$ is impossible.
Now look what $p$ does. As we know, $z = \pm 1$ is impossible since then $(x,y,z) = (0,0,\pm 1)$. For $\zeta \ne \pm 1$ let $X_\zeta = \{(x,y,z)\in X \mid z = \zeta \}$. This is the intersection of $X$ with the plane $P_\zeta = \{(x,y,z) \in \mathbb R^3 \mid z = \zeta \}$.
For $\zeta \ne 0$ it is a circle in $P_\zeta$ with center $(0,0,\zeta)$ and radius $\sqrt{1-\zeta^2}$. It is wrapped by $p$ four times around the circle in $\mathbb C$ with center $0$ and radius $r_\zeta = \dfrac{(1-\zeta^2)^2}{(1-\zeta)^4} = \left( \dfrac{1+\zeta}{1 - \zeta} \right)^2$. For $\zeta \in (-1,0)$ the function $\zeta \mapsto r_\zeta$ is an increasing homeomorphism onto $(0,1)$, and for $\zeta \in (0,1)$ an increasing homeomorphism onto $(1,\infty)$. This shows that each $w\in \mathbb C$ with $\lvert w \rvert \ne 0,1 $ is in the image of $p$.
For $\zeta = 0$ the set $X_0$ is the disjoint union of four open quarters of the unit circle in $P_0$. Each of them is mapped to the unit circle in $\mathbb C$ minus the point $1$. This shows that $p$ is surjective.
It should now be clear how to verify that $p$ is a covering projection with four sheets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
$f(x)= \frac{\sqrt{x^2-1}}{x+\log x}$ in the set $E=[1,+ \infty)$ I have the function $f(x)= \frac{\sqrt{x^2-1}}{x+\log x}$ in the set $E=[1,+ \infty)$
(i) I have to prove that for every $y_0 \in [0,1)$ there exists exactly one $x_0 \in E$ such that $f(x_0)=y_0$
(ii) to discuss the uniform continuity of f in E.
Supposing that there are two different values $x_1$ and $x_2 \in E $ such that
$f(x_1)=y_0=f(x_2)$, I have to prove that $ f(x_1)=y_0=f(x_2)>1$
Can someone help me to understand what to do?
| 1) first show that $f$ is a strictly increasing function; hence, (i) follows.
2) note that $f$ is also bounded and continuous. Now show that any bounded continuous increasing function is uniformly continuous. Hence $f$ is also uniformly continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Counting strategies Exam Question How many different ways can people finish in
i) a $4$ person race, ii) a $6$ person race, iii) a $10$ person race
What I did:
$4^4 = 256$
$6^6 = 46,656$
$10^{10}$
as there are $4$ people and therefore 4th place 3rd place 2nd place and 1st place so $4$ to the power of $4$.
I don't know where I went wrong, and is there a formula for working out these types of questions?
Thank You and Help is Appreciated
| Just because there are $4$ ways to choose the first person and $4$ ways to choose the second person doesn't mean there are $16$ ways to choose both; some combinations, such as $AAAA$, aren't allowed (but are still counted in your $4^4$).
Instead, try looking at the factorial function.
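A brute-force check for the $4$-person race (a minimal sketch):

```python
from itertools import permutations
from math import factorial

# Every finishing order is an arrangement of distinct runners, so count permutations.
orders = set(permutations('ABCD'))
print(len(orders), factorial(4))  # 24 24

for n in (4, 6, 10):
    print(n, factorial(n))  # 24, 720, 3628800
```

So the answers are $4! = 24$, $6! = 720$ and $10! = 3{,}628{,}800$ rather than $n^n$.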
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Factoring primes in $\mathbb{Q}(\sqrt{-1},\sqrt{5})$
Show that $2,5$ are the only primes which ramify in $L:=\mathbb{Q}(\sqrt{-1},\sqrt{5})$, and that their ramification indices are both $2$.
Obviously $K_1:=\mathbb{Q}(\sqrt{-1})$ and $K_2:=\mathbb{Q}(\sqrt{5})$ are both Galois and it's easy to check that:
$$K_1K_2=\mathbb{Q}(\sqrt{-1},\sqrt{5})$$
$$K_1\cap K_2=\mathbb{Q}$$
$$(d_{K_1},d_{K_2})=(-4,5)=1$$
Since $\mathcal{O}_{K_1}=\langle 1,i\rangle_\mathbb{Z}$ and $\mathcal{O}_{K_2}=\langle 1,\omega\rangle_\mathbb{Z}$, there's a famous proposition which says that $\mathcal{O}_L=\langle 1,i,\omega,i\omega\rangle_\mathbb{Z}$ and $d_L=d_{K_1}^2d_{K_2}^2=2^45^2$.
This proves $2,5$ are the only primes which ramify in $L$. I would know how to calculate the ramification indices if $\mathcal{O}_L=\mathbb{Z}[\theta]$ for some $\theta\in L$ by factoring the minimal polynomial of $\theta$ modulo $2$ and $5$.
I can't find such $\theta$, so I'm kind of lost.
| I'll make my comment an answer and add the case of $5$.
See what happens to $(2)$ in $K_2$ and then from there see what happens in $K_1K_2$. From $\mathbb{Q}$ to $K_2$ we have $e=1$ and $f=2$ and $g=1$ (since $x^2-x-1$ is irreducible mod $2$). Then going from $K_2$ to $K_1K_2$, a degree $2$ extension, we know $e=1$ or $e=2$, but it must be $2$ since we know the rational prime $2$ ramifies from the discriminant calculation. So from $\mathbb{Q}$ to $K_1K_2$ we have $e=2$ and $f=2$ and $g=1$ for the prime $2$.
For the prime $5$, use $K_1$. Since $X^2 + 1 = (X+2)(x+3)$ mod 5, we have that $5$ splits in $K_1$ into $g=2$ primes, and $e=1$ and $f=1$. Then going from $K_1$ to $K_1K_2$ we know 5 will end up being ramified, so $e=2$. So from $\mathbb{Q}$ to $K_1K_2$ we have $g=2, e=2, f=1$.
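The two factorizations mod $2$ and mod $5$ used above are easy to confirm by brute force (for a quadratic, having no roots mod $p$ is the same as being irreducible mod $p$); a minimal sketch:

```python
def roots_mod(coeffs, p):
    """Brute-force roots in Z/pZ of a polynomial given low-to-high coefficients."""
    return [r for r in range(p) if sum(c * r ** i for i, c in enumerate(coeffs)) % p == 0]

# x^2 - x - 1 mod 2 has no roots, so it is irreducible: 2 is inert in Q(sqrt 5) (f = 2)
print(roots_mod([-1, -1, 1], 2))  # []
# x^2 + 1 mod 5 has two roots, so it splits: 5 splits in Q(i) (g = 2)
print(roots_mod([1, 0, 1], 5))    # [2, 3]
```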
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Integral with log of absolute value of sine Show that
$$\int_{-\pi/3}^{\pi/3} \log \vert 8 \sin(t/2) (1 + \sin t)^2 \vert dt = 0.$$
WolframAlpha claims that this is true. I've tried manipulating the integrand a bunch and various trig identities, but it hasn't simplified things. It looks vaguely like Jensen's formula, but I'm not sure how to use this.
It also sort of looks like one of those integrals that can be evaluated using complex analysis and contour shifting, but I don't see how.
| Note that (using $t \to -t$ on the negative interval):
$\int_{-\pi/3}^{\pi/3} \log \vert 8 \sin(t/2) (1 + \sin t)^2 \vert dt =2\pi \log 2+2\int_{0}^{\pi/3}\log {\sin(t/2)}dt+ 4\int_{0}^{\pi/3}\log {\cos t}dt$
Using now $t/2 \to t$ we get
$\int_{0}^{\pi/3}\log {\sin(t/2)}dt=2\int_{0}^{\pi/6}\log {\sin t}dt=2\int_{\pi/3}^{\pi/2}\log {\cos t}dt$,
so putting all together the original integral becomes:
$2\pi \log 2+4\int_{0}^{\pi/2}\log {\cos t}dt=4\int_{0}^{\pi/2}\log {(2\cos t)}dt=0$ by the well known result (and easy to prove by simple manipulations once one argues that the integral is indeed convergent) that
$\int_{0}^{\pi/2}\log {(2\sin t)}dt=\int_{0}^{\pi/2}\log {(2\cos t)}dt=0$
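A crude midpoint-rule sketch also confirms the value numerically (the log singularity at $t=0$ is integrable, so this rough quadrature suffices for a sanity check):

```python
import math

def f(t):
    return math.log(abs(8 * math.sin(t / 2) * (1 + math.sin(t)) ** 2))

a, b, N = -math.pi / 3, math.pi / 3, 200000  # even N keeps midpoints away from t = 0
h = (b - a) / N
I = h * sum(f(a + (i + 0.5) * h) for i in range(N))
print(I)  # close to 0
assert abs(I) < 1e-2
```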
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reasoning behind integrating f(x)/g(x)? I understand the method to integrate this function would be:
$\int{\frac{x^2+1}{x^4-x^2+1} \thinspace dx}$
Divide all terms by $x^2$:
$= \int{\frac{\frac{x^2}{x^2}+\frac{1}{x^2}}{\frac{x^4}{x^2}-\frac{x^2}{x^2}+\frac{1}{x^2}} \thinspace dx}$
=$ \int{\frac{1+\frac{1}{x^2}}{x^2-1+\frac{1}{x^2}} \thinspace dx}$
Factor the denominator:
=$ \int{\frac{1+\frac{1}{x^2}}{(x-\frac{1}{x})^2 + 1} \thinspace dx}$
Use $u$-substitution:
$ u = x -\frac{1}{x}, du = 1 +\frac{1}{x^2} \thinspace dx$
$ \int{\frac{du}{u^2+1}} $
$= \tan^{-1}{(x-\frac{1}{x})}+C$
I can verify that this is the correct method using the chain rule:
$u = x-\frac{1}{x}, \frac{du}{dx} = 1 +\frac{1}{x^2} $
$y = \tan^{-1}{u}, \frac{dy}{du} = \frac{1}{u^2+1} $
$ \frac{dy}{dx} = \frac{1}{u^2+1} \times (1 +\frac{1}{x^2})$
$= \frac{1 +\frac{1}{x^2}}{u^2+1} $
$ = \frac{1 +\frac{1}{x^2}}{u^2+1} $
$ = \frac{\frac{x^2 +1}{x^2}}{(x-\frac{1}{x})^2+1} $
$ = \frac{\frac{x^2 +1}{x^2}}{x^2-2+\frac{1}{x^2}+1} $
$ = \frac{\frac{x^2 +1}{x^2}}{\frac{x^4-x^2+1}{x^2}} $
$ = \frac{x^2 +1}{x^4-x^2+1} $
Only after differentiating the result does it become clear where the $x^2$ comes from.
As some terms are fractions, distributing the denominator and dividing all terms by the same common denominator simplifies the function.
$$ \frac{\frac{x^2 +1}{x^2}}{\frac{x^4-x^2+1}{x^2}} = \frac{x^2 +1}{x^4-x^2+1} $$
So it makes sense then that this is the term we will multiply with the original integral in order to make it more useful to work with.
Firstly, is there a name for this technique, that is, multiplying all terms by some function of $x$ in order to make it easier to factor into something integrable?
Is there a telltale way to recognise when to use this method, and what the divisor should be (obviously without knowing the answer and working backwards)?
Are there any other ways to solve this integral?
Sometimes I don't feel this method is intuitive enough to remember in exam conditions.
Thanks
| Here's another way to do it. I would go for this way if I don't immediately see a trick.
It's a big theorem that all rational functions have elementary antiderivatives. The general way to integrate a rational function is to factor it into quadratics and linears (this is always possible by FTA), and use partial fractions decomposition.
For our specific example, we have to factor $x^4-x^2+1$. The following is a somewhat common trick:
$$x^4-x^2+1 = x^4+2x^2+1 - 3x^2=(x^2+1)^2-(\sqrt3x)^2 = (x^2-\sqrt3x+1)(x^2+\sqrt3x+1)$$
For the partial fraction decomposition, it seems reasonable that the numerators should be constants; fortunately, we can even guess them
$$\dfrac {x^2+1}{(x^2-\sqrt3x+1)(x^2+\sqrt3x+1)}= \dfrac {1/2}{x^2-\sqrt3x+1}+ \dfrac {1/2}{x^2+\sqrt3x+1}$$
And now the way you integrate $\frac {1}{ \text{quadratic}}$ is you complete the square on the bottom.
Completing the Square: From $x^2+bx+c$ we can complete the square to $\left(x+\dfrac {b}{2}\right)^2+ \left( c-\dfrac {b^2}{4} \right)$. This means that
$$\dfrac {1}{x^2+\sqrt3x+1} = \dfrac {1}{(x+\frac {\sqrt 3}{2})^2+\frac 14}$$
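The factorization and the partial-fraction decomposition above can be sanity-checked numerically at arbitrary sample points; a minimal sketch:

```python
import math
import random

random.seed(0)
s3 = math.sqrt(3)
for _ in range(5):
    x = random.uniform(-5, 5)
    lhs = (x * x + 1) / (x**4 - x**2 + 1)
    rhs = 0.5 / (x * x - s3 * x + 1) + 0.5 / (x * x + s3 * x + 1)
    assert abs(lhs - rhs) < 1e-12
print("decomposition verified")
```

(Neither quadratic factor has real roots, so no sample point can hit a pole.)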
Let me know if you need more help.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve this inequation? So the inequation is this $x^{2016}-1<0 $
My initial idea was to transform it into $x^{2016}<x^0$ and then to look at four cases:
$1.$ when $x \lt 0\lt 1$
$2.$ when $x \gt 1$
$ 3.$ when $-1\lt x\lt 0$
$4.$ when $x \lt-1$
Is this the proper way to do it?
| As others have said, a far simpler method would be to just do the following:
For $x \neq 0$ (note that $x=0$ clearly satisfies the inequality, and $\ln|x|$ is undefined there): $x^{2016}-1<0 \Rightarrow x^{2016}<1 \Rightarrow \ln|x^{2016}|<\ln 1 = 0\Rightarrow 2016\ln|x|<0\Rightarrow \ln|x|<0\Rightarrow |x|<1$
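A brute-force sketch over sample points $x = k/100$ (exact integer arithmetic, to avoid float overflow for $|x|>1$) confirms the solution set $|x|<1$:

```python
# x**2016 < 1  <=>  k**2016 < 100**2016 for x = k/100
bound = 100 ** 2016
sols = [k for k in range(-300, 301) if k ** 2016 < bound]
print(min(sols) / 100, max(sols) / 100)  # -0.99 0.99
```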
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3237873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
The dual space of $C(Y,\mathbb R)$ when $Y$ is a complete and separable metric space Just to be confirmed what is the dual space of $C(Y,\mathbb R)$ i.e the vector space of all continuous functions $f: Y\to \mathbb R$ when $Y$ is complete and separable metric space? Is it the same when $Y$ is compact, which is the space of all signed/complex measures on $Y$? Thanks for any comment.
Even, I would be interested to know the dual space, when continuous functions are bounded or maybe even Lipschitz.
| If you are also interested in the space $C_b(Y, \mathbb{R})$ of bounded continuous functions endowed with the topology of uniform convergence, which is normed by
\begin{equation*}
||f|| := \sup_{y \in Y} |f(y)|, \quad f \in C_b(Y, \mathbb{R}),
\end{equation*}
then the assumption that $Y$ is a complete separable metric space is not sufficient for identifying the dual $C_b(Y,\mathbb{R})'$ with the family of finite Borel measures on $Y$.
Here is a standard counterexample. Let $Y := \mathbb{N}$ and give it the discrete metric; on the other hand, let $C_0(\mathbb{N},\mathbb{R})$ be the subspace of $C_b(\mathbb{N},\mathbb{R})$ consisting of convergent sequences. Define a linear functional $L$ on $C_0(\mathbb{N}, \mathbb{R})$ as follows:
\begin{equation*}
L(f) := \lim_{n \to \infty} f(n), \quad f \in C_0(\mathbb{N},\mathbb{R}).
\end{equation*}
It is easy to see that $L$ is bounded and hence continuous. By virtue of the Hahn-Banach theorem, $L$ can be continuously extended over $C_b(\mathbb{N},\mathbb{R})$. However, $L$ cannot be identified with any countably additive measure on $\mathbb{N}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Paul J Nahin. Story of minus one, I'm stumped on page 5. every time I try to read this book and follow the logic I fail on just page 5 where it states near the bottom
$${1 \over x } + 14 x + \sqrt{{1\over x^2} + 196 x^2 } = 12,$$
which is easily put into the form given above,
$$ 172 x = 336 x^2 +24.$$
(From An Imaginary Tale, The Story of $\sqrt{-1}$ )
But try as I may I cannot manipulate the square root equation into the quadratic stated, easy indeed!
A lifetime working an an electrical engineer using j notation has been of no help to me.
Please put me right if you are able.
Thank you.
| Let $\frac{1}{x}+14x=y$; then we can rewrite the equation as $y+\sqrt{y^2-28}=12$, i.e. $\sqrt{y^2-28}=12-y$. Squaring both sides we get $y^2-28=144-24y+y^2$, or $24y=172$, or $$24\left(\frac{1}{x}+14x\right)=172.$$ Can you finish?
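A numerical sketch confirms the algebra: the quadratic's roots turn out to be complex (fitting the book's theme), yet each satisfies the original equation when the principal complex square root is used:

```python
import cmath

# roots of 336 x^2 - 172 x + 24 = 0; the discriminant is negative, hence complex x
a, b, c = 336, -172, 24
disc = cmath.sqrt(b * b - 4 * a * c)
for x in ((-b + disc) / (2 * a), (-b - disc) / (2 * a)):
    y = 1 / x + 14 * x
    assert abs(y - 43 / 6) < 1e-9                        # 24y = 172, i.e. y = 43/6
    assert abs(y + cmath.sqrt(y * y - 28) - 12) < 1e-9   # the original equation holds
print("both roots check out")
```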
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Wikipedia's proof of Jensen's inequality I think there is a glitch in the proof by induction. The proof is still valid, but they add an unnecessary assumption:
In the induction step, they choose one of the $\lambda_i$'s that is strictly positive (I guess by that, they mean nonzero). Since the sum of the $\lambda_i$'s is 1, there must be at least one that is nonzero, that part is valid. And the argument that follows is also perfectly valid.
However, why do we need to pick a nonzero $\lambda_i$? Wouldn't the argument work regardless? If $\lambda_1 = 0$, the inequality still holds. In other words, the inequality holds regardless of the value of $\lambda_1$:
$$ \varphi\left( \lambda_1 x_1 + (1 - \lambda_1) \sum_{i=2}^{n+1}\frac{\lambda_i}{1-\lambda_1} x_i \right) ~\leqslant~ \lambda_1 \varphi(x_1) + (1-\lambda_1)\varphi\left( \sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} x_i \right)$$
because $\varphi$ is convex, period. No requirement on the coefficient being nonzero: according to Wikipedia's definition of a convex function, it is $\forall x_1, x_2 \in X$, $\forall \lambda \in [0,1] ~ \cdots$
Since the $\lambda_i$'s are all nonnegative and their sum is 1, then every $\lambda_i \in [0,1]$, so the definition of convexity applies.
Am I missing something?
| Of course, $\lambda_1=0$ is valid, but not so interesting:
$$ \varphi\left(\sum_{i=2}^{n+1}\lambda_i x_i \right) ~\leqslant~ \varphi\left( \sum_{i=2}^{n+1} \lambda_i x_i \right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
The limit of the interval endpoints depending on $n$ For $n \in \mathbb{N_0}$ consider the sequence of intervals of the the following from:
\begin{align}
A_k &:= [ k, k + 1 ), \quad k \in \mathbb{N_0} \\
\frac{A_k}{2} &:= \left[ \frac{k}{2}, \frac{k + 1}{2} \right), \quad k \in \mathbb{N_0} \\
&\ \ \vdots \\
\frac{A_k}{2^n} &:= \left[ \frac{k}{2^n}, \frac{k + 1}{2^n} \right), \quad k \in \mathbb{N_0}
\end{align}
Clearly, for every $n \in \mathbb{N_0}$, the intervals $\frac{A_k}{2^n}$, $k \in \mathbb{N_0}$ are pairwise disjoint and their union covers the interval $[ 0, \infty )$. Fix some $t \in [ 0, \infty )$. For every $n \in \mathbb{N_0}$ there exists a unique $k_n \in \mathbb{N_0}$ such that $t \in \frac{ A_{k_n} }{ 2^n }$, i.e.
$$
\frac{k_n}{2^n} \leq t < \frac{k_n + 1}{2^n}.
$$
I would like to show that, for example, $\frac{k_n + 1}{2^n} \downarrow t$ as $n \rightarrow \infty$. This seems intuitively clear, since with growing $n$ the endpoints of the interval only get closer. But how can one show this in a rigorous way? It seems that showing that $\frac{ k_n + 1 }{ 2^n }$ is a decreasing sequence with its infimum being equal to $t$ should suffice.
| For the decreasing part, it's enough to show that $\frac{k_n+1}{2^n}$ is also the endpoint of one of the intervals for any $m> n$. Because then it's obvious that $\frac{k_m+1}{2^m}$ is either that or something smaller. And in fact, if $m>n$, you have:
$$\frac{k_n+1}{2^n}=\frac{2^{m-n}(k_n+1)}{2^m}=\frac{\left(2^{m-n}k_n+2^{m-n}-1\right)+1}{2^m}=\frac{k'+1}{2^m}$$
where $k'=2^{m-n}k_n+2^{m-n}-1$.
The convergence part follows from the fact that the length of the intervals goes to $0$ and $\frac{k_n+1}{2^n}-t$ is always smaller than that length.
Hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A polynomial algorithm to determine whether a finite group is nilpotent Does there exist a polynomial (in respect to the order of the group) algorithm that given a Cayley table of a finite group determines, whether a group is nilpotent or not?
There do exist polynomial algorithms, that determine, whether a group is nilpotent of degree k or not. The simplest of them is simply checking the necessary commutator identity for all $(k + 1)$-ples of elements of the group (this algorithm works for $O(n^{k + 1})$, where $n$ is the order of the group). However, none of them can be used to determine in polynomial time, whether a group is nilpotent of arbitrary degree.
| A simple (but inefficient) way to test nilpotency in polynomial time would be to compute its lower central series and see whether the final group is trivial or not.
Each commutator subgroup computation takes $O(n^2)$ time, and since the orders of the groups in this central series do not increase, they must stabilise in at most $O(\log n)$ steps (by Lagrange's theorem, each strict decrease at least halves the order, so the worst case is when the orders halve at each step). Thus, this algorithm takes $O(n^2\log n)$ time.
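Here is a rough Python sketch of this approach (all names and the two example groups are my own choices for illustration): it finds the identity and inverses from the Cayley table, then iterates $\gamma_{k+1}=[G,\gamma_k]$ via a closure computation until the series stabilises:

```python
from itertools import permutations

def cayley_from_perms(perms):
    """Cayley table table[i][j] = index of perms[i] composed with perms[j]."""
    index = {p: i for i, p in enumerate(perms)}
    def compose(p, q):
        return tuple(p[q[i]] for i in range(len(p)))
    return [[index[compose(p, q)] for q in perms] for p in perms]

def is_nilpotent(table):
    """Lower-central-series nilpotency test for a finite group given by its Cayley table."""
    n = len(table)
    e = next(i for i in range(n) if all(table[i][j] == j for j in range(n)))  # identity
    inv = [next(j for j in range(n) if table[i][j] == e) for i in range(n)]

    def generated(gens):
        # subgroup generated by gens: closure under multiplication
        sub = {e} | set(gens)
        frontier = list(sub)
        while frontier:
            a = frontier.pop()
            for b in list(sub):
                for c in (table[a][b], table[b][a]):
                    if c not in sub:
                        sub.add(c)
                        frontier.append(c)
        return sub

    gamma = set(range(n))  # gamma_1 = G
    while True:
        # gamma_{k+1} = subgroup generated by [g, h] = g^-1 h^-1 g h, g in G, h in gamma_k
        comms = {table[table[inv[g]][inv[h]]][table[g][h]] for g in range(n) for h in gamma}
        new = generated(comms)
        if new == {e}:
            return True   # the series reached the trivial group
        if new == gamma:
            return False  # the series stabilised above the trivial group
        gamma = new

S3 = cayley_from_perms(list(permutations(range(3))))      # symmetric group S_3
Z6 = [[(i + j) % 6 for j in range(6)] for i in range(6)]  # cyclic group Z/6 (abelian)
print(is_nilpotent(S3), is_nilpotent(Z6))  # False True
```

For $S_3$ the series stalls at $A_3$, so the group is correctly reported as non-nilpotent, while the abelian group $\mathbb{Z}/6$ terminates immediately.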
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |