Problem in understanding the partial differentiation I was reading a book on neural networks and got stuck on the equation given below. \begin{eqnarray} z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j +b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) +b^{l+1}_k. \tag{43}\end{eqnarray} Here $w$ and $b$ are constants (the weights and biases). The equation was then differentiated, and below is the result. Where did the summation go? \begin{eqnarray} \frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44}\end{eqnarray} If you want to see the complete problem then visit http://neuralnetworksanddeeplearning.com/chap2.html, eqns. 43 and 44.
The $z_k$'s are defined as a sum over the $z_i$'s: $$ z^{l+1}_k = \sum_i w^{l+1}_{ki} a^l_i +b^{l+1}_k = \sum_i w^{l+1}_{ki} \sigma(z^l_i) +b^{l+1}_k. $$ (The summation index has been renamed to $i$ here, to keep it distinct from the fixed index $j$.) You are differentiating with respect to only one of the $z_i$'s, namely $z^l_j$. Every term of the sum with $i \neq j$ does not depend on $z^l_j$, so its derivative is zero and only the $i = j$ term survives: $$ \begin{align} \frac{\partial z^{l+1}_k}{\partial z^l_j} &= \frac{\partial }{\partial z^l_j} \left[\sum_i w^{l+1}_{ki} \sigma(z^l_i) +b^{l+1}_k\right]\\ &= \sum_i \frac{\partial }{\partial z^l_j}\left[w^{l+1}_{ki} \sigma(z^l_i)\right]\\ &= w^{l+1}_{kj} \sigma'(z^l_j). \tag{44}\end{align} $$
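A quick numeric sanity check of equation $(44)$ (a minimal sketch, not from the book: the sigmoid activation, the layer sizes, and all names below are my own choices):

```python
import numpy as np

# Hypothetical small layer: 4 neurons feeding 3 neurons.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))    # w[k, j] = weight from neuron j to neuron k
b = rng.normal(size=3)
z_l = rng.normal(size=4)       # weighted inputs z^l_j of the previous layer

sigma = lambda z: 1.0 / (1.0 + np.exp(-z))            # sigmoid activation
sigma_prime = lambda z: sigma(z) * (1.0 - sigma(z))   # its derivative

def z_next(z):
    return w @ sigma(z) + b    # equation (43)

k, j, h = 1, 2, 1e-6
z_plus, z_minus = z_l.copy(), z_l.copy()
z_plus[j] += h
z_minus[j] -= h
numeric = (z_next(z_plus)[k] - z_next(z_minus)[k]) / (2 * h)
analytic = w[k, j] * sigma_prime(z_l[j])              # equation (44)
print(numeric, analytic)       # the two values agree to ~1e-9
```

The central difference perturbs only the single input $z^l_j$, which is exactly why only one term of the sum survives.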
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Limit of uniformly continuous functions with convergent improper integrals. I am struggling with the proof of the following theorem: Let $f(x)$ be uniformly continuous in $[a, \infty)$ s.t. the integral $\int_a^{\infty} f(x)dx$ converges. prove that $\lim_{x \to \infty} f(x) = 0$. I came to the conclusion that it is enough to prove that $\lim_{x \to \infty} f(x)$ exists, and from there I have a proof that the limit is $0$. I tried to use Cauchy's equivalent definition for the convergence of the improper integral + his definition for a regular limit at $x \to \infty$ with no success... I will be happy to get hints as for how I should proceed, not proofs, since I really want to crack this one by myself, only I spent a pretty long time with no success. Thank you :)
Hint. If $\lim_{x\to\infty} f(x)$ fails to exist but the integral $\int_{a}^{\infty} f(x) \, dx$ converges, then we often observe a 'train of narrowing peaks' [figure: a train of narrowing peaks]. This means that you begin to see an abrupt change in $f(x)$ for large $x$. How the uniform continuity enters this picture is that it prevents peaks from being both high and narrow. So it becomes harder to observe high peaks as $x\to\infty$. Sketch of Proof. To convert this argument to a solid proof, assume otherwise that $|f(x)| \nrightarrow 0 $. Then you can find $\epsilon > 0$ and $x_n \to \infty$ such that $|f(x_n)| > \epsilon$. Now you can utilize uniform continuity to argue that there exist $\eta, \delta > 0$ for which $$ \left| \int_{x_n-\eta}^{x_n+\eta} f(t) \, dt \right| \geq \delta $$ holds for all $n$. (I will leave this part to OP.) This shows that $\int_{a}^{x} f(t) \, dt$ is not Cauchy as $x \to \infty$, so it cannot converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Coercive continuous function on a closed subset has a global minimum proof I would like to ask for help with the proof of the following proposition: Let $f$ be a real continuous function, defined on a closed set $X \subset \mathbb{R}^n$, which is coercive, i.e. for every sequence $\{x_n\}_{n=1}^\infty$ with $||x_n|| \to \infty$ we have $f(x_n) \to +\infty$. Then $f$ achieves a global minimum on $X$. My idea for a proof: If $X$ is bounded we are done by the Weierstrass extreme value theorem. But we only have closedness. So let $f^* = \inf\limits_{x\in X}f$, which we hope is achieved at some $x^*$, though we don't yet know whether such an $x^* \in X$ exists. Let $Y$ be the closed ball of radius $||x^*||+1$. It surely contains $x^*$ and is bounded and so is $$ Z = X \cap Y \subset Y $$ Now we know that $Z$ is compact and $f$ achieves its minimum there. But are we sure (and if yes, why) that this minimum is actually $f^*$ and thus $x^* \in X$?
Choose any point in $X$ and call it $x_0$. Since $f$ is coercive, there exists $k>0$ such that $||x||\ge k\implies f(x)\ge1+f(x_0)$. Note: this is a simple way to guarantee that $f(x)>f(x_0)$. It is not a restriction, since coercivity allows us to find such a $k$ for any bound $A$, in particular $A=f(x_0)+1$. Now $K=X\cap \overline{B(0,k)}$ is compact (since $X$ is closed and the closed ball is compact), so $f$ reaches a minimum at some $x^*\in K\subset X$. Also $x_0\in K$, for otherwise $||x_0||>k\implies f(x_0)\ge 1+f(x_0)$, which is a contradiction. In particular $f(x^*)\le f(x_0)$. Yet $\forall x\in X\setminus K$ we have $||x||>k$, so $f(x)\ge 1+f(x_0)>f(x_0)\ge f(x^*)$. So $x^*$ is a global minimum for $f$ on all of $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Rolling $2$ dice: NOT using $36$ as the base? I apologize for such a simple question. It has been a while since I took math classes. When you roll $2$ dice, there are $36$ possibilities. However, there are only $21$ combinations, if order does not matter. Rolling a $(4,2)$ = rolling a $(2,4)$. Let's say in a game, rolling a $(1,1)$ makes you lose. The probability of rolling this is $1/36$. But why can't you say the probability is $1/21$, assuming you roll both dice at the same time? There's only one combination that makes you lose, so why can't you use $21$ as the denominator? I have tried searching on this topic, but have not found a good answer. (Most likely because my thinking is fallacious.)
Because to get $(1,1)$, both dice must show a $1$. To get a $1$ and a $2$, it could be either $(1,2)$ or $(2,1)$. Here's another way to look at it ... The probability of getting $(1,1)$ is $$\frac{1}{6}\times\frac{1}{6} = \frac{1}{36}$$ Explanation: The dice are independent and each die has probability $\large{\frac{1}{6}}$ of showing a $1$. To get a $1$ and a $2$, the first die must show either $1$ or $2$, and the second die must show whichever of $1,2$ did not show on the first die. Hence the probability of getting a $1$ and a $2$ is $$\frac{2}{6}\times\frac{1}{6} = \frac{2}{36}$$
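A quick simulation makes the asymmetry concrete (a minimal sketch; the trial count is an arbitrary choice):

```python
import random

trials = 1_000_000
snake_eyes = one_and_two = 0
for _ in range(trials):
    a, b = random.randint(1, 6), random.randint(1, 6)
    if (a, b) == (1, 1):
        snake_eyes += 1      # only one ordered outcome gives (1,1)
    if {a, b} == {1, 2}:
        one_and_two += 1     # two ordered outcomes: (1,2) and (2,1)

print(snake_eyes / trials)   # ~0.0278, i.e. 1/36
print(one_and_two / trials)  # ~0.0556, i.e. 2/36
```

The $21$ unordered combinations are not equally likely, which is why $1/21$ is the wrong denominator.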
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 0 }
Determining a value $k$ that gives a size = 0.05 test. A single observation $X$ from a normal distribution with mean $\mu$ and $\sigma^2$=1 is used to test $$H_0 : \mu = 1 \ \ \ \text{vs} \ \ \ H_1 : \mu \lt 1 $$ using the critical region $C = {{x : x \lt k}}$ Determine the value of k that gives a size 0.05 test. My attempt: The size of the test = significance level = $\alpha$ = 0.05 From my notes, I am told $\alpha = \pi(H_0)$, where $\pi$ signifies the power function. The only thing I could think of that might link these together is the z-score formula. $$ Z_{0.05} = 1.65 $$ $$ 1.65 = \frac{\bar x - 1}{1} $$ However this doesn't make sense to me, as when I solve for $\bar x$ I get 2.65, which is greater than $\mu$, and this region should be less than $\mu$ Am I supposed to derive the power function for $\mu$ myself?
You have most of the pieces. Let's put them together: You want to reject $H_0: \mu = 1$ against $H_a: \mu < 1$ at the 5% level when $X < k.$ Thus, under $H_0$ you want $P(X < k) = 0.05.$ Putting this into a form so that you can use printed standard normal tables, we have $$P\left(\frac{X - \mu_0}{\sigma} = X - 1 < k - 1\right) = P(Z < k-1) = .05,$$ where $\mu_0 = 1,\, \sigma = 1,$ and $Z \sim \mathsf{Norm}(0,1),$ that is, standard normal. From the table you have $P(Z < - 1.645) = 0.05.$ So $k - 1 = -1.645,$ and finally $k = 1 - 1.645 = -0.645.$ In this simple case, rejecting $H_0$ when $X < -0.645$ is the same as rejecting when $Z < -1.645.$ (The latter inequality is the 'usual' way to state the rejection criterion.)
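If you want to double-check the critical value without a printed table, here is a sketch using SciPy (assuming it is available):

```python
from scipy.stats import norm

z = norm.ppf(0.05)       # lower 5% quantile of the standard normal
k = 1 + z                # shift by the null mean (sigma = 1)
print(z, k)              # -1.6449..., -0.6449...
print(norm.cdf(k - 1))   # size of the test: 0.05
```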
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to show that $\mathbb{Z}\left[\frac{1 + \sqrt{5}}{2}\right]$ is finitely generated? We can say that $\mathbb{Z}\left[\cfrac{1 + \sqrt{5}}{2}\right]$ is finitely generated if the minimal polynomial of $\cfrac{1 + \sqrt{5}}{2}$ is in $\mathbb{Z}[X]$. After some calculations it can be shown that $f(X) = X^2 - X - 1 \in \mathbb{Z}[X]$ is the minimal polynomial of $\cfrac{1 + \sqrt{5}}{2}$. I think that $\mathbb{Z}\left[\cfrac{1 + \sqrt{5}}{2}\right]$ is generated by $\left \{1, \cfrac{1 + \sqrt{5}}{2}\ \right \}$ so that any element of it can be written in the form $a + b \cfrac{1 + \sqrt 5}{2}$ or simply $\cfrac{c}{2} + \cfrac{d \sqrt{5}}{2}$ with $c, d \in \mathbb{Z}$. By this claim, if I take some $f\in \mathbb{Z}\left[\cfrac{1 + \sqrt{5}}{2}\right]$ then I should be able to write it in the form above. Now let $f = a_n \left(\cfrac{1 + \sqrt{5}}{2} \right)^n + a_{n - 1} \left(\cfrac{1 + \sqrt{5}}{2} \right)^{n - 1} + \dots + a_1 \left(\cfrac{1 + \sqrt{5}}{2} \right) + a_0$, each $a_i \in \mathbb{Z}$. The $k^{th}$ term in the partial sum above is equal to $a_k\cdot\cfrac{n!\cdot 5^{\frac{k}{2}}}{k!(n-k)!2^k}$. I cannot see how these terms cancel each other out so that in the end we have something like $\cfrac{c}{2}+\cfrac{d\sqrt{5}}{2}$. How do we show this?
You've found the minimal polynomial; this gives a very simple way to rewrite the ring: $$ \mathbb{Z}\left[ \frac{1+\sqrt{5}}{2} \right] \cong \mathbb{Z}[X] / (X^2 - X - 1) $$ so showing the left hand side is finitely generated as an abelian group is the same thing as showing the right hand side is finitely generated as an abelian group. For the purposes of working with elements, the congruence relation generated by the ideal $(X^2 - X - 1)$ can be summarized as $$ X^2 \equiv X + 1 $$ which makes it easy to see that the additive group of $\mathbb{Z}[X] / (X^2 - X - 1) $ is a free abelian group with basis given by (the classes of) $1$ and $X$.
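A quick illustration of how the reduction works in practice: $$X^3 \equiv X\cdot X^2 \equiv X(X+1) = X^2+X \equiv 2X+1, \qquad X^4 \equiv X(2X+1) = 2X^2+X \equiv 3X+2,$$ and in general $X^m \equiv F_m X + F_{m-1}$, where $F_m$ are the Fibonacci numbers (with $F_1=F_2=1$). So every element of the ring reduces to the form $c + dX$ with $c,d \in \mathbb{Z}$, which is exactly the finite generation asked about.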
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Find all of the elements of a subgroup generated by a $3$-cycle notation Question: Let $H$ be the subgroup of $A_n$ generated by $(123)$. Write down all elements of $H$. Here is my attempt: $H = \langle (123) \rangle$. In general $\langle a \rangle$, where $a$ has order $n$, is the set $\{e,a,a^{2},....,a^{n-1}\}$. Since $(123)$ has order $3$, $\langle (123) \rangle$=$\{(e),(123),(23)(31)\}$. I computed $(123)^{2}$ and $(123)^{3}$, since I needed the order $3$ to come up with my answer for $\langle (123) \rangle$. Did I do this right?
Yes, your answer is correct. $H = \langle (123) \rangle = \{e, (123), (132) \}$.
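A tiny script confirms this by composing the cycle with itself (a sketch; permutations on $\{1,2,3\}$ are stored as dictionaries, a representation chosen just for this illustration):

```python
# the 3-cycle (123): 1 -> 2, 2 -> 3, 3 -> 1
c = {1: 2, 2: 3, 3: 1}

def compose(p, q):
    """Apply q first, then p."""
    return {x: p[q[x]] for x in q}

power = {1: 1, 2: 2, 3: 3}   # identity
for k in range(1, 4):
    power = compose(c, power)
    print(k, power)
# k=1: (123);  k=2: {1: 3, 2: 1, 3: 2}, i.e. (132);  k=3: identity
```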
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Collinear intersection points of diagonals in a regular heptagon In a regular heptagon, all diagonals are drawn. Then points $A$, $B$, and $C$ described below are collinear. (If the vertices of the heptagon are $P_1, \dots, P_7$ in clockwise order, then $A = P_1$, $B$ is the intersection of $P_2 P_5$ and $P_3 P_7$, and $C$ is the intersection of $P_2 P_4$ and $P_3 P_5$.) Offhand, I see two approaches to trying to prove this (I verified it by direct computation in Mathematica): * *Use the area method (Machine Proofs in Geometry style) to express the area $S_{ABC}$ in terms of areas of triangles formed by vertices of the heptagon, in hopes of showing that it's $0$. *Use Trig Ceva to show that $\angle P_4 AB = \angle P_4 AC$: if we draw line $AB$, then $B$ is the intersection of three chords in the circumcircle of the heptagon, forming six arcs - four of which we already know. Same for $C$. However, both of these methods seem extremely computationally intensive. (Though I'd be happy to be proven wrong there, if someone sees a straightforward way to do either computation.) Is there a simple proof why $A$, $B$, and $C$ are collinear, or at least a good reason why we should expect this to be true?
In your notations, triangles $P_2P_3C$ and $P_7BP_2$ are similar because $$\angle \, CP_2P_3 = \angle \, P_4P_2P_3 = \alpha = \angle \, P_2P_7P_3 = \angle \, P_2P_7B \,\,\,\text{ and }$$ $$\angle \, P_2CP_3 = 2\, \alpha = \angle \, P_7P_2P_5 = \angle \, P_7P_2B$$ Therefore, $$\frac{CP_3}{P_2B} = \frac{P_3P_2}{BP_7}$$ However, $P_2B = P_3B $ because triangle $BP_2P_3$ is isosceles due to $$\angle \, BP_2P_3 = \angle \, P_5P_2P_3 = 2\alpha = \angle \, P_7P_3P_2 = \angle \, BP_3P_2$$ and $P_3P_2 = P_1P_7$ as edges of the regular heptagon. Thus, the latter ratio turns into $$\frac{CP_3}{P_3B} = \frac{CP_3}{P_2B} = \frac{P_3P_2}{BP_7} = \frac{P_1P_7}{BP_7}$$ or if you prefer $$\frac{CP_3}{P_1P_7} = \frac{P_3B}{BP_7}$$ Finally, observe that $P_1P_3 = P_7P_5$ so the cyclic quad $P_1P_3P_5P_7$ is an isosceles trapezoid which means that $P_1P_7$ is parallel to $P_3P_5$. Since $C$ lies on $P_3P_5$ one concludes that $CP_3$ is parallel to $P_1P_7$. Since $$\frac{CP_3}{P_1P_7} = \frac{P_3B}{BP_7}$$ and $CP_3$ is parallel to $P_1P_7$, by the Intercept Theorem, point $B$ lies on the segment $P_1C \equiv AC$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Does the series $ \sum_{n=1}^{\infty}(-1)^{n}n^{2}e^{\frac{-n^{3}}{3}}$ converge or diverge? Does the series $$\sum_{n=1}^{\infty}(-1)^{n}n^{2}e^{\frac{-n^{3}}{3}}$$ converge or diverge? I have attempted using the alternating series test, but couldn't find a useful function to use as $b_n$ and no other tests I know seem to be useful in coming to the conclusion of whether it is convergent or divergent.
The series converges absolutely. This is because we have $$0 \leq \exp\left(-\frac{n^3}{3}\right) \leq \frac{1}{1 + \frac{n^3}{3} + \frac{n^6}{18}}$$ by considering the Taylor expansion of $e^x$ (all of its terms are positive for $x \geq 0$, so $e^{n^3/3} \geq 1 + \frac{n^3}{3} + \frac{n^6}{18}$), and hence $$0 \leq n^2\exp\left(-\frac{n^3}{3}\right) \leq \frac{n^2}{1 + \frac{n^3}{3} + \frac{n^6}{18}} \leq 18n^{-4}$$ and $$\sum 18n^{-4}$$ converges by the integral test. More intuitively, just keep in mind that exponentials of the form $a^{-x}$ for $a>1$ decay faster than any rational function. If you want to use the alternating series test to merely establish conditional convergence, you can use L'Hôpital's rule to show $\lim_{x \to \infty} x^2e^{-\frac{x^3}{3}} = 0$
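The terms die off extremely fast, as a few partial sums show (a quick numeric sketch; the cutoff of seven terms is arbitrary):

```python
import math

total = 0.0
for n in range(1, 8):
    total += (-1)**n * n**2 * math.exp(-n**3 / 3)
    print(n, total)
# the partial sums settle near -0.4397 after only three or four terms
```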
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Examples on product of weakly differentiable functions * *Are there two functions $f,g \in L^1_\text{loc}(\mathbb{R}^N)$, $N\ge 1$, such that $f,g$ are weakly differentiable (have all first-order weak partial derivatives in $L^1_\text{loc}(\mathbb{R}^N)$) but $fg$ has no weak partial derivative? *Are there two functions $f,g \in L^1_\text{loc}(\mathbb{R}^N)$, $N\ge 1$, such that $f,g$ are weakly differentiable (have all first-order weak partial derivatives in $L^1_\text{loc}(\mathbb{R}^N)$) but $fg \in L^1_\text{loc}(\mathbb{R}^N)$ and has no weak partial derivative?
A good place to look for examples is the power functions. This is because the local integrability of these functions (or lack thereof) is easily determined by comparing the power to the dimension. To address your first item, use the fact that $f(x) = |x|^{-p}$ is locally integrable if and only if $p< N$. For such $f$ we have \begin{equation*} |\nabla f(x)| = p|x|^{-(p + 1)} \end{equation*} and consequently, (choosing $g = f$), \begin{equation*} |\nabla (f(x)g(x))| = 2p|x|^{-(2p + 1)}. \end{equation*} Thus, both $\nabla f$ and $\nabla g$ are locally integrable as long as $p + 1< N$, and $fg$ fails to be locally integrable as long as $2p + 1\geq N$. Now choose any $p$ such that $(N - 1)/2\leq p< N-1$. This argument works for $N>1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2240934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the symbolic form of "there does not exist a largest natural number " Other students in office hour said this is the correct form $(\forall x)(\exists y)(y>x)$ { for all x natural number, there exists y such that y is greater than x } But "there does not exist a largest natural number " $\neg(\exists x)(x\text{ largest natural number})$ Am I even close ?
In English: "there is no natural $n$ for which all natural $k$ would satisfy $k \le n$". $$\lnot \exists n \in \mathbb{N}\ \forall k \in \mathbb{N}\ (k \le n)$$ Note the non-strict inequality: with $k < n$ the formula would be trivially true (since $k = n$ already fails $k < n$) and would no longer express "no largest natural number".
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 7, "answer_id": 3 }
Taking Limit to infinity when variable is exponent? Example. I am testing $x=-1/4$ and $x=1/4$ in $$\lim_{n \to \infty} \frac{(-1)^n 4^n x^n}{\sqrt{n}}$$ What happens in the numerator that makes it equal $1$? That is, what happens to each term so that everything ends up as $1$? Any help would be greatly appreciated! Thanks
For $x = -1/4$, you have $$\lim_{n \to \infty} \frac{(-1)^n 4^n (-1/4)^n}{\sqrt{n}}.$$ Since you have a bunch of terms all raised to the same power in the numerator, we can simplify the expression a bit by pulling these terms together inside one pair of parentheses: $$\lim_{n \to \infty} \frac{\big[(-1)(4)(-1/4)\big]^n}{\sqrt{n}} = \lim_{n \to \infty} \frac{1}{\sqrt{n}}.$$ I skipped a few steps I thought were obvious. I encourage you to try working everything out for $x = 1/4.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Proving elements of a polynomial ring are integral over another. I have a quotient polynomial ring $ R = k[X,Y,Z]/ \langle X^2 - Y^3-1, XZ-1 \rangle$ where $k$ is a field and $X,Y,Z$ are variables. Let $x, y, z $ be the images of $X,Y,Z$ respectively. Fixing $a, b \in k$ and writing $ t = x +ay +bz$, I need to show that $x, y $ are integral over $P = k[t]$. So I think $x = X + A(X^2-Y^3-1) + B(XZ-1) $ where $A, B \in k[X,Y,Z]$ with similar expressions for $y, z$. But I am not sure about anything else. I am sorry I do not have more to show for my work. Thanks.
We have $x^2=y^3+1$ and $zx=1$. We might as well write $1/x$ for $z$. Then $t=x+ay+b/x$, so $$ay=t-x-\frac bx.$$ Cubing both sides and using $a^3y^3=a^3x^2-a^3$ gives $$a^3x^2-a^3=\left(t-x-\frac bx\right)^3.$$ Multiplying by $x^3$ gives $$a^3x^5-a^3x^3=(tx-x^2-b)^3.$$ Expanding the cube, the highest power of $x$ on the right comes from $(-x^2)^3=-x^6$, so the equation can be rewritten as $$x^6+\textrm{ lower terms in }x\textrm{ with coefficients in }k[t]=0,$$ which is monic in $x$ and hence exhibits $x$ as integral over $k[t]$. (This works even when $a=0$: in that case $t=x+b/x$ directly, so $x^2-tx+b=0$, and the monic sextic above is just $(x^2-tx+b)^3=0$.) As $y$ is integral over $k[x]$ (it satisfies $y^3=x^2-1$) and $x$ is integral over $k[t]$, then $y$ is integral over $k[t]$ by transitivity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can you apply proof by contrapositive on proof by contradiction In a proof by contradiction, a false statement implies a contradiction. What happens if you take the contrapositive of this? A not-contradiction implies a not-false statement. I.e. a tautology implies a true statement. Is this right?
Both are forms of indirect proof. As you would expect, a proof by contradiction makes use of a contradiction. A proof by contrapositive does not. You can use proof by contradiction to infer that a statement $X$ is true by first assuming $X$ is false and then deriving a contradiction of the form $Y \land \neg Y$ or $Y\iff \neg Y$. You can use proof by contrapositive to infer that an implication of the form $X\implies Y$ is true by first assuming $Y$ is false and then proving that $X$ is false. No contradiction is required. Of course, you can use both methods several times in the same proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Closed form expressions for harmonic sums It is well known that there are deep connections between harmonic sums (discrete infinite sums that involve generalized harmonic numbers) and poly-logarithms. Bearing this in mind we have calculated the following sum: \begin{equation} S_a(t) := \sum\limits_{m=1 \vee a}^{\infty} H_m \cdot \frac{t^{m+1-a}}{m+1-a} \end{equation} where $t\in (-1,1)$ and $a \in {\mathbb Z}$. The result reads: \begin{eqnarray} S_a(t) = \left\{ \begin{array}{lll} \frac{1}{2} [\log(1-t)]^2 + \sum\limits_{j=1}^{a-1} \frac{1}{j \cdot t^j} \left( \sum\limits_{m=1}^j \frac{t^m}{m} + (1-t^j) \log(1-t) \right) + Li_2(t) 1_{a\ge 1} & \mbox{if $a \ge 0$} \\ \frac{1}{2} [\log(1-t)]^2 -\sum\limits_{j=1}^{|a|} \frac{1}{j } \left( \sum\limits_{m=1}^j \frac{t^m}{m} + (1-t^j) \log(1-t) \right) & \mbox{if $a < 0$} \\ \end{array} \right. \end{eqnarray} Unfortunately it took me a lot of time to derive and thoroughly check the result even though all the calculations are at elementary level. It is always helpful to use Mathematica. Indeed for particular values of $a$ Mathematica "after long thinking" comes up with solutions, but from that it is hard to find the generic result as given above. Besides, in more complicated cases Mathematica just fails. In view of the above my question is the following. Can we prove that every infinite sum whose coefficients represent a rational function in $m$ and in addition involve products of positive powers of generalized harmonic numbers is always given in closed form by means of elementary functions and poly-logarithms? If this is not the case can we give a counterexample?
Even though this is not conceived as an answer to your specific question which concerns the class of functions needed to represent your sum it also contributes to it as it exhibits a broader class than you mentioned. Also I think it is an interesting result in itself when it comes to closed forms. Compact closed expression I have found that your sum $$S_a(t) := \sum\limits_{m=1} ^{ \infty} H_m \cdot \frac{t^{m+1-a}}{m+1-a}\tag{1}$$ can be expressed in more compact closed forms. The first one I derived is the following; for the second one see the paragraph "Derivation". $$f(a,t) = -\frac{1}{12} \left(12 a \, _4F_3(1,1,1,a+1;2,2,2;1-t)-12 a t \, _4F_3(1,1,1,a+1;2,2,2;1-t)-12 a \log (1-t) \, _3F_2(1,1,a+1;2,2;1-t)+12 a t \log (1-t) \, _3F_2(1,1,a+1;2,2;1-t)+6 \psi ^{(0)}(1-a)^2+12 \gamma \psi ^{(0)}(1-a)-6 \psi ^{(1)}(1-a)+6 \gamma ^2+\pi ^2\right)+\frac{1}{2} \log ^2(1-t)$$ It consists of hypergeometric, polygamma and log functions, and some well known constants abundant in this field. The graph (omitted here) shows $f(a,t=1/2)$ as a function of $a$. Validity checks I have checked the validity of both $f(a,t)$ and $f_{1}(a,t)$ by plotting them together with a partial sum of $S_a(t)$ as a function of $a$ for $t=1/2$. All three curves agree reasonably. Unfortunately, my attempts to perform an independent validity check by comparing power series in $t$ failed. This might be due to difficulties in Mathematica and needs further study. Derivation We observe that the derivative of $S_a(t)$ with respect to $t$ is a simple function $$\frac{\partial S_a(t)}{\partial t}=\sum _{m=1}^{\infty } H_m t^{m-a}=-\frac{t^{-a} \log (1-t)}{1-t}\tag{2}$$ Hence $S_a(t)$ is given by the integral $$-\int_0^t \frac{u^{-a} \log (1-u)}{1-u} \, du\tag{3}$$ Mathematica gives for this integral the following expression $$f_{1}(a,t) =\pi (-1)^{a-1} H_{a-1} \csc (\pi a)-\frac{1}{a^2}\left(\frac{1}{t-1}\right)^a \, _3F_2\left(a,a,a;a+1,a+1;\frac{1}{1-t}\right)-\frac{1}{a}\left(\frac{1}{t-1}\right)^a \log (1-t) \, _2F_1\left(a,a;a+1;\frac{1}{1-t}\right)$$ This is equivalent to $f(a,t)$ which I have derived first in the following more complicated manner. Substituting $u\to 1-s$ in $(3)$ leads to $$-\int_{1-t}^1 \frac{(1-s)^{-a} \log (s)}{s} \, ds\tag{4}$$ Expanding $(1-s)^{-a}$ into a binomial series, interchanging the summation and integration, doing the integral, and then the sum gives $f(a,t)$. Mathematica expressions In order to avoid possible typing errors, here are the Mathematica expressions for the functions obtained f[a_, t_] := (1/ 2 Log[1 - t]^2 + -(1/ 12) (6 EulerGamma^2 + \[Pi]^2 + 12 a HypergeometricPFQ[{1, 1, 1, 1 + a}, {2, 2, 2}, 1 - t] - 12 a t HypergeometricPFQ[{1, 1, 1, 1 + a}, {2, 2, 2}, 1 - t] - 12 a HypergeometricPFQ[{1, 1, 1 + a}, {2, 2}, 1 - t] Log[ 1 - t] + 12 a t HypergeometricPFQ[{1, 1, 1 + a}, {2, 2}, 1 - t] Log[ 1 - t] + 12 EulerGamma PolyGamma[0, 1 - a] + 6 PolyGamma[0, 1 - a]^2 - 6 PolyGamma[1, 1 - a])) f1[a_, t_] :=(-1)^(a - 1) \[Pi] Csc[a \[Pi]] HarmonicNumber[a - 1] - (1/ a^2) (1/(-1 + t))^ a HypergeometricPFQ[{a, a, a}, {1 + a, 1 + a}, 1/(1 - t)] - (1/ a) (1/(-1 + t))^ a Hypergeometric2F1[a, a, 1 + a, 1/(1 - t)] Log[1 - t]
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If Y is complete then B(X,Y) is complete I am reading the proof for this theorem and it goes as follows: To see that $B(X,Y)$ is complete when $Y$ is complete, let $\left\{ T_n \right\}$ be a Cauchy sequence (of operators) in $B(X,Y)$, that is, $$\|\ T_n - T_m \|\ \to 0 \hspace{.5cm} \text{as} \hspace{.5cm} n,m \to \infty.$$ Then, for each $x \in X$, $$\|\ T_n x - T_m x \|\ = \|\ (T_n - T_m)x \|\ \leq \|\ T_n - T_m \|\ \|\ x \|\ \to 0 \hspace{.5cm} \text{as} \space\ n,m \to \infty$$ which implies that $\left\{ T_n x \right\} $ is a Cauchy sequence in $Y$. As $Y$ is complete, $\left\{ T_n x \right\}$ converges to some vector, denoted by $Tx$ in $Y$. We have just defined a transformation $T: X \to Y$ by $x \mapsto Tx = \displaystyle\lim_{n\to\infty} T_n x$. Linearity of $T$ is clear. Next, we show that $T \in B(X,Y)$ and $T_n \to T$ with respect to the operator norm. For any $\epsilon > 0$ and any $x \in X$: \begin{align*} \|\ (T_n - T)x \|\ &= \|\ (T_n - T_m)x + (T_m - T)x \|\ \\ &\leq \|\ (T_n - T_m)x \|\ + \|\ (T_m - T)x \|\ \\ &\leq \|\ T_n - T_m \|\ \|\ x \|\ + \|\ T_m x - Tx \|\ \\ &< \dfrac{\epsilon}{2} \|\ x \|\ + \dfrac{\epsilon}{2} \|\ x \|\ \hspace{1cm} \exists N \in \mathbb{N} \space\ \text{when} \space\ m,n > N \\ &= \epsilon \|\ x \|\ \\ \end{align*} Therefore, $\|\ T_n - T \|\ < \epsilon$ whenever $n \geq N$. This shows that $T_n \to T$ in operator norm, and that $\|\ T_n - T \|\ \to 0$ as $n \to \infty$, and so $T$ is continuous. My question: Why is $\|\ T_m x - Tx \|\ < \dfrac{\epsilon}{2} \|\ x \|\ $ ? That is the only part of this proof that baffles me.
First let me answer your explicit question. The fact that the sequence $T_{n}x$ converges to $Tx$ implies that for every $\delta > 0$, there exists some $N \in \mathbb{N}$ such that $\| T_{n}x - Tx \| < \delta$ for all $n > N$. Now set $\delta = \epsilon \|x\| /2$. Now for your implicit question, which seems more important. There is a subtle point in this proof. When one shows that the sequence $T_{n}$ converges to $T$, one has to show that for each $\epsilon > 0$, there exists an $n \in \mathbb{N}$, such that $\| T_{n} - T \| < \epsilon$, if $n > N$. Notice that there is no mention of any point $x \in X$. This is where the second index $m$ comes in. You have proven that I can choose an $N \in \mathbb{N}$ such that $\| T_{n} - T_{m} \| < \epsilon/2$ for all $n,m > N$. Let this $N$ be fixed. Now let $x \in X$ be arbitrary and let us compute \begin{equation} \| (T_{n} - T)x \| \leqslant \| T_{n} - T_{m} \| \| x \| + \| T_{m} x - Tx \|. \end{equation} Note that this inequality is true for any $m \in \mathbb{N}$. Now let us stop for a second. At this point we can choose and $M \in \mathbb{N}$, such that $\| T_{m}x - Tx \| < \epsilon \| x \| /2$. Suppose without loss of generality that $M > N$. Hence we obtain that \begin{equation} \| T_{n} - T_{m} \| \| x \| + \| T_{m} x - Tx \| < \epsilon \| x \| \end{equation} for any $n > N$ and any $m > M$. Putting these things together we see that \begin{equation} \|(T_{n} - T) x \| < \epsilon \| x \|. \end{equation} Notice that this holds for any $x \in X$, and no a priori choice of $M$ was needed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does $\sum_{n=1}^\infty \frac{1}{\log(e^{n}+e^{-n})}$ converge or diverge? How would I show that the following series converges or diverges? $$\sum_{n=1}^\infty \frac{1}{\log(e^{n}+e^{-n})}$$ Any help would be appreciated.
HINT: $$\log(e^n+e^{-n})=n+\log(1+e^{-2n})\le n+e^{-2n}\le 2n$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2241955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Intuitive meaning of $E(X^2)$ and $E(X+a)$? I understand conceptually that $E(X)$ is the average expected value of random variable $X$ over multiple trials over a long period of time i.e. the mean. Similarly, I understand for conditionals that $E(X|Y)$ is the average value of $X$ for all cases where $Y$ has already happened. However, what is the intuitive meaning behind expressions like $E(X^2)$, and other expectations besides $E(X)$? For instance, the meaning of $E(X+a)$, where $a$ is a constant? I can't seem to grasp the concept of those expressions.
* *$E(X+a)$: this is the mean value of a new random variable, namely $X+a$, which simply shifts ALL values of $X$ by $a$. Why is that? Just remember the formal definition of a r.v. $X$: it's a function defined on $\Omega$ that maps outcomes into some subset of $\mathbb{R}$, i.e.: $$X:\Omega \to \mathbb{R}.$$ So we just shift all those real values by $a$, and hence get a new r.v. with the mean $E(X+a)=E(X)+a.$ * *$E(X^2)$ is similarly the mean of the r.v. $X^2$. And how do we get that random variable from $X$? We just square all of its values, but the probability of each squared value is the same as the probability of the corresponding value of $X$ before squaring. It can happen, though, that $X$, for example, takes values $b$ and $-b$ with probabilities $p(b)$ and $p(-b)$; then in $X^2$ we will have $b^2$ and $b^2$ (which is the same), so the total probability of $b^2$ in $X^2$ will be $p(b)+p(-b).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2242082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A Prufer code with 2 unknowns Given a labeled tree G with the vertices $\lbrace{1,2,3,4,5,6}\rbrace$, and the Prufer code of G is $(1,x,2,y)$, where $x,y\in \lbrace{1,2,3,4,5,6}\rbrace$. Which of the following is true: * *$x,y\in \lbrace{3,4,5,6}\rbrace$ and $x\neq y$ *$x,y\in \lbrace{3,4,5,6}\rbrace$ but it might be that $x=y$ *$x,y\in \lbrace{1,2,3,4,5,6}\rbrace$ and $x\neq y$ *$x,y\in \lbrace{1,2,3,4,5,6}\rbrace$ *Impossible: G's Prufer code cannot be of length 4 I think 5 can be easily ruled out, and I think also 1, but I couldn't figure out a method for solving this. Tried to reconstruct the tree but got confused by the different possibilities...
The reason Prüfer codes are so useful is that they provide a bijection between the $n^{n-2}$ labeled trees on $n$ vertices and the $n^{n-2}$ sequences of $n-2$ elements from $\{1,2,\dots,n\}$. Every possible sequence of this form gives a unique labeled tree, and every labeled tree has a unique Prüfer code. In this case, $n=6$ and every $4$-tuple $(a,b,c,d)$ with $a,b,c,d \in \{1,2,3,4,5,6\}$ corresponds to a labeled tree; in particular, any $x,y \in \{1,2,3,4,5,6\}$ give valid Prüfer codes $(1,x,2,y)$.
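If you want to see the trees concretely, here is a sketch of the standard Prüfer decoding algorithm (pure Python; the function name and the two example sequences are my own choices):

```python
import heapq

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence of length n-2 into the edge list of a tree on 1..n."""
    degree = {v: 1 for v in range(1, n + 1)}
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(1, n + 1) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)   # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append(tuple(leaves))        # exactly two leaves remain at the end
    return edges

# Any choice of x, y gives a valid tree, repeats included:
print(prufer_to_tree((1, 3, 2, 3), 6))  # x = y = 3
print(prufer_to_tree((1, 1, 2, 1), 6))  # x = y = 1
```

Running the decoder on all $36$ choices of $(x,y)$ produces $36$ distinct labeled trees, which rules out options 1, 2, 3, and 5.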
{ "language": "en", "url": "https://math.stackexchange.com/questions/2242195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Regarding recurrences, why do characteristic polynomials work, and why do we look for the roots? I'll use an example recurrence but my question is meant to be generalized. Let's say we had some recurrence, such as: $$F(n) = -8F(n-1) + 9F(n-2) + 92F(n-3) - 140F(n-4)$$ where we already know the first few base constants $F(0), F(1), F(2), F(3)$ so the entire recurrence is defined for all integers $n \geq 0$. Normally we convert this to some kind of characteristic polynomial: $$x^n = -8 x^{n-1} + 9 x^{n-2} + 92 x^{n-3} - 140x^{n-4}$$ Divide everything by $x^{n-4}$ and put everything on one side: $$x^4 +8 x^3 - 9 x^2 - 92 x + 140 = 0$$ This polynomial can be factored: $$(x - 2)^2 (x + 5) (x + 7) = 0$$ And now we know that the roots are $2, -5, -7$. The $2$ root has multiplicity $2$, whereas the $-5$ and $-7$ roots each have multiplicity $1$. From this we can say that: $$F(n) = a \cdot (2)^n + b \cdot n \cdot (2)^n + c \cdot (-5)^n + d \cdot (-7)^n$$ And then we use the original four values of $F$ that we do know to solve a short system of equations and solve for $a, b, c, d$ to finish up the closed form. The short version of my question is basically "Why does this work?" Why can we use a "characteristic polynomial" (what is this, exactly) instead of a recurrence? Why does that polynomial's roots directly correspond to the closed-form of that recurrence? Why do we need to add an additional term with another power of $n$ for roots of multiplicity $>1$?
Two explanations. First, if you have $c_k a_{n + k} + \dotsb + c_0 a_n = 0$, a (not unreasonable, simple) idea is to try $a_n = \alpha^n$. Substituting, you get that it has to be $c_k \alpha^k + \dotsb + c_0 = 0$. Furthermore, the equation is linear, i.e., multiplying a solution by a constant or adding two solutions gives a solution. If $\alpha$ is a multiple zero of the characteristic equation, again trying $n \alpha^n$, $n^2 \alpha^n$, ... shows it "works". A "more scientific" (can be extended in several directions) technique is to use generating functions: Define $A(z) = \sum_{n \ge 0} a_n z^n$, multiply the recurrence by $z^n$ and sum over $n \ge 0$, to get after recognizing some sums: $\begin{align*} c_k \sum_{n \ge 0} a_{n + k} z^n + \dotsb + c_0 \sum_{n \ge 0} a_n z^n &= 0 \\ c_k \frac{A(z) - a_0 -a_1 z - \dotsb - a_{k - 1} z^{k - 1}}{z^k} + \dotsb + c_0 A(z) &= 0 \end{align*}$ If you now multiply this mess by $z^k$, and solve for $A(z)$, you get: $\begin{align*} A(z) &= \frac{p(z)}{c_k + c_{k - 1} z + \dotsb + c_0 z^k} \end{align*}$ here $p(z)$ is a polynomial. This in turn can be written as partial fractions, the terms of which are of one of the forms, for some $r \ge 1$: $\begin{equation*} \frac{1}{(1 - \alpha z)^r} \end{equation*}$ You want the coefficients of $z^n$ in such. By the (extended) binomial theorem: $\begin{align*} (1 - \alpha z)^{-r} &= \sum_{n \ge 0} (-1)^n \binom{-r}{n} \alpha^n z^n \\ &= \sum_{n \ge 0} \binom{n + r - 1}{r - 1} \alpha^n z^n \end{align*}$ Note that the binomial coefficient $\binom{n + r - 1}{r - 1}$ is a polynomial of degree $r - 1$ in $n$, and the $\alpha$ is seen to be a zero of the characteristic polynomial.
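Here is a quick numeric illustration using the example recurrence from the question (a sketch; the base values $F(0),\dots,F(3) = 0,1,2,3$ are an arbitrary choice of mine, made just to pin down $a,b,c,d$):

```python
import numpy as np

# basis suggested by the roots: 2 (multiplicity 2), -5, -7
basis = [lambda n: 2.0**n,
         lambda n: n * 2.0**n,
         lambda n: (-5.0)**n,
         lambda n: (-7.0)**n]

F0 = [0.0, 1.0, 2.0, 3.0]                 # arbitrary base values
M = np.array([[f(n) for f in basis] for n in range(4)])
coeffs = np.linalg.solve(M, F0)           # solve for a, b, c, d

def closed_form(n):
    return sum(c * f(n) for c, f in zip(coeffs, basis))

F = F0[:]
for n in range(4, 15):
    F.append(-8*F[n-1] + 9*F[n-2] + 92*F[n-3] - 140*F[n-4])
print(all(abs(F[n] - closed_form(n)) < 1e-6 * max(1.0, abs(F[n]))
          for n in range(15)))            # True
```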
{ "language": "en", "url": "https://math.stackexchange.com/questions/2242305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that $\sum\limits_{k=1}^{\infty}\frac{(-1)^k}{2k+1}x^{2k+1}$ uniformly converges on $[-1,1].$ Prove that $\sum\limits_{k=1}^{\infty}\dfrac{(-1)^k}{2k+1}x^{2k+1}$ uniformly converges on $[-1,1].$ My book says I have to use the alternating series test. I can see that the series converges for any $x\in[-1,1]$ by the alternating series test but it doesn't tell us the series is uniformly convergent on $[-1,1]$. I tried to use the Weierstrass M-Test instead but it fails to pass the M-test. I let $f_k(x) = \dfrac{(-1)^k}{2k+1}x^{2k+1}$ and found $M_k = \sup\{|\dfrac{(-1)^k}{2k+1}x^{2k+1}|:x\in[-1,1]\} = \dfrac{1}{2k+1}$. But $\sum\limits_{k=1}^{\infty}\dfrac{1}{2k+1}$ is not convergent and does not pass the M-test. Hence, uniform convergence cannot be established this way. Did I use the M-Test correctly?
Let $$f_n(x)=\sum_{k=1}^{n} \frac{(-1)^k}{2k+1} x^{2k+1},$$ and call $$f(x)=\sum_{k=1}^{\infty} \frac{(-1)^k}{2k+1} x^{2k+1}.$$ Note that $$f_n'(s)=-s^2\frac{1-(-s^2)^n}{1+s^2} \implies f_n(x)=- \int_0^x s^2\frac{1-(-s^2)^n}{1+s^2}\,ds$$ and $$f(x)=\arctan(x)-x=-\int_0^x\frac{s^2}{1+s^2}\,ds.$$ Thus: $$|f_n(x)-f(x)|=\left| \int_0^x \frac{s^{2(n+1)}}{1+s^2}\,ds\right| \le \left| \int_0^1 s^{2(n+1)}\,ds\right|=\frac{1}{2n+3}\to0$$ Since this bound does not depend on $x\in[-1,1]$, the convergence is uniform.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2242411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of a product of sequence Let $a > 0$ and $\{y_n\}_{n\geq0}$ be a sequence such that $y_0 > 0$, $y_n > a \ \forall \ n > 1$ and $$\sum_{n=1}^\infty \frac 1 {y_n} \to \infty$$ Prove that $$ \lim_{n\to \infty} \prod_{k=1}^n \left( 1 - \frac a {y_k}\right) = 0 $$ I tried to expand the product using Vieta's formula, but I'm getting the product to be infinite. Edit: I'm adding the source of the problem; I found it at the end of Theorem 1 on page 7 of this PDF (screenshot omitted). Any help will be appreciated.
By the strict convexity of the exponential function, we have $$1 - x \leqslant e^{-x}\tag{1}$$ for all $x\in \mathbb{R}$, and the inequality is strict for all $x \neq 0$. The assumptions yield $0 < 1 - \frac{a}{y_k} \leqslant 1$ for $k > 1$ (probably a typo and it should have been $\geqslant 1$, but that doesn't matter). Using $(1)$, we find $$0 < \prod_{k = 2}^m \biggl(1 - \frac{a}{y_k}\biggr) \leqslant \exp \Biggl(-a \sum_{k = 2}^m \frac{1}{y_k}\Biggr)\tag{2}$$ for all $m \geqslant 2$. The assumption $\sum \frac{1}{y_k} = +\infty$ implies that the right hand side of $(2)$ tends to $0$ as $m \to \infty$, hence by squeezing we deduce $$\lim_{m\to \infty} \prod_{k = 2}^m \biggl(1 - \frac{a}{y_k}\biggr) = 0.$$ Multiplying with the constant $1 - \frac{a}{y_1}$ yields $$\lim_{m\to\infty} \prod_{k = 1}^m \biggl(1 - \frac{a}{y_k}\biggr) = 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2242721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An operator is injective if and only if the range of its adjoint separates points Let $T \in L(E,F)$ where $E,F$ are normed spaces. Then, $T$ is injective if and only if $ T^*(F') \subset E' $ separates points of $E$. $T^*$ means adjoint of T. I do not have a starting point. I don't even know what separates means in this problem. It should have been clearer , but this is the way it is written. Can someone give a definition of separation in this context and point me to a direction after that?
Recall first what "separates points" means here: a set $S \subset E'$ separates points of $E$ if for every pair $x \neq y$ in $E$ there is some $\psi \in S$ with $\psi(x) \neq \psi(y)$; since the elements of $S$ are linear, this is equivalent to saying that for every $x \neq 0$ there is $\psi \in S$ with $\psi(x) \neq 0$. It's convenient to use the bracket notation $\langle x,\varphi \rangle =\varphi(x)$ when discussing the spaces and their duals. So, the adjoint $T^*$ satisfies $\langle Tx, \varphi\rangle = \langle x, T^*\varphi\rangle$ for all $x\in E$ and $\varphi\in F^*$. Hence $$ \|Tx\| = \sup_{\|\varphi\|_{F^*}=1} |\langle Tx, \varphi\rangle | = \sup_{\|\varphi\|_{F^*}=1} |\langle x, T^*\varphi\rangle| $$ which implies $$\ker T=\bigcap_{\|\varphi\|_{F^*}=1} \ker T^*\varphi = \bigcap_{\psi\in \operatorname{ran} T^*} \ker \psi $$ Both directions of the claim follow at once.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2242824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that this function is not square-integrable. Let $f(x):=(1-x^2)^{\frac{m}{2}} \int_0^x \frac{dt}{(1-t^2)^{m+1}}$ be a function on $(-1,1)$. Then I would like to show that the asymptotic of $f$ is such that for $m \in \{1,2,3,...\}$ the function $f$ is not square-integrable at $ \pm 1.$
You need only look at the case $x=1$. Put $\displaystyle F_m(x)=\int_0^x \frac{dt}{(1-t^2)^{m+1}}$. Then we compute that $F_0(x)=\frac{1}{2}\log (\frac{1+x}{1-x})$. Integrating $F_m$ by parts, we get that $$(2m+2)F_{m+1}(x)=(2m+1)F_m(x)+\frac{x}{(1-x^2)^{m+1}}$$ Now it is easy to show that $(1-x^2)F_0(x)\to 0$ if $x\to 1$, and hence $(1-x^2)F_1(x)\to 1/2$ if $x\to 1$. Now we prove by induction that for $m\geq 1$, $(1-x^2)^mF_m(x)\to \frac{1}{2m}$, and it is easy to finish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $C_c^{\infty}(\mathbb{R}^n)$ dense in $L^p(M,d\sigma)$, $1\leq p<\infty$, where $M$ is an $n-1$ regular surface in $\mathbb{R}^n$? I know that, given an open set $\Omega\subseteq\mathbb{R}^n$, $C_c^{\infty}(\Omega)$ (smooth functions with compact support) is dense in $L^p(\Omega)$, $1\leq p<\infty$. Let $M$ be a smooth $n-1$ regular surface in $\mathbb{R}^n$, and let $d\sigma$ be the surface measure. Is it true that $C_c^{\infty}(\mathbb{R}^n)$ is dense in $L^p(M,d\sigma)$, $1\leq p<\infty$? That is, if $\int_M |f|^p\,d\sigma<\infty$, can we find $\{f_m\}\subseteq C_c^{\infty}(\mathbb{R}^n)$ such that $\lim_m \int_M|f-f_m|^p\,d\sigma=0$? If not, which spaces would be dense in $L^p(M,d\sigma)$?
Let us show that $C_c ^\infty (M)$ is dense in $L^p (M)$ ($1 \le p < \infty$), in two steps. First, $C_c ^\infty (M)$ is dense in $C_0 (M)$ (the space of functions that vanish at infinity) in the topology of compact convergence, by one of the many variations of the Stone-Weierstrass theorem. Since $C_c(M) \subseteq C_0 (M)$, it follows that $C_c ^\infty (M)$ is dense in $C_c (M)$ too. Next, it is a known result that $C_c (M)$ is dense in $L^p (M)$ (this is true at least for $\sigma$-finite spaces, not only for smooth manifolds). Combining the two facts you get that $C_c ^\infty (M)$ is dense in $L^p (M)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $f$ is continuous and $\int^{a}_{-a} f(x) \,dx=0$, is $f$ necessarily odd? I understand that if $f$ is an odd function then $$\int^{a}_{-a} f(x) \,dx =0$$ Can I say that $f$ is an odd function if $\int^{a}_{-a} f(x)\,dx=0$ and $f$ is continuous on $[-a,a]$?
No. Let $a = 1$ and consider the function $$ f(x) = \begin{cases}\frac{3}{2}x^2&\text{if $x\geq 0$} \\ x&\text{if $x<0$} \end{cases} $$ $f$ is continuous at $0$ since $\lim_{x\to 0^+} f(x) = \lim_{x \to 0^-} f(x) = 0$. Furthermore, we have $$ \int_{0}^1 f(x)\,dx = \int_{0}^1 \frac{3}{2} x^2\,dx = \frac{1}{2} $$ and $$ \int_{-1}^0 f(x)\,dx = \int_{-1}^0 x\,dx = -\frac{1}{2} $$ so $$ \int_{-1}^1 f(x)\,dx = 0 .$$ But $f$ is obviously not odd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
complex number with determinant Let $z_1$ and $z_2$ be two distinct complex numbers and let $\,z=(1-t)z_1 +tz_2\,$ for some real number $t$ with $0<t<1$. Then we have to prove $$\begin{vmatrix} z-z_1 & \overline{z}-\overline{z_1} \\ z_2-z_1 & \overline{z_2}-\overline{z_1} \end{vmatrix}\;=\;0$$ I thought about it but could not get started. Can anybody provide a hint or help me with this?
Given that $z=(1-t)z_1+tz_2$ we can write $z=z_1+t(z_2-z_1)$ so that $z-z_1=t(z_2-z_1)$ and $$\overline{z}-\overline{z_1}=\overline{z-z_1}=\overline{t(z_2-z_1)}=t(\overline{z_2}-\overline{z_1}),$$ so the top row of the matrix is $t$ times the bottom row. Hence the determinant is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $\sin A$ and $\cos A$ if $\tan A+\sec A=4 $ How to find $\sin A$ and $\cos A$ if $$\tan A+\sec A=4 ?$$ I tried to find it by $\tan A=\dfrac{\sin A}{\cos A}$ and $\sec A=\dfrac{1}{\cos A}$, therefore $$\tan A+\sec A=\frac{\sin A+1}{\cos A}=4,$$ which implies $$\sin A+1=4\cos A.$$ Then what to do?
Together with $$\sin A+1=4\cos A $$ you can use $\sin^2A+\cos^2A=1$ in the form $$(\sin A+1)(\sin A-1)=-\cos^2 A\ .$$ Putting the two together you easily get $$4\cos A(\sin A-1)=-\cos^2 A\ ,$$ and hence, dividing by $\cos A$ (nonzero, since $\sec A$ is defined), $$4\sin A-4=-\cos A\ .$$ You now just have to solve the linear system in $\sin A$ and $\cos A$: $$\begin{cases} 4\sin A-4=-\cos A\\ \sin A+1=4\cos A \end{cases}$$
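Solving the system explicitly, as a quick check of where this leads: substituting $\sin A = 4\cos A - 1$ from the second equation into the first gives $16\cos A - 8 = -\cos A$, so $$\cos A = \frac{8}{17}\ ,\qquad \sin A = \frac{15}{17}\ ,$$ and indeed $\tan A + \sec A = \frac{15}{8} + \frac{17}{8} = 4$.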
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Finding the solutions for $\cot(x) = -x$ without digital tools All day I've been stuck with the following question where we're not allowed to use any digital tools. How does the amount of roots depend on the constant $a$? $\cos(x) = ax$. * *$a = \frac{\cos(x)}{x}$, *$a' = \Big(\frac{1}{x}\Big)\Big(\frac{\cos(x)}{x}+\sin(x)\Big)$, *$a' = 0 = \frac{\cos(x)}{x}+\sin(x) \implies \cot(x) + x = 0$. I cheated and used digital tools to find a approximate way of describing the extreme values: * *$x(n) = n\big(\frac{2.8}{n} + \pi \big)$ where $\pm2.8$ is the first solution on either side of the $y$-axis rounded of course. What I'm wondering is: * *Is there a way to find the first solution without digital tools or a lot of testing? I've tried to find a common term by extending and flipping the functions. *Is there a more accurate way to describe the solutions? Since there is not exactly pi steps between every one of them. Sorry if my english doesn't make sense somewhere, it's not my first language. Would be grateful for any leads. Thanks!
With the exception of special cases such as $a=0$ or $a=\frac{2}{\pi}$ the equations you are investigating can not be solved using elementary functions (exponents, logarithms, trigonometry, polynomials, etc.). Even using digital tools you still only get an approximate answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $f$ is differentiable everywhere and $|f(y)-f(x)| \leq |x-y|^n$ for all $n >1$, then $f^{\prime}(x)= 0 $ for all $x$. Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function and satisfies $|f(x) - f(y)| \leq |x-y|^n$ for all $n > 1.$ Show that $f^{\prime}(x)=0$ for all $x\in \mathbb{R}.$ My first attempt: Choose $x < y$ such that $y - x < 1$. By the Mean Value Theorem on $[x,y]$, we have $\frac{f(y) - f(x)}{y-x} = f^{\prime}(c)$ for some $c \in (x,y).$ Note that we have $| f^{\prime}(c) | = \frac{|f(y) - f(x)|}{y-x} \leq |x-y|^{n-1}$. Since it holds for any $n > 1$, we have $\displaystyle\lim_{n \rightarrow \infty}|f^{\prime}(c)| \leq \displaystyle\lim_{n \rightarrow \infty}|x-y|^{n-1}.$ Since $y - x < 1,$ we have $\displaystyle\lim_{n \rightarrow \infty}|x-y|^{n-1} = 0$. Hence, $|f^{\prime}(c)| = \displaystyle\lim_{n \rightarrow \infty}|f^{\prime}(c)| = 0 .$ Therefore, $f^{\prime}(c) = 0$. Since it holds for any interval $[x,y]$ that shrinks to a point in it, we can have $f^{\prime}(x)=0$ for all $x \in \mathbb{R}.$ My second attempt: Since $f$ is differentiable everywhere, we have for any $x \in \mathbb{R}, f^{\prime}(x) = \displaystyle\lim_{y \rightarrow x }\frac{f(x)-f(y)}{x-y}.$ Therefore, $|f^{\prime}(x)| = \displaystyle\lim_{y \rightarrow x }\frac{|f(x)-f(y)|}{|x-y|} \leq \displaystyle\lim_{y \rightarrow x} |x-y|^{n-1} = 0.$ Hence, $|f^{\prime}(x)| = 0$, which implies that $f^{\prime}(x) = 0.$ Since it holds for any $x \in \mathbb{R}$, we have $f^{\prime}(x) = 0$ for any $x \in \mathbb{R}.$ Are my two attempts correct? I am quite doubtful about my first attempt on the choosing $x$ and $y$ part.
Let's make it more general: if $f:{\mathbb R}\to {\mathbb R}$ is a function satisfying $|f(x) - f(y) |\le |x-y|^{1+\alpha}$ for some $\alpha>0$, then $f$ is constant. Let's see why. Let $x=0$. For any $n\in{\mathbb N}$, telescoping and triangle inequality give $$|f(y) - f(0)|\le \sum_{k=1}^n \left |f(\frac{y k}{n})-f(\frac{y (k-1)}{n})\right |=(*)$$ By assumption $$(*) \le \sum_{k=1}^n |\frac{y}{n}|^{1+\alpha}=|y|^{1+\alpha} \frac{n}{n^{1+\alpha}}\to 0.$$ Therefore $f(y)=f(0)$ for all $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given two square matrices $A$ and $B$, is $(AB)^2 = A^2B^2$ true or false? I tried with a counter example and it came out that this claim is false. I took a matrix $$ A= \left[ {\begin{array}{cc} 2 & 1\\ 3 & 2\\ \end{array} } \right] $$ and a matrix $$ B= \left[ {\begin{array}{cc} 1 & 3\\ 4 & 1\\ \end{array} } \right]. $$ I first calculated $$AB$$ and I got: $$ AB= \left[ {\begin{array}{cc} 6 & 7\\ 11 & 11\\ \end{array} } \right] $$ and then I calculated $$AB \times AB$$ that is same as $$(AB)^2$$ and I got: $$ (AB)^2= \left[ {\begin{array}{cc} 113 & 119\\ 187 & 198\\ \end{array} } \right] $$ then I calculated $$A \times A$$ that is the same as $$A^2$$ and I got $$ A^2= \left[ {\begin{array}{cc} 7 & 4\\ 12 & 7\\ \end{array} } \right] $$ and after I calculated $$B^2$$ $$ B^2= \left[ {\begin{array}{cc} 13 & 6\\ 8 & 13\\ \end{array} } \right] $$ and at the end I calculated $$A^2B^2$$ and I got: $$ A^2B^2= \left[ {\begin{array}{cc} 123 & 94\\ 212 & 163\\ \end{array} } \right] $$ so I am deducing that the claim at the beginning is false, so $$(AB)^2 \neq A^2B^2$$ Is this right?
Assuming $A$ and $B$ are invertible, we have $$(AB)^2=A^2B^2$$ if and only if $$ABAB=AABB$$ if and only if $$BA=AB$$ which is not true in general.
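You can confirm the counterexample from the question in a few lines of NumPy (a quick sketch):

```python
import numpy as np

A = np.array([[2, 1], [3, 2]])
B = np.array([[1, 3], [4, 1]])

AB = A @ B
print(AB @ AB)               # (AB)^2   -> [[113 119] [187 198]]
print((A @ A) @ (B @ B))     # A^2 B^2  -> [[123  94] [212 163]]
print(np.array_equal(A @ B, B @ A))  # False: A and B do not commute
```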
{ "language": "en", "url": "https://math.stackexchange.com/questions/2243927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Compute the limit. Compute the limit. $$\lim _{n \to \infty} \left(\sqrt{n} \int_{0}^{\pi} (\sin x)^n dx\right)$$ I have no clue where to start with this problem so any help is greatly appreciated.
Another possibility besides the explicit calculation of the integral (which one can do recursively, for example) is to use Laplace's method to approximate $$ \int_0^\pi\sin^n(x)\,dx=\int_0^\pi e^{n\ln\sin(x)}\,dx=\int_0^\pi e^{n\,f(x)}\,dx $$ where (in Wiki notations) $f(x)=\ln\sin(x)$ and $x_0=\frac{\pi}{2}$. We have $f''(x_0)=-\frac{1}{\sin^2(\pi/2)}=-1$, so the methods says that $$ \int_0^\pi\sin^n(x)\,dx\sim e^{n\ln\sin(\pi/2)}\sqrt{\frac{2\pi}{n\cdot 1}}=\frac{\sqrt{2\pi}}{\sqrt{n}} $$ which gives the limit to be $\sqrt{2\pi}$.
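A numeric check of this asymptotic (a sketch using SciPy; the `points` argument just hints to the quadrature where the peak sits):

```python
import numpy as np
from scipy.integrate import quad

for n in (10, 100, 1000, 10000):
    val, _ = quad(lambda x: np.sin(x)**n, 0, np.pi, points=[np.pi / 2])
    print(n, np.sqrt(n) * val)
# the products increase towards sqrt(2*pi) = 2.5066...
```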
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proving $C_0^1(\Omega)$ is not dense in $H^1(\Omega)$. Hi I am trying to work on the following problem: (a) $C_0^1(\Omega)$ is dense in $L_2(\Omega)$ (b) $C_0^1(\Omega)$ is dense in $H_0^1(\Omega)$. (c) Explain why $C_0^1(\Omega)$ is not dense in $H^1(\Omega)$. I know how to do (a) and (b) but I couldn't find how to solve (c). Any help would be greatly appreciated. Thanks in advance. So by following the comments given below I got $$f_n(x)=\begin{cases}n^2x^2,\,\,\,\,\,\,\,0\le x\le\frac{1}{n}\\1,\,\,\,\,\,\,\,\frac{1}{n}\le x\le 1-\frac{1}{n}\\n^2(1-x)^2,\,\,\,\,\,\,\,1-\frac{1}{n}\le x\le 1\end{cases}$$ Clearly $f_n(x)\to 1$, but $f(x)=1\not\in C_0^1(\Omega)$ whereas $f_n(x)\in C_0^1$; therefore $C_0^1(\Omega)$ is not dense in $H^1(\Omega)$. I still have a doubt about whether $f_n(x)$ really is in $C_0^1(\Omega)$, since the derivative is not continuous anymore.
You can check that via looking at the trace. Let $D$ denote the closure of $C_0^1(0,1)$ in $H^1(0,1)$. By definition, the trace of $u \in C_0^1(0,1)$ is zero at both end points. Moreover, it is continuous. Hence, the trace of any function in $D$ is $0$. However, there are functions in $H^1(0,1)$ with non-zero trace, e.g., $x \mapsto 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the most efficient way to calculate $R^2$? Hello I am working on a question from an old exam paper and wondered what is the best way to tackle parts ii and iii. Given the data it is easy to find $\hat{\beta_0}=-1.071$ and $\hat{\beta_1}=2.741$. Now for part ii) I have the formula $R^2=1-SSE/SST$ where $SST=\sum(y_i-\bar{y})^2$ (easy to work out) and $SSE=\sum e_i^2=\sum (y_i-\hat{y_i})^2$. Likewise I have for part iii) An unbiased estimate of $\sigma^2$ is $\sum e_i^2/(n-2)$. Question: I wondered if there is a nice and more efficient way to work out $\sum e_i^2$, or do I have to calculate each predicted value based on the model, subtract it from the actual value, square that difference, and then sum all the values up?
Yes, you are correct that computing SSE from the definition is a tedious job. If the regression coefficients are estimated without rounding errors, then the following procedure may be used for computing SSE. By definition, $SSE = \sum(y_i-\hat{y_i})^2$, and $\hat{y}_i=\hat{\beta}_0 + \hat{\beta}_1 x_i$. Plug $\hat{y}_i$ into the definition of SSE and simplify. It results in the following: $$SSE = \sum y_{i}^{2}-\hat{\beta_0}\sum y_i - \hat{\beta_1}\sum x_i y_i.$$
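A quick NumPy sketch confirming the shortcut against the definition (the data below are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.7, 4.6, 7.1, 9.8, 12.7])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

sse_def = np.sum((y - (b0 + b1 * x))**2)                        # definition
sse_short = np.sum(y**2) - b0 * np.sum(y) - b1 * np.sum(x * y)  # shortcut
print(sse_def, sse_short)    # equal up to floating-point rounding
```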
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Conditional probability and indicator function Can someone give me a formal rigorous proof of the following equation? $$\frac{E\{X \cdot I(T=1) \}}{\Pr(T=1)}= E(X|T=1)$$ Many thanks!
\begin{align} \mathsf{E}(X \cdot \mathbb{I}(T = 1)) &= \sum_t \mathsf{E}(X \cdot \mathbb{I}(T = 1) \mid T = t)\cdot\Pr(T = t) \\ &= \mathsf{E}(X \cdot \mathbb{I}(T = 1) \mid T = 1) \cdot \Pr(T = 1) \\ &= \mathsf{E}(X \mid T = 1) \cdot \Pr(T = 1) \end{align} where the first equality is the law of total expectation (here assuming $T$ takes countably many values), and all terms with $t \neq 1$ vanish because $\mathbb{I}(T=1)=0$ on the event $\{T=t\}$. Dividing both sides by $\Pr(T=1)$ gives the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Verifying that the $2n$-th term in a modified generalized Fibonacci sequence of order $n$ is the $n-1$-th Cullen number I was messing around with recursive sequences and I came across something interesting. The Fibonacci numbers start with $F_1 = 0$, $F_2=1$, and continue with $F_i = F_{i-1} + F_{i-2}$. The Tribonacci numbers start with $F_1 = F_2 = 0$, $F_3 = 1$, and continue with $F_i = F_{i-1}+F_{i-2}+F_{i-3}$. The generalized Fibonacci numbers of order $n$ start with $F_1=F_2=\dots=F_{n-1}=0$, $F_n=1$, and for $i> n$, $$F_i=\sum_{j=i-n}^{i-1}F_j.$$ I find the numbers that these generate to be boring, though, so I'm going to invent a new set of sequences called the Gibonacci sequences. The recurrence relation is the same as the Fibonacci sequences, but the initial values are different. For the Gibonacci sequence of order $2$, we have $G_1=G_2=1$ and $G_i = G_{i-1}+G_{i-2}$. Likewise, for the generalized Gibonacci numbers of order $n$ we have $G_1=G_2=\dots=G_n=1$ and for $i> n$, $$G_i=\sum_{j=i-n}^{i-1}G_j.$$ I'm particularly interested in the point where all the initial values are used up, so to speak -- the last number that is defined in terms of one of the initial values. This would be the $2n$-th term for a Gibonacci sequence of order $n$. For example, when $n=2$, our sequence would be $1,\,1,\,2,\,\boxed{3},\dots$ where the number of interest is highlighted. For $n=3$, our sequence would be $1,\,1,\,1,\,3,\,5,\,\boxed{9},\,\dots$. What I'm going to do is define a sequence from these numbers as I go up in order for the generalized Gibonacci numbers. We'll define $C_n$ as $G_{2n}$ where $G$ is the Gibonacci sequence of order $n$. Our sequence $C$ looks like this: $$(1,\,3,\,9,\,25,\,65,\,161,\,385,\,\dots).$$ It seems to be the case that the $i$th term in this sequence is precisely $(i-1)\cdot 2^{i-1}+1$. In other words, this sequence seems to generate the Cullen numbers! I'm very much convinced that this pattern continues for every number in the sequence, but I wouldn't know how to prove this to be the case. That, in essence, is my question: how can we prove that this sequence generates the Cullen numbers?
Prove that $G_{n+k}=2^{k-1}(n-1)+1$ for $1\le k\le n$. It's useful to note that $G_{n+k}=2G_{n+k-1}-G_{k-1}$.
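For anyone who wants evidence before attempting the induction, here is a quick computational check of the conjecture (the helper name is my own):

```python
# Numerical check of the conjecture C_n = G_{2n} = (n-1)*2^(n-1) + 1,
# where G is the order-n Gibonacci sequence (n ones, then sum of last n terms).
def gibonacci(order, length):
    g = [1] * order
    while len(g) < length:
        g.append(sum(g[-order:]))
    return g

for n in range(1, 12):
    c_n = gibonacci(n, 2 * n)[2 * n - 1]   # G_{2n}, 1-indexed
    cullen = (n - 1) * 2 ** (n - 1) + 1
    assert c_n == cullen, (n, c_n, cullen)
print("conjecture holds for n = 1..11")
```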
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of choosing same set of unique objects over time without replacement I have $1024$ unique objects, and each round I start with all of them and choose $7$ uniformly at random without replacement. If I do this $n$ times, what is the probability distribution that I will have a collision (pick the same $7$ as I have in a previous round)? If this can be generalized to $N$ unique objects choose $K$, it would be useful to compare choices of parameters. I vaguely think I could start answering this question but I'm far from an expert in probability and would greatly appreciate some help.
The number of different choices of $7$ of the $1024$ objects is $\binom{1024}7$. This is a very large number; call it $C$. If $n=2$, there is a collision with probability $\frac 1C$ and so no collision with probability $\frac{C-1}C$. If $n=3$, to avoid a collision we first have to choose differently in the first two rounds (probability $\frac{C-1}C$), then avoid both of those combinations in the third round (probability $\frac{C-2}C$), and so the probability of no collision is $\frac{(C-1)(C-2)}{C^2}$. Similarly the probability of no collisions in $n$ rounds is $$\frac{(C-1)(C-2)\cdots(C-n+1)}{C^{n-1}}.$$ This can be approximated as $\exp\big(-\frac{n(n-1)}{2C}\big)$. See this wikipedia page for more details.
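For concreteness, a short sketch computing both the exact no-collision product and the exponential approximation (the round count $n$ here is an arbitrary example):

```python
from math import comb, exp, prod

N, K, n = 1024, 7, 1000          # N objects, choose K per round, n rounds
C = comb(N, K)                   # number of possible hands

# Exact probability of at least one collision in n rounds
p_no_collision = prod((C - k) / C for k in range(n))
p_collision_exact = 1 - p_no_collision

# Birthday-style approximation
p_collision_approx = 1 - exp(-n * (n - 1) / (2 * C))

print(C, p_collision_exact, p_collision_approx)
```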
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Vorticity equations of incompressible Navier-Stokes equations in 2D We know for incompressible Navier-Stokes equations, we have the vorticity equation: $$\omega_t - \Delta \omega + (u \cdot \nabla)\omega = (\omega \cdot \nabla)u$$ But in two-dimensional space the stretching term $(\omega \cdot \nabla)u$ is said to vanish, so the equation reduces to $\omega_t - \Delta \omega + (u \cdot \nabla)\omega = 0$. I don't see why this happens after I plug in the expression of $\omega$. (Here $\omega = \partial_1 u_2 - \partial_2 u_1$)
A solution to the 2D Navier-Stokes equations can be realised as a special kind of solution to 3D Navier-Stokes, namely one for which $$ u(x,y,z) = (u_1(x,y),u_2(x,y),0) , \quad w(x,y,z)=\nabla \times u(x,y,z) =(0,0,\omega(x,y)).$$ Here, $\omega = \operatorname{curl}_{\text{2D}}(u_1,u_2). $ See for instance here. Plugging this into the above vorticity equation, we notice in particular that $$ w\cdot \nabla = \begin{pmatrix} 0 \\ 0 \\ \omega \end{pmatrix}\cdot \begin{pmatrix} \partial_1 \\ \partial_2 \\ \partial_3 \end{pmatrix} = \omega\partial_3$$ So since $u$ does not depend on $x_3$, the term $(w\cdot\nabla) u $ vanishes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding eigenvectors of a 3x3 matrix with a root of multiplicity 3 I have the matrix \begin{bmatrix}1&0&0\\2&2&-1\\0&1&0\end{bmatrix} I know that the only eigenvalue is 1, with multiplicity 3. I solved for the first eigenvector and got \begin{bmatrix}0\\1\\1\end{bmatrix} How do I find the other two? I know they are \begin{bmatrix}0\\1\\0\end{bmatrix} and \begin{bmatrix}1/2\\0\\0\end{bmatrix} but when I do $(A-\lambda I)v_2 = v_1$, I get the system of equations $2x + y -z = 1$, $y -z =1$. I don't see how that gives the second eigenvector. Thanks
If you use a linear algebra tool like MATLAB, or a library such as NumPy for Python or MathNet Numerics for C#, you can check this numerically. Such a tool reports the eigenvalue $1$ with multiplicity $3$, but the matrix of eigenvectors it returns has three (numerically) identical columns, each proportional to $(0, 1, 1)^T$ — for instance $(0, \sqrt{2}/2, \sqrt{2}/2)^T$ after normalization. That repetition is the tool's way of signalling that the matrix is defective: the eigenspace for $\lambda = 1$ is only one-dimensional, so there is just one independent eigenvector. The other two vectors you quote, $(0,1,0)^T$ and $(1/2,0,0)^T$, are not eigenvectors but generalized eigenvectors, obtained from the chain $(A-I)v_2 = v_1$ and $(A-I)v_3 = v_2$; your system $2x+y-z=1$, $y-z=1$ is solved precisely by $v_2=(0,1,0)^T$. We should also bear in mind that, in general, eigenvalues can be complex numbers.
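A minimal NumPy sketch of the check described above:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [2.0, 2.0, -1.0],
              [0.0, 1.0, 0.0]])

vals, vecs = np.linalg.eig(A)
print(vals)            # three eigenvalues equal (or very close) to 1
print(vecs)            # columns: three numerically identical eigenvectors

# Geometric multiplicity = 3 - rank(A - I) = 1, so A is defective
print(3 - np.linalg.matrix_rank(A - np.eye(3)))
```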
{ "language": "en", "url": "https://math.stackexchange.com/questions/2244911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding Coordinates on a 3D vector My question is how to find the coordinates for a point on segment $KL$ where $K=(3,2,1)$ and $L=(7,9,5)$ that is $5$ units away from $K$. I know that the vector is $KL=[4,7,4]$ and the length of the entire vector is $9$ but not sure how to move the point along the vector by $5$ units
We want to move $\dfrac 59$ of the vector $[4,7,4]$, which is $\left[2\frac29, 3\frac89,2\frac29\right]$ This equates to the point \begin{align}P&=K+\frac 59(KL)\\ &=(3,2,1)+\frac59[4,7,4]\\ &=(3,2,1)+\left[2\frac29, 3\frac89,2\frac29\right]\\ &=\left(5\frac 29,5\frac89,3\frac29\right)\\ &\approx (5.22, 5.89, 3.22)\quad(2\text{d.p.})\end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving $\lim_{(x,y)\to (0,0)} (x^4+y^4)/(x^2+y^2)=0$ by definition I need to prove by definition (and nothing else) that $$\lim_{(x,y) \to (0,0)}\frac{x^4+y^4}{x^2+y^2} = 0.$$ I've been stuck on this for almost an hour with no luck, and ran out of ideas. Can anyone help or give a hint?
Another approach which is useful whenever you see a denominator of $x^2+y^2$ is to use polars: $x=r\cos t$, $y=r\sin t$. Then $$\frac{x^4+y^4}{x^2+y^2}=\frac{r^4\cos^4t+r^4\sin^4t} {r^2\cos^2t+r^2\sin^2t}=r^2(\sin^4t+\cos^4t)$$ which is clearly $\le 2r^2$. So as $r\to0$ the function tends to zero. (Although as Luiz points out an even stronger inequality is obvious.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
What is the standard deviation of dice rolling? When trying to find how to simulate rolling a variable amount of dice with a variable but unique number of sides, I read that the mean is $\dfrac{sides+1}{2}$, and that the standard deviation is $\sqrt{\dfrac{quantity\times(sides^2-1)}{12}}$. I doubt that the $12$ comes from the formula because it seems strongly linked with the examples of using two six-sided dice. Is the formula for the standard deviation correct? If not what is it?
The formula is correct. The 12 comes from $$\sum_{k=1}^n \frac1{n} \left(k - \frac{n+1}2\right)^2 = \frac1{12} (n^2-1) $$ Where $\frac{n+1}2$ is the mean and k goes over the possible outcomes (result of a roll can be from 1 to number of faces, $n$), each with probability $\frac1{n}$. This formula is the definition of variance for one single roll.
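A direct way to convince yourself is to evaluate the definition numerically; this sketch checks the single-die variance against $(n^2-1)/12$ and then uses additivity of variance for independent dice:

```python
from math import sqrt

def die_variance(sides):
    mean = (sides + 1) / 2
    return sum((k - mean) ** 2 for k in range(1, sides + 1)) / sides

sides, quantity = 6, 2
# Single-die variance equals (sides^2 - 1)/12
print(die_variance(sides), (sides**2 - 1) / 12)
# Independent dice: variances add, so the std dev of the total is
print(sqrt(quantity * (sides**2 - 1) / 12))
```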
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Lambert W function style inequality Suppose that for real $x,y \ge 1$ we have the inequality $x/\log x \le y/\log 2$. Then the inequality $$ x \le (2/\log 2)y\log y + c $$ appears to hold for some constant $c$. What is the smallest such $c$? I would also accept an argument that no such constant exists. Depending on how I pose this to Wolfram Alpha, it returns different answers: it seems to sometimes use the wrong branch of the Lambert W function.
$x/\log(x)$ for $x > 1$ has a minimum value of $e$ at $x=e$, decreasing for $1 < x < e$ and increasing for $e < x < \infty$. Thus if $y < e \log(2)$ there are no solutions to your inequality; if $y \ge e \log(2)$ the solutions form an interval $$ - \frac{y W_{0}(-\log(2)/y)}{\log(2)} \le x \le - \frac{y W_{-1}(-\log(2)/y)}{\log(2)} $$ where $W_{0}$ and $W_{-1}$ are the principal and "$-1$" branches of the Lambert W function. So your claim is that $$- \frac{y W_{-1}(-\log(2)/y)}{\log(2)} < \frac{2 y \log(y)}{\log(2)} + c $$ With $s = \log(2)/y$, this says $$ - W_{-1}(-s) < 2 \log(\log(2)/s) + c s $$ Note that both $-W_{-1}(-s)$ and $2 \log(\log(2)/s)$ are decreasing on $(0, 1/e)$ with values in $(1,\infty)$, and $x \exp(-x)$ is decreasing on $(1,\infty)$, so this is equivalent to $$ -W_{-1}(-s) \exp\left(W_{-1}(-s)\right) > ( 2 \log(\log(2)/s) + c s ) \exp\left(- 2 \log(\log(2)/s) - c s \right) $$ which simplifies to $$ s > \frac{s^2 e^{-cs}}{\log(2)^2} \left(2 \log(\log(2)) - 2 \log(s) + c s\right)$$ That is certainly true for sufficiently small $s$, since the right side is $O(s^2 |\log(s)|)$ and thus $o(s)$ as $s \to 0+$. We can adjust $c$ to make it true on the rest of the interval. The least possible $c$ is the maximum of $$ F(s) = \frac{-2 \log(\log(2)/s) + W_{-1}(-s)}{s} $$ for $0 < s < 1/e$. Numerically it is approximately $.455389746$, occurring at $s \approx .2530875517$.
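To reproduce the quoted numbers, one can evaluate $F$ with SciPy's Lambert W implementation ($k=-1$ branch) and maximize it numerically — a sketch, with bounds chosen for illustration:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

log2 = np.log(2)

def F(s):
    # F(s) = (-2*log(log(2)/s) + W_{-1}(-s)) / s  on (0, 1/e)
    return (-2 * np.log(log2 / s) + lambertw(-s, k=-1).real) / s

res = minimize_scalar(lambda s: -F(s), bounds=(1e-6, 1 / np.e - 1e-9),
                      method="bounded")
print(res.x, -res.fun)   # should be roughly 0.25309 and 0.45539
```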
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
order of the pole $0$ of function $f(z)=\frac{1}{z^{3/2}}$ Find order of the pole $0$ of $f(z)=\frac{1}{z^{3/2}}$. I think it is $2$ because \begin{align*} z^{3/2}\big|_{0}&=0\\ (z^{3/2})'\big|_{0}&=0\\ (z^{3/2})''\big|_{0}&\neq 0, \end{align*} but I'm not sure. Could someone explain to me, please?
As Chappers pointed out, this function is not analytic on any annulus centred at $z=0$. It's a multivalued function defined by: \begin{align*} z^{-\frac32} = e^{Ln(z^{-\frac32})} = e^{-\frac 32 (\ln|z| +i\arg(z)+2p\pi i)} = |z|^{-\frac32} e^{\frac{-3i}{2}(\arg(z)+2p\pi)}, \end{align*} where $p \in \mathbb Z$. Another way to see that something odd is going on is to evaluate $f$ along the circle given by $e^{i \theta}$. At $\theta =0$ we have that $f(e^{i0})=e^0=1$ (naively). But at $\theta = 2\pi$ (the same point in the complex plane) we get $f(e^{i2\pi}) = e^{-i3\pi} = -1$ (which is the main concern here).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quantifier statement explanation I'm having trouble understanding what the statement $\forall x\,(x \in \mathbb{N} \rightarrow \exists y\,(y \in \mathbb{N} \wedge x < y))$ means in plain English. Does it mean: "For any natural number there exists a number greater than it"? I don't get what the right part of the implication means.
You are correct. Literally translated, the statement means "for all $x$ such that $x$ is a natural number, there exists a number $y$ such that $y$ is a natural number and $x$ is less than $y$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum of squares by treating it as a nonhomogenous recurrence Let $T(n) = T(n-1) + n^2$ where $T(0) = 0$. The homogenous part $T(n) = T(n-1)$ has characteristic polynomial $x - 1 = 0$ and root $1$, which means $T(n) = \alpha \cdot 1^n$ for the homogenous part. I am not sure how to do the nonhomogenous part. I tried this: $cn^2 = c(n-1)^2 + n^2$ but $c$ doesn't become a nice constant. I am trying to ultimately derive $T(n) = n(n+1)(2n+1)/6$
You should try with a full cubic, not just the highest degree term. So you have to try $T(n) = \alpha n^3 + \beta n^2 + \gamma n + \delta$. Setting $T(0)=0$ we have $\delta=0$. Then, setting $T(n) = T(n-1) + n^2$ and comparing coefficients yields: $$\begin{cases} -3\alpha + 1 = 0 \\ 3\alpha - 2\beta = 0 \\ -\alpha + \beta - \gamma = 0\end{cases}$$ Solving this gives you the result: $\alpha = 1/3$, $\beta = 1/2$, $\gamma = 1/6$.
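A two-line check that the resulting closed form $T(n)=n(n+1)(2n+1)/6$ really satisfies the recurrence:

```python
# Check the closed form n(n+1)(2n+1)/6 against the recurrence T(n) = T(n-1) + n^2
T = 0
for n in range(1, 50):
    T += n * n
    assert T == n * (n + 1) * (2 * n + 1) // 6
print("closed form verified for n = 1..49")
```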
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Set Proof verification How to prove that $(A-B)\cup(A\cap B)=A$? Here is my proof, I'm not sure if I verified it correctly. $(A-B) \cup (A \cap B)$ simplifies into $x \in (A-B) \vee x \in (A \cap B)$. Then we have $[x \in A$ and $x \notin B$] or $[x \in A$ and $x \in B$] $\iff x \in A$ Therefore, $(A-B) \cup (A \cap B) = A$
Here is another proof. Notice that $(A-B) \subset A$ and $(A\cap B) \subset A$, thus $(A-B) \cup (A\cap B) \subset A$. On the other hand, $\forall x \in A$: if $x\in B$, then $x\in A\cap B$; else if $x\notin B$, then $x\in A-B$. Thus $x\in (A-B) \cup (A\cap B)$. So we have $A\subset (A-B) \cup (A\cap B)$. So we get $A = (A-B) \cup (A\cap B)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proof that $\sin$ is continuous by open sets on $\mathbb{R}$ with the usual topology The open set definition of continuity is: $f:A \to B$ is continuous $\iff$ $U_{B}\in\tau_{B} \implies f^{-1}U_B\in\tau_{A}\ \forall U_B$, where $\tau_A$ and $\tau_B$ are the topologies of $A$ and $B$. I believe that in the usual topology on $\mathbb{R}$ this reduces to: $f:\mathbb{R}\to\mathbb{R}$ is continuous $\iff$ $\forall\epsilon\exists\delta$ s.t. $|f(x)-f(y)|<\epsilon \implies |x-y|<\delta$, since $U_{B} = \{f(x) : |f(x)-f(y)|<\epsilon\}$ and $U_A = \{x : |x-y|<\delta\}$. My textbook (Nakahara) makes it very clear that the converse definition is not true; i.e., you can't just show that open sets in the domain map to open sets in the range, you must show that the inverse image of an open set in the range is an open set in the domain. I'm trying to prove that $\sin:\mathbb{R}\to\mathbb{R}$ is continuous. Now, I can easily do the following by choosing $\delta = \epsilon/2$: $$ |x-y|<\frac{\epsilon}{2} \\ 2\left|\frac{x-y}{2}\right| < \epsilon \\ 2\left|\sin\left(\frac{x-y}{2}\right)\right| < \epsilon \\ 2\left|\sin\left(\frac{x-y}{2}\right)\cos\left(\frac{x+y}{2}\right)\right| < \epsilon \\ \left|\sin x - \sin y\right| < \epsilon $$ But I'm operating under the impression that I must show the reverse, and it's not at all clear to me that the step: $$ 2\left|\sin\left(\frac{x-y}{2}\right)\right| < \epsilon \implies 2\left|\frac{x-y}{2}\right| < \epsilon $$ is true in this direction (although it naturally follows the other way around). So my questions are: * *Am I correct in thinking that I need to show the converse? *If so, how do I do it? *If not, what is the meaning of Nakahara's statement?
If you want to use the "inverse image of open sets are open" definition of continuity, you could begin by noting that since $\sin(\mathbb{R}) \subseteq [-1,1]$, it suffices to check that preimages of open sets of $[-1,1]$ (with its subspace topology) are open. There are three types of basic open intervals in $[-1,1]$: * *$(a,b)$ where $-1<a<b<1$ *$(a,1]$ *$[-1,b)$ Then point out that the inverse image under $\sin$ of each of these three types of basic open set is a countable union of open intervals in $\mathbb{R}$. Then let $U$ denote an open set in $[-1,1]$. For each $y\in U$ there is a basic open set $V$ of one of the three types above such that $y\in V\subseteq U$, and the inverse image of $V$ under $\sin$ is an open set containing all the preimages of $y$. Thus the inverse image of $U$ is open in $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Information Encoded by a Probability Density Function I want to calculate the information needed to encode a probability density function. For a discrete probability function such as a coin flip, the information would be calculated as follows: $$S=\sum_n -P_n\log_2P_n$$ So for a coin flip we would have $$S=-0.5\log_2 0.5 - 0.5\log_2 0.5=1$$ So it would take one bit to encode a coin flip (heads or tails, one or zero). If you want to try to calculate this for a continuous probability function, obviously you cannot use a discrete sum, you have to use an integral. But, when I do this $$S=\int_{-\infty}^\infty{-P(x)\log_2P(x)}\,dx$$ With the normal distribution $$P(x) = e^{-\frac{x^2}{r^2}}\frac{1}{r\sqrt{\pi}}$$ I get an equation somewhere along the lines of $$S=\log_2{r}+C$$ which seems right more or less at first, but this means that for some values of $r$ you need negative information to describe the function. I think the problem with what I am doing here is rooted in the fact that for a probability distribution, probabilities are only non-zero for ranges of $x$, such that say $P(3)$ would not really have any kind of significance. Any help would be very much appreciated.
The amount of information (in the Shannon sense, measured in bits) that a continuous source produces is infinite. You need an infinite amount of bits to encode a variable that -for example- is uniform on $[0,1]$. The quantity you computed is the differential entropy, and the differential entropy is not the same as the true entropy; it cannot be interpreted in that way. In particular it can be negative, as you observed: if you quantize the variable into bins of width $\Delta$, the (true) entropy of the quantized variable is approximately $h(X) - \log_2 \Delta$, which tends to $\infty$ as $\Delta \to 0$ regardless of the sign of the differential entropy $h(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of Idempotent Transformation Let $V$ be a $n$ dimensional linear space on the number field $K$. Let $A_1,A_2,\cdots,A_s$ be idempotent transformations(or matrices) on $V$ ($A_i^2=A_i$, $i=1,2,\cdots,s$). If $A=A_1+A_2+\cdots+A_s$ is also an idempotent transformation, Prove $A_iA_j=0$ and $A_jA_i=0$ for $1\leqslant i<j\leqslant s$. Advanced Algebra, Tsinghua University Press, Page 259 It's not hard to prove it if $s=2$. But I can't even prove the case when $s=3$. The book gives a pretty complicated method by setting $$G=\begin{pmatrix} A_1^2&A_1A_2&\cdots&A_1A_s\\ A_2A_1&A_2^2&\cdots&A_2A_s\\ \vdots&\vdots&&\vdots\\ A_sA_1&A_sA_2&\cdots&A_s^2 \end{pmatrix}$$ Any advice or other method appreciated.
Answer inspired by the one given at the link in the comment under the question. Note that a linear operator being idempotent means that it is diagonalisable with eigenvalues $0$ and $1$ only: the space $V$ is a direct sum of the eigenspace for $0$ (its kernel) and the eigenspace for $1$ (its image). Among other things this means that the trace of $A$ is equal to its rank (the dimension of the eigenspace for $1$). Our argument hinges on the additivity of the rank, and on the fact that we are considering the sum (rather than some other linear combination) of idempotents. Since the image of a linear combination of linear operators is always contained in the sum of their images, the rank of $A=A_1+\cdots+A_s$ is at most the sum of the ranks of the $A_i$; on the other hand the trace of $A$ equals the sum of the traces of the $A_i$. If both the individual $A_i$ and their sum $A$ are idempotent, this means that the image of $A$, having as its dimension the sum of the dimensions of the images of the $A_i$, is in fact their direct sum (a sum of finite dimensional subspaces is a direct sum if and only if its dimension is the sum of their dimensions). Thus each vector $v$ in the image of $A$ can be uniquely written as a sum of vectors in the respective images of the $A_i$, the components of $v$ in the summands of the direct sum. But one has $$ v=A(v)=A_1(v)+\cdots+A_s(v), $$ so these components are simply the vectors $A_i(v)$ for $i=1,\ldots,s$. For a vector in one summand of a direct sum, its components in the other summands are clearly equal to the zero vector; this means that $A_j(A_i(v))=0$ whenever $j\neq i$. This holds for all $v$ in the image of $A$; on the other hand for $w$ in the kernel of $A$ one has $A(w)=0=A_1(w)+\cdots+A_s(w)$ and by the directness of the sum this implies $A_i(w)=0$ for each $i$; so $A_j(A_i(w))=0$ trivially in this case. Since for any $i\neq j$ one has that $A_jA_i$ vanishes on both eigenspaces of $A$, and these eigenspaces span $V$, it follows that $A_jA_i=0$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2245976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Do I have to multiply it whole? Given that $$A=\begin{bmatrix}4 & 1\\ -9 & -2 \end{bmatrix}$$ and $$A^{100}=\begin{bmatrix}a & b\\ c & d \end{bmatrix}$$ What is $a$? I tried to multiply it again and again but it seems lengthy. Is there a shorter method?
From $$ A^{k+1} = A^k A $$ you have the recurrences $$ a_1 = 4 \\ b_1 = 1 \\ a_{k+1} = 4 a_k - 9 b_k \\ b_{k+1} = a_k - 2 b_k $$ for the upper elements of $A^{k+1}$. Then $$ 2 a_{k+1} = 8 a_k - 18 b_k = - a_k + 9 b_{k+1} \\ 9 b_{k+1} = 2 a_{k+1} + a_k $$ and $$ a_{k+2} = 4 a_{k+1} - 9 b_{k+1} = 2 a_{k+1} - a_k $$ which is an order $2$ homogeneous linear recurrence which we now can solve by the usual algorithm. The characteristic polynomial is $$ p(t) = t^2 - 2 t + 1 = (t - 1)^2 $$ with double root $r=1$ and solution $$ a_n = k_1 r^n + k_2 n r^n = k_1 + k_2 n $$ From the initial known sequence elements we have $$ a_1 = k_1 + k_2 = 4 \\ a_2 = k_1 + 2 k_2 = 7 $$ so $k_2 = 3$ and $k_1 = 1$ which gives $$ a_n = 1 + 3n $$ In particular $$ a_{100} = 1 + 3\cdot 100 = 301 $$
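Since the entries of $A^k$ grow only linearly here (because $(A-I)^2 = 0$), a direct check is cheap — a NumPy sketch:

```python
import numpy as np

A = np.array([[4, 1], [-9, -2]])
P = np.linalg.matrix_power(A, 100)
print(P)   # [[301, 100], [-900, -299]], so a = 301 as derived
```

Equivalently, $(A-I)^2=0$ gives $A^{100} = I + 100(A-I)$, so $a = 1 + 100\cdot 3 = 301$ with no iteration at all.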
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Help me understand division in modular arithmetic From Wikipedia: In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" upon reaching a certain value. So the point of modular arithmetic is to have our normal arithmetic operations wrap around after reaching a certain value. From what I read (https://www.doc.ic.ac.uk/~mrh/330tutor/ch03.html) about modular arithmetic operations, they are just normal operations, the same as in ordinary arithmetic. Consider modulo n: * *modular addition is defined as (a+b) mod n *modular subtraction is defined as (a-b) mod n *modular multiplication is defined as (a*b) mod n *modular division is defined as (a/b) mod n. After defining the above arithmetic operations, we just happen to have found out that this is true: (a+b) mod n = (a mod n + b mod n) mod n, and similarly for multiplication and subtraction. That doesn't mean that modular addition is (a mod n + b mod n), or does it? (Correct me if I am wrong.) Now consider modular division, defined as (a/b) mod n. Example: consider a=48, b=8, n=4 (here b is a multiple of n). From what I understand, (48/8) mod 4 = 6 mod 4 = 2 is perfectly fine, right? But as stated above, modular division (a/b) mod n is not valid when b is a multiple of n. Can't we just say (a/b) mod n != (a mod n)/(b mod n) and move on? So everything boils down to the following questions. 1. While doing modular arithmetic, does every number p that is ever going to be used in an arithmetic operation have to be in [0,n), so that modular arithmetic is ((a mod n + b mod n) mod n)? 2. Or does it not matter what numbers you use, as long as the final value V satisfies 0 <= V < n, so that modular arithmetic is (some long cumbersome arithmetic expression) mod n? Explain where I am getting this wrong; this is bothering me a lot.
I'll try my best to answer concisely: * *It is not a condition for $a,b$ to be in $[0,n)$, but since every integer is equivalent to one in that range, it makes sense to use numbers in this range. *When I studied this, I liked to think that rather than division of $a$ by $b$, we multiply by the inverse of $b$. If $\gcd(b,n) > 1$ (in particular, if $b$ is a multiple of $n$) there is no inverse (it is 'like' dividing by zero in a naive sense). To find the inverse of $b \bmod n$ there is the extended Euclidean algorithm.
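A small sketch of that algorithm (the function name and the error behaviour are my own choices for illustration):

```python
def modinv(b, n):
    """Inverse of b modulo n via the extended Euclidean algorithm.
    Raises ValueError when gcd(b, n) != 1, i.e. when no inverse exists."""
    r0, r1 = n, b % n
    t0, t1 = 0, 1
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if r0 != 1:
        raise ValueError(f"{b} has no inverse mod {n}")
    return t0 % n

print(modinv(8, 5))     # 2, since 8*2 = 16 = 1 (mod 5)
# modinv(8, 4) raises ValueError: gcd(8, 4) = 4 > 1, matching the example above
```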
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What is the average? When I was first introduced to the concept of average (mean), I was confused. What does average mean? How does one number, $\frac{1}{n}\sum_{i=1}^{n} a_{i}$, represent the "central tendency" of a set of data points $a_i$? Then I found a way to deal with this concept. I thought that the average (mean) is "the closest to all the data points at the same time". Now, I want to prove this. Concisely: Let $f:\Bbb{R}\to\Bbb{R}$ be defined by: $$f(x) = \sum_{i=1}^{n}|x-a_i|$$ To prove: $f(x)$ hits a minimum at $x=\bar a$, where $\bar a$ is the mean of the discrete data points $a_i$. This would mean that the sum of the distances of the mean from the various data points is minimal compared to any other number. My attempt: Clearly $f(x)$ is continuous, since it is a sum of continuous functions, and piecewise differentiable, since it is a sum of such functions. So, I find $f'(x)$. Before that, let's assume $a_1<a_2<\ldots<a_n$ [clearly, no loss of generality here]: $$ f'(x) = \begin{cases} -n, & \text {$x<a_1$} \\ -n+2, & \text{$a_1<x<a_2$} \\ -n+4, & \text{$a_2<x<a_3$} \\ . & . \\ . & . \\ . & . \\ -n+2n = n, & \text{$x>a_n$} \\ \end{cases} $$ Now, the problem arises: $f'(x)=0$ has no solutions when $n$ is odd. When $n$ is even, its solution set is an entire interval: $$x \in (a_{n/2},a_{n/2+1})$$ This means I failed; my intuition was wrong from the very beginning. It can be proven [I think] that for even $n$, $\bar a$ lies in the above interval, but still: it means that there are more real numbers that are as much the "mean" of the data points as the mean itself [if my "definition" was right]. So, two questions: * *Why was my intuition wrong? *Which intuition is right for averages (mean)?
Your attempt is excellent, but will not lead you to success. Actually, the minimum of $$f_1(x) := \sum_{i=1}^{n}|x-a_i|$$ is known to be achieved by the median, another estimator of the central trend. (The median is defined as the value of rank $(n+1)/2$ for odd $n$, or the average of the two values of ranks $n/2$ and $n/2+1$ for even $n$; for even $n$, all values between the two middle ones, including the median, achieve the minimum.) Instead, the average achieves the minimum of $$f_2(x) := \sum_{i=1}^{n}(x-a_i)^2,$$ as one easily shows by cancelling the derivative: $$\frac12\dfrac{d f_2(x)}{dx} = \sum_{i=1}^{n}(x-a_i)=nx-\sum_{i=1}^{n}a_i=0.$$ So the average is the solution of the equations $x=a_i$ in the least-squares sense. It is more sensitive than the median to far-away values, because of the squared weighting. For the sake of comparison, the original answer included a plot of $f_1(x)/4$ and $\sqrt{f_2(x)/4}$ for the points $1, 3, 4, 7$.
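One can see the two minimizers concretely — a small sketch scanning each objective on a grid for the same four points:

```python
import numpy as np

a = np.array([1, 3, 4, 7])
xs = np.linspace(0, 8, 8001)

f1 = np.abs(xs[:, None] - a).sum(axis=1)        # sum of absolute deviations
f2 = ((xs[:, None] - a) ** 2).sum(axis=1)       # sum of squared deviations

print(xs[f1.argmin()])   # 3.0: f1 is flat on the whole median interval [3, 4]
print(xs[f2.argmin()])   # 3.75: the mean (1+3+4+7)/4
```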
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
For what value of $w$ is $(1-w)\bar X_1 + w\bar X_2$ the minimum variance unbiased estimator of $\mu$ Let $\bar X_1$ and $\bar X_2$ be the means of two independent samples of sizes $n$ and $2n$ from an infinite population that has mean $\mu$ and variance $\sigma^2 \gt 0$. For what value of $w$ is $(1-w)\bar X_1 + w\bar X_2$ the minimum variance unbiased estimator of $\mu$ ? I first wrote that I need to show $$E[(1-w)\bar X_1 + w\bar X_2] = \mu \qquad (1)$$ and that $$Var[(1-w)\bar X_1 + w\bar X_2] = \frac{1}{E\left[\left(\frac{d\ell(\theta)}{d\theta}\right)^2\right]} \qquad (2) $$ And then I will have shown that my parameter is a minimum variance unbiased estimator. First problem I see is that I don't know the probability distribution of this sample, so does that mean I can't find the log-likelihood function and by extension, can't find the R.H.S of (2)? Also, the question states that the population size is inifite. Does this mean that $E[X_i] = \mu$ ? Where $X_i$ is the sample mean from any population. Even if this is true, does it help me? I tried expanding (1) with the premise that $E[X_i] = \mu$ but i only got this: $$ E[(1-w)\bar X_1 + w\bar X_2] = \mu $$ $$ (1-w)E[\bar X_1] + wE[\bar X_2] = \mu $$ $$ (1-w)\mu + w\mu = \mu $$ $$ 1-w+w = 1 $$ $$ \text{trivial...}$$ Is my thought process wrong here? Is there another way to determine whether something is a minimum variance unbiased estimator?
For any distribution, $E(X_i) = \mu$. If the population mean is $\mu$, any single observation $X_i$ has mean $\mu$. In addition, if $\{X_i\}$ is an iid sample, then the expectation of the sample mean is $\mu$ (i.e., the sample mean is an unbiased estimator of $\mu$). Because of that, for any $w$ the estimator $(1-w) \bar{X}_1 + w \bar{X}_2$ is unbiased. So the only issue is to make the variance as small as possible. The variance of $(1-w) \bar{X}_1 + w \bar{X}_2$ is $\frac{(1-w)^2\sigma^2}{n} + \frac{w^2\sigma^2}{2n}$. Use Calculus to minimize this with respect to $w$: setting the derivative $-\frac{2(1-w)\sigma^2}{n} + \frac{w\sigma^2}{n}$ to zero gives $w = 2/3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can Fermat's Two Squares Theorem be phrased in terms of Schemes? Fermat's two squares theorem says that an odd prime $p$ can be written as a sum of two squares, $p = a^2 + b^2$, if and only if $p = 4k+1$. How might I phrase this in terms of Schemes? I know that $\mathrm{Spec}(\mathbb{Z}) = \{ \text{primes} \}$. And maybe we are saying there is a map or fibration from $X = \{ a^2 + b^2\}$ to $\mathrm{Spec}(\mathbb{Z})$? Schemes may seem like overkill, but it is one of the very first exercises in, say, David Eisenbud and Joe Harris's The Geometry of Schemes to phrase modular arithmetic in terms of affine scheme theory: It therefore makes sense to go beyond the first page of the number theory textbook and ask if Fermat's theorem can also be discussed in terms of Scheme theory (maybe unifying the considerations over $\mathbb{C}$ and $\mathbb{F}_p$).
We have that $$p=x^2+y^2$$defines a conic over $\mathbb{Q}$ and its rational points are solutions to this equation in $\mathbb{Q}$. Here we always add the points at infinity to make it a projective curve. But the same equation defines a conic bundle scheme over $\mathbb{Z}$ whose generic fiber is that conic over $\mathbb{Q}$ and that is proper over $\mathbb{Z}$ when a line at infinity is added—a $2$-dimensional scheme. The condition that it has an integral solution when $$p=4k+1$$is equivalent to saying that the conic over $\mathbb{Q}$ has a rational point whose closure in the scheme does not intersect the line at infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Solve $(y+\sqrt{x^2+y^2})dx-xdy=0$ Solve $(y+\sqrt{x^2+y^2})dx-xdy=0$ I suspect this is a homogeneous equation after we divide both sides by $y$. But I don't know how to continue.
Note that we can write $$1+\sqrt{\left(\frac{x}{y}\right)^2+1}-\frac{x}{y}\frac{dy}{dx}=0$$ for $y>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do we conclude that following two groups are isomorphic? Let $H= \Bbb Z \times \Bbb Z \times \{0\}$ and $T = \{0\} \times \Bbb Z \times \{0\}$ Then $H/T $ is isomorphic to $\Bbb Z$ $\Bbb Z$ denotes integers, and $\times$ Cartesian product. How did we make that conclusion? I come across such examples very often, could you give me some tips in order to see such I believe "triviality"?
Define a map $\phi:H \to \mathbb{Z}$ such that $(m,n,0) \mapsto m$. It is easily verified that this is a group homomorphism, and that the image of $\phi$ is all of $\mathbb{Z}$. Moreover note that $\ker{\phi} = T$. Therefore, by the First Isomorphism theorem we have that: $$ H/T \cong \text{Im}{\phi} = \mathbb{Z} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find area of trapezoid contained in circle The picture below represents a circumference of radius 5 and center at the origin of the referencial 0xy. In the picture is represented too a trapezoid [ABCD], with [AD] and [BC] parallel. A is the point of the circumference that belongs to the x axis. Points B and C belong to the circumference. D belongs to the 0x axis and has x-coordinate $-\frac{5}{2}$. Consider $AÔB = \alpha, \alpha \in ]0,\frac{\pi}{2}[$. Show that the area of the trapezoid in function of $\alpha$ is given by: $$A(\alpha) = 25\sin(\alpha)\cos(\alpha)+\frac{75}{4}\sin(\alpha)$$ First I tried to calculate the area of the whole trapezoid at once and so I got: $$\frac{2 \cdot 5\cos(\alpha)+(5+\frac{5}{2})}{2}\cdot 5\sin(\alpha)$$ But when I put this one and the given one in the calculator they gave different values, so I tried dividing the main trapezoid in two smaller ones at the y axis and I got: \begin{align} 25\sin(\alpha) \cdot \frac{\cos(\alpha)+1}{2}+25\sin(\alpha) \cdot \frac{2\cos(\alpha)+1}{4 }= \\ 25\sin(\alpha)\Bigg(\frac{\cos(\alpha)+1}{2}+\cdot \frac{2\cos(\alpha)+1}{4}\Bigg) = \\ 25\sin(\alpha)\Bigg(\frac{4(\cos(\alpha)+1)+2(2\cos(\alpha)+1)}{8}\Bigg) = \\ 25\sin(\alpha)\Bigg(\frac{\cos(\alpha)+1+4\cos(\alpha)+2}{4}\Bigg) = \\ 25\sin(\alpha)\Bigg(\frac{5\cos(\alpha)+3}{4}\Bigg) = ??? \end{align} I have two questions: * *Why didn't my first attempt work? *How do I solve this, continuing with my second one?
You were correct, but you did not continue with the first formula you derived: $$ \frac{2\cdot 5\cos\alpha+(5+5/2)}{2}\cdot 5\sin\alpha = 5\cdot\frac{10}{2}\cos\alpha \sin\alpha + 5\cdot\frac{7.5}{2}\sin\alpha \Rightarrow \\ A(\alpha) = 25\sin\alpha \cos\alpha +\frac{75}{4}\sin\alpha$$ Here's an alternative way. We have to notice that as the angle $\alpha$ changes, the point B changes, and in order for the trapezoid to remain a trapezoid, the point C must change so that BC stays parallel with AD. So: $$ A\hat OB = C\hat OD $$ Because BC and AD are parallel, that means: $$ O\hat BC = O\hat CB $$ If we calculate the area of triangle OAB in terms of the angle $\alpha$ we get: $$ OA = 5\\ \sin\alpha = h/5 \Rightarrow h = 5\sin\alpha \\ \text{Area of } OAB = \frac{25}{2}\sin\alpha$$ If we calculate the area of triangle OCD in terms of the angle $\alpha$ we get: $$ OD = 2.5\\ \sin\alpha = h/5 \Rightarrow h = 5\sin\alpha \\ \text{Area of } OCD = \frac{12.5}{2}\sin\alpha$$ Now let's look at the triangle OCB: $$ \cos\alpha = \frac{BC/2}{5}=\frac{BC}{10} \Rightarrow BC = 10\cos\alpha \\ \sin\alpha=h/5 \Rightarrow h=5\sin\alpha\\ \text{Area of } OCB = 25\sin\alpha \cos\alpha$$ Now if we add them up we get: $$ A(\alpha) = 25\sin\alpha \cos\alpha +\frac{75}{4}\sin\alpha$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find three subspaces $W_1$, $W_2$ and $W_3$ of $k [X]$ such that $k[X]\cong W_1\oplus W_2\oplus W_3$ I am trying to find three non-trivial subspaces of the space of all polynomials; there are infinitely many subspaces of this space, aren't there? I could fix a natural number $n$ and say that the polynomials of degree less than $n$ are a subspace, those of degree equal to $n$ another, and those of degree greater than $n$ another — and in that case, could I not build the direct sum with those three subspaces?
A basis of $k[X]$ is $1, X, X^2, X^3, \dots$. The only thing you have to do is divide the basis elements in three groups, so for instance $$\begin{align*} W_1 &= \text{span}(1) = k \\ W_2 & = \text{span}(X) = k \cdot X \\ W_3 & = \text{span}(X^2, X^3, \dots) = X^2 \cdot k[X]\\ \end{align*} $$ would work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2246977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can any graph be represented by a formula? I am only in high school but interested in maths. Can any graph be represented by a formula or does the graph have to have certain characteristics like a pattern etc?
If you're asking whether a function has to be given by some sort of algorithm or logic, then the answer is no: A function $f:X \to Y$ is simply a subset $S_f \subset X\times Y$ such that for each $x\in X$, there exists exactly one point $(x, y)\in S_f$; the point $y$ is denoted by $f(x)$. Note that this assignment is deterministic (though not necessarily computable by a human, algorithm, etc.). We can talk about the kinds of functions you describe, though; they're just a smaller subset of the space of all functions. For that, "computable function" is probably the concept you want. If you want to relax the deterministic requirement, then "random variable" is probably what you're looking for, but note that there are quite a few technical details required to define them properly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve this logarithmic equation through algebraic manipulation? I was working through an Art of Problem Solving workbook when I encountered a very frustrating problem. Solve the equation $\log_{2x}216=x$, where $x\in \mathbb{R}$. I understand how to find the answer by inspection (all you have to do is stare at it for a few seconds), but I'm trying to figure out how to solve it algebraically. Every time I try to manipulate the problem, I either find myself running in circles or I create some convoluted expression that's even harder to deal with than the original problem. Forgive me if this is a very easy question - I'm just completely lost here. Thanks!
In terms of using inspection, I think this makes the inspection easier. $$\log_{2x}216=x \Rightarrow \ln 6^3 = x \ln 2x \Rightarrow 3 \ln 6 = x \ln 2x $$ If you guess that the linear factors are equal ($x=3$) you immediately see the logarithmic factors are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
The number of solutions of the equation $\tan x +\sec x =2\cos x$ lying in the interval $[0, 2\pi]$ is: The number of solutions of the equation $\tan x +\sec x =2\cos x$ lying in the interval $[0, 2\pi]$ is: $a$. $0$ $b$. $1$ $c$. $2$ $d$. $3$ My Attempt: $$\tan x +\sec x=2\cos x$$ $$\dfrac {\sin x}{\cos x}+\dfrac {1}{\cos x}=2\cos x$$ $$\sin x + 1=2\cos^2 x$$ $$\sin x +1=2-2\sin^2 x$$ $$2\sin^2 x +\sin x - 1=0$$ $$2\sin^2 x +2\sin x -\sin x - 1=0$$ $$2\sin x (\sin x +1) -1(\sin x+1)=0$$ $$(\sin x +1)(2\sin x-1)=0$$ So, what comes next?
The answer must be 2. Since $\tan x$ and $\sec x$ are undefined at $x = (2n+1)\pi/2$, such values in the interval $[0, 2\pi]$ must be rejected from the solution set. In the last step, $\sin x = -1$ gives the solution $x = 3\pi/2$, which is rejected by this domain condition. The other part, $\sin x = 1/2$, has the solutions $x = \pi/6$ and $x = 5\pi/6$ in the interval $[0, 2\pi]$. Thus the number of solutions is 2.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why are numerator and denominator called so? There are terminologies for natural numbers, whole numbers and so on. (If the meaning of the terms can be found, it becomes easier to understand. For natural numbers, the term "natural" refers to the naturally occurring set of numbers in nature like $2,3,4$ and not $-2$, $-3$, and $-4$). But I didn't find any information about why the numerator is called "numerator" and denominator is called "denominator". Is it just a simple terminology given by mathematicians (like "addition" in addition) or is there any special "meaning" behind these terms (like "natural" in natural numbers)? Thanks for your time. If any doubt please comment.
In a fraction, such as two-fifths, "two" is the numerator, and "fifths" the denominator. Numerator tells us "how many". The word is derived from the Latin "numerus" (number). Denominator names the "things" we are counting. The word is derived from the Latin "denomino" (to name).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Convergence Speed for Optimization Methods on non Lipschitz and strictly monotone functions I am studying convergence analysis for some optimization techniques, so this could be a naive question. In the derivation of convergence speed for gradient descent (and Newton's method), they usually assume a Lipschitz condition on the first derivative (and second derivative). My question is as follows: consider a strictly decreasing function that does not satisfy the Lipschitz conditions; is there a convergence analysis for gradient descent (or other techniques) for such functions? The simple example in mind is the following function with $N$ variables: $$ \sum_i^N \frac{1}{x_i} , ~~~~ x_i \in (0,1]$$ Now we can simply minimize the function by letting all $x_i = 1$. A similar function is $$ \sum_i^N e^{x_i}, ~~~~ ~~~ x_i \in [0 , \infty ) $$ Any clarification is appreciated.
Perhaps what you're looking for is the subgradient descent algorithm, which works without assuming that the function $f$ being optimized is even differentiable; one only needs continuity, and a Lipschitz condition on $f$ itself, i.e. $$ |f(x)-f(y)|\leq L\|x-y\|.$$ Of course, we also need access to the subgradient information, so we assume we have a subgradient oracle for $f$, meaning that for each point $x$ we can get a vector $g(x)$ such that $$f(y)\geq f(x) + \langle g(x),y-x\rangle$$ for all $y$. This is sufficient to show a rate of convergence for gradient descent where the function error of the average iterate is $O(1/\sqrt{t})$ after $t$ iterations. You might want to take a look at the analysis in section 1.5 here.
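To make the setting concrete, here is a minimal sketch of the subgradient method on a nondifferentiable Lipschitz function; the test function and the classic $1/\sqrt{t}$ step-size schedule are illustrative choices, not part of the question:

```python
import numpy as np

# Projected-free subgradient descent on f(x) = ||x - c||_1,
# which is convex and Lipschitz but not differentiable.
c = np.array([0.3, -1.2, 2.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)        # a valid subgradient of f at x

x = np.zeros(3)
best = f(x)
for t in range(1, 10001):
    x = x - subgrad(x) / np.sqrt(t)       # diminishing 1/sqrt(t) steps
    best = min(best, f(x))
print(best)                                # approaches the optimum f(c) = 0
```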
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Suppose equation $x^{12} = 1$ has $14$ solutions in some group. Show that this group is not cyclic. Suppose equation $x^{12} = 1$ has $14$ solutions in some group. Show that this group is not cyclic. Any help would be appreciated. I was trying to show this by contradiction, but I didn't get very far. Attempt: Suppose that the equation has $14$ solutions in some cyclic group $C_n$. Then if $a^k \in C_n$ is one of the solutions we have that $(a^k)^{12} = 1 \implies a^k$ is a generator of $C_n$. From the condition of the problem there are $14$ different $k$'s such that $(a^k)^{12} = 1 \implies$ there are $14$ generators $\implies \varphi (n) = 14$. At first glance, it seems to me that I have to find the smallest $n$ such that $\varphi (n) = 14$, but in class our teacher hasn't mentioned anything about the inverse of the totient, so maybe I'm wrong with what I've done.
Here is another take. There is only one finite cyclic group of each order, up to isomorphism. $\mathbb C^\times$ contains a copy of the cyclic group of order $n$: it is the subgroup of $n$-th roots of unity. The equation $x^{12} = 1$ has at most $12$ solutions in $\mathbb C$ and so cannot have $14$ solutions in any cyclic group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Knowing that for any set of real numbers $x,y,z$, such that $x+y+z = 1$ the inequality $x^2+y^2+z^2 \ge \frac{1}{3}$ holds. Knowing that for any set of real numbers $x,y,z$, such that $x+y+z = 1$ the inequality $x^2+y^2+z^2 \ge \frac{1}{3}$ holds. I spent a lot of time trying to solve this and, having consulted some books, I came to this: $$2x^2+2y^2+2z^2 \ge 2xy + 2xz + 2yz$$ $$2xy+2yz+2xz = 1-(x^2+y^2+z^2) $$ $$2x^2+2y^2+2z^2 \ge 1 - x^2 -y^2 - z^2 $$ $$x^2+y^2+z^2 \ge \frac{1}{3}$$ But this method is very unintuitive to me and I don't think this is the best way to solve this. Any remarks and hints will be most appreciated.
Here's another way to approach this. It's easy to see that the value of $\frac13$ is obtained when each of $x, y, z$ is $\frac13$. We want to show that as the variables deviate from this point (with their sum still being 1) the value cannot decrease. So we look at the deviations from $\frac13$: $x=\frac13+\epsilon_1$, $y=\frac13+\epsilon_2$, $z=\frac13+\epsilon_3$ with $\epsilon_1+\epsilon_2+\epsilon_3=0$. you have $x^2+y^2+z^2=\\ (\frac13+\epsilon_1)^2+(\frac13+\epsilon_2)^2+(\frac13+\epsilon_3)^2=\\\left(\frac19+\frac23\epsilon_1+\epsilon_1^2\right)+\left(\frac19+\frac23\epsilon_2+\epsilon_2^2\right)+\left(\frac19+\frac23\epsilon_3+\epsilon_3^2\right)=\\ \left(\frac19+\frac19+\frac19\right)+\frac23(\epsilon_1+\epsilon_2+\epsilon_3)+(\epsilon_1^2+\epsilon_2^2+\epsilon_3^2)=\\ \frac13+(\epsilon_1^2+\epsilon_2^2+\epsilon_3^2) \ge \frac13$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2247973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 15, "answer_id": 10 }
Why is $\lim_{x \rightarrow e}\frac{\ln{(x)}-1}{x-e}$ equal to $\frac{1}{e}$ As it says. Why? The best I could achieve is: $$\lim_{x \rightarrow e}\frac{\ln(x)-\ln (e)}{x-e}$$ And the answer says it is equal to $$\frac{\text{d}}{\text{d}x}(\ln(x))$$ evaluated at $x = e$, so it should be $$\frac 1 x$$ and when $x$ is replaced by $e$ it is $$\frac 1 e$$ Why is that? What converted the limit into a derivative? Also, no l'Hôpital.
The derivative of a real valued function $f$ at a point $y$ is given by $\lim_{x \to y} \frac{f(x) - f(y)}{x-y} = f'(y)$. In your case, if we have $f(x) = \ln x$ then we know $f'(x) = \frac{1}{x}$ and $f'(e) = \frac{1}{e}$. But $\displaystyle f'(e) = \lim_{x \to e} \frac{\ln x - \ln e}{x-e} = \lim_{x\to e} \frac{\ln x -1 }{x-e}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Difficult limit $\lim_{x\to\infty} e^{-x}\int_0^x \int_0^x \frac{e^u-e^v}{u-v} \,\mathrm{d}u\mathrm{d}v$ I need to calculate this limit: $$\lim_{x\to\infty} e^{-x}\int_0^x \int_0^x \frac{e^u-e^v}{u-v} \,\mathrm{d}u\mathrm{d}v.$$ How do I do it? There's a hint that I should use de l'Hospital's rule.
Expand the exponentials in their standard Taylor series to see the integrand equals $$\sum_{n=0}^{\infty}\frac{1}{n!}\frac{u^n-v^n}{u-v}= \sum_{n=1}^{\infty}\frac{1}{n!}(u^{n-1} + u^{n-2}v + \cdots + uv^{n-2} + v^{n-1}).$$ For each $n$ we have $$\int_0^x\int_0^x (u^{n-1} + u^{n-2}v + \cdots + uv^{n-2} + v^{n-1})\,dv\,du = x^{n+1}\left ( \frac{1}{n\cdot 1} + \frac{1}{(n-1)\cdot 2} + \frac{1}{(n-2)\cdot 3} +\cdots + \frac{1}{2\cdot (n-1)} + \frac{1}{1\cdot n}\right) \ge x^{n+1}\frac{1}{n}\left ( \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right ) \ge x^{n+1}\frac{\ln n}{n}.$$ Thus the full expression of interest is at least $$e^{-x}\sum_{n=1}^{\infty} \frac{\ln n}{n\cdot n!}x^{n+1} \ge e^{-x}\sum_{n=1}^{\infty} \ln n\frac{x^{n+1}}{(n+1)!}.$$ If we had just $e^{-x}\sum_{n=1}^{\infty} \frac{x^{n+1}}{(n+1)!}$ on the right, the limit on the right would be $1.$ But we have the extra $\ln n$ in front of the standard coefficients, hence the limit on the right is $\infty.$ Therefore the limit on the left is $\infty$ and we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Rewriting a set as a polyhedron I have the following set: \begin{align} M = \{ x \in \mathbb{R}^n: x \geq 0, x^{T}y \leq 1, \forall y \text{ with } \lVert y \rVert \leq 1 \} \end{align} I would like to rewrite this set so as to find out whether it is a polyhedron, defined as the intersection of finitely many halfspaces of the form: \begin{align} P=\{x \in \mathbb{R}^n:Ax \leq b\} \end{align} How can I do this so as to obtain the polyhedron, or how can I argue that it is impossible, and thus that the set is not a polyhedron? What I have: if the condition on $y$ in the set $M$ were $\lVert y \rVert = 1$ instead of the inequality, then the set would represent the intersection of the unit ball $\{ x: \lVert x \rVert \leq 1 \}$ (using the Cauchy inequality to rewrite the $x^T y \leq 1$ condition) and the non-negative orthant $\mathbb{R}^n_{+}$. To my understanding, this would not be a polyhedron according to the above definition. How does the $\lVert y \rVert \leq 1$ condition, instead of $\lVert y \rVert = 1$, change things?
If the norm is Euclidean then the answer is straightforward. Note that $\langle x , y \rangle \le 1$ for all $y$ of unit norm or less iff $\max_{\|y\| \le 1} \langle x , y \rangle \le 1$ iff $\|x\| \le 1$. Hence the set in question can be written as $\{ x | x \ge 0, \|x \| \le 1 \}$. This is not a polyhedral set. Note: This answer is a bit flip in that I have not demonstrated that $M$ is not polyhedral. To take a quote of Rockafellar's slightly out of context, "This classical result is an outstanding example of a fact which is completely obvious to geometric intuition, ... and is not trivial to prove" ("Convex Analysis", p. 171). In particular, a bounded polyhedral set (a polytope in Rockafellar's nomenclature) has a finite number of extreme points (Corollary 19.1.1 in the above), but it is easy to see that $M$ has an infinite number of extreme points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing $\sum_{n=1}^{\infty} \frac{1}{n^4}$ I came across this problem in Fourier analysis, but I'm not sure my confusion is with Fourier analysis per se, it might be that I'm lacking some understanding of sums/series. I am trying to determine the sum $$\sum_{n=1}^{\infty} \frac{1}{n^4}.$$ I know this problem has been posted before, but I want to solve it in a particular way. I already have that $$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^4} = \frac{\pi^4}{96}.$$ I found a suggested solution online that said to solve $x = \frac{\pi^4}{96} + \frac{x}{16}$ for $x$ to find the right answer. (So $x$ is the series I am trying to determine.) But I don't understand why I should divide $x$ by $16$. I understand that I have to add something to $\frac{\pi^4}{96},$ since that is the sum $\sum_{n=1}^{\infty} \frac{1}{n^4}$ over all odd indices, but not over the even ones. And I get why I must add $x$ divided by something, but why $16$?
Although the OP is seeking a way forward through Fourier series analysis, I thought it might be instructive and useful for some users to present an approach that relies on contour integration. To that end, we proceed. We begin by noting that the function $f(z)=\frac{\cot(\pi z)}{z^4}$ has simple poles at $z=n$, $n\ne 0$ and a fifth-order pole at $z=0$. The residue at $z=n$, $n\ne 0$ is given by $\frac1{\pi n^4}$. The residue at $z=0$ is given by $-\frac{\pi^3}{45}$. Let $C_N$ be the contour $|z|=N+1/2$. Then, we have $$\begin{align} \lim_{N\to \infty}\oint_{C_N} \frac{\cot(\pi z)}{z^4}\,dz&=2\pi i \left(2\sum_{n=1}^\infty \frac{1}{\pi n^4} +\text{Res}\left(\frac{\cot(\pi z)}{z^4},z=0\right)\right)\\\\ &=2\pi i \left(\frac{2}{\pi}\sum_{n=1}^\infty \frac{1}{n^4}-\frac{\pi^3}{45}\right)\\\\ &=0 \end{align}$$ Hence, we see that $$\bbox[5px,border:2px solid #C0A000]{\sum_{n=1}^\infty \frac{1}{n^4}=\frac{\pi^4}{90}}$$ And we are done!
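A quick numerical sanity check of the result (the tail of the series beyond $N$ is about $1/(3N^3)$, so a partial sum already agrees to machine precision):

```python
from math import pi

partial = sum(1 / n**4 for n in range(1, 200001))
print(partial)       # 1.0823232337111382...
print(pi**4 / 90)    # same value to double precision
```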
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
How many free variables can there be for an $n\times n$ matrix when solving for an eigenspace? Let's say you have an $n\times n$ matrix; call it $A$. If $A =$ $$ \begin{matrix} a^1_1 & a^1_2 & \cdots & \cdots & \cdots & a^1_n \\ a^2_1 & a^2_2 & \cdots & \cdots & \cdots & a^2_n \\ \vdots & \vdots & \ddots & & & \vdots \\ \vdots & \vdots & & \ddots & & \vdots \\ \vdots & \vdots & & & \ddots & \vdots \\ a^n_1 & a^n_2 & \cdots & \cdots & \cdots & a^n_n \end{matrix} $$ then what is the dimension of the eigenspace? I know how to find the eigenvalues for any matrix, no matter whether those eigenvalues are real or complex, but, when you're solving for the eigenspace, you get at least one free variable. My question is: for an $n\times n$ matrix, how many free variables are there allowed to be when solving the equation $(A-\lambda I)v = 0$, where $\lambda$ is an eigenvalue? Is the number of free variables $n$? $n-1$? Thanks!
Expanding on my comment above: I think the answer you're looking for is the following: The number of free variables in the system $(A - \lambda I){\bf v} = {\bf 0}$ is equal to the dimension of the null space of $A - \lambda I$. Here are some examples to play with: $$ A_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$ $$ A_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} $$ $$ A_1 = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} $$ Setting $\lambda = 1$, you will find that the dimension of the null space of $A_3 - \lambda I$ is $3$, the dimension of the null space of $A_2 - \lambda I$ is $2$, and the dimension of the null space of $A_1 - \lambda I$ is $1$. Edit: For any $m \times n$ matrix $B$, interpreted as a linear function (via right multiplication of a column vector), you have $B : \mathbb{R}^n \to \mathbb{R}^m$. The null space of $B$ is a subspace of the domain, and hence a subspace of $\mathbb{R}^n$, and therefore has dimension bounded above by $n$ and bounded below by $0$ (where the $0$ case means that the only vector ${\bf v}$ in $\mathbb{R}^n$ such that $B{\bf v} = {\bf 0}_m$ is ${\bf v} = {\bf 0}_n$).
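The three examples can be verified numerically via rank–nullity — a small NumPy sketch:

```python
import numpy as np

for A in [np.eye(3),
          np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1.0]]),
          np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1.0]])]:
    B = A - 1.0 * np.eye(3)                  # lambda = 1
    dim_null = 3 - np.linalg.matrix_rank(B)  # rank-nullity theorem
    print(dim_null)                          # prints 3, then 2, then 1
```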
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that $a_n \to 1$ implies $a_n^2 \to 1$ using the definition of convergence Suppose that $(a_{n})$$_{n \in \mathbb{N}}$ is convergent, with limit 1. Show, directly from the definition, that $(a^2_{n})$$_{n \in \mathbb{N}}$ is convergent, with limit 1. My Attempt: Let $\epsilon > 0$ be given. We want to find $N \in \mathbb{N}$ such that $\forall n>N, |a^2_{n} - 1| < \epsilon$. $|a^2_{n} - 1| = |a_{n} - 1||a_{n} + 1|$. As $(a_{n})$$_{n \in \mathbb{N}}$ is convergent with limit 1, $\exists M \in \mathbb{N}$ such that $\forall m>M, |a_{n} - 1| < \frac{\epsilon}{2}$. Then for $N>M, |a_{n} - 1||a_{n} + 1| < \frac{\epsilon}{2} (\frac{\epsilon}{2} + 2) = \epsilon + \frac{\epsilon^2}{4} $ A bit stuck from here, would appreciate some help.
We know $a_n$ converges to $1$. We know that for each $\delta>0$, there's an $N$ such that for all $n>N$, we must have $$|a_n-1|<\delta$$ Let $\epsilon>0$. Take $\delta=-1+\sqrt{1+\epsilon}$ (and see $\delta>0$). Now note that $$|a_n+1|\leq |a_n|+|1|=|a_n|-|1|+2\leq |a_n-1|+2<2+\delta$$ so that $$|a_n^2-1|=|a_n-1||a_n+1|< \delta(2+\delta)=\epsilon$$ which proves the required statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can a function $f$ have $L^p$ norms $\Vert f\Vert_p =p$ for all $1\leq p<\infty$? I have tried to show that such a function must be in $L^{\infty}$ and thus it is impossible for such a function to exist since, in that case, $$\infty =\lim_{p\rightarrow \infty} \Vert f\Vert_p=\Vert f\Vert_{\infty}.$$ Basic estimates seem to fail and I can't seem to construct a counterexample. What if we replace $p$ with something that grows at different rates, like $e^p$ or $\log p$?
The magic words are: Riesz-Thorin interpolation Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2248989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What is this *redundant cycle* "thing"? Why is it a matroid? I found the following definition in an introductory exposition about matroids. Let $G=(V,E)$ be a given (finite) graph. Let $\mathcal{C, D}$ be arbitrary collections of cycles in $G$. $\mathcal{C}$ is redundant, if every edge in $G$ appears in an even number of cycles of $\mathcal{C}$. $\mathcal{D}$ is independent, if there is no subset of $\mathcal{D}$ which is redundant. I am supposed to be showing that the set of independent sets of cycles form a matroid. I do not know how to proceed here, or what this notion of redundancy is actually referring to. It took me a while to come up with a graph and a redundant cycle set, but I didn't get any intuition of what this is about. What is the intuition behind these objects? How do I proceed to show that these indeed form a matroid?
Suppose you want to count the cycles in a ladder graph. Clearly it looks as though a ladder graph with $n$ rungs is made of $n-1$ cycles glued together. However, if you count all possible cycles, you find $\binom n2$ of them (choose any two rungs and take the rectangle between them). The notion of independent cycles formalizes the former intuition; all maximal independent sets of cycles have $n-1$ elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why $2(-1 + 2^{1 + n})$ is the answer to the recurrence relation $a_{n}=2a_{n-1}+2$? $a_{0}=2$ $a_{1}=2(2)+2$ $a_{2}=2(2(2)+2)+2$ $a_{3}=2(2(2(2)+2)+2)+2$ $a_{4}=2(2(2(2(2)+2)+2)+2)+2$ $a_{5}=2(2(2(2(2(2)+2)+2)+2)+2)+2$ To simplify, $a_{6}=2^{7}+2^{6}+\cdots+2^{1}$, so my answer is $a_{n}=2^{n+1}+2^{n}+\cdots+2^{1}$ The correct answer is $2 (-1 + 2^{1 + n})$ How do I make this transition?
The inhomogeneous recurrence relation $$ a_n = 2 a_{n-1} + 2 $$ can be turned into a homogeneous recurrence $$ a_n - a_{n-1} = 2 a_{n-1} + 2 - (2 a_{n-2} + 2) = 2 a_{n-1} - 2 a_{n-2} \iff \\ a_n = 3 a_{n-1} - 2 a_{n-2} $$ and solved by the usual algorithm. The characteristic polynomial is $$ p(t) = t^2 - 3 t + 2 $$ with roots $r_1 = 1$ and $r_2 = 2$. So the solution is $$ a_n = k_1 r_1^n + k_2 r_2^n = k_1 + k_2 2^n $$ The initial elements give $$ a_0 = k_1 + k_2 = 2 \\ a_1 = k_1 + 2 k_2 = 6 $$ This gives $k_2 = 4$ and $k_1 = -2$. The solution is $$ a_n = -2 + 4 \cdot 2^n = 2^{n+2} - 2 $$
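A quick check of the closed form against the original recurrence:

```python
# a_n = 2*a_{n-1} + 2 with a_0 = 2 should match a_n = 2**(n+2) - 2.
a = 2
assert a == 2**2 - 2
for n in range(1, 20):
    a = 2 * a + 2
    assert a == 2**(n + 2) - 2
print("closed form verified for n = 0..19")
```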
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Gradient computation, result verification I have a problem with the computation of the gradient of the function $$ L(w) = -\dfrac{1}{N}\sum\limits_{n=1}^N y_{n}\log\left( \sigma(w^{T}x_{n}) \right) + (1 - y_{n})\log\left( 1-{\sigma}(w^{T}x_{n}) \right) $$ where $\sigma$ is a sigmoid function defined by $\sigma(x) = \dfrac{1}{1 + \mathrm{e}^{-x}}$. With my attempts I end up with the gradient taken with respect to $w$: $$ \triangledown_{w}L(w) = -\dfrac{1}{N}\sum\limits_{n=1}^N x_{n}y_{n} -x_{n}\sigma(w^{T}x_{n}) \;. $$ My process of computation was as follows: derivative of $\log\left( \sigma(w^{T}x_{n}) \right)$, then inner function of $\log$, so it's $\sigma$ and then inner function of $\sigma$ so $w^Tx$. The same, of course, for the second part of the sum. Can you spot any obvious mistakes? Maybe I am not allowed to treat vectors ($w$ and $x$) like normal variables? Thanks in advance.
An easy way to check is to check the components. Recall that $$ \frac{\partial}{\partial x}\sigma(x) = \sigma(x)[1-\sigma(x)] \;\;\;\&\;\;\; \frac{\partial}{\partial w_i} w^Tx_j = x_{ji} $$ Then, the $i$th component of the gradient is: \begin{align} \frac{\partial}{\partial w_i} L(w) &= \frac{-1}{N}\sum_j y_j\frac{\sigma(w^Tx_j)[1-\sigma(w^Tx_j)]x_{ji}}{\sigma(w^Tx_j)} +(1-y_j)\frac{(-1)\sigma(w^Tx_j)[1-\sigma(w^Tx_j)]x_{ji}}{1-\sigma(w^Tx_j)}\\ &= \frac{-1}{N}\sum_j y_j[1-\sigma(w^Tx_j)]x_{ji} - (1-y_j)\sigma(w^Tx_j)x_{ji}\\ &= \frac{-1}{N}\sum_j y_jx_{ji} - \sigma(w^Tx_j)x_{ji} \end{align} Therefore, combining the components into one vector: $$ \nabla L(w) = \frac{-1}{N}\sum_j x_j[y_j - \sigma(w^Tx_j)] $$ as expected.
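The derived formula can also be checked against finite differences; a sketch with assumed random data (any `X`, `y`, `w` of compatible shapes would do):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 3
X = rng.normal(size=(N, d))          # rows are the x_n
y = rng.integers(0, 2, size=N)       # binary labels y_n
w = rng.normal(size=d)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def L(w):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

grad = -(X.T @ (y - sigmoid(X @ w))) / N        # the formula derived above
eps = 1e-6
num = np.array([(L(w + eps * e) - L(w - eps * e)) / (2 * eps)
                for e in np.eye(d)])
print(np.max(np.abs(grad - num)))               # ~1e-9: the gradients agree
```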
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniform convergence of $\sum\limits_{n=1}^{\infty}(1-x)x^n$ I want to study the uniform convergence of $\sum\limits_{n=1}^{\infty}(1-x)x^n$ for $x$ on $[0, 1]$. This is my attempt: First I study convergence on $[0, 1)$: by the Ratio test the sum is convergent when $|x|< 1$. Hence, $\sum\limits_{n=1}^{\infty}(1-x)x^n$ converges on $[0, 1)$. When $x = 1$, $f_n(x) = 0$ and the series is convergent. Thus, the series is convergent on $[0, 1]$. Is my reasoning correct, and if so can it be considered as a rigorous proof? Thanks.
Hint: $(1-x)x^n=x^n-x^{n+1}$. Just use the partial sums, then everything gets simple as these are partial sums of the geometric series.
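A sketch of what the telescoping gives: $S_N(x)=\sum_{n=1}^N(x^n-x^{n+1})=x-x^{N+1}$, so the pointwise limit is $x$ on $[0,1)$ and $0$ at $x=1$. The limit is discontinuous while every partial sum is continuous, so the convergence is not uniform on $[0,1]$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
limit = np.where(x < 1.0, x, 0.0)
for N in [10, 100, 1000]:
    S_N = x - x**(N + 1)
    print(N, np.max(np.abs(S_N - limit)))  # grid max near 1 (true sup is 1)
```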
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Polygon area method I saw this problem in a puzzle book. Just wondering if anyone can explain the principle behind this method. A rectilinear figure of any number of sides can be reduced to a triangle of equal area, and as $\angle AGF$ happens to be a right-angle the thing is quite easy in this way: * *Continue the line $GA$. *Now lay a parallel ruler from $A$ to $C$, run it up to $B$ and mark the point $1$. *Then lay the ruler from $1$ to $D$ and run it down to $C$, marking point $2$. *Then lay it from $2$ to $E$, run it up to $D$ and mark point $3$. *Then lay it from $3$ to $F$, run it up to $E$ and mark point $4$. If you now draw the line $4$ to $F$ then $\triangle G4F$ is equal in area to the irregular field. As our scale map shows $GF$ to be $7$ inches (rods), and we find the length $G4$ in this case to be exactly $6$ inches (rods), we know that the area of the field is $\frac 12 (7\times 6)$ or $21$ square rods. The simple and valuable rule I have shown should be known by everybody-but is not
It is the standard Euclidean procedure for reducing any polygon to a triangle. Take triangle ABC (yellow) and triangle A1C (pink) as in the diagram below: Since 1B and AC are parallel, the two triangles have the same area. Hence the polygon GABC... has the same area as G1C..., which is a polygon with one less vertex. Continuing like this reduces the original polygon to a triangle of the same area.
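The invariance being used can be checked with the shoelace formula: sliding a triangle's apex along a line parallel to the base does not change the area (a sketch with assumed coordinates):

```python
def area(pts):
    # shoelace formula for a polygon given as a list of (x, y) vertices
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

base = [(0.0, 0.0), (4.0, 0.0)]
for shift in [0.0, 1.5, -2.0, 10.0]:
    apex = (1.0 + shift, 3.0)     # moves parallel to the base
    print(area(base + [apex]))    # always 6.0
```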
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Does the geodesic on a surface $z = f(x,y)$ always trace out a straight line in the $xy$ plane? Let $z = f(x,y)$ be a surface. Let $(x_0, y_0, z_0)$ and $(x_1, y_1, z_1)$ be two points on that surface. Let $g(t) = \langle x(t), y(t), z(t)\rangle$ be a parameterization of the geodesic curve between the two points. Is the following statement true? Let $g_{xy} = \langle x(t), y(t), 0\rangle$ be the projection of the geodesic onto the $xy$ plane. Then $g_{xy}$ is the straight line defined by the points $(x_0, y_0)$ and $(x_1, y_1)$ on the $xy$ plane.
If the rigid surface containing the geodesic rolls without slipping on the plane of osculation, with a common normal, then the curve has the same zero geodesic curvature in the plane as on the surface, and so the trace on the plane is a straight line. The instantaneous projection, without any rolling, is in general not a straight line. The rolling-contact line and the projection of the given line onto the fixed $(x,y)$-plane are two distinctly different lines.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Number of Solutions to $e^{z}-3z-1=0$ in the Unit Disk I am working through some of the past qualifying exams in complex analysis and I am a bit stuck on the question I posed in the title. My immediately thought is use Rouche's Theorem. For instance, I tried letting $f(z)=e^{z}$ and $g(z)=3z+1$ in hopes of getting $|f(z)|\leq |g(z)|$ on $|z|=1$. But this is false since on $|z|=1$. $$ |f(z)|\leq\sup_{x\in[-1,1]}e^{x}\leq e\not< 2\leq |3z+1|=|g(z)|. $$ Clearly, there is at least one solution since $z=0$ works. I am thankful for any ideas as to how to proceed.
For $z\ne 0$, let $\gamma$ be the line segment connecting $0$ and $z$. Then, $$|e^z -1| = \left| \int _\gamma e^u du \right| \le \int _\gamma \sup_{u\in \gamma}|e^u| |du| \le e|z| $$
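With the bound $|e^z-1|\le e|z|$, Rouché's theorem (comparing $e^z-1$ with $-3z$, since $e<3$ on $|z|=1$) gives exactly one zero in the unit disk, namely $z=0$. A numerical argument-principle check (valid since the bound shows $f$ has no zeros on the circle):

```python
import numpy as np

# Count zeros of f(z) = e^z - 3z - 1 inside |z| = 1 via (1/(2*pi*i)) * integral of f'/f.
n = 20000
t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)
f = np.exp(z) - 3 * z - 1
fp = np.exp(z) - 3
winding = np.sum(fp / f * 1j * z) * (2 * np.pi / n) / (2j * np.pi)
print(winding.real)   # ~1.0: the unique solution in the disk is z = 0
```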
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Discrete math: forming a comittee from n men, n women, using 2 different approaches. Prove that, $$\sum_{k=1}^n k\binom{n}{k}^2 = n\binom{2n-1}{n-1}$$ by determining, in two different ways, the number of ways a committee can be chosen from a group of $n$ men and $n$ women. Such a committee has a woman as the chair and has $n − 1$ other members. Why is $k$ in $[1,n]$? Why is the binomial coefficient to the power $2$?
Hint: $k$ is the total number of women on the committee (so must be at least $1$). It may help to use the fact that $\binom nk=\binom n{n-k}$ to rewrite the LHS as $$\sum_{k=1}^nk\binom nk\binom n{n-k}.$$
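The identity itself is easy to confirm for small $n$:

```python
from math import comb

for n in range(1, 15):
    lhs = sum(k * comb(n, k)**2 for k in range(1, n + 1))
    assert lhs == n * comb(2 * n - 1, n - 1)
print("identity holds for n = 1..14")
```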
{ "language": "en", "url": "https://math.stackexchange.com/questions/2249940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How to prove this integral $\iint_{D} \frac{\mathrm{d}\bar{z}\mathrm{d}z}{z-\zeta} = - 2{\pi}i{\bar{\zeta}} $ I am reading this paper and there is an integral in it: $$\iint_{D} \frac{\mathrm{d}\bar{z}\mathrm{d}z}{z-\zeta} = - 2{\pi}i{\bar{\zeta}},$$ where $D$ is a disc of radius $R$ and $\zeta$ is a point in $D$. I write out the left-hand side by definition. Let $\zeta = a+ i b$, then \begin{align*}\iint_{D} \frac{\mathrm{d}\bar{z}\mathrm{d}z}{z-\zeta} &=2i \iint_{D}\frac{\mathrm{d}x\mathrm{d}y}{(x+iy)-(a+ib)} \\&=2 \iint_{D}\frac{(y-b)\,\mathrm{d}x\mathrm{d}y}{(x-a)^2+(y-b)^2} +2i\iint_{D}\frac{(x-a)\,\mathrm{d}x\mathrm{d}y}{(x-a)^2+(y-b)^2}, \end{align*} and it should be $$\iint_{D}\frac{(x-a)\,\mathrm{d}x\mathrm{d}y}{(x-a)^2+(y-b)^2}=-{\pi}a.$$ Using polar coordinates and changing variables to $t = \tan\frac{\theta}{2}$, \begin{align*} &\mathrel{\phantom{=}}\iint_{D}\frac{(x-a)\,\mathrm{d}x\mathrm{d}y}{(x-a)^2+(y-b)^2}\\ &= \int^R_0\mathrm{d}r\int^{\pi}_{-\pi}\frac{r(r\cos\theta -a)}{(r\cos\theta-a)^2+(r\sin\theta - b)^2}\,\mathrm{d}\theta\\ &=\iint\frac{(r(\cos^2 \frac{\theta}{2}-\sin^2 \frac{\theta}{2})-a(\cos^2\frac{\theta}{2}+\sin^2\frac{\theta}{2}))\,\mathrm{d}\theta\mathrm{d}r}{(r^2+a^2+b^2)(\cos^2\frac{\theta}{2}+\sin^2\frac{\theta}{2})-2ra(\cos^2\frac{\theta}{2}-\sin^2\frac{\theta}{2}) -4rb\sin\frac{\theta}{2}\cos\frac{\theta}{2}}\\ &=\int^R_02r\,\mathrm{d}r\int^{+\infty}_{-\infty}\frac{r(1-t^2)-a(1+t^2)}{\left((r^2+a^2+b^2)(1+t^2)-2ra(1-t^2)-4rbt\right)(1+t^2)}\,\mathrm{d}t \end{align*} and I don't know how to continue. Did I do something wrong? I think the author uses complex notation for convenience; computing in real coordinates seems to be the wrong approach, but I don't know how to do it in complex form. Thank you!
The integral you have isn't $-2\pi i \bar{\zeta}$ unless the disk $D$ is centered at origin. Assume $D$ is centered at origin and $r$ is its radius. Using Stoke's theorem for complex coordinates, we have $$\int_D \frac{d\bar{z} \wedge dz}{z-\zeta} = \int_D d\left( \frac{\bar{z}-\bar{\zeta}}{z-\zeta} dz \right) = \int_{\partial D} \frac{\bar{z}-\bar{\zeta}}{z - \zeta} dz $$ On the circle $\partial D$, we have $\displaystyle\;\bar{z} = \frac{r^2}{z}\;$. When $\zeta \ne 0 $ and $\in D \setminus \partial D$, we can evaluate the integral using an ordinary contour integral. $$\begin{align}\int_{\partial D}\left(\frac{r^2}{z} - \bar{\zeta}\right)\frac{1}{z-\zeta}dz &= \int_{\partial D}\left[\frac{r^2}{\zeta}\left(\frac{1}{z-\zeta}-\frac{1}{z}\right)- \frac{\bar{\zeta}}{z-\zeta}\right]dz\\ &= 2\pi i \left[ \frac{r^2}{\zeta} (1 - 1) - \bar{\zeta}\right] = -2\pi i \bar{\zeta} \end{align} $$ When $\zeta = 0$, the result is similar. Note Please note that there are some subtle issues in how to handle the singularity at $\zeta$ correctly. We have chosen to use the relation $$\frac{d\bar{z} \wedge dz}{z-\zeta} = d\left( \frac{\bar{z}-\bar{\zeta}}{z-\zeta} dz \right) \quad\text{ instead of }\quad \frac{d\bar{z} \wedge dz}{z-\zeta} = d\left( \frac{\bar{z}}{z-\zeta} dz \right) $$ to evaluate the integral. In this way, the intermediate $1$-form remains bounded and we can forget about the singularity.
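A numerical sanity check of the identity for a disk centered at the origin (as noted above, the centering matters); here $\mathrm{d}\bar z\wedge\mathrm{d}z = 2i\,\mathrm{d}x\,\mathrm{d}y$, and the two values agree to about two decimals on a midpoint grid:

```python
import numpy as np

R, zeta, N = 1.0, 0.3 + 0.4j, 1000
m = np.linspace(-R, R, N, endpoint=False) + R / N       # cell midpoints
X, Y = np.meshgrid(m, m)
Z = X + 1j * Y
inside = (np.abs(Z) < R) & (np.abs(Z - zeta) > 1e-2)    # skip singular cells
integral = 2j * np.sum(1.0 / (Z[inside] - zeta)) * (2 * R / N) ** 2
print(integral)                         # ~ -0.8*pi - 0.6*pi*i
print(-2j * np.pi * zeta.conjugate())   # exact value
```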
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Determine whether $\int^{1/2}_0\frac{1}{\sin x\cdot \ln x}dx$ is convergent, absolutely convergent, or divergent. I'm trying to determine whether $\int^{1/2}_0\frac{1}{\sin x\cdot \ln x}dx$ is convergent, absolutely convergent, or divergent. Let $f(x) = \frac{1}{\sin x\cdot \ln x}$; then $f(x)< 0$ on $(0,0.5]$. Therefore I assume I need to work with $-f(x) \ge 0$ in this question. I'm trying to use the comparison test without success. I thought of using the function $h(x) = \frac{-1}{\ln x}$, which satisfies $-f(x) > h(x)$ on the interval in question, although that didn't get me anywhere. What are my options here?
$\sin{x}<x$ for $0<x<\pi/2$ (draw a picture to see this). So $$ \frac{1}{\sin{x} \cdot (-\log{x})} >\frac{1}{x \cdot (-\log{x})}, $$ and this has antiderivative $-\log{(-\log{x})}$. This is finite when $x=1/2$ and diverges for $x \to 0$, so the original integral diverges.
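The divergence is visible numerically from the antiderivative: the comparison integral from $\varepsilon$ to $1/2$ equals $-\log(\log 2)+\log(\log(1/\varepsilon))$, which grows without bound (doubly logarithmically) as $\varepsilon\to 0$:

```python
import math

F = lambda x: -math.log(-math.log(x))   # antiderivative of 1/(x * (-log x))
for eps in [1e-2, 1e-4, 1e-8, 1e-16, 1e-32]:
    print(eps, F(0.5) - F(eps))          # slowly but unboundedly increasing
```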
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Placing m books on n shelves such that there is at least one book on each shelf Given $m \ge n \ge 1$, how many ways are there to place m books on n shelves, such that there is at least one book on each shelf? Placing the books on the shelves means that: • we specify for each book the shelf on which this book is placed, and • we specify for each shelf the order (left most, right most, or between other books) of the books that are placed on that shelf. I solve this problem in the following way: If $m=n$, there are $m!$ or $n!$ ways to do it Else: * *Place $n$ books on $n$ shelves: $n!$ ways to do it *Call the set of $m-n$ remaining books $T=\{t_1, t_2,..,t_{m-n}\}$ The procedure for placing books on shelves: choose a shelf, choose a position on the shelf We know choosing a shelf and then placing the book on the far left can be done in $n$ ways For book $t_1$, there is a maximum of $1$ additional position (the far right). Thus there are $n+1$ ways to place book $t_1$. For book $t_2$, there is a maximum of $2$ additional positions. Thus there are $n+2$ ways for book $t_2$ ... For book $t_i$, there is a maximum of $i$ additional positions. Thus there are $n+i$ ways for book $t_i$ In placing $m-n$ books, we have $(n+1)(n+2)\cdots(n+m-n)$ or $(n+1)(n+2)\cdots m$ ways In total, we have $n!(n+1)(n+2)\cdots m$ or $m!$ ways Is there any better solution to this problem?
Given that the ordering of the books is important, not merely which shelf they are on, you can simply split any book ordering into shelves by choosing the $n{-}1$ shelf breaks from the $m{-}1$ book gaps. So there are $$m!\binom{m-1}{n-1} = \frac{m!(m-1)!}{(m-n)!(n-1)!} \text{ options}$$
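A brute-force check of this closed form on small cases, enumerating every shelf assignment and the orderings within each shelf:

```python
from itertools import product
from math import comb, factorial, prod

def count(m, n):
    total = 0
    for assign in product(range(n), repeat=m):      # shelf of each book
        sizes = [assign.count(s) for s in range(n)]
        if all(sizes):                              # every shelf non-empty
            total += prod(factorial(k) for k in sizes)
    return total

for m, n in [(3, 2), (4, 2), (4, 3), (5, 3)]:
    assert count(m, n) == factorial(m) * comb(m - 1, n - 1)
print("formula verified on small cases")
```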
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Behavior of $\sin x/x$ as $x$ approaches 0? What is the limiting behavior of $\frac{\sin x}{x}$ as $x \to 0$, in terms of $x$: 1) $\lim\limits_{x \to 0} \frac{\sin x}{x}$ = $\lim\limits_{x \to 0}\frac{\sin'x}{x'} = \frac{\lim\limits_{x \to 0}\cos x}{1} \rightarrow 1 - x^2/2 + O(x^4)$ by L'Hospital's rule, or, 2) As $x \to 0,\ \frac{\sin x}{x}\sim\frac{x - x^3/3! + O(x^5)}{ x } = 1 - x^2/6 + O(x^4)$ expanding $\sin x$ by a Taylor series. I know each has a limit of 1, but what is the behavior in terms of small $x$, and why don't both approaches give the same answer?
Because L'Hopital's theorem says: Let $f,g$ be two differentiable functions such that $g'(x)\ne 0$ in a neighbourhood of $x_0$, $\lim_{x\to x_0}\frac{f'(x)}{g'(x)}=L\in\Bbb R\cup\{-\infty,\infty\}$, and either $\lim_{x\to x_0} g(x)=\infty$ or $\lim_{x\to x_0}f(x)=\lim_{x\to x_0} g(x)=0$. Then, $$\lim_{x\to x_0}\frac{f(x)}{g(x)}=L$$ As you can see, the information it provides for the first-and-subsequent-order expansion of $\frac{f(x)}{g(x)}$ amounts to $0$. Added: If by chance you wished to apply it to go further in the expansion of $\frac fg$ (let's say, first order and $f(x)\to 0,\ g(x)\to 0$), you'd end up with something like $$\lim_{x\to x_0} \frac{f(x)-Lg(x)}{xg(x)}\stackrel?=\lim_{x\to x_0} \frac{f'(x)+Lg'(x)}{g(x)+xg'(x)}$$ and there is no evident manipulation of the RHS that yields an explicit limit, nor it is obvious that $g+xg'$ is not frequently $0$ in the neighbourhoods of $x_0$.
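A numeric check confirms that the Taylor expansion (2), with second-order coefficient $-1/6$, describes the actual behavior:

```python
import math

# (sin(x)/x - 1) / x**2 should approach -1/6, not -1/2.
for x in [0.1, 0.01, 0.001]:
    print(x, (math.sin(x) / x - 1) / x**2)
print(-1 / 6)
```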
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
SAT inequality problem I've been studying from College Board SAT practice tests, and I've stumbled on an inequality problem which I can't seem to understand even with the SAT answer explanation. I would greatly appreciate it if anyone could help me. $$ y \le 3x+1 $$ $$x-y > 1 $$ Which of the following ordered pairs $(x, y)$ satisfies the system of inequalities above? $$A) (-2, -1) $$ $$B) (-1, 3 )$$ $$C) (1, 5 )$$ $$D) (2,-1)$$ edit This is the answer explanation they give me: Choice D is correct. Any point $(x, y)$ that is a solution to the given system of inequalities must satisfy both inequalities in the system. Since the second inequality in the system can be rewritten as $y < x - 1$, the system is equivalent to the following system. $$ y \le 3x+1 $$ $$ y < x-1 $$ Since $3x + 1 > x - 1$ for $x > -1$ and $3x + 1 \le x - 1$ for $x \le -1$, it follows that $y < x - 1$ for $x > -1$ and $y \le 3x + 1$ for $x \le -1$. Of the given choices, only $(2, -1)$ satisfies these conditions because $-1 < 2 - 1 = 1$.
$$y\leq3x+1$$ $$x-y>1\implies y<x-1$$ You can solve this graphically. The region which solves these inequalities is the region below the two lines. By plotting the 4 points, you'll see option D is the only one which lies in the region.
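Equivalently, just substitute each candidate into both inequalities:

```python
points = {"A": (-2, -1), "B": (-1, 3), "C": (1, 5), "D": (2, -1)}
for label, (x, y) in points.items():
    print(label, y <= 3 * x + 1 and x - y > 1)   # only D prints True
```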
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
$2x^2 + 3x + 4$ is not divisible by $5$ I tried $x^2 \equiv 0, 1, 4 \pmod 5$, but how can I deal with $3x$? I feel this method does not work here.
$$5|\;(2x^2+3x+4)\iff$$ $$ \iff5|\;3(2x^2+3x+4)=(6x^2+9x+12)\iff$$ $$\iff 5|\;((6x^2+9x+12)-(5x^2+5x+10))=$$ $$=(x^2+4x+2)=(x+2)^2-2.$$ But no square is $2$ more than a multiple of $5.$
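Since $2x^2+3x+4 \bmod 5$ depends only on $x \bmod 5$, checking five residues settles it:

```python
print(sorted({(2 * x * x + 3 * x + 4) % 5 for x in range(5)}))  # [1, 3, 4] - never 0
print(sorted({x * x % 5 for x in range(5)}))                    # squares mod 5: [0, 1, 4]
```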
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 5 }
Truncated normal random variable Find the cdf and quantile function for the truncated (at a) normal random variable given that $$\frac{\varphi(x) I_{x>a}}{1-\Phi(a)}$$ where $\varphi(x)$ is the density for standard normal and $\Phi(x)$ is the cdf for standard normal distribution. Express answers in terms of $\varphi(x)$ and $\Phi(x)$. Appreciate your help, thank you!
Since the cdf is $F_t (u)=\int_a^u\frac{\varphi(x) }{1-\Phi(a)} dx$ for $u>a$, it can be expressed in terms of the standard normal cdf directly: $$F_t(u)=\frac{\Phi(u)-\Phi(a)}{1-\Phi(a)}.$$ Inverting $p=F_t(q)$ then gives the quantile function $$q = \Phi^{-1}\big(\Phi(a)+p\,(1-\Phi(a))\big).$$
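These closed forms can be checked against SciPy's truncated normal (its shape parameters are the standardized truncation points, which for a standard normal coincide with $a$ and $\infty$):

```python
import numpy as np
from scipy.stats import norm, truncnorm

a, u, p = 0.5, 1.3, 0.7
print((norm.cdf(u) - norm.cdf(a)) / (1 - norm.cdf(a)),
      truncnorm.cdf(u, a, np.inf))                       # cdf agrees
print(norm.ppf(norm.cdf(a) + p * (1 - norm.cdf(a))),
      truncnorm.ppf(p, a, np.inf))                       # quantile agrees
```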
{ "language": "en", "url": "https://math.stackexchange.com/questions/2250802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many squares can be inscribed in a regular polygon? Say that a square $S$ is said to be inscribed in a regular polygon $P$ if all the four vertices of $S$ lie on the boundary of $P$. It is well-known that one can inscribe a square in a regular $n$-gon for $n\geq 5$. I would like to know, up to rotational symmetry, how many distinct squares can be inscribed? For example, in a hexagon only one square can be inscribed. Second question: What is the ratio of their side lengths?
If $n$ is a multiple of $4$, then every pair of opposite points (with respect to the center) on the polygon can be taken as endpoints of a diagonal of an inscribed square, so in this case we have infinitely many solutions. In the other cases, it is not difficult to prove that a solution is possible, where the inscribed square has a side parallel to a side of the polygon. I think there are no other solutions, because to find the inscribed square one has to solve a system of linear equations, which can be indeterminate (as in the case when $n$ is a multiple of $4$) but otherwise cannot have more than one solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Snooker shot - does margin of error increase or decrease as the target angle increases? There is a perception (widely held) in snooker that a straight shot is more difficult than an angled shot. There are many forum discussion about this, and the reasons are usually accepted to be psychological. But I was wondering, is there a mathematical reason for it. Is the margin of error greater when the shot being taken is at an angle? Example - if the white ball was 1 degree off target on a straight shot, and one degree off target on an angled shot (same target for both shots), and each shot hit at the same speed, would the red ball travel off line to the same extent?
Let the balls have radius $r$ and let $d$ be the distance between their centres. Let $\theta$ be the angle at which the white is struck, measured from the line between the centres of the balls, and let $\phi$ be the direction in which the red moves, measured from the line between the red and the white before it is struck. Then we get $2r \sin(\theta + \phi) = d \sin(\theta)$. Writing $a=\frac{d}{2r}$, this gives $\frac{d\phi}{d\theta}=-1+ \frac{a \cos(\theta)}{\sqrt{1-a^2 \sin^2(\theta)}}$. This represents the ratio of the error in the direction of the red to the error in hitting the white. It increases monotonically from a straight shot to a fine cut. If $d \approx 2r$ (the balls nearly touching) this will not hold true, as the target spot on the red is further away for a cut compared to a straight shot.
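A quick look at the error-amplification factor $\frac{d\phi}{d\theta}$ for an assumed ratio $a = d/(2r) = 5$: it starts at $a-1$ for a straight shot and blows up toward the finest possible cut $\theta = \arcsin(1/a)$.

```python
import numpy as np

a = 5.0
theta = np.linspace(0.0, 0.95 * np.arcsin(1.0 / a), 6)
ratio = -1 + a * np.cos(theta) / np.sqrt(1 - (a * np.sin(theta))**2)
print(ratio)   # strictly increasing: cuts amplify aiming error
```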
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Where does the curve $x^y = y^x$ intersect itself? This problem is quite easily solved by using logarithms and derivative and forming the function $f(x,y) = x^y - y^x$, however there are assertions that this problem can be solved without using either of the two. I can not see how one would proceed to solve this problem in the absence of logarithmic manipulation and differentiation. Can anyone shed some light on this?
The function $f\colon \mathbb R^2 \to \mathbb R$, $f(x,y)=x^y-y^x$, is continuous. Since $f(1,2)=-1$ and $f(2,1)=1$, along any path joining $(1,2)$ to $(2,1)$ it takes every value between $-1$ and $1$. Thus at some point $f$ is zero.
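Numerically one can trace the nontrivial branch of $x^y=y^x$ and watch it merge with the trivial branch $y=x$; the merge point, where the curve intersects itself, is $(e,e)$. A sketch (the bracketing of the root is an assumption, justified by $e\ln x < x$ for $1<x<e$):

```python
import math
from scipy.optimize import brentq

def second_branch(x):                 # the solution y != x of x**y == y**x
    g = lambda y: y * math.log(x) - x * math.log(y)   # log(x**y / y**x)
    return brentq(g, math.e, 1000.0)  # root with y > e

for x in [2.0, 2.3, 2.6, 2.7]:
    print(x, second_branch(x))        # y decreases toward e as x -> e
```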
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Maximize number of covered sets by choosing given number of elements I am not sure if it's a known problem: There is a set of some elements. For the purpose of this explanation it can be a subset of the natural numbers, let's say $\{1, 2, ..., 20\}$. Let's call it $SET$. There are also given subsets of the $SET$. The subsets don't have to be disjoint and their union doesn't have to cover the $SET$ (i.e. $\{1, 3, 5\}$, $\{8, 9, 14\}$, $\{1, 10, 15, 18, 20\}$, $\{5, 6, 8, 9\}$, and so on). Now, with a given number $k$ ($0<k<|SET|$), how do we choose $k$ elements of the $SET$ to maximize the number of subsets covered by the chosen elements? Thank you for help, Przemek.
Could someone explain how Mees de Vries' answer meets my criteria? I think it is about searching for the smallest number of subsets covering SET, not about finding vertices that satisfy the maximum number of subsets. Correct me if I am wrong.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Geometric series sum $\sum_{n=0}^\infty\frac{a}{(1+x)^n}$ Why does this hold $$\sum_{n=0}^\infty\frac{a}{(1+x)^n}=\frac{a(x+1)}{x}$$ ? To me it looks like $=\frac{a}{x}$ from the formula. That is: $$r=(1+x)^{-n}\Rightarrow a(1-(1+x))^{-n}=a/n$$
Okay, so if you put $n=0,1,2,\ldots$ in the formula you'll get this series: $a, \frac{a}{1+x}, \frac{a}{(1+x)^2},\ldots$ with common ratio $\frac{1}{1+x}$. Now we can use the formula for the sum of an infinite GP, which is $\frac{a}{1-r}$, where $r$ is the common ratio; so we get $\frac{a}{1-\frac{1}{1+x}}$, which becomes $\frac{a(x+1)}{x}$.
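A quick numeric check (assuming $x>0$, so that $|1/(1+x)|<1$ and the series converges):

```python
a, x = 3.0, 0.5
s = sum(a / (1 + x)**n for n in range(500))
print(s, a * (x + 1) / x)   # both ~ 9.0
```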
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Chess board probability problem. Three random squares are chosen from a regular chess board. Find the probability that they form the letter 'L'. I cannot think of a general way to go about these types of questions. Need hints or solutions.
There is a 1.92 % chance of making an "L" without space in between. If you mean that they can form an "L" when they are far from each other, that is a different story. POSSIBLE solution: 64/100 = 0.64, and 0.64 × 3 = 1.92, and this was in percentage form. If you need the answer in a form other than a percentage, I can't help you right now.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
existence of closest point on boundary of domain to a point outside domain Let $D \subset \mathbb{R}^n$ (or $\mathbb{C}^n$) be a domain with $C^2$ boundary. Why is there a neighborhood U of $\partial D$ such that for every $z \in U$, there is a unique point of $\partial D$ that is closest to $z$?
This follows from the tubular neighbourhood theorem. Pick $U$ as a tubular neighbourhood of $\partial D$. Let $z \in U$, and let $p \in \partial D$ be a point which minimizes distance from $z$. It is easy to see that the segment joining $p$ and $z$ must be normal to $\partial D$, and thus the point $z$ corresponds to $(p,v)$, where $v=z-p$, on the normal bundle. If there were more than one point which minimizes distance, it would follow that there was $(p',v')$, with $p' \neq p$, such that $z$ also corresponds to $(p',v')$, a contradiction, since the tubular neighbourhood corresponds bijectively to a neighbourhood of the $0$-section of the normal bundle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$p=(1-\zeta_{p})...(1-\zeta^{p-1}_{p})$ Let $\zeta_{p}$ be a primitive $p$th root of $1$. Then $t^{p}-1=(t-1)(t-\zeta_{p})(t-\zeta_{p}^{2})\ldots(t-\zeta_{p}^{p-1})$. Using this, I need to show that $p=(1-\zeta_{p})\ldots(1-\zeta^{p-1}_{p})$ but I do not know how.
Divide by $t-1$, use that $\frac{t^p-1}{t-1}=1+t+\dotsb + t^{p-1}$, and then evaluate your equation at $t=1$. Alternatively, you can evaluate the derivative of both sides at $t=1$.
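A numeric confirmation for a small prime (here $p=7$; the same product identity in fact holds for the $n$-th roots of unity for every integer $n\ge 2$):

```python
import cmath

p = 7
zeta = cmath.exp(2j * cmath.pi / p)
prod = 1.0
for k in range(1, p):
    prod *= 1 - zeta**k
print(prod)   # ~ (7+0j)
```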
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is it possible to scale the eigenvector matrix up by multiplying or adding with a constant? I am using R to generate the eigenvector matrix from the Laplacian matrix that represents a graph dataset. The issue that I have is that the values of the eigenvector matrix are very small, sometimes on the order of $10^{-20}$! My question is: is it possible to scale the eigenvector matrix up by multiplying or adding with a constant? I think that doing this will increase the magnitude of the vectors, but I am worried that it may destroy the direction of the vectors. Thank you very much
Yes, it is, since the set of all eigenvectors corresponding to a specific eigenvalue, together with the zero vector, is a vector space, so if you multiply them by a nonzero constant, or even add them, you will still get an eigenvector. Note, however, that this applies to multiplying by a scalar: adding a constant to every entry is not a scalar multiple and will in general destroy the eigenvector property.
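A two-line NumPy illustration of the difference:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])                        # eigenvector for eigenvalue 2
print(np.allclose(A @ (10 * v), 2 * (10 * v)))  # True: scaling is safe
w = v + 1.0                                     # adding a constant to each entry
print(np.allclose(A @ w, 2 * w))                # False: no longer an eigenvector
```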
{ "language": "en", "url": "https://math.stackexchange.com/questions/2251946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\lim_{x \to 0} \frac{\log \left(\cosh\left(x^2-xc\right) \right)}{x^2}=\frac{c^2}{2}$ without L'Hospital's rule How to show that without using L'Hospital's rule \begin{align} \lim_{x \to 0} \frac{\log \left(\cosh\left(x^2-xc\right) \right)}{x^2}=\frac{c^2}{2} \end{align} I was able to show the upper bound by using the bound $\cosh(x) \le e^{x^2/2}$ \begin{align} \lim_{x \to 0} \frac{\log \left(\cosh\left(x^2-xc\right) \right)}{x^2} \le \lim_{x \to 0} \frac{\left(x^2-xc\right)^2}{2x^2}=\frac{c^2}{2} \end{align} My question: How finish this argument.
Denote $a=x^2-cx$ for simplicity. Then \begin{align} \frac{\ln(\cosh a )}{x^2}&=\frac{\ln(\cosh^2a)}{2x^2}=\color{blue}{\frac{\ln(1+\sinh^2a)}{2x^2}}=\frac12\cdot\frac{\ln(1+\sinh^2a)}{\sinh^2a}\cdot\left(\frac{\sinh a}{x}\right)^2=\\ &=\frac12\cdot\frac{\ln(1+\sinh^2a)}{\sinh^2a}\cdot\left(\frac12\cdot\left[\frac{e^a-1}{a}+\frac{e^{-a}-1}{-a}\right]\cdot\frac{a}{x}\right)^2\to\frac{c^2}{2}. \end{align} P.S. If you want a lower bound then using the inequality $\frac{\sinh a}{a}\ge 1$ (for $a\ge 0$ this reads $e^a-e^{-a}\ge 2a$, which holds because the derivative satisfies $e^a+e^{-a}\ge 2$; the case $a<0$ follows by oddness) we can estimate $$ \sinh^2 a=\left(\frac{\sinh a}{a}\right)^2a^2\ge a^2 $$ and continue from the blue expression above as $$ \color{blue}{\frac{\ln(1+\sinh^2a)}{2x^2}}\ge\frac{\ln(1+a^2)}{2x^2}=\frac12\cdot\frac{\ln(1+a^2)}{a^2}\cdot\frac{a^2}{x^2}\to \frac{c^2}{2}. $$
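A numeric check of the limit:

```python
import math

c = 1.7
for x in [0.1, 0.01, 0.001]:
    print(x, math.log(math.cosh(x * x - c * x)) / (x * x))
print(c * c / 2)   # 1.445; the ratios above approach this value
```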
{ "language": "en", "url": "https://math.stackexchange.com/questions/2252022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 1 }
Let $f,g: V \rightarrow W$ be linear operators. Prove that $r(f + g) \leq r(f) + r(g)$ Let $f,g: V \rightarrow W$ be linear operators. Prove that $r(f + g) \leq r(f) + r(g)$ Note: r = rank My idea was to use matrix representation and to prove that the rank of the matrix $C$ ($C= F+G$) can't be bigger than the sum $r(F) + r(G)$, but to use that idea I must prove that $r(f) = r(g)$, which I don't have an idea how to do. The other idea was to prove that $(f+g)(v) = f(v) + g(v)$, but also with no luck.
Hint: $\operatorname{im}(f+g)\subseteq \operatorname{im}f+\operatorname{im}g$
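From the hint, $r(f+g)=\dim\operatorname{im}(f+g)\le\dim(\operatorname{im}f+\operatorname{im}g)\le\dim\operatorname{im}f+\dim\operatorname{im}g$. A quick random-matrix illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # rank 3
G = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))   # rank 2
r = np.linalg.matrix_rank
print(r(F + G), "<=", r(F) + r(G))   # e.g. 5 <= 5
```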
{ "language": "en", "url": "https://math.stackexchange.com/questions/2252161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Definition of a continuous function (through open sets) Why is the definition of a continuous function given in terms of inverse images? $f$ is a continuous function if, for every open $V$, $f^{-1}(V)$ is open.
Because it is easier to use than the usual calculus-like definition: $f$ is continuous iff for every $x$ and every open $V$ containing $f(x)$, there is an open $U$ containing $x$ with $f(U) \subseteq V$. The two definitions can be proven equivalent. For real-to-real functions, the latter can be shown equivalent to the usual epsilon-delta definition; likewise for functions from $\mathbb{R}^n$ to $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2252261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }