Equation of a rectangle I need to graph a rectangle on the Cartesian coordinate system. Is there an equation for a rectangle? I can't find it anywhere.
If the equations of the diagonals of the rectangle are $Ax + By + C = 0$ and $Dx + Ey + F = 0$, then an equation for the rectangle is $$M|Ax + By + C| + N|Dx + Ey + F| = 1.$$ $M$ and $N$ can be found by substituting the coordinates of two adjacent vertices of the rectangle. In fact, this equation can be used to describe any parallelogram. Roughly speaking, $M$ (together with $A$ and $B$) and $N$ (together with $D$ and $E$) give the size of the diagonals of the parallelogram.
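A quick numeric sanity check of the formula, for an assumed concrete example (not from the answer): the square with vertices $(\pm1,\pm1)$, whose diagonals lie on the lines $x-y=0$ and $x+y=0$; substituting two adjacent vertices gives $M=N=1/2$.

```python
def lhs(x, y, M=0.5, N=0.5):
    # M|x - y| + N|x + y| for the square with vertices (±1, ±1);
    # its diagonals lie on the lines x - y = 0 and x + y = 0
    return M * abs(x - y) + N * abs(x + y)

# all four vertices satisfy lhs = 1
for v in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    assert abs(lhs(*v) - 1) < 1e-12
assert abs(lhs(1, 0) - 1) < 1e-12  # an edge midpoint lies on the boundary too
assert lhs(0, 0) < 1               # the center is strictly inside
```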
{ "language": "en", "url": "https://math.stackexchange.com/questions/69099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 9, "answer_id": 2 }
Integrating $\int \frac{1}{1+e^x} dx$ I wish to integrate $$\int_{-a}^a \frac{dx}{1+e^x}.$$ By symmetry, the above is equal to $$\int_{-a}^a \frac{dx}{1+e^{-x}}$$ Now multiply by $e^x/e^x$ to get $$\int_{-a}^a \frac{e^x}{1+e^x} dx$$ which integrates to $$\log(1+e^x) |^a_{-a} = \log((1+e^a)/(1+e^{-a})),$$ which is not correct. According to Wolfram, we should get $$2a + \log((1+e^{-a})/(1+e^a)).$$ Where is the mistake? EDIT: Mistake found: was using log on calculator, which is base 10.
Your solution is absolutely correct. $\log(\frac{1+e^a}{1+e^{-a}})=2a+\log(\frac{1+e^{-a}}{1+e^a})$ The authors might have arrived at the solution like this. Let $I_1=\int^{a}_{-a} \frac{1}{1+e^x}dx$ and $I_2=\int^{a}_{-a} \frac{e^x}{1+e^x}dx$ Adding $I_1$ and $I_2$ , $I_1+I_2=\int^{a}_{-a}dx=2a$ As you have calculated $I_2=\log\left(\frac{1+e^a}{1+e^{-a}}\right)$ Therefore $I_1=2a-I_2=2a-\log\left(\frac{1+e^a}{1+e^{-a}}\right)=2a+\log\left(\frac{1+e^{-a}}{1+e^{a}}\right) $
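The identity is easy to confirm numerically; here is a sketch using a plain midpoint rule (no particular library assumed):

```python
import math

def midpoint(f, a, b, n=100_000):
    # midpoint rule; plenty accurate for this smooth integrand
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a = 1.0
I1 = midpoint(lambda x: 1 / (1 + math.exp(x)), -a, a)
closed = 2 * a - math.log((1 + math.exp(a)) / (1 + math.exp(-a)))
assert abs(I1 - closed) < 1e-6
# note (1 + e^a)/(1 + e^{-a}) = e^a, so the closed form simplifies to just a
assert abs(closed - a) < 1e-12
```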
{ "language": "en", "url": "https://math.stackexchange.com/questions/69179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
A question on partitions of n Let $P$ be the set of partitions of $n$. Let $\lambda$ denote the shape of a particular partition. Let $f_\lambda(i)$ be the frequency of $i$ in $\lambda$ and let $a_\lambda(i) := \# \lbrace j : f_\lambda(j) \geq i \rbrace$. For example: $n=5,~ \lambda=(1,1,3),~ f_\lambda(1)=2,~ a_\lambda(1)=2$ (added: since $f_\lambda(1)$ and $f_\lambda(3)$ are both at least 1). It is easy to see that for a fixed $\lambda$, $\sum_k f_\lambda(k)=\sum_k a_\lambda(k)$. But, I am having trouble showing: For a fixed $k$, $$\sum_\lambda f_\lambda(k)=\sum_\lambda a_\lambda(k)$$ Thanks for the help!
Here's a proof using generating functions; I haven't given much thought yet to how it could be translated into a combinatorial argument. The generating function for the number of partitions is $$p(x)=(1+x+x^2+\dotso)(1+x^2+x^4+\dotso)(1+x^3+x^6+\dotso)\dots=\prod_m\frac1{1-x^m}\;.$$ To count the number of times a part $k$ occurs in the partitions of $n$ (the left-hand side of the equation), we can replace the $k$-th factor by one including this count: $$ \begin{align} f_k(x) &=\left(0+1x^k+2(x^k)^2+3(x^k)^3+\dotso\right)\prod_{m\neq k}\frac1{1-x^m}\\ &=\left(x^k\frac{\mathrm d}{\mathrm d(x^k)}\left(1+x^k+(x^k)^2+(x^k)^3+\dotso\right)\right)\prod_{m\neq k}\frac1{1-x^m}\\ &=\left(x^k\frac{\mathrm d}{\mathrm d(x^k)}\frac1{1-x^k}\right)\prod_{m\neq k}\frac1{1-x^m}\\ &=\frac{x^k}{(1-x^k)^2}\prod_{m\neq k}\frac1{1-x^m}\\ &=\frac{x^k}{1-x^k}\prod_m\frac1{1-x^m}\\ &=p(x)\frac{x^k}{1-x^k}\;.\\ \end{align} $$ To count the number of at-least-$k$-fold occurrences of parts (the right-hand side of the equation), consider this generating function (which I'll first write out for $k=2$ to illustrate the idea and then generalize): $$\begin{align} g_2(x,y) &=\left(1+x+y(x^2+x^3+\dotso)\right)\left(1+x^2+y((x^2)^2+(x^2)^3+\dotso)\right)\dots\\ &=\left(1+x+\frac{yx^2}{1-x}\right)\left(1+x^2+\frac{y(x^2)^2}{1-x^2}\right)\dots\\ &=\frac{1-x^2+yx^2}{1-x}\frac{1-(x^2)^2+y(x^2)^2}{1-x^2}\dots\\ &=p(x)\left(1-x^2+yx^2\right)\left(1-(x^2)^2+y(x^2)^2\right)\dots\;,\\ g_k(x,y) &=\prod_m\left(\sum_{l=0}^{k-1}(x^m)^l+y\sum_{l=k}^\infty(x^m)^l\right)\\ &=p(x)\prod_m\left(1-(x^k)^m+y(x^k)^m\right)\;. \end{align} $$ Every factor of $y$ tracks the at-least-$k$-fold use of a part. What we want is the total count of factors of $y$ included in the coefficient of $x^n$; that is, we want to count $j$ times the coefficient of $y^jx^n$. This we can get by differentiating with respect to $y$ and then setting $y$ to $1$; the coefficient of $x^n$ in the result will be the desired count. 
But $$\begin{align} \left.\frac{\mathrm d}{\mathrm dy}g_k(x,y)\right|_{y=1} &=\left.\frac{\mathrm d}{\mathrm dy}p(x)\prod_m\left(1-(x^k)^m+y(x^k)^m\right)\right|_{y=1}\\ &=\left.p(x)\sum_{l=1}^\infty(x^k)^l\prod_{m\neq l}\left(1-(x^k)^m+y(x^k)^m\right)\right|_{y=1}\\ &=p(x)\sum_{l=1}^\infty(x^k)^l\\ &=p(x)\frac{x^k}{1-x^k}\;,\\ \end{align}$$ which coincides with the result for the left-hand side.
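For readers who would like to see the identity in action before trusting the generating functions, here is a brute-force check over small $n$ (the helper names are mine, not from the answer):

```python
def partitions(n, max_part=None):
    # all partitions of n as non-increasing tuples
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def f(lam, k):
    # frequency of the part k in lambda
    return lam.count(k)

def a(lam, i):
    # number of distinct parts of lambda occurring at least i times
    return len({j for j in lam if lam.count(j) >= i})

for n in range(1, 10):
    for k in range(1, n + 1):
        assert sum(f(lam, k) for lam in partitions(n)) == \
               sum(a(lam, k) for lam in partitions(n))
```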
{ "language": "en", "url": "https://math.stackexchange.com/questions/69244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
What is the meaning of a small o in between function names? i.e (f o g) I am helping with homework. I am stumped here - What is the meaning of the small round circle, or small "o" in question 19 (that I underlined) and question 20
It is function composition. If you have one function $f(x)$, and another function $g(x)$, then we can create a new function named $g\circ f$ (read as: "$g$ composed with $f$") that is defined as $$(g\circ f)(x)=g(f(x))$$ For example, if $f(x)=x+1$, and $g(x)=2x-1$, then $$(g\circ f)(x)=g(f(x))=g(x+1)=2(x+1)-1=2x+1$$
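The same example, sketched in Python:

```python
f = lambda x: x + 1
g = lambda x: 2 * x - 1

def compose(g, f):
    # (g ∘ f)(x) = g(f(x)): apply f first, then g
    return lambda x: g(f(x))

h = compose(g, f)
assert all(h(x) == 2 * x + 1 for x in range(-5, 6))
```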
{ "language": "en", "url": "https://math.stackexchange.com/questions/69295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
True vs. Provable Gödel's first incompleteness theorem states that "...For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system". What does it mean that a statement is true if it's not provable? What is the difference between true and provable?
Suppose I made some kind of statement like "For all natural numbers $n$, $P(n)$ holds." For my claim to be true, it must be that there is no counterexample. That is, there is no natural number $n_0$ such that $P(n_0)$ fails. Even if this is the case, I may not be able to prove that it is so. So, my statement could be true, but unprovable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 8, "answer_id": 0 }
Why are samples always taken from iid random variables? In most mathematical statistic textbook problems, a question always ask: Given you have $X_1, X_2, \ldots, X_n$ iid from a random sample with pdf:(some pdf). My question is why can't the sample come from one random variable such as $X_1$ since $X_1$ itself is a random variable. Why do you need the sample to come from multiple iid random variables?
Suppose the sample came from a single random variable $X$, say a die with sample $[3,5,1,3]$. To compute the probability of observing this sample, you'd start by writing down something like $$\text{Pr}\,(X=3\wedge X=5 \wedge X=1 \wedge X=3).$$ But this is zero, since the events $X=3$, $X=5$ and $X=1$ are mutually exclusive. Taking the math seriously, you might even come to the conclusion that the philosophical underpinnings of frequentism make no sense. There is no such thing as a single random variable which can be repeatedly sampled and produce different outcomes. Not sure if this argument is what Henning Makholm meant by "in order to fit the structure of the theory".
{ "language": "en", "url": "https://math.stackexchange.com/questions/69406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How many different combinations of $X$ sweaters can we buy if we have $Y$ colors to choose from? How many different combinations of $X$ sweaters can we buy if we have $Y$ colors to choose from? According to my teacher, the right way to think about this problem is to think of partitioning $X$ identical objects (sweaters) into $Y$ different categories (colors). Well, this idea yields the right answer, but I just couldn't convince myself of this way of thinking; to be precise, I couldn't link the wording of the problem to this approach. Could anybody throw some more light on this?
Think of it this way. You go into the shop to buy $X$ sweaters, all identical except for their colors. You know that they come in $Y$ different colors, and you hate to mix colors, so you bring $Y$ shopping bags with you, one for each possible color. Now you pick out your $X$ sweaters and put each one into the bag reserved for its color. You end up with $Y$ shopping bags containing a total of $X$ sweaters, though some of the bags may be empty. Now suppose that instead of labelling each bag with a color, you label it with a number from $1$ to $Y$. Let $x_1$ be the number of sweaters in bag $1$, $x_2$ the number of sweaters in bag $2$, and so on up to $x_Y$, the number of sweaters in bag $Y$. Then $$x_1 + x_2 + \dots + x_Y = X,\tag{1}$$ and each $x_k$ ($k=1,\dots,Y$) is a non-negative integer. Counting the different ways to choose the sweaters is the same as counting the solutions to $(1)$ in non-negative integers. Another version of the same problem goes like this: how many ways are there to distribute $X$ identical marbles amongst $Y$ numbered boxes? If $x_k$ is the number of marbles in Box $k$, we are again just counting solutions to $(1)$ in non-negative integers. This kind of problem is often called a stars and bars problem. Let me explain the solution in terms of the marbles and boxes. Suppose, for the sake of illustration, that $Y=5$ and $X=10$. One possible distribution of the marbles puts $0$ in the first box, $3$ in the second box, $0$ in the third box, $2$ in the fourth box, and $5$ in the fifth box. I can represent this arrangement by a line of ‘stars and bars’: $$|***||**|*****$$ The empty space before the first | represents the empty first box; the three *’s between the first and second |’s represent the three marbles in the second box; and so on. 
Every possible distribution of the $10$ marbles in the $5$ boxes can be uniquely represented by such a string of $10$ stars and $4$ bars, and each string of $10$ stars and $4$ bars corresponds to a unique distribution of the marbles. Thus, the number of distributions is the number of strings of $10$ stars and $4$ bars. Once you know which $4$ of the $14$ positions hold the bars, everything else is a star, so the answer is $\binom{14}{4}$. This of course is also the number of ways of picking out $10$ sweaters of $5$ possible colors and the number of solutions in non-negative integers of $(1)$ when $Y=5$ and $X=10$. More generally, $Y$ boxes will require $Y-1$ bars to separate them from one another, so you’ll have a string of $X$ stars and $Y-1$ bars, and there are $$\binom{X+Y-1}{Y-1} = \binom{X+Y-1}{X}$$ of these.
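A small brute-force check of the count against the closed form (a sketch):

```python
from itertools import product
from math import comb

def count_distributions(X, Y):
    # number of solutions of x_1 + ... + x_Y = X in non-negative integers
    return sum(1 for xs in product(range(X + 1), repeat=Y) if sum(xs) == X)

for X, Y in [(4, 3), (6, 2), (10, 5)]:
    assert count_distributions(X, Y) == comb(X + Y - 1, Y - 1)
assert comb(14, 4) == 1001   # the X = 10, Y = 5 illustration
```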
{ "language": "en", "url": "https://math.stackexchange.com/questions/69465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Closed form for a pair of continued fractions What is $1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{1}{4+\cdots}}}$ ? What is $1+\cfrac{2}{1+\cfrac{3}{1+\cdots}}$ ? It does bear some resemblance to the continued fraction for $e$, which is $2+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\cdots}}}$. Another thing I was wondering: can all transcendental numbers be expressed as infinite continued fractions containing only rational numbers? Of course for almost all transcendental numbers there does not exist any method to determine all the numerators and denominators.
I know how to do these. Here is the second question. First, a more natural one: $$ 1+\cfrac{1}{1+\cfrac{2}{1+\cfrac{3}{1+\ddots}}}= \frac{1}{\displaystyle e^{1/2}\sqrt{\frac{\pi}{2}}\;\mathrm{erfc}\left(\frac{1}{\sqrt{2}}\right)} \approx 1.525135276\cdots $$ So the original one is $$ 1+\cfrac{2}{1+\cfrac{3}{1+\ddots}} = \frac{1}{\displaystyle \frac{1}{ e^{1/2}\sqrt{\frac{\pi}{2}}\;\mathrm{erfc}\left(\frac{1}{\sqrt{2}}\right)}-1} \approx 1.9042712\cdots $$ [Here $\mathrm{erfc}$ is the complementary error function.]
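One can check this numerically: `math.erfc` is in the standard library, and for a continued fraction with positive entries the consecutive convergents bracket the limit.

```python
import math

def cf(depth):
    # convergent of 1 + 2/(1 + 3/(1 + 4/(1 + ...))), evaluated bottom-up
    t = 1.0
    for n in range(depth, 1, -1):
        t = 1.0 + n / t
    return t

inner = math.exp(0.5) * math.sqrt(math.pi / 2) * math.erfc(1 / math.sqrt(2))
closed = 1 / (1 / inner - 1)

lo, hi = sorted([cf(300), cf(301)])
assert lo <= closed <= hi      # consecutive convergents bracket the limit
assert hi - lo < 1e-6
assert abs(closed - 1.9042712) < 1e-5
```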
{ "language": "en", "url": "https://math.stackexchange.com/questions/69519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 1 }
What is wrong with my reasoning regarding finding volumes by integration? The problem from the book is (this is Calculus 2 stuff): Find the volume common to two spheres, each with radius $r$, if the center of each sphere lies on the surface of the other sphere. I put the center of one sphere at the origin, so its equation is $x^2 + y^2 + z^2 = r^2$. I put the center of the other sphere on the $x$-axis at $r$, so its equation is $(x-r)^2 + y^2 + z^2 = r^2$. By looking at the solid down the $y$- or $z$-axis it looks like a football. By looking at it down the $x$-axis, it looks like a circle. So, the spheres meet along a plane as can be confirmed by setting the two equations equal to each other and simplifying until you get $x = r/2$. So, my strategy is to integrate down the $x$-axis from 0 to $r/2$, getting the volume of the cap of one of the spheres and just doubling it, since the solid is symmetric. In other words, I want to take circular cross-sections along the $x$-axis, use the formula for the area of a circle to find their areas, and add them all up. The problem with this is that I need to find an equation for $r$ in terms of $x$, and it has to be quadratic rather than linear, otherwise I'll end up with the volume of a cone rather than a sphere. But when I solve for, say, $y^2$ in one equation, plug it into the other one, and solve for $r$, I get something like $r = \sqrt{2 x^2}$, which is linear.
If you restrict yourself to the $xy$ plane you have two intersecting circles. Now you want $y$ as a function of $x$ for the circle centered at $x=r$, so you can solve the equation for $y$, getting $y=\sqrt{r^2-x^2}$. Your confusion may have come from reusing $r$, once for the radius of the spheres and once for the radius perpendicular to the $x$ axis out to the right hand sphere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Determining the minimal number of generators of the maximal ideal of a local Noetherian ring Let ($A,m$) be a local, Noetherian ring. If $n$ is the minimal number of generators of the unique maximal ideal $m$, then by Krull's Hauptidealsatz and Nakayama's Lemma, we have the following inequality: $\dim(A) \leq n = \dim_{A/m}(m/m^{2})$. So, one way to determine $n$ is to compute $\dim_{A/m}(m/m^{2})$. And this might be a trivial question, but in general how difficult is it to determine the dimension of $m/m^{2}$ as an $A/m$-vector space? Also, what other techniques can one use to determine $n$? Also, in my notation $\dim_{A/m}(m/m^{2})$ is the dimension of a vector space, and $\dim(A)$ is the Krull dimension of the ring $A$.
An easy case is the geometric one. Suppose you have an algebraic variety $V$ over a field $k$ defined by the polynomials $F_1,F_2,...,F_r\in k[X_1,...,X_n]$, that is $V=V(F_1,F_2,...,F_r)\subset \mathbb A^n_k$ and the local ring you are interested in is $A=\mathcal O_{V,P}$ for some rational point $P\in V$ with coordinates $(a_1,a_2,...,a_n)\in k^n$. You can consider the jacobian matrix $Jac(P)=(\frac {\partial F_i}{\partial X_j}(P))_{i,j} \in k^{r\times n}$. The number you are interested in is then characterized by the rank over $k$ of this matrix and given by the formula $$ dim_k (m_{V,P} / m^2_{V,P})=n-rank(Jac(P)) $$ An unintuitive example If $V\subset \mathbb A^n_k$ is the curve parametrically given by $x_1=t^n,x_2=t^{n+1},...,x_n=t^{2n-1}$, all polynomials vanishing on the curve will have their partial derivatives zero at the origin $O=(0,...,0)$ and so $Jac(O)$ is the zero matrix. Hence $ dim_k (m_{V,O} / m^2_{V,O})=n-rank(Jac(O)) =n $. This reflects that the curve is very singular at the origin, which is what the invariant you are interested in is meant to detect (it would be $1$ instead of $n$ for a smooth curve). A nice application You can prove with the above formula that in $\mathbb A^3_k$ the union of three coplanar lines through the origin is not isomorphic to the union of the three coordinate axes, not even locally at the origin: the singularities are different (which is not so easy to see with the naked eye...)
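The Jacobian formula can be spot-checked on an assumed example not taken from the answer, the nodal cubic $V(F)\subset\mathbb A^2$ with $F = y^2 - x^3 - x^2$; the partial derivatives are written out by hand.

```python
def jac(x, y):
    # gradient of F = y^2 - x^3 - x^2, i.e. (dF/dx, dF/dy)
    return [-3 * x**2 - 2 * x, 2 * y]

def rank_1xn(row, eps=1e-12):
    # rank of a 1 x n matrix: 1 unless every entry vanishes
    return 0 if all(abs(v) < eps for v in row) else 1

n = 2
# at the singular origin the gradient vanishes, so dim m/m^2 = 2 - 0 = 2
assert n - rank_1xn(jac(0, 0)) == 2
# (3, 6) lies on the curve (36 = 27 + 9) and is a smooth point: 2 - 1 = 1
assert 6**2 == 3**3 + 3**2
assert n - rank_1xn(jac(3, 6)) == 1
```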
{ "language": "en", "url": "https://math.stackexchange.com/questions/69709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find x to keep the equality $\sqrt[2x+1]{\sqrt[11-4x]{(-2x)^{3x}}}=\sqrt[3x-1]{7x+2}$ $$\mbox{ Find }x \in \mathbb{Q} \mbox{ to keep the equality: } \sqrt[2x+1]{\sqrt[11-4x]{(-2x)^{3x}}}=\sqrt[3x-1]{7x+2}$$ I tried to write the roots using powers: \begin{align*}\sqrt[2x+1]{\sqrt[11-4x]{(-2x)^{3x}}}=\sqrt[3x-1]{7x+2}&\Rightarrow [(-2x)^{\frac{3x}{11-4x}}]^{\frac{1}{2x+1}}=(7x+2)^{\frac{1}{3x-1}}\\ &\Rightarrow (-2x)^{\frac{\frac{3x}{11-4x}}{2x+1}}=(7x+2)^{\frac{1}{3x-1}}\\ &\Rightarrow (-2x)^{\frac{3x}{(11-4x)(2x+1)}}=(7x+2)^{\frac{1}{3x-1}}\\ &\Rightarrow (-2x)^{\frac{3x}{-8x^2+18x+11}}=(7x+2)^{\frac{1}{3x-1}} \end{align*} I hope I did it right up to this point, but I'm stuck here. Can someone help me? Thanks.
Take logs and then use your favourite numerical root-finding algorithm. Hint: the logarithms are only both real for $-\frac{2}{7} < x < 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to prove that proj(proj(b onto a) onto a) = proj(b onto a)? How to prove that proj(proj(b onto a) onto a) = proj(b onto a)? It makes perfect sense conceptually, but I keep going in circles when I try to prove it mathematically. Any help would be appreciated.
Note that a projection $P$ satisfies $P^2 = P$. You are just applying a projection twice, so that's the same thing as applying it once.
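A numeric sketch of this idempotence for vectors in $\mathbb{R}^3$:

```python
def proj(b, a):
    # orthogonal projection of b onto the line spanned by a
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    c = dot(b, a) / dot(a, a)
    return [c * x for x in a]

a, b = [1.0, 2.0, 2.0], [3.0, -1.0, 4.0]
p = proj(b, a)
# projecting a second time changes nothing: P(P(b)) = P(b)
assert all(abs(x - y) < 1e-12 for x, y in zip(proj(p, a), p))
```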
{ "language": "en", "url": "https://math.stackexchange.com/questions/69834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Counting Number of k-tuples Let $A = \{a_1, \dots, a_n\}$ be a collection of distinct elements and let $S$ denote the collection all $k$-tuples $(a_{i_1}, \dots a_{i_k})$ where $i_1, \dots i_k$ is an increasing sequence of numbers from the set $\{1, \dots n \}$. How can one prove rigorously, and from first principles, that the number of elements in $S$ is given by $n \choose k$?
Do you agree that the set $S$ is in bijection with all subsets of $A$ with $k$ elements? Srivatsan Narayanan's comment should clear this up. Please ask for clarification if it does not. Every subset of $A$ with $k$ elements can be realized as the first $k$ elements of some ordering of $A$. Say two orderings are "equivalent" if they give rise to the same subset of $A$ with $k$ elements. There are $n!$ orderings of $A$, but there are $k!$ equivalent ways to order the first $k$ elements, and $(n-k)!$ equivalent ways to order the final $n-k$ elements. Thus, there are \begin{equation*} \frac{n!}{k!(n-k)!} = {n \choose k} \end{equation*} equivalent orderings of $A$. In particular there are ${n \choose k}$ distinct subsets of $A$ with $k$ elements.
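The bijection is easy to see concretely (a sketch):

```python
from itertools import combinations
from math import comb

n, k = 6, 3
A = [f"a{i}" for i in range(1, n + 1)]
# k-tuples with strictly increasing indices are exactly the k-element
# subsets, each listed once in its canonical order
S = list(combinations(A, k))
assert len(S) == comb(n, k) == 20
assert len(set(map(frozenset, S))) == len(S)   # the map to subsets is injective
```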
{ "language": "en", "url": "https://math.stackexchange.com/questions/69887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
ODE question: $y'+A(t) y =B(t)$, with $y(0)=0, B>0$ implies $y\ge 0$; another proof? I am trying to prove that the solution of the differential equation $$y'+A(t) y =B(t)$$ with initial condition $y(0)=0$ and the assumption that $B\ge 0$, is non-negative for all $t\ge 0$, i.e. $y(t)\ge 0$ for all $t\ge 0$. The one I know is constructive: First consider the homogeneous ODE $x'+\frac{1}{2} A(t) x=0$ and let $u'=\frac{B}{x^2}$ with $u(0)=0$. Then $y=\frac{ux^2}{2}$ will satisfy the original ODE for $y$. Clearly, by the construction of $y$, $y\ge 0$ for $t\ge 0$. But this proof is not natural in my opinion; there is no reason (at least none that I saw) to construct such $x(t)$ and $u(t)$. So is there any other way to prove this fact?
Well, since it is easy to explicitly solve such ODEs, the most natural approach to me would have been to analyse the explicit solution. The solution is $$ y(t) = \frac{ \int^t_0 \exp\left(\int^s_0 A(p) dp \right) B(s) ds }{ \exp \left(\int^t_0 A(p) dp \right)}$$ and since $\exp (x) > 0 $ for all $ x \in \mathbb{R}$ and it is given that $B(s) \geq 0 $, the numerator is non-negative while the denominator is strictly positive, and so $y(t) \geq 0.$
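A quick sanity check by direct numerical integration of the ODE, with arbitrary sample coefficients (chosen purely for illustration; any $B \ge 0$ would do):

```python
import math

def solve(A, B, T, h=1e-3):
    # forward Euler for y' = B(t) - A(t) y with y(0) = 0
    t, y, ys = 0.0, 0.0, [0.0]
    while t < T:
        y += h * (B(t) - A(t) * y)
        t += h
        ys.append(y)
    return ys

A = lambda t: math.sin(t)             # sample coefficient
B = lambda t: 1 + math.cos(t) ** 2    # B >= 0
assert all(v >= -1e-9 for v in solve(A, B, 10.0))
```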
{ "language": "en", "url": "https://math.stackexchange.com/questions/69930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Functions with subscripts? In the equation: $f_\theta(x)=\theta_1x$ Is there a reason that $\theta$ might be a subscript of $f$ and not either a second parameter or left out of the left side of the equation altogether? Does it differ from the following? $f(x,\theta)=\theta_1x$ (I've been following the Machine Learning class and the instructor uses this notation that I've not seen before)
I personally dislike the notation. And I believe it means different things in different contexts. For example, it means the opposite of what the other answers say in probability; see: https://en.wikipedia.org/wiki/Likelihood_function#Discrete_probability_distribution There it is shown that $P_{\theta}(X = x)$ and $p_{\theta}(x)$ both mean that $X$ (respectively $x$) is held fixed in this context and we are plotting with respect to $\theta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Mathematical reason for the validity of the equation: $S = 1 + x^2 \, S$ Given the geometric series: $1 + x^2 + x^4 + x^6 + x^8 + \cdots$ We can recast it as: $S = 1 + x^2 \, (1 + x^2 + x^4 + x^6 + x^8 + \cdots)$, where $S = 1 + x^2 + x^4 + x^6 + x^8 + \cdots$. This recasting is possible only because there is an infinite number of terms in $S$. Exactly how is this mathematically possible? (Related, but not identical, question: General question on relation between infinite series and complex numbers).
$S = 1 + x^2 \, S$ is true even in the ring of formal power series. No convergence is needed here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
What does the big cap notation mean? I'm trying to understand How can an ordered pair be expressed as a set? and I don't know what the big cap/cup notations mean when placed next to an ordered pair: $\bigcap(a,b)$ and $\bigcup(a,b)$.
This is actually answered in the linked question, but for clarification, if by definition $(a,b)=\{\{a\},\{a,b\}\}$ then $$\bigcap(a,b) = \bigcap\{\{a\},\{a,b\}\} = \{a\} \cap \{a,b\} = \{a\}$$ and $$\bigcup(a,b) = \bigcup\{\{a\},\{a,b\}\} = \{a\} \cup \{a,b\} = \{a,b\}.$$
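This computation is easy to replay with frozensets (a sketch):

```python
def pair(a, b):
    # Kuratowski pair (a, b) = {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

def big_cap(S):
    # intersection of all members of S
    it = iter(S)
    out = set(next(it))
    for s in it:
        out &= s
    return out

def big_cup(S):
    # union of all members of S
    out = set()
    for s in S:
        out |= s
    return out

p = pair(1, 2)
assert big_cap(p) == {1}
assert big_cup(p) == {1, 2}
```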
{ "language": "en", "url": "https://math.stackexchange.com/questions/70124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
For a multiplication operator $M_f$ on $L^2$ with $f\geq 0$, is $SM_fS^{*}$ positive? I have the following problem. Let $\Omega \subset R^n$ have finite measure, let $H = L^2(\Omega)$ and let $S: H \to H$ be a bounded linear operator. Then it is well known that $P = SS^*$ is a positive operator, i.e., $(Px, x) \geq 0$. for all $x \in H$. Now let $M_f:H \to H$ be the multiplication operator induced by $f: \Omega \to R$ where $f \geq 0$ (or even $f \geq c > 0$). Is it true that $P_1 = SM_fS^*$ is also positive? If so, do you have a proof or reference? Thanks in advance.
Careful, you have the wrong sense of "positive". An operator of the form $A = S S^*$ on $L^2$ need not have the property that $Ah \ge 0$ for any $h \ge 0$ (this is sometimes called "positivity-preserving"). Even in finite dimensions this may fail: take $$ S = \begin{pmatrix} 1 & 2 \\ 1 & -2 \end{pmatrix}, \quad x = \begin{pmatrix}1 \\ 0 \end{pmatrix}.$$ Then $x \ge 0$, but $S S^* x = \begin{pmatrix}5 \\ -3 \end{pmatrix}$. Instead, $A = S S^*$ is positive in the sense that $(A h, h) \ge 0$ for any $h$ in the Hilbert space. We might say that $A$ is "positive semidefinite". This latter property is obvious in this case because $(S S^* h, h) = (S^* h, S^* h) = ||S^* h||^2 \ge 0$.
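The counterexample can be checked with a few lines of hand-rolled matrix arithmetic:

```python
S = [[1, 2], [1, -2]]
St = [[S[j][i] for j in range(2)] for i in range(2)]        # S^T

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[sum(S[i][k] * St[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                                     # A = S S^T

x = [1, 0]
assert matvec(A, x) == [5, -3]   # x >= 0 but A x is not: not positivity-preserving
# yet (A h, h) = ||S^T h||^2 >= 0 for every h: A is positive semidefinite
for h in [[1, 0], [0, 1], [1, -1], [2, 3]]:
    Sh = matvec(St, h)
    Ah = matvec(A, h)
    assert sum(u * v for u, v in zip(Ah, h)) == sum(v * v for v in Sh)
```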
{ "language": "en", "url": "https://math.stackexchange.com/questions/70245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Solve the phase plane equation to obtain the integral curves for the system: Solve the phase plane equation to obtain the integral curves for the system: $$\begin{align*}\frac{\mathrm dx}{\mathrm dt}&=2y-x\\\frac{\mathrm dy}{\mathrm dt}&=e^x+y\end{align*}$$ It's for a 200 level paper; differential equations. What is important with this question is that it is non-linear. So he can't use linear methods. He will try a Jacobian matrix and see if that gives anything useful. His textbook also says he could maybe change it into polar coordinates, but that doesn't seem to be helping.
The phase plane equation for the integral curves is $$\def\part#1#2{\frac{\mathrm d#1}{\mathrm d#2}} \part yx=\frac{\part yt}{\part xt}=\frac{\mathrm e^x+y}{2y-x}\;.$$ To get a grip on this, it helps to expand around the equilibrium. At equilibrium, $\part xt=2y-x=0$, and thus $y=x/2$, so we can write $y=x/2+z$, with $y'=1/2+z'$. Substituting this into the differential equation for $y$ yields $$ \begin{align} \frac12+z' &=\frac{\mathrm e^x+\frac x2+z}{2(\frac x2 + z)-x}\\ &=\frac{\mathrm e^x+\frac x2+z}{2z}\;,\\ z' &=\frac{\mathrm e^x+\frac x2}{2z}\;,\\ 2zz' &=\mathrm e^x+\frac x2\;,\\ z^2 &=\mathrm e^x+\frac {x^2}4+C\;,\\ z &=\pm\sqrt{\mathrm e^x+\frac {x^2}4+C}\;,\\ y &=\frac x2\pm\sqrt{\mathrm e^x+\frac {x^2}4+C}\;. \end{align}$$
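A numerical spot check (via finite differences, a sketch) that these curves really satisfy the phase plane equation:

```python
import math

def y_curve(x, C=1.0, sign=1):
    # y = x/2 ± sqrt(e^x + x^2/4 + C)
    return x / 2 + sign * math.sqrt(math.exp(x) + x * x / 4 + C)

def dy_dx(x, C=1.0, sign=1, h=1e-6):
    # central difference approximation to y'(x)
    return (y_curve(x + h, C, sign) - y_curve(x - h, C, sign)) / (2 * h)

for sign in (1, -1):
    for x in [-1.0, 0.0, 0.5, 1.5]:
        y = y_curve(x, sign=sign)
        rhs = (math.exp(x) + y) / (2 * y - x)
        assert abs(dy_dx(x, sign=sign) - rhs) < 1e-5
```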
{ "language": "en", "url": "https://math.stackexchange.com/questions/70332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding inverse cosh I am trying to find $\cosh^{-1}1$. I end up with something that looks like $e^y+e^{-y}=2x$. I followed the formula correctly, so I believe that is correct up to this point. I then plug in $1$ for $x$ and I get $e^y+e^{-y}=2$ which, according to my mathematical knowledge, is still correct. From here I have absolutely no idea what to do, as anything I try gives me an incredibly complicated problem or the wrong answer.
It may be more helpful to consider the significant hyperbolic identities first. We have in general: $\small \begin{array} {rcllll} 1)& \exp(z) &=& \cosh(z) + \sinh(z) \\ 2)& 1 &=& \cosh(z)^2 - \sinh(z)^2 \\ &&& \implies \\ 3)&\sinh(z) &=& \pm \sqrt{\cosh(z)^2-1} & \text{ using 2)}\\ 4)& \exp(z)&=& \cosh(z) \pm \sqrt{\cosh(z)^2-1} & \text{ using 1) and 3)}\\ \end{array} $ Now the given problem is to find another expression for $\small y=\cosh^{-1}(x)$ which means $\small x = \cosh(y) $ We use 4) and insert our current y for the general z to get $\small \begin{array} {rcllll} 5)& \exp(y)&=& \cosh(y) \pm \sqrt{\cosh(y)^2-1} & \text{ using 4)}\\ 6)& \exp(y)&=& x \pm \sqrt{x^2-1} & \text{ inserting x for } \cosh(y)\\ 7)& y&=& \log(x \pm \sqrt{x^2-1} ) & \\ 8)& \cosh^{-1}(x)&=& \log(x \pm \sqrt{x^2-1} ) &\text{ inserting } \cosh^{-1}(x) \text{ for } y \\ 9)& \cosh^{-1}(1)&=& ??? \\ \end{array} $ Now 8) can be used as a new, general hyperbolic identity like that in the list from 1) to 4) and 9) is your remaining little to-do ...
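A numeric check of identity 8), and of the remaining to-do in 9), using the standard library's `math.acosh` as a reference:

```python
import math

# identity 8): cosh^{-1}(x) = log(x + sqrt(x^2 - 1)) for x >= 1
for x in [1.0, 1.5, 2.0, 10.0]:
    assert abs(math.log(x + math.sqrt(x * x - 1)) - math.acosh(x)) < 1e-12
# 9): cosh^{-1}(1) = log(1 + sqrt(0)) = log(1) = 0
assert math.acosh(1) == 0.0
```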
{ "language": "en", "url": "https://math.stackexchange.com/questions/70500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Span of permutation matrices The set $P$ of $n \times n$ permutation matrices spans a subspace of dimension $(n-1)^2+1$ within, say, the $n \times n$ complex matrices. Is there another description of this space? In particular, I am interested in a description of a subset of the permutation matrices which will form a basis. For $n=1$ and $2$, this is completely trivial -- the set of all permutation matrices is linearly independent. For $n=3$, the dimension of their span is $5$, and any five of the six permutation matrices are linearly independent, as can be seen from the following dependence relation: $$ \sum_{M \in P} \det (M) \ M = 0 $$ So even in the case $n=4$, is there a natural description of a $10$ matrix basis?
The Birkhoff–von Neumann theorem states that the convex hull of permutation matrices is the set of all doubly stochastic matrices. Hence the span of all permutation matrices is given by $S=\{X\in M_{n,n}(\mathbb{C}): \textrm{ all column sums and row sums of } X \textrm{ are equal}\}$.
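The dimension $(n-1)^2+1$ can be verified exactly over the rationals for small $n$ with hand-rolled Gaussian elimination (a sketch):

```python
from fractions import Fraction
from itertools import permutations

def rank(rows):
    # Gaussian elimination over the rationals
    rows = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

for n in range(1, 5):
    vecs = []
    for p in permutations(range(n)):
        M = [[1 if p[i] == j else 0 for j in range(n)] for i in range(n)]
        vecs.append([v for row in M for v in row])   # flatten to a vector
    assert rank(vecs) == (n - 1) ** 2 + 1
```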
{ "language": "en", "url": "https://math.stackexchange.com/questions/70569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Connected components of subspaces vs. space If $Y$ is a subspace of $X$, and $C$ is a connected component of $Y$, then C need not be a connected component of $X$ (take for instance two disjoint open discs in $\mathbb{R}^2$). But I read that, under the same hypothesis, $C$ need not even be connected in $X$. Could you please provide me with an example, or point me towards one? Thank you. SOURCE http://www.filedropper.com/manifolds2 Page 129, paragraph following formula (A.7.16).
Are you sure you read correctly? Suppose $C$ is not connected in $X$. By definition this means that there exist two open sets $U, V\subset X$ such that $U\cap C \neq \emptyset$, $V\cap C\neq \emptyset$, $U\cap V = \emptyset$, and $C\subset U\cup V$. But then by definition of the subspace topology, $U' = U\cap Y$ is open in $Y$, and $V' = V\cap Y$ is open in $Y$. And since $C\subset Y$, you have that $U' \cap V' = \emptyset$ and $C \subset U'\cup V'$, and $U'\cap C = U\cap C \neq \emptyset$ (similarly for $V'$), so $C$ cannot be connected in $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Countable subadditivity of the Lebesgue measure Let $\lbrace F_n \rbrace$ be a sequence of sets in a $\sigma$-algebra $\mathcal{A}$. I want to show that $$m\left(\bigcup F_n\right)\leq \sum m\left(F_n\right)$$ where $m$ is a countable additive measure defined for all sets in a $\sigma$ algebra $\mathcal{A}$. I think I have to use the monotonicity property somewhere in the proof, but I don't how to start it. I'd appreciate a little help. Thanks. Added: From Hans' answer I make the following additions. From the construction given in Hans' answer, it is clear the $\bigcup F_n = \bigcup G_n$ and $G_n \cap G_m = \emptyset$ for all $m\neq n$. So $$m\left(\bigcup F_n\right)=m\left(\bigcup G_n\right) = \sum m\left(G_n\right).$$ Also from the construction, we have $G_n \subset F_n$ for all $n$ and so by monotonicity, we have $m\left(G_n\right) \leq m\left(F_n\right)$. Finally we would have $$\sum m(G_n) \leq \sum m(F_n).$$ and the result follows.
This link gives a proof that does not use the monotonicity mentioned above. It is based on the idea of "giving each set a room of $\epsilon$" (the $\epsilon/2^n$ trick). http://mathonline.wikidot.com/countable-subadditivity-of-the-lebesgue-outer-measure#toc0
{ "language": "en", "url": "https://math.stackexchange.com/questions/70676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
In classical logic, why is $(p\Rightarrow q)$ True if $p$ is False and $q$ is True? Provided we have this truth table where "$p\implies q$" means "if $p$ then $q$": $$\begin{array}{|c|c|c|} \hline p&q&p\implies q\\ \hline T&T&T\\ T&F&F\\ F&T&T\\ F&F&T\\\hline \end{array}$$ My understanding is that "$p\implies q$" means "when there is $p$, there is q". The second row in the truth table where $p$ is true and $q$ is false would then contradict "$p\implies q$" because there is no $q$ when $p$ is present. Why then, does the third row of the truth table not contradict "$p\implies q$"? If $q$ is true when $p$ is false, then $p$ is not a condition of $q$. I have not taken any logic class so please explain it in layman's terms. Administrative note. You may experience being directed here even though your question was actually about line 4 of the truth table instead. In that case, see the companion question In classical logic, why is $(p\Rightarrow q)$ True if both $p$ and $q$ are False? And even if your original worry was about line 3, it might be useful to skim the other question anyway; many of the answers to either question attempt to explain both lines.
The statement $(P \land Q) \to P$ should be true, no matter what. So, we should have: \begin{array}{cc|ccc} P&Q&(P \land Q) & \to & P\\ \hline T&T&T&T&T\\ T&F&F&T&T\\ F&T&F&T&F\\ F&F&F&T&F\\ \end{array} Line 2 shows that we should therefore have that $F \to T = T$ Also note that line 1 forces $T \to T = T$, and that line 4 forces $F \to F=T$, which are another two values of the truth-table for $\to$ that people sometimes wonder about. So, together with the uncontroversial $T \to F = F$, the above give a justification for why we define the $\to$ the way we do.
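A quick brute-force check of this argument in Python (the helper name `implies` is my own choice): encode the classical truth table for $\to$ and confirm that $(P\land Q)\to P$ holds on all four rows, which pins down the value of $F\to T$.

```python
# Brute-force check of the argument above: with the classical truth
# table for "->", the tautology (P and Q) -> P holds on all four rows,
# and row 2 (P=T, Q=F) exercises the case F -> T.
def implies(p, q):
    return (not p) or q  # classical material implication

rows = [(p, q) for p in (True, False) for q in (True, False)]
assert all(implies(p and q, p) for p, q in rows)
assert implies(False, True)       # the line-3 case: F -> T is True
assert not implies(True, False)   # the only False row
```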
{ "language": "en", "url": "https://math.stackexchange.com/questions/70736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84", "answer_count": 16, "answer_id": 3 }
A ring element with a left inverse but no right inverse? Can I have a hint on how to construct a ring $A$ such that there are $a, b \in A$ for which $ab = 1$ but $ba \neq 1$, please? It seems that square matrices over a field are out of question because of the determinants, and that implies that no faithful finite-dimensional representation must exist, and my imagination seems to have given up on me :)
Consider the ring of infinite matrices which have finitely many non-zero elements both in each row and in each column and the matrix $$a=\begin{pmatrix}0&0&0&\cdots\\1&0&0&\cdots\\0&1&0&\cdots\\\ddots&\ddots&\ddots&\ddots\end{pmatrix}.$$ A canonical example is the quotient $A$ of the free algebra $k\langle x,y\rangle$ by the two-sided ideal generated by $yx-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 2, "answer_id": 1 }
Arranging a $30$ character word with letters $x, y, z$ A word of length $30$ needs to be formed from the letters $x, y, z$ (repeatable) with the following conditions: * *$y$ cannot occur more than once consecutively *$z$ cannot occur more than twice consecutively. The question is how many such words are possible. Thanks, Kiran
Let us denote by $X_n$ the number of possible words of length $n$ ending with the letter $X$, and similarly $Y_n, Z_n$. Then the following recurrence applies: $$X_{n+1} = X_n + Y_n + Z_n$$ $$Y_{n+1} = X_n + Z_n$$ $$Z_{n+1} = X_n + Y_n$$ $$X_1 = Y_1 = Z_1 = 1$$ This can be easily programmed to obtain the requested number of possibilities: $$X_{30} + Y_{30} + Z_{30} = X_{31} = 367'296'043'199$$ The recurrence can be further simplified by using $Y_n = Z_n$ (see the comment by @Gerry): $$X_n = 2 X_{n-1} + X_{n-2}$$ We notice that the number of sequences of length $n$ is $X_{n+1}$. The sequence $X_n, n \ge 1$, is $\{1, 3, 7, 17, 41, 99, 239, 577, ...\}$. These are the numerators of the continued fraction convergents to $\sqrt{2}$. This page also gives the explicit solution: $$X_n = \frac{1}{2}[(1-\sqrt{2})^n + (1+\sqrt{2})^n]$$ The sequence $Y_n$ (and $Z_n$) is $\{1, 2, 5, 12, 29, 70, 169, ...\}$, which are the Pell numbers. Since you are also interested in a program for generating the words, I posted a running C++ program to Ideone. The program represents a string $XYZ...$ by an integer $a = 012..$ in base 3, which is stored as an array of integers $[0,1,2...]$. A new combination is generated by adding $1$ to $a$ in the base-3 system and ignoring the forbidden combinations $..11..22..$. EDIT. I included the explicit recurrence for $X_n$ based on the comment by @Gerry. He also mentions the method for finding the explicit solution.
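Here is a minimal Python sketch of the recurrence exactly as stated above (the function name is my choice); run verbatim it reproduces the quoted total.

```python
# X, Y, Z = number of admissible words of the current length ending in
# x, y, z respectively, following the recurrence above with
# X_1 = Y_1 = Z_1 = 1.
def count_words(n):
    x = y = z = 1
    for _ in range(n - 1):
        x, y, z = x + y + z, x + z, x + y
    return x + y + z  # this sum equals X_{n+1} by the first equation

print(count_words(30))  # 367296043199
```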
{ "language": "en", "url": "https://math.stackexchange.com/questions/70829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Lesser-known integration tricks I am currently studying for the GRE math subject test, which heavily tests calculus. I've reviewed most of the basic calculus techniques (integration by parts, trig substitutions, etc.) I am now looking for a list or reference for some lesser-known tricks or clever substitutions that are useful in integration. For example, I learned of this trick $$\int_a^b f(x) \, dx = \int_a^b f(a + b -x) \, dx$$ in the question Showing that $\int\limits_{-a}^a \frac{f(x)}{1+e^{x}} \mathrm dx = \int\limits_0^a f(x) \mathrm dx$, when $f$ is even I am especially interested in tricks that can be used without an excessive amount of computation, as I believe (or hope?) that these will be what is useful for the GRE.
I really like the so-called Schwinger trick that is based on identities like the following: $$ \frac{1}{\alpha^\nu}=\frac{1}{\Gamma(\nu)}\int^{\infty}_{0} \tau^{\nu-1}\mathrm{e}^{-\alpha \tau}\mathrm{d}\tau.$$ This technique is very useful, as you can see here.
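A numerical sanity check of the identity (my addition; the quadrature upper limit and step count are ad-hoc choices, not part of the trick):

```python
import math

# Check  1/a**nu == (1/Gamma(nu)) * ∫_0^∞ t**(nu-1) * exp(-a*t) dt
# with a crude midpoint-rule quadrature.
def schwinger_lhs(a, nu):
    return a ** (-nu)

def schwinger_rhs(a, nu, upper=80.0, steps=200_000):
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h  # midpoint rule avoids t = 0
        total += t ** (nu - 1) * math.exp(-a * t)
    return total * h / math.gamma(nu)

print(abs(schwinger_lhs(2.0, 3.0) - schwinger_rhs(2.0, 3.0)) < 1e-6)  # True
```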
{ "language": "en", "url": "https://math.stackexchange.com/questions/70974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "169", "answer_count": 8, "answer_id": 7 }
Fourier transform Suppose $1< p<\infty$. Let $f$ be a continuous function with compact support defined on $\mathbb{R}$. Does it exist a function $g \in L^p(\mathbb{T})$ such that: $$ \widehat{f}|_{\mathbb{Z}}=\widehat{g} $$ where $\widehat{f}$ denote the Fourier transform on $\mathbb{R}$ and $\widehat{g}$ the Fourier transform on $\mathbb{T}$ ?
Taking $$ g(x)=\sum_{n=-\infty}^\infty f(x+2\pi n) $$ will give the desired result since $g\in C(\mathbb{T})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Division by $0$ Everyone knows that $(x/y)\times y = x$. So why does $(x/0)\times 0 \ne x$? According to Wolfram Alpha, it is 'indeterminate'. What does this mean? Also, are there any other exceptions to that rule?
$x/y$ means "the unique number such that $y \cdot (x/y) = x$." If $x$ is any number, does there exist a unique number $a$ such that $0 \cdot a = x\;$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/71114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the derivative of a modular function a modular function Fix a positive integer $n$. Let $f:\mathbf{H}\longrightarrow \mathbf{C}$ be a modular function with respect to the group $\Gamma(n)$. Is the derivative $$\frac{df}{d\tau}:\mathbf{H}\longrightarrow \mathbf{C}$$ also a modular function with respect to $\Gamma(n)$? I think it's clear that $df/d\tau$ is meromorphic on $\mathbf{H}$ and that it is meromorphic at the cusp. I just don't know why it should be modular with respect to $\Gamma(n)$.
Suppose $f(\tau)$ is a modular function of weight $m$, i.e. for $\left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \Gamma(n)$, $f\left( \frac{a \tau + b}{c \tau + d} \right) = \left(c \tau + d \right)^{m} f( \tau )$. Differentiating this equality: $$ \begin{eqnarray} \frac{\mathrm{d}}{\mathrm{d} \tau}\left( f\left( \frac{a \tau + b}{c \tau + d} \right) \right) &=& \frac{\mathrm{d}}{\mathrm{d} \tau} \left( \left(c \tau + d \right)^{m} f( \tau ) \right) \\ f^\prime\left( \frac{a \tau + b}{c \tau + d} \right) \frac{\mathrm{d}}{\mathrm{d} \tau}\left( \frac{a \tau + b}{c \tau + d} \right) &=& \left(c \tau + d \right)^{m} f^\prime(\tau) + m c \left(c \tau + d \right)^{m-1} f(\tau)\\ f^\prime\left( \frac{a \tau + b}{c \tau + d} \right) \left( \frac{a d - b c}{(c \tau + d)^2} \right) &=& \left(c \tau + d \right)^{m} f^\prime(\tau) + m c \left(c \tau + d \right)^{m-1} f(\tau) \end{eqnarray} $$ Even though $ a d - b c = 1$, the resulting equation shows that the derivative is not a modular function of any weight, except $m=0$, in which case $f^\prime(\tau)$ is a modular function of weight $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proof that series diverges Prove that $\displaystyle\sum_{n=1}^\infty\frac{1}{n(1+1/2+\cdots+1/n)}$ diverges. I think the only way to prove this is to find another series to compare using the comparison or limit tests. So far, I have been unable to find such a series.
Hints: 1. For every $k\geqslant0$, the sum of $\frac1n$ from $n=2^k$ to $n=2^{k+1}-1$ is at most $1$. 2. For every $k\geqslant0$ and every $n$ such that $2^k\leqslant n\leqslant 2^{k+1}-1$, $1+\frac12+\cdots+\frac1n\leqslant k+1$ and $\frac1n\geqslant\frac1{2^k}$. 3. For every $k\geqslant0$, the terms from $n=2^k$ to $n=2^{k+1}-1$ of the series which interests you sum to at least $\frac1{k+1}$. 4. The harmonic series diverges.
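A numerical illustration of these hints (my own addition): the partial sums $S_N=\sum_{n\le N}\frac{1}{nH_n}$ track $\log H_N$, so they are unbounded but grow extremely slowly.

```python
# Partial sums of 1/(n*H_n) grow like log(H_N): slow but unbounded.
def partial_sum(N):
    s = h = 0.0
    for n in range(1, N + 1):
        h += 1.0 / n          # running harmonic number H_n
        s += 1.0 / (n * h)    # current term of the series
    return s

print(partial_sum(1000))       # roughly log(H_1000) plus a constant
print(partial_sum(1_000_000))  # noticeably larger still
```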
{ "language": "en", "url": "https://math.stackexchange.com/questions/71215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
On the meaning of being algebraically closed The definition of algebraic number is that $\alpha$ is an algebraic number if there is a nonzero polynomial $p(x)$ in $\mathbb{Q}[x]$ such that $p(\alpha)=0$. By algebraic closure, every nonconstant polynomial with algebraic coefficients has algebraic roots; then, there will be also a nonconstant polynomial with rational coefficients that has those roots. I feel uncomfortable with the idea that the root of a polynomial with algebraic coefficients is again algebraic; why are we sure that for every polynomial in $\mathbb{\bar{Q}}[x]$ we could find a polynomial in $\mathbb{Q}[x]$ that has the same roots? I apologize if I'm asking something really trivial or my question comes from a big misunderstanding of basic concepts.
Define $\mathbb A$ as the set of all complex roots of rational polynomials; then we want to prove that $\mathbb A$ is algebraically closed. Let $f=X^n+a_{n-1}X^{n-1}+\cdots+a_0$ be a polynomial with coefficients in $\mathbb A$. I assume that we already know that $\mathbb A$ is a field, so taking the leading coefficient to be $1$ does not lose generality. Let $S$ be $\mathbb Q[a_0,\ldots,a_{n-1}]$, the smallest ring extension of $\mathbb Q$ that contains all of the coefficients of $f$. $S$ is a finite-dimensional vector space over $\mathbb Q$, because it is spanned by all products of powers of the $a_i$'s up to the degree of the rational polynomial each $a_i$ is a root in. Now let $\beta\in\mathbb C$ be a root of $f$. Then $S[\beta]$, the smallest ring extension of $S$ that contains $\beta$, is a finite-dimensional vector space over $S$ (actually a finitely generated module, except that it turns out that $S$ is in fact a field), because it is spanned by powers of $\beta$ from $1$ up to $\beta^{n-1}$. Thus, in particular $S[\beta]$ is also a finite-dimensional vector space over $\mathbb Q$. Now take sufficiently many powers of $\beta$, enough that there are more of them than the dimension of $S[\beta]$ over $\mathbb Q$. They must then be linearly dependent. But a nontrivial rational linear relation between powers of $\beta$ is exactly a rational polynomial that has $\beta$ as a root. So $\beta\in\mathbb A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 2 }
Help solving this integral $\int_{-\infty}^{\infty} \frac{x^2 e^{-x^2/2}}{a+bx^2}dx$ So, I've got an integral in the following form: $$\int_{-\infty}^{\infty} \frac{x^2 e^{-x^2/2}}{a+bx^2}dx$$ where $b<0$ and $a\in\mathbb{R}$. I've tried substituting $y=x^2$ (after changing changing lower limit to 0 and multiplying by 2 of course) and $z=y+a$ but there is that pesky square root in the denominator... Anyone with better ideas? Is this thing even soluble?
Consider the function $$\mathcal{I}(a)=\int_{-\infty}^{+\infty} \frac{a}{a^2+x^2}e^{-(a^2+x^2)}dx.$$ Integration by parts gives $$\mathcal{I}(a)=\left[\tan^{-1}\left(\frac{x}{a}\right) e^{-(a^2+x^2)}\right]_{-\infty}^{+\infty}-\int_{-\infty}^{+\infty}\tan^{-1}\left(\frac{x}{a}\right)(-2x)e^{-(a^2+x^2)}dx$$ $$=\int_{-\infty}^{+\infty}\tan^{-1}\left(\frac{x}{a}\right)2xe^{-(a^2+x^2)}dx.$$ Now differentiate $\mathcal{I}$ with respect to $a$ and obtain $$\frac{d\,\mathcal{I}}{da}=\int_{-\infty}^{+\infty}\left[-\frac{x}{a^2+x^2}\right]2xe^{-(a^2+x^2)}+\tan^{-1}\left(\frac{x}{a}\right)\left[(-2a)2xe^{-(a^2+x^2)}\right]dx$$ $$=-\int_{-\infty}^{+\infty}\frac{2x^2}{a^2+x^2}e^{-(a^2+x^2)}dx-2a\mathcal{I}(a) $$ $$=-\int_{-\infty}^{+\infty}\left(\frac{2x^2}{a^2+x^2}+2a\frac{a}{a^2+x^2}\right)e^{-(a^2+x^2)}dx $$ $$=-2\int_{-\infty}^{+\infty}e^{-(x^2+a^2)}dx=-2\sqrt{\pi}e^{-a^2}.$$ Equipped with the fact $\lim\limits_{a\to\infty}\mathcal{I}(a)=0$, we arrive at $$\mathcal{I}(a)=\int_{+\infty}^a -2\sqrt{\pi}e^{-u^2}du= \pi \,\mathrm{erfc}(a),$$ where $\mathrm{erfc}$ is the complementary error function. Note this agrees as $a\to0$ because of the distributional fact that $a/(a^2+x^2)\to\delta(x)$. This implies $$\int_{-\infty}^{+\infty}\frac{1}{x^2+a}e^{-x^2}dx=\pi e^a\frac{\mathrm{erfc}\left(\sqrt{a}\right)}{\sqrt{a}}.$$ Finally, observe that $$\int_{-\infty}^{+\infty}\frac{x^2}{a+bx^2}e^{-x^2/2}dx=\frac{1}{b}\int_{-\infty}^{+\infty}\left(1-\frac{a}{a+bx^2}\right)e^{-x^2/2}dx$$ $$=\frac{1}{b}\left(\sqrt{2\pi}-\frac{a}{\sqrt{2}b}\int_{-\infty}^{+\infty}\frac{1}{\frac{a}{2b}+x^2}e^{-x^2}dx\right)$$ $$=\frac{1}{b}\left(\sqrt{2\pi}-\sqrt{\frac{a}{b}}\pi\exp\left(\frac{a}{2b}\right)\mathrm{erfc}\left(\sqrt{\frac{a}{2b}}\right)\right).$$
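As a spot-check of the final formula (my addition), one can compare a direct quadrature against the closed form at $a=b=1$, where the integrand is nonsingular; `math.erfc` supplies the complementary error function, and the quadrature bounds are ad-hoc.

```python
import math

# Check  ∫ x²/(a+b x²) e^{-x²/2} dx  against the closed form
#   (1/b)(sqrt(2π) − sqrt(a/b) π e^{a/(2b)} erfc(sqrt(a/(2b))))
# at a = b = 1, by midpoint quadrature on [-12, 12].
def closed_form(a, b):
    s = math.sqrt(a / (2 * b))
    return (math.sqrt(2 * math.pi)
            - math.sqrt(a / b) * math.pi * math.exp(a / (2 * b)) * math.erfc(s)) / b

def quadrature(a, b, lo=-12.0, hi=12.0, steps=200_000):
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += x * x / (a + b * x * x) * math.exp(-x * x / 2)
    return total * h

print(abs(closed_form(1.0, 1.0) - quadrature(1.0, 1.0)) < 1e-6)  # True
```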
{ "language": "en", "url": "https://math.stackexchange.com/questions/71343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Elementary question in partial differentiation Let's say we have a function of the form $f(x+vt)$ where $v$ is a constant and $x,t$ are independent variables. How is $\frac{\partial f}{\partial x} = \frac{1}{v}\frac{\partial f}{\partial t}$ equal to $f$? If I let $u=x+vt$ then $\frac{\partial f}{\partial x} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} = \frac{\partial f/\partial t}{\partial u/\partial t}\frac{\partial u}{\partial x}=\frac{1}{v}\frac{\partial f}{\partial t}$ but I cannot infer that $ \frac{1}{v}\frac{\partial f}{\partial t} = f$ unless I assume the form of D'Alembert's Solution to be the harmonic (exponential). For the general solution I do not know how this was arrived at. Edit: I still don't get it, as the context does not help. But I assume since it is a physics text, $f$ can be written as a Fourier series/integral of exponentials. Assuming that, the above holds.
You are correct - you can't infer that $f'(x) = f(x)$ unless $f$ is exponential, i.e. if $f(x)=A\exp(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Large $n$ asymptotic of $\int_0^\infty \left( 1 + x/n\right)^{n-1} \exp(-x) \, \mathrm{d} x$ While thinking of 71432, I encountered the following integral: $$ \mathcal{I}_n = \int_0^\infty \left( 1 + \frac{x}{n}\right)^{n-1} \mathrm{e}^{-x} \, \mathrm{d} x $$ Eric's answer to the linked question implies that $\mathcal{I}_n \sim \sqrt{\frac{\pi n}{2}} + O(1)$. How would one arrive at this asymptotic from the integral representation, without reducing the problem back to the sum ([added] i.e. expanding $(1+x/n)^{n-1}$ into series and integrating term-wise, reducing the problem back to the sum solve by Eric) ? Thanks for reading.
I shifted the function by a unit since it won't affect the asymptotics and I'd like the global maximum to occur at $x=0$. $$ \mathcal{I}_n \sim \int^{\infty}_0 \left( 1 + \frac{x-1}{n} \right)^{n-1} e^{-(x-1) } dx $$ $$\left( 1 + \frac{ x-1}{n} \right)^{n-1} e^{-(x-1) } = e \left( 1 - \frac{1}{n} \right)^{n-1} \left( 1 - \frac{x^2}{2(n-1)} + \cdots \right)$$ $$ \approx e \left(1 - \frac{1}{n} \right)^{n-1} \exp\left(\frac{-x^2}{2(n-1)} \right) $$ so $$ \mathcal{I}_n \sim e\left( 1 -\frac{1}{n}\right)^{n-1} \int^{\infty}_0 \exp\left( \frac{-x^2}{2(n-1)} \right) dx $$ $$ = e\left( 1 -\frac{1}{n}\right)^{n-1} \sqrt{\pi(n-1)/2} \sim \sqrt{\pi n/2}$$
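A numerical check of the asymptotic (my addition; the quadrature window and step count are ad-hoc): evaluating the integrand in log-space avoids overflow for large $n$.

```python
import math

# Numerically check I_n = ∫_0^∞ (1 + x/n)^(n-1) e^{-x} dx ≈ sqrt(pi*n/2).
# The window [0, 400] comfortably covers the O(sqrt(n)) width for n = 1000.
def I(n, upper=400.0, steps=200_000):
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += math.exp((n - 1) * math.log1p(x / n) - x)  # log-space
    return total * h

n = 1000
print(I(n) / math.sqrt(math.pi * n / 2))  # close to 1
```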
{ "language": "en", "url": "https://math.stackexchange.com/questions/71447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 2 }
moment-generating function of the chi-square distribution How do we find the moment-generating function of the chi-square distribution? I really couldn't figure it out. The integral is $$E[e^{tX}]=\frac{1}{2^{r/2}\Gamma(r/2)}\int_0^\infty x^{(r-2)/2}e^{-x/2}e^{tx}dx.$$ I'm going over it for a while but can't seem to find the solution. By the way, the answer should be $$(1-2t)^{(-r/2)}.$$
$$ \begin{align} & {}\qquad E[e^{tX}]=\frac{1}{2^{r/2}\Gamma(r/2)}\int_0^\infty x^{(r-2)/2}e^{-x/2}e^{tx}\;dx \\ \\ \\ & = \frac{1}{2^{r/2}\Gamma(r/2)}\int_0^\infty x^{(r-2)/2}e^{x(t-(1/2))}\;dx \\ \\ \\ & = \frac{1}{2^{r/2}\Gamma(r/2)}\int_0^{-\infty} \left(\frac{u}{t-\frac12}\right)^{(r-2)/2}(e^u)\left(\frac{du}{t-\frac12}\right) \\ \\ \\ & = \frac{1}{2^{r/2}\Gamma(r/2)} \frac{1}{(t-\frac12)^{r/2}} \int_0^{-\infty} u^{(r-2)/2} e^u \; du. \end{align} $$ This last integral is a value of the Gamma function. And notice that $$ 2^{r/2} \left(t-\frac12\right)^{r/2} = (2t-1)^{r/2}. $$ Later edit: Someone questioned the correctness of what is written above; hence these comments. Notice that as $x$ goes from $0$ to $+\infty$, $u$ will go from $0$ to $-\infty$, since the factor $t-\frac12$ is negative. Now let $w=-u$, so $u=-w$ and $du=-dw$ and as $u$ goes from $0$ to $-\infty$, then $w$ goes from $0$ to $+\infty$, and we get something that looks like the standard form of the integral that defines the Gamma function. This still leaves us with the question of raising a negative number to a power. The fraction $\dfrac{u}{t-\frac12}$ is positive since $u$ and $t-\frac12$ are both negative. So instead of what was done above, let us substitute $\dfrac{u}{\frac12-t}$ for $x$. Then $u$ goes from $0$ to $+\infty$ and $e^{x(t-\frac12)}$ will become $e^{-u}$. Then this should work out without the additional substitution, and we won't have the problem of raising a negative number to a power. The integral is then $\Gamma(r/2)$, so it cancels that factor in the denominator.
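A numerical check of the result (my addition): integrate the chi-square density against $e^{tx}$ and compare with $(1-2t)^{-r/2}$; this needs $t<\tfrac12$ for convergence, and the quadrature parameters are ad-hoc.

```python
import math

# E[e^{tX}] for X ~ chi-square(r), by integrating the density directly
# (requires t < 1/2), compared with the closed form (1 - 2t)^(-r/2).
def mgf_numeric(r, t, upper=400.0, steps=200_000):
    h = upper / steps
    norm = 2 ** (r / 2) * math.gamma(r / 2)
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** ((r - 2) / 2) * math.exp(-x / 2 + t * x)
    return total * h / norm

r, t = 4, 0.2
print(abs(mgf_numeric(r, t) - (1 - 2 * t) ** (-r / 2)) < 1e-5)  # True
```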
{ "language": "en", "url": "https://math.stackexchange.com/questions/71516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
A couple of problems involving divisibility and congruence I'm trying to solve a few problems and can't seem to figure them out. Since they are somewhat related, maybe solving one of them will give me the missing link to solve the others. $(1)\ \ $ Prove that there's no $a$ so that $ a^3 \equiv -3 \pmod{13}$ So I need to find $a$ so that $a^3 \equiv 10 \pmod{13}$. From this I get that $$a \equiv (13k+10)^{1/3} \pmod{13} $$ If I can prove that there's no k so that $ (13k+10)^{1/3} $ is a integer then the problem is solved, but I can't seem to find a way of doing this. $(2)\ \ $ Prove that $a^7 \equiv a \pmod{7} $ If $a= 7q + r \rightarrow a^7 \equiv r^7 \pmod{7} $. I think that next step should be $ r^7 \equiv r \pmod{7} $, but I can't figure out why that would hold. $(3)\ \ $ Prove that $ 7 | a^2 + b^2 \longleftrightarrow 7| a \quad \textbf{and} \quad 7 | b$ Left to right is easy but I have no idea how to do right to left since I know nothing about what 7 divides except from the stated. Any help here would be much appreciated. There're a lot of problems I can't seem to solve because I don't know how to prove that a number is or isn't a integer like in problem 1 and also quite a few that are similar to problem 3, but I can't seem to find a solution. Any would be much appreciated.
For the first problem, an unimaginative but workable approach is just to check all the possibilities $a=0,1,2,\dots,12$. Do you see why, if these all fail, you're done? For the 3rd problem, I think you mean right implies left is easy. For left implies right, take each of the numbers $0,1,\dots,6$, square them, divide by 7, and note the remainders. From that you can work out all the possible remainders of $a^2+b^2$, and from that you can answer the question. Let me know if you've tried to work through this and found my hints too opaque.
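All three facts can be brute-forced over complete residue systems, since $a^3 \bmod 13$, $a^7 \bmod 7$, and $a^2 \bmod 7$ depend only on the residue of $a$. A short Python check (my addition):

```python
# Brute-force the three facts over complete residue systems.
cubes_mod_13 = {a**3 % 13 for a in range(13)}          # {0, 1, 5, 8, 12}
assert (-3) % 13 not in cubes_mod_13       # (1): no cube is ≡ -3 (mod 13)

assert all(pow(a, 7, 7) == a % 7 for a in range(7))    # (2): a^7 ≡ a (mod 7)

squares_mod_7 = {a * a % 7 for a in range(7)}          # {0, 1, 2, 4}
pairs = [(s, t) for s in squares_mod_7 for t in squares_mod_7
         if (s + t) % 7 == 0]
assert pairs == [(0, 0)]                   # (3): 7 | a²+b² forces 7|a and 7|b
```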
{ "language": "en", "url": "https://math.stackexchange.com/questions/71583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Differential of Polynomial in $\mathbb{P}^2$ Let $f(x, y, z)$ be a homogeneous polynomial such that the equation $$f(x, y, z)=0$$ determines some curve $C_f$ in $\mathbb{P}^2$. By definition, a point $P\in C_f$ is a singular point if $df_P=0$. Question: Am I right that $df_P$ is written in coordinates in $\mathbb{P}^2$ as $$\left(\frac{\partial f}{\partial x}(P), \frac{\partial f}{\partial y}(P), \frac{\partial f}{\partial z}(P)\right)$$ (i.e., as one would in $\mathbb{R}^3$)? Thanks.
I think you get carried away by the powerful formalism of projective geometry. Actually the polynomial $f(x,y,z)$ does not correspond to a function $\mathbb P^2_k\to k$ , so that you can't define its differential. This is because, if the degree of $f$ is $d$, we have for all $\lambda\in k$ the equality $f(\lambda x,\lambda y,\lambda z)=\lambda^d f(x,y,z)$, and since $[x:y:z]=[\lambda x:\lambda y:\lambda z]\in \mathbb P^2_k$ there is no reasonable way to define $f(x,y,z)\in k$. However the polynomials $\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y},\frac{\partial f}{\partial z}$ have three perfectly well defined zero sets, curves whose common points indeed are the singularities of $C(f)$. But all is not lost. To $f$ you can associate an affine cone $V=V(f)\subset \mathbb A^3_k$ whose equation is $f(x,y,z)=0$. The polynomial $f$ defines a perfectly regular function $f:V\to k$ i.e. $f\in \mathcal O (\mathbb A^3_k)$. Then for all $P\in \mathbb A^3_k$ we have indeed $df_P=\left(\frac{\partial f}{\partial x}(P), \frac{\partial f}{\partial y}(P), \frac{\partial f}{\partial z}(P)\right):k^3=T_P(V)\to k$, and for $P\in V$, the vanishing of $df_P$ tells you whether $P$ is singular on $V$. For example if $degree(f) \gt 1$, the origin is a singular point of the cone $V$, as it should be.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Need help with Applications of Differentiation Problem Question My Working Following the hint + some help I got from tutorial, below is what I got ... but I believe I did something wrong ... it's not the answer below the question yet, $0.955\text{ }m\text{ }min$ (I believe it's $0.955\ \text{m/min}$). UPDATE In general how do I start with such problems without any hints? I suppose I must somehow formulate the equation like: $$\frac{dX}{dt} = \frac{dX}{dY} \cdot \frac{dY}{dt}$$ so that the $dY$ cancels. But suppose the following question ... I did: $$\frac{dV}{dt} = 5$$ $$\text{Find } \frac{dA}{dt} \text{ when } V = 216\ \text{cm}^3$$ $$\frac{dV}{dt} = \frac{dA}{dt} \cdot \frac{dV}{dA}$$ So I tried expressing $V$ in terms of $A$, but I still ended up with an $l$: $$V = l^3, \quad A = 6l^2$$ $$V = \frac{1}{6} A l$$
There are a couple of mistakes, at least one of which is a minor slip. I will use the notation of the post. The equation $$V=\frac{1}{3}\pi h^3 \tan^2\theta$$ for $V$ as a function of $h$ is correct. Since $\tan\theta=\frac{1}{3}$, it would be cleaner to write $V=\frac{1}{27}\pi h^3$. But we will continue using $\tan^2\theta$. Now we differentiate both sides with respect to $t$. Here there are two errors in the post. By the Chain Rule, we have $$\frac{dV}{dt}=\frac{dV}{dh}\frac{dh}{dt}=(\pi h^2\tan^2\theta)\frac{dh}{dt}.$$ There was a little slip here, you wrote $2h^2$ for the derivative of $h^3$ with respect to $h$, and it should be $3h^2$. In addition, the necessary $\frac{dh}{dt}$ part, though mentioned correctly once, later appears on the "wrong" side of the equation for $\frac{dV}{dt}$. The Chain Rule is usually an essential tool in related rates problems, and needs to be handled carefully. Now that we have a general relationship between $\frac{dV}{dt}$ and $\frac{dh}{dt}$, "freeze" things at the instant $t$ when $r=2$. When $r=2$, we have $h=6$. And while the cone is filling up, $\frac{dV}{dt}=12$. So at the instant when $r=2$, we have $$12=(\pi)(6^2)(\tan^2\theta)\frac{dh}{dt}.$$ You did some unnecessary work at the corresponding point of your calculation, but it did not result in an error. You found $\theta$ using the calculator, and then $\tan^2\theta$. That is not needed, since you already know that $\tan\theta=\frac{1}{3}$. Whatever approach we take, we should get $$12=(\pi)(6^2)(1/3)^2\frac{dh}{dt} =4\pi\frac{dh}{dt}.$$ Solve for $\frac{dh}{dt}$. We find that, at the instant when $r=2$, $$\frac{dh}{dt}=\frac{12}{4\pi}=\frac{3}{\pi}.$$ Numerically, the answer is about $0.95493$. Added: We look briefly at the "cube" problem that was added to the post. We have $\frac{dV}{dt}=5$, and want $\frac{dA}{dt}$. I think the natural thing to do is to let $s=s(t)$ be the side at time $t$. (In the post it is called $l$, fine too, but looks too much like $1$!) 
Everybody knows that $$V=s^3 \text{ and } A=6s^2.$$ Don't think, differentiate with respect to time. $$\frac{dV}{dt}=\frac{dV}{ds}\frac{ds}{dt}=3s^2\frac{ds}{dt} \text{ and } \frac{dA}{dt}=12s\frac{ds}{dt}.$$ We have $\frac{dV}{dt}=5$. When $V=216$, $s=6$. From the expression for $\frac{dV}{dt}$ above, we find that $\frac{ds}{dt}=5/108$ when $s=6$. Now we can use the expression for $\frac{dA}{dt}$ to conclude that $\frac{dA}{dt}=(12)(6)(5/108)$. Simplify if desired. Or else find a direct relationship between $V$ and $A$. We have $V=s^3$, and $A=6s^2$, so $s^2=A/6$. Since $s^3=V$, we have $s^6=V^2=(A/6)^3$. So $216V^2=A^3$. Differentiate with respect to $t$. We get $$(216)(2V)\frac{dV}{dt}=3A^2\frac{dA}{dt}.$$ When $V=216$, $A=216$. Now we can use the above equation to find $\frac{dA}{dt}$ when $V=216$. I prefer the first way the problem was done, but the second fits better into the "find a relationship, then differentiate" pattern. Comment: The detailed description of what you did was very helpful in locating the problems. It would be very nice if everybody was this thorough in showing work!
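Plugging the frozen values into both computations gives a quick numerical check (my addition):

```python
import math

# Cone: dV/dt = 12, tan(theta) = 1/3, h = 6  =>  dh/dt = 3/pi.
dh_dt = 12 / (math.pi * 6**2 * (1 / 3) ** 2)
print(round(dh_dt, 5))  # 0.95493

# Cube: dV/dt = 5 and s = 6 give ds/dt = 5/108, hence dA/dt = 12*6*(5/108).
ds_dt = 5 / (3 * 6**2)
dA_dt = 12 * 6 * ds_dt
print(dA_dt)  # 10/3 ≈ 3.333...
```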
{ "language": "en", "url": "https://math.stackexchange.com/questions/71716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Correlation Coefficient between these two random variables Suppose that $X$ is real-valued normal random variable with mean $\mu$ and variance $\sigma^2$. What is the correlation coefficient between $X$ and $X^2$?
Here's an efficient way to deal with the numerator in the fraction that defines the correlation. $$ \operatorname{cov}(X,X^2) = \operatorname{cov}\Big((X-\mu)+\mu,\ \ (X-\mu)^2 + 2\mu(X-\mu) + \mu^2\Big). $$ Now we can throw away the "${}+ \mu$" and "${}+ \mu^2$" at the end and we have $$ \operatorname{cov}\Big((X-\mu),\ \ (X-\mu)^2 + 2\mu(X-\mu)\Big). $$ Then use bilinearity of covariances and this becomes: $$ \operatorname{cov}(X-\mu, (X-\mu)^2) + 2\mu\operatorname{cov}(X-\mu,X-\mu). $$ This is $$ 0 + 2\mu\sigma^2. $$ The first term is $0$ because the expected value of $X-\mu$ is $0$ and the distribution is symmetric about $0$. Summary: $\operatorname{cov}(X,X^2) = 2\mu\sigma^2$.
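A numerical confirmation of $\operatorname{cov}(X,X^2)=2\mu\sigma^2$ (my addition): compute $E[X^3]-E[X]\,E[X^2]$ by quadrature against the normal density at the sample values $\mu=1.5$, $\sigma=2$.

```python
import math

# k-th moment of N(mu, sigma^2) by midpoint quadrature on [mu±10σ].
def moment(k, mu, sigma, steps=200_000):
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += x ** k * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    return total * h / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.5, 2.0
cov = moment(3, mu, sigma) - moment(1, mu, sigma) * moment(2, mu, sigma)
print(abs(cov - 2 * mu * sigma ** 2) < 1e-5)  # True
```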
{ "language": "en", "url": "https://math.stackexchange.com/questions/71832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Why is the math for negative exponents so? This is what we are taught: $$5^{-2} = \left({\frac{1}{5}}\right)^{2}$$ but I don't understand why we take the inverse of the base when we have a negative exponent. Can anyone explain why?
We know that positive exponents add, e.g. $5^3 \times 5^2 = 5^5$. If you accept that $5^0 = 1$, then it makes sense that $5^{-2} \times 5^2 = 5^0 = 1$. That means that $5^{-2} = \frac{1}{5^2}$.
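The additivity argument can be checked mechanically with exact rational arithmetic (a small sketch, my addition; `fractions` avoids float rounding):

```python
from fractions import Fraction

five = Fraction(5)
assert five**3 * five**2 == five**5      # exponents add
assert five**0 == 1
assert five**-2 * five**2 == 1           # forced if the rule persists
assert five**-2 == Fraction(1, 5) ** 2 == Fraction(1, 25)
```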
{ "language": "en", "url": "https://math.stackexchange.com/questions/71891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Homology and semidirect products If $G=N\rtimes H$, what is the relation between the second integral homology groups (Schur multipliers) of $G$, $N$, and $H$?
The natural map $H_2(G) \rightarrow H_2(H)$ is surjective for a split extension. Apart from that, you cannot say anything very definitive. There is a natural map $H_2(N) \rightarrow H_2(G)$, but that is not usually injective. Also, the image of that map is not in general equal to the kernel of $H_2(G) \rightarrow H_2(H)$. There is another section of $H_2(G)$ coming (roughly) from commutators in $[N,H]$. The Lyndon-Hochschild-Serre Spectral Sequence provides a theoretical background for all of this, but it does not necessarily help with calculations in specific examples. Of course, for the direct product, $G = N \times H$, we have $H_2(G) \cong H_2(N) \oplus H_2(H) \oplus (N \otimes H)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
matrix calculus equation - least squares minimization Find a closed-form solution for vector $A$ minimizing the expression$$\frac{1}{2}\left|W(\Phi A-F)\right|^2$$ where $W$ is non-singular diagonal, $\Phi$ is a full rectangular matrix, and $A$, $F$ are column vectors (of possibly different dimensions); $|\bullet|^2\equiv\bullet^T \cdot \bullet$. Motivation: The problem in question finds the best projection $A$ of point-wise function $F$ into space generated by basis $\Phi$, where points are weighted with diagonal matrix W. Basis dimension (number of columns of $\Phi$, length of $A$) and number of points where $F$ is defined (rows of $\Phi$, size of $W$, and length of $F$) are different. EDIT: ANSWER ATTEMPT: I did the following algebra, is it correct? $\begin{align} \frac{1}{2}\left|W(\Phi A-F)\right|^2&=\frac{1}{2}(W\Phi A-WF)^T(W\Phi A -WF)=\\&=\frac{1}{2}\left((W\Phi A)^T-(WF)^T\right)(W\Phi A -WF)=\\ &=\frac{1}{2}\left[A^T\Phi^TW^2\Phi A-A^T\Phi^T W^2 F - F^TW^2\Phi A+F^TW^2F \right] \end{align}$ (making use of $W^TW=W^2$) With rules for matrix calculus from wikipedia: $\begin{align} \frac{\partial}{\partial A}\bullet&=\frac{1}{2}\left[\Phi^T W^2\Phi A +(\Phi^T W^2\Phi)^TA-\Phi^TW^2F-(F^TW^2\Phi)^T\right] =\\ &=\Phi^T W^2 \Phi A - \Phi^TW^2F \end{align}$ Setting the first derivative equal to the zero vector yields $\Phi^T W^2\Phi A=\Phi^TW^2F$, from which finally $\boxed{A=(\Phi^TW^2\Phi)^{-1}\Phi^TW^2F}$ I suppose no further simplification is possible (in particular expanding the inversion to $(W\Phi)^{-1}(\Phi^T W^T)^{-1}$ produces matrix size mismatch, therefore is not possible). EDIT2: after looking around at wikipedia, I found out I derived the normal equation for weighted least-squares (that article uses $\hat\beta$, $W$, $X$, $y$ where I used $A$, $W^2$, $\Phi$, $F$).
Edit: Your derivation is correct, except perhaps the last step -- when your system is under-determined, i.e. when $\Phi$ is "wide" instead of square or "tall", the product $\Phi^TW^2\Phi$ is bound to be rank-deficient and hence non-invertible. In this case you must use pseudoinverse.
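The boxed normal equation is easy to sanity-check numerically. A quick sketch (the dimensions and variable names here are mine, not from the post): minimizing $\frac12|W(\Phi A-F)|^2$ is ordinary least squares on the pre-weighted system $(W\Phi,\,WF)$, so the closed form should agree with a generic solver.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 20, 5                                  # m points, k basis functions
Phi = rng.normal(size=(m, k))                 # full column rank (generically)
F = rng.normal(size=m)
w = rng.uniform(1.0, 2.0, size=m)
W = np.diag(w)                                # non-singular diagonal weights

# closed form from the normal equations: A = (Phi^T W^2 Phi)^{-1} Phi^T W^2 F
W2 = W @ W
A = np.linalg.solve(Phi.T @ W2 @ Phi, Phi.T @ W2 @ F)

# reference: ordinary least squares on the pre-weighted system (W Phi, W F)
A_ref, *_ = np.linalg.lstsq(W @ Phi, W @ F, rcond=None)
```

With a rank-deficient $\Phi$, as the answer notes, the `solve` call would fail and a pseudoinverse (`np.linalg.pinv`, or `lstsq` itself) is needed instead.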
{ "language": "en", "url": "https://math.stackexchange.com/questions/72007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A graph with less than 10 vertices contains a short circuit? Lately I read an old paper by Paul Erdős and L. Pósa ("On the maximal number of disjoint circuits of a graph") and stumbled across the following step in a proof (I changed it a bit to be easier to read): It is well known and easy to show that every (undirected) graph with $n < 10$ vertices, where all vertices have a degree $\geq 3$ contains a circuit of at most $4$ edges. I would be very happy if someone could enlighten me how this is simple and why he can conclude that, maybe there are some famous formulas for graphs that make this trivial? For the ones interested, he also mentions that a counterexample for $n=10$ is the petersen graph.
If you start from vertex 1, it must connect to at least three other vertices, which we can call 2, 3, 4. Then each of these must connect to two more vertices, none of which can be among those already named (or we would have a 3-circuit), so 2 goes to 5, 6; 3 goes to 7, 8; 4 goes to 9 and where? Since the graph has fewer than 10 vertices, only vertices 1 through 9 exist, so the remaining neighbour of 4 must be one of the vertices already named, and any such choice closes a circuit of length 3 or 4 (for instance, joining 4 to 5 gives the 4-circuit 1-2-5-4-1).
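A small script can also confirm the sharpness claim from the question: the Petersen graph is 3-regular on 10 vertices yet contains no circuit of length 3 or 4. This is only an illustrative check, not part of the proof; note that a 4-circuit through vertices $u$ and $v$ would force them to share two common neighbours.

```python
from itertools import combinations

# Petersen graph: outer 5-cycle on 0..4, inner 5-cycle with step 2 on 5..9, spokes
edges = {frozenset(e) for e in
         [(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)]}

adj = {v: {u for e in edges if v in e for u in e - {v}} for v in range(10)}

degrees_ok = all(len(adj[v]) == 3 for v in adj)
# no 3-circuit: two neighbours of the same vertex are never adjacent
triangle_free = all(frozenset((a, b)) not in edges
                    for v in adj for a, b in combinations(sorted(adj[v]), 2))
# no 4-circuit: two distinct vertices never share two common neighbours
square_free = all(len(adj[u] & adj[v]) <= 1
                  for u, v in combinations(range(10), 2))
```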
{ "language": "en", "url": "https://math.stackexchange.com/questions/72076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Express this curve in the rectangular form Express the curve $r = \dfrac{9}{4+\sin \theta}$ in rectangular form. And what is the rectangular form? If I get the expression in rectangular form, how am I able to convert it back to polar coordinate?
$$ 4r + r\sin\theta = 9. $$ Therefore $$ 4\sqrt{x^2+y^2} + y = 9, $$ so $$ 4\sqrt{x^2+y^2} = 9 - y. $$ Now square both sides and go on from there. But remember that squaring both sides can lead to extraneous roots. For example $3^2=9$ and $(-3)^2=9$, so a "$\pm$" is introduced when you go back to the unsquared form. In other words, the graph you get after squaring may contain additional points beyond those you should get, and you have to check for those.
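Squaring the rectangular relation that follows from $r(4+\sin\theta)=9$ gives $16(x^2+y^2)=(9-y)^2$, and sampled points of the polar curve should satisfy it. A quick numeric sketch (the sampling grid is arbitrary):

```python
import math

checks = []
for k in range(24):
    theta = 2 * math.pi * k / 24
    r = 9 / (4 + math.sin(theta))          # 4 + sin(theta) is always in [3, 5]
    x, y = r * math.cos(theta), r * math.sin(theta)
    # squared rectangular form: 16(x^2 + y^2) = (9 - y)^2
    checks.append(math.isclose(16 * (x * x + y * y), (9 - y) ** 2, rel_tol=1e-9))
all_on_curve = all(checks)
```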
{ "language": "en", "url": "https://math.stackexchange.com/questions/72123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
General Lebesgue Dominated Convergence Theorem In Royden (4th edition), it says one can prove the General Lebesgue Dominated Convergence Theorem by simply replacing $g-f_n$ and $g+f_n$ with $g_n-f_n$ and $g_n+f_n$. I proceeded to do this, but I feel like the proof is incorrect. So here is the statement: Let $\{f_n\}_{n=1}^\infty$ be a sequence of measurable functions on $E$ that converge pointwise a.e. on $E$ to $f$. Suppose there is a sequence $\{g_n\}$ of integrable functions on $E$ that converge pointwise a.e. on $E$ to $g$ such that $|f_n| \leq g_n$ for all $n \in \mathbb{N}$. If $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $g_n$ = $\int_E$ $g$, then $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $f_n$ = $\int_E$ $f$. Proof: $$\int_E (g-f) = \liminf \int_E g_n-f_n.$$ By the linearity of the integral: $$\int_E g - \int_E f = \int_E g-f \leq \liminf \int_E g_n -f_n = \int_E g - \liminf \int_E f_n.$$ So, $$\limsup \int_E f_n \leq \int_E f.$$ Similarly for the other one. Am I missing a step or is it really a simple case of replacing.
I ignore the E and proceed to prove it as required by Exercise 2.20 in Folland. The problem is essentially a Generalized Dominated Convergence Theorem which I prove by reworking the proof of Lebesgue's Dominated Convergence Theorem introduced in class. $|f_n| \le g_n$ implies $-g_n \le f_n \le g_n$, which implies $f_n + g_n \ge 0$ and $g_n-f_n \ge 0$. We know $ \lim (f_n + g_n) = f + g $ and $ \lim (g_n - f_n) = g -f$. We can therefore apply Fatou's Lemma to the above cases to get: (I) $\int g + \int f = \int (g+f) = \int \lim (f_n + g_n) = \int \liminf (f_n + g_n) \le \liminf \int (f_n + g_n) = \liminf \int g_n + \liminf \int f_n = \int g + \liminf \int f_n$ (the $\liminf$ splits into a sum here precisely because $\lim \int g_n$ exists by hypothesis). Thus, $\int f \le \liminf \int f_n$, where subtracting $\int g$ is legitimate since $g$ is integrable. (II) $\int g - \int f = \int (g-f) = \int \lim (g_n - f_n) = \int \liminf (g_n - f_n) \le \liminf \int (g_n - f_n) = \liminf \int g_n + \liminf \int (- f_n) = \int g - \limsup \int f_n$ (again using that $\lim \int g_n$ exists, together with $\liminf \int (-f_n) = -\limsup \int f_n$). Thus, $\int f \ge \limsup \int f_n$. (III) Notice $\{ \int f_n \}_{n \in \mathbb{N}}$ is fundamentally a sequence. So, as is true for any sequence: $$\liminf \int f_n \le \limsup \int f_n$$ (IV) Also, our work with Fatou's has established the following: $$\limsup \int f_n \le \int f \le \liminf \int f_n$$ Combining (III) and (IV), we get $\liminf \int f_n = \limsup \int f_n = \int f$. The first equality implies the existence of $\lim \int f_n$ and the second establishes: $$\lim \int f_n = \int f = \int \lim f.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/72174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 3, "answer_id": 2 }
Existence of least squares solution to $Ax=b$ Does a least squares solution to $Ax=b$ always exist?
Reformulate your problem as $\small A \cdot x = b + e $, where $e$ is an error (thus $\small A \cdot x = b $ is then only an approximation, as required). Clearly there are infinitely many possible choices of $x$, or to say it even more clearly: you may fill in any values you want for $x$ and always get some $e$. The least-squares idea is to find the $x$ for which the sum of squares of the components of $e$ (define $\small \operatorname{ssq}(e) = \sum_{k=1}^n e_k^2 $) is minimal. Since $\small \operatorname{ssq}(e) \ge 0$ for real data, the infimum over all $x$ certainly exists; moreover it is actually attained, because the set of attainable values $\small A\cdot x$ is a closed subspace (the column space of $A$), and the minimizing $e$ comes from the orthogonal projection of $b$ onto that subspace. Restrictions on $x$ may make the achievable $\small \operatorname{ssq}(e)$ larger, but as long as the constraint set is closed there will still be a minimum $\small \operatorname{ssq}(e) \ge 0 $. So the question is answered in the affirmative. (A remaining question is whether the minimizer is unique, but that was not in your original post.)
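To illustrate attainment in practice: numerical least-squares routines return a minimizer even when no exact solution exists. A small sketch with an inconsistent system (the numbers are mine):

```python
import numpy as np

# no x satisfies Ax = b exactly: the first two equations demand x1 = 0 and x1 = 2
A = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([0.0, 2.0, 1.0])

x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
# the minimizer projects b onto col(A): x = (1, 1),
# with minimal ssq(e) = (1-0)^2 + (1-2)^2 = 2
```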
{ "language": "en", "url": "https://math.stackexchange.com/questions/72222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 1 }
How to show that $\frac{f}{g}$ is measurable Here is my attempt to show that $\frac{f}{g}~,g\neq 0$ is a measurable function, if $f$ and $g$ are measurable function. I'd be happy if someone could look if it's okay. Since $fg$ is measurable, it is enough to show that $\frac{1}{g}$ is measurable. $$ \left\{x\;\left|\;\frac{1}{g}\lt \alpha \right\}\right.=\left\{x\;\left|\;g \gt \frac{1}{\alpha} \right\}\right., \qquad g\gt 0,\quad \alpha \in \mathbb{R},$$ which is measurable, since the right hand side is measurable. Also, $$ \left\{x\;\left|\;\frac{1}{g}\lt \alpha \right\}\right.= \left\{x\;\left|\;g\lt\frac{1}{\alpha} \right\}\right.,\qquad g\lt 0,\quad \alpha \in \mathbb{R},$$ which is measurable since the right hand side is measurable. Therefore, $\frac{1}{g}$ is measurable, and so $\frac{f}{g}$ is measurable.
The function $h(x)=\frac{1}{x}$ is continuous so is Borel measurable, and $\frac{f}{g}$ is just $f(h\circ g)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Representing a number as a sum of at most $k$ squares Fix an integer $k >0 $ and would like to know the maximum number of different ways that a number $n$ can be expressed as a sum of $k$ squares, i.e. the number of integer solutions to $$ n = x_1^2 + x_2^2 + \dots + x_k^2$$ with $x_1 \ge x_2 \ge \dots \ge x_k$ and $x_i \ge 0$ for every $i$. What I'd really like to know about is the asymptotics as $n \to \infty$. I asked a number theorist once about the case $k=2$, and if I remember correctly, he said that there are numbers $n$ which can be expressed as a sum of two squares in at least $$n^{c/\log \log n}$$ different ways, for some constant $c > 0$, and this is more-or-less best possible. This is the kind of answer I am seeking for larger $k$. Clarification: What I mean by maximum is the following. I want functions $f_k(x)$, as large as possible, such that there exists some sequence of integers $\{ a_i \}$ with $a_i \to \infty$, and such that $a_i$ can be written as the sum of $k$ squares in at least $f_k(a_i)$ ways.
This is an example of "Waring's problem" - expressing integers as the sum of $k$ $p$th powers of integers. In the case of squares, when $k\ge5$ the asymptotics of the number of representations (as a function of $n$) have been known for some time (I believe Hua proved this in the 1930s). When $k\ge5$, the number of ways to write $n$ as the sum of $k$ squares of positive integers is asymptotic to $$ \frac{\Gamma(3/2)^k}{\Gamma(k/2)}n^{k/2-1} S(n), $$ where $\Gamma$ is the Gamma function that interpolates the values of factorials, and $S(n) = S_k(n)$ is the "singular series" that depends upon $n$ but is bounded above and below. One can find expositions of Waring's problem and the "circle method" (aimed at many different levels) in various places on the web, such as here and here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Triangulation on Euclidean Space I have a couple of questions about triangulations of the Euclidean space: * *Is it possible to have an infinite triangulation of the Euclidean space $\mathbb{R}^2$ such that only a finite number of vertices have degree less or equal than 6? *If not, is it possible to have a triangulation where the average degree is greater or equal than 7? Here by average degree I mean the limit in $r$ of the average degree of all the points in the ball of center the origin and radius $r$. Thanks! Jim below answered my question with a nice example! Now I have a follow up related question: * *Consider a density in the Euclidean space and randomly deploy points accordingly to this density. Now generate the corresponding Delaunay triangulation. Does there exists a density whose average degree is greater or equal than 7 almost surely?
Sure it's possible. You can make it have constant valence of any degree $n$ higher than $6$. Here's one construction. Take a circle, call it $C_1$, and $n$ points on this circle. Connect the center to each of these points. So the center now has valence $n$ and all the points have valence $3$. So now take a larger circle, $C_2$, around this first one. Scatter points on this larger circle so that there are $n-3$ edges coming out from each point on $C_1$ and hitting $C_2$ in distinct points, except that the outermost edges from neighboring points on $C_1$ have to connect to the same point on $C_2$ to get a triangulation. This yields points on $C_2$ of valences $3$ and $4$. Now repeat this process with a new circle $C_3$, and proceed ad infinitum. Here is a picture of the first $3$ stages when $n=7$. As you can see, the triangles are getting scrunched together as you move outwards. This is because this is really a triangulation of the hyperbolic plane, so you have to fit a lot of area (assuming all triangles are the same size) into a small Euclidean area.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Tricky GCD problem Say we have two integers, $x$ and $y$. If $\gcd(x,y)=5$, how can we find every value for $\gcd(x^2,y)$? If you can find a list of every value, can you prove that this list is complete?
Here is something to consider: Can $\gcd(x^2,y)$ be divisible by any primes other than 5? (Note that if $p$ is any prime number, then $p$ divides $x^2$ if and only if $p$ divides $x$.) Do you think $5^{100000}$ is a possible value for $\gcd(x^2,y)$? Why or why not? If you try to make your intuition precise about this, that should take you the rest of the way.
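A brute-force sketch supports the hint: enumerating pairs with $\gcd(x,y)=5$ shows that only powers of $5$ appear, and in fact exactly the values $5$ and $25$. This is only an illustration; the proof is the argument sketched above.

```python
from math import gcd

# collect every value gcd(x^2, y) takes over pairs with gcd(x, y) = 5
values = {gcd(x * x, y)
          for x in range(1, 200)
          for y in range(1, 200)
          if gcd(x, y) == 5}
```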
{ "language": "en", "url": "https://math.stackexchange.com/questions/72511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Generating coordinates for 'N' points on the circumference of an ellipse with fixed nearest-neighbor spacing I have an ellipse with semimajor axis $A$ and semiminor axis $B$. I would like to pick $N$ points along the circumference of the ellipse such that the Euclidean distance between any two nearest-neighbor points, $d$, is fixed. How would I generate the coordinates for these points? For what range of $A$ and $B$ is this possible? As a clarification, all nearest-neighbor pairs should be of fixed distance $d$. If one populates the ellipse by sequentially adding nearest neighbors in, say, a clockwise fashion, the first and last point added should have a final distance $d$.
Assuming you have a starting point along the ellipse, position P, generate the next candidate points by intersecting the ellipse with a circle of radius d, where d is the desired distance between adjacent points Then choose the candidate which is of the desired winding. Winding can be calculated by considering the vector from the origin of the ellipse to position P and the vector from position P to the candidate position.
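The circle-intersection idea above can be sketched as a simple parameter search: from the current point $P(t_0)=(a\cos t_0,\, b\sin t_0)$, walk the parameter forward until the chord length reaches $d$, then refine by bisection; searching only $t > t_0$ picks the forward winding automatically. Function and variable names are mine, and closing the loop so that the final chord also has length $d$ (as the question requires) still needs an outer adjustment of $d$ or $N$.

```python
import math

def next_point(t0, a, b, d, step=1e-3, tol=1e-12):
    """Find t1 > t0 so the chord from P(t0) to P(t1) has length d,
    where P(t) = (a*cos t, b*sin t). Assumes d is small enough that the
    chord length grows through d over the bracket found by stepping."""
    x0, y0 = a * math.cos(t0), b * math.sin(t0)

    def chord(t):
        return math.hypot(a * math.cos(t) - x0, b * math.sin(t) - y0) - d

    lo, hi = t0, t0 + step
    while chord(hi) < 0:          # walk forward until the chord exceeds d
        lo, hi = hi, hi + step
    while hi - lo > tol:          # then refine by bisection
        mid = 0.5 * (lo + hi)
        if chord(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b, d = 3.0, 2.0, 0.5
t0 = 0.0
t1 = next_point(t0, a, b, d)
dist = math.hypot(a * math.cos(t1) - a * math.cos(t0),
                  b * math.sin(t1) - b * math.sin(t0))
```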
{ "language": "en", "url": "https://math.stackexchange.com/questions/72555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
Simple use of a permutation rule in calculating probability I have the following problem from DeGroot: A box contains 100 balls, of which 40 are red. Suppose that the balls are drawn from the box one at a time at random, without replacement. Determine (1) the probability that the first ball drawn will be red. (2) the probability that the fifteenth ball drawn will be red, and (3) the probability that the last ball drawn will be red. For the first question: * *the probability that the first ball drawn will be red should be: $$\frac{40!}{(100-40)!}$$ *the probability that the fifteenth ball will be red: $$\frac{60! \times 40!}{(60-14)! \times (40-1)}$$ *the probability that the last ball drawn will be red: $$\frac{60! \times 40!}{100}$$ Am I on the right track with any of these?
I find it instructive to lead students through the horrible, brute force calculation before teaching them that the end result is best understood using symmetry. Let's calculate the probability that the 15th ball is red, taking into account all the balls drawn previously. For example, one of the outcomes that make up our event is $$RNNRNNNRRRNRNRR$$ where $R$ means a red ball and $N$ a non-red ball. The probability of getting this particular outcome is $${40\over 100}\cdot{60\over 99}\cdot{59\over 98}\cdots {33\over 86}={(40)_8 \, (60)_7\over (100)_{15}}.$$ We simplify the product of fractions using the Pochhammer symbol. The "7" and "8" are the number of $N$s and $R$s in the outcome. Adding the probabilities of all such outcomes gives $$\mathbb{P}(\mbox{15th ball is red})={1\over (100)_{15}}\sum_k {14\choose k} (40)_{15-k}\ (60)_k $$ $$={1\over (100)_{15}}\ 40\ \sum_k{14\choose k} (39)_{14-k}\ (60)_k ={1\over (100)_{15}}\ 40\ (99)_{14} ={40\over 100}.$$ Amazing!
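The symmetry result $40/100$ is also easy to corroborate by simulation; a sketch (seed and trial count are arbitrary):

```python
import random

random.seed(1)

balls = [1] * 40 + [0] * 60          # 1 = red, 0 = non-red
trials = 100_000
hits = 0
for _ in range(trials):
    # random.sample draws without replacement; the 15th ball drawn is sample[14]
    hits += random.sample(balls, 15)[14]
estimate = hits / trials             # should be close to 40/100 = 0.4
```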
{ "language": "en", "url": "https://math.stackexchange.com/questions/72620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to get apparent linear diameter from angular diameter Say I have an object, whose actual size is 10 units in diameter, and it is 100 units away. I can find the angular diameter as such: $2\arctan(5/100) \approx 0.0999$ radians, or about $5.725$ degrees. Can I use this angular diameter to find the apparent linear size (that is, the size it appears to be to my eye) in generic units at any given distance?
Yes you can, but it's much easier to just use the original values. The ratio of the apparent size to the distance is constant, so in your case it's $1/10$, and you just multiply the distance by that to get the apparent size.
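In code (a small sketch of the arithmetic; the angle here is the angular diameter of the example above, which is about $0.0999$ radians, i.e. $5.725^\circ$):

```python
import math

true_diameter, distance = 10.0, 100.0
# angular diameter, in radians
theta = 2 * math.atan((true_diameter / 2) / distance)

def apparent_size(dist):
    # linear size subtending the same angle theta at distance dist;
    # this reduces to dist * (true_diameter / distance) = dist / 10
    return 2 * dist * math.tan(theta / 2)
```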
{ "language": "en", "url": "https://math.stackexchange.com/questions/72666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove for which $n \in \mathbb{N}$: $9n^3 - 3 ≤ 8^n$ A homework assignment requires me to find out and prove using induction for which $n ≥ 0$ $9n^3 - 3 ≤ 8^n$ and I have conducted multiple approaches and consulted multiple people and other resources with limited success. I appreciate any hint you can give me. Thanks in advance.
Clearly, if $3n\le 2^n$ then $9n^3-3\le (3n)^3 \le (2^n)^3 = 8^n$. So let us have a look at the (hopefully simpler) problem of when $3n \le 2^n$ holds. If we are able to solve this problem, only finitely many cases will remain to be checked. Claim: $3n\le 2^n$ holds for every $n\in\mathbb N$, $n\ge 4$. Proof by induction. $1^\circ$ For $n=4$ we have $3n = 12 \le 16 = 2^n$. $2^\circ$ Suppose the claim is true for $n$; we will verify it for $n+1$. $3(n+1)=3n\cdot\frac{n+1}n \le 2^n\cdot 2 = 2^{n+1}$. We have used $3n\le 2^n$ (inductive hypothesis) and $\frac{n+1}n=1+\frac1n\le2$. It remains to check $n=0,1,2,3$ directly: $9n^3-3$ takes the values $-3, 6, 69, 240$ while $8^n$ takes the values $1, 8, 64, 512$, so the inequality holds for $n=0,1,3$ and fails only at $n=2$. Combined with the claim, $9n^3-3\le 8^n$ for every $n\ge 0$ except $n=2$.
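The finite base-case check can be automated; a quick sketch confirming that $n=2$ is the only failure:

```python
def holds(n):
    return 9 * n**3 - 3 <= 8**n

# the induction covers n >= 4, so only small n need direct checking,
# but scanning a larger range costs nothing with Python integers
failures = [n for n in range(50) if not holds(n)]
```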
{ "language": "en", "url": "https://math.stackexchange.com/questions/72726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
integral equation Given the integral equation $$\exp(x)-1=\int_0^{\infty} \frac{\mathrm dt}{t}\operatorname{frac}\left(\frac{ \sqrt x}{\sqrt t}\right) f(t)\;,$$ where $\operatorname{frac}$ denotes the fractional part of a number, $ \operatorname{frac}(x)= x-\lfloor x\rfloor$. My questions are: * *Can we deduce from this integral equation that $ f(x)= O(x^{1/4+\epsilon}) $ for some positive $\epsilon$? *Can we solve this integral by the Hilbert-Schmidt method?
You cannot conclude anything for the asymptotics of the integrand, because the function can have very high, very narrow peaks that contribute almost nothing to the integral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Order of finite fields is $p^n$ Let $F$ be a finite field. How do I prove that the order of $F$ is always of order $p^n$ where $p$ is prime?
Let $p$ be the characteristic of a finite field $F$.${}^{\text{Note 1}}$ Then since $1$ has order $p$ in $(F,+)$, we know that $p$ divides $|F|$. Now let $q\neq p$ be any other prime dividing $|F|$. Then by Cauchy's Theorem, there is an element $x\in F$ whose order in $(F,+)$ is $q$. Then $q\cdot x=0$. But we also have $p\cdot x=0$. Now since $p$ and $q$ are relatively prime, we can find integers $a$ and $b$ such that $ap+bq=1$. Thus $(ap+bq)\cdot x=x$. But $(ap+bq)\cdot x=a\cdot(p\cdot x)+b\cdot(q\cdot x)=0$, giving $x=0$, which is not possible since $x$ has order at least $2$ in $(F,+)$. So there is no prime other than $p$ which divides $|F|$. Note 1: Every finite field has a characteristic $p\in\mathbb N$ since, by the pigeonhole principle, there must exist distinct $n_1< n_2$ both in the set $\{1, 2, \dots, \lvert F\rvert +1\}$ such that $$\underbrace{1+1+\dots+1}_{n_1}=\underbrace{1+1+\dots+1}_{n_2},$$ so that $\underbrace{1+1+\dots+1}_{n_2-n_1}=0$. In fact, this argument also implies $p\le n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68", "answer_count": 6, "answer_id": 4 }
Variance of sample variance? What is the variance of the sample variance? In other words I am looking for $\mathrm{Var}(S^2)$. I have started by expanding out $\mathrm{Var}(S^2)$ into $E(S^4) - [E(S^2)]^2$ I know that $[E(S^2)]^2$ is $\sigma$ to the power of 4. And that is as far as I got.
Showing the derivation of $E(\left[{1\over2}(X-Y)^2-\sigma^2\right]^2) = (\mu_4+\sigma^4)/2$ of user940: LHS: $E(\left[{1\over2}(X-Y)^2-\sigma^2\right]^2) = E(\frac{1}{4}(X-Y)^4 - (X-Y)^2 \sigma^2 + \sigma^4) = E(\frac{1}{4}(X-Y)^4) - 2\sigma^4 + \sigma^4 = E(\frac{1}{4}(X-Y)^4) - \sigma^4$, where I use the fact that $E((X-Y)^2) = 2\sigma^2$. Expanding, and using that $X$ and $Y$ are i.i.d., $= \frac{1}{4}E(X^4 -4X^3Y +6X^2Y^2 -4XY^3 + Y^4) -\sigma^4 = \frac{1}{4}\left(2E(X^4) -8E(X)E(X^3) +6 E(X^2)^2\right) - \sigma^4 = \frac{1}{2}\left(E(X^4)-4E(X)E(X^3) +3 E(X^2)^2 - 2\sigma^4\right)$. RHS: $(\mu_4+\sigma^4)/2 = \frac{1}{2}(E((X-\mu)^4) + \sigma^4)$ with $\mu = E(X)$. Expanding the central fourth moment, $E((X-\mu)^4) = E(X^4) - 4E(X)E(X^3) + 6E(X)^2E(X^2) - 3E(X)^4$. Now substitute $E(X)^2 = E(X^2) - \sigma^2$: $6E(X)^2E(X^2) = 6E(X^2)^2 - 6E(X^2)\sigma^2$ and $3E(X)^4 = 3(E(X^2)-\sigma^2)^2 = 3E(X^2)^2 - 6E(X^2)\sigma^2 + 3\sigma^4$, so the $E(X^2)\sigma^2$ terms cancel and $E((X-\mu)^4) = E(X^4) - 4E(X)E(X^3) + 3E(X^2)^2 - 3\sigma^4$. Therefore $(\mu_4+\sigma^4)/2 = \frac{1}{2}\left(E(X^4) - 4E(X)E(X^3) + 3E(X^2)^2 - 2\sigma^4\right)$. Now LHS = RHS.
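For a concrete sanity check of the identity: with $X, Y$ i.i.d. standard normal, $\sigma^2=1$ and $\mu_4=3$, so it predicts $E\left[\left(\tfrac12(X-Y)^2-\sigma^2\right)^2\right]=(3+1)/2=2$. A Monte Carlo sketch (seed and sample size are mine):

```python
import random
import statistics

random.seed(7)

n = 400_000
vals = [(0.5 * (random.gauss(0, 1) - random.gauss(0, 1)) ** 2 - 1.0) ** 2
        for _ in range(n)]
estimate = statistics.fmean(vals)    # target: (mu_4 + sigma^4)/2 = 2
```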
{ "language": "en", "url": "https://math.stackexchange.com/questions/72975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 6, "answer_id": 2 }
Extend positive function by positive function in Sobolev spaces This is in connection to this question. I understand the solution, but I want to ask something else regarding the extension of the function. The question is like this: Suppose that $v$ is a positive real function with $v \in H^1(\Omega)$ and there is a ball $B$ such that $v$ has an extension in $w \in H^1(B)$. Is it true that the extension $w$ can be chosen to be also positive? I searched a bit and didn't find a theorem about this matter.
In order to expand @Xianghong Chen 's comment, let $w\in H^1(B)$ be an extension of $v$. We have to check that $w_+:=\max(w,0)$ is indeed in $H^1(B)$. Put $F_n(x):=\begin{cases}\sqrt{x^2+n^{-2}}-n^{-1}&\mbox{if }x\geq 0\\\ 0&\mbox{otherwise.}\end{cases}$. Then $w_n:=F_n(w)$ is in $H^1(B)$. Since $F_n(0)=0$ and for $x\geq 0$, $|F_n'(x)|=\frac{2x}{2\sqrt{x^2+n^{-2}}}\leq 1$, we have $\nabla w_n =F_n'(w)\nabla w$, and thanks to the dominated convergence theorem we can see that $w_n$ converges to $w_+$ in $L^2(B)$, and $\nabla w_n\to \nabla w\mathbf 1_{\{w(x)>0\}}$ in $L^2(B)$. So $w_+\in H^1(B)$ and is non-negative. Moreover $w_+$ agrees with $w=v$ on $\Omega$, where $v$ is positive, so $w_+$ is a non-negative extension of $v$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/73014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the uniqueness of the equivalence relation? In Lee, 'Introduction to Topological Manifolds', Appendix A, Exercise A.2 I am asked to prove that if $\mathcal C$ is a partition of $X$ there is a unique equivalence relation $\sim$ such that classes of equivalence of $\sim$ are elements of $\mathcal C$. Since the definition of difference/equality of equivalence relations was not given, I thought that it should be based on the equivalence classes of such a relation. More formal, am I right stating that There is a unique equivalence relation $\sim$ which admits sentence $S$ iff for any $\sim^1$ and $\sim^2$ admitting $S$ their classes of equivalence are the same?
By definition, an equivalence relation on a set $S$ is a subset of $S\times S$ fulfilling some conditions. This gives you a natural notion of equality for equivalence relations: Two equivalence relations on $S$ are equal if they are given by the same subset of $S\times S$. It is a general fact that an equivalence relation is uniquely determined by its equivalence classes. This is -- somewhat tautologically -- because two elements are equivalent if and only if they belong to the same equivalence class.
{ "language": "en", "url": "https://math.stackexchange.com/questions/73066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculating the exponential of a $4 \times 4$ matrix Find $e^{At}$, where $$A = \begin{bmatrix} 1 & -1 & 1 & 0\\ 1 & 1 & 0 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & 1 & 1\\ \end{bmatrix}$$ So, let me just find $e^{A}$ for now and I can generalize later. I notice right away that I can write $$A = \begin{bmatrix} B & I_{2} \\ 0_{22} & B \end{bmatrix}$$ where $$B = \begin{bmatrix} 1 & -1\\ 1 & 1\\ \end{bmatrix}$$ I'm sort of making up a method here and I hope it works. Can someone tell me if this is correct? I write: $$A = \mathrm{diag}(B,B) + \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$$ Call $S = \mathrm{diag}(B,B)$, and $N = \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$. I note that $N^2$ is $0_{44}$, so $$e^{N} = \frac{N^{0}}{0!} + \frac{N}{1!} + \frac{N^2}{2!} + \cdots = I_{4} + N + 0_{44} + \cdots = I_{4} + N$$ and that $e^{S} = \mathrm{diag}(e^{B}, e^{B})$ and compute: $$e^{A} = e^{S + N} = e^{S}e^{N} = \mathrm{diag}(e^{B}, e^{B})\cdot[I_{4} + N]$$ This reduces the problem to finding $e^B$, which is much easier. Is my logic correct? I just started writing everything as a block matrix and proceeded as if nothing about the process of finding the exponential of a matrix would change. But I don't really know the theory behind this I'm just guessing how it would work.
A different, but rather specific, strategy would be to use the ring homomorphism $${a+bi\in\mathbb C \mapsto \pmatrix{a&-b \\ b&a}\in\mathbb R^{2\times 2}}$$in the block decomposition. Then your problem is equivalent to finding $$e^{t\pmatrix{1+i & 1\\ 0 & 1+i}}=e^{\pmatrix{t+ti & t\\ 0 & t+ti}}=e^{t+ti}e^{\pmatrix{0&t\\0&0}}=(e^{t+ti})\pmatrix{1&t\\0&1}$$ which unfolds to $$\pmatrix{e^t\cos t & -e^t\sin t & t e^t \cos t & -t e^t \sin t \\ e^t \sin t & e^t \cos t & t e^t \sin t & t e^t \cos t \\ 0 & 0 & e^t\cos t & -e^t\sin t \\ 0&0& e^t\sin t & e^t\cos t }$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/73112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Is it correct to say that $\mathbb{R}$ has fewer elements than $\mathbb{C}$ if both are infinite? My math teacher said that. I disagreed, but he said that I was wrong. But I'm not convinced - is it really right? Please notice that I'm not talking about $\mathbb{R}$ $⊂$ $\mathbb{C}$, but $\mathbb{R}$ $<$ $\mathbb{C}$.
$\mathbb R$ is smaller than $\mathbb C$ ... ??? In the sense of cardinality, NO. (As explained in other answers.) In the sense of Hausdorff dimension, YES. What sense did your teacher mean? I'm sorry, but we often find when a student reports what a teacher says, some details may get distorted or even omitted. Unless the teacher comes here and makes a defense, we cannot know what is going on. Or, even better, if the student talks to the teacher (what an idea!) and finds out what was intended.
{ "language": "en", "url": "https://math.stackexchange.com/questions/73172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
Solve for equation algebraically Is it possible to write the following function as $H(x) = $ 'some expression'? $$D(x) = H(x) + H(x-1)$$ Edit: Hey everyone, thanks for all the great responses, and just to clarify H(x) and D(x) are always going to be polynomials, I wasn't sure if that made too big of a difference so I didn't mention it before. Edit 2: I'm sincerely sorry if I was too general, I've only really used polynomial equations in my education thus far, so I didn't realize they might not seem just regular to everyone else. Once again I'm very sorry if I didn't give everyone an opportunity to answer the question because of my vagueness. I sincerely appreciate all of your answers and time.
I'll use lower case, so $d(x) = h(x) + h(x-1)$. Let $D(x) = \sum \limits_{n=0}^{\infty} d(n) x^n/n!$ and $H(x) = \sum \limits_{n=0}^{\infty} h(n) x^n/n!$. $H'(x) = \sum \limits_{n=1}^{\infty} h(n) x^{n-1}/(n-1)! = \sum \limits_{n=0}^{\infty} h(n+1) x^n/n!$ so $H(x) + H'(x) = \sum \limits_{n=0}^{\infty} (h(n)+h(n+1)) x^n/n! = \sum \limits_{n=0}^{\infty} d(n+1) x^n/n!$. Since $D'(x) = \sum \limits_{n=0}^{\infty} d(n+1) x^n/n!$, $H(x) + H'(x) = D'(x)$. The usual integrating factor is $e^x$: $e^x D'(x) = e^x(H(x) + H'(x)) =(e^x H(x))'$, so $e^x H(x) = \int e^x D'(x) dx $ or $ H(x) = e^{-x} \int e^x D'(x) dx $. If you can do the integral (and I haven't made too many mistakes) , you can get $H$. You can integrate by parts, but you still have to deal with $\int e^x D(x) dx$. Now I'll see what was entered while I was entering this ...
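For the polynomial case the asker mentions in the edit, there is a more elementary route than generating functions: $h$ must have the same degree as $d$ (the leading coefficient simply doubles), so matching values at enough sample points gives a linear system for its coefficients. A sketch (function names are mine):

```python
import numpy as np

def solve_h(d_coeffs):
    """Solve h(x) + h(x-1) = d(x) for a polynomial h of the same degree as d.
    Coefficients are ordered highest degree first, as in np.polyval."""
    m = len(d_coeffs) - 1
    xs = np.arange(m + 1, dtype=float)          # m+1 sample points suffice
    # the column for x^k in h contributes x^k + (x-1)^k to the left-hand side
    M = np.column_stack([xs**k + (xs - 1)**k for k in range(m, -1, -1)])
    return np.linalg.solve(M, np.polyval(d_coeffs, xs))

d = [2.0, 0.0, 0.0, 0.0]                        # d(x) = 2x^3
h = solve_h(d)                                  # expect h(x) = x^3 + 1.5x^2 - 0.25
xs = np.array([5.0, -3.0, 0.5])
lhs = np.polyval(h, xs) + np.polyval(h, xs - 1)
rhs = np.polyval(d, xs)
```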
{ "language": "en", "url": "https://math.stackexchange.com/questions/73222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 2 }
Containment of primary ideals Suppose, $R$ is a noetherian ring. Let $P$ be a prime ideal in $R$. Let $Q$ be a $P$-primary ideal that contains $P^n$. Then does $Q$ contain $P^{(n)}$ which is the $n$th symbolic power of $P$ and is the $P$-primary component that occurs in any irredundant primary decomposition of $P^n$? I think this should be true, but I am not sure where to start. I was thinking in terms of taking an irredundant decomposition for $P^n$ and intersecting it with $Q$. This introduces two $P$ primary components, so one must be contained in the other?
The answer is yes. Suppose $P^n = P^{(n)}\cap I_1\cap I_2\cap ...\cap I_r$ is a minimal primary decomposition. Since $P^n\subseteq Q$ and the intersection of two $P$-primary ideals is again $P$-primary, we find that $P^n = (P^{(n)}\cap Q)\cap I_1\cap I_2\cap ...\cap I_r$ is also a primary decomposition (discarding redundant components if necessary to make it minimal). Now, primary decomposition is not unique in general, but in this case we know that the $P$-primary component is isolated because the radical of $P^n$ is $P$ (thus, the radicals of all the $I_i$ must contain $P$) and therefore $P^{(n)} \cap Q = P^{(n)}$ which means that $P^{(n)}\subseteq Q$. If I am not mistaken, this observation is needed even for the definition of the symbolic power as "the $P$-primary component of $P^n$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/73261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to prove $\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n} - 1) = 0$? I want to show that $$\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n}-1) = 0$$ and my assistant teacher gave me the hint to find a proper estimate for $\sqrt[n]{n}-1$ in order to do this. I know how one shows that $\lim_{n \to \infty} \sqrt[n]{n} = 1$, to do this we can write $\sqrt[n]{n} = 1+x_n$, raise both sides to the n-th power and then use the binomial theorem (or to be more specific: the term to the second power). However, I don't see how this or any other trivial term (i.e. the first or the n-th) could be used here. What estimate am I supposed to find or is there even a simpler way to show this limit? Thanks for any answers in advance.
Use the fact that, when $n\to\infty$, $$\sqrt[n]{n}-1=\exp\left(\frac{\log n}n\right)-1\sim\frac{\log n}n$$
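A quick numeric illustration of this estimate, $\sqrt[n]{n}-1\sim \log n/n$, hence $\sqrt n(\sqrt[n]{n}-1)\sim \log n/\sqrt n \to 0$:

```python
import math

def f(n):
    return math.sqrt(n) * (n ** (1.0 / n) - 1)

# the sequence decreases toward 0, tracking log(n)/sqrt(n)
vals = {n: f(n) for n in (10, 10**3, 10**6, 10**9)}
ratio = f(10**9) / (math.log(10**9) / math.sqrt(10**9))   # should be near 1
```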
{ "language": "en", "url": "https://math.stackexchange.com/questions/73403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 2 }
Finitely Presented Infinite Group has an Element of Infinite Order? Is it true that an infinite group which is finitely generated must have an element of infinite order? If not, I'd like a counterexample. [Edit] Somewhat easily proven false. More interesting (and as it happens, more relevant to my particular problem domain), what about finitely presented groups?
In full generality, this is false. As mentioned in the comments, the Burnside groups can be infinite groups, finitely generated by construction, such that every element has bounded finite order. One can even force such a group to be simple. There are positive results for certain families of groups, however. Immediately I am reminded of solvable groups and hyperbolic groups. Perhaps a much more interesting question is if there is a finitely presented, infinite group, with no element of infinite order. AFAIK, this question is still wide open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/73474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Isolate a variable in a polynomial function How would I go about isolating $y$ in this function? I'm going crazy right now because I can't figure this out. The purpose of this is to allow me to derive $f(x)$ afterwards. $$ x = \frac{y^2}{4} + 2y .$$
Point the Zeroth: ignore points the first and points the second (in the sense that they aren't really the 'right' way of proceeding; they are presented so you can see that they are not the right path to take). Point the First: $y$ is not a function of $x$; if you plot this equation on the plane, you'll have a parabola, $$x = \frac{y^2}{4} + 2y = \left(\frac{y}{2}\right)^2 + 2\left(\frac{y}{2}\right)(2) + 2^2 - 2^2 = \left(\frac{y}{2} + 2\right)^2 - 4.$$ This parabola opens right, so it is not the graph of a function of $x$. Point the Second: You can break up the graph into two functions by using the quadratic formula: $$ \frac{y^2}{4} + 2y - x = 0$$ gives $$y^2 + 8y - 4x = 0,$$ so $$y = \frac{-8+\sqrt{64+16x}}{2},\quad\text{or}\quad y = \frac{-8-\sqrt{64+16x}}{2}.$$ We would then need to find the derivatives of each of these two separately, and for any given value of $x$ and $y$, determine which of the two formulas to use. They are not hard, but they are somewhat annoying. If $y = -4 + \frac{1}{2}\sqrt{64+16x}$, then $$\frac{dy}{dx} = \frac{1}{4}(64+16x)^{-1/2}(16) = \frac{4}{\sqrt{64+16x}}.$$ Similarly, if $y=-4-\frac{1}{2}\sqrt{64+16x}$, then $$\frac{dy}{dx} = -\frac{4}{\sqrt{64+16x}}.$$ Point the Third: What you really want to do here is implicit differentiation, which is a way of handling all of these difficulties without having to solve for $y$ first, and without having to worry about "which formula" to use later. 
Explicitly, from $$x = \frac{y^2}{4} + 2y,$$ take derivatives on both sides, using the Chain Rule and remembering that $y$ is an (implicit) function of $x$, so that $y'$ needs to be left indicated (we don't know what it is right now): $$\begin{align*} x & = \frac{y^2}{4} + 2y\\ \frac{d}{dx}x &= \frac{d}{dx}\left( \frac{y^2}{4} + 2y\right)\\ 1 &= \frac{2y}{4}y' + 2y'\\ 1&= \frac{y}{2}y' + 2y'\\ 1 &= y'\left(\frac{y}{2} + 2\right).\end{align*}$$ Solving for $y'$ gives an implicit definition for $\frac{dy}{dx}$ in terms of $y$ and $x$ (though in this case, $x$ plays no role): $$y' = \frac{1}{\frac{y}{2}+2} = \frac{2}{y+4}.$$ Point the Fourth: Alternatively, since you have $x$ explicitly as a function of $y$, use the Inverse Function Theorem: taking derivatives with respect to $y$, we have: $$\frac{dx}{dy} = \frac{1}{2}y + 2,$$ so $$\frac{dy}{dx} = \frac{1}{\quad\frac{dx}{dy}\quad} = \frac{1}{\frac{1}{2}y + 2} = \frac{2}{y+4}.$$ Point the Fifth: So, do these "implicit formulas" give the same answer as the "explicit ones" we got in Point the Second? Yes! If $y = \frac{-8+\sqrt{64+16x}}{2}$, then $y+4 = \frac{\sqrt{64+16x}}{2}$, so $$\frac{2}{y+4} = \frac{2}{\frac{1}{2}\sqrt{64+16x}} = \frac{4}{\sqrt{64+16x}},$$ same answer as in Point the Second; and if $y=\frac{-8-\sqrt{64+16x}}{2}$ then $y+4 = -\frac{1}{2}\sqrt{64+16x}$, so $$\frac{2}{y+4} = \frac{2}{-\frac{1}{2}\sqrt{64+16x}} = -\frac{4}{\sqrt{64+16x}},$$ again, same answer as in Point the Second. But using implicit differentiation (or in cases like this, when $x$ is an explicit function of $y$, the inverse function theorem) is much easier than first solving for $y$, possibly requiring breaking up the original implicit function into several different explicit functions, and then differentiating. 
If you were trying to work with the Folium of Descartes ($x^3+y^3=3xy$), you would have to consider three different formulas, each involving a sum of cubic roots that has square roots inside the radicals; if you were trying to work with a function like $y = \sin(x+y)$, you would have a hard time solving for $y$, but using implicit differentiation is pretty easy.
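As a cross-check of Point the Fifth, a small numerical sketch comparing the implicit formula $y'=2/(y+4)$ with a central-difference derivative of the upper explicit branch (the sample point $x_0=3$ and the step size are arbitrary choices):

```python
import math

def y_upper(x):
    # the upper explicit branch from Point the Second
    return (-8 + math.sqrt(64 + 16 * x)) / 2

x0 = 3.0
y0 = y_upper(x0)
formula = 2.0 / (y0 + 4)  # implicit-differentiation result y' = 2/(y+4)

h = 1e-6
numeric = (y_upper(x0 + h) - y_upper(x0 - h)) / (2 * h)  # central difference
print(numeric, formula)
```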
{ "language": "en", "url": "https://math.stackexchange.com/questions/73544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closed form for $\sum_{m \geq 1} (-1)^m q^{m(m+1)/2 + m \Delta}$? Is there a useful closed form for the following series ($|\Delta|$ is a small integer)? $$f(q,\Delta) =\sum_{m=1}^{\infty} (-1)^m q^{m(m+1)/2 + m \Delta}$$ It is a large-$n$ approximation of the polynomial $-[n+\Delta, n]_q$ discussed here. EDIT: A more useful form, it turns out, is $ \tilde{f}(q,z) =\sum\limits_{m=1}^{\infty} (-1)^m q^{m(m+1)/2} z^m$. Its normal (non-$q$-analog) limit is trivial and appealing.
A fair bit of massaging is needed here. $$\begin{align*}\sum_{m=1}^{\infty} (-1)^m q^{m(m+1)/2 + m \Delta}&=\sum_{m=2}^{\infty} (-1)^{m-1} q^{m(m-1)/2}q^{(m-1)\Delta}\\&=-q^{-\Delta}\sum_{m=2}^{\infty} (-1)^m q^{m(m-1)/2}q^{m\Delta}\\&=q^{-\Delta}-1-q^{-\Delta}\sum_{m=0}^{\infty} (-1)^m q^{m(m-1)/2}q^{m\Delta}\\&=q^{-\Delta}-1-q^{-\Delta}\sum_{m=0}^{\infty}\frac{(q;q)_m}{(q;q)_m (0;q)_m} (-1)^m q^{m(m-1)/2}q^{m\Delta}\end{align*}$$ and finally we recognize the form of a basic hypergeometric function: $$\sum_{m=1}^{\infty} (-1)^m q^{m(m+1)/2 + m \Delta}=q^{-\Delta}-1-q^{-\Delta}{}_1 \phi_1\left({q \atop 0};q,q^\Delta\right)$$ Probably there is an easier expression in terms of Jacobi theta functions, but I haven't tried that route...
{ "language": "en", "url": "https://math.stackexchange.com/questions/73618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Obtaining a two step transition matrix in a stationary Markov chain I'm reading the chapter on Markov processes in DeGroot and do not find the explanation for the following thing: A transition matrix P is specified in the following way: $$P = \begin{pmatrix} 0.1 & 0.4 & 0.2 & 0.1 & 0.1 & 0.1\\ 0.2 & 0.3 & 0.2 & 0.1 & 0.1 & 0.1\\ 0.1 & 0.2 & 0.3 & 0.2 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.2 & 0.3 & 0.2 & 0.1\\ 0.1 & 0.1 & 0.1 & 0.2 & 0.3 & 0.2\\ 0.1 & 0.1 & 0.1 & 0.1 & 0.4 & 0.2 \end{pmatrix}$$ It mentions that to obtain a two step matrix you simply multiply the matrix by itself to obtain $P^2$. I don't understand how these values are obtained for $P^2$: $$P^2 = \begin{pmatrix} 0.14 & 0.23 & 0.20 & 0.15 & 0.16 & 0.12\\ 0.13 & 0.24 & 0.20 & 0.15 & 0.16 & 0.12\\ 0.12 & 0.20 & 0.21 & 0.18 & 0.17 & 0.12\\ 0.11 & 0.17 & 0.19 & 0.20 & 0.20 & 0.13\\ 0.11 & 0.16 & 0.16 & 0.18 & 0.24 & 0.15\\ 0.11 & 0.16 & 0.15 & 0.17 & 0.25 & 0.16 \end{pmatrix}$$ What am I missing? Should the values simply be multiplied by themselves?
If I understand the question correctly, what you're looking for is matrix multiplication.
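A pure-Python sketch of exactly that computation: entry $(i,j)$ of $P^2$ is $\sum_k p_{ik}p_{kj}$, the probability of going from state $i$ to state $j$ in two steps, summed over the intermediate state $k$. It reproduces the book's numbers.

```python
P = [
    [0.1, 0.4, 0.2, 0.1, 0.1, 0.1],
    [0.2, 0.3, 0.2, 0.1, 0.1, 0.1],
    [0.1, 0.2, 0.3, 0.2, 0.1, 0.1],
    [0.1, 0.1, 0.2, 0.3, 0.2, 0.1],
    [0.1, 0.1, 0.1, 0.2, 0.3, 0.2],
    [0.1, 0.1, 0.1, 0.1, 0.4, 0.2],
]
n = len(P)
# entry (i, j) of P^2: probability of i -> j in two steps, summing over
# the intermediate state k
P2 = [[sum(P[i][k] * P[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]

for row in P2:
    print([round(v, 2) for v in row])
```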
{ "language": "en", "url": "https://math.stackexchange.com/questions/73672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$(a_1,\ldots,a_n)\!=\!(a)\;\Leftrightarrow\;a\!=\!\gcd(a_1,\ldots,a_n)$? Could you please help me finish the proof below. The only problem is the $(\Leftarrow)$ part of a). Proposition???: In any domain: a) $(a_1,\ldots,a_n)\!=\!(a)\;\Leftrightarrow\;a\!=\!\gcd(a_1,\ldots,a_n)$ b) $Ra_1\!\cap\!\ldots\!\cap\!Ra_n\!=\!Ra\;\Leftrightarrow\; a\!=\!\mathrm{lcm}(a_1,\ldots,a_n)$. Proof: $(\Rightarrow)\!:$ If $(a_1,\ldots,a_n)\!=\!(a)$, then $a_i\!\in\!(a)$, so $a\mid a_i$. If any other $a'\mid a_i$, then $a_i\!\in\!(a')$, so $(a')\geq(a_1,\ldots,a_n)\!=\!(a)$, hence $a'\mid a$. $\checkmark$ $(\Leftarrow)\!:$ If $a\!=\!\gcd(a_1,\ldots,a_n)$, then $a\mid a_i$, so $a_i\!\in\!(a)$, hence $(a_1,\ldots,a_n)\leq(a)$. If $a'\!\in\!(a)$, then ??? $(\Rightarrow)\!:$ If $Ra_1\!\cap\!\ldots\!\cap\!Ra_n\!=\!Ra$, then $a\!\in\!Ra_i$ so $a_i\mid a$. If any other $a_i\mid a'$, then $a'\!\in\!Ra_1\!\cap\!\ldots\!\cap\!Ra_n\!=\!Ra$, so $a\mid a'$. $\checkmark$ $(\Leftarrow)\!:$ If $a\!=\!\mathrm{lcm}(a_1,\ldots,a_n)$, then $a_i\mid a$, so $a\!\in\!Ra_i$, hence $Ra\!\leq\!Ra_1\!\cap\!\ldots\!\cap\!Ra_n$. If $a'\!\in\!Ra_1\!\cap\!\ldots\!\cap\!Ra_n$, then $a_i\mid a'$, so $a\mid a'$ and $a'\!\in\!Ra$. $\checkmark$ Is this not true in any domain? I would think it is, since the analogous statement for $\mathrm{lcm}$ is...
No, this is not true in $\mathbb{Z}[x,y]$, since $\gcd(x,y)=1$, but $(x,y)$ is not principal (not to mention that $(x,y)\subsetneq R$). In particular, part a) doesn't hold in any domain that is not a Bézout domain (i.e. an integral domain in which every finitely generated ideal is principal; in particular, every PID is Bézout). Of course, if we're under the assumption that $R$ is a PID, then for your proof of $(\Leftarrow)$ in a), you can use the fact that $(a_1,a_2,\cdots,a_n)=(x)$ for some $x\in R$ and then show that $x$ must necessarily be a greatest common divisor of $a_1,\cdots,a_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/73695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Small primes attract large primes $$ \begin{align} 1100 & = 2\times2\times5\times5\times11 \\ 1101 & =3\times 367 \\ 1102 & =2\times19\times29 \\ 1103 & =1103 \\ 1104 & = 2\times2\times2\times2\times 3\times23 \\ 1105 & = 5\times13\times17 \\ 1106 & = 2\times7\times79 \end{align} $$ In looking at this list of prime factorizations, I see that all of the first 10 prime numbers, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, appear within the factorizations of only seven consecutive integers. (The next prime number, 31, has its multiples as far from 1100 as it could hope to get them (1085 and 1116).) So no nearby number could hope to be divisible by 29 or 23, nor even by 7 for suitably adjusted values of "nearby". Consequently when you're factoring nearby numbers, you're deprived of those small primes as potential factors by which they might be divisible. So nearby numbers, for lack of small primes that could divide them, must be divisible by large primes. And accordingly, not far away, we find $1099=7\times157$ (157 doesn't show up so often---only once every 157 steps---that you'd usually expect to find it so close by) and likewise 1098 is divisible by 61, 1108 by 277, 1096 by 137, 1095 by 73, 1094 by 547, etc.; and 1097 and 1109 are themselves prime. So if an unusually large number of small primes occur unusually close together as factors, then an unusually large number of large primes must also be in the neighborhood. Are there known precise results quantifying this phenomenon?
The phenomenon you describe seems to be the concept behind the Sieve of Eratosthenes. Every number below $ \sqrt{N} $, where $ N $ is the number you want to factorize, will appear at least once in the list of possible factors. Looking at the list generated by the Sieve, it becomes obvious that only primes remain. A factorization using the Sieve then implies trial division by the primes up to $ \sqrt{N}$. This concept lets us see that, obviously, if "many" small primes were used to remove elements in the Sieve, the remaining elements will have larger prime factors, possibly being primes themselves. However: no generalization of any kind is possible, since every number's occurrence is cyclic. Think of a Fourier transform, using prime factors as frequencies: 2 appears every second number, 3 every third, and so on. At any point $N$, there is no way to determine whether there are primes nearby, or whether the numbers nearby will have "large primes" as factors, without its value. I could also relate what you're saying to the concept of Mersenne primes: primes of the form $2^{n}-1$, the largest known being $2^{43,112,609}-1$, which also happens to be the largest prime known. In this case, they are looking for primes in the vicinity of powers of 2, which is also saying that the largest factor is a large prime, right? So yes, it stands to reason that if $N$ isn't a prime and has small factors, numbers nearby have a chance to have greater factors. No quantification of that is useful, however.
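To make the question's observation concrete, a small sketch (the window around 1100 and the helper are ad hoc) that trial-factors the nearby numbers and reports each one's largest prime factor:

```python
def largest_prime_factor(n):
    # plain trial division; perfectly adequate for numbers this small
    largest = None
    d = 2
    while d * d <= n:
        while n % d == 0:
            largest = d
            n //= d
        d += 1
    if n > 1:
        largest = n  # any leftover factor > sqrt of the original is prime
    return largest

factors = {n: largest_prime_factor(n) for n in range(1094, 1110)}
for n, p in factors.items():
    print(n, p)
```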
{ "language": "en", "url": "https://math.stackexchange.com/questions/73763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 3, "answer_id": 0 }
Prove an inequality by Induction: $(1-x)^n + (1+x)^n < 2^n$ Could you give me some hints, please, to the following problem. Given $x \in \mathbb{R}$ such that $|x| < 1$. Prove by induction the following inequality for all $n \geq 2$: $$(1-x)^n + (1+x)^n < 2^n$$ $1$ Basis: $$n=2$$ $$(1-x)^2 + (1+x)^2 < 2^2$$ $$(1-2x+x^2) + (1+2x+x^2) < 2^2$$ $$2+2x^2 < 2^2$$ $$2(1+x^2) < 2^2$$ $$1+x^2 < 2$$ $$x^2 < 1 \implies |x| < 1$$ $2$ Induction Step: $n \rightarrow n+1$ $$(1-x)^{n+1} + (1+x)^{n+1} < 2^{n+1}$$ $$(1-x)(1-x)^n + (1+x)(1+x)^n < 2·2^n$$ I tried to split it into $3$ cases: $x=0$ (then it's true), $-1<x<0$ and $0<x<1$. Could you tell me please, how should I move on. And do I need a binomial theorem here? Thank you in advance.
Just show that $a^n + b^n < (a+b)^n$ for $a,b>0$ and $n\ge 2$ (you can prove this by induction too, using the fact that $p>0$ and $q>0$ imply $p+q>0$). Applying it with $a=1-x$ and $b=1+x$, both positive since $|x|<1$, and $a+b=2$, gives the result. Since $a,b$ are positive in this case, after expanding $(a+b)^n$ by the binomial theorem the claim reduces to $\sum_{i=1}^{n-1} \frac{n!}{i!(n-i)!} a^i b^{n-i}>0$, which is obviously true.
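A quick numerical spot-check of the inequality $(1-x)^n + (1+x)^n < 2^n$ over a grid of $|x|<1$ and small $n$ (purely illustrative; it proves nothing, but it catches sign errors):

```python
# Spot-check (1-x)^n + (1+x)^n < 2^n for x = -0.9, ..., 0.9 and n = 2, ..., 11.
checks = []
for n in range(2, 12):
    for k in range(-9, 10):
        x = k / 10.0
        checks.append((1 - x) ** n + (1 + x) ** n < 2 ** n)
print(all(checks))
```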
{ "language": "en", "url": "https://math.stackexchange.com/questions/73783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Evaluating Partial Sums I need some help with the following question from my homework; I do not exactly understand what to do. The question text is: Evaluate the partial sums of the infinite series $\displaystyle \sum_{n=1}^\infty \frac1{n(n+2)}$, and then evaluate the infinite series. The trouble I am having is understanding exactly what is asked of me to do.
The m-th partial sum of $ \displaystyle \sum_{n=1}^{\infty} \frac{1}{n(n+2) } $ is the sum truncated to the m-th term. In other words, it first wants you to find the finite sum $$ s_m = \sum_{n=1}^m \frac{1}{n(n+2)} $$ for all $m\in \mathbb{N}$. Then it wants you to find the original infinite sum by recalling the definition that $$ \sum_{n=1}^{\infty} \frac{1}{n(n+2) } = \lim_{m\to \infty} s_m .$$
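One way to actually carry out the computation the answer describes (this does the exercise, so treat it as a spoiler): partial fractions give $\frac{1}{n(n+2)}=\frac{1}{2}\left(\frac{1}{n}-\frac{1}{n+2}\right)$, so the sum telescopes to $s_m=\frac{1}{2}\left(\frac{3}{2}-\frac{1}{m+1}-\frac{1}{m+2}\right)$, which tends to $3/4$. A sketch that checks the closed form with exact rationals:

```python
from fractions import Fraction

def partial_sum(m):
    return sum(Fraction(1, n * (n + 2)) for n in range(1, m + 1))

def closed_form(m):
    # telescoping: s_m = 1/2 * (3/2 - 1/(m+1) - 1/(m+2))
    return Fraction(1, 2) * (Fraction(3, 2) - Fraction(1, m + 1) - Fraction(1, m + 2))

for m in (1, 2, 10, 100):
    print(m, partial_sum(m), closed_form(m))
```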
{ "language": "en", "url": "https://math.stackexchange.com/questions/73847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $\sum_{n=0}^{\infty}2^n$ equal to $-1$? Why? Possible Duplicate: Divisibility with sums going to infinity From Wikipedia and Minute Physics i see that the sum would be -1. I find this challenging to understand, how does a sum of positive integers end up being negative?
It only does so formally, not literally according to the usual definition of infinite series convergence over $\mathbb{R}$ or $\mathbb{C}$. The idea is to take the series to mean the familiar geometric series and analytically continue it as a function to arguments where the original definition does not work. I attempted an answer in another question as to what analytic continuation entails exactly. One may also use a variety of summability methods to evaluate these sums, whereby we alter or redefine the partial sums in some way in order to get a finite answer. The motivation for these perhaps seemingly artificial practices is that they are examples of regularization and renormalization of expressions naturally occurring in modern physics containing divergent or infinite factors. Suggestively, it does actually hold in a literal sense over the $2$-adic integers. The fact that the series converges 2-adically follows from $|2^n|_2=2^{-n}\to0$, and if we wanted to have fun we could pull an Euleresque stunt and call the sum $S$, multiply by $2$ and say $2S=S-1\implies S=-1$.
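The 2-adic claim can be checked directly: each partial sum equals $\sum_{n=0}^{k}2^n = 2^{k+1}-1$, so it differs from $-1$ by $2^{k+1}$, and the 2-adic distance to $-1$ halves at every step. A sketch:

```python
partial = 0
distances = []  # 2-adic distance from the k-th partial sum to -1
for k in range(20):
    partial += 2 ** k          # partial = 2^(k+1) - 1
    diff = partial + 1         # = 2^(k+1)
    v = (diff & -diff).bit_length() - 1   # 2-adic valuation of diff
    distances.append(2.0 ** (-v))
print(distances[:5])  # [0.5, 0.25, 0.125, 0.0625, 0.03125]
```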
{ "language": "en", "url": "https://math.stackexchange.com/questions/73907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Poizat's definition of $p$-equivalent $k$-tuples Maybe this is too trivial a question to be posted anywhere, but anyway. I am reading Poizat's "A Course in Model Theory". In page 4 he defines the notion of two $k$-tuples, each in the universe of some relation, being $p$-equivalent. He gives two conditions: * *$a_i = a_j \leftrightarrow b_i = b_j$. *The function $s$ defined by $sa_1=b_1,...,sa_k=b_k$ is a $p$-isomorphism from $R$ to $R'$. I guess the first of them is just a remark, because it seems redundant given the second. If the function $s$ is a $p$-isomorphism then it is, in particular, a bijection between the finite subsets $\{a_1,...,a_k\}$ and $\{b_1,...,b_k\}$, and hence it is impossible to falsify condition 1. Am I missing anything?
Yes, I was missing something pretty obvious. Condition 1 is what enables him to talk about the function $s$ in condition 2. Otherwise he should have talked about the relation $s$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/73952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Contiguous Saddle Points Could one have a function $f(x,y)$ s.t. it is increasing along the line $x=y$ but the partial derivatives $\frac{\partial f(x,y)}{\partial y} = \frac{\partial f(x,y)}{\partial x} = 0$ on every point on that line. Essentially this would amount to every point on that line being a saddle point. The function is differentiable everywhere.
No, because the directional derivative in the direction of $\vec{v}=(1,1)$ is nonzero along $x=y$, but the directional derivative is a linear combination of partial derivatives. If $\vec{u}=\vec{v}/|\vec{v}|$, then $D_{\vec{u}}f=\nabla f\cdot \vec u$, and your condition is that $\nabla f$ vanishes along the line $x=y$, which means that all directional derivatives are also zero, so the function can't be monotone increasing in any direction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what's the relationship between a.s. continuous and m.s. continuous? suppose that X(t) is a s.p. on T with $EX(t)^2<+\infty$. we give two kinds of continuity of X(t). * *X(t) is continuous a.s. *X(t) is m.s. continuous, i.e. $\lim\limits_{\triangle t \rightarrow 0}E(X(t+\triangle t)-X(t))^2=0$. Then, what's the relationship between these two kinds of continuity.
For an example which is a.s. but not m.s. continuous, take your time interval to be $[0, \infty]$, and let $X_t$ be a standard one-dimensional Brownian motion started at 0 and stopped the first time it reaches 1. (That is, if $B_t$ is a standard Brownian motion, take $T = \inf\{t > 0 : B_t = 1\}$ and $X_t = B_{t \wedge T}$.) Since Brownian motion is recurrent, we have $X_t \to 1$ a.s. as $t \to \infty$, and so by setting $X_{\infty} = 1$ we get an a.s. continuous stochastic process on $[0,\infty]$. However, $X_t$ is not m.s. continuous. If it were, then by Cauchy-Schwarz we would have $E[X_t] \to E[X_\infty] = 1$ as $t \to \infty$. But $X_t$ is a martingale and so $E[X_t] = 0$ for all $t \in [0,\infty)$. If you don't like using $[0,\infty]$ as your time interval, then apply a time change: let $$Y_t = \begin{cases} X_{t/(1-t)}, & t < 1 \\ 1, & t \ge 1.\end{cases}$$ Now $Y_t$ is a.s. continuous but $E[Y_t] = 0$ for $t < 1$, $E[Y_t] = 1$ for $t \ge 1$. Note that $Y_t$ is a standard example of a local martingale which is not a martingale.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Simplify fraction - Where did the rest go? While studying maths I encountered the following fraction : $\frac{5ab}{10b}$ Which I then had to simplify. The answer I came up with is: $\frac{5ab}{10b} = \frac{ab}{2b}$ But the correct answer seemed to be: $\frac{5ab}{10b} = \frac{a}{2} = \frac{1}{2}$a Why is the above answer correct and mine wrong? I can't wrap my head around $b$ just disappearing like that.
If you are OK with cancelling the factors of 5 in the numerator and the denominator, well, this is just cancelling the factors of $b$ in the numerator and denominator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
There exists a real number $c$ such that $A+cI$ is positive when $A$ is symmetric Without using the fact that symmetric matrices can be diagonalized: Let $A$ be a real symmetric matrix. Show that there exists a real number $c$ such that $A+cI$ is positive. That is, if $A=(a_{ij})$, one has to show that there exists real $c$ that makes $\sum_i a_{ii}x_i^2 + 2\sum_{i<j}a_{ij}x_ix_j + c\sum_i x_i^2 > 0$ for any vector $X=(x_1,...,x_n)^T$. This is an exercise in Lang's Linear Algebra. Thank you for your suggestions and comments.
You can use the fact $$ \lambda_{min}x^Tx \leq x^TAx\leq \lambda_{max}x^Tx $$ (which is not difficult to prove) and suppose the worst case, namely that the matrix is already negative definite, hence all the eigenvalues are negative. This means $x^TAx<0$ for all non-zero $x$. This allows us to write $$ \lambda_{min}\|x\|^2 \leq x^TAx\leq \lambda_{max}\|x\|^2 < 0 $$ But, consider the following: $$ x^T(A+cI)x = x^TAx +cx^Tx \geq (\lambda_{min}+c)\|x\|^2 $$ If we select $c>|\lambda_{min}|$ we obtain a positive definite matrix, since for every non-zero $x$, $x^T(A+cI)x > 0$.
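A small numeric illustration of the shift, using a $2\times2$ example where $\lambda_{\min}$ is known by hand: $A=\begin{pmatrix}0&2\\2&0\end{pmatrix}$ has eigenvalues $\pm 2$, so any $c>2$ works. The sketch checks $x^T(A+cI)x>0$ on a grid of unit vectors (positivity on the unit circle suffices, by scaling):

```python
import math

A = [[0.0, 2.0], [2.0, 0.0]]   # symmetric; eigenvalues are +2 and -2
c = 3.0                        # any c > |lambda_min| = 2 works
B = [[A[0][0] + c, A[0][1]], [A[1][0], A[1][1] + c]]   # B = A + cI

def quad_form(M, x):
    return sum(M[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

values = [quad_form(B, (math.cos(math.radians(k)), math.sin(math.radians(k))))
          for k in range(360)]
print(min(values))  # the form equals 3 + 2*sin(2t); its minimum on the grid is 1.0
```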
{ "language": "en", "url": "https://math.stackexchange.com/questions/74351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is there a reason why curvature is defined as the change in $\mathbf{T}$ with respect to arc length $s$ And not with respect to time $t$? (or whatever parameter one is using) $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{dt}}|$ seems more intuitive to me. I can also see that $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{ds}}| = |\frac{d\mathbf{r}'(t)}{dt}|$ (because $\displaystyle |\mathbf{r}'(t)| = \frac{ds}{dt}$, which does make sense, but I don't quite understand the implications of $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{dt}}|$ vs. $\displaystyle |\frac{d\mathbf{T}(t)}{\mathit{ds}}|$ and why the one was chosen over the other.
The problem is that if you define it in the "more intuitive way" the curvature depends on the parametrisation. We calculate the curvature of a curve, not of a parametrisation. Simple question: how do you calculate the curvature of the hyperbola $y^2-x^2=1$ at, let's say, $(0,1)$? You could a) solve for $y$ and use $x=t, y= \sqrt{t^2+1}$, b) use $x= \tan(t), y= \sec(t)$, or c) use $x= \sinh(t), y= \cosh(t)$. Each of these leads to a different $\displaystyle \left\vert\frac{d\mathbf{T}(t)}{\mathit{dt}}\right\vert$. So which one would you pick as the curvature of $y^2-x^2=1$? Intuitively, when we parametrise a curve, we are basically describing the curve as the trajectory of a particle, by describing its coordinates at time $t$. I always think about different parametrisations as being different particles moving on the same curve/trajectory. If a particle moves faster, the calculation of $\displaystyle \left\vert\frac{d\mathbf{T}(t)}{\mathit{dt}}\right\vert$ also takes into account its velocity. If I am not mistaken, in the arc length parametrisation, we simply pick the particle which covers 1 unit of the curve per unit of time. Basically we decide that the speed is constantly 1. This is the most "natural" choice you can make.
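A concrete version of this dependence, using a circle of radius $R$ where everything is known in closed form: for $\mathbf r(t)=(R\cos t,R\sin t)$ one gets $|d\mathbf T/dt|=1$, while the twice-as-fast parametrisation $\mathbf r(t)=(R\cos 2t,R\sin 2t)$ gives $|d\mathbf T/dt|=2$; dividing by the speed $|\mathbf r'(t)|$ recovers the curvature $1/R$ in both cases. A finite-difference sketch (step size and sample point are arbitrary):

```python
import math

R = 3.0
h = 1e-5

def velocity(r, t):
    # central-difference approximation of r'(t)
    dx = (r(t + h)[0] - r(t - h)[0]) / (2 * h)
    dy = (r(t + h)[1] - r(t - h)[1]) / (2 * h)
    return dx, dy

def unit_tangent(r, t):
    dx, dy = velocity(r, t)
    s = math.hypot(dx, dy)
    return dx / s, dy / s

def dT_dt(r, t):
    ax, ay = unit_tangent(r, t + h)
    bx, by = unit_tangent(r, t - h)
    return math.hypot(ax - bx, ay - by) / (2 * h)

slow = lambda t: (R * math.cos(t), R * math.sin(t))          # one lap per 2*pi
fast = lambda t: (R * math.cos(2 * t), R * math.sin(2 * t))  # twice as fast

t0 = 0.4
a = dT_dt(slow, t0)   # depends on the parametrisation: ~1
b = dT_dt(fast, t0)   # ~2
curv_a = a / math.hypot(*velocity(slow, t0))  # |dT/ds| = 1/R either way
curv_b = b / math.hypot(*velocity(fast, t0))
print(a, b, curv_a, curv_b)
```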
{ "language": "en", "url": "https://math.stackexchange.com/questions/74403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
A question about hyperbolic functions Suppose $(x,y,z),(a,b,c)$ satisfy $$x^2+y^2-z^2=-1, z\ge 1,$$ $$ax+by-cz=0,$$ $$a^2+b^2-c^2=1.$$ Does it follow that $$z\cosh(t)+c\sinh(t)\ge 1$$ for all real number $t$?
Write $$a=\rho\cos\phi,\quad b=\rho\sin\phi;\qquad x=r\cos\psi,\quad y=r\sin\psi.$$ Then $1+r^2=z^2$, whence $$r=\sinh \tau,\quad z=\cosh\tau$$ for some $\tau\geq0$. Similarly $c^2=a^2+b^2-1=\rho^2-1$, whence $$\rho=\cosh\alpha,\quad c=\sinh\alpha$$ for some $\alpha\in{\mathbb R}$. Therefore we get $$\eqalign{ax+by-cz&=\rho r(\cos\phi\cos\psi+\sin\phi\sin\psi)-\cosh\tau\sinh\alpha \cr &=\sinh\tau\cosh\alpha\cos(\phi-\psi)-\cosh\tau\sinh\alpha\ .\cr}$$ As $ax+by-cz=0$ this implies $$\left|{\sinh\alpha\over\cosh\alpha}\right|\leq\left|{\sinh\tau\over\cosh\tau}\right|$$ or $|\alpha|\leq\tau$. It follows that for any real $t$ one has $$\eqalign{z\cosh t+ c\sinh t&=\cosh\tau\cosh t+\sinh\alpha\sinh t\geq \cosh\alpha\cosh t+\sinh\alpha\sinh t\cr &=\cosh(\alpha+t)\geq1\ .\cr}$$
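A numeric sanity check of this construction; the parameter values below are arbitrary, subject to $|\alpha|\le\tau$ (equivalently $|\tanh\alpha|\le\tanh\tau$) so that the orthogonality constraint is solvable for $\phi$:

```python
import math

# Pick tau, alpha, psi with |tanh(alpha)| <= tanh(tau) so the acos argument
# below lies in [-1, 1].
tau, alpha, psi = 1.0, 0.3, 0.7
phi = psi + math.acos(math.tanh(alpha) / math.tanh(tau))

x = math.sinh(tau) * math.cos(psi)
y = math.sinh(tau) * math.sin(psi)
z = math.cosh(tau)
a = math.cosh(alpha) * math.cos(phi)
b = math.cosh(alpha) * math.sin(phi)
c = math.sinh(alpha)

# the three hypotheses of the problem
h1 = x * x + y * y - z * z   # should be -1
h2 = a * a + b * b - c * c   # should be +1
h3 = a * x + b * y - c * z   # should be  0

# minimum of z*cosh(t) + c*sinh(t) over a grid of t in [-5, 5]
m = min(z * math.cosh(t) + c * math.sinh(t)
        for t in [i / 100.0 - 5.0 for i in range(1001)])
print(h1, h2, h3, m)
```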
{ "language": "en", "url": "https://math.stackexchange.com/questions/74468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Existence of sequence such that $N$ divides the sum of the first $N$ terms for all $N$ Does there exist a sequence $a_n$ which consists of every natural number and for each $N$, $\sum\limits_{k=1}^N a_k$ is divisible by $N$?
Yes. You start with $3,1$. Now, you suppose that you have already found the first $2k$ numbers of your sequence. Put the smallest natural number that has not yet been used in place $2k+2$ and then find a number that has the right remainder modulo $(2k+1)(2k+2)$ in place $2k+1$ so that both the sums up to $a_{2k+1}$ and $a_{2k+2}$ have the right divisibility properties. The first action ensures that all natural numbers are used eventually. The second one ensures that the divisibility conditions are fulfilled, and uses the fact that consecutive integers are relatively prime. The above construction starts out with: 3,1,14,2,30,4 Edited to add: The condition on the first $2k+1$ numbers tells you that $a_{2k+1}$ has a certain remainder modulo $2k+1$, the condition on the first $2k+2$ numbers tells you that it has a certain remainder modulo $2k+2$. By the Chinese remainder theorem, you can combine these conditions into a certain remainder modulo $(2k+1)(2k+2)$. Since there are infinitely many numbers with a given remainder and you have only used finitely many at the moment, you can find a number that works. Example for $a_3=x$: I have already fixed $3,1,x,2$. So 3 has to divide $x+4$ and 4 has to divide $x+6$. Therefore $x$ has remainder 2 modulo 3 and remainder 2 modulo 4, which is equivalent to being 2 modulo 12. The first such number, 2, has already been taken, so I take the second number, $2+12=14$.
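A brute-force sketch of this construction (searching linearly for the odd-position term instead of solving the CRT congruence explicitly; the search bound is arbitrary but generous):

```python
def build_sequence(pairs):
    # First two terms as in the answer; then repeatedly fill places
    # 2k+1 and 2k+2. Place 2k+2 gets the smallest unused natural number;
    # place 2k+1 is found by brute force (the CRT argument guarantees
    # a solution exists, so the search terminates).
    seq, used = [3, 1], {3, 1}
    for k in range(1, pairs):
        even = next(m for m in range(1, 10**6) if m not in used)
        s = sum(seq)
        odd = next(
            x for x in range(1, 10**6)
            if x not in used and x != even
            and (s + x) % (2 * k + 1) == 0
            and (s + x + even) % (2 * k + 2) == 0
        )
        seq += [odd, even]
        used |= {odd, even}
    return seq

seq = build_sequence(10)
print(seq)
```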
{ "language": "en", "url": "https://math.stackexchange.com/questions/74545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
What is the math notation for this type of function? A function that turns a real number into another real number can be represented like $f : \mathbb{R}\to \mathbb{R}$ What is the analogous way to represent a function that turns an unordered pair of positive integers, each in $\{1,...,n\}$, into a real number? I guess it would almost be something like $$f : \{1,...,n\} \times \{1,...,n\} \to \mathbb{R}$$ but is there a better notation that is more concise and that has the unorderedness?
The set $\{1, \ldots ,N \}$ is often written as $[N]$, so this could be $f: \operatorname{Sym}^2([N]) \to \mathbb{R} $. Here $\operatorname{Sym}$ means the symmetric product, that is, $\operatorname{Sym}^2(S)$ can be thought of as the set of unordered pairs of elements of $S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Hartshorne Exercise III.2.1(a) Show that $H^1(\mathbf{A}^1_k, \mathbf{Z}_U) \neq 0$ for $U = \mathbf{A}^1_k \setminus \{P,Q\}$, $k$ infinite field. Is it really necessary that $P \neq Q$? My proof is as follows: Take the long exact sequence of $0 \to j_!j^*\mathbf{Z} \to \mathbf{Z} \to i^*i_*\mathbf{Z} \to 0$ and get ($H^1(\mathbf{A}^1_k, \mathbf{Z}) = 0$ since $k$ is infinite, so the space is irreducible) $0 \to \mathbf{Z} \to \mathbf{Z} \to \mathbf{Z}^{|X \setminus U|} \to H^1(\mathbf{A}^1_k, \mathbf{Z}_U) \to 0$. Now tensor with $\mathbf{Q}$ and count the ranks: $1 - 1 + |X \setminus U| - \mathrm{rk}\, H^1(\mathbf{A}^1_k, \mathbf{Z}_U) = 0$, so $H^1(\mathbf{A}^1_k, \mathbf{Z}_U) \neq 0$ for $|X \setminus U| > 0$. Am I missing something here?
Yes, it is really necessary that $P\neq Q$. Indeed, if $V=\mathbb A^1_k\setminus \lbrace P\rbrace$ , then $H^1(\mathbb A^1_k,\mathbb Z_V) =0$ Proof Use the long exact sequence associated to $$0\to \mathbb Z_V \to \mathbb Z \to j_\ast (\mathbb Z|\lbrace P\rbrace) \to 0 \quad ( \ast)$$ and get: $$\quad 0\to \text {don't care}\to \Gamma (\mathbb A^1_k,\mathbb Z)=\mathbb Z \stackrel {=}{\to}\Gamma (\mathbb A^1_k,j_\ast (\mathbb Z|\lbrace P\rbrace) =\mathbb Z\to H^1(\mathbb A^1_k,\mathbb Z_V)\to H^1(\mathbb A^1_k,\mathbb Z)=0 \to \cdots $$ The notable points are that $H^1(\mathbb A^1_k,\mathbb Z)=0$ because constant sheaves are flasque on irreducible spaces and above all that the morphism $\Gamma (\mathbb A^1_k,\mathbb Z)=\mathbb Z \to \Gamma (\mathbb A^1_k,j_\ast (\mathbb Z|\lbrace P\rbrace) =\mathbb Z$ is equality, which results from the definition of the maps in $(\ast)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Injective functions also surjective? Is it true that for each set $M$ a given injective function $f: M \rightarrow M$ is surjective, too? Can someone explain why it is true or not and give an example?
For finite sets, consider the two point set $\{a,b\}$ . If you have an injective function, $f(a)\neq f(b)$, so one has to be $a$ and one has to be $b$, so the function is surjective. The same idea works for sets of any finite size. If the size is $n$ and it is injective, then $n$ distinct elements are in the range, which is all of $M$, so it is surjective.
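The pigeonhole argument can be checked by brute force for a small set: among all $n^n$ self-maps of an $n$-element set, the injective ones and the surjective ones coincide (they are exactly the $n!$ permutations). A sketch:

```python
from itertools import product

n = 4
domain = range(n)
funcs = list(product(domain, repeat=n))              # f[i] is the image of i
injective = [f for f in funcs if len(set(f)) == n]
surjective = [f for f in funcs if set(f) == set(domain)]
print(len(funcs), len(injective), len(surjective))
```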
{ "language": "en", "url": "https://math.stackexchange.com/questions/74682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Is there a generalized method of rotation for curves? I know that we can rotate a curve in $R^2$ about a linear axis, as is common for first year calculus problems involving solids of revolution. But has anyone come up with a general method to take a real valued function in $R^2$ and rotate it about another function in $R^2$ that is not necessarily linear? I assume the generalized definition would take a point off the curve to the point on the other side of the curve the same distance off the curve along a line perpendicular to the curve at some point. The alternative definition I thought of was given in an answer below, but that is not what I'm looking for. I want to be able to rotate a curve about another curve geometrically with or without an established coordinate plane, which is why I assumed the definition above. Trying to make this more precise: My definition takes a curve $C$ and finds the slope of the normal line at point $(x_o,y_o)$. Supposing the slope found is $m$, the normal line is $y=m(x-x_o)+y_o$. Find the point(s) $(x_o,y_o)$ for which this line intersects the point/curve to be rotated. Find the distance along this line between $C$ and the point to be rotated. Then, traversing the line in the opposite direction from $C$, find the point that same distance away from $C$. This yields the rotated point. Using the definition I give, with curve $C$ as an axis: 1. Is the relation between a point in $R^2$ and its image after rotation a function? 2. Does this depend on whether $C$ represents a real valued function? For example, rotation about a circle is not a function while rotation about a parabola is? 3. Does this yield a well-defined surface of revolution? 4. Could such a rotation yield interesting results, e.g., transforming a smiley face into a sad face, or turning one kind of conic into another? 5. Supposing this definition cannot yield a well-defined surface of revolution, as some have suggested, what definition could? 6. Are there helpful links or articles that address any of these issues? A counterexample to the conjecture of (1): Take $C$ to be the unit circle centered at the origin and rotate $(0,2)$ about $C$.
Depending on the purpose you have in mind, rotation of the function $f$ about the function $g$ may be equivalent to the rotation of $f-g$ about the $x$-axis. In particular, this works for finding the volume of the solid defined by the rotation of $f$ about $g$.
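For the volume computation specifically, this equivalence is easy to check numerically. Below is a minimal sketch of mine (not from the original answer) that applies the disk method to $f-g$: under this identification, the volume of the solid obtained by rotating $f$ about $g$ over $[a,b]$ is $\pi\int_a^b (f(x)-g(x))^2\,dx$.

```python
import math

def volume_of_revolution(f, g, a, b, steps=100_000):
    """Disk method applied to f - g: V = pi * integral of (f-g)^2 over [a, b].

    Midpoint-rule quadrature; f is rotated about the curve g."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += (f(x) - g(x)) ** 2 * h
    return math.pi * total

# Rotating f(x) = x + 1 about g(x) = x on [0, 1]: the offset f - g is
# constant (= 1), so the solid is a tube of radius 1 around the axis,
# with volume pi per unit of the x-interval.
v = volume_of_revolution(lambda x: x + 1, lambda x: x, 0.0, 1.0)
```

Note the hedge in the answer itself ("depending on the purpose"): measuring the radius vertically as $f-g$ is only appropriate for some purposes, since rotation about a non-linear $g$ does not produce circular cross-sections perpendicular to $g$ in general.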
{ "language": "en", "url": "https://math.stackexchange.com/questions/74769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Induction proof concerning a sum of binomial coefficients: $\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$ I'm looking for a proof of this identity but where j=m not j=0 http://www.proofwiki.org/wiki/Sum_of_Binomial_Coefficients_over_Upper_Index $$\sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$$
It seems to me that you are looking for the proof of the identity: $$ \sum_{j=m}^n\binom{j}{m}=\binom{n+1}{m+1}$$ This is actually known as the Hockey-Stick Identity. You can find different methods of proving this on this page. Please note that the hockey-stick identity has many applications; of all the combinatorial identities you may encounter, this is one of the most worth remembering. It makes a lot of apparently hard problems very easy.
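If it helps to build confidence before writing the induction, the identity is easy to check by brute force (a quick sketch of mine, using Python's `math.comb`). For the induction step itself, the key fact is Pascal's rule, $\binom{n+1}{m}+\binom{n+1}{m+1}=\binom{n+2}{m+1}$.

```python
from math import comb

def hockey_stick_holds(m, n):
    """Check sum_{j=m}^{n} C(j, m) == C(n+1, m+1) directly."""
    return sum(comb(j, m) for j in range(m, n + 1)) == comb(n + 1, m + 1)

# Verify the identity over a grid of (m, n) pairs with n >= m.
ok = all(hockey_stick_holds(m, n) for m in range(0, 10) for n in range(m, 20))
```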
{ "language": "en", "url": "https://math.stackexchange.com/questions/74844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Extending to a holomorphic function Let $Z\subseteq \mathbb{C}\setminus \overline{\mathbb{D}}$ be countable and discrete (here $\mathbb{D}$ stands for the unit disc). Consider a function $f\colon \mathbb{D}\cup Z\to \mathbb{C}$ such that 1) $f\upharpoonright \overline{\mathbb{D}}$ is continuous 2) $f\upharpoonright \mathbb{D}$ is holomorphic 3) if $|z_0|=1$ and $z_n\to z_0$, $z_n\in Z$ then $(f(z_n)-f(z_0))/(z_n-z_0)\to f^\prime(z_0)$ Can $f$ be extended to a holomorphic function on some domain containing $Z$?
As has already been mentioned, the answer is no. The identity theorem tells you that there is at most one way to extend $f$ analytically beyond the unit disk, which shows you that the answer must be negative (as per Henning's comment above). Another fact to observe in this context is that there are functions that are analytic on the unit disk and extend smoothly to the boundary, but do not extend analytically beyond the boundary anywhere. (Take a Riemann map to a Jordan domain whose boundary is smooth but not analytic anywhere.) However, even without any assumptions on derivatives, a continuous function as in your statement can be uniformly approximated arbitrarily closely by entire functions; this follows from Arakelyan's approximation theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How do you parameterize a sphere so that there are "6 faces"? I'm trying to parameterize a sphere so it has 6 faces of equal area, like this: But this is the closest I can get (simply jumping $\frac{\pi}{2}$ in $\phi$ azimuth angle for each "slice"). I can't seem to get the $\theta$ elevation parameter correct. Help!
Following on FelixCQ's approach, take the cube to have corners $\pm 1$; we want to project onto the unit sphere. One edge is $(-1+2t,1,1)$ for $t\in[0,1]$. Projecting that onto the unit sphere gives $\frac{1}{\sqrt{3-4t+4t^2}}(-1+2t,1,1)$, so $\theta=\arccos\left(\frac{1}{\sqrt{3-4t+4t^2}}\right)$, $\phi=\arctan(-1+2t)$. The other sides can be treated similarly, but you have to worry about the quadrants for $\phi$.
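A quick numeric sketch of this projection (my own illustration): radially normalize points of the cube onto the unit sphere and read the angles off with `atan2`, which handles the quadrant issue mentioned above.

```python
import math

def project_to_sphere(x, y, z):
    """Radially project a point of the cube [-1,1]^3 onto the unit sphere
    and return (theta, phi), with theta measured from the +z axis."""
    r = math.sqrt(x * x + y * y + z * z)
    X, Y, Z = x / r, y / r, z / r
    theta = math.acos(Z)          # polar angle
    phi = math.atan2(Y, X)        # azimuth, quadrant-safe
    return theta, phi

# Points on the edge (-1 + 2t, 1, 1) project to one arc bounding the
# image of a cube face; every image point lies on the unit sphere.
samples = [project_to_sphere(-1 + 2 * t / 10, 1.0, 1.0) for t in range(11)]
```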
{ "language": "en", "url": "https://math.stackexchange.com/questions/74941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
How do we check Randomness? Let's imagine a guy who claims to possess a machine that can each time produce a completely random series of 0/1 digits (e.g. $1,0,0,1,1,0,1,1,1,...$). And each time after he generates one, you can keep asking him for the $n$-th digit and he will tell you accordingly. Then how do you check if his series is really completely random? If we only check whether the $n$-th digit is evenly distributed, then he can cheat using: $0,0,0,0,...$ $1,1,1,1,...$ $0,0,0,0,...$ $1,1,1,1,...$ $...$ If we check whether any given sequence is distributed evenly, then he can cheat using: $(0,)(1,)(0,0,)(0,1,)(1,0,)(1,1,)(0,0,0,)(0,0,1,)...$ $(1,)(0,)(1,1,)(1,0,)(0,1,)(0,0,)(1,1,1,)(1,1,0,)...$ $...$ I may give other possible checking processes but as far as I can list, each of them has flaws that can be cheated with a prepared regular series. How do we check if a series is really random? Or is randomness a philosophical concept that can not be easily defined in Mathematics?
Leaving aside the theoretical aspect of your question, there are also pragmatic answers to it, because there are real world uses for high-quality random generators (whether hardware or algorithmic). For statistical uses, the notion of "statistical randomness" is used. For example, you can use the "diehard tests" or TestU01.
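As a tiny taste of what such test suites do, here is a sketch of mine (far weaker than Diehard or TestU01) of the classic monobit frequency test: under the null hypothesis of fair independent bits, the standardized count $(S - n/2)/\sqrt{n/4}$ is approximately standard normal, so a large $|z|$ is evidence against randomness.

```python
import math

def monobit_z(bits):
    """Standardized count of ones; |z| > ~3 is strong evidence of bias."""
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / math.sqrt(n / 4)

# A "cheating" sequence of balanced blocks, like those in the question,
# passes this test perfectly (z == 0) -- which is exactly why real suites
# run many different tests, not just one.
cheat = ([0] * 50 + [1] * 50) * 10
z_cheat = monobit_z(cheat)

# The constant sequence fails it spectacularly.
all_ones = [1] * 1000
z_ones = monobit_z(all_ones)
```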
{ "language": "en", "url": "https://math.stackexchange.com/questions/75005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
How many ways can 8 people be seated in a row? I am stuck with the following question: How many ways can 8 people be seated in a row if there are 4 men and 4 women and no 2 men or women may sit next to each other? I did it as follows. As 4 men and 4 women must sit next to each other, we consider each of them as a single unit. Now we have 4 people (1 men group, 1 women group, 2 men or women); they can be seated in 4! ways. Now each of the groups of men and women can swap places within themselves, so we should multiply the answer by 4!*4!. This makes the total 4!*4!*4! = 13824. Please help me out with the answer. Are the steps clear, and are the answer and the method right? Thanks
Look, we have 8 people, 4 of them men and 4 of them women, and we need to put every man and woman together; it means we will have 4 groups of men and women sitting together: [Group1, Group2, Group3, Group4]. To form Group1 we have 4 possibilities for the man and 4 possibilities for the woman; then for Group2 we are left with 3 men and 3 women, and so on in the same manner, which gives 4*4*3*3*2*2*1*1 = 576 (= 4!*4!). So finally we are left only with the arrangements of the groups in the array [Group1, Group2, Group3, Group4], and there are 4! such arrangements. So I guess this is the answer: 4!*4!*4!
{ "language": "en", "url": "https://math.stackexchange.com/questions/75071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
how to solve this simple equation I'm trying to solve a bigger problem however I am stuck at this step: How can I solve: $$ 2^x - x = 5 $$ any hints/tips/steps please?
As alluded to in the comments, there is an integer solution. For the other solution, is the existence of a solution good enough? You can use the Intermediate Value Theorem on the function $f(x)=2^x-x-5$: it is negative at $x=0$ and positive at $x=-6$, so somewhere in between the IVT says there must be a $0$. Or you can use Newton's method on $f$ to approximate the $0$ of $f$.
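For the non-integer root, a numerical sketch of mine, using bisection in the spirit of the IVT argument above: $f(-6)>0$ and $f(0)<0$, so halve that interval repeatedly. (The integer solution is found by inspection: $2^3-3=5$, so $x=3$.)

```python
def f(x):
    return 2 ** x - x - 5

# f changes sign on [-6, 0]; the IVT guarantees a root there.
lo, hi = -6.0, 0.0
for _ in range(100):                 # each pass halves the bracket
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:          # sign change in [lo, mid]
        hi = mid
    else:                            # sign change in [mid, hi]
        lo = mid
root = (lo + hi) / 2                 # approximately -4.968
```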
{ "language": "en", "url": "https://math.stackexchange.com/questions/75099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$? How can one prove the statement $$\lim_{x\to 0}\frac{\sin x}x=1$$ without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution. This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.
Let $f:\{y\in\mathbb{R}:y\neq 0\}\to\mathbb{R}$ be a function defined by $f(x):=\dfrac{\sin x}{x}$ for all $x\in \{y\in\mathbb{R}:y\neq 0\}$. We have $\displaystyle\lim_{x \to 0}\dfrac{\sin x}{x}=1$ if and only if for every $\varepsilon>0$, there exists a $\delta>0$ such that $|f(x)-1|<\varepsilon$ whenever $0<|x-0|<\delta$. Let $\varepsilon>0$ be an arbitrary real number. Note that $\sin x=\displaystyle \sum_{n=0}^{\infty}(-1)^n\dfrac{x^{2n+1}}{(2n+1)!}$. If $x \neq 0$, we have $\dfrac{\sin x}{x}=$$\displaystyle \sum_{n=0}^{\infty}(-1)^n\dfrac{x^{2n}}{(2n+1)!}=1+$$\displaystyle \sum_{n=1}^{\infty}(-1)^n\dfrac{x^{2n}}{(2n+1)!}$. We thus have $|f(x)-1|=\left|\dfrac{\sin x}{x}-1\right|=\left|\displaystyle \sum_{n=1}^{\infty}(-1)^n\dfrac{x^{2n}}{(2n+1)!}\right|\leq \left|\displaystyle\sum_{n=1}^{\infty} \dfrac{x^{2n}}{(2n+1)!}\right|\leq \displaystyle\sum_{n=1}^{\infty} \left|\dfrac{x^{2n}}{(2n+1)!}\right|$ Therefore we have $|f(x)-1|\leq \displaystyle\sum_{n=1}^{\infty} \left|\dfrac{x^{2n}}{(2n+1)!}\right|\leq \displaystyle \sum_{n=1}^{\infty} |x^{2n}|=\sum_{n=1}^{\infty}|x^2|^n$ If $0<|x|<1$, then $0<|x^2|<1$, and the infinite series $\displaystyle\sum_{n=1}^{\infty}|x^2|^n$ converges to $\dfrac{x^2}{1-x^2}$. Choose $\delta:=\sqrt{\dfrac{\varepsilon}{1+\varepsilon}}$. Then $0<|x-0|<\delta$ implies that $0<|x|<$$\sqrt{\dfrac{\varepsilon}{1+\varepsilon}}<1$, and hence $x^2<\varepsilon-\varepsilon x^2$. But $x^2<\varepsilon-\varepsilon x^2$ implies that $\dfrac{x^2}{1-x^2}<\varepsilon$. We therefore have $\sum_{n=1}^{\infty}|x^2|^n<\varepsilon$ whenever $0<|x-0|<\delta$. But since $|f(x)-1|\leq\displaystyle\sum_{n=1}^{\infty}|x^2|^n$, we have $|f(x)-1|<\varepsilon$ whenever $0<|x-0|<\delta$. Since $\varepsilon$ was arbitrary, we have $\displaystyle\lim_{x \to 0}\dfrac{\sin x}{x}=1$.
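The key inequality in this argument, $\left|\frac{\sin x}{x}-1\right|\le\frac{x^2}{1-x^2}$ for $0<|x|<1$, can be sanity-checked numerically (a sketch of mine, not part of the proof):

```python
import math

def bound_holds(x):
    """Check |sin(x)/x - 1| <= x^2 / (1 - x^2) for 0 < |x| < 1."""
    return abs(math.sin(x) / x - 1) <= x * x / (1 - x * x)

# Sample the open interval (0, 1) on a fine grid.
ok = all(bound_holds(k / 1000) for k in range(1, 1000))
```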
{ "language": "en", "url": "https://math.stackexchange.com/questions/75130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "536", "answer_count": 28, "answer_id": 19 }
mean and std deviation of a population equal? Hypothetically, if we have a population of size $n$ whose mean and std deviation are equal, I think with some work we have a constraint that the ratio (sum of squared points)/(sum of points)$^2$ equals $\frac{2n-1}{n^2}$, which gets small quickly as $n$ gets large. Are there heuristic considerations that might render such a population plausible as an extension of, say, the binomial distribution (as with the Poisson distribution, although for that distribution the mean is equal to the variance)? Does this property (mean = Sqrt[variance]) suggest anything about the population generally, if that question is not too vague? I have not encountered a population with this property in any texts, but am fairly sure it has been considered...?
One situation in which the mean is equal to the standard deviation is with the exponential distribution whose probability density is $$ f(x) = \begin{cases} \frac1\theta e^{-x/\theta} & \text{if }x>0, \\ 0 & \text{if } x < 0. \end{cases} $$ The mean and the standard deviation are both equal to $\theta$. These are the only distributions that are "memoryless" in the sense that if (capital) $X$ is a random variable that is so distributed, then $$ \Pr(X > x + y \mid X > y) = \Pr(X > x) $$ for all positive numbers $x$ and $y$. How you got $\dfrac{2n-1}{n^2}$ escapes me.
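One can sanity-check the claim numerically (a sketch of mine): integrate against the exponential density to see that both the mean and the standard deviation come out equal to $\theta$.

```python
import math

def exp_moments(theta, upper=60.0, steps=200_000):
    """Midpoint-rule estimates of the mean and standard deviation of the
    exponential density (1/theta) * exp(-x/theta) on x > 0, truncated at
    `upper` (the tail mass beyond it is negligible here)."""
    h = upper / steps
    m1 = m2 = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        w = math.exp(-x / theta) / theta * h   # density * dx
        m1 += x * w
        m2 += x * x * w
    return m1, math.sqrt(m2 - m1 * m1)

mean, std = exp_moments(2.0)   # exact values: mean = 2, std = 2
```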
{ "language": "en", "url": "https://math.stackexchange.com/questions/75244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How many everywhere defined functions are not $1$ to $1$? I am stuck with the following question: How many everywhere defined functions from $S$ to $T$ are not one to one, where $S=\{a,b,c,d,e\}$ and $T=\{1,2,3,4,5,6\}$? Now the teacher showed that there could be $6^5$ ways to make an everywhere defined function and $6!$ ways for it to be $1$ to $1$, but when I drew them on paper I could draw no more than 30, namely $(a,1),(a,2),\dots,(a,6)$; $(b,1),(b,2),\dots,(b,6)$; $(c,1),(c,2),\dots,(c,6)$; $(d,1),(d,2),\dots,(d,6)$; $(e,1),(e,2),\dots,(e,6)$. Can anyone please help me out with how there are $6^5$ functions? Thanks
What you have in each line is not a function. For example, look at $(a,1)$, which you've counted as a "function," and call it $f$, so that $f(a)=1$. What then are $f(b),f(c),f(d),f(e)$? You can't say, because simply fixing the correspondence $a\leftrightarrow 1$ does not give you a whole function; it only gives a small piece of one. Here is how to count the total number of one-to-one (injective) functions $\{a,b,c,d,e\}\to\{1,2,3,4,5,6\}$: * *There are six possible outputs that $a$ could go to. Choose one arbitrarily. *There are then five possible outputs for $b$ - it can't be the same as $a$ - and that number five is independent of which output was chosen for $a$. Choose the output for $b$ then from this pool. *Now there are four possible outputs for $c$, by the same token. Choose again. *$\cdots\cdots\cdots$ *After you have chosen outputs for $a,b,c,d$, two of the six numbers remain unused, so there are two choices for the output of $e$ in this final stage. Since in each stage the number of choices is independent of which choices were made in the other stages, we can multiply the numbers together to count the total number of one-to-one functions: $6\cdot5\cdot4\cdot3\cdot2=6!$. It's easier simply counting the number of functions $S\to T$; there are five stages (one for each input) and there are six possible outputs for each of $a,b,c,d,e$, so $6^5$ functions in total.
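The count is small enough to verify exhaustively (a sketch of mine): enumerate all $6^5$ functions $S\to T$ as tuples of outputs and count the ones that are not one-to-one.

```python
from itertools import product

outputs = range(1, 7)                               # T = {1, ..., 6}
all_functions = list(product(outputs, repeat=5))    # one output per input a..e

# A function is one-to-one iff its 5 outputs are pairwise distinct.
injective = [f for f in all_functions if len(set(f)) == 5]
not_one_to_one = len(all_functions) - len(injective)
# 6**5 == 7776 functions, 720 == 6! of them injective, so 7056 are not.
```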
{ "language": "en", "url": "https://math.stackexchange.com/questions/75301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
The Pigeon Hole Principle and the Finite Subgroup Test I am currently reading this document and am stuck on Theorem 3.3 on page 11: Let $H$ be a nonempty finite subset of a group $G$. Then $H$ is a subgroup of $G$ if $H$ is closed under the operation of $G$. I have the following questions: 1. It suffices to show that $H$ contains inverses. I don't understand why that alone is sufficient. 2. Choose any $a$ in $G$...then consider the sequence $a,a^2,..$ This sequence is contained in $H$ by the closure property. I know that if $G$ is a group, then $ab$ is in $G$ for all $a$ and $b$ in $G$.But, I don't understand why the sequence has to be contained in $H$ by the closure property. 3. By the Pigeonhole Principle, since $H$ is finite, there are distinct $i,j$ such that $a^i=a^j$. I understand the Pigeonhole Principle (as explained on page 2) and why $H$ is finite, but I don't understand how the Pigeonhole Principle was applied to arrive at $a^i=a^j$. 4. Reading the proof, it appears to me that $H$ = $\left \langle a \right \rangle$ where $a\in G$. Is this true?
I am guessing that by "closed under the operation of $G$" they mean closed under the multiplication on $G$. Then $H$ is a subgroup if it also contains inverses, because closure together with inverses also gives the identity ($aa^{-1}=e\in H$). Since $H$ is finite, so is $\{a^k:k\in\mathbb{Z}^+\}$, since it is contained in $H$; but there are infinitely many positive exponents $k$ and only finitely many possible values, so by the Pigeonhole Principle two exponents must give the same element: for some $i>j$, we must have $a^i=a^j$. Thus, $a^{i-j}$ must be the identity and $a^{i-j-1}=a^{-1}$. Finally, $H$ is not necessarily cyclic. Here we have shown that $\{a^k:k\in\mathbb{Z}^+\}$ is a cyclic subgroup, but there is no guarantee that this is all of $H$. We may need to perform the above process on any $b\in H\setminus\{a^k:k\in\mathbb{Z}^+\}$, etc.
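The argument is constructive, and a small sketch of mine shows it in action inside the multiplicative group $(\mathbb{Z}/7\mathbb{Z})^\times$: keep taking powers of $a$ until a power equals the identity, which hands you the inverse.

```python
def inverse_by_powers(a, p):
    """In the group of units mod a prime p, find a^(-1) by cycling powers:
    since the group is finite, a^k == 1 for some k >= 1, and then
    a^(k-1) is the inverse of a."""
    power, k = a % p, 1
    while power != 1:            # must terminate: finitely many residues
        power = (power * a) % p
        k += 1
    return pow(a, k - 1, p)

# Powers of 3 mod 7 cycle 3, 2, 6, 4, 5, 1, so 3^5 = 5 is the inverse:
# 3 * 5 = 15 == 1 (mod 7).
inv = inverse_by_powers(3, 7)
```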
{ "language": "en", "url": "https://math.stackexchange.com/questions/75371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Prove that $\lim \limits_{n\to\infty}\frac{n}{n^2+1} = 0$ from the definition This is a homework question: Prove, using the definition of a limit, that $$\lim_{n\to\infty}\frac{n}{n^2+1} = 0.$$ Now this is what I have so far but I'm not sure if it is correct: Let $\epsilon$ be any number, so we need to find an $M$ such that: $$\left|\frac{n}{n^2 + 1}\right| < \epsilon \text{ whenever }x \gt M.$$ $$ n \lt \epsilon(n^2 + 1) $$ $$n \lt \epsilon n^2 + \epsilon$$ Now what? I am completely clueless on how to do this!
You could make use of $\frac{n}{n^2+1} = \frac{1}{n+\frac{1}{n}}$ for any $n$ in $\mathbb{R}^\star$. For any $\varepsilon>0$ and $n>0$ we have $\frac{1}{n+\frac{1}{n}}<\varepsilon \Leftrightarrow n + \frac{1}{n} > \frac{1}{\varepsilon}$. Since $n + \frac{1}{n} \geq n$, it suffices to take $n > \frac{1}{\varepsilon}$; that is, $M = \frac{1}{\varepsilon}$ works.
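To see the resulting $M$ in action (a sketch of mine): with $M=1/\varepsilon$, every term beyond $M$ is below $\varepsilon$.

```python
def term(n):
    return n / (n * n + 1)

def works(eps, how_many=1000):
    """Check that n > 1/eps forces n/(n^2+1) < eps on a range of n."""
    start = int(1 / eps) + 1          # first integer strictly beyond M
    return all(term(n) < eps for n in range(start, start + how_many))

ok = all(works(eps) for eps in (0.5, 0.1, 0.01, 0.001))
```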
{ "language": "en", "url": "https://math.stackexchange.com/questions/75429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
Number of point subsets that can be covered by a disk Given $n$ distinct points in the (real) plane, how many distinct non-empty subsets of these points can be covered by some (closed) disk? I conjecture that if no three points are collinear and no four points are concyclic then there are $\frac{n}{6}(n^2+5)$ distinct non-empty subsets that can be covered by a disk. (I have the outline of an argument, but it needs more work. See my answer below.) Is this conjecture correct? Is there a good BOOK proof? This question is rather simpler than the related unit disk question. The answer to this question provides an upper bound to the unit disk question (for $k=1$).
Here’s a brief outline of the approach I originally took. The $n\choose 2$ lines that are equidistant from a pair of points partition the plane into $$r_n\;=\;{\!\!{n\choose 2}+1\choose 2}+1-{n\choose 3}$$ regions. For example, five points gives 46 regions (figure omitted: the ${5\choose 2}=10$ equidistant lines for five points, cutting the plane into the $46$ regions). Each region $\mathcal{R}$ defines a permutation $P_\mathcal{R}$ of the points, ordered by their distance from any point in $\mathcal{R}$. Disks centred on a point in $\mathcal{R}$ cover (the points in) some prefix of $P_\mathcal{R}$. If $\mathcal{R}$ and $\mathcal{S}$ are two adjacent regions, then $P_\mathcal{R}$ and $P_\mathcal{S}$ differ by the reversal of a single pair of adjacent points (e.g. $\ a\dots pq\dots z\ $ and $\ a\dots qp\dots z\ $). Disks centred on points in $\mathcal{S}$ thus only cover a single subset ($\;a\dots q\;$) of points that is not covered by disks centred on points in $\mathcal{R}$. This gives us an upper bound of $n+r_n-1$ on the number of coverable subsets: an initial region gives us $n$ subsets; the other $r_n-1$ regions each add at most one further subset. However, if we consider the permutations corresponding to the regions (in a ‘circuit’) around a point at which (two or three) lines intersect, we see that there is actually one fewer distinct prefix (= coverable subset) for each of the $$i_n\;=\;{\!{n\choose 2}\choose 2}-2{n\choose 3}$$ intersection points. [It is this claim that remains to be proved rigorously.] Thus we have a total of $n+r_n-1-i_n$ (which is $\frac{n}{6} (n^2 +5)$) coverable subsets.
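The final simplification can be checked by hand ($\binom{m+1}{2}-\binom{m}{2}=m$ with $m=\binom{n}{2}$), or numerically (a sketch of mine) using the formulas for $r_n$ and $i_n$ given above:

```python
from math import comb

def coverable(n):
    """n + r_n - 1 - i_n, with r_n and i_n as defined in the outline."""
    m = comb(n, 2)
    r = comb(m + 1, 2) + 1 - comb(n, 3)
    i = comb(m, 2) - 2 * comb(n, 3)
    return n + r - 1 - i

# n * (n^2 + 5) is always divisible by 6, so integer division is exact.
ok = all(coverable(n) == n * (n * n + 5) // 6 for n in range(1, 50))
```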
{ "language": "en", "url": "https://math.stackexchange.com/questions/75487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Sums of truth-table values mod 2 range over all truth tables Let $A=\lbrace0,1\rbrace$. There are 16 distinct functions $f_i:A^2\to A$. Choose a permutation $P=\left(a_1,\ldots,a_4\right)$ of the elements of $A^2$, and for each $i$ consider the ordered quadruple $\left(\sum_{j=1}^nf_i(a_j)\pmod2\right)_{n=1}^4\in A^4$. Clearly this quadruple is $f_k(P)$ for some $k$. I claim that as $i$ ranges over $\left(1,\ldots,16\right)$ we obtain all sixteen $f_k$ this way. (My proof is by inspecting a single choice of $P$ — which I did manually — and handwavingly claiming that the choice of $P$ doesn't matter because everything's symmetric.) Question: Why is this true? (Something (a proof not by inspection or an explanation) that generalizes would be most welcome.)
Suppose that $f_i$ and $f_k$ result in the same function. Then $$\begin{align*} f_i(a_1)&\equiv f_k(a_1)\pmod2,\\ f_i(a_1)+f_i(a_2)&\equiv f_k(a_1)+f_k(a_2)\pmod2,\\ f_i(a_1)+f_i(a_2)+f_i(a_3)&\equiv f_k(a_1)+f_k(a_2)+f_k(a_3)\pmod2,\text{ and}\\ f_i(a_1)+f_i(a_2)+f_i(a_3)+f_i(a_4)&\equiv f_k(a_1)+f_k(a_2)+f_k(a_3)+f_i(a_4)\pmod2,\\ \end{align*}$$ and an easy reduction from the top down shows that $f_i(a_j)\equiv f_k(a_j)\pmod2$ for $j=1,2,3,4$, i.e., that $f_i=f_k$. The map is therefore injective, and since the set of functions is finite, it must be a bijection. Thus, you do get all $16$ functions.
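The claim is also easy to confirm exhaustively (a sketch of mine): represent each of the 16 functions by its value tuple on a fixed ordering of $A^2$, take prefix sums mod 2, and check that the results are pairwise distinct.

```python
from itertools import product

# All 16 functions A^2 -> A, each encoded by its values on a fixed
# ordering (a_1, a_2, a_3, a_4) of A^2.
functions = list(product((0, 1), repeat=4))

def prefix_sums_mod2(values):
    total, out = 0, []
    for v in values:
        total = (total + v) % 2
        out.append(total)
    return tuple(out)

images = {prefix_sums_mod2(f) for f in functions}
# An injective map from a 16-element set into itself is a bijection,
# so the 16 images exhaust all 16 truth tables.
```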
{ "language": "en", "url": "https://math.stackexchange.com/questions/75561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Filter to obtain MMSE of data from Gaussian vector Data sampled at two time instances giving bivariate Gaussian vector $X=(X_1,X_2)^T$ with $f(x_1,x_2)=\exp(-(x_1^2+1.8x_1x_2+x_2^2)/0.38)/2\pi \sqrt{0.19}$ Data measured in noisy environment with vector: $(Y_1,Y_2)^T=(X_1,X_2)^T+(W_1,W_2)^T$ where $W_1,W_2$ are both $i.i.d.$ with $\sim N (0,0.2)$. I have found correlation coefficient of $X_1,X_2$, $\rho=-0.9$ and $X_1,X_2 \sim N(0,1)$ Question: How to design filter to obtain MMSE estimator of $X_1$ from $Y$ vector and calculate MSE of this estimator?
This looks like homework, but here goes. Since everything is Gaussian, the MMSE estimator for $X_1$ is the mean of the conditional pdf of $X_1$ given $(Y_1, Y_2)$, and the mean square error is the corresponding conditional variance. Do you know how to find the conditional pdf (hint: it is also Gaussian)?
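To make the hint concrete, here is a sketch of mine with the problem's numbers. In the linear/Gaussian case the conditional mean is $\hat{X}_1 = c^\top \Sigma_Y^{-1} Y$, where $c=\operatorname{Cov}(X_1,Y)=(1,-0.9)^\top$ and $\Sigma_Y=\Sigma_X+0.2I$, and the MSE is $\operatorname{Var}(X_1)-c^\top \Sigma_Y^{-1} c$.

```python
# 2x2 case worked by hand; no linear-algebra library needed.
rho, var_x, var_w = -0.9, 1.0, 0.2

# Sigma_Y = Sigma_X + var_w * I = [[a, b], [b, a]]
a, b = var_x + var_w, rho * var_x
det = a * a - b * b

# c = Cov(X1, (Y1, Y2)) = (Var(X1), rho * Var(X1))
c1, c2 = var_x, rho * var_x

# Filter weights w = Sigma_Y^{-1} c, via the explicit 2x2 inverse.
w1 = (a * c1 - b * c2) / det
w2 = (-b * c1 + a * c2) / det

# MSE = Var(X1) - c^T Sigma_Y^{-1} c; comes out to 13/105 ~ 0.124.
mse = var_x - (w1 * c1 + w2 * c2)
```

The orthogonality principle gives a built-in check: the estimation error $X_1-\hat{X}_1$ must be uncorrelated with $Y$, i.e. $\Sigma_Y w = c$.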
{ "language": "en", "url": "https://math.stackexchange.com/questions/75615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }