What is the most efficient method to evaluate this indefinite integral? $$\int x^5 e^x\,\mathrm{d}x$$ Is there another, more efficient way to solve this integral that is not integration by parts?
Your question asked if there is a more efficient way than integration by parts to solve the indefinite integral $\int x^5e^x dx$, and other users have provided good answers. But in case you tacitly assumed that the answer to your question would be pretty much the same for definite integrals like $\int_a^b x^n e^{kx} dx$, I'd like to show for contrast that in this case there can be much more efficient methods than IBP for finding the integral. For example, consider the integral $\int_0^\infty x^5 e^{-x}dx$. To find this integral, first evaluate $\int_0^\infty e^{-ax}dx$: $$\int_0^\infty e^{-ax}dx=-\frac{e^{-ax}}{a}\bigg{|}_0^\infty=\frac{1}{a}.$$ Then just differentiate both sides with respect to $a$ five times to get: $$-\int_0^\infty x^5 e^{-ax}dx=-\frac{120}{a^6}.$$ Finally, set $a=1$ and voila, the integral is $\int_0^\infty x^5 e^{-x}dx=120$. And all you really had to do was a single integration plus five differentiations.
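For readers who want to check the trick by machine, here is a minimal sketch (assuming Python with sympy is available; the variable names are mine, not the answerer's):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# Differentiate 1/a five times, as in the answer: d^5/da^5 (1/a) = -120/a^6
print(sp.diff(1/a, a, 5))                              # -120/a**6

# Direct check of the resulting definite integral at a = 1
print(sp.integrate(x**5 * sp.exp(-x), (x, 0, sp.oo)))  # 120
```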
{ "language": "en", "url": "https://math.stackexchange.com/questions/832348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 10, "answer_id": 3 }
Finitely generated free module with a submodule that is not finitely generated. Does anyone know an example of a finitely generated $R$-module with a submodule that is not finitely generated? I found the following example: take the set of functions $f:[0,1]\rightarrow\mathbb{R}$, seen as a module over itself. This is finitely generated. If we take the subset of functions $f$ such that $f(x)=0$ for all $x\in[0,1]$ except at a finite number of points, then we get a submodule which is not finitely generated. Why? This is a different example from the others I saw. Thanks a lot!
Let $R$ be the ring of functions $f:[0,1]\to\mathbb R$. Let $M$ be the $R$-module of functions $f$ such that $f(x)=0$ for all $x\in[0,1]$ except for a finite number of points. Assume to the contrary that $M$ is finitely generated, with generators $g_1,\ldots,g_n$. For each $a\in [0,1]$ let $\chi_a(x)= \begin{cases} 1&x=a,\\ 0&x\neq a. \end{cases} $ The functions $\chi_a,a\in[0,1]$ clearly generate $M$. Thus each $g_i$ is a linear combination of the $\chi_a$. Then there exist $a_1,\ldots,a_m\in[0,1]$ such that each $g_i$ is a linear combination of $\chi_{a_1},\ldots,\chi_{a_m}$. Consequently $\chi_{a_1},\ldots,\chi_{a_m}$ generate $M$. Let $a\in[0,1]$ be distinct from each $a_j$. Since $\chi_a$ is not a linear combination of the $\chi_{a_j}$, this leads to a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to find $f(n)$ for the diophantine equation $x_{1}x_{2}x_{3}\cdots x_{n}=x_{1}+x_{2}+\cdots +x_{n}$. Let $x_{1},x_{2},\cdots,x_{n}$ be such that $$n\ge 3,\quad x_{1}\le x_{2}\le\cdots\le x_{n},$$ $$x_{1}x_{2}x_{3}\cdots x_{n}=x_{1}+x_{2}+\cdots +x_{n}.$$ Let $f(n)$ be the number of ordered tuples of positive integers $(x_{1},x_{2},\cdots,x_{n})$ satisfying this. Can we find a closed form for $f(n)$? I guess we have $$f(n)\le n?$$ It is clear that $$1+2+3=1\cdot 2\cdot 3.$$
Observe that, calling $\sigma_i(x_1,\dots,x_n)$ the $i$-th symmetric polynomial, we have $x_1+\dots+x_n=\sigma_1(x_1,\dots,x_n)$ and $x_1x_2\cdots x_n=\sigma_n(x_1,\dots,x_n)$; then looking at the properties of symmetric polynomials, maybe you can get some information on what you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Follow up on subspectra: is the restriction a subalgebra This is a question that came up after thinking about one of my previous questions here. My question is: If we consider the algebra $A$ of continuous linear operators $u: X \to X$ where $X$ is some Banach space can the algebra $A|_C = \{u|_C: u \in A\}$ where $C$ is some closed subset of $X$ and $u|_C$ denotes the restriction of $u$ to $C$ be viewed as a subalgebra of $A$ somehow? Basically I am asking if there exists some injective algebra homomorphism $\varphi : A |_C \hookrightarrow A$. I tried to produce such a homomorphism but I don't quite see how to make it injective.
I don't think you can expect anything like that if you just require $C$ to be a subset. Not even when it is a subspace, because you need $$ (ab)|_C=a|_C\,b|_C.$$ This requires $C$ to be an invariant subspace for all operators, which would never happen. This of course doesn't preclude the possible existence of some other homomorphism, but I wouldn't know how to address that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to check whether a polygon with given side lengths exists? I have a polygon with $n$ vertices and $n$ values giving the polygon's side lengths. I have to check whether this polygon exists (i.e., could be drawn with the given side lengths). Is there an overall formula to check that? (Like, e.g., $a+b\ge c$, $a+c\ge b$, $c+b\ge a$ for a triangle.)
No, there's no overall formula. There are some weak conditions (the sum of any $n-1$ sides' lengths must be greater than the length of the remaining side, for instance), but this is merely necessary for the existence of any polygon with those side-lengths, not one that has your desired angles. Is it sufficient? I'm not certain offhand. Re-reading, perhaps when you said that you have $n$ angles you meant "I'm looking for an $n$-gon". In that case, the inequalities I cited above are necessary, but are they sufficient? I suspect that they are, although they'd only guarantee a polygon with those side-lengths...not a non-self-intersecting polygon. For that latter condition, you'd have to do some additional work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What is the difference between a 1-dimensional harmonic oscillator and a 2-dimensional harmonic oscillator? I ask myself what exactly is meant by "2-dimensional harmonic oscillator". I only know the situation of a bob hanging on a bar... is that 1-dimensional or 2-dimensional?
The difference is the number of spatial dimensions in which the oscillator is allowed to oscillate. The 1D oscillator has the potential function $$ V(x) = \tfrac{1}{2}kx^2, $$ whereas the 2D oscillator has the potential function $$ V(x,y) = \tfrac{1}{2}k (x^2 + y^2). $$ (The factor $\tfrac12$ makes the force $-\partial V/\partial x = -kx$ consistent with the equations below.) In the context of classical mechanics the differential equations for $x$ and $y$ can be obtained from $$ \frac{d^2x}{dt^2} = -\frac{1}{m}\frac{\partial V}{\partial x} = -\frac{k}{m}x, $$ $$ \frac{d^2y}{dt^2} = -\frac{1}{m}\frac{\partial V}{\partial y} = -\frac{k}{m}y. $$ The solutions are $$x(t) = A \sin\left(\sqrt{\frac{k}{m}}t + \delta \right),$$ $$y(t) = B \sin\left(\sqrt{\frac{k}{m}}t + \delta' \right).$$ For a concrete example consider a bead at the bottom of a bowl. For small deflections from the minimum of the bowl the bead will oscillate harmonically in two dimensions.
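As a quick symbolic sanity check of the solution (a sketch assuming Python with sympy; this is not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t')
k, m = sp.symbols('k m', positive=True)
x = sp.Function('x')

# Newton's second law for one coordinate: m x'' = -k x
sol = sp.dsolve(sp.Eq(m * x(t).diff(t, 2), -k * x(t)), x(t))
print(sol)  # x(t) = C1*sin(sqrt(k)*t/sqrt(m)) + C2*cos(sqrt(k)*t/sqrt(m))
```

The $y$ equation is identical, so each coordinate oscillates independently at angular frequency $\sqrt{k/m}$.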
{ "language": "en", "url": "https://math.stackexchange.com/questions/832777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given a rational, how to find the integers whose quotient it is? I haven't found an answer to this anywhere. Excluding brute force, given a rational $q$ in its decimal form ($1.47$, for example), is there a good algorithm to find integers $m$ and $n$ such that $\frac m n = q$? Thank you.
If $q$ is given in decimal form, then yes: if the expansion terminates, take $m$ to be $q$ with the decimal point removed and $n = 10^d$, where $d$ is the number of decimal digits (e.g. $1.47 = 147/100$); if the expansion is periodic, a similar rule handles the repeating block. (If you meant this, comment and I'll write it all out.)
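A minimal sketch of the terminating case in Python (assuming the input arrives as a decimal string; `Fraction` then reduces $m/n$ to lowest terms automatically):

```python
from fractions import Fraction

q = "1.47"
d = len(q.split(".")[1])        # d = 2 decimal digits
m = int(q.replace(".", ""))     # 147: q with the decimal point removed
n = 10 ** d                     # 100
print(m, n, Fraction(m, n))     # 147 100 147/100
```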
{ "language": "en", "url": "https://math.stackexchange.com/questions/832868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to solve $\frac{3x-5}{8x-2}<6$? I'm trying to solve $\frac{3x-5}{8x-2}<6$ and I'm not sure which first step to take. I mean, if I multiply both sides by $8x-2$ then I'm not sure if the sign would switch, as this could be positive or negative depending on $x$.
Hint You can't multiply by $8x-2$ without discussing its sign. The best way to answer the question is: $$\frac{3x-5}{8x-2}<6\iff\frac{3x-5}{8x-2}-6=\frac{-45x+7}{8x-2}<0$$ and now draw a sign table for this quotient. Edit The sign table (included as an image in the original answer) shows the quotient is negative exactly for $x<\frac7{45}$ or $x>\frac14$, so the answer is $$\left(-\infty,\frac7{45}\right)\cup \left(\frac14,+\infty\right)$$
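A quick machine check of the sign-table result (a sketch assuming Python with sympy):

```python
import sympy as sp

x = sp.symbols('x', real=True)
sol = sp.solve_univariate_inequality((3*x - 5) / (8*x - 2) < 6, x)
print(sol)   # ((-oo < x) & (x < 7/45)) | ((1/4 < x) & (x < oo))
```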
{ "language": "en", "url": "https://math.stackexchange.com/questions/832961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Why is it safe to approximate $2\pi r$ with regular polygons? Considering this question: Is value of $\pi = 4$? I can intuitively see that when the number of sides of a regular polygon inscribed in a circle increases, its perimeter gets closer to the perimeter of the circle. This is the way Archimedes approximated $\pi$. However, the slope of the tangent at almost every point of the circle differs from the slope of the tangent to the polygon. This was basically why $\pi \neq 4$. Actually, only finitely many slopes coincide. Then how do we make sure that the limit is $\pi$? Thanks.
The reason why we know the Archimedean approximation works, while the 'troll' (rectilinear) approximation doesn't, is that the Archimedean approach approximates not just the position of the curve but also its direction. The rectilinear example shows that some information above and beyond just the position is necessary; we know that directional information is sufficient, in essence, because we can define the arc length using the directional information (more specifically, using the derivatives of the curve): if $\mathbf{f}(t) = \langle f_x(t), f_y(t)\rangle$ is a planar curve, then the arc length of a segment from $t=a$ to $t=b$ is given by the integral $$L=\int_a^b\left|\mathbf{f}'(t)\right|dt = \int_a^b\sqrt{\left(\frac{df_x(t)}{dt}\right)^2+\left(\frac{df_y(t)}{dt}\right)^2}\ dt$$ which depends on the direction of the curve, but also clearly only depends on that information.
{ "language": "en", "url": "https://math.stackexchange.com/questions/833028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
What are non-tangential limits? I'm reading this article where they use a set of functions, $H^{\infty}$, defined like this: "Let $H^{\infty }$ be the closed subalgebra of $L^{\infty }({\mathbb R})$ that consists of all functions being non-tangential limits on ${\mathbb R}$ of bounded analytic functions on the upper half plane." The thing is, I don't know what is referred to as a non-tangential limit. Please help.
If $x_n\to x$, we say that $(x_n)$ is a non-tangential sequence in the upper half plane if $\inf_{n > 0}\text{Im}(x_n-x)/|\text{Re}(x_n-x)| > 0$ (with vertical approach, $\text{Re}(x_n-x)=0$, allowed). Then $f(x_n)$ converges non-tangentially to $y$ if $f(x_n)$ converges to $y$ and $(x_n)$ is a non-tangential sequence. For example, $x_n = (a+ib)/n$ for $b>0$ is non-tangential, but $x_n = \frac1n + i\frac1{n^2}$ is not non-tangential. http://demonstrations.wolfram.com/StolzAngle/ illustrates the concept (because non-tangential convergence is equivalent to convergence in so-called Stolz domains).
{ "language": "en", "url": "https://math.stackexchange.com/questions/833109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Covariant derivative in cylindrical coordinates I am confused by the Wolfram article on cylindrical coordinates. Specifically, I do not understand how they go from equation (48) to equations (49)-(57). Equation (48) shows that the covariant derivative is: $$A_{j;k} = \frac{1}{g_{kk}}\frac{\partial A_j}{\partial x_k} - \Gamma^i_{jk}A_i$$ The next few equations expand this for the case of cylindrical coordinates, equation (50) is: $$A_{r;\theta} = \frac{1}{r}\frac{\partial A_r}{\partial \theta} - \frac{A_\theta}{r}$$ The contravariant metric tensor has non-zero elements: $$g^{11} = 1$$ $$g^{22} = \frac{1}{r^2}$$ $$g^{33} = 1$$ And the Christoffel symbols of the second kind have non-zero elements: $$\Gamma^1_{22} = -r$$ $$\Gamma^2_{12} = \frac{1}{r}$$ $$\Gamma^2_{21} = \frac{1}{r}$$ If I plug these values back into their definition of the covariant derivative I get for equation (50): $$A_{r;\theta} = \frac{1}{r^2}\frac{\partial A_r}{\partial \theta} - \frac{A_\theta}{r}$$ Why does this not match up with their results?
The thing you forgot is the scale factor $\frac{1}{r}$ given in equation (14). See Scale Factor in Mathworld.
{ "language": "en", "url": "https://math.stackexchange.com/questions/833179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Matching gender expectation. If there are $x$ men and $y$ women and we pair them randomly (without considering gender), what are the expected numbers of man-man, man-woman, and woman-woman pairs, respectively? (Assume $x,y$ are large, so that whether the total is even or odd is negligible; I just want an approximation.) Note: I am confused. Why can we compute this just like the probability of drawing two balls out of a box with red balls and white balls inside? Many answers suggested that they are the same, but I think they are completely different: this is like drawing two balls out of the box repeatedly without putting them back, and counting the numbers of each type.
$$(x+y)^2 = x^2 + 2 x y + y^2$$ Consequently, we expect a fraction $\frac{x^2}{(x+y)^2}$ of the pairs to be male-male, $\frac{2 x y}{(x+y)^2}$ to be male-female, and $\frac{y^2}{(x+y)^2}$ to be female-female. Since there are $(x+y)/2$ pairs in total, the expected counts are approximately $\frac{x^2}{2(x+y)}$, $\frac{xy}{x+y}$, and $\frac{y^2}{2(x+y)}$ (for large $x,y$, where sampling without replacement is close to sampling with replacement).
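A quick Monte Carlo check of this approximation (a sketch; the helper name and trial count are my own choices):

```python
import random

def pair_counts(x, y, trials=4000):
    """Estimate expected numbers of MM / MW / WW pairs under random pairing."""
    totals = [0, 0, 0]
    people = ['M'] * x + ['W'] * y
    for _ in range(trials):
        random.shuffle(people)
        for i in range(0, len(people) - 1, 2):
            a, b = people[i], people[i + 1]
            if a == b == 'M':
                totals[0] += 1
            elif a == b == 'W':
                totals[2] += 1
            else:
                totals[1] += 1
    return [t / trials for t in totals]

x, y = 100, 150
n = x + y
print(pair_counts(x, y))
print([x*x/(2*n), x*y/n, y*y/(2*n)])  # approximation above: 20.0, 60.0, 45.0
```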
{ "language": "en", "url": "https://math.stackexchange.com/questions/833311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How can I prove that a square matrix is invertible if it satisfies this polynomial equation? For a 3x3 matrix $C$, it is given that $$C^3+I=3C^2-C$$ I am then required to prove that $C$ is invertible. I have attempted a proof, below, but I am not sure it is valid or if there is a better solution. Attempted proof $$C^3 + I = 3C^2 - C$$ $$I = - C^3 +3C^2-C$$ If it is assumed that $C^{-1}$ exists then $$I = C^{-1}(-C^4+3C^3-C^2)$$ If $C^{-1}$ is defined, then $I=C^{-1}C$; therefore test whether $$C \stackrel{!}{=} -C^4 + 3C^3 - C^2$$ $$ 0 = -C^4 + 3C^3 - C^2 - C$$ $$ C = 0, 1, 1\pm\sqrt{2}$$ Is this at all in the right direction?
Subtract to get $$I=-C^3+3C^2-C$$ Then factor to get $$I=C(-C^2+3C-I)$$ Now you have $I=CD$, for $D=-C^2+3C-I$. Hence $C$ is invertible, with $C^{-1}=D$ (note $C$ and $D$ commute, being polynomials in $C$).
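To see the factorization in action, here is a numeric sketch (assuming Python with numpy; the matrix is a hypothetical example I constructed as the companion matrix of $t^3-3t^2+t+1$, so that it satisfies the given identity):

```python
import numpy as np

# Companion matrix of p(t) = t^3 - 3t^2 + t + 1, hence p(C) = 0,
# which is exactly C^3 + I = 3C^2 - C
C = np.array([[0.0, 0.0, -1.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0,  3.0]])
I = np.eye(3)

print(np.allclose(C @ C @ C + I, 3 * (C @ C) - C))   # True: C satisfies the identity
D = -C @ C + 3 * C - I                               # candidate inverse from the factorization
print(np.allclose(C @ D, I), np.allclose(D @ C, I))  # True True
```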
{ "language": "en", "url": "https://math.stackexchange.com/questions/833385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 0 }
An "elementary" approach to complex exponents? Is there any way to extend the elementary definition of powers to the case of complex numbers? By "elementary" I am referring to the definition based on $$a^n=\underbrace{a\cdot a\cdots a}_{n\;\text{factors}}.$$ (Meaning I am not interested in the power series or "compound interest" definitions.) This is extended to negative numbers, fractions, and finally irrationals by letting $$a^r=\lim_{n\to\infty} a^{r_n}$$ where $r_n$ is rational and approaches $r$. For a concrete example, how would we interpret $e^i$ in terms of these ideas?
So here's a good place to start: $$e^{i\theta}$$ is interpreted as the complex number formed if you draw a circle of radius 1 in the complex plane and, starting from the point $1 + 0i$, move along the circle through an angle $\theta$ to a new complex number: $$\cos(\theta) + i\sin(\theta).$$ Now note that ANY complex number is of the form $$r e^{i\theta}$$ where $r$ is the absolute value of the complex number, i.e. its distance from the point $0$; the complex exponential simply indicates the angle. In other words, we have polar coordinates here. Then taking powers becomes quite obvious, with the power distributing over both factors: $(re^{i\theta})^n = r^n e^{in\theta}$. If you 'factor' your complex numbers into products of this format, then it becomes intuitive what is physically occurring. Hope that helps :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/833441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Where's my error in finding all the solutions of a linear congruence? I'm supposed to find all solutions of the linear congruence $9x \equiv 5 \pmod{25}$. I know there are other posts on the site about this, but I don't really follow. Here's what I did: I used the Euclidean algorithm to find the gcd, which was 1, and then, working back to find the equation, I ended up with $1=(4)25 - (11)9$. Then I multiplied by 5 on both sides to get it in the form of the original and got: $5=(20)25 - (55)9$. Then $55(9) - 25(20) = 5$. So I had $x \equiv 55 \pmod{25}$ or $x \equiv 5 \pmod{25}$. But the book had $x \equiv 20 \pmod{25}$. What did I do wrong? (My exact work using the Euclidean algorithm was attached as an image.) My sign seems correct though.
Hint: use the Euclidean algorithm. Find $a,b$ such that $9a+25b=1$, which can be done since $\gcd(9,25)=1$. You'll finish with $1 \equiv 9a \pmod{25} \Rightarrow x \equiv 9ax \pmod{25}$. Remember $9x \equiv 5 \pmod{25}$.
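A short sketch of the extended Euclidean computation in Python (my own helper, for illustration):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, a, b = extended_gcd(9, 25)
print(g, a, b)        # 1 -11 4, i.e. 9*(-11) + 25*4 = 1
x = (5 * a) % 25      # multiply 9a ≡ 1 by the target 5: x ≡ 5a (mod 25)
print(x)              # 20, matching the book's answer
```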
{ "language": "en", "url": "https://math.stackexchange.com/questions/833504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Find the norm of a functional. Consider the following functional on $\ell_2$: $$ x=(x_n)\mapsto \sum_{n\ge 1} \frac{x_n+x_{n+1}}{2^n}. $$ What is the norm of the functional?
It should not be very difficult to rewrite your functional to the form $$x\mapsto \sum x_n y_n$$ where $y=y_n\in\ell_2$. Then the norm of this functional is precisely $\|y\|_2$, i.e., it is the same as the $\ell_2$-norm of the sequence $y$. To see this, just notice that you have the functional of the form $$f(x)=\langle x,y \rangle,$$ where $\langle \cdot,\cdot \rangle$ is the standard inner product on $\ell_2$. From Cauchy-Schwarz inequality you have $$|f(x)| = |\langle x,y \rangle| \le \|y\|\cdot\|x\|,$$ i.e., $$\frac{|f(x)|}{\|x\|} \le \|y\|$$ which means that $\|f\|\le\|y\|$. The opposite inequality follows from $$f(y)=\langle y,y \rangle =\|y\|^2$$ or $$\frac{f(y)}{\|y\|}=\|y\|.$$
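Carrying out the suggested rewriting for this particular functional (a completion not in the original answer, so check the algebra): collecting the coefficient of each $x_m$ in $\sum_{n\ge1}\frac{x_n+x_{n+1}}{2^n}$ gives $y_1=\tfrac12$ and $y_m=\frac{1}{2^m}+\frac{1}{2^{m-1}}=\frac{3}{2^m}$ for $m\ge2$, so
$$\|y\|_2^2=\frac14+\sum_{m=2}^\infty\frac{9}{4^m}=\frac14+9\cdot\frac{1}{12}=1,$$
hence the norm of the functional is $\|y\|_2=1$.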
{ "language": "en", "url": "https://math.stackexchange.com/questions/833606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the derivative of a bump function still a bump function? My question is rather simple: are derivatives of bump functions still bump functions? For example, for a bump function $u\in D(\mathbb{R}^d)$, that is, a function $u\colon \mathbb{R}^d \to \mathbb{R}$ with $u \in C^\infty$ and $\operatorname{supp} u\subseteq K$ for some compact $K$: is the following always true for all derivatives? $$\frac{\partial u}{\partial x_j} \in D(\mathbb{R}^d)$$ I'm sure it is and couldn't imagine the contrary, but I have an unexplainable doubt.
Yes, because, obviously, $\operatorname{supp}\partial u/\partial x_j\subset\operatorname{supp}u$: at any point outside $\operatorname{supp}u$ the function $u$ vanishes on a whole neighborhood, so all its partial derivatives vanish there too (and the derivative is again $C^\infty$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/833712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An integral inequality involving vanishing boundary data. Suppose that $f$ is twice differentiable on $[0,1]$ with $f(0)=f(1)=0$. Also $f$ is not identically zero. Show that $$|f(x)|\leq \frac{1}{4}\int_0^1 |f''(x)|dx,\ \forall\ x\in [0,1].$$ Thank you @FisiaiLusia; I am sorry that on my computer "Mathematics Stack Exchange requires external JavaScript from another domain, which is blocked or failed to load", so I could not vote for your answer.
We can assume that there exists a point $c\in (0,1) $ such that $$ \sup_{v\in [0,1] } |f(v)|=f(c) $$ (replacing $f$ by $-f$ if necessary). By the mean value theorem (Lagrange's theorem) there exist points $u_1 \in (0,c) $ and $u_2 \in (c, 1)$ such that $$ f(c) =f(c) -f(0) =cf' (u_1 ),$$ $$f(c) =f(c) -f(1) =(c-1)f' (u_2 ),$$ so we have $$\int_0^1 |f''(s)|ds \geq \int_{u_1}^{u_2} |f''(s)|ds \geq \left|\int_{u_1}^{u_2} f''(s)ds\right| =|f'(u_2) -f'(u_1)|\geq f'(u_1) -f'(u_2) = \left(\frac{1}{c} +\frac{1}{1-c}\right) f(c) \geq 4f(c) .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/833878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exponent of polynomials (of matrices). $A$ is a matrix over $\mathbb R$ (the reals). Prove that for every $f,g\in \mathbb R[x]$, $\displaystyle e^{f(A)}\times e^{g(A)} = e^{f(A)+g(A)}$. I tried using sigma notation (I wrote $f(x)=\sum a_ix^i$, $g(x)=\sum b_ix^i$ and then started to develop the left part of the equation, and then I got stuck). Any suggestions? Thanks, guys.
You should know that for square matrices $A,B$ the following holds: $AB=BA \implies e^Ae^B=e^{A+B}$. Can you prove that if $P,Q$ are polynomials and $A$ a square matrix, $P(A)Q(A)=Q(A)P(A)$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/833954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\forall x\in\mathbb R : \dfrac{e^x + e^{-x}}2 \le e^{\frac{x^2}{2}}$ with Cauchy's MVT. Prove for all $x\in\mathbb R$: $$\dfrac{e^x + e^{-x}}2 \le e^{\frac{x^2}{2}}$$ Maclaurin expansion: $$e^x=1+x+\frac{x^2}{2}+\frac{x^3}{3!}+R_4(x)$$ $$e^{-x}=1-x+\frac{x^2}{2}-\frac{x^3}{3!}+S_4(x)$$ Adding both together: $2+\frac{x^2}{2}+R_4(x)+S_4(x)$ Question: Can I cancel $R_4(x)+S_4(x)$ because they tend to $0$? If yes, then this is how I continued: We have now: $2+\dfrac{x^2}2 \le e^{\frac{x^2}{2}}$, or $\dfrac{2+\dfrac{x^2}2}{e^{\frac{x^2}{2}}} \le 1$. Now I thought about using Cauchy's MVT here: define $f(x)=2+\dfrac{x^2}2 , \ g(x)= e^{\frac{x^2}{2}}$; both are continuous and differentiable on all of $\mathbb R$, so: $$\frac {f(x)} {g(x)}=\frac {f'(x)-f'(0)}{g'(x)-g'(0)}=\frac{1}{e^{\frac{x^2}{2}}}\le1$$ This is related to: Proving that $\frac{e^x + e^{-x}}2 \le e^{x^2/2}$, but my approach is different and I want to know if it's correct. Note: I can't use integration.
An alternative approach: $$ \ln\cosh x = \int_0^x \tanh t\,dt \le \int_0^x t\,dt = \frac{x^2}{2} $$ This uses the fact that $\tanh x\le x$ for $x\ge 0$ (and the reverse for $x\le 0$), which can be proved by computing the second derivative of $\tanh$, concluding that it's convex on $(-\infty,0]$ and concave on $[0,\infty)$, and comparing it to its tangent line at $x=0$. Another way is to use the fact that if $f$ is convex on $[a,b]$ then $$ \frac1{b-a}\int_a^b f(x)\,dx = \int_0^1 f((1-t)a+tb) \,dt \le \int_0^1 \big((1-t)f(a)+tf(b)\big)\,dt = \frac{f(a)+f(b)}{2} $$ Applying this to $f(x)=e^x$ on $[-u,u]$ yields $\frac{\sinh u}{u}\le\cosh u$, which gives the needed inequalities for $\tanh$. Edit. You've added a note to the question saying that you can't use integration. The first line of this answer can be reformulated to accommodate that restriction: prove that if $f(a)\le g(a)$ and $f'(x)\le g'(x)$ for all $x\in[a,b]$, then $f(x)\le g(x)$ for all $x\in[a,b]$ (this is a consequence of the MVT); then take $f(x)=\ln\cosh x$ and $g(x)=\frac{x^2}{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Homology of the product of a topological space and a sphere is a direct sum of homologies. Show that for $i > n \in\mathbb{N}$: $$H_{i}\left(X \times \mathbb{S}^{n}\right) \simeq H_{i}\left(X\right) \oplus H_{i - n}\left(X\right).$$ My first idea, motivated by the $n=0$ case (which is obvious), was to try induction, but I cannot see how to perform the next step. However, the question seems to be very neat, so I decided to share it.
Ok, so here is what I've got so far: Lemma 1. $$H_i(X\times S^n) \simeq H_i(X \times \{s\})\oplus H_i(X\times S^n,X\times\{s\})$$ Let's write the long exact sequence for the pair $(X \times S^n, X \times \{s\})$: $$H_{i + 1}(X \times S^n, X \times \{s\})\to H_i(X \times \{s\}) \to H_i(X \times S^n) \to H_i(X \times S^n, X \times \{s\}) \to H_{i-1}(X \times \{s\})$$ The second arrow is induced by the inclusion (a monomorphism), so the first and last arrows are equal to $0$ (and therefore the third arrow is an epimorphism); moreover, we can take an arrow in the opposite direction, i.e. $H_i(X \times S^n) \to H_i(X \times \{s\})$, induced by a retraction; the composition is then the identity on $X \times \{s\}$ and thus also on $H_i(X \times \{s\})$. But we know that for a short exact sequence $A \to B \to C$ where the first arrow is mono, the second is epi, and there exists an opposite arrow $B \to A$ which composed with $A \to B$ gives the identity, we have $B = A \oplus C$. Lemma 2. $$H_i(X \times S^n, X \times \{s\}) \simeq H_{i-1}(X \times S^{n-1}, X \times \{s\})$$ Let's take $A = X \times S^n \setminus\{s\}$, $B = X \times S^{n} \setminus\{n\}$, $C = D = X \times \{s\}$. We have $H_i(X \times \{s\}, X \times \{s\}) \simeq 0$, $H_i(A, X \times \{s\}) \simeq H_i(B, X \times \{s\}) \simeq H_i(X \times \{s\}, X \times \{s\})$ and from the relative Mayer-Vietoris sequence for $(A, C), (B, D)$: $$0 \oplus 0 \to H_i( X \times S^n, X \times \{s\}) \to H_{i - 1}(X \times S^{n-1}, X \times \{s\}) \to 0 \oplus 0.$$ Hence the lemma. Difficulty. After applying Lemma 2 $n$ times in Lemma 1 we get: $$H_i(X\times S^n) \simeq H_i(X \times \{s\})\oplus H_{i - n}(X\times S^0,X\times\{s\}).$$ $X\times \{s\} \to X\times S^0$ is a cofibration, and hence we have $$H_{i - n}(X\times S^0,X\times\{s\}) \simeq H_{i - n}((X\times S^0)/ (X\times\{s\})) \simeq H_{i - n}(X\times\{n\} \cup \{x\}\times\{s\}) \simeq H_{i - n}(X \times \{n\}).$$ But the last equality holds only for $i - n > 0$. (I hope that the meaning of $n$ is clear from the context ;)) I'd be grateful for checking and simplifying this solution, and for fixing the case $i = n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Prove $u_{n}$ is decreasing. $$u_{1}=2, \quad u_{n+1}=\frac{1}{3-u_n}$$ Prove it is decreasing and convergent and calculate its limit. Is it possible to define $u_{n}$ in terms of $n$? In order to prove it is decreasing, I calculated some terms, but I would like to know how to do it in a more rigorous way.
If there is a limit, it will be defined by $$L=\frac{1}{3-L}$$ which reduces to $L^2-3L+1=0$. You need to solve this quadratic and discard any root greater than $2$, since $2$ is the starting value and you have proved that the terms are decreasing.
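A quick numeric check of the claimed behaviour (a sketch; the iteration count is arbitrary):

```python
u = 2.0
for _ in range(10):
    u = 1 / (3 - u)
    print(u)
# 1.0, 0.5, 0.4, 0.3846..., decreasing toward (3 - sqrt(5))/2 ≈ 0.381966
```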
{ "language": "en", "url": "https://math.stackexchange.com/questions/834228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Find the number of real solutions to the equation $ x^7=x+1$. The equation $x^7 = x+1 $ has: a) no real solution b) no positive real solution c) a real solution in the interval (0,2) d) a real solution but not within (0,2). Which is the correct answer and why? How do I find the answer to such questions?
Consider the 2 equations: $$y = x^7 \tag{A}$$ $$y = x + 1 \tag{B}$$ What does equation (A) look like? It passes through $(0, 0)$ and grows incredibly fast before $x=-1$ and after $x=1$. Equation (B) grows so much more slowly that it can only intersect (A) at one point. Since $1^7 < 1 + 1$ and $2 + 1 < 2^7$, the intersection point is somewhere between $x=1$ and $x=2$. So the answer is c).
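A bisection sketch confirming where the root lies (assuming Python; the loop count is my own choice):

```python
def f(x):
    return x**7 - x - 1   # f(1) = -1 < 0, f(2) = 125 > 0

lo, hi = 1.0, 2.0
for _ in range(60):       # halve the bracket each step
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)                 # ≈ 1.1128, the unique real root, inside (0, 2)
```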
{ "language": "en", "url": "https://math.stackexchange.com/questions/834310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Geometrically, what is the span of vectors? Simple question from a calc 3 beginner. Visually I cannot imagine the span of two vectors, what does this necessarily mean? For example my text mentions if two vectors are parallel their span is a line, otherwise a plane. Can anyone elaborate?
Assuming it makes sense that the span of a single vector is a line, we can imagine the two vectors in 3-space. Because the span of each vector lies within the space of each of them, we can draw the two lines that are in the direction of these two vectors: if the two lines are equal, then this is all of the span; otherwise, we can imagine picking one of the lines and sliding it along the other line such that it stays parallel to its original placement. To make the above intuition precise, the point is that the span of a single vector is the result of performing scalar multiplication on that single vector: that is, stretching it, shrinking it, or even flipping its orientation (making the direction go the other way). Then we can get any other point of the plane by finding the projection of the point onto the first line, and then translating by the projection of the point onto the second line; this is why when the lines are the same we do not leave the line. As far as the formal definition of the span goes, the span of a set $S=\{v_1,\ldots, v_n\}$ of vectors is given by the set $$\mathrm{span}(S)=\left\{ \sum_{i=1}^n{c_iv_i} \mid c_i\in \mathbb{F}, v_i\in S\right\}$$ where $\mathbb{F}$ is the field that you're working over (likely the real numbers $\mathbb{R}$). In the case where $S=\{v_1,v_2\}$, we're looking at the set of vectors of the form $c_1v_1+c_2v_2$. Now, if $v_1$ and $v_2$ are in the same direction, then there is $c$ such that $v_2=cv_1$, so we can rewrite this as $c_1v_1+c_2v_2=c_1v_1+c_2cv_1=(c_1+c_2c)v_1$, which is just an element in the span of $\{v_1\}$. This is why it is possible for the span to be a line. Now, when they aren't in the same direction, they lie precisely in a unique plane. Any point in the line that goes along the direction of the vector $v_1$ can be written as $c_1v_1$, and any point in the line that goes along the direction of the vector $v_2$ can be written as $c_2v_2$. Every point in the plane will then be of the form $c_1v_1+c_2v_2$ by projecting the point onto $v_1$ and $v_2$ to find $c_1$ and $c_2$, respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Probabilistic game. Suppose a rich person offers you $\$1000$ and says that you can participate in $1000$ rounds of this game: In each round a coin is flipped, and you get a $50$% return on the portion of your money that you risked if it lands on heads, or take a $40$% loss on it if it lands on tails. For example, if you choose to risk all of your money in round one, then you'll either have $1500$ or $600$ dollars for round two. If you only risk half of it, then you'll either have $1250$ ($500+500\cdot1.5$) or $800$ ($500+500\cdot0.6$) dollars. What is your best strategy?
The expected value of each round of your game is positive ($E[X]=\frac12(0.5\alpha)-\frac12(0.4\alpha)=0.05\alpha> 0$, where $\alpha$ is the amount of money bet), so the more you bet, the more you'll earn in expectation. Moreover, betting all your money doesn't prevent you from playing all $1000$ games, since the result only affects a percentage of your bet; so the strategy that maximizes expected value is to bet everything you have in each of the $1000$ rounds. EDIT: I assume the coin is a fair coin.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many Sylow $3$-subgroups does $G$ have? Let $G$ be a noncyclic group of order $21$. How many Sylow $3$-subgroups does $G$ have? The possible numbers of Sylow $3$-subgroups are $1$ and $7$. But how to determine the exact number?
If $n_3=1$, then $G$ must be cyclic (as shown below), so the answer must be $7$. Notice that $n_7$ must be equal to $1$, so $G$ has a unique, hence normal, Sylow $7$-subgroup $H$. As you said, $n_3\in \{1,7\}$; if $n_3=1$, then $G$ also has a normal subgroup $K$ of order $3$, which means $G=HK$ with $H\cap K=1$ and $H,K$ normal in $G$, which implies that $G$ is abelian. Since the orders of $H$ and $K$ are relatively prime, $G$ must be cyclic, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find $\lim_{k \rightarrow \infty} \sup a_k$ and $\lim_{k \rightarrow \infty} \inf a_k$, with $a_k=\frac{1}{k}$, $k \in \mathbb{N}$. Per definition: $\lim_{k \rightarrow \infty} \sup(a_k) = \lim_{k \rightarrow \infty} \sup (\bigcup_{i=k}^\infty \{a_i\})$. I have $\sup (\bigcup_{i=k}^\infty \{a_i\}) = \frac{1}{k}$ $\Rightarrow \lim_{k \rightarrow \infty} \sup (\bigcup_{i=k}^\infty \{a_i\}) = 0$. For the limit inferior I get: $\lim_{k \rightarrow \infty} \inf(a_k) = \lim_{k \rightarrow \infty} \inf (\bigcup_{i=k}^\infty \{a_i\})$. Since $\inf (\bigcup_{i=k}^\infty \{a_i\}) = 0$, $\Rightarrow \lim_{k \rightarrow \infty} \inf(a_k) = 0$. Is that correct? Can it be that the limit superior and limit inferior of a sequence are equal?
We know that a bounded sequence $(x_n)_{n=1}^{\infty}$ is convergent if and only if $\limsup\limits_{n\to\infty} x_n=\liminf \limits_{n\to\infty}x_n$. Since $(a_{k})=(\frac{1}{k})_k$ converges to $0$, it follows that $\limsup\limits_{k\to\infty} a_k=\liminf \limits_{k\to\infty}a_k=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Help understanding the characteristic polynomial. Let $$A = \begin{bmatrix} 1 &2 &1 \\ 2 & 2 &3 \\ 1 & 1 &1 \end{bmatrix}$$ I'm calculating the characteristic polynomial by the following: $$P(x) = -x^3 + Tr(A)x^2 + \frac{1}{2}(a_{ij}a_{ji} - a_{ii}a_{jj})x + \det(A)$$ Now when I calculate using matlab (using the charpoly(A) function) I get the following results: Ans = 1 -4 -3 -1. This makes sense: the second result ($-4$) is minus the trace $-(1+2+1)$, and the last result ($-1$) is minus the determinant of this matrix, which is $1$. But I do not and cannot seem to figure out how they are calculating the $-3$ (in this case)? I know it has something to do with the $\frac{1}{2}(a_{ij}a_{ji} - a_{ii}a_{jj})$, but what would these elements be in this matrix in order to calculate the third result?
See Wolfram MathWorld for your formula. It notes that Einstein summation is used in the coefficient for $x$, so you have to sum over both $i$ and $j$ to get the actual answer. The matlab answer gives the negative of $P(x)$ (as these have the same roots anyway): the 1 stands for $x^3$, the -4 is minus the trace $1+2+1=4$, and the final $-1$ is minus the determinant. So if you apply the formula with $-x^3$ at the start, we want the $x$-coefficient to be $3$, not $-3$. So compute the missing coefficient as the double sum ${1 \over 2}\sum_{i=1}^3\sum_{j=1}^3 (a_{ij}a_{ji} - a_{ii}a_{jj}) = \frac12\left(\operatorname{Tr}(A^2)-\operatorname{Tr}(A)^2\right)$.
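A numeric check of that double sum (a sketch assuming Python with numpy):

```python
import numpy as np

A = np.array([[1, 2, 1],
              [2, 2, 3],
              [1, 1, 1]])

coeff_x = 0.5 * (np.trace(A @ A) - np.trace(A)**2)  # (1/2) Σ_ij (a_ij a_ji − a_ii a_jj)
print(coeff_x)     # 3.0 — the x-coefficient of P(x) = -x^3 + 4x^2 + 3x + 1
print(np.poly(A))  # [ 1. -4. -3. -1.] — matlab-style coefficients of -P(x)
```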
{ "language": "en", "url": "https://math.stackexchange.com/questions/834811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculation of $(x,y,z)$ in $x+\lfloor y \rfloor -\{z\} = 2.98\;\;,\lfloor x \rfloor +\{y\}-z = 4.05\;\;,-\{x\}+y+\lfloor z \rfloor = 5.01$. Find the number of real solutions of the system $$x+\lfloor y \rfloor -\{z\} = 2.98,\qquad \lfloor x \rfloor +\{y\}-z = 4.05,\qquad -\{x\}+y+\lfloor z \rfloor = 5.01.$$ I do not understand how to solve this; please help. Thanks.
I'm assuming that indeed $\{a\} = a - \lfloor a\rfloor$. Since then $0 \leqslant \{a\} < 1$ for all $a$, we have $$-1 < \{a\} - \{b\} < 1$$ for all $a,b$. Then from the given equations we obtain, by writing $x = \lfloor x\rfloor + \{x\}$ and analogously for $y,z$, the three equations $$\begin{gather} \{ x\} - \{ z\} = -0.02,\tag{1}\\ \{ y\} - \{ z\} = 0.05,\tag{2}\\ \{ y\} - \{ x\} = 0.01.\tag{3} \end{gather}$$ Adding $(1)$ and $(3)$ yields $\{y\} - \{z\} = -0.01$, which contradicts $(2)$, hence there is no solution to the system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About the second uniqueness theorem for primary decomposition. I'm self-learning commutative algebra from Introduction to Commutative Algebra by Atiyah and Macdonald and got frustrated with the second uniqueness theorem for primary decomposition. I copy the theorem for you to reference (page 54): Let $\mathfrak a$ be a decomposable ideal, let $\mathfrak a = \bigcap_{i=1}^nq_{i}$ be a minimal primary decomposition of $\mathfrak a$, and let $\{p_{i_1},...,p_{i_m}\}$ be an isolated set of prime ideals of $\mathfrak a$. Then $q_{i_1} \cap ...\cap q_{i_m}$ is independent of the decomposition. Here is the proof: We have $q_{i_1} \cap ...\cap q_{i_m} = S(\mathfrak a)$ where $S = A - (p_{i_1} \cup ... \cup p_{i_m})$, hence depends only on $\mathfrak a$ (since the $p_i$ depend only on $\mathfrak a$). What makes me confused is this: I understand that the set of ALL prime ideals associated with an ideal $\mathfrak a$ is independent of the decomposition, but should this still be true when we just take an isolated set from that set? If that isolated set is just a proper subset of the set of all prime ideals associated with $\mathfrak a$, why do "the $p_i$ depend only on $\mathfrak a$"? The second thing is that I read another source which has another statement of this theorem. Here is the content: Let $\mathfrak a$ be a decomposable ideal, let $\mathfrak a = \bigcap_{i=1}^nq_{i}$ be a minimal primary decomposition of $\mathfrak a$, and let $\{p_{1},...,p_{m}\}$ be the set of minimal prime ideals of $\mathfrak a$. Then $q_1, q_2, ..., q_m$ are independent of the decomposition. In this statement, the selected set must be the set of ALL minimal prime ideals, not just an isolated set. So which statement is true? Please help me clarify this. Thanks so much. I really appreciate it.
The proof of the first statement: Let $q_i$ be a primary component whose associated prime $p_i$ is isolated. Then $\{p_i\}$ is an isolated set, because $p_i$ is minimal in the set of associated primes. If you take $S=A-p_i$, then $q_i=S(\mathfrak a)$, where $S(\mathfrak a)=\mathfrak a^{ec}$ (extension and contraction with respect to localization at $S$). So $q_i$ is uniquely determined by $\mathfrak a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/834965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many ways to pick $X$ balls. Suppose I have $3$ types of balls $A,B$ and $C$, with $n_a, n_b,$ and $n_c$ copies of each. Now I want to select $x$ balls from these $3$ types of balls, $x < n_a + n_b + n_c$. Can anybody help me arrive at a closed formula for this? I thought that partitioning $x$ into $3$ parts would do, but in that case it is quite possible to pick more balls of a particular type than its actual count.
This problem can be easily solved with the Principle of Inclusion-Exclusion (PIE) and the Balls and Urns technique. The answer is: $\dbinom{x+2}{2} - \dbinom{x-n_a+1}{2} - \dbinom{x-n_b+1}{2} - \dbinom{x-n_c+1}{2} + \dbinom{x-n_a-n_b}{2} + \dbinom{x-n_a-n_c}{2} + \dbinom{x-n_b-n_c}{2} - \dbinom{x-n_a-n_b-n_c-1}{2}$, with the convention that $\dbinom{k}{2}=0$ for $k<2$. Hint: Each term corresponds to the number of non-negative integer solutions $(a,b,c)$ to $a+b+c = x$ with 0, 1, 2, or 3 of the following conditions enforced: $a > n_a$, $b > n_b$, $c > n_c$.
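A brute-force cross-check of the formula (a sketch assuming Python 3.8+; the helper names are mine):

```python
from math import comb
from itertools import product

def c2(k):
    return comb(k, 2) if k >= 2 else 0   # convention: binom(k,2) = 0 for k < 2

def count_pie(x, na, nb, nc):
    return (c2(x + 2) - c2(x - na + 1) - c2(x - nb + 1) - c2(x - nc + 1)
            + c2(x - na - nb) + c2(x - na - nc) + c2(x - nb - nc)
            - c2(x - na - nb - nc - 1))

def count_brute(x, na, nb, nc):
    return sum(a + b + c == x
               for a, b, c in product(range(na + 1), range(nb + 1), range(nc + 1)))

print(count_pie(7, 4, 5, 6), count_brute(7, 4, 5, 6))  # the two counts agree
```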
{ "language": "en", "url": "https://math.stackexchange.com/questions/835031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Integer solutions to $a^{2014} +2015\cdot b! = 2014^{2015}$ How many solutions are there for $a^{2014} +2015\cdot b! = 2014^{2015}$, with $a,b$ positive integers? This is another contest problem that I got from my friend. Can anybody help me find the answer? Or give me a hint to solve this problem? Thanks
Taking this equation mod $2015$ yields $a^{2014} \equiv -1 \pmod{2015}$. Since $2015 = 5 \cdot 13 \cdot 31$, we get the following: $a^{2014} \equiv -1 \pmod{5}$ $a^{2014} \equiv -1 \pmod{13}$ $a^{2014} \equiv -1 \pmod{31}$ By Fermat's Little Theorem, $a^{31} \equiv a \pmod{31}$. Hence, $a^4 \equiv a^{2014} \equiv -1 \pmod{31}$. We can check that $-1$ is not a quadratic residue $\pmod{31}$. Thus, there is no residue $a^2$ such that $(a^2)^2 = a^4 \equiv -1 \pmod{31}$. Therefore, there are no solutions to the original equation.
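These modular claims are easy to verify by machine (a sketch assuming Python):

```python
# -1 ≡ 30 (mod 31) is not a fourth power modulo 31 ...
print(any(pow(a, 4, 31) == 30 for a in range(31)))                    # False

# ... and a^2014 ≡ a^4 (mod 31) whenever 31 does not divide a
print(all(pow(a, 2014, 31) == pow(a, 4, 31) for a in range(1, 31)))   # True
```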
{ "language": "en", "url": "https://math.stackexchange.com/questions/835111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Power series convergence radius. My question is: how do I calculate the radius of convergence of a power series when the series is not written like $$\sum a_{n}x^{n}?$$ I have this series: $$\sum\frac{x^{2n+1}}{(-3)^{n}}$$ Can I use the criteria as if I were working with $x^{n}$, not $x^{2n+1}$? I tried this: $$k=2n+1\Rightarrow n=\frac{k-1}{2}$$ And I got $$R=\lim_{k\to\infty}\left|\frac{a_{k}}{a_{k+1}}\right|=\frac{1}{\sqrt{3}}$$ I know the answer is: the series converges for all $x$ with $|x|<\sqrt{3}$. How do I get it? Thanks :)
Since $x\ne0$, we can apply the ratio test for absolute convergence, which is stated precisely as follows: Let $\sum u_{k}$ be a series with nonzero terms and suppose that $$\rho= \lim_{k\to\infty}\frac{|u_{k+1}|}{|u_k|}.$$ (a) The series converges absolutely if $\rho<1$. (b) The series diverges if $\rho>1$ or $\rho = \infty$. (c) The test is inconclusive if $\rho=1$. Applied in this scenario, we get: $$\frac{|x^{2(n+1)+1}|}{|(-3)^{n+1}|}\cdot \frac{|(-3)^n|}{|x^{2n+1}|} =\frac{|x^2|}{|-3|}=\frac{x^2}{3}.$$ Now $$\lim_{n\to\infty}\frac{x^2}{3}<1\iff|x|<\sqrt3.$$ Our radius of convergence is $\sqrt3$ and the interval of convergence is $-\sqrt3< x < \sqrt3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/835196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Geometric proof of dot product distributive property. I'm working my way through a textbook for fun in order to keep my math brain fresh and came across this simple yet perplexing problem: "Demonstrate geometrically that the dot product is distributive." I can do this algebraically, but what would a geometric proof of this look like?
Maybe something like this? $$a \cdot (b+c) = a \cdot b + a \cdot c$$ "The 'projection' of $a$ onto $b+c$ is the same as the sums of the 'projections' of $a$ onto $b$ and of $a$ onto $c$." (You'll have to draw the triangle formed by $b$, $c$, and $b+c$. Also, you will need to be precise about what the 'projection' is.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/835337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why must polynomial division be done prior to taking the limit? Suppose I wish to evaluate the following, $$\mathop {\lim }\limits_{x \to 2} \left( {{{{x^2} - 4} \over {x - 2}}} \right)$$ If I just substitute two into $x$, it can't be done because the answer would be undefined (division by zero). But, if I complete the polynomial division, that I hate to do because I'm all thumbs at it, $$\mathop {\lim }\limits_{x \to 2} \left( {{{{x^2} - 4} \over {x - 2}}} \right) = \mathop {\lim }\limits_{x \to 2} \left( {x + 2} \right) = 4$$ Please tell me what's going on here?
The point is that the functions $$\frac{x^2-4}{x-2}\quad\hbox{and}\quad x+2$$ are equal except at $x=2$, where the second is defined and the first is not. If you look closely at the definition of a limit as $x\to a$, you will see that it is carefully framed in such a way that the value of the function (if any) when $x=a$ is irrelevant. Therefore the two functions above have the same limit as $x\to2$. However, as you have noted, you cannot just substitute $x=2$ in the first as it is undefined. On the other hand, the second function is defined at $x=2$, and better still, it is continuous at $x=2$, because it is a polynomial. Therefore, using the definition of continuity, $$\lim_{x\to2}(x+2)=2+2=4\ .$$ And finally, as already noted, $$\lim_{x\to2}\frac{x^2-4}{x-2}=\lim_{x\to2}(x+2)=4\ .$$ This kind of problem is superficially very simple, but as you can see, there is quite a lot behind it if you want to fully understand what is going on.
{ "language": "en", "url": "https://math.stackexchange.com/questions/835442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Show that if a square matrix A satisfies the equation ... then A must be invertible. (a) Show that if a square matrix $A$ satisfies the equation $A^2 + 2A + I = 0$, then $A$ must be invertible. What is the inverse? (b) Show that if $p(x)$ is a polynomial with a nonzero constant term, and if $A$ is a square matrix for which $p(A) = 0$, then $A$ is invertible. What am I supposed to do here? Plug a square matrix with entries $a, b, c, d$ into the problem? But then what? And I don't have a clue how to do the second one either...
If $A$ is not invertible, then $0$ is an eigenvalue of $A$. Thus, $p(0)$ must be an eigenvalue of $p(A) = 0_{n\times n}$. But all of the eigenvalues of $p(A) = 0_{n\times n}$ are $0$. So, we must have $p(0) = 0$. This is a contradiction since $p$ has a non-zero constant term, and so, $p(0) \neq 0$. Therefore, $A$ is invertible, as desired. (For part (a) concretely: $A^2+2A+I=0$ gives $A(-A-2I)=I$, so $A^{-1}=-(A+2I)$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/835503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Egg drop problem Suppose that you have an $N$-story building and plenty of eggs. An egg breaks if it is dropped from floor $T$ or higher and does not break otherwise. Your goal is to devise a strategy to determine the value of $T$ given the following limitations on the number of eggs and tosses: Version 0: $1$ egg, $\leq T$ tosses Version 1: $\text{~}1$ $\text{lg}$$(N)$ eggs and $\text{~}1$ $\text{lg}(N)$ tosses. ($lg$ is log base 2) Version 2: $\text{~}1$ $\text{lg}$$(T)$ eggs and $\text{~}2$ $\text{lg}$$(T)$ tosses Version 3: $2$ eggs and $\text{~}$ $2\sqrt{N}$ tosses Version 4: $2$ eggs and $\text{~}$ $\sqrt{2N}$ tosses Version 5: $2$ eggs and $\leq$ $2\sqrt{2T}$ tosses I think I have the answer for most of these but don't know how to do a few. Could you please check over my work and provide hints on how to approach the ones I don't know how to do? For version 0, a simple iterative search starting from the 1st floor and working up to the $N$th floor in increments of 1 will work. For version 1, a binary search across the floors $1$ to $N$ will work. For version 2, I think you can iteratively double floors, visiting $1$, then $2$, then $4$, then $8$, etc. until the egg breaks at floor $2^k$. Then you can binary search across $2^{k-1}$ and $2^k$ For version 3, you can go iteratively go across floors with incrementing by $\sqrt{N}$: first visiting 0, then $\sqrt{N}$, then $2\sqrt{N}$, etc. Once the egg breaks at stage $k\sqrt{N}$, iterate across the range $(k-1)\sqrt{N}$ and $k\sqrt{N}$ one floor at a time. For versions 4 and 5 I don't know how to start. Can someone please provide a hint?
A very broad hint for version 4: consider that in your version-3 answer you don't have to use uniform intervals between drops of the first egg. Can you see how to use a non-uniform distribution so that the worst-case total is identical no matter where the first egg breaks?
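To make the hint concrete, here is a sketch of the standard decreasing-gap schedule (assuming Python; this illustrates one strategy achieving $\sim\sqrt{2N}$ worst-case tosses, not the only one):

```python
import math

def first_egg_floors(N):
    """Drop egg 1 at gaps k, k-1, k-2, ... so any break point costs <= k tosses."""
    k = math.ceil((math.sqrt(8 * N + 1) - 1) / 2)  # smallest k with k(k+1)/2 >= N
    floors, step, f = [], k, 0
    while f < N:
        f = min(f + step, N)
        floors.append(f)
        step = max(step - 1, 1)
    return k, floors

k, floors = first_egg_floors(100)
print(k)        # 14 ≈ sqrt(2*100): worst-case total tosses for N = 100
print(floors)   # [14, 27, 39, 50, 60, 69, 77, 84, 90, 95, 99, 100]
```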
{ "language": "en", "url": "https://math.stackexchange.com/questions/835582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
How to simplify this equation to a specific form? How can I simplify this expression? $$ 2(4^{n-1})-(-3)^{n-1} + 12 ( 2 (4^{n-2})-(-3)^{n-2})$$ The correct answer is $2 · 4^n − (−3)^n$
hint: use the fact that $$a^n = a\cdot a^{n-1}$$ and that $$a^n = a^2\cdot a^{n-2}$$
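Carrying the hint through (a completion not in the original hint, so verify the algebra): $$2\cdot 4^{n-1} + 12\cdot 2\cdot 4^{n-2} = 4^{n-2}\,(8+24) = 2\cdot 4^{n},\qquad -(-3)^{n-1}-12\,(-3)^{n-2} = (-3)^{n-2}\,(3-12) = -(-3)^{n},$$ and adding the two pieces gives $2\cdot 4^n - (-3)^n$.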
{ "language": "en", "url": "https://math.stackexchange.com/questions/835691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that the set where two continuous functions agree is closed. Let $f: \mathbb R \to \mathbb R $ and $g: \mathbb R \to \mathbb R$ be continuous functions. Show the set $ E = \{ x \in\mathbb R: f(x)=g(x)\} $ is closed. My approach A solution I found is the following: $h=f-g$, $h(x)=f(x)-g(x)=0$. $f$ and $g$ are continuous, and $h$ is continuous. Taking $h(E)=\{ x\in \mathbb R:h(x)=0\}$ // makes no sense, why $x\in\mathbb R$ and not $h(x)\in\mathbb R$? $h^{-1} (E)=\{0\}$ is closed // why is the inverse only zero? => $E$ is closed. Is the solution correct? It seems very elegant and short, but $h(E)$ makes no sense to me; please explain.
How much do you know about continuous functions? For example, do you know that for a continuous function $h$ and a closed set $X$, the set $h^{-1}(X)$ is always closed? If you know that, the continuation is badly written, but captures the idea. First, you define the function $h = f-g$. Then, you can see that $$E=\{x\in \mathbb R: f(x) = g(x)\} = \{x\in\mathbb R: f(x)-g(x)=0\} = \{x\in\mathbb R: h(x) = 0\} = h^{-1}(\{0\})$$ This means that $E$ is the preimage of $\{0\}$ for the function $h$, and, since $h$ is continuous and $\{0\}$ is a closed set, $E$ is a closed set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/835757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can we prove $\int_1^\pi x \cos(\frac1{x}) dx<4$ by hand? Is there any way we can prove this definite integral inequality by hand: $$ \int_{1}^{\pi}x\cos\left(1 \over x\right)\,{\rm d}x < 4 $$ I don't even know where to start; please help. The bound $\displaystyle\cos\left(1 \over x\right)\ \leq\ 1$ doesn't seem to help, because $\displaystyle\int_{1}^{\pi}x\,{\rm d}x\ >\ 4$.
Hint: $$\begin{align} \int_1^\pi x \cos \left(\frac1{x}\right)dx&=\int_{1}^{1/\pi}\frac{\cos {u}}{u}\left(-\frac{du}{u^2}\right)\\ &=\int_{1/\pi}^{1}\frac{\cos {u}}{u^3}du\\ &\leq\int_{1/\pi}^{1}\frac{1-\frac12u^2+\frac{1}{24}u^4}{u^3}du \end{align}$$
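A numeric sanity check of the target inequality (a sketch assuming Python with mpmath):

```python
from mpmath import mp, quad, cos, pi

mp.dps = 15
val = quad(lambda x: x * cos(1 / x), [1, pi])
print(val)   # ≈ 3.88, indeed < 4
```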
{ "language": "en", "url": "https://math.stackexchange.com/questions/835852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Is $M=\{\frac{1}{k}|k \in \mathbb{N}\}$ closed? Is $M=\{\frac{1}{k}|k \in \mathbb{N}\}$ closed? I think it is closed, but I'm not sure whether my argumentation is correct. Since $(\frac{1}{k})_{k \in \mathbb{N}}$ is convergent $\Rightarrow$ from Cauchy $ \exists k_0 \in \mathbb{N}: \forall k \geq k_0:|\frac{1}{k}-\frac{1}{k+1}|< \epsilon ,\forall \epsilon \gt 0$ Let $O_k = (\frac{1}{k+1},\frac{1}{k}), \forall k \lt k_0$ and let $O=(-\infty,0) \cup \bigcup_{k=1}^{k_0-1}O_k \cup (1, \infty)$ $\Rightarrow M=\mathbb{R}-O$ Since O is open, M is closed. Is that argumentation correct?
It is not true that $M=\mathbb{R}-O$; the set $\mathbb{R}-O$ still contains all the intervals $(\frac{1}{k+1},\frac{1}{k})$ for $k\geq k_0$, and these intervals are not contained in $M$. It's possible some of your confusion is coming from the order of the quantifiers in the definition of Cauchy - you have to fix $\varepsilon$ first, and this determines $k_0$. Whatever $\varepsilon$ you choose, your $\mathbb{R}-O$ still differs from $M$ by infinitely many intervals. It doesn't even help that the length of these intervals is bounded by $\varepsilon$ - even if you make sense of taking $\varepsilon$ to $0$, so that "$\mathbb{R}-O$ converges to $M$", this convergence won't preserve the openness of $O$. As the other answers point out, $M$ has a limit point (i.e. $0$) that it doesn't contain, so it isn't closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/835926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Practical trigonometry question I can't figure out (Highschool Level) "Jack is on a bearing of 260 degrees from Jill. What is Jill's bearing from Jack?" The answer is 080 degrees. I really can't figure out how. Any help is appreciated.
Consider the following diagram (included as an image in the original answer: Jack and Jill, each with a north line drawn through them, and bearings measured clockwise from north). The rule about bearings is: "Point north and go clockwise". The bearing of Jill from Jack $(\theta)$ and the angle $\gamma$ ($=260-180=80$) are alternate angles between the two parallel north lines, so they're equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/835980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Relation between $\sin(\cos(\alpha))$ and $\cos(\sin(\alpha))$ If $0\le\alpha\le\frac{\pi}{2}$, then which of the following is true? A) $\sin(\cos(\alpha))<\cos(\sin(\alpha))$ B) $\sin(\cos(\alpha))\le \cos(\sin(\alpha))$ and equality holds for some $\alpha\in[0,\frac{\pi}{2}]$ C) $\sin(\cos(\alpha))>\cos(\sin(\alpha))$ D) $\sin(\cos(\alpha))\ge \cos(\sin(\alpha))$ and equality holds for some $\alpha\in[0,\frac{\pi}{2}]$ Testing for $\alpha=0$, I can say that the last two options will be incorrect. However which option among the first two will hold ?
Let $a=\cos{x}$, $b=\sin{x}$, so $a,b \in[0,1]$. We are now going to see which of $\sin{a}$ and $\cos{b}$ is larger. Notice that $a$ and $b$ satisfy $a^2+b^2=1$, and if we regard the values $a$ and $b$ as a pair $(a,b)$ in the $(a,b)$-plane, the pairs trace the arc of the circle of radius $1$ in the first quadrant. If $\sin{a}=\cos{b}=\sin{(\frac{\pi}{2}-b)}$ held for some $a,b \in [0,1]$, then $a=\frac{\pi}{2}-b$ (i.e. the point would lie on the line $a+b=\frac{\pi}{2}$); but the line $a+b=\frac{\pi}{2}$ does not touch the circle $a^2+b^2=1$, so equality never holds. Moreover, the arc $a^2+b^2=1$ lies below the line, that is, $a+b < \frac{\pi}{2}$. And since $\sin{\theta}$ is an increasing function for $\theta \in [0,\frac{\pi}{2}]$, we have $\sin{a} < \sin{(\frac{\pi}{2}-b)}=\cos{b}$. The answer is A).
{ "language": "en", "url": "https://math.stackexchange.com/questions/836058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Finding the number of combinations of M numbers out of N with a given sum. I was thinking about the problem of finding the number of ways to add $M$ numbers, each ranged 0 to $K$, to give a desired sum. Doing some research online, I found a way to use polynomials to achieve the goal. For example, suppose there are $M$ numbers, each ranged from 0 to $K$ inclusive; if we consider the polynomial $$(x^0 + x^1 + x^2 + \cdots + x^K)^M$$ the number of ways to add those $M$ numbers to get $N$ is the coefficient of the term $x^N$. I also wrote code to verify this. In this case each number ranges over 0 to $K$, so duplicate values are allowed. But I would like to extend the problem to the case with no duplicate values. I require that there are $N$ different numbers $1, 2, \cdots, N$, and I pick $M$ of them ($M<N$) at a time, so the possible sums range from $1+2+\cdots+M$ (the minimum) to $N+(N-1)+\cdots+(N-M+1)$ (the maximum). My question is: how do I figure out how many ways there are, among all choices of $M$ out of $N$ numbers, to add up to a given sum? I wrote a program to test this; when $N, M$ are small (e.g. $N=50$, $M=12$) it is not that bad to enumerate all the cases, but if we increase $N$, say to 100 or more, there are too many cases to enumerate. Is there a way, like the polynomial method, to do the trick quickly? I would like to add an example to clarify my question: say I have $N=10$ numbers, $\{1,2,3,4,5,6,7,8,9,10\}$, and I pick $M=5$ numbers out of them at a time (no duplicates). How many ways can I pick 5 numbers out of this 10-number set such that the sum is 22?
The number of ways to select $M$ distinct values from $1,2,\dots, N$ with sum $S$ is the coefficient of $y^{M} x^S$ in the product $\prod_{j=1}^N (1+y \, x^j)$.
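To extract that coefficient without expanding the full product, a dynamic-programming sketch (assuming Python; the function name is mine, and the example values come from the question):

```python
def count_subsets(N, M, S):
    """Coefficient of y^M x^S in prod_{j=1}^N (1 + y x^j):
    dp[(m, s)] = number of ways to choose m distinct values from 1..j with sum s."""
    dp = {(0, 0): 1}
    for j in range(1, N + 1):
        for (m, s), ways in list(dp.items()):   # snapshot: each j used at most once
            key = (m + 1, s + j)
            dp[key] = dp.get(key, 0) + ways
    return dp.get((M, S), 0)

print(count_subsets(10, 5, 22))   # 11 ways to pick 5 distinct numbers from 1..10 summing to 22
```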
{ "language": "en", "url": "https://math.stackexchange.com/questions/836170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Constrained to unconstrained optimization problem by substitution Given the following convex optimization problem $\min_{x,p} ||x|| - p$ subject to $p > 0$, can I change the above to an unconstrained convex optimization problem by substituting $c = \log(p)$ and minimizing $\min_{x,c} ||x|| - \exp(c)$? If this is possible, I am interested in literature about this topic, but I couldn't find much. Edit: This is just a minimal example. The problem in the example is of course unbounded, but I think my full optimization problem adds more complexity than necessary to answer my question.
Yes, of course you can do this. If $\phi$ is a diffeomorphism, over the appropriate domains $$\min_x\ f(x) = \min_y\ (f\circ \phi)(y).$$ I don't know a reference offhand unfortunately, but it is rather intuitive that this should work -- also, by the chain rule $\nabla (f\circ\phi) = [J\phi]^{\top}\nabla f$, which vanishes if and only if $\nabla f = 0$, since $J\phi$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/836256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Maximal ideal in the ring of polynomials over $\mathbb Z$ Let $\mathbb Z[x]$ be the ring of polynomials with integer coefficients in one variable and $I =\langle 5,x^2 + 2\rangle$. How can I prove that $I$ is a maximal ideal? I first observed that $5$ and $x^2+2$ are both polynomials in that ring, but how can I show that it is maximal? Some help, please.
If you quotient by the ideal $I$, then $$5 \equiv 0 \pmod{I} \text{ and } x^2+2 \equiv 0 \pmod{I}.$$ This also suggests that $x^2 \equiv -2 \equiv 3 \pmod{I}$. This helps you get the idea that perhaps $x \mapsto \sqrt{3}$ and $5 \mapsto 0$. More precisely, $\mathbb{Z}[x]/I \cong \mathbb{F}_5[x]/(x^2+2) = \mathbb{F}_5[x]/(x^2-3)$, and since $3$ is not a square modulo $5$ (the squares are $0,1,4$), the polynomial $x^2-3$ is irreducible over $\mathbb{F}_5$; hence the quotient is the field $\mathbb{F}_{25}$, and $I$ is maximal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/836348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
2D Fourier Transform proof of Similarity Theorem I have to solve an exercise, but if I could use the following theorem, it would be a piece of cake. Similarity Theorem: if $ \mathscr{F}\{g(x,y)\}= G( f_x,f_y)$ then $ \mathscr{F}\{g(ax,by)\}= \frac {1} {| a \cdot b|}G( f_x /a, f_y/b) $. I just need the proof (you can find the theorem at page 8 here: http://ymk.k-space.org/elective_2DFT.pdf). Can you help? It probably needs a Jacobian and a change of variables?
$$x' = ax, y' = by$$ Jacobian is $\dfrac{1}{|ab|}$ $$\int e^{i(xf_x + yf_y)} g(ax,by) \,dx\,dy = \dfrac{1}{|ab|}\int e^{i\left(\dfrac1ax'f_x + \dfrac1by'f_y\right)} g(x',y') \,dx'\,dy' \\ = \dfrac{1}{|ab|}\int e^{i\left(x'\dfrac{f_x}{a} + y'\dfrac{f_y}{b}\right)} g(x',y') \,dx'\,dy'\\ = \dfrac{1}{|ab|} G\left(\dfrac{f_x}{a},\dfrac{f_y}{b}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/836419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fair die being rolled repeatedly A fair die is rolled repeatedly, and let $X$ record the number of the roll when the 1st $6$ appears. A game is played as follows. A player pays \$1 to play the game. If $X\leq 5$, then he loses the dollar. If $6 \le X \le 10$, then he gets his dollar back plus \$1. And if $X > 10$, then he gets his dollar back plus \$2. Is this a fair game? If not, whom does it favour? I think that it is not a fair game because it solely depends on whether the first number is $6$, which is a $\frac{1}{6}$ chance. But I don't know how to prove this further.
The roll on which the first success occurs is modeled by the geometric distribution: here $P(X=k)=\left(\frac{5}{6}\right)^{k-1}\cdot\frac{1}{6}$ for $k = 1, 2, 3, \ldots$, where the probability of success is the probability of seeing a $6$ on a six-sided die. So you have to figure out the expectation of the random variable with net payout: $$ \begin{matrix} -1 & P(X \leq 5)\\ 1 & P(6 \leq X \leq 10)\\ 2 & P(X \geq 11) \end{matrix} $$ If the expectation of the payout is 0, the game is fair; otherwise someone expects to make a profit in the long run. Update To answer the comment. Since this is discrete, you'll have to calculate three probabilities. The first is $P(X \leq 5)$, whose event is made up of $X = 1, X = 2, X = 3, X = 4,$ and $X = 5$; for all of these possibilities, the player loses a dollar. For $X = 6, X = 7,\ldots,X = 10$, the player gains one dollar. At this point we have the sum of all the probabilities for $X \leq 10$, so the probability that $X \geq 11$ is just 1 (total probability of everything) minus the running sum from the first two sets. Explicitly, $p_1 = P(X\le5) = 1-(5/6)^5$, $p_2 = P(6\le X\le10) = (5/6)^5-(5/6)^{10}$, and $p_3 = 1 - p_1 - p_2 = (5/6)^{10}$. Now we know that $p_1$ of the time the player loses a dollar, $p_2$ of the time the player wins a dollar, and $p_3$ of the time the player wins two dollars. Now, is the expectation $-p_1 + p_2 + 2p_3$ equal to 0 or not? That determines if it is "fair" or not.
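A quick exact computation of that expectation (a sketch; variable names are mine):

    # Exact expected net payout for the game described above.
    p_le5 = 1 - (5/6)**5              # first six appears on rolls 1..5
    p_6to10 = (5/6)**5 - (5/6)**10    # first six appears on rolls 6..10
    p_ge11 = (5/6)**10                # no six in the first 10 rolls
    ev = -1 * p_le5 + 1 * p_6to10 + 2 * p_ge11
    print(ev)  # about -0.035, so the game slightly favours the house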
{ "language": "en", "url": "https://math.stackexchange.com/questions/836488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why can't we define more elementary functions? $\newcommand{\lax}{\operatorname{lax}}$ Liouville's theorem is well known and it asserts that: The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions. The problem this raises for me is: what is an elementary function? Who defines them? How do we define them? Someone can, for example, say that there is a function which is called $\lax(\cdot)$ which is defined as: $$ \lax\left(x\right)=\int_{0}^{x}\exp(-t^2)\mathrm{d}t. $$ Then, we can say that $\lax(\cdot)$ is a new elementary function much like $\exp(\cdot)$ and $\log(\cdot)$, $\cdots$. I just do not get elementary functions and what the reasons are to define certain functions as elementary. Maybe I should read some papers or books before posting this question. Should I? I just would like to get some help from you.
Elementary functions are finite sums, differences, products, quotients, compositions, and $n$th roots of constants, polynomials, exponentials, logarithms, trig functions, and all of their inverse functions. The reason they are defined this way is because someone, somewhere thought they were useful. And other people believed him. Why, for example, don't we redefine the integers to include $1/2$? Is this any different than your question about $\mathrm{lax}$ (or rather $\operatorname{erf}(x)$)? Convention is just that, and nothing more.
{ "language": "en", "url": "https://math.stackexchange.com/questions/836556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54", "answer_count": 5, "answer_id": 1 }
Finding parametric equations for the curved path of a particle around a half-circle I have a question about parametric equations. So far I've learned how to find the parametric equations for a straight line, I know about replacing $x^2$ and $y^2$ in the equation of the unit circle, but I'm having trouble with this particular problem. Note that this seems to be a popular problem, there are solutions to it plastered over the Internet, including on StackExchange, but the explanations are not in-depth enough, for a newbie of my calibre, about the process behind these solutions; that's what I'm struggling with. The question is as follows: “Find parametric equations for the path of a particle that moves halfway around the circle $x^2 + (y - 1)^2 = 4$ counter-clockwise, starting at the point $(0, 3)$.” Here is a screenshot of the graph: http://i.imgur.com/Y53jzKQ.png Could somebody please help me out?
A generic circle of radius $r$ centered at the origin is given parametrically by $\alpha(t) = \langle r\cos(t), r\sin(t) \rangle$ such that $0 \leq t \leq 2\pi$. This is pretty straightforward. Simply draw a circle centered at the origin and draw a line segment from its center to an arbitrary point on its perimeter. Call the angle the segment makes with the $x$-axis $t$. Then, simply think about how $\sin$ and $\cos$ are defined geometrically in terms of a right triangle. It follows immediately that the $y$-coordinate of your point will be given by $r\sin(t)$, and the $x$-coordinate of your point will be given by $r\cos(t)$. You can then translate the circle as many units as you'd like in the $x$ or $y$ direction by adding the appropriate number of units to either the $x$ or the $y$ component of the parametrization. I.e. $\alpha(t) = \langle x + r\cos(t), y + r\sin(t) \rangle$. Once you have a parameterization for the entire circle centered at the point you'd like, think about what values of $t$ that yield only the half you want. (Remember that $t$ is the angle between the $x$-axis and a segment from an arbitrary point on your half-circle to the origin).
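For the record, carrying the recipe out on this specific problem (my write-up of the final step): the circle has centre $(0,1)$ and radius $2$, so $$x = 2\cos t,\qquad y = 1 + 2\sin t.$$ The start point $(0,3)$ corresponds to $t=\pi/2$, and moving counter-clockwise means increasing $t$, so the half-circle is traced for $$\frac{\pi}{2}\le t\le\frac{3\pi}{2},$$ ending at the point $(0,-1)$.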
{ "language": "en", "url": "https://math.stackexchange.com/questions/836644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A machine has $9$ switches. Each switch has $3$ positions. How many different settings are possible? A machine has $9$ switches. Each switch has $3$ positions. $(1)$ How many different settings are possible? Each switch has $3$ different settings and we have $9$ total. So, $3^9=19,683$. Now, the problem I am facing is what the heck the second part is asking. $(2)$ Answer $(1)$ if each position is used $3$ times. How would you interpret this? I see it as: if we have a certain position for the $9$ switches, say, ABCABCABC, then that position is accounted for $2$ additional times $(3$ times total$)$. Thus, we would multiply part $(1)$ by $3$ because we are simply tripling our possibilities? Any ideas on how part $(2)$ should be approached would be great!
Hint for 2: How many different words can you spell with the letters AAABBBCCC?
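(For the record, the hint resolves to the multinomial coefficient $\dfrac{9!}{3!\,3!\,3!}=1680$: settings in which each position is used three times correspond exactly to arrangements of the letters AAABBBCCC.)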
{ "language": "en", "url": "https://math.stackexchange.com/questions/836736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Average number of Dyck words in a Dyck word Given an integer $n$, how many Dyck words are a substring of a Dyck word of size $n$, on average? For example, if $n=2$, then the Dyck words of size $2$ are: * *[ ] [ ] *[ [ ] ] (1) contains two strict "sub-Dyck words": [ ] (with the first two parentheses) and [ ] (with the last two parentheses). And the original [ ] [ ]. The total is 3 (2) contains only one strict "sub-Dyck word": [ ]. And the original [ [ ] ]. The total is 2 So for $n=2$ the answer is 2.5; of course it is harder to compute when $n$ gets bigger. Does anyone have an idea how to find a general formula for this problem?
So, I've coded a little Python program that computes, for each $n$, the total number of "sub-Dyck words" in all Dyck words of semi-length $n$. Here is the output for $n$ ranging from 1 to 13: 1, 5, 21, 84, 330, 1287, 5005, 19448, 75582, 293930, 1144066, 4457400, 17383860, which is known as A002054 in the OEIS. And that's even comment number 7. Bingo! So the average I was looking for is $\frac{(2n+1)\times n}{n+2}$ (the total $\binom{2n+1}{n-1}$ divided by the Catalan number $C_n$). Yet I don't consider the question solved, as I haven't found a proof of it.
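For reference, here is a small brute-force counter along the lines described (my own reconstruction, not the original program):

    from itertools import product

    def dyck_words(n):
        """All balanced words made of n '[' and n ']'."""
        for w in product('[]', repeat=2 * n):
            depth = 0
            for c in w:
                depth += 1 if c == '[' else -1
                if depth < 0:
                    break
            else:
                if depth == 0:
                    yield ''.join(w)

    def balanced(s):
        depth = 0
        for c in s:
            depth += 1 if c == '[' else -1
            if depth < 0:
                return False
        return depth == 0

    def total_subdyck(n):
        """Total count of contiguous Dyck substrings (with multiplicity)
        over all Dyck words of semi-length n, the word itself included."""
        return sum(balanced(w[i:j])
                   for w in dyck_words(n)
                   for i in range(2 * n)
                   for j in range(i + 2, 2 * n + 1))

    print([total_subdyck(n) for n in range(1, 6)])  # [1, 5, 21, 84, 330]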
{ "language": "en", "url": "https://math.stackexchange.com/questions/836808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
How to show that $\int \limits_{-\infty}^{\infty} (t^2+1)^{-s} dt = \pi^{1/2} \frac{ \Gamma(s-1/2)}{\Gamma(s)}$ I want to compute the integral $$ \int \limits_{-\infty}^{\infty} (t^2+1)^{-s} dt $$ for $s \in \mathbb{C}$ such that the integral converges ($\mathrm{Re}(s) > 1/2$ I think) in terms of the Gamma function. If I'm not mistaken, the answer I'm looking for is $$ \int \limits_{-\infty}^{\infty} (t^2+1)^{-s} dt = \pi^{1/2} \frac{\Gamma(s-1/2)}{\Gamma(s)} $$ so my question is how to prove the above formula? Motivation Since people sometimes ask for motivation, I'm reading a paper in which the author gives the evaluation of a certain Fourier coefficient but doesn't show the computations, just states the result. I was able to reduce the problem of checking the author's evaluation of the Fourier coefficient to proving the above formula for the integral. Thank you for any help.
Hint: Let $x=\dfrac1{t^2+1}$ and then recognize the expression of the beta function in the new integral. But first, using the parity of the integrand, write $\displaystyle\int_{-\infty}^\infty f(t)~dt~=~2\int_0^\infty f(t)~dt$.
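Carrying this out (the substitution $u=t^2$ is an equivalent route to the hint's $x=\frac{1}{t^2+1}$, landing directly on a standard Beta integral): $$\int_{-\infty}^{\infty}(t^2+1)^{-s}\,dt=2\int_{0}^{\infty}(t^2+1)^{-s}\,dt=\int_{0}^{\infty}u^{-1/2}(1+u)^{-s}\,du=B\!\left(\tfrac12,\,s-\tfrac12\right)=\frac{\Gamma(\tfrac12)\,\Gamma(s-\tfrac12)}{\Gamma(s)}=\pi^{1/2}\,\frac{\Gamma(s-\tfrac12)}{\Gamma(s)},$$ using $dt=\tfrac12u^{-1/2}\,du$ and the representation $B(p,q)=\int_0^\infty u^{p-1}(1+u)^{-(p+q)}\,du$, valid here for $\operatorname{Re}(s)>\tfrac12$.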
{ "language": "en", "url": "https://math.stackexchange.com/questions/836871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Low Level Books on Conjectures/Famous Problems I am currently an undergraduate math/CS major with coursework done in Linear Algebra, Vector Calculus (that covered a significant amount of Real 1 material), Discrete Math, and about to take courses in Algebra and Real Analysis 1. I was wondering if there are any books about famous problems (Riemann, Goldbach, Collatz, etc) and progress thus far that are accessible to me. If possible, I would prefer them to have more emphasis on the math rather than the history. As an example, I enjoyed reading Fermat's Enigma by Simon Singh but was disappointed in how it glossed over most of the math. If there are any suggestions I would greatly appreciate it.
The Goldbach Conjecture by Yuan Wang. From the book's description: A detailed description of a most important unsolved mathematical problem - the Goldbach conjecture. Raised in 1742 in a letter from Goldbach to Euler, this conjecture attracted the attention of many mathematical geniuses. Several great achievements were made, but only until the 1920s. This work gives an exposition of these results and their impact on mathematics, in particular, number theory. It also presents (partly or wholly) selections from important literature, so that readers can get a full picture of the conjecture.
{ "language": "en", "url": "https://math.stackexchange.com/questions/836945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Two 3-cycles generate $A_5$ I want to solve the following exercise, from Dummit & Foote's Abstract Algebra Let $x$ and $y$ be distinct 3-cycles in $S_5$ with $x \neq y^{-1} $. Prove that if $x$ and $y$ do not fix a common element of $\{1,2,3,4,5\}$ then $\langle x,y\rangle =A_5$. I know that using brute force one can find $31$ distinct permutations in $\langle x,y \rangle$, and then Lagrange's Theorem finishes the proof. However, is there a more elegant way to go about this? (Preferably without using the fact that $A_5$ is simple, since it wasn't proven up to this point in the text.) Thank you!
Let $H = \langle x,y \rangle \leq A_{5}.$ Then $H$ has more than one Sylow $3$-subgroup, so has either $4$ or $10$ Sylow $3$-subgroups. If $H$ has $10$ Sylow $3$-subgroups, then $|H|$ is divisible by $30.$ If $|H| = 60,$ we are done. If $|H| = 30,$ then $H \lhd A_{5},$ and furthermore, since $H$ already contains $20$ elements of order $3,$ it can't have $6$ Sylow $5$-subgroups. Hence $H$ has a unique Sylow $5$-subgroup $S$, so $S^{g} = S$ for each $g \in A_{5}$ (since $S^{g} \leq H).$ But then $S \lhd A_{5},$ and $A_{5}$ has a unique Sylow $5$-subgroup, a contradiction. Hence we may now suppose that $H$ has $4$ Sylow $3$-subgroups. Hence $12$ divides $|H|.$ Suppose that $H \neq A_{5}.$ Then $|H| = 12.$ Now $H$ contains $8$ elements of order $3,$ so $H$ has a normal Sylow $2$-subgroup. Now the action of $A_{5}$ on $5$-points may be realised by (or, if you prefer, identified with) its conjugation action on its five Sylow $2$-subgroups. Now $H$ is the stabilizer of one of these Sylow $2$-subgroups in that action, and $x$ and $y$ have a common fixed point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/837042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
A problem with my reasoning in a problem about combinations I was given the following problem to solve: A committee of five students is to be chosen from six boys and five girls. Find the number of ways in which the committee can be chosen, if it includes at least one boy. My method was $\binom{6}{1}\binom{10}{4}= 1260$, using the logic of choosing $1$ boy, then choosing the rest. This was wrong, as the answer was $\binom{11}{5}-\binom{5}{5}= 461$. The correct answer's logic was committee with no restrictions – committee with no girls. Why was my method wrong? Please help...
Your idea is to choose one boy and then four others, which might include further boys. Nice idea, but unfortunately it doesn't work: the reason why should be clear from the following choices. * *Choose the boy $B_1$, then four more people $G_1,G_2,G_3,B_2$. *Choose the boy $B_2$, then four more people $G_1,G_2,G_3,B_1$. These are two of the committees you have counted. . . . . . BUT they are actually the same committee, so you should not have counted it twice. Similarly, by following your method, a committee $B_1,B_2,B_3,G_1,G_2$ would be counted three times, and so on. This is why your method gives the wrong answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/837119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the remainder after dividing $(177 + 10^{15})^{166}$ by $1003 = 17 \cdot 59$ What is the remainder after dividing $(177 + 10^{15})^{166}$ by $1003 = 17 \cdot 59$? I have made several observations about the problem but I cant see how they lead me to an answer. Observation one: $\phi(1003) = 16 \cdot 58 =8 \cdot 116$ (Euler totient function). Observation two: $$10^{15} \mod 1003 = 1000^5 \mod 1003 = (-3)^5 \mod 1003 = -243 \mod 1003.$$ So $$177 + 10^{15} \mod 1003 = 177 - 243 \mod 1003= - 66 \mod 1003 = -2\cdot 3 \cdot 11 \mod 1003.$$ Observation three: $(\mathbb Z/1003 \mathbb Z)\cong (\mathbb Z/17\mathbb Z)\times (\mathbb Z/59\mathbb Z)$ by the chinese remainder theorem. I am really struggling to use any of these to form some sort of coherent argument. I thought maybe I could use observation one and use Fermat's little theorem to conclude $((177 + 10^{15})^{166})^8 \mod 1003= 1 \mod 1003$, but then I should first check that $(177 + 10^{15})^{166} \in (\mathbb Z/1003 \mathbb Z)^*$. Even then, the problem would be reduced to looking for elements in $a\in(\mathbb Z/1003 \mathbb Z)^*$ such that $a^8 = 1$, for instance $a = 1002$ would work, but then I wonder if this is the only element in $(\mathbb Z/1003 \mathbb Z)^*$ such that its eighth power is unit... probably not. If anyone can make sense of my observations and use them to find an answer, or if anyone has a different method to see the answer I would be very happy to hear it! Thanks in advance!
Write $x=177+10^{15}=177+1000^5$. Modulo $17$ we have $$x=7+(-3)^5=-236=2\quad\Rightarrow\quad x^4=-1\quad\Rightarrow\quad x^{166}=x^6=-4\ .$$ Modulo $59$ we get $$x=-3^5\quad\Rightarrow\quad x^{166}=3^{830}=3^{18}=-2\ .$$ Now use the Chinese Remainder Theorem.
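Finishing the Chinese Remainder step (my own arithmetic, for completeness): we need $N$ with $N\equiv-4\equiv13\pmod{17}$ and $N\equiv-2\equiv57\pmod{59}$. Writing $N=57+59k$ and reducing mod $17$ gives $6+8k\equiv13$, i.e. $8k\equiv7\pmod{17}$; since $8\cdot15\equiv1\pmod{17}$, we get $k\equiv15\cdot7\equiv105\equiv3\pmod{17}$, so $N=57+3\cdot59=234$. The remainder is $234$.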
{ "language": "en", "url": "https://math.stackexchange.com/questions/837192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove $\frac{\sin\theta}{1-\cos\theta} \equiv \csc\theta + \cot\theta$ This must be proved using elementary trigonometric identities. I have not been able to come to any point which seems useful enough to include in this post.
Let's get rid of the trigonometry stuff first: $$s=\sin(\theta),~~c=\cos(\theta),~~\csc(\theta)=\frac{1}{s},~~\cot(\theta)=\frac{c}{s}$$ Now we are verifying this equation: $$\frac{s}{1-c}=\frac{1}{s}+\frac{c}{s}$$ $$\Leftrightarrow$$ $$\frac{s}{1-c}-\frac{1}{s}-\frac{c}{s}=0$$ multiply by $(1-c)\neq 0$ $$s-\frac{1-c}{s}-\frac{(1-c)c}{s}=0$$ multiply by $s\neq 0$ $$s^2-(1-c)-(1-c)c = s^2-1+c-c+c^2 =0\Leftrightarrow s^2+c^2=1$$ Since $s=\sin(\theta)$ and $c=\cos(\theta)$ always satisfy $s^2+c^2=1$, and every step above is reversible (we only multiplied by the nonzero quantities $1-c$ and $s$), the original identity holds wherever both sides are defined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/837281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to understand the variance formula? How is the variance of the Bernoulli distribution derived from the definition of variance?
The PMF of the Bernoulli distribution is $$ p(x)=p^x(1-p)^{1-x}\qquad;\qquad\text{for}\ x\in\{0,1\}, $$ and the $n$-th moment of a discrete random variable is $$ \text{E}[X^n]=\sum_{x\,\in\,\Omega} x^np(x). $$ Let $X$ be a random variable that follows a Bernoulli distribution, then \begin{align} \text{E}[X]&=\sum_{x\in\{0,1\}} x\ p^x(1-p)^{1-x}\\ &=0\cdot p^0(1-p)^{1-0}+1\cdot p^1(1-p)^{1-1}\\ &=0+p\\ &=p \end{align} and \begin{align} \text{E}[X^2]&=\sum_{x\in\{0,1\}} x^2\ p^x(1-p)^{1-x}\\ &=0^2\cdot p^0(1-p)^{1-0}+1^2\cdot p^1(1-p)^{1-1}\\ &=0+p\\ &=p. \end{align} Thus \begin{align} \text{Var}[X]&=\text{E}[X^2]-\left(\text{E}[X]\right)^2\\ &=p-p^2\\ &=\color{blue}{p(1-p)}, \end{align} or \begin{align} \text{Var}[X]&=\text{E}\left[\left(X-\text{E}[X]\right)^2\right]\\ &=\text{E}\left[\left(X-p\right)^2\right]\\ &=\sum_{x\in\{0,1\}} (x-p)^2\ p^x(1-p)^{1-x}\\ &=(0-p)^2\ p^0(1-p)^{1-0}+(1-p)^2\ p^1(1-p)^{1-1}\\ &=p^2(1-p)+p(1-p)^2\\ &=(1-p)\left(p^2+p(1-p)\right)\\ &=\color{blue}{p(1-p)}. \end{align}
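A quick empirical check of $\text{Var}[X]=p(1-p)$ (a sketch; NumPy assumed available):

    import numpy as np

    p = 0.3
    rng = np.random.default_rng(0)
    x = rng.binomial(1, p, size=1_000_000)  # a million Bernoulli(p) draws
    print(x.var(), p * (1 - p))             # both approximately 0.21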
{ "language": "en", "url": "https://math.stackexchange.com/questions/837356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How can I show the complete symmetric quadratic form has no zeros? The quadratic complete symmetric homogeneous polynomial in $n$ variables $t_1,\ldots,t_n$ is defined to be $$h_2(t_1,\ldots,t_n) := \sum_{1 \leq j \leq k \leq n} t_j t_k = \sum_{j=1}^n t_j^2 + \sum_{j<k} t_j t_k.$$ For example, in one variable we have $h_2(t_1) = t_1^2$, in two variables $$h_2(t_1,t_2) = t_1^2 + t_2^2 + t_1 t_2,$$ and in three $$h_2(t_1,t_2,t_3) = t_1^2 + t_2^2 + t_3^2 + t_1 t_2 + t_2 t_3 + t_3 t_1.$$ By a theorem of Jacobi, any quadratic form is conjugate (by an orthogonal transformation) to a diagonal one. It turns out that $h_2(t_1,\ldots,t_n)$ is conjugate to one with one eigenvalue $\frac{n+1}2$ and the others $\frac 1 2$, so there are no nontrivial zeros. I came to this question through trying to see if there are any integer solutions. Is there a more clean (perhaps modular) way of seeing there are no integer solutions to $h_2 = 0$?
Well there is the trivial solution $\tilde{t}=(t_1,t_2,\cdots ,t_n)=(0,0,\cdots ,0)$. Now write $$\displaystyle h_2(\tilde{t})=\frac{1}{2}\left(\sum_{i=1}^{n}t_i^2\right)+\frac{1}{2}\left(\sum_{i=1}^{n}t_i\right)^2\ge 0$$ where equality only occurs when everything is zero. I suppose I understood the question correctly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/837444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Existence of "simple" irreducible polynomial of degree 12 in a finite field Assume that we have a finite field $\mathbb{F}_p$, where $p$ is prime, $p \equiv 1\ (\textrm{mod}\ 4)$ and $p \equiv 1\ (\textrm{mod}\ 3)$. I was looking for irreducible polynomial in a form $X^{12} + a$, where $a \in \mathbb{F}_p$ and it turned out that it's very simple to find one: in all cases I tested it's sufficient to take $a \in \{2,5,6\}$. The problem is that I can't seem to figure out why this is the case. I'd be happy if someone could explain how to show the existence of irreducible polynomial in such form (I guess the reason why small $a$ is sufficient will follow from the reasoning).
$x^{12}-a$ is reducible mod $p=12k+1$ if and only if $a=0$ or $a^k$'s order is a proper divisor of 12. To see this, recall that a monic polynomial of degree $d$ is irreducible mod $p$ if and only if $\gcd(f(x), g_i(x)) = 1$ for all $g_i(x) = x^{p^i}-x$, $1\leq i < d$. Fixing $i$, and working in $\mathbb{Z}_p[x]/(x^{12}-a)$, \begin{align*} g_i(x) &\equiv x^{(12k+1)^i}-x\\ &\equiv x\cdot x^{12ki} \cdot x^{\sum_{j=2}^i (12k)^j \binom{i}{j}}-x\\ &\equiv xa^{ki}-x\\ &\equiv x\left[a^{ki}-1\right]. \end{align*} The third line follows from $$\sum_{j=2}^i (12k)^j\binom{i}{j} = 12\cdot 12k\cdot k\sum_{j=2}^i (12k)^{j-2}\binom{i}{j},$$ so $$x^{\sum_{j=2}^i (12k)^j \binom{i}{j}} = a^{(p-1)\cdot k\sum_{j=2}^i (12k)^{j-2}\binom{i}{j}} = 1^{k\sum_{j=2}^i (12k)^{j-2}\binom{i}{j}} = 1.$$ So if $a^k$'s order properly divides $12$, then $x^{12}-a$ divides $g_i(x)$ for some $i<12$ and $f(x)$ is reducible. Conversely, if $a^k$'s order does not properly divide 12, then $\gcd(x^{12}-a,g_i(x)) = 1$ for all $1\leq i < 12$ and $x^{12}-a$ is irreducible. EDIT: I think the above is correct now. Notice that $a$ being a primitive root is sufficient but not necessary for irreducibility; one example is $a=5, p=1453.$
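One can experiment with this criterion directly (a sketch using SymPy; the values $a=5$, $p=1453$ are the ones mentioned in the edit):

    from sympy import Poly, n_order
    from sympy.abc import x

    p, a = 1453, 5
    k = (p - 1) // 12
    # Since (a^k)^12 = a^(p-1) = 1, the order of a^k divides 12; the criterion
    # says x^12 - a is irreducible exactly when that order is 12 itself.
    print(n_order(pow(a, k, p), p))
    print(Poly(x**12 - a, x, modulus=p).is_irreducible)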
{ "language": "en", "url": "https://math.stackexchange.com/questions/837553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The total number of points of $\mathbb{R}$ at which $f$ attains a local extremum Let $f(x) = \vert x^2-25 \vert$ for all $x \in \mathbb{R}$. The total number of points of $\mathbb{R}$ at which $f$ attains a local extremum is $A$. $1$ $B$. $2$ $C$. $3$ $D$. $4$ What I was thinking: find where $f'(x)=0$, but this only gives $x=0$. Now I'm stuck. Help me!
Recall that a function can have a relative maximum or relative minimum only at those numbers in its domain at which the derivative is undefined or is zero (these numbers are called critical points). Notice that $f'(x) = \begin{cases} -2x & x \in (-5, 5) \\ 2x & x \in (-\infty, -5) \cup (5, \infty) \\ \text{DNE} & x = -5, 5 \end{cases}$ Notice that if we consider $f'$ on the domain in which it is defined, $f'(x) = 0$ means $x = 0$. So our critical points are $x = -5, 0, 5$. Hence option $D$ is out of the question, since there are at most 3 local extrema. Now we check each point one by one: * *($x = 5$): $f(5) = 0$ and $f(5) \leq f(x)$ for all $x \in [4, 6]$. *($x = 0$): $f(0) = 25$ and $f(0) \geq f(x)$ for all $x \in [-1, 1]$. *($x = -5$): $f(-5) = 0$ and $f(-5) \leq f(x)$ for all $x \in [-6, -4]$. Here you could notice that $f$ is symmetric with respect to the $y$-axis and refer to the case where $x = 5$. Hence we have 3 local extrema. This should have been obvious from looking at the graph, which is simple enough to draw pretty quickly!
{ "language": "en", "url": "https://math.stackexchange.com/questions/837595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove the identity... $$\frac{\cos{2x}-\sin{4x}-\cos{6x}}{\cos{2x}+\sin{4x}-\cos{6x}}=\tan{(x-15^{\circ})}\cot{(x+15^{\circ})}$$ So, here's what I've done so far, but don't know what to do next: $$\frac{\cos{2x}-2\sin{2x}\cos{2x}-\cos{6x}}{\cos{2x}+2\sin{2x}\cos{2x}-\cos{6x}}=$$ $$\frac{\cos{2x}-4\sin{x}\cos{x}\cos{2x}-\cos{(4x+2x)}}{\cos{2x}+4\sin{x}\cos{x}\cos{2x}-\cos{(4x+2x)}}=$$ $$\frac{\cos{2x}-4\sin{x}\cos{x}\cos{2x}-\cos{4x}\cos{2x}+\sin{4x}\sin{2x}}{\cos{2x}+4\sin{x}\cos{x}\cos{2x}-\cos{4x}\cos{2x}+\sin{4x}\sin{2x}}=...$$ I have no idea what to do next. I have a big feeling that I'm going in a totally wrong direction. Is there anything I can do with the expression in the beginning?
Using Prosthaphaeresis Formula, $$\frac{\cos2x-\cos6x}{\sin4x}=\frac{2\sin4x\sin2x}{2\sin4x}=\frac{\sin2x}{\dfrac12}$$ $$\implies \frac{\cos2x-\cos6x}{\sin4x}=\frac{\sin2x}{\sin30^\circ}$$ Now apply Componendo and dividendo and again apply Prosthaphaeresis formulae $$\sin C\pm\sin D$$
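Carrying the hint through (my write-up of the remaining steps): componendo and dividendo applied to $\frac{\cos2x-\cos6x}{\sin4x}=\frac{\sin2x}{\sin30^\circ}$ gives $$\frac{\cos2x-\sin4x-\cos6x}{\cos2x+\sin4x-\cos6x}=\frac{\sin2x-\sin30^\circ}{\sin2x+\sin30^\circ}=\frac{2\cos(x+15^\circ)\sin(x-15^\circ)}{2\sin(x+15^\circ)\cos(x-15^\circ)}=\tan(x-15^\circ)\cot(x+15^\circ).$$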
{ "language": "en", "url": "https://math.stackexchange.com/questions/837805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Proving that $\sum^n_{k=1} e^{ik\theta}=\sum^n_{k=1}\cos k\theta +i\sum^n_{k=1}\sin k\theta$. Prove: $$\sum^n_{k=1} e^{ik\theta}=\sum^n_{k=1}\cos k\theta +i\sum^n_{k=1}\sin k\theta$$ Thanks a lot!! I tried: With Euler's identity I can get $\sin x= \dfrac{e^{ix} - e^{-ix}}{2i}$ and the cosine too. But I'm lost trying to develop it.
Given that $$ e^{ik\theta} = \cos(k\theta) + \textbf{i} \sin(k\theta) $$ Then we get $$ \sum_{k=1}^n e^{ik\theta} = \sum_{k=1}^n \Big( \cos(k\theta) + \textbf{i} \sin(k\theta) \Big) $$ which gives $$ \sum_{k=1}^n e^{ik\theta} = \sum_{k=1}^n \cos(k\theta) + \textbf{i} \sum_{k=1}^n \sin(k\theta) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/837887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving by calculation that $\arg(-2) = \pi$ The fact that it is true, seems very obvious, if one draws the complex number $z = (-2 + 0i)$ on the complex plane. The angle is certainly 180 degrees, or pi radians. But how can this be proven by calculation? Using $\arg(z)=\arctan(b/a)$ or even the "extended" version $\arg(z) = \arctan(b/a) + \text{sign}(b)(1-\text{sign}(a))$ gives $0$.
More complicated than $\arctan(y/x)$, yet valid for all $z\ne0$ is $$ \arg(x+iy)=\left\{\begin{array}{cl} 2\arctan\left(\frac{y}{x+\sqrt{x^2+y^2}}\right)&\text{if }y\ne0\text{ or }x\gt0\\[6pt] \pi&\text{otherwise} \end{array}\right. $$ which is based on the identity $$ \tan(\theta/2)=\frac{\sin(\theta)}{1+\cos(\theta)} $$
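A quick sanity check of this formula against the library two-argument arctangent (a sketch; the function name is mine):

    import math

    def arg(x, y):
        """Argument of x + iy for (x, y) != (0, 0), per the formula above."""
        if y != 0 or x > 0:
            return 2 * math.atan(y / (x + math.hypot(x, y)))
        return math.pi  # the negative real axis

    print(arg(-2, 0), math.atan2(0, -2))  # both pi
    print(arg(1, 1), math.atan2(1, 1))    # both pi/4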
{ "language": "en", "url": "https://math.stackexchange.com/questions/837978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Application of Christoffel symbols in differential geometry When self-studying differential geometry, I find my book involves some clumsy, troublesome calculations with Christoffel symbols when proving theorems whose statements don't in fact mention the symbols. I wonder whether the symbols are actually useful for doing calculations or for understanding concepts. Can someone explain to me the role of the Christoffel symbols? Are they really important? Do I really need to go through all of these calculations?
It is possible and highly recommended to learn how to do the calculations without using Christoffel symbols, in a coordinate free manner. Personally, I like the abstract index notation, for instance. On the other hand, a good understanding of the coordinate calculations is very helpful when one attempts to read the legacy papers. The role of the Christoffel symbols is easy to explain: they serve as the components of the connection in a local coordinate patch. More precisely, the Christoffel symbols are the components of the difference tensor between the given connection and the standard connection, which is available in this coordinate patch (and has all the Christoffels vanishing!). Here is my answer to a related question. If you need more references or further explanations, I am happy to add them to my answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is triangle congruence SAS an axiom? I was wondering if there is a way to prove SAS in triangle congruence with Euclidean axioms. Thank you for your help!
The answer to this question is a bit complicated. It's not a straight yes or no. Euclid claims to prove side-angle-side congruence in his Proposition 4, Book 1. He does this by "applying" one triangle to the other. Essentially, this means he moves one triangle until it coincides with the other. Euclid's proof is universally regarded as problematic from a logical standpoint, because there are absolutely no axioms in the Elements that tell you when, or in what ways, a figure can be "applied" to another. There are modern axiom systems that are logically satisfactory, and they fill the gaps in Euclid in different ways. Some systems have axioms involving rigid motions, and in some cases it may be possible to give a proof similar to Euclid's. The best-known modern axiom system intended to replace Euclid's, while staying close to his in spirit, is the one given by Hilbert in 1899 in Grundlagen der Geometrie. Hilbert assumes as an axiom something slightly weaker than the full SAS property. Namely, Axiom IV-6 states that if two triangles have two corresponding sides and the angle between them equal, then they also have their remaining angles equal. From this, and his other axioms, Hilbert then proves that the remaining side must also be equal. So in Hilbert's case, two-thirds of the conclusion of the SAS property is assumed as an axiom, and the remaining one-third is proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Connectedness of $\mathbb R$ Let $\mathbb R$ denote the real line, and let $A, B \subseteq \mathbb R$ be closed and nonempty sets such that $\mathbb {R} \subset A \cup B $. Why is it true that, due to the connectedness of $ \mathbb{R} $, $A \cap B \neq \emptyset $?
If your assumption really were $\mathbb{R} \subset A \cap B$, then since $\mathbb{R} \neq \emptyset$, we would have $\emptyset \neq \mathbb{R} \subset A \cap B$, hence $A \cap B \neq \emptyset$, with no use of the connectedness of $\mathbb{R}$ at all. For the stated hypothesis $\mathbb{R} \subset A \cup B$, connectedness is what does the work: suppose $A \cap B = \emptyset$. Then $A = \mathbb{R} \setminus B$ and $B = \mathbb{R} \setminus A$, so $A$ and $B$ are open as well as closed, nonempty, and they partition $\mathbb{R}$; this would disconnect $\mathbb{R}$, a contradiction. Hence $A \cap B \neq \emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Pullback in morphism of exact sequences Suppose we have the following morphism of short exact sequences in $R$-Mod: $$\begin{matrix}0\to&L&\stackrel{f'} \to& M'&\stackrel{g'}\to &N' & \to 0\\ &\;||&&\downarrow\rlap{\scriptsize\alpha'}&&\;\downarrow\rlap{\scriptsize\alpha}\\ 0\to &L&\stackrel{f}\to& M&\stackrel{g}\to& N& \to 0 \end{matrix} $$ I have to show that the right square is a pullback. What I have done: Suppose we have $K \in R$-Mod and $ \phi : K \to N' $, $\lambda : K \to M $ that satisfy $$\alpha \circ \phi = g \circ \lambda$$ For each $k \in K$ there exists $m' \in M' $ with $g'(m') = \phi(k) $ because $g'$ is surjective. So I defined $z : K \to M' $ with $$z(k) = m'$$ But now I'm stuck; how do I proceed?
If $(m,n') \in M \times_N N'$, choose a lift $m' \in M'$ of $n'$ and consider the image $\tilde{m} \in M$. Then $\tilde{m},m$ have the same image in $N$, hence there is a unique $l \in L$ such that $m=\tilde{m}+f(l)$. Then $m' + f'(l)$ is the unique element of $M'$ which maps to $m$ and to $n'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving Combinatorial Summation: $n!=\sum_{k=0}^n(-1)^k\binom{n}{k}(n-k)^n$ I've been stuck with this question for the last few hours; any help would be appreciated. $$ {\large n! = \sum_{k = 0}^{n}\left(-1\right)^{k}{\,n\, \choose \,k\,} \left(\,n - k\,\right)^{n}} $$ What I did: $\sum_{k=0}^n(-1)^k\binom{n}{k}(n-k)^n=n!\sum_{k=0}^n\frac{(-1)^k(n-k)^n}{k!(n-k)!}.$ So we are left to prove $\sum_{k=0}^n\frac{(-1)^k(n-k)^n}{k!(n-k)!}=1$. I tried doing so using induction, or treating the sum as a geometric sequence (which turned out poorly). Suggestions?
First, ask yourself the following question: $\quad\displaystyle\sum_{k=0}^na^k{n\choose k}x^{n-k}~=~?\quad$ Hint: See binomial theorem. Then apply the following two steps repeatedly: $(1).$ Differentiate both sides with respect to x, and $(2).$ Multiply both sides with x. Notice how, after each two steps, $\bigg(x\dfrac d{dx}\bigg)^k\circ(x-1)^n$ can be rewritten as $(x-1)^{n-k}\cdot P_k(x)$, where $P_k(x)$ is a polynomial of degree k in x and n. Notice also that $P_k(1)=n(n-1)\cdots(n-k+1)$. Try to prove these two observations by induction. Then it is quite evident that $P_n(1)$ will be $n!$
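The identity is easy to spot-check numerically before working through the proof (a sketch):

    from math import comb, factorial

    for n in range(1, 10):
        total = sum((-1)**k * comb(n, k) * (n - k)**n for k in range(n + 1))
        assert total == factorial(n)
    print("verified for n = 1..9")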
{ "language": "en", "url": "https://math.stackexchange.com/questions/838408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
When the approximation $\pi\simeq 3.14$ is NOT sufficent It's common at schools to use $3.14$ as an appropriate approximation of $\pi$. However, here it's stated that for some purposes, $\pi$ should be approximated to $32$ decimal places. I need an example of such a purpose, accessible to a middle school student. Thanks.
The precision required for any number depends on the purpose, as you can see. Taking $\pi$ to be approximately equal to 3.14 is good enough for many tasks. In mechanical engineering, for instance, we can generally accept this approximation because designs take into account a factor of safety, so the small error causes no damage. However, suppose you want to build a spacecraft: then you require things to be far more accurate, and this approximation won't help!
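To make this concrete at a middle-school level (my own back-of-the-envelope numbers): the error in $3.14$ is $\pi-3.14\approx0.0016$, a relative error of about $5\times10^{-4}$. Computing Earth's circumference ($\approx40{,}000$ km) from its diameter with $\pi\approx3.14$ is therefore off by roughly $20$ km, while using $\pi$ to ten decimal places brings the error below a millimetre. Dozens of decimal places only matter in extreme settings, such as testing computer arithmetic or very high-precision scientific computation.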
{ "language": "en", "url": "https://math.stackexchange.com/questions/838467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
$T: V \rightarrow V$ a linear transformation such that $T^2 = I$ and $H_1= \{v \in V | T(v) = v\}\ $ and $H_2= \{v \in V|T(v) = -v\}\ $ Let $V$ be a vector space and $T: V \rightarrow V$ a linear transformation such that $T^2 = I$ and $H_1= \{v \in V | T(v) = v\}\ $ and $H_2= \{v \in V|T(v) = -v\}\ $; then $V = H_1 \bigoplus H_2$. I'm stuck on this problem; please, some help.
First of all it is clear that if $v \in H_1 \cap H_2$, then $v = T(v) = -v$, so $2v = 0$ and, assuming the scalar field has characteristic $\neq 2$ (e.g. $V$ is a real vector space), we must have $v = 0$. It follows that $H_1 \cap H_2 = 0$. Now let $v \in V$ and write $$v = \frac{v+T(v)}{2} + \frac{v-T(v)}{2}.$$ By linearity of $T$ and $T^2 = I$, $T\!\left(\frac{v+T(v)}{2}\right) = \frac{T(v)+v}{2}$, so the first summand lies in $H_1$; likewise $T\!\left(\frac{v-T(v)}{2}\right) = \frac{T(v)-v}{2} = -\frac{v-T(v)}{2}$, which shows the second summand lies in $H_2$. So $v \in H_1 + H_2$. Since it is obvious that $H_1 + H_2 \subset V$, we conclude $V = H_1 \oplus H_2$ and the proof is done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Confusion Regarding the Axiom of Countable Choice My current understanding of the Axiom of Countable Choice is that the following example needs it in order to work: Let $X$ be a countable family of finite sets. Then there exists a choice function $f$ choosing for each $x \in X$ a bijection between $x$ and some natural number $n$. It seems to me that we need to use the Axiom of Countable Choice, and that the finiteness of the elements of $X$ does not change this. Is this indeed the case: is the Axiom of Countable Choice needed to obtain such a function? Why or why not?
Yes, we need the axiom of countable choice for choosing such bijections. If we can choose a bijection for each finite set, then their union is countable, since we can map it into $\omega\times\omega$ in the obvious way. So a countable set of finite sets, even pairs, whose union is not countable is an example of a countable family of finite sets for which we cannot uniformly choose bijections. For example, consider the mathematical realization of Russell's socks example. It is consistent that there is a countable family of pairs that does not admit a choice function. If the union of these pairs were countable, or even linearly orderable, then we could have chosen the least element from each pair. Since we cannot do that (within that particular model of set theory), that union is not countable, and we cannot choose an enumeration for each pair uniformly. It should, perhaps, be pointed out that in the case of finite sets this is weaker than the axiom of countable choice. This is not a trivial theorem, but an important one nonetheless. Countable choice implies that countable unions of countable sets are countable, which implies that every countable family of finite sets admits a choice function (this is equivalent to your question about choosing enumerations). None of the implications is reversible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find interval with function that solves ODE $y'(x)=1+(y(x))^2$ Let $g\in C^1(\mathbb{R})$ with $g'\gt 0$ and $g(0)=0$. Show that for the differential equation $$\begin{cases}y'(x) & = \dfrac{1}{g'(y(x))} \\[8pt] y(0) & = 0 \\\end{cases}$$ there exists exactly one non-empty open interval $I\subseteq\mathbb{R}$ and an $f\in C^1(I)$, such that $f$ solves the differential equation, $0\in I$ and $(I,f)$ is a maximal solution.
Hint: as $g'> 0$, the chain rule gives $\frac{d}{dx}\,g(y(x)) = g'(y(x))\,y'(x) = 1$, so the equation is equivalent to $$ g(y(x)) = x + g(y(0)) = x, \\ y(0) = 0. $$ Since $g$ is strictly increasing, $I=g(\mathbb{R})$ is a non-empty open interval containing $0=g(0)$, and $f=g^{-1}\colon I\to\mathbb{R}$ is the unique maximal solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
very strange phenomenon: $f(x,y)=x^4-6x^2y^2+y^4$ integral goes wild I am going over my lecture notes in preparation for an exam and I saw something a bit strange that I would like someone to explain. Look at the function $f(x,y) = x^4-6x^2y^2+y^4$; if we convert it to polar coordinates, we get $f(r,\theta)=r^4\cos(4\theta)$. The weird thing about this function is that it has 2 different integrals over the same area. What I mean: if we calculate the integral $$ \lim_{n \to \infty}\int_{0}^{2\pi} \int_{0}^{n} r^5\cos(4\theta)\,dr\, d\theta$$ we will see it is equal to zero. However, if we calculate $$\lim_{n \to \infty}\int_{-n}^{n} \int_{-n}^{n} \left(x^4-6x^2y^2+y^4\right)dx\,dy = \lim_{n \to \infty}\int_{-n}^{n} \left(\frac{2n^5}{5}-4n^3y^2+2ny^4\right) dy=\lim_{n \to \infty} -\frac{16}{15}n^6 = -\infty$$ How come the integral is so different based just on how we choose to represent the same area? (It is $\mathbb R^2$ in both cases: first a giant circle, in the second case a giant square.) So if asked what $$\iint_{\mathbb R^2} \left(x^4-6x^2y^2+y^4\right)dx\,dy$$ is, is the correct thing to say that it does not exist?
The integrand is not absolutely integrable over the entire plane $\mathbb R^2$. What you are doing by changing the variables is analogous to rearranging the terms of a non-absolutely convergent series: you can get more than one "answer", but in fact none of them is valid. So yes, the correct thing to say is that $\iint_{\mathbb R^2} \left(x^4-6x^2y^2+y^4\right)dx\,dy$ does not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving polynomial limit theorems I am pretty confused on this math question. It is a two-parter but I'm not sure what part a is asking me, perhaps someone on StackExchange could help. The question reads as follows: (a) If p is a polynomial, prove, using limit theorems, that $$\lim_{x\to a}p\left(x\right) = p\left(a\right)$$ (b) Use the result in (a) to evaluate $$\lim_{x\to 0^-}\left(x^3 - 8x^2 + 2x - 1\right)$$ Could somebody give me some hints about 'proving' part a with limit theorems? I was under the impression that it was a limit theorem. Because I have a polynomial wouldn't I just replace the value of $x$ with $a$? What else can I say about it? And wouldn't I just replace $x$ with $0$ in part b?
Any polynomial $p$ is continuous on $\Bbb R$, and this is proved directly from the limit theorems: $\lim_{x\to a}c=c$ for a constant $c$ and $\lim_{x\to a}x=a$, so by the product rule for limits (and induction on $k$) $\lim_{x\to a}x^k=a^k$, and then by the sum and constant-multiple rules, for $p(x)=c_nx^n+\cdots+c_1x+c_0$, $$\lim_{x\to a}p(x)=c_na^n+\cdots+c_1a+c_0=p(a).$$ So to find the limit just evaluate the polynomial at $a$; for part (b), $\lim_{x\to 0^-}(x^3-8x^2+2x-1)=-1$ (a one-sided limit is covered by the same argument).
{ "language": "en", "url": "https://math.stackexchange.com/questions/838900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability that a card drawn is King on condition that the card is a Heart From a standard deck of 52 cards, what is the probability that a randomly drawn card is a King, on condition that the card drawn is a Heart? I used the conditional probability formula and got: Probability that the card is a King AND a Heart: $\frac{1}{52}$ Probability that the card is a Heart: $\frac{13}{52}$ So: $\frac{\frac{1}{52}}{\frac{13}{52}} = \frac{1}{13}$. Is this correct?
We want to work out $P(king|heart)$. $P(heart)=1/4$ $P(king \& heart)=1/52$ Conditional probability formula: $P(A|B)=P(A \& B)/P(B)$. So substituting into this formula we get: $P(king | heart) = P(king \& heart) / P(heart) = (1/52)/(1/4) = 4/52 = 1/13$ as required. So yes, you are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/838989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Help understanding meromorphic Herglotz functions A meromorphic function $f$ is called a meromorphic Herglotz function if $\mathrm{Im}(z)>0$ implies $\mathrm{Im}(f(z))>0$. I need to prove that all the poles and zeros of $f$ are in $\mathbb{R}$. Moreover, each pole and zero is simple, and the poles and zeros alternate. There is a proof here, but I can't understand why $\operatorname{arg}(f)$ takes all the values in $[0,2\pi)$ and why that implies that all the zeros and poles are in $\mathbb{R}$.
If a meromorphic function $f$ has a pole or zero at $z_{0}$, then, for a unique non-zero integer $n$, $$ f(z) = (z-z_{0})^{n}g(z) $$ where $g$ is holomorphic near $z_{0}$ with $g(z_{0})\ne 0$. Then $$ f(z) = (z-z_{0})^{n}g(z_{0})+(z-z_{0})^{n+1}\left[\frac{g(z)-g(z_{0})}{z-z_{0}}\right]. $$ By choosing $z=re^{i\theta}+z_{0}$ with $r$ small enough, you can arrange for the first term to dominate so that the image of $C_{r}=\{ re^{i\theta}+z_{0} : 0 \le \theta \le 2\pi \}$ is very nearly circular. In particular, since the dominant term $(z-z_{0})^{n}g(z_{0})$ winds $|n|\neq 0$ times around the origin as $\theta$ runs through $[0,2\pi]$, $\arg f$ takes every value in $[0,2\pi)$ on $C_{r}$. Now if $z_{0}$ lay in the open upper half-plane, then for small $r$ the whole circle $C_{r}$ would lie in the upper half-plane, yet $f(C_{r})$ would contain points of negative imaginary part, contradicting $\operatorname{Im}f>0$ there. So no zero or pole lies in the open upper half-plane; the open lower half-plane is excluded by the reflection property $f(\bar z)=\overline{f(z)}$, which is part of the usual definition of a Herglotz function. Hence all zeros and poles are real.
{ "language": "en", "url": "https://math.stackexchange.com/questions/839075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluating trigonometric functions How would I evaluate cot 30 + cot 60? I know that cot 30 = 1/tan30 and cot60 = 1/tan60. The answer must have a rational denominator where relevant. I have tried adding them like normal fractions after evaluating, but got the incorrect answer. Any relevant online reading material would be appreciated thanks.
Well, $\tan(a)=\sin(a)/\cos(a)$, so $\cot(a)=\cos(a)/\sin(a)$. So you want $$\cot(30^\circ)+\cot(60^\circ)=\frac{\cos(30^\circ)}{\sin(30^\circ)} + \frac{\cos(60^\circ)}{\sin(60^\circ)} = \frac{\sqrt{3}/2}{1/2} + \frac{1/2}{\sqrt{3}/2} = \sqrt{3} + \frac{1}{\sqrt{3}} = \frac{3}{\sqrt{3}}+\frac{1}{\sqrt{3}} = \frac{4}{\sqrt{3}} = \frac{4\sqrt{3}}{3}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/839220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why is the exponential function on p-adic numbers meaningless? In the notes, page 3, it is said that $e^{2\pi i r y}$ is meaningless if $y$ is a general p-adic number. Why is the exponential function on p-adic numbers meaningless? Thank you very much.
It’s not the exponential function for $p$-adic numbers that’s meaningless; rather it’s the act of multiplying the real number $\pi$ by a nonrational $p$-adic number that’s meaningless. There’s no way of multiplying a real times a $p$-adic unless one of them is rational. On the other hand, there is a $p$-adic exponential function, but it has nothing to do with the case that Conrad is discussing in these notes. It’s defined by the same power series that you learned in Calculus, but when considered as a function on a $p$-adic domain (whether $\mathbb Q_p$ or a complete field extension of $\mathbb Q_p$), its domain of definition is lamentably small, that is $\exp(z)$ converges at $z$ only when $v_p(z)>1/(p-1)$, where $v_p$ is the additive $p$-adic valuation normalized so that $v_p(p)=1$. In the language of absolute values, you need $|z|_p<\bigl(|p|_p\bigr)^{1/(p-1)}$. In particular, you can’t speak of $\exp(1)$, which would be, if it existed, the $p$-adic number corresponding to $e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/839399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to evaluate the following indefinite integral? $\int\frac{1}{x(x^2-1)}dx.$ I need the step-by-step solution of this integral; please help me! I can't solve it! $$\int\frac{1}{x(x^2-1)}dx.$$
$$\frac{1}{x(x^2-1)}=\frac{x}{x^2(x^2-1)}.$$ If we substitute $x^2=z$, then differentiating both sides gives $2x\,dx = dz$, i.e. $x\,dx = \frac{dz}{2}$. The integral becomes $$\int\frac{dz/2}{z(z-1)}=\frac12\int\left(\frac{1}{z-1}-\frac{1}{z}\right)dz=\frac12\log\left|\frac{z-1}{z}\right|+C,$$ and substituting back, $$\int\frac{dx}{x(x^2-1)}=\frac12\log\left|\frac{x^2-1}{x^2}\right|+C.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/839510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
If $B$ is the inverse matrix of $A^2$, show that the inverse of $A$ is $AB$ How do I continue from $A(AB) = (BA)A = I$, and how can we justify it if we don't know that $AB=BA$? EDIT: Also, how can we prove that if $AB=I$ then $BA = I$? This is separate from the question above, but I felt it didn't need a new post.
$A(AB)=(AA)B=A^2 B=I$ and $(BA)A=B(AA)=BA^2=I$. So $AB$ is a right inverse of $A$ and $BA$ is a left inverse of $A$; whenever both exist they coincide, with no commutativity assumed: $$BA=(BA)\bigl(A(AB)\bigr)=\bigl((BA)A\bigr)(AB)=AB.$$ Hence $A$ is invertible and $A^{-1}=AB=BA$. For the EDIT: for square matrices, $AB=I$ gives $\det A\,\det B=1\neq0$, so $A$ is invertible, and then $BA=A^{-1}(AB)A=A^{-1}IA=I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/839608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Can the value of $(-9!)$ be found I saw this question on an fb page and I couldn't solve it. Question: What is the value of $(-9!)$? a)$362800$ b)$-362800$ c) Can not be calculated The first option seems to be incorrect, which leaves $c$, but I can't justify it. Does it have something to do with the gamma function, which would ask for $\int _{ 0 }^{ \infty }{ { x }^{ -10 } } { e }^{ -x }dx$? Why can't it be calculated? Update: I have been given answers that "using the Gamma function, it can't be evaluated". Isn't there some other way to do so?
We have the following property: $n!=\dfrac{(n+1)!}{n+1}$. Hence, $0!=\dfrac{1!}1=1;~(-1)!=\dfrac{0!}{0_\pm}=\pm\infty$, etc. Iterating, the factorial of every negative integer would require a division by zero; this is why the Gamma function has poles at $0,-1,-2,\dots$, and why $(-9)!$ cannot be assigned a finite value by any route. (If instead one reads $-9!$ as $-(9!)$, the value is simply $-362880$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/839705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Is there any way to solve this problem without having to do it by hand? I'm dealing with the following problem in computational programming. I'm trying to find a way to build an algorithm that can quickly resolve the following problem statement. Is there any way to group the relations below, or to find values of y in the following problem statement very efficiently without having to do it by hand? Problem: Given an integer value x, where x > 2, is there any integer value y, where y >= 0, such that x satisfies any of these relations for some integer k, where k > 0? [1]: y = (x - 2k)/(20k) [2]: y = (x - 6k)/(20k) [3]: y = (x - 14k)/(20k) [4]: y = (x - 18k)/(20k) Example #1: If x = 4: From [1]: y = (4 - 2k)/(20k) = 0 for k = 2. Example #2: If x = 6: From [1]: y = (6 - 2k)/(20k) = 0 for k = 3. Example #3: If x = 3: From [1]: y = (3 - 2k)/(20k) ... k = 1 => y = (3 - 2(1))/(20(1)) = 1/20, which would not be an integer ... k = 2 => y = (3 - 2(2))/(20(2)), which would be negative. It would be negative for any k > 1. From [2]: y = (3 - 6k)/(20k) ... It would be negative for any k > 0. From [3]: y = (3 - 14k)/(20k) ... It would be negative for any k > 0. From [4]: y = (3 - 18k)/(20k) ... It would be negative for any k > 0. We can deduce that for x = 3, there is no value y that satisfies any of the above relations.
As stated, the problem has a very simple answer. If $x$ is even, relation [1] is satisfied if you let $y=0$ and $k=x/2$. If $x$ is odd, none of the relations can be satisfied, since they all imply that $x$ is even -- i.e., they can be rewritten as saying $x=2k(10y+r)$ with $r=1$, $3$, $7$, or $9$.
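This observation turns the programming problem into a constant-time test (a sketch; the names are mine):

    def has_solution(x):
        """For integer x > 2, return (relation, y, k) witnessing a solution, or None.

        Each relation rewrites as x = 2k(10y + r) with r in {1, 3, 7, 9},
        so a solution forces x to be even; conversely, for even x,
        relation [1] is satisfied by y = 0, k = x // 2.
        """
        if x > 2 and x % 2 == 0:
            return (1, 0, x // 2)
        return None

    print(has_solution(4))  # (1, 0, 2)
    print(has_solution(6))  # (1, 0, 3)
    print(has_solution(3))  # None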
{ "language": "en", "url": "https://math.stackexchange.com/questions/839780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Operations with complex numbers to give real numbers If: * *$|z|=|w|=1$ *$1 + zw \neq 0$ Then $\dfrac{z+w}{1+zw}$ is real. How can I prove that?
Let $z=e^{ia}$ and $w=e^{ib}$ for some $a,b \in [0,2\pi)$. This takes care of condition 1. Then $1+zw \neq 0$ means $1+e^{i(a+b)} \neq 0$, which is the same as saying $a+b$ is not an odd multiple of $\pi$. Now consider \begin{align*} \dfrac{z+w}{1+zw} & = \frac{e^{ia}+e^{ib}}{1+e^{i(a+b)}} \end{align*} Rationalize the denominator and see what happens.
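Alternatively, a slightly slicker route than rationalizing, using $\bar z=1/z$ whenever $|z|=1$: $$\overline{\left(\frac{z+w}{1+zw}\right)}=\frac{\bar z+\bar w}{1+\bar z\bar w}=\frac{\frac1z+\frac1w}{1+\frac1{zw}}=\frac{(z+w)/(zw)}{(1+zw)/(zw)}=\frac{z+w}{1+zw},$$ so the expression equals its own conjugate and is therefore real.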
{ "language": "en", "url": "https://math.stackexchange.com/questions/839871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integrate $\int_0^1 \ln(x)\ln(b-x)\,\mathrm{d}x$, for $b>1$? Let $b>1$. What's the analytical expression for the following integral? $$\int_0^1 \ln(x)\ln(b-x)\,\mathrm{d}x$$ Mathematica returns the following answer: $$2-\frac{\pi^{2}}{3}b+\left(b-1\right)\ln\left(b-1\right)-b\ln b+\mathrm{i}b\pi\ln b+\frac{1}{2}b\ln^{2}b+b\mathrm{Li}_{2}\left(b\right)$$ which contains the imaginary term $\mathrm{i}b\pi\ln b$. But the actual answer is real, so this term should cancel somehow with the dilogarithm function. But I don't know how to do this.
The following is an evaluation in terms of $ \displaystyle \text{Li}_{2} \left(\frac{1}{b} \right)$, which is real-valued for $b > 1$. $$\begin{align} \int_{0}^{1} \log(x) \log(b-x) \ dx &= \log(b) \int_{0}^{1} \log(x) + \int_{0}^{1}\log(x) \log \left(1- \frac{x}{b} \right) \ dx \\ &= - \log(b) - \int_{0}^{1} \log(x) \sum_{n=1}^{\infty} \frac{1}{n} \left( \frac{x}{b}\right)^{n} \ dx \\ &= - \log(b) - \sum_{n=1}^{\infty} \frac{1}{nb^{n}} \int_{0}^{1} \log(x) x^{n} \ dx \\ &= - \log(b) + \sum_{n=1}^{\infty} \frac{1}{nb^{n}} \frac{1}{(n+1)^{2}} \\ &= - \log(b) - \sum_{n=1}^{\infty} \frac{1}{n+1} \frac{1}{b^{n}} - \sum_{n=1}^{\infty} \frac{1}{(n+1)^{2}} \frac{1}{b^{n}} + \sum_{n=1}^{\infty} \frac{1}{n} \frac{1}{b^{n}} \\ &= - \log(b) - \left(-\frac{\log(1-\frac{1}{b})}{\frac{1}{b}}-1\right) - \left(\frac{\text{Li}_{2}(\frac{1}{b})}{\frac{1}{b}} -1\right) - \log \left(1- \frac{1}{b} \right) \\ &= - \log(b) +2 + (b-1) \log \left(1-\frac{1}{b} \right) - b \ \text{Li}_{2} \left( \frac{1}{b}\right) \end{align}$$ EDIT: The answer can be written in the form $$-b \ \text{Li}_{2} \left( \frac{1}{b}\right) +2 + (b-1) \log(b-1) - b \log(b) $$ which is what Wolfram Alpha returns for specific integer values of $b$ greater than $1$. For non-integer values of $b$ greater than $1$, it manipulates the answer a bit differently.
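The closed form is easy to check numerically (a sketch using mpmath):

    from mpmath import mp, mpf, quad, log, polylog

    mp.dps = 30
    b = mpf(3)
    lhs = quad(lambda x: log(x) * log(b - x), [0, 1])
    rhs = -b * polylog(2, 1/b) + 2 + (b - 1) * log(b - 1) - b * log(b)
    print(lhs)
    print(rhs)  # the two values agree to high precision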
{ "language": "en", "url": "https://math.stackexchange.com/questions/839944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Local minimum implies local convexity? Consider a real function $f$, and suppose it has a local minimum at $a\in \mathbb R$. It typically looks like What hypotheses can be added to $f$ so that there is some $\epsilon >0$ such that $f$ is convex over $(a-\epsilon,a+\epsilon)$ ? The motivation for this question is intuition, but I can't find any valid criterion.
No in the merely continuous case (short of hypotheses that in practice amount to assuming convexity). Yes in the smooth case, provided the second derivative at the minimum is nonzero. In general, local minima have nothing to do with convexity: the function $\sqrt{|x|}$ has a local minimum at $0$ but is not convex; the function $e^x$ is strictly convex everywhere but has no minimum. On the other hand, as pointed out in the comments, if $f$ is continuously differentiable at least twice, and if $f''(x)\neq 0$ at a local minimum $x$, then we have $f'(x)=0$ and $f''(x)>0$; by continuity $f''>0$ on a neighbourhood of $x$, which forces local convexity there. In fact, this issue is the core of the so-called maximum principle, which is very useful in the theory of differential equations: suppose $f$ is a smooth function and you are able to show that at every critical point $f''<0$. Then $f$ has no local minima. (Ex. $x''(t)=e^x\sin(x'(t)e^t) -1$. If $x'=0$ then $x''<0$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/840015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Correlation coefficient calculation Why do we remove the mean of the data while calculating the correlation coefficient of bivariate data in statistics? DotProduct/ProductOfLengthOfVectors should always give a coefficient between -1 and 1 anyway. What does removal of the mean achieve?
Suppose we have finite samples $\{x_1,x_2,\ldots,x_n\}$ and $\{y_1,y_2,\ldots,y_n\}$ from two distributions with sample means: $$\bar{X}=\frac1{n}\sum_{i=1}^{n}x_i, \\\ \bar{Y}=\frac1{n}\sum_{i=1}^{n}y_i,$$ and sample variances $$S_X^2=\frac1{n}\sum_{i=1}^{n}(x_i-\bar{X})^2, \\\ S_Y^2=\frac1{n}\sum_{i=1}^{n}(y_i-\bar{Y})^2,$$ Normally we subtract means to calculate the correlation as $$\rho=\frac{\frac1{n}\sum_{i=1}^{n}(x_i-\bar{X})(y_i-\bar{Y})}{S_XS_Y}. $$ The Cauchy-Schwarz inequality can be applied to show that $|\rho| \leq 1.$ However, if we do not subtract means, then the sample correlation estimate will also fall between $-1$ and $1$ -- as long as we are consistent in not subtracting the means for the estimates of variance. In this case, the Cauchy-Schwarz inequality shows that $$\left|\sum_{i=1}^{n}x_iy_i\right|\leq \sqrt{\sum_{i=1}^{n}x_i^2}\sqrt{\sum_{i=1}^{n}y_i^2},$$ and $$\frac{\sum_{i=1}^{n}x_iy_i}{\sqrt{\sum_{i=1}^{n}x_i^2}\sqrt{\sum_{i=1}^{n}y_i^2}}\leq 1.$$ Either way, as long as the treatment of the means is consistent, the estimated correlation will fall between $-1$ and $1$. However, the choice of subtracting the means may be relevant in terms of estimating the variances without bias.
{ "language": "en", "url": "https://math.stackexchange.com/questions/840115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Analytic continuation of a real function I know that for $U \subset _{open} \mathbb{C}$, if a function $f$ is analytic on $U$ and if $f$ can be extended analytically to the whole complex plane, this extension is unique. Now I am wondering if this is true for real functions. I mean, if $f: \mathbb{R} \to \mathbb{R}$, when is it true that there is an analytic $g$ whose restriction to $\mathbb{R}$ coincides with $f$, and when is such a $g$ unique? Surely $f$ needs to be differentiable, but this might not be sufficient for the existence of such a $g$. edit: I mean, is it easy to see that there is an extension of the real sine, cosine and exponential functions? Thanks a lot.
I think that most mathematicians would say that the functions you mention are restrictions to the real line of functions more naturally defined on the complex plane in the first place. So--yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/840192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 1 }
Can every positive real be written as the sum of a subsequence of dot dot dot I answered this thing Infinite sum of prime reciprocals and now wonder what happens if we do not have such a strong condition as Bertrand's postulate. I have been fiddling with this, not sure either way. Given a sequence $a_1 > a_2 > a_3 > \cdots$ of strictly decreasing positive reals such that $$ a_i \rightarrow 0 \; \; \; \mbox{but} \; \; \sum a_i = \infty, $$ can every positive real number be expressed as the sum of a subsequence of the $a_i?$ The main thing is that we are not given any upper bound on $a_n / a_{n+1}.$ For the reciprocals of the primes, we had an upper bound of $2.$ Note that this is subtler than the thing about rearranging a strictly alternating conditionally convergent series to get anything you specify. That is a matter of overshooting with positive terms, then undershooting with negative terms, back and forth. This one is a little different. I think what I want is a careful proof of this: given two positive real numbers $B<C,$ we can find a finite subsequence of the $a_n$ with sum between $B$ and $C.$
Let $x$ be the desired real number, and let $i_1$ be the smallest positive integer such that $x > a_{i_1}$ (we know such an integer exists because $a_i \to 0$). Now let $i_2$ be the smallest positive integer greater than $i_1$ such that $x - a_{i_1} > a_{i_2}$. Continuing in this way, we obtain a subsequence $(a_{i_j})_{j=1}^{\infty}$ and the sequence of partial sums $(S_k)_{k=1}^{\infty}$ is strictly increasing and is bounded above by $x$, so $S_k \to y \leq x$. Suppose $y < x$ and set $\varepsilon = x - y$. Let $N$ be the smallest positive integer such that $a_N < \varepsilon$ and let $J$ be the largest positive integer such that $i_J < N$ (so $N \leq i_{J+1}$). As $i_{J+1}$ is the smallest positive integer greater than $i_J$ such that $x - a_{i_1} - \dots - a_{i_J} > a_{i_{J+1}}$ and $x - a_{i_1} - \dots - a_{i_J} > x - y = \varepsilon > a_N$, we must have $N = i_{J+1}$. Now note that $x - a_{i_1} - \dots - a_{i_J} - a_{i_{J+1}} > x - y = \varepsilon$, so $i_{J+2} = N+1$, and likewise $i_{J+M} = N+M-1$. But then $$y = \lim_{k\to\infty}\sum_{j=1}^ka_{i_j} = \sum_{j=1}^Ja_{i_j} + \lim_{k\to\infty}\sum_{j=J+1}^ka_{i_j} = \sum_{j=1}^Ja_{i_j} + \lim_{k\to\infty}\sum_{i=N}^ka_i$$ which is a contradiction as the series diverges (because $\sum\limits_{i=1}^{\infty}a_i = \infty$).
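The proof's greedy construction is directly executable. Below is a minimal Python sketch; the target $x=2.3$ and the harmonic terms $a_n=1/n$ are arbitrary choices satisfying the hypotheses (strictly decreasing, tending to $0$, divergent sum).

    import itertools

    def greedy_approx(x, terms, tol=1e-6, max_steps=10**6):
        # Scan the sequence; keep a term whenever the running sum stays below x,
        # exactly as in the choice of i_1, i_2, ... in the proof above.
        total = 0.0
        for _, a in zip(range(max_steps), terms):
            if total + a < x:
                total += a
            if x - total < tol:
                break
        return total

    print(greedy_approx(2.3, (1.0 / n for n in itertools.count(1))))  # ~2.3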
{ "language": "en", "url": "https://math.stackexchange.com/questions/840286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Probability Help (die problem) A die is rolled 20 times. How many different sequences are there in which a) each number 1-6 is rolled exactly three times? My Answer: (20 choose 6)*(3 choose 1) b) the numbers 1-6 are each rolled exactly once in the first six rolls? My Answer: (20 choose 6)*(6 choose 1) c) each number rolled is at least as big as the number rolled directly before it? ex: 11122233334445556666 My Answer: No idea
Each number 1-6 is rolled exactly three times No sequence of twenty rolls can be created out of 18 numbers. Each number 1-6 are each rolled exactly once in the first six rolls Consider just the first six rolls. The first roll can be anything. The second roll can be any of the five remaining numbers, the third roll can be any of the four remaining numbers, etc. There are $6!$ such sequences. The remaining fourteen rolls can be any of the $6^{14}$ sequences. Therefore, there are $6!\ 6^{14}$ sequences of twenty rolls that fit the distinctness requirement for the first six rolls. Each number rolled is at least as big as the number that was rolled directly before it Let $M(\mathscr{l},s)$ be the number of sequences of length $\mathscr{l}$ with $s$ monotonically progressing symbols. We want to find $M(20,6)$. Either the first element of the sequence is ⚅ (six) or it isn't ⚅ (six), so we have: $$ M(20, 6)\ =\ M(19, 6)\ +\ M(20, 5) $$ where $M(19, 6)$ is the number of sequences that start with ⚅ (six), and $M(20, 5)$ is the number of sequences that start with ⚄ (five) or smaller. In general, we can set up a recurrence relation: $$ M(\mathscr{l}, s)\ =\ M(\mathscr{l}-1, s)\ +\ M(\mathscr{l}, s-1)$$ with base cases $$\begin{eqnarray*} M(\mathscr{l}, 1) &= 1 \\ M(1, s) &= s \\ \end{eqnarray*}$$ Graphically illustrated, the problem is: $$ \newcommand{r}[0]{\rightarrow}\newcommand{d}[0]{\downarrow} \begin{array}{c} M(20,6) &\r& M(20,5) &\r& \cdots &\r& M(20,1)=1 \\ \d & & \d & & & & \d \\ M(19,6) &\r& M(19,5) &\r& \cdots &\r& M(19,1)=1 \\ \d & & \d & & & & \d \\ M(18,6) &\r& M(18,5) &\r& \cdots &\r& M(18,1)=1 \\ \d & & \d & & & & \d \\ \vdots & & \vdots & & \ddots & & \vdots \\ M(1,6)=6&\r&M(1,5)=5 &\r& \cdots &\r& M(1,1)=1 \\ \end{array} $$ That's the same as saying that $M(\mathscr{l},s)$ is the number of paths, moving only downward and rightward, from $(\mathscr{l},s)$ to $(0,1)$, if we artificially extend the grid by one row. $$ \newcommand{r}[0]{\rightarrow}\newcommand{d}[0]{\downarrow} \begin{array}{c} M(20,6) &\r& M(20,5) &\r& \cdots &\r& M(20,1)=1 \\ \d & & \d & & & & \d \\ M(19,6) &\r& M(19,5) &\r& \cdots &\r& M(19,1)=1 \\ \d & & \d & & & & \d \\ M(18,6) &\r& M(18,5) &\r& \cdots &\r& M(18,1)=1 \\ \d & & \d & & & & \d \\ \vdots & & \vdots & & \ddots & & \vdots \\ M(1,6)=6&\r&M(1,5)=5 &\r& \cdots &\r& M(1,1)=1 \\ \d & & \d & & & & \d \\ M(0,6)=1&\r&M(0,5)=1 &\r& \cdots &\r& M(0,1)=1 \\ \end{array} $$ The answer to that combinatoric problem is $\left(\begin{array}{c}W+H\\W\end{array}\right) = \left(\begin{array}{c}W+H\\H\end{array}\right)$, where $W$ is the width (the number of right arrows) and $H$ is the height (the number of down arrows in the extended graph). To convince yourself: every path from the top-left to bottom-right corner is a sequence of $W + H$ steps, of which $W$ have to be rightward steps. Therefore, in general, $$M(\mathscr{l},s) = \left(\begin{array}{c}\mathscr{l}+s-1\\s-1\end{array}\right) = \left(\begin{array}{c}\mathscr{l}+s-1\\ \mathscr{l}\end{array}\right)$$ and $M(20,6) = \left(\begin{array}{c}25\\5\end{array}\right) = 53130$.
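The recurrence and the closed form are easy to cross-check mechanically; here is a minimal Python sketch (the memoised recursion mirrors the grid of down/right arrows above):

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def M(l, s):
        # number of monotone (non-decreasing) sequences of length l over s symbols
        if s == 1:
            return 1
        if l == 1:
            return s
        return M(l - 1, s) + M(l, s - 1)

    assert M(20, 6) == comb(25, 5) == 53130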
{ "language": "en", "url": "https://math.stackexchange.com/questions/840349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A simple probability question about two types of cards There is a hidden box that contains two different types of cards--1 Card A and 1 Card B. Card A has both sides red while Card B has one side red and the other blue. If you randomly picked a card and saw a red face, what is the probability that this card is of type A? Using the conditional probability equation, I seem to be getting $0.5$ as the answer; however, the true answer is $2/3$, and I don't see how you can get $2/3$.
If we assume there are the same number of type 'A' cards as type 'B' cards (here, one of each), then if you pull a card at random and look at one side only, there are 4 equally likely results: Red or Red from card A; Red or Blue from card B. By observing a red face we can eliminate one of these options, so we now have only 3 equally likely results. Of these, how many come from Card A? Answer: 2 of the 3 equally likely outcomes use card A, so the probability is $\dfrac{2}{3}$.
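A quick Monte Carlo check of this face-counting argument (a plain-Python sketch; the number of trials is arbitrary):

    import random

    random.seed(1)
    reds = type_a = 0
    for _ in range(10**5):
        card = random.choice("AB")                 # pick a card uniformly
        face = "red" if card == "A" else random.choice(["red", "blue"])
        if face == "red":                          # condition on seeing a red face
            reds += 1
            type_a += (card == "A")
    print(type_a / reds)  # close to 2/3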
{ "language": "en", "url": "https://math.stackexchange.com/questions/840455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Need explanation for simple differential equation I can't figure out this really simple linear equation: $$x'=x$$ I know that the result should be an exponential function with $t$ in the exponent, but I can't really say why. I tried integrating both sides but it doesn't seem to work. I know this is a shameful noob question, but I would be grateful for any hints.
$$x'=x \Rightarrow \frac{dx}{dt}=x \Rightarrow \frac{1}{x}\, dx = 1 \ dt \Rightarrow \int \frac{1}{x}\, dx = \int 1 \ dt \Rightarrow \ln|x|= t + C \Rightarrow x(t)=e^te^{C}.$$ Let $A=e^{C}$; then $x(t) = Ae^t$. (Separating variables tacitly assumes $x\neq 0$; since $\ln|x|=t+C$ gives $|x|=e^Ce^t$, allowing $A$ to be any real constant also recovers the negative solutions and the trivial solution $x\equiv 0$.)
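As a cross-check (a sketch assuming sympy is available), a CAS reproduces the general solution, naming the constant C1:

    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')
    print(sp.dsolve(sp.Eq(x(t).diff(t), x(t))))  # Eq(x(t), C1*exp(t))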
{ "language": "en", "url": "https://math.stackexchange.com/questions/840522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
Probability Of a 4 sided die A fair $4$-sided die is rolled twice and we assume that all sixteen possible outcomes are equally likely. Let $X$ and $Y$ be the result of the $1^{\large\text{st}}$ and the $2^{\large\text{nd}}$ roll, respectively. We wish to determine the conditional probability $P(A|B)$ where $A$ is the event $\max(X,Y)=m$ and $B$ the event $\min(X,Y)=2$, for $m\in\{1,2,3,4\}$. Can somebody first explain this question to me and then explain its answer? I'm having trouble approaching it.
Given $\min(X,Y)=2$, only 5 outcomes are left: $(2,2)$, $(2,3)$, $(2,4)$, $(3,2)$, $(4,2)$. Q: What values can $m$ take?
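Following the hint, a brute-force enumeration (a short Python sketch) lists the conditional distribution of $m=\max(X,Y)$ given $\min(X,Y)=2$:

    from itertools import product
    from collections import Counter

    # all 16 outcomes, restricted to the event min(X, Y) = 2
    kept = [(x, y) for x, y in product(range(1, 5), repeat=2) if min(x, y) == 2]
    counts = Counter(max(x, y) for x, y in kept)
    for m in range(1, 5):
        print(m, counts[m], "/", len(kept))   # P(max = m | min = 2)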
{ "language": "en", "url": "https://math.stackexchange.com/questions/840591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
What's the complexity of expanding a general polynomial? Suppose I have a polynomial of the form $(a_1 x_1+ a_2 x_2+...+ a_m x_m)^n$, where $x_1,...,x_m$ are the independent variables. I want to expand it into a sum of products. What is the complexity, i.e. the big-O bound?
If I understand your question correctly: if you consider the polynomial $X=a_1x_1+\cdots+a_mx_m$, then you want the complexity of computing $X^n$. As far as I know, it depends on the method used; the standard efficient algorithm for $X^n$ is exponentiation by squaring, which is described, for example, here: time complexity of exponentiation by squaring. In the worst case, you keep halving $n$ until you reach $1$ and then immediately reach $n=0$; this step takes $\log_2 n$ halvings (if $n=2^p$, you divide $p=\log_2 n$ times to reach $1$). If instead you want the complexity of expanding the polynomial without having formed $X$ -- that is, the input is a vector $x=[x_1, \ldots, x_m]$ and a vector $a=[a_1, \ldots, a_m]$ -- then you need $m$ multiplications $a_ix_i$ and a summation to form $X$, and then a formula to expand $\left(\sum a_ix_i\right)^n$, which has a different complexity from the one above. I hope it helps.
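For concreteness, a minimal sketch of exponentiation by squaring (written over integers; the same $O(\log n)$ multiplication count applies when the base is the polynomial $X$, each multiplication then being a polynomial product):

    def power(x, n):
        """Exponentiation by squaring: O(log n) multiplications."""
        result = 1
        while n > 0:
            if n & 1:        # current binary digit of n is 1
                result *= x
            x *= x           # square the base
            n >>= 1          # move to the next binary digit
        return result

    assert power(3, 13) == 3**13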
{ "language": "en", "url": "https://math.stackexchange.com/questions/840735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Linear equation: $(A^\top A+B^\top B + D)x=c$ where $A,B$ are structured sparse and $D$ is diagonal. Updated: the goal is to solve $(A^\top A+B^\top B + D)x=c$. Maybe it is not necessary to compute $(A^\top A+B^\top B + D)^{-1}$. Denote $e=(1,1,\ldots,1)^\top\in\mathbb{R}^n$ and $$A=\begin{bmatrix} e & & & \\ & e & & \\ & & \ddots & \\ & & & e \end{bmatrix}$$ and $$B=\begin{bmatrix}\mathrm{diag}(e) & \mathrm{diag}(e) & \cdots & \mathrm{diag}(e)\end{bmatrix}$$ where $e$ appears $n$ times in $A$ and $n$ times in $B$ (i.e. $A,B\in\mathbb{R}^{n\times n^2}$). For example, if $n=3$ we have: $$A=\begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{bmatrix}$$ $$B=\begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$$ Let $D$ be a $n^2\times n^2$ diagonal matrix with positive elements. I'm looking for an efficient way to solve the linear equation $$(A^\top A+B^\top B + D)x=c$$ where $x\in\mathbb{R}^{n^2}$ is the variable. Thank you in advance for any suggestions.
Let $C=A^TA+B^TB+D$. You can write it as $$ C=D+EE^T, \quad E=[A^T,B^T]. $$ The inverse can then be written, using the Woodbury formula, as $$ \begin{split} C^{-1}&=D^{-1}-D^{-1}E(I+E^TD^{-1}E)^{-1}E^TD^{-1}\\ &=D^{-1}-D^{-1}[A^T,B^T]\left\{I+\begin{bmatrix}A\\B\end{bmatrix}D^{-1}[A^T,B^T]\right\}^{-1}\begin{bmatrix}A\\B\end{bmatrix}D^{-1} \end{split}\tag{$*$} $$ Assume that $D=\mathrm{diag}(D_1,\ldots,D_n)$, where each diagonal block $D_i$ is $n\times n$. Since $A$ is block diagonal with the blocks $e^T$ and $B=[I_n,\ldots,I_n]$, the "small" matrix inside the curly brackets can be written as $$ I+\begin{bmatrix}A\\B\end{bmatrix}D^{-1}[A^T,B^T]= \left[\begin{array}{ccc|c} 1+e^TD_1^{-1}e & & & e^TD_1^{-1} \\ & \ddots & & \vdots \\ & & 1+e^TD_n^{-1}e & e^TD_n^{-1} \\ \hline D_1^{-1}e & & D_n^{-1}e & I+D_1^{-1}+\cdots+D_n^{-1} \end{array}\right]. $$ It can hence be written in the form $$\tag{✿} I+\begin{bmatrix}A\\B\end{bmatrix}D^{-1}[A^T,B^T]=\begin{bmatrix}\Phi&\Theta^T\\\Theta&\Sigma\end{bmatrix}, $$ where the matrices $\Phi=\mathrm{diag}(1+e^TD_i^{-1}e)_{i=1}^n$ and $\Sigma=I+D_1^{-1}+\cdots+D_n^{-1}$ are $n\times n$ diagonal matrices. The $n\times n$ matrix $\Theta$ contains in its $i$th column the entries of the diagonal of $D_i^{-1}$. Now you can use standard block inversion formulas; both $\Phi$ and the Schur complement $\Sigma-\Theta\Phi^{-1}\Theta^T$ are invertible (the Schur complement is normally a dense matrix). You can plug the computed inverse back into ($*$) and so recover $C^{-1}$. Note that you actually don't need the inverse explicitly. Assuming that the matrix (✿) has a Cholesky factorisation $LL^T$, all you need is to solve the system $$ LX=\begin{bmatrix}A\\B\end{bmatrix} $$ with multiple right-hand sides. Then $$ C^{-1}= D^{-1}-D^{-1}[A^T,B^T](LL^T)^{-1}\begin{bmatrix}A\\B\end{bmatrix}D^{-1} =D^{-1}-D^{-1}X^TXD^{-1}. $$ To solve a system with $C$, you don't need to compute $X$, but you still need to factorise the matrix (✿). You can use ($*$) to solve a system $Cx=b$ as follows: (1) compute $y=D^{-1}b$; (2) compute $z=E^Ty$; (3) solve the system with (✿) by $Lu=z$ and $L^Tv=u$; (4) compute $s=D^{-1}Ev$; (5) set $x=y-s$. Multiplying with $E$ and $E^T$ is easy as it involves only some summations and copies. NOTE: I'm afraid that there might be no nice closed form except in cases where $D$ is a multiple of the identity (or at least the matrices $D_i$ are multiples of the identity). Otherwise, $\Theta$ is a general $n\times n$ matrix (with positive entries), so the inverse of the Schur complement $\Sigma-\Theta\Phi^{-1}\Theta^T$ cannot be written in a simple form. However, in the mentioned special cases the matrix $\Theta\Phi^{-1}\Theta^T$ has rank one, so one can use the Sherman-Morrison formula to invert the Schur complement. NOTE 2: The matrix (✿) has a special form, so this can be somewhat exploited in the Cholesky factorisation. The matrix can be factorised as $$ \begin{bmatrix}\Phi&\Theta^T\\\Theta&\Sigma\end{bmatrix} = \begin{bmatrix}I&0\\\Theta\Phi^{-1}&I\end{bmatrix} \begin{bmatrix}\Phi&0\\0&\Pi\end{bmatrix} \begin{bmatrix}I&\Phi^{-1}\Theta^T\\0&I\end{bmatrix}, $$ where $\Pi=\Sigma-\Theta\Phi^{-1}\Theta^T$ is the already mentioned Schur complement.
So if $\tilde{L}\tilde{L}^T=\Pi$ is the Cholesky factorisation of $\Pi$, then $$ \begin{bmatrix}\Phi&\Theta^T\\\Theta&\Sigma\end{bmatrix} = \underbrace{\begin{bmatrix}\Phi^{1/2}&0\\\Theta\Phi^{-1/2}&\tilde{L}\end{bmatrix}}_{L} \underbrace{\begin{bmatrix}\Phi^{1/2}&\Phi^{-1/2}\Theta^T\\0&\tilde{L}^T\end{bmatrix}}_{L^T} $$ is the Cholesky factorisation of (✿). Of course, some rearrangements can be made to avoid computing the square roots.
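To make the five-step procedure above concrete, here is a minimal dense numpy sketch (it ignores sparsity and the special block structure of (✿) for clarity; the random test data are arbitrary):

    import numpy as np

    def solve_C(A, B, d, c):
        # Solve (A^T A + B^T B + diag(d)) x = c via the Woodbury identity (*).
        E = np.vstack([A, B]).T                    # E = [A^T, B^T]
        Dinv = 1.0 / d
        S = np.eye(E.shape[1]) + (E.T * Dinv) @ E  # the small matrix (✿)
        L = np.linalg.cholesky(S)
        y = Dinv * c                               # step 1
        z = E.T @ y                                # step 2
        v = np.linalg.solve(L.T, np.linalg.solve(L, z))  # step 3: L u = z, L^T v = u
        s = Dinv * (E @ v)                         # step 4
        return y - s                               # step 5

    n = 3
    rng = np.random.default_rng(0)
    A = np.kron(np.eye(n), np.ones((1, n)))        # block-diagonal e^T blocks
    B = np.kron(np.ones((1, n)), np.eye(n))        # [I, I, ..., I]
    d = rng.uniform(1.0, 2.0, size=n * n)
    c = rng.normal(size=n * n)
    x = solve_C(A, B, d, c)
    print(np.allclose((A.T @ A + B.T @ B + np.diag(d)) @ x, c))  # True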
{ "language": "en", "url": "https://math.stackexchange.com/questions/840828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit with L'Hospital with indeterminate forms I'm trying to find the limit: $$\large \lim_{x\to0}(\sin x)^x$$ What I did was apply L'Hospital's Rule: $$\large \text{let }y =(\sin x)^x\implies \ln y=x\ln\sin x$$ $$\large \lim_{x\to0}\ln y = \lim_{x\to0} x\ln\sin x = \lim_{x\to0}\frac x{\frac1{\ln\sin x}} = \lim_{x\to0} \frac1{\frac {-\cos x}{(\ln\sin x)^2\sin x}} = \lim_{x\to0} (-\tan x )(\ln\sin x)^2 = \lim_{x\to0} \frac{(\ln\sin x)^2}{\frac1{(-\tan x )}} = ... $$ Ultimately it keeps on going; please help me.
HINT: $$\lim_{x\to0^+}\frac{\ln\sin x}{\dfrac1x}=\lim_{x\to0^+}\frac{\dfrac{\cos x}{\sin x}}{-\dfrac1{x^2}}=-\lim_{x\to0}x\cdot \lim_{x\to0}\cos x\cdot\frac1{\lim_{x\to0}\dfrac{\sin x}x}$$ Hope you can take it home from here?
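As a cross-check (a sketch assuming sympy is available), a CAS confirms the one-sided limit is $1$, consistent with $\ln y \to 0$:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    print(sp.limit(sp.sin(x)**x, x, 0, '+'))  # 1, so (sin x)^x -> 1 as x -> 0+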
{ "language": "en", "url": "https://math.stackexchange.com/questions/840915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Prove the non-existence of a smooth function satisfying ... (related to Morse theory) i) Show that there does not exist a smooth function $f:\mathbb{R} \rightarrow \mathbb{R}$, s.t. $f(x) \geq 0$, $\forall x \in \mathbb{R}$, $f$ has exactly two critical points, $x_1,x_2\in\mathbb{R}$ and $f(x_1)=f(x_2) = 0$. (This part is easy.) ii) Show that there does not exist a smooth function $f:\mathbb{R}^2 \rightarrow \mathbb{R}$, s.t. $f(x,y) \geq 0$, $\forall (x,y) \in \mathbb{R}^2$, $f$ has exactly two critical points, $(x_1,y_1),(x_2,y_2)\in\mathbb{R}^2$ and $f(x_1,y_1)=f(x_2,y_2) = 0$. I have tried several methods; however, they do not work. Could anybody help me out?
As requested by "This is much healthier", I post a new thread to express my opinion on "user126154"'s answer, which is great; however, there is something about it of which I can't convince myself. First of all, the compactness condition in "user126154"'s proof can be relaxed, as proposed in Richard Palais: Topology, Volume 2, Issue 4, 1963, which can be found here: Richard Palais's paper. Secondly, consider the smooth function $h(x,y) = [(x+1)^2 + y^2][(x-1)^2 + y^2]$; it is easily shown that its critical points are isolated, and the only two minima are non-degenerate critical points. Now, on the one hand, just as "user126154" proposed, we can construct a new function in the same way (the details are shown below): Consider any diffeomorphism $\phi:\mathbb{R}^2 \rightarrow B$. For the two minima of $h$, $(-1,0)$ and $(1,0)$, denoted $p_1,p_2$, join $p_1$ to $p_2$ with a simple arc which avoids critical points other than $p_1,p_2$. A regular neighborhood of such an arc is a disc $D$ where $h$ has no critical points other than at $p_1$ and $p_2$. Now use a diffeomorphism $\psi$ from $B$ to $D$ and compose with $h$, which gives $h(\psi(\phi(x,y)))$. The constructed function should then have only two critical points. On the other hand, if the constructed function really were a counterexample, then, because its critical points are non-degenerate, we could apply the theorem in the paper above to show that there is at least one other critical point. So, what's the problem?
{ "language": "en", "url": "https://math.stackexchange.com/questions/840967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Showing $A \subset B \iff A\cap B=A$ How would I show this? My proof: Assume $A \subset B$; I want to show i. $A \cap B \subset A$ and ii. $A \subset A \cap B$. i. Let $x$ be any element and assume $x \in A \cap B$. Then $x \in A$ and $x \in B$; thus $x \in A$. ii. Let $x \in A$. By hypothesis $x \in A \rightarrow x \in B$; thus $x \in A \cap B$. Part 2: $A\cap B=A \rightarrow A \subset B$. But I find myself stuck here.
What you have written seems correct, but a bit confusing. I might write the proof like this. Proof We want to prove that $$ A \subset B \iff A\cap B=A $$ Assume that the right hand side is true. That is, assume that $A\cap B = A$. We want to show that $A\subseteq B$. Let $x\in A$. Since $A = A\cap B$, we have $x\in A\cap B$, so $x\in A$ and $x\in B$. Hence $A\subseteq B$. Assume now that the left hand side is true. That is, assume that $A\subseteq B$. We want to prove that $A\cap B = A$. We do this by proving two inclusions: 1. $A\cap B \subseteq A$ and 2. $A\subseteq A\cap B$. To show 1. let $x\in A\cap B$. Then $x\in A$ and $x\in B$. Hence $A\cap B \subseteq A$. To show 2. let $x\in A$. Then since $A\subseteq B$ we have $x\in B$. So $x\in A$ and $x\in B$. Then by definition of intersection $x\in A\cap B$. $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/841024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
$f(x)=\log x\Leftrightarrow f^{-1}(x)=e^x$. Why $e=2.718\cdots$? $$f(x)=\log x\Leftrightarrow f^{-1}(x)=e^x.$$ Ok, $\log x$ is defined as the function $f(\cdot)$ such that $f'(x)=\dfrac{1}{x}$ (with $f(1)=0$). How to get, from this, its inverse $f^{-1}(x)$? And why $e=2.718\cdots$?
The other answer has not answered your question, "Why is $e$ equal to $2.718281828\cdots$?". Let's form the number $E=\lim_n(1+\frac1n)^n$, and evaluate it knowing the continuity of the log function and what its derivative is. Of course this number $E$ is computable, even if slowly, directly by hand. And if you take $n$ large enough, you will indeed find that your result is close to the number I quoted above. We have: $$ \log(E)=\log\left[\lim_n(1+\frac1n)^n\right]=\lim_n n\log(1+\tfrac1n)=\lim_n\frac{\log(1+\frac1n)-\log(1)}{1/n}=\log'(1)=\frac11\,, $$ just using the continuity of $\log$, the identity $\log(u^n)=n\log u$, and the definition of the derivative applied to the logarithm. So $\log(E)=1$, that is, $E$ is precisely the number $e$ whose logarithm equals $1$, and I think that's what you wanted to know.
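And the by-hand computability is easy to see numerically (a one-liner sketch; the sample values of $n$ are arbitrary):

    for n in (10, 1000, 10**6):
        print(n, (1 + 1/n)**n)   # approaches 2.718281828... as n grows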
{ "language": "en", "url": "https://math.stackexchange.com/questions/841120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }