How do you prove the left and right side of an identity of a set are equal? I'm having some trouble understanding sets w/ associative binary operations. Say I have a set "S" w/ the associative binary operation SxS -> S. If 'L' is a left identity of S and 'R' is a right identity of S, how can I prove those two are equal? The definition of associativity is for all a, b, c in G, we have (a * b) * c = a * (b * c). But how does that apply to proving they are equal? It seems obvious but I'm sure there's a certain "proof" way of doing it?
Well this is the proof. $$ L = L * R = R$$ No need to use associativity. For an elaboration. A left identity is an element $L$ satisfying $L * g = g$ for every $g \in G$. Hence $L * R = R$. A right identity is an element satisfying $g * R = g \;\; \forall g \in G$. Hence, $L * R = L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What makes a number representable? The set of real numbers contains elements which can be represented (there exists a way to write them down on paper). These numbers include:

*Integer numbers, such as $-8$, $20$, $32412651$
*Rational numbers, such as $\frac{7}{41}$, $-\frac{14}{3}$
*Algebraic numbers
*Any other number that can be created by a finite chain of functions whose definitions are also finite (e.g. $\sin(\cos(\sqrt{445}))$, $\pi$, $e$)

Another way of thinking about these is that it's possible to write a computer program occupying a finite amount of space that can generate them to any precision (or return their $n$th digit). The set of reals also contains numbers which are impossible to represent (whose digits follow absolutely no logic). We never use such numbers because there is no way of writing them down. My questions are:

*What are these numbers called?
*What is a more formal definition of them?
The numbers you describe in your list are called computable numbers, meaning they can be computed to arbitrary precision. Equivalently, a number $x$ is computable if it is decidable, given rationals $a$ and $b$, whether $x\in (a,b)$. The other numbers are said to be noncomputable. It is worth noting, however, that some noncomputable numbers can still be written down - for instance: $$\sum_{n=0}^{\infty}2^{-BB(n)}$$ where $BB$ is the busy beaver function (which isn't computable).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving that an operator $T$ on a Hilbert space is compact Let $H$ be a Hilbert space, $T:H \to H$ be a bounded linear operator and $T^{*}$ be the Hilbert adjoint operator of $T$. Show that $T$ is compact if and only if $T^{*}T$ is compact. My attempt: Suppose first that $T$ is compact. The Hilbert adjoint operator of $T$ is bounded, therefore $T^{*}T$ is compact. How can I proceed with the converse part?
Suppose that $f = \text{w}-\lim_{n \to \infty} f_n$. So we have $\lim_{n \to \infty} \| T^* T (f_n-f)\|=0$ because $T^*T$ is compact. Also, we know that sequence $\{f_n-f\}_{n=1}^{\infty}$ is bounded, so we have \begin{align*}\lim_{n \to \infty} \|T(f_n-f)\|^2 = \lim_{n \to \infty} \langle T^*T (f_n-f),f_n-f\rangle \leqslant \limsup_{n \to\infty} \|T^*T(f_n-f)\|\|f_n-f\| = 0, \end{align*} that is, $\text{s}-\lim_{n \to \infty} Tf_n = Tf$. So, $T$ is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve logarithmic equation $\log_{\frac{x}{5}}(x^2-8x+16)\geq 0$ Find $x$ from logarithmic equation: $$\log_{\frac{x}{5}}(x^2-8x+16)\geq 0 $$ This is how I tried: $$x^2-8x+16>0$$ $$ (x-4)^2>0 \implies x \not = 4$$ then $$\log_{\frac{x}{5}}(x^2-8x+16)\geq \log_{\frac{x}{5}}(\frac{x}{5})^0 $$ because of base $\frac{x}{5}$, we assume $x \not\in (-5,5)$, then $$x^2-8x+16 \geq 1$$ $$ (x-3)(x-5) \geq 0 \implies$$ $$ \implies x \in {(- \infty,-5) \cup (5, \infty)} \cap x\not = 4 $$ But this is wrong, because the right solution is $$x \in {(3,4) \cup (4,6)} $$ I'm sorry if I used the wrong terms, English is not my native language.
Given $$\displaystyle \log_{\frac{x}{5}}(x^2-8x+16)\geq 0\;,$$ the expression is defined when $\displaystyle \frac{x}{5}>0$, $\displaystyle \frac{x}{5}\neq 1$ and $(x-4)^2>0$. So we need $x>0$, $x\neq 5$ and $x\neq 4$. $\displaystyle \; \bullet\;$ If $$\displaystyle \frac{x}{5}>1\Rightarrow x>5\;,$$ then $$\displaystyle \log_{\frac{x}{5}}(x^2-8x+16)\geq 0\Rightarrow (x^2-8x+16)\geq 1.$$ So we get $$\displaystyle x^2-8x+15\geq 0\Rightarrow (x-3)(x-5)\geq 0,$$ and hence $x>5$. $\displaystyle \; \bullet\;$ If $$\displaystyle 0<\frac{x}{5}<1\Rightarrow 0<x<5\;,$$ then $$\displaystyle \log_{\frac{x}{5}}(x^2-8x+16)\geq 0\Rightarrow (x^2-8x+16)\leq 1.$$ So we get $$\displaystyle (x-3)(x-5)\leq 0,$$ which together with $0<x<5$ and $x\neq 4$ gives $3\leq x<5$, $x\neq 4$. So our final solution is $$\displaystyle x\in \left[3,4\right)\cup \left(4,5\right)\cup \left(5,\infty\right)$$
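For what it's worth, a quick numerical sanity check of this solution set (a rough sketch in Python; the grid step and helper names are arbitrary):

```python
import math

def in_solution_set(x):
    # claimed answer: [3, 4) U (4, 5) U (5, infinity)
    return (3 <= x < 4) or (4 < x < 5) or (x > 5)

def inequality_holds(x):
    base, arg = x / 5, (x - 4) ** 2
    if base <= 0 or base == 1 or arg <= 0:
        return False          # the expression is undefined there
    return math.log(arg, base) >= 0

# scan a grid of x values in (0, 20) and confirm the two predicates agree
for i in range(1, 20000):
    x = i / 1000.0
    assert inequality_holds(x) == in_solution_set(x), x
print("solution set confirmed on the grid")
```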
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Implicit solution of ODE to explicit or approximate explicit function Working with the following ODE and implicit solution, but need an explicit solution for $J$: The ODE, with $J_c$ and $G$ as constants, is: $$-\frac{1}{J^2}\frac{dJ}{dt} = G(J-J_c)$$ The implicit solution given by Field et al. (1995) is: $$Gt = \frac{1}{J_c^2} \left[ \ln \left( \frac{J}{J_o}\cdot \frac{J_o-J_c}{J-J_c} \right) - J_c \left(\frac{1}{J}- \frac{1}{J_o} \right) \right] $$ There is no explicit statement about $J_o$ in the reference, but physically it corresponds to the initial flux at time zero. This suggests that $J=J_o \text{ at time}~ t=0$. However, as suggested and shown by JJacquelin, when we plug in $t=0$ we get $\ln(1) = 0$? I am checking the negative sign in the original reference as suggested by JJacquelin. Any pointers to a complete explicit solution or a good approximation by an explicit function are greatly appreciated. Thanks, Vince
Based on the contribution from JJacquelin, the problem needs a numerical approach. A closed-form solution in terms of standard functions (and probably even non-standard ones) is unlikely.
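As an illustration of the numerical route, here is a minimal sketch that integrates the ODE directly with SciPy; the values of $G$, $J_c$ and $J_o$ below are made-up placeholders, not taken from Field et al.:

```python
import numpy as np
from scipy.integrate import solve_ivp

# dJ/dt = -G * J**2 * (J - Jc), rearranged from  -(1/J**2) dJ/dt = G (J - Jc)
G, Jc, J0 = 1.0, 0.2, 1.0      # placeholder constants

def rhs(t, J):
    return -G * J**2 * (J - Jc)

sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[J0], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 50.0, 11)
print(np.round(sol.sol(t)[0], 4))   # J(t) decays from J0 toward Jc
```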
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Another messy integral: $I=\int \frac{\sqrt{2-x-x^2}}{x^2}\ dx$ I found the following question in a practice book of integration:- $Q.$ Evaluate $$I=\int \frac{\sqrt{2-x-x^2}}{x^2}\ dx$$ For this I substituted $t^2=\frac {2-x-x^2}{x^2}\implies x^2=\frac{2-x}{1+t^2}\implies 2t\ dt=\left(-\frac4{x^3}+\frac 1{x^2}\right)\ dx$. Therefore $$\begin{align}I&=\int\frac {\sqrt{2-x-x^2}}{x^2}\ dx\\&=\int \left(\frac tx\right)\left(\frac{2t\ dt}{-\frac4{x^3}+\frac 1{x^2}}\right)\\&=\int \frac{2t^2\ dt}{\frac{x-4}{x^2}}\\&=\int \frac{2t^2(1+t^2)\ dt}{{x-4}\over{2-x}}\\&=\int \frac{2t^2(1+t^2)(5+4t^2-\sqrt{8t^2+9})\ dt}{\sqrt{8t^2+9}-(8t^2+9)}\end{align}$$ Now I substituted $8t^2+9=z^2 \implies t^2=\frac {z^2-9}8 \implies 2t\ dt=z/4\ dz$. So, after some simplification, you get $$\begin{align}I&=-\frac1{512}\int (z^2-9)(z+1)(z-1)^2\ dz\end{align}$$ I didn't have the patience to solve this integration after all these substitutions knowing that it can be done (I think I have made a mistake somewhere but I can't find it. There has to be an $ln(...)$ term, I believe). Is there an easier way to do this integral, something that would also strike the mind quickly? I have already tried the Euler substitutions but that is also messy.
Let $$\displaystyle I = \int \frac{\sqrt{2-x-x^2}}{x^2}dx = \int \sqrt{2-x-x^2}\cdot \frac{1}{x^2}dx\;, $$ Now using integration by parts, $$\displaystyle I = -\frac{\sqrt{2-x-x^2}}{x}-\int\frac{1+2x}{2\sqrt{2-x-x^2}}\cdot \frac{1}{x}dx $$ Since $\displaystyle \frac{1+2x}{2x} = 1+\frac{1}{2x}$, $$\displaystyle I = -\frac{\sqrt{2-x-x^2}}{x}-\underbrace{\int\frac{1}{\sqrt{2-x-x^2}}dx}_{J}-\frac{1}{2}\underbrace{\int\frac{1}{x\sqrt{2-x-x^2}}dx}_{K}$$ For the calculation of $$\displaystyle J = \int\frac{1}{\sqrt{2-x-x^2}}dx = \int\frac{1}{\sqrt{\left(\frac{3}{2}\right)^2-\left(\frac{2x+1}{2}\right)^2}}dx\;,$$ let $\displaystyle \left(\frac{2x+1}{2}\right)=\frac{3}{2}\sin \phi\;,$ so that $\displaystyle dx = \frac{3}{2}\cos \phi\, d\phi$. So we get $$\displaystyle J = \int 1\,d\phi = \phi+\mathcal{C_{1}} = \sin^{-1}\left(\frac{2x+1}{3}\right)+\mathcal{C_{1}}$$ Similarly, for the calculation of $$\displaystyle K = \int \frac{1}{x\sqrt{2-x-x^2}}dx\;,$$ put $\displaystyle x=\frac{1}{u}$ and $\displaystyle dx = -\frac{1}{u^2}du$. So we get (for $x>0$) $$\displaystyle K = -\int\frac{1}{\sqrt{2u^2-u-1}}du = -\frac{1}{\sqrt{2}}\int\frac{1}{\sqrt{\left(u-\frac{1}{4}\right)^2-\left(\frac{3}{4}\right)^2}}du$$ So we get $$\displaystyle K = -\frac{1}{\sqrt{2}}\ln\left|\left(u-\frac{1}{4}\right)+\sqrt{\left(u-\frac{1}{4}\right)^2-\left(\frac{3}{4}\right)^2}\right|+\mathcal{C_{2}} = -\frac{1}{\sqrt{2}}\ln\left|\left(\frac{1}{x}-\frac{1}{4}\right)+\sqrt{\left(\frac{1}{x}-\frac{1}{4}\right)^2-\left(\frac{3}{4}\right)^2}\right|+\mathcal{C_{2}}$$ So $$\displaystyle I = -\frac{\sqrt{2-x-x^2}}{x}-\sin^{-1}\left(\frac{2x+1}{3}\right)+\frac{1}{2\sqrt{2}}\ln\left|\left(\frac{1}{x}-\frac{1}{4}\right)+\sqrt{\left(\frac{1}{x}-\frac{1}{4}\right)^2-\left(\frac{3}{4}\right)^2}\right|+\mathcal{C}$$
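A quick symbolic spot-check (a sketch using SymPy) that differentiating this antiderivative recovers the integrand at a few points of $(0,1)$, where everything is real:

```python
import sympy as sp

x = sp.symbols('x')
F = (-sp.sqrt(2 - x - x**2)/x
     - sp.asin((2*x + 1)/3)
     + sp.sqrt(2)/4 * sp.log((1/x - sp.Rational(1, 4))           # sqrt(2)/4 == 1/(2*sqrt(2))
                             + sp.sqrt((1/x - sp.Rational(1, 4))**2 - sp.Rational(9, 16))))
integrand = sp.sqrt(2 - x - x**2)/x**2
dF = sp.diff(F, x)

# the two printed numbers should agree at each sample point
for x0 in (0.2, 0.5, 0.9):
    print(sp.N(dF.subs(x, x0)), sp.N(integrand.subs(x, x0)))
```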
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Is this a sound demonstration of Euler's identity? Richard Feynman referred to Euler's Identity, $e^{i\pi} + 1 = 0$ as a "jewel." I'm trying to demonstrate this jewel without recourse to a Taylor series. Given $z = cos\theta + i sin\theta\; |\;|z| = 1$, $$\frac{dz}{d\theta}= -sin\theta + icos\theta =i(isin\theta+cos\theta)=iz$$ Now, if I let $z=u(\theta)$, then, $$\frac{du}{d\theta}=iu(\theta)$$ Undoing my original derivative, $$\int iu(\theta) d\theta =u(\theta)+C$$ $$ \therefore z=u(\theta)=e^{i\theta}$$ which is the general case. Substituting $\pi =\theta$ for the special case, and invoking the original equation, we are left with $z=cos\pi + isin\pi =-1 = e^{i\pi}$ $$\therefore e^{i\pi}+1=0$$ When I first worked through this, the constant of integration disturbed me, like a nasty inclusion marring the jewel. But now, I think it's fair for me to excise it in the line, $\;\therefore z=u(\theta)=e^{i\theta}$ Is that correct?
This is not a proof of Euler's discovery, but of something much weaker. What Euler noticed is that there is a connection between the exponential function and the trigonometric functions. Your proof only shows that solutions to $F'(t)=iF(t)$ (for real $t$) are equivalent to uniform circular motion in the complex plane. This suggests some formal relation of $F(t)$ to exponentials, but it does not:

*give a construction of $F(t)$ independent of circular motion;
*show that $F(ix)$ exists for real $x$;
*show that $F(-ix)$, for real $x$, equals the function we know as $e^x$.

Those (or their equivalents) are the earth-shaking facts that Euler uncovered and the reason his formula is celebrated. Starting from a more advanced formalism (such as power series) where $e^z$ is defined for both real and imaginary $z$, an argument like this one might reproduce Euler's finding.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Plot a single point on number line in interval notation For example, I want to plot the solution set $\{3\}\cup (2, \infty)$. How do I represent 3 as a single point?
If you needed to represent $\{2\}\cup(4,\infty)$, then you could use a filled-in dot for individual points, so you might draw something like the following: a number line with a filled dot at $2$ and a ray shaded off to the right starting from an open circle at $4$. As Alex G. mentioned, since $3$ is in $(2,\infty)$, $\{3\}\cup(2,\infty)=(2,\infty)$, so you wouldn't have to do this for the case you mentioned in your original question.
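For completeness, a minimal matplotlib sketch of that kind of picture (a filled dot for the isolated point, an open circle at the excluded endpoint, and an arrow for the unbounded ray); the styling choices are arbitrary:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 1.2))
ax.axhline(0, color='black', linewidth=1)                  # the number line
ax.plot([2], [0], 'o', color='black', markersize=8)        # filled dot: the point {2}
ax.plot([4], [0], 'o', markerfacecolor='white',
        markeredgecolor='black', markersize=8)             # open circle: 4 is excluded
ax.annotate('', xy=(8, 0), xytext=(4.1, 0),
            arrowprops=dict(arrowstyle='->', linewidth=3))  # the ray (4, infinity)
ax.set_xlim(0, 8.5)
ax.set_yticks([])
plt.show()
```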
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proof that bisecting a line segment with straightedge is impossible This is the proof I read from here. I will quote it fully: The answer is NO. To see why, consider a line L in the plane P, and two marked points A, B on it. It is desired to construct the midpoint M of the segment AB using the straightedge. Suppose we have found a procedure which works. Now, suppose we have a one-to-one mapping of plane P onto another plane P' which carries lines to lines, but which does not preserve the relation "M is the midpoint of the segment AB", in other words A, M, B are carried to points A', M', B' with A'M' unequal to B'M'. Then, this leads to a contradiction, because the construction of the midpoint in the plane P induces a construction in P' which also would have to lead to the midpoint of A'B'. (This is a profound insight, an "Aha" experience, and worth investing lots of time and energy in thinking it through carefully!!) I don't understand how the one-to-one mapping induces an equivalent construction of the midpoint of A'B' in the plane P', given that it only preserves lines?.
It depends on your "initial position". Given A, B, and possibly other points on the line AB (but not M), we cannot obtain other points on AB unless we are given at least 2 points C, D not on AB. Now suppose M lies on the line CD. If we have a projection, that is, a perspective view P' of P, the recipe "Take the intersection M of CD with AB" transforms to "Take the intersection of A'B' with C'D'", which may or may not give the mid-point of A'B', because projections in general do not preserve ratios of lengths and do not in general preserve mid-points, but M is still the mid-point of AB. I don't know which book you refer to; I don't intend to read it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to calculate the probability of drawing numbers under certain constraints We have numbers $1,2,3,\dots, n$, where $n$ is an integer, and we select $d$ distinct numbers from them, $i_1,i_2,\dots,i_d$. What is the probability that there exist at least two numbers whose difference is less than $k$, i.e. $|i_{h}-i_{j}|<k$ for some $h\neq j$ with $h,j \in \{1,2,3,\dots,d\}$? I am thinking the number of selections which satisfy the constraint should be $(n-k+1)(k-1)\binom{n-2}{d-2}$, and the total number is $\binom{n}{d}$, so the answer should be $\dfrac{(n-k+1)(k-1)\binom{n-2}{d-2}}{\binom{n}{d}}$ Thank you in advance
I think the best approach is to enumerate the number of ways to choose $d$ numbers so that the difference between any two is at least $k$. That is, we aim to enumerate the number of sets of the form $\{i_1,i_2,\ldots,i_d\}$ such that $|i_j-i_{j-1}|\geq k$ for all $2\leq j\leq d$. One way to think about this type of problem is via 'stars' and 'bars'. For example, if $n=7$, $d=3$, and $k=2$ we can consider $**|**|**|*$ to represent the set $\{2,4,6\}$. Note that each 'bar' represents a number corresponding to the number of stars that come before it. Notice also that, in order to satisfy the condition that the difference between any two numbers is 2 or greater, we need at least two 'stars' between each bar. We can come up with the general result using the above observations. we are choosing $d$ numbers, so we need $d$ bars. The difference between each consecutive number in our set must be at least $k$, so we need at least $k$ stars between each consecutive pair of bars. Think of the bars as dividers between bins. there are $d$ bars, so there are $d+1$ bins. $$\text{-bin 1-}|\text{-bin 2-}|\cdots|\text{-bin $d+1$-}$$ By our above logic, bins 2 through $d$ must contain at least $k$ stars and bin 1 must contain at least 1 star. So the number of ways to choose $d$ numbers with pairwise difference at least $k$ amounts to enumerating how many ways we can place the rest of the stars into bins. We have $k\cdot(d-1)+1$ stars already placed, leaving $n-k(d-1)-1$ stars to be placed in the $d+1$ bins available. Let $m=n-k(d-1)-1$ and $e=d+1$. So the question becomes, how many ways are there to place $m$ stars into $e$ bins? We can answer this question by placing $e-1=d$ dividers anywhere between $m$ stars. In this scenario the total number of spots for stars or bars is $m+d$ and we are choosing $d$ of them to be bars. So there are $\binom{m+d}{d}=\binom{n-k(d-1)-1+d}{d}=\binom{n+k+d-kd-1}{d}.$ So the probability you are looking for is $$1-\frac{\binom{n+k+d-kd-1}{d}}{\binom{n}{d}}=\frac{\binom{n}{d}-\binom{n+k+d-kd-1}{d}}{\binom{n}{d}}.$$
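A brute-force check of the final formula for small parameters (a sketch; it simply enumerates every $d$-subset of $\{1,\dots,n\}$):

```python
from itertools import combinations
from math import comb

def prob_two_close(n, d, k):
    """Exact probability, by enumeration, that some pair differs by less than k."""
    total = bad = 0
    for subset in combinations(range(1, n + 1), d):   # tuples come out sorted
        total += 1
        if any(subset[i + 1] - subset[i] < k for i in range(d - 1)):
            bad += 1
    return bad / total

def prob_formula(n, d, k):
    return 1 - comb(n + k + d - k * d - 1, d) / comb(n, d)

for n, d, k in [(7, 3, 2), (10, 4, 3), (12, 5, 2)]:
    print(prob_two_close(n, d, k), prob_formula(n, d, k))   # each pair matches
```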
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How can we prove that $\frac{a}{b}\times\frac{c}{d} =\frac{ac}{bd}$ I am slowly reading Calculus by Michael Spivak and this is one of the problems in the first chapter. However, I can't prove it; please help me with it...
To see that equality holds, we will write the elements $\frac1b$ and $\frac1d$ as $b^{-1}$ and $d^{-1}$, respectively. Thus \begin{align*} \frac ab\cdot\frac cd&=(a\cdot b^{-1})\cdot(c\cdot d^{-1})\\ &=(a\cdot c)\cdot(b^{-1}\cdot d^{-1})\\ &=(a\cdot c)\cdot(b\cdot d)^{-1}\\ &=\frac{ac}{bd}. \end{align*} The second line uses the associative and commutative properties of multiplication.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Isomorphic or Equal in Vect? In the category of vector spaces, how many elements are there in the isomorphism class of 1-dimensional vector spaces? Secondly, is the polynomial algebra generated by the symbol $x$ equal to the polynomial algebra generated by the symbol $y$, or merely isomorphic to it?
The first question can have no answer. Vector spaces over a given field (even $1$-dimensional vector spaces) do not form a set, unless you consider a small category. It is for the same reason that the set of all sets does not exist. For the second question, the answer is 'yes: two polynomial algebras in one indeterminate over a given field $F$ are isomorphic', since they are all realisations of the free algebra on one element. But they're not equal, because the symbol $x$ (in general) does not belong to $F[y]$ and vice-versa.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1423974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Laurent series of $e^{z+1/z}$ What is the Laurent series of $e^{z+1/z}$? I had used $$a_k= \frac{1}{2\pi i}\int_c \frac{f(z)}{z^{k+1}}\,dz $$ for a curve $c$ in which we can use $e^z$ as an analytic func. and expanded the $e^{1/z}$ series expansion.
$$ \begin{align} e^ze^{\frac1z} &=\sum_{k=0}^\infty\frac{z^k}{k!}\sum_{j=0}^\infty\frac1{j!z^j}\\ &=\sum_{k=-\infty}^\infty z^k\color{#0000F0}{\sum_{j=0}^\infty\frac1{(k+j)!j!}}\\ &=\sum_{k=-\infty}^\infty\color{#0000F0}{I_{|k|}(2)}\,z^k \end{align} $$ where it is convention that $\frac1{k!}=0$ when $k<0$. $I_n$ is the Modified Bessel Function of the First Kind. Why is the Sum in $\boldsymbol{j}$ Even in $\boldsymbol{k}$? In the preceding, it is stated that $$ e^{z+\frac1z}=\sum_{k=-\infty}^\infty I_{|k|}(2)z^k $$ which implies that $\sum\limits_{j=0}^\infty\frac1{(k+j)!j!}$ is even in $k$; and in fact it is, because of the convention, mentioned above, that $\frac1{k!}=0$ when $k<0$. For $k\ge0$, we have $$ \begin{align} \sum_{j=0}^\infty\frac1{(-k+j)!j!} &=\sum_{j=k}^\infty\frac1{(-k+j)!j!}\tag{1a}\\ &=\sum_{j=0}^\infty\frac1{j!(j+k)!}\tag{1b} \end{align} $$ Explanation:
$\text{(1a)}$: the terms with $j\lt k$ are $0$ due to the convention
$\text{(1b)}$: substitute $j\mapsto j+k$
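A numerical check of these coefficients against SciPy's modified Bessel function of the first kind (a small sketch):

```python
from math import factorial
from scipy.special import iv   # modified Bessel function of the first kind, I_v(x)

def laurent_coeff(k, terms=60):
    """Coefficient of z**k in exp(z + 1/z): sum of 1/((k+j)! j!), with 1/m! = 0 for m < 0."""
    return sum(1.0 / (factorial(k + j) * factorial(j))
               for j in range(terms) if k + j >= 0)

for k in (-3, -1, 0, 2, 5):
    print(k, laurent_coeff(k), iv(abs(k), 2.0))   # the last two columns agree
```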
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Examples of monotone functions where "number" of points of discontinuity is infinite We know that if $f:D(\subseteq \mathbb{R})\to\mathbb{R}$ be a monotone function and if $A$ be the set of points of discontinuity of $F$ then $\left\lvert A \right\rvert$ is countable. Where $\left\lvert A \right\rvert$ denotes the cardinality of $A$. My question is, If $D=[a,b]$ then does there exist any function for which $\left\lvert A \right\rvert=\aleph_0$ ? Actually the question arose when I attempted to prove that $\left\lvert A \right\rvert$ is countable. Let me briefly discuss the argument. Proof (certainly wrong in view of the comment below but can't find the flaw in the following argument) Assume that $\left\lvert A \right\rvert$ is infinite. Then consider the quantity $$\displaystyle\lim_{x\to x_0+}f(x)-\displaystyle\lim_{x\to x_0-}f(x)$$for all $x_0\in A$. Now consider, $$B=\left\{\left\lvert\lim_{x\to x_0+}f(x)-\lim_{x\to x_0-}f(x)\right\rvert\mid x_0\in A\right\}$$Note that $B$ is bounded and $B\ne \emptyset$. So, $\sup B$ exists. Now define, $$\sup B\ge d_n> \dfrac{(n-1)\sup B}{n}$$ where $d_n\in B\setminus \{d_1,d_2,\ldots,d_{n-1}\}$ Then the series $\displaystyle\sum_{i=1}^\infty d_i$ diverges. My conclusion followed from this argument.
*Take the graph of any function you like which is both continuous and monotone over a finite interval of your choice $[a,~b]$.
*Map said interval to $[0,1]$ using $x\mapsto\dfrac{x-a}{b-a}$.
*Divide the new interval $[0,1]$ into an infinite number of subintervals of the form $\bigg(\dfrac1{n+1},\dfrac1n\bigg]$.
*Leave the graph on $\bigg(\dfrac12,1\bigg]$ untouched, and move the graph on $\bigg[0,\dfrac12\bigg]$ upwards by $\dfrac1{2^2}$ if the function is decreasing, or downwards by $\dfrac1{2^2}$ if it is increasing.
*Similarly for $\bigg(\dfrac13,\dfrac12\bigg]$ and $\bigg[0,\dfrac13\bigg]$, leaving the former untouched and translating the latter vertically by $\dfrac1{3^2}$, etc.
*I intentionally chose the quantum of translation $\dfrac1{n^2}$ instead of just $\dfrac1n$, because I wanted to avoid follow-up questions of the form: "But then the value at $0$ will tend towards $\pm\infty$, since the harmonic series diverges, thus rendering the length of the function's value domain infinite. How could we avoid that, so that both its domain and image are finite?"

Hope this helps!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that $n^{(n+1)} > (n+1)^{n}$ when $n\geq 3$ Firstly, for $n=3$, $$3^4 > 4^3$$. Secondly, $(n+1)^{(n+2)} > (n+2)^{(n+1)}$ Now I'm stuck.
The inequality at stake is equivalent to $n > \left( 1+\frac{1}{n}\right)^n$. Taking logs yields $\frac{\ln n}{n} >\ln \left(1+\frac{1}{n} \right)$. This is true since the concavity of $\log$ implies the stronger $\frac{1}{n} >\ln \left(1+\frac{1}{n} \right)$, while $\frac{\ln n}{n}\ge\frac{1}{n}$ because $\ln n\ge 1$ for $n\geq 3$. Using this argument, your inequality can be refined to $(n+1)^n<e\,n^n$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Set theory and functions (1) Prove or disprove: For any sets $X,Y,Z$ and any maps $f:X \to Y$ and $g:Y \to Z$, if $f$ is injective and $g$ is surjective, then $g \circ f$ is surjective. I proved previously that $f$ is injective if $g \circ f$ is injective, and I also proved that if $g \circ f$ is surjective then $g$ is surjective. Then this question appears, so how do I prove whether it is true or false? I'm saying it's true, because $g \circ f$ is bijective, so how do I go about proving it? Or is my idea already wrong? I need some help, thanks. (2) Show that (it is true in general that) for any sets $A,B$, one has $P(A) \cup P(B) \subseteq P(A \cup B)$. I know that the power set $P(A)$ has exactly all the subsets of $A$, but I don't know how to start.
As you said in your question we have indeed:

*$g\circ f$ injective implies that $f$ is injective.
*$g\circ f$ surjective implies that $g$ is surjective.

Then under the condition that $f$ is injective and $g$ is surjective you seem to conclude that $g\circ f$ is bijective ("I'm saying it's true, because $g \circ f$ is bijective..."). From where do you get this? It seems to me that you turned around the two statements above. From injectivity of $f$ you are not allowed to conclude that $g\circ f$ is injective. And from surjectivity of $g$ you are not allowed to conclude that $g\circ f$ is surjective. A (repetition of a) counterexample: let $g:\{0,1,2\}\rightarrow\{0,1\}$ be prescribed by $0\mapsto0$, $1\mapsto1$, $2\mapsto1$ and let $f:\{0,1\}\rightarrow\{0,1,2\}$ be prescribed by $0\mapsto1$, $1\mapsto2$. Then $f$ is injective and $g$ is surjective. However $g\circ f:\{0,1\}\rightarrow\{0,1\}$ is constant (and prescribed by $0\mapsto1$ and $1\mapsto1$). It is evidently not injective and not surjective.
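The counterexample can be checked mechanically with the finite maps written as dictionaries (a small sketch):

```python
f = {0: 1, 1: 2}            # f: {0,1} -> {0,1,2}, injective
g = {0: 0, 1: 1, 2: 1}      # g: {0,1,2} -> {0,1}, surjective

gof = {x: g[f[x]] for x in f}             # the composition g o f
print(gof)                                 # {0: 1, 1: 1} -- constant

is_injective = len(set(gof.values())) == len(gof)
is_surjective = set(gof.values()) == {0, 1}
print(is_injective, is_surjective)         # False False
```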
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometric Interpretation of $|z_1-z_2|\ge ||z_1|-|z_2||$ I have to give a geometric argument that, given two complex numbers $z_1, z_2$, the following inequality holds $$|z_1-z_2|\ge ||z_1|-|z_2||$$ I know every complex number has a nonnegative modulus, and this becomes a problem if $|z_1|\lt |z_2|$, and it contradicts the fact that $|z|\ge 0$ for all $z \in \mathbb{C}$. Intuitively I understand what is happening, but I'm having trouble formulating a geometric argument. I would imagine that, looking at a Argand diagram, the Triangle Inequality would be $$|z_1|+|-z_2|\ge |z_1-z_2|$$ which makes sense. But I'm not sure where to go from here.
If you want to prove it using the triangle inequality: apply the triangle inequality to get $|z_1-z_2|+|z_2| \geq |z_1|$, i.e. $|z_1-z_2|\geq |z_1|-|z_2|$, which settles the case $|z_1|\geq |z_2|$. Apply it again to get $|z_2-z_1|+|z_1| \geq |z_2|$, i.e. $|z_1-z_2|\geq |z_2|-|z_1|$, which settles the case $|z_2|\geq |z_1|$. Together with $|x|=|-x|$ and $|x|=x$ for $x \in \mathbb R_{\geq 0}$ this gives the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Complex Analysis Geometric Series The question is: Let $n$ be a positive integer, let $h$ be a positive integer not divisible by $n$, and let $$ w = \cos\left(\frac{2\pi}{n}\right) + i \sin\left(\frac{2\pi}{n}\right) $$ Show that $$ 1 + w^h + w^{2h} + w^{3h} + \dots + w^{(n-1)h} = 0$$ I believe I do something with a geometric series, but I am not sure where to begin.
Note that $$w = \cos\left(\frac{2\pi}{n}\right) + i \sin\left(\frac{2\pi}{n}\right) = e^{\frac{2\pi i}{n}}.$$ Use the geometric series $$\sum_{k=0}^{n-1} z^k = \frac{z^n-1}{z-1}.$$ Your case delivers $$\sum_{k=0}^{n-1} \left( e^{\frac{2\pi i}{n}} \right)^{kh}=\sum_{k=0}^{n-1} \left( \left( e^{\frac{2\pi i}{n}} \right)^{h} \right)^k =\sum_{k=0}^{n-1} \left( e^{\frac{2\pi i h}{n}} \right)^k = \frac{\left( e^{\frac{2\pi i h}{n}} \right)^n -1}{e^{\frac{2\pi ih}{n}} -1} = \frac{e^{2\pi i h}-1}{e^{\frac{2\pi i h}{n}}-1} = \frac{1- 1}{e^{\frac{2\pi i}{n}}-1} = 0$$ as $h \in \mathbb{N}$.
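A quick numerical confirmation (a sketch) for a few values of $n$ and $h$ with $n \nmid h$:

```python
import numpy as np

for n, h in [(5, 2), (7, 3), (12, 5), (9, 4)]:
    w = np.exp(2j * np.pi / n)
    total = sum(w**(k * h) for k in range(n))
    print(n, h, abs(total))    # ~1e-15 each time, i.e. zero up to rounding
```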
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving Exponential Inequality I would like to know the range of $n$ where this condition is true. Can somebody help me with it? The inequality is as follows $$n^{100}\le2^{n^2}$$
Both functions in your inequality are continuous, so we find the correct intervals by solving for the equality $n^{100}=2^{n^2}$ first. Let $x=n^2$. Then $$x^{50}=2^x$$ $$x=2^{x/50}=e^{x(\ln 2)/50}$$ $$xe^{x(-\ln 2)/50}=1$$ $$x\frac{-\ln 2}{50}\cdot e^{x(-\ln 2)/50}=-\frac{\ln 2}{50}$$ $$x\frac{-\ln 2}{50}=W\left(-\frac{\ln 2}{50}\right)$$ where $W$ is the Lambert W function. $$x=-\frac{50}{\ln 2}W\left(-\frac{\ln 2}{50}\right)$$ $$n=\pm\sqrt{-\frac{50}{\ln 2}W\left(-\frac{\ln 2}{50}\right)}$$ Since $-\frac 1e<-\frac{\ln 2}{50}<0$, there are two values of the $W$ function here. We get four values for $n$: $n\approx \pm 1.00705$ or $n\approx \pm 20.9496$ If we assume that $n$ is integral, the solution set is $|n|\le 1$ or $|n|\ge 21$ If $n$ is real but not necessarily integral, replace the $1$ and $21$ with the approximate values above.
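The two positive crossing points can be reproduced with SciPy's Lambert $W$ (a sketch; the two real branches are $k=0$ and $k=-1$):

```python
import numpy as np
from scipy.special import lambertw

arg = -np.log(2) / 50
for branch in (0, -1):
    x = -50 / np.log(2) * lambertw(arg, branch).real   # x = n**2
    n = np.sqrt(x)
    print(branch, n, n**100 / 2**(n**2))   # n ~ 1.00705 and 20.9496; the ratio is ~1 at a crossing
```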
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Signing a Derivative of an Expectation Let $$Z=E[F(x)]=\int_{-\infty}^{\infty} F(x) \frac{1}{y} \phi\left(\frac{x-a}{y}\right)\,{\rm d}x$$ where $F(x)$ is a convex function of $x$, $\phi$ is the standard normal PDF and $a$ is some (finite) constant. I want to show that $$\frac{\partial Z}{\partial y}>0$$ Here's my proof so far: $$\frac{\partial Z}{\partial y}=\int_{-\infty}^{\infty} F(x) \frac{1}{y^2} \phi\left(\frac{x-a}{y}\right)\left(\frac{(x-a)^2}{y^2}-1\right)\,{\rm d}x$$ The idea of the rest of the proof I have in my mind is that since $\phi(x)=\phi(-x)$ and F is convex, it should be that any increase in the derivative for $x>0$ is greater than a corresponding value for $x<0$. [For example, I'm trying to get something analogous to $\int_{0}^{\infty} (F(x)-F(-x)) \frac{1}{y^2} \phi\left(\frac{x}{y}\right)\,{\rm d}x>0$] But, I'm unable to complete the proof.
With $u=(x-a)/y$, we have $$ \int_{-\infty}^{\infty} F(x) \frac{1}{y} \phi\left(\frac{x-a}{y}\right)\,\mathrm dx = \int_{-\infty}^{\infty} F(yu+a)\phi(u)\,\mathrm du\;, $$ and $$ \frac\partial{\partial y}\int_{-\infty}^{\infty} F(yu+a)\phi(u)\,\mathrm du =\int_{-\infty}^{\infty}F'(yu+a)u\phi(u)\,\mathrm du\;. $$ Now integrate by parts; the boundary term vanishes, $F''$ is positive and the indefinite integral of $u\phi(u)$ is negative (proportional to $\phi(u)$) and thus cancels the sign from the integration by parts. The derivative $\frac{\partial Z}{\partial y}$ in the question seems to contain several errors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Not affine, projective, geometrically connected, geometrically reduced, nor geometrically regular... Is there a field $k$ and a regular integral $k$-variety $X$ that is neither affine, projective, geometrically connected, geometrically reduced, nor geometrically regular?
This is more of a comment on Kevin's answer, but it's too long. I think the exposition can be made clearer with less technicalities (I don't even follow the second isomorphism) and complications. The point is that affine/projective are easy to get rid of, so we focus on the latter three. If we want to make a regular thing non-reduced (hence non-regular) and non-connected after an extension of scalars, we work over an imperfect coefficient field like $k=\mathbb{F}_p(t)$, and take the spectrum associated to a one-variable polynomial which, on a suitable field extension, will split into relatively prime factors (making it disconnected) which also ramify (making it non-reduced, hence nonregular). So for example, $\text{Spec } k[x]/(x^{2p}-t)$, since upon extension to $k'=\mathbb{F}_p(t^{1/(2p)})$, we get that $x^{2p}-t=(x-t^{1/(2p)})^p(x+t^{1/(2p)})^p$. Here $p$ is any odd prime. Then you do the easy part, which is making it non-affine/projective. The easiest way is to take projective space over our variety (which preserves all the properties we want), then remove a point. Valuative criterion shows it's not even proper over the base field (hence not projective), and dimension-counting like Kevin did shows it's not affine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Is it possible for $(900q^2+ap^2)/(3q^2+b^2p^2)$ to be an integer? The original problem is: "Find all possible pairs of positive integers $(a, b)$ $$k = \dfrac{a^3+300^2}{a^2b^2+300}\tag1$$ such that $k$ is an integer." I've tried so many different ways. Now this question comes up from one of them. Let, $$\left(\dfrac{a}{10} \right)^2=\dfrac{3(300-k)}{kb^2-a}=\dfrac{p^2}{q^2}\tag2$$ If you calculate $k$ in terms of $a, b, p, q,$ then the following question comes up: "Suppose $a$, $b$, $p$ and $q$ are natural numbers such that $a<300$ and $\gcd(p, q)=1$. Is it possible for $$k=\frac{900q^2+ap^2}{3q^2+b^2 p^2}\tag3$$ to become an integer?" I think this is easier than original problem, but I don't know how to proceed from here.
While your $(3)$ can be derived from $(2)$, if it is an integer does not guarantee that $(1)$ will be an integer as well. Note that it has the four variables $a,b,p,q$ that can "integerize" your expression, while you're stuck with only $a,b$ for $(1)$. (For example, $a,b,p,q =3,2,9,5$ makes $(3)$ an integer, but $a,b = 3,2$ does not for $(1)$.) Also, your original problem should explicitly specify the constraint that $a \neq 300b^2$. If not, then $a = 300b^2$ yields an infinite family which always has $k=300$. A computer search with $a<10000$ and $b<100$ yields the infinite family and, $$a,b,k = 10,2,130$$ $$a,b,k = 20,1,140$$ $$a,b,k = 30,2,30$$ which seem to be the only other solutions.
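The computer search is easy to reproduce; a sketch (shrink the bounds if it is too slow):

```python
solutions = []
for a in range(1, 10000):
    for b in range(1, 100):
        num = a**3 + 300**2
        den = a**2 * b**2 + 300
        if num % den == 0:
            k = num // den
            if a != 300 * b**2:          # skip the infinite family a = 300 b^2, k = 300
                solutions.append((a, b, k))
print(solutions)   # per the search above: [(10, 2, 130), (20, 1, 140), (30, 2, 30)]
```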
{ "language": "en", "url": "https://math.stackexchange.com/questions/1424978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to calculate log10 x without using log? Is it possible to calculate $\log_{10} x$ without using $\log_{10}$? I'm interested because I'm working with a framework that has some simple functions, but log is not one of them. The specific platform is capable of doing addition, subtraction, multiplication and division. I can write a formula of a finite, predefined length; e.g. while loops or for loops that continue indefinitely are not available. It's a black-box project with an interface capable of creating formulas, so basically, I need to write the expression to calculate $\log_{10} x$ in one line, and I can't write my own method or something to do more dynamic calculations. An approximation of $\log_{10} x$ is perfectly acceptable.
Let x = 1545 and proceed as follows:

*Divide x by the base 10 until x becomes smaller than the base 10: 1545/10 = 154.5, 154.5/10 = 15.45, 15.45/10 = 1.545. As we divided the value of x by 10 three times, the whole-number part of our logarithm will be 3.
*Raise x to the tenth power: 1.545^10 = 77.4969905...
*Repeat the procedure from step 1. As we divide only once, the next digit of our logarithm will be 1.
*Repeat step 2: 7.74969905...^10 = 781,354,964.875286221...
*Repeat step 1. Since we divide x by 10 eight times, the next digit in our log will be 8.

And so on, for as many digits as you want.
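The procedure above is the classical digit-by-digit algorithm for logarithms. Here is a sketch of it as a short loop (the repeated tenth power is just repeated multiplication, so only multiplication, division and comparison are needed):

```python
def log10_digits(x, digits=8):
    """Approximate log10(x) for x >= 1, one decimal digit per iteration."""
    result, scale = 0.0, 1.0
    for _ in range(digits):
        exponent = 0
        while x >= 10:          # step 1: divide by 10 until x < 10
            x /= 10
            exponent += 1
        result += exponent * scale
        x = x ** 10             # step 2: raise to the tenth power
        scale /= 10             # the next digit sits one decimal place further right
    return result

print(log10_digits(1545.0))     # ~3.188928, matching math.log10(1545)
```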
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 6 }
Find the value of K in a specific case of a cartesian plane I have the equation of a line in a cartesian plane $r:= \{(x,y) \in \mathbb{R}^2 \mid kx-(k+1)y+k-1=0, \,\, k \in \mathbb{R}\}$ and I have to find the values of $k$ for which the line intersects the $x$ axis at a point with positive $x$-coordinate. Any tips?
Write the line in intercept form (for $k\neq 0$ and $k\neq -1$): $$ \dfrac{x}{(1-k)/k} +\dfrac{y}{(k-1)/(k+1)} = 1 $$ The $x$-intercept is $\dfrac{1-k}{k}$, which is positive exactly when $0<k<1$. For $k=1$ the line passes through the origin, for $k>1$ or $k<0$ the $x$-intercept is negative, and for $k=0$ the line is $y=-1$, which has no $x$-intercept.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
torsion formula for a parametric space curve I managed to prove $$ k(t) = \frac{ | a^{\prime} \times a^{\prime \prime} | }{ | a^{\prime} | ^3 } $$ for a regular parametric curve $a : I \to \mathbb{R}^3 $, where $k(t)$ stands for its curvature, but I am stuck in proving $$ \tau (t) = \frac{( a^{\prime} \times a^{\prime \prime} ) \cdot a^{\prime \prime \prime }}{ |a^{\prime} \times a^{\prime \prime} | ^2 } $$ Can I ask anybody to give me any hint or reference? Thanks in advance.
It's going to take too long to type it all out in MathJax, so I have uploaded two photos of the derivations of the formulas for curvature and torsion. [The two photos are not reproduced here.]
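Since the photos are not available here, a quick symbolic sanity check of both formulas from the question on a standard helix may be a useful substitute (a sketch; for the helix $(a\cos t,\, a\sin t,\, bt)$ the known values are $\kappa=a/(a^2+b^2)$ and $\tau=b/(a^2+b^2)$):

```python
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
r = sp.Matrix([a*sp.cos(t), a*sp.sin(t), b*t])       # a circular helix
r1, r2, r3 = r.diff(t), r.diff(t, 2), r.diff(t, 3)

def norm(v):
    return sp.sqrt(v.dot(v))

curvature = norm(r1.cross(r2)) / norm(r1)**3
torsion = (r1.cross(r2)).dot(r3) / norm(r1.cross(r2))**2

print(sp.simplify(curvature))   # expect a/(a**2 + b**2)
print(sp.simplify(torsion))     # expect b/(a**2 + b**2)
```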
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Determining values of a coefficient for which a system is and isn't consistent. Given the system : \begin{array}{ccccrcc} x & + & 2y & + & z & = & 3 \\ x & + & 3y & - & z & = & 1 \\ x & + & 2y & + & (a^2-8)z & = & a \end{array} Find values of $a$ such that the system has a unique solution, infinitely many solutions, or no solutions. I begin by placing the system into an augmented matrix. $\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & 1 & 3 \\ 1 & 3 & -1 & 1 \\ 1 & 2 & a^2-8 & a \\ \end{array} \right] $ I then perform operations on the matrix. $\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & 1 & 3 \\ 1 & 3 & -1 & 1 \\ 1 & 2 & a^2-8 & a \\ \end{array} \right] $ $: (R_3-R_1)\rightarrow$ $\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & 1 & 3 \\ 1 & 3 & -1 & 1 \\ 0 & 0 & a^2-9 & a-3 \\ \end{array} \right] $ $\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & 1 & 3 \\ 1 & 3 & -1 & 1 \\ 0 & 0 & a^2-9 & a-3 \\ \end{array} \right] $ $: (R_2-R_1; R_3/(a-3))\rightarrow$ $\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & 1 & 3 \\ 0 & 1 & -2 & -2 \\ 0 & 0 & a+3 & 1 \\ \end{array} \right] $ $\displaystyle \left[ \begin{array}{rrr|r} 1 & 2 & 1 & 3 \\ 0 & 1 & -2 & -2 \\ 0 & 0 & a+3 & 1 \\ \end{array} \right] $ $: (R_1-2R_2)\rightarrow$ $\displaystyle \left[ \begin{array}{rrr|r} 1 & 0 & 5 & 3 \\ 0 & 1 & -2 & -2 \\ 0 & 0 & a+3 & 1 \\ \end{array} \right] $ Truth be told, at this point, I'm not too clear on how to progress (or if my approach is even ideal for this issue). I desire to find intervals across all of $a\in\mathbb{R}$ that explain where $a$ causes the system to have a unique, infinite, or inconsistent solution set.
What strikes me immediately is that if $a^2-8 = 1$, the first and last equations have the same LHS. Since choosing $a=3$ makes these equations identical, this becomes only two equations in three unknowns, so there are an infinite number of solutions. On the other hand, if you choose $a=-3$, then the first and third equations are contradictory, so there are no solutions in this case. I will leave the discussion of other values of $a$ to others.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can this function have two different antiderivatives? I'm currently operating with the following integral: $$\int\frac{u'(t)}{(1-u(t))^2} dt$$ But I notice that $$\frac{d}{dt} \frac{u(t)}{1-u(t)} = \frac{u'(t)}{(1-u(t))^2}$$ and $$\frac{d}{dt} \frac{1}{1-u(t)} = \frac{u'(t)}{(1-u(t))^2}$$ It seems that both solutions are possible, but that seems to contradict the uniqueness of Riemann's Integral. So the questions are: * *Which one of them is the correct integral? *If both are correct, why the solution is not unique? *The pole at $u(t)=1$ has something to say?
I will explain the concept using the derivative since you are already pretty familiar with that. Let's define $\int \:f\left(x\right)dx$ to be some function $g(x)$ where $\frac{d}{dx}\left(g\left(x\right)\right)=f\left(x\right)$. (Note this is not the exact definition of an anti-derivative, but an intuitive way of thinking about it.) So if $g\left(x\right)=x^2,\:\frac{d}{dx}\left(g\left(x\right)\right)=f\left(x\right)=2x$, and in reverse, $\int \:f\left(x\right)dx=x^2$. But what happens if $g\left(x\right)=x^2+5,\:\frac{d}{dx}\left(g\left(x\right)\right)=f\left(x\right)=2x$? Notice the 5 is a constant and disappears when taking the derivative. If we apply the reverse, we have no way of getting back the 5. In reverse you still get $\int \:f\left(x\right)dx=x^2$. So now here comes the problem: when you are going in reverse you have no idea what the constant is. This is purely because the constant disappears when taking the derivative. For example $\frac{d}{dx}\left(x^2+8\right)=\frac{d}{dx}\left(x^2+5\right)=\frac{d}{dx}\left(x^2+1\right)$! So when finding the anti-derivative you will get a function plus or minus some constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 6, "answer_id": 5 }
Finding the $n$th derivative of trigonometric function.. My maths teacher has asked me to find the $n$th derivative of $\cos^9(x)$. He gave us a hint which are as follows: if $t=\cos x + i\sin x$, $1/t=\cos x - i\sin x$, then $2\cos x=(t+1/t)$. How am I supposed to solve this? Please help me with explanations because I am not good at this. And yes he's taught us Leibniz Theorem. Thanks.
De Moivre taught us that if $t=\cos x + i\sin x$ then $t^n = \cos(nx) + i\sin(nx)$ and $t^{-n} = \cos(nx) - i\sin(nx)$ so $$ t^n + \frac 1 {t^n} = 2\cos(nx). $$ Then, letting $s=1/t$, we have \begin{align} & (2\cos x)^9 =(t+s)^9 \\[10pt] = {} & t^9 + 9t^8 s + 36t^7 s^2 + 84 t^6 s^3 + 126 t^5 s^4 + 126 t^4 s^5 + 84 t^3 s^6 + 36 t^2 s^7 + 9 t s^8 + s^9 \\ & {}\qquad \text{(binomial theorem)} \\[10pt] = {} & t^9 + 9t^7 + 36 t^5 + 84 t^3 + 126 t + 126 \frac 1 t + 84 \frac 1 {t^3} + 36 \frac 1 {t^5} + 9 \frac 1 {t^7} + \frac 1 {t^9} \\[10pt] = {} & \left( t^9 + \frac 1 {t^9} \right) + 9\left( t^7 + \frac 1 {t^7} \right) + 36\left( t^5 + \frac 1 {t^5} \right) + 84 \left( t^3 + \frac 1 {t^3} \right) + 126\left( t + \frac 1 t \right) \\[10pt] = {} & 2\cos(9x) + 18 \cos(7x) + 72\cos(5x) + 168\cos(3x) + 252\cos x. \end{align} Now find the first, second, third, etc. derivatives and see if there's a pattern that continues every time you differentiate one more time.
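A quick numerical check of this expansion before differentiating (a sketch):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 7)
lhs = (2 * np.cos(x)) ** 9
rhs = (2 * np.cos(9 * x) + 18 * np.cos(7 * x) + 72 * np.cos(5 * x)
       + 168 * np.cos(3 * x) + 252 * np.cos(x))
print(np.max(np.abs(lhs - rhs)))    # ~1e-12, so the identity holds numerically
```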
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Compact and Open subsets of $\ell^p$ Let $(a_n)_1^{\infty}$ be a sequence of positive real numbers. Consider $$ A = \{x \in \ell^p : |x_n| < a_n \ \ \forall n\}$$ $$ B = \{x \in \ell^p : |x_n| \leq a_n \ \ \forall n\} $$ I'm interested to know under what conditions imposed on $a_n$ would $A$ be an Open Set and likewise what conditions would make $B$ Compact I was able to show the special case of $\ell^{\infty}$ where $B$ is Compact $\iff a_n \in c_0$ Similiarly I think $B$ is Compact in $\ell^{p}$ if $a_n \ \in \ell^{p}$ for $p < \infty$ However I'm not sure if the converse is true. Thoughts on this and on $A$ would be much appreciated Some info is available here at page 4 and 5, Proposition #2 and Proposition #3 http://www.math.mcgill.ca/jakobson/courses/ma354/snarski-analysis3.pdf
The notes you provided shows that $A$ is open if and only if $1\le p<\infty$ and $\inf a_n>0$. You are correct that if $p=\infty$, $B$ is compact if and only if $a_n\to0$ (this is also shown in the notes). You are also correct that if $1\le p<\infty$, $B$ is compact if and only if $(a_n)\in\ell^p$. I'll sketch a proof. Assume $(a_n)\in\ell^p$ and suppose $(x^k)_k$ is a sequence in $B$, so for each $k$, $(x^k_n)_n\in B$. For each $n$, the sequence $(x^k_n)_k$ is bounded in $\mathbb C$ (or $\mathbb R$ if we are only dealing with real sequences, but it really doesn't matter). By using a diagonal method, we can find a single subsequence $(k_j)_j$ such that $x_n^{k_j}\to x_n$ as $j\to\infty$ for all $n$. This implies $|x_n|\le a_n$ and so $x\in\ell^p$, hence $x\in B$. It suffices to show that $x^{k_j}\to x$ in $\ell^p$, that is, to show $$\lim_{j\to\infty}\sum_{n=1}^\infty|x_n^{k_j}-x_n|^p=0.$$ Since $(a_n)\in\ell^p$, this follows from the dominated convergence theorem, proving $B$ is compact. Now suppose $(a_n)\notin\ell^p$. Define $x_n=(a_1,a_2,\ldots,a_n,0,0,\ldots)\in\ell^p$. Then clearly $x_n\in B$ for all $n$ but $\|x_n\|_p\to\infty$, so $B$ is not bounded and hence not compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1425858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Essentially bounded function on $\mathbb{R}$ I don't really know a lot about measure (just finishing my undergrad) so I'm not really on good terms with this. So, let $L^\infty[a,b]$ denote the space of all essentially bounded functions on $[a,b]$ with the norm $\left\| f \right\|_\infty = \operatorname{ess} \sup_{x\in[a,b]} |f(x)|$. What would exactly be the difference between a bounded and an essentially bounded function (also, the difference between the supremum and essential supremum)? If I'm not actually dealing with measures, but just with real variable functions $f:[a,b]\to\mathbb{R}$, can I omit the "essentially" part and just say that $L^\infty[a,b]$ denotes the space of bounded functions, and, additionally, omit the $\text{ess}$ in the definition of the norm? I understand that this seems like a silly question, but I've tried Googleing this and found nothing useful.
An essentially bounded function $f\in L^{\infty}([a,b])$ is explicitly related to the idea of measure, so I don't think there's a way to understand this without measures. The definition can be stated as: $f\in L^{\infty}([a,b])$ if there is a $g$ measurable on $[a,b]$, $f=g$ except on a set of measure zero, and $g$ is bounded. Two examples: $$f_1(x)=\begin{cases} x & \text{ if } x\in\mathbb{Q}\\ 0 & \text{ otherwise}\end{cases}$$ Then $f_1=0$ except on a set of measure $0$, namely $\mathbb{Q}$, so $f_1$ is essentially bounded on $\mathbb{R}$, although $f_1$ is clearly not bounded on $\mathbb{R}$. Next, $$f_2(x)=\begin{cases} 1 & \text{ if } x\in \mathbb{Q}\\ 0 & \text{ otherwise}\end{cases}$$ Then $f_2=0$ except on a set of measure zero, so $f_2$ is also essentially bounded. Moreover, $f_2$ is also bounded, $|f_2|\leq 1$ and $\sup |f_2|=\|f_2\|=1$. However, note that $\|f_2\|_\infty\not=\|f_2\|$: the essential supremum here is $0$. So even if a function is both essentially bounded and bounded, its essential supremum is not necessarily equal to the supremum; it can be strictly smaller (it is never larger). Thus, the essential bound and the supremum bound are fundamentally different.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 2 }
Is there any winning strategy? 2015 and Game with marbles!!! Two players, Alex and Brad, take turns removing marbles from a jar which initially contains $2015$ marbles. Assume that on each turn the number of marbles withdrawn is a power of two. If Alex has the first turn and the player who takes the last marble wins, is there a winning strategy for either of the players? I tried to solve it as follows: First) I assume that both players know how many marbles the other has taken. Second) The problem says they pick powers of 2, so the possible picks are $[1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]$, since there are only $2015$ marbles. Third) The game has 2 players. Continuing: a) The only odd pick is $1$, since $2^0$ is $1$; the rest are even. If player A picks odd (1), B must pick an even number, and then A keeps picking even and so does B until the $2015$ are gone. And if player A picks even (2, 4, etc.), B must pick the odd number (1), and then A keeps picking even and so does B until the $2015$ are gone. I cannot see how I can use 2015 to set a winning rule; I also tried using binary but I couldn't make any progress. Please let me know what I am doing wrong and how I can continue. Thanks, Greg Martin and Shailesh, for your help. Note: I tried to use brute force (Excel ;-) ) and I see that multiples of 3 are the winning pattern, but I can't put my assumption into words with the $2015$ number. Thanks again.
I think this is essentially the same as what @PSPACE-Hard said, but here is my take on the winning strategy. A winning strategy revolves all around the number $3$. This is because:

*Every number is either a multiple of $3$, or is $1$ ($2^0$) or $2$ ($2^1$) below a multiple of $3$.
*Also, if either player is left with the number $3$, they are guaranteed to lose, because whether they choose $1$ or $2$, the other player is left with $2$ or $1$, and it is possible to win from either of these two numbers.

As a number that is a power of $2$ is just like any other number, it fits with point (1), so either $2^n+1$ or $2^n+2$ is a multiple of $3$. Obviously $2^n$ cannot itself be a multiple of $3$, given that its only prime factor is $2$. As long as the player does this right up until $2^1+1$, they will leave the other player with $3$, which is a guaranteed victory given point (2). Therefore, the strategy to win is simply to leave the other player with a number of the form $3n$, where $n$ is a whole number. The thing that makes this a guaranteed win is that the other player will not be able to reply with a number that is divisible by $3$, because by taking away a power of $2$ (or the number $1$) from a multiple of $3$, you are left with a number that isn't a multiple of $3$. This means that the first player is guaranteed to win, as long as they:

*Play the number $2$ as their first number, leaving $2013$, which is divisible by $3$, and
*Play tactically to ensure that after each of the other player's moves, they play $1$ or $2$ to leave a number divisible by $3$.
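The "multiples of 3 are losing positions" pattern is easy to confirm by dynamic programming over all positions (a sketch):

```python
N = 2015
powers = [2**i for i in range(11) if 2**i <= N]    # allowed moves: 1, 2, 4, ..., 1024

win = [False] * (N + 1)       # win[m]: the player to move with m marbles left can force a win
for m in range(1, N + 1):
    win[m] = any(m >= p and not win[m - p] for p in powers)

print(win[2015])                                            # True: the first player wins
print(all(win[m] == (m % 3 != 0) for m in range(N + 1)))    # True: losing iff multiple of 3
```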
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to find the general formula for this recursive problem? I want to ask about recursive problem. Given: $$a_0= 11, a_1= -13,$$ and $$a_n= -a_{n-1} +2a_{n-2}.$$ What is the general formula for $$a_n$$ ? I've already tried to find the first terms of this series. From there, I got: $$a_2 = 35, a_3= -61,$$ and $$a_4= 131.$$ From there, I think I need to use the rule from arithmetic and geometric series to find the general formula that I want to find. But, I cannot find the certain pattern from this series, because the differences is always changing, such that -24, 48, -96, 192. From there I think the general formula for a_n should be including (-1)ⁿ. But, how can we deal with series 24,48,96,192? It seems the series is geometric, but how can we find the formula? Thanks
Linear Recurrence Equations have typical solutions $a_n=\lambda^n$. Using this, we can compute the possible values of $\lambda$ for this equation from $$ \lambda^n=-\lambda^{n-1}+2\lambda^{n-2} $$ which means, assuming $\lambda\ne0$, that $$ \lambda^2+\lambda-2=0 $$ This is the characteristic polynomial for the recurrence $$ a_n=-a_{n-1}+2a_{n-2} $$ The characteristic polynomial is $x^2+x-2$ which has roots $1$ and $-2$. Thus, the sequence is $a_n=b(1)^n+c(-2)^n$. Plugging in the values for $n=0$ and $n=1$ gives $$ a_n=3+8(-2)^n $$
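A short check that the closed form matches the recurrence and the given initial values (a sketch):

```python
def closed_form(n):
    return 3 + 8 * (-2) ** n

a = [11, -13]
for n in range(2, 10):
    a.append(-a[n - 1] + 2 * a[n - 2])

print(a)
print([closed_form(n) for n in range(10)])   # identical lists
```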
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
For what positive value of $c$ does the equation $\log(x)=cx^4$ have exactly one real root? For what positive value of $c$ does the equation $\log(x)=cx^4$ have exactly one real root? I think I should find a way to apply IMV and Rolle's theorem to $f(x) = \log(x) - cx^4$. I think I should first find a range of values for $c$ such that the given equation has a solution and then try to find one that gives only a unique solution. I have thought over it but nothing comes to my mind. Perhaps I'm over-complicating it. Any ideas on how to proceed are appreciated.
One may use a slightly geometric argument for this problem. Notice that $\ln(x)=cx^4$ when $f(x)=cx^4-\ln x=0$. For $c>0$, $f$ has a unique minimum, which occurs where $f'(x)=4cx^3-\frac{1}{x}=0$, and the equation has exactly one root precisely when this minimum value is $0$, i.e. when the curve $cx^4$ is tangent to $\ln x$. Thus, we have the system of equations \begin{align} f(x)&=cx^4-\ln x=0\\ f'(x)&=4cx^3-\frac{1}{x}=0. \end{align} Solve the second equation for $x^4$ to get $x^4=\frac{1}{4c}$. Substitute this into the first equation to get $\ln x = \frac{1}{4}$, which implies $x=e^{1/4}$. Finally, substitute this value of $x$ into $\frac{1}{4}=\ln x=cx^4$ to find that $c=\frac{1}{4e}$.
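A numerical confirmation (a sketch) that at $c=\frac{1}{4e}$ the minimum of $f(x)=cx^4-\ln x$ is $0$, so the two curves just touch and the equation has exactly one root:

```python
import numpy as np
from scipy.optimize import minimize_scalar

c = 1 / (4 * np.e)
f = lambda x: c * x**4 - np.log(x)

res = minimize_scalar(f, bounds=(0.1, 5.0), method='bounded')
print(res.x, res.fun)    # minimizer ~ e**0.25 ~ 1.284, minimum value ~ 0
```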
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Left and right multiplying of matrices I am new to matrix multiplication and trying to understand something. Suppose you have a matrix equation $ A x=b $. I know that to solve for $x$ you should left multiply by the inverse of $A$. But what is the reason you can't solve for $x$ like this: $ A A^{-1} x=b A^{-1} $, so that $ x=b A^{-1} $? I tried with an example and saw that it doesn't work, but I don't have a good understanding of the mechanics of why not. What if you had something like this instead: $ A x = By $. Could you solve it like this, $ AA^{-1} x = BA^{-1}y $, and then get the solution $ x = BA^{-1}y $, or do you have to solve it like this, $ A^{-1}A x = A^{-1}By $, to get the solution $ x = A^{-1}By $?
You are asking why $AB=A'B'$ does not imply $AEB = A'EB'$. You noticed that the claim is false. So you should not ask yourself "why is it false?" (which is nonsense) but rather "why did I think it could be true?". You know that applying the same operation to equal objects will yield equal results. So if $A=B$ you can right multiply and obtain $AC=BC$, or you can left multiply and obtain $CA=CB$. If multiplication were commutative you could also multiply "in the middle", since the order of factors would not be relevant. But when the operation is not commutative this is not admissible. Inserting a factor in the middle of a product, as in $AB \to AEB$, is not an operation performed on the object $AB$ itself, so there is no reason to think that $AB=A'B'$ would imply $AEB = A'EB'$.
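A small numerical illustration (a sketch) of why the side on which you multiply matters:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
X = np.array([[1.0, 0.0], [2.0, 1.0]])
B = A @ X                               # so  A X = B

Ainv = np.linalg.inv(A)
print(np.allclose(Ainv @ B, X))   # True:  left-multiplying both sides by A^{-1} recovers X
print(np.allclose(B @ Ainv, X))   # False: B A^{-1} is generally not A^{-1} B
```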
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
prove isomorphisms using first isomorphism theorem Using the first isomorphism theorem, prove the following isomorphisms:

*$\Bbb R/\Bbb Z\xrightarrow\sim S^1$
*$\Bbb C/\Bbb R\xrightarrow\sim \Bbb R$
*$\Bbb C^\times/\Bbb R^\times_+\xrightarrow\sim T$
*$\text{GL}_n(\Bbb C)/\text{SL}_n(\Bbb C)\xrightarrow\sim \Bbb C^\times$
*$\text{GL}_n(\Bbb Z)/\text{SL}_n(\Bbb Z)\xrightarrow\sim \{\pm1\}$

I only know the statement of the first isomorphism theorem, hence solutions of these questions might help me to grasp its application. Thanks.
For (2), use $f(x+iy)=y$. For (4), use $f(A)=\det A$. For the last one, use $f(A)=\det A$, which takes values in $\{\pm1\}$. (For (1) one can use $f(x)=e^{2\pi i x}$, and for (3) $f(z)=z/|z|$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can anyone give me $k$ projection maps $P_i:V \to V$; $i=1,...,k$, such that $\sum_{i=1}^k Im(P_i)$ is not a direct sum but $\sum_{i=1}^k P_i=Id$. Can anyone give me $k$ projection maps $P_i:V \to V$; $i=1,...,k$, such that $\sum_{i=1}^k Im(P_i)$ is not a direct sum but $\sum_{i=1}^k P_i=Id$? I think in $\Bbb R^2$ I can't get one, but I am not getting any example right now and find it difficult to see.
CLARIFICATION: This answers the case of finite-dimensional vector spaces over $\mathbb R$ (or any field of characteristic zero). Such a counterexample does not exist. Indeed, let $V_i={\textsf{Im}}(P_i)$ and $r_i={\textsf{dim}}(V_i)={\textsf{trace}}(P_i)$, since $P_i$ is a projector. Since $\sum_{i=1}^k P_i=\textsf{Id}$, we have $x=\sum_{i=1}^k P_i x$ for any $x\in V$, whence $V=\sum_{i=1}^k V_i$ and in particular $$\textsf{dim}(V) \leq \sum_{i=1}^k \textsf{dim}(V_i)=\sum_{i=1}^k r_i.\tag{1}$$ But taking traces in $\sum_{i=1}^k P_i=\textsf{Id}$, we see that (1) must in fact be an equality, which forces the sum to be direct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do these 2 nested for loops differ in terms of Big Oh Loop 1: sum $\gets 0$ for $i\gets 1$ to $n$ do $~~~~$ for $j \gets 1$ to $i^2$ do $~~~~~~~~~$ sum $\gets$ sum + ary$[i]$ Loop 2: sum $\gets 0$ for $i\gets 1$ to $n^2$ do $~~~~$ for $j \gets 1$ to $i$ do $~~~~~~~~~$ sum $\gets$ sum + ary$[i]$ I know this summation formula is $\sum_{i=1}^{n}i^2=n\left(n+1\right)\left(2n+1\right)/6\sim n^{3}/3,$
The first one runs in $$ \sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \Theta\left(n^3\right) $$ and the second one takes $$ \sum_{i=1}^{n^2} i = \frac{n^2\left(n^2+1\right)}{2} = \Theta\left(n^4\right) $$
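A quick empirical cross-check of these two sums (my own sketch, not part of the original answer; the function names and the choice of Python are mine): it counts the inner-loop executions directly and compares them with the closed forms above.

```python
def count_loop1(n):
    # inner-loop executions of Loop 1: sum_{i=1}^{n} i^2
    total = 0
    for i in range(1, n + 1):
        for _ in range(1, i * i + 1):
            total += 1
    return total

def count_loop2(n):
    # inner-loop executions of Loop 2: sum_{i=1}^{n^2} i
    total = 0
    for i in range(1, n * n + 1):
        for _ in range(1, i + 1):
            total += 1
    return total

for n in (5, 10, 20):
    assert count_loop1(n) == n * (n + 1) * (2 * n + 1) // 6   # Theta(n^3)
    assert count_loop2(n) == (n * n) * (n * n + 1) // 2       # Theta(n^4)
print("counts match the closed forms")
```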
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Some clarification needed on the Relation between Total Derivative and Directional Derivative I will consider here functions of several variables only. If both directional derivative $D_{v}f(x)$ at $x$ along $v$ and total derivative $D f(x)$ at $x$ exist then $$D_{v}f(x)=Df(x)(v).$$ Existence of the total derivative ensures that of the directional derivative in every direction but not the other way round. There are functions which have, at some point of the domain, a directional derivative in every direction but are not differentiable at that point, i.e. the total derivative at that point does not exist. Now, all of my knowledge is theoretical. I cannot see the picture clearly, i.e. the picture of the two kinds of derivative existing together, or one existing and not the other - how do these work? I mean some geometrical interpretation for say $2$ or $3$ dimensional space would help. I am so confused with this thing, I am not even sure if I have managed to convey my problem properly. Please help with some clarification. Thanks.
There’s a very nice discussion of this topic on Math Insight that might be helpful. The key point for your question is that directional derivatives only “look” along straight lines, but the total derivative (also called the differential) requires you to look at all ways to approach the point. For a function on the real line, there are only two ways to do this—from the right and left—but once you move to functions on the plane and beyond, there are suddenly many, many paths available. For a two-dimensional surface in three dimensions, you can think of the differential as specifying the tangent plane to the surface at a point. For the differential to exist, the tangent vectors to the surface at this point of all smooth paths along the surface which pass through the point must lie in the same plane. Directional derivatives correspond to only those paths whose projections onto the $x$-$y$ plane are straight lines. There are clearly many other smooth paths through the point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Finding global maximizers and minimizers I want to find whether a global maximum or minimum exists for $ f(x,y)=e^{-(x^2+y^2)}$. I found that $(0,0)$ is the only critical point. The Hessian matrix $H_{(f)}(0,0)$ was negative definite and so $(0,0)$ is a local maximizer. There are a few things I need to clarify. 1) As $(0,0)$ is the only critical point and it is a local maximum, does it indicate that it is a global maximum as well, because it is the only critical point? 2) Using the principal minor method on the Hessian matrix, $\frac{\partial^2 f}{\partial x^2}=2(2x^2-1)\,e^{-(x^2+y^2)}$, which is the first minor. And this is $<0$ for $(0,0)$ but for $(2,0)$ it is $>0$. For a global min/max shouldn't the Hessian be positive/negative definite for all $x \in \mathbb R^2$? So in this case, to have a global maximum isn't it necessary that the first minor be negative for all $x \in \mathbb R^2$? 3) $\lim\limits_{x,y \rightarrow +\infty}f(x,y) =0$. Also $\lim\limits_{x,y \rightarrow -\infty}f(x,y) =0$. From this how do I determine if the function has a global maximum or global minimum at $(0,0)$?
While it is always nice to practice multi-variable extremal analysis, with partial derivatives, gradients, Hessians, etc, sometimes a much simpler argument will do. Since the function $e^{-(x^2+y^2)}$ depends only on the length of the vector $(x,y)$, we can look at the function $e^{-r^2}$. We can justify this formally by passing to polar coordinates: $x=r\cos\theta$ and $y=r\sin\theta$. Now it is evident that $e^{-r^2}$ has a global maximum at $r=0$ and only there, and that it has no minimum, not global nor local. So $f$ has a global maximum at the origin $x=y=0$ and only there, and has no minimum anywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1426931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The number of distinct partial binary operations on a finite set of n elements I am asked to show that there are exactly $(n+1)^{n^2}$ partial binary operations on a finite set of n elements. My professor said that this can be done using a combinatoric argument, but I have failed to see how. Things I know: There are exactly $n^{n^2}$ different binary operations on any given finite set of n elements. Also every binary operation is a partial binary operation, but not vice versa. Therefore there should be more partial binary operations than there are plain binary operations. Any hints or clues would be much appreciated. Thank you for your time. DEFINITION: A binary operation on a set $S$ is a function from $S \times S$ to $S$.
HINT: Add an $(n+1)$-st value, $\text{undefined}$, to the set of possible values of the operation.
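To make the hint concrete, here is a small enumeration I'm adding (Python; the modeling choice of None for "undefined" and the function name are mine): a partial binary operation is a function from $S\times S$ to $S\cup\{\text{undefined}\}$, so for $n=2$ there are $3^4=81$ of them.

```python
from itertools import product

def count_partial_ops(n):
    S = list(range(n))
    values = S + [None]                       # None plays the role of "undefined"
    pairs = [(a, b) for a in S for b in S]    # the n^2 possible inputs
    # each assignment of a value (or None) to every pair is one partial operation
    return sum(1 for _ in product(values, repeat=len(pairs)))

for n in (1, 2, 3):
    assert count_partial_ops(n) == (n + 1) ** (n * n)
print("counts agree with (n+1)^(n^2)")
```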
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of the power series I like to determine where the following power series converges. $$\sum_{k=1}^\infty \dfrac{x^k}{k} $$ Since the harmonic series diverges, I think that the series would converge if I make the numerator small by forcing $|x|<1$, but I cannot rigoroulsy show where the series converges. How should I approach this problem?
Hint. You may use the ratio test, evaluating $$ \lim_{n\rightarrow\infty}\left|\frac{a_{n+1}(x)}{a_n(x)}\right| $$ with $$ a_n(x)=\frac{x^n}n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Solve $x^5 + y^5 + z^5 = 2015$ If $x, y, z$ are integer numbers, solve: $$x^5 + y^5 + z^5 = 2015$$ A friend of mine claims there is no known solution, and, at the same time, there is no proof that there is no solution, but I do not believe him. However, I wasn't able to make much progress disproving his claim. I tried modular arithmetics, but couldn't reach useful conclusion.
These problems are usually done allowing the variables to have mixed signs, some positive, some negative or zero. I think I will make this an answer. The similar problem for sums of three cubes has been worked on by many people; as of the linked article, the smallest number for which there are no congruence obstructions but no known expression is $$ x^3 + y^3 + z^3 = 33. $$ See THIS for the size of numbers involved. Indeed, on the seventh page, they give a list of numbers up to 1000 still in doubt, which starts out 33, 42, 74, 156... I see nothing wrong with suggesting that your problem could be in the same unsettled state, plus I do not think as many people have worked on the sum of three fifth powers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Looking to solve an integral of the form $\int_1^\infty (y-1)^{n-1} y^{-n} e^{(\alpha -\alpha n)\frac{(y-1) }{y}} \; dy$ Looking for a solution to the following integral. With $n \geq2$, $\alpha>1$, $$z(n,\alpha)=\frac{\left(\alpha (n-1)\right)^n}{\Gamma (n)-\Gamma (n,(n-1) \alpha )} \int_1^\infty (y-1)^{n-1} y^{-n} e^{(\alpha -\alpha n)\frac{(y-1) }{y}} \; dy $$ I am adding the numerical integration for different values of $\alpha$, showing how the integral behaves (for $n=5$ and $\alpha = 3/2$), as one answer was that it does not converge.
Integrand behaves as $\frac{C}{y}$ as $y\rightarrow\infty$; hence the integral diverges. Indeed, $$ (y-1)^{n-1} y^{-n} \sim y^{-1}$$ as $y\to\infty$, and $$ e^{(\alpha -\alpha n)\frac{(y-1) }{y}}\to e^{(\alpha -\alpha n) } $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to explain the Clairaut-Schwarz Theorem, $f_{xy}=f_{yx}$? I am looking for a non-technical explanation of Clairaut's theorem, which states that the mixed derivatives of smooth functions are equal. A geometrical or graphical explanation, or a demo, that explains the theorem and its implications will be helpful. I am not looking for a proof!
I don't think that there is a simple geometric or physical argument that makes this theorem intuitively obvious. In the following I try to explain what kind of information about $f$ the mixed partials do encode. That they are equal then follows from symmetry. Consider the small square $Q_h:=[-h,h]^2$ and add up the values of $f$ at the vertices of $Q_h$, albeit with alternating signs: $$\Phi(h):=f(h,h)-f(-h,h)+f(-h,-h)-f(h,-h)\ .$$ Assume that the mixed partial derivatives of $f$ are defined and continuous in a neighborhood of $(0,0)$. Then we can write $$\eqalign{\Phi(h)&=\int_{-h}^h f_x(t,h)\>dt-\int_{-h}^h f_x(t,-h)\>dt=\int_{-h}^h\int_{-h}^h f_{xy}(t,s)\>ds\>dt\cr &= f_{xy}(\tau,\sigma)|Q_h|\cr}$$ for some point $(\tau,\sigma)\in Q_h$. Doing the same calculation the other way around we obtain $$\Phi(h)= f_{yx}(\tau',\sigma')|Q_h|$$ for some point $(\tau',\sigma')\in Q_h$. Letting $h\to0+$ the claim follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Number Theory - Remainders A number is of the form $13k_1+12$ and of the form $11k_2+7$ That is $N = 13k_1 + 12 = 11k_2 + 7$ Now why must N also equal $(13 \times 11)k_3 + 51$ ? Thanks
Alternately, $N=13k_1+12 = 11k_2+7 \implies N-51 = 13(k_1-3)=11(k_2-4)$ Thus $N-51$ must be a multiple of both $13$ and $11$.
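A brute-force confirmation of this observation (a sketch I'm adding, not part of the original answer; the search range is arbitrary):

```python
# N = 13*k1 + 12 = 11*k2 + 7  should force  N = 51 (mod 143)
solutions = [N for N in range(1, 2000) if N % 13 == 12 and N % 11 == 7]
assert solutions, "no solutions found in range"
assert all(N % (13 * 11) == 51 for N in solutions)
print(solutions[:5])   # 51, 194, 337, 480, 623
```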
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Inverse vs Direct Limits This is probably a basic question but I haven't found anything satisfying yet. I'm trying to understand the difference between inverse and direct limits other than the formal definition. In my mind, an inverse limit is like $\mathbb{Z}_p$ and a direct limit is like the germ of functions at a point on a manifold. Perhaps these aren't the best ways to think about it, but it leads me to believe that inverse limits feel "big" and direct limits feel "small." But I've come across some confusion when seeing definitions like $$H^i(G, M) = \lim_{\to} H^i(G/H, M^H)$$ when I would have thought it would have gone in the other direction. Is it just a matter of formality depending on the direction of arrows, like how one calls left derived functors homology and right derived functors cohomology? Or is there a deeper distinction between these kinds of objects? Thanks.
Direct limits don't have to be small: any set is the direct limit of its finite subsets under inclusion, for instance. An inverse limit of some sets or groups is always a subset (subgroup) of their product, and dually a direct limit is a quotient of their disjoint union (direct sum), if that helps with intuition. I would say that in the end, yes, the difference is purely formal: every direct limit could be described as an inverse limit of a different diagram, although this would usually be very artificial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Conditional expectation given an event is equivalent to conditional expectation given the sigma algebra generated by the event This problem is motivated by my self study of Cinlar's "Probability and Stochastics", it is Exercise 1.26 in chapter 4 (on conditioning). The exercise goes as follows: Let H be an event and let $\mathcal{F} = \sigma H = \{\emptyset, H, H^c, \Omega\}.$ Show that $\mathbb{E}_\mathcal{F}(X) = \mathbb{E}_HX$ for all $\omega \in H.$ I'm not quite clear what I'm supposed to show, since when $\omega \in H$, then the $\sigma$-algebra is "reduced" to the event H, or am I misunderstanding something here?
That is really strange. How about showing the context? Anyway: If $\mathbb{E}_\mathcal{F}(X) = \mathbb{E}(X | \mathcal{F})$ and if $\mathbb{E}_HX = \mathbb{E} [X|H]$, then $\mathbb{E}(X | \mathcal{F}) = \mathbb{E}(X | H)1_H + \mathbb{E}(X | H^C)1_{H^C}$ If $\omega \in H$, then $$\mathbb{E}(X | \mathcal{F})(\omega) = \mathbb{E}(X | H)1_H(\omega) + \mathbb{E}(X | H^C)1_{H^C}(\omega)$$ $$= \mathbb{E}(X | H)(1) + \mathbb{E}(X | H^C)(0)$$ $$= \mathbb{E}(X | H)$$ But you probably already knew that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 1 }
Surjectivity of a map from a space to itself I am wondering how to prove that a non-zero degree map from $A \to A$ is surjective. For example, identifying $S^1 \subset \mathbb{C}$, we can take $f:S^1 \to S^1$ via $f(z) = z^k$ with $k\neq 0$. This is a map of degree $k$. How can I show that it is surjective? Note that I am interested in general topological spaces, not the circle example in specific.
If $f$ is not surjective it has degree zero. To see this, assume there is some $y$ not in the image of $f$, and factor $f:S^1\to S^1$ as $f=h\circ g$, with $g:S^1\to S^1\setminus\{y\}$, and $h:S^1 \setminus \{y\}\to S^1$. Since $S^1\setminus\{y\}$ is contractible, its first homology group is the trivial group, and so the induced homomorphism $f_{*}=h_{*}\circ g_{*}=0$, hence the degree is zero. A similar result holds for higher $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the solution space to $x_1x_4-x_2x_3=0$ in $\mathbb{R}$ a manifold? I was wondering whether the set of $(x_1,x_2,x_3,x_4)$ that solve the equation $x_1x_4-x_2x_3=0$ in $\mathbb{R}$ is a manifold. My first guess is that it is most likely not, because if I define a function $J(x_1,x_2,x_3,x_4)=x_1x_4-x_2x_3$ then $0$ is not a regular value, as $dJ(0,0,0,0)=0.$ But, as far as I know, this does not mean that it cannot be a manifold. If anything is unclear, please let me know.
No, it is not a manifold because it is a cone with apex the origin $O$, and a cone that is not a vector subspace is never a manifold. The fact that $dJ(0,0,0,0)=0$ indeed essentially implies that $J=0$ is not a manifold at $O$, but this is a very subtle point never explained in differential geometry books. See here for more precise explanations .
{ "language": "en", "url": "https://math.stackexchange.com/questions/1427954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Worker A and worker B doing a project OK, so embarrassingly I've forgotten how to do this type of problem which I'll generalize: $n$ workers of type $A$ can do a job in $x$ hours. $m$ workers of type $B$ can do the same job in $y$ hours. How long would it take $n_1$ workers of type $A$ and $m_1$ workers of type $B$ to do the job working together? Logically I can see that $1$ type $A$ worker should spend $n$ times as much time as $n$ workers take. Same with $B$. So a type $A$ worker can do the job in $nx$ hours and a type $B$ worker could do the job in $my$ hours. I just don't seem to be able to figure out how long a mixture of type $A$ and type $B$ workers take. Could someone please explain the process to me? Thanks.
$nx$ man-hours of type A = $ym$ man-hours of type B thus 1 man-hour of type B = $\dfrac{nx}{ym}$ man-hours of type A. Convert to type A so hours needed by given mix = $\dfrac{nx}{n_1 + \dfrac{nx}{ym}\cdot m_1}$
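The same value can also be obtained by adding work rates, which may be easier to remember; here is a small sketch (mine, with made-up test numbers) that computes the time both ways and checks that they agree.

```python
from fractions import Fraction as F

def hours_by_conversion(n, x, m, y, n1, m1):
    # the formula above: convert the type-B workers into type-A equivalents
    return F(n * x, 1) / (n1 + F(n * x, m * y) * m1)

def hours_by_rates(n, x, m, y, n1, m1):
    # one type-A worker does 1/(n*x) of the job per hour, one type-B worker 1/(m*y)
    rate = n1 * F(1, n * x) + m1 * F(1, m * y)
    return 1 / rate

example = (3, 10, 4, 6, 2, 5)   # n, x, m, y, n1, m1 -- arbitrary test numbers
assert hours_by_conversion(*example) == hours_by_rates(*example)
print(hours_by_rates(*example))   # 40/11 hours for this example
```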
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the sum of the following series to n terms $\frac{1}{1\cdot3}+\frac{2^2}{3\cdot5}+\frac{3^2}{5\cdot7}+\dots$ Find the sum of the following series to n terms $$\frac{1}{1\cdot3}+\frac{2^2}{3\cdot5}+\frac{3^2}{5\cdot7}+\dots$$ My attempt: $$T_{n}=\frac{n^2}{(2n-1)(2n+1)}$$ I am unable to represent to proceed further. Though I am sure that there will be some method of difference available to express the equation. Please explain the steps and comment on the technique to be used with such questions. Thanks in advance !
The $n$th term is $$n^2/(4n^2-1) =$$ $$ \frac{1}{4}.\frac {(4n^2-1)+1} {4n^2-1}=$$ $$\frac{1}{4} + \frac{1}{4}.\frac {1}{4n^2-1}=$$ $$\frac{1}{4}+ \frac {1}{4}. \left(\frac {1/2}{2n-1}- \frac {1/2}{2n+1}\right).$$ Is this enough?
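To spell out the remaining telescoping step (my own continuation of the hint, not part of the original answer): summing the terms above gives $$\sum_{k=1}^{n} \frac{k^2}{(2k-1)(2k+1)} = \frac{n}{4} + \frac{1}{8}\sum_{k=1}^{n}\left(\frac{1}{2k-1}-\frac{1}{2k+1}\right) = \frac{n}{4} + \frac{1}{8}\left(1-\frac{1}{2n+1}\right) = \frac{n(n+1)}{2(2n+1)},$$ which for $n=1$ gives $\frac{1}{3}$ and for $n=2$ gives $\frac{3}{5}$, matching the first partial sums.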
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
What is exactly the difference between a definition and an axiom? I am wondering what the difference between a definition and an axiom. Isn't an axiom something what we define to be true? For example, one of the axioms of Peano Arithmetic states that $\forall n:0\neq S(n)$, or in English, that zero isn't the successor of any natural number. Why can't we define 0 to be the natural number such that it isn't the successor of any natural number? In general, why is an axiom not a definition? Are there cases where we can formulate definitions as axioms or vice versa?
I'm not sure whether this answer will help, but if you look at the axioms and definitions listed on this page you might get to know what the difference is. Axioms act as fundamentals, while definitions are statements that use the axioms to say something about the objects involved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65", "answer_count": 13, "answer_id": 12 }
Probability of two Daniels in one group my professor gave us a "fun" problem to work on at home, but I am relatively new to probability. The question is as follows: There are four Daniels in a class of 42 students. If the class breaks into groups of three what is the probability that there is at least two Daniels in one group? I am trying to solve the easier problem "What is the probability that there is two Daniels?" However, I am stuck even here. I will show that I have this far and why. $$P(E)=\frac{{4 \choose 2}{40 \choose 1}}{{42\choose 3}}$$ I am using ${4 \choose 2}$ for choosing two daniels from 4, ${40 \choose 1}$, 1 student from the remaining forty, and ${42\choose 3}$ for groups of 3 possible given 42 students. I may have a deep misunderstanding of probability, however I feel that the thing I feel most unsure about is using ${40 \choose 1}$. Part of me feels like it should be ${42 \choose 1}$, but I am not entirely sure why. Any help, or clarifications of things I may be confused about will be greatly appreciated. Edit after feedback: Attempted solution to original question$$P(E)=\frac{{4 \choose 2}{38 \choose 1}}{{42\choose 3}}+\frac{{4 \choose 3}}{{42\choose 3}}= \frac{29}{1435}$$
The $P(E)$ you've calculated is correct. However, for solving the original problem, you'll have to calculate the probability of the event that there are $3$ Daniels in a group, and then add probabilities of both events. Edit: I re-read the problem, and your calculation seems a bit off. For event $E$, No of Daniels $= 4$; ways of choosing $2$ from $4$ = $4 \choose 2$ No of "Non-Daniels" $= 42 - 4 = 38$; ways of choosing $1$ from $38$ = $38 \choose 1$. Total groups of size $3 =$ ways of choosing $3$ from $42 =$ $42 \choose 3$
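For the simpler event treated here (at least two Daniels in one particular randomly chosen group of $3$ out of the $42$ students), a quick Monte Carlo check of the value $\frac{29}{1435}\approx 0.0202$ from the question's edit is easy to run; this is my own sketch, and the trial count and seed are arbitrary.

```python
import random

def trial():
    # one randomly chosen group of 3 from 42 students, 4 of whom are Daniels
    students = [1] * 4 + [0] * 38        # 1 marks a Daniel
    group = random.sample(students, 3)
    return sum(group) >= 2

random.seed(0)
trials = 200_000
estimate = sum(trial() for _ in range(trials)) / trials
print(estimate, 29 / 1435)               # both should be close to 0.0202
```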
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Proving basic fact about ordinals In a set of online lecture notes, I saw the following proposition. Let $C$ be a set of ordinals. Then $\sup \left\{ \alpha +\beta:\beta\in C \right\} =\alpha +\sup C$. How can I prove this?
For brevity, we'll write $\beta^+ := \beta+1$ for the successor ordinal of $\beta$. If $\sup(C) = 0$ then it's trivial. If $\sup(C) = \beta^+$ a successor ordinal, then $\beta^+ \in C$ so we get that the LHS is $\alpha + \beta^+$ immediately. If $\sup(C) = \lambda$ a nonzero limit, then if $\lambda \in C$, we're done as above, so suppose it's not in $C$. By definition of ordinal addition, $\alpha + \sup(C) = \alpha + \lambda = \sup \{ \alpha + \gamma : \gamma < \lambda \}$. That sup is the same as $\sup \{ \alpha + \beta : \beta \in C \}$: containment one way is because everything in $C$ is $< \lambda$, while containment the other is because $\lambda$ is the sup of $C$ so for any ordinal less than $\lambda$ we can find a larger ordinal which is also less than $\lambda$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is $5^{-1}$ in $\mathbb Z_{11}$? I am trying to understand what this question is asking and how to solve it. I spent some time looking around the net and it seems like there are many different ways to solve this, but I'm still left confused. What is the multiplicative inverse of $5$ in $\mathbb Z_{11}$. Perform a trial and error search using a calculator to obtain your answer. I found an example here: In $\mathbb Z_{11}$, the multiplicative inverse of $7$ is $8$ since $7 * 8 \equiv 56 \pmod {11}$. This example is confusing to me because $1 \pmod {11} \equiv 1$. I don't see how $56$ is congruent to $1 \pmod {11}$.
56 mod 11=1, because 56=55 (a multiple of 11)+1. So the answer is 9, since $$9*5= 44+1$$
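The "trial and error search" the exercise asks for is literally a loop over the candidates; a minimal sketch of it (mine):

```python
n = 11
for b in range(1, n):
    if (5 * b) % n == 1:
        print(b)        # prints 9, since 5*9 = 45 = 44 + 1
        break
```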
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Sum of the series $\sum\limits_{n=1}^{\infty }\frac{n}{3^n}$ I want to calculate the sum: $$\sum _{n=1}^{\infty }\:\frac{n}{3^n}\:$$ so $:\:\sum_{n=1}^{\infty}\:nx^n;\:x=\frac{1}{3}\:$ $$=x\sum_{n=1}^{\infty}\:nx^{n-1}=x\sum_{n=1}^{\infty}\:n\:\left(\int\left(x^{n-1}\right)dx\right)'=x\sum_{n=1}^{\infty}\:\left(x^n\right)' $$ now from here I would continue: $x\left(\frac{x}{1-x}\right)'=\frac{x}{\left(1-x\right)^2}$, which at $x=\frac{1}{3}$ gives $\frac{3}{4}$. In the answer that I saw, there is another step, from which we get the same result, but I don't understand why it is correct to do so: $$ x\sum_{n=1}^{\infty}\:\left(x^n\right)'=x\sum_{n=0}^{\infty} ({x^{n}})' =x\cdot \left(\frac{1}{1-x}\right)'=\frac{x}{\left(1-x\right)^2} $$ Is this just a spelling mistake?
Although this does not address the specific question, I thought it might be instructive to present another approach for solving a problem of this nature. So, here we go Let $S=\sum_{n=1}^\infty nx^n.\,\,$ Note that we could also write the sum $S$ as $S=\sum_{n=0}^\infty nx^n,\,\,$ since the first term $nx^n=0$ for $n=0$. We will use the former designation in that which follows. Observing that we can write $n$ as $n=\sum_{m=1}^n (1)$ (or $n=\sum_{m=0}^{n-1}(1)$), the series of interest $S$ can be written therefore $$\begin{align} S&=\sum_{n=1}^\infty \left(\sum_{m=1}^n (1)\right)x^n\\\\ &=\sum_{n=1}^\infty \sum_{m=1}^n (1)x^n\ \end{align}$$ Now, simply changing the order of summation yields $$\begin{align} S&=\sum_{m=1}^\infty (1)\left(\sum_{n=m}^\infty x^n\right)\\\\ &=\sum_{m=1}^\infty (1)\left(\frac{x^m}{1-x}\right)\\\\ &=\frac{x}{(1-x)^2} \end{align}$$ which recovers the result obtained through the well-known methodology of differentiation under the summation sign.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Inequation: quadratic difference equations Given: $$\frac{(x - 3)}{(x-4)} > \frac{(x + 4)}{(x + 3)}$$ Step 1: $$(x + 3)(x - 3) > (x + 4)(x - 4)$$ Step 2: Solving step 1: $$x^2 - 3^2 > x^2 - 4^2$$ Step 3: $ 0 > -16 + 9$ ??? As you see, I can delete the $x^2$, but there is no point in doing that. What should be the next step?
What you did in step 1 amounts to multiplying both sides by $(x-4)(x+3)$. Unfortunately, you have to reverse the inequation if this expression is negative, and leave it as is if it is positive. And as you don't know the sign of this product… You can simplify solving this inequation by writing both sides in canonical form: $$\frac{x-3}{x-4}=1+\frac1{x-4}>\frac{x+4}{x+3}=1+\frac1{x+3}\iff\frac1{x-4}>\frac1{x+3}$$ Multiplying both members by $(x-4)^2(x+3)^2$ (which is positive on the domain of the inequation), we obtain: $$(x-4)(x+3)^2> (x-4)^2(x+3)\iff7(x-4)(x+3)>0\iff\begin{cases}x<-3\\\text{or}\\x>4\end{cases}$$
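A quick numerical spot check of this solution set (my own sketch; the sample points are arbitrary): test the original inequality directly on both sides of $-3$ and $4$.

```python
def holds(x):
    # the original inequality; undefined at x = 4 and x = -3
    return (x - 3) / (x - 4) > (x + 4) / (x + 3)

inside  = [-10, -3.5, 4.5, 100]    # expected: inequality holds (x < -3 or x > 4)
outside = [-2.9, 0, 1, 3.9]        # expected: inequality fails (-3 < x < 4)
assert all(holds(x) for x in inside)
assert not any(holds(x) for x in outside)
print("numeric check agrees with x < -3 or x > 4")
```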
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Monotonicity, boundedness and convergence of the sequence $ \left\{ \frac{a^n}{n!} \right\} $. Hello everyone. I have a doubt about the following question: Let $ \left\{ \frac{a^n}{n!} \right\}, n \in \mathbb{N} $ be a sequence of real numbers, where $ a $ is a positive real number. a) For what values of $ a $ is the sequence above monotonic? And bounded? b) For what values of $ a $ is the sequence above convergent? Determine the limit of the sequence in that case. How do I start this question? Clearly 0 is a lower bound. But is there an $ n_{max} $ after which $ a^n $ is always smaller than $ n! $ ? As far as the monotonicity goes, I thought about splitting the problem into three cases: $ \bullet $ For $ 0 < a < 1 $: In this case, knowing that if $ 0 < a < 1 $, then $ a^{n+1} < a^n $, we can conclude that $ a_{n+1} $ is always smaller than $ a_n $ because the numerator is decreasing (as shown) and the denominator is obviously increasing. Thus, if $ a_{n+1} < a_n, \forall \,\, n \in \mathbb{N} $, the sequence is strictly decreasing and so it is monotonic. $ \bullet $ For $ a = 1 $: In this case, the sequence is $ \frac{1}{n!} $ which is always positive, has 0 as a lower bound and 1 as an upper bound, and thus it is bounded. Also, given that in this case $ a_{n+1} \leq a_n, \forall \,\, n \in \mathbb{N} $, the sequence is also monotonic. $ \bullet $ For $ a > 1 $: In this case, $ a^{n+1} > a^n, \forall \,\, n \in \mathbb{N} $. However, in order to know whether $ a_{n+1} < a_n $ or $ a_{n+1} > a_n $ we have to check whether $ a^n > n! $, which I do not know how to do. Could anyone help me with this question? Thanks for the attention. Kind regards, Pedro
If $a_n = \frac{a^n}{n!} $, then $\frac{a_{n+1}}{a_n} =\frac{\frac{a^{n+1}}{(n+1)!}}{\frac{a^n}{n!}} =\frac{a}{n+1} $. Therefore, if $a < n+1 $, then $a_{n+1} < a_n$, so $a_n$ is decreasing from that point on. In particular, for any positive real $a$, $a_n$ is eventually monotonically decreasing. If $n+1 < a$, then $a_{n+1} > a_n$, so $a_n$ is increasing. Therefore $a_n$ first increases and then decreases, with a peak at about $n=a$. Its value there is about (using Stirling) $\frac{n^n}{n!} \approx \frac{n^n}{\sqrt{2\pi n}\frac{n^n}{e^n}} = \frac{e^n}{\sqrt{2\pi n}} $. Note that if $a=n+1$, then $a_n = a_{n+1}$ since $\frac{(n+1)^n}{n!} =\frac{(n+1)^{n+1}}{(n+1)!} $. Many questions here show that $a_n \to 0$ as $n \to \infty$. You should be able to do this. A more difficult question would be to get a more accurate estimate for the maximum term depending on how close $a$ is to an integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Find the supremum and infimum of {x $\in$ [0,1]: x $\notin$ $\mathbb Q$}. Prove why your assertions are correct Ok I am lost from this question. Does that mean $x$ can only be $0$ or $1$? And it can't be any rational?
Does that mean x can only be 0 or 1? And it can't be any rational? No, it means $x$ is any irrational between $0$ and $1$. $\{x\in [0;1]: x\notin \Bbb Q\}$ is: the set of numbers from the real interval $0$ to $1$ inclusive, such that these numbers are also not in the rationals. Now, find the supremum of this set.   First, what is your definition of a supremum? A set $E ⊂ R$ is bounded above iff $∃ b ∈ R$ s.t. $a \leq b, ∀ a ∈ E$ ($b$ is an upper bound for $E$) No, that is not the definition of supremum. That is the definition of upper bound. A supremum is a special kind of upper bound. It's the least such. A value $s\in R$ is the supremum of $E$ if $s$ is an upper bound of $E$ and $\nexists b\in R: ((\forall a\in E: a\leq b) \wedge (b< s))$. Now, can you find such an upper bound for your set? Can you show that this is the supremum?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1428945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving that if $n \in \mathbb{Z}$ and $n^2 − 6n + 5$ is even, then $n$ must be odd. Prove that if $n \in \mathbb{Z}$ and $n^2 − 6n + 5$ is even, then $n$ must be odd. $p= n^2 - 6n + 5$ is even, $Q= n$ is odd Proof: Assume on the contrary that $n$ is even. Then $n= 2k$ for some $k \in \mathbb{Z}$. Then, $$n^2 -6n + 5= (2k)^2-6(2k)+5=4k^2-12k + 5$$ Unsure of where to go from here.
Hint: Note that $$n^2-6n=n(n-6).$$ Since $n$ and $n-6$ are both _____ or both _____, then we see that $n$ is _____ if and only if $n(n-6)$ is. Since $5$ is odd and $n(n-6)+5$ is even, then $n(n-6)$ must be _____, and so....
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Is $BAB'$ (with positive definite $A$ and full-row rank$(B) = k$) itself of rank $k$? Here's the setup: A matrix $B$ with dimensions $k \times p$ with $k \leq p$ and rank$(B) = k$. A matrix $A$ with dimensions $p \times p$ is positive definite (not necessarily symmetric). Question: Is the square matrix $BAB'$ (which will have dimensions $k \times k$) a full-rank matrix? I know that the rank is at most $k$, but if the rank can be shown to equal $k$, that will make things very easy for what I need to do. Note: $B'$ is the notation I use for the transpose of matrix $B$, partially because I suck at HTML. Thank you in advance.
Hint: motivate the following steps to prove your statement $BAB^Tx=0$ $\Rightarrow$ $x^TBAB^Tx=0$ $\Rightarrow$ $y^TAy=0$ $\Rightarrow$ $y=B^Tx=0$ $\Rightarrow$ $x=0$.
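A quick numerical illustration of the hint (my own sketch in NumPy, not part of the original answer): here "positive definite" is read as $y^T A y>0$ for all nonzero $y$, so $A$ is built as a symmetric positive definite matrix plus a skew-symmetric part.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p = 3, 5

B = rng.standard_normal((k, p))      # a random k x p matrix has full row rank a.s.

M = rng.standard_normal((p, p))
S = M @ M.T + p * np.eye(p)          # symmetric positive definite part
K = rng.standard_normal((p, p))
A = S + (K - K.T)                    # adding a skew part keeps y^T A y = y^T S y > 0

assert np.linalg.matrix_rank(B) == k
assert np.linalg.matrix_rank(B @ A @ B.T) == k
print("B A B^T has full rank k =", k)
```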
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is "probability distribution function" a distribution? I can understand the definition of distribution as written in https://en.wikipedia.org/wiki/Distribution_(mathematics) On the other hand there are three different terms in the definition of probability distribution function(PDF) : https://en.wikipedia.org/wiki/Probability_distribution_function My question: is PDF a distribution? If so can anyone help me to clarify how a PDF is a distribution?
The only thing that relates them is that they are both restricted cases of measures (or at least the PDF can be interpreted as such). A distribution in probability theory is very much like a measure, with the restriction that the measure of the whole space is 1 (i.e. $\int dp = \int p(x)\,dx = 1$). The other kind of distribution is quite restricted since it is only allowed to act on smooth functions with compact support (i.e. $\int \varphi\, d\mu$ need only be defined if $\varphi$ is smooth with compact support). But since a probability distribution is that general and smooth functions with compact support are so well behaved, you can always integrate such a function (i.e. $\int \varphi\, dp$ is well defined, as required of the second type of distribution). You could of course generalize the concept of PDF to allow for any measure (that is not necessarily representable as a function).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is it possible to construct a metric in $\mathbb{R}^n $ s.t. it does not induce CONVEX balls? I'm studying point set topology and looking for a counterexample of "Balls are convex". We say a set $K \subset \mathbb{R}^n $ is convex if for all $x, y\in K$ we have $\lambda x + \left(1-\lambda\right) y\in K$ for all $\lambda \in [0,1]$. A counterexample has been found in $\mathbb{Q}$ (where the definition of convex has been specialized to "$\lambda\in\mathbb{Q}\cap[0,1]$") by using the p-adic metric. E.g. consider the ball $B_{\text{2-adic}}\left(0, 1\right)$.
Let $f:\mathbb{R}^n\to\mathbb{R}^n$ be any bijection, and define a metric $d'$ by$$d'(x,y)=d(f(x),f(y)),$$where $d$ is the Euclidean metric. Let $B$ denote the unit ball around zero in the Euclidean metric. Then the unit ball around zero in the new metric is$$B'=f^{-1}(B).$$The bijection $f$ can be chosen so that $f^{-1}(B)$ is a really wild subset. In particular, it can be non-convex. Note that if $f$ is a homeomorphism, then the metrics $d,d'$ are equivalent, in the sense that both induce the standard topology on $\mathbb{R}^n$.
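To make this concrete, here is one explicit choice I'm adding (not from the original answer): the homeomorphism $f(x,y)=(x,\,y-3x^2)$ of $\mathbb R^2$. The $d'$-unit ball around the origin is $f^{-1}(B)$, a "bent" disk, and a midpoint test shows it is not convex, even though $d'$ induces the usual topology.

```python
import math

def f(x, y):
    # a homeomorphism of R^2; its inverse is (x, y + 3x^2)
    return (x, y - 3 * x * x)

def in_unit_ball(x, y):
    # d'-ball of radius 1 around the origin: |f(x,y) - f(0,0)| < 1
    u, v = f(x, y)
    return math.hypot(u, v) < 1

p = ( 0.9, 2.43)   # f(p) = (0.9, 0)
q = (-0.9, 2.43)   # f(q) = (-0.9, 0)
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)   # (0, 2.43)

assert in_unit_ball(*p) and in_unit_ball(*q)
assert not in_unit_ball(*mid)      # the midpoint escapes the ball => not convex
print("the d'-unit ball around 0 is not convex")
```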
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
How is $0\cdot\infty= -1$? It is known that the product of slopes of two perpendicular lines is equal to $-1$ ($m_1*m_2=-1$ for $m_1$ and $m_2$ being the slopes of the perpendicular lines $l_1$ and $l_2$). The slope of $x$-axis $=0$; the slope of $y$-axis$=$ undefined (or $\infty$); $x$-axis and $y$-axis are perpendicular to each other. So, it must mean that the product of their slopes (i.e. $0$ and $\infty$) must be equal to $-1$. How is $0*\infty=-1$? Is it really?
How is $0*\infty=-1$? Is it really? No, $0$ times $\infty$ is not equal to $-1$. In fact, the product isn't even defined. It is not a question of this somehow giving a contradiction, it just isn't defined. The rule you are referring to says that: Given two lines with slopes $m_1$ and $m_2$ (real numbers) then the lines are perpendicular if and only if their product is $-1$. So, it is part of the assumption in the rule that the slopes be real numbers, and $\infty$ is not a real number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Identical Zero Sets for two distinct irreducible polynomials In these notes on page 13, question number 9 asks to give example of two irreducible polynomials in $\mathbb{R}[X,Y]$ with identical zero sets. I can think of trivial examples like $x^2+y^2$ and $x^4+y^2$, both of which vanish only on the origin. Are there non-trivial examples where the zero sets are dimension $1$, for example?
The answer is no. This is simply Bezout's Theorem. A proof may be found in Michael Artin's Algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find the distance from a point outside a circle to any point on a circle. I'm looking for a way to find out the distance from a point outside a circle to a point on a circle, where the point on the circle is based on radians, degrees, or both (whatever the formula works with). With this, I know the distance from the point to the circle ($x$), and the radius of the circle ($r$). I also know how many degrees ($\theta$) the point is from a starting point on the circle, which is on the line between the point outside the circle and the center of the circle. So, I'm dealing with simple right triangles here, but what I don't know how to get is the lengths of opposite and adjacent arms given a number of degrees. Here is a poorly drawn GIF to help understand what I want. The red line is what I'm trying to get.
The data of your problem is: $P=(a,b)$ outside point; $C=(c,d)$ center of circle; $r$ radius of circle. Let $Q=(x,y)$ be your generic point and $D$ the searched distance. You have two equations to use: $$(1)....( PQ)^2=(x-a)^2+(y-b)^2= D^2$$ $$(2)....(x-c)^2+(y-d)^2=r^2$$ Hence (1)-(2) gives $$D^2=2(c-a)x+2(d-b)y+a^2+b^2-c^2-d^2+r^2$$ where $D$ is a function only of $x$ and $y$, as the answer must be.
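If (as I read the question) $x$ is the distance from the outside point to the nearest point of the circle, so the outside point lies at distance $R=x+r$ from the centre, and $\theta$ is the angle at the centre measured from that nearest point, then the red length follows from the law of cosines. A small sketch of that reading (mine; the variable names and this interpretation of $x$ and $\theta$ are assumptions):

```python
import math

def distance_to_circle_point(x, r, theta_deg):
    """Distance from the outside point to the circle point theta_deg away
    from the starting point (the circle point nearest the outside point)."""
    R = x + r                          # outside point to centre
    theta = math.radians(theta_deg)
    # law of cosines in the triangle (outside point, centre, circle point)
    return math.sqrt(R * R + r * r - 2 * R * r * math.cos(theta))

print(distance_to_circle_point(x=2.0, r=1.0, theta_deg=0))     # 2.0 (nearest point)
print(distance_to_circle_point(x=2.0, r=1.0, theta_deg=180))   # 4.0 (farthest point)
```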
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluate $\iint_{0<x<y<1} xy\,dx\,dy$ Let $$f(x,y)=\begin{cases}xy &\text{ if } 0<x<y<1, \\ 0 &\text { otherwise. }\end{cases}$$ Evaluate the integral $\displaystyle \iint f(x,y)\,dx\,dy$. I'm having trouble with the limits of integration.
HINT: your inequality $0<x<y<1$ indicates you can integrate $\int_0^y f\,dx$ first. What are the correct limits on the outer $dy$ integral then?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is there a simpler way to calculate correlation? Let's consider a variable $y$ constructed from $x$: $x_i ∈ \left\{1,3,5,7,8\right\}$, $f(x_i)=2x_i+1$, $y_i=f(x_i) + ε_i, ∀i∈ \left\{1,\dots,5\right\}$, where the $ε_i$ are independently and identically distributed random variables following a normal law $\mathcal{N}(0,2)$. Calculate the correlation coefficient of $x$ and $y$. Is it still valid as an indicator of dependence? Given $\sigma_{f(x)}$, which is about $5.66$ from the last exercise (and can be found again easily with a bit of calculation), let's try to find $σ_y$ from its formula: $$σ_y=\sqrt{\operatorname{Var}(y)}$$ $$\iff σ_y=\sqrt{\frac{\sum\limits_{i=1}^5 (y_i-E(Y))^2}{5}}$$ And the expectation $E(Y)$ of $Y$ is $$E(Y)=\frac{\sum\limits_{i=1}^5 y_i}{5}$$ Using $E(Y)=E(f(x_i))+E(ε_i)$? Then $$\implies E(Y)=E(f(x_i))$$ and then $$\iff σ_y=\sqrt{\frac{\sum\limits_{i=1}^5 \left(f(x_i)+\frac{1}{\sqrt{2\pi}\sigma}e^{-(x-\mu)^2/(2\sigma^2)}-E(Y)\right)^2}{5}}$$ But here I'm not sure I am going in the right direction... It gives an unbelievably complex calculation to cope with...
Hints: Actually $\sigma^2_x=8$ so $\sigma_x \approx 2.83$. It is $\sigma_{f(x)}$ which is about $5.66$. You should then be able to calculate $\sigma^2_y$ (an integer) since it is $f(x)+\epsilon$, assuming the $x$ and $\epsilon$ are independent. That gives you $\sigma_y$. The covariance $\sigma_{f(x) y}$ is equal to the variance of $f(x)$, again assuming independence, and this is twice the covariance $\sigma_{xy}$, so you can easily calculate the correlation coefficient $\rho_{x y}= \dfrac{ \sigma_{x y} }{\sigma_x \sigma_y}$. This is a theoretical result informing you about the dependence of $y$ on $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $\mp a$ actually different than $\pm a$? So, the way I understand $\pm a$ as a general concept is basically as follows: $\pm a$ is really just two numbers, functions, or whatever $a$ represents, but the catch is that one of the $a$'s is positive, and the other is negative. All of that makes sense to me. Mathematicians like to be efficient, but also precise, so they created a way to represent two (or more) entirely different objects, simply by using a special symbol. These, then, are my questions: * *Is $\pm a$ actually different than $\mp a$? *Why aren't there more these, if you will, Frankenstein symbols? *If the answer to my previous question is, "There are," then why aren't they as common?
On its own, $\mp a$ means the same thing as $\pm a$. However -- and this is a big however -- you almost never see $\mp$ unless it occurs in an expression with $\pm$ being used as well. And then it means "the opposite of whatever sign $\pm$ is currently." For example, the sum-or-difference of cube factorizations can be fit into one formula $$a^3 \pm b^3 = (a \pm b)(a^2 \mp ab + b^2),$$ where it should be understood that the sign in $(a \pm b)$ needs to be different from the sign of $ab$. I suspect $\pm$ and $\mp$ are atypical because addition and subtraction are such well-behaved mirror-images of each other. More importantly, there are many situations in which you'll "either do one, or the other." I can't really think of any pair of operations that work like that, to the point that we'd 'Frankenstein' their symbols together. That doesn't mean there aren't any, but even multiplication and division don't tend to crop up like that, and they'd be obvious second candidates. (Although we could write "either $ab$ or $\frac{a}{b}$" as $ab^{\pm 1}$ if we wanted to, and still not need a comparable hybrid symbol).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Does $\sum_{n=1}^\infty \frac{1}{p_ng_n}$ diverge? I know of Euler's proof that the sum of the reciprocals of the primes diverges. But what if we multiply each prime by its following prime gap? In other words, is $$\sum_{n=1}^\infty \frac{1}{p_ng_n} = \infty$$ true or false?
TRUE: We may get rid of the prime gaps by using Titu's lemma. We have: $$ \frac{1}{p_n g_n}+\ldots+\frac{1}{p_N g_N}\geq \frac{\left(\sum_{k=n}^{N}\frac{1}{\sqrt{p_k}}\right)^2}{p_N-p_n}\tag{1}$$ hence if $N$ is around $n^2$ and $n$ is big enough, by partial summation the RHS of $(1)$ is roughly: $$ 4\cdot\frac{p_N+p_n-2\sqrt{p_n p_N}}{(p_N-p_n)(\log N)^2} \tag{2}$$ so by combining $(2)$ with a condensation argument we easily get that the series $\sum_{n\geq 1}\frac{1}{p_n g_n}$ is divergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1429997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Profit Share Distribution > 100% Total? So I read an article today which asserted the following: Canaccord’s latest estimate shows Samsung making 15 percent of profits in smartphones, with Apple making 92 percent. (The numbers add up to more than 100 because everyone else in the smartphone industry loses money, so their share of the profits is negative.) Upon digging further, I found another article which broke down the "smartphone profits" percentages as follows: * *Apple: 92% *Samsung: 15% *Blackberry: 0% *Lenovo: -1% *Microsoft: -4% How the heck does this work? Wouldn't the total sales revenue / profits be 100% and each company would have a representative value of that?
Particularly in the press, people often do silly things with percentages, but the values you quote are possible. It appears they are computing the total profits of the industry and allocating it to the various companies. This would guarantee that the percentages add to $100\%$, but then it appears that the values were rounded to the nearest percent. Say the correct values were * *Apple: 91.6% *Samsung: 14.6% *Blackberry: -0.4% *Lenovo: -1.4% *Microsoft: -4.4% These add nicely to $100\%$ and would round to the nearest percent to give the values you quote.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1430120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The integer $n$ is not zero if and only if there is some prime $p>n$ such that $p-n$ is composite I recently solved a problem (it was posed in a spanish-talking forum) in which I used the following lemma. The integer $n$ is not zero if and only if there is a prime $p>n$ such that $p-n$ is composite. This is my proof (I only write the "non-trivial" implication): Let $n$ be a non-zero integer and let $p>|n|$ be a prime. Then there are integers $q$ and $r$ such that $$n=pq+r$$ and $0<r<p.$ It follows that if $p'>n+p$ is a prime of the form $p'=pk+r,$ $k\in\mathbb Z$ (its existence follows from Dirichlet's Theorem) then $$p'-n=pk+r-pq-r=p(k-q)$$ and since $k-q>1,$ $p'-n$ is a composite number (note that in fact there are infinitely many such primes). My question is, can that result be proved in a more elementary way (without appealing to Dirichlet's theorem)?
For any non-zero $n$, some arithmetic progression mod $n$ contains infinitely many primes (this doesn't require Dirichlet's theorem, only Euclid + pigeonhole). But if $n$ is a counterexample, then the stated condition forces every (sufficiently large) term of that arithmetic progression to be simultaneously prime (this is especially true for $n<0$ where we don't even need to assume infinitely many primes in that progression, just one). This is clearly impossible (for instance, pick any prime $q$ that doesn't divide $n$ and some of the terms of the progression will be divisible by $q$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1430221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Proof by Induction for Natural Numbers Show that if the statement $$1 + 2 + 2^{2} + ... + 2^{n - 1} = 2^{n}$$ is assumed to be true for some $n,$ then it can be proved to be true for $n + 1.$ Is the statement true for all $n$? Intuitively, I don't think it holds for all $n.$
If we assume that $1+2+\cdots+2^{n-1}=2^n$, we can easily prove that $1+2+\cdots+2^n=2^{n+1}$. For $$1+2+\cdots+2^n=(1+2+\cdots+2^{n-1})+2^n=2^n+2^n=2^{n+1}.$$ But of course it is not true that $1+2+\cdots+2^{n-1}=2^n$. The base case $n=1$ does not hold. For then $1+2+\cdots+2^{n-1}=1\ne 2^1$. The whole point of this exercise is that in order to prove a result by induction, we must do the induction step (which worked) and we must verify the base case (which failed). In fact, $1+2+\cdots+2^{n-1}=2^n-1$. The proof is straightforward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1430530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
$A \in B$ vs. $A \subset B$ for proofs I have to prove a few different statements. The first is if $A \subset B$ and $B \subset C$ then prove $A \subset C$. This one is fairly straight forward, but I'm stuck on how the next one differs. Prove that if $A \in B$ and $B \in C$ then $A \in C$. I don't really understand how to put this in logical symbols. I've only seen $a \in A$ written out but never "a set $A$ is an element of a set $B$". Here's what I have for a proof at this point, assuming I understand what "a set $A$ is an element of a set $B$" means: suppose $A \in B$ and $B \in C$. Then $A \in C$.
Take $A=\varnothing$, $B=\{\varnothing\}$ and $C=\{\{\varnothing\}\}$. Then clearly $A\in B\wedge B\in C$, but $A\in C$ implies $\varnothing=\{\varnothing\}$ wich cannot be true. This because $\{\varnothing\}$ has elements and $\varnothing$ has not. We conclude that the implication is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1430594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
Solve this exponential inequality $$ {5}^{(x+2)}+{25}^{(x+1)}>750\\ t=5^{(x+1)}\\ t^2+5t-750>0\\ t^2+5t-750=0\\ $$ $$ a=1, b=5, c= -750\\ D=25+3000=3025 \\ t_1= 25; t_2=-30 $$ $t_1=25\Longrightarrow x=1$; $t_2=-30$ doesn't give a solution. So the solution is $x>1$. Is this correct? I say it is bigger than $1$ and not smaller than $1$ because the number before $t^2$ (which is $1$) is bigger than $0$. Is this reason correct, or is it because $5>1$, so the exponential function is increasing (the bigger the $x$, the bigger the $y$)?
$$5^{(x+2)} = 25 \cdot 5^x$$ $$25^{(x+1)} = 25 \cdot 25^x$$ $$750 = 25 \cdot 30$$ So your inequation may be expressed: $$5^x + 25^x >30$$ As the derivative of $5^x + 25^x$ is always positive, there is only one $x$ that makes the equality true: $$x=1$$ And therefore, the solution of your inequation is $x>1$
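A quick numeric confirmation of the crossing at $x=1$ (my own sketch, not part of the answer):

```python
g = lambda x: 5 ** x + 25 ** x      # strictly increasing, so it crosses 30 exactly once
assert abs(g(1) - 30) < 1e-9        # 5 + 25 = 30 at x = 1
assert g(0.999) < 30 < g(1.001)     # values just below/above x = 1
print("5^x + 25^x > 30 exactly when x > 1")
```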
{ "language": "en", "url": "https://math.stackexchange.com/questions/1430698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Estimate for a specific series For a positive integer $m$ define $$ a_m=\prod_{p\mid m}(1-p), $$ where the product is taken over all prime divisors of $m$, and $$ S_n=\sum_{m=1}^n a_m. $$ I am interested in an estimate for $|S_n|$. Any references, hints, ideas, etc., will be appreciated.
As Oussama Boussif noticed, using the Euler product for the totient function $$\phi\left(m\right)=m\prod_{p\mid m}\left(1-\frac{1}{p}\right) $$ we have $$a_{m}=\prod_{p\mid m}\left(1-p\right)=\left(-1\right)^{\omega\left(m\right)}\frac{\phi\left(m\right)}{m}\prod_{p\mid m}p $$ where $\omega\left(m\right) $ is the number of distinct prime factors of $m $. So, using the fact that $\prod_{p\mid m}p\leq m $ (equality holds if $m $ is a squarefree number), we have $$\left|\sum_{m=1}^{n}a_{m}\right|\leq\sum_{m=1}^{n}\phi\left(m\right)=\frac{3}{\pi^{2}}n^{2}+O\left(n\log\left(n\right)\right). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1430777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Show that there are $c,d \in (a,b), c\lt d$ such that $\frac{1}{b-a}\int_a^b f=\frac{1}{d-c}\int_c^d f$. Let $f: [a,b] \to R$ be continuous. Show that there are $c,d \in (a,b), c\lt d$ such that $\frac{1}{b-a}\int_a^b f=\frac{1}{d-c}\int_c^d f$. This is my solution. I tried to work backwards. Letting $F(x)=\int_a^x f$, we want to get the equality $\frac{F(b)-F(a)}{b-a}=\frac{F(d)-F(c)}{d-c}$ for some $c,d\in (a,b)$. This is equivalent to $-F(d)+F(c)+\frac{F(b)-F(a)}{b-a}(d-c)=0$. So let's define $G(x)=F(b)-F(x)-\frac{F(b)-F(a)}{b-a}(b-x)$. Then we want $G(d)-G(c)=0$. Now $G(a)=G(b)=0$ and $G$ is continuous on $[a,b]$. Without loss of generality let's assume that the maximum of $G$ is greater than $0$. Say $G(x_1)$ is the maximum. Then for any $r$ such that $0=G(a)=G(b)\lt r \lt G(x_1)$, by the Intermediate Value Theorem we can find $c,d$ in $(a,x_1)$ and $(x_1,b)$ respectively, such that $G(c)=G(d)=r$. Hence we have found the desired points. This solves the problem; however, I feel like I have forced the answer and the solution does not really shed light on the geometric or intuitive meaning of the equality. Is there another way to solve it? Also, what is the meaning behind this equality? I would greatly appreciate any help.
To get some intuition, consider the case $F(b) = 0$ (here using your notation). Assume the maximum value of $F$ occurs at $x_0 \in (a,b),$ with $F(x_0) > 0.$ (Good to draw a picture.) Then by the IVT every value between $0$ and $F(x_0)$ will be taken by $F$ in both of the intervals $(a,x_0)$ and $(x_0,b).$ So for each $y\in (0,F(x_0))$ there exist $c(y) \in (a,x_0), d(y) \in (x_0,b)$ such that $F(c(y)) = y = F(d(y)).$ For each such $y$ we then have $$\frac{F(d(y))-F(c(y))}{d(y)-c(y)} = 0 = \frac{F(b)-F(a)}{b-a},$$ which says there are loads of solutions. The case where $F$ takes on a minimum value less than $0$ is similar, and of course if $F\equiv 0$ there is nothing to do. The general result follows from the above by looking at $F(x) - l(x),$ where $l$ is the line through $(a,0),(b,F(b)).$ Note that we used only the continuity of $F$ on $[a,b].$ So, back to the original problem, the conclusion holds if we only assume $f$ is Riemann integrable (or even just Lebesgue integrable) on $[a,b].$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can someone show by vector means that any inscribed angle in a semicircle is a right angle Could someone explain how to prove any angle inscribed in a semicircle is a right angle using vectors. I understand that the dot product of two vectors is 0 if they are perpendicular but I don't know how to show this in a semicircle.
If the semicircle has radius $a$ you can represent the two vectors as the differences between the coordinates of the points $(-a,0)$ and $(a,0)$ and a generic point $(a \cos \theta, a \sin \theta)$ on the semicircle: $$ \vec v_1=(a\cos \theta -a, a \sin \theta)^T \quad \text{and} \quad \vec v_2=(a\cos \theta +a, a \sin \theta)^T $$ so you have: $$ (\vec v_1,\vec v_2)=a^2(\cos \theta -1)(\cos \theta +1)+a^2 \sin^2 \theta=a^2(\cos^2 \theta -1 + \sin^2 \theta)=0 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
if $\alpha$ is an ordinal is it true that ${\aleph _{\alpha +1}}^{\aleph _{\alpha}}=\aleph _{\alpha +1}$? If we denote the following cardinals: $\beta _0=\aleph _0$, $\beta _k=2^{\beta _{k-1}}$ then I know that ${\beta _{k+1}}^{\beta _k}=\beta _{k+1}$ but, is it true that for some ordinal $\alpha$, ${\aleph _{\alpha+1}}^{\aleph _{\alpha}}={\aleph _{\alpha+1}}$? I really don't know how to approach this.
No, that is not necessarily true. In particular, whenever the continuum hypothesis fails so we have $\aleph_1 < 2^{\aleph_0}$, it will still be the case that $$ \aleph_1^{\aleph_0} \ge 2^{\aleph_0} $$ by simple inclusion. This doesn't answer whether it is consistent with ZFC that $\aleph_{\alpha+1}{}^{\aleph_\alpha} \ne \aleph_{\alpha+1}$ for all $\alpha$. However, that would follow (by the above argument) if it is consistent that $2^\kappa>\kappa^+$ for all infinite cardinals $\kappa$, which Wikipedia asserts has been proved by Foreman and Woodin, under certain large-cardinal assumptions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Let $f: A \times B \rightarrow C$ be continuous and closed under product of closed subsets of A and B, is $f$ closed? Assume the product topology on $A \times B$. To make the title clear: $f$ is a continuous map such that if $R \subset A$ and $S \subset B$ are closed sets, then $f(R \times S) \subset C$ is a closed set of $C$. Question: if $T$ is any closed subset of $A \times B$, is $f(T) \subset C$ closed?
Not in general. In this context let $f:\mathbb R^2\rightarrow\mathbb R$ be prescribed by $\langle x,y\rangle\mapsto x$, let $\mathbb R$ be equipped with its usual topology, and $\mathbb R^2$ with the product topology. Then $f$ is continuous and $f(R\times S)=R$ for $R,S\subseteq\mathbb R$ with $S\neq\emptyset$, so the conditions mentioned in your question are satisfied. The set $T:=\{\langle x,y\rangle\mid x>0\wedge y\geq\frac1{x}\}\subset\mathbb R^2$ is closed. However, the set $f(T)=(0,\infty)\subset\mathbb R$ is not closed. What is actually proved is that projections are not necessarily closed maps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Deriving the Hessian from the limit definition of the derivative Could someone possibly help me understand how I can derive the Hessian matrix of a twice-differentiable function $f$ defined on $\mathbb{R}^n$ using the limit definition of the second derivative. Namely, how does $\lim_{h \to 0}\frac{\nabla f(x+h) - \nabla f(x)}{h}$ result in the Hessian $\nabla^2 f(x)$? If I happen to be wrong about this, could you please point out what I am misunderstanding? Thank you very much!
Let $f: \mathbb{R}^n \to \mathbb{R}$ be a differentiable function. Then, at a point $p$, the derivative $Df\big|_p: \mathbb{R}^n \to \mathbb{R}$ can be computed by (but is not defined by) $$ Df\big|_p(v) = \lim_{h \to 0} \frac{f(p+hv)-f(p)}{h} $$ If $f$ is differentiable then $Df\big|_p$ is a linear function from $\mathbb{R}^n \to \mathbb{R}$. We have that $f(p+v) \approx f(p)+Df\big|_p(v)$ If $f$ is twice differentiable, then we can think of its second derivative as a bilinear form $Hf\big|_p:\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$. It can be computed by (but not defined by) $$ Hf\big|_p(v,w) = \lim_{h \to 0} \frac{Df_{p+hv}(w) - Df\big|_p(w)}{h} $$ We have that $Df\big|_{p+v}(w) \approx Df\big|_p(w)+Hf\big|_p(v,w)$. It also turns out (the beginning of the multivariable Taylor's theorem), that $f(p+v) \approx f(p)+Df\big|_p(v)+\frac{1}{2!}Hf\big|_p(v,v)$. The pattern continues with higher derivatives being higher order symmetric tensors.
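As a numerical sketch of "the Hessian is the derivative of the gradient", here is a finite-difference check for the hypothetical test function $f(x_0,x_1)=x_0^2x_1+\sin(x_1)$ (assumed only for illustration):

```python
import numpy as np

def grad(p):
    # Analytic gradient of f(x0, x1) = x0**2 * x1 + sin(x1)
    x0, x1 = p
    return np.array([2.0 * x0 * x1, x0**2 + np.cos(x1)])

def hessian_fd(p, h=1e-5):
    # Central finite differences of the gradient, one coordinate direction at a time
    n = len(p)
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(p + e) - grad(p - e)) / (2.0 * h)
    return H

p = np.array([1.3, 0.7])
H_exact = np.array([[2.0 * p[1], 2.0 * p[0]],
                    [2.0 * p[0], -np.sin(p[1])]])
print(np.allclose(hessian_fd(p), H_exact, atol=1e-6))  # True
```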
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Prove that any integer that is both square and cube is congruent modulo 36 to 0,1,9,28 This is from Burton Revised Edition, 4.2.10(e) - I found a copy of this old edition for 50 cents. Prove that if an integer $a$ is both a square and a cube then $a \equiv 0,1,9, \textrm{ or } 28 (\textrm{ mod}\ 36)$. An outline of the proof I have is: Any such integer $a$ has $a = x^2$ and $a = y^3$ for some integers $x,y$. Then by the Division Algorithm, $x = 36s + b$ for some integers $s,b$ with $0 \le b \lt 36$ and $y = 36t + c$ for some integers $t,c$ with $0 \le c \lt 36$. Using the binomial theorem, it is easy to show that $x^2 \equiv b^2$ and $y^3 \equiv c^3$. Then $a \equiv b^2$ and $a \equiv c^3$. By computer computation (simple script), the intersection of the possible residues over all values of $b$ and $c$ in the specified interval is 0,1,9,28. These residues are only shown to be possible; inspection confirms that they all actually occur: $0^2 = 0^3 \equiv 0$, $1^2 = 1^3 \equiv 1$, $27^2 = 9^3 \equiv 9$, and $8^2 = 4^3 \equiv 28$. $\Box$ There is surely a more elegant method; can anyone point me in the right direction?
What you did is correct, but yes, a lot of the work (especially the computer check) could have been avoided. Firstly, if $a$ is both a square and a cube, then it is a sixth power. This is because, for any prime $p$, $p$ divides $a$ an even number of times (since it is a square), and a multiple of 3 number of times (since it is a cube), so $p$ divides $a$ a multiple of 6 number of times altogether, and since this is true for any prime $p$, $a$ is a perfect sixth power. So write $a = z^6$. Next, rather than working mod $36$, it will be nice to work mod $9$ and mod $4$ instead; this is equivalent by the chinese remainder theorem. So: * *Modulo $9$, $z^6 \equiv 0 \text{ or } 1$. You can see this just by checking every integer or by applying the fact that $\varphi(9) = 6$. *Modulo $4$, $z^6 \equiv 0 \text{ or } 1$. This is easy to see; $0^6 = 0$, $1^6 = 1$, $(-1)^6 = 1$, and $2^6 \equiv 0$. So $a = z^6$ is equivalent to $0$ or $1$ mod $4$ and mod $9$. By the chinese remainder theorem, this gives four possibilities: * *$a \equiv 0 \pmod{4}, a \equiv 0 \pmod{9} \implies a \equiv 0 \pmod{36}$ *$a \equiv 0 \pmod{4}, a \equiv 1 \pmod{9} \implies a \equiv 28 \pmod{36}$ *$a \equiv 1 \pmod{4}, a \equiv 0 \pmod{9} \implies a \equiv 9 \pmod{36}$ *$a \equiv 1 \pmod{4}, a \equiv 1 \pmod{9} \implies a \equiv 1 \pmod{36}$.
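A quick brute-force check of the four residues (a sketch in Python; it uses the fact from the answer that such an integer is a sixth power):

```python
# Every integer that is both a square and a cube is a sixth power,
# so it suffices to look at z**6 mod 36 over a full residue system.
residues = sorted({pow(z, 6, 36) for z in range(36)})
print(residues)  # [0, 1, 9, 28]
```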
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are the following functions not equivalent I'm trying to find $$\lim_{x\to-\infty}\sqrt{x^2 + 2x} - \sqrt{x^2 - 2x}.$$ However, I kept getting $2$ instead of $-2$, so I graphed the function to see what was going on. See the picture below. http://puu.sh/k7Y3f/684aeed22b.png I found the issue in the steps I took, and it boiled down to this. To summarize my steps, I rationalized the function, and then factored out $\sqrt{x^2}$ from each term in the denominator (green graph). However, as soon as I take the square root of those $x^2$'s, I get a different function. What gives? Why is the purple graph different from the green one?
If you do not like negative numbers, well, just apply a change of variable bringing them into positive numbers: $$\begin{eqnarray*} \lim_{x\to -\infty}\left(\sqrt{x^2+2x}-\sqrt{x^2-2x}\right)&=&\lim_{z\to +\infty}\left(\sqrt{z^2-2z}-\sqrt{z^2+2z}\right)\\&=&\lim_{z\to +\infty}\frac{-4z}{\sqrt{z^2+2z}+\sqrt{z^2-2z}}=\color{red}{-2}.\end{eqnarray*}$$
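A quick numerical sanity check of the value $-2$ (a sketch in plain Python):

```python
from math import sqrt

def f(x):
    return sqrt(x**2 + 2*x) - sqrt(x**2 - 2*x)

for x in (-1e3, -1e6, -1e9):
    print(x, f(x))  # values approach -2 as x -> -infinity
```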
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Regression Model with (Y,X) non-random? In regression, we assume that $(X,Y)$ are random variables following some distribution. How would the problem change if we do not assume $(X,Y)$ are random? Why can we just have $Y=f(X,\epsilon)$, where $(X,Y)$ are non-random and $\epsilon$ is a random quantity?
Regression has nothing to do with randomness. Regression means fitting some parametrized function or curve to some points. That means to set values for the parameters and come up with some metric that describes the function or curve to fit better or worse than other functions or curves derived from other parameters. The result is the set of parameters that describe a function or curve that fits best according to the used metric. It's not the concern of regression if these points are "random" or not.
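A minimal illustration of regression as deterministic curve fitting (a sketch; the quadratic data below is made up and contains no randomness):

```python
import numpy as np

# Deterministic "data": points on a known curve, nothing random involved
x = np.linspace(0.0, 2.0, 21)
y = 3.0 * x**2 - x + 0.5

# Least-squares fit of a degree-2 polynomial: choose the parameters that
# minimize the sum of squared residuals (the "metric" mentioned above)
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # approximately [3.0, -1.0, 0.5]
```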
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate the whole area enclosed by the curve $y^2=x^4(a-x^2),a>0$ Calculate the whole area enclosed by the curve $y^2=x^4(a-x^2)$, $a>0$. I could not plot this curve, so I could not find the area. I tried WolframAlpha as well. Here $a$ is not specified. The required area is $\frac{\pi a^2}{4}$. Please help me.
The figure corresponds to the case $a=1$. The desired area is $${\frak A}=2\int_{-\sqrt a}^{\sqrt a}x^2\sqrt{a-x^2}\,dx$$ Now, the change of variables $x=\sqrt{a}\cos(t/2)$ yields $${\frak A}=a^2\int_{0}^{2\pi}\cos^2\frac{t}{2}\sin^2\frac{t}{2}\,dt=\frac{a^2}{4} \int_{0}^{2\pi}\sin^2 t\,dt=\frac{\pi a^2}{4}$$
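A numerical check of this closed form (a sketch using SciPy; the value $a=1.7$ below is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

a = 1.7
# Area between the upper branch y = x**2*sqrt(a - x**2) and the lower branch -y;
# np.maximum guards against tiny negative values from rounding at the endpoints.
integrand = lambda x: x**2 * np.sqrt(np.maximum(a - x**2, 0.0))
area, _ = quad(integrand, -np.sqrt(a), np.sqrt(a))
print(2 * area, np.pi * a**2 / 4)  # both approximately 2.2698
```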
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is it useful to write a vector as a finite (or infinite) linear combination of basis vectors? I'm working on a project in an applied mathematics course and a professor asked me a basic question: What is useful about writing any element of a vector space in terms of a possibly infinite linear combination of basis vectors? I didn't really have an answer. So I'm wondering what's so useful about it? Why can't we just come up with an alternate expression for any arbitrary element and use that?
Sometimes there is a linear transformation that you want to understand. If you can find a basis such that the linear transformation has a simple effect on the basis vectors (like it simply scales them, for example) then this helps a lot to understand what the linear transformation does to an arbitrary vector (which can be written as a linear combination of the special basis vectors).
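A small numerical sketch of this idea (NumPy; the symmetric matrix and vector below are arbitrary examples): once a vector is written in an eigenbasis, applying the map is just coordinate-wise scaling.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # a symmetric linear map
eigvals, V = np.linalg.eigh(A)      # columns of V form an orthonormal eigenbasis

v = np.array([1.0, -2.0])           # an arbitrary vector
coords = V.T @ v                    # coordinates of v in the eigenbasis

# Applying A is now just scaling each coordinate by the corresponding eigenvalue
print(np.allclose(A @ v, V @ (eigvals * coords)))  # True
```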
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
What is the intersection curve between a sphere and a right cone? I am confused by this picture: what is the curve? I think that the curve is neither a circle nor an ellipse. What is the intersection curve?
Consider a plane $\alpha$ parallel to the plane of the red-green coordinate lines. Let this plane contain the center of the red sphere. The brownish right cone in question (if it is a circular cone) intersects $\alpha$ in an ellipse $\mathscr E$. The curve, $\mathscr C$, whose shape we are interested in is the inverse stereographic projection of $\mathscr E$ on the sphere. In theory, the equation of this curve can be calculated by the inverse stereographic formula. It is easy to see that $\mathscr C$ is not a planar curve. Consider the largest and the smallest circles in $\alpha$ centered at the center of $\mathscr E$ and are tangent to $\mathscr E$. The inverse stereographic images of these circles are circles on the sphere and $\mathscr C$ is tangent to these circles on the surface of the sphere. Obviously, the two circles are not in the same plane. This proves that our curve is not a planar one. So, it is very probable that $\mathscr C$ is not a famous curve with a known name. Although, if we look at this curve (with one eye in the direction of the axis of the cone) from the North pole of the sphere we will see an ellipse because the generatrix lines (of the cone) coming out from our eye go through $\mathscr C$ and reach $\mathscr E$. If the cone is not a circular cone, and it happens to intersect $\alpha$ in a circle then $\mathscr C$ is a circle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1431894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
inequality between matrix norms A is an $n\times k$ matrix. I have to show that $\|A\|_2\leq \sqrt{\|A\|_1\cdot \|A\|_\infty}$. I know that $\|A\|_2^2 = \rho(A^H\cdot A)\leq \|A^H \cdot A\|$ for every submultiplicative matrix norm $\| \cdot \|$, but I don't know how to conclude. Any idea?
$\|A\|_{2}^2=\rho(A^H\cdot A)\leq\Vert A^H\cdot A\Vert_1\leq \Vert A^H\Vert_1\,\Vert A\Vert_1=\Vert A\Vert_\infty\Vert A\Vert_1$, using submultiplicativity of $\Vert\cdot\Vert_1$ and the fact that $\Vert A^H\Vert_1=\Vert A\Vert_\infty$. For more information see $D.46$ and $D.52$ of Abstract Harmonic Analysis by Hewitt & Ross, pages 706 and 709.
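A quick numerical check of the inequality on random matrices (a sketch; `numpy.linalg.norm` with `ord=1`, `ord=2` and `ord=np.inf` computes the induced norms used here):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.standard_normal((5, 3))
    lhs = np.linalg.norm(A, 2)
    rhs = np.sqrt(np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    assert lhs <= rhs + 1e-12
print("inequality holds on all samples")
```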
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
evaluate the Limit $$ \lim \limits_{z \to 0} z\sin(1/z^2)$$ Can anyone help me with this question? I'm not sure how to solve it. I tried to bring $z$ to the denominator but don't know how to continue.
A slight variation: let $y = 1/z$. As $z$ goes to $0$, $|y|$ goes to infinity, so this limit becomes $\lim_{|y|\to\infty} \frac{\sin(y^2)}{y}$. Now, as juantheron and Thomas said, $\sin$ is always between $-1$ and $1$: $$\left|\frac{\sin(y^2)}{y}\right|\le \frac{1}{|y|},$$ and the right-hand side goes to $0$ as $|y|$ goes to infinity.
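A quick numerical sanity check (a sketch in plain Python):

```python
from math import sin

for z in (1e-3, -1e-3, 1e-6, -1e-6):
    print(z, z * sin(1 / z**2))  # |z*sin(1/z**2)| <= |z|, so the values shrink to 0
```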
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find the general integral of $ px(z-2y^2)=(z-qy)(z-y^2-2x^3).$ $ p=\frac{\partial z}{\partial x} $ and $ q=\frac{\partial z}{\partial y} $ Find the general integral of the linear PDE $ px(z-2y^2)=(z-qy)(z-y^2-2x^3). $ My attempt to solve this is as follows: $ p=\frac{\partial z}{\partial x} $ and $ q=\frac{\partial z}{\partial y} $ $$px(z-2y^2)+qy(z-y^2-2x^3)=z(z-y^2-2x^3)$$ \begin{align*} \text{The Lagrange's auxiliary equation is:} \frac{dx}{x(z-2y^2)}=\frac{dy}{y(z-y^2-2x^3)}=\frac{dz}{z(z-y^2-2x^3)} \end{align*} Now consider the 2nd and 3rd ratios, \begin{align*} \frac{dy}{y(z-y^2-2x^3)} & =\frac{dz}{z(z-y^2-2x^3)}\\ \implies \frac{dy}{y} & =\frac{dz}{z}\\ \implies \ln(y) & =\ln(z)+\ln(c_1)\\ \implies \frac{y}{z} & =c_1. \end{align*} But I am unable to get the 2nd integral surface. Kindly, help me. Thanks in advance.
Your calculation is correct. The first family of characteristic curves comes from $\frac{dy}{y(z-y^2-2x^3)}=\frac{dz}{z(z-y^2-2x^3)}$, whose solution is $z=\frac{1}{c_1}y=c'_1y$: $$\frac{z}{y}=c'_1$$ A second family of characteristic curves comes from $$\frac{dx}{x(c'_1y-2y^2)}=\frac{dy}{y(c'_1y-y^2-2x^3)}$$ Setting $w=y^2-c'_1y$ turns this ODE into $x\,\frac{dw}{dx}=w+2x^3$, whose solution is $w=x^3+c_2x$, that is $y^2-c'_1y-x^3=c_2x$, i.e. (using $z=c'_1y$) $$\frac{y^2-z-x^3}{x}=c_2$$ The general solution of the PDE expressed in the form of an implicit equation is: $$\Phi\left(\:\left(\frac{z}{y}\right)\:,\:\left(\frac{y^2-z-x^3}{x}\right)\: \right)=0$$ where $\Phi$ is an arbitrary function of two variables. Or equivalently: $$z=y\:F\!\left(\frac{y^2-z-x^3}{x}\right)$$ where $F$ is an arbitrary function. $\Phi$ or $F$ has to be determined to fit some boundary condition (not specified in the wording of the question).
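As a sanity check (a sketch with SymPy), members of the second family above, $z=y^2-x^3-kx$ (i.e. $\frac{y^2-z-x^3}{x}=k$), satisfy the PDE identically:

```python
import sympy as sp

x, y, k = sp.symbols('x y k')

# A member of the family (y**2 - z - x**3)/x = k, solved for z
z = y**2 - x**3 - k*x
p = sp.diff(z, x)
q = sp.diff(z, y)

# The PDE: p*x*(z - 2*y**2) - (z - q*y)*(z - y**2 - 2*x**3) should vanish identically
print(sp.simplify(p*x*(z - 2*y**2) - (z - q*y)*(z - y**2 - 2*x**3)))  # prints 0
```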
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Probability of getting more Heads between two gamblers I'm trying to solve this probability question I thought of. If two gamblers are playing a coin toss game and Gambler A has $(n+2)$ and B has $n$ fair coins. What is the probability that A will have more heads than B if both flip all their coins? I tried to solve it like this, using symmetry. If we compare the first $n$ coins of of $A$ and $B$: $E_{1}$: Event that A and B have the same number of Heads $E_{2}$: Event that A has 2 Heads less than B $E_{3}$: Event that A has 1 Head less than B $E_{4}$: Event that A has 3 Heads less than B $E_{5}$: Event that A has 2 Heads more than B $E_{6}$: Event that A has 1 Head more than B Let $P(E_{1}) = y$, $P(E_{2}) = P(E_{5}) = x$, $P(E_{3}) = P(E_{6}) = z$, $P(E_{4}) = k$ which implies $y + x + x + z + z + k = 1$ and probability of A having more heads = $y*0.75 + x + z + z(0.5^{2})$. However, I don't know how to solve the question from here. Thank You
You were right to use symmetry, but ties have to be accounted for, so the answer actually depends on $n$. Let A flip $n$ of his coins first, let $D$ be (A's heads on those $n$ coins) minus (B's heads on his $n$ coins), and let $X$ be the number of heads on A's two extra coins. By symmetry $D$ is symmetric about $0$, and $P(X=0)=P(X=2)=\frac14$, $P(X=1)=\frac12$. A has strictly more heads exactly when $D+X\ge 1$, so, writing $p_0=P(D=0)=\binom{2n}{n}/4^n$ and $p_1=P(D=1)=\binom{2n}{n+1}/4^n$, $$P(\text{A wins})=\tfrac14 P(D\ge -1)+\tfrac12 P(D\ge 0)+\tfrac14 P(D\ge 1)=\tfrac12+\frac{p_0+p_1}{4}=\tfrac12+\frac{\binom{2n+1}{n+1}}{4^{n+1}}.$$ For $n=0$ this is $\frac34$ and for $n=1$ it is $\frac{11}{16}$; the probability decreases to $\frac12$ as $n\to\infty$.
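A brute-force check of this closed form for small $n$ (a sketch in Python with exact rational arithmetic):

```python
from fractions import Fraction
from math import comb

def p_win(n):
    # Exact P(Bin(n+2, 1/2) > Bin(n, 1/2)) by direct enumeration
    total = Fraction(0)
    for a in range(n + 3):
        for b in range(n + 1):
            if a > b:
                total += Fraction(comb(n + 2, a) * comb(n, b), 2**(2*n + 2))
    return total

for n in range(6):
    formula = Fraction(1, 2) + Fraction(comb(2*n + 1, n + 1), 4**(n + 1))
    print(n, p_win(n), p_win(n) == formula)  # True for each n
```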
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Multivariable function limit How to approach this: $\lim\limits_{(x,y)\to(0,0)}\frac{x^2y}{x^2+y}$? Been able to grind $\lim\limits_{(x,y)\to(0,0)}\frac{x^2y}{x^2+y^2}$, it is in(link) finnish, but formulas and idea should be selfevident. However $\dot +y$ instead of $\dots+y^2$ in divisor confuses me. Or am I thinking in completely wrong direction?
$f(x,y)=\frac{x^2y}{x^2+y}$ Suppose the limit exists and is finite. Then, let's take $\epsilon > 0$. There is $\delta > 0$ so that $|\frac{x^2y}{x^2+y} - L|<\epsilon$ for all $(x,y)\neq(0,0)$ with $x^2 + y^2 < \delta ^2$. Let's consider $x_n= \frac{1}{\sqrt n}, y_n= -\frac{1}{n + 1}$. There is $N$ so that $x_n^2 + y_n^2<\delta^2, \forall n > N$. We have $f(x_n,y_n)=-1$. So $|-1 - L| < \epsilon$ for arbitrary $\epsilon$, therefore $L = -1$. Now, consider $x_n= \frac{1}{\sqrt n}, y_n= -\frac{1}{n + 2}$. By following the same steps as above, we'll get $L = -\frac{1}{2}$. This proves that there is no finite limit; a similar argument rules out an infinite limit.
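A quick numerical illustration of the two paths used above (a sketch in plain Python):

```python
def f(x, y):
    return x**2 * y / (x**2 + y)

for n in (10, 100, 1000, 10000):
    x = n**-0.5
    # First value is always about -1, second about -0.5, so no single limit can exist
    print(f(x, -1.0 / (n + 1)), f(x, -1.0 / (n + 2)))
```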
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Find a metric space that cannot be embedded in the Hilbert cube. This is homework and I'm not looking for an answer. I just finished an exercise that asked to prove that every compact metric space can be embedded in the Hilbert cube. Knowing this I can see that I have to find a non-compact metric space to start, but I don't think that's enough because I've been thinking for a while now and nothing has come to mind, so a little intuition or hints on what I'm looking for would be appreciated. Thanks.
HINT: It isn’t really compactness that matters here: it’s second countability. You want a metric space that isn’t second countable; equivalently, you want one that isn’t separable. There are some really simple non-separable metric spaces. A further hint: try discrete spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How do we reach the answer to the following recursive problem? $\large{a_{n+1} = \frac{a_n}{n} + \frac{n}{a_n}, a_1 = 1}$ Let the sequence $\large{\left< a_n \right>}$ be defined as above for all positive integers n. Evaluate $\large{\left \lfloor a_{2015} \right \rfloor}$. I wrote a C++ program to solve the problem but I've no idea how to mathematically approach it. Thanks in advance!
If we let $x_n=a_n/\sqrt n$ then we have the recursive rule for $x_n$: $$ x_{n+1}={x_n\over\sqrt{n(n+1)}}+{1\over x_n}\sqrt{n\over n+1}. $$ It is easy to prove that $\lim_{n\to\infty}x_n=1$, so that $a_{2015}/\sqrt{2015}$ should be very close to $1$. In other words we may conclude that $$ \lfloor a_{2015} \rfloor = \lfloor \sqrt{2015} \rfloor=44. $$
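A quick numerical check of this conclusion (a sketch in plain Python; floating point is accurate enough here):

```python
from math import floor, isqrt

a = 1.0                      # a_1 = 1
for n in range(1, 2015):     # apply a_{n+1} = a_n/n + n/a_n up to a_2015
    a = a / n + n / a

print(a, floor(a), isqrt(2015))  # a_2015 is close to sqrt(2015) ~ 44.9; the floor is 44
```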
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why is this not allowed to solve a differential equation? How come you can't just integrate like this?$$y'=2y+x \implies y=2yx+\frac{x^2}{2}+C$$
Because $y$ is a function of $x$. Writing $y = f(x)$, then the first term on the right you are integrating is $$\int 2y \ dx = \int 2f(x) \ dx$$ That term is equal to $2yx$ if and only if $f$ is a constant function. In general, such an assumption is dangerous as it could be wrong. It is wrong in this case: if $f$ is a constant function then left hand side of the original ODE $y' = 0$. From which it follows $0 = 2y + x$ and thus $$y = f(x) = -x/2$$ which is not a constant. Contradiction.
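For comparison, a sketch of solving the equation properly with SymPy's `dsolve`; note the exponential in the general solution, which the naive "integration" above cannot produce:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x), 2*y(x) + x), y(x))
# The general solution is y = C1*exp(2*x) - x/2 - 1/4 (possibly printed in an equivalent form)
print(sol)
```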
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In how many ways can a group of 5 boys and 5 girls be seated in a row of 10 seats? In how many ways can a group of 5 boys and 5 girls be seated in a row of 10 seats? I'm still having some confusion about the difference between combinations and permutations. I have tried this problem using both the combination and permutation formulas with my calculator, and neither gave the correct answer.
Ladies first: choose $5$ fixed places for the girls: $\binom{10}{5}$ - this (number of combinations) does not account for the order of the girls, whom you can permute in $5!$ ways. There are $5$ remaining places for the boys: Again, we can permute them in $5!$ ways. So the number is $$\binom{10}{5}\cdot 5!\cdot 5!=\ldots$$ You shouldn't find it surprising that the answer is $10!$ - that's because you are in fact permuting $10$ people over $10$ seats, no matter who is boy or girl.
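A one-line arithmetic check of this count (a sketch in Python):

```python
from math import comb, factorial

# C(10,5) * 5! * 5! equals 10! = 3,628,800
print(comb(10, 5) * factorial(5) * factorial(5) == factorial(10))  # True
```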
{ "language": "en", "url": "https://math.stackexchange.com/questions/1432905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The value of $\gcd(2^n-1, 2^m+1)$ for $m < n$ I've seen this fact stated (or alluded to) in various places, but never proved: Let $n$ be a positive integer, let $m \in \{1,2,...,n-1\}$. Then $$\gcd(2^n-1, 2^m+1) = \begin{cases} 1 & \text{if $n/\gcd(m,n)$ is odd} \\ 2^{\gcd(m,n)}+1 & \text{if $n/\gcd(m,n)$ is even} \end{cases}$$ I've written up my own proof (see below), but I'm hoping to collect a few more. Any takers? Also, information regarding generalizations would be very welcome!
Since $2^{2m}-1 = (2^m-1)(2^m+1)$, we have that \begin{equation} \gcd(2^n-1,2^m+1) {\large\mid} \gcd(2^n-1,2^{2m}-1) = 2^{\gcd(2m,n)}-1 \end{equation} (Some proofs of the equality can be found here, among other places.) Case 1 If $n/\gcd(m,n)$ is odd, then $\gcd(2m,n) = \gcd(m,n)$, and so $\gcd(2^n-1,2^m+1)$ divides $2^{\gcd(m,n)}-1$, which in turn divides $2^m-1$. As $\gcd(2^m-1,2^m+1) = 1$, we conclude that $\gcd(2^n-1,2^m+1) = 1$. Case 2 On the other hand, if $n/\gcd(n,m)$ is even, then $\gcd(2m,n) = 2\gcd(m,n)$, which implies that $\gcd(2^n-1,2^m+1)$ divides $2^{2\gcd(m,n)}-1 = (2^{\gcd(m,n)}-1)(2^{\gcd(m,n)}+1)$. As $\gcd(2^m+1, 2^{\gcd(m,n)}-1) = 1$, it must be that $\gcd(2^m+1,2^n-1)$ divides $2^{\gcd(m,n)}+1$. Now observe that $2^{\gcd(m,n)}+1$ divides both $i)$ $2^n-1$ and $ii)$ $2^m+1$. Observation $i)$ follows from the fact that $2^{\gcd(m,n)}+1$ divides $(2^{\gcd(m,n)}+1)(2^{\gcd(m,n)}-1)=2^{2\gcd(m,n)}-1=2^{\gcd(2m,n)}-1$, which in turn divides $2^n-1$. Observation $ii)$ follows from the fact that $(2^{\gcd(m,n)}+1)(2^{\gcd(m,n)}-1)=2^{2\gcd(m,n)}-1=2^{\gcd(2m,n)}-1$ divides $2^{2m}-1 = (2^m+1)(2^m-1)$, which in turn implies that $2^{\gcd(m,n)}+1$ divides $2^m+1$. We therefore conclude that $\gcd(2^n-1,2^m+1) = 2^{\gcd(m,n)}+1$.
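A brute-force verification of the identity over a small range (a sketch in Python):

```python
from math import gcd

for n in range(2, 25):
    for m in range(1, n):
        g = gcd(m, n)
        expected = 1 if (n // g) % 2 == 1 else 2**g + 1
        assert gcd(2**n - 1, 2**m + 1) == expected
print("formula verified for all 1 <= m < n <= 24")
```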
{ "language": "en", "url": "https://math.stackexchange.com/questions/1433014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Whether to use AND or OR in describing an inequality My understanding is that the following: $5 < x < 10$ is read as "x is greater than 5 AND less than 10," whereas the solution to $| x + 2 | > 4$, which is $x > 2, x < -6$, should be read as "x is greater than 2 OR less than -6" Am I using this correctly? I'm wondering because while teaching some kids, I realized that Khan Academy doesn't seem to be using this correctly and I was doubting whether I was using them correctly. Thanks in advance for the help.
Your first example is definitely correct. The chained inequality $5 < x < 10$ formally has the same meaning as 5 < x & x < 10, that is, "5 is less than x and x is less than 10", equivalent to "x is greater than 5 and less than 10", or in interval notation $(5, 10)$. The second example is ambiguous, in that the piece of writing "x > 2, x < -6" is really malformed (undefined) mathematical writing; specifically, using the comma there isn't syntactically meaningful. That said, your English statement is correct that the solution to $|x + 2| > 4$ is certainly "x is greater than 2 or less than -6", which could be properly written as "x > 2 or x < -6", or via the union of intervals $(-\infty, -6) \cup (2, \infty)$. In short: An "or" statement really needs a proper symbol written out for it, unlike a chained inequality which is defined to imply the "and" connector. (Edit) One of Khan's weaknesses is that he churns out so many videos he tends to be sloppy about details like this, and the video format isn't amenable to editing afterward to fix errors the way written text is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1433134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Simple limit of a sequence Need to solve this very simple limit $$ \lim _{x\to \infty }\left(\sqrt[3]{3x^2+4x+1}-\sqrt[3]{3x^2+9x+2}\right) $$ I know how to solve these limits: by using $a-b= \frac{a^3-b^3}{a^2+ab+b^2}$. The problem is that the standard way (not using L'Hospital's rule) to solve this limit is very tedious, boring and tiring. I hope there is some artful and elegant solution. Thank you!
Do O-symbols help? Everything is right here: $$f(x) = \sqrt[3]{3x^2}\left(1 + \frac{4}{9x} + O\left(\frac{1}{x^2}\right) - 1 - \frac{1}{x} - O\left(\frac{1}{x^2}\right)\right)= \sqrt[3]{3x^2} \left(-\frac{5}{9x} + O\left(\frac{1}{x^2}\right) \right). $$ Hence $$\lim _{x\to \infty }\sqrt[3]{3x^2} \left(-\frac{5}{9x} + O\left(\frac{1}{x^2}\right) \right)= \lim_{x \to \infty} \sqrt[3]{3}\,x^{-1/3} \left( - \frac{5}{9}+O\left(\frac{1}{x}\right) \right) = 0,$$ since $x^{-1/3}\to 0$ while the bracket stays bounded.
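A quick numerical check that the difference tends to $0$ (a sketch; the decay is of order $x^{-1/3}$, so it is slow):

```python
def f(x):
    return (3*x**2 + 4*x + 1)**(1/3) - (3*x**2 + 9*x + 2)**(1/3)

for x in (1e3, 1e6, 1e9):
    print(x, f(x))  # roughly -0.08, -0.008, -0.0008: decaying like x**(-1/3)
```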
{ "language": "en", "url": "https://math.stackexchange.com/questions/1433216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Discontinuity of Dirichlet function Define $$f(x)= \begin{cases} 1, & \text{if }x\in\mathbb{Q}, \\ 0, & \text{if }x\in\mathbb{R}\setminus\mathbb{Q}. \end{cases}$$ Then $f$ has a discontinuity of the second kind at every point $x$, since neither $f(x+)$ nor $f(x-)$ exists. Proof: We'll consider only $f(x_0+)$. Case 1. If $x_0\in \mathbb{Q}$ then we can take $t_n=x_0+\frac{1}{n}$, so that $t_n\to x_0$, $t_n>x_0$ and $t_n\in \mathbb{Q}$. Hence $f(t_n)=1\to 1$ as $n\to \infty$. Also we can take $t_n=x_0+\dfrac{\sqrt{2}}{n}$, so that $t_n\to x_0$, $t_n>x_0$ and $t_n\in \mathbb{R}\setminus\mathbb{Q}$. Hence $f(t_n)=0\to 0$ as $n\to \infty$. Case 2. For $x_0\in\mathbb{R}\setminus\mathbb{Q}$ we apply a similar argument. We can take $t_n=x_0+\dfrac{1}{n}$ and in this case $f(t_n)\to 0$. Taking $t_n\in \mathbb{Q}$ such that $x_0<t_n<x_0+\dfrac{1}{n}$ we get $f(t_n)\to 1$. Hence $f(x_0+)$ does not exist at any point $x_0\in \mathbb{R}$. Also $f(x_0-)$ does not exist at any point $x_0\in \mathbb{R}$. Hence the Dirichlet function has a discontinuity of the second kind at every point of $\mathbb{R}^1$. Is my proof correct?
Yes, your proof is correct. To summarize, the key point is to "construct", or show the existence of, if one wishes, two decreasing or increasing convergent sequences $(x_n)$ and $(y_n)$ such that they both converge to the same given point, but $(f(x_n))$ and $(f(y_n))$ have different limits. You implicitly use the following two facts in your proof: * *(1) the sum of a rational and an irrational is irrational; *(2) the set of rational numbers is dense in the real line. You may also use the fact that the set of irrational numbers is dense in the real line together with (2) to give a proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1433308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }