Find the following limit $\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$ and $\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$ Find the following limits $$\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$$ Any hints/solutions on how to approach this? I tried many ways, rationalization, taking out $x$, etc. But I still can't rid myself of the singularity. Thanks in advance. Also another question. Find the limit of $$\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$$ I worked up till here, after which I got stuck. I think I need to apply the squeeze theorem, but I am not sure how to. $$\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2} = \lim_{x\to 0}\frac{-2\sin\frac{1}{2}(3x+x)\sin\frac{1}{2}(3x-x)}{x^2}=\lim_{x\to 0}\frac{-2\sin2x\sin x}{x^2}=\lim_{x\to 0}\frac{-2(2\sin x\cos x)\sin x}{x^2}=\lim_{x\to 0}\frac{-4\sin^2 x\cos x}{x^2}$$ Solutions or hints will be appreciated. Thanks in advance! L'Hôpital's rule is not allowed.
Since $\cos(x)-1 \sim -x^2/2$ as $x \to 0$, the numerator of your second limit behaves like $-9x^2/2 + x^2/2$, so the expression inside the limit is asymptotically $\frac{-9x^2/2 + x^2/2}{x^2}$, which evaluates to $-4$; as for your first limit, since $(x+1)^{a} -1 \sim ax$ as $x\to 0$ for $a > 0$, the expression is asymptotically $(1/3)x /x$, which evaluates to $1/3$. Note that $\sim$ means asymptotic to. Also, you might say that these are the first terms of the Taylor series of the functions in your limit, but those asymptotics can be proven without Taylor series.
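As a quick sanity check of both asymptotics, here is a small sketch using SymPy (my own illustration; the argument above does not depend on it):

```python
import sympy as sp

x = sp.symbols('x')

first = sp.limit(((1 + x)**sp.Rational(1, 3) - 1) / x, x, 0)    # cube-root limit
second = sp.limit((sp.cos(3 * x) - sp.cos(x)) / x**2, x, 0)     # cosine limit

print(first, second)   # 1/3 -4
```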
{ "language": "en", "url": "https://math.stackexchange.com/questions/201470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Show that for any polynomial $p(z)$ there is a $z$ with $|z|=1$ such that $|p(z)-1/z|\geq 1$. I'm having a bit of trouble on another problem, and I'm not sure where to start: Show that for any polynomial $p(z)$ there is a $z$ with $|z|=1$ such that $|p(z)-1/z|\geq 1$. Could anybody get me started with a tip or two? Thanks in advance.
You have to show that you can find $z$ of modulus $1$ such that $|zp(z)-1|\geq 1$. If that is not the case, you apply Rouché's theorem to get a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How is a "riffle shuffle" mathematically defined? According to http://mathworld.wolfram.com/RiffleShuffle.html : (1) 8 out-shuffles return an ordinary deck of 52 cards to its original order. (2) 7 riffle shuffles are needed to get to close to random. [Hence 8 riffle shuffles would be even better, I think.] So clearly the "riffle shuffles" of (2) are not meant to be identical to "out-shuffles". But then how is a "riffle shuffle" of (2) mathematically defined?
A riffle shuffle is defined to take the deck, cut it into an initial segment $A$ and a final segment $B$, and then mix $A$ and $B$ together, preserving the ordering within $A$ and $B$. There is a standard model of a random riffle shuffle due to Gilbert: Let there be $n$ cards in the deck. The probability that the deck is cut at position $k$ is $\frac{1}{2^n} \binom{n}{k}$; then all $\binom{n}{k}$ shuffles of $A$ and $B$ are equally likely. The first condition says that, with high probability, the cut point is near $n/2$, with an error of $\approx \sqrt{n}$. The second condition says that, when I have $a$ cards in my left hand and $b$ in my right, the probability that the next card will drop from the left hand is $a/(a+b)$, so the thicker pack of cards drops faster. This is a decent model for real shuffling, according to experiments by Diaconis. IIRC, these experiments only used two shufflers, Diaconis and a friend, and Diaconis is a practiced magician, so one might wonder if this is a fair sample. We had an undergrad, Alex Cope, who was paying random people here at Michigan to shuffle cards and checking them against the model; he doesn't seem to have published his work yet. ADDED JULY 2014 Cope's data still doesn't seem to be public, but you can read an analysis of it here. It is also a very convenient theoretical model because there is a very simple description of the inverse process: To produce the inverse of a random shuffle, go through the deck and randomly place each card in your left hand or your right, according to an independent coin flip; then stack up the decks in your two hands. For lots more on this model, including the 7 shuffles to randomize result, see Trailing the Dovetail Shuffle to its Lair by Bayer and Diaconis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Why is the topology of characters determined by the open sets containing the trivial character? Let $G$ be an abelian topological group, and let $\hat G$ denote the set of characters on $G$. Why is it true that if one has a topological basis at the trivial character (say the topology of uniform convergence on compact sets), one has a topology for all of the characters? This seems clear since the complex circle is a homogeneous space, but I'm having trouble formalizing this for some reason.
The basis at identity yields a basis on the entire topological group, for any topological group. Notice that for any neighbourhood $U$ of a point $g\in G$ with $G$ a topological group, $g^{-1}U$ is a neighbourhood of identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability problem: cars on the road I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city? P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference.
This is not drastically different from the answers above but a slightly different way to arrive at the recurrence relation. Let $E(n)$ be the expected number of clusters in the case where we begin with $n$ cars. Now, if we add one more car from behind (i.e. behind the last car of the slowest cluster), two cases arise: 1) It catches up with the slowest cluster and joins it; or 2) It is too slow to catch up and forms a singleton cluster of its own. The probability of event 2 is $\dfrac 1 {n+1}$. We have $n+1$ in the denominator because the new speed could lie in any interval between and outside the $n$ initial speeds (imagine them on a number line and you'll see $n+1$ possible intervals) of the $n$ cars we had at the start. Therefore, the probability of event 1 is $\dfrac n {n+1}$. Thus $E(n+1) = \big( E(n)+1 \big) \dfrac 1 {n+1} + E(n) \dfrac n {n+1}$, so $E(n+1) = E(n) + \dfrac 1 {n+1}$, i.e. $E(n)$ is the $n$th harmonic number.
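If you want to see the harmonic numbers emerge empirically, here is a small Monte Carlo sketch (my own illustration, with made-up parameters): the number of clusters equals the number of front-to-back record minima of the speed sequence, since a car leads a new cluster exactly when it is slower than every car ahead of it.

```python
import random

def expected_clusters(n, trials=50_000):
    """Monte Carlo estimate of the expected number of clusters for n cars."""
    total = 0
    for _ in range(trials):
        speeds = [random.random() for _ in range(n)]   # speeds[0] is the front car
        clusters, slowest_ahead = 0, float("inf")
        for v in speeds:
            if v < slowest_ahead:        # slower than everyone ahead: new cluster
                clusters += 1
                slowest_ahead = v
        total += clusters
    return total / trials

harmonic = lambda n: sum(1.0 / k for k in range(1, n + 1))

for n in (1, 2, 5, 10):
    print(n, round(expected_clusters(n), 3), round(harmonic(n), 3))
```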
{ "language": "en", "url": "https://math.stackexchange.com/questions/201807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 1 }
Induced Sheaf Structure is equivalent to Inverse Image Sheaf? Let $f:X \rightarrow Y$ be a map and let $Y$ be a ringed space, i.e. we have a sheaf of rings $O_Y$. Suppose that the regular functions are $k$-valued functions, where $k$ is a field. Define the open sets of $X$ to be generated by inverse images of open sets of $Y$. We want to give $X$ a ringed space structure. One way is to consider the inverse image sheaf. Another way is to define a function $g$ on $U$ to be regular, where $U$ is open in $X$, whenever there exists a covering $U \subseteq \cup_{a} f^{-1}(V_a)$ and regular functions of $Y$, $g_a : V_a \rightarrow k$, such that $g|_{U \cap f^{-1}(V_a)} = (g_a \circ f)|_{U \cap f^{-1}(V_a)}, \, \forall a$. Are the two constructions equivalent?
I ended up asking somebody in person, someone very well known in algebraic geometry: the two sheaf structures, i.e. the inverse image sheaf and the so-called "induced structure", are not only not equivalent, they very rarely even coincide.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proof that a perfect set is uncountable There is something I don't understand about the proof that perfect sets are uncountable. The same proof is present in Rudin's Principles of Mathematical Analysis. Do we assume that our construction of $U_n$ must contain all points of $S$? What if we are only collecting evenly-indexed points of $S$ ($x_{2n}$)? We would still get an infinitely countable subset of $S$, and the rest of $S$ can be used to provide points for $V$. What am I missing?
The crux of the argument is that we make sure to collect all the points. List the elements $\{x_{1}, x_{2}, \cdots , x_{n}, \cdots\}$ and then take the open interval $U_1$. If $x_{2}$ is not in $U_{1}$, ignore it, as it will not show up in $V$. If $x_{2}$ is in $U_{1}$, we construct $U_{2}$. In this way, we go down the list of $x_{n}$, and we ignore them if they are not in current $U_{j}$ we are considering, and otherwise we construct $U_{j+1}$. When we build $V$, and prove it is nonempty, we have shown that we have either actively eliminated points or ignored those that could not have ended up in $V$. Thus we get a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 3, "answer_id": 2 }
Please explain how this ratio is being calculated A, B and C are partners of a company. A receives $\frac{x}{y}$ of the profit. B and C share the remaining profit equally among them. A's income increases by $I_a$ if the overall profit increases from P% to Q%. How much had A invested in the company? I know the answer: $\frac{I_a\cdot100}{P-Q}$. This may be a very simple question, but I don't understand how it comes about.
When you say profit increases from $P\%$ to $Q\%$, do you mean something different from $\frac P{100}$ to $\frac Q{100}$? (A percentage of what? If of sales, are you assuming that sales stay the same?) If not, A receives $\frac xy \frac Q{100}$ instead of $\frac xy \frac P{100}$. You also have a problem of units: $\frac xy \frac Q{100}$ is unitless, but you pay A dollars.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
The drying water melon puzzle I couldn't find an explanation of this problem that I could understand. A watermelon consists of 99% water and that water measures 2 litres. After a day in the sun the watermelon dries up and now consists of 98% water. How much water is left in the watermelon? I know the answer is ~1 litre, but why is that? I've read a couple of answers but I guess I'm a bit slow because I don't understand why. EDIT I'd like you to assume that I know no maths. Explain it like you would explain it to a 10 year old.
You start with 2 litres of water and $x$ litres (say we measure by volume) of "non water". The percentage of water is $$ \frac{2}{2 + x} = 99\% = \frac{99}{99+1}$$ You solve this to get that $x : 1 = 2 : 99$ or that $x = 2/99$. After drying, you have $y$ litres of water and $x$ litres of "non water". Since the non-water bits don't dry, the $x$ is same as before: that is $x = 2/99$. The percentage of water is $$ \frac{y}{y+x} = \frac{y}{y+ 2/99} = 98\% = \frac{98}{98 + 2} $$ So solving this you get that $y : 98 = 2/99 : 2 = 1 : 99$. Or, in other words, $y = 98 /99 \approx 1$. That's how much water you have left. To intuitively understand the problem, it is more helpful to think of the proportion of "non-water". The non water started out at 1%. It ended up in 2%. Since the amount of "non water" didn't change, to have its proportion go from 1% to 2% means that the total volume must have decreased by half. $$ \frac{\text{non water}}{\text{total starting volume}} = 1\% \longrightarrow \frac{\text{non water}}{\text{total final volume}} = 2\% $$ Since the watermelon started out almost all water, for the total volume to decrease by half you must lose at least (and almost exactly) half of the water.
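If it helps, the same computation can be carried out with exact rational arithmetic; a tiny illustrative sketch (not needed for the explanation above):

```python
from fractions import Fraction

water = Fraction(2)                          # litres of water at the start
non_water = water / 99                       # 2 litres is 99%, so the rest is 2/99 litres

total_after = non_water / Fraction(2, 100)   # after drying, non-water is 2% of the total
water_after = total_after - non_water

print(non_water, water_after, float(water_after))   # 2/99 98/99 ~0.99
```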
{ "language": "en", "url": "https://math.stackexchange.com/questions/202095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove convergence without Lebesgue theory W. Rudin has the following exercise, "to convince the reader of the power of Lebesgue integration". Let $0 \leq f_n \leq 1$ be continuous functions from $[0,1]$ to $\mathbb R$, such that they converge pointwise to $0$. Prove that their integrals converge to $0$, without using any Lebesgue theory. How to do this?
In my opinion, the best solution is contained in a paper by Luxemburg, Arzelà's dominated convergence theorem for the Riemann integral, American Math. Monthly 78 (1971), available here but not for free. It is very nice to read this paper, and the proof is essentially elementary. Please do not ask me to copy it in my answer, since it takes a few pages :-)
{ "language": "en", "url": "https://math.stackexchange.com/questions/202157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Zero polynomial Possible Duplicate: Polynomial of degree $-\infty$? Today in Abstract Algebra my instructor briefly mentioned that sometimes the zero polynomial is defined to have degree $-\infty$. What contexts have caused this to become convention?
We want $\deg(P\cdot Q)=\deg P+\deg Q$ for two polynomials. In particular $\deg\mathbf 0=\deg P+\deg\mathbf 0$, so we can't take an integer. $+\infty$ could be a choice, as we want $\deg(P+Q)\leq \max(\deg P,\deg Q)$, but with the definition $\deg\big((\alpha_j)_{j\geq 0}\big):=\sup\{k\geq 0\mid \alpha_k\neq 0\}$ we take a supremum over an empty set, so we take $-\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Constructing the reals from the rationals Dr. H. Jerome Keisler, in his book Elementary Calculus: An Infinitesimal Approach, states on page 24: Just as the real numbers can be constructed from the rational numbers, the hyperreal numbers can be constructed from the real numbers. In what sense is this true? Since $\mathbb{R}$ is uncountably infinite, and $\mathbb{Q}$ is countably infinite, I would think that it is not possible to construct $\mathbb{R}$ from $\mathbb{Q}$, at least in the sense that $\mathbb{Q}$ can be constructed from $\mathbb{N}$.
There are two classical (and equivalent) possibilities: * *Dedekind cuts. Represent every real by the set of rational numbers that are smaller than it -- such sets can be characterized without already knowing $\mathbb R$: they are the downwards closed subsets of $\mathbb Q$ that are neither empty nor $\mathbb Q$ itself and don't have a largest element. *Cauchy sequences. Let the reals be equivalence classes of sequences of rational numbers that "ought to" have a limit according to the Cauchy criterion. Two such sequences are equivalent (and so represent the same real number) if their term-by-term difference converges to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equal scores in a round-robin tournament implies wins=losses? Suppose you have a round robin tournament with $n$ teams where each team gets, in every single head-to-head match: * *$3$ points for a win, *$1$ point for a tie, *$0$ points for a loss. I would like to decide the truth of this conjecture: If the tournament ends with all the teams with the same score then every team had as many wins as losses. It is certainly true for $n=3$ but even checking the case $n=4$ is nontrivial.
There's a counterexample with 8 teams: A, B, C, X, Y, Z, M, S * *S wins against M *M wins against A, B, and C *A wins against Y and Z *B wins against Z and X *C wins against X and Y *X wins against M and A *Y wins against M and B *Z wins against M and C *Everything else ties. Teams ABCXYZ each win twice and lose twice for a score of $2\times0+3\times1+2\times3=9$. Team M wins against ABC and loses to XYZS, for a score of $4\times0+0\times1+3\times3=9$. Team S wins once and ties 6 times, for a score of $0\times 0+6\times1+1\times3=9$. How I found this: First, for easier counting I changed the rules by subtracting one point per match played (that is, $n-1$ points from each team), so that ties give $0$ points, and wins and losses give $2$ and $-1$ respectively. That makes it easier to see which possible combinations of wins and losses add up to the same total. Then I'm looking for a directed graph with no 2-loops such that the value of each node is the same. Since there are as many wins as losses, in a counterexample there must be at least one team that wins more times than it loses. But that team cannot possibly have fewer than $2$ points, so let's see which ways we can make a node with value $2$. They are $1W+0L$, $2W+2L$, $3W+4L$ and so forth -- so if we have one each of $1W+0L$ and $3W+4L$, the numbers of wins and losses add up right. From there it was just a matter of puzzling out where to add $2W+2L$ nodes to the graph such that we don't need more than one match to take place between the same two teams.
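The counterexample is easy to verify mechanically. Here is a small illustrative script (my own sketch) that scores every pairing under the 3/1/0 rule and prints each team's total and win/draw/loss record:

```python
from itertools import combinations

teams = list("ABCXYZMS")
wins = {
    "S": {"M"},
    "M": {"A", "B", "C"},
    "A": {"Y", "Z"},
    "B": {"Z", "X"},
    "C": {"X", "Y"},
    "X": {"M", "A"},
    "Y": {"M", "B"},
    "Z": {"M", "C"},
}

score = {t: 0 for t in teams}
record = {t: [0, 0, 0] for t in teams}   # [wins, draws, losses]

for s, t in combinations(teams, 2):
    if t in wins[s]:
        winner, loser = s, t
    elif s in wins[t]:
        winner, loser = t, s
    else:                                # everything else ties
        score[s] += 1; score[t] += 1
        record[s][1] += 1; record[t][1] += 1
        continue
    score[winner] += 3
    record[winner][0] += 1
    record[loser][2] += 1

print(score)    # every team ends on 9 points
print(record)   # M: 3 wins, 4 losses; S: 1 win, 6 draws; the rest: 2 wins, 2 losses
```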
{ "language": "en", "url": "https://math.stackexchange.com/questions/202345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cluster points of multiples of the fractional part of an irrational number. Would anyone like to help me complete this proof? I need some help understanding where to go next. The book is giving me hints and I am trying to follow along, but I am getting confused about how to finish. Let $c$ be irrational with $0<c<1$. Let $x_n=nc-[ nc] =nc \mod 1$, with $[nc]$ meaning $\operatorname{floor}(nc)$. Determine the cluster points of the sequence $x_n$. Let $\varepsilon>0$ Ok, so first I prove that $x_n=x_m$ implies $n=m$, which is easy, since $c$ is irrational. So every $x_n$ is unique. Secondly, I can use the Archimedian property to pick $m$ such that $\frac1m < \varepsilon$ . Then I can divide up the interval $[0,1)$ into $m$ pieces like this: for $1 \leq k \leq m$ I can let $I_k=\left[\frac{k-1}m,\frac km\right)$. Now I can take $\{{x_j : j=1, N+1, 2N+1,\ldots,mN+1}\}$ , which has $m+1$ distinct values, and thus by the pigeonhole principle, there must be $x_j$ and $x_{j'}$ that are both in the same $I_k$ and hence $|x_j-x_{j'}|<\varepsilon$. So here I am not sure where to go now. Would anyone care to help me out? I am trying to find the cluster points.
The cluster points are all points of $[0,1]$. Given a particular point $a$, an $\epsilon \gt 0$ and a number $N$, you need to show that there is an $n \gt N$ that has $|x_n-a|\lt \epsilon$. Once you find $x_j$ and $x_{j'}$ with $|x_j - x_{j'}| \lt \epsilon$, any time you add $j-j'$ to the subscript, it steps by that amount. If you keep doing these steps, one of them will land within $\epsilon$ of $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Retraction of the Möbius strip to its boundary Prove that there is no retraction (i.e. a continuous map restricting to the identity on the codomain) $r: M \rightarrow S^1 = \partial M$ where $M$ is the Möbius strip. I've tried to find a contradiction using the $r_*$ homomorphism between the fundamental groups, but they are both $\mathbb{Z}$ and nothing seems to go wrong...
For each $\alpha\in\partial M$, let $\gamma_\alpha$ be the closed loop in $M$ that starts at $\alpha$, goes directly across the strip to its antipode and then halfway around the boundary to its starting point in positive direction. Then $\alpha\mapsto\gamma_\alpha$ is a homotopy -- in particular every $\gamma_\alpha$ has the same homotopy class. On the other hand, if $x$ and $y$ are antipodes, then when we form $\gamma_x+\gamma_y$, the "directly across" sections cancel out, and the concatenated curve is homotopic to a single turn around the entire boundary. So the homotopy class of $r(\gamma_x+\gamma_y)$ in $\partial M$ is $1$. On the other hand, $r$ ought to induce a homomorphism between the homotopy groups, but $1$ is not twice anything in $\mathbb Z$, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 3 }
Calculating Eigenvalues from two matrices Let $\alpha$ be the endomorphism given by $\alpha\colon \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \mapsto \left[ \begin{array}{cc} d & -b \\ -c & a \end{array} \right]$ I need to find the eigenvalues and associated eigenspaces. Since the determinants are equal, does that mean that the product of the eigenvalues is also the same? And which matrix ought I use to find the eigenvalues?
It is easy to see that $\alpha^2 = I$, from which it follows that $(\alpha -I)(\alpha + I) = 0$. Hence the set of eigenvalues is $\{\pm 1 \}$. Choose the basis $e_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, $e_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $e_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$, $e_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$. In this basis, $\alpha$ has the form $A = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$. The characteristic polynomial is easily computed to be $\det (\lambda I -A) = (\lambda+1)^3 (\lambda-1)$. Also from $A$ we have $\alpha e_2 = -e_2$, $\alpha e_3 = - e_3$, $\alpha(e_1+e_4) = e_1+e_4$ and $\alpha(e_1-e_4) = -(e_1-e_4)$, which gives all the eigenvectors.
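For readers who want to double-check the characteristic polynomial and the eigenvalue multiplicities, a short SymPy sketch (purely illustrative):

```python
import sympy as sp

A = sp.Matrix([
    [0,  0,  0, 1],
    [0, -1,  0, 0],
    [0,  0, -1, 0],
    [1,  0,  0, 0],
])

lam = sp.symbols('lambda')
print(sp.factor(A.charpoly(lam).as_expr()))   # (lambda - 1)*(lambda + 1)**3
print(A.eigenvals())                          # {-1: 3, 1: 1}
```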
{ "language": "en", "url": "https://math.stackexchange.com/questions/202569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Determine if S is a subspace of the F-vector space V $\mathbb{F} = \mathbb{C}$, $V = \mathbb{C}^{3\times 3}$, the set of all complex $3\times 3$ matrices, and $S$ is the set of all matrices of the form $$ \left( \begin{array}{ccc} a & a & a \\ 0 & 0 & a \\ a & a & a \end{array} \right)$$ where $a$ is an arbitrary complex number. Determine if $S \subset \mathbb{C}^{3\times 3}$ is a subspace.
Yes, this is indeed a subspace. To see that it is a subspace we need to check that it is closed under scalar multiplication and addition. Choose any scalar $z\in \mathbb{C}$ then $$ z \left( \begin{array}{ccc} a & a & a \\ 0 & 0 & a \\ a & a & a \end{array} \right) = \left( \begin{array}{ccc} za & za & za \\ 0 & 0 & za \\ za & za & za \end{array} \right) \in S$$ and for any $a, b\in \mathbb{C}$ we have, $$ \left( \begin{array}{ccc} a & a & a \\ 0 & 0 & a \\ a & a & a \end{array}\right) + \left( \begin{array}{ccc} b & b & b \\ 0 & 0 & b \\ b & b & b \end{array} \right) = \left( \begin{array}{ccc} a+b & a+b & a+b \\ 0 & 0 & a+b \\ a+b & a+b & a+b \end{array} \right) \in S$$ Thus we see that $S$ is closed under scalar multiplication and addition of vectors, so it is a subspace.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve a simultaneous equation. How do we solve $|b-y|=b+y-2$ and $|b+y|=b+2$? I have tried to square them and factorize them but got confused by the "and" and "or" conditions.
$b+2=|b+y|$, which is real, so $b$ is real. $y+b-2=|b-y|$, which is real, so $y+b-2$ and hence $y$ are real. (1) If $b \ge y$, then $b-y=b+y-2\implies y=1 \implies |b+1|=b+2$ and $b \ge y=1$. So $b+1 >0\implies |b+1|=b+1=b+2$, which has no solution. (2) If $b<y$, then $y-b=b+y-2\implies b=1,\ y>b=1$. So $|1+y|=3\implies y+1=3\implies y=2$. The only solution is $b=1,y=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Need a hint: show that a subset $E \subset \mathbb{R}$ with no limit points is at most countable. I'm stuck on the following real-analysis problem and could use a hint: Consider $\mathbb{R}$ with the standard metric. Let $E \subset \mathbb{R}$ be a subset which has no limit points. Show that $E$ is at most countable. I'm primarily confused about how to go about showing that this set $E$ is at most countable (i.e. finite or countable). What I can show: since $E$ has no limit points, I can show that for every $x \in E$, there is a neighborhood $N_{r_x}(x)$ with $r_x > 0$ that does not contain any other point $y \in E$ with $y \neq x$. This suffices to show that every point of $E$ is an isolated point.
Note: The union of countably many countable sets is countable. Assume $E$ is not countable. Then one of the sets $[n,n+1]\cap E$ with $n\in \mathbb Z$ is uncountable. Starting with $a_0=n$, $b_0=n+1$ we find a sequence of nested intervals $[a_k, b_k]$ such that $[a_k,b_k]\cap E$ is uncountable. In fact we can simply bisect an interval at each step and note that at least one of the halves must have uncountably many points in common with $E$. In other words, we let $a_{k+1}=a_k$, $b_{k+1}=\frac{a_k+b_k}2$ if $[a_k, \frac{a_k+b_k}2]\cap E$ is uncountable and let $a_{k+1}=\frac{a_k+b_k}2$, $b_{k+1}=b_k$ otherwise (and observe that then $[a_{k+1},b_{k+1}]\cap E$ is uncountable as well). The nested intervals contain a point $c\in \mathbb R$. This $c$ has some (in fact uncountably many) points of $E$ in every $\epsilon$-neighbourhood. Indeed $[a_k,b_k]$ is contained in the $\epsilon$-neighbourhood as soon as $2^{-k}<\epsilon$. Or: If $E\cap [0,\infty)$ and $E\cap(-\infty,0]$ are both countable, then so is $E$. Hence assume wlog that $E\cap [0,\infty)$ is uncountable. Let $a=\inf\{x\in \mathbb R\colon [0,x]\cap E\mathrm{\ is\ uncountable}\}$. If $a=\infty$ then $E\cap [0,\infty)=\bigcup_n E\cap[0,n)$ is the union of countable sets, hence countable. If on the other hand $a$ is finite, then $[a,a+\epsilon)\cap E$ is uncountable for any $\epsilon>0$, hence $a$ is a limit point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/202943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 8, "answer_id": 3 }
Find all solutions to linear congruences Find all the solutions of each of the linear congruences below: \begin{align} &(a) &10x &\equiv 5 \pmod{15},\\ &(b) &6x &\equiv 7 \pmod{26},\\ &(c) &7x &\equiv 8 \pmod{11}. \end{align} I'm not entirely sure how to get these solutions by hand. I know how to prove there are solutions. For example: $(a) \quad\gcd(10,15)=5 $ and we know $5|5$. From there I set $10x+15y=5$ and divide through by $5$. Leaving us with $2x+3y=1$. I know some solutions for $x$ and $y$, such as $x=-1$ and $y=1$, but that's all I have thus far.
It seems like you're familiar with the theorem in the comments above. For part (a), by inspection, you can see that $x\equiv 2$ is a solution. Since $g=\gcd(10,15)=5$ and $m/g=15/5=3$, you know there are $5$ total solutions $\pmod{15}$, and the others are found just by adding $3$ successively until you've found all $5$. For (b), $\gcd(6,26)=2$ but $2\nmid 7$, so how many solutions can there be? Part (c) is nice because $7$ and $11$ are coprime, so $7$ is actually invertible here. Try to find $7^{-1}\pmod{11}$, and then multiply both sides of $7x\equiv 8\pmod{11}$ by it to find the unique solution for $x$ modulo $11$.
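If you want to confirm the solution counts after working them out, a brute-force check over each modulus is enough; a small illustrative sketch:

```python
def solve_congruence(a, b, m):
    """All x in 0..m-1 with a*x = b (mod m)."""
    return [x for x in range(m) if (a * x - b) % m == 0]

print(solve_congruence(10, 5, 15))   # [2, 5, 8, 11, 14]: five solutions, spaced by 3
print(solve_congruence(6, 7, 26))    # []: gcd(6, 26) = 2 does not divide 7
print(solve_congruence(7, 8, 11))    # [9]: unique since gcd(7, 11) = 1
```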
{ "language": "en", "url": "https://math.stackexchange.com/questions/203019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Determinant with unknown parameter. I'm given 4 vectors: $u_1, u_2, u_3$ and $u_4$. I'm going to type them in as points, because it will be easier to read, but think of them as column vectors. $$u_1 =( 5, λ, λ, λ), \hspace{10pt} u_2 =( λ, 5, λ, λ), \hspace{10pt} u_3 =( λ, λ, 5, λ), \hspace{10pt}u_4 =( λ, λ, λ, 5)$$ The task is to calculate the values of λ for which the vectors are linearly dependent, as well as those for which they are linearly independent. I managed to figure out that I could put them in a matrix, let's call it $A$, and set $\det(A) = 0$ if the vectors should be linearly dependent, and $\det(A) \neq 0$ if the vectors should be linearly independent. Some help to put me in the right direction would be great!
Your guess is correct. But what is the difficulty then? Maybe you are unable to get the determinant in a simpler way? $$\left|\begin{array}{cccc} 5&\lambda&\lambda&\lambda\\\lambda&5&\lambda&\lambda\\\lambda&\lambda&5&\lambda\\\lambda&\lambda&\lambda&5\end{array}\right|=(5-\lambda)^3\left|\begin{array}{rrrr} 1&0&0&\lambda\\-1&1&0&\lambda\\0&-1&1&\lambda\\0&0&-1&5\end{array}\right|=(5-\lambda)^3\left[\left|\begin{array}{rrr}1&0&\lambda\\-1&1&\lambda\\0&-1&5 \end{array}\right|+\left|\begin{array}{rrr}0&0&\lambda\\-1&1&\lambda\\0&-1&5 \end{array}\right|\right]$$ So, finally, the determinant is $$(5-\lambda)^3\left[(5+\lambda)+\lambda+\lambda\right]=(5-\lambda)^3(5+3\lambda)$$
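The factorisation is easy to double-check symbolically; a short SymPy sketch (not needed for the hand computation above):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix(4, 4, lambda i, j: 5 if i == j else lam)

claimed = (5 - lam)**3 * (5 + 3 * lam)
print(sp.simplify(A.det() - claimed))      # 0, so the factorisation is correct
print(sp.solve(sp.Eq(A.det(), 0), lam))    # [-5/3, 5]: dependent exactly at these values
```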
{ "language": "en", "url": "https://math.stackexchange.com/questions/203066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Why does positive semi-definiteness in this inequality imply a convex set? I was reading a proof that rewrote an inequality in the form: $$b^Tx +x^T A x \le \alpha$$ for $b,x \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}$, and with $A$ positive semidefinite. It then concluded that the solution set is convex. Why is this the case? I can see that $b^Tx$ is an affine function in $x$ and the latter is some sort of ellipsoid? But I'm not sure why their sum in the inequality would lead one to conclude so readily that the solution set is convex?
Note that both $b^Tx$ and $x^TAx$ are convex functions, and that the sum of convex functions is convex, thus $b^Tx+x^TAx$ is convex. It is a fact that the sublevel sets of a convex function $f$, i.e. the sets $\{x:f(x)\le \alpha\}$, are convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/203211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $\alpha (A) \ge \beta (A) $ then $\alpha (A) = \beta (A)$, where $ \alpha $ and $\beta $ are measures $ \alpha $ and $\beta $ are measures on $ (\Omega, \mathscr F) $ and $ A \in \mathscr F$. If $\alpha (A) \ge \beta (A) $, I need to prove that $\alpha (A) = \beta (A).$
Here is a true statement. Let $ \alpha $ and $\beta $ denote two probability measures on $ (\Omega, \mathscr F) $. If $\alpha (A) \geqslant \beta (A) $ for every $A$ in $\mathscr F$, then $\alpha = \beta$. Note the added hypothesis that $\alpha$ and $\beta$ are probability measures and the modified hypothesis on the comparison. To prove this, assume that there exists $A$ in $\mathscr F$ such that $\alpha(A)\ne\beta(A)$. Then $\alpha(A)\gt\beta(A)$, $\alpha(\Omega\setminus A)\geqslant\beta(\Omega\setminus A)$ and $A$ and $\Omega\setminus A$ are disjoint with union $\Omega$ hence $$ 1=\alpha(\Omega)=\alpha(A)+\alpha(\Omega\setminus A)\gt\beta(A)+\beta(\Omega\setminus A)=\beta(\Omega)=1, $$ which is absurd. Thus, $\alpha(A)=\beta(A)$ for every $A$ in $\mathscr F$, that is, $\alpha = \beta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/203286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help me prove $\sqrt{1+i\sqrt 3}+\sqrt{1-i\sqrt 3}=\sqrt 6$ Please help me prove this Leibniz equation: $\sqrt{1+i\sqrt 3}+\sqrt{1-i\sqrt 3}=\sqrt 6$. Thanks!
$\sqrt{1 + i \sqrt 3} + \sqrt{1 - i\sqrt 3} = \sqrt 6$ ? $1 + i \sqrt 3 = 2 \exp \left( \dfrac {\pi}{3}i + 2 \pi n i \right) \quad \{ n \in \mathbb Z \}$ $\sqrt{1 + i \sqrt 3} = \sqrt 2 \exp \left( \dfrac {\pi}{6}i + \pi n i \right) \quad \{ n \in \mathbb Z \}$ $\sqrt{1 + i \sqrt 3} = \pm \left( \dfrac{\sqrt 6}{2} + \dfrac{\sqrt 2}{2} i \right)$ Similarly $\sqrt{1 - i \sqrt 3} = \pm \left( \dfrac{\sqrt 6}{2} - \dfrac{\sqrt 2}{2} i \right)$ So there are four possible values of $\sqrt{1 + i \sqrt 3} + \sqrt{1 - i\sqrt 3}$ One of them is $\sqrt 6$.
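Numerically, the principal square roots already give the identity; a tiny illustrative check:

```python
import cmath, math

z1 = cmath.sqrt(1 + 1j * math.sqrt(3))   # principal root: sqrt(6)/2 + i*sqrt(2)/2
z2 = cmath.sqrt(1 - 1j * math.sqrt(3))   # principal root: sqrt(6)/2 - i*sqrt(2)/2

print(z1 + z2)          # (2.449...+0j)
print(math.sqrt(6))     # 2.449...
```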
{ "language": "en", "url": "https://math.stackexchange.com/questions/203462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 6, "answer_id": 2 }
Is this function Lipschitz? Let $f:X \rightarrow \mathbb R$ be a Lipschitz function on a metric space $X$ and $K<M$ be some constants. Is such a function $g:X\rightarrow \mathbb R$ Lipschitz: $$ g(x)=f(x) \textrm{ if } \ K \leq f(x) \leq M, $$ $$ g(x)=K \textrm { if } \ f(x)<K, $$ $$ g(x)=M \textrm{ if } \ f(x)>M. $$ Thanks
We have $g(x)=\min\{\max\{f(x),K\},M\}$. Now, we just have to show that if $|f(x)-f(y)|\leq C|x-y|$, then $|\max\{f(x),K\}-\max\{f(y),K\}|\leq C|x-y|$, which can be shown using the formula $2\max\{a,b\}=a+b+|a-b|$ and the triangle inequality. By the way, the Lipschitz constant is the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/203513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to solve system of equations with mod? I'm trying to solve for $a$ and $b$: $$5 \equiv (4a + b)\bmod{26}\quad\text{and}\quad22\equiv (7a + b)\bmod{26}.$$ I tried looking it up online, and the thing that seemed most similar was the Chinese remainder theorem; however, I couldn't find an instance where it fit something more like what I want to solve. A simple explanation or a reference to one would be most appreciated. With my (limited) knowledge of algebra, I figured out that $x\ \textrm{mod}\ 26 = x - 26\lfloor\frac{x}{26}\rfloor$, so I tried substituting that into my equations: $$5=(4a+b)-26\left\lfloor\frac{4a+b}{26}\right\rfloor\quad\text{and}\quad 22=(7a+b)-26\left\lfloor\frac{7a+b}{26}\right\rfloor.$$ And I figured I could do something with that since I got rid of the mod, but... I have never solved an equation with a floor function before.
Since everything is $\bmod{26}$, you can use most of the methods for solving other simultaneous equations. Instead of dividing to get fractions, use modular division (which involves the Euclidean Algorithm). For example, let's use Gaussian elimination for this problem $$ \begin{align} 12&=2a+b\pmod{26}\\ 15&=9a+b\pmod{26} \end{align} $$ Subtracting the first from the second gives $$ 3=7a\pmod{26} $$ Using the Euclidean Algorithm, we get that $15\times7=105\equiv1\pmod{26}$. So, multiplying both sides by $15$ we get $$ 19=a\pmod{26} $$ Subtracting $2$ times the second from $9$ times the first yields $$ 78=7b\pmod{26} $$ Since $78\equiv0\pmod{26}$, multiplying both sides by $15$ yields $$ 0=b\pmod{26} $$ Using the Euclid-Wallis Algorithm: As described in this answer, we can use the Euclid-Wallis Algorithm to invert $7\bmod{26}$: $$ \begin{array}{rrrrrrr} &&\color{orange}{3}&\color{orange}{1}&\color{orange}{2}&\color{orange}{2}\\ \hline \color{#00A000}{1}&\color{#00A000}{0}& 1&-1&\color{red}{3}&\color{blue}{-7}\\ \color{#00A000}{0}& \color{#00A000}{1}& -3&4&\color{red}{-11}&\color{blue}{26}\\ \color{#00A000}{26}&\color{#00A000}{7}&5& 2& \color{red}{1}&\color{blue}{0} \end{array} $$ This says that $3\times26-11\times7=1$, which says that $-11\times7\equiv1\pmod{26}$. Since $-11\equiv15\pmod{26}$, we also get $15\times7\equiv1\pmod{26}$.
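Applying the same elimination to the system actually asked about, $5\equiv 4a+b$ and $22\equiv 7a+b \pmod{26}$, a short sketch using Python's built-in modular inverse (variable names are my own):

```python
m = 26

# Subtract the first congruence (5 = 4a + b) from the second (22 = 7a + b):
# 17 = 3a (mod 26).
inv3 = pow(3, -1, m)            # 9, since 3 * 9 = 27 = 1 (mod 26)
a = (17 * inv3) % m             # 23
b = (5 - 4 * a) % m             # 17

print(a, b)                               # 23 17
print((4 * a + b) % m, (7 * a + b) % m)   # 5 22, so both congruences hold
```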
{ "language": "en", "url": "https://math.stackexchange.com/questions/203660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 0 }
Proving that $x^3 +1=15x$ has at most three solutions in the interval $[-4,4]$. I need someone to check my work. Thanks! This is a 2 mark homework question by the way. I am not sure why I am using such a long way to prove it. Is there a way to shorten it, or is there a shorter, more intuitive method? Proving that $x^3 +1=15x$ has at most three solutions in the interval $[-4,4]$: Let $f(x)=x^3+1-15x$. Suppose, for a contradiction, that this equation has at least 4 solutions $a<b<c<d$, so that $f(a)=0,f(b)=0,f(c)=0,f(d)=0$. Since $f$ is continuous and differentiable on $\mathbb{R}$, by Rolle's Theorem there exist $c_1 \in (a,b)$, $c_2 \in (b,c)$, $c_3 \in (c,d)$ such that $f^\prime(c_1)=0,f^\prime(c_2)=0,f^\prime(c_3)=0$, where $f^\prime(x)=3x^2-15$. Moreover, since $f^\prime(x)$ has these 3 zeros, by Rolle's Theorem (as $f^\prime$ is continuous and differentiable on $\mathbb{R}$) there exist $d_1 \in (c_1,c_2)$, $d_2 \in (c_2,c_3)$ such that $f^{\prime\prime}(d_1)=0,f^{\prime\prime}(d_2)=0$, where $f^{\prime\prime}(x)=6x$. Moreover, since $f^{\prime\prime}(x)$ has these 2 zeros, by Rolle's Theorem there exists $e_1 \in (d_1,d_2)$ such that $f^{\prime\prime\prime}(e_1)=0$. But $f^{\prime\prime\prime}(x)=6$, so this implies $f^{\prime\prime\prime}(e_1)=0=6$. Hence we have a contradiction. Similarly, we can apply the same steps to cases where $f(x)=x^3+1-15x$ has 5 or more solutions and still reach a contradiction. Therefore the negation must be true, i.e. $f(x)=x^3+1-15x$ has at most 3 solutions.
As others have pointed out, the degree of the polynomial implies that any interval of $\mathbb{R}$ will contain at most three solutions for a cubic - by the Fundamental theorem of Algebra (which roughly says that a polynomial of degree $n$ has $n$ complex solutions, and thus at most $n$ real solutions). If you were trying to show that $x^3-15x+1=0$ has exactly three real solutions in the interval, you could differentiate as you have done, show $f'=0$ has two solutions, that they lie in $[-4,4]$, and that $f(4)>0$, $f(-4)<0$ and (for instance) $f(1)<0$ and $f(-1)>0$, which implies $f(x)=0$ has three solutions in the interval. If you really did mean at most, then forgetting the FTA, I suppose you could simply show $f'$ is $0$ exactly twice in the given interval, and hence $f$ can be $0$ at most thrice.
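The sign pattern suggested above is quick to tabulate; a tiny illustrative check:

```python
f = lambda x: x**3 - 15 * x + 1

for x in (-4, -1, 1, 4):
    print(x, f(x))   # -4 -> -3, -1 -> 15, 1 -> -13, 4 -> 5

# Three sign changes on [-4, 4] give three roots there; degree 3 caps the count at three.
```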
{ "language": "en", "url": "https://math.stackexchange.com/questions/203711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
Calculate the nth moment given a joint density Suppose a point has a random location in the circle of radius 1 around the origin. The coordinates $(X,Y)$ of the point have a joint density $$f_{X,Y}(x,y) = \begin{cases}\frac{2}{\pi}(x^2+y^2)&\mathrm{\ if \ } x^2+y^2\le1\\ 0&\mathrm{\ otherwise\ }\end{cases}$$ Let $D$ be the distance from the random point to the center of the circle. How do I compute the $nth$ moment of $D$, $E(D^n)$, for $n = 1,2,...m$?
We have $D=\sqrt{X^2+Y^2}$, so the $n$-th moment of $D$ is the integral of $$(x^2+y^2)^{n/2}\left(\frac{2}{\pi}\right)(x^2+y^2)$$ over the unit disk. Thus we want to integrate $\displaystyle\frac{2}{\pi}\displaystyle(x^2+y^2)^{1+\frac{n}{2}}$ over the unit disk. Change to polar coordinates.
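Carrying out that polar-coordinate integral symbolically gives the closed form $E(D^n)=\dfrac{4}{n+4}$; a short SymPy sketch (assuming SymPy as tooling):

```python
import sympy as sp

r, theta, n = sp.symbols('r theta n', positive=True)

# Integral over the unit disk of r^n * (2/pi) * r^2, with polar Jacobian r:
moment = sp.integrate(r**n * (2 / sp.pi) * r**2 * r, (r, 0, 1), (theta, 0, 2 * sp.pi))
print(sp.simplify(moment))   # 4/(n + 4)
```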
{ "language": "en", "url": "https://math.stackexchange.com/questions/203779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solving for x with exponents (algebra) So I am trying to help a friend do her homework and I am a bit stuck. $$8x+3 = 3x^2$$ I can look at this and see that the answer is $3$, but I am having a hard time remembering how to solve for $x$ in this situation. Could someone be so kind as to break down the steps in solving for $x$. Thanks in advance for replies.
$8x+3=3x^2$ You can solve the above equation by "splitting the middle term" of the quadratic as well. It is mostly useful for simple equations like the one above. Solution: You can re-write the above as: $3 \cdot x^2 - 8 \cdot x -3 = 0$ Now try to split the middle term $-8x$ into two terms whose coefficients multiply to the product of the other two coefficients, $3 \cdot (-3) = -9$, and add up to $-8$. Now $-9$ can be written as $(-9) \cdot 1$, and $-9+1=-8$, so express $-8x$ as $(-9x + 1x)$ So, the equation in this case will be: $3 \cdot x^2 -9 \cdot x + 1 \cdot x -3 = 0$ $\implies (x-3)(3x+1) = 0$ $\implies x =3, -\frac{1}{3}$ Now x can't be negative. So, $x = 3$ Now put x =3 in the above mentioned equations: So, $lm = 2 \cdot x +1 = 7$ $mn = 6 \cdot x -3 = 15 $ and $ln = 3 \cdot x^2 -5 = 22$ Reference: http://www.teacherschoice.com.au/Maths_Library/Algebra/Alg_18.htm
{ "language": "en", "url": "https://math.stackexchange.com/questions/203841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Composition of functions that are onto or one-to-one I found part of my answer here: If g(f(x)) is one-to-one (injective) show f(x) is also one-to-one (given that...); however I wanted to flesh out the last two statements I had in a proposition in my notes. Proposition: Let $f: A \rightarrow B$ and $g: B \rightarrow C$. Then: (i) If $g \circ f$ is one-to-one, then $f$ is one-to-one. (ii) If $g \circ f$ is onto, then $g$ is onto. Proof: (i) Suppose $f(x)=f(y)$ for some $x,y$. Since $g \circ f$ is one-to-one: $$g\circ f(x) = g\circ f(y) \Rightarrow x=y,\forall x,y \in A.$$ Therefore $f$ must be one-to-one. (ii) Since $g \circ f (x)$ is onto, then for every $c \in C$ there exists an $a \in A$ such that $c=g(f(a))$. Then there exists a $b \in B$ with $b=f(a)$ such that $g(b)=c$. Thus g is onto. I wanted to confirm that these proofs are both correct for my peace of mind (as they weren't proven in class).
Both of your proofs are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/203891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Behaviour of the spectrum of a compact operator w.r.t. perturbations. Suppose $A$ and $B$ are linear compact operators on a Hilbert space with $\sigma(A)$ and $\sigma(B)$ as their spectrum. Is it possible to obtain some continuity result of $\sigma(A+\epsilon B)$ as $\epsilon\downarrow 0$ towards $\sigma(A)$? Is the limiting behavior of the form "$\sigma(A)+\epsilon \sigma(B)$''? Thanks in advance!
A partial answer. I think it is true if $\sigma(A)=\{0\}$. For $\lambda\neq 0$, $A-\lambda I$ is invertible. However, the set of isomorphisms is open in $\mathcal{B}(H)$, so there exists $\varepsilon>0$ such that $A-\lambda I+\varepsilon B$ is also invertible. This means that $\lambda\notin\sigma(A+\varepsilon B)$, for $\varepsilon$ small enough. This should imply that $\sigma(A+\varepsilon B)\to \{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/203970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Can a subgroup generated by two finite subgroups be infinite? Let $H$ and $K$ be two subgroups of $G$ of finite order. Can $HK$ be infinite?
Ok, since it seems that the general consensus is to look at the subgroup generated by $HK$, here is another example. The modular group $SL_2(\mathbb Z)$ is generated by two matrices of finite order: $$\begin{pmatrix} 0&1\\-1&0\end{pmatrix},\text{ and }\begin{pmatrix} 0&1\\-1&0\end{pmatrix}\begin{pmatrix} 1&1\\0&1\end{pmatrix}=\begin{pmatrix} 0&1\\-1&-1\end{pmatrix};$$ the former has order $4$ and the latter, order $3$. And of course, $SL_2(\mathbb Z)$ is infinite.
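A quick check of the two orders (and of the fact that the generated subgroup contains an element of infinite order); an illustrative SymPy sketch:

```python
import sympy as sp

S = sp.Matrix([[0, 1], [-1, 0]])
T = sp.Matrix([[1, 1], [0, 1]])
ST = S * T                       # [[0, 1], [-1, -1]]
I = sp.eye(2)

print(S**4 == I and S**2 != I)   # True: S has order 4
print(ST**3 == I and ST != I)    # True: S*T has order 3
print(T**5)                      # [[1, 5], [0, 1]]; T = S**3 * (S*T) has infinite order
```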
{ "language": "en", "url": "https://math.stackexchange.com/questions/204037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Cycling Digits puzzle I'm trying to answer the following: "I have in mind a number which, when you remove the units digit and place it at the front, gives the same result as multiplying the original number by $2$. Am I telling the truth?" I think the answer to that is no. It's easy to prove that it's false for numbers with two digits: Let $N = d_0 + 10 \cdot d_1$. Then $2N = 2 d_0 + 20 d_1$ and the "swapped" number is $N^\prime = d_1 + 10 d_0$. We would like to have $2d_0 + 20 d_1 = d_1 + 10d_0$, which amounts to $8d_0 = 19d_1$. The smallest value for which this equality is fulfilled is $d_0 = 19, d_1 = 8$, but $19$ is not $\leq 9$, that is, it is not a digit, hence there is no solution. Using the same argument I can show that the claim is false for $3$-digit numbers. I conjecture that it's false for all numbers. How can I show that? Is there a more general argument than mine, for all numbers? Thanks for any help.
The number has to be divisible by 9 The remainder left by a number when divided by $9$ is equal to the remainder left by the sum of its digits. Now, here, you are not changing the digits when you move one of them to the front, so the remainder does not change. However, multiplying by two should double the remainder modulo nine, and this is a contradiction unless the number is divisible by $9$. Solution Assume that the number $u = 9k$ and the representation of $u$ is $$\overline{a_{n}a_{n-1}\ldots a_{1} a_{0}} = u = 9k.$$ Then $$\overline{a_{0}a_{n}\ldots a_{2} a_{1}} = 2u = 18k.$$ Hence multiplying by 10 we get $$\overline{a_{0}a_{n}\ldots a_{2} a_{1}} * 10 + a_{0} = 180k + a_{0}.$$ Regrouping the digits and then writing the original number as $9k$, we get $$a_{0}*10^{n+1} + \overline{a_{n}a_{n-1}\ldots a_{1} a_{0}}= 180k + a_{0}$$ which implies that \begin{align} &a_{0}*10^{n+1} + 9k = 180k + a_{0}\\ \\ &\left(10^{n+1} -1 \right) a_{0} = 171k\\ \\ &\underbrace{\overline{99\ldots999}}_{n+1}a_{0} = 171k \end{align} that is $$171k = 9*\left( \underbrace{\overline{11\ldots111}}_{n+1}\right) a_{0}$$ hence $$19k = \left( \overline{11\ldots 111}\right) a_{0}.$$ So, the problem comes down to finding the solutions of the above equation. Combining this with Ross Millikan's analysis, we get the equation $$ 10^{n+1} = 171k + a_{0}$$ and $$ 9k \equiv a_{0} \, \mathrm{ mod } \, 10$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/204090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Multiplication operators Consider a commutative Banach algebra $A$ and the Banach algebra of bounded operators $B(A)$ on $A$. Associate to each $a\in A$ the multiplication operator $T_ax =ax$ ($x\in A$). Is the mapping $\varphi(a)=T_a$, $\varphi\colon A\to B(A)$, always a continuous algebra homomorphism? What if $A$ is not commutative?
Since $\|T_a x\| \le \|a \| \|x\|$, we have $\|T_a\| \le \|a\|$, so it is continuous. That it is a homomorphism is easy. Even if $A$ is not commutative, since $T_{ab} x = abx = T_a T_b x$ it is a homomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is an OPERATOR? I am trying to understand operator-valued kernels. For this purpose, I first want to know what an operator is. I can see the definition of an operator here, but I do not quite get it. Can anyone explain it in simple words, maybe with examples?
An operator is a special kind of function. The simplest functions take a number as an input and give a number as an output. Operators take a function as an input and give a function as an output. As an example, consider $\Omega$, an operator on the set of functions $\mathbb{R} \to \mathbb{R}.$ We can define $\Omega(f) := f + 1$. The operator $\Omega$ takes the function $x \mapsto f(x)$ as an input and gives $x \mapsto f(x)+1$ as its output function. Another, well known, linear operator is differentiation. In this example: $$\Omega(f) := \frac{df}{dx} \, . $$ It is a linear operator because $\Omega(\lambda f+\mu g) = \lambda\Omega(f) + \mu\Omega(g).$ Functions are incredibly general objects, so operators are even more so. Operators are functions on functions. If you're still stuck then I recommend you spend more time thinking about functions.
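The "function in, function out" idea is easy to make concrete in code. Here is a toy sketch (my own illustration, not part of the definition) in which each operator takes a function and returns a new function:

```python
def shift_up(f):
    """The operator Omega from the text: Omega(f) = f + 1."""
    return lambda x: f(x) + 1

def differentiate(f, h=1e-6):
    """A (numerical) version of the differentiation operator d/dx."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x ** 2

print(shift_up(square)(3))        # 10, i.e. 3**2 + 1
print(differentiate(square)(3))   # about 6, i.e. the derivative of x**2 at x = 3
```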
{ "language": "en", "url": "https://math.stackexchange.com/questions/204221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
inequality exponential series with adjusted denominator The following problem might have something to do with the coefficients of the moment generating function. But I do not see how to prove it, nor do I have a counterexample. Given that the positive random variable $X$ is first order stochastically dominated by $Y$ $(0\preceq X\preceq Y)$, we know $$ E[\exp(-X)]\geq E[\exp(-Y)] $$ In other words, $$ \sum_{i=0}^\infty \frac{E[(-X)^i]}{i!} \geq \sum_{i=0}^\infty \frac{E[(-Y)^i]}{i!} $$ Is the following true for any $k$ and $0\preceq X\preceq Y$ $$\sum_{i=0}^\infty \frac{E[(-X)^i]}{(k+i)!} \geq \sum_{i=0}^\infty \frac{E[(-Y)^i]}{(k+i)!}$$ One may assume that $k$ is even if needed.
Yes, for every $k\geqslant0$. Consider $u_k(x)=(k-1)!\cdot\displaystyle\sum\limits_{i\geqslant0}\frac{(-x)^i}{(k+i)!}$, then it suffices to prove that the function $x\mapsto u_k(x)$ is nonincreasing. To do so, note that $\displaystyle\frac{(k-1)!}{(k+i)!}=\int_0^1(1-t)^{k-1}\frac{t^i}{i!}\,\mathrm dt$ for every $i\geqslant0$ hence $$ u_k(x)=\sum\limits_{i\geqslant0}(-x)^i\int_0^1(1-t)^{k-1}\frac{t^i}{i!}\,\mathrm dt=\int_0^1(1-t)^{k-1}e^{-tx}dt. $$ Since $x\mapsto e^{-tx}$ is decreasing for every $t$ in $(0,1)$, this proves the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conversion of a convex polygon to a triangle using a compass and straight edge Given a convex polygon with 4 or more sides, is there a way to transform (convert, reduce) that polygon to a triangle having the same area as the polygon by using only a compass and straight edge?
Yes, and it is not necessary that the polygon is convex. Divide the polygon into triangles. For each triangle, construct a rectangle with the same base and half the height, so that the triangle and the rectangle have the same area. For each rectangle with sides $a$ and $b$, construct a square of side $c$, where $c$ is a mean proportional of $a$ and $b$. (This means that $a:c=c:b$.) This is done by drawing a semicircle with diameter $a+b$, and constructing a perpendicular to the diameter from the point where the segments of length $a$ and $b$ meet. Then $c$ is the height of the perpendicular inside the semicircle. Given two squares with sides $c$ and $d$, construct a right triangle with legs $c$ and $d$; then the hypotenuse will have length $\sqrt{c^2+d^2}$. Hence we can construct a single square with area $c^2+d^2$. By iterating this construction, we end up with a square of the same area as the original polygon. Finally, a square of side $s$ has the same area as a triangle with base $2s$ and height $s$, which is easy to construct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Summation of powers inequality Can anyone provide a slick proof of the following? Let $0 < x \le 1$. Then $\displaystyle \sum_{k=0}^{n-1} x^k \ge \frac {1} {1 - (1 - 1/n)x}$.
As $1-(1-1/n)x>0$, we have to show that $(1-x)\sum_{k=0}^{n-1}x^k+\frac 1n\sum_{k=0}^{n-1}x^{k+1}\geq 1$. This is equivalent to $$1-x^n+\frac 1n\sum_{j=1}^nx^j\geq 1,$$ i.e., $$nx^n\leq \sum_{j=1}^nx^j.$$ This is true, as each term in the sum is $\geq x^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does $e^{\frac{1}{z}}$ belong to $W^{1,p}(B(0,1),\mathbb{CP}^1)$? Some background for this question. Given a Riemannian manifold $(M,g)$ and a compact Riemannian manifold $(N,h)$ together with an isometric embedding $(N,h) \hookrightarrow (\mathbb{R}^N, g_{\mathrm{euc}})$, one can define the Sobolev spaces of mappings between manifolds as $$W^{k,p}(M,N) = \{ u \in W^{k,p}(M,\mathbb{R}^N) \,\, | \,\, u \in N \, \mathrm{a.e.} \}.$$ This definition depends on the embedding of $N$, but one can show that different embeddings result in equivalent definitions in an appropriate sense. Now, I'm trying to understand whether the function $u(z) = e^{1/z} : B(0,1) \rightarrow \mathbb{CP}^1$ is in $W^{1,p}(B(0,1), \mathbb{CP}^1)$ for some $1 \leq p < 2$, where we put the Fubini-Study metric on $\mathbb{CP}^1$ and use the Euclidean metric on the unit disk $B(0,1) \subset \mathbb{C}$. Writing explicitly the embedding of $\mathbb{CP}^1$ into $\mathbb{R}^3$ and choosing one of the coordinates, the question is translated, up to some constants, into whether the following function $$f(x,y) = \frac{2e^{\frac{x}{x^2+y^2}} \cos \left( \frac{y}{x^2+y^2} \right)}{1+e^{\frac{2x}{x^2+y^2}}} = \mathrm{sech}\left(\frac{x}{x^2+y^2}\right)\cos\left(\frac{y}{x^2+y^2}\right)$$ belongs to $W^{1,p}(B)$. I'm somewhat lost with the calculations here. Can this be answered without much technical work?
The function $$f(x,y)=\operatorname{sech} \left(\frac{x}{x^2+y^2}\right)\cos\left(\frac{y}{x^2+y^2}\right)$$ is in $W^{1,p}(B)$ for $1\le p<3/2$. Indeed, the function is ACL (since it's locally Lipschitz on the punctured disk), so the integrability of $|\nabla f|^p$ is the only thing to worry about. It helps to unwind the inversion $1/z$ and work with the coordinates $u=x/(x^2+y^2)$ and $v=y/(x^2+y^2)$. (Chances are it would be easier for you to work this way from the beginning.) Letting $g(u,v)=f(x,y)$ we find that $|\nabla f(x,y)|^p=(u^2+v^2)^p |\nabla g(u,v)|^p$ because conformal transformation simply multiplies the gradient by derivative of the map. Taking the Jacobian into account, we obtain $$\iint_B |\nabla f|^p \,dxdy = \iint_{B^c} |\nabla g|^p (u^2+v^2)^{p-2}\,dudv$$ Both partial derivatives of $g(u,v)=\operatorname{sech} u\cos v$ are majorized by $\operatorname{sech} u$, which decays exponentially. Thus, the convergence of integral over $B^c$ is determined by its convergence over the set $|u|\le 1\le |v|$. (Contribution of the strips $1\le |u|\le 2$, $2\le |u|\le 3$, etc is a negligible exponential tail.) Since $$ \iint_{|u|\le 1\le |v|} |\operatorname{sech} u|^p (u^2+v^2)^{p-2}\,dudv \approx \int_{1}^\infty v^{2(p-2)}\,dv $$ we have convergence when $p<3/2$. And only then, because none of the estimates were too wasteful. Answer with a different function in it, might be of some $\epsilon$ value Every Sobolev function $f$ has an ACL representative. Namely, after redefining $f$ on a set of measure zero, we can ensure that for almost every $y$ the one-variable function $f(\cdot,y)$ is absolutely continuous (on the segment on which it's defined). Of course, the same holds for the other coordinate. The proof is based on Fubini's theorem: since the weak derivative $\partial f/\partial x$ is integrable with respect to planar measure, its restriction to almost every segment is integrable with respect to linear measure. Integrability of one-dimensional weak derivative gives absolute continuity. The ACL property allows us to quickly check that $$f(x,y)=\sec \left(\frac{x}{x^2+y^2}\right)\cos\left(\frac{y}{x^2+y^2}\right)$$ is not a Sobolev function on $B$. Indeed, the secant blows up along the circle $x=\frac{\pi}{2}(x^2+y^2)$. Every horizontal segment passing near the origin will cross the circle. So, no matter which representative of $f$ we take, the restriction to almost every horizontal segment near the origin will be unbounded, hence not continuous, and a fortiori not absolutely continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove by definition of limit $\displaystyle\lim_{n\to\infty} \dfrac{1}{n!} = 0$ I have no idea how to proceed with this. Usually I start with a preliminary computation and solve for $n$ in terms of $\epsilon$ from the definition of the limit: $|\dfrac{1}{n!} - 0| < \epsilon$ Here, I do not see an approach to solve for n.
HINT: $$\frac1{n!}\le\frac1n$$ for $n>0$. Now use the Archimedean property of the reals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Homeomorphism between a cone in a compact space and this compactification Let $X$ be a compact Hausdorff space. Show that the cone in $X$ is homeomorphic to the compactification of $X \times [0,1)$. If $A$ is closed in $X$, show that $X/A$ is homeomorphic to the compactification of $X\setminus A$. I don't know even how to begin to solve this question, it's seems so hard, anyone could help me, please.
Here’s a start on the second problem; you should be able to use some of the ideas to help you with the first, as well. Suppose that $X$ is a compact Hausdorff space, and $A\subseteq X$ is closed. Let $Y=X\setminus A$, and let $Y^*=Y\cup\{p\}$ be the one-point compactification of $Y$. Finally, let $Z=X/A$, let $q:X\to Z$ be the quotient map, and let $a\in Z$ be the point corresponding to $A$ in $X$. You want to prove that $Y^*$ is homeomorphic to $Z$. The first step is figure out what the homeomorphism should be. $Z=q[Y]\cup\{a\}$, where $q[Y]$ is a homeomorphic copy of $Y$, and $Y^*=Y\cup\{p\}$, where $Y$ is a homeomorphic ‘copy’ of $Y$ sitting inside $Y^*$. Thus, each of the spaces $Z$ and $Y^*$ consists of a copy of $Y$ together with one extra point. This suggests that we should try the function $$h:Y^*\to Z:y\mapsto\begin{cases} q(y),&\text{if }y\in Y\\ a,&\text{if }y=p\;, \end{cases}$$ which sends each point of $Y$ to its copy in $Z$ and sends the extra point $p$ in $Y^*$ to the extra point $a$ in $Z$. Now you just have to check that this $h$ really is a homeomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Linear Combinations and solutions Let A be a 5 x 3 matrix. If $$b = a_1 + a_2 = a_2 + a_3$$ then what can you conclude about the number of solutions of the linear system Ax = b? Explain. I'm not sure about this question. All I know is that if b can be written as a combination of column vectors a, then the linear system is consistent. I am not sure what this says about the number of solutions, however.
Since $A$ maps $(1, 0, 0)$ to $a_1$, $(0, 1, 0)$ to $a_2$ and $(0, 0, 1)$ to $a_3$, $(1, 1, 0)$ and $(0, 1, 1)$ are both solutions to $Ax = b$. Linear transformations give only one, zero or infinite solutions, thus there are infinite solutions. Alternatively, $m(1, 1, 0) + n(0, 1, 1)$ is a solution for any $m + n = 1$.
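If it helps to see this concretely, here is a small numerical sketch; the particular matrix below is just an illustration, chosen so that its first and third columns agree, which makes $b=a_1+a_2=a_2+a_3$ hold.

```python
import numpy as np

# Illustrative 5x3 matrix whose first and third columns coincide,
# so that b = a1 + a2 = a2 + a3 as in the hypothesis.
c1 = np.array([1., 2., 3., 4., 5.])
c2 = np.array([0., 1., 0., 1., 0.])
A = np.column_stack([c1, c2, c1])
b = c1 + c2

# Every point on the line through (1,1,0) and (0,1,1) solves Ax = b.
for m in (0.0, 0.3, 1.0, -2.0):
    x = m * np.array([1., 1., 0.]) + (1 - m) * np.array([0., 1., 1.])
    assert np.allclose(A @ x, b)
print("infinitely many solutions")
```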
{ "language": "en", "url": "https://math.stackexchange.com/questions/204768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Estimate for the product of primes less than n In this paper Erdős shows a shorter proof for one of his old results stating that $$ s(n) = \prod_{p < n} p < 4^n$$ where the product is taken over all primes less than $n$. He also remarks that using the prime number theorem one can show $$ s(n)^{\frac1n} \stackrel{n\to\infty}{\longrightarrow} e.$$ Can someone here prove this result? It does not seem straightforward to me. One (crude) attempt I tried was to consider the product $$\prod_{i=2}^n \frac{i}{\log{i}} = n!\prod_{i=2}^n \frac{1}{\log{i}}$$ which I do not know how to estimate, not to mention that I would then have to argue that it is an asymptotic estimate for $s(n).$ Is there a simple way to show the result about $s(n)$ using the prime number theorem?
The summation formula at the top of Answer 1 isn't good necessarily percentage-wise. For instance, if n = 14, the formula gives the estimate for the sum at about 50.6, but the sum of all of the primes from 2 to 13, inclusive, is actually 41.
{ "language": "en", "url": "https://math.stackexchange.com/questions/204902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
prime number theorem and prime counting function $\pi(x)$ is the prime counting function (no. of prime within x) For the interval $(x, x + \delta x]$, $\delta > 0$, what is the smallest integer $x_{0}$ such that for any $x >= x_{0}$, $\pi(x + \delta x) - \pi(x) > 0$ is always true? For example, Bertrand's Postulate tells us that when $\delta = 1$, the smallest integer to make the above statement true is $x_{0} = 2$. The following result might help: one paper by Rosser and Schoenfeld gives out two inequalities about $\pi(x)$: $\frac{x}{\log{x}}(1 + \frac{1}{2\log{x}}) < \pi(x)$, for $x>= 59$, and $\pi(x) < \frac{x}{\log{x}}(1+ \frac{3}{2\log{x}})$, for $x>1$
From Proposition 6.8 on pdf page 8 of DUSART, you may take $$ x_0 = \max \left( 396738, \; e^{\left( \frac{1}{5 \sqrt \delta} \right)} \right). $$ This is not the optimal value of your $x_0 = x_0(\delta)$ but it works. Note: Dusart's adviser was Guy Robin, whose adviser was Jean-Louis Nicolas. It all fits. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
{ "language": "en", "url": "https://math.stackexchange.com/questions/204970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving differential Equations by substitution An example in my Differential Equations textbook shows how to solve the homogenous differential equation $$ (x^2+y^2)\,dx +(x^2-xy)\,dy=0 $$ by substituting $y$ with $ux$, which I am trying to understand. The book explains that the reason we do this is so that $dy$ will equal $u\,dx + x\,du$. The answer says that after substitution, the equation becomes $$(x-ux)\,dx + x(u\,dx + x\,du) = 0 $$ and then $$ dx + x\,du = 0$$ My question is, how did it get to $dx+x\,du =0$? Is it a typo or am I missing something? I think it should be $ x\,dx + x\,du$ and then $dx + du$ and then $x + u$ and eventually $x + y/x$. However, the textbook says the answer is $x\ln(x)+y=cx$. What am I missing?
$$(x-ux)dx+x(udx+xdu)=0$$ Expanding both sides gives $$xdx-uxdx+xudx+x^2du=0$$ $$xdx+x^2du=0$$ Assuming $x$ is non-zero: $$dx+xdu=0$$ You forgot the extra factor of $x$ attached to the $du$.
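To finish and recover the textbook answer: from $dx + x\,du = 0$, divide by $x$ to get $\frac{dx}{x} + du = 0$, so $\ln|x| + u = c$. Substituting back $u = \frac{y}{x}$ gives $\ln|x| + \frac{y}{x} = c$, and multiplying through by $x$ yields $x\ln|x| + y = cx$, which is the book's answer.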
{ "language": "en", "url": "https://math.stackexchange.com/questions/205033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the range of rational functions I have a problem that I cannot figure out how to do. The problem is: Suppose $s(x)=\frac{x+2}{x^2+5}$. What is the range of $s$? I know that the range is equivalent to the domain of $s^{-1}(x)$ but that is only true for one-to-one functions. I have tried to find the inverse of function s but I got stuck trying to isolate y. Here is what I have done so far: $y=\frac{x+2}{x^2+5}$ $x=\frac{y+2}{y^2+5}$ $x(y^2+5)=y+2$ $xy^2+5x=y+2$ $xy^2-y=2-5x$ $y(xy-1)=2-5x$ This is the step I got stuck on, usually I would just divide by the parenthesis to isolate y but since y is squared, I cannot do that. Is this the right approach to finding the range of the function? If not how would I approach this problem?
Going back to your original problem statement, you have to understand that the range of a function is the set of values that the function takes on for arguments in the function's domain. This avoids worrying about functions that are not 1-1. (I don't see the need for the "domain of $s^{-1}$" statement.) I assume that the domain in your case is the real numbers. Once you understand this, then you can apply the variety of analyses that show you how to get the range for your particular function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Limit Supremum and Infimum. Struggling the concept I am struggling to understand what is the limit supremum/infimum. I've been told that it is not the same thing as "the limit of a supremum of a set" (which makes sense since the supremum/infimum is usually a number). I've consulted with two Analysis books, but none of them seem to be able to convey it what they are trying to say. I got an example in my notebook that may clarify my confusion Ex. Consider $\left \{-200,100,1,2,-1,2,-1,1,2,-1 \right \}$ Then let $v_k = \sup \left \{a_n : n \geq k \right \}$ and $\limsup_{n\to\infty} a_n= \lim_{k\to\infty} v_k=2$ and $\liminf_{n\to\infty} a_n=-200$ Can someone explain to me the reasoning (without omitting any details) for the answers? I think I got a feeling for the liminf, but not limsup
I'd like to add to Christopher A. Wong's answer that the $\liminf$ is the smallest accumulation point while the $\limsup$ is the largest one. Moreover, if you have an understanding of the $\liminf$ already, then consider $\limsup a_n = -\liminf (-a_n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
How to solve this recurrence relation? $f_n = 3f_{n-1} + 12(-1)^n$ How to solve this particular recurrence relation ? $$f_n = 3f_{n-1} + 12(-1)^n,\quad f_1 = 0$$ such that $f_2 = 12, f_3 = 24$ and so on. I tried out a lot but due to $(-1)^n$ I am not able to solve this recurrence? Any help will be highly appreciated. Please this is no homework. I came across this while solving a problem on SPOJ
Let's write up a few more terms of the sequence to maybe guess a pattern. It goes $$0, 12, 24, 84, 240, 732 \cdots $$ Also keep in mind we expect these numbers to look something like powers of 3, because that's what the solution would be if we didn't have that $12(-1)^n$ term in the recurrence. So then with this hint we can see the terms are $$ 3^1-3, 3^2+3, 3^3-3, 3^4+3, 3^5-3, 3^6+3 \cdots $$ so we can guess the solution is $f_n = 3^n + 3(-1)^n .$ Now let us prove out suspicions with mathematical induction. The base case $n=1$ is true. Assume $f_n = 3^n + 3(-1)^n$ is true for some $n.$ Then $f_{n+1} = 3f_n - 12(-1)^n = 3( 3^n + 3(-1)^n ) - 12(-1)^n = 3^{n+1} -3(-1)^n = 3^{n+1} + 3(-1)^{n+1}$ so our formula is true for $n+1$ as well, completing the induction step.
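Since this came up while solving a programming-judge problem, a quick script confirming the closed form against the recurrence may be reassuring (just a sketch, nothing specific to the SPOJ problem itself):

```python
def f_recursive(n):
    f = 0                        # f_1 = 0
    for k in range(2, n + 1):
        f = 3 * f + 12 * (-1) ** k
    return f

for n in range(1, 20):
    assert f_recursive(n) == 3 ** n + 3 * (-1) ** n
print("closed form f_n = 3^n + 3(-1)^n matches the recurrence")
```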
{ "language": "en", "url": "https://math.stackexchange.com/questions/205372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 0 }
Integrating $\int \dfrac{2x \ln x}{\sqrt{x^2-9}}\mathrm dx$ by parts. We have to integrate $$\int\frac{2x \ln x}{\sqrt{(x^2-9)}} \mathrm dx$$ Is it right to use Integration by Parts? I tried to substitute it with $$u = \log x,\: \mathrm du = \frac 1x \mathrm dx;$$ $$v = x^2 - 9,\: \mathrm dv = 2x \mathrm dx.$$ But then I'm stuck with substituting it within the original equation because from $\mathrm du = \dfrac 1x \mathrm dx,$ and $\mathrm dv = 2x \mathrm dx,$ there will be two $\mathrm dx$'s to substitute and from $\mathrm du = \dfrac 1x \mathrm dx,$ the $x$ will go to the denominator and I don't know what to do any more.
You need to make the integral into only two parts, $u$ and $dv.$ So you should let $u= \ln x$ and $dv = \dfrac{2x}{\sqrt{x^2-9}} dx.$ Then using those choices, work out what $du$ and $v$ must be, and put it all in the formula $\displaystyle \int u dv = uv - \int v du.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/205426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $E$ is compact, then $m(E) = \lim\limits_{n \to \infty} m(\mathcal{O}_n)$ Suppose $E\subset \mathbb{R}^d$ is a given set, $m$ is the Lebesgue measure, and $\mathcal{O}_n$ is the open set: $$\mathcal{O}_n = \{x : d(x, E) < 1/n\}.$$ The goal is for me to show that if $E$ is compact, then $m(E) = \lim\limits_{n \to \infty} m(\mathcal{O}_n)$. I am having trouble not only visualizing these sets, but also intuitively realizing what this means. In other words, I have no idea how to begin this proof.
With the assumptions from my comment above: The role of compactness is to guarantee that $m(\mathcal O_n)<\infty$. Then, as $\mathcal O_1\supset\mathcal O_2\supset\cdots$ and $E$ is closed, we get that $E=\bigcap_n\mathcal O_n$, and the result follows by continuity of the measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
difficulty working out equation $\int_{-a}^{a}2(\sqrt{a^2-x^2})dx $ =$\left.2[\frac{1}{2}\sqrt{a^2-x^2}+a^2\arcsin(\frac{x}{a})]\right|_{-a}^{a}$ I encountered the below formula in my text. I know the author is using integration by substitution and double angle formula but for some reason, every working on paper that i did is different from the answer below: $\int_{-a}^{a}2(\sqrt{a^2-x^2})dx $ =$\left.2[\frac{1}{2}(\sqrt{a^2-x^2}+a^2\arcsin(\frac{x}{a}))]\right|_{-a}^{a}$ Can anyone help me to expand the above equation. I can't wrap my head on how above answer is reproduced. Thanks.
This can be done with a standard $\sin$ substitution: $(1)\ x=a\sin(\theta)$ followed by a change of variables: $(2)\ \phi=2\theta$. $$ \begin{align} \int_{-a}^a2\sqrt{a^2-x^2}\,\mathrm{d}x &=\int_{-\pi/2}^{\pi/2}2a\cos(\theta)\,\mathrm{d}a\sin(\theta)\tag{1}\\ &=2a^2\int_{-\pi/2}^{\pi/2}\cos^2(\theta)\,\mathrm{d}\theta\\ &=2a^2\int_{-\pi/2}^{\pi/2}\frac{1+\cos(2\theta)}{2}\,\mathrm{d}\theta\\ &=2a^2\int_{-\pi}^\pi\frac{1+\cos(\phi)}{4}\,\mathrm{d}\phi\tag{2}\\ &=2a^2\left[\frac{\phi+\sin(\phi)}{4}\right]_{\phi=-\pi}^{\phi=\pi}\\ &=\pi a^2 \end{align} $$ However, the answer you show in the question looks like the answer that comes from an integration by parts: $u=\sqrt{a^2-x^2}$ and $\mathrm{d}v=\mathrm{d}x$ so that $v=x$ and $\mathrm{d}u=-\frac{x\,\mathrm{d}x}{\sqrt{a^2-x^2}}$ $$ \begin{align} \int2\sqrt{a^2-x^2}\,\mathrm{d}x &=2x\sqrt{a^2-x^2}+\int\frac{2x^2}{\sqrt{a^2-x^2}}\,\mathrm{d}x\\ &=2x\sqrt{a^2-x^2}-\color{#C00000}{\int\frac{2(a^2-x^2)}{\sqrt{a^2-x^2}}\,\mathrm{d}x}+\int\frac{2a^2}{\sqrt{a^2-x^2}}\,\mathrm{d}x\tag{3} \end{align} $$ Adding the integral in red to both sides of $(3)$ and dividing by $2$ yields $$ \begin{align} \int2\sqrt{a^2-x^2}\,\mathrm{d}x &=\frac12\left[2x\sqrt{a^2-x^2}+\int\frac{2a^2}{\sqrt{a^2-x^2}}\,\mathrm{d}x\right]\\ &=\frac12\left[2x\sqrt{a^2-x^2}+2a^2\,\sin^{-1}(x/a)\right]+C\tag{4} \end{align} $$ Evaluating $(4)$ at the limits of integration yields $$ \begin{align} \int_{-a}^a2\sqrt{a^2-x^2}\,\mathrm{d}x &=\left[x\sqrt{a^2-x^2}+a^2\,\sin^{-1}(x/a)\right]_{-a}^a\\ &=\pi a^2 \end{align} $$ Not quite what you got, but of the same form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Compound Angle formula using Matrices show that $$ \cos(A+B)=\cos(A)\cos(B)-\sin(A)\sin(B)\\ \text{ if } AB=BA $$ the question gives a hint to use: $$ \sin(A)=\frac{1}{2i}(e^{iA}-e^{-iA})\\ \cos(A)=\frac{1}{2}(e^{iA}+e^{-iA}) $$
Using the hint, and we'd like to apply the identity $e^{A+B}=e^Ae^B$ to get $$\cos(A+B)=1/2(e^{i(A+B)}+e^{-i(A+B)})=1/2(e^{iA}e^{iB}+e^{-iA}e^{-iB})=$$ $$1/4[(e^{iA}e^{iB}+e^{-iA}e^{-iB}+e^{iA}e^{-iB}+e^{-iA}e^{iB})+(e^{iA}e^{iB}+e^{-iA}e^{-iB}-e^{iA}e^{-iB}-e^{-iA}e^{iB})]=$$ $$=\cos(A)\cos(B)-\sin(A)\sin(B)$$ So all we've got to prove is that identity when $AB=BA$. This is done as follows: Observe that in the power series expansion $$I+A+B+1/2!(A+B)^2+1/3!(A+B)^3+...$$ of $e^{A+B}$, if we expand out the powers of $A+B$ there are only a finite number of terms of any fixed degree $n$, where $n$ is the sum of the power of $A$ and the power of $B$. Similarly, in the product of power series $$e^Ae^B=(I+A+(1/2!)A^2+...)(I+B+(1/2!)B^2+...)$$ we can't get any more terms of degree $n$ once we're multiplying terms of degree $n+1$ or higher from either multiplicand. So verifying $e^{A+B}=e^Ae^B$ reduces to finite verifications that the part of each power series of degree $n$ equals that part of the other. For instance, the degree 0 parts are both $I$, since in the product the only two matrices I can multiply to get $I$ are $I$ from the left and $I$ from the right. Let's look at a higher degree, say, 3. We have $$(1/3!)(A+B)^3=(1/3!)(A^3+AB^2+BAB+B^2A+A^2B+ABA+BA^2+B^3)$$ while the degree-3 part of $e^Ae^B$ is $$(1/3!)IB^3+(1/2!)AB^2+(1/2!)A^2B+(1/3!)A^3I$$ Now we see why the hypothesis $AB=BA$ matters: in general these two expressions aren't equal at all! But using commutation we can collect, for instance, $AB^2,BAB,$ and $B^2A$ all together, and we see that in fact the two power series do agree on their degree-3 parts. To carry out the proof for general $n$ is essentially just to point out that since $A$ and $B$ commute, the standard binomial theorem applies for $(A+B)^n$, and that in the expansion of $e^Ae^B$ we get the binomial coefficients.
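If you want a quick sanity check of the identity independent of the proof, here is a small numerical sketch; the two matrices are built as polynomials in the same matrix, so they commute by construction.

```python
import numpy as np
from scipy.linalg import cosm, sinm

M = np.array([[0., 1.], [2., 3.]])
A = 2 * M + np.eye(2)      # polynomials in M ...
B = M @ M - M              # ... hence A and B commute

assert np.allclose(A @ B, B @ A)
lhs = cosm(A + B)
rhs = cosm(A) @ cosm(B) - sinm(A) @ sinm(B)
assert np.allclose(lhs, rhs)
print("cos(A+B) = cos(A)cos(B) - sin(A)sin(B) holds numerically")
```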
{ "language": "en", "url": "https://math.stackexchange.com/questions/205620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof of Cauchy Riemann Equations in Polar Coordinates How would one go about showing the polar version of the Cauchy Riemann Equations are sufficient to get differentiability of a complex valued function which has continuous partial derivatives? I haven't found any proof of this online. One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy. What are some other ways to do it?
With only partial differentiation and algebra. From what we already know of complex functions: $f(z) = u(r,\theta) + iv(r,\theta)$, $z = re^{i\theta} = r\cos\theta + ir\sin\theta = x + iy$, $x(r,\theta)= r\cos\theta,\quad y(r,\theta) = r\sin\theta$. Apply partial derivatives and the chain rule: $\frac{\partial f}{\partial r} = \frac{df}{dz}\frac{\partial z}{\partial r} = \frac{df}{dz}e^{i\theta} = \frac{df}{dz}\frac1r z$, $\frac{\partial f}{\partial \theta} = \frac{df}{dz}\frac{\partial z}{\partial \theta} = \frac{df}{dz}ire^{i\theta} = \frac{df}{dz}iz$, which are also equal to: $\frac{\partial f}{\partial r} = \frac{\partial u}{\partial r} + i\frac{\partial v}{\partial r}$, $\frac{\partial f}{\partial \theta} = \frac{\partial u}{\partial \theta} + i\frac{\partial v}{\partial \theta}$. Then set the two expressions equal via $r\frac{\partial f}{\partial r} = \frac1i\frac{\partial f}{\partial \theta}$: $r\left(\frac{\partial u}{\partial r} + i\frac{\partial v}{\partial r}\right) = \frac1i\left(\frac{\partial u}{\partial \theta} + i\frac{\partial v}{\partial \theta}\right) = \frac{\partial v}{\partial \theta} - i\frac{\partial u}{\partial \theta},$ and separate the real and imaginary parts: $\frac{\partial u}{\partial r} = \frac1r\frac{\partial v}{\partial \theta}, \quad \frac{\partial v}{\partial r} = -\frac1r\frac{\partial u}{\partial \theta}$ yey :3
{ "language": "en", "url": "https://math.stackexchange.com/questions/205671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 5 }
Continuous random variable, density functions. Let X be a continuous random variable with density fx(s) = k(s$^{-6}$) for s $\ge$ 1 and fx(s) = 0 for s<1. 1.) Determine the value of the constant k. 2.) Calculate E(X), VAR(X) 3.) Calculate P(X $\in$ [-2,-0.75]) I know I must integrate with respect to s and for E(X) = (X$^2$-E(X)) and VAR(X) = (X-E(X))$^2$, but how can I format it correctly to find E(X), VAR(X), k, and P(X $\in$ [-2,-0.75]) ?
You want $$\int_1^\infty k s^{-6}\,ds=1.$$ Integrate. You should get $\frac{k}{5}$, so $k=5$. For $E(X)$, use the usual formula $$E(X)=\int_{-\infty}^\infty sf_X(s)\,ds.$$ Our density function is $0$ except on $[1,\infty)$, so $$E(X)=\int_1^\infty (s)(5s^{-6})\,ds.$$ The integrand is $5s^{-5}$. Now do the integration. For the variance, you can use $E((X-\mu)^2)$, where $\mu=E(X)$. However, as usual it will be easier if you use the fact that the variance is $E(X^2)-(E(X))^2$. So the only thing not yet known is $E(X^2)$. But $$E(X^2)=\int_1^\infty (s^2)(5s^{-6})\,ds.$$ Simplify the integrand, and integrate. For the probability, if there is no typo, our interval is an interval of negative numbers. On the negatives, the density is $0$, so the integral of $f_X(s)$ over this interval is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine if the set is a subspace of $\Bbb R^{2\times2}$ $$ S_1 = \{B \in \Bbb R^{2\times2} \mid BA = 0\} $$ I know that to check if a set is a subspace of a vector space, I need to determine if it is closed under addition and subtraction. I honestly have no idea how to do this for this set.
You have to check that if $B,C$ in $S_1$, and $\alpha\in\Bbb R$, then $B+C\in S_1$ and $\alpha B\in S_1$. I’ll do the second one; the basic ideas are the same in both cases. To show that $\alpha B\in S_1$, you must show that it satisfies the condition for membership in $S_1$: you must show that $(\alpha B)A=0$. By the properties of matrix arithmetic you know that $(\alpha B)A=\alpha(BA)$. By hypothesis $B\in S_1$, so $BA=0$. Thus, $(\alpha B)A=\alpha(BA)=\alpha 0=0$. You'll use the same basic approach to show that $B+C\in S_1$, though the specific properties of matrices that you’ll use are different. Oh, I almost forgot: you also have to check that $S_1\ne\varnothing$, which you can do by finding just one matrix that’s definitely in $S_1$. Since you don’t know just what matrix $A$ is, that might sound hard, but it really isn’t: there’s one $2\times 2$ matrix whose product with $A$ is certainly $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Theorem about orthogonal system in inner product space. It is known that "If $\{x_n\}$ is a sequence in a real Hilbert space $H$ satisfying $$ \langle x_n, x_m\rangle =0 \quad\forall n\ne m, $$ then $\displaystyle\sum_{n=1}^{\infty}x_n$ is convergent if and only if $\displaystyle\sum_{n=1}^{\infty}\|x_n\|^2$ is convergent". I would like to know if $H$ is an inner product space (pre-Hilbert space) the above statement is still true?
No. Take the subspace $l^2_{\text{fin}}(\mathbb R)$ of $l^2(\mathbb R)$ of all real-valued sequences with only finitely many nonzero entries. $l^2_{\text{fin}}(\mathbb R)$ is a pre-Hilbert space with the induced inner product and the sequences $\delta_{i,j}$ whose only nonzero entry is $\delta_{i,i} = 1$ form an orthonormal system $\{ \delta_{i,j}\} \in l^2_{\text{fin}}(\mathbb R)$ but obviously $\sum_i 2^{-i} \delta_{i,j} \not \in l^2_{\text{fin}}(\mathbb R)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/205891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do you find the center of a circle with a pencil and a book? Given a circle on a paper, and a pencil and a book. Can you find the center of the circle with the pencil and the book?
Draw a right triangle inside the circle, making all vertices touch the circle. Transform it into a rectangle. Draw a line joining the opposite corners and you will have obtained the center of the circle Edit: The idea is very simple: 1. Put a corner of the book touching the circle from the inside in any place. 2. Use the edges of the book for drawing the catheti of the right triangle until they intersect the circle. 3. Draw the hypotenuse using those intersections. 4. Put a corner of the book in any of those intersections overlapping one edge with the cathetus. 5. Use the other edge of the book to draw a line until it intersects the circle. 6. Draw a line joining this new intersection with the intersection opposed to the hypotenuse (right angle point). 7. The point in which this line crosses (intersects) the hypotenuse is the center of the circle. Edit: In fact, any rectangle inside the circle with every corner touching the circle will do the trick
{ "language": "en", "url": "https://math.stackexchange.com/questions/205953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 17, "answer_id": 2 }
Difficult limit involving $n$-th root and $n$-th exponentiation I've encountered this limit on a book without solution and after two days of thinking still I am not able to come up with an answer. $$\lim_{n\to+\infty}\left(\alpha k^{\frac 1n}-1\right)^n,\: k\in\mathbb R_{>0},\: \alpha\in\mathbb N_{>0}.$$ A full complete and detailed soultion is welcomed. Also hints of course are accepted. I've tried binomial Formula and to simplify the expression but all has been in vain.
First of all, you should prove (if you do not already know) that if $k>0$ then $\lim_{n\to\infty}k^{1/n}=1$. Then you have three cases (in the following I assume that $\alpha>0$ is not necessarily an integer): * *$0<\alpha<2$: then $|\alpha\,k^{1/n}-1|\to|\alpha-1|<1$ and $\lim_{n\to\infty}\bigl(\alpha\,k^{1/n}-1\bigr)^n=0$. *$\alpha>2$: then $\alpha\,k^{1/n}-1\to\alpha-1>1$ and $\lim_{n\to\infty}\bigl(\alpha\,k^{1/n}-1\bigr)^n=\infty$. *$\alpha=2$: we have a limit of the form $1^\infty$. The case $k=1$ is trivial. I will do the case $k>1$ and leave for you the case $0<k<1$. Taking logarithms we see that it is enough to compute $$ \lim_{n\to\infty}n\ln\bigl(2\,k^{1/n}-1\bigr). $$ We have $$\begin{align*} n\ln\bigl(2\,k^{1/n}-1\bigr)&=n\ln\bigl(1+2(k^{1/n}-1)\bigr)\\ &\sim 2\,n\bigl(k^{1/n}-1\bigr)\\ &=2\,n\bigl(e^{\ln k/n}-1\bigr)\\ &\sim 2\ln k \end{align*}$$ as $n\to\infty$. Thus $$ \lim_{n\to\infty}\bigl(2\,k^{1/n}-1\bigr)^n=k^2\quad\forall k>1. $$
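A quick numerical illustration of the third case (only a sanity check, not part of the proof):

```python
k = 3.0
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, (2 * k ** (1 / n) - 1) ** n)
# the printed values approach k**2 = 9
```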
{ "language": "en", "url": "https://math.stackexchange.com/questions/205992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determining a bound on the relative error of a linear system Suppose that the linear system $A\mathbf{x}=\mathbf{b}$ is perturbed so that $(A+\delta A)\mathbf{x}=\mathbf{b}$. We can calculate the relative error $\frac{\|\mathbf{x-\bar{x}\|}}{\|\mathbf{x}\|}$ if we know the true value of $\mathbf{x}$ and have an estimate for $\mathbf{\bar{x}}$, but if we want to put a general bound on the relative error in the system in terms of the system's condition number $K(A) = \|A^{-1}\|\cdot\|A\|$ and $\|\delta A\|/\|A\|$, how can this be done? I have been trying to rearrange what is known in order to gain an expression in terms of this, and have so far obtained, with working: $A\mathbf{x}=(A+\delta A)\mathbf{\bar{x}}$ equating the given formulae $(A+\delta A)^{-1}A\mathbf{x}=\mathbf{\bar{x}}$ multiplying both sides by inverse $(A+\delta A)^{-1}(A^{-1})^{-1}\mathbf{x}=\mathbf{\bar{x}}$ rewriting $A = (A^{-1})^{-1}$ $((A^{-1})(A+\delta A))^{-1}\mathbf{x}=\mathbf{\bar{x}}$ rearranging the inversion $(I + A^{-1}\delta A)^{-1}\mathbf{x}=\mathbf{\bar{x}}$ multiplying out the brackets At this point I become a bit dubious about the next step. I know that at some point the left hand side will need to look like ${\|\mathbf{x-\bar{x}\|}}$ so as to rearrange for the relative error, but multiplying the last line above by $-1$ then adding $\mathbf{x}$ to both sides, then taking the norm to obtain: $\|\mathbf{x}-(I + A^{-1}\delta A)^{-1}\mathbf{x}\|=\|\mathbf{\mathbf{x}-\bar{x}}\|$ seems a bit too convenient and following this through with pen and paper doesn't seem to lead anywhere. Is this the correct approach, and if not, is there a chance of some guidance towards the correct manner?
To answer your question in an appropriate way, we need to resort to the following theorem: Theorem 1 (Neumann Series) Let $\|\cdot\|$ be a submultiplicative norm such that $\\\|I\|=1$ and $\|A\|<1$. Then $(I-A)^{-1}$ exists, $$(I-A)^{-1}=\sum_{j=0}^\infty{A^{j}}$$ and $$\frac{1}{1+\|A\|}\leq\|(I-A)^{-1}\|\leq\frac{1}{1-\|A\|}.$$ Let $\tilde A$ be a perturbed matrix with an absolute error $\Delta A=A-\tilde A$. If $\tilde x$ is a solution of the system $\tilde A\tilde x=b$ and $x$ is such that $Ax=b$, then: $$(A-\Delta A)(x-\Delta x)=b \\ \Rightarrow(A-\Delta A)\Delta x=-\Delta Ax$$ where $\Delta x=x-\tilde x.$ Assuming that $A$ is nonsingular, we can multiply both sides of the last equality by $A^{-1}$, obtaining: $$(I-A^{-1}\Delta A)\Delta x=-A^{-1}\Delta Ax.$$ Now, suppose that $\|A^{-1}\Delta A\|\leq1$ and let $\|\cdot\|$ be a submultiplicative norm. By Theorem 1, we know that $(I-A^{-1}\Delta A)^{-1}$ exists, so we can multiply both sides of the last equality by it; this gives us: $$\Delta x=(I-A^{-1}\Delta A)^{-1}(-A^{-1}\Delta Ax)$$ Taking norms on both sides, we get: $$\|\Delta x\|\leq\|(I-A^{-1}\Delta A)^{-1}\|\|A^{-1}\|\|\Delta A\|\|x\|$$ Dividing both sides of the inequality by $\|x\|$ and noting that, by Theorem 1, $$\|(I-A^{-1}\Delta A)^{-1}\|\leq\frac{1}{1-\|A^{-1}\Delta A\|}\leq\frac{1}{1-\|A^{-1}\|\|\Delta A\|}$$ we arrive to: $$\frac{\|\Delta x\|}{\|x\|}\leq\frac{\|A^{-1}\|}{1-\|A^{-1}\|\|\Delta A\|}\|\Delta A\|$$ Manipulating this expression algebraically, we finally get: $$\|\delta x\|\leq\frac{\|A^{-1}\|\|A\|}{1-\|A^{-1}\|\|A\|\frac{\|\Delta A\|}{\|A\|}}\frac{\|\Delta A\|}{\|A\|}=\frac{\kappa(A)}{1-\kappa(A)\|\delta A\|}\|\delta A\|$$ where $\|\delta x\|=\frac{\|\Delta x\|}{\|x\|},\|\delta A\|=\frac{\|\Delta A\|}{\|A\|}$ and $\kappa(A)=\|A^{-1}\|\|A\|.$ PS: Please note that my notations for the absolute error associated with a variable $(\Delta \alpha=\alpha-\tilde\alpha)$ and for its relative error $(\delta\alpha=\frac{\Delta\alpha}{\alpha})$ aren't the same as yours.
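To see the final bound in action, here is a small numerical sketch; the matrix, the perturbation size and the use of the spectral norm are illustrative choices, and $\tilde x$ solves $(A-\Delta A)\tilde x=b$, matching the convention above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x = np.linalg.solve(A, b)

dA = 1e-6 * rng.standard_normal((4, 4))       # the perturbation Delta A
x_tilde = np.linalg.solve(A - dA, b)          # tilde A = A - Delta A

rel_x = np.linalg.norm(x - x_tilde) / np.linalg.norm(x)
rel_A = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)       # spectral norm, so ||I|| = 1
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

bound = kappa * rel_A / (1 - kappa * rel_A)
print(rel_x, bound)      # rel_x should not exceed bound
assert rel_x <= bound
```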
{ "language": "en", "url": "https://math.stackexchange.com/questions/206075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
understanding the language of a theorem The theorem is stated as follows in the book: Let $\phi:G\rightarrow G'$ be a group homomorphism, and let $H=Ker(\phi)$. Let $a\in G$. Then the set $\phi^{-1}[\{\phi(a)\}] = \{x\in G | \phi(x)=\phi(a)\}$ is the left coset $aH$ of $H$, and is also the right coset $Ha$ of $H$. Consequently, the two partitions of $G$ into left cosets and into right cosets of $H$ are the same. I'm trying to parse this statement and it's not clear to me what claim the author is trying to make at the very end when he says "Consequently, the two partitions of $G$ into left cosets and into right cosets of $H$ are the same." I'm under the impression that, in general, the left and right cosets are not always the same. Under what condition are they the same? Under the condition that you have a homomorphism? Let me mention that at this point, we're not supposed to know what a normal subgroup is. The author introduces the idea of a normal subgroup 2 pages later.
If $N \subseteq G$ is a subgroup such that $gNg^{-1} = N$ for all $g \in G$, then $N$ is called a normal subgroup. See Wiki on that. This is equivalent to saying that the left and right cosets $gN$ and $Ng$ are the same for every $g \in G$. So the statement actually just says that every kernel of a homomorphism is a normal subgroup. This is easy to see since for all $h \in H = \mathrm{ker} (\phi)$, $a \in G$ you get: $$\phi (aha^{-1}) = \phi (a) \phi(h) \phi (a^{-1}) = \phi (a) \phi (a^{-1}) = 1,$$ so $aha^{-1} \in \mathrm{ker}(\phi) = H$. Therefore $aHa^{-1} \subseteq H$; multiplying by $a^{-1}$ from the left and $a$ from the right gives the other inclusion for its inverse. Since $a$ was arbitrary, equality follows. Maybe you were confused by the definition of coset: I would read $aH$ as $\{ ah \in G;\; h \in H\}$, the set of all elements of the form $ah$ with $h \in H$. Similarly for $aHa^{-1}$. That both definitions given are equivalent is checked as follows: $$x \in aH \Leftrightarrow a^{-1}x \in H \Leftrightarrow \phi(a^{-1}x) = 1 \Leftrightarrow \phi(a) = \phi(x).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/206138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Can all Hermitian matrices $H$ be written as $H=A^* A$? All the matrices below are square, complex matrices. 1) Is it true that, for every Hermitian matrix $H$, there exists $A$, that $A^*A=H$? 2) For any $A$, does $A^*A$ always have a square root? If it's not, is there any simple presumption of $A$ that makes $A^*A$ always have a square root?
The spectral theorem implies that any Hermitian matrix $H$ is diagonizable using an unitary base change $U$, so $U^* H U = D$, where $D$ is diagonal with real entries. For $D$ proposition 2) is true, since $\mathbb{C}$ is closed under squareroots, so $D = R^2$ for some diagonal $R$. Therefore, $H = U R^2 U^* = (U R U^*)^2$. For any matrix $A$, the matrix $AA^*$ is Hermitian. However, if you go to dimension $1$, $H = -1$ is Hermitian, but for any $A \in \mathbb{C}$ you have $AA^* = \lvert A \rvert ^2 > 0$, so 1) is false. As pointed out by others, it is true for positive-semi-definite matrices $H$, let me add a construction: You can run through the argument above, noting $R^* = R$, and take $A$ to be $UR$. So $AA^* = UR (UR)^* = UR^2 U^* = H$.
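A small numerical sketch of the positive-semidefinite case, following the construction above ($U^*HU=D$, $R$ a square root of $D$, and $A=UR$); the matrices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = B.conj().T @ B                        # a positive semi-definite Hermitian matrix

w, U = np.linalg.eigh(H)                  # U* H U = diag(w), with w >= 0
R = np.diag(np.sqrt(w))
A = U @ R                                 # as in the answer: take A = U R

assert np.allclose(A @ A.conj().T, H)     # A A* = U R^2 U* = H
S = U @ R @ U.conj().T                    # the Hermitian square root (U R U*)
assert np.allclose(S @ S, H)
```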
{ "language": "en", "url": "https://math.stackexchange.com/questions/206180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Find $ \int x^{-n}(1-x)^{-1}\,dx. $ Let $$ f(x) = \frac{1}{x^n(1-x)} $$ where $n\in \mathbb{N}$ and $x$ is not 0 or 1. In a previous question, i determined that the partial fraction decomposition of $f(x)$ is $$ f(x) = \sum_{j=1}^{n}\frac{1}{x^j} + \frac{1}{(1-x)}$$ Integrating this, when $n = 1$, I have: $$ \int f(x)\, dx = \ln|x|-\ln|1-x| + C = \ln\left|\frac{x}{1-x}\right|+C,$$ for $x \neq 1,0$. If $n >1$, I have (after some trial and error), $$ \int f(x)\, dx = \ln|x|+ \sum_{j=2}^{n}\frac{x^{-j+1}}{-(j-1)}-\ln|1-x| + C $$ Have I missed anything here? Thanks for the feedback!
For each $j\neq 1$, if $$g_j(x)=x^{-j}$$ then $$\int g_j(x) dx =\frac{x^{1-j}}{1-j}+C$$ Thus, given $$ f(x) = \sum_{j=1}^{n}\frac{1}{x^j} + \frac{1}{(1-x)}$$ we have $$ \int f(x)dx = \int\frac1 xdx+ \sum_{j=2}^{n}\int \frac{dx}{x^j} +\int \frac{dx}{(1-x)}$$ whence $$ \int f(x)dx = \log |x|+ \sum_{j=2}^{n}\frac{x^{1-j}}{1-j} -\log|1-x|+C$$ You're right.
{ "language": "en", "url": "https://math.stackexchange.com/questions/206244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
proving that a function is $\alpha$-Holder Prove that the function $ f(x)=\sqrt{x}$ , is $\alpha$-Holder, with $0<\alpha\le \frac{1}{2} $ , on the set $[0,\infty)$ i.e there exist a constant $K$, such that $|\sqrt{x}-\sqrt{y}| \leqslant K|x-y|^{\alpha} $ for every $x,y \in [0,\infty)$.
It's not true for $\alpha < \frac{1}{2}$. Fix $y = 1$ and see that if $x \geq 1$, $$\frac{\sqrt{x} - 1}{(x-1)^\alpha} \sim x^{\frac{1}{2} - \alpha} \rightarrow \infty.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/206324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the equivariant cohomology of the 0-sphere acting on the 1-sphere? View $S^1$ as the unit circle in the complex plane and let $S^0$ act by complex conjugation. What is the Borel equivariant cohomology $H^*_{S^0}(S^1;{\mathbb{Z}})$ of this action? I ask this question as an analog of the following well-known equivariant cohomology of $S^1$ on $S^2$ by rotation about the $z$-axis. We have $H^*_{S^1}(S^2;{\mathbb{Z}})\cong {\mathbb{Z}})[x]\oplus {\mathbb{Z}})[y]$ where $x$ and $y$ are in $H^2$. One proof goes by decomposing $S^2$ equivariantly into ${D^+} \cup_{S^1} D^-$ where $D^+$ and $D^-$ are the upper and lower hemispheres respectively. Here the intersection ${D^+} \cap D^- = S^1$ is path-connected so the Mayer-Vietoris sequence applies. Trying to apply the same method to my above problem leads to $S^1=I\cup_{S^0} I$ where $S^0$ is disconnected. This is reminiscent of a classical problem related to Brown's work on groupoids and the van Kampen theorem. . .
Mayer-Vietoris sequence doesn't need connected intersections. In your case, the $S^0$-equivariant cohomology of $S^0$ is the ordinary cohomology of a point and you get $H_{S^0}^*(S^1;\mathbb{Z})=H^*(\mathbb{R}P^\infty)\oplus H^*(\mathbb{R}P^\infty)$ (for $*>0$; for $*=0$ you get of course $\mathbb{Z}$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/206405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are inversion and multiplicaton open? If $G$ is a topological group, are inversion $G \to G$ and multiplication $G\times G \to G$ open mappings? More concretely, I try to show that division of complex numbers $$\{(z,w) \in \mathbb{C}^2;\; w \neq 0\} \to \mathbb{C},\; (z,w) \mapsto \tfrac{z}{w}$$ is an open mapping. I want to use this to construct charts on $\mathbb{CP}^1 = (\mathbb{C}^2\setminus\{0\})/\mathbb{C}^{\times}$. I don't know where to begin.
Take $O_1$ and $O_2$ two open subsets of $G$. As the inversion $i$ is a homeomorphism and an involution, $i(O_1)=i^{-1}(O_1)$ is open. Denote by $m$ the multiplication map. Then $$m(O_1\times O_2)=\bigcup_{y\in O_2}O_1y$$ is open. We deal with the general case using the definition of the product topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/206489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
vector space of real valued function on a given set well, $a$ is true I guess but not sure , $b$ is true as $\max\{f,g\}=\frac{1}{2}(|f-g|-|f+g|\}$ $c$ is also true as $f\in V\Rightarrow f^2\in V$ so any polynomial expression will also be in $V$, could any one tell me my logic are correct or not? and help me to solve fully correctly?
a is indeed true, but you should give an argument for it (hint: $fg$ occurs in what kind of expression involving $f$, $g$ and $x \mapsto x^2$? With a bit of cleverness, you can make the excess $f^2$ and $g^2$ bits go away)
{ "language": "en", "url": "https://math.stackexchange.com/questions/206567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
compact composition of linear and nonlinear operators? Let $ X $ be Hilbert space, and $X'$ be its dual. Assume, $T:X' \rightarrow X$ is linear, continuous and $N: X \rightarrow X'$ is nonlinear, continuous and uniformly bounded. Does it imply that $T \circ N : X \rightarrow X$ is compact? Thanks!
No, unless $X$ is finite dimensional. Identify $X$ with its dual $X'$. Take $T$ to be the identity and $N(x)=x$ if $\|x\|\leq 1$ and $N(x)=x/\|x\|$ otherwise. Then, $N$ is "nonlinear", continuous and uniformly bounded and the composition $x\mapsto T(N(x))$ is not compact: $TN$ maps the unit ball in $X$ onto itself and this is not compact unless $X$ is finite dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/206642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
ZFC set theory,first order theory Possible Duplicate: What is the difference between Gödel's Completeness and Incompleteness Theorems? what is the relationship between ZFC and first-order logic? I am a bit confused by a few things that I have read recently. I have read that ZFC is a first order theory and that any part of mathematics can be expressed in ZFC. Now I know that first order logic is complete, however this would seem to contradict the incompleteness theorems (with I have a basic understanding of). I was wondering where I have gone wrong? Thanks very much for any help (sorry for the silly question)
"Complete" means two different things for a logic (such as first-order-logic) versus for a theory in that logic. A logic is complete iff: Every sentence that has no counterexample-model can be proved. A theory is complete iff: Every sentence that has no proof-of-its-negation can be proved. First-order logic is complete in the first sense. ZFC is (assuming it is consistent) incomplete in the second sense -- that is, there are sentences that ZFC neither proves nor disproves. That's completely compatible with the logic being complete; it just means that for each such sentence there are models of ZFC where it is true, and other models where it is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/206693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Lebesgue Measure of a ball in $\mathbb{R}^n$ What is the relationship between the Lebesgue measure of a ball in $\mathbb{R}^n$ to the measure of a sphere? I've derived the measure of a sphere $S^{n-1}$ in $\mathbb{R}^n$ to be $\frac{2\pi^{n/2}}{\Gamma(n/2)}$, but I don't know how to relate the two. Please help and thanks in advance!
The area of a sphere of radius $r$ is the derivative with respect to $r$ of the volume of the ball of radius $r$. (Think about the volume of a thin film of thickness $dr$ on the surface of the ball!) Since that volume is proportional to $r^n$, it follows that the area of the unit sphere is $n$ times the volume of the unit ball.
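Concretely, the volume of the unit ball in $\mathbb{R}^n$ is $V_n=\frac{\pi^{n/2}}{\Gamma\left(\frac n2+1\right)}$, so the measure of the unit sphere is $n\,V_n=\frac{n\,\pi^{n/2}}{\frac n2\,\Gamma\left(\frac n2\right)}=\frac{2\pi^{n/2}}{\Gamma\left(\frac n2\right)}$, which matches the expression you derived; for radius $r$ the two quantities are $V_n r^n$ and $nV_n r^{n-1}$ respectively.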
{ "language": "en", "url": "https://math.stackexchange.com/questions/206770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What's the probability that Abe will win the dice game? Abe and Bill are playing a game. A die is rolled each turn. If the die lands 1 or 2, then Abe wins. If the die lands 3, 4, or 5, then Bill wins. If the die lands 6, another turn occurs. What's the probability that Abe will win the game? I think that the probability is $\frac{2}{5}$ just by counting the number of ways for Abe to win. I'm not sure how to formalize this though in terms of a geometric distribution.
I agree with you that $\dfrac{2}{5}$ is obvious. End of story. But if you really want to sum a series, abbreviate by $A$ the event "$1$ or $2$" and by $S$ the event "$6$." Then Abe can win in various ways. These are $A$ (wins immediately), $SA$ (get a $6$, then win), $SSA$, $SSSA$, and so on. These have probabilities $\dfrac{2}{6}$, $\:\dfrac{1}{6}\cdot\dfrac{2}{6}$, $\:\dfrac{1}{6}\cdot\dfrac{1}{6}\cdot\dfrac{2}{6}$, and so on. So we want to sum the series $$a+ar+ar^2+ar^3+\cdots,$$ where $a=\dfrac{2}{6}$ and $r=\dfrac{1}{6}$. By the usual formula for the sum of an infinite geometric series, this is $\dfrac{a}{1-r}$, which simplifies to $\dfrac{2}{5}$.
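If you want to convince yourself numerically, a short Monte Carlo sketch of the game:

```python
import random

def abe_wins():
    while True:
        roll = random.randint(1, 6)
        if roll <= 2:
            return True      # 1 or 2: Abe wins
        if roll <= 5:
            return False     # 3, 4 or 5: Bill wins
        # roll == 6: another turn occurs

trials = 200_000
print(sum(abe_wins() for _ in range(trials)) / trials)   # should be close to 2/5
```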
{ "language": "en", "url": "https://math.stackexchange.com/questions/206829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 3 }
How many unique codes with this pattern? If I have a string of 4 chars, composed by 2 alphabetic chars and 2 numeric chars Expamples: AD98 CB77 HG47 HH33 The possible alphatic chars are: A B C D E F G H K L M N P Q R S T U V X Y Z The possibile numeric chars are: 2 3 4 5 6 7 8 9 How many unique codes are possibile?
There are $22$ possible choices for the first character. Each of them can be followed by any of the $22$ letters, so there are $22\cdot22$ possible pairs of letters at the beginning of the code. Similarly, there are $8$ possible choices for each digit, so there are $8\cdot8$ possible ways to form the two-digit part of the code. Altogether, then, there are $22\cdot22\cdot8\cdot8=30~976$ possible codes. The general rule is that when you have to make a string of $k$ choices, and you can make the first choice in any of $n_1$ ways, the second choice in any of $n_2$ ways, and so on, there are $n_1n_2\cdots n_k$ ways to make the whole string of choices. In this problem $n_1=n_2=22$ and $n_3=n_4=8$.
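A brute-force count, just to confirm the multiplication rule, can be done in a couple of lines:

```python
from itertools import product

letters = "ABCDEFGHKLMNPQRSTUVXYZ"   # the 22 allowed letters
digits = "23456789"                  # the 8 allowed digits

codes = {"".join(c) for c in product(letters, letters, digits, digits)}
print(len(codes))   # 30976 = 22 * 22 * 8 * 8
```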
{ "language": "en", "url": "https://math.stackexchange.com/questions/207033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of the Direct mapping Theorem for Mellin transform. I cannot understand an integration in the proof of the Direct mapping Theorem for the Mellin transform. A statement of the Theorem, together with an outline of the standard proof, can be found at page 15 of this book. After decomposing the Mellin transform into a sum, the following integral, which the author declares to be "easily computable", has to be examined: $$\int_0^1 \sum_{k,b} c_{k,b} x^{s+b-1} log(x)^k dx, \qquad b\in\mathbb{R}, k\in\mathbb{N},$$ And $s\in\mathbb{C}$ is bounded from below, but in principle it could be $\Re(s)<-b$. (And $\Re(s)<-b$ indeed happens, for example applying this Theorem to obtain a meromorphic extension for the Gamma function.) Integrating by parts I proved: $$\int_0^1 x^rlog(x)^n = \frac{(-1)^n n!}{(r+1)^{n+1}}, \qquad \forall \:r\in\mathbb{R}_{\geq -1},\:n\in\mathbb{N}.$$ But it seems to me that he claims this result to hold for any $r\in\mathbb{R}$. Question Do you see a gap in my arguments? Or do you know where to find a detailed proof of the Direct mapping Theorem for Mellin transform? Thank you very much!
The gap in my argument above was purely conceptual and not computational. The Direct Mapping Theorem for the Mellin transform provides regularity results for the analytic continuation of the Mellin transform of a function. Specifically, consider the formula I proved integrating by parts: $$\int_0^1x^r\log(x)^ndx= \frac{(-1)^nn!}{(r+1)^{n+1}}.$$ The left hand side makes sense only for $r>-1$, but the right hand side is well defined for any $r\neq-1$ (properly, it is meromorphic for $r\in\mathbb{C}$, with a pole at $r=-1$). This actually defines the (meromorphic) continuation of $\int_0^1 x^r\log(x)^n dx$ to the whole complex plane. And we can use the equality for any such $r\in\mathbb{R}$, as the Author of the book claims.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Asymptotic equivalence of exponents An earlier question links to a paper of Erdos in which he says that it is "well-known" that the Prime Number Theorem is equivalent to $(\prod_{p\leq n}p)^{1/n} \to e$ as $n\to \infty.$ ** Here is my confusion. If $~\prod_{p\leq n}p \sim e^n$ or $e^{\log \prod p}= e^{\sum \log p} \sim e^n,$ (the last relation appears in the linked question, but I take responsibility for it) doesn't this imply that (*) $\lim_{n \to \infty} (\sum_{p\leq n} \log p - n) = 0?$ Of course it's true that $\lim_{n \to \infty} \frac{\sum \log p}{n} =1 $ and I do not think that (*) is true. But I think we do have in general that $$ e^{f(x)}\sim e^{g(x)} \implies \lim (f(x) - g(x)) = 0,$$ since $\lim \frac{e^f}{e^g}= e^{f-g} = 1 \implies \lim (f-g) = 0.$ Can someone tell me where I have goofed? Thanks! **If someone could point me to a proof of this I would appreciate it --I don't see it in Apostol or Hardy & Wright).
The fact that it's equivalent is rather simple. Recall $\pi(x)/(x/\ln x) \rightarrow 1$; your limit is equivalent to: $$\sum_{p\leq n} \log p / n \rightarrow 1$$ If you take $x=n \in \mathbb{N}$ then: $$\sum_{p\leq n} \log p /n \leq (\log n) \pi(n) /n \rightarrow 1$$ And to bound it from below, you notice that this fraction is greater than 1. Not sure how to argue from below, I'll check next time.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$L^2$-Oscillation Let $f:[0,1]\to \mathbb{R}$ be a smooth function such that the following property is satisfied. $$\int\limits_{[0,1]}\int\limits_{[0,1]}|f(x)-f(y)|^2dxdy\leq \varepsilon.$$ What can I most say about $\max\limits_{[0,1]}f-\min\limits_{[0,1]}f$?
$\def\abs#1{\left|#1\right|}$Nothing. We have \begin{align*} \int_{[0,1]^2} \abs{f(x) - f(y)}^2\, d(x,y) &\le \int_{[0,1]^2} \abs{f(x)}^2\,d(x,y) + \int_{[0,1]^2} 2\abs{f(x)}\abs{f(y)}\, d(x,y) + \int_{[0,1]^2} \abs{f(y)}^2\, d(x,y)\\ &= \|f\|^2 + 2\|f\|^2 + \|f\|^2\\ &= 4\|f\|^2 \end{align*} That is, your term will be small if $\|f\|^2$ is small. Now let $f_n$ be a smooth, positive function with $\max f_n = n$, $\min f_n = 0$, $\mathrm{supp}\, f_n \subseteq [0, \frac 1{n^3}]$. Then $\|f\|^2 \le n^2 \cdot \frac 1{n^3} = \frac 1n$, but $\max f_n - \min f_n = n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
convergence in function space Maybe is a silly question, but for some reason I am confused... If $\mathcal{F}$ is a normed space of real functions and $\displaystyle{ f \in \mathcal{\bar F } }$ then there exists a sequence of functions $ \displaystyle{ (f_n ) \subset \mathcal{F} }$ such that $\displaystyle{ f_n \to f \quad \text{as} \quad n \to \infty}$ which is equivelant to $\displaystyle{ || f_n -f|| \to 0 \quad \text{as} \quad n \to \infty}$ Here is my question: The convergence $f_n \to f$ is uniform or pointwise ? Thank's in advance! edit: $\displaystyle{ f, f_n : A \subset \mathbb R \to \mathbb R }$ Is now more clear my question? Any ideas?
Convergence $f_n\to f$ here is defined by $\|f_n-f\|\to 0$ (whereas the latter is just convergence in $\mathbb{R}$). This notion can indeed lead to very different notions of convergence of functions. A simple example is convergence with respect to the supremum norm $\|f\|_\infty=\sup|f(x)|$ and the 1-norm $\|f\|_1 = \int|f(x)|dx$. The former is just uniform convergence, while the latter is a notion of convergence which is usually not treated in calculus courses. In fact the latter allows unbounded sequences of functions which still converge to the zero function (and you should try to find such an example by yourself). If I remember correctly, "pointwise almost everywhere convergence" is not induced by a norm...
{ "language": "en", "url": "https://math.stackexchange.com/questions/207311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of a continuous function Suppose that $f$ is a continuous and real function on $[0,\infty]$. How can we show that if $\lim_{n\rightarrow\infty}(f(na))=0$ for all $a>0$ then $\lim_{x\rightarrow+\infty} f(x)=0$?
This is a standard (moderately tough) exercise in applying the Baire category theorem. Tim Gowers did a presentation of this result on his blog under the title "What is deep mathematics?". But as he writes: If you haven’t seen this before and want to get the most out of this post then you should (of course) make a serious attempt to solve this beautiful problem before reading on.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Interpolation inequality Lef $u$ be at least a $C^2$ function on $\mathbb{R}^n$. Let's denote the gradient by $D$. Also, (using the multiindex notation), define the seminorm $$||D^ku|| = \sup_{|\gamma|=k}{\sup_x{|D^{\gamma}u|}}$$ How can we prove the following : $$||Du|| \leq \epsilon||D^2u|| + C||u|| $$ where $C$ is some constant depending on $\epsilon$
By restricting the function to a line, i.e., considering the function, $t \mapsto u(a+tb)$ for some point $a\in \mathbb{R}^n$ and a unit vector $b\in\mathbb{R}^n$, you can reduce the problem to the case $n=1$. Now the problem is for a $\mathcal{C}^2$ function $f:\mathbb{R}\to\mathbb{R}$ to show $\|f'\| \le \epsilon \|f''\| + C\|f\|$. The idea is that if you have a point $x_0$ and a constant $M>0$ such that $f'(x_0)\ge M+1$ (or $-f'(x_0)\ge M+1$), and if you have a uniform bound $\|f''\|\le K$, then $f'\ge M$ (or $-f'\ge M$) whenever $|x-x_0| \le 1/K$, and then $|f(x_0+1/K)-f(x_0-1/K)| \ge 2M/K$, so $\| f\| \ge M/K$. This implies the desired inequality by juggling of constants.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove an Inequality I'm beginner with proofs and I got the follow exercise: Prove the inequality $$(a + b)\Bigl(\frac{1}{a} + \frac{4}{b}\Bigr) \ge 9$$ when $a > 0$ and $b > 0$. Determine when the equality occurs. I'm lost, could you guys give me a tip from where to start, or maybe show a good resource for beginners in proofs ? Thanks in advance.
Another one... $ (a+b)({1 \over a} + {4 \over b}) \ge 9 \qquad \qquad $ This requires that also $a,b\ne 0$ and thus $a,b \gt 0$ as from the problem-definition. Then multiplication with $ab$ does not introduce inconsistencies and we can do $ (a+b)({1 \over a} + {4 \over b}) \ge 9 \qquad \qquad // *ab $ $ (a+b)(b + 4a) \ge 9ab \qquad \qquad // $ expand $ 4a^2+5ab+b^2 \ge 9ab \qquad \qquad // - 9ab $ $ 4a^2-4ab+b^2 \ge 0 \qquad \qquad $ Finally: (1) $ \qquad (2a-b)^2 \ge 0 \qquad \qquad $ This is always true (2) $ \qquad $ The case of equality is now obvious: it occurs exactly when $2a=b$, for instance $a=1$, $b=2$, where both sides equal $9$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 4 }
counting the number of loop runs I have the following loop structure: for $i_1=1$ to $m$ for $i_2=i_1$ to $m$ $\vdots$ for $i_n=i_{n-1}$ to $m$ Of course, all indices $i_k$ are integers, and $m$ and $n$ are also positive integers. How can I count how many times the inner loop will run?
Since the loop variables are called $i_1,\ldots,i_n$, there are clearly $n$ nested loops. This is probably not what you were after, but it is the answer, as per "What You Ask For Is What You Get". (Well actually you asked "How can I count ..."; I'll spare you the answer to that.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/207590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Unsure about applying series comparison test Does this converge or diverge? $$ \sum\limits_{n=1}^\infty (a_{n} = \frac{1}{2\sqrt{n} + \sqrt[3]{n}}) $$ The answer is: diverges by limit comparison to $\sum (b_{n} = \frac{1}{\sqrt{n}})$ If I look at $\lim_{n \to \infty}\frac{a_{n}}{b_{n}}$ I get $$ \frac{\sqrt{n}}{2\sqrt{n} + \sqrt[3]{n}} = \frac{n^\frac{1}{2}} {2n^{\frac{1}{2}} + n^{\frac{1}{2}}n^\frac{2}{3}} = \frac{1}{2 + n^\frac{2}{3}} = \frac{0}{0 + 1} = 0 $$ as $n\to\infty$ I would expect it to be some $c > 0$ or $\infty$ Also, because $a_{n} < b_{n}$ I don't think I can use the comparison test. I'm pretty sure I am missing something but not sure what. Thanks Update: My algebra was wrong, how is this for finding the limit? $$ \frac{\sqrt{n}}{2\sqrt{n} + \sqrt[3]{n}} = \frac{1} {2 + \frac{1}{n^\frac{1}{6}}} = \frac{1} {\frac{2\sqrt[6]{n} + 1}{\sqrt[6]{n}}} = \frac{\sqrt[6]{n}}{2\sqrt[6]{n} + 1} = \frac{1}{2 + 0} = \frac{1}{2} $$ as $n \to\infty$ And since $\frac{1}{2} > 0$ and $b_{n}$ diverges, $a_{n}$ diverges
There seems to be a mistake in the algebra -- namely, you took $\sqrt[3]{n} = n^{1/2} n^{2/3}$. But the right-hand side of this is equal to $n^{1/2 + 2/3} = n^{3/6 + 4/6} = n^{7/6} \ne \sqrt[3]{n}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove $ \int_{cX} \frac{dt}{t} = \int_{X} \frac{dt}{t}$ for every Lebesgue measurable set $X$ Let $c>0$. Let $X \subseteq (0,\infty)$ be a Lebesgue measurable set. Define $$ cX := \{ cx \mid x \in X \}. $$ Then $$ \int_{cX} \frac{dt}{t} = \int_{X} \frac{dt}{t}$$ Now I can prove this for $X$ an interval and, thus, any set generated by set operations on intervals. It is simply by using the Fundamental Theorem of Calculus and natural log $\ln$. But I'm not sure how to approach for general Lebesgue measurable set.
(I dislike the title, which looks like an assignment.) Hint: Since you know this for intervals, use an approximation argument of step functions for the functions $x\mapsto \chi_X(x)\cdot\frac{1}{x}$ and $x\mapsto \chi_{cX}(x)\cdot\frac{1}{x}$. Where $\chi_A$ denotes the characteristic function on $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does there exist a convergent sequence $\{a_n\}$ such that $n d(a_{n+1},a_n)$ is not bounded? Does there exist a convergent sequence $\{a_n\}$ such that "$n d(a_{n+1} , a_n)$ is not bounded? If not, how can i prove that $n d(a_{n+1} , a_n)$ is bounded? And is it right to conclude this; This happens because domain of sequence is $\mathbb{N}$?
For example, let $a_{n}=(-1)^{n}\frac{1}{\sqrt{n}}$. Then $\lim_{n\rightarrow\infty}a_{n}=0$, while $n\,d(a_{n+1},a_{n})=n\,|a_{n+1}-a_{n}|=n\left(\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n+1}}\right)$ tends to infinity as $n$ tends to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/207796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Expected value of applying the sigmoid function to a normal distribution Short version: I would like to calculate the expected value if you apply the sigmoid function $\frac{1}{1+e^{-x}}$ to a normal distribution with expected value $\mu$ and standard deviation $\sigma$. If I'm correct this corresponds to the following integral: $$\int_{-\infty}^\infty \frac{1}{1+e^{-x}} \frac{1}{\sigma\sqrt{2\pi}}\ e^{ -\frac{(x-\mu)^2}{2\sigma^2} } dx$$ However, I can't solve this integral. I've tried manually, with Maple and with Wolfram|Alpha, but didn't get anywhere. Some background info (why I want to do this): Sigmoid functions are used in artificial neural networks as an activation function, mapping a value of $(-\infty,\infty)$ to $(0,1)$. Often this value is used directly in further calculations but sometimes (e.g. in RBM's) it's first stochastically rounded to a 0 or a 1, with the probabililty of a 1 being that value. The stochasticity helps the learning, but is sometimes not desired when you finally use the network. Just using the normal non-stochastic methods on a network that you trained stochastically doesn't work though. It changes the expected result, because (in short): $$\operatorname{E}[S(X)] \neq S(\operatorname{E}[X])$$ for most X. However, if you approximate X as a normal distribution and could somehow calculate this expected value, you could eliminate most of the bias. That's what I'm trying to do.
Since I do not have enough reputation to comment, I'll instead add a new answer. @korkinof's answer is almost correct. The final integral evaluates to the following: \begin{equation} \int \operatorname{sigmoid}(x) \mathcal{N}(x; \mu, \sigma^2) \mathrm{d}x \approx \int \Phi(\lambda x) \mathcal{N}(x; \mu, \sigma^2) \mathrm{d}x = \Phi\left(\frac{\lambda \mu}{\sqrt{1 + \lambda^2 \sigma^2}}\right). \end{equation} I verified my answer through simulation.
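For readers who want to reproduce such a simulation, here is a minimal Monte Carlo sketch (added for illustration, not taken from the original answer; it assumes the common choice $\lambda=\sqrt{\pi/8}$, which matches the slopes of the logistic sigmoid and the probit at the origin, and it uses NumPy/SciPy):

```python
# Monte Carlo check of  E[sigmoid(X)] ~ Phi(lambda*mu / sqrt(1 + lambda^2 * sigma^2))
# for X ~ N(mu, sigma^2).  The scale lambda = sqrt(pi/8) is an assumed (common) choice,
# not something stated in the original answer.
import numpy as np
from scipy.stats import norm

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
lam = np.sqrt(np.pi / 8.0)

for mu, sigma in [(0.0, 1.0), (1.5, 0.5), (-2.0, 3.0)]:
    samples = rng.normal(mu, sigma, size=1_000_000)
    mc = sigmoid(samples).mean()                                    # Monte Carlo estimate
    approx = norm.cdf(lam * mu / np.sqrt(1.0 + lam**2 * sigma**2))  # probit approximation
    print(f"mu={mu:+.1f}, sigma={sigma:.1f}:  MC={mc:.4f}  approx={approx:.4f}")
```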
{ "language": "en", "url": "https://math.stackexchange.com/questions/207861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 2 }
Asymptotic for binomial coefficient with square root I'm looking for asymptotic estimate for the binomial coefficient: $$ \ln{\binom{n}{[\sqrt{n}]}} $$ I assume Stirling's approximation can help, but I'm not sure I will get any good estimation with this approach. Is there any good way to make an estimation for this coefficient? Thanks in advance.
Using Shitikanth's hint (Stirling's approximation for each factorial), I think you're going to come up with $$\ln{n \choose [\sqrt{n}]}\approx\ln\left(\frac{n^{\,n-\sqrt{n}/2+1/4}}{\sqrt{2\pi}\,\left(n-\sqrt{n}\right)^{\,n-\sqrt{n}+1/2}}\right).$$
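Whatever form one prefers for the Stirling bookkeeping, the leading behaviour is worth recording (a cross-check added here, not part of the original answer): Stirling's formula applied to each factorial gives $$\ln\binom{n}{[\sqrt{n}]}=\tfrac12\sqrt{n}\,\ln n+\sqrt{n}+O(\log n).$$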
{ "language": "en", "url": "https://math.stackexchange.com/questions/207912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Eigenvalues of a rectangular matrix I've read that the singular values of a matrix are given by $$\sigma_k=\sqrt{\lambda_{k}},$$ where the $\lambda_k$ are the eigenvalues, but I'm assuming this only applies to square matrices. How could I determine the eigenvalues of a non-square matrix? Pardon my ignorance.
The standard definition of an eigenvalue of $A$ is a number $\lambda$ so that for some vector $v$, $Av=\lambda v$. If you've got an $m$ by $n$ matrix, $v$ must be a vector of length $n$, and then $Av$ must be a vector of length $m$. Thus if $m\not= n$, there is no way for $\lambda v$ to be equal to $Av$, since the two vectors are of different dimensions. As leshik suggests in the comments, we need to use an alternative definition if we want to look at the "eigenvalues" of a rectangular matrix. An analog is to find unit vectors $u$ and $v$ so that $Av=\sigma u$ and $A^*u = \sigma v$. Then $u$ and $v$ are the left- and right-singular vectors for the singular value $\sigma$ (the analogs of an eigenvector of a square matrix).
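As a small numerical illustration of this singular-value analogue (a sketch added here, using NumPy; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])           # a 2x3 matrix: no eigenvalues in the usual sense

# Singular value decomposition: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Each singular triple satisfies A v = sigma u and A^T u = sigma v
for i, sigma in enumerate(s):
    u, v = U[:, i], Vt[i, :]
    print(sigma, np.allclose(A @ v, sigma * u), np.allclose(A.T @ u, sigma * v))
```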
{ "language": "en", "url": "https://math.stackexchange.com/questions/207991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Modular Arithmetic order of operations In an assignment, I am given $E_K(M) = M + K \pmod {26}$. This formula is to be applied twice in a formal proof, so that we have $E_a(E_b(M)) =\ ...$. What I'm wondering is; is the original given formula equal to $(M + K)\pmod{26}$, or $M + (K \mod{26})$? This will obviously make a big difference down the line. I do suspect the first ($(M + K)\pmod{26}$), however I want to be certain before I move forward in my proof. NB: I did not tag this as homework as this is not actually part of the problem, rather just a clarification.
Without further context it is impossible to determine what is intended, because there are many abuses of the mod notation. However, with high probability, the expression $\rm\: M\!+\!K\ (mod\ 26)\:$ denotes the remainder of $\rm\:M\!+\!K\:$ when divided by $26\,$ (or the entire residue class $\rm\ M\!+\!K+26\,\Bbb Z\:$ containing the remainder). Your other possible interpretation, $\rm\:M + (K\ mod\ 26),\:$ is much more rare, though it does sometimes occur when discussing parity, e.g. authors may write $\rm\:n\ (mod\ 2)\:$ or $\rm\: n\ mod\ 2\:$ in arithmetical expressions that depend upon the parity of $\rm\:n.$ In the first case one is performing modular arithmetic, i.e. addition modulo $26$. In the second case one is performing integer arithmetic, i.e. normal integer addition. If the ambient context reveals which type of arithmetic is intended, then you can infer the meaning of the notation.
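To see how much the two readings can differ, here is a tiny illustrative snippet (the numbers are arbitrary, chosen only so that the two interpretations disagree):

```python
M, K = 20, 10

# Modular arithmetic: reduce the whole sum modulo 26 (the usual reading of "M + K (mod 26)")
print((M + K) % 26)   # 4

# Integer arithmetic with only K reduced: generally a different number
print(M + (K % 26))   # 30
```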
{ "language": "en", "url": "https://math.stackexchange.com/questions/208028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
$ \int\limits_C {\left( {z - z_0 } \right)^m dz} $ Evaluate the line integral $ \int\limits_C {\left( {z - z_0 } \right)^m dz} $ , where $C$ is the circle centered at $z_0$ with radius $r>0$, and: $i)$ $m$ is an integer $m \geqslant 0 $ $ii)$ $m$ is an integer $m<0$ I want to see how to solve this, so as to see some worked examples of such integrals.
Parameterize the circle by $C(t) = z_0 + r e^{it} $ with $t\in (-\pi,\pi].$ Using the definition of a path integral, $$ \int_{\gamma} f(z) dz = \int^b_a f(\gamma(t)) \gamma'(t) dt$$ we have $$ \int_{C} (z-z_0)^m dz = \int^{\pi}_{-\pi} r^m e^{imt} ire^{it} dt = ir^{m+1} \int^{\pi}_{-\pi} e^{i (m+1)t} dt.$$ Now if $m+1\neq 0$ then the last integral is simply zero, seen either by direct evaluation or noting that the exponential goes through exactly $|m+1|$ periods. If $m+1=0$ then the integrand is $1$ and the value of the integral is thus $2\pi i.$ To summarize, $$\int_C (z-z_0)^m \, dz = \left\{ \begin{array}{lr} 2\pi i \text{ if }m=-1 \\ 0 \text{ if } m\in \mathbb{Z}\setminus \{-1\} \end{array} \right.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/208096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $H \times G$ is commutative iff $H, G$ are commutative This is another proof question I am asking about, can someone give me tips on how to answer these questions? My question says: "Let $H,G$ be arbitrary groups. Prove that $H \times G$ is commutative (abelian) if and only if both groups $H, G$ are commutative" I don't get how to prove that $H $ and $G$ are commutative in general. Seeing as they are arbitrary groups, they can be anything right? How do I go about trying to prove this?
In general, be aware of and state what you know. In one direction you know that $H \times G$ is commutative. What does it mean? It means that for all $(h_1, g_1), (h_2, g_2) \in H \times G$ you have $(h_1, g_1) + (h_2, g_2) = (h_1 + h_2, g_1 + g_2) = (h_2, g_2) + (h_1, g_1)$. Next be aware of and state what you want. You want to show that for all $h_1, h_2 \in H, g_1, g_2 \in G$ you have $h_1 + h_2 = h_2 + h_1$ and $g_1 + g_2 = g_2 + g_1$. To finish, use what you know to show what you want.
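To make the key step concrete (a small worked instance added here, in the same additive notation as above, not part of the original answer): given $h_1,h_2\in H$, apply the known commutativity of $H\times G$ to the pairs $(h_1,0_G)$ and $(h_2,0_G)$: $$(h_1+h_2,\,0_G)=(h_1,0_G)+(h_2,0_G)=(h_2,0_G)+(h_1,0_G)=(h_2+h_1,\,0_G),$$ so $h_1+h_2=h_2+h_1$; the argument for $G$ is symmetric.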
{ "language": "en", "url": "https://math.stackexchange.com/questions/208180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Non-negative matrix and inverse Lately, I've been struggling with math homework and came across a question I'm not sure how to answer. I will be glad for any help... Suppose we have a matrix $A$ (size $n\times n$) and its inverse (let's call it $B$). They are both non-negative in the sense that all their elements $A_{ij}$ and $B_{ij}\geq 0$, where $1\leq i,j\leq n$. The question is, what can we say about these matrices - everything must be justified. This is where I have got so far: 1) $A$ is regular (otherwise it wouldn't have an inverse - I don't think I have to justify this statement). Are there any other features? I think I can justify some of them by using minor matrices, but I'm not sure how :-(
You know that $AB= I_n$. Since for all $i \neq j$ you have $\sum_k A_{ik}B_{kj}=0$, it follows that for all $i,j,k$ with $i \neq j$ you have either $A_{ik}=0$ or $B_{kj}=0$. Now, since $\sum_k A_{ik}B_{ki} \neq 0$, for each $i$ you can find some $k_i$ so that $A_{ik_i} \neq 0$ and $B_{k_i i} \neq 0$. Combining this with the above, you can prove that $B_{k_i j}=0$ for all $j\neq i$ and $A_{j k_i}=0$ for all $j \neq i$.
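Spelling out where this leads (a packaging of the conclusion, added here rather than stated in the original answer): the map $i\mapsto k_i$ is injective, hence a permutation of $\{1,\dots,n\}$, and each column $k_i$ of $A$ has its only nonzero entry in row $i$. So each row and each column of $A$ contains exactly one (positive) nonzero entry; that is, $A$ is a monomial matrix, $A=DP$ with $D$ a positive diagonal matrix and $P$ a permutation matrix, and $B=A^{-1}$ has the same form.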
{ "language": "en", "url": "https://math.stackexchange.com/questions/208251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Intersection and Span Assume $S_{1}$ and $S_{2}$ are subsets of a vector space $V$. It has already been proved that span $(S_{1} \cap S_{2})$ $\subseteq$ span $(S_{1}) \cap$ span $(S_{2})$ There seem to be many cases where span $(S_{1} \cap S_{2})$ $=$ span $(S_{1}) \cap$ span $(S_{2})$ but not many where span $(S_{1} \cap S_{2})$ $\not=$ span $(S_{1}) \cap$ span $(S_{2})$. Please help me find an example. Thanks.
In $\mathbb{R}^2$, we have: $$ S_1 = \{(1, 0), (0, 1)\} \\ S_2 = \{(2, 0), (0, 2)\} $$ Here $S_1 \cap S_2 = \emptyset$, so span $(S_1 \cap S_2) = \{0\}$. Yet both $S_1$ and $S_2$ span $\mathbb{R}^2$, so span $(S_1) \cap$ span $(S_2) = \mathbb{R}^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/208311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Intuitive explanation of why $\dim\operatorname{Im} T + \dim\operatorname{Ker} T = \dim V$ I'm having a hard time truly understanding the meaning of $\dim\operatorname{Im} T + \dim\operatorname{Ker} T = \dim V$ where $V$ is the domain of a linear transformation $T:V\to W$. I've used this equation several times in many problems, and I've gone over the proof and I believe that I fully understand it, but I don't understand the intuitive reasoning behind it. I'd appreciate an intuitive explanation of it. Just to be clear, I do understand the equation itself, I am able to use it, and I know how to prove it; my question is what is the meaning of this equation from a linear algebra perspective.
Perhaps think of it in terms of projections? Whatever part of the domain $T$ collapses to zero - the part that leaves no trace in the image - is precisely the kernel. This is why it is the dimension of the domain that matters. $T$ maps a complement of the kernel injectively onto the image, so the image has the same dimension as that complementary piece of the domain. The kernel itself is sent to zero, so it accounts for the remaining dimension of the domain.
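A concrete instance of this picture (an illustration added here, not from the original answer): take the projection $T:\mathbb{R}^3\to\mathbb{R}^2$, $T(x,y,z)=(x,y)$. Its kernel is the $z$-axis, of dimension $1$; its image is all of $\mathbb{R}^2$, of dimension $2$; and indeed $2+1=3=\dim\mathbb{R}^3$.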
{ "language": "en", "url": "https://math.stackexchange.com/questions/208378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 4, "answer_id": 0 }
Can vectors be inverted? I wish to enquire if it is possible to solve the equation below for $c$. $$B^{-1}(x-\mu) = xc $$ Here obviously $B$ is an invertible matrix and both $c$ and $\mu$ are column vectors. Would the solution be $$x^{-1}B^{-1}(x-\mu) = c $$ i.e., is it possible to invert vectors? How about if it was the other way round: $$B^{-1}(x-\mu) = cx $$ Is there any other way to do this? Thanks in advance.
There's no such thing as an inverse of a vector (unless the vector is actually a $1\times 1$ vector, of course). Otherwise, there would be a solution $C$ for any $B,X,\mu$ (or at least any $X$ "invertible"), but that is obviously not the case (e.g. for any $X$ if we put $B=I$, $\mu$ linearly independent from $X$, there is no $C$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/208447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 2 }
Cardinality of the set {10, (8+2)} Is |{10, (8+2)}| = 2? (cardinality of the set) I'm not sure about this because 8+2 = 10. I know the cardinality of the set {10, 10} = 1, but since 8+2 is a different representation of 10, could it be different? Can anyone enlighten me on this? Thanks!
Unless (like Asaf seems to be assuming) we're working at a much higher level of sophistication than it seems here, the elements of a set are simply numbers -- not expressions, representations or splotches of ink or chalk. So when you write $\{10,(8+2)\}$ what you mean is that the number denoted by "$10$" is in the set, and the number denoted by "$(8+2)$" is in the set, and nothing else is. These two expression happen to denote the same number (which can also be described as "ten" or "one less than eleven" or 0x0A or "$1+1+1+1+1+1+1+1+1+1$"), so that number is the only element of the set in question, so its cardinality is $1$. Period. It is somewhat common to understand sets intuitively as "lists of things where the order doesn't matter (and neither does repetitions)". This understanding can be misleading unless one is extremely careful about what a repetition is (not to speak of what a "list" means if it has infinitely many elements). What is really going on is: A set of something that you can ask "is this one of your elements?" of for every "this" in the universe. The set consists of its yes/no answers to all of these questions, neither more nor less. So when you write $\{10,(8+2)\}$ you're speaking of a set that answers "yes, $X$ is one of my elements" if and only if $X=10$ or $X=(8+2)$. But no matter what $X$ is, "$X=10$" and "$X=(8+2)$" are either both true or false, so that is the same as answering "yes, $X$ is one of my elements" if and only if $X=10$. And therefore $\{10,(8+2)\}$ is (a name for) the same set that $\{10\}$ is a name for. A set consists only of its answers, so when the answers are the same we're looking at the same set.
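The same point can be made mechanically (an illustration added here, using Python's built-in sets, where elements are likewise compared by value):

```python
s = {10, 8 + 2}   # the expression 8 + 2 is evaluated to the number 10 before the set is built
print(s)          # {10}
print(len(s))     # 1  -- the set has a single element
```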
{ "language": "en", "url": "https://math.stackexchange.com/questions/208506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Representations of $\pi_1M$ and Heegaard Splittings I am reading Floer's Instanton-Invariant paper, and am stuck on a sentence. To set the stage: Consider a closed connected oriented 3-manifold $M$ and the nonabelian group $SU_2$. Denote the equivalence classes of representations $\mathcal{R}(M)=Hom(\pi_1M,SU_2)/\text{ad}(SU_2)$. Now given a Heegaard splitting $M=M_+\cup_SM_-$, one can consider $\mathcal{R}(M)$ as the intersection of $\mathcal{R}(M_+)$ and $\mathcal{R}(M_-)$ in $\mathcal{R}(S)$. Indeed, Seifert van-Kampen's theorem gives $\pi_1(M)\cong\pi_1(M_+)\ast_{\pi_1(S)}\pi_1(M_-)$ and then the statement follows by the universal property of amalgamated free products. The resulting intersection number (ignoring the trivial representation) can be shown to be independent of the particular Heegaard splitting. [The "result" refers to the integer-valued Casson invariant, which assigns a sign to each intersection $a\in\mathcal{R}$]. How is this done?
(For completeness) This is explained/proved in the main reference of the invariant: Casson's Invariant for Oriented Homology 3-Spheres (by Akbulut and McCarthy).
{ "language": "en", "url": "https://math.stackexchange.com/questions/208568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why are the integrands dominated by $\alpha f$ This is on page 32 of Rudin's Real and Complex Analysis, 3rd Edition: Suppose $\mu$ is a positive measure on $X$, $f: X \rightarrow [0, \infty]$ is measurable, $\int_X f d\mu = c$, where $0<c<\infty$, and $\alpha$ is a constant. Prove that$$\lim_{n \rightarrow \infty} \int_X n \log[1+(f/n)^{\alpha}]d \mu = \begin{cases} \infty & \text{ if } 0 < \alpha <1, \\ c & \text{ if } \alpha=1, \\ 0 & \text{ if } 1 < \alpha < \infty. \end{cases}$$ The hint says "if $\alpha \geq 1$, the integrands are dominated by $\alpha f$". But why? Thanks a lot.
To be proven: $n\log(1+(t/n)^\alpha)\leqslant\alpha t$, for every $t\geqslant0$ and $n\gt0$, with $\alpha\geqslant1$.

Step 1: Replace $t$ by $nt$; hence it suffices to prove that $\log(1+t^\alpha)\leqslant\alpha t$, for every $t\geqslant0$.

Step 2: Show that, if $\alpha\geqslant1$, then $t^{\alpha-1}\leqslant1+t^\alpha$ for every $t\geqslant0$. (Hint: consider separately the cases $t\leqslant1$ and $t\geqslant1$.)

Step 3: Compute the derivative of the function $u:t\mapsto\log(1+t^\alpha)-\alpha t$.

Step 4: Compute $u(0)$ and conclude.
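For reference, here is how Steps 3 and 4 finish (worked out here; the original answer deliberately leaves them as hints): with $u(t)=\log(1+t^{\alpha})-\alpha t$ one gets $$u'(t)=\frac{\alpha t^{\alpha-1}}{1+t^{\alpha}}-\alpha=\frac{\alpha\,\bigl(t^{\alpha-1}-1-t^{\alpha}\bigr)}{1+t^{\alpha}}\leqslant 0$$ by Step 2, and $u(0)=0$, so $u(t)\leqslant 0$ for all $t\geqslant 0$, which is exactly the claimed domination.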
{ "language": "en", "url": "https://math.stackexchange.com/questions/208716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
maybe maximum modulus principle $ |f(z)| \leqslant 1 + |z|^{\frac{3} {2}} \forall z $ Let $f$ be an entire function such that $$ |f(z)| \leqslant 1 + |z|^{\frac{3}{2}} \quad \forall z. $$ What can we conclude about $f$? Sorry for asking this, but I want to see some examples of the contents of the chapter that I'm reading; this problem is from the chapter on the maximum modulus principle.
Edit: oop... misread... revised: The given inequality and Cauchy's formula for the second derivative $f''$, letting the large circle go to infinity, show that $f''(z)=0$, so $f$ is a polynomial of degree at most one (the conclusion here is "linear" rather than Liouville's "constant"). This is just a little extension of the argument for Liouville's theorem, so not really so much about maximum modulus, perhaps. Edit-edit: explicitly, by the Cauchy integral formula for the derivatives, $f''(z)={2!\over 2\pi i}\int_\gamma {f(\zeta)\,d\zeta\over (\zeta-z)^3}$, where $\gamma$ is a large circle of radius $R$ centered at $z$. The numerator is bounded by $1+|\zeta|^{3/2}$, which is essentially $R^{3/2}$ for large $R$, and the denominator is $R^3$. The length of the curve is $2\pi R$, so the integral expressing the second derivative is bounded by a constant multiple of $1/R^{1/2}$, which goes to $0$ as $R$ goes to $+\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/208895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solving a differential equation given a general solution The general solution to $y'''+(a+1)y''+(a+5)y'+5y=0$ (where $a$ is a real-valued constant) is $y=c_1e^{-2t}\sin t+c_2y_2+c_3y_3$ Find $a$, $y_2$, and $y_3$. I thought that finding the characteristic equation would help. So I started as: $r^3+(a+1)r^2+(a+5)r+5=0$ But it doesn't seem to really help with anything, so I'm not quite sure where to go from here. Can I make some assumptions based on the general solution? Thanks!
Since I get a sense this might be homework, I'll give a few hints.

* Based on the general solution, you should know one root of the characteristic equation.
* Complex roots of polynomials with real-valued coefficients come in conjugate pairs.
* Based on the last term of the characteristic polynomial, the product of the 3 roots is $-5$.
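For reference, here is where the hints lead (worked out here, not part of the hint-style answer above): the term $e^{-2t}\sin t$ forces the roots $-2\pm i$, whose product is $5$; since the product of all three roots is $-5$, the third root is $-1$. Hence the characteristic polynomial is $(r^2+4r+5)(r+1)=r^3+5r^2+9r+5$, so $a+1=5$ and $a+5=9$, i.e. $a=4$, and one may take $y_2=e^{-2t}\cos t$ and $y_3=e^{-t}$.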
{ "language": "en", "url": "https://math.stackexchange.com/questions/208962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Inequality involving $x^n$ This is probably not very exciting, but I haven't been able to get my head around it for a while. Here's the inequality: $$ x_1^3-\epsilon x_1^2 -(2+\epsilon(1-x_1^n))x_1+(1+\epsilon)>0 $$ where $\epsilon>0, n \in \mathbb{Z^{+}}$. Please just give me hints, don't solve it for me. EDIT: $x_1>0$ EDIT2: it seems that the solution to the inequality $1-x_1-x_1^2>0$ is the trick. For (see EDIT 1) $0<x_1<\frac{\sqrt{5}-1}{2} \approx 0.618$ the main inequality above is true for all $n, \epsilon$. If $x_1$ is larger than $\approx 0.618$, then it is true for some $\epsilon>0$ iff $n$ is not very large. The condition $\epsilon>0$ is crucial.
If it is wished that the inequality be true for all positive $x_1$, it will need modification. For example, let $n=1$. Then we are looking at $$x^3-(2+\epsilon)x+1+\epsilon.$$ This is not necessarily positive. Imagine $\epsilon$ very close to $0$. The function reaches a minimum at $x=\sqrt{(2+\epsilon)/3}$, and the minimum value is negative, though not by much. I chose a small $\epsilon$, because presumably that is what is intended. But if we pick $\epsilon$ large, like $10$, we are looking at $x^3-12x+11$, which is $-5$ at $x=2$. There will be similar difficulties with $n=2$. And for any $n\ge 3$, and small $\epsilon$, we can reproduce the same problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/209005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Positive Definite Matrix Determinant Prove that a positive definite matrix has positive determinant and positive trace. In order to have a positive determinant, the matrix must be regular and have positive pivots, which is the definition. It's obvious that the determinant must be positive, since that is what positive definite means, so how can I prove it?
One more proof that the determinant of a positive definite matrix is positive: let $A$ be a positive definite (symmetric) matrix. We know that $A$ can be written as $A = VDV^T$, its spectral (eigen)decomposition. Here $V$ is an orthogonal matrix, so $V^TV = I$ and hence $|V^T|\,|V| = 1$ --- (1). Also, $D$ is the diagonal matrix with the eigenvalues of $A$ as its diagonal entries. From $A = VDV^T$ we get $|A| = |V|\,|D|\,|V^T|$, so $|A| = |D|$ by (1) --- (2), and $|D|$ is the product of the eigenvalues --- (3). Thus if we prove that all the eigenvalues of a positive definite matrix are positive, we are done. For an eigenpair $Ax = bx$ with $x \neq 0$, positive definiteness gives $x^TAx > 0$, i.e. $b\,x^Tx > 0$; since $x^Tx > 0$ always, it follows that $b > 0$. Hence, from (2) and (3), $|A| = |D| > 0$. Hence proved.
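As for the trace, which the question also asks about (a short addendum, not covered in the answer above): the trace of $A$ equals the sum of its eigenvalues, and each eigenvalue is positive by the argument just given, so $\operatorname{tr}(A)>0$. Alternatively, each diagonal entry satisfies $A_{ii}=e_i^{T}Ae_i>0$ directly, and the trace is their sum.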
{ "language": "en", "url": "https://math.stackexchange.com/questions/209082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Check my answer for the derivative of $y=\tan^{-1}(40/h) -\tan^{-1}(32/h)$ Differentiate $$y=\tan^{-1}{\frac {40}{h}}-\tan^{-1}{\frac {32}{h}}$$ My answer: Using the identity $\frac{d}{dx}\tan^{-1}(x)=\frac{1}{1+x^2}$, can I conclude that $$\frac{dy}{dh}=\frac{1}{1+(\frac{40}{h})^2}(-\frac {40}{h^2})-\frac{1}{1+(\frac{32}{h})^2}(-\frac {32}{h^2})$$
You are right. But the final result can be simplified a little bit.
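For instance (one possible simplification, spelled out here): multiplying the numerator and denominator of each term by $h^2$ gives $$\frac{dy}{dh}=\frac{32}{h^{2}+1024}-\frac{40}{h^{2}+1600}.$$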
{ "language": "en", "url": "https://math.stackexchange.com/questions/209175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }