A finite-dimensional vector space cannot be covered by finitely many proper subspaces? Let $V$ be a finite-dimensional vector space, and let $V_i$ be a proper subspace of $V$ for every $1\leq i\leq m$, for some integer $m$. In my linear algebra text, I've seen the result that $V$ can never be covered by $\{V_i\}$, but I don't know how to prove it correctly. I've written down my false proof below: First we may reduce to the case where each $V_i$ is a codimension-1 subspace. Since $\operatorname{codim}(V_i)=1$, we can pick a vector $e_i\in V$ s.t. $V_i\oplus\mathcal{L}(e_i)=V$, where $\mathcal{L}(v)$ is the linear subspace spanned by $v$. Then we choose $e=e_1+\cdots+e_m$; I want to show that none of the $V_i$ contains $e$, but I failed. Could you tell me a simple and correct proof of this result? Ideas of proof are also welcome. Remark: As @Jim Conant mentioned, a covering by proper subspaces is possible over a finite field, so I assume the base field of $V$ to be a number field.
This is a special case of the fact that an affine space over an infinite field is irreducible. The proof can be found in most books on elementary algebraic geometry (see for example Fulton's Algebraic Curves).
{ "language": "en", "url": "https://math.stackexchange.com/questions/145869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 1 }
Compact complex surfaces with $h^{1,0} < h^{0,1}$ I am looking for an example of a compact complex surface with $h^{1,0} < h^{0,1}$. The bound that $h^{1,0} \leq h^{0,1}$ is known. In the Kähler case, $h^{p,q}=h^{q,p}$, so the example cannot be (for example) a projective variety or a complex torus. Does anyone know of such an example? Thanks.
The standard example of such a thing is the Hopf surface.
{ "language": "en", "url": "https://math.stackexchange.com/questions/145920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove or disprove: $(\mathbb{Q}, +)$ is isomorphic to $(\mathbb{Z} \times \mathbb{Z}, +)$? Prove or disprove: $\mathbb{Q}$ is isomorphic to $\mathbb{Z} \times \mathbb{Z}$. I mean the groups $(\mathbb Q, +)$ and $(\mathbb Z \times \mathbb Z,+).$ Is there an isomorphism?
Another way of seeing this (yes, there are many ways!) is to notice that two isomorphic groups have the same quotients. That is, if $G\cong H$ and $G\twoheadrightarrow K$ then $H \twoheadrightarrow K$. Now, by this question (which was only asked the other day, which is why I am posting this answer!), we know that every proper quotient of $\mathbb{Q}$ is torsion (that is, every element has finite order). On the other hand, $\mathbb{Z}\times\mathbb{Z}$ has a torsion-free proper quotient, $\mathbb{Z}\times\mathbb{Z}\twoheadrightarrow \mathbb{Z}$. Thus, they cannot be isomorphic. (Indeed, this actually proves that there cannot be a homomorphism from $\mathbb{Q}$ to $\mathbb{Z}\times\mathbb{Z}$, as lhf has already shown - the result we use tells us that the map cannot have non-trivial kernel, while this proves that the kernel cannot be trivial either.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/146071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 7, "answer_id": 5 }
Approximate a series with finite number of terms How it is possible to approximate: $$\sum_{i=1}^{NR}{i\cdot \left( \dfrac{1}{1-p} \right)^i} $$
Let $x = \dfrac1{1-p}$ and let $n = NR$. Then we are interested in the sum $\displaystyle \sum_{i=1}^{n} i x^i$. \begin{align} \sum_{i=1}^{n} i x^i & = x \left(\sum_{i=1}^{n} i x^{i-1} \right)\\ & = x \left( \sum_{i=1}^{n} \frac{d x^i}{dx} \right)\\ & = x \frac{d}{dx} \left( \sum_{i=1}^{n} x^i\right)\\ & = x \frac{d}{dx} \left( x\left(\frac{x^n - 1}{x-1} \right) \right)\\ & = x \left( \frac{nx^{n+1} - (n+1)x^n + 1}{(x-1)^2} \right) \end{align} Replacing $x$ by $\dfrac1{1-p}$ and $n$ by $NR$, we get that the solution is $$\left( \frac{NR}{p} - \frac1{p^2} + \frac1p \right) \left( \dfrac1{1-p}\right)^{NR} + \frac1{p^2} - \frac1p$$
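If it helps, the closed form can be sanity-checked numerically. Here is a small Python sketch I wrote for this answer (the function names `partial_sum` and `closed_form` are my own):

```python
import math

def partial_sum(n, p):
    """Direct evaluation of sum_{i=1}^{n} i * (1/(1-p))^i, with n playing the role of NR."""
    x = 1.0 / (1.0 - p)
    return sum(i * x**i for i in range(1, n + 1))

def closed_form(n, p):
    """The closed form derived above, with x = 1/(1-p) substituted back in."""
    return ((n / p - 1.0 / p**2 + 1.0 / p) * (1.0 / (1.0 - p))**n
            + 1.0 / p**2 - 1.0 / p)
```

For instance, with $n=3$ and $p=\tfrac12$ (so $x=2$) the direct sum is $1\cdot2+2\cdot4+3\cdot8=34$, and the closed form agrees.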
{ "language": "en", "url": "https://math.stackexchange.com/questions/146196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Surprising approximation of weighted sum of binomial coefficients The following sum appeared in connection with the problem of addition of angular momentum in physics: $$ \frac{1}{2^{n+3}}\sum_{k=0}^n \left(\frac{n-2k-1}{\sqrt{k+1}}+\frac{n-2k+1}{\sqrt{n-k+1}}\right)^2 {n\choose k} $$ The intriguing thing is that this sum is extremely well approximated by $1-\frac{c}{n}$, where $c$ is some constant, as $n\to\infty$. I wonder if there is a simple way to see this, or to get this as a lower bound by carefully approximating this rather unappealing sum.
Notice that the sum can be given a probabilistic interpretation, since $\frac{1}{2^n} \binom{n}{k}$ is the point mass function of a symmetric Binomial random variable. Let $X \sim \operatorname{Binom}\left(n, \frac{1}{2}\right)$. Then the sum in question equals $$ S = \frac{1}{8} \mathbb{E} \left( \left( \frac{n-2X -1}{\sqrt{X+1}} + \frac{n-2X +1}{\sqrt{n-X+1}} \right)^2 \right) $$ In the large $n$ limit, the de Moivre-Laplace theorem can be used, and $X$ can be approximated in distribution as $\frac{n}{2} + \frac{\sqrt{n}}{2} Z$, where $Z$ is the standard normal variable. The sum $S$ then approximately equals: $$ S \approx \frac{1}{8} \mathbb{E}\left( \left( \sqrt{2}\frac{1+\sqrt{n} Z}{\sqrt{2+n + \sqrt{n} Z}} + \sqrt{2}\frac{-1+\sqrt{n} Z}{\sqrt{2+n - \sqrt{n} Z}} \right)^2 \right) = \mathbb{E}\left( Z^2 + \frac{3}{4n} Z^2 \left( Z^2-4\right) + \mathcal{o}\left(\frac{1}{n}\right) \right) = 1 - \frac{3}{4 n} + \mathcal{o}\left(\frac{1}{n}\right) $$ where $\mathbb{E}(Z^{2k}) =(2k-1)!!$ was used, in particular $\mathbb{E}(Z^{2}) =1$ and $\mathbb{E}(Z^{4}) = 3$. Added: The series expansion is derived using simple algebra: $$\begin{eqnarray} \frac{\pm 1 + \sqrt{n} Z}{\sqrt{2+n \pm \sqrt{n} Z}} &=& \frac{Z \pm \frac{1}{\sqrt{n}}}{\sqrt{1 + \frac{2}{n} \pm \frac{Z}{\sqrt{n}}}} \\ &=& \left( Z \pm \frac{1}{\sqrt{n}} \right) \left( 1 - \frac{1}{2} \left(\frac{2}{n} \pm \frac{Z}{\sqrt{n}} \right) + \frac{3}{8} \left(\frac{2}{n} \pm \frac{Z}{\sqrt{n}} \right)^2 + \mathcal{o}\left(\frac{1}{n} \right) \right) \\ &=& \left( Z \pm \frac{1}{\sqrt{n}} \right) \left( 1 \mp \frac{Z}{2 \sqrt{n}} + \frac{3 Z^2}{8 n} + \mathcal{o}\left(\frac{1}{n} \right) \right) \\ &=& Z \mp \frac{Z^2-2}{2 \sqrt{n}} + \frac{3 Z (Z^2-4)}{8n} + \mathcal{o}\left(\frac{1}{n} \right) \end{eqnarray} $$
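For what it's worth, the $1-\frac{3}{4n}$ prediction can be checked directly against the exact sum; here is a quick Python sketch (the function name is mine):

```python
import math

def exact_sum(n):
    """Exact value of 2^-(n+3) * sum_k ((n-2k-1)/sqrt(k+1) + (n-2k+1)/sqrt(n-k+1))^2 * C(n,k)."""
    total = 0.0
    for k in range(n + 1):
        term = ((n - 2*k - 1) / math.sqrt(k + 1)
                + (n - 2*k + 1) / math.sqrt(n - k + 1))
        total += term * term * math.comb(n, k)
    return total / 2.0**(n + 3)
```

Amusingly, at $n=1$ the exact value is already $\tfrac14 = 1-\tfrac{3}{4\cdot 1}$, and for large $n$ the agreement with $1-\frac{3}{4n}$ is very close.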
{ "language": "en", "url": "https://math.stackexchange.com/questions/146261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Relating Gamma and factorial function for non-integer values. We have $$\Gamma(n+1)=n!,\ \ \ \ \ \Gamma(n+2)=(n+1)!$$ for integers, so if $\Delta$ is some real value with $$0<\Delta<1,$$ then $$n!\ <\ \Gamma(n+1+\Delta)\ <\ (n+1)!,$$ because $\Gamma$ is monotone there and so there is another number $f$ with $$0<f<1,$$ such that $$\Gamma(n+1+\Delta)=(1-f)\times n!+f\times(n+1)!.$$ How can we make this more precise? Can we find $f(\Delta)$? Or if we know the value $\Delta$, which will usually be the case, what $f$ will be a good approximation?
Well, $\Gamma(1) = \Gamma(2) = 1$, but $\Gamma(\frac{3}{2}) = \frac{\sqrt{\pi}}{2} <1$, so presumably you need $n>1$. And $f(n, \Delta) = \frac{\Gamma(n+1+\Delta)-\Gamma(n+1)}{\Gamma(n+2)-\Gamma(n+1)}$.
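A quick numeric illustration with Python's `math.gamma` (the helper name `f_weight` is mine):

```python
import math

def f_weight(n, delta):
    """The weight f with Gamma(n+1+delta) = (1-f)*n! + f*(n+1)!."""
    g = math.gamma
    return (g(n + 1 + delta) - g(n + 1)) / (g(n + 2) - g(n + 1))
```

For example, $f(5, \tfrac12)$ lies strictly between $0$ and $1$, and reconstructing $\Gamma(6.5)$ from the weighted combination of $5!$ and $6!$ recovers the correct value.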
{ "language": "en", "url": "https://math.stackexchange.com/questions/146336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What's the difference between $\mathbb{Q}[\sqrt{-d}]$ and $\mathbb{Q}(\sqrt{-d})$? Sorry to ask this, I know it's not really a maths question but a definition question, but Googling didn't help. When asked to show that elements in each are irreducible, is it the same?
In general, if you have a field extension $L/K$, and $\alpha\in L$, we define $K[\alpha]$ to be the subring of $L$ generated by $K$ and $\alpha$, whereas we define $K(\alpha)$ to be the subfield of $L$ generated by $K$ and $\alpha$. Now, it sometimes happens that $K[\alpha]$ is already a field, in which case we have $K[\alpha]=K(\alpha)$. This is what happens with $\mathbb{Q}[\sqrt{-d}]$ and $\mathbb{Q}(\sqrt{-d})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Determine the conditional probability mass function of the size of a randomly chosen family containing 2 girls. Suppose that 15 percent of the families in a certain community have no children, 20 percent have 1, 35 percent have 2, and 30 percent have 3 children; suppose further that each child is equally likely (and independently) to be a boy or a girl. If a family is chosen at random from this community, let B be the number of boys and G the number of girls; determine the conditional probability mass function of the size of a randomly chosen family containing 2 girls. My attempt There are exactly three ways this can happen: 1) the family has 2 children, both girls 2) the family has 3 children: 2 girls and 1 boy 3) the family has 3 children, all girls The first one is pretty simple. Given that you are going to "select" exactly two children, find the probability that they are BOTH girls (it's a coin flip, so p = 50% = 0.5): $0.5^2 = 0.25$ So the probability that the family has exactly 2 girls is the probability that the family has exactly two children times the probability that those two children will be girls: $\frac{1}{4} \cdot 35\% = 8.75\%$ Now find the probability that, given the family has exactly 3 children, exactly two are girls. Now you flip 3 times but only need to "win" twice; this is a binomial experiment. There are 3 choose 2 = 3 ways to have exactly two girls: 1st, 2nd, or 3rd is a boy... interestingly the probability of having any particular permutation is just $0.5^3 = 1/8$ (because it's still $0.5 \times 0.5$ for two girls, then $0.5$ for one boy). So the chance of exactly 2 girls is: $\frac{3}{8}$ Now find the probability for having exactly 3 girls... that's easy, there's only one way, you just have all 3 girls, probability is just $\frac{1}{8}$. Now, add these up $\frac{3}{8} + \frac{1}{8} = \frac{4}{8} = \frac{1}{2}$ So now use the percent of families with exactly 3 children to find this portion of the probability: $\frac{1}{2} \cdot 30\% = 15\%$ Hence, add the two probabilities... 
Here it is in full detail: $$\begin{eqnarray}\mathbb{P}(\text{contains 2 girls}) &=& \mathbb{P}(\text{2 children}) \times \mathbb{P}(\text{2 girls} \mid \text{2 children}) + \\ &\phantom{+=}& \mathbb{P}(\text{3 children}) \times \mathbb{P}(\text{2 or 3 girls} \mid \text{3 children}) \end{eqnarray}$$ $$\frac{1}{4}\times 35\% + 30\% \times \left(\frac{3}{8} +\frac{1}{8}\right) = 8.75\% + 15\% = 23.75\%$$ Is my attempt correct?
The problem statement is somewhat ambiguous. Either we want to find (i) the (conditional) probability distribution of the family size, given that there are at least $2$ girls, or (ii) the (conditional) probability distribution of the family size, given that there are exactly $2$ girls. The phrase "containing $2$ girls" can be interpreted either way, with some tilting towards (i) as the more likely. We calculate the probabilities for (i). The probability that there are at least $2$ girls is $(0.35)(1/2)^2 +(0.30)(1/2)$. The second part of the sum is because if the family has $3$ children, the probability that girls will outnumber boys is, by symmetry, equal to $1/2$. You computed this probability correctly. That is not what you are being asked to find, though it is a very useful part of the process. Given that there are at least $2$ girls, the family size is either $2$ or $3$. We want to find the conditional probability that the family size is $2$. (Then we will be able to find quickly the conditional probability that the family size is $3$.) Let $G$ be the event "there are at least $2$ girls" and let $S_2$ be the event "the family size is $2$." We want $P(S_2|G)$. By a formula that may be familiar, we have $$P(S_2|G)P(G)=P(S_2\cap G)=P(G|S_2)P(S_2).$$ Now we are essentially finished. We know $P(G)$, because we computed it above. We know $P(G|S_2)$, it is $(1/2)^2$. And we know $P(S_2)$, we were told what it is in the statement of the problem. To solve the problem under the assumption (ii), the procedure is almost the same, but the probabilities are different. For example, the probability that a family has exactly $2$ girls is $(0.35)(1/2)^2+(0.30)(3/8)$.
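To make interpretation (i) concrete, here is a short exact enumeration in Python (using `fractions` to avoid rounding; the variable names are my own):

```python
from itertools import product
from fractions import Fraction

# family-size distribution from the problem statement
size_prob = {0: Fraction(15, 100), 1: Fraction(20, 100),
             2: Fraction(35, 100), 3: Fraction(30, 100)}

# joint[s] = P(family size is s AND at least 2 girls)
joint = {}
for s, ps in size_prob.items():
    hit = Fraction(0)
    for genders in product("BG", repeat=s):     # each gender pattern has probability (1/2)^s
        if genders.count("G") >= 2:
            hit += Fraction(1, 2**s)
    joint[s] = ps * hit

p_geq2 = sum(joint.values())                    # P(at least 2 girls) = 19/80
cond = {s: joint[s] / p_geq2 for s in joint if joint[s] > 0}
```

This yields $P(S_2\mid G)=\frac{7}{19}$ and the conditional probability of size $3$ as $\frac{12}{19}$, matching the formula approach above.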
{ "language": "en", "url": "https://math.stackexchange.com/questions/146486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $10\frac{\exp(\pi)-\log 3}{\log 2}$ almost an integer? I read that $$10\frac{\exp(\pi)-\log 3}{\log 2} =318.000000033252\dots \approx 318$$ Is this simply a coincidence or can this somehow be explained?
Here's one answer. The expression you wrote contains three binary operations and four unary operations. This becomes more obvious when you view the parse tree:

       (*)
        |
    ---------
    |       |
    id     (/)
    |       |
    10  ---------
        |       |
       (-)     log
        |       |
     -------    2
     |     |
    exp   log
     |     |
     pi    3

We can count how many similar expressions there are, and see what the probability that one of them is that close to an integer is. This will give us an idea of whether the result is due to chance or not. To be concrete, we restrict ourselves to the four binary operations $+$, $\times$, $-$ and $\div$ in any combination, and say that the leaves of the trees can be any of the numbers $1,2,\dots,10$ or $\pi$, optionally combined with one of the operators $\exp$, $\log$ or $\mathrm{id}$ (the identity function). We'll discount the expressions that are obviously integers, such as $1+(2+(3+4))$, but that won't be very many expressions compared to the total. Then it becomes a simple counting argument. Every expression is represented as a binary tree with values at the leaves. There are 33 possible leaf values (11 numbers $\times$ 3 operators) and 4 possible values at each node. There are three nodes in the tree and 4 leaves, and there are 5 possible binary trees with three nodes. Therefore the total number of expressions is $$4^3 \times 33^4 \times 5 \approx 3.8 \times 10^8$$ Therefore, to a very rough approximation we can say that we would expect there to be one expression that was within $$1/(3.8\times 10^8) \approx 2.6\times 10^{-9}$$ of an integer. Since your example is only within $3\times 10^{-8}$ of an integer, I feel fairly confident in saying that it is due to pure chance (or someone with a lot of time on their hands) and there isn't necessarily a deeper explanation for it.
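As a quick double-check of the quoted value in plain Python floats (double precision is plenty at this scale):

```python
import math

value = 10 * (math.exp(math.pi) - math.log(3)) / math.log(2)
nearest = round(value)
gap = abs(value - nearest)
print(nearest, gap)  # nearest integer is 318; the gap is about 3.3e-8
```

So the expression really is within about $3\times 10^{-8}$ of $318$, as claimed, but not closer.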
{ "language": "en", "url": "https://math.stackexchange.com/questions/146568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
How do I show that every group of order 90 is not simple? Can someone help me show that every group of order $90$ is not simple?
This is a very, very late answer, but I'll post it anyway for people who are learning from this site (like myself). First note that $90=2\times 3^2\times 5$ is of the form $pq^2r$ with three distinct primes (like $60=2^2\times 3\times 5$). This case is therefore a bit harder than the common exam/homework problems, where the orders have the form $p, p^2, p^3, pq, pq^2, pqr.$ Assume a group $G$ of order $90$ is simple. Using the Sylow theorems, with standard notation, it is not difficult to prove that $n_3=10$ and $n_5=6.$ So we have $6\times(5-1)=24$ elements of order $5$, and if the $3$-Sylow subgroups intersect trivially then we have $10\times(9-1)=80$ elements of order $3$ or $9$; but this contradicts the order of the group, as $24+80\gt 90.$ Therefore two $3$-Sylow subgroups must have a non-trivial intersection, so we can choose $H, K\in \mathrm{Syl}_3(G)$ with $|H\cap K|=3$, and then $$|HK|=\dfrac{|H||K|}{|H\cap K|}=27.$$ Further, $[H :H\cap K]=[K:H\cap K]=3$ is the smallest prime factor of both $|H|$ and $|K|.$ Therefore $H\cap K$ is normal in both $H$ and $K.$ Now let us look at the normalizer $N=N_G(H\cap K)$ of the intersection in $G.$ It is clear that $HK\subset N$, and therefore $|N|$

* is at least $27$,
* divides $90$, and
* is divisible by $9.$

Hence we have only two possibilities: $|N|\in\{45, 90\}.$ If $N=G,$ then $H\cap K$ is normal in $G.$ Otherwise $[G:N]=2$, and then $N$ is normal in $G.$ Either way we have a contradiction with our assumption, and hence $G$ cannot be simple.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Basic angle geometry question I've faced a question that needed to find angle $\gamma$ as a part of it and the solution came from $\gamma= \beta - \alpha$. How did the book arrive to such conclusion, and does this apply for every similar case?
The angle "next to" $\beta$ (supplementary to $\beta$) is $180^\circ-\beta$. The sum of the angles of a triangle is $180^\circ$, so $$\alpha+(180^\circ-\beta)+\gamma=180^\circ.$$ Remark: This is an often-used result. Usually it is stated as follows. The external angle ($\beta$) at a vertex is the sum of the internal angles at the other two vertices. So $\alpha+\gamma=\beta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Procedure for evaluating the hypergeometric series $_2F_1\left\{\frac{v+2}{2},\frac{v+3}{2};v+1;z\right\}$ I'm trying to work out the procedure to get the following hypergeometric series into a simpler form, for all positive integers $v$: $$ _2F_1\left\{\frac{v+2}{2},\frac{v+3}{2};v+1;z\right\}$$ For example, plugging this into Wolfram Alpha gives for $v$ = 1, $$\frac{1}{(1-z)^{3/2}}$$ for $v$ = 2, $$\frac{4 (2 \sqrt{1-z} \,(z-1)-3 z+2)}{3 \sqrt{1-z}\, (z-1) z^2}$$ for $v$ = 3, $$-\frac{2 (3 z^2+4 (2 \sqrt{1-z}-3) z-8 \sqrt{1-z}+8)}{(1-z)^{3/2} z^3}$$ and so on. I'm guessing a transformation is repeatedly applied until a terminating form of the hypergeometric series is obtained. For $v$ = 1, applying Euler's transformation, $$_2F_1 (a,b;c;z) = (1-z)^{c-a-b}{}_2F_1 (c-a, c-b;c ; z)$$ gives the correct form; however, I can't work out what is used for $v$ = 2 and higher.
Answering my own question with this post: Tables of Hypergeometric Functions There is a link to a paper that describes a method for performing the transformations. The method is quite complicated and not worth rewriting here. Apparently the python package sympy has a function "hyperexpand" that can work it out. Update: Thanks again to Antonio for the hint to start from a known form. I've worked out the process: Substituting $a = v/2$ we get $$ _2F_1\left\{\frac{v+2}{2},\frac{v+3}{2};v+1;z\right\} = \,_2F_1\left\{a+1,a+\frac{3}{2};2a+1;z\right\} $$ Since [1], $$ _2F_1\left\{a,a+\frac{1}{2};2a+1;z\right\} = 2^{2a}(1 + \sqrt{1-z})^{-2a} $$ and [2], $$ _2F_1\{a_1 + 1,a_2;b;z\} = \left(\frac{z}{a_1}\frac{d}{dz} + 1\right) \,_2F_1\{a_1, a_2;b;z\}, $$ $$ _2F_1\{a_1,a_2+1;b;z\} = \left(\frac{z}{a_2}\frac{d}{dz} + 1\right) \,_2F_1\{a_1, a_2;b;z\}, $$ then, $$ _2F_1\left\{a+1,a+\frac{3}{2};2a+1;z\right\} = \left(\frac{z}{a_1}\frac{d}{dz} + 1\right)\left(\frac{z}{a_2}\frac{d}{dz} + 1\right)\left(2^{2a}(1 + \sqrt{1-z})^{-2a}\right).$$ The rest follows by doing the differentiation and substituting back $v/2 = a$, and gives the same result as described in Antonio's answer.
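The quoted closed forms can also be verified numerically with a truncated Gauss series (a throwaway Python sketch; `hyp2f1_series` is my own helper, valid for $|z|<1$):

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    """Truncated series sum_k (a)_k (b)_k / ((c)_k k!) z^k for |z| < 1."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return s

z = 0.3
# v = 1: the series should match (1-z)^(-3/2)
v1 = hyp2f1_series(1.5, 2.0, 2.0, z)
# v = 2: compare with the closed form quoted in the question
v2 = hyp2f1_series(2.0, 2.5, 3.0, z)
closed_v2 = (4 * (2 * math.sqrt(1 - z) * (z - 1) - 3 * z + 2)
             / (3 * math.sqrt(1 - z) * (z - 1) * z**2))
```

Both the $v=1$ and $v=2$ closed forms agree with the series to floating-point accuracy at $z=0.3$.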
{ "language": "en", "url": "https://math.stackexchange.com/questions/146885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
uniform continuity of linear functions I've just proved the fact that every linear function on a finite-dimensional normed vector space is uniformly continuous. Because, let $T:U\to V$ be a linear function on $U$ with basis $(u_1,u_2,\dots,u_n)$, and suppose that $\|\cdot\|_u$, $\|\cdot\|_v$ are the respective norms associated with $U$ and $V$. Setting $M= \max(\|Tu_1\|_v,\dots,\|Tu_n\|_v)$, we have $$\|Tu\|_v=\|a_1Tu_1 + a_2Tu_2 +\dots+ a_nTu_n\|_v \le |a_1|\cdot\|Tu_1\|_v+\dots+|a_n|\cdot\|Tu_n\|_v \le M\|u\|_1,$$ where $\|u\|_1=|a_1|+\dots+|a_n|$. Since all norms on $U$ are equivalent, we have $\|Tu\|_v\le C\|u\|_u$ for some $C>0$. Thus, $\|Tx-Ty\| \le C\|x-y\|$ for all $x$, $y$ in $U$, thereby satisfying Lipschitz' condition. My questions are:

* Is it true that a linear function from one vector space to another is always continuous? (finite dimension not assumed)
* If so, or else, is a continuous linear map always uniformly continuous?
You don't need the Axiom of Choice. Take the space of polynomials in one variable, with the norm $||p|| = \sup_{t\in [0,1]}|p(t)|$. Then take the sequence of polynomials $p_n$ defined by $p_n(t) = \left(\frac{t}{2}\right)^n$. Clearly $p_n \rightarrow 0$ in this norm, but the linear functional $\phi(p) = p(4)$ is unbounded (in fact, $||p_n|| = \frac{1}{2^n}$ while $\phi(p_n) = 2^n$).
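In concrete terms (a small Python illustration of my own; a grid maximum stands in for the sup, and it includes the endpoint $t=1$ where the maximum is attained):

```python
def p(n, t):
    """The polynomial p_n(t) = (t/2)^n."""
    return (t / 2) ** n

def sup_norm(n, points=1001):
    # sup over [0,1]; (t/2)^n is increasing, so the max sits at t = 1
    return max(p(n, i / (points - 1)) for i in range(points))

def phi(n):
    return p(n, 4)  # the functional phi(p) = p(4) applied to p_n
```

So the norms shrink like $2^{-n}$ while the functional values blow up like $2^n$, exhibiting the unboundedness.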
{ "language": "en", "url": "https://math.stackexchange.com/questions/146938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why is solving non-linear recurrence relations "hopeless"? I came across a non-linear recurrence relation I want to solve, and most of the places I look for help will say things like "it's hopeless to solve non-linear recurrence relations in general." Is there a rigorous reason or an illustrative example as to why this is the case? It would seem to me that the correct response would be "we just don't know how to solve them," or "there is no solution using elementary functions," but there might be a solution in the form of, say, an infinite product or a power series or something. Just for completion, the recurrence relation I'm looking at is (slightly more than just non-linear, and this is a simplified version): $p_n = a_n b_n\\ a_n = a_{n-1} + c \\ b_n = b_{n-1} + d$ And $a_0 > 0, b_0 > 0, c,d$ fixed constants
But polynomial recurrences can be mapped to linear recurrences. For example, for $a_{n+1} = a_n^2 + c$, let $b_{m,n} = a_n^m$. Then by the binomial formula: $$b_{m,n} = (a_{n-1}^2+c)^m = \sum_{j = 0}^mc^{m-j}{m \choose j}a_{n-1}^{2j} = \sum_{j = 0}^mc^{m-j}{m\choose j}b_{2j,n-1}$$ and even though it's a two-variable linear recurrence, you can use a lot of the same types of strategies to solve it as if it were one variable (indeed, for any number of variables you can use the same approaches as if it were one variable). So what's the big deal?
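Here is a small Python sketch of exactly this linearization (the sample values $a_0=c=1$ are mine), checking $b_{1,n}=a_n$ against direct iteration:

```python
from functools import lru_cache
from math import comb

a0, c = 1, 1  # sample data for a_{n+1} = a_n**2 + c

@lru_cache(maxsize=None)
def b(m, n):
    """b(m, n) = a_n**m computed via the linear (in b) recurrence above."""
    if n == 0:
        return a0 ** m
    return sum(c ** (m - j) * comb(m, j) * b(2 * j, n - 1)
               for j in range(m + 1))

def a_direct(n):
    """Direct iteration of the quadratic recurrence, for comparison."""
    a = a0
    for _ in range(n):
        a = a * a + c
    return a
```

With $a_0=c=1$ the orbit is $1, 2, 5, 26, 677, \dots$, and the linearized recurrence reproduces it.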
{ "language": "en", "url": "https://math.stackexchange.com/questions/147075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Problem in Silverman/Tate Rational Points on Elliptic Curves I'm trying to figure out how to solve the following problem the "right" way. This is problem 1.2 on page 32: Let $C$ be the conic given by the equation $$F(x,y)=ax^2+bxy+cy^2+dx+ey+f = 0$$ Let $L$ be the line $y=\alpha x+\beta$ with $\alpha\neq 0$. Show that the intersection $L\cap C$ has zero, one or two points. Determine the conditions on the coefficients which ensure that the intersection $L\cap C$ consists of exactly one point. What is the geometric significance of these conditions? The part about the number of points is trivial, since we just substitute the expression for $y$ and we are left with a quadratic in $x$. After substituting, the equation is $$ax^2+b\alpha x^2+b\beta x + c\alpha^2x^2 + 2c\alpha\beta x + c\beta^2+e\alpha x+dx+e\beta+f = 0.$$ The coefficient of $x^2$ is simply $c\alpha^2+b\alpha+a$, which has discriminant $b^2-4ac$, so if $$\alpha = \frac{-b\pm \sqrt{b^2-4ac}}{2c},$$ then the equation becomes a linear one, so there's obviously only one solution. The problem is when $\alpha$ is not of this form. In that case it seems like one would have to compute the discriminant for the quadratic. The second condition would then be that the discriminant is zero. I'm sort of curious what the problem is going after, since the discriminant being zero is clearly a condition on the coefficients. However, it's a pretty terrible expression, so I don't know how one could find any "geometric significance" in that expression? Any ideas?
You want one point of intersection but with multiplicity 2. That is, the line and conic are tangent at that point. The way this works out in the algebra is that the quadratic equation you wrote should have both roots equal. The general line will meet the conic in two points. Some lines will not meet the conic (over the ground field), so the quadratic will not have roots in that field but in an extension, for example if the field is the real numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Show either $f$ is constant or $g(z)=0$ for all $z$ in the region Let $G$ be a region, and let $f$ and $g$ be holomorphic functions on $G$. If $\bar{f}\cdot g$ is holomorphic, show that either $f$ is constant or $g(z)=0$ for all $z$ in $G$.
If $g(z)$ is not identically zero, you can find an open set $U$ on which $g(z)$ is nonzero. Thus on $U$, $\bar{f} = \bar{f} g \times {1 \over g}$ is analytic. So ${1 \over 2}(f + \bar{f})$ = $Re(f)$ and $Im(f) = {1 \over 2i}(f - \bar{f})$ are analytic functions on $U$. Applying the Cauchy-Riemann equations to these two functions gives that $Re(f)$ and $Im(f)$ are constant on $U$, so the same is true for $f$. Since $f$ is equal to a constant on an open subset of your region, it's equal to that constant everywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Which differential coefficient is correct and why? Given $$\cot{\theta} = \frac{\sqrt{1+\sin(x)}+\sqrt{1-\sin(x)}}{\sqrt{1+\sin(x)}-\sqrt{1-\sin(x)}}$$ I have to find its differential coefficient w.r.t $x$ i.e. $\dfrac{d \theta }{dx}$. Now I can find it in the following two ways: (1). When I write $\sqrt{1-\sin(x)}=\cos(x/2)-\sin(x/2)$, I get $$\cot(\theta) = \frac{\cos(x/2) + \sin(x/2) + \cos(x/2) - \sin(x/2)}{\cos(x/2) + \sin(x/2) - \cos(x/2) + \sin(x/2)} = \cot(x/2)$$ $$\theta=x/2 \implies \frac{d \theta}{dx} = \frac12$$ (2). When I write $\sqrt{1-\sin(x)}=\sin(x/2)-\cos(x/2)$, I get $$\cot(\theta) = \frac{\cos(x/2) + \sin(x/2) - \cos(x/2) + \sin(x/2)}{\cos(x/2) + \sin(x/2) + \cos(x/2) - \sin(x/2)} = \tan(x/2)$$ $$\theta=\pi/2 - x/2 \implies \frac{d \theta}{dx} = -\frac12$$ Which of the above two answers is correct and why? Please help me know it. Thanks.
Neither. When you deal in half-angle formulas, you have to be careful about sign. Notice that $\sqrt{1-\sin x}$ is always positive, while $\sin (x/2)-\cos(x/2)$ and $\cos(x/2)-\sin(x/2)$ may be positive or negative depending on $x$. The correct statement is $\sqrt{1-\sin x}=|\cos(x/2)-\sin(x/2)|$; from there, you'll need to do some case analysis to compute the derivative you want.
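A numerical check makes the case analysis visible: recovering $\theta$ with `atan2`, a finite difference gives $+\tfrac12$ where $\cos(x/2)>\sin(x/2)$ (e.g. $x=0.5$) and $-\tfrac12$ where the sign flips (e.g. $x=2$). A quick Python sketch (my own helpers):

```python
import math

def theta(x):
    """theta recovered from cot(theta) = num/den, i.e. tan(theta) = den/num."""
    num = math.sqrt(1 + math.sin(x)) + math.sqrt(1 - math.sin(x))
    den = math.sqrt(1 + math.sin(x)) - math.sqrt(1 - math.sin(x))
    return math.atan2(den, num)

def dtheta(x, h=1e-6):
    """Central-difference approximation to d(theta)/dx."""
    return (theta(x + h) - theta(x - h)) / (2 * h)
```

So neither $+\tfrac12$ nor $-\tfrac12$ is correct globally; which one you get depends on the sign of $\cos(x/2)-\sin(x/2)$.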
{ "language": "en", "url": "https://math.stackexchange.com/questions/147258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Weierstrass Factorization Theorem Are there any generalisations of the Weierstrass Factorization Theorem, and if so where can I find information on them? I'm trying to investigate infinite products of the form $$\prod_{k=1}^\infty f(z)^{k^a}e^{g(z)},$$ where $g\in\mathbb{Z}[z]$ and $a\in\mathbb{N}$.
A vast generalization of Weierstraß's theorem is to Riemann surfaces. Florack, inspired by methods due to Behnke-Stein, proved the following in 1948: Let $X$ be a non-compact Riemann surface. Let $D$ be a closed discrete set in $X$ and to each $d\in D$ attach a complex number $c_d$. Then there exists a holomorphic function $f\in \mathcal O(X)$ defined on all of $X$ such that $f(d)=c_d.$ One may think of $f$ as holomorphically interpolating some discrete data. This result immediately implies that $X$ is a Stein manifold, a concept of fundamental importance in the theory of holomorphic manifolds: Stein manifolds are the analogues of affine varieties in algebraic geometry. A complete proof is in Theorem 26.7 of Forster's awesome Lectures on Riemann Surfaces. The special case where $X$ is an open subset of $\mathbb C$ is analyzed in John's answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Other functional equations for $\zeta(s)$? For the Riemann zeta function, we know of the standard functional equation that relates $\zeta(s)$ and $\zeta(1-s)$. I wanted to know whether there are functional equations that relates $\zeta(s)$ and $\zeta(s-1)$? EDIT: My main motivation behind asking this question is I have found such an equation, but I do not know whether such an equation exists in literature. Also, I do not want to appear as if I am promoting my formula here, but rather I am more interested in the works that have been done in such directions. As per @lhf's request here is my formula, for $\Re(s) > 1$ $$ \zeta(s) + \frac{2}{s-1}\zeta(s-1) = \frac{s}{s-2} - s\int_1^\infty \frac{\{x\}^2}{x^{s+1}} dx$$ where $\{x\}$ is the fractional part of x.
Yes, $\zeta(\overline{s})=\overline{\zeta(s)}$ for $s \in \mathbb C-\{1\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Change of Basis Calculation I've just been looking through my Linear Algebra notes recently, and while revising the topic of change of basis matrices I've been trying something: "Suppose that our coordinates are $x$ in the standard basis and $y$ in a different basis, so that $x = Fy$, where $F$ is our change of basis matrix, then any matrix $A$ acting on the $x$ variables by taking $x$ to $Ax$ is represented in $y$ variables as: $F^{-1}AF$ " Now, I've attempted to prove the above; is my intuition right? Proof: We want to write the matrix $A$ in terms of $y$ co-ordinates. a) $Fy$ turns our $y$ co-ordinates into $x$ co-ordinates. b) premultiply by $A$, resulting in $AFy$, which is performing our transformation on $x$ co-ordinates c) Now, to convert back into $y$ co-ordinates, premultiply by $F^{-1}$, resulting in $F^{-1}AFy$ d) We see that when we multiply $y$ by $F^{-1}AF$ we perform the equivalent of multiplying $x$ by $A$ to obtain $Ax$, thus proved. Also, just to check, are the entries in the matrix $F^{-1}AF$ still written in terms of the standard basis? Thanks.
Your approach seems correct. I don't know if the following helps, but anyway: you have a vector space $V$ (over, say, the complex numbers) with two bases $E$ and $D$. That $F$ is a change of basis matrix means that if $y = (y_i)$ is a column vector written with respect to the basis $E$, then you get the coordinates with respect to $D$ by $x = Fy$. Now you have a linear transformation $T: V \to V$. With respect to each basis, this transformation is given by a matrix, say $A_E$ and $A_D$. So if $y = (y_i)$ (wrt. the basis $E$) then $Ty = A_Ey$, and the result will be the coordinates in the basis $E$ (and likewise for the basis $D$ using $A_D$). So given a vector $y = (y_i)$ written in the basis $E$, you could first transform the coordinates to the basis $D$, then use the matrix $A_D$, and then transform the coordinates back to the basis $E$. So you get $A_Ey = F^{-1}A_DFy$. One can actually write out all of this with coordinates (if you have never done so, I recommend that you do). You would start with the vector $v$, write it as a linear combination of the basis $E$: $v = y_1e_1 + \dots + y_ne_n$, and continue from there...
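If it helps to see steps (a)–(d) numerically, here is a tiny Python check with a $2\times 2$ example (the matrices are chosen arbitrarily by me): converting $y\to x$, applying $A$, and converting back agrees with applying $F^{-1}AF$ directly.

```python
def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    """2x2 matrix times column vector."""
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

F = [[2.0, 1.0], [1.0, 1.0]]   # columns: the new basis written in the standard basis
A = [[0.0, -1.0], [1.0, 0.0]]  # rotation by 90 degrees, acting on x-coordinates
B = matmul(inv2(F), matmul(A, F))  # the same map expressed in y-coordinates

y = [3.0, -2.0]
step_by_step = matvec(inv2(F), matvec(A, matvec(F, y)))  # (a)-(c) of the proof
direct = matvec(B, y)
```

Both paths give the same vector, which is the content of the proof.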
{ "language": "en", "url": "https://math.stackexchange.com/questions/147441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Showing the lebesgue measure is sigma finite on sets I want to show that the Lebesgue measure is $\sigma$-finite on the following: * $C_\mathrm{open} = \{ A \subset \mathbb{R} : A\,\,\mathrm{open}\}$ * $C_\mathrm{closed} = \{ B \subset\mathbb{R}: B\,\,\mathrm{closed}\}$. Usually, for example for $C = \{(a,b): -\infty\leq a\leq b\leq \infty \}$, I took the interval $(-i,i)$ and used infinite unions etc. to write it as half-open so that I could take the Lebesgue measure of it, and then saw that the Lebesgue measure was $2i$, which is finite as $i<\infty$, but in this case I don't know how to take elements of these sets.
Well $(-i,i)$ is open, so what you did is right: $\mathbb R = \bigcup_{i \in \mathbb N} (-i,i)$. Similarly, $\mathbb R = \bigcup_{i \in \mathbb Z} [i, i+ 1]$. If this is not what you asked, let me know and I'll adapt my answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that the order of an element in the group N is the lcm(order of the element in N's factors p and q) How would you prove that $$\operatorname{ord}_N(\alpha) = \operatorname{lcm}(\operatorname{ord}_p(\alpha),\operatorname{ord}_q(\alpha))$$ where $N=pq$ ($p$ and $q$ are distinct primes) and $\alpha \in \mathbb{Z}^*_N$ I've got this: The order of an element $\alpha$ of a group is the smallest positive integer $m$ such that $\alpha^m = e$ where $e$ denotes the identity element. And I guess that the right side has to be the $\operatorname{lcm}()$ of the orders from $p$ and $q$ because they are relatively prime to each other. But I can't put it together, any help would be appreciated!
Hint $\rm\ \ pq\:|\:a^n\!-\!1\iff p,q\:|\:a^n\!-\!1\iff ord_p a, ord_q a\:|\:n\iff lcm(ord_p a, ord_q a)\:|\: n$
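To make the hint concrete, here is a brute-force check in Python (my own sketch; $p=5$, $q=7$ are arbitrary example primes):

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a modulo n, for gcd(a, n) = 1."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 5, 7
N = p * q
for a in range(1, N):
    if gcd(a, N) == 1:
        # ord_N(a) = lcm(ord_p(a), ord_q(a)) for every a coprime to N
        assert order(a, N) == lcm(order(a, p), order(a, q))
```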
{ "language": "en", "url": "https://math.stackexchange.com/questions/147567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If a group satisfies $x^3=1$ for all $x$, is it necessarily abelian? I know that any group satisfying $x^2=1$ for all $x$ is abelian. Is the same true if $x^3=1$? I don't think it is, but I can't find a basic counterexample.
For any odd prime $p$, there is a nonabelian group $H_p$ of order $p^3$ and such that $x^p = 1$ for all $x \in H_p$: the Heisenberg group modulo p. Added: As Dylan Moreland points out, this expository note of Keith Conrad gives a very nice discussion of the groups of order $p^3$, including the Heisenberg groups $H_p$.
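For concreteness, here is a small Python check (my own sketch) of the case $p=3$, realizing $H_3$ as the $3\times 3$ upper unitriangular matrices over $\mathbb{Z}/3$:

```python
import itertools
import numpy as np

p = 3

def mat(a, b, c):
    # The element [[1,a,b],[0,1,c],[0,0,1]] of the Heisenberg group mod p.
    return np.array([[1, a, b], [0, 1, c], [0, 0, 1]], dtype=int)

def mul(X, Y):
    return (X @ Y) % p

I = np.eye(3, dtype=int)
elements = [mat(a, b, c) for a, b, c in itertools.product(range(p), repeat=3)]

# Every element satisfies x^3 = 1 ...
assert all(np.array_equal(mul(mul(X, X), X), I) for X in elements)

# ... yet the group is nonabelian:
X, Y = mat(1, 0, 0), mat(0, 0, 1)
assert not np.array_equal(mul(X, Y), mul(Y, X))
```

(The identity $x^3=1$ here is just $(I+N)^3 = I + 3N + 3N^2 + N^3 = I$ mod $3$, since the strictly upper triangular part $N$ satisfies $N^3=0$.)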
{ "language": "en", "url": "https://math.stackexchange.com/questions/147642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Find the intersection of these two planes. Find the intersection of $8x + 8y +z = 35$ and $x = \left(\begin{array}{cc} 6\\ -2\\ 3\\ \end{array}\right) +$ $ \lambda_1 \left(\begin{array}{cc} -2\\ 1\\ 3\\ \end{array}\right) +$ $ \lambda_2 \left(\begin{array}{cc} 1\\ 1\\ -1\\ \end{array}\right) $ So, I have been trying this two different ways. One is to convert the vector form to Cartesian (the method I have shown below) and the other was to convert the provided Cartesian equation into a vector equation and try to find the equation of the line that way, but I was having some trouble with both methods. Converting to Cartesian method: normal = $ \left(\begin{array}{cc} -4\\ 1\\ -3\\ \end{array}\right) $ Cartesian of x $=-4x + y -3z = 35$ Solving simultaneously with $8x + 8y + z = 35$, I get the point $(7, 0, -21)$ to be on both planes, i.e., on the line of intersection. Then taking the cross of both normals, I get a parallel vector for the line of intersection to be $(25, -20, -40)$. So, I would have the vector equation of the line to be: $ \left(\begin{array}{cc} 7\\ 0\\ -21\\ \end{array}\right) +$ $\lambda \left(\begin{array}{cc} 25\\ -20\\ -40\\ \end{array}\right) $ But my provided answer is: $ \left(\begin{array}{cc} 6\\ -2\\ 3\\ \end{array}\right)+ $ $ \lambda \left(\begin{array}{cc} -5\\ 4\\ 8\\ \end{array}\right) $ I can see that the directional vector is the same, but why doesn't the provided answer's point satisfy the Cartesian equation I found? Also, how would I do this if I converted the original Cartesian equation into a vector equation? Would I just equate the two vector equations and solve using an augmented matrix? I tried it a few times but couldn't get a reasonable answer, perhaps I am just making simple errors, or is this not the correct method for vector form?
Note: In this case it might be quicker to "plug in": substituting the parametric form into the Cartesian equation gives $$ 8(6-2\lambda_1+\lambda_2)+8(-2+\lambda_1+\lambda_2)+(3+3\lambda_1-\lambda_2)=35, $$ which simplifies to $-5\lambda_1+15\lambda_2=0$, i.e. $\lambda_1=3\lambda_2$. Plugging that back into the vector equation of the plane gives $$ \left(\begin{array}{c} 6\\ -2\\ 3\\ \end{array}\right)+\lambda_2 \left(\begin{array}{c} -5\\ 4\\ 8\\ \end{array}\right), $$ which is exactly the provided answer. (As for your own attempt: with the point $(6,-2,3)$ and normal $(-4,1,-3)$, the Cartesian equation of the second plane is $-4x+y-3z=-35$, not $35$; that sign error shifted the plane, which is why the provided answer's point failed your equation even though your direction vector was correct.)
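A quick numerical check with NumPy (my own sketch): it verifies that the provided answer's line lies in both planes.

```python
import numpy as np

p0 = np.array([6.0, -2.0, 3.0])      # point from the provided answer
d = np.array([-5.0, 4.0, 8.0])       # direction of the claimed line

n1 = np.array([8.0, 8.0, 1.0])       # normal of 8x + 8y + z = 35
# Normal of the second plane: cross product of its two spanning vectors.
u, v = np.array([-2.0, 1.0, 3.0]), np.array([1.0, 1.0, -1.0])
n2 = np.cross(u, v)

for lam in (-2.0, 0.0, 1.0, 3.5):
    pt = p0 + lam * d
    assert np.isclose(n1 @ pt, 35.0)        # on the first plane
    assert np.isclose(n2 @ (pt - p0), 0.0)  # on the second plane

# The direction vector is parallel to both planes:
assert np.isclose(n1 @ d, 0.0) and np.isclose(n2 @ d, 0.0)
```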
{ "language": "en", "url": "https://math.stackexchange.com/questions/147698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Computing conditional probability out of joint probability If I am given a complete table for the joint probability $$P(A,B,C,D,E)$$ how can I compute an arbitrary conditional probability from it, for instance: $$P(A|B)$$
The short answer for your example is that you can compute $P(A,B)$ and $P(B)$ from the table (you have to sum out all the other variables for fixed A and B). Using these values you can compute $P(A|B)$.
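A sketch of the computation in NumPy (my own example; the joint table here is random, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy joint table P(A,B,C,D,E) over five binary variables,
# stored as a 2x2x2x2x2 array normalized to sum to 1.
joint = rng.random((2, 2, 2, 2, 2))
joint /= joint.sum()

# Sum out C, D, E to get P(A,B), then sum out A to get P(B).
p_ab = joint.sum(axis=(2, 3, 4))
p_b = p_ab.sum(axis=0)

# P(A|B) = P(A,B) / P(B); broadcasting divides each column by P(B=b).
p_a_given_b = p_ab / p_b

# Each column is a conditional distribution over A, so it sums to 1.
assert np.allclose(p_a_given_b.sum(axis=0), 1.0)
```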
{ "language": "en", "url": "https://math.stackexchange.com/questions/147831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $\int f=0$ implies $f=0$ a.e., given that $f$ is a nonnegative measurable function. I am trying to show that $\int f=0$ implies $f=0$ a.e., given that $f$ is a nonnegative measurable function. But search as I might, from what I have learned I have no clue how to solve the problem. That is why I brought it to this room in the hope that somebody will give me a hint. Thanks
Hint: let $A_n:=\{x : f(x)\geq 2^{-n}\}$. Write $0=\int f\,d\mu\geq \int_{A_n}f\,d\mu\geq 2^{-n}\mu(A_n)$. What about $\mu\left(\bigcup_{n\geq 1}A_n\right)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/147887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Why are zeros/roots (real) solutions to an equation of an n-degree polynomial? I can't really put a proper title on this one, but I seem to be missing one crucial point. Why do roots of a function like $f(x) = ax^2 + bx + c$ provide the solutions when $f(x) = 0$. What does that $ y = 0$ mean for the solutions, the intercept at the $x$ axis? Why aren't the solutions at $f(x) = 403045$ or some other arbitrary $n$? What makes the x-intercept special?
I believe this is another way of looking at it. In most cases, "roots" are a subset of all possible solutions (all the $x$, $y$ coordinates that satisfy the equation). If an equation has no real root, then there is no direct, algebraic way of solving for $x$ given the value $y=0$; but you can still solve for any $y$ (the function's output), given a value $x$. Consider $y = x^2 - 1$: the roots are $x=\pm 1$, obtained when $y=0$. However, for $y = x^2 + 1$ there is no real-number solution when $y=0$: you cannot solve for $x$ when the parabola does not intersect the $x$-axis. That's why setting the equation equal to $0$ is a useful way to flag that situation. The equation still has an infinite number of solution coordinates, obtained by solving for $y$ given any value of $x$; you just can't always go the other way around (solve for $x$, given $y$), since not every output value $y$ is attained by the function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 5 }
notation: 95%Ci ranges from A to B? What is the clearest way to state that a 95% CI has a lower bound of A and an upper bound of B. Most commonly, I see: $$95\%\textrm{CI}=[A,B]$$ But this seems to imply that the CI is a vector of length two. If I were speaking very precisely, I would say that the 95%CI ranges from A to B. Perhaps there is a more precise notation, for example: $$95\%\textrm{CI}\in[A,B]$$ ?
The closed interval $\{x : A \le x \le B\}$ from $A$ to $B$ is an interval and is denoted $[A,B]$. Therefore it makes sense to say that the interval is equal to $[A,B]$. The open interval $\{x : A < x < B\}$ from $A$ to $B$ is an interval and is denoted $(A,B)$. That looks even more like the standard notation for an ordered pair of numbers, but it's a quite different thing denoted by the same notation. Usually the context will make it clear which meaning is intended. It does not make sense at all to say the confidence interval is in $[A,B]$, i.e. $\mathrm{CI} \in [A,B]$. That the interval is in the interval---i.e. is a member of itself---is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Graph decomposition What is the smallest $n \in \mathbb{N}$ with $ n \geq5$ such that the edge set of the complete graph $K_n$ can be partitioned (decomposed) into edge-disjoint copies of $K_4$? I got a necessary condition for the decomposition: $12 \mid n(n-1)$ and $3\mid n-1$, which implies $n \geq 13$. But can $K_{13}$ indeed be decomposed into edge-disjoint copies of $K_4$?
Here's the $(13,4,1)$-BIBD from the designtheory.org database (here), where it is described as unique up to isomorphism. To check it's indeed a $(13,4,1)$-BIBD, we can run the following code in GAP:

S:= [ [ 1, 2, 3, 4 ], [ 1, 5, 6, 7 ], [ 1, 8, 9, 10 ], [ 1, 11, 12, 13 ], [ 2, 5, 8, 11 ], [ 2, 6, 9, 12 ], [ 2, 7, 10, 13 ], [ 3, 5, 9, 13 ], [ 3, 6, 10, 11 ], [ 3, 7, 8, 12 ], [ 4, 5, 10, 12 ], [ 4, 6, 8, 13 ], [ 4, 7, 9, 11 ] ];
A:=[];
for P in S do
  T:=Combinations(P,2);
  A:=Concatenation(A,T);
od;
Size(Set(A)) = Binomial(13,2);

which returns true. (Note that Size(A), duplicates included, is $13\cdot\binom{4}{2}=78=\binom{13}{2}$ by construction, so the test must be applied to Set(A): the equality then certifies that the $78$ pairs are pairwise distinct, i.e. that every edge of $K_{13}$ lies in exactly one block.)
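For readers without GAP, the same verification is a few lines of Python (my own sketch):

```python
from itertools import combinations

blocks = [
    [1, 2, 3, 4], [1, 5, 6, 7], [1, 8, 9, 10], [1, 11, 12, 13],
    [2, 5, 8, 11], [2, 6, 9, 12], [2, 7, 10, 13],
    [3, 5, 9, 13], [3, 6, 10, 11], [3, 7, 8, 12],
    [4, 5, 10, 12], [4, 6, 8, 13], [4, 7, 9, 11],
]

pairs = [frozenset(p) for block in blocks for p in combinations(block, 2)]

# 13 blocks x C(4,2) = 78 pairs counted with multiplicity ...
assert len(pairs) == 78
# ... and they are all distinct, so every edge of K_13 is covered exactly once.
assert len(set(pairs)) == 78 == 13 * 12 // 2
```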
{ "language": "en", "url": "https://math.stackexchange.com/questions/148079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If two elements in a ED have the same Euclidean norm, are they associates? Is it obvious that in a Euclidean Domain two elements $x$ and $y$ having the same Euclidean norm are associates? Can someone give me a proof of this?
While this is not true, what you seem to want is true (based on the comments): an element $x$ of a Euclidean domain is a unit if and only if $\nu(x)=\nu(1)$. To see this, note that $\nu(1)\leq\nu(z)$ for all $z\neq 0$ (since $\nu(1)\leq\nu(1z)$). If $x$ is a unit, then $\nu(x)\leq\nu(xx^{-1})=\nu(1)$ gives the equality. Conversely, if $\nu(x)=\nu(1)$, divide $1$ by $x$ to get $1 = xy+r$ with $r=0$ or $\nu(r)\lt\nu(x)=\nu(1)$. But $r\neq 0$ implies $\nu(r)\geq \nu(1)$, so we conclude that $r=0$, so $x$ is a unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Tensor product of two chain homotopic maps are again chain homotopic Let $C$, $C'$, $D$, $D'$ be chain complexes, $f$, $f'\colon C\to C'$ and $g$, $g'\colon D \to D'$ two pairs of homotopic chain maps. How to show $f \otimes g$ and $f' \otimes g' \colon C\otimes D\to C'\otimes D'$ are homotopic? I have tried many formulas for such a homotopy, but none of them worked.
Here is an approach that avoids some computations (really they're hidden under the hood and I'm pretty sure that if you followed the advice given in t.b.'s comment you would find the same thing). I'm implicitly using the enriched $\operatorname{Hom}(C,D)$ functor of chain complex, where a cycle in degree zero is a chain map $C \to D$, and a boundary of degree zero is a nullhomotopic chain map. Let $F = f - f'$ and $G = g - g'$. Then both $F$ and $G$ are null homotopic, i.e. there exist homotopies $h$ and $k$ such that $F = \partial(h)$ and $G = \partial(k)$ (where $\partial(h) = dh + hd$). We also know that $f$, $f'$, $g$, and $g'$ are chain maps, i.e. $\partial(f) = \partial(f') = 0$ and $\partial(g) = \partial(g') = 0$. What we want to do is express $f \otimes g - f' \otimes g'$ as the boundary of some homotopy. So let's write: $$\begin{align} f \otimes g - f' \otimes g' & = f \otimes g - f' \otimes g + f' \otimes g - f' \otimes g' \\ & = F \otimes g + f' \otimes G \\ & = \partial(h) \otimes g + f' \otimes \partial(k) \\ & = \partial(h \otimes g + f' \otimes k) \end{align}$$ Where I used the fact that $\partial(h \otimes g) = \partial(h) \otimes g - h \otimes \partial(g)$ and the second term is zero. So to conclude, if $h$ is the homotopy between $f$ and $f'$, and $k$ is the homotopy between $g$ and $g'$, the homotopy between $f \otimes g$ and $f' \otimes g'$ is given by $$c \otimes d \mapsto h(c) \otimes g(d) + (-1)^{|c|} f'(c) \otimes k(d),$$ where the sign comes from the Koszul rule of signs $$(f' \otimes k)(c \otimes d) = (-1)^{|k| \cdot |c|} f'(c) \otimes k(d) = (-1)^{|c|} f'(c) \otimes k(d).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/148229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Box with N two-color balls, randomly chosen, with and without returning Suppose there is a box with $N$ balls, $k$ white and $N-k$ black.

* If after choosing a ball it gets returned to the box, then: $$p(\text{white})=\frac{k}{N},\qquad p(\text{black}) = \frac{N-k}{N} = 1-p(\text{white})$$ — the probabilities of choosing one white (black) ball, at each moment, regardless of how many balls will be selected.
* What if, after choosing a ball, it is kept outside the box, and the process of selecting balls continues until all balls are selected? I am aware that this is a basic course question. I am hoping for an answer or reference, to get a level of certainty.
* A third method of selecting balls is possible: suppose there is a simple device – a metal "matrix" with $N$ holes, in the shape of the box. The matrix is put on top of the box, which is then turned upside down. Also, we somehow ensure that each hole passes through exactly one ball. Then:
  * The process is similar to case 2, because balls are not returned to the box after choosing.
  * The probability that a given cell of the matrix passes through a white (black) ball is the same as in case 1, the basic case.
  * This means that it is not the act of not returning the ball that influences the probabilities in case 2 – it is the act of receiving information about the chosen ball, which is thus missing (in case 2) from the box.
  * Without recording the information, are the probabilities in case 2 the same as in case 1?
Suppose that we do not return balls to the box. The plain probability that the $i$-th ball selected is white is $\frac{k}{N}$, exactly like in the case of returning the ball to the box. One way of seeing this is to number the balls, white from $1$ to $k$, black from $k+1$ to $N$. Imagine that we draw the balls, one at a time, until all the balls are gone. All permutations of the labels are equally likely, and the fraction of these permutations for which the $i$-th ball drawn is white is $\dfrac{k}{N}$. (It can take a while until this fact becomes "obvious"!) But in drawing one at a time, there are other probabilities that we can compute, for example the probability that the $3$rd ball drawn is white given that the first two were white. This conditional probability is not $\dfrac{k}{N}$, it is $\dfrac{k-2}{N-2}$ (except in trivial cases, like $k=1$). Suppose again that we draw the balls, one at a time, until they are all gone. The conditional probability that the $3$rd ball drawn is white, given that the last two balls drawn (of the $N$) are white is also $\dfrac{k-2}{N-2}$. So it is not the temporal order of the drawing that matters in evaluating the conditional probability. When we calculate a conditional probability, it is not the act of receiving information that matters, it is the act of using the information to calculate a conditional probability, that is, to restrict the sample space. When one does replacement, conditional and unconditional probabilities are the same, since the sample space is unchanged.
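Both claims can be verified exactly by enumerating all draw orders for a small box (my own sketch; $N=5$, $k=2$ are arbitrary choices):

```python
from itertools import permutations
from fractions import Fraction

N, k = 5, 2                          # balls 0..1 are white, 2..4 are black
perms = list(permutations(range(N)))

# Unconditional: P(i-th draw is white) = k/N for every position i.
for i in range(N):
    favourable = sum(1 for perm in perms if perm[i] < k)
    assert Fraction(favourable, len(perms)) == Fraction(k, N)

# Conditional: P(2nd draw white | 1st draw white) = (k-1)/(N-1).
first_white = [perm for perm in perms if perm[0] < k]
second_white = sum(1 for perm in first_white if perm[1] < k)
assert Fraction(second_white, len(first_white)) == Fraction(k - 1, N - 1)
```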
{ "language": "en", "url": "https://math.stackexchange.com/questions/148299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Covering area algorithm? I'm looking for a 'covering rectangle with smaller rectangles' algorithm with the unique feature of being able to exclude some possible center points of rectangles. Basically, limiting the possible areas the smaller rectangles can be placed, while still having the algorithm try to solve for filling up the entire big rectangle with smaller rectangles (of a fixed size). Obviously this will sometimes result in the algorithm not succeeding, no possible solutions. Has anyone seen anything like this or know how it would be developed? somethings to keep in mind: 1. This problem can be optimally solved by simply placing the fixed size rectangle at every point that is allowed. This of course is too many rectangles, and I'm trying to accomplish this with the minimum amount of rectangles possible. the minimum amount can usually be determined by dividing the area of the big rectangle by the area of the smaller rectangle. Example: a big rectangle with an area of 200. small rectangle with an area of 5. The smallest possible amount of rectangles to cover the area inside the big rectangle is 40 (200/5=40). If you limit the places you can put the rectangles, this number might grow, and the spacing might become uneven. I'm essentially asking for a way to solve this problem. 2.Coverage areas are not boxes, packing algorithms are not covering algorithms. coverage areas can overlap. box packing algorithms don't overlap.
I cannot remark on "the unique feature of being able to exclude some possible center points of rectangles," a feature I do not understand. But I can offer some pointers to algorithms for covering with rectangles. First, an optimal cover is NP-hard: Culbertson and Reckhow, "Covering polygons is hard," 1988. Second, there are nevertheless published approximation algorithms: Berman and Dasgupta, "Complexities of Efficient Solutions of Rectilinear Polygon Cover Problems," 1994; Ramesh and Ramesh, "Covering Rectilinear Polygons with Axis-Parallel Rectangles," 1999. And you could use Google Scholar to find all later papers that reference these. Perhaps some of these algorithms can be adapted to include your unique features.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
cohomology of a finite cyclic group I apologize if this is a duplicate. I don't know enough about group cohomology to know if this is just a special case of an earlier post with the same title. Let $G=\langle\sigma\rangle$ where $\sigma^m=1$. Let $N=1+\sigma+\sigma^2+\cdots+\sigma^{m-1}$. Then it is claimed in Dummit and Foote that $$\cdots\mathbb{Z} G \xrightarrow{\;\sigma -1\;} \mathbb{Z} G \xrightarrow{\;N\;} \mathbb{Z} G \xrightarrow{\;\sigma -1\;} \cdots \xrightarrow{\;N\;} \mathbb{Z} G \xrightarrow{\;\sigma -1\;} \mathbb{Z} G \xrightarrow{\;\text{aug}\;} \mathbb{Z} \longrightarrow 0$$ is a free resolution of the trivial $G$-module $\mathbb{Z}$. Here $\mathbb{Z} G$ is the group ring and $\text{aug}$ is the augmentation map which sums coefficients. It's clear that $N( \sigma -1) = 0$ so that the composition of consecutive maps is zero. But I can't see why the kernel of a map should be contained in the image of the previous map. any suggestions would be greatly appreciated. Thanks for your time.
I just wanted to elaborate a bit on minu's excellent answer. Suppose that $\alpha=\sum_{i=0}^{m-1}a_i\sigma^i$ is in the kernel of $\sigma-1$. Then $$\textstyle(\sigma-1)(\alpha)=\sum\limits_{i=0}^{m-1}a_{i}(\sigma^{i+1}-\sigma^i)=(a_{m-1}-a_0)+(a_0-a_1)\sigma+\cdots+(a_{m-2}-a_{m-1})\sigma^{m-1}=0$$ so that $a_{i-1}-a_i=0$ for all $i$, and so all the $a_i$ are equal. Therefore, $$\alpha=a\left(\textstyle\sum\limits_{i=0}^{m-1}\sigma^i\right)=N(a)$$ where $a=a_0=a_1=\cdots=a_{m-1}$. Thus, $\ker(\sigma-1)\subseteq\operatorname{im}(N)$. Now suppose that $\alpha=\sum_{i=0}^{m-1}a_i\sigma^i$ is in the kernel of $N$. Then $$N(\alpha)=\textstyle\sum\limits_{i=0}^{m-1}a_i\left(\sum\limits_{j=0}^{m-1}\sigma^{i+j}\right)=\sum\limits_{i=0}^{m-1}a_i\left(\sum\limits_{j=0}^{m-1}\sigma^{j}\right)=\left(\sum\limits_{i=0}^{m-1}a_i\right)\left(\sum\limits_{j=0}^{m-1}\sigma^{j}\right)=0$$ which implies that $\sum_{i=0}^{m-1}a_i=0$, and therefore $$\textstyle(\sigma-1)\left(\sum\limits_{i=0}^{m-1}\left(-\sum\limits_{j=0}^ia_j\right)\sigma^i\right)=\left(a_0-\sum\limits_{j=0}^{m-1}a_j\right)+\sum\limits_{i=1}^{m-1}\left(\sum\limits_{j=0}^{i}a_j-\sum\limits_{j=0}^{i-1}a_j\right)\sigma^i$$ $$=(a_0-0)+\textstyle\sum\limits_{i=1}^{m-1}a_i\sigma^i=\alpha.$$ Thus, $\ker(N)\subseteq\operatorname{im}(\sigma-1)$.
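As a sanity check, one can realize these maps as integer matrices on $\mathbb{Z}G\cong\mathbb{Z}^m$ and verify, say for $m=4$, that consecutive maps compose to zero and that the ranks are complementary (my own sketch; note this only certifies exactness after tensoring with $\mathbb{Q}$, while the integral statement is the computation above):

```python
import numpy as np

m = 4
P = np.roll(np.eye(m, dtype=int), 1, axis=0)             # matrix of sigma on ZG
S = P - np.eye(m, dtype=int)                             # sigma - 1
N = sum(np.linalg.matrix_power(P, i) for i in range(m))  # 1 + sigma + ... + sigma^(m-1)

# Consecutive maps in the resolution compose to zero.
assert np.all(N @ S == 0) and np.all(S @ N == 0)

# Complementary ranks over Q: dim ker of one map equals dim im of the other.
assert np.linalg.matrix_rank(S) + np.linalg.matrix_rank(N) == m
```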
{ "language": "en", "url": "https://math.stackexchange.com/questions/148421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Meaning of the Soul Theorem The Soul Theorem states that in every complete, connected Riemannian manifold $M$ with $\mathrm{sec}(M)\geq 0$, there exists a compact, totally convex, totally geodesic submanifold $S$ such that $M$ is diffeomorphic to the normal bundle of $S$. I don't know if I'm being too formal in my approach, but I don't totally get how a manifold could be diffeomorphic to a bundle, given that they're in different classes of objects. What is the precise meaning of diffeomorphic here? In general the normal bundle of $S$ (or of any submanifold) will be of dimension less than $\mathrm{dim}(M)$, right? So how is it possible that they're diffeomorphic?
The total space of any vector bundle over a manifold is itself a manifold; if it has rank $r$ and the base space has dimension $n$, it has dimension $r+n$ (since it locally looks like a product). Since the rank of the normal bundle $NS$ is equal to the codimension of $S$ in $M$, the dimension of $NS$ as a manifold is equal to the dimension of $M$. The general version of the picture you want to be visualizing (which is more elementary than the Soul Theorem) is the tubular neighborhood theorem; one phrasing of it states that if $A$ is a submanifold of $M$ then $A$ has a neighborhood in $M$ which is diffeomorphic to $NA$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Density of the set $S=\{m/2^n| n\in\mathbb{N}, m\in\mathbb{Z}\}$ on $\mathbb{R}$? Let $S=\{\frac{m}{2^n}| n\in\mathbb{N}, m\in\mathbb{Z}\}$, is $S$ a dense set on $\mathbb{R}$?
This set looks really close to the rationals. Maybe you could use the density of the rationals and see if you can put an element of this set between any two rationals and go from there.
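Spelling the standard argument out computationally (my own sketch): given any real $x$ and $\varepsilon>0$, choose $n$ with $2^{-n}<\varepsilon$ and take $m=\lfloor x\cdot 2^n\rfloor$; then $|x - m/2^n| < \varepsilon$.

```python
from math import floor

def dyadic_approx(x, eps):
    """Return (m, n) with |x - m/2^n| < eps: pick n with 2^-n < eps,
    then take m = floor(x * 2^n)."""
    n = 0
    while 2.0 ** (-n) >= eps:
        n += 1
    m = floor(x * 2 ** n)
    return m, n

for x in (3.141592653589793, -1.25, 0.3):
    m, n = dyadic_approx(x, 1e-6)
    assert abs(x - m / 2 ** n) < 1e-6
```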
{ "language": "en", "url": "https://math.stackexchange.com/questions/148558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Question about convergence of matrices (Linear algebra) Let $O(n)$ to be the orthogonal group, and define $m(A)$ for every $A \in O(n)$ as follows: $$m(A) = \max \left\{ \frac{|Ax-x|}{|x|} : x \in \mathbb{R} ^ n -\{0\} \right\}\;.$$ Given a sequence $ \{B_i \}$ , we can assume by compactness that it converges. Why does this imply that for every $\epsilon>0$ there exist $i<j$ such that $m(B_j B_i^{-1} )\leq \epsilon $ ? Thanks in advance
Suppose (passing to a subsequence, which is enough here) that $B_i \to L$ as $i \to \infty$. Since both multiplication and inversion are continuous, as $i \to \infty$ and $j \to \infty$ along this subsequence we have $B_j B_i^{-1} \to L L^{-1} = I$. Now $m$ is continuous (note that by homogeneity we may take $x$ to be in the unit sphere, which is compact), and $m(I) = 0$, so for any $\epsilon>0$ we can choose $i<j$ in the subsequence with $m(B_j B_i^{-1})\leq\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What exactly is nonstandard about Nonstandard Analysis? I have only a vague understanding of nonstandard analysis from reading Reuben Hersh & Philip Davis, The Mathematical Experience. As a physics major I do have some education in standard analysis, but wonder what the properties are that the nonstandardness (is that a word?) is composed of. Is it more than defining numbers smaller than any positive real as the tag suggests? Can you give examples? Do you know of a gentle introduction to the nonstandard properties?
A gentle introduction would be Wikipedia I guess. They also describe the criticisms of nonstandard analysis there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 4 }
Fourier transform and distribution belonging to S' I need to prove that the distribution $$\langle F_f,\phi \rangle= p.v. \int\limits_{-1/2}^{1/2}\frac{\phi(t)}{t\cdot \ln{|t|}}\mathrm{d}t$$ belongs to $S^\prime$ (the dual of the Schwartz space), and I need to find the Fourier transform of the distribution.
If $\phi\in\mathcal{S}$, $\psi(t)=\frac{\phi(t)-\phi(0)}{t}\in C^\infty$, and by the Mean Value Theorem, $\|\psi\|_{L^\infty}\le\|\phi'\|_{L^\infty}$ Thus, $$ \begin{align} \lim_{\epsilon\to0}\left(\int_{-1/2}^{-\epsilon}\frac{\phi(t)}{t\log|t|}\,\mathrm{d}t+\int_{\epsilon}^{1/2}\frac{\phi(t)}{t\log|t|}\,\mathrm{d}t\right) &=\lim_{\epsilon\to0}\left(\int_{-1/2}^{-\epsilon}\frac{\psi(t)}{\log|t|}\,\mathrm{d}t+\int_{\epsilon}^{1/2}\frac{\psi(t)}{\log|t|}\,\mathrm{d}t\right)\\ &=\int_{-1/2}^{1/2}\frac{\psi(t)}{\log|t|}\,\mathrm{d}t\tag{1} \end{align} $$ Therefore $$ \begin{align} \left|\text{p.v.}\int_{-1/2}^{1/2}\frac{\phi(t)}{t\log|t|}\,\mathrm{d}t\right|&=\left|\int_{-1/2}^{1/2}\frac{\psi(t)}{\log|t|}\,\mathrm{d}t\right|\\ &\le\left|\int_{-1/2}^{1/2}\frac{1}{\log|t|}\,\mathrm{d}t\right|\;\|\psi\|_{L^\infty}\\ &\le\frac{1}{\log(2)}\|\phi'\|_{L^\infty}\tag{2} \end{align} $$ Inequality $(2)$ says that $\langle F_f,\cdot\rangle\in\mathcal{S}'$. The Fourier Transform of the distribution would be $$ \begin{align} \text{p.v.}\int_{-1/2}^{1/2}\frac{e^{-2\pi i\xi t}}{t\log|t|}\,\mathrm{d}t &=-i\int_{-1/2}^{1/2}\frac{\sin(2\pi\xi t)}{t\log|t|}\,\mathrm{d}t\\ &=-2i\int_0^{1/2}\frac{\sin(2\pi\xi t)}{t\log|t|}\,\mathrm{d}t\tag{3} \end{align} $$ I can not find a closed form for $(3)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exponential objects in a cartesian closed category: $a^1 \cong a$ Hi I'm having problems with coming up with a proof for this simple property of cartesian closed categories (CCC) and exponential objects, namely that for any object $a$ in a CCC $C$ with an initial object $0$, $a$ is isomorphic to $a^1$ where $1$ is the terminal object of $C$. In most of the category theory books i've read this is usually left as an exercise, but for some reason I can't get a handle on it.
I know that this is a bit late, but I'm reading the book right now and found this post, so I'll give my solution anyway, which doesn't use the Yoneda Lemma, adjunctions, etc. The idea is to notice that $id_a : a\to a$ is a terminal object in the slice category $C/a$. But the definition of the exponential allows you to show that $ev : a^1 \times 1 \to a$ is also a terminal object there, and therefore isomorphic to the first one. It is then easy to conclude that $a^1\times 1 \simeq a$ and so $a^1 \simeq a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Sum of three primes Can all natural numbers ($n\ge 6$) be represented as the sum of three primes? With a computer I checked up to $10000$, but couldn't prove it.
From the second edition of The Hardy-Littlewood Method by Robert C. Vaughan, the Corollary to Theorem 3.4 on page 33 is that every sufficiently large odd number is the sum of three primes. For even numbers and two primes, let $E(n)$ be the count of exceptions up to $n,$ meaning the count of even numbers that are not the sum of two primes. Let $A$ be a positive constant. The Corollary to Theorem 3.7 on page 36 is that there is a positive constant $C$ (where $C$ depends on $A$) such that $$ E(n) \leq C n (\log n)^{-A}. $$ So failures are eventually uncommon. I don't see that taking $p_1 + p_2 + 2$ improves matters very much. Vaughan also refers to a book by one of the main investigators in this area, Additive Theory of Prime Numbers by L.-K. Hua (1965).
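For what it's worth, the kind of computer verification mentioned in the question is a few lines of Python (my own sketch; the limit is kept small here for speed):

```python
def primes_upto(n):
    # Simple sieve of Eratosthenes.
    sieve = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

LIMIT = 2000  # the question reports checking up to 10000
primes = primes_upto(LIMIT)
prime_set = set(primes)

def is_sum_of_three_primes(n):
    for p in primes:
        if p > n:
            break
        for q in primes:
            if p + q > n:
                break
            if (n - p - q) in prime_set:
                return True
    return False

assert all(is_sum_of_three_primes(n) for n in range(6, LIMIT + 1))
```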
{ "language": "en", "url": "https://math.stackexchange.com/questions/148924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Modify Cantor Set to get a compact set with empty interior and measure $0<\alpha<1$. Possible Duplicate: Examples of perfect sets. The cantor set is obtained by succesively removing the "open middle third" from each closed interval in $K_n$ to obtain $K_{n+1}$, where $K_0 = [0,1]$. The intersection of all $K_n$ results in a compact set of null measure and empty interior. The question is about modifying this construction to get a set with empty interior (which would stem from the fact that it contains no open intervals) and measure $\alpha$, with $0<\alpha<1$. I thought about removing a smaller interval in each step (smaller by a factor of $\alpha$), but I realised this won't work because at step $n$ the removed part is smaller by a factor of $\alpha^n$ which tends to $0$. So at each step I should remove something that is proportionally greater than what I removed at the preceding step. I suspect a series will arise. I'll appreciate any hints that take me into the right direction.
An example of such a set is the fat Cantor set. A typical construction is by removing the middle $1/4^{th}$ at the first step and in general removing the middle $1/4^n$ from each of the $2^{n-1}$ intervals at the $n^{th}$ step. Hence, the measure of the set is $$1 - \sum_{n=1}^{\infty} \left( \dfrac{2^{n-1}}{4^n}\right) = 1 - \sum_{n=1}^{\infty} \left( \dfrac1{2^{n+1}}\right) = 1/2$$ and by definition it has no interval inside it.
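Here is a small exact computation in Python (my own sketch) confirming the arithmetic; the closing comment indicates, without proof, how the same idea can be tuned to other target measures:

```python
from fractions import Fraction

# Length removed after N steps: at step n we delete 2^(n-1) middle
# intervals, each of length 1/4^n, i.e. a total of 1/2^(n+1).
def removed_after(N):
    return sum(Fraction(2 ** (n - 1), 4 ** n) for n in range(1, N + 1))

# Partial sums are 1/2 - 1/2^(N+1), approaching 1/2,
# so the remaining set has measure 1/2.
for N in (1, 5, 20):
    assert removed_after(N) == Fraction(1, 2) - Fraction(1, 2 ** (N + 1))

# Scaling each removed length by a factor c makes the removed total c/2, so
# (as long as the removed pieces still fit inside their intervals) one can
# aim at any measure alpha in (0, 1) by taking c = 2 * (1 - alpha).
```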
{ "language": "en", "url": "https://math.stackexchange.com/questions/149010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Analyze the convergence or divergence of the sequence $\left\{\frac1n+\sin\frac{n\pi}{2}\right\}$ Analyze the convergence or divergence of the following sequence a) $\left\{\frac{1}{n}+\sin\frac{n\pi}{2}\right\}$ The first one is divergent because of the $\sin\frac{n\pi}{2}$ term, which takes the values, for $n = 1, 2, 3, 4, 5, \dots$: $$1, 0, -1, 0, 1, 0, -1, 0, 1, \dots$$ As you can see, it's divergent. To formally prove it, I could simply notice that the sequence has subsequences converging to $1$, $0$, and $-1$, i.e. to different limits; if the sequence were convergent, all its subsequences would have the same limit. Is my procedure correct?
You could also use the fact that if a sequence is convergent, then the difference between successive terms must converge to $0$. Let $x_n = \frac{1}{n}+\sin \frac{n \pi}{2}$. Then $x_{n+1}-x_n = \cos \frac{n \pi}{2} - \sin \frac{n \pi}{2} -\frac{1}{n(n+1)}$, from which we get the estimate $|x_{n+1}-x_n| \geq |\cos \frac{n \pi}{2} - \sin \frac{n \pi}{2}| -\frac{1}{n(n+1)} = 1-\frac{1}{n(n+1)}$. So the difference does not converge to $0$, so you can conclude that the sequence is divergent.
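As an illustration of the subsequence argument (a sketch, not from either post), tabulating terms shows them clustering near $1$, $0$, and $-1$:

```python
import math

# x_n = 1/n + sin(n*pi/2); the subsequences n = 4k+1, n even, n = 4k+3
# approach 1, 0, and -1 respectively.
def x(n):
    return 1 / n + math.sin(n * math.pi / 2)

print([round(x(n), 3) for n in range(1, 13)])
```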
{ "language": "en", "url": "https://math.stackexchange.com/questions/149087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Evaluating $\lim\limits_{n\to\infty} \left(\frac{1^p+2^p+3^p + \cdots + n^p}{n^p} - \frac{n}{p+1}\right)$ Evaluate $$\lim_{n\to\infty} \left(\frac{1^p+2^p+3^p + \cdots + n^p}{n^p} - \frac{n}{p+1}\right)$$
The first couple of terms of the Euler-Maclaurin Sum Formula can be derived as $$ \begin{align} \sum_{k=1}^nk^p &=\int_{0^+}^{n^+}x^p\,\mathrm{d}\!\left\lfloor x\right\rfloor\tag1\\ &=\int_{0^+}^{n^+}x^p\,\mathrm{d}(x-\{x\})\tag2\\ &=\tfrac1{p+1}n^{p+1}-\int_{0^+}^{n^+}x^p\,\mathrm{d}\!\left(\{x\}-\tfrac12\right)\tag3\\ &=\tfrac1{p+1}n^{p+1}-\left[x^p\left(\{x\}-\tfrac12\right)\right]_{0^+}^{n^+}+p\int_0^n\left(\{x\}-\tfrac12\right)x^{p-1}\,\mathrm{d}x\tag4\\[6pt] &=\tfrac1{p+1}n^{p+1}+\tfrac12n^p+O\!\left(n^{p-1}\right)\tag5 \end{align} $$ Explanation: $(1)$: write sum as a Riemann-Stieltjes Integral $(2)$: $\left\lfloor x\right\rfloor=x-\{x\}$ $(3)$: use $\{x\}-\frac12$ because its integral is periodic $(4)$: Integrate by Parts $(5)$: see below Proof of $(5)$: $$ \begin{align} \left|\,p\int_0^n\left(\{x\}-\tfrac12\right)x^{p-1}\,\mathrm{d}x\,\right| &=\left|\,p\int_0^n\left(\{x\}-\tfrac12\right)\left(x^{p-1}-\left\lfloor x\right\rfloor^{p-1}\right)\,\mathrm{d}x\,\right|\tag6\\ &=\left|\,p\sum_{k=0}^{n-1}\int_0^1\left(x-\tfrac12\right)\left((k+x)^{p-1}-k^{p-1}\right)\,\mathrm{d}x\,\right|\tag7\\ &\le p\sum_{k=0}^{n-1}\tfrac12\left((k+1)^{p-1}-k^{p-1}\right)\tag8\\[6pt] &=\tfrac{p}2n^{p-1}\tag9 \end{align} $$ Explanation: $(6)$: the integral of $\{x\}-\tfrac12$ over a unit interval is $0$ $(7)$: for $(k,x)\in\mathbb{Z}\times[0,1)$, $\{k+x\}=x$ and $\lfloor k+x\rfloor=k$ $(8)$: for $(k,x)\in\mathbb{Z}\times[0,1)$, $\left|x-\frac12\right|\le\frac12$ and $\left|(k+x)^{p-1}-k^{p-1}\right|\le(k+1)^{p-1}-k^{p-1}$ $(9)$: Telescoping Sum Divide $(5)$ by $n^p$, subtract $\frac{n}{p+1}$, and take the limit to obtain $\frac12$.
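A numerical check of the conclusion (a sketch, not part of the original answer): for several exponents $p$ the expression approaches $\frac12$.

```python
# expr(p, n) is the quantity inside the limit, evaluated at finite n.
def expr(p, n):
    s = sum(k ** p for k in range(1, n + 1))
    return s / n ** p - n / (p + 1)

for p in (1, 2, 3):
    print(p, expr(p, 10_000))  # each value is close to 0.5
```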
{ "language": "en", "url": "https://math.stackexchange.com/questions/149142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 7, "answer_id": 2 }
Perfect square with digit-sum 15 Prove that there is not a single natural number $N$ with sum of digits equal to 15 that is the square of an integer.
Observe that the recursive digit sum (R.D.) of any number of the form $9a+b$ with $0\le b<9$ depends only on $b$; by recursive digit sum I mean taking the sum of digits repeatedly until the result is less than $10$. For example, $193^2 = 37249$, so the R.D. of $193^2$ = R.D. of $37249$ = R.D. of $25$ = $7$. Equivalently, $193\equiv 4\pmod 9$ gives $193^2\equiv 4^2\equiv 7\pmod 9$, so the R.D. of $193^2$ is $7$. Now the R.D. of $(9a)^2$ is $9$, of $(9a\pm1)^2$ is $1$, of $(9a\pm2)^2$ is $4$, of $(9a\pm3)^2$ is $9$, and of $(9a\pm4)^2$ is $7$. But a number with digit sum $15$ has R.D. $6$, which is not in $\{1,4,7,9\}$, so it cannot be a perfect square.
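The case analysis can be confirmed by brute force (a sketch, not part of the original answer): the digital root of a perfect square always lies in $\{1, 4, 7, 9\}$, while a digit sum of $15$ forces a digital root of $6$.

```python
def digital_root(n):
    # repeated digit sum; for n >= 1 this equals 1 + (n - 1) % 9
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

roots_of_squares = {digital_root(k * k) for k in range(1, 1000)}
print(sorted(roots_of_squares))  # [1, 4, 7, 9]
print(digital_root(15))          # 6
```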
{ "language": "en", "url": "https://math.stackexchange.com/questions/149225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is there an operational isomorphism from $(\mathbb{Z},+)$ to $(\mathbb{Q}^{+},\cdot)$? Let $\left(\mathbb{Z},+\right)$ and $\left(\mathbb{Q}^{+},\cdot\right)$ be groups (let the integers be a group with respect to addition and the positive rationals be a group with respect to multiplication). Is there a function $\phi\colon\mathbb{Z}\to\mathbb{Q}^{+}$ such that: * *$\phi(a)=\phi(b) \implies a=b$ (injection) *$\forall p\in\mathbb{Q}^{+} : \exists a\in\mathbb{Z} : \phi(a)=p$ (surjection) *$\phi(a+b) = \phi(a)\cdot\phi(b)$ (homomorphism) ? If so, provide an example. If not, disprove it.
No, because $\mathbb Z$ is generated by $1$ while $\mathbb Q^+$ as a group under multiplication is not finitely generated. If such a homomorphism $\phi$ existed, then we could write any element of $\mathbb Q^+$ as $\phi (n)=\phi(1)^n$, so $\mathbb Q^+$ would be generated by $\phi(1)$. But $\phi(1)=a/b$ for some $a,b\in\mathbb Z$, and clearly raising $a/b$ to an integral exponent won't give us $1/p$ for primes $p$ dividing neither $a$ nor $b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Limit points of sets Find all limit points of given sets: $A = \left\{ (x,y)\in\mathbb{R}^2 : x\in \mathbb{Z}\right\}$ $B = \left\{ (x,y)\in\mathbb{R}^2 : x^2+y^2 >1 \right\}$ I don't know how to do that. Are there any standard ways to do this?
Here are a few hints. I would start by drawing a picture and asking "what would be the closure of these sets, i.e. the smallest closed subset of $\mathbb{R}^{2}$ containing them?". If the sets are closed already, then this task is completed. For A. Note that $\mathbb{Z}$ is a closed subset of $\mathbb{R}$ (can you show it?), the projection to the first component $pr_{1}:\mathbb{R}^{2}\to\mathbb{R}$ is continuous, and that the preimage of a closed set under a continuous function is closed. Can you conclude something about the set $A$ from all this? For B. Again, what would be the smallest closed set containing $B$? Take a guess (you will probably guess correctly if you think for a while), and show that any point outside this "guess" has a strictly positive distance to $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Understanding Less Frequent Form of Induction? (Putnam and Beyond) I won't paste the question here since my problem is not a technical one but a conceptual one. Book is here: (Page 22 of the pdf) I do not understand why it is necessary to induct from $2^{k}$ to $2^{k+1}$ when the proof follows up with inducting backwards from $(n+1)$ to $(n)$. I recall a similar induction problem where backwards induction by itself already sufficed. Why is it necessary to show the forward step on $2^{k}$? Doesn't showing $(n+1)$ to $(n)$ already cover the powers of $2$ in itself?
Please do insert more details in your question. In any case, forward-backward induction is a very common concept, so here's my answer: It is not sufficient to be able go from $n+1$ to $n$ because you don't have a large value of $n$ to start. The forward part of going from $2^k$ to $2^{k+1}$ ensures that you have arbitrarily large values of $n$ from which you can go downwards.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Unimodality and continuity for probability distribution From Wikipedia about the conditions for the Vysochanskij–Petunin inequality The sole restriction on the distribution is that it be unimodal and have finite variance. (This implies that it is a continuous probability distribution except at the mode, which may have a non-zero probability.) Does it mean that unimodality and finite variance imply the distribution is continuous except at the mode? Why is that? Thanks!
Your interpretation is correct. If you regard some distributions on the integers as unimodal in the natural sense (such as the binomial and Poisson distributions) then the distribution with $$\Pr(X=0)=\frac{1}{2k^2}$$ $$\Pr(X=1)=1-\frac{1}{k^2}$$ $$\Pr(X=2)=\frac{1}{2k^2}$$ has a mean of $1$, a variance of $\frac{1}{k^2}$, and is unimodal when $k \gt \sqrt{\frac32}$. But it is bound by the Chebyshev inequality not the Vysochanskij–Petunin inequality.
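The claimed moments of this three-point distribution are easy to verify directly (a sketch, not part of the answer):

```python
# Mean and variance of the distribution P(X=0) = P(X=2) = 1/(2k^2),
# P(X=1) = 1 - 1/k^2 from the answer above.
def moments(k):
    pmf = {0: 1 / (2 * k**2), 1: 1 - 1 / k**2, 2: 1 / (2 * k**2)}
    mean = sum(x * p for x, p in pmf.items())
    var = sum((x - mean) ** 2 * p for x, p in pmf.items())
    return mean, var

print(moments(3))  # mean 1.0, variance 1/9
```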
{ "language": "en", "url": "https://math.stackexchange.com/questions/149448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Something is wrong with this proof, limit $\lim\limits_{(x,y) \to (0,0)} \frac{xy^3}{x^4 + 3y^4}$ Could someone please tell me what is wrong with this proof? Show that $\lim\limits_{(x,y) \to (0,0)} \dfrac{xy^3}{x^4 + 3y^4}$ does not have a limit or show that it does and find the limit. I know it is wrong because the limit doesn't exist, but this proof contradicts that. Proof Case 1 Assume $x,y > 0$; then $x^4 + 3y^4 > x^4 > x > 0$ $$\begin{align*} 0 < x < x^4 + 3y^4 &\iff 0 < \dfrac{x}{x^4 + 3y^4} < 1 \\ & \iff 0 < \dfrac{x|y^3|}{x^4 + 3y^4} < |y^3| \\ & \iff \lim\limits_{(x,y) \to (0,0)} 0 < \lim\limits_{(x,y) \to (0,0)} \dfrac{x|y^3|}{x^4 + 3y^4} < \lim\limits_{(x,y) \to (0,0)}|y^3|\\ &\iff 0 < \lim\limits_{(x,y) \to (0,0)} \dfrac{x|y^3|}{x^4 + 3y^4} < 0 \end{align*}$$ Case 2. WLOG assume $x,y <0$ and combine both cases. What's wrong with the proof? I can't find the flaw.
The statement $x^4>x$ is clearly wrong especially when $x$ is in the neighborhood of $0$. Also, you need to look at cases $x>0,y<0$ and $x<0,y>0$. But the main error is the statement $x^4 > x$. EDIT The answer is that the function is not continuous at the origin. This can be seen as follows. Remember that in two dimensions there are infinitely many different directions from which you can approach origin unlike in one dimension where you need to look only at two different cases $x>0$ and $x<0$. Hence, in two dimensions it is not sufficient/advantageous to split it into just four cases $x,y>0$, $x,y<0$, $x>0,y<0$, $x<0,y>0$ and analyze. Approach the origin along the straight line $y=mx$. Then we have that $$\lim_{\overset{x \rightarrow 0}{y \rightarrow 0}} \dfrac{xy^3}{x^4 + 3y^4} = \lim_{\overset{x \rightarrow 0}{y = mx}} \dfrac{xy^3}{x^4 + 3y^4} = \lim_{x \rightarrow 0} \dfrac{x(mx)^3}{x^4 + 3(mx)^4} \\ = \lim_{x \rightarrow 0} \dfrac{m^3 x^4}{x^4 + 3m^4 x^4} = \lim_{x \rightarrow 0} \dfrac{m^3 x^4}{(1 + 3m^4) x^4} = \lim_{x \rightarrow 0} \dfrac{m^3}{(1 + 3m^4)} = \dfrac{m^3}{1+3m^4}$$ Hence, approaching the origin along different straight lines give different answers. For instance, if we approach the origin along the line $y = x$, the limit is $\dfrac14$. If we approach the origin along the line $y = -x$, the limit is $-\dfrac14$. If we approach the origin along the line $y = 0$, the limit is $0$. This proves that the function is not continuous at the origin.
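The path dependence is easy to see numerically (a sketch, not part of the original answer): evaluating $f$ along $y = mx$ close to the origin reproduces $m^3/(1+3m^4)$.

```python
def f(x, y):
    return x * y**3 / (x**4 + 3 * y**4)

def along_line(m, t=1e-6):
    # value of f near the origin on the line y = m*x
    return f(t, m * t)

for m in (0.0, 1.0, -1.0):
    print(m, along_line(m))  # 0, 0.25, -0.25
```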
{ "language": "en", "url": "https://math.stackexchange.com/questions/149509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Tensor product of $\mathbb R$ and $\mathbb C$ over $\mathbb R$. $$ \mathbb{C} \otimes_{\mathbb{R}} \mathbb{R} = \;? $$ I guess this guy is just $\mathbb{C}$, is this correct?
Yes this is correct. One way of seeing this is considering the map $$\mathbb C\otimes_\mathbb R \mathbb R\to \mathbb C$$ which maps $$z\otimes x\mapsto xz$$ and showing that it is an isomorphism. Another way is to see that $\mathbb C\otimes_\mathbb R \mathbb R$ is $2$-dimensional as a real vector space and that the induced product is the same as the product of $\mathbb C$. Hence they are isomorphic as algebras.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
What exactly is a sequence? (Construction of reals) I am working through an Analysis textbook and came to the construction of the reals using Cauchy Sequences. I understood the proof more or less, but far from completely or intuitively. I have no image of what exactly a sequence is. Does this construction mean we have a special sequence to represent each real number we want? If so, how would a sequence for, let's say, $ \sqrt2 $ look, and what is the function creating this sequence? I would be glad to get any information which could help clear this up. Or if you have any good intuition to share :) Thank you!
Any function $\,f:\mathbb{N}\to\mathbb{Q}\,$ is a rational sequence, where we usually denote $\,a_1:=f(1)\,,\,a_2:=f(2)\,,...\,$ . The same can be done with the reals or complexes instead of the rationals. As you talk of the construction of the reals by means of Cauchy sequences, I focused first on rational sequences. Added The construction I know for the reals by means of rational Cauchy sequences is as follows: first, define $\,\displaystyle{R:=\left\{\{a_n\}\subset \mathbb{Q}\,/\,\{a_n\} \text{ is Cauchy}\right\}}\,$ , and define on this set the "usual" operations of addition and multiplication coordinatewise. Then, $\,R\,$ becomes a unitary commutative ring and $\,\displaystyle{M:=\left\{\{a_n\}\in R\,/\,\lim_{n\to\infty}a_n=0\right\}}\,$ is a maximal ideal in it, thus $\,R/M\,$ is a field...yes, the field of real numbers. Of course, there are several things to prove here, but this is the idea.
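To connect this with the original question about $\sqrt2$: one concrete rational Cauchy sequence whose equivalence class in $R/M$ is $\sqrt2$ comes from Newton's iteration $x_{n+1}=\tfrac12(x_n+2/x_n)$ (an illustrative sketch, not part of the answer):

```python
from fractions import Fraction

def sqrt2_sequence(steps):
    # rational Newton iterates for the root of x**2 - 2, starting at 1
    x = Fraction(1)
    seq = [x]
    for _ in range(steps):
        x = (x + 2 / x) / 2
        seq.append(x)
    return seq

print([float(t) for t in sqrt2_sequence(5)])  # 1.0, 1.5, 1.4166..., ...
```

The sequence is Cauchy and $x_n^2\to 2$, but it has no limit in $\mathbb Q$; the real number $\sqrt2$ is, by construction, this sequence's class modulo $M$.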
{ "language": "en", "url": "https://math.stackexchange.com/questions/149631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Finite extension of residue fields of DVR's Let $R$ be a DVR with $K = Quot(R)$ and residue field $k$. Let $k'/k$ be a finite field extension. I would like to have a reference for the following statement (or to see that it is not true): There exists a finite field extension $K'/K$ s.t. the residue field of the integral closure $R'$ of $R$ in $K'$ is $k'$. (In my concrete situation: $K/\mathbb Q_p$ is a finite field extension, $R = \mathcal O_K$ the ring of integers of $K$, and $k \cong \mathbb F_q$ for some power $q$ of the prime $p$.)
Write $k^\prime=k[X]/(\bar{p})$ where $\bar{p}$ is a monic irreducible in $k[X]$. Let $p$ be a monic lift of $\bar{p}$ in $R[X]$. Consider $R^\prime=R[X]/(p(X))$. This is a local ring with residue field $k^\prime$. It is a finite $R$-algebra, so it's also Noetherian. The maximal ideal of $R^\prime$ is generated by the image of $\pi$, where $\pi$ is a uniformizer for $R$. The ring map $R\rightarrow R^\prime$ is injective, so $\pi$ is not nilpotent in $R^\prime$. This is enough to conclude that $R^\prime$ is a discrete valuation ring. Take $K^\prime$ to be the field of fractions of $R^\prime$. Then $R^\prime$ is integral over $R$ and integrally closed in $K^\prime$, so it must be the integral closure of $R$ in $K^\prime$. One has $K^\prime=K[X]/(p(X))$, which is finite over $K$. For more details see the first chapter of Serre's Local Fields.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why is the expected value $E(X^2) \neq E(X)^2$? I wish to use the Computational formula of the variance to calculate the variance of a normal-distributed function. For this, I need the expected value of $X$ as well as the one of $X^2$. Intuitively, I would have assumed that $E(X^2)$ is always equal to $E(X)^2$. In fact, I cannot imagine how they could be different. Could you explain how this is possible, e.g. with an example?
Assuming $X$ is a discrete random variable $E(X)=\sum x_ip_i$. Therefore $E(X^2)=\sum x_i^2p_i$ while $[E(X)]^2=\left(\sum x_ip_i\right)^2$. Now, as Robert Mastragostino says, this would imply that $(x+y+z+\cdots)^2=x^2+y^2+z^2+\cdots$ which is not true unless $X$ is constant.
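A concrete example (not part of the answer): for a fair six-sided die, $E(X)^2$ and $E(X^2)$ differ, and the gap is exactly the variance.

```python
faces = range(1, 7)
p = 1 / 6

ex = sum(x * p for x in faces)       # E(X) = 3.5
ex2 = sum(x * x * p for x in faces)  # E(X^2) = 91/6
print(ex ** 2, ex2, ex2 - ex ** 2)   # 12.25, 15.166..., variance 35/12
```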
{ "language": "en", "url": "https://math.stackexchange.com/questions/149723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 9, "answer_id": 8 }
Using Jacobi symbol: Is $(\frac{54}{77})=1$ solvable? I can't understand what to do in the following example of congruence. I need to decide whether this congruence is solvable, and if so, to find all the solutions: $x^2 \equiv 54 \pmod{77}$. I need to decide whether $(\frac{54}{77})=1$ or $-1$. First of all I want to ask you: what is the idea behind the Jacobi symbol? Do I use it when I need to decide $(\frac{a}{b})$ and $b$ is not a prime? Since both numbers are composite, I want to use the Jacobi symbol rules: $(\frac{54}{77})=(\frac{54}{7})(\frac{54}{11})=(\frac{5}{7})(\frac{-1}{11})=(-1)(-1)=1$. But I know that if the result is $1$ then $54$ may or may not be a quadratic residue $\pmod{77}$. How do I determine it? Thanks!
Hint: Deal separately with the solutions of $x^2\equiv 54\pmod{7}$ and $x^2\equiv 54\pmod{11}$, and splice solutions together (if there are any) using the Chinese Remainder Theorem. If there are no solutions mod one of $7$ or $11$, then there are no solutions mod $77$. (Actually, since the Jacobi symbol evaluates to $1$, if there are no solutions mod one of $7$ or $11$, then there are no solutions mod the other.) Remark: You are right, Jacobi symbol evaluating to $1$ is inconclusive. But the Jacobi symbol is cheap to compute, and if it evaluates to $-1$, then end of story. For small numbers, the Jacobi symbol is not as useful, since if we are interested in $x^2\equiv a \pmod{b}$, factorization of $b$ is easy.
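Brute force confirms the remark for this example (a sketch, not part of the answer): $54$ is a non-residue modulo both $7$ and $11$, so $x^2\equiv 54\pmod{77}$ has no solution even though the Jacobi symbol equals $1$.

```python
def is_qr(a, m):
    # quadratic residue test by exhaustion (fine for small moduli)
    return any(x * x % m == a % m for x in range(m))

print(is_qr(54, 7), is_qr(54, 11), is_qr(54, 77))  # False False False
```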
{ "language": "en", "url": "https://math.stackexchange.com/questions/149798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is critical Hausdorff measure a Frostman measure? Let $K$ be a compact set in $\mathbb{R}^d$ of Hausdorff dimension $\alpha<d$, $H_\alpha(\cdot)$ the $\alpha$-dimensional Hausdorff measure. If $0<H_\alpha(K)<\infty$, is it necessarily true that $H_\alpha(K\cap B)\lesssim r(B)^\alpha$ for any open ball $B$? Here $r(B)$ denotes the radius of the ball $B$. This seems to be true when $K$ enjoys some self-similarity, e.g. when $K$ is the standard Cantor set. But I am not sure if it is also true for general sets.
I would read about the local dimension of a measure here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The subset of everywhere-discontinuous functions is not open in the space of bounded functions Let $X\subset \mathcal{B}(\mathbb{R},\mathbb{R})$ be the subset of bounded functions $f:\mathbb{R}\to\mathbb{R}$ that are discontinuous at every point. Prove that $X$ is not open (with the usual supremum metric). I think that I found an example of a function $f\in X$ such that for every $\epsilon>0$ there exists $g\in B(f,\epsilon)\cap X^c$. Let $f$ be defined by: $$f(x)=\begin{cases} 1\ ; \text{if } x\in[-1,1]^c\cap\mathbb{Q}\\ x\ ; \text{if } x\in[-1,1]\cap\mathbb{Q}\setminus\{0\}\\ 1/2\ ; \text{if } x=0\\ -x\ ; \text{if } x\in[-1,1]\cap\mathbb{Q}^c\\ -1\ ; \text{if } x\in[-1,1]^c\cap\mathbb{Q}^c\\ \end{cases}$$ $f$ is clearly in $X$, and for each $\epsilon>0$ I can define $g$ by: $$g(x)=\begin{cases} f(x)\ ; \text{if } x\in\{0\}\cup(-\epsilon,\epsilon)^c\\ 0\ ; \text{if } x\in(-\epsilon,\epsilon)\setminus\{0\}\\ \end{cases}$$ From a sketch I believe that $g\in B(f,1.1\epsilon)$, and $g$ is continuous on an interval, so $g\in X^c$. Therefore $X$ is not open. Am I right? Do you know another way to prove this fact?
Here’s a similar argument that I find a bit easier. Let $$f(x)=\begin{cases}\frac1{1+x^2},&\text{if }x\in\Bbb Q\\\\0,&\text{otherwise}\;.\end{cases}$$ For $a>0$ let $$f_a(x)=\begin{cases}f(x),&\text{if }|x|\le a\\\\0,&\text{otherwise}\;.\end{cases}$$ Clearly $f\in X$, each $f_a\in\mathcal{B}(\Bbb R,\Bbb R)\setminus X$ (since $f_a$ is continuous on $\Bbb R\setminus[-a,a]$), and $$\|f-f_a\|=\frac1{1+a^2}$$ can be made arbitrarily small by taking $a$ large enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Peano postulates I'm looking for a set containing an element 0 and a successor function s that satisfies the first two Peano postulates (s is injective and 0 is not in its image), but not the third (the one about induction). This is of course exercise 1.4.9 in MacLane's Algebra book, so it's more or less homework, so if you could do the thing where you like point me in the right direction without giving it all away that'd be great. Thanks!
How about the non-negative rational numbers? Or just $(1/2)\mathbb N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Proof needed for $\operatorname{Hom}_R(M,N) \otimes_RS \cong \operatorname{Hom}_S(M\otimes_R S,N\otimes_R S)$ I want a proof that $$\operatorname{Hom}_R(M,N) \otimes_RS \cong \operatorname{Hom}_S(M\otimes_R S,N\otimes_R S)$$ where $\phi\colon R \to S$ is a homomorphism, $M$ is a finitely generated free $R$-module, and $N$ is an $R$-module. Alternatively, an explicit description of the isomorphism between the two sides would also help.
* *Since $M$ is a finitely generated free $R$-module, one has $M\cong R^n$ for some $n$ (this statement is equivalent to choosing a basis $e_1,\ldots, e_n$ for $M$). *Moreover, since $M$ is free, any homomorphism $\phi\colon M\to N$ can be specified completely by specifying the images $\phi(e_1),\ldots, \phi(e_n)$ of the basis vectors. This says exactly that $\operatorname{Hom}(M,N) \cong N^n$. Thus the left hand side of the desired isomorphism is $$\operatorname{Hom}(M,N)\otimes_RS\cong N^n\otimes_R S.$$ *Since $M\cong R^n$ and tensor product commutes with direct sums, we have that $M\otimes_RS\cong S^n$. In terms of our previously chosen basis $e_1,\ldots, e_n$ of $M$, this statement says simply that $M\otimes_R S$ is a free $S$-module with basis $e_1\otimes 1,\ldots, e_n\otimes 1$. *Just as in point (2), since $M\otimes_R S$ is a free $S$-module, specifying a homomorphism $\phi\colon M\otimes_R S\to N\otimes_RS$ is equivalent to specifying the images $\phi(e_1\otimes 1),\ldots, \phi(e_n\otimes 1)$ of the basis vectors of $M\otimes S$. Thus $$\operatorname{Hom}_S(M\otimes_R S, N\otimes_R S)\cong (N\otimes_RS)^n.$$ Thus, combining points (2) and (4), in order to prove the desired isomorphism it suffices to show that $N^n\otimes_R S\cong (N\otimes_R S)^n$. But this is exactly the fact that tensor product commutes with direct sums. This completes the proof. It is possible to give a simple description of the isomorphism $$f\colon \operatorname{Hom}(M,N)\otimes_R S\to \operatorname{Hom}_S(M\otimes_R S, N\otimes_R S)$$ we just constructed. It is given on simple tensors by $$f(\phi\otimes s)(m\otimes t) = \phi(m)\otimes st.$$ Despite the simplicity of this definition, it is not clear that $f$ is indeed an isomorphism. The above argument proves that $f$ is an isomorphism when $M$ is a finitely generated free $R$-module.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Differentiation of a function with respect to a product of variables How would you proceed to differentiate a function with respect to a product of variables where the product does not appear together in the function. For example: If $y=r\sin(\theta)$. How would you proceed with: $\frac{\mathrm{d} y}{\mathrm{d} (r\theta)}$ Thanks for your help in advance.
This expression is not well-defined. $y$ is a function of two variables. If you want one of them to be $r \theta$, you have to specify what the other one is to know what to keep constant. As a simpler example, suppose $y = \theta$. If I want my two variables to be $u = r \theta$ and $v = r$, then $y = \frac{u}{v}$ and $$\frac{\partial y}{\partial u} = \frac{1}{v} = \frac{1}{r}.$$ However, if I want my two variables to be $u = r \theta$ and $v = \theta$, then $y = v$ and $$\frac{\partial y}{\partial u} = 0.$$ Partial derivative notation is ambiguous. In addition to specifying the variable that changes, it ought to also specify the variables that don't change, but for whatever reason it doesn't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing that if $R$ is a commutative ring and $M$ an $R$-module, then $M \otimes_R (R/\mathfrak m) \cong M / \mathfrak m M$. Let $R$ be a local ring, and let $\mathfrak m$ be the maximal ideal of $R$. Let $M$ be an $R$-module. I understand that $M \otimes_R (R / \mathfrak m)$ is isomorphic to $M / \mathfrak m M$, but I verified this directly by defining a map $M \to M \otimes_R (R / \mathfrak m)$ with kernel $\mathfrak m M$. However I have heard that there is a way to show these are isomorphic using exact sequences and using exactness properties of the tensor product, but I am not sure how to do this. Can anyone explain this approach? Also can the statement $M \otimes_R (R / \mathfrak m) \cong M / \mathfrak m M$ be generalised at all to non-local rings?
If $R$ is any commutative ring and $I \subset R$ is an ideal, then $M \otimes_R R/I \cong M/IM$. Consider the short exact sequence of $R$-modules $0 \to I \to R \to R/I \to 0$ and tensor with $M$ over $R$ to obtain the exact sequence $M \otimes_R I \to M \to M \otimes_R R/I \to 0$. The image of the first map is $IM$, so by the first isomorphism theorem we obtain $M/IM \cong M \otimes_R R/I$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Prove by Induction: Weak Inequality involving Product of Factorial, Factorial in the Exponent, Exponents. I want to show the statement below. $n!x^{n!} \leq n\cdot x^{n}$ where $0 \leq x <1$, for all $n\in \mathbb{N}$. My approach is induction. The induction start This is easy. Set $n=1$ and the statement $n!x^{n!} \leq n\cdot x^{n}$ is true. Induction step Now I assume that $(n-1)!x^{(n-1)!} \leq (n-1)\cdot x^{n-1}$ is true. But I'm stuck here. I have tried to deduce the statement $n!x^{n!} \leq n\cdot x^{n}$, but my attempts seem to fail.
The inequality is not true. For instance, for $n=3$, we get that $$3! x^{3!} \leq 3 x^3$$ $$6x^6 \leq 3 x^3$$ $$x^3 \leq \frac12$$ The above is true only for $$x \in \left[0,\frac1{2^{1/3}} \right]$$
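A quick numerical check of this counterexample (a sketch, not part of the answer):

```python
import math

def lhs(n, x):
    return math.factorial(n) * x ** math.factorial(n)

def rhs(n, x):
    return n * x ** n

# at n = 3 the inequality lhs <= rhs holds only for x <= 2**(-1/3) ~ 0.794
print(lhs(3, 0.9), rhs(3, 0.9))  # lhs > rhs, so the claim fails at x = 0.9
```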
{ "language": "en", "url": "https://math.stackexchange.com/questions/150180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate the volume between $z=x^2+y^2$ and $z=2ax+2by$ I'm trying to calculate the volume between the surfaces $z=x^2+y^2$ and $z=2ax+2by$ where $a>0,b>0$. Here's what I've tried: First I noticed the projection of the volume onto the $xy$-plane is a disc: $(x-a)^2+(y-b)^2\leq a^2+b^2$. Using this I simplified the calculation of the integral for the volume a little. Writing $B$ for the disc, the volume is: $$\iint_{B} (2ax+2by-x^2-y^2)\,dA = \iint_{B} (a^2+b^2)\,dA-\iint_{B} \left((x-a)^2+(y-b)^2\right)dA $$ Using the symmetry of the disc we get: $$\iint_{B} \left((x-a)^2+(y-b)^2\right)dA = 2\iint_{B} (x-a)^2\,dA$$ And we can also use the formula for the area of a circle to get: $$\iint_{B} (a^2+b^2)\,dA = \pi (a^2+b^2)^2$$ So all I have left to do is calculate $\iint_{B} (x-a)^2\,dA$, but this is where I get stuck. Trying to do it using iterated integrals becomes too complex (we have only covered Cartesian coordinates, so I can't use something like polar coordinates here). I know the result is supposed to be $\frac{1}{2}\pi (a^2+b^2)^2$. Assistance would be appreciated. Thanks!
Integrate in $y$ first; this gives the additional factor of $2\sqrt{(a^2+b^2)-(x-a)^2}$, so that the question reduces to $$\int_{a-\sqrt{a^2+b^2}}^{a+\sqrt{a^2+b^2}} (x-a)^2 \sqrt{(a^2+b^2)-(x-a)^2}\,dx$$ which may look scary, but is in fact a typical trigonometric substitution problem. Namely, $x=a+\sqrt{a^2+b^2}\sin \theta$ turns the integral into a multiple of $$\int_{-\pi/2}^{\pi/2} \sin^2\theta\, \cos^2\theta \,d\theta$$ Of course, with polar integrals this would have been much easier.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the distribution of a transformation Let $$f_X (x, \theta) = \frac{1}{\theta} x^{\frac{1}{\theta} - 1}, \; x \in (0, 1)$$ find the distribution of: $$Y = - \frac{1}{\theta} \ln X$$ [Solution provided: $Y \sim \mathcal{E}(1)$ ] I did: $$P(Y = y) = P(- \frac{1}{\theta} \ln X = y) = P( X = e^{-\theta y}),\; y \in (0, +\infty)$$ and so: $$f_Y(y, \theta) = f_X(e^{-\theta y}, \theta) = \frac{1}{\theta} e^{-y(1-\theta)}$$ Am I missing something? Is this distribution an $\mathcal{E}(1)$? If so, why?
If I may quote another answer: The simplest and surest way to compute the distribution density or probability of a random variable is often to compute the means of functions of this random variable. In the case at hand, one wants to write $\mathrm E(u(Y))$ as $$ \color{blue}{\mathrm E(u(Y))=\int u(y)g(y)\mathrm{d}y}, $$ for every bounded measurable function $u$. Then one can be sure that $g$ is the density of the distribution of $Y$. So, in a way, the functions $u$ play the role of a dummy variable and one wants the equality above to hold for every $u$. The rest is easy: introducing $a=1/\theta$ for notational convenience and using the fact that for every bounded measurable function $v$, by definition of the distribution of $X$, $$ \mathrm E(v(X))=\int_0^1 v(x)ax^{a-1}\mathrm dx, $$ one gets $$ \mathrm E(u(Y))=\mathrm E(u(-a\log X))=\int_0^1 u(-a\log x)ax^{a-1}\mathrm dx=\int_0^1 u(-a\log x)x^{a}\frac{a\mathrm dx}x. $$ The change of variable $y=-a\log(x)$ yields $y\gt0$, $x^a=\mathrm e^{-y}$ and $\mathrm dy=a\mathrm dx/x$ hence $$ \mathrm E(u(Y))=\int_0^{+\infty} u(y)\mathrm e^{-y}\mathrm dy. $$ This proves that the density $g$ of $Y$ is defined by $g(y)=\mathrm e^{-y}$ if $y\gt0$ and $g(y)=0$ otherwise. In other words, $Y$ is a standard exponential random variable.
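The conclusion can also be checked by simulation (a sketch, not part of the answer; the sampling step $X=U^\theta$ is the inverse-CDF method for the stated density, since $F_X(x)=x^{1/\theta}$):

```python
import math
import random

random.seed(0)
theta = 2.0
n = 200_000

total = 0.0
for _ in range(n):
    u = random.random()
    x = u ** theta                 # X has CDF x**(1/theta) on (0, 1)
    total += -math.log(x) / theta  # Y = -(1/theta) * ln X

mean = total / n
print(mean)  # close to 1, the mean of a standard exponential
```

In fact $Y=-\frac1\theta\ln X=-\ln U$ exactly, which is the classical way to sample a standard exponential variable.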
{ "language": "en", "url": "https://math.stackexchange.com/questions/150319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Laplace transform identity Is there a function equal to its Laplace transform? I mean $$ \int_{0}^{\infty}dt\exp(-st)f(t)= f(s).$$ Of course I know $f(t)=0 $ satisfy the equation. For the case of the Fourier transform, I know the Hermite Polynomials are eigenfunction of the Fourier transform, perhaps it's enough with a shift or rotation into the complex plane ($s \rightarrow i\omega$)?
For $l$ with real part greater than $-1$ a standard computation yields $$ \mathcal{L}(t^l) = s^{-l-1} \Gamma(l+1).$$ Pick $z$ with real part between $0$ and $1$ and let $f(t)=\sqrt{\Gamma(z)} t^{-z} + \sqrt{\Gamma(1-z)} t^{z-1}.$ Then we have $$ \mathcal{L}\left( f \right) = \sqrt{ \Gamma(z) \Gamma(1-z) } \cdot f(s).$$ Since $\displaystyle \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)} $ it suffices to find a $z$ with real part between $0$ and $1$ such that $\sin(\pi z)=\pi.$ A solution is $ z= \displaystyle \frac{\sin^{-1} \pi}{\pi} $ where $\sin^{-1}(\pi) = \dfrac{\pi}{2} - i \log(\pi + \sqrt{\pi^2-1}) .$ Therefore the Laplace transform operator has a fixed point.
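The key step, a $z$ with $\sin(\pi z)=\pi$, can be checked numerically (a sketch, not part of the answer):

```python
import cmath
import math

# z = arcsin(pi)/pi, using arcsin(pi) = pi/2 - i*log(pi + sqrt(pi**2 - 1))
z = (math.pi / 2 - 1j * math.log(math.pi + math.sqrt(math.pi ** 2 - 1))) / math.pi
print(z, cmath.sin(math.pi * z))  # sin(pi*z) equals pi up to rounding
```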
{ "language": "en", "url": "https://math.stackexchange.com/questions/150390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 1, "answer_id": 0 }
What are some books that I should read on 3D mathematics? I'm a first-grade highschool student who has been making games in 2D most of the time, but I started working on a 3D project for a change. I'm using a high-level engine that abstracts most of the math away from me, but I'd like to know what I'm dealing with! What books should I read on 3D mathematics? Terms like "rotation matrices" should be explained in there, for example. I could, of course, go searching these things on the interweb, but I really like books and I would probably miss something out by self-educating, which is what I do most of the time anyway. I mostly know basic mathematics, derivatives of polynomial functions is the limit to my current knowledge, but I probably do have some holes on the fields of trigonometry and such (we didn't start learning that in school, yet, so basically I'm only familiar with sin, cos and atan2).
In addition to computer graphics books like Computer Graphics Using OpenGL by Hill and Kelley, you might want to check out a linear algebra textbook such as Linear Algebra and its Applications by Gilbert Strang. I haven't read the open source linear algebra book called A First Course in Linear Algebra, but it seems interesting.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
UFDs are integrally closed Let $A$ be a UFD, $K$ its field of fractions, and $f \in A[T]$ a monic polynomial. I'm trying to prove that if $f$ has a root $\alpha \in K$, then in fact $\alpha \in A$. I'm trying to exploit some fact about irreducibility; will that help? I haven't done anything with splitting fields, but this is something I can look into.
The well-known simple proof of the Rational Root Test remains valid in a UFD or GCD domain. The sought result is simply the special case when $\:\!\rm f(x)\:\!$ is monic (lead coef $=1$). More generally, we can present the proof in a form that works for both gcds and cancellable ideals by using only universal laws common to both (commutative, associative, distributive etc). Below I give such a universal proof for degree $3$ (to avoid notational obfuscation). It should be clear how this generalizes to any degree. If $\rm\:D\:$ is a gcd domain and monic $\rm\:f(x)\in D[x]\:$ has root $\rm\:a/b,\ a,b\in D\,$ then $\rm\qquad f(x)\, =\, c_0 + c_1 x + c_2 x^2 + x^3,\:$ and $\rm\:b^3\:\! f(a/b) = 0\:$ yields $\rm\qquad c_0 b^3 + c_1 a b^2 + c_2 a^2 b\, =\, -a^3\ $ $\rm\qquad\qquad\ \ \,\Rightarrow\ (b^3, a b^2,\, a^2 b)\mid \color{#c00}{a^3},\ $ by the gcd divides LHS of above so also RHS $\rm\qquad\ (b,a)^3 = \, (b^3,\, a b^2,\, a^2 b,\ \color{#c00}{a^3}),\ $ so, by the prior divisibility $\rm\qquad\qquad\quad\:\! =\, (b^3,\, a b^2,\, a^2 b) $ $\rm\qquad\qquad\quad =\, b\, (b,a)^2,\ $ so cancelling $\rm\,(b,a)^2$ yields $\rm\qquad\ \, (b,a) =\, b\:\Rightarrow\: b\:|\:a,\ $ i.e. $\rm\: a/b \in D.\ \ $ QED The degree $\rm\:n> 1\:$ case has the same form: we cancel $\rm\:(b,a)^{n-1}$ from $\rm\,(b,a)^n = b\,(b,a)^{n-1}.$ The ideal analog is the same, except replace "divides" by "contains", and assume that $\rm\,(a,b)\ne 0\,$ is invertible (so cancellable), e.g. in any Dedekind domain. Thus the above yields a uniform proof that PIDs, UFDs, GCD and Dedekind domains satisfy said monic case of the Rational Root Test, i.e. that they are integrally closed. The proof is more concise using fractional gcds and ideals. Now, with $\rm\:r = a/b,\:$ we simply cancel $\rm\:(r,1)\:$ from $\rm\:(r,1)^n = (r,1)^{n-1}$ so $\rm\:(r,1) = (1),\:$ i.e. $\rm\:r \in D.\:$ See this answer for further details.
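The monic case is also easy to experiment with over $\mathbb Z$: by the rational root test, any rational root $a/b$ (in lowest terms) of a monic integer polynomial has $b \mid 1$, so every rational root is an integer dividing the constant term. A small Python sketch (the function name is mine):

```python
from fractions import Fraction

def monic_rational_roots(coeffs):
    """Rational roots of f(x) = c0 + c1*x + ... + c_{n-1}*x^{n-1} + x^n,
    with integer coefficients coeffs = [c0, ..., c_{n-1}].

    By the rational root test, a root a/b in lowest terms has b dividing
    the leading coefficient 1, so every rational root is an integer
    dividing c0.
    """
    c0 = coeffs[0]
    candidates = ({0} if c0 == 0 else
                  {s * k for k in range(1, abs(c0) + 1) if c0 % k == 0
                   for s in (1, -1)})

    def f(x):
        acc = Fraction(0)
        for c in reversed(coeffs + [1]):  # leading coefficient is 1 (monic)
            acc = acc * x + c             # Horner evaluation
        return acc

    return sorted(x for x in candidates if f(Fraction(x)) == 0)
```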
{ "language": "en", "url": "https://math.stackexchange.com/questions/150554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 2 }
Simple Combinatorics Order does matter: $$\begin{align*} &1 2 3\\ &1 3 2\\ &2 1 3\\ &2 3 1\\ &3 1 2\\ &3 2 1 \end{align*}$$ Order doesn't matter: $$1 2 3$$ How does this work? Order doesn't matter: $1 2 3$. Can't it also be, say, $3 2 1$? Doesn't this also show that order doesn't matter?
The question isn’t very clear, but I think that you’re trying to say something like this: If order matters, there are six permutations of the numbers $1,2$, and $3$: $123,132,213,231,312$, and $321$. If order doesn’t matter, there is just the set $\{1,2,3\}$. Can’t we also write that set $\{3,2,1\}$, for instance? Yes: the set whose only members are the numbers $1,2$, and $3$ can be written in any of the following ways: $$\begin{array}{} \{1,2,3\}&\{1,3,2\}&\{2,1,3\}\\ \{2,3,1\}&\{3,1,2\}&\{3,2,1\} \end{array}$$ Two sets are equal if and only if they have the same members; the order in which you list the members doesn’t matter. It doesn’t even matter if you list some members more than once: $$\{1,1,3,2,3,3,1,2,3,2,3,1\}=\{1,2,3\}\;,$$ though it’s hard to imagine why anyone would want to write it this way under normal circumstances. When we’re counting something and say that order doesn’t matter, we mean that we’re just counting sets of things. When order does matter, we’re counting permutations of things, like the six permutations of $1,2$, and $3$ that you listed. Informally you can think of a permutation as an ordered set, a set with a fixed order established for its elements; a plain old set has no such order.
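If it helps, the distinction is easy to see computationally: all six permutations collapse to a single underlying set (Python, purely for illustration):

```python
from itertools import permutations

# Order matters: six distinct arrangements of the symbols 1, 2, 3.
arrangements = [''.join(p) for p in permutations('123')]

# Order doesn't matter: every arrangement determines the same underlying set.
underlying_sets = {frozenset(p) for p in permutations('123')}
```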
{ "language": "en", "url": "https://math.stackexchange.com/questions/150607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
graph-theory task Possible Duplicate: Graph-theory exercise The picture below shows a graph connecting the cities А, Б, В, Г, Д, Е, Ж, И, К. On each path you can only move only in direction of the arrow. How many different ways are there from city A to city K? I understood that this exercise is from graph theory. Please tell me how I can solve exercises like this. P.S. Sorry for my poor English. It isn't my native language. I would be very grateful if you would mention errors in my English.
It’s like the earlier problem. Work backwards from К: the only way to get there directly is from Е. Е can only be reached directly from В and Г. Г can be reached only from А, while В can be reached from Б or А, and Б can be reached only from А. If you backtrack from К to А, the first step must be to Е. After that you have two choices, В and Г. If you go to В, you have three choices: directly to А, Б to А, or Г to А. Thus, there are three routes back through В. If you go instead to Г, you have no further choices: you can only go to А. Thus, there are altogether four routes: $$\begin{align*} &\text{А}\rightarrow\text{Г}\rightarrow\text{Е}\rightarrow\text{К}\\ &\text{А}\rightarrow\text{В}\rightarrow\text{Е}\rightarrow\text{К}\\ &\text{А}\rightarrow\text{Б}\rightarrow\text{В}\rightarrow\text{Е}\rightarrow\text{К}\\ &\text{А}\rightarrow\text{Г}\rightarrow\text{В}\rightarrow\text{Е}\rightarrow\text{К} \end{align*}$$ When you trace back, you see that only part of the graph actually matters:

   Б
  / \
 /   \
А-----В
 \   / \
  \ /   \
   Г-----Е-----К

(All arrows here are understood to point from left to right.)
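The backwards count also works as a tiny dynamic program on the directed graph; here is a sketch with the city names transliterated to Latin letters (A=А, B=Б, V=В, G=Г, E=Е, K=К):

```python
from functools import lru_cache

# Edges read off the reduced picture; all arrows point "rightwards".
edges = {
    'A': ['B', 'V', 'G'],   # А -> Б, А -> В, А -> Г
    'B': ['V'],             # Б -> В
    'G': ['V', 'E'],        # Г -> В, Г -> Е
    'V': ['E'],             # В -> Е
    'E': ['K'],             # Е -> К
    'K': [],
}

def count_paths(graph, src, dst):
    # number of paths src -> dst = sum of path counts over the successors
    @lru_cache(maxsize=None)
    def go(u):
        if u == dst:
            return 1
        return sum(go(v) for v in graph[u])
    return go(src)
```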
{ "language": "en", "url": "https://math.stackexchange.com/questions/150666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Subgroups of $\mathbb{Z}$? I have this set: $$I_a = \{ax+(7-a^2)y : x, y \in \mathbb{Z}\}$$ with $a$ integer. I have two tasks over it: 1) Prove that $I_a$ is subgroup of $\mathbb{Z}$; 2) Fully characterize this subgroup. I did the first task, but for the second I was thinking on cases, that is, if $a$ is equal to $7k, 7k\pm 1,7k\pm 2$ or $7k \pm 3$, for some $k \in \mathbb{Z}$. I was successful with $a= 7k$ (this is: $\langle 7 \rangle = I_a$), but I can't with the other cases. Some hints?
First, can you show that 7 is in $I_a$? Then, can you show that if there is even one number in $I_a$ that isn't a multiple of 7, then $I_a$ is all of $\bf Z$?
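Following the hint: $7 = a\cdot a + (7-a^2)\cdot 1 \in I_a$, and since every subgroup of $\mathbb Z$ is of the form $g\mathbb Z$, here $I_a = \gcd(a, 7-a^2)\,\mathbb Z$. The dichotomy is easy to check numerically (Python sketch, function name mine):

```python
from math import gcd

def generator(a):
    # I_a = {a*x + (7 - a^2)*y : x, y in Z} is the subgroup gcd(a, 7 - a^2) * Z
    return gcd(a, 7 - a * a)

gens = {a: generator(a) for a in range(-20, 21)}
```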
{ "language": "en", "url": "https://math.stackexchange.com/questions/150742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How do you explain paradoxes to non-mathematicians? For example, how do you explain why the perimeter of this staircase does not converge to $\sqrt2$? Or, why isn't $\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}$? I would say, the reason is simply because they cannot be proved. But non-mathematicians don't find such explanation satisfactory. They seem to be more satisfied by sophisticated explanation. For the staircase paradox, my friend who studies physics reasoned that it's because the perimeter is of dimension 1 not 2. It makes no sense to me, but they found such explanation more sensible.
In the case of $\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}$ at least, there is no paradox if you understand the limits of your rules. You and the non-mathematician learned at some point that $\sqrt{ab}=\sqrt{a}\sqrt{b}$ for nonnegative numbers a and b. If one has forgotten this last part, then they might be amazed by $\sqrt{(-1)(-1)}=\sqrt{-1}\sqrt{-1}$, because they have blindly applied a rule where it is not applicable. The same thing might happen if you pointed out that $ab \geq 0$ for all natural numbers. If someone forgets the last part and is amazed by the apparent paradox $(-1)5 \geq 0$ , does this really deserve to be called a paradox? Paradoxes do not have anything to do with "not being able to be proven", they are roughly just instances of where something you would expect to be true is not, in the system you are working in. Like the "twin paradox" in physics: if you are (erroneously) under the impression that time is absolute, you are stumped, but if you believe in relativity then everything is OK.
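For the staircase version, a numerical illustration can be more convincing than a verbal one: the staircase curves converge to the diagonal, but their length never moves from $2$, so there is no reason the lengths should converge to $\sqrt 2$; "length" is simply not continuous under this kind of convergence. A small Python sketch (the constructions are mine):

```python
import math

def staircase(n):
    """Vertices of the n-step staircase from (0, 1) down to (1, 0)."""
    pts = [(0.0, 1.0)]
    for _ in range(n):
        x, y = pts[-1]
        pts.append((x + 1.0 / n, y))            # step right
        pts.append((x + 1.0 / n, y - 1.0 / n))  # step down
    return pts

def length(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def max_gap(pts):
    # largest distance from a staircase vertex to the diagonal x + y = 1
    return max(abs(x + y - 1.0) for x, y in pts) / math.sqrt(2)
```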
{ "language": "en", "url": "https://math.stackexchange.com/questions/150791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Ordinal exponentiation of limit ordinal Prove that if $a$ is a limit ordinal and $p$ and $q$ are finite ordinals, then $(ap)^q = a^q p^q$. In my book, the right side of the equality is written as $a^qp$, but I think it's a typo, since the equality does not hold in the case $q=0$ and $p\ge1$.
The book is correct, except in the case that $q=0$ and $p\neq 1$. One result you might want to prove is that for any limit ordinal $\lambda$ and any non-$0$ finite ordinal $p$, we have $p\cdot\lambda=\lambda$. From that, associativity of multiplication, and the fact that $0\cdot\alpha=0=\alpha\cdot 0$ for all $\alpha$, the book's equality follows readily by induction for non-$0$ finite $q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find Formulas of Absolute and "Ray" Functions $$y = \begin{cases} 0, & \text{if } x < 0, \\ 2x, & \text{if } x \geq 0. \end{cases}$$ How can we represent the above function in one formula? I'm not exactly sure how to tackle this sort of question. (This is not homework.) It comes from Functions and Graphs by I. M. Gelfand on page 32. On the face of it, it's an easy question that should have an easy answer. Yet my math intuition skills need a lot of work. (Hopefully this book will help!) To solve this, is there a general method to work backwards from these two "rays" to an absolute value function? Likewise, turning to page 33 to investigate another problem: $$y = \begin{cases} 2x + 1, & \text{if } x \geq 0, \\ \frac{1}{2}x + 1, & \text{if } x \leq 0. \end{cases}$$ For example, $|2x+1|-|\frac{1}{2}x+1|$ clearly can't work as the formula. Besides guessing and attempting all sorts of possible formulas to the end of time, how can we look at this systematically? Thank you so much for your time and patience!
$y = \max(0,2x) = \frac{1}{2}(|2x| + 2x) = |x| + x$. More generally, suppose you had $y = ax$ for $x > 0$, and $y = bx$ for $x<0$. Consider the generic function $y = c|x| - dx$. Then for $x > 0$, $y = cx - dx$, and for $x < 0$, $y = -cx - dx$. So you want to choose $c$ and $d$ to solve $c - d = a$ and $-c - d = b$. From this you can work out the general form of $c$ and $d$.
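Following the general recipe: solving $c - d = a$ and $-c - d = b$ gives $c = \frac{a-b}{2}$, $d = -\frac{a+b}{2}$. A quick Python check of both of the book's examples (the function names are mine):

```python
def two_rays(a, b):
    """Single formula c*|x| - d*x for: y = a*x if x >= 0, y = b*x if x <= 0."""
    c = (a - b) / 2
    d = -(a + b) / 2
    return lambda x: c * abs(x) - d * x

f1 = two_rays(2, 0)                      # first example: max(0, 2x) = |x| + x
f2 = lambda x: two_rays(2, 0.5)(x) + 1   # second example: add the common intercept 1
```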
{ "language": "en", "url": "https://math.stackexchange.com/questions/150928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A way to well-order the real line How is a well-ordering of the real line possible? I know that the axiom of choice provides a well-ordering, but intuitively this does not seem to make sense. How can you compare 1.111111.... and 1.111111... when we can never know the full digits of each number?
I'm not sure you understand what well-ordering is. It has very little to do with comparing, in the sense of the usual ordering a set may have. For example, the usual ordering of the rationals is not a well-ordering, but one can explicitly construct a well-ordering of the rationals which bears essentially no relation to the usual ordering.
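To make "an explicit well-ordering of the rationals" concrete: the Calkin–Wilf sequence lists every positive rational exactly once, and listing $\mathbb Q^+$ in this order well-orders it with order type $\omega$, bearing no relation to the usual order. A Python sketch:

```python
from fractions import Fraction

def calkin_wilf(n):
    """n-th term (0-indexed) of the Calkin-Wilf enumeration of Q+.

    The recurrence x_{k+1} = 1 / (2*floor(x_k) - x_k + 1), starting at 1,
    visits every positive rational exactly once.
    """
    q = Fraction(1)
    for _ in range(n):
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)
    return q

first_eight = [calkin_wilf(n) for n in range(8)]
```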
{ "language": "en", "url": "https://math.stackexchange.com/questions/150992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Conditions for integrability Michael Spivak, in his "Calculus", writes: "Although it is possible to say precisely which functions are integrable, the criterion for integrability is too difficult to be stated here." I request someone to please state that condition. Thank you very much!
Just a small clarification (as an answer, as I cannot comment): the way in which mixedmath wrote the answer might lead to confusion; a different way to state it would be: A bounded function $f$ on $[a,b]$ is Riemann integrable iff it is continuous almost everywhere. Note that this assumes the function to be bounded, and boundedness is not implied by the other hypotheses (for instance, think of $f=\frac{1}{\sqrt{x}}$, whose improper integral on $[0,1]$ is $2$ but which is not bounded, and hence not Riemann integrable).
{ "language": "en", "url": "https://math.stackexchange.com/questions/151060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
An alternating series ... Find the limit of the following series: $$ 1 - \frac{1}{4} + \frac{1}{6} - \frac{1}{9} + \frac{1}{11} - \frac{1}{14} + \cdots $$ If I go the integration way, all is fine for a while, but then things become pretty ugly. I'm trying to find out if there is some easier way to follow.
Below are my personal notes on a powerful method for the summation of series using complex analytic methods. The result is quite general and the final calculation is simple. If you keep this up your sleeve, later you'll come across this problem again and, in Ramanujan-like fashion, immediately state: "The value of this sum is simply $$ \sum_{\zeta= \pm 1/5} \operatorname{Res}_{z=\zeta}\left( \frac{\pi \cot(\pi z)}{25z^2-1} \right) = \frac{\pi}{5} \cot \left( \frac{\pi}{5} \right).$$" P.S. Sorry it isn't LaTeXed up; I don't have the stamina to type all this up at the moment.
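For what it's worth, the value can be confirmed numerically: grouping the terms in pairs, the series is $\sum_{n\ge 0}\left(\frac{1}{5n+1}-\frac{1}{5n+4}\right)$, and the residue computation gives $\frac{\pi}{5}\cot\frac{\pi}{5}$. A quick Python check:

```python
import math

def partial_sum(pairs):
    # 1 - 1/4 + 1/6 - 1/9 + 1/11 - 1/14 + ... grouped as 1/(5n+1) - 1/(5n+4)
    return sum(1 / (5 * n + 1) - 1 / (5 * n + 4) for n in range(pairs))

closed_form = math.pi / (5 * math.tan(math.pi / 5))  # (pi/5) * cot(pi/5)
approx = partial_sum(200_000)
```

The tail after $N$ pairs is $O(1/N)$, so 200,000 pairs is enough for a few digits.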
{ "language": "en", "url": "https://math.stackexchange.com/questions/151113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Every closed set in $\mathbb R^2$ is the boundary of some other subset A problem is bugging me many years after I first met it: Prove that any closed subset of $\mathbb{R}^2$ is the boundary of some set in $\mathbb{R}^2$. I have toyed with this problem several times over the last 20 years, but I have never managed to either prove it, or conversely prove that the question as posed is in some way wrong. I can't remember which book I found the question in originally (but I am pretty sure that is exactly the question as it appeared in the book). Any help, with either the topology or the source, would be gratefully received!
Any subspace of $\mathbb{R}^2$ is second countable, and hence separable. So if $X$ is your closed subspace, then let $A$ be a countable subset of $X$ that is dense in $X$. Then the closure of $A$ (in $\mathbb{R}^2$) is $X$, while the interior of $A$ is empty (since any nontrivial open set in $\mathbb{R}^2$ is uncountable). Thus $X$ is the boundary of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/151265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Product of two ideals doesn't equal the intersection The product of two ideals is defined as the set of all finite sums $\sum f_i g_i$, with $f_i$ an element of $I$, and $g_i$ an element of $J$. I'm trying to think of an example in which $IJ$ does not equal $I \cap J$. I'm thinking of letting $I = 2\mathbb{Z}$, and $J = \mathbb{Z}$, and $I\cap J = 2\mathbb{Z}$? Can someone point out anything faulty about this logic of working with an even ideal and then an odd ideal? Thanks in advance.
But in your example $IJ = I = I \cap J$. How about taking $I = J = 2 \mathbb{Z}$. Then $I \cap J = 2\mathbb{Z}$ while $IJ = 4\mathbb{Z}$. In general $IJ \subset I \cap J$, and the equality holds if $I+J = R$ where $R$ is the ring you are working with.
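In $\mathbb Z$ this is easy to experiment with: $a\mathbb Z \cdot b\mathbb Z = ab\,\mathbb Z$ while $a\mathbb Z \cap b\mathbb Z = \operatorname{lcm}(a,b)\,\mathbb Z$, and $a\mathbb Z + b\mathbb Z = \gcd(a,b)\,\mathbb Z$, so the two agree exactly when $\gcd(a,b)=1$, i.e. when $I + J = \mathbb Z$. A Python sketch:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# For I = aZ and J = bZ: the product IJ = (a*b)Z, the intersection is lcm(a,b)Z.
examples = {(a, b): (a * b, lcm(a, b)) for a, b in [(2, 2), (2, 3), (4, 6)]}
```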
{ "language": "en", "url": "https://math.stackexchange.com/questions/151317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 3 }
Can the borders of a map be deformed to give arbitrary area to any region? Let's say I have a geographic map, a connected region divided into sub-regions. Is it possible to deform the map (the borders of the regions) so that each sub-region is of arbitrary area while maintaining the adjacencies? I think it is possible, but I've forgotten almost all my topology. Is this a theorem? (Or basic definition?) Also, what is the correct way to describe how the original map and its deformation are related, does "homeomorphic" apply here?
Here is a constructive proof that it is possible. It produces some pretty skinny and twisty regions that look like plasticene, though; @whuber's alternative solution in the comments will produce round blobbly regions that look like bubbles. Enlarge the map uniformly so that every region is at least as big as its desired area. Now you just need to shrink the regions while maintaining adjacency. Pick a region $R$ whose area needs to be reduced. Find a connected sequence of regions $R, R_1, R_2, \ldots, R_n$ such that $R_n$ has a boundary with the outside space. Shrink $R$ to its desired area by "pulling in" its border with $R_1$ while keeping the endpoints of the border fixed, so that the topological adjacencies between regions do not change as shown below. This has increased the area of $R_1$, so for $i = 1, 2, \ldots, n$, restore $R_i$ to its original area by pulling in its border with $R_{i+1}$, or with the outside when $i=n$, in the same way. Thus you can reduce the area of any chosen region $R$ without changing the areas of the other regions, nor the adjacencies between regions. Repeat for all the regions that need shrinking, and you're done. Below, for example, we reduce the area of region $A$ using the sequence $A, B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/151409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Convex functions in integral inequality Let $\mu,\sigma>0$ and define the function $f$ as follows: $$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\mathrm \exp\left(-\frac{(x-\mu)^2}{2\sigma ^2}\right) $$ How can I show that $$ \int\limits_{-\infty}^\infty x\log|x|f(x)\mathrm dx\geq \underbrace{\left(\int\limits_{-\infty}^\infty x f(x)\mathrm dx\right)}_\mu\cdot\left(\int\limits_{-\infty}^\infty \log|x| f(x)\mathrm dx\right) $$ which is also equivalent to $\mathsf E[ X\log|X|]\geq \underbrace{\mathsf EX}_\mu\cdot\mathsf E\log|X|$ for a random variable $X\sim\mathscr N(\mu,\sigma^2).$
$$\int x\log|x|\,f(x)\,dx-\mu\int \log|x|\,f(x)\,dx=\int(x-\mu)\log|x|\,f(x)\,dx=\int x\log|x+\mu|\,\phi(x)\,dx=\int_0^{\infty} x\log\left|\frac{\mu+x}{\mu-x}\right|\phi(x)\,dx,$$ where $\phi$ is the density of $N(0,\sigma^2)$ (substitute $x\mapsto x+\mu$ in the third integral, then fold the negative half-line onto the positive one), and the integrand in the last integral is positive since $\mu>0$. All that gets used is that $\phi$ is symmetric.
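A Monte Carlo sanity check of the inequality, for a few illustrative $(\mu,\sigma)$ pairs with $\mu>0$ (the sample size and seed are arbitrary choices of mine):

```python
import math
import random

def gap(mu, sigma, n=200_000, seed=0):
    """Estimate E[X log|X|] - mu * E[log|X|] for X ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    lhs = sum(x * math.log(abs(x)) for x in xs) / n
    rhs = mu * sum(math.log(abs(x)) for x in xs) / n
    return lhs - rhs
```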
{ "language": "en", "url": "https://math.stackexchange.com/questions/151474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
The number of elements which are squares in a finite field. While reading some introductory notes about the projective special linear group $PSL(2,q)$, where $q$ is the order of the field, I saw: "...in a finite field of order $q$, the number of elements ($\neq 0$) which are squares is $q-1$ if $q$ is an even number and is $\frac{1}{2}(q-1)$ if $q$ is an odd number...". I can see it for $\mathbb Z_5$ or $GF(2)$. Any hints for proving the above fact? Thanks.
If our field is of order $2^k$, then the map $x \mapsto x^2$ is injective and thus bijective since the field is finite. Therefore every element is a square. When the field has odd order, consider the equivalence relation on the nonzero elements defined by $x \sim y \Leftrightarrow x^2 = y^2$. The number of nonzero squares is the number of equivalence classes of this relation. Each equivalence class contains exactly two elements, so half of the nonzero elements are squares. Intuitively, in the list of squares of nonzero elements $$a_1^2,\ a_2^2,\ a_3^2,\ \ldots$$ we get repetitions when $a_i^2 = a_j^2 \Leftrightarrow a_i = a_j$ or $a_i = -a_j$. Since $a_i \neq -a_i$ for each $a_i$, rearranging and relabeling the list gives $$b_1^2, (-b_1)^2, b_2^2, (-b_2)^2, b_3^2, (-b_3)^2, \ldots$$ where $b_i \neq \pm b_j$. Again, the number of different elements in this list is half the number of nonzero elements.
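For prime fields this is easy to verify by brute force (the even-order case in general concerns $GF(2^k)$, which for $k>1$ would need polynomial arithmetic that I omit here; the only prime field of even order is $GF(2)$ itself):

```python
def nonzero_squares(p):
    """The set of nonzero squares in the prime field Z/pZ."""
    return {(x * x) % p for x in range(1, p)}

counts = {p: len(nonzero_squares(p)) for p in [2, 3, 5, 7, 11, 13]}
```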
{ "language": "en", "url": "https://math.stackexchange.com/questions/151530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 5, "answer_id": 1 }
Infinite Degree Algebraic Field Extensions In I. Martin Isaacs Algebra: A Graduate Course, Isaacs uses the field of algebraic numbers $$\mathbb{A}=\{\alpha \in \mathbb{C} \; | \; \alpha \; \text{algebraic over} \; \mathbb{Q}\}$$ as an example of an infinite degree algebraic field extension. I have done a cursory google search and thought about it for a little while, but I cannot come up with a less contrived example. My question is What are some other examples of infinite degree algebraic field extensions?
Let $\{n_1,n_2,...\}$ be pairwise coprime, nonsquare positive integers. Then $\mathbb{Q}(\sqrt{n_1},\sqrt{n_2},...)$ is an algebraic extension of infinite degree.
{ "language": "en", "url": "https://math.stackexchange.com/questions/151586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 5, "answer_id": 2 }
Generalizing the total probability of simultaneous occurrences for independent events I want to generalize a formula and I need your help with this. This is not my homework or assignment but I need to come up with a concise formula that fits my documentation. Background for my problem: Considering all events to be independent of each other, Let the probability of $Event$ $0$ be $P_{0}$ and $Event$ $1$ be $P_{1}$ and so on ... Then the probability that two events occur simultaneously is : $P_0P_1$ . This is nothing but the area of intersection of two circles $P_0$ and $P_1$. Continuing the same way, The total probability of simultaneous occurrence in case of three events is: $P_0 P_1 + P_0P_2 + P_1P_2 - 2 P_{0}P_{1}P_{2}$. Also can be visualized by drawing three intersecting circles. One clarification here: This gives me the total probability of any two events plus all the three occurring at the same time right? I cannot visualize the formula by drawing circles anymore. How can the above formula be generalized to get the probability of simultaneous occurrences when there are 4, 5, ... independent events. I have seen that the inclusion-exclusion principle is the answer. But I am not able to get an intuition for it. The inclusion-exclusion principle gives the probability of 2,3,4 sets intersecting but isn't my question different? I get this doubt because, probability of four independent events occurring simultaneous is: $P_0P_1P_2P_3$. But what I need is a general formula for the total probability of two or more independent events at the same time. Can anyone of you please throw some light? Yes indeed I meant that the probability of "two or more events". The answer you have given is very precise and the one I was looking for. Yes it is boring to try to visualize circles when the number exceeds three. Instead, in general if use 1 - ($w_0 + w_1$) then we should land up correctly given that the events are independent. Thank you so much.
$$1-\left(1+\sum_{i=1}^n\frac{P_i}{1-P_i}\right)\cdot\prod_{k=1}^n(1-P_k)$$
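As a sanity check, this closed form ($1 - P(\text{no event}) - P(\text{exactly one event})$, valid for independent events with every $P_i < 1$) agrees with a brute-force enumeration over all $2^n$ outcomes (Python, function names mine):

```python
from itertools import product

def p_at_least_two(probs):
    # 1 - P(no event) - P(exactly one event), for independent events
    none = 1.0
    for p in probs:
        none *= 1.0 - p
    return 1.0 - (1.0 + sum(p / (1.0 - p) for p in probs)) * none

def brute_force(probs):
    total = 0.0
    for outcome in product([0, 1], repeat=len(probs)):
        if sum(outcome) >= 2:
            w = 1.0
            for hit, p in zip(outcome, probs):
                w *= p if hit else 1.0 - p
            total += w
    return total
```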
{ "language": "en", "url": "https://math.stackexchange.com/questions/151635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
A graph with almost all degrees different Is it true that for every $n\geq 3$ one can find an undirected graph, with no loops or double edges, on $n$ vertices such that there are $n-1$ vertices, all with different degrees? (Clearly there cannot be $n$ different degrees by the pigeonhole principle). Thanks for any help.
You can induct. Let $G_n$ be such a graph on $n$ vertices; then your pigeonhole argument shows that $G_n$ has either a vertex of degree $n-1$ or a vertex of degree $0$, but not both. Also, its complement $\tilde{G_n}$ is another such graph, which has a vertex of degree $0$ iff $G_n$ does not. So we can suppose without loss of generality that $G_n$ has no vertex of degree $0$. But then the union of $G_n$ with an isolated vertex is such a graph on $n+1$ vertices.
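The induction can be turned into an explicit construction and checked for small $n$ (a Python sketch; `build` follows the proof: complement whenever an isolated vertex appears, then adjoin a new isolated vertex):

```python
from itertools import combinations

def degrees(n, edges):
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d

def complement(n, edges):
    return set(combinations(range(n), 2)) - set(edges)

def build(n):
    """Graph on vertices 0..n-1 in which some n-1 vertices have distinct degrees."""
    edges = {(0, 1)}  # base case n = 3: degrees 1, 1, 0
    for m in range(3, n):
        if 0 in degrees(m, edges):    # WLOG no isolated vertex: else take complement
            edges = complement(m, edges)
        # vertex m is adjoined as a new isolated vertex (no edges added)
    return edges
```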
{ "language": "en", "url": "https://math.stackexchange.com/questions/151717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
When is something "obvious"? I try to be a good student but I often find it hard to know when something is "obvious" and when it isn't. Obviously (excuse the pun) I understand that it is specific to the level at which the writer is pitching the statement. My teacher is fond of telling a story that goes along the lines of A famous maths professor was giving a lecture during which he said "it is obvious that..." and then he paused at length in thought, and then excused himself from the lecture temporarily. Upon his return some fifteen minutes later he said "Yes, it is obvious that...." and continued the lecture. My teacher's point is that this only comes with a certain mathematical maturity and even eludes the best mathematicians at times. I would like to know : * *Are there any ways to develop a better sense of this, or does it just come with time and practice ? *Is this quote a true quote ? If so, who is it attributable to and if not is it a mathematical urban legend or just something that my teacher likely made up ?
I believe the famous mathematician was G. H. Hardy, and I am sure he had a different view as to what is "obvious" compared to lesser mortals (like myself). I would say that "obvious" should only ever be applied to things that the speaker believes that his listener should find "simple" ..
{ "language": "en", "url": "https://math.stackexchange.com/questions/151782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 10, "answer_id": 7 }
Proof for the number of states of a regular language Let L be a formal language. Two words $u,v \in \Sigma^*$ are called separable if $\exists w \in \Sigma^* : uw \in L, vw \not\in L$. Nerode lemma: Let L be a formal language. If there are $n$ words of L that are pairwise separable, a finite state machine that recognizes L must have at least $n$ states. Let $L_i = \{ubv \; | \; u,v \in \{a,b\}^*, |v| = i - 1\}$. I now need to prove that a finite state machine for $L_i$ contains at least $2^i$ states using the definition and lemma above. Therefore there should be $2^i$ pairwise separable words. I tried this for $L_2$: "-" means that the two words are not separable. The fields marked with "x" are ignored because of the symmetry. Therefore e.g. $\lambda$ and $b$ are separable by $a$ because $\lambda.a = a \not\in L \wedge b.a \in L$. Nevertheless, instead of 4 pairwise separable items ($2^2$) I only got three: $\lambda, a$ and $ba$. One is missing. Could you please help me to find the missing one and a way to prove it for all $L_i$? Thanks in advance!
Let $E = \Sigma^i$. Clearly $|E| = 2^i$. I claim that all pairs from $E$ are separable. Suppose $u,v \in E$, with $u \neq v$. Let $k$ be the last index such that $[u]_k \neq [v]_k$. WLOG, we can take $[u]_k = b$. Then $u$ has the form $u = \alpha b \beta$, with $|\alpha| = k-1$, $|\beta| = i-k$. Choose $w = a^{k-1}$ (possibly empty). Then we have $[uw]_{|uw|-i+1} = b$, but $[vw]_{|vw|-i+1} = a$, so we have that $uw \in L_i$, but $vw \notin L_i$.
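This argument is easy to test exhaustively for small $i$ (a Python sketch; `in_L` encodes membership in $L_i$, i.e. "the $i$-th symbol from the right is $b$", and `witness` constructs the separating word $w = a^{k-1}$ from the proof):

```python
from itertools import product

def in_L(word, i):
    # w is in L_i  iff  the i-th symbol from the right of w is 'b'
    return len(word) >= i and word[-i] == 'b'

def witness(u, v):
    # last position k (1-based) where u and v differ; separating word w = a^(k-1)
    k = max(j for j in range(len(u)) if u[j] != v[j]) + 1
    return 'a' * (k - 1)

def separable(u, v, i):
    w = witness(u, v)
    return in_L(u + w, i) != in_L(v + w, i)
```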
{ "language": "en", "url": "https://math.stackexchange.com/questions/151816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does a metrizable group require continuity of the inverse? A metrizable group is a metric space $(G,d)$ with a binary operation $\cdot$ such that $(G,\cdot)$ is a group and the maps $(\cdot):G\times G\to G$ and $f:G\to G$ given by $(\cdot)(x,y)=xy$ and $f(x)=x^{-1}$ are continuous with respect to $d$. Why does the definition require that $f$ be continuous? Is it possible to have continuity of $(\cdot)$ without continuity of $f$?
Take $G=\mathbb R$ with addition and put on it the topology which has the sets $\{(-\infty,a):a\in\mathbb R\}$ as a basis. Then addition is continuous, but inversion ($x\mapsto -x$) is not: the preimage of $(-\infty,a)$ is $(-a,\infty)$, which is not open. You want it metrizable, though, and this topology is not even Hausdorff, so it only shows what can go wrong once metrizability is dropped...
{ "language": "en", "url": "https://math.stackexchange.com/questions/151878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Prove that $\sin(x+\frac{\pi}{n})$ converges uniformly to $\sin(x)$. I've just started learning uniform convergence and understand the formal definition. What I've got so far is: $|\sin(x+ \frac{\pi}{n}) - \sin(x)| < \epsilon \ \ \ \ \forall x \in \mathbb{R} \ \ \ \ $ for $n \geq N, \epsilon>0$ LHS = $|2\cos(x+\frac{\pi}{2n})\cdot \sin(\frac{\pi}{2n})| < \epsilon $ Am I going down the right route here? I've done some examples fine, but when trig is involved on all space, I get confused as to what I should be doing... Any help at all would be VERY much appreciated; I have an analysis exam tomorrow and need to be able to practice this. Thanks.
What you did so far is fine. If you want to continue in the direction you started, it remains to notice that $$\left|2\cos\left(x+\frac{\pi}{2n}\right)\cdot \sin\frac{\pi}{2n}\right| \le 2\left|\sin\frac{\pi}{2n}\right|$$ and that for any given $\varepsilon>0$ you can choose $N$ such that you have inequality $$2\left|\sin\frac{\pi}{2n}\right|<\varepsilon$$ for $n>N$. Can you see why this is true?
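Numerically, the point is that the bound $2\left|\sin\frac{\pi}{2n}\right|$ is independent of $x$ and tends to $0$; a quick Python check (the sampling grid is an arbitrary choice of mine):

```python
import math

def bound(n):
    # uniform bound: |sin(x + pi/n) - sin(x)| <= 2*|sin(pi/(2n))| for all x
    return 2 * abs(math.sin(math.pi / (2 * n)))

def sampled_sup(n, samples=10_000):
    # sampled estimate of sup over x of |sin(x + pi/n) - sin(x)|, over one period
    return max(abs(math.sin(x + math.pi / n) - math.sin(x))
               for x in (2 * math.pi * k / samples for k in range(samples)))
```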
{ "language": "en", "url": "https://math.stackexchange.com/questions/152026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Examples demonstrating that the finitely generated hypothesis in Nakayama's lemma is necessary Recall that Nakayama's lemma states that Let $R$ be a commutative ring with unity, and let $J$ be the Jacobson radical of $R$ (the intersection of all the maximal ideals of $R$). For any finitely generated $R$-module $M$, if $IM=M$ for some ideal $I\subseteq J$, then $M=0$. I was trying to come up with a simple / illustrative example of the above situation where $M$ is non-zero and not finitely generated, but $IM=M$ for some $I\subseteq J$. The only example I managed to come up with was $$R=k[[\overline{x}_0,\overline{x}_1,\overline{x}_2,\ldots]]=k[[x_0,x_1,x_2,\ldots]]/(x_0^2,x_1^2-x_0,x_2^2-x_1,\ldots)$$ where $k$ is a field, and $I=M=(\overline{x}_0,\overline{x}_1,\overline{x}_2,\ldots)\neq0$. Since $k[[x_0,x_1,x_2,\ldots]]$ is a local ring with maximal ideal $(x_0,x_1,x_2,\ldots)$, we have that $R$ is a local ring with maximal ideal $I$, so that $I$ is the Jacobson radical of $R$, and $$IM=I^2=(\overline{x}_0^2,\overline{x}_1^2,\overline{x}_2^2,\ldots,\overline{x}_0\overline{x}_1,,\ldots,\overline{x}_1\overline{x}_2,\ldots)=(0,\overline{x}_0,\overline{x}_1,\ldots,\overline{x}_0\overline{x}_1,\ldots,\overline{x}_1\overline{x}_2,\ldots)\supseteq I$$ so that $IM=I^2=I=M$. Is there an easier example - maybe with $R$ noetherian? Also, what is the geometric picture here? As I understand it, letting $M$ be non-finitely generated corresponds to looking at non-coherent sheaves, which I don't have a whole lot of intuition for - even less than my novice understanding of coherent sheaves.
If $(R,\mathfrak m)$ is a local domain which is not a field, then for its fraction field $K$ we have $\mathfrak m K=K$. Amusingly, this proves that $K$ is not a finitely generated $R$-module.
{ "language": "en", "url": "https://math.stackexchange.com/questions/152164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Localisation is isomorphic to a quotient of polynomial ring I am having trouble with the following problem. Let $R$ be an integral domain, and let $a \in R$ be a non-zero element. Let $D = \{1, a, a^2, ...\}$. I need to show that $R_D \cong R[x]/(ax-1)$. I just want a hint. Basically, I've been looking for a surjective homomorphism from $R[x]$ to $R_D$, but everything I've tried has failed. I think the fact that $f(a)$ is a unit, where $f$ is our mapping, is relevant, but I'm not sure. Thanks
Define $\phi: R[x] \rightarrow R_D$ by sending $x$ to $1/a$. We will prove that the kernel of $\phi$ is the ideal of $R[x]$ generated by $ax -1$. Let $p\left(x\right) \in \ker \phi$. Note that $p(x)$ can be naturally viewed as an element of $R_D[x]$. Since $\phi(p(x))=0$, we have $p(1/a)=0$ in $R_D$. So $1/a$ is a root of $p(x) \in R_D[x]$. Since the leading coefficient of $ax-1$ is a unit in $R_D$, Euclidean division applies and we get that $p(x) = (ax-1) g(x)$ for some $g(x) \in R_D[x]$. Inspecting coefficients on both sides starting from lowest degree terms, we see that in fact $g(x) \in R[x]$ and we are done. PS: This answer is inspired by Robert Green's answer (and our subsequent interaction) to an identical question that I recently asked, being unaware of the existence of this post.
{ "language": "en", "url": "https://math.stackexchange.com/questions/152236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 5, "answer_id": 1 }
An interesting problem about almost sure convergence Assume that $X_1,X_2,\ldots$ are independent random variables (not necessarily of the same distribution). Assume that $Var[X_n]>0$ for all $n$. Assume also that $$\sum_{n=0}^\infty \frac{Var[X_n]}{n^2}<\infty,$$ that $$\frac{1}{n}\sum_{i=1}^n(X_i-E[X_i])\to 0 \textrm{ almost surely as $n\to\infty$},$$ that $$E[X_n]>0 \textrm{ for all $n$},$$ and that $$\liminf_{n\to\infty} E[X_n] > 0.$$ How can we prove that $$\sum_{i=1}^n X_i \to \infty\text{ almost surely as $n\to\infty$?}$$ This seems to intuitively make sense, but a formal proof escapes me. Also, what can we say if $E[X_n] = 0$ for all $n$ and $\lim_{n\to\infty} E[X_n] = 0$?
By the strong law of large numbers, $\frac1n\sum\limits_{k=1}^n\left(X_k-E(X_k)\right)$ converges almost surely. The hypothesis on the expectations implies that $\liminf\limits_{n\to\infty}\frac1n\sum\limits_{k=1}^nE(X_k)\geqslant c$ with $c\gt0$. Summing these two yields $\liminf\limits_{n\to\infty}\frac1n\sum\limits_{k=1}^nX_k\geqslant c$ hence $\sum\limits_{k=1}^nX_k$ diverges to $+\infty$ almost surely (and it does so, at least linearly).
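A quick numerical illustration of this divergence, sketched in Python for the simplest i.i.d. case (the choice $X_k = 0.5 + Z_k$ with standard normal $Z_k$ is mine for the example; it satisfies $\sum_n Var[X_n]/n^2<\infty$ and $\liminf E[X_n]=0.5>0$, and the function name is invented):

```python
import random

def partial_sum(n, mean=0.5, seed=0):
    """S_n = X_1 + ... + X_n for X_k = mean + standard normal noise."""
    rng = random.Random(seed)
    return sum(mean + rng.gauss(0.0, 1.0) for _ in range(n))

# S_n / n should settle near 0.5, so S_n itself grows roughly linearly,
# matching the "at least linearly" remark in the answer.
for n in (1_000, 10_000, 100_000):
    print(n, partial_sum(n) / n)
```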
{ "language": "en", "url": "https://math.stackexchange.com/questions/152280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluating $\lim\limits_{x\to\infty} \frac{\ln(x^2+4)}{\sinh^{-1}x}$ $$\lim\limits_{x\to\infty} \frac{\ln(x^2+4)}{\sinh^{-1}x}$$ This is an exam practice question. BTW, I am referring to the inverse hyperbolic function above. Since this is infinity/infinity, I used one application of L'Hôpital's rule for this one, and then did a bit of algebraic manipulation to get $\frac{2}{1}$, i.e. $2$. Is this right? Is there a quicker way to do this? It seems like Taylor polynomials are often used, but they confuse me - is this one of those cases where Taylor polynomials would have been easier?
HINT Use $\sinh^{-1}(x) = \ln (x + \sqrt{1+x^2})$. $$\displaystyle \lim_{x \rightarrow \infty} \dfrac{\ln(x^2+4)}{\ln(x+\sqrt{1+x^2})} = \displaystyle \lim_{x \rightarrow \infty} \dfrac{\ln(x^2) + \ln(1+4/x^2)}{\ln(x) + \ln(1+\sqrt{1+1/x^2})}$$ $$ = \displaystyle \lim_{x \rightarrow \infty} \dfrac{\ln(x^2)}{\ln(x)} \dfrac{\left(1+\dfrac{\ln(1+4/x^2)}{\ln(x^2)} \right)}{\left(1 + \dfrac{\ln(1+\sqrt{1+1/x^2})}{\ln(x)} \right)}$$ $$ = \displaystyle \lim_{x \rightarrow \infty} 2 \dfrac{\ln(x)}{\ln(x)} \lim_{x \rightarrow \infty} \dfrac{\left(1+\dfrac{\ln(1+4/x^2)}{\ln(x^2)} \right)}{\left(1 + \dfrac{\ln(1+\sqrt{1+1/x^2})}{\ln(x)} \right)}$$ $$ = 2 \displaystyle \lim_{x \rightarrow \infty} \dfrac{\left(1+\dfrac{\ln(1+4/x^2)}{\ln(x^2)} \right)}{\left(1 + \dfrac{\ln(1+\sqrt{1+1/x^2})}{\ln(x)} \right)} =2 $$
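The limit can also be sanity-checked numerically; note from the rewriting above that the convergence is only logarithmically fast, with error roughly $2\ln 2/\ln x$ (a throwaway sketch, function name mine):

```python
import math

def ratio(x):
    """ln(x^2 + 4) / asinh(x), which should tend to 2 as x -> infinity."""
    return math.log(x * x + 4.0) / math.asinh(x)

for x in (1e3, 1e9, 1e15):
    print(x, ratio(x))   # creeps up toward 2, roughly like 2 - 2*ln(2)/ln(x)
```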
{ "language": "en", "url": "https://math.stackexchange.com/questions/152344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
$\mathcal F$-measurable functions & their atoms Is it true that if $X$ is $\mathcal F$-measurable, or a Borel function, then it is constant on the atoms of $\mathcal F$ in all cases? Thanks
Consider the $\sigma$-algebra of all subsets of $\mathbb R$ with the measure defined as follows: $\mu(A)=0$ if $A$ is at most countable, $\mu(A)=\infty$ is $A$ is uncountable. Then $\mathbb R$ is an atom for this measure. Any function $f\colon \mathbb R\to\mathbb R$ is measurable, so it does not have to be constant. Another measure on the same $\sigma$-algebra: $\mu(A)=0$ if $0\notin A$ and $\mu(A)=1$ otherwise. (So, $\mu$ is a unit point mass at $0$). Again, $\mathbb R$ is an atom for this measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/152405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a published numerically stable version of the Recursive Least-Squares algorithm? I have implemented in MATLAB the recursive least-squares algorithm given in, for instance, Hayes's excellent Statistical Digital Signal Processing and Modeling (p. 541ff). However, when I run the algorithm on real data with a forgetting factor $\lambda<1$ (e.g. $\lambda = 0.9$), after about 2000 updates I always reach a point where my filter coefficients explode in size and obviously overflow. Stepping through the algorithm with a debugger reveals that the coefficients of $P$, the inverse correlation matrix, grow exponentially due, I believe, to the division by $\lambda$ at each timestep. I've done a literature search but found very little useful information. I found a paper by Bouchard that recommends forcing $P$ to be symmetric, but that didn't help. Is there a published version of the RLS algorithm that "fixes" these numerical instability problems? Or does anyone know what else I should try?
Have you looked into QR-RLS filters? I think Haykin's "Adaptive Filter theory" is the standard reference here.
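For reference, here is a sketch (in plain Python, all names invented for the example) of the conventional exponentially-weighted RLS update with the ad-hoc symmetrisation fix the question mentions. This is not the QR-RLS algorithm recommended above: QR-RLS propagates a square-root (QR/Cholesky) factor of the correlation matrix instead of $P$ itself, which is what gives it its numerical robustness.

```python
import random

def rls_step(w, P, x, d, lam=0.99):
    """One conventional RLS update for regressor x and desired sample d."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                   # gain vector
    e = d - sum(w[i] * x[i] for i in range(n))    # a priori error
    w = [w[i] + k[i] * e for i in range(n)]
    # P <- (P - k x^T P) / lam  (x^T P = (P x)^T since P is kept symmetric)
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    # ad-hoc fix: re-symmetrise P to slow the drift caused by rounding
    P = [[0.5 * (P[i][j] + P[j][i]) for j in range(n)] for i in range(n)]
    return w, P, e

# demo: identify a known 2-tap FIR filter from noiseless data
random.seed(1)
w, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]   # P0 = I/delta, delta = 0.01
xs = [random.gauss(0.0, 1.0) for _ in range(300)]
for t in range(1, 300):
    x = [xs[t], xs[t - 1]]
    w, P, _ = rls_step(w, P, x, 0.7 * x[0] - 0.3 * x[1])
print(w)   # should end up close to [0.7, -0.3]
```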
{ "language": "en", "url": "https://math.stackexchange.com/questions/152466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is this inequality true? Clearly if $a,b >0$ and $p \in \mathbb{N}$ $$ a^{p} + b^{p} \le (a+b)^{p} $$ Is there a constant $C = C(p)$ such that if $a,b >0$ and $p \in \mathbb{N}$ then $$ a^{p} - b^{p} \le C(p)(a-b)^{p} ? $$
It is not possible to find a constant $C(p)$ such that $$ a^{p} - b^{p} \le C(p)(a-b)^{p} $$ for all $a,b > 0$ when $p > 1$. For example, let $a = n+1$ and $b = n$. Then $$ a^p-b^p = (n+1)^p - n^p = \sum_{k=0}^{p-1} \binom{p}{k} n^k \to \infty $$ but $a-b = 1$.
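The counterexample is easy to check numerically (a throwaway sketch; the helper name is mine):

```python
def gap(n, p):
    """a^p - b^p with a = n + 1, b = n, so that (a - b)^p = 1."""
    return (n + 1) ** p - n ** p

# unbounded in n for p > 1, so no constant C(p) can dominate it
print([gap(n, 3) for n in (1, 10, 100, 1000)])   # [7, 331, 30301, 3003001]
```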
{ "language": "en", "url": "https://math.stackexchange.com/questions/152532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving that $\lim\limits_{x \to 0}\frac{e^x-1}{x} = 1$ I was messing around with the definition of the derivative, trying to work out the formulas for the common functions using limits. I hit a roadblock, however, while trying to find the derivative of $e^x$. The process went something like this: $$\begin{align} (e^x)' &= \lim_{h \to 0} \frac{e^{x+h}-e^x}{h} \\ &= \lim_{h \to 0} \frac{e^xe^h-e^x}{h} \\ &= \lim_{h \to 0} e^x\frac{e^{h}-1}{h} \\ &= e^x \lim_{h \to 0}\frac{e^h-1}{h} \end{align} $$ I can show that $\lim_{h\to 0} \frac{e^h-1}{h} = 1$ using L'Hôpital's, but it kind of defeats the purpose of working out the derivative, so I want to prove it in some other way. I've been trying, but I can't work anything out. Could someone give a hint?
Define $$ f_n(x)=\left(1+\frac{x}{n}\right)^n\tag{1} $$ Note that $$ f_n^{\,\prime}(x)=\left(1+\frac{x}{n}\right)^{n-1}\tag{2} $$ On compact subsets of $\mathbb{R}$, both $(1)$ and $(2)$ converge uniformly to $e^x$. This means that $$ \frac{\mathrm{d}}{\mathrm{d}x}e^x=e^x\tag{3} $$ Therefore, $$ \begin{align} \lim_{x\to0}\frac{e^x-1}{x} &=\lim_{x\to0}\frac{e^x-e^0}{x-0}\\ &=\left.\frac{\mathrm{d}}{\mathrm{d}x}e^x\right|_{x=0}\\ &=\left.e^x\right|_{x=0}\\ &=1\tag{4} \end{align} $$
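Whatever proof one prefers, the limit itself is easy to check numerically. A small sketch (function name mine; `math.expm1` computes $e^h-1$ without the catastrophic cancellation a naive `exp(h) - 1` suffers for tiny $h$):

```python
import math

def q(h):
    """(e^h - 1)/h, computed stably via expm1."""
    return math.expm1(h) / h

for h in (1e-1, 1e-4, 1e-8, 1e-12):
    print(h, q(h))   # tends to 1 as h -> 0
```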
{ "language": "en", "url": "https://math.stackexchange.com/questions/152605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 8, "answer_id": 3 }
Every Lindelöf topological group is isomorphic to a subgroup of the product of second countable topological groups. I want to show that every Lindelöf topological group is isomorphic to a subgroup of the product of second countable topological groups. I received an answer using the fact that Lindelöf topological groups are $\omega$-narrow, but I want to show it by using the following theorem. Theorem: Every Hausdorff topological group $G$ is topologically isomorphic to a subgroup of the group of isometries $Is(M)$ of some metric space $M$, where $Is(M)$ is taken with the topology of pointwise convergence. Any help would be greatly appreciated!
A nice reference for this sort of thing is the book Topological Groups and Related Structures by Arhangel'skii and Tkachenko. A Hausdorff topological group $G$ is said to be $\omega$-narrow if for every open neighborhood $U$ of the identity $e$, there is a countable set $A$ such that $AU=G$. Certainly every Lindelöf topological group is $\omega$-narrow; take $A\subset G$ to be a countable set such that $\{aU\}_{a\in A}$ is an open cover of $G$. Guran's Theorem (3.4.23 in the book mentioned) states that a topological group is $\omega$-narrow iff it embeds as a topological subgroup of a product of second countable topological groups. This result is more general than the one you are asking for and the proof can be found in the book. On the other hand, the proof here doesn't seem to use Uspenskij's theorem (that $G$ can be embedded in the isometry group of some metric space $M$, in particular the metric space of all bounded left uniformly continuous real-valued functions on $G$). Perhaps for Lindelöf $G$, there is a simpler proof using Uspenskij's theorem and someone else can point the way to this. I am curious to know where it is said that such a proof is possible?
{ "language": "en", "url": "https://math.stackexchange.com/questions/152717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Does there exist an analytic function such that $|f(z)|=|\sin z|$? Does there exist an analytic function $f(z)$ defined on $\Omega$ such that $|f(z)|=|\sin z|$ for all $z\in\Omega\subseteq\mathbb{C}$? Well, I guess $f(z)=c\sin(z)$ works for any constant $c$ with $|c|=1$, am I right?
Well $\sin z$ itself is analytic, so you can just let $f(z)=\sin z$. But maybe that's too easy...but yes, take $c=e^{i\theta}$, $\theta\in\mathbb{R}$, and your answer is correct. Depending on what $\Omega$ is, there could be other solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/152771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
$\frac{dy}{dx}=1+\frac{2}{x+y}$ solution by an "integrating term" I thought about this trick and then found an example to apply it to: $$\frac{dy}{dx}=1+\frac{2}{x+y}$$ This is the trick: add $\frac{dx}{dx}=1$ to both sides $$\frac{dy}{dx}+\frac{dx}{dx}=1+\frac{2}{x+y}+1$$ Using the linearity of $d$ $$\frac{d(x+y)}{dx}=2\frac{1+(x+y)}{x+y}$$ $$\frac{(x+y)d(x+y)}{1+(x+y)}=2dx$$ $$d(x+y)-\frac{d(x+y)}{1+(x+y)}=2dx$$ $$-\frac{d(x+y)}{1+(x+y)}=2dx-d(x+y)$$ Now $2dx-d(x+y)=2dx-dx-dy=dx-dy=d(x-y)$ $$-\frac{d(x+y)}{1+(x+y)}=d(x-y)$$ $$\frac{d(1+x+y)}{1+(x+y)}=d(y-x)$$ Integrating: $$\ln|1+x+y|=(y-x)+\ln C$$ $$1+x+y=C\exp\left(y-x\right)$$ Is this a one-off case, or a particular example of a certain method? Does anyone know more examples of ODEs that can be solved similarly? I know the integrating multiplier theory quite well, but this one seems like something extra to that.
In general for equations of the form $y'(x)=f(ax+by+c)$ (where $a, b$ and $c$ are constants with $b\neq 0$), you can use the change of variables $u=ax+by+c$. You obtain the equation: $$ y'=\frac{u'-a}{b}=f(u) $$ which can be solved using separation of variables: $$ \frac{du}{b \,f(u)+a}=dx $$
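As a numerical cross-check of the particular solution $1+x+y=C e^{y-x}$ found in the question (a sketch with the constant fixed at $C=1$; the bisection bracket $[0,10]$ and all names are choices made for this example):

```python
import math

def y_of(x, lo=0.0, hi=10.0):
    """Solve 1 + x + y = e^{y-x} (the C = 1 solution) for y by bisection."""
    g = lambda y: 1.0 + x + y - math.exp(y - x)
    assert g(lo) > 0.0 > g(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

x0 = 1.0
y0 = y_of(x0)
h = 1e-6
slope = (y_of(x0 + h) - y_of(x0 - h)) / (2.0 * h)   # dy/dx along the curve
print(slope, 1.0 + 2.0 / (x0 + y0))                 # the two should agree
```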
{ "language": "en", "url": "https://math.stackexchange.com/questions/152842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Showing $\sin(1/x)$ is not a rectifiable curve Intuitively, near $0$ the graph of $\sin(1/x)$ oscillates so wildly that it accumulates infinite length, but how can I formulate this properly?
Let $x_n = \frac{1}{\pi(\frac{1}{2}+n)}$, $n=0,1,\ldots$. Note that $\sin \frac{1}{x_n} = (-1)^n$ and that $x_n$ decreases monotonically to $0$. Consider the function on the interval $[x_n,x_0]$ with the partition $t_k = x_{n-k}$, $k=0,\ldots,n$. Then the variation is exactly $2n$. Hence the variation is unbounded on the interval $(0,1)$ (or any open interval with $0$ as the leftmost point).
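The computation in this answer is easy to reproduce numerically (a quick sketch; names are mine):

```python
import math

def variation(n):
    """Variation of sin(1/x) over the partition t_k = x_{n-k} of [x_n, x_0]."""
    pts = [1.0 / (math.pi * (0.5 + n - k)) for k in range(n + 1)]  # increasing
    f = [math.sin(1.0 / t) for t in pts]                           # = (-1)^{n-k}
    return sum(abs(f[k + 1] - f[k]) for k in range(n))

print(variation(10), variation(100))   # 2n: about 20.0 and 200.0
```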
{ "language": "en", "url": "https://math.stackexchange.com/questions/152911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }