Probability of a probability to happen I'm trying to solve this problem but I'm struggling with it. A university has 300 students enrolled. You sample 30 students and see that 20 are boys and 10 are girls. If the university had 150 male students and 150 female students, what would be the probability of your observation? More generally, if there are N students enrolled in the university, B of which are boys and N − B girls, what is the probability that if you sample n (≤ N), then exactly b are boys and n − b are girls? So from the sample we know that there are approximately 66% boys and 33% girls, but I don't know how to estimate the probability of this observation. What am I missing?
To solve this problem you can use the hypergeometric distribution, with the variables

* $N$ the population size ($300$)
* $K$ the number of success states in the population ($150$)
* $n$ the number of draws ($30$)
* $k$ the number of observed successes (boys, $20$)

Then if $X$ is the number of boys drawn in the sample, $$ P(X = k) = \frac{{K \choose k} {N-K \choose n-k}}{N \choose n} $$ Recall that ${n\choose k} = \frac{n!}{k!(n-k)!}$ is the binomial coefficient. For your example of drawing $20$ boys in a sample of $30$, we have $$ P(X = 20) = \frac{{150 \choose 20} {300-150 \choose 30-20}}{300 \choose 30} \approx 0.0245 $$ The solution with the binomial distribution provided by @Atvin is an approximation of this probability for a large population size ($N\to\infty$).
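As a sketch, the value can be double-checked with Python's standard library (`math.comb`); `scipy.stats.hypergeom` provides the same pmf:

```python
from math import comb

def hypergeom_pmf(k, K, n, N):
    """P(X = k): draw n items without replacement from a population
    of N items, of which K are successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

p = hypergeom_pmf(20, 150, 30, 300)
print(p)  # ≈ 0.0245, matching the value above
```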
{ "language": "en", "url": "https://math.stackexchange.com/questions/2458755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Product of units modulo m Help!!! I can find the units in a given $\mathbb Z_m$ manually, by testing every value that is relatively prime to $m$, but in general I do not know a way to find such units. Hence I can't answer this question: Prove that the product of the units mod $m$ is congruent to $\pm 1$ mod $m$. Any idea?
The product of all units mod $m$ can be written as $P=\prod_{x^2\ne1} x \prod_{y^2=1} y$. The first product is $1$ because each $x$ is paired with $x^{-1}$ and they are different. Therefore, $P=y_1 \cdots y_n$ with $y_i^2=1$. If $n=1$, then $P=1$ because $y=1$ is certainly there. If $n=2$, then $P=-1$ because $y=-1$ is certainly there. If $n>2$, then $y \mapsto -y$ is a permutation of the elements such that $y^2=1$. This permutation has no fixed points, that is, $-y\ne y$. This implies that $n$ is even. Thus, every $y$ is paired with $-y\ne y$ and $y(-y)=-1$. Therefore, $P=(-1)^{n/2}=\pm 1$. (Actually, in the last case, $n/2$ is even and $P=1$. This follows from Lagrange's theorem applied to a subgroup $\{1,-1,y,-y\}$.)
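The statement is easy to verify by brute force for small moduli; a short Python sketch:

```python
from math import gcd

def unit_product_mod(m):
    """Product of all units of Z_m, reduced mod m."""
    p = 1
    for x in range(1, m):
        if gcd(x, m) == 1:
            p = p * x % m
    return p

# Gauss's generalization of Wilson's theorem: the product is -1 exactly
# when a primitive root mod m exists (m = 1, 2, 4, p^k, 2p^k, p odd),
# and +1 otherwise -- in every case it is congruent to +-1.
for m in range(2, 200):
    assert unit_product_mod(m) in (1, m - 1)
```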
{ "language": "en", "url": "https://math.stackexchange.com/questions/2458846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Show that $\exists m \in(k,\ell):f''(m)+f(m)=0$ Let $f$ be twice differentiable on $\Bbb R$ and $f'(x)\not=0$, $f'(k)=f(\ell)$ and $f'(\ell)=f(k)$. Show that $\exists m \in(k,\ell):f''(m)+f(m)=0$. The only thing that I was able to do is to show that $\exists n \in (k,\ell):f'(n)-f(n)=0$. I'd like a hint.
Hint: consider $$ h(x)=(f(x))^2+(f'(x))^2 $$ and see what the mean value theorem implies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2458955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that every terms of a sequence defined by a recurrence relation is a perfect square. I would be happy if you let me know how to tackle this problem. Thanks.
The idea is to look for a linear recurrence relation among the square roots. Say we have a relation $b_n=c_1b_{n-1}+c_2b_{n-2}$. Assuming the roots of the characteristic polynomial are distinct, the general formula should be $b_n=d_1\lambda_1^n+d_2\lambda_2^n$ where $d_1,d_2$ are constants and $\lambda_1,\lambda_2$ are the roots of the characteristic polynomial. Then $b_n^2=d_1^2\left(\lambda_1^2\right)^n+d_2^2\left(\lambda_2^2\right)^n+2d_1d_2\left(\lambda_1\lambda_2\right)^n$, so it has a recurrence relation of order 3 where the product of two of the roots of the characteristic polynomial is the square of the third. Now observe that 2 is a root of the characteristic polynomial (and you should observe this, because one thing to do when looking for the roots of a polynomial is apply the rational root theorem to look for rational roots). Also, the product of the other two roots is 4, which is $2^2$. This suggests that the square roots should satisfy a recurrence relation with characteristic polynomial $\frac{x^3+x^2-2x-8}{x-2}$; this gives you the recurrence to test and it works, once you try sufficiently many combinations of positive and negative signs on the first two terms of the recurrence.
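The recurrence from the original question is not reproduced above, so as a stand-in illustration of the mechanism take the Pell numbers $b_n=2b_{n-1}+b_{n-2}$ (characteristic roots $1\pm\sqrt2$). By the argument above, their squares should satisfy the order-3 recurrence with characteristic polynomial $(x^2-6x+1)(x+1)=x^3-5x^2-5x+1$, i.e. $a_n=5a_{n-1}+5a_{n-2}-a_{n-3}$:

```python
# Pell numbers: b_n = 2*b_{n-1} + b_{n-2}
b = [1, 2]
for _ in range(18):
    b.append(2 * b[-1] + b[-2])

# Their squares should satisfy a_n = 5*a_{n-1} + 5*a_{n-2} - a_{n-3}
a = [x * x for x in b]
for n in range(3, len(a)):
    assert a[n] == 5 * a[n - 1] + 5 * a[n - 2] - a[n - 3]
```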
{ "language": "en", "url": "https://math.stackexchange.com/questions/2459080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can we say Dirac delta function is zero almost surely? It is known that $\delta(x) = \infty$ if $x = 0$ and $\delta(x)=0$ if $x\ne 0$, and we also know that $\int_{-\infty}^{\infty}\delta(x)dx=1$. However, if we consider Lebesgue integration, $\delta(x)$ is zero almost surely, so we would get $\int_{-\infty}^{\infty}\delta(x)d\mu(x)=0$. Why do I get a contradiction here? Many thanks!
Because that is a bad "definition" of the "Dirac function"! The "Dirac function" is not a function at all; it is a "distribution" or "generalized function", a functional that assigns a number to every function. Specifically, the "Dirac function" assigns the number $f(0)$ to every function $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2459185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all integers $x$ such that $\frac{2x^2-3}{3x-1}$ is an integer. Find all integers $x$ such that $\frac{2x^2-3}{3x-1}$ is an integer. Well, if this is an integer then $3x-1 \mid 2x^2-3$, so $2x^2-3=(3x-1)k$ for some $k\in \mathbb{Z}$. From here I'm not sure where to go. I know that it has no solutions; I just can't see the contradiction yet.
Let $k=3x-1$; then $$3x\equiv_k 1$$ and $$2x^2-3 \equiv _k0.$$ Multiplying the last equation by $9$ we get $$0\equiv _k 9(2x^2-3)=2(3x)^2-27\equiv _k 2-27=-25.$$ So $$3x-1\mid 25 \implies 3x-1\in\{\pm 1,\pm5,\pm 25\}$$ and thus $$x\in\{0,2,-8\}$$
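A brute-force scan confirms that these are the only integer solutions in a large range (Python's `%` tests divisibility correctly for negative values as well):

```python
# 3x - 1 is never 0 for integer x, so the division is always defined
solutions = [x for x in range(-1000, 1001)
             if (2 * x * x - 3) % (3 * x - 1) == 0]
print(solutions)  # [-8, 0, 2]
```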
{ "language": "en", "url": "https://math.stackexchange.com/questions/2459366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving $n\times n$ determinant using triangular shape I have just started learning to solve $n$th-order determinants by reducing them to triangular shape (in this way the determinant equals the product of the entries on the main or secondary diagonal, up to sign). I have solved a couple of easy ones, but got stuck on this one (which seems easy, but however I manipulate the rows or columns I can't get it into triangular shape). $$ \begin{vmatrix} 5 & 3 & 3 & \cdots & 3 & 3 \\ 3 & 6 & 3 & \cdots & 3 & 3 \\ 3 & 3 & 6 & \cdots & 3 & 3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 3 & 3 & 3 & \cdots & 6 & 3 \\ 3 & 3 & 3 & \cdots & 3 & 6 \\ \end{vmatrix} $$ If someone could give the solution and explain the crucial steps, it would be very appreciated.
First, you can subtract the last line from each of the others, which gives you: $$\begin{vmatrix} 2 & 0 &\ldots&0&-3 \\ 0 & 3 &\ddots&\vdots& \vdots \\ \vdots &\Large{0}&\ddots&0& \vdots \\ 0&\ldots&0&3&-3\\ 3&\ldots&\ldots&3&6 \end{vmatrix}$$ Then subtract $\frac{3}{2} L_{1}$ from $L_{n}$; note that the last entry becomes $6-\frac{3}{2}\cdot(-3)=\frac{21}{2}$: $$\begin{vmatrix} 2 & 0 &\ldots&0&-3 \\ 0 & 3 &\ddots&\vdots& \vdots \\ \vdots &\Large{0}&\ddots&0& \vdots \\ 0&\ldots&0&3&-3\\ 0&3&\ldots&3&\frac{21}{2} \end{vmatrix}$$ Finally, subtract $L_{i}$ for every $i \in [2,n-1]$ from $L_{n}$: $$\begin{vmatrix} 2 & 0 &\ldots&0&-3 \\ 0 & 3 &\ddots&\vdots& \vdots \\ \vdots &\Large{0}&\ddots&0& \vdots \\ 0&\ldots&0&3&-3\\ 0&\ldots&\ldots&0&\frac{21}{2}+3(n-2) \end{vmatrix}=\begin{vmatrix} 2 & 0 &\ldots&0&-3 \\ 0 & 3 &\ddots&\vdots& \vdots \\ \vdots &\Large{0}&\ddots&0& \vdots \\ 0&\ldots&0&3&-3\\ 0&\ldots&\ldots&0&3(n+\frac{3}{2}) \end{vmatrix}$$ These operations don't affect the determinant, so the formula for a triangular matrix gives $$\det = 2\cdot 3^{n-2}\cdot 3\left(n+\tfrac{3}{2}\right)=3^{n-1}(2n+3).$$
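As a sanity check, small cases can be computed exactly; the determinant comes out to $3^{n-1}(2n+3)$ (e.g. $21$ for $n=2$, $81$ for $n=3$). A sketch using exact rational Gaussian elimination:

```python
from fractions import Fraction

def det(M):
    """Determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return Fraction(0)
        if piv != i:                      # row swap flips the sign
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][c] - f * M[i][c] for c in range(n)]
    return d

for n in range(2, 9):
    A = [[3] * n for _ in range(n)]       # all entries 3 ...
    A[0][0] = 5                           # ... except 5 in the corner
    for i in range(1, n):
        A[i][i] = 6                       # ... and 6 on the rest of the diagonal
    assert det(A) == 3 ** (n - 1) * (2 * n + 3)
```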
{ "language": "en", "url": "https://math.stackexchange.com/questions/2459480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is $\int \frac{x}{x^5-7} dx$? I have tried many trigonometric substitutions, like $x=\sin^{\frac{2}{5}}z$, but they did not work.
To make the problem more general, consider $$I_n=\int\frac x {x^n-A}\,dx=A^{\frac 2n-1}\int\frac t {t^n-1}\,dt=A^{\frac 2n-1}\int\frac t {\prod_{i=1}^n(t-r_i)}\,dt$$ where the $r_i$ are the $n$th roots of unity (the substitution is $x=A^{1/n}t$). Using partial fraction decomposition, you will end up with $$I_n=A^{\frac 2n-1}\sum _{i=1}^n\int \frac {\alpha_i}{t-r_i}\,dt=A^{\frac 2n-1}\sum _{i=1}^n{\alpha_i}\log({t-r_i})$$ Of course, since the ${\alpha_i},r_i$ will be complex numbers, you will need to recombine them in conjugate pairs if you want to get rid of all complex terms (coefficients and logarithms). There is also a shorter notation $$J_n=\int\frac t {t^n-1}\,dt=-\frac{1}{2} t^2 \, _2F_1\left(1,\frac{2}{n};1+\frac{2}{n};t^n\right)$$ where $\,_2F_1$ is the Gaussian (ordinary) hypergeometric function.
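The decomposition is easy to verify numerically for, say, $n=5$: at a simple root $r$ of $t^n-1$ the residue is $\alpha=r/(n r^{n-1})=r^2/n$ (using $r^n=1$). A sketch:

```python
import cmath

n = 5
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
# residue of t/(t^n - 1) at a simple root r: r / (n r^(n-1)) = r^2 / n
alphas = [r * r / n for r in roots]

# the partial-fraction sum reproduces t/(t^n - 1) at arbitrary test points
for t in (0.3 + 0.4j, -1.7 + 0.2j, 2.5 - 1.1j):
    lhs = t / (t ** n - 1)
    rhs = sum(a / (t - r) for a, r in zip(alphas, roots))
    assert abs(lhs - rhs) < 1e-9
```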
{ "language": "en", "url": "https://math.stackexchange.com/questions/2459857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Verifying a long polynomial equation in (the reciprocal of) the Golden Ratio I'm trying to show that the following equation holds true: $$4\sigma^{12}+11\sigma^{11}+11\sigma^{10}+9\sigma^9+7\sigma^8+5\sigma^7+3\sigma^6+\sigma^5+\sigma^4+\sigma^3+\sigma^2+\sigma = 1 + 2\sigma$$ where $\sigma$ is the reciprocal of the golden ratio; that is, $\sigma := \frac12(\sqrt{5} - 1)$. There must be a good way to show this. However, so far all my attempts have failed. This problem has a real world application. If you are interested in background take a look at A Fresh Look at Peg Solitaire[1] and in particular at Figure 9. [1] G. I. Bell [2007], A fresh look at peg solitaire, Math. Mag. 80(1), 16–28, MR2286485
Work with $x$ instead of $\sigma$. You have $$x^2=-x+1$$ so $$x^3=x-x^2=+2x-1$$ $$x^4=x^2-x^3=-3x+2$$ and you will find (without surprise) that the Fibonacci numbers appear, so you can write down $$x^5=+5x-3, x^6=-8x+5 \dots$$ and this may be an efficient way of reducing to a linear expression.
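This reduction is easy to automate. A sketch that reduces the left-hand side of the identity exactly, representing $a+bx$ as an integer pair $(a,b)$ and using $x\cdot(a+bx)=b+(a-b)x$:

```python
# coefficient of sigma^k on the left-hand side of the identity
coeffs = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 3,
          7: 5, 8: 7, 9: 9, 10: 11, 11: 11, 12: 4}

power = (0, 1)           # x^1 = 0 + 1*x
total = (0, 0)
for k in range(1, 13):
    if k > 1:
        a, b = power     # x^k = x * x^(k-1) = b + (a - b)*x
        power = (b, a - b)
    c = coeffs[k]
    total = (total[0] + c * power[0], total[1] + c * power[1])

assert total == (1, 2)   # the left-hand side reduces to 1 + 2x, as claimed
```

Note how the Fibonacci numbers show up in `power`, exactly as predicted above.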
{ "language": "en", "url": "https://math.stackexchange.com/questions/2459961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
A Square Inside A Triangle (but with a twist) I have a right $\triangle ABC$ and I want to find the side length of one of the legs, $x$, when the square is at its maximum area, as shown in the figure. I am given that $AB + BC$ is $10$. Here's what I tried so far: * *I found ratios between the sides using similarity, but I wasn't able to get a conclusive answer, just things in terms of each other. *I tried to set up equations using the Pythagorean theorem, but that just ended up with some messy variable terms and zero actual progress. The answer is $x=5$, but I want to know how I would go about approaching this kind of problem. It's like others I've seen before here and in other places, but not being given the side lengths threw me off.
The area of the square is maximal when its side $l$ is maximal. Let $AB=a$ and $BC=b$, so $a+b=10$. By similar triangles, $$\frac{a-l}{a}=\frac{l}{b},$$ i.e. $l=\frac{ab}{a+b}$, which by AM-GM gives: $$l=\frac{ab}{a+b}\leq\frac{\left(\frac{a+b}{2}\right)^2}{a+b}=\frac{a+b}{4}=\frac{5}{2}.$$ Equality occurs for $a=b=5$. Thus, the area of the square attains its maximum for $x=5$. Done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Dual codes over finite commutative rings Let $R$ be a finite commutative ring and $C$ be a linear code of length $n$ over $R$, i.e., an $R$-submodule of $R^n$. We define $C^{\perp}$ by $C^\perp=\{v\in R^n~|~[v,w]=0~~\text{for all}~w\in C\}$ where $[v,w]=\sum_{i=1}^nv_iw_i\in R$ is the Euclidean inner product. I read in "Algebraic Coding Theory over Finite Commutative Rings" (Steven T. Dougherty) that $(C^\perp)^\perp=C$ for any finite commutative ring $R$ and any linear code $C$ over $R$. Clearly $(C^\perp)^\perp\supseteq C$, but I cannot show the converse inclusion. Could you tell me the proof? If the author is wrong and this property does not hold in general, I would like to see a counterexample. I already know that this property holds for any finite quasi-Frobenius ring $R$ and any submodule $C\subseteq R^n$. See, for example, [J.A. Wood, Duality for Modules over Finite Rings and Applications to Coding Theory]. But I do not know whether this property holds for any finite commutative ring $R$ and any linear code $C$ over $R$.
You're right, it's not true: for example, take the ring $R=F_2[x,y]/(x^2,xy,y^2)$, which is not quasi-Frobenius. If you take $C=(x)\subseteq R^1$, then $C^\perp=(x,y)$ and $C^{\perp\perp}=(x,y)\ne C$. A ring which is quasi-Frobenius cannot suffer this problem. You can make an example for any length $n$ over $R$ by using $(x)\times\{0\}\times\ldots\times\{0\}$, with the same effect: the perp is $(x,y)\times R\times\ldots\times R$, and the perp of that is $(x,y)\times\{0\}\times\ldots\times\{0\}$, which strictly contains $C$. The same happens for $C=\oplus_{i=1}^n(x)$: pairing a vector $v\in C^\perp$ against $xe_i$ forces the $i$th coordinate of $v$ into $(x,y)$, so $C^\perp=\oplus_{i=1}^n(x,y)$ and $C^{\perp\perp}=\oplus_{i=1}^n(x,y)\supsetneq C$. One possibility is that the author actually only intended to talk about proper quotients of $\mathbb Z$, all of which are quasi-Frobenius.
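Since $R$ has only eight elements, the length-1 example can be checked by brute force; a small Python sketch, storing $a+bx+cy$ as the coefficient triple $(a,b,c)$:

```python
from itertools import product

# R = F_2[x,y]/(x^2, xy, y^2); elements a + b*x + c*y as triples (a, b, c)
R = list(product((0, 1), repeat=3))

def mul(u, v):
    a1, b1, c1 = u
    a2, b2, c2 = v
    # x^2 = xy = y^2 = 0, coefficients mod 2
    return (a1 * a2 % 2, (a1 * b2 + b1 * a2) % 2, (a1 * c2 + c1 * a2) % 2)

def perp(C):
    return [v for v in R if all(mul(v, w) == (0, 0, 0) for w in C)]

x, y = (0, 1, 0), (0, 0, 1)
C = [(0, 0, 0), x]                     # the ideal (x), as a length-1 code
Cp = perp(C)                           # should be (x, y)
Cpp = perp(Cp)                         # should again be (x, y), not (x)

assert sorted(Cp) == sorted([(0, 0, 0), x, y, (0, 1, 1)])
assert sorted(Cpp) == sorted(Cp)
assert sorted(Cpp) != sorted(C)        # double perp strictly contains C
```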
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
intuition behind discrepancy in expected number of coin tosses Yesterday evening I read a very interesting article on prime numbers which contained the following paragraph on coin tosses: If Alice tosses a coin until she sees a head followed by a tail, and Bob tosses a coin until he sees two heads in a row, then on average, Alice will require four tosses while Bob will require six tosses (try this at home!), even though head-tail and head-head have an equal chance of appearing after two coin tosses. I immediately tried to look into this question. Using the formula provided by André Nicolas, it can be shown that the expected number of coin tosses we need before getting $n$ consecutive heads satisfies: \begin{equation} e_n = \sum_{k=1}^n\frac{1}{2^k}(e_n+k)+\frac{n}{2^n} \end{equation} For $n=2$, the expected number of tosses ($e_2$) is 6. Analytically, this makes sense once you become familiar with the above equation. Now, what I find interesting is that, as mentioned in the article, the probability of obtaining two heads is the same as the probability of obtaining a head followed by a tail: \begin{equation} P(HH)=P(HT)=\frac{1}{4} \end{equation} However, the expected number of coin tosses required to get the $HT$ pattern is 4, not 6. I still find this quite counter-intuitive. In fact, I ran a simulation using the following Python code (the inner loops must stop one position before the end, since they look one index ahead):

    import numpy as np

    head_tail = np.zeros(10000)
    two_heads = np.zeros(10000)

    for i in range(10000):
        z = np.random.randint(2, size=100)
        for j in range(99):          # z[j+1] is accessed, so stop at j = 98
            if z[j] == 1 and z[j+1] == 0:
                head_tail[i] = j+2
                break
        for j in range(99):
            if z[j] == 1 and z[j+1] == 1:
                two_heads[i] = j+2
                break

And I noticed, plotting histograms of the two arrays, that their distributions behaved very differently. It's by no means intuitive to me that the behaviour of these two distributions should differ, and yet they are remarkably different. Is there an intuitive reason why this must be the case?
Alice and Bob toss their coins until they see their first head. The expected number of tosses until this happens is the same for each. Then things are different. Alice just needs to continue until she finally sees a tail. However, if Bob doesn't immediately see a head on the next throw, he needs to start over from the beginning. Thus $$\mathsf E(A) = \mathsf E(H)+\mathsf E(T) = 2+2 = 4$$ $$\mathsf E(B) = \mathsf E(H)+1+\tfrac12\mathsf E(B) = 3+\tfrac 12\mathsf E(B) \implies \mathsf E(B)=6$$
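Both values, and the pattern-dependence in general, can be checked with a known result (Conway's "leading numbers" / correlation formula): for a fair coin, the expected wait for a pattern is the sum of $2^k$ over all $k$ for which the length-$k$ prefix of the pattern equals its length-$k$ suffix. A sketch:

```python
def expected_tosses(pattern):
    """Expected fair-coin tosses until `pattern` (a string of 'H'/'T')
    first appears, via the prefix-suffix correlation formula."""
    L = len(pattern)
    return sum(2 ** k for k in range(1, L + 1)
               if pattern[:k] == pattern[-k:])

assert expected_tosses("HT") == 4    # only k = 2 overlaps
assert expected_tosses("HH") == 6    # k = 1 and k = 2 both overlap
assert expected_tosses("HHH") == 14  # 2 + 4 + 8
```

The self-overlap of HH (a failed attempt can reuse its last H) is exactly what makes Bob wait longer.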
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
finding remainder by using Fermat's little theorem or Euler's totient I am trying to carry on with the following computation, but I am stuck with a number that is still too big to evaluate without a calculator: $\ 331^{51}\mod 49$. Since $\phi(49)=42$, I proceeded as follows: $331^{51}=\ 331^{42}\cdot331^{9}\mod 49$, and since $331^{42}\equiv1$, this is $\ 331^{9}\mod 49$. Since $331\equiv 37\pmod{49}$, the problem becomes $\ 37^{9}\mod 49$ $=\ (-12)^{9}\mod 49$ $=\ (-12)^{8}\cdot(-12)\mod 49$. Since 8 is even, $=\ (12)^{8}\cdot(-12)\mod 49$ ... (a bunch of conversions) ... Finally I have found this (which is checked via WolframAlpha and it is correct): $=\ (2)^{29}\cdot(23)\mod 49$. But it is still too big to calculate manually. So I wonder, maybe one of you has another idea? Especially, I want to know whether I can use $49$ as $7\cdot7$ and create two different modular equations and solve them in some way, maybe with CRT or another method?
Hint: Using binomial theorem, $$(-12)^7=(2-14)^7\equiv2^7\pmod{49}$$ and $$12^2\equiv-3\pmod{49}$$
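Python's three-argument `pow` makes all of this checkable; the hint's congruences hold, and the full residue evaluates to $8$:

```python
# Check the hint's two congruences and the original problem directly.
assert pow(12, 2, 49) == 49 - 3            # 12^2 = -3 (mod 49)
assert pow(-12, 7, 49) == pow(2, 7, 49)    # (-12)^7 = 2^7 (mod 49)
assert pow(331, 51, 49) == 8               # the answer to the question
```

Combining the hints by hand: $(-12)^9=(-12)^7\cdot(-12)^2\equiv 2^7\cdot(-3)=-384\equiv 8\pmod{49}$.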
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove $(a_1+a_2+\dots a_n)\left(\frac{1}{a_1}+\frac{1}{a_2}+\dots+\frac{1}{a_n}\right)\ge n^2$? Let $a_1,a_2,\dots ,a_n$ be positive real numbers. Prove that $$(a_1+a_2+\dots a_n)\left(\frac{1}{a_1}+\frac{1}{a_2}+\dots+\frac{1}{a_n}\right)\ge n^2$$ How do I prove this inequality? Can this result be proved by induction?
Another proof: See Art of Problem Solving about Chebyshev's inequality. There are two different versions of it. Use the second version. If $a_1\ge a_2\ge \ldots \ge a_n$ and $b_n\ge b_{n-1}\ge \ldots \ge b_1$, then $$n\left(\sum_{i=1}^{n} a_ib_i\right)\le \left(\sum_{i=1}^{n}a_i\right)\left(\sum_{i=1}^{n}b_i\right)$$ In this case, you're given $a_1,a_2,\ldots,a_n>0$. Use the inequality. Notice that if WLOG (without loss of generality) $a_1\ge a_2\ge \ldots\ge a_n$, then $\frac{1}{a_n}\ge \frac{1}{a_{n-1}}\ge \ldots \ge \frac{1}{a_1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
24 different complex numbers There are $24$ different complex numbers $z$ such that $z^{24}=1$. For how many of these is $z^6$ a real number? This is one of the AMC problems from this year. I've been trying to solve it, but I couldn't and a peek at the answers (not recommended, I know) talked about Euler's theorem etc., which I haven't learnt yet. Is there an 'easier' way to solve this problem?
Let's say $w=z^6$. We know that $w^4=1$, so $w=\pm 1,\pm i$. Each of these four numbers has $6$ distinct sixth roots. Does that help?
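A quick numerical count over the $24$th roots of unity confirms that exactly the $z$ with $z^6=\pm1$ qualify, giving $12$:

```python
import cmath

zs = [cmath.exp(2j * cmath.pi * k / 24) for k in range(24)]  # all z^24 = 1
count = sum(1 for z in zs if abs((z ** 6).imag) < 1e-9)      # z^6 real?
assert count == 12   # z^6 = +-1: six sixth roots of each
```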
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 3 }
Is this proof style legitimate? Normally for a direct proof of equality we have the form: Prove $$a = b$$ Proof (Style): We start with $a$ (or $b$) and show through a sequence of logically connected steps that $a$ is $b$ (or the other way around). $_{_\square}$ But, since I'm not great with proofs, I just wanted to have someone validate the following direct proof style, or comment on its relative legitimacy compared with the first proof style: Prove $$a = b$$ Proof (Style): $$x = x$$ $$\vdots \tag{logical steps}$$ $$a = b$$ Furthermore, could someone recommend an elementary text concerning the validity of proof methods (or whatever it's actually called)? Thanks.
Instead of starting with $x=x$, start with $a=a$ or $b=b$. But yes, sometimes that's exactly what you need to do. For example, suppose I have that $b=a$ and I want to show that $a=b$. Then I can do:

1. $b=a \quad$ Premise
2. $b=b \quad$ $=$ Intro
3. $a=b \quad$ $=$ Elim 1, 2
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to show that $\sum_{n=1}^{\infty}{2^n\over (2^n-1)(2^{n+1}-1)(2^{n+2}-1)}={1\over 9}?$ How to show that $${2\over (2-1)(2^2-1)(2^3-1)}+{2^2\over (2^2-1)(2^3-1)(2^4-1)}+{2^3\over (2^3-1)(2^4-1)(2^5-1)}+\cdots={1\over 9}?\tag1$$ We may rewrite $(1)$ as $$\sum_{n=1}^{\infty}{2^n\over (2^n-1)(2^{n+1}-1)(2^{n+2}-1)}={1\over 9}\tag2$$ $${2^n\over (2^n-1)(2^{n+1}-1)(2^{n+2}-1)}={A\over 2^n-1}+{B\over 2^{n+1}-1}+ {C\over 2^{n+2}-1}\tag3$$ $${2^n\over (2^n-1)(2^{n+1}-1)(2^{n+2}-1)}={1\over 3(2^n-1)}-{1\over 2^{n+1}-1}+ {2\over 3(2^{n+2}-1)}\tag4$$ $${1\over 3}\sum_{n=1}^{\infty}{1\over 2^n-1}-\sum_{n=1}^{\infty}{1\over 2^{n+1}-1}+{2\over 3}\sum_{n=1}^{\infty}{1\over 2^{n+2}-1}={1\over 9}\tag5$$
Hint: We can express $$\frac{1}{2^n-1}=\sum_{k=1}^{\infty} \frac{1}{2^{kn}}.$$
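Before hunting for the closed form, it is reassuring to check the sum numerically; with exact fractions the partial sums converge to $1/9$ very quickly, since the terms decay like $4^{-n}$:

```python
from fractions import Fraction

def term(n):
    return Fraction(2 ** n,
                    (2 ** n - 1) * (2 ** (n + 1) - 1) * (2 ** (n + 2) - 1))

# 60 terms already pin the sum down to better than 2^-100
s = sum(term(n) for n in range(1, 61))
assert abs(s - Fraction(1, 9)) < Fraction(1, 2) ** 100
```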
{ "language": "en", "url": "https://math.stackexchange.com/questions/2460845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Why is the degree of a rational map of projective curves equal to the degree of the homogeneous polynomials? Let $C_1 \subseteq \mathbb{P}^m$ and $C_2 \subseteq \mathbb{P}^n$ be projective curves, and let $\phi : C_1 \rightarrow C_2$ be a nonconstant rational map given by $\phi = \left[ f_1, \ldots, f_n \right]$ for homogeneous polynomials $f_i \in K[X_0, X_1,\ldots , X_m]$ having the same degree $\deg(f_i) = d$. The degree of the rational map $\phi$ is defined as \begin{align*} \deg(\phi) = [ K(C_1) : \phi^* K(C_2)], \end{align*} where $\phi^* : K(C_2) \rightarrow K(C_1)$ is the injection of function fields given by precomposition: $\phi^* (f) = f \circ \phi$. In many cases it seems that $\deg(\phi) = d$, but I've been unable to find a proof of this or even conditions on when this will occur. Can someone give some insight into this? Even with specific examples, I can't seem to find a basis for $K(C_1)$ as a vector space over $\phi^*K(C_2)$ to even compute the degree.
The fact is that $\deg(\phi)$ can be anything between $1$ and $d^n$: $1$ is the birational case, and $d^n$ occurs if the sequence $f_0,\ldots,f_n$ is a regular sequence. For example, the degree of $(s^5:s^3t^2:s^2t^3:t^5):\mathbb{P}^1\to \mathbb{P}^3$ is $1$, while the degree of $(s^6:s^4t^2:s^2t^4:t^6):\mathbb{P}^1\to \mathbb{P}^3$ is $2$. I checked these in Macaulay2 using the packages RationalMaps and Cremona.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
A question on generic point and induced map between spectrum We let $\varphi:A \rightarrow A'$ be a ring homomorphism and $\varphi^*:Spec\,A' \rightarrow Spec\,A$ the corresponding induced map between spectrum. Now we suppose that $Spec\,A'$ is irreducible and let $x$ be its (unique) generic point (i.e. $\overline{\{x\}}=Spec\,A'$). We want to prove that the closure $\overline{\varphi^*(Spec\,A')}$ is irreducible and $\varphi^*(x)$ is a generic point of $Spec\,A$ (i.e. $\overline{\varphi^*(x)}=Spec\,A$). For the first part, the proof is not very hard. Note that $Spec\,A'$ is irreducible $\Leftrightarrow$ nilradical $rad(A')$ is a prime ideal. And from the property of the induced map between spectrum, we have $\overline{\varphi^*(Spec\,A')}=\overline{\varphi^*(V(rad(A'))}=V(\varphi^{-1}(rad(A')))$. We also know that $\overline{\varphi^*(Spec\,A')}$ is irreducible $\Leftrightarrow$ $I(V(\varphi^{-1}(rad(A'))))=\varphi^{-1}(rad(A'))$ is a prime ideal. From the definition of the induced map, we prove the first part. But I cannot prove the second part of the question. Since $Spec\,A'$ is irreducible, we know that the generic point $x=rad(A')$. So $\varphi^*(x)=\varphi^{-1}(rad(A'))$. If we want to prove that $\varphi^*(x)$ is a generic point $\Leftrightarrow$ $V(\varphi^{-1}(rad(A')))=Spec\,A=V(rad(A)).$ $\Leftrightarrow$ $\varphi^{-1}(rad(A')) \subseteq rad(A)$. But I wonder how to prove the last inclusion?
Your first claim about $\overline{\phi^{\ast}(\textrm{Spec}(A'))}$ being irreducible is a general fact from topology: a continuous image of an irreducible space is irreducible, and the closure of an irreducible subset remains irreducible. Your next claim is false without some further assumptions: for instance, for the quotient map $\varphi:\mathbb Z\to\mathbb F_p$, the image of the generic point of $\operatorname{Spec}\mathbb F_p$ is $(p)$, whose closure is $V(p)\neq\operatorname{Spec}\mathbb Z$. What is true is that $\varphi^*(x)$ is a generic point of $\overline{\varphi^*(\operatorname{Spec} A')}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that: $\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n}\ge\frac{2}{3}$ I've got three inequalities: $\forall n\in\mathbb N:$ $$\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n} \ge\frac{1}{2}$$ $$\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n} \ge\frac{7}{12}$$ $$\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n}\ge\frac{2}{3}$$ From what I know the LHS converges to something about $0.69$ and each one of them requires the same method, but I can't come up with a proper way to solve it. Can someone give me a hint?
Hint: $$\frac{1}{n}\ge\ln(n+1)-\ln(n)$$
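Numerically the hint checks out: the sums decrease monotonically toward $\ln 2\approx 0.693>2/3$. A quick sketch:

```python
import math

def s(n):
    return sum(1 / k for k in range(n, 2 * n + 1))

vals = [s(n) for n in range(1, 2001)]
assert min(vals) >= 2 / 3                                    # the inequality
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))  # decreasing
assert abs(vals[-1] - math.log(2)) < 1e-3                    # limit is ln 2
```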
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
“D-module” or “$D$-module”? Disclaimer: This question is not transcendental at all, so go easy on me. Starting with a (for simplicity) commutative unital ring $R$, we define a $R$-module. Obviously, since the name of the ring was italicized, the same must be done each time we write “$R$-module”, but if we are talking about the general theory of modules over a ring, we simply write “module”. On the other hand, given a ring $R$ of differential operators, we call a module over $R$ a D-module (“D” stands for “differential”, I guess$\ldots$). Of course, to denote the ring by $R$ in this situation is strange, so common sense dictates that we must use a “mathematical” variant of the letter D to denote such ring ($D,\mathscr{D},\mathcal{D}$, whatever$\ldots$). This subclass of modules is of great importance, so the corresponding theory has a name of its own: D-module theory, right? What is, then, the correct choice when we refer to the theory of modules over a ring of differential operators: “D-module theory” or “$D$-module theory”?
I am working with $D$-modules and, as you can see, I use the italic version. That is because it feels more natural, matching the usage for other rings $R$ and $R$-modules. I am only working with one specific ring of differential operators, though, namely the Weyl algebra $D = D_n$. So for me a $D$-module is really just a module over a specific ring, not a general module over some ring of differential operators, but I would also use the italic version in the latter case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Notion of uniquely transportable Categories I do not fully understand the notion of uniquely transportable categories. The book "Abstract and Concrete Categories" says, for example, that the concrete category $(\mathbf G\mathbf r\mathbf p,U)$ over $\mathbf X$ with the forgetful functor is uniquely transportable. But take, for example, the group of order 4 with the composition law $a^2=b$, $ab=c=a^3$, $a^4=b^2=e$, and take the $\mathbf X$-isomorphism which maps $e\rightarrow a, a\rightarrow b, b\rightarrow c, c\rightarrow e$. Then we have to find a group $B$ of order 4 which is isomorphic to the one mentioned above. The question is: how can we do this in the case mentioned above, and for an arbitrary group $D\in \mathbf G\mathbf r\mathbf p$?
Given any group $D$ and a set $E$ with a bijection $f:D\to E$, make $E$ a group using the operation $*$ defined by $x*y=f(f^{-1}(x)\cdot f^{-1}(y))$, where $\cdot$ is the multiplication of $D$. The fact that $(E,*)$ is a group follows easily from the fact that $(D,\cdot)$ is a group and $f$ is a bijection. And essentially by definition of $*$, $f$ is an isomorphism from $(D,\cdot)$ to $(E,*)$. In your case, you just apply this to your bijection. Explicitly, $a$ is the identity element of $*$, $b*b=c$, $b*c=e=b*b*b$, and $b*b*b*b=c*c=a$.
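Following this construction for the order-4 example from the question, a short Python sketch transports the cyclic group $\{e,a,b,c\}$ (with $a^2=b$, $a^3=c$, $a^4=e$) along the given bijection and checks the result:

```python
D = ['e', 'a', 'b', 'c']                       # element at index i is a^i

def dot(u, v):                                 # multiplication in D (cyclic)
    return D[(D.index(u) + D.index(v)) % 4]

f = {'e': 'a', 'a': 'b', 'b': 'c', 'c': 'e'}   # the bijection from the question
finv = {v: k for k, v in f.items()}

def star(x, y):                                # transported operation on E
    return f[dot(finv[x], finv[y])]

# 'a' = f('e') is the new identity element
assert all(star('a', x) == x == star(x, 'a') for x in D)
assert star('b', 'b') == 'c'                   # f(a)*f(a) = f(a.a) = f(b)
# f is an isomorphism by construction
assert all(f[dot(u, v)] == star(f[u], f[v]) for u in D for v in D)
```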
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The difference between ∈ and ⊂ I had a task where I had to figure out whether each statement was true or not. $A=\{ n ∈ ℤ \mid n^2 < 5 \}, \quad B=\{ 7, 8, \{2\}, \{2, 7, 8\}, \{\{7\}\} \}$ The first one was $\{-1, 2\} ∈ A$, and the answer was that it is not true, since the set $\{-1,2\}$ is not an integer. The second one was $\{-1, 2\} ⊂ A$, and the answer was that it is true, since $-1$ and $2$ are elements of $A$. I just can't seem to understand the difference between ∈ and ⊂ in this task and why the first statement is not true. It also confuses me that the answer to the first one was that the set is not an integer, but aren't $-1$ and $2$ integers? There are other parts of this task that add more confusion: {2, 7, 8} ∈ B Answer: True, since {2, 7, 8} is an element of set B. {2, 7, 8} ⊂ B Answer: Not true; for example, 2 is not an element of set B.
$\in$ stands for "belongs to": an element belongs to a set. $\subset$ is the symbol for "subset": one set is a subset of another if all of its elements are contained in the latter set. You must be careful about what you are applying $\in$ or $\subset$ to.
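Python's set type makes the distinction concrete (here `frozenset` stands in for the set-valued elements of $B$, since ordinary sets aren't hashable):

```python
A = {n for n in range(-10, 11) if n * n < 5}   # A = {-2, -1, 0, 1, 2}
assert A == {-2, -1, 0, 1, 2}

assert {-1, 2} <= A                            # {-1, 2} is a subset of A ...
assert frozenset({-1, 2}) not in A             # ... but not an element of A

B = {7, 8, frozenset({2}), frozenset({2, 7, 8}),
     frozenset({frozenset({7})})}
assert frozenset({2, 7, 8}) in B               # an element of B
assert not {2, 7, 8} <= B                      # not a subset: 2 is not in B
```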
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Let $p$ be a prime that decomposes completely in $O_K$. Show that $\#\text{Hom}(O_K, \mathbb{F}_p)=n$ Let $K$ be a number field of degree $n$. We say a prime $p$ decomposes completely in $O_K$ if $pO_K = \mathfrak{p}_1...\mathfrak{p}_n$ for some prime ideals $\mathfrak{p}_i$. I want to show that if $p$ is a prime that decomposes completely then $\#\text{Hom}(O_K, \mathbb{F}_p)=n$ but I don't really know how to attack this problem. I feel like I should be using that $O_K$ is a finitely generated $\mathbb{Z}$-module or that the degree of the extension is n, but I don't really know how.
Here are a few off the cuff comments that may help you on your way. First under any map $\mathcal{O}_K \rightarrow \mathbb{F}_p$, the set $p\mathcal{O}_K$ is sent to zero. So look at $\mathcal{O}_K/p\mathcal{O}_K$. This is isomorphic to $$\prod \mathcal{O}_K/\mathfrak{p}_i.$$ Maybe this is a good point for me to stop.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2461985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Density of a subset of $\mathbb{R}$ Let $\alpha$ be an irrational number. Is it true that the set $\{\frac{m}{\alpha} + n | m,n \in \mathbb{Z} \}$ dense in $\mathbb{R}$? If it is, how do we prove it?
Yes it is. Lemma: Let $H$ be a subgroup of $\mathbb{R}$. Then $H$ is dense if and only if there exists a convergent sequence $h_n\rightarrow \xi$, for some $\xi\in \mathbb{R}$ and $h_n\in H$, which is not eventually constant. Proof: The first direction is obvious. For the other direction, let $h_n\rightarrow \xi$ and let $\varepsilon>0$. As $(h_n)$ is a Cauchy sequence which is not eventually constant, some difference $h=h_n-h_m$ of two distinct terms is a nonzero element of $H$ with $|h|<\varepsilon$. As $H$ is a group, you have $kh\in H$ for every $k\in\mathbb{Z}$, so $\{kh:k\in\mathbb{Z}\}$ is $\varepsilon$-dense in $\mathbb{R}$; as $\varepsilon$ is arbitrary, $H$ is dense in $\mathbb{R}$. Now, the set you mentioned is a group, hence it is enough to find a convergent sequence in it which is not eventually constant. For every $m$, let $n_{m} = \lfloor m/\alpha\rfloor$, and consider the numbers $\frac{m}{\alpha}-n_m$: since $\alpha$ is irrational these are infinitely many different numbers in $[0,1]$, hence there is a convergent subsequence $\frac{m_k}{\alpha}-n_{m_k}$. Using the Lemma, this completes the proof. Note: another way to do this is to use the fact that an irrational rotation of the circle has a dense orbit (with rotation $1/\alpha$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Though $0^0$ is an indeterminate form of limits, is it undefined? I’ve done a my research, though I have not been able to find an adequate explanation as to whether or not $$0^0$$ exists as a real number, and why or why not? I must credit this question to “Question on the controversial ‘undefined’ $0^0$.” This Wikipedia entry lists a (seemingly?) exhaustive list of indeterminate forms of limits, with which I take no issue. All of which involve $\infty$ or a ‘multiple’ of $1/0$ and are therefore undefined as real numbers—that is, all except $0^0$. Desmos clearly claims that $0^0=1$; however, my TI-84 returns ERROR: DOMAIN in both real and complex mode. So, which is it? Is $0^0$ defined or not? I wasn’t quite sure what other tags apply, so feel free to edit.
Zero to the zeroth power is often said to be "an indeterminate form", because it could have several different values. Since $x^0$ is $1$ for all numbers $x$ other than $0$, it would be logical to define that $0^0 = 1$. But we could also think of $0^0$ having the value $0$, because zero to any power (other than the zero power) is zero. Also, the logarithm of $0^0$ would be $0 \cdot \infty$, which is itself an indeterminate form, so laws of logarithms wouldn't work with it. Because of these problems, zero to the zeroth power is usually said to be indeterminate. However, if zero to the zeroth power needs to be defined to have some value, $1$ is the most logical definition for its value. This can be "handy" if you need some result to work in all cases (such as the binomial theorem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
binary relation that is both symmetric and irreflexive I was asked to find a binary relation on $A$ that is symmetrical and irreflexive, and which is also a function from $A$ to $A$ where $A=\{1,2,3,4\}$ so i don't know if its correct but i came up with these $\{<1,2>, <1,3>, <1,4>, <2,1>, <2,3>, <2,4>, <3,1>, <3,2>, <3,4>, <4,1>, <4,2>, <4,3>\}$
Hint: rephrase it as: find a function which has no fixed points and whose square is the identity. Your example is not a function, since it relates the same element to multiple other elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Linear feedback shift register sequences cipher breaking The premise here is that somehow it is known that a cipher is linear feedback shift register sequence as well as a bit of plaintext and ciphertext, such that it can be found that the first several bits of key are 1 0 1 0 0 1 1 1 0 1 0 0 1 1 1. So then it's possible to use this to try and determine the coefficients of the recursion sequence $x_{n+m} = \sum_{k=0}^{m-1}c_{k}x_{n+k}$ where $x_{k}$ and $c_{k}$ are integers (mod 2) and hence break the cipher completely. Is m the total number of digits (15)? If I'm only asked to find one $c_{k}$, does that imply that $c_{0}=c_{1}=...=c_{m-1}$ and hence, c_{k} can be factored out? Then is it just a matter of setting up multiple equations and solving? I'm I at all on the right track?
101001110100111 is really 1010011 being repeated, so then m=7. $x_{1+7} = c_{0}x_{1+0}+c_{1}x_{1+1}+c_{2}x_{1+2}+c_{3}x_{1+3}+c_{4}x_{1+4}+c_{5}x_{1+5}+c_{6}x_{1+6}$ $x_{8}=c_{0}x_{1}+c_{1}x_{2}+c_{2}x_{3}+c_{3}x_{4}+c_{4}x_{5}+c_{5}x_{6}+c_{6}x_{7}$ $=c_{0}*1+c_{1}*0+c_{2}*1+c_{3}*0+c_{4}*0+c_{5}*1+c_{6}*1=c_{0}+c_{2}+c_{5}+c_{6} = 1$ $x_{9}=c_{1}+c_{4}+c_{5}+c_{6}=0$ $x_{10}=c_{0}+c_{3}+c_{4}+c_{5}=1$ $x_{11}=c_{2}+c_{3}+c_{4}+c_{6}=0$ $x_{12}=c_{1}+c_{2}+c_{3}+c_{5}=0$ $x_{13}=c_{0}+c_{1}+c_{2}+c_{4}=1$ $x_{14}=c_{0}+c_{1}+c_{3}+c_{6}=1$ Use linear algebra (mod 2) to solve the system of equations to get $c_{0}-c_{5}-c_{6}=-1$ $c_{1}-c_{5}=0$ $c_{2}=0$ $c_{3}=0$ $c_{4}-c_{6}=0$ While there are multiple solutions, one possibility is $c_{2}=c_{3}=0$ and $c_{0}=c_{1}=c_{4}=c_{5}=c_{6}=1$
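A quick sanity check of that solution: regenerating the keystream from its first $m=7$ bits with the taps $c_2=c_3=0$, $c_0=c_1=c_4=c_5=c_6=1$ should reproduce all 15 observed bits (a Python sketch; the equations above use 1-indexed bits, the list below is 0-indexed):

```python
# Known keystream bits x_1..x_15 (stored 0-indexed as key[0..14]).
key = [1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1]

# Candidate taps c_0..c_6 from the solution above.
c = [1, 1, 0, 0, 1, 1, 1]
m = len(c)

# Regenerate the stream from the first m bits via
# x_{n+m} = sum_k c_k * x_{n+k} (mod 2).
stream = key[:m]
while len(stream) < len(key):
    nxt = sum(ck * xk for ck, xk in zip(c, stream[-m:])) % 2
    stream.append(nxt)

print(stream == key)  # True: the taps reproduce the observed key bits
```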
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Normed linear space and non-zero linear bounded functional Let $V$ be a normed linear space and $l$ be a non-zero linear bounded functional on $V$. If $d=\inf\{\|v\|:l(v)=1\}$, show that $\|l\|=\frac{1}{d}$.
You must have $\forall v \in V,\ |l(v)| \leq \|l\|\cdot\|v\|$, and $\|l\|$ is the least such constant, i.e. $\|l\|=\sup_{v\neq 0}\frac{|l(v)|}{\|v\|}$. Since $l\neq 0$, vectors with $l(v)=0$ contribute nothing to this supremum, and for any $v$ with $l(v)\neq 0$ we may rescale (using the linearity of $l$ and the absolute homogeneity of the norm) so that $l(v)=1$; of course in this case $\|v\| \neq 0$. The supremum therefore becomes $$\|l\| = \sup_{l(v)=1}\frac{1}{\|v\|} = \frac{1}{\inf\{\|v\|:l(v)=1\}} = \frac{1}{d}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can someone tell me what should i do next? Should I use the inequality between Arithmetic and Geometric mean? Problem 1: Let $a,b,c> 0$ $ab+3bc+2ca\leqslant18$ Prove that: $\frac{3}{a} + \frac{2}{b}+ \frac{1}{c}\geqslant 3$. I started on this way: $\frac{3}{a} + \frac{2}{b}+ \frac{1}{c}\geqslant 3$ $\frac{3bc+2ac+ab}{abc}\geqslant 3 $ $\frac{ab+3bc+2ac}{abc}\geqslant 3 \times \frac{abc}{3}$ $\frac{ab+3bc+2ca}{3}\geqslant abc$
By Holder: $$18\left(\frac{3}{a}+\frac{2}{b}+\frac{1}{c}\right)^2\geq(ab+3bc+2ac)\left(\frac{3}{a}+\frac{2}{b}+\frac{1}{c}\right)\left(\frac{2}{b}+\frac{1}{c}+\frac{3}{a}\right)\geq$$ $$\geq\left(\sqrt[3]{ab\cdot\frac{3}{a}\cdot\frac{2}{b}}+\sqrt[3]{3bc\cdot\frac{2}{b}\cdot\frac{1}{c}}+\sqrt[3]{2ac\cdot\frac{1}{c}\cdot\frac{3}{a}}\right)^3=162$$ and we are done! The Holder inequality for three sequences is the following. Let $a_1$, $a_2$,..., $a_n$, $b_1$, $b_2$,..., $b_n$, $c_1$, $c_2$,..., $c_n$, $\alpha$, $\beta$ and $\gamma$ be positive numbers. Prove that: $$(a_1+a_2+...+a_n)^{\alpha}(c_1+c_2+...+c_n)^{\gamma}(b_1+b_2+...+b_n)^{\beta}\geq$$$$\geq\left(\left(a_1^{\alpha}b_1^{\beta}c_1^{\gamma}\right)^{\frac{1}{\alpha+\beta+\gamma}}+\left(a_2^{\alpha}b_2^{\beta}c_2^{\gamma}\right)^{\frac{1}{\alpha+\beta+\gamma}}+...+\left(a_n^{\alpha}b_n^{\beta}c_n^{\gamma}\right)^{\frac{1}{\alpha+\beta+\gamma}}\right)^{\alpha+\beta+\gamma}.$$ It follows from convexity of $f(x)=x^k$, where $k>1$. In our case $\alpha=\beta=\gamma=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that $f(x)=x^3+x+2$ is bijective without calculus I want to prove that $f(x)=x^3+x+2$, $f: \mathbb R \rightarrow \mathbb R$ is bijective without calculus. My attempts at showing to prove that it' injective and surjective are written below: $1)$ Injectivity: I want to show that $\forall a,b \in \mathbb R$ $f(a)=f(b) \implies a=b$. I started like this: $$f(a)=f(b) \implies a^3+a+2=b^3+b+2$$ $$\implies a(a^2+1)=b(b^2+1)$$ $$\implies \frac{a}{b}=\frac{b^2+1}{a^2+1}$$ Then I said since $\frac{b^2+1}{a^2+1}>0$ $\forall a,b \in \mathbb R$ then either $a \land b < 0$ or $a \land b > 0$. (For the case when $b=0 \land a \in \mathbb R$ it would be easy to prove that $a=b$.) From there it seemed pretty obvious that $\frac{a}{b}=\frac{b^2+1}{a^2+1} \implies a=b$ so I couldn't really draw a logical argument to show that $a=b$. $2)$ Surjectivity: I want to show that $\forall b \in \mathbb R$ $\exists a \in \mathbb R$ s.t. $f(a)=b$. I started like this: Let $b \in \mathbb R$ and set $f(a)=b$ then we have: $$a^3+a+2=b$$ $$\implies a^3+a=b-2$$ $$\implies a(a^2+1)=b-2$$ But then I couldn't find an expression for $a \in \mathbb R$ in terms of $b$. So I'm wondering if anyone can tell me how I can proceed with my surjectivity and injectivity proofs.
As for the injectivity, if $a>b$, then $a^3>b^3$, so $f(a)=a^3+a+2>b^3+b+2=f(b)$. As for the surjectivity, there is a formula to find a real solution of an equation of degree 3; it is not very nice, but you can use it. You can find it here
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that if and only if $(A,B)$ is controllable, then $(A-BK,B)$ is also controllable Proof that the rank of $(A,B)$ is the same as $(A-BK,B)$ The image is taken from "Optimal Control Methods for Linear Discrete-Time Economic Systems" by Y. Murata Could someone please explain why the sum of columns of $B, AB$ and so on do not affect the row rank of the matrix in the above proof?
The proof is based on the following fact (which is easy to show): If $w_1,\ldots,w_m\in\operatorname{span}\{v_1,\ldots,v_n\}$ then $$ \operatorname{span}\{v_1,\ldots,v_n,(u_1+w_1),\ldots,(u_m+w_m)\} = \operatorname{span}\{v_1,\ldots,v_n,u_1,\ldots,u_m\}. $$ In particular, $$ \dim\operatorname{span}\{v_1,\ldots,v_n,u_1+w_1,\ldots,u_m+w_m\} = \dim\operatorname{span}\{v_1,\ldots,v_n,u_1,\ldots,u_m\}, $$ which is the same as $$ \operatorname{rk}[V,U+W] = \operatorname{rk}[V,U], $$ where $V = [v_1,\ldots,v_n]$, $W = [w_1,\ldots,w_m]$, and $U = [u_1,\ldots,u_m]$. The assumption $w_1,\ldots,w_m\in\operatorname{span}\{v_1,\ldots,v_n\}$ now means that $W = VL$ with a matrix $L$ of appropriate dimensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2462979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Logic of the implication in $ε$-$δ$ proofs Im confused by why epsilon delta proofs logically work. An example is Proof: Given $ε>0$, choose $δ = {ε\over3}$. For all $x$, if $0<|x−2|<δ$ then $|(3x−1)−5| < ε$. That last part if $0<|x−2|<δ$ then $|(3x−1)−5| < ε$ LOOKS A LOT LIKE $P\to Q$ because of the "if then" but yet the proof in the book solves it like as if its $Q\to P$?: $$\begin{align}|(3x−1)−5| &= |3x−6|\\ &= |3(x−2)|\\ &= 3|x−2|\\ &<3δ\\ &= 3\left({ε\over3}\right) \\ &= ε\end{align}$$ So my question is how come it looks like a $P\to Q$ proof but yet we start with $Q$ to show $P$?
Let's just run through the proof. We want to prove that, for any $\epsilon > 0$, there exists $\delta > 0$ such that for all $x$, $0 < |x-2| < \delta \implies |(3x-1)-5|<\epsilon$. Hence, we take an arbitrary $\epsilon > 0$. We now want to prove the existence of a $\delta > 0$ such that the statement "for all $x$ (...)" holds. Note that $|(3x−1)−5| = |3x−6| = |3(x−2)| = 3|x−2|$. Now we want to construct a $\delta > 0$ such that IF $|x-2| < \delta$, THEN $3|x-2| < \epsilon$. Hence, if we take $\delta = \frac{\epsilon}{3}$, we have reached our goal: $|x-2| < \delta = \frac{\epsilon}{3}$, hence $3|x-2| < \epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 2 }
Sum the series $(1^2+1)1! + (2^2+1)2! + (3^2+1)3! \cdots + (n^2+1)n!$ Problem: Sum the series: $$(1^2+1)1! + (2^2+1)2! + (3^2+1)3! \cdots + (n^2+1)n!$$ Source: A book on algebra.I came across this interesting looking series and decided to tackle it. My try : All I have tried is taking the $r^{th}$ term and summing it, but in vain: $$ T_r = (r^2+1)r!$$ $$T_r = r^2\cdot r! + r!$$ Now I don't know how to sum either of these terms.I'm not familiar with uni level math as I'm a high school student. All help appreciated!
Hint. Note that $$(n^2+1)n!=n(n+1)!-(n-1)(n)!$$ Therefore the sum can be written as $$(1(2)!-(0)(1)!)+(2(3)!-(1)(2)!)+(3(4)!-(2)(3)!)+\dots +(n(n+1)!-(n-1)(n)!).$$
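The hint telescopes: summing $r(r+1)!-(r-1)r!$ from $r=1$ to $n$ leaves $n(n+1)!$. A short Python check of that closed form against the direct sum:

```python
from math import factorial

def lhs(n):
    # Direct sum of (r^2 + 1) * r! for r = 1..n.
    return sum((r * r + 1) * factorial(r) for r in range(1, n + 1))

def rhs(n):
    # Closed form left over after the telescoping: n * (n+1)!.
    return n * factorial(n + 1)

print(all(lhs(n) == rhs(n) for n in range(1, 15)))  # True
```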
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A quotient of infinite direct product of field Let $R$ be a ring of countable infinite direct product of a field $F$ and $I$ be the countable infinite direct sum of $F$,clearly $I$ is an essential ideal of $R$. Is $R$/$I$ an indecomposable $R$-module? Is $R$/$I$ a cyclic $R$-module? Is $I$ a maximal ideal in $R$? I will be thankful for any clarification.
As it is a countable product, we will assume that it is indexed by the natural numbers. 1) We will put $e\in R$ such that $e_i=1_F$ if $i=2k$ for some $k\in\mathbb{N}$ and $0_F$ otherwise, and $f\in R$ such that $f_i=1_F$ if $i=2k+1$ for some $k\in\mathbb{N}$ and $0_F$ otherwise. It is clear that $1_R=e+f$, and that $e$ and $f$ are orthogonal central idempotents. Then $R=Re\oplus Rf$. Moreover $e+I$ and $f+I$ are nonzero orthogonal central idempotents and $1_R+I=(e+I)+(f+I)$. Therefore $R/I=(R/I)(e+I)\oplus(R/I)(f+I)$, so $R/I$ is decomposable. 2) As mentioned in the comments, $R/I=R(1+I)$, so it is cyclic. 3) $I$ is not maximal, as $R/I$ is not a field, not even an integral domain: $(e+I)(f+I)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve a nonlinear system of equations in 3 variables I need to solve this system of equations $$\frac 1x+\frac{1}{y+z}=-\frac2{15}$$ $$\frac 1y+\frac{1}{x+z}=-\frac2{3}$$ $$\frac 1z+\frac{1}{x+y}=-\frac1{4}$$ I've tried to express $x$ in terms of $y$, then $y$ in terms of $z$. But this leads to nothing good. I think I should use some matrix method, but I'm not sure what exactly I have to do. Need help here.
The solution is given by $$ (x,y,z)=(5,-1,-2) $$ This follows by multiplying with the common denominator, which gives three polynomial equations in $x,y,z$, which can be easily solved using resultants. The first polynomial equation, for example, is $x(2y + 2z + 15) + 15(y + z)=0$. One of the resultant equations is, for example, $yz - x - y - z=0$, so that we can substitute $x=yz-y-z$.
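A quick check of the claimed solution with exact rational arithmetic (a Python sketch):

```python
from fractions import Fraction as F

x, y, z = F(5), F(-1), F(-2)

# Plug the claimed solution into each equation; exact arithmetic
# avoids any floating-point doubt.
eq1 = 1 / x + 1 / (y + z)
eq2 = 1 / y + 1 / (x + z)
eq3 = 1 / z + 1 / (x + y)

print(eq1 == F(-2, 15), eq2 == F(-2, 3), eq3 == F(-1, 4))  # True True True
```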
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Find the volume between $z=4-x^2-y^2$ and $z=4-2x$ as a triple integral So the volume of $z=4-x^2-y^2$ and $z=4-2x$ as a triple integral shall look similar to $$\int^2_0\int^{y=?}_{y=?}\int^{4-x^2-y^2}_{4-2x} dz dy dx$$ but how do I find the limits on $y$?
The point is that you are integrating over a domain in which $4-2x<4-x^2-y^2$. In that domain, $$ -2x < -x^2-y^2\\ 2x>x^2+y^2\\ y^2 < 2x -x^2 \\ -\sqrt{2x-x^2} < y < +\sqrt{2x-x^2} $$ So the integral is $$ \int_{x=0}^2 \int_{y=-\sqrt{2x-x^2}}^{+\sqrt{2x-x^2}} (4- x^2 -y^2 -(4-2x) )dy\,dx = \int_{x=0}^2 \int_{y=-\sqrt{2x-x^2}}^{+\sqrt{2x-x^2}} (2x- x^2 -y^2 )dy\,dx $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Consider a box containing four balls: one red, one green, one blue, and one tricolor (=red,green and blue) Consider a box containing four balls: one red, one green, one blue, and one tricolor (=red,green and blue). You draw one ball from the box. Consider the three events: R = {the drawn ball contains red} G = {the drawn ball contains green} Y = {the drawn ball contains red and green} Are R and G independent? At first I thought yes, but doesn't getting a tricolor ball affect the probability of R and G? Are G and Y independent? I think no because a tricolor ball can be in both G and Y.
Using conditional probability formulas: $$P(R|G)=\frac{P(R\cap G)}{P(G)}=\frac{1/4}{2/4}=\frac12=P(R) \Rightarrow Independent;$$ $$P(R|Y)=\frac{P(R\cap Y)}{P(Y)}=\frac{1/4}{1/4}=1\ne \frac12=P(R) \Rightarrow Dependent.$$
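Since the sample space is just four equally likely balls, the independence checks can also be done by brute-force enumeration (a Python sketch; the colour labels are illustrative):

```python
from fractions import Fraction

# The four balls, described by the set of colours each contains.
balls = [{"r"}, {"g"}, {"b"}, {"r", "g", "b"}]

def p(event):
    # Probability of an event under the uniform draw of one ball.
    return Fraction(sum(1 for ball in balls if event(ball)), len(balls))

R = lambda ball: "r" in ball
G = lambda ball: "g" in ball
Y = lambda ball: "r" in ball and "g" in ball

# Independence means P(A and B) = P(A) * P(B).
pRG = p(lambda b: R(b) and G(b))
pGY = p(lambda b: G(b) and Y(b))

print(pRG == p(R) * p(G))  # True: R and G are independent
print(pGY == p(G) * p(Y))  # False: G and Y are dependent
```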
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Redundant finite character family definition Definition: Let $F$ be a family of sets. $F$ is called of finite character if for each set $A$, we have that: $A\in F\iff$ Each finite subset of $A$, is also in $F$. I can't see the point of this definition, I find it redundant. It seems trivial, if $A$ is in $F$ then every subset (finite or infinite) of $A$ will be always in $F$. My thinking is that every family $F$ is of finite character, according to the definition.
As Somos pointed out in the comments, this is not "redundant". Let's consider the vector space $\mathbb{R}^{\alpha}$ for some dimension (possibly infinite) $\alpha \ge 2$ and let $F_1$ be the collection of all linearly independent subsets $A \subseteq \mathbb R^{\alpha}$. $F_1$ is of finite character since a subset $A \subseteq \mathbb R^{\alpha}$ is linearly independent if and only if every finite subset $A^* \subseteq A$ is linearly independent over $\mathbb R^{\alpha}$. Now consider $F_2$, the collection of all maximal linearly independent subsets $A \subseteq \mathbb R^{\alpha}$. $F_2$ is not of finite character since for any maximal linearly independent $A \in F_2$ and any $a \in A$ we may consider the finite subset $\{a\} \subseteq A$. Since $\alpha \ge 2$, the set $\{a\}$ is not maximal linearly independent in $\mathbb R^{\alpha}$ and hence $\{a\} \not\in F_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Necessity of absolute value in The Fubini–Tonelli theorem? In The Fubini–Tonelli theorem(reference in wiki): What's the point of taking absolute value in $\int_{X\times Y}|f(x,y)|\,d(x,y)$? Isn't $f$ integrible in $X\times Y$ automatically implies $|f|$ integrable in $X\times Y$? So we only need $f$ integrable to apply Fubini–Tonelli theorem?
No. The theorem holds for all non-negative measurable functions: you compute the iterated integral of the absolute value of $f$, and if it comes out finite you conclude that $f$ is integrable, so you may then apply Fubini to $f$ to calculate the integral of $f$ by iterated integrals. A very useful result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2463979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Summing up $3+5+9+17+...$ Find the sum of sum of $3 +5+9+17+...$ till $n$ terms. Using Method of differences, the sum of the series is $$\sum\limits_{j=1}^n 2^{j-1}+n$$ I am facing difficulty in evaluating $$\sum\limits_{j=1}^n 2^{j-1}$$. How do I do that? Now I have $2^0 + 2^1 + 2^2 ... 2^{n-1}$ The sum of this series is : $2^n- 1$ as sum of GP is given by $a(1-r^n)/(1-r)$. Here $a = 1, r =2$
Without words: $$n+2^{n+1}-2$$
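Spelling the "without words" answer out numerically: each term of $3+5+9+17+\dots$ is $2^j+1$, so the partial sums should match $n+2^{n+1}-2$ (a short Python check):

```python
def term(j):
    # j-th term of 3, 5, 9, 17, ...: each term is 2^j + 1.
    return 2 ** j + 1

def closed_form(n):
    return n + 2 ** (n + 1) - 2

print(all(sum(term(j) for j in range(1, n + 1)) == closed_form(n)
          for n in range(1, 20)))  # True
```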
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Product of polynomials in extensions Sorry if this is an elementary question. Let $K$ be a field and $L$ an extension of $K$. Let $P,Q\in{K[X]}$, so that $Q$ divides $P$ in $L[X]$, i.e. so that there exists a polynomial $R\in{L[X]}$ such that $P=QR$. Then $R\in{K[X]}$. I think this is a consequence of the unique factorisation of $P$ as a product of irreducible polynomials of $K[X]$, but I want to be sure.
The relation $P=QR$ gives a linear system for the coefficients of $R$. This system has entries in $K$ because $P,Q \in K[x]$. If this system has a solution in $L$, then this solution can be found using Gaussian elimination. Therefore, the solution is in $K$ because Gaussian elimination only uses rational operations on the entries and these operations never leave $K$, which is a field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I find the equation of a circle given two points and a tangent line through one of the points? I was wondering whether it was possible to find the equation of a circle given two points and the equation of a tangent line through one of the points so I produced the following problem: Find the equation of the circle which passes through $(1,7)$ and $(-6,0)$ and has a tangent with equation $2x-9y+61=0$ at $(1,7)$ This seems like it should be solvable but I cannot work out how. Clearly, the line and the circle have one point of intersection so I tried finding the point of intersection between the line and the circle using the generic circle equation $(x-a)^2 + (y-b)^2 = r^2$, the equation of the line, and the discriminant of the resulting quadratic, which must be 0, but this still produces a quadratic with two unknowns. I also feel like the fact that the perpendicular distance between centre $(a,b)$ and the line is the radius can be used somehow. Again, trying this seems to produce equations with too many unknowns. How can I solve this problem?
Hint. The center of such circle is on the line which is orthogonal to tangent line and passes through the point of tangency. Therefore, in your case, the coordinate of the center is $C=(1+2t,7-9t)$ for some $t\in \mathbb{R}$. In order to find $t$, impose that $C$ has the same distance from the given points $P=(1,7)$ and $Q=(−6,0)$: $$|CP|^2=|CQ|^2\Leftrightarrow (4+81)t^2=(7+2t)^2+(7-9t)^2.$$
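Carrying the hint through by hand, $85t^2=(7+2t)^2+(7-9t)^2$ reduces to $98t=98$, so $t=1$, giving centre $(3,-2)$ and $r^2=85$. A small numeric check of that result (a Python sketch):

```python
from math import isclose, sqrt

# Centre on the normal through the tangency point: C = (1 + 2t, 7 - 9t);
# equidistance from P = (1, 7) and Q = (-6, 0) gives t = 1 (worked out above).
t = 1
cx, cy = 1 + 2 * t, 7 - 9 * t          # centre (3, -2)
r2 = (1 - cx) ** 2 + (7 - cy) ** 2     # r^2 = 85

# Both points lie on the circle ...
print((-6 - cx) ** 2 + (0 - cy) ** 2 == r2)   # True

# ... and the distance from the centre to the line 2x - 9y + 61 = 0
# equals the radius, confirming tangency.
dist = abs(2 * cx - 9 * cy + 61) / sqrt(2 ** 2 + 9 ** 2)
print(isclose(dist, sqrt(r2)))                # True
```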
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 5 }
The sum of the real roots of $x^4+x^3+x^2+x-1$ Consider a polynomial $f(x)=x^4+x^3+x^2+x-1$. The sum of the real roots of $f(x)$ lies in the interval ... (1) $(0,1)$ (2) $(-1,0)$ (3) $(-2,-1)$ (4) $(1,2)$. Using the Intermediate Value property, I know that one root exists between $0$ and $1$, but I am stuck here and can't do anything else. Any hints on how I should proceed?
Let $$f(x)=x^4+x^3+x^2+x-1.$$ Thus, $$f(0.5)f(0.6)<0$$ and $$f(-1.3)f(-1.2)<0,$$ which says that there are two roots: $$x_1\in(0.5,0.6)$$ and $$x_2\in(-1.3,-1.2)$$ and $$x_1+x_2\in(-0.8,-0.6).$$ But $$f''(x)=12x^2+6x+2>0,$$ which says that $f$ is a convex function. Thus, our equation has at most two real roots, and the answer is (2), the interval $(-1,0)$. By the Descartes' rule of signs (https://en.wikipedia.org/wiki/Descartes%27_rule_of_signs) our equation has one positive root only, and we can check the location of the roots by calculator or even by hand.
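A quick bisection on the two sign-change intervals confirms the estimate for the sum (a pure-Python sketch; the interval endpoints are those used above):

```python
def f(x):
    return x ** 4 + x ** 3 + x ** 2 + x - 1

def bisect(lo, hi, steps=60):
    # Plain bisection; assumes f changes sign on [lo, hi].
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x1 = bisect(0.5, 0.6)       # ~  0.519
x2 = bisect(-1.3, -1.2)     # ~ -1.290
print(-0.8 < x1 + x2 < -0.6)  # True: the sum lies in (-0.8, -0.6)
```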
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The number of terms in the Multinomial Expansion $(x+\frac{1}{x}+x^2+\frac{1}{x^2})^n$ I am aware that there is a formula to calculate the number of terms in a multinomial expression $(x_1+x_2+x_3+...x_r)^n$, i.e. $^{n+r-1}C_{r-1}$. However, this is in the case when the terms $x_1, x_2, x_3 ... x_r$ are different variables. In my case, the variables are the same; i.e. x, raised to different powers. Can someone please point me in the right direction?
We obtain \begin{align*} \color{blue}{\left(x+\frac{1}{x}+x^2+\frac{1}{x^2}\right)^n}&=\frac{1}{x^{2n}}(1+x+x^3+x^4)^n\\ &=\frac{1}{x^{2n}}\left(1+x+x^3\left(1+x\right)\right)^n\\ &=\frac{1}{x^{2n}}(1+x)^n(1+x^3)^n\\ &\color{blue}{=\frac{1}{x^{2n}}\sum_{j=0}^n\binom{n}{j}x^j\underbrace{\sum_{k=0}^n\binom{n}{k}x^{3k}}_{n+1\text{ terms}}}\tag{1} \end{align*} Let's have a look at the three factors in (1). * *The rightmost sum contains $n+1$ pairwise different terms with exponents $3k,0\leq k\leq n$. *The leftmost factor $\frac{1}{x^{2n}}$ does not change the number of different terms as it is just a shift of each exponent of $x$ by $-2n$. *Now we assume $n\geq 2$ and analyse the left sum. A multiplication with the first three terms $\binom{n}{0},\binom{n}{1}x,\binom{n}{2}x^2$ results in $3(n+1)$ terms \begin{align*} \sum_{k=0}^n a_kx^{3k\color{blue}{+0}},\sum_{k=0}^n b_kx^{3k\color{blue}{+1}},\sum_{k=0}^n c_kx^{3k\color{blue}{+2}} \end{align*} which together give increasing powers $x^k$, $0\leq k\leq 3n+2$, i.e. $3(n+1)$ pairwise different terms. Each of the remaining terms $\binom{n}{j}x^j$, $3\leq j\leq n$, contributes exactly one new top exponent $3n+j$, so it adds one more term. We conclude: The number of different terms in the expansion of $\left(x+\frac{1}{x}+x^2+\frac{1}{x^2}\right)^n$ is \begin{align*} \color{blue}{1}&\qquad \color{blue}{n=0}\\ \color{blue}{4}&\qquad \color{blue}{n=1}\\ 3(n+1)+(n-2)=\color{blue}{4n+1}&\qquad \color{blue}{n\geq 2} \end{align*}
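The count $4n+1$ (with the small-$n$ exceptions) can be checked by brute force: the exponents occurring in the expansion are exactly the sums of $n$ picks from $\{1,-1,2,-2\}$ (a Python sketch):

```python
def num_terms(n):
    # Achievable exponents of x in (x + 1/x + x^2 + 1/x^2)^n:
    # all sums of n picks from {1, -1, 2, -2}.
    exps = {0}
    for _ in range(n):
        exps = {e + d for e in exps for d in (1, -1, 2, -2)}
    return len(exps)

print([num_terms(n) for n in range(6)])  # [1, 4, 9, 13, 17, 21]
```

For $n\geq 2$ the counts match $4n+1$.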
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Find all primes $p$ such that $p+1$ is a perfect square. Find all primes $p$ such that $p+1$ is a perfect square. All primes except for 2 (3 is not a perfect square, so we can exclude that case) are odd, so we can express them as $2n+1$ for some $n\in\mathbb{Z}_{+}$. Let's express the perfect square as $a^2$, where $a\in\mathbb{Z}_{+}$. Since we are interested in a number that is one more than $2n+1$, we know that our perfect square can also be expressed as $2n+1+1=2(n+1)$. $2n+2=a^2$ $2(n+1)=a^2$ So we know that our perfect square must be even, as it has a factor of 2 in it (in fact $2\cdot2$). It is my strong intuition that we get a perfect square only if $n=1$, and therefore $p=3$ and $p+1=a^2=2\cdot2=4$, but how should I continue with this proof? It seems to me that whatever factor we have on the LHS we need to have it twice on the RHS (since $a$ must be an integer), but how do I continue from there?
If a prime $p$ is of the form of $n^2-1$, then $$p=(n+1)(n-1)$$ and so $n-1$ must be 1.
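Since $p=(n+1)(n-1)$ forces $n-1=1$, the only such prime is $p=3$; a brute-force scan agrees (a Python sketch):

```python
def is_prime(p):
    # Trial division; fine for this small range.
    if p < 2:
        return False
    return all(p % d for d in range(2, int(p ** 0.5) + 1))

# Primes p with p + 1 a perfect square, for n up to 2000.
hits = [n * n - 1 for n in range(2, 2000) if is_prime(n * n - 1)]
print(hits)  # [3]
```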
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Combinatorial proof that the Stirling numbers of second kind satisfy $S(m+n+1,m) = \sum_{k=0}^{m} kS(n+k,k)$ Give a combinatoric proof for the identity $S(m+n+1,m) = \sum_{k=0}^{m} kS(n+k,k)$ LHS gives us a way to partition $m+n+1$ elements into $m$ blocks. How will we interpret $k$?
Note that we can rewrite this as $$ S(n+m+1,m) = \sum_{k=1}^{m}k \, S(n+k,k) = \sum_{s=n+2}^{n+m+1}(s-n-1)\, S(s-1,s-n-1) = \sum_{s=n+2}^{n+m+1} (m - (n+m+1-s)) \, S(s-1,m-(n+m+1-s)), $$ by substituting $s = n+k+1.$ Let $P$ be a partition of the set $\{1, \dots, n+m+1\}$ into $m$ parts. Let $s$ be the largest element of $\{1, \dots, n+m+1\}$ that is not in a singleton (a singleton is a part that contains only one element). This means that the $n+m+1-s$ elements $s+1, s+2, \dots, n+m+1$ are all in singletons. The partition has $m-(n+m+1-s)$ other parts. This induces a partition of the elements in $\{1,\dots,s-1\}$ into $m-(n+m+1-s)$ parts by removing the element $s$ from its part in $P$ and removing the singletons with elements larger than $s$. Note that there are $S(s-1,m-(n+m+1-s))$ many such partitions. Now there are $m-(n+m+1-s)$ ways to choose the part to which the element $s$ belongs. ($s$ is not in a singleton.) Summing over all possible choices for $s$ gives the result.
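The identity itself is easy to test numerically with the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$ (a Python sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    # Stirling numbers of the second kind via the standard recurrence.
    if n == k:
        return 1
    if k == 0 or n < k:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def identity_holds(m, n):
    lhs = S(m + n + 1, m)
    rhs = sum(k * S(n + k, k) for k in range(m + 1))
    return lhs == rhs

print(all(identity_holds(m, n)
          for m in range(1, 7) for n in range(0, 7)))  # True
```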
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solve integral $\int\frac{dx}{\sin x+ \cos x+\tan x +\cot x}$ I need to find: $$\int\frac{1}{\sin x+ \cos x+\tan x +\cot x}\ dx$$ My attempts: I have tried the conventional substitutions. I have tried the $\tan(x/2)$ substitutions, tried to solve it by quadratic but nothing has worked so far.
Partial Solution $$\begin{aligned} &\int\frac{\mathrm{d}x}{\sin(x)+\cos(x)+\tan(x)+\cot(x)} \\&=\int\frac{sc}{s^2c+c^2s+1}\,\mathrm{d}x\quad * \\&=\int sc\sum_{n\geq0}(-1)^n(s^2c+c^2s)^n\,\mathrm{d}x\quad\text{(Binomial series)}** \\&=\int sc\sum_{n\geq0}(-1)^n\left(\sum_{0\leq k\leq n}\binom{n}{k}(s^2c)^{k}(c^2s)^{n-k}\right)\mathrm{d}x\quad\text{(Binomial theorem)} \\&=\sum_{n\geq0}\,\sum_{0\leq k\leq n}(-1)^n\binom{n}{k}\int s^{n+k+1}c^{2n-k+1}\,\mathrm{d}x \end{aligned}$$ Then the integral in the last expression can be computed iteratively, for $a,b\in\mathbb{N}$, as: $$\int s^{a}c^{b}\,\mathrm{d}x =-\frac{s^{a-1}c^{b+1}}{a+b}+\frac{a-1}{a+b}\int s^{a-2}c^{b}\,\mathrm{d}x$$ This solves the integral as a series but doesn't give a closed form and still has the caveat of infinite terms. * $s=\sin(x),c=\cos(x)$ ** The series is the expansion of $\big((s^2c+c^2s)+1\big)^{-1}$, which converges since $|s^2c+c^2s|<1$. The iterative algorithm for $\int s^ac^b\,\mathrm{d}x$ is here, on Wikipedia.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Segment trisection without compass I'm trying to figure out how to trisect a segment using only pen and ruler. There is a parallel line provided. No measurement is allowed.
Let's mark the ends of the segment as A and B. (1) Mark points C and D on the parallel line. (2) Draw lines BD and AC, and mark the crossing point as E. (3) Draw lines BC and AD, and mark the crossing point as F. (4) Draw line EF, and mark its crossing point with line CD as G. (5) Draw line BG, and mark its crossing point with line AD as H. The line EH cuts segment AB in a 2:1 ratio.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2464986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
For what pair of integers $(a,b)$ is $3^a + 7^b$ a perfect square. Problem: For what pair of positive integers $(a,b)$ is $3^a + 7^b$ a perfect square. First obviously $(1,0)$ works since $4$ is a perfect square, $(0,0)$ does not work, and $(0,1)$ does not work, so we can exclude cases where $a$ or $b$ are zero for the remainder of this post. I have a few observations made but not much room for a full solution. First, since powers of an odd number are odd, and the sum of two odd numbers is even, so the base of the square must be an even number. Secondly, the last digit of the powers of $3$ are $\{1,3,7,9 \}$ , whereas the last digits of the powers of $7$ are $\{7,9,3,1 \}$. From here I am not sure how to proceed and any hints much appreciated. I'm not sure if there a finite amount of pairs or not either.
Just to register, from the comment by Daniel, there are just two possibilities, if the result is $x^2,$ either $$ 1 + 2 \cdot 3^c = 7^d, $$ or $$ 1 + 2 \cdot 7^e = 3^f. $$ I would guess that an elementary method shown in an answer by Gyumin Roh to http://math.stackexchange.com/questions/1551324/exponential-diophantine-equation-7y-2-3x can be modified for this task. My way of working with this takes a while... http://math.stackexchange.com/questions/1941354/elementary-solution-of-exponential-diophantine-equation-2x-3y-7 http://math.stackexchange.com/questions/1941354/elementary-solution-of-exponential-diophantine-equation-2x-3y-7/1942409#1942409 http://math.stackexchange.com/questions/1946621/finding-solutions-to-the-diophantine-equation-7a-3b100/1946810#1946810 http://math.stackexchange.com/questions/2100780/is-2m-1-ever-a-power-of-3-for-m-3/2100847#2100847
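A quick brute-force search over small exponents (a Python sketch) confirms the known small solutions; note that besides $(1,0)$ giving $4$, the pair $(2,1)$ also works since $9+7=16$.

```python
import math

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

solutions = [(a, b) for a in range(16) for b in range(16)
             if is_square(3**a + 7**b)]
# (1, 0) gives 4 = 2^2, and (2, 1) gives 9 + 7 = 16 = 4^2
assert (1, 0) in solutions and (2, 1) in solutions
```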
{ "language": "en", "url": "https://math.stackexchange.com/questions/2465086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Alternative & Null Hypothesis formulation Situation: A t-test is being used; the population mean is known, as well as the sample mean, the sample standard deviation, and $n = 31$. The statement is "Is the true mean goals per game $\mu$ for soccer players in the 2014 - 2015 season still 12.5?" Is the null hypothesis $H_0: \mu = 12.5$? How do I formulate a null hypothesis? Is the above correct? What question do I ask myself? Then would the alternative hypothesis be $H_a: \mu \ne 12.5$? Follow-up question: when I am conducting a t-test, and my t-value comes out to be 0.13, would I have to double this when finding the p-value because I am conducting a two-sided t-test? Thanks, hope this post is clear!
Yes. $H_0: \mu = 12.5$ against $H_a: \mu \ne 12.5.$ You would use the test statistic $T = \frac{\bar X - \mu_0}{s/\sqrt{n}}.$ And you would reject $H_0$ at level $\alpha = 5\%$ if $|T| > q^*,$ where $q^*$ cuts probability $0.025$ from the upper tail of Student's t distribution with $n-1$ degrees of freedom. With $n = 31,$ you would have $q^* = 2.042.$ I got this value using R statistical software (as below), but you could get it by looking at row $df = n-1 = 30$ of a printed table of the t distribution. qt(.975, 30) ## 2.042272 Of course you can't expect $\bar X$ to be exactly 12.5 this year. The question is whether it differs by enough to make you doubt that the population mean is still 12.5. If $|\bar X - 12.5|$ is large enough that $|T| > 2.042,$ then one says that the difference is 'statistically significant' and rejects $H_0.$ If $T = 0.13$ then the P-value is $$P(|T| \ge 0.13) = P(T \le -0.13) + P(T \ge 0.13) = 2P(T \le -0.13),$$ where the last equal sign is explained by the symmetry of the t-distribution about $0$. So yes, you would need to double the probability in one tail to get the two-sided P-value. You may be able to get a rough idea of the P-value by clever use of a printed t table, but largely P-value is a criterion of the computer age. In R you could get the P-value as follows: 2*pt(-0.13, 30) ## 0.8974342 Because the P-value is larger than 5%, you would not reject $H_0$ upon obtaining this value of $T.$ Graphically, the P-value is the sum of two areas under the $\mathsf{T}(df=30)$ density curve: the tail to the left of $-0.13$ and the tail to the right of $0.13$.
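If R (or SciPy) is not at hand, the same two-sided P-value $2P(T \le -0.13)$ can be reproduced with the standard library alone by integrating the t density $f(x) = \frac{\Gamma(\frac{\nu+1}{2})}{\sqrt{\nu\pi}\,\Gamma(\frac{\nu}{2})}\left(1+\frac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}}$ numerically. The sketch below uses Simpson's rule (the lower truncation point $-40$ is an assumption; the tail beyond it is negligible) and matches R's 0.8974342.

```python
import math

def t_pdf(x, df):
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def simpson(f, a, b, n=2000):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# two-sided p-value for T = 0.13 with df = 30
p = 2 * simpson(lambda x: t_pdf(x, 30), -40.0, -0.13)
assert abs(p - 0.8974342) < 1e-3
```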
{ "language": "en", "url": "https://math.stackexchange.com/questions/2465295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove existence and uniqueness of Convex Hull containing compact set I want to prove the existence and uniqueness of the convex set described below, which is the convex hull. My thinking is that I'm to generate a set containing all the convex sets containing $A$ and take their intersection. Then pointing out that the intersection will also be convex. How could I formalize the set containing all such convex sets containing $A$? Thanks in advance If $A\subset\mathbb{R^n}$ is compact, then show that $\exists$ a unique convex subset $B$ of $\mathbb{R^n}$ such that $A\subset B$ and $B$ lies in any compact convex subset of $\mathbb{R^n}$ containing $A$.
To formalize: let $\mathscr C$ be the collection of sets (so it is a subset in the power set of $\mathbb R^n$): $$\mathscr C:= \{ B\subset \mathbb R^n : B \text{ is convex and } A\subset B\}.$$ Then the set you want is $$ A^h := \bigcap _{B\in \mathscr C} B.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2465419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Removing points from a triangular array without losing information I'm trying to find insights about the following puzzle, to see if I can find it on the OEIS (and add it if it's not already there): Suppose I give you a triangular array of light bulbs with side length $n$: o o o o o o o o o o o o o o o o o o o o o 1 2 ... n I'm going to turn on three lightbulbs that form an "upright" equilateral triangle as in the following example: o o x o o o o o o o o x o o x o o o o o o Before I turn on the lights, your job is to remove as many lightbulbs as possible from the array—without losing the ability to deduce the triangle of bulbs that has been turned on. To be clear, if a lightbulb has been removed, it is not lit up when its position is turned on. For example, if you removed the following bulbs (marked by .) you would only see the following two lights turn on (marked by x), which is enough uniquely deduce the third (unlit) position: . . . o . x . . o . . o o o o . => o o o . o o o o . o x o o . <- the third unlit position o . . . o o o . . . o o Let $a(n)$ be the maximum number of bulbs that can be removed without introducing any ambiguities. With a naive algorithm, I have checked values up to a triangle with side length 7, as seen below: . . . o . . o o . o . . . . . o . o o . . . . . o o o o o . o o . o . . . . . o o o o . o o o o o . o . o . o o . . . o o . o o o o . . o o o . . . o o o . o . o o o a(2) = 3 a(3) = 4 a(4) = 5 a(5) = 7 a(6) = 9 a(7) = 11 Searching for this sequence on the OEIS turns up dozens of results. As an upper bound for this sequence, we need the different configurations of 3, 2, 1, or 0 lights to be able to represent all of the $\binom{n + 1}{3}$ possible triangles. That is: $$\binom{n + 1}{3} \leq \binom{b(n) - a(n)}{3} + \binom{b(n) - a(n)}{2} + \binom{b(n) - a(n)}{1} + \binom{b(n) - a(n)}{0}$$ where $b(n) = \frac{1}{2}n(n+1)$.
A stronger upper bound on $a(n)$ comes from the following observation: if we choose any two missing lights from the same row, then they are part of a triangle with at most one light still working. By pigeonhole, there can be at most $\binom{n+1}{2} + 1 - a(n) \le \binom{n+1}{2}$ such triangles, otherwise two of them would correspond to the same single light bulb being lit up. Let $a_k$ be the number of lightbulbs missing from row $k$, so that $a_1 + a_2 + \dots + a_n = a(n)$. Then the number of ways to choose two missing lights from the same row is $$\binom{a_1}{2} + \binom{a_2}{2} + \dots + \binom{a_n}{2}$$ which is at least $n \cdot \binom{a(n)/n}{2}$ by convexity. This gives us the inequality $$n \cdot \binom{a(n)/n}{2} \le \binom{n+1}{2} \implies a(n) \le \frac{n + n\sqrt{4n+5}}{2}.$$ So we get a bound on the order of $a(n) = O(n^{3/2})$. (In particular, for large $n$, almost all the lightbulbs have to remain in place.) On the other hand, we can show that $a(n) = \Omega(n^{4/3})$ by a probabilistic construction. Let's pick some lightbulbs to remove by the following process: * *First, for each lightbulb, independently decide whether to remove it at random, with a probability $p$ of removing it (to be determined later). *Second, every time there is a triangle in which all three lightbulbs are removed, put them all back. *Third, if a lightbulb is the vertex of $k \ge 2$ triangles in which the other two lightbulbs are removed, put back one lightbulb (chosen arbitrarily) in each of them. This guarantees that there are no triangles with all three lightbulbs missing, for each triangle with two lightbulbs missing the third lightbulb is unique, and we don't care about triangles with at most one lightbulb missing because those are uniquely determined anyway. Now we compute the expected number of lightbulbs we removed. In step 1, we removed an average of $\binom{n+1}{2} p$ of them. 
In step 2, we found an average of $\binom{n+1}{3}p^3$ triangles with all three vertices missing, and added three lightbulbs back for each one, for $\binom{n+1}{3}p^3 \le \binom{n+1}{2} (np^3)$ more lightbulbs. Step 3 is the trickiest. We can think of it this way: if a lightbulb is the single remaining vertex of $\mathbf X$ triangles, we always remove $\mathbf X$ of them, except when $\mathbf X=1$, when we get to keep one. So we remove $\mathbb E[\mathbf X] - \Pr[\mathbf X=1]$ triangles. Every lightbulb is the vertex of $n-1$ triangles, and is the only remaining one with probability $p^2$, so \begin{align} \mathbb E[\mathbf X] - \Pr[\mathbf X=1] &= (n-1)p^2 - (n-1)p^2 (1-p^2)^{n-1} \\ &= (n-1)p^2[1-(1-p^2)^{n-1}] \\ &\le (n-1)p^2[1 - (1 - (n-1)p^2)] \\ &\le n^2p^4. \end{align} This is for each of $\binom{n+1}{2}$ lightbulbs in the array. So at the end of the day, the expected number of lightbulbs removed and not replaced is at least $$\binom{n+1}{2}\left(p - np^3 - n^2p^4\right)$$ and this is optimized when $p = (2n)^{-2/3}$ giving $\Omega(n^{4/3})$ lightbulbs removed. (The exact constant on the leading term there is something like $\frac{3}{8\cdot 2^{2/3}} \approx 0.236$.)
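The small values $a(2)=3$, $a(3)=4$, $a(4)=5$ quoted in the question can be confirmed by an exhaustive search (a Python reimplementation of the asker's naive algorithm, feasible only for small $n$): a removal set is valid exactly when the map from triangles to their sets of surviving lit bulbs is injective.

```python
from itertools import combinations

def bulbs(n):
    return [(r, c) for r in range(1, n + 1) for c in range(1, r + 1)]

def triangles(n):
    """All upright equilateral triangles: apex (r, c) and side length s."""
    return [((r, c), (r + s, c), (r + s, c + s))
            for r in range(1, n + 1) for c in range(1, r + 1)
            for s in range(1, n - r + 1)]

def a(n):
    all_bulbs, tris = bulbs(n), triangles(n)
    for k in range(len(all_bulbs), -1, -1):   # try largest removals first
        for removed in combinations(all_bulbs, k):
            kept = set(all_bulbs) - set(removed)
            seen, ok = set(), True
            for t in tris:
                visible = frozenset(v for v in t if v in kept)
                if visible in seen:           # two triangles look identical
                    ok = False
                    break
                seen.add(visible)
            if ok:
                return k
    return 0

assert [a(n) for n in (2, 3, 4)] == [3, 4, 5]
```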
{ "language": "en", "url": "https://math.stackexchange.com/questions/2465571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Calculus of variation: Lagrange's equation A particle of unit mass moves in the direction of $x$-axis such that it has the Lagrangian $L= \frac{1}{12}\dot x^4 + \frac{1}{2}x \dot x^2-x^2.$ Let $Q=\dot x^2 \ddot x$ represent a force (not arising from a potential) acting on the particle in the $x$-direction. If $x(0)=1$ and $\dot x(0)=1$, then the value of $\dot x$ is * *some non-zero finite value at $x=0$. *$1$ at $x=1$. *$\sqrt 5 $ at $x=\frac{1}{2}$ *$0$ at $x=\sqrt \frac{3}{2} $ My Attempt: The Lagrange's equation is $$ \frac{d}{dt}\left (\frac{\partial L}{\partial \dot x}\right )-\frac{\partial L}{\partial x}=Q \tag{1} $$ where $$ \frac{\partial L}{\partial x}=\frac{\partial }{\partial x}\left(\frac{1}{12}\dot x^4 + \frac{1}{2}x \dot x^2-x^2\right) =\frac{1}{2} \dot x^2-2x$$ and $$\frac{\partial L}{\partial \dot x}=\frac{1}{3}\dot x^3 +x \dot x $$ Hence equation $(1)$ becomes \begin{aligned} \frac{d}{dt}\left( \frac{1}{3}\dot x^3 +x \dot x \right)-\frac{1}{2} \dot x^2+2x &=\dot x^2 \ddot x \\ \Rightarrow\; \frac{d}{dt}\left( \frac{1}{3}\dot x^3\right) +\frac{d}{dt}\left(x \dot x \right) &=\dot x^2 \ddot x+\frac{1}{2} \dot x^2-2x \\ \Rightarrow\; \dot x^2 \ddot x+\frac{d}{dt}(x \dot x ) &=\dot x^2 \ddot x+\frac{1}{2} \dot x^2-2x \\ \Rightarrow\; \frac{d}{dt}(x \dot x ) &=\frac{1}{2} \dot x^2-2x \end{aligned} I don't know how to solve further and where to use the given values of $x(0) $ and $\dot x(0)$
well, continue \begin{align} \frac{d}{dt}(x \dot x ) &=\frac{1}{2} \dot x^2-2x \\ \implies \dot{x}^2+x \ddot{x} &= \frac{1}{2}\dot{x}^2-2x \\ \implies 0 &= x \ddot{x}+\frac{1}{2}\dot{x}^2+2x \\ &\vdots \end{align}
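One way to continue from $x\ddot x + \tfrac12\dot x^2 + 2x = 0$: multiplying by $2\dot x$ gives $\frac{d}{dt}\left(x\dot x^2 + 2x^2\right) = 0$, so $x\dot x^2 + 2x^2$ is conserved; with $x(0)=\dot x(0)=1$ the constant is $3$, i.e. $\dot x^2 = (3-2x^2)/x$. The Python sketch below integrates the equation with a hand-rolled RK4 step and checks this invariant numerically (confirming, for instance, that $\dot x = 0$ exactly when $x=\sqrt{3/2}$).

```python
def rhs(x, v):
    # from x*x'' + x'^2/2 + 2x = 0, solved for x''
    return -(0.5 * v * v + 2.0 * x) / x

def rk4_step(x, v, h):
    k1x, k1v = v, rhs(x, v)
    k2x, k2v = v + 0.5 * h * k1v, rhs(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = v + 0.5 * h * k2v, rhs(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = v + h * k3v, rhs(x + h * k3x, v + h * k3v)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

x, v, h = 1.0, 1.0, 1e-4          # x(0) = 1, x'(0) = 1
for _ in range(2000):             # integrate to t = 0.2
    x, v = rk4_step(x, v, h)
    assert abs(x * v * v + 2 * x * x - 3.0) < 1e-8   # conserved quantity
```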
{ "language": "en", "url": "https://math.stackexchange.com/questions/2465686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the existence of the Levi-Civita connection depend on whether or not we define a metric on our smooth manifold? The Christoffel symbols of the Levi-Civita connection are calculated through the metric, but does that necessarily mean that its existence depends on whether or not we have a metric? Specifically, the Levi-Civita connection offers a specific way to parallel transport a vector along a manifold, so if we don't have a metric on a manifold, does this particular way to parallel transport a vector along the manifold gets "lost"? I mean, of course we need the metric to determine the Levi-Civita connection, but as a geometrical concept, it seems intuitive to me that as a way to transport vectors, its existence should not depend on whether on not we defined a metric on our manifold. Note that this question is motivated by the fact that we define connections before even talking about a metric. But in my Riemannian geometry course, we defined the Levi-Civita connection through the metric, but I wanted to know if this necessarily means that it can't be defined(as a concept) without a metric. Thanks in advance.
I think maybe you're misinterpreting the phrase "the Levi-Civita connection." Despite the definite article, there isn't just one such connection -- every Riemannian metric has its own Levi-Civita connection, uniquely determined by the metric. So the question in your title doesn't make sense -- without a specific choice of metric, "the Levi-Civita connection" has no meaning. It's like asking if "the derivative" exists without specifying a particular function to take the derivative of.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2465947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Probability of three dice in n rolls Consider the bet that all three dice will turn up sixes at least once in n rolls of three dice. Calculate $f(n)$, the probability of at least one triple-six when three dice are rolled n times. Determine the smallest value of n necessary for a favorable bet that a triple-six will occur when three dice are rolled n times. (DeMoivre would say it should be about $216 \log 2 = 149.7$ and so would answer 150—see Exercise 1.2.17. Do you agree with him?) As I understand: $ f(n)=1 - \text{(probability of no triple sixes in} \space n \space \text{rolls})$, i.e. $ f(n)=1- \left (\frac{5\cdot 5\cdot 5}{6\cdot 6\cdot 6}\right)^n$ Is that correct? What about the second question? Doesn't it depend on the bet, or what do they mean?
The probability $\frac{5^3}{6^3}$ is the probability of the event "you get no sixes at all (in a particular throw)". The probability for not getting "a triple six" (in one throw) is $1-\frac{1}{6^3}$. So $$f(n) = 1 - (1-\frac{1}{6^3})^n$$ The first value of $n$ for which this is larger than $0.5$ is $n=150$, so I would agree with de Moivre.
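A direct computation (Python sketch) confirms that $n=150$ is the first value for which $f(n) > 0.5$:

```python
f = lambda n: 1 - (1 - 1 / 216) ** n

n = 1
while f(n) <= 0.5:
    n += 1
assert n == 150
assert f(149) < 0.5 < f(150)
```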
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Spectral Theorem - $AB = BA \implies B\Phi(f) = \Phi(f)B$ Consider the construction of the Borel Functional Calculus for a self adjoint operator $A$ as descibred here: Continuity of the functional calculus form of the Spectral Theorem or better yet, here: http://www.math.mcgill.ca/jakobson/courses/ma667/mendelsontomberg-spectral.pdf on page 4. Assume the continuous functional calculus was established. I want to show: $AB = BA \implies B\Phi(f) = \Phi(f)B$ where $f \in \mathbb{B}(\mathbb{R})$ the bounded Borel functions on $\mathbb{R}$. See my suggestion for a simpler proof below please.
I think my last answer has a problem in it (see comments), I'll leave it there in case someone comes upon this. Here's my new suggested solution: Step 1 is to prove $\forall x \in H (x, B\Phi(f) x) = (x, \Phi(f) B x)$. Step 2 is to notice that the above implies $\forall x,y \in H$ $(x, B\Phi(f) y) = (x, \Phi(f) B y)$ by the polarization identity. Step 3 the above of course implies that $\Phi(f) B = B\Phi(f)$. So we only need a proof for 1: Set $x \in H$. Define $\phi_k = B^*x +i^kx$ for $k \in \{0,1,2,3\}$, $\psi_k = x +i^kB^*x$ for $k \in \{0,1,2,3\}$. Define $\mu = \sum_{k = 0}^{3} |\mu_{\phi_k}| + |\mu_{\psi_k}|$. Where $\mu_{\phi_k}$, $\mu_{\psi_k}$ are the spectral measures associated to the corresponding vectors. As each is a regular Borel measure it follows that $\mu$ is a regular Borel measure. Choose $\{f_n\} \subset C(\sigma(A))$ s.t $\int f_n d\mu \to \int f d\mu$. It follows that $\int f_n \mu_{\psi_k} \to \int f \mu_{\psi_k}$, and $\int f_n \mu_{\phi_k} \to \int f \mu_{\phi_k}$ for all $k \in \{0,1,2,3\}$. See $\int_X f_n d\mu_1 \to \int_X fd\mu_1 $ and $\int_X f_n d\mu_2\to \int_X f d\mu_2$ for a bounded Borel function $f$ So we have that $(x, B\Phi(f)x) = (B^*x, \Phi(f)x) \overset{definition}{=} \frac{1}{4} \sum_{k = 0}^{3} i^k \int f d\mu_{\phi_k} = lim_n \frac{1}{4} \sum_{k = 0}^{3} i^k \int f_n d\mu_{\phi_k} = lim_n(B^*x, \Phi(f_n)x) \overset{f_n \in C(\sigma(A))}{=} lim_n(x, \Phi(f_n)Bx) = lim_n \frac{1}{4} \sum_{k = 0}^{3} i^k \int f_n d\mu_{\psi_k} = \frac{1}{4} \sum_{k = 0}^{3} i^k \int f d\mu_{\psi_k} = (x, \Phi(f)Bx)$. Comments are really appreciated!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to show that $- \log_b x = \log_{\frac{1}{b}} x$ I saw the following log rule and have been struggling to show it's true, using the change of base rule. Any hints for proving it would be much appreciated. $- \log_b x = \log_{\frac{1}{b}} x$ I get as far as showing that $- \log_b x = \log_b \frac{1}{x}$ and think the change of base rule for logs, $\log_b x = \frac{\log_c x}{\log_c b} $ might be useful, but am not sure how to proceed.
$$\log_{1/b} x=\frac{\log_b x}{\log_b \frac{1}{b}}=\frac{\log_b x}{-1}=-\log_b x$$
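A one-line numeric check with Python's `math.log` (which accepts an arbitrary base as its second argument):

```python
import math

for b in (2.0, 3.0, 10.0):
    for x in (0.5, 7.0, 100.0):
        assert abs(math.log(x, 1 / b) + math.log(x, b)) < 1e-9
```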
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What is the identity arrow in this category? This is taken from An Introduction to Category Theory by Harold Simmons (Example 1.3.1) (slightly changed for formatting reasons). The objects are the finite sets. An arrow $f$ with source $A$ and target $B$ is a function $$f : A\times B \to \mathbb{R}$$ with no imposed conditions. For each pair of arrows $f,g$ with $\text{source}(f) = A$, $\text{target}(f) = B$, $\text{source}(g) = B$, $\text{target}(g) = C$ we define $g \circ f$ to be a function $$g \circ f : A \times C \to \mathbb{R}$$ with $$(g\circ f)(a,c) = \sum_{y \in B} f(a,y)g(y,c).$$ I understand how to check that for three arrows composition is associative, but I am unsure how one would define, for example, $\text{Id}_A, \text{Id}_B$ such that $$\text{Id}_B \circ f = f = f \circ \text{Id}_A,$$ I know they would be functions from $A\times A \to \mathbb R$ and $B\times B \to \mathbb R$ but when writing down the composition I am not sure how one obtains the original $f$.
Writing down the relation that the identity $I_A$ would have to satisfy on one side : $$f(a,a') = (f \circ I_A)(a,a') = \sum_{a'' \in A} f(a,a'')I_A(a'',a'),$$ so a natural candidate would be $I_A(a'',a') = 1$ if $a'' = a'$ and $0$ otherwise. (This is known as the Kronecker delta function.)
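Concretely, the composition rule is matrix multiplication with rows indexed by the source and columns by the target, and the Kronecker delta is the identity matrix. A small Python sketch (with a made-up arrow $f$, chosen only for illustration) checks the identity laws on both sides:

```python
def compose(f, g, mid):
    """(g o f)(a, c) = sum over y in mid of f(a, y) * g(y, c)."""
    return lambda a, c: sum(f(a, y) * g(y, c) for y in mid)

def delta(x, y):
    # Kronecker delta: the identity arrow on any object
    return 1.0 if x == y else 0.0

A = (1, 2)
B = (1, 2, 3)
f = lambda a, b: a + 2.0 * b     # an arbitrary arrow f : A -> B

left = compose(f, delta, B)      # Id_B o f
right = compose(delta, f, A)     # f o Id_A
for a in A:
    for b in B:
        assert left(a, b) == f(a, b) == right(a, b)
```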
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
How do I find the mean and variance of $Y_{(1)}$ in a random sample? I have that $Y_1, \dots, Y_n$ is a random sample from a uniform population on $(\theta, \theta+1)$. I need to show that $Y_{(1)}-\frac{1}{n+1}$ is an unbiased estimator of $\theta$. I know that would mean that its mean is should equal $\theta$ but I'm confused on finding the mean of $Y_{(1)}$ as opposed to $\bar{Y}$?
\begin{align} F_{Y_{(1)}} (x)=\Pr(Y_{(1)} \le x) = 1-\Pr(Y_{(1)} > x) & = 1 - \Pr(Y_1>x\ \&\ \cdots\ \&\ Y_n>x) \\[10pt] & = 1-\Pr(Y_1>x)\cdots\Pr(Y_n>x) \\[10pt] & = 1-\Big(\Pr(Y_1>x)\Big)^n. \end{align} And then $f_{Y_{(1)}}(x) = \dfrac d {dx} F_{Y_{(1)}}(x).$ Once you have the density, you can evaluate an integral from $\theta$ to $\theta+1$ to find the expected value. This is a case of a simple unbiased estimator that is clearly not the best unbiased estimator, since the best would be $(Y_{(1)} + Y_{(n)} -1)/2.$
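The unbiasedness claim can be sanity-checked by Monte Carlo simulation (a Python sketch; $\theta=2$ and $n=5$ are arbitrary choices): the average of $Y_{(1)} - \frac{1}{n+1}$ over many samples should be close to $\theta$.

```python
import random

random.seed(1)
theta, n, reps = 2.0, 5, 200_000
total = 0.0
for _ in range(reps):
    y_min = min(random.uniform(theta, theta + 1) for _ in range(n))
    total += y_min - 1 / (n + 1)

estimate = total / reps
assert abs(estimate - theta) < 0.01   # E[Y_(1)] = theta + 1/(n+1)
```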
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
show that $\left(\sum_{i=0}^\infty\frac{a^i}{i!}\right)\left(\sum_{j=0}^\infty\frac{b^j}{j!}\right) = \sum_{k=0}^\infty\frac{(a+b)^k}{k!}$ I need to show that if $\sum_{i=0}^\infty \frac{a^i}{i!}$ is absolutely convergent for all $a\in\mathbb{R}$, then $$\left(\sum_{i=0}^\infty\frac{a^i}{i!}\right)\left(\sum_{j=0}^\infty\frac{b^j}{j!}\right) = \sum_{k=0}^\infty\frac{(a+b)^k}{k!}.$$ We see that the Cauchy product of the two series on the left hand side is defined to be $$\sum_{k=0}^\infty \sum_{i+j=k}^\infty \frac{a^i}{i!}\cdot \frac{b^j}{j!}.$$ I'm having trouble showing that $$\sum_{k=0}^\infty\sum_{i+j=k}^\infty \frac{a^i}{i!}\cdot \frac{b^j}{j!} = \sum_{k=0}^\infty\frac{(a+b)^k}{k!}$$ would anyone be able to help? I know that once I get this then since the two series on LHS on the very top are convergent, then so is the Cauchy product. Thus the statement would be proved.
We have that \begin{align*} \left(\sum_{i=0}^\infty\frac{a^i}{i!}\right)\left(\sum_{j=0}^\infty\frac{b^j}{j!}\right)&=\sum_{k=0}^{\infty}\sum_{(i,j)\in S_k}\frac{a^i}{i!}\cdot \frac{b^j}{j!}\\ &=\sum_{k=0}^{\infty}\sum_{(i,j)\in S_k}\frac{1}{(i+j)!} \binom{i+j}{i}a^ib^j\\ &=\sum_{k=0}^{\infty}\frac{1}{k!} \sum_{i=0}^k\binom{k}{i}a^ib^{k-i}=\sum_{k=0}^{\infty}\frac{(a+b)^k}{k!} \end{align*} where $S_k:=\{(i,j): i\geq 0, j\geq 0, i+j=k\}$.
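The identity can be checked numerically by comparing truncated series (a Python sketch; 40 terms is more than enough for these arguments, since the tail of $\sum t^k/k!$ decays super-exponentially):

```python
import math

def exp_series(t, terms=40):
    return sum(t**k / math.factorial(k) for k in range(terms))

for a, b in [(0.5, 1.2), (-1.0, 2.0)]:
    lhs = exp_series(a) * exp_series(b)
    rhs = exp_series(a + b)
    assert abs(lhs - rhs) < 1e-10
    assert abs(rhs - math.exp(a + b)) < 1e-10
```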
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$\limsup$ and $\liminf$ in Extended Reals Is it true that any sequence $X_n$ converges to $x$ in the extended real number system if and only if $\limsup X_n \le \liminf X_n$?
Strictly speaking, what you wrote is not quite true. Indeed, consider the constant sequence $x_n=0$ and set $x:=1$. Then $\liminf x_n=0=\limsup x_n$, but $x_n\not\to x$. However, it's clear that you meant to ask: If $(x_n)$ is a sequence of real numbers, then $(x_n)$ converges to an extended real number $x$ iff $\limsup x_n \le \liminf x_n$. Let $(x_n)$ be a sequence of real numbers. It's easiest to think of $\liminf x_n$ and $\limsup x_n$ as the infimum and supremum, respectively, of all subsequential limits of $(x_n)$. If this isn't something you've seen before, it's a good exercise to work through. The wikipedia page may also be helpful. From this definition, or the others, it should be clear that $\liminf x_n \le \limsup x_n$. With these two remarks in mind, suppose $(x_n)$ converges to some extended real number $x$. Then every subsequence of $(x_n)$ must also converge to $x$, so by the above characterization, we have $\liminf x_n=\limsup x_n$. Conversely suppose $\limsup x_n \le \liminf x_n$. Hence $\liminf x_n=\limsup x_n$. We aim to show that if $x:=\liminf x_n$, then $x_n\to x$. Suppose by contradiction that $x_n\not\to x$. Then there exists a neighborhood $U$ of $x$ for which infinitely many terms of the sequence $(x_n)$ lie outside $U$. This gives a subsequence $(y_n)$ of $(x_n)$ such that $y_n\not\in U$ for every $n$. Then any convergent subsequence of $(y_n)$ will not converge to $x$, and thus contradict the assumption $\liminf x_n=\limsup x_n = x$. It only remains to see that $(y_n)$ has a convergent subsequence. If it is bounded, then this is given to us by the Bolzano-Weierstrass theorem. On the other hand, if it is unbounded, then it has a subsequence which converges to either $\pm\infty$. This completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Embedding of central simple algebras A paper I am reading has the following statement (without proof): If $B$ and $A$ are central simple algebras with the same center $K$ where $K$ is a local or a global field, then $B$ can be embedded into $A$ if and only if [B : K] is relatively prime with $[A : K]/[B : K]$. The necessity is fairly easy to show. But how does one show the sufficiency?
This sounds very unlikely to me. What if $A$ is a division algebra and $B$ is a matrix algebra over $K$? (When $K=\Bbb Q_p$, these exist with any given square degree over $K$.) The other way round sounds fishy too. Over $K=\Bbb Q_p$ there are division algebras $A$ and $B$ with $|A:K|=16$ and $|B:K|=4$ and with $B$ a subalgebra of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I solve this kind of differential equation? $\frac{dy}{dx} + ay^2+b = 0$ How do I solve this kind of differential equation? $$\frac{dy}{dx} + ay^2+b = 0$$ I'm not seeing how to deal with the $y^2$ part. I suppose there's a simple technique.
$$\frac{dy}{dx}= -(ay^2+b)$$ $$\frac{y'}{(ay^2+b)}= -1$$ This is now a standard integral that can be solved using substitution.
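For example, with $a=b=1$ and $y(0)=0$ the separated equation integrates to $\arctan y = -x + C$, i.e. $y = -\tan x$. The Python sketch below checks this closed form against a numerical (RK4) solution of the ODE:

```python
import math

def rk4(f, y0, x_end, steps):
    h = x_end / steps
    x, y = 0.0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

a, b = 1.0, 1.0
f = lambda x, y: -(a * y * y + b)   # y' = -(a y^2 + b)
y_numeric = rk4(f, 0.0, 0.5, 1000)
assert abs(y_numeric - (-math.tan(0.5))) < 1e-8
```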
{ "language": "en", "url": "https://math.stackexchange.com/questions/2466996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
To prove given function is constant function Suppose $f,g,h$ are functions from the set of positive real numbers into itself satisfying $$f(x)g(y)=h\left(\sqrt {x^2+y^2} \right)\ \ \forall \ x,y\in (0,\infty )$$ Show that the three functions $\frac {f(x)}{g(x)},\frac{g(x)}{h(x)},\frac{h(x)}{f(x)}$ are all constants. I have only succeeded in showing that $f/g$ is a constant function, i.e., I got $f(x)=g(x)$. Can anyone help me prove that $g/h$ is constant?
So, by dividing by $f(y)g(x)$, you get $f(x)/g(x)=f(y)/g(y)$ for all positive $x,y$. So then obviously $f/g$ is constant. OK, here is my try: $f(x)g(1)=h(\sqrt{x^2+1})$ and $f(x)g(\sqrt{2})=h(\sqrt{x^2+2})=f(\sqrt{x^2+1})g(1)$. So $f(x)g(\sqrt{2})=f(\sqrt{x^2+1})g(1)$, i.e. $f(x)=f(\sqrt{x^2+1})g(1)/g(\sqrt{2})$. Hence $f(\sqrt{x^2+1})/h(\sqrt{x^2+1})=g(\sqrt{2})/g(1)^2$. Now change variable $y=\sqrt{x^2+1}$. But we are done only for $y>1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Product and sum of line bundles on circle Given nontrivial one-dimensional vector bundle $E$ on circle (which is of course Möbius strip). I need to find out if $E \oplus E \oplus ... \oplus E$ (n times) would be trivial or not (my guess is that it never happens) and the same question about $E \otimes E \otimes ... \otimes E$. In the second case I can first of all notice that $E \otimes E$ is also one-dimensional and then we can look at transition maps. Let it be just multiplying by $-1$ (trivialization is considered in stereographic projections charts on circle). Well, then in $E \otimes E$ it will be $(-1) \otimes (-1) = 1 \otimes 1$ which makes this bundle into trivial one. For $E \otimes E \otimes E$ we obtain $-1 \otimes 1 \otimes 1= - id$ and so this bundle is not trivial and so on. We will obtain cylinder for even $n$ and Möbius strip for odd. Am I right? I would appreciate some explanation regarding to first question and indication of mistakes (if there are) in my solution.
Real line bundles $L \to B$ are classified by their first Stiefel-Whitney class $w_1(L) \in H^1(B; \mathbb{Z}_2)$; in particular, $w_1(L) = 0$ if and only if $L$ is trivial. Therefore, if $E$ denotes the non-trivial line bundle on $S^1$, we see that $w_1(E)$ is the unique non-zero element of $H^1(S^1; \mathbb{Z}_2) \cong \mathbb{Z}_2$. For line bundles $L_1$ and $L_2$, we have $w_1(L_1\oplus L_2) = w_1(L_1\otimes L_2) = w_1(L_1) + w_1(L_2)$, so for any line bundle $L$ we have $$w_1(L^{\oplus n}) = w_1(L^{\otimes n}) = \begin{cases} 0 & n\ \text{even}\\ w_1(L) & n\ \text{odd}. \end{cases}$$ There are two cases:

* If $L$ is trivial, then so are $L^{\oplus n}$ and $L^{\otimes n}$ for every $n$.
* If $L$ is non-trivial, then $L^{\oplus n}$ is trivial if and only if $n$ is even, and likewise $L^{\otimes n}$ is trivial if and only if $n$ is even.

In particular, as $E \to S^1$ is non-trivial, we are in the second case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Assuming that $g(z)=\frac{f(z)}{z-z_0}$ is continuous at $z_0$, prove that $\int_\gamma \frac{f(z)}{z-z_0}dz=0$. Suppose that $f$ is analytic in some domain D containing the unit circle $\gamma$ and that $z_0$ is a point in D not on $\gamma $. Assuming that $g(z)=\frac{f(z)}{z-z_0}$ is continuous at $z_0$, prove that $$\int_\gamma \frac{f(z)}{z-z_0}dz=0.$$ This is where I am so far, we know that from Cauchy's Integral Formula $$\int_\gamma \frac{f(z)}{z-z_0}dz=2\pi if(z_0)$$ This is equal to $0$ if $f(z_0)=0$. To show that this is equal to $0$, this must have to do something with $g(z)$ being continuous at $z_0$. I know that $g(z)$ is analytic if $f(z_0)=0$, however I cannot show that continuity of $g(z)$ implies analyticity of $g(z)$. Any comments will be helpful.
Since $g(z)=\frac{f(z)}{z-z_0}$ is given to be continuous at $z_0$, the limit $g(z_0)=\lim_{z \to z_0} g(z)$ exists and is finite. Since $f$ is continuous at $z_0$,
\begin{eqnarray*}
f(z_0) &=& \lim_{z \to z_0} f(z) \\
&=& \lim_{z \to z_0} g(z)\,(z-z_0) \\
&=& g(z_0)\cdot\lim_{z \to z_0} (z-z_0) \\
&=& 0
\end{eqnarray*}
so $f(z_0)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Proof in Fitch and counterexample in Tarski's World - From $A \to B$, infer $A \to (B \land C)$. Good official morning community, Strengthening the Consequent: From $A \to B$, infer $A \to (B \land C)$. I know that this proof is invalid and I want to make a counterexample to prove that. How can I write $A \to (B \land C)$ and $A \to B$ in Tarski's World? I tried writing sentences but it keeps telling me that it is of the wrong format. What I have done so far is that I translated $A \to B$ to Tet(a) $\to$ Medium(a). Would you please help me
Create $1$ object in Tarski's World, make it a small cube, and label it $a$. Then $Cube(a) \rightarrow Small(a)$ will evaluate to True, and $Cube(a) \rightarrow (Small(a) \land Tet(a))$ will evaluate to False.
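The truth-value bookkeeping behind this counterexample can also be checked directly (a small Python sketch, outside of Tarski's World):

```python
def implies(p, q):
    return (not p) or q

# Cube(a) -> Small(a) is true, but Cube(a) -> (Small(a) and Tet(a)) is false
cube, small, tet = True, True, False
assert implies(cube, small)
assert not implies(cube, small and tet)
```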
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Existence of integers $a$ and $b$ such that $p = a^2 +ab+b^2$ for $p = 3$ or $p\equiv 1 \mod 3$ **Eisenstein primes, existence of integers** I am working in the subring $$R = \{a + b\zeta : a,b \in \mathbb{Z}\}$$ of $\mathbb{C}$ where $\zeta = \frac{1 + \sqrt{-3}}{2} \in \mathbb{C}$. I want to show that there exist integers $a$ and $b$ with $p = a^2 + ab + b^2$ where $a,b \in \mathbb{Z}$ and $p = 3$ or $p \equiv 1 \mod 3$. In this problem, we have a norm defined by $$N(a+ b\zeta) = a^2 +ab +b^2. $$ I have been stuck on this problem for a while now. I was wondering if anyone could give me a hint on how to establish the existence of such integers. I have been thinking of using the norm somehow, like considering some $x \in R$ such that $N(x) = p$ and from there trying to show something about $a$ and $b$, but I haven't really made any progress. Any help with this will be greatly appreciated!
Consider $p = a^2 +ab+b^2$. If $p$ divides $a$, then $p$ divides $b$, and so $p^2$ divides the RHS but not the LHS. So, $p$ does not divide either $a$ or $b$. Then, $p$ divides $a^3-b^3=(a-b)(a^2 +ab+b^2)$, that is $a^3 \equiv b^3 \bmod p$. If $bc \equiv 1 \bmod p$, then $(ac)^3 \equiv 1 \bmod p$. If $ac \equiv 1 \bmod p$, then $a \equiv b \bmod p$ and so $3a^2 \equiv 0 \bmod p$. This implies $3 \equiv 0 \bmod p$ and $p=3$. If $ac \not\equiv 1 \bmod p$, then $ac$ is an element of order $3$ in $U(p)$ and so $3$ divides $p-1$, by Lagrange's theorem of group theory. All this shows that the conditions in the problem are necessary. But they also point the way to the existence of a solution. Indeed, for $p=3$ we can take $a=b=1$. For $p \equiv 1 \bmod 3$, take $g$ a primitive root mod $p$ and let $a=g^{\frac{p-1}{3}}$ and $b=1$.
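As a quick sanity check of the existence claim (separate from the proof above), a brute-force search confirms that the primes representable as $a^2+ab+b^2$ with nonnegative integers are exactly $p=3$ and the primes $p\equiv 1\pmod 3$; the search bound and the cutoff $1000$ are my own arbitrary choices, not part of the argument:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def represent(p):
    """Search for nonnegative a <= b with a*a + a*b + b*b == p, else None."""
    bound = int(p ** 0.5) + 1      # a^2+ab+b^2 >= b^2, so b <= sqrt(p)
    for a in range(bound + 1):
        for b in range(a, bound + 1):
            if a * a + a * b + b * b == p:
                return (a, b)
    return None

# The form a^2+ab+b^2 is never 2 mod 3, so primes p = 2 mod 3 are excluded;
# every prime p = 3 or p = 1 mod 3 below 1000 should be representable.
for p in range(2, 1000):
    if not is_prime(p):
        continue
    rep = represent(p)
    if p == 3 or p % 3 == 1:
        assert rep is not None, p
    else:
        assert rep is None, p
```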
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Trick to calculate integrals such as $\int_0^\infty \left(\frac {2x^3}{3K}\right) e^{-x}dx$? Besides integration by parts, is there any trick to calculate integrals such as $$\int_0^\infty \left(\frac {2x^3}{3K}\right) e^{-x}dx=1 $$ I got $K=4$ by using integration by parts multiple times, but I think there should be some sort of formula to solve this kind of integral.
$$ \int_0^\infty \left(\frac {2x^3}{3K}\right) e^{-x} \, dx = \frac 2 {3K} \int_0^\infty x^3 e^{-x} \, dx. $$ The integral here is $\displaystyle\int_0^\infty x^3 e^{-x}\, dx;$ the fraction $\dfrac 2 {3K}$ can be attended to after evaluating the integral. You say you did integration by parts multiple times. Now suppose you had had instead $$\int_0^\infty x^{50} e^{-x} \, dx.$$ Then you might think you'd have to do integration by parts a much larger number of times. However, all of those iterations of integration by parts are the same, thus: \begin{align} \int_0^\infty x^{50} \Big( e^{-x} \, dx\Big) & = \int u\, dv = uv - \int v\, du \\[10pt] & = \left. x^{50} (-e^{-x}) \vphantom{\frac \sum1}\, \right|_0^\infty - \int_0^\infty (-e^{-x}) \Big( 50 x^{49}\, dx\Big) \\[10pt] & = 50\int_0^\infty x^{49} e^{-x}\, dx. \end{align} You're back to the same integral, except that the exponent has been reduced from $50$ to $49,$ and you also have a factor of $50$ in front of the integral. So doing it again reduces the exponent from $49$ to $48$ and you get a factor of $49$ in front of the integral: $$ 50 \cdot 49 \int_0^\infty x^{48} e^{-x}\,dx. $$ Then do it again: $$ 50\cdot49\cdot48\int_0^\infty x^{47} e^{-x}\,dx. $$ And so on: $$ 50\cdot49\cdot48\cdot47\cdot46\cdots\cdots3\cdot2\cdot1\int_0^\infty x^0 e^{-x}\, dx $$ and you don't need integration by parts for that last one. So once you see the pattern, you've got it.
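The collapsing pattern can be illustrated numerically; here is a sketch using Simpson's rule on a truncated interval (the cutoff at $x=60$ and the step count are arbitrary choices):

```python
import math

def gamma_integral(n, upper=60.0, steps=20000):
    """Simpson's rule for the truncated integral of x^n e^(-x) over [0, upper]."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * x ** n * math.exp(-x)
    return total * h / 3

# The repeated integration by parts collapses the integral to n!:
for n in range(6):
    assert abs(gamma_integral(n) - math.factorial(n)) < 1e-6

# For the original problem, (2 / (3K)) * 3! = 1 forces K = 4:
K = 2 * gamma_integral(3) / 3
assert abs(K - 4) < 1e-6
```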
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Epsilon-Delta Proof $$ \lim_{x\to 1} \frac{1}{x^2+2} = \frac{1}{3} $$ I'm having a problem proving this using $\epsilon-\delta$ proofs. For some reason, when I solve for $\epsilon$, I get a negative number. Since this value is supposed to equal $\delta$ and $\delta$ can't be negative, I'm not sure how to move forward. Thanks!
When $x, y\in [0, 2]$, one has $$\left|\frac{1}{x^2+2} - \frac{1}{y^2+2}\right| = \left|y-x\right| \frac{y+x}{(x^2+2)(y^2+2)}\le |y-x|\frac{4}{4} \le |y-x| $$ Take $y = 1$ and $\delta = \epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2467948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Proving a loop is non-trivial using van Kampen's theorem So I want to prove the following statement in algebraic topology. Let $X=A \cup B$ be a topological space with $A$, $B$, and $A \cap B$ being open and path-connected. Suppose $\gamma$ is a loop in $A\cap B$, and $\gamma$ is not homotopic to the trivial loop in both $A$ and $B$. Then $\gamma$ is not homotopic to the trivial loop in $X$. I think it can be derived from van Kampen's theorem, which states that in this case $\pi_1(X)=\pi_1(A) {\Large{*}} \pi_1(B)/N$, where $N$ is the normal subgroup generated by $\alpha_{A} \alpha_{B}^{-1}$, where $\alpha$ is a loop in $A \cap B$, $\alpha_{A}$ is the homotopy class of $\alpha$ in $A$. So I feel that what the quotient does is just identifying $\alpha_A$ with $\alpha_B$ for any $\alpha$ in $A\cap B$, and nothing else. Thus if $\alpha$ is originally non-trivial in both $A$ and $B$, it should still be non-trivial in $X$. But I could not write it down in a concrete way...
Group-theoretically, consider the amalgamated free product $(\langle a\rangle *\langle b\rangle)/N$ where $N$ is the normal closure of the subgroup $\langle ab,ab^{-1}\rangle$. The product is isomorphic to $\mathbb{Z}/2\mathbb{Z}$ since $a=b$ and $a^2=1$. The word $a^2$ is non-trivial in the "intersection" and on "either side," which I will make precise by a topological construction. Start with a wedge $S^1\vee S^1$ with $x$ and $y$ the generator loops of the fundamental group around each circle. On one side, attach a disk along $xy$, and on the other side attach a disk along $xy^{-1}$. This defines a space $X$. Let $A$ be an open neighborhood containing the first disk, the wedge, and a little of the second disk. Let $B$ be the same, but reversing the roles of the first and second disks. $A\cap B$ deformation retracts onto the wedge of circles, so its fundamental group is $\langle x,y\rangle$. The fundamental group of $A$ is free and generated by a loop $a$, and the inclusion map induces $x\mapsto a$, $y\mapsto a^{-1}$. The fundamental group of $B$ is free and generated by a loop $b$, and $x\mapsto b$, $y\mapsto b$ is the induced map for the inclusion. The loop $x^2$ is nontrivial in both $A$ and $B$ (where it is $a^2$ and $b^2$, respectively). Yet in $X$, $$x^2=xa=xy^{-1}=xb^{-1}=xx^{-1}=1$$ by going back and forth between the spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Generating a certain number after $k$ operations. Obtain a formula for this number as a function of $k$ There are n cookies on a table. Adam did this series of steps: In the 1st step he put 1 cookie in the middle of every two neighbouring cookies, in the 2nd step he put 2 cookies in the middle of every two neighbouring cookies, in the kth step he put k cookies in the middle of every two neighbouring cookies... Find the formula for computing the number of cookies on the table after the kth step. Now I have found that the formula for it is: (k+1)!(n-1)+1 However, I got this purely by computing the number of cookies manually for small $k$ and simply observing the pattern. How can I prove that this formula works, and why does it work?
* *Instead of considering previous pairs of cookies, ignore the one on the far right, so you start with $n-1$ cookies *and say that at the $k$th step you put $k$ cookies to the immediate right of each previous cookie, in effect multiplying the number of cookies by $k+1$ on the $k$th step *so from the start you multiply $(n-1)$ by $2,3,4,5,\ldots,k$ and $k+1$, i.e. by $(k+1)!$ to give $(k+1)!\,(n-1)$ *and finally add back the $1$ far right cookie you ignored at the start to give $(k+1)!\,(n-1)+1$
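The argument above can be checked by simulating the process directly; a short sketch (the ranges tested are arbitrary):

```python
from math import factorial

def cookies_after(n, k):
    """Simulate the process: step j inserts j cookies into each of the current gaps."""
    count = n
    for j in range(1, k + 1):
        count += j * (count - 1)   # count - 1 gaps between neighbouring cookies
    return count

# Matches the closed form (k+1)!(n-1)+1, equivalently the recurrence
# f(j) = (j+1) f(j-1) - j implied by the ignore-the-rightmost-cookie trick.
for n in range(2, 8):
    for k in range(0, 8):
        assert cookies_after(n, k) == factorial(k + 1) * (n - 1) + 1
```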
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can we see $ax^2-bx-\bar b x+c,a>0,c \ge 0 $ attains minimum when $x=\frac{\Re b}{a}$ How can we see $ax^2-bx-\bar b x+c, a>0,c\ge 0$ attains minimum (just minimize it over the set of all possible $x$ s.t. the quadratic function is real) when $x=\frac{\Re b}{a}$ where $\Re b$ is the real part of $b$? I know that $ax^2-bx+c$, where all numbers are real, attains minimum when $x=\frac{b}{2a}$.
You know that $ax^2-bx+c$ attains minimum at $x=\frac{b}{2a}$ (assuming $a,b,c\in\Bbb R$). You can then see that your equation reduces to $ax^2-2\Re bx+c$, which satisfies the assumptions, allowing you to apply the same rule to reach the desired conclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Prove a 3 parameter integral identity I have stumbled upon the following identity: $$\int_0^1 \frac{(1-t)^c}{(1-z t)^b} dt=\int_0^1 \frac{(1+t)^{b-c-2}}{(1+(1-z) t)^b} dt+\int_0^1 \frac{t^c(1+t)^{b-c-2}}{(1-z+t)^b} dt$$ It appears to work for all $b \in \mathbb{R}$, $c \geq 0$ and $|z|<1$. The identity is related to the integral representation of ${_2F_1}$, the hypergeometric function, however I don't have a nice proof of it. How can we prove this identity, and are the conditions on the parameters I listed correct?
We want to show \begin{eqnarray*} \int_0^1 \frac{(1-t)^c}{(1-z t)^b} dt=\int_0^1 \frac{(1+t)^{b-c-2}}{(1+(1-z) t)^b} dt+\int_0^1 \frac{t^c(1+t)^{b-c-2}}{(1-z+t)^b} dt. \end{eqnarray*} Make the substitution $t=\frac{1}{u}$ in the third integral ($dt=-\frac{du}{u^2}$) \begin{eqnarray*} \int_0^1 \frac{t^c(1+t)^{b-c-2}}{(1-z+t)^b} dt = -\int_{\infty}^1 \frac{(1+\frac{1}{u})^{b-c-2}}{u^c(1-z+\frac{1}{u})^b} \frac{du}{u^2} \\ =\int_1^{\infty} \frac{(1+u)^{b-c-2}}{(1+(1-z) u)^b} du. \end{eqnarray*} Now combine this with the second integral and the RHS becomes \begin{eqnarray*} \int_0^{\infty} \frac{(1+t)^{b-c-2}}{(1+(1-z) t)^b} dt. \end{eqnarray*} Now make the substitution $w=\frac{t}{1+t}$ (so $dw=\frac{dt}{(1+t)^2}$ and $t=\frac{w}{1-w}$) and we get the LHS. \begin{eqnarray*} \int_0^{\infty} \frac{(1+t)^{b-c}}{(1+(1-z) t)^b} \frac{dt}{(1+t)^2} &=& \int_0^{1} \frac{(1+\frac{w}{1-w})^{b}}{(1+(1-z) \frac{w}{1-w})^b} \frac{1}{(1+\frac{w}{1-w})^c}dw \\ &=&\int_0^1 \frac{(1-w)^c}{(1-z w)^b} dw. \end{eqnarray*}
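The chain of substitutions can also be checked numerically; here is a sketch with Simpson's rule at arbitrarily chosen parameters $b=2.5$, $c=2$, $z=0.4$ (chosen so all three integrands are smooth on $[0,1]$):

```python
def simpson(f, a, b, steps=20000):
    """Composite Simpson's rule on [a, b] (steps must be even)."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

b_, c_, z_ = 2.5, 2.0, 0.4   # arbitrary test values with c >= 0 and |z| < 1
lhs = simpson(lambda t: (1 - t) ** c_ / (1 - z_ * t) ** b_, 0.0, 1.0)
rhs = (simpson(lambda t: (1 + t) ** (b_ - c_ - 2) / (1 + (1 - z_) * t) ** b_, 0.0, 1.0)
       + simpson(lambda t: t ** c_ * (1 + t) ** (b_ - c_ - 2) / (1 - z_ + t) ** b_, 0.0, 1.0))
assert abs(lhs - rhs) < 1e-9
```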
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show $\max_{i\leq n} |X_i|^p/n$ converges to $0$ in probability $X_i$'s are i.i.d. random variables and $\mathbb E(|X_i|^p)=k<\infty$ for some $p,k$. I want to show $Z= \max_{i\leq n} |X_i|^p/n$ converges to $0$ in probability. Here is what I have tried. $$\mathbb P(Z\geq \epsilon)=1-P(Z< \epsilon)=1-\prod_{i\leq n}\mathbb P(|X_i|^p/n<\epsilon),$$by independence. That is $$1-\prod_{i\leq n}(1-\mathbb P(|X_i|^p/n\geq\epsilon))=1-(1-P(|X_1|^p/n\geq\epsilon))^n.$$ Now applying Chebyshev we have $$\mathbb P(Z\geq \epsilon)\leq1-(1-\mathbb E(|X_1|^p/n\epsilon))^n=1-(1-k/n\epsilon))^n$$ which does not converge to 0 as $n \to\infty$. I don't know if I can find any stronger inequality to make this proof work. Any suggestions will be appreciated.
Let $Y_n=\max_{1\le i\le n}|X_i|^p$. Then $Y_n/n\xrightarrow{p}0$ iff $n\mathsf{P}(|X_1|^p>n\epsilon)\to 0$ as $n\to \infty$ for all $\epsilon>0$. Since $\mathsf{E}|X_1|^p<\infty$, $$ n\mathsf{P}(|X_1|^p>n\epsilon)\le \epsilon^{-1}\mathsf{E}[|X_1|^p 1\{|X_1|^p>n\epsilon\}]\to 0 $$ as $n\to\infty$ by the dominated convergence theorem.
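A seeded simulation illustrates the convergence (using Exp(1) variables with $p=1$, so $\mathsf{E}|X_1|^p<\infty$; the sample sizes and the threshold $0.01$ are arbitrary choices):

```python
import random

random.seed(0)

def max_over_n(n, p=1.0):
    """max_{i <= n} |X_i|^p / n for i.i.d. Exp(1) samples."""
    m = max(random.expovariate(1.0) for _ in range(n))
    return m ** p / n

# The maximum of n Exp(1) variables grows only like log n,
# so the ratio tends to 0 in probability.
vals = [max_over_n(n) for n in (100, 1000, 10000, 100000)]
assert vals[-1] < 0.01
```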
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A curve where all tangent lines are concurrent must be a straight line I'm trying to solve this question in the classical Do Carmo's differential geometry book (page 23): *A regular parametrized curve $\alpha$ has the property that all its tangent lines pass through a fixed point. Prove that the trace of $\alpha$ is a (segment of a) straight line. My attempt Following the statement of the question, we have $\alpha(s)+\lambda(s)\alpha'(s)=const$. Taking the derivative of both sides we have $\alpha'(s)+\lambda'(s)\alpha'(s)+\lambda(s)\alpha''(s)=0$ which is equal to $(1+\lambda'(s))\alpha'(s)+\lambda(s)\alpha''(s)=0$. Since $\alpha'(s)$ and $\alpha''(s)$ are linearly independent, we have $\lambda'(s)=-1$ and $\lambda(s)=0$ for every $s$ which I found strange, since the derivative of the zero function is zero. I need a clarification at this point and a hand to finish my attempted solution.
Let's assume that the tangents pass through the point $c$ in $\mathbb{R}^2$. Then the vectors $\alpha(s) - c$ and $\alpha'(s)$ are proportional, that is $(\alpha(s)-c) + \lambda(s) \alpha'(s)=0$ for some scalar function $\lambda(s)$. Replacing $\alpha(s)$ with $\alpha(s) - c$, we may assume that $c=0$, that is $$\alpha(s) + \lambda(s) \alpha'(s)=0$$ The set $\{s \ | \ \lambda(s) \ne 0\}$ is open and dense in the domain (since the curve is regular). On this domain we can write $$\alpha'(s) = \mu(s) \alpha(s)$$ and so on each connected component $$\alpha(s) = e^{M(s)} \cdot \mathbb{a}$$ where $M(s)$ is an antiderivative of $\mu(s)$. It is now easy to conclude (since $\alpha$ is regular) that the vector constants $\mathbb{a}$ are the same for all the connected components. Therefore, $\alpha$ describes (part of) the line with direction $\mathbb{a}$ through the origin.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Induction - why do we allow $k=1$ in the second step? Having proved that the property holds for the first case (most often 1), in the second step we need to assume that it's true for some $k \ge 1$. Why not $k>1$?
It's because if you don't, you lose the connection with the first case. The idea is that you should in principle be able to step from the first case to any integer. If you instead assume it's only valid for some $k>1$ and prove that it's valid for $k+1$, then that step would only be valid stepping from $2$ and higher. For a counterexample where this goes wrong, consider the statement $P(n): n\ne 2$. First, since $1\ne2$ we have that $P(1)$ is true. Now assume that $P(k)$ is true for some $k>1$; then $k+1>2$, so $k+1\ne 2$. This would mean that all positive integers are different from $2$ (which obviously isn't true).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is the integral of $e^{\cos x}$ Question: Find out $\displaystyle{\int e^{\cos x}~dx}$. My Attempt: Let $\cos x = y$. Hence $-\sin x\ dx = dy$ or $$dx = \displaystyle{\frac{-dy}{\sin x}=\frac{-dy}{\sqrt{1-\cos^2x}}=\frac{-dy}{\sqrt{1-y^2}}}$$ So $$\begin{align}\int e^{\cos x}~dx &= \int e^y\left(\frac{-dy}{\sqrt{1-y^2}}\right)\\ &=-\int\frac{e^y}{\sqrt{1-y^2}}~dy \end{align}$$ This integral is one I can't solve. I have been trying to do it for the last two days, but can't get success. I can't do it by parts because the new integral thus formed will be even more difficult to solve. I can't find out any substitution that I can make in this integral to make it simpler. Please help me solve it. Is the problem with my first substitution $y=\cos x$ or is there any other way to solve the integral $\displaystyle{\int\frac{e^y}{\sqrt{1-y^2}}~dy}$?
$\int e^{\cos x}~dx$ $=\int\sum\limits_{n=0}^\infty\dfrac{\cos^{2n}x}{(2n)!}dx+\int\sum\limits_{n=0}^\infty\dfrac{\cos^{2n+1}x}{(2n+1)!}dx$ $=\int\left(1+\sum\limits_{n=1}^\infty\dfrac{\cos^{2n}x}{(2n)!}\right)dx+\int\sum\limits_{n=0}^\infty\dfrac{\cos^{2n+1}x}{(2n+1)!}dx$ For $n$ is any natural number, $\int\cos^{2n}x~dx=\dfrac{(2n)!x}{4^n(n!)^2}+\sum\limits_{k=1}^n\dfrac{(2n)!((k-1)!)^2\sin x\cos^{2k-1}x}{4^{n-k+1}(n!)^2(2k-1)!}+C$ This result can be done by successive integration by parts. For $n$ is any non-negative integer, $\int\cos^{2n+1}x~dx$ $=\int\cos^{2n}x~d(\sin x)$ $=\int(1-\sin^2x)^n~d(\sin x)$ $=\int\sum\limits_{k=0}^nC_k^n(-1)^k\sin^{2k}x~d(\sin x)$ $=\sum\limits_{k=0}^n\dfrac{(-1)^kn!\sin^{2k+1}x}{k!(n-k)!(2k+1)}+C$ $\therefore\int\left(1+\sum\limits_{n=1}^\infty\dfrac{\cos^{2n}x}{(2n)!}\right)dx+\int\sum\limits_{n=0}^\infty\dfrac{\cos^{2n+1}x}{(2n+1)!}dx$ $=x+\sum\limits_{n=1}^\infty\dfrac{x}{4^n(n!)^2}+\sum\limits_{n=1}^\infty\sum\limits_{k=1}^n\dfrac{((k-1)!)^2\sin x\cos^{2k-1}x}{4^{n-k+1}(n!)^2(2k-1)!}+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^kn!\sin^{2k+1}x}{(2n+1)!k!(n-k)!(2k+1)}+C$ $=\sum\limits_{n=0}^\infty\dfrac{x}{4^n(n!)^2}+\sum\limits_{n=1}^\infty\sum\limits_{k=1}^n\dfrac{((k-1)!)^2\sin x\cos^{2k-1}x}{4^{n-k+1}(n!)^2(2k-1)!}+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^kn!\sin^{2k+1}x}{(2n+1)!k!(n-k)!(2k+1)}+C$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 3 }
Locus problem need help My try: Let $(x,y)$ be a point on the required locus. $\sqrt {x^{2}+y^{2}}-1=\sqrt {x^{2}+(y+3)^{2}}$ $-2\sqrt {x^{2}+y^{2}}= 6y+8$ $-\sqrt {x^{2}+y^{2}}=3y+4$ $x^{2}+y^{2}= 16+9y^2+24y$ What should I do now?
Well, you should start with formulating the "distance to a circle" and "distance to a line", which probably means "distance to the closest point on a ..." Let's start with the line. The line is horizontal, so only the distance from the y-coordinate to $-3$ matters. This expression becomes the distance between $y$ and $-3$, or $|y+3|$. Next, the circle. Imagine a line going from $(x, y)$ to the centre of the circle. The distance between $(x, y)$ and the closest point on the circle would be related to the distance between $(x, y)$ and the centre; in fact, the difference between the 2 distances would be exactly $1$. You can formulate this distance as $|\sqrt{x^2+y^2}-1|$. Hence, now you equate the 2 expressions together. $|\sqrt{x^2+y^2}-1| = |y+3|$ However, by a simple sketch, you can note that there are no points of the locus inside the circle or with $y<-3$. Hence, this reduces to: $\sqrt{x^2+y^2}-1 = y+3$ By doing a few more manipulations, you can get one of the options above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2468960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the lim inf the union of intersections For my statistics class we had elementary set theory. It was stated that: $$\inf_{k\geq n } A_k = \bigcap\limits_{k=n}^{\infty} A_k$$ and $$\sup_{k\geq n } A_k = \bigcup\limits_{k=n}^{\infty} A_k$$ From this was deduced that: $$\lim\limits_{n\to\infty} \inf A_k = \bigcup\limits_{n=1}^{\infty} \bigcap\limits_{k=n}^{\infty} A_k$$ and $$\lim\limits_{n\to\infty} \sup A_k = \bigcap\limits_{n=1}^{\infty} \bigcup\limits_{k=n}^{\infty} A_k$$ I absolutely have no idea why. Could someone explain it to me in the least technical way possible? I neither get why the intersection of Ak from n onwards should be the infimum nor why the union of all intersections should be the limit of that infimum.
It is clear that for real numbers, e.g. $3$ and $8$, the minimum is the smaller one, i.e. $3$. There is a way to talk about smaller sets: a set $A$ is "smaller" than a set $B$ if it is completely contained in it, i.e. $A\subseteq B$. This is natural somehow, as e.g. $\{1,2,3\}$ is smaller than $\{1,2,3,4,5,6\}$. In this case, check that the intersection gives $$A\cap B=A.$$ This means that the intersection somehow acts like the minimum function on sets. By the same reasoning, you can argue that the union gives $A\cup B=B$ and acts like a maximum function. It is a bit less natural if neither $A\subseteq B$ nor $B\subseteq A$, but the above should serve as motivation. Because $\inf$ and $\sup$ generalize minimum and maximum for infinitely many numbers (and sets in our case), it is natural to define $$\inf \,\{A_k\}=\bigcap A_k,\qquad \sup \,\{A_k\}=\bigcup A_k.$$ Now remember that $\liminf_{n\to\infty}$ was defined as $\sup_{n\in\Bbb N}\inf_{k\ge n}$. This hopefully explains why the formula for $\liminf$ contains both a union and an intersection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Differentiate hadamard product of square matrix $S \odot VV^T \in R^{n \times n} $ over rectangular matrix $V \in R^{n \times r}$ I want to differentiate $f = \log\det(L)$ over $V$ where $L = S \odot VV^T$. The thing that I know is $df = L^{-T} : dL = (S \odot VV^T) : (dS \odot VV^T+ S \odot (VdV^T+dVV^T ))$ where $:$ is a Frobenius product and $\odot$ is a Hadamard product. Maybe $dS$ is zero over $dV$ and $df =(S \odot VV^T) : S \odot (VdV^T+dVV^T ) $ . But I don't know how to treat the Hadamard product in trace. I want the solution in matrix form; it's okay if it is derived in element-wise form. But I don't want the solution including the $E$ and $B$ matrices. How can I solve the problem? Thanks in advance.
The Hadamard and Frobenius products commute with each other, i.e. $$A\odot B:C = A:B\odot C$$ This property can be used to complete your derivation (assuming $dS$ is zero) $$\eqalign{ df &= L^{-T}:dL \cr &= L^{-T}:S\odot(dV\,V^T+V\,dV^T) \cr &= S\odot L^{-T}:(dV\,V^T+V\,dV^T) \cr &= (S\odot L^{-T}+S^T\odot L^{-1}):dV\,V^T \cr &= (S\odot L^{-T}+S^T\odot L^{-1})V:dV \cr\cr \frac{\partial f}{\partial V} &= (S\odot L^{-T}+S^T\odot L^{-1})V \cr\cr }$$
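A finite-difference check of the final gradient formula, sketched in pure Python for the smallest nontrivial case $n=2$, $r=1$ (the matrix $S$ and the test points are arbitrary, chosen so $\det L>0$ and $\log\det$ is defined):

```python
import math

S = [[2.0, 0.5], [0.7, 3.0]]   # arbitrary (non-symmetric) 2x2 test matrix

def had(A, B):
    return [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def L_of(v):                      # L = S o (V V^T) with V = [[v0], [v1]]
    return had(S, [[v[i] * v[j] for j in range(2)] for i in range(2)])

def f(v):                         # f = log det L
    L = L_of(v)
    return math.log(L[0][0] * L[1][1] - L[0][1] * L[1][0])

def grad(v):                      # claimed gradient: (S o L^-T + S^T o L^-1) V
    L = L_of(v)
    Li = inv2(L)
    LiT = transpose(Li)
    M = [[had(S, LiT)[i][j] + had(transpose(S), Li)[i][j] for j in range(2)]
         for i in range(2)]
    return [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]

v, h = [1.2, -0.8], 1e-6
for i in range(2):
    vp, vm = list(v), list(v)
    vp[i] += h
    vm[i] -= h
    assert abs((f(vp) - f(vm)) / (2 * h) - grad(v)[i]) < 1e-5
```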
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a generalization of Brahmagupta–Fibonacci identity for cubic? I was wondering if there was a cubic version of the Brahmagupta–Fibonacci identity. I looked everywhere but I didn't find anything, except the splendid website of Tito Piezas (see here) Furthermore I know the Gauss identity which is: $$a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-bc-ca)$$ So my question is: Is there a generalization of the Brahmagupta–Fibonacci identity or the Gauss identity? Thanks a lot.
There is, provided you are prepared to have 3 letters in the identity. It requires consideration of multiplication in a cubic field. See http://www.mathjournals.org/jrms/2017-032-004/2017-032-004-001.html Let $$N_j = \left ( \begin {array} {ccc} u_j & -a d y_j & -a d x_j - b d y_j \\ x_j & u_j - b x_j -c y_j & - c x_j - d y_j \\ y_j & a x_j & u_j - c y_j \\\end {array} \right) .$$ Then define $$N_1 N_2 = N_3. $$ Taking determinants gives such an identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
existence of a ball in a measurable set Let $\lambda$ be the Lebesgue measure on $\mathbb{R}^n$ and let $A\subset \mathbb{R}^n$ be a Lebesgue-measurable set with $\lambda(A)>0$. I know that the difference set $A - A$ contains an open ball $B_r(x)$. My question is whether the set $A$ also contains an open ball $B_r(x) \subset A$ with $r>0$.
No, not necessarily. One interesting counterexample is a "fat" Cantor set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all integer solutions to $y^3 = x^6 + 6x^3 + 13$. Find all ordered pairs of integers $(x, y)$ that satisfy $$y^3 = x^6 + 6x^3 + 13.$$ I've found the solutions $(-1, 2)$ and $(2, 5)$. I believe that these are all the integer solutions, but I don't know how to prove it. Could someone please help?
As Peter said in the comments, this is a special case of the Mordell equation $y^3=z^2+4$ but we can use the fact that $y$ is very nearly $x^2$ to obtain an elementary proof. For $x\neq 0$ (the case $x=0$ gives $y^3=13$, which is impossible), $$\frac y{x^2}=\left(1+\frac 6{x^3}+\frac{13}{x^6}\right)^{1/3}$$ $$1-\frac 3{|x|^3}<\frac y{x^2}<1+\frac 3{|x|^3}$$ $$|y-x^2|<\frac 3{|x|}$$ And since $y-x^2$ is an integer, it is either $0$, which implies $6x^3+13=0$ and is impossible, or $\frac 3{|x|}>1$, which yields 5 cases that can be tested by hand.
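The remaining cases (and far beyond) can be checked by machine; a small sketch (the search bound $100$ is arbitrary — the inequality above already limits $x$ to $|x|\le 2$):

```python
def icbrt(n):
    """Integer cube root of a positive integer, or None if n is not a perfect cube."""
    y = round(n ** (1 / 3))
    for cand in (y - 1, y, y + 1):   # guard against floating-point rounding
        if cand ** 3 == n:
            return cand
    return None

solutions = set()
for x in range(-100, 101):
    rhs = x ** 6 + 6 * x ** 3 + 13   # = (x^3 + 3)^2 + 4 > 0, so always positive
    y = icbrt(rhs)
    if y is not None:
        solutions.add((x, y))

assert solutions == {(-1, 2), (2, 5)}
```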
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$ \sum_{n=1}^{\infty} \frac{ x^{n^2} (1 + x^n) - x^n}{1 -x^n} = 0.$ ?? While studying theta functions I noticed $$ \sum_{n=1}^{\infty} \frac{ x^{n^2} (1 + x^n) - x^n}{1 -x^n} = 0.$$ Why is that so ?? Is there a similar case with a term $x^{n^3}$ ??
I am not sure what you mean by "similar case", but the infinite sum is zero because of the equation $$ \sum_{n=1}^\infty x^n/(1-x^n) = \sum_{n=1}^\infty x^{n^2}(1+x^n)/(1-x^n) = \sum_{n=1}^\infty \sigma_0(n)x^n \tag{1}$$ which is the generating function of the number of divisors of $n$. This is OEIS sequence A000005 which has several generating functions for this sequence and further information. The reason for the equality is that the infinite sums count the number of divisors in several different ways. This is an example of a general technique. Suppose you have the double sum $$f(x) := \sum_{i,j=1}^\infty a_{ij}x^{ij}.$$ You can sum this by rows or columns to get $$f(x) = \sum_{i=1}^\infty\left(\sum_{j=1}^\infty a_{ij}x^{ij}\right) = \sum_{j=1}^\infty\left(\sum_{i=1}^\infty a_{ij}x^{ij}\right).$$ The second way is to sum by "gnomons" where $$f(x) = \sum_{n=1}^\infty\left(a_{nn}x^{nn}+\sum_{j>n}a_{nj}x^{nj}+\sum_{i>n}a_{in}x^{in}\right).$$ The third way is by powers of $x$ giving $$f(x) = \sum_{n=1}^\infty \left(\sum_{ij=n}a_{ij}\right)x^n$$ and if $a_{ij}:=1,$ then the three methods simplify to give equation $(1).$
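The counting argument can be verified by expanding both series to a fixed order; a sketch (the truncation degree $N=60$ is arbitrary):

```python
N = 60

def coeffs_lambert():
    """Coefficients of sum_{n>=1} x^n / (1 - x^n), truncated at degree N."""
    c = [0] * (N + 1)
    for n in range(1, N + 1):
        for m in range(n, N + 1, n):         # x^n/(1-x^n) = x^n + x^(2n) + ...
            c[m] += 1
    return c

def coeffs_theta():
    """Coefficients of sum_{n>=1} x^(n^2)(1+x^n)/(1-x^n), truncated at degree N."""
    c = [0] * (N + 1)
    n = 1
    while n * n <= N:
        c[n * n] += 1                        # x^(n^2)(1+x^n)/(1-x^n)
        k = n * n + n                        #   = x^(n^2) + 2x^(n^2+n) + 2x^(n^2+2n) + ...
        while k <= N:
            c[k] += 2
            k += n
        n += 1
    return c

def sigma0(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)

a, b = coeffs_lambert(), coeffs_theta()
for m in range(1, N + 1):
    assert a[m] == b[m] == sigma0(m)
```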
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Pigeonhole principle explanation Let $A$ be any set of $10$ positive integers. Prove that there must exist at least $11$ subsets of $A$ having their element-sum with the same last $2$ digits. (Here element-sum means sum of all elements in the set) Answer: Let $A$ be a set of any $10$ positive integers. So $|A| = 10$. So, the set of all subsets of $A$ has cardinality $2^{10} = 1024$. Now, there are $10^2 = 100$ ways for the last two digits of a number. Let the pigeonholes be these "ways". So, there is one pigeonhole (one number with some ending two digits) containing at least $\lceil \frac{1024}{100} \rceil = 11$ subsets of $A$. I'm unsure why we had to mention "element-sum" as I don't see where it's used in the proof.
The element-sum is used indirectly, in the fact that we only care about its last two digits. There are fewer than one tenth as many possible 2-digit endings ($100$) as there are subsets ($1024$). The pigeonhole principle then implies that at least $11$ subsets map to the same element-sum ending.
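Here is a concrete check of the counting (the particular 10-element set is arbitrary):

```python
from itertools import combinations
from collections import Counter

A = [3, 17, 42, 58, 61, 77, 104, 230, 501, 999]   # any 10 positive integers

counts = Counter()
for r in range(len(A) + 1):
    for subset in combinations(A, r):
        counts[sum(subset) % 100] += 1            # last two digits of the element-sum

# 2^10 = 1024 subsets fall into at most 100 classes,
# so some class holds at least ceil(1024/100) = 11 subsets.
assert sum(counts.values()) == 1024
assert max(counts.values()) >= 11
```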
{ "language": "en", "url": "https://math.stackexchange.com/questions/2469976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Semifactorial Identity I was wondering if anyone had any insight on how to prove the following identity: For all $m \in \mathbb{N}$ $$ \frac{1}{2m-1} + \frac{2m-2}{(2m-1)(2m-3)} + \frac{(2m-2)(2m-4)}{(2m-1)(2m-3)(2m-5)} + \cdots + \frac{(2m-2)!!}{(2m-1)!!} = 1$$ I attempted to rewrite and simplify the left hand side as: $$ \sum_{k = 1}^m \frac{\frac{(2m-2)!!}{(2m-2k)!!}}{\frac{(2m-1)!!}{(2m-2k-1)!!}} = \sum_{k = 1}^m \frac{(2m-2)!! (2m-2k-1)!!}{(2m-2k)!! (2m-1)!!} = \frac{\left[2^{m-1} (m-1)! \right]^2}{(2m-1)!} \sum_{k=1}^m \frac{(2m-2k-1)!!}{(2m-2k)!!} $$ But I do not seem to be getting anywhere. Does anyone see how to prove the statement?
$$\scriptsize\begin{align} &\;\;\;\frac 1{2m-1}+\frac {2m-2}{(2m-1)(2m-3)}+\frac{(2m-2)(2m-4)}{(2m-1)(2m-3)(2m-5)}+\cdots+\frac {(2m-2)!!}{(2m-1)!!}\\\\ &=\frac 1{2m-1}\left(1+\frac {2m-2}{2m-3}\left(1+\frac {2m-4}{2m-5}\left(1+\;\;\cdots\;\;\ \left(1+\frac 65\left(1+\frac 43\left(1+\frac 21\right)\right)\right)\right)\cdots+\right)\right)\\\\ &=\frac 1{2m-1}\left(1+\frac {2m-2}{2m-3}\left(1+\frac {2m-4}{2m-5}\left(1+\;\;\cdots\;\;\ \left(1+\frac 65\left(1+\frac 43\cdot \color{blue}3\right)\right)\right)\cdots+\right)\right)\\\\ &=\frac 1{2m-1}\left(1+\frac {2m-2}{2m-3}\left(1+\frac {2m-4}{2m-5}\left(1+\;\;\cdots\;\;\ \left(1+\frac 65\cdot \color{blue}5\right)\right)\cdots+\right)\right)\\\\ &\qquad\qquad \vdots\qquad\qquad\qquad\qquad\qquad \;\unicode{x22F0} \\\\ &=\frac 1{2m-1}\left(1+\frac {2m-2}{2m-3}\left(1+\frac {2m-4}{2m-5}\cdot \color{blue}{(2m-5)}\right)\right)\\\\ &=\frac 1{2m-1}\left(1+\frac {2m-2}{2m-3}\cdot \color{blue}{(2m-3)}\right)\\\\ &=\frac 1{2m-1}\cdot \color{blue}{(2m-1)}\\\\ &=\color{red}{\large 1}\\\\\end{align}$$ In alternative notation, starting from the innermost parenthesis, $$\scriptsize\begin{align} a_1\;\;&=1+\frac 21&&=3\\ a_2\;\;&=1+\frac 43a_1&&=5\\ a_3\;\;&=1+\frac 65 a_2&&=7\\ &\qquad \vdots &\qquad \\ a_n\;\;&=1+\frac {2n}{2n-1}a_{n-1}&&=2n+1\\ &\qquad \vdots &\qquad \\ a_{m-1}&=1+\frac {2m-2}{2m-3}\cdot {(2m-3)}&&=2m-1\\ S\;\;\;&=\frac 1{2m-1}a_{m-1}&&=1 \end{align}$$
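The telescoping can be confirmed with exact rational arithmetic; a sketch (the range of $m$ tested is arbitrary):

```python
from fractions import Fraction

def term(m, k):
    """(2m-2)(2m-4)...(2m-2k+2) / [(2m-1)(2m-3)...(2m-2k+1)] as an exact fraction."""
    num = 1
    for j in range(1, k):           # k-1 factors in the numerator (empty for k=1)
        num *= 2 * m - 2 * j
    den = 1
    for j in range(1, k + 1):       # k factors in the denominator
        den *= 2 * m - 2 * j + 1
    return Fraction(num, den)

for m in range(1, 30):
    assert sum(term(m, k) for k in range(1, m + 1)) == 1
```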
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Closed form of the elliptic integral $\int_0^{2\pi} \sqrt{1+\cos^2(x)}\,dx $ I want to prove the closed form shown in Wikipedia for the arc length of one period of the sine function. Source of wikipedia $$\int_0^{2\pi} \sqrt{1+\cos^2(x)} \ dx= \frac{4\sqrt{2}\,\pi^{\frac32}}{\Gamma\left(\frac{1}{4}\right)^2}+\frac{\Gamma\left(\frac{1}{4}\right)^2}{\sqrt{2\pi}}$$ Could someone offer a demonstration of this statement?
$$\int_0^{2\pi} \sqrt{1+\cos^ 2 x} dx = 4 \int_0^1 \frac{\sqrt{1+x^2}}{\sqrt{1-x^2}} dx = 4\int_0^1 \frac{1+x^2}{\sqrt{1-x^4}} dx $$ Now use the beta function.
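A numerical confirmation sketch with Simpson's rule (note the leading coefficient $4\sqrt{2}\,\pi^{3/2}$ — with $3\sqrt{2}$ instead, the two sides differ by about $0.6$):

```python
import math

def simpson(f, a, b, steps=20000):
    """Composite Simpson's rule on [a, b] (steps must be even)."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

arc = simpson(lambda x: math.sqrt(1 + math.cos(x) ** 2), 0.0, 2 * math.pi)
g2 = math.gamma(0.25) ** 2
closed = 4 * math.sqrt(2) * math.pi ** 1.5 / g2 + g2 / math.sqrt(2 * math.pi)
assert abs(arc - closed) < 1e-9
assert abs(arc - 7.6403956) < 1e-5   # known arc length of one period of sine
```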
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to calculate the grey area of this irregular five-pointed star? The corners of a five-pointed star lie on the sides of a square ABCD with side length 1, such that two corners of the star coincide with the corner points A and D. Further corner points of the star each lie in the interior of the edges AB, BC and CD. The area of the middle pentagon is $\frac{1}{12}$. Calculate the sum of the areas of the gray-colored triangles. I've got absolutely no idea how to solve this.
The idea is to use several ways to compute the grey areas. I don't want to multiply the notations to name all the line intersections. I will, however, call $A_g$ the sum of all grey areas, $A_{ext}$ the sum of all white exterior areas and $A_m$ the area of the middle pentagon. If you calculate the area of each of the grey triangles based on the triangles and quadrilaterals using the sides of the square, you notice that each exterior area will be used twice. In fact, each time you will have grey triangle = large triangle - exterior parts in the said triangle (in some cases we are dealing with quadrilaterals, but it does not change anything). Then you have $A_g=\frac{3}{2}-2A_{ext}$ Since the total area of the square is $1$, you can write it like this $A_g=2(1-A_{ext})-\frac12$ But $1-A_{ext}=A_g+A_m$! Thus $A_g=\frac 12-2A_m=\frac13$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve $x \ddot {x} + {\dot {x}}^2=0$. I solved a physics problem and I got this equation, but I don't know how to proceed. Could you solve this equation for $x(t)$: $x \ddot {x} + {\dot {x}}^2=0$
Note that $(xx')'=xx''+x'^2=0$, so $xx'$ is constant, say $2xx'=2C$. Then $(x^2)'=2xx'=2C$, so $x^2=2Ct+D$, i.e. $x(t)=\pm\sqrt{2Ct+D}$.
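To double-check, one can verify symbolically (a quick SymPy sketch) that $x(t)=\sqrt{2Ct+D}$ satisfies $x\ddot x+\dot x^2=0$:

```python
import sympy as sp

t, C, D = sp.symbols('t C D', positive=True)
x = sp.sqrt(2*C*t + D)   # candidate solution x(t)

# Plug into x*x'' + (x')^2 and simplify
residual = sp.simplify(x*sp.diff(x, t, 2) + sp.diff(x, t)**2)
print(residual)  # 0
```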
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove that $\frac{1+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta}=\sin\theta+i\cos\theta$ Prove that $$\frac{1+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta}=\sin\theta+i\cos\theta$$ I tried to rationalize the denominator but I always end up with a large fraction that won't cancel out. Is there something I'm missing? Thanks in advance
We know that $\sin^2\theta+\cos^2\theta = 1$ and $a^2-b^2=(a-b)(a+b)$. Then \begin{align} \frac{1+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta} &= \frac{\color{red}{\sin^2\theta+\cos^2\theta} +\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta} \\ &= \frac{\color{red}{\sin^2\theta+(-i\cos\theta)(i\cos\theta)} +\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta} \\ &= \frac{\color{red}{\sin^2\theta- (i\cos\theta)^2}+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta} \\ &= \frac{\color{red}{(\sin\theta +i\cos\theta)(\sin\theta- i\cos\theta)}+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta} \\ &= (\sin\theta +i\cos\theta)\frac{1+\sin\theta- i\cos\theta}{1+\sin\theta-i\cos\theta} \\ &= \sin\theta +i\cos\theta. \end{align}
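If you want a quick numerical confirmation of the identity (a small Python sketch using complex arithmetic at a few sample angles):

```python
import math

max_err = 0.0
for theta in (0.3, 1.0, 2.5, 5.5):
    s, c = math.sin(theta), math.cos(theta)
    lhs = (1 + s + 1j*c) / (1 + s - 1j*c)
    rhs = s + 1j*c
    max_err = max(max_err, abs(lhs - rhs))
print("max deviation over sampled angles:", max_err)  # essentially machine epsilon
assert max_err < 1e-12
```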
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 8, "answer_id": 4 }
$\sum c_nn^{-s_0}$ converges implies $\sum c_nn^{-s}$ cannot have a pole in the line $\Re{s}=\sigma_0$ If the Dirichlet series $\sum c_nn^{-s}$ converges at $s_0=\sigma_0+it_0$, prove that the function defined by the series for $\Re{s}>\sigma_0$ cannot have a pole on the line $\Re{s}=\sigma_0$. I hope the question meant that for any $s , \Re{s}=\sigma_0$ there is a sequence $s_k\to s$ and $M>0$ such that $|\sum c_nn^{-s_k}|<M\,\,,\forall k$ For simplicity, lets assume $s_0=0$. Then $$\sum_{n=1}^{N}c_nn^{-s}=\frac{S_N}{N^{s}}+\sum_{n=1}^{N-1}S_n\left(n^{-s}-(n+1)^{-s}\right) $$ where $S_n$ are the partial sums of the convergent series $\sum c_n$. Since $$n^{-s}-(n+1)^{-s}=sn^{-s-1}+sO(n^{-2})$$ it is enough to show the uniform boundedness along a sequence $s_k\to s$ for the sums $$\sum\frac{S_n}{n^{s_k+1}}$$ at which i am stuck. Hints or any other approach would be appreciated.
One cannot show that for every $s$ with $\operatorname{Re} s = \sigma_0$ there is a sequence $s_k \to s$ on which the sum function is bounded. For example, the series $$P(s) = \sum_{p \text{ prime}} \frac{1}{p^s}$$ converges for all $s$ with $\operatorname{Re} s \geqslant 1$, except for $s = 1$ (see e.g. here). Since $$P(s) = \log \zeta(s) + H(s)$$ with $H$ analytic on the half-plane $\operatorname{Re} s > \frac{1}{2}$, we have $$\lvert P(s)\rvert \sim \log \frac{1}{\lvert s-1\rvert}$$ near $1$, so $\lvert P(s_k)\rvert \to +\infty$ for every sequence $(s_k)$ in $\{ s\in \mathbb{C} : \operatorname{Re} s \geqslant 1,\, s \neq 1\}$ with $s_k \to 1$. What I know how to show is that for $s$ with $\operatorname{Re} s = \sigma_0$ one has $$\lim_{\varepsilon \downarrow 0}\: \varepsilon \sum_{n = 1}^{\infty} \frac{c_n}{n^{s+\varepsilon}} = 0.\tag{1}$$ This shows that the sum function doesn't have a pole at $s$, for if a meromorphic function $f$ has a pole at $s$, then one has $\lim\limits_{\varepsilon \to 0}\: \varepsilon f(s+\varepsilon) = \operatorname{Res}(f;s) \neq 0$ if the pole is simple, and $\lim\limits_{\varepsilon \to 0}\: \lvert\varepsilon f(s + \varepsilon)\rvert = +\infty$ if the order of the pole is greater than one. To show $(1)$, we assume that $s_0 = 0$ to simplify notation (replace $c_n$ with $c_n n^{-s_0}$). Further, by modifying $c_1$, we can assume that $\sum c_n = 0$.
Then, for $\operatorname{Re} s > 0$, a summation by parts gives $$\sum_{n = 1}^{\infty} \frac{c_n}{n^s} = \sum_{n = 1}^{\infty} S_n\biggl(\frac{1}{n^s} - \frac{1}{(n+1)^s}\biggr),$$ whence $$\sigma \Biggl\lvert\sum_{n = 1}^{\infty} \frac{c_n}{n^s}\Biggr\rvert \leqslant \sum_{n = 1}^{\infty} \lvert S_n\rvert \sigma \biggl\lvert \frac{1}{n^s} - \frac{1}{(n+1)^s}\biggr\rvert.\tag{2}$$ Now we have $$\sigma\biggl\lvert \frac{1}{n^s} - \frac{1}{(n+1)^s}\biggr\rvert = \sigma\Biggl\lvert s\int_n^{n+1} \frac{dt}{t^{s+1}}\Biggr\rvert \leqslant \sigma \lvert s\rvert \int_n^{n+1} \frac{dt}{t^{\sigma+1}} = \lvert s\rvert\biggl(\frac{1}{n^{\sigma}} - \frac{1}{(n+1)^{\sigma}}\biggr).\tag{3}$$ By the assumption that $\sum c_n = 0$, for every $\eta > 0$ there is an $N$ such that $\lvert S_n\rvert \leqslant \eta$ for $n \geqslant N$. And there is a $C$ such that $\lvert S_n\rvert \leqslant C$ for all $n$. Inserting $(3)$ into $(2)$, and splitting the sum at $N$, we find that \begin{align} \sigma\Biggl\lvert \sum_{n = 1}^{\infty} \frac{c_n}{n^s}\Biggr\rvert &\leqslant \lvert s\rvert C \sum_{n = 1}^{N-1} \biggl(\frac{1}{n^{\sigma}} - \frac{1}{(n+1)^{\sigma}}\biggr) + \lvert s\rvert \eta \sum_{n = N}^{\infty} \biggl(\frac{1}{n^{\sigma}} - \frac{1}{(n+1)^{\sigma}}\biggr) \\ &= \lvert s\rvert C\biggl(1 - \frac{1}{N^{\sigma}}\biggr) + \frac{\lvert s\rvert\eta}{N^{\sigma}} \\ &\leqslant \lvert s\rvert C\biggl(1 - \frac{1}{N^{\sigma}}\biggr) + \lvert s\rvert\eta. \end{align} Now fix $t\in \mathbb{R}$ and let $s = \sigma + it$ with $0 < \sigma < 1$. Given $\varepsilon > 0$, choose $\eta = \frac{\varepsilon}{2(\lvert t\rvert + 1)}$, and a corresponding $N$. Then $\lvert s\rvert\eta < (\lvert t\rvert + 1)\eta = \frac{\varepsilon}{2}$. 
Finally, there is a $\delta \in (0,1)$ such that $$1 - \frac{1}{N^{\sigma}} < \frac{\varepsilon}{2(\lvert t\rvert + 1)C}$$ for $\sigma \leqslant \delta$, which shows $$\sigma \Biggl\lvert \sum_{n = 1}^{\infty} \frac{c_n}{n^{\sigma + it}}\Biggr\rvert < \varepsilon$$ for $0 < \sigma \leqslant \delta$, and the proof of $(1)$ is complete.
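For intuition, the pole criterion used after $(1)$ can be illustrated numerically (a small mpmath sketch): for $\zeta$, which does have a simple pole at $1$, the quantity $\varepsilon\,\zeta(1+\varepsilon)$ tends to the residue $1 \neq 0$; by the contrapositive of the statement proved above, the series $\sum n^{-s}$ therefore cannot converge at any point of the line $\operatorname{Re} s = 1$.

```python
from mpmath import mp, zeta

mp.dps = 30
eps_list = (0.1, 0.01, 0.001, 0.0001)
vals = [eps * zeta(1 + eps) for eps in eps_list]
for eps, v in zip(eps_list, vals):
    print(eps, v)   # approaches Res(zeta; 1) = 1, not 0
```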
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find two arithmetic progressions of three square numbers I want to know if it is possible to find two arithmetic progressions of three square numbers, with the same common difference: \begin{align} \ & a^2 +r = b^2 \\ & b^2 +r = c^2 \\ & a^2 +c^2 = 2\,b^2 \\ \end{align} and \begin{align} \ & d^2 +r = e^2 \\ & e^2 +r = f^2 \\ & d^2 +f^2 = 2e^2 \\ \end{align} where $a,b,c,d,e,f,r \in \Bbb N$. Here is an example that almost works: \begin{align} \ & 23^2 +41496 = 205^2 \\ & 205^2 + 41496 = 289^2 \\ & 23^2 +289^2 = 2\,(205)^2 \\ \end{align} and \begin{align} \ & 373^2 + 41496 = 425^2 \\ & 425^2 + 41496 = \color{#C00000}{222121} \\ & 373^2 + \color{#C00000}{222121} = 2\,(425)^2 \\ \end{align} where the difference is $41496$, but the last element isn't a square number. I can't find an example of two progressions with three numbers and the same common difference. Could you demonstrate by reductio ad absurdum that such progressions are nonexistent?
There are infinitely many solutions to the system, \begin{align} \ & a^2 +r_1 = b^2 \\ & b^2 +r_1 = c^2 \\ & a^2 +c^2 = 2b^2 \\ \hline \ & d^2 +r_2 = e^2 \\ & e^2 +r_2 = f^2 \\ & d^2 +f^2 = 2e^2 \\ \end{align} with $\color{blue}{r_1=r_2}$. Eliminating $r_1$ between the first two equations (and similarly for $r_2$), one must solve the Pythagorean-like, $$a^2+c^2=2b^2\\ d^2+f^2=2e^2$$ which has solution, $$a,b,c = p^2 - 2q^2,\; p^2 + 2p q + 2q^2,\; p^2 + 4p q + 2q^2\\ d,e,f = r^2 - 2s^2,\; r^2 + 2r s + 2s^2,\; r^2 + 4r s + 2s^2$$ Hence, $$r_1 = -a^2+b^2 = -b^2+c^2 = 4 p q (p + q) (p + 2 q)\\ r_2 = -d^2+e^2 = -e^2+f^2 = 4 r s (r + s) (r + 2 s)$$ Thus one must solve, $$p q (p + q) (p + 2 q) = r s (r + s) (r + 2 s)$$ This is essentially the same equation in this post, hence one solution (among many) is, $$p,\;q = 2 n (m + 6 n),\; m (m + 4 n)\\ \;r,\;s = m (m + 2 n),\; 4 n (m + 3 n)$$ For example, let $m,n = 1,1$, then, $$a,b,c = 146, 386, 526\\ d,e,f = 503, 617, 713\\ r_1=r_2= 127680$$ and so on.
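The example produced by the parametrization is easy to verify directly (a quick Python check; larger $m,n$ can be scanned the same way for more solutions):

```python
a, b, c = 146, 386, 526
d, e, f = 503, 617, 713

# Both triples are squares in arithmetic progression with the same step r1 = r2
assert b**2 - a**2 == c**2 - b**2 == 127680
assert e**2 - d**2 == f**2 - e**2 == 127680
print("common difference r =", b**2 - a**2)
```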
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Defining dihedral groups $\{\sigma \in S_n: $ something $\}$ I am trying to understand how one can define the dihedral groups $D_n$. I have seen the "definition" that just says this is the group of symmetries of a regular $n$-gon. So you have rotations and reflections. But I feel this definition is a bit vague. I asked around and heard that one can define this using generators and relations. I don't know about that. I know that one can "realize" for example $D_4$ as a subgroup of $S_4$. For example $$ D_4 =\{(1), (13), (24), (14)(23), (1234), (12)(34), (1432), (13)(24)\}. $$ I do understand that I get these elements from labelling the vertices of the $4$-gon. This is very concrete for me. Therefore my question is: Is there a nice way to actually define the general dihedral group $D_n$ as a specific concrete subgroup of $S_n$? So, for example, I am looking for something like $$D_n = \{\sigma \in S_n : \text{something} \}.$$ I am not looking for a vague algorithmic way of defining the groups. From the example of $D_4$ I am thinking that it should always have $2$-cycles, but I don't think that this is always true. For example with $D_5$.
Yes. Label the vertices of your regular $n$-gon with $1, 2, 3, \ldots, n$. Keep track of where the vertices go after the rotation or reflection is done. The result is a literal permutation on $\{1, 2, 3, \ldots, n\}$ and the operation is still composition, so $D_n$ is a subgroup of $S_n$.
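For concreteness, here is a small Python sketch of that construction (the labeling $0,\dots,n-1$ and the particular reflection chosen are just one convenient choice): take the basic rotation and one reflection as permutations, close them under composition, and check that the resulting subgroup of $S_n$ has exactly $2n$ elements.

```python
def dihedral_subgroup(n):
    """Generate D_n inside S_n; permutations stored as tuples of images of 0..n-1."""
    rot = tuple((i + 1) % n for i in range(n))   # the n-cycle (0 1 ... n-1)
    ref = tuple((n - i) % n for i in range(n))   # a reflection fixing vertex 0
    group = {tuple(range(n))}                    # start from the identity
    frontier = [rot, ref]
    while frontier:                              # BFS over the Cayley graph
        g = frontier.pop()
        if g not in group:
            group.add(g)
            for h in (rot, ref):
                # compose g∘h (apply h, then g)
                frontier.append(tuple(g[h[i]] for i in range(n)))
    return group

for n in (3, 4, 5, 6):
    assert len(dihedral_subgroup(n)) == 2 * n
print("D_n realized inside S_n with order 2n for n = 3..6")
```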
{ "language": "en", "url": "https://math.stackexchange.com/questions/2470880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Example of element of double dual that is not an evaluation map It's well known that if $V$ is a vector space over a field $F$, then there is a natural injection from $V$ to the double dual $V^{**}$, which associates to every $v \in V$ the evaluation map $\phi \mapsto \phi(v)$, where $\phi: V \to F$ is an arbitrary functional in $V^*$. It's also well known that this injection is an isomorphism if $V$ is finite-dimensional, as any finite-dimensional vector space has the same dimension as its dual. My question is this: are there any nice, readily understood examples of infinite-dimensional vector spaces $V$ for which an element of $V^{**}$ that is not an evaluation map can be explicitly constructed (at least with the axiom of choice)? I find infinite-dimensional double dual spaces hard even to think about.
Quick example, using the axiom of choice: take $V$ to be the set of polynomials with real coefficients. Let $S$ denote the subspace of $V^*$ consisting of those functionals $f$ such that $\lim_{n \to \infty} f(x^n)$ exists. With the axiom of choice, there necessarily exists a complementary subspace $S'$ such that $V^* = S \oplus S'$. Define $\phi:S \to \Bbb R$ by $\phi(f) = \lim_{n \to \infty} f(x^n)$, define $\phi:S' \to \Bbb R$ by $\phi(f) = 0$, and extend $\phi$ to all of $V^*$ by linearity. This $\phi \in V^{**}$ is not an evaluation map: each coefficient functional $f_k$ (extracting the coefficient of $x^k$) satisfies $f_k(x^n) = 0$ for $n > k$, so $f_k \in S$ and $\phi(f_k) = 0$; if $\phi$ were evaluation at a polynomial $p$, this would force every coefficient of $p$ to vanish, i.e. $p = 0$, yet $\phi \neq 0$, since the functional $f(p) = p(1)$ lies in $S$ with $\phi(f) = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2471015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
probability of a simple children’s race game Suppose that a simple children’s race game is played as follows, using an urn containing some blue balls and some red balls, and a “track” having positions numbered consecutively from 0 to 10. A blue “racer” starts at position 0 and moves one space forward (towards position 10) each time a blue ball is drawn from the urn. Similarly, a red “racer” starts at position 0 and moves one space forward each time a red ball is drawn, and the winning “racer” is the one that reaches position 10 first. If balls are drawn without replacement from an urn initially containing 11 blue balls and 12 red balls, what is the probability that the blue “racer” will win? For my answer, I considered that the blue "racer" will win when get 10 blue balls, and the number of red balls is flexible (no more than 9). So I calculated the probability of 10 blue balls with 1 red ball, plus 10 blue balls with 2 red balls plus etc. The probability is 0.59. However, I was wondering is there has any easier way to calculate the probability?
You should note that order is critical here. As such, it may be harder to simplify the calculation (note that your calculation is slightly lacking, again due to the ordering issue; also, since there are fewer blue balls than red ones, the probability of the blue racer winning should be less than $0.5$), but I will try to explain it clearly and give a nice formula. It is critical to understand that the blue player wins exactly when the 10th blue ball is drawn before the 10th red one. Equivalently, we are seeking the probability that the 10th blue ball is drawn on one of turns 10 through 19. That is, the probability of drawing 9 blue balls and $0\le n\le 9$ red ones, with the next ball drawn being blue as well. This leads us to: $$\sum_{n=0}^9\frac{{11\choose9}{12\choose n}}{23\choose9+n}\times\frac{2}{23-9-n\choose1}=\frac{53}{161}\approx0.33$$
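The sum can be evaluated exactly (a quick sketch with Python's `fractions` module and `math.comb`; note that $\binom{23-9-n}{1} = 14-n$):

```python
from fractions import Fraction
from math import comb

p = sum(
    Fraction(comb(11, 9) * comb(12, n), comb(23, 9 + n)) * Fraction(2, 14 - n)
    for n in range(10)
)
print(p, float(p))  # 53/161, approximately 0.329
assert p == Fraction(53, 161)
```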
{ "language": "en", "url": "https://math.stackexchange.com/questions/2471160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Derivatives of composite functions How would I solve a problem that is asking me to find the derivative of $F$ when $$F(x)=f\left(\frac{x+2}{x+4}\right)$$ and $f$ is differentiable. Not asking for the answer here obviously, just the steps needed to get off the ground.
You use the chain rule: if $F(x) = f(g(x))$, then $F'(x) = g'(x)\,f'(g(x))$. Here $g(x) = \frac{x+2}{x+4}$, and the quotient rule gives $g'(x) = \frac{(x+4)-(x+2)}{(x+4)^2} = \frac{2}{(x+4)^2}$, so $F'(x) = \frac{2}{(x+4)^2}\, f'\!\left(\frac{x+2}{x+4}\right)$.
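For instance, with the concrete sample choice $f=\sin$ (just to illustrate; $f$ is arbitrary in the problem), the chain rule can be checked against a finite difference:

```python
import math

def g(x):  return (x + 2) / (x + 4)
def gp(x): return 2 / (x + 4)**2          # quotient rule: ((x+4)-(x+2))/(x+4)^2

def F(x):  return math.sin(g(x))          # sample choice f = sin
def Fp(x): return gp(x) * math.cos(g(x))  # chain rule: g'(x) * f'(g(x))

x, h = 1.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference approximation
assert abs(numeric - Fp(x)) < 1e-8
print("chain rule checked at x = 1")
```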
{ "language": "en", "url": "https://math.stackexchange.com/questions/2471293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Differential equations for chemical reaction $\mathrm{A + 2B \to 3C}$ In a chemical reaction $\mathrm{A + 2B \to 3C}$, the concentrations $a(t)$, $b(t)$ and $c(t)$ of the three substances A, B and C satisfy the differential equations $$ \begin{align} \frac{da}{dt} &= -rab^2\tag{1}\\ \frac{db}{dt} &= -2rab^2\tag{2}\\ \frac{dc}{dt} &= 3rab^2\tag{3} \end{align} $$ with $r > 0$ and initial conditions $a(0) = 1$ and $b(0) = 2$. Show that $b(t) - 2a(t) = 0$. Here is my solution, but it is not right. Any help would be great. First equation $$ \begin{align} \int\frac{da}{a} &= -rb^2\int dt\\ \ln(a) &= -rb^2t + C\\ a(t) &= e^{-rb^2t+ C}\\ a(0) &= 1 &&\to &1 &= e^{0 + C}\\ \ln(1) &= C &&\to &C &= 0\\ a(t) &= e^{-rb^2t} \end{align} $$ Second equation $$ \begin{align} \frac{db}{dt} &= -2rab^2\\ \int \frac{db}{b^2} &= -2ra\int dt\\ b &= \frac{1}{2rat + C}\\ b(0) &= 2 \quad \to \quad 2 = \frac{1}{C} \quad \to \quad C = \frac{1}{2}\\ b(t) &= \frac{2}{ 2rat + 1} \end{align} $$ But now $b(t) - 2a(t) \ne 0$. Where am I making a mistake? Any tip will be enough.
Notice that, by your equations, $\dfrac{d(2a - b)}{dt} = 2\dfrac{da}{dt} - \dfrac{db}{dt} = -2rab^2 -(-2rab^2) = 0; \tag 1$ hence $2a(t) - b(t)$ is constant. Now $2a(0) - b(0) = 2(1) - 2 = 0; \tag 2$ the desired result follows.
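As a numerical sanity check (a sketch with SciPy; the rate $r=1$ is an arbitrary choice), integrating equations $(1)$ and $(2)$ confirms that $b(t)-2a(t)$ stays at $0$ along the whole trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

r = 1.0  # arbitrary positive rate constant

def rhs(t, y):
    a, b = y
    return [-r * a * b**2, -2 * r * a * b**2]

sol = solve_ivp(rhs, (0, 5), [1.0, 2.0], rtol=1e-10, atol=1e-12)
deviation = np.max(np.abs(sol.y[1] - 2 * sol.y[0]))
print("max |b - 2a| along the trajectory:", deviation)  # ~0 up to solver tolerance
assert deviation < 1e-6
```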
{ "language": "en", "url": "https://math.stackexchange.com/questions/2471462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Vertex Cover without integer programming Is there a way to formulate an LP(linear programming) for minimum vertex cover problem without forcing the variables to be integers (no integer programming)? The number of variables are not restricted as long as they are finite. I googled it for so long, but all I could find was 2-approximation algorithm.
To add to Mark's answer: It has been shown here that the factor 2 cannot be improved. On the other hand, if your graph is bipartite, the relaxed version of the ILP gives integer solutions, since then the constraint matrix is totally unimodular. So, this version of the problem can be solved efficiently. See e.g. here.
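To illustrate both points (a sketch with `scipy.optimize.linprog`; the LP is: minimize $\sum_v x_v$ subject to $x_u+x_v\ge 1$ for every edge and $0\le x_v\le 1$): on a bipartite path the relaxation returns a $0/1$ solution, while on a triangle, an odd cycle, it returns the half-integral optimum $3/2$.

```python
from scipy.optimize import linprog

def vc_lp(n, edges):
    """LP relaxation of vertex cover: min sum x_v, x_u + x_v >= 1, 0 <= x <= 1."""
    # Each edge (u, v) gives the constraint -(x_u + x_v) <= -1
    A = [[-1 if v in e else 0 for v in range(n)] for e in edges]
    res = linprog(c=[1]*n, A_ub=A, b_ub=[-1]*len(edges), bounds=[(0, 1)]*n)
    return res.fun, res.x

# Bipartite path 0-1-2: totally unimodular constraint matrix, integral optimum
val, x = vc_lp(3, [(0, 1), (1, 2)])
print(val, x)   # value 1.0, picking the middle vertex

# Triangle: the relaxation is half-integral with value 1.5 (all x_v = 0.5)
val, x = vc_lp(3, [(0, 1), (1, 2), (2, 0)])
print(val, x)
```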
{ "language": "en", "url": "https://math.stackexchange.com/questions/2471543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }