Prove $x^n-p$ is irreducible over $\mathbb{Z}[i]$ where $p$ is an odd prime. By Gauss's lemma this is equivalent to irreducibility over $\mathbb{Q}(i)$. Using field extensions this is easy: $[\mathbb{Q}(i,\sqrt[n]{p}):\mathbb{Q}(i)][\mathbb{Q}(i):\mathbb{Q}]=[\mathbb{Q}(i,\sqrt[n]{p}):\mathbb{Q}(\sqrt[n]{p})][\mathbb{Q}(\sqrt[n]{p}):\mathbb{Q}]=2n$. Thus $[\mathbb{Q}(i,\sqrt[n]{p}):\mathbb{Q}(i)]=n$, so $x^n-p$ must be the minimal polynomial of $\sqrt[n]{p}$ over $\mathbb{Q}(i)$, and hence is irreducible. However, the book says you can solve this problem using the Eisenstein criterion. That is easy when $x^2+1$ is irreducible mod $p$, as $(p)$ is then prime. What do you do in the other cases?
To prove this via Eisenstein's criterion, use the fact that $\mathbb Z[i]$ is a principal ideal domain; in fact, it is Euclidean. Also, for odd primes $p$ in $\mathbb Z$, $p$ remains prime in $\mathbb Z[i]$ for $p\equiv 3\pmod 4$, and $p$ factors as a product of two distinct primes $p=p_1p_2$ in $\mathbb Z[i]$ for $p\equiv 1\pmod 4$. (No, this is not obvious...) Thus, in either case, there is a prime in $\mathbb Z[i]$ dividing the constant term (and all the others except the leading term, since those others are $0$), and whose square does not divide the constant term. So we can apply Eisenstein. Appropriate proof(s) about the remaining-prime and/or factoring-into-distinct-primes cases depend on your context... but the explanation may give you some motivation to look at otherwise-technical points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3778041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is $f(x)=\frac{x^{2}-1}{x-1}$ continuous at $x=1$? Given $f(x)=\frac{x^{2}-1}{x-1}$. The function is said to be discontinuous at $x=1$, but since we can simplify it and rewrite $f(x)=x+1$, this removes the discontinuity. So is the function continuous or discontinuous at $x=1$? How do the two forms of $f(x)$ differ, since both expressions are equal to each other? What stops us from simplifying the earlier expression and saying the function is continuous? There was a similar question here but it didn't address my latter point.
$\lim_{x\rightarrow 1} f(x) = 2$, so the singularity is removable. Let $$g(x) = \left\{ \begin{array}{cc} f(x), & x \ne 1\\ 2, & x=1 \end{array} \right.$$ The function $g(x)$ is continuous. So, yes, $f(x)$ is discontinuous, but this discontinuity is easily repaired. If you were to graph $y=f(x)$, it would be the straight line $y=x+1$ with the point $(1,2)$ removed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3778244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Relation between diagonal entries of $A^{-1}$ and inverse values of $a_{ii}$ for positive definite $A$. I'd like to expand upon this question. Namely, it says that if $A$, $A=A^T$, is a positive definite matrix, then it holds that \begin{equation}\tag{*}(A^{-1})_{ii}\ge \frac1{A_{ii}}.\end{equation} Can we prove the converse, i.e., if (*) holds for all $1\le i\le n$, then $A$ is positive definite? OK, as suggested by @Klaus and @Jan I accept this answer and continue here.
No, that's even wrong for numbers, e.g. $A = -1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3778415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
2 Cross Products? Usually, if we want to find the cross product of 2 vectors $\vec{b}$ and $\vec{c}$, we want to find the vector which is perpendicular to both of them. Let's say the cross product of $\vec{b}$ and $\vec{c}$ is $\vec{d}$. Isn't $-\vec{d}$ then also perpendicular to $\vec{b}$ and $\vec{c}$? Does that mean that there are 2 cross products, or am I making a mistake?
The cross product of $\vec b$ and $\vec c$ is defined as the vector with the following properties: * *The length of the product is equal to $|\vec b|\cdot|\vec c|\cdot\sin(\alpha)$, where $\alpha$ is the angle between the two vectors. *The product is perpendicular to both $\vec b$ and $\vec c$. *The direction of the product is such that it follows the right hand rule. The last point ensures that the cross product is uniquely defined by $\vec b$ and $\vec c$. That is, of the two vectors that satisfy points 1 and 2, only one of them satisfies point 3. Note that there are many interpretations of the right hand rule, from (literally) hand-wavy ones, to (for the purpose of this question) circular ones (i.e., one way to define the right hand rule would be to say that it is defined by the direction of the cross product). Let's strike a balance then and define the right hand rule as follows: if $\vec a \times \vec b=\vec c$, then, looking onto the plane spanned by $\vec a$ and $\vec b$ from the positive side (i.e., from the side into which $\vec c$ points), the angle required to rotate $\vec a$ into $\vec b$ is smaller than the angle required to rotate $\vec b$ into $\vec a$.
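As an editorial aside, the three defining properties are easy to spot-check numerically; a minimal sketch in plain Python follows (the sample vectors and helper names `cross`, `dot`, `norm` are mine, chosen for illustration):

```python
import math

def cross(b, c):
    # Component formula for the right-hand-rule cross product in R^3.
    return (b[1] * c[2] - b[2] * c[1],
            b[2] * c[0] - b[0] * c[2],
            b[0] * c[1] - b[1] * c[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

b = (1.0, 2.0, 0.5)
c = (-1.0, 0.0, 3.0)
d = cross(b, c)
neg_d = tuple(-x for x in d)

# Both d and -d satisfy properties 1 and 2 ...
for v in (d, neg_d):
    assert abs(dot(v, b)) < 1e-12 and abs(dot(v, c)) < 1e-12
cos_a = dot(b, c) / (norm(b) * norm(c))
sin_a = math.sqrt(1.0 - cos_a ** 2)
assert math.isclose(norm(d), norm(b) * norm(c) * sin_a)
assert math.isclose(norm(neg_d), norm(b) * norm(c) * sin_a)
# ... so only the orientation convention (property 3) singles out d.
```

Only the orientation convention distinguishes $\vec d$ from $-\vec d$; length and perpendicularity are satisfied by both, which is exactly the point of the answer.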
{ "language": "en", "url": "https://math.stackexchange.com/questions/3778546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $p(x)$ be a polynomial with integer coefficients. Show that if $p(2)=3$ and $p(3)=5$ then $p(n)\ne0$ for all integers $n$. I did manage to solve it using the fact that $a-b \mid p(a)-p(b)$, but I found a more elegant solution online and didn't quite understand it, and I am hoping that someone can help me understand it! If $p(n)=0$ then $p(n)\equiv 0 \pmod 2$ as well. But either $n\equiv 2 \pmod 2$ or $n\equiv 3 \pmod 2$, and in both cases $p(n) \equiv 1 \pmod 2$. A contradiction. I understand that any number must either be divisible by $2$ or have a remainder of $1$ after division by $2$, but how did they conclude that this would imply that $p(n) \equiv 1 \pmod 2$? Thanks in advance!
If $n$ is even, then $p(n)\equiv p(2)\bmod 2$ and likewise for $n$ odd. In both cases, we have $p(n)\equiv 1\bmod 2$, so $p$ is odd at every integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3778692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Inverse Laplace transform, need help with this: $\frac{s^2}{s^2+\sqrt{2}s+1}$. I rewrote the denominator as $(s+\frac{\sqrt{2}}{2})^{2}+\frac{1}{2}$, giving $\frac{s^2}{(s+\frac{\sqrt{2}}{2})^{2} + \frac{1}{2}}$, and I have no idea how to move forward because partial fraction decomposition fails.
Hint: $$\frac{s^2}{s^2+\sqrt2\,s+1}=1-\frac{\sqrt2\left(s+\frac{\sqrt2}2\right)}{\left(s+\frac{\sqrt2}2\right)^2+\frac12}$$
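A quick numerical check of the hint's algebra (an editorial addition; the identity follows by expanding the completed square in the denominator, and the helper names here are mine):

```python
import math
import random

SQRT2 = math.sqrt(2.0)

def lhs(s):
    return s ** 2 / (s ** 2 + SQRT2 * s + 1.0)

def rhs(s):
    # The hint's decomposition around the completed square.
    shifted = s + SQRT2 / 2.0
    return 1.0 - SQRT2 * shifted / (shifted ** 2 + 0.5)

# The denominator s^2 + sqrt(2) s + 1 has negative discriminant
# (2 - 4 < 0), so it never vanishes for real s.
for _ in range(1000):
    s = random.uniform(-10.0, 10.0)
    assert math.isclose(lhs(s), rhs(s), rel_tol=1e-9, abs_tol=1e-9)
```

From here the constant term inverts to a Dirac delta, and the remaining term has the standard shifted-cosine form; the resulting inverse transform should be $\delta(t)-\sqrt2\,e^{-t/\sqrt2}\cos(t/\sqrt2)$ (my computation, not part of the answer above).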
{ "language": "en", "url": "https://math.stackexchange.com/questions/3778940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence of Lebesgue measurable sets. I've been working on the following result: Let $f$ be Lebesgue measurable on $[0,1]$ with $f(x)>0$ almost everywhere on $[0,1]$. Assume there are measurable sets $E_k \subseteq [0,1]$ with $\int_{E_k} f(x)\,dx\to 0$ as $k \to \infty$. Then $m(E_k) \to 0$ as $k \to \infty$. I've been attempting to bound $f$ in some way, as then the result follows quickly, but it doesn't seem that I can do this since $f$ need not be bounded away from $0$ on $[0,1]$.
There exist $k_1<k_2<\cdots$ such that $\int_{E_{k_j}} f(x)dx <\frac 1 {2^{j}}$. Hence $\int \sum_j 1_{E_{k_j}} f(x)dx <\infty$. This implies that $\sum_j 1_{E_{k_j}} f(x) <\infty$ almost everywhere. Since $f(x) >0$ a.e. this gives $\sum_j 1_{E_{k_j}} <\infty$ almost everywhere. Hence $\limsup E_{k_j}$ has measure $0$. Now apply Fatou's Lemma to $1_{E_{k_j}^{c}}$. You get $1-m( \limsup E_{k_j}) \leq 1-\limsup m(E_{k_j})$, which shows that $m(E_{k_j}) \to 0$. So far we have proved that $m(E_{k_j}) \to 0$ for some subsequence $(k_j)$. But we can prove that $m(E_k) \to 0$ by applying this argument to subsequences of $(E_k)$. [A sequence of real numbers tends to $0$ iff every subsequence of it has a further subsequence which tends to $0$.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove that $(a_1 − 1)(a_2 − 2)...(a_9 − 9)$ is always an even number. $a_1, a_2,..., a_9$ is an arbitrary permutation of positive integers from 1 to 9. Prove that $(a_1 − 1)(a_2 − 2)...(a_9 − 9)$ is always an even number. So I don't understand what the question is asking by "arbitrary permutation of positive integers". Does that mean consecutive integers or multiples? And also, can someone please help me prove this too? Thank you so much! :)
The only way the product $(a_1 − 1)(a_2 − 2) \dots (a_9 − 9)$ could be odd is if $a_1, a_3, a_5, a_7$, and $a_9$ are all even numbers. That would take $5$ even numbers ...
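The counting argument can also be confirmed by brute force; a small editorial sketch ($9! = 362880$ permutations is easily enumerable, and `product_is_even` is a helper name of mine):

```python
from itertools import permutations

def product_is_even(perm):
    # The product (a_1 - 1)(a_2 - 2)...(a_n - n) is even iff at
    # least one factor a_i - i is even.
    return any((a - i) % 2 == 0 for i, a in enumerate(perm, start=1))

# Exhaustive check over all 9! permutations of 1..9.
assert all(product_is_even(p) for p in permutations(range(1, 10)))
```

The same pigeonhole reasoning works for any odd $n$: the $(n+1)/2$ odd positions cannot all receive even values, since only $(n-1)/2$ even values exist among $1,\dots,n$.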
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How many $4$-digit numbers of the form $\overline{1a2b}$ are divisible by $3?$ Hello, I am new here so I don't really know how this works. I know that for something to be divisible by $3$, you add the digits and see if their sum is divisible by $3$. So that means $3+a+b=6, 9, 12, 15, 18,$ or $21.$ I'm just confused about how to calculate the number of cases.
Giving you a hint: you got $3 + a + b = 6,9,12,15,18$ or $21$, which implies that $a + b = 3,6,9,12,15$ or $18$. Now do case-work and find all possible $a,b$ which can satisfy these. This may take a bit of work. (For example, when $a + b = 3$ we have $(a,b) = (0,3),(1,2),(2,1),(3,0)$.) Note that you forgot the case when $3 + a + b = 3$; in that case $(a,b) = (0,0)$. Edit: Keep in mind that $a,b$ are $1$-digit numbers. Hence if $a + b = 12$, $(a,b) = (1,11)$ is not a solution, but $(a,b) = (3,9)$ is a solution.
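For what it's worth, the case-work can be cross-checked by brute force (an editorial addition; both counts below come straight from the divisibility rule in the question):

```python
# Count 4-digit numbers of the form 1a2b divisible by 3 directly ...
count = sum(1 for a in range(10) for b in range(10)
            if (1000 + 100 * a + 20 + b) % 3 == 0)

# ... and via the digit-sum criterion 3 | (1 + a + 2 + b).
count_by_digit_sum = sum(1 for a in range(10) for b in range(10)
                         if (3 + a + b) % 3 == 0)

assert count == count_by_digit_sum == 34
```

The residues of $0,\dots,9$ modulo $3$ split as $4+3+3$, so the pair count is $4\cdot4+3\cdot3+3\cdot3=34$, matching the case-work.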
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to evaluate the volume of the tetrahedron bounded between the coordinate planes and a tangent plane? Find the volume of the tetrahedron in $\mathbb{R}^3$ bounded by the coordinate planes $x =0, y=0, z=0$, and the tangent plane at the point $(4,5,5)$ to the sphere $(x -3)^2 +(y -3)^2 +(z -3)^2 = 9$. My attempt: I started with determining the equation of the tangent plane, which comes out to be $x+2y+2z=24$. This is because the direction ratios of the normal to the sphere at $(4, 5, 5)$ are $2, 4, 4$. So the equation of the tangent plane is given by $2(x-4)+4(y-5)+4(z-5)=0$, which means $x+2y+2z=24$. The required volume is $$\int _{x=0}^4\int _{y=0}^{12-\frac{x}{2}}\int _{z=0}^{12-y-\frac{x}{2}}\:\:dz\:dy\:dx$$ but this is not giving me the required answer, which is $576$. Please help.
The plane intercepts the axes at the points \begin{align} A&=(24,0,0), \\ B&=(0,12,0), \\ C&=(0,0,12) \end{align} so the volume is $$ V=\frac{1}{6}\cdot24\cdot12\cdot12=576 $$
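A small editorial check of the intercepts and the volume formula (note, as an aside, that the outer integral in the question's attempt should run to the $x$-intercept $24$, not $4$):

```python
# Axis intercepts of the tangent plane x + 2y + 2z = 24.
A = (24, 0, 0)
B = (0, 12, 0)
C = (0, 0, 12)

# Each intercept lies on the plane ...
for (x, y, z) in (A, B, C):
    assert x + 2 * y + 2 * z == 24

# ... and for a tetrahedron cut off by a plane with axis intercepts
# p, q, r, the volume is |p*q*r| / 6 (a scalar triple product).
V = abs(A[0] * B[1] * C[2]) / 6
assert V == 576
```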
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Open source software for calculation of eigenvalues of a symbolic matrix. I have the following matrix \begin{bmatrix} -\alpha & 0 & \beta & \gamma\cdot\omega_m \\ 0 & -\alpha & -\gamma\cdot\omega_m & \beta \\ R_r\frac{L_h}{L_r} & 0 & -\frac{R_r}{L_r} & -\omega_m \\ 0 & R_r\frac{L_h}{L_r} & \omega_m & -\frac{R_r}{L_r} \end{bmatrix} where $$ \alpha = \frac{R_s + R_r\frac{L^2_h}{L^2_r}}{L_{s\sigma}+\frac{L_h}{L_r}L_{r\sigma}} $$ $$ \beta = \frac{R_r\frac{L_h}{L^2_r}}{L_{s\sigma}+\frac{L_h}{L_r}L_{r\sigma}} $$ $$ \gamma = \frac{\frac{L_h}{L_r}}{L_{s\sigma}+\frac{L_h}{L_r}L_{r\sigma}}\cdot p_p $$ and I would like to calculate its eigenvalues symbolically. EDIT: The matrix can be rewritten in the following form \begin{bmatrix} -a & 0 & b & c\cdot d \\ 0 & -a & -c\cdot d & b \\ e\cdot f & 0 & -e & -d \\ 0 & e\cdot f & d & -e \end{bmatrix} I have been looking for some open-source software usable for that purpose. I have already tried wxMaxima, but I received some overcomplicated expressions containing square roots which I am not able to simplify. Can anybody recommend open-source software which offers good results for symbolic eigenvalue calculation?
This response is (perhaps) barely appropriate as an answer rather than a comment. However, it may well be the best that the OP can do. First of all, consider trying to programmatically identify the general roots of a quartic equation. Although the general formula is somewhat unwieldy, writing a computer program (e.g. using Java, C, Python, ...) to calculate the roots should be very straightforward. Similarly, writing a computer program to calculate the eigenvalues of a 4x4 matrix should also be straightforward. Given the other responses to this posting, I would say (as a retired professional programmer) that the OP's surrendering to the need to write his own software may be best. Edit: It just occurred to me that dealing with roots like $(1 + \sqrt{2})$ or $[1 + \sin(23^{\circ})]$ may be problematic if the OP needs exactness rather than (for example) the right answers correct to 10 decimal places. If exactness is needed, then the OP has to (somehow) anticipate all the various forms that the solution may come in and develop special methods to handle them. For example, computing $(1 + \sqrt{2}) \times (3 - \sqrt{2}) \;=\; [1 + 2\sqrt{2}]$ would probably require special code.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
What is the intuition behind pushouts and pullbacks in category theory? What is the intuition behind pullbacks and pushouts? For example, I know that terminal objects kind of end a category (they are last in some sense), and that a product is a kind of pair, but what about pullbacks and pushouts? What is the reasoning behind these names?
Pullbacks are fibred products, i.e., a product with some compatibility restrictions. The terminology came from differential geometry, where you really do pull differential forms or their bundle on $B$ back to differential forms or their bundle on $A$ along an immersion $A\to B$. The product $A\times B$ is just the special case where you pull back $$ \require{AMScd} \begin{CD} @. B\\ @. @V{!}VV\\ A@>{!}>> 1 \end{CD} $$ in which the terminal object $1$ doesn't impose any restrictions, and get $$ \begin{CD} A\times B@>{\operatorname{proj}_2}>> B\\ @V{\operatorname{proj}_1}VV @V{!}VV\\ A@>{!}>> 1 \end{CD} $$ Dually, we have pushouts as a kind of sum, subject to some constraint. Indeed, in Sets we have the disjoint union $$ \begin{CD} \varnothing@>{!}>> B\\ @V{!}VV @V{i_2}VV\\ A@>{i_1}>> A\amalg B \end{CD} $$ as the pushout of $\varnothing\to A,B$, and we also have $$ \begin{CD} A\cap B@>>> B\\ @VVV @VVV\\ A@>>> A\cup B \end{CD}. $$ I don't think "pushout" was coined before the late 1940s, when category theory came along, and it was presumably chosen because it is clearly opposite to "pullback" (a similar word, "pushforward", existed in other contexts but that name was not chosen).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Evaluation of a function where $t = x + \frac{1}{x}$. Consider a function $$y=(x^3+\frac{1}{x^3})-6(x^2+\frac{1}{x^2})+3(x+\frac{1}{x})$$ defined for real $x>0$. Letting $t=x+\frac{1}{x}$ gives: $$y=t^3-6t^2+12$$ Here it holds that $$t=x+\frac{1}{x}\geq2$$ My question is: how do I know that $t=x+\frac{1}{x}\geq2$? I want to know how to get to this point without previously knowing that $t=x+\frac{1}{x}\geq2$.
Because by AM-GM $$x+\frac{1}{x}\geq2\sqrt{x\cdot\frac{1}{x}}=2.$$ Your calculation of $y$ is right: $$y=t^3-3t-6(t^2-2)+3t=t^3-6t^2+12$$ and you got it without using $t\geq2$.
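Both facts (the substitution identity and the AM-GM bound) are easy to spot-check numerically; a minimal editorial sketch:

```python
import math
import random

for _ in range(1000):
    x = random.uniform(0.1, 10.0)          # the question assumes x > 0
    t = x + 1.0 / x
    y = (x ** 3 + x ** -3) - 6.0 * (x ** 2 + x ** -2) + 3.0 * t
    # y collapses to a cubic in t, and t >= 2 by AM-GM.
    assert math.isclose(y, t ** 3 - 6.0 * t ** 2 + 12.0,
                        rel_tol=1e-9, abs_tol=1e-9)
    assert t >= 2.0
```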
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Checking Presentations in GAP If I have the following presentation for $A_5$ $$\langle x,y,z\mid x^3 = y^3= z^3 =(xy)^2=(xz)^2= (yz)^2= 1\rangle$$ with subgroup $$ H = \left\langle {x,y} \right\rangle$$ and let GAP apply coset enumeration to my generators and relations, as with the code below, is there a command I can use to check whether this presentation is indeed for $A_5$? gap> F:=FreeGroup("x","y","z"); <free group on the generators [ x, y, z ]> gap> x:=F.x; x gap> y:=F.y; y gap> z:=F.z; z gap> rels:=[x^3,y^3,z^3,(x*y)^2,(x*z)^2,(y*z)^2]; [ x^3, y^3, z^3, (x*y)^2, (x*z)^2, (y*z)^2 ] gap> G:=F/rels; <fp group on the generators [ x, y, z ]> gap> gens:=GeneratorsOfGroup(G); [ x, y, z ] gap> xG:=gens[1]; x gap> yG:=gens[2]; y gap> zG:=gens[3]; z gap> H:=Subgroup(G,[xG,yG]); Group([ x, y ]) gap> ct:=CosetTable(G,H); [ [ 1, 3, 4, 2, 5 ], [ 1, 4, 2, 3, 5 ], [ 1, 3, 5, 4, 2 ], [ 1, 5, 2, 4, 3 ], [ 2, 3, 1, 4, 5 ], [ 3, 1, 2, 4, 5 ] ] gap> Display(TransposedMat(ct)); [ [ 1, 1, 1, 1, 2, 3 ], [ 3, 4, 3, 5, 3, 1 ], [ 4, 2, 5, 2, 1, 2 ], [ 2, 3, 4, 4, 4, 4 ], [ 5, 5, 2, 3, 5, 5 ] ] I'm asking because I'm doing research where I will enter candidate presentations into GAP and check if the presentation is equal to a certain alternating group.
Yes. Use IdGroup(G);. If G is indeed (a presentation for a group isomorphic to) $A_5$, the output of this command is [60, 5].
{ "language": "en", "url": "https://math.stackexchange.com/questions/3779887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Under what conditions does $(a+b)^{n}=a^{n}+b^{n}$ hold for a natural number $n \geq 2$? My attempt at solving: using $(a+b)^2=a^2+2ab+b^2$; if $(a+b)^2=a^2+b^2$, then $2ab=0$, therefore $a$ and/or $b$ must be $0$. If $a$ and/or $b$ is $0$ then $a^2$ and/or $b^2$ will be $0$. Therefore, for nonzero $a$ and $b$, $(a+b)^2$ can never equal $a^2+b^2$.
Suppose $a\neq0$. Then $$(a+b)^n=a^n\Big(1+\tfrac{b}{a}\Big)^n$$ so it is enough to consider for which values $x$ we have $(1+x)^n=1+x^n$; for then $(a+b)^n=a^n+b^n$ with $b=ax$. This leads to finding all the roots of $$ p_n(x):=\sum^{n-1}_{k=1}\binom{n}{k}x^k=0 $$ When $n=2$, $p_2(x)=2x=0$, and so $x=0$. When $n=3$, $p_3(x)= 3x+3x^2=0$, and so $x=0$ or $x=-1$. When $n=4$, $p_4(x)=4x+6x^2 + 4 x^3=2x(2+3x+2x^2)=0$, and so $x=0$, $x=\frac{-3+i\sqrt{7}}{4}$ or $x=\frac{-3-i\sqrt{7}}{4}$. And so on. Notice that pairs $(a, b)$ satisfying $(a+b)^n=a^n+b^n$ may be complex pairs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Uniform integrability and stochastic dominance. Let $(X_n)$ be a sequence of random variables, and $Y$ an integrable random variable with $$\sup_n P(|X_n| \ge a) \le P(Y \ge a),$$ for all $a \in \mathbb{R}$. Show that $(X_n)$ is uniformly integrable. This may be a stupid question, but I am having doubts about whether my solution is correct. Since $P(|X_n| \ge a) \le P(Y \ge a)$ for all $n$, we have $$\sup_n E(|X_n| ; |X_n|>a)=\sup_n \int_a^{\infty}x\,dF_{|X_n|}(x)\le \int_a^{\infty}x\,dF_{Y}(x).$$ Since $Y$ is integrable, the limit as $a\rightarrow \infty$ is $0$. Is my reasoning correct? If the dominance were pointwise and not stochastic, I am certain of how to prove the result, but I'm not sure in this case.
Not quite. The best way to go about this is using the Darth Vader Rule (the tail-integral formula for expectations), keeping the boundary term. For $a>0$ it gives $$E(|X_n|1_{|X_n| \geq a})= aP(|X_n|\geq a)+\int_a^\infty P(|X_n|\geq x)\,dx \leq aP(Y\geq a)+\int_a^\infty P(Y\geq x)\,dx = E(Y1_{Y\geq a}),$$ where the middle inequality uses the assumed tail bound at every point $x\ge a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Every root of $x^n-1$ is simple in $\mathbb{Z}_p[x]$. Let $p$ be a prime number such that $p$ does not divide $n$. Show that every root of $x^n-\overline{1}$ is simple in $\mathbb{Z}_p$. If $\overline {a} \in \mathbb{Z}_p$ is a root of $x^n - \overline{1}$, then $a^n \equiv 1\pmod p$ and $\gcd(a,p)=1$. By Fermat's little theorem we have $a^{p-1} \equiv 1\pmod p$. Ok, now I need to prove that if $\overline{b} \in \mathbb{Z}_p$ is a root of $x^n - \overline{1}$, that is, $b^n \equiv 1 \pmod p$, then $\overline {a} = \overline{b}$, that is, $a \equiv b\pmod p$. Can you give me a way to solve that?
Hint: If $f(a)=0$, and $a$ is not a simple root, what can you say about $f'(a)$? (The formal derivative) Define $f(x)=x^n-1\in\Bbb Z_p[x]$, and let the group-homomorphism $$\frac{d}{dx}:\Bbb Z_p[x]\rightarrow\Bbb Z_p[x]$$ be the formal derivative. Detecting multiplicity, and why it works Let $f(x)\in R[x]$ be a polynomial in any polynomial ring over any commutative ring $R$. Suppose that $f(a)=0$, then $f(x)=(x-a)g(x)$. Taking the formal derivative we get $$f'(x)=g(x)+(x-a)h(x)$$ If $f'(a)=0$, we get that $g(a)=0$, and so $(x-a)\mid g(x)$, in other words $$(x-a)^2\mid f(x)$$ Applying the test Firstly, $f(0)\neq 0$, since $0^n-1\neq 0$, so assume that there is a non-zero $a\in\Bbb Z_p$ such that $f(a)=0$. Assume also, for sake of contradiction, that $f'(a)=0$. Then $$\frac{d}{dx}(x^n-1)=nx^{n-1}\\na^{n-1}=0$$ Thus $p\mid n$ or $p\mid a$. Both are impossible, so $f'(a)\neq 0$. And we're done!
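The derivative test is easy to confirm computationally for small primes; a minimal editorial sketch checking that $f'(a)=na^{n-1}\not\equiv 0 \pmod p$ at every root $a$ of $x^n-1$ in $\mathbb{Z}_p$ whenever $p\nmid n$:

```python
for p in (3, 5, 7, 11, 13):
    for n in range(2, 20):
        if n % p == 0:
            continue                       # the hypothesis p does not divide n
        roots = [a for a in range(p) if pow(a, n, p) == 1]
        for a in roots:
            # f'(a) = n * a^(n-1) is nonzero mod p, so the root is simple.
            assert (n * pow(a, n - 1, p)) % p != 0
```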
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fundamental group of the Klein bottle acts on $\mathbb{R}$. It is well known that the fundamental group of the Klein bottle can be defined (up to isomorphism) as the group with two generators and one relation $$BS(1,-1)=\langle a,b: bab^{-1}=a^{-1}\rangle $$ In algebraic topology this fundamental group is realized as the group $G$ of homeomorphisms of $\mathbb{R}^{2}$ generated by $f,g:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ where: $$f(x,y)=(x,y+1), g(x,y)=(x+1,1-y)$$ I want to find a linear action of $G$ on $\mathbb{R}$, i.e., an operation $"\cdot":G\times \mathbb{R}\rightarrow \mathbb{R}$ such that: 1. $g\cdot(x_{1}+x_{2})=g\cdot x_{1}+g\cdot x_{2}$ 2. $g_{1}\cdot (g_{2}\cdot x)=(g_{1}g_{2})\cdot x$ 3. $1_{G}\cdot x=x$, for all $g,g_{1},g_{2}\in G$ and $x, x_{1}, x_{2}\in \mathbb{R}$. My motivation: find a solvable non-nilpotent group $G$ acting linearly on a finite-dimensional vector space $M$ such that $H^{0}(G,M)=0$ but $H^{k}(G,M)\neq 0$ for some $k$.
Any one-dimensional representation of this group, will, as for any group, factor through its abelianization, which is $\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$. Concretely, we can have $b$ act by multiplication by any non-zero scalar, and $a$ act by multiplication by $-1$ (we could also have $a$ acting trivially), and these are all the representations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Wronskian of two linearly independent differentiable functions: show $c$ in $[a,b]$ such that $g(c) = 0$ exists. Let $f,g: [a,b] \to \mathbb R$ be two differentiable functions and suppose $f(a) = f(b) = 0$. If $W(f,g): [a,b] \to \mathbb R$, $W(f,g)(x) = f(x)g'(x) - g(x)f'(x)$, does not equal $0$ for any $x$ in $[a,b]$, show that a $c$ in $[a,b]$ must exist such that $g(c) = 0$. I thought about using Rolle's theorem to say that there exists a $k$ in $[a,b]$ such that $f'(k) = 0$, and then using the intermediate value theorem to show that $g(a)$ and $g(b)$ have opposite signs, so a $c$ such that $g(c)=0$ must exist, but I get lost at that last part. Can anyone help me? Thank you!
If possible, take $g\neq 0$ for all $x\in [a,b]$. Then we can define the differentiable function $\frac{f}{g}$ on $[a,b]$. Clearly, $(\frac{f}{g})(a) = 0 = (\frac{f}{g})(b)$. Then by Rolle's theorem, there exists at least one $c\in (a,b)$ such that $(\frac{f}{g})' (c) = \frac{gf'-fg'}{g^2}(c)=\frac{-W(f,g)(c)}{g(c)^2} = 0 \implies W(f,g)(c)=0$, and this is a contradiction. So there exists $c\in (a,b)$ such that $g(c)=0$. Edit: The conclusion really lives on the open interval $(a,b)$: since $f(a)=f(b)=0$, if we also had $g(a)=0$ or $g(b)=0$, then $W(f,g)$ would vanish at that endpoint, contradicting the hypothesis that $W(f,g)$ is never $0$. So we use $(a,b)$ instead of $[a,b]$ in the conclusion of the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability involved in information theory. I was reading Information, Entropy, and the Motivation for Source Codes chapter 2 MIT 6.02 DRAFT Lecture Notes(https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-02-introduction-to-eecs-ii-digital-communication-systems-fall-2012/readings/MIT6_02F12_chap02.pdf, 2.1.2 Examples), trying to understand the mathematics behind information gain when I came across this: Now suppose there are initially N equally probable and mutually exclusive choices, and I tell you something that narrows the possibilities down to one of M choices from this set of N. How much information have I given you about the choice? Because the probability of the associated event is M/N, the information you have received is log2(1/(M/N)) = log2(N/M) bits. (Note that when M = 1, we get the expected answer of log2 N bits.) I could not understand how the probability of the associated event is M/N. Please explain in detail.
It is quite immediate: suppose that $C$ is a random variable with values in a set $\mathcal{C}$ having cardinality $N$, and suppose that all possible values of $C$ have the same probability. Consider a subset $\mathcal{C}'$ of $\mathcal{C}$ that contains $M$ elements. Then, if you consider the event $E$: $C \in \mathcal{C}'$, you get $P(E)=\frac{M}{N}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Indefinite integral of $\frac{\sec^2x}{(\sec x+\tan x)^\frac{9}{2}}$ $$\frac{\sec^2x}{(\sec x+\tan x)^\frac{9}{2}}$$ My approach: Since it is easy to evaluate $\int{\sec^2x}$ , integration by parts seems like a viable option. Let $$I_n=\int{\frac{\sec^2x}{(\sec x+\tan x)^\frac{9}{2}}}$$ $$I_n=\frac{\tan x}{(\sec x+\tan x)^\frac{9}{2}} + \frac{9}{2}\int{\frac{\sec x \tan x}{(\sec x+\tan x)^\frac{9}{2}}dx}$$ Evaluating the new integral again using by parts yields $$\frac{\sec x}{(\sec x+\tan x)^\frac{9}{2}}+\frac{9}{2}\int{\frac{\sec^2x}{(\sec x+\tan x)^\frac{9}{2}}\,dx}$$ $$=\frac{\sec x}{(\sec x+\tan x)^\frac{9}{2}} + \frac{9}{2} I_n$$ Plugging it back, we obtain $$I_n=\frac{\tan x}{(\sec x+\tan x)^\frac{9}{2}} + \frac{9}{2}\frac{\sec x}{(\sec x+\tan x)^\frac{9}{2}} + \frac{81}{4}I_n $$ $$\frac{-77}{4}I_n=\frac{\tan x}{(\sec x+\tan x)^\frac{9}{2}} + \frac{9}{2}\frac{\sec x}{(\sec x+\tan x)^\frac{9}{2}}$$ This obviously doesn't match with bprp's answer. Help! Edit: How do I convert my answer to the answer obtained by him, if mine is correct
You missed a simplification after the second step: $$I=\frac{\tan x}{(\sec x+\tan x)^{\frac92}}+\frac{9}{2}\int \frac{\sec x \tan x}{(\sec x+\tan x)^{\frac92}}dx$$ Now, take the original expression for $I$ $$ I = \int \frac{\sec^2 x}{(\sec x + \tan x)^{\frac92}}dx$$ Add $\frac{9}{2}$ times this to the previous expression: $$ \frac{11}{2} I = \frac{\tan x}{(\sec x+\tan x)^{\frac92}}+\frac{9}{2}\int \frac{\sec x \tan x + \sec^2 x}{(\sec x+\tan x)^{\frac92}}dx$$ Consider the second integral: $$ J = \int \frac{\sec x \tan x + \sec^2 x}{(\sec x+\tan x)^{\frac92}}dx$$ Substitute $ \sec x + \tan x = t$, so that $dt = (\sec x\tan x+\sec^2 x)\,dx$: $$ J = \int t^{-\frac92} dt$$ Done!
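Carrying this to a closed form (my own continuation, not part of the answer above): $J=-\frac27 t^{-7/2}+C$, which gives $I=\frac{2}{11}\tan x\,(\sec x+\tan x)^{-9/2}-\frac{18}{77}(\sec x+\tan x)^{-7/2}+C$. A numerical derivative confirms this candidate antiderivative on $(0,\pi/2)$:

```python
import math

def F(x):
    # Candidate antiderivative assembled from the steps above.
    s = 1.0 / math.cos(x) + math.tan(x)            # sec x + tan x
    return (2.0 / 11.0) * math.tan(x) * s ** -4.5 - (18.0 / 77.0) * s ** -3.5

def integrand(x):
    sec = 1.0 / math.cos(x)
    return sec ** 2 * (sec + math.tan(x)) ** -4.5

# Central-difference check of F' = integrand at a few sample points.
h = 1e-6
for x in (0.3, 0.7, 1.0, 1.3):
    numeric_derivative = (F(x + h) - F(x - h)) / (2.0 * h)
    assert math.isclose(numeric_derivative, integrand(x), rel_tol=1e-4)
```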
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove $\left(3, 1+\sqrt{-5}\right)$ is a prime ideal of $\mathbb{Z}\left[\sqrt{-5}\right]$. How to prove that $(3, 1+\sqrt{-5})$ is a prime ideal of $\mathbb{Z}[\sqrt{-5}]$? Attempt 1: use the definition. Consider $a, b, c, d, k_1, k_2 \in \mathbb{Z}$ s.t. $$ac-5bd=3k_1+k_2,\, \, ad+bc=k_2.$$ To prove $\exists j_1, j_2 \in \mathbb{Z}$ s.t. $3j_1+(1+\sqrt{-5})j_2=a+b\sqrt{-5}$ or $=c+d\sqrt{-5}$. This is a bad way. Attempt 2: to prove $\dfrac{\mathbb{Z}\left[\sqrt{-5}\right]}{\left(3, 1+\sqrt{-5}\right)}$ is an integral domain. I know how to work with quotients of polynomial rings but not with quotients of $\mathbb{Z}\left[\sqrt{-5}\right]$. Attempt 3: $$\mathbb{Z}\left[\sqrt{-5}\right]\cong \mathbb{Z}[x]/\left(x^2+5\right)$$ When we have $\mathbb{Z}[x]/\left(x^2+5\right)$, converting into $\mathbb{Z}\left[\sqrt{-5}\right]$ simplifies the problem. Maybe the other way round is useless. Please give a hint. Please do not give a solution. Thanks!
Hint: The norm $N(a+b\sqrt{-5})=a^2+5b^2$ is a multiplicative function on the ring $\mathbb{Z}[\sqrt{-5}]$. Use this to prove irreducibility, by showing that in any factorization one of the factors must have norm $1$. After showing that the numbers are irreducible, it is correct that the ideal you describe is prime, by using this answer.
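An editorial sketch of the hint's two ingredients: the norm $N(a+b\sqrt{-5})=a^2+5b^2$ is multiplicative, and no element has norm $3$ (so a factor of $3$, which has norm $9$, must have norm $1$ or $9$). The helper names `norm` and `mul` are mine:

```python
from itertools import product

def norm(a, b):
    # N(a + b*sqrt(-5)) = a^2 + 5 b^2
    return a * a + 5 * b * b

def mul(x, y):
    # (a + b*w)(c + d*w) with w^2 = -5
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# Multiplicativity of the norm, checked over a grid of elements.
rng = range(-5, 6)
for x in product(rng, rng):
    for y in product(rng, rng):
        assert norm(*mul(x, y)) == norm(*x) * norm(*y)

# a^2 + 5 b^2 = 3 would force |a| <= 1 and b = 0, which fails,
# so no element of Z[sqrt(-5)] has norm 3.
assert all(norm(a, b) != 3 for a in (-1, 0, 1) for b in (0,))
```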
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to efficiently sample edges from a graph in relation to its spanning tree Consider a connected, unweighted, undirected graph $G$. Let $m$ be the number of edges and $n$ be the number of nodes. Now consider the following random process. First sample a uniformly random spanning tree of $G$ and then pick an edge from this spanning tree uniformly at random. Our process returns the edge. If I want to sample many edges from $G$ from the probability distribution implied by this process, is there a more efficient (in terms of computational complexity) method than sampling a new random spanning tree each time?
While the other answer is correct, it requires the computation of $|E| + 1$ many determinants. There is a faster route when $|E|$ is large. The first thing to note is Kirchhoff's theorem, which states that if $T$ is a uniform spanning tree then $$P(e \in T) = \mathscr{R}(e_- \leftrightarrow e_+)$$ where $e = \{e_-, e_+\}$ and $\mathscr{R}(a \leftrightarrow b)$ is the effective resistance between $a$ and $b$ when each edge is given resistance $1$. This implies that the probability an edge is sampled in your process is $$\mathscr{R}(e_- \leftrightarrow e_+)/(|V| - 1).$$ Thus we only need to compute the effective resistance. If we let $L$ denote the graph Laplacian and $L^+$ its Moore-Penrose pseudoinverse, then $$\mathscr{R}(a \leftrightarrow b) = (L^+)_{aa} + (L^+)_{bb} - 2 (L^+)_{ab}. $$ (See this master's thesis for some nice discussion and references.) Thus, the only computational overhead for computing the marginals is computing a single pseudoinverse. This may be faster than computing $|E|$ many determinants. EDIT: some discussion on complexity. The pseudoinverse of an $n \times n$ matrix can be computed in $O(n^3)$ time, so computing $L^+$ takes $O(|V|^3)$ time; each edge marginal is then an $O(1)$ lookup in $L^+$, so all marginals take $O(|V|^3 + |E|)$ time in total. Conversely, a determinant can be computed in, say, $O(n^{2.3})$ time, so the other answer has complexity $O(|E|\,|V|^{2.3})$. Since $G$ is connected, $|E| \geq |V|-1$, and so the pseudoinverse approach is never asymptotically slower.
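Kirchhoff's marginal formula is easy to verify by brute force on a tiny graph; a minimal editorial sketch on the 4-cycle, where the effective resistance across any edge is $1$ in parallel with $3$, i.e. $3/4$ (the graph and helper names are mine):

```python
from fractions import Fraction
from itertools import combinations

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]           # the 4-cycle

def is_spanning_tree(edges):
    # |V| - 1 edges with no cycle => a spanning tree (union-find).
    parent = list(range(len(V)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                        # edge closes a cycle
        parent[ru] = rv
    return len(edges) == len(V) - 1

trees = [t for t in combinations(E, len(V) - 1) if is_spanning_tree(t)]
marginal = {e: Fraction(sum(e in t for t in trees), len(trees)) for e in E}

# P(e in T) matches the effective resistance 3/4 for every edge ...
assert all(m == Fraction(3, 4) for m in marginal.values())
# ... and the marginals sum to |V| - 1, so dividing by |V| - 1
# yields the edge distribution of the two-step sampling process.
assert sum(marginal.values()) == len(V) - 1
```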
{ "language": "en", "url": "https://math.stackexchange.com/questions/3780959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Ideas for this integral: $\int \frac{\sqrt{\tan{x}}}{\sin{x}} dx$ $$\int \frac{\sqrt{\tan{x}}}{\sin{x}} \mathrm{d}x$$ So I was wondering if I did this correctly: by converting $\sqrt{\tan{x}}$ into $\frac{\sqrt{\sin{x}}}{\sqrt{\cos{x}}}$, I can divide it by $\sin{x}$ and that will give me $\frac{\sqrt{\cos{x}}}{\sqrt{\sin{x}}}$, and that will be $\sqrt{\cot{x}}$, so the integral is formed into $\displaystyle \int \sqrt{\cot x}\mathrm{d}x $. Is this correctly done so far?
$$I=\int \frac{\sqrt{\tan x}}{\sin x} dx$$ Let $\tan x =t^2 \implies \sec^2 x\, dx=2t\, dt$. Then $\sqrt{\tan x}=t$, $\sin x=\frac{t^2}{\sqrt{1+t^4}}$ and $dx=\frac{2t\,dt}{1+t^4}$, so $$I=\int \frac{2}{\sqrt{1+t^4}} dt$$ This can be expressed in terms of elliptic integrals.
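A quick numerical sanity check of the substitution, in pure Python with Simpson's rule; the interval $[0.2, 1]$ is an arbitrary choice inside $(0, \pi/2)$ where the integrand is well defined:

```python
import math

def simpson(f, a, b, n=10000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: math.sqrt(math.tan(x)) / math.sin(x)  # original integrand
g = lambda t: 2 / math.sqrt(1 + t**4)               # after tan x = t^2

a, b = 0.2, 1.0
I1 = simpson(f, a, b)
I2 = simpson(g, math.sqrt(math.tan(a)), math.sqrt(math.tan(b)))
# I1 and I2 agree to many decimal places
```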
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Heine Borel Theorem statement (a) I have been following Prof Winston Ou's course on analysis on Youtube. In the lecture on Heine Borel theorem, he mentioned that a set $E$ in $\mathbb R$ is closed and bounded implies that $E$ is a k-cell (hence $E$ is compact). I don't understand how he came to this conclusion. For instance, I imagine $E$ could be a discrete set (and still being closed and bounded), but it will not be a k-cell. I would be very grateful if anyone could enlighten me.
He means $E$ will be contained in a $k$-cell, and as $k$-cells are compact and $E$ is still a closed subset of one, it will also be compact. The boundedness in Euclidean space/metric forces the set inside a product of compact intervals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that the transformation $w=\frac{2z+3}{z-4}$ maps the circle $x^2+y^2-4x=0$ onto the straight line $4u+3=0$ Question: Show that the transformation $w=\frac{2z+3}{z-4}$ maps the circle $x^2+y^2-4x=0$ onto the straight line $4u+3=0$. My try: $$\begin{align}\\ &x^2+y^2-4x=0\\ &\implies (x-2)^2+y^2=4\\ &\implies |z-2|=2\\ \end{align}\\ $$ Now, $w=\frac{2z+3}{z-4}$ $$\begin{align}\\ &\frac w2=\frac{2z+3}{2z-8}\\ &\implies\require{cancel}\frac{w}{w-2}=\frac{2z+3}{\cancel{2z}+3-\cancel{2z}+8}\\ &\implies\frac{w}{w-2}=\frac{2z+3}{11}\\ &\implies\frac{2z}{11}=\frac{w}{w-2}-\frac{3}{11}\\ &\implies\frac{2z}{\cancel{11}}=\frac{8w+6}{\cancel{11}(w-2)}\\ &\implies z=\frac{4w+3}{w-2}\\ &\implies z-2=\frac{2w+7}{w-2}\\ \end{align}\\ $$ $$\therefore\left|\frac{2w+7}{w-2}\right|=2\\ \implies 2w+7=2w-4 $$ Now, what to do? Where is my fault? Or how to do it? Is there any other possible ways?
Ak19 answered what I would have answered before I got to it, so here's another possible way of how to do it. These transformations map generalized circles to generalized circles; a generalized circle is either a circle or a line. This particular transformation maps the point $z=0$ on the given circle to $w=-\frac34$, the point $z=2+2i$ on the given circle to $-\frac34-\frac{11}4i$, and the point $z=2-2i$ on the given circle to $-\frac34+\frac{11}4i$, so we can see that it maps the circle to the line $u=-\frac34$.
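The image can also be confirmed numerically: sample points $z$ on $|z-2|=2$ (avoiding the pole at $z=4$, which lies on the circle) and check that $u=\operatorname{Re} w$ satisfies $4u+3=0$. A small check in Python:

```python
import cmath
import math

w = lambda z: (2 * z + 3) / (z - 4)

residuals = []
for k in range(12):
    theta = 2 * math.pi * k / 12 + 0.05  # offset keeps z away from the pole z = 4
    z = 2 + 2 * cmath.exp(1j * theta)    # a point on the circle |z - 2| = 2
    residuals.append(abs(4 * w(z).real + 3))  # 4u + 3 should vanish
```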
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$n$ is prime iff $\binom{n^2}{n} \equiv n \pmod{n^4}$? Can you prove or disprove the following claim: Let $n$ be a natural number greater than two , then $$n \text{ is prime iff } \binom{n^2}{n} \equiv n \pmod{n^4}$$ You can run this test here. I have verified this claim for all $n$ up to $100000$ .
Note that $\displaystyle\binom{n^2}{n} = \frac{1}{(n - 1)!} \frac{n^2 (n^2 - 1) ... (n^2 - (n - 1))}{n} = \frac{1}{(n - 1)!} n (n^2 - 1) ... (n^2 - (n - 1))$ Consider a prime $p > 2$. Then $1, 2, ..., p - 1$ are all invertible modulo $p^4$; thus, so is $(p - 1)!$. Now consider $\displaystyle\binom{p^2}{p} = \frac{1}{(p - 1)!} p (p^2 - 1) ... (p^2 - (p - 1))$. Define the polynomial $P(x) = x (x^2 - 1) (x^2 - 2) ... (x^2 - (p - 1))$. We wish to reduce $P(x)$ modulo $x^4$. We note that this will only have an $x$ and $x^3$ term since $P$ is odd. The $x$ term will clearly be $(p - 1)! x$; the $x^3$ term will be $-(p - 1)! x^3 \left(\frac{1}1 + \frac{1}2 + \cdots + \frac{1}{p - 1}\right)$. Then mod $p^4$, we have $\displaystyle \binom{p^2}{p} = p - p^3 \left(\frac{1}1 + \frac{1}2 + \cdots+ \frac{1}{p - 1}\right)$ (taking division modulo $p^4$ as well). Note that when reducing $\mod p$, we have $\frac{1}1 + \frac{1}2 + \cdots + \frac{1}{p - 1} = 1 + 2 + ... + (p - 1)$, since every number from $1$ to $p - 1$ is a unit. And this sum is equal to $\frac{p (p - 1)}{2} \equiv 0 \pmod p$, since $p > 2$. Thus, we see that $\frac{1}1 + \frac{1}2 + \cdots + \frac{1}{p - 1}$ will be divisible by $p$ when the division is done $\mod p^4$ as well. Thus, we have that for all $p>2$ prime, $\displaystyle \binom{p^2}{p} \equiv p \pmod {p^4}$. I don't have anything going the other direction yet.
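The claim itself is cheap to spot-check in Python (the OP went up to $10^5$; a small range keeps this fast — function names here are my own):

```python
from math import comb

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def congruence_holds(n):
    # does C(n^2, n) ≡ n (mod n^4)?  (valid comparison since n < n^4 for n >= 2)
    return comb(n * n, n) % n**4 == n

mismatches = [n for n in range(3, 150) if congruence_holds(n) != is_prime(n)]
# mismatches == [] on this range
```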
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 3, "answer_id": 2 }
Book recommendation : Olympiad Combinatorics book Can anyone recommend me an olympiad style combinatorics book which is suitable for a high schooler ? I know only some basics like Pigeon hole principle and stars and bars . I hope to find a book which contains problems which purely test our originality ( the problems with beautiful constructions like USAMO 2017 -TSTST P2: Which words can Ana pick?, Nim problems, games,tillings, etc ) . More specifically problems which doesn't require theory but requires out of the box thinking . I don't know much about recurrence relations, generating functions or graph theory, so I would also love to see a book which introduces these topics .
One possibility is Problem-Solving Methods in Combinatorics: An Approach to Olympiad Problems by Pablo Soberon. As the title says, it's intended to prepare the student for Olympiad problems, and the author won a gold medal in the International Mathematical Olympiad. Some of the exercises in the book are drawn from recent Olympiads. Coverage includes the pigeonhole principle, graph theory, generating functions, and partitions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
Other absolute value definitions in $\mathbb R$ I know these definitions for the absolute value (or module): given a real number $x$, then $$\bbox[yellow] {|x|=\begin{cases}x & \text{if } x\geq 0\\ -x& \text{if } x< 0\end{cases}}$$ or $$\bbox[yellow] {|x|=\max\{x,-x\}}$$ Are there other definitions in $\mathbb R$ (for example using $\text{sgn}\, x$)? PS: The question is referred to high school students.
Here's some I could think of: * *$|x|$ can be defined as the (unsigned) distance of $x$ from the origin. *It's the even extension of $f:[0,\infty)\to\mathbb R$ where $f(x):=x$. *$|x|$ is the unique norm on $\mathbb R$ with $|1|=1$.
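Since the question explicitly asks about $\operatorname{sgn} x$, two further standard characterizations can be written down (my addition; these are folklore, not from the answer above):

```latex
|x| = x\,\operatorname{sgn} x,
\qquad \operatorname{sgn} x :=
\begin{cases} 1, & x>0,\\ 0, & x=0,\\ -1, & x<0, \end{cases}
\qquad\text{and}\qquad
|x| = \sqrt{x^{2}}.
```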
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Inverse Laplace Transform via Circuit Analysis [HELP] Inverse Laplace Transform $\frac{1}{s^2 + \sqrt{2}s + 1}$ so what I did it changed the denominator to complete the square format which is $\left(s+\frac{\sqrt{2}}{2}\right)^2 + \frac{1}{2}$, then I can solve for $s$, it will make it as $$ \left(\left(s+ \frac{\sqrt{2}}{2}\right) + \frac{\sqrt{2}}{2}i\right) \left(\left(s+ \frac{\sqrt{2}}{2}\right) - \frac{\sqrt{2}}{2}i\right) $$ So now, to the sheet of paper is to do Partial Fraction Decomposition of this which is absurd to me because of complex roots it has: $$ \frac{1}{s^2 + s\sqrt{2} + 1} = \frac{1}{\left(s+\frac{\sqrt{2}}{2}\right)^2 + \frac{1}{2}} $$ Partial Fraction of Complex root will be $$ \frac{K}{\left(s+ \frac{\sqrt{2}}{2}\right) + \frac{\sqrt{2}}{2}i} + \frac{K^*}{\left(s+ \frac{\sqrt{2}}{2}\right) - \frac{\sqrt{2}}{2}i} $$ to follow the formula sheet. which I got my K = -$i\frac{\sqrt{2}}{2}$ and $K^*$ = $i\frac{\sqrt{2}}{2}$ the problem I get is magnitude and $\theta$ is undefined it makes no sense at all.
Once we complete the square, we can use the sine formula and Frequency Shift Theorem to evaluate the inverse transform: If we accept that $$\mathcal{L}(\sin(at)) = \frac{a}{s^2+a^2}$$ and $$\mathcal{L}(e^{ct}f(t)) = F(s-c)$$ where $F(s) = \mathcal{L}(f(t))$, we can take our original fraction: $\begin{align} \mathcal{L}^{-1}(\frac{1}{s^2+\sqrt{2}s+1}) & = \mathcal{L}^{-1}(\frac{1}{(s+\frac{1}{\sqrt{2}})^2+1/2})\\ & = \mathcal{L}^{-1}(\sqrt{2}\frac{\frac{1}{\sqrt{2}}}{(s+\frac{1}{\sqrt{2}})^2+1/2})\\ & = \sqrt{2}*\exp{\frac{-t}{\sqrt{2}}}*\sin(\frac{t}{\sqrt{2}}) \end{align}$ In that last step, we combined the two formulae above, as our fraction was in the form of $\mathcal{L}(\sin(at))$, but shifted by $c = \frac{-1}{\sqrt{2}}$, creating the '$\exp{\frac{-t}{\sqrt{2}}}$' term in the final answer. Were you to continue the partial fraction decomposition method directly, you would end up with two exponentials terms that you could manipulate into the same answer above using the identity: $$\sin(x) = \frac{e^{ix}-e^{-ix}}{2i}$$
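The result can be sanity-checked by numerically computing the Laplace transform of the claimed inverse and comparing it with the original fraction. A pure-Python sketch; the truncation point $T=60$ and the test values of $s$ are arbitrary choices of mine:

```python
import math

def f(t):
    # claimed inverse transform: sqrt(2) e^{-t/sqrt(2)} sin(t/sqrt(2))
    return math.sqrt(2) * math.exp(-t / math.sqrt(2)) * math.sin(t / math.sqrt(2))

def laplace(f, s, T=60.0, n=20000):
    # Simpson's rule for the integral of e^{-st} f(t) on [0, T];
    # the integrand decays exponentially, so the tail beyond T is negligible
    h = T / n
    g = lambda t: math.exp(-s * t) * f(t)
    total = g(0.0) + g(T)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(i * h)
    return total * h / 3

checks = []
for s in (0.5, 1.0, 2.0):
    exact = 1.0 / (s * s + math.sqrt(2) * s + 1.0)
    checks.append(abs(laplace(f, s) - exact))
```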
{ "language": "en", "url": "https://math.stackexchange.com/questions/3781963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
$\triangle ABC$ with a point $D$ inside has $\angle BAD=114^\circ$, $\angle DAC=6^\circ$, $\angle ACD=12^\circ$, and $\angle DCB=18^\circ$. Let $ABC$ be a triangle with a point $D$ inside. Suppose that $\angle BAD=114^\circ$, $\angle DAC=6^\circ$, $\angle ACD=12^\circ$ and $\angle DCB=18^\circ$. Show that $$\frac{BD}{AB}=\sqrt2.$$ I am requesting a geometric proof (with as little trigonometry as possible). A completely geometric proof would be most appreciated. I have a trigonometric proof below. Trigonometric Proof Wlog, let $AB=1$. Note that $\angle ABC=\angle ACB=30^\circ$, so $AC=1$. Then by law of sines on $\triangle ACD$, $$AD=\frac{\sin 12^\circ}{\sin 18^\circ}.$$ By law of cosines on $\triangle ABD$, $$BD^2=1^2+\frac{\sin^212^\circ}{\sin^2{18^\circ}}-2\frac{\sin 12^\circ}{\sin 18^\circ}\cos 114^\circ.$$ As $\cos 114^\circ=-\sin24^\circ$, we get $$BD^2=2+\frac{-\sin^218^\circ+\sin^212^\circ+2\sin12^\circ\sin18^\circ\sin 24^\circ}{\sin^218^\circ}.$$ Then from the identities $\sin^2\alpha-\sin^2\beta=\sin(\alpha-\beta)\sin(\alpha+\beta)$ and $\sin(2\alpha)=2\sin\alpha\cos\alpha$, we have $$BD^2=2+\frac{-\sin 6^\circ\sin 30^\circ+4\sin 6^\circ\cos 6^\circ \sin 18^\circ\sin24^\circ}{\sin^218^\circ}.$$ Because $\sin 30^\circ=\frac12$, we conclude that $BD=\sqrt{2}$ if we can prove $$8\cos 6^\circ \sin 18^\circ \sin 24^\circ=1.$$ This is true because by the identity $2\sin\alpha\cos\beta=\sin({\alpha+\beta})+\sin(\alpha-\beta)$, we have $$2\sin 24^\circ \cos 6^\circ =\sin 30^\circ+\sin 18^\circ.$$ Since $\sin 30^\circ=\frac12$, we obtain $$8\cos 6^\circ \sin 18^\circ \sin 24^\circ =2\sin 18^\circ +4\sin^218^\circ=1,$$ noting that $\sin 18^\circ=\frac{\sqrt5-1}{4}$. Attempt at Geometric Proof I discovered something that might be useful. Construct the points $E$ and $G$ outside $\triangle ABC$ so that $\triangle EBA$ and $\triangle GAC$ are similar to $\triangle ABC$ (see the figure below). Clearly, $EAG$ is a straight line parallel to $BC$. 
Let $F$ and $H$ be the points corresponding to $D$ in $\triangle EBA$ and $\triangle GAC$, respectively (that is, $\angle FAB=\angle DCB=\angle HCA$ and $\angle FAE=\angle DCA=\angle HCG$). Then $\triangle FBD$ and $\triangle HDC$ are isosceles triangles similar to $\triangle ABC$, and $\square AFDH$ is a parallelogram. I haven't been able to do anything further than this without trigonometry. Here is a bit more attempt. If $M$ is the reflection of $A$ wrt $BC$, then through the use of trigonometric version of Ceva's thm, I can prove that $\angle AMD=42^\circ$ and $\angle CMD=18^\circ$. Not sure how to prove this with just geometry. But this result may be useful. (Although we can use law of sines on $\triangle MCD$ to get $MD$ and then use law of cosines on $\triangle BMD$ to get $BD$ in terms of $AB$ too. But this is still a heavily trigonometric solution, even if the algebra is less complicated than the one I wrote above.) I have a few more observations. They may be useless. Let $D'$ be the point obtained by reflecting $D$ across the perpendicular bisector of $BC$. Draw a regular pentagon $ADKK'D'$. Geogebra tells me that $\angle ABK=54^\circ$ and $\angle AKB=48^\circ$. This can be proven using trigonometry, although a geometric proof should exist. But it is easy to show that $KD\perp CD$ and $K'D'\perp BD'$. In all of my attempts, I always ended up with one of the following two trigonometric identities: $$\cos 6^\circ \sin 18^\circ \sin 24^\circ=1/8,$$ $$\cos 36^\circ-\sin18^\circ =1/2.$$ (Of course these identities are equivalent.) I think a geometric proof will need an appearance of a regular pentagon and probably an equilateral triangle, and maybe a square.
Let $\omega$, $O$ be the circumcircle and circumcenter of $\triangle ABC$, respectively. Let $P,Q,R,S$ be four points on the shorter arc $AC$ of $\omega$ dividing this arc into five equal parts. First, we shall prove that $\triangle RSD$ is equilateral. Let $D'$ be a point inside $\omega$ such that $\triangle RSD'$ is equilateral. Also, let $E$ be inside $\omega$ such that $\triangle PQE$ is equilateral. Invoking symmetries we see that $\triangle D'SC \equiv \triangle D'RQ \equiv \triangle EQR \equiv \triangle EPA$. Note that $\angle EQR = \angle QRD'=\angle QRS-60^\circ = 168^\circ - 60^\circ = 108^\circ$. Hence $\angle D'QR = 90^\circ - \frac 12\angle QRD' = 36^\circ$ and $\angle EQD'=108^\circ - 36^\circ = 72^\circ$. But also $\angle D'EQ = 180^\circ - \angle EQR = 180^\circ - 108^\circ = 72^\circ$. Hence $ED'Q$ is isosceles with $QD'=ED'$. Again, using symmetries we see that $AED'C$ is an isosceles trapezoid with $AE=ED'=D'C$. We have $\angle ACD'=\angle SCD' - \angle SCA = 36^\circ - 24^\circ = 12^\circ$. Since $AED'C$ is an isosceles trapezoid, it is cyclic and since $AE=ED'=D'C$, it follows that $\angle D'AC = \frac 12 \angle EAC = \frac 12 \angle ACD'=6^\circ$. Hence $D'$ coincides with $D$. Now comes my favourite part. Some angle chasing shows that $\angle QCE = 18^\circ = \angle DCB$ and $\angle DQC = 24^\circ = \angle BQE$. Hence $D$ and $E$ are isogonal conjugates in $\triangle BQC$. It follows that $\angle CBD = \angle EBQ$. Choose $T$ on $\omega$ so that $BT$ is a diameter. Clearly, $\triangle BQE$ is symmetric to $\triangle TRD$ with respect to perpendicular bisector of $QR$. In particular, $\angle RTD = \angle EBQ$. Let $RT$ intersect $BC$ at $X$. Since $\angle CBD = \angle EBQ = \angle RTD$, quadrilateral $BDXT$ is cyclic. Hence $\angle BDT = \angle BXT$. Then some angle chasing shows that $\angle DOB = 102^\circ = \angle BXT = \angle BDT$. This precisely means that the circumcircle of $DOT$ is tangent to $BD$ at $D$. 
Tangent-secant theorem yields $BD^2=BO\cdot BT = BO \cdot 2BO = 2BO^2$. Hence $$\frac{BD}{AB} = \frac{BD}{BO} = \sqrt 2,$$ as desired.
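For readers who want to double-check the configuration before hunting for a synthetic proof, a few lines of coordinate computation (with $AB=AC=1$ and the law of sines in $\triangle ACD$, exactly as in the question's trigonometric proof) confirm the ratio:

```python
import math

rad = math.radians
# Place A at the origin, C = (1, 0); angle BAC = 114° + 6° = 120°, so B sits
# at angle 120° from ray AC, with AB = 1.
A = (0.0, 0.0)
B = (math.cos(rad(120)), math.sin(rad(120)))
# Law of sines in triangle ACD: AD / sin 12° = AC / sin 162°, and sin 162° = sin 18°.
AD = math.sin(rad(12)) / math.sin(rad(18))
D = (AD * math.cos(rad(6)), AD * math.sin(rad(6)))  # angle DAC = 6°
BD = math.hypot(B[0] - D[0], B[1] - D[1])
ratio = BD / 1.0  # AB = 1
# ratio equals sqrt(2) up to floating-point error
```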
{ "language": "en", "url": "https://math.stackexchange.com/questions/3782069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 3, "answer_id": 0 }
Simplification of an algebraic determinant After studying some analytic geometry, I came across this step in a solution, however, I am not how they managed to simplify the determinant in this way. When I tried to evaluate this, I got: $\frac{bc-ad}{2}+\frac{ad-bc}{2b-2d}$, but didn’t see how this got to the desired form. Many thanks.
We obtain $$\frac12 b\frac{ad-bc}{d-b}-\frac12 d\frac{ad-bc}{d-b}+\frac12d=\frac12 (b-d)\frac{ad-bc}{d-b}+\frac12d=\frac12(bc-ad+d)$$
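Not from the book, just a throwaway check: plugging random numbers into both sides confirms the simplification (pure Python; the tolerance and sample count are arbitrary):

```python
import random

random.seed(1)
max_err = 0.0
for _ in range(1000):
    a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
    if abs(d - b) < 1e-3:
        continue  # skip near the removable singularity of the intermediate form
    lhs = (0.5 * b * (a * d - b * c) / (d - b)
           - 0.5 * d * (a * d - b * c) / (d - b) + 0.5 * d)
    rhs = 0.5 * (b * c - a * d + d)
    max_err = max(max_err, abs(lhs - rhs))
```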
{ "language": "en", "url": "https://math.stackexchange.com/questions/3782321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Range of Convergence of $\sum\limits_{n=1}^{\infty} \frac{(-1)^{n-1}}{n \ 3^n (x-5)^n}$ $$\sum\limits_{n=1}^{\infty} \frac{(-1)^{n-1}}{n \ 3^n (x-5)^n}$$ I am trying to use the alternating series test to find a range of $x$ for which $(1) b_n > b_{n+1}$ and $ (2) \lim_{n \to \infty} \frac{1}{n \ 3^n (x-5)^n} = 0$. If $|\frac{1}{x-5}| \leq 1$ then condition $(1)$ and $(2)$ will not hold. So wouldn't the range be $x < 4$ and $5 \leq x$ ? I know this is not right since the answer should be $ 5\frac{1}{3} \leq x$ and $x < 4 \frac{2}{3}$ ... could someone provide a solution?
By the root test, the series converges for all $x$ such that $$ \limsup_n\sqrt[n]{\frac{1}{n3^n|x-5|^n}}=\frac{1}{3|x-5|}\lim_n\frac{1}{\sqrt[n]{n}}=\frac{1}{3|x-5|}<1 $$ Thus, the series converges for all $x$ such that $|x-5|>\frac{1}{3}$, i.e., all $x$ in $(-\infty,\tfrac{14}{3})\cup(\tfrac{16}{3},\infty)$. At the point $x=\frac{16}{3}$ the series becomes $\sum_n\frac{(-1)^{n-1}}{n}$ which converges (it is an alternating series with decreasing driving term $\frac{1}{n}$). At $x=\frac{14}{3}$ the series becomes $-\sum_n\frac{1}{n}$ which diverges. Therefore, the series converges for all $x\in (-\infty,\tfrac{14}{3})\cup[\tfrac{16}{3},\infty)$
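The boundary case $x=\tfrac{16}{3}$ (i.e. $5\tfrac13$, the endpoint the question mentions) can be checked numerically: there $3(x-5)=1$, so the series reduces to the alternating harmonic series, whose partial sums approach $\ln 2$:

```python
import math

# Partial sums of sum (-1)^(n-1)/n, which is the series at x = 16/3.
N = 1_000_000
s, sign = 0.0, 1.0
for n in range(1, N + 1):
    s += sign / n
    sign = -sign
err = abs(s - math.log(2))  # alternating series bound: about 1/(2N)
```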
{ "language": "en", "url": "https://math.stackexchange.com/questions/3782557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Calculate $\lim_{h\to 0} \frac{\cos(x-2h)-\cos(x+h)}{\sin(x+3h)-\sin(x-h)}$ Calculate $$\lim_{h\to 0} \frac{\cos(x-2h)-\cos(x+h)}{\sin(x+3h)-\sin(x-h)}$$ If I take the limit it results in undefined value. I try to change the formation using identity $\sin A + \sin B$ $$\lim_{h\to 0} \frac{-2\sin\frac{(2x-h)}2\sin(-3h/2)}{2\cos(x+h)\sin(2h)}$$ How do I actually evaluate the limit? With and without derivative?
Direct evaluation gives $0/0$ so apply L'Hospital's rule: \begin{align}\lim_{h\to 0}\frac{\frac{d}{dh}\left(\cos(x-2h)-\cos(x+h)\right)}{\frac{d}{dh}\left(\sin(x+3h)-\sin(x-h)\right)}&=\lim_{h\to 0}\frac{2\sin(x-2h)+\sin(x+h)}{3\cos(x+3h)+\cos(x-h)}\\&=\frac{2\sin(x)+\sin(x)}{3\cos(x)+\cos(x)}\\&=\frac{3}{4}\tan(x) \end{align}
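For the "without derivative" route the question also asks about, the OP's own sum-to-product rewriting can be finished with the standard limit $\lim_{u\to0}\frac{\sin u}{u}=1$; a sketch:

```latex
\lim_{h\to 0}\frac{\cos(x-2h)-\cos(x+h)}{\sin(x+3h)-\sin(x-h)}
=\lim_{h\to 0}\frac{2\sin\left(x-\frac{h}{2}\right)\sin\left(\frac{3h}{2}\right)}{2\cos(x+h)\sin(2h)}
=\lim_{h\to 0}\frac{\sin\left(x-\frac{h}{2}\right)}{\cos(x+h)}
\cdot\frac{\sin\left(\frac{3h}{2}\right)}{\frac{3h}{2}}
\cdot\frac{2h}{\sin(2h)}\cdot\frac{3}{4}
=\frac{\sin x}{\cos x}\cdot 1\cdot 1\cdot\frac{3}{4}
=\frac{3}{4}\tan(x).
```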
{ "language": "en", "url": "https://math.stackexchange.com/questions/3782670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Proving a limit using the $\epsilon$ - $\delta$ definition of limit. Given $\lim _{x\to a}\left(f\left(x\right)\right)=\infty$ and $\lim _{x\to a}\left(g\left(x\right)\right)=c$ where $c \in R$, prove $\lim _{x\to a}\left[f\left(x\right)+g\left(x\right)\right]=\infty$. My attempt: Let for every $M>0$ exists $\delta_1$ which satisfies $0 < |x-a| < \delta_1 \implies f(x) > M$. Let for every $\epsilon > 0$ exists $\delta_2$ which satisfies $0 < |x-a| < \delta_2 \implies |g(x) - c| < \epsilon$. Or, I can write it as $0 < |x-a| < \delta_2 \implies c - \epsilon < g(x) < c + \epsilon$. Let for every $N > 0$ exists $\delta$ which satisfies $0 < |x-a| < \delta \implies f(x) + g(x) > N$. Using $\delta =$ min{$\delta_1, \delta_2$} so $f(x) > M$ and $g(x) > c - \epsilon$, I get $f(x) + g(x) > M + c - \epsilon$. And I'm stuck. I've seen a solution somewhere which divides the final equations for $c = 0, c > 0,$ and $c < 0$ but I don't get the idea why do I have to solve it in cases. I don't know how to construct such $\delta$ that satisfies $N$. I have taken a look at a similar question, proof of limit using epsilon-delta definition, but I'm not quite enlightened with the answer yet.
It's useful to start by stating what you want to prove. In this case: For every $N>0$ there is $\delta>0$ such that $$ 0<|x-a|<\delta \Rightarrow f(x) + g(x) > N.$$ So, if you choose your $M$ and $\epsilon$ so that $$ M+c-\epsilon \geq N,$$ you are done; for instance, take $\epsilon = 1$ and $M = N - c + 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3783040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
If $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$ then $E[X] < \infty$? Let $X$ be a positive random variable. Suppose that $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$ Does this imply that $X$ has finite expectation? that is $E[X] < \infty $ I know that if $E[X] < \infty$ $\Rightarrow$ $\lim_{\alpha \to \infty}\alpha P[X > \alpha] = 0$ (For any positive random variable see: Expected value as integral of survival function), so I was wondering if the converse is true. I have also tried to think of a counterexample but unfortunately I have not been successful. I would really appreciate any hints or suggestions with this problem.
Here is another counterexample based on a problem solved here. Consider the probability space $((0,1),\mathscr{B}((0,1)),\lambda)$ where $\lambda$ is Lebesgue's measure restricted to the unit interval, and consider the function $X:(0,1)\rightarrow\mathbb{R}$ defined by $$X(t):=\frac{1}{t|\log t|}\mathbb{1}_{(0,e^{-1}]}(t)+e\mathbb{1}_{(e^{-1},1)}(t)$$ It is not difficult to check that $X$ is continuous, that $\lim_{t\rightarrow0+}X(t)=\infty$, that $X$ is strictly monotone decreasing on $(0,e^{-1}]$, and that $$ \int_{(0,1]}X\,d\lambda\geq-\int_{(0,e^{-1}]}\frac{dx}{x\log x}=-\log(-\log(x))|^{e^{-1}}_0=\infty $$ For each $\alpha$ large enough, there is exactly one $a_\alpha<\tfrac1e$ such that $X(a_\alpha)=\alpha$. Then $$\begin{align} \lambda(X>\alpha)&=\lambda((0,a_\alpha))=a_\alpha\\ &=\frac{1}{\alpha}\frac{1}{|\log a_\alpha|} \end{align}$$ Furthermore, as $a_\alpha\rightarrow0$ when $\alpha\rightarrow\infty$, $$ \lim_{\alpha\rightarrow\infty}\alpha\lambda(X>\alpha)=\frac{1}{|\log a_\alpha|}\xrightarrow{\alpha\rightarrow\infty}0 $$
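The decay $\alpha\lambda(X>\alpha)=1/|\log a_\alpha|\to0$ is only logarithmic, which a short numeric experiment makes visible (the bisection routine is my own scaffolding, not part of the argument):

```python
import math

def X(t):
    return 1.0 / (t * abs(math.log(t)))

def a_alpha(alpha):
    # X is strictly decreasing on (0, 1/e); bisect in log scale for X(a) = alpha
    lo, hi = 1e-300, math.exp(-1.0)
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric midpoint handles tiny scales
        if X(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

vals = [alpha * a_alpha(alpha) for alpha in (1e2, 1e4, 1e8)]
# vals decreases toward 0, but only like 1/log(alpha)
```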
{ "language": "en", "url": "https://math.stackexchange.com/questions/3783304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Find function $ f(x) $ to ensure the limit has certain value If $\lim_{x\to1}$ $\frac{f(x)}{(x-1)(x-2)} = -3$ , then provide a possible function $y = f(x)$ *I don't understand what the question is asking me and how I should solve it. Can a possible function be $y = f(1)$? Or must I do something else to figure out the answer? I'd appreciate if anyone can help me out.
Suppose you wanted to get rid of the discontinuity at $x=1$. Create a function $f(x)$ that removes this discontinuity, that is, $f(x)=(x-1)g(x)$ with $g$ continuous at $1$. The limit then equals $\frac{g(1)}{1-2}=-g(1)$, so any such $g$ with $g(1)=3$ works; for instance, $f(x)=3(x-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3783455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
If $abc=1$ where $a,b,c>0$, then show that $(a-1+b^{-1})(b-1+c^{-1})(c-1+a^{-1}) \leq 1$. I tried writing everything in terms of $a$ and $c$, but got stuck at $(a-1+ac)(a+1-ac)(1-a+ac) \leq a^2c$ where I thought of trying to show for all $x,y,z>0$ that $(x-y+z)(x+y-z)(-x+y+z) \leq xyz $ and substitute $x=1, y=a$ and $z=ac$. However, I'm not sure if that inequality is even correct and I doubt that this approach will work, so I would like some hints or a piece of advice. I would be grateful for that.
This is a well-known IMO inequality (IMO 2000, Problem 2). After your substitution, use the following: $$(x-y+z)(x+y-z)\leq\dfrac{(x-y+z+x+y-z)^2}{4}=x^2.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3783655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $x_0=1$ and $x_n=\frac {1}{1+x_{(n-1)}}$, find: $\lim_{x\to\infty} x_n$ If $x_0=1$ and $x_n=\dfrac {1}{1+x_{(n-1)}}$, find $\displaystyle\lim_{x\to\infty} x_n$. My attempt: $x_1=1+\dfrac 1 2=\dfrac 3 2$ $x_2=1+\dfrac {1}{1+\frac 3 2}=\dfrac2 5$ Which gives following series: $$1, \frac32, \frac35,\frac 58, \frac 8 {13}, \dots$$ Since the denominator is greater than the numerator, the limit of $x_n=0$ when $x\to\infty$. I do not think this result is correct. I think this problem must have a solution using continued fractions. Any idea?
You miscalculated the first two terms of the sequence: $x_{1}=\frac{1}{1+x_{0}}=\frac{1}{1+1}=\frac{1}{2}$ and $x_{2}=\frac{1}{1+x_{1}}=\frac{1}{1+\frac{1}{2}}=\frac{2}{3}$. Let's try to apply the Fixed Point Theorem. Consider $I=[\frac{1}{2},1]$ and $f(x)=\frac{1}{1+x}$ defined on $I$. * *$f$ is continuous on $I$ *$f(I)\subset I$, since $f'<0$ and $f(\frac{1}{2})=\frac{2}{3}$, $f(1)=\frac{1}{2}$ *$f$ is a contraction: as $f$ is a function of class $C^1$ you have that $|f'(x)|=\frac{1}{(1+x)^2}\leq \frac{1}{(1+\frac{1}{2})^2}=\frac{4}{9}$. Then for any $x,y \in I$ $\Rightarrow$ $|f(x)-f(y)|\leq\frac{4}{9}|x-y|$. Then $x_{n}$ converges to the only fixed point of $f$ in $I$, i.e., the solution of $l=\frac{1}{1+l}$ in $I$, which is $l=\frac{\sqrt{5}-1}{2}$.
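Iterating the map a few dozen times shows the convergence to the positive root of $l^2+l-1=0$, i.e. $l=\frac{\sqrt5-1}{2}\approx0.618$; a three-line check:

```python
x = 1.0                      # x_0
for _ in range(100):
    x = 1.0 / (1.0 + x)      # x_n = 1 / (1 + x_{n-1})
err = abs(x - (5 ** 0.5 - 1) / 2)  # distance to (sqrt(5) - 1) / 2
```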
{ "language": "en", "url": "https://math.stackexchange.com/questions/3783796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is $g(x) = \frac{x^3+9}{x^2}$ one-to-one? Let $g(x) =\frac{x^3+9}{x^2}$ restricted to $D(g) = (-\infty,0)$. Is it 1-1? My approach $g(x) \text{ 1-1 }: g(x_1)=g(x_2) \iff x_1 = x_2$ Let $x_1, x_2 \in D(g)$ then, $$\frac{x_1^3+9}{x_1^2} = \frac{x_2^3+9}{x_2^2} \iff x_2^2 \cdot x_1^3 + 9x_2^2 - x_1^2 \cdot x_2^3 + 9x_1^2 =0 \iff\\ \iff x_2^2(x_1^3+9)-x_1^2(x_2^3+9) = 0 \quad (1)$$ In order for $(1)$ to hold, * *$x_2 = x_1 = 0$, which can't be because $0 \notin D(g)$ or *$x_1 = x_2 = \sqrt[3]{-9} \in D(g)$ Therefore $g(x_1)=g(x_2) \iff x_1 = x_2$ holds only for $\sqrt[3]{-9}$ (and not for every $x \in D(g)$) $$\boxed{ \text{Hence, }g(x)\text{ is NOT 1-1 }}$$ Is this correct?
I would use a calculus approach: $$g(x) = \frac{x^3+9}{x^2}=x+\frac{9}{x^2}$$ Differentiating gives us $$g'(x)=1-\frac{18}{x^3}>0 \quad\text{for } x<0,$$ since $x^3<0$ there, which means $g$ is strictly monotonic over the negative domain, hence one-to-one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Nonlinear differential equation with sine function If $y\in C^1(\mathbb{R})$ and $y'(x) =\sin(y(x) +x^2)$ for every $x\in\mathbb{R}$ with $y(0)=0$ I proved that $y$ is smooth and that $y'(0)=y''(0)=0$ and that $y'''(0)>0$ but how can I prove that $y>0$ in $(0,\sqrt{\pi}) $ and $y<0$ in $(-\sqrt{\pi}, 0)$?
Hint: The constant function $z(x) = 0$ satisfies $z'(x) < \sin (z(x) + x^2)$ for every $x\in (-\sqrt{\pi}, 0) \cup (0, \sqrt{\pi})$, hence it is a sub-solution on those intervals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is the Lie algebra $\mathfrak{sl}_{2}(\mathbb{C})$ generated by the one element $(E_{11} - E_{22})$? In the article Classification of simple complex Lie algebras, 2nd paragraph of chapter 7 (p.15, bottom), the author considers the basis $$ x = E_{12} := \left[ \begin{array}{ll} 0 & 1\\ 0 & 0\\ \end{array} \right], \quad y = E_{21} := \left[ \begin{array}{ll} 0 & 0\\ 1 & 0\\ \end{array} \right], \quad h = \left[ \begin{array}{lr} 1 & 0\\ 0 & -1\\ \end{array} \right] $$ for the Lie algebra $\mathfrak{sl}_{2}(\mathbb{C})$. The following relations hold: $$ [h, x] = 2x, \quad [h, y] = -2y, \quad [x, y] = h. $$ The author asserts that $h$ alone generates the entire Lie algebra. I have tried to carry out the generation and got $$ hh = I, \quad {1 \over 2}(hh + h) = E_{11} := \left[ \begin{array}{ll} 1 & 0\\ 0 & 0\\ \end{array} \right], \quad {1 \over 2}(hh - h) = E_{22} := \left[ \begin{array}{ll} 0 & 0\\ 0 & 1\\ \end{array} \right], $$ the latter two expressions not even being Lie brackets. But I am not seeing how to generate $x$ or $y$ from $h$ exclusively. What am I missing? Sources are appreciated as well.
It's not true. Each element generates a $1$-dimensional abelian Lie algebra. However, if you read the proof, he clearly means the ideal generated by $h$, not the subalgebra. Indeed, he is trying to prove that $\mathfrak{sl}_2(\mathbb{C})$ is simple, so he wants ideals. Also, note that $hh=I$ is a matrix product, not a bracket: the only multiplication available in the Lie algebra is the bracket, and $[h,h]=0$; indeed $I\notin\mathfrak{sl}_{2}(\mathbb{C})$, since $I$ is not traceless.
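The bracket relations, and the fact that the *ideal* generated by $h$ is everything, are easy to verify with matrices. A small NumPy check (using the convention $x=E_{12}$, $y=E_{21}$, which is the assignment under which the three stated relations hold):

```python
import numpy as np

x = np.array([[0, 1], [0, 0]])   # E_12
y = np.array([[0, 0], [1, 0]])   # E_21
h = np.array([[1, 0], [0, -1]])

def br(a, b):
    # Lie bracket [a, b] = ab - ba
    return a @ b - b @ a

ok = (np.array_equal(br(h, x), 2 * x)
      and np.array_equal(br(h, y), -2 * y)
      and np.array_equal(br(x, y), h)
      and np.array_equal(br(h, h), 0 * h))
# An ideal containing h also contains [x, h] = -2x and [y, h] = 2y,
# hence x and y, hence all of sl_2 (the point of the simplicity proof).
```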
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does consistency of propositional logic means? I know a few proofs of consistency of propositional logic, and all of them are based on very similar things. We are showing our axioms are tautologies and our inference rules are preserving truth, so we can only prove the tautologies. Since $\left(A\wedge\lnot A\right)$ is not a tautology, we can't prove $\left(A\wedge\lnot A\right)$, and since inconsistent systems can prove all statements, propositional logic is consistent. Here is my question; what axiomatic system did we use to prove the consistency of propositional logic, and how do we know that that axiomatic system is consistent? How can we be sure that propositional logic is actually consistent? I know inconsistent systems can prove their consistency if we write the formula that states "this axiomatic system is consistent" with its language, since they can prove every statement. Thus, proving that an axiomatic system is consistent with its own axioms is not enough to actually prove the consistency...
The question is: "What does consistency of propositional logic mean?" My reply: there are two classical notions of consistency, consistency in the traditional sense and consistency in Post's sense (also called consistency in the absolute sense). According to the definitions of these notions: consistency of propositional logic in the traditional sense means that there is no well-formed formula such that both it and its negation belong to the set of consequences of propositional logic; consistency of propositional logic in Post's sense (or in the absolute sense) means that the set of consequences of propositional logic is not equal to the set of all well-formed formulas (the set of propositional formulas) of propositional logic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why does $A_s = k[U,T,S]/(UT-S) \otimes_{k[S]} k[S]/(S-s)$ simplify to $ A_s = k[U,T] / (UT-s)$? When reading algebraic geometry (on the technique of base change) in the book Algebraic Geometry 1 - Schemes by Ulrich Gortz, et.al, I came up with the following tensor product: $$ A_s = k[U,T,S]/(UT-S) \otimes_{k[S]} k[S]/(S-s), \quad \quad (\star) $$ and the author claimed that $$ A_s = k[U,T] / (UT-s). \quad \quad (\star\star) $$ My question is: how to simplify $(\star)$ to $(\star\star)$? More on this question: I have read Atiyah and MacDonald's Comm. Algebra and know what a tensor product is. Yet I have not been familiar with the concrete calculation of tensor products (though I know the universal property of tensor products, its relation with localization, its exactness and etc.) So beside the above question, I hope to know that what is going on in your mind when calculating the tensor product? For example, when calculating the quotient ring $k[x,y]/(y-x^2)$, we can imagine that $y-x^2 = 0$ and hence $y=x^2$, then in the ring $k[x,y]$, we can make $y$ be $x^2$ and the quotient ring is isomorphic to $k[x]$. For example, when calculating the quotient ring $k[x,y]/(1-xy)$, we can imagine that $xy=1$ and hence $y=1/x$, then in the ring $k[x,y]$, we can make $y$ be $1/x$ and the quotient ring is isomorphic to $k[x, 1/x]$, or the localization $k[x]_{x}$. Then, when calculating the tensor product, is there a way like these above in mind to help us calculate these? Thank you all! :)
An arbitrary element of $A_s$ is a $k$-linear combination of elements of the form $p(U,T)S^i\otimes 1$, where $p$ is a polynomial over $k$, in two variables. As we are tensoring over $k[S]$, we have the equality: \begin{eqnarray*}p(U,T)S^i\otimes 1&=&p(U,T)\otimes S^i\\&=&p(U,T)\otimes s^i\\&=&p(U,T)s^i\otimes 1\end{eqnarray*} Thus in effect we substitute in $s$ for $S$ in the ring $k[U,T,S]/(UT-S)$, resulting in $k[U,T]/(UT-s)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Confusion in Applying Gauss Divergence Theorem. Evaluate $\displaystyle \iint \vec{F}\cdot ds$ where $\vec{F}= 3x\hat{i} + 2y\hat{j} -5z\hat{k}$ and $S$ is the portion of the paraboloid $y = x^2 + z^2$ that lies behind $y =1$, oriented in the direction of the positive $y$-axis. I am trying to solve this with the help of Gauss's theorem. So, if I consider the disc $S_2 = (x^2 + z^2 = 1)$ along with $S_1 = (y = x^2 + z^2)$ then the surface is closed, so Gauss's theorem is applicable on $S = S_1 \cup S_2$ Now, $\displaystyle\iint F\cdot ds = \int \operatorname{div}F \,\mathrm{dv}$ But $\nabla\cdot F= 0$; this gives, $$\iint_{S_2} F\cdot ds + \iint_{S_1} F\cdot ds = 0$$ Now, for $S_2$ the outward normal vector $= \hat{j}$ So, $\displaystyle\iint_{S_1} F\cdot ds = -\iint (2y)\cdot dx dz$ Solving the integral gives $2\pi$, therefore, $\displaystyle\iint_{S_1} F\cdot ds = -2\pi$ However, the answer given to me is $2\pi$. I do not understand why I am getting an extra negative sign. Can someone please point out my mistake?
When you closed it off with $S_2$, the $S_1$ with its normal pointing in the positive $y$-direction is the inward normal. So there is an extra negative sign.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does a set of vertices uniquely determine a polytope? The question in the title arises because I am trying to prove that a polytope is the convex hull of its vertices, i.e., $\mathcal{P}=conv(V)$. Here is how far I have got. The convex hull of a finite set of vectors is a polytope. So for $v_1,...,v_k$, $\mathcal{Q}=conv(v_1,...v_k)$ is a polytope and I can show that for this polytope, $\mathcal{Q}$, the $v_i$ must be vertices for all $i$. So I now know that if I take the convex hull of a set of vectors, I get a polytope whose vertices are those vectors. But I require that vertices determine a polytope uniquely, or else I cannot complete my proof. Is this true and if not, how do I prove that a polytope is the convex hull of its vertices?
Yes, in general this is how one defines a convex polytope. In general, a convex polytope is defined as the convex hull of a finite set (of vertices), page 14: Def: A convex polytope $P\subset \mathbb{R}^n$, or simply a polytope, is defined as the convex hull of a non-empty finite set $\{x_1,\dots,x_q\}$ So maybe you should provide the definition of convex polytopes that you are using. There is another way to characterize a convex polytope $P$, which is by means of a finite set of inequalities. Sometimes this is referred to as the H-V theorem, where H stands for half-spaces and V for vertices. Sometimes also as the Fundamental Theorem of Polytopes: Theorem 9.2: A non-empty set $P$ of $\mathbb{R}^n$ is a (convex) polytope if, and only if, it is a bounded polyhedral set. A polyhedral set is defined in the same text as Def: A subset $Q$ of $\mathbb{R}^n$ is called a polyhedral set if $Q$ is the intersection of a finite number of closed halfspaces, or $Q=\mathbb{R}^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3784883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the gradient covariant? (intuitively) At 7:47 in this video, the professor defines a function F(x,y) on a regular cartesian grid, then later defines the same function on a scaled cartesian grid (where x' = 2x and y'=2y). Now, after this he takes the gradient of the function defined on the new grid. And for some reason, the gradient of the new function is four times the gradient in the cartesian case, and I can't understand why that should be so. I tried replicating what he did for a single variable function but my results differed quite a bit. I took the function $f(2x) = x$ then defined it on a new x' coordinate system which is squished. $$ f(2x) = x$$ $$ 2x= x'$$ $$ f(x') = \frac{x'}{2}$$ $$ \frac{df}{dx'} = \frac{1}{2}$$ The derivative is actually half of the original function's. That is, the derivative scales in such a way that when I multiply by the derivative of the inner function (analogous to the basis here), everything becomes the same. As in, $$ \frac{df}{dx'} \frac{dx'}{dx} = 1$$ So, I intuitively think that the gradient in this new coordinate system should scale in such a way that the scaling up of the basis vectors is cancelled by the scaling down of the gradient. That is, the gradient should be invariant under coordinate scaling and dilation. However the regular gradient definition we all know and love is not. Why exactly does this happen, other than as a purely algebraic result? Later in the video, he says that we can fix the discrepancy by doing, $$ \nabla F = \frac{ \partial F}{\partial x} \frac{i}{|i|^2} + \frac{ \partial F}{\partial y} \frac{ j}{ |j|^2}$$ but, I can't see a systematic approach for finding that 'fix'
For a single variable, note that if $f(x)=x/2$ then $df/dx=1/2\, i_x$ (writing the direction of coordinates would be useful here) and after re-scaling the coordinates 2 times we have: $$f(y)=y/2,\quad y=2x$$ and $$df/dy=1/2\, i_y=1\,i_x$$ and this is the derivative with respect to $y$; if we convert it to $x$ using the chain rule we get: $$df/dx=(df/dy)(dy/dx)=2\times 1\,i_x$$ which is 4 times the original one, i.e. $4\times 1/2\,i_x$. The multi-variable case is similar. Caution: When you write $f(2x)=x$ for an original function $f(x)=x/2$ and then argue that if you take $x'=2x$ it becomes $f(x')=x'/2$, you are going in a useless circle. In other words, you are first stretching the coordinate twice and then shrinking it in half, i.e. coming back to the original coordinates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Definition of the Space $\mathbb{R}^\infty$ I'm trying to understand the definition of the topological space $\mathbb{R}^{\infty}$, which Hatcher defines as $\cup_n \mathbb{R}^n$ in his book Algebraic Topology. I'm having trouble making sense of this union since $\mathbb{R}^m$ is not literally a subset of $\mathbb{R}^n$ for $m < n$. Certainly the first embeds into the second, but in order to form the space $\bigcup_n \mathbb{R}^n$ and give it the weak topology we need an infinite chain of subsets $\mathbb{R}^0 \subset \mathbb{R}^1 \subset \mathbb{R}^2 \subset \cdots$. We can almost obtain this by replacing each Euclidean space by its embedded image in higher-dimensional Euclidean spaces, but this approach falls short because it will only give us finite subset chains. How do we get around this formal problem?
I think this is just a common shorthand in algebraic topology. More rigorously, I think he's defining $\mathbb{R}^\infty$ as the colimit of the diagram $\mathbb{R}^1\hookrightarrow\mathbb{R}^2\hookrightarrow\dots$ I believe the topology this colimit inherits from each Euclidean space will be the one you're thinking of, but I could be wrong.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Understanding Difference Between Cauchy-Goursat and Related Theorem I am reading Brown and Churchill's introductory complex analysis book, which states the Cauchy-Goursat theorem as follows: If a function $f$ is analytic at all points interior to and on a simple closed contour $C$, then $$ \int_{C} f(z) dz =0 $$ I understand this result and its proof just fine. However, there is a second theorem which claims the following: If a function $f$ is analytic throughout a simply connected domain $D$, then $$ \int_{C} f(z) dz =0 $$ for every closed contour $C$ lying in $D$. So in this second theorem we don't require $C$ to be simple, due to the fact that $D$ is simply connected. However, if $f$ is analytic at all points interior to and on a closed contour $C$, isn't the interior of the closed contour $C$ a simply-connected domain by default? To be more precise, if I rewrite the first theorem as: If a function $f$ is analytic at all points interior to and on a closed contour $C$, then $$ \int_{C} f(z) dz =0 $$ is this not true? I don't see why we require $C$ to be simple, because even if $C$ intersects itself, we can just treat our non-simple closed contour as a union of simple closed contours, and those integrals are all $0$.
I think the main issue here is that the "interior" of a closed curve is not necessarily as easy to define as you might think, especially when you start to consider some very pathological curves in the plane. That's why most authors focus on simple closed curves, for which the Jordan curve theorem gives a nice characterization of the interior of a curve (however this is not trivial to prove, as much as it sounds like an obvious statement). If a curve is entirely enclosed within a simply connected region, then one does not have to worry about interior or exterior at all, which is what the second theorem states. You can think of the first theorem as just a special case of the second, but usually the Cauchy-Goursat theorem is mentioned first as simple closed curves are more familiar in a sense than simply connected regions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Quasilinear PDE $u_t + (u^2)_x = 0$ Cauchy problem The problem I am trying to solve is: \begin{equation}\label{eq:3.1} \begin{cases} \partial_t u + \partial_x(u^2)=0 & x\in \mathbb{R}, t \in (0,\infty]\\ u(x,0)= \begin{cases} 0 & x\leq 0\\ x & 0<x\leq 1\\ 1 & x>1 \end{cases} \end{cases} \end{equation} What I have done is: We will try to reduce the problem to ODEs on a curve $x(t)$ on the $(t,x)$ plane. The equation can be compared with the canonical form, \begin{equation} a\frac{\partial u}{\partial x} +b\frac{\partial u}{\partial t} = c, \end{equation} where $a = 2u$, $b= 1$ and $c=0$. From the Lagrange-Charpit equations, we have, \begin{align}\label{eq:3.2} &\frac{dx}{a}=\frac{dt}{b}=\frac{du}{c} & \text{ substituting we have,}\nonumber\\ \implies &\frac{dx}{2u}=\frac{dt}{1}=\frac{du}{0}& \end{align} Using the second and third ratios from the equation we have, \begin{align}\label{eq:3.3} &\frac{du}{dt}=0 & \text{integrating we have,} \nonumber\\ \implies&u=B,& \end{align} where $B$ is an arbitrary constant. Using the initial conditions, \begin{equation}\label{eq:3.4} u(x,0)= \begin{cases} 0 & x\leq 0\\ x & 0<x\leq 1\\ 1 & x>1 \end{cases} \end{equation} where the characteristic curve $x(t)$ passes through $(c,0)$. By substitution we have, \begin{equation} B= \begin{cases} 0 & x\leq 0\\ c & 0<x\leq 1\\ 1 & x>1. \end{cases} \end{equation} Therefore the solution can be written as \begin{equation}\label{eq:3.5} u= \begin{cases} 0 & x\leq 0\\ c & 0<x\leq 1\\ 1 & x>1. \end{cases} \end{equation} Using the first and second ratios from the equation we have, \begin{align}\label{eq:3.6} &\frac{dx}{dt}=2u & \text{substituting we have,} \nonumber\\ \implies&\frac{dx}{dt}= \begin{cases} 0 & x\leq 0\\ 2c & 0<x\leq 1\\ 2 & x>1. \end{cases} &\text{integrating we have,}\nonumber\\ \implies&x= \begin{cases} B & x\leq 0\\ 2ct+B & 0<x\leq 1\\ 2t+B & x>1. \end{cases} &\nonumber\\ \end{align} where $B$ is an arbitrary constant. 
Using the initial conditions, and that the characteristic curve $x(t)$ passes through $(c,0)$, we have, \begin{equation} x= \begin{cases} c & x\leq 0\\ 2ct+c & 0<x\leq 1\\ 2t+c & x>1. \end{cases} \end{equation} Therefore $u$ becomes, \begin{equation} u(x,t)= \begin{cases} 0 & x\leq 0\\ \frac{x}{2t+1} & 0<x\leq 1\\ 1 & x>1. \end{cases} \end{equation} I think I am missing something. The solution should have $t$ dependence in the intervals. Thanks.
This PDE is very similar to Burgers equation, and the solution $u(x,t)$ deduced from the method of characteristics reads $u = f(x-2u t)$ in implicit form, where $f = u(\cdot, t=0)$. Following the steps in the linked post (see also the comments section), we find $$ u(x,t) = \left\lbrace \begin{aligned} &0 & & x\leq 0\\ &\tfrac{x}{1+2t} & & 0< x\leq 1+2t\\ &1 & & x> 1+2t \end{aligned}\right. $$
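As a sanity check, the middle branch $u = x/(1+2t)$ of this solution can be verified symbolically against the conservation law (a sketch with sympy; the region boundaries themselves still come from the characteristics):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# middle branch of the proposed solution, valid for 0 < x <= 1 + 2t
u = x / (1 + 2*t)

# residual of u_t + (u^2)_x; should vanish identically
residual = sp.diff(u, t) + sp.diff(u**2, x)
assert sp.simplify(residual) == 0

# the branch matches the constant outer states at the region boundaries
assert u.subs(x, 0) == 0
assert sp.simplify(u.subs(x, 1 + 2*t) - 1) == 0
```

The same residual computation applied to the asker's middle branch on the fixed interval $0 < x \le 1$ shows why the region must fan out with $t$: the formula is fine, but the interval is not.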
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why this simple relation between two complicated sums? I have the following two sums: $$A_N =\sum_{n=0}^N\sum_{\substack{m=0 \\ m\neq n}}^N 1/\sqrt{n+m-2\sqrt{nm}}$$ $$B_{N,p} =\sum_{n=0}^N\sum_{\substack{m=0 \\ m\neq n}}^N 1/\sqrt{n+m-2\sqrt{nm}\cos{(2\pi(n-m)/p)}}$$ with $p$ a positive integer. Numerically, I find the following "conjecture": $$ B_{N,p} \to \frac{A_N}{p},$$ when $N \to \infty $. I would like to find a way to prove this, but I have found no fruitful approach so far. Could anyone help me with that? Any ideas or hints would be very appreciated!
Yes, the "conjecture" holds (in the form of $\color{blue}{B_{N,p}/A_N\to1/p}$ as $N\to\infty$). The basic idea is simple: the main contribution to $B_{N,p}$ is given by the terms with $n\equiv m\pmod p$. The next thing we need is $$\lim_{N\to\infty}\frac{1}{N^{3/2}\log N}\sum_{0<n<m<N}\frac{1}{\sqrt{m}-\sqrt{n}}=\frac43.\tag{L}\label{mainlim}$$ To show it, let the sum be $S_N$, and use (for the rightmost inequality, we assume $m-n>1$) $$\iint\limits_{\substack{m\leqslant x\leqslant m+1\\n-1\leqslant y\leqslant n}}\frac{dx\,dy}{\sqrt{x}-\sqrt{y}}\leqslant\frac{1}{\sqrt{m}-\sqrt{n}}\leqslant\iint\limits_{\substack{m-1\leqslant x\leqslant m\\n\leqslant y\leqslant n+1}}\frac{dx\,dy}{\sqrt{x}-\sqrt{y}}.$$ Summing the lower bound over $0<n<m<N$, we obtain a lower bound for $S_N$ as the integral over a domain that contains $\{(x,y):0\leqslant y\leqslant x-2\leqslant N-2\}$. And an upper bound for $S_N$ is the sum of $$\sum_{m=2}^N\frac{1}{\sqrt{m}-\sqrt{m-1}}+\sum_{m=3}^N\frac{1}{\sqrt{m}-\sqrt{m-2}}=\mathcal{O}\left(\sum_{m=1}^N\sqrt{m}\right)=\mathcal{O}(N^{3/2})$$ and the integral over a domain that is contained in $\{(x,y):1\leqslant y\leqslant x-1\leqslant N-2\}$: $$\iint\limits_{\substack{2\leqslant x\leqslant N\\0\leqslant y\leqslant x-2}}\frac{dx\,dy}{\sqrt{x}-\sqrt{y}}\leqslant S_N\leqslant\mathcal{O}(N^{3/2})+\iint\limits_{\substack{2\leqslant x\leqslant N-1\\1\leqslant y\leqslant x-1}}\frac{dx\,dy}{\sqrt{x}-\sqrt{y}}.$$ The integrals may be evaluated exactly (by substituting $x=y+z$ and doing the inner integration over $y$; let me omit the details), and both appear to be $(4/3)N^{3/2}\big(\log N+\mathcal{O}(1)\big)$. This completes the proof of $\eqref{mainlim}$. 
This also gives the asymptotics of $A_N\asymp(8/3)N^{3/2}\log N$ and, more generally, for any $0\leqslant b<a$ $$\sum_{0\leqslant n<m\leqslant N}\frac{1}{\sqrt{am+b}-\sqrt{an+b}}\asymp\frac43\sqrt\frac{N^3}{a}\log N.\qquad(N\to\infty)\tag{A}\label{asympto}$$ Now, as planned at the beginning, we split $B_{N,p}=E_{N,p}+D_{N,p}$, where $$E_{N,p}=2\sum_{\substack{0\leqslant n<m\leqslant N\\n\equiv m\pmod p}}a_p(n,m),\quad D_{N,p}=\sum_{\substack{0\leqslant n,m\leqslant N\\n\not\equiv m\pmod p}}a_p(n,m),\\a_p(n,m)=\big[n+m-2\sqrt{nm}\cos\big(2\pi(n-m)/p\big)\big]^{-1/2}.$$ The sum in $E_{N,p}$ is over pairs $(n,m)=(n'p+r,m'p+r)$ with $0\leqslant n'<m'\leqslant\lfloor(N-r)/p\rfloor$ and $0\leqslant r\leqslant p-1$; since $a_p(n,m)=(\sqrt{m'p+r}-\sqrt{n'p+r})^{-1}$ then, we use $\eqref{asympto}$ and get $$E_{N,p}\asymp\frac83\sum_{r=0}^{p-1}\sqrt\frac{\lfloor(N-r)/p\rfloor^3}{p}\log N\asymp\frac{8}{3p}N^{3/2}\log N.$$ For $D_{N,p}$ finally, we have $2\sqrt{nm}\leqslant n+m$ and $a_p(n,m)\leqslant\big[(n+m)\big(1-\cos(2\pi/p)\big)\big]^{-1/2}$, hence $$D_{N,p}\leqslant\frac{1}{\sqrt{1-\cos(2\pi/p)}}\sum_{0\leqslant n\neq m\leqslant N}\frac{1}{\sqrt{n+m}}=\mathcal{O}(N^{3/2}).$$ Gathering these asymptotic results, we obtain the claim stated at the beginning. 
Update (an elementary approach, avoiding integrals) * *$\color{blue}{A_N=\Omega(N^{3/2}\log N)}$ follows from $$A_N=2\sum_{0\leqslant n<m\leqslant N}(\sqrt{m}-\sqrt{n})^{-1}=2\sum_{d=1}^N\sum_{n=0}^{N-d}(\sqrt{n+d}-\sqrt{n})^{-1}\\=2\sum_{d=1}^N\frac1d\sum_{n=0}^{N-d}(\sqrt{n+d}+\color{LightGray}{\sqrt{n}})\geqslant2\sum_{d=1}^N\frac1d\sum_{n=d}^N\sqrt{n}=2\sum_{n=1}^N\sqrt{n}\sum_{d=1}^n\frac1d\\\geqslant2\sum_{n=1}^N\sqrt{n}\log n\geqslant2(N/2)\sqrt{N/2}\log(N/2).\qquad(N>1)$$ *$\color{blue}{E_{N,p}/A_N\to1/p}$ is shown using the increase of $r\mapsto a_p(n'p+r,m'p+r)$; this gives lower/upper bounds for $E_{N,p}$ in terms of something like $A_{\lfloor N/p\rceil}$, and the same can be done for $A_N$ itself (to avoid dealing with $A_{\lfloor N/p\rceil}/A_N$). *The $\color{blue}{D_{N,p}=\mathcal{O}(N^{3/2})}$ above is shown elementarily.
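For what it's worth, the limit can be probed numerically by brute force. Convergence is only logarithmic, so at moderate $N$ the ratio is only roughly $1/p$; the tolerance below is deliberately loose (a sketch):

```python
import math

def A(N):
    """Direct evaluation of A_N = sum over n != m of 1/sqrt(n+m-2*sqrt(n*m))."""
    total = 0.0
    for n in range(N + 1):
        for m in range(N + 1):
            if m != n:
                total += 1.0 / math.sqrt(n + m - 2.0 * math.sqrt(n * m))
    return total

def B(N, p):
    """Direct evaluation of B_{N,p} with the cosine factor."""
    total = 0.0
    for n in range(N + 1):
        for m in range(N + 1):
            if m != n:
                d = n + m - 2.0 * math.sqrt(n * m) * math.cos(2 * math.pi * (n - m) / p)
                total += 1.0 / math.sqrt(d)
    return total

N, p = 200, 2
ratio = B(N, p) / A(N)
# convergence is O(1/log N), so only expect the ratio loosely near 1/p
assert abs(ratio - 1.0 / p) < 0.2
```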
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculation involving determinant of a matrix Suppose I have the following Toeplitz symmetric matrix \begin{align} M=\begin{bmatrix} 1 & c & c & x \\ c & 1 & c & c \\ c & c & 1 & c \\ x & c & c & 1 \end{bmatrix} \end{align} I want to write an algorithm that takes $c$ as input and calculates the range of $x$ for which matrix $M$ is positive semidefinite. Currently, I do Gaussian elimination by hand and reduce the problem to checking the determinant of a $2 \times 2$ matrix. But how do I automate the process so I can write a function that takes $c$ and $n$ as inputs, where $n$ is the dimension of $M$, and returns the range of $x$. Thanks!
For your specific example, done with pen and paper, $$M_4=\left( \begin{array}{cccc} 1 & c & c & x \\ c & 1 & c & c \\ c & c & 1 & c \\ x & c & c & 1 \end{array} \right)$$ $$\Delta_4=\left(4 c^3-5 c^2+1\right)+\left(4 c^2-4 c^3\right) x+\left(c^2-1\right) x^2$$ With a computer $$M_5=\left( \begin{array}{ccccc} 1 & c & c & c & x \\ c & 1 & c & c & c \\ c & c & 1 & c & c \\ c & c & c & 1 & c \\ x & c & c & c & 1 \end{array} \right)$$ $$\Delta_5=\left(-6 c^4+14 c^3-9 c^2+1\right)+6 \left(c^4-2 c^3+c^2\right) x+\left(-2 c^3+3 c^2-1\right) x^2$$
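The symbolic computation automates directly. A sketch with sympy, assuming the same corner-bordered pattern for general $n$ (diagonal $1$, corners $x$, $c$ elsewhere); it reproduces $\Delta_4$ above, and the admissible range of $x$ then comes from the sign conditions on the principal minors:

```python
import sympy as sp

c, x = sp.symbols('c x')

def corner_toeplitz(n):
    """1 on the diagonal, x in the (1,n)/(n,1) corners, c everywhere else."""
    return sp.Matrix(n, n, lambda i, j: sp.Integer(1) if i == j
                     else (x if {i, j} == {0, n - 1} else c))

delta4 = sp.expand(corner_toeplitz(4).det())
expected = sp.expand((4*c**3 - 5*c**2 + 1) + (4*c**2 - 4*c**3)*x + (c**2 - 1)*x**2)
assert sp.expand(delta4 - expected) == 0

# the boundary of the admissible x-range comes from the roots of the
# quadratic (in x) determinant, as functions of c
roots = sp.solve(sp.Eq(delta4, 0), x)
```

The same `corner_toeplitz(n)` call handles the $5\times 5$ case and beyond, so a function of `c` and `n` only needs to solve `delta_n >= 0` (together with the smaller minors) for `x`.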
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
find the matrix of $D$ relative to this basis $ (-\cos x, \sin x).$ let $D: V\to V$ be the differentiation operator. find the matrix of $D$ relative to this basis $ (-\cos x, \sin x).$ My attempt: $D(-\cos x)=\sin x=0\cdot \cos x + 1\cdot\sin x $ $D(\sin x)=\cos x = 1\cdot\cos x + 0 \cdot\sin x$ Therefore the matrix representing the differential operator $D$ relative to this basis $(-\cos x ,\sin x)$ is $\begin{bmatrix} 0 &1 \\ 1& 0 \end{bmatrix} ^T=\begin{bmatrix} 0 &1 \\ 1& 0 \end{bmatrix}$ Is this true?
$D(\sin x) = \cos x = -(-\cos x)$, as the basis element is $-\cos x$ not $\cos x$. This does not affect the first computation. So the matrix is then $\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$.
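The corrected matrix can be double-checked symbolically, expanding the derivative of each basis element back in the basis (a small sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x')
basis = [-sp.cos(x), sp.sin(x)]

# matrix of D relative to (-cos x, sin x): column j holds the
# coordinates of D(basis[j]) in that basis
M = sp.Matrix([[0, -1],
               [1,  0]])

for j in range(2):
    reconstructed = M[0, j] * basis[0] + M[1, j] * basis[1]
    assert sp.simplify(sp.diff(basis[j], x) - reconstructed) == 0
```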
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conjecture about graph with points and arrows with weights Postulates * *Each point has arrow(s), and each arrow is pointing to another point. *An arrow of a point cannot point to itself. *Arrows can only point in a direction to the right. That means $\uparrow$, $\downarrow$ are not allowed, and of course $\leftarrow$ is not allowed. *Each arrow has a number $x$ on it with $0<x<1$ and $x\in\mathbb{R}$. *Of a point, say A, arrows pointing out to the right must have their numbers summing up to 1, but only if such arrows exist. (That means if a point does not have arrows pointing to the right, the condition does not have to be satisfied) Same for arrows pointing to A. Conjecture Suppose there are a few points and arrows. Prove that if there are some points each with some arrows pointing out to the right, there must be points being pointed to in the same way from the left, in order to satisfy the conditions listed above. (At least this is the pattern I can see.) For example, if there are 2 points each with 3 arrows each pointing to another point, there must be two points each being pointed to by 3 arrows. I think it is related to Graph Theory, but I have no idea how to prove it; or better said, I am not familiar with Graph Theory. Evidence Example 1: There exists $A\xrightarrow{1} B$. Since there is 1 point with 1 arrow pointing to another point, there must be and there is 1 point being pointed to by 1 arrow. Example 2: points A and B both have two arrows pointing to C and D. Hence, both points C and D are being pointed to by two arrows from A and B. Example 3: points A and C both have two arrows pointing out, so two points D and F are being pointed to by two arrows. Point B has three arrows pointing out, so a point E is being pointed to by three arrows. This graph also satisfies the conditions above: * *For point A: $0.8+0.2=1$ *For point B: $0.2+0.3+0.5=1$ *For point C: $0.3+0.7=1$ *For point D: $0.8+0.2=1$ *For point E: $0.2+0.3+0.5=1$ *For point F: $0.3+0.7=1$ These are just simple cases. The conjecture may not be obvious, or may even be false, in more complex graphs.
This is not true. For example, consider a network with four nodes $A,B,C,D$ on the left and four nodes $E,F,G,H$ on the right. Give $A$ two arcs of weight $0.5$ to $E,F$. Give $D$ two arcs of weight $0.5$ to $G,H$. Give each of $B,C$ four arcs of weight $0.25$ to $E,F,G,H$. Then every node on the left has $1$ unit going out, and every node on the right has $0.5+0.25+0.25=1$ unit coming in. But all nodes on the left have $2$ or $4$ arcs going out, and all nodes on the right have $3$ coming in.
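The counterexample can be verified mechanically (a sketch; node names as in the text):

```python
from collections import defaultdict

# arcs[(u, v)] = weight; left nodes A-D point to right nodes E-H
arcs = {('A', 'E'): 0.5, ('A', 'F'): 0.5,
        ('D', 'G'): 0.5, ('D', 'H'): 0.5}
for u in 'BC':
    for v in 'EFGH':
        arcs[(u, v)] = 0.25

out_w, in_w = defaultdict(float), defaultdict(float)
out_deg, in_deg = defaultdict(int), defaultdict(int)
for (u, v), w in arcs.items():
    out_w[u] += w
    in_w[v] += w
    out_deg[u] += 1
    in_deg[v] += 1

# every node's outgoing/incoming weights sum to 1 ...
assert all(abs(out_w[u] - 1) < 1e-12 for u in 'ABCD')
assert all(abs(in_w[v] - 1) < 1e-12 for v in 'EFGH')
# ... yet the out-degrees are {2, 4} while every in-degree is 3
assert sorted(out_deg[u] for u in 'ABCD') == [2, 2, 4, 4]
assert all(in_deg[v] == 3 for v in 'EFGH')
```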
{ "language": "en", "url": "https://math.stackexchange.com/questions/3785911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
partition and equivalence relation's classes My question concerns the definitions more than semantics. That is, a family of sets $P$ is a partition of $X$ if the following conditions hold: * *$P$ doesn't contain the empty set; *the union of all of $P$'s sets gives $X$; *the elements of $P$ are pairwise disjoint. Now for an equivalence relation $R$ over $X$, its classes are defined as: * *$\forall x\in X,$ $c_R(x)=\{y \mid (x,y) \in R\}$ My question is: why is the family of equivalence classes of $R$ a partition of $X$? Why shouldn't it be? Because we could have a relation $R$ and a set $X$ such that: $ \exists x,y \in X, c_R(x)\cap c_R(y)\ne \emptyset$ which violates condition 3. in the definition of a partition. EDIT I am convinced that every two classes of an equivalence relation are either disjoint or equal. But I still have a problem with the definition of equivalence classes: since we have a class for each element, we can have equal classes and hence classes that are not pairwise disjoint, unless the word family in "the family of equivalence classes of a relation $R$ on a set $X$ is a partition of $X$" refers to the distinct classes.
When you talk about the equivalence classes of a relation on a set: * *if you violate (1), i.e. $\emptyset \in P$, you have an invalid relation *if you violate (2), either the union of class elements has something which is not in the original set (which means you have an invalid relation since it is defined on things outside the set of interest) or the set has something not in the union of equivalence classes (which invalidates the relation being an equivalence relation) *if you violate (3), 2 equivalence classes contain the same item $x$, then your equivalence classes just collide into the same class because the equivalence relation is transitive, and if $a \ne x$ is in the first class and $b \ne x$ in the second, since $R$ is transitive you have $(a,x)$ and $(x,b)$ imply $(a,b)$ and classes collapse.
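Concretely, the family of classes is a set, so duplicate classes collapse into one; a small sketch with congruence mod $3$ as the equivalence relation:

```python
X = set(range(12))

def related(a, b):
    return a % 3 == b % 3  # an equivalence relation on X

# c_R(x) for every x; collecting them in a set keeps only distinct classes
classes = {frozenset(y for y in X if related(x, y)) for x in X}

assert len(classes) == 3                       # 12 elements, only 3 classes
assert set().union(*classes) == X              # the classes cover X
assert sum(len(C) for C in classes) == len(X)  # pairwise disjoint
```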
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Solving quintic equations of the form $x^5-x+A=0$ I was on Wolfram Alpha exploring quintic equations that were unsolvable using radicals. Specifically, I was looking at quintics of the form $x^5-x+A=0$ for nonzero integers $A$. I noticed that the roots were always expressible as sums of generalized hypergeometric functions: $$B_1(_4F_3(\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5};\frac{1}{2},\frac{3}{4},\frac{5}{4};\frac{3125|A|^4}{256}))+B_2(_4F_3(\frac{7}{10},\frac{9}{10},\frac{11}{10},\frac{13}{10};\frac{5}{4},\frac{3}{2},\frac{7}{4};\frac{3125|A|^4}{256}))+B_3(_4F_3(\frac{9}{20},\frac{13}{20},\frac{17}{20},\frac{21}{20};\frac{3}{4},\frac{5}{4},\frac{3}{2};\frac{3125|A|^4}{256}))+B_4(_4F_3(\frac{-1}{20},\frac{3}{20},\frac{7}{20},\frac{11}{20};\frac{1}{4},\frac{1}{2},\frac{3}{4};\frac{3125|A|^4}{256}))$$ where the five roots have $(B_1,B_2,B_3,B_4)\in\{(A,0,0,0),(-\frac{A}{4},-\frac{5A|A|}{32},\frac{5|A|^3}{32},-1),(-\frac{A}{4},\frac{5A|A|}{32},-i\frac{5|A|^3}{32},i),(-\frac{A}{4},\frac{5A|A|}{32},i\frac{5|A|^3}{32},-i),(-\frac{A}{4},-\frac{5A|A|}{32},-\frac{5|A|^3}{32},1)\}$ After observing this, I was left with a lot of questions. First, given $A$, is there a formula I can use to generate the values for $D$, $E$, $H$, and $K$? Second, why do these patterns persist? Third, if I take a different set of quintics that can't be solved using radicals and that differ only in their constant term, does a similar pattern to the roots exist? Fourth, can anyone prove that these patterns that I found will always hold? Edit: I found the patterns for $D$, $E$, $H$ and $K$. The question has been updated accordingly.
The answer to your third question is yes! The method uses Bring radicals, whose explicit form in terms of generalized hypergeometric functions can be found using the Lagrange inversion theorem. (In fact since any quintic can be reduced to this form, in principle this method can be used to solve any quintic.) I can answer your second and fourth questions partially by developing this method. But I'm afraid I will only be able to obtain the first solution with coefficients $(A, 0, 0, 0)$. The idea at its core is quite simple. Basically we rewrite the equation as $x^5 - x = - A$, treat the left hand side as a function $f(x) = x^5 - x$, then try to answer the question "what is $f^{-1}(-A)$." This is then done by expressing $f^{-1}$ as a power series. The Lagrange inversion theorem gives this inverse as $$ x = \sum_{k=0}^\infty \binom{5 k}{k} \frac{A^{4k+1}}{4k+1}\ . $$ Unfortunately, this series doesn't converge for all values of $A$. In fact the radius of convergence is $4/(5\times 5^{1/4})\approx 0.535$, so evaluating the series directly would only give us the solution for one integer $A = 0$. This is where the generalized hypergeometric function comes in. We can analytically continue this series to define a function of $A$. The function whose power series (at zero) is $$ \sum_{n=0}^{\infty}\prod_{k=0}^{n} \frac{(k+a_1)\cdots(k+a_p)}{(k+b_1)\dots(k+b_q)(k+1)} z $$ is denoted as $_p F_q(a_1,\dots, a_p;b_1,\dots,b_q;z)$. To convert our function $f^{-1}(A)$ into the standard form, we need to compute the ratio between consecutive terms, which is $$ \begin{align} & \quad \frac{(5k +5)!A^{4k+5}}{(k+1)!(4k+4)!(4k+5)}\cdot\frac{k!(4k)!(4k+1)}{(5k)!A^{4k+1}}\\ & = \frac{(5k+5)(5k+4)(5k+3)(5k+2)(5k+1)(4k+1)A^4}{(k+1)(4k+4)(4k+3)(4k+2)(4k+1)(4k+5)} \\ & = \frac{5(5k+4)(5k+3)(5k+2)(5k+1)}{4(4k+5)(4k+3)(4k+2)(k+1)}A^4 \\ & = \frac{(k+1/5)(k+2/5)(k+3/5)(k+4/5)}{(k+1/2)(k+3/4)(k+5/4)(k+1)}\left(5\left(\frac{5A}{4}\right)^4\right)\ . 
\end{align} $$ Now since the numerator has four factors and the denominator has three factors besides $(k+1)$, this is $_4F_3$ (times an extra factor of $A$, since the starting term in our series is $A$, not $1$). The parameters are the numbers added to $k$ in each factor, and the argument is $(5^5/4^4)A^4 = (3125/256)A^4$. This gives you the first solution $A \;_4F_3(\frac{1}{5}, \frac{2}{5}, \frac{3}{5}, \frac{4}{5}; \frac{1}{2}, \frac{3}{4}, \frac{5}{4}; \frac{3125}{256}A^4)$. In order to obtain the other roots, in principle we can use this root to factor the polynomial and try to solve the resulting quartic. However that involves too much computation and doesn't seem like the neatest way of obtaining the results you got here.
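Within the radius of convergence $|A| < 4/(5\times 5^{1/4})\approx 0.535$ the truncated series itself can be tested numerically (a sketch; the hypergeometric continuation is only needed for larger $|A|$):

```python
import math

def series_root(A, terms=40):
    """Partial sum of x = sum_k C(5k, k) A^(4k+1) / (4k+1)."""
    return sum(math.comb(5 * k, k) * A**(4 * k + 1) / (4 * k + 1)
               for k in range(terms))

A = 0.3  # well inside the radius of convergence ~0.535
x = series_root(A)
# x should solve x^5 - x + A = 0
assert abs(x**5 - x + A) < 1e-12
```

At $A = 0.3$ successive terms shrink by roughly a factor of $10$, so 40 terms is far more than machine precision requires.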
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Visualising the sum of the first $n$ positive odd integers Using the fact that $1+2+\cdots+n=\frac{n(n+1)}{2}$, we can deduce that sum of first $n$ positive odd integers is $n^2$. However, is there a way of finding the sum of $1+3+5+\cdots+(2n-1)$ visually?
Here is a ‘proof’ I once found in a book for young children. It is not a real proof in the mathematical sense, but rather a convincing example that any mathematician feels could be transformed into a rigourous proof: Imagine wooden cubes stacked in rows, with the basis containing, say, $7$ cubes, the row above, $5$ cubes, the row still above, $3$ and the last row $1$, like this: It is a geometrical evidence that, moving the grey squares from the bottom right corner to the top left corner, one recreates a square with sides equal to the number of rows, i.e. $4$ units, hence we have $16$ of them for the sum of the $4$ first odd numbers.
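The identity behind the picture, $1+3+\cdots+(2n-1)=n^2$, is trivially checkable by brute force:

```python
# brute-force check of 1 + 3 + ... + (2n-1) = n^2
for n in range(1, 50):
    assert sum(2*k - 1 for k in range(1, n + 1)) == n * n

# the picture's case: 1 + 3 + 5 + 7 = 16 = 4^2
assert sum(2*k - 1 for k in range(1, 5)) == 16
```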
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do you prove $\pi =\sqrt{12}\sum_{n\ge 0}\frac{(-1)^n}{3^n(2n+1)}$? In the book Pi: A Source Book I found the following: Extract the square root of twelve times the diameter squared. This is the first term. Dividing the first term repeatedly by 3, obtain other terms: the second after one division by 3, the third after more division and so on. Divide the terms in order by the odd integers $1,\,3,\,5,\,\ldots$; add the odd-order terms to, and subtract the even order terms from, the preceding. The result is the circumference. That is equivalent to $$\pi =\sqrt{12}\sum_{n\ge 0}\frac{(-1)^n}{3^n(2n+1)}.$$ The formula is due to an Indian mathematician Madhava of Sangamagrama. The proof of this formula should be in the treatise Yuktibhāṣā written in c. 1530 by an Indian astronomer Jyesthadeva, which I don't have access to. I've been trying to find a proof of the formula elsewhere but with no success. Maybe this could be proved from $$\arctan x=\sum_{n\ge 0}\frac{(-1)^n x^{2n+1}}{2n+1}$$ which is mentioned in Yuktibhāṣā as well, but I don't see how could that be done.
Consider that $$\sum_{n=0}^\infty (-1)^n\frac{ x^{2 n+1}}{2 n+1}=\tan ^{-1}(x)$$ $$\sum_{n=0}^\infty (-1)^n\frac{ x^{2 n}}{2 n+1}=\frac{\tan ^{-1}(x)}x$$ Make $x=\frac 1{\sqrt 3}$ and the rhs is $\frac{\pi }{2 \sqrt{3}}$. Multiply by $\sqrt{12}$ to get $\pi$ as desired.
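Since each term of the series is $3$ times smaller than the previous one, the partial sums converge quickly; a quick numeric sketch:

```python
import math

def madhava_pi(terms):
    s = sum((-1)**n / (3**n * (2*n + 1)) for n in range(terms))
    return math.sqrt(12) * s

# each term shrinks by a factor of 3, so ~30 terms give ~15 digits
assert abs(madhava_pi(30) - math.pi) < 1e-13
```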
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
What does it mean when a number is subscripted with a truth statement? I have seen the following in several papers: $1_{\lvert r\rvert>1}$. What does this mean? Does this evaluate to 1 if $\lvert r\rvert>1$ and 0 otherwise? What would this evaluate to if instead of 1 we had a variable like $x$? Thanks.
Yes, it is an indicator function: $1$ when the condition is true, $0$ when it is not. I have not seen it used with $x$ instead of $1$.
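In code the convention reads as follows (a small sketch):

```python
# Indicator (characteristic function) notation as code: the
# subscripted condition selects between 1 and 0.
def indicator(condition):
    return 1 if condition else 0

r = 2.5
print(indicator(abs(r) > 1))   # 1
r = 0.5
print(indicator(abs(r) > 1))   # 0
```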
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Probability number comes up and then comes up again before another number I was trying to follow the logic in a similar question (Probability number comes up before another), but I can't seem to get it to work out. Some craps games have a Repeater bet. You can bet on rolling aces twice before rolling a 7, rolling 3 three times before 7, etc. The patent for this game (https://patents.google.com/patent/US20140138911) says the odds for aces twice before 7 is 48:1. The wizard of odds (https://wizardofodds.com/games/craps/appendix/5/) says the probability is 0.020408 (which is 1/49). I tried calculating this by multiplying the odds of the two events 1/36 for rolling aces and (1/36)/((1/36)+(1/6)) for rolling aces before 7. I got (1/36)*((1/36)/((1/36)+(1/6))) = 0.003968253968253969 which is like 1/252. I'm obviously missing something, but can't see what. Edit:...sorry...after typing this up i figured it out. The bet has to be made and then aces has to roll before 7 twice. So if 7 rolls before the first aces the bet loses, so I was wrong by using the 1/36 for the first aces. ((1/36)/((1/36)+(1/6)))*((1/36)/((1/36)+(1/6))) 0.020408163265306128 I still don't understand why one says 48:1 when its 1/49
Let $E_1$ denote the event that you roll snake eyes before a 7. Let $E_2$ denote the event that you roll snake eyes before a 7, given that event $E_1$ has already occurred. In fact, the chance of $E_2$ is the same as the chance of $E_1$; I simply separated the events for clarity. The key formula here is that if $A$ and $B$ are mutually exclusive events, and you are trying to compute the chance of $A$ happening before $B$, the probability is $\frac{p(A)}{p(A) + p(B)}$, where $p(A)$ and $p(B)$ denote the chances of events $A$ and $B$ occurring on a single roll. Here, there are 6 ways out of 36 to roll a 7 and 1 way out of 36 to roll snake eyes. Therefore, it is immediate that $p(E_1) = \frac{1/36}{(1/36) + (6/36)} = \frac{1}{7}.$ Once event $E_1$ occurs, the chance of event $E_2$ then occurring is similarly $\frac{1}{7}.$ Thus the chance of events $E_1$ and then $E_2$ both occurring is $\frac{1}{7} \times \frac{1}{7} = \frac{1}{49}.$
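The "A before B" formula can itself be checked by summing the geometric series over indecisive rolls directly (a sketch; $1/36$ for snake eyes, $6/36$ for a seven):

```python
# P(A before B) = p(A) / (p(A) + p(B)): sum the geometric series over
# "indecisive" rolls (neither A nor B) explicitly.
pA = 1 / 36            # snake eyes on a single roll
pB = 6 / 36            # seven on a single roll
p_neither = 1 - pA - pB

p_first = sum(p_neither**k * pA for k in range(2000))
p_twice = p_first**2   # must happen twice in a row
print(p_first, p_twice)  # ~ 1/7 = 0.142857..., ~ 1/49 = 0.020408...
```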
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closed form of $\int\limits_0^{2\pi} \prod\limits_{j=1}^n \cos(jx)dx$ and combinatorial link I have been trying to find a closed form for this integral: $$I_n = \int\limits_0^{2\pi} \prod_{j=1}^n \cos(jx)dx$$ The first values are: $I_1=I_2=0,I_3=\frac{\pi}{2}, I_4=\frac{\pi}{4}, I_5=I_6=0, I_7=\frac{\pi}{8}, I_8=\frac{7\pi}{64}$ I am not able to see here a clean pattern except that for $n=4k+1,4k+2$ the integral should be zero. If someone could give me a hint I would appreciate it. EDIT As suggested by Winther in the comments, the problem can be viewed from a combinatorial standpoint. Looking at the complex exponential representation one gets $2^n$ integrals of the form $\int_0^{2\pi}e^{iNx}dx$, which is only nonzero, if $N=0$. The integral evaluates to $\frac{M\pi}{2^{n-1}}$, where $M$ is the number of nonzero integrals. So one needs to find $M$, which is the number of binary numbers $b$ for which holds that $$\sum_{k=1}^n (2b_k-1)k = 0$$ where $b_k$ is the k-th digit of $b$. With this, it is easy to see if for some $b$ it holds, it will also hold for $\overline{b}$ (each digit is inverted).
As pointed out in the comments, the result is $$ I_n = a_n \frac{2\pi}{2^{n}} $$ where $a_n$ is the number of solutions of $\sum_{j=1}^n s_j \,j =0$ where $s_j \in \{1,-1\}$ (or the number of ways of marking a subset of $\{ 1,2, \cdots, n\}$ such that the sum of the marked subset equals the sum of the unmarked subset). This is given by OEIS A063865. Asymptotics (for $n=0,3 \pmod 4$): $$I_n \approx \sqrt{24 \pi} \, n^{-3/2} $$ Ref
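The combinatorial formula can be checked against the values listed in the question by brute force over sign patterns (a sketch):

```python
from itertools import product
from math import pi

def I(n):
    # count sign vectors (s_1, ..., s_n) in {+1, -1}^n with
    # sum_j s_j * j = 0, then I_n = count * 2*pi / 2^n
    count = sum(1 for s in product((1, -1), repeat=n)
                if sum(sj * j for j, sj in enumerate(s, start=1)) == 0)
    return count * 2 * pi / 2**n

print([round(I(n) / pi, 6) for n in range(1, 9)])
# I_3 = pi/2, I_4 = pi/4, I_7 = pi/8, I_8 = 7*pi/64, the rest zero
```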
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Integration of $\sqrt {\tan x}$ I have tried many ways to integrate $\sqrt {\tan x}$ including integration by parts but didn't get to any final result. I also assumed, $$ \tan x = t^2 $$ $$ \int \sqrt {\tan x} \,dx $$ $$⇒\int \frac{2t^2}{1+t^4}dt$$ but it's getting a bit complicated further, kindly help. Also, are there any simpler ways to integrate this. Answer, $$ \frac{1}{\sqrt 2} \tan^{-1}\left[\frac {\sqrt {\tan x}-\sqrt {\cot x}}{\sqrt{2}}\right] +\frac{1}{2\sqrt 2}\ln\left[\frac {\sqrt {\tan x}+\sqrt {\cot x}-\sqrt {2}}{\sqrt {\tan x}+\sqrt {\cot x}+\sqrt {2}}\right] +C $$
You can factor the denominator as the product of two quadratics (finding the [necessarily complex] roots can help there), then use partial fractions. It gets ugly, but Wolfram Alpha gives the same ugly answer as you get by doing this, so I assume there isn't a simpler way. BTW, are you sure about the numerator? I have vague memories of it being $\frac{1}{1+t^4}$, but it was quite a while ago that I did it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can a function be differentiable at its endpoints? If not, are these endpoints critical values? The reader is asked to identify the function's critical values. Point $x=c$ is a critical value because $f'(c)=0$. Also, points $x=b,d,e$ are critical values because $f'$ is undefined at those points. Since critical values include points where $f'$ is undefined, would the endpoints $x=a$ and $x=g$ be included? If $f'(a)$ and $f'(g)$ are undefined, then they are considered critical values by definition. According to this question and this question and this question, a function is still continuous at the endpoints of its domain, which would imply that (1.) $$\lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x)$$ (2.) $$\lim_{x \to g^-} f(x) = \lim_{x \to g^+} f(x).$$ Continuity is a pre-requisite for differentiability, but not sufficient for differentiability, so we have to be careful. Would it also be correct that (3.) $$\lim_{x \to a^-} \frac{f(x)-f(a)}{x-a} = \lim_{x \to a^+} \frac{f(x)-f(a)}{x-a}$$ (4.) $$\lim_{x \to g^-} \frac{f(x)-f(g)}{x-g} = \lim_{x \to g^+} \frac{f(x)-f(g)}{x-g}?$$ I don't see why it would be fair to say (1.) and (2.) are true but not (3.) and (4.). If $f$ is not differentiable at $x=a$ and $x=g$, then these points should be called critical values, since the definition of critical values are points where $f'$ is zero or undefined. This would be advantageous! Careful students looking for local extrema will remember to check endpoints if they are technically considered critical values. What is the official answer according to real analysis? Thanks for your thoughts!
None of the claimed equalities are true because they simply do not make any sense. Limits are not defined by taking limits from the left and right sides. Rather, limits are based on taking limits from any direction within the domain. If the domain includes a left and right side of a point, then the limit is defined from the left and right sides. If there is only one side, then the limit is defined by that side. This is pointed out in the links you've provided. In these cases, this is not the case, and we simply have $$f(a)=\lim_{x\to a}f(x)=\lim_{x\to a^+}f(x)$$ $$f(g)=\lim_{x\to g}f(x)=\lim_{x\to g^-}f(x)$$ $$f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}=\lim_{x\to a^+}\frac{f(x)-f(a)}{x-a}$$ $$f'(g)=\lim_{x\to g}\frac{f(x)-f(g)}{x-g}=\lim_{x\to g^-}\frac{f(x)-f(g)}{x-g}$$ whereas the limits from the opposite sides are nothing more than undefined. It is worth noting that in more general settings, such as the complex numbers or $\mathbb R^2$, there is no notion of left or right sided limits, or perhaps the left and right sides are not the only sides to consider.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3786923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier series of $f(x) = |x|^3$ and evaluating series I found the Fourier serie for the function $$f: [-\pi, \pi], \quad f(x) = |x|^3$$ Coefficents: $$ a_0 = \frac{\pi^3}{2} $$ $$ a_n = \frac{6 \pi}{n^2} \cos(n\pi) - \frac{12}{n^4\pi} \cos(n\pi) + \frac{6}{n^4} $$ $$ b_n = 0 $$ So, the Fourier serie is given by $$ f(x) = \frac{a_0}{2} + \sum_{n = 1}^{+\infty} a_n \cos(nx) $$ $$ f(x) = \frac{\pi^3}{4} + \sum_{n = 1}^{+\infty}\left( \frac{6 \pi}{n^2} (-1)^n - \frac{12}{n^4\pi} (-1)^n + \frac{6}{n^4} \right) \cos(nx) $$ Now, I should evaluate the following serie: $$ \sum_{n = 1}^{+\infty} \frac{1}{n^4}(\pi^2 n^2 (-1)^n - 2 (-1)^n + 2) $$ How can I find it with the help of this Fourier series?
I think the Fourier coefficients are, for $n\ge1$, $$a_n=\frac{6(n^2\pi^2-2)}{\pi n^4}(-1)^n+\frac{12}{\pi n^4}$$ So when we substitute $\;x=0\;$ , we get (Dirichlet's convergence theorem) $$0=\frac{\pi^3}4+6\sum_{n=1}^\infty\left(\frac{(n^2\pi^2-2)}{\pi n^4}(-1)^n+\frac{2}{\pi n^4}\right)\implies-\frac{\pi^3}{24}=\sum_{n=1}^\infty\left(\frac{(n^2\pi^2-2)}{\pi n^4}(-1)^n+\frac{2}{\pi n^4}\right)\implies$$ $$-\frac{\pi^4}{24}=\sum_{n=1}^\infty\left(\frac{(n^2\pi^2-2)}{ n^4}(-1)^n+\frac{2}{ n^4}\right)$$ and there you have your sum...(of course, I assume you know the sum of $\;\sum\limits_{n=1}^\infty\frac1{n^4}\;$ ...)
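A numerical check of the series asked for in the question (a sketch; the alternating $\pi^2/n^2$ part dominates, so the error after $N$ terms is roughly $\pi^2/(2N^2)$):

```python
from math import pi

# Partial sum of the series from the question:
#   sum_{n>=1} (pi^2 n^2 (-1)^n - 2 (-1)^n + 2) / n^4
N = 200_000
s = sum((pi**2 * n**2 * (-1)**n - 2 * (-1)**n + 2) / n**4
        for n in range(1, N + 1))
print(s, -pi**4 / 24)  # both ~ -4.0587
```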
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Poisson process into a series of Bernouilli trial problem The ants of a colony arrives at a location with two food sources according to a poisson process $N(t),t\geq 0$ at the rate of $\lambda$. Once there, each ant will independently choose to eat form one of the sources $A$ or $B$ with respective probabilities $p, (1-p)$ respectively. Let $\{T^{(a)}_i\}, i\in(0,\infty)$ be the interarrival sequence of ants that go to food source A. What is the distribution of $T^{(A)}_i$? Are these random variables independent? I am not sure how to do this... Do I just plug in the expected amount of arrivals at each hour into the binomial equation to get the answer?
Clearly the long-term arrival rate splits into $\lambda = p \lambda + (1-p)\lambda$. If the number of ants at the origin is (supposed to be) kept constant, then you can think of splitting the procession right at the origin, and the two streams are clearly independent and Poisson distributed. However, the actual process looks to be that the ants return to the origin after a (constant?) back-and-forth traveling time. If this is not negligible with respect to $1/\lambda$, then the two processions are no longer independent. In fact, if at a certain moment there is e.g. a massive group of ants leaving for A, that will cause a massive decrease in the remaining number and thus a decrease in the number of ants leaving for B.
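A seeded simulation sketch of the idealized model (travel times ignored, as discussed above): thinning a rate-$\lambda$ Poisson stream with probability $p$ gives interarrival times to source A that are exponential with mean $1/(p\lambda)$. The values of $\lambda$ and $p$ below are illustrative, not from the problem.

```python
import random

random.seed(0)
lam, p = 2.0, 0.3        # illustrative values (not from the problem)
n_events = 200_000

t = 0.0
a_times = []
for _ in range(n_events):
    t += random.expovariate(lam)   # Poisson process interarrivals
    if random.random() < p:        # ant independently picks source A
        a_times.append(t)

gaps = [b - a for a, b in zip(a_times, a_times[1:])]
mean_gap = sum(gaps) / len(gaps)
print(mean_gap, 1 / (p * lam))     # both ~ 1/0.6 = 1.667
```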
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to multiply out brackets when they contain vectors Little confused on the rules here, obviously if I treat it as a vector and its transpose I can compute this if I knew each vector entry but I am keen to know the general rule for any vector: $(\mathbf x - \mathbf y)(\mathbf x - \mathbf y)^T$ How does it relate to $(x-y)(x-y) = x^2 - 2xy + y^2 $ Thanks for your help
To see why removing brackets is a bit different, you can remove them step by step: $$(\mathbf x - \mathbf y)(\mathbf x - \mathbf y)^T = \mathbf x (\mathbf x - \mathbf y)^T - \mathbf y(\mathbf x - \mathbf y)^T = \mathbf x \mathbf x^T - \mathbf x\mathbf y^T - \mathbf y\mathbf x^T + \mathbf y \mathbf y^T$$ The result cannot be simplified any further, because unlike the scalar case, $\mathbf x\mathbf y^T \neq \mathbf y\mathbf x^T$ in general.
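A small pure-Python check of the expansion (note that the last term enters with a plus sign, since $-\mathbf y(\mathbf x-\mathbf y)^T = -\mathbf{yx}^T + \mathbf{yy}^T$):

```python
# Check (x - y)(x - y)^T = x x^T - x y^T - y x^T + y y^T entrywise,
# using integer vectors so the comparison is exact (pure Python).
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def sub(u, v):
    return [ui - vi for ui, vi in zip(u, v)]

x = [1, 2, 3]
y = [4, -1, 2]

xx, xy, yx, yy = outer(x, x), outer(x, y), outer(y, x), outer(y, y)
lhs = outer(sub(x, y), sub(x, y))
rhs = [[xx[i][j] - xy[i][j] - yx[i][j] + yy[i][j] for j in range(3)]
       for i in range(3)]
assert lhs == rhs
assert outer(x, y) != outer(y, x)  # xy^T != yx^T in general
print(lhs)
```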
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove $\frac{a^{n+1}+b^{n+1}+c^{n+1}}{a^n+b^n+c^n} \ge \sqrt[3]{abc}$? Give $a,b,c>0$. Prove that: $$\dfrac{a^{n+1}+b^{n+1}+c^{n+1}}{a^n+b^n+c^n} \ge \sqrt[3]{abc}.$$ My direction: (we have the equation if and only if $a=b=c$) $a^{n+1}+a^nb+a^nc \ge 3a^n\sqrt[3]{abc}$ $b^{n+1}+b^na+b^nc \ge 3b^n\sqrt[3]{abc}$ $c^{n+1}+c^na+c^nb \ge 3c^n\sqrt[3]{abc}$ But from these things, i can't prove the problem.
Let $A_p:=\left(\frac{1}{N}\sum_{i=1}^Na_i^p\right)^{1/p}$ be the $p$th mean of $(a_i)$. By the extended AM-GM inequality, $GM\le A_n\le A_{n+1}$. Hence $$GM\times A_n^n\le A_{n+1}\times A_{n+1}^n=A_{n+1}^{n+1}$$ or $$\sqrt[3]{abc}\times\frac{a^n+b^n+c^n}{3}\le\frac{a^{n+1}+b^{n+1}+c^{n+1}}{3}$$
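A quick numerical check of the inequality for a few triples and exponents (a sketch; the small slack absorbs floating-point rounding in the cube root):

```python
def lhs(a, b, c, n):
    return (a**(n + 1) + b**(n + 1) + c**(n + 1)) / (a**n + b**n + c**n)

def gm(a, b, c):
    return (a * b * c) ** (1 / 3)

for triple in [(1, 2, 3), (0.1, 5, 9), (2, 7, 0.5), (4, 4, 4)]:
    for n in range(0, 6):
        assert lhs(*triple, n) >= gm(*triple) - 1e-12

print(lhs(4, 4, 4, 3), gm(4, 4, 4))  # equality when a = b = c (~4.0)
```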
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Understanding the map about the classification of all abelian extensions with Galois groups with a fixed exponent (Kummer Theory) Let $F$ be a field and let $\zeta$ be a primitive $n$-th root of unity in $F$. Also, let $E/F$ be a finite Galois extension with Galois group $G$. Now I am trying to understand the following Theorem from Milne's Fields and Galois Theory (page 73): Question What exactly does this map in this theorem do? In the sections before the theorem, there were some more maps which seemed to play a role in understanding the map in the theorem (and I think this was the author's intention). The previous sections are these (on page 72): However, I have not figured out yet how the map in the theorem and the previously discussed maps/sequences are related. Could you please help me explaining that?
As you are aware, this is the main theorem of Kummer theory. I keep you notations and assumptions, but beware that you forgot the hypothesis that the characteristic of $F$ does not divide $n$. Besides, it will be more convenient to replace a subgroup $B$ containing ${F^\times}^n$ as a subgroup of finite index, by the quotient $\bar B=B.{F^\times}^n/{F^\times}^n$ considered as a finite subgroup of $F^\times/{F^\times}^n$. The reason is that, thanks to the hypotheses, $F(b^{1/n})/F$ depends only on the class $\bar b$ of $b$ mod ${F^\times}^n$. We'll denote this extension by $F({\bar b}^{1/n})/F$. Analogously, $F(B^{1/n})$ will be written $F({\bar B}^{1/n})$. Now the map that you ask about can be rewritten as $E\in (a) \to \bar B_E \in (F^\times\cap {E^\times}^n)/{E^\times}^n \in (b)$ (with an obvious rewriting of (b)). By definition, $E=F(\bar B^{1/n})$, which is easily shown to be a galois extension, with group $G_{E/F}$ finite abelian of exponent $n$. Usually, $ \bar B_E$ is called the radical of $E$, and Kummer's main theorem states thatit is canonically isomorphic to Hom ($G_{E/F}, \mu_n)$ (you already gave thje cohomological proof !). It follows that $ \bar B_E$ and $G_{E/F}$ have the same order, and the correspondence between (a) and (b) is bijective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Asymptotic Estimate of Vector Function I would like to compute the asymptotic limit of the of the following function $$f(x,\omega) = \frac{x - \omega\sqrt{1+|x|^2}}{x\cdot \omega - \sqrt{1+|x|^2}}$$ Where $x\in \mathbb{R}^3$ and $\omega \in \mathbb{S}^2 = \{y\in\mathbb{R}^3 \ | \ |y| = 1\}$ is a point on the unit sphere. More precisely, I need to estimate $||f(x,\cdot)||_{L^\infty(\mathbb{S^2)}} := \sup_{\omega\in\mathbb{S}^2}|f(x,\omega)|$ for large $|x|$. I have the rough estimate \begin{align} |f(x,\omega)| &\leq \bigg|\frac{\frac{x}{\sqrt{1+|x|^2}} - \omega}{\frac{x\cdot\omega}{\sqrt{1+|x|^2}} - 1}\bigg| \leq \frac{2}{1 - \frac{|x|}{\sqrt{1+|x|^2}}} \\ &= \frac{2\sqrt{1+|x|^2}}{\sqrt{1+|x|^2} - |x|} = \mathcal{O}(|x|^2) \end{align} as $|x|\rightarrow \infty$. But I am wondering if I can do better than this and obtain a sharper estimate (possibly $\mathcal{O}(|x|)$ or even $\mathcal{O}(1)$)? Edit: To elaborate I am looking for a better function $g(|x|)$ such that \begin{align} \sup_{(\hat{x},\omega)\in\mathbb{S}^2\times\mathbb{S}^2}|f(|x|\hat{x},\omega)| \leq C g(|x|) \end{align} I suspect something like linear in $|x|$ like $g(|x|) = a|x| + b$.
Let $x=|x|\hat{x}$ and $\hat{x}=(\hat{x}\cdot\omega)\omega+\alpha\omega^\perp$, then $$ f(x,\omega)=\frac{x-\omega\sqrt{1+|x|^2}}{x\cdot\omega-\sqrt{1+|x|^2}}=\omega+\frac{\alpha}{\hat{x}\cdot\omega-\sqrt{(1+|x|^2)/|x|^2}}\omega^\perp$$ Hence for $\xi=\sqrt{(1+|x|^2)/|x|^2}$, \begin{align*}|f(x,\omega)|^2&=1+\frac{\sin^2\theta}{(\cos\theta-\xi)^2}\\ &=1+\sin^2\theta(\xi^2-2\xi\cos\theta+\cos^2\theta)^{-1}\\ &=1+\sin^2\theta\left((1-\cos\theta)^2+\frac{1-\cos\theta}{|x|^2}+O(|x|^{-4})\right)^{-1}\\ &=1+\frac{\sin^2\theta}{(1-\cos\theta)^2}\left(1-\frac{1}{(1-\cos\theta)|x|^2}+O(|x|^{-4}) \right) \end{align*} Finally taking the square root, $$\fbox{$|f(x,\omega)|=\frac{1}{\sin(\theta/2)} - \frac{\sin^2\theta}{16\sin^5(\theta/2)}\frac{1}{|x|^2}+O(|x|^{-4})$}$$ Note: $\lim_{|x|\to\infty}|f(x,\omega)|=1/\sin(\theta/2)$, which has no sup near $\theta\to0$. Edit: At $\theta=0$, i.e., $\hat{x}=\omega$, $f(x,\omega)=1$ exactly, but for nearby values, $f$ is unbounded. Edit2: A plot of $|f(x,\omega)|$ and the approximation agree.
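The leading term $1/\sin(\theta/2)$ can be checked numerically for large $|x|$ (a pure-Python sketch, working in the plane spanned by $\omega$ and $\omega^\perp$):

```python
from math import sin, cos, sqrt

def mod_f(r, theta):
    # x = r * x_hat with angle theta between x_hat and omega = (1,0,0)
    x = (r * cos(theta), r * sin(theta), 0.0)
    w = (1.0, 0.0, 0.0)
    s = sqrt(1.0 + r * r)
    num = tuple(xi - wi * s for xi, wi in zip(x, w))
    den = sum(xi * wi for xi, wi in zip(x, w)) - s
    return sqrt(sum((ni / den) ** 2 for ni in num))

theta = 1.0
print(mod_f(1e6, theta), 1 / sin(theta / 2))  # both ~ 2.08583
```

The residual difference is of order $1/|x|^2$, matching the boxed expansion.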
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$\lim_{n\to\infty} f_{n}(x) = g(x) \implies \lim_{n\to\infty} f_{n}^{'}(x) = g^{'}(x) $ Let $f_{n} :\mathbb R \to\mathbb R$ be differentiable for each $n \in\mathbb N$ with $|f_{n}^{'}(x)| ≤ 1$ for all $n$ and $x$. Assume $\lim_{n\to\infty} f_{n}(x) = g(x) $ for all $x$. Is $\lim_{n\to\infty} f_{n}^{'}(x) = g^{'}(x) $? I believe it is true, and that one can show it by taking the appropriate limits with respect to some $h \to 0$. Something of the sort: $\lim_{n\to\infty} f_{n}(x) = g(x)$ $-\lim_{n\to\infty} f_{n}(x) = -g(x)$ $\lim_{n\to\infty} f_{n}(x+h) - \lim_{n\to\infty} f_{n}(x) = g(x+h) -g(x)$ $\lim_{n\to\infty} \frac{f_{n}(x+h) - f(x)}{h} = \frac{g(x+h) -g(x)}{h}$ $\lim_{h\to 0} \lim_{n\to\infty} \frac{f_{n}(x+h) - f(x)}{h} = \lim_{h\to 0}\frac{g(x+h) -g(x)}{h}$
There's no reason for $f_n'(x)$ to even converge. Consider, for instance, $$f_n(x) = \dfrac {\sin(nx)}n$$ Clearly $f_n \to 0$ pointwise on $\mathbb{R}$. However, the sequence of derivatives $f_n'(x) = \cos(nx)$ does not converge pointwise on $\mathbb{R}$: $$f_n'(\pi) = \cos(n\pi) = (-1)^{n}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3787959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate $\frac{1+3}{3}+\frac{1+3+5}{3^2}+\frac{1+3+5+7}{3^3}+\cdots$ It can be rewritten as $$S = \frac{2^2}{3}+\frac{3^2}{3^2}+\frac{4^2}{3^3}+\cdots$$ When $k$ approaches infinity, the term $\frac{(k+1)^2}{3^k}$ approaches zero. But, i wonder if it can be used to determine the value of $S$. Any idea? Note: By using a programming language, i found that the value of $S$ is $3.5$
Calculus is not required to evaluate the sum. Let $$f(z) = \sum_{k=1}^\infty (k+1)^2 z^k.$$ Then $$z f(z) = \sum_{k=1}^\infty (k+1)^2 z^{k+1} = -z + \sum_{k=1}^\infty k^2 z^k$$ hence $$f(z) - zf(z) = \sum_{k=1}^\infty (k+1)^2 z^k + z - \sum_{k=1}^\infty k^2 z^k = z + \sum_{k=1}^\infty (2k+1) z^k.$$ Now let $$g(z) = \sum_{k=1}^\infty (2k+1) z^k.$$ Then using the same technique, $$g(z) - z g(z) = \sum_{k=1}^\infty (2k+1)z^k - \sum_{k=2}^\infty (2k-1)z^k = z + 2\sum_{k=1}^\infty z^k = z + \frac{2z}{1-z}.$$ Therefore, $$g(z) = \frac{z(3-z)}{(1-z)^2},$$ and $$f(z) = \frac{z(4-3z+z^2)}{(1-z)^3}.$$ All that remains is to select $z = 1/3$ to obtain the value of the desired sum.
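A numerical check of the closed form against the original sum (a sketch):

```python
def f_closed(z):
    return z * (4 - 3 * z + z * z) / (1 - z) ** 3

# partial sum of S = (1+3)/3 + (1+3+5)/3^2 + ... = sum_{k>=1} (k+1)^2 / 3^k
S = sum((k + 1) ** 2 / 3**k for k in range(1, 200))
print(S, f_closed(1 / 3))  # both ~ 3.5
```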
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Evaluate $\lim _{n\to \infty }\int _{0}^{1}nx^ne^{x^2}dx$ Evaluate $\lim _{n\to \infty }\int _{0}^{1}nx^ne^{x^2}dx.$ I applied the mean value thorem of integral to $\int _{0}^{1}nx^ne^{x^2}dx.$ We get $c\in (0,1):$ $$\int _{0}^{1}nx^ne^{x^2}dx=(1-0)nc^ne^{c^2}.$$ Taking limit ($\lim_{n\to \infty}$)on the both side, We get, $$\lim_{n\to \infty}\int _{0}^{1}nx^ne^{x^2}dx=\lim_{n\to \infty} nc^ne^{c^2}=0.$$ My answer in the examination was wrong. I don't know the correct answer. Where is my mistake?
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \lim _{n \to \infty}\int_{0}^{1}nx^{n}\expo{x^{2}}\dd x & = \lim _{n \to \infty}\bracks{n\int_{0}^{1} \exp\pars{n\ln\pars{1 - x} + \pars{1 - x}^{2}}\dd x} \\[5mm] & = \lim _{n \to \infty}\bracks{n\int_{0}^{\infty} \expo{1 -\pars{n + 2}x}\dd x} \\[5mm] & = \expo{}\lim _{n \to \infty}{n \over n + 2} = \bbx{\large\expo{}} \\ & \end{align} See Laplace's Method.
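As a numerical sanity check (a sketch, not part of the argument above): expanding $e^{x^2}=\sum_k x^{2k}/k!$ term by term gives the exact evaluation $\int_0^1 n x^n e^{x^2}\,dx = \sum_{k\ge0} \frac{n}{k!\,(n+2k+1)}$, which is easy to compute for large $n$ and indeed approaches $e$:

```python
from math import e, factorial

def integral(n, kmax=60):
    # int_0^1 n x^n e^{x^2} dx = sum_k n / (k! (n + 2k + 1)),
    # from integrating the power series of e^{x^2} term by term
    return sum(n / (factorial(k) * (n + 2 * k + 1)) for k in range(kmax))

for n in (10, 1000, 10**6):
    print(n, integral(n))
# the values approach e = 2.718281828...
```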
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Definition of finite field with fixed characteristic In this article Discrete logarithms in quasi-polynomial time in finite fields of fixed characteristic the term finite fields of fixed characteristic is not defined and I couldn't find it on the literature, too. * *What is the definition of finite fields of fixed characteristic
The term quasi-polynomial time means quasi-polynomial in...which parameters? What fixed characteristic says is that the problem is quasi-polynomial in the field size as the field size varies, as long as we keep the characteristic fixed. So, for instance, if their algorithm takes $\le C2^pk^r$ steps over a field of characteristic $p$ and size $k$ for some constants $C,r$, then it is polynomial for fields of fixed characteristic, but exponential in the characteristic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Partial reciprocal sum How can I show that $$\sum_{k=1}^{n}\frac{1}{n+k}\leq\frac{3}{4}$$ for every integer $n \geq 1$? I tried induction, estimates with logarithms and trying to bound the sum focusing on the larger terms or things like $\frac{1}{n+1}+\frac{1}{n+2}\leq\frac{2}{n+1}$ but nothing seems to work. Do you have any suggestion? Thanks
First, note that $f(n)=\sum_{k=1}^n\frac{1}{k+n}$. When $n$ is a positive integer greater than 1, this is a special case of $g(n)=\int_{n+1}^{2n+1}\frac{1}{\lfloor x\rfloor}dx$ $$\frac{dg}{dn}=\frac{d}{dn}\int_{n+1}^{2n+1}\frac{1}{\lfloor x\rfloor}dx=\frac{2}{\lfloor 2n+1\rfloor}-\frac{1}{\lfloor n+1\rfloor}=\frac{2\lfloor n+1\rfloor-\lfloor2n+1\rfloor}{\lfloor1+2n\rfloor\lfloor1+n\rfloor}$$ For positive non-integer $n$, the numerator of this is $1$ if the fractional part of $n$ is less than $\frac{1}{2}$, and $0$ if the fractional part is greater than $\frac{1}{2}$, and the denominator is positive. As such, $g(n)$ is an increasing function for all positive $n$. If a function $g(n)$ is increasing over the domain $(1,\infty)$, then it is bounded above by $\lim_{n\rightarrow\infty}g(n)$. Now $$\lim_{n\rightarrow\infty}g(n)=\lim_{n\rightarrow\infty}\int_{n+1}^{2n+1}\frac{1}{\lfloor x\rfloor}dx=\lim_{n\rightarrow\infty}\left(\int_{1}^{2n+1}\frac{1}{\lfloor x\rfloor}dx-\int_{1}^{n+1}\frac{1}{\lfloor x\rfloor}dx\right)$$ Adding and subtracting $\ln(2n+1)=\int_1^{2n+1}\frac{1}{x}dx$ and $\ln(n+1)=\int_1^{n+1}\frac{1}{x}dx$, this becomes $$\lim_{n\rightarrow\infty}\int_{1}^{2n+1}\left(\frac{1}{\lfloor x\rfloor}-\frac{1}{x}\right)dx-\lim_{n\rightarrow\infty}\int_{1}^{n+1}\left(\frac{1}{\lfloor x\rfloor}-\frac{1}{x}\right)dx+\lim_{n\rightarrow\infty}\ln\left(\frac{2n+1}{n+1}\right)=\gamma-\gamma+\ln(2)=\ln(2)$$ Here, $\gamma$ is the Euler–Mascheroni constant, defined as $\lim_{m\rightarrow\infty}\sum_{i=1}^m\frac{1}{i}-\ln(m)$. We found that $g(n)$, and therefore our sum, has an upper bound of $\ln(2)$. As $\ln(2)\approx0.69<\frac{3}{4}$, we have proven our sum is less than $\frac{3}{4}$ for all $n>1$.
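A direct numerical check that the partial sums increase toward $\ln 2 < \frac34$ (a sketch):

```python
from math import log

def f(n):
    # sum_{k=1}^n 1/(n+k), the sum from the question
    return sum(1 / (n + k) for k in range(1, n + 1))

vals = [f(n) for n in range(1, 2001)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # increasing in n
assert all(v < log(2) for v in vals)               # bounded by ln 2
print(f(1), f(2000), log(2))  # 0.5, ~0.693, 0.6931...
```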
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Find the winding number and number of zeros of certain function about $|z|=2$. I have the function $f(z)=z^3+\frac{1}{(z-1)^2}$ and I am asked to find the winding number about $C:=\{|z|=2\}$ and then the number of zeros inside $C$. I know that the winding number is: $$ n(f,C)=\frac{1}{2\pi i}\int_C \frac{f'(z)}{f(z)}dz, $$ but this gives the integral: $$ \frac{1}{2\pi i}\int_C \frac{3z^2(z-1)^3 -2}{(z-1)(z^3(z-1)^2+1)}dz $$ which is not very workable, even with the residue theorem (unless I am mistaken). Instead, if I write $C$ as $2e^{2 \pi i \theta}$ for $\theta \in [0,1]$, I can re-examine $f$ as: $$ 8e^{6\pi i \theta} + \frac{1}{(2e^{2\pi i \theta}-1)^2}. $$ The fraction on the right is at its largest (in terms of modulus) when $\theta = 0$ and its smallest when $\theta = 1/2$. This leads me to believe that, since the left term is much larger, the curve will wind three times around. How can I make this more rigorous? Furthermore, by the argument principle, I get that $n(f,C)=\#\text{zeros of }f-\#\text{poles of }f$. Since $f$ has (counting with multiplicity) $2$ poles inside $C$, this would give me that $f$ has $5$ zeros inside $C$, but this seems odd. Is this correct?
The zeroes of $f$ are the zeroes of the polynomial $P(z)=z^3(z-1)^2+1$, and for $|z| \ge 2$ one has $|P(z)| \ge 7$ by trivial majorizations, so all $5$ zeroes of $P$ are inside $C$; hence $f$ has indeed $5$ zeroes there. To compute the integral one uses the above observation that all the zeroes of the denominator are inside $C$, so by Cauchy one can move $C$ to infinity and the integral stays the same - in other words: $n(f,C)=\frac{1}{2\pi i}\int_C \frac{f'(z)}{f(z)}dz=\frac{1}{2\pi i}\int_{|z|=R} \frac{f'(z)}{f(z)}dz, R\ge 2$ But then only the ratio of the leading terms matters, so one gets, by using $z=Re^{it}, dz=izdt$, dividing both numerator and denominator by $z^6$, estimating the other terms trivially and taking $R \to \infty$, that: $n(f,C)=\frac{1}{2\pi i}\int_0^{2\pi}3i(1+O(1/R))dt=3+O(1/R) \to 3$
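The winding number can also be checked by evaluating the integral numerically on $|z|=2$ (a sketch; the trapezoidal rule converges spectrally for smooth periodic integrands):

```python
import cmath

def f(z):
    return z**3 + 1 / (z - 1) ** 2

def fp(z):
    return 3 * z**2 - 2 / (z - 1) ** 3

# trapezoidal rule for (1/(2 pi i)) * contour integral of f'/f on |z| = 2
N = 4096
total = 0j
for k in range(N):
    z = 2 * cmath.exp(2j * cmath.pi * k / N)
    dz = 1j * z * (2 * cmath.pi / N)
    total += fp(z) / f(z) * dz
winding = total / (2j * cmath.pi)
print(winding)  # ~ 3, i.e. 5 zeros minus a double pole at z = 1
```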
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does the existence of a minimal cover for a subset of reals need some form of choice? In Kanamori's book, "The Higher Infinite" p. 376, he defines a minimal cover of some $A \subseteq \omega^\omega$, to be any $B \subseteq \omega^\omega$, such that $A\subseteq B$ and that $B$ is Lebesgue measurable and if $Z \subseteq B-A$ is Lebesgue measurable, then $m_L(Z) = 0$. And he claims that picking some $B$ with $A\subseteq B$ and $m_L(B)$ minimal, does the job. [Here $m_L$ denotes the Lebesgue measure.] Now here is my problem. The whole premise of this chapter is that we don't want to use choice to do these things. But any way I try to construct such a $B$, I inevitably use some form of choice. The best I can do is $\mathsf{AC}_\omega(\omega^\omega)$. Is there some choice-free way to do this? A sketch of a proof with $\mathsf{AC}_\omega(\omega^\omega)$: Let $x = \inf\{m_L(B): A\subseteq B \text{ and } B \text{ is Lebesgue measurable}\}$. By $\mathsf{AC}_\omega(\omega^\omega)$, let $\langle B_n: n<\omega\rangle$ be a sequence such that $A\subseteq B_n$ and $m_L(B_n) \rightarrow x$ as $n \rightarrow \infty$. Now $B = \bigcap_n B_n$ is the desired minimal cover. $\square$
This is in fact a theorem of ZF. The Caratheodory construction of Lebesgue measure works in ZF, though without choice it need not be $\sigma$-additive. Every Borel codable set of reals is measurable but not necessarily every Borel set. See the paper cited below. Let $U_i$ enumerate the basic open sets of $\omega^{\omega}.$ By (finite) subadditivity of $\lambda^*$ and adjusting the proof of the Lebesgue density theorem, we have that for any $X$ with $\lambda^*(X)>0$ and $\epsilon >0,$ there is $U_i$ such that $\lambda^*(X \cap U_i) > (1-\epsilon) \lambda(U_i).$ Define $S_n$ recursively by having $i \in S_n$ if $\lambda^*(A \cap U_i \setminus \bigcup_{j<i, j \in S_n} U_j)>\frac{n-1}{n} \lambda(U_i).$ Let $V_n = \bigcup_{i \in S_n} U_i$ and $V = \bigcap_{n<\omega} V_n.$ By our density lemma, each $A \setminus V_n$ is null. Since $\lambda^*$ is additive among subsets of disjoint measurable sets, we have $$\lambda(V_n) - \frac{1}{n} \le \frac{n-1}{n}\lambda(V_n) \le \sum_{i \in S_n} \lambda^*\left (A \cap U_i \setminus \bigcup_{j<i, j \in S_n} U_j \right ) \le \lambda^*(A) \le \lambda^*(A \cup V_n) =\lambda(V_n).$$ We compute $$\lambda^*(A \setminus V) \le \inf_{n<\omega} \left ( \lambda^*\left (\bigcap_{i<n} V_n \setminus V \right ) + \lambda^*\left (A \setminus \bigcap_{i<n} V_n \right ) \right ) =0.$$ Therefore, $B := A \cup V$ is measurable, with $$\lambda^*(A) \le \lambda(B) = \lambda(V) \le \inf_{n<\omega} \lambda(V_n) \le \inf_{n<\omega} \left (\lambda^*(A) + \frac{1}{n} \right) = \lambda^*(A).$$ So $B$ is as desired. Foreman, Matthew; Wehrung, Friedrich, The Hahn-Banach theorem implies the existence of a non-Lebesgue measurable set, Fundam. Math. 138, No. 1, 13-19 (1991). ZBL0792.28005.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Problem with evaluating $\lim_{n \rightarrow \infty} e^{-n-t\sqrt{n}}\cdot \left(e^{e^{\frac{t}{\sqrt{n}}}\cdot n}-1\right)$ As above, I have a problem with evaluating $$\lim_{n \rightarrow \infty} e^{-n-t\sqrt{n}}\cdot \left(e^{e^{\frac{t}{\sqrt{n}}}\cdot n}-1\right).$$ I checked the result in the limit calculator and it should be equal to $e^{\frac{t^2}{2}}$. I have no idea how to obtain it. My attempt: $ \lim_{n \rightarrow \infty} e^{-n-t\sqrt{n}}\cdot(e^{e^{\frac{t}{\sqrt{n}}}\cdot n}-1)=\lim_{n \rightarrow \infty} e^{-n-t\sqrt{n}+e^{\frac{t}{\sqrt{n}}}\cdot n}-\lim_{n \rightarrow \infty}e^{-n-t\sqrt{n}}=\lim_{n \rightarrow \infty} e^{-n-t\sqrt{n}+e^{\frac{t}{\sqrt{n}}}\cdot n}-0=?.$ Unfortunately I don't know how to proceed later, because I'm obtaining the limit of the type $0\cdot \infty$ which is of course problematic. Thanks for help in advance.
Hint: use the Taylor expansion of $\displaystyle e^{\frac{t}{\sqrt{n}}}$: $$\exp\left(\frac{t}{\sqrt{n}}\right) = 1 + \frac{t}{\sqrt{n}} + \frac{t^2}{2n} + O\left(\frac{t^3}{n^{3/2}}\right)$$ so the exponent in your last expression becomes $$-n-t\sqrt{n}+e^{\frac{t}{\sqrt{n}}}\cdot n = \frac{t^2}{2}+O\left(\frac{t^3}{\sqrt{n}}\right)\longrightarrow \frac{t^2}{2}.$$
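As a numerical sanity check (my own sketch, not part of the original hint), rewriting the product as $e^{A+B}-e^{A}$ with $A=-n-t\sqrt n$ and $B=e^{t/\sqrt n}\,n$ avoids the overflowing intermediate $e^{B}$:

```python
import math

def f(n, t):
    # e^{-n - t sqrt(n)} * (e^{e^{t/sqrt(n)} n} - 1) = e^{A+B} - e^{A},
    # computed without forming the astronomically large e^{B}
    A = -n - t * math.sqrt(n)
    B = math.exp(t / math.sqrt(n)) * n
    return math.exp(A + B) - math.exp(A)

t = 0.5
target = math.exp(t**2 / 2)          # the expected limit e^{t^2/2}
assert abs(f(10**6, t) - target) < 1e-3
```

The correction term $O(t^3/\sqrt n)$ in the exponent predicts convergence from above, which the values confirm as $n$ grows.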
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the smallest real number $m$ such that $n < m^n$ for all $n \geq 1$? I have taken a short look at this problem and found it to be much harder than expected to solve. Per the title, I am looking to find the smallest number $m\in\mathbb{R}$ such that the inequality $$n < m^n$$ is true for all $n\in\mathbb{R}$, $n \geq 1$. Using Python I was able to determine that $m\in [1.4446678610097, 1.4446678610098]$, suggesting the solution $m = e^{1/e}\approx 1.44466786100976\dots$. I assume this to in fact be the solution, and there there is a proof that I cannot come up with. Further research points to the Lambert W function, but the contents are beyond me at this point. A solution/proof or explanation of this problem is appreciated.
As was pointed out in the comments, there is no smallest such $m$. To see why $m = e^{1/e}$ doesn't work, notice that with $n=e$ we get $m^n = (e^{1/e})^e = e = n$. However, if you're willing to ask instead about the smallest positive $m$ for which $n \leq m^n$ for all $n \geq 1$, then $m = e^{1/e}$ is indeed correct. First, observe that with $m$ positive the inequality $n \leq m^n$ is equivalent to $\ln(n) \leq n\ln(m)$, or equivalently $\frac{\ln(n)}{n} \leq \ln(m)$. You can now use calculus to prove that the absolute maximum of the function $f(x) = \frac{\ln(x)}{x}$ on $(0, \infty)$ occurs at $x=e$, and at that value $f(e) = \frac{1}{e}$. Thus we should choose $m$ so that $\ln(m) = \frac{1}{e}$, which gives us $m = e^{1/e}$, as you expected.
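A numerical illustration (my own sketch) of both facts: $m=e^{1/e}$ satisfies $n\le m^n$ on a grid, with equality at $n=e$, while any slightly smaller base already fails there:

```python
import math

m = math.e ** (1 / math.e)        # ~ 1.44466786100977, matching the Python search
# n <= m**n on a grid over (1, 21], with equality (up to rounding) at n = e
for i in range(1, 20001):
    x = 1 + i * 0.001
    assert x <= m ** x + 1e-9
assert abs(m ** math.e - math.e) < 1e-12
# a base below e^{1/e} fails near n = e, so no smallest strict bound exists
m2 = m - 1e-6
assert math.e > m2 ** math.e
```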
{ "language": "en", "url": "https://math.stackexchange.com/questions/3788931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove there exists $v$ such that $f^3(v) = f(f(f(v))) \neq 0$ Let $V$ be a vector space of dimension $n$ and let $f: V \to V$ be a linear map. If $\dim Im f \geq 2n/3$, then prove there exists $v$ such that $f^3(v) = f(f(f(v))) \neq 0$ The only thing I can deduce is that $\dim \ker f \leq n/3$.
By rank–nullity applied to the restriction $f\mid_{\operatorname{Im}f}$, $$\dim(\operatorname{Im}f) =\dim\ker(f\mid_{\operatorname{Im} f})+\dim(\operatorname{Im} f^2).$$ Since $\ker(f\mid_{\operatorname{Im}f})\subseteq\ker f$, $\dim(\ker f)<n/3$ and $\dim(\operatorname{Im}f)\geq {{2n}\over 3}$, we deduce that $\dim(\operatorname{Im}f^2)>n/3$. Likewise $$\dim(\operatorname{Im}f^2)=\dim\ker(f\mid_{\operatorname{Im}f^2})+\dim(\operatorname{Im}f^3),$$ and since $\dim(\ker f)<n/3$ and $\dim(\operatorname{Im}f^2)>n/3$, we deduce that $\dim(\operatorname{Im}f^3)>0$, i.e. some $v$ has $f^3(v)\neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3789007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Quaternions are the smallest.... I've been reading https://github.com/GleasSpty/MATH-104-----Introduction-to-Analysis, and the author formulates the integers as the smallest (by inclusion under isomorphism) nontrivial totally ordered cring that contains the natural numbers, the rationals as the smallest totally ordered field that contains the integers, and the reals as the smallest dedekind-complete (or cauchy-complete) totally ordered field that contains the rationals. Similarly, there's the algebraic numbers which are the smallest (edit: they're not totally ordered) algebraically complete field that contains the rationals, and the complex numbers which are both algebraically complete and dedekind-complete. Is there a similar statement for the Quaternions/Octonions?
By Frobenius' theorem, the quaternions $\Bbb{H}$ can be characterized as the smallest noncommutative division ring that contains $\Bbb{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3789134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How many different ways can you go about completing a course/class at university? How many different combinations of results in assignments are possible in a University course? I am interested in calculating the number of unique ways I can finish this course that I am doing. To make things easier, there are no partial marks. Here are some conditions: * *There are four assessable tasks * *Assignment 1: Weighted 10% *Assignment 2: Weighted 15% *Assignment 3: Weighted 15% *Assignment 4: Weighted 60% *Possible marks for each assessment: * *Assignment 1: /10 *Assignment 2: /10 *Assignment 3: /10 *Assignment 4: /100 I am a little rusty on my combinations discrete mathematics. Can this be viewed as a pigeonhole principle problem or would this be permutations/combinatorics? From an algorithmic point of view, how would you go about solving this? TL/DR: How many combinations of individual graded assessments can you get in a course? What is the range of end of semester marks possible? Thank you for your time.
Eureka By writing a small block of code I was able to find out 134431 unique combinations of grades!! This surpised me, mostly because I don't understand entirely why this is the case. I am still curious as to how you would solve this mathematically instead of programmatically... don't hesitate to correct me or prove otherwise either !! thank you everyone :)
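Mathematically the figure is just the multiplication principle: with no partial marks there are $11\cdot 11\cdot 11\cdot 101$ possible mark tuples. A small sketch (my own code, not the asker's) reproducing it:

```python
from itertools import product

# every mark is an integer: a1, a2, a3 in 0..10 and a4 in 0..100
combos = list(product(range(11), range(11), range(11), range(101)))
assert len(combos) == 11 * 11 * 11 * 101 == 134431

# range of final percentages: 0% up to 10 + 15 + 15 + 60 = 100%
def final(a1, a2, a3, a4):
    return a1 + 1.5 * a2 + 1.5 * a3 + 0.6 * a4

assert final(0, 0, 0, 0) == 0.0
assert abs(final(10, 10, 10, 100) - 100.0) < 1e-9
```

Note that distinct tuples can produce the same final percentage, so the number of distinct end-of-semester marks is smaller than 134431.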
{ "language": "en", "url": "https://math.stackexchange.com/questions/3789216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How many nonnegative integers $x_1, x_2, x_3, x_4$ satisfy $2x_1 + x_2 + x_3 + x_4 = n$? Can anyone give some hints about the following question? How many nonnegative integers $x_1, x_2, x_3, x_4$ satisfy $2x_1 + x_2 + x_3 + x_4 = n$? Normally this kind of question uses stars and bars, but there is the $2x_1$ term, which I don't know how to handle. Help please! P.S.: I think maybe we can use a recurrence relation.
One idea is to deal with $x_1$ separately in order to use stars and bars on $x_2,x_3,x_4$. For example you fix $x_1=0$ and then you have $x_2+x_3+x_4=n$ or you fix $x_1=1$ and then get $x_2+x_3+x_4=n-2$ and so on and so forth. This then generates the summations $$x_2+x_3+x_4=n-2i$$ which have ${n-2i+2 \choose 2}$ solutions. Thus as $x_1$ can be a number between $0$ and $\lfloor n/2\rfloor$ you get the summation $$\sum_{i=0}^{\lfloor n/2\rfloor}{n-2i+2 \choose 2}.$$
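A brute-force cross-check of this sum against direct enumeration (my own sketch):

```python
from math import comb

def by_formula(n):
    # sum over the fixed value i of x1, stars and bars on x2 + x3 + x4 = n - 2i
    return sum(comb(n - 2*i + 2, 2) for i in range(n // 2 + 1))

def by_enumeration(n):
    return sum(1
               for x1 in range(n + 1)
               for x2 in range(n + 1)
               for x3 in range(n + 1)
               for x4 in range(n + 1)
               if 2*x1 + x2 + x3 + x4 == n)

assert all(by_formula(n) == by_enumeration(n) for n in range(15))
```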
{ "language": "en", "url": "https://math.stackexchange.com/questions/3789347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Calculating the acceleration vector of an elliptical curve. Satisfying Kepler's first law but not second. I am trying to solve problem 16 in section 1.6 of David Bressoud's book Second Year Calculus. He gives a hint at the end of the book which says: If $r$ and $\theta$ are related by $\frac{r^2\cos^2(\theta)}{a^2}+\frac{r^2\sin^2(\theta)}{b^2}=1$ and if $r^2\frac{d\theta}{dt}=k$ then $\vec{a}=\frac{-rk}{a^2b^2}\vec{u_r}$. Where $\vec{u_r}$ and $\vec{u_\theta}$ are the local coordinates. He gives a formula for $\vec{a}$ earlier in the text as $\vec{a}=(\frac{d^2r}{dt^2}-r(\frac{d\theta}{dt})^2)\vec{u_r}+\frac{1}{r}\frac{d}{dt}(r^2\frac{d\theta}{dt})\vec{u_\theta}$. Specifically I need help calculating $\frac{d^2r}{dt^2}$ without the $\theta$ term. Also $k$ is a constant. My current calculation has led to $\frac{d^2r}{dt^2}=\frac{k}{r}(\frac{1}{b^2}-\frac{1}{a^2})(\frac{k\cos(2\theta)}{r}+\frac{\sin(2\theta)}{2})$.
we have $\frac{r^2 \cos^2(\theta)}{a^2}+\frac{r^2 \sin^2 (\theta)}{b^2} = 1$ Differentiating this equation with respect to $t$ (and using the ellipse equation itself to replace $\frac{\cos^2(\theta)}{a^2}+\frac{\sin^2(\theta)}{b^2}$ by $\frac{1}{r^2}$) gives $$r^2\sin(2\theta)\frac{d\theta}{dt}\left(\frac{1}{b^2}-\frac{1}{a^2}\right)+\frac{2}{r}\frac{dr}{dt}=0$$ Now replacing $\frac{d\theta}{dt} = \frac{k}{r^2}$ gives $$k\sin(2\theta)\left(\frac{1}{b^2}-\frac{1}{a^2}\right)+\frac{2}{r}\frac{dr}{dt}=0$$ We need the time derivative of $\sin (2\theta)$ to progress. This equals $$\frac{2k}{r^2}\cos (2\theta)$$ Now, $\cos (2\theta) = 2\cos^2(\theta) -1$ and from the ellipse equation, using $\cos^2(\theta) + \sin^2 (\theta) = 1$, we have $$ \cos^2(\theta) = \left(\frac{1}{r^2} - \frac{1}{b^2}\right)\left(\frac{1}{a^2}-\frac{1}{b^2}\right)^{-1}$$ So $$\cos (2\theta) = 2 \left(\frac{1}{r^2} - \frac{1}{b^2}\right)\left(\frac{1}{a^2}-\frac{1}{b^2}\right)^{-1} - 1$$ Now, differentiating the relation above in time, we have $$\frac{d}{dt}\left(\frac{1}{r}\frac{dr}{dt}\right) = \frac{k^2}{r^2}\cos (2\theta)\left(\frac{1}{a^2}-\frac{1}{b^2}\right)$$ that is $$\frac{1}{r}\frac{d^2r}{dt^2}-\frac{1}{r^2}\left(\frac{dr}{dt}\right)^2 = \frac{k^2}{r^2}\cos (2\theta)\left(\frac{1}{a^2}-\frac{1}{b^2}\right)$$ We already have an expression for $\left(\frac{dr}{dt}\right)^2$ in terms of $\sin^2 (2 \theta) = 1-\cos^2 (2\theta)$, namely $\frac{dr}{dt}=\frac{kr}{2}\sin(2\theta)\left(\frac{1}{a^2}-\frac{1}{b^2}\right)$. So all that's left to do is to substitute in the expression for $\cos^2 (2\theta)$.
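A numerical spot-check (my own sketch) of the differentiated ellipse relation, with $r(\theta)$ taken from the ellipse equation and $\frac{dr}{dt}$ formed via $\frac{d\theta}{dt}=k/r^2$; note the factor $k$ multiplying $\sin(2\theta)$ that survives the substitution:

```python
import math

def residual(a, b, k, theta, h=1e-6):
    # r(theta) implied by r^2 cos^2/a^2 + r^2 sin^2/b^2 = 1
    def r(th):
        return 1.0 / math.sqrt(math.cos(th)**2 / a**2 + math.sin(th)**2 / b**2)
    # dr/dt = (dr/dtheta) * (dtheta/dt), with dtheta/dt = k / r^2
    dr_dtheta = (r(theta + h) - r(theta - h)) / (2 * h)   # central difference
    r_dot = dr_dtheta * k / r(theta)**2
    # k sin(2 theta) (1/b^2 - 1/a^2) + (2/r) dr/dt should vanish
    return k * math.sin(2*theta) * (1/b**2 - 1/a**2) + 2 * r_dot / r(theta)

for th in (0.3, 0.7, 1.9, 2.5):
    assert abs(residual(2.0, 1.0, 3.0, th)) < 1e-6
```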
{ "language": "en", "url": "https://math.stackexchange.com/questions/3789553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $\{x_1,x_2,\cdots,x_n\}$ is a basis, is $\{x_1+x_2,x_2+x_3,\cdots,x_n+x_1\}$ a basis too? Let's say we have a vector space $V$ with a basis $\{x_1,x_2,\cdots,x_n\}$ then is $\{x_1+x_2,x_2+x_3,\cdots,x_{n-1}+x_n,x_n+x_1\}$ a basis too? My Answer: For n=2 clearly this is false because of the following counter example: \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} If we apply the above to get the new set \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} which is not linearly indepedent to form a basis. But what about $n\geq3 ?$ I believe it should work by intuition that $v_1 = x_1+x_2$ can only be formed using $x_1$ and $x_2$ and so on hence any of the vectors cannot be formed using the others by any linear combination.
Suppose scalars $c_j$ give the linear combination $$c_1(x_1+x_2)+c_2(x_2+x_3)+\ldots+c_{n-1}(x_{n-1}+x_n)+c_n(x_n+x_1)=0.$$ Case 1: $n$ is even. Let $n=2m,\ m\in\Bbb N$. Regrouping by basis vector, $$(c_1+c_{2m})x_1+(c_1+c_2)x_2+(c_2+c_3)x_3+\ldots+(c_{2m-2}+c_{2m-1})x_{2m-1}+(c_{2m-1}+c_{2m})x_{2m}=0.$$ Hence $$c_i+c_{i+1}=0,\quad i=1,2,\ldots,2m-1,\qquad\text{and}\qquad c_{2m}+c_1=0.\tag 1$$ These relations force all odd-indexed coefficients to equal $c_1$ and all even-indexed ones to equal $c_2=-c_1$; in particular the last relation $c_{2m}+c_1=c_2+c_1=0$ is then automatic. Take $c_1=2$, say; then $c_2=-2$, $c_3=2$, and so on. Hence we don't necessarily have $c_i=0$ for all $i=1,2,\ldots,n$. Thus $x_1+x_2,x_2+x_3,\ldots,x_n+x_1$ are not linearly independent and thus can't be a basis. Case 2: $n$ is odd. Let $n=2k+1,\ k\in\Bbb N$. Proceeding as in Case 1 gives the analogous system $$c_i+c_{i+1}=0,\quad i=1,2,\ldots,2k,\qquad\text{and}\qquad c_{2k+1}+c_1=0.\tag{2}$$ Again all odd-indexed coefficients equal $c_1$ and all even-indexed ones equal $c_2=-c_1$. But now $(2)$ gives $c_1+c_{2k+1}=c_1+c_1=0$, so $c_1=0=c_3=\ldots=c_{2k+1}$, and hence, again by $(2)$, $c_2=c_4=\ldots=0$. Thus, in this case, $x_1+x_2,x_2+x_3,\ldots,x_n+x_1$ are linearly independent and thus form a basis.
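A coordinate check (my own sketch): write each $x_i+x_{i+1}$ (indices mod $n$) as a row vector in the basis $\{x_1,\dots,x_n\}$ and test independence with an exact determinant. The set is a basis exactly for odd $n$:

```python
from fractions import Fraction

def det(M):
    # exact determinant via Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

def cyclic_sum_matrix(n):
    # row i is the coordinate vector of x_{i+1} + x_{i+2} (cyclically)
    return [[1 if j in (i, (i + 1) % n) else 0 for j in range(n)]
            for i in range(n)]

for n in range(2, 10):
    assert (det(cyclic_sum_matrix(n)) != 0) == (n % 2 == 1)
```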
{ "language": "en", "url": "https://math.stackexchange.com/questions/3789976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
$m^*(f(E))\leq\int_E|f'(x)|dx$ for absolutely continuous function $f$ Suppose $f$ is an absolutely continuous function on $[0,1]$, and suppose $E\subset (0,1)$ is any measurable set. I'd like to show that $m^*(f(E))\leq\int_E|f'(x)|dx$. I know that since $f$ is AC on $[0,1]$, we can write $f(x)=\int_0^xf'(t)dt+f(0)$. However, I'm not sure if/how this helps. What can I try?
See Measure Theory, Vol I by Bogachev, Proposition 5.5.4, p. 348 for the following: If $f$ is differentiable at each point of a measurable set $E$ then $m^{*}(f(E)) \leq \int_E |f'(x)|dx$. Your result follows follows from this since absolute continuity of $f$ implies differentiabilty of $f$ at almost all points and also implies that $f$ maps null sets to null sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
I've Hit a Major Snag While Writing a Paper on Deriving the Cubic Formula! So I've writing a paper for school on deriving the cubic formula. As of now I have written the cubic formula as a system of two equations in terms of original coefficients $a$, $b$, $c$, and $d$. The system is below: $$z=\sqrt[3]{\frac{9abc-2b^3-27a^2d}{54a^3}\pm\sqrt{\frac{4ac^3+27a^2d^2-18abcd-b^2c^2+b^3d}{108a^4}}}$$ $$x=z-\frac{\left(\frac{-b^2}{3a^2}+\frac{c}{a}\right)}{3z}-\frac{b}{3a}$$ This system is almost entirely based off the work shown in this article (http://math.sfsu.edu/smith/Documents/Cubic&Quartic.pdf). The article says that "Actually, the equation for $z$ gives three complex cube roots for each of the $+$ and $–$ signs, hence six different formulas for $z$. But when you substitute these in the equation for $y$, at most three different $y$ values will result, and the last equation will thus give at most three distince [sic] roots $x$." The mention of a $y$-value can safely be equated to my $x$-value since I combined the original article's two equations ($y=z-\frac{p}{3z}$ and $x=y-\frac{b}{3a}$ into a single equation). Thus, according to the very article that this equation was formulated from, when using the formula I should get 6 $z$-values, but upon plugging these into my second equation to solve for $x$, I should see only 3 distinct $x$-values. When I test this, however, with the cubic $-2x^3+3x^2-x+5=0$, which has solutions 1.92, -0.21-1.12$i$, and -0.21+1.12$i$, I get the following: $$z_1=1.399 \therefore x_1=1.901$$ $$z_2=-0.67+1.16i \therefore x_2=-0.2+1.11i$$ $$z_3=-0.67-1.16i \therefore x_3=-0.2-1.11i$$ $$z_4=0.461 \therefore x_4=1.142$$ $$z_5=-0.23+0.4i \therefore x_5=0.18+0.24i$$ $$z_6=-0.23-0.4i \therefore x_6=0.18-0.24i$$ Note that $z_1$, $z_2$, and $z_3$ all came from using a $+$ sign for the $\pm$ input in the equation for $z$ (the complex solutions came from multiplying the real solution by $e^\frac{2i\pi}{3}$ and $e^\frac{4i\pi}{3}$). 
Coincidentally (or not) only these 3 $z$-values gave correct (though somewhat off due to lazy rounding) $x$-values. The $z$-values derived by using a $-$ sign for the $\pm$ input ($z_4$, $z_5$, and $z_6$), however, did not yield correct $x$-values. More crucially, the prediction the article made that the 6 $z$-values would collapse into only 3 $x$-values when plugged into the second equation did not come true. This has left me with really nowhere to go. I cannot possibly justify my paper by simply stating that "you have to only use the $+$ side of the $\pm$ sign when solving for $z$ because it just works that way." I need some justification for this decision. Or possibly I have made some mistakes in my calculations and the article's assertion was, indeed, correct. That's what I'm hoping to learn from you guys! If you have any insight into this problem, any questions for me, or any advice, please reach out!
You start with something of the form: $z = \sqrt [3] {A \pm \sqrt {A^2+B^3}}\\ x = z - \frac {B}{z} -\frac {b}{3a}$ Let's choose $z = \sqrt [3] {A + \sqrt {A^2+B^3}}$ and let $\bar z = \sqrt [3] {A - \sqrt {A^2+B^3}} $ represent the conjugate (the option with the negative sign). Then $z-\frac {B}{z} = z-\frac {B}{\sqrt [3] {A + \sqrt {A^2+B^3}}}\cdot\frac {\sqrt [3] {A - \sqrt {A^2+B^3}}}{\sqrt [3] {A - \sqrt {A^2+B^3}}} = z-\frac {B\sqrt [3] {A - \sqrt {A^2+B^3}}}{\sqrt [3] {\left(A + \sqrt {A^2+B^3}\right)\left(A - \sqrt {A^2+B^3}\right)}} = z-\frac {B\,\bar z}{\sqrt [3] {A^2 - (A^2+B^3)}} = z-\frac {B\,\bar z}{-B} = z + \bar z$ And if you transpose $z$ and $\bar z$ you get something identical: either sign choice produces the same three roots. $x = (e^{\frac {2\pi}3i})^k\sqrt[3]{\frac{9abc-2b^3-27a^2d}{54a^3}+\sqrt{\frac{4ac^3+27a^2d^2-18abcd-b^2c^2+b^3d}{108a^4}}} + (e^{\frac {-2\pi}3i})^k\sqrt[3]{\frac{9abc-2b^3-27a^2d}{54a^3}-\sqrt{\frac{4ac^3+27a^2d^2-18abcd-b^2c^2+b^3d}{108a^4}}} - \frac {b}{3a}$
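A numerical check (my own sketch, not from the article) on the question's example cubic $-2x^3+3x^2-x+5=0$. One caution: the standard expanded radicand is $\frac{4ac^3+27a^2d^2-18abcd-b^2c^2+4b^3d}{108a^4}$, with coefficient $4$ on $b^3d$; the code below works with the depressed-cubic quantities $p$ and $q$ directly, which avoids any expansion slip. Both sign branches then yield the same three roots:

```python
import cmath

a, b, c, d = -2, 3, -1, 5                      # the cubic -2x^3 + 3x^2 - x + 5
p = c/a - b**2 / (3*a**2)                      # depressed cubic t^3 + p t + q
q = 2*b**3 / (27*a**3) - b*c / (3*a**2) + d/a
A = -q / 2
S = cmath.sqrt(q**2 / 4 + p**3 / 27)
w = cmath.exp(2j * cmath.pi / 3)               # primitive cube root of unity
z_plus = (A + S) ** (1/3)                      # one cube root, "+" branch
roots = [z_plus * w**k - p / (3 * z_plus * w**k) - b / (3*a) for k in range(3)]
# all three values from the "+" branch satisfy the original cubic ...
for x in roots:
    assert abs(a*x**3 + b*x**2 + c*x + d) < 1e-9
# ... and the "-" branch reproduces the same three roots
z_minus = (A - S) ** (1/3)
alt = [z_minus * w**k - p / (3 * z_minus * w**k) - b / (3*a) for k in range(3)]
for x in alt:
    assert min(abs(x - r) for r in roots) < 1e-9
```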
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Show that the matrix $I-uu^T$ has rank $n-1$ where $u$ is a unit vector in $R^n$ I've tried a few examples and I know that $I-uu^T$ is symmetric but I'm stuck here. Any help will be appreciated!
Note that $\rm P_u := u u^\top$ is the (rank-$1$) projection matrix that projects onto the line spanned by vector $\rm u$. Hence, ${\rm I}_n - {\rm P_u}$ is the projection matrix that projects onto the $(n-1)$-dimensional orthogonal complement of the line. Since the rank of a projection matrix is equal to its trace, $$\mbox{rank} \left( {\rm I}_n - {\rm P_u} \right) = \mbox{tr} \left( {\rm I}_n - {\rm P_u} \right) = n - 1$$
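A quick numerical illustration (my own sketch, pure standard library): build $I-uu^T$ for a random unit vector and confirm both the rank and the trace argument:

```python
import random

def rank(M, eps=1e-9):
    # numerical rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, r = len(M), 0
    for c in range(n):
        if r == n:
            break
        piv = max(range(r, n), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < eps:
            continue                      # column carries no new pivot
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, n):
            f = M[i][c] / M[r][c]
            for j in range(c, n):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

random.seed(0)
n = 6
v = [random.gauss(0, 1) for _ in range(n)]
norm = sum(x * x for x in v) ** 0.5
u = [x / norm for x in v]
P = [[(1.0 if i == j else 0.0) - u[i] * u[j] for j in range(n)] for i in range(n)]
assert rank(P) == n - 1
assert abs(sum(P[i][i] for i in range(n)) - (n - 1)) < 1e-9   # trace = n - 1
```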
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Find probability of certain number of cards being dealt from remaining cards I did find similar questions to this but I didn't understand the complex answers, so here goes: I need to find the probability of being dealt a specified number of cards from the remaining cards in the deck, for example: I have been dealt 2 cards, Ace and King of clubs; there are now 50 cards remaining in the deck, 11 of which are clubs. I know I can find the probability of the next card dealt being a club by doing: 11 / 50 = 0.22 (22%) But I need 3 more clubs to make my flush (5 cards of the same suit) and there are 5 cards to be dealt. How do I find the probability of being dealt 3 more clubs after 5 more cards have been dealt? Would it be something like: (11 / 50) + (11 / 49) + (11 / 48) + (11 / 47) + (11 / 46)
If I understand well then $5$ cards are drawn from a deck of $50$ cards of which exactly $11$ are clubs. Then the probability that exactly $3$ clubs are drawn equals:$$\frac{\binom{11}3\binom{39}2}{\binom{50}5}$$ If you are looking for the probability that at least $3$ clubs are drawn then see the answer of Jfischer.
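In code (my own sketch), both the "exactly 3" figure above and the "at least 3" figure needed for the flush:

```python
from math import comb

deck, clubs, draw = 50, 11, 5
# exactly 3 clubs among the 5 cards drawn
p_exactly_3 = comb(clubs, 3) * comb(deck - clubs, 2) / comb(deck, draw)
# at least 3 clubs (what completes the flush)
p_at_least_3 = sum(comb(clubs, k) * comb(deck - clubs, draw - k)
                   for k in range(3, draw + 1)) / comb(deck, draw)
```

These come out to roughly 5.8% and 6.4%, far from the sum of ratios guessed in the question.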
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Arrangement of $12$ people in a row such that neither of $2$ particular persons sit on either of $2$ ends of the row If $12$ persons are arranged in a row such that neither of two particular persons can sit on either end of the row, in how many ways can they be seated? My attempt: Total ways $=$ Sitting $12$ persons in a row $-$ Sitting $2$ particular persons at the $2$ ends of the row $$=12!-2!\cdot 10!$$ But it seems that this answer is wrong. Can anyone please explain to me the right answer. Thanks.
The $10$ normal persons can be seated in $10!$ ways. There are $9$ slots in between them for the special persons. When the first special person is seated there are $10$ slots for the second special person. Therefore there are $$10!\cdot 9\cdot 10=326\,592\,000$$ admissible seatings for the $12$ persons.
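The gap count can be cross-checked by brute force on smaller rows (a sketch of mine; the equivalent count "choose the two end seats from the ordinary people, ordered, then seat everyone else" gives the same number):

```python
from itertools import permutations
from math import factorial

def brute(total):
    # people 0 and 1 are the two who may not sit at either end
    return sum(1 for p in permutations(range(total))
               if p[0] > 1 and p[-1] > 1)

def formula(total):
    normal = total - 2
    # ordered choice of the two end seats among the normal people,
    # then the remaining total - 2 people fill the middle seats
    return normal * (normal - 1) * factorial(total - 2)

assert all(brute(t) == formula(t) for t in range(4, 9))
assert formula(12) == 326_592_000      # matches 10! * 9 * 10
```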
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Why $\sqrt{\left(\frac{-\sqrt3}2\right)^2+{(\frac12)}^2}$ is equal to 1? $\sqrt{\left(\frac{-\sqrt3}2\right)^2+{(\frac12)}^2}$ By maths calculator it results 1. I calculate and results $\sqrt{-\frac{1}{2}}$. $\sqrt{\left(\frac{-\sqrt3}2\right)^2+{(\frac12)}^2}$ $\sqrt{\frac{-{(3)}^{{\displaystyle\frac12}\times2}}{2^2}+\frac{1^2}{2^2}}=\sqrt{\frac{-3}4+\frac14}=\sqrt{\frac{-3+1}4}=\sqrt{\frac{-2}4}=\sqrt{-\frac12}$ Enlighten me what went wrong?
Hint: $(-\sqrt{3})^2=(-1)^2\,(3)^{\frac{1}{2}\cdot 2}=3$, not $-3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Are there any identities for the determinant of almost upper triangular matrices of the following form? I've encountered a problem in which I need to compute the determinant of an almost upper triangular matrix of the following form: $$ A = \begin{pmatrix} 1 & a_{1,2} & a_{1,3} & a_{1,4} & a_{1,5} & \dots \\ 1 & a_{2,2} & a_{2,3} & a_{2,4} & a_{2,5} & \dots \\ 1 & 0 & a_{3,3} & a_{3,4} & a_{3,5} & \dots \\ 1 & 0 & 0 & a_{4,4} & a_{4,5} & \dots \\ 1 & 0 & 0 & 0 & a_{5,5} & \\ \vdots & & & & & \ddots \\ \\ 1 & 0 & 0 & 0 & \dots & 0 & a_{N,N} \end{pmatrix} $$ All matrix entries below the diagonal are zero, except those in the first column, which are equal to one. The matrix is infinite, so $N \to \infty$. I wonder whether there are identities that describe the form of the determinant of this matrix. References to relevant articles are appreciated.
Note that we can write this matrix in the form $A = B + uv^T$, where $$ B = \begin{pmatrix} 1 & a_{1,2} & a_{1,3} & a_{1,4} & a_{1,5} & \dots \\ 0 & a_{2,2} & a_{2,3} & a_{2,4} & a_{2,5} & \dots \\ 0 & 0 & a_{3,3} & a_{3,4} & a_{3,5} & \dots \\ 0 & 0 & 0 & a_{4,4} & a_{4,5} & \dots \\ 0 & 0 & 0 & 0 & a_{5,5} & \\ \vdots & & & & & \ddots \\ \\ 0 & 0 & 0 & 0 & \dots & 0 & a_{N,N} \end{pmatrix}, \quad u = (0,1,\dots,1)^T, \quad v = (1,0,\dots,0)^T. $$ With the matrix determinant lemma, we find that $$ \det(A) = \det(B + uv^T) = (1 + v^TB^{-1}u) \det(B) \\ = (1 + v^TB^{-1}u) \cdot a_{22} a_{33} \cdots a_{NN}. $$ From there, it suffices to find $v^TB^{-1}u$, i.e. the first entry of $B^{-1}u$. I don't think that there is a nice explicit form for $v^TB^{-1}u$, but the answer can be computed very efficiently because the matrix is upper triangular.
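A sketch (mine, exact rational arithmetic) confirming the lemma-based formula against direct elimination on a random instance of the pattern:

```python
from fractions import Fraction
import random

def det(M):
    # exact determinant by Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

def det_via_lemma(a):
    # B = A with the sub-diagonal ones removed (upper triangular); u, v as above
    n = len(a)
    B = [[Fraction(x) for x in row] for row in a]
    for i in range(1, n):
        B[i][0] = Fraction(0)
    u = [Fraction(0)] + [Fraction(1)] * (n - 1)
    y = [Fraction(0)] * n                 # y = B^{-1} u by back substitution
    for i in range(n - 1, -1, -1):
        y[i] = (u[i] - sum(B[i][j] * y[j] for j in range(i + 1, n))) / B[i][i]
    detB = Fraction(1)
    for i in range(n):
        detB *= B[i][i]
    return (1 + y[0]) * detB              # (1 + v^T B^{-1} u) det(B)

random.seed(1)
n = 6
A = [[1 if j == 0 else (random.randint(1, 9) if j >= max(i, 1) else 0)
      for j in range(n)] for i in range(n)]
assert det(A) == det_via_lemma(A)
```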
{ "language": "en", "url": "https://math.stackexchange.com/questions/3790845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\beta$ a basis of vector space $V$? Let $V=\{p\in\mathbb{R}[X]:\deg(p)\leq n\}$, knowing that $\{1,X,\dots,X^n\}$ is a basis of $V$, determine whether $\beta=\{1,X,X^2+1,X^3+X,\dots,X^n+X^{n-2}\}$ is a basis of $V$. Consider: $\quad c_0+c_1X+c_2(X^2+1)+\dots+c_n(X^n+X^{n-2})=0$ $\implies (c_0+c_2)+(c_1+c_3)X+(c_2+c_4)X^2+\dots+(c_n+c_{n-2})X^{n-2}+c_{n-1}X^{n-1}+c_nX^n=0$ Because of the definition of the zero polynomial, it must follow, that all coefficients $\quad c_0,c_1,\dots,c_n=0$, meaning $\beta$ is linearly independent. It is given that $\{1,X,\dots,X^n\}$ is a basis of $V$, so $\dim(V)=\dim(\{1,X,\dots,X^n\})=n+1$. Because $|\beta|=|\{1,X,\dots,X^n\}|,\;\dim(V)=\dim(\beta)$. Therefore $\beta$ is a basis of $V$. Is this proof correct? Thank you for the help
More simply, you may consider the determinant of $\beta$ in the standard basis: $$\det\beta=\begin{vmatrix} 1 & 0 & 1 & 0 & \dots\dots & 0 \\ 0 & 1 & 0 & 1 & \dots\dots & 0 \\ 0 & 0 & 1 & 0 & \dots\dots & 0 \\ 0 & 0 & 0 & 1 & \dots\dots & 0 \\ \vdots & & & & \ddots& \vdots \\ 0 & 0 & 0 & 0 & \dots\dots & 1 \end{vmatrix}$$ It is an upper triangular determinant, so its value is the product of the diagonal elements, all equal to $1$, which proves $\beta$ is a basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding $|K^\times/\ker(s)|$ and isomorphism $K^\times/S\cong\mathbb{Z}/2\mathbb{Z}$ for finite field K Let $K$ be a finite field with $q$ elements and $K^\times := K\setminus\{0\}$ be the multiplicative group. Assume that the characteristic of $K$ is not $2$, and let $s:K^\times\to K^\times$ given by $x\mapsto x^2$ be a group homomorphism. First of all, I want to find the number of elements of $S := \operatorname{im}(s)$. The kernel of $s$ must contain only two elements, namely $1$ and $-1$, thus $S \cong K^\times/\ker(s)$ and so $$|S|=|K^\times/\ker(s)|=\frac{q-1}{2}.$$ My only problem here is that I found the fact that $|\ker(s)|=2$ by trial and error $$(s(x)=1\iff x=-1 \lor x=1)$$ however I am lacking formal proof for this. I would appreciate any help in this regard. My second problem lies in proving that there is an isomorphism $K^\times/S\cong\mathbb{Z}/2\mathbb{Z}$. Because $$|S|=\frac{q-1}{2} \quad \textrm{and} \quad |K^\times|=|K|-1=q-1$$ we have that $\;|K^\times/S|=2=|\mathbb{Z}/2\mathbb{Z}|$. Therefore they have the same order. To finish this, I believe I would need to show that $K^\times/S$ is a cyclic group just like $\mathbb{Z}/2\mathbb{Z}$ but here as well I am stuck and require help. Thank you in advance.
The elements of the kernel are exactly the roots of the polynomial $x^2-1=(x-1)(x+1)\in K[x]$. Clearly its roots are $1$ and $-1$, and since the characteristic of $K$ is not $2$ they are two different elements. Thus $|Ker(s)|=2$. As for the second question, all groups of order $2$ are isomorphic to each other. Each group of order $2$ must have the form $G=\{e,g\}$ with the multiplication $e^2=e, eg=ge=g$ and $g^2=e$. So it is always the same group up to how you call the elements and the operation.
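For the special case of prime fields $K=\Bbb F_p$ (a sketch of mine, easy to test exhaustively), the kernel and the size counts can be checked directly:

```python
for p in (3, 5, 7, 11, 13, 17, 19):       # small odd primes, K = Z/pZ
    units = range(1, p)
    kernel = [x for x in units if x * x % p == 1]
    image = {x * x % p for x in units}
    assert kernel == [1, p - 1]            # ker(s) = {1, -1}
    assert len(image) == (p - 1) // 2      # |S| = (q - 1)/2
    assert (p - 1) // len(image) == 2      # |K^x / S| = 2
```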
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many ways are there to place $15$ pieces of size $1 \times 2$ into a $3 \times 10$ rectangle? How many ways are there to place $15$ pieces of size $1 \times 2$ into a $3 \times 10$ rectangle? (rotating and flipping are considered different ways) I think this question might be solved by recursion. I tried to split each column to $(2,1)$, and split each row into groups of even numbers. But the groups of column and row sometimes can't be satisfied at the same time. Any help or hint is appreciated. Thank you very much.
Recursion: we're tiling left to right. Say we have already tiled some part of the rectangle, including the leftmost untiled square; then there are 7 possibilities for how the leftmost untiled squares (gray) can look (the seven diagrams, numbered 1–7, are omitted here). Let's denote by $f_k(n)$ the number of ways we can tile the $k$th case with $n$ full ($=$ untiled) $3\times 1$ blocks after the leftmost one (partially tiled as in cases 1–6, or untiled as in case 7), so we're interested in $f_7(9)$. We denote $\mathbf{f}(n)=(f_1(n),f_2(n),f_3(n),f_4(n),f_5(n),f_6(n),f_7(n),f_7(n-1))^T$; then $\mathbf{f}(n+1)=A\mathbf{f}(n)$ where $$A=\begin{pmatrix} 0&0&0&0&0&1&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&1&0&0&1&0\\ 0&0&1&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 1&0&0&0&0&0&1&0\\ 0&0&1&0&0&1&0&1\\ 0&0&0&0&0&0&1&0\\ \end{pmatrix}$$ and $\mathbf{f}(1)=(1,0,0,1,0,0,3,0)^T$. The rest (factorizing $A$ as $SDS^{-1}$ and computing $(0,0,0,0,0,0,1,0)\,SD^8S^{-1}\mathbf{f}(1)$) we leave to WolframAlpha, and the answer is $571$. By the way, looking up sequence A001835, it turns out that the problem has a shorter solution: $a(n) = 4\cdot a(n-1) - a(n-2)$, with $a(0) = 1,\, a(1) = 1.$ Can you find it?
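The $571$ can be verified independently with a short column-profile DP (my own sketch, not the matrix-power computation above), which also confirms the closing recurrence in tiling terms:

```python
from functools import lru_cache

def tilings(rows, cols):
    # `cur` marks cells of the current column already covered by
    # horizontal dominoes protruding from the previous column
    @lru_cache(maxsize=None)
    def go(col, cur):
        if col == cols:
            return 1 if cur == 0 else 0
        def fill(r, cur, nxt):
            if r == rows:
                return go(col + 1, nxt)
            if cur & (1 << r):                        # cell already covered
                return fill(r + 1, cur, nxt)
            total = fill(r + 1, cur, nxt | (1 << r))  # horizontal domino
            if r + 1 < rows and not cur & (1 << (r + 1)):
                total += fill(r + 2, cur, nxt)        # vertical domino
            return total
        return fill(0, cur, 0)
    return go(0, 0)

assert tilings(3, 10) == 571
# T(3 x 2n) = 4 T(3 x 2(n-1)) - T(3 x 2(n-2)), matching A001835's recurrence
for n in range(2, 7):
    assert tilings(3, 2*n) == 4 * tilings(3, 2*(n-1)) - tilings(3, 2*(n-2))
```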
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why $8^{\frac{1}{3}}$ is $1$, $\frac{2\pi}{3}$, and $\frac{4\pi}{3}$ The question is: Use DeMoivre’s theorem to find $8^{\frac{1}{3}}$. Express your answer in complex form. Select one: a. 2 b. 2, 2 cis (2$\pi$/3), 2 cis (4$\pi$/3) c. 2, 2 cis ($\pi$/3) d. 2 cis ($\pi$/3), 2 cis ($\pi$/3) e. None of these I think that $8^{\frac{1}{3}}$ is $(8+i0)^{\frac{1}{3}}$ And, $r = 8$ And, $8\cos \theta = 8$ and $\theta = 0$. So, $8^{\frac{1}{3}}\operatorname{cis} 0^\circ = 2\times (1+0)=2$ I just got only $2$. Where and how others $\frac{2\pi}{3}$, and $\frac{4\pi}{3}$ come from?
$8^{\frac{1}{3}}=2\cdot(1)^{\frac{1}{3}}=2,\ 2\omega,\ 2{\omega}^2$, where $\omega=\operatorname{cis}\frac{2\pi}{3}$ is a primitive cube root of unity. In the notation of the choices these are $2$, $2\operatorname{cis}(2\pi/3)$, $2\operatorname{cis}(4\pi/3)$, i.e. option (b).
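A quick cmath check (mine) that these are exactly the three cube roots, matching choice (b):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)            # cis(2*pi/3)
roots = [2, 2 * w, 2 * w**2]
for z in roots:
    assert abs(z**3 - 8) < 1e-9             # each one cubes to 8
assert abs(cmath.phase(2 * w) - 2 * cmath.pi / 3) < 1e-9
assert abs(cmath.phase(2 * w**2) + 2 * cmath.pi / 3) < 1e-9   # 4*pi/3 mod 2*pi
```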
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Show $ \prod_{k=1}^{n} 4^k = 2^{n*(n+1)}$, where did I go wrong in my induction step? Can someone help me out with the induction step? Show $ \prod_{k=1}^{n} 4^k = 2^{n*(n+1)}$ Base case n=1: $$4^1 = 2^{1*(1+1)} = 2^2$$ Induction step (to show: $2^{(n+1)*(n+2)} = 2^{n^2+3n+2}$ ) : $$ \prod_{k=1}^{n+1} 4^k = 4^{n+1} * 2^{n*(n+1)} = \frac{1}{2} * 4^{n+1} * 4^{n*(n+1)} = \frac{1}{2} 4^{n^2+n+n+1}= \frac{1}{2} 4^{n^2+2n+1} = 2^{n^2+2n+1}$$ but now we have $$ 2^{n^2+3n+2} \neq 2^{n^2+2n+1} $$ Where did I go wrong?
The sum of the first $n$ natural numbers is $\frac{n(n+1)}{2}$ and so $\prod_{k=1}^n4^k=4^{\sum_{k=1}^nk}=4^{\frac{n(n+1)}{2}}=2^{n(n+1)}$. (The slip in your induction step is the conversion $2^{n(n+1)}=\frac12\cdot 4^{n(n+1)}$: in fact $2^{n(n+1)}=4^{\frac{n(n+1)}{2}}$.)
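A direct check of the closed form and of the correct base conversion for small $n$ (sketch of mine):

```python
for n in range(1, 12):
    prod = 1
    for k in range(1, n + 1):
        prod *= 4 ** k
    assert prod == 2 ** (n * (n + 1))        # the claimed closed form
    assert prod == 4 ** (n * (n + 1) // 2)   # since 2^{n(n+1)} = 4^{n(n+1)/2}
```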
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating the limit of the quotient of two infinite sums How can I evaluate this limit? $$\lim_{n\to\infty}\underbrace{\frac{\sum_{k=1}^n \frac 1k}{\sum_{k=1}^{n+1} \frac{1}{2k-1} }}_{=:a_n}$$ By WolframAlpha, the limit has to be 2 but how can I show this? I see it is monotonous increasing so when i could show $\sup_{n \in \mathbb N} a_n = 2$, it would be done. But I'm a bit stuck...
Comparing term by term, we have $$ \begin{align} \sum_{k=1}^n\frac1k &\le\sum_{k=1}^n\frac1{k-\frac12}\\ &=\sum_{k=1}^{n+1}\frac1{k-\frac12}-\frac1{n+\frac12} \end{align} $$ Similarly, $$ \begin{align} \sum_{k=1}^n\frac1k &\ge\sum_{k=1}^n\frac1{k+\frac12}\\ &=\sum_{k=2}^{n+1}\frac1{k-\frac12}\\ &=\sum_{k=1}^{n+1}\frac1{k-\frac12}-2 \end{align} $$ Thus, $$ 2\sum_{k=1}^{n+1}\frac1{2k-1}-2\le\sum_{k=1}^n\frac1k\le2\sum_{k=1}^{n+1}\frac1{2k-1}-\frac1{n+\frac12} $$ and therefore, $$ 2-\frac2{\sum_{k=1}^{n+1}\frac1{2k-1}}\le\frac{\sum_{k=1}^n\frac1k}{\sum_{k=1}^{n+1}\frac1{2k-1}}\le2-\frac1{\left(n+\frac12\right)\sum_{k=1}^{n+1}\frac1{2k-1}} $$ Now apply the Squeeze Theorem.
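Numerically (my own sketch), the squeeze bounds hold and the ratio creeps toward $2$ only logarithmically:

```python
def a(n):
    num = sum(1.0 / k for k in range(1, n + 1))
    den = sum(1.0 / (2*k - 1) for k in range(1, n + 2))
    return num / den

for n in (10, 100, 1000, 10000):
    den = sum(1.0 / (2*k - 1) for k in range(1, n + 2))
    # the two-sided bound derived above
    assert 2 - 2/den <= a(n) <= 2 - 1/((n + 0.5) * den)
assert a(100) < a(10000) < 2      # increasing, but still visibly below 2
```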
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Why is this sequence not uniformly convergent? In this problem it is explained that $f_n(x)$ is pointwise convergent, but not uniformly convergent. The explanation of why it is not uniformly convergent is also given. However, I cannot understand it: when I use the theorem below I get that the limit of $f_n - f$ is $0$. Could someone give me a more detailed answer as to why the sequence is not uniformly convergent?
Since $\displaystyle(\forall n\in\Bbb N):\left|f_n\left(\frac1{2n}\right)\right|=\frac n4$, you have $\displaystyle\sup_{x\in[0,1]}\left|f_n(x)\right|\geqslant\frac n4$. In other words, $\displaystyle\|f-f_n\|_\infty\geqslant\frac n4$ and, in particular, it is not true that $\displaystyle\lim_{n\to\infty}\|f-f_n\|_\infty=0$. So, the convergence is not uniform.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3791893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Local-global test of algebraicity Let $\alpha \in \mathbb{C}$. Suppose for all primes $p$ and all isomorphisms $j : \mathbb{C} \rightarrow \mathbb{C}_p$, $j(\alpha) \in \bar{\mathbb{Q}}_p$. Is $\alpha \in \bar{\mathbb{Q}}$?
Sure: if $\alpha$ is not algebraic, take $\beta\in \Bbb{C}_p\setminus \overline{\Bbb{Q}}_p$ and $\sigma\in \operatorname{Aut}(\Bbb{C})$ with $\sigma(\alpha)=j^{-1}(\beta)$; then the isomorphism $j\circ\sigma$ sends $\alpha$ to $\beta\notin\overline{\Bbb{Q}}_p$. (Those things require the axiom of choice.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3792004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove $\lim\limits_{n \to \infty }\sqrt[n]{a}=1$, if $a>0$ I tried solving by using $\log$ and got $\log(a)/n = \log (1)$ which after applying limit (of $n \to \infty$) gives $0= \log(1)$. Is this right?
Write $a^{\frac{1}{n}} = e^{\ln(a^{1/n})}= e^{\frac{\ln a}{n}}$. Now just apply the limit: since $\frac{\ln a}{n}\to 0$ as $n\to\infty$ and $e^x$ is continuous, $\lim\limits_{n\to\infty} a^{1/n}=e^0=1$.
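Numerically (mine), $a^{1/n}$ hugs $1$ for large $n$ regardless of the size of $a>0$:

```python
import math

n = 10**6
for a in (0.001, 0.5, 2.0, 7.0, 1_000_000.0):
    assert abs(a ** (1.0 / n) - 1.0) < 1e-4
    # agrees with the rewrite a^{1/n} = e^{ln(a)/n}
    assert abs(a ** (1.0 / n) - math.exp(math.log(a) / n)) < 1e-12
```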
{ "language": "en", "url": "https://math.stackexchange.com/questions/3792121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }