Testing polynomial divisibility by evaluation at specific points This question is just something I got to wondering about. Assume $$p(x), q(x) \in \Bbb Z[x], \deg (q) = m \lt \deg(p) =n.$$ Assume also $\forall a_k \in \{a_k~|~0 \leq k \leq n-m \} \subseteq \Bbb Z~ (\text{with } a_k \text{ distinct}), q(a_k)|p(a_k) \text{ in } \Bbb Z.$ Does it follow that $\exists k(x) \in \Bbb Q[x] \text{ such that } p(x)=k(x)q(x)$? If so, and if $p$ and $q$ are both monic, does it follow that $k(x) \in \Bbb Z[x]$? In other words, can you test polynomial divisibility (in $\Bbb Q[x]$) by testing divisibility for $n-m+1$ witnesses? Clearly that's enough witnesses to whittle yourself down to a single candidate for the quotient polynomial. But it's not obvious to me that this candidate must actually be the quotient. This seems relatively basic but I don't think I've ever seen the question discussed.
No, this doesn't work. Consider $p(x) = x^4 + 1$ and $q(x) = x^2 + 1$. We have $p(a) = q(a)$ for $a=0,\pm 1$ but $p$ and $q$ have no roots in common over $\mathbb{C}$. In this example, it fails most obviously because the conditions on $k(x)$ determine it to have degree less than $2$. But that's not the only thing that goes wrong. As another example, take $p(x) = 3x^4+1$ and $q(x)=x^2+1$. Now, $q(\pm 1) = 2 \mid 4 = p(\pm 1)$ and $q(0) = 1 = p(0)$. The conditions forced upon $k(x)$ are that $k(\pm 1) = 2$ and $k(0)=1$, which determines $k(x) = x^2+1$; but then $k(x)q(x) = (x^2+1)^2 = x^4+2x^2+1 \neq 3x^4+1$, so $q(x)$ does not divide $3x^4+1$.
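This counterexample is easy to machine-check. A quick sketch in plain Python (the low-to-high coefficient-list convention is an implementation choice, not from the original post):

```python
def poly_eval(c, x):
    # coefficients stored low-to-high: c[i] is the coefficient of x**i
    return sum(ci * x**i for i, ci in enumerate(c))

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p = [1, 0, 0, 0, 3]   # 3x^4 + 1
q = [1, 0, 1]         # x^2 + 1

# divisibility holds in Z at the n-m+1 = 3 integer witnesses
assert all(poly_eval(p, a) % poly_eval(q, a) == 0 for a in (-1, 0, 1))

# but the interpolated candidate quotient k(x) = x^2 + 1 fails:
k = [1, 0, 1]
print(poly_mul(k, q))  # [1, 0, 2, 0, 1] -> x^4 + 2x^2 + 1, not 3x^4 + 1
```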
{ "language": "en", "url": "https://math.stackexchange.com/questions/3148405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Probability that lightbulb stops working in odd year I have a lightbulb that has an exponential lifetime distribution with mean $\mu$ months. So if I construct a pdf $f(x)$ and cdf $F(x)$ with parameter $\lambda$, and since $E[X] = \frac{1}{\lambda}$, \begin{align} f(x) &= \frac{1}{\mu}e^{-\frac{1}{\mu}x} \\ F(x) &= 1-e^{-\frac{1}{\mu}x} \end{align} Year $Y$ can be odd if it's the $1$st year (months $0$ to $12$), the $3$rd year (months $24$ to $36$) and so on. I'm having trouble constructing the $P(Y= \mathrm{odd})$ model here, as I don't know how to put together the infinite odd years described above.
There's only one parameter in this exponential model – $\lambda=\frac1\mu$. Thus $$F_Y(y)=1-e^{-\lambda y}$$ $$P(12k<Y<12(k+1))=F_Y(12(k+1))-F_Y(12k)=1-e^{-12\lambda(k+1)}-1+e^{-12\lambda k}$$ $$=-e^{-12\lambda(k+1)}+e^{-12\lambda k}=e^{-12\lambda k}(1-e^{-12\lambda})$$ Then the probability the bulb fails in an odd year is an infinite sum with $k=2n=0,2,4\dots$: $$\sum_{n=0}^\infty e^{-24\lambda n}(1-e^{-12\lambda})$$ $$=(1-e^{-12\lambda})\sum_{n=0}^\infty(e^{-24\lambda})^n=\frac{1-e^{-12\lambda}}{1-e^{-24\lambda}}$$ $$=\frac{1-e^{-12/\mu}}{1-e^{-24/\mu}}$$ (We can derive this faster by using the memorylessness of the exponential distribution: in each 24-month cycle, the probability that the bulb fails in the first half of that cycle, given that it fails during that cycle, is constant and equal to $\frac{F_Y(12)}{F_Y(24)}$.)
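The closed form can be sanity-checked against a simulation. A sketch in Python, with an assumed mean of $\mu = 10$ months (the question leaves $\mu$ symbolic):

```python
import math
import random

mu = 10.0                      # assumed mean lifetime in months
lam = 1.0 / mu
closed = (1 - math.exp(-12 * lam)) / (1 - math.exp(-24 * lam))

random.seed(0)
n = 200_000
# year number is floor(T/12) + 1, so an odd year means floor(T/12) is even
odd = sum(1 for _ in range(n)
          if int(random.expovariate(lam) // 12) % 2 == 0)
print(closed, odd / n)  # the two should agree to roughly 2 decimal places
```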
{ "language": "en", "url": "https://math.stackexchange.com/questions/3148761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
checking the Solution of Bessel differential equation I want to check the first part of the solution and help in the second part. Obtain the solution $$y_{1}(x)=J_{0}(x)=1-\frac{x^{2}}{2^{2}}+\frac{x^{4}}{2^{2}\cdot 4^{2}}-\ldots+\frac{(-1)^{n} x^{2 n}}{2^{2 n}(n !)^{2}}+\ldots$$ of the differential equation $$x \frac{d^{2} y}{d x^{2}}+\frac{d y}{d x}+x y=0$$ Show that $$y_{2}(x)=u(x) J_{0}(x)$$ is a second solution if $$u(x)=\int \frac{d x}{x J_{0}^{2}}$$ Solution : Firstly, let $y(x)=\sum_{\lambda=0}^{\infty} a_{\lambda} x^{k+\lambda}$. We differentiate and substitute. The result is $$\sum_{\lambda=0}^{\infty} a_{\lambda}(k+\lambda)(k+\lambda-1) x^{k+\lambda-1}+\sum_{\lambda=0}^{\infty} a_{\lambda}(k+\lambda) x^{k+\lambda-1}+\sum_{\lambda=0}^{\infty} a_{\lambda} x^{k+\lambda+1}=0$$ By setting $\lambda=0,$ we get the coefficient of $x^{k-1},$ the lowest power of $x$ appearing on the left-hand side, $$\quad a_{0}\left[k(k-1)+k\right]=0$$ and $a_{0} \neq 0$ by definition. This therefore yields the indicial equation $k^2=0$, with double root $k=0$. It is of some interest to examine the coefficient of $x^{k}$ also. Here we obtain $a_{1}=0$. Proceeding to the coefficient of $x^{j-1}$ for $k=0,$ we set $\lambda=j$ in the first and second terms and $\lambda=j-2$ in the third term. By requiring the resultant coefficient of $x^{j-1}$ to vanish, we obtain $$a_{j}\left[j(j-1)+j\right]+a_{j-2}=0$$ When $j$ is replaced by $j+2,$ this can be rewritten for $j \geq 0$ as $a_{j+2}=-a_{j} \frac{1}{(j+2)(j+2)}$ which is the desired recurrence relation. 
Repeated application of this recurrence relation leads to $$a_{2}=-a_{0} \frac{1}{2(2)}=-\frac{a_{0} }{2^{2}}$$ $$a_{4}=-a_{2} \frac{1}{4(4)}=\frac{a_{0} }{2^{4} (2!)^{2}}$$ $$a_{6}=-a_{4} \frac{1}{6(6)}=-\frac{a_{0} }{2^{6} (3!)^{2}}, \quad$$ and so on and in general, $$a_{2 p}=(-1)^{p} \frac{a_{0}}{2^{2 p} (p!)^{2}}$$ Inserting these coefficients in our assumed series solution and take $a_0=1$, we have $$y_1(x)=J_0(x)= \left[1-\frac{ x^{2}}{2^{2} (1!)^{2}}+\frac{ x^{4}}{2^{4} (2 !)^{2}}-\cdots\right]$$ In summation form $$y_1(x)=J_0(x)=\sum_{j=0}^{\infty}(-1)^{j} \frac{ x^{2 j}}{2^{2j} (j!)^{2} }= \sum_{j=0}^{\infty}(-1)^{j} \frac{1}{(j !)^{2}}\left(\frac{x}{2}\right)^{2 j}$$ Secondly If $$y_{2}(x)=u(x) J_{0}(x)$$ where $$u(x)=\int \frac{d x}{x J_{0}^{2}}$$ then $$y'_{2}(x)=u(x) J'_{0}(x)+u'(x) J_{0}(x)$$ and $$y''_{2}(x)=u'(x) J'_{0}(x)+u(x) J''_{0}(x)+u''(x) J_{0}(x)+u'(x) J'_{0}(x)$$ We get $$x(u'(x) J'_{0}(x)+u(x) J''_{0}(x)+u''(x) J_{0}(x)+u'(x) J'_{0}(x))+u(x) J'_{0}(x)+u'(x) J_{0}(x)+x(u(x) J_{0}(x))$$ $$xu'(x) J'_{0}(x)+xu(x) J''_{0}(x)+xu''(x) J_{0}(x)+xu'(x) J'_{0}(x)+u(x) J'_{0}(x)+u'(x) J_{0}(x)+xu(x) J_{0}(x)$$ $$2xu'(x) J'_{0}(x)+xu(x) J''_{0}(x)+xu''(x) J_{0}(x)+u(x) J'_{0}(x)+u'(x) J_{0}(x)+xu(x) J_{0}(x)$$ $$u(x)(x J''_{0}(x)+ J'_{0}(x)+x J_{0}(x))+xu''(x) J_{0}(x)+u'(x) J_{0}(x)+2xu'(x) J'_{0}(x)$$ I can not continue..
Your series solution looks good. As for finding the second solution... it would help a great deal if we actually used the definition of $u$. \begin{align*}y_2(x) &= u(x)J_0(x)\\ y_2'(x) &= u(x)J_0'(x) + u'(x)J_0(x) = u(x)J_0'(x) + \frac{1}{xJ_0(x)}\\ y_2''(x) &= u(x)J_0''(x) + u'(x)J_0'(x) - \frac{J_0(x)+xJ_0'(x)}{x^2J_0^2(x)}\\ &= u(x)J_0''(x) + \frac{J_0'(x)}{xJ_0^2(x)} -\frac{1}{x^2J_0(x)}-\frac{J_0'(x)}{xJ_0^2(x)}\\ y_2''(x) &= u(x)J_0''(x) - \frac{1}{x^2J_0(x)}\end{align*} We have of course used the Fundamental Theorem to differentiate $u$. And now, we put those calculated derivatives into the differential equation: \begin{align*}xy_2''(x) + y_2'(x) + xy_2(x) &= xu(x)J_0(x) + u(x)J_0'(x) + \frac{1}{xJ_0(x)} + xu(x)J_0''(x) - \frac{1}{xJ_0(x)}\\ &= u(x)\left(xJ_0(x)+J_0'(x)+xJ_0''(x)\right) = u(x)\cdot 0 = 0\end{align*} We use that $J_0$ is a solution of the differential equation, and there it is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3148880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a future co-linear midpoint of two moving objects I have three objects in a 2d space: S, E1, and E2. E1 and E2 are at some location ((E1x, E1y) & (E2x, E2y)) and moving at constant velocities VE1 and VE2. They will be set initially and not change. S starts at some location and needs to pick a direction. It has a speed and can only move at this speed. How does one select the exact direction that will result in S being both equidistant from and co-linear to E1 and E2 in the minimum amount of time.
Without loss of generality, assume $S$ starts at the origin. If $S$ has constant speed $s$ and chooses a constant heading $\vec h_S$ where $|\vec h_S|=1$ then its position at time $t$ is $\vec S(t) = (st) \vec h_S$ Let's call the midpoint of $E_1$ and $E_2$ $F$, so at time $t$ $\vec F(t) = \frac 1 2 \left( \vec E_1(t) + \vec E_2(t) \right)$ Then $F$ starts at position $\vec F(0) = \frac 1 2 \left( \vec E_1(0) + \vec E_2(0) \right)$ and has constant velocity $\vec v_F = \frac 1 2 \left( \vec v_{E_1} + \vec v_{E_2} \right)$ and so its position at time $t$ is $\vec F(t) = \vec F(0) + t\vec v_F$ Then $S$ intercepts $F$ at time $t$ if $(st) \vec h_S = \vec F(0) + t\vec v_F \\ \Rightarrow t(s \vec h_S - \vec v_F) = \vec F(0)$ So $S$ has to choose its heading $\vec h_S$ so that $s \vec h_S - \vec v_F$ is parallel to $\vec F(0)$. As pointed out in a comment, if $s$ is too small this may not be possible.
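The parallelism condition can be turned into a small computation: with $u = 1/t$, the interception equation becomes $|\vec F(0)\,u + \vec v_F| = s$, a quadratic in $u$. A sketch with made-up positions and velocities (all concrete data here is assumed, not from the question):

```python
import math

# assumed example data: E1, E2 positions and velocities; S at origin, speed s
E1, vE1 = (10.0, 0.0), (1.0, 0.0)
E2, vE2 = (0.0, 10.0), (0.0, 1.0)
s = 3.0   # must exceed |vF| for the quadratic below to have a positive root

F0 = ((E1[0] + E2[0]) / 2, (E1[1] + E2[1]) / 2)
vF = ((vE1[0] + vE2[0]) / 2, (vE1[1] + vE2[1]) / 2)

# t*(s*h - vF) = F0 with |h| = 1  =>  |F0*u + vF| = s where u = 1/t, i.e.
# |F0|^2 u^2 + 2(F0 . vF) u + (|vF|^2 - s^2) = 0
a = F0[0]**2 + F0[1]**2
b = 2 * (F0[0] * vF[0] + F0[1] * vF[1])
c = vF[0]**2 + vF[1]**2 - s**2
u = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # largest u -> smallest t
t = 1 / u
h = ((F0[0] * u + vF[0]) / s, (F0[1] * u + vF[1]) / s)  # unit heading

# verify: S meets the midpoint F at time t
Sx, Sy = s * t * h[0], s * t * h[1]
Fx, Fy = F0[0] + t * vF[0], F0[1] + t * vF[1]
assert abs(Sx - Fx) < 1e-9 and abs(Sy - Fy) < 1e-9
print(t, h)
```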
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using transition matrix and initial distribution to calculate probabilities. (Markov chains) Let a markov chain with state space $\{1,2,3,4\}$ and transition matrix $$P=\begin{pmatrix}1/3 && 1/3 && 0 && 1/3 \\ 1/4 && 1/4 && 1/4 && 1/4 \\ 0 && 0 && 1/2 && 1/2 \\ 0 && 0 && 0 && 1\end{pmatrix}$$ and initial distribution $\lambda=(1/2,1/2,0,0)$ be given. How can I use this to calculate $\mathbb P[X_0=2, X_1=1, X_2=2, X_3=1]$ and $\mathbb P[X_0=2, X_2=2, X_3=1]$ ? I tried the following: $P(X_0=i_0,X_1=i_1,...,X_n=i_n)=P(X_0=i_0)P(X_1=i_1|X_0=i_0)\dots P(X_n=i_n|X_{n-1}=i_{n-1})$ So $P[X_0=2, X_1=1, X_2=2, X_3=1]=P(X_0=2)P(X_1=1|X_0=2)P(X_2=2|X_1=1)P(X_3=1|X_2=2)=\lambda_0 P_{21} P_{12}P_{21}=1/2\times1/4\times1/3\times1/4$
$P[X_0=2, X_2=2] = \sum_{i=1}^{4}P[X_2=2|(X_1=i,X_0=2) ].P[X_1=i,X_0=2] $ $= \left( \sum_{i=1}^{4}P[X_2=2|(X_1=i,X_0=2) ].P[X_1=i|X_0=2] \right) . P[X_0 =2] $ since $X_2$ only depends on the value of $X_1$ $= \left( \sum_{i=1}^{4}P[X_2=2|X_1=i].P[X_1=i|X_0=2]\right) . P[X_0 =2] $ so $P[X_0=2, X_2=2, X_3 =1] = $ $P[X3=1|X_2=2] .\left(\sum_{i=1}^{4}P[X_2=2|X_1=i].P[X_1=i|X_0=2]\right) . P[X_0 =2] $ and the sum is just two vector multiplication
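Both probabilities are easy to evaluate numerically. A sketch in plain Python (note the question's states are 1-indexed, so state $2$ is row index $1$):

```python
P = [[1/3, 1/3, 0, 1/3],
     [1/4, 1/4, 1/4, 1/4],
     [0, 0, 1/2, 1/2],
     [0, 0, 0, 1]]
lam = [1/2, 1/2, 0, 0]

# P[X0=2, X1=1, X2=2, X3=1], multiplying along the path as in the question
p_full = lam[1] * P[1][0] * P[0][1] * P[1][0]
print(p_full)   # 1/2 * 1/4 * 1/3 * 1/4 = 1/96

# P[X0=2, X2=2, X3=1]: sum over the unobserved X1, then one more step
p_skip = sum(lam[1] * P[1][i] * P[i][1] for i in range(4)) * P[1][0]
print(p_skip)   # 1/2 * (1/12 + 1/16) * 1/4 = 7/384
```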
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $T_H$ is a linear subspace of $T_G$ so that $\dim H\leq \dim G$ Let $H$ be a subgroup of a matrix group $G$. Show that $T_H$ is a linear subspace of $T_G$ so that $\dim H\leq \dim G$ Definition: Let $\phi:G\rightarrow H$ be a smooth homomorphism of matrix groups. If $\gamma '(0)$ is a tangent vector to $G$ at $I$, we define a tangent vector $d_\phi(\gamma '(0))$ to $H$ at $I$ by $$d_\phi(\gamma '(0))=(\phi\circ\gamma)'(0)$$ The resulting map $d_\varphi:T_G\rightarrow T_H$ is called the differential of $\phi$. To see if a vector space $V$ is a subspace of some other (higher-dimension) vector space, we need to check the following: * *$V\neq\emptyset$ *$\vec{x},\vec{y}\in V \implies \vec{x}+\vec{y}\in V$ *$\alpha\in \mathbb F, \vec{x}\in V\implies \alpha\vec{x}\in V$ But I don't know how to apply these to the question. Any help is appreciated
It looks like your book is defining tangent vectors as derivaties of somoth curves. But if $\gamma$ is a curve landing in $H$ then a priori it lands in $G$, so the injectivity is automatic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability minimum is "reached" Let $(X_i)_{i=1,\dots,n}$ be a finite sequence of independent random variables such that $X_i\sim\mathcal{E}(\lambda_i).$ We can prove that $Y:=\min_{1\le i\le n}X_i\sim\mathcal{E}(\lambda=\sum_{i=1}^n \lambda_i)$ Now I would like to compute $P(X_i=Y).$ We have $$P(\min_{j\ne i}X_j>x)=P(\cap_{j\ne i}\{X_j>x\})=\prod_{j\ne i}e^{-\lambda_j x}=e^{-(\lambda-\lambda_i)x}$$ Now not sure how I can continue.
Manipulating the probabilities: $$P(X_i = Y) = 1 - P(X_i \ne Y) = 1 - (P(X_i < Y) + P(X_i > Y)) = 1 - P(X_i > Y) =$$ $$1 - P(X_i > \text{min}_{j = 1, \ldots, n}X_j) = 1 - P(X_i > \text{min}_{j \ne i}X_j)$$ Now, $X_i$ and $Y_i = \text{min}_{j \ne i}X_j$ are two independent exponential random variables with parameters $\lambda_i$ and $\tilde{\lambda}_i = \sum_{j \ne i} \lambda_j$. Then $P(X_i > Y_i) = P(\mathcal{E}(\lambda_i) > \mathcal{E}(\tilde{\lambda}_i)) = \tilde{\lambda}_i / (\lambda_i + \tilde{\lambda}_i) = \tilde{\lambda}_i / \lambda$. So $P(X_i = Y) = 1 - \tilde{\lambda}_i / \lambda = (\lambda - \tilde{\lambda}_i) / \lambda = \lambda_i / \lambda$.
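A Monte Carlo check of $P(X_i = Y) = \lambda_i/\lambda$, with assumed rates:

```python
import random

lams = [0.5, 1.0, 2.5]          # assumed rates, not from the question
total = sum(lams)

random.seed(1)
n = 200_000
wins = [0] * len(lams)
for _ in range(n):
    xs = [random.expovariate(l) for l in lams]
    wins[xs.index(min(xs))] += 1   # which X_i achieved the minimum

for l, w in zip(lams, wins):
    print(l / total, w / n)        # theoretical vs empirical
```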
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is it generally true that $\langle a,b \rangle \cong \langle c,d \rangle\Rightarrow \text{ either }|a|=|c|,|b|=|d| \text{ or } |a|=|d|,|b|=|c|$? Background: We are given two groups $G,H$ generated by two elements, say $G=\langle a,b\rangle$ and $H=\langle c,d\rangle$. Further suppose that the orders of $a,b,c,d$ are finite and $\{|a|,|b|\}\neq\{|c|,|d|\}$. Can we conclude that $G\not\cong H$? Motivation: I have two groups $G$ and $H$, both being non-Abelian and of order 8 (in fact, $G$ is the quaternion group and $H$ is the dihedral group of degree 4), and I know a set of generators for both of the two groups, say $\{a,b\}$ and $\{c,d\}$ correspondingly. I want to show that $G$ is not isomorphic to $H$ since $|a|=|b|=|d|=4$ but $|c|=2$, where $|a|$ is the order of $a\in G$.
No, take $G=H=\mathbb{Z}_4$ which has two sets of generators $\langle 1,2\rangle$ and $\langle 1, 3\rangle$. They satisfy the assumption but the order equalities don't follow. For non-cyclic case note that if $\langle a, b\rangle$ generates a group then so does $\langle a, ab\rangle$. Now for any integers $n,m,r>1$ there is a (finite) group $G$ and elements $a,b\in G$ such that $|a|=n$, $|b|=m$ and $|ab|=r$. For details see Theorem 1.64 here. That gives you more counterexamples.
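The $\mathbb{Z}_4$ counterexample can be verified mechanically. A sketch in Python (additive notation for $\mathbb{Z}_n$ is an implementation choice):

```python
def order(g, n):
    # additive order of a nonzero g in Z_n
    k, s = 1, g % n
    while s != 0:
        s = (s + g) % n
        k += 1
    return k

def generated(gens, n):
    # closure of {0} under adding the generators, mod n
    S = {0}
    new = True
    while new:
        extra = {(s + g) % n for s in S for g in gens} - S
        new = bool(extra)
        S |= extra
    return S

n = 4
# both pairs generate all of Z_4 ...
assert generated([1, 2], n) == set(range(n)) == generated([1, 3], n)
# ... but the multisets of generator orders differ: [4, 2] vs [4, 4]
print([order(g, n) for g in (1, 2)], [order(g, n) for g in (1, 3)])
```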
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that there exists $i\in \lbrace 1, 2, 3 \rbrace $ s.t. there exists $a, b\in A_i $ s.t. $a+b\in B $. Let $A=\lbrace 1, 2, 3,..., 2019\rbrace= A_1\cup A_2\cup A_3$, where $A_1\cap A_2=A_2\cap A_3= A_1\cap A_3=\emptyset $ and $B=\lbrace 672, 1008, 1344, 1680, 2016\rbrace $. Show that there exists $i\in \lbrace 1, 2, 3 \rbrace $ s.t. there exists $a, b\in A_i , a\neq b$ s.t. $a+b\in B $. I consider $A_1=\lbrace x_1, x_2, ..., x_k \rbrace$. Then $2018-x_1, 2018-x_2,..., 2018-x_k\notin A_1$.But is not enough.
Letting $b=168$, we have $B=\{4b,6b,8b,10b,12b\}$, while $2019=12b+3$. Thus, $\{b,2b,3b,4b,5b,6b,7b,8b,9b,10b,11b,12b\}\subseteq A$, and restricting to these multiples of $b$ only, it suffices to show that for any partition $\{1,2,3,4,5,6,7,8,9,10,11,12\}=A_1\cup A_2\cup A_3$, there are $i\in\{1,2,3\}$ and $a,b\in A_i,\ a\ne b$, such that $a+b\in\{4,6,8,10,12\}$. This can be done by a simple case analysis. * *Suppose WLOG that $1\in A_1$. *If also $3\in A_1$, then we can take $a=1$ and $b=3$; suppose thus that $3\in A_2$. *If $5\in A_1$, we take $a=1$ and $b=5$; if $5\in A_2$, we take $a=3$ and $b=5$. Suppose thus that $5\in A_3$. *Now if $7\in A_1$, then we take $a=1$ and $b=7$; if $7\in A_2$, we take $a=3$ and $b=7$; finally, if $7\in A_3$, we take $a=5$ and $b=7$. Notice, that in this solution we have only used the fact that $\{b,3b,5b,7b\}=\{168,504,840,1176\}\subset A$.
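The closing remark reduces the case analysis to the four elements $\{b,3b,5b,7b\}$, i.e. to $\{1,3,5,7\}$ after dividing by $b$. That reduced claim can be brute-forced over all $3^4$ colorings:

```python
from itertools import product

targets = {4, 6, 8, 10, 12}
odds = [1, 3, 5, 7]

# every pairwise sum of {1,3,5,7} lands in targets, and by pigeonhole some
# part of any 3-partition contains two of them -- check all 81 colorings
for coloring in product(range(3), repeat=4):
    parts = [[n for n, c in zip(odds, coloring) if c == k] for k in range(3)]
    assert any(a + b in targets
               for part in parts
               for i, a in enumerate(part)
               for b in part[i + 1:]), coloring

print("every 3-coloring of {1,3,5,7} has a same-part pair summing into B")
```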
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Trying to Solve A Differential Equation Using Variation of Parameters? A few of my classmates and I are trying to solve the following differential equation: $x'' - x'\sin(t) - x\cos(t) = 0$ using the variation of parameters. However, we're at a bit of a loss because we struggled to find a fundamental solution, and at this point we are not even sure if this is the correct method to use. We tried using $x(t) = A\cos(t) + B\sin(t)$, $e^{at}\cos(t)$, and $e^{at}\sin(t)$ and substituting $x, x'$, and $x''$ from each back into the original equation, but none of these are working. Please provide some direction as to if this is even the correct way to approach the problem. Kind Regards, GingerKittyLover
Note that $$0=x''(t) - x'(t)\sin(t) - x(t)\cos(t)=D(x'(t)-x(t)\sin(t))$$ and it follows that $$x'(t)-x(t)\sin(t)=c$$ which is a linear ODE of the first order. Can you take it from here?
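One explicit solution of the first-order equation (taking $c=0$, so $x' = x\sin t$, giving $x(t)=e^{-\cos t}$) can be checked against the original second-order ODE by finite differences; this is only a numerical sanity check, not part of the derivation:

```python
import math

def x(t):
    # with c = 0, x' - x sin(t) = 0 integrates to x(t) = C * exp(-cos(t))
    return math.exp(-math.cos(t))

h = 1e-4
for t in (0.3, 1.0, 2.2):
    x1 = (x(t + h) - x(t - h)) / (2 * h)             # ~ x'(t)
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2     # ~ x''(t)
    residual = x2 - x1 * math.sin(t) - x(t) * math.cos(t)
    print(t, residual)
    assert abs(residual) < 1e-5
```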
{ "language": "en", "url": "https://math.stackexchange.com/questions/3149854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conditional Expectation of Two Random Variable I'm new to conditional expectations and having trouble with the following problem: * *For $\Omega = \{ a, b, c, d \}$, with $P(a) = 4P(b) = 2P(c) = 3P(d)$. Define the random variables: $X(ω) = \begin{cases}~~~5 &:&\omega \in \{ a, b \}\\ −2&:& ω \in \{ c, d \}\end{cases}$ $Y (ω) = \begin{cases}~~~3&:& ω ∈ \{ a, c \}\\~~~ 4&:& ω ∈ \{ b, d \}\end{cases}$ I need to find $E[X|Y]$ and I've gotten this far: (Also just realized it should say $y∈\{3,4\}$ not $y∈\{-2,5\}$.) Not sure where to go from here, as I've only done problems where the probabilities were defined as, for example, $P(a)$ = $1/6$, etc.
Firstly, you have $\mathsf P(a)=4\mathsf P(b)=2\mathsf P(c)=3\mathsf P(d)$, so you don't need to evaluate them since there will be cancellation. Secondly, $\mathsf E(X\mid Y)$ is a random variable, measured over $Y$, so don't forget to leave in the indicator functions. $$\begin{align}\mathsf E(X\mid Y)&=\begin{cases}\mathsf E(X\mid Y=3)&:&Y=3\\\mathsf E(X\mid Y=4)&:&Y=4\end{cases} \\[2ex]&=\begin{cases}\dfrac{X(a)~\mathsf P(a)+X(c)~\mathsf P(c)}{\mathsf P(a)+\mathsf P(c)}&:&Y=3\\[2ex]\dfrac{X(b)~ \mathsf P(b)+X(d)~\mathsf P(d)}{\mathsf P(b)+\mathsf P(d)}&:& Y=4\end{cases} \\[2ex]&=\dfrac{5~\mathsf P(a)-2~\mathsf P(c)}{\mathsf P(a)+\mathsf P(c)}\mathbf 1_{Y=3}+\dfrac{5~ \mathsf P(b)-2~\mathsf P(d)}{\mathsf P(b)+\mathsf P(d)}\mathbf 1_{Y=4} \\[2ex]&=\dfrac{10-2}{2+1}\mathbf 1_{Y=3}+\dfrac{5~ \mathsf P(b)-2~\mathsf P(d)}{\mathsf P(b)+\mathsf P(d)}\mathbf 1_{Y=4}&&\text{since }\mathsf P(a)=2\mathsf P(c) \\[2ex]&=\dfrac{8}{3}\mathbf 1_{Y=3}+\dfrac{5~ \mathsf P(b)-2~\mathsf P(d)}{\mathsf P(b)+\mathsf P(d)}\mathbf 1_{Y=4} \end{align}$$ And I'll leave the rest to you.
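Since $\mathsf P(a)\big(1+\tfrac14+\tfrac12+\tfrac13\big)=1$ gives $\mathsf P(a)=\tfrac{12}{25}$, the conditional expectations can also be computed with explicit probabilities as a cross-check (exact arithmetic via `fractions`; this goes a step beyond what the answer leaves to the reader):

```python
from fractions import Fraction as F

Pa = F(12, 25)                  # from Pa * (1 + 1/4 + 1/2 + 1/3) = 1
Pb, Pc, Pd = Pa / 4, Pa / 2, Pa / 3

# X is 5 on {a, b} and -2 on {c, d}; Y is 3 on {a, c} and 4 on {b, d}
e3 = (5 * Pa - 2 * Pc) / (Pa + Pc)   # E[X | Y = 3]
e4 = (5 * Pb - 2 * Pd) / (Pb + Pd)   # E[X | Y = 4]
print(e3, e4)   # 8/3 and 1
```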
{ "language": "en", "url": "https://math.stackexchange.com/questions/3150007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Verification of Alternate Proof for Identity Theorem in Complex Analysis. Can't we prove the Identity Theorem like this? IDENTITY THEOREM. Consider a function which is analytic on an open connected domain $D$; if the set of zeros of $f$ has a limit point in $D$ then $f=0$ on $D$. My Proof Idea. Suppose $f$ is not identically $0$. Since the set of zeros of $f$ has a limit point in $D$, say $z_0$, by continuity of $f$ we have $f(z_0)=0$, and since $z_0$ is a limit point of the set of zeros it is not isolated. Thus $z_0$, being a zero of the non-constant analytic function $f$, is not isolated. This contradicts the fact that a non-constant analytic function has isolated zeros. Hence our assumption is wrong and $f$ is identically zero.
As far as I can tell, the error in your proof is in the last line, when you use the fact that a non-constant function has isolated zeros. This is definitely true if your domain is $\mathbb{C}, $ which is probably the version you are thinking of, but is false in general. For example, let $A$ and $B$ be disjoint and define a function $f$ by $f \equiv 0$ on $A$ and $f \equiv 1$ on $B.$ Then $f$ is analytic on $A \cup B$ and has non-isolated zeros but is not constant. The theorem you are looking for is this: given a function $f\colon A \to B$, if $f$ has an accumulation point of zeros on some domain (i.e. open and connected) $C \subseteq A$ then $f$ is constant on $C$. This phenomenon is more general throughout math, and is related to topology. For example, via purely topological methods we can show that if a function is locally constant on a connected component then it is globally constant on the whole connected component, but we cannot conclude the function is constant elsewhere. Note: you should also use the assumption that $D$ is open, since if our limit point is on the boundary the Identity Theorem can fail. Note this is part of the theorem you should be using above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3150155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using Taylor series expansion to solve the equation $\frac{\tanh^{-1}(x)}{\beta} -2x =0$ I want to use the Taylor series expansion of $\tanh^{-1}$ to get an approximate solution $\tilde{x}(\beta)$ for the equation $$ \frac{\tanh^{-1}(x)}{\beta} -2x =0 $$ for $\beta >\frac{1}{2}$ such that I can calculate the limit $$ \lim_{\beta \downarrow \frac{1}{2}}\frac{\tilde{x}(\beta)}{\left(\beta-\frac{1}{2}\right)^{\frac{1}{2}}}. $$ Unfortunately I have no clue on how to appropriately use the Taylor expansion to solve this. I guess it is possible to use the $O$-notation and break the problem down to a polynomial equation. Thank you very much in advance for any help.
Write the problem first as $$\beta=\frac{\tanh ^{-1}(x)}{2 x}$$ Now, expand the rhs using the usual $$\tanh ^{-1}(x)=\sum_{n=0}^\infty \frac {x^{2n+1}} {2n+1}$$ Using a few terms, we then have $$\beta=\frac{1}{2}+\frac{x^2}{6}+\frac{x^4}{10}+O\left(x^{6}\right)$$ Now, using series reversion $$\tilde{x} (\beta)=\sqrt{6} \sqrt{\beta -\frac{1}{2}}-\frac{9}{5} \sqrt{6} \left(\beta -\frac{1}{2}\right)^{3/2}+O\left(\left(\beta -\frac{1}{2}\right)^{5/2}\right)$$ as already given by ComplexYetTrivial.
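The reverted series can be checked by solving the original equation numerically for $\beta$ close to $\tfrac12$ (bisection; tolerances are ad hoc):

```python
import math

def solve_x(beta, lo=1e-12, hi=1 - 1e-12):
    # bisection on g(x) = atanh(x) - 2*beta*x, which has a unique
    # positive root for beta > 1/2 (g < 0 near 0, g -> +inf near 1)
    g = lambda x: math.atanh(x) - 2 * beta * x
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for beta in (0.51, 0.505, 0.501):
    x = solve_x(beta)
    ratio = x / math.sqrt(beta - 0.5)
    print(beta, x, ratio)   # ratio -> sqrt(6) ~ 2.449 as beta -> 1/2
```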
{ "language": "en", "url": "https://math.stackexchange.com/questions/3150287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
$ f(\frac{x^2+1}{x^2})=x^2+\frac{1}{x^2}-2 \Rightarrow f(x)=?$ $ f(\frac{x^2+1}{x^2})=x^2+\frac{1}{x^2}-2 \Rightarrow f(x)=?$ I'm getting a wrong answer even though my solution looks valid to me: Inverse of $(1+\frac{1}{x^2})$ is $(\frac{1}{{\sqrt{x-1}}})$, so I plug this expression in wherever I see $x$'s $$f(x)=(\frac{1}{{\sqrt{x-1}}})^2+(\sqrt{x-1})^2-2$$ Finally, I get $f(x)=\frac{(x-2)^2}{x-1}$ which is wrong according to the answer key. The answer should have been $x^2-4$ What am I doing wrong?
I think your answer is correct. I solved the problem like this: $$f(\frac{x^2+1}{x^2})=f(1+\frac{1}{x^2})=x^2+\frac{1}{x^2}-2$$ Now $f(1+\frac{1}{x^2})$ can be written as a function of just $\frac{1}{x^2}$ so, $$f(1+\frac{1}{x^2})=g(\frac{1}{x^2})=x^2+\frac{1}{x^2}-2$$ Put $t=\frac{1}{x^2}$, to get: $$f(1+t)=g(t)$$ $$g(t)=t+\frac{1}{t}-2$$ Or, $$g(x)=x+\frac{1}{x}-2$$ Now, $g(x)=f(1+x)$ Put, $1+x=\alpha$ to get, $$f(\alpha)=g(\alpha-1)=\alpha -1 + \frac{1}{\alpha-1} -2$$ $$f(x)=x - 1 + \frac{1}{x-1} -2$$ $$f(x)=\frac{(x-2)^2}{x-1}$$ where $x\neq 0$,$x\neq1$ Hope this helps...
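A numeric spot-check confirms $(x-2)^2/(x-1)$ and rules out the key's $x^2-4$:

```python
def f(x):
    return (x - 2)**2 / (x - 1)

for t in (0.5, 2.0, 3.0, -1.3):
    arg = (t**2 + 1) / t**2       # the argument (x^2+1)/x^2
    rhs = t**2 + 1 / t**2 - 2
    assert abs(f(arg) - rhs) < 1e-9
    # the key's claimed answer x^2 - 4 does not satisfy the equation:
    assert abs((arg**2 - 4) - rhs) > 1e-6

print("f(x) = (x-2)^2/(x-1) checks out; x^2 - 4 does not")
```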
{ "language": "en", "url": "https://math.stackexchange.com/questions/3150399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the geometric meaning of this null-determinant? While reading about interpolation I came across the following equation in Norlund. It involves determinants and I don't understand it in full yet. I do know how Lagrange and Newton follow by using the Laplace expansion. $$ P_n=-\det \begin{bmatrix} 1 & x_0 & \dots & x_0^n & y_0\\ 1 & x_1 & \dots & x_1^n & y_1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 1 & x_n & \dots & x_n^n & y_n\\ 1& x & \dots & x^n & 0\\ \end{bmatrix} : \det \begin{bmatrix} 1 & x_0 & \dots & x_0^n\\ 1 & x_1 & \dots & x_1^n\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_n & \dots & x_n^n\\ \end{bmatrix} $$ Where $P_n$ is the interpolation polynomial. Let's define the following for the interpolation polynomial $P_n(x_i)=y_i$ for $i\in[0,1,...,n]$. But this implies, after converting this equation into the determinant of a single matrix $A$, the following: $$ \det(A)=\det \begin{bmatrix} 1 & x_0 & \dots & x_0^n & y_0\\ 1 & x_1 & \dots & x_1^n & y_1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 1 & x_n & \dots & x_n^n & y_n\\ 1& x & \dots & x^n & P_n\\ \end{bmatrix} =0\quad\quad\quad\quad\quad\quad\quad\quad $$ (Question) What is the geometric interpretation of this statement? My best guess is something along the lines of infinite solution because $x\in\mathbb{R}$. Thanks in advance.
In the hypothesis, you have that $$\det \begin{bmatrix} 1 & x_0 & \dots & x_0^n\\ 1 & x_1 & \dots & x_1^n\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_n & \dots & x_n^n\\ \end{bmatrix} \neq 0$$ It means that $$\ \text{rank} \begin{bmatrix} 1 & x_0 & \dots & x_0^n\\ 1 & x_1 & \dots & x_1^n\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_n & \dots & x_n^n\\ \end{bmatrix}\ = n+1 = \text{rank} \begin{bmatrix} 1 & x_0 & \dots & x_0^n\\ 1 & x_1 & \dots & x_1^n\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_n & \dots & x_n^n\\ 1 & x & \ldots & x^n\\ \end{bmatrix}\ $$ are both maximal. Now $$ \det \begin{bmatrix} 1 & x_0 & \dots & x_0^n & y_0\\ 1 & x_1 & \dots & x_1^n & y_1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 1 & x_n & \dots & x_n^n & y_n\\ 1& x & \dots & x^n & P_n\\ \end{bmatrix} =0 $$ means that $$ n+1 \leq \text{rank} \begin{bmatrix} 1 & x_0 & \dots & x_0^n & y_0\\ 1 & x_1 & \dots & x_1^n & y_1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 1 & x_n & \dots & x_n^n & y_n\\ 1& x & \dots & x^n & P_n\\ \end{bmatrix} < n+2 \Rightarrow \text{rank} \begin{bmatrix} 1 & x_0 & \dots & x_0^n & y_0\\ 1 & x_1 & \dots & x_1^n & y_1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 1 & x_n & \dots & x_n^n & y_n\\ 1& x & \dots & x^n & P_n\\ \end{bmatrix} = n+1$$ so the last column is linearly dependent on the other columns: $\exists \lambda_0, \ldots, \lambda_n$ (the coefficients of the interpolation polynomial!) such that $\sum_{i = 0}^{n} \lambda_i x_j^i = y_j$ for all $j = 0, \ldots, n$ and $\sum_{i = 0}^{n} \lambda_i x^i = P_n$.
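A small numeric experiment (assumed nodes and data, $n=2$) shows the bordered determinant vanishing at the interpolated value; the `det` helper is a plain Gaussian-elimination sketch:

```python
from math import prod

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

xs = [0.0, 1.0, 2.0]   # assumed sample nodes
ys = [1.0, 3.0, 7.0]   # assumed data (lies on y = x^2 + x + 1)
x = 1.5

# Lagrange interpolation value at x
Pn = sum(y * prod((x - xj) / (xi - xj) for xj in xs if xj != xi)
         for xi, y in zip(xs, ys))

# bordered matrix: Vandermonde rows with the data column appended
A = [[1, xi, xi**2, y] for xi, y in zip(xs, ys)] + [[1, x, x**2, Pn]]
print(det(A))   # ~ 0
```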
{ "language": "en", "url": "https://math.stackexchange.com/questions/3150674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
exponential null operator Let $A$ be a Banach algebra with unit, and let $X\in A$. If I define $e^X=\sum_{n=0}^{\infty} \frac{1}{n!}X^n$, why is $e^0=\operatorname{Id}$? I am assuming $0^0=\operatorname{Id}$, with $0$ the null operator. Thanks
Since$$e^X=\operatorname{Id}+X+\frac{X^2}{2!}+\frac{X^3}{3!}+\cdots,$$then$$e^0=\operatorname{Id}+0+\frac{0^2}{2!}+\frac{0^3}{3!}+\cdots=\operatorname{Id}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3150880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Chordal Graph to Directed Acyclic Graph I have seen an exercise which says an undirected graph $G=(V,E)$ is chordal if and only if the edges of $G$ can be oriented with directions, such that the resulting graph $D=(V,A)$ has the following properties: * *$D$ is acyclic *if $(x,y)$ and $(x,z)$ belong to $A$, then $(y,z)$ or $(z,y)$ belongs to $A$ The sufficiency is very easy since if we have a directed graph with these properties, we can assume the graph is not chordal (there exists cycle with length $\geq 4$ without a chord), and start assigning directions to the edges in this cycle. In the end, we will need to have $D$ has a directed cycle and this will contradict. However, the necessity is very hard to prove. If we are given that $G$ is chordal, how can we prove the orientation? Shall we give an algorithm to orient the edges, or is there a shortcut (contradiction etc.)?
If G is chordal, to construct D: take a perfect elimination ordering and label the vertices by their index in it. Now orient all the edges by increasing labels, that is, assuming $i < j$ : $\{v_i,v_j\} \Rightarrow (v_i,v_j)$. The property is satisfied, and D is acyclic since any cycle would contain a decreasing edge, which does not exist in D.
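A sketch of this construction on a small example (two triangles sharing an edge; the ordering below is an assumed perfect elimination ordering for that graph):

```python
from itertools import combinations

# chordal graph: two triangles sharing edge {2, 3}
edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]}
peo = [1, 4, 2, 3]              # assumed perfect elimination ordering
rank = {v: i for i, v in enumerate(peo)}

# orient every edge from the earlier vertex in the ordering to the later one
arcs = {(min(e, key=rank.get), max(e, key=rank.get)) for e in edges}

# property 2: two arcs out of the same vertex imply their heads are adjacent
for (x1, y), (x2, z) in combinations(arcs, 2):
    if x1 == x2 and y != z:
        assert frozenset((y, z)) in edges

# acyclic by construction: every arc strictly increases in rank
print(sorted(arcs))   # [(1, 2), (1, 3), (2, 3), (4, 2), (4, 3)]
```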
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding other eigenvector and matrix $A$ given eigenvalues I want to find a symmetric matrix $A$, whose eigenvalues are $4$ and $-1$. One of the eigenvectors corresponding to the eigenvalue $4$ is $(2,3)$. I want to find an eigenvector corresponding to the eigenvalue $-1$ and then find the matrix $A$.
Recall that the eigenspaces of a symmetric matrix are mutually orthogonal. Thus, all eigenvectors with eigenvalue $-1$ are orthogonal to all eigenvectors with eigenvalue $4$. Can you come up with a nonzero vector that’s orthogonal to $(2,3)$? Once you’ve done that, you have a basis of $\mathbb R^2$ that consists of eigenvectors of $A$, therefore $A$ is diagonalizable. Can you construct $A$ from its diagonalization?
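Following the hint: $(3,-2)$ is orthogonal to $(2,3)$, and $A$ can be assembled from the spectral decomposition $A=\sum_k \lambda_k \frac{u_k u_k^T}{|u_k|^2}$. A sketch in exact arithmetic:

```python
from fractions import Fraction as F

u, v = (2, 3), (3, -2)          # eigenvectors for eigenvalues 4 and -1
lam_u, lam_v = 4, -1
n2 = F(1, u[0]**2 + u[1]**2)    # both vectors have squared norm 13

# A = 4 * u u^T / |u|^2  +  (-1) * v v^T / |v|^2
A = [[lam_u * n2 * u[i] * u[j] + lam_v * n2 * v[i] * v[j] for j in range(2)]
     for i in range(2)]
print(A)   # [[7/13, 30/13], [30/13, 32/13]]

def matvec(M, w):
    return [M[0][0]*w[0] + M[0][1]*w[1], M[1][0]*w[0] + M[1][1]*w[1]]

assert matvec(A, u) == [4 * c for c in u]    # A u = 4 u
assert matvec(A, v) == [-1 * c for c in v]   # A v = -v
```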
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to simplify $\sqrt{2+\sqrt{3}}$ $?$ Simplify $\dfrac{2\left(\sqrt2 + \sqrt6\right)}{3\sqrt{2+\sqrt3}}$ The answer to this question is $\frac{4}{3}$ in a workbook. How would I simplify $\sqrt{2+\sqrt3}$ $?$ If it was something like $\sqrt{3 + 2\sqrt2}$ , I would have simplified it as follows: $\sqrt{3 + 2\sqrt2}$ $=$ $\sqrt{(\sqrt2)^2 + 2(\sqrt2)(1) + (1)^2}$ $=$ $\sqrt{(\sqrt2 + 1)^2}$ $=$ $\sqrt2 + 1$ But I can't simplify $\sqrt{2+\sqrt3}$ like that, as $2+\sqrt3$ can't be written as such a perfect square. Is there any other method?
Note that$$\left(\frac{2\left(\sqrt2+\sqrt6\right)}{3\sqrt{2+\sqrt3}}\right)^2=\frac{4\left(8+4\sqrt3\right)}{9\left(2+\sqrt3\right)}=\frac{16}9=\left(\frac43\right)^2.$$
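A floating-point spot-check of both the original expression and the squaring trick:

```python
import math

# the original expression evaluates to 4/3
val = 2 * (math.sqrt(2) + math.sqrt(6)) / (3 * math.sqrt(2 + math.sqrt(3)))
assert abs(val - 4/3) < 1e-12

# the answer's trick: square first, so no denesting is needed
sq = 4 * (8 + 4 * math.sqrt(3)) / (9 * (2 + math.sqrt(3)))
assert abs(sq - 16/9) < 1e-12
print(val, sq)
```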
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Optimal Value of a Cost Function as a Function of the Constraining variable Consider the optimization problem : $ \textrm{min } f(\mathbf{x}) $ $ \textrm{subject to } \sum_i b_ix_i \leq a $ Using duality and numerical methods (with the subgradient method), i.e. $d = \textrm{max}_\lambda \{ \textrm{inf}_x ( f(\mathbf{x}) - \lambda(\sum_i b_ix_i - a)) \}$, we can obtain the optimal cost. Now I want to express $d$ as a function of $a$ to calculate the optimal cost as a function of the constraining variable. How can I tell whether $d = d(a)$ is convex / concave in $a$, if $f$ is convex, $x$ is in a convex set, the problem fulfills Slater's conditions, and so on? Does anyone know where I can find some theory about expressing the dual function as a function of the constraining variable and the properties of this function? Kind regards
It could perhaps be useful to know that you are essentially asking about properties of the value function in parametric programming, or multi-parametric programming to be completely general. https://en.wikipedia.org/wiki/Parametric_programming And regarding convexity, the answer is yes, see e.g., Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming A. V. FIACCO 3 AND J. KYPARISIS 4 https://apps.dtic.mil/dtic/tr/fulltext/u2/a138202.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Differential Equations - Exact ODE's Hi all, I've attempted part b) of the question after rearranging the first expression to: $$v(x,y)\, dx - u(x,y) \, dy = 0.$$ After this I tried the method of differentiating each expression and checking whether they were exact. It was very lengthy, so I am unable to post it here, but it seems there is an easier way to go about this. Any help is appreciated.
The equation can be rewritten as $$\frac{1}{(x^2+y^2)^2}\left(2x\,dx+2y\,dy\right)y-\frac{2y^2}{(x^2+y^2)^2}dy+\frac{y^2-x^2}{(x^2+y^2)^2}dy+dy=0$$Thus$$y\frac{1}{(x^2+y^2)^2}d(x^2+y^2)+\left(-\frac{1}{x^2+y^2}\right)d(y)+dy=0$$which can be written as $$y\left\{d\left(-\frac{1}{x^2+y^2}\right)\right\}+\left(-\frac{1}{x^2+y^2}\right)d(y)+dy=0$$Using the product rule of differentiation we have$$d\left\{\frac{-y}{x^2+y^2}\right\}+dy=0$$ So, integrating, we get$$-\frac{y}{x^2+y^2}+y=c$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Union of two countable sets is also countable This question has been asked many times when I search for it, but I didn't find what I am really looking for. My approach: Let $A_1,A_2$ be two countable sets. If they have some common elements, then define $B_2=A_2\setminus A_1=\{x\in A_2:x\notin A_1 \}$. The point of this is that the union $A_1 \cup A_2=A_1 \cup B_2$ and the sets $A_1,B_2$ are disjoint. I guess three cases can happen for $B_2$:

* If $B_2=\emptyset$, then $A_1\cup A_2=A_1 \cup B_2=A_1$, which we already know to be countable.
* If $B_2=\{b_1,b_2,\cdots,b_m \}$ has $m$ elements, then how can I define a map $h:A_1 \cup B_2\to \mathbb{N}$ and make sure it satisfies the statement "$A$ is countable iff there is a bijective ($1$-$1$ and onto) mapping $f: A \to \mathbb{N}$, where $\mathbb{N}$ is the set of all natural numbers"?
* If $B_2$ is infinite, the same problem arises.

I have no idea how to approach and fix all these things. Besides, I wanted to know whether this approach is enough to prove the statement or not. Any hint or solution will be appreciated. Thanks in advance.
Sets are countable if you can enumerate the elements ($=$ assign an index to each), without omission. Let $C:=A\cup B$. From the enumerations $$a_1,a_2,a_3,\cdots$$ and $$b_1,b_2,b_3,\cdots,$$ we form the enumeration $$a_1,b_1,a_2,b_2,a_3,b_3,\cdots$$ also written $$c_1,c_2,c_3,c_4,c_5,c_6,\cdots$$ In other terms, the elements of $A$ get the odd indexes and those of $B$ the even ones. You can check that this is an enumeration without omission. If $A$ and $B$ have common elements, they will be cited twice. But this doesn't matter. (If you want, you can just skip the duplicates, but it is not even required.)
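The interleaving can be sketched as a generator (an illustrative aside, not part of the proof; the two enumerations below are just examples):

```python
from itertools import islice

def interleave(enum_a, enum_b):
    """Enumerate A ∪ B as a1, b1, a2, b2, ...: elements of A take the
    odd positions, elements of B the even ones (duplicates are harmless)."""
    while True:
        yield next(enum_a)
        yield next(enum_b)

# Example enumerations: A = even naturals, B = odd naturals.
evens = (2 * n for n in range(10**9))
odds = (2 * n + 1 for n in range(10**9))
first_ten = list(islice(interleave(evens, odds), 10))
print(first_ten)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -- no omission
```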
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
$\begin{vmatrix}1&1&1\\a^2&b^2&c^2\\a^3&b^3&c^3\end{vmatrix}=K(a-b)(b-c)(c-a)$, solve for $K$ Question: $\begin{vmatrix}1&1&1\\a^2&b^2&c^2\\a^3&b^3&c^3\end{vmatrix}=K(a-b)(b-c)(c-a)$, solve for $K$ Answer: $K=(ab+bc+ca)$ My attempt: $$\begin{align}\begin{vmatrix}1&1&1\\a^2&b^2&c^2\\a^3&b^3&c^3\end{vmatrix}&=\begin{vmatrix}1&0&0\\a^2&b^2-a^2&c^2-a^2\\a^3&b^3-a^3&c^3-a^3\end{vmatrix}\\&=\begin{vmatrix}b^2-a^2&c^2-a^2\\b^3-a^3&c^3-a^3\end{vmatrix}\\&=(b-a)(c-a)\begin{vmatrix}b+a& c+a\\b^2+ba+a^2&c^2+ca+a^2\end{vmatrix}\\&=(c-b)(a-b)(b+a)(c+a)\begin{vmatrix}1&1\\(b+a)-ba&(c+a)-ca\end{vmatrix}\\\end{align}$$ I don't know what I should do next; maybe I've made a mistake but didn't notice it
Consider the matrix $$\begin{bmatrix} 1& 1 &1 &1 \\ X & a&b&c \\X^2 & a^2 & b^2 & c^2 \\ X^3 & a^3 & b^3 & c^3 \end{bmatrix}$$ Its determinant is a Vandermonde determinant, equal to $$(a-X)(b-X)(c-X)(b-a)(c-a)(c-b)$$ But expanding along the first column, you see that the determinant you are looking for is the coefficient of the $X$ term in this polynomial, taken with a minus sign; that coefficient is $$(-bc-ac-ab)(b-a)(c-a)(c-b)$$ Finally, since $(b-a)(c-a)(c-b)=(a-b)(b-c)(c-a)$, you get that the $K$ you are looking for is equal to $$K = ab + ac + bc$$
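A quick numerical confirmation of $K=ab+bc+ca$ on sample values (my own check, plain Python):

```python
from itertools import permutations

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = m
    return (a1 * (b2 * c3 - b3 * c2)
            - a2 * (b1 * c3 - b3 * c1)
            + a3 * (b1 * c2 - b2 * c1))

for a, b, c in permutations([1, 2, 3, 5], 3):
    lhs = det3([[1, 1, 1], [a**2, b**2, c**2], [a**3, b**3, c**3]])
    rhs = (a*b + b*c + c*a) * (a - b) * (b - c) * (c - a)
    assert lhs == rhs
print("K = ab + bc + ca holds on all sample points")
```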
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
For any natural $m \neq n$ show that $|\sqrt[n]{m} - \sqrt[m]{n}| > \frac{1}{mn}$. Here's my try. The inequality above is equivalent to $$|m^{\frac{1}{n}} - n^{\frac{1}{m}}|> \frac{1}{mn}$$ First, I want to get rid of the absolute value. Assume without loss of generality that $m>n$. Then $m^m > n^n$. Raising this inequality to the power of $\frac{1}{mn}$ gives me $m^{\frac{1}{n}} > n^{\frac{1}{m}}$. Next, I want to arrange everything so that it looks like $n^{\frac1n}-m^{\frac1m}$($m$ goes with $m$, $n$ goes with $n$ $-$ that's easier to consider). For that, I know when the function $f(x) = x^{\frac1x}$ is decreasing. As $f'(x) = (e^{\frac1x \cdot \ln{x}})' = x^{\frac1x - 2}\cdot (1- \ln{x})$, it is clear that $f(x)$ is decreasing on the ray $[e;+\infty )$. So, consider $m$, $n$ $\geq 3$. (Other cases can be checked numerically.) So, $m>n$, then $m^{\frac{1}{n}} > n^{\frac{1}{n}}$. At the same time $n^{\frac{1}{m}} < m^{\frac{1}{m}}$ $\Rightarrow$ $-n^{\frac{1}{m}} > -m^{\frac{1}{m}}$. Adding up these two inequalities gives me $$m^{\frac{1}{n}} - n^{\frac{1}{m}} > n^{\frac{1}{n}} - m^{\frac{1}{m}} > 0$$ Secondly, I want to utilize the fact that $m$ and $n$ are natural. As $m>n$, then $m-n\geq 1$. Dividing by $mn$ gives me $$\frac{m-n}{mn} = \frac1n - \frac1m \geq \frac1{mn}$$ Now I just have to prove this: $$n^{\frac{1}{n}} - m^{\frac{1}{m}} > \frac1n - \frac1m$$ Or, equivalently, $$ n^{\frac{1}{n}} - \frac1n > m^{\frac{1}{m}} - \frac1m$$ So, consider the function $g(x) = x^{\frac1x} - \frac1x $. I want to show that it is decreasing, starting at some point. Taking the derivative does nothing good: $g'(x) = \frac1{x^2} \cdot (x^{\frac1x} - \ln{x} \cdot x^{\frac1x} + 1)$, and I am unable to find the zeroes of it. WolframAlpha says that $g(x)$ is, indeed, decreasing from $x\approx 5.677$, and that is good, we can always check the answer for $m$, $n$ $\leq 5$. However, this is not a satisfactory solution.
Suppose that $m>n$, as a result of which $m^m>n^n$. If $m=2$, then $n=1$ and the proof is easy to complete; suppose thus that $m\ge 3$. Applying Lagrange's mean value theorem to the function $f(x)=x^{1/(mn)}$ on the interval $[n^n,m^m]$, we get $$ \sqrt[n]m-\sqrt[m]n = f(m^m)-f(n^n) = \frac1{mn}\,c^{-1+1/(mn)}(m^m-n^n), $$ where $c\in(n^n,m^m)$. Consequently, $$ \sqrt[n]m-\sqrt[m]n > \frac1{mn}\,m^{-m+1/n}\, (m^m-n^n), $$ and to complete the proof it suffices to show that $$ m^m-n^n > m^{m-1/n}. $$ Clearly, this will follow from $$ m^m-n^n>m^{m-1/m}, $$ and then, in view of $m^{1/m}>1+1/m$ (which is very easy to prove), from $$ \left(1+\frac1m\right)(m^m-n^n)>m^m. $$ The last inequality simplifies to $$ m^{m-1} > \left(1+\frac1m\right)n^n, $$ and this is true in view of $$ m^{m-1}\ge (n+1)^n \ge \left(1+\frac1n\right)n^n > \left(1+\frac1m\right)n^n. $$
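A brute-force numerical check of the inequality itself for small $m,n$ (my own addition):

```python
# Check |m^(1/n) - n^(1/m)| > 1/(mn) for all m != n up to 30.
for m in range(1, 31):
    for n in range(1, 31):
        if m != n:
            assert abs(m ** (1 / n) - n ** (1 / m)) > 1 / (m * n), (m, n)
print("inequality verified for all 1 <= m, n <= 30 with m != n")
```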
{ "language": "en", "url": "https://math.stackexchange.com/questions/3151907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Computing last two digits of $27^{2018}$ For abstract algebra I have to find the last two digits of $27^{2018}$, without the use of a calculator, and as a hint it says you should work in $\mathbb{Z}/100\mathbb{Z}$. I thought of breaking up the problem into $\bmod\ 100$ arguments. Thus: $27^{2}=729\equiv 29 \pmod{100}$, and $27^{4}=(27^{2})^{2} \equiv 29^{2}=841\equiv 41 \pmod{100}$, and $27^{8}=(27^{4})^{2} \equiv 41^{2}=1681 \equiv 81 \pmod{100}$, and so on, until I would find something that repeated itself. But I've done quite some terms now and I've not seen any repetition yet. So I'm thinking this is the wrong way. Any suggestions?
Because $\varphi(100)=40$ certainly the sequence of powers of $27$ will repeat after $40$ terms. A quick calculation shows that in fact it already repeats after $20$ terms. You could also use the Chinese remainder theorem to reduce the problem to computing $27^{2018}$ mod $25$ and mod $4$.
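A quick sanity check (my own addition, using Python's built-in three-argument pow):

```python
# The powers of 27 modulo 100 repeat with period 20:
order = next(k for k in range(1, 41) if pow(27, k, 100) == 1)
print(order)  # 20

# Hence 27**2018 ≡ 27**(2018 % 20) = 27**18 (mod 100):
print(pow(27, 2018, 100))  # 69, the last two digits
assert pow(27, 2018, 100) == pow(27, 2018 % order, 100)
```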
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 4 }
Reducible and Irreducible polynomials over $\Bbb Q$ Consider polynomials of the form: $$x^r-(1-x)^k,$$ for $r,k\ge2.$ $x\in(0,1).$ When $r=k$ the polynomial seems to be reducible, except at $r=k=2.$ Do the irreducible and reducible polynomials form a pattern when plotted? To try to visualise what was going on, I made a lattice of all the points representing reducible and irreducible polynomials of this form: $$ x^r=(1-x)^k. $$ I plotted each solution $x$ at a certain height $h$ in such a way that the solutions formed a grid. For example, all points greater than $2$ for $r=k,$ I colored green, because they are reducible, and plotted an $x$ value of $1/2$ and at different heights $h.$ Here's what it looks like for $r,k=\{2,3,4,5,6\}.$ Red=Irreducible, Green=Reducible.
Do the irreducible and reducible polynomials form a pattern when plotted? Depends on what you consider a pattern, but here are a few quick observations. Let $f_{r,k}(x)=x^r-(1-x)^k$; then:

* The graph is vertically symmetrical. This follows simply from the fact that if $f(x)$ is irreducible, then $f(-x)$ is irreducible, as is $f(x+1)$. In this case both together can be used to see that $f(1-x)$ is irreducible iff $f(x)$ is irreducible, which applied to $f_{r,k}(x)=f_{k,r}(-x+1)$ gives you the vertical symmetry.
* The middle vertical line consists entirely of green dots except for $r=k=2$ (where $f_{2,2}(x)=2x-1$ is linear, hence irreducible). This is because for $r=k$ we have that $f_{r,r}(x)=x^r-(1-x)^r$ is always a multiple of $(2x-1)$, hence reducible for $r>2$. To see this (I'm sure there are easier ways), consider for example \begin{align*} f_{r,r}(x)&=x^r-(1-x)^r\\ &=(2x-1+(1-x))^r-(1-x)^r\\ &=\sum \binom{r}{i}(2x-1)^i (1-x)^{r-i}-(1-x)^r. \end{align*} All of the terms in the sum are multiples of $2x-1$ except for the one with $i=0$, and so for some polynomial $P(x)$ we have \begin{align*} f_{r,r}(x)&=P(x)(2x-1)+(1-x)^r-(1-x)^r=P(x)(2x-1). \end{align*}
* There is a vertical line going through just red dots. There is a family of polynomials $f_{r+1,r}(x)$ which seem to be irreducible (as well as their symmetric counterparts $f_{r,r+1}(x)$). I could not find a proof suitable for all $r$'s, but I've checked for $r\leq 1000$ and they are all irreducible. Maybe someone will find a proof, but notice that such sequences of irreducible polynomials are quite common; the real challenge is to prove it. For example, consider:
* Is $ f_n=\frac{(x+1)^n-(x^n+1)}{x}$ irreducible over $\mathbf{Z}$ for arbitrary $n$?
* Is the polynomial $x^{2n-1}-4nx^{2n-2}-2\binom{2n}{3}x^{2n-4}-\dots-2\binom{2n}{3}x^{2}-4n$ irreducible?
* Irreducibility of q-factorial plus 1
* there are many more...
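As a small supplement (my own check, exact rational arithmetic via the standard fractions module), one can confirm the $(2x-1)$ factor of $f_{r,r}$ and rule out rational roots of $f_{r+1,r}$:

```python
from fractions import Fraction

def f(r, k, x):
    return x**r - (1 - x)**k

# f_{r,r}(1/2) = 0 exactly, so (2x - 1) divides x^r - (1-x)^r:
half = Fraction(1, 2)
assert all(f(r, r, half) == 0 for r in range(2, 50))

# f_{r+1,r} is monic with constant term -1, so its only possible rational
# roots are ±1; neither is a root. (Consistent with, though far from
# proving, the observed irreducibility.)
for r in range(2, 50):
    assert f(r + 1, r, Fraction(1)) != 0
    assert f(r + 1, r, Fraction(-1)) != 0
print("checks passed for r < 50")
```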
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Automata|The mid 1/3 of regular language is still regular Define $$L_{\frac{1}{3}}=\{w \in \Sigma^*\ |\ \exists x,y\in \Sigma^*,\ xwy\in L,\ |x|=|w|=|y|\}$$L is a regular language, is $L_{\frac{1}{3}}$ a regular language? I think it might be similar to the question of half of L. Automata | Prove that if $L$ is regular than $half(L)$ is regular too But I find it hard to build an automaton like that.
A powerful method to prove this result (and many similar ones) is to use the fact that a language is regular if and only if it is recognized by a finite monoid. A language $L$ of $A^*$ is recognized by a finite monoid $M$ if there is a surjective monoid morphism $f:A^* \to M$ and a subset $P$ of $M$ such that $f^{-1}(P) = L$. Let $\mathcal{P}(M)$ be the monoid of subsets of $M$, with product defined, for each $X, Y \in \mathcal{P}(M)$, by $$ XY = \{xy \mid x \in X, y \in Y\} $$ Let now $N$ be the commutative submonoid of $\mathcal{P}(M)$ generated by $f(A)$. I claim that $L_{1/3}$ is recognized by $M \times N$ and hence is regular. Indeed, let $g: A^* \to M \times N$ be the monoid morphism defined, for each letter $a \in A$, by $g(a) = (f(a), f(A))$. Thus, for each word $u \in A^*$, $$ g(u) = (f(u), f(A)^{|u|}) $$ Setting $$ Q = \{ (m,R) \in M \times N \mid RmR \cap P \not= \emptyset \}, $$ one gets \begin{align} g^{-1}(Q) &= \{ u \in A^* \mid g(u) \in Q \} = \{ u \in A^* \mid f(A)^{|u|}f(u)f(A)^{|u|} \cap P \not= \emptyset \}\\ &= \{ u \in A^* \mid f(A^{|u|}uA^{|u|}) \cap P \not= \emptyset \} = \{ u \in A^* \mid (A^{|u|}uA^{|u|}) \cap f^{-1}(P) \not= \emptyset \} \\ &= \{ u \in A^* \mid A^{|u|}uA^{|u|} \cap L\not= \emptyset \}\\ &= L_{1/3} \end{align} which proves the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Weak convergence If $g \in L^p(\mathbb{R})$ is a given non-trivial function, show that the following sequences converge weakly in $L^p$ but not strongly in $L^p$. (a) $g_k(x)=k^{1/p}g(kx)$. (b) $h_k(x)=g(x+k)$. I need to show that for every $f \in L^q$, where $q$ is the conjugate exponent to $p$, we have $$\int k^{1/p}g(kx)f(x)\rightarrow \int u(x)f(x)$$ for some $u\in L^p$. Similarly for part (b). I used the change-of-variables technique but I could not simplify the integral.
You need $1<p<\infty$. The arguments for a) and b) are similar. Here are some hints for a): Claim: the weak limit is $0$. Start with $\int k^{1/p} g(kx)f(x)dx=\int k^{1/p-1} g(y)f(\frac y k)dy$. Split the integral into the integral over $|y| \leq M$ and the integral over $|y| >M$. Use Holder's inequality for the second part. Observe that the norm of $k^{1/p} g(kx)$ in $L^{p}$ is the same as that of $g$. In view of this it is enough to prove the result for $f$ in some dense subset of $L^{q}$. The set of functions in $L^{q}$ that vanish in some neighborhood of $0$ is dense, and for these functions the first term tends to $0$ (in fact it is identically $0$ for $k$ sufficiently large).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
is this convex inequality possibly true? Let $\alpha_1,\dots,\alpha_k$ be non-negative and sum to $1$ and let $x_1,y_1,\dots,x_k,y_k$ be positive. Then is it true that $$\prod_{i=1}^kx_i^{\alpha_i}+\prod_{i=1}^ky_i^{\alpha_i}\leq\sum_{i=1}^k(x_i+y_i)^{\alpha_i}?$$ This is example $1.2.3$ in Convex Analysis and Minimization Algorithms $I$ by Hiriart-Urruty and Lemaréchal.
Consider what happens when all the $x_i$'s and $y_i$'s equal some value $x$ and the $\alpha_i$'s equal $1/k$. Then the claim is that $$ 2x=2\prod_{i=1}^kx^{1/k}\leqslant \sum_{i=1}^k(2x)^{1/k}=k(2x)^{1/k}. $$ But if $x>(2^{1/k - 1} k)^{1/(1 - 1/k)}$ this is false.
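Instantiating the counterexample numerically (my own check, $k=2$, any $x$ above the stated threshold $\tfrac12 k^{k/(k-1)} = 2$):

```python
k = 2
x = 3.0  # any x above (1/2) * k**(k/(k-1)) = 2 breaks the inequality
# With x_i = y_i = x and alpha_i = 1/k, both products equal x:
lhs = x + x
rhs = k * (2 * x) ** (1 / k)  # sum of (x_i + y_i)**alpha_i
print(lhs, rhs)  # 6.0 vs about 4.899 -- LHS exceeds RHS
assert lhs > rhs
```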
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Definitionally prove that $\lim_{x \to 0}\frac{f(x)-f(0)}{x^2} = \frac{f''(0)}{2}$ $$\lim_{x \to 0}\frac{f(x)-f(0)}{x^2} = \frac{f''(0)}{2}\quad (f'(0) = 0)$$ It seems quite a rudimentary problem, but I can't find an appropriate solution without using L'hospital's rule and Maclaurin series. Is it possible that a problem can not be solved without them?
Proof using MVT: let $g(x)=f(x)-\frac 1 2 x^{2}f''(0)$. Then $g'(0)=f'(0)=0$ and $g''(0)=0$. If we prove the result for $g$ then the result for $f$ follows immediately. Now $\frac {g(x)-g(0)} {x^{2}}=\frac {g'(\xi_x)} {x}$ for some $\xi_x$ between $0$ and $x$. But $\frac {g'(\xi_x)} {x} =\frac {g'(\xi_x)} {\xi_x} \cdot\frac {\xi_x} x \to 0$ because $\frac {\xi_x} x$ is bounded and $\frac {g'(\xi_x)} {\xi_x}\to g''(0)=0$.
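A quick numerical illustration (my addition): take $f=\cos$, so that $f'(0)=0$ and $f''(0)=-1$, and the quotient should approach $-1/2$:

```python
import math

f = math.cos  # f'(0) = 0, f''(0) = -1, so the limit should be -1/2
for h in (1e-1, 1e-2, 1e-3):
    print(h, (f(h) - f(0)) / h**2)  # tends to -0.5
```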
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Proximity Operator of a Function with Multiple Affine Mapping Let $f(\mathbf{x}) = g(\mathbf{A}\mathbf{x})$, where $\mathbf{A} \in \mathbb{R}^{M \times N}$ is a linear transformation satisfying $\mathbf{A}\mathbf{A}^T = \mathbf{I}$. Then for any $\mathbf{x} \in \mathbb{R}^{N}$, \begin{equation} \text{prox}_f (\mathbf{x}) = \mathbf{x} + \mathbf{A}^T (\text{prox}_g(\mathbf{Ax}) - \mathbf{Ax}). \end{equation} Now, if $f(\mathbf{x}) = \sum_{p=1}^{P} g(\mathbf{A}_p\mathbf{x})$, where the $\mathbf{A}_p \in \mathbb{R}^{M \times N}$ are multiple linear transformations satisfying $\mathbf{A}_p\mathbf{A}_p^T = \mathbf{I}$, then for any $\mathbf{x} \in \mathbb{R}^{N}$, what would be the proximal mapping for the new $f(\mathbf{x})$?
Even in the simpler case where $P=2$ and $A_1=A_2= \textbf{I}$, there does not exist a closed form solution for the proximal operator of the sum. If you are interested in solving an optimization problem check the keywords "Douglas-Rachford" and "splitting".
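As a side note (my own toy check, not from the answer above): for the single-transform case the quoted formula can be verified numerically, e.g. with $g(y)=y^2/2$, whose prox is $v/2$, and a $1\times 2$ matrix $A$ satisfying $AA^T=1$:

```python
import math

# Toy check of prox_f(x) = x + A^T (prox_g(Ax) - Ax) when A A^T = I,
# using g(y) = y^2/2 (so prox_g(v) = v/2) and a 1x2 matrix A.
s = 1 / math.sqrt(2)
A = [s, s]  # A @ A^T = 1/2 + 1/2 = 1

def prox_formula(x):
    Ax = A[0] * x[0] + A[1] * x[1]
    d = Ax / 2 - Ax  # prox_g(Ax) - Ax
    return [x[0] + A[0] * d, x[1] + A[1] * d]

def prox_direct(x):
    # The minimiser of (1/2)(Au)^2 + (1/2)|u - x|^2 solves (I + A^T A) u = x.
    # Here A^T A = [[1/2, 1/2], [1/2, 1/2]]; solve the 2x2 system by hand.
    M = [[1.5, 0.5], [0.5, 1.5]]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * x[0] - M[0][1] * x[1]) / det,
            (M[0][0] * x[1] - M[1][0] * x[0]) / det]

x = [1.0, 3.0]
pf, pd = prox_formula(x), prox_direct(x)
print(pf, pd)  # both approximately [0.0, 2.0]
```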
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the tangent at a sharp point on a curve? How do I know which line represents the tangent to a curve $y=f(x)$ (in RED)? From the diagram, I cannot decide which line to take as the tangent; all seem to touch at a single point.
Since we're doing geometry, let's think kinematically, too. Imagine a point moving along your curve, in the direction of increasing $x.$ Furthermore, let a ray project from the point, tangent to the curve there (you might think of your curve as a lane, and the point a car in the night; imagine the headlights beaming straight ahead as the tangent ray at any location on the curve). Then as you approach the cuspoid from the left, the ray points down; at that point therefore, we can say it points down from the left. Now, let the moving point turn at that point without leaving it; then the tangent ray points up to the right -- at the same point! That happens nowhere else on the curve. Thus, we have two possible candidates for the tangent using this definition of tangent. If we require the tangent at a point to be unique, then there is no way to pick one of the two at the cuspoid without making further assumptions (which will be more or less arbitrary). The same thing happens for a point approaching the offending point from the right along the curve. Thus, the curve has no unique tangent at that point. The reason is that you can think of the two branches of the curve as separate curves intersecting at a nonzero angle at that point. Suppose, for example, that we have two semicircular arcs instead, of the same radius and touching externally. For more clarity, take the positive parts of $(x+1)^2+y^2=1$ and $(x-1)^2+y^2=1.$ Then if we do not consider our tangents as directed, we can have a unique tangent at the origin in this case, since the two arcs are parallel at that place. However, if we think in terms of functions, there is still no derivative at that point, since the derivative measures the slope of the tangent line wrt increasing $x.$ However, if you imagine our point moving along this curve from left to right and crossing the origin, then the tangent ray points in opposite directions immediately before and after the point. Thus, there is no derivative there. 
However, purely geometrically, it is possible to speak of a tangent in this case. I hope I have contributed to your understanding.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3152962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 7, "answer_id": 4 }
Show that matrix $A$ is similar to a matrix $B$ with elements on diagonal $(0, ..., 0, \operatorname{Tr}(A))$ respectively. Let $A$ be an $n \times n$ matrix, $n \geq 2$. Let's assume that not all entries outside of the diagonal are zeros (we don't know what entries are on the diagonal). Show that matrix $A$ is similar to a matrix $B$ with elements on the diagonal $(0, ..., 0, \operatorname{Tr}(A))$ respectively. We know that $\operatorname{Tr}(A) = \sum_{i=1}^n \lambda_i$, where $\lambda_i$ is the $i$-th eigenvalue of $A$. I found a similar problem with a solution using the Rational Canonical Form. However, so far on the course we have only developed the Jordan Form and I believe we should use it to solve this problem. Using Jordan Normal Form to show $M$ is similar to a matrix whose diagonal is $(0, 0, \operatorname{Tr}(M))$? Any help would be greatly appreciated.
Hint Prove the claim by induction on the size of the matrix, with inductive hypothesis that the claim holds for $n \times n$ matrices. For an $(n + 1) \times (n + 1)$ matrix $A$, decompose $$A = \pmatrix{\lambda&\ast\\\ast&B} ,$$ where $B$ has dimension $n \times n$. By the inductive hypothesis there is a matrix $Q$ such that $Q B Q^{-1}$ has $B_{11} = \cdots = B_{n - 1, n - 1} = 0$. Then, conjugate $A$ by $P := \pmatrix{1&\cdot\\\cdot&Q}$. Doing so gives $$A' := P A P^{-1} = \pmatrix{\lambda&\ast\\\ast&Q B Q^{-1}}$$ and in particular its diagonal entries except perhaps the $(1, 1)$ and $(n + 1, n + 1)$ entries are zero. If $\lambda = 0$, $A'$ has the desired form. If not, it remains to find a matrix by which conjugation clears the $(1, 1)$ entry, and we can take $$\pmatrix{-A'_{n + 1, 1} / \lambda&\cdot&1\\\cdot&I_{n - 1}&\cdot\\\cdot&\cdot&1} .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to form Matrix pairs of $m$ friends and to find common friend from all possible pairs. The problem below was given in an entrance exam. Problem: A golf club has $m$ members with serial numbers $1, 2, \ldots , m$. If members with serial numbers $i$ and $j$ are friends, then $A(i, j) = A(j, i) = 1$, otherwise $A(i, j) = A(j, i) = 0$. By convention, $A(i, i) = 0$, i.e. a person is not considered a friend of himself or herself. Let $A^k(i, j)$ refer to the $(i, j)$th entry in the $k^{th}$ power of the matrix $A$. Suppose it is given that $A^9(i, j) > 0$ for all pairs $i,j$ where $1 ≤ i,j ≤ m$, $A^2(1,2) > 0$ and $A^4(1, 3) = 0$. Determine whether the statements below are necessarily true, and provide reasons.

* Members $1$ and $2$ have at least one friend in common.
* $m≤9$
* $m≥6$
* $A^2(i,i)> 0$ for all $i$, $1≤i≤m.$

My approach: I tried to record the $A(i, j)$ pairs from the question. $$ \begin{array}{c|lcr} (i, j) & \text{1} & \text{2} & \cdots \\ \hline 1 & 0 & 1 & \cdots \\ 2 & 1 & 0 & \cdots \\ \vdots & \vdots & \vdots \\ \end{array} $$ Given that $A^2(1, 2) > 0$, $A$ gives the matrix below. $$ \left[ \begin{array}{cc|c} 0&1\\ 1&0 \end{array} \right] $$ And $A^2$ gives the matrix below. $$ \left[ \begin{array}{cc|c} 1&0\\ 0&1 \end{array} \right] $$ Its determinant is 1. I could not proceed further, as I am still not sure whether my approach to this problem is correct. Can someone please explain the approach to this problem, and also let me know if there exists a book which contains these types of problems, as that would help me a lot.
Not a complete solution, but some thoughts. The well-known fact (you can easily prove it by induction) is that the $k$th power of the adjacency matrix $A$ is the matrix of paths of length $k$. Thus, $A^{k}(i, j)$ equals the number of paths from $i$ to $j$ that have length $k$. In these terms, $A^9(i, j) > 0$ means there is a path from $i$ to $j$ of length 9, $A^2(1, 2) > 0$ means there is a path from $1$ to $2$ of length 2, and $A^4(1, 3) = 0$ means there is no path from $1$ to $3$ of length 4. Now, the claim "members $1$ and $2$ have at least one friend in common" immediately follows from $A^2(1, 2) > 0$, as you have a path $1 \to x \to 2$ and $x$ is a common friend of $1$ and $2$.
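A tiny illustration of this path-counting fact (hypothetical 3-member club, plain Python):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Members 1, 2, 3 (0-indexed 0, 1, 2); friendships: 1-3 and 2-3.
A = [[0, 0, 1],
     [0, 0, 1],
     [1, 1, 0]]
A2 = matmul(A, A)
print(A2[0][1])  # 1: the single length-2 path 1 -> 3 -> 2,
                 # i.e. member 3 is the common friend of 1 and 2
```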
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of ways to distribute 20 identical pencils to 6 non-identical children with restrictions How many ways are there to pass out 20 pencils (assume all the pencils are identical) to six children? Based on the following conditions: a) No restriction (i.e. each kid may receive zero to 20 pencils). b) Every child receives at least one pencil. c) No two children receive the same number of pencils. d) If the pencils are given out randomly, what is the probability that at least two kids receive the same number of pencils, given that every kid receives at least one pencil? I have calculated a) using C(25,20) = 53,130. I calculated b) using C(19,14) = 11,628. For c) I get the answer 840 using the following program:

class final34 {
    public static void main(String[] args) {
        int u, v, x, y, z;
        int count = 0;
        int last = 0;
        for (u = 1; u < 11; u++) {
            for (v = 1; v < 11; v++)
                if (u != v)
                    for (x = 1; x < 11; x++)
                        if (x != u && x != v)
                            for (y = 1; y < 11; y++)
                                if (y != u && y != v && y != x)
                                    for (z = 1; z < 11; z++)
                                        if (z != u && z != v && z != x && z != y)
                                            if (u + v + x + y + z == 20) {
                                                System.out.println(u + " " + v + " " + x + " " + y + " " + z);
                                                count++;
                                            }
            System.out.println("Count after iteration " + u + ": " + (count - last));
            last = count;
        }
        System.out.println("Total Count: " + count);
    }
}

I see the output:

Count after iteration 1: 144
Count after iteration 2: 144
Count after iteration 3: 120
Count after iteration 4: 120
Count after iteration 5: 96
Count after iteration 6: 72
Count after iteration 7: 48
Count after iteration 8: 48
Count after iteration 9: 24
Count after iteration 10: 24
Total Count: 840

But I don't know how to prove this using discrete mathematics. Also, I need help with d).
I realized that there is no way to generalize the answer for question c); it needs a special solution. First of all, out of 6 children, 1 child has to get 0 pencils. After that, 5 children remain. There are just 7 sets of distinct positive values summing to 20 that can be distributed among them, which are: (1,2,3,4,10) (1,2,3,5,9) (1,2,3,6,8) (1,2,4,5,8) (1,2,4,6,7) (1,3,4,5,7) (2,3,4,5,6) Now, for each of these 7 cases, there are 5! ways to arrange them in order. Therefore, we get 120*7 = 840 ways. Now, coming back to the first child, to whom we gave 0 pencils: it can be any of the 6 children. So, 840*6 = 5040 is the final answer. I could find the first 5 cases by myself, but, as anyone can see, the last two cases are harder, and many cases need to be checked before one can figure those two out. That's why I used a program to compute these. I am also providing it here so that it is helpful for others.

import java.util.*;

class final34 {
    public static void main(String[] args) {
        int u, v, x, y, z;
        int count = 0;
        int last = 0;
        List<List<Integer>> val = new ArrayList<List<Integer>>();
        for (u = 1; u < 11; u++) {
            for (v = 1; v < 11; v++)
                if (u != v)
                    for (x = 1; x < 11; x++)
                        if (x != u && x != v)
                            for (y = 1; y < 11; y++)
                                if (y != u && y != v && y != x)
                                    for (z = 1; z < 11; z++)
                                        if (z != u && z != v && z != x && z != y)
                                            if (u + v + x + y + z == 20) {
                                                List<Integer> temp = new ArrayList<>();
                                                temp.add(u);
                                                temp.add(v);
                                                temp.add(x);
                                                temp.add(y);
                                                temp.add(z);
                                                Collections.sort(temp);
                                                boolean check = false;
                                                for (List a : val) {
                                                    Collections.sort(a);
                                                    if (a.equals(temp)) {
                                                        check = true;
                                                    }
                                                }
                                                if (!check) {
                                                    val.add(temp);
                                                }
                                                count++;
                                            }
            System.out.println("Count after iteration " + u + ": " + (count - last));
            last = count;
        }
        System.out.println("Count: " + count);
        System.out.println(val);
    }
}
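An independent brute-force count of part c) (my own sketch in Python, enumerating value sets rather than ordered tuples) confirms the $7 \times 6! = 5040$ answer:

```python
from itertools import combinations
from math import factorial

# Part c): six labelled children, all counts distinct, total 20 pencils.
subsets = [s for s in combinations(range(21), 6) if sum(s) == 20]
print(len(subsets))                  # 7 value sets
assert all(0 in s for s in subsets)  # one child is always forced to get 0
total = len(subsets) * factorial(6)  # distinct values -> 6! assignments each
print(total)                         # 5040
```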
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the maximum integer value $n$ such that $2^n\mid 3^{1024} -1$ Since $2^{10} = 1024$, we can write $3^{1024}-1 = ( 3 - 1 )\prod_{i=0}^{9} \left(3^{2^i}+1\right)$ And then we can start eliminating the factors $3-1=2$ and $(3^2+1)= 2\times 5$ But then? I guess I could calculate, but that is no different from just calculating from the start. What is a better method?
Hint For $i \geq 1$ you have: $$3^{2^i}+1 \equiv (-1)^{2^i}+1 \equiv 1+1 \equiv 2 \pmod{4}$$
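Putting the hint together (my own summary): $(3-1)$ contributes one factor of $2$, $(3+1)=4$ contributes two, and each of the nine factors $(3^{2^i}+1)$ for $i=1,\dots,9$ contributes exactly one, so $n = 1+2+9 = 12$. A direct check in Python:

```python
n = 3**1024 - 1
v2 = (n & -n).bit_length() - 1  # exponent of the largest power of 2 dividing n
print(v2)  # 12
assert n % 2**v2 == 0 and n % 2**(v2 + 1) != 0
```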
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove that there exist $135$ consecutive positive integers so that the $n$th least is divisible by a perfect $n$th power greater than $1$ Prove that there exist 135 consecutive positive integers so that the second least is divisible by a perfect square $> 1$, the third least is divisible by a perfect cube $> 1$, the fourth least is divisible by a perfect fourth power $> 1$, and so on. How should I go about doing this? I thought perhaps I should use Fermat's little theorem, or its corollary? Thanks!
Use the Chinese Remainder Theorem. Pick $134$ distinct primes. The perfect square is the square of the first, the cube is the cube of the second, and so on. All your moduli are distinct, so CRT guarantees a solution. If you use the smallest primes in order and $N$ is the least of your $135$ numbers, you have $N+1 \equiv 0 \pmod {2^2}, N+2 \equiv 0 \pmod {3^3}, N+3 \equiv 0 \pmod {5^4}\ldots$
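A toy version of this construction for just three consecutive integers (my own sketch; the CRT helper below is hand-rolled rather than a library call):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise coprime moduli, incrementally."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        g, p, _ = egcd(m, mi)   # p is the inverse of m modulo mi
        x += m * ((r - x) * p % mi)
        m *= mi
    return x % m

# Toy version: N+1 divisible by 2^2, N+2 by 3^3, N+3 by 5^4.
moduli = [4, 27, 625]
N = crt([-1, -2, -3], moduli)
print(N)
assert (N + 1) % 4 == 0 and (N + 2) % 27 == 0 and (N + 3) % 625 == 0
```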
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to find the angle in a protein which is inside of a triangle which appears inscribed in a circle? I'm confused about which property or identity can be used to find the angle in a triangle when it looks inscribed in a circle but one of its sides doesn't appear to pass through the center. I'm also confused about the other angles, particularly the ones near the tangent line. The problem is as follows: A certain serum protein is under research for its optical properties in a medical laboratory in Taichung. The results show a circle whose points $A$, $B$, $C$, $D$ and $E$ represent the atoms of the protein crystal. In order to find the protein's optical properties a technician uses a set of beams which draw lines and form angles in the circle. One beam is tangent to the circle, while the others draw angles labeled $\omega$, which are congruent. What would be the angle represented by $\phi$? The given alternatives in my book are: $\begin{array}{ll} 1.& 36^{\circ} \\ 2.& 45^{\circ} \\ 3.& 53^{\circ} \\ 4.& 72^{\circ} \\ 5.& 108^{\circ} \\ \end{array}$ What I tried to do is to draw the known information in the sketch as shown below. Since the beam is tangent to the circle, the angles on one side of it at the point of tangency must add up to $180^{\circ}$. So I assumed all the $\omega$ angles together cover the whole tangent beam (colored in blue), therefore: $\omega+\omega+\omega+\omega+\omega=180^{\circ}$ $5\omega=180^{\circ}$ $\omega=\frac{180^{\circ}}{5}=36^{\circ}$ Then I drew the chord $CD$, and from there the other identity which I could spot was that the arc $\overset{\frown}{AED}$ subtends both $\angle ABD$ and $\angle ACD$, therefore: $\angle ABD = \angle ACD$ Now the part where I'm still doubtful is whether there is a way to prove that $\angle ADB = \angle BAD = \angle CDB$? The only thing that I could come up with was to establish that the segment $AB = CD$. 
By doing this, triangle congruence could be used to state that the angles mentioned earlier are equal, but again I don't know if that's right. Assuming that what I did was okay, the rest would just be using the formula for the sum of the interior angles of a triangle, as follows: $2\omega + \phi + \omega = 180^{\circ}$ $3\omega+\phi=180^{\circ}$ $\phi=180^{\circ}-3\left(36^{\circ}\right)=180^{\circ}-108^{\circ}=72^{\circ}$ This corresponds to the fourth alternative. But as mentioned, I'm not very sure whether what I did was the right thing. Therefore, can somebody help me with this and clear up my doubts?
Well, as you know, $\omega=36^\circ$. Now, the bottom two $\omega$’s and $\phi$ cut out the same arc of the circle, namely the arc $AED$, so that $\phi=2\omega=72^\circ$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Using spherical coordinates for triple integral $ \int_{0}^3 \int_{0}^{\sqrt{9-x^2}} \int_{0}^{\sqrt{9-x^2-y^2}} \frac{\sqrt{x^2+y^2+z^2}}{1+x^2+y^2+z^2} \ dz \ dy \ dx$ Using spherical co-ord's this becomes : $ \int_{0}^{2\pi} \int_{0}^\pi \int_{0}^3 \frac{r^3\sin(\theta)}{1+r^2} \ dr \ d\theta \ d\phi $ Is this correct? If it is how do I carry on from here?
It looks fine to me, except for the limits of integration. The next step is to write it as$$\frac\pi2\left(\int_0^{\frac\pi2}\sin(\theta)\,\mathrm d\theta\right)\left(\int_0^3\frac{r^3}{1+r^2}\,\mathrm dr\right).$$
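For readers who want a numerical sanity check on the corrected limits, here is a rough midpoint-rule sketch (grid size chosen arbitrarily) comparing the original Cartesian integral over the first octant with the closed form that the spherical product evaluates to, using $\int_0^3 \frac{r^3}{1+r^2}\,dr = \frac12(9-\ln 10)$:

```python
import math

# Midpoint-rule estimate of the Cartesian integral over the first-octant
# eighth of the ball of radius 3.
n = 60
h = 3.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        for k in range(n):
            z = (k + 0.5) * h
            r2 = x * x + y * y + z * z
            if r2 <= 9.0:
                total += math.sqrt(r2) / (1.0 + r2) * h**3

# (pi/2) * (int_0^{pi/2} sin(t) dt = 1) * (1/2)(9 - ln 10)
exact = math.pi * (9.0 - math.log(10.0)) / 4.0
print(total, exact)  # both near 5.26
```

The agreement (to roughly two decimal places at this resolution) confirms that both angular ranges must cover only one octant, i.e. $\theta,\phi\in[0,\pi/2]$.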
{ "language": "en", "url": "https://math.stackexchange.com/questions/3153966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sheafification of a given presheaf Let $\mathcal{F}$ be a presheaf on $\mathbb{R}$ such that $\mathcal{F}(U)$ is the abelian group of continuous functions with bounded support on $U$. Then what is the sheafification of $\mathcal{F}$? I guess the sheafification should be the abelian group of continuous functions, but how should I prove it rigorously? Thanks in advance!
Let $F$ be our presheaf of continuous functions with bounded support, $G$ be the sheaf of continuous functions, $F^+$ the sheafification of $F$. Every continuous function with bounded support is a continuous function, so we have an injective morphism $F\to G$, which induces a map $F^+\to G$ by definition of the sheafification. To show that this is an isomorphism, we just need to prove that it is an isomorphism on stalks. However, $F^+$ has the same stalks as $F$, so it suffices to prove that the map $F\to G$ induces an isomorphism on stalks. However, this is fairly clear. We already know injectivity, so it just remains to check surjectivity. Pick $x\in\Bbb{R}$. Choose $U$ some bounded open neighborhood of $\Bbb{R}$. Then for any $[(f,W)]\in G_x$, we have that the class $[(f|_{W\cap U},W\cap U)]\in F_x$ maps to $[(f,W)]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3154116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Definition linear ODE An ordinary differential equation is said to be linear if $$F(t,y(t),...,y^{(n)}(t))=0$$ is linear in every derivative. I run into a little problem when using this definition for the equation $$y'y=0$$ because we have that $$F(t,\alpha y_1(t)+\beta y_2(t),y'(t))=(\alpha y_1(t)+\beta y_2(t))y'(t)=\alpha y_1(t)y't(t) + \beta y_2(t)y'(t)\\ =\alpha F(t,y_1(t),y'(t)) + \beta F(t,y_2(t),y'(t))$$ and vice versa for $y'(t)$. This shows that the ODE is linear. But an equivalent definition of linearity states that it has to have the form $F(t,y(t),...,y^{(n)}(t))=\sum_{k=0}^n a_i(t)y^{(i)}(t) - g(t)=0$ With this definition the ODE is not linear anymore. Not sure what my mistake is there.
We have $F(t,y(t),y’(t))=yy’$. Note that $$F(\alpha t, \alpha y(t), \alpha y’(t)) =\alpha^2yy’ \neq \alpha yy’=\alpha F(t,y(t),y’(t))$$ and thus $F$ is not linear in its arguments, implying that it is not a linear differential equation using your first definition.
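A quick numerical illustration of this failure of linearity (the sample values below are arbitrary):

```python
# F(t, y, y') = y * y' for the ODE y y' = 0
def F(t, y, yp):
    return y * yp

t0, y0, yp0, alpha = 0.3, 2.0, 5.0, 3.0

# scaling (y, y') by alpha scales F by alpha**2, not by alpha
assert F(t0, alpha * y0, alpha * yp0) == alpha**2 * F(t0, y0, yp0)
assert F(t0, alpha * y0, alpha * yp0) != alpha * F(t0, y0, yp0)
```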
{ "language": "en", "url": "https://math.stackexchange.com/questions/3154285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A sum of a product of binomial coefficients I am trying to simplify the following summation of products of binomial coefficients: $$\sum_{k=0}^n \binom{k+b}{a} \binom{y+n-k}{y}$$ where $b > a$ (specifically, $b = 2a+1$). I have searched through some of the usual resources (Gradshteyn and Ryzhik, HW Gould, the DLMF, Abramowtiz and Stegun, Wolfram's online resources) and have come up empty so far. If anyone has any ideas on how to approach this problem, I would be very thankful!
You can show that $$ \sum_{k=a-b}^n \binom{k+b}{a} \binom{y+n-k}{y} %=\sum_{h=0}^{n+b-a}\binom{h+a}a\binom{y+n+b-a-h}{y} =\binom{b+n+y+1}{a+y+1}.\tag{*} $$ That is, your summation is a nice one minus some missing terms. Therefore, your summation is $$ \boxed{\binom{b+n+y+1}{a+y+1}-\sum_{k=a-b}^{-1}\binom{k+b}a\binom{y+n-k}y}$$ To prove $(*)$, use $$ \sum_{i=k}^{n-h} \binom{i}k\binom{n-i}h=\binom{n+1}{k+h+1}\tag{**}$$ and reindex appropriately. A combinatorial proof of this $(**)$: to choose a subset of $\{1,2,\dots,n,n+1\}$ of size $k+h+1$ whose $(k+1)^{st}$ smallest element is equal to $i+1$, you choose the $k$ elements below $i+1$, and the $h$ elements which are above $i+1$.
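The identity $(*)$ is easy to spot-check by brute force; here is a small sketch doing so for all small parameter values (including the $b = 2a+1$ case from the question):

```python
from math import comb

def lhs(a, b, y, n):
    # sum_{k=a-b}^{n} C(k+b, a) * C(y+n-k, y)
    return sum(comb(k + b, a) * comb(y + n - k, y) for k in range(a - b, n + 1))

def rhs(a, b, y, n):
    return comb(b + n + y + 1, a + y + 1)

for a in range(4):
    for b in range(a, a + 5):      # b >= a, covering b = 2a+1 in particular
        for y in range(4):
            for n in range(5):
                assert lhs(a, b, y, n) == rhs(a, b, y, n)
print("identity (*) holds on all tested cases")
```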
{ "language": "en", "url": "https://math.stackexchange.com/questions/3154464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solve tan(x)+cos(x)=1/2 Is it possible (not numerically) to find the $x$ such that $$ \tan(x)+\cos(x)=1/2 $$ ? All my tries finish in a 4th degree polynomial. For example, calling $c = \cos(x)$: $$ \frac{\sqrt{1-c^2}}{c}+c=\frac{1}{2} $$ $$ \sqrt{1-c^2}+c^2=\frac{1}{2}c $$ $$ 1-c^2=c^2(\frac{1}{2}-c)^2=c^2(\frac{1}{4}-c+c^2) $$ $$ c^4-c^3+\frac{5}{4}c^2-1=0 $$
If we set $X=\cos x$ and $Y=\sin x$, the equation becomes $$ Y=\frac{1}{2}X-X^2 $$ so the problem becomes intersecting this parabola with the circle $X^2+Y^2=1$. This is generally a degree four problem. The image suggests there is no really elementary way to find the intersections. The equation becomes $$ X^4-X^3+\frac{5}{4}X^2-1=0 $$ as you found out. The two real roots are approximately $$ -0.654665139167 \qquad 0.921490878816 $$ Note that squaring assumed $\sin x\ge0$, while in fact $\sin x=\frac{1}{2}X-X^2$ is negative at both roots, so of the candidates $x=\pm2.284535877184578$ and $x=\pm0.39889463967156$ only the negative signs solve the original equation (up to multiples of $2\pi$); this matches what WolframAlpha finds.
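A quick numerical check (plain bisection, no libraries) of the two real roots, and of which sign of $x$ actually satisfies the original equation given that $\sin x = X/2 - X^2$ is negative at both roots:

```python
import math

def p(c):
    return c**4 - c**3 + 1.25 * c**2 - 1

def bisect(f, lo, hi, tol=1e-12):
    # assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

roots = [bisect(p, -1.0, 0.0), bisect(p, 0.5, 1.0)]
print([round(r, 6) for r in roots])  # [-0.654665, 0.921491]

# sin x = X/2 - X**2 < 0 at both roots, so take x = -arccos(X)
for c in roots:
    x = -math.acos(c)
    assert abs(math.tan(x) + math.cos(x) - 0.5) < 1e-9
```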
{ "language": "en", "url": "https://math.stackexchange.com/questions/3154610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Show if $n \sqrt n \in O(n^4)$ I'm quite confused as to how to solve this step by step. What I have done was graph this using Desmos graphing software to visually see it, and I can see that the $n \sqrt n $ is below $n^4$. I thought I could do something like this: $n \sqrt n \lt n^4$ $ \sqrt n \lt n^3$ $n^{1/2} \lt n^4$ $1 \lt n^4 / n^{1/2}$ $1 \lt n^{7/2}$ So based on this, $n \sqrt n$ is not bigO of $O(n^4)$. Is that correct? Or would I say it is BigO iff $n > 1$ ? Thank you.
${{n\sqrt n}\over n^4}={1\over n^{5/2}}$ and $\lim\limits_{n\rightarrow +\infty}{1\over n^{5/2}}=0$. Since the ratio tends to $0$, it is in particular bounded for $n\ge1$, so $n\sqrt n\in O(n^4)$ (in fact $n\sqrt n\in o(n^4)$).
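A tiny numerical sketch of the same fact, watching the ratio $n^{3/2}/n^4 = n^{-5/2}$ collapse:

```python
import math

prev = float("inf")
for n in [10, 100, 1000, 10_000]:
    ratio = n * math.sqrt(n) / n**4
    assert math.isclose(ratio, n ** -2.5, rel_tol=1e-12)
    assert ratio < prev   # strictly decreasing toward 0
    prev = ratio

# in particular n*sqrt(n) <= n**4 for every n >= 1
assert all(m * math.sqrt(m) <= m**4 for m in range(1, 1000))
```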
{ "language": "en", "url": "https://math.stackexchange.com/questions/3154744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is there any bijection between $\mathbb{R}$ and $\mathbb{R}^2$? Is there any bijection between $\mathbb{R}$ and $\mathbb{R}^2$? If so, what is the mapping? Please define the mapping. They have the same cardinality, so it is possible to have a bijection between them.
Hint: first, show that $\mathbb{R} \simeq (0,1)$. Then, observe that if $(0,1) \simeq (0,1)^2$, then $\mathbb{R} \simeq \mathbb{R}^2$. For the former, one can consider a map such as $0.a_1 a_2 a_3 a_4 \dots \mapsto (0.a_1 a_3 \dots,0.a_2a_4\dots) $.
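A sketch of the hinted digit-interleaving map on finite digit strings (a full proof must also deal with non-unique decimal expansions such as $0.4999\ldots = 0.5$, which this toy version ignores):

```python
def split_digits(s: str):
    """Send 0.a1a2a3a4... to (0.a1a3..., 0.a2a4...); s is a finite digit string."""
    digits = s[2:]                      # drop the leading "0."
    return "0." + digits[0::2], "0." + digits[1::2]

def interleave(p: str, q: str) -> str:
    """Inverse direction, for digit strings of equal length."""
    return "0." + "".join(a + b for a, b in zip(p[2:], q[2:]))

x, y = split_digits("0.123456")
assert (x, y) == ("0.135", "0.246")
assert interleave(x, y) == "0.123456"
```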
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding 01 or 00 A and B play a bit game. An unbiased bit generator generates 0 or 1 repeatedly until one of the following happens. * *The pattern '00' appears (i.e., a 0 is immediately followed by a 0) for the first time. In this case A wins. *The pattern '01' appears (i.e., a 0 is immediately followed by a 1) for the first time. In this case B wins. Who has the higher probability of winning the game? I am getting both probabilities to be the same. Can anyone help me with how to approach this question?
The first $0$ is followed by a $0$ or a $1$ equiprobably, and the game is over at that point, so $A$ and $B$ each win with probability $\frac{1}{2}$.
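A seeded simulation sketch confirming the equiprobable outcome:

```python
import random

random.seed(0)

def play():
    prev = random.getrandbits(1)
    while True:
        cur = random.getrandbits(1)
        if prev == 0:
            return "A" if cur == 0 else "B"   # '00' -> A wins, '01' -> B wins
        prev = cur

trials = 100_000
p_a = sum(play() == "A" for _ in range(trials)) / trials
print(p_a)  # close to 0.5
```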
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove without induction that $2×7^n+3×5^n-5$ is divisible by $24$. I proved this by induction, but I want to show it using modular arithmetic. I tried for some time as follows: $$2×7^n-2+3×5^n-3\\ 2(7^n-1)+3(5^n-1)\\ 2×6a+3×4b\\ 12(a+b)$$ In this way I only proved that it is divisible by 12, which is not enough. Am I missing something, or is it solved by some other method?
Note that you have $$ 7^n - 1 = 6a\\ 5^n - 1 = 4b $$ Now we're interested in whether $a$ and $b$ are even or odd. Which is to say we want to know when $7^n - 1$ is divisible by $4$ (so that when you divide it by $6$ you get an even number), and when $5^n-1$ is divisible by $8$ (so that when you divide it by $4$, you get an even number). The binomial theorem gives $$ 7^n - 1 = (8-1)^n - 1\\ = 8^n - \binom n18^{n-1} + \cdots + (-1)^{n-1}\binom{n}{n-1}8 + (-1)^n - 1 $$ We see that this is divisible by $4$ exactly when $(-1)^n - 1$ is, which is to say when $n$ is even. Then we have $$ 5^n - 1 = (4 + 1)^n - 1\\ = 4^n + \binom n14^{n-1} + \cdots + \binom{n}{n-1}4 + 1 - 1 $$ and we see that this is divisible by $8$ precisely when $\binom{n}{n-1} = n$ is even. So $a$ and $b$ are both even for even $n$, and both odd for odd $n$, proving that $a+b$ is always even, meaning $12(a+b)$ is divisible by $24$.
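Both the claim and the parity bookkeeping above are easy to verify mechanically:

```python
for n in range(0, 200):
    assert (2 * 7**n + 3 * 5**n - 5) % 24 == 0
    a = (7**n - 1) // 6          # integer since 7**n is 1 mod 6
    b = (5**n - 1) // 4          # integer since 5**n is 1 mod 4
    # a and b are both even for even n and both odd for odd n
    assert a % 2 == n % 2 and b % 2 == n % 2
print("verified for n = 0..199")
```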
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
When is $YE(Y\mid\mathcal{G}) = E(Y^2\mid\mathcal{G})$? I know this is true when $Y \in \mathcal{G}$, but is there a better (less restrictive) condition that can allow this? In particular, I want to know when I can say that $E(YE(Y\mid\mathcal{G})) = E(E(Y^2\mid\mathcal{G}))$, which is, of course, equal to $EY^2$.
No. If I understand correctly your notation, then $Y\in\mathcal{G}$ means that $Y$ is measurable with respect to $\mathcal{G}$. In particular you have $E(Y|\mathcal{G}),E(Y^2|\mathcal{G})\in\mathcal{G}$. But then, from the equation $YE(Y|\mathcal{G}) = E(Y^2|\mathcal{G})$ you necessarily have that $Y\in\mathcal{G}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why don't we have to use polynomial long division on quadratic irreducible partial fractions in this case? For $(x^2+3x+1) /(x^2+1)^2 $ Why don't we have to use polynomial long division in this case since the degree of the denominator has the same magnitude of degree as the numerator before doing partial fraction decomposition?
We don't have to: the numerator has degree $2$ while the denominator $(x^2+1)^2$ has degree $4$, so the fraction is already proper and long division would just return it unchanged. Instead we can go backwards: make an ansatz, multiply through by the common denominator, and solve the linear equation system that arises. $$\frac{a}{x^2+1} + \frac{bx+c}{(x^2+1)^2}$$ Now multiply both numerator and denominator of the first term by $x^2+1$: $$\frac{a(x^2+1) + bx+c}{(x^2+1)^2}$$ Now set up equations for what $a$, $b$ and $c$ are. Can you finish from here?
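To finish the ansatz (spoiler): comparing coefficients in $a(x^2+1)+bx+c = x^2+3x+1$ forces $a=1$, $b=3$, $c=0$. An exact-arithmetic spot check of the resulting decomposition:

```python
from fractions import Fraction

# x**2 terms give a = 1, x terms give b = 3, constants give a + c = 1 -> c = 0
a, b, c = Fraction(1), Fraction(3), Fraction(0)

for x in [Fraction(-3), Fraction(-1, 2), Fraction(0), Fraction(2), Fraction(7, 3)]:
    original = (x**2 + 3*x + 1) / (x**2 + 1)**2
    decomposed = a / (x**2 + 1) + (b*x + c) / (x**2 + 1)**2
    assert original == decomposed
```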
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
About the fact that $\frac{a^2}{b} + \frac{b^2}{a} + 7(a + b) \ge 8\sqrt{2(a^2 + b^2)}$ There's a math competition I participated in yesterday (19/3/2019). In these kinds of competitions, there will always be at least one problem about inequalities. This year's inequality problem is very easy; I am more interested in last year's problem, which goes as follows: If $a$ and $b$ are positive then prove that $$ \frac{a^2}{b} + \frac{b^2}{a} + 7(a + b) \ge 8\sqrt{2(a^2 + b^2)}$$ Now I know what you are thinking. It's a simple problem. By the Cauchy–Schwarz inequality, we have that $\dfrac{a^2}{b} + \dfrac{b^2}{a} \ge \dfrac{(a + b)^2}{a + b} = a + b$. $$\implies \dfrac{a^2}{b} + \dfrac{b^2}{a} + 7(a + b) \ge (a + b) + 7(a + b) = 8(a + b)$$ But $8(a+b) \le 8\sqrt{2(a^2 + b^2)}$, so this bound points the wrong way, and you have fallen into the trap set by the people who created the test. Almost everyone in last year's competition did too. But someone came up with an elegant solution to the problem; he was also the one who won the contest. You should have 15 minutes to solve the problem. That's what I also had yesterday.
Let $a^2+b^2=2k^2ab,$ where $k>0$. Thus, by AM-GM $k\geq1$ and we need to prove that: $$\frac{\sqrt{(a+b)^2}(a^2-ab+b^2)}{ab}+7\sqrt{(a+b)^2}\geq8\sqrt{2(a^2+b^2)}$$ or $$\sqrt{2k^2+2}(2k^2-1)+7\sqrt{2k^2+2}\geq16k$$ or $$\sqrt{2k^2+2}(k^2+3)\geq8k,$$ which is true by AM-GM: $$\sqrt{2k^2+2}(k^2+3)\geq\sqrt{4k}\cdot4\sqrt[4]{k^2\cdot1^3}=8k.$$ Done!
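A seeded random check of the inequality, including the equality case $a=b$ where both sides equal $16a$:

```python
import math
import random

random.seed(1)

def lhs(a, b):
    return a * a / b + b * b / a + 7 * (a + b)

def rhs(a, b):
    return 8 * math.sqrt(2 * (a * a + b * b))

for _ in range(100_000):
    a = random.uniform(1e-3, 100.0)
    b = random.uniform(1e-3, 100.0)
    assert lhs(a, b) >= rhs(a, b) * (1 - 1e-12)

assert math.isclose(lhs(3.0, 3.0), rhs(3.0, 3.0))  # equality at a = b
```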
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Dimensions of paper needed to roll a cone (Updated with clarifications) I'm looking for a way to calculate the dimensions of a piece of paper needed to roll up into a cone shape. Please consider the following diagram I created (nothing is drawn to scale): In this example, I'm trying to create a cone shape, that is 100 mm tall and has a top diameter of 3 mm, and a bottom diameter of 12 mm (FIG 1). FIG 2 shows a piece of paper that is 0.1 mm thick shaped like what I would expect the final piece to look like. However, sides A and C would be much longer than illustrated. Essentially, what I need is a way to figure out the lengths of sides A, B, and C in FIG 2, such that when the paper is rolled tightly, I could achieve the cone in FIG 1. NOTE: I'm not trying to just create a "shell" of the cone as shown in FIG 3. I would like to roll the paper shape tightly into the cone to provide a solid structure (as shown in FIG 4). EDIT: I'm adding some clarification. I did a "real world" example to try to illustrate what I'm trying to accomplish. * *I cut out a shape similar to the one is FIG 2 above. image 1 *I roll it as shown here. image 2 *This is what the cone looks like when done rolling. image 3 *The top of the cone shows a diameter of 2.1 mm image 4 *And the bottom of the cone shows a diameter of 3.4 mm image 5 I hope this makes things more clear. EDIT 2: Maybe the numbers in the initial example are just not realistic. So how about something like 2 mm for the top diameter and 5 mm for the bottom one.
If we slightly simplify the situation, we can say that at the top (and at the bottom) the paper is arranged in concentric circles. If $t=0.1$mm is the thickness of the paper, we get a radius $r_n = n \cdot t$ for the $n$th circle. The circumference of the $n$th circle is $2 \pi r_n$. The total length of the paper (for $N$ circles) is the sum of the circumferences $$ l = \sum_{n=1}^N 2 \pi r_n = 2 \pi \cdot t \cdot \sum_{n=1}^N n = 2 \pi \cdot t \cdot \frac{N(N+1)}{2} $$ and the diameter is $d = 2 N t$. So if you want a diameter $d$, then you need $N=\frac{d}{2t}$ many layers resulting in a length of $$l = 2 \pi \cdot t \cdot \frac{\frac{d}{2t}\left(\frac{d}{2t}+1\right)}{2} = \pi \cdot \frac{d}{2} \left(\frac{d}{2t}+1\right).$$ According to the pictures in your clarification, $A$ and $C$ are independent of each other, thus the above calculation can be done separately for the top and the bottom. At the top, you want the diameter to be $d=3$mm, resulting in a length of $A\cong 75$mm. At the bottom, you want the diameter to be $d=12$mm, thus $C \cong 1150$mm would do the trick. However, rolling up over a meter of paper perfectly tight is probably not physically feasible, thus the calculation might not agree with experimental results.
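Evaluating the layer-sum formula for both edges (this is just the closed form above in code):

```python
import math

def strip_length(d_mm, t_mm=0.1):
    """Paper length for N = d/(2t) concentric layers of thickness t,
    summing the circumferences 2*pi*n*t for n = 1..N."""
    n_layers = d_mm / (2 * t_mm)
    return 2 * math.pi * t_mm * n_layers * (n_layers + 1) / 2

A = strip_length(3)    # top edge, diameter 3 mm
C = strip_length(12)   # bottom edge, diameter 12 mm
print(round(A, 1), round(C, 1))  # 75.4 1149.8
```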
{ "language": "en", "url": "https://math.stackexchange.com/questions/3155904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
People arranged in a queue, probability of $A$ always being in front of $B$. There are $n\geq 2$ people including $A$ and $B$. They form a line uniformly at random and we want to find the probability that $A$ is in front of $B$. So far I've determined my sample space to be $|\Omega|=n!$. I also let $A$'s position in the queue to be $k$, and determined the number of ways that $A$ is in front of $B$ is $(n-2)!(n-k)$. (Or (n-1)!(n-k) ?) Intuitively I know that the number of ways that $A$ is in front of $B$ is the same as that if $B$ is in front of $A$ and the probability I should be getting is $\frac{1}{2}$ but I'm not sure how to get there from the information I have so far.
Intuitively I know that the number of ways that A is in front of B is the same as that if B is in front of A and the probability I should be getting is 1/2 but I'm not sure how to get there from the information I have so far. Your intuition is the way to go.   The other people are a distraction.   All you care about is the where to place $A$ and $B$. But if you must consider them, note that $\binom n2(n-2)!$ counts ways to select two from the $n$ places to put $A$ and $B$ such that $A$ precedes $B$, then to arrange the remaining $n-2$ people into the remaining places. Divide and calculate.$$\mathsf P(A\prec B)=\dfrac{\binom n2(n-2)!}{n!}\\~~=\dfrac{1}{2}$$ PS: and of course $\sum_{k=1}^{n-1}(n-k) =\sum_{j=1}^{n-1} j= \dfrac{n(n-1)}2$
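For a small $n$ one can even enumerate all queues and confirm both the count $\binom n2(n-2)!$ and the probability $\frac12$:

```python
from itertools import permutations
from math import comb, factorial

n = 6
# persons 0 and 1 play the roles of A and B
favorable = sum(q.index(0) < q.index(1) for q in permutations(range(n)))
assert favorable == comb(n, 2) * factorial(n - 2)   # A-before-B arrangements
assert favorable / factorial(n) == 0.5
print(favorable, factorial(n))  # 360 720
```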
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused about how to show independent random variable $Y$ has the Poisson distribution with parameter $t\lambda$ Assume there are $N$ independent exponential random variable $(X_1, X_2,\ldots, X_N)$ with parameter $\lambda$. Fix a real number $t > 0$. Let $Y$ be the largest $N$ so that $X_1 + X_2 + \ldots + X_N \leqslant t$ ($Y = 0$ if $X_1 > t$). How to show independent random variable $Y$ has the Poisson distribution with parameter $t\lambda$? Approach: I want to prove that $ P(Y \geqslant k) = 1 - \sum_{j=1}^{k-1} \frac{e^{-\lambda t}(\lambda t)^k}{k!}$, in order to do this, I wanted to use \begin{align} & P(Y\geqslant k) = P(X_1 + X_2 + \cdots + X_k \leqslant t) \\[8pt] = {} & \int_{x_1 =0}^t P(X_2 + X_3 + \ldots + X_k \leqslant t - x_1)f_{X_1}(x_1) \, dx_1. \end{align} Then I don't know how to keep going from here. please help me...
Let $S_k:=\sum_{i=1}^k X_i$ (note that $S_k\sim \Gamma(k,\lambda^{-1})$). Then \begin{align} \mathsf{P}(Y=k)&=\mathsf{P}(S_k\le t, S_k+X_{k+1}>t)=\mathsf{E}\!\left[1\{S_k\le t\} \mathsf{P}(X_{k+1}>t-S_k\mid S_k)\right] \\ &=\mathsf{E}\!\left[1\{S_k\le t\}e^{-\lambda(t-S_k)}\right]=\int_0^t\frac{\lambda^k}{(k-1)!}x^{k-1}e^{-\lambda t}\,dx \\ &=\frac{(\lambda t)^ke^{-\lambda t}}{k!}. \end{align}
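A seeded simulation sketch: draw the exponentials, read off $Y$, and compare with the Poisson($\lambda t$) mean and the closed-form pmf (the parameter values below are arbitrary):

```python
import math
import random

random.seed(2)
lam, t = 1.5, 2.0          # rate and horizon, so lam * t = 3

def sample_y():
    total, y = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > t:
            return y       # largest N with S_N <= t
        y += 1

trials = 50_000
samples = [sample_y() for _ in range(trials)]
mean = sum(samples) / trials
assert abs(mean - lam * t) < 0.05            # Poisson mean lam*t

p2_emp = samples.count(2) / trials
p2 = (lam * t) ** 2 * math.exp(-lam * t) / math.factorial(2)
assert abs(p2_emp - p2) < 0.01               # matches (lam*t)**k e**(-lam*t) / k!
```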
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given: $\int_{a}^{b}f=\int_{a}^{b}g$, Can we always find some point $\theta$ in $[a,b]$ such that $f(\theta)=g(\theta)$ I noted something today, and I don't know whether the result holds in a general setting or not. Suppose $f$ and $g$ are continuous functions on $[a,b]$ such that $\int_{a}^{b}f=\int_{a}^{b}g$. Can we always find some point $\theta$ in $[a,b]$ such that $f(\theta)=g(\theta)$? Example: Take $f=x^2$ and $[a,b]=[0,1]$, then $\int f=1/3$. Take $g=1/3$, then $\int_{[0,1]} g=1/3$. Now we can easily find some points which satisfy $x^2=1/3$. There are a lot of examples that I checked, and every time I found that the result holds. Can we write a proof? Thanks for reading and helping out.
An equivalent formulation (via $h=f-g$) is Suppose $h$ is a continuous function on $[a,b]$ such that $\int_{a}^{b}h(x) \, dx=0$. Can we always find some point $\theta$ in $[a,b]$ such that $h(\theta)=0$? and that is true. If $h$ has no zeros then the intermediate value theorem implies that $h$ is strictly positive or strictly negative in the interval, and consequently $\int_{a}^{b}h(x) \, dx \ne 0$. (Strictly speaking, we must assume that $a \ne b$, because otherwise $\int_a^b h(x) \, dx = 0$ for arbitrary functions $h$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Cartier divisor example in Hartshorne This question is about example 6.11.4 in Chapter II of Hartshorne. The example is about computing the Cartier divisor class group of the cuspidal cubic curve $y^2z = x^3$ in $\mathbb{P}_{k}^{2}$. He begins by saying "note that any Cartier divisor is linearly equivalent to one whose local function is invertible in some neighbourhood of the singular point $Z = (0,0,1)$". I am not sure what this statement means at all. A Cartier divisor is, by definition, represented by a family $\{(U_{i}, f_{i}) \}$ where the $U_{i}$ cover $X$ and the $f_{i} \in \mathcal{K}^{*}(U_{i})$. But the elements of $\mathcal{K}^{*}(U_{i})$ are invertible by definition. So every local function is invertible, right? So what does Hartshorne mean by his statement?
I agree he did not phrase it very well. Let $U$ be an open chart containing the cusp. By removing the cuspidal point from the other charts, we may assume it is the only one containing the cuspidal point. Let $f$ be the rational function on $U$. What he means, is that we may assume that $f$ is invertible in $\mathcal{O}$ (and not in $\mathcal{K}$, which would be obvious as $\mathcal{K}$ is a field). So, he is assuming that $f \in \mathcal{O}^*(V)$, where $V \subset U$ is an open set containing the cusp. The way to achieve it is the following. Notice that $f^{-1}$ is an element of $\mathcal{K}^*$, as so is $f$. Thus, $f^{-1}$ gives a principal Cartier divisor. So, the divisor $\lbrace (U_i,f_i) \rbrace$ is the same as $\lbrace (U_i,f_i \cdot f^{-1}) \rbrace$. Since $U = U_{i_0}$ for some index ${i_0}$, this shows that we may assume that on that chart the divisor is represented by $(U,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Easier way to solve a LES (only pen/paper, no calculator) I want to solve the following Linear Equation System with only pen and paper; $$ 470 = x_A - \frac{3}{10}x_B \tag{1} $$ $$ 940 = x_B - \frac{2}{10}x_A \tag{2} $$ I attempt to solve for $x_A$ by inserting equation (2) into (1) and rewriting it, etc. I got something like; $$ x_A=470 +\frac{3}{10}(940+\frac{2}{10}x_A) \Rightarrow \dots \Rightarrow x_A\frac{94}{100}=470+\frac{3}{10}940=470+\frac{2820}{10} $$ Next I try to isolate $x_A$ and simplify the expression; $$ \dots \Rightarrow x_A =\frac{100}{94}(470+\frac{2820}{10})=\frac{100}{94}(470+282)=\frac{100}{94}752 $$ This is where I get stuck. Have I dug myself into a difficult hole when there is a better/more efficient approach to solving this LES? Or am I just missing a good method to solve $x_A=\frac{100}{94}\cdot752$ to get it to 800? Any insight into "how to think" here and how you solve it with just pen and paper is much appreciated. (Solution: $x_A=800$, $x_B=1100$)
First notice $94=47\cdot 2$. You can simplify before adding/multiplying: $$x_A\frac{94}{100}=470+\frac{3}{10}940\color{red}{=470+\frac{2820}{10}} \Rightarrow \\ x_A\frac{47\cdot 2}{100}=47\cdot 10+\frac3{10}\cdot 47\cdot 20 \Rightarrow \\ x_A\frac2{100}=10+\frac{60}{10} \Rightarrow \\ 2x_A=1000+600 \Rightarrow \\ x_A=800.$$
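The same substitution carried out in exact rational arithmetic, as a machine check of the simplification:

```python
from fractions import Fraction

# x_A = 470 + (3/10) * x_B  and  x_B = 940 + (2/10) * x_A
# substituting the second into the first gives
# x_A * (1 - 3/50) = 470 + (3/10) * 940
x_a = (Fraction(470) + Fraction(3, 10) * 940) / (1 - Fraction(3, 10) * Fraction(1, 5))
x_b = Fraction(940) + Fraction(1, 5) * x_a
assert (x_a, x_b) == (800, 1100)
```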
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Evaluate limit without using L'H rule I was revising my teacher's notes about the L'H rule and I came across this limit. $$\lim_{x \to \frac{\pi}{2}} \frac{x-\frac{\pi}{2}}{\sqrt{1-\sin x}}$$ The teacher tried to evaluate it without L'H to demonstrate that L'H is necessary here, but I can't explain the first step he did: $$\lim_{x \to \frac{\pi}{2}} \frac{x-\frac{\pi}{2}}{\sqrt{1-\sin x}} = \lim_{x \to \frac{\pi}{2}} \frac{1}{\frac{-\cos x}{2\sqrt{1-\sin x}}}$$ Can you please help me?
Hint: substitute $\dfrac\pi2-x=2y$, so that $$\lim_{y\to0}\dfrac{-2y}{\sqrt{1-\cos2y}}=-\sqrt2\lim_{y\to0}\dfrac y{|\sin y|}.$$ The two one-sided limits are $-\sqrt2$ (as $y\to0^+$) and $\sqrt2$ (as $y\to0^-$), so the limit doesn't exist.
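Numerically, the two one-sided limits are visibly different, which is exactly why no two-sided limit exists:

```python
import math

def g(x):
    return (x - math.pi / 2) / math.sqrt(1 - math.sin(x))

for h in (1e-2, 1e-3, 1e-4):
    left, right = g(math.pi / 2 - h), g(math.pi / 2 + h)
    assert abs(left + math.sqrt(2)) < 1e-2    # tends to -sqrt(2) from the left
    assert abs(right - math.sqrt(2)) < 1e-2   # tends to +sqrt(2) from the right
```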
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find general term of $1+\frac{2!}{3}+\frac{3!}{11}+\frac{4!}{43}+\frac{5!}{171}+....$ Find the general term of $1+\frac{2!}{3}+\frac{3!}{11}+\frac{4!}{43}+\frac{5!}{171}+\dots$ However, it has also been asked to check convergence, but how can I do that before knowing the general term? I can't see any pattern.
Hint The numerator is easy. For the denominator, look at the successive differences. If you still can't figure it out, see Arithmetico–geometric sequence.
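Working out the hint (spoiler): the denominators $1, 3, 11, 43, 171$ have successive differences $2, 8, 32, 128$, geometric with ratio $4$, which suggests $d_{n+1}=4d_n-1$, i.e. $d_n=\frac{2\cdot4^{n-1}+1}{3}$; the terms $\frac{n!}{d_n}$ then blow up, so the series diverges. A quick check of that guess:

```python
from math import factorial

dens = [1, 3, 11, 43, 171]
for d, d_next in zip(dens, dens[1:]):
    assert d_next == 4 * d - 1                 # arithmetico-geometric recurrence
for n, d in enumerate(dens, start=1):
    assert 3 * d == 2 * 4 ** (n - 1) + 1       # closed form

def term(n):
    return factorial(n) * 3 / (2 * 4 ** (n - 1) + 1)

# n! dominates 4**n eventually, so the terms diverge and so does the series
assert term(10) > 1 and term(20) > term(10)
```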
{ "language": "en", "url": "https://math.stackexchange.com/questions/3156962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How can vector space $V\subsetneq W$ but $V\cong W$, where $V$ and $W$ don't have finite dimensions? Let $V$ and $W$ be two vector spaces. If they have finite dimension, then $V\subset W$ together with $V$ and $W$ having the same dimension implies that $V=W$. But I heard that in infinite dimensions this doesn't hold anymore, in the sense that you can have $V$ a proper subspace of $W$ (i.e. $V\subsetneq W$) while $V$ and $W$ are isomorphic. Could someone explain this fact, please? I really don't understand how this is possible.
A standard example is with the space of real polynomials $\mathbb{R}[x]$. If you define a map $T:\mathbb{R}[x]\to \mathbb{R}[x]$ by $p(x)\to p(x^2)$ then it is an isomorphism between $\mathbb{R}[x]$ and the image of $T$. However $T(\mathbb{R}[x])$ is a proper subspace.
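A concrete sketch of $T$ on coefficient lists (index $i$ holds the coefficient of $x^i$), showing the image contains no odd-degree terms:

```python
def T(p):
    """p(x) -> p(x**2): coefficient i of p becomes coefficient 2*i."""
    q = [0] * (2 * len(p) - 1) if p else []
    for i, c in enumerate(p):
        q[2 * i] = c
    return q

assert T([1, 1]) == [1, 0, 1]           # x + 1  maps to  x**2 + 1
assert T([2, 0, 5]) == [2, 0, 0, 0, 5]  # 5x**2 + 2 maps to 5x**4 + 2

# every polynomial in the image has zero coefficients in all odd degrees,
# so e.g. the polynomial x is not in the image: T(R[x]) is a proper subspace
sample_image = [T([a, b]) for a in range(-2, 3) for b in range(-2, 3)]
assert all(all(c == 0 for c in q[1::2]) for q in sample_image)
```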
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Is the following claim true or false: $\bigtriangledown f(x)$ always points in the same direction as $x-\hat{x}$ if $f(\hat{x})=0$ Take $f: \mathbb{R}^{n}\rightarrow \mathbb{R}, f\geq0 \ \forall x \in \mathbb{R}^{n}$. A statement in an exercise claims the following: given an $x' \in \mathbb{R}^{n}$, let $\hat{x}$ be the point closest to $x'$ satisfying $f(\hat{x})=0.$ If $f$ is continuously differentiable, then the gradient $\bigtriangledown f(\hat{x})$ always points in the same direction as $x' - \hat{x}$. This appears wrong to me, as if $f \geq 0\ \forall x$, then $\bigtriangledown f(\hat{x}) = 0$. And even if we removed the condition $f \geq 0 \ \forall x$, taking $n=1, f=x$ with $x'=-1$ and $\hat{x}=0$ is a simple counterexample. Am I missing something?
I suspect that the claim might be a typo, and that it's talking about $\nabla f(x')$ instead of $\nabla f (\hat{x})$. And "point in the same direction" is clearly false if taken literally, but suppose instead it meant "lie in the same half-space" (i.e., have positive dot product). Even so, you'd need more conditions on $f$ to make this true. I might suggest throwing out that text.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Computing a Basis of Holomorphic Differential Forms for a Given Curve I find myself repeatedly having to ask this question, because no one seems to have answered it. I put this up on Math Overflow with a bounty, got a guy who said he would answer it, but—though he helped with other things—never actually got around to answering the question. Basically, I'm starting to get annoyed. All I wish to know is this: Everything is complex numbers. Consider a curve $C$ defined by the set of all $\left(x,y\right)\in\mathbb{C}^{2}$ satisfying a formula $f\left(x,y\right)=0$. What is/are the algorithm(s)/formula(e) for a “basis of holomorphic differential forms” for $f\left(x,y\right)$? Let me be clear: I want an algorithm and/or formula I can use by hand to compute bases of holomorphic differential forms using classical, pre-20th century methods for a given curve. Moreover, it would be preferable to keep things in affine complex space as much as possible. Examples: $$4x^{2}y^{2}-\left(x^{2}+y^{2}\right)^{3}=0$$ $$x^{3}-3xy+y^{3}=0$$ $$xy^{3}-x^{4}-2y^{6}x^{2}+2x+1=0$$ $$y^{2}\left(x^{2}+y^{2}\right)-x^{2}+1=0$$ $$xy^{2}-y\cos x+y+\frac{1}{2}=0$$ $$2^{x}y-x^{3}+y^{2}-1=0$$ $$5^{x}y-xy+3^{-y}-1=0$$ Etc. To be clear, I've been doing quite a bit of research on the topic, to mixed results. It's clear to me that to compute the basis of differential forms, it suffices to compute a collection of linearly independent expressions (defined by affine equations) whose zero-loci define curves which are adjoint to the curve defined by the formula $f\left(x,y\right)=0$. However, the moment I start asking questions about how to do so, I get bombarded by obfuscating abstractions and generalities—"blow up", "resolution of singularities", "birational equivalence", "sequences of maps", "charts", and the like. Help with wresting the concrete procedures free from the clutches of abstract nonsense would be most appreciated.
If your plane curve is smooth and irreducible, then Theorem 1 on p630 of Brieskorn and Knorrer's "Plane algebraic curves" will answer your question (see also the Corollary on p634). If your curve isn't smooth then you can't avoid blow ups and resolutions of singularities. Brieskorn and Knorrer's book also has a section explaining how to do this for plane curves.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $d(x,y) = \log(1 + | x - y |)$ define a metric on $\mathbb{R}$? Help would be much appreciated here! I got as far as evaluating the following: * *$d(x,y) \geq 0$. The argument is that $(1 + |x - y|)$ doesn't produce negative values, since $|x-y| \geq 0$ itself and, at most, it is added to $1$. So this condition is satisfied! *$d(x,y) = d(y,x)$. This holds since $\log(1 + |x - y|) = \log(1 + |y - x|)$, because $|x-y|=|y-x|$. Now, here is where I am stuck. How can I demonstrate this algebraically? * *$d(x,y) = 0$ implies that $x = y$. What I could do is $\log(1+|x - y|) = 0$. It seems that the only way this log can be zero is if $1 + |x - y| = 1$. Therefore $|x - y| = 0$ would arise naturally. Does this make sense? * *$d(x, y) \leq d(x,z) + d(z,y)$. What is left is to add a zero inside the absolute value and then try to break it up using the triangle inequality. That is not working so well thus far, but here it goes: $\log(1 + |x - y|) = \log(1 + |x - z + z - y|)$ but then it turns out that: $\log(1 + |x - z + z - y|) \leq \log(1 + |x - z| + |z - y|)$
The $d\left(x,y\right)=0$ proof is correct. For the triangle inequality, I would start out with the right hand side. Notice that \begin{align*} \log\left(1+\left|x-z\right|\right)+\log\left(1+\left|z-y\right|\right)&=\log\left(\left(1+\left|x-z\right|\right)\left(1+\left|z-y\right|\right)\right)\\ &=\log\left(1+\left|x-z\right|+\left|z-y\right|+\left|x-z\right|\left|z-y\right|\right). \end{align*} Recall that for all $a,b>0$, $\log\left(a\right)\ge\log\left(b\right)\iff a\ge b$. By the triangle inequality, $\left|x-y\right|\le\left|x-z\right|+\left|z-y\right|$, and so \begin{align*} 1+\left|x-y\right|&\le1+\left|x-z\right|+\left|z-y\right|\\ &\le1+\left|x-z\right|+\left|z-y\right|+\left|x-z\right|\left|z-y\right|. \end{align*} Thus, \begin{align*} d\left(x,y\right)=\log\left(1+\left|x-y\right|\right)&\le\log\left(1+\left|x-z\right|+\left|z-y\right|+\left|x-z\right|\left|z-y\right|\right)\\ &=\log\left(1+\left|x-z\right|\right)+\log\left(1+\left|z-y\right|\right)\\ &=d\left(x,z\right)+d\left(z,y\right). \end{align*}
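A seeded brute-force check of the axioms over random triples, complementing the proof above:

```python
import math
import random

random.seed(3)

def d(x, y):
    return math.log(1 + abs(x - y))

for _ in range(100_000):
    x, y, z = (random.uniform(-100, 100) for _ in range(3))
    assert d(x, y) >= 0
    assert d(x, y) == d(y, x)
    assert d(x, y) <= d(x, z) + d(z, y) + 1e-12   # triangle inequality
assert d(5, 5) == 0
```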
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Find weak solution to Riemann problem for conservation law Find the weak solution of the following conservation law $$ u_t + (u^2)_x = 0 $$ with the initial condition $$ u(x,0) = \left\lbrace \begin{aligned} &u_l & &\text{if } x < 0 ,\\ &u_r & &\text{if } x > 0. \end{aligned}\right. $$ Consider both cases $u_l>u_r$ and $u_l<u_r$. Find the solution at $x=0$ in each case. Attempt We have the equation $u_t + 2 u u_x = 0 $ and characteristics are given by $t' = 1 $ and $x' = 2u $ and $u' = 0$, so $u = const$, $t = s$, $x = 2 u s + r $ so that $$ x = 2 u(x,0) t + r $$ are the characteristics, that is $$ x = \begin{cases} 2 u_l t + r, \; \; r < 0 \\ 2 u_r t + r, \; \; r > 0 \end{cases} $$ so there is a shock formation at $x=0$. Any help in how to continue this problem?
This is very similar to the Riemann problem of the inviscid Burgers' equation (see e.g. (1), (2), (3), (4) and related posts). For this type of problem, weak solutions are not unique. Thus, I guess that the problem statement asks for the entropy solution. I will provide a detailed general answer for the case of conservation laws $u_t + f(u)_x = 0$ with Riemann data $u(x<0,0) = u_l$ and $u(x>0,0) = u_r$, where the flux $f$ is smooth and convex (the case of concave fluxes is very similar). If the flux has inflection points, the more general solution is provided here. In the case of a convex flux $f$, there are only two possible types of waves: * *shock waves. If the solution is a shock wave with speed $s$, $$ u(x,t) = \left\lbrace\begin{aligned} &u_l & &\text{if } x < s t, \\ &u_r & &\text{if } st < x, \end{aligned}\right. $$ then the speed of the shock must satisfy the Rankine-Hugoniot jump condition $s = \frac{f(u_l)-f(u_r)}{u_l-u_r}$. Moreover, to be admissible, the shock wave must satisfy the Lax entropy condition $f'(u_l) > s > f'(u_r)$, where $f'$ denotes the derivative of $f$. *rarefaction waves. They are obtained from the self-similarity Ansatz $u(x,t) = v(\xi)$ with $\xi = x/t$, which leads to the identity $f'(v(\xi)) = \xi$. Since $f'$ is an increasing function, we can invert the previous equation to find $v(\xi) = (f')^{-1}(\xi)$. The final solution reads $$ u(x,t) = \left\lbrace\begin{aligned} &u_l & &\text{if } x \leq f'(u_l) t, \\ &(f')^{-1}(x/t) & &\text{if } f'(u_l) t \leq x \leq f'(u_r) t, \\ &u_r & &\text{if } f'(u_r) t \leq x, \end{aligned}\right. $$ where $(f')^{-1}$ denotes the inverse function of $f'$. One notes that this solution requires $f'(u_l) \leq f'(u_r)$. In the present case, the flux $f: u \mapsto u^2$ is a smooth convex function, so that its derivative $f':u\mapsto 2u$ is increasing. Shock waves are obtained for $u_l \geq u_r$ (cf. Lax entropy condition), and rarefaction waves are obtained for $u_l \leq u_r$.
In the first case, the shock speed deduced from the Rankine-Hugoniot condition reads $s = u_l + u_r$. The value of the solution at $x=0$ for positive times is $u_r$ if $s < 0$, and $u_l$ otherwise. In the second case, the reciprocal of the derivative is given by $(f')^{-1} : \xi \mapsto \xi/2$. The value of the solution at $x=0$ for positive times is $u_r$ if $u_r < 0$, $u_l$ if $u_l > 0$, and $0$ otherwise (i.e., if $u_l < 0 < u_r $).
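The case analysis above is easy to collect into a small function (a sketch; it simply encodes the shock/rarefaction cases just derived for $f(u)=u^2$, i.e. $f'(u)=2u$):

```python
def entropy_solution_at_zero(ul, ur):
    """Value u(0, t>0) of the entropy solution of u_t + (u^2)_x = 0
    with Riemann data (ul, ur). f(u) = u^2 is convex."""
    if ul >= ur:                 # shock; Rankine-Hugoniot speed
        s = ul + ur              # = (ul^2 - ur^2) / (ul - ur)
        return ur if s < 0 else ul
    # rarefaction fan: u = x/(2t) between x = 2*ul*t and x = 2*ur*t
    if ur < 0:
        return ur
    if ul > 0:
        return ul
    return 0.0                   # the fan straddles x = 0

print(entropy_solution_at_zero(2, -1))   # shock, speed 1 > 0 -> 2
print(entropy_solution_at_zero(-2, 1))   # fan straddles x=0 -> 0.0
print(entropy_solution_at_zero(-3, -1))  # fan entirely left -> -1
```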
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Using Archimedean property to prove that infimum of set is $0$ Given the set $A = \left\{ \frac{1}{n} \;\middle|\; n \in \mathbb{N}\right\}$. It seems to me I have to prove two things: * *$\frac{1}{n} \geq 0 , \forall n \in \mathbb{N}$ * *Assuming $b > 0$ is another lower bound leads to a contradiction, so every lower bound satisfies $b \leq 0$. Work: 1. Now $\frac{1}{n} \geq 0 , \forall n \in \mathbb{N}$. Also $n \geq 1$, so $\frac{1}{n} \leq 1$. But $\frac{1}{n} \in \mathbb{R}^+$. So, $0 < \frac{1}{n} \leq 1$. 2. Assume that $b > 0$. Then by the Archimedean property $\exists n \in \mathbb{N}$ such that $\frac{1}{n} < b$, which is a contradiction to the fact that $b$ is a lower bound for the set. Is this correct?
$0$ is a lower bound, since $1/n > 0$ for $n \in \mathbb{Z^{+}}$. Assume $b > 0$, real, is a lower bound. Archimedean principle: there is an $n_0 \in \mathbb{Z^+}$ s.t. $n_0 > 1/b$, i.e. $b > 1/n_0$, a contradiction (why?). Hence $\inf (A)=0$.
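The Archimedean step is constructive: given $b>0$, the witness can be taken as $n_0 = \lfloor 1/b\rfloor + 1$, since then $n_0 > 1/b$. A small sketch (floating-point arithmetic, so purely illustrative):

```python
import math

def archimedean_witness(b):
    """Given b > 0, return a positive integer n0 with 1/n0 < b
    (the witness promised by the Archimedean property)."""
    n0 = math.floor(1 / b) + 1   # n0 > 1/b, hence 1/n0 < b
    return n0

for b in (0.5, 0.3, 0.1, 2.0):
    n0 = archimedean_witness(b)
    assert n0 >= 1 and 1 / n0 < b
print("ok")
```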
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Three dice: conditional probability After throwing $3$ dice, we know that every die shows a different number. What is the probability that there is a $6$ on exactly one die? I figured that P(A) - we get a 6 on some die, P(B) - a different number on every die. Then $$P(B)=\dfrac{6∗5∗4}{6^3}\text{ and } P(A\cap B)=\dfrac{1*5*4}{6^3}$$
The required probability is $$\frac {\binom 3 1 \times 5 \times 4} {6 \times 5 \times 4} = \frac 3 6 = \frac 1 2.$$
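Since all $6^3$ outcomes are equally likely, the conditional probability can be confirmed by exhaustive enumeration (a quick sketch):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 6^3 equally likely outcomes of three dice.
all_distinct = [r for r in product(range(1, 7), repeat=3)
                if len(set(r)) == 3]
exactly_one_six = [r for r in all_distinct if r.count(6) == 1]

p = Fraction(len(exactly_one_six), len(all_distinct))
print(p)  # 1/2
```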
{ "language": "en", "url": "https://math.stackexchange.com/questions/3157989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Soft Question: Self-made graphs vs. Desmos I once heard that in real analysis you should expect to draw lots of pictures. Boy were they right. My question is, is it better to draw those pictures yourself or to rely on Desmos? On the one hand, drawing it yourself (or even better, mentally picturing it yourself) is good exercise, but on the other hand mightn't it be a bit pointless? Akin to insisting on doing arithmetic instead of using a calculator? What does MathStackExchange think?
There can be benefits to both. Sketching the graph yourself may allow you to notice some characteristics of a function that you wouldn’t otherwise just looking at the graph, because drawing it yourself forces you to pay attention to the details. On the other hand, Desmos will give a more accurate depiction of the graph, and it is easier to examine a graph more in-depth than when having to sketch by hand. Perhaps a solution is to first graph the function yourself and then check with Desmos. This allows you to analyze different properties of the graph yourself without relying on Desmos and then to further examine the function once you understand these properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
probability with martingales 12.2 sum of zero-mean independent variables in L^2 I am struggling with the following theorem from David Williams, Probability with Martingales: THEOREM Suppose that $(X_{k}:k\in\mathbb{N})$ is a sequence of independent random variables such that, for every $k$, $E(X_{k})=0, \sigma_{k}^2:=Var(X_{k})<\infty$. (a) Then $$ \sum\sigma_{k}^2<\infty\Rightarrow\sum X_{k}\text{ converges a.s. .} $$ (b) If the variables $(X_{k})$ satisfies $$ \exists K \in [0,\infty),\forall k, \omega,\\ |X_{k}(\omega)|\leq K, $$ then $$\sum X_{k}\text{ converges a.s.}\Rightarrow\sum\sigma_{k}^2<\infty. $$ The proof for the statement (a) is easy to understand, but I cannot get the other one. According to the proof, "since $\sum X_{n}$ converges a.s., the partial sums of $\sum X_{k}$ are a.s. bounded, and it must be the case that for some $c$, $P(T=\infty)>0$." Here $T$ is the stopping time $$T = \inf\{r: |\sum_{k=1}^r X_k| > c\}.$$ The problem is that I cannot find this $c$. I know that it's trivial if its boundedness is uniform in $\Omega$, but it's not the case, is it? Can anyone figure out which $c$ meets this condition? Thanks in advance.
Using the notation from Williams, $M_n = X_1 + \dots + X_n$, we know that $M_n$ is a bounded sequence almost surely since it converges almost surely. I will write $T_n = \inf\{r : |M_r| > n\}$. Notice that we have $$\{\sup_{r \geq 1} |M_r| < \infty\} = \bigcup_{n \geq 1} \{T_n = \infty\}.$$ So if $\mathbb{P}(T_n = \infty) = 0$ for each $n$ then $$1 = \mathbb{P}(\sup_{r \geq 1} |M_r| < \infty) = \mathbb{P}\big(\bigcup_{n \geq 1} \{T_n = \infty\} \big) \leq \sum_n \mathbb{P}(T_n = \infty) = 0$$ which is a contradiction and hence there is an $n$ such that $\mathbb{P}(T_n = \infty) > 0$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to compute a primitive element for the splitting field of $x^3-2 \in \Bbb{Q}[x]$? Let $\alpha:=\sqrt[3]{2}\in\mathbb{R}$ and $\omega:=e^{2\pi i/3}\in\mathbb{C}$. Then the splitting field for the polynomial $x^3-2\in\mathbb{Q}[x]$ is $$\mathbb{Q}(\alpha,\omega\alpha,\omega^2\alpha)=\mathbb{Q}(\alpha,\omega).$$ Since $\mathbb{Q}$ has characteristic zero we know from the Primitive Element Theorem that there exists some $\gamma\in\mathbb{Q}(\alpha,\omega)$ with $$\mathbb{Q}(\alpha,\omega)=\mathbb{Q}(\gamma).$$ Question: How can I find a specific example of such an element $\gamma$?
The primitive element theorem asserts that for $u \in K$ and $\alpha,\omega$ separable, $\alpha+\omega u$ is a primitive element of $K(\alpha,\omega)/K$ iff $\forall \sigma \in Gal(\overline{K}/K)$, $$\sigma(\alpha)+\sigma(\omega)u = \alpha +\omega u \implies \sigma(\alpha)=\alpha,\sigma(\omega) = \omega.$$ Only finitely many $u$ don't work, because those not working are of the form $u=\frac{ \sigma(\alpha)-\alpha}{\omega - \sigma(\omega)}$. Here $K=\Bbb{Q}, \alpha= 2^{1/3}, \omega = e^{2i \pi /3}$; from our knowledge of Galois groups, $\sigma(\alpha) = \omega^l \alpha, \sigma(\omega) = \omega^m, 3 \nmid m$. With $u=1$: if $\sigma(\alpha)+\sigma(\omega)= \alpha +\omega $ and $l\ne 0$, then $\alpha= \frac{\omega^m-\omega}{1-\omega^l} \in \Bbb{Q}(\omega)$, a contradiction; thus $l = 0$ and $\sigma(\alpha) = \alpha,\sigma(\omega)=\omega$. Whence $\Bbb{Q}(2^{1/3},e^{2i \pi /3})=\Bbb{Q}(2^{1/3}+e^{2i \pi /3})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Probability, balls from boxes Consider two boxes: there are 15 white and 12 black balls in the first box, and 14 white and 18 black balls in the second box. Anna puts her hand in the first box, takes at once two balls and places them in the second box. Then, she takes one ball without looking from the second box. Knowing that she took a black ball from the second box, what is the probability that she transferred two balls of different colors from the first box to the second box? I tried using Bayes' Theorem: X:= transfer 2 colours Y:= pick black from the 2nd box $P(X|Y)=\frac{P(Y|X)P(X)}{P(Y)}$ with $P(X)=\frac{(15,2)(12,0)}{(27,2)}+\frac{(15,0)(12,2)}{(27,2)}$ $P(Y)=\frac{18}{32}$ $P(Y|X)=\frac{18}{34}+\frac{20}{34}$ Does this work like this?
HINT I would say $P(X) = \frac{20}{39}$ (see colleagues 'calculus' comment) $P(Y|X)=\frac{19}{34}$ Knowing that she took a black ball from the second box: $\Rightarrow P(Y)=1$ $ \Rightarrow P(X|Y)=\frac{P(Y|X)P(X)}{P(Y)}=\frac{P(Y|X)P(X)}{1}=\cdots$
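If one instead reads "knowing that she took a black ball" as ordinary conditioning, with $P(Y)$ computed by total probability rather than the $P(Y)=1$ convention in the hint, the posterior can be worked out exactly with rational arithmetic. The computation below is my own completion of the hint, not part of it:

```python
from fractions import Fraction
from math import comb

C = comb
total = C(27, 2)                       # ways to draw 2 balls from box 1

# Prior over what was transferred (box 1 holds 15 white, 12 black).
prior = {
    "2W":    Fraction(C(15, 2), total),
    "mixed": Fraction(15 * 12,  total),
    "2B":    Fraction(C(12, 2), total),
}
# Box 2 starts with 14 white / 18 black; after the transfer: 34 balls.
like_black = {
    "2W":    Fraction(18, 34),
    "mixed": Fraction(19, 34),
    "2B":    Fraction(20, 34),
}

p_black = sum(prior[k] * like_black[k] for k in prior)
posterior_mixed = prior["mixed"] * like_black["mixed"] / p_black

print(prior["mixed"])   # 20/39
print(posterior_mixed)  # 114/221
```

So under this reading the answer is $\frac{114}{221}\approx 0.516$.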
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
On the Lagrange Interpolation Error for the Trapezoidal Quadrature Rule This is part of a document on numerical analysis. I have included its link below. In the derivation for the error in the linear interpolation approximation on page 84, the author "brings" the ξ(x) term of the integral outside using the mean value theorem generalized for integrals. I cannot see how this theorem is used; I am specifically referring to the highlighted conditions from a Wikipedia article in the other image. Conditions for mean value theorem for integrals. I haven't found anything on this thus far on the internet. I appreciate any feedback. Thanks.
This is the weighted or extended version of the mean value theorem, $\int_a^bw(x)g(x)dx=g(c)\int_a^bw(x)dx$ if $w$ has a uniform sign. But you are right, one needs that $f$ is continuous for that. Here we can show that the function $g$ in $$ f(x)-P_1(x)=(x-a)(x-b)g(x) $$ is continuous on $[a,b]$ if $f$ is continuously differentiable there and we know from the interpolation error formula that its values are equal to $g(x)=\frac12f''(\xi_x)$ for some $\xi_x\in(a,b)$. There is no need for the relation $x\mapsto\xi_x$ to be continuous as well. Proof of the continuity of $g$: Expanding the linear interpolation one finds an alternative way to write $g(x)$ as \begin{align} g(x)(x-a)(x-b)&=f(x)-f(a)-(x-a)\frac{f(b)-f(a)}{b-a} \\ &=\frac{(f(x)-f(a))(b-x)-(x-a)(f(b)-f(x))}{b-a} \\[1em] \implies g(x)&=\frac{\frac{f(b)-f(x)}{b-x}-\frac{f(x)-f(a)}{x-a}}{b-a}=[a,x,b]f \end{align} so that for the limits of $g$ to exist in $a$ and $b$ we need that the difference quotients of $f$ converge there, that is, that $f$ is differentiable in these points. Now inserted into the mean value theorem you get $$ \int_a^b(x-a)(x-b)g(x)dx=g(c)\int_a^b(x-a)(x-b)dx=\frac12f''(\xi_c)\cdot (-\frac16)(b-a)^3. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to integrate $\frac{1}{\sqrt{x^2+x+1}}$ How to integrate $$\frac{1}{\sqrt{x^2+x+1}}$$ I tried to solve this integral as follows $\displaystyle \int \frac{1}{\sqrt{x^2+x+1}} \ dx= \int \frac{1}{\sqrt{(x+\frac{1}{2})^2+\frac{3}{4}}} \ dx= \int \frac{1}{\sqrt{(\frac{2x+1}{2})^2+\frac{3}{4}}} \ dx= \int \frac{1}{\sqrt{\frac{3}{4}((\frac{2x+1}{\sqrt{3}})^2+1)}} \ dx= \int \frac{1}{\sqrt{(\frac{2x+1}{\sqrt{3}})^2+1}}\frac{2}{\sqrt{3}} \ dx$ Substitution $t=\frac{2x+1}{\sqrt{3}} ;dt=\frac{2}{\sqrt{3}} \ dx$ $\displaystyle\int \frac{1}{\sqrt{(\frac{2x+1}{\sqrt{3}})^2+1}}\frac{2}{\sqrt{3}} \ dx= \int \frac{1}{\sqrt{t^2+1}} \ dt$ Substitution $\sqrt{u-1}= t;\frac{1}{2\sqrt{u-1}} \ du= dt$ $\displaystyle \int \frac{1}{\sqrt{t^2+1}} \ dt= \int \frac{1}{2 \sqrt{u(u-1)}} \ du= \frac{1}{2}\int \frac{1}{\sqrt{u^2-u}} \ du= \frac{1}{2}\int \frac{1}{\sqrt{(u-\frac{1}{2})^2-\frac{1}{4}}} \ du= \int \frac{1}{\sqrt{(2u-1)^2-1}} \ du$ Substitution $g= 2u-1;dg= 2 \ du$ $\displaystyle \int \frac{1}{\sqrt{(2u-1)^2-1}} \ du= \frac{1}{2}\int \frac{2}{\sqrt{(2u-1)^2-1}} \ du= \frac{1}{2}\int \frac{1}{\sqrt{g^2-1}} \ dg= \frac{1}{2}\arcsin g +C=\frac{1}{2}\arcsin (2u-1) +C= \frac{1}{2}\arcsin (2(t^2+1)-1) +C=\frac{1}{2}\arcsin (2((\frac{2x+1}{\sqrt{3}})^2+1)-1) +C$ However when I tried to graph it using Desmos there was no result, and when I used https://www.integral-calculator.com/ on this problem it got the result $$\ln\left(\left|2\left(\sqrt{x^2+x+1}+x\right)+1\right|\right)$$ Where have I made a mistake?
As you did, by completing the square and with $y:=\dfrac2{\sqrt3}\left(x+\dfrac12\right)$, $$I:=\int\frac{dx}{\sqrt{x^2+x+1}}=\int\frac{dy}{\sqrt{y^2+1}}.$$ Then by some magic $y:=\dfrac12\left(t-\dfrac1t\right)$ (with $t>0$) yields $dy=\dfrac12\left(1+\dfrac1{t^2}\right)dt$ and $$\int\frac{dy}{\sqrt{y^2+1}}=\int\frac{\dfrac12\left(1+\dfrac1{t^2}\right)}{\dfrac12\left(t+\dfrac1{t}\right)}dt=\log t.$$ Now we need to invert, $$y=\dfrac12\left(t-\dfrac1t\right)\iff t^2-2yt-1=0\iff t=y\pm{\sqrt{y^2+1}}.$$ Finally, choosing the plus sign and ignoring the additive constant, $$I=\log\left(x+\dfrac12+\sqrt{x^2+x+1}\right).$$
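A quick numerical check that this antiderivative differentiates back to the integrand (a sketch using a central difference):

```python
import math

def F(x):
    """Antiderivative found above (up to an additive constant)."""
    return math.log(x + 0.5 + math.sqrt(x * x + x + 1))

def integrand(x):
    return 1 / math.sqrt(x * x + x + 1)

# Central difference F'(x) should match the integrand everywhere,
# since x + 1/2 + sqrt(x^2+x+1) > 0 for all real x.
h = 1e-6
max_err = max(
    abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
    for x in [-3.0, -1.0, -0.4, 0.0, 0.7, 2.5, 10.0]
)
print(max_err < 1e-6)  # True
```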
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How might I quickly determine the equation of the parabola, given the coordinates of its focus and vertex? I have this MCQ. Which of the following is the equation of the parabola with focus at $(1,2)$ and vertex at $(3,2)$ : A. $ y^2 - 4y + 8x - 20 = 0 $ B. $ y^2 + y + 8x -20 = 0 $ C. $ y^2 + 4y + 8x - 20 = 0 $ D. $ y^2 - 4y + x - 20 = 0 $ E. $ y^2 - 4y + 8x + 20 = 0 $ Now, I know how to find equation of a parabola from focus and vertex, or from directrix and vertex, but that takes some time. I was wondering if there is a way to just sort of see which equation is the right one, because right from the focus and vertex we can see that the parabola will open on the left and its equation will be $y^2 = - 4ax$. I'm not sure if it's actually possible though, but I think it would be cool if there was a way.
You don't need any particular formula here: just plug the coordinates of the vertex into the equation and you'll see that only in case A the point belongs to the parabola.
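Encoding the five candidates and evaluating each at the vertex $(3,2)$ confirms that only option A survives (a quick sketch):

```python
candidates = {
    "A": lambda x, y: y**2 - 4*y + 8*x - 20,
    "B": lambda x, y: y**2 + y + 8*x - 20,
    "C": lambda x, y: y**2 + 4*y + 8*x - 20,
    "D": lambda x, y: y**2 - 4*y + x - 20,
    "E": lambda x, y: y**2 - 4*y + 8*x + 20,
}

vx, vy = 3, 2  # the vertex must lie on the parabola
passing = [k for k, f in candidates.items() if f(vx, vy) == 0]
print(passing)  # ['A']
```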
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is it ever feasible to actually pump water out of the top of a tank? In math/calculus classes, a problem frequently posed asks how much work (J) is required to pump water out of the top of differently shaped water tanks. If you are unsure of what I am referring to, here are some example problems: * *Emptying water out of a Conical Tank? (Calculus)? *Work required to pump water out of tank in the shape of a paraboloid of revolution *Example 9.5.4 in https://www.whitman.edu/mathematics/calculus_online/section09.05.html Is it ever actually feasible to pump water out of the top of a tank, as is modeled by the solutions to these problems? Does some sort of pump actually exist that is able to pull water from its surface in a tank, ever? This might be a question better suited for engineering forums, but I'm unsure. I'm writing a mathematics paper and am currently struggling to find the real-world application of pumping water out of the top of a tank. And really hoping that there is one.
Most soap dispensers are examples of tanks where you pump a liquid from the surface. The tube extending to the bottom is irrelevant - the work required is the same as if the tube magically expanded and contracted to just reach the surface, because the pressure will cause the liquid inside and outside the tube to be at the same height. A more practical problem would be finding the work to pump some quantity of oil from beneath the ground to the surface.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3158930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If A is a closed linear subspace of a vector space X (which may not be Hilbert) is it true that $A^{\perp} = 0 \Leftrightarrow A = X$? If A is a closed linear subspace of a vector space X (which may not be Hilbert) is it true that $A^{\perp} = 0 \Leftrightarrow A = X$? $A^{\perp} = \{x \in X|\langle x,a\rangle=0, \forall a \in A\}$. Obviously, we have if $A = X$, then $A^{\perp} = 0$. Is the other direction (i.e. $A^{\perp} = 0 \Rightarrow A = X$) right? (Notice that $X$ may not be a Hilbert space.) If it is not right, could you please provide a counterexample?
If $X$ is the space of all sequences $(x_n)_{n\in\mathbb N}$ of real numbers such that $n\gg0\implies x_n=0$, if$$\left\langle(x_n)_{n\in\mathbb N},(y_n)_{n\in\mathbb N}\right\rangle=\sum_{n=1}^\infty x_ny_n,$$you can take$$A=\left\{(x_n)_{n\in\mathbb N}\in X\,\middle|\,\sum_{n=1}^\infty \frac{x_n}n=0\right\}.$$Then $A\neq X$. And $A$ is closed, since it is the kernel of a continuous map. But $A^\perp=\{0\}$. In fact, if $(x_n)_{n\in\mathbb N}\in A^\perp$, then $(x_n)_{n\in\mathbb N}$ is orthogonal to each of the following sequences: * *$(1,-2,0,0,0,0,\ldots)$; *$(1,0,-3,0,0,0,\ldots)$; *$(1,0,0,-4,0,0,\ldots)$ and so on. But this means that $x_1=2x_2$, $x_1=3x_3$, $x_1=4x_4$, … Since $x_n=0$ if $n\gg0$, this implies that $(x_n)_{n\in\mathbb N}$ is the null sequence. Note that $X$ is a subspace of $\ell^2$. If $X$ was the whole space $\ell^2$, then my definition of $A$ would still make sense and $A$ would still be closed, but it would not be true anymore that $A^\perp=\{0\}$. In fact, then we would have $A^\perp=\left\langle\left(\frac1n\right)_{n\in\mathbb N}\right\rangle$.
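The closing remark can be illustrated in finite dimensions (my own illustration, not part of the answer): truncating to the first $N$ coordinates, the constraints $x_1 = k x_k$ coming from the sequences above leave a one-dimensional null space spanned by $(1, 1/2, \dots, 1/N)$ — the truncation of the sequence $(1/n)$:

```python
import numpy as np

N = 8
A = np.zeros((N - 1, N))
for k in range(2, N + 1):          # row encoding the vector e_1 - k e_k
    A[k - 2, 0] = 1.0
    A[k - 2, k - 1] = -float(k)

# The N-1 rows are independent, so the null space is one-dimensional;
# it is spanned by the last right-singular vector of the SVD.
v = np.linalg.svd(A)[2][-1]
v = v / v[0]                       # normalise so the first entry is 1

print(np.allclose(v, 1.0 / np.arange(1, N + 1)))  # True
```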
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Give a counterexample or prove this statement We all know that in Real Analysis, if $E$ is Lebesgue measurable, then there is a $F_{\sigma}$ set $F$ such that: $$F\subset E,\ m(E\backslash F)=0$$ But I want to know whether the next statement hold: Suppose $E$ is a Lebesgue measurable set, is there a $F_{\sigma}$ set $F$ such that: $$E\subset F,\ m(F\backslash E)=0 $$ Personally, I don't think this statement is correct. But I found it hard to give a counterexample, can anyone help me?
The general answer to your question is NO. For a measurable set $E$ what we can do is this: $$ A \subseteq E \subseteq B,\qquad m(E\setminus A)=0,\qquad m(B\setminus E)=0 $$ where $A$ is an $F_\sigma$ set and $B$ is a $G_\delta$ set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Expanding $x^x$ to series with $o(x)$ polynomials I have some doubts about finding the series of $x^x$. I know that from Taylor's theorem I have $$ f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(k)}(a)}{k!}(x-a)^k + o(x^k) $$ I want to have $o(x)$ polynomials, so: Let $$ f(x) = x^x $$ $$ f'(x) = (e^{x\cdot \ln x})' = (\ln x + 1)x^x$$ Using Taylor's theorem I should have $$ f(x) = f(0) + \frac{f'(0)(x)}{1!} + o(x) $$ but $f(0)$ is not defined ($0^0$). Also $f'(0)$ is not defined... But I found information that it should be $$ 1+ \ln x \cdot x + o(x)$$ I don't know why :-(
As you have written $$x^x=e^{x\ln{x}}$$ and as the series expansion of $e^x$ converges for all values of $x\in\mathbb{C}$ one can write $$e^x=\frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+...$$ $$e^{x\ln{x}}=\frac{(x\ln{x})^0}{0!}+\frac{(x\ln{x})^1}{1!}+\frac{(x\ln{x})^2}{2!}+...$$ $$\therefore x^x=1+x\ln{x}+\frac{x^2\ln^2{x}}{2}+\frac{x^3\ln^3{x}}{6}+...$$
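A quick numerical check of this expansion: truncating the series after enough terms reproduces $x^x$ for several $x>0$ (a sketch):

```python
import math

def xx_partial(x, terms):
    """Partial sum of sum_k (x ln x)^k / k!, valid for x > 0."""
    t = x * math.log(x)
    return sum(t**k / math.factorial(k) for k in range(terms))

for x in (0.25, 0.5, 1.5, 2.0):
    assert abs(xx_partial(x, 30) - x**x) < 1e-12
print("ok")
```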
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A certain functor in Hakim's "Topos annellés et schemas relatifs" is a sheaf for the canonical topology In M. Hakim's Topos annellés et schemas relatifs, page 43, (3.4.7), the Author wants to define a sheaf $f_{0A}^*(X)$ over a topos $T$, with respect to the canonical topology. A scheme $X$ is given, together with a commutative ring object $A\in T$, which we can also understand as a representable sheaf over $T$, thus as a functor from $T$ to $Rings$. The definition of $f^*_{0A}$ is as follows: $U\in T \mapsto Hom_{Rings}(\Gamma(X,O_X),A(U))$ The definition makes perfect sense, but it is unclear to me why this gives us a sheaf for the canonical topology. Certainly we should use that $A$ is a (representable) sheaf, but how? $X$ is fixed, so we could replace $\Gamma(X,O_X)$ with any ring $R$ if this is confusing. Thank you in advance for any help or suggestion. EDIT. Perhaps this is simpler than I thought. $A$ is a sheaf by assumption. Hence $Hom_{rings}(R,A(-))$ is a sheaf for any $R$, since $Hom_{Rings}$ is covariant and the equalizer condition for $A$ $$A(U)\to \prod A(U_i)\rightrightarrows \prod A(U_i\times_UU_j)$$ is preserved by applying $Hom_{Rings}(R,-)$ (correct? I am not quite sure of this last sentence).
The argument provided in your edit is correct: The functor $Hom_{\text{Rings}}(R, -)$ preserves limits (this is an entirely general fact, valid for homming out of any object in any category), hence in particular equalizer diagrams.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of getting "tails-tails" for the first time. A fair coin is tossed until the coin lands "tails-tails" (i.e. a tails followed by a tails) for the first time. Let $X$ count the number of tosses required. Find the probability of the event $X=n.$ What will be the expectation of $X$? I think this problem has to be solved by recursion but I find it difficult to solve. Any suggestions regarding this will be highly appreciated. Thank you very much for your valuable time. EDIT $:$ Suppose we get "tails-tails" for the first time in $n$ steps. Then the first two steps will be either $HH$ or $HT$ or $TH.$ If the first two tosses yield either $HH$ or $TH$ then we follow the previous step. If it yields $HT$ in the first two steps then the next two steps will be either $HH$ or $HT$ and we go on like that. We know that $\Bbb E(X) = \Bbb E(X \mid A) \Bbb P(A) + \Bbb E(X \mid A^c) \Bbb P(A^c).$ So in this case we can write $\Bbb E(X) = \Bbb E(X \mid H) \Bbb P(H) + \Bbb E(X \mid T) \Bbb P (T).$ Now $$\Bbb E(X \mid H) = \Bbb E(X) + 1, \Bbb P(H) = \frac 1 2.$$ What is $\Bbb E(X \mid T)$? $$\Bbb E(X \mid T) = \Bbb E((X \mid T) \mid TH) \Bbb P (TH \mid T) + \Bbb E((X \mid T) \mid TT) \Bbb P(TT \mid T).$$ Now $$\Bbb E((X \mid T) \mid TH) = 1 + \Bbb E(X \mid H) = 2 + \Bbb E(X)$$ and $$\Bbb E((X \mid T) \mid TT) = 2.$$ Also $$\Bbb P(TH \mid T) = \Bbb P(TT \mid T) = \frac 1 2.$$ So we get $$\Bbb E (X \mid T) = \frac 1 2 \Bbb E(X) + 2.$$ Putting all these together we get $\Bbb E(X) = 6.$ Where have I made a mistake? Would you please check it?
Hints: Let $g_n$ be the number of sequences of length $n$ made up of $H$ and $T$ (heads and tails) that contain no two consecutive tails. Then the sequence you want is such a sequence (but of length $n - 2$) followed by $TT$. Assume you know the value of $g_n$ in general. In terms of $g_{n - 3}$, what is the desired probability? Now, to compute $g_n$, consider how we can obtain such a sequence of length $n$ from a similar sequence of smaller length. Suppose you're given a sequence of length $n - 1$ that contains no $TT$ (two consecutive tails). Then we can certainly add an $H$ to the end of this sequence to get a valid sequence of length $n$. How many such sequences are there (in terms of the unknown $g_k$)? Does this not count all valid sequences ending in $H$? So that leaves valid sequences of length $n$ ending in $T$. How shall we get such a sequence? Well, if the last term is $T$, then certainly the term before that must be $H$ (for otherwise we would have $TT$ at the end). So what we want is a valid sequence of length $n - 1$ that ends in $H$. How many such sequences are there (again in terms of the unknown $g_k$)? Adding the above two should give a recurrence relation for $g_n$. The base cases are $g_0 = ???$ and $g_1 = ???$. You can also easily convert this into a recurrence relation for the desired probability itself. Full Solution: The recurrence relation for $g_n$, the number of $H$-$T$ sequences of length $n$ that contain no $TT$ (consecutive tails), is $$g_n = g_{n - 1} + g_{n - 2};\ g_0 = 1,\ g_1 = 2.$$ A sequence of length $n$ that ends in $TT$ and has no two consecutive tails before that is of the form $G_{n-3}HTT$, where $G_{n-3}$ is a sequence of length $n - 3$ with no consecutive tails. 
The probability of a sequence of $n$ tosses ending in $TT$ at the last two tosses for the first time is therefore \begin{align*} p_n & = \dfrac{g_{n-3}}{2^n}\\ & = \dfrac{g_{n-4} + g_{n-5}}{2^n}\\ & = \dfrac {p_{n-1}} 2 + \dfrac {p_{n-2}} 4 \end{align*} with base cases $p_2 = \dfrac 1 4$, $p_3 = \dfrac 1 8$. Then the expected number of tosses is $\sum\limits_{n = 2}^\infty n\, p_n$. To compute this using the recurrence relation that we already have for $p_n$, \begin{align*} n p_n &= \dfrac n 2 p_{n - 1} + \dfrac n 4 p_{n - 2} \\ &= \dfrac {1} 2 (n - 1) p_{n - 1} + \dfrac 1 2 p_{n-1} + \dfrac{1} 4 (n - 2)p_{n - 2} + \dfrac 2 4 p_{n-2} \end{align*} Now, we sum this up from $n = 4$ to $\infty$. Note that $\sum p_n = 1$. Thus, \begin{align*} \mathbb E[X] - 3 p_3 - 2 p_2 &= \dfrac 1 2 (\mathbb E[X] - 2 p_2) + \dfrac 1 2 (1 - p_2) + \dfrac 1 4 \mathbb E[X] + \dfrac 1 2 \implies\\ \mathbb E[X] &= 6. \end{align*}
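Both the normalisation $\sum p_n = 1$ and the expectation $\mathbb E[X]=6$ are easy to confirm numerically from the recurrence just derived (a sketch using exact rational arithmetic and a truncated tail):

```python
from fractions import Fraction

# p[n] = P(first TT occurs exactly at toss n), from the recurrence above.
p = {2: Fraction(1, 4), 3: Fraction(1, 8)}
for n in range(4, 400):
    p[n] = p[n - 1] / 2 + p[n - 2] / 4

total = sum(p.values())                    # 1 minus a negligible tail
mean = sum(n * pn for n, pn in p.items())  # E[X] minus a negligible tail

print(float(total), float(mean))  # 1.0 6.0 (tail beyond n = 399 is ~1e-36)
```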
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $\chi(g)$ generates a dense subgroup of $\chi(G)$ for all $\chi$ then $g$ generates a dense subgroup of $G$. This question arose from something in ergodic theory; however, this is not necessary to state or answer the question. Suppose that $G$ is a compact abelian group and $g\in G$. Are the following two equivalent: * *For every continuous character $\chi:G\rightarrow S^1$, $\chi(g)$ generates a dense subgroup of $\chi(G)$. *$g$ generates a dense subgroup of $G$. Clearly $2$ implies $1$, but I can't figure out whether the opposite holds. For those who are interested in the connection with ergodic theory: property $2$ means that $(G,R_g)$ is an ergodic system, and I ask whether this is equivalent to property 1, which means that all the factors $(\chi(G),R_{\chi(g)})$ are ergodic.
The answer is yes! Suppose by contradiction that $1$ holds but $2$ doesn't. Let $H=\overline{\left<g\right>}$, then by assumption $H\leq G$ is a proper subgroup. Look at $G/H$. This is a non-trivial group and so admits a non-trivial character $\chi:G/H\rightarrow S^1$. We may compose with $G\rightarrow G/H$ to obtain a character $\tilde{\chi}:G\rightarrow S^1$ which is trivial on $H$. In particular $\tilde{\chi}(g)$ is trivial, however $\tilde{\chi}(G) = \chi(G/H)$ is a non-trivial subgroup of $S^1$ - contradiction. This completes the proof. Edit: I just figured out another argument! Let $s\in G$; for all $\chi\in\hat G$ there exists a net $n_\alpha$ such that $\chi(g^{n_\alpha})\rightarrow \chi(s)$. Using a diagonal argument we can choose the same sub-net $n_\beta$ for all characters; also, by passing to a sub-sub-net if necessary, we may assume that $g^{n_\beta}\rightarrow h\in G$. Since all the characters are continuous we have that $\chi(h)=\chi(s)$ for all $\chi$. Using the fact that characters separate points we have that $h=s$, which means that $g^{n_\beta}$ converges to $s$. In particular $s\in \overline{\left<g\right>}$
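For a finite cyclic group the equivalence can be checked by brute force (my own sanity check, not part of the proof): in a finite group "dense" means "everything", the characters of $\mathbb{Z}_n$ are $\chi_k(g)=e^{2\pi i kg/n}$, and $\chi_k(g)$ generates $\chi_k(\mathbb{Z}_n)$ exactly when the orders agree, i.e. $\gcd(kg,n)=\gcd(k,n)$:

```python
from math import gcd

def generates(g, n):
    """Does g generate Z_n?"""
    return gcd(g, n) == 1

def all_characters_ok(g, n):
    """chi_k(g) generates chi_k(Z_n) for every k, i.e. the element
    chi_k(g) has the same order as the image subgroup chi_k(Z_n)."""
    return all(gcd(k * g, n) == gcd(k, n) for k in range(n))

for n in range(1, 40):
    for g in range(n):
        assert generates(g, n) == all_characters_ok(g, n)
print("ok")
```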
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a Givens rotation matrix such that $y=Gx$ Assume that $x,y \in \mathbb{R}^2$ with $||x||_2=||y||_2=1$. Find a Givens rotation matrix $G=\begin{bmatrix}c & s \\ -s & c \end{bmatrix}$ (i.e., find $c$ and $s$ with $c^2+s^2=1$) such that $y=Gx$. Answer: Let $x=(x_1,x_2) \ $ and $ \ y=(y_1,y_2)$. Then, $\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}=\begin{bmatrix}c & s \\ -s & c \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$. This gives us, $y_1=cx_1+sx_2, \\ y_2=-sx_1+cx_2.$ Let $c=\cos \theta, \ s=\sin \theta$; then $c^2+s^2=1$. But how to find the angle $\theta$? Help me
HINT: Show that $\langle x,y\rangle = c$; then $\cos \theta = c = \langle x,y\rangle$.
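Following the hint, $c=\langle x,y\rangle$; solving the remaining equation gives $s = x_2y_1 - x_1y_2$ (this explicit formula for $s$ is my own completion of the hint). A numerical sketch:

```python
import math
import random

def givens_to(x, y):
    """Rotation G = [[c, s], [-s, c]] with G @ x = y, for unit vectors x, y."""
    c = x[0] * y[0] + x[1] * y[1]   # <x, y> = cos(theta)
    s = x[1] * y[0] - x[0] * y[1]   # sin(theta), from solving y = G x
    return c, s

random.seed(1)
for _ in range(1000):
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    x = (math.cos(a), math.sin(a))
    y = (math.cos(b), math.sin(b))
    c, s = givens_to(x, y)
    assert abs(c * c + s * s - 1) < 1e-12
    assert abs(c * x[0] + s * x[1] - y[0]) < 1e-12
    assert abs(-s * x[0] + c * x[1] - y[1]) < 1e-12
print("ok")
```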
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove $\text{det}(I+xy^{\top}+wz^{\top})=(1+y^{\top}x)(1+z^{\top}w)-(x^{\top}z)(y^{\top}w)$? Suppose $x,y,z,w$ are vectors in $\mathbb{R}^n$ and $I$ is the identity matrix. Show that $\text{det}(I+xy^{\top}+wz^{\top})=(1+y^{\top}x)(1+z^{\top}w)-(x^{\top}z)(y^{\top}w)$.
Consider the matrix $$\tag1 \begin{bmatrix} 1+y^Tx & y^Tw \\ z^Tx & 1+z^Tw \end{bmatrix} =I + \begin{bmatrix} y^T\\ z^T \end{bmatrix} \begin{bmatrix} x & w\end{bmatrix} $$ For any $A,B$, we have the equality $\det(I+AB)=\det(I+BA)$. So the determinant in $(1)$ is equal to the determinant of $$ I + \begin{bmatrix} x & w\end{bmatrix}\begin{bmatrix} y^T\\ z^T \end{bmatrix} =I+xy^T+wz^T. $$ Proof of the equality $\det(I+AB)=\det(I+BA)$. The matrices $AB$ and $BA$ have the same eigenvalues (padding with zeroes when the sizes are not equal). So $I+AB$ and $I+BA$ have the same eigenvalues, and if the two lists have different length then the remaining eigenvalues are $1$. As the determinant is the product of the eigenvalues, we get $\det(I+AB)=\det(I+BA)$.
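The identity is easy to spot-check numerically with random vectors (a sketch; sampling of course proves nothing, but it would expose a sign error at once):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
for _ in range(100):
    x, y, z, w = (rng.standard_normal((n, 1)) for _ in range(4))
    lhs = np.linalg.det(np.eye(n) + x @ y.T + w @ z.T)
    rhs = ((1 + y.T @ x) * (1 + z.T @ w) - (x.T @ z) * (y.T @ w)).item()
    assert abs(lhs - rhs) < 1e-8
print("ok")
```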
{ "language": "en", "url": "https://math.stackexchange.com/questions/3159899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that each $y\in Y_1$ has a unique $x_y\in X.$ Setting: Let $X,Y$ be compact Hausdorff spaces and $E,F$ be any Banach spaces. Let $C(X,E)$ be the collection of all $E$-valued continuous functions on $X.$ $C(Y,F)$ is defined similarly. Endow sup-norm to both $C(X,E)$ and $C(Y,F).$ Let $T:C(X,E)\to C(Y,F)$ be a continuous linear operator. In this paper, the authors defined the following sets in page $188:$ $$Y_3 = \{ y\in Y: (Tf)(y) = 0 \text{ for all } f\in C(X,E) \} \quad \text{and}$$ $$Y_1 = \{ y\in Y\setminus Y_3: \exists x_y\in X\text{ such that }(Tf)(y) = 0 \text{ if } f(x_y) = 0, f\in C(X,E) \}.$$ Then the author stated that it is easy to see that point $x_y\in X$ corresponding to each $y\in Y_1$ is uniquely determined. Question: Show the bolded statement. Suppose not, that is, there exist distinct $x_y, x_y'\in X$ that correspond to $y\in Y_1.$ Then choose a function $f\in C(X,E)$ such that $f(x_y) \neq 0$ and $f(x_y') =0.$ Since $y\in Y_1$ and $f(x_y') = 0,$ so $(Tf)(y) = 0.$ However, in order to obtain a contradiction, I would require if and only if in the definition of $Y_1,$ that is $Tf(y) = 0$ if and only if $f(x_y) = 0.$ With the new statement, since $f(x_y) \neq 0,$ so $Tf(y) \neq 0,$ a contradiction.
As $y\notin Y_3$, we know that there exists $g\in C(X,E)$ such that $(Tg)(y)\neq 0$ and thus $g(x_y)\neq 0$ and $g(x_y')\neq 0$ (as both $x_y$ and $x_y'$ correspond to $y\in Y_1$). Set $a:= g(x_y')$ and recall that in compact Hausdorff spaces the Urysohn lemma holds. Hence, there exists a continuous function $h: X \rightarrow \mathbb{R}$ such that $$h(x_y)=0 \qquad \text{and} \qquad h(x_y')=1$$ Now define $$f: X \rightarrow E, \ f(z)= -h(z) a$$ Clearly, $f\in C(X,E)$ and $f(x_y)= -h(x_y) a =0$ and $f(x_y')=-h(x_y') a = -a$. As $f(x_y)=0$ we get $(Tf)(y)=0$. Hence, we get $$T(f+g)(y)= (Tf)(y)+ (Tg)(y)= (Tg)(y) \neq 0$$ but $$(f+g)(x_y')= f(x_y') + g(x_y') = -a + a=0$$ which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3160032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
By means of an example, show that $P(A) + P(B) = 1$ does not mean that $B$ is the complement of $A$ I'm in grade 10, and I've just started to learn about complementary events. I am rather perplexed with this question. Isn't this question kinda contradictory, since $P(A) + P(A') = 1$? This is what I got to: $P(A) + P(B) = 1$ $P(A) + P(A') = 1$ How could it be proven that $B$ isn't the complement of $A$? Help would be greatly appreciated.
From the two statements you obtained (correctly), you can further obtain $P(A') = P(B)$. But that does not imply $A'=B$. Just as $x^2 = y^2$ for reals $x,y$ does not imply $x = y$.
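To make the requested example explicit: roll a fair die, and take $A=\{1,2,3\}$, $B=\{3,4,5\}$. Then $P(A)+P(B)=\frac12+\frac12=1$, but the complement of $A$ is $\{4,5,6\}\neq B$. A short check:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}   # fair die
A = {1, 2, 3}
B = {3, 4, 5}

P = lambda E: Fraction(len(E), len(omega))   # uniform probability

assert P(A) + P(B) == 1      # probabilities sum to 1 ...
assert B != omega - A        # ... yet B is not the complement of A
```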
{ "language": "en", "url": "https://math.stackexchange.com/questions/3160157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 3 }
Does there exist a characteristically simple group, which is not a direct product of simple groups? Does there exist a characteristically simple group, which is not a direct product of simple groups? A characteristically simple group is a group without non-trivial proper characteristic subgroups. The only thing I know, that if such group $G$ exists, it should not be Artinian: Suppose it is. If it has no non-trivial proper normal subgroups, then it is simple. If we have one, we can choose the minimal one (let’s denote it as $N$), according to the Zorn lemma (which can be applied, as our group is Artinian). It is not hard to see that $N$ will be simple. Then, $\langle \Pi_{\phi \in Aut(G)} \phi(N) \rangle = G$ as it is characteristic. And as all $\phi(N)$ are normal in it and either are the same subgroup or intersect trivially (because of their minimality), that results in $G$ being the direct product of $[Aut(G):Stab_{Aut(G)}(N)]$ isomorphic copies of $N$. However, not all groups are Artinian. And I do not know how to deal with non-Artinian case here.
Actually, as was pointed out in the comments, $(\mathbb{Q}, +)$ is an example of such a group. It is non-simple, as $(\mathbb{Z}, +) \triangleleft (\mathbb{Q}, +)$. It is characteristically simple, since for any field $\mathbb{F}$, $Aut(\mathbb{F}, +)$ acts transitively on $\mathbb{F}\setminus\{0\}$ (multiplication by any nonzero element is an additive automorphism). It is not a direct product of any two non-trivial groups, as was proved in four different ways here: Is $(\mathbb{Q},+)$ the direct product of two non-trivial subgroups?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3160323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Evaluating $\int\frac{dx}{\sqrt{4-9x^2}}$ with different trig substitutions ($\sin$ vs $\cos$) gives different results I am trying to solve the following integral with trig substitutions. However, I get a different answer for two substitutions that should yield the same result. $$\int\frac{dx}{\sqrt{4-9x^2}}$$ * *For the first trig sub, I set $9x^2 = 4\cos^2\theta$. This simplifies to: $x = \frac{2}{3}\cos\theta$, and $dx = -\frac{2}{3}\sin\theta$. Substituting in, I get: $$\int\frac{-2\sin\theta}{6\sin\theta} = -\frac{\theta}{3} = -\frac{1}{3}\,\cos^{-1}\left(\frac{3x}{2}\right)+C \tag{1}$$ *For the second trig sub, I set $9x^2 = 4\sin^2\theta$. This simplifies to: $x = \frac{2}{3}\sin\theta$, and $dx = \frac{2}{3}\cos\theta$. Substituting in, I get: $$\int\frac{2\cos\theta}{6\cos\theta} = \frac{\theta}{3} = \frac{1}{3}\,\sin^{-1}\left(\frac{3x}{2}\right)+C \tag{2}$$ My question is: Why do these two trig substitutions yield different results graphically? Shouldn't they result in the same graph?
Since $(-\arcsin)'(x)=\arccos'(x)=-\dfrac1{\sqrt{1-x^2}}$, you got the same antiderivative twice: because $\arcsin u+\arccos u=\frac{\pi}{2}$, the results $(1)$ and $(2)$ differ only by the constant $\frac{\pi}{6}$, which is absorbed into $C$.
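Numerically, the two antiderivatives differ by the constant $\frac{\pi}{6}$ everywhere on the common domain $|x|\le\frac23$:

```python
import math

F1 = lambda x: -math.acos(3 * x / 2) / 3   # from the cosine substitution
F2 = lambda x:  math.asin(3 * x / 2) / 3   # from the sine substitution

for x in [-0.6, -0.25, 0.0, 0.3, 0.55]:
    # arcsin(u) + arccos(u) = pi/2, so F2 - F1 = pi/6 for every x
    assert math.isclose(F2(x) - F1(x), math.pi / 6)
```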
{ "language": "en", "url": "https://math.stackexchange.com/questions/3160694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Set theory question regarding $A\overline{\sim} B = \{x+y : \in A\times B\}$ Given $A,B \in P(N)$ We mark $\overline{\sim}$ as $$A\overline{\sim} B = \{x+y : \langle x,y\rangle\in A\times B\}$$ Now, order R will be as following $ARB$ iff $\exists M\in P(N)$ so $A\overline{\sim} M=B$. The question is R is reflexive? symmetric? anti-symmetric? transitive? I think R is partial order, i.e reflexive, anti-symmetric and transitive. and I need to prove it. I think my proof is correct for reflexive so let`s focus on anti-symmetric and transitive. Just for make it sure - N is Set of Natural numbers, including zero. Here what I done for proving transitive: Let $A,B,C \in P(N)$ and assume ARB and BRC. ARB so exists $T \in P(N)$ so $A\overline{\sim} T=B$ BRC so exists $S \in P(N)$ so $B\overline{\sim} S=C$ Let define M as following set: $\{K+P | k\in T$ and $p\in S\}$ $S,T\in P(N)$ so $M\in P(N)$ we need to show $A\overline{\sim} M=C$ Let $q\in A\overline{\sim} M$ here I got stuck, since I have no idea how to show that $q\in C$. Might this R is not transitive? Anti-Symmetric: Let $A,B\in P(N)$ and assume ARB and BRA we need to show A=B. let $a\in A$ ARB so exists $M\in P(N)$ so $A\overline{\sim} M=B$ and here I got stuck. Can we say that M is a set with just a zero element? we have exists as not as target so we can`t choose it, I think.. Might R is not anti-symmetric? Any help would be appreciated.
You got off to a great start with your transitivity proof! Note that $M=T\overline{\sim} S.$ Having chosen $q\in A\overline{\sim}M,$ we know that $q=a+m$ for some $a\in A$ and some $m\in M.$ Since $M=T\overline{\sim} S,$ then $m=t+s$ for some $t\in T$ and some $s\in S,$ whence $$q=a+(t+s)=(a+t)+s.$$ Since $a\in A$ and $t\in T,$ then $a+t\in A\overline{\sim} T=B,$ so since $s\in S,$ then $q\in B\overline{\sim} S=C.$ Thus, $A\overline{\sim}M\subseteq C.$ On the other hand, suppose that $q\in C=B\overline{\sim} S,$ so that $q=b+s$ for some $b\in B$ and $s\in S.$ Since $B=A\overline{\sim} T,$ then $b=a+t$ for some $a\in A$ and $t\in T,$ so that $$q=(a+t)+s=a+(t+s),$$ and since $M:=T\overline{\sim}S,$ then $q\in A\overline{\sim}M.$ Thus, $C\subseteq A\overline{\sim}M,$ so $C=A\overline{\sim}M,$ and so $ARC,$ as desired.
Suppose that $A$ is not empty, and let $a_0$ be the least element of $A.$ Since $a_0\in A=A\overline{\sim}(M\overline{\sim}L),$ then $a_0=a+(m+l)$ for some $a\in A,m\in M,l\in L.$ Consequently, $a\le a_0,$ so that $a=a_0$ by our choice of $a_0$ as the least element of $A.$ Thus, since $m,l\in\Bbb N,$ we conclude that $m=l=0.$ From this, we know that $0\in M$ and $0\in L.$ Thus, taking any $a\in A,$ we have that $a=a+0\in A\overline{\sim} M=B,$ so $A\subseteq B.$ Similarly, $B\subseteq A,$ and so $A=B,$ as desired.
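The two set identities doing the work above — associativity of the sumset operation $\overline{\sim}$ (the key step in transitivity) and $\{0\}$ acting as an identity (used in antisymmetry) — can be spot-checked on finite subsets of $\mathbb N$; a sketch, with finite sets standing in for general ones:

```python
def sumset(A, B):
    """A ~ B = {x + y : x in A, y in B}."""
    return {x + y for x in A for y in B}

# Associativity: with M := T ~ S we get A ~ M = (A ~ T) ~ S.
A, T, S = {0, 2, 5}, {1, 4}, {0, 3, 7}
assert sumset(A, sumset(T, S)) == sumset(sumset(A, T), S)

# {0} is an identity, the fact exploited in the antisymmetry proof.
assert sumset(A, {0}) == A
```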
{ "language": "en", "url": "https://math.stackexchange.com/questions/3160866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why we can't differentiate both sides of a polynomial equation? Suppose we had the equation below and we are going to differentiate it both sides: \begin{align} &2x^2-x=1\\ &4x-1=0\\ &4=0 \end{align} This problem doesn't seems to happens with other equation like $\ln x =1$ or $\sin x = 0$, we can keep differentiating these two without getting "$4=0$", for example. This why I asked about polynomials. PS: I'm not trying to solve any of these equations by differentiating then. But differentiation or integration helps and solving equations? I remember that sometimes to solve trigonometry equtions like $\sin x = \cos x$ we had to square both side so we could use the identity $\sin^2x + \cos^2x =1$. Even thought squaring appears to make it worse because we have a new root.
The kicker is that our domain of truth isn't "big enough" to allow it. From your example, the functions on both sides only agree on $\left\{-\frac12,1\right\}$ However, we can't differentiate functions at isolated points of their domains! On the other hand, consider the equation $$\sin x=\cos x\tan x.$$ The functions here agree everywhere the function on the right-hand side is defined--namely, all points except the odd integer multiples of $\frac\pi2.$ We can therefore differentiate at all such points, to obtain $$\cos x=-\sin x\tan x+\cos x\sec^2 x,$$ which one can verify to be true for all such points.
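A numerical spot-check of the differentiated identity at a few points away from odd multiples of $\frac\pi2$:

```python
import math

for x in [0.1, 0.7, 1.3, 2.0, 2.9]:
    lhs = math.cos(x)
    # -sin(x)tan(x) + cos(x)sec^2(x), with sec^2(x) = 1/cos(x)^2
    rhs = -math.sin(x) * math.tan(x) + math.cos(x) / math.cos(x) ** 2
    assert math.isclose(lhs, rhs)
```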
{ "language": "en", "url": "https://math.stackexchange.com/questions/3160946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Proof of Lemma: Every integer can be written as a product of primes I'm new to number theory. This might be kind of a silly question, so I'm sorry if it is. I encountered the classic lemma about every nonzero integer being the product of primes in Ireland and Rosen's textbook A Classical Introduction to Modern Number Theory. In this textbook there is also a proof for it provided, and I'd like to understand why it is that the proof actually works. The proof is as follows: Assume, for contradiction, that there is an integer $N$ that cannot be written as a product of primes. Let $N$ be the smallest positive integer with this property. Since $N$ cannot itself be prime we must have $N = mn$, where $1 < m, n < N$. However, since $m$, $n$ are positive and smaller than $N$ they must each be a product of primes. But then so is $N = mn$. This is a contradiction. I feel like this proof kind of presupposes the lemma. I think this line of reasoning could be strengthened using induction, and I've seen other proofs of this lemma that use induction. Can someone help me out? What am I missing and why do I think that this proof of the lemma is circular? Edit: I'd like to add that this textbook states that if $p$ is a prime number, then so is $-p$. That's where my confusion stems from.
Although the proof by contradiction is correct, your feeling of unease is fine, because the direct proof by induction is so much clearer: Take an integer $N$. If $N$ is prime, there is nothing to prove. Otherwise, we must have $N = mn$, where $1 < m, n < N$. By induction, since $m, n$ are smaller than $N$, they must each be a product of primes. Then so is $N = mn$. Done.
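The induction translates directly into a recursive procedure; a sketch in Python, where trial division stands in for "find a proper factorization":

```python
def prime_factors(n):
    """Return a list of primes whose product is n (n >= 2), mirroring the
    induction: either n is prime, or n = m * k with 1 < m, k < n and we
    recurse on the strictly smaller factors."""
    for m in range(2, int(n ** 0.5) + 1):
        if n % m == 0:
            return prime_factors(m) + prime_factors(n // m)
    return [n]   # no proper divisor: n itself is prime

factors = prime_factors(360)
assert sorted(factors) == [2, 2, 2, 3, 3, 5]

product = 1
for p in factors:
    product *= p
assert product == 360
```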
{ "language": "en", "url": "https://math.stackexchange.com/questions/3161147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 3 }
Why does the integral domain "being trapped between a finite field extension" implies that it is a field? The following is an exercise from Qing Liu's Algebraic Geometry and Arithmetic Curves. Exercise 1.2. Let $\varphi : A \to B$ be a homomorphism of finitely generated algebras over a field. Show that the image of a closed point under $\operatorname{Spec} \varphi$ is a closed point. The following is the solution from Cihan Bahran. http://www-users.math.umn.edu/~bahra004/alg-geo/liu-soln.pdf. Write $k$ for the underlying field. Let’s parse the statement. A closed point in $\operatorname{Spec} B$ means a maximal ideal $n$ of $B$. And $\operatorname{Spec}(\varphi)(n) = \varphi^{−1}(n)$. So we want to show that $p := \varphi{−1}(n)$ is a maximal ideal in $A$. First of all, $p$ is definitely a prime ideal of $A$ and $\varphi$ descends to an injective $k$-algebra homomorphism $ψ : A/p \to B/n$. But the map $k \to B/n$ defines a finite field extension of $k$ by Corollary 1.12. So the integral domain $A/p$ is trapped between a finite field extension. Such domains are necessarily fields, thus $p$ is maximal in $A$. In the second last sentence, the writer says that the integral domain $A/p$ is trapped between a finite field extension. I don't exactly know what it means, but I think it means that there are two injective ring homomorphisms $f:k\to A/p$ and $g:A/p\to B/n$ such that $g\circ f$ makes $B/n$ a finite field extension of $k$. But why does it imply that $A/p$ is a field?
Let $A$ and $B$ be finitely generated algebras over $k$, and let $\mathfrak m$ be a maximal ideal of $B$. We have an injective map $A/\phi ^{-1}(\mathfrak m) \rightarrow B/\mathfrak m$; identify $A/\phi ^{-1}(\mathfrak m)$ with its image via this map. Let $T\in A/\phi ^{-1}(\mathfrak m)$ be nonzero; then $1/T \in B/ \mathfrak m$, which is an algebraic extension of the field $k$. So there is a monic polynomial of some degree $n$ over $k$ which $1/T$ satisfies; multiplying this by $T^{n-1}$, you get that $1/T \in A/\phi ^{-1}(\mathfrak m)$, and you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3161381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Proving matrices equation when all the matrices in it may not be invertible I'm reviewing linear algebra for my exams this year, and I just encountered this problem. For an arbitrary matrix, $\boldsymbol{A} \in \mathcal{R}^{m \times n}$, prove there must be a unique matrix $\boldsymbol{P} \in \mathcal{R}^{n \times m}$ matching the following 4 equations. $$ APA = A \\ PAP = P \\ (AP)^T = AP \\ (PA)^T = PA $$ Things will be easy if $A$ is invertible, but when $A$ is not invertible I have no ideas how to do it.
The matrix $A$ induces a decomposition $\ker A\oplus(\ker A)^\perp$ in the domain and $\mathrm{im}A\oplus(\mathrm{im}A)^\perp$ in the codomain. The core of the matrix is the square matrix $U:(\ker A)^\perp\to\mathrm{im}A, x\mapsto Ax$. Since $AP$ is to act as a 'left-identity' on $A$ we have to define $Px=U^{-1}x$ for $x\in \mathrm{im}A$. Similarly $PA$ acts as right-identity for $A$, so $Px=0$ for $x\in(\mathrm{im}A)^\perp$. Thus $P=\begin{pmatrix}U^{-1}&0\\0&0\end{pmatrix}$ with respect to bases for the above spaces. Then $AP$ and $PA$ are idempotent symmetric matrices. The definitions for $Px$ are forced, so this makes $P$ unique. Edit: If you prefer using algebra only: Every matrix can be written as $Q^*UR$ where $Q,R$ are projection matrices of the type $[I,O]$. Note that $QQ^*=I$, $RR^*=I$. The equation $APA=A$ implies $Q^*URPQ^*UR=Q^*UR$; so multiplying by $Q$ and $R^*$ gives $RPQ^*=U^{-1}$. Edit: Example to illustrate the above. Take $$A=\begin{pmatrix}-1&-1&2&3\\0&0&1&1\\2&2&1&-1\end{pmatrix}.$$ Its nullspace or kernel consists of the plane spanned by the vectors $u_3=(1,-1,0,0)$, $u_4=(0,1,-1,1)$. Take the perpendicular space $(\ker A)^\perp$ spanned by, say, $u_1=(1,1,1,0)$, $u_2=(0,0,1,1)$. The image space $\mathrm{im}A$ is the plane spanned by $v_1=Au_1=(0,1,5)$ and $v_2=Au_2=(5,2,0)$. Its perpendicular space is spanned by $v_3=(2,-5,1)$ (using cross product). So the matrix of $A$ using the basis $u_i$ for the domain and $v_j$ for the codomain looks like $$\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&0&0\end{pmatrix}$$ So the required matrix $P=\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\\0&0&0\end{pmatrix}$ with respect to the same bases in reverse. Hence $P=\frac{1}{150}\begin{pmatrix}-2 & 5 & 29 \\ -2 & 5 & 29 \\ 24 & 15 & 27 \\ 26 & 10 & -2 \\\end{pmatrix}$ with respect to the standard bases. You can check that this matrix has the desired properties. The proof is based on the rank-nullity formula, in essence.
The dimension of $(\ker A)^\perp=\dim V_1-\mathrm{nullity}(A)=\mathrm{rank}(A)=\dim(\mathrm{im}A)$. Hence every matrix $A:V_1\to V_2$ can be decomposed into three parts, where the first part $R:V_1\to(\ker A)^\perp$ is a projection, the second $U:(\ker A)^\perp\to\mathrm{im}(A)$ is invertible (I called this the core but it is not a standard name), and the third $Q^*:\mathrm{im}(A)\to V_2$ is the embedding of the image subspace into the codomain.
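The matrix characterized by these four conditions is the Moore–Penrose pseudoinverse, so the worked example can be checked against `numpy.linalg.pinv`:

```python
import numpy as np

A = np.array([[-1, -1, 2,  3],
              [ 0,  0, 1,  1],
              [ 2,  2, 1, -1]], dtype=float)

P = np.linalg.pinv(A)   # the Moore-Penrose pseudoinverse of A

# The four Penrose conditions from the problem statement.
assert np.allclose(A @ P @ A, A)
assert np.allclose(P @ A @ P, P)
assert np.allclose((A @ P).T, A @ P)
assert np.allclose((P @ A).T, P @ A)

# By uniqueness, it must equal the matrix computed by hand above.
P_hand = np.array([[-2,  5, 29],
                   [-2,  5, 29],
                   [24, 15, 27],
                   [26, 10, -2]]) / 150
assert np.allclose(P, P_hand)
```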
{ "language": "en", "url": "https://math.stackexchange.com/questions/3161466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $\mathbb E[|X_n-X|^2]\to 0$ can we say that $X_{n_k}(\omega )\to X(\omega )$ for a.e. $\omega $? Let $(\Omega ,\mathcal F,\mathbb P)$ a probability space and let $X_n\to X$ in $L^2(\Omega )$ where $(X_n)$ is a sequence of random variable and $X$ is a random variable as well. By a theorem of Lebesgue measure theory, we know that there is a subsequence such that $X_{n_k}(\omega )\underset{k\to \infty }{\to} X(\omega )$ a.e. Is it also true with random variable ? Because if yes, then the convergence in $L^2$ is very strong (and I don't get why $L^2$ convergence is called weak convergence). Indeed, we get that $$\mathbb P\{\lim_{k\to \infty }X_{n_k}=X\}=1,$$ which is (at my opinion) very strong.
Yes it's true. And it's indeed quite strong, but it's weaker than $$\mathbb P\left\{\lim_{n\to \infty }X_n=X\right\}=1.$$ It is, however, stronger than convergence in distribution or convergence in probability, since we indeed have more information about the sequence $(X_n)$. By the way, when I say that $L^2$ convergence is weaker than a.s. convergence, it's in the sense that $L^2$ convergence only implies $$\lim_{k\to \infty }X_{n_k}(\omega )=X(\omega ) \ \ a.s.,$$ which is not the same as $$\lim_{n\to \infty }X_n(\omega )=X(\omega ) \ \ a.s.$$ Notice, though, that a.s. convergence doesn't imply $L^2$ convergence, just as $L^2$ convergence doesn't imply a.s. convergence. So the term "weaker" is not completely adapted to the situation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3161623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quick but not simple question. $2^\sqrt2$ or e, which is greater? $2^\sqrt2$ vs $e$, which is greater? $(2^\sqrt2)^\sqrt2 = 4\quad $ & $\quad e^\sqrt2$ = ? $\log(2^\sqrt2) = \sqrt2\log(2)\quad$ & $\quad \log(e) = 1$ I tried but can't induce comparable form. Is anybody know how to prove it?
If you know that $\ln(2)\approx0.69$ and $1/\sqrt2=\sqrt2/2\approx1.414/2=0.707$, then you have $\ln(2)\lt1/\sqrt2$, in which case $\ln(2^\sqrt2)=\sqrt2\ln2\lt1=\ln(e)$, hence $2^\sqrt2\lt e$. It's not hard to show that $\sqrt2\gt1.4$, since $1.4^2=1.96\lt2$. It's a little trickier to show that $\ln(2)\lt0.7$, but this can be done by comparing the area beneath the curve $y=1/x$ to the areas of the trapezoids containing it with endpoints at $x=1$, $4/3$, $5/3$, and $2$: $$\ln(2)=\int_1^2{dx\over x}\lt{1\over6}\left(1+2\cdot{3\over4}+2\cdot{3\over5}+{1\over2} \right)={1\over6}\left(1+{3\over2}+{6\over5}+{1\over2} \right)={1\over6}\cdot{42\over10}={7\over10}$$
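A quick numerical check of the conclusion and of both hand estimates used above:

```python
from math import sqrt, log, e

# The comparison reduces to sqrt(2) * ln(2) vs 1:
assert sqrt(2) * log(2) < 1
assert 2 ** sqrt(2) < e

# The two elementary estimates:
assert 1.4 ** 2 < 2          # so sqrt(2) > 1.4
assert log(2) < 7 / 10       # the trapezoid bound on the integral
```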
{ "language": "en", "url": "https://math.stackexchange.com/questions/3161742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Investigate whether $ f $ meets Lipschitz continuity and whether it is uniform continuity Let $f: (-\infty,0]\rightarrow\mathbb R$ which is a function: $f(t)=\arcsin(e^{t})$ for $t\le0$.Investigate whether $ f $ meets Lipschitz continuity and whether it is uniform continuity My try: For $t\neq0$: $$f'(t)=\frac{e^{t}}{\sqrt(1-e^{2t})}$$$$f'(t)>1$$However $f'(t)$ has no upper limit so function $f'$ is not limited.We know that limited derivative $ \Leftrightarrow $ Lipschitz continuity for differentiable functions. So the first observation seems to be that $ f $ does not meet the Lipschitz continuity. But $f$ it is not differentiable in the whole domain so I don't know if I could use this observation.Moreover I know that function is uniform continuity when $$\forall_{\epsilon>0}\exists_{\delta>0} \forall_{x \in D}\forall_{y \in D}(|x-y|< \delta\Rightarrow |f(x)-f(y)|<\epsilon)$$Unfortunately I don't have idea how to show that $f$ (no)meets this condition.Can you get me some tips with it?
For the first part, note that if $f$ were Lipschitz continuous on $(-\infty,0]$ with constant $L$, then the derivative $f'$, where it exists, would satisfy $|f'(x)|\leq L$. But this does not hold since $|f'(t)|\rightarrow +\infty$ for $t\rightarrow 0^-$. Another argument to prove that $f$ is not Lipschitz continuous on $(-\infty,0]$ resorts to the definition. If $f$ were Lipschitz continuous on $(-\infty,0]$, then there would be a constant $L>0$ such that $|f(x)-f(y)|\leq L|x-y|$ for all $x,y\in (-\infty,0]$. Take $x=0$ and $y=-\frac 1n$: since $f(0)=\arcsin 1=\frac\pi2$, the previous condition would read $$ \forall n\in \mathbb{N}\quad\left|\frac{\pi}{2}-\arcsin e^{-\frac 1n}\right|\leq L\cdot \frac 1n, $$ that is $$ n\left|\frac{\pi}{2}-\arcsin e^{-\frac 1n}\right|\leq L, $$ which is not possible to satisfy for all $n\in \mathbb{N}$, since the left-hand side grows like $\sqrt{2n}$. As for the second part, you have to use some properties of uniformly continuous functions, since the definition is not so helpful in these cases. Here, $f$ is continuous over $(-\infty,0]$ and $\lim_{x\rightarrow -\infty} f(x)$ exists, in particular it is equal to $0$. This implies uniform continuity over $(-\infty,0]$.
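The quantity $n\left|\frac\pi2-\arcsin e^{-1/n}\right|$ can be watched growing numerically (it behaves like $\sqrt{2n}$, which is why no single $L$ can work):

```python
import math

def g(n):
    # n * |f(0) - f(-1/n)| with f(t) = arcsin(e^t) and f(0) = pi/2
    return n * abs(math.pi / 2 - math.asin(math.exp(-1 / n)))

# If f were Lipschitz with constant L, g(n) <= L for every n.
# Instead g(n) ~ sqrt(2n) is unbounded:
assert g(100) > 10
assert g(10_000) > 100
assert g(10_000) > g(100)
```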
{ "language": "en", "url": "https://math.stackexchange.com/questions/3161878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
An application of Fredholm Alternative I have just started reading the Fredholm Alternative for finite dimensional spaces and I came towards an excersise that it reads as follows: If $H$ is a Hilbert space and $T:H\to H$ is bounded, linear map, with $\langle Tx,x\rangle >0$ for $x\neq0$ then prove that $T$ is surjective. My way of thinking was to use the adjoint map $T^*$ like that: $\langle Tx,x\rangle = \langle x,T^* x\rangle >0$ , for $x\neq0$. The last thing means that $KerT^* =\{0\}$ and therefore, $ ({KerT^*})^\bot =H$, which means that $T$ is surjective. But I do have some considerations in case the dimension of $H$ is not finite: * *Is the adjoint $T^*$ always defined ? *Why we have $KerT^*\oplus({KerT^*})^\bot =H$ ? Propably the above hold as $H$ is Hilbert and $T$ is bounded, but I cannot understand how I can deduce those... Any clarification or hint is really appreciated.
1) Yes. It will be helpful to know the other terms for adjoint: "dual" and "transpose" (even for operators that are not matrices). See Transpose of a linear map. For a full, excellent exposition (which addresses infinite-dimensional linear algebra despite the title), see FDVS. I don't recommend learning linear algebra from any other textbook (Halmos also wrote a problem book on the subject), nor going forward to functional analysis before studying linear algebra. 2) Because (see the FDVS book cited above for every italicized term) if $U$ is a closed subspace of a Hilbert space $V$, then $V$ is the direct sum of $U$ and the orthogonal complement of $U$; here $\ker T^*$ is closed because $T^*$ is bounded. (For now, see the last bulleted item under "Inner Product Spaces, Properties" in the article Orthogonal complement.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3162153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it possible to "mod" the action of a symmetric group on a symmetric operad? I am relatively new to category theory, so only have a rough understanding of the technicalities behind operads. My understanding is that symmetric operads are defined so that they are "nicely" acted on by the symmetric group. My question is: Is it possible to alter this action i.e. quotient the action so that a proper subgroup of the symmetric group acts on the operad? I do not see how this would raise any issues with the definition of the symmetric operad (for example I think you still have equivariance), but am wondering if any technicalities would prevent me from doing this.
Yes. (This paper formulates extra structure on a sequence $G_0,G_1,\dots$ of groups to get a theory analogous to the theory of operads.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3162416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Summation of 'for loop' with conditional? I am trying to convert instances of nested 'for' loops into a summation expression. The current code fragment I have is: for i = 1 to n: for j = 1 to n: if (i*j >= n): for k = 1 to n: sum++ endif Basically, the 'if' conditional is confusing me. I know that the loops prior will be called n^2 times, but the third loop is only called when $i*j >= n$. How would I write the third summation to account for this, and then evaluate the overall loop's time complexity?
Hint: The "for" cycle in $k$ is very easy to turn into something simpler... As for the other parts, see if you can split the problem into easier steps. For example, what happens for $i=1$? And what happens for $i=2$? And $i=3$? And...
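One way to sanity-check whatever closed form you derive is to translate the pseudocode directly and compare. Here the count of $j$ with $ij<n$ is $\lfloor (n-1)/i\rfloor$ (a formula derived here, worth double-checking yourself):

```python
def brute_force(n):
    """Direct translation of the nested loops: count the sum++ executions."""
    total = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i * j >= n:
                for k in range(1, n + 1):
                    total += 1
    return total

def closed_form(n):
    # For each i there are (n-1)//i values of j with i*j < n, so
    # n^2 - sum(...) pairs trigger the inner loop, each costing n steps.
    return n * (n * n - sum((n - 1) // i for i in range(1, n + 1)))

for n in [1, 2, 3, 10, 37, 60]:
    assert brute_force(n) == closed_form(n)
```

Since $\sum_{i=1}^{n}\lfloor (n-1)/i\rfloor \approx n\ln n$, the total is $n^3 - n^2\ln n + O(n^2)$, i.e. $\Theta(n^3)$.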
{ "language": "en", "url": "https://math.stackexchange.com/questions/3162547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove that if there are integers $m$ and $n$ such that $am +bn =1$ then $a$ and $b$ are coprime. Suppose $a,b \in \mathbb{N}$. Prove that if there are integers $m$ and $n$ such that $am +bn =1$ then $a$ and $b$ are coprime. I came up with the following proof, but I am sure a shorter argument is possible. To prove: $\forall a,b \in \mathbb{N}$ , $\exists m,n \in \mathbb{Z}$ | $am + bn = 1\rightarrow $ $gcd(a,b) = 1$ In order to prove this by contradiciton, suppose then that $\exists m,n \in \mathbb{Z}$ | $am + bn = 1$ and that $gcd(a,b) \neq 1$. Take $ k = gcd(a,b) \neq 1$. Now, $k = ra+sb$ and $s,b \in \mathbb{Z}$, assuming that k can be written as a linear combination of a and b. This is an established theorem. So we have: (1) $am + bn = 1$ (2) $ra + sb = k$ Adding $(1) + (2)$ , we get : $(r+m)a + (s+n)b = k+1$ Since $ k= gcd(a,b)$, then $k|(r+m)a$ and $k|(s+n)b$. Then $k|(r+m)a + (s+n)b = k+1$. So $k|k+1$. But this is impossible, since dividing $k \neq1$ into $k$ gives a remainder of 1. Having derived this contradiction, it cannot be the case that if $am + bn = 1$, then $gcd(a,b) \neq 1$. So it must be the case that: ($\exists m,n \in \mathbb{Z}$ | $am + bn = 1$) $\rightarrow $ ($gcd(a,b) = 1$)
Suppose $am+bn=1$. If $k$ divides both $a$ and $b$ then there exist $p$ and $q$ such that $a=kp$ and $b=kq$. Substituting that into our first equation gives $kpm+kqn=1\implies k$ divides $1$ Therefore, $k=1$ and $a$ and $b$ are coprime.
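The converse is Bézout's identity: for coprime $a,b$, suitable $m,n$ always exist and can be produced by the extended Euclidean algorithm. A small sketch tying the two directions together:

```python
import math

def extended_gcd(a, b):
    """Return (g, m, n) with a*m + b*n == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, m, n = extended_gcd(b, a % b)
    return g, n, m - (a // b) * n

# Bezout direction: coprime a, b admit m, n with a*m + b*n = 1.
g, m, n = extended_gcd(34, 21)
assert g == 1 and 34 * m + 21 * n == 1

# Direction proved above: a*m + b*n = 1 forces gcd(a, b) = 1.
assert math.gcd(34, 21) == 1
```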
{ "language": "en", "url": "https://math.stackexchange.com/questions/3162667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $T$ be an exponential random variable with parameter $\theta$. For $t \gt0$, compute $\Bbb{E}(T|T\le t)$ Let $T$ be an exponential random variable with parameter $\theta$. For $t \gt0$, compute $\Bbb{E}(T|T\le t)$. My work: First $$P(T\le s|T\le t)=\frac{\int_0^s\theta e^{-\theta x}dx}{\int_0^t\theta e^{-\theta x}dx} =\frac{1-e^{-\theta s}}{1-e^{-\theta t}}, \ \ 0\le s\le t$$ Hence $$\Bbb{E}(T|T\le t)=\int_0^t s\cdot \frac{\theta e^{-\theta s}}{1-e^{-\theta t}} ds$$ I am not sure if I make a right calculation, please give a verification.
Your computation looks correct. Alternative way: using the absence of memory property, we have $$ \mathrm E[T; T> t] = \mathrm E[T\mid T> t] \mathrm P(T>t) = (t+ \mathrm E[T])e^{-\theta t} = (t+ \theta^{-1})e^{-\theta t}, $$ whence $$ \mathrm E[T\mid T\le t] = \frac{\mathrm E[T; T\le t]}{\mathrm P(T\le t)} = \frac{\mathrm E[T] - \mathrm E[T; T> t]}{1-e^{-\theta t}} \\ = \frac{\theta^{-1} - (t+ \theta^{-1})e^{-\theta t}}{1-e^{-\theta t}} = \theta^{-1} - \frac{t}{e^{\theta t} - 1}. $$
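A numerical cross-check of the closed form, using midpoint-rule integration ($\theta$ and $t$ chosen arbitrarily):

```python
import math

theta, t = 1.3, 2.0

# Midpoint-rule approximation of
#   E[T | T <= t] = ( integral_0^t s * theta * e^{-theta s} ds ) / P(T <= t)
N = 200_000
h = t / N
num = sum((i + 0.5) * h * theta * math.exp(-theta * (i + 0.5) * h) * h
          for i in range(N))
approx = num / (1 - math.exp(-theta * t))

closed = 1 / theta - t / (math.exp(theta * t) - 1)
assert math.isclose(approx, closed, rel_tol=1e-6)
```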
{ "language": "en", "url": "https://math.stackexchange.com/questions/3162919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding the value of $\lim\limits_{n\rightarrow \infty}\sqrt{n}\int^{\frac{\pi}{4}}_{0}\cos^{2n-2}(x)\mathrm dx$ Finding the value of $\displaystyle \lim\limits_{n\rightarrow \infty}\sqrt{n}\int^{\frac{\pi}{4}}_{0}\cos^{2n-2}(x)\mathrm dx$ What I tried Let $\displaystyle I_{k} =\int^{\frac{\pi}{4}}_{0}\cos^{k}(x)\mathrm dx=\int^{\frac{\pi}{4}}_{0}\cos^{k-1}(x)\cdot \cos (x)\mathrm dx$ $$ I_{k}=\left.\cos^{k-1}(x)\sin x\right|^{\frac{\pi}{4}}_{0}+(k-1)\int^{\frac{\pi}{4}}_{0}\cos^{k-2}(x)\left(1-\cos^2 x\right)\mathrm dx$$ $$ k I_{k}=\bigg(\frac{1}{2}\bigg)^{\frac{k}{2}}+(k-1)I_{k-2}$$ How do I solve it? Help me please.
Put \begin{equation*} I_{n}=\sqrt{n}\int_{0}^{\pi/4}\cos^{2n-2}(x)\,\mathrm{d}x. \end{equation*} Via the substitutions $ y=\sin x $ and $ y=\frac{z}{\sqrt{n-1}} $ we get \begin{gather*} I_{n}=\sqrt{n}\int_{0}^{\pi/4}(1-\sin^2(x))^{n-1}\,\mathrm{d}x = \sqrt{n}\int_{0}^{1/\sqrt{2}}(1-y^2)^{n-1}\cdot\dfrac{1}{\sqrt{1-y^2}}\,\mathrm{d}y =\\[2ex] \dfrac{\sqrt{n}}{\sqrt{n-1}}\int_{0}^{\sqrt{n-1}\left/\sqrt{2}\right.}\left(1-\dfrac{z^2}{n-1}\right)^{n-1}\cdot\dfrac{1}{\sqrt{1-\dfrac{z^2}{n-1}}}\,\mathrm{d}z = \dfrac{\sqrt{n}}{\sqrt{n-1}}\int_{0}^{\infty}f_{n}(z)\,\mathrm{d}z \end{gather*} where \begin{equation*} f_{n}(z)=\begin{cases} \left(1-\dfrac{z^2}{n-1}\right)^{n-1}\cdot\dfrac{1}{\sqrt{1-\dfrac{z^2}{n-1}}}&\mbox{ if } 0<z<\sqrt{n-1}\left/\sqrt{2}\right.\\ 0&\mbox{ if } z>\sqrt{n-1}\left/\sqrt{2}\right. \end{cases} \end{equation*} Then $ 0 \le f_{n}(z)<e^{-z^2}\cdot \dfrac{1}{\sqrt{1-1/2}} $ and $\displaystyle \lim_{n\to \infty}f_{n}(z) = e^{-z^2}.$ Consequently, according to Lebesgue's dominated convergence theorem \begin{equation*} \lim_{n\to \infty}I_{n} = \int_{0}^{\infty}e^{-z^2}\,\mathrm{d}z =\dfrac{\sqrt{\pi}}{2}. \end{equation*} Remark. This is an alternative answer where we use the beta function and the gamma function. From https://en.wikipedia.org/wiki/Beta_function we get \begin{equation*} \sqrt{n}\int_{0}^{\pi/2}\cos^{2n-2}(x)\,\mathrm{d}x = \dfrac{\sqrt{n}\,\Gamma(n-\frac{1}{2})}{\Gamma(n)}\cdot\dfrac{\sqrt{\pi}}{2}\to \dfrac{\sqrt{\pi}}{2}, \mbox{ as } n\to \infty \end{equation*} where we find the limit here https://en.wikipedia.org/wiki/Gamma_function Since \begin{equation*} 0 \le \sqrt{n}\int_{\pi/4}^{\pi/2}\cos^{2n-2}(x)\,\mathrm{d}x \le \sqrt{n}\,2^{1-n}\cdot\dfrac{\pi}{4} \to 0, \mbox{ as } n\to \infty \end{equation*} we are ready.
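A numerical check that $I_n\to\frac{\sqrt\pi}{2}\approx 0.8862$, using the midpoint rule; the tolerances are rough, chosen to match the apparent $O(1/n)$ rate of convergence:

```python
import math

def I(n, N=200_000):
    """Midpoint-rule value of sqrt(n) * integral_0^{pi/4} cos(x)^(2n-2) dx."""
    h = (math.pi / 4) / N
    s = sum(math.cos((i + 0.5) * h) ** (2 * n - 2) for i in range(N))
    return math.sqrt(n) * s * h

limit = math.sqrt(math.pi) / 2

assert abs(I(500) - limit) < 2e-3
assert abs(I(5000) - limit) < 2e-4
assert abs(I(5000) - limit) < abs(I(500) - limit)   # error shrinks with n
```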
{ "language": "en", "url": "https://math.stackexchange.com/questions/3163039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Meaning of linear independence with row vectors So far I have understood that a set of vectors $S = \{v_1, v_2, \ldots, v_k\}$ in a vector space $V$ is linearly independent when the vector equation $c_1v_1 + c_2v_2 + \cdots + c_kv_k = 0$ has only the trivial solution $c_1 = 0, c_2 = 0, \ldots, c_k = 0.$ An example in matrix form is: $\begin{bmatrix}1 & 1 & 2 & 4 \\ 0 & -1 & -5 & 2 \\ 0 & 0 & -4 & 1 \\ 0 & 0 & 0 & 6 \\ \end{bmatrix} \begin{bmatrix} c_1\\ c_2 \\ c_3 \\ c_4 \\ \end{bmatrix} = \begin{bmatrix} 0\\ 0 \\ 0 \\ 0 \\ \end{bmatrix} $ But a matrix of this form $\begin{bmatrix}1 & 1 & 2 & 4 \\ 0 & -1 & -5 & 2 \\ 0 & 0 & -4 & 1 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} c_1\\ c_2 \\ c_3 \\ c_4 \\ \end{bmatrix} = \begin{bmatrix} 0\\ 0 \\ 0 \\ 0 \\ \end{bmatrix} $ has linearly dependent columns, because the system has solutions other than the trivial one. However, I am confused about row vectors, specifically the idea that to get a basis for a subspace using row vectors we must put the matrix in reduced row echelon form to find the linearly independent vectors. For example here the accepted answer gives an example of finding a basis with row vectors using this $\begin{bmatrix}1 & 1 & 2 & 4 \\ 2 & -1 & -5 & 2 \\ 1 & -1 & -4 & 0 \\ 2 & 1 & 1 & 6 \\ \end{bmatrix} \Rightarrow \begin{bmatrix}1 & 1 & 2 & 4 \\ 0 & -3 & -9 & -6 \\ 0 & -2 & -6 & -4 \\ 0 & -1 & -3 & -2 \\ \end{bmatrix} \Rightarrow \begin{bmatrix}1 & 1 & 2 & 4 \\ 0 & -3 & -9 & -6 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}$ and then goes on to say that "Only two of the four original vectors were linearly independent." In what respect are these two vectors linearly independent? This looks exactly like the second example that I gave, in which the vectors were dependent because there was more than the trivial solution. Does linear independence with regard to row vectors mean something else? Or does this also have only the trivial solution, and if so, how?
Linear independence is linear independence is linear independence. It's defined entirely independently of matrices. It is, instead, defined in terms of a vector equation, which in finite dimensions, can be turned into a system of linear equations. As such, matrices are an excellent tool to determine linear independence. The definition of linear independence is precisely what you wrote. We say $v_1, \ldots, v_n$ are linearly independent if the only solution of $$a_1 v_1 + \ldots + a_n v_n = 0 \tag{$\star$}$$ for scalars $a_1, \ldots, a_n$, is the trivial solution $a_1 = a_2 = \ldots = a_n = 0$. That is, no other possible choices of scalars will make the above linear combination into the $0$ vector. This doesn't matter if they are row vectors, column vectors, or more abstract vectors (such as matrices, functions, graphs on a fixed vertex set, algebraic numbers, etc). All you need to define linear independence is an (abstract) vector space. For example, the real functions $\sin^2(x)$, $\cos^2(x)$, and the constant function $1$ are not linearly independent because $$2 \cdot \sin^2(x) + 2 \cdot \cos^2(x) - 2 \cdot 1 \equiv 0,$$ i.e. the linear combination is exactly the $0$ function, even though the scalars aren't all $0$. On the other hand, the functions $\sin^2$ and $\cos^2$ are linearly independent, because, if we assume $$a_1 \sin^2(x) + a_2 \cos^2(x) \equiv 0,$$ that is, is equal to $0$ for all $x$, then trying $x = 0$ yields $$0 = a_1 \sin^2(0) + a_2 \cos^2(0) = a_2$$ and trying $x = \pi/2$ yields $$0 = a_1 \sin^2(\pi/2) + a_2 \cos^2(\pi/2) = a_1.$$ Thus, we logically come to the conclusion that $a_1 = a_2 = 0$, i.e. the functions are linearly independent. So, where do matrices come in? If our vectors belong to $\Bbb{R}^m$ (or $\Bbb{C}^m$, or indeed $\Bbb{F}^m$ where $\Bbb{F}$ is a field), then equation $(\star)$ turns into a system of homogeneous linear equations. 
When you turn this system of linear equations into a matrix of coefficients, the columns will turn out to be precisely the vectors $v_1, \ldots, v_n$, expressed as column vectors. It doesn't matter whether $v_1, \ldots, v_n$ are expressed originally as column vectors or row vectors! Once you turn them into equations, then a matrix, they will become columns. (You should try this for yourself to convince yourself of this fact.) So, if you take the rows of a given matrix, and try to figure out (by definition) whether they are linearly independent or not, you'll inevitably end up with these vectors being columns, i.e. you'll get the same matrix, just transposed. Further, we also get a nice technique for proving linear (in)dependence of vectors in $\Bbb{R}^m$, and pruning them down to a linearly independent set: stick them as columns in a matrix $A$, row reduce to a row-echelon form $B$, and if the $i$th column of $B$ does not have a pivot in it, then the $i$th column of $A$ depends linearly on the previous columns of $A$, and hence can be removed without damaging the span. If you instead stick the vectors in as rows in a matrix and reduce as above, then this will not tell you which vectors depend on each other, in the same way that the column approach does. However, row operations preserve the span of the row vectors, hence the non-zero rows of a row-echelon form of a matrix will be a basis for the span of your vectors. This basis may have no vectors in common with your original set of vectors, however!
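To make the column approach concrete, here is a small exact-arithmetic sketch in Python (the helper `pivot_columns` is hypothetical, written just for this illustration). It places the four row vectors from the question as columns of a matrix, row reduces with `Fraction` to avoid rounding, and reports which columns receive pivots:

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row-reduce the matrix (exact rational arithmetic) and return the
    indices of its pivot columns."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        A[r] = [x / A[r][c] for x in A[r]]       # scale pivot row to leading 1
        for i in range(m):
            if i != r and A[i][c] != 0:          # clear column c elsewhere
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return pivots

# The four row vectors from the question, placed as COLUMNS of a matrix:
vs = [(1, 1, 2, 4), (2, -1, -5, 2), (1, -1, -4, 0), (2, 1, 1, 6)]
cols_as_matrix = [list(col) for col in zip(*vs)]  # transpose: vectors -> columns
print(pivot_columns(cols_as_matrix))  # -> [0, 1]
```

The pivot columns are $0$ and $1$, so $v_1, v_2$ form a basis of the span while $v_3, v_4$ depend on them (indeed $v_3 = -\tfrac13 v_1 + \tfrac23 v_2$), matching the accepted answer's "only two of the four original vectors were linearly independent."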
{ "language": "en", "url": "https://math.stackexchange.com/questions/3163109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }