What is the difference between average slope and instant slope (instantaneous rate of change)? I'm starting to learn calculus, and after looking at several sources on the internet I'm getting confused about what average slope and instant slope (instantaneous rate of change) do and what their differences are. I know that average slope is $\frac{\Delta y}{\Delta x}$ and instant slope is $\frac{dy}{dx}$. Are these formulas correct? And if they are, what difference is there between $\Delta y$ and $dy$?
$\Delta y$ and $\Delta x$ represent actual numbers. If you have two points on the graph of a function, then $\Delta y$ is the change in their $y$-coordinates, and $\Delta x$ is the change in their $x$-coordinates. So, when you divide change in $y$ by change in $x$, i.e. $\frac{\Delta y}{\Delta x}$, you get the slope of the line that connects them. As you move the points closer together (i.e. make $\Delta x$ smaller and smaller), the line no longer connects two points, but becomes a line tangent to the graph of the function. The slope of that line is written as $\frac{dy}{dx}$, where here we think of $\frac{dy}{dx}$ as a single symbol, and not a fraction. Take a look at this Desmos link for a way to visualize this. The $h$ slider controls $\Delta x$, so as you make $h$ smaller and smaller you can see the slope $\frac{\Delta y}{\Delta x}$ get closer to 2, which is $\frac{dy}{dx}$, the slope of the tangent line at $x = 1$.
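The shrinking-$h$ picture can be seen numerically as well. A quick sketch (my own illustration, assuming the function from the Desmos example is $f(x)=x^2$, whose tangent slope at $x=1$ is $2$):

```python
def f(x):
    return x ** 2  # example curve; the tangent line at x = 1 has slope dy/dx = 2

x = 1.0
for h in [1.0, 0.1, 0.01, 0.001]:
    avg_slope = (f(x + h) - f(x)) / h  # Δy/Δx between the points x and x+h
    print(h, avg_slope)  # approaches 2 as h shrinks
```

As $h$ decreases, the printed average slopes settle toward the instantaneous slope $2$.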
{ "language": "en", "url": "https://math.stackexchange.com/questions/3750972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve this logarithmic inequality with a sum of exponential functions? I came across this logarithmic inequality recently (solve for $x \in \mathbb{R}$): $$ 2x \geq \log_2 \left( \frac{35}{3} \cdot 6^{x-1} - 2 \cdot 9^{x - \frac{1}{2}} \right)$$ With a few quick changes, it can be rewritten as: $$ \ln \left( \frac{4}{3} \right) x + \ln 3 \geq \ln \left(\frac{35}{6} \cdot 2^x - 2 \cdot 3^x \right)$$ So, how do you handle the right-hand side? Factoring doesn't appear to be so trivial...
We need to solve $$2^{2x}\geq\frac{35}{18}\cdot6^x-\frac{2}{3}\cdot9^x,$$ where $$ \frac{35}{18}\cdot6^x-\frac{2}{3}\cdot9^x>0$$ and after substitution $\left(\frac{3}{2}\right)^x=t$ we obtain a quadratic inequality: $$\frac{2}{3}t^2-\frac{35}{18}t+1\geq0.$$ Can you end it now? I got the following answer. $$(-\infty,-1]\cup\left[2,\log_{\frac{3}{2}}\frac{35}{12}\right).$$
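As a numerical sanity check of the claimed solution set $(-\infty,-1]\cup[2,\log_{3/2}\frac{35}{12})$ (my own illustration, not part of the original answer), one can test sample points against the original inequality together with the positivity condition:

```python
import math

def holds(x):
    # original inequality: 2^(2x) >= (35/18)*6^x - (2/3)*9^x,
    # together with positivity of the argument of the logarithm
    rhs = (35 / 18) * 6 ** x - (2 / 3) * 9 ** x
    return rhs > 0 and 4 ** x >= rhs

upper = math.log(35 / 12) / math.log(3 / 2)  # log_{3/2}(35/12) ≈ 2.64
print(upper)
# interior points of (-inf,-1] ∪ [2, log_{3/2}(35/12)) vs. points outside
for x, expected in [(-2, True), (0, False), (2.2, True), (3, False)]:
    assert holds(x) == expected
```

Interior points of the claimed intervals pass, points outside fail, matching the answer.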
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Values of $q$ for which the tangent integral converges Find the values of $q$ for which $$\int^{1}_{0}\frac{1}{(\tan (x))^{q}}dx$$ converges. What I tried: Let $\tan x=t.$ Then $\displaystyle dx=\frac{1}{\sec^2 (x)}\,dt=\frac{1}{1+t^2}dt$ and, changing the limits, $$I=\int^{\tan (1)}_{0}\frac{1}{(1+t^2)t^{q}}dt<\int^{\tan(1)}_{0}\frac{1}{t^2\cdot t^{q}}dt$$ $$I<\int^{\tan(1)}_{0}t^{-q-2}dt=\frac{1}{-q-1}\bigg(t^{-q-1}\bigg)\bigg|^{\tan(1)}_{0}=\frac{1}{-q-1}\cdot (\tan (1))^{-q-1}$$ So I get $q\in\mathbb{R}\setminus\{-1\}$, for which the integral converges. Can anyone please explain whether my solution is right? If not, how do I solve it? Thanks.
As $x\to0$, $\tan x\sim x$ and so $I$ converges iff $\int_0^1\frac{dx}{x^q}$ converges, that is iff $q<1$.
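Since $\tan x\sim x$ near $0$, the comparison integral $\int_\varepsilon^1 x^{-q}\,dx$ tells the whole story. A small numerical illustration of mine, using the closed form of the comparison integral:

```python
def comparison_integral(q, eps):
    # ∫_eps^1 x^(-q) dx in closed form, valid for q != 1
    return (1 - eps ** (1 - q)) / (1 - q)

for eps in [1e-2, 1e-4, 1e-6]:
    # q = 0.5 < 1: values approach the finite limit 2
    # q = 1.5 > 1: values blow up as eps -> 0
    print(eps, comparison_integral(0.5, eps), comparison_integral(1.5, eps))
```

For $q<1$ the truncated integrals stay bounded as $\varepsilon\to0$; for $q>1$ they diverge, matching the criterion $q<1$.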
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Another way to solve $\int \frac{\sin^4(x)}{1+\cos^2(x)}\ dx$ without the substitution $y=\tan\left(\frac{x}{2}\right)$? Is there another way to evaluate the integral $$\int \frac{\sin^4(x)}{1+\cos^2(x)}\ dx$$ without the substitution $y=\tan\left(\frac{x}{2}\right)$? $\large \int \frac{\sin^3(x)}{1+\cos^2(x)}\ dx$ is easily evaluated using the substitution $y=\cos(x)$. What if the power of sine is even?
If you enjoy special functions, using $t=\tan(x)$: $$I_n=\int \frac{\sin^n(x)}{1+\cos^2(x)}\ dx=\int \left(\frac{t}{\sqrt{t^2+1}}\right)^n\frac{dt}{t^2+2}$$ $$I_n=\frac {t^{n+1}}{2(n+1)}\,F_1\left(\frac{n+1}{2};\frac{n}{2},1;\frac{n+3}{2};-t^2,-\frac{t^2}{2}\right)$$ where $F_1$ is the Appell hypergeometric function of two variables.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
List of quadratic fields with the UFD property Let $D$ be a squarefree integer, let $K=\mathbb{Q}(\sqrt{D})$, and let $\mathcal{O}_{K}$ be the ring of integers of $K$. My question: where can I find a list of the values of $D<300$ for which the ring of integers of the quadratic field $K=\mathbb{Q}(\sqrt{D})$ has the UFD property? Can we check whether a given value of $D$ makes the ring of integers of a quadratic field a UFD using Sage or Magma? If yes, what is the command?
The one-liner L = [n for n in [2..300] if n.is_squarefree() and QuadraticField(n, 'a').class_number() == 1] computes in Sage the list of all squarefree $n\le 300$ such that $\Bbb Q(\sqrt n)$ has class number one; for the ring of integers of a number field, class number one is equivalent to being a UFD. The first few entries of L are: sage: L[:21] [2, 3, 5, 6, 7, 11, 13, 14, 17, 19, 21, 22, 23, 29, 31, 33, 37, 38, 41, 43, 46] Further checks in sage: sage: K.<a> = QuadraticField(19) sage: K Number Field in a with defining polynomial x^2 - 19 with a = 4.358898943540674? sage: K.class_number() 1 sage: K.OK() Maximal Order in Number Field in a with defining polynomial x^2 - 19 with a = 4.358898943540674? sage: K.OK().class_number() 1 In contrast: sage: L.<b> = QuadraticField(34) sage: L Number Field in b with defining polynomial x^2 - 34 with b = 5.830951894845300? sage: L.class_number() 2 sage: LOK = L.OK() sage: LOK Maximal Order in Number Field in b with defining polynomial x^2 - 34 with b = 5.830951894845300? sage: LOK.class_number() 2 sage: L.class_group() Class group of order 2 with structure C2 of Number Field in b with defining polynomial x^2 - 34 with b = 5.830951894845300? sage: L.class_group().gens() (Fractional ideal class (3, b + 1),) sage: L.ideal(3) Fractional ideal (3) sage: L.ideal(3).factor() (Fractional ideal (3, b + 1)) * (Fractional ideal (3, b + 2)) sage: L.ideal(29).factor() (Fractional ideal (29, b + 11)) * (Fractional ideal (29, b + 18))
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that a positive operator is invertible Help guys, I need to prove this: Let $(V,\langle~,~ \rangle)$ be a finite $n$-dimensional Euclidean space. Let $T$ be a positive definite linear operator on $V$ (there exists a non-singular operator $S$ such that $T=S^*S$); prove that $T$ is invertible. I tried this: We know by hypothesis that $S$ is invertible, so there exists $S^{-1}$ such that $SS^{-1}=I$. Then I need to prove that $T=T^*$. But that is trivial, since $T^*=(S^*S)^*=S^*(S^*)^*=S^*S=T$. Is that correct? I need help!
$T$ is invertible if $Tx=0$ implies $x=0$. In this case, $T=S^*S$ where $S$ is non-singular. Therefore, if $Tx=0$, it follows that $$ 0 = \langle Tx,x\rangle=\langle S^*Sx,x\rangle=\langle Sx,Sx\rangle=\|Sx\|^2 \implies Sx=0 \implies x=0. $$ So $T$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
What is Friedberg doing in this proof to show the (ij)th entry of this matrix Here is the Theorem: Let $V$ and $W$ be finite-dimensional vector spaces over $F$ with ordered bases $\beta = \{x_1, \ldots, x_n\}$ and $\gamma = \{y_1, \ldots, y_m\}$ respectively. For any linear transformation $T : V \to W$, the mapping $T^T : W^* \to V^*$ defined by $T^T(g) = gT$ for all $g \in W^*$ is a linear transformation with the property that $[T^T]_{\gamma^*}^{\beta^*} = ([T]_\beta^\gamma)^T$. At some point in his proof he derives this formula $$T^T(g_j) = g_j T = \sum\limits_{s = 1}^{n}(g_j T)(x_s)f_s$$ with dual bases $ \beta^* = \{f_1, \ldots, f_n\}$ and $\gamma^* = \{g_1, \ldots, g_m\}$ and then claims that the $(i, j)^{\text{th}}$ entry of $[T^T]_{\gamma^*}^{\beta^*}$ is $$(g_jT)(x_i)$$ I don't understand what he does here to make this claim. Could somebody please clarify?
You have not explicitly said so, but I suspect that $(x_1,\dots,x_n),(y_1,\dots,y_m)$ are meant to denote bases for $V$ and $W$, and $(f_1,\dots,f_n),(g_1,\dots,g_m)$ are the corresponding dual bases for $V^*$ and $W^*$. Please correct me if I am wrong. Recall that for a transformation $\alpha:V \to W$, the entries $a_{ij}$ of $[\alpha]^\gamma_\beta$ are defined so that $$ \alpha(x_j) = \sum_{i=1}^m a_{ij}y_i. $$ With that in mind, let $A$ denote the matrix $[T^\top]_{\gamma^*}^{\beta^*}$ of the transformation $T^\top:W^* \to V^*$. By the above, this means that $$ T^\top(g_j) = \sum_{k=1}^n a_{kj} f_k, $$ where I have switched the summation index for clarity. It follows that $$ (g_j T)(x_i) = (T^\top(g_j))(x_i) = \left( \sum_{k=1}^n a_{kj} f_k\right)(x_i) = \sum_{k=1}^n f_k(x_i) a_{kj} = a_{ij}. $$ So, the $(i,j)$ entry of $[T^\top]_{\gamma^*}^{\beta^*}$ is indeed $(g_jT)(x_i)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim _{m\to\infty}\left(\frac1{m^2}+\frac2{m^2}+\frac3{m^2}+\cdots+\frac{m}{m^2}\right)$. Where's my error? The question is the following: $$\lim _{m \to \infty}\left(\frac{1}{m^{2}}+\frac{2}{m^{2}}+\frac{3}{m^{2}}+\cdots+\frac{m}{m^{2}}\right)$$ method-1: $$\lim _{m \rightarrow \infty}\left(\frac{m(m+1)}{2 m^{2}}\right)$$ $$=\frac{1}{2}$$ method-2 (applying limits individually): $$\lim _{m \rightarrow \infty} \frac{1}{m^{2}}+\lim _{m \rightarrow \infty} \frac{2}{m^{2}}+\lim _{m \rightarrow \infty} \frac{3}{m^{2}}+\cdots=0,$$ since the denominator is always greater than the numerator. What's wrong with method-2? Is it wrong to apply limits individually? If so, then how?
The first method is correct. The second is wrong: the limit of a sum equals the sum of the limits only when there is a fixed (finite) number of summands; here the number of summands grows with $m$.
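A numerical illustration of why the finite-sum rule fails here: every individual term tends to $0$, yet the number of terms grows with $m$ and the total tends to $\frac12$.

```python
for m in [10, 100, 10000]:
    largest_term = m / m ** 2                         # biggest summand, tends to 0
    total = sum(k / m ** 2 for k in range(1, m + 1))  # whole sum, tends to 1/2
    print(m, largest_term, total)
```

For $m=10000$ the largest term is $10^{-4}$ while the total is already within $10^{-3}$ of $\frac12$.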
{ "language": "en", "url": "https://math.stackexchange.com/questions/3751879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
question about similar triangle word problem A person is walking directly away from a light on an $18$-foot tall pole. At this instant, the person casts a shadow $14$ feet long. If they walk $10$ feet farther from the pole, they will cast a shadow $20$ feet long. How tall is the person? I've tried setting up $2$ unknown variables for two different triangle figures, since you don't know the height of the person and (I think) you don't know the initial distance from the pole for the shadow to appear. Here's how I set it up: $h$ = height of person, $x$ = initial distance of the person from the pole. Similar triangle $1$: $\frac{18}{14+x}=\frac{h}{14} \implies 252=14h+hx$. Similar triangle $2$: $\frac{18}{x+30}=\frac{h}{20} \implies 360=30h+hx$. I'm pretty sure I set the problem up wrong because I can't seem to get the answer for $h$.
See the image (not to scale!): We have from the smaller triangle: $$(x+14):18 = 14:h$$ and from the big triangle: $$(x+30):18 = 20:h$$ which resolves to $$\begin{cases}x=18\times 14:h - 14\\x=18\times 20:h-30\end{cases}$$ Left hand sides are the same, so right hand sides are equal, too: $$\frac{18\cdot 14}h-14 = \frac{20\cdot 18}h-30$$ Multiply both sides by $h$ to obtain $$18\cdot 14-14h = 20\cdot 18-30h$$ hence $$(30-14)h = (20-14)\cdot 18$$ $$16 h= 6\cdot 18$$ and the answer is: $$h=6\tfrac 34$$
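The arithmetic can be double-checked with exact rational numbers (a sketch using Python's fractions module):

```python
from fractions import Fraction

h = Fraction(6 * 18, 16)         # from (30-14)h = (20-14)*18; h = 27/4, i.e. 6 3/4 feet
x1 = Fraction(18 * 14) / h - 14  # distance x recovered from (x+14):18 = 14:h
x2 = Fraction(18 * 20) / h - 30  # distance x recovered from (x+30):18 = 20:h
print(h, x1, x2)                 # both expressions for x must agree
```

Both equations give the same $x$, confirming $h=6\tfrac34$ feet is consistent with the setup.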
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I transform this problem into a semidefinite program? $$\begin{array}{ll} \text{minimize} & \dfrac{(c^T x)^2}{(d^Tx)}\\ \text{subject to} & Ax \leq b\\ & d^T x > 0\end{array}$$ I have been stuck on this question for a couple days. I am sharing with you what I tried, although I am pretty sure it's wrong. Please help.
Let $r=d^Tx > 0$ and $s=c^T x$. Your objective $\min t$ subject to $t \geq \frac{s^2}{r}$ is conic representable via the rotated quadratic cone (best for practical optimization): $(t,r,s) \in \mathcal{Q}_r^3$, or alternatively via the semidefinite cone: $\left(\begin{array}{ll} t & s\\ s & r \end{array}\right) \in \mathcal{S}^2_+$ If you for some reason need a standard form SDP you need to reformulate $r=d^Tx$ and $s=c^T x$ with two inequalities each and append them to your system of inequalities $Ax \leq b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inverse of $y=1/x$ Can anybody say anything about the inverse function of $y=1/x$, plot it on a graph, and compare the graphs of the given function and its inverse? Is $y=1/x$ invertible? If yes, then do the graphs of $y=1/x$ and its inverse coincide?
Solve the equation $$y=\frac1x$$ for $x$. This immediately gives you $$x=\frac1y$$ and is valid for $x,y\ne0$. So $y=1/x$ is invertible on its domain $x\ne0$ and is its own inverse; in particular, the graphs of the function and its inverse coincide.
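A two-line numerical check that the map is its own inverse (applying it twice returns the input):

```python
def f(x):
    return 1 / x  # defined for x != 0

for x in [0.5, 2.0, -3.0]:
    assert abs(f(f(x)) - x) < 1e-12  # f(f(x)) = x, so f is its own inverse
print("f is an involution on its domain")
```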
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Convergence of a double integral Let's assume that $f \in L^p(\mathbb{R})$ ($1 \leq p < \infty$). Does it then hold that $$ \lim_{n \rightarrow \infty} \int_0^1 \int_0^1 \left \lvert f\left( \frac{\tilde{r}}{n} \right) -f\left(\frac{r}{n} \right) \right \rvert^p ~\mathrm{d}r \mathrm{d}\tilde{r} = 0 \quad ? $$ And if yes, how can I prove it? I am quite certain that this is true. I tried to use that $\displaystyle \lim_{h \rightarrow 0} \lVert f(\cdot + h) - f \rVert_{L^p} = 0$ but that did not help me, as in this case I have a double integral. Maybe I could use a density argument or a transformation, but I don't quite know how. Can anyone help me out? Or does there exist a counterexample?
Well, for fixed $\tilde{r} \in [0,1]$ we have $$\lim_{n\to\infty} \int_0^1 \left|f\left(\frac{\tilde{r}}n\right) - f\left(\frac{r}n\right)\right|^p\,dr =\int_0^1 \lim_{n\to\infty}\left|f\left(\frac{\tilde{r}}n\right) - f\left(\frac{r}n\right)\right|^p\,dr= 0$$ by the Lebesgue dominated convergence theorem, since the integrand is dominated by $2^p\|f\|_{L^p}^p$ which is integrable on $[0,1]$. Therefore the sequence of functions $$\tilde{r} \mapsto \int_0^1 \left|f\left(\frac{\tilde{r}}n\right) - f\left(\frac{r}n\right)\right|^p\,dr$$ converges pointwise to $0$ when $n\to\infty$ and it is bounded again by $2^p\|f\|_{L^p}^p$ so by the Lebesgue dominated convergence theorem we have $$\lim_{n\to\infty} \int_0^1 \int_0^1 \left|f\left(\frac{\tilde{r}}n\right) - f\left(\frac{r}n\right)\right|^p\,dr\,d\tilde{r} = \int_0^1 \left(\lim_{n\to\infty}\int_0^1 \left|f\left(\frac{\tilde{r}}n\right) - f\left(\frac{r}n\right)\right|^p\,dr\right)d\tilde{r} = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a geometric intuition for integration by parts? Is there a geometric intuition for integration by parts? $$\int f(x)g'(x)\,dx = f(x)g(x) - \int g(x)f'(x)\,dx$$ This can, of course, be shown algebraically by the product rule, but where is the geometric intuition? I have seen the geometry of IBP via parametric equations, but I don't get it. Newest edit: A few similar questions have been asked before, but they use parametric equations to show the geometry behind IBP. I am interested in whether there is a geometric intuition which uses functions in the Cartesian plane, or some other, maybe more natural, explanation.
Note. Edited because Adayah pointed out (correctly, and to my chagrin) that this answer was totally sloppy—sloppier even than I intended it to be. Let's hope it's better now. When we use integration by parts on an integral $$ \int u(x) \, \mathrm{d}v(x) = \int u(x) v'(x) \, \mathrm{d}x $$ we implicitly treat $u$ and $v$ as parametric functions of $x$. If we plot these functions against each other on the $u$-$v$ plane, we might obtain something like the below: (Note that $v$ is on the horizontal axis, and $u$ on the vertical.) In this diagram, the purple region below the curve represents the definite integral $$ \int_{v(x)=2}^3 u(x) \, \mathrm{d}v(x) = \int_{x=v^{-1}(2)}^{v^{-1}(3)} u(x) v'(x) \, \mathrm{d}x $$ Similarly, the blue region to the left of the curve represents the definite integral $$ \int_{u(x)=1}^2 v(x) \, \mathrm{d}u(x) = \int_{x=u^{-1}(1)}^{u^{-1}(2)} v(x) u'(x) \, \mathrm{d}x $$ Note that we can set $x_1$ such that $u(x_1) = 1$ and $v(x_1) = 2$, and $x_2$ such that $u(x_2) = 2$ and $v(x_2) = 3$, and so we can relate those two integrals by $$ \int_{x=x_1}^{x_2} u(x) v'(x) \, \mathrm{d}x = \left. u(x) v(x) \phantom\int\!\!\!\!\! \right]_{x=x_1}^{x_2} - \int_{x=x_1}^{x_2} v(x) u'(x) \, \mathrm{d}x $$ Obviously this simple visualization of integration by parts relies (at least to some degree) on $u(x)$ and $v(x)$ being one-to-one; otherwise, we have to use signed areas. However, the necessary rigor can be added. I'm making the assumption that rigor was not what was needed here. (ETA: Though more than I provided at first!)
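The two-areas-plus-boundary picture can be checked numerically. Here is a sketch with the hypothetical parametric pair $u(x)=x^2$, $v(x)=x^3$ on $[1,2]$ (both one-to-one there), using a midpoint rule:

```python
def u(x): return x ** 2
def du(x): return 2 * x
def v(x): return x ** 3
def dv(x): return 3 * x ** 2

a, b, n = 1.0, 2.0, 100000
dx = (b - a) / n
mids = [a + (i + 0.5) * dx for i in range(n)]
int_u_dv = sum(u(x) * dv(x) for x in mids) * dx  # area below the curve
int_v_du = sum(v(x) * du(x) for x in mids) * dx  # area left of the curve
boundary = u(b) * v(b) - u(a) * v(a)             # u*v evaluated at the endpoints
print(int_u_dv + int_v_du, boundary)             # the two areas sum to the boundary term
```

The two computed areas sum to $u(b)v(b)-u(a)v(a)=31$, which is exactly the integration-by-parts identity rearranged.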
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 2, "answer_id": 1 }
Show $\log(\det(A))\le \operatorname{tr}(A)-n$ Suppose that $A$ is a real, symmetric, positive definite $n\times n$ matrix. Show that $$\log(\det(A))\le \operatorname{tr}(A)-n \quad \text{and} \quad \log(\det(I_n+A))\le \operatorname{tr}(A).$$ Since $A=CDC^{-1}$ we can say the following: $$\det(A)=\det(C)\det(D)\det(C^{-1})=\det(D)=\Pi \lambda_i$$ But I'm not sure how to proceed from there. I need to somehow show that the trace is greater than the eigenvalues multiplied.
We have: $$\mathrm{Tr}(A) = \sum \lambda_i $$ $$\det(A) = \prod \lambda_i $$ Since $A$ is positive definite, its eigenvalues are positive reals, so we have to show $$ \sum \ln \lambda_i \leq \sum \lambda_i -n $$ which is true as $$\ln x \leq x -1 $$ for all $x>0$. The second inequality follows the same way: the eigenvalues of $I_n+A$ are $1+\lambda_i$, and $\ln(1+\lambda_i)\le\lambda_i$.
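Both inequalities are easy to spot-check numerically on random positive eigenvalues (my own illustration; the matrix itself never needs to be formed):

```python
import math
import random

random.seed(0)
n = 5
eigs = [random.uniform(0.1, 3.0) for _ in range(n)]  # eigenvalues of a hypothetical SPD matrix

log_det = sum(math.log(t) for t in eigs)           # log det A = sum of log eigenvalues
trace = sum(eigs)                                  # tr A = sum of eigenvalues
assert log_det <= trace - n                        # log det A <= tr A - n
assert sum(math.log1p(t) for t in eigs) <= trace   # log det(I+A) <= tr A
print(log_det, trace - n)
```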
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $X$ is $\sigma(\mathcal{G} \cup \mathcal{H})$-measurable can we write $X=f(G,H)$? Let $(\Omega, \mathcal{F}, P)$ be a probability triplet. Let $\mathcal{G}$ and $\mathcal{H}$ be two sub-sigma-algebras of $\mathcal{F}$. Let $X:\Omega\rightarrow\mathbb{R}$ be a random variable such that $\sigma(X) \subseteq \sigma(\mathcal{G} \cup \mathcal{H})$. Can we say there is a Borel-measurable function $f:\mathbb{R}^2\rightarrow\mathbb{R}$ and two random variables $G$ and $H$ such that $X = f(G,H)$, and $\sigma(G) \subseteq \mathcal{G}$, $\sigma(H) \subseteq \mathcal{H}$? Are there references where I can cite such a result? If it makes the problem easier, I am also interested in the case when $\mathcal{G}$ and $\mathcal{H}$ are independent and/or when one of the sub-sigma-algebras, say $\mathcal{G}$, is generated by some random variable $R$. Note 1: I can show it is true if there are random variables $R, S$ such that $\mathcal{G}=\sigma(R)$ and $\mathcal{H}=\sigma(S)$. Note 2: There may be some hope by applying properties of countably generated sigma algebras, since $\sigma(X)$ is countably generated. If we define the events $B_x = \{X \leq x\}$ for each rational number $x$, I wonder if there is a way of writing a particular event $B_x$ in terms of events in $\mathcal{G}$ and $\mathcal{H}$. Application: I have a random variable $Z$ and I want to write for some $f$: $$ E[Z|\sigma(\mathcal{G} \cup \mathcal{H})] \overset{?}{=} f(G,H)$$ This is similar in spirit to the known fact $E[Z|\sigma(Y)]=f(Y)$ for some $f$.
What is true, and this may be sufficient for your purposes, is that under the stated conditions there is a $\mathcal G\otimes\mathcal H$-measurable map $Z:\Omega\times\Omega\to\Bbb R$ such that $X(\omega) = Z(\omega,\omega)$ for all $\omega\in\Omega$. (And conversely, because $\omega\mapsto(\omega,\omega)$ is $\sigma(\mathcal G\cup\mathcal H)/\mathcal G\otimes\mathcal H$-measurable.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3752941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can we apply here the Cayley–Hamilton theorem? We have the matrix \begin{equation*}A:=\begin{pmatrix}3 & 1 & 0 & -1& -1 \\ 0 & 2 & 0 & 0 & 0 \\ 1 & 0 & 2 & 0 & -1 \\ 0 & 0 & 0 & 2 & 0 \\ 1 & 0 & 0 & -1 & 1\end{pmatrix}\in M_5(\mathbb{R})\end{equation*} The characteristic polynomial is \begin{equation*}P_A(\lambda)=(2-\lambda)^5\end{equation*} The eigenvalue $\lambda=2$ has algebraic multiplicity $5$. The eigenspace is \begin{equation*}\left \{\begin{pmatrix}e\\ 0\\ c\\ 0\\ e\end{pmatrix}: c, e\in \mathbb{R}\right \}=\left \{e\begin{pmatrix}1\\ 0\\ 0\\ 0\\ 1\end{pmatrix}+c\begin{pmatrix}0\\ 0\\ 1\\ 0\\ 0\end{pmatrix}: c, e\in \mathbb{R}\right \}\end{equation*} So the geometric multiplicity of the eigenvalue $\lambda=2$ is $2$. How can we calculate $(A - 2I_5)^3$? Can we apply here the Cayley–Hamilton theorem?
Here the dimension of the eigenspace corresponding to the eigenvalue $2$ is $2$, so the Jordan canonical form has exactly two blocks. Two block structures are possible: blocks of order $3$ and $2$, or blocks of order $4$ and $1$. In the first case the largest block has order $3$, so the minimal polynomial is $(\lambda-2)^3$ and $(A-2I_5)^3=0$; note that the Cayley–Hamilton theorem by itself only gives $(A-2I_5)^5=0$, so one has to determine which case occurs (e.g. by computing the rank of $(A-2I_5)^2$). For the first case, the Jordan canonical form is $$ \begin{pmatrix} 2 & 1 & 0 & 0 & 0 \\ 0 & 2 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 2 \\ \end{pmatrix} $$
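Which block structure actually occurs can be settled by direct computation. A small pure-Python check (no libraries assumed) of $N=A-2I_5$ shows $N^2\neq0$ and $N^3=0$, i.e. blocks of order $3$ and $2$ and minimal polynomial $(\lambda-2)^3$:

```python
A = [
    [3, 1, 0, -1, -1],
    [0, 2, 0, 0, 0],
    [1, 0, 2, 0, -1],
    [0, 0, 0, 2, 0],
    [1, 0, 0, -1, 1],
]
# N = A - 2*I
N = [[A[i][j] - (2 if i == j else 0) for j in range(5)] for i in range(5)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(5)) for j in range(5)] for i in range(5)]

N2 = matmul(N, N)
N3 = matmul(N2, N)
print("N^2 nonzero:", any(e != 0 for row in N2 for e in row))  # True
print("N^3 zero:   ", all(e == 0 for row in N3 for e in row))  # True
```

So $(A-2I_5)^3=0$ holds here, consistent with the first Jordan form above.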
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does `Sitzber. Heidelberg Akad. Wiss., Math.-Naturw. Klasse. Abt. A' stand for? I would like to cite an article from 1914 by Oskar Perron without any abbreviations. I am unable to figure out what `Sitzber. Heidelberg Akad. Wiss., Math.-Naturw. Klasse. Abt. A' is short for. Can anyone here perhaps help me with this?
It is “Sitzungsberichte der Heidelberger Akademie der Wissenschaften, Mathematisch-Naturwissenschaftliche Klasse: Abteilung A, Mathematisch-physikalische Wissenschaften” – Source
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that the diagonals of a regular octagon intersect at the angular points of a square. $ABCDEFGH$ is a regular octagon and $AF, BE, CH, DG$ are drawn. Prove that their intersections are the angular points of a square. This question is pretty trivial to solve with coordinate geometry, but it is time-consuming to find the coordinates. Can anybody please provide a solution that does not involve coordinate geometry?
Another way. Let $BE\cap CH=\{K\},$ $BE\cap DG=\{L\}$, $AF\cap DG=\{M\}$ and $AF\cap CH=\{N\}.$ Thus, since our octagon is cyclic, we obtain: $$\measuredangle EBF=\measuredangle BFA$$ and from here $$BE||AF,$$ which gives $$KL||MN.$$ Similarly $$KN||ML,$$ which gives that $KLMN$ is a parallelogram. Now, since $$\measuredangle ACH=\measuredangle BAC,$$ we obtain $$AB||CH,$$ which gives $ABKN$ is a parallelogram and from here $$KN=AB.$$ Similarly, $$KN=CD,$$ which gives that $KLMN$ is a rhombus. Also, $$\measuredangle KNM=\measuredangle ACH+\measuredangle CAF=\frac{1}{2}\cdot\frac{360^{\circ}}{8}+\frac{1}{2}\cdot3\cdot\frac{360^{\circ}}{8}=90^{\circ},$$ which gives that $KLMN$ is a square.
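The synthetic argument can be corroborated numerically: place the octagon's vertices on the unit circle, intersect the four diagonals, and check that $KLMN$ has equal sides and equal diagonals that are $\sqrt2$ times a side (a quick sketch of mine):

```python
import math

V = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4)) for k in range(8)]
A, B, C, D, E, F, G, H = V  # regular octagon ABCDEFGH on the unit circle

def intersect(p1, p2, p3, p4):
    # intersection of line p1p2 with line p3p4 via Cramer's rule
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

K = intersect(B, E, C, H)
L = intersect(B, E, D, G)
M = intersect(A, F, D, G)
N = intersect(A, F, C, H)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = [dist(K, L), dist(L, M), dist(M, N), dist(N, K)]
diags = [dist(K, M), dist(L, N)]
assert max(sides) - min(sides) < 1e-9                   # all sides equal: rhombus
assert abs(diags[0] - diags[1]) < 1e-9                  # equal diagonals
assert abs(diags[0] - sides[0] * math.sqrt(2)) < 1e-9   # right angles: a square
print("KLMN is a square")
```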
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is the image of a faithful representation of a Lie group a Lie subgroup? Let $\varphi:G\to\operatorname{GL}_n(\mathbb{R})$ be a faithful Lie group representation, i.e. an injective Lie group homomorphism. Then is $\varphi(G)$ a Lie subgroup of $\operatorname{GL}_n(\mathbb{R})$? Somehow I think this shouldn't be too difficult to see, but I don't quite get it. I know that $\varphi(G)$ is an immersed submanifold because $\varphi$ is injective and of constant rank, but how to show that the multiplication $m:\varphi(G)\times\varphi(G)\to\varphi(G)$ is smooth?
You are perhaps confused about the definitions, specifically the definition of Lie subgroup (which you have not specified in your question; I will do so below following the most common convention). An immersion is a $C^\infty$ map $\phi:N \to M$ between manifolds such that the differential $d \phi_n$ is injective at each $n \in N$. A submanifold of a manifold $M$ is a pair $(N,\phi)$ consisting of a manifold $N$ and a one-to-one immersion $\phi:N \to M$. If $G$ is a Lie group, a Lie subgroup of $G$ is a pair consisting of a Lie group $H$ and a group homomorphism $\phi: H \to G$ such that the pair $(H,\phi)$ is a submanifold of the manifold $G$. With these definitions it is clear that checking that a faithful representation $\phi:G \to \mathrm{GL}_n(\mathbf{R})$ is a Lie subgroup amounts to checking that it is an immersion (which it is, as you have written above---though I would rather see it by observing that it suffices to prove that it is an immersion at the identity $1 \in G$, which follows from the fact that all vectors in the kernel exponentiate to elements in the kernel).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do you find the interval in which a parametric equation will be traced exactly once I have been all over the internet and I can't find an answer to what seems like a simple question. I want to be able to find the interval for a parametric equation so that it is only traced once. My equations are: \begin{align} x &= 11\cos(u) - 4\cos\left(\frac{11u}{2}\right)\\ y &= 11\sin(u) - 4\sin\left(\frac{11u}{2}\right) \end{align} After looking at the graph, I realize the answer is $4\pi$, but how would I solve this if I am not able to look at the graph. I have seen solutions where people check one loop at a time until they arrive back at the original starting point, but it always seems as if they know how long one loop is and to me it seems they are just taking arbitrary numbers. In other words, how would I know to check every $\frac{\pi}{4}$ versus every $10\pi$.
You are looking for the smallest value of $p$ such that $x(u)=x(u+p)$ and $y(u)=y(u+p)$ for all $u$. Notice the equations for both contain only trigonometric functions, so this $p$ must be some multiple of $\pi$. Consider $$x(u)=11\cos u-4\cos\left(\frac{11u}{2}\right) $$ The period of the first term is clearly $2\pi$. But for the second term, $$4\cos\left(\frac{11(u+2\pi)}{2} \right)=4\cos\left(\frac{11u}{2} +\pi\right) =\color{red}-4\cos\left(\frac{11u}{2}\right)$$ The smallest rational multiple of $\pi$ that is required is, as you’ll see, $\frac{4\pi}{11}$. Taking the lcm of the two periods gives the overall period: $4\pi$. In general, it’s a good idea to remember that the period of $\cos \left(\frac{ax}{b}\right)$ is $2\pi \left(\frac ba\right)$ (similarly for sin). The same argument goes for $y(u)$ as well.
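A quick numerical confirmation that $4\pi$ is a period while $2\pi$ is not (the second term flips sign after $2\pi$):

```python
import math

def x(u): return 11 * math.cos(u) - 4 * math.cos(11 * u / 2)
def y(u): return 11 * math.sin(u) - 4 * math.sin(11 * u / 2)

samples = [0.1 * k for k in range(63)]  # covers one full period [0, 4π)
P = 4 * math.pi
assert all(abs(x(u) - x(u + P)) < 1e-9 and abs(y(u) - y(u + P)) < 1e-9 for u in samples)
assert any(abs(x(u) - x(u + 2 * math.pi)) > 1 for u in samples)  # 2π fails
print("period is 4*pi")
```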
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Simplify $\frac{d}{dt}\int_x^t f(t,y)dy$ I am trying to simplify $\frac{d}{dt}\int_x^t f(t,y)dy$ as part of a proof. I am somewhat confused about how to proceed. Do I define a function $g(t,y)$ such that $\frac{\partial g}{\partial y} = f(t,y)$ and then say $\frac{d}{dt}\int_x^t f(t,y)dy = \frac{d}{dt}(g(t,t) - g(t,x))$? Or can I just say that $\frac{d}{dt}\int_x^t f(t,y)dy = \frac{d}{dt}(f(t,t) - f(t,x))$? If so, why can I say this? Also note that I am trying to avoid using the Leibniz integral rule since it has not been covered yet.
You can think of the integral as being a function of three parameters: $$g(a,b,c) = \int_a^b f(c,y)\:dy$$ Thus the derivative you want can be derived from chain rule $$\frac{d}{dt}g(a(t),b(t),c(t)) = \frac{\partial g}{\partial a}\frac{da}{dt} + \frac{\partial g}{\partial b}\frac{db}{dt} + \frac{\partial g}{\partial c}\frac{dc}{dt}$$ $$= 0 + f(t,t) +\int_x^t \frac{\partial f}{\partial t}(t,y)\:dy $$
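One can sanity-check the final formula on a concrete (hypothetical) example, $f(t,y)=t\,y^2$ with lower limit $x=1$, where the integral has the closed form $g(t)=t(t^3-x^3)/3$:

```python
x0 = 1.0

def g(t):
    # ∫_x0^t (t * y^2) dy = t * (t^3 - x0^3) / 3 for f(t, y) = t * y^2
    return t * (t ** 3 - x0 ** 3) / 3

t, h = 2.0, 1e-6
numeric = (g(t + h) - g(t - h)) / (2 * h)        # central-difference d/dt
predicted = t * t ** 2 + (t ** 3 - x0 ** 3) / 3  # f(t,t) + ∫ ∂f/∂t dy
print(numeric, predicted)                        # both ≈ 31/3
```

The numerical derivative agrees with $f(t,t)+\int_x^t\frac{\partial f}{\partial t}(t,y)\,dy$, as the chain-rule computation predicts.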
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine the $\lambda \in \mathbb{R}$ for which this integral converges Consider the cuspidal cubic given by $x^2 - y^3 =0$ in $\mathbb{C}^2$. The log-canonical threshold of the cuspidal cubic is the supremum of the values of $\lambda \in \mathbb{R}$ for which the integral $$\int \frac{1}{| x^2 - y^3|^{2\lambda}}$$ converges in a neighborhood of $0$. There is an algebraic way of showing that $\lambda = \frac{5}{6}$, but I'm curious as to whether we can deduce this from the convergence of the above integral. Does anyone know how to show that the above integral converges in a neighborhood of $0$ precisely when $\lambda < \frac{5}{6}$? Additional remark: From what I understand, to compute the lct, the integral needs to converge in a neighborhood of $0$ in $\mathbb{C}^2$. Regardless, I do not know how to integrate the function if $x$ and $y$ are real variables. Both settings may be of interest. Thank you for your interest in this problem.
Here is another approach that I have learned from Donaldson: For a non-negative integer $r$, consider the annular regions $$\Omega_r = \{ (z,w) \in \mathbb{C}^2 : 2^{-3(r+1)} \leq | z | \leq 2^{-3r}, \ 2^{-2(r+1)} \leq | w | \leq 2^{-2r} \}.$$ Let $I_r = \int_{\Omega_r} | z^2 - w^3 |^{-2 \lambda}$. The substitution $z = 2^3 x$, $w = 2^2 y$ (which identifies $(x,y)\in\Omega_{r+1}$ with $(z,w)\in\Omega_r$) shows that $$I_{r+1} = 2^{12\lambda -10} I_r.$$ Hence, $\sum_r I_r$ is finite if $\lambda < 5/6$.
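The key scaling step (the integrand picks up a factor $2^{12\lambda}$ under the substitution, while the real $4$-dimensional volume element contributes $2^{-10}$) can be checked pointwise:

```python
lam = 0.5  # any exponent works for the pointwise identity
z, w = 0.3 + 0.4j, -0.2 + 0.1j

lhs = abs((2 ** -3 * z) ** 2 - (2 ** -2 * w) ** 3) ** (-2 * lam)
rhs = 2 ** (12 * lam) * abs(z ** 2 - w ** 3) ** (-2 * lam)
assert abs(lhs - rhs) / rhs < 1e-9
# together with the Jacobian factor (2**-3)**2 * (2**-2)**2 = 2**-10,
# this gives I_{r+1} = 2**(12*lam - 10) * I_r
print("scaling factor:", 2 ** (12 * lam - 10))
```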
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How can I integrate $\int \frac{u^3}{(u^2+1)^3}du?$ How to integrate the following: $$\int\frac{u^3}{(u^2+1)^3}du\,?$$ Here is what I did: Used partial fractions $$\dfrac{u^3}{(u^2+1)^3}=\dfrac{Au+B}{(u^2+1)}+\dfrac{Cu+D}{(u^2+1)^2}+\dfrac{Eu+F}{(u^2+1)^3}$$ After solving I got $A=0, B=0, C=1, D=0, E=-1, F=0$ $$\dfrac{u^3}{(u^2+1)^3}=\dfrac{u}{(u^2+1)^2}-\dfrac{u}{(u^2+1)^3}$$ Substitute $u^2+1=t$, $2u\ du=dt$, $u\ du=dt/2$ $$\int\frac{u^3}{(u^2+1)^3}du=\int \frac{dt/2}{t^2}-\int \frac{dt/2}{t^3}$$ $$=\frac12\dfrac{-1}{t}-\frac{1}{2}\dfrac{-1}{2t^2}$$ $$=-\dfrac{1}{2t}+\dfrac{1}{4t^2}$$ $$=-\dfrac{1}{2(u^2+1)}+\dfrac{1}{4(u^2+1)^2}+c$$ My question: Can I integrate this with a suitable substitution? Thank you
Substitute $u=\sinh t$ to integrate \begin{align} & \int \dfrac{u^3}{(u^2+1)^3}du= \int \frac{\sinh^3t}{\cosh^5t}dt\\ =&\int\tanh^3td(\tanh t)=\frac14\tanh^4t+C= \frac{u^4}{4(u^2+1)^2}+C \end{align}
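A numerical check that this antiderivative differentiates back to the integrand, and that it differs from the partial-fractions answer in the question only by the constant $\tfrac14$ (so both are valid):

```python
def F(u):
    return u ** 4 / (4 * (u ** 2 + 1) ** 2)  # antiderivative from the sinh substitution

def integrand(u):
    return u ** 3 / (u ** 2 + 1) ** 3

def F_op(u):
    # the partial-fractions answer from the question
    return -1 / (2 * (u ** 2 + 1)) + 1 / (4 * (u ** 2 + 1) ** 2)

h = 1e-5
for u in [-2.0, -0.5, 0.7, 3.0]:
    numeric = (F(u + h) - F(u - h)) / (2 * h)  # central difference F'(u)
    assert abs(numeric - integrand(u)) < 1e-8
    assert abs(F(u) - F_op(u) - 0.25) < 1e-12  # the two answers differ by 1/4
print("both antiderivatives are valid")
```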
{ "language": "en", "url": "https://math.stackexchange.com/questions/3753883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 10, "answer_id": 2 }
Proof Check: $x \leq y+ \epsilon$ for all $\epsilon >0$ iff $x \leq y$. Synopsis I want to be sure I'm utilizing proof by contradiction correctly, so please check my proof of the exercise below. It's relatively simple, so it shouldn't take you too much time. Exercise Let $x$ and $y$ be real numbers. Show that $x \leq y + \epsilon$ for all real numbers $\epsilon > 0$ if and only if $x \leq y$. Proof Suppose $x \leq y + \epsilon$ for all $\epsilon > 0$ and $x > y$. Then $x - y > 0$ and $x - y + \epsilon > \epsilon$ for all $\epsilon > 0$. But if $x \leq y + \epsilon$, then $y - x + \epsilon \geq 0$. So $0 \leq y-x+\epsilon < y-x+(x-y+\epsilon) = \epsilon$, a contradiction. For the converse, suppose $ x \leq y$. Then it is obvious that $x \leq y+\epsilon$. This concludes our proof. Update This proof is obviously wrong. It is not a contradiction that $\epsilon >0$. For some reason, I deluded myself that my conclusion stated that $\epsilon < 0$, but that's just due to my occasional stupidity and habitual lack of double checking. Instead, consider some $\epsilon$ such that $0 < \epsilon < x-y$. Then $x \leq y + \epsilon < y+(x-y) = x$, a contradiction. Thank you to the various people who commented on the issues with my proof. This was a very stupid mistake, and I don't even know how I overlooked what I did.
No, it is not correct. You tried to prove it by contradiction, and you got the conclusion that $0<\varepsilon$, claiming that that's a contradiction. Why? There is no contradiction there. If $x>y$, let $\varepsilon=\frac12(x-y)$. Then $\varepsilon>0$ and therefore $x\leqslant y+\varepsilon$. But this means that$$x\leqslant y+\frac12(x-y)=\frac12(x+y)<\frac12(x+x)=x.$$This gives you a contradiction: $x<x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove inequality $\tan(x) \arctan(x) \geqslant x^2$ Prove that for $x\in \left( - \frac{\pi} {2},\,\frac{\pi}{2}\right)$ the following inequality holds $$\tan(x) \arctan(x) \geqslant x^2.$$ I have tried proving that function $f(x) := \tan(x) \arctan(x) - x^2 \geqslant 0$ by using derivatives but it gets really messy and I couldn't make it to the end. I also tried by using inequality $\tan(x) \geqslant x$ on the positive part of the interval but this is too weak estimation and gives opposite result i.e. $x\arctan(x) \leqslant x^2$.
It's enough to prove this for $0<x<\pi/2$. Let $f(x)=(\tan x)/x$. Then $f$ is increasing on $(0,\pi/2)$. To prove this, note for instance that $f$ has nonnegative Maclaurin coefficients. Let $x\in(0,\pi/2)$, and let $y=\arctan x$. Then $y\in(0,\pi/2)$ and $x=\tan y\ge y$, since $f(y)=(\tan y)/y\ge1$. As $f$ is increasing and $y\le x$, we get $f(y)\le f(x)$, that is $$\frac{\tan y}y\le\frac{\tan x}x$$ or $$\frac{x}{\arctan x}\le\frac{\tan x}x$$ which rearranges to $x^2\le\tan x\arctan x$.
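A brute-force numerical check of both the inequality and the monotonicity of $f(x)=(\tan x)/x$ (my own sketch; the grid is arbitrary):

```python
import math

def gap(x):
    # tan(x)*arctan(x) - x^2; the claim is that this is >= 0 on (-pi/2, pi/2).
    return math.tan(x) * math.atan(x) - x * x

def f(x):
    # (tan x)/x, claimed increasing on (0, pi/2).
    return math.tan(x) / x

grid = [t / 1000 * (math.pi / 2 - 1e-3) for t in range(-999, 1000) if t != 0]
```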
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How do I finish solving $f(x)f(2y)=f(x+4y)$? I'm trying to solve this functional equation: $$f(x)f(2y)=f(x+4y)$$ The first thing I tried was to set $x=y=0$; then I get: $$f(0)f(0)=f(0)$$ which means that either $f(0)=0$ or we can divide the equation by $f(0)$ and then $f(0)=1$. Case 1: If $f(0)=0$ then we can try to set $x=0$. Then; $$f(0)f(2y)=f(4y)$$ $$0=f(4y)$$ Which means that one of the solutions is a constant function $f(x)=0$ Case 2: If $f(0)=1$. This is where I'm stuck. I tried to set $x=0$, then I get: $$f(2y)=f(4y)$$ I also tried to set $x=-4y$. Then I get: $$f(-4y)f(2y)=f(0)=1$$ I think that these two observations could be useful, but I don't know how to continue from here. I have guessed that another solution is $f(x)=1$ but I don't know how to show that there aren't any others as well.
Assuming that $ f(0)=1$ (and that $f$ is continuous at $0$), you got for $ y\in \Bbb R $, $$f(4y)=f(2y)=f(y)=f(\frac y2)$$ $$=\dots=f(\frac{y}{2^n})$$ for each $ n\ge 0$. But by continuity of $ f $ at $ 0$, $$\lim_{n\to +\infty}f(\frac{y}{2^n})=f(0)=1$$ thus $$(\forall y\in \Bbb R)\;\; f(y)=1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Can we relax the hypothesis of the Fundamental theorem of calculus? Let $F$ be continuous on $[a,b]$ and differentiable on $[a,b]$ with $F'(x)=f(x)$ for $x\in [a,b]$. Assume that $f$ is Riemann integrable. Then the Fundamental theorem of calculus says that $$F(x)-F(a)=\int_{a}^x f(t) dt$$ My Question is: Can we say that $$F(x)-F(a)=\int_{a}^x f(t) dt$$ holds true if we remove the assumption that $F$ is differentiable at $a$ and $b$? My thoughts: The proof uses the mean value theorem, but the mean value theorem requires only that $F$ is continuous on $[a,b]$ and differentiable on $(a,b)$.
Apostol gives the theorem in the following manner FTC: Let $f:[a, b] \to\mathbb {R} $ be Riemann integrable on $[a, b] $ and let $g:(a, b) \to\mathbb {R} $ be such that $g'(x) =f(x) $ for all $x\in(a, b) $. Then the limits $$\lim_{x\to a^{+} } g(x), \lim_{x\to b^{-}} g(x) $$ exist and we have $$\int_{a} ^{b} f(x) \, dx=\lim_{x\to b^-} g(x) - \lim_{x\to a^+} g(x) $$ Thus essentially you don't need the $F$ in your question to be differentiable (or even continuous or defined) at the end points $a, b$. On request of user @sani via comment I give a proof of the above-mentioned theorem. Let $$F(x) =\int_{a} ^{x} f(t) \, dt\tag{1}$$ Since $f$ is Riemann integrable on $[a, b] $, it is bounded on $[a, b] $; let $M$ be an upper bound for $|f|$ on $[a, b] $. Then $$|F(x+h) - F(x) |=\left|\int_x^{x+h} f(t) \, dt\right|\leq M|h|$$ if both $x, x+h$ lie in $[a, b] $. This proves that $F$ is continuous on $[a, b] $. Consider $g$ defined on $(a, b) $ such that $g'(x) =f(x) $ on $(a, b) $. Let $c\in(a, b) $. By the usual FTC we have $$g(x) =g(c) +\int_{c} ^{x} f(t) \, dt$$ for all $x\in(a, b) $ and using $(1)$ we can write the above equation as $$g(x) =g(c) +F(x) - F(c) \tag{2}$$ Since $F$ is continuous on $[a, b] $, we can see that the limits of the RHS of $(2)$ as $x\to a^+$ and as $x\to b^{-} $ exist, and we have $$\lim_{x\to a^+} g(x) =g(c) +F(a) - F(c) $$ and $$\lim_{x\to b^-} g(x) =g(c) +F(b) - F(c) $$ Subtracting these two equations we get $$F(b) - F(a) =\lim_{x\to b^-} g(x) - \lim_{x\to a^+} g(x) $$ Note that $F(a) =0$ and $F(b) =\int_a^b f(x) \, dx$ via definition $(1)$, and the proof of the above-mentioned theorem is complete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Solve $x^5\equiv 4\pmod 7$ We know about calculating $x^2\equiv 2\pmod 7$ using quadratic residue properties in order to find out whether a solution exists or not. I wonder is there any way to determine that $x^n\equiv k\pmod v$, where $v\ge 2$, $k\in\Bbb Z$, and $n\ge 3$? As I asked in title: Solve $x^5\equiv 4\pmod 7$
Here is another way to determine this. Over the field $\Bbb F_7$ the Berlekamp algorithm gives the factorisation $$ x^5-4=(x^4 + 2x^3 + 4x^2 + x + 2)(x - 2). $$ Hence $x=2$ is the solution of the equation $x^5=4$. We may rewrite this as $x\equiv 2\bmod 7$.
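Both claims — the factorization over $\Bbb F_7$ and the uniqueness of the root — are easy to machine-check (my own snippet, independent of the Berlekamp algorithm):

```python
def fifth_roots(k, m):
    # All residues x mod m with x^5 congruent to k (mod m).
    return [x for x in range(m) if pow(x, 5, m) == k % m]

def polymul_mod(a, b, m):
    # Multiply polynomials given as ascending coefficient lists, mod m.
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % m
    return c
```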
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that among any $2n - 1$ integers, there's always a subset of $n$ which sum to a multiple of $n$ How can one prove that among any $2n - 1$ integers, there's always a subset of $n$ which sum to a multiple of $n$? It is not hard to see this is equivalent to show that among $2n-1$ residue classes modulo $n$ there are $n$ whose sum is the zero-class. Thus, this problem is an example of a Zero sum problem. Also, the general case was first proven in the $1961$ paper of Erdős, Ginzburg and Ziv. This is a resource intended to be part of the ongoing effort to deal with abstract duplicates. There are quite a few posts here related to proving that among any $2n - 1$ integers, there's always a subset of $n$ which sum to a multiple of $n$, with varying degrees of generality from using only specific values of $n$ to proving it for all cases. Each of my following answers deal with a degree of generality by explaining it and then linking to the related existing posts. However, there are many ways to deal with this problem, including some which may not yet be handled by any posts on this site. Some examples, as suggested by quid's question comment, include: * *What are some different ways to prove the results? Basically all solutions use the pigeon-hole principle in some way, so can this be solved without using that principle? Also, as MathOverflow's EGZ theorem (Erdős-Ginzburg-Ziv) asks, can the general solution, as in the EGZ theorem, be proven without using the Chevalley–Warning theorem (or a variant of its proof)? *The answer links to Use Pigeonhole to show of any set of $2^{n+1}-1$ positive integers is possible choose $2^{n}$ elements such that their sum is divisible by $2^{n}$. where one answer shows how you can prove it using induction. Are there any other special cases of subsets of $n$ which can be solved on their own apart from the linked ones related to powers of $2$? 
*The answer gives a proof for $n = 2$, and gives examples of specific $n$ which have been asked on this site of $3$, $4$, $5$, $6$ and $9$. However, are there any other small values of $n$ which can also reasonably be handled explicitly?
Posts may potentially alter the general conditions, such as restrict the set of available congruences and use a set of available integers which is considerably larger than necessary, with the idea being that a specific method can be used to solve the problem. The only such post I know of is the following one which deals with choosing $19$ integers from a set of $181$ integers which only include the $10$ square congruences modulo $19$, with this being solved directly using the pigeon-hole principle on those available congruences: * *Can't understand the solution of this INMO problem
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Let $A\subset\Omega$ and $\mathcal{B}_{A} = \{B\cap A:B\in\mathcal{B}\}$. Show that $\mathcal{B}_{A}$ is a $\sigma$-algebra on $A$. Let $\Omega$ be a nonempty set and $\mathcal{B}$ be a $\sigma$-algebra on $\Omega$. Let $A\subset\Omega$ and $\mathcal{B}_{A} = \{B\cap A:B\in\mathcal{B}\}$. Show that $\mathcal{B}_{A}$ is a $\sigma$-algebra on $A$. MY ATTEMPT To begin with, notice that $A\in\mathcal{B}_{A}$: since $\Omega\in\mathcal{B}$, we conclude that $A = \Omega\cap A\in\mathcal{B}_{A}$. Let us suppose that $S_{1},S_{2},\ldots \in\mathcal{B}_{A}$. Then one has that $S_{i} = B_{i}\cap A$ for some $B_{i}\in\mathcal{B}$, where $i\geq 1$. Consequently, \begin{align*} S = \bigcup_{i=1}^{\infty}S_{i} = \bigcup_{i=1}^{\infty}(B_{i}\cap A) = \left(\bigcup_{i=1}^{\infty}B_{i}\right)\cap A = B\cap A \Rightarrow S\in\mathcal{B}_{A} \end{align*} Could someone help me to prove that $\mathcal{B}_{A}$ is closed under complementation? Any help is appreciated.
If $S =A\cap B\in \mathcal B_A$ where $B \in \mathcal B$, then the complement of $S$ in $A$ is $A\setminus S = A\cap B^{c}$. Since $B^{c} \in \mathcal B$, this shows $A\setminus S \in \mathcal B_A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3754871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to integrate $ \int\frac{x-2}{(7x^2-36x+48)\sqrt{x^2-2x-1}}dx$? How to integrate $$ \int\frac{x-2}{(7x^2-36x+48)\sqrt{x^2-2x-1}}dx\,\,?$$ The given answer is $$ \color{brown}I=-\frac{1}{\sqrt{33}}\cdot \tan^{-1}\bigg(\frac{\sqrt{3x^2-6x-3}}{\sqrt{11}\cdot (x-3)}\bigg)+\mathcal{C}.$$ I tried different substitutions, e.g. $\frac{x^2 - 2x -1}{x-3} = t$, but I am not getting the desired answer. $ORIGINAL$ $QUESTION$: This question was asked in our test and the given answer was option D, i.e. none of the given options was correct.
$$I=\int \frac{x-2}{(7x^2-36x+48)\sqrt{x^2-2x-1}}\,dx$$ This can be simplified using $$\frac{x-2}{7x^2-36x+48}=\frac 1{7(a-b)}\left(\frac{a-2 } {x-a }+\frac{2-b } {x-b } \right)$$ where $$a=\frac{2}{7} \left(9-i \sqrt{3}\right) \qquad \text{and} \qquad b=\frac{2}{7} \left(9+i \sqrt{3}\right) $$ which means we are facing two integrals of the form $$I_c=\int \frac {dx} {(x-c)\sqrt{x^2-2x-1}}$$ Complete the square and let $x=1+\sqrt 2 \sec(t)$, which gives $$I_c=\int \frac{dt}{(1-c) \cos (t)+\sqrt{2}}$$ Now, using the tangent half-angle substitution, $$I_c=2\int\frac{du}{\left(c+\sqrt{2}-1\right) u^2-c+\sqrt{2}+1}=\frac{2 }{\sqrt{-c^2+2 c+1}}\tan ^{-1}\left(u\frac{\sqrt{c+\sqrt{2}-1} }{\sqrt{-c+\sqrt{2}+1}}\right)$$ and so on ....
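For what it's worth, the closed form quoted in the question can be confirmed numerically to be an antiderivative of the integrand on $x>3$ (where the square root and the arctangent argument are both real). This is my own sanity check, with arbitrary sample points:

```python
import math

def integrand(x):
    return (x - 2) / ((7 * x * x - 36 * x + 48) * math.sqrt(x * x - 2 * x - 1))

def antiderivative(x):
    # The closed form quoted in the question; valid for x > 3.
    return -math.atan(math.sqrt(3 * x * x - 6 * x - 3)
                      / (math.sqrt(11) * (x - 3))) / math.sqrt(33)

def central_diff(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2 * h)
```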
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
How to calculate $ \lim_{x\to\infty} (\frac{x}{x+1})^x$ using L'Hopital's rule? I am trying to calculate $ \lim_{x\to\infty} (\frac{x}{x+1})^x$ using L'Hopital. Apparently without L'Hopital the limit is $$ \lim_{x\to\infty} (\frac{x}{x+1})^x = \lim_{x\to\infty} (1 + \frac{-1}{x+1})^x = \lim_{x\to\infty} (1 - \frac{1}{x+1})^{x+1} \frac{1}{1-\frac{1}{x+1}} = e^{-1} \cdot \frac{1}{1} = \frac{1}{e}$$ I am wondering how one could calculate this limit using L'Hopital's rule. My failed approach My initial idea was to use the exponential-log trick in combination with the chain rule as follows: $$ \lim_{x\to\infty} (\frac{x}{x+1})^x = e^{\lim_{x\to\infty} x \ln(\frac{x}{x+1})} \quad (1) $$ So, basically the problem that way is reduced to: $$ \lim_{x\to\infty} x \ln(\frac{x}{x+1}) = \lim_{x\to\infty} x \cdot \lim_{x\to\infty}\ln(\frac{x}{x+1}) \quad (2)$$ As far as $\ln(\frac{x}{x+1})$ is concerned, it has the form $f(g(x))$, so using the chain rule for limits and the chain rule for derivatives in order to apply L'Hopital we can rewrite it as: $$ \lim_{x\to\infty} \ln\bigg( \lim_{x\to\infty} \frac{(x)'}{(x+1)'}\bigg) = \lim_{x\to\infty} \ln(1) \quad (3)$$ But $(2),(3) \to 0 \cdot \infty$, so that failed. Any ideas on how we could approach this in other ways?
Caution, $$\lim fg=\lim f\lim g$$ can only be used when the limits on the right both exist, which is not the case here. By L'Hospital $$\lim_{x\to\infty}\log\left(\frac x{x+1}\right)^x=\lim_{x\to\infty}\frac{\log\left(\dfrac x{x+1}\right)}{\dfrac1x}=\lim_{x\to\infty}\frac{\dfrac1x-\dfrac1{x+1}}{-\dfrac1{x^2}}=-\lim_{x\to\infty}\frac x{x+1}=-1.$$ The simplest is, by continuity of the inverse function, $$\lim_{x\to\infty}\left(\frac x{x+1}\right)^x=\frac1{\lim_{x\to\infty}\left(1+\dfrac1x\right)^x}=\frac1e.$$
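A tiny numerical confirmation of both the logarithmic limit and the original limit (my own check):

```python
import math

def f(x):
    # (x/(x+1))^x, which should tend to 1/e.
    return (x / (x + 1)) ** x

def log_form(x):
    # x*log(x/(x+1)), which should tend to -1.
    return x * math.log(x / (x + 1))
```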
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Unclear problem with $n$-th power matrix and limit Find $$\lim\limits_{n \to \infty} \frac{A_n}{D_n}$$ where $$\begin{pmatrix} 19 & -48 \\ 8 & -21 \\ \end{pmatrix} ^{\! n} = \begin{pmatrix} A_n & B_n \\ C_n & D_n \\ \end{pmatrix}$$ Here $n$ is the power of the matrix, but what are $A_n, B_n, C_n, D_n$ then? Are they the corresponding entries of the matrix raised to the $n$-th power? What is this type of problem called? And what is the way to solve it?
Here is an unconventional approach: we have $$ M = \pmatrix{19 & -48\\ 8 & -21}. $$ We find that the eigenvalues satisfy $$ \det(M - xI) = x^2 + 2x - 15 = 0 \implies x = -5,3. $$ By the Cayley Hamilton theorem, the powers of $M$ satisfy the recurrence $$ M^n + 2M^{n-1} -15 M^{n-2} = 0 $$ From the theory of constant coefficient homogeneous linear difference equations, it follows that $M^n$ has the form $$ M^n = (-5)^n P + 3^n Q $$ for some matrices $P,Q$. We can solve for $P,Q$ using the "initial conditions" of $n=0,1$. We have $$ P + Q = M^0 = \pmatrix{1&0\\0&1}, \quad (-5)P + 3Q = M^1 = \pmatrix{19 & -48\\8 & -21}. $$ Subtracting the second equation from $3$ times the first yields $$ 3P - (-5)P + 0Q = 3\pmatrix{1&0\\0&1} - \pmatrix{19 & -48\\8 & -21} \implies\\ 8P = \pmatrix{-16 & 48\\-8 & 24} \implies P = \pmatrix{-2&6\\-1&3}. $$ We use the first equation to find $$ Q = \pmatrix{1&0\\0&1} - P = \pmatrix{3&-6\\1&-2}. $$ So, we have $$ M^n = (-5)^n \pmatrix{-2&6\\-1&3} + 3^n\pmatrix{3&-6\\1&-2}. $$ In particular $A_n = -2(-5)^n + 3^{n+1}$ and $D_n = 3(-5)^n - 2\cdot 3^n$; dividing both by $(-5)^n$ and letting $n\to\infty$ gives $$\lim_{n \to \infty} \frac{A_n}{D_n} = -\frac{2}{3}.$$
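A small script (my own) that recomputes $P$ and $Q$ exactly from the $n=0,1$ conditions — worth double-checking the signs of the off-diagonal entries by machine — verifies $M^n=(-5)^nP+3^nQ$, and checks the ratio $A_n/D_n$ against $-2/3$:

```python
from fractions import Fraction as Fr

M = [[Fr(19), Fr(-48)], [Fr(8), Fr(-21)]]
I2 = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(m, n):
    r = I2
    for _ in range(n):
        r = mat_mul(r, m)
    return r

# P + Q = I and -5P + 3Q = M give 8P = 3I - M and Q = I - P.
P = [[(3 * I2[i][j] - M[i][j]) / 8 for j in range(2)] for i in range(2)]
Q = [[I2[i][j] - P[i][j] for j in range(2)] for i in range(2)]

def closed_form(n):
    return [[(-5) ** n * P[i][j] + 3 ** n * Q[i][j] for j in range(2)]
            for i in range(2)]
```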
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Players and coins Three players A, B and C simultaneously flip a coin. The coin of A (respectively B, C) gives heads with probability $a$ (respectively $b$, $c$), with $0<a,b,c<1$. If two of the three coins give the same result, the player who flipped the third coin is tossed out of play; if the coins are all equal, the players flip the coins again. * *What is the probability that A is the first player tossed out? *What is the value of the probability of 1) if $a=b=c$? Could you answer without calculations? *If $a=b=c$, what is the mean number of rounds needed to finish the game? I'm stuck. Could you give me any hints? Thanks in advance.
Letting $t$ be the probability that a given round ends in a tie, we get $$ t=abc+(1-a)(1-b)(1-c)=1-a-b-c+ab+bc+ca $$ Explanation: * *$abc$ is the probability that in a given round, all players get heads.$\\[4pt]$ *$(1-a)(1-b)(1-c)$ is the probability that in a given round, all players get tails. Letting $p$ be the probability that the game ends with $A$ tossed out, we get $$ p=a(1-b)(1-c)+(1-a)bc+tp $$ Explanation: * *$a(1-b)(1-c)$ is the probability that in the first round, $A$ gets heads and $B,C$ both get tails.$\\[4pt]$ *$(1-a)bc$ is the probability that in the first round, $A$ gets tails and $B,C$ both get heads.$\\[4pt]$ *$tp$ is the probability that * *$\;$The first round ends in a tie.$\\[5pt]$ *$\;A$ gets tossed out in the game starting with round $2$ (effectively a new game). Solving for $p$, we get $$ p=\frac{a(1-b)(1-c)+(1-a)bc}{1-t} $$ which simplifies to $$ p=\frac {a-ab+bc-ca} {a+b+c-ab-bc-ca} $$ By symmetry, if $a=b=c$, we should get $p={\large{\frac{1}{3}}}$, which can be checked against the above formula. Letting $e$ be the expected number of rounds for a complete game, we get $$ e=(1-t)(1)+t(1+e) $$ Explanation: * *$(1-t)(1)$ is the probability that game ends in the first round, multiplied by $1$, since the number of rounds is $1$.$\\[4pt]$ *$t(1+e)$ is the probability the first round ends in a tie, multiplied by $1+e$, since one round has already expired, and $e$ is the expected number of rounds to follow (in what is effectively a new game). Solving for $e$, we get $$ e = \frac{1}{1-t} = \frac{1}{a+b+c-ab-bc-ca} $$ and for the case $a=b=c$, the above formula yields $$ e=\frac{1}{3a(1-a)} $$
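A Monte Carlo cross-check of both closed forms (my own sketch; the probabilities, seed and trial count are arbitrary):

```python
import random

def p_formula(a, b, c):
    # Probability that A is tossed out first.
    return (a - a * b + b * c - c * a) / (a + b + c - a * b - b * c - c * a)

def e_formula(a, b, c):
    # Expected number of rounds, i.e. 1/(1 - t).
    return 1 / (a + b + c - a * b - b * c - c * a)

def play_round(a, b, c, rng):
    flips = (rng.random() < a, rng.random() < b, rng.random() < c)
    if flips[0] == flips[1] == flips[2]:
        return None  # tie, flip again
    for i in range(3):
        if flips.count(flips[i]) == 1:
            return i  # the odd one out is tossed

def simulate(a, b, c, trials=100_000, seed=1):
    rng = random.Random(seed)
    a_out = rounds = 0
    for _ in range(trials):
        loser = None
        while loser is None:
            rounds += 1
            loser = play_round(a, b, c, rng)
        a_out += (loser == 0)
    return a_out / trials, rounds / trials
```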
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If a 'distance function' does not possess the triangle inequality property, would the limit of a converging sequence still be unique? Let $X$ be a set and $d:X\times X\to \mathbb{R}$ a function satisfying positivity, that is, $d(x,y)\geq 0$ and $d(x,y)=0 \iff x=y.$ Moreover suppose it satisfies the symmetry property, that is, $d(x,y)=d(y,x).$ However it does not satisfy the triangle inequality. Obviously if the triangle inequality were satisfied then this would make $(X,d)$ a metric space, and subsequently every converging sequence would have a unique limit. Hence I am just curious: if this property is taken away, can there still be examples such that every converging sequence has a unique limit with respect to this function $d$? I hope I explained my question sufficiently clearly, many thanks in advance!
Let $d(x,y) = (x-y)^2$ on $\Bbb R$, which satisfies the first two axioms but not the triangle inequality, because: $$d(0,2)=4\not≤2=d(0,1)+d(1,2)$$ however limits are still unique, in fact you have the same limits as the usual metric $d(x,y)=|x-y|$, since $(x_n-y)^2\to0$ iff $|x_n-y|\to0$ (continuity of the root on positive numbers). There exist "metrics" failing the triangle inequality without unique limits: Let $$d(x,y)=\begin{cases}0 & x=y \\ \frac1{|xy|} &x\neq y\end{cases}$$ on the positive integers; then any sequence $x_n$ with $x_n\to\infty$ converges to every point.
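The first example is quick to make concrete (my own check): the triangle inequality fails on $0,1,2$, yet $d(x_n,y)\to 0$ exactly when $|x_n-y|\to 0$.

```python
def d(x, y):
    return (x - y) ** 2

# Triangle inequality fails: d(0,1) + d(1,2) = 2 < 4 = d(0,2).
triangle_gap = d(0, 1) + d(1, 2) - d(0, 2)

# ...but convergence matches the usual metric: d(1 + 10^-k, 1) -> 0.
seq = [1 + 10 ** -k for k in range(1, 8)]
dists = [d(x, 1) for x in seq]
```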
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_{-\infty}^\infty\frac{\cos(2x)}{x^2+4}\:\mathrm{d}x$ As stated in the title, I want to evaluate the integral $$I=\int_{-\infty}^\infty\frac{\cos(2x)}{x^2+4}\:\mathrm{d}x$$ I'm pretty sure it evaluates to $$\frac{\pi}{2e^4}$$ But I'm not sure how to evaluate it. I have read an Instagram post where 3 different methods are provided for proving that \begin{equation} I(t)=\int_{-\infty}^\infty\frac{\cos(tx)}{x^2+1}\:\mathrm{d}x=\frac{\pi}{e^t} \end{equation} and I think similar logic can be applied here, but I am not sure how yet. One of the methods mentioned in the post uses laplace transform to prove it but it's a little bit long. I'm wondering if there's any elegant method for evaluating $I$ I encountered this integral when I tried to solve this integral from one of the members of the Instagram math community. $$\omega=\int_0^{\infty}\frac{x^2-4}{x^2+4}\:\frac{\sin 2x}{x}\mathrm{d}x$$ I first split the integral, used a property of laplace transform and some properties of the sine integral then used integration by parts and got to here $$\omega=\frac{\pi}{2}-\left(2\int_{-\infty}^\infty\frac{\cos(2s)}{s^2+4}\:\mathrm{d}s+\pi\right)$$ Thank you so much for your help and attention! (BTW I'm not so proficient in complex analysis so I would prefer a solution without one :P)
Too long for a comment: just reducing this case to the case you know from instagram, with some notes. For $t \in \mathbb R,a > 0$ let:$$I(t,a) = \int_{-\infty}^\infty \frac{\cos(tx)}{x^2+a^2}dx $$ Note that it converges for every $t \in \mathbb R,a> 0$. Taking the substitution $x=ay$, $dx=a\,dy$ we get: $$ I(t,a) = \int_{-\infty}^\infty \frac{\cos(aty)}{a^2y^2+a^2}(a\,dy) = \frac{1}{a} \int_{-\infty}^\infty \frac{\cos(aty)}{y^2+1}dy = \frac{1}{a} \cdot I(ta,1) $$ So it boils down to evaluating $$I(s) := I(s,1) = \int_{-\infty}^\infty \frac{\cos(sx)}{x^2+1}dx$$ There are many ways of calculating it. Probably the easiest might be either complex analysis or noticing it is almost the characteristic function of the Cauchy distribution (it does not require $"$complex analysis$"$ (even though there are complex numbers under the integral sign) to calculate; however it would be a long road for you if you're not familiar with the notion of a characteristic function and the inversion formula for them). It can be calculated by taking a derivative and manipulating, however one needs to be a bit careful with showing that we can differentiate under the integral sign, since $\frac{d}{ds}(\frac{\cos(sx)}{x^2+1}) = -\frac{x\sin(xs)}{x^2+1}$ and the integral of the latter does not converge when treated as an improper Lebesgue integral on the whole line (so the dominated convergence theorem cannot be applied straightforwardly). However, it does converge when treated as an improper Riemann integral or a limit of proper Lebesgue integrals, so it actually makes sense. We'll proceed in a slightly different way.
It is amazing what a substitution can do: Let $s>0$ and take the substitution $y=sx, dy = s\,dx$; then: $$ I(s) = \int_{-\infty}^\infty \frac{s\cos(y)}{s^2+y^2}dy $$ The derivative of the function under the integral (with respect to $s$) yields $\frac{\cos(y)(s^2+y^2) - 2s^2\cos(y)}{(s^2+y^2)^2} = \frac{\cos(y)(y^2-s^2)}{(s^2+y^2)^2}$, which is integrable on the whole line, treated as an improper Lebesgue integral, so the dominated convergence theorem allows us to differentiate under the integral. Taking the derivative one more time (the justification is the same) we get: $$ \frac{d^2}{ds^2} I(s) = \int_{-\infty}^\infty \cos(y) \cdot (\frac{d^2}{ds^2} \frac{s}{y^2+s^2}) dy = - \int_{-\infty}^\infty \cos(y) \cdot (\frac{d^2}{dy^2} \frac{s}{y^2+s^2})dy$$ (the last equality holds because $\frac{s}{y^2+s^2}$ is harmonic in $(s,y)$). Integrating by parts gives us: $$ \frac{d^2}{ds^2}I(s) = -\frac{d}{dy}(\frac{s}{y^2+s^2})\cos(y)|_{-\infty}^\infty -\int_{-\infty}^\infty \sin(y) \frac{d}{dy}(\frac{s}{y^2+s^2})dy $$ I'll leave the calculation that the boundary terms go to zero. Integrating by parts once more: $$ \frac{d^2}{ds^2}I(s) = - \sin(y)\frac{s}{y^2+s^2}|_{-\infty}^\infty + \int_{-\infty}^\infty \frac{s\cos(y)}{y^2+s^2}dy = I(s)$$ Hence the general solution is $I(s) = Ae^s + Be^{-s}$ for some constants $A,B$. We can find them by letting $s \to \infty$ and $s \to 0^+$. Indeed, going back to the first form of the integral, by dominated convergence $\lim_{s \to 0^+} I(s) = \lim_{s \to 0^+} \int_{-\infty}^\infty \frac{\cos(sx)}{x^2+1}dx = \pi$, so $A + B = \pi$. To justify the limit as $s \to \infty$ we use integration by parts with $\cos(sx)$ and $\frac{1}{x^2+1}$, getting: $$ \lim_{s \to \infty}I(s) = \lim_{s \to \infty} \int_{-\infty}^\infty \frac{\cos(sx)}{x^2+1}dx = \lim_{s \to \infty} \int_{-\infty}^\infty \frac{2x\sin(sx)}{s(x^2+1)^2}dx $$ which tends to zero, since we can bound $|\sin(sx)| \le 1$, and the remaining factor tends to zero.
And we get $I(s) = \pi e^{-s}$ for $s > 0$; by symmetry, and an easy direct calculation for $s=0$, we get for any $s \in \mathbb R$: $$I(s) = \pi e^{-|s|}$$ This means that $$ I(t,a) = \frac{1}{a} I(ta,1) = \frac{\pi}{a}e^{-|ta|}$$ So your integral is indeed equal to $\frac{\pi}{2e^4}$
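A crude numerical quadrature (my own; the cutoff and step size are arbitrary) reproduces both the general formula $I(t,a)=\frac{\pi}{a}e^{-|ta|}$ and the value asked about:

```python
import math

def closed(t, a):
    # The closed form I(t, a) = (pi/a) * exp(-|t a|).
    return math.pi / a * math.exp(-abs(t * a))

def integral(t, a, n=200_000, cutoff=400.0):
    # Midpoint rule for the truncated integral of cos(t x)/(x^2 + a^2).
    h = 2 * cutoff / n
    s = 0.0
    for i in range(n):
        x = -cutoff + (i + 0.5) * h
        s += math.cos(t * x) / (x * x + a * a)
    return s * h
```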
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Projection of space curve shortens Let $C$ be a rectifiable, open curve in $\mathbb{R}^3$, and let $|C|$ be its length. Orthogonally project $C$ to a plane $\Pi$ (e.g., the $xy$-plane). Call the projected curve $C_{\perp}$, and its length $|C_{\perp}|$. I would like to claim $|C_{\perp}| \le |C|$. I would appreciate either a simple proof, or a reference. This may be so well-known that it is hard to cite a reference. (I only need it in $\mathbb{R}^3$, but it should hold in any dimension.)
If you are dealing with rectifiable curves, you are taking polygons with vertices on $C$ and looking at the limit of the lengths of the polygons as the points become closer. But if you project a line segment to $\{z=0\}$ its length cannot increase, so the projected polygons are no longer than the originals.
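The segment-wise argument is easy to check numerically on a random polyline (my own sketch):

```python
import math
import random

def length(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def project_to_xy(points):
    # Orthogonal projection onto the plane z = 0.
    return [(x, y) for x, y, _z in points]

rng = random.Random(0)
polyline = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            for _ in range(50)]
```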
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability that every player gets 1 queen, jack, and king The four jacks, queens, and kings from a standard deck of cards are shuffled, and three cards are dealt to each of four players. Compute the probability that each player gets one jack, one queen, and one king. I know that there are $12$ cards total and since every player gets one jack, one queen, and one king, would it be $\binom{12}3$ for the first player?
Let's write up what we discussed in chat. Since we have to distribute three of the twelve cards to the first player, three of the remaining nine cards to the second player, three of the remaining six cards to the third player, and give the fourth player all three of the remaining three cards, there are $$\binom{12}{3}\binom{9}{3}\binom{6}{3}\binom{3}{3}$$ ways to distribute the twelve cards to four players so that each player receives three cards each. If each player receives one king, then there are four ways to give one of the four kings to the first player, three ways to give one of the remaining three kings to the second player, two ways to give one of the remaining two kings to the third player, and one way to give the remaining king to the fourth player. Hence, there are $4! = 4 \cdot 3 \cdot 2 \cdot 1$ ways to distribute the four kings so that each player receives one. By symmetry, there are also $4!$ ways to distribute the queens so that each player receives one and $4!$ ways to distribute the jacks so that each player receives one. Hence, the number of favorable cases is $$4!4!4!$$ Therefore, the probability that each player receives one king, one queen, and one jack when the twelve face cards are distributed to four players when each player is dealt three cards is $$\frac{4!4!4!}{\dbinom{12}{3}\dbinom{9}{3}\dbinom{6}{3}\dbinom{3}{3}}$$
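A numeric cross-check (mine): the exact count above — which reduces to $72/1925\approx 0.0374$ — plus an independent Monte Carlo estimate of the same probability.

```python
import random
from math import comb, factorial

favorable = factorial(4) ** 3
total = comb(12, 3) * comb(9, 3) * comb(6, 3) * comb(3, 3)

def simulate(trials=200_000, seed=0):
    rng = random.Random(seed)
    deck = ['J'] * 4 + ['Q'] * 4 + ['K'] * 4
    hits = 0
    for _ in range(trials):
        rng.shuffle(deck)
        # Deal three cards to each of four players; check each hand has J, Q, K.
        if all(sorted(deck[3 * i:3 * i + 3]) == ['J', 'K', 'Q'] for i in range(4)):
            hits += 1
    return hits / trials
```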
{ "language": "en", "url": "https://math.stackexchange.com/questions/3755782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
When does $(xz+1)(yz+1)=az^{3}+1 $ have finitely many solutions in positive integers? Consider the diophantine equation in three variables $x$, $y$ and $z$: $(xz+1)(yz+1) = 6z^{3}+1$. The only positive integer solutions I have found are {$x=4,y=10,z=7$} and {$x=10,y=4,z=7$}. From a Maple program, I have iterated over all values of $z$ in the range $50<z<10^{8}$; the only corresponding solutions of $x$ and $y$ are those with $ x=0$ and $y$ positive, and vice versa. I would like to find out if this diophantine equation has finitely many or infinitely many solutions in positive integers $x, y$ and $z$. In general: for a given positive integer $a$, what conditions are sufficient for the diophantine equation $(xz+1)(yz+1) = az^{3}+1$ to have finitely many solutions in positive integers $x, y$ and $z$? From experimental results, it appears that this equation has finitely many solutions in positive integers if and only if $a$ is not a third power of any integer, i.e. $a\neq m^{3} $ for all integers $m$. Any help or references on this question will be appreciated.
Here is a partial answer: If $a=b^3$ is a cube, then there is an infinite family of solutions to $(xz+1)(yz+1)=az^3+1=b^3z^3+1$ given by $$(x,y,z) = (b, b^2z-b, z),\ b, z\in\mathbb{N}.$$ This arises from the factorization $b^3z^3+1 = (bz+1)(b^2z^2-bz+1) = (bz+1)((b^2z-b)z+1)$. In addition to the above, for any $a$ there are solutions $(a+1, a^2+a-1, a^2+2a)$ and $(2a-1, 2a+1, 4a)$, and there appear to be (empirically) solutions for some values of $x$ between $a+1$ and $2a-1$. For all of these solutions, $z = x+y$ and each $x$ corresponds to a unique $y$. There appear to be no solutions for $y\ge x>2a-1$. These, together with a finite set of solutions for $x<a+1$, appear to cover all solutions to the equation. I can prove very little of this.
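The identities behind these families are easy to machine-verify (my own script; note that $(bz+1)(b^2z^2-bz+1)=(bz)^3+1$ is just the sum-of-cubes factorization):

```python
def lhs(x, y, z):
    return (x * z + 1) * (y * z + 1)

def cube_family(b, z):
    # The infinite family for a = b^3.
    return (b, b * b * z - b, z)

def family_one(a):
    # The solution (a+1, a^2+a-1, a^2+2a) for general a.
    return (a + 1, a * a + a - 1, a * a + 2 * a)

def family_two(a):
    # The solution (2a-1, 2a+1, 4a) for general a.
    return (2 * a - 1, 2 * a + 1, 4 * a)
```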
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the angle between vectors $\mathbf x$ and $\mathbf y$ in radians Two unit vectors $\mathbf{x}$ and $\mathbf{y}$ in $\Bbb R^n$ satisfy $\mathbf{x}\cdot\mathbf{y}=\frac{\sqrt{2}}{2}$. How would I go about finding the angle, in radians, between $\mathbf{x}$ and $\mathbf{y}$? As I don't know the $\mathbf{x}$ and $\mathbf{y}$ unit vectors, would the unit circle be useful here? For instance, using $\frac{\sqrt{2}}{2}$ and plugging those values into $\dfrac{\mathbf{x}\cdot\mathbf{y}}{\mathbf{\|x\| \|y\|}}$ to find the angle?
No, I believe the unit circle is not really involved here. It is simple: you already know the cosine of the angle $\theta$ between the two vectors. It is this expression: $$\cos(\theta) = \dfrac{\mathbf{x}\cdot\mathbf{y}}{\|\mathbf{x}\|\cdot \|\mathbf{y}\|}$$ Just plug in the numbers in this formula. Since both vectors are unit vectors, you get: $$\cos(\theta) = \frac{\sqrt{2}/2}{1\cdot 1}=\frac{\sqrt{2}}{2}$$ And once you know that $a = \cos(\theta) = \sqrt{2}/2$, find $\theta = \arccos(a) = \arccos(\sqrt{2}/2) = \pi / 4$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $s$ be a set of five positive integers at most 9. Prove that the sums of the elements in all the non-empty subsets of $s$ cannot be distinct. Let $s$ be a set of five positive integers, the maximum of which is at most 9; prove that the sums of the elements in all the non-empty subsets of $s$ cannot be distinct. Note: I know this is similar to another question with 6 elements with max 14. I saw this on Quora, and it annoys me that my proof is essentially a backtracking proof eliminating all the possibilities. It starts like this: The elements have to be distinct, or else two 1-element sets have the same sum. If the largest is 8 then the largest the set can be is 8-7-6-5-4, which sums to 30, but there are 31 possible sets, so this cannot be. So 9 is there. If 8 is there then 1 can't be. And so on. My questions: (1) Is there a more elegant proof? (2) What is the proper generalization replacing 9 and 5? They all must be distinct, so if the max is $n$ and there are $m$ of them, then to force the max to be $n$ we must have $(n-1)+(n-2)+\cdots+(n-m) \lt 2^m-1 $ or $mn-\dfrac{m(m+1)}{2} \lt 2^m-1 $ or $n \lt \dfrac{2^m-1}{m}+\dfrac{m+1}{2} $. For $m=5$ this is $n \lt \dfrac{31}{5}+3 =9+\dfrac15 $ so $n \le 9$. But I don't see how to elegantly prove nonexistence.
Hint: Pigeonhole principle. You can form $31$ sums in total from the non-empty subsets of the five numbers. What is the largest such possible sum? What is the smallest?
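The claim can also be checked exhaustively. A short Python sketch that tries every set of five distinct positive integers at most $9$ and looks for one whose $31$ non-empty subset sums are pairwise distinct:

```python
from itertools import combinations

witnesses = []
for s in combinations(range(1, 10), 5):            # five distinct values, each at most 9
    sums = [sum(t) for k in range(1, 6) for t in combinations(s, k)]
    if len(set(sums)) == len(sums):                # all 31 non-empty subset sums distinct
        witnesses.append(s)
print(witnesses)                                   # empty list: no such set exists
```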
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Hard Differential Equation Can anyone help me find the solution of this ODE: $$4(y')^2-y^2+4=0.$$ I've tried to find its solution by putting $y = e^{at}$ (for the null solution) and $y = 2$ (for a particular solution). My final solution is $$y = c_1 e^{0.5t} + c_2 e^{-0.5t} + 2,$$ but it didn't match the solution given in this video link [time stamp 2:20, when the question is shown in the video]. I've even plotted his solution and my solution in a graph plotter, but there is a slight difference between the graphs. Please explain in full detail if you know how.
The general form of equation $$ A^2-B^2=1 $$ can be parametrized as $A=\pm\cosh(u)$, $B=\sinh(u)$ similar to a circle equation. If $(A,B)$ change smoothly, but remain on this curve, then also $u$ is a smooth function. Here that gives $$y(x)=\pm2\cosh(u(x)), ~~ y'(x)=\sinh(u(x))$$ from that parametrization. Now take the derivative of the first equation and compare with the second, giving $$u'(x)=\pm\tfrac12\implies u(x)=\pm\tfrac12x+c.$$ This already solves the problem completely $$ y(x)=\pm2\cosh(\tfrac12x+C). $$ As to your attempt, that is not possible as the equation is not linear. You can get a linear equation by taking the derivative $$ 2y'(4y''-y)=0. $$ Excluding the constant solutions and solutions with constant segments, the second factor indeed has the general solution $$ y=c_1e^{\frac12 x}+c_2e^{-\frac12 x}. $$ Inserting back into the original equation results in $$ (c_1^2e^{x}-2c_1c_2+c_2^2e^{-x})-(c_1^2e^{x}+2c_1c_2+c_2^2e^{-x})+4=0 \\ \implies c_1c_2=1,~~c_1=\pm e^C,~c_2=\pm e^{-C} $$ which again produces the solution. The coefficients in the video give just another parametrization of the coefficient pair. $c_1=\frac12c$ and $c_2=2c^{-1}$ still satisfy $c_1c_2=1$.
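A symbolic spot-check of the closed form (a sketch using sympy, which is an assumption here): substituting $y=2\cosh(\tfrac12x+C)$ back into $4(y')^2-y^2+4$ should give identically zero.

```python
import sympy as sp

x, C = sp.symbols('x C')
y = 2 * sp.cosh(x / 2 + C)
residual = 4 * sp.diff(y, x)**2 - y**2 + 4     # should vanish identically
print(sp.simplify(residual.rewrite(sp.exp)))   # 0
```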
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Integration of an improper integral I have an integral that has two individually divergent parts. Wolfram says that the answer is \begin{equation}\frac{\ln(s+1)}{s},\end{equation} but I cannot figure out how it's done. The integral is \begin{equation} \int_{1}^{\infty}\left({\frac{1}{x}-\frac{1}{x+s}}\right)dx \end{equation}
$\int_1^{M} \left(\frac 1 x -\frac 1 {x+s}\right)dx =\ln M-[\ln (M+s)-\ln (1+s)]$ $=\ln \left(\frac M {M+s}\right)+\ln(1+s)$. Let $M \to \infty$: the first term tends to $\ln 1 = 0$, so the integral converges to $\ln(1+s)$.
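A numeric cross-check for one sample value of $s$ (scipy's `quad` is assumed to be available; it handles the infinite upper limit directly):

```python
import math
from scipy.integrate import quad

s = 3.0
val, err = quad(lambda x: 1 / x - 1 / (x + s), 1, math.inf)
print(val, math.log(1 + s))   # both ≈ 1.3863
```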
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding the centre of a circle under a specific condition Question: Consider a circle, say $\mathscr{C}_1$ with the equation $x^2 + (y-L)^2=r^2$. A second circle, say $\mathscr{C}_2,$ with equal radii that has a centre $(x_0,y_0)$ which lies on the line $y=mx$. Find an expression for $x_0$ and $y_0$, in terms of $L$, $r$ and $m$, such that $\mathscr{C}_1$ and $\mathscr{C}_2$ touch at one point. My Attempts: I had attempted to find an expression that would allow for the discriminant to be zero in order for the two circles to only touch once. I ended up with $m = \dfrac{1}{2} (2L - 2\sqrt{10r-x_0}$) although this does not seem to be correct. I arrived at this answer by solving $x^2 + (y-L)^2 = r^2$ and $(x-x_0)^2 + (y-y_0)^2=r^2$ although I am nearly certain that I have made a mistake. I have also considered using approximate to see if I can identify a relation however as of right now, I have been entirely unsuccessful. Any help or guidance would be greatly apprecaited!
For the two circles to touch at exactly one point, the distance between the centers of the two circles should be the sum of the radii, here $2r$: $$4r^2=(0-x_0)^2+(L-mx_0)^2.$$ Now solve this quadratic for $x_0$, and then $y_0 = m x_0$.
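Solving the tangency equation symbolically (a sympy sketch; `x0` is the unknown) gives the two candidate positions $x_0=\dfrac{Lm\pm\sqrt{4r^2(1+m^2)-L^2}}{1+m^2}$:

```python
import sympy as sp

x0, L, r, m = sp.symbols('x0 L r m', real=True)
sol = sp.solve(sp.Eq(x0**2 + (L - m * x0)**2, 4 * r**2), x0)
print(sol)   # x0 = (L*m ± sqrt(4*r**2*(1 + m**2) - L**2)) / (1 + m**2)
```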
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving $2\sin\left(2x\right)=3\left(1-\cos x\right)$ Background - this was part of a homework packet for students looking to skip HS pre-calc. There is a textbook they use as well, but this particular problem was not in it. $$2\sin\left(2x\right)=3\left(1-\cos\left(x\right)\right)$$ My first step was to eliminate the double angle. $$4\sin\left(x\right)\cos\left(x\right)=3\left(1-\cos\left(x\right)\right)$$ and distribute on the right side $$4\sin\left(x\right)\cos\left(x\right)=3-3\cos\left(x\right)$$ And this is where I am stuck. Now, I can see that $0$ and $2\pi$ are solutions, but a graph of both sides shows one more. The worksheet instructions do not say whether or not graphing is allowed, although the particular chapter in the book for this seems to rely heavily on calculator work. My question: Can this be solved by manipulation, and if so, how? If not, what is the hint to stop and go to the graph?
Rewrite $2\sin\left(2x\right)=3\left(1-\cos x\right)$ as $$ 4\sin\frac x2\cos\frac x2\cos x= 3\sin^2\frac x2$$ Then, let $t= \tan\frac x2, \> \cos x = \frac{1-t^2}{1+t^2}$ and factorize $$\sin\frac x2 \frac{4-3t-4t^2-3t^3}{1+t^2}=0 $$ The factor $\sin\frac x2=0 $ yields $\frac x2 =\pi n$ and $4-3t-4t^2-3t^3=0$ has one real root at $$t= \frac19\left(-4+\sqrt[3]{584+9\sqrt{4227}}-\sqrt[3]{-584+9\sqrt{4227}}\right) $$ Thus, the solutions are $$x=2\pi n,\>\>\> 2\pi n + 2\arctan t$$
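A numeric check of the non-obvious root (numpy assumed): find the real root of $3t^3+4t^2+3t-4=0$ (the answer's cubic with both sides negated), map it back through $x=2\arctan t$, and compare the two sides of the original equation.

```python
import numpy as np

roots = np.roots([3, 4, 3, -4])                   # 3t^3 + 4t^2 + 3t - 4 = 0
t = min(roots, key=lambda z: abs(z.imag)).real    # the single real root, ~0.6101
x = 2 * np.arctan(t)                              # ~1.0957 rad
print(2 * np.sin(2 * x), 3 * (1 - np.cos(x)))     # both sides agree, ≈ 1.627
```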
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability of $\limsup_{n\to \infty} \{X_n X_{n+1}>0\}$ where $\{X_n\}$ are independent Gaussian r.v.'s with mean 0 Let $\{X_n\}$ be a sequence of independent Gaussian random variables with $\mathbb{E}\, X_n = 0$ for all $n \geq 1$. Find the probability of the event $$ \limsup_{n\to \infty} \big\{ X_n X_{n+1}> 0 \big\} $$ My first thought is that it should be 1 since Gaussians are always positive for a finite value. I was thinking of applying Borel-Cantelli and was trying something along the lines of \begin{align*} \mathbb{P} \big( \limsup_{n\to \infty} \big\{ X_n X_{n+1}> 0 \big\}\big) &= \mathbb{P}\big( X_n X_{n+1} > 0 \,\,\, i.o. \big) \\ &\leq \mathbb{P}\big( \big\{ X_n X_{n+1}> 0 \,\,\, i.o \big\} \cap \big\{ X_{n+1} > 0 \,\,\, i.o\big\} \big)\\ &= \mathbb{P}\big( \big\{ X_n X_{n+1}> 0 \,\,\, i.o \big\}\big) \,\,\mathbb{P}\big( \big\{ X_{n+1} > 0 \,\,\, i.o\big\} \big) \,\,\,\, \text{(by independence)} \end{align*} I'm not sure I'm thinking of this problem right, though.
Note that it is enough to consider only the events $\{X_{2k}X_{2k+1} > 0 \}_{k \in \mathbb N}$, and by independence of $\{X_k\}_{k \in \mathbb N}$ these are independent as well. Moreover $\mathbb P(X_{2k}X_{2k+1} > 0 ) = \mathbb P(X_{2k},X_{2k+1} > 0) + \mathbb P(X_{2k},X_{2k+1}<0) = \frac{1}{2}$ by symmetry. Since $$ \sum_{k=1}^\infty \mathbb P(X_{2k}X_{2k+1} > 0) = \sum_{k=1}^\infty \frac{1}{2} = \infty$$ and the events are independent, the second Borel–Cantelli lemma gives $\mathbb P(\limsup \{X_{2k}X_{2k+1} > 0 \}) = 1$, so in particular $\mathbb P (\limsup \{X_{k}X_{k+1} > 0 \}) = 1$, since $\limsup \{X_{2k}X_{2k+1} >0 \} \subset \limsup \{X_kX_{k+1} > 0 \}$
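The key probability $\mathbb P(X_{2k}X_{2k+1}>0)=\frac12$ depends only on the sign symmetry of mean-zero Gaussians — the variances are irrelevant — which a quick Monte Carlo sketch illustrates (the standard deviations $1$ and $3$ are arbitrary choices):

```python
import random

random.seed(0)
n = 100_000
# product of two independent mean-zero Gaussians with different variances
hits = sum(random.gauss(0, 1) * random.gauss(0, 3) > 0 for _ in range(n))
print(hits / n)   # ≈ 0.5
```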
{ "language": "en", "url": "https://math.stackexchange.com/questions/3756789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do I show $\lim_{n \to \infty} \int_0^\infty \frac{n}{n^2+x}\sin(\frac{1}{x})\, dx = 0\,$? How do I show $$\lim_{n \to \infty} \int_0^\infty \frac{n}{n^2+x}\sin\left(\frac{1}{x}\right)\, dx = 0\,\,?$$ I've tried splitting into the cases where $x \leq 1$ and $x \geq 1$ but I am having trouble finding bounds so that I can apply the dominated convergence theorem.
Say the integrand is $f_n$. If $0<x\le1$ then $|f_n(x)|\le \frac{n}{n^2}\cdot 1\le 1$. If $x\ge1$ then, since $|\sin t|\le|t|$ and $n^2+x\ge 2n\sqrt x$ by AM–GM, $$|f_n(x)|\le \frac{n}{n^2+x}\cdot\frac1x\le \frac{1}{2\sqrt x}\cdot\frac 1x=\frac{1}{2x^{3/2}}.$$ So the whole sequence is dominated by the integrable function $g$ with $g(x)=1$ on $(0,1]$ and $g(x)=\frac{1}{2x^{3/2}}$ on $[1,\infty)$. Since $f_n(x)\to 0$ pointwise as $n\to\infty$ for each fixed $x$, the dominated convergence theorem gives the limit $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3757048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Express $\frac{\partial^2F}{\partial x^2} - \frac{\partial^2F}{\partial y^2}$ in terms of the partial derivatives of $F$ with respect to $u$ and $v$. If $F = F(u,v)$ and $u = x - y, v = x + y$, express $\frac{\partial^2F}{\partial x^2} - \frac{\partial^2F}{\partial y^2}$ in terms of the partial derivatives of $F$ with respect to $u$ and $v$. I'm not entirely sure how to approach this. Any help would be appreciated. Thank you.
$\frac{\partial u}{\partial x} = 1,\ \frac{\partial u}{\partial y} = -1,\ \frac{\partial v}{\partial x} = 1,\ \frac{\partial v}{\partial y} = 1$ $\frac{\partial F}{\partial x} $ $= \frac{\partial F}{\partial v}\frac{\partial v}{\partial x} + \frac{\partial F}{\partial u}\frac{\partial u}{\partial x}$ $= \frac{\partial F}{\partial v} + \frac{\partial F}{\partial u}$ $\frac{\partial^2 F}{\partial x^2} $ $= \frac{\partial}{\partial v}(\frac{\partial F}{\partial x})\frac{\partial v}{\partial x} + \frac{\partial}{\partial u}(\frac{\partial F}{\partial x})\frac{\partial u}{\partial x}$ $= \frac{\partial}{\partial v}(\frac{\partial F}{\partial x}) + \frac{\partial}{\partial u}(\frac{\partial F}{\partial x})$ $= \frac{\partial}{\partial v}(\frac{\partial F}{\partial v} + \frac{\partial F}{\partial u}) + \frac{\partial}{\partial u}(\frac{\partial F}{\partial v} + \frac{\partial F}{\partial u})$ $=\frac{\partial^2 F}{\partial v^2} +2\frac{\partial^2 F}{\partial v\partial u} + \frac{\partial^2 F}{\partial u^2}$, assuming the mixed partials of $F$ are continuous. Similarly, $\frac{\partial F}{\partial y} $ $= \frac{\partial F}{\partial v}\frac{\partial v}{\partial y} + \frac{\partial F}{\partial u}\frac{\partial u}{\partial y} = \frac{\partial F}{\partial v} - \frac{\partial F}{\partial u}$ $\frac{\partial^2 F}{\partial y^2} $ $= \frac{\partial}{\partial v}(\frac{\partial F}{\partial y}) - \frac{\partial}{\partial u}(\frac{\partial F}{\partial y})$ $=\frac{\partial^2 F}{\partial v^2} -2\frac{\partial^2 F}{\partial v\partial u} + \frac{\partial^2 F}{\partial u^2}$, under the same assumption. Therefore, $\frac{\partial^2 F}{\partial x^2} - \frac{\partial^2 F}{\partial y^2} = 4\frac{\partial^2 F}{\partial v\partial u}$
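A symbolic spot-check of the identity (sympy assumed; here $F$ is one concrete smooth function, not the general case):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
u, v = x - y, x + y
F = u**3 * v + sp.sin(u * v)                      # a sample F(u, v)
lhs = sp.diff(F, x, 2) - sp.diff(F, y, 2)
Fab = a**3 * b + sp.sin(a * b)                    # the same F in independent variables
rhs = 4 * sp.diff(Fab, a, b).subs({a: u, b: v})   # 4 * F_uv evaluated at (u, v)
print(sp.simplify(sp.expand(lhs - rhs)))          # 0
```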
{ "language": "en", "url": "https://math.stackexchange.com/questions/3757286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find $\int_0 ^ \frac{\pi}{2} \frac{\cot x}{\cot x + \csc x}\,dx \,$? How to find $$\int_0 ^ \frac{\pi}{2} \frac{\cot x}{\cot x + \csc x}\,dx \,\,?$$ The integrand $ \frac{\cot x}{\cot x + \csc x} $ is not defined at $x =0$. But the function is bounded on $(0 , \frac{\pi}{2}]$. $$\lim _{x \to 0} \frac{\cot x}{\cot x + \csc x} = 0$$ So this is not an improper integral. My Attempt : $$\int_0 ^ \frac{\pi}{2} \frac{\cot x}{\cot x + \csc x} = \lim_{t \to 0} \int_t ^ \frac{\pi}{2} \frac{\cot x}{\cot x + \csc x} = \lim_{t \to 0} \left[(\frac{\pi}{2} - 1)+ (\tan{\frac{t}{2} - t})\right] = \left(\frac{\pi}{2} - 1\right)$$. I know how to find the anti-derivative of the integrand. I first found out the anti-derivative of the integrand in $[t , \frac{\pi}{2}]$ , where $0 < t < \frac{\pi}{2}$. Let's say it is $\phi(t)$. Then I find $\lim_{t \to 0} \phi(t)$. I am not sure if this is a right way. Can anyone please check it?
$$\int_0 ^ \frac{\pi}{2} \frac{\cot x}{\cot x + \csc x}\, dx=\int_{0}^{\pi/2} \frac{\cos x}{1+\cos x}\, dx=\pi/2-\int_{0}^{\pi/2} \frac{dx}{1+\cos x}$$ $$=\pi/2-\frac{1}{2}\int_{0}^{\pi/2} \sec^2(x/2)~dx=\pi/2-\tan (x/2)\,\Big|_{0}^{\pi/2}=\pi/2-1.$$
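Numeric confirmation (scipy's `quad` assumed) that the integral is $\pi/2-1\approx 0.5708$:

```python
import math
from scipy.integrate import quad

val, _ = quad(lambda x: math.cos(x) / (1 + math.cos(x)), 0, math.pi / 2)
print(val, math.pi / 2 - 1)   # both ≈ 0.570796
```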
{ "language": "en", "url": "https://math.stackexchange.com/questions/3757423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Is $\mathbb{Z}[\sqrt{11}]$ a UFD? Is $\mathbb{Z}[\sqrt{11}]$ a UFD? I read about real quadratic fields and about algebraic integers. In all of the books that I read, they show that imaginary fields do not have unique factorization, such as $$6=2\cdot3=(1+\sqrt{-5})\cdot(1-\sqrt{-5})\,,$$ but not for real fields.
The main reason that you won't find anything in the literature on whether $\mathbb{Z}[\sqrt{D}]$ is a UFD or not is that we don't know. For $D>0$ squarefree such that $D\not\equiv 1 \mod 4$, this is equivalent to asking whether $\mathbb{Z}[\sqrt{D}]$ is a PID or not (because this ring is a Dedekind domain in this case). It is conjectured that this will be the case for infinitely many values of $D$, but at this point, this is wide open. Concerning your specific example, the answer is YES. The ring $R=\mathbb{Z}[\sqrt{11}]$ is indeed a PID (hence a UFD). To show it, you need the machinery of algebraic number theory. First, you have to realize that $R$ is the ring of integers of $K=\mathbb{Q}(\sqrt{11})$. Minkowski's bound then says that any element of the class group of $K$ may be represented by an ideal $I$ of norm $\leq 3$. Since $I$ is a product of prime ideals and the norm is multiplicative, it is enough to show that prime ideals of norm $2$ and $3$ are principal. Such ideals are exactly those which appear in the decomposition of $2R$ and $3R$ into a product of prime ideals. By a celebrated theorem of Dedekind, these decompositions are reflected by the decomposition of $X^2-11$ mod $2$ and $3$ into irreducible factors. First, $X^2-11=X^2+1$ mod $3$, which is irreducible mod $3$ since $-1$ is not a square mod $3$. Hence $3R$ is the decomposition we are looking for: the only prime ideal lying above $3$ is $3R$ itself, which is principal. Note that $3R$ has in fact norm $9$, so we could even have discarded it... Now $X^2-11=(X-1)(X+1)$ mod $2$. Hence $2R=(2, -1+\sqrt{11})(2,1+\sqrt{11})$ (using Dedekind's theorem). Set $\mathfrak{p}=(2, -1+\sqrt{11})$. Then $(2,1+\sqrt{11})=(2,-1-\sqrt{11})=\mathfrak{p}^*$, where $*$ is the unique non-trivial $\mathbb{Q}$-automorphism of $K$. Thus, if $\mathfrak{p}$ is principal with generator $\alpha$, so is $\mathfrak{p}^*$ (generated by $\alpha^*$). It remains to show that $\mathfrak{p}$ is principal.
Since this ideal has norm $2$, it suffices to exhibit an element of norm $\pm 2$ inside it. Let $\alpha=3-\sqrt{11}$. Then $\alpha=2 -(-1+\sqrt{11})\in\mathfrak{p}$. Hence $(\alpha)\subset\mathfrak{p}$. Now $N((\alpha))=\vert 3^2-11\cdot 1^2\vert =2=N(\mathfrak{p})$, so $\mathfrak{p}=(\alpha)$, and we are done.
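The two polynomial factorizations and the norm computation used above can be checked with sympy (an assumption); note that mod $2$ the factors $X-1$ and $X+1$ coincide, so $X^2-11\equiv(X+1)^2$ there:

```python
from sympy import symbols, factor_list

x = symbols('x')
print(factor_list(x**2 - 11, modulus=2))   # two (equal) linear factors mod 2
print(factor_list(x**2 - 11, modulus=3))   # a single irreducible quadratic mod 3
print(3**2 - 11 * 1**2)                    # -2: alpha = 3 - sqrt(11) has norm -2
```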
{ "language": "en", "url": "https://math.stackexchange.com/questions/3757620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\ker T\cap {\rm Im}\,T=\{0\}$ imply $V=\ker T\oplus{\rm Im}\,T$? Let $T: V\rightarrow V$ be a linear operator of the vector space $V$. We write $V=U\oplus W$, for subspaces $U,W$ of $V$, if $U\cap W=\{0\}$ and $V=U+W$. If we assume $\dim V<\infty$, then by the rank-nullity theorem, $\ker T\cap {\rm Im}\,T=\{0\}$ implies $V=\ker T\oplus {\rm Im}\,T$. However, my question is about the case $\dim V$ is infinite. Is it still true? What if $T$ has a minimal polynomial? Thanks.
Consider the shift operator $s$, defined on $\text{Vect}(e_i, i\in\mathbb{N})$, where $s(e_n)=e_{n+1}$ for $n\in\mathbb{N}$. Note that $\ker(s)=0$ but $s$ is not surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3757763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
every trail from start node having the same end node given a directed graph (can have cycles) with: * *an arbitrary number of nodes *an arbitrary number of edges *that satisfies the condition that there is (at least) one trail (i.e. a walk where no edge is repeated) that visits all nodes. Would this be a true statement: Every trail (again, can not repeat edges) from a given starting node will have the same ending node. This could be either an open walk (start and end nodes differ) or a closed walk (start and end nodes are the same). However, the walk must satisfy the condition that it cannot end until there are no available edges to continue walking on. Note that even though the same edge cannot be walked more than once, nodes may be visited multiple times. I know this may not satisfy the definition of "trail", but it fits the problem I have. Examples: trivial case: the graph A->B, B->A. Given A as a start node, the end node is always A. slightly more complex example: Given A as start node, C is the end node. Is there a counterproof where there are two trails (open or closed) that ends in different nodes? Or, conversely, is there a proof/name for this graph property? Disclaimer: I'm not very experienced in math or graph theory, this is a problem that I encountered while programming.
Remove the edge from $B$ to $A$ in your second graph. Then from $A$ you have trails $$A\to B\to C\to B$$ and $$A\to C\to B\to C\;.$$ I am assuming here that the edge $B\to C$ is considered different from the edge $C\to B$, but if that is not the case, this is still a counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3757959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $13\sqrt{2}$ is irrational. I am currently a beginner at proofs and I am having trouble proving this problem... I know that the square root of $2$ is irrational because if the square root of $2$ could be expressed as $\frac{p}{q}$, then once both sides are squared it would follow that both $p$ and $q$ are even, which contradicts the assumption that they have no common factors. I am having trouble proving that $13$ times the square root of $2$ is irrational though, and any help would be greatly appreciated! Since we are not dealing with the square root of $13$, I do not know how to start, since we cannot set it equal to $\frac{p}{q}$. Thank you in advance!
A much less elegant but maybe clearer proof than those involving the greatest common divisor (g.c.d.):- If $\sqrt{2}$ is rational, then we can express it as a ratio of two integers, $A$ and $B$. So: $$ \sqrt{2} = \frac {A}{B} $$ where $$ A, B \in \mathbb{N} $$ $$ \implies A = \sqrt{2}B $$ Since $ A^2 = 2B^2$ we can see that $A^2$ is even, i.e. $$ A^2 \in \mathbb{E} $$ Each integer can be written as the product of a unique set of prime factors to various powers. So any integer that is the square of a smaller integer has each of its prime factors at least twice. So the integer square root of a square number that is even will itself also be even. (And indeed if the integer square is divisible by a prime $N$ then so will the starting integer be divisible by $N$.) $$\implies A \in \mathbb{E} $$ Writing $A$ as some even number $2K$, we see that $$ B^2 = \frac{A^2}{2} = \frac{(2K)^2}{2} = 2K^2 $$ $$ \implies B^2 \in \mathbb{E} $$ $$ \implies B \in \mathbb{E} $$ But from our earliest definition, we have: $$A^2 = 2B^2$$ and, with $B^2$ resolved into pairs of prime factors, this would imply that $A^2$ has one unpaired prime factor, i.e. $2$. Yet, from our reasoning on squared integers, we know that this cannot be so, since squared integers may only have pairs of prime factors. Hence, since our initial assertion that $\sqrt{2}$ is rational has led us to an erroneous deduction, we can conclude that $\sqrt{2}$ is not a rational number. It is clear from the above reasoning that no $\sqrt{K}$ is rational, where $K \in \mathbb{N}$ and $\sqrt{K} \notin \mathbb{N}$. The latter generality covers $\sqrt{338}$, i.e. $13 \sqrt{2}$. It also covers the more obvious situation where $K$ is a prime number, i.e. $K$ has no factors (unless you regard $1$ as a factor), let alone an integer root.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Evaluate $\lim_{x \to 0} \frac{\sqrt{1 + x\sin x} - \sqrt{\cos x}}{x\tan x}$ What I attempted thus far: Multiply by the conjugate $$\lim_{x \to 0} \frac{\sqrt{1 + x\sin x} - \sqrt{\cos x}}{x\tan x} \cdot \frac{\sqrt{1 + x\sin x} + \sqrt{\cos x}}{\sqrt{1 + x\sin x} + \sqrt{\cos x}} = \lim_{x \to 0} \frac{1 + x\sin x - \cos x}{x\tan x \cdot(\sqrt{1 + x\sin x} + \sqrt{\cos x})}$$ From here I can't see any useful direction to go in, if I even went in a useful direction in the first place; I have no idea.
We use the elementary limit results $\displaystyle \lim_{x\to 0}\frac{\sin x}{x} =1=\lim_{x\to 0} \frac{\tan x}{x}$. Now, writing $x\sin x = x^2\left(\frac{\sin x}{x}\right)$ and $x\tan x = x^2\left(\frac{\tan x}{x}\right)$, replacing both bracketed factors by their limit $1$ changes the numerator only by $O(x^4)$, so $$\lim_{x\to 0} \frac{\sqrt{1+x^2\left(\frac{\sin x}{x}\right)}-\sqrt{\cos x}}{x^2\left(\frac{\tan x}{x}\right)}=\lim_{x\to 0} \frac{\sqrt{1+x^2}-\sqrt{\cos x}}{x^2}.$$ Set $q=\frac{x^2}{2!}-\frac{x^4}{4!}+\cdots$, so that $\cos x = 1-q$. The binomial series gives $$\sqrt{1+x^2}=1+\frac{x^2}{2}-\frac{x^4}{8}+\cdots,\qquad \sqrt{1-q}=1-\frac{q}{2}-\frac{q^2}{8}-\cdots,$$ hence $$\sqrt{1+x^2}-\sqrt{\cos x}=\frac{x^2}{2}+\frac{q}{2}+O(x^4)=\frac{x^2}{2}+\frac{x^2}{4}+O(x^4).$$ Dividing by $x^2$ and letting $x\to 0$ gives the limit $\frac{1}{2}+\frac{1}{4}=\frac{3}{4}$.
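A quick numeric sketch agreeing with the value $3/4$:

```python
import math

for x in (0.1, 0.01, 0.001):
    f = (math.sqrt(1 + x * math.sin(x)) - math.sqrt(math.cos(x))) / (x * math.tan(x))
    print(x, f)   # the values approach 0.75
```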
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Solution verification: sum at least $4N/7$ times odd Problem: Let $a_{j}$,$b_{j}$,$c_{j}$ be whole numbers for $ 1 \leq j \leq N$. Suppose that for each $j$ at least one of $a_{j}$,$b_{j}$,$c_{j}$ is odd. Show that there are whole numbers $r$,$s$ and $t$ such that the sum $$r\cdot a_{j} + s\cdot b_{j} + t\cdot c_{j}$$ (for $ 1 \leq j \leq N$) is in at least $4N/7$ cases odd. My attempt: Choose $r=s=t=1$. To get an odd sum with 3 numbers for which one is odd for sure you must have that: 2 are even and the other odd or all three are odd. The total amount of possible even-odd combinations for 3 numbers is equal to ${3 \choose 1} +{3 \choose 2} +{3 \choose 3}=7$ (1 odd v 2 odd v 3 odd), so with $r=s=t=1$ we see that in $4/7$ cases, the sum is odd for a fixed $j$. In total we have thus that in $4N/7$ circumstances the sum is odd. My doubts: So, while reading the question I thought this would seem like a classic pigeonhole principle question and at first I had the intention to go that route, but when I was trying some random cases I came across this with $r=s=t=1$ and with this all the restrictions hold. What do you guys think about my solution?
Let $$u_1 = (0,0,1)$$ $$u_2 = (0,1,0)$$ $$u_3 = (1,0,0)$$ $$u_4 = (0,1,1)$$ $$u_5 = (1,0,1)$$ $$u_6 = (1,1,0)$$ $$u_7 = (1,1,1)$$ and for each $i\in \{1,2,...,n\}$ define $v_i = (a_i,b_i,c_i)$. "Connect" $u_j$ with $v_i$ iff $u_j\cdot v_i\equiv _2 1$, and count the number of all connections. It is easy to see that each $v_i$ has degree exactly $4$, so the number of all connections is $4n$. This means that some $u_j$ from the set $\{u_1,u_2,...,u_7\}$ must be connected with at least $4n/7$ vectors from $\{v_1,...,v_n\}$. So if we take $r,s,t$ to be the coordinates of that $u_j$, we are done.
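The double-counting step can be illustrated with a short simulation (plain Python; only the parities of $(a_j,b_j,c_j)$ matter, so each triple is drawn from the seven nonzero $0/1$ vectors):

```python
import random

random.seed(1)
us = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)][1:]   # the 7 nonzero 0/1 vectors
n = 70
vs = [random.choice(us) for _ in range(n)]   # parities of (a_j, b_j, c_j), at least one odd
odd = lambda u, v: (u[0] * v[0] + u[1] * v[1] + u[2] * v[2]) % 2
counts = [sum(odd(u, v) for v in vs) for u in us]
print(sum(counts), max(counts))              # total is exactly 4n; the max is >= 4n/7
```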
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What's the gradient of a vector field? Imagine I have the following function $$ \vec{f}(\vec{x}) = x \vec{x}, x = | \vec{x} |, \vec{x} \in R^3 $$ That is, the function is essentially a quadratic function, but contains a vector direction as well. Intuitively from single variable calculus I would expect the gradient $ \nabla \vec{f} = (\partial \vec{f}/ \partial x_1,\partial \vec{f}/ \partial x_2,\partial \vec{f}/ \partial x_3) $ to be proportional to $2x$, however I also would expect it to be a 3x3 matrix. My most naive attempt would be to do $$ \vec{f} = x_1^2 \vec{e}_1 + x_2^2 \vec{e}_2 + x_3^2 \vec{e}_3 $$ and say that $$ \nabla \vec{f} = 2 x_1 \vec{e}_1 + 2 x_2 \vec{e}_2 + 2 x_3 \vec{e}_3 $$ But it would mean that every gradient w.r.t. a vector would always be a diagonal matrix, which seems wrong to me. What I really want to create is the Jacobian $ \partial \vec{f}_i / \partial x_j $ but I think I get a little bit confused about what I do with the base vectors $ \vec{e_i} $ during the partial derivative.
Recall the formula for gradient of scalar times vector: $$\nabla(a\vec{v}) = \vec{v}\otimes\nabla a + a\nabla\vec{v}.$$ In our case we have $f(\vec{x})= |\vec{x}|\vec{x}$ so $$\nabla{|\vec{x}|} = \nabla\sqrt{x_1^2+x_2^2+x_3^2}= \frac{\vec{x}}{|\vec{x}|}, \quad \nabla \vec{x} = \nabla(x_1,x_2,x_3) = I$$ where $I$ is the identity matrix. Therefore $$\nabla f(\vec{x}) = \vec{x}\otimes \frac{\vec{x}}{|\vec{x}|} + |\vec{x}|I = \left[\frac{x_ix_j}{|\vec{x}| } + |\vec{x}|\delta_{ij}\right]_{1\le i,j \le 3}.$$
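A finite-difference check of the resulting Jacobian $J_{ij}=x_ix_j/|\vec x|+|\vec x|\,\delta_{ij}$ at one point (numpy assumed):

```python
import numpy as np

def f(x):
    return np.linalg.norm(x) * x

x = np.array([1.0, 2.0, 2.0])                 # |x| = 3
r = np.linalg.norm(x)
analytic = np.outer(x, x) / r + r * np.eye(3)

h = 1e-6
numeric = np.zeros((3, 3))
for j in range(3):                            # central differences, column by column
    e = np.zeros(3)
    e[j] = h
    numeric[:, j] = (f(x + e) - f(x - e)) / (2 * h)

print(np.max(np.abs(numeric - analytic)))     # ≈ 0, up to finite-difference error
```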
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can we say that $\text {tr}\ (A) = 0\ $? Let $A$ be an $n \times n$ real matrix with $A^3 + A = 0.$ Can we say that $\text {tr}\ (A) = 0\ $? I think it's true but can't prove it. Any help will be highly appreciated. Thanks in advance.
The most straightforward way to approach the proof is to use the minimal polynomial of $A$. If $K$ is the minimal polynomial of $A$, then any other polynomial $Q$ with $Q(A)=0$ is a multiple of $K$. Hence, since $A^3+A=0$ and $x^3+x=x(x-i)(x+i)$, the eigenvalues of $A$ are among $0$, $i$ and $-i$. Moreover, as $A$ is a real matrix, the sum of its eigenvalues must be a real number. In order to clarify this, consider a characteristic equation as below: $$ a_1 x^n+a_2x^{n-1}+ ...+a_{n+1}=0$$ Then, the sum of the eigenvalues is $\frac{-a_2}{a_1}$, and the characteristic polynomial of a real matrix has real coefficients. To conclude, note that the non-real eigenvalues of a real matrix come in conjugate pairs, so $i$ and $-i$ occur with the same multiplicity; together with the eigenvalue $0$, this forces the sum of the eigenvalues — which is $tr(A)$ — to be $0$. So we are done. PS: the link below may be useful. https://en.wikipedia.org/wiki/Minimal_polynomial_(linear_algebra)
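A concrete instance (numpy assumed): the rotation matrix $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ satisfies $A^2=-I$, hence $A^3+A=0$; its eigenvalues are $\pm i$ and its trace is $0$:

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])    # A^2 = -I, so A^3 + A = 0
print(np.linalg.matrix_power(A, 3) + A)    # the zero matrix
print(np.trace(A), np.linalg.eigvals(A))   # trace 0.0, eigenvalues ±i
```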
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Isometry on inner product space $V$ is an inner product vector space. If a transformation $T\colon V\to V$ satisfies $\langle T(x), T(y)\rangle = \langle x, y\rangle$ for every vector $x, y \in V$, prove or disprove that $T$ is linear. It seems true, but I can't prove it. I tried plugging $x+y$ into $x,y$ and got $\langle T(x+y), T(x+y)\rangle = \langle T(x)+T(y),T(x)+T(y)\rangle$, but this does not lead to the conclusion. I also got that $T$ is one-to-one. Does anyone know the answer? Any help is appreciated!
I found the answer. The point is using that $\langle x,x\rangle=0$ implies $x=0$. Consider $\|T(u+v)-T(u)-T(v)\|^2$; expanding it into inner products of images and applying the hypothesis to each term turns it into $\|(u+v)-u-v\|^2=0$, so $T(u+v)=T(u)+T(v)$. The same computation with $\|T(cu)-cT(u)\|^2=\|cu-cu\|^2=0$ gives $T(cu)=cT(u)$, so $T$ is linear.
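Writing the expansion out explicitly (a sketch; a real inner product is assumed, and the complex case is the same with conjugate-symmetric cross terms):

```latex
\begin{aligned}
\|T(u+v)-T(u)-T(v)\|^2
  ={}& \|T(u+v)\|^2+\|T(u)\|^2+\|T(v)\|^2 \\
     & -2\langle T(u+v),T(u)\rangle-2\langle T(u+v),T(v)\rangle+2\langle T(u),T(v)\rangle \\
  ={}& \|u+v\|^2+\|u\|^2+\|v\|^2-2\langle u+v,u\rangle-2\langle u+v,v\rangle+2\langle u,v\rangle \\
  ={}& \|(u+v)-u-v\|^2 \;=\; 0 .
\end{aligned}
```

Every inner product of images in the second line was replaced by the inner product of the arguments, which is exactly the hypothesis.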
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to evaluate $\int \frac{dx}{\sin(\ln(x))}$? I am wondering how to evaluate the indefinite integral $$\int \frac{dx}{\sin(\ln(x))} \quad (1)$$ Attempt 1 I tried using Weierstrass substitution. The Weierstrass substitution, (named after K.Weierstrass (1815)), is a substitution used in order to convert trigonometric functions rational expressions to polynomial rational expressions. Integrals of this type are usually easier to evaluate. This substitution is constructed by letting: $$t = \tan\left(\frac{x}{2}\right) \iff x = 2\arctan(t) \iff dx = \frac{2}{t^2+1}$$ Using basic trigonometric identities it is easy to prove that: $$\cos x = \dfrac{1 - t^2}{1 + t^2}$$ $$\sin x = \dfrac{2t}{1 + t^2}$$ But I couldn't express $\ln(x)$ in terms of $t$. Attempt 2 I tried using integration by parts but I couldn't find a workaround, it gets more complicated, really fast. $$ \int \frac{dx}{\sin(\ln(x))} \ = x \sin(\ln(x)) - \int \frac{\cot \left(\ln \left(x\right)\right)}{x\sin \left(\ln \left(x\right)\right)} $$ Attempt 3 The most logical substitution I could think of. It doesn't seem to lead anywhere though. Let, $\ln(x) = u \iff dx = \, e^u du$ $$ (1) \iff \int \frac{dx}{\sin(\ln(x))} = \int \frac{e^u}{\sin(u)} du = \int \frac{(e^u)'}{\sin(u)} du = $$ $$ \frac{(e^u)'}{\sin(u)} - \int e^u \left(\frac{1}{\sin(u)}\right)' = \frac{(e^u)'}{\sin(u)} - \int e^u \frac{\cos(u)}{\sin^2(u)} = ?$$ Attempt 4 A combination of attempts 1,2, 3. 
Let $\ln(x) = t$ then $dx = e^t dt$, therefore, $$\int \frac{dx}{\sin(\ln(x))} dx = \int \frac{e^t }{\sin(t)}dt \quad (1)$$ Let's first evaluate $$ \int \frac{1\:}{\sin\left(t\right)}dt \quad (2)$$ Using the Weierstrass substitution $$ t = \arctan(\frac{x}{2})$$ it is easy to prove that $$ (2) = \int \frac{1\:}{\sin\left(t\right)}dt= \ln \left|\tan \left(\frac{t}{2}\right)\right|+C$$ Therefore, $$ (1) \iff I = \int e^x\left(\ln \:\left|\tan \:\left(\frac{t}{2}\right)\right|\right)'dt = e^x \ln \:\left|\tan \:\left(\frac{t}{2}\right)\right| - \int (e^x)' \ln \:\left|\tan \:\left(\frac{t}{2}\right)\right|dt = $$ $$ e^x \ln \:\left|\tan \:\left(\frac{t}{2}\right)\right| - \left( e^x \ln \:\left|\tan \:\left(\frac{t}{2}\right)\right| - \int e^x \left(\ln \:\left|\tan \:\left(\frac{t}{2}\right)\right|\right)'dt \right) $$ $$ I = 0 + I \iff 0=0$$ Tautology. No answer here. Attempt 5 Ask a question on MathExchange: Any ideas? Note: A complex-plane solution was proposed in the comments, but I am evaluating this on $\mathbb{R}$
Based on the hypergeometric answers of J.G. and Simply Beautiful Art. Taking another branch of the solution of the hypergeometric differential equation from those answers, we can get solutions like this: $$ f(x) = \mathrm{Re}\left[ {\frac { \left( 1+i \right) {x}^{1+i}}{{x}^{2\,i}-1} \;{\mbox{$_2$F$_1$}\left(1,1;\frac{3-i}{2};\,{\frac {{x}^{2\,i}}{{x}^{2\,i}-1}}\right)} } \right] $$ which satisfies $$ f'(x) = \frac{1}{\sin(\log x)} $$ in the interval $(0.21 , 0.55)$. Here we are inside the radius of convergence of the hypergeometric function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 5 }
What is the expression for the centroid of an arbitrary parameterized space curve? Let $\gamma:t\in[a,b]\rightarrow (x(t),y(t),z(t))\in \mathbb{R}^3$ be a parametrized curve. I am looking for the expression of the centroid of the curve $\gamma$, together with a good reference. (I didn't find a good one.)
You can use the standard definition of center-of-mass: $$ r={\int_a^b\gamma(t)\,|\dot\gamma(t)|\,dt\over \int_a^b|\dot\gamma(t)|\,dt}, $$ where: $\dot\gamma(t)=(\dot x(t), \dot y(t), \dot z(t))$ and $|\dot\gamma(t)|=\sqrt{\dot x^2(t)+\dot y^2(t)+\dot z^2(t)}$.
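A quick numerical sanity check of this formula (my own sketch, not part of the original answer; the polyline approximation and the test curve are choices made for illustration). Each segment of a fine polyline approximates $|\dot\gamma(t)|\,dt$, so weighting segment midpoints by segment length approximates the integrals above:

```python
import math

def curve_centroid(gamma, a, b, n=200_000):
    # approximate the arc-length-weighted average of points on the curve
    # by a fine polyline: each segment's midpoint is weighted by its length
    pts = [gamma(a + (b - a) * k / n) for k in range(n + 1)]
    sx = sy = sz = total = 0.0
    for p0, p1 in zip(pts, pts[1:]):
        ds = math.dist(p0, p1)
        sx += ds * (p0[0] + p1[0]) / 2
        sy += ds * (p0[1] + p1[1]) / 2
        sz += ds * (p0[2] + p1[2]) / 2
        total += ds
    return sx / total, sy / total, sz / total

# sanity check: a unit semicircular arc in the xy-plane
# has centroid (0, 2/pi, 0)
c = curve_centroid(lambda t: (math.cos(t), math.sin(t), 0.0), 0.0, math.pi)
```

For a smooth curve the polyline approximation converges at rate $O(1/n^2)$, so the agreement with the classical value $2/\pi$ is very tight.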
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the limit of $\frac{(2n)!}{n!}$ So I've tried to use Stirling's approximation and got that $\lim\frac{2n!}{n!}=\frac{1}{1}=1$ Any thoughts?
$\frac{(2n)!}{n!} = \frac{2n(2n-1)\,(2n-2)!}{n\,(n-1)!} = 2(2n-1)\cdot\frac{(2(n-1))!}{(n-1)!} \ge 2\cdot\frac{(2(n-1))!}{(n-1)!}$ Iterating this inequality down to $n=1$ gives $\frac{(2n)!}{n!} \ge 2^n$, so $\lim\limits_{n \to +\infty} \frac{(2n)!}{n!} \ge \lim\limits_{n \to +\infty} 2^n = + \infty$
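The bound is easy to spot-check numerically (my own addition, not part of the answer):

```python
import math

def ratio(n):
    # (2n)! / n! is an exact integer, so use integer division
    return math.factorial(2 * n) // math.factorial(n)

# verify (2n)!/n! >= 2^n for a range of n
checks = [ratio(n) >= 2 ** n for n in range(1, 30)]
```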
{ "language": "en", "url": "https://math.stackexchange.com/questions/3758931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding the absolute extrema of $F(x) = 2x + 5\cos(x)$ Find the absolute extrema of $F(x) = 2x + 5\cos(x)$ on the interval $[0,2\pi]$ using the extreme value theorem. Answer should be 2 ordered pairs. I got $\arcsin(2/5)$ for the first value of $x$, but can’t figure out the second. Thanks in advance.
If $f(x)=2x+5\cos x , x \in [0,2\pi]$, then $f'(x)=2-5 \sin x$, $f''(x)=-5 \cos x$. $$f'(x)=0 \implies \sin x=\frac{2}{5} \implies x_1= \sin^{-1} (2/5), \quad x_2=\pi-\sin^{-1} (2/5)$$ $$f''(x_1)<0, \ f''(x_2)>0 \implies f_{\text{local max}}=f(x_1)=2\sin^{-1}(2/5)+\sqrt{21},$$ $$ f_{\text{local min}} =f(x_2)=2[\pi-\sin^{-1}(2/5)]-\sqrt{21}$$ At the endpoints, $f(0)=5$ and $f(2\pi)=4\pi+5$. So the absolute max is $f(2\pi)=4\pi+5$ and the absolute min is $2[\pi-\sin^{-1}(2/5)]-\sqrt{21}$
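These values can be confirmed with a brute-force grid search over the interval (my own sketch, not part of the answer):

```python
import math

def f(x):
    return 2 * x + 5 * math.cos(x)

# evaluate f on a fine grid over [0, 2*pi], including both endpoints
n = 200_000
vals = [f(2 * math.pi * k / n) for k in range(n + 1)]
fmax, fmin = max(vals), min(vals)
```

The grid maximum should match $4\pi+5$ (attained at the endpoint $2\pi$) and the grid minimum should match $2[\pi-\arcsin(2/5)]-\sqrt{21}$ to within the grid resolution.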
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Central limit theorem for multi-dimensional martingale difference I know that there is a CLT for $\mathbb R$-valued martingale difference process that goes roughly as follows: Let $X$ be an $\mathbb F$-martingale difference process, i.e. $\mathbf E [X_t \mid \mathcal F_{t-1}]=0$, and suppose $X$ satisfies some kind of Lindeberg condition, then $$ \frac{\sum_{i=1}^n X_i}{\sqrt{\sum_{i=1}^n \mathbf E[X_i^2]}} \xrightarrow{\mathcal D} \mathcal N_{0,1}. $$ I am searching for some multi-dimensional version of this theorem, namely when $X_t$ takes values in $\mathbb R^d$. I googled 'multivariate martingale CLT' and 'multidimensional martingale CLT' and what I found are only some obscure continuous-time results, e.g. This paper. Is there some discrete-time multi-dimensional martingale CLT theorem that looks close to the one described above?
Use the Cramer-Wold device. That is, if $t^{\top}S_n\xrightarrow{d}t^{\top}S$ for all $t\in \mathbb{R}^d$, then $S_n\xrightarrow{d}S$, where $S_n:=\sum_{i=1}^n X_i/\sigma_n$ and $\{\sigma_n\}$ is a normalizing sequence. Note that in your case $\{t^{\top} X_n\}$ is a martingale difference sequence w.r.t. $\{\mathcal{F}_n\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Functional equation for $\eta(s)$ following Riemann's $2^{nd}$ method. Being \begin{equation*} \eta(s)=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{s}}=\frac{1}{1^{s}}-\frac{1}{2^{s}}+\frac{1}{3^{s}}-\frac{1}{4^{s}}+\cdots \end{equation*} and following Riemann's second method (Edwards, p. 15) to obtain the functional equation for $\zeta(s)$, one can think the same way and try the same approach for $\eta(s)$. Thus from \begin{equation*} \int_{0}^{\infty} \operatorname{exp}\left(-n^{2} \pi x\right) x^{s / 2-1} d x=\pi^{-s / 2} \Gamma\left(\frac{s}{2}\right)\frac{1}{n^{s}} \text { for } s>0 \end{equation*} one can express $\eta(s)$ as \begin{equation*} \pi^{-s / 2} \Gamma \left(\frac{s}{2}\right)\underbrace{\left(1-\frac{1}{2^{s}}+\frac{1}{3^{s}}-\frac{1}{4^{s}}+\cdots\right)}_{\eta(s)} =\int_{0}^{\infty}\left(e^{-\pi 1^2 x}-e^{-\pi 2^2 x}+e^{-\pi 3^2 x}-e^{-\pi 4^2 x}+\cdots\right)x^{s/2}\text{ }\frac{dx}{x} \end{equation*} How would one proceed from here to craft a functional equation for $\eta(s)$? I'm interested in references and/or answers. Any of them will be very much appreciated. Thanks.
Ignoring technicalities of convergence, in Riemann's second proof, you start with the Poisson summation formula $\sum_{n\in\mathbb Z} f(n / x) = x \sum_{n\in\mathbb Z} \hat f (n x)$, take the Mellin transform of both sides, and use the self-dual function $f(x)=e^{-x^2}$. To get the alternating sum you want, you could either change the function or change the summation formula. For the function, you could use something like $\sum_{n\in\mathbb Z} f(n / x) \exp(\pi i n)$, and do some computations. You could also take a twisted Poisson summation formula $\sum (-1)^n f(n) = \sum_{n \textrm{ odd}} \hat f(n/2)$, but the steps for proving that are identical to the manipulations done to derive the functional equation for $\eta(s)$ from the functional equation for $\zeta(s)$. Furthermore, an inverse Mellin transform allows you to go in the converse direction: a functional equation of Dirichlet series gives a summation formula. If the gamma factor is different then it will not be a Fourier transform but a generalization. If the degree of the functional equation is $d$ then the sum will be over $d$-th roots of natural numbers instead of over natural numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
small distances between powers of irrationals The value of $$\inf \left\{ |\pi^m-e^n|: m,n\in\mathbb{N} \right\}$$ is a known unsolved problem. But transcendental numbers are known to cause problems of this sort. Is the value of $$\inf \left\{ |\sqrt{2}^m-\sqrt{3}^n|: m,n\in\mathbb{N} \right\}$$ known? Or at least is it known if $|\sqrt{2}^m-\sqrt{3}^n|$ can be arbitrarily small?
Note you have $$d = \left|\sqrt{2}^m-\sqrt{3}^n\right| = \frac{\left|2^m - 3^n\right|}{\sqrt{2}^m + \sqrt{3}^n} \tag{1}\label{eq1A}$$ As stated near the bottom of Differences Between Powers, Indeed, Tijdeman proved that there exists a number $c \ge 1$ such that $$\left|2^m - 3^n\right| \ge \frac{2^m}{m^c}$$ Also, a closely related post is $\liminf |2^m - 3^n|$. Its accepted answer uses Baker's theorem to show that $|2^m-3^n|/m>2^m\cdot c'\cdot m^{-C}$ which is very similar to what Tijdeman determined. Since you're looking for $d$ in \eqref{eq1A} to be very small, let $$\sqrt{3}^n = (1 + \epsilon)\sqrt{2}^m \tag{2}\label{eq2A}$$ where $\epsilon \approx 0$. Also, to get smaller values of $d$, $\epsilon$ should get closer to $0$ as $m$ increases. From \eqref{eq1A}, using Tijdeman's result and \eqref{eq2A}, gives $$\begin{equation}\begin{aligned} \left|\sqrt{2}^m-\sqrt{3}^n\right| & \ge \frac{2^m}{m^c(\sqrt{2}^m + \sqrt{3}^n)} \\ & = \frac{2^m}{m^c(2 + \epsilon)\left(2^{m/2}\right)} \\ & = \frac{2^{m/2-1}}{m^c\left(1 + \frac{\epsilon}{2}\right)} \end{aligned}\end{equation}\tag{3}\label{eq3A}$$ The numerator is an exponential in $m$ while, since $c$ is a fixed real number and $\epsilon$ is relatively small (and ideally decreasing), the denominator is basically a polynomial in $m$. Since exponentials grow faster than polynomials, this means \eqref{eq3A} shows the minimum difference grows without bound as $m$ increases. This also means the $\epsilon$ in \eqref{eq2A} cannot stay close to $0$ and, actually, must be increasing. Thus, this proves $\left|\sqrt{2}^m-\sqrt{3}^n\right|$ can't be made arbitrarily small. Regarding the smallest value $d$ can be, this can be determined by checking the smallest values of $m$, with the required number to check depending on what the value of $c$ is. However, I don't know if anybody has done this and, if so, what the result is.
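While the answer shows the difference cannot be made arbitrarily small, a small brute-force search (my own addition; the range $1 \le m,n \le 40$ is an arbitrary choice) suggests the minimum is attained at $(m,n)=(3,2)$, i.e. $|2\sqrt{2}-3|\approx 0.1716$:

```python
import math

# search for the smallest |sqrt(2)^m - sqrt(3)^n| over a finite range;
# min over tuples compares the distance first, then (m, n)
best = min(
    (abs(math.sqrt(2) ** m - math.sqrt(3) ** n), m, n)
    for m in range(1, 41)
    for n in range(1, 41)
)
d, m, n = best
```

This agrees with the growth bound above: near-coincidences $2^m \approx 3^n$ come from continued-fraction convergents of $\log 3/\log 2$, and the later ones (e.g. $2^{19}$ vs $3^{12}$) already give much larger square-root differences.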
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Help with a differential equation system Given $x' = -x$ and $y' = -4x^3+y$, we want to linearize and show phase portrait at origin. So I make system $\vec{Y}' = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\vec{Y}$ by just scrapping the $-4x^3$ term. But now we have repeated $0$ eigenvalue, so I try to find an eigenvector. $\left[ \begin{pmatrix} -1 & 0 \\ 0 &1 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ 0 & 0\end{pmatrix} \right]\begin{pmatrix}v_1 \\ v_2\end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix} \implies v_1 = v_2 = 0$. So $\vec{v} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$? Unless I am mistaken. What kind of eigenvector is this? I can't think of how to draw a phase portrait, thanks!
The assertion that the eigenvalues are both zero is erroneous. However, we have: The eigenvectors of the coefficient matrix $A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \tag 1$ are $(1, 0)^T$, with eigenvalue $-1$, and $(0, 1)^T$, with eigenvalue $1$, as is easily checked, e.g. $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = -1\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \tag 2$ with a similar calculation for the eigenvector $(0, 1)^T$. Since the eigenvalues have opposite signs, the point $(0, 0)$ is a saddle, as corroborated by the phase portrait, which is easily drawn.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linear Algebra books that also covers multilinear algebra I want self-studying linear algebra but I also want to know what tensors are. I don't see any books that cover both linear and multilinear algebra (tensors are part of multilinear algebra right?). So if there are any books that do would be great. I don't mind that the books being "theoretical" i.e.,proof-theorem style books without many(or any) application since I want to really develop my intuition behind the subject.
One book you could use would be Lectures in Geometry, Semester 2: Linear Algebra and Differential Geometry, by Postnikov. It's the second volume in his six-part series Lectures in Geometry. This might not be a good book to learn linear algebra from by itself, as there are no exercises and some important topics are omitted. It's really focused on what's needed for differential geometry. But it could supplement another book well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove that if $x>0$ and $y>0$, then $\sqrt{x}+\sqrt{y}>\sqrt{x+y}$, using the relation of arithmetic and geometric means? How to prove that if $x>0$ and $y>0$, then $$\sqrt{x}+\sqrt{y}>\sqrt{x+y}\,,$$ using the relation of arithmetic and geometric means? I started by showing that if $x>0$ and $y>0$, based on the relation of arithmetic and geometric means, $\dfrac{x+y}{2}\ge\sqrt{xy}$. Hence, $x+y\ge2\sqrt{xy}$. I am now stuck here and don't know what must be the next step. Any suggestions or comments will be much appreciated.
Is this correct? By the AM-GM relationship, $x+y≥2\sqrt{xy}$; moreover, since $x, y > 0$ we have $2\sqrt{xy} > 0$, so $x+y+2\sqrt{xy} > x+y$ $(\sqrt{x}+\sqrt{y})^2 > x+y$ Taking square roots of both (positive) sides: $\sqrt{x}+\sqrt{y}>\sqrt{x+y}$. QED.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Taking derivative with function of multiple variables? Suppose we are studying the function $$ f(x,y) = xy + ax^2 + bx^2y^2, $$ We want to find the maximum $x$ satisfying the equation $$ f(x,y) = c, $$ where $a, b, c$ are constants. Somebody suggested to make use of the following auxiliary function $$ g(x,y) = xy, $$ so that $$ g + ax^2 + bg^2 = c. $$ Isolating $x$, $$ x^2 = \frac{c-g-bg^2}{a}. $$ Now he says that the same condition as $ \frac{dx}{dy} = 0 $ is $$ \frac{dx^2}{d g(x,y)} = 0$$ Why? Edit: I saw this trick here.
I don't want to go into details of different cases, so I will assume that $x$ and $a,b,c$ are all positive, while $y$ is negative. The canonical procedure would be to solve $f(x,y)=c$ with respect to $x$, to obtain $x=g(y)$, then solve $g'(y)=0$ to obtain $y_0$ (suppose it is unique), and finally the solution would be $x_0 = g(y_0)$. If we make a change of variables, as \begin{align} X &= x^2, \\ Y &= xy, \end{align} the equation $f(x,y)=c$ becomes $F(X,Y)=c$, i.e. $$ Y+aX+bY^2=c. $$ Taking into account that when $x$ is positive and maximum, also $X$ will be maximum, thanks to the relation $X=x^2$ between them, we can solve for $X$ and obtain $X=G(Y)$, i.e. $$ G(Y)=\frac{c-Y-bY^2}{a} $$ and solve $G'(Y)=0$ obtaining $$ Y_0 = -\frac{1}{2b} $$ and consequently $$ X_0 = G(Y_0)=\frac{1}{a}\left(\frac{1}{4b}+c\right) $$ and finally $$ x_0 = \sqrt{\frac{1}{a}\left(\frac{1}{4b}+c\right)} $$ As a final remark, my $X=G(Y)$ corresponds to your $$ x^2 = \frac{c-g-bg^2}{a}. $$ while my $G'(Y)=0$ corresponds to your $$ \frac{dx^2}{d g(x,y)} = 0. $$
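The closed form can be verified numerically (my own sketch; the constants $a=b=c=1$ are a hypothetical choice, not from the original problem). Since $f(x,y)=c$ is a quadratic in $x$, namely $x^2(a+by^2)+xy-c=0$, we can maximize its positive root over $y$ by brute force and compare:

```python
import math

# hypothetical constants, chosen only for this check
a = b = c = 1.0

def x_of_y(y):
    # positive root of x^2 (a + b y^2) + x y - c = 0
    A = a + b * y * y
    return (-y + math.sqrt(y * y + 4 * c * A)) / (2 * A)

# closed form derived above: x0 = sqrt((1/(4b) + c) / a)
x0 = math.sqrt((1 / (4 * b) + c) / a)

# brute-force maximum of x over a fine grid of y values
best = max(x_of_y(-3 + 6 * k / 200_000) for k in range(200_001))
```

With these constants the maximizer sits near $y \approx -0.447$, comfortably inside the search interval.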
{ "language": "en", "url": "https://math.stackexchange.com/questions/3759940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to show that $\sum_{n=1}^{N} \cos(2n-1)x = \frac {\sin(2Nx)}{2\sin(x)} $ I am studying Fourier analysis and have been given the following question: Show that $$\sum_{n=1}^{N} \cos(2n-1)x = \frac {\sin(2Nx)}{2\sin(x)} $$ I used the formula for a finite geometric sum and Euler's formula to get to the following: $\sum_{n=1}^{N} \cos(2n-1)x = Re (\sum_{n=1}^{N} e^{i(2n-1)x}) = Re (\sum_{n=0}^{N-1} e^{i(2n+1)x}) = Re (e^{ix} \sum_{n=0}^{N-1} (e^{i2x})^n) = .... = Re(\frac{i}{2 \sin{x}}(1-e^{i2xN}))$ I have been stuck here for a while and am unsure how to get to the required $\frac {\sin(2Nx)}{2\sin(x)}$. What do I do next?
Since $2\sin(a)\cos(b) = \sin(a+b)+\sin(a-b) $, $2\sin(x)\cos((2n-1)x) = \sin(2nx)+\sin(-(2n-2)x) = \sin(2nx)-\sin((2n-2)x) $, so you get a telescoping sum.
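The resulting identity is easy to spot-check numerically (my own addition, not part of the answer):

```python
import math

def lhs(x, N):
    # sum_{n=1}^{N} cos((2n-1) x)
    return sum(math.cos((2 * n - 1) * x) for n in range(1, N + 1))

def rhs(x, N):
    # sin(2Nx) / (2 sin x), valid when sin x != 0
    return math.sin(2 * N * x) / (2 * math.sin(x))

max_err = max(abs(lhs(x, N) - rhs(x, N))
              for x in (0.3, 1.1, 2.5) for N in (1, 4, 25))
```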
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Let $ABC$ be a triangle and $M$ be the midpoint of $BC$. Squares $ABQP$ and $ACYX$ are erected. Show that $PX = 2AM$. $\textbf{Question:}$ Let $ABC$ be a triangle and $M$ be the midpoint of $BC$. Squares $ABQP$ and $ACYX$ are erected. Show that $PX = 2AM$. I could solve this problem using computational techniques but I am looking for purely synthetic solution. I tried drawing some extra midpoints, connected them. But still couldn't find the solution. Any kind of hint or full solution both are appreciated.
Rotating a vector about any point of the plane by $\alpha$ gives the same vector as rotating it by $\alpha$ about the vector's own tail. Let $R^{\alpha}(\vec{a})$ be a rotation of $\vec{a}$ by $\alpha$. Thus, with $M$ the midpoint of $BC$, $$R^{90^{\circ}}(\vec{AM})=R^{90^{\circ}}\left(\frac{1}{2}\left(\vec{AB}+\vec{AC}\right)\right)=\frac{1}{2}\left(\vec{PA}+\vec{AX}\right)=\frac{1}{2}\vec{PX}$$ so $|PX| = 2|AM|$, and we are done!
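The relation $|PX| = 2|AM|$ can be checked numerically for a random triangle (my own sketch, not part of the answer; the orientation convention below is an assumption for "erected outward", with the two squares built on opposite rotation senses):

```python
import math, random

def rot90(v, sign):
    # rotate the 2-D vector v by sign * 90 degrees
    return (-sign * v[1], sign * v[0])

random.seed(0)
A, B, C = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
# orient the triangle counterclockwise so the squares land outside it
if (B[0] - A[0]) * (C[1] - A[1]) - (B[1] - A[1]) * (C[0] - A[0]) < 0:
    B, C = C, B

AB = (B[0] - A[0], B[1] - A[1])
AC = (C[0] - A[0], C[1] - A[1])
P = (A[0] + rot90(AB, -1)[0], A[1] + rot90(AB, -1)[1])  # square ABQP, P adjacent to A
X = (A[0] + rot90(AC, +1)[0], A[1] + rot90(AC, +1)[1])  # square ACYX, X adjacent to A
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)

PX = math.dist(P, X)
AM = math.dist(A, M)
```

In fact $\vec{PX} = R^{90^\circ}(\vec{AB}) + R^{90^\circ}(\vec{AC}) = 2R^{90^\circ}(\vec{AM})$ holds as a pure vector identity here, since a rotation by $-90^\circ$ is the negative of a rotation by $+90^\circ$.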
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If $A$ is a simple finite dimensional $\mathbb{C}$-algebra then $A\cong M_n(\mathbb{C})$ I'm trying to prove the problem 17 of chapter 13 of the book Algebra: a Graduate Course (by Martin Isaacs), which is: Let $A$ be a simple finite dimensional $\mathbb{C}$-algebra. Show that $A\cong M_n(\mathbb{C})$ for some integer $n\geq 1$. In one direction I'm trying to use the fact that $M_n(\mathbb{C})$ is generated by the $E_{ij}$ to construct the bijection with the generators of $A$ but I don't know if it is a good option.
The field $\mathbb{C}$ is not important. The statement is true for any algebraically closed field $F$. Take a simple right $A$-module $V$. This is a finite dimensional vector space over $F$, because it is a quotient of $A$ modulo some maximal right ideal. Therefore its endomorphism ring is a finite dimensional division algebra over $F$. Since $F$ is algebraically closed the dimension must be $1$, so the endomorphism ring is $F$. Also, the annihilator of $V$ in $A$ must be $\{0\}$, because $A$ is simple, so $V$ is faithful. Thus Wedderburn-Artin allows us to conclude that $A$ is isomorphic to $M_n(F)$, where $n=\dim_FV$ and the isomorphism is easily checked to be an $F$-algebra isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $f$ be an entire function s.t. $F(z) = \lim_{n\to\infty} f^{(n)}(z)$ exists for all $z$ with local uniform convergence. What can we say about $F$? I have stumbled into this problem, without a given answer. Let $f$ be an entire function such that $F(z) = \lim\limits_{n\to\infty} f^{(n)}(z)$ exists $\forall z \in \mathbb{C}$ with local uniform convergence. * *What can you say about the function $F$? *What can you say about the function $f$? I have sort of convinced myself that $F(z) =Ce^z$ and thus $f(z)=F(z)$ but i am very doubtful about this and even if it is correct I have no idea how to prove it, and there is probably more information you have to provide about the given functions. Such that $F$ is analytic implies that $f$ is analytic.
As already worked out in the comments, $F(z) = \lim_{n\to\infty} f^{(n)}(z)$ (locally uniformly) implies that $$ F'(z) = \lim_{n\to\infty} f^{(n+1)}(z) = F(z) $$ so that $F(z) = Ce^z$ for some constant $C \in \Bbb C$. Then $g(z) = f(z) - Ce^z$ satisfies $$ \lim_{n\to\infty} g^{(n)}(z) = \lim_{n\to\infty} f^{(n)}(z) - Ce^z = F(z) - Ce^z = 0 $$ so that it remains to characterize all entire functions $g$ with the property that $$ \lim_{n\to\infty} g^{(n)}(z) = 0 $$ locally uniformly in $\Bbb C$. Writing $g$ as a power series $g(z) = \sum_{k=0}^\infty \frac{b_k}{k!} z^k$ we have the necessary condition $$ \lim_{k\to\infty} b_k = \lim_{k\to\infty} g^{(k)}(0) = 0 \,. $$ That condition is also sufficient: If $b_k \to 0$ then for $|z| \le R$ $$ \left| g^{(n)}(z) \right| = \left| \sum_{k=0}^\infty \frac{b_{k+n} }{k!} z^k\right| \le \sum_{k=0}^\infty \frac{|b_{k+n}| }{k!} R^k \, . $$ Given $\epsilon > 0$ we can choose $N$ such that $|b_n| < \epsilon e^{-R}$ for $n > N$, which implies that $$ \left| g^{(n)}(z) \right| \le \epsilon e^{-R} \sum_{k=0}^\infty \frac{1}{k!} R^k = \epsilon $$ for $n > N$ and $|z| \le R$. Summarizing the results: If $f$ is a an entire function then $(f^{(n)})$ converges locally uniformly in $\Bbb C$ if and only if $$ f(z) = Ce^z + \sum_{k=0}^\infty \frac{b_k}{k!} z^k $$ for some $C \in \Bbb C$ and some sequence $(b_k)$ of complex numbers converging to zero. In that case the limit function is $F(z) = Ce^z $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
How prove that the elementary operations don't change the rank of a matrix One considers certain operations, called elementary row operations, that are applied to a matrix $A$ to obtain a new matrix $B$ of the same size. These are the following: * *exchange rows $i_1$ and $i_2$ of $A$ (where $i_1\neq i_2$); *replace row $i_1$ of $A$ by itself plus the scalar $c$ times row $i_2$ (where $i_1\neq i_2$); *multiply row $i$ of $A$ by the non-zero scalar $\lambda$. Naturally this operations can be implemented on a column and so we would call the analogous operations on the columns "elementary column operations". Theorem If $B$ is the matrix obtained by applying an elementary row/column operation to $A$, then these two matrix has the same rank. Unfortunately I'm not able to prove the previous theorem, so could someone help me, please?
This is a super important linear algebra theorem. The basic idea of the proof is that each row operation is equivalent to left-multiplication by an invertible (full-rank) elementary matrix. I'll give an example of each operation in the 2 by 2 case: * *Swap the rows by multiplying on the left by \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} * *Add the second row to the first with \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} * *Scale the top row by $c \neq 0$ using \begin{pmatrix} c & 0 \\ 0 & 1 \end{pmatrix} Since multiplying by an invertible matrix preserves rank, it follows that the elementary row operations are rank-preserving. (Column operations correspond to right-multiplication by the analogous matrices.)
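A small numerical check that the row operations preserve rank (my own sketch, not part of the answer; the `rank` function below is a plain Gaussian-elimination implementation written for illustration):

```python
import random

def rank(M, tol=1e-9):
    # rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [M[i][j] - f * M[r][j] for j in range(cols)]
        r += 1
    return r

random.seed(1)
A = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(3)]
A.append([x + 2 * y for x, y in zip(A[0], A[1])])  # dependent 4th row, so rank 3

# each elementary row operation applied to A (equivalently, left-multiplying
# by the corresponding elementary matrix)
swapped = [A[1], A[0], A[2], A[3]]                                    # swap rows 0, 1
added = [A[0], A[1], [x + 7 * y for x, y in zip(A[2], A[0])], A[3]]   # row2 += 7*row0
scaled = [A[0], [-3 * x for x in A[1]], A[2], A[3]]                   # scale row 1 by -3
```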
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
The integral of a nonnegative measurable function definition: what does $f_n(\omega)\uparrow f(\omega)$ mean? Let $f:\Omega\to\overline{\mathbb{R}}_{+}$ be a nonnegative measurable function on $(\Omega,\mathcal{F},\mu)$. The integral of $f$ with respect to $\mu$ is defined as \begin{align*} \int f \, \mathrm{d}\mu = \lim_{n\to\infty}\int f_n \, \mathrm{d}\mu \end{align*} where $\{f_n\}_{n\geq1}$ is any sequence of nonnegative simple functions s.t. $f_n(\omega)\uparrow f(\omega)$ for all $\omega\in\Omega$. MY QUESTION What is the meaning of the notation $f_n(\omega)\uparrow f(\omega)$? Does this mean that $f_{n}$ converges pointwise to $f$ and $f_{n+1}(\omega)\geq f_n(\omega)$ for every $\omega\in\Omega$? I am new to this so any help is appreciated.
Yes, it's as you say. This means that the sequence of functions converges "upwards" to $f$, meaning that $f_n\to f$ pointwise and for each $x\in \Omega$, and for each $n$, $f_n(x)\le f_{n+1}(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to smooth sine-like data I'm trying to produce a growth graph but I'm getting sine wave artefacts due to the way the data is compared (current 7 days / previous 7 days). I've drawn the red and yellow lines by hand by first connecting the mid point of each sine wave (red), then connecting those mid points (yellow). Low data artificially inflates the data 7 days later, and high data artificially suppresses the data 7 days later, which is causing the sine-like effect. I tried a Fourier analysis, but the wavelength in the data is not constant, so the results were undesirable. Also, I just realised that if it had worked, I'd get a large value in the middle of the graph and small values at either end, when what I want is a gradually rising graph. What formula can I use to achieve a similar smoothing effect to the hand drawn lines? This is a graph of the $log_2$ of the raw data and some smoothing. Note the wiggliness of the smoothed data. I'm basically trying to make it smoother without having to consider a wider range of dates per data point.
Probably a logarithmic function? $\log_y x$ is strictly increasing for $x, y > 1$. You may also take the derivative of the fitted function and use it to adjust the fit. This is one of the methods used in machine learning (especially in linear regression) as a learning technique, called stochastic gradient descent. (Or did I speak foolishly?)
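A different, very common remedy for exactly this kind of periodic artifact (my own suggestion, not mentioned in the answer above) is a centered moving average whose window matches the period of the oscillation: averaging 7 consecutive samples cancels a 7-sample cycle exactly, while a linear trend passes through unchanged.

```python
import math

def centered_moving_average(data, window=7):
    # window should be odd; edge samples are dropped rather than padded
    half = window // 2
    return [sum(data[i - half:i + half + 1]) / window
            for i in range(half, len(data) - half)]

# synthetic series: linear growth plus an exact 7-sample oscillation
data = [0.1 * i + math.sin(2 * math.pi * i / 7) for i in range(50)]
smooth = centered_moving_average(data, 7)
```

The smoothed output reproduces the underlying trend `0.1 * i` (shifted by the 3 dropped edge samples), because the seven sine samples in each window sum to zero.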
{ "language": "en", "url": "https://math.stackexchange.com/questions/3760962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Alternate way to solve $\lim\limits_{x \to 0} (\sin x) ^x$? My solution: $$ \lim_{x \to 0} (\sin x)^x = \lim_{x \to 0} e^{(x)(\ln \sin x)} = \exp \left( \lim_{x \to 0} (x) (\ln \sin x) \right)$$ Now we have $\lim_{x \to 0} (x) (\ln\sin x)$. Now we can say that the limit is $0$ as $\ln$, $\sin x$ decreases more slowly than $x$. Question: Is there any other method to solve the above limit without using the arguments saying one function decreases slower?
Just a small variant on Kavi Rama Murthy's answer, together with a comment on the one-sidedness of the limit: Note, $$(\sin x)^x=\left(\sin x\over x\right)^xx^x$$ We have $$\lim_{x\to0}\left(\sin x\over x\right)^x=1^0=1$$ and $$\lim_{x\to0^+}x^x=1$$ (from the easy L'Hopital for $x\ln x={\ln x\over1/x}$). Therefore $$\lim_{x\to0^+}(\sin x)^x=1$$ Remark: I've specified the limit from the right, $x\to0^+$, since $x^x$ and $(\sin x)^x$ are undefined (as real-valued functions) for (small) negative values of $x$. The function $\left(\sin x\over x\right)^x$ is defined for negative values of $x$ near $0$, so it's OK to consider its limit as $x\to0$ from both sides.
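The one-sided limit can also be checked numerically (my own addition, not part of the answer):

```python
import math

# evaluate (sin x)^x at x = 0.1, 0.01, ..., 1e-7 approaching 0 from the right
vals = [math.sin(10.0 ** -k) ** (10.0 ** -k) for k in range(1, 8)]
```

The values increase monotonically toward 1, matching the limit.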
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
Fixed point of a function on the disk Prove that there is a fixed point of the function $f:B(0,1) \to \mathbb{R}^2$, where $B(0,1)$ is the disk of radius 1 centered at the origin, and $f(x,y)=\frac {1}{4}(ye^{x}-y,\cos y)$. I tried to prove that $f$ is a contractive mapping (there is $0 < q < 1$ such that $d(f(a),f(b)) \leq q\,d(a,b)$), and then use the Banach fixed-point theorem.
Using Brouwer's theorem: You know that $f$ is continuous and $f(B(0,1))\subseteq B'\triangleq\overline{B} (0,1)$, where $\overline{B} (0,1)$ denotes the closed disk. So $f: B' \to B'$ is continuous, from a closed disk to itself. By Brouwer's theorem it has a fixed point. Exercise to finish the proof: verify the fixed point isn't on the edge of the disk (the factor $\dfrac{1}{4}$ is sufficient).
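Although the answer goes through Brouwer's theorem, this particular map also converges quickly under plain fixed-point iteration, which gives a concrete check (my own sketch, not part of the answer):

```python
import math

def f(p):
    # f(x, y) = (1/4) * (y e^x - y, cos y)
    x, y = p
    return ((y * math.exp(x) - y) / 4, math.cos(y) / 4)

# iterate from the origin; the x-coordinate stays 0 and the y-coordinate
# contracts toward the solution of y = cos(y)/4, roughly 0.2427
p = (0.0, 0.0)
for _ in range(200):
    p = f(p)
```

The limit point lies strictly inside the unit disk, consistent with the exercise at the end of the answer.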
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$G= \langle a, b : a^{7} = b^{3} = 1,\ b^{-1}ab = a^{2} \rangle$ and commutator group Consider the $a$ and $b$ the following permutation in $S_{7}$: $$a = (1\ 2\ 3\ 4\ 5\ 6\ 7 ),\ b = (2\ 3\ 5)(4\ 7\ 6)$$ Consider the group $G = \langle a, b \rangle$. I know that $a^{7} = b^{3} = 1$ and $b^{-1}ab = a^{2}$. Moreover, with these relations we can say that $|G| = 21$. Well, is simple to see that $\langle a \rangle \leq G'$, because $$[a, b] = a^{-1}b^{-1}ab = a^{-1}a^{2} = a$$ I'd like to know that $G' = [G, G] = \langle a \rangle$. Someone can help me? Thank you.
To see the other inclusion, consider the map $G\rightarrow\mathbb{Z}_3$ sending $a$ to $0$ and sending $b$ to $1$. This map preserves the relations, and so it induces a homomorphism from $G$ to $\mathbb{Z}_3$. Its kernel coincides with $\langle a\rangle$. Therefore $G/\langle a \rangle\cong\mathbb{Z}_3.$ By the characterization of the commutator subgroup (as the smallest normal subgroup with abelian quotient), we obtain $G'\leq\langle a\rangle$; combined with the inclusion $\langle a\rangle\leq G'$ shown above, $G'=\langle a\rangle$.
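The claims $|G| = 21$ and $G' = \langle a\rangle$ can also be verified by brute force with the explicit permutations from the question (my own sketch, not part of the answer; the closure-based subgroup computation below is written for illustration):

```python
from itertools import product

N = 7
# 0-indexed versions of a = (1 2 3 4 5 6 7) and b = (2 3 5)(4 7 6)
a = tuple((i + 1) % N for i in range(N))
b = list(range(N))
for cyc in [(1, 2, 4), (3, 6, 5)]:
    for i, x in enumerate(cyc):
        b[x] = cyc[(i + 1) % len(cyc)]
b = tuple(b)

def mul(p, q):
    # composition of permutations: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(N))

def inv(p):
    r = [0] * N
    for i, x in enumerate(p):
        r[x] = i
    return tuple(r)

def generated(gens):
    # closure under multiplication; in a finite group this is
    # exactly the subgroup generated by gens
    G, frontier = set(gens), set(gens)
    while frontier:
        new = {mul(g, h) for g in G for h in frontier}
        new |= {mul(g, h) for g in frontier for h in G}
        frontier = new - G
        G |= frontier
    return G

G = generated({a, b})
comms = {mul(mul(inv(g), inv(h)), mul(g, h)) for g, h in product(G, repeat=2)}
G_prime = generated(comms)
A_cyclic = generated({a})
```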
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Spectral norm, eigenvalues range I stumbled upon a property in solutions of some exercises which stated that if a hessian of a possibly non-convex function f(x) is bounded in spectral norm then its eigenvalues lie in the interval. $$ ||\nabla^2f(x)||_2 \leq L $$ $$ eigenvalues \in [-L, L]$$ I fail to understand or more I am unable to find where this property comes from, I looked through many materials about spectral norm, spectral radius and I think at this point I am completely confused. I know that spectral norm is the maximal singular value of a matrix. In this case does it mean that hessian is symmetric so eigenvalues == singular values? How do we go further with that to get the interval? I get the upper bound of the interval, it's obvious but why the lower bound. Thank you in advance for pointing me to right sources or directly answering.
The spectral norm of a matrix is, by definition, its largest singular value. For a symmetric matrix (such as a Hessian), the singular values are the absolute values of the eigenvalues, so the spectral norm equals the largest absolute value of the eigenvalues. If that largest absolute value is less than or equal to $L$, then every eigenvalue has absolute value less than or equal to $L$; and since the eigenvalues of a real symmetric matrix are real, they all lie in the interval $[-L,L]$.
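For the $2\times 2$ symmetric case this is easy to check numerically (my own sketch, not part of the answer; the closed-form eigenvalues below are the standard formula for a symmetric $2\times 2$ matrix):

```python
import math, random

def sym_eigs(p, q, r):
    # eigenvalues of the symmetric matrix [[p, q], [q, r]]:
    # (p+r)/2 +- sqrt(((p-r)/2)^2 + q^2)
    mean = (p + r) / 2
    d = math.hypot((p - r) / 2, q)
    return mean - d, mean + d

random.seed(3)
for _ in range(1000):
    p, q, r = (random.uniform(-2, 2) for _ in range(3))
    lo, hi = sym_eigs(p, q, r)
    # for a symmetric matrix the singular values are |eigenvalues|,
    # so the spectral norm is max(|lo|, |hi|) ...
    L = max(abs(lo), abs(hi))
    # ... and every eigenvalue lies in [-L, L]
    assert -L <= lo <= hi <= L
```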
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the other 2 vectors in a triangle given all the magnitudes and one of the sides. I'm trying to run a simulation in Python of a linkage system. The problem I'm working with returns results that don't make sense (like one of the linkages changes size). I've reduced the issue I'm confused about to the following. Suppose I have the following diagram: Suppose I am given $\mathbf{c}$, $\lVert \mathbf{a}\rVert$, $\lVert \mathbf{b}\rVert$. Are $\mathbf{a}$ and $\mathbf{b}$ unique? If so, how do I find $\mathbf{a}$ and $\mathbf{b}$? If they are not unique, what other information would I need to make them unique (e.g. the angle between $\mathbf{a}$ and $\mathbf{b}$)? I feel like the SSS theorem I learned about waaaaay back in junior high might be relevant. By SSS, I think if two triangles have sides that are proportional to each other, then the angles are the same. Could this be use to show $\mathbf{a}$ and $\mathbf{b}$ are unique? If so, how would I then derive $\mathbf{a}$ and $\mathbf{b}$?
SSS determines the shape of the triangle, but the orientation can vary. You may think of $\vec{c}$ as an unoriented segment $AB$ of length $||c||$. Given $||a||$ and $||b||$, the possible loci of the third vertex are two circles centered at $A$ and $B$ respectively, with radii $||a||$ and $||b||$. Geometrically, two circles in this situation intersect at exactly two points, symmetric about the segment $AB$, provided the strict triangle inequality holds. This does not break the SSS theorem, for the shapes of these two triangles are actually the same, but their orientations are different. To make the solution unique, you need one extra bit of information: which side of $AB$ the third vertex lies on (equivalently, the sign of the angle between $\mathbf{a}$ and $\mathbf{c}$).
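Since the question comes from a Python linkage simulation, here is a sketch of the circle-circle intersection computation (my own addition; the function name and conventions are assumptions, not part of the answer):

```python
import math

def circle_intersections(A, B, ra, rb):
    # intersection points of the circle centered at A with radius ra
    # and the circle centered at B with radius rb; assumes the strict
    # triangle inequality holds, so there are exactly two solutions
    dx, dy = B[0] - A[0], B[1] - A[1]
    d = math.hypot(dx, dy)
    t = (d * d + ra * ra - rb * rb) / (2 * d)  # distance from A to the chord
    h = math.sqrt(ra * ra - t * t)             # half-length of the common chord
    mx, my = A[0] + t * dx / d, A[1] + t * dy / d
    ox, oy = -dy / d * h, dx / d * h
    return (mx + ox, my + oy), (mx - ox, my - oy)
```

In a linkage simulation, picking whichever of the two points is closer to the joint's previous position keeps the mechanism from "flipping" between the mirror-image solutions, which is one common cause of links appearing to change length.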
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Through two given points on a circle, construct two parallel chords with a given sum. The problem is from Kiselev's Geometry exercise 317. Through two given points on a circle, construct two parallel chords with a given sum. Here is what I have tried so far: Mark the two points by $A$ and $C$ respectively. If we have constructed such two chords and marked the two other points by $B$ and $D$, the quadrilateral $ABCD$ is an isosceles trapezoid where $AC$ is a diagonal and (without loss of generality) $AB$ and $CD$ are parallel. The midline of the bases measures half of the given sum, and it passes through the midpoint of the diagonal $AC$. Unfortunately, I could not progress any further from here; I think I should utilize the fact that the 4 points are concyclic and $ABCD$ is an isosceles trapezoid, but I could not find usage of the fact. Any help would be much appreciated.
Let Q, R be the given points, and QRBA the given circle. Partition a line segment of length equal to the given sum, QP, at the point A. Draw a parallel through B and parallel-transport AP to BR. The point R must lie on the circle, because $\alpha,\beta$ are opposite supplementary angles in a cyclic quadrilateral. Likewise transfer AQ to BS, and draw the congruent circle PABS. Let the diameter of the circles be $d$. The geometric construction is anti-symmetric with respect to the midpoint of AB. At first I approached this in the way you suggested, but partitioning into AP, AT instead of AT, AQ led me into errors. For what it's worth, the Sine Rule gives the following relation, relevant to the construction, involving a side, a diagonal (of the isosceles trapezoid AQRB), and the distance $h$ between the given parallel lines: $$ r_1\cdot r_2= h\cdot d $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Integral: $\int \dfrac{dx}{(x^2-4x+13)^2}$? How can I integrate $$\int \dfrac{dx}{(x^2-4x+13)^2}?$$ Here is my attempt: $$\int \dfrac{dx}{(x^2-4x+13)^2}=\int \dfrac{dx}{((x-2)^2+9)^2}$$ Substitute $x-2=3\tan\theta$, $\ dx=3\sec^2\theta d\theta$ \begin{align*} &=\int \dfrac{3\sec^2\theta d\theta}{(9\tan^2\theta+9)^2}\\ &=\int \dfrac{3\sec^2\theta d\theta}{81\sec^4\theta}\\ &=\dfrac{1}{27}\int \cos^2\theta d\theta\\ &=\dfrac{1}{27}\int \frac{1+\cos2\theta}{2} d\theta\\ &=\dfrac{1}{54}\left(\theta+\frac{\sin2\theta}{2}\right)+C \end{align*} This is where I got stuck. How can I get the answer in terms of $x$? Can I solve it by other methods?
After square completion and substituting $u=\frac{x-2}{3}$, there is a simple standard trick to evaluate the integral without trigonometric substitutions: $$\int \dfrac{dx}{(x^2-4x+13)^2} \stackrel{u=\frac{x-2}{3}}{=}\frac 1{27} \underbrace{\int \frac{1}{(u^2+1)^2}du}_{I(u)}$$ Just rewrite the numerator: $$I(u) = \int\frac{1+u^2-u^2}{(u^2+1)^2}du = \arctan u - \frac 12\underbrace{\int u \frac{2u}{(u^2+1)^2}\,du}_{J(u)}$$ So, only one quick integration by parts gives $$J(u) = -\frac u{u^2+1}+\arctan u$$ Hence, $$I(u) = \arctan u - \frac 12\left(-\frac u{u^2+1}+\arctan u\right) =\frac 12 \left(\arctan u + \frac u{u^2+1}\right)$$ Finally, substitute back $u=\frac{x-2}{3}$ and you are done: $$\int \dfrac{dx}{(x^2-4x+13)^2} = \frac 1{27}I(u)= \frac 1{54}\left(\arctan \frac{x-2}{3} + \frac{\frac{x-2}{3}}{\left(\frac{x-2}{3}\right)^2+1}\right) (+C)$$ $$= \frac 1{54}\left(\arctan \frac{x-2}{3} + \frac{3(x-2)}{\left(x-2\right)^2+9}\right)(+C)$$
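As a quick numerical sanity check (not part of the derivation above — the function names are just for illustration), one can differentiate the claimed antiderivative by central differences and compare it with the integrand:

```python
import math

def F(x):
    # the antiderivative found above (constant C omitted)
    return (math.atan((x - 2) / 3) + 3 * (x - 2) / ((x - 2)**2 + 9)) / 54

def integrand(x):
    return 1 / (x**2 - 4 * x + 13)**2

h = 1e-6
for x in [-2.0, 0.0, 1.5, 7.0]:
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - integrand(x)) < 1e-9  # F'(x) matches the integrand
print("check passed")
```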
{ "language": "en", "url": "https://math.stackexchange.com/questions/3761986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 9, "answer_id": 2 }
Jordan normal form of powers of Jordan normal form Previous related question: Jordan normal form powers Let $A$ be an $n\times n$ matrix such that $A=PBP^{-1}$, where $B$ is in Jordan normal form with blocks $\lambda_i(k)_j$, where $i$ is the size, $k$ is the eigenvalue and $j$ the order. From the previous question I know that each Jordan block $\lambda_i(k)_j$, when the matrix is raised to the $n$-th power, becomes the upper triangular matrix $$\sum_{r=0}^{i-1} {n \choose r} k^{n-r}t^r$$ where $t$ is the matrix with 1's on its superdiagonal and 0's everywhere else. How can I get this matrix into Jordan normal form?
So you want to know the Jordan canonical form of the $i \times i$ matrix $$ A = \sum_{r=0}^{i-1} \left( n \atop r \right) k^{n-r} t^r .$$ Since $A$ has $k^n$ as an $i$-fold repeated eigenvalue, it is sufficient to find the Jordan form for $$ A - k^n I = \sum_{r=1}^{i-1} \left( n \atop r \right) k^{n-r} t^r .$$ First consider the case $k \ne 0$. Then $$ (A- k^n I)^{i-1} = n^{i-1} k^{(n-1)(i-1)} t^{i-1} \ne 0$$ since $t^r = 0$ for $r \ge i$. Similarly $(A- k^n I)^i = 0$. Therefore the minimal polynomial for $A$ is $p(x) = (x - k^n)^i$, and its Jordan canonical form must be $k^n I + t$, that is, a single block of size $i$. Next, consider the case $k = 0$, when $A = t^n$. Denote the unit vectors by $e_r$ with $1 \le r \le i$. Then the unit vectors split into groups: * *$e_1, e_{n+1}, e_{2n+1}, \dots$ of size $[(i+n-1)/n]$; *$e_2, e_{n+2}, e_{2n+2}, \dots$ of size $[(i+n-2)/n]$; *$e_3, e_{n+3}, e_{2n+3}, \dots$ of size $[(i+n-3)/n]$; *$\vdots$ *$e_n, e_{2n}, e_{3n}, \dots$ of size $[i/n]$; where $[x]$ denotes the integer part of $x$. On each group, $A$ acts as a Jordan block. So its Jordan canonical form is a collection of blocks of size $[(i+n-1)/n], [(i+n-2)/n], \dots, [i/n]$. And if you think about it, this is $n - i + n[i/n]$ blocks of size $[i/n]$ and $i - n[i/n]$ blocks of size $[i/n]+1$. (In particular, if $n \ge i$, then it is $i$ blocks of size $1$, that is, $A = 0$ is diagonal.)
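For the $k=0$ case, the final counting step can be sanity-checked with a short script (my own sketch; `group_sizes` is a hypothetical helper name): the group sizes $[(i+n-1)/n],\dots,[i/n]$ really do amount to $n-i+n[i/n]$ blocks of size $[i/n]$ and $i-n[i/n]$ blocks of size $[i/n]+1$.

```python
def group_sizes(i, n):
    # sizes of the groups e_r, e_{n+r}, e_{2n+r}, ... for r = 1..n (empty groups dropped)
    return sorted(s for s in (((i + n - r) // n) for r in range(1, n + 1)) if s > 0)

for i in range(1, 15):
    for n in range(1, 15):
        q = i // n
        # claimed: n - i + n*q blocks of size q and i - n*q blocks of size q + 1
        claimed = sorted(s for s in [q] * (n - i + n * q) + [q + 1] * (i - n * q) if s > 0)
        assert group_sizes(i, n) == claimed
        assert sum(group_sizes(i, n)) == i  # the blocks exhaust all i unit vectors
print("ok")
```

For instance, $i=5$, $n=2$ gives blocks of sizes $[2,3]$, matching the chains $e_1\leftarrow e_3\leftarrow e_5$ and $e_2\leftarrow e_4$ under $t^2$.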
{ "language": "en", "url": "https://math.stackexchange.com/questions/3762073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\bigcap\mathcal H\subseteq(\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Not a duplicate of Prove that $∩\mathcal H ⊆ (∩\mathcal F) ∪ (∩\mathcal G)$. This is exercise $3.5.17$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Suppose $\mathcal F$, $\mathcal G$, and $\mathcal H$ are nonempty families of sets and for every $A\in\mathcal F$ and every $B\in\mathcal G$, $A\cup B\in\mathcal H$. Prove that $\bigcap\mathcal H\subseteq(\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Here is my proof: Let $x$ be an arbitrary element of $\bigcap\mathcal H$. Now we consider two different cases. Case $1.$ Suppose $x\in\bigcap\mathcal F$. Therefore $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Case $2.$ Suppose $x\notin \bigcap\mathcal F$. So we can choose some $A_0$ such that $A_0\in\mathcal F$ and $x\notin A_0$. From $\forall A\in\mathcal F\forall B\in\mathcal G(A\cup B\in\mathcal H)$ and $A_0\in\mathcal F$, it follows that $A_0\cup B\in\mathcal H$ for every $B\in\mathcal G$. Since $x\in\bigcap\mathcal H$, $x\in A_0\cup B$ for every $B\in\mathcal G$. Since $x\notin A_0$, $x\in B$ for every $B\in\mathcal G$ and so $x\in\bigcap \mathcal G$. Thus $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Since the above cases are exhaustive, $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Therefore if $x\in\bigcap\mathcal H$ then $x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. Since $x$ is arbitrary, $\forall x\Bigr(x\in\bigcap\mathcal H\rightarrow x\in (\bigcap\mathcal F)\cup(\bigcap\mathcal G)\Bigr)$ and so $\bigcap\mathcal H\subseteq(\bigcap\mathcal F)\cup(\bigcap\mathcal G)$. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention.
Your proof is okay. It is more handsome though to prove the contrapositive statement: $$x\notin\left(\bigcap\mathcal{F}\right)\cup\left(\bigcap\mathcal{G}\right)\implies x\notin\bigcap\mathcal{H}$$ Proof: If $x\notin\left(\bigcap\mathcal{F}\right)\cup\left(\bigcap\mathcal{G}\right)$ then some $A\in\mathcal{F}$ exists with $x\notin A$ and some $B\in\mathcal{G}$ exists with $x\notin B$. Then $x\notin A\cup B\in\mathcal{H}$, so we conclude that $x\notin\bigcap\mathcal{H}$.
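A brute-force check on random finite families also supports the inclusion (this is only a sanity test of the statement, not a proof — here $\mathcal H$ is taken to be exactly the family of unions $A\cup B$, which satisfies the hypothesis):

```python
from itertools import product
import random

random.seed(0)
U = list(range(6))
for _ in range(200):
    F = [set(random.sample(U, random.randint(1, 5))) for _ in range(random.randint(1, 3))]
    G = [set(random.sample(U, random.randint(1, 5))) for _ in range(random.randint(1, 3))]
    # hypothesis: H contains A ∪ B for every A in F and B in G
    H = [A | B for A, B in product(F, G)]
    assert set.intersection(*H) <= set.intersection(*F) | set.intersection(*G)
print("ok")
```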
{ "language": "en", "url": "https://math.stackexchange.com/questions/3762217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I find all functions $F$ with $F(x_1) − F(x_2) \le (x_1 − x_2)^2$ for all $x_1, x_2$? In calculus class we were given this so-called "coffin problem" originally from Moscow State University. Find all real functions $F(x)$, having the property that for any $x_1$ and $x_2$ the following inequality holds: $$F(x_1) − F(x_2) \le (x_1 − x_2)^2$$ I have the solution to this problem, which is supposed to make the question very intuitive once you see it. However, I still do not quite understand it, and I would appreciate your help. Solution: The inequality implies $$\frac{F(x_1) − F(x_2)}{|x_1 − x_2|} \le |x_1 − x_2|,$$ so the derivative of $F$ at any point $x_2$ exists and is equal to zero. Therefore, by the fundamental theorem of calculus, the constant functions are exactly the functions with the desired property. Based on this solution, I substituted $x_1=x_2+h$ and took the limit as $h$ approaches zero, therefore by first principles, the derivative of $F(x)$ at $x_2$ is less than or equal to zero. Where do I proceed from here?
Exchanging $x_1$ with $x_2$ in the original inequality shows $F(x_1)-F(x_2)$ is bounded by $\pm(x_1-x_2)^2$, i.e. $\left|\frac{F(x_1)-F(x_2)}{x_1-x_2}\right|\le|x_1-x_2|$. This proves the two-sided derivative is $0$. But you actually don't need derivatives to solve the problem. Since $|F(x)-F(0)|\le x^2$ for all $x$, $|F(x)-F(0)|\le\lim_{n\to\infty}n\left(\frac{x}{n}\right)^2=\lim_{n\to\infty}\frac{x^2}{n}=0$ by the triangle inequality.
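A numeric illustration of both points (a sanity check, not part of the proof; the candidate function is my own example):

```python
# Any non-constant candidate, e.g. f(x) = 0.1*x, violates the hypothesis
# F(x1) - F(x2) <= (x1 - x2)^2 once the two points are close enough,
# since 0.1*h > h**2 whenever 0 < h < 0.1:
f = lambda x: 0.1 * x
h = 0.01
assert f(h) - f(0) > h**2  # the defining inequality fails for this pair

# The subdivision bound from the answer: |F(x) - F(0)| <= n * (x/n)^2 = x^2 / n -> 0
x, n = 2.0, 10**6
bound = x**2 / n
print(bound)  # 4e-06: the bound can be made arbitrarily small, forcing F(x) = F(0)
```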
{ "language": "en", "url": "https://math.stackexchange.com/questions/3762311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Lagrangian fiber bundles and Poisson commutative subalgebras Let $M$ be a symplectic manifold and $\pi : M \to B$ a fiber bundle (fibers are manifolds). Is it true that $\pi$ is an isotropic fiber bundle i.e. all fibers are isotropic submanifolds if and only if $\pi^*(C^\infty(B))$ is Poisson commutative? More precisely, the latter condition means that $\{\pi^*(f),\pi^*(g)\}=0$ for all $f,g \in C^\infty(B)$ and we use Poisson bracket produced by the symplectic structure. How to formulate condition that the fiber bundle is Lagrangian in terms of commutative Poisson subalgebras? Do they correspond to maximal such subalgebras?
A counterexample is the fiber bundle $\mathbb{R}^{3}\times\mathbb{R}\rightarrow\mathbb{R}^{3}$ with projection $\pi:(x_1,x_2,x_3,z)\mapsto(x_1,x_2,x_3)$ and symplectic form $\omega\in\Omega^{2}(\mathbb{R}^{3}\times\mathbb{R})$ given by $$ \omega=dx_1\wedge dx_2 + dx_3\wedge dz. $$ The fibers are one-dimensional, hence isotropic. But $\pi^{*}(C^{\infty}(\mathbb{R}^{3}))$ is not Poisson-commutative, since $\{x_1,x_2\}=\pm 1$ (depending on sign conventions).
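The bracket computation can be verified in a few lines (coordinates ordered $(x_1,x_2,x_3,z)$; the overall sign of the Poisson bivector depends on convention, as noted in the answer):

```python
# Omega: the matrix of omega = dx1^dx2 + dx3^dz; Pi: its inverse, i.e. the
# Poisson bivector (up to an overall sign convention).
Omega = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
Pi    = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]]

# Check Omega * Pi = Identity:
prod = [[sum(Omega[i][k] * Pi[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
assert prod == [[1 if i == j else 0 for j in range(4)] for i in range(4)]

# {x1, x2} = Pi[0][1] is nonzero, so pi^*(C^inf(R^3)) is not Poisson-commutative:
print(Pi[0][1])  # -1
```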
{ "language": "en", "url": "https://math.stackexchange.com/questions/3762455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of ternary strings of length n such that number of 0s is greater than or equal to number of occurrences of any other digit I understand how to count this for a binary string of a fixed length using combinations, so I think the way to go with this problem is to use an exponential generating function for each digit of the set {0, 1, 2} when counting the solutions. For example, if I want to count the number of ternary strings with an even number of 0s, we can use (1 + x^2/2! + x^4/4! + x^6/6! + ...) for 0's, (1 + x + x^2/2! + x^3/3! + ...) for 1's, and (1 + x + x^2/2! + x^3/3! + ...) for the number of 2's, and then we can combine exponential generating functions in the following way: [image: exponential generating functions with an odd number of 0's]. Not sure how I would account for more 0's than any other term though. Thank you!
I don't think generating functions give the best approach. You should exploit the symmetry of the situation instead. There are $3^n$ ternary strings of length $n$. In how many of them is $0$ a winner, that is, in how many of them are there at least as many $0$'s as $1$'s or $2$'s? If we count all the winners, $\frac13$ of them will be $0$, $\frac13$ of them will be $1$, $\frac13$ of them will be $2$, by symmetry. The only problem is that some strings have $2$ or $3$ winners. Therefore, the problem reduces to counting the number of two-way ties and three-way ties. For example, let $n=3$. There are $27$ strings. In $6$ of them there is a three-way tie. There are no two-way ties. In $21$ cases there is a single winner. The total number of winners is $21+3\cdot6=39$, so $0$ comes in first (including ties) $$\frac{39}3=13$$ times. Can you do the general case? (Note that three-way ties are possible only when $n$ is divisible by $3$.)
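Brute force confirms the $n=3$ count of $13$ (a check of the argument, not part of it):

```python
from itertools import product

def count_zero_winner(n):
    # strings in which 0 occurs at least as often as 1 and at least as often as 2
    return sum(1 for s in product('012', repeat=n)
               if s.count('0') >= s.count('1') and s.count('0') >= s.count('2'))

print(count_zero_winner(3))  # 13, matching (21 + 3*6) / 3
```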
{ "language": "en", "url": "https://math.stackexchange.com/questions/3762543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Optimal Betting Strategy question I am preparing for an exam in probability theory and I bumped against a question I can't solve. Given are an integer starting capital $k$, an end goal capital $m$ and a period of $n$ days. Each day I can bet some integer amount $X$ of my choosing $(X \leq k)$ on an unfair coin landing on heads. The probability the coin lands on heads is different each day, with $p_i$ denoting the probability of it landing on heads on day $i$ with $i \in (1,...,n)$. If the bet is successful, I increase my capital by $X$, if not I lose $X$ amount. (All probabilities $p_1, p_2,..., p_n$ are known before the betting process starts). The question is: With an optimal betting strategy, what is the probability of achieving capital at least equal to $m$ after $n$ days? An example input: $n = 5, k = 2, m = 20, p_1 = 0.3, p_2 = 0.5, p_3 = 0.2, p_4 = 0.7, p_5 = 1.0$ Though not from a homework, if this question falls under the category of questions one should solve by themselves or look for help from a tutor or elsewhere, please tell me, I will take it down. Any advice as to how to approach the problem would be awesome though.
It seems to me that this question is much easier than people are making it out to be, although I might be wrong here. First: the optimal strategy will, on each betting day, be the one that increases the expected gains the most. Therefore you should bet all your capital if $p>0.5$, since then the expected return is largest. Second: the probability of having capital after $n$ days with the optimal strategy is equal to the product of the probabilities of the days you bet, hence: $$P(\text{"Return larger than m"}) = \begin{cases}\prod_{bet\in bets}p_{bet}&2^{|bets|}k>m\\0&\text{else}\end{cases}$$ For your example $n=5,k=2,m=20,p_1=0.3,p_2=0.5,p_3=0.2,p_4=0.7,p_5=1.0$, we have that the probability of having capital left is $p_4p_5=0.7$, but since $m=20$ the probability that we end up with a return greater than or equal to this value is 0. I think most other people here have attempted to solve the much harder problem of optimizing the probability of having the return be greater than or equal to $m$, but from what I understand that is not the actual assignment, so the solution I presented here should be correct. Also, this problem reminds me a lot of the St. Petersburg paradox, which I quite like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3762663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 4 }
Compute the value of $\lim \int_0^1 f_n(x) \sin (nx)dx.$ Let $\{f_n(x)\}$ be a sequence in $L^2[0,1]$ and $\lim f_n=f$ almost everywhere, where $f(x)\in L^2[0,1]$. Then find $$\lim_{n\to \infty} \int_0^1 f_n(x)\sin (nx)dx.$$ By Riemann-Lebesgue lemma, we can conclude that $$\lim \int_0^1 f(x)\sin(nx)dx=0.$$ But how to compute it for $f_n(x)$?
The limit need not exist. For example if $f_n(x)=n^{2} \chi_{(0,\frac 1n)}$ and $f=0$ then $f_n \to f$ at every point but the given integral tends to $\infty$. [$\int_0^{1}f_n(x) \sin (nx)dx=n\int_0^{1} \sin y dy=n(1-\cos 1)$ by the substitution $y=nx$].
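A numeric check of the bracketed computation (midpoint rule; my own sketch, not part of the answer) — the values $n(1-\cos 1)$ indeed grow without bound:

```python
import math

def integral(n, steps=20000):
    # midpoint rule for ∫_0^{1/n} n^2 sin(n x) dx  (f_n vanishes beyond 1/n)
    h = (1 / n) / steps
    return sum(n**2 * math.sin(n * (k + 0.5) * h) * h for k in range(steps))

for n in [1, 3, 10, 50]:
    assert abs(integral(n) - n * (1 - math.cos(1))) < 1e-6
print([round(n * (1 - math.cos(1)), 2) for n in [1, 10, 100]])  # grows like n*(1 - cos 1)
```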
{ "language": "en", "url": "https://math.stackexchange.com/questions/3763035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there an ordered field with distinct subfields isomorphic to the reals? Is there an ordered field with distinct subfields isomorphic to the field $\mathbb R$ of real numbers?
Yes. Let $K$ be any real-closed field that contains $\mathbb{R}$ as a proper subfield. In particular, then, $K$ has a nonzero infinitesimal element $\epsilon$. Let $B$ be a transcendence basis for $\mathbb{R}$ over $\mathbb{Q}$ and let $B'=\{b+\epsilon:b\in B\}$. Then $B'$ is still algebraically independent (a polynomial with coefficients in $\mathbb{Q}$ evaluated at elements of $B'$ is infinitesimally close to the evaluation at the corresponding elements of $B$), and in fact there is an isomorphism of ordered fields $\mathbb{Q}(B)\to\mathbb{Q}(B')$ mapping $b$ to $b+\epsilon$ for each $b\in B$. Now let $L$ be the algebraic closure of $\mathbb{Q}(B')$ in $K$. Since $K$ is real-closed, this means $L$ is a real closure of $\mathbb{Q}(B')$ as an ordered field. Also, $\mathbb{R}$ is a real closure of $\mathbb{Q}(B)$. Since real closures are unique up to isomorphism and $\mathbb{Q}(B)\cong\mathbb{Q}(B')$ as ordered fields, this means that $L\cong\mathbb{R}$. Thus $K$ contains two distinct subfields isomorphic to $\mathbb{R}$, namely $\mathbb{R}$ and $L$. (In fact, $K$ contains $2^{2^{\aleph_0}}$ such subfields, since you could modify $B'$ to add $\epsilon$ only to some particular subset of $B$, and there are $2^{2^{\aleph_0}}$ different such subfields. Or, you could get different such subfields by picking a different infinitesimal element to be $\epsilon$. Since there is no bound on the number of infinitesimal elements such a field $K$ can have, there is no bound on the number of subfields isomorphic to $\mathbb{R}$ that an ordered field can have.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3763151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Determining whether $\iint_{|x|+|y| \leq 1} \ln(x^{2}+y^{2}) \,dx\,dy$ is positive or negative I have an integral: $$\iint_{|x|+|y| \leq 1} \ln(x^{2}+y^{2}) \,dx\,dy$$ So basically it's: $$\int_{-1}^{0}\,dx \int_{-x-1}^{x+1} \ln(x^{2}+y^{2})\,dy + \int_{0}^{1}\,dx \int_{x-1}^{-x+1} \ln(x^{2}+y^{2})\,dy$$ But it's two huge integrals, and it takes lots of time and calculations to get an answer. So, I wonder maybe there is another easy way to find out whether the answer is positive or negative. Maybe I don't see something.
We can compute the value of this integral exactly. By rotational symmetry we have that the integral is equivalent to the integral on the square (and subsequent triangle): $$I = \iint_{\left[-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right]^2} \log\left(x^2+y^2\right)\:dA = 8\int_0^{\frac{1}{\sqrt{2}}} \int_0^x \log\left(x^2+y^2\right)\:dy\:dx$$ In polar coordinates we get the integral $$4 \int_0^{\frac{\pi}{4}}\int_0^{\frac{\sec\theta}{\sqrt{2}}}2r\log\left(r^2\right)\:dr\:d\theta = 2\int_0^{\frac{\pi}{4}}\sec^2\theta\left[\log\left(\sec^2\theta\right)-\log 2 - 1\right]\:d\theta$$ which by the substitution $x = \tan\theta$ gives $$I = 2\int_0^1\log\left(1+x^2\right)\:dx - 2 (1+\log 2)$$ Just the integral piece is taken care of by an integration by parts $$\int_0^1\log\left(1+x^2\right)\:dx = x\log\left(1+x^2\right)\Bigr|_0^1 - \int_0^1\frac{2x^2}{1+x^2}\:dx = \log 2 - 2 + \frac{\pi}{2}$$ which means our final answer is given by $$I = \pi-6$$ which is clearly negative.
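The closing steps can be checked numerically (midpoint rule; purely a sanity check of the answer's computation):

```python
import math

steps = 100000
h = 1 / steps
# J = ∫_0^1 log(1 + x^2) dx, claimed above to equal log 2 - 2 + π/2
J = sum(math.log(1 + ((k + 0.5) * h)**2) * h for k in range(steps))
assert abs(J - (math.log(2) - 2 + math.pi / 2)) < 1e-8

I = 2 * J - 2 * (1 + math.log(2))
assert abs(I - (math.pi - 6)) < 1e-7
print(round(I, 4))  # -2.8584: negative, as claimed
```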
{ "language": "en", "url": "https://math.stackexchange.com/questions/3763265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability selecting three cards out of a deck In Introduction to Probability by Blitzstein & Hwang, Chapter 2 Problem 5: Three cards are dealt from a standard, well-shuffled deck. The first two cards are flipped over, revealing the Ace of Spades as the first card and the 8 of Clubs as the second card. Given this information, find the probability that the third card is an ace in two ways: using the definition of conditional probability, and by symmetry. Solution: Let A be the event that the first card is Ace of Spades, B be the event that second card is 8 of Clubs, and C be the event that third card is an Ace. $P(C|A,B) = \dfrac{P(A,B,C)}{P(A,B)}$ Numerator: Having first as Ace of Spade, second as 8 of Clubs and third as an Ace, is similar to choosing three cards out of 52 cards without replacement. However, there are 3 ways for the third card to be an Ace since there are three Aces left, Ace of Hearts, Diamonds, and Clubs. $P(A,B,C) = 3\cdot(\dfrac{1}{52})(\dfrac{1}{51})(\dfrac{1}{50})$ Denominator: This is the same as choose two cards out of 52 without replacement. $P(A,B) = (\dfrac{1}{52})(\dfrac{1}{51})$ Therefore, $P(C|A,B) = \dfrac{P(A,B,C)}{P(A,B)} = \dfrac{3\cdot(\dfrac{1}{52})(\dfrac{1}{51})(\dfrac{1}{50})}{(\dfrac{1}{52})(\dfrac{1}{51})} = \dfrac{3}{50}$ Is this solution correct? By the way I don't get it as how to use symmetry to view this problem...
Symmetry: there are 50 cards left. Each has the same probability so the probability to get an ace is $\tfrac{3}{50}$ as there are 3 aces left.
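Exact enumeration over the 50 remaining cards reproduces $3/50$ (the rank/suit encoding below is an arbitrary choice for illustration):

```python
from fractions import Fraction
from itertools import product

deck = list(product(range(13), range(4)))   # (rank, suit); rank 0 = Ace, rank 7 = 8
revealed = [(0, 0), (7, 3)]                 # Ace of Spades, 8 of Clubs
remaining = [c for c in deck if c not in revealed]
p = Fraction(sum(1 for rank, _ in remaining if rank == 0), len(remaining))
print(p)  # 3/50
```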
{ "language": "en", "url": "https://math.stackexchange.com/questions/3763502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convexifying Optimization Problem Let $\mathbf{V} \in \mathbb{R}_{+}^{n \times m}$ and $\mathbf{E} \in \mathbb{R}_{+}^{n \times m}$. I am trying to convexify the following program which solves for $\mathbf{X} \in \mathbb{R}^{n \times m}$: \begin{align} &\max &\sum_{i = 1}^n \log \left(\sum_{j = 1}^n V_{ij}\left( X_{ij} - E_{ij} \right)\right) - \sum_{i = 1}^n \log\left(\sum_{j = 1}^m X_{ij} - E_{ij}\right)\\ &\forall j \in \{1, \dots, m\} & \sum_{i = 1}^n X_{ij} \leq 1\\ &\forall i \in \{1, \dots, n\}, j \in \{1, \dots, m\} & X_{ij} \geq 0 \end{align} I tried putting $\sum_{i = 1}^n \log(\sum_{j = 1}^m X_{ij} - E_{ij})$ as a constraint using a dummy variable but I do not think I am doing it correctly since my solver tells me that the program is not convex. How can I convexify this problem?
This can be convexified by Difference of Convex (DC) Programming. See "DC Programming: The Optimization Method You Never Knew You Had To Know" and "Variations and extension of the convex–concave procedure" by Thomas Lipp and Stephen Boyd. A modeling system such as CVXPY, which supports Disciplined Convex-Concave Programming (DCCP), makes it easy to enter this problem and lets the modeling system do the dirty work for you, while reducing the propensity for human error.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3763807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which irrationals become rational for some positive integer power? Related to Irrationals becoming rationals after being raised to some power. Let $r \in \mathbb{R} \setminus \mathbb{Q}$. True or false: there exists an $n \in \mathbb{N}$ (positive integers) such that $r^n = r \cdot \dots \cdot r \in \mathbb{Q}$. This is clearly true for some irrationals like $\sqrt{2}$ or $a^{1/n}$ (positive integer $a$ such that $a^{1/n} \notin \mathbb{Z}$; see How to prove: if $a,b \in \mathbb N$, then $a^{1/b}$ is an integer or an irrational number?). But is it true for ALL irrationals? If not, can we classify all the irrationals for which the statement is true?
You have to consider the transcendental numbers, like $e$, $\pi$, $\sin(a)$, $\sinh(a)$ etc., which make your statement false! One thing that you have to remember: every transcendental number is irrational, but the converse is not always true, i.e., there are irrationals which are not transcendental, like the numbers you have given; these are algebraic numbers, which are suitably described in the first answer! So, for your question: only algebraic irrational numbers can be transformed into rational numbers by repeated multiplication as many times as you need; transcendental irrational numbers can't be! Also, the cardinality of the set of transcendental irrational numbers is uncountable, while on the other hand, the cardinality of the set of algebraic irrational numbers is countable!
{ "language": "en", "url": "https://math.stackexchange.com/questions/3763999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How do I solve polynomial rational relations for $y$ (e.g. $\sqrt{4-3y-y^2} = x(y+4)$)? From time to time, I struggle to solve polynomial relations for $y$. A trivial example is: $$ \frac{y}{x} = x \iff y = x^2$$ Easy. But consider this relation: $$ \sqrt{4-3y-y^2} = x(y+4)$$ No matter how much I mess around with it, it seems impossible to bring it into $y = f(x)$ form. * *$ \frac{\sqrt{4-3y-y^2}}{(y+4)}= x $ *$ 4 - 3y - y^2 = x^2(y^2 + 8y + 16) \iff (x^2+1)y^2+(8x^2+3)y+4(4x^2-1) = 0$ Is there a trivial methodology that I am missing or is it indeed impossible to invert some relations?
Hint: $$4-3y-y^2=\dfrac{25-(2y+3)^2}4$$ needs to be a perfect square of a rational number
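The hint works out nicely: squaring $\sqrt{4-3y-y^2}=x(y+4)$ and collecting terms gives $(x^2+1)y^2+(8x^2+3)y+(16x^2-4)=0$, and its discriminant $(8x^2+3)^2-4(x^2+1)(16x^2-4)$ simplifies to $25$ for every $x$ — a perfect square, as required. That yields $y=\frac{1-4x^2}{1+x^2}$ (the other root is the degenerate $y=-4$). A quick numerical check (my own sketch, valid for $x\ge 0$ where both sides are nonnegative):

```python
import math

def y_of_x(x):
    # root (-b + 5) / (2a) of (x^2+1) y^2 + (8x^2+3) y + (16x^2-4) = 0,
    # using that the discriminant is identically 25
    return (1 - 4 * x**2) / (1 + x**2)

for x in [0.0, 0.25, 0.5, 1.0, 2.0]:
    y = y_of_x(x)
    assert abs(math.sqrt(4 - 3 * y - y * y) - x * (y + 4)) < 1e-9
print("ok")
```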
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Are open sets locally simplicial? So I read parts of Rockafellar's "Convex Analysis". When introducing "locally simplicial" sets, all the examples stated are convex sets, yet he mentions that they do not need to be convex. I wonder whether all open sets in $\mathbb{R}^n$ are locally simplicial? I think such a fact would have been included in the reference if it were true. My reasoning goes as follows. A subset $S \subset \mathbb{R}^n$ is locally simplicial, if for all $x\in S$ there is a finite collection of simplices $\{S_1, \dots, S_m\}$ such that for some neighborhood $U$ of $x$ it holds $$ U \cap S = U \cap (S_1 \cup \dots \cup S_m).$$ If I assume $S$ to be open, then for every $x$ there is a ball with radius $\epsilon$ around $x$ that is still in $S$. If I place a single simplex $S_1$ that contains $x$ in its interior into that ball and take $U=S_1 \setminus \partial S_1$, I have that $U$ is a neighborhood of $x$ and it holds $$U=U\cap S = U \cap S_1.$$
This community wiki solution is intended to clear the question from the unanswered queue. Yes, your proof is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find volume of the solid generated by revolving Find the volume of the solid generated by revolving the region bounded by the parabola $x=y^2+1$, $y=0$, and the line $x=3$ about the line $x=3$. Using the disk method, I found the answer to be 9.4.
$$V=\pi \int_{y_0}^{y_1} (R(y))^2dy$$ $$=\pi \int_{0}^{\sqrt{2}} (3-(y^2 +1))^2dy$$ $$=\frac{32\sqrt{2}\pi}{15}\approx 9.478$$ You're correct.
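A quick midpoint-rule evaluation of the same disk integral (numerical sanity check only):

```python
import math

steps = 100000
h = math.sqrt(2) / steps
# V = π ∫_0^√2 (3 - (y^2 + 1))^2 dy
V = math.pi * sum((3 - (((k + 0.5) * h)**2 + 1))**2 * h for k in range(steps))
assert abs(V - 32 * math.sqrt(2) * math.pi / 15) < 1e-6
print(round(V, 3))  # 9.478
```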
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How does one show that the residue of the first derivative of a holomorphic function is zero in its singularity? I've got a holomorphic function $f:\mathbb{C}\backslash\{0\}\rightarrow\mathbb{C}$ and I want to show that $$res_{0}f'=0$$
Applying the definition of the residue as the $-1$'th Laurent series coefficient (here centered at the singularity $c=0$): $$\operatorname{Res}(f';0)=a_{-1}=\frac{1}{2\pi i} \oint_{\gamma}\,\frac{f'(z)}{(z-c)^{-1+1}}\,dz = \frac{1}{2\pi i} \oint_{\gamma}\,f'(z)\,dz .$$ And $f'$ has a primitive (namely $f$ itself) along all of $\gamma$, so this integral is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to differentiate $g(X)=\operatorname{tr}\left(X^{-1}\right)$? Let $X$ be a square invertible $n \times n$ matrix. Calculate the derivative of the following function with respect to $X$: $$ g(X)=\operatorname{tr}\left(X^{-1}\right) $$ I'm stumped by this: when I work through it I use these two identities, * *$$\frac{\partial}{\partial \boldsymbol{X}} \boldsymbol{f}(\boldsymbol{X})^{-1}=-\boldsymbol{f}(\boldsymbol{X})^{-1} \frac{\partial \boldsymbol{f}(\boldsymbol{X})}{\partial \boldsymbol{X}} \boldsymbol{f}(\boldsymbol{X})^{-1}$$ and 2. $$ \frac{\partial}{\partial \boldsymbol{X}} \operatorname{tr}(\boldsymbol{f}(\boldsymbol{X}))=\operatorname{tr}\left(\frac{\partial \boldsymbol{f}(\boldsymbol{X})}{\partial \boldsymbol{X}}\right) $$ and I should arrive at the solution. Using 1. I get $$d/dX(X^{-1}) = -X^{-1}\otimes X^{-1}.$$ So the answer should be the trace of that, right? Which gives $$tr(-X^{-1})tr(X^{-1}).$$ But the solution seems to be $$-X^{-2T}$$, which I can't see.
The problem is with this equation $$\frac{\partial}{\partial \boldsymbol{X}} \operatorname{tr}(\boldsymbol{f}(\boldsymbol{X}))=\operatorname{tr}\left(\frac{\partial \boldsymbol{f}(\boldsymbol{X})}{\partial \boldsymbol{X}}\right)$$ Note that on the LHS you are taking the derivative of a function $\mathbb R^{n\times n} \to \mathbb R$, whereas on the RHS you are trying to take the trace of the derivative of a function $f\colon\mathbb R^{n\times n}\to\mathbb R^{n\times n}$. As you already figured out, this derivative can be expressed by a 4-th order tensor $-(X^{-1} \otimes X^{-1})$. Obviously, the result cannot be $-\operatorname{tr}(X^{-1})\operatorname{tr}(X^{-1})$, as this is a scalar, but the result needs to be a second order tensor.
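A finite-difference check of the stated result $-\left(X^{-2}\right)^{T}$ on a small example (a pure-Python $2\times2$ sketch; the test matrix and tolerances are arbitrary choices of mine):

```python
def inv2(M):
    # explicit inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def tr_inv(M):
    N = inv2(M)
    return N[0][0] + N[1][1]  # g(M) = tr(M^{-1})

X = [[3.0, 1.0], [0.5, 2.0]]               # an arbitrary well-conditioned test matrix
Xinv2 = matmul(inv2(X), inv2(X))           # X^{-2}
grad = [[-Xinv2[j][i] for j in range(2)] for i in range(2)]  # claimed gradient -(X^{-2})^T

eps = 1e-6
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]; Xp[i][j] += eps
        Xm = [row[:] for row in X]; Xm[i][j] -= eps
        numeric = (tr_inv(Xp) - tr_inv(Xm)) / (2 * eps)  # central difference in X_ij
        assert abs(numeric - grad[i][j]) < 1e-8
print("ok")
```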
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }