Estimate total distance using Riemann sum based on data table We did not spend a lot of time in class on Riemann sums, so I am confused by this question. Speedometer readings for a motorcycle at $12$-second intervals are given in the table below. $$ \begin{array}{c|c|c|c|c|c|c} t\ (\text{sec}) & 0 & 12 & 24 & 36 & 48 & 60 \\ \hline v(t)\ (\text{ft/sec}) & 23 & 22 & 18 & 17 & 20 & 23\\ \end{array} $$ Estimate the total distance traveled during the time interval $[0,60]$ using a Riemann sum based on the table data. I hope someone can help. Thanks.
The definition of a Riemann sum is as follows: Let $f$ be a function, $\Pi=\{x_0,\dots,x_n\}$ a partition, and $S=\{c_1,\dots,c_n\}$ a set of values such that $c_i\in[x_{i-1},x_i]$. The Riemann sum is $$R(\Pi,S)=\sum_{i=1}^nf(c_i)(x_i-x_{i-1}).$$ You have a discrete function $f=v$, and you have a partition in the first row of the table. Can you find the Riemann sum now?
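For concreteness, here is a quick sketch (mine, not the answerer's) that evaluates the left- and right-endpoint Riemann sums from the table; with this particular data both happen to give the same estimate.

```python
# Left- and right-endpoint Riemann sums for the speedometer table.
t = [0, 12, 24, 36, 48, 60]      # seconds
v = [23, 22, 18, 17, 20, 23]     # ft/sec

dt = t[1] - t[0]                 # uniform 12-second subintervals
left = sum(v[i] * dt for i in range(len(t) - 1))    # c_i = left endpoint
right = sum(v[i] * dt for i in range(1, len(t)))    # c_i = right endpoint

print(left, right)  # both give 1200 ft here, since v(0) == v(60)
```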
{ "language": "en", "url": "https://math.stackexchange.com/questions/1066355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate: $ \lim_{x \to 0 } x \cdot \sin(\frac{1}{x}) $ Evaluate the limit: $$ \lim_{x \to 0 } x \cdot \sin\left(\frac{1}{x}\right) $$ So far I did: $$ \lim_{x \to 0 } x\frac{\sin\left(\frac{1}{x}\right)}{\frac{1}{x}\cdot x} $$ $$ = \lim_{x \to 0 } 1 \cdot \frac{x}{x} $$ $$ = 1 $$ Now of course I've looked around and I know I'm wrong, but I couldn't understand why. Can someone please show me how to evaluate this limit correctly? And tell me what I was doing wrong.
Your proof is incorrect because you used an invalid transformation, but that has already been pointed out. I'll describe a way to solve it. $$\lim_{x \to 0}\frac{\sin(\frac{1}{x})}{\frac{1}{x}} \neq 1$$ Hint: The solution is a well-known trick. Note that $(\forall x \in \mathbb{R})\left(\sin(x) \in[-1,1]\right)$ (obvious) and use the squeeze theorem. Note the simple implication $$ \left(\forall h \in \mathbb{R}\right) \left(\sin h \in [-1,1]\right) \Longrightarrow (\forall x,h \in \mathbb{R})(|x \cdot \sin h| \leq |x|).$$ So the inequality $|x \cdot \sin \frac{1}{x}| \leq |x|$ holds, and therefore (since the absolute value is always non-negative) the squeeze theorem gives the limit: $$\left(0 \leq\left | \lim_{x \to 0} x\cdot \sin \frac{1}{x} \right | \leq \lim_{x \to 0} \left| x \right| = 0 \right)\Longrightarrow \lim_{x \to 0}x \cdot \sin\frac{1}{x} = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1066434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are finitely presentable modules closed under extensions? If $0 \to A \to B \to C \to 0$ is an exact sequence of modules, and $A$ and $C$ are finitely presentable, then is $B$ finitely presentable? The answer is "yes" if we replace modules with groups, as shown here. The answer is also "yes" if we replace "finitely presentable" by "finitely generated": take a set-theoretic splitting of $B \to C$ to view $B$ as $A \times C$ with the addition given by twisting the addition on $A \oplus C$ by a cocycle: then generators for $B$ are given by pairs of generators for $A$ and generators for $C$. When we're working over a Noetherian ring, this means the same goes for finite presentability. How about in non-Noetherian situations? I'm particularly interested in non-commutative rings, especially group rings. The obvious way to get finitely many relations would be to take the relations for $A$, the relations for $C$, and add each relation of the form $(a_i,b_j) + (a_{i'},b_{j'}) = (a_i + a_{i'}, b_j + b_{j'} + \omega(a_i,a_{i'}))$ (where the $a_i$'s and $b_j$'s are our generators and $\omega$ is our cocycle). But this doesn't obviously work because $\omega$ is typically not bilinear. My intuition is that because the obvious approach doesn't work, the answer should be "no". But knowing that the answer is "yes" for groups makes me less confident.
You can proceed as usual by starting with $F\stackrel{f}\to A\to 0$ and $H\stackrel{h}\to C\to 0$, where $F$ and $H$ are free of finite rank. Then show that there is an exact sequence $G=F\oplus H\stackrel{g}\to B\to 0$. Now consider $F'=\ker f$ and so on. You have a short exact sequence $0\to F'\to G'\to H'\to 0$. Now use the result for finitely generated modules.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1066604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Question about sines of angles in an acute triangle Let $\triangle ABC$ be a triangle such that each angle is less than $ 90^\circ $. I want to prove that $\sin A + \sin B + \sin C > 2$. Here is what I have done: Since $A+B+C=180^{\circ}$ and $0 < A,B,C < 90^\circ$, at least two of $A,B,C$ are in the range $45^\circ < x < 90^\circ$; without loss of generality, let these angles be $A$ and $B$. $\sin A + \sin B + \sin C = \sin A + \sin B + \sin(180^\circ-A-B) = \sin A + \sin B + \sin(A+B)$ Since $45^\circ < A,B < 90^\circ$, it follows that $\sqrt{2} < \sin A + \sin B < 2.$ Am I near the answer?
I am able to simplify $$ \sin A + \sin B + \sin (A + B) = \sin A + \sin B + \sin A \cos B + \sin B \cos A \\ = (1+\cos B)\sin A + (1 + \cos A)\sin B \\ = 4\cos^2 \tfrac B2\sin \tfrac A2 \cos \tfrac A2 + 4\cos^2 \tfrac A2 \sin \tfrac B2 \cos \tfrac B2 \\ = 4\cos \tfrac B2 \cos \tfrac A2\left(\sin \tfrac A2 \cos \tfrac B2 + \sin \tfrac B2 \cos \tfrac A2\right) \\ = 4\cos \tfrac B2 \cos \tfrac A2\sin \left(\tfrac A2 + \tfrac B2\right) = 4\cos \tfrac A2 \cos \tfrac B2 \cos \tfrac C2 $$ Now, using the fact that $A/2 < 45^\circ$, $B/2 < 45^\circ$ and $C/2 < 45^\circ$, I can only conclude $$\sin A + \sin B + \sin C > \sqrt 2.$$ I am going to try to improve the bound. Introduce $0 <\alpha, \beta < 45^\circ$ so that $A/2 = 45^\circ - \alpha$, $B/2 = 45^\circ - \beta$, $C/2 = \alpha + \beta$. In terms of these new variables, $$ 4\cos \tfrac A2 \cos \tfrac B2 \cos \tfrac C2 = 2(\cos \alpha + \sin \alpha)(\cos \beta + \sin \beta)\cos(\alpha + \beta) \\ = 2\left(\cos(\alpha - \beta) + \sin(\alpha + \beta)\right)\cos(\alpha + \beta) \\ = \cos 2\alpha + \cos 2\beta + \sin(2\alpha + 2\beta) \\ \ge 1 + \cos(2\alpha + 2\beta) + \sin(2\alpha + 2\beta) = 1 + \sqrt 2 \sin(2\alpha + 2\beta + 45^\circ),$$ where the inequality uses $\cos 2\alpha + \cos 2\beta = 2\cos(\alpha+\beta)\cos(\alpha-\beta) \ge 2\cos^2(\alpha+\beta) = 1 + \cos(2\alpha+2\beta)$. Since $0 < 2\alpha + 2 \beta < 90^\circ$, we get the desired bound $$\sin A + \sin B + \sin C \ge 1 + \sqrt 2\sin(2\alpha + 2 \beta + 45^\circ) > 2. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1066712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
The Centralizer $C_H(x)$ where $x \in G$ and $H \leq G$. Let $G$ be a group and $H$ be a subgroup of $G$. Let $x \in G$. Then $C_H(x)=H$ if and only if $x \in Z(H)$? It is obvious that if $x \in Z(H)$ then $C_H(x) = H$. But I could not prove or provide a counter-example to the other statement. Remark: $C_H(x) = \{h\in H | \ [h,x]=1\}$
Consider the example of $G=\mathbb{Z}_2\times \mathbb{Z}_2$, $H=\mathbb{Z}_2\times \{0\}$. Then $(0,1)$ satisfies $C_H((0,1))=H$, however $(0,1)\notin Z(H)$ because $(0,1)\notin H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1066820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Preserve self-adjoint properties I was thinking about this problem recently: Let $T$ be a self-adjoint operator on $L^2((-1,1),dx)$. Now define an operator $G$ by $G(f) := T\left(\frac{f}{1-x^2}\right)$ with $\operatorname{dom}(G):=\{f \in L^2(-1,1) : \frac{f}{1-x^2} \in \operatorname{dom}(T)\}$. Is this operator also self-adjoint? Of course this question could easily be generalized, but I wanted to ask about this example first.
Essentially, you need to check that for all $f,g\in \operatorname{dom}(G)\cap \operatorname{dom}(T)$ you have $$(f,T[g]/\phi) = (f,T[g/\phi]),$$ where $(\cdot,\cdot)$ is the scalar product in $L^2$ and $\phi$ is the function $1-x^2$. I don't quite see how it could be possible for generic $T$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1066918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving the limit at $\infty$ of the derivative $f'$ is $0$ if it and the limit of the function $f$ exist. Suppose that $f$ is differentiable for all $x$, and that $\lim_{x\to \infty} f(x)$ exists. Prove that if $\lim_{x\to \infty} f′(x)$ exists, then $\lim_{x\to \infty} f′(x) = 0$, and also, give an example where $\lim_{x\to \infty} f′(x)$ does not exist. I'm at a loss as to how to prove the first part, but for the second part, would a function such as $\sin(x)$ satisfy the problem?
If $\lim_{x\rightarrow\infty}f'(x)=c$ were some positive number, that would imply, for some $0<k<c$ and all large enough $x$ that $f'(x)>k$. Think about what this means intuitively and why this is inconsistent with $f$ converging.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Partial Converse of Holder's Theorem Holder's Theorem is the following: Let $E\subset \mathbb{R}$ be a measurable set. Suppose $p\ge 1$ and let $q$ be the Holder conjugate of $p$ - that is, $q=\frac{p}{p-1}.$ If $f\in L^p(E)$ and $g\in L^q(E),$ then $$\int_E \vert fg\vert\le \Vert f\Vert _p\cdot\Vert g \Vert_q$$ I am trying to prove the following partial converse is true. Suppose $g$ is integrable, $p>1$ and $$\int_E \vert fg\vert\le M\Vert f\Vert_p$$ when $f\in L^p(E)$ is bounded, for some $M\ge0$. I am trying to prove that this implies that $g\in L^q(E)$, where $q$ is the conjugate of $p$. I am interested in knowing if my proof is correct or if it has any potential. Attempt: Assuming this is true, I want to show $\int_E \vert g\vert^q<\infty$. Note that $g^{q-1}\in L^p(E)$ since $\int_E \vert g\vert^{pq-p}=\int_E \vert g \vert ^q<\infty$ (I think this is true since $g$ is integrable - I know it holds when $q$ is a natural number). Take $f=g^{q-1}$ and the hypothesis tells us that $$\int_E \vert g \vert^q\le M\Vert g^{q-1}\Vert_p\\<\infty \,\,(\text{since }g^{q-1}\in L^p)$$so $g$ is integrable.
An alternative proof. Let $T_g(f):=\int_E f g\,,\, \forall f\in L^p.$ It's clear that $T_g$ is linear. By the condition $\int_E \vert fg\vert\le M\Vert f\Vert_p$, we see that $T_g$ is bounded, so $T_g\in (L^p)^*.$ By the Riesz Representation Theorem for the dual of $L^p$ (cf. p. 160, Real Analysis, 4th edition, Royden), there exists a unique function $\tilde{g} \in L^q$ s.t. $T_g(f)=\int_E f \tilde{g}\,,\, \forall f\in L^p.$ By uniqueness, we conclude that $g=\tilde{g}\in L^q.$ $\tag*{$\blacksquare$}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Explanation for the definition of monomials as products of products I'm attempting to learn abstract algebra, so I've been reading these notes by John Perry. Monomials are defined (p. 23) as $$ \mathbb{M} = \{x^a : a \in \mathbb{N}\} \hspace{10mm} \text{or} \hspace{10mm} \mathbb{M}_n = \left\{\prod_{i=1}^m{\left( x_1^{a_1}x_2^{a_2} \dotsm x_n^{a_n} \right)} : m,a_1,a_2,\dotsc,a_n \in \mathbb{N}\ \right\} $$ which makes sense. I understand the non-commutativity of matrix multiplication, but I don't understand this statement (p. 24): So multiplication of monomials should not in general be considered commutative. This is, in fact, why we defined $\mathbb{M}_{n}$ as a product of products, rather than combining the factors into one product in the form $x_1^{a_1}x_2^{a_2} \dotsm x_n^{a_n}$. I'm missing the connection between the non-commutativity of multiplication and the construction of the monomial definition.
In the commutative case, $x_1x_2x_1$ is equal to $x_1^2x_2$. However, in the non-commutative case, they need not be equal. Suppose now that we're in a situation where $x_1x_2x_1 \neq x_1^2x_2$. If the set of monomials had not been defined as products of products, then $x_1x_2x_1$ would not be considered a monomial as it is not of the form $x_1^{a_1}x_2^{a_2}\dots x_n^{a_n}$.
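As an aside (my own illustration, not part of the original answer), sympy's non-commutative symbols make the distinction concrete:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', commutative=False)
print(sp.simplify(x1*x2*x1 - x1**2*x2) == 0)  # False: the factors cannot be merged

y1, y2 = sp.symbols('y1 y2')                  # commutative by default
print(sp.simplify(y1*y2*y1 - y1**2*y2) == 0)  # True
```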
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating the mean and variance of a distribution 1) Suppose $$P(x) = \frac{1}{\sqrt{2\pi\cdot 36}}e^{-\frac{1}{2}\cdot (\frac{x-2}{6})^2}$$ What is the mean of $X$? What is the standard deviation of $X$? 2) Suppose $X$ has mean $4$ and variance $4$. Let $Y = 2X+7$. What is the mean of $Y$? What is the standard deviation of $Y$? Are there resources that can help me answer these questions? I have the answers, but I honestly have no idea how to solve them. Typically when I've seen mean and standard deviation, it's been in the context of finding them for a specific data set.
1) Since $X \sim N(\mu, \sigma^2)$, you have that: $$\mu=2 \qquad \sigma^2=6^2$$ so the mean is $2$ and the standard deviation is $6$. 2) $$E[Y]=E[2X+7]=E[2X]+E[7]=2E[X]+E[7]$$ Substituting $E[X]=\mu=4$ and $E[7]=7$, we get: $$E[Y]=2 \cdot 4+7=15$$ As for the variance: $$Var[Y]=Var[2X+7]=Var[2X]+Var[7]=2^2\cdot Var[X]+Var[7]$$ Substituting $Var[X]=\sigma^2=4$, and since the variance of a constant is always zero, we get that: $$Var[Y]=4\cdot4=16$$ so the standard deviation of $Y$ is $\sqrt{16}=4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is this a legitimate proof? If not, how to prove? Question: Determine all natural numbers $n$ such that $7 \mid \left(3^n - 2\right)$, i.e. $3^{n}\equiv 2\pmod{7}$. Multiply both sides by $7$: $7 \cdot 3^{n}\equiv 7\cdot2\pmod{7}$. Divide both sides by seven; since $\gcd(7,7) = 7$, we have to divide the modulus by $7$: $\implies3^{n}\equiv 2\pmod{7/7}$ $\implies3^{n}\equiv 2\pmod{1}$ Therefore $n$ is any natural number, since one divides everything. But I made a mistake somewhere, since the original equation doesn't work for $n = 1$.
Noting that $n=1$ does not work, let $n \ge 2$. Then as $3^2 \equiv 2 \pmod 7$, we have the equivalent statement $$2\cdot 3^{n-2} \equiv 2 \pmod 7 \iff3^{n-2}\equiv 1 \pmod 7$$ Now that has solutions $n = 6k+2$, as $3^6$ is the smallest positive power of $3$ that is $\equiv 1 \pmod 7$, so the solutions are the natural numbers with $n \equiv 2 \pmod 6$.
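A quick brute-force check of this conclusion (my own sketch, not part of the original answer):

```python
# Verify: 3^n ≡ 2 (mod 7) exactly when n ≡ 2 (mod 6).
solutions = [n for n in range(1, 100) if pow(3, n, 7) == 2]
assert solutions == [n for n in range(1, 100) if n % 6 == 2]
print(solutions[:5])  # [2, 8, 14, 20, 26]
```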
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute almost sure limit of martingale? Let $Y$, $Y_1$, $Y_2$, $\dots$, be nonnegative i.i.d random variables with mean $1$. Let $$X_n = \prod_{1\le m \le n}Y_m$$ If $P(Y = 1) < 1$, prove that $\lim\limits_{n\to\infty}X_n = 0$ almost surely. I feel like this question has something to do with the idea that $(X_n)$ is a martingale (which I can prove easily) but I am not sure if I am overthinking it or not. I was trying to use Doob's upcrossing inequalities in a clever way but there might be an easier approach to the problem.
$\dfrac{\log X_n}{n} = \dfrac{\sum_{m \le n} \log Y_m}{n} \to E\log Y_1$ almost surely by the strong law of large numbers. And by Jensen's inequality, $E\log Y_1 \leq \log EY_1 =0$ since $EY_1 = 1$. Since $P(Y = 1) < 1$, the inequality is strict: $E\log Y_1 < \log EY_1 =0$. So we get that $\dfrac{\log X_n}{n}$ converges to a strictly negative number almost surely, thus $\log X_n \to -\infty$, i.e. $X_n \to 0$ almost surely.
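For intuition, here is a small simulation of this argument (my own sketch; I take $Y$ lognormal, one distribution with $EY=1$ and $P(Y=1)<1$):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
# Lognormal Y with E[Y] = exp(mu + sigma^2/2) = 1, hence mu = -sigma^2/2.
Y = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(5, 2000))
X = Y.cumprod(axis=1)            # the martingale X_n = Y_1 * ... * Y_n
print(X[:, -1])                  # every path has collapsed to ~0
print(np.log(X[:, -1]) / 2000)   # ~ E[log Y] = -sigma^2/2 = -0.5
```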
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Rules for whether an $n$ degree polynomial is an $n$ degree power Given an $n$th-degree equation in 2 variables ($n$ is a natural number) $$a_0x^n+a_1x^{n-1}+a_2x^{n-2}+\cdots+a_{n-1}x+a_n=y^n$$ If all the values of $a$ are given rational numbers, are there any known necessary or sufficient conditions for $x$ and $y$ to have (1) real, (2) rational, or (3) integer solutions, and how many of them would exist? If this is not known/possible (or too hard) for an $n$th-degree polynomial, do such conditions exist for quadratic ($n=2$) and cubic ($n=3$) polynomials?
This addresses user45195's question and is too long for a comment. When I said too broad, it was because the question originally didn't limit the field of $x$. A familiar field is the complex numbers $\mathbb{C}=\{a+bi : a,b\in\mathbb{R}\}$, a special case of which is the reals $\mathbb{R}$, and, even more limited, the rationals $\mathbb{Q}$. If $x$ were in $\mathbb{C}$, then it's just an old result (the Fundamental Theorem of Algebra) that for any $y$ one can always find $n$ roots $x$ that solve the equation in your post, and there's nothing new to be said. However, if $x,y$ are limited to the rationals $\mathbb{Q}$, that's where it gets interesting. The equation $$f(x) = y^2\tag1$$ where the degree $d$ of $f(x)$ is $d = 2,3,4,5$ has been extensively studied. See algebraic curve, including Pell equations ($d=2$), elliptic curves ($d=3,4$), and hyperelliptic curves ($d=5$). For $$f(x) = y^3\tag2$$ the case $d=3$ still has special cases as elliptic curves. For $d=4$, see trigonal curve. However, the higher $$f(x) = y^m\tag3$$ where $d,m>3$, is more complicated. See superelliptic curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Is it possible to find the $n$th digit of $\pi$ (in base $10$)? Is it possible that there exists some function $f:\mathbb N_1\to \{0,1,2,3,4,5,6,7,8,9\}$, where $$f(1)=\color{red}1, f(2)=\color{red}4, f(3)=\color{red}1, f(4)=\color{red}5, f(5)=\color{red}9, f(6)=\color{red}2, \ldots \quad ?$$ I know that such a function, if it exists, cannot be a polynomial, since all polynomials are finite-dimensional, whereas the sequence containing the digits of $\pi$ has infinitely many elements, but could it be that there is some function (even if we cannot explicitly state it) which gives the $n$th term of the sequence containing the digits of $\pi$? If this is true, and such a function does exist, can the same be said for all transcendental numbers? Edit: to be absolutely clear, I am looking for the explicit function $f$ such that $f(n)$ is the $n$th digit of $\pi$ (in base $10$).
What exactly does "explicit" mean? For a given $n$, there is certainly an algorithm that computes the $n$th digit of $\pi$. But the answer is no if you ask about some other real numbers. There are non-computable numbers that encode the halting problem, for example. There is no chance of having an algorithm computing the digits of such numbers in general -- and hence no reasonable "formula" composed of common basic operations for those numbers.
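To make the "there is certainly an algorithm" claim concrete, here is a hedged sketch using the arbitrary-precision library mpmath (my choice of tool; the helper name is hypothetical):

```python
from mpmath import mp, nstr

def nth_digit_of_pi(n):
    """Return the n-th decimal digit of pi (n = 1 gives 1, n = 2 gives 4, ...)."""
    mp.dps = n + 10  # working precision with a safety margin
    digits = nstr(mp.pi, n + 5, strip_zeros=False).replace("3.", "")
    return int(digits[n - 1])

print([nth_digit_of_pi(n) for n in range(1, 7)])  # [1, 4, 1, 5, 9, 2]
```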
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many positive integers of n digits chosen from the set {2,3,7,9} are divisible by 3? I'm preparing myself for math competitions. And I am trying to solve this problem from the Romanian Mathematical Regional Contest "Traian Lalescu", $2003$: Problem $\mathbf{7}$: How many positive integers of $n$ digits chosen from the set $\{2,3,7,9\}$ are divisible by $3$? Solution. Let $x_n,y_n,z_n$ be the number of all positive integers of $n$ digits $2,3,7$ or $9$ which are congruent to $0,1$ and $2$ modulo $3$. We have to find $x_n$. Consider $\varepsilon=\cos\dfrac{2\pi}3+i\sin\dfrac{2\pi}3$. It is clear that $x_n+y_n+z_n=4^n$ and $$x_n+\varepsilon y_n+\varepsilon^2z_n=\sum_{j_1+j_2+j_3+j_4=n}\varepsilon^{2j_1+3j_2+7j_3+9j_4}=(\varepsilon^2+\varepsilon^3+\varepsilon^7+\varepsilon^9)^n\;.$$ It follows that $x_n-1+\varepsilon y_n+\varepsilon^2z_n=0$. Applying Proposition $4$ in Subsection $2.2.2$ we obtain $x_n-1=y_n=z_n=k$. Then $3k=x_n+y_n+z_n-1=4^n-1$, and we find $k=\dfrac13(4^n-1)$. Finally, $x_n=k+1=\dfrac13(4^n+2)$. Please help me with the solution, I don't understand it well enough, especially the displayed line. Are there any other solutions for this problem?
Here is an alternative approach. Let $x_n,y_n$, and $z_n$ be as in the argument given in the question; clearly $x_1=2$, and $y_1=z_1=1$. For $n\ge 1$ let $X_n,Y_n$, and $Z_n$ be the sets of $n$-digit numbers using only the digits $2,3,7$, and $9$ and congruent modulo $3$ to $0,1$, and $2$, respectively (so that $x_n=|X_n|$, $y_n=|Y_n|$, and $z_n=|Z_n|$). Finally, recall that an integer is congruent modulo $3$ to the sum of its digits. Now let $n>1$, and let $k\in X_n\cup Y_n\cup Z_n$. Let $\ell$ be the $(n-1)$-digit number obtained by removing the last digit of $k$, and let $d$ be the last digit of $k$. If $d=3$ or $d=9$, then $k\equiv\ell\pmod3$; if $d=7$, then $k\equiv\ell+1\pmod3$; and if $d=2$, then $k\equiv\ell+2\pmod3$. Thus, $k\in X_n$ iff either $d\in\{3,9\}$ and $\ell\in X_{n-1}$, or $d=2$ and $\ell\in Y_{n-1}$, or $d=7$ and $\ell\in Z_{n-1}$. These three cases are exhaustive and mutually exclusive, so $$x_n=|X_n|=2|X_{n-1}|+|Y_{n-1}|+|Z_{n-1}|=2x_{n-1}+y_{n-1}+z_{n-1}\;.$$ Similar reasoning shows that $$y_n=2y_{n-1}+x_{n-1}+z_{n-1}$$ and $$z_n=2z_{n-1}+x_{n-1}+y_{n-1}\;.$$ But clearly $x_{n-1}+y_{n-1}+z_{n-1}=4^{n-1}$, so these recurrences reduce to $$\left\{\begin{align*} x_n&=x_{n-1}+4^{n-1}\\ y_n&=y_{n-1}+4^{n-1}\\ z_n&=z_{n-1}+4^{n-1}\;. \end{align*}\right.$$ If we set $x_0=1$, the first of these recurrences holds for $n=1$ as well as for $n>1$, and we have $$x_n=1+\sum_{k=0}^{n-1}4^k\;.$$ If this isn’t immediately clear, note that $$\begin{align*} x_n&=4^{n-1}+x_{n-1}\\ &=4^{n-1}+4^{n-2}+x_{n-2}\\ &\;\vdots\\ &=4^{n-1}+4^{n-2}+\ldots+4^0+x_0\\ &=1+\sum_{k=0}^{n-1}4^k\;, \end{align*}$$ and the formula can be proved rigorously by induction on $n$. Finally, this is just a geometric series, so $$x_n=1+\frac{4^n-1}{4-1}=1+\frac13(4^n-1)=\frac13(4^n+2)\;.$$
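A brute-force check of the closed form for small $n$ (my own verification sketch, not part of the original answer):

```python
from itertools import product

def count_divisible(n):
    """Count n-digit strings over {2,3,7,9} whose digit sum is divisible by 3."""
    return sum(1 for digits in product((2, 3, 7, 9), repeat=n)
               if sum(digits) % 3 == 0)

for n in range(1, 7):
    assert count_divisible(n) == (4**n + 2) // 3
print("formula (4^n + 2)/3 confirmed for n = 1..6")
```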
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Let $A$ be a non-zero linear transformation on a real vector space $V$ of dimension $n$. Let $A$ be a non-zero linear transformation on a real vector space $V$ of dimension $n$. Let the subspace $V_0 \subset V$ be the image of $V$ under $A$. Let $k = \dim (V_0) \lt n$ and suppose that for some $\lambda \in \mathbb{R}$, $A^2 = \lambda A$. Then which of the following are true? 1) $\lambda = 1$; 2) $\det(A) = |\lambda|^{1/n}$; 3) $\lambda$ is the only eigenvalue of $A$; 4) there is a nontrivial subspace $V_1 \subset V$ such that $Ax = 0$ for all $x \in V_1$. I am able to say that 4 is true as $\operatorname{rank}(A) < n$. 2 is false by using the determinant function on both sides. Using the argument used for 4, 0 is also an eigenvalue. So 3 is also false. But I want to know what we can say about $\lambda$. Is it an eigenvalue? Is it equal to 1?
If $\lambda\ne0$ then the polynomial with simple roots $x^2-\lambda x=x(x-\lambda)$ annihilates $A$, and clearly $A\ne \lambda I_n$ and $A\ne0$, so $0$ and $\lambda$ are eigenvalues of $A$, and the multiplicity of $\lambda$ is $k$. If $\lambda=0$ then $A$ is nilpotent, and in its Jordan canonical form $A$ has $k$ Jordan blocks of size $2$. Can you now determine the correct options?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Geometric distribution example (making kids until couple has a boy and a girl), need explanation So the setup is as follows: a man and a woman want to have kids: a girl and a boy. They continue to have kids until they have both genders. What is the expected number of kids? As I remember, the solution was the following: $$E[X] = 1 + 1/p = 3$$ Can someone explain why? How do we get this expression and the number $3$? It is very confusing for me... Thank you.
The given expression $$E[X]=1+\frac1p=3$$ can be explained as follows * *The term "$1$" stands for the first kid (if you want to have two kids you have to start by having the first kid). This kid has certainly a gender (is boy or girl) so you have the one gender after the first try! (great). *Now, you have to keep trying until you get the second gender. Assuming equal probability of each gender (i.e. $1/2$ for a boy and the same for a girl), the number $N$ of efforts until the first success (getting the missing gender) is geometrically distributed with parameter $p=1/2$. It is well known that the expected value of $N$ is equal to $$E[N]=\frac{1}{p}=\frac{1}{\frac12}=2$$ Therefore adding the first birth and using the linearity of expectation, you derive the given expression of the expected number of tries, i.e. $$E[X]=E[1+N]=1+E[N]=1+\frac{1}{p}\overset{p=\frac12}=1+2=3$$
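A quick Monte Carlo sanity check of $E[X]=3$ (my own sketch, assuming each gender occurs independently with probability $1/2$):

```python
import random

def kids_until_both():
    """Simulate births until the family has at least one boy and one girl."""
    seen = set()
    count = 0
    while len(seen) < 2:
        seen.add(random.choice("BG"))
        count += 1
    return count

trials = 200_000
print(sum(kids_until_both() for _ in range(trials)) / trials)  # ~3.0
```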
{ "language": "en", "url": "https://math.stackexchange.com/questions/1067989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
interpolation properties of analytic paths Assume we are given $n$ points in $\mathbb{C}^k$. Can we find an analytic path $\phi:[0,1]\to \mathbb{C}^k$ passing through these $n$ points?
As Harald Hanche-Olsen pointed out, there is an interpolating polynomial of degree at most $n-1$ that does the job. Namely, let $p_1,\dots,p_n$ be the given points, pick any numbers $0\le t_1<t_2<\dots<t_n\le 1$, and define $$ \phi(t) = \sum_{j=1}^n p_j \frac{\prod_{l\ne j}(t-t_l)}{\prod_{l\ne j}(t_j-t_l)} $$ This is a polynomial of degree at most $n-1$ such that $\phi(t_j)=p_j$ for all $j=1,\dots,n$.
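For what it's worth, here is a direct translation of this interpolation formula into code (a sketch of mine; the names are made up, and the points are rows of a complex array):

```python
import numpy as np

def interpolating_path(points, ts):
    """Lagrange polynomial phi: [0,1] -> C^k with phi(ts[j]) = points[j]."""
    points = np.asarray(points, dtype=complex)   # shape (n, k)
    ts = np.asarray(ts, dtype=float)             # n distinct nodes in [0,1]

    def phi(t):
        n = len(ts)
        out = np.zeros(points.shape[1], dtype=complex)
        for j in range(n):
            num = np.prod([t - ts[l] for l in range(n) if l != j])
            den = np.prod([ts[j] - ts[l] for l in range(n) if l != j])
            out += points[j] * (num / den)
        return out

    return phi

# Example: a path in C^2 through three given points.
phi = interpolating_path([[1, 1j], [2 + 1j, 0], [0, 3]], [0.0, 0.5, 1.0])
print(phi(0.5))  # ~ [2.+1.j, 0.+0.j]
```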
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
lagrange interpolation, polynomial of degree $2n-1$ Let $a_1, \dots, a_n$ and $b_1, \dots, b_n$ be real numbers. How would I go about showing the following? 1. If $x_1, \dots, x_n$ are distinct numbers, there is a polynomial function $f$ of degree $2n - 1$, such that $f(x_j) = f'(x_j) = 0$ for $j \neq i$, and $f(x_i) = a_i$ and $f'(x_i) = b_i$. 2. There is a polynomial function $f$ of degree $2n-1$ with $f(x_i) = a_i$ and $f'(x_i) = b_i$ for all $i$. I have tried attacking the first part with Lagrange interpolation, without much success...
1. Clearly $f$ will have to be of the form$$f(x) = \prod_{\substack{j=1 \\ j\neq i}}^n (x- x_j)^2(ax+b)$$$($because each $x_j$, $j \neq i$ is a double root$)$. It therefore suffices to show that $a$ and $b$ can be picked so that $f(x_i) = a_i$ and $f'(x_i) = b_i$. If we write $f$ in the form $f(x) = g(x)(ax + b)$, then we must solve$$[g(x_i)x_i] \cdot a + g(x_i) \cdot b = a_i,$$$$[g'(x_i)x_i + g(x_i)] \cdot a + g'(x_i) \cdot b = b_i.$$These equations can always be solved because$$[g(x_i)x_i] \cdot g'(x_i) - [g'(x_i)x_i + g(x_i)]g(x_i) = -[g(x_i)]^2 \neq 0.$$ 2. Let $f_i$ be the function constructed as above, and let $f = \sum_{k=1}^n f_k$.
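One can verify the construction in part 1 symbolically; below is a hedged sympy sketch (the nodes and prescribed values are my own arbitrary choices):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
xs = [0, 1, 3]            # distinct nodes x_1, x_2, x_3 (my choice)
i, a_i, b_i = 0, 5, -2    # prescribe f(x_i) = 5 and f'(x_i) = -2 at node i

g = sp.Mul(*[(x - xj)**2 for j, xj in enumerate(xs) if j != i])
f = g * (a*x + b)

sol = sp.solve([sp.Eq(f.subs(x, xs[i]), a_i),
                sp.Eq(sp.diff(f, x).subs(x, xs[i]), b_i)], [a, b])
f = f.subs(sol)

# Double roots at the other nodes; prescribed value and slope at x_i.
assert f.subs(x, xs[i]) == a_i
assert sp.diff(f, x).subs(x, xs[i]) == b_i
for j, xj in enumerate(xs):
    if j != i:
        assert f.subs(x, xj) == 0 and sp.diff(f, x).subs(x, xj) == 0
print(sp.expand(f))
```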
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate the distance between intersection points of tangents to a parabola Question: Tangent lines $T_1$ and $T_2$ are drawn at two points $P_1$ and $P_2$ on the parabola $y=x^2$ and they intersect at a point $P$. Another tangent line $T$ is drawn at a point between $P_1$ and $P_2$; it intersects $T_1$ at $Q_1$ and $T_2$ at $Q_2$. Show that $$\frac{|PQ_1|}{|PP_1|} + \frac{|PQ_2|}{|PP_2|} = 1$$ My attempt at the question. A possible scenario (figure omitted): the outer two tangents are $T_1$ and $T_2$, and the inner tangent is $T$. Since points $P_1$ and $P_2$ are points on the parabola, I can give them coordinates as follows: $$\tag 1 P_1(P_{1x}, P^2_{1x})$$ and $$\tag 2 P_2(P_{2x}, P^2_{2x})$$ Using $y' = 2x$, I calculate the equations of the tangents $T_1$ and $T_2$ respectively; they are $$T_1:\ y = 2P_{1x}(x - P_{1x}) + P^2_{1x}$$ and $$T_2:\ y = 2P_{2x}(x - P_{2x}) + P^2_{2x}$$ By setting $T_1 = T_2$ and then solving for $x$, I show that the two tangents intersect at $x = \frac{P_{1x} + P_{2x}}{2}$; in words, the intersection of the two tangents to the parabola lies halfway between points $P_1$ and $P_2$. Substituting $x = \frac{P_{1x} + P_{2x}}{2}$ into either of the tangent line equations, I get the $y$ coordinate of the tangent lines' intersection, which is $y = P_{1x}\cdot P_{2x}$. Now I have the coordinates for point $P$, that is $$\tag 3 P\Big(\frac{P_{1x} + P_{2x}}{2}, P_{1x}\cdot P_{2x}\Big)$$ To get coordinates for points $Q_1$ and $Q_2$, I substitute $Q_{1x}$ into the equation of tangent $T_1$ and $Q_{2x}$ into the equation of tangent $T_2$. That yields the following coordinates: $$\tag 4 Q_1(Q_{1x}, \,\,2P_{1x}Q_{1x} - P_{1x}^2)$$ $$\tag 5 Q_2(Q_{2x}, \,\,2P_{2x}Q_{2x} - P_{2x}^2)$$ Since I have all the points necessary to calculate $\frac{|PQ_1|}{|PP_1|} + \frac{|PQ_2|}{|PP_2|}$, I apply the distance formula. Doing so yields the following: $$\tag 6 |PQ_1| = \frac{\sqrt{(4P_{1x}^2 + 1)(P_{1x} + P_{2x} - 2Q_{1x})^2}}{2}$$ $$\tag 7 |PP_1| = \frac{\sqrt{(4P_{1x}^2 + 1)(P_{1x} - P_{2x})^2}}{2}$$ $$\tag 8 |PQ_2| = \frac{\sqrt{(4P_{2x}^2 + 1)(P_{1x} + P_{2x} - 2Q_{2x})^2}}{2}$$ $$\tag 9 |PP_2| = \frac{\sqrt{(4P_{2x}^2 + 1)(P_{1x} - P_{2x})^2}}{2}$$ Now I calculate $\frac{|PQ_1|}{|PP_1|} + \frac{|PQ_2|}{|PP_2|}$ using the above: $$\frac{|PQ_1|}{|PP_1|} + \frac{|PQ_2|}{|PP_2|} = \frac{\frac{\sqrt{(4P_{1x}^2 + 1)(P_{1x} + P_{2x} - 2Q_{1x})^2}}{2}}{\frac{\sqrt{(4P_{1x}^2 + 1)(P_{1x} - P_{2x})^2}}{2}} + \frac{\frac{\sqrt{(4P_{2x}^2 + 1)(P_{1x} + P_{2x} - 2Q_{2x})^2}}{2}}{\frac{\sqrt{(4P_{2x}^2 + 1)(P_{1x} - P_{2x})^2}}{2}}$$ $$\tag {10} =\frac{\sqrt{(P_{1x} + P_{2x} - 2Q_{1x})^2} + \sqrt{(P_{1x} + P_{2x} - 2Q_{2x})^2}}{\sqrt{(P_{1x} - P_{2x})^2}}$$ I can't seem to find a way to show that $(10)$ is equal to $1$. I have, however, tested a few instances and it held up, for what it's worth. For now, I'm at a loss as to how to proceed. Any hints, suggestions, or alternative approaches?
Suppose that the third tangent is drawn at a point $A$ with coordinates $A(a, a^2)$. Then this tangent intersects $T_{i}$ at $$ Q_{i}\left(\frac{P_{ix} + a}{2}, P_{ix}a \right) $$ using your equation $(3)$. In other words, $$ 2(Q_{1x} - Q_{2x})=P_{1x}-P_{2x} $$ Therefore, in the argument of the square root in the numerator of the first term of $(10)$, \begin{align*} P_{1x} + P_{2x} - 2Q_{1x} &= P_{1x} + P_{2x} - 2[(P_{1x} - P_{2x})/2 + Q_{2x}] \\ &= 2(P_{2x} - Q_{2x}) \end{align*} In general, $\sqrt{x^2} = |x|$, so we need to figure out the relative sizes of all of this. Assume without loss of generality that $P_{2x} > P_{1x}$, so the denominator of $(10)$ is $P_{2x} - P_{1x}$. Clearly $P_{2x} > Q_{2x}$, so the first term in the numerator is $2(P_{2x} - Q_{2x})$. Finally, writing $P_x = \frac{P_{1x}+P_{2x}}{2}$ for the $x$-coordinate of $P$, \begin{align*} P_x &< Q_{2x} \\ (P_{1x} + P_{2x})/2 &< Q_{2x} \\ P_{1x} + P_{2x} - 2Q_{2x} &< 0 \end{align*} so the second term in the numerator is $2Q_{2x} - P_{1x} - P_{2x}.$ Putting this all together: \begin{align*} \frac{|P Q_{1}|}{|P P_{1}|} + \frac{|P Q_{2}|}{|P P_{2}|} &= \frac{2(P_{2x} - Q_{2x})+ (2Q_{2x} - P_{1x} - P_{2x})}{P_{2x} - P_{1x}} \\ &= \frac{(2P_{2x} - P_{2x}) - P_{1x} + (2Q_{2x} - 2Q_{2x})}{P_{2x} - P_{1x}} \\ &= \frac{P_{2x} - P_{1x}}{P_{2x} - P_{1x}} \\ &= 1 \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is $[\bar{\mathbb Q}:\bar{\mathbb Q}\cap\mathbb R]=2$? Is $[\bar{\mathbb Q}:\bar{\mathbb Q}\cap\mathbb R]=2$? I think it is true that $\bar{\mathbb Q}\cap\mathbb C=\bar{\mathbb Q}$, because I've heard that the closure of the reals is $\mathbb C$. And $\bar{\mathbb Q}$ is a subfield of $\mathbb C$, so using this fact, we have that $\bar{\mathbb Q}\cap\mathbb R$, the intersection of $2$ fields, is again a field. But does the index necessarily have to be an integer, as in group theory?
Yes, $[\overline{\mathbb Q}:\overline{\mathbb Q}\cap \mathbb R]=2$, since you obtain the former from the latter by adjoining $i$. The index is always either a positive integer or infinite, since it represents the cardinality of a basis of a vector space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to solve a linear system in matrix form using Laplace transform? How to solve this linear system using the Laplace transform? $$\mathbf X'(t)=\left[\begin{array}{rrr}-3&0&2\\1&-1&0\\-2&-1&0\end{array}\right]\mathbf X(t); ~~~~~~~~\mathbf X(0)=\left[\begin{array}{r}4\\-1\\2\end{array}\right]$$ I am struggling with this problem. I tried writing it as $$\begin{cases}x_1' &= -3x_1+2x_3,\quad x_1(0)=4\\ x_2'&= x_1-x_2, \quad x_2(0)=-1\\ x_3'&= -2x_1 -x_2, \quad x_3(0)=2 \end{cases}$$ Then I took the Laplace transform of both sides, but I have hit a dead end. Is there a better way to solve this problem? I would be grateful for help. I have only done simple problems so far; this is very complicated for me.
We are given: $$X'(t) = \begin{bmatrix} -3 & 0 & 2 \\ 1 & -1 & 0\\ -2 & -1 & 0\end{bmatrix} \begin{bmatrix} x(t) \\ y(t)\\ z(t)\end{bmatrix}, ~~ X(0) = \begin{bmatrix} 4 \\ -1\\ 2\end{bmatrix}$$ We can write this as: $$\tag 1 \begin{align} x' &= -3x + 2z \\ y' &= x-y \\ z' &= -2x - y \end{align}$$ Taking the Laplace transform of $(1)$ yields: $$\begin{align} s x(s) - x(0) &= -3 x(s) + 2 z(s) \\ s y(s) - y(0) &= x(s) - y(s) \\ s z(s) - z(0) &= -2 x(s) - y(s) \end{align}$$ This reduces to the system: $$\begin{bmatrix} s+3 & 0 & -2 \\ -1 & s+1 & 0\\ 2 & 1 & s\end{bmatrix} \begin{bmatrix} x(s) \\ y(s)\\ z(s)\end{bmatrix} = \begin{bmatrix} 4 \\ -1\\ 2 \end{bmatrix}$$ All that is needed is to solve for $x(s),y(s),z(s)$ and then find the inverse Laplace transform. You could also have used many other methods to solve this system, including eigenvalues/eigenvectors, the matrix exponential, etc. Update If we find the inverse of the matrix on the left, we get: $$\begin{bmatrix} \frac{s^2+s}{s^3+4 s^2+7 s+6} & -\frac{2}{s^3+4 s^2+7 s+6} & \frac{2 s+2}{s^3+4 s^2+7 s+6} \\ \frac{s}{s^3+4 s^2+7 s+6} & \frac{s^2+3 s+4}{s^3+4 s^2+7 s+6} & \frac{2}{s^3+4 s^2+7 s+6} \\ \frac{-2 s-3}{s^3+4 s^2+7 s+6} & \frac{-s-3}{s^3+4 s^2+7 s+6} & \frac{s^2+4 s+3}{s^3+4 s^2+7 s+6} \\ \end{bmatrix}$$ Multiplying that by the column vector on the right yields: $$\begin{bmatrix} x(s) \\ y(s)\\ z(s)\end{bmatrix} = \begin{bmatrix} \frac{4 \left(s^2+s\right)}{s^3+4 s^2+7 s+6}+\frac{2}{s^3+4 s^2+7 s+6}+\frac{2 (2 s+2)}{s^3+4 s^2+7 s+6} \\ \frac{4 s}{s^3+4 s^2+7 s+6}-\frac{s^2+3 s+4}{s^3+4 s^2+7 s+6}+\frac{4}{s^3+4 s^2+7 s+6} \\ \frac{4(-2 s-3)}{s^3+4 s^2+7 s+6}+\frac{s+3}{s^3+4 s^2+7 s+6}+\frac{2 \left(s^2+4 s+3\right)}{s^3+4 s^2+7 s+6} \\ \end{bmatrix}$$ We now want to find the inverse Laplace transform of each component, for example: $$x(t) = \mathscr{L^{-1}} \left(\frac{4 \left(s^2+s\right)}{s^3+4 s^2+7 s+6}+\frac{2}{s^3+4 s^2+7 s+6}+\frac{2(2s+2)}{s^3+4 s^2+7 s+6} \right)$$ Thus, $$x(t) = e^{-2 t} \left(-\sqrt{2} e^t \sin \left(\sqrt{2} t\right)+2 e^t \cos \left(\sqrt{2} t\right)+2\right) $$ Hopefully, this gives you enough to do the other two, else you will not learn.
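To double-check the transform algebra, here is a hedged sympy sketch (mine; sympy's `inverse_laplace_transform` can carry `Heaviside` factors that disappear for $t>0$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
A = sp.Matrix([[-3, 0, 2], [1, -1, 0], [-2, -1, 0]])
X0 = sp.Matrix([4, -1, 2])

# X(s) = (sI - A)^(-1) X(0), then invert the transform componentwise.
Xs = (s * sp.eye(3) - A).inv() * X0
x_t = sp.inverse_laplace_transform(sp.apart(sp.simplify(Xs[0]), s), s, t)
print(sp.simplify(x_t))
# expected: 2*exp(-2*t) + 2*exp(-t)*cos(sqrt(2)*t) - sqrt(2)*exp(-t)*sin(sqrt(2)*t)
```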
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Order of $f(n) = 4n + 6n^3 - 8n^5$ If a function is $$f(n) = 4n + 6n^3 - 8n^5$$ then what is the order of $f$? The answer I have is $\log(n)$, but I'm not sure if it's right.
The order of a polynomial is usually its largest power. In this case, it would be 5. If instead you are trying to find $g$ s.t. $$ f\in O\left(g\right)\text{ as }n\rightarrow\infty, $$ (big O notation) then $g$ can be $n^{5}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating $\int_{0}^{\pi/2}\frac{x\sin x\cos x\;dx}{(a^{2}\cos^{2}x+b^{2}\sin^{2}x)^{2}}$ How to evaluate the following integral $$\int_{0}^{\pi/2}\frac{x\sin x\cos x}{(a^{2}\cos^{2}x+b^{2}\sin^{2}x)^{2}}dx$$ For integrating I took $\cos^{2}x$ outside and applied integration by parts. Given answer is $\dfrac{\pi}{4ab^{2}(a+b)}$. But I am not getting the answer.
Let $x=\dfrac{t}{2}$; we have $$I=\dfrac{1}{2}\int_{0}^{\pi}\dfrac{\dfrac{t}{2}\sin{\dfrac{t}{2}}\cos{\dfrac{t}{2}}}{\left(a^2\cos^2{\dfrac{t}{2}}+b^2\sin^2{\dfrac{t}{2}}\right)^2}dt=\dfrac{1}{2}\int_{0}^{\pi}\dfrac{t\sin{t}}{[(a^2+b^2)+(a^2-b^2)\cos{t}]^2}dt$$ So \begin{align*}I&=\dfrac{1}{2(a^2-b^2)}\int_{0}^{\pi}t\;d\left(\dfrac{1}{(a^2+b^2)+(a^2-b^2)\cos{t}}\right)\\ &=\dfrac{1}{2(a^2-b^2)}\,\dfrac{t}{(a^2+b^2)+(a^2-b^2)\cos{t}}\Bigg|_{0}^{\pi}-\dfrac{1}{2(a^2-b^2)}\int_{0}^{\pi}\dfrac{dt}{(a^2+b^2)+(a^2-b^2)\cos{t}}\\ &=\dfrac{\pi}{4b^2(a^2-b^2)}-\dfrac{1}{2(a^2-b^2)}\int_{0}^{\pi}\dfrac{dt}{(a^2+b^2)+(a^2-b^2)\cos{t}} \end{align*} The remaining integral is standard: $\int_{0}^{\pi}\frac{dt}{p+q\cos t}=\frac{\pi}{\sqrt{p^2-q^2}}$, which here gives $\frac{\pi}{2ab}$. Therefore $$I=\dfrac{\pi}{4b^2(a^2-b^2)}-\dfrac{\pi}{4ab(a^2-b^2)}=\dfrac{\pi(a-b)}{4ab^2(a^2-b^2)}=\dfrac{\pi}{4ab^2(a+b)}.$$
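A numerical sanity check of the closed form (my own sketch, with arbitrary $a,b$):

```python
from math import pi, sin, cos
from scipy.integrate import quad

a, b = 2.0, 1.0

def integrand(x):
    return x * sin(x) * cos(x) / (a**2 * cos(x)**2 + b**2 * sin(x)**2)**2

value, abserr = quad(integrand, 0, pi / 2)
print(value, pi / (4 * a * b**2 * (a + b)))  # both ~0.130900
```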
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
NP Solvable in Polynomial Time I just took an exam and am a little curious about this question (it may not be verbatim, but the idea is clear): TRUE/FALSE: If an NP complete problem can be solved in polynomial time, then P = NP. My thought was FALSE. A single NP-complete problem being solved in polynomial time doesn't prove that P = NP, it just proves that this one problem $\in$ P. It made sense during the exam, but I'm not too confident now.
You're incorrect. If some NP-complete problem $A$ can be solved in polynomial time, then given any other NP problem $B$ we can solve it in polynomial time by first reducing $B$ to $A$ in polynomial time and then running the polynomial algorithm for $A$. A nearby statement is false, namely, that if a problem in NP can be solved in polynomial time then $P=NP$. Any problem in $P$ is a counterexample (unless, of course, $P=NP$!).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Power set of Set differences Assume that $\mathcal P(A-B)= \mathcal P(A)$. Prove that $A\cap B = \varnothing$. What I did: I tried proving this directly and I got stuck. Let $X$ represent a nonempty set, and let $X\in\mathcal P(A-B)$. By definition of power set and set difference: $X\subseteq A$ and $X\nsubseteq B$. By definition of a subset and subset negation, let $y$ be an arbitrary element of $X$ such that: $(\forall y)(y \in X \rightarrow y \in A)\land (\exists y)(y \in X \land y \notin B)$ This is where I get stuck: how do I confirm the sets have no common elements from the fact that there is at least one element they don't share? I also tried proving the contrapositive by assuming that the intersection is nonempty, where I let $X$ be a subset contained in the intersection; then $X$ must be a subset of both $A$ and $B$, and that's where I got stuck. I apologize for the bad formatting; first-time poster.
We will prove the contrapositive: If $A \cap B \neq \emptyset$, then $\mathcal P(A - B) \neq \mathcal P(A)$. Suppose that there is some element $x \in A \cap B$ so that $x \in A$ and $x \in B$. Then $x \notin A - B$ (otherwise, if $x \in A - B$, then $x \notin B$, contradicting the fact that $x \in B$). This implies that $\{x\} \subseteq A$ and $\{x\} \not\subseteq A - B$ so that $\{x\} \in \mathcal P(A)$ while $\{x\} \notin \mathcal P(A - B)$. So $\mathcal P(A - B) \neq \mathcal P(A)$, as desired. $~~\blacksquare$
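For what it's worth, a brute-force check of the contrapositive on small sets (my own sketch, not part of the proof):

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

U = powerset({0, 1, 2})
for A in U:
    for B in U:
        if A & B:                                   # intersection nonempty...
            assert powerset(A - B) != powerset(A)   # ...so the power sets differ
print("contrapositive holds for all subsets of {0, 1, 2}")
```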
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Showing that $f(x)=x^2$ for $x \in \mathbb{Q}$ and $f(x)=0$ for $x \not\in \mathbb{Q}$ is differentiable at $x=0$ I am supposed to show that $f(x) = x^2$ for $x$ in the rationals and $f(x) = 0$ for $x$ in the irrationals is differentiable at $x = 0$, and I am supposed to find the derivative of $f$ at $x = 0$. Is my proof correct or not? My proof: consider $$\lim_{h\to 0}\frac{f(0 + h) - f(0)}{h}.$$ Then we have $$\lim_{h\to 0}\frac{h^2 - 0^2}{h} = \lim_{h\to 0}\frac{h^2}{h} = \lim_{h\to 0} h = 0,$$ so the derivative of $f$ at $0$ is $0$.
You're pretty close, but you're missing the case when $x$ is irrational. To find the derivative we have to evaluate $$f'(0)=\lim\limits_{x\rightarrow 0}\frac{f(x)-f(0)}{x-0}=\lim\limits_{x\rightarrow 0}\frac{f(x)}{x}$$ since $f(0)=0^2=0$. One way to evaluate this is to let $(x_n)\rightarrow 0$ be an arbitrary sequence converging to zero with $x_n\neq 0$ for all $n$. Then if $$\lim\limits_{n\rightarrow\infty}\frac{f(x_n)-f(0)}{x_n-0}=\lim\limits_{n\rightarrow\infty}\frac{f(x_n)}{x_n}=0$$ we're done. Let $\varepsilon>0$ be given. We're trying to find $N\in \mathbb{N}$ such that $\forall\ n\geq N$, $\left|\frac{f(x_n)}{x_n}\right|<\varepsilon$. Note that since $(x_n)\rightarrow 0$, $\exists\ N\in\mathbb{N}$ such that $\forall\ n\geq N, |x_n|<\varepsilon$. Now let $n\geq N$. Let's evaluate $\left|\frac{f(x_n)}{x_n}\right|$. Case 1 $x_n\in\mathbb{Q}$. Then $$\left|\frac{f(x_n)}{x_n}\right|=\left|\frac{(x_n)^2}{x_n}\right|=|x_n|<\varepsilon\ \checkmark$$ Case 2 $x_n\notin\mathbb{Q}$. Then $$\left|\frac{f(x_n)}{x_n}\right|=\left|\frac{0}{x_n}\right|=0<\varepsilon\ \checkmark$$ Thus $f$ is differentiable at $0$ and $$f'(0)=\lim\limits_{x\rightarrow 0}\frac{f(x)-f(0)}{x-0}=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Metric Spaces: closure of a set is the set of all limits of sequences in that set I am studying metric spaces and got confused about the many different ways of defining the closure. Let $S$ be a subset of $M.$ Then, the closure of $S$ is $ \{x \in M : \forall \epsilon>0, \ \ B(x,\epsilon) \cap S \neq \emptyset \}$ Also, the lecturer mentioned something about converging sequences. Is it true that the closure of $S$ is $\{x \in M: \text{there exists a sequence} \ (x_n) \subseteq S \ \text{such that} \ d(x_n,x) \rightarrow 0 \ \text{as} \ n \rightarrow \infty\}$? And do we need $x_n \neq x$ in the above definition? I remember learning in real analysis that $p$ is a limit point if and only if there exists a sequence $x_n$ converging to $p$ with $x_n \neq p$. Then, since the closure includes points that are intuitively 'in' the set but are not limit points, we don't need the condition that $x_n \neq x$. Have I understood this correctly? Thanks
The property of a point $x$ that for all $\varepsilon>0$, the intersection $B(x,\varepsilon)\cap S$ is non-empty is equivalent to every neighborhood $U$ of $x$ intersects $S$ since $B(x,\varepsilon)$ is a neighborhood, and every neighborhood $U$ contains a ball $B(x,\varepsilon)$ for some $\varepsilon>0$. These characterizations define $x$ as an adherent point, and one way to define the closure $\overline S$ of a set $S$ is as the set of all adherent points of $S$. In a metric space $M$ a point $x$ is an adherent point of $S$ if and only if there is a sequence $(x_n)_n$ in $S$ converging to $x$. The if-part is trivial. For the only if-part, we can construct a sequence in $S$ by choosing a point $x_n$ in every ball $B(x,1/n)$. So in a metric space the closure is indeed the set of all limits of sequences in $S$. Be careful not to confuse the terms limit of a sequence and limit point of a set. A limit point of a set $S$ in a space $X$ is a point $x$ such that every neighborhood $U$ of $x$ contains a point of $S$ distinct from $x$, that is, $U\cap S\setminus\{x\}\ne \emptyset$. In that case, if $X$ is metric, one can construct a sequence $(x_n)_n\subseteq S$ converging to $x$ such that $x_n\ne x$ for each $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1068999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question about matrices such that the elements in each row add up to $1$. Let $A$ be an invertible $10\times 10$ matrix with real entries such that the sum of each row is $1$. Then is the sum of the entries of each row of the inverse of $A$ also $1$? I created some examples, and found the proposition to be true. I also proved that if two matrices with the property that the sum of the elements in each row is $1$ are multiplied, then the product also has the same property. Clearly, $I$ has this property. I think I have a proof running along the following lines: $$A^{-1}A=I$$ where $A$ and $I$ satisfy the aforementioned property. Also, if $A^{-1}$ did not satisfy this property, then neither would the product of $A$ and $A^{-1}$, which is a contradiction. Is the proposition true, and if so, is my proof correct?
An $\;n\times n\;$ square matrix has all row sums equal to $\;1\;$ iff $\;u:=(1,1,\ldots,1)^t\;$ is an eigenvector with eigenvalue $\;1\;$. But then $$Au=u\implies A^{-1}\left(Au\right)=A^{-1}u\iff u= A^{-1}u,$$ so the answer is yes.
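A quick numerical illustration of this (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
A[:, -1] = 1 - A[:, :-1].sum(axis=1)   # force each row to sum to 1

Ainv = np.linalg.inv(A)
print(np.allclose(A.sum(axis=1), 1))      # True
print(np.allclose(Ainv.sum(axis=1), 1))   # True (up to floating-point error)
```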
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Intersection of two circles. Let $C_1$ and $C_2$ be the circles $\rho=a\sin\theta$ and $\rho=a(\cos\theta + \sin\theta)$, respectively. (Figure of the two circles omitted.) From the graphs, we see that the intersection points are $(0,0)$ and $(\pi/2, a)$. But when we solve the system of equations $\rho=a\sin\theta,\ \rho=a(\cos\theta + \sin\theta)$, we obtain $(\theta, \rho)=(\pi/2, a)$ or $(-\pi/2, -a)$. The points $(\pi/2, a)$, $(-\pi/2, -a)$ are different from $(0,0)$, $(\pi/2, a)$. I am confused. Thank you very much.
Note that $\left(-\frac{\pi}{2},-a\right)$ is another representation of the point $\left(\frac{\pi}{2},a\right)$ (to see this, draw the radius $-a$ at an angle of $-\frac{\pi}{2}$). So the two points you get are actually the same point written differently. As for $(0,0)$, note that $a\sin\theta$ passes through $(0,0)$ when $\theta$ is a multiple of $\pi$, while $a(\cos\theta + \sin\theta)$ passes through $(0,0)$ when $\theta$ is $\frac{3\pi}{4}\pm n\pi$ where $n$ is an integer. Thus the two curves never pass through $(0,0)$ at the same value of $\theta$, which is why you don't see this solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find an $\epsilon$ such that the $\epsilon$ neighborhood of $\frac{1}{3}$ contains $\frac{1}{4}$ and $\frac{1}{2}$ but not $\frac{17}{30}$ I am self studying analysis and wrote a proof that is not confirmed by the text I am using to guide my study. I am hoping someone might help me comfirm/fix/improve this. The problem asks: Find an $\epsilon$ such that $J_{\epsilon}(\frac{1}{3})$ contains $\frac{1}{4}$ and $\frac{1}{2}$ but not $\frac{17}{30}$ Here $J_\epsilon(x)$ means the $\epsilon$-neighborhood of $x$. I know that $d\left(\frac{1}{4},\frac{1}{3}\right)<d\left(\frac{1}{2},\frac{1}{3}\right)$ $$\left|\frac{1}{2} - \frac{1}{3}\right| = \frac{1}{6}$$ I know that: $$J_{\frac{1}{6}}(\frac{1}{3}) = \left(\frac{1}{6},\frac{1}{2}\right)$$ and so $$J_{\frac{1}{6}+\epsilon}\left(\frac{1}{3}\right) =\left(\frac{1}{6}-\epsilon,\frac{1}{2}+\epsilon\right)$$ where $\epsilon<\frac{1}{15}$ is a satisfactory solution. Am I allowed to generalize this way with epsilon? Would the answer be better If provide some concrete value of epsilon?
$ϵ<\frac{1}{15}$ is a satisfactory solution. It's best not to use the same letter to mean two different things: the $\epsilon$ that's requested in the problem is somehow also $\frac16+\epsilon$ at the end of the proof. You could write $\epsilon = \frac16+\delta$. So, your final answer is $$\frac16<\epsilon<\frac{1}{6}+\frac{1}{15}$$ which is correct, but a concrete value would be better. Here's a less convoluted approach. Review the three conditions: $$ \epsilon>\left|\frac13-\frac14\right|,\quad \epsilon>\left|\frac13-\frac12\right|,\quad \epsilon\le \left|\frac13-\frac{17}{30}\right| $$ (Non-strict inequality in the last condition, because I assume neighborhoods are open. If they are not, adjust.) They can be condensed into two, because the second implies the first: $$ \epsilon>\frac16 ,\quad \epsilon\le \frac{7}{30} $$ So, anything within $$ \frac{5}{30}<\epsilon\le \frac{7}{30} $$ works... but there is a natural choice of $\epsilon$ here that is nice and simple and does not depend on strict/non-strict inequalities. Generally, in this course it is better to present concrete evidence of the existence of epsilons and deltas. Saying "let $\delta=\epsilon/4$" is better than "pick $\delta$ such that such-and-such inequalities hold".
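As a concrete check (mine, using one valid value from the interval above, $\epsilon=\frac15$):

```python
from fractions import Fraction as F

eps, center = F(1, 5), F(1, 3)   # 5/30 < 1/5 = 6/30 <= 7/30
for p, should_be_inside in [(F(1, 4), True), (F(1, 2), True), (F(17, 30), False)]:
    assert (abs(p - center) < eps) == should_be_inside
print("epsilon = 1/5 works")
```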
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Closed form of $\int_0^\infty \ln \left( \frac{x^2+2kx\cos b+k^2}{x^2+2kx\cos a+k^2}\right) \;\frac{\mathrm dx}{x}$ Today I discussed the following integral in the chat room $$\int_0^\infty \ln \left( \frac{x^2+2kx\cos b+k^2}{x^2+2kx\cos a+k^2}\right) \;\frac{\mathrm dx}{x}$$ where $0\leq a, b\leq \pi$ and $k>0$. Some users suggested that I use Frullani's theorem: $$\int_0^\infty \frac{f(ax) - f(bx)}{x}\,\mathrm dx = \big[f(0) - f(\infty)\big]\ln \left(\frac ab\right)$$ So I tried to work that way. \begin{align} I&=\int_0^\infty \ln \left( \frac{x^2+2kx\cos b+k^2}{x^2+2kx\cos a+k^2}\right) \;\frac{\mathrm dx}{x}\\ &=\int_0^\infty \frac{\ln \left( x^2+2kx\cos b+k^2\right)-\ln \left( x^2+2kx\cos a+k^2\right)}{x}\mathrm dx\tag{1}\\ &=\int_0^\infty \frac{\ln \left( 1+\dfrac{2k\cos b}{x}+\dfrac{k^2}{x^2}\right)-\ln \left( 1+\dfrac{2k\cos a}{x}+\dfrac{k^2}{x^2}\right)}{x}\mathrm dx\tag{2}\\ \end{align} The issue arose from $(1)$ because $f(\infty)$ diverges, and the same issue arose from $(2)$ because $f(0)$ diverges. I then tried to use D.U.I.S. by differentiating w.r.t. $k$, but it seemed hopeless because WolframAlpha gave me this horrible form. Any idea? Any help would be appreciated. Thanks in advance.
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\dsc}[1]{\displaystyle{\color{red}{#1}}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,{\rm Li}_{#1}} \newcommand{\norm}[1]{\left\vert\left\vert\, #1\,\right\vert\right\vert} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ $\ds{\int_{0}^{\infty}\ln\pars{x^{2} + 2kx\cos\pars{b} + k^{2}\over x^{2} + 2kx\cos\pars{a} + k^2}\,{\dd x \over x}:\ {\large ?} \,,\qquad 0 <\ a\,,\ b\ <\pi\,,\quad k > 0.}$ \begin{align}&\overbrace{\color{#66f}{\large\int_{0}^{\infty} \ln\pars{x^{2} + 2kx\cos\pars{b} + k^{2}\over x^{2} + 2kx\cos\pars{a} + k^2}\,{\dd x \over x}}} ^{\dsc{x \over k}\ \ds{\mapsto}\ \dsc{x}} \\[5mm]&=\int_{0}^{\infty} \ln\pars{x^{2} + 2x\cos\pars{b} + 1\over x^{2} + 2x\cos\pars{a} + 1} \,{\dd x \over x} =\lim_{R\ \to\ \infty}\bracks{\fermi\pars{a,R} - \fermi\pars{b,R}} \\[5mm]&\mbox{where}\ \begin{array}{|c|}\hline\\ \ \fermi\pars{\mu,R}\equiv \int_{0}^{R}\ln\pars{x} {2x + 2\cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x\,,\quad 0 < \mu < \pi\ \\ \\ \hline \end{array}\qquad\qquad\quad\pars{1} \end{align} Note that $\ds{r_{-} \equiv \exp\pars{\bracks{\pi - \mu}\ic}}$ and $\ds{r_{+} \equiv \exp\pars{\bracks{\pi + \mu}\ic}}$ are the roots of $\ds{x^{2} + 2x\cos\pars{\mu} + 1 = 0}$ such that $\ds{\pars{~\mbox{with}\ R > 1~}}$: \begin{align}&\dsc{\int_{0}^{R}\ln^{2}\pars{x} {x + \cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x} \\[5mm]&=2\pi\ic\bracks{\half\,\ln^{2}\pars{r_{-}} + \half\,\ln^{2}\pars{r_{+}}} -\int_{R}^{0}\bracks{\ln\pars{x} + 2\pi\ic}^{2} {x + \cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x \\[5mm]&=\pi\ic\bracks{-\pars{\pi - \mu}^{2} - \pars{\pi + \mu}^{2}} +\dsc{\int_{0}^{R}\ln^{2}\pars{x} {x + \cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x} \\[5mm]&+2\pi\ic\ \overbrace{\int_{0}^{R}\ln\pars{x} {2x + 2\cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x} ^{\dsc{\fermi\pars{\mu,R}}}\ +\ \pars{2\pi\ic}^{2}\int_{0}^{R} {x + \cos\pars{\mu} \over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x \\[5mm]&-{\mathfrak C}\pars{R,\mu} \end{align} where $\ds{\left.{\mathfrak C}\pars{R,\mu} \equiv\oint\ln^{2}\pars{z} {z + \cos\pars{\mu}\over z^{2} + 2z\cos\pars{\mu} + 1}\,\dd z\,\right\vert _{z\ \equiv\ R\expo{\ic\theta}\,,\ 0\ <\ \theta\ <\ 2\pi}}$ This expression leads to: \begin{align} 0&=-2\pi\ic\pars{\pi^{2} + \mu^{2}} + 2\pi\ic\int_{0}^{R}\ln\pars{x} {2x + 2\cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x \\[5mm]&+\pars{2\pi\ic}^{2}\int_{\cos\pars{\mu}}^{R + \cos\pars{\mu}} {x \over x^{2} + \sin^{2}\pars{\mu}}\,\dd x - {\mathfrak C}\pars{R,\mu} \end{align} and \begin{align} \fermi\pars{\mu,R}&=\int_{0}^{R}\ln\pars{x} {2x + 2\cos\pars{\mu}\over x^{2} + 2x\cos\pars{\mu} + 1}\,\dd x \\[5mm]&=\pi^{2} + \mu^{2} -\pi\ic\ln\pars{\bracks{R + \cos\pars{\mu}}^{2} + \sin^{2}\pars{\mu}} + {{\mathfrak C}\pars{R,\mu} \over 2\pi\ic} \end{align} In the limit $\ds{R \to \infty}$ we'll have: \begin{align} \lim_{R \to\ \infty}\bracks{\fermi\pars{a,R} - \fermi\pars{b,R}} &=a^{2} - b^{2} \end{align} such that expression $\pars{1}$ is reduced to: \begin{align}&\color{#66f}{\large\int_{0}^{\infty} \ln\pars{x^{2} + 2kx\cos\pars{b} + k^{2}\over x^{2} + 2kx\cos\pars{a} + k^2}\,{\dd x \over x}} =\color{#66f}{\large a^{2} - b^{2}} \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 5, "answer_id": 0 }
Erwin Kreyszig's Introductory Functional Analysis With Applications, Section 2.7, Problem 6 Suppose that $X$ and $Y$ are two normed spaces over the same field ($\mathbb{R}$ or $\mathbb{C}$). Show that the range of a bounded linear operator $T \colon X \to Y$ need not be closed in $Y$. Kreyszig gives the following hint: Consider the operator $T \colon \ell^\infty \to \ell^\infty$ defined by $$Tx := y = (\eta_j), \, \mbox{ where } \, \eta_j := \frac{\xi_j}{j} \, \mbox{ for all } x := (\xi_j) \in \ell^\infty.$$ How do we characterise the range of this operator and show that the range is not closed in $\ell^\infty$?
Let $e_n$ be the element of $\ell_\infty$ whose $m$'th coordinate is $1$ if $m=n$ and $0$ otherwise. The closed linear span of $\{e_n\mid n\in \Bbb N\}$ in $\ell_\infty$ is the space $c_0$ of sequences that tend to $0$. For your operator, we have $T(je_j)=e_j$; so the range of $T$ contains each $e_j$. Since the range of a linear operator is a linear space, it follows that the range of $T$ contains the linear span of the $e_j$. So, if the range of $T$ is closed, it must contain the space $c_0$ (in fact it would be equal to $c_0$). But this is not the case: the vector $y=(1/\sqrt j)$ is in $c_0$ but is not in the range of $T$. If $Tx=y$, then the $j$'th coordinate of $x$ would have to be $\sqrt j$. But the sequence $(\sqrt j)\notin\ell_\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof $|\sin(x) - x| \le \frac{1}{3.2}|x|^3$ So, by Taylor polynomial centered at $0$ we have: $$\sin(x) = x-\frac{x^3}{3!}+\sin^4(x_o)\frac{x^4}{4!}$$ Where $\sin^4(x_0) = \sin(x_o)$ is the fourth derivative of sine in a point $x_0\in [0,x]$. Then we have: $$\sin(x)-x = -\frac{x^3}{3!}+\sin(x_0)\frac{x^4}{4!}$$ I thought about proving the error $\sin(x_0)\frac{x^4}{4!}$ is always positive, but for $x>\pi$ this is not the case. Someone has a hint?
$$\sin x - x = \int_{0}^{x}(\cos t-1)\, dt$$ hence, for any $x>0$: $$|\sin x - x| = \int_{0}^{x}2\sin^2\frac{t}{2}\,dt\leq \int_{0}^{x}\frac{t^2}{2}\,dt=\frac{x^3}{6}$$ since $\sin\frac{t}{2}\leq\frac{t}{2}.$
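A minimal numerical spot-check of the final bound (a sketch assuming `numpy`; by oddness of $\sin x - x$ it suffices to test $x>0$, and $\frac{x^3}{6}\le\frac{|x|^3}{3.2}$ then gives the inequality in the title):

```python
# Spot-check |sin(x) - x| <= x^3/6 on a grid of positive x.
import numpy as np

x = np.linspace(1e-6, 10, 100_001)
print(np.all(np.abs(np.sin(x) - x) <= x**3 / 6 + 1e-15))  # True
```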
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is indefinite integration non-linear? Let us consider this small problem: $$ \int0\;dx = 0\cdot\int1\;dx = 0\cdot(x+c) = 0 \tag1 $$ $$ \frac{dc}{dx} = 0 \qquad\iff\qquad \int 0\;dx = c, \qquad\forall c\in\mathbb{R} \tag2 $$ These are two conflicting results. Based on this other question, Sam Dehority's answer seems to indicate: $$ \int\alpha f(x)\;dx\neq\alpha\int f(x)\;dx,\qquad\forall\alpha\in\mathbb{R} \tag3 $$ However, this clearly implies that indefinite integration is nonlinear, since a linear operator $P$ must satisfy $P(\alpha f) = \alpha Pf, \forall\alpha\in\mathbb{R}$, including $\alpha=0$. After all, a linear combination of elements of a vector space $V$ may have zero valued scalars: $f = \alpha g + \beta h, \forall\alpha,\beta\in\mathbb{R}$ and $g, h\in V$. This all seems to corroborate that zero is not excluded when it comes to possible scalars of linear operators. To take two examples, both matrix operators in linear algebra and derivative operators are linear, even when the scalar is zero. In a matrix case for instance, let the operator $A$ operate a vector: $A\vec{x} = \vec{y}$. Now: $A(\alpha\vec{x}) = \alpha A\vec{x} = \alpha\vec{y}$. This works even for $\alpha = 0$. Why is $(3)$ true? Can someone prove it formally? If $(3)$ is false, how do we fix $(1)$ and $(2)$? When exactly does the following equality hold (formal proof)? $$ \int\alpha f(x)\;dx = \alpha\int f(x)\;dx,\qquad\forall\alpha\in\mathbb{R} $$ I would appreciate formal answers and proofs.
My "answer" should be a comment under the two posted answers, but I don't have enough reputation to post comments. There is a (very popular) mistake in both answers. Indefinite integral operator does NOT give a class of functions equal up to constant translation. Let's look at an example. Someone might evaluate integral of 1/x like this: $$ \int\frac{1}{x}dx = \begin{cases} logx + C & \text{if x > 0}\\ log(-x) + C & \text{if x < 0}\\ \end{cases} = log|x| + C \\ \text{where C} \in \mathbb{R}. $$ However, this is not an exhaustive answer. The constants for negative and positive x can differ. The exhaustive answer would be $$ \int\frac{1}{x}dx = \begin{cases} logx + C & \text{if x > 0}\\ log(-x) + D & \text{if x < 0}\\ \end{cases},\\ \text{where C,D} \in \mathbb{R}. $$ Thus it is not true to say that $$ \int f(x) dx = \{ F(x) + C : C \in \mathbb{R} \}\\ \text{for a particular $F(x)$}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 3, "answer_id": 2 }
Taylor Expansion of $x\sqrt{x}$ at x=9 How can I go about solving the Taylor expansion of $x\sqrt{x}$ at x=9? I solved the derivative down to the 5th derivative and then tried subbing in the 9 value for a using this equation $$\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n.$$ Can someone walk me through the process (in neatly put-together LaTEX/Jax?) I know I'm asking for a lot! Thanks. :)
Let's write down the first couple of derivatives first: $f(x)=x\cdot \sqrt x$ $f'(x)=\frac{3\cdot\sqrt x }{2}$ $f''(x)=\frac{3}{4\cdot \sqrt x}$ You mentioned the Taylor expansion in your opening post: $\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n.$ So your first three terms will be: $\frac{f^{(0)}(9)}{0!}(x-9)^0=27$ $\frac{f^{(1)}(9)}{1!}(x-9)^1=\frac{9(x-9)}{2}$ $\frac{f^{(2)}(9)}{2!}(x-9)^2=\frac{1}{8}(x-9)^2$ (Just so you don't get confused, $f^{(2)}=f''$.) Now it's your turn. Try to calculate the next couple of terms.
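If you want to double-check the hand computation, a one-liner with `sympy` (assuming it is installed) reproduces these terms:

```python
# Taylor expansion of x*sqrt(x) about x = 9, up to the quadratic term.
import sympy as sp

x = sp.symbols('x')
print(sp.series(x * sp.sqrt(x), x, 9, 3))
# should print: 27 + 9*(x - 9)/2 + (x - 9)**2/8 + O((x - 9)**3, (x, 9))
```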
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Limits using Maclaurins expansion for $\lim_{x\rightarrow 0}\frac{e^{x^2}-\ln(1+x^2)-1}{\cos2x+2x\sin x-1}$ $$\lim_{x\rightarrow 0}\frac{e^{x^2}-\ln(1+x^2)-1}{\cos2x+2x\sin x-1}$$ Using Maclaurin's expansion for the numerator gives: $$\left(1+x^2\cdots\right)-\left(x^2-\frac{x^4}{2}\cdots\right)-1$$ And the denominator: $$\left(1-2x^2\cdots\right) + \left(2x^2-\frac{x^4}{3}\cdots\right)-1$$ $$\therefore \lim_{x\rightarrow 0} f(x) = \frac{-\dfrac{x^4}{2}}{-\dfrac{x^4}{3}} = \frac{3}{2}$$ But Wolfram gives that the limit is $3$. I thought, maybe I used too few terms. What is a thumb rule for how many terms in expansion to use to calculate limits? Using three terms yielded the answer $\lim_{x\rightarrow 0}f(x) = -4$. What did I do wrong?
In a case where $x \to 0$ as here, if the two expansions of top and bottom happen to start with the same degree, the limit is the ratio of the coefficients. Here, the top expansion starts with $x^4$ while the bottom starts with $x^4/3$ making the limit $3.$ That is, we have $$\frac{x^4+\cdots}{x^4/3+\cdots},$$ and when top and bottom are divided by $x^4$ the top becomes $1$ plus terms going to zero, while the bottom becomes $1/3$ plus terms going to zero, as $x\to 0.$ [note there are some signs to keep track of, in seeing that each of top and bottom actually start out with a nonzero fourth power.]
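A quick numerical check that the ratio really tends to $3$ (a sketch assuming `numpy`):

```python
# Evaluate the ratio at shrinking x; it should approach 3.
import numpy as np

for x in [0.5, 0.1, 0.05, 0.01]:
    num = np.exp(x**2) - np.log1p(x**2) - 1
    den = np.cos(2*x) + 2*x*np.sin(x) - 1
    print(x, num / den)
```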
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Quadratic formula in double inequalities I have the double inequality: $-x^2 + x(2n+1) - 2n \leq u < -x^2 + x(2n-1)$ and I am trying to get it into the form $x \leq \text{ anything } < x+1$ Or at least solve for x as the smallest term. I know I need to use the quadratic formula but I don't understand how I can solve for two quadratics at once? How would this work?
If you write $$f_1(x)=x^2-(2n+1)x+2n+u\ ,\quad f_2(x)=x^2-(2n-1)x+u$$ then you want to solve $$f_1(x)\ge0\ ,\quad f_2(x)<0\ .$$ The important thing to notice is that $$f_2(x)=f_1(x+1)\ .$$ Each quadratic has discriminant $$\Delta=(2n-1)^2-4u\ .$$ Firstly, if $\Delta\le0$ then $f_2(x)$ cannot be negative, and so there is no solution. Now consider graphing $y=f_1(x)$ and $y=f_2(x)$, both on the same axes. (Please draw it yourself, I am not good at posting diagrams online.) The graph of $f_1$ is just that of $f_2$, shifted $1$ unit to the right. Suppose that $f_1$ has roots $\alpha_1<\alpha_2$ and $f_2$ has roots $\beta_1<\beta_2$. Please mark these on your graph. There are two cases. * *Case I: if $\Delta\le1$ then $\beta_2\le\alpha_1$ and from the graph you can see that the solution is $\beta_1<x<\beta_2$. *Case II: if $\Delta>1$ then $\beta_2>\alpha_1$ and the solution is $\beta_1<x\le\alpha_1$. So, the solutions are: * *if $\Delta\le0$, no solution; *if $0<\Delta\le1$ then $n-\frac12-\frac{\sqrt\Delta}2<x<n-\frac12+\frac{\sqrt\Delta}2$; *if $\Delta>1$ then $n-\frac12-\frac{\sqrt\Delta}2<x\le n+\frac12-\frac{\sqrt\Delta}2$. Putting the answer in the form you requested, this implies $$x\le n-\frac12|1-\sqrt\Delta|<x+1\ ,$$ provided that $\Delta>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Branch cut and principal value I do not understand the principal value and it is relation to branch cut. Please tell me about principal value with some examples, then explain the branch cut concept. For instance, what is the $\text{Arg} (-1-i)$ , tell me your thinking steps. Thanks
In Complex Analysis, we usually have defined $\arg(z)$ and $\text{Arg}(z)$ where the later generally denotes the principal argument. Most books that I have dealt with define the principal argument to lie in $(-\pi, \pi)$ but it is not unheard of to see it defined between $(0, 2\pi)$. If we consider $z = -1 - i$, then we have \begin{align} \arg(z) &= \arctan(1)\\ &= \frac{5\pi}{4} + 2\pi k\\ \text{Arg}(z) &= \frac{-3\pi}{4} \end{align} We have to remember to mind the quadrant that the point $z$ lies in when taking the $\arctan$ As I said in the comments, Wikipedia has a good explanation of the branch cut "A branch cut is a curve in the complex plane such that it is possible to define a single analytic branch of a multi-valued function on the plane minus that curve. Branch cuts are usually, but not always, taken between pairs of branch points.(Wikipedia)"
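For the concrete example in the question, Python's `cmath.phase` returns the principal argument in $(-\pi,\pi]$, which matches the value above:

```python
import cmath

print(cmath.phase(-1 - 1j))   # -2.356194..., i.e. -3*pi/4
print(-3 * cmath.pi / 4)
```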
{ "language": "en", "url": "https://math.stackexchange.com/questions/1069992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the remainder if $19^{55}$ is divided by 13. The question, as stated in the title, is Find the remainder if $19^{55}$ is divided by 13. Here is my approach for solving this problem. I know that $19\equiv6$ (mod 13), so $19^{55}\equiv 6^{55}$ (mod 13). Then I can see that $6^{12}\equiv 1$ (Fermat's Little Theorem), so $6^{55}=\left(6^{12}\right)^46^7\equiv 6^7$ (mod 13). And from here I don't know where to go. Could you give me a bit of guidance? Is there a much easier path to the answer?
$6^7\equiv 6(6^2)^3\equiv 6(-3)^3\equiv -6\cdot 27\equiv -6\cdot 1\equiv 7\pmod{13},$ since $27\equiv 1\pmod{13}$. Hence the remainder is $7$.
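A one-line verification with modular exponentiation:

```python
print(pow(19, 55, 13))   # 7
print((-6 * 27) % 13)    # 7, matching the last congruence above
```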
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
A power series that converges for $|x| \leq 1$ and diverges otherwise. I need to find a power series $\sum a_n z^n$ that converges for $|x| \leq 1$ and diverges otherwise. I think I have one I just want to be sure. So, the series: $\sum \frac{z^n}{n^2}$ has radius of convergence of 1. So it converges when $|z| <1$ and diverges when $|z| >1$, correct? And we know it converges at $z= \pm 1$ by the comparison test, correct? This part is where I'm having trouble with. Could someone explain in detail how to use the comparison test with this? I know the comparison test says, "if you have two series $\sum a_n$ and $\sum b_n$ with $a_n, b_n \geq0$ and $a_n \leq b_n$, then if $\sum b_n$ converges, then $\sum a_n$ converges." But what other series would you use in the comparison test. I also know that $|\frac{z^n}{n^2}|= \frac{1}{n^2}$. Can you use this fact? Please help! This series would work correct?
You don't need a second series for the comparison test beyond the one you already noticed: if $|z|=1$ then $\left|\frac{z^n}{n^2}\right|=\frac{1}{n^2}$, so $\sum \frac{z^n}{n^2}$ converges absolutely by comparison with $\sum \frac{1}{n^2}$, and hence it converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Linear Algebra: Polynomials Basis Consider the polynomials $$p_1(x) = 1 - x^2,\;p_2(x) = x(1-x),\;p_3(x) = x(1+x)$$ Show that $\{p_1(x),\,p_2(x),\,p_3(x)\}$ is a basis for $\Bbb P^2$. My question is how do you even go about proving that these polynomials are even independent? Are there certain rules I should know?
No particular rule for polynomials: they are elements of a vector space of dimension $3$. Since $\{1;x;x^2\}$ is obviously a basis for your space, you can simply show that the matrix $$ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ -1 & -1 & 1 \end{bmatrix} $$ has rank $3$, which is done by a simple elimination. Why is this true? Because the columns of this matrix are the coordinates of $p_1$, $p_2$ and $p_3$ with respect to the basis $\{1;x;x^2\}$ and a set of vectors is linearly independent if and only if the set of their coordinate vectors is linearly independent.
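A quick check of the rank with `numpy` (assuming it is available):

```python
# Coordinate matrix of p1, p2, p3 with respect to the basis {1, x, x^2}.
import numpy as np

M = np.array([[ 1,  0, 0],
              [ 0,  1, 1],
              [-1, -1, 1]])
print(np.linalg.matrix_rank(M))  # 3, so {p1, p2, p3} is a basis
```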
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Series sum $\sum 1/(n^2+(n+1)^2)$ In an exercise, I caculate the Fourier expansion of $e^x$ over $[0,\pi]$ is $$e^x\sim \frac{e^\pi-1}{\pi}+\frac{2(e^\pi-1)}{\pi}\sum_{n=1}^\infty \frac{\cos 2nx}{4n^2+1}+\frac{4(1-e^\pi)}{\pi}\sum_{n=1}^\infty \frac{n\sin 2nx}{4n^2+1}.$$ From this, it is easy to deduce $$\sum_{n=1}^\infty \frac{1}{4n^2+1}=\frac{\pi}{4}\frac{e^\pi+1}{e^\pi-1}-\frac{1}{2}.$$ However, I could not find the following sum $$\sum_{n=1}^\infty \frac{1}{(2n-1)^2+1},$$ from which we can calculate the sum $\sum 1/(n^2+1)$.
We can approach such kind of series by considering logarithmic derivatives of Weierstrass products. For instance, from: $$\cosh z = \prod_{n=1}^{+\infty}\left(1+\frac{4z^2}{(2n-1)^2\pi^2}\right)\tag{1}$$ we get: $$\frac{\pi}{2}\tanh\frac{\pi z}{2} = \sum_{n=1}^{+\infty}\frac{2z}{z^2+(2n-1)^2}\tag{2},$$ so, evaluating in $z=1$: $$\sum_{n=1}^{+\infty}\frac{1}{(2n-1)^2+1}=\color{red}{\frac{\pi}{4}\tanh\frac{\pi}{2}}.\tag{3}$$ With the same approach, but starting from the Weierstrass product for $\frac{\sinh z}{z}$, we can compute $\sum_{n\geq 1}\frac{1}{1+n^2}$, too: $$\sum_{n\geq 1}\frac{1}{n^2+1}=\frac{-1+\pi\coth\pi}{2}.\tag{4}$$
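Both closed forms are easy to sanity-check numerically (a sketch assuming `numpy`; the partial sums converge like $1/N$, so expect agreement to several digits):

```python
import numpy as np

n = np.arange(1, 1_000_001)
print(np.sum(1.0/((2*n - 1)**2 + 1)), np.pi/4 * np.tanh(np.pi/2))   # (3)
print(np.sum(1.0/(n**2 + 1)), (np.pi/np.tanh(np.pi) - 1)/2)         # (4)
```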
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Discrete analogue of Green's theorem Following formula concerning finite differences is in a way a discrete analogue of the fundamental theorem of calculus: $$\sum_{n=a}^b \Delta f(n) = f(b+1) - f(a) $$ We can think about the Green's theorem as a two-dimensional generalization of fundamental theorem of calculus, so I'm interested is there a discrete analogue of Green's theorem?
It's entirely about summation: in a partition of a plane region $R$, we have the following for any function $G$ defined on the edges of the partition: $$\sum_{\partial R} G=\sum_{R} \Delta G$$ ($\Delta G$ is defined on the faces of the partition).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Calculate $\lim_{x \to 0} (e^x-1)/x$ without using L'Hôpital's rule Any ideas on how to calculate the limit of $(e^x -1)/{x}$ as $x$ goes to zero without applying L'Hôpital's rule?
I don't know if this is really "without" l'Hôpital's rule for you, but if you are allowed to use $$ \exp(x)=\sum_{k=0}^\infty \frac{x^k}{k!} $$ the limit is straightforward, since this sum converges locally uniformly: for $x\neq0$, $$\frac{e^x-1}{x}=\sum_{k=1}^\infty \frac{x^{k-1}}{k!}=1+\frac{x}{2!}+\frac{x^2}{3!}+\cdots\longrightarrow 1\quad\text{as }x\to0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Why is this change of basis useful? In my textbook there is a theorem which states Let $A$ be a real $2\times 2$ matrix with complex eigenvalues $\lambda =a\pm bi$ (where $(b\ne 0)$. If $\mathbf x$ is an eigenvector of $A$ corresponding to $\lambda=a-bi$, then the matrix $P=\begin{bmatrix} \operatorname{Re}(\mathbf x) & \operatorname{Im}(\mathbf x)\end{bmatrix}$ is inveritble and $$A=P\begin{bmatrix} a & -b \\ b & a\end{bmatrix}P^{-1}$$ My question is: why is this important? I understand why diagonalizing a matrix is important -- it's easier to operate on diagonal matrices than arbitrary matrices. But why is this decomposition (is this a decomposition? I've heard the word and it sounds like what this is) useful at all? Does it have something to do with the fact the complex numbers can be represented by $2\times 2$ matrices? Edit: I'm also curious if this has any analogs in higher dimensions. That is, can we find a similar decomposition for $3\times 3$ (or $4\times 4$ or $5\times 5$) matrices? Thanks.
You can write $$C=\begin{bmatrix} a & -b \\ b & a\end{bmatrix}=\alpha\left[\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta &\cos\theta\end{array}\right]$$ with $\alpha=\sqrt{a^2+b^2}$ and $\theta$ the angle of the point $(a,b)$; that is, $C$ is a rotation by $\theta$ composed with a scaling by $\alpha$. Then $C^n$ involves $n\theta$ (it is a rotation by $n\theta$ scaled by $\alpha^n$), so it is easy to calculate, and hence powers of $A=PCP^{-1}$ are easy to calculate. On the other hand, I don't know how useful it is because I hadn't heard of it before.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculating canonical divisor in product of projective spaces. Let $X$ be an intersection of two divisors of bidegree $(a,b)$ and $(c,d)$ in $\mathbb{P^2}\times \mathbb{P^2}$. Then how can I find the canonical divisor $K_X$? I'm asking because I have no experience in working with bidegrees.
As usual when you have an intersection like this (which I assume has the right dimension and so on), you can use the adjunction formula. Adjunction says that we have an exact sequence of bundles on $X$ $$ 0 \rightarrow N_X^\vee \longrightarrow \Omega_{\mathbf P^2 \times \mathbf P^2 \mid X} \rightarrow \Omega_X \longrightarrow 0$$ where $N_X$ is the normal bundle of $X$. Now taking top exterior powers and rearranging we get $$K_X = K_{\mathbf P^2 \times \mathbf P^2 \mid X} \otimes \bigwedge^2 N_X.$$ We know that $K_{\mathbf P^2 \times \mathbf P^2 \mid X} = O(-3,-3)_{\mid X}$ so it remains to find $\bigwedge^2 N_X$. If we write $X=D_1 \cap D_2$ for the intersection of your two divisors, then \begin{align*} N_X &= (N_{D_1} \oplus N_{D_2})_{\mid X} \\ &= (O(a,b) \oplus O(c,d) )_{\mid X}. \end{align*} So $\bigwedge^2 N_X = O(a+c,b+d)_{\mid X}$. Putting everything together we get $$K_X = O(a+c-3,b+d-3)_{\mid X}.$$ Edit: Alex asked why my formula for $\bigwedge^2 N_X$ is true, so let me give some more detail on that. There is a general formula for the exterior powers of a direct sum, written out nicely for example in the answers here. In particular, it says that if $L_1$ and $L_2$ are line bundles, then $$\bigwedge^2 (L_1 \oplus L_2) = L_1 \otimes L_2.$$ Applied here that gives us \begin{align*}\bigwedge^2 N_x &= O(a,b) \otimes O(c,d) \\ &= \left(\pi_1^* O(a) \otimes \pi_2^* O(b) \right) \otimes \left( \pi_1^*O(c) \otimes \pi_2^* O(d) \right) \\ \end{align*} where $\pi_1$, $\pi_2$ are the two projections $\mathbf P^2 \times \mathbf P^2 \rightarrow \mathbf P^2$. Rearranging the factors in the tensor product then gives what we are after.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Triangulation of hypercubes into simplices A square can be divided into two triangles. A 3-dimensional cube can be divided into 6 tetrahedrons. Into what number of simplices an n-dimensional hypercube can be divided? (For example, a 4-hypercube, or a 5-hypercube.)
As mentioned before there are (unsurprisingly) many ways to triangulate the cube. However one easy (and sometimes useful) way to do this is: Let $\pi$ be a permutation of $\{1,2,...,d\}$. Then define $S_\pi = \{x \in \mathbb{R}^d: 0 \leq x_{\pi(1)} \leq x_{\pi(2)}\leq ... \leq x_{\pi(d)} \leq 1 \}$. Clearly, $S_\pi$ is a simplex since it is described by $d+1$ linearly independent inequalities. Now the cube $[0,1]^d$ can be triangulated by all $S_\pi$ where $\pi$ ranges over all permutations of $\{1,2,...,d\}$. To check that these simplices have disjoint interior is an easy exercise. This method uses $d!$ simplices corresponding to $2! = 2$ in dimension 2 and $3!=6$ in dimension 3.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
$A$ has full rank iff $A^H A$ is invertible Let $A \in \mathbb{K}^{m,n}$ be a matrix. How to show that $\text{rank}(A) = n$ if and only if the matrix $A^HA$ is invertible?
Unsure of your notation/assumptions, but here's a hint: * *For real matrices, $$\text{rank}(A^*A)=\text{rank}(AA^*)=\text{rank}(A)=\text{rank}(A^*)$$ *For complex matrices, $$\text{rank}(A^*A)=\text{rank}(A)=\text{rank}(A^*)$$ Mouse over for more after you've pondered it a bit: $\text{rank}(A)=n\iff \text{rank}(A^*A)=n\iff A^*A\text{ has full rank}\iff A^*A\text{ invertible}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why schemes are $(X,\mathcal O_X)$ rather than $(\mathcal O_X,X)$ or $\{X,\mathcal O_X\}$ Is there a reason why schemes are ordered pairs $(X,\mathcal O_X)$ rather than for example $(\mathcal O_X,X)$ or $\{X,\mathcal O_X\}$?
This is a general way to talk about structures that consist of several "parts". For example a field is a set $F$ together with two operations ($+$, $\cdot$) and two special elements ($0$ and $1$). So to unambiguously specify a field, we could denote it with a $5$-tuple $(F,+,\cdot,0,1)$. Then for example a field homomorphism to another field $(F',+',\cdot',0',1')$ is a map $f\colon F\to F'$ such that $f(a+b)=f(a)+'f(b)$, $f(a\cdot b)=f(a)\cdot'f(b)$, $f(0)=0'$, $f(1)=1'$. It is necessary to explicitly mention the other parts $+,\cdot,0,1$ because we could define different field structures on the very same set $F$. Then again, is this really less ambiguous? We might indeed employ a different convention and write the $5$-tuple in a different order, for example $(F,0,1,\cdot,+)$, or we might even consider it unnecessary to specifically mention $0$. But within a text (a course, a book), one should fix one such notation once and for all - later you will by abuse of language speak of "the field $\mathbb Q$" anyway. So back to your original situation: One could certainly use the notation $(\mathcal O_X,X)$ instead of $(X,\mathcal O_X)$; but it is very convenient to always start with the underlying set (as I did with fields above), so here we start with $X$. (Incidentally, one could even consider $X$ redundant, but that is another story.) And why don't we write $\{X,\mathcal O_X\}$? Well, in a set we cannot automatically distinguish between the elements by any implicit order. For example, if we did so for fields and said a field is a set $\{F,+,\cdot,0,1\}$, then we would have no way to tell which of $+$, $\cdot$ is addition and which is multiplication, e.g., we could not tell if the distributive law should be $a\cdot(b+c)=(a\cdot b)+(a\cdot c)$ or $a+(b\cdot c)=(a+b)\cdot (a+c)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the set $\{x\in\mathbb{R}^N:\nabla f(x)=0 \}$ is convex Let $f:\mathbb{R}^N\rightarrow \mathbb{R}$ be a $C^1$ convex function. Show that $\{x\in\mathbb{R}^N:\nabla f(x)=0 \}$ is convex (we assume that empty set is convex). Any hint?
Convexity implies $f(y)\ge f(x)+\langle \nabla f(x),y-x\rangle$ for all $x,y$. Specializing this to the points of your set, you will find they are points where $f$ attains its global minimum, say $m$. Argue that $\{x:f(x)=m\}$ is convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1070993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the required radius of the smaller circles around a larger circle so they touch? I am trying to determine how to calculate the required radius of the smaller circles so they touch each other around the larger circle. (red box) I would like to be able to adjust the number of smaller circles and the radius of the larger circle. As an example: $$\begin{align} R&=1.5\\ n&=9\\ r&=\,? \end{align}$$
Another approach: Let's say we have $n$ small circles. Then the center points of the small circles form a regular $n$-gon whose side length is $2r$. The radius of the big circle is the circumradius of the $n$-gon, which is $R = \frac{2r}{2\sin(\pi/n)}$, so $r = R \sin(\pi/n)$.
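Plugging in the numbers from the question ($R=1.5$, $n=9$), and checking that adjacent circles indeed touch (a sketch using only the standard library; `math.dist` needs Python 3.8+):

```python
import math

R, n = 1.5, 9
r = R * math.sin(math.pi / n)
print(r)  # ~0.513

# adjacent small-circle centers sit on the big circle, 2*R*sin(pi/n) apart,
# which equals 2*r, so neighbouring circles of radius r are tangent:
c0 = (R, 0.0)
c1 = (R * math.cos(2*math.pi/n), R * math.sin(2*math.pi/n))
print(math.dist(c0, c1), 2*r)  # equal
```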
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Does Binomial variables independence implies Bernoulli variables independence $X$, $Y$ are independent variables with Binomial distribution. $X={\sum_{i=1}^nX_i}$, $Y={\sum_{i=1}^nY_i}$. $X_i$, ($1\le i\le n$) are independent Bernoulli variables. Same applies for $Y_i$ Is the set of $X_i$ and $Y_i$ independent?
Surprisingly, the answer is no. Consider the case $n=2$ with probability space $\{0,1\}^4$ and $X_1, X_2$ the first two coordinate functions and $Y_1, Y_2$ the second two. The probabilities of the $16$ different configurations are $$\begin{array}{ccccc} x_1 & x_2 & y_1 & y_2 & p(x_1,x_2,y_1,y_2)\cr 0 & 0 & 0 & 0 &1/16\cr 0 & 0 & 0 & 1 &2/16\cr 0 & 0 & 1 & 0 &0\cr 0 & 0 & 1 & 1 &1/16\cr 0 & 1 & 0 & 0 &2/16\cr 0 & 1 & 0 & 1 &2/16\cr 0 & 1 & 1 & 0 &0\cr 0 & 1 & 1 & 1 &0\cr 1 & 0 & 0 & 0 &0\cr 1 & 0 & 0 & 1 &0\cr 1 & 0 & 1 & 0 &2/16\cr 1 & 0 & 1 & 1 &2/16\cr 1 & 1 & 0 & 0 &1/16\cr 1 & 1 & 0 & 1 &0\cr 1 & 1 & 1 & 0 &2/16\cr 1 & 1 & 1 & 1 &1/16\cr \end{array}$$ Then unless I've miscalculated $X_1, X_2$ are independent Bernoulli(1/2), $Y_1, Y_2$ are independent Bernoulli(1/2), $X_1+X_2$ and $Y_1+Y_2$ are independent, but e.g. $X_1$ and $Y_1$ are dependent: $P(X_1 = 1, Y_1 = 1) = 7/16 \neq \frac14 = P(X_1=1)P(Y_1=1)$.
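A brute-force check of the table (a sketch using only the standard library; it verifies exactly the claims above, with exact arithmetic):

```python
from itertools import product
from fractions import Fraction as F

# probabilities in the same row order as the table
probs = [1,2,0,1, 2,2,0,0, 0,0,2,2, 1,0,2,1]
p = {cfg: F(w, 16) for cfg, w in zip(product([0, 1], repeat=4), probs)}

def P(event):
    return sum(q for cfg, q in p.items() if event(*cfg))

# X1+X2 and Y1+Y2 are independent:
for a in range(3):
    for b in range(3):
        joint = P(lambda x1, x2, y1, y2: x1 + x2 == a and y1 + y2 == b)
        prod_ = P(lambda x1, x2, y1, y2: x1 + x2 == a) \
              * P(lambda x1, x2, y1, y2: y1 + y2 == b)
        assert joint == prod_

# ... but X1 and Y1 are not:
print(P(lambda x1, x2, y1, y2: x1 == 1 and y1 == 1))                          # 7/16
print(P(lambda x1, x2, y1, y2: x1 == 1) * P(lambda x1, x2, y1, y2: y1 == 1))  # 1/4
```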
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can someone help me understand the proof that every Cauchy sequence is bounded? This proof is written by a user Batman as an answer to someone's question (just to give credit). Every proof that I've seen is the same idea, and I'm having trouble understanding it intuitively. (I don't see why to take $n=N$ and why the max is used.) If someone could explain it in words it would be really helpful. Choose $\epsilon>0$. Then, there exists $N$ such that for $m,n \ge N$, $|a_m-a_n|<\epsilon$. By the triangle inequality, $|a_m|-|a_n|\le|a_m-a_n|<\epsilon$. Take $n=N$ and we see $|a_m|-|a_N|<\epsilon$ for all $m\ge N$. Rearranging, we have $|a_m|<\epsilon+|a_N|$ for all $m\ge N$. Thus, $|a_m|\le\max\{|a_0|,|a_1|,\dots,|a_{N-1}|,\epsilon+|a_N|\}$ for all $m$. Thus, $a_m$ is bounded (it is sandwiched in $\pm\max\{|a_0|,|a_1|,\dots,|a_{N-1}|,\epsilon+|a_N|\}$).
Just to make things simpler, let's take $\epsilon=1$. Since the sequence is Cauchy, there is an integer $N$ such that $m,n\ge N\implies|a_m-a_n|<1$. In particular, taking $n=N$, we have that $|a_m-a_N|<1$ for $m\ge N$; so $|a_m|=|(a_m-a_N)+a_N|\le |a_m-a_N|+|a_N|<1+|a_N|$ for $m\ge N$, using the triangle inequality. Now we have a bound for all but finitely many of the terms of the sequence, but we still have to account for the terms $a_1,a_2,\cdots,a_{N-1}$. If we take $M=\max\{|a_1|,\cdots,|a_{N-1}|, 1+|a_N|\}$, then $|a_m|\le M$ for all $m$, since $|a_m|\le M$ for $m\ge N$ by the first part, and $|a_m|\le M$ for $1\le m\le N-1$ by the definition of $M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Sum of $k$-th powers Given: $$ P_k(n)=\sum_{i=1}^n i^k $$ and $P_k(0)=0$, $P_k(x)-P_k(x-1) = x^k$ show that: $$ P_{k+1}(x)=(k+1) \int^x_0P_k(t) \, dt + C_{k+1} \cdot x $$ For $C_{k+1}$ constant. I believe a proof by induction is the way to go here, and have shown the case for $k=0$. This is where I'm stuck. I have looked at the right hand side for the k+1 case: $$ (k+2)\int^x_0P_{k+1}(t) \, dt + C_{k+2} \cdot x $$ and I don't see how this reduces to $P_{k+2}(x)$. Even if we are assuming the kth case, replacing $P_{k+1}$ in the integrand of the $(k+1)$-st case just makes it more messy. I am not looking for the answer just a push in the right direction. I can see that each sum ends up as a polynomial since expressions like $P_1(x) = 1+2+\cdots+x=\frac{x(x+1)}{2}$, but I don't know how to do that for arbitrary powers, and I believe I don't need to in order to solve this problem.
Hm. This problem is weird, in that as defined $P_k(n)$ is only defined on the naturals. Although as you noted, you can find a closed form and evaluate it at an arbitrary point. Have you tried taking the derivative of both sides and using Fundamental Thm of Calculus? That would get rid of the integral, and turns the $C_{k+1}\cdot x$ term to just a constant, which looks more appealing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
When is a number square in Galois field $p^n$ if it's not square mod $p$? Here is the problem, that I'm stuck on. There is no square root of $a$ in $\mathbb{Z}_p$. Is there square root of $a$ in $GF(p^n)$? Well, it's certainly true that $$x^{p^n}=x$$ and $$x^{p^n-1}=1$$ for nonzero $x$. If there is such an $x$ that $a=x^2$ then $$a^{\frac{p^n-1}{2}}=1$$ Now this actually is an equation in $\mathbb{Z}_{p}$ only, so I can write $$a^{\frac{p^n-1}{2}}\equiv 1\mod p$$ Now the answer seems to be just a touch away because I know that $x^2\equiv a \mod p$ iff $a^{(p-1)/2}\equiv 1 \mod p$. How can I tie the ends together, what am I missing?
Because finite fields are uniquely determined by their order, you know that if $a\in\Bbb Z$, then one of two things is true: either $a=x^2$ for some $x\in\Bbb Z/p=\Bbb F_p$, or not. In the case it is not, $x^2-a$ is irreducible over $\Bbb F_p$, so $$\Bbb F_p[x]/(x^2-a)\cong\Bbb F_{p^2},$$ and this is independent of the choice of non-square $a\in\Bbb F_p$. In particular, every non-square of $\Bbb F_p$ already has a square root in $\Bbb F_{p^2}$. Since $\Bbb F_{p^n}$ contains at most one subfield of each order, a non-square $a\in\Bbb F_p$ becomes a square in $\Bbb F_{p^n}$ iff $$\Bbb F_{p^n}\supseteq\Bbb F_{p^2}\iff 2\mid n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding the radius of a third tangent circle Sorry if this is a foolish question, but I'm having difficulty understanding how to solve for $r_3$ in the following diagram... According to WolframAlpha's page on tangent circles, the radius of $c_3$ can be calculated using the following formula $r_3=\frac{r_1 \times r_2}{(\sqrt{r_1}+\sqrt{r_2})^2}$, which can be simplified to $\frac{1}{\sqrt{r_3}}=\frac{1}{\sqrt{r_1}} + \frac{1}{\sqrt{r_2}}$. To be completely honest, I'm having a hard time understanding understanding how this formula works (as I'm a very visual person). What exactly is happening here?
This answer to a slightly different problem gives a useful diagram showing how to compute the distance between the points of tangency of two circles and a line, given that the circles are externally tangent (as yours are). From this we see that if we label the three points of tangency $A,$ $B,$ and $C$ (in sequence from the leftmost such point to the rightmost in your diagram), then considering just the two circles of radius $r_1$ and $r_3$, which touch the line at $A$ and $B$, $$ |AB| = 2\sqrt{r_1 r_3}. $$ For the other two pairs of circles we get $ |BC| = 2\sqrt{r_2 r_3} $ and $ |AC| = 2\sqrt{r_1 r_2}.$ We can also see that $ |AC| = |AB| + |BC|;$ substituting the formulas we just found for those three lengths (or perhaps even better still, labeling the three distances $ 2\sqrt{r_1 r_3},$ $ 2\sqrt{r_2 r_3},$ and $2\sqrt{r_1 r_2}$ on your diagram), $$ 2\sqrt{r_1 r_3} + 2\sqrt{r_2 r_3} = 2\sqrt{r_1 r_2}.$$ I can't think of a visual representation of the last step, but algebraically, you can divide all three terms by $2\sqrt{r_1 r_2 r_3}$ to get $$ \frac1{\sqrt{r_2}} + \frac1{\sqrt{r_1}} = \frac1{\sqrt{r_3}},$$ which is your simplified formula. You can then get the other formula by further algebraic manipulation.
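A numerical check of the tangency claims (standard library only; `math.dist` needs Python 3.8+; the radii $r_1=4$, $r_2=1$ are arbitrary test values):

```python
import math

r1, r2 = 4.0, 1.0
r3 = r1 * r2 / (math.sqrt(r1) + math.sqrt(r2))**2   # = 4/9 here

# centers sit at height r_i above the line;
# |AC| = 2*sqrt(r1*r2) and |AB| = 2*sqrt(r1*r3)
c1 = (0.0, r1)
c2 = (2*math.sqrt(r1*r2), r2)
c3 = (2*math.sqrt(r1*r3), r3)

print(math.dist(c1, c3), r1 + r3)  # equal => externally tangent
print(math.dist(c2, c3), r2 + r3)  # equal
```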
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question on induction technique When one uses induction (say on $n$) to prove something, does it mean the proof holds for all finite values of $n$ or does it always hold when even $n$ takes $\pm\infty$?
Regular inductive proofs work only over $\mathbb{N}$, the set of natural numbers. The proof that "inductive proofs work" depends on the well-ordering property of $\mathbb{N}$, and $\infty \not\in \mathbb{N}$. As people in the comments have pointed out, induction can be extended to other well-ordered sets as well. Regardless, it is meaningless to ask if a statement holds at $\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 0 }
Proving $\frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{n}}>2-\frac{2}{n}$ by induction for $n\geq 1$ I have the following inequality to prove with induction: $$P(n): \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots\frac{1}{\sqrt{n}}>2-\frac{2}{n}, \forall n\in \mathbb{\:N}^*$$ I tried to prove $P(n+1)$: Let $S = \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{n+1}}\Rightarrow$ $$P(n+1):S>2-\frac{2}{n+1}$$ I got into this point and I think it is all wrong, but I'll write it here also: $$S>\frac{2(n+1) + n\sqrt{n+1}}{n(n+1)}$$ and I don't know what to do next... Could anybody help me, please? I would also like to know if there's any other smarter way of solving this kind of exercises.
Initial comment: Begin by noting that, for all $n\geq 1$, we have that $$ n(\sqrt{n}-2)+2>0\Longleftrightarrow n\sqrt{n}-2n+2>0\Longleftrightarrow \color{red}{\sqrt{n}>2-\frac{2}{n}}.\tag{1} $$ Thus, it suffices for us to prove the proposition $P(n)$ for all $n\geq 1$ where $$ P(n): \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{n}}\geq \sqrt{n}.\tag{2} $$ If we can prove $(2)$, then we will have proven $$ \color{blue}{\frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{n}}}\color{red}{\geq\sqrt{n}}\color{blue}{> 2-\frac{2}{n}}, $$ as desired. I'm sure you can handle the proof of $(1)$ quite easily. Claim: For $n\geq 1$, let $P(n)$ denote the statement $$ P(n): \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{n}}\geq \sqrt{n}. $$ Base step: $P(1)$ holds since $1\geq\sqrt{1}$ is true. Before induction step: Consider the following inequality for any $x\geq 1$: $$ \sqrt{x}+\frac{1}{\sqrt{x+1}}>\sqrt{x+1}\tag{3}. $$ Briefly, observe that for $x\geq 1, \sqrt{x(x+1)}>x$; thus, $\sqrt{x(x+1)}+1>x+1$. Dividing by $\sqrt{x+1}$ proves $(3)$. The purpose of $(3)$ is to streamline the calculations below in the inductive step. Inductive step: Fix some $k\geq 1$ and suppose that $P(k)$ is true. Then \begin{align} \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{k}}+\frac{1}{\sqrt{k+1}} &\geq \sqrt{k}+\frac{1}{\sqrt{k+1}}\tag{by $P(k)$}\\[1em] &> \sqrt{k+1},\tag{by $(3)$} \end{align} which shows that $P(k+1)$ follows. This concludes the inductive step. Thus, for all $n\geq 1, P(n)$ is true. $\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Under what conditions the point $z$ is unique in its range of existence? Theorem: Let $f$ be continuous on $[a,b]$. If the range of $f$ contains $[a,b]$, then the equation $f(x)=x$ has at least one solution $z$ in $[a,b]$, i.e., $f(z)=z$. My question is: Under what conditions the point $z$ is unique in its range of existence?
This is a sufficient condition: $$ |f(x)-f(y|<|x-y|\quad\forall x,y\in[a,b]. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In triangle ABC, Find $\tan(A)$. In triangle ABC, if $(b+c)^2=a^2+16\triangle$, then find $\tan(A)$ . Where $\triangle$ is the area and a, b , c are the sides of the triangle. $\implies b^2+c^2-a^2=16\triangle-2bc$ In triangle ABC, $\sin(A)=\frac{2\triangle}{bc}$, and $\cos(A)=\frac{b^2+c^2-a^2}{2bc}$, $\implies \tan(A)=\frac{4\triangle}{b^2+c^2-a^2}$ $\implies \tan(A)=\frac{4\triangle}{16\triangle-2bc}$. But the answer is in the form, $\frac{x}{y}$ where $x$ and $y$ are integers. Any help is appreciated. Thanks in advance.
HINT: $$16\triangle=(b+c+a)(b+c-a)$$ $$\iff16rs=2s(b+c-a)$$ (using $\triangle=rs$ and $a+b+c=2s$) $$8r=b+c-a$$ Using this and $a=2R\sin A$ etc., $$8\cdot4R\prod\sin\dfrac A2=2R\cdot4\cos\dfrac A2\sin\dfrac B2\sin\dfrac C2$$ $$\implies4\sin\dfrac A2=\cos\dfrac A2$$ as $0<B,C<\pi,\sin\dfrac B2\sin\dfrac C2\ne0$. So $\tan\dfrac A2=\dfrac14$, and therefore $$\tan A=\frac{2\tan\frac A2}{1-\tan^2\frac A2}=\frac{2\cdot\frac14}{1-\frac1{16}}=\frac{8}{15}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1071953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Interpreting the Möbius function of a poset I have just learned about incidence algebras and Möbius inversion. I know that the Möbius function is the inverse of the zeta function, and that it appears in the important Möbius inversion formula. But does it have any interpretation outside these two contexts? Does the Möbius function itself have any meaning? In other words, what can I learn about a poset by looking only at the Möbius function and its values?
Here is one answer: Every interval in the poset has an associated abstract simplicial complex consisting of all chains in the interval that do not contain the maximum or the minimum. This complex gives rise to some topological space, which is often an interesting space for common posets (e.g. a sphere). The Mobius function evaluated on the interval gives you the reduced Euler characteristic of this complex because it is the number of chains of odd length minus the number of chains of even length (which is another way to interpret it). Here we consider the empty set to be a chain of even length.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solving second order difference equations with non-constant coefficients For the difference equation $$ 2ny_{n+2}+(n^2+1)y_{n+1}-(n+1)^2y_n=0 $$ find one particular solution by guesswork and use reduction of order to deduce the general solution. So I'm happy with second order difference equations with constant coefficients, but I have no idea how to find a solution to an example such as this, and I couldn't find anything useful through Google or in my text book. EDIT: I copied the question wrong, answers make more sense now I realise that ..
Hint: Write the equation as $$2n(y_{n+2} - y_{n+1})= (n + 1)^2(y_{n} - y_{n+1}),$$ so that, with $A_n := y_{n+1}-y_n$, it becomes the first-order equation $$2n A_{n+1} + (n+1)^2A_{n} = 0,$$ with solution $$A_{n} = C\,(-1)^{n-1}\,\frac{(n!)^2}{2^{n-1}(n-1)!}.$$ Summing the $A_k$ then recovers $y_n$. (For the particular solution by guesswork: any constant sequence $y_n\equiv c$ solves the original equation, corresponding to $C=0$ here.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Setting corresponding entries in a matrix I've recently read "Matrix Inversion and the Great Injustice", a rather humorous article of a student venting his frustrations due to feeling as if he has been graded unfairly. I follow everything so far, up until this part (at the bottom of the first page): I was asked to solve the following problem: Determine the inverse matrix of: \begin{bmatrix}1 & 3\\1 & 1\end{bmatrix} I remembered that the inverse matrix must be a matrix. $$ X = \begin{bmatrix}\mathbf{x}_1 & \mathbf{x}_2\\\mathbf{x}_3 & \mathbf{x}_4\end{bmatrix} $$ of size 2 × 2 such that $$MX = I$$ where I is the identity matrix. When I multiplied the left hand side I got $$ \begin{bmatrix}\mathbf{x}_1 + 3\mathbf{x}_3 & \mathbf{x}_2 + 3\mathbf{x}_4\\\mathbf{x}_1 + \mathbf{x}_3 & \mathbf{x}_2 + \mathbf{x}_4\end{bmatrix} = \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix} $$ and when I set the corresponding entries equal I obtained the following system \begin{bmatrix}1 & 0 & 3 & 0 & | & 1\\0 & 1 & 0 & 3 &| & 0 \\1 & 0 &1 & 0 & | & 0\\ 0 & 1 & 0 & 1 & | & 1\end{bmatrix} Now, I do not understand what is meant with "setting the corresponding entries equal". Can someone please point me in the right direction? Appreciate it.
It means that if you have two matrices: $\begin{bmatrix} a&b\\c&d \end{bmatrix}=\begin{bmatrix} h&i\\j&k \end{bmatrix}$ then it must be the case that $a=h, b=i, c=j, d=k$. In your case, $$\begin{bmatrix}\mathbf{x}_1 + 3\mathbf{x}_3 & \mathbf{x}_2 + 3\mathbf{x}_4\\\mathbf{x}_1 + \mathbf{x}_3 & \mathbf{x}_2 + \mathbf{x}_4\end{bmatrix} = \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}$$ so $a=h$, where $a=x_1+3x_3$ and $h=1$. You can write this as an equation: $x_1+0x_2+3x_3+0x_4=1$. The process is similar for the other entries.
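A quick check that the system really produces the inverse (a sketch assuming `numpy`):

```python
import numpy as np

A = np.array([[1, 0, 3, 0],
              [0, 1, 0, 3],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
b = np.array([1, 0, 0, 1], dtype=float)

X = np.linalg.solve(A, b).reshape(2, 2)   # (x1, x2, x3, x4) -> [[x1, x2], [x3, x4]]
print(X)                                  # [[-0.5  1.5], [ 0.5 -0.5]]
print(np.array([[1, 3], [1, 1]]) @ X)     # the identity matrix
```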
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Putnam definite integral evaluation $\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx$ Evaluate $$\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx$$ Source : Putnam By the property $\displaystyle \int_0^af(x)\,dx=\int_0^af(a-x)\,dx$: $$=\int_0^{\pi/2}\frac{(\pi/2-x)\sin x\cos x}{\sin^4 x+\cos^4 x}dx=\frac{\pi}{2}\int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4 x+\cos^4 x}dx-\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx$$ $$\Longleftrightarrow\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx=\frac{\pi}{4}\int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4x+\cos^4x}dx$$ Now I'm stuck. WolframAlpha says the indefinite integral of $\dfrac{\sin x\cos x}{\sin^4 x+\cos^4x}$ evaluates nicely to $-\frac12\arctan(\cos(2x))$. I already factored $\sin^4 x+\cos^4 x$ into $1-\left(\frac{\sin(2x)}{\sqrt{2}}\right)^2$, but I don't know how to continue.. I suggest a substitution $u=\frac{\sin(2x)}{\sqrt{2}}$? Could someone provide me a hint, or maybe an easier method I can refer to in the future?
\begin{align} \int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4x+\cos^4x}\mathrm dx&=\int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4x+\left(1-\sin^2x\right)^2}\mathrm dx\\[7pt] &=\int_0^{\pi/2}\frac{\sin x\cos x}{2\sin^4x-2\sin^2x+1}\mathrm dx\\[7pt] &=\frac14\int_0^1\frac{\mathrm dt}{t^2-t+\frac12}\qquad\color{blue}{\implies}\qquad t=\sin^2x\\[7pt] &=\frac14\int_0^1\frac{\mathrm dt}{\left(t-\frac12\right)^2+\frac14}\qquad\color{blue}{\implies}\qquad \frac u2=t-\frac12\\[7pt] &=\frac12\int_{-1}^1\frac{\mathrm du}{u^2+1}\\[7pt] &=\bbox[5pt,border:3px #FF69B4 solid]{\color{red}{\large\frac \pi4}} \end{align} Combining this with the reduction you already derived, the original integral equals $\dfrac\pi4\cdot\dfrac\pi4=\dfrac{\pi^2}{16}$.
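A numerical check of the full Putnam integral (assuming `numpy`/`scipy`):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x*np.sin(x)*np.cos(x) / (np.sin(x)**4 + np.cos(x)**4)
val, _ = quad(f, 0, np.pi/2)
print(val, np.pi**2 / 16)   # both ~0.61685
```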
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Classification of numbers on the base of binary representation The problem is the following. I would like to find a simple algorithm or principle of classification of numbers regarding their presentation in binary form. Let's consider an example. The numbers given by 4-digit binary representations are the following: $0000_2=0$ $0001_2=1$ $0010_2=2$ $\vdots $ $1111_2=15$ I need to divide the set {0,1,...,15} into subsets, each member of which can be presented by certain numbers of binary ones in binary representation. I.e. $C_0=\{0\}$ ($0$ binary ones) $C_1=\{1, 2, 4, 8\}$ (1 binary one: $0001_2$, $0010_2$, $0100_2$, $1000_2$) $C_2= \{3, 5, 6, 9, 10, 12\}$ (2 binary ones) $\vdots $ $C_4=\{ 15 \}$ (4 binary ones) Does there exist a result from number theory allowing to reveal these subsets without transition to binary representation?
The number of ones in the binary representation of $n$ is the greatest integer $r$ such that $2^r$ divides $$\binom{2n}n$$ See https://oeis.org/A000120 It is not too hard (but not too simple, either) to prove this by induction.
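A brute-force check of this characterization (standard library only; `math.comb` needs Python 3.8+):

```python
from math import comb

def nu2(m):
    # exponent of 2 in m
    e = 0
    while m % 2 == 0:
        m //= 2
        e += 1
    return e

for n in range(1, 200):
    assert bin(n).count('1') == nu2(comb(2*n, n))
print("checked n = 1..199")
```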
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What are some applications of elementary linear algebra outside of math? I'm TAing linear algebra next quarter, and it strikes me that I only know one example of an application I can present to my students. I'm looking for applications of elementary linear algebra outside of mathematics that I might talk about in discussion section. In our class, we cover the basics (linear transformations; matrices; subspaces of $\Bbb R^n$; rank-nullity), orthogonal matrices and the dot product (incl. least squares!), diagonalization, quadratic forms, and singular-value decomposition. Showing my ignorance, the only application of these I know is the one that was presented in the linear algebra class I took: representing dynamical systems as Markov processes, and diagonalizing the matrix involved to get a nice formula for the $n$th state of the system. But surely there are more than these. What are some applications of the linear algebra covered in a first course that can motivate the subject for students?
Computer science has a lot of applications! * *Manipulating images. *Machine learning (everywhere). For example: multivariate linear regression, where the least-squares coefficient vector is $(X^TX)^{-1}X^{T}Y$, with $X$ an $n \times m$ design matrix and $Y$ an $n \times 1$ vector of observations.
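A minimal sketch of the regression example (assuming `numpy`; the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 + 3.0*x + rng.normal(0, 0.5, 50)   # true intercept 2, slope 3

X = np.column_stack([np.ones_like(x), x])  # design matrix
beta = np.linalg.solve(X.T @ X, X.T @ y)   # (X^T X)^{-1} X^T y
print(beta)                                # close to [2, 3]
```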
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69", "answer_count": 20, "answer_id": 7 }
Questionable Power Series for $1/x$ about $x=0$ WolframAlpha states that The power series for $1/x$ about $x=0$ is: $$1/x = \sum_{n=0}^{\infty} (-1)^n(x-1)^n$$ This is supposedly incorrect, isn't it? This is showing the power series about $x=1$ in the form $(x - c)$. I don't understand how WolframAlpha says that is correct: http://m.wolframalpha.com/input/?i=power+series+of+1%2Fx&x=0&y=0 Thanks!
It is rather strange, because Wolfram Alpha is perfectly happy to return a Laurent series for e.g. series of 1/(x+x^2) at x = 0 Somehow, $1/x$ is treated differently.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove $f:\mathbb{N} \rightarrow \mathbb{R}$ is continuous using the definition of sequential continuity The definition of sequential continuity is that $x_n \rightarrow x \implies f(x_n) \rightarrow f(x)$. If the terms of the sequence $\{x_n\}$ are only natural numbers, I know that for all $\epsilon > 0$, we can find an $N \in \mathbb{N}$ such that for all $n \geq N$, $x_n = x$. I'm not sure what to do from here
Any function $f : \mathbb N \to \mathbb R$ is continuous. To show this using sequential continuity, let $\{x_n\}$ be a sequence in $\mathbb N$ that converges to $x$. A convergent sequence in $\mathbb N$ is eventually constant. But then, $\{f(x_n)\}$ is also eventually constant. Say $f(x_n) = f(x)$ for $n > N$ for some $N$. It follows that $\left|f(x_n) - f(x)\right| = 0$ for $n > N$. Therefore, $f$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Faulty proof that $V=U_1 \oplus W$ and $V=U_2 \oplus W$ implies $U_1 = U_2$ The question is as follows: Prove or give a counterexample: if $\ U_1, U_2, W$ are subspaces of $V$ such that $V=U_1 \oplus W$ and $\ V = U_2 \oplus W$, then $\ U_1 = U_2$. I happily proved this but then found out that it is in fact incorrect! I posted a counterexample here. Could someone point out what is wrong with my 'proof'? $V = U_1 \oplus W$ and $V=U_2 \oplus W$, so $V=U_1 + W$ and $V=U_2+W$, hence $U_1 + W = U_2 +W$. Since $U_1 + W$ contains all elements $u_1 \in U_1$, we have that $u_1 \in U_2 + W$. Since $U_2 + W$ contains all elements of $U_2$ and $W$, we have that $u_1 \in U_2$ or that $u_1 \in W$. If we assume, for contradiction, that $u_1 \notin U_2$ then $u_1 \in W$. But then, since $-u_1 \in W$, we are able to write $0 = u_1 + (-u_1)$. As $u_1 \neq 0$ (because $0 \in U_2$), we have written $0$ as a sum of two nonzero elements in $U_1$ and $W$ respectively. This contradicts that $V = U_1 \oplus W$. Hence, $u_1 \in U_2$. This implies that $U_1 \subseteq U_2$. By the same line of reasoning it can be shown that $U_2 \subseteq U_1$, and thus $U_1 = U_2$.
$u\in U + W$ does not mean that $u\in U$ or $u\in W$; rather, it means that $u$ is the sum of something in $U$ and something in $W$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that ${x dy - y d x\over x^2 + y^2}$ is not exact Please can someone verify my proof that $$\psi = {x dy - y d x\over x^2 + y^2}$$ is not exact? Here is my work: If $\psi$ was exact there would exist $f:\mathbb R^2 \setminus \{0\} \to \mathbb R$ such that $df = f_x dx + f_y dy = \psi$. Here $f_x = {-y \over x^2 + y^2}$ and $f_y = {x \over x^2 + y^2}$. It would hold true that $$\int f_x dx = \int f_y dy = f.$$ So I calculate these integrals: $$ \int f_x dx = -{1\over 2}\log(x^2 + y^2)$$ and $$ \int f_y dy = {1\over 2}\log(x^2 + y^2)$$ It is clear that these cannot be equal therefore $\psi$ is not exact. Edit The first thing I had tried (it did not work) was to calculate the integral along a closed curve: $$ \int_{S^1}\psi = \int_{S^1} {x \over x^2 + y^2} dy - \int_{S^1} {y \over x^2 + y^2} dx = \int_{S^1} x dy - \int_{S^1} y dx= x \int_{S^1} dy - y \int_{S^1} dx = 0$$ since $\int_{S^1}dx = 0$.
On $S^1$ we have $x^2+y^2=1$, so there $\psi = x\,dy - y\,dx$. By Green's theorem, $$\int_{S^1} \psi = \int_{S^1} x\, dy - y\, dx = \iint_{D^2} \text{div}(\langle x,y\rangle)\, dA = 2\cdot \text{Area}(D^2) = 2\pi$$ Alternatively, parametrize $S^1$ by setting $x = \cos(t)$, $y = \sin(t)$, $0 \le t \le 2\pi$. Then $$\int_{S^1} \psi = \int_0^{2\pi} (\cos(t)\cdot \cos(t) - \sin(t)\cdot (-\sin(t)))\, dt = \int_0^{2\pi} (\cos^2(t) + \sin^2(t))\, dt = 2\pi$$ Either way, $\int_{S^1} \psi \neq 0$; since an exact form integrates to $0$ over any closed curve, $\psi$ is not exact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Every simple planar graph with $\delta\geq 3$ has an adjacent pair with $\deg(u)+\deg(v)\leq 13$ Claim: Every simple planar graph with minimum degree at least three has an edge $uv$ such that $\deg(u) + \deg(v)\leq 13$. Furthermore, there exists an example showing that $13$ cannot be replaced by $12$. This seems related to Hard planar graph problem. By Euler's formula, we know that $|V|-|E|+|F|=2$. Rearranging, we have $|F| = 2+|E|-|V|$ As $\delta\geq3$ we know $|V|\geq 4$ and $|E|\geq 6$ and since each face has at least 3 edges bounding it and each edge is on the boundary of at most 2 faces we have then that $3|F|\leq 2|E|$. Substituting into Euler's formula, we have $$2|E|\geq 3|F| = 3(2+|E|-|V|)\Rightarrow 3|V|-6\geq |E|$$ From this we have from the handshaking lemma that avgdegree = $\sum\frac{\deg(v)}{|V|} = 2\frac{|E|}{|V|} \leq 6 - \frac{12}{|V|}\lneq 6$. And so, the average degree of a simple planar graph is strictly less than 6. Suppose there was a smallest counterexample, I.e., some graph such that every edge $uv$ has $\deg(u)+\deg(v)>13\dots$ It is at this point that I feel as though I am stuck. It seems that you should be able to reach a contradiction about either the average degree, the planarity, or $\delta\geq 3$. In the hint in the linked similar problem, they suggest to show that you can find a stricter bound for the minimum degree of the smallest counterexample from below, but it doesn't seem to apply here. They then suggest to show that a substantial proportion of the vertices are of low degree (in this problem would translate to degree less than 6), and then for the punchline to show that the number of edges is enough that there are more than #of vertices of high degree choose 2, meaning there must necessarily be an edge between two vertices of low degree thus proving the claim.
This may be better as a simple comment, but I lack the reputation. It can be assumed that we are working with a triangulation, but we need to be careful about which edges we add when triangulating. Suppose that the graph has minimum degree three and there is no edge $uv$ with $\deg(u) + \deg(v) \leq 13$. Now suppose there were a face of degree at least four. Let $x$ be the vertex of minimum degree on that face. In any case, both neighbours of $x$ on the face will have degree at least $7$. Add an edge between those two vertices and continue this process until you are left with only triangular faces. I stole this trick from https://faculty.math.illinois.edu/~west/pubs/dischnew.pdf where it is used at the start of Lemma 3.2. Incidentally, that lemma proves something stronger than what is required here: Every plane graph with minimum degree $3$ has an edge with weight at most $11$ or a $4$-cycle through two degree $3$ vertices and a common neighbour of degree at most $10$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1072921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Galois finite extension Let $K/ \mathbb{Q}$ be a finite Galois extension, $K \otimes_{\mathbb{Q}} \mathbb{R} \simeq \mathbb{R}^s \oplus \mathbb{C}^t$. Prove that either $s=0$ or $t=0$.
The automorphisms of $K/\Bbb Q$ are continuous, so they extend to a completion. If we denote an infinite place by $\mathfrak{p}$, we know one completion is $K_{\mathfrak{p}}$. As all other completions are given by $\sigma(K)_{\sigma(\mathfrak{p})}$ and $\sigma(K)=K$ (since we are dealing with a Galois extension), we use the fact that $\text{Gal}\left(K/\Bbb Q\right)$ permutes the infinite places transitively to conclude all completions are isomorphic as $\Bbb R$ vector spaces; in particular they have the same dimension. Now as $$K\otimes_{\Bbb Q}\Bbb R \cong\prod_{\mathfrak{p}\text{ real}}K_{\mathfrak{p}}\times\prod_{\mathfrak{p}\text{ complex}}K_{\mathfrak{p}}$$ we conclude the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integral $\int_{1}^{2011} \frac{\sqrt{x}}{\sqrt{2012 - x} + \sqrt{x}}dx$ Evaluate: $$\int_{1}^{2011} \frac{\sqrt{x}}{\sqrt{2012 - x} + \sqrt{x}}dx$$ Using real methods only. I am not sure what to do. I tried finding a power series, which was too ugly. I just need some hints, not an answer to do this integral, this is from the MIT Integration bee 2012.
HINT: As $\displaystyle\int_a^bf(x)\ dx=\int_a^bf(a+b-x)\ dx$ So, if $\int_a^bf(x)\ dx=I,$ $$2I=\int_a^b[f(x)+f(a+b-x)]\ dx$$
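Following the hint, $f(x)+f(2012-x)=1$ here, so $2I=2010$ and $I=1005$; a quick numerical confirmation (assuming `numpy`/`scipy`):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sqrt(x) / (np.sqrt(2012 - x) + np.sqrt(x))
val, _ = quad(f, 1, 2011)
print(val)   # ~1005.0
```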
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Inverse Laplace Transformation I have a question about laplace transformation. $\frac{8s+4}{s^2+23}$ I tried to split them. $\frac{8s}{s^2+23}$ is the image of a cosine and $\frac{4}{s^2+23}$ is the image of a sine. Here is what I did : $\frac{8s}{s^2+(\sqrt{23})^2}$ is the image of $8\cos(\sqrt{23}t)$ and $\frac{4}{s^2+(\sqrt{23})^2}$ is the image of $\frac4{\sqrt{23}}\sin(\sqrt{23}t)$ But according to homework, I am wrong. But I am sure I have the correct answer, there is no mistake.
If you don't mind some Residue theory, we can use that to check your solution. \begin{align} \mathcal{L}^{-1}\biggl\{\frac{8s+4}{s^2+23}\biggr\}&=\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{8s+4}{s^2+23}e^{st}ds\\ &=\sum\text{Res} \end{align} The poles in the $s$-plane occur at $s=\pm i\sqrt{23}$, both of order one. Then we have \begin{align} \mathcal{L}^{-1}\biggl\{\frac{8s+4}{s^2+23}\biggr\}&= \lim_{s\to i\sqrt{23}}(s-i\sqrt{23})\frac{8s+4}{s^2+23}e^{st}+\lim_{s\to -i\sqrt{23}}(s+i\sqrt{23})\frac{8s+4}{s^2+23}e^{st}\\ &= \frac{8i\sqrt{23}+4}{2i\sqrt{23}}e^{it\sqrt{23}}+\frac{-8i\sqrt{23}+4}{-2i\sqrt{23}}e^{-it\sqrt{23}}\\ &=4e^{it\sqrt{23}}+4e^{-it\sqrt{23}}+\frac{2}{i\sqrt{23}}e^{it\sqrt{23}}-\frac{2}{i\sqrt{23}}e^{-it\sqrt{23}}\\ &=8\cos(t\sqrt{23})+\frac{4}{\sqrt{23}}\sin(t\sqrt{23}) \end{align} Thus, the residue computation agrees with the answer you obtained from the tables: your result is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Undoing anonymous donations All the students in a class are planning to do a trip. Not all of the students can afford it, and it is considered shameful to reveal their poverty. So it is suggested that anyone can donate anonymously to a fund. If the fund becomes big enough to cover the trip, the trip happens. If not, the donators gets their money back, preserving anonymity of who donated and who didn't. Is this possible?
Yes, it is possible if the payment is done via PayPal or a similar online payment system. The bank transactions are validated and carried out only if the funding target is reached. So, as long as the target is not reached, nobody loses money and everybody stays anonymous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Determinant: Alternative Definitions (This serves as a reference, and as a foundation for the question "Determinant: Continuity".) Problem: Given a vector space $V$, consider an endomorphism $T:V\to V$. The rank of an endomorphism: $$\mathrm{rank}\,T:=\dim\left(\mathrm{im}\,T\right)$$ The determinant of an endomorphism: $$\det T:=\text{???}$$ What would be a nice definition not relying on representations by matrices? (I assume basic knowledge of Differential Geometry and Functional Analysis.)
Let $V$ be an $n$-dimensional vector space over the field $\mathbb{F}$. Given a linear map $T : V \to V$, there is an induced linear map $\bigwedge^nT : \bigwedge^n V \to \bigwedge^n V$ given by $\left(\bigwedge^nT\right)(v_1\wedge\dots\wedge v_n) = (Tv_1)\wedge\dots\wedge(Tv_n)$. As $\bigwedge^nV$ is one-dimensional, $\bigwedge^nT = k\operatorname{id}_{\bigwedge^nV}$ for some scalar $k \in \mathbb{F}$. This scalar is precisely $\det T$. Let me summarise some facts about the vector spaces $\bigwedge^pV$ (see the Wikipedia article on exterior algebras for more information). Given a vector space $V$ of dimension $n$, there is an associated vector space $\bigwedge^pV$ for any $0 \leq p \leq n$ called the $p^{\text{th}}$ exterior power of $V$. The elements of $\bigwedge^pV$ are linear combinations of terms of the form $v_1\wedge\dots\wedge v_p$ where $v_1, \dots, v_p \in V$. The symbol $\wedge$ is called the wedge product, and it satisfies skew-symmetry, i.e. $v_i\wedge v_j = -v_j\wedge v_i$. If $\{v_1, \dots, v_n\}$ is a basis for $V$, then $\{v_{i_1}\wedge\dots\wedge v_{i_p} \mid i_1 < \dots < i_p\}$ is a basis for $\bigwedge^pV$ and therefore the dimension of $\bigwedge^pV$ is ${n \choose p}$.
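As a quick sanity check (my own addition, not part of the definition above), take $n=2$ with basis $\{e_1,e_2\}$ and let $T$ have matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$, i.e. $Te_1=ae_1+ce_2$ and $Te_2=be_1+de_2$. Then $$\left(\textstyle\bigwedge^2 T\right)(e_1\wedge e_2)=(ae_1+ce_2)\wedge(be_1+de_2)=(ad-bc)\,e_1\wedge e_2,$$ using $e_i\wedge e_i=0$ and $e_2\wedge e_1=-e_1\wedge e_2$, so the definition recovers the familiar $2\times 2$ determinant.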
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Is split-complex $j=i+2\epsilon$? In matrix representation imaginary unit $$i=\begin{pmatrix}0 & -1 \\ 1 & 0 \end{pmatrix}$$ dual numbers unit $$\epsilon=\begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}$$ split-complex unit $$j=\begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix}$$ Given this definition, does not it follow that $$j=i+2\epsilon$$ and as such, one of these systems can be fully expressed through others?
If, ignoring the means by which you reached this conclusion (which was well-addressed in epimorphic's answer), we supposed that $$j=i+2\varepsilon$$ then it follows that (assuming commutativity) $$j^2=(i+2\varepsilon)^2=i^2 + 4\varepsilon i+4\varepsilon^2$$ which, replacing each by the definition of their square: $$1=-1 + 4\varepsilon i$$ which only works if we define $\varepsilon i = \frac{1}2$. This is a pretty long shot from any "reasonable" definition, since our intuition about $i$ and $\varepsilon$ should certainly not lead us to this point. Moreover, making the definition $\varepsilon i = \frac{1}2$ breaks very important properties of multiplication - for instance, it makes the operation not associative since $$(\varepsilon^2)i\neq \varepsilon(\varepsilon i)$$ $$0i \neq \varepsilon \frac{1}2$$ $$0\neq \frac{1}2\varepsilon$$ which poses a rather major difficulty for algebra. Moreover, the equation $j=i+2\varepsilon$ is not even particularly special; setting $j=i+\varepsilon$ (i.e. by making $\varepsilon$ correspond to twice the matrix representation you suggest - which is an equally valid matrix representation of the dual numbers) yields that we want $\varepsilon i = 1$ - but this doesn't solve the lack of associativity. In fact, if we want associativity, we conclude that $\varepsilon i$ must not be a linear combination of $1$, $i$, and $\varepsilon$ with real coefficients (since $\varepsilon i$ can't be invertible given that $\varepsilon^2 = 0$ and $\varepsilon i$ can't be a multiple of $\varepsilon$ as that would cause $i\cdot i \cdot \varepsilon$ to break associativity) - which implies that $(i+a\varepsilon)^2=1=j^2$ must have no solution, so we can't reasonably express $j$ in such a system. The fundamental issue with this is that the expression $i+2\varepsilon$ doesn't even make sense without additional structure. Though we can happily add the terms together in a formal sum (i.e. where we write every number as $a+bi+c\varepsilon$ without allowing any simplification) and this sometimes yields meaningful results, this leaves us with the issue of multiplication. Eventually the term $\varepsilon i$ will come up, and, unless we wish to break important properties of multiplication, we have to consider it as an entirely new thing - and we can prove that, in any extension of $\mathbb R$ which still obeys certain algebraic properties, but contains a new element $\varepsilon$ squaring to $0$ and $i$ squaring to $-1$, there would be only two solutions to $x^2=1$, and they are $1$ and $-1$ - there is no extra root $j$, so $j$ cannot meaningfully play into that system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Is it true that the relation |A| < |B| is a sufficient condition for claiming that $f$ is a bijection? This is an exercise of an assignment I have: Suppose $A$ and $B$ are finite sets and $f\colon A\to B$ is surjective. Is it true that the relation “$|A| < |B|$” is a sufficient condition for claiming that $f$ is a bijection? Justify your answer. And this is my answer: No. In fact, if $|A| < |B|$, then there would have to be at least one element of $A$ that points to more than one element of $B$, since every element of $B$ must be hit (surjectivity); but then this is not a function, because the same element of $A$ would point to different elements of $B$. Is my answer correct? I don't know whether the question is actually asking for a proof, and, if my answer is a proof, whether it is correct.
If $\lvert A \rvert < \lvert B \rvert$, then you cannot have any surjective function $f\colon A\to B$ anyway, and the question is vacuous. (the image $f(A)$ of $A$ by any function $f$ must satisfy $\lvert f(A) \rvert \leq \lvert A \rvert$, with equality when $f$ is injective).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_0^{2} \frac{dx}{\sqrt[3]{2x^2-x^3}}$ How to calculate this integral? $$\int_0^{2} \frac{dx}{\sqrt[3]{2x^2-x^3}}$$ I suppose that it should be split like this: $$\int_0^{1} \frac{dx}{\sqrt[3]{2x^2-x^3}} + \int_1^{2} \frac{dx}{\sqrt[3]{2x^2-x^3}}$$ but I have no idea how to calculate these two. Thanks in advance for your help.
$\sqrt[3]{2x^2-x^3} = x\sqrt[3]{\dfrac{2}{x}-1} \to u = \sqrt[3]{\dfrac{2}{x} - 1} \to u^3 = \dfrac{2}{x} - 1 \to x = \dfrac{2}{u^3+1}$. Can you take it from here?
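If it helps, here is how I believe the substitution plays out (a sketch, so verify the details). With $u=\sqrt[3]{\frac{2}{x}-1}$ we have $x=\frac{2}{u^3+1}$, $dx=-\frac{6u^2}{(u^3+1)^2}\,du$, and $\sqrt[3]{2x^2-x^3}=xu$, so $$\frac{dx}{\sqrt[3]{2x^2-x^3}}=\frac{dx}{xu}=-\frac{3u}{u^3+1}\,du.$$ As $x$ runs from $0$ to $2$, $u$ runs from $\infty$ down to $0$, hence $$\int_0^{2}\frac{dx}{\sqrt[3]{2x^2-x^3}}=\int_0^\infty\frac{3u}{1+u^3}\,du=3\cdot\frac{\pi}{3\sin(2\pi/3)}=\frac{2\pi}{\sqrt3},$$ using the standard integral $\int_0^\infty \frac{u^{s-1}}{1+u^n}\,du=\frac{\pi}{n\sin(s\pi/n)}$ with $n=3$, $s=2$.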
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Normal subgroups in groups of odd order I put the following question in my first-year algebra final this year: Suppose $G$ is a finite group of odd order and $N$ is a normal subgroup of order $5$. Show that $N\le Z(G)$. (By the way, this problem has been posed on this site before.) The proof that I guided them through goes like this: all the conjugacy classes of $G$ have odd order; since $N$ is normal, it is a union of conjugacy classes. The only possibilities are $3$, $1$, $1$, and five $1$'s. In either case, $N$ contains a nonidentity element whose conjugacy class consists only of itself, so it is in $Z(G)$; but that element generates $N\cong \mathbb{Z}/5$ and the result follows. So let $S(p)$ be the following statement: If $G$ is a finite group of odd order and $N$ is a normal subgroup of order $p$, then $N\le Z(G)$. For which (odd) $p$ does this hold? The argument above shows that it holds for $p=5$, but pretty clearly that proof will not work for $p>5$. In fact, if $p$ is any prime that is not a Fermat prime, $S(p)$ is false; a counterexample follows. Let $q$ be an odd prime dividing $p-1$, and consider the nonabelian group $G$ of order $pq$. It must have only one $p$-Sylow subgroup, since the number of such subgroups divides $q$ is and $\equiv 1\mod{p}$, so it is normal. So $G$ satisfies the conditions of the theorem. But the center of $G$ is trivial since otherwise $G/Z(G)$ is cyclic and thus $G$ would be abelian. This counterexample does not work when $q=2$ since the group of order $pq$ has even order. So my question is: does $S(p)$ hold when $p$ is a Fermat prime?
Let $N$ be a cyclic group of prime order, and let $A$ be any group of automorphisms of $N$. Then the orbits of $N\setminus\{e\}$ under $A$ all have the same size. (The proof is easy, based on the fact that every element of $N\setminus\{e\}$ is a generator of $N$, and hence every element of $N\setminus\{e\}$ is a power of every other element.) Note that this holds for your counterexample: all the orbits have size $q$. In particular, if $N$ is a normal subgroup of $G$, then taking $A$ to be $G$ acting by conjugation, we see that all conjugacy classes in $N\setminus\{e\}$ have the same size. Furthermore, if $N$ has odd order, then all these conjugacy classes have odd size. Finally, if the order of $N$ is a Fermat prime, then the only odd divisor of $\#N-1$ is 1. Therefore every element of $N$ is its own conjugacy class, hence is in the center. (PS: No way I could have answered this question if you hadn't written it so well!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
Partial Integral of an ellipse this is my first question on stack exchange so please bear with me. I am trying to generate a synthetic image of an ellipse in Matlab where each pixel is shaded according to how much of that pixel is contained within the ellipse. I had a similar problem for a circle and tackled it by simply integrating the area of the circle contained in the pixel, that is: $\textbf{I}(y,x) = \int_{x_{left}}^{x_{right}}\left(\pm \sqrt{r^2-(y-y_c)^2}+x_c\right)dx-A_{extra}$ where $[x_c,y_c]$ is the center location of the circle, $r$ is the radius of the circle, $A_{extra}$ is the extra area between the pixel being considered and the line $y=x_c$, and the plus or minus is determined by whether the upper or lower half of the circle was being considered. This worked very well and produced exactly what I wanted. I am now trying to generalize this to an ellipse. I currently have an ellipse parameterized as $Ax^2+Bxy+Cy^2+Dx+Fy+G=0$ that I would like to generate this image for but I am running into some issues. The first issue that I have is that $A-G$ here are parameterized for $(x,y)$ in units of distance and also assume that the coordinate system is based in the center of the image. I tried to get around this by determining the following: $x_c = \frac{2CD-BF}{B^2-4AC}+imcenter_x$, $y_c = \frac{2AF-BD}{B^2-4AC}+imcenter_y$ $a = dist2pix\sqrt{\frac{2\left[AF^2+CD^2-BDF+G\left(B^2-4AC\right)\right]}{\left(B^2-4AC\right)\left[\sqrt{(A-C)^2+B^2}-A-C\right]}}$ , $b = dist2pix\sqrt{\frac{2\left[AF^2+CD^2-BDF+G\left(B^2-4AC\right)\right]}{\left(B^2-4AC\right)\left[-\sqrt{(A-C)^2+B^2}-A-C\right]}}$ $\phi=\left\{\begin{array}{cc}0 &B=0\text{ and }A<C\\\pi/2 &B=0\text{ and }A>C\\\frac{1}{2}\cot^{-1}{\left(\frac{A-C}{B}\right)}&B\neq0\text{ and }A<C \\\frac{\pi}{2}+\frac{1}{2}\cot^{-1}{\left(\frac{A-C}{B}\right)}&B\neq0\text{ and }A>C \end{array}\right.$ where $imcenter_x$ and $imcenter_y$ are the x and y coordinates of the center of the image, $dist2pix$ is a conversion factor to go from distance units to pixels, $a$ and $b$ are the semi-major and semi-minor distances, and $\phi$ is the rotation from the x-axis to the semi-axis along which $a$ lies. I then reparamaterize into $A_2-G_2$ by expanding $\frac{\left((x-x_c)\cos(\phi)-(y-y_c)\sin(\phi)\right)^2}{a^2}+\frac{\left((x-x_c)\sin(\phi)+(y-y_c)\cos(\phi)\right)^2}{b^2}=1$ and then rearranging into the form from above. My first question is, is this a valid way to adjust the parameters to reflect units of pixels and be centered at the upper left corner of the image as I desire. (note these parameterizations were taken from http://mathworld.wolfram.com/Ellipse.html) My next question is what is the best way to perform this integration after getting the right parameterization. I have tried using a similar method to what I used for the circle but I do not believe this is valid due to the fact that there may be some integrals where both the positive and negative halves would need to be considered if the ellipse were extra elongated. I feel like a similar problem would occur if I tried to convert into polar coordinates as well because I believe that $r$ would still be of order 2 leading to a square root. Any suggestions would be greatly appreciated as would any tips on formatting conventions or the like. Thank you in advance! Andrew
So I know this question is really old but I just wanted to say that I ended up going with a very simple numerical approximation of the integral, approximating the ellipse with 2 line segments in each pixel (that is, I basically did a first-order trapezoidal approximation). It's not exactly what I wanted, but it was accurate enough to suit my needs, since the radius of curvature is large relative to the size of a pixel for all of the ellipses I was considering. If in the future someone has a better answer I will accept that one instead.
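In case it is useful to a future reader, here is a rough sketch of an alternative approach (not the trapezoidal scheme described above, and not from the original answer): estimate each pixel's coverage by supersampling the sign of the implicit quadratic. The function name, the pixel coordinate convention, the sign convention (interior negative), the assumption that the coefficients are already in pixel coordinates, and the default $4\times4$ sub-grid are all my own choices for illustration.

```python
import numpy as np

def ellipse_coverage(shape, A, B, C, D, F, G, sub=4):
    """Estimate, per pixel, the fraction of its area inside the conic
    A x^2 + B x y + C y^2 + D x + F y + G <= 0 (sign convention: the
    interior of the ellipse is where the quadratic is negative).
    Pixel (row, col) is taken to cover [col, col+1] x [row, row+1]."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]           # integer pixel corners
    offsets = (np.arange(sub) + 0.5) / sub    # sub-sample positions in (0, 1)
    img = np.zeros((h, w))
    for dy in offsets:
        for dx in offsets:
            x = cols + dx                     # sample point, pixel units
            y = rows + dy
            img += (A*x**2 + B*x*y + C*y**2 + D*x + F*y + G <= 0)
    return img / sub**2
```

With `sub=4` each pixel gets one of 17 possible grey levels; raising `sub` trades quadratically more work for smoother edges.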
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to simplify $\det(M)=\det(A^T A)$ for rectangular $A=BC$, with square diagonal $B$ and rectangular $C$ with orthonormal columns? Assume a real, square, symmetric, invertible $n \times n$ matrix $M$ and a real, rectangular $m \times n$ matrix $A$ such that $m \geq n$ and $M = A^T A$. Also assume that $A = B C$, where $B$ is diagonal ($m \times m$) and $C$ is rectangular ($m \times n$) with orthonormal columns. Thus: \begin{align} M &= A^T A \\ &= (BC)^T BC \\ &= C^T B^T B C \\ &= C^T B^2 C \end{align} What are some possible strategies for simplifying $\det(M)$ here, ideally in terms of $A$, $B$, and/or $C$? (Note: I'm looking for a closed-form expression, not a numerical approach.) Here are a few initial thoughts: * *It seems that there should be a simple solution here, given that $C$ has orthonormal columns and $B$ is diagonal. *The fact that $C$ is rectangular complicates things a bit: e.g. if $C$ were square, $C^T B^2 C$ would provide a direct eigendecomposition of $M$. *Singular value decomposition (SVD) appears useful here. For example, the non-zero singular values of $A$ are square roots of the eigenvalues of both $A^T A$ and $A A^T$, and $\det(M)$ is just the product of these eigenvalues; however, applying SVD to $A$ would produce the factorization: $A = U \Sigma V^T$, requiring square orthogonal $U$ and $V$ and rectangular diagonal $\Sigma$, which doesn't quite map onto the $A = BC$ above, where B is square diagonal and $C$ is rectangular with orthonormal columns. *Maybe there is a way to apply QR decomposition? (Then $\det(M)$ is simple based on the diagonal elements of $R$.) But again, there is no clear mapping from the $B C$ above onto $Q$ (square orthogonal) and $R$ (rectangular triangular). *Maybe some other matrix decomposition method would help? Or maybe there's something really simple that I am overlooking?
The simplest thing I can think of is to take the QR decomposition of $A=BC$, then $\det(M)$ is simply the square of the product of the diagonal elements of $R$.
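A quick numerical sanity check of this recipe (a sketch; the random test matrices, dimensions, and seed are of course my own invention, and I am using NumPy's reduced QR): since $M=A^TA=R^TQ^TQR=R^TR$, we should find $\det(M)=\bigl(\prod_i R_{ii}\bigr)^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 4
B = np.diag(rng.normal(size=m))                 # square diagonal, m x m
C = np.linalg.qr(rng.normal(size=(m, n)))[0]    # m x n with orthonormal columns
A = B @ C
M = A.T @ A

Q, R = np.linalg.qr(A)                          # reduced QR: R is n x n upper triangular
print(np.isclose(np.linalg.det(M),
                 np.prod(np.diag(R))**2))       # det(M) = (prod of diag(R))^2 -> True
```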
{ "language": "en", "url": "https://math.stackexchange.com/questions/1073905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does $\log(x^2 + 1)$ become $\log(2x^2)$? My textbook attempts to take the big O of $\log(x^2 +1)$. It proceeds by saying $x^2 + 1 \le 2x^2$ when $x \ge 1$. But I don't know how it came up with this idea. Question: why bound $x^2+1$ by the seemingly arbitrary value $2x^2$? Why $2$ of all numbers? Why not $x^2$ or $x^3$?
I can give some observations. * *My guess for changing $x^2+1$ to $2x^2$ is to get rid of the $+1$ and get a monomial (one term) so that taking the log is easier. *$x^3$ would not be good because it is asymptotically too fast compared to $x^2+1$. Try following the textbook using $x^3$ instead of $2x^2$ and you'll get an upper bound (big-O notation) that is looser (bigger) than the one produced by $2x^2$ and therefore gives less information about the growth of $\log(x^2+1)$. Edit: the big-O notation result will be the same if you use $x^3$, but all the inequalities before applying big-O will be looser. In this example it doesn't matter, but usually you want to bound things more tightly/closely rather than loosely. *The coefficient $2$ is arbitrary. You could use $1.000000001 x^2$ instead for a slightly tighter bound! Anything $>1$ is ok.
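To spell out how the textbook's bound finishes the argument (a short completion of my own): for $x \ge 2$ we have $\log 2 \le \log x$, so $$\log(x^2+1) \le \log(2x^2) = \log 2 + 2\log x \le 3\log x,$$ which exhibits the witnesses $C=3$ and $k=2$ for the claim $\log(x^2+1) = O(\log x)$.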
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Using Mean Value Theorem to Prove Derivative Greater than Zero I'm working on a problem where at one point I have to show that for $x\ge a$ and $$g (x) = \int_a^x f - (x-a) f \left({a+x \over 2} \right),$$ we have $g'(x) \ge 0$. Additional information: I know that $f''(x)\gt0$, $f'(x)<0$, and $f(x)\gt0$. Here is what I have so far: $g'(x) = f(x) - f(a) - f({a+x \over 2}) - (x-a) f'({a+x \over 2})({1 \over 2})$ Using the Mean Value Theorem, I replaced $f(x) - f(a)$ with $(x-a) f'(x_0)$ where $x_0 \in [a,x]$, resulting in: $\displaystyle g'(x) = (x-a)\left[f'(x_0) - {1 \over 2} f'\left({a+x \over 2}\right)\right] - f\left({a+x \over 2}\right)$. I am having trouble, however, going from this step (assuming I've taken the right steps thus far) to the result that $g'(x) \ge 0$.
If $f''(x)>0$ for $x\in [a,b]$, then we have the well-known (Hermite-Hadamard) inequality $$\dfrac{1}{b-a}\int_{a}^{b}f(x)dx\ge f\left(\dfrac{a+b}{2}\right)\tag{1}$$ I think this is what you want to prove? Take $b\to x$; then $$\int_{a}^{x}f(t)dt-(x-a)f\left(\dfrac{x+a}{2}\right)\ge 0$$ Indeed, for inequality $(1)$ we can use the tangent-line bound $$f(x)\ge f\left(\dfrac{a+b}{2}\right)+f'\left(\dfrac{a+b}{2}\right)\left(x-\dfrac{a+b}{2}\right),$$ which holds because $f''(x)>0$ (the graph of a convex function lies above its tangent lines). Integrating, and noting that $\int_a^b\left(x-\frac{a+b}{2}\right)dx=0$, we get $$\int_{a}^{b}f(x)dx\ge \int_{a}^{b}\left(f\left(\dfrac{a+b}{2}\right)+f'\left(\dfrac{a+b}{2}\right)\left(x-\dfrac{a+b}{2}\right)\right)dx=(b-a)f\left(\dfrac{a+b}{2}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Convergence of series of $1/n^x$ - pointwise and uniformly, Consider the series $$\zeta(x) = \sum_{n\ge 1}\frac {1}{n^x}.$$ For which $x \in[0,\infty)$ does it converge pointwise? On which intervals of $[0,\infty)$ does it converge uniformly? My work: I think that I can state (without proof, since this goes back to Calc II?) that we have pointwise convergence for $x>1$. For uniform convergence, we use the Weierstrass M-Test, and see pretty easily that our series of functions in $x$ is bounded by a series of constants: $$\left|\frac{1}{n^x}\right| \le \frac{1}{n^2}$$ for each $n$, and for all $x \in [2,\infty)$. But the series of constants, $\frac{1}{n^2}$, converges, and so our original series in $x$ converges uniformly in the set $[2,\infty)$. Is this ok? If yes, then my question is concerning the set $(1,2)$, i.e., the numbers greater than $1$ but less than $2$. We still get pointwise convergence in this set - how can I say something about the possibility of uniform convergence in this set? For now, I don't have a convenient bound as I did for the set $[2,\infty)$. Is there a refined upper bound or another test for uniform convergence that could be useful - or something that I can say to rule out uniform convergence in this smaller set? Thanks in advance, Edit: What is the limit as $x$ goes to infinity? That is, what is $$\lim_{x\to\infty}\zeta(x) = \lim_{x\to\infty}\sum_{n\ge 1}\frac {1}{n^x}\,?$$ By the dominated convergence theorem (for series), noting that $$\left|\frac{1}{n^x}\right| \le \frac{1}{n^{1+\delta}}$$ for all $n$, and for all $x \in [1+\delta,\infty)$, and the fact that $\frac{1}{n^{1+\delta}}$ is summable, we have that $$\lim_{x\to\infty}\sum_{n\ge 1}\frac {1}{n^x} = \sum_{n\ge 1} \lim_{x\to\infty} \frac {1}{n^x}.$$ ...what is this limit? Evaluating the limit on the R.H.S., if we fix any $n>1$, all of these terms are zero. But what about the case where we fix $n=1$ and evaluate this limit? It seems that we have a $1^{\infty}$ situation.
The given series is the Riemann zeta series and it is pointwise convergent on $(1,+\infty)$. Moreover for all $a>1$ we have $$\frac{1}{n^x}\le \frac1{n^a},\quad \forall x\ge a$$ and since $\sum\frac1{n^a}$ is convergent, we have uniform convergence on every interval $[a,\infty)$. The given series isn't uniformly convergent on $(1,\infty)$. Indeed $$\sup_{x>1}\sum_{k=n+1}^\infty\frac1{k^x}\ge \sup_{x>1}\sum_{k=n+1}^{2n}\frac1{k^x}\ge\sup_{x>1} \frac n{(2n)^x}=\frac12$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Comparison of the consequences of uniform convergence between the real and complex variable cases In the real variable case, I think that uniform convergence preserves continuity and integrability, i.e., if a sequence of continuous (or integrable) functions converges uniformly to some function on a set $E$, then we know that this function is continuous (or integrable), and the limit of the integrals is equal to the integral of the limit function. (The stronger version of this integration theorem, the dominated convergence theorem, only requires pointwise convergence of the functions in order to take the limit inside the integral.) What else can we get from uniform convergence in the real variable case? Does it preserve differentiability, or does it not, in general? And I think that in the complex variable setting, uniform convergence preserves all of continuity, integrability, and differentiability. ...anything else to be aware of? Thanks in advance.
You need stronger conditions than uniform convergence to ensure that $$f_n(x) \to f(x) \implies f_n'(x) \to f'(x).$$ Here is a standard theorem found in virtually all real analysis books. Suppose $(f_n)$ is a sequence of differentiable functions that converges pointwise at some point in $[a,b]$ and $(f_n')$ converges uniformly on $[a,b]$ to $g.$ Then $(f_n)$ converges uniformly to a differentiable function $f$ and $f_n'(x) \to g(x)$ for all $x \in [a,b].$ The following is an example where uniform convergence of $f_n \to f$ alone does not ensure $f_n' \to f'$. Consider the sequence of functions $f_n:[0,1] \to \mathbf{R}$ with $$f_n(x) = \frac{x}{1+nx^2}.$$ It's easy to show that $f_n$ has a maximum at $x = 1/\sqrt{n}$. Hence, on $[0,1]$, $$f_n(x) \leqslant f_n\left(\frac1{\sqrt{n}}\right)= \frac1{2\sqrt{n}}.$$ Consequently $f_n(x) \rightarrow f(x) \equiv 0$ uniformly on $[0,1]$. Also, $f_n$ is differentiable with $$f'_n(x) = \frac{1-nx^2}{(1+nx^2)^2}.$$ However, $f'_n(0) \rightarrow 1 \neq f'(0).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What is $\tau(A)$ of components of $G \backslash A$, where $A \subseteq V$? A graph is $t$-tough if for all cutsets $A$ we have $|A| \ge t\cdot c(G\backslash A)$, where $c(G\backslash A)$ denotes the number of components of $G\backslash A$; the definition of $t$-tough can be found here: http://personal.stevens.edu/~dbauer/pdf/dmn04f6.pdf Now I am reading a paper whose author defines a $t$-tough graph in other terms. Link to the paper: http://www.sciencedirect.com/science/article/pii/S0012365X09002775 "Graph $G$ is $\alpha$-tough if the number $\tau(A)$ of components of $G\backslash A$ is at most $\max\{1, |A|/\alpha\}$ for every non-empty set $A$ of vertices." The question is: what is $\tau(A)$?
From the context you provided, it seems that the definition of $\tau(A)$ is given in the sentence you quote: $\tau(A)$ is the number of connected components of the graph $G\setminus A$, obtained by removing from $G$ a subset $A$ of its vertices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
periodic solution of $x''-(1-x^2-(x')^2)x'+x=0$ Consider the differential equation $$x''-(1-x^2-(x')^2)\,x'+x=0.$$ I want to discuss its non-constant periodic solutions. Can someone give a hint on how to start thinking about this? And does it have a periodic solution? My attempt: I converted it to the system of differential equations below: $$x_1'=x_2$$ $$x_2'=-x_1+x_2-(x_1^2+x_2^2)x_2$$ I know that if it has a periodic solution, there exist $t_1$ and $t_2$ such that $x_1(t_1)=0$ and $x_2(t_2)=0$.
In terms of $$Z = x^2 + x'^2 - 1$$ the equation becomes $$Z' = -2x'^2Z$$ (indeed, $x''=(1-x^2-x'^2)x'-x=-Zx'-x$, so $Z'=2x'(x+x'')=-2x'^2Z$), which has the solution $Z(t) = Z(0)e^{-2\int_0^t x'^2 dt}$. Now if $x$ is periodic then $Z$ must be periodic, but this is only possible (since $\int_0^t x'^2 dt$ is a non-decreasing function) if $x'\equiv 0$ or $Z(0) = 0$. The only non-constant periodic solutions therefore satisfy $$Z\equiv 0\iff x' = \pm\sqrt{1-x^2} \iff \arcsin(x) = \pm t + C$$ and we get that all non-constant periodic solutions are of the form $x(t) = \sin(\pm t + C)$ for some constant $C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How many different (circular) garlands can be made using $3$ white flowers and $6m$ red flowers? This is my first question here. I'm given $3$ white flowers and $6m$ red flowers, for some $m \in \mathbb{N}$. I want to make a circular garland using all of the flowers. Two garlands are considered the same if they can be rotated to be the same. My question is: how many different garlands can I make? I tried an approach in which I divided the question into three parts: * *No. of ways in which the white flowers are all together *No. of ways in which two white flowers are together *No. of ways in which no two white flowers are together Can you please help? The answer is $3m^2+3m+1$ in the book that I saw Thanks
First I will dramatically overcount. Then I will overcompensate for my overcounting. Then I will compensate for my overcompensation to reach the final answer. Imagine the garland as a fixed circle, with a total of $3+6m$ positions for flowers. Then the number of possible garlands is simply the number of ways to choose the $3$ positions of the white flowers. This is the binomial coeffient $$\binom{3+6m}{3} = \frac{(6m+1)(6m+2)(6m+3)}{6}$$ But we've agreed that garlands that can be rotated to be the same fixed garland are considered the same garland. Each garland can be rotated in $3+6m$ ways, so we have to divide our answer above by $3+6m$, to get $$\frac{(6m+1)(6m+2)}{6}$$ This is N.F.Taussig's answer, but it's missing a little something. This is the one garland that remains the same when rotated by an angle less than $2\pi$. This special garland has the white flowers arranged in an equilateral triangle, and any rotations by $2\pi/3$ do not change the garland. There are $1+2m$ fixed versions of this garland, and we've counted them each $1/(3+6m)$ times. That means we've counted $\frac{1+2m}{3+6m} = 1/3$ of these garlands, when there is actually a full $1$ garland of this type. We need to add $2/3$ to our previous answer, to get our final answer of $$\frac{(6m+1)(6m+2)}{6}+\frac{2}{3} \,= \,\boxed{6m^2 + 3m+1} \,\,\text{ garlands}$$
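One final compensation, in case the book's answer of $3m^2+3m+1$ quoted in the question is puzzling: that count arises if two garlands are also considered the same when one can be flipped over (reflected), not just rotated. Sketching the computation (my own arithmetic, so double-check it): with $n=6m+3$ beads and $n$ odd, each of the $n$ reflections fixes exactly one bead, and a garland fixed by such a reflection must place one white flower on that bead and one white pair among the $(n-1)/2 = 3m+1$ transposed pairs, giving $3m+1$ fixed arrangements per reflection. Averaging over the full dihedral group then gives $$\frac12\left(6m^2+3m+1\right)+\frac12\left(3m+1\right)=3m^2+3m+1.$$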
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
If we have $f(x)=e^x$, then what is the maximum value of $δ$ such that $|f(x)-1|< 0.1$ whenever $|x|<δ$? If we have $f(x)=e^x$, then what is the maximum value of $δ$ such that $|f(x)-1|< 0.1$ whenever $|x|<δ$? I tried to solve this problem with the epsilon-delta definition. From the definition, $1$ is $L$ (the value of the limit), so the limit will be $\lim_{x\to 0} e^x=1$ (the point being approached is then $0$). Then $|e^x-1|<0.1$, which will lead us to $0.9<e^x<1.1$, i.e. $\ln 0.9<x<\ln 1.1$. Then what to do? I am lost! Note: the answer should be in terms of $\ln$.
You are almost done. Your last inequality is $$\ln\frac9{10}<x<\ln\frac{11}{10}$$ or $$-\ln\frac{10}9<x<\ln\frac{11}{10}$$ Since $11/10<10/9$, for $|x|<\delta=\ln(11/10)$, the inequality $|f(x)-1|<0.1$ holds. What happens if $|x|\ge\ln(11/10)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Determine monotone intervals of a function Let $$ f(x) = \int_1^{x^2} (x^2 - t) e^{-t^2}dt. $$ We need to determine monotone intervals of $f(x)$. I tried to differentiate $f(x)$ as follows. $$ f'(x) = \left(x^2 \int_1^{x^2} e^{-t^2}dt \right)' - \left(\int_1^{x^2} te^{-t^2}dt\right)' \\ = 2x \int_1^{x^2} e^{-t^2}dt + 2x^3e^{-x^4} - 2x^3e^{-x^4} \\ = 2x \int_1^{x^2} e^{-t^2}dt. $$ But it seems that we are not able to compute $\int_1^{x^2} e^{-t^2}dt$ explicitly. Thank you very much.
Note that $f'(x)=0 \iff x \in \{ -1, 0, 1\} $. It suffices to show that $ \displaystyle\int_{1}^{x^2} e^{-t^2} \, \mathrm{d}t $ is positive for $|x|>1$ and negative for $|x|<1$. Do you see why this is?
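To spell the rest out (a sketch): since $e^{-t^2}>0$, the integral $\int_1^{x^2}e^{-t^2}\,dt$ has the same sign as $x^2-1$. Hence $f'(x)=2x\int_1^{x^2}e^{-t^2}\,dt$ is positive exactly when $x$ and $x^2-1$ have the same sign, so $f$ is increasing on $(-1,0)$ and on $(1,\infty)$, and decreasing on $(-\infty,-1)$ and on $(0,1)$.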
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
how many ways to choose 3 coins? Sorry I don't know the correct math terms here, I haven't had a math class in some time. That's probably why I have trouble finding an existing question like this, too. Let's say there are 4 different kinds of coins: penny (P), nickel (N), dime (D), and quarter (Q). How many ways can you have 3 coins? Since there are 3 coins and 4 possibilities for each, my first thought was 4x4x4, or 64, but that's not right. 2 pennies and a nickel are still just 2 pennies and a nickel no matter if it's PPN, PNP, or NPP. The order doesn't matter. I listed out all the possibilities and counted them, but what's the correct formula to use here? 1. PPP 2. PPN 3. PPD 4. PPQ 5. PNN 6. PND 7. PNQ 8. PDD 9. PDQ 10. PQQ 11. NNN 12. NND 13. NNQ 14. NDD 15. NDQ 16. NQQ 17. DDD 18. DDQ 19. DQQ 20. QQQ
Here's a solution using a much more general method, the Pólya-Burnside lemma. We consider the three choices of coins as a single object with three slots that must be filled. The three slots are indistinguishable, so any permutation of them is considered a symmetry of this object; its symmetry group is therefore $S_3$, the symmetric group on three elements. Suppose there are $N$ choices of coins for each slot. The Pólya-Burnside lemma says that to find the number of ways of filling all the slots, adjusted for symmetry, is to find the number of fillings that are left fixed by each of the six symmetries, and average those six numbers. The six symmetries fall into three conjugacy classes: * *The identity symmetry, with three orbits *The three symmetries that exchange two slots and leave one fixed, with two orbits each *The two symmetries that permute the slots cyclically, with one orbit each In each conjugacy class, the number of ways to assign coins to the slots so that the assignment is left unchanged by that symmetry is $N^k$ where $k$ is the number of orbits and $N$ is the number of types of coins. The identity symmetry contributes $N^3$ ways; the three symmetries of type 2 contribute $N^2$ ways each for a total of $3N^2$, and the two symmetries of type 3 contribute $N$ ways each for a total of $2N$. Averaging these we find that the number of ways of assigning $N$ types of coins to the three slots is always $$\frac{N^3+3N^2 + 2N}6$$ and taking $N=4$ we find that the particular answer is $$\frac{4^3+3\cdot4^2+2\cdot 4}6 = \frac{120}{6} = 20.$$ We can also observe that $$\frac{N^3+3N^2 + 2N}6 = \frac{N(N+1)(N+2)}{3!} = \binom{N+2}{3},$$ which agrees with the solution found by the stars-and-bars method described elsewhere in this thread. This is a big hammer to use for a little problem, but I think it's instructive as a simple example of how to use the Pólya-Burnside lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
In projective geometry, is the dual of the cross ratio an angle measurement? I am trying to get my head around angles in projective geometry. I understand (more or less) the cross ratio and that it can be seen as a distance measurement (for example in the Beltrami-Cayley-Klein model of hyperbolic geometry). But then there is its dual, where the cross ratio is some kind of angle measurement of lines? I just draw a blank here: which lines are meant? Can anybody shed some light?
You can define angles in terms of the cross ratio (of lines) and the circular points at infinity. If you have two lines that intersect at the point P, you can draw two more lines from P to the circular points at infinity. By Laguerre's formula, taking the natural logarithm of the cross ratio of these four lines gives $2i$ times the Euclidean angle between the two original lines (up to multiples of $2\pi i$ depending on branch). The circular points at infinity add just enough structure to projective geometry to let you tell circles apart from other conics; this turns out to give enough structure to define angles. The same construction can be used to define hyperbolic angles, a.k.a. rapidity. In that case, instead of drawing lines to the circular points at infinity, you pick a pair of real points that define asymptotes for a unit hyperboloid in Minkowski space. This gives the relation between the distances measured by cross ratios in the Beltrami-Klein model and hyperbolic angles in the hyperboloid model.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Jamie rolls a die multiple times. find the probability that she rolls her first 5 before she rolls her second even number Jamie rolls her fair 6-sided die multiple times. Find the probability that she rolls her first 5 before she rolls her second (not necessarily distinct) even number? This is what I have so far... the probability that she rolls a 5 is $\frac{1}{6}$ the probability that she rolls an even number if $\frac{1}{2}$ Now I'm stuck...can anyone help me get to the final product? please be clear and concise.
Consider the following two events: * *$A:=\left\{\mbox{sequence of rolls containing one or no even number, and ending in a 5}\right\}$ *$B:=\left\{\mbox{sequence of rolls containing some even number}\right\}$ The probability of interest, in a sequence of independent dice rolls, is $$ P\left(A \mbox{ followed by } B\right){}={}P\left(B\, |\, A\right)P\left(A\right)\,. $$ Note that, because the rolls are independent and the event $B$ is an almost-sure event (that is, the probability of it not occurring is zero), we have $$ P\left( B \, | \, A\right){}={}P\left( B \right){}={} 1\,. $$ So, $$ P\left(A \mbox{ followed by } B\right){}={}P\left(A\right)\,. $$ We proceed, therefore, to compute the probability of the event $A$ as follows (making liberal use of independence): $$ \begin{eqnarray*} P\left(A\right)&{}={}&\sum\limits_{k=1}^{\infty} \left(P\left(k\mbox{ rolls, no even, ending in }5\right) {}+{} P\left(k\mbox{ rolls, one even, ending in }5\right)\right) \\ &{}={}&\sum\limits_{k=1}^{\infty} \left(\frac{1}{6}\left(\frac{1}{3}\right)^{k-1}{}+{}\frac{1}{6}{k-1\choose 1}\left(\frac{1}{2}\right)\left(\frac{1}{3}\right)^{k-2}\right) \\ &{}={}&\frac{1}{6}\sum\limits_{k=1}^{\infty} \left(\left(\frac{1}{3}\right)^{k-1}{}+{}\frac{1}{2}\left(k-1\right)\left(\frac{1}{3}\right)^{k-2}\right) \\ &{}={}&\frac{1}{6}\left(\frac{1}{\left(1-\frac{1}{3}\right)}{}+{}\frac{1}{2\left(1-\frac{1}{3}\right)^2}\right)\\ &{}={}&\frac{7}{16}\,. \end{eqnarray*} $$
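A quicker cross-check (my own, not part of the computation above): rolls of $1$ or $3$ are irrelevant, so one may condition on each roll being a $5$ or an even number. Given that, $P(5)=\frac{1/6}{1/6+1/2}=\frac14$ and $P(\text{even})=\frac34$. The first $5$ precedes the second even number exactly when the first relevant roll is a $5$, or the first is even and the second is a $5$: $$\frac14+\frac34\cdot\frac14=\frac{7}{16},$$ in agreement with the sum above.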
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Given $n$ linear functionals $f_k(x_1,\dotsc,x_n) = \sum_{j=1}^n (k-j)x_j$, what is the dimension of the subspace they annihilate? Let $F$ be a subfield of the complex numbers. We define $n$ linear functionals on $F^n$ ($n \geq 2$) by $f_k(x_1, \dotsc, x_n) = \sum_{j=1}^n (k-j) x_j$, $1 \leq k \leq n$. What is the dimension of the subspace annihilated by $f_1, \dotsc, f_n$? Approaches I've tried so far: * *Construct a matrix $A$ whose $k$th row's entries are the coefficients of $f_k$, i.e., $A_{ij} = i - j$, and compute the rank of the matrix. Empirically, the resulting matrix has rank 2 for $n = 2$ to $n = 6$, but I don't see a convenient pattern to follow for a row reduction type proof for the general case. *Observe that $f_k$, $k \geq 2$ annihilates $x = (x_1,\dotsc, x_n)$ iff $$\sum_{j=1}^{k-1} (k-j)x_j + \sum_{j = k+1}^n (j-k)x_j = 0$$ iff $$k\left(\sum_{j=1}^{k-1} x_j - \sum_{j=k+1}^n x_j\right) = \sum_{j=1}^{k-1} jx_j - \sum_{j=k+1}^n jx_j,$$ and go from there, but I do not see how to proceed. Note: This is Exercise 10 in Section 3.5 ("Linear Functionals") in Linear Algebra by Hoffman and Kunze; eigenvalues and determinants have not yet been introduced and so I am looking for direction towards an elementary proof.
All the $f_k$ are linear combinations of the two linear functionals $$ \sum_{j=1}^n x_j \quad\text{and}\quad \sum_{j=1}^n jx_j; $$ therefore the span of the $f_k$ has dimension at most $2$, and the subspace they annihilate has dimension at least $n-2$. Checking that the span has dimension exactly $2$, so that the annihilated subspace has dimension exactly $n-2$, should be easy. (For an exercise, you might want to use this observation to construct a solution along the lines of your idea #1.)
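One way to carry out that check (a sketch of my own): writing $g_1(x)=\sum_j x_j$ and $g_2(x)=\sum_j jx_j$, we have $f_k = k\,g_1 - g_2$, so $g_1 = f_2 - f_1$ and $g_2 = f_2 - 2f_1$ lie in the span of the $f_k$; and $g_1,g_2$ are linearly independent for $n\ge 2$ since the matrix of values $\begin{pmatrix} g_1(e_1) & g_1(e_2)\\ g_2(e_1) & g_2(e_2)\end{pmatrix} = \begin{pmatrix}1&1\\1&2\end{pmatrix}$ is invertible. Hence the span of $f_1,\dots,f_n$ has dimension exactly $2$, and by rank-nullity the subspace they annihilate has dimension $n-2$.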
{ "language": "en", "url": "https://math.stackexchange.com/questions/1074979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Find $\lim_{x\to 0^+}\sin(x)\ln(x)$ Find $\lim_{x\to 0^+}\sin(x)\ln(x)$ By using l'Hôpital rule: because we will get $0\times\infty$ when we substitute, I rewrote it as: $$\lim_{x\to0^+}\dfrac{\sin(x)}{\dfrac1{\ln(x)}}$$ to get the form $\dfrac 00$ Then I differentiated the numerator and denominator and I got: $$\dfrac{\cos x}{\dfrac{-1}{x(\ln x)^2}}$$ when substitute in this form I get: $\dfrac{1}{0\times\infty^2}$ Can we have the result $0\times\infty^2=0$? Then the limit will be $\dfrac10=\infty$?
We can use an approximation argument: when $x$ is small, $\sin(x) \approx x$, and $x\ln(x)\to 0$ as $x\to 0^+$ because the logarithm diverges more slowly than any power of $x$ vanishes. Hence $\lim_{x \to 0^+} \sin(x) \ln(x) = \lim_{x \to 0^+} x\ln(x) = 0$
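To make this rigorous (a small completion of my own): for $0<x\le 1$ we have $0\le\sin x\le x$, so $|\sin(x)\ln(x)|\le x|\ln x|$, and substituting $x=e^{-t}$ gives $x|\ln x| = te^{-t}\to 0$ as $t\to\infty$. The squeeze theorem then yields $\lim_{x\to0^+}\sin(x)\ln(x)=0$.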
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Triangle-free graph with 5 vertices What is the maximum number of edges in a triangle-free graph on 5 vertices? No answers, please...just hints. I believe that E $\leq$ 5, but I'm not sure where to go from there.
Consider a pentagon. If you try to add any more edges to the pentagon, a triangle will be formed. Thus a graph with a cycle through all 5 vertices can have at most 5 edges without violating the condition. Now consider bipartite graphs on a total of 5 vertices, say $x$ in one part and $5-x$ in the other. Bipartite graphs can't have odd-length cycles, hence no triangles and no pentagons; and we have already taken care of pentagons. You just need to find the maximum in the bipartite case: a complete bipartite graph has $(5-x)\cdot x$ edges. Hope I have not answered completely.
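If you later want to check your final count against a known result: Mantel's theorem says a triangle-free graph on $n$ vertices has at most $\lfloor n^2/4\rfloor$ edges, attained by the complete bipartite graph $K_{\lfloor n/2\rfloor,\lceil n/2\rceil}$; with $n=5$ this agrees with maximising $(5-x)\cdot x$ above.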
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How to find all integer solutions of the Diophantine equation $(3x-1)^2+2=(2y^2-4y)^2+y(2y-1)^2-6y$ Find all integer solutions of the following Diophantine equation: $$(3x-1)^2+2=(2y^2-4y)^2+y(2y-1)^2-6y,$$ or equivalently $$9x^2-6x+3=4y^4-12y^3+12y^2-5y.$$ Maybe this equation can be solved using Pell equation methods? I want to compare the right-hand side with the square of a quadratic $(ay^2+by+c)$; maybe we can write $$(3x-1)^2-A(ay^2+by+c)^2=B?$$
You can write it as $$(3x-1)^2-(2y^2-3y+\tfrac34)^2=-\tfrac12y-\tfrac{41}{16}.$$ Factoring the LHS gives two factors at least one of which gets too large as $y$ is large, as $$|(3x-1)-(2y^2-3y+\tfrac34)|+|(3x-1)+(2y^2-3y+\tfrac34)|\geqslant2\cdot|2y^2-3y+\tfrac34|.$$ It suffices to check the $y$'s with $-\tfrac12y-\tfrac{41}{16}=0$ (impossible) or $|-\tfrac12y-\tfrac{41}{16}|\geqslant|2y^2-3y+\tfrac34|$, that is, $y\in\{0,1,2\}$. (Note all this makes a little more sense after denominators are cleared, but it's perfectly valid to act as if they are.) The only solution is $(1,2)$.
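For completeness, the three remaining cases (my arithmetic, so worth re-checking): rearranging gives $(3x-1)^2 = 4y^4-12y^3+12y^2-5y-2$. For $y=0$ this is $-2$ and for $y=1$ it is $-3$, both impossible; for $y=2$ it is $4$, so $3x-1=\pm 2$, of which only $3x-1=2$, i.e. $x=1$, gives an integer. Hence $(x,y)=(1,2)$ is indeed the only solution.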
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Distinct integers with $a=\text{lcm}(|a-b|,|a-c|)$ and permutations Do there exist three pairwise different integers $a,b,c$ such that $$a=\text{lcm}(|a-b|,|a-c|), b=\text{lcm}(|b-a|,|b-c|), c=\text{lcm}(|c-a|,|c-b|)?$$ None of the integers can be $0$, because the lcm is never $0$. So we know that $|a-b|<\max(a,b)$ (and same with $|b-c|,|c-a|$.) But this is still plausible, because lcm is greater than (or equal to) each of the two numbers.
It seems the following. All greatest common divisors considered below are positive. Put $d=\text{gcd}(a,b,c)$ and let $d'=\text{gcd}(a-b,a-c)$. Then $d\mid d'$. From the other side, $d'$ divides $a-b$, which divides $a=\text{lcm}(|a-b|,|a-c|)$; hence $d'\mid a$, so $d'\mid b=a-(a-b)$ and $d'\mid c=a-(a-c)$. Thus $d'\mid d$ and therefore $d'=d$. Hence $|a|=|a-b||a-c|/d$, since $\text{lcm}(m,n)=mn/\text{gcd}(m,n)$. Similarly, $|b|=|b-a||b-c|/d$ and $|c|=|c-a||c-b|/d$. Put $a=a'd$, $b=b'd$, and $c=c'd$. Then $\text{gcd}(a',b',c')=1$ and $|a'|=|a'-b'||a'-c'|$. Now $|a'-b'|$ divides $a'$ (because $|a-b|$ divides $a$) and hence also $b'$, while $\text{gcd}(a',b')$ divides $a'-b'$; therefore $|a'-b'|=\text{gcd}(a',b')$, and likewise for the other pairs, so $|a'|=\text{gcd}(a',b')\,\text{gcd}(a',c')$. Similarly, $|b'|=\text{gcd}(b',a')\,\text{gcd}(b',c')$ and $|c'|=\text{gcd}(c',b')\,\text{gcd}(c',a')$. Put $x=\text{gcd}(b',c')$, $y=\text{gcd}(a',c')$, and $z=\text{gcd}(a',b')$. Then $|a'|=yz$, $|b'|=xz$, and $|c'|=xy$. So $yz=|yz-xz||yz-xy|$ and therefore $1=|y-x||z-x|$. Similarly $1=|x-y||z-y|$ and $1=|x-z||y-z|$. In particular $x$, $y$, $z$ are pairwise distinct, so without loss of generality we may assume that $x>y$ and $x>z$. Then $|x-y||x-z|=1$ implies $y=z=x-1$, whence $y-z=0$, contradicting $1=|x-z||y-z|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }