Euclidean distance and dot product I've been reading that the Euclidean distance between two points, and the dot product of the two points, are related. Specifically, the Euclidean distance is equal to the square root of the dot product. But this doesn't work for me in practice. For example, let's say the points are $(3, 5)$ and $(6, 9)$. The Euclidean distance is $\sqrt{(3 - 6)^2 + (5 - 9)^2}$, which is equal to $\sqrt{9 + 16}$, or $5$. However, the dot product is $(3 * 6 + 5 * 9)$, which is $63$, and the square root of this is not $5$. What am I getting wrong?
distance: $D=\sqrt{(|A|^2-(A\cdot B))+(|B|^2-(A\cdot B))}=\sqrt{|A|^2-2(A\cdot B)+|B|^2}$. I found this geometrically. Sorry for the lousy formatting; this is my first time posting here. I needed to find $D$ in terms of norms and dot products for my linear algebra homework, as I assume most people asking this question are, and the other answers weren't helpful to me. I don't know where the original asker got that $D=\sqrt{A\cdot B}$, but hopefully the equation I found for my homework will be useful to others trying to find how $D$ is related to the dot product.
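As a quick sanity check, here is a short Python sketch (plain standard library; the variable names are mine) verifying the identity on the question's points $A=(3,5)$ and $B=(6,9)$:

```python
import math

A, B = (3, 5), (6, 9)

dot = sum(a * b for a, b in zip(A, B))       # A.B = 63
norm2_A = sum(a * a for a in A)              # |A|^2 = 34
norm2_B = sum(b * b for b in B)              # |B|^2 = 117

# D = sqrt(|A|^2 - 2 A.B + |B|^2)
via_dot = math.sqrt(norm2_A - 2 * dot + norm2_B)
direct = math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)))

print(via_dot, direct)  # both print 5.0
```

Both computations agree, which also shows why $\sqrt{A\cdot B}=\sqrt{63}$ cannot be the distance.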
{ "language": "en", "url": "https://math.stackexchange.com/questions/1236465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 4, "answer_id": 3 }
Suppose that $f$ is differentiable on $\mathbb{R}$ and $\lim_{x\to \infty}f'(x)=M$. Show that $\lim_{x\to \infty}\big(f(x+1)-f(x)\big)$ exists and find it. I've been stuck on this question for a long time now and was wondering if anyone could show me how it's done. So far I have done the following: Since $\lim_{x\to \infty}f'(x)=M$, then $\forall \epsilon>0 ,\exists A >0$ s.t. if $x>A$ then $|f'(x)-M|<\epsilon$. So we have that $M-\epsilon<f'(x)<M+\epsilon$. By the MVT on $[x,x+1]$ for $x>A$, $\exists c\in (x,x+1)$ s.t. ${f(x+1)-f(x)\over 1}=f'(c)$. Therefore, $M-\epsilon<f(x+1)-f(x)<M+\epsilon$. However, I do not know what to do after this.
Here is an easier way out. By Mean Value theorem, given any $x$, there exists $y \in [x,x+1]$ such that $$f(x+1) - f(x) = \dfrac{f(x+1)-f(x)}{(x+1)-x} = f'(y)$$ Now as $x \to \infty$, since $y \in [x,x+1]$, we have $y \to \infty$. Hence, we obtain that $$\lim_{x \to \infty} \left(f(x+1) - f(x)\right) = \lim_{y \to \infty} f'(y) = M$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1236572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Express $\ln(3+x)$ and $\frac{1+x}{1-x}$ as Maclaurin series It's probably a lot to ask, but how can I obtain the Maclaurin series for the two functions $f(x)=\ln(3+x)$ and $g(x)=\frac{1+x}{1-x}$? As far as I know I can't use any commonly known series to help me with this one. So, finding the derivatives at 0: $f(0) = \ln(3)$ $f'(x)= 1/(x+3) \implies f'(0) = 1/3$ $f''(x)= -1/(x+3)^2 \implies f''(0)= -1/ 3^2$ $f'''(x)= 2/(x+3)^3 \implies f'''(0)= 2/3^3$ $f''''(x)= -6/(x+3)^4 \implies f''''(0)= -6/3^4$ and now I guess I'll have to use the Maclaurin formula: $$f(x)=\sum \frac{f^{(n)}(0) x^n}{n!}$$ but now how do I continue from here? I've got: $$(\ln 3) + \frac{x}{3} - \frac{x^2}{3^2 \cdot 2!} + \frac{2x^3}{3^3 \cdot 3!} - \frac{6x^4}{3^4 \cdot 4!} $$ The same question goes for $g(x)$; honestly, on that one, upon simplifying the numerator from $g^{(n)}(0)$ with the factorial when I plug it into the Maclaurin series formula I just get $g(x)= 1+2x+2x^2 + 2x^3 + 2x^4 + \cdots$
For $(x+1)/(1-x)$, write it as $$ x \frac{1}{1-x} + \frac{1}{1-x} $$ substitute the series for $1/(1-x)$ and proceed. To get a single series, add the terms one by one. Also $$ \log(3 -x) = \log(3) + \log(1 - x/3) $$ so substitute $x/3$ in the series for $\log(1-y)$, which can be found by integrating the series for $1/(1-y)$. Switch the sign of $x$ to get the series for $\log(3+x)$, i.e., multiply the coefficient of $x^n$ by $(-1)^n$.
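If you want to confirm the hand computation, here is a small check with sympy (assuming it is available; `sp.series` expands about $0$ by default):

```python
import sympy as sp

x = sp.symbols('x')

# Maclaurin expansion of ln(3 + x) to four nonconstant terms
print(sp.series(sp.ln(3 + x), x, 0, 5))
# log(3) + x/3 - x**2/18 + x**3/81 - x**4/324 + O(x**5)

# Maclaurin expansion of (1 + x)/(1 - x)
print(sp.series((1 + x) / (1 - x), x, 0, 5))
# 1 + 2*x + 2*x**2 + 2*x**3 + 2*x**4 + O(x**5)
```

The printed coefficients match the asker's terms after simplification (e.g. $\frac{x^2}{3^2\cdot 2!}=\frac{x^2}{18}$).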
{ "language": "en", "url": "https://math.stackexchange.com/questions/1236687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why aren't these negative numbers solutions for radical equations? I was working on radical equations and I came across a few problems where I got answers that worked when I checked, but were not listed as solutions. My teacher's only explanation was, "just because." Here is one problem where the only solution is $1$. $x=\sqrt{2-x}$ How I solved it $x^{2}=2-x$ $x^{2}+x-2=0$ $(x+2)(x-1)=0$ $x= \{-2, 1\}$ Then plugging both numbers back in, I get $1 = \sqrt{2-1}$ $1 = \sqrt{1}$ $1 = 1$ and $-2 = \sqrt{2--2}$ $-2 = \sqrt{4}$ The square root of $4$ can be both $-2$ because $-2 \times -2 = 4$ and $2$. $1$ is the only solution listed and my teacher says that it's right. What is the explanation for this? Why isn't $-2$ a solution for the problem?
The answer, as the others have already highlighted, is that $\sqrt{4} \neq -2$; but to aid you in the actual expression evaluation, consider that $\sqrt{x^2} \neq x$ in general, but rather $\sqrt{x^2} = \left| x \right|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1236801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 0 }
Identifying translations and rotations as compositions. I am having trouble understanding the parts below, namely the ones underlined in red and blue. For the red: why is it that $R_{A,90}(A)=A$ and that $\tau_{AB}(A)=B$? As for the blue: why is it that $R_{A,90}(B)=A'$?
The transformation $R_{A,90^\circ}$ is a counterclockwise rotation of the plane through an angle of $90^\circ$ around the point $A$. Since $A$ is the centre of rotation, it doesn't move, and therefore $R_{A,90^\circ}(A)=A$. When $B$ is rotated $90^\circ$ counterclockwise around the point $A$, it ends up at $A'$: it started directly below $A$, so it ends up the same distance directly to the right of $A$, at $A'$. Think of it as an hour hand moving backwards $90^\circ$ from $6$ o'clock to $3$ o'clock. The transformation $\tau_{AB}$ is a translation of the whole plane in the direction from $A$ to $B$; the length of the translation is the distance $d$ from $A$ to $B$. If you translate (move) the point $A$ $d$ units directly towards $B$, it ends up at $B$, so $\tau_{AB}(A)=B$. Similarly, the point $Y$ ends up at $X$, because $|YX|=|AB|=d$, and $X$ is directly below $Y$, so that the lines $YX$ and $AB$ are parallel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1236870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$5 \nmid 2^{n}-1$ when $n$ is odd I want to prove that $$5 \nmid 2^{n}-1$$ where $n$ is odd. I used Fermat's little theorem, which says $2^4 \equiv 1 \pmod 5$; because $n$ is odd, $4 \nmid n$, so it is done. Can you check my proof and say whether it is right or wrong? Thanks.
${\rm mod}\ 5\!:\,\ 2^{1+2j}\equiv 2(4^j)\equiv 2(-1)^j\equiv \pm2 \not\equiv 1$
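A one-line empirical check in Python (using the built-in three-argument `pow` for modular exponentiation):

```python
# 2^n mod 5 for odd n alternates between 2 and 3 (i.e. +-2 mod 5),
# so it is never 1 and 5 never divides 2^n - 1.
for n in range(1, 20, 2):
    print(n, pow(2, n, 5))
```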
{ "language": "en", "url": "https://math.stackexchange.com/questions/1236975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Caden has 4/3 kg of sand, which fills 2/3 of his bucket. How many buckets will 1 kg of sand fill? I have already finished Calculus II but I go back and practice the basics on Khan Academy. This problem confuses me conceptually every time. I know what the answer is, but I am having a hard time rationalizing the steps. What are the mental steps you take solving this problem? Thanks in advance and please have mercy on me. I am doing this for my own entertainment, not for a grade!
As the bucket is filled to $2/3$, half of its current content is needed to get it filled completely, that is, half of $4/3$ kg. So a full bucket holds $4/3 + 2/3 = 2$ kg, and hence $1$ kg of sand fills $1/2$ of a bucket.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 10, "answer_id": 7 }
Is the following statement true on $L^0$ spaces? Let $(\Omega,\mathcal{F},P)$ be a probability space. Let $X,Y\in L^0(\Omega;\mathbb{R})$ be two random variables taking values in $\mathbb{R}$. Is it true that if $$\int_{A} f(X(\omega))\, P(d\omega) = \int_{A} f(Y(\omega))\, dP(\omega)$$ for all $A\in \mathcal{F}$ and all Lipschitz continuous functions $f:\mathbb{R}\rightarrow \mathbb{R}$, then $$X=Y, \quad P\text{-a.s.}?$$ In other notation this is like saying: $$E[f(X)] = E[f(Y)] \quad \forall f\in Lip(\mathbb{R}) \Rightarrow X=Y,\quad P\text{-a.s.}$$ Could we also relax it to all $f$ continuous and bounded (not necessarily Lipschitz)? Thank you for the help :)
Under the further hypothesis that $(\Omega, \mathcal{F},P)$ is a standard probability space, I believe that this is true even if we restrict to only one $f$, namely $f(x) = x$. Recall the Lebesgue Differentiation Theorem. To apply that here, assume that $\Omega =[0, 1]$, $\mathcal{F}$ is the completed $\sigma$-algebra, and $P$ is Lebesgue measure. Equivalently, we can assume that $(\Omega, \mathcal{F},P)$ is mod $0$ isomorphic to that space, which is equivalent to the hypothesis that the space is standard. From there, we have that $$ X(\omega) = \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \int_{B_\epsilon (\omega)} X dP = \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \int_{B_\epsilon (\omega)} Y dP = Y(\omega) $$ where the first and third equality are almost surely true by Lebesgue differentiation, and the second by the hypothesis. So, with the probability space standard, this is true, but for more unusual situations, you'd need some way to get from integrals to function values, and Lebesgue Differentiation is the only method to do that that I am aware of.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solving the integral $\int \frac{3x-1}{\left(x^2+16\right)^3}dx$ I need to solve this one integral. $$\int \frac{3x-1}{\left(x^2+16\right)^3}dx$$ You need to use the method of undetermined coefficients. Here is what I get: $$(3x-1) = (Ax + B)(x^{2}+16)^{2} + (Cx + D)(x^{2}+16) + (Ex + F)$$ Matching coefficients: $$1: 256B + 16D + F = -1$$ $$x: 256A + 16C + E = 3$$ $$x^2: 32B + D = 0$$ $$x^3: 32A + C = 0$$ $$x^4: B = 0$$ $$x^5: A = 0$$ $$A = 0;B = 0;C = 0;D = 0;E = 3;F = -1$$ It turns out that I'm back to the same integral. What am I doing wrong?
Hint: Alternatively, evaluate $I(a)~=~\displaystyle\int\frac{3x-1}{x^2+a}~dx$ assuming $a>0$, and then try to express your integral in terms of $I''(16)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving $ \frac{1}{c} = \frac{1}{a} + \frac{1}{b}$ in a geometric context Prove or disprove $$ \frac{1}{c} = \frac{1}{a} + \frac{1}{b}. $$ I have no idea where to start, but it must be a simple proof. Trivia: this fact was used for the determination of the resistance of two parallel resistors in some circumstances a long time ago.
In general, if the vertex angle is $2\theta$ and $OC$ is the angle bisector, since $$\text{Area of triangle }OAB = \text{Area of triangle }OAC + \text{Area of triangle }OBC$$ we have $$\dfrac{OA \cdot OB \cdot \sin(2\theta)}2 = \dfrac{OA \cdot OC \cdot \sin(\theta)}2 + \dfrac{OC \cdot OB \cdot \sin(\theta)}2$$ $$ab\sin(2\theta) = ac\sin(\theta) + cb\sin(\theta) \implies \dfrac{2\cos(\theta)}c = \dfrac1a + \dfrac1b$$ Taking $\theta=\pi/3$, we obtain what you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 0 }
Integrating a Jacobian to find the volume. I want to solve the following: Prove that $$\displaystyle \int_R \sin^{n-2}\phi_1 \sin^{n-3}\phi_2\cdots\sin \phi_{n-2}\, d\theta\, d\phi_1\cdots d\phi_{n-2} = \frac{2\pi^{n/2}}{\Gamma(n/2)}$$ where $ R=[0,2\pi] \times [0,\pi]^{n-2}$. Hint: Compute $ \int_{\mathbb R^n}e^{-|x|^2}dx$ in spherical coordinates. So I am having problems calculating the latter integral in spherical coordinates, because I don't know how to integrate (in finite steps) $\sin^{n}(x)$, and I don't know how this turns out to be a quotient of integrals. Can you help me solve this please? Thanks a lot in advance :)
Hint: all the variables $\phi_i$ can be separated in the integral, so you can transform the integral into: $$\displaystyle \int_R \sin^{n-2}\phi_1 \sin^{n-3}\phi_2\cdots\sin \phi_{n-2} d\theta d\phi_1\cdots d\phi_{n-2}=\int_{0}^{2\pi}d\theta \prod_{i=1}^{n-2}\int_{0}^{\pi}\sin^i\phi_i d\phi_i$$
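Here is a quick sympy check (my own sketch, assuming sympy is installed) that the separated product really equals $\frac{2\pi^{n/2}}{\Gamma(n/2)}$ for small $n$:

```python
import sympy as sp

phi = sp.symbols('phi')

def separated_product(n):
    # 2*pi from the theta integral, times the one-variable sine integrals
    result = 2 * sp.pi
    for i in range(1, n - 1):  # i = 1, ..., n-2
        result *= sp.integrate(sp.sin(phi) ** i, (phi, 0, sp.pi))
    return sp.simplify(result)

for n in range(2, 7):
    closed_form = sp.simplify(2 * sp.pi ** sp.Rational(n, 2) / sp.gamma(sp.Rational(n, 2)))
    print(n, separated_product(n), closed_form)  # the two columns agree
```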
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to evaluate $\log x$ to high precision "by hand" I want to prove $$\log 2<\frac{253}{365}.$$ This evaluates to $0.693147\ldots<0.693151\ldots$, so it checks out. (The source of this otherwise obscure numerical problem is in the verification of the Birthday problem.) If you had to do the calculation by hand, what method or series would you use to minimize the number of operations and/or size of the arguments involved in order to get this result? I'm doing a formal computer proof, so it's not exactly by hand, but I still want to minimize the number of evaluations needed to get to this result. One method is by rewriting it into $2<e^{253/365}$ and using the power series; since it is a positive series you know you can stop once you have exceeded $2$. Working this out, it seems you need the terms $n=0,\dots,7$ of the sum, and then it works out to $$2<\sum_{n=0}^7\frac{(253/365)^n}{n!}=\frac{724987549742673918011}{362492763907870312500},$$ which involves rather larger numbers than I'd like. There is also the limit $(1+\frac{253}{365n})^n\to$ $e^{253/365}$, but the convergence on this is not so good; it doesn't get past $2$ until $n=68551$, at which point we are talking about numbers with $507162$ digits. For $\log 2$ there is of course the terribly converging alternate series $\log 2=\sum_{n=1}^\infty\frac{-(-1)^n}n$, which requires $71339$ terms to get the desired bound. This can be improved by pushing the series into its geometrically convergent region as $\log 2=2\log{\sqrt 2}=-2\sum_{n=1}^\infty \frac{(1-\sqrt2)^n}n$, but now there is the added complication of estimating $\sqrt 2$ to sufficient precision. Assuming that $\sqrt 2$ is known exactly, you need to take this series out to $12$ terms, at which point we are verifying $$\frac{1959675656 \sqrt2-2771399891}{1011780}<0\Leftarrow2771399891^2>1959675656^2\cdot 2.$$ What other methods are there to do a calculation like this? Is there a way to use a root-finding method like Newton's to get a strict bound out with fast convergence?
You could express this as $\log(1/2) > -\dfrac{253}{365}$. The series $$\log(1/2) = \log(1-1/2) = -\sum_{n=1}^\infty \dfrac{1}{n 2^n}$$ converges quickly, and has nice bounds: $$ \log(1/2) \ge - \sum_{n=1}^{N-1} \dfrac{1}{n 2^n} - \sum_{n=N}^\infty \dfrac{1}{N 2^n} = - \sum_{n=1}^{N-1} \dfrac{1}{n 2^n} - \dfrac{1}{N 2^{N-1}}$$ EDIT: Another way to write it is $2 < (\exp(1/365))^{253}$, and use the continued fraction $$ \exp(1/n) = 1 + \dfrac{1}{n-1 + \dfrac{1}{1 + \dfrac{1}{1 + \dfrac{1}{3n-1 + \dfrac{1}{1+ \dfrac{1}{1+ \dfrac{1}{5n-1 + \ldots}}}}}}}$$ In particular, $$\exp(1/365) >1+ 1/(364+1/(1+1/(1+1/1094))) = \dfrac{800080}{797891}$$ and (if you don't mind exact arithmetic with big integers) $$\left(\dfrac{800080}{797891}\right)^{253} > 2$$
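Both routes are easy to certify with exact rational arithmetic; a short sketch using Python's `fractions` (the names are mine):

```python
from fractions import Fraction

# Continued-fraction route: (800080/797891)^253 > 2 gives exp(253/365) > 2.
r = Fraction(800080, 797891)
print(r ** 253 > 2)  # True

# Series route: log 2 <= sum_{n<N} 1/(n 2^n) + 1/(N 2^(N-1)); N = 12 suffices.
N = 12
upper = sum(Fraction(1, n * 2 ** n) for n in range(1, N)) + Fraction(1, N * 2 ** (N - 1))
print(upper < Fraction(253, 365))  # True, so log 2 < 253/365
```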
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Let $x \in \mathbb Q \setminus \{0\}$ and $y \in \mathbb R\setminus \mathbb Q$. Prove that $\frac{x}{y} \in \mathbb R \setminus \mathbb Q$ Let $x \in \mathbb Q\setminus \{0\}$ and $y \in \mathbb R\setminus \mathbb Q.$ Prove that $\frac{x}{y} \in \mathbb R \setminus \mathbb Q$ I saw this question in a basic analysis test, but it confuses me: intuitively it makes sense, but how do you show mathematically that the set of rationals is not in the solution space?
From the hypothesis we have that $$\exists n,m \in \mathbb{Z} \setminus \{ 0 \} : x = \frac{m}{n}$$ and that $$ y \in \mathbb{R} \setminus \mathbb{Q}$$ Now assume that $$ \frac{x}{y} \in \mathbb{Q}$$ Then we have $$ \exists p,q \in \mathbb{Z},\ q \neq 0 : \frac{x}{y} = \frac{p}{q}$$ (and $p \neq 0$, since $x \neq 0$). Well, we can write this as $$ \frac{1}{y} \frac{m}{n} = \frac{p}{q} $$ With some further algebraic manipulations, can you see why this would be a contradiction?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Hamiltonian Graphs with Vertices and Edges I have this question: Alice and Bob are discussing a graph that has $17$ vertices and $129$ edges. Bob argues that the graph is Hamiltonian, while Alice says that he’s wrong. Without knowing anything more about the graph, must one of them be right? If so, who and why, and if not, why not? I figured that since $\frac{129}{17}$ is only ~$7.5$ which is the average degree and for a graph to be Hamiltonian every degree has to be at least $\frac{n}{2}$, which is $\frac{17}{2}$ ~$8.5$ that the graph cannot be Hamiltonian since there are not enough edges to satisfy the condition, but this was marked wrong. Does anyone know why?
The complete graph $K_{17}$ would have ${17\choose 2}=136$ edges. So we are missing only $7$ edges. Thus every vertex has degree at least $16-7=9>\frac{17}2$. By Dirac's theorem, a graph on $n\ge 3$ vertices in which every vertex has degree at least $n/2$ is Hamiltonian, so the graph must be Hamiltonian and Bob is right.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $(\alpha_1 + ........ + \alpha_n)^2 ≤ n \cdot (\alpha_1^2 + ....... + \alpha_n^2)$ For any real numbers $\alpha_1, \alpha_2, . . . . ., \alpha_n$, $$(\alpha_1 + ...... + \alpha_n)^2 ≤ n \cdot (\alpha_1^2 + ..... + \alpha_n^2)$$ And when is the inequality strict?
This is just an application of Jensen's inequality. As $f(x) = x^2$ is a convex function, we can apply it here: $$\left(\frac{\alpha_1+\cdots+\alpha_n}{n}\right)^2 \le \frac{\alpha_1^2+\cdots+\alpha_n^2}{n},$$ and multiplying both sides by $n^2$ gives the claim. The inequality is strict unless all the $\alpha_i$ are equal. Reference: Jensen's inequality
{ "language": "en", "url": "https://math.stackexchange.com/questions/1237956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Estimate $\ln\left( 1.04^{0.25} + 0.98^{0.2} -1 \right)$ with 2D Taylor I need to estimate $\ln\left( 1.04^{0.25} + 0.98^{0.2} -1 \right)$ with a Taylor approximation of a two-variable function (i.e. $x$ and $y$). Eventually I managed to pull out the (presumably) correct function: $$f(x,y) = \ln \left( (1+2x)^{\frac{5}{4}y} + (1-x)^y - 1 \right)$$ around $\left( 0, 0.2 \right)$. But its partial derivatives (for example $f_{xx}$ and $f_{yx}$) are overly complicated. So my best guess for the function is wrong. It feels like I'm missing an identity that would simplify the task. Could you please direct me towards the most appropriate function?
My opinion is that you complicate things too much. Just try $f(x,y)=\ln (x^{0.25} + y^{0.2} -1)$ around $(1,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What did I do wrong in the permutations question? I was given the following question: A hardware store sells numerals for house numbers. It has large quantities of the numerals 3, 5, and 8 but no other numerals. How many different house numbers, with no more than three digits, can be made from these numbers? So, I tried to solve it by multiplying 3 x 3 x 3 = 27 because there were 3 choices at each step. However, according to the answers in this book, the answer is 39 and not 27. Please explain what I did wrong. Thank you :)
You've worked out the number of house numbers with exactly $3$ digits, but the question says "no more than" $3$ digits. See if you can work out the number of house numbers with only $2$ or $1$ digit!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$3 X^3 + 4 Y^3 + 5 Z^3$ has roots in all $\mathbb{Q}_p$ and $\mathbb{R}$ but not in $\mathbb{Q}$ This is an exercise in my textbook in a chapter about the Hasse-Minkowski theorem: Show that the polynomial $3 X^3 + 4 Y^3 + 5 Z^3$ has a non-trivial root in $\mathbb{R}$ and all $\mathbb{Q}_p$. Show that it has only the trivial root in $\mathbb{Q}$. I don't know; at first sight, this seems like a pretty hard exercise, especially the second part. Is it doable? Do you have any tips on how to start? Or should I simply skip it? Because I don't see how this exercise helps me to understand/apply the Hasse-Minkowski theorem.
This is a classical counterexample to Hasse-Minkowski, due to Selmer. The solution of the second part is indeed difficult. For a detailed proof see, for example, Chapter $7$ of the thesis of Arnélie Schinck. However, the first part is quite easy, and goes as follows: we can explicitly list a solution for each $\mathbb{Q}_p$, including for $\mathbb{Q}_{\infty}=\mathbb{R}$: $$ (x,y,z)=(-1,(3/4)^{1 /3},0),(0,(5/4)^{1/3},-1),(5,-2(15/2)^{1/3},-3),(-1,0,(3/5)^{1/3}). $$ All solutions exist in $\mathbb{R}$, and at least one exists in a given $\mathbb{Q}_p$.
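A quick numerical sanity check of these four tuples over $\mathbb{R}$ (in each $\mathbb{Q}_p$ the relevant point works whenever the cube root exists $p$-adically):

```python
# Each tuple should satisfy 3x^3 + 4y^3 + 5z^3 = 0.
sols = [
    (-1.0, (3 / 4) ** (1 / 3), 0.0),
    (0.0, (5 / 4) ** (1 / 3), -1.0),
    (5.0, -2 * (15 / 2) ** (1 / 3), -3.0),
    (-1.0, 0.0, (3 / 5) ** (1 / 3)),
]
for x, y, z in sols:
    print(round(3 * x**3 + 4 * y**3 + 5 * z**3, 9))  # all print 0.0 up to rounding
```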
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Dot and Cross Product Proof: $u \times (v \times w) = ( u \cdot w)v - (u \cdot v)w$ How do you prove that: $u \times (v \times w) = ( u \cdot w)v - (u \cdot v)w$ ? The textbook gives as a hint to "first do it for $u=i,j$ and $k$; then write $u=xi+yj+zk$", but I am not sure what that means.
I'm going to use the basic definitions of scalar and vector triple products to prove this. Let $\mathbf u= a_1 \mathbf i + a_2 \mathbf j + a_3 \mathbf k$, $\mathbf v= b_1 \mathbf i + b_2 \mathbf j + b_3 \mathbf k$, $\mathbf w= c_1 \mathbf i + c_2 \mathbf j + c_3 \mathbf k$. Then $(\mathbf{v} \times \mathbf{w})= (b_2c_3-b_3c_2) \mathbf i + (b_3c_1-b_1c_3) \mathbf j + (b_1c_2-b_2c_1) \mathbf k$ Hence, $\mathbf{u} \times (\mathbf{v} \times \mathbf{w})= \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ a_1 & a_2 & a_3 \\ (b_2c_3-b_3c_2) & (b_3c_1-b_1c_3) & (b_1c_2-b_2c_1) \end{vmatrix}$ $=(a_1c_1+a_2c_2+a_3c_3)(b_1 \mathbf{i}+ b_2 \mathbf{j} + b_3 \mathbf{k}) - (a_1b_1+a_2b_2+a_3b_3)(c_1 \mathbf{i}+ c_2 \mathbf{j} + c_3 \mathbf{k})$ $=( \mathbf{u} \cdot \mathbf{w})\, \mathbf{v} - (\mathbf{u} \cdot \mathbf{v})\, \mathbf{w}$
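A numerical spot-check of the identity on random vectors, as a small numpy sketch (not part of the proof, just reassurance):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))  # three random 3-vectors

lhs = np.cross(u, np.cross(v, w))
rhs = np.dot(u, w) * v - np.dot(u, v) * w
print(np.allclose(lhs, rhs))  # True
```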
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $4=5$, then $6=8\,$ (yes or no?) I had an argument with a friend about the statement in the title. I asserted that if $4=5$, then $6=8$, as you can derive any conclusion from a false statement. However, he does not agree, and claims that you cannot know that $6$ would equal $8$ if $4$ were to equal $5$. Is the statement in the title correct?
$$4=5$$ $$4-1=5-1$$ $$2(4-1)=2(5-1)$$ $$6=8$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 3 }
Proof for $0a = 0$ Is this a valid proof for $0a =0$? I am using only Hilbert's axioms of the real numbers (for simplicity). $(a+0)(a+0) = a^2 + 0a + 0a + 0^2 = (a)(a) = a^2$ Assume that $0a$ does not equal zero. Then from $a^2 + 0a + 0a + 0^2$ we get $(a+0)(a+0) > a^2$ or $(a+0)(a+0) < a^2$ which contradicts the proven above statement. Therefore, $0a = 0$ QED
$0+0=0$. Multiplying on both sides by $a$ from the right: $$(0+0)\cdot a=0\cdot a \quad\Rightarrow\quad 0\cdot a+0\cdot a=0\cdot a.$$ Adding $-(0\cdot a)$ on both sides: $$-(0\cdot a)+[0\cdot a+0\cdot a]=-(0\cdot a)+0\cdot a$$ $$[-(0\cdot a)+0\cdot a]+0\cdot a=-(0\cdot a)+0\cdot a$$ $$0+0\cdot a=0$$ $$0\cdot a=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving $k$ is divisible by $3$ iff the sum of the digits of $k$ is divisible by 3 I am trying to prove that $k$ is divisible by $3$ iff the sum of the digits of $k$ is divisible by 3, for all $k \in \mathbb{Z}$. I am not even sure what tags to use, because I am not sure of the right methods to use to solve this problem. I don't see how you could solve this with induction. Any tips on the general approach would be appreciated.
Let $k = a_0 + 10a_1 + 10^2a_2 + ... + 10^na_n$, this is a decimal representation where $a_i$ represents an individual digit. Consider $k \pmod 3$. Since $10^i \equiv 1^i = 1\pmod 3$, $k \equiv a_0 + a_1 + ... + a_n \pmod 3$, and you're done. Using induction is more unwieldy, and you'll still have to use modular arithmetic to do the inductive step (I think).
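An empirical check of the rule over a small range, as a plain Python sketch:

```python
def digit_sum(k):
    return sum(int(d) for d in str(abs(k)))

# divisibility by 3 of k and of its digit sum always agree
assert all((k % 3 == 0) == (digit_sum(k) % 3 == 0) for k in range(-1000, 1000))
print("rule verified on [-1000, 1000)")
```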
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$a^2b + abc + a^2c + ac^2 + b^2a + b^2c + abc +bc^2$ factorisation I came across this on a university mathematics resource page, but they do not provide an answer to it. What I did was this: $(a^2+b^2+c^2)(a+b+c) - (a^3 + b^3 + c^3) + 2abc$ But I don't think this is the correct solution. How should I spot how to factorise the expression here? I wish to learn more of this seemingly complex and uncommon algebraic factorisation. Can you recommend me a book or a website for this? There seem to be only common factorisations when I google. Many thanks in advance, Chris
From what I see: $$ab(a+c)+ac(a+c)+b^2(a+c)+bc(a+c)=(a+c)(ab+ac+b^2+bc)$$ $$=(a+c)(a(b+c)+b(b+c))=(a+b)(b+c)(a+c)$$
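One can confirm the factorisation with sympy (a one-off check, assuming sympy is available):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
expr = a**2*b + a*b*c + a**2*c + a*c**2 + b**2*a + b**2*c + a*b*c + b*c**2
print(sp.factor(expr))  # (a + b)*(a + c)*(b + c)
```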
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Rearranging into $y=mx+c$ format and finding unknowns $a$ and $b$ Two quantities $x$ and $y$ are connected by a law $y = \frac{a}{1-bx^2}$ where $a$ and $b$ are constants. Experimental values of $x$ and $y$ are given in the table below: $$ \begin{array}{|l|l|l|l|l|l|} \hline x & 6 &8 & 10 & 11 & 12\\ \hline y & 5.50 & 6.76 & 9.10 & 11.60 & 16.67\\\hline \end{array} $$ By plotting a suitable graph, find $a$ and $b$. (Use tables correct to $2$ significant figures in your work.) I don't know whether or not I should use logs to rearrange the equation into $y=mx+c$ format.
When you are asked questions like this one, basically the problem is to find the change of variables which transforms the equation into the equation of a straight line. So, starting with $$y = \frac{a}{1-bx^2}$$ rewrite $$\frac 1y=\frac{1-bx^2}{a}=\frac 1a-\frac ba x^2$$ I am sure that you can take it from here.
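To see the linearisation in action, here is a small numpy sketch fitting a line to $(x^2,\,1/y)$ from the table (the printed values are estimates, roughly $a\approx4.5$ and $b\approx0.005$):

```python
import numpy as np

x = np.array([6, 8, 10, 11, 12], dtype=float)
y = np.array([5.50, 6.76, 9.10, 11.60, 16.67])

# 1/y = 1/a - (b/a) x^2, a straight line in the variable x^2
slope, intercept = np.polyfit(x**2, 1 / y, 1)
a = 1 / intercept
b = -slope * a
print(a, b)  # approximately 4.5 and 0.005
```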
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equivalent definitions of cograph A cograph is a simple graph defined by the criteria: (1) $K_1$ is a cograph; (2) if $X$ is a cograph, then so is its graph complement; and (3) if $X$ and $Y$ are cographs, then so is their graph union $X \cup Y$. It is also said that a graph $G$ is called a cograph if $G$ does not contain the path graph $P_4$ as an induced subgraph. How are these definitions equivalent, or at least how does the first one imply the second? I am totally stuck on which direction to proceed. Any reference or hint?
To see that the first definition implies the second, you can use induction on $n$, the number of vertices. The base case is trivial. For the induction step, the cograph $G$ can be obtained either by the union of two smaller cographs $G_1$ and $G_2$, or by complementing such a graph. In the first case, apply induction on both $G_1$ and $G_2$ to deduce that $G$ has no $P_4$. In the second case, $G$ is obtained by complementing a cograph $G'$ obtained from the union of two cographs. We just argued that $G'$ is $P_4$-free, so it suffices to observe that the complement of a $P_4$ is a $P_4$, and that complementing a graph without a $P_4$ cannot create one. In the other direction (definition 2 implies definition 1), the proof I know of relies on the fact that if $G$ is a $P_4$-free graph, then for any $X \subseteq V(G)$, one of $G[X]$ or the complement of $G[X]$ is disconnected ($G[X]$ denotes the subgraph induced by $X$). It's provable by induction. Then, this fact can be used to show that $G$ was obtained by taking the union of two cographs, or by complementing such a graph. EDIT: The full proof can be found on page 6, Theorem 19 of this document: https://homepages.warwick.ac.uk/~masgax/Graph-Theory-notes.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/1238902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is Category Theory similar to Graph Theory? The following author noted: Roughly speaking, category theory is graph theory with additional structure to represent composition. My question is: Is Category Theory similar to Graph Theory?
Category theory and graph theory are similar in the sense that both are visualized by arrows between dots. After this the similarities pretty much stop, and each has a different reason for its existence. In category theory, we may have a huge amount of dots, and these dots often represent some abstract algebraic structure or other object with some meaning. The arrows which go between dots need to respect very specific rules, which makes it possible to draw conclusions about the category, and hence about the objects which the dots represent. In graph theory, however, we can draw the arrows in any way we want and do not need to care about rules. This gives us a very free way to draw, but it also means we do not get the same implications: graph theory cannot draw conclusions about the objects which the dots represent, since in graph theory the dots carry no meaning. Category theory draws from graph theory in that we may talk about dots being connected, the degree of a dot, etc. And when we do not have an extremely huge amount of dots, a category is a graph. So in this case category theory is just a special case of graph theory. On the other hand, graph theory (and everything else) may be studied abstractly in category theory, so we could say that graph theory is a special object of study in category theory. My conclusion is that the largest similarity (and why people think they are similar) is the way we visualize the two.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 3, "answer_id": 2 }
Proving $6^n - 1$ is always divisible by $5$ by induction I'm trying to prove the following, but can't seem to understand it. Can somebody help? Prove $6^n - 1$ is always divisible by $5$ for $n \geq 1$. What I've done: Base Case: $n = 1$: $6^1 - 1 = 5$, which is divisible by $5$ so TRUE. Assume true for $n = k$, where $k \geq 1$: $6^k - 1 = 5P$. Should be true for $n = k + 1$ $6^{k + 1} - 1 = 5Q$ $= 6 \cdot 6^k - 1$ However, I am unsure on where to go from here.
$6$ has a nice property that when raised to any positive integer power, the result will have $6$ as its last digit. Therefore, that number minus $1$ is going to have $5$ as its last digit and thus be divisible by $5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 5 }
$\bigcup \alpha$ where $\alpha$ is a finite ordinal. Given a finite ordinal, is it correct to say $\bigcup \alpha = \alpha - 1$? As an illustrative example consider $3 = \{\emptyset , \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}$. I believe $\bigcup 3 = \{\emptyset , \{\emptyset\}\} = 2$. Is this accurate?
Assume $\alpha=\beta+1=\beta\cup\{\beta\}$ where $\beta$ is an ordinal (note that this is applicable for all nonzero finite ordinals, but also for many infinite ones). Then $x\in\bigcup\alpha$ iff $x\in y$ for some $y\in \alpha$. And this is equivalent to $x\in\beta$ or $x\in y$ for some $y\in \beta$. Since $x\in y\in \beta$ implies $x\in \beta$ as well, we ultimately have $$ x\in\bigcup\alpha\iff x\in\beta$$ which means $$ \bigcup\alpha=\beta.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
To prove that $I = \{\,(n,m) \in \Bbb Z \times \Bbb Z : n,m $ are even $\}$ is not a maximal ideal of $ \Bbb Z \times \Bbb Z $. To prove that $I = \{\,(n,m) \in \Bbb Z \times \Bbb Z : n,m $ are even $\}$ is not a maximal ideal of $ \Bbb Z \times \Bbb Z $, we need to find an ideal $J$ of $ \Bbb Z \times \Bbb Z $ such that $I \subsetneq J \subsetneq \Bbb Z \times \Bbb Z $. Will $J = \{\,(n,m) \in \Bbb Z \times \Bbb Z : n $ is even $\}$ work here?
Yes, it will. It is clearly between $I$ and the full ring and is an ideal. Alternatively, without exhibiting $J$, you can calculate $(1,0)\cdot(0,1)=(0,0)$, which shows that the product of two elements $\notin I$ can be $\in I$; hence $I$ is not even prime, let alone maximal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
4 cards are shuffled and placed face down. Hidden faces display 4 elements: earth, wind, fire, water. You turn over cards until you win or lose. Question: $4$ cards are shuffled and placed face down in front of you. Their hidden faces display 4 elements: water, earth, wind, fire. You turn over cards until you win or lose. You win if you turn over water and earth. You lose if you turn over fire. What is the probability that you win? I understand that wind is effectively absent from the sample space: it does not affect your chances of winning or losing. I also know that with probability $\frac13$ (because we removed wind) you can pick fire first, in which case you lose the game.
The probability of losing is not 1/3; it should be 2/3, as below. He wins if either (a) the first three cards are Water/Earth/Wind in some order, which gives $3! = 6$ orderings of the deck, or (b) the first two cards are Water/Earth and the third is Fire, which gives $2$ more. So the total is $6 + 2 = 8$ winning orderings out of $24$. Probability of winning $= 8/24 = 1/3$, so losing is $2/3$.
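A quick Monte Carlo check of the $1/3$ answer (a standalone Python sketch):

```python
import random

def play():
    deck = ['water', 'earth', 'wind', 'fire']
    random.shuffle(deck)
    seen = set()
    for card in deck:
        if card == 'fire':
            return False          # lose as soon as fire turns up
        seen.add(card)
        if {'water', 'earth'} <= seen:
            return True           # both winning cards seen before fire

trials = 100_000
print(sum(play() for _ in range(trials)) / trials)  # ~ 0.333
```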
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Dimension of garden to minimize cost Math question: A homeowner wants to build, along her driveway, a garden surrounded by a fence. If the garden is to be $5000$ square ft, and the fence along the driveway cost $6$ dollars per foot while on the other three sides it cost only $\$2$ per foot, find the dimension that will minimize the cost. Also find the minimum cost. this is what I got so far: $X=6, H=2, \ V=5000ft^2, \ V=x^2h, \ C=36x^2+8xh$
Let $A=5000$, length of the side along driveway$=x$ and width $=y$. Then $A=xy=5000$. Total cost $\displaystyle =C=6x+2(x+2y)=8x+\frac{4\times 5000}{x}$. Now we have to find $x$ that minimizes $C$. So differentiating $C$ w.r.t. $x$ and equating it to $0$, $\displaystyle \frac{dC}{dx}=8-\frac{20000}{x^2}=0\Rightarrow x^2=2500 \Rightarrow x=50$. When $\displaystyle x>50, \frac{dC}{dx}>0 $ and when $\displaystyle x<50, \frac{dC}{dx}<0 $. Hence, $C$ has a minimum at $x=50$. So $y=100$, and the minimum cost is $C=8(50)+\frac{20000}{50}=400+400=\$800$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to Prove Unboundedness Suppose I have a submartingale $X_k$; what results/theorems can be useful if I want to show that $X_k$ is unbounded in the limit? There are results (basically bounding $\mathbb{E}X_k$) for convergence of submartingales. But I wanted to show that a submartingale goes to infinity in the limit for my research. Take the submartingale $X_0=0, X_{k+1}=X_k+1$ for instance.
If the submartingale is of the form $$X_k = \sum_{j=1}^k \xi_j$$ where $\xi_j \in L^1$ are independent identically distributed random variables, then, by the strong law of large numbers, $$\frac{X_k}{k} \to \mathbb{E}\xi_1 \qquad \text{almost surely as $k \to \infty$.}$$ This means that $$\lim_{k \to \infty} X_k = \infty$$ whenever $\mathbb{E}\xi_1 \neq 0$. (Note that $\mathbb{E}\xi_1 \geq 0$ holds in any case since $(X_k)_{k \in \mathbb{N}}$ is a submartingale.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of the mean-value property for Laplace's equation. $\textbf{Statement of Theorem:}$ If $u \in C^2(U)$ is harmonic, then $$u(x) = \frac{1}{m(\partial B(x,r))}\int_{\partial B(x,r)} u\, dS = \frac{1}{m(B(x,r))}\int_{B(x,r)} u\, dy$$ for each $B(x,r) \subset U$. $\textbf{Proof:}$ Let $$ \phi(r) := \frac{1}{m(\partial B(x,r))} \int_{\partial B(x,r)} u(y)\, d S(y) = \frac{1}{m(\partial B(0,1))} \int_{\partial B(0,1)} u(x+rz)\, dS(z)$$ Then, $$ \phi'(r) = \frac{1}{m(\partial B(0,1))} \int_{\partial B(0,1)} Du(x + rz) \cdot z\, dS(z)$$ Using Green's formula we compute, $$ \phi'(r) = \frac{1}{m(\partial B(x,r))} \int_{\partial B(x,r)} D u(y) \cdot \frac{y-x}{r}\, dS(y)$$ $$= \frac{1}{m(\partial B(x,r))} \int_{ \partial B(x,r)} \frac{\partial u}{\partial \nu}\, dS(y) \hspace{10mm} (*)$$ where $\nu$ is the outward facing normal vector. $$ = \frac{r}{n} \frac{1}{m(B(x,r))} \int_{B(x,r)} \Delta u(y)\, dy, \hspace{10mm} (**)$$ $\textbf{Question:}$ What happened between steps $(*)$ and $(**)$? I can see we used the surface area of the ball; however, I am not sure how $\frac{\partial u}{\partial \nu}$ turned into the Laplacian, $\Delta u$.
Divergence Theorem: $$ \int_U {\rm div}\, \nabla f =\int_{\partial U} \nabla f\cdot \nu $$ where $\nu$ is a unit outward normal to $\partial U$. Apply this with $f=u$ and $U=B(x,r)$: since $\frac{\partial u}{\partial \nu}=\nabla u\cdot\nu$ and ${\rm div}\,\nabla u=\Delta u$, $$\int_{\partial B(x,r)}\frac{\partial u}{\partial \nu}\, dS=\int_{B(x,r)}\Delta u\, dy,$$ and then $m(\partial B(x,r))=\frac{n}{r}\, m(B(x,r))$ gives the factor $\frac{r}{n}$ in $(**)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to determine $\left|\operatorname{Aut}(\mathbb{Z}_2\times\mathbb{Z}_4)\right|$ I know that the group $\mathbb{Z}_2\times\mathbb{Z}_4$ has: 1 element of order 1 (AKA the identity) 3 elements of order 2 4 elements of order 4 I'm considering the set of all automorphisms on this group, denoted $\operatorname{Aut}(\mathbb{Z}_2\times\mathbb{Z}_4)$. I know that: * *The automorphism needs to map the identity to the identity *The automorphism needs to preserve the orders of the elements in the group. So by my watch, the 3 elements of order 2 are permuted, and the 4 elements of order 4 are permuted. I just wish to count how many automorphisms there are. I am quite confused. Is it $3\times4=12$? I have a feeling this is wrong.
Note that $G$ is generated by two elements of order $4$. To count the possible automorphisms we can focus on the maps to these two generators. To the first generator we can map any of the $4$ elements of order $4$, giving $4$ possibilities. To the second generator we don't want to map the same element of order $4$ as the first generator. And we don't want to map the other element of order $4$ that shares the same subgroup of order $4$. That is, to the other generator we need to map one of the elements of order $4$ in the other subgroup of order $4$. This gives $2$ possibilities for the second generator, for a grand total of $4\times2=8$ automorphisms. Note: You can also do this by noting that $G$ is generated by an element of order $4$ and an element of order $2$, but you have to be careful when assigning the element of order $2$. As I mentioned in the comments one element of order $2$ is in both of the subgroups of order $4$ and the other two are not. So an automorphism can't take that one one element of order $2$ to one of the other two. In particular the subgroup generated by the first generator of order $4$ contains that one element of order $2$, so it can't be the second generator.
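The count can also be confirmed by brute force; here is a small Python sketch enumerating all bijections of the group and keeping the homomorphisms:

```python
from itertools import product, permutations

G = list(product(range(2), range(4)))  # elements of Z2 x Z4

def add(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 4)

count = 0
for perm in permutations(G):
    f = dict(zip(G, perm))
    if f[(0, 0)] != (0, 0):
        continue  # an automorphism must fix the identity
    if all(f[add(g, h)] == add(f[g], f[h]) for g in G for h in G):
        count += 1
print(count)  # 8
```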
{ "language": "en", "url": "https://math.stackexchange.com/questions/1239930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does Gauss-Jordan elimination work when finding the inverse matrix? In order to find the inverse matrix $A^{-1}$, one can apply Gauss-Jordan elimination to the augmented matrix $$(A \mid I)$$ to obtain $$(I \mid C),$$ where $C$ is indeed $A^{-1}$. However, I fail to see why this actually works, and reading this answer didn't really clear things up for me.
You want to find a matrix $B$ such that $BA = I$. $B$ can be written as a product of elementary matrices iff it is invertible. Hence we attempt to obtain $B$ by left-multiplying $A$ by elementary matrices until it becomes $I$. All we have to do is to keep track of the product of those elementary matrices. But that is exactly what we are doing when we left-multiply $I$ by those same elementary matrices in the same order. This is what is happening with the augmented matrix.
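To make the bookkeeping concrete, here is a bare-bones Python/numpy sketch that row-reduces $(A\mid I)$ to $(I\mid A^{-1})$; it assumes the pivots never vanish (no row swaps), so it is an illustration rather than a robust implementation:

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])
n = A.shape[0]
M = np.hstack([A, np.eye(n)])  # the augmented matrix (A | I)

for i in range(n):
    M[i] = M[i] / M[i, i]                 # scale so the pivot is 1
    for j in range(n):
        if j != i:
            M[j] = M[j] - M[j, i] * M[i]  # clear column i in the other rows

print(M[:, n:])           # [[ 3. -1.] [-5.  2.]]
print(np.linalg.inv(A))   # matches
```

Each row operation is a left multiplication by an elementary matrix, applied simultaneously to both halves, which is exactly the argument above.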
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 0 }
An integral related to the derivative of Legendre polynomials I want to calculate the integral $$ I=\int_{-1}^{1} \Big(\frac{\mathrm{d}P_{n+1}(t)}{\mathrm{d}t}\Big) \Big(\frac{\mathrm{d}P_{m+1}(t)}{\mathrm{d}t}\Big) \mathrm{d}t $$ where $P_n(t)$ is Legendre polynomials. By virtue of the identity $$ \frac{\mathrm{d}P_{n+1}(t)}{\mathrm{d}t}=2\sum_{k=0}^{[n/2]}\frac{P_{n-2k}(t)}{\vert\vert P_{n-2k}(t)\vert\vert^2} $$ where ${\vert\vert P_{n}(t)\vert\vert}=\sqrt{\frac{2}{2n+1}}$, and Using the property of Double Series, then $$ I=4\int_{-1}^{1} \Big(\sum_{k_1=0}^{[n/2]}\frac{P_{n-2k_1}(t)}{\vert\vert P_{n-2k_1}(t)\vert\vert^2}\Big) \Big(\sum_{k_2=0}^{[m/2]}\frac{P_{m-2k_2}(t)}{\vert\vert P_{m-2k_2}(t)\vert\vert^2}\Big) \mathrm{d}t\\ =4\sum_{k_1=0}^{[n/2]}\sum_{k_2=0}^{[m/2]} \frac{1}{\vert\vert P_{n-2k_1}(t)\vert\vert^2} \frac{1}{\vert\vert P_{m-2k_2}(t)\vert\vert^2} \int_{-1}^{1}{P_{n-2k_1}(t)}{P_{m-2k_2}(t)}\mathrm{d}t\\ =4\sum_{k_1=0}^{[n/2]}\sum_{k_2=0}^{[m/2]} \frac{1}{\vert\vert P_{n-2k_1}(t)\vert\vert^2} \delta_{n-2k_1,m-2k_2}\\ $$ If $n=m$, then $$ I=4\sum_{k=0}^{[n/2]}\frac{1}{\vert\vert P_{n-2k}(t)\vert\vert^2} =2\sum_{k=0}^{[n/2]}\big(2(n-2k)+1\big)\\ =2([n/2]+1)\big((2n+1)-2[n/2]\big) =(n+1)(n+2) $$ But if $n\neq m$, How can I move on?
Legendre polynomials are orthogonal (see here). So it may work to calculate that integral by integrating by parts twice.
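For what it's worth, the question's own double sum already handles $n\neq m$: the Kronecker delta keeps only terms with $n-2k_1=m-2k_2$, so the integral vanishes when $n$ and $m$ have opposite parity, and when the parities agree it collapses to the same diagonal sum with $\min(n,m)$ in place of $n$, giving $(\min(n,m)+1)(\min(n,m)+2)$. A sympy check of both cases (assuming sympy is available):

```python
import sympy as sp

t = sp.symbols('t')

def I(n, m):
    f = sp.diff(sp.legendre(n + 1, t), t) * sp.diff(sp.legendre(m + 1, t), t)
    return sp.integrate(f, (t, -1, 1))

for n in range(4):
    print(n, I(n, n))   # 2, 6, 12, 20, i.e. (n+1)(n+2)

print(I(2, 4))  # 12 = (min+1)(min+2), matching parity
print(I(2, 3))  # 0, opposite parity
```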
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve $x + y + z = xyz$ such that $x , y , z \neq0$ I came across the equation $x+y+z=xyz$ such that $x , y , z \neq 0$. I set $x=1, y=2, z=3$, but how can I reach a formal mathematical solution without "guessing" the answer? Thank you
You can write $y+z=x(yz-1)$, and if $yz\ne1$ you will get $$x=\frac{y+z}{yz-1}$$ If $yz=1$ we get $y+z=0$, hence $y^2+1=0$, which gives us complex solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
is 1 greater than i? I'm not sure this question even makes sense because complex numbers are a plane instead of a line. The magnitudes are obviously the same because i is a unit vector, but is there any inequality you can write about an imaginary number and a real number without just using their magnitudes?
When we go from real numbers to complex numbers, we lose the ordering of values. You cannot compare two non-real values, or a real and a non-real value. So, you cannot compare $i$ and $1$, since $i$ is non-real while $1$ is real. The only thing you can compare is their absolute values: $$|i|=|1|=1$$ But that doesn't give any comparison between $1$ and $i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Error in my proof? What is wrong in this proof? It seems correct to me but still doesn't make proper sense. $$\sqrt{\cdots\sqrt{\sqrt{\sqrt{5}}}}=5^{1/\infty}=5^0=1$$ EDIT: So does this mean that $5^{1/\infty} = 1$ and $(5^{1/\infty})^\infty = 1^\infty$? But according to me, $1^\infty$ is indeterminate.
$$\lim_{n\to\infty}x^{1/2^n}=1 \qquad (x>0)$$ because the exponent approaches zero
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 5 }
Too strong assumption in the Uniqueness Theorem of Rudin's Real and Complex Analysis? In Rudin's Real and Complex Analysis, there is the following result about Fourier transforms. The Uniqueness Theorem If $f\in L^1(\mathbb{R})$ and $\hat{f}(t)=0$ for all $t\in\mathbb{R}$, then $f(x)=0$ almost everywhere. Isn't the assumption "for all $t\in\mathbb{R}$" unnecessarily too strong? I am pretty sure that we only need that $\hat{f}(t)=0$ almost everywhere to conclude that $f(x)=0$ almost everywhere. But Rudin is a very smart guy, so I guess there is a good reason for saying "for all $t\in\mathbb{R}$". Note: In Rudin, the Uniqueness Theorem is a corollary of the following theorem. The Inversion Theorem If $f\in L^1(\mathbb{R})$ and $\hat{f}\in L^1(\mathbb{R})$, and if $$g(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \hat{f}(t)e^{ixt}dt\qquad(x\in\mathbb{R}),$$ then $g\in C_0$ and $f(x)=g(x)$ almost everywhere.
As Ian pointed out, the Fourier transform of an $L^1$ function is continuous. For continuous functions, being zero a.e. and being zero everywhere are equivalent; so it makes more sense to use the shorter version, without a.e.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Looking for an intuitive approach to ODEs I started reading up on the topic of differential equations and tried to solve certain problems to get used to those kinds of equations; in particular I tried to understand every "$=$" and "$\Rightarrow$". However I soon stumbled over this equation: $ {1 \over y} \cdot { dx \over dy } - {x \over y^2} = y $ or: $ {d \over dy} ( {x \over y} ) = y $ Now I am trying to understand why: $ {1 \over y} \cdot { dx \over dy } - {x \over y^2} = {d \over dy} ( {x \over y} ) $ In particular I am confused about how to interpret these two terms: ${d \over dy} ( {x \over y} ) $ and $ { dx \over dy } $. I would be happy if someone could help me and might recommend a detailed read about differential equations.
$x$ depends on $y$. So : $$\frac{d}{dy} \left( \frac{x}{y} \right)=\frac{d}{dy} \left( x \cdot \frac{1}{y} \right)=\frac{dx}{dy} \cdot \frac{1}{y}+x \frac{d}{dy}\left(\frac{1}{y} \right)=\frac{dx}{dy} \cdot \frac{1}{y}+x \left(-\frac{1}{y^2} \right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Convergence in the weak operator topology implies uniform boundedness in the norm topology? If $\{T_n\}$ is a sequence of bounded operators on the Banach space $X$ which converge in the weak operator topology, could someone help me see why it is uniformly bounded in the norm topology? I know how to apply the uniform boundedness principle to reach the conclusion above under the stronger assumption that the $T_n$ converge in the strong operator topology. I'd appreciate any helpful hints.
Hint: Try to prove the following result $\textbf{Result}:$ If for a sequence $(x_n)$ in a Banach space $X$, the sequence $(f(x_n))$ is bounded for all $f\in X^*$, then the sequence $(\|x_n\|)$ is bounded. Now suppose that you have proved the result. Suppose $(T_n)$ converges to $T$ in the weak operator topology, i.e., \begin{align*} |f(T_nx)-f(Tx)| \longrightarrow 0 & & \text{for all } x\in X, f\in X^*. \end{align*} In particular, $(f(T_nx))$ is bounded for all $x\in X, f\in X^*$. Now by the result it follows that $(\|T_nx\|)$ is bounded for all $x \in X$. And now the Uniform Boundedness Principle gives you that $(\|T_n\|)$ is uniformly bounded. Proof of Hint: We have $\sup_n|f(x_n)| < \infty$ for all $f\in X^*$. Consider the sequence $(T_n)$ of linear functionals on $X^*$ defined by $T_n : X^* \to \mathbb{K}$, $T_n(f) = f(x_n)$. Then $\sup_n|T_nf| = \sup_n|f(x_n)| < \infty$ for all $f \in X^*$. So, UBP implies that $(\|T_n\|)$ is bounded. But $\|T_n\| = \|x_n\|$ (by the canonical embedding of $X$ into $X^{**}$) and this proves our result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Change the matrix by multiplying one column by a number. Consider a positive definite matrix A. We can think of the matrix as a linear transformation. Now suppose we get matrix B by multiplying only one column of A by a number. Is there a geometric relation between A and B?
Every matrix $A\in M_{n\times m}(\mathbb{R})$, $m\leq n$, has an associated vector subspace of $\mathbb{R}^n$: let $V$ be the subspace generated by the columns of $A$. Then we can see that each element of $V$ has the form: $$v=Ax, \quad x\in \mathbb{R}^m.$$ Now if $A=(a_{1},\dots, a_{m})$, where the $a_{i}$ are the columns of $A$, we can write $B=(a_{1},\dots,\lambda a_{i},\dots, a_{m})$. Then: $$Bx=\sum_{j\neq i}^{m}x_{j}a_{j}+x_{i}\lambda a_{i}.$$ Here the $x_{j}$ are the components of the vector $x$. So we can see this as a dilation or contraction of the component of the vector $v$ in the direction of the column $a_{i}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Green's Theorem and limits on y for flux I'm working through understanding the example provided in the book for the divergence integral. The theorem (Green's, flux form): $$ \oint_C \mathbf{F}\cdot \mathbf{n}\,ds = \oint_C M\,dy-N\,dx=\int\int_R\left(\frac{\partial M}{\partial x}+\frac{\partial N}{\partial y} \right)dxdy $$ The example uses the following: $\mathbf{F}(x,y) = (x-y)\mathbf{i} + x\mathbf{j}$ over the region $\mathbf{R}$ bounded by the unit circle $C: \mathbf{r}(t)=\cos(t)\mathbf{i} + \sin(t)\mathbf{j}, 0 \le t \le 2\pi$. There are then the following relations: $$ \begin{array}{rr} M = \cos(t) - \sin(t) & dx = d(\cos(t)) = -\sin(t)dt \\ N = \cos(t) & dy = d(\sin(t)) = \cos(t)dt \end{array} \\ \begin{array}{rrrr} \frac{\partial M}{\partial x}=1 & \frac{\partial M}{\partial y} = -1 & \frac{\partial N}{\partial x}=1 & \frac{\partial N}{\partial y} = 0 \end{array} $$ Now that the foundation is laid, here's the rightmost part of the first equation given: $$ \begin{array}{rcl} \int\int_R \left(\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}\right) dxdy & = & \int\int_R (1 + 0)\, dxdy \\ & = & \int\int_R dxdy \\ & = & \text{area inside unit circle} \\ & = & \pi \end{array} $$ I understand it intuitively because it's the area over that region and the area of a circle is $A = \pi\cdot r^2$. With $r = 1$ that's obviously $\pi$. What I'm not sure of is how to express it as an integral. The equation $x^2 + y^2 = 1$ represents the unit circle. Thus, solving for $x$ as a function of $y$ I get $x = \sqrt{1-y^2}$, so the final stage I show should be: $$ \int_{?}^{?} \int_{0}^{\sqrt{1-y^2}}dx dy $$ right? What should be used for the limits on $y$? I know it's simple but I'm just not seeing it and I need some guidance. Thanks
For Green's Theorem $$\oint_C (Mdx+Ndy)=\int \int_R \left(\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\right) dxdy$$ Here, $M=(x-y)$ and $N=x$ such that $$\begin{align}\oint_C (Mdx+Ndy)&=2\int \int_R dxdy\\\\ &=2\int_{-1}^{1} \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}}dxdy\\\\ &=4\int_{-1}^{1} \sqrt{1-y^2}dy\\\\ &=2\left(y\sqrt{1-y^2}+\arcsin(y)\right)|_{-1}^{1}\\\\ &=2(\arcsin(1)-\arcsin(-1))\\\\ &=2\pi \end{align}$$ If we evaluate the line integral in a straightforward way, we let $x=\cos \phi$ and $y=\sin \phi$. Then, $dx=-\sin \phi d\phi $ and $dy=\cos \phi d\phi $. We obtain the following $$\begin{align}\oint_C (Mdx+Ndy)&=\int_0^{2\pi} (-\cos \phi \sin \phi+\sin^2 \phi+\cos^2 \phi)d\phi\\\\ &=\int_0^{2\pi} (-\cos \phi \sin \phi+1)d\phi\\\\ &=2\pi \end{align}$$ as expected!!
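Both sides of the identity can be checked symbolically; a sympy sketch for this example (the circulation form used in this answer):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
M, N = x - y, x

# Double integral of (N_x - M_y) = 2 over the unit disk
inner = sp.integrate(sp.diff(N, x) - sp.diff(M, y),
                     (x, -sp.sqrt(1 - y**2), sp.sqrt(1 - y**2)))
print(sp.integrate(inner, (y, -1, 1)))  # 2*pi

# Line integral of M dx + N dy around the unit circle
xc, yc = sp.cos(t), sp.sin(t)
integrand = (M.subs({x: xc, y: yc}) * sp.diff(xc, t)
             + N.subs({x: xc, y: yc}) * sp.diff(yc, t))
print(sp.integrate(integrand, (t, 0, 2 * sp.pi)))  # 2*pi
```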
{ "language": "en", "url": "https://math.stackexchange.com/questions/1240917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Algebra verbal question: find the number of items sold Hey, I am having an exam tomorrow, so I looked up some verbal algebra questions, and found one that I could not solve, because I don't really understand how I would do this. The question is like this: An IKEA agent bought beds for 60,000 dollars. 1/4 of the beds that he bought, he sold with an 80% profit. 4 beds were sold without any profit, and the rest of the beds were sold with a loss of 10% per bed. Overall he made a profit of 9500 dollars. How many beds did the agent buy? How much did the agent pay per bed? I totally don't understand how I would solve this question; could someone give me a tip? The 1/4 part confuses me.
Let $B$ be the number of beds. Let $P$ be the price he paid per bed. $$60000=BP$$ $$\begin{align}9500&=\frac{B}4\cdot0.8\cdot P + \left(\frac{3B}{4}-4\right)\cdot(-0.1)\cdot P\\ &=\frac{BP}{4}\left(0.8+3\cdot(-0.1)\right)-4\cdot(-0.1)\cdot P\end{align}$$
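Solving the two equations (a sympy sketch; staying in exact rationals to avoid float noise):

```python
import sympy as sp

B, P = sp.symbols('B P', positive=True)
profit = (B / 4) * sp.Rational(4, 5) * P + (3 * B / 4 - 4) * sp.Rational(-1, 10) * P
sol = sp.solve([B * P - 60000, profit - 9500], [B, P], dict=True)
print(sol)  # [{B: 12, P: 5000}]
```

So the agent bought 12 beds at 5000 dollars each; a quick check: $3\cdot4000-5\cdot500=12000-2500=9500$.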
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show $\sigma^{-1} (i j)\sigma = ((i)\sigma\ (j)\sigma)$ Let $n \geq 2$ be an integer and $i, j \in \{1, 2, ..., n\} $ be distinct elements. Let $\sigma \in S_n$. Show that $\sigma^{-1} (i j)\sigma = ((i)\sigma\ (j)\sigma)$. Let $\tau=\sigma^{-1}(ij)\sigma$; then $((i)\sigma) \tau=((i)\sigma)\sigma^{-1} (i j)\sigma=(i)(ij)\sigma=(j)\sigma$, and similarly we can get $((j)\sigma)\tau=(i)\sigma$. Let $k\in \{1,2,3,\dots,n\}$ with $k\neq (i)\sigma$ and $k\neq (j)\sigma$; then $(k)\tau =k$. I still don't see what I need to show to complete the proof; can anyone give me a hint? Thanks
$$((i)\sigma)\sigma^{-1}(ij)\sigma=(i)(ij)\sigma=(j)\sigma$$ and $$((j)\sigma)\sigma^{-1}(ij)\sigma=(j)(ij)\sigma=(i)\sigma$$ So, $(i)\sigma\mapsto(j)\sigma$ and $(j)\sigma\mapsto(i)\sigma$, while (as you checked) every other point is fixed. A permutation that swaps $(i)\sigma$ with $(j)\sigma$ and fixes everything else is, by definition, the transposition $((i)\sigma\ (j)\sigma)$, which completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Logic behind finding a $ (2 \times 2) $-matrix $ A $ such that $ A^{2} = - \mathsf{I} $. I know the following matrix "$A$" results in the negative identity matrix when you take $A*A$ (same for $B*B$, where $B=-A$): $$A=\begin{pmatrix}0 & -1\cr 1 & 0\end{pmatrix}$$ However, I am not certain how one would go about finding this matrix without guessing and checking. Is there some systematic way of doing so? I have tried by assuming the following: $$\begin{pmatrix}a & b\cr c & d\end{pmatrix}^2 = \begin{pmatrix}-1 & 0\cr 0 & -1\end{pmatrix}$$ ...and then get the following equations: $$a*a+b*c=-1$$ $$a*b+b*d=0$$ $$a*c+c*d=0$$ $$b*c+d*d=-1$$ This only tells me that $a=-d$, and then as best I can tell leaves both $c$ and $d$ without a solution, so perhaps this isn't the best method, but it's the only approach that's coming to mind. Any suggestions?
One reason that it is hard to describe a systematic procedure that will let you find that solution to $A^2=-I$ is that there are so many other solutions to this equation. A solution cannot have real eigenvalues, so the image of the first standard basis vector $e_1$ (which give the firs column of your matrix) cannot be a multiple of $e_1$ itself, but otherwise it can be any vector$~v$ whatsoever. Then $[e_1,v]$ will be a basis of $\Bbb R^2$, and one can arrange that $A\cdot v=-e_1$ (since one can choose the images of a basis freely when defining a linear map), which will ensure that $A^2=-I$ (an easy check). But the example you gave is so obtained for the simplest choice of$~v$, namely $v=e_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
expectation calculation problem I have the answer for this and I know it's 1.05, but the explanation I was given is very difficult to understand, so I'm seeking some help here. A system made up of 7 components with independent, identically distributed lifetimes will operate until any 1 of the system's components fails. If the lifetime X of each component has density function $f(x) = \begin{cases} 3/x^4, & \text{for 1<x}\\ 0, & \text{otherwise} \end{cases}$ what is the expected lifetime until failure of the system? I tried to find the intersection of the 7 components by integrating and raising the result to the 7th power, but it doesn't give me anything useful...
You want the expected time until the earliest component failure, of seven i.i.d. components. This is the first (smallest) order statistic. $$\begin{align} \mathsf E[X_{(1)}] & = \binom{7}{1}\int_1^\infty x\cdot f_{X}(x)\cdot (1-F_X(x))^6 \operatorname d x \\ & = 7\int_1^\infty x \cdot\frac {3}{x^4}\cdot \left(\int_x^\infty \frac {3}{y^4}\operatorname d y\right)^6\operatorname d x \end{align}$$
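Carrying the computation through: $1-F_X(x)=\int_x^\infty 3y^{-4}\,\mathrm dy = x^{-3}$ for $x>1$, so $$\mathsf E[X_{(1)}] = 7\int_1^\infty x\cdot\frac{3}{x^4}\cdot x^{-18}\,\mathrm dx = 21\int_1^\infty x^{-21}\,\mathrm dx = \frac{21}{20} = 1.05,$$ which matches the answer you were given.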
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finite conjugate subgroup In a paper titled "Trivial units in Group Rings" by Farkas, what is meant by a finite conjugate subgroup? (The relevant passage was attached as an image.) What is the finite conjugate subgroup of a group? It is not clear to me what the author is referring to here.
I had never heard of this before, but after a bit of searching, I found the definition $\Delta(G) = \{ g \in G : |G:C_G(g)| < \infty \}$ or, in other words, the set of elements of $G$ whose conjugacy classes in $G$ are finite. It is easy to see that $\Delta(G)$ is a normal (in fact characteristic) subgroup of $G$. Apparently it arises frequently in the study of group rings, and one source cited Passman's book Infinite Group Rings for its basic properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Verbal question problem help I had this question today and got confused about how to construct my solution. At first view of the question, I decided to use $X$ and $Y$ and no other options: Maria purchased $X$ books for 300 dollars. After a day, the shop sold the books for 2 dollars cheaper. So Ana purchased 10 books more than Maria, and each book cost 2 dollars less than the original price. Ana paid 350 for her books. How many books did Maria purchase, and how much did each cost? I structured my 2 equations like this: ${X \cdot Y = 300}$ $(X + 10) \cdot (Y - 2) = 350$ Did I build my two equations correctly according to the question? How can I solve it? Because when I solve it, I get $-Y^2 = 640$ and then I get totally lost, not knowing if I did that right. My steps: $X = 300 - Y$ $((300 - Y) + 10)(Y - 2) = 350$ $=> (3000 -10Y)(Y - 2) = 350 => 3000Y - 6000 - 10Y^2 - 20Y = 350$ No idea what to do from this step...
As I mentioned in my comment, the first step should be $X=300/Y$. But that will make it hard to solve. A better way is to look at your second equation. If you multiply it out, there is a term $XY$ which can be replaced with $300$ according to your first equation. You might then go on to do your substitution with this simplified step. Also you should be careful when doing the algebra. I noticed several mistakes in your steps. For example $(300-Y)+10 \ne 3000 -10Y$, and so on.
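If you get stuck again, here is the rest worked out (skip this if you want to try it yourself first): expanding the second equation gives $XY - 2X + 10Y - 20 = 350$; since $XY = 300$, this becomes $-2X + 10Y = 70$, i.e. $X = 5Y - 35$. Substituting into $XY = 300$ gives $(5Y-35)Y=300$, i.e. $Y^2 - 7Y - 60 = 0$, so $Y = 12$ (taking the positive root) and $X = 25$. Check: Maria bought $25$ books at $12$ dollars each ($300$ total), and Ana bought $35$ books at $10$ dollars each ($350$ total).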
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Determining half life without logs, given only reduction undergone and total time taken I have a half-life question that I can't solve. There's very limited information given. Even the half-life formula has not been taught yet. The mass of a radioactive substance in a certain sample has decreased 32 times in 10 years. Determine the half-life of the substance. The answer is 2. From the question I understand that 32 half-lives have gone by in 10 years. How can this be solved using the simplest possible methods? Not using graphs and avoiding logarithms too if possible?
The solution $2$ assumes the following interpretation of the problem statement: Suppose you have a radioactive sample of mass $m_0$ which after $10$ years is reduced to mass $m_{10} = \frac{m_0}{32}$. What is the half-life of the substance? Recall that the half-life of a radioactive substance is the time after which the mass of a sample is halved by radioactive decay. If $10$ is a multiple of the half-life, then after $n$ halvings $$ m_{10} = \left(\left(\left(m_0 \cdot \frac{1}{2}\right) \cdot \frac{1}{2}\right) \dotsm \right) \cdot \frac{1}{2} = \frac{m_0}{2^n} $$ Since $2^5 = 32$, we know that in $10$ years the mass got halved $5$ times; in other words, once every $\frac{10}{5} = 2$ years. Note: Still, I find your problem statement ambiguous, at best.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Giving integer images (bis) Prove the statement below for integers $x, y, z$ (the restriction $x < y < z$ is only to keep the denominators nonzero; the property is really valid for all distinct $x, y, z$). $$F(x,y,z) = \frac{(y+z)x^n}{(z-x)(y-x)} +\frac{(z+x)y^n}{(z-y)(x-y)}+\frac{(x+y)z^n}{(x-z)(y-z)} \in \mathbb Z.$$
If $n = 1$, then the whole expression is equal to $-1$. If $n=2$, it is equal to $0$. Let us therefore assume that $n > 2$. We have: $$F(x,y,z) = \frac{y^nz^2 - y^2z^n + x^n(y-z)(y+z) + x^2(z^n-y^n)}{(x-y)(x-z)(y-z)} = \frac{f(x,y,z)}{(x-y)(x-z)(y-z)}.$$ What we want to know is whether the numerator is divisible by $x-y$, $x-z$ and $y-z$. Let's check (for example) $y-z$. It is clear that $y-z$ divides the second term. Finally, observe that $$y^nz^2 - y^2z^n + x^2(z^n-y^n) = y^2z^2(y^{n-2}-z^{n-2}) - x^2(y^n-z^n)$$ and $$a^k-b^k = (a-b)(a^{k-1} + a^{k-2}b + \dots + b^{k-1}).$$ We apply this with $a = y$ and $b = z$, noting that if $a,b \in \mathbb Z$ then the expression in brackets is indeed an integer.
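Not a proof, but a quick sympy sanity check (sympy assumed installed) that the expression really simplifies to a polynomial with integer coefficients for small $n$:

```python
from sympy import symbols, cancel, expand

x, y, z = symbols('x y z')

def F(n):
    return ((y + z) * x**n / ((z - x) * (y - x))
            + (z + x) * y**n / ((z - y) * (x - y))
            + (x + y) * z**n / ((x - z) * (y - z)))

for n in range(1, 8):
    # cancel() clears the common denominator; the result is a polynomial
    print(n, expand(cancel(F(n))))   # n=1 prints -1, n=2 prints 0, ...
```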
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that a symmetric distribution has zero skewness Prove that a symmetric distribution has zero skewness. Okay, so the question states: First prove that a distribution symmetric about a point a, has mean a. I found an answer on how to prove this here: Proof of $E(X)=a$ when $a$ is a point of symmetry Of course I used method 2 But now for the rest of this proof I'm struggling. $\mu_{2X}$ and $\mu_{3X}$ are the $2^{nd}$ and $3^{rd}$ moments about the mean, respectively (w.r.t X). $$E[X] = \mu = a$$ $$\mu_{2X}= E[(X-a)^2] = E[((a+Y)-a)^2] = E[Y^2]$$ $$\mu_{3X} = E[Y^3]$$ Please just check my notation. I always use subscripts to show which distribution is in question, but especially with skewness, can this be done? $$\text{Skewness} = \sqrt{\beta_{1X}} = \frac{\frac{1}{n}\sum_{x \in S}y^3}{(\sqrt{\frac{1}{n}\sum_{x \in S}y^2})^3}$$ So I thought it just needs to be shown that the numerator is always equal to 0. I don't know if I approached it correctly, but it seemed to make sense to me and now I'm stuck
I have a simpler proof. I hope this is ok. Let $Y = X - a$ be a random variable. Now note that due to symmetry, $Y$ and $-Y$ have the same distribution. That implies $$E[Y^3] = E[(-Y)^3] = -E[Y^3]$$ Hence $2E[Y^3]=0$, i.e. $E[Y^3] = 0$ (assuming the third moment exists).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1241793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many non-prime factors are in the number $N=2^5 \cdot 3^7 \cdot 9^2 \cdot 11^4 \cdot 13^3$. I want to find the non-prime factors of the number $N=2^5 \cdot 3^7 \cdot 9^2 \cdot 11^4 \cdot 13^3$. First I tried finding all the factors by adding 1 to each of the exponents and then multiplying them, then finding the prime factors of the given number and subtracting the prime factors from the total number of factors, but I'm not getting the answer. The answer is $1436$.
Write it as $2^5 \cdot 3^{11} \cdot 11^4 \cdot 13^3$. Thus, as you said, adding one to each exponent and multiplying, we get the total number of divisors, which is $6 \cdot 12 \cdot 5 \cdot 4 = 1440$. Subtracting the four prime factors ($2,3,11,13$) leaves us with $1436$.
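A one-line sanity check with sympy, if you have it installed:

```python
from sympy import divisor_count

N = 2**5 * 3**7 * 9**2 * 11**4 * 13**3
print(divisor_count(N))      # 1440 divisors in total (including 1 and N)
print(divisor_count(N) - 4)  # 1436 after removing the primes 2, 3, 11, 13
```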
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to scale a random integer in $[A,B]$ and produce a random integer in $[C,D]$ I know there are many methods to scale a number from a range $[A,B]$ to a range $[C,D]$, and I've searched the web over and over. I've seen this math.SE thread. I need to scale a big number (signed 32-bit integer) between $-2147483648$ and $2147483647$, to a smaller range (from 0 to millions). However, the original number is generated by a random number generator that I cannot control in any way, so I must be sure that the output formula does not alter its randomness (in a way that I can demonstrate - academic papers are well accepted). I need the correct way to scale this number taking into consideration that: * *the output range must have as maximum value a power of two (e.g., $4194304$) *the formula can be demonstrated *the formula does not alter the randomness of the source number Can anyone help?
This works in special cases: Let $[a,b]$ be an interval and consider a uniform distribution on the integers in this interval. Let $k$ be the number of integers in $[a,b]$. Let $[c,d]$ be another interval and suppose that the number of integers in this interval is $l$. BIG ASSUMPTION: Suppose that $l\mid k$. Then, any $\frac{k}{l}=m$ to $1$ map will give you a uniform distribution on $[c,d]$. For example, if you have the number $n$, you could take $n\pmod l$ and then add $c$. If you don't have the BIG ASSUMPTION, then you may be able to weight the map described here (and not have it exactly $m$ to $1$), but that is more challenging and situation specific.
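In the asker's concrete setting the BIG ASSUMPTION holds automatically: the source range has $2^{32}$ equally likely values and the desired output size is a power of two, hence divides $2^{32}$. A minimal Python sketch (the function name and the choice of $2^{22}$ outputs are mine, for illustration):

```python
def scale_to_power_of_two(n: int, target_bits: int = 22) -> int:
    """Map n, assumed uniform over the 2**32 signed 32-bit integers
    [-2**31, 2**31 - 1], to a uniform integer in [0, 2**target_bits).

    Since 2**target_bits divides 2**32, reduction mod 2**target_bits is
    an exact (2**32 / 2**target_bits)-to-1 map, preserving uniformity.
    """
    return n % (2 ** target_bits)   # Python's % is non-negative here

print(scale_to_power_of_two(-2147483648))  # 0
print(scale_to_power_of_two(2147483647))   # 4194303
```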
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Proving the sequence $f_{x_n}(x)= (n+1)x^n$ converges in distribution I am preparing for a final exam and just working on sample problems. Let $X_1,X_2,\dots$ be an infinite sequence of continuous random variables such that $f_{x_n}(x)= (n+1)x^n$ for $0<x<1$ and $0$ otherwise. Show that $\{X_n\}$ converges in distribution to the degenerate r.v. $X$ where $P(X=1)=1$. Progress: I have found the cumulative distribution for both $F_x$ and $F_{x_n}$: * *$F_{x_n}(x) = x^{n+1}$ for $0<x<1$, $1$ for $x\ge 1$, and $0$ for $x\le 0$ *$F_x(x)= 1$ for $x\ge 1 $ and $0$ when $x<1$. I don't know how to prove it from there on.
Hint: A sequence of rvs converges in distribution iff the sequence of corresponding characteristic functions converges pointwise to a function continuous at $0$. This is Levy's continuity theorem. So find the characteristic function sequence and show its convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Vector times symmetric matrix For most of you here, this is probably quite basic. Since for a symmetric matrix $A$ the first row equals the first column, multiplying the matrix by a column vector $b$ should equal multiplying the transposed vector $b'$ by the symmetric matrix, i.e. if $A=A'$ then $$Ab=b'A$$ Could you please confirm? Is there a better derivation than the verbal one above? Also, while I have found many sources online about matrix algebra, I have not found this property yet. Is there a reliable source on the web where one could find this? I was hoping that such a site would also contain additional information which might help to answer some other questions I have.
Except in dimension $1$, your claim is not correct. However, because transposition swaps factors we have $$ (Ab)^T=b^TA^T=b^TA$$ that is multiplying with a column vector from the right equals the transpose of multiplying the transposed vector from the left.
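A quick numpy check of the corrected identity $(Ab)^T=b^TA$ (numpy assumed available):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric
b = np.array([[1.0], [4.0]])            # column vector

print(A @ b)                             # column vector (2x1)
print(b.T @ A)                           # row vector (1x2)
print(np.allclose((A @ b).T, b.T @ A))   # True: the transpose relation holds
```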
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that the Wronskian of solutions of $y''+p(x)y'+q(x)y=0$ satisfies $\frac{dW}{dx}+pW=0$ So I am given: $\{y_1(x),y_2(x)\}$ is a fundamental solution set of the ODE: $$y''+p(x)y'+q(x)y=0$$ I need to show that the Wronskian $W(y_1,y_2)$ satisfies the ODE $\frac{dW}{dx}+pW=0$ and hence, $W(x)=C \cdot \exp(-\int p(x) dx)$ I calculated the Wronskian to be $W=y_1y_2'-y_1'y_2$, then $\frac{dW}{dx}=y_1y_2''-y_1''y_2$, but at this point, I'm not too sure what to show.
you have $$y_1'' + py_1' + qy_1 = 0\tag 1$$ $$y_2'' + py_2' + qy_2 = 0 \tag 2 $$ multiplying $(2)$ by $y_1$, $(1)$ by $y_2$ and subtracting gives you $$y_1y_2'' - y_2y_1'' + p(y_1y_2' - y_2y_1') = 0 \to W' + pW = 0$$ the solution is $$W = Ce^{-\int_0^x p\, dt} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of Newton Girard formula symmetric polynomials Newton Girard formula states that for $k>2$: \begin{equation} p_k=p_{k-1}e_1-p_{k-2}e_2+\cdots +(-1)^{k}p_1e_{k-1}+(-1)^{k+1}ke_{k} \end{equation} where $e_i$ are elementary symmetric functions and $p_0=n$ with $p_k=x_1^k+\cdots+x_n^k$. I am using induction to prove this result. I am stuck at the inductive step, that is to show: \begin{equation} p_{k}e_1-p_{k-1}e_2+\cdots +(-1)^{k+1}p_1e_k+(-1)^{k+2}(k+1)e_{k+1}= x_1^{k+1}+\cdots+x_n^{k+1} \end{equation} I am not able to see how I can use my inductive hypothesis in the left hand side of the above expression.
By way of enrichment here is an alternate formulation using cycle indices. Recall that the OGF of the cycle index $Z(P_n)$ of the unlabeled set operator $\mathfrak{P}_{=n}$ is given by $$G(w) = \sum_{n\ge 0} Z(P_n) w^n = \exp\left(\sum_{q\ge 1} (-1)^{q+1} a_q \frac{w^q}{q}\right).$$ Differentiating we obtain $$G'(w) = \sum_{n\ge 0} (n+1) Z(P_{n+1}) w^n = G(w) \left(\sum_{q\ge 1} (-1)^{q+1} a_q w^{q-1}\right).$$ Extracting coefficients we thus have $$[w^n] G'(w) = (n+1) Z(P_{n+1}) = \sum_{q=1}^{n+1} (-1)^{q+1} a_q Z(P_{n+1-q})$$ This is apparently due to Lovasz. Substitute the cycle indices with the variables $X_1$ to $X_m$ to get $$(n+1) Z(P_{n+1})(X_1+\cdots+X_m) \\= \sum_{q=1}^{n+1} (-1)^{q+1} (X_1^q+\cdots+X_m^q) Z(P_{n+1-q})(X_1+\cdots+X_m)$$ This yields $$(n+1) e_{n+1}(X_1,\ldots,X_m) = \sum_{q=1}^{n+1} (-1)^{q+1} p_q(X_1,\ldots,X_m) e_{n+1-q}(X_1,\ldots,X_m).$$ Now a choice of variable names yields the result. Remark. The identity for $G(w)$ follows from the EGF for the labeled species of permutations where all cycles are marked with a variable indicating the length of the cycle. This yields $$\mathfrak{P} \left(A_1 \mathfrak{C}_{=1}(\mathcal{W}) + A_2 \mathfrak{C}_{=2}(\mathcal{W}) + A_3 \mathfrak{C}_{=3}(\mathcal{W}) + \cdots \right).$$ Translating to generating functions we obtain $$G(w) = \exp\left(a_1 w + a_2 \frac{w^2}{2} + a_3 \frac{w^3}{3} + \cdots\right).$$ The fact that $$Z(P_n) = \left.Z(S_n)\right|_{a_q := (-1)^{q+1} a_q}$$ then confirms the claim.
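The final identity is easy to spot-check with sympy for a few variables; this sketch verifies $(n+1)e_{n+1}=\sum_{q=1}^{n+1}(-1)^{q+1}p_q\,e_{n+1-q}$ for four variables (the helper names e and p are mine):

```python
from itertools import combinations
from sympy import symbols, expand, prod

xs = symbols('x1:5')   # four variables x1..x4

def e(k):              # elementary symmetric polynomial e_k
    return sum(prod(c) for c in combinations(xs, k)) if k > 0 else 1

def p(k):              # power sum p_k
    return sum(v**k for v in xs)

for n in range(4):
    lhs = (n + 1) * e(n + 1)
    rhs = sum((-1)**(q + 1) * p(q) * e(n + 1 - q) for q in range(1, n + 2))
    print(n + 1, expand(lhs - rhs) == 0)   # True for each n
```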
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If the set $B=\{f(x) : x\in A\}$ has supremum and $C=\{k+f(x): x\in A\}$, then what is $\sup C$? If the set $B=\{f(x) : x\in A\}$ has supremum and $C=\{k+f(x): x\in A\}$, then what is $\sup C$? Since $C$ is not an empty set and $f(x)\le \sup(B) ⇒ k+f(x)\le k+\sup(B)$. So $C$ is bounded above. Thus, $\sup(C)\le k+\sup(B)$ If $\sup(C)\ge k+\sup(B)$ is true, I can conclude $\sup(C)=k+\sup(B)$. But I don't know how to show $\sup(C)\ge k+\sup(B)$.
* *$\sup(C) \geq k + \sup (B)$ is equivalent to $\sup (C) - k \geq \sup (B)$. *Given that $\sup (C)$ is an (in fact, the smallest) upper bound on $C$, can you show that $\sup (C) - k$ is an upper bound on $B$? *Since $\sup(B)$ is its smallest upper bound, you'd automatically have the inequality $\sup (C) - k \geq \sup (B)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Absolute Value Algebra with inverses I noticed the following equality in some material regarding limits and infinite series. $$ \left |\frac{x}{x+1} - 1 \right| = \left |\frac{-1}{x+1} \right| $$ And I'm honestly stumped (and slightly ashamed) on how to algebraically go from the lefthand side to the righthand side. Any pointers? Thanks!
It's just a little bit of algebra to get there. \begin{align*} \left|\frac{x}{x+1} -1\right| &= \left|\frac{x}{x+1} - \frac{x+1}{x+1}\right| \\ &= \left|\frac{x-x-1}{x+1}\right| \\ &= \left|\frac{-1}{x+1}\right| \end{align*} I hope that helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1242868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Seeking after notation for two objects equal up to a constant Sometimes we want to express that two objects are equal up to a constant, but there is no need to keep writing out the constant or constants. For example, oftentimes the constant or constants involved in the derivation of a primitive of a function play an unimportant role. I wonder if there is a convenient, established notation for such a matter. I thought of using modular notation. But using that does not simplify anything, because I still have to mention "modulo-what". If possible, some variants of the equality sign are preferable. For instance, if the constant of proportionality is unimportant then it may be suppressed by using $\propto.$ It is in this sense that I say some variants of the equality sign are preferable.
Apparently there is no standard notation. I have seen $\propto$ used in the additive context as well as the multiplicative, leaving it to the reader to overcome the abuse of notation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Clockwise rotation of $3\times3$ matrix? I've recently been studying matrices and have encountered a rather intriguing question which has quite frankly stumped me. Find the $3\times3$ matrix which represents a rotation clockwise through $43°$ about the point $(\frac{1}{2},1+\frac{8}{10})$ For example: if the rotation angle is $66°$ then the centre of rotation is $(0.5, 1.3)$. Points are represented by homogeneous coordinates. What would this look like? What would the centre of rotation be? Would anyone be able to provide a step-by-step solution to this so I can learn from it please? In addition, there's a follow-up question: A "house" shape is formed as a polygon with vertices $(0, 0), (1, 0), (1, 1), (0.5, 1.5)$ and $(0, 1)$. Use your matrix to rotate each of these vertices, and draw the rotated house. How can this be achieved? Thanks!
The center of rotation is the point $C=(c_1,c_2)=(1/2,9/5)$ and the angle is $\theta= -43°$. You can represent this transformation with a $3 \times 3$ matrix using homogeneous coordinates in the affine plane. Note that you can perform your transformation in three steps: 1) translate the origin to the point $C$ 2) rotate about the new origin by the angle $\theta$ 3) return to the old origin by means of the opposite translation. In homogeneous coordinates these transformations are represented by the matrix: $$ M= T R T^{-1}= \begin{pmatrix}1&0&c_1\\0&1&c_2\\0&0&1\end{pmatrix} \begin{pmatrix}\cos \theta&-\sin \theta&0\\\sin \theta&\cos \theta&0\\0&0&1\end{pmatrix} \begin{pmatrix}1&0&-c_1\\0&1&-c_2\\0&0&1\end{pmatrix} $$ You can apply this matrix to the points of your ''house'', using homogeneous coordinates, e.g. the point $A=(0.5,1.5)$ is represented by $(0.5,1.5,1)^T$.
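A numerical sketch of this recipe with numpy (using the centre $(0.5, 1.8)$ and the clockwise angle $43°$ from the question, applied to the house vertices from the follow-up):

```python
import numpy as np

theta = np.deg2rad(-43.0)      # clockwise rotation by 43 degrees
c1, c2 = 0.5, 1.8              # centre of rotation

T    = np.array([[1, 0,  c1], [0, 1,  c2], [0, 0, 1]], dtype=float)
Tinv = np.array([[1, 0, -c1], [0, 1, -c2], [0, 0, 1]], dtype=float)
R    = np.array([[np.cos(theta), -np.sin(theta), 0],
                 [np.sin(theta),  np.cos(theta), 0],
                 [0,              0,             1]])

M = T @ R @ Tinv

house = np.array([[0, 0], [1, 0], [1, 1], [0.5, 1.5], [0, 1]], dtype=float)
homog = np.column_stack([house, np.ones(len(house))])   # homogeneous coords
print((M @ homog.T).T[:, :2])                           # rotated vertices
```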
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$\lim_{x\to\infty}{f(x)}=\lim_{x\to\infty}{g(x)}\Rightarrow\lim_{x\rightarrow\infty}{\frac{f(x)}{2^x}}=\lim_{x\rightarrow\infty}{\frac{g(x)}{2^x}}$? Is this statement true? Why? $$\lim_{x \rightarrow \infty}{f(x)} =\lim_{x \rightarrow \infty}{g(x)} \quad \Rightarrow\quad \lim_{x \rightarrow \infty}{\frac{f(x)}{2^x}}=\lim_{x \rightarrow \infty}{\frac{g(x)}{2^x}}$$
The answer is definitely no. Think of 2 functions which tend to $\infty$ as $x\to\infty$...
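In case the hint is not enough: reading the hypothesis in the extended sense where both limits are $+\infty$, take $f(x)=2\cdot 2^x$ and $g(x)=2^x$. Then $\lim f=\lim g=\infty$, but $f(x)/2^x\to 2$ while $g(x)/2^x\to 1$.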
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove the inclusion-exclusion formula We just touched upon the inclusion-exclusion formula and I am confused about how to prove this: $|A ∪ B ∪ C| =|A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|$ We are given this hint: To do the proof, let's denote $X = A ∪ B$, then $|(A ∪ B) ∪ C| = |X ∪ C|$, and we can apply the usual subtraction rule (you will have to apply it twice). That just made me even more confused. I was hoping someone could guide me through this, or explain it.
$|(A\cup B)\cup C|=|A\cup B|+|C|-|(A\cup B)\cap C|$ Now, $|A\cup B|=|A|+|B|-|A\cap B|$ $$\text{and }|(A\cup B)\cap C|=|(A\cap C)\cup (B\cap C)|=|(A\cap C)|+|(B\cap C)|-|(A\cap C)\cap (B\cap C)|=|(A\cap C)|+|(B\cap C)|-|A\cap B\cap C|$$
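This is no substitute for the proof, but Python's set operations make the identity easy to spot-check on concrete sets:

```python
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
print(lhs, rhs)   # 7 7
```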
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Real Numbers are Roots $r, s$. Real numbers $r$ and $s$ are roots of $p(x)=x^3+ax+b$, and $r+4$ and $s-3$ are roots of $q(x)=x^3+ax+b+240$. Find the sum of all possible values of $|b|$. Using Vieta's Formulas, $r+s+x_1=0 \Rightarrow x_1=-r-s$, where $x_1$ is the third root. Similarly, $x_2=-r-s-1=x_1-1$, where $x_2$ is the third root of $q(x)$. I have the list here: $a_n = a_n$ $a_{n-1} = -a_n(r_1+r_2+\cdots+r_n)$ $a_{n-2} = a_n(r_1r_2+r_1r_3+\cdots+r_{n-1}r_n)$ $\vdots$ $a_0 = (-1)^n a_n r_1r_2\cdots r_n$ Obviously, $b = a_0$ so for $p(x)$: $$b = (-1)^{3}(1)(r)(s)(-1)(r + s) = (r)(s)(r+s) = r^2s + s^2r$$ For $q(x)$ then, $$a_0 = (b + 240) = (-1)^{3}(1)(r+4)(s-3)(-1)(r + s + 1) = (r+4)(s-3)(r + s + 1) $$ $$= r^2s - 3r^2 + rs^2 + 2rs - 15r + 4s^2 -8s - 12$$ $$b = r^2s - 3r^2 + rs^2 + 2rs - 15r + 4s^2 -8s - 252$$ But that leaves an awfully weird system of equations. PLEASE DO NOT GIVE ME A FULL ANSWER, Just help.
Since $r,s$ are two roots of $x^3+ax+b=0$, we have $$ r^3+ar+b=0,s^3+as+b=0 \tag{1}$$ and hence $$ (r^3-s^3)+a(r-s)=0. $$ Assuming $r-s\neq0$, we have $$ r^2+rs+s^2+a=0.\tag{2}$$ Similarly since $r+4,s-3$ are two roots of $x^3+ax+b+240=0$, we have $$ (r+4)^3+a(r+4)+b+240=0,(s-3)^3+a(s-3)+b+240=0\tag{3}$$ and hence $$ [(r+4)^3-(s-3)^3]+a(r-s+7)=0.$$ Assuming $r-s+7\neq 0$, we have $$ (r+4)^2+(r+4)(s-3)+(s-3)^2+a=0.\tag{4}$$ Subtracting $(2)$ from $(4)$ gives $$ 13+5r-2s=0 $$ from which we find that $$s=\frac{1}{2}(5r+13).\tag{5}$$ Putting this in (2), we obtain $a$ in terms of $r$. Putting them in the first equation of (1), we have $b$ in terms of $r$. Putting $s,a,b$ in the first equation of (3), we will have an equation in $r$ from which you can get the values of $r$. I think you can do the rest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $A,B \in {M_2}$ and $C=AB-BA$. Why is ${C^2} = \lambda I$ true? Let $A,B \in {M_2}$ and $C=AB-BA$. Why does ${C^2} = \lambda I$ hold for some scalar $\lambda$?
It is a well-known fact that for any two matrices $A$ and $B$ with $C = [A, B] = AB - BA, \tag{1}$ we have $\text{Tr} (C) = 0, \tag{2}$ where $\text{Tr}(X) = \text{trace}(X) = X_{11} + X_{22} \tag{3}$ for any $2 \times 2$ matrix $X = [X_{ij}]$, $1 \le i, j \le 2$. It is also easy to verify (2) by direct calculation, especially in the $2 \times 2$ case (though it in fact holds for matrices of any size); with $A = [A_{ij}]$, $B = [B_{ij}]$, we have $(AB)_{ii} = A_{i1}B_{1i} + A_{i2}B_{2i}; \tag{4}$ reversing the roles of $A$ and $B$ one writes $(BA)_{ii} = B_{i1}A_{1i} + B_{i2}A_{2i}; \tag{5}$ if we now subtract (5) from (4) and sum over $i$, we find that all the terms of $\text{Tr}([A, B]) = \sum_1^2 (AB - BA)_{ii} = \sum_1^2 ((AB)_{ii} - (BA)_{ii}) \tag{6}$ cancel out, leaving us with $\text{Tr}([A, B]) = 0; \tag{7}$ incidentally, essentially the same argument works for any two $n \times n$ matrices $A$, $B$. That $\text{Tr}(C) = 0$ is the first piece of the puzzle; the second is provided by the Hamilton-Cayley theorem for $2 \times 2$ matrices, which asserts that $C$ satisfies the equation $C^2 - \text{Tr}(C)\,C + \det(C) I = 0; \tag{8}$ see Proof that the characteristic polynomial of a $2 \times 2$ matrix is $x^2 - \text{tr}(A) x + \det (A)$ for a more detailed discussion. Since $\text{Tr}(C) = 0$, (8) reduces to $C^2 + \det(C) I = 0, \tag{9}$ or $C^2 = -\det(C) I; \tag{10}$ taking $\lambda = -\det (C)$ in (10), we are done.
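A quick randomized numpy check of both ingredients, the vanishing trace and the relation $C^2=-\det(C)\,I$:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
C = A @ B - B @ A

print(np.trace(C))                                         # ~0 up to rounding
print(np.allclose(C @ C, -np.linalg.det(C) * np.eye(2)))   # True
```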
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
a relation in logic Suppose $\prec$ is a relation defined on the set of well-formed formulas such that $\phi \prec \psi$ iff $\models \phi \rightarrow \psi$ and $ \nvDash \psi \rightarrow \phi$ I would like to prove the following: * *If $\phi \prec \psi$ then there exists $\chi$ such that $\phi\prec\chi\prec\psi$. *I'd like to find $\phi_1\prec\phi_2\prec\phi_3\prec...$ Thank you.
The best way to explain this is going to be with the aid of an example. Consider two basic formulae $\phi$ and $\psi$ consisting of propositional variables $p$ and $q\,$. That is, let $\phi :=p\,$, and $\psi:= p\lor q\,$. Now, since $\vDash p\rightarrow (p \lor q )\,$, and $\not\vDash (p \lor q )\rightarrow p\,$, it follows that $\phi \prec \psi\,$. Now to show that we can always find a formula $\chi$ such that $\phi\prec\chi \prec\psi\,$. Probably the best way to see this is by (i) taking into account that the introduction of an additional propositional variable gives us the required freedom to fashion such a formula $\chi\,$, and (ii) recourse to functional completeness, understood to mean that the set of logical connectives or Boolean operators of our formal language is adequate to express all possible truth tables. This is achieved by composition of the basic connectives. In this example I work with a particular pair of formulae $\phi$ and $\psi$ (defined earlier) such that $\phi \prec \psi\,$, but the method (algorithm) applies to any such formulae. Consider the following truth table. $$ \begin{matrix} p & q & r & (p \lor q) & (p \rightarrow (p \lor q) ) &( (p \lor q) \rightarrow p ) & \chi :=f (p,q,r) \\ \hline 1 & 1 & 1 & 1& 1& 1& 1\\ 0 & 1 & 1 & 1& 1& 0& 0\\ 1 & 0 & 1 & 1& 1& 1& 1\\ 0 & 0 & 1 & 0& 1& 1& 0\\ 1 & 1 & 0 & 1& 1& 1& 1\\ 0 & 1 & 0 & 1& 1& 0& 1\\ 1 & 0 & 0 & 1& 1& 1& 1\\ 0 & 0 & 0 & 0& 1& 1& 0\\ \end{matrix} $$ Note how introducing an additional propositional variable $r$ doesn't have any effect on the formulae that don't have $r$ in them. Introducing $r$ creates copies of the valuations of formulae without it. See the values of $p$ and $q$ -- they just repeat, once when $r$ is $1$ and again when $r$ is $0\,$. The idea is to make $\chi$ weaker than $p$ yet stronger than $p\lor q\,$. Note how row $2$ and its repeated counterpart (as far as $p$ and $q$ are concerned), row $6\,$, are those rows that are responsible for $\not\vDash (p \lor q )\rightarrow p\,$. That is, those are the rows where $p \lor q$ is true, but $p$ is false. The trick that I suggest is to keep the top half of $\chi$'s valuation the same as $p$'s, and the bottom half the same as that of $p \lor q\,$. This achieves two things: it maintains $\chi$ strictly weaker than $p$, i.e. $\not\vDash \chi \rightarrow p\,$, since on the $6$'th row $\chi$ is true and $p$ is false, but otherwise $\chi$ is the same as $p\,$, and as a matter of fact $\vDash p \rightarrow \chi\,$, so $p \prec \chi\,$. Also it is the top half of $\chi$'s valuation thus constructed that gives $\not\vDash (p \lor q) \rightarrow \chi \,$, in virtue of row $2$, yet otherwise $\vDash \chi \rightarrow (p \lor q)\,$, hence $\chi \prec (p \lor q)\,$. Hence $p \prec \chi \prec (p \lor q)\,$. That is, $\phi \prec \chi \prec \psi \,$, as required. That we can always construct such a formula $\chi$ is granted by functional completeness, i.e. $f$ denotes a composition of connectives from an adequate set, say $\{\neg, \lor \}\,$, and $f(p,q,r)$ denotes the complex formula that includes the new propositional variable $r\,$ that we needed to introduce.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If each term in a sum converges, does the infinite sum converge too? Let $S(x) = \sum_{n=1}^\infty s_n(x)$ where the real valued terms satisfy $s_n(x) \to s_n$ as $x \to \infty$ for each $n$. Suppose that $S=\sum_{n=1}^\infty s_n< \infty$. Does it follow that $S(x) \to S$ as $x \to \infty$? I really do not recall any relevant theorems... please point one out!
No. Let $s_n(x)=\frac 1x$, $s_n=0$. Then $S=\sum_{n=1}^\infty s_n$ trivially converges. However, $S(x)=\sum_{n=1}^\infty s_n(x)$ is clearly divergent, since for each fixed $x>0$ the summand is a constant (with respect to $n$) greater than $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 4 }
Determinants of triangular matrices Does a lower triangular matrix have a determinant that is equal to the product of the elements on the diagonal, as is the case for an upper triangular matrix?
The matrix looks like this: $$\begin{bmatrix} a_{1,1}&0&\cdots&\cdots&0 \\b_{1,2}&a_{2,2}&\ddots&&\vdots \\ \vdots&\ddots&\ddots&\ddots&\vdots \\ \vdots&&\ddots&\ddots&0 \\ b_{1,n}&\cdots&\cdots&b_{n-1,n}&a_{n,n} \end{bmatrix}$$ The determinant can be expanded along the top row as the sum of the products of its entries with their cofactors (signed minors); since $a_{1,1}$ is the only nonzero entry in the top row, this determinant equals $$a_{1,1}\cdot\det\begin{bmatrix} a_{2,2}&0&\cdots&0 \\b_{2,3}&a_{3,3}&\ddots&\vdots \\ \vdots&\ddots&\ddots&0 \\ b_{2,n}&\cdots&b_{n-1,n}&a_{n,n} \end{bmatrix}$$ This step eliminates the first column, $b_{1,x}$, and we are left with a similar matrix to find the determinant of. Reducing this one likewise, we are left with $a_{1,1}\cdot a_{2,2}\cdot$(the determinant of the remaining rows and columns), and so on until we have the product $a_{1,1}\cdot a_{2,2}\cdots a_{n,n}$: the product of the diagonal.
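A quick numpy check on a random lower-triangular matrix (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
L = np.tril(rng.standard_normal((5, 5)))   # random lower-triangular matrix

print(np.linalg.det(L))
print(np.prod(np.diag(L)))                 # same value up to rounding
print(np.allclose(np.linalg.det(L), np.prod(np.diag(L))))  # True
```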
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Parameterizing cliffs I am looking for a function $f(x; \alpha, X_1, X_2, Y_1, Y_2)$ that has the following property: For $\alpha=0$ it behaves linearly between $(X_1, Y_1)$ and $(X_2, Y_2)$, and as $\alpha$ gets closer to 1, it approximates a sharp cliff, as in the figure below. The function need not be defined for $\alpha=1$. Is there a relatively "simple" function (trigonometrics, powers, logarithms and exponentials are fine) that captures this behavior?
The linear equation is given by $$\frac{x - x_1}{\lvert x_2 - x_1\rvert} + \frac{y - y_2}{\lvert y_1 - y_2\rvert} = 1.$$ We can get successively 'sharper' curves by using $$\left(\frac{x - x_1}{\lvert x_2 - x_1\rvert}\right)^{k} + \left(\frac{y - y_2}{\lvert y_1 - y_2\rvert}\right)^k = 1$$ for integer $k \geq 1$, using larger $k$. Here are the graphs for $k = 1, 2, 3, 4, 5, 6$ through the points $(0, 2)$ and $(3, 0)$: For $k = 1$, we have the linear equation. For $k = 2$, it's one quarter of an ellipse. Unfortunately, I'm not sure how to translate my parameter of positive integer $k$-values into the parameter $\alpha \in [0, 1)$. I'll leave this here for now, and think about it. EDIT: It appears that using $k = \dfrac{1+ \alpha}{1 - \alpha}$ for the exponent works quite well. Here is an image with $\alpha \in \{0, 0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9\}$: And if instead you use the function $$\left\lvert\frac{x - x_1}{x_2 - x_1}\right\rvert^{k} + \left\lvert\frac{y - y_2}{y_1 - y_2}\right\rvert^k = 1,\quad k = \frac{1 + \alpha}{1 - \alpha},\quad \alpha \in [0, 1),$$ you get something that behaves rather well, regardless of whether $x_1 < x_2$, etc. Now these are between the points $(x_1, y_1) = (0, 2)$ and $(x_2, y_2) = (3, 4)$, with some bonus points thrown in:
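If an explicit $y=f(x)$ form is wanted, one can solve the last display for $y$; here is a Python sketch (my own derivation: it takes the solution branch lying between the two endpoints):

```python
import numpy as np

def cliff(x, alpha, x1, x2, y1, y2):
    # solve |(x-x1)/(x2-x1)|^k + |(y-y2)/(y1-y2)|^k = 1 for y,
    # with k = (1+alpha)/(1-alpha); alpha = 0 gives the straight line,
    # alpha -> 1 approaches a sharp cliff
    k = (1 + alpha) / (1 - alpha)
    u = np.abs((x - x1) / (x2 - x1)) ** k
    return y2 + (y1 - y2) * (1 - u) ** (1 / k)

xs = np.linspace(0, 3, 7)
print(cliff(xs, 0.0, 0, 3, 2, 0))   # linear between (0, 2) and (3, 0)
print(cliff(xs, 0.9, 0, 3, 2, 0))   # much sharper corner
```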
{ "language": "en", "url": "https://math.stackexchange.com/questions/1243927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Collection of all partial functions is a set I'm studying real analysis from prof. Tao's book "Analysis 1" and I'm stuck on the following exercise: "Let $ X $ , $ Y $ be sets. Define a partial function from $ X $ to $ Y $ to be any function $ f: X' \to Y' $ whose domain $ X' $ is a subset of $ X $, and whose range $ Y' $ is a subset of $ Y $. Show that the collection of all partial functions from $ X $ to $ Y $ is itself a set." My attempt: I know that if I "build" the set $\{ Y_1^{X_1}, Y_2^{X_2}, \dots \}$ ($X_i \in 2^X$, $Y_i \in 2^Y$) then the result follows immediately from the axiom of union, so my goal is to build that set. To do so I let $ X' $ be an arbitrary element of $ 2^X $ (power set of $X$) and for every $Y' \in 2^Y$ I let $ P(Y',y):="y=Y'^{X'}"$. From the axiom of replacement I now know that the set $\{f:X'\to Y' \mid Y' \in 2^Y\}=\{Y_1^{X'}, Y_2^{X'},\dots\}$ exists. Now I want to allow $X'$ in the set above to vary in $2^X$ (this, together with the axiom of union should be enough to conclude the proof) but I haven't been able to do it so far (I think I should use the axiom of replacement again but I don't know how to apply it). So, I would appreciate any hint about how to conclude this last step. Best regards, lorenzo.
Completely revised. I’m assuming that you’ve already shown (or assumed) that for any sets $S$ and $T$ the set $T^S$ is well-defined. Note that if $f:S\to T\subseteq Y$, then $f:S\to Y$ as well. Thus, we really need only $$\bigcup\left\{Y^S:S\in 2^X\right\}\;.$$ Let $P(x,y)$ hold iff $x\in 2^X$ and $y=Y^x$, or $x\notin 2^X$ and $y=\varnothing$; clearly $P$ is functional, and $2^X$ is a set, so we can apply replacement to conclude that $$\left\{Y^S:S\in 2^X\right\}$$ exists. Now just apply the union axiom.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
I need help finishing this proof using the Intermediate Value Theorem? Let $f$ and $g$ be continuous functions on $[a,b]$ such that $f(a)\geq g(a)$ and $f(b) \leq g(b)$. Prove $f(x_0)=g(x_0)$ for at least one $x_0$ in $[a,b]$. Here's what I have so far: Let $h$ be a continuous function where $h = f -g$ Since $h(b)=f(b)-g(b)\leq 0$, and $h(a)=f(a)-g(a)\geq 0$, then $h(b)\leq 0 \leq h(a)$ is true. So, by IVT there exists some $y$ such that... And that's where I need help. I read what the IVT is, but I could use some help explaining how it applies here, and why it finishes the proof. Thank you!
There exists some $y \in [a,b]$ such that $h(y) = 0$: if $h(a)=0$ or $h(b)=0$ we may take an endpoint, and otherwise $h(a)>0>h(b)$, so the IVT gives $y\in(a,b)$. So $f(y) = g(y)$. (The degenerate case $a=b$ is trivial, since then $f(a)\geq g(a)$ and $f(a)\leq g(a)$ force equality.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there any interpretation to the imaginary component obtained when computing the geometric mean of a series of negative returns? When computing returns in finance geometric means are used because the return time series of a financial asset is a geometric series: $\mu_r = \sqrt[T]{\prod_{t=1}^T r_t}$ where the return is computed as $r_t = \log\left(\frac{p_{t+1}}{p_t}\right)$ and $p$ is the value of the asset. Negative returns are (sadly) a financial reality. But the geometric mean return obtained when there are negative returns does not lend itself to a straightforward interpretation because $r < 0 \implies \mu_r \notin \mathbb{R} $. Is there any financial interpretation to the imaginary component obtained when computing the geometric mean of a return time series (geometric series) including negative returns?
If you use the logarithmic values, then you will get the log of the geometric mean by calculating $$\log(\mu_r)=\frac{1}{T} \cdot \sum_{t=1}^T r_t$$ The formula you have posted is only valid for returns that have not been log-transformed.
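A small numpy illustration with made-up prices: exponentiating the arithmetic mean of the log returns gives exactly the geometric mean of the gross returns, and nothing imaginary appears even when individual returns are negative:

```python
import numpy as np

prices = np.array([100.0, 95.0, 103.0, 98.0, 110.0])
log_returns = np.log(prices[1:] / prices[:-1])   # some of these are negative

mean_log = log_returns.mean()                    # arithmetic mean of log returns
geo_gross = np.prod(prices[1:] / prices[:-1]) ** (1 / len(log_returns))

print(np.exp(mean_log), geo_gross)   # identical: geometric mean of gross returns
```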
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show this is not a manifold with boundary Consider a curve $\alpha: \mathbb R \to \mathbb R^2$ defined by $t \mapsto (e^t \cos(t), e^t \sin(t))$. Show the closure of $\alpha(\mathbb R )$ is not a manifold with boundary. Denote $\alpha(\mathbb R )$ by M. The closure of M is the spiral plus the limit point, the origin. I want to show there is no boundary coordinate chart for the origin. Intuitively, for every open neighbourhood $B$ of $(0,0)$ and every given direction $(a,b)$, there is a point $(x,y)$ in $\overline M \cap B$ s.t. $(x,y)= \lambda (a,b)$. Then $\overline M \cap B$ is not the graph of a smooth function $g$, because if $g'(0)=b/a$, then there are points in $\overline M \cap B$ that are not in that direction. But how can I write this rigorously?
Suppose $M$ is a manifold with boundary. Then there is an open set $U$ in $\mathbb{R}^2$ and a diffeomorphism $F:U\to\mathbb{R}^2$ such that $F(M\cap U)$ is either * *the line $\{(x,0):x\in\mathbb{R}\}$, or *the half-line $\{(x,0):x\ge 0\}$ Consider the $y$ component of $F$, call it $g$. It vanishes on $M$. Consequently, $g_x$ vanishes at every point where the tangent to $M$ is horizontal. As such points accumulate toward the origin, continuity implies $g_x(0,0)=0$. Similarly $g_y(0,0)=0$. But then $DF(0,0)$ is degenerate, contradicting it being a diffeomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is my error in this matrix / least squares derivation? I'm doing a simple problem in linear algebra. It is clear that I have done something wrong, but I honestly can't see what it is. Let $y = Ax$, $y_{ls} = Ax_{ls}$ where A is skinny and full rank, and $x_{ls} = (A^T A)^{-1}A^Ty$ is the standard least squares approximation. Now, I have tried to compute $y^Ty_{ls}$. $y^Ty_{ls} = y^TAx_{ls} = x^TA^TA(A^TA)^{-1}A^Ty$ The $A^TA(A^TA)^{-1}$ seems like it should cancel to the identity, leaving $x^TA^Ty = y^Ty$. Clearly this should not be true. What have I done wrong here?
The issue is this: in the least-squares setting (writing $b$ for your $y$) the problem is overdetermined, and in general there is no solution vector $x$ such that $$ \mathbf{A}x - b = 0. $$ Instead of asking for an exact solution, we relax requirements and ask for the best solution. Picking the $2-$norm, the best solution is $$ x_{LS} = \left\{ x\in\mathbb{C}^{n} \colon \lVert \mathbf{A} x - b \rVert_{2}^{2} \text{ is minimized} \right\}. $$ Summary: $$ \mathbf{A} x_{LS} \ne b, $$ instead $\mathbf{A}x_{LS}$ is as close as possible to $b$ as measured in the $2-$norm. Note that if $b$ happened to lie in the column space of $\mathbf{A}$ (which is exactly what your assumption $y = Ax$ forces), then $x_{LS}$ recovers $x$ and your identity would indeed hold; least squares is needed precisely when it does not.
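A small numpy example of the inconsistent case (the vector $y$ below is deliberately chosen outside the column space of $A$):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # skinny, full column rank
y = np.array([1.0, 2.0, 0.0])                       # not in the column space of A

x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
y_ls = A @ x_ls

print(y @ y_ls, y @ y)   # 2.0 vs 5.0, so y^T y_ls != y^T y in general
```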
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving or disproving $c-d|p(c)-p(d)$ where $p$ is a polynomial. I have $p(X)=\sum_{i=0}^{n}{a_iX^i}$, where $a_i\in\Bbb{Z}$. Let $c,d\in\Bbb{Z}$. Prove or disprove: $c-d|p(c)-p(d)$. I did some algebra, but I can't think of a way to divide the higher-power terms by $c-d$. On the other hand, I can't find a counterexample, and it does feel like a true statement. I would really appreciate your help here.
It is true because $c-d \mid c^k-d^k$ for all $k \in \mathbb {N}$: indeed $c^k-d^k=(c-d)(c^{k-1}+c^{k-2}d+\cdots+d^{k-1})$, and hence $p(c)-p(d)=\sum_{i=0}^{n}a_i(c^i-d^i)$ is a sum of multiples of $c-d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Oriented Hypersurface admits unique unit normal vector field This question is the converse of this question and is taken from Lee's Smooth Manifolds, problem 15.7. Namely: Suppose $M$ is an oriented Riemannian manifold and $S\subset M$ is an oriented smooth hypersurface. Show that there is a unique smooth unit normal vector field along $S$ that determines the given orientation of $S$. So we can choose an orientation on $S$ via orthonormal coordinate charts and complete this to an orientation on $M$. However, this is in no way uniform, so we can't make it a vector field. Not sure where to go from here.
I know this is an old post, but here is a detailed solution, which I wrote partly as an exercise for myself to go through the details. First we define a rough unit vector field $N$ that determines the orientation on $S$. Let $p\in S$, and note that we can write $T_p M=T_pS\oplus (T_pS)^\perp$. Since $S$ is a hypersurface, we know that $(T_pS)^\perp$ is $1$-dimensional, and since $\Vert N_p\Vert=1$, there are two candidates for $N_p$. Consider an orientation form $\omega$ of $M$. For $N$ to determine the orientation, we require that, given an oriented basis $(E_1\vert_p,\dots,E_{n-1}\vert_p)$ of $T_pS$, we have $\omega(N_p,E_1\vert_p,\dots,E_{n-1}\vert_p)>0$. Clearly there is only one choice for $N_p$. This choice is well-defined, for given any other oriented basis $(\tilde E_i\vert_p)$, we know the transition matrix between $(E_i\vert_p)$ and $(\tilde E_i\vert_p)$ has positive determinant, and therefore the choice of $N_p$ would be the same for $(\tilde E_i\vert_p)$. We now show smoothness of $N$. It suffices to show this locally. Consider an oriented orthonormal frame $(E_1,\dots,E_{n-1})$ on a neighbourhood $U\ni p$ of $S$. We can think of these as maps $$ U\to TS\vert_U\to TM\vert_U. $$ Consider $N_p\in(T_pS)^\perp$. Note that $$ N_p=N_p-\sum_{i=1}^{n-1}\langle N_p,E_i\vert_p\rangle E_i\vert_p. $$ We can extend $N_p$ locally to a vector field $X$ and define $\tilde N$ by $$ \tilde N=X-\sum_{i=1}^{n-1}\langle X,E_i\rangle E_i. $$ By the observation above $N_p=\tilde N_p$, so $\omega(\tilde N_p,E_1\vert_p,\dots,E_{n-1}\vert_p)>0$, and therefore by continuity of $\omega$ we also have $\omega(\tilde N,E_1,\dots,E_{n-1})>0$ locally. It follows that $N=\tilde N/\Vert\tilde N\Vert$ locally (both are positively oriented unit normals), and therefore $N$ is smooth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Where do I best learn informal set theory? I want to learn math for machine learning, and I want to start with informal set theory. I was reading Naive Set Theory (1960) by Halmos, and it didn't seem to contain modern set notation. If anyone knows good material for learning informal set theory, please leave a comment. That being said, I do not mind some rigor as long as it helps me with statistics, calculus, and other math fields used in machine learning.
You can see the book "Book of Proof" by Richard Hammack; it has many diagrams and pictures. The chapter about cardinals is very educational. P.S.: Machine learning is more about Linear Algebra and Probability Theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove that it is possible to make rhombuses with any number of interior points? I was given some square dot paper, which can be found at this link: http://lrt.ednet.ns.ca/PD/BLM/pdf_files/dot_paper/sq_dot_1cm.pdf and was told to draw a few rhombuses with the vertices on the dots but no other dots on the edges. Here is an example: This is not a rhombus; it just shows how the shape must be laid out. A = 6 The number of interior dots would be represented by A. I was then told that it is possible to draw rhombuses with any number of interior points. All I had to do was prove this. I started by trying to draw these shapes and came up with some of the below: A = 2 A = 4 A = 9 However, I am still unsure of how you can prove something like this, since there may be numerous ways of drawing these rhombuses. I would appreciate support. Thank you :)
The square dot paper you describe can be thought of as something called the integer lattice, that is, the set of points $(m,n)$ in the plane where $m,n$ are integers. In order to make this identification, pick an arbitrary point on the square dot paper and call it $(0,0)$. If $m,n$ are positive integers, then $(m,n)$ is the point which is $m$ points to the right and $n$ points up from $(0,0)$. If $m$ is negative, move to the left instead of the right. If $n$ is negative move down instead of up. To rephrase what we wish to prove with this terminology, we could say that for any number of points $s$, there is a rhombus $R$ with vertices on the integer lattice whose boundary (except for the vertices) contains no points of the integer lattice, and $R$ contains $s$ points of the integer lattice in its interior. Your first example generalizes naturally in the following way. Let $n$ be a positive integer and consider the rhombus $R$ whose vertices consist of the points $(1,0)$, $(-1,0)$, $(0,n)$ and $(0,-n)$. The rhombus $R$ contains no points of the integer lattice on its boundary except its vertices since its sides have $x$-values between $0$ and $1$. Moreover, $R$ contains the points $(0,k)$ for $-n < k < n$ within its interior, of which there are $2n-1$. This proves that you can find a rhombus containing a desired odd number of points of the integer lattice in its interior. If you are having trouble seeing this, try drawing a rhombus like this on your dot paper. To find a rhombus which contains a desired even number of points of the integer lattice in its interior, try using a diagonally placed rhombus like yours in the first example (note: that one has 2 points in the interior). Have a shot at working out the rest of this argument for yourself; it is similar to the one I have above.
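If you'd like to check the odd-count construction computationally, the edge lines give the interior description $n\lvert x\rvert+\lvert y\rvert<n$, and a brute-force count in Python reproduces $2n-1$:

```python
def interior_points(n):
    # rhombus with vertices (1,0), (-1,0), (0,n), (0,-n): the edges lie on
    # the lines n*|x| + |y| = n, so the open interior is n*|x| + |y| < n
    return sum(1 for x in range(-1, 2)
                 for y in range(-n, n + 1)
                 if n * abs(x) + abs(y) < n)

print([interior_points(n) for n in range(1, 6)])   # [1, 3, 5, 7, 9]
```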
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 5, "answer_id": 1 }
My dilemma about $0^0$ We know that $0^0$ is indeterminate. But if we do this: $$(1+x)^n=(0+(1+x))^n=C(n,0)\cdot ((0)^0)((1+x)^n) + \cdots$$ we get $$(1+x)^n=(0^0)\cdot(1+x)^n$$ So, $0^0$ must be equal to $1$. What is wrong with this? Or am I mistaken in thinking that $0^0$ is indeterminate? Other threads are somewhat similar, but not exactly what I am asking.
The binomial theorem states that: $$(a+b)^n=\sum_k\binom nka^kb^{n-k}$$ (assuming I made no typo). What you noticed is basically that, when $a=0$, this only works if $0^0=1$. More specifically: When $k=0$, you're supposed to evaluate: $$a^0b^n$$ when $a=0$. Now, while $0^0$ is an indeterminate form, it makes sense to assume that it's $1$ in this case, because $\displaystyle\lim_{a\to0}a^0=1$. In other words, while $0^0$ is indeterminate in general, in this case, it makes the most sense to take it as $1$. At least, that's my understanding of it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Factorising ideals in the ring of integers of a quadratic field In an undergraduate algebraic number theory course, I was given the question "If $K = \mathbb Q(\sqrt{-33})$ Factorise the ideal $(1+\sqrt{-33})\subset \mathcal O_K$ into a product of prime ideals." I know that the norm is multiplicative and that the norm of $(1+\sqrt{-33})$ is $34=2\cdot 17$. Also all ideals $\mathfrak p$ of prime norm are prime ideals as $\left|\mathcal O_K/\mathfrak p \right|=p \implies \mathcal O_K/\mathfrak p \cong \mathbb Z/p\mathbb Z$ - a field. So $\mathfrak p$ is a maximal ideal and hence prime. So I'm looking for prime ideals of norm $2$ and $17$. I also want their product to contain $(1+\sqrt{-33})$. I was unsure of how to proceed from here, so I looked at the solutions. My lecturer writes "the obvious candidates are $(2,1+\sqrt{-33})$ and $(17, 1+\sqrt{-33})$." Why are these obvious? I managed to check by multiplying each of these ideals by their complex conjugate that they indeed have norm $2$ and $17$ respectively, but how can you tell immediately that these have the required norm. On the other hand, how do you know for sure that $1+\sqrt{-33}\in (2,1+\sqrt{-33})(17, 1+\sqrt{-33})$ before doing calculations? I know that I can say $1+\sqrt{-33}=17\cdot(1+\sqrt{-33})-8\cdot 2 \cdot(1+\sqrt{-33})\in (2,1+\sqrt{-33})(17, 1+\sqrt{-33}),$ but to me that counts as calculation. He writes "$(1+\sqrt{-33})=(2,1+\sqrt{-33})(17, 1+\sqrt{-33})$ must work, but this can be verified using direct computation too."
For your second question: in a Dedekind domain $(I+J)(I \cap J) = IJ$, so $$ (2, 1 + \sqrt{-33})(17, 1 + \sqrt{-33}) = (2, 1 + \sqrt{-33}) \cap (17, 1 + \sqrt{-33}) \ni 1 + \sqrt{-33} $$ because $(2, 1 + \sqrt{-33})$ and $(17, 1 + \sqrt{-33})$ are different maximal ideals. As for the other one, I have been taught to make very light use, if at all, of the word "obvious" in proofs. Anyway, first note that you need two ideals which lie over $2$ and $17$. Now, if you know that those ideals are prime, then they are indeed good candidates, because all the generators of the product are multiples of $1 + \sqrt{-33}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1244946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Evaluate the integral $\int \sin(x)\cos(3x^2)dx$ I am looking for a solution to the following integral problem. $$\int \sin(x)\cos(3x^2)dx$$ I passed over these integration techniques a long time ago, and I cannot see how to approach a solution.
$$\begin{align}I=\dfrac12~\sqrt{\dfrac\pi6}~\bigg\{\cos\dfrac1{12}\bigg[S\bigg(\dfrac{6x+1}{\sqrt{6\pi}}\bigg)-S\bigg(\dfrac{6x-1}{\sqrt{6\pi}}\bigg)\bigg]+\\\\+\sin\dfrac1{12}\bigg[C\bigg(\dfrac{6x-1}{\sqrt{6\pi}}\bigg)-C\bigg(\dfrac{6x+1}{\sqrt{6\pi}}\bigg)\bigg]\bigg\}\end{align}$$ where S and C are the two Fresnel integrals.
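The closed form can be checked numerically with scipy, whose scipy.special.fresnel(z) returns the pair $(S(z), C(z))$; a centered difference quotient of the antiderivative should reproduce the integrand:

```python
import numpy as np
from scipy.special import fresnel

def antiderivative(x):
    a = 0.5 * np.sqrt(np.pi / 6)
    Sp, Cp = fresnel((6 * x + 1) / np.sqrt(6 * np.pi))
    Sm, Cm = fresnel((6 * x - 1) / np.sqrt(6 * np.pi))
    return a * (np.cos(1 / 12) * (Sp - Sm) + np.sin(1 / 12) * (Cm - Cp))

x = np.linspace(0.0, 2.0, 9)
h = 1e-6
numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
print(np.max(np.abs(numeric - np.sin(x) * np.cos(3 * x**2))))  # tiny, ~1e-9
```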
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Injection of the mapping cone of $z^2$ We define the mapping cone of $f:S^1\to S^1=:Y$, $f (z)=z^2$ as the quotient space of $S^1\times [0,1]\sqcup Y$ where $(z,0)$ and $(z',0)$ are identified and where $(z,1)$ and $f(z)$ are identified for all $z,z'\in S^1$. We call this space $C_f$. Probably my question is very easy: What is the injection $Y\to C_f$, so that we can consider $Y\subseteq C_f$? Further question: What does the injection $S^1\to C_f$ look like? An explicit formula would be helpful. Thanks!
The question is not that trivial. Though it's pretty obvious that the composition $$Y\hookrightarrow S^1\times[0,1]\sqcup Y\xrightarrow q C_f$$ is injective, you still need to show that it's a homeomorphism onto its image, i.e. an embedding. To this end, let $C\subseteq Y$ be closed. Then $\bar C=f^{-1}(C)\times\{1\}\sqcup C$ is closed in $S^1\times[0,1]\sqcup Y$, and $q(\bar C)$ is closed in $C_f$ since $\bar C$ is its preimage. But $q(\bar C)=C$, so $C$ is closed in $C_f$, and this shows that $q:Y\to C_f$ is even a closed embedding.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given a system of differential equations, how can one tell if $\textbf{x}_c = (0,0)^T$ is a unique critical point? I have: $$\frac{d\textbf{x}}{dt}=\begin{bmatrix} -1 & 2 \\ -2 & -1 \\ \end{bmatrix}\textbf{x}(t)$$ with $\textbf{x}(0)=(1,-1)^T$. I am asked whether the critical point $\textbf{x}_c=(0,0)^T$ is unique or not. I don't know how to go about answering this one. Help please! :-) Thanks!
The critical point(s) occur where $\dfrac{d\mathbf x}{dt} = 0; \tag{1}$ since, as a vector field, $\dfrac{d\textbf{x}}{dt}=\begin{bmatrix} -1 & 2 \\ -2 & -1 \\ \end{bmatrix}\textbf{x}, \tag{2}$ the critical points occur wherever $\begin{bmatrix} -1 & 2 \\ -2 & -1 \\ \end{bmatrix}\textbf{x}_c = 0 \tag{3}$ has a solution. We see that $\det(\begin{bmatrix} -1 & 2 \\ -2 & -1 \\ \end{bmatrix}) = 5 \ne 0; \tag{4}$ the coefficient matrix is thus non-singular; the only solution to (3) is $\mathbf x_c = \begin{pmatrix} 0 \\ 0 \end{pmatrix}; \tag{5}$ thus this critical point is indeed unique.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show For any language L two L-structures M and N are elementarily equivalent iff they are elementarily equivalent for every finite sublanguage. Setting For any language $\mathcal L$, two $\mathcal L$-structures $\mathcal M$ and $\mathcal N$ are elementarily equivalent iff they are elementarily equivalent for every finite sublanguage. Attempt ($\Rightarrow$) Given $\mathcal M\equiv \mathcal N$, so $\mathcal M\models \phi \iff \mathcal N \models \phi$ for every $\mathcal L$-sentence $\phi$. Now suppose there is a finite sublanguage of $\mathcal L$ where $\mathcal M\not\equiv \mathcal N$; then it follows that we can find some $\mathcal L$-sentence $\phi$ where $\mathcal M\models \phi$ but $\mathcal N \not\models \phi$, contradicting the assumption that $\mathcal M\models \phi \iff \mathcal N \models \phi$ for every $\mathcal L$-sentence $\phi$. ($\Leftarrow$) Now suppose $\mathcal M\equiv \mathcal N$ in every finite sublanguage; then it follows by compactness that $\mathcal M\equiv \mathcal N$. Problem In the $\Rightarrow$ direction, I am not confident that I can use compactness, since each finite sublanguage may not necessarily generate a finite set of sentences, which the compactness theorem requires.
Hint: each formula uses only finitely many symbols.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many words with letters from the word ABRACADABRA if they must end in a consonant and $d$ must be after $r$. How many words with letters from the word ABRACADABRA if they must end in a consonant and $d$ must be after $r$. What I did: I have $A:5$ $B:2$ $R:2$ $C:1$ $D:1$ If the words must end in a consonant and d must be after r, I have only three cases: 1)$D$ at the end. 2)$C$ at the end. 3)$B$ at the end. Case 1: $$ \_\ \_\ \_\ \_\ \_\ \_\ \_\ \_\ \_\ \_\ \text{D} $$ So I have to choose $10$ letters for the remaining slots with $A$ repeated $5$ times, $2$ $B$s and 2 $R$s: $$ \frac{10!}{5!2!2!} $$ Case 2: I set $D=R$ and same thought process as before, giving me: $$ \frac{10!}{5!3!2!} $$ Case 3: Same as case 2. $$Total= \frac{10!}{5!2!2!}+\frac{10!}{5!3!} $$ Is this correct?
Since D must appear after both R's, the last letter can be a B, C, or D. Case 1: The last letter is D. We have ten places to fill with five A's, two B's, two R's, and one C. We can fill five of the ten places with A's in $\binom{10}{5}$ ways. We can fill two of the remaining five places with B's in $\binom{5}{2}$ ways. We can fill two of the remaining three places with R's in $\binom{3}{2}$ ways. Finally, we can fill the last place with a C in $\binom{1}{1}$ way, so there are $$\binom{10}{5}\binom{5}{2}\binom{3}{2}\binom{1}{1} = \frac{10!}{5!5!} \cdot \frac{5!}{3!2!} \cdot \frac{3!}{2!1!} \cdot \frac{1!}{1!0!} = \frac{10!}{5!2!2!1!}$$ permutations that end in a D, as you found. Case 2: The last letter is C. Then we have ten places to fill with five A's, two B's, two R's, and one D. If, at first, we ignore the requirement that D must appear after the two R's, using the same procedure as above yields $$\binom{10}{5}\binom{5}{2}\binom{3}{2}\binom{1}{1}$$ However, in only one third of these permutations does D appear after both R's. Thus, the number of permutations in which the last letter is a C and D appears after both R's is $$\frac{1}{3}\binom{10}{5}\binom{5}{2}\binom{3}{2}\binom{1}{1} = \frac{1}{3} \cdot \frac{10!}{5!2!2!1!}$$ Case 3: The last letter is a B. Then we have ten places to fill with five A's, two R's, one B, one C, and one D. If, at first, we ignore the requirement that D must appear after both R's, then using the same procedure as above yields $$\binom{10}{5}\binom{5}{2}\binom{3}{1}\binom{2}{1}\binom{1}{1}$$ However, in only one third of these permutations does the letter D appear after both R's. Thus, the number of permutations in which the last letter is a B and D appears after both R's is $$\frac{1}{3}\binom{10}{5}\binom{5}{2}\binom{3}{1}\binom{2}{1}\binom{1}{1} = \frac{1}{3} \cdot \frac{10!}{5!2!1!1!1!}$$ To find the number of words that can be formed from the word ABRACADABRA in which the last letter is a consonant and D appears after both R's, add the totals for the three disjoint cases.
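The numbers are small enough to brute-force over the $\frac{11!}{5!\,2!\,2!}=83160$ distinct arrangements; a sketch using sympy's multiset_permutations helper (sympy assumed available):

```python
from sympy.utilities.iterables import multiset_permutations

count = 0
for p in multiset_permutations(list("ABRACADABRA")):
    last_R = max(i for i, c in enumerate(p) if c == 'R')
    # ends in a consonant, and D comes after both R's
    if p[-1] != 'A' and p.index('D') > last_R:
        count += 1
print(count)   # 7560 + 2520 + 5040 = 15120
```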
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does $\mathrm{Rank}(A{A^*} - {A^*}A) \ne 1$? Given $A \in M_n$, why is it that $\mathrm{Rank}(A{A^*} - {A^*}A)$ can never equal $1$?
Hint. Observe that $$ \mathrm{Trace}(AA^*)=\mathrm{Trace}(A^*A), $$ and hence $$ \mathrm{Trace}(AA^*-A^*A)=0. $$ Next show that, if $B$ is diagonalizable and $\mathrm{Rank}(B)=1$, then $\mathrm{Trace}(B)\ne 0$. Finally, observe that $AA^*-A^*A$ is hermitian and hence diagonalizable.
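Not a proof, but a quick numerical sanity check of the hint is easy to run (a sketch assuming Python with numpy): the commutator $AA^*-A^*A$ always has zero trace, and for random samples its rank never comes out as $1$.

```python
# Sanity check: trace(AA* - A*A) = 0 and rank(AA* - A*A) != 1 on random samples.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 6))
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = A @ A.conj().T - A.conj().T @ A        # Hermitian, hence diagonalizable
    assert abs(np.trace(B)) < 1e-10            # trace is always zero
    assert np.linalg.matrix_rank(B, tol=1e-8) != 1
print("no rank-1 example found")
```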
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Higher Order ODE with Differential Operators I am trying to solve a higher-order ODE problem. Let $p(s) = s(s^2-s+1)(s-1)$ and $D = d/dt$. Solve the initial value problem $$p(D)x = t + e^t,$$ $x'''(2) = 1$, $x''(2) = 1$, $x'(2) = 1$, and $x(2) = 0$. Attempt: I believe I need to solve the homogeneous equation by finding all the roots of $p(s)$. Then, for the particular solution, I don't really know what to do. Variation of parameters? (The method of undetermined coefficients, maybe? I dislike that method and would like to avoid it.) Edit: Also, what are $p(s)$ and $p(D)x$? I am confused about those two things as well. Jessica
You want $$ p(D)f=D(D^{2}-D+1)(D-1)f = t+e^{t}. $$ The operator $D^{2}$ annihilates $t$ and $(D-1)$ annihilates $e^{t}$. Therefore, $$ D^{3}(D^{2}-D+1)(D-1)^{2}f = 0. $$ Because $D^{2}-D+1=(D-1/2+i\sqrt{3}/2)(D-1/2-i\sqrt{3}/2)$, this gives a solution of the form $$ f = A + Bt + Ct^{2}+Ee^{t/2}\cos(\sqrt{3}t/2)+Fe^{t/2}\sin(\sqrt{3}t/2)+Ge^{t}+Hte^{t}. $$ When you plug back into the original equation, $D(D^{2}-D+1)(D-1)$ annihilates the terms with $A$, $E$, $F$ and $G$. The remaining terms are $$ g = Bt+Ct^{2}+Hte^{t}. $$ Then, \begin{align} p(D)f = p(D)g & = (D^{2}-D+1)(D-1)[D(Bt+Ct^{2})] \\ & +D(D^{2}-D+1)[(D-1)(Hte^{t})] \\ & = (D^{2}-D+1)(D-1)(B+2Ct) \\ & + D(D^{2}-D+1)He^{t} \end{align} Single powers of $D$ annihilate $B$ and higher powers annihilate $2Ct$. Therefore, $$ (D^{2}-D+1)(D-1)(B+2Ct) = (2D-1)(B+2Ct)=(4C-B)-2Ct. $$ And $De^{t}=e^{t}$, so $$ D(D^{2}-D+1)He^{t} = (1)(1-1+1)He^{t}=He^{t}. $$ Finally, $$ p(D)f = (-B+4C)-2Ct+He^{t} = t+e^{t}\\ \implies -B+4C=0,\;\; C=-1/2,\;\; H =1 \\ \implies B = 4C=-2. $$ The general solution is then $$ f = A-2t-t^{2}/2+Ee^{t/2}\cos(\sqrt{3}t/2)+Fe^{t/2}\sin(\sqrt{3}t/2)+Ge^{t}+te^{t}. $$ I'll leave it to you to solve $f'''(2)=f''(2)=f'(2)=1$ and $f(2)=0$ for the remaining constants $A$, $E$, $F$ and $G$.
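If you want to double-check the general solution symbolically, here is a minimal sketch (assuming Python with sympy): it applies $p(D)=D(D^2-D+1)(D-1)$ to the solution above, and the free constants $A$, $E$, $F$, $G$ should drop out, leaving $t+e^t$.

```python
# Symbolic check: p(D) f = t + exp(t) for the general solution found above.
import sympy as sp

t, A, E, F, G = sp.symbols('t A E F G')
f = (A - 2*t - t**2/2
     + E*sp.exp(t/2)*sp.cos(sp.sqrt(3)*t/2)
     + F*sp.exp(t/2)*sp.sin(sp.sqrt(3)*t/2)
     + G*sp.exp(t)
     + t*sp.exp(t))

D = lambda g: sp.diff(g, t)        # the operator d/dt
g = D(f) - f                       # (D - 1) f
g = D(D(g)) - D(g) + g             # (D^2 - D + 1)(D - 1) f
g = D(g)                           # p(D) f
print(sp.simplify(g - (t + sp.exp(t))))   # expected: 0
```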
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Cauchy-Riemann equations in polar form. Show that in polar coordinates, the Cauchy-Riemann equations take the form $\dfrac{\partial u}{\partial r} = \dfrac{1}r \dfrac{\partial v}{\partial \theta}$ and $\dfrac{1}r \dfrac{\partial u}{\partial \theta} = -\dfrac{\partial v}{\partial r}$. Use these equations to show that the logarithm function defined by $\log z = \log r + i\theta$ where $z=re^{i\theta}$ with $-\pi<\theta<\pi$ is holomorphic in the region $r > 0$ and $-\pi<\theta<\pi$. What I have so far: Cauchy-Riemann equations: Let $f(z) = u(x, y) + iv(x, y)$ be a function on an open domain with continuous partial derivatives in the underlying real variables. Then $f$ is differentiable at $z = x+iy$ if and only if $\frac{\partial u}{\partial x}(x, y) = \frac{\partial v}{\partial y}(x, y)$ and $\frac{\partial u}{\partial y}(x, y) = -\frac{\partial v}{\partial x}(x, y)$. So we have $f'(z)= \frac{\partial u}{\partial x}(z) + i \frac{\partial v}{\partial x}(z)$. Let $f(z) = f(re^{i\theta}) = u(r,\theta) + iv(r,\theta)$ be a function on an open domain that does not contain zero, with continuous partial derivatives in the underlying real variables. Then $f$ is differentiable at $z = re^{i\theta}$ if and only if $r \frac{\partial u}{\partial r}=\frac{\partial v}{\partial \theta}$ and $\frac{\partial u}{\partial \theta} = -r \frac{\partial v}{\partial r}$. Sorry if this is not very good. I just decided to start learning complex analysis today...
Proof of Polar C.R Let $f=u+iv$ be analytic; then the usual Cauchy-Riemann equations are satisfied: \begin{equation} \frac{\partial u}{\partial x} =\frac{\partial v}{\partial y} \ \ \ \ \ \text{and} \ \ \ \ \frac{\partial u}{\partial y} =-\frac{\partial v}{\partial x} \ \ \ \ \ \ \ (C.R.E) \end{equation} Since $z=x+iy=r(\cos\theta + i \sin\theta)$, we have $x(r, \theta)=r\cos\theta$ and $y(r,\theta)=r\sin\theta$. By the chain rule: \begin{align*} \frac{\partial u}{\partial r} & = \frac{\partial u}{\partial x} \cos\theta+ \frac{\partial u}{\partial y} \sin\theta \\ & \overset{(C.R.E)}{=} \frac{1}{r} \left( \frac{\partial v}{\partial y} r\cos\theta - \frac{\partial v}{\partial x} r\sin\theta\right) =\frac{1}{r} \left( \frac{\partial v}{\partial \theta}\right) \end{align*} and again, by the chain rule: \begin{align*} \frac{\partial v}{\partial r} & = \frac{\partial v}{\partial x} \cos\theta+ \frac{\partial v}{\partial y} \sin\theta \\ & \overset{(C.R.E)}{=} \frac{-1}{r} \left( \frac{\partial u}{\partial y} r\cos\theta - \frac{\partial u}{\partial x} r\sin\theta\right) =\frac{-1}{r} \left( \frac{\partial u}{\partial \theta}\right) \end{align*} So indeed $$ \left( \frac{\partial u}{\partial r}\right) = \frac{1}{r} \left( \frac{\partial v}{\partial \theta}\right) \ \ \ \ \ \text{and} \ \ \ \ \left(\frac{\partial v}{\partial r} \right) = \frac{-1}{r} \left( \frac{\partial u}{\partial \theta}\right) \ \ \ \ \ \ \ \ \ \blacksquare $$ Logarithm Example $\log(z)=\ln(r)+i \theta$ with $z=re^{i\theta}$, $r>0$ and $-\pi<\theta<\pi$. Then $$ u(r, \theta)=\ln(r) \ \ \ \text{ and } \ \ \ v(r, \theta) =\theta $$ and $$ \left( \frac{\partial u}{\partial r}\right) =\frac{1}{r}= \frac{1}{r} \cdot 1 = \frac{1}{r} \cdot \left( \frac{\partial v}{\partial \theta}\right) \ \ \ \ \ \text{and } \ \ \ \ \left(\frac{\partial v}{\partial r} \right) = 0 = \frac{-1}{r}\cdot 0 = \frac{-1}{r} \left( \frac{\partial u}{\partial \theta}\right) $$ So indeed, $\log(z)$ is analytic.
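A quick cross-check of the example with a computer algebra system (a sketch assuming Python with sympy); both polar Cauchy-Riemann equations for $u=\ln r$, $v=\theta$ should simplify to $0$.

```python
# Verify the polar Cauchy-Riemann equations for log z = ln(r) + i*theta.
import sympy as sp

r = sp.symbols('r', positive=True)
theta = sp.symbols('theta', real=True)
u, v = sp.log(r), theta
print(sp.simplify(sp.diff(u, r) - sp.diff(v, theta)/r))   # expected: 0
print(sp.simplify(sp.diff(v, r) + sp.diff(u, theta)/r))   # expected: 0
```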
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 2, "answer_id": 1 }
Path of diffusion process with discontinuous drift Let $(B_t)$ be a standard Brownian motion on some probability space and let $X_t$ be the process defined by the SDE $dX_t = \mu_t dt + dB_t$, where $\mu_t$ is adapted, deterministic, and only takes the values $0$ or $1$. (For example, $\mu_t = 0$ on $(2n, 2n+1]$ for all $n \geq 0$ and $1$ everywhere else.) Does it follow that $X_t$ has continuous sample paths? (Under the same probability measure that makes $(B_t)$ a standard BM?) I think it does -- use the Girsanov theorem to get a new probability measure under which $X_t$ is a standard BM. Because the new probability measure is mutually absolutely continuous with the old one, it should follow that $X_t$ also has continuous sample paths almost surely. Am I right with this line of thought, or am I missing something?
Applying Girsanov's theorem is overkill. Note that, by definition, $$X_t = X_0 + \int_0^t \mu_s \, ds+ B_t, \qquad t \geq 0.$$ We know that $t \mapsto B_t$ is continuous (almost surely); moreover, it is well-known that mappings of the form $$t \mapsto I(t) := \int_0^t \mu_s \, ds$$ are continuous whenever the integral is well-defined. (Just consider e.g. $\mu(s) = 1_{[1,2]}(s)$; draw a picture to see that $t \mapsto I(t)$ is continuous: it doesn't have any jumps.) This means that $t \mapsto X_t$ is continuous almost surely, since it is the sum of two continuous functions.
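For intuition only (this adds nothing to the proof), here is a tiny Euler-scheme simulation with the drift from the question. The drift jumps between $0$ and $1$, but the increments of $X$ shrink with the step size, so the simulated path shows no jumps; the endpoint conventions for $\mu$ below are approximate.

```python
# Illustration (not a proof): X_t = int_0^t mu_s ds + B_t has no jumps
# even though mu jumps between 0 and 1.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 6.0
t = np.arange(0.0, T, dt)
mu = (np.floor(t) % 2 == 1).astype(float)   # ~0 on (2n, 2n+1], ~1 elsewhere
dB = np.sqrt(dt) * rng.standard_normal(t.size)
X = np.cumsum(mu * dt + dB)                 # Euler scheme for dX = mu dt + dB
print(np.max(np.abs(np.diff(X))))           # largest increment shrinks as dt -> 0
```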
{ "language": "en", "url": "https://math.stackexchange.com/questions/1245942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$a,b,c,p$ are rational numbers and $p$ is not a perfect cube Given that $a,b,c,p$ are rational numbers and $p$ is not a perfect cube, show that if $a+bp^{1\over 3}+cp^{2\over 3}=0$ then $a=b=c=0$. I concluded that $a^3+b^3p+c^3p^2=3abcp$, but how can I go on from there? Could you please help? Thanks
Clearly $p^{1/3}$ is one of the roots of the quadratic equation $cx^2+bx+a = 0$; assume for the moment that $c \neq 0$, so this really is a quadratic. Now, the sum of the roots is $-b/c$. Since the RHS is rational and one of the roots is irrational, it is natural to take the other root to be $-p^{1/3}$ (so that the sum becomes rational). The sum of the roots is then $0$. Hence, $b = 0$. Again, the product of the roots is $a/c$: $\implies -p^{2/3} = a/c$ $\implies a/c + p^{2/3} = 0$ $\implies a + cp^{2/3} = 0$. Now, $a$ is rational and $cp^{2/3}$ is irrational (due to $p^{2/3}$ being irrational), so both terms must be $0$: $a = 0$ and $cp^{2/3} = 0$. And $p \neq 0$, as $p$ is not a perfect cube (but $0$ is), so this gives $c = 0$, contradicting $c \neq 0$. So in fact $c = 0$, and then $a+bp^{1/3}=0$ with $p^{1/3}$ irrational forces $b = 0$ and $a = 0$ as well. Hence $a=b=c=0$.
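A small remark for readers who know a little field theory: the step "the other root must be $-p^{1/3}$" can be bypassed by a degree argument. Since $p$ is not a perfect cube, the cubic $x^3-p$ has no rational root, and a cubic with no rational root is irreducible over $\mathbb Q$; hence $$[\mathbb{Q}(p^{1/3}):\mathbb{Q}]=3,$$ so $p^{1/3}$ cannot be a root of a nonzero rational polynomial of degree at most $2$. In particular $cx^2+bx+a$ must be the zero polynomial, i.e. $a=b=c=0$.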
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
2010 unit circles $C$ is a circle with radius $r$. $C_1,C_2,\ldots, C_{2010}$ are unit circles along the circumference of $C$, each touching $C$ externally. Also, the pairs $C_1C_2; C_2C_3;\ldots; C_{2010}C_1$ touch. Find $r$. Options: 1) $\operatorname{cosec}(\pi/2010)$ 2) $\sec(\pi/2010)$ 3) $\operatorname{cosec}(\pi/2010)-1$ 4) $\sec(\pi/2010)-1$ I tried solving this but I am getting a weird angle. A picture might help. Thank you.
Made it before I noticed it's already done, but anyway here it is...
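Since the picture is not reproduced here, here is a short version of the computation it presumably illustrates. The centers of the unit circles $C_1,\dots,C_{2010}$ all lie at distance $r+1$ from the center of $C$ (external tangency with $C$), and adjacent unit circles touch, so adjacent centers are at distance $2$. The chord joining adjacent centers subtends an angle $2\pi/2010$ at the center, so $$2(r+1)\sin\frac{\pi}{2010}=2 \quad\Longrightarrow\quad r=\operatorname{cosec}\frac{\pi}{2010}-1,$$ which is option 3.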
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Prove an interesting function $f$ is semicontinuous. Let $(r_n)$ be an enumeration of all rational numbers and $$f(x)=\sum_{n,\,r_n<x}\frac{1}{2^n}.$$ Show that $f$ is lower semicontinuous on $\mathbb R$. Since $f$ is continuous at the irrationals and is an increasing function, we only need to check that the limit from the left at each rational satisfies the condition of lower semicontinuity. This is where I got stuck. Could someone kindly help? Thanks!
Let $$ f_K (x) :=\sum_{n\leq K,\ r_n< x } \frac{1}{2^n}. $$ Then $f_K\rightarrow f$ uniformly, since for any $\varepsilon>0$ we have $\sum_{n>M} \frac{1}{2^n } < \varepsilon$ for some $M$. Each $f_K$ is lower semicontinuous, so the uniform limit $f$ is lower semicontinuous. (1) Lower semicontinuity of $f_K$: define $$ g_n(x) =\left\{ \begin{array}{ll} \frac{1}{2^n}, & \hbox{$r_n<x$;} \\ 0, & \hbox{$x\leq r_n$.} \\ \end{array} \right. $$ Each $g_n$ is increasing and left-continuous, hence lower semicontinuous, and $f_K(x)=\sum_{i=1}^K g_i(x)$. So we claim that the sum of two lower semicontinuous (here, increasing and left-continuous) functions $g$ and $h$ is again lower semicontinuous: for increasing functions lower semicontinuity is equivalent to left-continuity, and if $x_n<x$, $x_n\rightarrow x$, then $$ (g+h)(x_n)=g(x_n)+ h(x_n)\rightarrow g(x)+ h(x) =(g+h)(x). $$ (2) A uniform limit of lower semicontinuous functions is lower semicontinuous: assume $x_k< x$, $x_k\rightarrow x$. Then $$ |f(x_k)-f(x)| \leq |f_K(x_k)-f(x_k) | +|f_K(x_k)-f_K(x) |+|f_K(x)-f(x) |. $$ Given $\varepsilon>0$, choose $K$ with $\sup_y|f_K(y)-f(y)|<\varepsilon/3$; this bounds the first and third terms, while the middle term tends to $0$ as $k\to\infty$ because $f_K$, a finite sum of left-continuous functions, is left-continuous. Hence $f(x_k)\to f(x)$, which (as $f$ is increasing) is exactly lower semicontinuity.
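To see the left-continuity numerically, here is a toy sketch (any enumeration of the rationals works; the one below, rationals in $(0,1)$ ordered by denominator, is an arbitrary choice for illustration): the partial sums approach $f_K(1/2)$ from the left, and the jump occurs only just to the right of $1/2$.

```python
# Toy illustration of f_K(x) = sum of 2^{-n} over the first K rationals r_n < x.
from fractions import Fraction

rationals = []                       # one arbitrary enumeration of (0,1) rationals
for q in range(2, 12):
    for p in range(1, q):
        if Fraction(p, q) not in rationals:
            rationals.append(Fraction(p, q))

def f_K(x, K):
    return sum(2.0 ** -(n + 1) for n, r in enumerate(rationals[:K]) if r < x)

for eps in (0.1, 0.01, 0.001):
    print(f_K(0.5 - eps, 40), f_K(0.5, 40), f_K(0.5 + eps, 40))
# Left values converge to f_K(1/2); the jump (of size 2^{-1} here, since 1/2 is
# the first rational in this enumeration) happens just to the right of 1/2.
```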
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Compute $f(2)$ if $\int_0^{x^2(1+x)} f(t) dt = x$ for all $x \geq 0$ This is exercise 22(d) of section 5.5 (p. 209) of Apostol's Calculus, vol. I. Compute $f(2)$ if $f$ is continuous and satisfies the given formula for all $x \geq 0$: $$\int_0^{x^2(1+x)} f(t) dt = x$$ I tried to differentiate both sides, which gave me $x^2(1+x) f(x^2(1+x)) = 1$. But then the answer would be $f(2) = \frac{1}{12}$, which I know is wrong. Any help?
Here is another way to do this, treating $0.1$ as a small increment. We will use the fact that $$(1+0.1)^2(1+1+0.1)=(1+2\times 0.1 + \cdots)(2 + 0.1) = 2+5\times0.1+\cdots. $$ Putting $x = 1$ and $x = 1+0.1$ in $\int_0^{x^2(1+x)} f(t)\, dt = x$, we get $$\int_0^2f(t)\, dt = 1, \qquad \int_0^{2+5\times0.1+\cdots}f(t)\,dt=1+0.1.$$ Subtracting one from the other, $$\int_2^{2+5\times0.1+\cdots}f(t)\,dt=0.1\to f(2)\times5\times0.1=0.1 \to f(2)=\frac15.$$
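For comparison, the chain-rule route attempted in the question also works once the upper limit is differentiated correctly: $$\frac{d}{dx}\int_0^{x^2(1+x)}f(t)\,dt=(2x+3x^2)\,f\big(x^2(1+x)\big)=1,$$ and at $x=1$ this reads $5f(2)=1$, so $f(2)=\tfrac15$. The factor $2x+3x^2$ (the derivative of the upper limit) is what was missing from the attempted $x^2(1+x)f(x^2(1+x))=1$.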
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Has the 3x3 magic square of all squares entries been solved? It is my understanding that it has not yet been determined if it is possible to construct a $3$x$3$ magic square where all the entries are squares of integers. Is this correct? Has any published work been done on this problem?
The existence or not of a non-trivial integer 3x3 magic square of squares is STILL an unsolved problem. The quoted reference to Kevin Brown's web pages only discusses an extremely special configuration of numbers, which does not exist. The page does NOT claim to prove non-existence for all possible magic squares. If you are interested in this topic, you should consult the web site http://www.multimagie.com/ which gives lots of details and references.
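For readers who want to experiment, here is a naive search sketch (Python, standard library only; it proves nothing and is expected to print no hits at these tiny bounds). It uses the classical parametrization of $3\times3$ magic squares: every such square has entries $c$, $c\pm a$, $c\pm b$, $c\pm(a+b)$, $c\pm(a-b)$ for some center $c$, so it suffices to test those nine values for squareness.

```python
# Naive search for a 3x3 magic square of nine distinct squares.
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

LIMIT = 2000                                 # raise to search further
squares = [k * k for k in range(1, LIMIT + 1)]

for c in squares:                            # center entry, itself a square
    # all a > 0 with both c - a and c + a square:
    amps = [c - s for s in squares if s < c and is_square(2 * c - s)]
    for i, a in enumerate(amps):
        for b in amps[i + 1:]:               # two distinct "amplitudes"
            rest = [c - a - b, c + a + b, c - a + b, c + a - b]
            if all(map(is_square, rest)):
                entries = [c, c - a, c + a, c - b, c + b] + rest
                if len(set(entries)) == 9:
                    print("magic square of squares:", sorted(entries))
print("search finished")
```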
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Vectors in a plane I have a question where I need to prove that three vectors all lie in one plane. I proved that the angle between each pair of the three vectors is $120^\circ$. Is that enough to prove that they lie in one plane, since together the angles make $360^\circ$? Basically, is there any other way to build three vectors with a $120^\circ$ angle between each pair, without all of them being in one plane? Thanks!
Assume WLOG that all the vectors are normalized. Then $$u\cdot v=v\cdot w=u\cdot w=-\frac12$$ Now, consider the equation $$au+bv+cu\times v=w$$ and dot-product with $u$: $$a-\frac b2=-\frac12$$ now with $v$: $$-\frac a2+b=-\frac12$$ and we get $a=b=-1$. Then $$-u-v+cu\times v=w$$ and dot-product with $w$: $$\frac12+\frac12+c[u,v,w]=1$$ so $c=0$ or $[u,v,w]=0$, and both facts tell us the same: $u$, $v$ and $w$ are coplanar.
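A small numerical companion to this argument (a sketch assuming Python with numpy): put $u$ and $v$ at $120^\circ$ in the $xy$-plane and solve the dot-product constraints for $w$; the out-of-plane component is forced to vanish.

```python
# With unit u, v at 120 degrees, any unit w with u.w = v.w = -1/2 lies in their plane.
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([-0.5, np.sqrt(3) / 2, 0.0])   # 120 degrees from u
wx = -0.5                                   # forced by u.w = -1/2
wy = (-0.5 - v[0] * wx) / v[1]              # forced by v.w = -1/2
wz2 = 1.0 - wx**2 - wy**2                   # what w_z^2 must be if |w| = 1
w = np.array([wx, wy, 0.0])
print(np.dot(u, w), np.dot(v, w), wz2)      # -0.5, -0.5, ~0: w_z must vanish
```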
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Uniqueness of prime ideals of $\mathbb F_p[x]/(x^2)$ What are the prime ideals of $\mathbb F_p[x]/(x^2)$? I have been told that the only one is $(x)$, but I would like a proof of this. I want to say that a prime ideal of $\mathbb F_p[x]/(x^2)$ corresponds to a prime ideal $P$ of $\mathbb F_p[x]$ containing $(x^2)$. And then $P$ contains $(x)$ since it is prime. But I don't know if prime ideals correspond to prime ideals under the correspondence theorem, and I still can't seem to prove that if they do, $P$ can't be some non-principal ideal properly larger than $(x)$. Some context: I'm considering why the prime ideals $\mathfrak p$ of $\mathcal O_K$, (with $K=\mathbb Q(\sqrt d)$ and $\textrm{Norm}(\mathfrak p)=p$, a ramified prime) are unique. My definition of a ramified prime is that $\mathcal O_K/(p) \cong \mathbb F_p[x]/(x^2)$ and I know nothing else about these primes.
Prime ideals do correspond under the correspondence theorem, so your argument suffices. To see this, let $P$ be any prime ideal of $R$ containing an ideal $I$; then $R/P \cong \frac{R/I}{P/I}$ by the third isomorphism theorem. Since $P$ is prime, $R/P$ is an integral domain, hence so is $\frac{R/I}{P/I}$. Thus, $P/I$ is a prime ideal in $R/I$. Conversely, start with a prime ideal $Q\subset R/I$, lift it to an ideal $Q'\subset R$ containing $I$, and apply the same argument to see that $Q'$ is a prime ideal in $R$. For uniqueness, you are correct in saying that if $(x^2)\subset P$, then $(x)\subset P$, as $P$ is prime. But $(x)$ is maximal, so this forces $(x) = P$ by definition of a maximal ideal. To see that it is maximal, note $F[x]/(x)\cong F$ is a field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Map between free sheaf modules not arising from a "matrix" My question is about this example in the Stacks project. For convenience and completeness, here is the relevant part. Let $X$ be countably many copies $L_1, L_2, L_3, \ldots$ of the real line all glued together at $0$; a fundamental system of neighbourhoods of $0$ being the collection $\{U_n\}_{n \in \mathbf{N}}$, with $U_n \cap L_i = (-1/n, 1/n)$. Let $\mathcal{O}_X$ be the sheaf of continuous real valued functions. Let $f : \mathbf{R} \to \mathbf{R}$ be a continuous function which is identically zero on $(-1, 1)$ and identically $1$ on $(-\infty, -2) \cup (2, \infty)$. Denote $f_n$ the continuous function on $X$ which is equal to $x \mapsto f(nx)$ on each $L_j = \mathbf{R}$. Let $1_{L_j}$ be the characteristic function of $L_j$. We consider the map $$ \bigoplus\nolimits_{j \in \mathbf{N}} \mathcal{O}_X \longrightarrow \bigoplus\nolimits_{j, i \in \mathbf{N}} \mathcal{O}_X, \quad e_j \longmapsto \sum\nolimits_{i \in \mathbf{N}} f_j 1_{L_i} e_{ij} $$ with obvious notation. This makes sense because this sum is locally finite, as $f_j$ is zero in a neighbourhood of $0$. For a fixed $j$, I see why the sum $\sum_{i\in \mathbb{N}} f_j 1_{L_i} e_{ij}$ is locally finite. Every point that is not the origin admits a neighbourhood $U$ that lies completely in $L_k$ for some $k$. Then the sum reduces to the term $f_k e_{kj}$. And $U_j$ provides a neighbourhood of the origin where the sum is finite; in fact the sum is the zero function on this neighbourhood. This does not use the fact that $f_j$ depends on $e_j$; it would also work if I used $f_3$ in every sum. What is the error in this reasoning? Or is the dependence really redundant?
I think I have found your misunderstanding: the sum is not finite on $U_n$. Observe that $e_j$ maps to an infinite collection of functions $(g_{i,j})$ on $X$, where $g_{i,j}$ takes the value $f_j$ on $L_i$ and is zero on all the other lines. Since $U_n$ is the union of the copies of $(-1/n,1/n)$ in every line $L_i$, the image of $e_m$ for any $m>2n$ will be an infinite collection of non-zero functions on $X$, each of which is identically $1$ on the subset $(-1/n,-2/m)\cup(2/m,1/n)$ of its line. It is exactly the fact that the $f_j$ "converge" to a function that is $0$ at the origin and $1$ everywhere else that shows that, no matter how small a neighbourhood of the origin you take, some $e_m$ will always map to an infinite sum of terms. I hope this helps. Also, where you have written "the sum reduces to the term $f_k e_{kj}$", I think you should instead have $f_j e_{kj}$: the factor coming from $e_j$ is always $f_j$; only the index on $e_{ij}$ is pinned down by the line $L_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum identity involving sin How can one prove that $$\sum_{k=1}^n(-1)^k\sin(2k\theta)=\cos(n\pi/2+\theta+n\theta)\sec\theta\sin(n\pi/2+n\theta)?$$ It looks difficult, as there is a sum on one side and a product of trigonometric functions on the other. The book Which Way Did the Bicycle Go? gives induction as a hint, but that also looks difficult.
$$\begin{align*} \sum_{k=1}^{n} (-1)^{k} \sin(2k\theta) &= \mathrm{Im}\Bigg[ \sum_{k=1}^{n} \big( -\exp(2i\theta) \big)^{k} \Bigg] \\[2mm] &= \mathrm{Im}\Bigg[ \big( -\exp(2i\theta) \big)\frac{1 - \big( - \exp(2i\theta) \big)^{n}}{1 + \exp(2i\theta)} \Bigg] \quad \text{if } 1 + \exp(2i\theta) \neq 0 \\[2mm] &= \mathrm{Im}\Bigg[ \big( -\exp(2i\theta) \big)\frac{1 - \exp\big( i(2n\theta + n\pi) \big)}{1+\exp(2i\theta)} \Bigg] \\[2mm] &= \mathrm{Im}\Bigg[ \big( -\exp(2i\theta) \big)\frac{\exp\big( i(n\theta + \frac{n\pi}{2}) \big)}{\exp(i\theta)} \times \frac{\exp\big( -i(n\theta + \frac{n\pi}{2}) \big) - \exp\big( i(n\theta + \frac{n\pi}{2}) \big)}{\exp(-i\theta) + \exp(i\theta)} \Bigg] \\[2mm] &= \mathrm{Im}\Bigg[ -\exp\Big( i\big( (n+1)\theta + \frac{n\pi}{2} \big) \Big)\frac{-2i\sin\big( n\theta + \frac{n\pi}{2} \big)}{2\cos(\theta)} \Bigg] \\[2mm] &= \cos\Big( (n+1)\theta + \frac{n\pi}{2} \Big)\frac{\sin\big( n\theta + \frac{n\pi}{2} \big)}{\cos(\theta)}. \end{align*}$$
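A quick numerical sanity check of the closed form (a sketch in Python, standard library only):

```python
# Check: sum_{k=1}^n (-1)^k sin(2kt) = cos(n pi/2 + t + n t) * sec(t) * sin(n pi/2 + n t).
import math, random

random.seed(0)
for _ in range(5):
    n = random.randint(1, 20)
    t = random.uniform(-1.4, 1.4)           # keep cos(t) away from 0
    lhs = sum((-1) ** k * math.sin(2 * k * t) for k in range(1, n + 1))
    rhs = (math.cos(n * math.pi / 2 + t + n * t)
           * math.sin(n * math.pi / 2 + n * t) / math.cos(t))
    print(abs(lhs - rhs))                   # expected: ~1e-15
```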
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }