How to find the following integral Let $X_1, \cdots, X_n$ be $iid$ normal random variables with unknown mean $\mu$ and known variance $\sigma^2$. How to find $E[\Phi(\bar X)]$, where $\bar X:=\frac{\sum_{i=1}^nX_i}{n}$, please? I guess the answer should be $\Phi(\mu)$. Here is how I started. Note that $Y:= \bar X$ is also normal with mean $\mu$ and variance $\frac{\sigma^2}{n}$. $$E[\Phi(\bar X)] = \int_{-\infty}^\infty \Phi(y) f_Y(y) dy.$$ Let $z=\frac{y-\mu}{\sigma/\sqrt{n}}$ and the above integral becomes $$\int_{-\infty}^\infty \Phi(\mu+\frac{\sigma z}{\sqrt{n}}) \phi(z) dz$$ I could not proceed any further from here. Does anyone know what to do, please? Thank you!
Find $E[\Phi(c\bar X)]$ where $X_i \sim N(\mu, 1)$ and $c$ is a constant. Then, proceed following Dilip but replace $X$ with $c\bar X.$ Let $Z\sim N(0,1)$ be independent of all $X_i.$ Then $$\begin{align} E[\Phi(c\bar X)] &= \int_{-\infty}^\infty P\{Z \leq cy\mid \bar X = y\}f_\bar X(y)\,\mathrm dy\\ &= P\{Z \leq c\bar X\}\\ &= P\{Z-c\bar X \leq 0\} \\ &= \Phi\left(\frac{c\mu}{\sqrt{1 + c^2/n}}\right) \end{align}$$ If we then choose $c$ to make this last expression equal to $\Phi(\mu),$ then we will obtain the MVUE of $\Phi(\mu).$ In this case $c=\sqrt\frac n{n-1}.$ If $\sigma^2\ne1$, then $c=\sqrt\frac n{n-\sigma^2}$ for $n$ sufficiently large.
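A quick numerical sanity check of the closed form (my own sketch, not part of the original answer; $\sigma^2=1$ as in the answer, and the values of $\mu$ and $n$ are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, n = 0.7, 5
c = np.sqrt(n / (n - 1))      # the unbiasing choice derived above

# Monte Carlo estimate of E[Phi(c * Xbar)] with X_i ~ N(mu, 1)
xbar = rng.normal(mu, 1.0, size=(1_000_000, n)).mean(axis=1)
print(np.mean(norm.cdf(c * xbar)))                # ~ Phi(mu)
print(norm.cdf(c * mu / np.sqrt(1 + c**2 / n)))   # closed form above
print(norm.cdf(mu))                               # target Phi(mu)
```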
{ "language": "en", "url": "https://math.stackexchange.com/questions/820211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do you actually calculate inverse $\sin, \cos, $ etc. ? I started to wonder, how does one actually calculate the $\arcsin, \arccos, $ etc. without a calculator? For example I know that: $$\arccos(0.3) = 72.54239688^{\circ}$$ by an online calculator, but how would one calculate this with a pencil and paper? How is it done? :) I can't remember any math teacher talk about this? How is it actually implemented? Thnx for any help =)
Basically you can use infinite series to calculate approximation of inverse trigonometric functions. $$ \arcsin z = z+ \left( \frac 12 \right) {z^3 \over 3} + \left( {1 \cdot 3 \over 2 \cdot 4} \right){z^5 \over 5} + \left( {1 \cdot 3 \cdot 5 \over 2 \cdot 4 \cdot 6} \right){z^7 \over 7}\ +\ ... \; = \sum_{n=0}^\infty {\binom{2n}{n}z^{2n+1} \over 4^n(2n+1)}; \;\;\; |z| \le 1 $$ $$ \arccos z = \frac \pi2 - \arcsin z = \frac \pi2 - \sum_{n=0}^\infty {\binom{2n}{n}z^{2n+1} \over 4^n(2n+1)}; \;\;\; |z| \le 1 $$ $$ \arctan z = z-{z^3 \over 3}+{z^5 \over 5}-{z^7 \over 7}\ +\ ... \ = \sum_{n=0}^\infty{(-1)^nz^{2n+1} \over 2n+1}; \;\;\;|z|\le1, \; z\neq i, -i $$
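For a concrete check against the value quoted in the question, here is a minimal numerical sketch of the first series (my own illustration; the 60-term cutoff is an arbitrary, generous choice since the series converges quickly at $z=0.3$):

```python
from math import comb, pi

def arcsin_series(z, terms=60):
    # arcsin z = sum_{n>=0} C(2n, n) z^(2n+1) / (4^n (2n+1)),  |z| <= 1
    return sum(comb(2*n, n) * z**(2*n + 1) / (4**n * (2*n + 1))
               for n in range(terms))

z = 0.3
print((pi/2 - arcsin_series(z)) * 180/pi)   # ~72.54239688 degrees
```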
{ "language": "en", "url": "https://math.stackexchange.com/questions/820317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What is the difference between two statements of $\varepsilon-N$ definition? Here is a homework question, TRUE/FALSE: $$\lim_{n\to\infty}a_n=a\Longleftrightarrow$$ * *$\forall\varepsilon>0,\ \exists N\in\mathbb{Z^+},\ \text{whenever}\ n>N\Rightarrow|a_n-a|<\varepsilon$. Answer: TRUE *$\exists N\in\mathbb{Z^+},\forall\varepsilon>0,\ \ \text{whenever}\ n>N\Rightarrow|a_n-a|<\varepsilon$. Answer: FALSE I am confused that what is difference between 1 & 2 (or say why No.2 is wrong)? Thanks!
The first is the correct statement. It says: no matter how small $\epsilon$ is, you can always choose a large enough $N$ so that every $a_n$ is within $\epsilon$ of $a$ whenever $n>N$. The second statement says something quite different. It says that if I hand you some large integer $N$, then no matter how small $\epsilon$ is, $a_n$ is within $\epsilon$ of $a$ whenever $n>N$. This second statement is much stronger: it holds only when $a_n=a$ for every $n>N$, i.e. when the sequence is eventually constant. To see that it fails in general, after I give you $N$, simply choose $\epsilon$ to be smaller than some nonzero difference $|a_n-a|$ with $n>N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/820398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
How do I verify that $\sin (\theta)$ and $\cos (\theta)$ are functions? I am studying pre-calculus mathematics at the moment, and I need help in verifying if $\sin (\theta)$ and $\cos (\theta)$ are functions? I want to demonstrate that for any angle $\theta$ that there is only one associated value of $\sin (\theta)$ and $\cos (\theta)$. How do I go about showing this?
The proof is based simply on similar triangles. If a right-angled triangle has an angle $\theta$ then the other two angles are $90^{\circ}$ and $(90-\theta)^{\circ}$. If two triangles have the same angles then they are similar. My picture shows two similar triangles: $\triangle OAB$ and $\triangle OA'B'$. Since $\theta = \angle AOB$ then, by definition $$\sin\theta = \frac{\|AB\|}{\|OB\|}$$ Since $\theta = \angle A'OB'$ then, by definition $$\sin\theta = \frac{\|A'B'\|}{\|OB'\|}$$ We can show that $\sin \theta$ has a single, unique value if we can show that the two ratios agree. Let $T$ be the linear transformation given by an enlargement, centre $O$, with scale factor $\lambda$, such that $T(A) = A'$ and $T(B) = B'$. We have $\|A'B'\|=\lambda\|AB\|$ and $\|OB'\| = \lambda\|OB\|$, hence $$\frac{\|A'B'\|}{\|OB'\|} = \frac{\lambda\|AB\|}{\lambda\|OB\|}=\frac{\|AB\|}{\|OB\|}$$ This shows that the ratio of the opposite side to the hypotenuse is the same for any two similar, right-angled triangles with angle $\theta$. That means that $\sin\theta$ is well-defined: each angle $\theta$ determines a single, unique value. (N.B. Similarity allows rotation and reflection as well as enlargement. However, rotations and reflections preserve lengths and so preserve ratios of lengths.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/820490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
Questions for first year students at the University. I will help teach an introductory mathematics class for engineers in applied math at the University. Does anyone have any good and cool favorite questions, or know where I can find some? Anything is welcome. For the moment there are some questions like * *write: $1 + 2 + 3 + ... + 100$ as a sum *show that $XY-YX = I$ has no solution for square matrices. *$p | (n^2 - n)$ ($p$ is prime). Thanks
Try asking students to solve for $x$ in the equation $x+\sin(x)=0$. The solution $x=0$ seems trivial, but equations of this type can in general only be solved by numerical methods, a subject from which engineering majors will benefit greatly.
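To illustrate the numerical-methods angle concretely, here is a minimal Newton's-method sketch for this equation (my own illustration; the starting guess and iteration count are arbitrary):

```python
from math import sin, cos

# Newton's method on f(x) = x + sin(x), with f'(x) = 1 + cos(x)
x = 1.0
for _ in range(8):
    x -= (x + sin(x)) / (1 + cos(x))
print(x)   # converges rapidly to the root x = 0
```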
{ "language": "en", "url": "https://math.stackexchange.com/questions/820575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Arithmetic and geometric sequences: where does their name come from? Where does the name of these two famous types of sequences come from? The article Geometric progression of Wikipedia says that the geometric sequence is so called because every term is the geometric mean of its two adjacent terms. Though it is true, it only reduces the question to: why is the geometric mean geometric (in opposition to the arithmetic mean). Continuing my investigation on Geometric mean, I was told that a square with the same area as a rectangle with sides $a$ and $b$ has their geometric mean $\sqrt{ab}$ for side. That's again totally true, but a square with the same perimeter as this rectangle of sides $a$ and $b$ has their arithmetic mean $\frac{a+b}{2}$ for side! Thus, my question: Who coined these names? And why? Why is the geometric mean more geometric than the arithmetic mean?
Who coined these names? And why? The "who" is already very difficult to pin down; the "why" is nearly impossible. Perhaps because there was previously no name for the mean? Anyway, according to a book by Anthony Lo Bello (1), "arithmetic" comes from the Greek word ἀριθμός arithmos, meaning "number". In a similar way, "geometric" comes from γεωμετρία geometría, "measurement of the earth". [The transliterations are not from the book.] This section, written by Amartya Dutta of the Indian Statistical Institute, mentions that the term "arithmetic mean" was used in 1635 by Henry Gellibrand, an astronomer. However, I could not verify that usage. The Gaugers Magazine, written by William Hunt and published in 1687, contains the earliest use I could verify. According to James A. Landau (see here), "geometric mean" was already in use in the 1771 edition of the Encyclopædia Britannica. The actual term ("geometrical mean") comes from a long-titled work written by E. Halley and published from 1695 to 1697, in a volume of the magazine Philosophical Transactions. You can see the unformatted original text here. In another question, David K shows a wonderful figure illustrating why this mean is so geometric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/820680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
'Obvious' theorems that are actually false It's one of my real analysis professor's favourite sayings that "being obvious does not imply that it's true". Now, I know a fair few examples of things that are obviously true and that can be proved to be true (like the Jordan curve theorem). But what are some theorems (preferably short ones) which, when put into layman's terms, the average person would claim to be true, but, which, actually, are false (i.e. counter-intuitively-false theorems)? The only ones that spring to my mind are the Monty Hall problem and the divergence of $\sum\limits_{n=1}^{\infty}\frac{1}{n}$ (counter-intuitive for me, at least, since $\frac{1}{n} \to 0$ ). I suppose, also, that $$\lim\limits_{n \to \infty}\left(1+\frac{1}{n}\right)^n = e=\sum\limits_{n=0}^{\infty}\frac{1}{n!}$$ is not obvious, since one 'expects' that $\left(1+\frac{1}{n}\right)^n \to (1+0)^n=1$. I'm looking just for theorems and not their (dis)proof -- I'm happy to research that myself. Thanks!
It's not exactly a theorem, but it fools every math newcomer: $e = \lim_{n\to\infty} \left(1+\frac{1}{n}\right)^n$ $(1 + 1/\infty)$ is $1$, obviously. And 1 to the power of $\infty$ is obviously still 1. Nope, it's 2.718...
{ "language": "en", "url": "https://math.stackexchange.com/questions/820686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "371", "answer_count": 71, "answer_id": 28 }
'Obvious' theorems that are actually false It's one of my real analysis professor's favourite sayings that "being obvious does not imply that it's true". Now, I know a fair few examples of things that are obviously true and that can be proved to be true (like the Jordan curve theorem). But what are some theorems (preferably short ones) which, when put into layman's terms, the average person would claim to be true, but, which, actually, are false (i.e. counter-intuitively-false theorems)? The only ones that spring to my mind are the Monty Hall problem and the divergence of $\sum\limits_{n=1}^{\infty}\frac{1}{n}$ (counter-intuitive for me, at least, since $\frac{1}{n} \to 0$ ). I suppose, also, that $$\lim\limits_{n \to \infty}\left(1+\frac{1}{n}\right)^n = e=\sum\limits_{n=0}^{\infty}\frac{1}{n!}$$ is not obvious, since one 'expects' that $\left(1+\frac{1}{n}\right)^n \to (1+0)^n=1$. I'm looking just for theorems and not their (dis)proof -- I'm happy to research that myself. Thanks!
What about this: $\mathbb{R}$ and $\mathbb{R}^2$ are not isomorphic (as Abelian groups with addition). It falls under the category of "Let's take the Hamel basis of $\mathbb{R}$...", but I like it a lot.
{ "language": "en", "url": "https://math.stackexchange.com/questions/820686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "371", "answer_count": 71, "answer_id": 60 }
Seeking a more direct proof for: $m+n\mid f(m)+f(n)\implies m-n\mid f(m)-f(n)$ If $f:\mathbb N\to\mathbb Z$ satisfies: $$\forall n,m\in\mathbb N\,, n+m\mid f(n)+f(m)$$ How to show that this implies: $$\forall n,m\in\mathbb N,\,n-m\mid f(n)-f(m)?$$ I was almost incidentally able to prove this by classifying such functions, but that seems circuitous for such a result. Is there a proof that is (more) direct?
This is actually pretty easy. Let $n>m$, and take an $N$ such that $N(n-m)>m$. Set $a=N(n-m)-m$. Then $$ m+a=N(n-m),\qquad n+a=m+a+n-m=(N+1)(n-m). $$ Now $$ f(n)-f(m)=f(n)+f(a)-(f(m)+f(a)), $$ but by assumption $n-m\mid f(m)+f(a)$ and $n-m\mid f(n)+f(a)$, and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/820731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
In the Quadratic Formula, what does it mean if $b^2-4ac>0$, $b^2-4ac<0$, and $b^2-4ac=0$? Concerning the Quadratic Formula: What does it mean if $b^2-4ac>0$, $b^2-4ac<0$, and $b^2-4ac=0$?
$\Delta={ b^2-4ac}>0$ means that the equation has two distinct real solutions. $\Delta= { b^2-4ac}<0$ means that the equation has no real solutions, but two complex conjugate solutions. $\Delta={ b^2-4ac}=0$ means that the equation has exactly one (repeated) real solution.
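A minimal sketch that classifies and solves a quadratic by the sign of its discriminant (my own illustration; the sample coefficients are arbitrary):

```python
import cmath

def solve_quadratic(a, b, c):
    # classify and solve a*x^2 + b*x + c = 0 by the discriminant
    d = b*b - 4*a*c
    roots = ((-b + cmath.sqrt(d)) / (2*a), (-b - cmath.sqrt(d)) / (2*a))
    kind = "two real" if d > 0 else ("one repeated real" if d == 0 else "two complex")
    return kind, roots

print(solve_quadratic(1, -5, 6))   # d > 0: roots 3 and 2
print(solve_quadratic(1, -4, 4))   # d = 0: repeated root 2
print(solve_quadratic(1,  0, 1))   # d < 0: roots i and -i
```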
{ "language": "en", "url": "https://math.stackexchange.com/questions/821058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Subsets of intervals If S $\subseteq$ $\mathbb{R}$ is a nonempty, bounded set, and I := [inf S, sup S], show that S $\subseteq$ I. Moreover, if J is any closed bounded interval containing S, show that I $\subseteq$ J. To show that S $\subseteq$ I, let x $\in$ S. Since S is bounded, inf S $\lt$ x < sup S. Thus x $\in$ I. Therefore S $\subseteq$ I. Is this correct? I feel like I am missing something. How do I show that I $\subseteq$ J if S $\subseteq$ J?
It's not true that $\inf S < x$; for example, if $S = [0, 1]$ and $x = 0$, this is false. What is true, and follows directly from the definitions, is that $$\inf S \le x \le \sup S$$ since $\inf$ and $\sup$ are actually bounds on $S$. The result follows immediately from this observation. For your second question: Suppose $S \subseteq J = [a,b]$. Argue that $a \le \inf S$, so that $a$ is a lower bound for $I$. Make the same argument for the upper bounds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/821159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find integral $\underbrace{\int\sqrt{2+\sqrt{2+\sqrt{2+\cdots+\sqrt{2+x}}}}}_{n}dx,x>-2$ Find the integral $$\int\underbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots+\sqrt{2+x}}}}}_{n}dx,x>-2$$ where $n$ denotes the number of square roots. I know that if $0 \le x\le 2$, then let $$x=2\cos{t},0\le t\le\dfrac{\pi}{2}$$ so $$\sqrt{2+x}=\sqrt{2+2\cos{t}}=2\cos{\dfrac{t}{2}}$$ so $$\sqrt{2+\sqrt{2+x}}=2\cos{\dfrac{t}{2^2}}$$ so $$\int\sqrt{2+\sqrt{2+\sqrt{2+\cdots+\sqrt{2+x}}}}dx=\int2\cos{\dfrac{t}{2^n}}(-2\sin{t})dt$$ and for the $x\ge 2$ case, I let $x=2\cosh{t}$, but for the $-2\le x\le 0$ case, I can't do it.
You were too timid: For $-2\leq x\leq2$ use the substitution $$x=2\cos t\qquad(-\pi\leq t\leq 0)\ .$$ Then everything goes through as before: $$\sqrt{2+x}=\sqrt{2+2\cos t}=2\cos{t\over2},\quad \sqrt{2+\sqrt{2+x}}=\sqrt{2+2\cos{t\over2}}=2\cos{t\over4}\ ,$$ etcetera.
{ "language": "en", "url": "https://math.stackexchange.com/questions/821337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Regarding Similarity of matrices Consider a set $T$ of square matrices over the finite field $\mathbb{F_p}$. Clearly the cardinality of the set $T$ is $p^{n^2}$ where the square matrices are of size $n$. Question is: How many non-similar ($A=P^{-1}.B.P$ for some $P$ $\in$ $T$) matrices would we have? Next, suppose there are two matrices $A$ and $B$ such that their minimal polynomial is the same $f$ (say). Question is: Are $A$ and $B$ similar matrices?
To your second question: no, the minimal polynomial does not suffice. As an example, consider the following two matrices $$ A = \begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ \end{bmatrix}. $$ They have the same minimal polynomial, namely $x^{2}$, but they are not similar. Similarity depends in general not on a single polynomial, but on a sequence of polynomials, each dividing the next one. For $A$ the sequence is $x, x, x^{2}$, for $B$ it is $x^{2}, x^{2}$. A good reference is Jacobson's Basic Algebra I. As to your first question, I refer you to this question and the answer provided there.
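A quick numerical illustration of the counterexample (my own sketch): both matrices are annihilated by $x^2$ but not by $x$, so each has minimal polynomial $x^2$, yet their ranks differ, and similar matrices must have equal rank.

```python
import numpy as np

A = np.zeros((4, 4)); A[0, 1] = 1
B = np.zeros((4, 4)); B[0, 1] = 1; B[2, 3] = 1

# both satisfy x^2 = 0 (and neither is zero, so the minimal polynomial is x^2)
print(np.allclose(A @ A, 0), np.allclose(B @ B, 0))        # True True
# but the ranks differ, so A and B cannot be similar
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 1 2
```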
{ "language": "en", "url": "https://math.stackexchange.com/questions/821435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find $x^2 - x$? I'm quite a novice when it comes to maths. I'm on a problem in which I had to isolate $x$ through factorials, which I completed without problem. However, now I am stuck on a seemingly more minor problem. The problem I currently have is $x^2 - x = 380$. I know that this can be solved for $x = 20$, however I am unsure how this has been worked out. I am sorry for this being such a basic question, however I simply have no idea how this was solved. Thanks,
$$x^2 - x = 380$$ Rewrite this as: $$x^2-x-380=0$$ Use quadratic formula to solve it now. It is: $$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$$ For: $$ax^2+bx+c=0$$ Just plug this in: $$x=\frac{-(-1) \pm \sqrt{(-1)^2-4(1)(-380)}}{2(1)}=\frac{1 \pm \sqrt{1521}}{2}=\frac{1 \pm 39}{2}$$ Which means: $$\therefore x=20 \text{ or } -19$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/821635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 4 }
Prove an expression for angle bisector Show that the vector $\dfrac{A\,\vec B+B\,\vec A}{A+B}$ represents the bisector of the angle between $\vec A$ and $\vec B$. I can prove that the numerator is the bisector of both vectors but I am unsure how to show that the expression given is as well. Does it matter that the expression is divided by a scalar? I would assume not, but I am not sure. Thanks.
You are right that the denominator is not that important. Here it serves to give a convex combination of the points $\vec A$ and $\vec B$, i.e., the bisector that is in the segment between these points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/821724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
show that this PDE can be reduced to the heat equation How to reduce this PDE to the heat equation $$x^2G_{xx}=G_t$$ ($G_{xx}$ is the 2nd-order derivative with respect to $x$, $G_t$ is the 1st derivative with respect to $t$) We wish to obtain a form $G(x,t)=F(U(x,t))$ such that, when it is substituted into the original equation, we have $$U_{xx}=U_t$$
Let $\begin{cases}x_1=\ln x\\t_1=t\end{cases}$ , Then $G_x=G_{x_1}(x_1)_x+G_{t_1}(t_1)_x=\dfrac{G_{x_1}}{x}=e^{-x_1}G_{x_1}$ $G_{xx}=(e^{-x_1}G_{x_1})_x=(e^{-x_1}G_{x_1})_{x_1}(x_1)_x+(e^{-x_1}G_{x_1})_{t_1}(t_1)_x=(e^{-x_1}G_{x_1x_1}-e^{-x_1}G_{x_1})e^{-x_1}=e^{-2x_1}G_{x_1x_1}-e^{-2x_1}G_{x_1}$ $G_t=G_{x_1}(x_1)_t+G_{t_1}(t_1)_t=G_{t_1}$ $\therefore e^{2x_1}(e^{-2x_1}G_{x_1x_1}-e^{-2x_1}G_{x_1})=G_{t_1}$ $G_{x_1x_1}-G_{x_1}=G_{t_1}$ Let $\begin{cases}x_2=x_1-t_1\\t_2=t_1\end{cases}$ , Then $G_{x_1}=G_{x_2}(x_2)_{x_1}+G_{t_2}(t_2)_{x_1}=G_{x_2}$ $G_{x_1x_1}=(G_{x_2})_{x_1}=(G_{x_2})_{x_2}(x_2)_{x_1}+(G_{x_2})_{t_2}(t_2)_{x_1}=G_{x_2x_2}$ $G_{t_1}=G_{x_2}(x_2)_{t_1}+G_{t_2}(t_2)_{t_1}=G_{t_2}-G_{x_2}$ $\therefore G_{x_2x_2}-G_{x_2}=G_{t_2}-G_{x_2}$ $G_{x_2x_2}=G_{t_2}$
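As a sanity check on the combined change of variables $x_2=\ln x - t$, here is a minimal sympy sketch (my own illustration): $u(\xi,\tau)=e^{a\xi+a^2\tau}$ is a standard exponential solution of the heat equation $u_{\xi\xi}=u_\tau$, and pulling it back through the substitution should satisfy the original PDE identically.

```python
import sympy as sp

x, t, a = sp.symbols('x t a', positive=True)

# u(xi, tau) = exp(a*xi + a**2*tau) solves u_xixi = u_tau;
# substitute xi = ln(x) - t, tau = t to get a candidate G(x, t)
G = sp.exp(a*(sp.log(x) - t) + a**2*t)

# residual of the original PDE: x^2 * G_xx - G_t should vanish
print(sp.simplify(x**2 * sp.diff(G, x, 2) - sp.diff(G, t)))   # -> 0
```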
{ "language": "en", "url": "https://math.stackexchange.com/questions/821795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a structure is a field? Please help with what I am doing wrong here. It has been a while since I've been in school and I need some help. The question is: Let $F$ be a field and let $G=F\times F$. Define operations of addition and multiplication on $G$ by setting $(a,b)+(c,d)=(a+c,b+d)$ and $(a,b)*(c,d)=(ac,db)$. Do these operations define the structure of a field on $G$? In order to be a field, the following conditions must apply: * *Associativity of addition and multiplication *commutativity of addition and multiplication *distributivity of multiplication over addition *existence of identity elements for addition and multiplication *existence of additive inverses *existence of multiplicative inverses: for $a \neq 0$, $a^{-1}a=1$ I started with 1. saying $(a,b)+(c,d)+(e,f)=(a+c+e,b+d+f)$ $(a,b)+[(c,d)+(e,f)]=[(a,b)+(c,d)]+(e,f)$ $(a,b)+(c+e,d+f)=(a+c,b+d)+(e,f)$ $(a+c+d,b+e+f)=(a+c+e,b+d+f)$ which is not correct but I'm not sure where I went wrong. Is my logic incorrect?
It can help to prove a more general result; let $A$ be any non-empty set with an operation $\%$ on it (I use a generic symbol). Consider now the operation $?$ on $A\times A$ defined by $$ (a,b)\mathbin{?}(c,d)=(a\mathbin{\%}c,b\mathbin{\%}d) $$ * *If $\%$ is associative, then also $?$ is associative *$?$ is commutative if and only if $\%$ is commutative *If $\%$ has a neutral element $e$, then $(e,e)$ is a neutral element for $?$ *An element $(a,b)$ has an inverse (with respect to $?$) if and only if both $a$ and $b$ have an inverse (with respect to $\%$) Once you have verified these facts, you immediately have 1, 2, 4 and 5. Moreover, the neutral element for addition in $G$ is $(0,0)$. Distributivity of multiplication in $G$ can be verified directly. However, condition 6 fails, because $(1,0)$ is not the zero element $(0,0)$, yet it has no multiplicative inverse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/821864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Eigenvalues of $AB$ and $BA$ where $A$ and $B$ are square matrices Show that if $A,B \in M_{n \times n}(K)$, where $K=\mathbb{R}, \mathbb{C}$, then the matrices $AB$ and $BA$ have the same eigenvalues. I do that like this: let $\lambda$ be the eigenvalue of $B$ and $v\neq 0$ $ABv=A\lambda v=\lambda Av=BAv$ the third equation is valid, because $Av$ is the eigenvector of $B$. Am I doing it right?
Alternative proof #1: If $n\times n$ matrices $X$ and $Y$ are such that $\mathrm{tr}(X^k)=\mathrm{tr}(Y^k)$ for $k=1,\ldots,n$, then $X$ and $Y$ have the same eigenvalues. See, e.g., this question. Using $\mathrm{tr}(UV)=\mathrm{tr}(VU)$, it is easy to see that $$ \mathrm{tr}[(AB)^k]=\mathrm{tr}(\underbrace{ABAB\cdots AB}_{\text{$k$-times}}) =\mathrm{tr}(\underbrace{BABA\cdots BA}_{\text{$k$-times}})=\mathrm{tr}[(BA)^k]. $$ Now use the above with $X=AB$ and $Y=BA$. Alternative proof #2: $$ \begin{bmatrix} I & A \\ 0 & I \end{bmatrix}^{-1} \color{red}{\begin{bmatrix} AB & 0 \\ B & 0 \end{bmatrix}} \begin{bmatrix} I & A \\ 0 & I \end{bmatrix} = \color{blue}{\begin{bmatrix} 0 & 0 \\ B & BA \end{bmatrix}}. $$ Since the $\color{red}{\text{red matrix}}$ and the $\color{blue}{\text{blue matrix}}$ are similar, they have the same eigenvalues. Since both are block triangular, their eigenvalues are the eigenvalues of the diagonal blocks.
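A quick numerical spot check of the claim (my own sketch with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# the eigenvalues of AB and BA agree as multisets
ev_AB = np.sort_complex(np.linalg.eigvals(A @ B))
ev_BA = np.sort_complex(np.linalg.eigvals(B @ A))
print(np.allclose(ev_AB, ev_BA))   # True
```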
{ "language": "en", "url": "https://math.stackexchange.com/questions/821934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 4 }
Is it sufficient to prove the Collatz conjecture by doing it for $3+6k, k \geq 0$? Thinking about this problem, I saw two interesting properties of the Collatz graph. Firstly, if we consider that every even number $e$ can be represented (in a unique way) as $e = o 2^n$, where $o$ is an odd number and $n$ is an integer ($n \geq 1$), then after $n$ steps our $e$ will generate $o$. It means that we "only" have to prove that every odd number is in the Collatz graph. The second point is: multiples of 3 are never generated by other odd numbers ($3n+1=3k$?). Also one can realize that every odd number that isn't a multiple of 3 is generated (when we skip steps that generate even numbers) by infinitely many odd multiples of 3. So, if there is some number of the form $3+6k$ (an odd multiple of 3) that isn't in the Collatz graph, then its "function" ($3(3+6k)+1 \over 2^M$, where $M$ is the greatest integer for which the previous division is an integer) also isn't in the same graph. So we can treat odd multiples of 3 as "odd endpoints" of our graph. If all those propositions are correct, then the Collatz conjecture can be, in some way, simplified. My question is: what is wrong with my propositions? If nothing is wrong, why didn't I find that information anywhere?
Did you see this? In a previous article, we reduced the unsolved problem of the convergence of Collatz sequences, to convergence of Collatz sequences of odd numbers, that are divisible by 3. In this article, we further reduce this set to odd numbers that are congruent to $21\bmod24$…
{ "language": "en", "url": "https://math.stackexchange.com/questions/822057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Rig categories concept which is equivalent to monoid concept in monoidal categories In monoidal categories, there is a notion of monoid. Is there an "equivalent" concept in rig categories (i.e., categories with two monoidal structures which are related like + and * in a rig)?
The question is not really precise, and you are certainly not looking for a literally "equivalent" concept. Perhaps you are looking for the notion of a rig object internal to a rig category? This is an object $x$ equipped with morphisms $z : 0 \to x$ (zero), $a : x \oplus x \to x$ (addition), $u : 1 \to x$ (unit) and $m : x \otimes x \to x$ (multiplication) such that the evident diagrams commute. In more detail, we require that $(x,a,z)$ is a commutative monoid object, $(x,m,u)$ is a monoid object, and that $m$ distributes over $a$ in the following sense: $$x \otimes (x \oplus x) \xrightarrow{x \otimes a} x \otimes x \xrightarrow{m} x$$ agrees under the identification $x \otimes (x \oplus x) \cong (x \otimes x) \oplus (x \otimes x)$ with $$(x \otimes x) \oplus (x \otimes x) \xrightarrow{m \otimes m} x \oplus x \xrightarrow{a} x.$$ Likewise, we demand that two certain morphisms $(x \oplus x) \otimes x \rightrightarrows x$ are equal. Finally, we require that $z$ is absorbing for $m$, i.e. that $x \otimes 0 \xrightarrow{x \otimes z} x \otimes x \xrightarrow{m} x$ equals $x \otimes 0 \cong 0 \xrightarrow{z} x$. Likewise for $0 \otimes x \to x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/822124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove $|\det A| \leq \prod_{j=1}^n ||a_j||$ Let's say $A$ is a square $n$ by $n$ matrix, $||x||^2=x^T x$, and $x$ is a real column $n$-vector. How would you show this? I tried to use the QR factorization here in showing that $||a_j||=||r_j||$, but I'm not sure how far that will get you.
Let $A=QR$ be the QR factorisation of $A$ with $R=[r_1,\ldots,r_n]$. Then Mr Hadamard says that $$ \left|\,\det A\,\right| = \left|\,\det QR\,\right| = \left|\,\det Q\,\right|\;\left|\,\det R\,\right| = \left|\,\det R\,\right| = \prod_{i=1}^n\left|\,r_{ii}\,\right| \leq \prod_{i=1}^n\|r_i\|=\prod_{i=1}^n\|a_i\|. $$
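A quick numerical illustration of the inequality (my own sketch with a random matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

lhs = abs(np.linalg.det(A))
rhs = np.prod(np.linalg.norm(A, axis=0))   # product of the column norms ||a_j||
print(lhs <= rhs, lhs, rhs)                # True ...
```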
{ "language": "en", "url": "https://math.stackexchange.com/questions/822232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Normal Operators: Numerical Range Disclaimer: As I realized in the comments, this works for normal operators, so I decided to modify this question. Besides, I got the proof now - thanks to T.A.E.! Prove that for normal operators the spectrum is contained in the closure of the numerical range: $$\sigma(N)\subseteq\overline{\mathcal{W}(N)}$$
If the distance from $\lambda$ to $\mathcal{W}(A)$ is $d > 0$, then $$ |((A-\lambda I)\phi,\phi)| \ge d\|\phi\|^{2},\;\;\; \phi \in \mathcal{D}(A)\\ \implies d\|\phi\| \le \|(A-\lambda I)\phi\|,\;\;\; \phi \in\mathcal{D}(A). $$ If $(A\phi,\phi)=(\phi,A^{\star}\phi)$ for $\phi$ in a core domain of $A^{\star}$, then you also get $$ d\|\phi\| \le \|(A^{\star}-\overline{\lambda}I)\phi\|,\;\;\;\phi \in \mathcal{D}(A^{\star}). $$ So, in this case $\mathcal{N}(A^{\star}-\overline{\lambda}I)=\{0\}$. Using the adjoint relation, $\mathcal{R}(A-\lambda I)^{\perp}=\mathcal{N}(A^{\star}-\overline{\lambda}I)$, it follows that $(A-\lambda I)$ has dense range and bounded inverse. So, under these assumptions, $\lambda \in \rho(A)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/822319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What jobs in Mathematics are always in demand, and are deeply Mathematically specialised or greatly general? I am wondering what jobs in the field of Mathematics are (seemingly) always in demand. I am also wondering what jobs there are that are (once again seemingly) greatly Mathematically demanding with regard to either deep specialisation or great generality in the application fields. For the second criterion, I mean that a great amount of learning must be done to understand the career. Whether that be learning a lot of content from multiple fields, or significant investment in a specialization. More specifically, what fields meet both of these criteria? Thank you for your time. Note: If you wish, answer with regard to any location you like; I understand this can be (and likely is) location dependent.
With the arrival of big data, Statistics, as a branch of Mathematics, is now in high demand, and there are challenges in many areas. Buzz words like Machine Learning, Analytics, Computer Vision and so on have statistics at their core. Personally, being a software engineer with a maths background, I recently engaged in postgraduate studies in statistics, and I can say that I've been headhunted a couple of times since then. Luckily, my current position involves both software engineering and statistics; but the interest from companies (mainly the big ones) is there!
{ "language": "en", "url": "https://math.stackexchange.com/questions/822409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A difficult integral evaluation problem How do I compute, for $a>0$, the integral $$ \int_0^\pi \frac{x\sin x}{1-2a\cos x+a^2}dx? $$ I want to find a complex function and integrate by the residue theorem.
Since it hasn't been specifically objected to yet, here is a solution that doesn't rely on complex variable methods. We shall make use of the Fourier sine series, $$\frac{a\sin x}{1-2a\cos x+a^2}=\begin{cases} \sum_{n=1}^{\infty}a^{n}\sin{(nx)},~~~\text{for }|a|<1,\\ \sum_{n=1}^{\infty}\frac{\sin{(nx)}}{a^{n}},~~~\text{for }|a|>1. \end{cases}$$ These series may readily be derived by taking the imaginary parts of the complex geometric series $\sum_{n=1}^{\infty}\left(ae^{ix}\right)^n$ and $\sum_{n=1}^{\infty}\left(\frac{1}{a}e^{ix}\right)^n$, respectively. By expanding the integrand in terms of these series and then swapping the order of integration and summation, a closed form may be obtained. For the $|a|>1$ case, $$\begin{align}I(a)&=\int_0^\pi \frac{x\sin x}{1-2a\cos x+a^2}\mathrm{d}x\\ &=\int_{0}^{\pi}x\sum_{n=1}^{\infty}\frac{\sin{(nx)}}{a^{n+1}}\mathrm{d}x\\ &=\sum_{n=1}^{\infty}\frac{1}{a^{n+1}}\int_{0}^{\pi}x\sin{(nx)}\mathrm{d}x\\ &=\sum_{n=1}^{\infty}\frac{1}{a^{n+1}}\left(\frac{\sin{(n\pi)}}{n^2}-\frac{\pi\cos{(n\pi)}}{n}\right)\\ &=\pi\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n\,a^{n+1}}\\ &=\frac{\pi}{a}\log{\frac{1+a}{a}}. \end{align}$$ For the $0<|a|<1$ cases, we can make use of the fact that if $0<|a|<1$, then $\frac{1}{|a|}>1$, thus allowing us to invoke the previous result. Hence, $$\begin{align}I(a)&=\int_0^\pi \frac{x\sin x}{1-2a\cos x+a^2}\mathrm{d}x\\ &=\frac{1}{a^2}\int_0^\pi \frac{x\sin x}{a^{-2}-2a^{-1}\cos x+1}\mathrm{d}x\\ &=\frac{1}{a^2}\frac{\pi}{a^{-1}}\log{\frac{1+a^{-1}}{a^{-1}}}\\ &=\frac{\pi}{a}\log{(1+a)}. \end{align}$$
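A quick numerical check of both closed forms (my own sketch; the test values $a=0.5$ and $a=3$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def I(a):
    f = lambda x: x*np.sin(x) / (1 - 2*a*np.cos(x) + a**2)
    return quad(f, 0, np.pi)[0]

for a in (0.5, 3.0):
    closed = (np.pi/a)*np.log(1+a) if abs(a) < 1 else (np.pi/a)*np.log((1+a)/a)
    print(I(a), closed)   # each pair agrees
```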
{ "language": "en", "url": "https://math.stackexchange.com/questions/822484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
What can we say about the convergence of this series $$\sum {z^n\over n!} $$ I used d'Alembert's ratio test and got $$\lim_{n\to\infty}{u_n\over u_{n+1}}={n+1\over z}$$ As this tends to $\infty>1$, can I say that the given series is convergent for all values of $z$? Note: $z$ is a complex number
Yes. A good thing to recall is the proof of this property, which in your case reads: $$ \frac{u_{n+1}}{u_n} = \frac z{n+1} \implies \left|\frac{u_{n+1}}{u_n}\right| \le 1/2 <1 $$ when $n$ is big enough, and then $$ |u_n| \le C2^{-n} $$ for every $n$, for a certain $C>0$. Hence the series is (absolutely) convergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/822642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
infinite discrete subspace In a topological space $(X, \tau)$ where every compact subspace of $X$ is closed, no infinite subspace of $X$ can have the cofinite topology. Is it right to say: Each infinite subspace of $X$ contains an infinite discrete subspace. How can I prove it? Thank you.
A nice "folklore" topology theorem that is often useful, see the paper: Minimal Infinite Topological Spaces, John Ginsburg and Bill Sands, The American Mathematical Monthly Vol. 86, No. 7 (Aug. - Sep., 1979), pp. 574-576. Suppose $X$ is any infinite topological space. Then there exists a countably infinite subspace $A$ of $X$ such that $A$ is homeomorphic to one of the following five spaces: * *$\mathbb{N}$ in the indiscrete topology. *$\mathbb{N}$ in the cofinite topology. *$\mathbb{N}$ in the upper topology (all non-trivial open sets are of the form $n^\uparrow = \{m \in \mathbb{N}: m \ge n\}$). *$\mathbb{N}$ in the lower topology (all non-trivial open sets are of the form $n^\downarrow = \{m \in \mathbb{N}: m \le n\}$). *$\mathbb{N}$ in the discrete topology. Now, if $X$ is a KC-space (i.e. all compact subsets of $X$ are closed in $X$), then every subspace $Y$ of $X$ is also a KC-space. As the first 4 spaces are all non-KC (in the first 3 spaces all subsets are compact, and in the lower topology exactly all finite sets are compact, but not all of these are closed, e.g. $\{2\}$ is not closed as $3$ is in its closure) this means that every infinite (subset of a ) KC-space contains an infinite discrete subspace. (Added) Note that this question and its answer give a (detailed explanation of a) direct proof for this. This uses recursion to construct the set, using at every stage that a countably infinite subspace $A$ of $X$ cannot have the cofinite topology. The latter is clear, as I remarked above: suppose $X$ is a KC-space, then $A \subset X$ is also a KC-space (a compact $K \subset A$ is also compact in $X$ so closed in $X$, so $K$ is closed in $A$ as well) and the cofinite countable space is not a KC-space, as all subsets are compact but only the finite subsets are closed (and the whole subspace).
{ "language": "en", "url": "https://math.stackexchange.com/questions/822838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all functions $f$ such that $f\left(x^2-y^2\right)=(x-y)\big(f(x)+f(y)\big)$. Find all functions $f:\mathbb R\to\mathbb R$ such that $$f\left(x^2-y^2\right)=(x-y)\big(f(x)+f(y)\big)\text.$$ I have derived these clues: * *$f(0)=0$; *$f(x^2)=xf(x)$; *$f(x)=-f(-x)$. But now I am confused. I know solution will be $f(x)=x$, but I don't know how to prove this.
I don't know how to do this without continuity. If you can show $f(x) \to f(0)$ whenever $x \to 0$, then the original expression yields it. Then all you need is $f(x)$ bounded near zero in order to use $f(x^2) = x f(x)$ to get that the limit exists and is zero. However, taking continuity for granted, consider your result $f(x^2)=xf(x)$ which can be rewritten as $f(x) = x^{1/2} f(x^{1/2})$ if we let $x^2 \to x$ for positive $x$. Then $f(x^2) = x \cdot x^{1/2} f(x^{1/2})$. But then we can write $f(x^{1/2}) = x^{1/4} f(x^{1/4})$ if we let $x \to x^{1/4}$ in $f(x^2) = xf(x)$ so that $f(x^2) = x \cdot x^{1/2} \cdot x^{1/4} f(x^{1/4})$. Repeating this $n$ times we get $$f(x^2) = x^{\sum_{n=0} 2^{-n}} f(x^{2^{-n}}) = x^{(2-2^{-n})} f(x^{2^{-n}})$$ which holds for $x>0$. Now for $x > 0$, taking the limit as $n \to \infty$ gives $$f(x^2) = x^2 f(1)$$ using continuity of $f$ or restated $$f(x) = f(1) x$$ for positive $x$. But now for negative $x$ use the fact $f(x) = -f(-x)$ so that $f(-x) = -x f(1)$. Letting $\alpha = f(1)$, we say all solutions must be of the form $f(x) = \alpha x$. We easily verify they are all solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/823022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Prove $a_n=1+\frac{a_{n-1}}{1+a_{n-1}}$ increasing There is a homework question in a Calculus-1 course: Calculate the limit of $\{a_n\}$: $$a_1=1,\ a_n=1+\frac{a_{n-1}}{1+a_{n-1}}$$ I think the key points are bounded and increasing, and I have proved that $$a_n\in(1, 2)$$ If I knew it's increasing then $$a=1+\frac{a}{1+a}\Rightarrow\lim a_n=\frac{\sqrt5+1}{2}$$ My question is How to Prove it's increasing? I tried it in two ways: $$a_{n+1}-a_n=1+\frac{a_n}{1+a_n}-a_n=\frac{-a^2_n+a_n+1}{1+a_n}$$ But how to prove that $-a^2_n+a_n+1>0$? Another way is $$\frac{a_{n+1}}{a_n}=\frac{1}{a_n}+\frac{1}{1+a_n}=\frac{1+2a_n}{a_n+a^2_n}$$ But how to prove that $1+2a_n>a_n+a^2_n$? This is not a proof question which means $\frac{\sqrt5+1}{2}$ is not a known result. Thank you!
First, we prove that $a_n$ is bounded above. Obviously, $a_n = 1+ \frac{a_{n-1}}{a_{n-1}+1} < 2$. To prove that the sequence is increasing, we can just use induction. The base cases are trivial so suppose the claim holds for all naturals $ \le k$. Then $a_{k+1}-a_k = \frac{a_k}{1+a_k} - \frac{a_{k-1}}{1+a_{k-1}} = \frac{a_k(1+a_{k-1})-a_{k-1}(1+a_k)}{(1+a_k)(1+a_{k-1})}$. The numerator is $a_k-a_{k-1}$ which by the inductive hypothesis is $> 0$ so we are done.
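Numerically, the convergence is easy to see (my own one-liner sketch):

```python
a = 1.0
for _ in range(40):
    a = 1 + a / (1 + a)
print(a, (5**0.5 + 1) / 2)   # both ~1.618033988749895
```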
{ "language": "en", "url": "https://math.stackexchange.com/questions/823093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$L$-function of an elliptic curve and isomorphism class Let $E$ be an elliptic curve defined over $\mathbb{Q}$. We have a $L$-function $$L(E,s)$$ built from the local parameters $a_p(E)$. If two elliptic curves are isomorphic, they clearly have the same $L$-function. What about the converse ? If two elliptic curves $E,E'$ over $\mathbb{Q}$ have the same $L$-function, what can be said about them ? Would they be isomorphic ? Isogenous ? Or one of these properties would hold for "most" curves ? In other words, does this "local" analysis of the curve characterizes it ?
Isogenous. A theorem of Faltings says that two elliptic curves $E_1$ and $E_2$ over a number field $F$ are isogenous if and only if they have the same $L$-factors at almost all places. See, for example, this article, Theorem 3.1, or find the source: G. Faltings, Endlichkeitssätze für abelsche Varietäten über Zahlkörpern, Invent. Math. 73 (1983), 349-366. There are indeed cases where two elliptic curves are isogenous, not isomorphic, and all the $L$-factors are identical. For instance, take $E_1$ and $E_2$ to be the curves 11a1 and 11a2, respectively, as in Cremona's tables. As the notation indicates, the two curves share an isogeny. Then, $$j(E_1)=-122023936/161051 \neq -52893159101157376/11 = j(E_2),$$ so the curves are not isomorphic. One can verify that all the $L$-factors coincide in this case (in general, they may differ at primes of bad reduction, but here both have bad reduction at 11, and both have bad split multiplicative reduction, so they get the same $L$-factor at 11). For instance, you can verify this using Sage or MAGMA, and verify that $$L(E_1,1)=L(E_2,1)=0.253841860855910684337758923351\ldots$$ In particular, their Mordell-Weil rank is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/823191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Inverse Z-transform with a complex root The z-transform of a signal is $$ X(z)=\frac{1}{z^2+z+1}$$ I attempted to solve for the inverse z-transform by decomposing the denominator into complex roots, $\alpha$ and $\alpha^\ast$, to get $$\frac{1}{z^2+z+1} = \frac{A}{z-\alpha}+\frac{B}{z-\alpha^\ast}=\frac{\frac{-i\sqrt{3}}{3}}{z-\alpha}+\frac{\frac{i\sqrt{3}}{3}}{z-\alpha^\ast}$$ for $$\alpha = e^{2\pi i /3} \ \ \text{and} \ \ \alpha^\ast = e^{-2\pi i /3}$$ And ultimately this leads to a long process of manipulating a series to relate it to the definition of the z-tranform. I was wondering if there was a faster or easier way to solve this problem, perhaps with a more direct series expansion
Notice that $(1-z)(1+z+z^2)=1-z^3$. (That's a trick worth noting whenever you're working with $1+z+z^2+\ldots+z^n$.) So $1/(z^2+z+1)$ is actually $(1-z)/(1-z^3)$. $1/(1-z^3)$ is the sum of a geometric progression, i.e. $1+z^3+z^6+\ldots$. So the final result is $1+z^3+z^6+\ldots-z(1+z^3+z^6+\ldots)$ So if we write $X(z)=\sum_{n=0}^\infty a_n z^n$ we have $a_n=\cases{1&$n=0$ mod $3$\cr -1&$n=1$ mod $3$\cr0&$n=2$ mod 3}$ It's a surprisingly simple result!
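sympy confirms the coefficient pattern directly (my own one-line check):

```python
import sympy as sp

z = sp.symbols('z')
print(sp.series(1/(z**2 + z + 1), z, 0, 10))
# 1 - z + z**3 - z**4 + z**6 - z**7 + z**9 + O(z**10)
```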
{ "language": "en", "url": "https://math.stackexchange.com/questions/823278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
a question about the evaluation of a triple integral, I am stuck! How to use the method of orthogonal transformation to figure out the triple integral? I am stuck on it! The triple integral is: $$ \iiint\cos\left(ax + by + cz\right)\,{\rm d}x\,{\rm d}y\,{\rm d}z \qquad\mbox{and}\qquad x^{2} + y^{2} + z^{2} \leq 1 $$ My solution: I want to suppose $\quad u = ax + by + cz\,,\quad v=y\quad$ and $\quad w=z$. \begin{align} &\mbox{Then}\quad\iiint\cos\left(ax + by +cz\right)\,{\rm d}x\,{\rm d}y\,{\rm d}z =\iiint { 1\over{ a}}\cos\left(u\right)\,{\rm d}u\,{\rm d}v\,{\rm d}w \\[3mm]&\mbox{and}\quad \left({1\over a}\,u - {b\over a}\,v - {c\over a}\,w\right)^{2} + v^{2} + w^{2} \leq 1 \end{align} But I don't know how to continue. Is that right? Or can someone use other methods to solve the question? You don't need to use orthogonal transformation necessarily.
Let $\vec{u}$ be the vector $(a,b,c)$. Let $\lambda = |\vec{u}| = \sqrt{a^2+b^2+c^2}$ and $\displaystyle\;\hat{u} = \frac{\vec{u}}{|\vec{u}|}$ be the associated unit vector. Pick two more unit vectors $\hat{v}$, $\hat{w}$ such that $\hat{u}, \hat{v}, \hat{w}$ forms an orthonormal basis. You then parametrize the points in $\mathbb{R}^3$ as $$\vec{r} = (x,y,z) = x\hat{x} + y\hat{y} + z\hat{z} = u\hat{u}+v\hat{v}+w\hat{w}$$ This is the sort of orthogonal transform you are supposed to use. You don't need to work out what are $\hat{v}$ and $\hat{w}$ exactly. What you need to know is they exist and under this transform, both the unit sphere and the volume element are preserved. i.e. $$|\vec{r}| \le 1 \quad\iff\quad x^2 + y^2 + z^2 \le 1 \quad\iff\quad u^2 + v^2 + w^2 \le 1$$ $$dx dy dz = du dv dw$$ Since $ax+by+cz = \lambda u$, your integral can be rewritten and evaluated as $$\begin{align} \int_{|\vec{r}|\le 1}\cos(\lambda u) du dv dw =& \pi \int_{-1}^1 (1-u^2)\cos(\lambda u)du = \frac{\pi}{\lambda}\int_{-1}^1 (1-u^2)d \sin(\lambda u)\\ =& \frac{2\pi}{\lambda}\int_{-1}^1 \sin(\lambda u) u du = -\frac{2\pi}{\lambda^2}\int_{-1}^1 u d\cos(\lambda u)\\ =& -\frac{2\pi}{\lambda^2}\left\{ \big[u\cos(\lambda u)\big]_{-1}^1 - \int_{-1}^1\cos(\lambda u) du \right\}\\ =& \frac{4\pi}{\lambda^2}\left(\frac{\sin\lambda}{\lambda} - \cos\lambda\right) \end{align} $$ As a double check, consider what happens for small $\lambda$. We have $$\frac{4\pi}{\lambda^2}\left(\frac{\sin\lambda}{\lambda} - \cos\lambda\right) \sim \frac{4\pi}{\lambda^2}\left(\left(1 - \frac{\lambda^2}{6}\right) - \left(1 - \frac{\lambda^2}{2}\right) + O(\lambda^4)\right) = \frac{4\pi}{3} + O(\lambda^2)$$ In the limit $\lambda \to 0$, one recover the volume of the unit sphere $\displaystyle\;\frac{4\pi}{3}\;$ as expected.
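A Monte Carlo spot check of the closed form (my own sketch; the choice $(a,b,c)=(1,2,2)$, hence $\lambda=3$, is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c = 1.0, 2.0, 2.0
lam = np.sqrt(a*a + b*b + c*c)   # lambda = 3 here

# rejection-sample uniform points in the unit ball
p = rng.uniform(-1, 1, size=(2_000_000, 3))
p = p[(p**2).sum(axis=1) <= 1]
mc = (4*np.pi/3) * np.cos(p @ np.array([a, b, c])).mean()

closed = (4*np.pi/lam**2) * (np.sin(lam)/lam - np.cos(lam))
print(mc, closed)   # agree to ~3 decimal places
```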
{ "language": "en", "url": "https://math.stackexchange.com/questions/823361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Volume between cylinder and plane Problem: Find the volume bounded by $z = y^2, x =0, y =0, z =9-x$. My working: $z$ goes from $y^2$ to $9-x$ so these are the limits of integration. Work out the points of intersection of $9-x$ and $y^2$. When $y=0$, $9-x=0$ and $x=9$. So $x$ goes from 0 to 9. When $x=0$, $y^2 = 9$ so $y=3$ (take the positive one). So $y$ goes from 0 to 9. Then evaluate \begin{align} \int_{x=0}^{x=9} \int_{y=0}^{y=9} \int_{z=y^2}^{z=9-x} dz dy dx &= \int_{x=0}^{x=9} \int_{y=0}^{y=9} y^2 - 9 + x dy dx \\ &= \int_{x=0}^{x=9} 18+3x dx \\ &= \frac{567}{2} \end{align} My textbook says the answer is $\frac{324}{5}$. What have I done wrong?
Your limits of integration don't make sense. The region of integration is given by the set $$R = \{(x,y,z) \in \mathbb R^3 \mid (y^2 \le z \le 9-x) \cap (0 \le x \le 9) \cap (0 \le y \le 3)\}.$$ The projection of $R$ onto the $xz$-plane is simply the triangle $x \ge 0$, $z \ge 0$, $x + z \le 9$. Over this triangle, the curve $y = \sqrt{z}$ is the boundary, so the integral is $$\int_{x=0}^9 \int_{z=0}^{9-x} \int_{y=0}^\sqrt{z} 1 \, dy \, dz \, dx = \frac{324}{5}.$$ You may also project $R$ onto the $yz$-plane, in which case we would have $$\int_{y=0}^3 \int_{z=y^2}^9 \int_{x=0}^{9-z} 1 \, dx \, dz \, dy = \frac{324}{5}.$$ Projecting onto the $xy$-plane is trickier, but yields the integral $$\int_{y=0}^3 \int_{x=0}^{9-y^2} \int_{z=y^2}^{9-x} 1 \, dz \, dx \, dy = \frac{324}{5}.$$
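The first of these iterated integrals is easy to confirm numerically (my own sketch; note that scipy's tplquad takes the innermost variable as the first argument of the integrand):

```python
import numpy as np
from scipy.integrate import tplquad

# x in [0, 9]; z in [0, 9 - x]; y in [0, sqrt(z)]
val, err = tplquad(lambda y, z, x: 1.0,
                   0, 9,
                   lambda x: 0, lambda x: 9 - x,
                   lambda x, z: 0, lambda x, z: np.sqrt(z))
print(val, 324/5)   # 64.8 64.8
```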
{ "language": "en", "url": "https://math.stackexchange.com/questions/823468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to reason about two points on the unit sphere. I've recently been thinking about various problems involving two points on the surface of a unit sphere. Let's specify them with a pair of unit 3-vectors ${\bf \hat a}$ an ${\bf \hat b}$. Is there some aid to thinking about this 4-dimensional space? In particular: * *What is its topology? (In terms suitable for dumb engineers like me.) *How do I integrate over parts of this space? The measure would derive from surface area on the original sphere but the integrands of interest are probably all simple functions the scalar ${\bf \hat a\cdot \hat b}$ *Is there a coordinate space or other model that makes it easy to deal with questions like (1) and (2).
This is $S^2 \times S^2$; you can give it the product topology, so that a basis for the topology is given by $\{U \times V \mid U, V \text{ open } \subset S^2\}$. In order to think about 1 and 2, it might help to think of it as a manifold; you can use polar coordinates to homeomorphically biject open sets to open sets of $\mathbb{R}^4$, and you can integrate by using the change of coordinates formula: https://en.wikipedia.org/wiki/Integration_by_substitution#Substitution_for_multiple_variables
{ "language": "en", "url": "https://math.stackexchange.com/questions/823573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Similarity between matrices. I have two matrices, $A = \begin{pmatrix} 1&2\\3&4\end{pmatrix}$ $B =\begin{pmatrix} 1&2\\-1&-4\end{pmatrix}$ I need to check if $A$ is similar to $B$. I did so by first computing the characteristic polynomial of the first one and the second one. I got that $\det(t I-A) = t^2 -5t -2$ and $\det(t I-B) = t^2+3t-2$ Which means they are not similar. Please correct me if I'm wrong. Also, I have another two matrices $C = \begin{pmatrix} i\sqrt{2}&0&1 \\0&1&0 \\ 1&0&-i\sqrt{2}\end{pmatrix}$ $D = \begin{pmatrix} i&0&1 \\0&1&0 \\ 0&0&-i\end{pmatrix}$ And I need to check the similarity between them. I got that: $\det(t I-C) = (t-i)(t-1)(t+i)$ and $\det(t I-D) = (i\sqrt{2}t + 1)(t-1)$ Which makes them not similar. Are my characteristic polynomials wrong?
Now, let's prove similar matrices have equal traces. General proposition: Let $A\in M_{mn}(\mathbb F)$ and $B\in M_{nm}(\mathbb F)$. Then $\operatorname{trace}(AB)=\operatorname{trace}(BA)$. Note: $AB\in M_m(\mathbb F)$ and $BA\in M_n(\mathbb F)$. As stated in previous answers, two matrices $A,B\in M_n(\mathbb F)$ are similar $\iff\exists S\in M_n(\mathbb F),\det S\ne 0$ s.t. $$A=S^{-1}BS$$ $\operatorname{trace} A=\operatorname{trace}\left(S^{-1}BS\right)=\operatorname{trace}\left(\left(S^{-1}B\right)S\right)=\operatorname{trace}\left(S\left(S^{-1}B\right)\right)=\operatorname{trace}\left(SS^{-1}B\right)=\operatorname{trace} B$
{ "language": "en", "url": "https://math.stackexchange.com/questions/823638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to approximate $\ln{2}$ such that the error is less than $0.001$ Given that $$1.4142<\sqrt{2}<1.4143,$$ use it to approximate $$\ln{2}$$ such that the error is less than $0.001$. This is from the National Higher Education Entrance Examination.
Let us try : $$\log(2)=2\log(\sqrt 2)$$ Now, let us use $$\log\frac{1+x}{1-x}=2\Big(\frac{x}{1}+\frac{x^3}{3}+\frac{x^5}{5}+...\Big)$$ and use $$\frac{1+x}{1-x}=\sqrt 2$$ that is to say $x=3 - 2 \sqrt 2$, which means, according to $1.4142<\sqrt{2}<1.4143$, $0.1714<x<0.1716$ (this is a small number which will make the series expansion converging quite fast). If we use the first term only, the result is between $0.3428$ and $0.3432$ (this is already good); using the first two terms, the result is between $0.346157$ and $0.346569$; using the first three terms, the result is between $0.346216$ and $0.346628$.
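Carrying the computation out numerically (my own sketch; recall $\log(2)=2\log(\sqrt2)$, so the series value must be doubled):

```python
from math import sqrt, log

x = 3 - 2*sqrt(2)                 # makes (1+x)/(1-x) = sqrt(2)
# ln 2 = 2*ln(sqrt 2) = 4*(x + x^3/3 + x^5/5 + ...)
approx = 4 * sum(x**(2*k + 1) / (2*k + 1) for k in range(3))
print(approx, log(2))             # three terms give ~6 correct digits
```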
{ "language": "en", "url": "https://math.stackexchange.com/questions/823839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Compute $\int_0^1 \frac{\arcsin(x)}{x}dx$ $$\int_0^1 \frac{\arcsin(x)}{x}dx$$ This is a proposed problem for a Calculus II exam, and I have absolutely no idea how to solve it. Tried using Frullani or Lobachevsky integrals, or beta and gamma functions, but I can't even find a way to start it. Wolfram Alpha gives a kilometric solution, but I know that cannot be the only answer. Any help appreciated!
Let $y=\arcsin x\;\Rightarrow\;\sin y =x\;\Rightarrow\;\cos y\ dy=dx$, then $$ \int_0^1 \frac{\arcsin(x)}{x}dx=\int_0^{\Large\frac\pi2}y\cot y\ dy. $$ Now use IBP by taking $u=y$ and $dv=\cot y\ dy\;\Rightarrow\;v=\ln(\sin y)$, then \begin{align} \int_0^{\Large\frac\pi2}y\ \cot y\ dy&=\left.y\ln(\sin y)\right|_0^{\Large\frac\pi2}-\int_0^{\Large\frac\pi2}\ln(\sin y)\ dy\\ &=-\int_0^{\Large\frac\pi2}\ln(\sin y)\ dy. \end{align} The last integral can be evaluated by using property $$ \int_a^b f(x)\ dx=\int_a^b f(a+b-x)\ dx. $$ We obtain $$ \int_0^{\Large\frac\pi2}\ln(\sin y)\ dy=-\frac\pi2\ln2, $$ where $$ \int_0^{\Large\frac\pi2}\ln(\sin y)\ dy=\int_0^{\Large\frac\pi2}\ln(\cos y)\ dy\quad\Rightarrow\quad\text{by symmetry}. $$ Thus $$ \int_0^1 \frac{\arcsin(x)}{x}dx=\large\color{blue}{\frac\pi2\ln2}. $$
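A numerical confirmation (my own one-liner):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.arcsin(x)/x, 0, 1)
print(val, np.pi/2 * np.log(2))   # both ~1.0887930451518
```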
{ "language": "en", "url": "https://math.stackexchange.com/questions/823923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Is this integral with sine and cosine such a challenge? ...or maybe I just don't know some specific trick with trigonometric functions? Well, anyway, here it is: $$\int{\sin^6{x}\cos^4{x}\, dx}$$ I'm bored with it, because I get 9 integrals out of 1 and the whole thing frustrates me as hell. So, is there a simpler way of integrating this or should I just make up my mind to it and integrate and integrate and integrate and integrate...untill the end comes? By the way, here's the answer: $$ \frac{3 x}{256}-\frac{1}{512} \sin (2 x)-\frac{1}{256} \sin (4 x)+\frac{\sin (6 x)}{1024}+\frac{\sin (8 x)}{2048}-\frac{\sin (10 x)}{5120}$$
How about this? $$\int\sin^6x\cos^4xdx=\int\sin^4x\cos^4x(\sin^2x)dx=\frac1{32}\int\sin^42x(1-\cos2x)dx=$$ $$\frac1{32}\int\sin^42xdx-\frac1{32}\int\sin^42x\cos2xdx$$ This should cut your work down quite a bit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/823979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
$ \sum\limits_{n=1}^\infty \dfrac{(1+i)^n}{n^2}$ is divergent, and no idea about $\sum\limits_{n=1}^\infty \dfrac{(3+4i)^n}{5^n\,\sqrt[999]{n}}$ How can one see that $ \sum\limits_{n=1}^\infty \dfrac{(1+i)^n}{n^2}$ is divergent, and by which criterion? I was using the binomial theorem for $ (1+i)^n $ as $ \sum\limits_{k=0}^n \dbinom{n}{k} i^k$, but it seems to get very complicated and not clear for me. Question 2: What is the result in this case: $\sum\limits_{n=1}^\infty \dfrac{(3+4i)^n}{5^n\,\sqrt[999]{n}}$? The root criterion gives me 1 in this case...
This is for your second sum: $$|3+4i|=5$$ $$\arg(3+4i)=\arctan(4/3)=:\alpha \approx 1$$ $$\sum\limits_{n=1}^\infty \frac{(3+4i)^n}{5^n\,\sqrt[999]{n}} = \sum\limits_{n=1}^\infty \frac{\exp(\alpha n i)}{\sqrt[999]{n}}$$ $$\frac{1}{\sqrt[999]{n}} \downarrow 0$$ Since $\alpha$ is not an integer multiple of $2\pi$, the partial sums $\sum_{n=1}^N \exp(\alpha n i)$ form a bounded geometric sum, so this series converges by Dirichlet's test, the natural generalization of Leibniz' alternating series test to sums of this shape.
{ "language": "en", "url": "https://math.stackexchange.com/questions/824050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Linear map from zero vector to zero vector. I am reading an introduction on linear maps in my text book on linear algebra. The following statements are made: Suppose $G_1 (\vec{u}) = (x_1 + 2x_2 + 3x_3 + 1, 4x_1, 9x_3)$ Then we can use the following property of linear maps. Let $\lambda = 0$ and $\vec{u} = \vec{0}$ $$G(\lambda\vec{u}) = \lambda G(\vec{u})$$ And specifically: $$G(\vec{0}) = 0 \cdot G(\vec{0}) = \vec{0}$$ This means that a linear map maps the zero vector to the zero vector. It also means that $G_1$ cannot be a linear map, this is because $G_1(0,0,0) = (1,0,0) \neq (0,0,0)$. The constant term $1$ is breaking the linearity. My analysis I don't understand the above statements completely. For example this statement: $G(\vec{0}) = 0 \cdot G(\vec{0}) = 0$ should be true for any function $G(\vec{u})$, since whatever the result of the map $G(\vec{u})$ will be, it will be multiplied by $0$ and result in $\vec{0}$. In the case above it would be $0 \cdot (1,0,0) = \vec{0}$. This would map the zero vector to the zero vector and hence be a correct linear map? Can anyone please explain this to me?
You are confusing two different statements here. The first statement is a property of linear transformations. Specifically, it is true that $G(\lambda \mathbf{u}) = \lambda G(\mathbf{u})$ for any $\lambda$ and any $\mathbf{u}$. A consequence of this is that every linear map must map $\mathbf{0}$ to $\mathbf{0}$, because $\mathbf{0} = 0\mathbf{v}$ for any vector $v$, so $$G(\mathbf{0}) = G(0\mathbf{v}) = 0G(\mathbf{v}) = \mathbf{0}.$$ This property holds true for any vector $\mathbf{v}$ we choose, and certainly it holds if we wish to choose $\mathbf{v} = \mathbf{0}$. So, this is a necessary condition for a mapping $G$ to be linear. Now, let's see what happens under your mapping $G_1$ when applied to $\mathbf{0}$: $$G_1(\mathbf{0}) = (1,0,0)^T.$$ Since $G_1$ doesn't map $\mathbf{0}$ to $\mathbf{0}$, then it cannot be linear! Therefore, it doesn't make sense to try to pull zero out of the function, because the function is not linear. Hence, $G_1$ does not satisfy the property that $G_1(\lambda \mathbf{u}) = \lambda G_1(\mathbf{u})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/824127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
How to explain the perpendicularity of two lines to a High School student? Today I was teaching my friend from High School about linear functions. One of the exercises we had to do was finding equations of perpendicular and parallel lines. Explaining parallel equations was quite easy, if we have the equation $y = ax + b$ it's not hard to show with a couple of examples that changing the parameter $b$ only "moves" the line up or down but doesn't change the angle, thus lines $k$ and $\ell$ are parallel iff $a_k = a_{\ell}$. However, I couldn't find a clear way to explain why those lines are perpendicular iff $a_k \times a_{\ell}= -1$. Of course, it's obvious if we use the fact that $a = \tan (\alpha)$ with $\alpha$ being the angle at which line intersects the X axis and that $\tan (\alpha) = - \cot (\frac{\pi}{2} + \alpha)$. But this forces us to introduce trigonometry and raises oh so many questions about the origin of the equation above. Does anyone know a good, simple explanation that's easy to remember?
Giving a good answer to this question may also be a way to give a rigorous definition of "perpendicular" and of the "measure" (in radians) of an angle (not a simple task at elementary level) without using angles measured in degrees. I sketch the reasoning in steps: 1) Let $O$ be the intersection point of two straight lines $r$ and $s$, and take an orthogonal coordinate system with center $O$ such that the equations of the lines are $ r \rightarrow y=ax $; $s \rightarrow y=bx$. 2) Take two points $A,B$ on $r$ such that $O$ is the midpoint of $AB$, and two points $C,D$ on $s$ in the same way. Using the midpoint formula students can find that: $$ A\equiv (x_A,ax_A) \qquad B\equiv (-x_A,-ax_A) $$ $$ C\equiv (x_C,bx_C) \qquad D\equiv (-x_C,-bx_C) $$ 3) Define: the straight lines $r$ and $s$ are perpendicular iff $\overline {AC}=\overline{BC}$ (i.e. $s$ is the axis of the segment $AB$). 4) Proof: $r$ is perpendicular to $s$ iff $ab=-1$. We have: $$ \overline {AC}=\overline{BC} \Rightarrow \overline {AC}^2=\overline{BC}^2 \iff $$ $$ \iff (x_A-x_C)^2+(ax_A-bx_C)^2=(-x_A-x_C)^2+(-ax_A-bx_C)^2 \iff $$ and, with simple calculations, we have $$ x_Ax_C(1+ab)=0 $$ so, if $x_A,x_C \ne 0$, we have $ab=-1$, and we have obtained the result without the use of "angles" and with a rigorous definition of perpendicularity. 5) At last, using a circle with center $O$ and radius $r=\overline {OA}$ (rescaling $C$ and $D$ so that they also lie on this circle), and noting that equal chords subtend equal arcs, the four chords $AC, CB, BD, DA$ are equal when the lines are perpendicular, so each arc is a quarter of the circumference; we can then define the "measure" of the right angle $\angle AOC$ to be the arc $AC$ divided by $r$, which is $\pi/2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/824175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 13, "answer_id": 11 }
Equation of matrices Let $V$ be a 3-dimensional vector space over $F$. Let $T:V\rightarrow V$ be a linear transformation and $E$ an ordered basis such that: $$[ T ]_E = \left( \matrix{ 0 & 0 & a \cr 1 & 0 & b \cr 0 & 1 & c } \right)$$ where $a,b,c \in F$. Show there's $v\in V$ such that: $$v \in \ker\left(T^3 + (-c)T^2 + (-b)T + (-a)\operatorname{Id}_V\right)$$ Moreover, show that $$T^3 + (-c)T^2 + (-b)T + (-a)\operatorname{Id}_V = 0$$ It seems to me a bit unreasonable to calculate $T^3, T^2$ (which I did, actually). What's the catch of this exercise? What approach should I take? Thanks.
This is a case of the Cayley–Hamilton theorem: the result that every square matrix is a root of its characteristic polynomial. The catch is that the given matrix is precisely the companion matrix of $x^3 - cx^2 - bx - a$: expanding $\det(xI - [T]_E)$ along the first row shows that this is exactly its characteristic polynomial. So by Cayley–Hamilton, if you evaluate $T$ in it (with $T^n=T \circ T\circ\cdots\circ T$, $n$ times) you will get $0$ (meaning the zero transformation), and then every $v\in V$ lies in the kernel. See, e.g.: http://mathworld.wolfram.com/Cayley-HamiltonTheorem.html
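For readers who want to verify this symbolically, here is a short sketch (assuming the sympy library; the variable names are mine, not part of the original exercise):

```python
import sympy as sp

a, b, c, t = sp.symbols('a b c t')
M = sp.Matrix([[0, 0, a],
               [1, 0, b],
               [0, 1, c]])

# The characteristic polynomial of the companion matrix:
print(M.charpoly(t).as_expr())                       # t**3 - c*t**2 - b*t - a

# Cayley-Hamilton: evaluating it at M yields the zero matrix.
print((M**3 - c*M**2 - b*M - a*sp.eye(3)).expand())  # matrix of zeros
```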
{ "language": "en", "url": "https://math.stackexchange.com/questions/824271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine the Cumulative Distribution Function (CDF) of a truncated value? It is the last part (part h) that I am having problems with. I know you use integration and then split it into 2 parts. But how exactly do you do it? A detailed answer would be very helpful! Please help!
It is a lot simpler than you might think. Here is a hint: there are two cases if $U = \min\{Y, b\}$. Either $Y < b$ or $Y \ge b$. If the latter, then $\Pr[U = b] = \Pr[Y \ge b] = S_Y(b)$. Otherwise, $Y < b$ implies $U = Y$ and you already know the density for that situation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/825336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Integral in $\mathbb{R}^n$ What is the value of $$\int\limits_{|x|\geq1}\frac{1}{|x|^n}dx , \quad x \in \mathbb{R}^n?$$ This integral is part of another problem.
The integral $$ K(n,\alpha) := \int\limits_{|x|\geq1}\frac{1}{|x|^\alpha}dx , \quad x \in \mathbb{R}^n, $$ converges only if $\alpha > n$; in particular, for $\alpha = n$ (the integral in the question) it diverges. When $\alpha > n$, passing to polar coordinates gives a closed form: $$ K(n,\alpha) = \sigma(S^{n-1})\int_1^\infty r^{n-1-\alpha}\,dr = \frac{2\pi^{n/2}}{\Gamma(n/2)}\cdot\frac{1}{\alpha-n}, $$ where $\sigma(S^{n-1})=\frac{2\pi^{n/2}}{\Gamma(n/2)}$ is the surface area of the unit sphere. This also makes explicit how the integral blows up as $\alpha \downarrow n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/825424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How does a row of zeros make a free variable in linear systems of equations? I don't understand how a row of zeros gives a free variable when solving systems of linear equations. Here's an example matrix, and let us say that we're trying to solve $Ax=0$: $$\left[ \begin{matrix} 2 & -3 & 0 \\ 3 & 5 & 0 \\ 0 & 1 & 0 \\ \end{matrix} \right]$$ This makes sense to me - you have a COLUMN of numbers corresponding to the number of times the variable $x_3$ is used in each equation, and since they are all zeros, $x_3$ could have been any real number, because we never get to manipulate our equations to determine a value for it. Hence, it is a free variable, as it is not subject to the constraints of the equations. $$\left[ \begin{matrix} 2 & -3 & 5 \\ 0 & 5 & 1 \\ 0 & 0 & 0 \\ \end{matrix} \right]$$ Now for this second example, $x_3$ would still be a free variable. Why is this so? $x_3$ is being used in the other two equations, where you could certainly come up with a finite set of answers for this variable rather than saying "It could've been anything!", right? Also, is it entirely arbitrary that $x_3$ is the free variable, or could it be decided that $x_1$ or $x_2$ is free instead? Could someone explain to me, in layman's terms, why a row of zeros magically makes a free variable? Please help me :(
Fewer equations than unknowns means you cannot solve the set in a unique way. If you have three variables and in effect two equations as you have, then any one of the three variables can be seen as free. However, once you choose a value for it, the value of the two other variables will be forced. Studying interactions like this in general polynomial equations (i.e. the variables appear with natural number exponents, no roots or logarithms or anything) is the field of algebraic geometry. When working with matrices, row reducing and finding inverses and stuff, all of the variables only have the exponent $1$, and therefore it's called linear algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/825515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Possibilities Of Dividing Cards In a card game we give 13 cards to each of the 4 players. How many divisions of the cards are there? I thought it is ${4\choose1}*{52\choose 13}$, but the answer is $\frac{52!}{13!^4}$. Where did I go wrong?
This is a multinomial coefficient. We have 52 cards to separate among 4 players, each receiving 13. So: $\dbinom{52}{13,13,13,13} = \dfrac{52!}{13!\,13!\,13!\,13!} = \dfrac{52!}{13!^4}$. Your way counts how many ways you can hand 13 cards out of the 52 to one of the four players, which is different from what the question asks for: after that first hand is dealt, the remaining 39 cards still have to be distributed among the other three players.
{ "language": "en", "url": "https://math.stackexchange.com/questions/825651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is an example of a real application of cubic equations? I haven't yet encountered a case that needs to be solved by cubic equations (degree three)! Can you give me some information about the branches of science, or the kinds of criteria, that naturally lead to equations of this nature?
To expand on @DrkVenom's post, elliptic curve cryptography (ECC) is a great example of the deep application of cubics. ECC is currently at the forefront of research in public key cryptography. It has the important benefit of requiring a smaller key size while maintaining strong security. Here is an accessible and well-illustrated introduction to the subject: http://arstechnica.com/security/2013/10/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/
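To make the role of the cubic concrete, here is a minimal sketch of the elliptic-curve group law, the chord-and-tangent construction on the cubic $y^2 = x^3 + ax + b$ over a prime field. The curve parameters below are chosen purely for illustration and are far too small for real cryptographic use:

```python
# Toy elliptic-curve point addition over GF(p): the chord-and-tangent
# group law on the cubic y^2 = x^3 + a*x + b.
p, a, b = 97, 2, 3
O = None  # the point at infinity, which acts as the group identity

def add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                        # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

P = (0, 10)       # on the curve, since 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(add(P, P))  # -> (65, 32), which also lies on the curve
```

Real ECC repeats this addition (scalar multiplication) on curves of size around $2^{256}$; the hardness of undoing that repetition is what the smaller key sizes mentioned above rely on.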
{ "language": "en", "url": "https://math.stackexchange.com/questions/825699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 18, "answer_id": 12 }
Every weakly convergent sequence is bounded Theorem: Every weakly convergent sequence in $X$ is bounded. Let $\{x_n\}$ be a weakly convergent sequence in $X$. Let $T_n \in X^{**}$ be defined by $T_n(\ell) = \ell(x_n)$ for all $\ell \in X^*$. Fix an $\ell \in X^*$. Since the sequence $\{\ell(x_n)\}$ is convergent, the set $\{T_n(\ell) : n \in \mathbb{N}\}$ is bounded. By the Uniform Boundedness Principle, $ \sup_{n \in \mathbb{N}} \|x_n\| = \sup_{n \in \mathbb{N}} \|T_n\| < \infty,$ i.e. $\{x_n\}$ is bounded. My question is: why is $ \sup_{n \in \mathbb{N}} \|x_n\| = \sup_{n \in \mathbb{N}} \|T_n\|$ ?
The equality $\|x_n\|=\|T_n\|$ is an instance of the fact that the canonical embedding $J: X \to X^{**}$, $J(x)(\ell)=\ell(x)$, is an isometry; here $T_n = J(x_n)$. The isometry is a consequence of the Hahn-Banach theorem: for each $x\neq 0$ it provides an $\ell\in X^*$ with $\|\ell\|=1$ and $\ell(x)=\|x\|$, giving $\|J(x)\|\ge\|x\|$, while $|J(x)(\ell)|=|\ell(x)|\le\|\ell\|\,\|x\|$ gives the reverse inequality. See also Weak convergence implies uniform boundedness which is stated for $L^p$ but the proof works for all Banach spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/825790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 2, "answer_id": 0 }
Approximate the probability An immortal snail is at one end of a perfect rubber band with a length of 1km. Every day, it travels 10cm in a random direction, forwards or backwards on the rubber band. Every night, the rubber band gets stretched uniformly by 1km. As an example, during the first day the snail might advance to x=10cm, then the rubber band gets stretched by a factor of 2, so the snail is now at x=20cm on a rubber band of 2km. The question: Approximate the probability that it will reach the other side at some point (better approximations are obviously preferred, but any bounds are acceptable as long as they are found by doing something interesting)
I do not believe the snail will reach the end in any reasonable time, but it does reach the end eventually. Assume the extreme case where the snail always moves forward: at time 0, the snail moves 10cm to the right, and is at 10cm on a 1km band. The band then stretches, so the snail is now at 20cm on a 2km band. The snail moves 10cm again and is now at 30cm on a 2km band, but then the band stretches and the snail is at 45cm on a 3km band. The key point is that the uniform nightly stretch preserves the fraction of the band that the snail has covered. During day $i$ the band is $i$ km $= 100{,}000\,i$ cm long, so that day's 10cm crawl advances the snail by the fraction $\frac{10}{i\cdot 100{,}000}$ of the band. Hence, writing $\textrm{Pos}(n)$ for the fraction of the band covered after day $n$, $$ \textrm{Pos}(n) = \sum_{i=1}^n \frac{10}{i\cdot 100{,}000} = \frac{1}{10{,}000}\sum_{i=1}^n \frac{1}{i} = \frac{H_n}{10{,}000}. $$ Basically, we have a harmonic series here, which diverges, so the always-forward snail does reach the end, at the finite but enormous day when $H_n$ first exceeds $10{,}000$. Once moving backwards is included, it takes on the properties of a 1-dimensional random walk with shrinking steps, which, given infinite time, I believe still reaches the end; certainly not quickly, since even moving forward always takes astronomically long. Edit Ross is correct: $\textrm{Pos}(n)$ is the fraction of the band at which the snail sits, so once $H_n$ is greater than 10,000, the snail will hit the end of the band, after about $e^{10000}$ days, a very big number. I am still not sure if it ever hits in finite time with probability 1 when there is positive probability of moving backwards.
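As a sanity check on the growth rate, here is a small sketch for the deterministic (always-forward) snail with a general ratio $R$ of band length to step length; in the problem $R = 10{,}000$, so the loop below only illustrates the trend with smaller values:

```python
import math

# Fraction covered after day n is H_n / R, so the snail finishes when
# H_n >= R.  Since H_n ~ ln(n) + 0.5772..., the finishing day grows
# like e^(R - 0.5772): finite, but astronomical for R = 10000.
def days_to_finish(R):
    h, n = 0.0, 0
    while h < R:
        n += 1
        h += 1.0 / n
    return n

for R in (5, 10, 15):
    print(R, days_to_finish(R), round(math.exp(R - 0.5772)))
```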
{ "language": "en", "url": "https://math.stackexchange.com/questions/825855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$2$-dimensional Noetherian integrally closed domains are Cohen-Macaulay Any 1-dimensional Noetherian domain is Cohen-Macaulay (C-M). For the $2$-dimensional case, the condition of being integrally closed has to be added for a Noetherian domain to be C-M, which I could not prove. Would anybody be so kind as to solve this? I am also searching for a non-C-M $2$-dimensional Noetherian domain (which is then necessarily not integrally closed). Thanks for any cooperation!
Let $R$ be a noetherian integral domain of dimension $2$. If $R$ is integrally closed, then $R$ is Cohen-Macaulay. From Serre's normality criterion we have that $R$ satisfies $(R_1)$ and $(S_2)$. $(R_1)$ gives that all the localizations of $R$ at height one primes are regular, and therefore Cohen-Macaulay. (In fact, we don't need to use $(R_1)$ in order to prove that $R_{\mathfrak p}$ is Cohen-Macaulay for prime ideals $\mathfrak p$ of height one.) Now let $\mathfrak p$ be a height two prime ideal of $R$. From $(S_2)$ we get that $\operatorname{depth}R_{\mathfrak p}\ge2=\dim R_{\mathfrak p}$, so $R_{\mathfrak p}$ is Cohen-Macaulay. $k[x^4,x^3y,xy^3,y^4]$ is $2$-dimensional, not Cohen-Macaulay and not integrally closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/825920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the terms for the elements in the Euclidean algorithm $a = qb + r$? In a ring $R$ with a Euclidean norm $N$, given $a,b \in R$ with $b\neq 0$, there are elements $q, r$ such that $a = qb + r$. Is there any special terminology for the elements $a,b,q,r$ in this context? For example, if we write $a = \frac{p}{q}$, then $p$ is the dividend or numerator, and $q$ is the divisor or denominator. It seems perhaps $b$ could be called the modulus and another number some word with the "-and" suffix. Has any terminology been established?
$a = qb + r$ * *$q$ quotient, *$b$ divisor, *$r$ remainder, *$a$ dividend.
{ "language": "en", "url": "https://math.stackexchange.com/questions/826010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding an equation of circle which passes through three points How to find the equation of a circle which passes through these points $(5,10), (-5,0),(9,-6)$ using the formula $(x-q)^2 + (y-p)^2 = r^2$. I know i need to use that formula but have no idea how to start, I have tried to start but don't think my answer is right.
I know i need to use that formula but have no idea how to start \begin{equation*} \left( x-q\right) ^{2}+\left( y-p\right) ^{2}=r^{2}\tag{0} \end{equation*} A possible very elementary way is to use this formula thrice, one for each point. Since the circle passes through the point $(5,10)$, it satisfies $(0)$, i.e. $$\left( 5-q\right) ^{2}+\left( 10-p\right) ^{2}=r^{2}\tag{1}$$ Similarly for the second point $(-5,0)$: $$\left( -5-q\right) ^{2}+\left( 0-p\right) ^{2}=r^{2},\tag{2}$$ and for $(9,-6)$: $$\left( 9-q\right) ^{2}+\left( -6-p\right) ^{2}=r^{2}.\tag{3}$$ We thus have the following system of three simultaneous equations and in the three unknowns $p,q,r$: $$\begin{cases} \left( 5-q\right) ^{2}+\left( 10-p\right) ^{2}=r^{2} \\ \left( -5-q\right) ^{2}+p^{2}=r^{2} \\ \left( 9-q\right) ^{2}+\left( 6+p\right) ^{2}=r^{2} \end{cases}\tag{4} $$ To solve it, we can start by subtracting the second equation from the first $$\begin{cases} \left( 5-q\right) ^{2}+\left( 10-p\right) ^{2}-\left( 5+q\right) ^{2}-p^{2}=0 \\ \left( 5+q\right) ^{2}+p^{2}=r^{2} \\ \left( 9-q\right) ^{2}+\left( 6+p\right) ^{2}=r^{2} \end{cases} $$ Expanding now the left hand side of the first equation we get a linear equation $$\begin{cases} 100-20q-20p=0 \\ \left( 5+q\right) ^{2}+p^{2}=r^{2} \\ \left( 9-q\right) ^{2}+\left( 6+p\right) ^{2}=r^{2} \end{cases} $$ Solving the first equation for $q$ and substituting in the other equations, we get $$\begin{cases} q=5-p \\ \left( 10-p\right) ^{2}+p^{2}-\left( 4+p\right) ^{2}-\left( 6+p\right) ^{2}=0 \\ \left( 4+p\right) ^{2}+\left( 6+p\right) ^{2}=r^{2} \end{cases} $$ If we simplify the second equation, it becomes a linear equation in $p$ only $$\begin{cases} q=5-p \\ 48-40p=0 \\ \left( 4+p\right) ^{2}+\left( 6+p\right) ^{2}=r^{2} \end{cases} $$ We have reduced our quadratic system $(4)$ to two linear equations plus the equation for $r^2$. From the second equation we find $p=6/5$, which we substitute in the first and in the third equations to find $q=19/5$ and $r^2=1972/25$, i.e $$\begin{cases} q=5-\frac{6}{5}=\frac{19}{5} \\ p=\frac{6}{5} \\ r^{2}=\left( 4+\frac{6}{5}\right) ^{2}+\left( 6+\frac{6}{5}\right) ^{2}= \frac{1972}{25}. \end{cases}\tag{5} $$ So the equation of the circle is \begin{equation*} \left( x-\frac{19}{5}\right) ^{2}+\left( y-\frac{6}{5}\right) ^{2}=\frac{1972}{25}. \end{equation*}
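A quick numeric check that the center and radius found in $(5)$ are consistent with all three points (using only the standard library):

```python
from math import dist, sqrt

center = (19 / 5, 6 / 5)
for point in [(5, 10), (-5, 0), (9, -6)]:
    print(dist(center, point))  # each ~ 8.88144...

print(sqrt(1972) / 5)           # the radius sqrt(1972/25): same value
```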
{ "language": "en", "url": "https://math.stackexchange.com/questions/827072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
$\lim_{x \to 0} \frac{e^{\sin2x}-e^{\sin x}}{x}$ without L'Hospital Anyone has an idea how to find $\displaystyle\lim_{x \to 0} \dfrac{e^{\sin2x}-e^{\sin x}}{x}$ without L'Hospital? I solved it with L'Hospital and the result is $1$ but the assignment is to find it without L'Hospital. Any idea?
Since $e^u = 1 + u + o(u)$ and $\sin u = u + o(u)$ as $u \to 0$, we have $e^{\sin 2x} = 1 + 2x + o(x)$ and $e^{\sin x} = 1 + x + o(x)$ as $x \to 0$. Thus: $\dfrac{e^{\sin 2x} - e^{\sin x}}{x} = \dfrac{(1 + 2x + o(x)) - (1 + x + o(x))}{x} = \dfrac{x + o(x)}{x} \to 1$. (Writing this with little-$o$ remainders, rather than subtracting equivalents, is what makes the cancellation legitimate.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/827287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Solving a simple matrix polynomial Does there exist a $2\times 2$ Matrix $A$ such that $A-A^2=\begin{bmatrix} 3 & 1\\1 & 4\end{bmatrix}$ ?
Yes, there are complex solutions, but no real solutions. Using the brute force method also mentioned by JimmyK, comparing the off-diagonal entries shows that $A$ must be symmetric. The top left entry of $A - A^2$ then becomes $a-a^2-b^2$, which must equal $3$; but for real $a,b$ we have $a-a^2 \le \frac14$, so $a-a^2-b^2 \le \frac14 < 3$, which shows this can only happen if $A$ has complex entries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/827375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
How do you prove that $ n^5$ is congruent to $ n$ mod 10? How do you prove that $n^5 \equiv n\pmod {10}$ Hint given was- Fermat little theorem. Kindly help me out. This is applicable to all positive integers $n$
By Fermat's Little Theorem, $n^5 \equiv n \pmod 5$. Also $n \equiv 0 \text{ or } 1 \pmod 2 \implies n^5 \equiv n \pmod 2$. The Chinese Remainder Theorem guarantees the existence of a solution $\pmod {10}$, as $(2,5) = 1$. From the second congruence, $n^5 = 2k + n$. Substitute into the first congruence: $2k+n \equiv n \pmod 5 \implies 2k \equiv 0 \pmod 5 \implies k \equiv 0 \pmod 5$. So $k = 5m$. Substitute that back into $n^5 = 2k + n$: $n^5 = 2(5m) + n = 10m +n$. Therefore, $n^5 \equiv n \pmod {10}$. $\rm{(QED)}$
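Since both sides of $n^5 \equiv n \pmod{10}$ depend only on $n \bmod 10$, a one-line exhaustive check over the ten residues also confirms the result:

```python
# n^5 mod 10 depends only on n mod 10, so checking residues 0..9
# verifies the congruence for every positive integer n.
print(all(pow(n, 5, 10) == n % 10 for n in range(10)))  # True
```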
{ "language": "en", "url": "https://math.stackexchange.com/questions/827467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How can I expand $\frac{1}{\sqrt{1-x^2}}$ by using the binomial series? How can I expand $\frac{1}{\sqrt{1-x^2}}$ by using the binomial series? I know how to expand $\frac{1}{\sqrt{1-x}}$, but I have no idea how to expand $\frac{1}{\sqrt{1-x^2}}$. Simply differentiating makes the expansion way too complicated.
Simply substitute $x^2$ for $x$ in the expansion of$(1-x)^{-1/2}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/827688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is $\lVert Ax \rVert^2 - \lVert Bx \rVert^2 = \lVert A^TA - B^TB \rVert$? For matrices $A,B\in\mathbb{R}^{m\times n}$ and for a unit vector $x$, is the following true, and if so why? $\lVert Ax \rVert^2 - \lVert Bx \rVert^2 = \lVert A^TA - B^TB \rVert$ Equivalently, is $x$ the eigenvector of $A^TA - B^TB$ corresponding to the largest eigenvalue? Why is this the same as saying that $x$ is a unit vector?
$$\lVert Ax\rVert^2-\lVert Bx\rVert^2=x^T(A^TA-B^TB)x\le \lVert A^TA-B^TB\rVert $$ if $x$ is a unit vector. Equality (for the best choice of $x$) holds if $A^TA-B^TB$ is positive semidefinite. Now, since $A^TA-B^TB$ is symmetric, it is diagonalizable unitarily, and hence $A^TA-B^TB=U\Lambda U^T$ for some orthogonal (unitary) matrix $U$. Then the expression is $$y^T\Lambda y$$ where $y=Ux$ is again a unit vector. Now, $$y^T\Lambda y=\sum_i y_i^2 \lambda_i\le \lambda_{max}\,y^Ty=\lambda_{max}.$$ So, in that case, if $x$ is a unit eigenvector corresponding to the largest eigenvalue of $A^TA-B^TB$, the equality holds (again assuming $A^TA-B^TB$ is PSD, so that $\lambda_{max}=\lVert A^TA-B^TB\rVert$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/827781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier series of $\cosh x$ using the Fourier series of $e^{x}$. I have to find the Fourier series of $\cosh x$ on $(-\ell,\ell)$. What I did was find the Fourier series of $e^{x}=\sum _{n=-\infty}^{\infty }{(-1)^n (\ell+in\pi)\over{\ell^2+n^2\pi^2}}\sinh(\ell)\,e^{{in\pi x}\over\ell}$. Since $\cosh x={e^{x}+e^{-x} \over 2}$, to find the series of $e^{-x}$ I substituted $x=-x$ in the series for $e^{x}$. Can I do that substitution? I am using the complex form of the Fourier series $f(x)=\sum_{n=-\infty}^{\infty } C_n e^{in\pi x \over \ell}$ where $C_n={1\over 2\ell}\int_{-\ell}^\ell f(x)e^{-in\pi x \over \ell} dx$
Yes, you can. Let's do it more carefully: introduce a new variable $t$, assigning to it the value $t=-x$. Then $$e^{-t}=\sum _{n=-\infty}^{\infty }{(-1)^n (\ell+in\pi)\over{\ell^2+n^2\pi^2}}\sinh(\ell)\,e^{{-in\pi t}\over\ell} \tag1$$ But what's in a name? The formula (1) is valid no matter what the variable is called. In particular, it could be called $x$: $$e^{-x}=\sum _{n=-\infty}^{\infty }{(-1)^n (\ell+in\pi)\over{\ell^2+n^2\pi^2}}\sinh(\ell)\,e^{{-in\pi x}\over\ell} \tag2$$ Re-indexing (2) with $n \mapsto -n$ and averaging with the series for $e^{x}$, the imaginary parts of the coefficients cancel and one gets $$\cosh x=\sum _{n=-\infty}^{\infty }{(-1)^n\, \ell \over{\ell^2+n^2\pi^2}}\sinh(\ell)\,e^{{in\pi x}\over\ell}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/827866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The function $f'+f'''$ has at least $3$ zeros on $[0,2\pi]$. Show that if $f\in \mathcal{C}^3$ and is $2\pi$-periodic, then the function $f'+f'''$ has at least $3$ zeros on $[0,2\pi]$. My attempt: $f$ is $2\pi$-periodic and $\mathcal{C}^3$, so we have: $$\lim_{h\rightarrow 0, h>0} \frac{f(h)-f(0)}{h}=\lim_{h\rightarrow 0, h>0} \frac{f(2\pi+h)-f(2\pi)}{h}\Rightarrow f'(0)=f'(2\pi)$$ After that I tried to compute the limit differently: $$ \lim_{h\rightarrow 0, h>0}\frac{f(h)-f(0)}{h}=\lim_{h\rightarrow 0, h<0}\frac{f(h)-f(0)}{|h|}=\lim_{h\rightarrow 0, h<0}-\frac{f(2\pi+h)-f(0)}{h}\Rightarrow f'(0)=-f'(2\pi) $$ which is clearly false, because I get $f'(0)=0$ ($f(x)=\sin(x)$ is a counterexample). For $2$ zeros it is relatively easy, but I am stuck on the additional zero. EDIT: I found this exercise (as usual) here: Revue de la filière Mathématique. This was asked during an oral examination of École normale supérieure, rue d'Ulm.
A possible direction: Write out the Fourier series: $$f(x)= a_0 + \sum\left( a_n \cos nx + b_n \sin nx\right)$$ $$g(x)\equiv f'(x)+f'''(x) = -\sum n (n^2-1) \left(b_n \cos (n x)-a_n \sin (n x)\right)$$ Notice that the $n=1$ term vanishes identically, meaning that the lowest frequency present in $g(x)$ is $n=2$. Also, since there is no constant term, $g$ has zero mean over a period; by the intermediate value theorem, $g(x)$ must then cross zero at at least one point, $x_0$. Intuitively, any function containing no frequencies lower than two must cross zero at least $4$ times in the interval $[0,2\pi]$, though I'm not sure how to complete this proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/827969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 2 }
If $I_n=\sqrt[n]{\int _a^b f^n(x) dx}$ for $n\ge 1$, find with proof $\lim _{n \to \infty}I_n$. Let $a,b \in \mathbb R$, $a<b$, and let $f:[a,b] \to [0,\infty) $ be continuous and non-constant. Attempt, using Riemann sums: $I_n=\sqrt [n]{\int _a^b f^n(x) dx}$, so $I_n=\sqrt [n]{\lim _{k \to \infty}\sum_{i=1}^k f^n(x_i^*)\, \Delta x}$, where $x_i^*\in [x_{i-1},x_i]$ for each $i=1,2,\dots,k$. So our question is: what is $\lim_{n\to \infty}\sqrt [n]{\lim _{k\to \infty}\sum_{i=1}^k f^n(x_i^*)\, \Delta x}$?
Let $\epsilon > 0$. 1. As $f(x)\le \sup f$, $$\int_a^b f^n(x) dx \le (b-a)(\sup f)^n \\ \limsup \left(\int_a^b f^n(x) dx\right)^{1/n} \le \limsup\, (b-a)^{1/n}\sup f = \sup f $$ 2. There are $a'<b'$ such that $a'<y<b' \implies f(y) > \sup f-\epsilon$, because $f$ is continuous and attains its supremum. $$ \liminf \left(\int_a^b f^n(x) dx\right)^{1/n} \ge \liminf \left(\int_{a'}^{b'} (\sup f -\epsilon)^n dx\right)^{1/n} = \sup f -\epsilon $$ As this is true for every $\epsilon$, $$ \lim \left(\int_a^b f^n(x) dx\right)^{1/n}=\sup f $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/828090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Matrices and rank inequality Let $A \in K^{m\times n}$ and $B \in K^{n \times r}$. Prove that $\min\{rk(A),rk(B)\}\geq rk(AB)\geq rk(A)+rk(B)-n$. My attempt at a solution: $(1)$ $AB=(AB_1|...|AB_j|...|AB_r)$ ($B_j$ is the j-th column of $B$); I don't know if the following statement is correct: each column $AB_j$ of $AB$ is a linear combination of the columns of $A$ (with the entries of $B_j$ as coefficients), so $rk(AB) \leq rk(A)$. $(2)$ In a similar way, $AB= \begin{bmatrix} —A_1B— \\ \vdots \\ —A_jB— \\ \vdots \\—A_mB— \end{bmatrix}$ ($A_j$ denotes the j-th row of $A$), so each row of $AB$ is a linear combination of the rows of $B$, from which one deduces $rk(AB)\leq rk(B)$. From $(1)$ and $(2)$ it follows $rk(AB)\leq \min\{rk(A),rk(B)\}$. This is what I've done so far. I am having doubts with, for example, the statement I've conjectured in $(1)$: wouldn't $AB_j$ being a linear combination of the columns of $A$ require $AB_j=\alpha_1 A^{(1)}+\dots+\alpha_n A^{(n)}$ with $\alpha_1,...,\alpha_n \in K$ and $A^{(i)}$ the columns of $A$, and is $AB_j$ really of this form? This is a major doubt I have, and the same goes for $(2)$. I also need help to show the inequality $rk(AB)\geq rk(A)+rk(B)-n$.
Use the dimension theorem on $A|_{\mathop{\rm Im}(B)}:\mathop{\rm Im}(B)\subseteq K^n\longrightarrow K^m$. Then $$\begin{align}\dim(\mathop{\rm Im}(B))&=\dim(\mathop{\rm Im}(A|_{\mathop{\rm Im}(B)}))+\dim(\mathop{\rm Ker}(A|_{\mathop{\rm Im}(B)}))\\&=\dim(\mathop{\rm Im}(AB))+\dim(\mathop{\rm Ker}(A)\cap\mathop{\rm Im}(B))\\&\leq\dim(\mathop{\rm Im}(AB))+\dim(\mathop{\rm Ker}(A)),\end{align}$$ and since $\dim(\mathop{\rm Ker}(A))=n-\dim(\mathop{\rm Im}(A))$, the claim follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/828179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Concerning a specific family of recursive sequences There is a family of recursive sequences that I came up with: $$a_n=\text{sum of factors less than $a_{n-1}$ of } a_{n-1}, a_0\in\mathbb{N}$$ First question: Has it been studied before? Here are some observations I've made about it: Starting from successive positive integers, and defining the sum of factors of $0$ less than $0$ as $0$, the first few sequences are: $$1,0,0\dots$$ $$2,1,0,0\dots$$ $$3,1,0,0\dots$$ $$4,3,1,0,0\dots$$ $$5,1,0,0\dots$$ $$6,6,6\dots$$ etc. Setting $a_0$ equal to a perfect number $n$ produces the sequence $n,n,n\dots$ Starting from $a_0=n_1$, where $n_1\text{ and }n_2$ are a pair of amicable numbers, produces the sequence $n_1, n_2, n_1, n_2\dots$ If $a_0$ is a prime number, $a_1=1$. There are several conjectures I've made about this family of sequences, but haven't been able to prove or disprove: * *No sequence diverges to infinity. *For any positive integer $>2$ defined as $a_n$, there is at least one possible $a_{n-1}$ that fits the rule of the sequence. Edit: Never mind *For every $j\in\mathbb{N}$, there is at least one $a_0$ so that $a_j=a_0$ but for all $a_k$ where $1\leq k<j$, $a_k\neq a_0$. *(this might be easier than the other conjectures) No integer $k>1$, taken as $a_n$, admits infinitely many positive integers that can be set equal to $a_{n-1}$ and have these two terms follow the rule of the sequence. Second question: Does anyone know how to prove any of these conjectures? P.S. I do not necessarily want a proof of all four of these conjectures, although that would be good. It's just that these seem so difficult to prove that I feel like a proof of any one, two, or three of these conjectures also has a place here.
The sequence in question is known as aliquot sequence. Based on the references I was able to find, question #1 and #3 are open problems at the moment; we don't even know whether : * *the sequence starting with $276$ ever becomes periodic. *there is any sociable number of order $3$ (number whose aliquot sequence has period of $3$). Question #4 turns out to be an easy one. If $a_{n-1}$ was a prime, we'd have $a_n=1$. Otherwise, $a_{n-1}$ must have at least one divisor greater or equal to $\sqrt{a_{n-1}}$, so $a_n\geq \sqrt{a_{n-1}}$. Thus, $a_{n-1}\leq a_n^2$ which means that any particular value $a_n>1$ can have at most $a_n^2$ immediate predecessors (the actual number of possible predecessors tends to be considerably smaller). This upper bound can be used to check if a particular number is untouchable (which means it can only ever appear at the beginning of the sequence) and find out that $5$ is the smallest counter-example to question #2.
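The predecessor bound in the last paragraph is easy to check by machine; here is a small sketch certifying that $5$ is untouchable:

```python
def s(n):
    """The aliquot step: sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

# Any predecessor m of a_n = 5 satisfies m <= 5**2 = 25 (the bound
# derived above), so a finite scan proves that 5 is untouchable:
print([m for m in range(1, 26) if s(m) == 5])  # -> []
```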
{ "language": "en", "url": "https://math.stackexchange.com/questions/828300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\vert\int_{-1}^1 \omega(t) dt \vert \leq 2^n \int_{-1}^1\vert \omega^{(n)}(t)\vert dt$ I am stuck with the following problem: With $\omega: [-1,1]\rightarrow \mathbb{R}$, $\omega\in C^n(-1,1)$. Suppose that $\omega$ has a finite number of zeroes $t_1<t_2<\cdots <t_n$ (i.e. $\omega(t_i)=0$ for all $i$) on $[-1,1]$. Prove that $$\left\vert\int_{-1}^1 \omega(t) dt \right\vert \leq 2^n \int_{-1}^1\vert \omega^{(n)}(t)\vert dt$$ I think I should show it inductively, but I can't figure out how to do it. If someone could give me some hints, that would be greatly appreciated.
Applying Rolle's theorem an awful lot of times, you will find that all derivatives of $\omega$ of orders $\le n-1$ have at least one zero in $[-1,1]$. This allows you to run the following, outrageously wasteful, chain of inequalities for $k=0,1,\dots, n-1 $: $$\int_{-1}^1 |\omega^{(k)}(x)|\,dx \le 2 \sup_{[-1,1]} |\omega^{(k)}(x)| \le 2\int_{-1}^1 |\omega^{(k+1)}(x)|\,dx$$ where the last step uses the fundamental theorem of calculus: $\omega^{(k)}(x)$ is given by the integral of $\omega^{(k+1)} $ from some zero of $\omega^{(k)}$ to $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/828362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve this problem using either Ptolemy's Theorem or Law of Cosines? A hexagon is inscribed in a circle of radius r. Suppose that four of the edges of the hexagon are 10 feet long and two of the edges are 20 feet long, but the exact arrangement of the edges is unknown. What is the value of r to three decimal places? At first I split up the hexagon into 6 triangles and found the vertex angles of these triangles to be either 45 or 90 but apparently this is incorrect and I have to use one of the above methods to try to solve this. Any help would be appreciated!
Rearrange the sides of the hexagon (this does not change the circumscribed circle) so that the chords come in the order $ED = 10$, $EF = 20$, $FC = 10$, closed off by the chord $DC$, with $A$ the center of the circle. Equal chords subtend equal arcs, and four 10-arcs plus two 20-arcs make up the full circle, so one 10-arc plus one 20-arc plus one 10-arc is exactly half of it; hence $DC$ is a diameter, $DC = 2r$. Since $DC$ is a diameter, $\angle DFC = 90^\circ$ (Thales), so $DF = \sqrt{(DC)^2-(FC)^2} = \sqrt{4r^2-100}$, and by the symmetry of the cyclic isosceles trapezoid $DEFC$ (with $ED = FC$) we also have $EC = DF$. From Ptolemy's theorem applied to $DEFC$, $$ED \cdot FC+EF \cdot DC = DF \cdot EC$$ or equivalently, $$10 \cdot 10 + 20 \cdot 2r = \left( \sqrt{4r^2-100} \right)^2.$$ Then the task is easy ;).
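Finishing the computation symbolically (a quick sketch assuming sympy):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# Ptolemy's relation from the answer: 100 + 40*r = 4*r**2 - 100
sol = sp.solve(sp.Eq(100 + 40 * r, 4 * r**2 - 100), r)
print(sol)              # [5 + 5*sqrt(3)]
print(sp.N(sol[0], 6))  # 13.6603, the radius to three decimals
```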
{ "language": "en", "url": "https://math.stackexchange.com/questions/828537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is there a term for an algebraic structure with two binary operators that are closed under a set? For example, let's say we're using the operators +, and *, and the set {0,1,2}. The Cayley tables look like this:

*  0  1  2        +  0  1  2
0  0  0  1        0  1  2  0
1  1  2  1        1  0  1  0
2  0  0  2        2  1  2  2

These Cayley tables are totally random, but the point is that the algebraic structure isn't necessarily like any other common type of algebraic structure with two binary operators (e.g. field, ring, boolean algebra). The two operators just obey closure, so it's basically an abstraction of a magma to more than one operator. Is there a specific, agreed upon name for this in mathematics yet? The most obvious thing to me would be to call this a bimagma, then to call something similar with three binary operators a trimagma, then in general an n-magma. Do these structures have a common, agreed upon name?
From Burris, Sankappanavar, A Course in Universal Algebra, page 26 (42 of the pdf): "An algebra A is unary if all of its operations are unary, and it is mono-unary if it has just one unary operation." In the language of universal algebra, your structure is simply an algebra of (similarity) type $(2,2)$: a set with two binary operations and no axioms beyond closure. Although, from what I read, it is not clear whether the unary/mono-unary naming pattern has been extended in practice, by analogy an algebra with two binary operators could be called di-binary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/828635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Limits with factorial I'm having difficulties understanding all limits with factorial... Actually, what I don't understand is not the limit concept but how to simplify factorial... Example : $$\lim\limits_{n \to \infty} \frac{(n+1)!((n+1)^2 + 1)}{(n^2+1)(n+2)!}$$ I know that it's supposed to give $0$ as I have the answer, but I'd like to understand how to do it as each time I get a limit with factorial I get stuck. Thanks.
$\frac{(n+1)!}{(n+2)!} = \frac{1}{n+2}$ So, the problem reduces to $\lim_{n\rightarrow \infty} \frac{(n+1)^2 + 1}{(n+2)(n^2 + 1)}$. The numerator is quadratic in $n$ while the denominator is cubic, so as $n \rightarrow \infty$ the limit goes to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/828682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Help to compute sum of products I need to compute the following sum: $$(1\times2\times3)+(2\times3\times4)+(3\times4\times5)+ ...+(20\times21\times22)$$ All that I have deduced is: * *Each term is divisible by $6$. So sum is is divisible by $6$. *Sum is divisible by $5$ as 1st term is $1$ less than multiple of $5$ and second term is $1$ more than multiple of $5$. Next three terms are divisible by $5$. This cycle continues for every $5$ terms. So sum will obviously be divisible by $30$.
Here's an interesting solution: $(1\cdot2\cdot3)+(2\cdot3\cdot4)+(3\cdot4\cdot5)+\dots+(20\cdot21\cdot22)$ $\dfrac{3!}{0!}+\dfrac{4!}{1!}+\dfrac{5!}{2!}+\dots+\dfrac{22!}{19!}$ $3!\left(\dfrac{3!}{0!3!}+\dfrac{4!}{1!3!}+\dfrac{5!}{2!3!}+\dots+\dfrac{22!}{19!3!}\right)$ $3!\left(\dbinom{3}{3}+\dbinom{4}{3}+\dbinom{5}{3}+\dots+\dbinom{22}{3}\right)$ then using the hockey-stick identity, we see that this is equal to $3!\dbinom{23}{4} = 3!\dfrac{23\cdot22\cdot21\cdot20}{4\cdot3!} = \dfrac{23\cdot22\cdot21\cdot20}{4} = 53130$
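A two-line brute-force check of both the direct sum and the hockey-stick closed form (math.comb is in the Python standard library):

```python
import math

brute = sum(k * (k + 1) * (k + 2) for k in range(1, 21))
hockey = math.factorial(3) * math.comb(23, 4)
print(brute, hockey)  # 53130 53130
```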
{ "language": "en", "url": "https://math.stackexchange.com/questions/828750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Proving that $\sqrt[3] {2} ,\sqrt[3] {4},1$ are linearly independent over rationals I was trying to prove that $\sqrt[3] {2} ,\sqrt[3] {4}$ and $1$ are linearly independent using elementary knowledge of rational numbers. I also saw this which was in a way close to the question I was thinking about. But I could not come up with any proof using simple arguments. So if someone could give a simple proof, it would be great. My try: $a \sqrt[3] {2}+b\sqrt[3] {4}+c=0$. Taking $c$ to the other side and cubing both sides, we get $2a^3+4b^3+6ab(a\sqrt[3]{2}+b\sqrt[3]{4})=-c^3$ (equivalently, since $a\sqrt[3]{2}+b\sqrt[3]{4}=-c$, this gives $2a^3+4b^3+c^3=6abc$). I could not proceed further from here. Apart from the above question I was also wondering how one would prove that $\sqrt{2},\sqrt{3},\sqrt{5},\sqrt{7},\sqrt{11},\sqrt{13}$ are linearly independent. Here assuming $a\sqrt{2}+b\sqrt{3}+c\sqrt{5}+...=0$ and solving seems to get complicated. So how does one solve problems of this type?
Consider $c_1\sqrt{2}+c_2\sqrt{3}+c_3\sqrt{5}=0$. Then $c_1\sqrt{2}+c_2\sqrt{3}=-c_3\sqrt{5}$. Squaring both sides we will have $2c_1^2+3c_2^2+2\sqrt{6}c_1c_2=5c_3^2$. If either $c_1$ or $c_2$ turns out to be $0$ then we will either have $c_2\sqrt{3}+c_3\sqrt{5}=0$ implying $3c_2^2=5c_3^2$ which gives $\left(\frac{c_2}{c_3}\right)^2=\frac{5}{3}$ which is not possible . Similarly for the case when $c_2$ is $0$. (It is obvious when both $c_1$ and $c_2$ are $0$.) Hence if $c_1$ and $c_2$ are both non-zero then $$-\sqrt{6}=\frac{2c_1^2+3c_2^2-5c_3^2}{2c_1c_2}.$$ Now observe that the R.H.S is a rational no but the L.H.S is not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/829005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
Prove infinite series $$ \frac{1}{x}+\frac{2}{x^2} + \frac{3}{x^3} + \frac{4}{x^4} + \cdots =\frac{x}{(x-1)^2} $$ I can feel it. I can't prove it. I have tested it, and it seems to work. Domain-wise, I think it might be $x>1$, the question doesn't specify. Putting the LHS into Wolfram Alpha doesn't generate the RHS (it times out).
I think a less formal solution could be more understandable. Consider $$ S_n= \frac{1}{x} + \frac{2}{x^2} + \frac{3}{x^3} + \frac{4}{x^4} + \dots + \frac{n}{x^n}$$ $$ xS_n = 1 + \frac{2}{x} + \frac{3}{x^2} + \frac{4}{x^3} + \dots + \frac{n}{x^{n-1}}$$ then $$xS_n - S_n = 1+ (\frac{2}{x}-\frac{1}{x})+(\frac{3}{x^2}-\frac{2}{x^2})+\dots+(\frac{n}{x^{n-1}}-\frac{n-1}{x^{n-1}}) - \frac{n}{x^n}$$ $$S_n(x-1) = 1 + \frac{1}{x} + \frac{1}{x^2}+\dots+\frac{1}{x^{n-1}} - \frac{n}{x^n}$$ Now we have simplified the problem to one of a basic geometric series, so $$S_n(x-1) = T_{n-1} - \frac{n}{x^n}$$ where $$T_{n-1} = 1 + \frac{1}{x} + \frac{1}{x^2}+\dots+\frac{1}{x^{n-1}}$$ $$\frac{T_{n-1}}{x} = \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3}+\dots+\frac{1}{x^{n}}$$ $$T_{n-1} - \frac{T_{n-1}}{x} = 1 + (\frac{1}{x}-\frac{1}{x})+ (\frac{1}{x^2}-\frac{1}{x^2})+\dots - \frac{1}{x^n}$$ $$T_{n-1}(1-\frac{1}{x}) = 1 - \frac{1}{x^n}$$ $$T_{n-1}(\frac{x-1}{x}) = \frac{x^n-1}{x^n}$$ $$T_{n-1} = \frac{x^n-1}{x^n}\cdot(\frac{x}{x-1})$$ $$T_{n-1} = \frac{x^n-1}{x-1}\cdot(\frac{1}{x^{n-1}})$$ $$T_{n-1} = \frac{x-\frac{1}{x^{n-1}}}{x-1}$$ Thus $S_n(x-1)$ becomes $$S_n(x-1) = \frac{x-\frac{1}{x^{n-1}}}{x-1} - \frac{n}{x^n}$$ For $|x|\gt 1$ this becomes $$\lim_{n\to\infty}S_n(x-1) = \lim_{n\to\infty}\frac{x-\frac{1}{x^{n-1}}}{x-1} - \frac{n}{x^n}$$ $$S(x-1) = \frac{x-\displaystyle\lim_{n\to\infty}\frac{1}{x^{n-1}}}{x-1} - \lim_{n\to\infty}\frac{n}{x^n}$$ $$S(x-1) = \frac{x-0}{x-1} - 0 = \frac{x}{x-1}$$ $$S = \frac{x}{(x-1)^2}$$ I used l'Hopital's rule to evaluate $\displaystyle\lim_{n\to\infty}\frac{n}{x^n}$ (for $x>1$), being an $\frac{\infty}{\infty}$ indeterminate form. This helps me to understand the problem. Afterwards, I would go on to compose a more formal proof.
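A quick numeric sanity check of the closed form (here at $x=3$, where the series converges fast):

```python
x = 3.0
partial = sum(k / x**k for k in range(1, 200))
print(partial, x / (x - 1) ** 2)  # both print 0.75
```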
{ "language": "en", "url": "https://math.stackexchange.com/questions/829168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Conditional Probability | Please explain this answer Can you please explain why $P(W_2)$ is $4/9$? Thanks..
There are two different probabilities here. $P(W_2)$, the unconditional probability that the second marble drawn is white, is $4/9$: there are $4$ white marbles among the $9$ in total, and by symmetry each of the $9$ marbles is equally likely to be the one that comes out second. You don't actually need that value, though; you just need the probability of the second draw given the first. Because the second draw is without replacement, once a white marble has been taken the total decreases to $8$ and the number of white marbles left is $3$. So the conditional probability is $3/8$. Easy... comment if there is any doubt.
{ "language": "en", "url": "https://math.stackexchange.com/questions/829349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does $\sum_{k=1}^{\infty}\frac{{\sin(k)}}{k}={\frac{\pi-1}{2}}$? Inspired by this question (and far more straightforward, I am guessing), Mathematica tells us that $$\sum_{k=1}^{\infty}\dfrac{{\sin(k)}}{k}$$ converges to $\dfrac{\pi-1}{2}$. Presumably, this can be derived from the similarity of the Leibniz expansion of $\pi$ $$4\sum_{k=1}^{\infty}\dfrac{(-1)^{k+1}}{2k-1}$$to the expansion of $\sin(x)$ as $$\sum_{k=0}^{\infty}\dfrac{(-1)^k}{(2k+1)!}x^{2k+1},$$ but I can't see how... Could someone please explain how $\dfrac{\pi-1}{2}$ is arrived at?
$$ \sum_{k = 1}^{\infty}\frac{\sin k}{k} = -1 + \sum_{k = 0}^{\infty}\frac{\sin k}{k}, $$ where the $k=0$ term is understood as $\lim_{x \to 0}\frac{\sin x}{x} = 1$. Now apply the Abel-Plana formula, $$ \sum_{k=0}^{\infty}f(k) = \int_{0}^{\infty}f(x)\,\mathrm{d}x + \frac{f(0)}{2} + \mathrm{i}\int_{0}^{\infty}\frac{f(\mathrm{i}x)-f(-\mathrm{i}x)}{\mathrm{e}^{2\pi x}-1}\,\mathrm{d}x, $$ to $f(x) = \frac{\sin x}{x}$: $$ \sum_{k = 1}^{\infty}\frac{\sin k}{k} = -1 + \underbrace{\int_{0}^{\infty}\frac{\sin x}{x}\,\mathrm{d}x}_{=\,\pi/2} + \frac{1}{2}\,\underbrace{\lim_{x \to 0}\frac{\sin x}{x}}_{=\,1} + \mathrm{i}\,\underbrace{\int_{0}^{\infty}\left[\frac{\sin(\mathrm{i}x)}{\mathrm{i}x} - \frac{\sin(-\mathrm{i}x)}{-\mathrm{i}x}\right]\frac{\mathrm{d}x}{\mathrm{e}^{2\pi x} - 1}}_{=\,0}, $$ where the last integral vanishes because both bracketed terms equal $\frac{\sinh x}{x}$. Hence $$ \sum_{k = 1}^{\infty}\frac{\sin k}{k} = \frac{\pi - 1}{2}. $$
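Partial sums converge slowly (the tail oscillates with amplitude $O(1/N)$), but a direct numeric check is still convincing:

```python
import math

s = sum(math.sin(k) / k for k in range(1, 10**6))
print(s, (math.pi - 1) / 2)  # 1.07079... vs 1.0707963...
```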
{ "language": "en", "url": "https://math.stackexchange.com/questions/829523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 2 }
Prove the following Trig Identity with reciprocals Prove that: $$\frac{\tan x}{\sec x-1}+\frac{1-\cos x}{\sin x}=2\csc x$$ Help please! I tried so many things but couldn't get the LHS = RHS. A hint please?
Hint: $$ (1 - \cos x)(1 + \cos x) = 1 - \cos^2 x = \sin^2 x\\ (\sec x - 1)(\sec x + 1) = \sec^2 x - 1 = \tan^2 x $$ Further $$ \frac{\sec x + 1}{\tan x} = \frac{1+\cos x}{\sin x} = \frac{(1 + \cos x)^2}{(1 + \cos x) \sin x}\\ \frac{\sin x}{1 + \cos x} = \frac{\sin^2 x}{(1 + \cos x) \sin x} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/829622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Shortest path between wikipedia articles I'm trying to figure out whether it is possible (and if so how) to find the shortest path inside a network from one node to another. I know that there are different possible algorithms to do that, the most prominent probably being the A* search algorithm. I know that this algorithm uses heuristics to make assumptions about the chances that one way will be shorter than another. At every node the algorithm visits, it needs a function that estimates whether one path is likely to be shorter than the other ones. But what if no such assumptions can be made? A bit more concretely, I want to find the shortest path from one wikipedia article to another one, only using the links inside the articles. Is this possible without checking every possible order of articles? In my opinion every node has the same weight, and every possible edge/link from a node has the same probability of ending in the shortest path. How can I possibly make any decisions about which path to take? I'm no mathematician but more of a designer/programmer, so if you have any clue how to help me solve this problem please use a "maths-for-dummies" language.
Thanks for your answers! I kept looking for solutions and found something very useful on stackoverflow. The answer is actually exactly what I expected. What I need is some function which makes a prediction, for every node connected to the current one, of how long (relatively speaking) the total path will be. The post suggests keyword mapping or some other kind of guessing of the similarity of two articles. Any other suggestions?
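It is worth noting that when every link costs the same (one click), plain breadth-first search already guarantees a shortest path with no heuristic at all; an A*-style similarity guess only reduces how much of the graph gets explored. A minimal sketch follows, where `get_links` is a hypothetical stand-in for whatever fetches an article's outgoing links (an API client, a local database, etc.):

```python
from collections import deque

def shortest_path(start, goal, get_links):
    """BFS over the link graph; returns a shortest list of articles
    from start to goal, or None if the goal is unreachable."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        article = queue.popleft()
        if article == goal:
            path = []
            while article is not None:   # walk the parent chain back
                path.append(article)
                article = parents[article]
            return path[::-1]
        for link in get_links(article):
            if link not in parents:
                parents[link] = article
                queue.append(link)
    return None
```

In practice one would run this bidirectionally (from both endpoints at once) to keep the number of fetched articles manageable.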
{ "language": "en", "url": "https://math.stackexchange.com/questions/829705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Fitch-Style Proof Hi I'm having trouble solving a Fitch Style Proof and I was hoping someone would be able to help me. Premises: $A \land (B \lor C)$ $B \to D$ $C \to E$ Goal: $\neg E \to D$ Thank You
You should be able to transform the following into a formal proof. Assume $\neg E$. Prove $B\lor \neg B$ (excluded middle) with the intent to use $\lor$-$\text{Elim}$. If $B$ holds, then use $\to$-$\text{Elim}$ on the premise $B\to D$ to conclude $D$. Suppose $\neg B$ holds. Use $\land$-$\text{Elim}$ on the first premise to get $B\lor C$. You will want to use $\lor$-$\text{Elim}$ on $B\lor C$. If $B$ holds you can get $D$ in two ways; choose one of them. If $C$ holds, eliminate $\to$ on the premise $C\to E$ to get $E$ and consequently a contradiction with $\neg E$, thus getting $D$ with $\bot$-$\text{Elim}$. Eliminating the disjunction $B\lor C$ gives you $D$. Eliminating the disjunction $B\lor \neg B$ yields $D$. Finish off with $\to$-$\text{Intro}$ to conclude $\neg E \to D$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/829828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Why is $(1 - \frac{1}{p})^n$ close to $e^{-\frac{n}{p}}$ when $n$ and $p$ are large? Looking at this answer by Henry birthday problem - expected number of collisions and struggling to figure out why it matches this other formula provided to me on a programming related question. Thanks!
I don't know if you are familiar with Taylor series, but if yes: $$\left(1-\frac1p\right)^n=\exp\left(n\ln\left(1-\frac1p\right)\right)=\exp\left(-\frac{n}{p}+O\left(\frac{n}{p^2}\right)\right)$$ and this gives: $$\left(1-\frac1p\right)^n\sim \exp\left(-\frac{n}{p}\right) \quad \text{for } p \gg 1.$$ Note the error term: the approximation has relative error close to $1$ as long as $n = o(p^2)$. So $n$ itself may be large; it just cannot grow as fast as $p^2$.
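A quick numeric look at the ratio of the two sides shows exactly this $n/p^2$ effect:

```python
import math

for n, p in [(10, 100), (100, 100), (5000, 100)]:
    exact = (1 - 1 / p) ** n
    approx = math.exp(-n / p)
    print(n, p, exact / approx)  # ~0.9995, ~0.995, ~0.78: drifts as n/p**2 grows
```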
{ "language": "en", "url": "https://math.stackexchange.com/questions/829922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
(Elementary) Trigonometric inequality Any idea for proving the following inequality: $5+8\cos x+4 \cos 2x+ \cos3x\geq 0$ for all real x? I've tried trigonometric identities to make squares appear, and other tricks; but nothing has worked well.
Note that $\cos 2x = 2\cos^2 x - 1$ and $\cos 3x = 4\cos^3 x - 3\cos x$. So, we want to prove that $1+5\cos x + 8\cos^2x + 4\cos^3 x \ge 0$. The left side is a polynomial in $\cos x$ which can be factored as $(1+\cos x)(1+2\cos x)^2$. Can you take it from here?
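The factorization is quick to double-check symbolically (a sketch assuming sympy, writing $c$ for $\cos x$):

```python
import sympy as sp

c = sp.symbols('c')  # c stands for cos(x)
print(sp.expand((1 + c) * (1 + 2 * c) ** 2))  # -> 4*c**3 + 8*c**2 + 5*c + 1
```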
{ "language": "en", "url": "https://math.stackexchange.com/questions/829988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
When does the limit of the mean values of a function around a point approach the value of the function at that point ? When does the limit of the mean values of a function around a point approach the value of the function at that point ? We can prove it if the function is continuous. But are there general classes of functions for which this holds ? In precise language , what is the most general class of functions for which the following hold ? $\frac{1}{n\alpha(n)\epsilon^{n-1}}\int_{\partial B(x,\epsilon)}f(y)dS(y) \rightarrow f(x)$ as $\epsilon \rightarrow 0 $
The answer to this is the content of the Lebesgue differentiation theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/830108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all the asymptotes of $1-x+\sqrt{2+2x+x^2}$ I find $$\lim_{x\rightarrow\infty}1-x+\sqrt{2+2x+x^2}=2,$$ but I am stuck when $x\rightarrow-\infty$: how do I find that $y=-2x$ is an oblique asymptote? Any idea?
You can use the asymptotic expressions for $f(x)$: $$f(x) \sim 2 +\frac{1}{2x} + O(x^{-2}) \quad \text{as} \quad x \rightarrow \infty$$ $$f(x) \sim -2x -\frac{1}{2x} + O(x^{-2}) \quad \text{as}\quad x \rightarrow - \infty$$ and read-off the asymptotes $y=2$ and $y=-2x$. PS: To get the second expression consider $$g(x)=f(-x) = 1+x+\sqrt{2-2x+x^2} \sim 2x + \frac{1}{2x}+O(x^{-2})$$ for $x \rightarrow \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/830229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Infinite sum of logarithms Is there any closed form for this expression $$ \sum_{n=0}^\infty\ln(n+x) $$
As others correctly mentioned, the expression diverges. Yet, if necessary, you can get quite good asymptotics: $$ \sum_{k=1}^{n} \log (k+x) = \sum_{k=1}^{n} \log k + \sum_{k=1}^{n} \log \left(1+ \frac{x}{k}\right) = \bigl(n \log n - n + O(\log n)\bigr) + \bigl(x \log n + O(1)\bigr), $$ using Stirling's formula for the first sum, and $\log(1+x/k) = x/k + O(1/k^2)$ together with $\sum_{k\le n} 1/k = \log n + O(1)$ for the second. Altogether, $$ \sum_{k=1}^{n} \log (k+x) = (n+x)\log n - n + O(\log n). $$
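Since $\sum_{k=1}^{n}\log(k+x) = \log\Gamma(n+x+1) - \log\Gamma(x+1)$, the asymptotic is easy to test numerically:

```python
import math

n, x = 1000, 2.5
exact = math.lgamma(n + x + 1) - math.lgamma(x + 1)
approx = (n + x) * math.log(n) - n
print(exact, approx)  # differ by O(log n), roughly 3 here
```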
{ "language": "en", "url": "https://math.stackexchange.com/questions/830295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Why is the integral of the arc length in polar form not similar to the length of the arc of a circular sector? So I learned that the area enclosed by a polar function is computed by $$A = \int \frac{r(\theta)^2}{2}d\theta.$$ Which, I learned, comes somewhat from the formula for the area of a circular sector $$A_{sector}= \frac{r^2\theta}{2}.$$ So I expected the integral for the arc length to be $$S=\int r(\theta)d\theta$$ which is similar to the length of the arc of a circular sector $$S_{sector} = r\theta.$$ But then I learned it is actually $$S = \int \sqrt {r(\theta)^2+\left(\frac {dr(\theta)}{d\theta}\right)^2}d\theta.$$ I was confused why this was the case, and I did some searching and I found that the change in $r$ should be taken into account and that $S=\int r(\theta)d\theta$ only works if r is the radius of curvature. So my question is why can can the area computation use the similarity with the circular sector and the arc length computation can't?
You have the same phenomenon in rectangular coordinates: The area under a curve $y=f(x)$ $\> (a\leq x\leq b)$ is given by the integral $$\int_a^b f(x)\ dx\ ,$$ which "comes somewhat" from the formula for the area of a rectangle $$A_{\rm rectangle}= {\rm height}\cdot{\rm width}\ .$$ So one could expect that the integral for the arc length would be $$L=\int_a^b dx\ ,$$ which is similar to the arc length of the top edge of the rectangle: $$L_{\rm top\ edge}={\rm width}\ .$$ But we all know that the correct formula for the arc length is $$\int_a^b\sqrt{1+f'^2(x)}\ dx\ ;$$ the reason being that the projection of a line element $\Delta s$ onto the $x$-axis is shorter than $\Delta s$ by a factor of $\cos\phi$, and this factor does not go away by making $\Delta s$ shorter. Same thing in polar coordinates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/830364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Hessian matrix to establish convexity I have a function, $u(x_1,x_2)=\alpha \ln(x_1)+(1-\alpha)\ln(x_2)$, where $0<\alpha <1$. I want to prove that it is convex. The Hessian matrix I have constructed is: $$ \left( \begin{array}{ccc} -\alpha/x_1^2 & 0 \\ 0 & -(1-\alpha)/x_2^2\end{array} \right)$$ From here I found that its determinant is positive but the leading principal minor is negative. Am I making a silly mistake? From what I can tell, for the function to be convex, the Hessian needs to be positive definite. Which, by my calculation, it is not. Since I have, with help, worked out the correct answer, I will write it down here: The Hessian is calculated correctly. The leading first-order principal submatrix is $-\alpha/x_1^2$, which in my case is negative. The 2nd-order principal submatrix is the determinant of the 2x2 Hessian. This is a positive value. The Hessian is negative semidefinite (in fact negative definite). Using this link, http://www.economics.utoronto.ca/osborne/MathTutorial/CVNF.HTM and this quote from that page: * *f is concave if and only if H(x) is negative semidefinite for all x ∈ S *if H(x) is negative definite for all x ∈ S then f is strictly concave *f is convex if and only if H(x) is positive semidefinite for all x ∈ S *if H(x) is positive definite for all x ∈ S then f is strictly convex. I concluded the function is concave.
If $\alpha$ is real then either $\alpha>0$ or $1-\alpha>0$, and if $0<\alpha<1$ then both are positive. If $\alpha>0$, consider the two points $(x_1,x_2)$ and $(w_1,x_2)$. If $x_1$, $w_1$ are distinct positive numbers, then for $0<p<1$ we would have $$ u(p(x_1,x_2) + (1-p)(w_1,x_2)) > pu(x_1,x_2)+(1-p)u(w_1,x_2) $$ by the strict concavity of $\ln$. If by "convex" you mean that the opposite inequality holds, then this function is not convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/830481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that no natural number $ n = 4m + 3$ satisfies $ n= x^2+y^2 $ for any natural $x$ and $y$. Show that no natural number of the form $ n = 4m + 3$ can be written as $ n= x^2+y^2 $ for any natural $x$ and $y$. Show that every prime number of the form $ p=4m+1 $ can be written as $ p = x^2+y^2$ ($x$ and $y$ are natural). I checked it and it
For the first question, any square number modulo four is 0 or 1 [even ==> 0, odd ==> 1]. This can be shown rather easily. Then, the summation of two such numbers would never equal 3 modulo four showing the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/830577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Geometrically integral Here is a stupid question about the notion of geometric integrality. Say I have a smooth, projective variety $X$ over some field $k$, equipped with a morphism $f: X \to C$ to a smooth, projective curve $C$, such that the generic fibre is geometrically integral. Assume that there exists a finite (dominant) morphism $\varphi: C \to C$ of degree at least $2$. Is it true that the generic fibre of the composition $\varphi \circ f$ is not geometrically integral?
Yes, absolutely. The generic fiber $Y$ of $\varphi \circ f$ is an algebraic variety over the function field $K$ of the source copy of $C$, regarded via $\varphi$ as a finite non-trivial extension of the subfield $L$, the function field of the target copy of $C$. Then the base change of $Y$ to $\overline{L}$ factors as $Y \times_K (K \otimes_L \overline{L})$, and $K \otimes_L \overline{L}$ is never integral for a non-trivial finite extension: it is either a product of several copies of $\overline{L}$ (separable case) or non-reduced (inseparable case).
{ "language": "en", "url": "https://math.stackexchange.com/questions/830663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\sum_{n = 1}^\infty f_n =f.$ Prove that $\sum_{n = 1}^\infty f'_n =f'$ a.e. I am studying for a real analysis qualifying exam. Was hoping that there was a very slick proof for this? Thanks. Let $f_1, f_2, \dots, f : [0, 1] \to \mathbb{R}$ be non-decreasing right-continuous functions such that $\sum_{n = 1}^\infty f_n =f.$ Prove that $\sum_{n = 1}^\infty f'_n =f'$ a.e.
You can write $f_n(x)=\mu_n([0,x])$ for some nonnegative measure $\mu_n$. Write that as $d\mu_n=f'_n\,d\lambda+d\nu_n$, where $\lambda$ is Lebesgue measure and $\nu_n$ is singular. Do the same for $f$: $d\mu=f'\,d\lambda+d\nu$. Let $N$ be a (Borel) null set such that $\nu_n([0,1]\setminus N)=0$ for all $n$, and likewise $\nu([0,1]\setminus N)=0$. By assumption $\sum_n\mu_n([0,x])=\mu([0,x])$ for all $x$, and it follows that $\sum_n\mu_n(E)=\mu(E)$ for all Borel sets $E$. When $E\cap N=\emptyset$ then (using the monotone convergence theorem in the first step, and $\nu_n(E)=\nu(E)=0$ in the others) $$ \int_E\sum_nf_n'\,d\lambda=\sum_n\int_E f_n'\,d\lambda=\sum_n\mu_n(E)=\mu(E)=\int_E f'\,d\lambda,$$ and since this holds for every Borel $E\subseteq[0,1]\setminus N$, the integrands agree $\lambda$-a.e., i.e. $\sum_n f_n'=f'$ a.e.
{ "language": "en", "url": "https://math.stackexchange.com/questions/830752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is math used in computer graphics? I'm doing a research paper on the mathematics of computer graphics and animation (3D) and I do not know where to start. What mathematical equations and concepts are used for computer graphics and animation (3D)?
Two main results that tend to form the core of 3D graphics: (1) 3D coordinates are represented by matrices: $$\begin{bmatrix} x \\ y \\ z \end{bmatrix}$$ Transforms (like rotations) are represented by matrix multiplication: $$\begin{bmatrix} \text{new-}x \\ \text{new-}y \\ \text{new-}z \end{bmatrix} = \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} \\ m_{2,1} & m_{2,2} & m_{2,3} \\ m_{3,1} & m_{3,2} & m_{3,3} \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix}$$ Transforms like moving a point a given distance are represented by adding: $$\begin{bmatrix} \text{new-}x \\ \text{new-}y \\ \text{new-}z \end{bmatrix} = \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} + \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$ A move and a transform can be combined into one operation, which is how your video card does it: $$\begin{bmatrix} \text{new-}x \\ \text{new-}y \\ \text{new-}z \\ 1\end{bmatrix} = \begin{bmatrix} m_{1,1} & m_{1,2} & m_{1,3} & \Delta x \\ m_{2,1} & m_{2,2} & m_{2,3} & \Delta y \\ m_{3,1} & m_{3,2} & m_{3,3} & \Delta z \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$ So to begin with, your computer does most of its coordinate representations and transforms with $4 \times 4$ matrices. (2) 3D has to become 2D to appear on your screen. Converting the 3D coordinate of a shape into the 2D point on your screen is called projection, usually perspective projection but sometimes orthographic projection or something else. Here is a start for learning about 3D projections.
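To make the matrix form concrete, here is a minimal NumPy sketch (an illustration added to this answer; the rotation angle and translation offsets are made up) that applies a combined rotate-and-translate $4\times 4$ matrix to a point and then does a toy perspective divide:

```python
import numpy as np

theta = np.pi / 4                      # rotate 45 degrees about the z-axis
c, s = np.cos(theta), np.sin(theta)
dx, dy, dz = 1.0, 2.0, 3.0             # translation offsets

# Combined transform: rotation in the upper-left 3x3, translation in the last column.
M = np.array([[c, -s, 0, dx],
              [s,  c, 0, dy],
              [0,  0, 1, dz],
              [0,  0, 0, 1 ]])

p = np.array([1.0, 0.0, 0.0, 1.0])     # point (1, 0, 0) in homogeneous coordinates
q = M @ p                              # rotate, then translate, in one multiplication

# Toy perspective projection: divide x and y by the depth z.
# (Real pipelines use a projection matrix, but the idea is the same.)
screen_xy = q[:2] / q[2]
print(q, screen_xy)
```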
{ "language": "en", "url": "https://math.stackexchange.com/questions/830856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Partial derivatives - why does this hold? In my notes there is the following: $$u_{\xi \eta}=0 \Rightarrow \left\{\begin{matrix} u_{\xi}=0 \Rightarrow u=g(\eta)\\ u_{\eta}=0 \Rightarrow u=f(\xi) \end{matrix}\right.$$ I haven't understood why this holds... Isn't it as follows? $$u_{\xi \eta}=0 \Rightarrow u_{\xi}=F(\xi) \text{ AND } u_{\xi \eta}=0 \Rightarrow u_{\eta}=G(\eta)$$ Why should $F(\xi)$ and $G(\eta)$ be equal to $0$? EDIT: I found this in the proof of the solution of the wave equation using characteristics, which is the following: $$u_{tt}-c^2u_{xx}=0$$ $\displaystyle{\frac{dx}{dt}=\frac{1}{a}(b \pm \sqrt{\Delta})=\pm c}$ $\left.\begin{matrix} \xi=x-ct\\ \eta=x+ct \end{matrix}\right\}$ $\displaystyle{\partial_x=\partial_{\xi}+\partial_{\eta}}$ $\displaystyle{\partial_t=-c \partial_{\xi}+c \partial_{\eta}}$ $\displaystyle{\partial_{xx}=\partial_{\xi \xi}+\partial_{\eta \eta}+2 \partial_{\xi \eta}}$ $\displaystyle{\partial_{tt}=c^2(\partial_{\xi \xi}+\partial_{\eta \eta}-2 \partial_{\xi \eta})}$ So $\displaystyle{(\partial_{tt}-c^2 \partial_{xx})u=0 \Rightarrow -4c^2 \partial_{\xi \eta}u=0 \Rightarrow}$ $$u_{\xi \eta}=0 \Rightarrow \left\{\begin{matrix} u_{\xi}=0 \Rightarrow u=g(\eta)\\ u_{\eta}=0 \Rightarrow u=f(\xi) \end{matrix}\right.$$ $$\Rightarrow u=f(\xi)+g(\eta) \Rightarrow u=f(x-ct)+g(x+ct)$$
You are correct: $u_\xi=F(\xi)$ and $u_\eta = G(\eta)$. Note that this implies (by integrating) that $u(\xi,\eta) = \widetilde F(\xi) + \widetilde G(\eta)$. You can verify this by trying something simple like $u(\xi,\eta) =\xi+\eta$.
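Spelling out the integration step (added for clarity): integrating $u_\xi = F(\xi)$ with respect to $\xi$ gives $$u(\xi,\eta) = \int F(\xi)\,d\xi + g(\eta) = \widetilde F(\xi) + g(\eta),$$ where the "constant" of integration may depend on $\eta$. So the two conditions together only say that $u$ splits as a sum of a function of $\xi$ and a function of $\eta$; they do not force $F$ or $G$ to vanish.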
{ "language": "en", "url": "https://math.stackexchange.com/questions/830962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difference between Euclidean space and vector space? I often hear them used interchangeably, and the definitions are complicated to make much use of. Wikipedia's words: Euclidean space: One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. Vector space: A vector space is a mathematical structure formed by a collection of elements called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars in this context. They are not related at all. A vector space is a structure composed of vectors and has no magnitude or dimension, whereas Euclidean space can be of any dimension and is based on coordinates. I hear 3-D programming uses vectors, so Euclidean geometry should be useless, no? Basically, aren't they unrelated?
While a vector space is something very formal and axiomatic, Euclidean space does not have a single unified meaning. Usually it refers to something where you have points and lines, can measure angles and distances, and the Euclidean axioms are satisfied. Sometimes it is identified with $\mathbb{R}^2$ resp. $\mathbb{R}^n$, but more as an affine and metric space (you have both points and vectors, not just vectors). So the term Euclidean space has a softer meaning and usually refers to a richer structure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/831048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 6, "answer_id": 3 }
Paradox of the trumpet shape This is a question I have had for a long time now: when you rotate the function $y=1/x$, $x>0$ (say $x$ and $y$ both measure meters) about the $x$-axis by $2\pi$, you get a shape which has infinite surface area and finite volume. Let's call this shape the "trumpet shape". Now the weird thing is: suppose I have a "trumpet" shape that is made of arbitrarily thin transparent material (imagine folding a transparent sheet into this trumpet shape). Since its volume is finite, I can fill the whole trumpet with a finite amount of paint, say $k$ litres. But since the surface area is infinite, no matter how much paint I have I can still not paint its surface. Now suppose I pour $k$ litres of paint into my "trumpet shape"; then the whole trumpet is filled with paint. Now imagine how this trumpet made of transparent material looks: it looks like its surface is painted, and since our transparent material is arbitrarily thin we can effectively say that the surface area is painted using $k$ litres of paint. A contradiction. What am I saying wrong here? Thank you
Your issue is trying to compare a 2-dimensional quantity (surface area) with a 3-dimensional one (volume). Any volume of liquid can be spread thinly enough to cover as much surface area as you want (mathematically speaking; there are probably physical limits to this).
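For reference, the two standard computations behind the paradox (added here for completeness): $$V = \int_1^\infty \pi\left(\frac{1}{x}\right)^2 dx = \pi, \qquad S = \int_1^\infty \frac{2\pi}{x}\sqrt{1+\frac{1}{x^4}}\,dx \geq \int_1^\infty \frac{2\pi}{x}\,dx = \infty.$$ A coat of paint of fixed positive thickness over infinite area would have infinite volume; when you fill the horn, the "coat" on the inside wall has thickness shrinking to zero as $x \to \infty$, which is why a finite volume suffices.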
{ "language": "en", "url": "https://math.stackexchange.com/questions/831208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Show that $1$ is an eigenvalue of $A$ Let $A \in \mathbb{K}_{n}$. Suppose that $A^3=I_{n}$ and $I_{n}+A+A^2\neq O_{n}$, where $O_{n}$ is the null matrix. Show that $1$ is an eigenvalue of $A$. I couldn't show what is asked by using all the hypotheses. My work was: Let $v \in \mathbb{K}_{n\times 1}$, and $\lambda \in \mathbb{K}$. So one has $Av=\lambda v$ and also, $$A^2v=\lambda A v$$ $$A^3v=\lambda A^2 v$$ By hypothesis, $A^3=I_{n}$. So, $$I_{n}v=\lambda A^2v$$ $$v=\lambda A^2v$$ $$v=\lambda^2 Av$$ $$v=\lambda^3 v$$ So, $\lambda=1$. My doubt is where the hypothesis $I_{n}+A+A^2\neq O_{n}$ is needed. Thanks
Assume by contradiction that $1$ is not an eigenvalue of $A$. Then $\det(A-I) \neq 0$, and therefore $A-I$ is invertible. Hence $$A^2+A+I=(A^3-I)(A-I)^{-1}=0,$$ a contradiction.
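The factorization used here can be checked directly (a one-line verification, added for completeness): $$(A-I)(A^2+A+I) = A^3+A^2+A-A^2-A-I = A^3-I,$$ and since $A-I$ and $A^2+A+I$ are both polynomials in $A$, they commute, so multiplying $A^3-I$ by $(A-I)^{-1}$ indeed recovers $A^2+A+I$.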
{ "language": "en", "url": "https://math.stackexchange.com/questions/831290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
if a function $f$ is decreasing and the limit $\lim\limits_{x\to \infty} f(x)$ exists, then Given that a function $f$ is decreasing and the limit $\lim\limits_{x\to \infty} f(x)$ exists, how can I prove that $$\lim\limits_{x\to \infty} x\left(f(x)-f(x+1)\right)$$ exists? I applied the monotone convergence theorem, but got nowhere...
This is false. Counterexample: Define $f(x)$ this way: for each positive integer $n$ and for all $n^{2}<x\leq (n+1)^{2}$, let $f(x)=-\sum\limits_{i=1}^{n}\frac{1}{i^{2}}$. Clearly $f(x)$ is bounded below (by $-\frac{\pi^{2}}{6}$) and monotone decreasing, so $\lim_{x\to\infty}f(x)$ exists. Now whenever $x=(n+1)^{2}$, we have $x\left(f(x)-f(x+1)\right)=(n+1)^{2}\left(-\sum\limits_{i=1}^{n}\frac{1}{i^{2}}+\sum\limits_{i=1}^{n+1}\frac{1}{i^{2}}\right)=(n+1)^{2}\cdot\frac{1}{(n+1)^{2}}=1$. But most of the time (whenever $x=(n+1)^{2}-1$, for example) $x\left(f(x)-f(x+1)\right)=0$ instead. Hence this oscillates, so the limit does not exist.
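A quick numeric illustration of the oscillation (a sketch added to this answer; it just evaluates the step function above at integer points):

```python
import math

def f(x):
    # the step function above: f(x) = -sum_{i=1}^{n} 1/i^2 on n^2 < x <= (n+1)^2
    n = math.ceil(math.sqrt(x)) - 1
    return -sum(1.0 / i**2 for i in range(1, n + 1))

for n in range(2, 6):
    x = (n + 1) ** 2
    print(x, x * (f(x) - f(x + 1)))            # -> 1.0 at perfect squares
    print(x - 1, (x - 1) * (f(x - 1) - f(x)))  # -> 0.0 just before them
```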
{ "language": "en", "url": "https://math.stackexchange.com/questions/831375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Generator of singular homology of n-sphere I am learning singular homology theory right now. The homology of the $n$-sphere is computed by a Mayer-Vietoris argument. Intuitively, for example, the class represented by a loop is the generator of $H_1(S^1)$. Is there a way to show that this is truly the case in singular homology?
You have a long exact sequence $$\dots\to\tilde H_1(A)\oplus \tilde H_1(B)\to\tilde H_1(A+B)\to\tilde H_0(A\cap B)\to\tilde H_0(A)\oplus\tilde H_0(B)\to\dots$$ where $H_n(A+B)$ is the homology group of the chain complex $$\dots\to C_n(A+B)\to C_{n-1}(A+B)\to\dots$$ where $C_n(A+B)$ consists of chains whose simplices each lie in $A$ or in $B$; here $A$ and $B$ are small open neighborhoods of the upper and lower semicircles such that $A\cap B$ is the disjoint union of two arcs. The inclusion $C_1(A+B)\hookrightarrow C_1(S^1)$ induces an isomorphism $\tilde H_1(A+B)\cong \tilde H_1(S^1)$ whose inverse is induced by the map $\rho:C_1(S^1)\to C_1(A+B)$, which turns each simplex $\sigma$ into a chain of smaller simplices, each of which has image in $\operatorname{Im}(\sigma)\cap A$ or in $\operatorname{Im}(\sigma)\cap B$. For a precise definition see Hatcher's Algebraic Topology, Proposition 2.21. If we apply $\rho$ to the loop $\sigma$ around $S^1$, we get the first barycentric subdivision of $\sigma$, which is $\sigma_1-\sigma_2$, where $\sigma_1$ is the semicircle from $-1$ to $1$ with non-negative second coordinate, and $\sigma_2$ is the semicircle from $-1$ to $1$ with non-positive second coordinate. Now $\sigma_1-\sigma_2$ is a generator of $\tilde H_1(A+B)$, as we can see by chasing the cycle $\sigma_1-\sigma_2$ through the definition of the connecting homomorphism $\tilde H_1(A+B)\to\tilde H_{0}(A\cap B)$: $\sigma_1-\sigma_2$ is the image of $(\sigma_1,\sigma_2)$ in $C_1(A)\oplus C_1(B)$, which has boundary $(1-(-1),\,1-(-1))$, hence $\partial[\sigma_1-\sigma_2]=[1-(-1)]$, where $1$ and $-1$ denote the corresponding $0$-simplices. Since this difference is a generator of $\tilde H_{0}(A\cap B)$, $\sigma_1-\sigma_2$ is a generator of $\tilde H_1(A+B)$. It follows that $\sigma$ is a generator of $\tilde H_1(S^1)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/831466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Application of Riemann-Roch I have read that, thanks to the Riemann-Roch theorem, if $\Sigma$ is a compact Riemann surface of genus $g$, then there exists a conformal branched covering $\phi: \Sigma \rightarrow S^2$ of degree at most $g+1$. Unfortunately I have found only very abstract references which do not clearly imply this fact. Can anyone explain this to me, ideally with a basic reference?
Let $D$ be a finite collection of $d$ points on $\Sigma$. The Riemann-Roch theorem shows that the vector space of meromorphic functions on $\Sigma$ with at worst a simple pole at each point of $D$ has dimension $\geq d + 1 - g$, which is $> 1$ if $d > g$. Thus (choosing any $D$ with $d > g$, e.g. $d = g+1$) this space contains a non-constant meromorphic function $f$. (The constant functions contribute just one dimension.) Since $f$ has at most $d$ simple poles, it induces a branched covering $\Sigma \to \mathbb{CP}^1$ of degree $\leq d$; taking $d = g+1$ gives the bound in the question.
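A concrete illustration (a standard example, added here): for $g = 1$, take $D = \{p, q\}$, so $d = 2 > g$. Riemann-Roch gives a space of dimension $\geq 2 + 1 - 1 = 2$, hence a non-constant $f$ with at most two simple poles, and $f$ exhibits the surface as a degree-$2$ branched cover of $\mathbb{CP}^1$, which is the classical picture of an elliptic curve as a double cover of the sphere.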
{ "language": "en", "url": "https://math.stackexchange.com/questions/831654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
$\lim_{n \to \infty} \frac{(-2)^n+3^n}{(-2)^{n+1}+3^{n+1}}=?$ I have a question: $$\lim_{n \to \infty} \frac{(-2)^n+3^n}{(-2)^{n+1}+3^{n+1}}=?$$ Thanks for your help!
Since $|-2|<3$, we have $(-2)^n=o(3^n)$ as $n\to\infty$, and then $$\lim_{n \to \infty} \frac{(-2)^n+3^n}{(-2)^{n+1}+3^{n+1}}=\lim_{n \to \infty} \frac{3^n}{3^{n+1}}=\frac13$$
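To see this without the $o(\cdot)$ shorthand (a worked step added for clarity), divide numerator and denominator by $3^{n+1}$: $$\frac{(-2)^n+3^n}{(-2)^{n+1}+3^{n+1}} = \frac{\frac13\left(-\frac23\right)^n+\frac13}{-\frac23\left(-\frac23\right)^n+1} \xrightarrow[n\to\infty]{} \frac{0+\frac13}{0+1} = \frac13,$$ since $\left(-\frac23\right)^n \to 0$.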
{ "language": "en", "url": "https://math.stackexchange.com/questions/831731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Towers of Hanoi recurrence relation How would I set up and justify the recurrence relation for the Towers of Hanoi?
Let $T(n)$ be the number of moves needed to transfer $n$ disks from one peg to another. Clearly we have $T(0)=0$ and $T(1)=1$. Now, we note that in order to move a tower of $n$ disks, we need to do the following: * *Move the tower of $n-1$ disks from off the top of the $n$th disk onto another of the pegs; *Move the $n$th disk to the destination peg; *Move the tower of $n-1$ disks from where it was put temporarily onto the top of the $n$th disk. It is clear that steps 1 and 3 above each take $T(n-1)$ moves, while step 2 takes one move. Hence we have: $$T(n)\leq 2T(n-1)+1$$ The inequality applies here because we have established that although we know we can always get the job done in no more than $2T(n-1)+1$ moves, we are not at this stage certain that there is not a better strategy which needs fewer moves. So: is there a way of moving the disks that uses fewer moves? Consider that at some point we need to move disk $n$. When we do this, we must have moved the $n-1$ disks above it onto a vacant peg. That must have taken $T(n-1)$ moves to do, whatever $T(n-1)$ happens to be. Having moved that final disk for the first (and hopefully last) time, we then need to transfer the $n-1$ smaller disks (which all need to be on a single peg, otherwise there won't be a spare peg to move disk $n$ onto) back onto disk $n$. This also takes $T(n-1)$ moves. So now we see that there is not a better way to move the disks, i.e.: $$T(n)\geq 2T(n-1)+1$$ Thus we arrive at our recurrence rule: $$T(n)=2T(n-1)+1$$
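Solving the recurrence gives the closed form $T(n) = 2^n - 1$: adding $1$ to both sides gives $T(n)+1 = 2\left(T(n-1)+1\right)$, so $T(n)+1 = 2^n$. A minimal Python sketch (added for illustration, not part of the original answer) that counts moves recursively and checks the formula:

```python
def hanoi(n, src="A", dst="C", spare="B", moves=None):
    """Return the list of moves transferring n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, spare, dst, moves)   # step 1: clear the top n-1 disks
        moves.append((src, dst))               # step 2: move disk n
        hanoi(n - 1, spare, dst, src, moves)   # step 3: put the n-1 disks back on top
    return moves

for n in range(1, 8):
    assert len(hanoi(n)) == 2**n - 1           # matches T(n) = 2^n - 1
```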
{ "language": "en", "url": "https://math.stackexchange.com/questions/831814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is the property reflexive, symmetric, anti-symmetric, transitive, equivalence relation, partially ordered given the relation below? I'm working on this and I'm supposed to figure out if the following properties apply to the relations below. Properties are: 1. Reflexive 2. Symmetric 3. Anti-Symmetric 4. Transitive 5. Equivalence Relation 6. Partially Ordered Set Relation: * *The relation $R$ on the set of all real functions $f: \mathbb{N} \to \mathbb{R}^+$ where $f \ R \ g$ if and only if $f(n) = O(g(n))$ *The relation $R$ on the set of all real functions $f : \mathbb{N} \to \mathbb{R}^+$ where $f \ R \ g$ if and only if $f(n) = \Theta(g(n))$ My work so far: For the first relation: a. YES: b. NO: c. NO: d. YES: e. NO: f. NO: For the second relation: a.YES: b.YES: c.NO: d.YES: e.YES: f.NO: Am I doing this right? Thank you so much for any help.
If I were correcting your (presumably) homework, I would want more details on your reasoning for transitivity, in both cases. Nevertheless, all your answers are correct. Good job.
{ "language": "en", "url": "https://math.stackexchange.com/questions/831936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$f(x) = x^3 - x$ then $f(n)$ is a multiple of 3 If $f(x) = x^3 - x$, then $f(n)$ is a multiple of $3$ for every integer $n$. First I tried $$f(n) = n^3-n=n(n+1)(n-1)\qquad\forall n\ .$$ When $n$ is an integer then at least one factor on the right is even, and exactly one factor on the right is divisible by $3$. It follows that for any $n\in{\mathbb Z}$ the right-hand side is divisible by $6$, and so is the left-hand side. That is to say: $n^3\equiv n \pmod 6$ for all integers $n$. Is that correct? Or is there another simple solution for this? Thanks.
After you get $f(n)=n(n+1)(n-1)$, $f(n)$ has factors $n$, $n+1$, and $n-1$; you want to show that $3$ divides one of them. If $3\mid n$ then you get what you want. If $3\nmid n$, then $n\equiv 1$ or $n\equiv 2 \pmod 3$, meaning $3\mid n-1$ or $3\mid n+1$, respectively.
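A tiny numeric sanity check (illustrative only):

```python
# verify n^3 - n is divisible by 3 (indeed by 6) on a range of integers
assert all((n**3 - n) % 3 == 0 for n in range(-1000, 1001))
assert all((n**3 - n) % 6 == 0 for n in range(-1000, 1001))
```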
{ "language": "en", "url": "https://math.stackexchange.com/questions/832021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Intro to Real Analysis I am having trouble proving the following: if $a < b$, then $a < {a+b\over2} < b$. I started with the Trichotomy Property and got to where $a^2>0$, but I do not know where to go from there. Any suggestions?
Given $a<b$, $$a<b \Longleftrightarrow 2a<a+b \Longleftrightarrow a<\frac{a+b}{2}$$ $$a<b \Longleftrightarrow a+b<2b \Longleftrightarrow \frac{a+b}{2} < b$$ $$a<\frac{a+b}{2}<b$$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 6 }
Symbol for rational/irrational part of a number Just as $\Im(z)$ and $\Re(z)$ denote the imaginary and real parts of $z$, respectively, do there exist symbols for the rational and irrational parts of a real number?
I think the closest thing you can get is the floor function, where the "rational" (integer) part of $x$ is the largest integer less than or equal to $x$. But this doesn't really guarantee that what is left over will be irrational. Edit: Thinking about it a bit more, I think the following is at least well defined for real values of $x$: $$ \operatorname{RationalPart}(x) = \begin{cases} x \qquad x \in \mathbb{Q} \\ \lfloor x \rfloor \qquad x\notin \mathbb{Q} \end{cases} $$ Of course this doesn't have any of the nice properties, like linearity, that $\Im$ and $\Re$ have.
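A symbolic sketch of this definition (illustrative; it only works for exact SymPy expressions whose rationality SymPy can decide, not for floating-point inputs, where every representable number is rational):

```python
import sympy as sp

def rational_part(x):
    """The piecewise 'rational part' defined above, for exact sympy numbers."""
    x = sp.nsimplify(x)            # keep the input exact
    return x if x.is_rational else sp.floor(x)

print(rational_part(sp.Rational(7, 3)))   # 7/3  (already rational)
print(rational_part(sp.sqrt(2) + 1))      # 2    (floor of an irrational)
```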
{ "language": "en", "url": "https://math.stackexchange.com/questions/832170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
What was the book that opened your mind to the beauty of mathematics? Of course, I am generalising here. It may have been a teacher, a theorem, self pursuit, discussions with family / friends / colleagues, etc. that opened your mind to the beauty of mathematics. But this question is specifically about which books inspired you. For me, Euler, master of us all is right up there. I am interested in which books have inspired other people.
W. W. Sawyer's 'Prelude to Mathematics' is a great book that really opened my eyes. It can be read almost without any knowledge of mathematics. I also believe that any mathematician ought to read 'Flatland'. It is a beautiful story, and it gave me my first real intuition about higher dimensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 51, "answer_id": 6 }
What was the book that opened your mind to the beauty of mathematics? Of course, I am generalising here. It may have been a teacher, a theorem, self pursuit, discussions with family / friends / colleagues, etc. that opened your mind to the beauty of mathematics. But this question is specifically about which books inspired you. For me, Euler, master of us all is right up there. I am interested in which books have inspired other people.
Apostol's Introduction to Analytic Number Theory. It's a beautifully written and self contained book. Even if you cannot solve all the problems, just reading the text will take you a long way. One of the best number theory books I've seen.
{ "language": "en", "url": "https://math.stackexchange.com/questions/832223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 51, "answer_id": 38 }
Closest Point to a vector in a subspace Given $v = [0, -3, -8, 3]$, find the closest point to $v$ in the subspace $W$ spanned by $[6, 6, 6, -1]$ and $[6, 5, -1, 60]$. This is a web homework problem and I have used the formula (DotProduct(v, w.1)/DotProduct(w.1, w.1))*w.1 + (DotProduct(v, w.2)/DotProduct(w.2, w.2))*w.2, but the computer said the answer I got was wrong. If this isn't the formula then I'm not sure what is. I have triple-checked my calculations as well. Any help would be greatly appreciated.
The projection of a vector $v$ onto $\mathrm{span}(w_1, w_2)$ is the sum of the projections of $v$ onto $w_1$ and $w_2$, provided $w_1$ and $w_2$ are orthogonal. If they're not, you can find other vectors that span the same subspace using the Gram-Schmidt process. For example: $$\mathrm{proj}_{w_1} v = \frac{\langle v, w_1\rangle}{\langle w_1, w_1\rangle}w_1$$ The easy way to remember this formula is to think: if the projection goes in $w_1$'s direction, then it should be natural that $w_1$ appears the most in the formula. Think of $w_1$ as guiding $v$ in the right direction. Having this in mind, the exercise is just a calculation. (Also, I strongly suggest you look a bit into MathJax and LaTeX, so you can write formulas and such here properly; otherwise your question is not very visually appealing to people around here.)
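A quick numeric sketch of the calculation (an illustration added to this answer; it first checks that the two spanning vectors are in fact orthogonal, so the two-term projection formula applies):

```python
import numpy as np

v  = np.array([0., -3., -8., 3.])
w1 = np.array([6.,  6.,  6., -1.])
w2 = np.array([6.,  5., -1., 60.])

assert w1 @ w2 == 0                    # w1 and w2 are orthogonal here

def proj(v, w):
    # projection of v onto the line spanned by w
    return (v @ w) / (w @ w) * w

p = proj(v, w1) + proj(v, w2)          # closest point to v in span{w1, w2}
print(p)

# sanity check: the residual is orthogonal to the subspace
assert abs((v - p) @ w1) < 1e-9 and abs((v - p) @ w2) < 1e-9
```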
{ "language": "en", "url": "https://math.stackexchange.com/questions/832279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }