Finding asymptotic relationship between: $\frac {\log n}{\log\log n} \overset{?}{=} \log (n-\log n)$ Given $f(n)=\frac {\log n}{\log\log n}$, $g(n)= \log (n-\log n)$, what is the relationship between them, i.e. $f(n)=K(g(n))$ where "K" could be $\Omega,\Theta,O$? I thought of taking the log of both sides and seeing what we get: $\log\frac {\log n}{\log(\log n)}= \log(\log n) -\log[\log(\log n)] \overset{?}{=} c\log(\log (n-\log n))$ It looks like the RHS is smaller: $\log(\log (n-\log n)) \le \log(\log n)$, and since $\log[\log(\log n)] < \log(\log n)$, then $\log(\log (n-\log n)) \le \log(\log n) - \log[\log(\log n)]$. But the answer is actually $O$, and I can't find a way to show it...
Look at $g(n)$: $$ g(n) = \log(n-\log n) = \log n + \log\left(1-\frac{\log n}{n}\right) = \log n + o(1) $$ using the fact that $\frac{\log n}{n} \xrightarrow[n\to\infty]{}0$ and $\frac{\ln(1+x)}{x} \xrightarrow[x\to 0]{}1$. So $g(n) = \Theta(\log n)$. But $f(n) = \frac{\log n}{\log\log n} = o(\log n)$. Hence $f(n) = o(g(n))$, and in particular $f(n) = O(g(n))$ but not $\Theta(g(n))$.
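A quick numerical sanity check (a sketch in Python; not part of the proof): the ratio $f(n)/g(n)$ should drift toward $0$, consistent with $f(n)=o(g(n))$.

```python
# Check that f(n)/g(n) tends to 0, i.e. f(n) = o(g(n)).
from math import log

for n in [10**2, 10**4, 10**8, 10**16]:
    f = log(n) / log(log(n))
    g = log(n - log(n))
    print(n, f / g)
# The ratios decrease toward 0 (slowly, since log log n grows slowly).
```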
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Easy way to compute logarithms without a calculator? I would need to be able to compute logarithms without using a calculator, just on paper. The result should be a fraction so it is the most accurate. For example I have seen this in math class calculated by one of my class mates without the help of a calculator. $$\log_8128 = \frac 73$$ How do you do this?
To evaluate $\log_8 128$, let $$\log_8 128 = x$$ Then by definition of the logarithm, $$8^x = 128$$ Since $8 = 2^3$ and $128 = 2^7$, we obtain \begin{align*} (2^3)^x & = 2^7\\ 2^{3x} & = 2^7 \end{align*} If two exponentials with the same base are equal, then their exponents must be equal. Hence, \begin{align*} 3x & = 7\\ x & = \frac{7}{3} \end{align*} Check: If $x = \frac{7}{3}$, then $$8^x = 8^{\frac{7}{3}} = (8^{\frac{1}{3}})^7 = 2^7 = 128$$
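When both the base and the argument are powers of a common prime, this exponent comparison is mechanical; here is a small sketch (the helper names are just for illustration):

```python
# Compute log_b(a) exactly when a and b are powers of a common prime p.
from fractions import Fraction

def prime_exponent(n, p):
    """Return e with n == p**e, or None if n is not a pure power of p."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e if n == 1 else None

def exact_log(base, arg, p):
    eb, ea = prime_exponent(base, p), prime_exponent(arg, p)
    if eb is None or ea is None:
        raise ValueError("base and argument must both be powers of p")
    return Fraction(ea, eb)

print(exact_log(8, 128, 2))  # 7/3
```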
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 9, "answer_id": 2 }
Matrix rank and number of linearly independent rows I wanted to check if I understand this correctly, or maybe it can be explained in a simpler way: why is matrix rank equal to the number of linearly independent rows? The simplest proof I can come up with is: matrix rank is the number of vectors of the basis of vector space spanned by matrix rows (row space). All bases of a given vector space have the same size. Elementary operations on the matrix don't change its row space, and therefore its rank. Then we can reduce it to row echelon form (reduced row echelon form is not necessary, because I think the non-zero rows in row echelon form are linearly independent already). So we might pick only the rows that are non-zero and still get the same row space (adding or removing arbitrary number of zero rows don't change a thing), and because these rows are linearly independent, they are basis for the row space. As mentioned above, all bases have the same size, so number of linearly independent vectors is equal to matrix rank (the dimension - size of basis - of row space). Is it correct? Didn't I make it too complicated?
Two facts about elementary row operations are useful to resolve this question: * Elementary row operations alter the column space but do not alter the linear dependences among the columns. For example, if column 10 is $4$ times column 3 minus $7$ times column 5, then after doing any elementary row operation, column 10 will still be $4$ times column 3 minus $7$ times column 5. It's not too hard to figure out why that's true. Therefore elementary row operations do not alter the number of linearly independent columns. * Elementary row operations do not alter the row space. It's also not hard to see why that is true. Therefore elementary row operations do not alter the number of linearly independent rows. After a matrix is fully reduced, it's not hard to see that both the number of linearly independent columns and the number of linearly independent rows equal the number of pivot elements; therefore the number of linearly independent columns equals the number of linearly independent rows. Since the row operations don't change the number of linearly independent columns or the number of linearly independent rows, those two quantities must be the same in every matrix. Consequently one can define the "rank" of a matrix either * as the dimension of the column space; or * as the dimension of the row space, and it's the same thing either way.
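A numerical illustration of the conclusion (a sketch assuming numpy): the rank computed from the columns of $A$ equals the rank computed from the rows, i.e. from the columns of $A^T$.

```python
# Row rank equals column rank: matrix_rank(A) == matrix_rank(A.T).
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(6, 4)) @ rng.integers(-5, 5, size=(4, 7))  # rank <= 4
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))  # equal numbers
```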
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What are the invariants of a number field? How is defined an invariant of a field? Given a certain field extension $L/K$, is it related with the Galois group ${\rm Gal}(L/K)$? In the case of number fields, which are the invariants associated to these extensions? For example, I remember that the relative discriminant is an invariant, but any other examples? Is it possible that the maximum power of an element $a$, i.e. the maximum $h$ such that $a=b^h$ for some element $b$, is again an invariant? I know the question is very broad, so any bibliografy/reference suggestion would be great too! Thanks in advance
I will try it with a short answer (to the title question), only giving a few further invariants. Besides the discriminant, the ideal class group and its order, the class number, are also invariants, as is the ring of integers of the number field in general. Furthermore Minkowski's bound, and the invariants involved there, i.e., the number of real and complex embeddings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Possible ways to have $n$ bounded natural numbers with a fixed sum Is it possible to count in an easy way the solutions of the equations and inequalities $x_1+x_2+\cdots+x_n = S$ and $x_i\leq c_i$ if all $x_i$ and $c_i$ are natural numbers?
We know that the number of non-negative integral solutions to the system: $\begin{cases} x_1+x_2+x_3+\dots+x_n = S\\ 0\leq x_1\\ 0\leq x_2\\ \vdots\\ 0\leq x_n\end{cases}$ is $\binom{S+n-1}{n-1}$, seen by stars-and-bars counting. In the event that you do not consider zero to be a natural number, then it is as though you have a lower bound of $1\leq x_i$ for each, so make a change of variable $y_i=x_i-1$ to get it into the above form. Denote all solutions where upper bounds are ignored as $\Omega$ and designate this as our universal set. With upper bounds on each, continue via inclusion-exclusion. Let $A_i\subset \Omega$ be the event that $x_i>c_i$, i.e. $x_i\geq c_i+1$. The set of solutions that you are interested in then is $(\bigcup A_i)^c$. The total is $|(\bigcup A_i)^c| = |\Omega|-|\bigcup A_i|$. This can be broken apart further as you wish, but becomes tedious to write in a compact formula. Counting $|A_1\cap A_2\cap A_3|$ for example means counting solutions to the system: $\begin{cases} x_1+x_2+\dots+x_n = S\\ c_1+1\leq x_1\\ c_2+1\leq x_2\\ c_3+1\leq x_3\\ 0\leq x_4\\ \vdots\\ 0\leq x_n\end{cases}~~~~~~\Leftrightarrow \begin{cases} y_1+y_2+\dots+y_n = S-c_1-c_2-c_3-3\\ 0\leq y_1\\ 0\leq y_2\\ \vdots\\ 0\leq y_n\end{cases}$ using the change of variable $y_i = x_i-c_i-1$ for $i\in\{1,2,3\}$ and $y_i = x_i$ otherwise. An attempt to write it in a compact form, using indexing set notation for intersections and sums, and letting $N=\{1,2,3,\dots,n\}$: $$\sum\limits_{\Delta\subseteq N}\left((-1)^{|\Delta|}|\bigcap_{i\in\Delta}A_i|\right) = \sum\limits_{\Delta\subseteq N}\left((-1)^{|\Delta|}\binom{S+n-1-|\Delta|-\sum\limits_{i\in\Delta}c_i}{n-1}\right)$$ of course, using that $\binom{n}{r} = 0$ whenever $n<r$ or $n<0$ or $r<0$, i.e. the original form of the binomial coefficient, not the generalized form.
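Here is a sketch of this inclusion-exclusion count in code (Python; the names are illustrative), checked against brute-force enumeration for a small instance:

```python
# Count integer solutions of x_1 + ... + x_n = S with 0 <= x_i <= caps[i].
from itertools import combinations, product
from math import comb

def bounded_solutions(S, caps):
    n = len(caps)
    total = 0
    for k in range(n + 1):
        for delta in combinations(range(n), k):
            top = S + n - 1 - k - sum(caps[i] for i in delta)
            total += (-1) ** k * (comb(top, n - 1) if top >= n - 1 else 0)
    return total

caps, S = (3, 5, 2), 6
brute = sum(1 for xs in product(*(range(c + 1) for c in caps)) if sum(xs) == S)
print(bounded_solutions(S, caps), brute)  # the two counts agree
```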
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is a self-adjoint operator continuous on its domain? Let $H$ be a Hilbert space, and $A : D(A) \subset H \rightarrow H$ be an unbounded linear operator, with a domain $D(A)$ being dense in H. We assume that $A$ is self-adjoint, that is $A^*=A$. Since $A$ is unbounded, we can find a sequence $x_n$ in the domain such that $x_n \rightarrow x$ with $x \notin D(A)$, meaning that $Ax_n$ does not converge. My question is the following: Is it true that $A$ is continuous relatively to its domain? More precisely: given a converging sequence $x_n \rightarrow x$ for which both the sequence $x_n$ and the limit point $x$ lie in $D(A)$, can we conclude that $Ax_n \rightarrow Ax$? Or is there any counter-example? I know that self-adjoint operators are closed, in the sense that their graph $D(A)\times A(D(A))$ is closed in $H \times H$, but continuity on the domain is something else.
If $A$ is unbounded, that means that for any $n$ there exists a $v\in D(A)$ such that $\|Av\|> n\|v\|$. Dividing such a $v$ by $\|v\|$, we may assume $\|v\|=1$, so we can find a sequence $(v_n)$ of unit vectors in $D(A)$ such that $\|Av_n\|>n$ for each $n$. The sequence $(v_n/n)$ then converges to $0$ but $Av_n/n$ does not converge to $A(0)=0$ since $\|Av_n/n\|>1$ for all $n$. (More generally, the same argument shows that if $X$ and $Y$ are normed vector spaces, then a linear map $A:X\to Y$ is continuous iff it is bounded.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Average $y$ from a range of $x$ in a parabola Given a parabolic/quadratic formula such as $ax^2 + bx + c =y$, how do I get the average value of $y$ given a range of $x$ ($x_{min}$ to $x_{max}$). Real world example: if my formula represents the trajectory arc of a thrown object under the force of gravity, where $x$ is time and $y$ is height, I'm looking for the object's average height within a timespan.
The average of a function $f(x)$ in the interval $a$ to $b$ is given as $$\frac{1}{b-a} \int_a^b f(x) dx$$
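For the quadratic in the question this integral has a simple closed form; a sketch (with illustrative variable names, using $A,B,C$ for the coefficients to avoid clashing with the interval endpoints):

```python
# Average of A*x^2 + B*x + C over [x1, x2], from (1/(x2-x1)) * integral:
# average = A*(x2^2 + x1*x2 + x1^2)/3 + B*(x1 + x2)/2 + C.
def average_quadratic(A, B, C, x1, x2):
    return A * (x2**2 + x1 * x2 + x1**2) / 3 + B * (x1 + x2) / 2 + C

# e.g. height h(t) = -4.9 t^2 + 20 t for t in [0, 2] seconds
print(average_quadratic(-4.9, 20, 0, 0, 2))  # average height ~ 13.47
```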
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there any integral for the Golden Ratio? I was wondering about important/famous mathematical constants, like $e$, $\pi$, $\gamma$, and obviously the golden ratio $\phi$. The first three ones are really well known, and there are lots of integrals and series whose results are simply those constants. For example: $$ \pi = 2 e \int\limits_0^{+\infty} \frac{\cos(x)}{x^2+1}\ \text{d}x$$ $$ e = \sum_{k = 0}^{+\infty} \frac{1}{k!}$$ $$ \gamma = -\int\limits_{-\infty}^{+\infty} x\ e^{x - e^{x}}\ \text{d}x$$ Is there an interesting integral* (or some series) whose result is simply $\phi$? * Interesting integral means that things like $$\int\limits_0^{+\infty} e^{-\frac{x}{\phi}}\ \text{d}x$$ are not a good answer to my question.
$$\int_{0}^{\phi}(1-x+x^2)^{1/\phi}(1-\phi^2x+\phi^3x^2)\mathrm dx=2^{\phi}\cdot\phi$$ A bit over-crowded in terms of $\phi$
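One way to see why this works (my own check, not spelled out in the answer): using $\phi^2=\phi+1$, one can verify that $\frac{d}{dx}\left[x(1-x+x^2)^{\phi}\right]=(1-x+x^2)^{1/\phi}(1-\phi^2x+\phi^3x^2)$, and since $1-\phi+\phi^2=2$, the integral equals $\phi\cdot 2^{\phi}$. A numerical check (a sketch assuming scipy):

```python
# Numerically verify the golden-ratio integral identity.
from scipy.integrate import quad

phi = (1 + 5 ** 0.5) / 2
lhs, _ = quad(lambda x: (1 - x + x**2) ** (1 / phi)
                        * (1 - phi**2 * x + phi**3 * x**2), 0, phi)
print(lhs, 2**phi * phi)  # both ~ 4.9667
```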
{ "language": "en", "url": "https://math.stackexchange.com/questions/1653979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "166", "answer_count": 42, "answer_id": 25 }
Infinite intersection of open sets need not be open The following is the property of an open set: The intersection of a finite number of open sets is open. Why is it a finite number? Why can't it be infinite?
Let $U_n=(-1/n,1/n)$ for $n=1,2,3,\dots$. Then $U_n$ is open for all $n$. Suppose $x\in\cap_n U_n$. Then if $x>0$ there is an $n$ such that $\frac1n<x$ (since $\frac1n\to0$). Thus $x\not\in U_n$ for that $n$. Same if $x<0$. Thus $x\not\in\cap_n U_n$. Now $0\in U_n$ for all $n$. Thus $\cap_n U_n=\{0\}$ which is not open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
I don't understand how Kirchhoff's Theorem can be true Kirchhoff's Matrix-Tree theorem states that the number of spanning trees of a graph G is equal to any cofactor of its Laplacian matrix. Wouldn't this imply that all cofactors of a Laplacian matrix must be the same, as otherwise we could get a different number of spanning trees for the same graph depending on which cofactor we took? But that's not necessarily the case. Example: Consider the simple, undirected cycle graph on 4 nodes (1 is connected to 2 and 4, 2 is connected to 1 and 3, 3 is connected to 2 and 4, 4 is connected to 3 and 1). When I find the cofactor along A11 I get 21, but when I find the cofactor along A12 I get 9. These are different! So this says that this graph has both 21 and 9 spanning trees? Am I understanding this incorrectly?
I think you've made a mistake computing the Laplacian matrix. I find \begin{bmatrix}2 & -1 &0 &-1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{bmatrix} You can check that every cofactor equals $4$, which by inspection is the right number of spanning trees: deleting any one of the four cycle edges leaves a spanning tree.
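A sketch of that check (assuming numpy; all sixteen cofactors of the Laplacian agree):

```python
# Every cofactor of the Laplacian of the 4-cycle equals 4.
import numpy as np

L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]])

for i in range(4):
    for j in range(4):
        minor = np.delete(np.delete(L, i, axis=0), j, axis=1)
        cof = (-1) ** (i + j) * np.linalg.det(minor)
        print(round(cof))  # prints 4 sixteen times
```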
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}$. Find $$\lim_{n\to\infty} \frac{(n!)^{1/n}}{n}.$$ I don't know how to start. Hints are also appreciated.
Let $f_n=\frac{n!}{n^n}$, so that $f_{n+1}= \frac{(n+1)!}{(n+1)^{n+1}}$ and hence $\frac{f_{n+1}}{f_n}=\left(1+\frac{1}{n}\right)^{-n}$. Therefore $\lim_{n\to\infty}\frac{f_{n+1}}{f_n}=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{-n}=\frac{1}{e}>0$. We know that if $(f_n)$ is a sequence with $f_n>0$ for all $n$, and $\lim_{n\to\infty}\frac{f_{n+1}}{f_n}=l$ with $l>0$, then $\lim_{n\to\infty}{f_n}^{\frac{1}{n}}=l$. Using the above theorem we have $\lim_{n\to\infty} \frac{(n!)^{\frac{1}{n}}}{n}=\frac{1}{e}$
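A numerical sanity check of the limit (a sketch; `lgamma` computes $\log n!$ without overflow):

```python
# (n!)^(1/n) / n should approach 1/e ~ 0.36788.
from math import lgamma, exp

for n in [10, 100, 1000, 100000]:
    print(n, exp(lgamma(n + 1) / n) / n)
```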
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
A milkman has $80\%$ of milk in his stock of $800$ litres of adulterated milk. How much $100\%$ milk to be added to give certain purity? Problem: A milkman has $80\%$ of milk in his stock of $800$ litres of adulterated milk. How much $100\%$ milk to be added to it so that the purity of milk is between $90\%$ and $95\%$ Let $x$ litres $100\%$ pure milk need to be added in $800$ litres of milk. Please suggest further how to proceed not getting any idea on this.
$80\%$ of the $800$ litres is milk. That gives: you have $640$ litres of pure milk. Now, $\frac{640+x}{800+x}>0.9$. That gives $x>800$ litres. Since the purity should also be less than $0.95$, we need $\frac{640+x}{800+x}<0.95$, which gives $x<2400$ litres.
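A quick check of the endpoints (a sketch):

```python
# Purity at the boundary amounts: 0.9 at x = 800 and 0.95 at x = 2400.
for x in (800, 2400):
    print(x, (640 + x) / (800 + x))
```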
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
If $(x_1-a)(x_2-a)\cdots(x_n-a)=k^n$ prove by using the laws of inequality that $x_1x_2 \cdots x_n\geq (a+k)^n$ If $x_i>a>0$ for $i=1,2\cdots n$ and $(x_1-a)(x_2-a)\cdots(x_n-a)=k^n$, $k>0$, prove by using the laws of inequality that $$x_1x_2 \cdots x_n\geq (a+k)^n$$. Attempt: If we expand $(x_1-a)(x_2-a)\cdots(x_n-a)=k^n$ in the LHS, we get $x_1x_2 \cdots x_n -a\sum x_1x_2\cdots x_{n-1} +a^2\sum x_1x_2\cdots x_{n-2} - \cdots +(-1)^na^n=k^n$. But it becomes cumbersome to go further. Please help me.
Using convenient notation, we will prove a theorem from which the result of the question may be easily derived. Given any positive real numbers $ x_1, x_2,...,$ we will write their initial geometric means as $$g_n:=(x_1\cdots x_n)^{1/n}\quad(n=1,2,...).$$ Theorem.$\quad$Given any $a\geqslant0$, and for each $n$, the following proposition (which we will denote by $P_n$) holds:$$(x_1+a)\cdots(x_n+a)\geqslant (g_n+a)^n\quad\text{for all}\quad x_1,x_2,...>0.$$Proof.$\quad$We proceed by Cauchy induction. That is, we establish (1) $P_1\,;\;$ (2) $P_k\Rightarrow P_{2k}$ for any $k=1,2,...;$ and (3) $P_{k+1}\Rightarrow P_k$ for any $k=1,2,...$. (1)$\quad$Clearly the inequality (actually equality) holds for $n=1$ since $g_1=x_1$ in this case. (2)$\quad$To prove this, let us suppose that $P_n$ has been established for the case $n=k$:$$(x_1+a)\cdots(x_k+a)\geqslant(g_k+a)^k.$$A corresponding result holds also for $x_{k+1},...,x_{2k}$ , which we write as$$(x_{k+1}+a)\cdots(x_{2k}+a)\geqslant\left[\left(\frac{x_1\cdots x_{2k}}{x_1\cdots x_k}\right)^{1/k}+a\right]^k=\left(\frac{g_{2k}^2}{g_k}+a\right)^k.$$Composing the above two inequalities gives$$(x_1+a)\cdots(x_{2k}+a)=(x_1+a)\cdots(x_k+a)\cdot(x_{k+1}+a)\cdots(x_{2k}+a)$$$$\geqslant(g_k+a)^k(g_{2k}^2/g_k+a)^k$$$$\qquad=[g_{2k}^2+a(g_k+g_{2k}^2/g_k)+a^2]^k$$$$\geqslant(g_{2k}+a)^{2k},$$where the last inequality follows by observing that $$ g_k+g_{2k}^2/ g_k=(\surd g_k-g_{2k}/\surd g_k)^2+2g_{2k}\geqslant 2g_{2k}$$(or by applying AM–GM to $g_k$ and $g_{2k}^2/g_k$). Thus we have established $P_k\Rightarrow P_{2k}$ . (3)$\quad$It remains to show $P_{k+1}\Rightarrow P_k$ . Suppose, then, that $P_{k+1}$ holds for some $k\in\{1,2,...\}$: $$(x_1+a)\cdots(x_k+a)(x_{k+1}+a)\geqslant(g_{k+1}+a)^{k+1}$$for all $x_1,x_2,...>0.$ Then, in particular, it holds in the case when $x_{k+1}=g_k$. In this case,$$g_{k+1}=(g_k^kx_{k+1})^{1/(k+1)}=g_k.$$ Therefore, dividing by the factor $g_k+a\,$ (or $x_{k+1}+a$) gives $(x_1+a)\cdots(x_k+a)\geqslant(g_k+a)^k,$ which is the statement of $P_k$.$\quad\square$ The result of the question can be obtained via the replacement of $x_i+a$ by $x_i\;\,(i=1,...,n)$ and of $g_n$ by $k$ in the statement of $P_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Infinite dimensional topological vectorspaces with dense finite dimensional subspaces Consider $\mathbb R$ as a $\mathbb Q$ vector space. Using the usual metric on $\mathbb R$, we find: * *$\mathbb Q \subset \mathbb R$ is dense and one dimensional (indeed every non-zero subspace appears to be dense). *$\mathbb R$ is an infinite dimensional $\mathbb Q$ vectorspace. *Addition and scalar multiplication are continuous (addition on $\mathbb R \times \mathbb R$ and multiplication on $\mathbb Q \times \mathbb R$). *The metric is translation invariant. Can there exist $\mathbb R$ (or $\mathbb C$) vector spaces that satisfy conditions 1-3? Meaning infinite dimensional topological vector spaces that have dense finite dimensional subspaces. Is it possible to metricise these spaces? If so can condition 4 also be put in?
(1) Note that the topology on a finite-dimensional Hausdorff $\mathbf R$-vector space is uniquely determined and is always completely metrizable. As a complete metrizable space, it is closed in every Hausdorff topological vector space into which it is embedded. Hence, finite-dimensional subspaces of Hausdorff topological vector spaces are always closed (and hence not dense if they are proper subspaces). (2) If we drop the Hausdorff condition, an example can be given: Let $V$ be any infinite-dimensional $\mathbf R$ vector space and $\tau = \{\emptyset, V\}$ the trivial topology. Then every non-empty subset is dense (so 1. is true) and 2. and 3. are trivially true. Of course, a non-Hausdorff space cannot be metrizable, but it is pseudo-metrizable by the pseudo-metric $d(x,y)=0$ for all $x,y \in V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Analyticity of $\tan(z)$ and radius of convergence Define $\tan(z)=\dfrac{\sin(z)}{\cos(z)}$ Where is this function defined and analytic? My answer: Our function is analytic wherever it has a convergent power series. Since (I am assuming) $\sin(z)$ and $\cos(z)$ are analytic the quotient is analytic wherever $\cos(z) \not\to 0$??? Is there more detail to this that I am missing? Without Cauchy is there a way to determine analyticity etc... Further more, once we have found the first few terms as I have $$z+\frac{z}{3}+\frac{2}{15}z^5$$ How do we make an estimate of the radius of convergence? Do I make an observation that the terms are getting close to $0$ and say perhaps the $nth$ root is heading there as well, $\therefore$ $R=\infty$. I'm fairly lost on this part. Thanks for your help!
The Taylor series of $\tan z$ for $z\in \mathbb{C}$ is complicated: $\displaystyle \tan z=z+\frac{z^{3}}{3}+\frac{2}{15}z^{5}+\ldots + (-1)^{n-1}\frac{2^{2n}(2^{2n}-1)B_{2n}}{(2n)!}z^{2n-1}+\ldots \:, \: |z|<\frac{\pi}{2}$ where $B_{n}$ denotes the Bernoulli numbers. The radius of convergence $\frac{\pi}{2}$ (the distance to the nearest zero of $\cos z$) can be justified by $$4\sqrt{n\pi} \left( \frac{n}{\pi e} \right)^{2n} < (-1)^{n-1} B_{2n} <5\sqrt{n\pi} \left( \frac{n}{\pi e} \right)^{2n}$$ for $n\geq 2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How lim sups here converted to appropriate limits? This question is from Rudin's Principles of Mathematical Analysis : Consider the sequence $\{a_n\}$: $\{\frac 12, \frac 13, \frac 1{2^2}, \frac1{3^2}, \frac 1{2^3}, \frac1{3^3},\frac 1{2^4}, \frac1{3^4},\dots\}$ so, $$\limsup_{n\rightarrow \infty}\frac{a_{n+1}}{a_n}=\lim_{n\rightarrow \infty}\frac 12\left(\frac32\right)^n$$ and $$\limsup_{n\rightarrow \infty}(a_n)^{\frac1n}=\lim_{n\rightarrow \infty}\left(\frac1{2^n}\right)^{\frac1 {2n}}$$ I can't understand how did we get expressions like $\lim_{n\rightarrow \infty}\frac 12\left(\frac32\right)^n$ or $\lim_{n\rightarrow \infty}\left(\frac1{2^n}\right)^{\frac1 {2n}}$. Limsup is defined in book as follows: Let $s_n$ be a sequence of real numbers. Let $E$ be the set of numbers $x$ (in the extended real number system) such that $s_{n_k}\rightarrow x$ for some subsequence $s_{n_k}$. We denote upper limit of $s_n$ as $\limsup_{n\rightarrow \infty} s_n=\sup E.$
Assuming the sequences start off with $n = 0$, we have $$\frac {a_{2k+1}}{a_{2k}} = \frac{\frac 1 {3^k}}{\frac 1 {2^k}} = \left(\frac 2 3\right)^k \tag{1}$$ $$\frac {a_{2k+2}}{a_{2k+1}} = \frac{\frac 1 {2^{k+1}}}{\frac 1 {3^k}} = \frac 1 2\left(\frac 3 2\right)^k\tag{2}$$ These two sequences together contain every element of $\frac {a_{n+1}}{a_n}$. (1) converges to $0$, (2) diverges to $\infty$ (converges to $\infty$ in the extended reals). Any subsequence of $\frac {a_{n+1}}{a_n}$ will have $0$ as a limit point if and only if it contains an infinite number of elements of (1) and will have $\infty$ as a limit point (or will diverge) if and only if it contains an infinite number of elements of (2). No other limit points are possible. Since $\infty > 0$, (2) is a subsequence of $\frac {a_{n+1}}{a_n}$ which converges to the highest limit, so the limit supremum will be the limit of this subsequence. Similar remarks apply to the other limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Distance in metric space, triangle inequality problem Let $(X, d)$ be a metric space. Let $t\in (0,1]$. Show that $d^t: X\times X\to\mathbb{R}$ $$d^t (x,y) := d(x,y)^t, \forall x,y\in X$$ is also a distance function. Problematic bit is the triangle inequality, when $0<t<1$ $$d (x,y)^t\leq d (x,z)^t+d (z,y)^t$$ Not sure how to tackle this one: we know that if $0<x<1, y\geq 1$ and $0<t<1$, then $x<x^t<1$ and $1\leq y^t\leq y$. So, if $d(x,y)<1$ and at least one of the right hand distances is $\geq 1$, then everything is fine. In general, I think I am overlooking something about such problems, I always want to systematically work through every possible case. Doesn't seem to be too efficient. Please, give hints on how to solve the problem.
I think I got it now (hope I'm not confusing things again). Setting $a := d(x,z)/d(x,y), b:= d(z,y)/d(x,y)$ the problem reduces to showing $$ a^t + b^t \geq 1$$ If $0 < x < 1$, for $0 < t < 1$ we have $ x < x^t$, which gives $$ \left(\frac{a}{a+b}\right)^t + \left(\frac{b}{a+b}\right)^t \geq \left(\frac{a}{a+b}\right) + \left(\frac{b}{a+b}\right) = 1$$ which shows $$ a^t + b^t \geq (a+b)^t \geq 1$$ as $a+b\geq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1654785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many integer-sided right triangles are there whose sides are combinations? How many integer-sided right triangles exist whose sides are combinations of the form $\displaystyle \binom{x}{2},\displaystyle \binom{y}{2},\displaystyle \binom{z}{2}$? Attempt: This seems like a hard question, since I can't even think of one example to this. Mathematically we have, $$\left(\dfrac{x(x-1)}{2} \right)^2+\left (\dfrac{y(y-1)}{2} \right)^2 = \left(\dfrac{z(z-1)}{2} \right)^2\tag1$$ where we have to find all positive integer solutions $(x,y,z)$. I find this hard to do. But here was my idea. Since we have $x^2(x-1)^2+y^2(y-1)^2 = z^2(z-1)^2$, we can try doing $x = y+1$. If we can prove there are infinitely many solutions to, $$(y+1)^2y^2+y^2(y-1)^2 = z^2(z-1)^2\tag2$$ then we are done.
Solving $(1)$ for $z$, we have, $$z = \frac{1\pm\sqrt{1\pm4w}}{2}\tag3$$ where, $$w^2 = (x^2-x)^2+(y^2-y)^2\tag4$$ It can be shown that $(4)$ has infinitely many integer solutions. (Update: Also proven by Sierpinski in 1961. See link given by MXYMXY, Pythagorean Triples and Triangular Numbers by Ballew and Weger, 1979.) However, the problem is you still have to solve $(3)$. I found with a computer search that with $x<y<1000$, the only integers are $x,y,z = 133,\,144,\,165$, so, $$\left(\dfrac{133(133-1)}{2} \right)^2+\left (\dfrac{144(144-1)}{2} \right)^2 = \left(\dfrac{165(165-1)}{2} \right)^2$$ P.S. If you're curious about rational solutions, then your $(1)$ and $(2)$ have infinitely many.
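A sketch of the computer search mentioned above: look for $x<y<1000$ with $T(x)^2+T(y)^2=T(z)^2$, where $T(n)=n(n-1)/2$.

```python
from math import isqrt

def T(n):
    return n * (n - 1) // 2

for x in range(2, 1000):
    for y in range(x + 1, 1000):
        w2 = T(x) ** 2 + T(y) ** 2
        w = isqrt(w2)
        if w * w == w2:
            # need z with z*(z-1)/2 == w, i.e. z = (1 + sqrt(1 + 8w)) / 2
            d = isqrt(1 + 8 * w)
            if d * d == 1 + 8 * w and (1 + d) % 2 == 0:
                print(x, y, (1 + d) // 2)  # prints 133 144 165
```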
{ "language": "en", "url": "https://math.stackexchange.com/questions/1655884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Determining certain units in a local ring. I've been stuck on this problem for a while: Let R be a commutative ring with $1 \neq 0$. If R has a unique maximal ideal (i.e. R is local), then either $x$ or $1-x$ (or both) are units in R.
Suppose $I=(x)$ and $J=(1-x)$ are proper ideals. Every proper ideal is contained in a maximal ideal, but there is only one, say $M$. Then $I \subset M$ and $J \subset M$ and so $x, (1-x) \in M \implies x+(1-x)=1 \in M$ which is absurd. Then $I$ or $J$ or both are not proper, e.g. $I=R \implies \exists a \in R$ such that $ax=1 \implies x$ is a unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Finding the expected time for a stock to go from $\$25$ to $\$18$ given there is a support level at $20 with upward and downward biases. This problem is adapted from Stochastic Calculus and Financial Applicationsby J. Michael Steele, Springer, New York, 2001, Chapter 1, Section 1.6, page 9. Consider a naive model for a stock that has a support level of \$20/share because of a corporate buy-back program. Suppose also that the stock price moves randomly with a downward bias when the price is above \$20, and randomly with an upward bias when the price is below \$20. To make the problem concrete, we let $Y_n$ denote the stock price at time $n$, and we express our stock support hypothesis by the assumptions that \begin{eqnarray*} \Pr[ Y_{n+1} = 21 | Y_{n} = 20] &=& 9/10 \\ \Pr[ Y_{n+1} = 19 | Y_{n} = 20] &=& 1/10 \end{eqnarray*} We then reflect the downward bias at price levels above \$20 by requiring that for $k > 20$: \begin{eqnarray*} \Pr[ Y_{n+1} = k+1 | Y_{n} = k ] &=& 1/3 \\ \Pr[ Y_{n+1} = k-1 | Y_{n} = k ] &=& 2/3. \end{eqnarray*} We then reflect the upward bias at price levels below \$20 by requiring that for $k < 20$: \begin{eqnarray*} \Pr[ Y_{n+1} = k+1 | Y_{n} = k ] &=& 2/3 \\ \Pr[ Y_{n+1} = k-1 | Y_{n} = k ] &=& 1/3 \end{eqnarray*} I would like to calculate the expected time, $T_{25,18}$ for the stock to fall from $\$25$ through the support level all the way down to $\$18$. My first step is to use first-step analysis (no pun intended). This will give me a recursive set of 9 equations. However, one of the hints given is to show that the expected time to go from $\$25$ to $\$20$ is $T_{25,20} = 15$ steps and that $T_{21,20} = 3$. Is it claimed that these are trivial to find. However, I am really not sure how to do this part. There appears to be no upper boundary above $\$25$ -- Does anyone have any hints as to how I can find this? Thanks.
Let $\tau$ be the time to move down one level starting from a price above \$20 (by the homogeneity of the transition probabilities above \$20, its distribution is the same for every such level). First-step analysis gives $$\mathbb E[\tau] = \frac23\cdot 1 + \frac13\left(1 + 2\,\mathbb E[\tau]\right)\implies \mathbb E[\tau] =3,$$ since with probability $\frac13$ the price first moves up and must then make two net one-level descents. It follows that $T_{21,20}=\mathbb E[\tau]=3$ and $T_{25,20}=5\,\mathbb E[\tau]=15$. The quantities $T_{20,19}$ and $T_{19,18}$ may be computed by a similar argument, and $$T_{25,18} = T_{25,20}+T_{20,19}+T_{19,18}. $$
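Carrying the same first-step analysis through the support level gives $T_{20,19}=37$ and $T_{19,18}=77$, hence $T_{25,18}=15+37+77=129$; a Monte Carlo sketch (Python, illustrative names) agrees:

```python
# Simulate the chain and estimate the expected hitting time of $18 from $25.
import random

def hitting_time(start=25, target=18):
    y, t = start, 0
    while y != target:
        if y > 20:
            y += 1 if random.random() < 1/3 else -1   # downward bias
        elif y == 20:
            y += 1 if random.random() < 9/10 else -1  # support level
        else:
            y += 1 if random.random() < 2/3 else -1   # upward bias
        t += 1
    return t

n = 20000
print(sum(hitting_time() for _ in range(n)) / n)  # close to 129
```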
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can Anyone solve this number of cases problem? There are $n$ different chairs around the round table, $C_1, C_2, ....C_n$, and one person going to give a number to each chair, $1$, $-1$, $i$ and $-i$. But $1$ can't be placed next to $-1$, and $i$ can't placed next to $-i$. And how can I get the number of the possible cases?
As an approximation, start with the number of ways to make a line of $n$. You have $4$ choices for $C_1$ and $3$ choices for all the rest, so $4\cdot 3^{n-1}$. When you try to bend these into a circle, you will fail about $\frac 14$ of the time, so for a circle it is $3^n$ This is clearly not quite right as the end number is not truly independent of the first, though it will be close very soon. For $n=1$ the correct answer is $4$ instead of $3$ and for $n=2$ it is $12$, not $9$. For $n=3$ we have $28$ choices, very close to $3^3=27$ To get it exactly right, we can assume $C_1=1$ and multiply by $4$ at the end for the other starting choices. Ignoring the circle, let $A(n)$ be the number of strings ending in $1$, $B(n)$ be the number ending in $-1$, and $C(n)$ be the number ending in $i$ or $-i$. We have $$A(1)=1, B(1)=0, C(1)=0\\ A(n)=A(n-1)+C(n-1)\\B(n)=B(n-1)+C(n-1)\\C(n)=2A(n-1)+2B(n-1)+C(n-1)$$ and our answer is $4(A(n)+C(n))$ because those ends will not conflict with the starting $1$. It turns out the answer is $3^n+2+(-1)^n$
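A brute-force check of the closed form (a sketch in Python):

```python
# Verify 3^n + 2 + (-1)^n against direct enumeration for small n:
# adjacent entries around the circle must not be negatives of each other.
from itertools import product

vals = [1, -1, 1j, -1j]

def brute(n):
    count = 0
    for seq in product(vals, repeat=n):
        if all(seq[i] != -seq[(i + 1) % n] for i in range(n)):
            count += 1
    return count

for n in range(3, 9):
    print(n, brute(n), 3**n + 2 + (-1)**n)  # the two columns agree
```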
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which values $a, b \in \mathbb{R}$ the function $u(x,y) = ax^2+2xy+by^2$ is it the real part of a holomorphic function in $\mathbb{C}$ For which values $a, b \in \mathbb{R}$ the function $$u(x,y) = ax^2+2xy+by^2$$ is the real part of a holomorphic function in $\mathbb{C}$. I think we have to take Cauchy-Riemann theorem, but I don't know how to find these two constant from a certain function $f(x,y) = u(x,y)+i v(x,y)$. Is anyone could help me?
This result and the Cauchy-Riemann equations show that $u(x,y)$ is the real part of a holomorphic function iff $u$ is harmonic. Here $u_{xx}+u_{yy}=2a+2b$, so the condition $u_{xx}+u_{yy}=0$ gives $b=-a$, with $a$ arbitrary. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Eigenfunctions of the Dirichlet Laplacian in balls I am trying to find out about the Dirichlet eigenvalues and eigenfunctions of the Laplacian on $B(0, 1) \subset \mathbb{R}^n$. As pointed out in this MSE post, one needs to use polar coordinates, whence the basis eigenfunctions are given as a product of solutions of Bessel functions and spherical harmonics. @Neal further points out that such considerations hold even for balls on spaces of constant curvature (see Chavel, Eigenvalues in Riemannian Geometry, Chapter 2, Section 5). I have one question in this matter: none of the sources say what the values of the basis eigenfunctions are at the center of the ball. Clearly, the eigenfunctions should be smooth, but then it seems that they should be zero at the center of the ball. Is that correct?
Radial eigenfunctions are not zero at the centre of the ball, while nonradial eigenfunctions are zero there. Indeed, recall that the radial part of any basis eigenfunction is $$ r^\frac{2-n}{2} J_{l-\frac{2-n}{2}}(\sqrt{\lambda} r), $$ where $\lambda$ is a corresponding eigenvalue. The parameter $l \in \{0,1,\dots\}$ corresponds to spherical harmonics, i.e., $l$ is the degree of a homogeneous harmonic polynomial. So, if $l=0$, then the eigenfunction is radial, while if $l \geq 1$, then the eigenfunction is nonradial. It is known that $J_{\nu}(x) = c x^\nu + o(x^\nu)$ as $x \to 0$, where $c>0$, see, e.g., here. Therefore, we see that if $l=0$, then $$ r^\frac{2-n}{2} J_{-\frac{2-n}{2}}(\sqrt{\lambda} r) = c > 0 \quad \text{at } r=0, $$ while if $l \geq 1$, then $$ r^\frac{2-n}{2} J_{l-\frac{2-n}{2}}(\sqrt{\lambda} r) = r^\frac{2-n}{2}\left(c r^{l-\frac{2-n}{2}} + o(r^{l-\frac{2-n}{2}}) \right) = c r^l +o(r^l) = 0 \quad \text{at } r=0. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
problem solving PDE: $u\,u_x + u_y = -u$ $$u\,u_x + u_y = -u$$ $$u(0,t) = e^{-2t}$$ I tried to solve it with Lagrange's method and got 2 surfaces, $\phi(x,y,u) = x+u$ and $\psi(x,y,u) = y+\ln(u)$. When I used $u(0,t) = e^{-2t}$, I got the solution $$ u(x,y) = \dfrac{(1+\sqrt{1+4xe^{2y}})\,e^{-2y}}{2} $$ but that seems to be the wrong solution. I don't know what went wrong. Your help is appreciated.
Solving it with the method of characteristics, I obtained the general solution in implicit form: $$\Phi\left(x+u\:,\:y+\ln(u)\right)=0$$ which is consistent with your result: $\phi(x,y,u) = x+u$ and $\psi(x,y,u) = y+\ln(u)$. Here $\Phi(\phi,\psi)$ is any differentiable function of two variables. An equivalent form is: $$x+u=F\left(y+\ln(u)\right)$$ where $F$ is any differentiable function. So, the key point is the boundary condition $u(0,y)=e^{-2y}$. In $x=0$ we have: $$0+e^{-2y}=F\left(y+\ln(e^{-2y})\right) = F(-y)$$ This determines the function $\quad F(z)=e^{2z}\quad$ (with any symbol for the variable). Putting it into the general solution gives: $$x+u=e^{2\left(y+\ln(u)\right)} = u^2e^{2y}$$ Then, solving $e^{2y}u^2-u-x=0$ for $u$ leads to: $$u=\frac{1\pm\sqrt{1+4e^{2y}x}}{2e^{2y}}$$ This is the correct result, because bringing it back into $\quad u\:u_x+u_y+u =0\quad$ verifies the equality (and the boundary condition selects the $+$ sign).
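A sympy check of the final formula (a sketch):

```python
# Verify the PDE and the boundary condition for the '+' root.
import sympy as sp

x, y = sp.symbols('x y')
u = (1 + sp.sqrt(1 + 4 * sp.exp(2 * y) * x)) / (2 * sp.exp(2 * y))
print(sp.simplify(u * sp.diff(u, x) + sp.diff(u, y) + u))  # 0
print(sp.simplify(u.subs(x, 0)))                           # exp(-2*y)
```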
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Triangle angles. For $\vartriangle$ABC it is given that $$\frac1{b+c}+\frac1{a+c}=\frac3{a+b+c}$$ Find the measure of angle $C$. This is a "challenge problem" in my precalculus book that I was assigned. How do I find an angle from side lengths like this? I have tried everything I can. I think I may need to employ the law of cosines or sines. Thanks.
Short answer: According to the problem, the solution is unique, so any triple of values that satisfies the equation provides a solution. We immediately see that $a=b=c=1$ is a solution, hence the angle is 60 degrees. Medium answer. What counts is the ratios between the sides, so we can assume that $c=1$. The equation is symmetric in $a$ and $b$, and we know the answer is unique, so as $a$ varies, $b$ varies. We can try to see if this infinite family of solutions has an intersection with isosceles triangles, so we put $b=a$ and we solve $$\frac{1}{a+1}+\frac{1}{a+1}=\frac{3}{2a+1}$$ finding $a=b=c=1$. So the angle is 60 deg. Notice that the fun thing is that $k\cdot (1,1,1)$ is not the unique solution. There are indeed infinitely many. Example: $a=15$, $b=8$, $c=13$, for which the law of cosines gives $\cos C = \frac{225+64-169}{2\cdot 15\cdot 8} = \frac12$, i.e. $C=60°$ again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Steps for solving a simple quotient integral containing a product in the denominator. lets say that I have these two integrals: $\int \frac{1}{e^{2x}-2e^x-3} \, dx$ and $\int \frac{1}{(x+1)(x+2)(x-3)} \, dx$ I do recognize some properties and antiderivatives involved but wasn't successful by applying $u$-Substitution and I don't know if/how to integrate by parts with more than two functions involved (second example). What is a tipical approach in these cases?
For the first integral write it as follows: $$\int \frac{dx}{e^{2x}-2e^{x}-3} =\int \frac{dx}{\left(e^{x}-3\right)\left(e^{x}+1\right)} =\frac{1}{4}\left[\int \frac{dx}{e^{x}-3}-\int \frac{dx}{e^{x}+1}\right]$$ (the $\frac14$ comes from the partial fractions $\frac{1}{(t-3)(t+1)}=\frac{1}{4}\cdot\frac{1}{t-3}-\frac{1}{4}\cdot\frac{1}{t+1}$), then substitute ${e}^{x}=t\Rightarrow x=\ln t\Rightarrow dx=\frac{dt}{t}$ in order to write it as: $$\frac{1}{4}\left[\int \frac{dt}{t\left(t-3\right)}-\int \frac{dt}{t\left(t+1\right)}\right]$$
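A sympy check of the partial-fraction constant (a sketch):

```python
import sympy as sp

t = sp.symbols('t')
print(sp.apart(1 / ((t - 3) * (t + 1)), t))
# -> 1/(4*(t - 3)) - 1/(4*(t + 1)), up to the order of the terms
```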
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $\frac{(e^x+e^{-x})}{2}$ less than $e^\frac{x^2}{2}$? I have read somewhere that this equality holds for all $x \in \mathbb {R}$. Is it true, and if so, why is that? $$\frac{(e^x+e^{-x})}{2} \leq e^\frac{x^2}{2}$$
The Taylor series of the LHS is $$\sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!}$$ The Taylor series of the RHS is $$\sum_{n=0}^{\infty} \frac{(x^2/2)^n}{n!} = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!!}$$ where $(2n)!!$ is the double factorial $(2 \times 4 \times \ldots \times 2n)$. It is then easy to see that $(2n)!! \leq (2n)!$, namely since the latter has the extra factor $(2n-1)!! = (1 \times 3 \times \ldots \times (2n-1))$ multiplying it. Hence every term of the LHS series is at most the corresponding term of the RHS series, which proves the inequality.
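A quick numerical spot-check (a sketch):

```python
from math import cosh, exp

for x in [-3.0, -1.0, 0.0, 0.5, 2.0, 5.0]:
    assert cosh(x) <= exp(x * x / 2)
print("inequality holds at all sampled points")
```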
{ "language": "en", "url": "https://math.stackexchange.com/questions/1656988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Planarity on $10$ vertices Is there a planar graph on $10$ vertices such that its complement is planar as well? I have troubles deciding if this is an elementary or deep question. By some other thread, the answer is an easy No for $11$ or more vertices.
I think the answer is no, but I have only experimental evidence. $K_{10}$ has 45 edges, and the max number of edges for a planar graph with 10 vertices is $3\cdot10-6=24$. So the only possible pairs of numbers of edges are {24, 21} and {23, 22}. I used Brendan McKay's program plantri to generate all the planar graphs with 24 and 23 edges, and then I used Mathematica to find planar graphs among their complements. I found none, so there is no such graph with 10 vertices. By the way, using the same approach I found no such graph with 9 vertices, but I found many with 8 vertices.
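For readers who want to reproduce the complement test without Mathematica, here is a sketch using networkx (the plantri enumeration step is the hard part and is omitted):

```python
import networkx as nx

def both_planar(G):
    ok1, _ = nx.check_planarity(G)
    ok2, _ = nx.check_planarity(nx.complement(G))
    return ok1 and ok2

print(both_planar(nx.cycle_graph(5)))  # True: C5 is self-complementary and planar
# Running this test over all planar 10-vertex graphs with 23 or 24 edges
# (as generated by plantri) reproduces the negative answer above.
```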
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Prove that if $ 2^n $ divides $ 3^m-1 $ then $ 2^{n-2} $ divides $ m $ I got a difficult problem. It's kind of difficult to prove. Can you do it? Let $ m,n\geq 3 $ be two positive integers. Prove that if $ 2^n $ divides $ 3^m -1$ then $ 2^{n-2} $ divides $ m $ Thanks :-)
Because $n \geq 3$ we get $8 \mid 3^m-1$ and so $m$ must be even (the order of $3$ modulo $8$ is $2$). Let $m=2^l \cdot k$ with $k$ odd and $l \geq 1$. Now use the difference of squares repeatedly to get: $$3^m-1=(3^k-1)(3^k+1)(3^{2k}+1)\cdot \ldots \cdot (3^{2^{l-1} \cdot k}+1)$$ Each term of the form $3^s+1$ with $s$ even has exactly one factor of $2$ in its prime factorization, because: $$3^s+1 \equiv 1+1\equiv 2 \pmod{8}$$ Also $k$ is odd, so: $$3^k+1 \equiv 3+1 \equiv 4 \pmod{8}$$ has two factors of $2$. Finally the term $3^k-1 \equiv 3-1 \equiv 2 \pmod{8}$ has one factor of $2$. This means that $3^m-1$ has $1+2+(l-1)=l+2$ factors of $2$ in its prime factorization. But $2^n \mid 3^m-1$, so $n \leq l+2$ and then $l \geq n-2$. This means that $2^{n-2} \mid m$ as wanted.
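A quick computational check of the valuation count (a sketch): the proof shows $v_2(3^m-1)=l+2$ where $l=v_2(m)$, for even $m$.

```python
def v2(n):
    """2-adic valuation: the exponent of 2 in n."""
    e = 0
    while n % 2 == 0:
        n //= 2
        e += 1
    return e

for m in range(2, 200, 2):
    assert v2(3**m - 1) == v2(m) + 2
print("checked")
```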
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Question related to Integration and Probability Density Functions My question is regarding integration questions related to the probabilities of continuous random variables. If X = 0 to 5 is represented by f1(x) and X=5 to 10 is represented by f2(x) and we want P(0<=X<=10). Would the answer be integral over 0 to 5 for f1(x) + integral of 5 to 10 for f2(x)? That is I am confused when we are to calculate probabilities that are represented by 2 sets of functions. Also are the endpoints exclusive or inclusive to probability calculations? If I calculate the integral of 0-10 will 0 and 10 be included as probabilities? I am guessing though that for continuous RV's P(0<=X<=10) is the same as any combination of <=,>=,<,> as after all they cannot take discrete values. Any help would be appreciated.
Yes to your first question: You have it right about integrating each function in the range in which it applies. For example: $$ f(x) =\left\{ \begin{array}{cc} \frac{x^2}{144} & 0\leq x<6 \\ \frac{x}{64} & 6 \leq x \leq 10 \end{array} \right. $$ Then $$P(2<x<7) = \int_2^6 \frac{x^2}{144} dx + \int_6^7 \frac{x}{64} dx $$ For the question about discrete values: If the functions $f(x)$ involve no $\delta$-functions, that is, if the distribution has no discrete properties, then inclusion or exclusion of the endpoints does not matter.
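A sketch of the two-piece computation (assuming scipy):

```python
from scipy.integrate import quad

p1, _ = quad(lambda x: x**2 / 144, 2, 6)
p2, _ = quad(lambda x: x / 64, 6, 7)
print(p1 + p2)  # 208/432 + 13/128 ~ 0.583
```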
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
PDE, change of variables and differential operator "transformation" Given the wave equation: $$ \frac{\partial^2 f}{\partial t^2} = c^2 \frac{\partial^2 f}{\partial x^2} $$ I change variables in this way: $$ \xi = x+ct \\ \eta=x-ct $$ And the differential operators transform: $$ \frac{\partial }{\partial x} = \frac{\partial }{\partial \xi} + \frac{\partial }{\partial \eta} \\ \frac{\partial }{\partial t} = c \left( \frac{\partial }{\partial \xi} - \frac{\partial }{\partial \eta} \right) $$ I know this has something to do with the chain rule, but I'm not able to understand. How do I verify that these equalities are correct? How do I get to them? Thank you.
Define $F(\xi,\eta)=f(t,x)$. Then, with some abuse of notation, $$ \frac{\partial}{\partial x}f(t,x)=\frac{\partial}{\partial x}F(\xi,\eta)=\frac{\partial F}{\partial \xi}\frac{\partial\xi}{\partial x}+\frac{\partial F}{\partial \eta}\frac{\partial\eta}{\partial x}=\frac{\partial F}{\partial \xi}+\frac{\partial F}{\partial \eta}. $$ This is the meaning of $$ \frac{\partial}{\partial x}=\frac{\partial }{\partial \xi}+\frac{\partial }{\partial \eta}. $$ A similar computation applies to the other derivative.
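As a concrete check of this bookkeeping (a sketch assuming sympy): applying the two operator identities twice gives the classical factorization $\partial_t^2 - c^2\partial_x^2 = -4c^2\,\partial_\xi\partial_\eta$, which we can test on an explicit $F$.

```python
import sympy as sp

x, t, c, xi, eta = sp.symbols('x t c xi eta')
F = xi**3 * sp.sin(eta)                         # any concrete F(xi, eta)
u = F.subs({xi: x + c*t, eta: x - c*t})
lhs = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
rhs = (-4 * c**2 * sp.diff(F, xi, eta)).subs({xi: x + c*t, eta: x - c*t})
print(sp.simplify(lhs - rhs))  # 0
```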
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show $x^2 + 2\cos(x) \geq 2$ How do I show that $x^2 + 2\cos(x) - 2$ is always nonnegative (x is measured in radians)? If $x \geq 2$ or $x \leq -2$ then obviously, $x^2 \geq 4$, and so it must be true. But otherwise, $2\cos(x)$ can be as small as $-2$ and it is quite surprising that something that could potentially be as small as $x^2 - 4$ is never actually negative. I'm not sure how to go about solving this, especially since there is an $x^2$ term which is annoying. Edit: Preferably without calculus, although the existing calculus answers are fine.
Since the function is even, we only need to consider $x \geq 0$. For $x= 0$, we have $2\ge 2$, which is true. For $x > 0$, we show the derivative of the function is positive, that is, the function is increasing: $2x - 2 \sin(x) = 2(x-\sin(x)) > 0$, since $\sin(x) < x$ for $x > 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove $f$ is periodic if $ \int_{a}^b f(x)dx = 2 $ and $ \int_{y}^z f(x)dx = 1 $ for $z,y\in (a,b)$ with $z-y=(b-a)/2$ If $ \int_{a}^b f(x)dx = 2 $ and for every $y,z \in (a,b)$ ($y$ smaller) such that $z-y = (b-a)/2$ we have $ \int_{y}^z f(x)dx = 1 $, how do we prove that $f$ is periodic and find the period?
Let's assume $a<b$. By "periodic(al)" I assume you mean that: $$f(x) = f(x+(b-a)/2) \quad \forall x \in [a, a + (b-a)/2]$$ Case 1: If $f$ is not necessarily continuous, then the result is not true. That is because you can take any $f$ that works, then change its value on a finite number of points so that the new function is not periodic. But the integrals are unchanged. Case 2: If $f$ is continuous, you can define $H(y)=\int_y^{y+(b-a)/2}f(x)dx$ and then see what happens when you use the Fundamental Theorem of Calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve this indefinite integral using integral substitution? So while working on some physics problem for differential equations, I landed at this weird integral $$ \int \frac 1 {\sqrt{1-\left(\frac 2x\right)}}\,dx $$ So since there is a square root, I thought I could use trig substitution, but I couldn't find anything that works out. How can one solve this integral in a nice simple manner? If you can solve it in a different way, it is still fine. $Thank$ $you!$ This is the answer given to me by symbolab $$ 4\left(-\frac{1}{4\left(\sqrt{1-\frac{2}{x}}-1\right)}-\frac{1}{4\left(\sqrt{1-\frac{2}{x}}+1\right)}-\frac{1}{4}\ln \left|\sqrt{1-\frac{2}{x}}-1\right|+\frac{1}{4}\ln \left|\sqrt{1-\frac{2}{x}}+1\right|\right)+C $$
HINT: Make the substitution $x=2\sec^2(\theta)$ and arrive at $$4\int \sec^3(\theta)\,d\theta$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Solutions to the wave equation can be represented by a sine function? Consider the one dimensional wave equation: $$\frac{\partial^2 f(x, t)}{\partial t^2} - c^{2}\frac{\partial^2 f(x, t)}{\partial x^2} = 0. $$ I understand that one may find "wavy" solutions to this equation. But, $f(x, t) = x$ is a solution and it's just a simple linear equation. I'm working through a physics text, and whenever we arrive at a function which satisfies the wave equation, we always write the solution as $A\sin (\omega t - kx)$. I understand that this is a solution to the wave equation, but without some deep theorem stating that "any function which solves the wave equation can be represented as this sine function" I do not feel it is just to assume the function has this form. For the linear example, I don't believe it can be represented by a sine function.
Let $u_1(x, t) = f_0(x - c t)$ and $u_2(x, t) = f_0(x + c t)$ for a (twice differentiable) profile $f_0$; we can verify that $u_1(x,t)$ and $u_2(x,t)$ both satisfy the wave equation. More generally, $u(x, t) = f(x - ct) + g(x + ct)$ solves it for arbitrary twice differentiable $f$ and $g$, and every solution has this form (d'Alembert). Your linear example $f(x,t)=x$ fits, since $x = \frac{x-ct}{2} + \frac{x+ct}{2}$, and $A\sin(\omega t - kx)$ is just one particular choice (valid when $\omega = ck$). The solution represents a wave shape traveling in time: at the beach, facing the ocean, freeze time, and the wave in front of you is the profile $f_0$; as time runs, that shape travels.
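A sympy sketch verifying the traveling-wave form:

```python
# Any f(x - c t) + g(x + c t) solves the one-dimensional wave equation.
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')
u = f(x - c * t) + g(x + c * t)
print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))  # 0
```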
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many possible combinations in $7$ character password? The password must be $7$ characters long and it can include the combination: $10$ digits $(0-9)$ and uppercase letters $(26)$. My Solution: Thus in total there are $7$ slots, each slot could be either $0-9$ or $26$ letters $= 36$ possibilities for each slot, therefore, $36^7$ would be the number of password combinations? Am I correct?
You are correct. $36^7$ is the right answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Does the axiom of choice have any use for finite sets? It is well known that certain properties of infinite sets can only be shown using (some form of) the axiom of choice. I'm reading some introductory lectures about ZFC and I was wondering if there are any properties of finite sets that only hold under AC.
There are two remarks that may be relevant here. (1) This depends on what you mean by "finite sets". Even for (infinite sets of) pairs, the axiom of choice does not follow from ZF when one looks at an infinite collection. This is popularly known as the "pairs of socks" version of AC, which is one of the weakest ones. (2) If you mean that the family of sets itself is finite, then AC can be proved in ZF by induction, i.e., it is automatic, but this is only true if your background logic is classical. For intuitionistic logic, the axiom of choice can be very powerful even for finite sets. For example, there is a theorem (Diaconescu's theorem) that the axiom of choice implies the law of excluded middle; in this sense the introduction of AC "defeats" the intuitionistic logic and turns the situation into a classical one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1657972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
limit of a sequence when something about the limit is given Let $a_n$ be a sequence of real numbers such that $$\lim_{n\to\infty}|a_n+3((n-2)/n)^n|^{1/n}=\frac35. $$ Then what is $\lim_{n\to\infty}a_n$?
Call $b_n=3((n-2)/n)^n$. You should know that $((n-2)/n)^n \to e^{-2}$, so $b_n \to 3e^{-2}$. Now, since $|a_n+b_n|^{1/n}\to\frac35$, you have for $n$ big enough $$|a_n + b_n|^{1/n} < \frac 45$$ or equivalently, $$|a_n + b_n| < \left( \frac 45 \right)^n \to 0$$ so by the squeeze theorem $$\lim_n (a_n + b_n) = 0$$ Hence, $$\lim_n a_n = - \lim_n b_n = -3e^{-2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve $3 = -x^2+4x$ by factoring I have $3 = -x^2 + 4x$ and I need to solve it by factoring. According to wolframalpha the solution is $x_1 = 1, x_2 = 3$. \begin{align*} 3 & = -x^2 + 4x\\ x^2-4x+3 & = 0 \end{align*} According to wolframalpha $(x-3) (x-1) = 0$ is the equation factored, which allows me to solve it, but how do I get to this step?
\begin{align}x^2-4x+3&=x^2-3x-x+3\\ &=x(x-3)-(x-3)\\ &=(x-1)(x-3) \end{align} Note: You could also see that the sum of coefficients is zero, hence one root is $x=1$. Now divide the quadratic by $x-1$ to get the other factor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 6 }
How to find Fourier sine series of $f(x)=x(1-x), 0\lt x \lt 1$? How to find Fourier sine series of $f(x)=x(1-x), 0\lt x \lt 1$? This is not an odd functions, so how to proceed?
Define $g(x)=f(x)$ for $0<x<1$ and $g(x)=-f(-x)$ for $-1<x<0$. Then $g$ is an odd function, so you have to expand $g$, and that's the same as expanding $f$ in a sine series.
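A sketch computing the sine coefficients $b_n = 2\int_0^1 x(1-x)\sin(n\pi x)\,dx$ of this odd extension (sympy assumed); they come out to $8/(n^3\pi^3)$ for odd $n$ and $0$ for even $n$:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)
b_n = 2 * sp.integrate(x * (1 - x) * sp.sin(n * sp.pi * x), (x, 0, 1))
print(sp.simplify(b_n))
# -> 4*(1 - (-1)**n)/(pi**3*n**3) (up to form), i.e. 8/(n*pi)**3 for odd n
```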
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Geometry proof, lost and need it explained please A rational number of the form $\frac{a}{2^{n}}$ (with $a,n$ integers) is called dyadic. In the interpretation, restrict to those points which have dyadic coordinates and to those lines which pass through several dyadic points. The incidence axioms, the first three betweenness axioms, and the line seperation property all hold in this dyadic rational plane; show that Pasch's theorem fails. (Hint: The lines $3x+y=1$ and $y=0$ do not meet in this plane.)
Outline: Consider the triangle with vertices $(0,0)$, $(1,0)$, and $(0,3)$. The line $3x+y=1$ meets the side joining $(0,0)$ and $(0,3)$ at $(0,1)$, but it does not meet either of the other two sides in the dyadic plane: it is parallel to the side lying on the line $3x+y=3$, and it crosses the side on $y=0$ only at $(1/3,0)$, which is not a dyadic point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finite length $A$-algebras are finitely generated? Let $k$ be a field and $M$ a module over a (associative, unital) finite-dimensional $k$-algebra $A$. The length of $M$ is the unique length of a composition series for $M$. How does $M$ having finite length imply that $M$ is finitely generated as an $A$-module? I know that there are similar statements for modules over artinian rings (and that finite-dimensional algebras are artinian rings etc.) but I can't at the moment see how to show this directly even though I've been told it's apparently easy.
A module of finite length is noetherian (any strictly ascending chain of submodules has length at most the length of the module), so it is finitely generated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If $H$ fixes three points, then could the normalizer of $H$ induce an orbit of size two on the fixed points Let $G$ be a transitive permutation group of degree $\ge 5$ acting such that every four-point stabilizer is trivial. Equivalently this means that every nontrivial element has at most three fixed points. Now if $1 \ne H \le G_{\alpha}\cap G_{\beta} \cap G_{\gamma}$, i.e. $H$ is a subgroup which fixes three points, this gives that $$ |N_G(H) : N_G(H) \cap G_{\alpha}| \le 3 $$ as $N_G(H)$ acts on the three fixed points. I want to know if the case $|N_G(H) : N_G(H) \cap G_{\alpha}| = 2$ is possible. If $Z(G) \ne 1$, as $Z(G)$ similarly permutes the three fixed points, but could not fix any element itself (or otherwise if would fix everything, which is excluded as the action is faithful) it must move every fixed point and as $Z(G) \le N_G(H)$ we have $|N_G(H) : N_G(H) \cap G_{\alpha}| = 3$. But what if $Z(G) = 1$. Is it possible that $|N_G(H) : N_G(H) \cap G_{\alpha}| = 2$, which means that $\Delta = \{\alpha,\beta,\gamma\}$ is decomposed into two $N_G(H)$-orbits, one of size $2$ and one of size $1$. Another condition where this is not possible is if the point stabilizers are odd, and if $|N_G(H) : N_G(H) \cap G_{\alpha}| = 2$ and $|N_G(H) : N_G(H) \cap G_{\gamma}| = 1$ would imply then $N_G(H) \cap G_{\gamma} = N_G(H)$, or $N_G(H) \le G_{\alpha}$ and so $N_G(H)$ would have odd order, contradicting $|N_G(H) : N_G(H) \cap G_{\alpha}| = 2$ Another condition where this is not possible is if distinct conjugates of the stabilizers intersect trivially, as then $H = G_{\alpha} = G_{\beta} = G_{\gamma}$ and so if $N_G(H) > H$ every element in the normalizer not in $H$ moves every point. Okay, but what about the general case?
Yes this is possible. An example is ${\rm PSL}(3,2)$ in its natural action on $7$ points, with point stabilizer isomorphic to $S_4$, and $H$ a subgroup of order $2$ that does not lie in the derived subgroup of $G_\alpha$. Then $|N_G(H)|=8$, $|N_{G_\alpha}(H)|=4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $x^2+y^2-xy-x-y+1=0$ ($x,y$ real) then calculate $x+y$ If $x^2+y^2-xy-x-y+1=0$ ($x,y$ real) then calculate $x+y$ Ideas for solution include factorizing the expression into a multiple of $x+y$ and expressing the left hand side as a sum of some perfect square expressions.
I'll assume $x$ and $y$ are supposed to be real. Let $s=x+y$ and $p=xy$; then your equation becomes $$ s^2-2p-p-s+1=0 $$ or $$ p=\frac{s^2-s+1}{3} $$ The equation $$ z^2-sz+p=0 $$ must have real roots; its discriminant is $$ s^2-4p=-\frac{(s-2)^2}{3}\le0 $$ Since a real quadratic has real roots only when its discriminant is nonnegative, the discriminant must vanish, so we have $s=2$ (and $p=1$).
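As a quick sanity check (my own addition, not part of the answer), one can reproduce the discriminant computation with sympy by viewing the equation as a quadratic in $x$:
```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 - (y + 1)*x + (y**2 - y + 1)  # the equation, collected as a quadratic in x

disc = sp.discriminant(f, x)
print(sp.factor(disc))              # -3*(y - 1)**2 <= 0, forcing y = 1
print(sp.solve(f.subs(y, 1), x))    # [1] -> x = 1, hence x + y = 2
```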
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Order preserving bijection from $\mathbb{Q}$ to $\mathbb{Q}\backslash\lbrace{0}\rbrace$ * *How can one prove the existence of an order preserving bijection from $\mathbb{Q}$ to $\mathbb{Q}\backslash\lbrace{0}\rbrace$? *Can you give an example of such a bijection?
Choose an irrational number $\alpha$. Let $x_1, x_2, \ldots$ be a strictly increasing sequence of rational numbers that converge towards $\alpha$. Let $y_1, y_2, \ldots$ be a strictly decreasing sequence of rational numbers that converge towards $\alpha$. Then define $f:\mathbb Q\to\mathbb Q\setminus\{0\}$ as: * *$f$ maps $(-\infty,x_1]$ to $(-\infty,-1]$ by subtracting $x_1+1$ from everything. *For every $n$, $f$ maps $[x_n,x_{n+1}]$ to $[-\frac1n,-\frac1{n+1}]$, by linear interpolation between the endpoints. *For every $n$, $f$ maps $[y_{n+1},y_n]$ to $[\frac1{n+1},\frac1n]$, by linear interpolation between the endpoints. *$f$ maps $[y_1,\infty)$ to $[1,\infty)$ by subtracting $y_1-1$ from everything.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
solve $ax^3+by^3+cz^3+dx^2y+ex^2z+fxy^2+gxz^2+hy^2z+iyz^2=0$ for all triplets $(x,y,z)$. Let $x,y,z$ be any 3 positive integers. If for all $x,y,z$ we have: $$ax^3+by^3+cz^3+dx^2y+ex^2z+fxy^2+gxz^2+hy^2z+iyz^2=0$$ What can be said about the integral coefficients $a,b,c,d,e,f,g,h,i$? I think they must all be zero. What do you think?
A special case of the parametrization: $${z}^{3}-2\,a\,y\,{z}^{2}-b\,x\,{z}^{2}+\left( {a}^{2}-b\right) \,{y}^{2}\,z+\left( a\,b-3\,c\right) \,x\,y\,z+a\,c\,{x}^{2}\,z+\left( c+a\,b\right) \,{y}^{3}+\left( a\,c+{b}^{2}\right) \,x\,{y}^{2}+2\,b\,c\,{x}^{2}\,y+{c}^{2}\,{x}^{3}=c\cdot\,\left( {x}_{1}^{3}\,{c}^{2}+2\,{x}_{1}^{2}\,{y}_{1}\,b\,c+{x}_{1}^{2}\,{z}_{1}\,a\,c+{x}_{1}\,{y}_{1}^{2}\,a\,c-3\,{x}_{1}\,{y}_{1}\,{z}_{1}\,c+{y}_{1}^{3}\,c+{x}_{1}\,{y}_{1}^{2}\,{b}^{2}+{x}_{1}\,{y}_{1}\,{z}_{1}\,a\,b+{y}_{1}^{3}\,a\,b-{x}_{1}\,{z}_{1}^{2}\,b-{y}_{1}^{2}\,{z}_{1}\,b+{y}_{1}^{2}\,{z}_{1}\,{a}^{2}-2\,{y}_{1}\,{z}_{1}^{2}\,a+{z}_{1}^{3}\right)\cdot \,\left( {x}_{2}^{3}\,{c}^{2}+2\,{x}_{2}^{2}\,{y}_{2}\,b\,c+{x}_{2}^{2}\,{z}_{2}\,a\,c+{x}_{2}\,{y}_{2}^{2}\,a\,c-3\,{x}_{2}\,{y}_{2}\,{z}_{2}\,c+{y}_{2}^{3}\,c+{x}_{2}\,{y}_{2}^{2}\,{b}^{2}+{x}_{2}\,{y}_{2}\,{z}_{2}\,a\,b+{y}_{2}^{3}\,a\,b-{x}_{2}\,{z}_{2}^{2}\,b-{y}_{2}^{2}\,{z}_{2}\,b+{y}_{2}^{2}\,{z}_{2}\,{a}^{2}-2\,{y}_{2}\,{z}_{2}^{2}\,a+{z}_{2}^{3}\right)$$ $$x={x}_{1}\,{x}_{2}\,c+{y}_{2}\,\left( {z}_{1}-{y}_{1}\,a\right) +{y}_{1}\,{z}_{2},$$ $$y={y}_{2}\,\left( {x}_{1}\,c+{y}_{1}\,b\right) +{y}_{1}\,{x}_{2}\,c+{z}_{1}\,{z}_{2},$$ $$z={y}_{2}\,\left( {y}_{1}\,c+{z}_{1}\,b\right) +{z}_{2}\,\left( {x}_{1}\,c+{y}_{1}\,b+{z}_{1}\,a\right) +{z}_{1}\,{x}_{2}\,c.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1658952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Category-Theoretic relation between Orbit-Stabilizer and Rank-Nullity Theorems In linear algebra, the Rank-Nullity theorem states that given a vector space $V$ and an $n\times n$ matrix $A$, $$\text{rank}(A) + \text{null}(A) = n$$ or that $$\text{dim(image}(A)) + \text{dim(ker}(A)) = \text{dim}(V).$$ In abstract algebra, the Orbit-Stabilizer theorem states that given a group $G$ of order $n$, and an element $x$ of the set $G$ acts on, $$|\text{orb}(x)||\text{stab}(x)| = |G|.$$ Other than the visual similarity of the expressions, is there some deeper, perhaps category-theoretic connection between these two theorems? Is there, perhaps, a functor from the category of groups $\text{Grp}$ to some category where linear transformations are morphisms? Am I even using the words functor and morphism correctly in this context?
The intuition behind this question is spot-on. I'm going to try to fill out some of the details to make this work. The first thing to note is that a linear map $A:V\to V$ also gives a genuine group action: it is the additive group of $V$ acting on the set $V$ by addition. That is, any $v\in V$ acts on $x\in V$ as $v: x \mapsto x+Av.$ Now we see that given any $x$ in $V$ the stabilizer subgroup $\text{stab}(x)$ of this action is precisely the kernel of $A.$ The orbit of $x$ is $x$ plus the image of $A.$ If we are working with a vector space over a finite field, we can take the cardinality of these sets as in the formula $|\text{orb}(x)||\text{stab}(x)| = |G|$ and as @Ravi suggests, take the logarithm of this where the base is the size of the field and we get exactly the rank-nullity equation. If we have an infinite field then this doesn't work and we need to think more along the lines of a categorified orbit-stabilizer theorem. In this case, for each $x\in V$ we can find a bijection: $$ \text{orb}(x) \cong G / \text{stab}(x) $$ and as @Nick points out, this bijection gives us the First Isomorphism Theorem: $$ \mathrm{Im}(A) \cong V / \mathrm{Ker}(A). $$
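To see the finite-field case in action, here is a tiny brute-force check (my own illustration; the matrix is an arbitrary singular example over $\mathbb{F}_2$): counting shows $|\mathrm{Im}(A)|\cdot|\mathrm{Ker}(A)| = |V|$, which is the orbit-stabilizer count, and taking $\log_2$ gives rank-nullity.
```python
# Brute-force check over F_2 that |Im(A)| * |Ker(A)| = |V|.
from itertools import product

def mat_vec_mod2(A, v):
    # matrix-vector product with arithmetic in F_2
    return tuple(sum(a * x for a, x in zip(row, v)) % 2 for row in A)

A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]  # singular over F_2: row 3 = row 1 + row 2

vectors = list(product([0, 1], repeat=3))
image = {mat_vec_mod2(A, v) for v in vectors}
kernel = [v for v in vectors if mat_vec_mod2(A, v) == (0, 0, 0)]
print(len(image), len(kernel), len(image) * len(kernel))  # 4 2 8 = |F_2^3|
```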
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
What is a single-prime function other than $f(x)=x!$? [Noob warning]: I am not a mathematician. If you use jargon, please explain or reference. Other than $f(x)=x!$, what is a univariate non-piecewise function with a domain that is either all integers, or an infinite-sized subset of all integers, and whose range contains only integers and exactly one prime number? For those that prefer lists, here are the criteria again: * *Univariate (one independent variable) *Not $f(x)=x!$ (of which I 'think' meets the criteria below...) *Non-piecewise (non-hybrid) *Domain is either all integers or an infinite-sized subset of all integers *Range contains only integers *Range contains exactly one prime number
$$f(x)=x^2+x$$ More generally, if $g(x)$ is any integer-valued function with $g(1)=p$ prime and $g(x) \geq 2$ for all $x$, then $$f(x)=xg(x)$$ has exactly one prime in its range. Also $$h(n)= \operatorname{lcm} [1,2,3,\ldots,n]$$ Also $$u(n)=n^{n-1}$$
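A quick sanity check of the first example (my own addition, using sympy's primality test): a brute-force scan over a symmetric range of integer inputs finds exactly one prime value.
```python
# f(x) = x^2 + x = x(x+1) is a product of consecutive integers,
# so the only prime it can hit is 2 (at x = 1 and x = -2).
from sympy import isprime

values = {x * x + x for x in range(-2000, 2001)}
primes = sorted(v for v in values if v > 1 and isprime(v))
print(primes)  # [2]
```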
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Is the group $G$ always isomorphic to the group $G/N \times N$? Let $N$ be a normal subgroup of $G$. Is the group $G$ always isomorphic to the group $G/N \times N$? I don't think this is true but I can't think of a counter-example. What's an easy counter-example (or way to prove the contrary)?
Consider the cyclic group of order $4$, say $C_{4}$. It has a normal subgroup $H$ of index $2$. $C_{4}/H$ is a cyclic group of order $2$, isomorphic to $H$! But $H \times H$ is not cyclic!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is wrong with my integral? $\sin^5 x\cos^3 x$ I am trying to do an integration problem but am running into a problem! My answer is different from what the solution says. My attempt: Evaluate $$\int\cos^3x\sin^5 x\,dx$$ $\int\cos^3x\sin^5 x\,dx=\int(1-\cos^2 x)^2\sin x \cos ^3 x\,dx$ Then with u-substitution, letting $u=\cos x \implies du=-\sin x \,dx$, which gives us $-\int (1-u^2)^2\cdot u^3\,du=\int(-u^3+2u^5-u^7)\,du=-(\cos^ 4 x)/4+(\cos^6 x)/3-(\cos ^8 x)/8 +C$ However the solution book says the answer is $(\sin^6 x)/6-(\sin ^8 x)/8 +C$
Let's see a simpler case: $$ \int 2\sin x\cos x\,dx $$ You have two choices: either do $u=\sin x$, so $\cos x\,dx=du$, and you get $$ \int 2u\,du=u^2+c=\sin^2x+c $$ or do $v=\cos x$, so $\sin x\,dx=-dv$, and you get $$ \int -2v\,dv=-v^2+c=-\cos^2x+c $$ Which one is right? Both, of course, but this doesn't mean you reached the false conclusion that $\sin^2x=-\cos^2x$. The fact is that an antiderivative is only determined up to a constant, so what you can say is that $$ \sin^2x=-\cos^2x + k $$ for some constant $k$; you surely know that $k=1$, in this case. Your problem is exactly the same. You happened to use the second substitution instead of the first one. Try and determine $k$ such that $$ -(\cos^ 4 x)/4+(\cos^6 x)/3-(\cos ^8 x)/8=(\sin^6 x)/6-(\sin ^8 x)/8 +k $$ Hint: evaluate at $0$ both expressions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Effect of row augmentation on the value of a determinant. Part (a) is done. How do I proceed with part (b)? My first question is: what do they mean by row augmentation? Do they mean the row operation of adding $k$ times the first row to the third row?
An idea: The row augmentation on $\;A\;$ is then the same as the product $\;GA\;$. Now, why not use the all-important theorem that for any two square matrices $\;X,Y\;$ of the same order we have that $\;\det(XY)=\det X\cdot\det Y\;$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Explain $\tan^2(\tan^{-1}(x))$ becoming $x^2$ How does $\tan^2(\tan^{-1}(x))$ become $x^2$? I feel that the answer should contain a tan somewhere and not just simply $x^2$. "Why?" you might ask, well I thought that $\tan^2(\theta)$ was a special function that has to be rewritten a specific way.
Because $\tan^{-1}x $ is not the reciprocal of $\tan x$, but the inverse function $\arctan x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simple way to estimate the root of $x^5-x^ 4+2x^3+x^2+x+1=0$ How to give a mathematical proof that for all complex roots of $x^5-x^ 4+2x^3+x^2+x+1=0$, their real part is smaller than 1, and there is at least one root whose real part is larger than 0. If possible, not to solve any algebraic equation whose degree is larger than 3. For the real roots, it would be easy to estimate them by observing the derivative and intermediate value theorem. What about the complex roots? I fail to find a way by trying Rouché's theorem.
Using the Routh-Hurwitz stability criterion you can tell how many roots of your polynomial are in the open left-hand complex plane - i.e., the set $\{z\in\mathbb{C}: \operatorname{Re}(z) < 0\}$. In your case, for the polynomial $p(x)=x^5 - x^4 + 2x^3 + x^2 + x + 1$ the Routh array is
```
 1.0000    2.0000    1.0000
-1.0000    1.0000    1.0000
 3.0000    2.0000         0
 1.6667    1.0000         0
 0.2000         0         0
 1.0000         0         0
```
Indeed, the two sign changes in the first column show that there are two roots with nonnegative real part. We can verify that there are no purely imaginary roots simply by substituting $x=ic$ and trying to determine $c\in\mathbb{R}$ so that $p(x)=0$. Now, in order to determine whether all roots have a real part which is lower than $1$, we need to apply the same criterion to the polynomial $q(x) = p(x+1)$. In fact, this is $$ q(x) = p(x+1) = x^5 + 4x^4 + 8x^3 + 11x^2 + 10x + 5 $$ for which the Routh array is
```
 1.0000    8.0000   10.0000
 4.0000   11.0000    5.0000
 5.2500    8.7500         0
 4.3333    5.0000         0
 2.6923         0         0
 5.0000         0         0
```
With no sign changes in the first column, all roots of $q$ are in the open left-hand plane, thus all roots of $p$ have real parts which are smaller than $1$.
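As an independent cross-check (my own addition, not from the original answer), one can simply compute the roots numerically:
```python
# Inspect the real parts of the roots of p directly; this should agree
# with the Routh analysis above: two roots in the right half-plane,
# and all real parts below 1.
import numpy as np

p = [1, -1, 2, 1, 1, 1]  # coefficients of x^5 - x^4 + 2x^3 + x^2 + x + 1
roots = np.roots(p)
print(np.round(roots.real, 4))
print((roots.real > 0).sum())   # expect 2
print((roots.real < 1).all())   # expect True
```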
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Chain rule for composition of $\mathbb C$ differentiable functions What are the different methods to formulate a version for chain rule for composition of $\mathbb C-$ differentiable functions? Give a short proof.
"Differentiable" will mean "complex differentiable" below. I'll assume the fact that if a function has a complex derivative at a point, the function is continuous at that point. Thm: Suppose $f$ is differentiable at $a$ and $g$ is differentiable at $f(a).$ Then $g\circ f$ is differentiable at $a$ and $(g\circ f)'(a)= g'(f(a))f'(a).$ Proof: Here is the natural first thing to try: For $z$ close to $a, z\ne a,$ $$\tag 1 \frac{g(f(z)-g(f(a))}{z-a} = \frac{g(f(z)-g(f(a))}{f(z)-f(a)}\frac{f(z)-f(a)}{z-a}.$$ Because $f$ is continuous at $a,$ $f(z) \to f(a)$ as $z\to a.$ Thus the limit in $(1)$ is $g'(f(a))f'(a)$ as desired. What's wrong with that? The problem is that $f(z)$ could be equal to $f(a)$ for certain $z,$ rendering division by $f(z)-f(a)$ in $(1)$ meaningless. Now, if $f'(a)\ne 0,$ we don't have that problem. That's because $|f(z)-f(a)|\ge (|f'(a)|/2)|z-a|$ for $z$ close to $a.$ In this case we are done; the problem we worried about disappears and the easy proof works. We'll be done if we take care of the case $f'(a)=0.$ This is really not so bad: Because $g'(f(a))$ exists, there is a neighborhood $U$ of $f(a)$ and a constant $C>0$ such that $$|g(w)-g(f(a))| \le C|w-f(a)|, w \in U.$$ If $z$ is close to $a,z\ne a,$ then $f(z)$ will lie in $U,$ and we'll have $$\left |\frac{g(f(z)-g(f(a))}{z-a}\right | \le \frac{C|f(z)-f(a)|}{|z-a|}.$$ Because $f'(a)=0,$ the limit of the last expression is $0.$ This show $(g\circ f)'(a) = 0,$ which is exactly what the theorem says in this case. We're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The intervals $(2,4)$ and $(-1,17)$ have the same cardinality I have to prove that $(2,4)$ and $(-1,17)$ have the same cardinality. I have the definition of cardinality but my prof words things in the most confusing way possible. Help!
The general way to show two sets $X,Y$ have the same cardinality is to show that there is a function $f:X\rightarrow Y$ that is both 1) injective and 2) surjective. That is 1) for all $a\neq b\in X$ we must have $f(a)\neq f(b)$ and 2) for all $y\in Y$ there must exist $x\in X$ such that $f(x)=y$. For your two intervals an affine map does the job: $f(x)=9x-19$ sends $(2,4)$ bijectively onto $(-1,17)$, since $f(2)=-1$, $f(4)=17$, and $f$ is strictly increasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Linear independence of certain vectors of $\mathbf{C}^2$ over $\mathbf{R}$ Suppose that $\{e_1,e_2\}$ where $e_1=(1,0)$ and $e_2=(0,1)$ is the standard basis of $\Bbb C^2$ as a vector space over $\Bbb C$. Show that $\{e_1,ie_1,e_2,ie_2\}$ is a basis of $\Bbb C^2$ as a vector space over $\Bbb R$ and conclude that $\dim_\Bbb R \Bbb C^2=2\dim_\Bbb C \Bbb C^2$. I know that in general in order to show that a set of vectors form a basis you must show that they are linearly independent and are a spanning set. So normally I would solve $Ax=0$ and if $x$ was $0$ I would know they were linearly independent. However, I'm not sure how to use that here.
As another user has described linear independence (and hopefully made it clear that $\dim_{\mathbb{R}} \mathbb{C}^2 = 4 = 2\dim_{\mathbb{C}}\mathbb{C}^2$), you now need to show that $\{e_1,e_2,ie_1,ie_2\}$ spans $\mathbb{C}^2$. Let $\xi \in \mathbb{C}^2$ be arbitrary. Then, by our assumptions, there exist $x_1,x_2 \in \mathbb{C}$ so that $\xi = x_1e_1+x_2e_2$. Since each $x \in \mathbb{C}$ can be written $x = a + bi$ for $a,b$ real, \begin{equation} \xi = x_1e_1+x_2e_2 \\ =(a_1+b_1i)e_1+(a_2+b_2i)e_2 \\ =a_1e_1+b_1ie_1+a_2e_2+b_2ie_2. \end{equation} This should complete your proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1659956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Convergence of $\sum_{n=1}^\infty n^2 \sin(\frac{π}{2^n}) $ This is my series: $$\sum_{n=1}^\infty n^2 \sin(\frac{π}{2^n}) $$ WolframAlpha says it converges, but I have no idea how to get the answer. I have learned comparison test, ratio test, root test and integral test. I don't really know which one of those to use. So far the only decent option seems the regular comparison test. $ \lim \limits_{x \to \infty} \frac{a_n}{b_n} = c, c \ne 0, c \ne \infty$ I tried something taking an geometric series for $b_k$ (like $\frac{1}{2^n}$ ) to get: $ \lim \limits_{x \to \infty} \frac{a_n}{b_n} = \lim \limits_{x \to \infty} n^2 \sin(\frac{π}{2^n}) 2^n = \lim \limits_{x \to \infty} n^2 \frac{\sin(\frac{π} {2^n})}{\frac{π} {2^n}} \frac{π} {2^n} 2^n = \lim \limits_{x \to \infty} n^2 \frac{\sin(\frac{π} {2^n})}{\frac{π} {2^n}} π$ But that still comes to infinity. If i use harmonic series to get rid of infinity (n^2), I can't get rid of the 0 from sinus and if I use geometric series it's vice-versa.
The following inequality holds: $$ \sin x \le x\qquad(x\ge 0) $$ Then $$ 0\le \sin\left(\frac{\pi}{2^n}\right)\le \frac{\pi}{2^n}, $$ so $$ 0 \le n^2 \sin\left(\frac{\pi}{2^n}\right) \le \frac{\pi n^2}{2^n}. $$ Since $\sum_{n\ge 1} \frac{n^2}{2^n}$ converges (ratio test: $\frac{(n+1)^2/2^{n+1}}{n^2/2^n}\to\frac12<1$), the original series converges by the comparison test.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Evaluate the limit $\lim_{x\to \infty}( \sqrt{4x^2+x}-2x)$ Evaluate :$$\lim_{x\to \infty} (\sqrt{4x^2+x}-2x)$$ $$\lim_{x\to \infty} (\sqrt{4x^2+x}-2x)=\lim_{x\to \infty} \left[(\sqrt{4x^2+x}-2x)\frac{\sqrt{4x^2+x}+2x}{\sqrt{4x^2+x}+2x}\right]=\lim_{x\to \infty}\frac{{4x^2+x}-4x^2}{\sqrt{4x^2+x}+2x}=\lim_{x\to \infty}\frac{x}{\sqrt{4x^2+x}+2x}$$ Using L'Hôpital $$\lim_{x\to \infty}\frac{1}{\frac{8x+1}{\sqrt{4x^2+x}}+2}$$ What should I do next?
Hint : $\displaystyle\lim_{x\to \infty}\frac{x}{\sqrt{4x^2+x}+2x}=\lim_{x\to \infty}\frac{1}{\sqrt{4+\frac{1}{x}}+2}$ , dividing numerator and denominator by $x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
A normal subgroup of a nilpotent group intersects the centre non-trivially How could I prove the above statement? For $G$ nilpotent and $N \lhd G$, how do I show that $N \cap Z(G) \neq 1$?
Take the upper central series of the group $$1=Z_0\le Z_1\le\ldots\le Z_n=G$$ Since $\;1\neq N\lhd G\;$ there exists $\;0\le k< n\;$ such that $\;N\cap Z_k=1\;$ but $\;N\cap Z_{k+1}\neq 1\;$, so take commutator groups: $$[G, N\cap Z_{k+1}]\le[G,N]\cap[G,Z_{k+1}]\le N\cap Z_k=1$$ by normality (a subgroup $\;N\le G\;$ is normal iff $\;[G,N]\le N\;$) and by centrality of the upper central series ($\;[G,Z_i]\le Z_{i-1}\;$). Thus, $\;[G, N\cap Z_{k+1}]=1\iff N\cap Z_{k+1}\subset Z_1=Z(G)\;$, and since $\;N\cap Z_{k+1}\neq 1\;$, we conclude $\;N\cap Z(G)\neq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
(H.W question) In Mathematical Analysis by Rudin, example 1.1, page 2 The author went on and proved that 1) There exists no rational $p$ such that $p^2=2$ 2) He defined two sets $A$ and $B$ such that if $p\in A$ then $p^2 <2$ and if $p\in B$ then $p^2>2$, and then constructed $$q=p - \frac{p^2-2}{p+2}$$ and $$q^2-2=\frac{2(p^2-2)}{(p+2)^2}$$ Then $A$ contains no largest element and $B$ contains no smallest element. Finally, in the remark he said that the purpose of the above exercise was to show that the rational number system has certain gaps. So my question is: how is 2) used in arriving at this conclusion? I basically didn't understand the purpose of 2) in this discussion.
Rudin doesn't give a #*@$ whether there is a rational square root of 2 or not. What he's trying to show is that you can divide all the rational numbers into two sets that exhaust the rationals; that one set can have every element larger than every element in the other yet it is possible to have no limits to either set; you can take infinitely many larger numbers in one set and never have a largest and you can take infinitely many for the other set and never have a lowest one yet the two sets can be infinitely close together approaching ... something... but that something not being anything that can exist in the rationals. Thus we can say the rational are incomplete but that these incomplete "gaps" are infinitesimally small. He shows this by quickly proving $(m/n)^2 = 2$ is impossible. Ho-hum. Too bad how sad. Who cares. But once that has been shown comes the important part. You can divide the rationals entirely into two sets. All the $m/n$ where $(m/n)^2 > 2$ and all the $m/n$ where $(m/n)^2 < 2$. And both sets are infinite and all can get infinitely close to the other set but neither set has a highest (or lowest) element despite clearly there being a "wall" they can't go past, but can never actually hit, either. The only reason he cares that there is no rational square root of 2 is that of these two sets the option $(m/n)^2 = 2$ is simply not available. That's the point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Diffeomorphism between an ellipse and unit circle? I'm trying to learn about diffeomorphism and an example asks for a diffeomorphism between an ellipse and an unit circle. How does one construct such?
You need to use a specific ellipse, and presumably the person asking the question would not accept the unit circle as an example of an ellipse (in which case the identity mapping would deliver). I would suggest using the standard equation of an ellipse with horizontal and vertical axes, centered on the origin. This is a scaled version of the unit circle with possibly different scaling factors in the horizontal and vertical directions. Scaling back provides an example of a diffeomorphism: explicitly, $(x,y)\mapsto(x/a,\,y/b)$ maps the ellipse $x^2/a^2+y^2/b^2=1$ smoothly and bijectively onto the unit circle, with smooth inverse $(u,v)\mapsto(au,\,bv)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Smallest number for given sum of digits I am trying to find the smallest number if we are given the sum of its digits. Suppose the sum of digits is 9; then it should be 9 instead of 18, 36, 63, and similarly if the sum of digits is 11 then the desired answer is 29, not 92 or any other number bigger than 29. I tried to write out the answers for all digit sums up to 53 and got this, but I am not able to come up with a general formula: from 1 to 9 it is just the single digit itself. From 10 to 18 it is 19, 29, 39, ..., 99, and for 19 (1+9) it is 199, an increase by 100. From 19 to 27 it is 199, 299, 399, 499, ..., and for 28 it is 1999, an increase by 1000. From 29 to 36 it is 2999, 3999, 4999, ..., and for 37 it is 19999 (increase by 10000). From 38 to 45 it is 29999, 39999, 49999, ..., 99999, and for 46 it is 199999 (increase by 100000).
In our so-called positional numeration system, the digits get a weight that increases from right to left, following the powers of ten (units, tens, hundreds, thousands...). So to minimize the number you will allocate the budget in priority to the positions with the smallest weight. This is why the solution is by putting as many $9$s to the right as you can, preceded by the remainder of the budget. There will be $b\text{ div }9$ nines and the digit $b\bmod9$, forming the number $$(b\bmod 9)10^{b\text{ div }9}+10^{b\text{ div }9}-1=(b\bmod 9+1)10^{b\text{ div }9}-1.$$ If on the opposite you want to maximize the sum, then you must forbid the digit $0$ (because you could insert them "for free"), and the solution is formed by a maximum of $9$s followed by the remainder, i.e. $$10\,(10^{b\text{ div }9}-1)+b\bmod9$$ unless $b\bmod9=0$, then $$10^{b\text{ div }9}-1.$$
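A direct translation of this rule into code (my own sketch, with a few of the question's examples as test cases):
```python
# Smallest nonnegative integer with digit sum b:
# the digit (b mod 9) followed by (b div 9) nines,
# i.e. (b mod 9 + 1) * 10^(b div 9) - 1.
def smallest_with_digit_sum(b):
    q, r = divmod(b, 9)
    return (r + 1) * 10**q - 1

for b in [9, 11, 19, 28, 37, 46]:
    print(b, smallest_with_digit_sum(b))
# 9->9, 11->29, 19->199, 28->1999, 37->19999, 46->199999
```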
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How do find if a relation is a function algebraically Is there a way to see if a relation is a function without having to do a "vertical line test" (where you draw a vertical line on the graph and if there line touches two points then it's not a function). To determine if a function is even or odd you simply go f(x) = f(-x); even, f(-x) = -f(x); odd. Can I do something similar to find out if a relation is a function? Thanks
A relation is a function exactly when each input is related to exactly one output; that is, the relation must be well-defined (this is precisely what the vertical line test checks graphically). Note that being one-to-one (injective) is a separate, stronger property that functions need not have. If you can't use the vertical line test, check algebraically whether some input $x$ is paired with two different outputs; for example, $y^2=x$ is not a function of $x$ because it has two branches, $y=\pm\sqrt{x}$.
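For finite relations given as sets of pairs, the criterion is easy to mechanize; here is a small sketch (my own addition):
```python
# A finite relation (a set of (input, output) pairs) is a function
# iff no input appears with two different outputs.
def is_function(relation):
    outputs = {}
    for x, y in relation:
        if x in outputs and outputs[x] != y:
            return False  # x is related to two different outputs
        outputs[x] = y
    return True

print(is_function({(1, 2), (2, 3)}))  # True
print(is_function({(1, 2), (1, 3)}))  # False: input 1 has two outputs
```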
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are matrix norms defined the way they are? Given $A$ a square matrix Define: $\|A\|_1$ as the max absolute column sum $\|A\|_2$ as the sum of the squares of each element $\|A\|_\infty$ as the max absolute row sum Pray tell, why are matrix norms defined this way? Is this a property inherited or derived from the vector norms? (Source doesn't say: http://www.personal.soton.ac.uk/jav/soton/HELM/workbooks/workbook_30/30_4_matrx_norms.pdf )
Matrices can be considered as linear operators. And for a linear operator $A:X\to Y$, where $X,Y$ are normed spaces with norms $\|.\|_X,\|.\|_Y$, the definition of the operator norm is $$\|A\|=\sup\limits_{x\in X,x\neq 0}{\frac{\|Ax\|_Y}{\|x\|_X}}$$ If you use this definition, then the obtained matrix norm is called induced norm, because it is induced from the vector norms of the underlying vector spaces $X$ and $Y$. Such norms naturally satisfy also the last norm property $\|AB\|\leq \|A\|\|B\|$. But this property is not a real property of the norm (there are only $3$ properties), it is just that some authors use the terminology a matrix norm, only for those norms which satisfy this additional property (see Wikipedia). For example, if you have square matrix $A\in \mathbb R^{n\times n}$, $A:(\mathbb R^n,l^2)\to (\mathbb R^n,l^2)$, where $(\mathbb R^n,l^2)$ means the vector space $\mathbb R^n$ equipped with the $l^2$ Euclidean norm, the resulting induced matrix norm is $$\|A\|_2=\sup\limits_{x\neq 0}{\frac{\|Ax\|_2}{\|x\|_2}}=\sup\limits_{x\neq 0}{\frac{\sqrt{\langle Ax,Ax\rangle}}{\sqrt{\langle x,x\rangle}}}=\sup\limits_{x\neq 0}{\sqrt{\frac{\langle A^TAx,x\rangle}{\langle x,x\rangle}}}=\sqrt{\lambda_{\max}(A^TA)}$$ You also can find the derivation of the $\|A\|_1,\|A\|_\infty$ and other induced $\|A\|_p$ norms.
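A small numerical illustration (my own addition): for a random matrix, the closed-form row/column-sum expressions agree with numpy's induced norms, and the induced $2$-norm matches $\sqrt{\lambda_{\max}(A^TA)}$.
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

one_norm = np.abs(A).sum(axis=0).max()   # max absolute column sum
inf_norm = np.abs(A).sum(axis=1).max()   # max absolute row sum
two_norm = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())

print(np.isclose(one_norm, np.linalg.norm(A, 1)))       # True
print(np.isclose(inf_norm, np.linalg.norm(A, np.inf)))  # True
print(np.isclose(two_norm, np.linalg.norm(A, 2)))       # True
```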
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Use $R_4$ to estimate the area under the curve $y= \frac{2}{1+x^2}$ between $x=0$ and $x=1$. Question : Use $R_4$ to estimate the area under the curve $y= \frac{2}{1+x^2}$ between $x=0$ and $x=1$. Not sure how to proceed with this question.
Let $$f(x)=\frac2{(1+x^2)}$$ Between $0$ and $1$ we have the width of each section equal to $\frac14$ because we are using $4$ subsections. For the first subsection we have width times height, so $\frac14\times f(0.25)$ ($0.25$ because we are using right-endpoint approximations). We increment the input for the function by $0.25$ until we reach $1$, and then we sum up all the areas. Observe $$[0.25 \times f(0.25)] + [0.25 \times f(0.5)] + [0.25 \times f(0.75)] + [0.25 \times f(1)]$$ Factoring the $0.25$ out, we have $= 0.25(f(0.25) + f(0.5) + f(0.75) + f(1))$ $= 0.25(\frac{32}{17} + \frac85 + \frac{32}{25} + 1)$ $= \frac{2449}{1700}$ $\approx 1.44$
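The same computation in a few lines (my own addition), using exact rational arithmetic and generalizing to any number of right-endpoint rectangles:
```python
from fractions import Fraction

def right_riemann(n):
    # right-endpoint Riemann sum for f(x) = 2 / (1 + x^2) on [0, 1]
    h = Fraction(1, n)
    return sum(h * 2 / (1 + (i * h) ** 2) for i in range(1, n + 1))

print(right_riemann(4))         # 2449/1700
print(float(right_riemann(4)))  # ~1.4406
```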
{ "language": "en", "url": "https://math.stackexchange.com/questions/1660950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\frac{a^3+b^3+c^3}{3}\frac{a^7+b^7+c^7}{7} = \left(\frac{a^5+b^5+c^5}{5}\right)^2$ if $a+b+c=0$ Found this lovely identity the other day, and thought it was fun enough to share as a problem: If $a+b+c=0$ then show $$\frac{a^3+b^3+c^3}{3}\frac{a^7+b^7+c^7}{7} = \left(\frac{a^5+b^5+c^5}{5}\right)^2.$$ There are, of course, brute force techniques for showing this, but I'm hoping for something elegant.
Let $T_{m}$ be $a^m+b^m+c^m$. Let $k=-ab-bc-ca$, and $l=abc$. Note that this implies $a,b,c$ are solutions to $x^3=kx+l$. In the spirit of Newton's identities, note the fact that $T_{m+3}=kT_{m+1}+lT_{m}$ (which can be proved by multiplying $x^3=kx+l$ by $x^m$ and summing over the three roots). It is not too difficult to see that $T_{2}=2k$, $T_3=3l$, from $a+b+c=0$. From here, note that $T_{4}=2k^2$ using the identity above. In the same way, note that $T_{5}=5kl$. From here, note $T_{7}=5k^2l+2k^2l=7k^2l$ from $T_{m+3}=kT_{m+1}+lT_{m}$. Therefore, the equation simplifies to showing that $k^2l \times l=(kl)^2$, which is true.
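A quick symbolic check of the whole identity (my own addition): substitute $c=-a-b$ and ask sympy to expand the difference of the two sides.
```python
import sympy as sp

a, b = sp.symbols('a b')
c = -a - b
T = lambda m: a**m + b**m + c**m

expr = (T(3) / 3) * (T(7) / 7) - (T(5) / 5) ** 2
print(sp.expand(expr))  # 0
```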
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 7, "answer_id": 4 }
Is there a relationship between the multiplicity of an eigenvalue and the dimension of its eigenspace? Given the characteristic polynomial of a matrix (which lists the eigenvalues with their multiplicities), how can we understand the dimension of the eigenspace of each eigenvalue without direct calculation? In addition, is there a relationship between the power of an eigenvalue in the minimal polynomial and the dimension of the corresponding eigenspace? ========= At least, can we discuss zero eigenvalues and the null space?
The power of the eigenvalue in the characteristic polynomial is called algebraic multiplicity and the dimension of its eigenspace geometric multiplicity. One can show that $$1 \leq \text{ geometric mult.} \leq \text{algebraic mult.}$$ always holds. Note that equality does not hold in general, e.g. take $$\pmatrix{0 & 0 \\ 1 & 0}$$ which has char. poly. $\chi(\lambda) = \lambda^2$ but the eigenspace corresponding to $\lambda = 0$ is $\operatorname{span}\pmatrix{0\\1}$. For any $n \times n$-matrix $A$ of rank $r < n$, $\lambda = 0$ is an eigenvalue and we have $$\text{geometric mult.} = \dim \ker A = n-r.$$ However, the algebraic mult. may still be strictly greater than the geometric mult. (see example above).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $K_1\cap K_2\cap \dots\cap K_N$ is compact Let $K_1,K_2,\dots,K_N$ be compact subsets of the metric space $(X,d)$. Show that $K_1\cap K_2\cap \dots\cap K_N$ is compact. My approach: I think I should use the definition of compact sets in my textbook: Let $(X,d)$ be a metric space. A subset $K\subseteq X$ is compact if every sequence in $K$ has a convergent subsequence with limit in $K$. I can't get further than this. Can you help? Update: What if it was union instead of intersection?
Actually, an arbitrary intersection (not only finite) of compact subsets is also compact. This is pretty easy to see: in metric spaces, compact subsets are closed, so their intersection is also closed. On the other hand, a closed subset of a compact set is also compact: suppose $K$ is compact and $N \subset K$ is closed. Take any open cover $N \subset \bigcup U_i$ of $N$. As $K - N$ is open, consider the open cover $K \subset (K - N) \cup \bigcup U_i$. As $K$ is compact, it has some finite subcover $K \subset V_1 \cup ... \cup V_n$. If $K - N$ is one of the $V_i$, say $V_1$, then $V_2 \cup ... \cup V_n$ covers $N$, and is a finite subcover of the original cover $N \subset \bigcup U_i$ (if $K-N$ is not among the $V_i$, the $V_i$ themselves already do the job). Thus, as a closed subset of a compact set is compact, and the intersection of compact sets is a closed subset of any one of them, the intersection is compact. On the other hand, a finite union of compact sets is compact, but an infinite one no longer necessarily is!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Is the range of a self-adjoint operator stable under its exponential? Let $H$ be a Hilbert space, and $A \in L(H)$ be a bounded linear self-adjoint operator on $H$. We assume that $R(A)$, the range of $A$, is not closed. Is it true or not that $R(A)$ is stable under $e^{-A}$? Meaning: if $y \in R(A)$ then $e^{-A} y \in R(A)$? I make here two comments: If $R(A)$ were closed, the answer would be simple, by writing $e^{-A}y$ as the limit of the sums $s_n:=\sum\limits_{k=0}^n \frac{(-A)^ky}{k!} \in R(A)$. But here the range is not closed, so it could be that the limit "falls outside" the range. On the other hand, $e^{-A}y$ is an infinite sum of the terms $\frac{(-A)^ky}{k!}$, which all lie in $R(A^k) \subset R(A^{k-1}) \subset ... \subset R(A)$. So here we are adding elements which go, somehow, deeper and deeper into $R(A)$. Could it be that this property saves the game?
Ok so I think I have a (partial) answer, by means of some spectral analysis. Assuming that $A$ is self-adjoint with nonnegative spectrum (and, implicitly, compact, so that the eigenvector expansion below is available), and that $H$ is separable, we can find a family $(\sigma_n)$ of decreasing nonnegative eigenvalues and $(u_n)$ of orthonormal eigenvectors such that for all $x \in H$, $Ax= \sum\limits_{n=0}^\infty \sigma_n \langle x,u_n \rangle u_n$. In particular, the range is characterized as $$R(A)= \{ x \in X \ | \ \frac{\vert \langle x , u_n \rangle \vert }{\sigma_n} \in \ell^2(\mathbb{N}) \}.$$ So if we take $y \in R(A)$, there exists some $x \in X$ such that $y=\sum\limits_{n=0}^\infty \sigma_n \langle x,u_n \rangle u_n$, and $e^{A}y$ can be written as $\sum\limits_{n=0}^\infty e^{\sigma_n} \sigma_n \langle x,u_n \rangle u_n$. According to our characterisation of $R(A)$, $e^{A}y \in R(A)$ holds if and only if $e^{\sigma_n} \vert \langle x,u_n \rangle \vert \in \ell^2(\mathbb{N})$, which is true since every $e^{\sigma_n}$ is contained in the bounded interval $[1,e^{\sigma_0}]$. The above argument works also after replacing $e^{A}$ by $e^{-A}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Choosing value of ω for SOR I am learning about successive overrelaxation, and I'm wondering if there is an intuitive reason as to why ω must be between 0 and 2. I know that the method will not converge if ω is not in this interval, but I'm wondering if anyone can give an explanation of why this makes sense.
I shall try to give you an intuitive idea of why $\omega \in (0,2)$ is essential. There are many different ways of stating the SOR iteration, but for the purpose of answering your question I will use the following form \begin{equation} x^{(k+1)} = (1 - \omega) x^{(k)} + \omega D^{-1} \left[ b - Lx^{(k+1)} - Ux^{(k)} \right], \end{equation} which is based on the splitting of $A$ as \begin{equation} A = D + L + U, \end{equation} where $D$ is diagonal, $L$ is strictly lower triangular and $U$ is strictly upper triangular. Now consider the extreme situation where we are trying to solve the scalar equation \begin{equation} a x = 0, \quad a \not = 0. \end{equation} Then $L$ and $U$ are both zero and the iteration collapses to \begin{equation} x^{(k+1)} = (1 - \omega) x^{(k)} \end{equation} from which we deduce that \begin{equation} x^{(k)} = (1- \omega)^{k-1} x^{(1)}. \end{equation} Now, if $\omega \not \in (0,2)$ and $x^{(1)} \not = 0$, then $|1-\omega|\ge 1$, so the iterates fail to converge to zero (they grow exponentially when $|1-\omega|>1$, and keep constant magnitude when $\omega \in \{0,2\}$); it is only if $\omega \in (0,2)$ that we have convergence to zero. In summary, even in the case of $n=1$ we cannot hope for convergence unless $\omega \in (0,2)$, and, as a general principle, increasing the dimension of a problem tends to provide numerical algorithms with more, rather than fewer, opportunities to misbehave. You will have no trouble extending the above idea to the case where $A = D$ is a diagonal matrix and the right-hand side $b$ is the zero vector. I hope this helps.
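A tiny numerical illustration of the scalar argument (my own sketch):
```python
# Iterate x <- (1 - w) x from x = 1 and watch |x| after 50 steps.
for w in [0.5, 1.0, 1.9, 2.0, 2.5]:
    x = 1.0
    for _ in range(50):
        x = (1 - w) * x
    print(f"w = {w}: |x_50| = {abs(x):.3e}")
# w in (0, 2) drives |x| to 0; w = 2 oscillates with |x| = 1; w = 2.5 blows up.
```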
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the limit as x approaches zero for $\frac{2^{1/x} - 2^{-1/x}}{2^{1/x} + 2^{-1/x}}$ does not exist This problem was in one of the first chapters of a calculus text, so how would you go about solving this without applying L'Hôpital's rule? I attempted factoring out $2^{1/x}$, as well as using u substitution for $2^{1/x}$. By graphing, I can see that the left-side limit does not equal the right-side limit, but how else can I demonstrate this?
We can simplify the problem as follows: $$\mathrm f(x) := \frac{2^{1/x}-2^{-1/x}}{2^{1/x}+2^{-1/x}} \equiv \frac{2^{2/x}-1}{2^{2/x}+1} \equiv \frac{4^{1/x}-1}{4^{1/x}+1} \equiv \frac{1-4^{-1/x}}{1+4^{-1/x}}$$ All we need to do is consider the limits of $4^{1/x}$ and $4^{-1/x}$ as $x$ tends to zero. * *If $x<0$ and $x \to 0$ then $1/x \to - \infty$ meaning that $4^{1/x} \to 0$ and so $$\mathrm f(x) \equiv \frac{4^{1/x}-1}{4^{1/x}+1} \to \frac{0-1}{0+1} = -1$$ *If $x>0$ and $x \to 0$ then $-1/x \to -\infty$ meaning that $4^{-1/x} \to 0$ and so $$\mathrm f(x) \equiv \frac{1-4^{-1/x}}{1+4^{-1/x}} \to \frac{1-0}{1+0} = 1$$ Since the left- and right-hand limits are different, the limit is not well-defined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\lim_{x\to 0} (2^{\tan x} - 2^{\sin x})/(x^2 \sin x)$ without l'Hôpital's rule; how is my procedure wrong? Please explain why my procedure is wrong; I am not able to find out. I know the property that the limit of a product is the product of the limits (provided the limits exist, and I think in this case the limits exist for both functions). The actual answer for the given question is $\frac{1}{2}\log(2)$. My course book has shown that one should not use this step, but has not given the reason. Please tell me why I am wrong.
One can rewrite: $$\frac{2^{\tan x}-2^{\sin x}}{x^2\sin x}=\frac{2^{\tan x}-2^{\sin x}}{\tan x - \sin x}\frac{\tan x - \sin x}{x^2 \sin x}=\frac{2^{\tan x}-2^{\sin x}}{\tan x - \sin x}\frac{1-\cos x}{x^2 \cos x}=\frac{2^{\tan x}-2^{\sin x}}{\tan x - \sin x}\frac{\sin^2 x}{x^2}\frac{1}{(1+\cos x)\cos x}$$ $$=2^{\sin x}\cdot\frac{2^{\tan x-\sin x}-1}{\tan x - \sin x}\cdot \left(\frac{\sin x}{x}\right)^2\cdot\frac{1}{(1+\cos x)\cos x}\to1\cdot\ln 2\cdot 1^2\cdot \frac{1}{2\cdot1} = \frac{\ln 2}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1661945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 5 }
Does existence of independent variables ensure that there are infinitely many solutions? As I stated above. In a system of equations, does existence of independent variables ensure that there are infinitely many solutions? THANK YOU!
I assume you mean independent variable as to mean the same as my $t$ in the following example: Say we have a system of equations \begin{align} -x_1-2x_2-5x_3&=-3 \\ 2x_1+3x_2+8x_3&=4 \\ 2x_1+6x_2+14x_3&=10 \\ \end{align} which we write in matrix form and then reduce: \begin{align} \pmatrix{ -1 & -2 & -5 & -3 \\ 2 & 3 & 8 & 4 \\ 2 & 6 & 14 & 10 } \rightarrow \pmatrix{ 1 & 2 & 5 & 3 \\ 0 & 1 & 2 & 2 \\ 0 & 0 & 0 & 0 } \end{align} which is the same as \begin{align} x_1+2x_2+5x_3&=3 \\ x_2+2x_3&=2 \end{align} Since we have two equations with three variables, we have a free parameter, which we define as $t=x_3 \in \mathbb{R}$. Parameterization gives us that the full solution can be written as $$\pmatrix{x_1 \\ x_2 \\ x_3}=\pmatrix{-1 \\2 \\ 0} +t\pmatrix{-1 \\ -2 \\ 1}$$ which indeed represents an infinite number of points and therefore solutions.
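A quick verification of the parameterized family (my own addition): plug it into the original system for several values of $t$.
```python
import numpy as np

A = np.array([[-1, -2, -5],
              [ 2,  3,  8],
              [ 2,  6, 14]], dtype=float)
b = np.array([-3, 4, 10], dtype=float)

for t in [-1.0, 0.0, 2.5]:
    x = np.array([-1, 2, 0]) + t * np.array([-1, -2, 1])
    print(np.allclose(A @ x, b))  # True for every t
```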
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to evaluate this limit? I have a problem with this limit, I have no idea how to compute it. Can you explain the method and the steps used(without L'Hopital if is possible)? Thanks $$\lim _{x\to 0+}\left(\frac{\left[\ln\left(\frac{5+x^2}{5+4x}\right)\right]^6\ln\left(\frac{5+x^2}{1+4x}\right)}{\sqrt{5x^{10}+x^{11}}-\sqrt{5}x^5}\right)$$
Let's try the elementary way. We have \begin{align} L &= \lim _{x \to 0^{+}}\left(\dfrac{\left[\log\left(\dfrac{5 + x^{2}}{5 + 4x}\right)\right]^{6} \log\left(\dfrac{5 + x^{2}}{1 + 4x}\right)}{\sqrt{5x^{10} + x^{11}} - \sqrt{5}x^5}\right)\notag\\ &= \lim _{x \to 0^{+}}\left(\dfrac{\left[\log\left(1 + \dfrac{x^{2} - 4x}{5 + 4x}\right)\right]^{6} \cdot\log 5}{\sqrt{5x^{10} + x^{11}} - \sqrt{5}x^5}\right)\notag\\ &= \log 5\lim _{x \to 0^{+}}\left(\dfrac{\left[\dfrac{\log\left(1 + \dfrac{x^{2} - 4x}{5 + 4x}\right)}{\dfrac{x^{2} - 4x}{5 + 4x}}\right]^{6}\left(\dfrac{x^{2} - 4x}{5 + 4x}\right)^{6}}{\sqrt{5x^{10} + x^{11}} - \sqrt{5}x^5}\right)\notag\\ &= \log 5\lim _{x \to 0^{+}}\left(\dfrac{1\cdot\left(\dfrac{x^{2} - 4x}{5 + 4x}\right)^{6}}{\sqrt{5x^{10} + x^{11}} - \sqrt{5}x^5}\right)\notag\\ &= \log 5\lim _{x \to 0^{+}}\left(\dfrac{x^{6}\left(\dfrac{x - 4}{5 + 4x}\right)^{6}}{x^{5}\{\sqrt{5 + x} - \sqrt{5}\}}\right)\notag\\ &= \log 5\lim _{x \to 0^{+}}\left(\dfrac{x - 4}{5 + 4x}\right)^{6}\cdot\frac{x}{\sqrt{5 + x} - \sqrt{5}}\notag\\ &= \left(\frac{4}{5}\right)^{6}\log 5\lim _{x \to 0^{+}}\frac{x(\sqrt{5 + x} + \sqrt{5})}{5 + x - 5}\notag\\ &= \left(\frac{4}{5}\right)^{6}2\sqrt{5}\log 5\notag\\ &= \frac{8192\sqrt{5}\log 5}{15625}\notag \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can one do dimensional analysis when units are not known? In the sciences, we can do dimensional analysis and unit checks to verify whether or not the LHS and the RHS have the same units. If we have the following function:$$y=f(x)=x^{2}$$ what ensures the preserving of units? I have a feeling it is the exponent of 2 which is not dimensionless, but if we write it as:$$y=x^{2}=x\times x$$ where can I see balancing out of the dimensions?
If you're doing mathematics, you usually work with dimensionless quantities, so there is nothing to balance: in $y=x^2$ both $x$ and $y$ are pure numbers. If the same relation arose in a physical setting, it would carry a hidden dimensional constant, $y=kx^2$ with $[k]=[y]/[x]^2$, and that constant is what makes the units balance.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the limit of $\sqrt{9x^2+x} -3x$ at infinity To find the limit $$\lim_{x\to\infty} (\sqrt{9x^2+x} -3x)$$ Basically I simplified this down to $$\lim_{x\to\infty} \frac{1}{\sqrt{9+1/x}+3x}$$ And I am unaware of what to do next. I tried to just sub in infinity and I get an answer of $0$ , since $1 / \infty = 0$. However, on symbolab, when I enter the problem it gives me an answer of $1/6$. Can anyone please explain to me what I need to do from this point? I haven't learned l'Hopitals rule yet so please don't suggest that. Thanks
For $x>0$ we have $$\sqrt{9x^2+x}-3x=\frac{\big(\sqrt{9x^2+x}-3x\big)\big(\sqrt{9x^2+x}+3x\big)}{\sqrt{9x^2+x}+3x}=\frac{9x^2+x-9x^2}{\sqrt{9x^2+x}+3x}=\frac{x}{\sqrt{9x^2+x}+3x}=\frac{1}{\sqrt{9+\frac{1}{x}}+3}$$ Thus: $$\lim_{x\rightarrow\infty}\big(\sqrt{9x^2+x}-3x\big)=\lim_{x\rightarrow\infty}\frac{1}{\sqrt{9+\frac{1}{x}}+3}=\frac{1}{6}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
prove that $\sqrt{2}$ is not periodic. If I am asked to show that $\sqrt{2}$ does not have a periodic decimal expansion. Can I just prove that $\sqrt{2}$ is irrational , and since irrational numbers are don't have periodic decimal expansions then I am done? Thank you.
Suppose $\sqrt{2}$ has a periodic decimal expansion, i.e. it has the form $$\sqrt{2} = 1.a_1a_2 \cdots a_m \overline{d_1 d_2 \cdots d_n}$$ where the overline indicates the repeating digits. Then $$10^m \sqrt{2} = 1a_1a_2 \cdots a_m . \overline{d_1 d_2 \cdots d_n}$$ and $$10^{m+n} \sqrt{2} = 1a_1 a_2 \cdots a_m d_1 d_2 \cdots d_n . \overline{d_1 d_2 \cdots d_n}$$ Subtracting the first from the second yields $$(10^{m+n}-10^m)\sqrt{2} = 1a_1 a_2 \cdots a_m d_1 d_2 \cdots d_n - 1a_1a_2 \cdots a_m$$ and hence $$\sqrt{2} = \frac{1a_1 a_2 \cdots a_m d_1 d_2 \cdots d_n - 1a_1a_2 \cdots a_m}{10^{m+n}-10^m}$$ This implies that $\sqrt{2}$ is rational, which is known to be false. This argument generalises to show that all irrational numbers have nonperiodic expansions (in any number base).
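The subtraction trick from the proof, run on a concrete repeating decimal (my own example, $x = 1.41\overline{6} = 17/12$, so $m=2$ and $n=1$):
```python
from fractions import Fraction

m, n = 2, 1
prefix, period = 141, 6   # digits of 1.41666...

# (10^(m+n) - 10^m) x = (digits including one period) - (digits without it)
numerator = (prefix * 10**n + period) - prefix
x = Fraction(numerator, 10**m * (10**n - 1))
print(x)  # 17/12 -- a rational number, exactly as the proof predicts
```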
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Determining the bounds for a triple integral? I need help with a triple integration problem. I don't need help actually integrating this thing, I just need help with setting the actual integral(s) up. Specifically, I don't know how to determine what the bounds are. We basically have a 3D donut. The problem says we can model the donut as a torus centered at the origin (0,0,0) with outer radius R=4 and inner radius r=2. The points (x,y,z) inside the torus are described by a condition (given as an equation image in the original post, not reproduced here). Here, c is the radius from the origin to the center of the torus tube (so I think it's 3), and a is the radius of the donut tube (so I think it should be 1); the cross section of the donut tube is a circle. I need to calculate the volume of the donut after 2 cuts. The first cut happens parallel to the x axis at y = -3, and then parallel to the y axis at x = 1. So far, I set up an integral (in polar coordinates, given as an image in the original post) that I think represents the volume of the whole donut. But I don't know how to subtract the cuts? I'm assuming I need two more integrals that have to be subtracted from the whole volume, but I can't set it up. Can anyone help? Is the volume I have correct so far?
You have a problem of order: The limits on the $z$ integral depend on $r$, and therefore it has to be done where $r$ is defined. In other words, the integral is $$\int_0^{2\pi}\int_2^4\int_{-\sqrt{a^2-(r-c)^2}}^{\sqrt{a^2-(r-c)^2}} rdzdrd\theta$$ And you are correct that $c = 3$ and $a = 1$. But as noted in the comments, this approach does not work well with the cuts. The problem is, the cuts are flat, which leads to complicated limits of integration for $r$ and/or $\theta$, which will undo the advantages of going to cylindrical representation ("polar" coordinates are in the plane - the 3D analogs are either "cylindrical", as you are using here, or "spherical"). Let's look at the cuts. The first is by the plane $y = -3$ (which is automatically parallel to the x-axis and to the z-axis). This plane is tangent to the "center ring" of the torus. The other plane is $x = 1$, which passes through the hole. Note that these two planes meet inside the torus. To set this up as an integral in $x, y, z$, you need to consider that for given values of $x$, there are in general two separate ranges of values for $y$ that need to be integrated over, and vice versa. The way to handle this is to break the torus up into quadrants and work each quadrant separately. Within each quadrant, there is only one range for $x$ or $y$ to be integrated. We can also do the same for $z$, limiting attention to $z \le 0$ and $z \ge 0$ separately. But in this case, the geometry is symmetric, so we will pick up the same volume on either side. Thus we can calculate only $z \ge 0$, then double it to get the whole volume. The equation of the torus surface is $$z^2 +(\sqrt{x^2 + y^2} - 3)^2 = 1$$ So we can set up the integration over the entire first quadrant as $$2\int_0^4\int_0^{\sqrt{16 - x^2}}\int_0^{\sqrt{1-(\sqrt{x^2+y^2}-3)^2}} dzdydx$$ After cutting, for each quadrant we have: * *Quadrant 1: $0 \le x \le 1, 0 \le y$ $$2\int_0^1\int_0^{\sqrt{16 - x^2}}\int_0^{\sqrt{1-(\sqrt{x^2+y^2}-3)^2}} dzdydx$$ * *Quadrant 2: $x \le 0, 0 \le y$ $$2\int_{-4}^0\int_0^{\sqrt{16 - x^2}}\int_0^{\sqrt{1-(\sqrt{x^2+y^2}-3)^2}} dzdydx$$ * *Quadrant 3: $x \le 0, -3 \le y \le 0$ $$2\int_{-4}^0\int_{-3}^{-\sqrt{4 - x^2}}\int_0^{\sqrt{1-(\sqrt{x^2+y^2}-3)^2}} dzdydx$$ * *Quadrant 4: $0 \le x \le 1, -3 \le y \le 0$ $$2\int_0^1\int_{-3}^{-\sqrt{4 - x^2}}\int_0^{\sqrt{1-(\sqrt{x^2+y^2}-3)^2}} dzdydx$$ (the upper limit $-\sqrt{4-x^2}$ keeps the strip below the central hole). The $z$ integration is easy, but for the $y$ and $x$ integrations, you will need some trigonometric substitutions.
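Before tackling the cuts, a numerical sanity check of the uncut setup may be reassuring (my own addition, using scipy): the cylindrical integral reproduces the classical torus volume $2\pi^2 c a^2 \approx 59.22$.
```python
# Integrate the z-extent 2*sqrt(a^2 - (r-c)^2) times r over the annulus,
# then multiply by 2*pi for the theta integral.
import numpy as np
from scipy import integrate

c, a = 3.0, 1.0
inner = lambda r: 2.0 * np.sqrt(max(a**2 - (r - c)**2, 0.0)) * r
vol, _ = integrate.quad(inner, c - a, c + a)
vol *= 2.0 * np.pi
print(vol, 2.0 * np.pi**2 * c * a**2)  # both ~59.2176
```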
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Elementary symmetric polynomial related to matrices I've encountered with the following question while reading something about invariant polynomial in Chern-Weil theory: For a matrix $X \in M(n;\Bbb{R})$ ,denote its eigenvalues by $\lambda_1,...,\lambda_n$,and the $n$ symmetric polynomials by $\sigma_1,...\sigma_n$,that is $$\sigma_1(X)=\lambda_1+...+\lambda_n$$ $$\sigma_2(X)=\lambda_1\lambda_2+...+\lambda_{n-1}\lambda_n=\sum_{i\lt_j}\lambda_i\lambda_j$$$$...$$ $$\sigma_n(X)=\lambda_1\lambda_2...\lambda_n$$ Is it true or false that $$\det(I+tX)=1+t\sigma_1(X)+t^2\sigma_2(X)+...+t^n\sigma_n(X)$$ I've checked the case when $n=2$ and $n=3$ ,but how can we prove it in general? Much appreciated!
You can actually just read this from the characteristic polynomial $$\chi(t)=\det(tI-X)=\prod_{k=1}^n(t-\lambda_k)=\sum_{k=0}^n(-1)^k\sigma_k(X)t^{n-k}\text{,}$$ because then your polynomial is $$\det(I+tX)=(-t)^n\chi(-t^{-1})=\sum_{k=0}^n\sigma_k(X)t^k\text{.}$$
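For a concrete sanity check (my own addition), sympy confirms the identity on an arbitrary integer matrix, using exactly the characteristic-polynomial trick above:
```python
import sympy as sp

t = sp.symbols('t')
X = sp.Matrix([[2, 1, 0], [0, 3, 1], [1, 0, 1]])

lhs = sp.expand((sp.eye(3) + t * X).det())
chi = X.charpoly(t).as_expr()               # det(tI - X)
rhs = sp.expand((-t)**3 * chi.subs(t, -1/t))
print(sp.simplify(lhs - rhs))               # 0
print(lhs)  # 7*t**3 + 11*t**2 + 6*t + 1: the sigma_k of the eigenvalues
```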
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$X,Y$ infinite dimensional NLS , not both Banach , then $\exists T \in \mathcal L(X,Y)$ such that $R(T)$ is not closed in $Y$? Let $X,Y$ be infinite dimensional normed-linear spaces , not both Banach , then does there necessarily exist a continuous linear transformation $T:X \to Y $ such that $range (T)$ is not closed in $Y$ ?
Let $X$ be a Banach space and $Y$ the increasing union of a sequence of finite-dimensional subspaces $F_n$ (such a $Y$ is never complete, and each $F_n$, being finite-dimensional, is closed). Then $X = \bigcup T^{-1}(F_n)$ with each $T^{-1}(F_n)$ closed. By the Baire category theorem, some $T^{-1}(F_n)$ has nonempty interior; being a closed subspace with nonempty interior, it is all of $X$. That is, $\text{Range}(T)$ is a subspace of the finite-dimensional $F_n$, and therefore is closed. So for this choice of spaces every continuous linear $T:X\to Y$ has closed range, and the answer to the question is no.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove $\sinh(x)\leq 3|x|$ for $|x| < 1/2$ I need to prove $|\sinh(x)|\leq 3|x|$ for $|x| < 1/2$ My current progress states that $$x \leq \frac{ (1+x)-(1-x) }{2} \leq \sinh(x)\leq \frac{ \frac{1}{1-x}-\frac{1}{1+x} }{2}$$ whereas $$|\sinh(x)|\leq \max\left(|x|,\left|\frac{x}{1+x^2}\right|\right)\leq|x|+\left|\frac{x}{1+x^2}\right|$$ From my book I also get that $$|\exp(x)-1|\leq 3|x|\text{ so }|\sinh(x)| \leq |\exp(x)-1|\leq 3|x|$$ But now i'm stuck. Any hints?
It seems you can use the series definition. Then $$ \left|\sinh(x)\right|=|x|\left(1+\frac{|x|^2}{3!}+\frac{|x|^4}{5!}+…\right) \le|x|·\left(1+\frac{|x|^2}{6}+\frac{|x|^4}{6^2}+…\right) \\=|x|·\frac{6}{6-x^2} $$ for $|x|<\sqrt{6}$ and for $|x|\le\frac12$ this can be reduced to $$ |\sinh(x)|\le|x|·\frac{24}{23}. $$ Actually you can extend the range to $|x|<2$ where you get $$ |\sinh(x)|\le|x|·\frac{6}{6-2^2}=3·|x|. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A probability theory question about independent coin tosses by two players Say Bob tosses his $n+1$ fair coins and Alice tosses her $n$ fair coins. Lets assume independent coin tosses. Now after all the $2n+1$ coin tosses one wants to know the probability that Bob has gotten more heads than Alice. The way I thought of it is this : if Bob gets $0$ heads then there is no way he can get more heads than Alice. Otherwise the number of heads Bob can get which allows him to win is anything in the set $\{1,2,\dots,n+1\}$. And if Bob gets $x$ heads then the number of heads that Alice can get is anything in the set $\{0,1,2,..,x-1\}$. So\begin{align}P(\text{Bob gets more heads than Alice})&= \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} P( \text{Bob gets x heads }\cap \text{Alice gets y heads }) \\[0.2cm]&= \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} \left(C^{n+1}_x \frac{1}{2}^{x} \frac{1}{2}^{n+1-x}\right)\left( C^n_y \frac{1}{2}^y \frac {1}{2}^{n-y}\right)\\[0.2cm]& = \sum_{x=1}^{n+1} \sum_{y=0}^{x-1} \frac{C^{n+1}_x C^n_y}{2^{2n+1}}\end{align} * *How does one simplify this? Apparently the answer is $\frac{1}{2}$ by an argument which looks like this, Since Bob tosses one more coin that Alice, it is impossible that they toss both the same number of heads and the same number of tails. So Bob tosses either more heads than Alice or more tails than Alice (but not both). Since the coins are fair, these events are equally likely by symmetry, so both events have probability 1/2.
Get out some red paint. Paint all the heads sides on Bob's coins, and paint all the tails sides of Alice's coins. Bob wins if and only if at least $n + 1$ coins out of $2n + 1$ land red side up. By symmetry, the probability of this happening is $1/2$.
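A Monte Carlo sanity check of the symmetry argument (my own addition):
```python
import random

n, trials, wins = 10, 100_000, 0
for _ in range(trials):
    bob = sum(random.randint(0, 1) for _ in range(n + 1))
    alice = sum(random.randint(0, 1) for _ in range(n))
    wins += bob > alice
print(wins / trials)  # approximately 0.5
```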
{ "language": "en", "url": "https://math.stackexchange.com/questions/1662958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
How do I prove this seemingly obvious property of subgroups The statement is the following: Given an abelian group $G=\langle a_1,...,a_t\rangle$, and a subgroup $H$ of $G$, we need at most $t$ elements to generate $H$; i.e. $H=\langle b_1,...,b_t\rangle$ for some $b_1,...,b_t\in H$. While this statement seems obvious, I couldn't find a way to prove it, even if I limited myself to finite groups. I tried a couple of approaches (see below). Any help will be appreciated. Approach 1: I tried proving that if $\{ [a_1]_H,...,[a_k]_H\}$ is a minimal generating set for $G/H$, and $\{ b_1,...,b_t\}$ is a minimal generating set for $H$, then $\{ a_1,...,a_k,b_1,...,b_t\}$ is a minimal generating set for $G$. Approach 2: Limiting myself to finite groups, I tried proving the statement using induction. I tried induction on the size of $G$, size of $H$, and size of the minimal sets generating $G$ and $H$.
I already had a proof of this written down, so I have copied and pasted it. Let $K \le G$ with $G$ an (additive) abelian group generated by $x_1,\ldots,x_n$. We shall prove by induction on $n$ that $K$ can be generated by at most $n$ elements. If $n=1$ then $G$ is cyclic and hence so is $K$. Suppose $n>1$, and let $H$ be the subgroup of $G$ generated by $x_1,\ldots,x_{n-1}$. By induction, $K \cap H$ is generated by $y_1,\ldots,y_{m-1}$, say, with $m \le n$. If $K \le H$, then $K = K \cap H$ and we are done, so suppose not. Then there exist elements of the form $h + t x_n \in K$ with $h \in H$ and $t \ne 0$. Since $-(h+t x_n) \in K$, we can assume that $t > 0$. Choose such an element $y_m = h + t x_n \in K$ with $t$ minimal subject to $t > 0$. We claim that $K$ is generated by $y_1,\ldots,y_m$, which will complete the proof. Let $k \in K$. Then $k = h' + u x_n$ with $h' \in H$ and $u \in {\mathbb Z}$. If $t$ does not divide $u$ then we can write $u = tq + r$ with $q,r \in {\mathbb Z}$ and $0 < r < t$, and then $k - qy_m = (h'-qh) + rx_n \in K$, contrary to the choice of $t$. So $t|u$ and hence $u=tq$ and $k - qy_m \in K \cap H$. But $K \cap H$ is generated by $y_1,\ldots,y_{m-1}$, so we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrate $I= \int \frac{x^2 -1}{x\sqrt{1+ x^4}}\,\mathrm d x$ $$I= \int \frac{x^2 -1}{x\sqrt{1+ x^4}}\,\mathrm d x$$ My Endeavour : \begin{align}I&= \int \frac{x^2 -1}{x\sqrt{1+ x^4}}\,\mathrm d x\\ &= \int \frac{x}{\sqrt{1+ x^4}}\,\mathrm d x - \int \frac{1}{x\sqrt{1+ x^4}}\,\mathrm d x\end{align} \begin{align}\textrm{Now,}\;\;\int \frac{x}{\sqrt{1+ x^4}}\,\mathrm d x &= \frac{1}{2}\int \frac{2x^3}{x^2\sqrt{1+ x^4}}\,\mathrm dx\\ \textrm{Taking}\,\,(1+ x^4)= z^2\,\,\textrm{and}\,\, 4x^3\,\mathrm dx= 2z\,\mathrm dz\,\, \textrm{we get} \\ &= \frac{1}{2}\int \frac{z\,\mathrm dz}{\sqrt{z^2-1}\, z}\\ &= \frac{1}{2}\int \frac{\mathrm dz}{\sqrt{z^2-1}}\\ &= \frac{1}{2}\ln|z+ \sqrt{z^2 -1}|\\ &= \frac{1}{2}\ln|\sqrt{1+x^4}+ x^2|\\ \textrm{Now, with the same substitution, we get in the second integral}\\ \int \frac{1}{x\sqrt{1+ x^4}}\,\mathrm d x &= \frac{1}{2}\int \frac{2x^3}{x^4\sqrt{1+ x^4}}\,\mathrm dx\\ &= \frac{1}{2}\int \frac{z\,\mathrm dz}{( z^2 -1)\;z} \\ &=\frac{1}{2}\int \frac{\mathrm dz}{ z^2 -1} \\ &=\frac{1}{2^2}\, \ln \left|\frac{ z+1}{z-1}\right| \\ &= \frac{1}{2^2}\, \ln \left|\frac{ \sqrt{1+ x^4}+1}{\sqrt{1+ x^4}-1}\right|\;.\end{align} So, \begin{align}I&=\int \frac{x^2 -1}{x\sqrt{1+ x^4}}\,\mathrm d x \\ &=\frac{1}{2}\, \ln|\sqrt{1+x^4}+ x^2|- \frac{1}{2^2}\, \ln \left|\frac{ \sqrt{1+ x^4}+1}{\sqrt{1+ x^4}-1}\right| + \mathrm C\;.\end{align} Book's solution: \begin{align}I&=\int \frac{x^2 -1}{x\sqrt{1+ x^4}}\,\mathrm d x \\ &= \ln\left\{\frac{1+x^2 + \sqrt{1+x^4}}{x}\right\} + \mathrm C\;.\end{align} And my hardwork's result is nowhere to the book's answer :( Can anyone tell me where I made the blunder?
First note that $$\int \frac{x}{\sqrt{1+ x^4}}\,dx=\frac12\sinh^{-1}(x^2)=\frac12\ln(x^2+\sqrt{1+x^4}),$$ which is exactly your first piece, so that part is fine. The blunder is in the second integral: $\int \frac{dz}{z^2-1}=\frac12\ln\left|\frac{z-1}{z+1}\right|$, so your part 2 must be multiplied by $-1$. With that sign corrected, \begin{align} I&=\frac12\ln(x^2+\sqrt{1+x^4})-\frac14 \ln \frac{ \sqrt{1+ x^4}-1}{\sqrt{1+ x^4}+1}\\ &=\frac12\ln(x^2+\sqrt{1+x^4})-\frac14 \ln \left(\frac{ \sqrt{1+ x^4}-1}{\sqrt{1+ x^4}+1}\cdot\frac{\sqrt{1+ x^4}+1}{\sqrt{1+ x^4}+1}\right)\\ &=\frac12\ln(x^2+\sqrt{1+x^4})-\frac14 \ln \frac{ x^4}{(\sqrt{1+ x^4}+1)^2}\\ &=\frac12\ln(x^2+\sqrt{1+x^4})-\frac12 \ln \frac{ x^2}{\sqrt{1+ x^4}+1}\\ &=\frac12 \ln \frac{ (x^2+\sqrt{1+x^4})(\sqrt{1+ x^4}+1)}{x^2}\\ &=\frac12 \ln \frac{ 2(x^2+\sqrt{1+x^4})(\sqrt{1+ x^4}+1)}{2x^2}\\ &=\frac12 \ln \frac{ 2x^2\sqrt{1+ x^4}+2x^2+2(1+x^4)+2\sqrt{1+x^4}}{2x^2}\\ &=\frac12 \ln \frac{ (1+x^2+\sqrt{1+x^4})^2}{2x^2}\\ &=\ln \frac{ 1+x^2+\sqrt{1+x^4}}{x}-\frac12\ln 2, \end{align} which is the book's answer up to the additive constant $-\frac12\ln 2$, absorbed into $\mathrm C$.
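As an independent sanity check (my own addition, assuming SciPy is available): the definite integral over, say, $[1,2]$ should equal the difference of $F(x)=\ln\frac{1+x^2+\sqrt{1+x^4}}{x}$ at the endpoints, since the additive constant cancels.

import numpy as np
from scipy.integrate import quad

def integrand(x):
    return (x**2 - 1) / (x * np.sqrt(1 + x**4))

def F(x):
    # the book's antiderivative, up to an additive constant
    return np.log((1 + x**2 + np.sqrt(1 + x**4)) / x)

numeric, _ = quad(integrand, 1.0, 2.0)
print(numeric, F(2.0) - F(1.0))   # the two numbers should agree to ~1e-10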
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Proof problem: show that $n^a < a^n$ for all sufficiently large n I would like to show that $n^a < a^n$ for all sufficiently large $n$, where $a$ is a finite constant. This is clearly true by intuition/graphing, but I am looking for a rigorous proof. Can anyone help me out? Thanks.
Assume $a>1$ (for $a\le 1$ the claim fails, since then $a^n$ stays bounded while $n^a\to\infty$). Taking $\log$ of both sides, the claim becomes $a \log n < n \log a$, and since $\log a > 0$ the direction of the inequality is preserved. Now let's check which side grows faster: $\lim\limits_{n\to \infty}\frac{n \log a}{a \log n}=\infty$, or equivalently $\lim\limits_{n\to \infty}\frac{a \log n}{n \log a}=0$. So $n\log a$ eventually dominates $a\log n$, i.e. $n^a<a^n$ for all sufficiently large $n$.
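For a concrete feel (my own illustration), a couple of lines of Python list the exceptional $n$ for $a=3$; the inequality fails only at $n=3$, where the two sides are equal, and holds strictly from $n=4$ on:

a = 3.0
# n^a < a^n  iff  a*log n < n*log a  (valid since a > 1)
fails = [n for n in range(1, 50) if not n**a < a**n]
print(fails)   # [3] -- equality 3^3 = 3^3; every larger n satisfies n^3 < 3^n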
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Initial form of a polynomial I am reading some tropical geometry and came up with the concept of the initial form of a polynomial. The definition says that the initial form of f with respecto to a weight vector $w \in \mathbb{R}^{n+1}$ is \begin{equation} in_w(f) = \sum_{\substack{u\in \mathbb{N}^{n+1} \\ val(c_u) + w\cdot u = W}} \overline{c_ut^{-val(c_u)}}x^u \end{equation} However, I don't really see the intuition behind this initial form, could anyone explain this a bit further?
Have a look at Remark 5.7 of Gubler's "Guide to tropicalization"; I find his approach easier to understand than the definition you seem to have copied from Sturmfels' book. Initial forms can be used to define the tropicalization of a variety, and are especially important if the field $K$ is trivially valued. The set of all weight vectors $w$ such that $in_w(f)$ is not a monomial for every $f$ in the ideal defining a variety $V$ coincides with the set of valuation points of the $K$-points of the variety when the valuation is non-trivial. The closure of these sets is the tropicalization.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cauchy Residue Theorem Integral I have been given the integral $$\int_0^ {2\pi} \frac{\sin^2\theta} {2 - \cos\theta} d\theta $$ I have used the substitutions $z=e^{i\theta}$, $d\theta = \frac{1}{iz}dz$ and a lot of algebra to transform the integral into this $$\frac{-i}{2} \oint \frac{1}{z^2}\frac{(z-1)^2}{z^2-4z+1}dz$$ In order to find the residues I further broke the integral into $$\frac{-i}{2} \oint \frac{1}{z^2}\frac{(z-1)^2}{(z-r_1)(z-r_2)}dz$$ where $r_1 = 2+\sqrt{3}$ and $r_2 = 2-\sqrt{3}$, giving me three residues at $z=0$, $z=r_1$, $z=r_2$. My question is where do I go from here? Thanks.
There was an error in the original post. We have $$\int_0^{2\pi}\frac{\sin^2(\theta)}{2-\cos(\theta)}d\theta=-\frac i2\oint_{|z|=1}\frac{(z^2-1)^2}{z^2(z^2-4z+1)}\,dz$$ There are two poles inside $|z|=1$. The first is a second order pole at $z=0$ and the second is a first order pole at $z=r_2$. To find the residue of the first pole we use the general expression for the residue of a pole of order $n$ $$\text{Res}\{f(z), z= z_0\}=\frac{1}{(n-1)!}\lim_{z\to z_0}\left(\frac{d^{n-1}}{dz^{n-1}}\left((z-z_0)^nf(z)\right)\right)$$ Here, we have $$\begin{align} \text{Res}\left(\frac{-i(z^2-1)^2}{2z^2(z^2-4z+1)}, z= 0\right)&=\frac{1}{(2-1)!}\lim_{z\to 0}\left(\frac{d^{2-1}}{dz^{2-1}}\left((z-0)^2\frac{-i(z^2-1)^2}{2z^2(z^2-4z+1)}\right)\right)\\\\ \end{align}$$ To find the residue at $z=r_2$ we have simply $$\text{Res}\left(\frac{-i(z^2-1)^2}{2z^2(z^2-4z+1)}, z= r_2\right)=\lim_{z\to r_2}\frac{-i(z^2-1)^2}{2z^2(z-r_1)}=-\frac{i}{2}\frac{(r_2^2-1)^2}{r_2^2(r_2-r_1)}$$ The rest is left as an exercise for the reader.
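If you want to check the bookkeeping, a short SymPy sketch (my own addition, not part of the answer) computes both residues and compares $2\pi i$ times their sum against a numerical evaluation of the original real integral; both should come out to $2\pi(2-\sqrt3)\approx 1.6836$:

import sympy as sp

z, theta = sp.symbols('z theta')
f = -sp.I * (z**2 - 1)**2 / (2 * z**2 * (z**2 - 4*z + 1))

r2 = 2 - sp.sqrt(3)      # the only root of z^2 - 4z + 1 inside |z| = 1
total = sp.residue(f, z, 0) + sp.residue(f, z, r2)
value = sp.simplify(2 * sp.pi * sp.I * total)
print(value)             # should simplify to 2*pi*(2 - sqrt(3))

# numerical cross-check against the real integral
direct = sp.Integral(sp.sin(theta)**2 / (2 - sp.cos(theta)),
                     (theta, 0, 2*sp.pi)).evalf()
print(direct, sp.N(value))   # both ~ 1.6836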
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
length of $\sqrt{x} + \sqrt{y} = 1$ ? on $[0,1]$ How do I calculate the length of the curve $\sqrt{x} + \sqrt{y} = 1$? I put $x = \cos^4{t} , \ y = \sin^4{t}$ but failed. Do you have any good idea from here?
$$ \sqrt{x}+\sqrt{y}=1\implies y'=-\frac{\sqrt{y}}{\sqrt{x}} $$ Substituting $x\mapsto u^2$ and $2u-1\mapsto\tan(\phi)$ gives $$ \begin{align} \int_0^1\sqrt{1+y'^2}\,\mathrm{d}x &=\int_0^1\sqrt{1+\frac yx}\,\mathrm{d}x\\ &=\int_0^1\sqrt{1+\frac{1+x-2\sqrt{x}}x}\,\mathrm{d}x\\ &=2\int_0^1\sqrt{2u^2-2u+1}\,\mathrm{d}u\\ &=\sqrt2\int_0^1\sqrt{\left(2u-1\right)^2+1}\,\mathrm{d}u\\ &=\frac1{\sqrt2}\int_{-\pi/4}^{\pi/4}\sec^3(\phi)\,\mathrm{d}\phi\\ &=\frac1{\sqrt2}\int_{-\pi/4}^{\pi/4}\frac{\mathrm{d}\sin(\phi)}{\left(1-\sin^2(\phi)\right)^2}\\ &=\frac1{4\sqrt2}\int_{-\pi/4}^{\pi/4}\left(\frac{\mathrm{d}\sin(\phi)}{(1-\sin(\phi))^2}+\frac{\mathrm{d}\sin(\phi)}{1-\sin(\phi)}+\frac{\mathrm{d}\sin(\phi)}{(1+\sin(\phi))^2}+\frac{\mathrm{d}\sin(\phi)}{1+\sin(\phi)}\right)\\ &=\frac1{4\sqrt2}\left[\frac{2\sin(\phi)}{\cos^2(\phi)}+2\log\left(\frac{1+\sin(\phi)}{\cos(\phi)}\right)\right]_{-\pi/4}^{\pi/4}\\ &=\frac1{2\sqrt2}\left[\tan(\phi)\sec(\phi)+\log(\tan(\phi)+\sec(\phi))\vphantom{\frac12}\right]_{-\pi/4}^{\pi/4}\\ &=\frac1{2\sqrt2}\left[2\sqrt2+2\log(\sqrt2+1)\right]\\ &=1+\frac{\log(\sqrt2+1)}{\sqrt2} \end{align} $$
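A quick numerical confirmation (my own addition, using SciPy; quad copes with the integrable $x^{-1/2}$ singularity at the left endpoint):

import numpy as np
from scipy.integrate import quad

# arc length element sqrt(1 + y/x) with y = (1 - sqrt(x))^2
length, _ = quad(lambda x: np.sqrt(1 + (1 - np.sqrt(x))**2 / x), 0, 1)
exact = 1 + np.log(1 + np.sqrt(2)) / np.sqrt(2)
print(length, exact)   # both ~ 1.623225...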
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convert PI to base 4. Does my unique human genome exist in the sequence of digits? The human genome consists of sequences of base pairs A, G, C, T. Convert the number $\pi$ to base 4. Does my unique human genome exist in the sequence of digits?
A heuristic argument would go as follows: Assume your genome $G$ is a string of $n\gg1$ digits over $\{0,1,2,3\}$. Denote by $x_k$ the $k^{\rm th}$ digit of $\pi$ in base $4$. For each $r\geq0$ the probability that $$(x_{rn+1},x_{rn+2},\ldots,x_{rn+n-1}, x_{(r+1)n})\ne G$$ amounts to $(4^n-1)/ 4^n<1$. Therefore the probability that for no $r\geq0$ we have a coincidence is $$\lim_{N\to\infty}\left({4^n-1\over 4^n}\right)^N=0\ .$$
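Here is a toy version of the search in Python (my own sketch, using mpmath for the digits). It will happily locate short fragments, but note the heuristic above says you need on the order of $4^n$ digits to find an $n$-letter fragment, so an actual genome ($n$ in the billions) is hopelessly out of reach:

from mpmath import mp

mp.dps = 10000            # decimal digits of precision; ~16000 valid base-4 digits
x = mp.pi - 3             # fractional part of pi

digits = []               # base-4 digits of the fractional part
for _ in range(15000):
    x *= 4
    d = int(x)
    digits.append(d)
    x -= d

s = ''.join(map(str, digits))
pattern = '012302'        # a made-up 6-letter fragment, A=0, C=1, G=2, T=3
print(s.find(pattern))    # index of first occurrence, or -1 if absent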
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of probability density function as random variable approaches +/- infinity Consider a complex-valued function $\Psi(x,t)$ such that $|\Psi|^2$ is a probability density function for $x$ (for any time $t$). In his introductory Quantum Mechanics book, David J. Griffiths writes that the limit of the expression $$\Psi^* \frac{\partial \Psi}{\partial x}-\frac{\partial \Psi^*}{\partial x} \Psi$$ as $x \rightarrow \infty$ and as $x \rightarrow -\infty$ must be $0,$ stating that $\Psi$ would not be normalizable otherwise. However, in a footnote, he mentions that "A good mathematician can supply you with pathological counterexamples, but they do not come up in physics; for the wave function always goes to $0$ at infinity." What are some counter-examples to this claim? More generally, if $f$ is a function, and $$\int_{-\infty}^{\infty}f(x)dx=1$$ then when may we assume that $\lim_{x\to\pm\infty}f(x) = 0$? Or, for positive integers $n$, that $\lim_{x\to\pm\infty} f^{(n)}(x)=0$?
Define $$f(x)=\begin{cases} 1-2x & x \in [0,1/2] \\ 2x+1 & x \in [-1/2,0] \\ 0 & \text{otherwise}\end{cases}.$$ Geometrically, this is a triangle with height $1$ and width $1$ centered at zero. Now define $$g(x)=\sum_{n=1}^\infty f(n^2(x-n))$$ This is a sequence of separate triangles with height $1$ and width $1/n^2$. The total area is then $\sum_{n=1}^\infty \frac{1}{2n^2}=\frac{\pi^2}{12}$, which is finite, but $g$ does not have a limit at $+\infty$: $g(n)=1$ at every positive integer $n$, while $g$ vanishes between the triangles. (Dividing by $\pi^2/12$ makes $g$ a genuine probability density, and of course we can make the limit fail at $-\infty$ as well.)
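A few lines of Python (my own addition) illustrate the construction numerically:

import math

def f(x):
    if 0 <= x <= 0.5:
        return 1 - 2 * x
    if -0.5 <= x <= 0:
        return 2 * x + 1
    return 0.0

def g(x, N=200):
    return sum(f(n**2 * (x - n)) for n in range(1, N + 1))

# each triangle has area 1/(2 n^2); the partial sums approach pi^2/12
area = sum(0.5 / n**2 for n in range(1, 201))
print(area, math.pi**2 / 12)   # ~0.8200 vs ~0.8225
print(g(7.0), g(7.4))          # 1.0 at the peak, 0.0 between peaks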
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the eigenvectors of 4 times the identity matrix? What are the eigenvectors of the identity matrix $ I= \left[ {\begin{array}{cc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} } \right] $
Follow the definition. In order to determine the eigenvectors of a matrix, you must first determine the eigenvalues. Substitute one eigenvalue $\lambda$ into the equation $Ax = \lambda x$ or, equivalently, into $(A - \lambda I)x = 0$ and solve for $x$; the resulting nonzero solutions form the set of eigenvectors of $A$ corresponding to the selected eigenvalue. This process is then repeated for each of the remaining eigenvalues. Carrying it out here: the characteristic polynomial of $I$ is $(1-\lambda)^3$, so the only eigenvalue is $\lambda = 1$, and $(I - I)x = 0$ is satisfied by every $x$ — every nonzero vector of $\mathbb{R}^3$ is an eigenvector of $I$. The same reasoning applied to $4I$ (the matrix in your title) gives the single eigenvalue $4$, again with every nonzero vector as an eigenvector.
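A quick numerical illustration (my own addition):

import numpy as np

I = np.eye(3)
w, V = np.linalg.eig(4 * I)
print(w)                               # [4. 4. 4.] -- the only eigenvalue of 4I
x = np.array([2.0, -1.0, 5.0])         # an arbitrary nonzero vector
print(np.allclose(4 * I @ x, 4 * x))   # True: every nonzero x is an eigenvector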
{ "language": "en", "url": "https://math.stackexchange.com/questions/1663915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
how to choose which one holds? let $X$ be any set with the property that for any two metrics $d_1$ and $d_2$ on $X$, the identity map $id:($X$,d_1$)$\to ($X$,d_2)$ is Continuous. which of the following are true? 1) $X$ must be a singleton. 2) $X$ can be any finite set. 3) $X$ cannot be finite. 4) $X$ may be infinite but not uncountable. how to solve this? help me
On a finite set every metric induces the discrete topology: if $X$ is finite and $x \in X$, then $\varepsilon = \min\{d(x,y) : y \neq x\} > 0$, so every singleton $\{x\}$ is open. Consequently a sequence $(x_n)$ that converges to $x$ in a finite metric space is eventually equal to $x$, meaning that eventually $d_1(x_n,x)=0$; but then also $d_2(id(x_n),id(x))=d_2(x_n,x)=0$ eventually, so $id$ is continuous for any pair of metrics. This shows 2 is true, and hence 1 and 3 are false. For 4: no infinite set has the property. On any infinite $X$ choose distinct points $x_0, x_1, x_2, \ldots$ and let $d_1$ be a metric in which $x_n \to x_0$ — for instance, pull back the metric of $\{0\} \cup \{1/n : n \geq 1\} \subset \mathbb{R}$ onto these points and set all other distances equal to $1$ (the triangle inequality holds because the pulled-back distances never exceed $1$). Take $d_2$ to be the discrete metric. Then $x_n \to x_0$ in $d_1$ but $d_2(x_n,x_0)=1$ for all $n$, so $id:(X,d_1)\to(X,d_2)$ is not continuous. Hence only 2 holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Make $2^8 + 2^{11} + 2^n$ a perfect square Can someone help me with this exercise? I tried to do it, but it was very hard to solve. Find the value of $n$ that makes $2^8 + 2^{11} + 2^n$ a perfect square — that is, a number like $4=2^2$.
Here is a very late answer since I just saw the problem: By brute force, we may check the small cases and find that $n=12$ is the smallest integer such that $2^8+2^{11}+2^n$ is a perfect square. We claim that it is in fact the only one. For $n \ge 12$ write $2^8+2^{11}+2^n=2^8(1+2^3+2^k)=2^8(9+2^k)$ with $k = n-8 \ge 4$ (the exponents $k \le 3$, i.e. $n \le 11$, are excluded by the brute-force check). Since $2^8$ is itself a square, we only need to find all integers $k \ge 4$ such that $9+2^k$ is a perfect square. Let $m^2=9+2^k$ with $m \in \mathbb{Z^+}$; then $2^k=(m-3)(m+3)$. Clearly $m$ must be odd; otherwise both $m-3$ and $m+3$ would be odd and their product could not be a power of $2$. Hence $2\mid m-3$ and $2 \mid m+3$. But $4$ cannot divide both $m-3$ and $m+3$; otherwise $4 \mid (m+3)+(m-3)=2m$, forcing $2 \mid m$, a contradiction since $m$ is odd. So one of the two even factors is not divisible by $4$, and since the product is a power of $2$, that factor must equal $2$. As $m+3=2$ would give $m=-1$, we must have $m-3=2$ and $m+3=2^{k-1}$. The former gives $m=5$, whence $m+3=8=2^{k-1}$, i.e. $k=4$, and $n=k+8=12$ is the only solution, which is precisely what we claimed.
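The brute-force step is itself a few lines of Python (my own sketch); it confirms that $n=12$ is the only exponent in a generous range:

import math

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

print([n for n in range(10_000) if is_square(2**8 + 2**11 + 2**n)])   # [12]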
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 4 }
Determine every vector field such that its field lines are contour lines to $g(x, y) = x^2 + 4y^2$ Is it possible to determine every vector field such that its field lines are contour lines to $g(x, y) = x^2 + 4y^2$? If so, how?
At each point ${\bf z}=(x,y)$ the gradient $\nabla g(x,y)=(2x,8y)$ is orthogonal to the contour line of $g$ through ${\bf z}$. Turn this gradient by ${\pi\over2}$ counterclockwise to obtain the vector field $${\bf v_*}(x,y)=(-8y,2x)\ .$$ We can still multiply this ${\bf v}_*$ by an arbitrary nonzero function $\rho(x,y)$. In this way the most general answer to your question becomes $${\bf v}(x,y)=\rho(x,y)(-8y,2x)\ .$$ Of course ${\bf v}({\bf 0})={\bf 0}$ since there is no actual contour line going through ${\bf 0}$.
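The tangency is easy to confirm symbolically (my own addition): ${\bf v_*}$ is orthogonal to $\nabla g$ at every point, so $g$ is constant along the field lines of ${\bf v}$.

import sympy as sp

x, y = sp.symbols('x y')
g = x**2 + 4*y**2
grad = sp.Matrix([g.diff(x), g.diff(y)])   # (2x, 8y)
v = sp.Matrix([-8*y, 2*x])                 # gradient rotated by pi/2
print(sp.simplify(v.dot(grad)))            # 0 -- v is tangent to every level set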
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the integral of square of a function (with parameter) positive? Suppose we have a function $f:\mathbb{R}\times\mathbb{R}_{>0}\mapsto\mathbb{R}$, with $f(x;p)\not\equiv 0$, where $p$ is some parameter. Supposing the integral is finite, I know that $$\int_{\mathbb{R}}f(x;p)^2dx\equiv f_1(p)>0.$$ Does the inequality persist if I integrate with respect to $p$, i.e. is it true that for $\nu>0$, $$\int_0^{\nu}f_1(p)dp \equiv g(\nu)> 0\quad ?$$ What about if I integrate again? Is it true that for $y>0$, $$\int_0^{y}g(\nu)d\nu =\int_0^y\int_0^{\nu}f_1(p)dpd\nu\equiv h(y)> 0\quad ?$$ Is there a theorem about this, or maybe this isn't true at all.
It is definitely true — this is the monotonicity property of integrals. If you have two functions $f$ and $g$ such that $f\ge g$, then $$\int f d\mu\ge\int g d\mu.$$ Moreover, if $f>g$ on a set of positive measure (and the integrals exist), then the inequality above is strict. In your setting $f_1(p)>0$ for every $p$, so $g(\nu)=\int_0^{\nu}f_1(p)\,dp>0$ whenever $\nu>0$; applying the same argument once more gives $h(y)>0$ for $y>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Imposing non-negativity constraint on a linear regression function Suppose I am interested in estimating the linear regression model $$ Y_i = g(X_i)^T\beta + \epsilon_i $$ where $Y_i$ is a scalar outcome of interest, $X_i$ is a scalar covariate with support on the unit interval, $g(\cdot)$ is a $K$-dimensional vector of known functions that are not perfectly colinear, $\beta$ is a $K$-dimensional vector of parameters to be estimated, and $E(g(X_i) \epsilon_i) = 0$. Suppose I know that $\Pr(Y \geq 0) = 1$, so I'd like to impose the condition $$ \beta^T g(x) \geq 0 \qquad \text{for all $x \in [0,1]$} $$ when estimating $\beta$. How would I go about doing this?
Assuming $g$ is polynomial, one way to attack the functional constraint on non-negativity is to employ a sum-of-squares approach. A sum-of-squares approach (conservatively) replaces a non-negativity constraint $g(x)\geq 0$ with the condition that the polynomial is a sum of squares. When non-negativity is only required on a region $h(x) \geq 0$, this can be generalized via the Positivstellensatz to the search for a non-negative polynomial $s(x)$ such that $g(x)\geq s(x)h(x)$, once again replacing the non-negativity of $s(x)$ with a sum-of-squares condition. Sum-of-squares decompositions and methods based on them form an active area with publicly available software. The code below implements this in the MATLAB toolbox YALMIP (disclaimer, developed by me)

% Define some data for a noisy quartic
xdata = 0:0.05:1;
Y = 5*(xdata-.5).^4 + .02*randn(1,length(xdata));
clf;plot(xdata,Y);grid

% Create monomial data
X = [xdata.^0;xdata;xdata.^2;xdata.^3;xdata.^4];

% Define decision variables and create residuals
beta = sdpvar(5,1);
e = Y - beta'*X;

% Least squares solution
optimize([],e*e');
hold on;plot(xdata,value(beta)'*X);

% Parameterize a quartic polynomial
sdpvar x
g = beta'*[1;x;x^2;x^3;x^4];

% g should be non-negative when (x-0.5)^2 <= 0.25, i.e. on [0,1];
% define multiplier for the Positivstellensatz
[s,coeffs] = polynomial(x,2);
solvesos([sos(g - s*(.25-(x-.5)^2)),sos(s)],e*e',[],[beta;coeffs])
hold on;plot(xdata,value(beta)'*X);

You can read more here https://yalmip.github.io/tutorial/sumofsquaresprogramming/
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove these functions $F_n$ are bounded by a function $G$ Hi, I know that to apply the Dominated Convergence Theorem to these functions, they must be bounded by another function $G$ which does not depend on $n$, for all $x$. However I'm really struggling to find a function. In the first case, $F_n$ behaves differently depending on whether $x$ is in $(0,1)$ or larger than $1$, but I can't analyse the $(0,1)$ case, the maximum value keeps changing depending on $n$ and $a$. In the second, I'm completely stuck as to how to approach it. Any help would be greatly appreciated! Thanks
For the second, note that $$ \lim_{n\to\infty}\int_0^{n^2}xe^{-x^2}dx=\lim_{n\to\infty}\frac{1-e^{-n^4}}{2}=\frac1{2} $$ Since the Taylor series of $\sin x$ is alternating, for $x>0$ we have $$ |\sin x-x|\leqslant \frac{x^3}{6} $$ Thus \begin{align} \left|\int_0^{n^2}e^{-x^2}n\sin{\frac{x}{n}}dx-\int_0^{n^2}xe^{-x^2}dx\right|&=\left|\int_0^{n^2}\left(e^{-x^2}n\sin{\frac{x}{n}}-xe^{-x^2}\right)dx\right| \\ &\leqslant\int_0^{n^2}\left|\sin{\frac{x}{n}}-\frac{x}{n}\right|ne^{-x^2}dx \\ &\leqslant\int_0^{n^2}\frac{x^3}{6n^2}e^{-x^2}dx \\ &=\frac{1}{12n^2}\left(-x^2e^{-x^2}-e^{-x^2}\right)\bigg |_0^{n^2} \\ &=\frac{1}{12n^2}(1-e^{-n^4}-n^4e^{-n^4}) \\ &\to0\quad\text{as}\quad n\to \infty \end{align} So $$ \lim_{n\to\infty}\left|\int_0^{n^2}e^{-x^2}n\sin{\frac{x}{n}}dx-\int_0^{n^2}xe^{-x^2}dx\right|=0 $$ And so $$ \lim_{n\to\infty}\int_0^{n^2}e^{-x^2}n\sin{\frac{x}{n}}dx=\lim_{n\to\infty}\int_0^{n^2}xe^{-x^2}dx=\frac1{2} $$
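The limit is also easy to watch numerically (my own addition, assuming SciPy):

import numpy as np
from scipy.integrate import quad

for n in [1, 2, 5, 10]:
    val, _ = quad(lambda x: np.exp(-x**2) * n * np.sin(x / n), 0, n**2)
    print(n, val)   # the values creep up towards 0.5 as n grows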
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find out if the transformation is linear, if so, determine whether it is an isomorphism. Hi, I know how to show that this is a linear transformation. But I am not sure how to figure out if it isomorphic. I tried performing the transformation with M = (a, b, c, d). When I multiply this out I get a matrix (-2a, -3b, -c, -2d). Is this isomorphic because the basis of this space is 4 dimensional: (-2, 0 , 0 , 0) ; (0, -3, 0 , 0) ; (0, 0 , -1, 0) ; ( 0, 0 , 0 , -2)? Meaning, if the matrix I got when multiplying this out was something like (a, 0 , b, c), it would signify a kernel of 1, and thus would not be isomorphic? Thanks!
Hint: take a generic $2\times 2$ matrix $$M= \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} $$ and show that $T(M)=0$ forces $M=0$, i.e. that $\ker T=\{0\}$. Edit: to find $\ker T$, set $T(M)$ equal to the zero matrix and solve. If this yields $a=b=c=d=0$, then $M$ is the zero matrix and the kernel is trivial; and since $T$ is a linear map between spaces of the same finite dimension ($4$), a trivial kernel forces $T$ to be surjective as well, hence an isomorphism.
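Taking your computed action at face value (the transformation itself is only shown in an image), $T$ acts on the coordinates $(a,b,c,d)$ as the diagonal matrix $\operatorname{diag}(-2,-3,-1,-2)$, and a short numpy check (my own addition) confirms invertibility:

import numpy as np

# T in the basis E11, E12, E21, E22, assuming T(a,b,c,d) = (-2a, -3b, -c, -2d)
T = np.diag([-2.0, -3.0, -1.0, -2.0])
print(np.linalg.det(T))   # 12.0, nonzero => trivial kernel => isomorphism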
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Functional calculus for unitization of an algebra? I have been stuck on this problem for a week now: "Let $A$ be a Banach algebra without identity, let $a\in A$, and let $f$ be holomorphic on a neighborhood of $\sigma(a)$, so that $f(a)\in A^\# $ is defined. Show that $f(a)\in A$ if and only if $f(0) = 0$." Here $A^\#$ is the unitization of $A$. So I tried approaching this question in two ways - approximating $f$ with rationals using Runge's theorem or via a power series about zero. Intuitively, I get why it makes sense, since the power series for $f$ on a small disk about zero has a zero coefficient for $z^0$, and since the homomorphism \begin{align*} O(D)&\to A\\ g &\mapsto g(a) \end{align*} preserves polynomials. (Here $D$ is the domain of $f$ and $O(D)$ is the set of holomorphic functions on $D$). The issue I am facing is of course that the power series only converges on a small disk and not all of $D$. Any help is much appreciated! (I don't need a full solution - a hint would definitely help.) Thanks!
Hint 1: If $f(0) \ne 0$, consider $$ f(0) = \frac{f(0)-f(z)}{z}z+f(z) $$ Hint 2: If $\lambda \in\rho(a)$ in $A^{\sharp}$, then $a(a-\lambda e)^{-1}\in A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1664997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What do the square brackets mean in $[5-(6-7(2-6)+2)]+4$? While watching a youtube video about a Simpsons math episode at 1:27 there's a puzzle that includes square brackets. $$[5-(6-7(2-6)+2)]+4$$ Apparently the answer is $-27$ which I can't figure out how to arrive at that answer. I've Googled that the square brackets mean intervals. But then I don't understand the context of this question as surely an interval should be two numbers separated by a comma? How do you arrive at $-27$?
The square brackets are not interval notation here — they are just ordinary grouping symbols, used in place of a third layer of parentheses for readability. Evaluate the innermost parentheses first and work outward: $$[5-(6-7(-4)+2)]+4 \implies [5-(6+28+2)] + 4 \implies [5-36] +4 \implies -27$$
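Since the brackets are plain grouping, any interpreter reproduces the value (a trivial check in Python):

print(5 - (6 - 7 * (2 - 6) + 2) + 4)   # -27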
{ "language": "en", "url": "https://math.stackexchange.com/questions/1665212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 5 }