Sum based on the Stolz theorem The sum is as follows: $$ \lim_{n\to \infty} \frac{1} {\sqrt n} \left( \frac {1} {\sqrt n}+ \frac {1} {\sqrt {n+1}} +...+ \frac{1}{\sqrt{2n}} \right) $$ I solved it as follows using the Stolz theorem: $$ \lim_{n\to \infty} \frac{x_n-x_{n-1}} {y_n-y_{n-1} } = \lim_{n\to \infty} \frac {\frac {1} {\sqrt{2n}} - \frac {1} {\sqrt{2n-1}}} {\sqrt n - \sqrt{n-1}}$$ where $$ {\{x_n\} = \left( \frac {1} {\sqrt n}+ \frac {1} {\sqrt {n+1}} +...+ \frac{1}{\sqrt{2n}} \right)} $$ and $$ {\{y_n\} = \sqrt n} $$ I rationalized the denominator and got the answer as zero. But the solution in the book is given as $$ \lim_{n\to \infty} \frac{x_n-x_{n-1}} {y_n-y_{n-1} } = \lim_{n\to \infty} \frac {\frac {1} {\sqrt{2n}} + \frac {1} {\sqrt{2n-1}}- \frac{1} {\sqrt{n-1}}} {\sqrt n - \sqrt{n-1}}$$ $$ = 2(\sqrt2-1) $$ How did they get the second step, that is, $$ \lim_{n\to \infty} \frac {\frac {1} {\sqrt{2n}} + \frac {1} {\sqrt{2n-1}}- \frac{1} {\sqrt{n-1}}} {\sqrt n - \sqrt{n-1}}$$ Please explain. Thanks in advance.
The second step follows from $$ x_n - x_{n-1} = \frac{1}{\sqrt{2n}} + \frac{1}{\sqrt{2n-1}} - \frac{1}{\sqrt{n-1}}, $$ which you see by carefully noting which terms are present in $x_n$ but not in $x_{n-1}$, and vice versa. Indeed, $x_n$ runs from $\frac{1}{\sqrt n}$ to $\frac{1}{\sqrt{2n}}$, while $x_{n-1}$ runs from $\frac{1}{\sqrt{n-1}}$ to $\frac{1}{\sqrt{2n-2}}$. When you go from $x_{n-1}$ to $x_n$ you gain two terms 'at the end', and lose one 'at the beginning.'
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Combinations for 6-card pattern from 4 decks of cards I had programmatically determined all the desired combinations of the 6-card patterns from 4 decks of cards, and now I want to apply combinatorics methods to verify my computer-generated results. From my computer simulation I've determined the following patterns and combinations (it should be emphasized that I'm NOT interested in flush and straight patterns, so it is not necessary to remove any existing flush and/or straight formations from the 6-card patterns): 6-Card Patterns and --> computer generated combinations * *(a) Six different ranks (example: K, J, Q, 1, 2, 3) --> 28,789,702,656 *(b) Six of a kind (ex: 1, 1, 1, 1, 1, 1) --> 104,104 *(c) Five of a kind (ex: 1, 1, 1, 1, 1, 4) --> 10,902,528 *(d) Four of a kind (ex: 1, 1, 1, 1, 3, 4) --> 399,759,360 *(e) Four of a kind & a pair (ex: 1, 1, 1, 1, 4, 4) --> 34,070,400 *(f) Three of a kind (ex: 1, 1, 1, 2, 3, 4) --> 6,560,153,600 *(g) Three of a kind & a pair (ex: 1, 1, 1, 2, 2, 3) --> 1,845,043,200 *(h) Three of a kind & three of a kind (ex: 1, 1, 1, 2, 2, 2) --> 24,460,800 *(i) Three-pair (ex: 5, 5, 9, 9, 3, 3) --> 494,208,000 *(j) Two-pair (ex: 5, 5, 9, 9, 3, 6) --> 15,814,656,000 *(k) One-pair (ex: 5, 5, 1, 2, 3, 4) --> 50,606,899,200 *Total combinations COMBIN(52*4,6) --> 104,579,959,848 Edit: With inputs from Lulu and John, I then realized there was an error in my program; the simulation results were subsequently updated, and I was able to verify my simulation results. Thanks to everyone for your inputs.
Five of a kind: Choose which rank is the five of a kind $(13)$, and then which particular five cards of that rank you get ($_{16}C_5$). Then choose the sixth different card from what's left ($192$). So, $13 \cdot 4368 \cdot 192 = 10902528$ combinations. Looks like you were successful for that one, in my book. Let's try another. How about three of a kind and a pair: Choose which rank is the three of a kind $(13)$ and the particular three cards of that rank ($_{16}C_3$). Choose the rank of the pair $(12)$, and the two particular cards ($_{16}C_2$). Choose the sixth card from the other ranks $(176)$. So, $13 \cdot 560 \cdot 12 \cdot 120 \cdot 176 = 1845043200$ combinations. Well, that one doesn't look quite right. Anyway, I'd try to calculate the answer by hand (which is fairly straightforward) then compare with your computer simulation for each one. Side note: I'm not clear as to whether you are counting every seven of diamonds (say) as distinct, or the same. (Example, you mixed a Steelers deck, a Patriots deck, a Cowboys deck, and a Broncos deck, vs mixing four identical standard Bicycle decks.) Since I got the same answer as you with the first one, I assumed you're treating them as different, since that's what I did. But I could have done multiple counts in the second one, since my answer is a lot bigger than yours.
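As a cross-check of the hand counts above (my own sketch, not part of the original post), the same numbers fall out of a few lines of Python:

```python
from math import comb

# 4 standard decks: 13 ranks, 16 distinct cards per rank, 208 cards in total
five_of_a_kind = 13 * comb(16, 5) * (208 - 16)                     # 10,902,528
trips_and_pair = 13 * comb(16, 3) * 12 * comb(16, 2) * (208 - 32)  # 1,845,043,200
total = comb(208, 6)                                               # 104,579,959,848
print(five_of_a_kind, trips_and_pair, total)
```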
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that such a set of Hahn-Banach extensions is infinite Let $M$ be the subspace of $C([-1,1])$ consisting of all even functions. Let $\psi:M\to \mathbb{R}$ be the linear functional given by $\psi(f)=\int_{-1}^{1}f(t)dt$. Show that there are infinitely many bounded linear extensions $F$ of $\psi$ to $C([-1,1])$ such that $\|F\|=\|\psi\|$. Hint: First show that $P_h$ is an extension of $\psi$ if $h(t)+h(-t)=2$ for all $t\in [-1,1]$. It is related to the posts (1) and (2). I tried to use $h(t)=|t+1|$, because then it satisfies $h(t)+h(-t)=2$ on $[-1,1]$. But I do not know how to answer this problem in general. If the post needs some more information, let me know.
First calculate $\|\psi\|$. We have: $$|\psi(f)| = \left|\int_{-1}^1f(t)\,dt\right| \le \int_{-1}^1 |f(t)|\,dt \le \int_{-1}^1 \|f\|_\infty\,dt =2\|f\|_\infty$$ so $\|\psi\| \le 2$. For the even function $f \equiv 1$ we have $$\|\psi\| \ge \frac{|\psi(f)|}{\|f\|_\infty} = 2 $$ so we conclude $\|\psi\| = 2$. Consider the bounded linear functionals $F_1, F_2 : C[-1,1] \to \mathbb{R}$ given by: $$F_1(f) = \int_{-1}^1 f(t)\,dt$$ $$F_2(f) = 2\int_{0}^1 f(t)\,dt$$ $F_1$ and $F_2$ are both extensions of $\psi$. Indeed, for an even function $f$ we have: $$F_2(f) = 2\int_{0}^1 f(t)\,dt = \int_{-1}^1 f(t)\,dt = \psi(f) = F_1(f)$$ And also we have $\|F_1\| = \|F_2\| = 2 = \|\psi\|$, which is shown similarly as for $\psi$: $$|F_1(f)| = \left|\int_{-1}^1f(t)\,dt\right| \le \int_{-1}^1 |f(t)|\,dt \le \int_{-1}^1 \|f\|_\infty\,dt = 2\|f\|_\infty$$ $$|F_2(f)| = \left|2\int_{0}^1f(t)\,dt\right| \le 2\int_{0}^1 |f(t)|\,dt \le 2\int_{0}^1 \|f\|_\infty\,dt =2\|f\|_\infty$$ Therefore, we get $\|F_1\|, \|F_2\| \le 2$. Since they extend $\psi$, we also have the reverse inequality. So, $F_1$ and $F_2$ are both Hahn-Banach extensions of $\psi$. Note that $F_1 \neq F_2$: for the odd function $f(t)=t$ we have $F_1(f)=0$ but $F_2(f)=1$. Now, it is a known result that if there are two different Hahn-Banach extensions of a functional, then there are infinitely many Hahn-Banach extensions. Namely, for every $\alpha \in [0,1]$ the functional $\alpha F_1 + (1 - \alpha) F_2$ is also a Hahn-Banach extension of $\psi$, and distinct $\alpha$ give distinct functionals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Limit of $\frac{x^5-1}{x^2-1}$ I need to determine if the following limit exists. $$\lim_{x\to 1}\frac{x^5-1}{x^2-1}$$ I've already proved using L'Hôpital's rule that this limit exists and equals $\frac{5}{2}$, but unfortunately I'm not allowed to use anything more than basic analysis for functions, i.e. basic definitions of convergence (at most continuity).
Using Horner's method: $$x^5 -1 =(x-1)(x^4 +x^3 +x^2 +x+1)$$ $$x^2 -1=(x-1)(x+1)$$ Cancelling the common factor $x-1$ (valid for $x\neq1$) leaves $\frac{x^4+x^3+x^2+x+1}{x+1}$, which is continuous at $x=1$, so the limit equals $\frac{1+1+1+1+1}{1+1}=\frac52$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2567028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Looking for a proof of an interesting identity Working on a problem I have encountered an interesting identity: $$ \sum_{k=0}^\infty \left(\frac{x}{2}\right)^{n+2k}\binom{n+2k}{k} =\frac{1}{\sqrt{1-x^2}}\left(\frac{1-\sqrt{1-x^2}}{x}\right)^n, $$ where $n$ is a non-negative integer and $x$ is a real number with absolute value less than 1 (probably a similar expression is valid for arbitrary complex $z$ with $|z|<1$). Is there any simple proof of this identity?
Using $$\binom{n}{k}=\frac{1}{2 \pi i}\oint_C\frac{(1+z)^{n}}{z^{k+1}}dz$$ we get (the integration contour is the unit circle) $$ 2\pi iS_n=\oint dz \sum_{k=0}^{\infty}\frac{(1+z)^{n+2k}x^{n+2k}}{z^{k+1}2^{n+2k}}=\oint dz \frac{(1+z)^n x^n}{z2^n}\sum_{k=0}^{\infty}\frac{(1+z)^{2k}x^{2k}}{2^{2k}z^k}=\\ 4\frac{x^n}{2^n}\oint dz \underbrace{\frac{(1+z)^n}{4z-(1+z)^2x^2}}_{f(z)} $$ For $|x|<1$ we have just one pole of $f(z)$ inside the unit circle, namely $z_0(x)=\frac2{x^2}-\frac{2\sqrt{1-x^2}}{x^2}-1$, so $$ S_n=4\frac{x^n}{2^n}\text{res}(f(z),z=z_0(x))=4\frac{x^n}{2^n}\left[ \frac{1}{4 \sqrt{1-x^2}}\left(2\frac{1-\sqrt{1-x^2}}{ x^2}\right)^n\right] $$ or $$ S_n=\frac{1}{\sqrt{1-x^2}}\left(\frac{1-\sqrt{1-x^2}}{ x}\right)^n $$
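As a quick numerical sanity check of the identity (my own sketch; the truncation length is chosen arbitrarily):

```python
import math

def lhs(n, x, terms=200):
    # truncated series sum_{k>=0} (x/2)^(n+2k) * C(n+2k, k)
    return sum((x / 2) ** (n + 2 * k) * math.comb(n + 2 * k, k) for k in range(terms))

def rhs(n, x):
    s = math.sqrt(1 - x * x)
    return ((1 - s) / x) ** n / s

for n in (0, 1, 3):
    print(n, lhs(n, 0.4), rhs(n, 0.4))  # the two columns agree
```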
{ "language": "en", "url": "https://math.stackexchange.com/questions/2567158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Some version of Hahn-Banach theorem I understand the first part of the theorem. I can prove the part about $\delta$ if the inf is actually achieved. But I wonder whether this is even true.
Consider $Z=Y+\operatorname{span}\{x\}$. This is obviously a subspace. Now define $f:Z\to \mathbb K$ by $$f(z)=\lambda\delta,$$ where $z=y+\lambda x$, remembering the definition of $Z$. I leave it to you to show that $f$ is bounded and linear, $\|f\|=1$, and $f(x)=\delta$, but feel free to ask for hints if you get stuck. We can then use the standard Hahn-Banach theorem for normed spaces to extend $f$ to a map $\tilde f\in X^*$ satisfying the requirements. To show that $\|f\|=1$, we first consider some $z=y+\lambda x$ with $\lambda\neq 0$. Then $$|f(z)|=|\lambda|\inf_{y'\in Y}\|x-y'\| \leq |\lambda|\,\|x-(-\lambda^{-1}y)\|=\|z\|.$$ (For $\lambda=0$ we have $f(z)=0\le\|z\|$ trivially.) Hence $\|f\|\leq 1$. For the reverse inequality, remember that by definition of the infimum there exists a sequence $(y_n)\subset Y$ such that $\|y_n-x\|\to \delta$. Set $z_n=y_n-x$. Then $f(z_n)=-\delta$ for all $n\in\mathbb N$. Use these facts along with the definition of the norm of a functional to derive the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2567260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Maximum likelihood estimator for uniform distribution $U(-\theta, 0)$ Consider $X_1,X_2,...,X_n$ i.i.d $U(-\theta,0)$. I want to find the maximum likelihood estimator of $\theta$. I know that $f(x,\theta)=\frac{1}{\theta}$ for $-\theta < x < 0$ and that $L_n(\theta, x)= \frac{1}{\theta^n}$. If we were looking at $U(0,\theta)$, then the MLE of $\theta$ would be $x_{(n)}$ because $L_n(\theta, x)= \frac{1}{\theta^n}$ is decreasing from $0 < x < \theta$ and would thus be maximized at the max $x_i$, which is $x_{(n)}$. For my case, since $L_n(\theta, x)= \frac{1}{\theta^n}$ is an increasing function for $-\theta < x < 0$, then $L_n(\theta, x)= \frac{1}{\theta^n}$ will be maximized at the max $x_i$, and thus the MLE of $\theta$ will be $x_{(n)}$ as well. I think this is correct, but it seems very silly to me that for both cases you can just say that it will be maximized at the max $x_i$. Could someone better explain this to me?
Note that the likelihood function is a function of $\theta$. In particular, $$L_n(\theta;\vec X) = \left \{ \begin{matrix}\frac{1}{\theta^n} & \text{if $\theta \ge -X_i$ for $i=1,2,\cdots, n$,} \\ 0 & \text{otherwise.}\end{matrix}\right.$$ Here $\theta \ge -X_i$ comes from $-\theta \le X_i$. Now, $L_n(\theta,\vec X)$ is a decreasing function of $\theta$. Consequently, $L_n(\theta;\vec x)$ attains its maximum when $\theta = \max\{-X_i\}=-\min\{X_i\}=-X_{(1)}$.
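As a small illustration (my own sketch, not from the original answer), a simulation shows $-X_{(1)}$ tracking the true $\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.5
x = rng.uniform(-theta, 0, size=10_000)   # i.i.d. U(-theta, 0)
mle = -x.min()                            # the MLE -X_(1)
print(mle)                                # close to (slightly below) 2.5
```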
{ "language": "en", "url": "https://math.stackexchange.com/questions/2567387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Product between a column vector and a row vector I know that the matrix product is defined when the number of columns of the first matrix is equal to the number of rows of the second matrix. Why can't I do the product between a column vector and a row vector? For example: $$\begin{bmatrix}1 \\ 2 \\ 3 \end{bmatrix} \, \begin{bmatrix}1 & 2 & 3\end{bmatrix}$$ Thank you so much.
I know that the matrix product is defined when the number of columns of the first matrix is equal to the number of rows of the second matrix. I wouldn't say "is correct", it is only defined in this case. You can invent your own product or way of multiplication, but the standard product of matrices only works, as you say, when the number of columns of the first matrix matches the number of rows of the second. Multiplying column or row vectors are simply special cases of matrices in general, so that condition still applies. In short: it's a consequence of the (usual) definition of the product of matrices. Why can't I do the product between a column vector and a row vector? For example: $$\begin{bmatrix}1 \\ 2 \\ 3 \end{bmatrix} \, \begin{bmatrix}1 & 2 & 3\end{bmatrix}$$ Your example, however, satisfies the condition you mention: the first matrix has $1$ column and the second one has $1$ row, so their product is defined. Note that as a result, you expect a $3\times 3$-matrix. In general, multiplying an $m \times n$-matrix with an $n \times p$-matrix gives you an $m \times p$-matrix: $$(\color{blue}{m} \times \color{red}{n}) \cdot (\color{red}{n} \times \color{purple}{p}) \to (\color{blue}{m} \times \color{purple}{p})$$
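For completeness (my own addition), carrying out the multiplication in the example gives the expected $3\times 3$ matrix: $$\begin{bmatrix}1 \\ 2 \\ 3 \end{bmatrix} \begin{bmatrix}1 & 2 & 3\end{bmatrix} = \begin{bmatrix}1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9\end{bmatrix}$$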
{ "language": "en", "url": "https://math.stackexchange.com/questions/2567679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
proof of roots of characteristic polynomial are eigenvalues How do I prove the 2 directions of this statement?
A root $r$ of the characteristic polynomial satisfies $\det(M - r\,Id)=0$, so $M-r\,Id$ does not have full rank and there exists a nontrivial vector $\tilde{v}$ with $0 =(M-r\,Id)\tilde{v}= M\tilde{v}-r\tilde{v}$, so $M\tilde{v}= r\tilde{v}$ and $r$ is by definition an eigenvalue. Walking backwards along the same arguments provides the proof in the other direction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2567988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Showing two topological spaces are not homeomorphic Having infinitely many open sets in a topology is a topological property #1: What are examples of two topological spaces, preferably simple ones, which this statement shows are not homeomorphic? But, more importantly, why does it show this? I'm having trouble understanding topological properties in connection to homeomorphisms. #2: What about two spaces where this statement doesn't help decide whether they are homeomorphic or not?
If $X$ and $Y$ are topological spaces with respective topologies $\tau_X$ and $\tau_Y$, and if $f : X \to Y$ is a homeomorphism, then $f$ induces a bijection $\tau_X \to \tau_Y$: each $U \in \tau_X$ is mapped to its image $f(U) \in \tau_Y$ defined as usual by the formula $$f(U)=\{f(x) \,|\, x \in U\} $$ The proof that this map $\tau_X \to \tau_Y$ is a bijection is simple, and uses only the definition of homeomorphism. As a consequence, if $X$ and $Y$ are homeomorphic then the two sets $\tau_X$ and $\tau_Y$ have equal cardinalities. So, for example, it cannot happen that one is finite and the other is infinite. It also cannot happen that one is countably infinite and the other is uncountably infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Proving restriction of function is continuous The problem is: If $f : (X, T) \to (Y, S) $ is continuous and $A \subseteq X$, then $f|_A : (A, T_A) \to(Y, S) $ is continuous. How would I get started on this proof? I understand the restriction of the function $f$ is $f|_A(x) = f(x) $ for all $x\in A $. I just don't see how to use this to show that it is continuous.
Since $f$ is continuous, $f^{-1}(U)$ is open in $X$ for any open $U \subset Y$. But then $f^{-1}(U) \cap A$ is open in $A$ with the subspace topology, and $f^{-1}(U) \cap A$ is precisely$\dots$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the spectral decomposition of $A$ $$ A= \begin{pmatrix} -3 & 4\\ 4 & 3 \end{pmatrix} $$ So I am assuming that I must find the eigenvalues and eigenvectors of this matrix first, and that is exactly what I did. The eigenvalues are $5$ and $-5$, and the eigenvectors are $(2,1)^T$ and $(1,-2)^T$. Now the spectral decomposition of $A$ is equal to $(Q^{-1})^\ast$ (diagonal matrix with corresponding eigenvalues) $\cdot\, Q$, where $Q$ is given by [evector1/||evector1|| , evector2/||evector2||], and for $Q$ I got the matrix $$ Q= \begin{pmatrix} 2/\sqrt{5} &1/\sqrt{5} \\ 1/\sqrt{5} & -2/\sqrt{5} \end{pmatrix} $$ The inverse of $Q$ is the matrix $$ \begin{pmatrix} 2 \sqrt{5}/5 & \sqrt{5}/5 \\ \sqrt{5}/5 & -2 \sqrt{5}/5 \end{pmatrix} $$ and the diagonal matrix with corresponding eigenvalues is $$ \begin{pmatrix} 5 & 0\\ 0 & -5 \end{pmatrix} $$ So now I have found the spectral decomposition of $A$, but I really need someone to check my work. Did I take the proper steps to get the right answer, or did I make a mistake somewhere?
The needed computation is $$\mathsf{A} = \mathsf{Q\Lambda}\mathsf{Q}^{-1}$$ Where $\Lambda$ is the eigenvalue matrix. And your eigenvalues are correct. Hence you have to compute $$\mathsf{AQ} = \mathsf{Q\Lambda}$$ Which gives you the solutions $$c = 2a ~~~~~~~~~~~ d = -\frac{b}{2}$$ You can then choose easy values like $a = b = 1$ to get $$Q = \begin{pmatrix} 1 & 1 \\ 2 & -\frac{1}{2} \end{pmatrix}$$ And easily $$\mathsf{Q}^{-1} = \frac{1}{\text{det}\ \mathsf{Q}} \begin{pmatrix} -\frac{1}{2} & -1 \\ -2 & 1 \end{pmatrix}$$ Which you can compute alone.
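A quick numerical cross-check (my own sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[-3.0, 4.0], [4.0, 3.0]])
vals, vecs = np.linalg.eigh(A)        # A is symmetric, so eigh applies
print(vals)                           # [-5.  5.]
# reconstruct A from its spectral decomposition Q diag(vals) Q^T
print(vecs @ np.diag(vals) @ vecs.T)  # recovers A
```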
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Almost complex structure on $\mathbb{S}^{3} \times \mathbb{S}^{5}$ I would like to check whether the product space $X = \mathbb{S}^{n} \times \mathbb{S}^{m}$ admits an almost complex structure for odd $m,n$. For example, if $m=1$ and $n=3$, then $X = \mathbb{S}^{1} \times \mathbb{S}^{3}$ -- in this case one can construct an almost complex structure as follows: (1) Since $\mathbb{S}^{k}$ is parallelizable for $k = 1, 3, 7$, there exist linearly independent sections $e_{1}$, and $e_{1}', e_{2}', e_{3}'$ respectively. By lifting those vector fields to global sections of $T(\mathbb{S}^{1} \times \mathbb{S}^{3})$ we obtain a parallelization of the tangent bundle of the product. (2) Thus, we calculate the Lie bracket and seek an endomorphism $J: T_{p}{(S^{1} \times S^{3})} \rightarrow T_{p}{(S^{1} \times S^{3})}$ of the tangent space that satisfies $J^{2} = -I$. (3) Moreover, we can easily check the integrability condition in order to establish whether the almost complex structure lifts to a complex one. Is it possible to deal with the general case somehow, i.e. one that is not restricted by the parallelizability property?
Let $n$ and $m$ be odd. As $\chi(S^n) = 0$, the manifold $S^n$ has a nowhere-vanishing vector field, and hence $TS^n \cong E\oplus\varepsilon^1$ for some rank $n - 1$ vector bundle $E$. Now note that \begin{align*} T(S^n\times S^m) &\cong \pi_1^*(TS^n)\oplus\pi_2^*(TS^m)\\ &\cong \pi_1^*(E\oplus\varepsilon^1)\oplus\pi_2^*(TS^m)\\ &\cong \pi_1^*E\oplus\pi_1^*\varepsilon^1\oplus\pi_2^*(TS^m)\\ &\cong \pi_1^*E\oplus\varepsilon^1\oplus\pi_2^*(TS^m)\\ &\cong \pi_1^*E\oplus\pi_2^*\varepsilon^1\oplus\pi_2^*(TS^m)\\ &\cong \pi_1^*E\oplus\pi_2^*(\varepsilon^1\oplus TS^m)\\ &\cong \pi_1^*E\oplus\pi_2^*(\varepsilon^{m+1})\\ &\cong \pi_1^*E\oplus\varepsilon^{m+1}\\ &\cong \pi_1^*E\oplus\varepsilon^2\oplus\varepsilon^{m-1}\\ &\cong \pi_1^*E\oplus\pi_1^*\varepsilon^2\oplus\varepsilon^{m-1}\\ &\cong \pi_1^*(E\oplus\varepsilon^2)\oplus\varepsilon^{m-1}\\ &\cong \pi_1^*(E\oplus\varepsilon^1\oplus\varepsilon^1)\oplus\varepsilon^{m-1}\\ &\cong \pi_1^*(TS^n\oplus\varepsilon^1)\oplus\varepsilon^{m-1}\\ &\cong \pi_1^*(\varepsilon^{n+1})\oplus\varepsilon^{m-1}\\ &\cong \varepsilon^{n+1}\oplus\varepsilon^{m-1}\\ &\cong \varepsilon^{n+m} \end{align*} so $S^n\times S^m$ is parallelisable. More generally, a product of two or more spheres is parallelisable if and only if at least one of them has odd dimension. As $S^n\times S^m$ is a parallelisable manifold of even dimension, it admits almost complex structures. As pointed out in the comments, such manifolds actually admit complex structures. Note however that there are parallelisable manifolds of even dimension which do not admit complex structures, for example $(S^1\times S^3)\#(S^1\times S^3)\#(S^2\times S^2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Probability: 2 independent events I am not sure if I answered this question right: Suppose that A and B are 2 independent events such that the probability that neither occurs is 0.1 and that the probability of B is 0.2. Find the probability of A. Now I know that 2 events are independent if their intersection = $P(A) * P(B)$. The problem tells me the probability that neither occurs is $0.1$. The probability that B doesn't occur is $1 - 0.2 = 0.8$. So if I multiply the probability that A doesn't occur by the probability that B doesn't occur, it should give me: $P(A') * P (B') = 0.1 \implies P(A') = \frac{0.1}{0.8} = 0.125$. Now $1 - P (A')$ should equal $P(A)$, which is $1 - 0.125 = 0.875$. Is that correct?
You already correctly noted the definition of independent events: $P(A\cap B)=P(A)P(B)$, then note that $A,B$ independent implies $A,B^c$ independent and $A^c,B^c$ independent, etc... We are told $P(A^c\cap B^c)=0.1$ and $P(B)=0.2$. So, $$0.1=P(A^c\cap B^c)=P(A^c)P(B^c)=(1-P(A))(1-P(B))=(1-P(A))(1-0.2)$$ At this point it is just algebraic manipulation to complete.
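Carrying the last step through explicitly (my own completion): $$0.1 = (1-P(A))(1-0.2) \implies 1-P(A) = \frac{0.1}{0.8} = 0.125 \implies P(A) = 0.875,$$ which confirms the value computed in the question.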
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $A$ be an $n \times n$ matrix. If every non-zero vector $v$ is an eigenvector of $A$, prove that $A$ is a diagonal matrix I'll start with the things I already know. I know that for a vector $v$ to be an eigenvector of $A$, the following must be true: $Av = \lambda v$, and this is true if and only if $(A - \lambda I)v = 0$. I also know that a diagonal matrix is a matrix that has the following form: $$ A = \begin{pmatrix} x_1 & 0 \\ 0 & x_2 \end{pmatrix}$$ ($x_1$ and $x_2$ can be any number and can equal the same number as well). I swear... I think the hardest part about these types of problems is knowing where to start... I think I'm too used to being told where to start by my professors, which is a bad habit of mine... any help will be appreciated.
Hint: For $k\in\mathbb{R}$, $$kI_nv=kv$$ I'm only showing $n=2$; use the same idea for higher $n$. Suppose $v=\binom{a}{b}$ and $A=\left(\begin{matrix}x_1&y_1\\y_2&x_2\end{matrix}\right)$. We have \begin{align} Av&=\lambda v\\ \left(\begin{matrix}x_1&y_1\\y_2&x_2\end{matrix}\right)\binom{a}{b}&=\lambda \binom{a}{b}\\ \binom{ax_1+by_1}{ay_2+bx_2}&=\binom{\lambda a}{\lambda b} \end{align} (Note that $\lambda$ may a priori depend on $v$.) Taking $v=\binom{1}{0}$ and $v=\binom{0}{1}$ forces $y_2=0$ and $y_1=0$; then taking $v=\binom{1}{1}$ forces $x_1=x_2$. Hence, $$x_1=x_2=\lambda$$ and $$y_1=y_2=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How can I calculate $\lim_{x \to 0}\frac {\cos x- \sqrt {\cos 2x}×\sqrt[3] {\cos 3x}}{x^2}$ without L'Hôpital's rule? How can I calculate the following limit without L'Hôpital's rule? $$\lim_{x \to 0}\frac {\cos x- \sqrt {\cos 2x}×\sqrt[3] {\cos 3x}}{x^2}$$ I tried L'Hôpital's rule and I found the result $2$.
Hint: $$\cos x-(\cos2x)^{1/2}(\cos3x)^{1/3}=[1-(\cos2x)^{1/2}(\cos3x)^{1/3}]-(1-\cos x)$$ Now $\lim_{x\to0}\dfrac{1-\cos x}{x^2}=\cdots=\dfrac12$ On rationalization using $a^6-b^6=(a-b)(\cdots),$ $$1-(\cos2x)^{1/2}(\cos3x)^{1/3}=\dfrac{1-\cos^32x\cos^23x}{\sum_{r=0}^5[(\cos2x)^{1/2}(\cos3x)^{1/3}]^r}$$ $\lim_{x\to0}\sum_{r=0}^5[(\cos2x)^{1/2}(\cos3x)^{1/3}]^r=\sum_{r=0}^51=?$ Finally, $1-\cos^32x\cos^23x=1-(1-2\sin^2x)^3(1-\sin^23x)$ $\approx1-(1-2x^2)^3(1-9x^2)=x^2(6+9)+O(x^4)$ as $\lim_{x\to0}\dfrac{\sin mx}{mx}=1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2568920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Is it possible to solve this bitwise equation? I have been googling but I can't reach a conclusion. I have this equation: a = ((b ^ x) >> c) & d | e * ((b ^ x) & f) Would it be possible to solve this bitwise equation, assuming all values are known except x? Thank you in advance.
If anyone is interested in the answer, I finally solved it with the help of brute force. First, I concluded that the result of ((b ^ x) >> c) & d was always relatively low (in a range of 0-50) and most of the time 0 or 1, so I transformed the equation into this: a = y | e * ((b ^ x) & f) and defined the value of y with a loop that increases y progressively from 0. Then I set up another loop that defines the value of x and checked for each value whether the equality is met. Since I was getting a lot of false positives, I decided to check whether the current value of y is correct. Now I know x, so: ((b ^ x) >> c) & d = y If the equality is met I have found a solution; if not, the loop continues until it finds a valid solution. Sometimes I get duplicated solutions. For example, in one case I got two solutions: 7957 and 73493. Curiously, those numbers translated to hexadecimal are: 1F15 and 11F15. And in most cases duplicated solutions follow that rule.
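A minimal sketch of the brute-force search described above (the function name and example values are my own assumptions, not taken from the original post):

```python
def solve(a, b, c, d, e, f, max_x=1 << 20):
    """Brute-force all x with a == ((b ^ x) >> c) & d | e * ((b ^ x) & f)."""
    solutions = []
    for x in range(max_x):
        t = b ^ x
        if (((t >> c) & d) | (e * (t & f))) == a:
            solutions.append(x)
    return solutions

# hypothetical example values, chosen so solutions exist
print(solve(a=30, b=0x1234, c=4, d=0xF, e=2, f=0xFF)[:4])
```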
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Good reference for metric topology I would like a "good" book (not really introductory, not too advanced with good theory and exercises) on metric topology covering the following topics: Metric spaces, open/closed sets, sequences, compactness, completeness, continuous functions and homeomorphisms, connectedness, product spaces, Baire category theorem, completeness of C[0, 1] and Lp spaces, Arzela-Ascoli theorem. It may not be a full book but parts of books or lecture notes are also most welcome.
You could have a look at Intermediate Mathematical Analysis by R.D. Bhatt. It does not cover all the topics that you mentioned, but it provides an excellent treatment (with lots of exercises) of the topics up to and including connectedness in your list.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Relation of convergence in probability and almost sure convergence Let $(Y_n)$ be real-valued random variables such that $Y_n\to c \in \mathbb R$ in distribution. It has been shown that $Y_n \to c$ in probability. I want to prove that $Y_n \to c $ a.s. does not hold! Therefore consider $Y_n \sim Ber_{1/n}\,$, i.e. $\mathbb P(Y_n=0)=1-\frac1n$ and $\mathbb P(Y_n=1)=\frac1n$. Then $\mathbb E(Y_n)=\frac1n$. I will use Markov's inequality twice. $Y_n$ converges in probability to zero: $$\lim_{n\to\infty}\mathbb P(\vert Y_n - 0\vert \ge \epsilon)\le \lim_{n\to\infty}\frac{\mathbb E(Y_n)}{\epsilon}=\lim_{n\to\infty}\frac{\frac1n}{\epsilon}=0.$$ $Y_n$ does not converge a.s. to zero. I will use Borel-Cantelli, i.e. $$\sum_{n=1}^\infty \mathbb P(\vert Y_n-0\vert \ge \epsilon) \le\sum_{n=1}^\infty\frac{\mathbb E(\vert Y_n\vert)}{\epsilon}=\sum_{n=1}^\infty\frac{\frac1n}{\epsilon}=\infty$$ Are we supposed to assume $\epsilon=1$, and is this attempt fine? If yes, Borel-Cantelli provides that $Y_n$ does not converge almost surely to zero. Some comments on this attempt are welcome!
It seems you don't apply Borel-Cantelli correctly. Consider the events {$Y_n=1$} for $n \in \mathbb{N}$. They are independent and $$\sum_{n=1}^\infty \mathbb{P}(Y_n=1) = \sum_{n=1}^\infty \frac{1}{n} = \infty$$ Thus by the second Borel-Cantelli lemma, $Y_n=1$ infinitely often with probability one, so $\limsup_{n \to \infty} Y_n = 1$ almost surely and $Y_n$ does not converge to $0$ a.s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Improper definite integration with complex bounds I am looking to prove the functional equation for the theta function. Source: https://www.youtube.com/watch?v=-GQFljOVZ7I&list=PL32446FDD4DA932C9&index=12 Time about 8.00. We are to integrate: $$\int_{-\infty+\frac{ik}{x}}^{+\infty+\frac{ik}{x}}e^{-\pi xz^2}dz$$ The argument is to change the bounds to $-\infty$ and $+\infty$ using something called the Estimation Lemma or "ML-Inequality". I have absolutely no knowledge of complex analysis and integrals with complex variables. I've read in the comments that one could say that $\infty+k = \infty $ for any finite number $k$, and equivalently for $-\infty$, but as there is an $i$, this argument seems invalid to me. Would anyone give me some deeper explanation of what we have actually done?
This is not an answer. I am just showing that the naïve substitution $u=z-ik/x$ produces the correct answer to this integral with arbitrary real limits $a$ and $b$. Symmetry between the two limits is not leading to error cancellation in this case. $$I=\int_{a+ik/x}^{b+ik/x} e^{-\pi x z^2}\,dz=\frac{\text{erf}\left(\frac{\sqrt{\pi } (b x+\text{ik})}{\sqrt{x}}\right)-\text{erf}\left(\frac{\sqrt{\pi } (a x+\text{ik})}{\sqrt{x}}\right)}{2 \sqrt{x}}$$ Using the substitution $u=z-ik/x$ gives $$I=\int_{a}^{b} e^{-\pi x (u+ik/x)^2}\,du=\frac{\text{erf}\left(\frac{\sqrt{\pi } (b x+\text{ik})}{\sqrt{x}}\right)-\text{erf}\left(\frac{\sqrt{\pi } (a x+\text{ik})}{\sqrt{x}}\right)}{2 \sqrt{x}}$$ where erf is the error function I am just curious as to why in this case this naïve substitution works for all real values of the limits $a$ and $b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof for sum of squares formula (statistics-related) I'm new to the domain of statistics and I'm trying to accumulate as much info as I can right now. I've considered that this question should be asked here as it is related to mathematics. The problem is that from the get-go most statistics books use the sum of squares formula: $SS= \sum{X^2} - \frac{(\sum(X))^2}{n} .$ Where can I find a proof for this formula? I've tried to prove it myself from $SS= \sum{(X-m_x)^2} $, where $m_x$ is the mean of the population $X$, but to no avail.
The sample variance of data $X_1, X_2, \dots, X_n$ is $$S^2 = \frac{\sum_{i=1}^n(X_i - \bar X)^2}{n-1} = \frac{\sum_i(X_i-\bar X)^2}{n-1} = \frac{\text{SS}_x}{n-1},$$ where $$\text{SS}_x = \sum_i(X_i - \bar X)^2 = \sum_i(X_i^2 - 2\bar X X_i + \bar X^2) = \sum_i X_i^2 -2\bar X\sum_i X_i + n\bar X^2\\ = \sum_i X_i^2 - \frac2n\left(\sum_i X_i\right)^2 + \frac1n\left(\sum_iX_i\right)^2 = \sum_i X_i^2 - \frac1n\left(\sum_i X_i\right)^2$$ because $\bar X = \frac1n\sum_iX_i.$ Similarly, $S^2 = \frac{\sum_i X_i^2 - n\bar X^2}{n-1}.$ Notes: The formula is of importance because, in a calculator or computer, one may keep track of three 'memories', for $n$, $\sum_iX_i,$ and $\sum_i X_i^2$ as data are entered one at a time, and then (when all data are present) apply the formula to find $S^2.$ Also, the formula makes it possible to find the combined sample variance $S_c^2$ of two samples (of $x$'s and $y$'s) from $n_x, n_y, \bar X, \bar Y, S_x^2,$ and $S_y^2.$
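As a small illustration of the three-memory idea (my own sketch), here is one way to accumulate the sums online and then apply the shortcut formula:

```python
def running_variance(data):
    """Sample variance via the three running 'memories' n, sum x, sum x^2."""
    n = sx = sxx = 0
    for x in data:
        n += 1
        sx += x
        sxx += x * x
    ss = sxx - sx * sx / n          # SS_x = sum x^2 - (sum x)^2 / n
    return ss / (n - 1)

print(running_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # 4.5714...
```

(In floating point this shortcut can lose precision when the data are large and nearly equal; Welford's algorithm is the numerically stable variant.)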
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Regression for implicit polynomial functions of n-th order How to do regression using polynomials of the form $a_n x^n + a_{n-1} x^{n-1} y + \cdots+ a_0 y^n+ b_{n-1}x^{n-1}+b_{n-2}x^{n-2}y+\cdots=c$, given some data points $(x_i,y_i)$ (the number of data points is much greater than $2n+2$)? The sources on the internet only seem to suggest the method for polynomial functions of the form $y=f(x)$.
Just do a linear regression like you normally would with any basis of functions: calculate the $x^ky^l$, put them in the columns of a matrix $\bf \Phi$ in some order, then pose $${\bf v_o}= \min_{\bf v}\|{\bf \Phi v}-c{\bf 1}\|_2^2$$ If you allow a constant term ($k=l=0$), there will exist a trivial solution which you will need to dodge somehow, for example by regularization on the coefficients of the polynomial. The value at position $m$ in the vector $\bf v$ will be the coefficient for whichever of the $x^ky^l$ you put as the $m$th column in $\bf \Phi$.
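A minimal sketch of this recipe (assuming NumPy; the degree, data, and function name are placeholders of my own):

```python
import numpy as np

def fit_implicit_poly(x, y, n, c=1.0):
    # columns x^k y^l for all 0 <= k + l <= n, excluding the constant term
    # (k = l = 0) so the trivial solution is avoided; c is the target value
    powers = [(k, l) for k in range(n + 1) for l in range(n + 1 - k)
              if (k, l) != (0, 0)]
    Phi = np.column_stack([x**k * y**l for k, l in powers])
    v, *_ = np.linalg.lstsq(Phi, c * np.ones(len(x)), rcond=None)
    return powers, v

# hypothetical data: noisy points near the unit circle x^2 + y^2 = 1
t = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(0)
x = np.cos(t) + 0.01 * rng.standard_normal(200)
y = np.sin(t) + 0.01 * rng.standard_normal(200)
powers, v = fit_implicit_poly(x, y, 2)
print(dict(zip(powers, np.round(v, 2))))  # coefficients of x^2 and y^2 near 1
```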
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Order of $2^{36} \pmod{107}$ What is the order of $2^{36}\pmod{ 107}$? My current thought is $$2^{106} \equiv 1 \pmod{107}$$ according to Euler's Theorem. However, I don't know how to proceed from here or maybe my approach is wrong from the beginning?
Since $107-1=2\cdot 53$, every quadratic non-residue, with the only exception of $-1$, is a generator of $\mathbb{Z}/(107\mathbb{Z})^*$. $107$ is a prime of the form $8k+3$, hence $\left(\frac{2}{107}\right)=-1$ and $2$ has order $106$ in $\mathbb{Z}/(107\mathbb{Z})^*$. We have $\gcd(36,106)=2$, hence the order of $2^{36}$ in $\mathbb{Z}/(107\mathbb{Z})^*$ is $\frac{106}{2}=\color{red}{53}$.
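One can confirm the conclusion with a short computation (my own sketch):

```python
p = 107
g = pow(2, 36, p)
order = next(k for k in range(1, p) if pow(g, k, p) == 1)
print(order)  # 53
```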
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Eigenvalues of same-row matrices It has previously been discussed here that the eigenvalues of an all-ones $n \times n$ matrix $A$ such as the following are given by $0$ with multiplicity $n - 1$ and $n$ with multiplicity $1$, hence a total multiplicity of $n$ which means that the given matrix is diagonalizable. $$A = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \\ \end{bmatrix} $$ I recently wrote an exam that asked us to diagonalize a matrix with multiple (3) rows that contained the same entries, so I was wondering if there was some general case to apply. Thus the question I am asking is given the following $n \times n$ matrix A, what are its eigenvalues? $$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \\ a_1 & a_2 & \cdots & a_n \\ \vdots & \vdots & \ddots & \vdots \\ a_1 & a_2 & \cdots & a_n \\ \end{bmatrix} $$ For the sake of simplicity, lets first assume that $a_1, a_2, \ldots, a_n \in \mathbb{R} - \{0\}$; however, what happens if any (or all) are zero? It seems logical that there be the eigenvalue $0$ with $n - 1$ multiplicity since the rank of this matrix will be $1$ (assuming at least one nonzero entry), and that the other eigenvalue be the sum of entries on the diagonal by observation $a_1 + a_2 + \cdots + a_n$ with $1$ multiplicity. I could not, however, write a formal proof for that second statement.
* *If all the columns are zero, it is the zero matrix, which of course is diagonalizable since the zero matrix is a diagonal matrix. $$0=I\cdot 0\cdot I$$ *If at least one of the columns is non-zero, then the rank of the matrix is $1$ and the nullity is $n-1$. We check that the all-$1$ vector is an eigenvector and the eigenvalue is $\sum_i a_i$. Hence if $\sum_i a_i \neq 0$, then the matrix is diagonalizable since the geometric multiplicity is equal to the algebraic multiplicity. *However, suppose one of the columns is non-zero and $\sum_i a_i$ is equal to $0$. Since $\operatorname{tr}(A)=0=\sum_i \lambda_i$ and the nullity is $n-1$, we know the remaining eigenvalue must also be $0$. Suppose on the contrary that it is diagonalizable; then it is similar to the zero matrix, which shows that the matrix itself is the zero matrix, a contradiction since we assume at least one of the columns is non-zero. For example $A=\begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}$ is not diagonalizable. Let the two eigenvalues be $\lambda_1$ and $\lambda_2$; we know that $\lambda_1+\lambda_2=0,$ and we know that at least one of them is zero, hence both of them must be zero. If it were diagonalizable, then the matrix $A=P^{-1}\cdot 0 \cdot P = 0$, which is a contradiction.
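A quick numerical illustration of the two non-trivial cases (my own sketch):

```python
import numpy as np

for a in ([1.0, 2.0, 3.0], [1.0, -1.0, 0.0]):  # row sum != 0 vs row sum == 0
    A = np.tile(a, (3, 1))                     # every row equal to a
    print(sum(a), np.round(np.linalg.eigvals(A), 6))
# first matrix: eigenvalues 6, 0, 0; second: 0, 0, 0 but not diagonalizable
```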
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Describe all prime and maximal ideals of $\mathbb{Z}_n$ I know that an ideal $P$ in $\mathbb{Z}_n$ is prime if and only if $\mathbb{Z}_n/P$ is an integral domain, and an ideal $\mathfrak m$ in $\mathbb{Z}_n$ is maximal if and only if $\mathbb{Z}_n/\mathfrak m$ is a field. I think I've figured out that the ideals $(p)$, where $p$ is a prime that divides $n$, make up the maximal ideals. I have no idea how to figure out which are prime. Help?
Consider the canonical map $$\pi:\Bbb Z\rightarrow \Bbb Z_n$$ If $\mathfrak m\subset\Bbb Z_n$ is a maximal ideal, then $\pi^{-1}(\mathfrak m)\subset\Bbb Z$ is maximal as well. This means that $\pi^{-1}(\mathfrak m)=(p)$ for some prime $p\in\Bbb Z$. There are two cases to consider: * *If $p\mid n$ then $(p)$ is a maximal ideal of $\Bbb Z_n$, since $$\Bbb Z_n/(p)\cong\Bbb Z_p$$ *If $p\not\mid n$ then $p$ is a unit in the ring $\Bbb Z_n$, so $(p)= \Bbb Z_n$ As for the prime ideals: since $\Bbb Z_n$ is finite, $\Bbb Z_n/P$ is a finite integral domain for any prime ideal $P$, hence a field. So the prime ideals and the maximal ideals of $\Bbb Z_n$ coincide, and both are exactly the ideals $(p)$ with $p$ a prime dividing $n$.
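As a concrete example (my own addition): for $n = 12$ the maximal (equivalently prime) ideals of $\Bbb Z_{12}$ are $$(2) = \{0,2,4,6,8,10\} \quad\text{and}\quad (3) = \{0,3,6,9\},$$ with $\Bbb Z_{12}/(2) \cong \Bbb Z_2$ and $\Bbb Z_{12}/(3) \cong \Bbb Z_3$.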
{ "language": "en", "url": "https://math.stackexchange.com/questions/2569986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Derivative of a rotated vector with respect to the quaternion Let us say we have a right-handed unit quaternion describing the rotation from frame $a$ to frame $b$: $q_a^b$. The rotation matrix formed from this quaternion is $R\left( q_a^b \right)$ and describes a passive rotation. That is, $R\left( q_a^b \right)v$ describes the same object $v$ in the new frame $b$. The following expression is given in Michael Andre Bloesch's dissertation without explanation link - (unfortunately embargoed until April 2018) $$\frac{d}{dq_a^b} R\left( q_a^b \right)v = -\left( R\left( q_a^b \right)v \right)^\times $$ where the $\left( \cdot \right)^\times $ notation is the skew-symmetric matrix. I played with these expressions numerically to confirm the above and also discovered that the derivative of the active rotation is $$\frac{d}{dq_a^b} R\left( q_a^b \right)^\top v = R \left( q_a^b \right)^\top \left( v \right)^\times $$ which I guess makes some intuitive sense as well. While these expressions seem to work, how do I approach this problem in a principled way (i.e. not guessing and checking with numerical differentiation)?
I may have an answer, taking the derivation straight from "A Primer on the Differential Calculus of 3D Orientations" by Bloesch et al. (Appendix I: Section 3: Derivative of a Coordinate Map) First, let $\Phi_{BA} \in SO(3)$ be a relative orientation of a coordinate system $B$ w.r.t. a coordinate system $A$. In the paper, they defined a mapping $\boldsymbol{C}: SO(3) \rightarrow \mathbb{R}^{3 \times 3}$ such that $\Phi(\mathbf{r}) \triangleq \boldsymbol{C}(\Phi) \mathbf{r}$, which means $\Phi$ can be a quaternion or Euler angles (if I'm not mistaken). Let $\boldsymbol{e}_{i} \in \mathbb{R}^{3}$ be the standard basis vectors in $\mathbb{R}^{3}$, $\epsilon$ be a small scalar perturbation, and finally, $$ \begin{align} \boxplus : SO(3) \times \mathbb{R}^{3} \rightarrow SO(3), \\ \Phi, \boldsymbol{\varphi} \mapsto \exp(\boldsymbol{\varphi}) \circ \Phi \end{align} $$ be the box-plus operator that forms the addition operator between $SO(3)$ and $\mathbb{R}^{3}$. Copying from the appendix to here, the map of an orientation applied to a coordinate tuple can be differentiated w.r.t. the orientation itself. $$ \begin{align} \begin{bmatrix} \dfrac{\partial}{\partial \Phi} \Phi(\boldsymbol{r}) \end{bmatrix}_{i} &= \lim_{\epsilon \rightarrow 0} \dfrac{ (\Phi \boxplus \boldsymbol{e}_{i} \epsilon)(\boldsymbol{r}) - \Phi(\boldsymbol{r}) } { \epsilon } \\ &= \lim_{\epsilon \rightarrow 0} \dfrac{ \boldsymbol{C}(\boldsymbol{e}_{i} \epsilon) \boldsymbol{C}(\Phi)(\boldsymbol{r}) - \boldsymbol{C}(\Phi)(\boldsymbol{r}) } { \epsilon } \\ &= \lim_{\epsilon \rightarrow 0} \dfrac{ (\boldsymbol{I} + \boldsymbol{e}_{i}^{\times} \epsilon) \boldsymbol{C}(\Phi)(\boldsymbol{r}) - \boldsymbol{C}(\Phi)(\boldsymbol{r}) } { \epsilon } \\ &= \lim_{\epsilon \rightarrow 0} \dfrac{ \boldsymbol{e}_{i}^{\times} \epsilon \boldsymbol{C}(\Phi)(\boldsymbol{r}) } { \epsilon } = \boldsymbol{e}_{i}^{\times} \boldsymbol{C}(\Phi)\boldsymbol{r} = -\left(\boldsymbol{C}(\Phi) \boldsymbol{r}\right)^{\times}\boldsymbol{e}_{i} \end{align} $$ where the last step uses $\boldsymbol{a}^{\times}\boldsymbol{b} = -\boldsymbol{b}^{\times}\boldsymbol{a}$; stacking these columns over $i=1,2,3$ gives $$ \dfrac{\partial}{\partial \Phi} \Phi(\boldsymbol{r}) = -(\boldsymbol{C}(\Phi) \boldsymbol{r})^{\times} $$ Highly encouraged to look at the paper to see the identities used for this derivation. Please let me know if I got this wrong, I'm very new to differential geometry.
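A small numerical check of the boxed result against finite differences (my own sketch; it uses SciPy's rotation utilities and a left perturbation $\exp(\epsilon\boldsymbol{e}_i^\times)R$, matching the $\boxplus$ convention above):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

rng = np.random.default_rng(1)
R = Rotation.random(random_state=1).as_matrix()   # some orientation C(Phi)
r = rng.standard_normal(3)
eps = 1e-6

# finite-difference Jacobian of the map Phi(r) w.r.t. a left perturbation
J = np.column_stack([
    (Rotation.from_rotvec(eps * e).as_matrix() @ R @ r - R @ r) / eps
    for e in np.eye(3)
])
print(np.allclose(J, -skew(R @ r), atol=1e-5))    # True
```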
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Trouble understanding pointwise vs uniform boundedness I've read the definition a few times and I am still struggling to understand what these actually mean. As I understand it, a sequence of functions is pointwise bounded if for each $x$ there exists an $M$ such that $f_n(x)<M$ for all $n$, and a sequence is uniformly bounded if there exists an $M$ such that $f_n(x) < M$ for every $n, x$. This leads to two questions that I have. Firstly, would pointwise convergence imply pointwise boundedness, and would uniform convergence imply uniform boundedness? Also, are there a few examples that demonstrate some pointwise/uniformly bounded sequences of functions (non-trivial, i.e. not a constant sequence), particularly perhaps one that is pointwise but not uniformly bounded, and why that is the case?
Yes, it is true that pointwise convergence implies pointwise boundedness. A proof is similar to the proof that a convergent sequence of numbers $f_i(x)$ in the index $i$ is bounded for a fixed $x$, except you apply the $\forall x$ quantifier. No, uniform convergence does not imply uniform boundedness. Take $f_i(x)=x^2+1/i$ on $\mathbb R$; they converge to $f(x)=x^2$ uniformly, which is not a bounded function. The sequence $f_i$ above is pointwise bounded but not uniformly bounded. Given $x$, a bound for the sequence of numbers $f_i(x)$ is $x^2+1$. It is not uniformly bounded because no single $f_i$ is even bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
derivative of $\frac 2x \sin(x^3)$ by definition The function: $$\frac 2x \sin(x^3)$$ when $x\neq0$, and $0$ when $x=0$. I need to find whether the function is differentiable at $x=1$. The first step was to check that the function is continuous. There is just one limit to check (since it's the same function around $x=1$), so I compared the limit of the function with $f(1)$: $$\lim_{x\to 1}\space \frac{2}{x}\sin(x^3) = 2\sin(1) = f(1)$$ First question: was this step necessary? Next: $$f'(1) =\lim_{h\to 0}\space \frac{ \frac2{h+1}\sin((h+1)^3)-2\sin(1)}h$$ $$=\lim_{h\to 0}\space \frac{ \frac{2\sin((h+1)^3)}{h+1}-2\sin(1)}h$$ and I know about $\lim_{x\to 0}\frac {\sin(x)}x = 1$, so: $$=\lim_{h\to 0}\space \frac{ \frac{2\sin((h+1)^3) \cdot (h+1)^2}{(h+1)^3}-2\sin(1)}h$$ and then: $$=\lim_{h\to 0}\space \frac{ 2(h+1)^2-2\sin(1)}h$$ and now I don't know how to get rid of the $h$ in the denominator.
The function is obviously continuous at $1$, because it is a composition of continuous functions. Your computation is flawed: it is true that $\lim_{x\to0}\frac{\sin x}{x}=1$, but you have $\lim_{h\to0}\frac{\sin((1+h)^3)}{(1+h)^3}$ and the preceding limit doesn't apply. For the derivative at $1$: $$ f'(1)=\lim_{h\to0}\frac{\dfrac{2\sin((1+h)^3)}{1+h}-2\sin1}{h} $$ Now write $(1+h)^3=1+hg(h)$, where $g(h)=3+3h+h^2$, so you have to compute, disregarding the factor $2$, $$ \frac{\sin(1+hg(h))-(1+h)\sin 1}{h(1+h)}= \frac{\sin1\cos(hg(h))+\cos1\sin(hg(h))-\sin 1-h\sin 1}{h(1+h)} $$ The right hand side can be rewritten as $$ \frac{\cos(hg(h))-1}{hg(h)}\frac{g(h)\sin 1}{1+h} +\frac{g(h)\cos 1}{1+h}\frac{\sin(hg(h))}{hg(h)} -\frac{\sin 1}{1+h} $$ Now the limit is basically done; the first fraction has limit $0$, the second fraction has limit $3\sin 1$; the third fraction has limit $3\cos 1$, the fourth fraction has limit $1$; the fifth fraction has limit $\sin 1$. Thus, reinserting the factor $2$: $$ f'(1)=2(3\cos1-\sin1) $$ Just to check with the standard procedure: for $x\ne0$, $$ f'(x)=2\frac{3x^3\cos(x^3)-\sin(x^3)}{x^2} $$ and $f'(1)=2(3\cos 1-\sin 1)$. Are you sure you don't want to compute the derivative at $0$? In order to do it, you need to first check the function is continuous at $0$: $$ \lim_{x\to0}\frac{2\sin(x^3)}{x}= \lim_{x\to0}2x^2\frac{\sin(x^3)}{x^3}=0 $$ Then $$ f'(0)=\lim_{h\to0}\frac{f(h)-f(0)}{h} =\lim_{h\to0}\frac{2\sin(h^3)}{h^2} =\lim_{h\to0}2h\frac{\sin(h^3)}{h^3}=0 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Laplace method for $\int_0^1 e^{a(x-1)}\ln(-\ln(x)) \, dx$ The following integral $$ \int_0^1 e^{a(x-1)}\ln(-\ln(x)) \, dx$$ looks like the one for which the Laplace method is applicable. The function under the integral looks like this: ($a$ increases from red to black), so the leading contribution to the integral should be at the points 1 and 0. I can split this into two integrals from functions having maximum at 0, however direct application of the Laplace method is problematic since the maxima are infinite. Can it be fixed with some variable change, or do I need something beyond the Laplace method (steepest descents or anything else)?
Let $I(a,\epsilon) = \int_\epsilon^{1/e} e^{-a(1-x)}\log\log\frac{1}{x}dx$ and $J(a,\epsilon) = \int_{1/e}^{1-\epsilon} e^{-a(1-x)}\log\log\frac{1}{x}dx$, i.e. we cut the integral at the unique root $1/e$ of the logarithmic factor. Then for all $\epsilon>0$: * *$0\le I(a,\epsilon) \le e^{-a(1-\frac{1}{e})} I(0,\epsilon)$ *$0\ge J(a,\epsilon) \ge e^{-a\epsilon} J(0,\epsilon)$ Hence if $I(a) = I(a,0)$ exists (which I'm not sure it does), then $\lim_{a\to\infty}I(a) = 0$; same for $J(a)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
If $B^{1/2}A=AB^{1/2}$, is it true that $AB=BA$? Let $E$ be a complex Hilbert space. Let $A\in \mathcal{L}(E)$ and let $B\in \mathcal{L}(E)^+$. Assume that $B^{1/2}A=AB^{1/2}$. Is it true that $AB=BA$? Thank you
$\def\h{^{1/2}}$ $$ AB=AB\h B\h = (B\h A) B\h= B\h (AB\h)=B\h B\h A=BA\,.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all directions such that they decrease the function value after taking a small step towards that direction Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ and suppose all second partial derivatives of $f$ exist and are continuous over the domain of the function; the mixed derivatives are continuous and hence $\forall i, j:\ \frac{\partial}{\partial x_i} \frac{\partial f}{\partial x_j} = \frac{\partial}{\partial x_j} \frac{\partial f}{\partial x_i}$. Thus, a Hessian matrix exists and is symmetric. Let the Hessian matrix be indefinite at $\mathbf{x_0}$ (i.e. $\exists \mathbf{r_1}: \mathbf{r_1}^T \mathbf{H}(\mathbf{x_0}) \mathbf{r_1} < 0$ and also $\exists \mathbf{r_2}: \mathbf{r_2}^T \mathbf{H}(\mathbf{x_0}) \mathbf{r_2} > 0$); therefore $\mathbf{x_0}$ is a saddle point of the function $f$ and $\frac{\partial f}{\partial \mathbf{x}}(\mathbf{x_0}) = \mathbf{0}$. Given the Hessian matrix, how do I find a direction $\mathbf{v}$ such that a very small step in that direction from $\mathbf{x_0}$ leads to a decrease of the function? More formally, find $\mathbf{v}$ such that $f(\mathbf{x_0} + t \mathbf{v}) < f(\mathbf{x_0})$ for all sufficiently small $t > 0$. I am also given the eigendecomposition of the matrix, i.e. all the eigenvalues $\lambda_i$ and the corresponding eigenvectors $\mathbf{c}_i$, which I am not sure how to use yet. My understanding is that the sign of the quadratic form $\mathbf{p}^T \mathbf{H}(\mathbf{x_0}) \mathbf{p}$ determines whether $f$ increases or decreases along $\mathbf{p}$ for small steps. Am I correct? If my assumption holds, my approach would be to solve $\mathbf{p}^T \mathbf{H}(\mathbf{x_0}) \mathbf{p} < 0$ with respect to $\mathbf{p}$, but I would end up solving a system of quadratic inequalities, which doesn't look very promising. Is there any way of getting the result $\mathbf{v}$ more easily, somehow using the eigenvector decomposition?
You are correct. By Taylor's theorem, you have $$f(x_0+tp) =f(x_0) +t\nabla f(x_0)^T p + \frac{t^2}{2}p^TH(x_0)p + o(t^2)= f(x_0) + \frac{t^2}{2}p^TH(x_0)p + o(t^2) $$ because of the fact that $\nabla f(x_0)=0$ by hypothesis. Now take any $p\in \Bbb R^n$ such that $p^TH(x_0)p<0.$ Such a $p$ always exists because of the indefiniteness hypothesis. We now show that $p$ is a descent direction of $f$ at $x_0.$ Indeed, otherwise you would have $$f(x_0+tp)\geq f(x_0)$$ for all $t$ small enough, and hence $$0\leq \frac{f(x_0+tp)-f(x_0)}{t^2}= \frac12 p^TH(x_0)p + \frac{o(t^2)}{t^2},$$ which is a contradiction when $t$ is small enough because $\frac{o(t^2)}{t^2} \to 0 $ and $p^TH(x_0)p<0$ by construction. That being said, finding a $p$ with $p^TH(x_0)p<0$ is exactly where the given eigendecomposition helps: if $\lambda_i < 0$, then $p=\mathbf c_i$ satisfies $p^TH(x_0)p = \lambda_i\|\mathbf c_i\|^2 < 0$, so any eigenvector with negative eigenvalue is a descent direction.
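A minimal sketch (assuming NumPy) of extracting such a direction from the eigendecomposition:

```python
import numpy as np

def descent_direction(H):
    """Return an eigenvector of H with negative eigenvalue, if one exists."""
    vals, vecs = np.linalg.eigh(H)       # H is symmetric
    i = np.argmin(vals)
    return vecs[:, i] if vals[i] < 0 else None

H = np.array([[2.0, 0.0], [0.0, -3.0]])  # indefinite Hessian: saddle point
p = descent_direction(H)
print(p, p @ H @ p)                      # p^T H p = -3 < 0
```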
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find floor of sum $\sum_{k=1}^{80} k^{-1/2}$ We have to find the floor $\lfloor S \rfloor$ of the following sum: $$S = \sum_{k=1}^{80}\frac{1}{\sqrt k}$$ What I did was to find an approximate series that this series is close to. Let that series have general term $T_k$ and the original series have general term $a_k$. We construct the following series of $T_k$: $$T_k = \frac{1}{ \sqrt{k+1}+\sqrt{k}} = \sqrt{k+1}-\sqrt{k}\\$$ Then we have the following inequality: $$\frac{a_k}{2} = \frac{1}{\sqrt{k}+\sqrt{k}} > T_k \\ \sum a_k > 2 \sum_{1}^{80} T_k \\ S > 2 (\sqrt{81}-1)$$ Where the last result is due to the telescoping property of $T_k$. So we have the lower bound $ S >\color{indigo}{ 16}$. However, we still cannot say $\lfloor S \rfloor = 16$ because $S$ may exceed $17$.
$$\begin{align} \int_1^{81}\frac 1{\sqrt x}\;\;\text d x &<\qquad\sum_{k=1}^{80}\frac 1{\sqrt k} &&<1+\int_1^{80}\frac 1{\sqrt x}\;\;\text d x\\ \bigg[2\sqrt{x}\bigg]_1^{81} &< \qquad\sum_{k=1}^{80}\frac 1{\sqrt k} &&<1+\bigg[2\sqrt{x}\bigg]_1^{80}\\ 2\big(\sqrt {81}-\sqrt{1}\big) &< \qquad\sum_{k=1}^{80}\frac 1{\sqrt k} &&< 1+2\big(\sqrt{80}-\sqrt{1}\big)\\ 16 &< \qquad\sum_{k=1}^{80}\frac 1{\sqrt k} &&<16.88 \end{align}$$ NB: WolframAlpha gives $16.484$.
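The quoted value is easy to reproduce (a one-line check of my own):

```python
print(sum(k ** -0.5 for k in range(1, 81)))  # 16.4848..., so the floor is 16
```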
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
$\mu,\nu$ ergodic implies $\mu\perp\nu$ Let $T:\Omega\to\Omega$ be a measurable function and $\mu,\nu$ distinct $T$-ergodic measures on $\Omega$. I am trying to prove that $\mu\perp\nu$ (that is, they concentrate on disjoint sets). My attempt was to define $w=\mu+\nu$ and use Radon-Nikodým to obtain $f,g\in L^1(\Omega)$ such that $\mu\sim fdw,\ \nu\sim gdw$. Then I only have to show that $w(\{f,g>0\})=0$, but I couldn't progress much. Note: by $\mu$ being ergodic I mean "$\mu$ is $T$-invariant and, for every measurable $A$, $\mu(A\triangle T^{-1}A)=0$ implies $\mu(A)\in\{0,1\}$". Note2: I am not able to use the ergodic theorem.
For a measure $\lambda$ and a measurable function, $f$, let \begin{align*} B_{\lambda}^{f} &= \left\{x: \lim_{n} n^{-1}\sum_{i=1}^{n}{f(T^{i}(x)) =E_{\lambda}[f]}\right\} \end{align*} Since $\mu \neq \nu$, there exists a measurable $f^*$ such that $E_{\nu}[f^*] \neq E_{\mu}[f^*]$. Therefore, $B_{\mu}^{f^*} \cap B_{\nu}^{f^*}=\emptyset$. Also conclude from the Ergodic theorem that $\mu(B_{\mu}^{f^*})=1$ and $\nu(B_{\nu}^{f^*})=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2570913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Role of binomial coefficient in binomial distribution As I can verify, the binomial distribution $W_N(n) = \binom Nn p^n q^{N-n}$ is indeed a probability distribution: $$\sum_{n=0}^N W_N(n) = (p+q)^N = 1\ ,$$ with $q = 1 -p$. In the context of statistical physics, I don't understand why it is essential that we take into account that there are $\binom Nn$ possible ways to pick $n$ molecules from $N$. Yes, we do not take the order into account when we pick the molecules, but what does the binomial factor actually mean in $W_N(n)$? What would happen if we left it out, other than the fact that $W_N(n)$ then fails to be a probability distribution? I have been ignoring this confusion for a while now, so a really down-to-earth answer would be appreciated. :) Edit. I understand the isolated concept of the binomial coefficient $\binom nk$: the number of ways to choose $k$ objects out of $n$ objects without ordering; the denominator accounts for the number of ways we can order the $k$ and $n-k$ objects respectively: $$\binom nk = \frac{n!}{k!\,(n-k)!} \ .$$ What I don't understand is its role in the binomial distribution $W_N(n)$. In the context of my question, the probability that we find $n \ (< N)$ molecules in a subvolume $V_1 \subset V$: what is wrong with the following reasoning that I'm tempted to have? The probability of finding $n$ molecules inside a subvolume $V_1$ is equal to $p^n q^{N-n}$, where $p$ and $q$ are the probabilities of finding a molecule inside and outside the subvolume $V_1$ respectively. Now, I do have a vague notion that it is essential to take into account the fact that we are choosing $n$ molecules out of $N$ without ordering. But I don't understand the key concept of how this is resolved by just multiplying by $\binom Nn$, and what it would mean conceptually if we didn't. If it is evident from the way I phrased the question that I'm missing a different unrelated concept, do tell. :)
Expanding on @Falrach's answer: if we, indeed, fix $r$ events which "succeed" amongst $n$ events, where the probabilities of succeeding resp. failing are $p$ and $q = 1-p$, we have the following argument for $n=5$ and $r=2$. Denote by $P(i)$ the probability of exactly $i$ succeeding events and by $\hat{P}(k,l)$ the probability that precisely events $k$ and $l$ succeed. We have $P(2) = \hat{P}(1,2) + \hat{P}(1,3) + \ldots + \hat{P}(4,5)$. Each $\hat{P}(k,l)$ equals the same value $p^2q^3$, and since we can pick $r$ events from $n$ events in $\binom n r$ ways, we have $P(2) = \binom 5 2 \hat{P}(1,2)$. So we multiply by the binomial coefficient because the succeeding events are interchangeable: every particular choice of which events succeed has the same probability $p^rq^{n-r}$.
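A tiny enumeration (my own sketch) makes the count explicit for $n=5$, $r=2$:

```python
from itertools import combinations
from math import comb

p, q, n, r = 0.3, 0.7, 5, 2
# sum P-hat over all ways to choose which 2 of the 5 events succeed
P2 = sum(p**r * q**(n - r) for _ in combinations(range(n), r))
print(P2, comb(n, r) * p**r * q**(n - r))  # both approximately 0.3087
```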
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Certain Subset of Sorgenfrey Plane is Closed Note that $L = \{(x,-x) \mid x \in \Bbb{R} \}$ is closed. Then if $A$ is closed in $L$, it will also be closed in $\Bbb{R}^2_\ell$. According to Munkres, $L-A$ will also be closed, but I am having trouble proving this. The set $L-A$ is closed in $\Bbb{R}^2_\ell$ if and only if $\Bbb{R}^2_\ell - (L-A) = (\Bbb{R}^2_\ell - L) \cup A$ is open, and I am having trouble seeing the truth of this.
As $\{(x,-x)\} = ([x,x+1) \times [-x,-x+1)) \cap L$, every singleton subset of $L$ is open in $L$ (as an intersection of an open set of the Sorgenfrey plane with $L$). This means that $L$ is discrete as a subspace: all of its subsets are open (and thus closed) in $L$, and as $L$ is closed in the Sorgenfrey plane, all of its subsets are closed in the Sorgenfrey plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A problem on rank of a matrix over two fields This is a problem from Berkeley problems in mathematics. If $F$ is a subfield of $K$, and $M$ has entries in $F$, how is the row rank of $M$ over $F$ related to the row rank of $M$ over $K$? where $M$ is an $n \times n$ matrix The solution says "If a set of rows of $M$ is linearly independent over $F$, then clearly it is also independent over $K$, so the rank of $M$ over $F$ is, at most, the rank of $M$ over $K$." I have some trouble understanding this; what I thought was that if they are linearly independent over the bigger field $K$, they are linearly independent over $F$ (because all linear combinations with scalars from $F$ are subsumed when you are talking about linear combinations in $K$). However, here it is the other way around.
The rank of a matrix is the size of its largest square submatrix with nonzero determinant. Since the determinant is determined only by the matrix's entries, it remains the same during a field extension. Thus the rank remains invariant too. Alternatively, the rank of a matrix is just the number of nonzero diagonal entries in its Smith normal form. Since the Smith normal form computed over a subfield is still a Smith normal form over a larger field, the rank remains the same. If one considers rational canonical form (which also is solely determined by the matrix's entries) instead, one can show not only that matrix rank is invariant on field extension, but also that similar matrices remain similar over a field extension too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Show that the number of solutions of $x^2+y^2 \equiv 1$ (mod $p$) where $0<x<p$, $0<y<p$ is even iff $p \equiv \pm 3$ (mod $8$) Show that the number of solutions of $x^2+y^2 ≡ 1$ (mod $p$) where $0<x<p$, $0<y<p$, $p$ is an odd prime, is even iff $p ≡ 3, -3$ (mod $8$). I learned about quadratic residues and sums of squares. Let $S_1$ = {$1^2, 2^2, ... , (p-1)^2$} $S_2$ = {$1-1^2, 1-2^2, ... , 1-(p-1)^2$}. No two elements of the set $S_1$, $S_2$ are congruent modulo $p$. Together $S_1$, $S_2$ contain $2p-2$ integers. By the pigeonhole principle, there exist $x_0$, $y_0$ such that $x_0^2$ ≡ $1-y_0^2$ (mod $p$). But I don't know how to continue.
If $(x,y)$ is a solution, so are $(\pm x, \pm y)$, and since $p$ is odd, all $4$ solutions are distinct, mod $p$. Hence the number of solutions is a multiple of $4$. Note:$\;$For this argument, $p$ can be any odd positive integer, not necessarily prime.
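A brute-force sketch (in Python) that illustrates the answer: for each small odd prime, the count of solutions with $0<x,y<p$ is a multiple of $4$, hence always even.

    def count_solutions(p):
        # number of pairs (x, y), 0 < x, y < p, with x^2 + y^2 = 1 (mod p)
        return sum(1 for x in range(1, p) for y in range(1, p)
                   if (x * x + y * y) % p == 1)

    for p in [3, 5, 7, 11, 13, 17, 19, 23]:
        n = count_solutions(p)
        print(p, p % 8, n)
        assert n % 4 == 0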
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Unsure how to use initial conditions in characteristic value problem $u_{xx} - 7u_{tx} + 12u_{tt} = 0$ $u(t,x) = \sin x$ on $t+3x=0$ $u(t,x) = x$ on $t+4x = 0$ I have factorized the above equation, which gives me the result: $u(t,x) = f(t+3x) + g(t+4x)$ But, I don't know how to proceed. All I have is: $f(0) + g(t+4x) = \sin x$ and $f(t+3x) + g(0) = x$ If the conditions were $u(t,x)$ and $u_t(t,x)$, I'd be able to differentiate one to find $f$ and $g$. But here, I am completely lost. Any help would be appreciated!
$u_{xx} - 7u_{tx} + 12u_{tt} = 0$ ...$(i)$ $u(t,x) = \sin x$ on $t+3x=0$ ...$(ii)$ $u(t,x)=x$ on $t+4x=0$ ...$(iii)$ First of all, we observe from equations $(ii)$ and $(iii)$ that $u(t, x)$ is independent of $t$ for all $t$ and $x$, hence the second and third terms of equation $(i)$ are zero. Differentiating the equations $f(0) + g(t+4x) = \sin x\implies\frac {\text d}{\text dx} g(t+4x)=\cos x$ ...$(iv)$ and $f(t+3x)+g(0)=x \implies \frac {\text d}{\text dx} f(t+3x)=1$ ...$(v)$ Adding $(iv)$ and $(v)$ gives $\frac {\text d}{\text dx} u(t,x)=\cos x+1$ $\implies u_{xx}=-\sin x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $f(m,n)=2^m\cdot(2n+1)$ a bijection between $\Bbb{Z_{\geq0}\times Z\to Z}$? Let $\mathbb Z$ denote the set of integers and $\mathbb Z_{\ge 0}$ denote the set $\{0,1,2,3,...\}$. Consider the map $f:\mathbb Z_{\ge 0}\times \mathbb Z \to \mathbb Z$ given by $f(m,n)=2^m\cdot(2n+1)$. Then the map $f$ is (A) injective but not surjective. (B) surjective but not injective. (C) injective and surjective. (D) neither injective nor surjective. For injectivity, $$2^{m_1-m_2}(2n_1+1)=(1)(2n_2+1)$$ $$2^{m_1-m_2}(2n_1+1)=2^0(2n_2+1)$$ $$m_1=m_2 \land n_1=n_2 $$ For surjectivity, $m=0$, $f$ maps to odd integers. Similarly, I am getting a pre-image for even integers also. So, (C) is the correct answer. Am I correct? But the solution manual gives (A) as the correct one. Who is correct? Please help me.
Hint: Note that $0$ is not the image of any pair $(m,n)$: we have $2^m \geq 1$ for $m \in \mathbb{Z}_{\ge 0}$, and $2n+1$ is odd, hence nonzero. So, $\implies \, ?$
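A finite sanity check (in Python): on a grid of $(m,n)$ pairs the map takes no value twice, and the value $0$ never appears.

    # f(m, n) = 2^m * (2n + 1) on a small grid
    values = [2 ** m * (2 * n + 1) for m in range(8) for n in range(-20, 20)]
    assert len(values) == len(set(values))  # no collisions: injective on the grid
    assert 0 not in values                  # 0 has no preimage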
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
weird brackets in unit interval I have found, while looking at the book Linear Algebra and its Applications (K. Nordstrom), some weird (for me) notation for belonging to the unit interval, namely, $\lambda \in \ ] 0,1 [$. Does it mean as always that $\lambda$ belongs to $[0,1]$ or something different?
The notation $]a,b[$ is used for an open interval, more commonly written as $(a,b)$; meaning: $$x \in \; ]a,b[ \; \iff a \color{red}{<} x \color{red}{<} b$$ whereas: $$x \in [a,b] \iff a \color{blue}{\le} x \color{blue}{\le} b$$ So $\lambda \in \; ]0,1[$ would mean values satisfying $0<\lambda<1$, excluding the end points of the interval. Half-open intervals are then written in a similar way, e.g. $[a,b) = [a,b[$ etc. This notation is more common in the French school (and countries adopting that notation) and has the advantage of avoiding confusion since $(a,b)$ is a common notation with other meanings too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to show it is a rhombus I am trying to solve question 2 (figure 2). I have shown that the diagonals intersect each other at right angles, but I cannot show that AB||GH. Please help.
Using Alternate Interior Angles $$\angle GBH=\angle AHB$$ and $$\angle BHG=\angle ABH$$ Again, $$\angle ABH=\angle GBH$$ $$\implies\angle ABH=\angle AHB\implies AB=AH\ \ \ \ (1)$$ Similarly $BG=GH$ In $\triangle ABH, \triangle GBH$ $$\angle ABH=\angle GBH,\angle AHB=\angle GHB$$ and $BH$ being the common side Using SAA Congruence $$\triangle ABH\cong\triangle GBH$$ $$\implies AB=BG,AH=GH$$ Using $(1),$ $$BG=AB=AH=GH$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Steiner inellipse Hello, this is related to my answer to Prove the inequality $\frac{b+c}{a(y+z)}+\frac{c+a}{b(z+x)}+\frac{a+b}{c(x+y)}\geq 3\frac{a+b+c}{ax+by+cz}$ My answer fails but I don't know why ... So I was thinking about a generalization of the following formula: $$\frac{IA^2}{CA\cdot AB}+\frac{IB^2}{BC\cdot AB}+\frac{IC^2}{CA\cdot BC}=1$$ I know that it's related to the Steiner inellipse and we have for a triangle ABC and the ellipse of foci $P$ and $Q$: $$\frac{PA\cdot QA}{BA\cdot CA}+ \frac{PB\cdot QB}{CB\cdot AB}+ \frac{PC\cdot QC}{BC\cdot AC}=1$$ But in my proof I have also used the following formulas: \begin{align} \frac{1}{IA^2}+\frac{1}{IB^2}+\frac{1}{IC^2} &= \frac{1}{r^2}-\frac{1}{2rR} \\ IA^2+IB^2+IC^2 &= s^2+r^2+8rR \\ CA\cdot AB+BC\cdot AB+CA\cdot BC &= s^2+(4R+r)r \\ \frac{1}{CA\cdot AB}+\frac{1}{BC\cdot AB}+\frac{1}{CA\cdot BC} &= \frac{1}{2rR} \end{align} So what are the new expressions for: \begin{align} \frac{1}{BA\cdot CA}+\frac{1}{CB\cdot AB}+\frac{1}{BC\cdot AC} &= ? \\ \frac{1}{PA\cdot QA}+\frac{1}{PB\cdot QB}+\frac{1}{PC\cdot QC} &= ? \\ PA\cdot QA+PB\cdot QB+PC\cdot QC &= ? \\ BA\cdot CA+CB\cdot AB+BC\cdot AC &= ? \end{align} in terms of the parameters of the inellipse and of the triangle $ABC$, such as the area and sides of the triangle, or the semi-major and semi-minor axes of the ellipse? Edit: I have good news. The centroid $M$ of the triangle $ABC$ coincides with the centre of the inellipse, and we have the following relation for any interior point $P$ of the triangle $ABC$: $$PA^2+PB^2+PC^2=MA^2+MB^2+MC^2+3MP^2$$ Thanks a lot.
Sums $$CA\cdot AB+BC\cdot AB+CA\cdot BC = s^2+(4R+r)r$$ and $$\frac{1}{CA\cdot AB}+\frac{1}{BC\cdot AB}+\frac{1}{CA\cdot BC}=\frac{1}{2rR}$$ are not related to the Steiner inellipse. The two remaining sums are based on the products of distances from the foci $P$ and $Q$ of the Steiner inellipse of the triangle $\Delta ABC$ to its vertices, which can be found as follows. Let the vertices $A$, $B$, and $C$ and the foci $P$ and $Q$ have the complex coordinates $z_A$, $z_B$, $z_C$, $z_P$, and $z_Q$, respectively. According to Steiner's Theorem [MP, Th. 2.1], $z_P$ and $z_Q$ are given by the equality $$g\pm \sqrt{g^2-\frac f3},$$ where $g=\frac 13\left(z_A+z_B+z_C\right)$ is the centroid and $f=z_Az_B+ z_Bz_C+ z_Az_C$. Then, for instance, $$|PA\cdot QA|=|z_P-z_A||z_Q-z_A|=\left|(g-z_A)^2- g^2+\frac f3\right|= \left|z_A^2-2gz_A+\frac f3\right|.$$ I don't know whether we can further simplify the expressions for the two remaining sums. References [MP] D. Minda, S. Phelps, Triangles, ellipses, and cubic polynomials, American Mathematical Monthly, 115 (8) (2008), 679-689.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2571971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Magnitude of $f_3(n)$ compared to power towers of tens In the fast growing hierarchy, the sequence $f_2(n)$ is defined as $$f_2(n)=n\cdot 2^n$$ The number $f_3(n)$ is defined by $$f_3(n)=f_2^{\ n}(n)$$ For example, to calculate $f_3(5)$, we have to apply the operator $n\cdot 2^n$ five times with start value $5$. Denote $$T(n):=10\uparrow 10 \uparrow \cdots \uparrow 10 \uparrow 10$$ with $n$ tens, so a power tower of tens with height $n$. With the help of the computer, I found out that $f_3(30)<T(31)$, but $f_3(31)>T(32)$, so $31$ is the smallest number $n$ with $f_3(n)>T(n+1)$ * *Can this value also be found without electronic help by bounding the function $f_3(n)$ ? *Can I also find the smallest number $n$ with $f_3(n)>T(n+k)$ for $k=2,3,4,\cdots$ without brute force ?
We will use $\log$ to mean logarithm base $10$. Observe that $f_2^k(n) > (2^n \uparrow\uparrow k)n$, as a simple induction shows. Therefore $f_3(n) > 2^n \uparrow\uparrow n$. Since $2^n > 10$ for $n \ge 4$, we thus have $$\log^{n-2} f_3(n) > 2^{n 2^n}$$ In the other direction, we will show that $$\log^{n-2} f_3(n) < 2^{n 2^n + 2n}$$ Define $g_i(n)$ by $g_1(n) = n 2^n + 2n$ and $g_{i+1}(n) = 2^{g_i(n)}$. We claim that $g_i(n) > n f_2^i(n)$ for $i \ge 3$ and $n \ge 3$. Indeed, for $i=3$ we have $n f_2^3(n) = n^2 2^{n+n2^n} 2^{n2^{n + n2^n}} < 2^{2^{n+n2^n}}2^{n2^{n + n2^n}} = 2^{(n+1)2^{n+n2^n}} < 2^{2^n 2^{n+n2^n}} = 2^{2^{2n+n2^n}} = g_3(n)$. Then, assuming the statement for $i$, $g_{i+1}(n) = 2^{g_i(n)} > 2^{n f_2^i(n)} > 2^{n + f_2^i(n) + f_2^i(n)} > n f_2^i(n) 2^{f_2^i(n)} = n f_2^{i+1}(n)$. Thus $\log_2^{k-2}f_2^k(n) < 2^{n2^n + 2n}$, and therefore $\log^{n-2} f_3(n) < \log_2^{n-2} f_3(n) < 2^{n2^n + 2n}$. It follows that $$(n 2^n)\log 2 < \log^{n-1}f_3(n) < (n 2^n + 2n)\log 2$$ and $$ n \log 2 + \log n + \log \log 2 < \log^n f_3(n) < n \log 2 + \log n + \log \log 2 + \frac{\log e}{2^{n-1}}$$ and we can see that $\frac{\log e}{2^{n-1}}$ goes to $0$ very quickly. So, it is reasonably straightforward to find $h(k) = $ the smallest value of $n$ such that $f_3(n) > T(n+k)$. As you have found, $h(1) = 31$ and $h(2) = 33 219 280 916$; for $k = 3$, if we let $x = 10^{10^{10} - \log \log 2}$, we will have overshot by very very close to $\log x + \log \log 2$; thus letting $y = \lceil x - \frac{\log x + \log \log 2}{\log 2} \rceil$ will very likely give us the exact answer, as the logarithm will be extremely close to constant in that interval. The same method will very likely give the exact answer for all $k$, as the error terms decrease tetrationally.
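A quick check (in Python) of the final estimate: $f_3(n) > T(n+1)$ iff $\log^n f_3(n) > 10$, and the two displayed bounds sandwich $\log^n f_3(n)$ tightly.

    from math import log10, e

    def lower(n):
        return n * log10(2) + log10(n) + log10(log10(2))

    def upper(n):
        return lower(n) + log10(e) / 2 ** (n - 1)

    for n in (30, 31):
        print(n, lower(n), upper(n))
    # n = 30: both bounds < 10, so f_3(30) < T(31)
    # n = 31: both bounds > 10, so f_3(31) > T(32); hence h(1) = 31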
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
$\vec{PP_1}+\vec{PP_2}+\vec{PP_3}=\frac {3}{2}\vec {PO} $. Let $ABC$ be an equilateral triangle and $P\in int (ABC)$. If $O$ is the center of gravity of the triangle and $P_1, P_2, P_3$ are the projections of $P$ on the sides, then $\vec{PP_1}+\vec{PP_2}+\vec{PP_3}=\frac {3}{2}\vec {PO} $.
Let $A_1C_2||AC $, $A_2B_1||AB $ and $B_2C_1||BC $ through $P $ where $A_1,B_2\in [AB] $, $A_2, C_1\in [AC ]$ and $B_1, C_2\in [BC] $. Then $PA_1AA_2$, $PC_1CC_2$, $PB_1BB_2$ are parallelograms and $A_1PB_2$, $A_2PC_1$, $B_1PC_2$ are equilateral triangles. Now, $\vec {PP_1}=\frac {\vec{PB_1}+\vec{PC_2}}{2}$ $\vec {PP_2}=\frac {\vec{PA_2}+\vec{PC_1}}{2}$ $\vec {PP_3}=\frac {\vec{PA_1}+\vec{PB_2}}{2}$. But $\vec {PA_1}+\vec {PA_2}=\vec {PA} $ $\vec {PB_1}+\vec {PB_2}=\vec {PB} $ $\vec {PC_1}+\vec {PC_2}=\vec {PC} $. So, $\vec {PP_1}+\vec {PP_2}+\vec {PP_3}=\frac {\vec {PA}+\vec {PB}+\vec {PC}}{2}=\frac {3\vec {PO}}{2} $, q.e.d.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Lebesgue measure - natural measure on $S^1$ I know the definition of the Lebesgue measure for subsets of $\mathbb{R}$. I have heard that the Lebesgue measure is the "natural" measure to put on the circle, $S^1$. Why is this so? How is the Lebesgue measure defined for $S^1$ in $\mathbb{R}^2$ or $\mathbb{C}$? Is the idea to take the infimum of the area of open boxes covering $S^1$? If so, how exactly does this work, and what is the Lebesgue measure of $S^1$?
The Lebesgue measure on $S^1$ can be viewed as the Hausdorff measure of it as a subset of $\mathbb{R}^2$ with the 2D Lebesgue measure $m_2$. Alternately it can be viewed as the pushforward measure $\mu(A)=m_1(f^{-1}(A))$ where $f(t)=(\cos(t),\sin(t))$ with domain $[0,2\pi)$. Either way, the measure of an arc is the length of the arc, which makes sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding Distributions I have been studying distributions, and I am still trying to get the intuition behind the following: $1)$ Suppose $f\in L^1_{loc}(\mathbb{R})$, and $\phi\in C^{\infty}_c(\mathbb{R})$. A distribution is defined as $\langle T_f, \phi \rangle=\int_{\mathbb{R}}f\phi dx$. Suppose we are given an $f$; is the definition telling us we can define a distribution for any $\phi$? In other words, if we have some given $L^1_{loc}$ function, can we define a different distribution for, say, $\phi_1=\sin(x)\chi_{[0,1]}$ and $\phi_2=e^x\chi_{[0,2]}$? $2)$ From this definition, is it valid to define a function as follows: $g(x)=\int_{\mathbb{R}}f\phi dx$ with $x\in [x-\epsilon, x+\epsilon]$ and $\phi$ compactly supported on $[x-\epsilon, x+\epsilon]$? where $f$ and $\phi$ are some appropriate functions? $3)$ If $S(\mathbb{R})$ is the Schwartz space of rapidly decreasing functions, and $S'(\mathbb{R})$ is the dual space (the space of tempered distributions), does this just mean that for $f\in S(\mathbb{R})$, the Fourier transform $\hat f$ defines a distribution which takes finite values? I think I am most stuck trying to make sense of the third one.
For 1) there is not a distribution for every $\phi$. The distribution is the linear functional $$\phi \longmapsto \int_{\mathbb R} f \phi dx$$ This functional is denoted $T_f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that if $c^2 = a^2 + b^2$ then $c < a + b$ I am having some trouble in proving a conjecture that occurred to me some time ago, based on the Pythagorean theorem. If, for a non-degenerate triangle, $$c^2 = a^2 + b^2$$ Then can the following be proven? $$c < a + b$$ Is this statement always true?
For all real $a,b > 0$ we have $a^2 + b^2 < (a+b)^2 \implies \sqrt{a^2+b^2} < a+b$. If $c^2$ is defined to be $c^2 = a^2 + b^2$, then we have $c < a+b$. This also explains why the triangle cannot be degenerate, i.e. $a = 0$ or $b = 0$, in order for the statement to be true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Let A be an n by n matrix with the property that $A^TA = AA^T$ Show that $\left\Vert A^Tx \right\Vert = \left\Vert Ax \right\Vert$ for $x$ in $\mathbb{R}^n$. So this is what I did. I need someone to check my work. 1) I squared the left hand side of the equation $$\left\Vert A^Tx \right\Vert^2$$ 2) I then applied the rule $\left\Vert x \right\Vert^2 = \left<x,x\right>$ $$\left<A^Tx,A^Tx\right>$$ 3) Since this is just the dot product, I removed the brackets $$A^Tx \cdot A^Tx$$ 4) I took the transpose $$(A^Tx \cdot A^Tx)^T = Ax \cdot Ax$$ 5) I then placed the brackets since the inner product is just the dot product $$\left<Ax , Ax\right>$$ 6) I then used the rule $\left\Vert x \right\Vert^2 = \left<x,x\right>$ again $$\left\Vert Ax \right\Vert^2$$ 7) I took the square root $$\left\Vert Ax \right\Vert$$ Is this the correct way to prove that $\left\Vert A^Tx \right\Vert = \left\Vert Ax \right\Vert$?
$\Vert A^Tx \Vert^2 = \langle A^T x, A^T x \rangle = \langle x, (A^T)^T A^T x \rangle = \langle x, AA^T x \rangle$ $= \langle x, A^T Ax \rangle = \langle (A^T)^T x, Ax \rangle = \langle Ax, Ax \rangle = \Vert Ax \Vert^2, \tag 1$ whence $\Vert A^T x \Vert = \Vert Ax \Vert. \tag 2$
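A numerical sanity check (in Python with NumPy), using a circulant matrix as a hypothetical example of a normal matrix that is not symmetric:

    import numpy as np

    rng = np.random.default_rng(0)
    c = rng.standard_normal(4)
    # circulant matrix: each row is the previous one cyclically shifted
    A = np.vstack([np.roll(c, k) for k in range(4)])
    x = rng.standard_normal(4)

    assert np.allclose(A.T @ A, A @ A.T)  # A^T A = A A^T (A is normal)
    assert np.isclose(np.linalg.norm(A.T @ x), np.linalg.norm(A @ x))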
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the answer to this infamous "Common Core" question? The following question (number 15 of this test) has become infamous as a poor "Common Core" question. What is the correct answer? Juanita wants to give bags of stickers to her friends. She wants to give the same number of stickers to each friend. She's not sure if she needs 4 bags or 6 bags of stickers. How many stickers could she buy so there are no stickers left over?
The question is terribly worded. Here is another solution which is consistent with the wording: The only way Juanita can be unsure whether she needs 4 or 6 bags is if she has exactly 1 or 2 friends; any other number of friends does not divide evenly into both 4 and 6 bags and she would know in those cases that at least one choice was inappropriate. The question clearly states that Juanita has more than one friend, so she must have two. To ensure that no stickers are left over, she should buy the smaller of the two allowed choices, i.e. four bags.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 7, "answer_id": 4 }
In showing the convergence of a sequence, does it matter how one chooses $n$? I am doing an exercise in Abbott's Understanding Analysis textbook (question 2.2.1.b). I want to show that: $\displaystyle\lim\frac{3n+1}{2n+5}=\frac{3}{2}.$ Now, I've begun by first working with the inequality I need: $\displaystyle\left|\frac{3n+1}{2n+5}-\frac{3}{2}\right| < \epsilon.$ Simplifying the LHS, I get the inequality: $\displaystyle\left|\frac{3n+1}{2n+5}-\frac{3}{2}\right|=\frac{13}{2|2n+5|} < \epsilon.$ Now my reasoning is as follows. If I start with an $\epsilon$ larger than $\displaystyle\frac{13}{2|2n+5|}$, certainly it would satisfy the original inequality I want. Then I can simply solve for $n$ here to find out how big I need $n$ to be given some $\epsilon$. Now, testing it out: if I choose $\epsilon=0.01$, my inequality tells me I need $n>325$ so that my sequence is in this $\epsilon$ neighborhood. But with a simpler inequality, I can have: $$\displaystyle\left|\frac{3n+1}{2n+5}-\frac{3}{2}\right|<\left|\frac{3n+1}{2n}-\frac{3n}{2n}\right|=\frac{1}{2n}<\epsilon.$$ Now I notice that with this more "optimal" inequality, I just need $n>50$. Of course my first choice with $n>325$ would allow my sequence to be in the $\epsilon$ neighborhood I wanted, but $n>50$ for the first one isn't good enough. Technically, does it matter how one chooses the inequality? Would the first one have been proper?
Either one would be fine. As long as you prove that for sufficiently large values of $n$, $\left|\frac{3n+1}{2n+5}-\frac{3}{2}\right|<\epsilon$, then you're done. It doesn't matter how large $n$ needs to be. As long as it is proved for sufficiently large $n$, no matter what the bound is, the proof is fine. However, on re-look there is a mistake in your second proof. You say $$\displaystyle\left|\frac{3n+1}{2n+5}-\frac{3}{2}\right|<\left|\frac{3n+1}{2n}-\frac{3n}{2n}\right|=\frac{1}{2n}$$ but actually, $\left|\frac{3n+1}{2n+5}\right|<\left|\frac{3}{2}\right|$ for sufficiently large $n$, which means that the LHS is actually $-\left(\displaystyle\frac{3n+1}{2n+5}-\frac{3}{2}\right)$. I can't see many ways to make the limit much simpler to evaluate (unless you divide the numerator and denominator by $n$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2572972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculation of Chern number of $U(N)$ principal bundle I am considering $U(N)$ principal bundle on a two-dimensional sphere $S^2$. (Below the fiber is $N\times N$ unitary matrices for simplicity, and any generalization to general representations will be appreciated.) The bundle is defined by the transition function $t(\theta)\in U(N)$ along the equator $S^1$ parametrized by $\theta\in[0,2\pi)$. I am trying to calculate the first Chern number of this bundle and wondering whether it is generally possible to do a (globally defined, of course) gauge transformation to get an equivalent principal bundle defined by $\tilde{t}(\theta)$ so that \begin{eqnarray} \tilde{t}(\theta)\in \left[U(1)\right]^N, \end{eqnarray} where $\left[U(1)\right]^N$ is the Cartan subgroup of $U(N)$, namely diagonal matrices. Does such gauge transformation exist generally? If so, the first Chern number can be always calculated by abelian subgroup.
Yes. And you can do better. Observe that the clutching function is determined by its homotopy class $t\in \pi_1(U(N))$. Note that there is a homeomorphism $U(N)\cong U(1)\times SU(N)$. Since $SU(N)$ is simply connected we get $\pi_1(U(N))\cong \pi_1(U(1))$, so we can choose $t$ to factor through the inclusion $U(1)\hookrightarrow U(N)$ up to homotopy as $t:S^1\xrightarrow{\bar t}U(1)\hookrightarrow U(N)$, and $U(1)$ certainly lies inside the diagonal subgroup $U(1)^N$. So yes, the first Chern class of any bundle over $S^2$ (in fact over any 2- or 3-dimensional CW complex) is determined by the abelian subgroup $U(1)$. For higher dimensional spaces this will not always be true. Note that this is not the same as the diagonal $U(1)$-subgroup, consisting of matrices $\lambda\cdot I_n$ with $\lambda \in S^1$. The inclusion of this subgroup induces multiplication by $N$ on $\pi_1$, since it factors as $\pi_1(U(1))\xrightarrow{\Delta}\pi_1(U(1)^N)\hookrightarrow \pi_1(U(N))$ where the first map is induced by the diagonal. This means that $t$ is deformable to a map in the image of this inclusion if and only if it has degree $0\mod N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $3^n - 4(2^n) + (-1)^n + 6 \equiv 0 \mod 24 $ Is it possible to prove that $3^n - 4(2^n) + (-1)^n + 6 \equiv 0 \mod 24 $ for $n \geq 1 $ . I know that it is true because $ \frac{3^n - 4(2^n) + (-1)^n + 6}{24}$ represents the number of ways to uniquely $4$-colour an n-cycle , excluding permutations of colours.
Using weak induction, if $f(n)=3^n - 4(2^n) + (-1)^n + 6,$ $$f(m+2)-f(m)=3^m(3^2-1)-4\cdot2^m(2^2-1)$$ which is clearly divisible by $3\cdot8$ for $m\ge1$ So, $24\mid f(m)\iff24\mid f(m+2)$ Now establish the base cases $f(1),f(2)$
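A quick check (in Python) of the base cases and many more values of $n$:

    for n in range(1, 41):
        assert (3**n - 4 * 2**n + (-1)**n + 6) % 24 == 0
    print("holds for n = 1, ..., 40")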
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Estimating distance to travel to each household in a county TL;DR: What's the minimum distance one has to travel to visit each node in a square matrix in which the nodes are D distance apart? I ask because I'm working on a fun feature on how long it takes Santa to reach every child in the country, using a Travelling Salesman algorithm to map his route through the counties. But I need to estimate how long he needs to spend in each county. For each county, we know: * *The approximate number of children he needs to reach (Those age 9 and under, times the 92 percent of Americans who celebrate Christmas.) *The square mileage of the county Given that this is mostly an exercise in TSP, I don't need a deeply precise value here. (We don't know how many children live in the same household or the specific population density of each county.) So I'm assuming the children are roughly evenly distributed across 50% of the county's land mass. The question is: How do I approximate the time it takes to reach each node in an arbitrary amount of space, assuming those nodes are neatly positioned? (After all, we can't run a TSP for each of the 3,000+ counties as well, even if we had everyone's address!) My best guess is to imagine the N nodes arranged in a square matrix. Then, the distance between two nodes is sqrt(area * 0.5) / sqrt(N)--I think. Then you imagine that you (Santa) have to travel to each node. I think this is just N steps since it's a neat square matrix, unless I'm deeply miscalculating the number of edges in a square matrix. Thus, I came up with: sqrt(area * 0.5) / sqrt(N) * N Unfortunately, it's difficult to fact-check this sort of calculation since it's imaginary!
If you have $N$ nodes in a square matrix it takes $N-1$ times the distance between neighboring nodes to visit them all. You can't get from one to another in less distance, and a snaking path will keep you at that minimum. Your grid is $\sqrt N \times \sqrt N$ so the area is $N$ times the square of the spacing, which gives the distance between neighboring nodes as $\sqrt{\frac {area}N}$ and total travel of $\sqrt{N \cdot area}$
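A sketch (in Python) of the per-county estimate under the stated assumptions (households on a square grid covering half the county, visited with a snaking path); the function name and inputs are hypothetical:

    from math import sqrt

    def county_travel_miles(num_children, area_sq_miles, occupied_fraction=0.5):
        n = max(num_children, 1)
        spacing = sqrt(area_sq_miles * occupied_fraction / n)
        return (n - 1) * spacing  # snaking path: N - 1 hops of one spacing

    print(county_travel_miles(10_000, 400))  # a hypothetical county: ~1414 miles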
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral over solid angle in Cartesian coordinates I have an integral that is an average of some (unknown) function $f$ over solid angle: $$\bar{f} = \frac{1}{4\pi} \iint\limits_\Omega f \sin\theta~\mathrm{d}\theta~\mathrm{d}\phi$$ I use the physics convention where $\theta$ is the polar angle and $\phi$ is the azimuthal angle. For computational reasons, it would be useful to recast this as a volume integral in Cartesian coordinates (the exact reasons are somewhat out of the scope of this question). My hangup is that I essentially want to recast a surface integral as a volume integral. The divergence theorem comes to mind, but I'm not sure how to do the transformation without making any assumptions about $f$. My attempt: If we write the equation as $$ \bar{f} = \frac{1}{4\pi} \iint\limits_S f \left(\frac{\hat{r} \cdot \hat{n}}{r^2}\right) \mathrm{d}S $$ for a surface $S$ (a formula I found here), we could define $\vec{F} = f \hat{r} / r^2$ and then use the divergence theorem to write as a volume integral. Does this make any assumptions about $f$? What exactly is the surface $S$ in this context?
The most obvious choice for the surface is a sphere of radius $1$. For the sphere $\hat n=\hat r$, so $\hat r\cdot\hat n=1$. Also $r=1$, so $1/r^2=1$ on the surface. Note that you still want to use your expression for $\vec F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating $\int_0^\pi \log(1-2a\cos (x)+a^2)\,dx$ I need to calculate this integral using a Riemann sum. $$\int_0 ^\pi \log(1-2\alpha \cos (x) +\alpha^2){\rm d}x$$ a). For $|\alpha|<1$; b). For $|\alpha| > 1$. I know one way of computing this using substitutions and symmetries, but it is necessary to do it with a Riemann sum. Help please. UPD: Also, I know that the decomposition of the polynomial $a^{2n}-1$ into quadratic factors is helpful.
I don't think it's possible to solve such a hard integral with a Riemann sum. Anyway, by Cauchy's theorem we have $$I=\oint_{|z|=1} \frac{\log(a-z)}{z}dz = 2\pi i\log(a).$$ (assuming $|a| > 1$ so that the branch point is outside the contour). Now let's parametrise the function using $z= e^{it}$ for $0 \leq t \leq 2\pi$: $$I = \int_0^{2\pi} \frac{\log(a - e^{it})}{e^{it}}i e^{it} dt = i\int_0^{2\pi}\log(a - e^{it})\ dt.$$ Now, $\log(a - e^{it}) = \log(a - \cos t - i \sin t) = \log(\sqrt{(a-\cos t)^2 + \sin^2 t}\ \exp(i \arctan \frac{-\sin t}{a - \cos t}))$ which is equal to $$\frac{1}{2}\log(1+ a^2- 2a\cos t) + i\arctan\frac{-\sin t}{a - \cos t}$$ assuming that we chose a nice branch of the complex logarithm. Now since we know that $I$ is imaginary we can safely discard the imaginary part of the integral to find: $$2\pi \log a = \frac{1}{2} \int_0^{2\pi} \log(1 + a^2 - 2a\cos t)\ dt = \int_0^\pi \log(1+a^2 -2a\cos t)\ dt.$$
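A numerical check (in Python with SciPy) of both closed forms: the integral is $2\pi\log|a|$ for $|a|>1$ and $0$ for $|a|<1$.

    from math import log, cos, pi
    from scipy.integrate import quad

    def integral(a):
        return quad(lambda x: log(1 - 2 * a * cos(x) + a * a), 0, pi)[0]

    print(integral(3.0), 2 * pi * log(3.0))  # ~6.9029 and 6.9029
    print(integral(0.5))                     # ~0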
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Turning sum to integral representation I'd like to turn this sum: \begin{align}\sum_{n=0}^{\infty} \frac{x^{n+1}}{3^{n+1}(n+1)} \end{align} into an integral $\displaystyle \int_{a}^{b} g(x) \space dx$. There seem to be many methods to either change or approximate sums as integrals, so I've become confused about which approach would work. In Is it possible to write a sum as an integral to solve it? robjohn used $\int_0^\infty e^{-nt}\,\mathrm{d}t=\frac1n$ which looks similar to a Laplace transform. I can't see how he gets rid of the $n$'s, so I'm not able to apply it here, otherwise it seems promising. But looking elsewhere there are also approximation methods such as: Turning infinite sum into integral which is even more obscure, at least to me. How do I convert this sum to an integral?
Well, you could write $$\frac{1}{n+1} = \int_0^1 t^n\; dt$$ so (for $|x| < 3$) your sum becomes $$ \eqalign{\sum_{n=0}^\infty &\left(\frac{x}{3}\right)^{n+1} \int_0^1 t^n\; dt\cr = & \frac{x}{3} \int_0^1 \sum_{n=0}^\infty \left(\frac{xt}{3}\right)^n \; dt\cr = & \frac{x}{3} \int_0^1 \frac{dt}{1-xt/3}\cr = & \ln\left(\frac{3}{3-x}\right)\cr }$$
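A numerical check (in Python) that the partial sums approach $\ln\left(\frac{3}{3-x}\right)$ for $|x|<3$, e.g. at $x=2$:

    from math import log

    x = 2.0
    s = sum(x ** (n + 1) / (3 ** (n + 1) * (n + 1)) for n in range(200))
    print(s, log(3 / (3 - x)))  # both ~1.0986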
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Show that $f\left(\frac{x_1+x_2}{2}\right)\leq\frac{f(x_1)+f(x_2)}{2}$ Let $f$ be twice differentiable in $(a,b)$ and $f''>0$ in the same interval. If $a<x_1<x_2<b,$ show that $$f\left(\frac{x_1+x_2}{2}\right)\leq\frac{f(x_1)+f(x_2)}{2}.$$ I'm not sure how to start. I know at least that $f'$ is strictly increasing in $(a,b).$ Should I check separately for the cases $x_1\neq x_2$ and $x_1=x_2$? I'm supposed to use the mean value theorem, but I don't know how. Any insight on how to start?
What you want here is the convexity of $f$. Since you know that $f'$ is monotonically increasing, I will outline a proof for you of how to show $f$ is convex. Proof outline. Let $a< x < y < b$ and consider the slope $m$ of the line $L$ joining the points $(x,fx)$ and $(y,fy)$. By the mean value theorem, there exists a point $c\in (x,y)$ such that $f'(c) = m$. Suppose that there is a point $\theta\in(x,y)$ such that $f(\theta)>L(\theta)$. Consider separately the two cases where $\theta\in(x,c)$ and $\theta\in(c,y)$ and derive contradictions. (Draw pictures.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Minimize interpolation error for $\sin(x)$ on $[0,\pi]$ The interpolation error $R(x) = f(x) - L(x)$ of the interpolation polynomial $L$ is given for $x_1 \le x \le x_n$ by $$R(x) = {f^{(n)}(\xi) \over n!} \prod_{i=1}^n (x - x_i)$$ where $x_1 < \xi < x_n $ if the $x_i$ are sorted in ascending order. Find (two) sampling points $x_i$ such that $\sin(x)$ is interpolated on $[0, \pi]$ with a first-degree polynomial as precisely as possible (w.r.t. the maximum norm). My approach was basically to minimize $$R(x) = {-\sin(\xi) \over 2} (x-x_1)(x-x_2)$$ (ignoring the first term at first). I found that $f(x) = (x-x_1)(x-x_2)$ has a local minimum at $x_0 = {x_1 + x_2 \over 2}$. Nevertheless $x_1=0$ and $x_2 = \pi$ on the boundary should also be taken into consideration. I don't see right now how I can continue from that.
Your idea in general fails because $\xi$ is a function of $x$. The equioscillation theorem tells you that the minimizer is uniquely characterized by the requirement that there be three points $a,b,c$ where the error has the same absolute value but alternating sign. By the symmetry this can be achieved by simply taking the interpolating line to be the constant $p(x)=\tfrac12$: the error $\sin x-\tfrac12$ then takes the values $-\tfrac12, +\tfrac12, -\tfrac12$ at $0,\pi/2,\pi$, so the requirement is satisfied. The corresponding sampling points are where the line meets the curve, i.e. where $\sin x = \tfrac12$: $x_1=\pi/6$ and $x_2=5\pi/6$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derive Barycentric coordinate distance formula Please pardon the poor formatting. (I'll work on learning it in time; I just started this account to seek help with this question.) I've recently started learning about affine geometry and Barycentric coordinates, and I have a question regarding the distance formula for Barycentric coordinates. The Wikipedia page on Barycentric coordinate system gives two versions of this formula, and while I have no trouble proving the first, (first I took the dot product of the displacement vector $PQ$ while setting $A$ to the origin, much as the author of the Mathematical Gazette, cited by Wikipedia, did; I also proved it by setting the origin to the circumcenter of triangle $ABC$. Also, I followed another citation in said article which should have led to an answer-but alas, that article stated the result without even a "proof is obvious.") my "proof" of the second relies on some (very simple) algebraic manipulation which lacks geometric intuition/motivation. Yes, it works, but there should be a better argument. (Both forms are written below.) Essentially, my question is this: can anyone help me prove the second form, but without first proving the first form? (Presumably, such a proof would provide the geometric intuition I'm looking for.) I've been stuck on this for days and it's starting to get to me-I've tried many different approaches. Setting: Triangle $ABC$ is positively oriented; $P, Q$ are vectors in the plane of $ABC$, with $P, Q$ having normalized/homogeneous Barycentric coordinates $P= [p_1, p_2, p_3], Q= [q_1,q_2,q_3].$ Thus, displacement vector $PQ= [q_1-p_1,q_2-p_2,q_3-p_3]=[x,y,z],$ with $x+y+z=0.$ Form $1$: (no problems here) $\textrm{dist}(P,Q)^2 = -yza^2-xzb^2-xyc^2.$ Form $2$: (subject of my question-and yes, I'm familiar with the polarization identity and its relation to the coefficients below-also familiar with the circumcenter's Barycentric coordinates and the similarity to those coefficients but I'm not sure how to relate the two in a proof.) $\textrm{dist}(P,Q)^2 = \frac12\{(b^2+c^2-a^2)x^2 + (a^2+c^2-b^2)y^2 + (b^2+a^2-c^2)z^2\}.$ Thanks for any help/guidance-it's much appreciated. This one has me stumped.
This is a nice problem. First, we assume we are in an affine plane over an inner product space such as the Euclidean plane. This means that the (inner) dot product defines a distance measure of line segments by $\;\textrm{dist}(P,Q)^2 = |PQ|^2 = (Q-P)\cdot(Q-P).\;$ Now given a triangle of reference $ABC$ with sides $\;a,b,c\;$ we have $\;a^2=|BC|^2,\;b^2=|AC|^2,\;c^2=|AB|^2.$ We want the length of a line segment $\;PQ=Q-P=xA+yB+zC,\;$ where $\;0 = x+y+z.\;$ Now $\;|PQ|^2 = (xA+yB+zC)\cdot(xA+yB+zC) = (x+y+z)(|A|^2x+|B|^2y+|C|^2z) + T,$ where $T = -yz|B-C|^2-xz|A-C|^2-xy|A-B|^2 = -yza^2-xzb^2-xyc^2.\;$ Since $\;0 = x+y+z,\;$ then $\;|PQ|^2=T\;$ which proves form $1$. The linear space of quadratics with basis $(x^2,xy,y^2),$ assuming that $\;0=x+y+z,\;$ also has bases $(xy,xz,yz)\;$ and $\;(x^2,y^2,z^2).\;$ We used one of them for form $1$. Using the other basis, we suppose that $\;|PQ|^2=ux^2+vy^2+wz^2.\;$ But $\;a^2=|BC|^2=v+w,\;$ $b^2=|AC|^2=u+w,\;$ $c^2=|AB|^2=u+v.\;$ Solving for $\;u,v,w\;$ proves form $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2573957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that $\alpha^3 = 10\alpha - 24$ given that $\alpha + \beta = 4$ and $\alpha \beta = 6$ How to show that $\alpha^3 = 10\alpha - 24$ given that $\alpha + \beta = 4$ and $\alpha \beta = 6$ As given in the title. I tried shifting the RHS to the left but I'm not sure if factorization is the way to go. There are earlier parts to this question but I'm not sure if they will be helpful so I'm excluding them. Thank you. (Question is under sum and product of roots for a quadratic equation.)
Sum of roots = $ \alpha + \beta + \gamma =$ $ \frac{-b}{a} = \frac{-0}{1} = 0 $ But, $ \alpha + \beta = 4 $ $\implies \gamma = -4 $ Product of roots $\alpha\beta\gamma = $ $ \frac{-d}{a} = -24 $ Sum of roots taken two at a time = $ \alpha\beta + \beta\gamma + \gamma\alpha =$ $ \frac{c}{a} = \frac{-10}{1} = -10 $ Now, for a cubic equation: $$ x^3 - (\alpha+\beta+\gamma)x^2+(\alpha\beta + \beta\gamma + \gamma\alpha )x - (\alpha\beta\gamma)=0 $$ Or $$ \alpha^3-10\alpha+24=0 $$
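For comparison (not part of the answer above), a direct route that uses only the two given relations: $\alpha$ is a root of $x^2-(\alpha+\beta)x+\alpha\beta = x^2-4x+6$, so $$\alpha^2 = 4\alpha - 6, \qquad \alpha^3 = 4\alpha^2 - 6\alpha = 4(4\alpha-6)-6\alpha = 10\alpha - 24.$$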
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Evaluate perfect $\sqrt[3]{X}$ Is there a method of working out the perfect cube root of a 3 digits number? Working out the perfect cube root of a 2 digits number I know. An example of $$\sqrt[3]{12167}$$ * *Cross out $16$ (always two digits before the last digit) *the cube root of $12$ is between $2$ and $3$, so it is $2$ *$27$ or $23$ *$23$ because $3^3=27$
Solving equations is equivalent to root finding (as in points where a function equals zero, not the kind of root in the question). For any monotonic function a good way to find an integer root if you know the number of digits is to progressively start at the maximal digit and increase one by one until you find it changes sign or equals 0. The digit just before it changes sign is that nth digit. Then continue down until you have the precision you desire or obtain a root. This is actually how I set a variable resistor in a lab experiment, and that's how I know this algorithm works (assuming anything beyond monotonicity in the experiment's relationships would have resulted in a sort of circular reasoning). This is, I believe, similar to an array searching algorithm. Another method would be binary search (check the middle element and keep progressively cutting the search region in half). This is in fact just me giving a couple of simple methods. Root finding is a large area of study and I would suggest reading about numerical methods in general to see if a quick preferred method jumps out at you. Tricks are nice but they tend to be greedy algorithms, meaning that they can be incorrect at the gain of fast computation.
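A sketch (in Python) of the digit-by-digit search described above: for each decimal place, increase the digit until cubing would overshoot.

    def integer_cube_root(n):
        digits = (len(str(n)) + 2) // 3   # number of decimal digits of the root
        root = 0
        for place in range(digits - 1, -1, -1):
            step = 10 ** place
            while (root + step) ** 3 <= n:
                root += step
        return root

    print(integer_cube_root(12167))  # 23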
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
In the triangle $ABC$ $R = \frac56 BH = \frac52OH$. Find the angles $ACB$ or $BAC$ In the triangle $ABC$, the height $BH$ is drawn, the point $O$ is the center of the circle circumscribed about it, the length of its radius $R$. Find the smallest of the angles $ACB$ and $BAC$, expressed in radians, if it is known that $R = \frac56 BH = \frac52OH$ My work so far: 1) In triangle $BOH$ $BO=R, BH=\frac65R, OH=\frac25R$. Then I can find $\angle BOH, \angle BHO$ and $\angle OBH$ 2) I proved that $\angle ABH= \angle OBC=90^{\circ}-\alpha$, where $\alpha=\angle A$
The hint. In $\Delta HOB$ we know that $OH=\frac{2}{5}R$, $BH=\frac{6}{5}R$ and $BO=R$. Thus, by the law of cosines we obtain $$\cos\measuredangle HBO=\frac{1+\frac{36}{25}-\frac{4}{25}}{2\cdot\frac{6}{5}},$$ which gives $$\cos\measuredangle HBO=\frac{19}{20}.$$ On the other hand, $\measuredangle HBO=|\alpha-\gamma|$, which gives $$\cos(\alpha-\gamma)=\frac{19}{20}.$$ Now, $$BH=c\sin\alpha=2R\sin\alpha\sin\gamma.$$ Thus, $$R=\frac{5}{6}\cdot2R\sin\alpha\sin\gamma$$ or $$\sin\alpha\sin\gamma=\frac{3}{5}.$$ I hope the rest is smooth because $$\cos(\alpha+\gamma)=\frac{19}{20}-2\cdot\frac{3}{5}=-\frac{1}{4}.$$ I got the following value. $$\frac{\pi}{2}-\frac{\arccos\frac{19}{20}+\arccos\frac{1}{4}}{2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why can we simply substitute the constraint in when maximizing this equation? I am working on some Portfolio Analysis related material at the moment, and am trying to understand why the following approach to solving this maximization problem is correct: $\max _x \frac{\mu' x-r_f}{(x' \varSigma x)^{0.5}} $ subject to $\mathbf{1}'_Nx=1$, where $\varSigma$ is a positive definite $N\times N$ matrix, $r_f \in \mathbb{R}$, and $x,\mu \in \mathbb{R}^N$ To solve this, we substitute the add-up constraint into the objective function $\frac{\mu' x-r_f}{(x' \varSigma x)^{0.5}} $ to obtain an unconstrained maximization problem. Because $\mathbf{1}_N'x=1$, we can write $r_f = r_f \mathbf{1}_N'x$. Combining this, the objective function becomes $\beta = \frac{\mu' x-r_f\mathbf{1}_N'x}{(x' \varSigma x)^{0.5}} $. Now what they do is find the zero point of the derivative of this function, and conclude that this must be the maximum. However it is not clear to me why a possible zero point of the $\beta$ function would fulfill the condition $\mathbf{1}'_Nx=1$ and why it should necessarily be a maximum. Is there something I am missing here or is this just a very shoddy proof? If so, how would one go about proving it correctly?
There is no need for it to sum up to $1$. The problem is homogeneous in $x$ meaning that if you replace $x$ with $tx$, the objective is the same. Hence, once you have a solution to the problem, you can always scale it suitably afterwards. The only important thing is the relative size of the elements in the allocation.
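A quick check (in Python with NumPy, using hypothetical inputs) of the homogeneity claim: the substituted objective is unchanged under positive scaling of $x$, so the budget constraint can be restored by rescaling afterwards.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, rf = rng.standard_normal(4), 0.02
    B = rng.standard_normal((4, 4))
    Sigma = B @ B.T + np.eye(4)                 # positive definite

    def beta(x):                                # objective with r_f replaced by r_f * 1'x
        return (mu @ x - rf * x.sum()) / np.sqrt(x @ Sigma @ x)

    x = rng.standard_normal(4)
    assert np.isclose(beta(x), beta(3.7 * x))   # invariant under positive scaling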
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Help with trigonometric proof Show that $\frac{\cos 24^{\circ}}{\cos 6^{\circ}}+2\sin 24^{\circ}=\sqrt{3}$
We need to prove that $$\cos24^{\circ}+2\sin24^{\circ}\cos6^{\circ}=2\sin60^{\circ}\cos6^{\circ}$$ or $$\cos24^{\circ}+\sin30^{\circ}+\sin18^{\circ}=\sin66^{\circ}+\sin54^{\circ}$$ or $$\sin66^{\circ}+\frac{1}{2}+\sin18^{\circ}=\sin66^{\circ}+\sin54^{\circ}$$ or $$\sin54^{\circ}-\sin18^{\circ}=\frac{1}{2},$$ which is true because $$\sin54^{\circ}-\sin18^{\circ}=2\sin18^{\circ}\cos36^{\circ}=2\cos36^{\circ}\cos72^{\circ}=$$ $$=\frac{4\sin36^{\circ}\cos36^{\circ}\cos72^{\circ}}{2\sin36^{\circ}}=\frac{2\sin72^{\circ}\cos72^{\circ}}{2\sin36^{\circ}}=\frac{\sin144^{\circ}}{2\sin36^{\circ}}=\frac{1}{2}.$$
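A numerical sanity check (in Python) of the identity:

    from math import cos, sin, sqrt, radians

    lhs = cos(radians(24)) / cos(radians(6)) + 2 * sin(radians(24))
    print(lhs, sqrt(3))  # both ~1.7320508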
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it closed and compact? $A= f(B) \subseteq X$ where $B =\{(x,y)\in \mathbb{R}^2 \mid 1≤ x^2 +y^2 ≤ 2\}$. $X$ is an arbitrary topological space and $f :\mathbb{R}^2 \rightarrow X $ is an arbitrary continuous map. Which of the following is correct? * *open *closed *compact *connected I know that the continuous image of a closed set is closed, as the continuous image of a compact set is compact, so only options 2 and 3 are correct. Is that right? Please tell me where I'm wrong; I would be thankful for any hints or a solution. Thanks in advance
$B$ is compact $\implies$ $f(B)$ is compact $(\because$ continuous image of compact set is compact.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find $\lim_{x \to 0} (\frac{\tan(x)}{x})^{\frac{1}{x}}$ Find $$\lim_{x \to 0} (\frac{\tan(x)}{x})^{\frac{1}{x}}$$ My first idea to solve this was to try to evaluate it and then apply L'Hospital's rule. This is what I managed to achieve: $$\left(\frac{\tan(x)}{x}\right)^\frac{1}{x}=e^{\frac{1}{x}\ln\left(\frac {\tan(x)}x \right)} $$ However, this is problematic, because L'Hospital's rule is not applicable in the exponent, so my transformation was not a good shot. What transformation should I apply to get the desired form?
If you want to use L'Hospital's rule, I would note that the limits of these two functions are equal: $\frac{\ln(\frac{\tan x}{x})}{x}$ and $\frac{(\tan^2x+1)x-\tan x}{x\tan x}$, and then rewrite the second term as $\tan x + \frac{x-\tan x}{x\tan x}$, noting that $\tan x \to 0$. It remains to find $\frac{x-\tan x}{x\tan x}$. But by applying L'Hospital's rule again, we can see that this is equivalent to $\frac{-\tan x}{1+\frac{x}{\tan x}(1+\tan^2x)}$, which goes to $0$ since $\frac{x}{\tan x}$ goes to $1$. So your limit is $e^0$ which is $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 5 }
Inverse of sum of nilpotent matrix and identity matrix Suppose $A$ is a $n\times n$ nilpotent matrix of index $m$. i.e. $A^m=0$ but $A^{m-1}\neq0$. Construct $M=A+\lambda I_n$. It is known that $M^{-1}=\sum_{i=1}^{m}\lambda^{-i}(-A)^{i-1}$ I want to prove $M^{-1}= B+\lambda^{-1}I_{n}$, where B is a nilpotent matrix of index $m$, i.e. $B^m=0$ but $B^{m-1}\neq0$. By the way, I want to get the characteristic polynomial and minimal polynomial of $M^{-1}$ It is easy to show $B^m=0$. However, I do not have a clue how to prove $B^{m-1}\neq0$.
Hint Writing out explicitly the first few terms of your summation expression gives $$M^{-1} = B + \lambda^{-1} I_n = \lambda^{-1} I_n - \lambda^{-2} A + p A^2$$ for some expression $p$ polynomial in $A$. So, we can write $B^{m - 1}$ as $$B^{m - 1} = (- \lambda^{-2} A + p A^2)^{m - 1} .$$ When expanding this expression, every term contains a factor of $A^m$ (and hence by hypothesis is zero) except for $(-1)^{m-1}\lambda^{2 - 2m} A^{m - 1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Invertible 4x4 matrix $$ \begin{pmatrix} 5 & 6 & 6 & 8 \\ 2 & 2 & 2 & 8 \\ 6 & 6 & 2 & 8 \\ 2 & 3 & 6 & 7 \\ \end{pmatrix} $$ Is this matrix invertible? I would like to show that it is invertible but first I should find the det(Matrix) which should not be equal to zero. To find the determinant, maybe the best idea is to use row operations and find an upper triangular of zeroes and then multiply the numbers on the diagonal to get the determinant. I have been doing some row operations and get this: $$ \begin{pmatrix} 5 & 6 & 6 & 8 \\ 0 & -1 & -4 & 1 \\ 0 & 0 & 2 & 6 \\ -1 & 0 & 0 & -12 \\ \end{pmatrix} $$ I just need to get rid of the -1 on the last row. But I am stuck. Thank you for your assistance.
You may find it interesting to use the Cayley-Hamilton formula for $4\times4$ matrices: $$\det\mathsf{A} = \frac{1}{24}\left\{(\operatorname{tr}\mathsf{A})^4 - 6(\operatorname{tr}\mathsf{A})^2\operatorname{tr}(\mathsf{A}^2) + 3\left(\operatorname{tr}(\mathsf{A}^2)\right)^2 + 8\operatorname{tr}(\mathsf{A}^3)\operatorname{tr}\mathsf{A} - 6\operatorname{tr}(\mathsf{A}^4)\right\}$$
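A numerical check (in Python with NumPy) of the trace identity against the matrix from the question:

    import numpy as np

    A = np.array([[5., 6., 6., 8.],
                  [2., 2., 2., 8.],
                  [6., 6., 2., 8.],
                  [2., 3., 6., 7.]])
    t1, t2 = np.trace(A), np.trace(A @ A)
    t3, t4 = np.trace(A @ A @ A), np.trace(A @ A @ A @ A)
    det = (t1**4 - 6 * t1**2 * t2 + 3 * t2**2 + 8 * t1 * t3 - 6 * t4) / 24
    print(det, np.linalg.det(A))  # both -8, so the matrix is invertible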
{ "language": "en", "url": "https://math.stackexchange.com/questions/2574905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
How can I define a span-preserving, linear matrix transformation? I have a matrix $M \in \mathbb{R}^{n \times m}, n \gt m \geq 3$, where every column of $M$ is mutually orthogonal (so rank($M$)=$m$). I want to transform $M$ in such a way that the span of $M$ is preserved. Intuitively, this is a rotation, but when I've investigated the question How i can rotate a $m$ dimensional vector? it makes it seem like "rotation" must have a different sort of meaning in greater than 3 dimensions than how I understand it (eg, if $n-m=1$, the solution doesn't help me because the "axis" of rotation needs to be $\mathbb{R}^{n-2 \times n}$?) Basically, I want a family of matrices $R \in \mathbb{R}^{n \times n}$, such that span($RM$)=span($M$) for arbitrary dimension. How can I get that?
First of all, it may interest you to know that $\operatorname{span}(MR) = \operatorname{span}(M)$ as long as $R$ is invertible (of size $m \times m$). To answer the question that you posed, namely find the family of matrices $R \in \mathbb{R}^{n \times n}$, such that span($RM$)=span($M$) for arbitrary dimension, here is a solution I like: Let $M_0$ be the matrix whose columns are the normalized columns of the matrix $M$ (so that the columns of $M_0$ form an orthonormal basis of the column space of $M$). Extend these columns into an orthonormal basis of $\Bbb R^n$ (e.g. by the Gram Schmidt process) to obtain the columns of the matrix $$ \tilde M = \pmatrix{M_0 & M_1} $$ A matrix $R$ will preserve the span of $M$ if and only if it can be written in the form $$ R = \tilde M S \tilde M^T =\pmatrix{M_0 & M_1}\pmatrix{S_{11} & S_{12}\\0&S_{22}}\pmatrix{M_0 & M_1}^T $$ where $S$ is a block-matrix partitioned so that $S_{11}$ is an invertible $m \times m$ matrix and $S_{22}$ is $(n-m) \times (n-m)$. To put it another way, $R$ will preserve the column space in question if and only if the matrix $$ S = \tilde M^T R \tilde M $$ is block-upper-triangular, i.e. partitioned in the manner described above, with $S_{11}$ invertible. To put it yet another way, $R$ will preserve the column space in question if and only if we have $$ S_{21} = M_1^T R M_0 = 0 $$ and $S_{11} = M_0^TRM_0$ is invertible.
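A sketch (in Python with NumPy) of the construction above: orthonormalize, extend to a full basis via QR, build a block-upper-triangular $S$, and check that $R$ keeps the column space inside itself.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 6, 3
    M0 = np.linalg.qr(rng.standard_normal((n, m)))[0]   # orthonormal columns
    M_tilde = np.linalg.qr(np.hstack([M0, rng.standard_normal((n, n - m))]))[0]

    S = rng.standard_normal((n, n))
    S[m:, :m] = 0                                       # block-upper-triangular S
    R = M_tilde @ S @ M_tilde.T

    P = M0 @ M0.T                                       # projector onto span(M)
    assert np.allclose(P @ (R @ M0), R @ M0)            # R M stays in span(M)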
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
related rates sphere volume and area calculus problem I have been given the following problem: A spherical balloon is expanding at the rate of $60\pi$ in$^3$/sec. How fast is the surface area of the balloon expanding when the radius of the balloon is 4 inches? I don't understand how the way I set the problem up is not giving me the correct answer. I have created an equation that links volume to area, did implicit diff. and now I should have only 2 variables, one of which I have the value for, dV/dt. please see image for how I set up the problem
We have that $$\frac{\mathrm dV}{\mathrm dt} = 4 \pi r^2\cdot \frac{\mathrm dr}{\mathrm dt} = 60\pi$$ which implies $$\frac{\mathrm dr}{\mathrm dt} =\frac{60\pi}{4 \pi r^2} = \frac{15}{r^2}$$ We know that $A = 4 \pi r^2$, and so we also have $$\frac{\mathrm dA}{\mathrm dt} = 8\pi r\cdot \frac{\mathrm dr}{\mathrm dt} $$ Plug in what we found for $\frac{\mathrm dr}{\mathrm dt}$ and plug in $r=4$ to get $$\frac{\mathrm dA}{\mathrm dt} = 8\pi r\cdot \frac{15}{r^2} = \frac{120\pi}r = \frac{120\pi}4$$ $$\frac{\mathrm dA}{\mathrm dt} = 30\pi$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$3$ mutually tangent circles $\textbf{Problem}$: In $\triangle{ABC}$, $AB=3$, $BC=4$, and $CA=5$. Additionally, we have mutually tangent circles $X$, $Y$, and $Z$ inside the triangle that are tangent to $\{AB, BC\}$, $\{BC, CA\}$, and $\{CA, AB\}$ respectively. Determine the sum of the radii of the circles $X$, $Y$, and $Z$. $\textbf{Thoughts}$: I drew a diagram. However, I do not know asymptote. I'm aware that drawing a diagram is best in these types of geometry problems, but my diagram (which was grotesque) did not help me. An addendum by Jack: the above diagram depicts Steiner's construction of the Malfatti circles of a $3-4-5$ triangle; the blue lines are the bitangents mentioned by the linked Wikipedia article.
The hint. Let $a$, $b$ and $c$ be the radii of the circles at $A$, $B$ and $C$ respectively. The tangent length from a vertex with angle $\theta$ to the circle inscribed in that angle is $r\cot\frac{\theta}{2}$ (which gives $2a$ at $A$, $b$ at $B$ and $3c$ at $C$ for the $3$-$4$-$5$ triangle), and the distance between the tangency points of two mutually tangent circles on a common tangent line is $2\sqrt{r_1r_2}$. Thus, we need to solve the following system. $$2a+2\sqrt{ab}+b=3,$$ $$b+2\sqrt{bc}+3c=4,$$ $$2a+2\sqrt{ac}+3c=5.$$
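A numerical solve (in Python with SciPy) of the system, substituting $u=\sqrt a$, $v=\sqrt b$, $w=\sqrt c$ to remove the square roots:

    from scipy.optimize import fsolve

    def eqs(p):
        u, v, w = p  # u = sqrt(a), v = sqrt(b), w = sqrt(c)
        return [2*u*u + 2*u*v + v*v - 3,
                v*v + 2*v*w + 3*w*w - 4,
                2*u*u + 2*u*w + 3*w*w - 5]

    u, v, w = fsolve(eqs, [0.8, 0.7, 0.9])
    a, b, c = u*u, v*v, w*w
    print(a, b, c, a + b + c)  # ~0.665, ~0.508, ~0.752, sum ~1.93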
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$(x_n)$ real, $(x_n) \to 0$ and $0<c<1$ So, I'm supposed to show that: Given $(x_n)$ a sequence of real numbers, with $(x_n) \to 0$, and given $0<c<1$, then $(y_n) \to 0$, where $y_n = c^n x_0 + c^{n-1} x_1 + ... + c^0 x_n$. Here's my attempt, and I'd appreciate any corrections: Since $|y_n| < c^n |x_0| + c^{n-1} |x_1| + ... + c^0 |x_n|$, I'll assume the $x_n$ are all positive and try to show that $y_n$ still goes to zero. So suppose that, for a given $d > 0$, there exist arbitrarily large $n$, such that $y_n > d$. Then, since $\forall n$, $$y_{n+1} = c (y_n) + x_{n+1} \implies y_{n+1} - y_n = (c-1) y_n + x_{n+1}$$, we would have $$y_{n+1} - y_n = (c-1) y_n + x_{n+1} < (c-1) d + x_{n+1}$$, and since $\exists N_0$ such that $n > N_0 \implies x_n < \frac{(1-c)}{2}d$, we would have, for $n > N_0$, $$y_{n+1} - y_n < (c-1) d + x_{n+1} < (c-1) d + \frac{(1-c)}{2}d = \frac{(c-1)}{2}d < 0$$. Assuming $y_{n+1} > d$ and continuing in this fashion, we would eventually get an $N$ such that $y_N \leq d$. Then, for the subsequent term, $$y_{N+1} = c \cdot y_{N} + x_{N+1} \leq c \cdot d + \frac{(1-c)}{2}d = \frac{(c+1)}{2}d$$ . If $y_{N+1} > d$, then $$y_{N+2} - y_{N+1} < \frac{(c-1)}{2}d \implies y_{N+2} < y_{N+1} + \frac{(c-1)}{2}d < \frac{(c+1)}{2}d + \frac{(c-1)}{2}d = c \cdot d < d$$. Therefore, for sufficiently large $N$, $n > N \implies y_n < 2 d$. Since $d$ was arbitrary, $(y_n) \to 0$.
Let me suggest a totally different approach. Suppose that $$ f(z)=\sum_{n=0}^\infty x_n z^n, \quad g(z)=\sum_{n=0} ^\infty c^nz^n, \quad\text{and then}\quad h(z)=f(z)g(z)=\sum_{n=0}^\infty y_nz^n, $$ where $\,y_n=x_n+cx_{n-1}+\cdots+ c^n x_0$. Then, the radius of convergence of $f$ and $g$ is at least 1, and hence at least 1 is the radius of convergence of $h$. Thus $y_n\to 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
A complicated limit involving floor function Let $f(x) = \lfloor x\lfloor1/x\rfloor \rfloor $ . Find $\lim_{x \to 0^{+} } f(x) $ and $\lim_{x \to 0^{-} } f(x)$ . I think $\lim_{x \to 0^{+} } f(x)$ doesn't exist but I have no idea about $\lim_{x \to 0^{-} } f(x)$ .
Check the following graph: https://www.desmos.com/calculator/8p190y7wr2 You will get the answer automatically. Hint: The limit as a whole is not defined, because the left-hand limit is not equal to the right-hand limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
For $\alpha$ a limit ordinal, show $V_{\alpha}=\bigcup_{\beta\lt\alpha}P(V_{\beta})$ For $\alpha$ a limit ordinal, I would like to show $V_{\alpha}=\bigcup_{\beta\lt\alpha}P(V_{\beta})$ where the $V$'s are members of the cumulative hierarchy and $P$ is the power set. (This is a continuation of Showing equivalence of definitions for Zermelo Hierarchy, where Max ostensibly provided a solution for a successor ordinal) In the cumulative hierarchy, for a limit ordinal $\alpha$, $V_{\alpha}:=\bigcup_{\beta\lt\alpha}V_{\beta}$, so I would like to show $\bigcup_{\beta\lt\alpha}V_{\beta}=\bigcup_{\beta\lt\alpha}P(V_{\beta})$ Can I say that for any $\beta\lt\alpha$, there is a $\beta'$ with $\beta\lt\beta'\lt\alpha$ such that $V_{\beta'}=P(V_{\beta})$? And conversely, for any such $\beta'$, there is a $\beta$ again with $V_{\beta'}=P(V_{\beta})$, establishing inclusion in both directions? Thanks.
The main idea for the reverse inclusion, I think, is that $V_\beta \subset P(V_\beta)$ because $V_\beta$ is transitive. For the first inclusion, what you said is enough; perhaps you should also say more clearly that (with your notations) $\beta' = \beta +1$, so that $\beta < \alpha \implies \beta' < \alpha$ (as $\alpha$ is a limit ordinal).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Non-split exact sequence of modules Find a module $M$ and a submodule $N$ such that $|M| = 100$, $|N| = 20$, $M$ is not cyclic, and the exact sequence $0 \rightarrow N \rightarrow M \rightarrow M/N \rightarrow 0$ does not split. I have tried $M = Z/2 \oplus Z/2 \oplus Z/5 \oplus Z/5$ and $N = Z/2 \oplus Z/2 \oplus Z/5$, but I don't know how to interpret $M/N$, let alone prove/disprove $M \cong M/N \oplus N$ (so that the sequence doesn't split). Any help? (explanation would be much appreciated)
We'll work with $\mathbb Z$-modules, i.e., abelian groups. Note $|M/N| = |M|/|N| = 5$ so $M/N$ is going to be cyclic of order $5$. Thus we can guarantee that the sequence does not split if $\mathbb Z/5$ is not a direct summand of $M$. One group that has $\mathbb Z/5$ as a quotient but not as a summand is $\mathbb Z/25$ so this is a good candidate to build $M$ from. To get the right order let's choose $M = \mathbb Z/4 \oplus \mathbb Z/25$. A subgroup of order $20$ inside $M$ is $N = \mathbb Z/4 \oplus 5\mathbb Z/25$. We know just by order that $M/N \simeq \mathbb Z/5$ and we have chosen $M$ so that this isn't a summand. So our SES won't split. If you want an explicit proof that it doesn't split note that $M/N$ is generated by $(0, 1) + N \in M/N$ so if there's a splitting then it comes from a homomorphism $\phi\colon\mathbb Z/5 \to \mathbb Z/4\oplus\mathbb Z/25$ satisfying $\phi(1) = (a, 1 + 5b)$ where $a \in \mathbb Z/4$ and $b \in \mathbb Z/25$. This means $\phi(0) = 5\phi(1)$ had better equal $(0, 0)$ if such a homomorphism is possible. Show that $5\phi(1) \neq (0, 0)$.
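To make the last step concrete, a brute-force search (a sketch) confirms that no element of $\mathbb Z/4\oplus\mathbb Z/25$ can serve as $\phi(1)$: it would have to be killed by $5$ while having second coordinate $\equiv 1 \pmod 5$, so that the composite $\mathbb Z/5 \to M \to M/N$ is the identity:

```python
# Search for phi(1) = (a, b) in Z/4 (+) Z/25 with 5*(a, b) = (0, 0)
# and b = 1 (mod 5); an empty result means the sequence cannot split.
candidates = [(a, b)
              for a in range(4) for b in range(25)
              if (5 * a) % 4 == 0 and (5 * b) % 25 == 0 and b % 5 == 1]
print(candidates)   # []
```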
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $\phi:\Bbb Z_n\rightarrow G$ s.t. $\phi(i)=h^i$ for $0\le i\le n$. Give necessary and sufficient condition for $\phi$ to be homomorphism. The exercise reads Let $G$ be a group, $h$ and element of $G$, and $n$ a positive integer. Let $\phi : \mathbb{Z}_n\rightarrow G$ be defined by $\phi(i)=h^i$ for $0\leq i\leq n$. Give a necessary and sufficient condition (in terms of $h$ and $n$) for $\phi$ to be a homomorphism. Prove your assertion. I always have problems with necessary and sufficient arguments, because I do not know how to prove. I know it implies an if and only if, but I'm not sure exactly over which is used. Now, if I look at the answer, it says: The map is a homomorphism if and only if $h^n=e$, the identity in $G$. But how do you know that I have to prove $h^n=e$? I would have thought that I have to prove $\phi(n+m)=\phi(n)\phi(m)$, so it's quite confusing to think that we have to prove $h^n=e$.
This community wiki answer is to point out that this comment followed by this comment, both posted above by @Timkinsella (who is invited to post his own answer), form an answer to the question. Summarising them: 1) Prove $\tilde{\phi}: \Bbb Z\to G, i\mapsto h^i$ is always a homomorphism. 2) Think of the condition on $\tilde{\phi}$ that would allow $\Bbb Z$ to be replaced by $\Bbb Z/n\Bbb Z$. Hint: First isomorphism theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2575989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Difference between changing coordinates and changing basis? If we have a vector $v$ then it has some coordinates w.r.t. a basis $B$, say $(a,b)$. What if we transform this into "polar coordinates"? Does this change the basis or just transform the coordinates into some new pair, giving us new coordinates for the same vector in the same basis? If we change basis I know we change coordinates, but can we change the coordinates and keep the basis? I am having trouble finding a basis for, say, polar coordinates, which makes me suspect that this is just some kind of bijection and we still have the same vector.
Changing coordinates linearly allows you to think of this as a change of basis on the whole Euclidean space. If you do some non-linear change of coordinates, like to polar, there is no such way to think about the coordinates themselves. However, there is a notion of 'tangent space' to a point, which is a vector space of tangent directions. For curves, for example, the tangent space is a line. In Euclidean space, the tangent space at every point is just a copy of Euclidean space with the origin shifted to that point. Changing coordinates linearly induces the same linear change of coordinates on tangent spaces. However, for non-linear changes of coordinates, one can compute the Jacobian matrix, which keeps track of the change of coordinates on the tangent space. If you evaluate the Jacobian matrix at the point, this gives the change of coordinates on tangent spaces. If you think of the derivative (and hence the Jacobian matrix) as the best linear approximation, then you can think of this matrix as telling you the closest possible linear coordinates near the chosen point. This is, I believe, as good as you can get.
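As a concrete computation (a sketch using SymPy), the Jacobian of the polar-coordinate map is easy to produce symbolically; evaluated at a point, it is the induced linear map on the tangent space there:

```python
# Jacobian of (r, theta) |-> (r cos(theta), r sin(theta)):
# the best linear approximation of the coordinate change at each point.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
polar_map = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
J = polar_map.jacobian(sp.Matrix([r, theta]))
sp.pprint(J)                    # [[cos(t), -r sin(t)], [sin(t), r cos(t)]]
print(sp.simplify(J.det()))     # r: invertible away from the origin
```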
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem with proof of Stone Representation Theorem I am reading Thomas Jech's Axiom of choice in which he gives a concise proof of the Stone Representation theorem using the (Boolean) Prime Ideal theorem. However he states something is trivial, which I am really struggling to see. I shall quote him verbatim: Stone Representation theorem: Every Boolean algebra is isomorphic to a set algebra. (Set algebra is an algebra on a family of sets with + = union, $\cdot$ = intersection, - = complement) Proof: Let $B$ be a Boolean algebra, let $$S = \{U:U \textrm{ is an ultrafilter on }B\}$$ for $u\in B$ let $\pi(u) = \{U\in S:u\in U\}$. Then it is easy to see that $$\pi(u+v)=\pi(u)\cup\pi(v)$$ $$\pi(u\cdot v)=\pi(u)\cap\pi(v)$$ $$\pi(-u) = \pi(u)^C$$ Now, I can understand the last line: $U \in \pi(-u)$ iff $-u \in U$ iff $u \notin U$ iff $U \in \pi(u)^C$ (as $U$ is an ultrafilter, exactly one of $u\in U$ and $-u \in U$ holds). But I cannot see how to show the other two lines. Any help?
Let $U\in S$ and $u,v\in B$. If $u,v\in U$, then $u\cdot v\in U$ since $U$ is a filter. Conversely, if $u\cdot v\in U$, then $u\in U$ and $v\in U$, since $u\cdot v\leq u$ and $u\cdot v\leq v$ and $U$ is a filter. Thus $\pi(u\cdot v)=\pi(u)\cap\pi(v)$. The other equation then follows formally: $$\pi(u+v)=\pi(-(-u\cdot-v))=(\pi(u)^C\cap\pi(v)^C)^C=\pi(u)\cup\pi(v).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does $(g^a \bmod n)^b = (g^b \bmod n)^a = g^{ab} \bmod n $? The Diffie–Hellman key exchange protocol relies on the fact that one person, Alice, can perform $(g^a \bmod n)^b $ and another person, Bob, can perform $(g^b \bmod n)^a $ and they will both arrive at the same number: $g^{ab} \bmod n $. This allows Alice and Bob to create a private, yet symmetric, key. Why is this identity true?
It seems that you are thinking of 'mod' as an operation, such as might be found in a programming language. If, instead, you restate the above identity in terms of congruences, it might be clearer: $$(g^a)^b \equiv g^{ab} \equiv (g^b)^a \pmod n$$
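A tiny numeric demonstration (a sketch; the numbers are toy values, far too small for real cryptography):

```python
# (g**a % n)**b % n == (g**b % n)**a % n == g**(a*b) % n
n, g = 23, 5            # public modulus and base
a, b = 6, 15            # Alice's and Bob's private exponents

A = pow(g, a, n)        # Alice publishes A
B = pow(g, b, n)        # Bob publishes B

shared_alice = pow(B, a, n)
shared_bob   = pow(A, b, n)
assert shared_alice == shared_bob == pow(g, a * b, n)
print(shared_alice)     # 2: the common key
```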
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Take complex roots and apply it to a new polynomial Polynomial $f(x)=x^3-x^2+x+18$ has three distinct complex roots $r_1$, $r_2$, and $r_3$. Denote by $g(x)$ the cubic polynomial with leading coefficient $1$ such that $g(r_i+\frac{1}{r_i})=0$, for $i=1,2,3$. The value of $g(2)$ can be expressed in the form $\frac{m}{n}$ for relatively prime positive integers $m$ and $n$. Compute the sum of $m$ and $n$. $\textbf{Thoughts}$ I'm not sure how to approach this question in any way whatsoever. Help is much appreciated.
You might start with $$g(x) = (x - (r_1 + 1/r_1))(x - (r_2 + 1/r_2))(x - (r_3 + 1/r_3))$$ EDIT: Since $2 - r - 1/r = -\frac{(r-1)^2}{r}$ for each root $r$, the three minus signs combine to give $$ \eqalign{g(2) &= -\frac{(r_1 - 1)^2 (r_2 - 1)^2 (r_3 - 1)^2}{r_1 r_2 r_3} \cr &= -\frac{f(1)^2}{r_1 r_2 r_3}}$$
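A numerical cross-check of this computation (a sketch with NumPy; by Vieta $r_1r_2r_3 = -18$ and $f(1) = 19$, so the closed form gives $g(2) = 361/18$ and $m+n = 379$):

```python
# g(2) = prod over roots of (2 - (r + 1/r)) = -f(1)^2 / (r1*r2*r3).
import numpy as np

roots = np.roots([1, -1, 1, 18])                 # roots of x^3 - x^2 + x + 18
g2 = np.prod([2 - (r + 1 / r) for r in roots])
print(g2.real, 361 / 18)                         # both ~20.0556
```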
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to calculate the proficiency based on an exam with questions of different difficulty levels? In an examination (multiple choice test), suppose there are $n$ questions; each question can have a difficulty level between 1 and 5 (1 for the easiest question and 5 for the most difficult one). If someone answers $x$ questions correctly and $y$ questions wrongly ($x + y = n$), how can we calculate the average difficulty level of the questions which s/he answers correctly?
Alright, so we have question $i$ with difficulty level $X_i$, $i=1,2,3,\ldots,n$. Say the set $S$ is the collection of the question indices of the questions that are answered correctly. So the average difficulty level of the questions answered correctly, $P$, is $$P=\frac{\sum_{i\in S}X_i}{\mathrm{n}(S)}$$ where $\mathrm{n}(S)$ is the number of elements of the set $S$. However, if you want the average difficulty earned by the candidate, $Q$, then we consider $$Q=\frac{\sum_{i\in S}X_i}{n}$$ instead. This means that the candidate will earn $0$ difficulty credit for the questions they answer wrongly. Using your example in the comment, we will have $$p=\frac{3+3+3+3}{4}=3$$ but $$q=\frac{3+3+3+3}{10}=1.2$$ Essentially, you are finding the candidate's score where each question has different weightage.
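In code (a sketch; the difficulty values for the wrongly answered questions are made up, only the four 3's from the comment's example matter):

```python
# 10 questions; the 4 answered correctly all have difficulty 3.
difficulties = [3, 3, 3, 3, 1, 2, 4, 5, 5, 2]   # hypothetical X_i
S = {0, 1, 2, 3}                                # indices answered correctly

P = sum(difficulties[i] for i in S) / len(S)              # 3.0
Q = sum(difficulties[i] for i in S) / len(difficulties)   # 1.2
print(P, Q)
```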
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which complex numbers $\alpha$ and $\beta$ is it true that $\alpha^n+\beta^n$ is always an integer? Possibly a very straightforward question, but: Question. For which complex numbers $\alpha$ and $\beta$ is it true that $\alpha^n+\beta^n$ is always an integer for all $n=1,2,3\ldots$? For example, $$\alpha = \frac{1+i\sqrt{7}}{2}, \beta = \frac{1-i\sqrt{7}}{2}$$ have this relationship. A couple of remarks. Firstly, a way of finding such $\alpha$ and $\beta$ pairs shows up in Silverman's book "The Arithmetic of Elliptic Curves" (the relevant excerpt is not reproduced here). Secondly, something similar seems to occur in connection with the Fibonacci numbers. Following this line of thought, perhaps a better question would be: for which complex numbers $\alpha$ and $\beta$ does there exist a complex number $k$ such that $$\frac{\alpha^n+\beta^n}{k}$$ is always an integer?
This is certainly true if $\alpha$ and $\beta$ are conjugate quadratic integers because $\alpha^n+\beta^n$ is a symmetric function of $\alpha$ and $\beta$ and so is an integer polynomial expression in $\alpha+\beta$ and $\alpha\beta$. Conversely, if $\alpha+\beta$ and $\alpha^2+\beta^2$ are integers, so is $2\alpha\beta$. Therefore, $\alpha$ and $\beta$ are roots of a polynomial $x^2+ax+\frac{b}{2}$ with $a,b\in\mathbb Z$, and so are definitely conjugate quadratic numbers, though perhaps not necessarily quadratic integers. Now, by the same argument, $\alpha^2+\beta^2$ and $\alpha^4+\beta^4$ are integers implies $2\alpha^2\beta^2$ is an integer, that is, $2(\frac{b}{2})^2=\frac{b^2}{2}$ is an integer. Therefore, $b$ is even. Bottom line: $\alpha^n+\beta^n$ is an integer for all $n$ iff $\alpha$ and $\beta$ are conjugate quadratic integers.
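For the question's example pair this is easy to watch numerically (a sketch): with $\alpha+\beta=1$ and $\alpha\beta=2$, Newton's identity gives the integer recurrence $s_n = s_{n-1} - 2s_{n-2}$:

```python
# s_n = alpha^n + beta^n for alpha, beta = (1 +/- i*sqrt(7))/2.
alpha = (1 + 1j * 7 ** 0.5) / 2
beta  = (1 - 1j * 7 ** 0.5) / 2

s_prev, s = 2, 1                        # s_0 = 2, s_1 = alpha + beta = 1
for n in range(1, 11):
    direct = alpha ** n + beta ** n     # imaginary part is roundoff only
    print(n, s, abs(direct - s) < 1e-8)
    s_prev, s = s, s - 2 * s_prev       # s_{n+1} = s_n - 2 s_{n-1}
```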
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Find all $x^6 \pmod {17}$ Find all $a$ such that $x^6 \equiv a\pmod {17}$ (not including $0$) First I thought that we could look at $y^2 \equiv a \pmod {17}$, where $y=x^3$. Then, by Euler's criterion, it must be that: $$a^{\frac{17 -1}{2}} \equiv a^8 \equiv 1 \pmod {17}$$ I could develop it to $$a^8 -1 \equiv 0 \pmod {17} \implies\\ (a-1)(a+1)(a^2+1)(a^4+1)\equiv 0\pmod {17}$$ I'm not sure it's the right way. Could you guide me please? By the way, we are not familiar with $\text {ind}$
The group of nonzero residues modulo $17$ is not merely cyclic, it is cyclic of order $16$, which is relatively prime to $3$. This means that every residue has a unique cube root. As you very perceptively recognized, solving $x^6=a$ is the same as solving $y^2=a$; but you didn’t realize that given such a $y$, there is exactly one $x$ with $x^3=y$. It follows, as @lhf pointed out in a comment, that the sixth powers in $\Bbb F_{17}^*$ are exactly the squares in this group.
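A one-line verification (a sketch) that the sixth powers and the squares coincide mod $17$:

```python
# gcd(3, 16) = 1, so cubing permutes F_17*, and sixth powers = squares.
sixth   = {pow(x, 6, 17) for x in range(1, 17)}
squares = {pow(x, 2, 17) for x in range(1, 17)}
assert sixth == squares
print(sorted(sixth))    # [1, 2, 4, 8, 9, 13, 15, 16]
```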
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Injectivity and range of $\arctan\left(\sqrt{\frac{1+x}{1-x}}\right)$ To prove that $f(x)=\arctan\left(\sqrt{\frac{1+x}{1-x}}\right)$ is injective I have tried the following: According to the definition, for a function to be injective $f(a)=f(b) \to a=b$ for all $a,b \in D_f$. Using this I get: $$\arctan\left(\sqrt{\frac{1+a}{1-a}}\right)=\arctan\left(\sqrt{\frac{1+b}{1-b}}\right)$$ $$\iff\left(\sqrt{\frac{1+a}{1-a}}\right)=\left(\sqrt{\frac{1+b}{1-b}}\right)$$ $$\iff\frac{1+a}{1-a}=\frac{1+b}{1-b}$$ $$\iff2a=2b$$ $$a=b$$ Hence the function is injective. (?) Now this could be total bs but I also am not able to find the proper range for this one. WolframAlpha says $0\le y<\frac{\pi }{2}$. But how does one come up with this range?
The domain is $-1\leq x<1$, $\lim\limits_{x\rightarrow1^-}f(x)=\frac{\pi}{2}$, $f(-1)=0$ and $f$ is a continuous function. Thus, the range is $\left[0,\frac{\pi}{2}\right).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Epsilon delta continuity I know that there are a lot of similar questions like this on this forum, but still I can't figure out one thing about this definition. Definition: $f$ is continuous at $x_0\in X\subset\mathbb{R}$ if $$\forall \epsilon>0 \;\exists \delta>0 \;\forall x:\; |x-x_0|<\delta\Rightarrow|f(x)-f(x_0)|<\epsilon$$ My question is, why in this definition can't I take $\epsilon \to 0$?
Please try to remember that there is no real number at all that tends to zero. You can't write, in standard analysis, anything like "consider a number $x \to 0$." The very definition of limit actually gives the piece of notation "$f(x) \to L$ as $x \to x_0$ " a meaning by using quantifiers: for every $\epsilon>0$ there exists $\delta>0$ etc. So, to summarize: you can arbitrarily pick a positive number, but you can't let real numbers move towards a limit value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
find the $\sum\limits_{k=0}^n \frac{(-1)^k}{k! (2k+1)} \frac{1}{(n-k)!}$ I'm stuck on computing the sum of \begin{align*} \sum\limits_{k=0}^n \frac{(-1)^k}{k! (2k+1)} \frac{1}{(n-k)!} \end{align*} I tried some manipulations which include \begin{align*} \frac{1}{n!} \binom{n}{k} = \frac{1}{k! (n-k)!} \end{align*} but still that $2k+1$ at the denominator complicates things. By the way, wolframalpha says that \begin{align*} \sum\limits_{k=0}^n \frac{(-1)^k}{k! (2k+1)} \frac{1}{(n-k)!} = \frac{\sqrt{\pi}}{2(n+\frac{1}{2})!} \end{align*} for $n\geq 1$. Can anyone help me?
Start with the binomial theorem: $$\sum_{k=0}^n \binom{n}{k}x^k=(x+1)^n$$ Substitute $x=y^2$: $$\sum_{k=0}^n \binom{n}{k}y^{2k}=(y^2+1)^n$$ Integrate both sides: $$\sum_{k=0}^n \binom{n}{k}\frac{y^{2k+1}}{2k+1}=\int_0^y(t^2+1)^ndt$$ Divide across by $n!$: $$\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{y^{2k+1}}{2k+1}=\frac{1}{n!}\int_0^y(t^2+1)^ndt$$ Let $y=i$: $$\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{i(-1)^k}{2k+1}=\frac{1}{n!}\int_0^i(t^2+1)^ndt$$ $$\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{i(-1)^k}{2k+1}=\frac{1}{n!}\int_0^1 i(1-t^2)^ndt$$ $$\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{(-1)^k}{2k+1}=\frac{1}{2 n!}\int_0^1 t^{-1/2}(1-t)^ndt$$ Use Euler's Beta Function: $$\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{(-1)^k}{2k+1}=\frac{1}{2 n!}\frac{\Gamma(n+1)\Gamma(1/2)}{\Gamma(n+3/2)}$$ $$\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{(-1)^k}{2k+1}=\frac{1}{2 n!}\frac{n!2^{n+1}}{(2n+1)!!}$$ $$\color{green}{\sum_{k=0}^n \frac{1}{k!(n-k)!}\frac{(-1)^k}{2k+1}=\frac{2^{n}}{(2n+1)!!}}$$
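An exact arithmetic check of the green identity (a sketch with Python's `fractions`):

```python
# sum_{k=0}^n (-1)^k / (k! (2k+1) (n-k)!)  ==  2^n / (2n+1)!!
from fractions import Fraction
from math import factorial

def double_factorial(m):          # (2n+1)!! = 1*3*5*...*(2n+1)
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

for n in range(1, 9):
    lhs = sum(Fraction((-1) ** k, factorial(k) * (2 * k + 1) * factorial(n - k))
              for k in range(n + 1))
    assert lhs == Fraction(2 ** n, double_factorial(2 * n + 1))
print("identity verified for n = 1..8")
```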
{ "language": "en", "url": "https://math.stackexchange.com/questions/2576997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Best way to fill a $8\times 8$ board You have an $8\times 8$ Battleships board and need to place battleships of sizes $1\times 1$, $1\times 2$, $1\times 3$, $1\times 4$, $1\times 5$ on the board to cover as much of the board as possible. The ships cannot touch another ship, even at the corners. You can place as many of any size as you wish, what is the maximum number of squares you can fill? I believe the answer to this is 30, although not sure how you prove it is the highest. Also how would you go about solving this for an $n\times n$ board. This probably relates to how you prove the answer for the first part.
EDIT: It looks as though you can get 32 (the original answer exhibited an explicit arrangement in an image, not reproduced here).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Existence of a differentiable function satisfying a condition Show that there exist a real number $\epsilon>0$ and a differentiable function $f:(-\epsilon, \epsilon)\rightarrow \mathbb{R}$ such that $$e^{x^2+f(x)}=1-\sin(x+f(x)).$$ I have no idea about how to proceed. Any ideas?
Hint: apply the Implicit Function Theorem to the function $$F(x,y)=e^{x^2+y}-1+\sin(x+y). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Best way to show that this is a subvariety of the Grassmannian Let $k$ be a field and let $\text{Gr}(r,k^n)$ be the Grassmannian of $r$-dimensional subspaces of $k^n$. Fix a linear map $\phi\colon k^n \rightarrow k^n$. Define $X_{\phi}$ to be the subspace of $\text{Gr}(r,k^n)$ consisting of $r$-dimensional subspaces $V \subset k^n$ such that $\phi(V) \subset V$. Question: what is the best way to see that $X_{\phi}$ is a closed subvariety of $\text{Gr}(r,k^n)$? One way of doing this is by working in charts: Letting $W \subset k^n$ be an $(n-r)$-dimensional subspace, the subspace of $\text{Gr}(r,k^n)$ consisting of $V$ such that $V \cap W = 0$ is an open affine subspace that is isomorphic to $\mathbb{A}_k^{r(n-r)}$, and it is not hard to see that the intersection of $X_{\phi}$ with this open affine is closed in the Zariski topology. However, this is not particularly elegant, so surely there is a better way! The answer, of course, will depend first on which approach one takes to endowing $\text{Gr}(r,k^n)$ with the structure of an algebraic variety. I am completely agnostic here: use whatever approach makes this problem transparent! Perhaps my underlying issue here is that I don't know a great way to specify subvarieties of $\text{Gr}(r,k^n)$. For $r=1$ (i.e. projective space), I can use homogeneous polynomials, but I don't know anything quite as nice for $r>1$.
Let $0 \to U \subset O^{\oplus n} \to Q \to 0$ be the tautological exact sequence of bundles on the Grassmannian. Consider the composition $$ U \to O^{\oplus n} \stackrel\phi\to O^{\oplus n} \to Q. $$ Then $X_\phi$ is the zero locus of this morphism, hence is a closed subscheme.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If H/K is normal in G/K, then is H normal in G? Let $G$ be a group and $H, K$ are two subgroups of it s.t. $K$ is normal in $G$. Now if $H/K$ is a normal subgroup in $G/K$, then can we say $H$ is a normal subgroup of $G$?
Yes, this is the Lattice or fourth isomorphism theorem. $H$ is normal in $G$ if and only if $H/K$ is normal in $G/K$ for $K$ a normal subgroup of $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation involving modulus Solve the equation $|x+1| + |x-2| = |2x-1|$. This was the solution given in my book: $|x+1| + |x-2| = |2x-1|$ $(x+1)(x-2) \ge 0$ Hence, $x\le-1$ or $x\ge2$. However I couldn't understand the second step. How did they just factorise the expressions that were inside the modulus? I initially thought of squaring, but I believe that would be too long. So, how can we solve such modulus equations by factorisation?
If you feel yourself lost you can always try cases: $$\begin{align*}\bullet\;&x<-1&\implies& -(x+1)-(x-2)=-(2x-1)\implies1=1\implies \color{red}{x<-1}\\{}\\ \bullet\;&-1\le x<\frac12&\implies& (x+1)-(x-2)=-(2x-1)\implies3=-2x+1\implies \color{red}{x=-1}\\{}\\ \bullet\;&\frac12\le x<2&\implies&(x+1)-(x-2)=(2x-1)\implies 3=2x-1\implies x=2...\text{no solution}\\{}\\ \bullet\;&2\le x&\implies& (x+1)+(x-2)=(2x-1)\implies 0=0\implies \color{red}{x\ge2}\end{align*}$$ Thus the solution set is $\;\color{red}{(-\infty,-1]\cup[2,\infty)}\;$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Counting question / solution verification In a $100$ day period, each of six friends goes swimming on exactly $75$ days. There are $n$ days on which at least $5$ friends swim. Find the largest and smallest possible values of $n$. My attempt at a solution: The largest value of $n$ occurs when exactly $5$ friends swim on as many days as possible. We can achieve this by rotating the days on which each friend doesn't swim. If $A,B,C,D,E,F$ are the friends, then * *$A$ doesn't swim on days $1,7,13,19,\ldots,85,91-100$ *$B$ doesn't swim on days $2,8,\ldots,86,91-100$ *$C$ doesn't swim on days $3,9,\ldots,87,91-100$ *$D$ doesn't swim on $4,10,\ldots,88,91-100$ *$E$ doesn't swim on $5,11,\ldots,89,91-100$ *$F$ doesn't swim on $6,12,\ldots,84,90,91-100$ Therefore, the largest possible value of $n$ is $90$. The smallest possible value of $n$ occurs when exactly $4$ friends swim on as many days as possible. We can rotate these days as follows: $A$ doesn't swim on days $1,2$; $B$ doesn't swim on days $2,3$; $C$ doesn't swim on days $3,4$; $D$ doesn't swim on days $4,5$; $E$ doesn't swim on days $5,6$; $F$ doesn't swim on days $6,7$; $A$ doesn't swim on days $7,8;\ldots$ Continuing in this way, it is possible to ensure that exactly four people are swimming on days $1-75$ and all six people are swimming on days $76-100$. Therefore, the minimum value of $n$ is $25$. Are these bounds correct? How could I improve my arguments?
There are $6 \cdot 75 = 450$ swimmer-days, so there is an obvious upper bound to the number of 5-swimmer days of $\frac{450}{5}=90$. Your schedule is a concrete example of such, so 90 days must be the maximum. A similar argument can be made about the lower bound. There you want to maximize the number of days in which 2 or more friends are not swimming. Any schedule with a day with 3 or more friends not swimming can be potentially improved by adjusting the schedule so that day has no more than 2 friends not swimming. Any schedule with a day where exactly 5 friends are swimming can be potentially improved by having all 6 friends swim that day, so as to conserve non-swimming days. So the best theoretical schedule has either 4 or 6 friends swimming each day, conserving the $6\cdot25=150$ nonswimming days to maximum effect to get 75 days with 4 or fewer friends swimming. You also achieved this schedule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Pivots and linear independence I am doing linear algebra and I am a beginner. A thought struck my mind that I need to work out the relationship between pivots, pivot columns and linear independence. As far as I understand, if a matrix is in row echelon form, then the first non-zero entry (not necessarily 1) of each row is called its pivot. Columns that contain the pivots - the leading entries of the rows - are called pivot columns. Also, I'm aware of the fact that each non-pivot column is a linear combination of the pivot columns. Are pivot columns linearly independent? How do pivots relate to linear independence?
The pivot columns, taken together, form a linearly independent set (you can easily see this after writing the matrix in reduced row echelon form). This means that if each column is a pivot column, all the columns are linearly independent. The converse is also true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2577895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What are the products of partition numbers? In Neil Sloane's, On Line Encyclopedia of Integer Sequences, A033637 is the sequence whose first few terms are: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 18, 20, 21, 22, \ 24, 25, 27, 28, 30, 32, 33, 35, 36, 40, 42, ... There is no description of what is meant by the Title, Products of Partition Numbers. Can someone describe the sequence, perhaps with an example?
The sequence is literally what the title says. First, the $n$-th partition number (A000041) is the number of integer partitions of $n$. Let $A$ be the set of all partition numbers. Now let $B$ be the set of all numbers that can be written as a product of elements of $A$. For example, since $2, 3 \in A$, $2*3 = 6 \in B$. Then A033637 is just the elements of $B$ in order.
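The description is easy to test (a sketch; it regenerates the listed terms up to $42$):

```python
# A033637 up to 42: products of partition numbers.
LIMIT = 42

p = [1] + [0] * LIMIT                 # partition numbers via the standard DP
for part in range(1, LIMIT + 1):
    for n in range(part, LIMIT + 1):
        p[n] += p[n - part]

factors = sorted({v for v in p if 2 <= v <= LIMIT})  # [2, 3, 5, 7, 11, 15, 22, 30, 42]
products, frontier = {1}, {1}
while frontier:                       # close {1} under multiplication by factors
    frontier = {v * f for v in frontier for f in factors if v * f <= LIMIT} - products
    products |= frontier
print(sorted(products))               # matches the 30 terms quoted above
```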
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the global maxima and minima on closed interval Given a function $f(\theta) = (4\theta-\pi)\sin(\theta)+4\cos(\theta)-2.$ I want to find a value for $\theta$ in the interval $\theta \in [0,\frac{\pi}{2}]$ such that for that value of $\theta$ my function has a global maximum and minimum. I took the derivative of the function, which equals $f'(\theta) = \cos(\theta)(4\theta-\pi)$. Here I got two critical points, which are $\frac{\pi}{4},\frac{\pi}{2}$. I picked $\frac{\pi}{6}$ at the left of the critical point $\frac{\pi}{4}$ and $\frac{\pi}{3}$ at the right of the critical point $\frac{\pi}{4}$ to see the behaviour of the function. It seems that the function is decreasing on the interval $(0,\frac{\pi}{4})$ and increasing on the interval $(\frac{\pi}{4},\frac{\pi}{2})$, but $\frac{\pi}{2}$ is not included because it is also a critical point. I don't know how to deal with this kind of situation. I tried to find the coordinates also and they are $f(0) = 2$ $f(\frac{\pi}{2}) = 1.141$ $f(\frac{\pi}{4}) = 0.82$ Is my global max at $(\theta = 0)$ and global min at $(\theta = \frac{\pi}{4})$?
Since you already found the critical points, why not apply the second derivative test? $$f'(\theta) = \cos(\theta)(4\theta-\pi)\implies f''(\theta) =4 \cos (\theta)-(4 \theta-\pi ) \sin (\theta)$$ from which $$f''\left(\frac{\pi }{4}\right)=2 \sqrt{2}\color{red} {>0} \qquad \text{and} \qquad f''\left(\frac{\pi }{2}\right)=- \pi \color{red} {<0}$$ revealing that $\theta=\frac{\pi }{4}$ corresponds to a local minimum and $\theta=\frac{\pi }{2}$ to a local maximum. For the global extrema on the closed interval you must still compare values: $f(0)=2$ exceeds $f\left(\frac{\pi}{2}\right)=\pi-2\approx1.14$, so the global maximum is at the endpoint $\theta=0$ and the global minimum at $\theta=\frac{\pi}{4}$, exactly as you found.
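A quick evaluation (a sketch) backs this up:

```python
# f at the endpoint 0 and at the critical points pi/4, pi/2.
from math import sin, cos, pi

f = lambda t: (4 * t - pi) * sin(t) + 4 * cos(t) - 2

for t in (0, pi / 4, pi / 2):
    print(round(t, 4), round(f(t), 4))   # 2.0, 0.8284 (= 2*sqrt(2)-2), 1.1416 (= pi-2)
```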
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Intuition behind Algebra When proving, or just straight up solving equations, we often manipulate variables until we get the results we want. For example, taking the square root of the discriminant is simply an algebraic manipulation to get the distance (or difference) between the roots. However, is there intuition behind this manipulation? I get why algebra works in the first place, but when I see it in action, manipulations as complex as the one involved in deriving the discriminant make me lose sense of what's actually happening within the numbers.
An equation like $x+4 = 3x$ returns True for some values of $x$ and False for others. For example, this particular equation returns false for $x := 1$, because $$(x+4 = 3x)(x:=1) \iff 1+4= 3\cdot 1 \iff 5 =3 \iff \mathrm{False}.$$ Whereas it returns True for $x:=2$, because $$(x+4 = 3x)(x:=2) \iff 2+4= 3\cdot 2 \iff 6 = 6 \iff \mathrm{True}.$$ In summary, each equation is associated with a function that returns True for some values of $x$ and False for others. When you solve equations, you perform manipulations that preserve this function. For example: * *The equation $x+3 = 5$ is associated to a function that returns True for some values of $x$ and false for others. *The equation $x = 5-3$ is also associated to such a function. *Furthermore, these functions are equal. Therefore: * *if $x+3=5$ returns True for some choice of $x$, then so too does $x=5-3$, and vice versa. *if $x+3=5$ returns False for some choice of $x$, then so too does $x=5-3$, and vice versa. This may not sound too deep, but it allows us to find the exact values for which an equation is true. If I write $3x+6 = 0$, it's a bit mysterious which values of $x$ make this function return True. But I write $x=5$, it's really obvious. For instance: $$(x = 5)(x:=5) \iff 5 = 5 \iff \mathrm{True}$$ $$(x = 5)(x:=4) \iff 4 = 5 \iff \mathrm{False}$$ Surprise surprise, the equation $x=5$ seems to return True at $x:=a$ if and only if $a$ equals $5$. What this means is that if I can repeatedly transform $3x+6= 0$ into a form like $x=5$, then I can work out which choices of $x$ make it true really easily. This is what solving equations is all about. For example, if I write $$3x+6 = 0 \iff 3x = -6 \iff x = -6/3 \iff x = -2,$$ this instantaneously tells me that $3x+6=0$ returns True for $x:=-2$ and False otherwise. That's really all algebra is; it's about manipulating conditions so that the condition hasn't changed, just the way we've written it has changed. The way we've written the final expression may tell us something useful about the original expression which wasn't obvious until we performed the manipulations.
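The whole point fits in a few lines of code (a sketch): each equation is literally a True/False-valued function of $x$, and "solving" replaces it by an equal function in a more readable form.

```python
# The condition "x + 4 = 3x" and its solved form "x = 2" are equal functions.
equation = lambda x: x + 4 == 3 * x
solved   = lambda x: x == 2

for x in (1, 2, 5):
    print(x, equation(x), solved(x))   # the two truth columns always agree
```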
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 0 }
Trigonometric equation: $\ln(\sin x + \cos x)^{1+\sin 2x}= 2$ $\ln(\sin x + \cos x)^{1+\sin 2x}= 2$ I am unable to solve it. I tried this way: $(\sin x + \cos x)^{1+\sin 2x}= e^2$ I know that: $\sin x + \cos x \le \sqrt2 $ $1+ \sin 2x \le 2 $ I don't know how to utilise this idea in my solution.
HINT $$\ln(\sin x + \cos x)^{1+\sin 2x}= 2\iff\ln(\sin x + \cos x)^{1+\sin 2x}= \ln e^2\iff (\sin x + \cos x)^{1+\sin 2x}=e^2$$ but $$(\sin x + \cos x)^{1+\sin 2x}=(\sin x + \cos x)^{(\sin x + \cos x)^2}\leq\sqrt2^{2}=2$$ Note $$(\sin x + \cos x)^2=1+\sin 2x\le2\implies \sin x + \cos x \leq\sqrt 2$$
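Since the left side never exceeds $2 < e^2 \approx 7.39$, the equation has no solution; a numerical scan (a sketch, restricted to where the base is positive so the power is defined) confirms the bound:

```python
# max of (sin x + cos x)^(1 + sin 2x) over a grid where the base is positive.
from math import sin, cos, e, pi

best = max((sin(x) + cos(x)) ** (1 + sin(2 * x))
           for x in (pi * k / 10000 for k in range(-20000, 20001))
           if sin(x) + cos(x) > 0)
print(best, e ** 2)    # ~2.0 versus ~7.389: no solution exists
```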
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\int f(x)\,\mathrm{d}x = \left(\int_0^x f(t) \, \mathrm{d}t\right) + C$ If $f(x)$ is a continuous function on $\mathbb{R}$ and I am asked to find $\int f(x) \, dx$, what is the problem with the following answer: $$\int f(x)\,\mathrm{d}x = \left(\int_0^x f(t) \, \mathrm{d}t\right) + C$$
Despite teaching calculus for 20+ years, I have never seen a student do this on an exam, so the novelty of seeing it for the first time would count for something. The first time I would see it, I would accept it and even be somewhat impressed with the student. Far too few students really understand that part of the fundamental theorem of calculus. The second time I would see it, I would suspect that a thoroughly unimpressive student heard about how to get credit for no work. Their plan would backfire and they would get no credit for the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 3 }
Why can we replace a variable with a constant in a limit? We say $\lim_{x \to c} f(x) = L$; that means $f(x)$ may be made as close to $L$ as we like as $x$ tends to $c$. "Tends" here means $x$ approaches $c$ but never actually becomes $c$. If that is so, then why do we so easily replace $x$ with the value $c$ whenever it is appropriate? E.g. $\lim_{x \to 5} 4 + x = 4 + 5 = 9$, or $\lim_{h \to 0} f(x+h) = f(x)$ for continuous $f$. Understand me right: I don't want to discuss the cases when we can do the substitution and when we cannot, because we get a division by zero and need to do some simplification, etc. I understand all this. I just can't understand: if $x$ actually is never $c$, what allows me to write $c$ as a value of $x$? Well, I used to think about it like "ah, as $x \to 0$, $x$ is a very small number, let it be zero". But this is not statistics, you know, where one closes one's eyes and makes approximations. I met the notion of "infinitesimal" and, as I understood it, it is opposed to the "$\delta-\epsilon$" approach. I can't fully understand how they are related to each other and to my question. Maybe I lack the historical context. If so, please clarify this for me. Thanks.
In order to compute $\lim_{x \to 5} (4 + x)$, we use the fact that $$\lim_{x \to c} (f(x) + g(x)) = \lim_{x \to c} f(x) + \lim_{x \to c} g(x)$$ whenever both limits on the right exist. Now, $\lim_{x \to 5} 4 = 4$ holds because as $x$ approaches any value (including $5$), the expression $4$ approaches the value $4$, since it is constant. Also, $\lim_{x \to 5} x = 5$ because as $x$ approaches $5$, the expression $x$ approaches $5$, trivially. Of course, all of these results can be rigorously proven with the $(\epsilon, \delta)$-definition of limit. Thus $\lim_{x \to 5}(4 + x) = 4 + 5 = 9$. It looks as though we just replaced $x$ with $5$, but what is actually going on is quite different.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Why can a combination of an exponential and a logarithm function be used to create a power-law distribution I answered a question on StackOverflow where a person wanted a randomized function that produces, on average, as many numbers between 0-10 as between 10-100 and between 100-1000. I offered the following function that does the job: Math.floor(Math.exp(Math.random()*Math.log(maximum-minimum+1)))+minimum In the comments, I was asked why this function works for the job and I can't come up with a good explanation. It's just obvious to me that it works. Can someone provide a good explanation of why it works?
One way to generate random numbers according to a particular distribution is to generate random numbers on $[0,1)$ and then apply the inverse of the cumulative distribution function of the desired distribution. You want your cumulative distribution function (before rounding to integers) to be of the form $a\log(x)+b$ between two values $m$ and $M$. Since $F(m)=0$ and $F(M)=1$, this gives $b=-a\log(m)$ and $a=\frac{1}{\log(M)-\log(m)}$, so $$F(x)=\frac{\log(x)-\log(m)}{\log(M)-\log(m)}$$ when $m\lt x \lt M$. Its inverse is $$F^{-1}(y) = e^{y(\log(M)-\log(m))+\log(m)}=e^{y(\log(M)-\log(m))}m = m\left(\frac{M}{m}\right)^y$$ Now you can just plug your uniform random number $Y$ on $[0,1)$ into any of these expressions; you chose something close to $e^{y(\log(M)-\log(m))}m$. Since you are then going to round down to an integer, you want $M$ equal to maximum+1 and $m$ equal to minimum, and that more or less gives your expression.
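A Python port makes the decade counts visible (a sketch; the bounds $0$–$999$ are a hypothetical choice for a clean three-decade demo):

```python
# Port of the JavaScript one-liner; count samples per decade.
import math, random
from collections import Counter

def sample(minimum, maximum):
    span = maximum - minimum + 1
    return math.floor(math.exp(random.random() * math.log(span))) + minimum

counts = Counter()
for _ in range(100_000):
    v = sample(0, 999)           # v lands in [1, 999]
    counts[len(str(v))] += 1     # 1, 2 or 3 digits = which decade
print(counts)                    # roughly a third of the samples per decade
```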
{ "language": "en", "url": "https://math.stackexchange.com/questions/2578780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }