Find $r$ when $r>15$ A light-bulb flickers every so often, and the time (in minutes) between flickers is recorded. In total 5 observations are counted, and the times have been recorded as the following: $$x_1=2.5, x_2=5.4, x_3 = 6.4, x_4 = 2.1$$ However, the 5th observation is only recorded when the time between flickers is greater than 15 minutes, so we have $x_5 > 15$. Calculate the sample mean. Here's what I have tried: $$\frac{1}{n}\sum_{i=1}^5x_i = \frac{2.5+5.4+6.4+2.1+r}{5}=3.28+\frac{r}{5}$$ where $r$ represents $x_5 > 15$. However, how do I find a value for $r$, if that is possible? The original question asks to find the MLE of $\lambda$ for $X \sim \operatorname{Exp}(\lambda)$ from these data. Given that the MLE of the mean of an exponential distribution is $\bar{x}$, I thought the interpretation I gave would be the answer. Please let me know if an alternative approach was required!
I assume $X_1,X_2,\ldots,X_5$ are i.i.d Exponential random variables with mean $1/\lambda$. Let $f$ be their common density function. Regarding your original question on MLE of $\lambda$, I think what you have is an instance of Type-I (right) censoring. There are $4$ uncensored observations and the $5$th observation is right-censored. So for $x_i>0$ and $\lambda>0$, the likelihood here takes the form \begin{align} L(\lambda \mid \boldsymbol x)&=\prod_{i=1}^4f(x_i)\cdot P(X_5>15) \\&=\prod_{i=1}^4 (\lambda e^{-\lambda x_i})\cdot e^{-15\lambda} \\&=\lambda^4 \exp\left\{-\lambda\sum_{i=1}^4 x_i-15\lambda\right\} \tag{1} \end{align} One can also say that the data is $(Y_i,\delta_i)$ where $Y_i=\min(X_i,15)$ and $\delta_i=I(X_i\le 15)$, for which the likelihood is $$L(\lambda\mid \boldsymbol y,\boldsymbol\delta)=\prod_{i=1}^5 (\lambda e^{-\lambda y_i})^{\delta_i}(e^{-15\lambda})^{1-\delta_i} \tag{2}$$ Note that $(1)$ and $(2)$ are equivalent. From $(1)$, it follows from usual calculus that the ML estimate of $\lambda$ is $$\hat\lambda(\boldsymbol x)=\frac{4}{\sum_{i=1}^4 x_i+15}$$
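As a quick numerical sanity check of this estimate (a sketch I am adding, not part of the original answer), one can evaluate the log-likelihood from $(1)$ directly and confirm that the closed form $\hat\lambda = 4/(\sum_{i=1}^4 x_i + 15)$ beats nearby values of $\lambda$:

```python
import math

# Uncensored observations and censoring threshold from the question
data = [2.5, 5.4, 6.4, 2.1]
c = 15.0

def log_lik(lam):
    """Log of likelihood (1): 4*log(lam) - lam*(sum of data + c)."""
    return len(data) * math.log(lam) - lam * (sum(data) + c)

# Closed-form ML estimate derived in the answer
lam_hat = len(data) / (sum(data) + c)

# lam_hat should dominate any nearby value of lambda
better_than_neighbors = all(
    log_lik(lam_hat) >= log_lik(lam_hat * f) for f in (0.9, 0.99, 1.01, 1.1)
)
```

Here the log-likelihood is concave in $\lambda$, so checking a few neighbors on each side is already convincing.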
{ "language": "en", "url": "https://math.stackexchange.com/questions/4400499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Create a block diagonal matrix with different sizes of blocks in GAP I don't know if this question has been asked before, or if this is the right site to ask it. If not, let me know about a site where I can ask, please. Problem: I want to create a block diagonal matrix with blocks of different sizes in GAP, for example with a $1\times 1$ block and a $2\times 2$ block. I read the GAP manual and found the command "BlockMatrix", but I can't tell whether it can create such a matrix with blocks of different sizes. Thanks in advance!
As long as the blocks are diagonal, you can use DirectSumMat with the matrices to be placed along the diagonal as arguments: gap> m1:=[[1,2],[3,4]];; gap> m2:=[[1,2,3],[4,5,6],[7,8,9]];; gap> DirectSumMat(m1,m2); [ [ 1, 2, 0, 0, 0 ], [ 3, 4, 0, 0, 0 ], [ 0, 0, 1, 2, 3 ], [ 0, 0, 4, 5, 6 ], [ 0, 0, 7, 8, 9 ] ] If the blocks are not along the diagonal, you would have to build the matrix yourself. (BlockMatrix is a special representation where blocks must have the same dimension, used mainly for representing induced representations)
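For readers without GAP at hand, the effect of DirectSumMat is easy to imitate. Here is a short Python sketch (my own illustration, not a GAP API) that reproduces the same block diagonal matrix:

```python
def direct_sum(*mats):
    """Place the given matrices (lists of lists) along the diagonal,
    filling everything off the blocks with zeros."""
    height = sum(len(m) for m in mats)
    width = sum(len(m[0]) for m in mats)
    out = [[0] * width for _ in range(height)]
    r0 = c0 = 0
    for m in mats:
        for i, row in enumerate(m):
            for j, val in enumerate(row):
                out[r0 + i][c0 + j] = val
        r0 += len(m)
        c0 += len(m[0])
    return out

m1 = [[1, 2], [3, 4]]
m2 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
result = direct_sum(m1, m2)
```

The output matches the GAP session above; since the blocks may even be rectangular, this is slightly more general than the square case in the question.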
{ "language": "en", "url": "https://math.stackexchange.com/questions/4400617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of trees on $n+k$ vertices that do not contain edges that are only between first $n$ vertices So say I have $n$ black vertices $\{x_1,\dots,x_n\}$ and $k$ white vertices $\{y_1,\dots,y_k\}$, I want to count the number of trees that do not contain any edge of the type $(x_i,x_j)$. I have calculated the number using inclusion-exclusion and the formula in Lemma 6 from this paper (DOI link) for the case $n=2,3,4$ and I think the number should be equal to $k^{n-1}(k+n)^{k-1}$, it does however get very complicated and I wonder if there is an easier way to show this.
This is equivalent to counting the number of spanning trees of $K_{n+k} - K_n$ (where we start with a complete graph on $n+k$ vertices, and delete all edges between the first $n$ vertices). The Laplacian matrix of this graph, in block form, is $$ \begin{bmatrix} k I_n & -J_{n \times k} \\ -J_{k \times n} & (n+k)I_k - J_{k \times k} \end{bmatrix} $$ where $I_m$ is the $m \times m$ identity matrix, and $J_{s \times t}$ is the $s \times t$ matrix of all ones. We can apply Kirchhoff's matrix tree theorem to count the spanning trees. First, delete the first row and column, getting $$ \begin{bmatrix} k I_{n-1} & -J_{n-1 \times k} \\ -J_{k \times n-1} & (n+k)I_k - J_{k \times k} \end{bmatrix} $$ Next, use the block-determinant formula $\det(A) \det(D - CA^{-1}B)$ to simplify: we get $$ \det(k I_{n-1}) \det((n+k)I_k - J_{k \times k} - J_{k\times n-1}( \tfrac1k I_{n-1}) J_{n-1 \times k}). $$ First, $\det(k I_{n-1})$ simplifies to $k^{n-1}$. In the second, more complicated determinant, $J_{k\times n-1}( \tfrac1k I_{n-1}) J_{n-1 \times k}$ becomes $\frac{n-1}{k} J_{k \times k}$, so altogether we get $(n+k)I_k - \frac{n+k-1}{k} J_{k \times k}$. To find this determinant, note that $J_{k \times k}$ has eigenvalues $k, 0, 0, \dots, 0$; therefore $-\frac{n+k-1}{k}J_{k \times k}$ has eigenvalues $-(n+k-1), 0,0,\dots,0$; adding $(n+k)I_k$ adds $n+k$ to all eigenvalues, giving us $1, n+k, n+k, \dots, n+k$; and the determinant is the product of all these eigenvalues: $(n+k)^{k-1}$. This results in the overall formula $k^{n-1}(n+k)^{k-1}$, as you conjectured. This is a special case of a formula for complete multipartite graphs, which has apparently been both rediscovered and re-proved multiple times. One source is "The number of spanning trees of a complete multipartite graph" by Richard Lewis, which gives a proof by a Prüfer sequence-type argument. The formula is that if $n_1 + n_2 + \dots + n_k = n$, then $K_{n_1, n_2, \dots, n_k}$ has $$ n^{k-2} \prod_{i=1}^k (n-n_i)^{n_i-1} $$ spanning trees.
Here, the graph $K_{n+k} - K_n$ can be rewritten as the complete $(k+1)$-partite graph $K_{n,1,1,\dots,1}$, and so the formula gives us $$ (n+k)^{k-1} k^{n-1} (n+k-1)^0 (n+k-1)^0 \cdots (n+k-1)^0. $$ After leaving out all the factors which simplify to $1$, this gives us the same result.
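The cofactor computation is easy to check by machine for small cases. The following Python sketch (my own verification, standard library only) builds the Laplacian of $K_{n+k} - K_n$, deletes the first row and column as in the matrix tree theorem, and compares the determinant with $k^{n-1}(n+k)^{k-1}$:

```python
from fractions import Fraction

def spanning_trees(n, k):
    """Spanning trees of K_{n+k} minus all edges among the first n vertices,
    counted via a cofactor of the Laplacian (matrix tree theorem)."""
    N = n + k
    adj = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j and not (i < n and j < n):
                adj[i][j] = 1
    # Laplacian with the first row and column deleted
    L = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
          for j in range(1, N)] for i in range(1, N)]
    # exact Gaussian elimination; determinant = signed product of pivots
    det = Fraction(1)
    m = N - 1
    for col in range(m):
        piv = next((r for r in range(col, m) if L[r][col] != 0), None)
        if piv is None:
            return 0
        if piv != col:
            L[col], L[piv] = L[piv], L[col]
            det = -det
        det *= L[col][col]
        for r in range(col + 1, m):
            f = L[r][col] / L[col][col]
            for c in range(col, m):
                L[r][c] -= f * L[col][c]
    return int(det)
```

For instance, $n=k=2$ gives $8$, matching $2^1 \cdot 4^1$, and the agreement persists for all small $n,k$.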
{ "language": "en", "url": "https://math.stackexchange.com/questions/4400818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit question using L'Hospital rule Here is the limit I am trying to do $$ \lim\limits_{x \to \infty} \frac{x^2 + \mathrm{e}^{4x}}{2x- \mathrm{e}^x} $$ First, I am trying to identify the indeterminate form so that I can use L'Hospital's rule. The numerator tends to $\infty$ as $x \to \infty$, but the denominator tends to $\infty - \infty$ as $x \to \infty$. So, the indeterminate form would be $$ \frac{\infty}{\infty - \infty} $$ So, how should I approach this problem?
For the denominator, you have to find the limit $\lim\limits_{x \to \infty} (2x-\mathrm{e}^x)$. You have noticed that both terms tend to $\infty$, so we need a more nuanced comparison. (The limit of "$\infty-\infty$", so to speak, could be any real number, or $\pm \infty$ e.g. take $(x+k)-x$ to get limit $k$, and $2x-x$ for $\infty$, and $x-2x$ for $-\infty$.) Recall that $$ e^x=1+x+\frac{x^2}{2}+\cdots+\frac{x^n}{n!}+\cdots$$ So $$ 2x-e^x=-1+x-\frac{x^2}{2}-\cdots-\frac{x^n}{n!}-\cdots$$ What is the limit of this as $x\to \infty$?
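To see numerically what the series comparison suggests (my own sketch, added for illustration): the exponential term wins, the denominator is eventually negative, and the ratio runs off to $-\infty$ roughly like $-e^{3x}$.

```python
import math

def numerator(x):
    return x**2 + math.exp(4 * x)

def denominator(x):
    return 2 * x - math.exp(x)

def ratio(x):
    return numerator(x) / denominator(x)

# the denominator turns negative once e^x outgrows 2x,
# after which the ratio decreases without bound
samples = [ratio(x) for x in (5, 10, 20)]
```

The samples are negative and decrease very fast, consistent with the denominator's limit being $-\infty$ and the overall limit being $-\infty$.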
{ "language": "en", "url": "https://math.stackexchange.com/questions/4400958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proving the curvature formula for an arbitrary planar curve using perpendicular bisectors Given an arbitrary (i.e. not necessarily arc-length parameterised) planar parametric curve $C(t) = \Big(x(t), y(t)\Big)$, I'm looking to prove the formula for its (signed) curvature $$\kappa = \frac{x'y'' - x''y'}{(x'^2 + y'^2)^\frac32}.$$ I would like to do so using perpendicular bisectors, as this seems the most natural approach to me. My attempt starts as follows. Given three points on the curve $P = C(t-t_1)$, $Q = C(t)$ and $R = C(t+t_2)$, the intersection of the perpendicular bisectors of the edges connecting these points (see the illustration below) yields the centre of the circle going through $P$, $Q$ and $R$. In the limit, as both $t_1 \to 0$ and $t_2 \to 0$, the circle through the three points is known as the osculating circle with a radius $r = \frac{1}{\kappa}$. The intersection of the perpendicular bisectors can be expressed as $S + \alpha u = T + \beta v$, with $S$ and $T$ the midpoints of the edges $PQ$ and $QR$, $\alpha$ and $\beta$ unknown scalar values, and $u$ and $v$ chosen to be the vectors $Q-P$ and $R-Q$ rotated $90$ degrees counter-clockwise. That is, $$u = \Big( -(Q-P)_y, (Q-P)_x \Big) = \Big( y(t-t_1)-y(t), x(t)-x(t-t_1) \Big), \\ v = \Big( -(R-Q)_y, (R-Q)_x \Big) = \Big( y(t)-y(t+t_2), x(t+t_2) - x(t) \Big).$$ Re-expressing, we have $\alpha u - \beta v = T - S$, or $$\left(\begin{array}{cc}y(t-t_1)-y(t) & y(t+t_2)-y(t) \\ x(t)-x(t-t_1) & x(t) - x(t+t_2)\end{array}\right) \left(\begin{array}{c}\alpha \\ \beta\end{array}\right) = \frac12 \left(\begin{array}{c}x(t+t_2) - x(t-t_1) \\ y(t+t_2) - y(t-t_1) \end{array}\right).$$ We can solve for $\alpha$ and $\beta$ by inverting this $2 \times 2$ matrix. 
Expressing the matrix entries symbolically, recall that $$\left(\begin{array}{cc}u_x & -v_x \\ u_y & -v_y\end{array}\right)^{-1} = \frac{\left(\begin{array}{cc}-v_y & v_x \\ -u_y & u_x\end{array}\right)}{v_x u_y - u_x v_y}.$$ Taking the limit of $t_1 \to 0$ and $t_2 \to 0$, we should then get $r = \alpha \|u\| = \beta \|v\|$. Unfortunately, this doesn't appear to result in the desired curvature formula (or well, its reciprocal). It probably means different lengths for the vectors $u$ and $v$ should be used — dividing them by $t_1$ and $t_2$, respectively, yields something that in the limit looks like the components of the tangent vector $C'(t) = \Big( x'(t), y'(t) \Big)$. However, that does not resolve the second derivatives appearing in the formula (which I suppose come from $T - S$). How to proceed?
You have $Q = C(t)$ and $R = C(t + h) $ The center of the circle passing through these two points satisfies $(S - Q) \cdot (S - Q) = (S - R) \cdot (S - R) $ so that $ S \cdot S - 2 S \cdot Q + Q \cdot Q = S \cdot S - 2 S \cdot R + R \cdot R $ from which $ 2 S \cdot (R - Q ) = R \cdot R - Q \cdot Q \hspace{15pt}(1) $ Using the Taylor series approximation of $R$, we get $R = C(t + h) = C(t) + h C'(t) + \frac{1}{2} h^2 C''(t) $ Plugging this into $(1)$ and retaining only terms of first and second order in $h$, we get $ 2 S \cdot ( h C' + \frac{1}{2} h^2 C'' ) = 2 h C \cdot C' + h^2 (C \cdot C'' + C' \cdot C' ) $ From this, it follows that $ S \cdot C' = C \cdot C' \hspace{15pt}(2)$ and $ S \cdot C'' = C \cdot C'' + C' \cdot C'\hspace{15pt}(3) $ Equation $(2)$ can be written as $ (S - C) \cdot C' = 0 \hspace{15pt}(4) $ So that $(S - C)$ is perpendicular to $C'$. It follows that $ S = C + \alpha R(90^\circ) C' \hspace{15pt} (5) $ where $R(90^\circ)$ denotes rotation by $90^\circ$ (not the point $R$). Explicitly writing the components of $C= (x, y)$, $C' = (x', y')$, and $C'' = ( x'', y'' ) $, then $(5)$ becomes $ S = (x - \alpha y' , y + \alpha x' ) $ Plugging this into $(3)$ $ x'' (x - \alpha y') + y'' (y + \alpha x') = x x'' + y y'' + x'^2 + y'^2$ Cancelling equal terms on both sides of the equation and solving for $\alpha$ $\alpha = \dfrac{ x'^2 + y'^2 }{ x' y'' - y' x'' } $ Now the radius of the circle is $\| S - Q \| = \alpha \| C' \| = \alpha \sqrt{x'^2 + y'^2 } $ Hence $ r = \dfrac{ (x'^2 + y'^2)^{\frac{3}{2}} } { x' y'' - y' x''} $ and the curvature is $ \kappa = \dfrac{1}{r} =\dfrac { x' y'' - y' x''}{ (x'^2 + y'^2)^{\frac{3}{2}} } $
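As a quick check of the final formula (my own addition): on a circle of radius $R$ parametrized as $C(t) = (R\cos t, R\sin t)$ the curvature should come out as $1/R$, and the value should not change under a reparametrization such as $t \mapsto 2t$.

```python
import math

def curvature(xp, yp, xpp, ypp):
    # signed curvature from first and second derivatives
    return (xp * ypp - xpp * yp) / (xp * xp + yp * yp) ** 1.5

R, t = 2.0, 0.7  # arbitrary radius and parameter value

# C(t) = (R cos t, R sin t)
k1 = curvature(-R * math.sin(t), R * math.cos(t),
               -R * math.cos(t), -R * math.sin(t))

# reparametrized curve C(t) = (R cos 2t, R sin 2t): not unit speed, same circle
k2 = curvature(-2 * R * math.sin(2 * t), 2 * R * math.cos(2 * t),
               -4 * R * math.cos(2 * t), -4 * R * math.sin(2 * t))
```

Both values equal $1/R$, illustrating that the formula is invariant under reparametrization, as a curvature must be.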
{ "language": "en", "url": "https://math.stackexchange.com/questions/4401100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A chain complex $A_{*}$ is contractible if and only if $A_{*}$ is acyclic and $\iota: Z_{n}A \hookrightarrow A_{n}$ is a split monomorphism. I already proved the forward direction. But I'm stuck with the backward direction. Here's my proof attempt: Assume that $A_{*}$ is acyclic and $\iota: Z_{n}A \hookrightarrow A_{n}$ is a split monomorphism. Then $H_{*}A = 0_{*}$, and so $Z_{n}A = B_{n}A$, and there exists $j: A_{n} \to Z_{n}A$ such that $j \circ \iota = 1_{Z_{n}A}$. To prove that $A_{*}$ is contractible, I must show that there exists $h : A_{n} \rightarrow A_{n+1}$ such that $h : g \circ f \simeq 1_{A_{*}}$, where $f : A_{*} \rightarrow 0_{*}$ and $g : 0_{*} \rightarrow A_{*}$. What I have deduced so far is that if such $h$ exists, then $dh + hd = 1_{A_{*}}$. I tried looking at the map $j : A_{n} \rightarrow Z_{n}A = B_{n}A$. I feel that the construction of $h$ will be centered around $j$, and that $h$ is exactly the composition $c \circ j$, where $c : B_{n}A \rightarrow A_{n+1}$ is a homomorphism such that $d \circ c = 1_{B_{n}A}$, that is, $d$ is a split epimorphism. But it's hard to construct such $c$, and I'm not even sure if $d$ is indeed a split epi. Am I on the right track? Am I doing this wrong?
I don't think $d$ is necessarily a split epimorphism: for example, consider the SES of the trivial extension $$0 \to \mathbb{Z} \to \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z} \to 0.$$ Any monomorphism with free abelian cokernel splits, so both assumptions for this chain complex are fulfilled, but $\mathbb{Z} \to \mathbb{Z} \times \mathbb{Z}$ is of course not a split epi. From your suggestion I guess that you mean the splitting of $d: A_n \to \operatorname{im}d$, which you indeed have: the exact sequence $0 \to Z_{n+1}A \to A_{n+1} \xrightarrow{d} B_{n}A \to 0$ is, by acyclicity, the same as $0 \to Z_{n+1}A \to A_{n+1} \xrightarrow{d} Z_{n}A \to 0.$ Since the left arrow splits, the right one also splits via some section $s$. Composing $s$ with $A_n \to Z_nA$ gives you a map $A_n \to A_{n+1}.$ Take it from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4401374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Density of numbers which are the product of distinct primes raised to prime powers Consider the sequence of natural numbers which are the product of distinct primes raised to prime powers (http://oeis.org/A056166) The first few numbers in this sequence are $$ 4, 8, 9, 25, 27, 32, 36, 49, 72, 100, 108, 121, 125, 128, 169, 196, 200, 216, 225, 243, 288, \ldots $$ Question: Let $f(x)$ be the number of such numbers $\le x$. Experimental data for $x \le 4 \times 10^9$ shows that $f(x) \sim a \sqrt x$ for some constant $a$. Can this be proved or disproved? Also, for this range of data, the computed value of the parameter $a$ is approximately $1.416$, which is pretty close to $\sqrt 2$. Plot of $f(x)$
If you let $a_n = 1$ if $n$ has this property and $a_n = 0$ otherwise, then $$F(s) = \sum \frac{a_n}{n^s} = \prod_{p} \left(1 + \frac{1}{p^{2s}} + \frac{1}{p^{3s}} + \frac{1}{p^{5s}} + \ldots \right),$$ On the other hand, $$\zeta(2s) = \prod_{p} \left(1 - \frac{1}{p^{2s}}\right)^{-1},$$ and thus $$\frac{F(s)}{\zeta(2s)} = \prod_{p} \left(1 + \frac{1}{p^{3s}} - \frac{1}{p^{4s}} - \frac{1}{p^{9s}} + \ldots \right),$$ where the exponents are those occurring in $(1-x^2)\left(1 + \sum_p x^p\right) = 1 + x^3 - x^4 - x^9 + \ldots$. From this, you see that $F(s)$ is holomorphic up to $s = 1/2$ where there is a simple pole with residue $C/2$, where $$C = \prod_{p} \left(1 + \frac{1}{p^{3/2}} - \frac{1}{p^{2}} - \frac{1}{p^{9/2}} + \ldots \right),$$ $$ = 1.4310606003 \ldots $$ But then, the Wiener–Ikehara theorem (and its natural variants) gives $$\sum_{n\le x} a_n = C x^{1/2} + O(x^{1/3}).$$
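A brute-force check against the listed terms (my own sketch): factor each integer by trial division and keep those whose exponents are all prime.

```python
def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

def all_exponents_prime(n):
    """True iff n = p1^e1 * ... * pr^er with every exponent ei prime (A056166)."""
    exps = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)  # a leftover prime factor has exponent 1, which is not prime
    return bool(exps) and all(is_prime(e) for e in exps)

terms = [m for m in range(2, 289) if all_exponents_prime(m)]
```

This reproduces exactly the initial segment of the sequence quoted in the question, and the same predicate can be run up to larger bounds to re-estimate the constant $a$.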
{ "language": "en", "url": "https://math.stackexchange.com/questions/4401506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to show that the set of extreme points of $\mathcal{P}_{1}(\mathcal{A})$ is precisely $\operatorname{Proj}(\mathcal{A})$. The problem is as follows: Let $\mathcal{A}$ be a $C^{*}$-algebra. Note that $\operatorname{Proj}(\mathcal{A})$ is contained in the set $\mathcal{P}_{1}(\mathcal{A}):=\left\{x \in \mathcal{A}^{+}:\|x\| \leq 1\right\}$, and that $\mathcal{P}_{1}(\mathcal{A})$ is closed and convex. (i) Let $y \in \mathcal{A}^{+}$and let $p \in \operatorname{Proj}(\mathcal{A})$. Suppose that $\left(1_{\mathcal{A}}-p\right) y\left(1_{\mathcal{A}}-p\right)=0$. Show that $y=p y p$. (ii) Let $x, y \in \mathcal{A}^{+}$be such that $x+y=0$. Show that $x=y=0$. (iii) Show that the set of extreme points of $\mathcal{P}_{1}(\mathcal{A})$ is equal to $\operatorname{Proj}(\mathcal{A})$. [Hint: Use continuous functional calculus to show that non-projections are not extreme points. In the other direction, first prove that the unit is extreme, and then use parts (i) and (ii).] (iv) Show that conv$\operatorname{Proj}(\mathcal{A})=\mathcal{P}_{1}(\mathcal{A})$, when $\mathcal{A}=M_{n}(\mathbb{C})$, for some $n \geq 2$. (v) Find an example of a $C^{*}$-algebra $\mathcal{A}$ for which conv$\operatorname{Proj}(\mathcal{A})$ is not dense in $\mathcal{P}_{1}(A)$. I got stuck at (iii). I proved that the unit is indeed extreme, but I don't know how to use parts (i) and (ii) to prove the conclusion for general elements $p \in \operatorname{Proj}(\mathcal{A})$. Any help is appreciated.
Assume $p=(1-t)a+tb,$ where $0<t<1$ and $0 \le a,b \le I.$ Then $$0=(1-t)(I-p)a(I-p)+t(I-p)b(I-p).$$ Therefore $$(I-p)a(I-p)=(I-p)b(I-p)=0.$$ Observe that $$ (I-p)a(I-p)= [a^{1/2}(I-p)]^*a^{1/2}(I-p).$$ Hence $ a^{1/2}(I-p)=0.$ We get $a(I-p)=0.$ Thus $a=ap$ and $a=(ap)^*=pa.$ This implies $a\le p$ as $p-a=p-ap=(I-a)p.$ Similarly $b\le p.$ We have $$0=(1-t)(p-a) + t(p-b) .$$ Therefore both summands vanish i.e. $p=a=b.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4401638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rules to choose substitution for definite integration Let us consider the integral: $$\int_0^{\frac{2\pi}{3}}\frac{\cos x}{1+\sin x}dx$$ I want to solve it by using the substitution $t=\sin x$. But I read in a book that the substitution that we make must be monotonic in the given domain. The author even went on to give some examples to justify his statement, one of them being $$\int_0^{\pi}\frac{\cos x}{1+\sin x}dx$$ Clearly, the substitution $t=\sin x$ does not work here. Is there a proper result/theorem which would require the substitution to be monotonic or injective? I can only find examples and counter-examples.
I suggest you look at my replies in the comments to your post for more details, but here, I just aim to explain what is an appropriate method for utilizing substitution to obtain the correct value for the integral. Notice that $$\int_0^{\frac{2\pi}3}\frac{\cos(x)}{1+\sin(x)}\,\mathrm{d}x=\int_0^{\frac{\pi}2}\frac{\cos(x)}{1+\sin(x)}\,\mathrm{d}x+\int_{\frac{\pi}2}^{\frac{2\pi}3}\frac{\cos(x)}{1+\sin(x)}\,\mathrm{d}x$$ $$=\int_0^{\frac{\pi}2}\frac{\cos(x)}{1+\sin(x)}\,\mathrm{d}x+\int_{\frac{\pi}2}^{\frac{2\pi}3}\frac{-\sin(x-\frac{\pi}2)}{1+\cos(x-\frac{\pi}2)}\,\mathrm{d}x$$ $$=\int_0^{\frac{\pi}2}\frac{\cos(x)}{1+\sin(x)}\,\mathrm{d}x+\int_0^{\frac{\pi}6}\frac{-\sin(x)}{1+\cos(x)}\,\mathrm{d}x.$$ For the first integral, the substitution $y=\sin(x)$ is injective and results in $$\int_0^1\frac1{1+y}\,\mathrm{d}y,$$ while for the second integral, the substitution $y=\cos(x)$ is injective and results in $$\int_1^{\frac{\sqrt{3}}2}\frac1{1+y}\,\mathrm{d}y.$$ Therefore, $$\int_0^{\frac{2\pi}3}\frac{\cos(x)}{1+\sin(x)}\,\mathrm{d}x=\int_0^1\frac1{1+y}\,\mathrm{d}y+\int_1^{\frac{\sqrt{3}}2}\frac1{1+y}\,\mathrm{d}y=\int_0^{\frac{\sqrt{3}}2}\frac1{1+y}\,\mathrm{d}y=\ln\left(1+\frac{\sqrt{3}}2\right).$$ In this case, it turns out that the direct non-injective substitution gives the same result, but in general, you have to be careful.
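As a numeric cross-check (my own addition): an antiderivative of the integrand is $\ln(1+\sin x)$, so the claimed value $\ln(1+\frac{\sqrt 3}{2})$ can be confirmed with a simple composite Simpson rule.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

integral = simpson(lambda x: math.cos(x) / (1 + math.sin(x)), 0, 2 * math.pi / 3)
expected = math.log(1 + math.sqrt(3) / 2)
```

The quadrature agrees with $\ln(1+\frac{\sqrt 3}{2})$ to high precision, consistent with the piecewise-injective substitutions above.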
{ "language": "en", "url": "https://math.stackexchange.com/questions/4401829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How did they get $|\epsilon/(2M)|$ when proving this limit property? I am currently learning calculus by going through this calculus book. The following Theorem and part of its proof are copied from page 31. (I shortened it a bit and left out the rest, since I don't think it's relevant to my question) THEOREM 2.6: Suppose $\lim_{x\to a}f(x) = L$ and $\lim_{x\to a} g(x) = M$. Then $\lim_{x\to a}f(x)g(x) = LM$. Proof: Given any $\epsilon$ we need to find a $\delta$ so that $0 < |x - a| < \delta$ implies $|f(x)g(x) - LM| < \epsilon$. What do we have to work with? We know that we can make $f(x)$ close to $L$ and $g(x)$ close to $M$, and we have to somehow connect these facts to make $f(x)g(x)$ close to $LM$. We use, as is so often the case, a little algebraic trick: $$|f(x)g(x) - LM| = |f(x)g(x) - f(x)M + f(x)M - LM|\\ = |f(x)(g(x) - M) + (f(x) - L)M|\\ \le |f(x)(g(x) - M)| + |(f(x) - L)M|\\ = |f(x)||g(x) - M| + |f(x) - L||M| $$ [...] Since $\lim_{x\to a}f(x) = L$, there is a value $\delta_1$ so that $0 < |x - a| < \delta_1$ implies $|f(x) - L| <|\epsilon/(2M)|$. This means that $0 < |x - a| < \delta_1$ implies $|f(x) - L||M| < \epsilon/2$. You can see where this is going: if we can make $|f(x)||g(x) - M| < \epsilon/2$ also, then we'll be done. My question: Can someone explain to me how they got to $\epsilon/(2M)$ in this statement: "Since $\lim_{x\to a}f(x) = L$, there is a value $\delta_1$ so that $0 < |x - a| < \delta_1$ implies $|f(x) - L| <|\epsilon/(2M)|$."
Since $\lim_{x\to a}f(x)=L$, you know that, for every $\varepsilon>0$, there is some $\delta>0$ such that$$0<|x-a|<\delta\implies\bigl|f(x)-L\bigr|<\varepsilon.$$But (assuming $M\neq 0$) $\left|\frac\varepsilon{2M}\right|>0$ is itself a valid choice of tolerance, and therefore there is some $\delta_1>0$ such that$$0<|x-a|<\delta_1\implies\bigl|f(x)-L\bigr|<\left|\frac\varepsilon{2M}\right|.$$(When $M=0$, the term $|f(x)-L||M|$ vanishes anyway, so no bound is needed in that case.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4401946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The cross ratio $(\infty,z_2,z_3,z_4)=\dfrac{(z_3-\infty)(z_4-z_2)}{(z_3-z_2)(z_4-\infty)} = \dfrac{z_4-z_2}{z_3-z_2}$ As stated in Wikipedia (https://en.wikipedia.org/wiki/Cross-ratio#:~:text=Kingdon%20Clifford.%5B4%5D-,Definition,-%5Bedit%5D) and in my textbook the cross ratio $(\infty,z_2,z_3,z_4)=\dfrac{(z_3-\infty)(z_4-z_2)}{(z_3-z_2)(z_4-\infty)} = \dfrac{z_4-z_2}{z_3-z_2}$. What I don't get is how the infinities get cancelled out. I am pretty sure that $\infty / \infty$ is undefined, because $1/\infty = 0$, while $\infty \cdot 0$ is undefined (am I wrong?).
The general definition of the cross ratio is $$ (z_1, z_2, z_3, z_4) = T(z_1) $$ where $T$ is the unique Möbius transformation which maps $(z_2, z_3, z_4)$ to $(1, 0, \infty)$, respectively. If all $z_j$ are finite then this is equal to $$ (z_1, z_2, z_3, z_4) = \frac{(z_3-z_1)(z_4-z_2)}{(z_3-z_2)(z_4-z_1)} \, . $$ If one of the $z_j$ is equal to $\infty$ then one can compute the cross ratio as the limit for $z_j \to \infty$, for example $$ (\infty, z_2, z_3, z_4) = \lim_{z_1 \to \infty}\frac{(z_3-z_1)(z_4-z_2)}{(z_3-z_2)(z_4-z_1)} = \frac{z_4-z_2}{z_3-z_2} $$ or $$ (z_1, \infty, z_3, z_4) = \lim_{z_2 \to \infty}\frac{(z_3-z_1)(z_4-z_2)}{(z_3-z_2)(z_4-z_1)} = \frac{z_3-z_1}{z_4-z_1} \, . $$
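Numerically, the limit statement is easy to watch (my own sketch): plug a large finite $z_1$ into the four-point formula and compare with $\frac{z_4-z_2}{z_3-z_2}$, and also confirm the defining property that $z_2, z_3$ map to $1, 0$.

```python
def cross_ratio(z1, z2, z3, z4):
    return (z3 - z1) * (z4 - z2) / ((z3 - z2) * (z4 - z1))

# arbitrary sample points
z2, z3, z4 = 1 + 1j, 2 - 1j, -3 + 0.5j

# defining property: (z2, z3, z4) are sent to (1, 0, infinity)
at_z2 = cross_ratio(z2, z2, z3, z4)
at_z3 = cross_ratio(z3, z2, z3, z4)

# approach z1 -> infinity along an arbitrary direction
approx = cross_ratio(1e9 * (1 + 1j), z2, z3, z4)
limit = (z4 - z2) / (z3 - z2)
```

The error in `approx` is of order $1/|z_1|$, which is exactly why taking the limit (rather than "cancelling infinities") is the right way to read the formula.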
{ "language": "en", "url": "https://math.stackexchange.com/questions/4402115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
General case of :$\frac{a}{1+b}+\frac{b}{1+c}+\frac{c}{1+d}+\frac{d}{1+a}\leq \frac{a+b+c+d}{1+\frac{1}{4}(a+c)(b+d)}$ Here in my answer (Prove that $\frac{a}{1+b}+\frac{b}{1+c}+\frac{c}{1+d}+\frac{d}{1+a}\le2$ for $0 \le a, b, c, d \le 1$) I show the inequality : Let $0\leq a,b,c,d\leq 1$: $$\frac{a}{1+b}+\frac{b}{1+c}+\frac{c}{1+d}+\frac{d}{1+a}\leq \frac{a+b+c+d}{1+\frac{1}{4}(a+c)(b+d)}$$ Using buffalo's way The Problem : Let $0\leq x_i\leq 1$ be real such that $x_{n+1}=x_1$ and $n\geq 4$ prove or disprove that : $$\sum_{i=1}^{n}\frac{x_{i}}{1+x_{i+1}}-\frac{\sum_{i=1}^{n}x_{i}}{1+\frac{\sum_{i=1}^{n}x_{i}x_{i+1}}{n}}\leq 0$$ As you can see we cannot use Buffalo's way in the general case because of prohibition of calculus . Perhaps we can use induction to show it . Motivation : It implies the general case (if true) of the inequality linked above . My (funny) complicated way : We have for $0< x,y\leq 1$ : $$x\left(x+1\right)^{-y}\geq \frac{x}{xy+1}$$ So the LHS is : $$\sum_{i=1}^{n}x_{i}\left(x_{i}+1\right)^{-\frac{x_{i+1}}{x_{i}}}$$ Now we use some constraint as $\frac{x_{i+1}}{x_i}=\frac{x_i+u}{k+x_i}$ and $1\leq i\leq n-1$ and $0<k$ and $0<u\leq 2k$ are constants next the function : $$f(x)=x\left(x+1\right)^{-\frac{x+u}{k+x}}$$ Is concave on $(0,1]$ so using weighted Jensen's inequality the LHS is : $$\left(\sum_{i=1}^{n-1}x_{i}\right)\left(\frac{\sum_{i=1}^{n-1}x_{i}^{2}}{\sum_{i=1}^{n-1}x_{i}}+1\right)^{-\frac{u\sum_{i=1}^{n-1}x_i+\sum_{i=1}^{n-1}x_{i}^{2}}{\left(k\sum_{i=1}^{n-1}x_{i}+\sum_{i=1}^{n-1}x_{i}^{2}\right)}}$$ Remains to compare with the RHS plus the last term and we need to use some other constraint Question : How to (dis)prove it ? Thanks in advance !
Note that, for all $x\in [0, 1]$, $$1 - x/2 - \frac{1}{1 + x} = \frac{x(1 - x)}{2(1 + x)}\ge 0.$$ We have $$ \sum_{\mathrm{cyc}}\frac{x_1}{1 + x_2} \le \sum_{\mathrm{cyc}} x_1(1 - x_2/2) = \sum_{\mathrm{cyc}}x_1 - \frac12 \sum_{\mathrm{cyc}} x_1x_2. $$ It suffices to prove that \begin{align*} \frac{\sum_{\mathrm{cyc}}x_1}{1 + \frac{1}{n}\sum_{\mathrm{cyc}} x_1 x_2} \ge \sum_{\mathrm{cyc}}x_1 - \frac12 \sum_{\mathrm{cyc}} x_1x_2 \end{align*} or $$ \sum_{\mathrm{cyc}}x_1x_2 + n \ge 2\sum_{\mathrm{cyc}}x_1$$ or $$\sum_{\mathrm{cyc}} (1 - x_1)(1 - x_2) \ge 0$$ which is true. We are done.
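A quick randomized check of the inequality just proved (my own sketch, not part of the proof):

```python
import random

random.seed(1)

def lhs(xs):
    n = len(xs)
    return sum(xs[i] / (1 + xs[(i + 1) % n]) for i in range(n))

def rhs(xs):
    n = len(xs)
    s = sum(xs[i] * xs[(i + 1) % n] for i in range(n))  # cyclic sum of x_i x_{i+1}
    return sum(xs) / (1 + s / n)

holds = all(
    lhs(xs) <= rhs(xs) + 1e-12
    for _ in range(500)
    for xs in [[random.random() for _ in range(random.randint(4, 9))]]
)
```

The tolerance only guards against floating-point round-off; no violation appears, as expected from the proof.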
{ "language": "en", "url": "https://math.stackexchange.com/questions/4402265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Connection between two trees Given a tree $T_1 = (V_1, E_1)$ (so a connected acyclic undirected graph is the definition I'm working with) and another tree $T_2 = (V_2, E_2)$, is it true that for any node $v \in V_1$ in $T_1$ and $u \in V_2$ in $T_2$ $$T := ( V_1 \cup V_2, E_1\cup \{\{v,u\}\}\cup E_2)$$ is still a tree? I am very convinced it is true, but I'm not able to prove it rigorously; could someone give me a hint, or a counterexample if it's false indeed. Thank you. Edit: $V_1 \cap V_2 = \emptyset$.
Counterexample: $$V_1=V_2=\{1,2,3\}, E_1=\{\{1,2\},\{1,3\}\}, E_2=\{\{1,2\},\{2,3\}\}, u=2, v=3$$ (Note that this counterexample needs overlapping vertex sets, so it addresses the statement before the edit; with $V_1 \cap V_2 = \emptyset$, joining the two trees by the single edge $\{u,v\}$ does produce a tree.)
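To make both situations concrete (my own sketch): a tree test via "connected with $|V|-1$ edges" shows that the overlapping-vertex construction above yields a triangle, while gluing two vertex-disjoint trees with one edge yields a tree.

```python
from collections import deque

def is_tree(vertices, edges):
    """A finite graph is a tree iff it is connected and has |V| - 1 edges."""
    vertices = set(vertices)
    edges = {frozenset(e) for e in edges}  # deduplicate undirected edges
    if len(edges) != len(vertices) - 1:
        return False
    adj = {v: set() for v in vertices}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v] - seen:
            seen.add(w)
            queue.append(w)
    return seen == vertices

# the overlapping counterexample: union of edge sets is a triangle
overlap = is_tree({1, 2, 3},
                  [{1, 2}, {1, 3}, {2, 3}, {1, 2}, {2, 3}])

# disjoint vertex sets {1,2,3} and {4,5,6} joined by the single edge {3,4}
disjoint = is_tree({1, 2, 3, 4, 5, 6},
                   [{1, 2}, {1, 3}, {3, 4}, {4, 5}, {5, 6}])
```

The disjoint case is exactly the edge/vertex count argument: $(|V_1|-1) + (|V_2|-1) + 1 = |V_1|+|V_2|-1$ edges on a connected graph.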
{ "language": "en", "url": "https://math.stackexchange.com/questions/4402451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proof of uniform convergence of an infinite product I am reading the book Complex Analysis: An Invitation (2nd Edition), page 163-164. There is a certain step in the proof where I cannot fill in the details. First, I mention a relevant proposition and a definition. Proposition: The infinite product $\prod_{k=1}^{\infty}(1+a_k)$ converges if $\sum_{k=1}^{\infty}|a_k|<\infty$, and in that case $$ \left | \prod_{k=1}^\infty (1+a_k)-1 \right |\leq e^{\sum_{k=1}^\infty |a_k|}-1 \tag{*} $$ Definition: Let $(a_n)_{n\geq 1}$ be a sequence of complex valued functions defined in an open subset $\Omega$ of $\mathbb{C}$. We say that the infinite product $\prod_{k=1}^{\infty}(1+a_k)$ converges locally uniformly in $\Omega$, if $\prod_{k=1}^{\infty}(1+a_k(z))$ converges at each $z\in \Omega$ and if furthermore to each compact subset $K$ of $\Omega$ and each $\epsilon>0$ there exists an $N$ such that for all $z\in K$ and $n\geq N$ $$ \left | \prod_{k=1}^{\infty}(1+a_k(z))-\prod_{k=1}^{n}(1+a_k(z)) \right |<\epsilon. $$ What I want to prove is: Lemma: Let $(a_n)_{n\geq 1}$ be a sequence of complex valued functions on an open subset $\Omega$ of $\mathbb{C}$. If as $N\to\infty$ the sum $\sum_{n=N}^{\infty}|a_n(z)|$ converges locally uniformly to $0$, then the infinite product $\prod_{k=1}^{\infty}(1+a_k)$ converges locally uniformly in $\Omega$. The proof which the author says is simply: "It is a consequence of Proposition and the inequality (*)" It does not seem completely clear to me. Could someone explain that step for me? Update: When I read the answer of Kavi Rama Murthy, I thought as follows: Let $K$ be any compact subset of $\Omega$, and let $\epsilon\in (0,1/2)$ be given. Choose an integer $N^*$ with $N^*\geq N$ such that $\sum_{k=n+1}^{\infty}|a_k(z)|<\epsilon$ for all $z\in K$ and all $n\geq N^*$.
Then, we have for all $z\in K$ and all $n\geq N^*$ $$ \left | \prod_{k=n+1}^\infty (1+a_k(z))-1 \right |\leq e^{\sum_{k=n+1}^\infty |a_k(z)|}-1<e^\epsilon - 1<2\epsilon $$ and so $$ \left | \prod_{k=1}^{\infty}(1+a_k(z))-\prod_{k=1}^{n}(1+a_k(z)) \right |=\left | \prod_{k=1}^{n}(1+a_k(z)) \right |\left | \prod_{k=n+1}^{\infty}(1+a_k(z))-1 \right |<2\epsilon \left | \prod_{k=1}^{n}(1+a_k(z)) \right |. $$ Then I got stuck here.
$|\prod_{k=n}^{m}(1+a_k(z))-1|\leq e^{ \sum\limits_{k=n}^{m}|a_k(z)|}-1$. On any compact set $K$ we can choose $N$ such that $e^{ \sum\limits_{k=n}^{m}|a_k(z)|}-1<\epsilon$ for all $z \in K$ and all $n,m \geq N$. Now consider $|\prod_{k=1}^{n}(1+a_k(z))-\prod_{k=1}^{m}(1+a_k(z))|$ where $n <m$. We can write this as $|\prod_{k=1}^{n}(1+a_k(z))| |\prod_{k=n+1}^{m}(1+a_k(z))-1|$. What remains is to see that the first factor $\prod_{k=1}^{n}(1+a_k(z))$ is uniformly bounded on $K$. This follows by another application of (*). I hope you can finish.
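The inequality (*) that drives this argument can also be sampled numerically (my own illustration, for one concrete absolutely summable sequence):

```python
import math

# a sample absolutely summable sequence with mixed signs, a_k = (-1)^k / k^2
a = [(-1) ** k / (k * k) for k in range(2, 400)]

partial = 1.0
for ak in a:
    partial *= 1 + ak

lhs_star = abs(partial - 1)
rhs_star = math.exp(sum(abs(ak) for ak in a)) - 1  # bound from (*)

# the tail version used in the proof: products over large k stay near 1
tail = 1.0
for ak in a[50:]:
    tail *= 1 + ak
tail_bound = math.exp(sum(abs(ak) for ak in a[50:])) - 1
```

The tail bound is tiny, which is exactly the uniform-boundedness plus near-1 behavior that finishes the Cauchy argument above.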
{ "language": "en", "url": "https://math.stackexchange.com/questions/4402652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a bounded function is integrable on $[a,b]$ I need to prove that Let $f:[a,b] \to \mathbf{R}$ be a bounded map and let $f$ be an integrable map on the interval $[c,b]$ for all $c \in (a,b).$ Then, $f$ is integrable on $[a,b].$ My attempt: By hypothesis, $f:[a,b] \to \mathbf{R}$ is bounded, so there exists $0 < K \in \mathbf{R}$ such that $|f(x)| \leq K$ for all $x \in [a,b].$ Let $\varepsilon>0,$ and take $c \in (a,b)$ such that $K \cdot (c-a) \leq \frac{\varepsilon}{4}$. By hypothesis, $f$ is integrable on $[c,b]$, so there exists a partition $\left\{t_1, \dots, t_n \right\}$ of $[c,b]$ (with $t_1 = c$) such that $\sum_{i=2}^{n} \omega_i(t_i-t_{i-1}) < \frac{\varepsilon}{2}$. Adjoining $t_0=a$, we get a partition $\left\{t_0, t_1, \dots, t_n \right\}$ of $[a,b]$. Now I need to prove that $\omega_1 \leq 2K$ on $[t_0,t_1]$, which would give $\omega_1(t_1-t_0) \leq \frac{\varepsilon}{2}$, for me to get that $f$ is integrable on $[a,b]$. Am I following the correct idea? I don't know how to proceed. Any ideas would be appreciated!
I find it easier to talk about the equivalent in Darboux integral terms, that a function is integrable if the upper sum minus the lower sum for some partition is less than any $\epsilon$ you want. So to show integrability over $[a,b]$ we start with $\epsilon>0$. The trick is to find the right $c$ to use so that when we integrate from $[a,c]$ we are guaranteed to be less than $\frac \epsilon 2$, since we can make the part from $[c,b]$ be less than $\frac \epsilon 2$ by the fact that it is integrable. That's easy: just figure out the worst-case error. If $M>0$ is the bound on $|f(x)|$, then the biggest gap between upper and lower sums is $2M$ over a width of $c-a$, so the contribution on $[a,c]$ is bounded by $2M(c-a)$. I find it easier to focus on the width, so define $\delta=c-a$. Setting that bound to $\frac \epsilon 2$ and solving for $\delta$ we get $$2M\delta<\frac \epsilon 2$$ $$\delta<\frac \epsilon {4M}$$ Now you just have to make sure that $a+\delta<b$, so use $\delta'=\min \{\delta,\frac{b-a}2\}$. Now we can safely split our integral on $[a,b]$ into $[a,a+\delta']\cup [a+\delta',b]$ and be guaranteed that both parts are below $\frac \epsilon 2$ in the difference between upper and lower sums.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4402839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Maximum number of spanning cycles with no common edge in a complete graph A new class has just been started, and there are $n$ people in this class that do not know each other. At each session they sit around a round table. They decided to change their adjacent persons in each session, so that all people come to know each other as soon as possible. What is the minimum number of sessions after which all people know each other? To solve the above problem, I associated a graph to each session where the nodes are the persons and the neighbours to each person are connected to that person via an edge. So the degree of each node is $2$. Indeed, each session is a cycle which includes all nodes (persons) of the graph. So the question is how many spanning cycles exist in a complete graph that do not have any common edge? A common edge is not allowed since it represents a repeated neighbor for some person. A complete graph with $n$ nodes has $\frac{n(n-1)}{2}$ edges and each cycle is going to include $n$ of these edges. So, I think if we are lucky and can use all of the edges to build such cycles then at most we can make $\frac{n-1}{2}$ of these cycles. $1$. Is this argument valid? $2$. Can we use all of the edges to make such cycles?
It is possible to partition all of the edges of the complete graph into Hamiltonian cycles whenever $n$ is odd. Furthermore, it is clearly impossible when $n$ is even, since in that case the number of edges is not a multiple of $n$. First, partition an even order complete graph on $2m$ vertices into $m$ Hamiltonian paths, using the construction in this other answer. Then, add a new vertex, and close up each of these paths by connecting both ends to the new vertex. You now have $m$ Hamiltonian cycles with disjoint edges on the complete graph on $2m+1$ vertices. I got this construction from Wikipedia, where it is attributed to Walecki; the Wikipedia article gives a helpful illustration of the case $n=9$.
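Here is a sketch of that construction in code (the zigzag-path version of Walecki's decomposition; the indexing convention is my own): for odd $n=2m+1$, path $i$ on the vertices $0,\dots,2m-1$ visits $i, i+1, i-1, i+2, i-2,\dots$ modulo $2m$, and each path is closed into a cycle through the extra vertex $2m$.

```python
def walecki_cycles(n):
    """Edge-disjoint Hamiltonian cycles covering K_n, for odd n >= 3."""
    assert n % 2 == 1 and n >= 3
    m = (n - 1) // 2
    cycles = []
    for i in range(m):
        path = [i]
        for k in range(1, n - 1):
            # zigzag offsets: +1, -1, +2, -2, ..., +m
            off = (k + 1) // 2 if k % 2 else -(k // 2)
            path.append((i + off) % (n - 1))
        cycles.append(path + [n - 1])   # close through the added vertex
    return cycles

cycles = walecki_cycles(9)
edges = set()
for c in cycles:
    edges |= {frozenset(e) for e in zip(c, c[1:] + c[:1])}
print(len(cycles), len(edges))   # 4 36: four cycles covering all 36 edges of K_9
```

Since $4$ cycles contribute $4\cdot 9=36$ edge slots and the set of edges has size $36=\binom 92$, no edge is repeated, so the students need only $(n-1)/2$ sessions.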
{ "language": "en", "url": "https://math.stackexchange.com/questions/4403078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Correcting randomisation function for chance of returning original value I am implementing some GDPR-related data randomisation code; the general rules I need to abide by are as follows: For each user that requested to be GDPR deleted, there is a certain set of data fields that will need to be passed through a two-step randomisation function. As an example let's say that we only care about one field, favourite colour, for which there are 6 possible values the user could have picked from (1:'red', 2:'blue', 3:'yellow', 4:'green', 5:'orange', 6:'purple'). For each field, we "roll the dice" separately. The first roll is to determine whether the field should be randomised or not based on some percentage. Let's say, 15% (so about 1 in 7 deleted users will actually get the field randomised). If the above roll returns TRUE then we perform a second "dice roll", this time to assign a value at random among those that are "legal" for said field. For our hypothetical user, the first dice roll returned TRUE and thus we will run the second function tasked with picking one of the six possible values listed above and replacing the original with it. Returning the same value as the original is permitted. Which means that even over a large sample of users, we will never achieve a situation in which "15% of users have their favourite colour field altered", because there is a chance that the random function will return the same value. And we finally get to my actual question: if I wanted to account for that, and increase the initial probability so that the actual "on the ground" probability is 15%, how would I go about that? I would assume it is target_probability + target_probability * chance_of_value_within_set so 0.15 + 0.15 * 0.1666... (1/6) but lacking proper maths training I am not sure this is the right approach.
You are almost correct. Let $t$ be the target probability of getting a different value. Suppose we are trying to achieve this probability by deciding to reroll with probability $r$, and choosing one of $n$ values when we reroll. Then the probability of getting a different value is $r \cdot \frac{n-1}{n}$: we must reroll, and given that we reroll, we must get one of the $n-1$ new values. Solving, $t = r \cdot \frac{n-1}{n}$ gives $r = t \cdot \frac{n}{n-1}$, which we can split up into $r = t + \frac{t}{n-1}$. (Your formula is $t + \frac tn$, which is almost but not quite right.) For example, in this case, there are $n=6$ possible colors, and we are aiming for a target probability of $t = 0.15$. Then we should take $r = 0.15 \cdot \frac65 = 0.18$: reroll with an $18\%$ probability.
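A quick Monte Carlo check of this (the colour list and trial count are just illustrative): with $n=6$ colours and reroll probability $r = 0.15\cdot\frac{6}{5} = 0.18$, the observed fraction of actually-changed values comes out at about $15\%$.

```python
import random

random.seed(42)
colors = ['red', 'blue', 'yellow', 'green', 'orange', 'purple']
n, target = len(colors), 0.15
r = target * n / (n - 1)              # 0.18

trials, changed = 200_000, 0
for _ in range(trials):
    original = random.choice(colors)
    if random.random() < r:           # first dice roll: randomise this field?
        new = random.choice(colors)   # second roll: may repeat the original
        if new != original:
            changed += 1
print(changed / trials)               # close to 0.15
```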
{ "language": "en", "url": "https://math.stackexchange.com/questions/4403227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Calculating the spherical harmonic of θ=π/2 This is a very simple question, yet I'm not sure how to approach it. I want to calculate the spherical harmonic: $$ Y_{l m}^{*}(\theta = \pi/2, \phi) $$ I know the general formula: $$ Y_{l m}^{*}(\theta, \phi)=\sqrt{\frac{2 l+1}{4 \pi} \frac{(l-m) !}{(l+m) !}} P_{l}^{m}(\cos \theta) e^{-i m \phi} $$ But for $\pi/2$ I need to calculate the associated Legendre polynomial at 0 ($P_{l}^{m}(\cos \pi/2)$), which I'm not sure how to do. Rodrigues' formula is not clear to me for the case of $x=0$: $$ P_{l}^{m}(x)=\frac{(-1)^{m}}{2^{l} l !}\left(1-x^{2}\right)^{m / 2} \frac{d^{l+m}}{d x^{l+m}}\left(x^{2}-1\right)^{l} $$ Any guidance on how to calculate it for that special case would be appreciated.
For each choice of $l$ and $m$, you can find a closed form solution for $P_l^m(x)$ as a function of $x$. Typically, I would start there and only then evaluate it at a particular value of $x$ such as $x = 0$. That said, if you prefer we can find a formula for $P_l^m(0)$ in terms of $l$ and $m$ Instead of starting from the Rodrigues' formula, we could just use this closed form [1] of the associated Legendre polynomials: $$P_l^m(x)=(-1)^{m} \cdot 2^{l} \cdot (1-x^2)^{m/2} \cdot \sum_{k=m}^l \frac{k!}{(k-m)!}\cdot x^{k-m} \cdot \binom{l}{k} \binom{\frac{l+k-1}{2}}{l}$$ where $\binom \alpha k$ is the generalized binomial coefficient: $$\binom \alpha k = \frac{1}{k!} \prod_{i=0}^{k-1} (\alpha-i)$$ In particular, for $x = 0$ and $k > m$ we have $x^{k-m} = 0$. Therefore, the only term in the sum of $k$ which may be non-zero is the $k = m$ term. Now, if we naively plug in $x = 0$ and $k = m$, we'd get $x^{k-m} = 0^0$, which is indeterminate. But there are a few reasons I feel confident the value it takes here should be $1$. For one thing, I can see that this will give me the correct values for the first few associated Legendre polynomials, such as $P_0^0(x) = 1$. For another thing, I know that $P_l^m(x)$ for fixed $l$ and $m$ should be a continuous function in $x$, and clearly any small but non-zero $x$ to the $0$-th power is $1$. But if we wanted to be rigorous, we could go back to the derivation of this closed form and see for ourselves that we could have instead produced a series over $k$ from $m + 1$ to $l$, and with a separate term in front (playing the same role as our $k = m$ term did previously). This would give us a closed-form solution like this: $$P_l^m(x) = (-1)^{m} \cdot 2^{l} \cdot (1-x^2)^{m/2} \left[m! 
\binom{l}{m} \binom{\frac{l+m-1}{2}}{l} + \sum_{k=m+1}^l \frac{k!}{(k-m)!}\cdot x^{k-m} \cdot \binom{l}{k} \binom{\frac{l+k-1}{2}}{l} \right]$$ Now all terms of the sum are positive powers of $x$, so at $x = 0$ the whole summation goes away and we're left with this: $$P_l^m(0) = (-1)^{m} 2^{l} m! \binom{l}{m} \binom{\frac{l+m-1}{2}}{l}$$ After replacing the binomial coefficients using the definition above and making some cancelations, we have: $$P_l^m(0) = \frac{(-1)^{m} 2^{l}}{(l-m)!} \prod_{i=0}^{l-1} \left(\frac{l+m-1}{2} - i\right)$$ We can check a few values just to be sure. For example: $$P_{3}^{1}(x)=-\tfrac{3}{2}(5x^{2}-1)(1-x^2)^{1/2} \\ P_{4}^{2}(x)=\frac{15}{2}(7x^2 - 1)(1-x^2)$$ so $P_{3}^{1}(0) = \frac{3}{2}$ and $P_{4}^{2}(0) = - \frac{15}{2}$ From the above formula, we have: $$P_3^1(0) = \frac{(-1)^{1} 2^{3}}{(3-1)!} \prod_{i=0}^{3-1} \left(\frac{3+1-1}{2} - i\right) = \frac{-8}{2!} \prod_{i=0}^{2} \left(\frac{3}{2} - i\right) \\ = -4 \cdot \frac{3}{2} \cdot \frac{1}{2} \cdot \frac{-1}{2} = \frac{3}{2}$$ and $$P_4^2(0) = \frac{(-1)^{2} 2^{4}}{(4-2)!} \prod_{i=0}^{4-1} \left(\frac{4+2-1}{2} - i\right) = \frac{16}{2!} \prod_{i=0}^{3} \left(\frac{5}{2} - i\right) \\ = 8 \cdot \frac{5}{2} \cdot \frac{3}{2} \cdot \frac{1}{2} \cdot \frac{-1}{2} = - \frac{15}{2}$$
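As a quick sanity check of the final product formula (exact rational arithmetic; the function name is mine):

```python
import math
from fractions import Fraction

def p_lm_at_zero(l, m):
    """P_l^m(0) = (-1)^m 2^l / (l-m)! * prod_{i=0}^{l-1} ((l+m-1)/2 - i)."""
    result = Fraction((-1)**m * 2**l, math.factorial(l - m))
    for i in range(l):
        result *= Fraction(l + m - 1, 2) - i
    return result

# matches the worked examples above
print(p_lm_at_zero(3, 1), p_lm_at_zero(4, 2))   # 3/2 -15/2
```

It also reproduces the familiar low-order values, e.g. $P_0^0(0)=1$, $P_1^0(0)=0$, $P_2^0(0)=-\frac12$.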
{ "language": "en", "url": "https://math.stackexchange.com/questions/4403409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On counting a certain number of 4-tuples Let $i,j,k,l \in \{0,1,2,3,\dots,n\}$. How can we count the number of $4$-tuples $(i,j,k,l)$ with the property $i-j=k-l$ ? Thanks for any hints/responses
The condition $i-j=k-l$ is equivalent to $i+l=:t=j+k$. So fix $t$. This means that you have to take $(i,l)$ and $(j,k)$ such that their sum is equal to $t$. Since $i,l\in\{0,\dots,n\}$, the number of pairs $(i,l)$ with $i+l=t$ is $c_t=t+1$ when $0\le t\le n$, but only $c_t=2n+1-t$ when $n< t\le 2n$ (for $t>n$ the smaller coordinate must be at least $t-n$). The same count holds for $(j,k)$, so there are exactly $c_t^2$ tuples $(i,j,k,l)$ for each $t$. Here $t$ runs from $0$ to $2n$, and so the total number $N$ of possibilities for $(i,j,k,l)$ such that $i-j=k-l$ is equal to $$N=\sum_{t=0}^{2n} c_t^2=\sum_{t=0}^{n}(t+1)^2+\sum_{t=n+1}^{2n}(2n+1-t)^2=\sum_{k=1}^{n+1}k^2+\sum_{k=1}^{n}k^2=\frac{(n+1)(2n^2+4n+3)}{3}$$ Here I’ve used the closed form formula for the sum of squares, $\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6}$. As a check, for $n=1$ this gives $N=6$, which matches a direct enumeration of $\{0,1\}^4$.
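A brute-force enumeration for small $n$ is a useful check here, since the count of pairs $(i,l)$ with $i+l=t$ caps at $n+1$ once $t$ passes $n$:

```python
def count_tuples(n):
    # directly count 4-tuples over {0, ..., n}^4 with i - j == k - l
    return sum(1
               for i in range(n + 1) for j in range(n + 1)
               for k in range(n + 1) for l in range(n + 1)
               if i - j == k - l)

for n in range(7):
    closed = (n + 1) * (2*n*n + 4*n + 3) // 3
    print(n, count_tuples(n), closed)   # the two counts agree for each n
```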
{ "language": "en", "url": "https://math.stackexchange.com/questions/4403732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the apparent contradiction in this integral? I was solving this exercise, however it is proposed that there is a possible contradiction in the exercise but I cannot determine what it is. The idea is to find the integral of $\int{\frac{1}{\sin(x)\cos(x)}dx}$ For this purpose, the following is expressed $\int{\frac{1}{\sin(x)\cos(x)}dx}=\int{\frac{\cot(x)}{\cos^2(x)}dx}=\int{\cot(x)\tan'(x)dx}=\cot(x)\tan(x)-\int{\tan(x)\cot'(x)dx}=1+\int{\frac{\tan(x)}{\sin^2(x)}dx}=1+\int{\frac{1}{\sin(x)\cos(x)}dx}$ Where does the failure occur? thank you for your help.
There is no failure. You've shown that $$\int{\frac{1}{\sin(x)\cos(x)}dx}=1+\int{\frac{1}{\sin(x)\cos(x)}dx},$$ which is correct, even if unhelpful: recall that $\int f(x)\,dx$ denotes the set of all antiderivatives of $f(x)$, all of which are a constant apart from one another. This means that it is in general true that for any function $f(x)$ we have $\int f(x)\,dx = 1 + \int f(x)\,dx$. Your derivation is correct, but it does not lead to a solution of the integral. (To actually solve the integral, you can use the Weierstrass half-angle substitution, i.e. $t=\tan(\frac{x}{2})$, and you may simplify beforehand using $\sin(2x)=2\sin(x)\cos(x)$.)
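One can confirm the suggested route numerically: on $(0,\pi/2)$ an antiderivative is $\ln\tan x$ (this follows from the half-angle substitution), and a central-difference check of its derivative against the integrand agrees at a few sample points:

```python
import math

def F(x):
    # candidate antiderivative on (0, pi/2)
    return math.log(math.tan(x))

def f(x):
    # the integrand 1/(sin x cos x)
    return 1.0 / (math.sin(x) * math.cos(x))

h = 1e-6
for v in (0.3, 0.7, 1.2):
    approx = (F(v + h) - F(v - h)) / (2 * h)
    print(abs(approx - f(v)) < 1e-6)   # True at each sample point
```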
{ "language": "en", "url": "https://math.stackexchange.com/questions/4404370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
The solutions of the equation $z^4+4z^3i-6z^2-4zi-i=0$ are the vertices of a convex polygon in the complex plane. What is the area of the polygon? The solutions of the equation $z^4+4z^3i-6z^2-4zi-i=0$ are the vertices of a convex polygon in the complex plane. What is the area of the polygon? Solution: Looking at the coefficients, we are immediately reminded of the binomial expansion of ${\left(x+1\right)}^{4}$. Modifying this slightly, we can write the given equation as ${\left(z+i\right)}^{4}=1+i=2^{\frac{1}{2}}\cos \frac {\pi}{4} + 2^{\frac{1}{2}}i\sin \frac {\pi}{4}$ $\star$ We can apply a translation of $-i$ and a rotation of $-\frac{\pi}{4}$ (both operations preserve area) to simplify the problem: $z^{4}=2^{\frac{1}{2}}$ Because the roots of this equation are created by rotating $\frac{\pi}{2}$ radians successively about the origin, the quadrilateral is a square. We know that half the diagonal length of the square is ${\left(2^{\frac{1}{2}}\right)}^{\frac{1}{4}}=2^{\frac{1}{8}}$ Therefore, the area of the square is $\frac{{\left( 2 \cdot 2^{\frac{1}{8}}\right)}^2}{2}=\frac{2^{\frac{9}{4}}}{2}=2^{\frac{5}{4}}$ After the $\star$ I become completely lost. "We can apply a translation of $-i$ and a rotation of $-\frac{\pi}{4}$ (both operations preserve area) to simplify the problem: $z^{4}=2^{\frac{1}{2}}$" Can you please show me exactly how this is achieved, perhaps visually? I am especially confused about what in the equation gets edited to obtain a $- \pi/ 4$ rotation.
If we translate (preserves area) to the new variable $w=z+i$ we get $w^4=1+i$. Solutions are vertices of a square with side length $2^{5/8}$. Hence the area would be $2^{5/4}$.
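Numerically (a sketch using the shoelace formula): solve $w^4=1+i$, translate back by $-i$, and the polygon area comes out as $2^{5/4}\approx 2.378$.

```python
import cmath, math

w4 = 1 + 1j
r, theta = abs(w4) ** 0.25, cmath.phase(w4)
# the four roots z = w - i, listed in angular order around the circle
roots = [r * cmath.exp(1j * (theta + 2*math.pi*k) / 4) - 1j for k in range(4)]

# each root satisfies the original quartic
for z in roots:
    assert abs(z**4 + 4j*z**3 - 6*z**2 - 4j*z - 1j) < 1e-9

def shoelace(ps):
    s = sum(p.real*q.imag - q.real*p.imag for p, q in zip(ps, ps[1:] + ps[:1]))
    return abs(s) / 2

print(shoelace(roots), 2 ** 1.25)   # both about 2.3784
```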
{ "language": "en", "url": "https://math.stackexchange.com/questions/4404581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding the value of $\sum^{\infty}_{n=3,5,7,9....}\frac{2n^2 \exp(-\pi n/2)}{\exp(\pi n)+1}$ I want to evalutate: $\displaystyle \tag*{} \sum \limits ^{\infty}_{n=3,5,7,9....}\dfrac{2n^2 \exp \left(-\pi n/2\right)}{\exp(\pi n)+1}$ This question is inspired from my previous question which again was inspired from quora, while the sum with $n$ in numerator is difficult to find closed form, however $n^2$ is easy to find. Any help would be appreciated.
Let us write $q=e^{-\pi/2}$ and the sum in question becomes $$F(q)=2\sum_{n\text{ odd}, n>1}\frac{n^2q^{3n}}{1+q^{2n}}\tag{1}$$ which can be further expressed as $$F(q) =2\sum_{n\text{ odd}, n>1}n^2q^n-2\sum_{n\text{ odd}, n>1}\frac{n^2q^n}{1+q^{2n}}\tag{2}$$ The first sum is easy to handle and we can just note that $$\sum n^2q^n=\left(q\frac{d}{dq}\right)^2\frac{1}{1-q}=\frac{q+q^2}{(1-q)^3}=a(q) \text{ (say)} $$ and hence $$A=2\sum_{n\text { odd}, n>1}n^2q^n=2(a(q)-4a(q^2)-q)=2q^3\cdot\frac{9-2q^2+q^4}{(1-q^2)^3}\tag{3}$$ Let $$g(q) =\sum_{n=1}^{\infty}\frac{n^2q^n}{1+q^{2n}}$$ then the second sum, say $B$, in $(2)$ can be expressed as $$B=2g(q)-8g(q^2)-\frac{2q}{1+q^2}$$ To evaluate $g(q) $ in closed form we need a bit of elliptic function theory. The function $\operatorname {dn} (u, k) $ has the Taylor series expansion $$\operatorname {dn} (u, k) =1-k^2\frac{u^2}{2!}+k^2(4+k^2)\frac{u^4}{4!}+\dots$$ and it also has a Fourier series $$\operatorname {dn} (u, k) =\frac{\pi} {2K}+\frac{2\pi}{K}\sum_{n=1}^{\infty} \frac {q^n} {1+q^{2n}}\cos(\pi n u/K) $$ where $$K=K(k) =\int_0^{\pi/2}\frac{dx}{\sqrt{1-k^2\sin^2x}},k'=\sqrt{1-k^2},K'=K(k'),q=e^{-\pi K'/K} $$ Expanding $\cos(\pi nu/K) $ as a power series in $u$ and equating coefficients of $u^2$ in the two series for $\operatorname {dn} (u, k) $ we get $$k^2=\frac{2\pi^3}{K^3}\sum_{n=1}^{\infty} \frac{n^2q^n}{1+q^{2n}}$$ or $$g(q) =\sum_{n=1}^{\infty} \frac{n^2q^n}{1+q^{2n}}=\frac{k^2K^3}{2\pi^3}$$ Let $l, l', L, L'$ correspond to $q^2$ in the same manner as $k, k', K, K'$ correspond to $q$ then we have $$g(q^2)=\frac{l^2L^3}{2\pi^3}$$ By Landen transformation we have $$k=\frac{2\sqrt{l}}{1+l},K=(1+l)L$$ and then we get $$g(q) =\frac{4l(1+l)L^3}{2\pi^3}$$ Thus we have $$2g(q)-8g(q^2)=\frac{4lL^3}{\pi^3}\tag{4}$$ For $q^2=e^{-\pi} $ we have $$l=1/\sqrt{2},L=\Gamma^2(1/4)/(4\sqrt{\pi})$$ and thus $$B=\frac{\sqrt{2}\Gamma ^6(1/4)}{32\pi^{9/2}}-\frac{2q}{1+q^2}$$ The desired sum is $A-B$ where $A$ has been obtained in 
equation $(3)$.
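The closed form can be checked numerically in double precision; everything below comes from the formulas above, with $q=e^{-\pi/2}$ and $B$ taken in its boxed $\Gamma(1/4)$ form:

```python
import math

q = math.exp(-math.pi / 2)

# direct evaluation of the sum over odd n >= 3 (terms decay geometrically)
direct = sum(2 * n*n * q**(3*n) / (1 + q**(2*n)) for n in range(3, 60, 2))

A = 2 * q**3 * (9 - 2*q**2 + q**4) / (1 - q**2)**3
B = math.sqrt(2) * math.gamma(0.25)**6 / (32 * math.pi**4.5) - 2*q / (1 + q**2)
print(direct, A - B)   # both about 1.3e-5
```

Note that $A$ and $B$ are each about $0.183$, so the value of the sum emerges from a near-cancellation.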
{ "language": "en", "url": "https://math.stackexchange.com/questions/4404818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Counting the number of ways N people with a score tuple can be ranked Let's say I have $N$ people each with a score tuple $(x_i, y_i)$. The total score each person $i$ receives is $Xx_i + Yy_i$ where $X$ and $Y$ are both positive integers (without any upper bound). The $X$ and $Y$ values are the same for every person. In other words, they act as weights for the score tuple of every person. Now the question is, assuming we are free to choose any value for $X$ and $Y$, in how many ways can we rank each person? (i.e. in how many ways can we sort them by total score) Assume that when the scores are equal, we can break ties arbitrarily, meaning that if for example we have two persons $A$ and $B$ and both have equal score tuple, we can rank them in two ways total. I have made very little progress with this. Some observations I've made are: * *If all score tuples are in order, such that $x_i < x_{i+1}$ and $y_i < y_{i+1}$, then it's only possible to rank them in one way. For example $(1, 1)$, $(2, 3)$, $(6, 8)$ has only one possible order, no matter how we choose $X$ and $Y$. *If we have N people and all the tuples are equal, then the answer should be $N!$. *If we have two people with $(6, 10)$ and $(10, 6)$, then there are two ways of ranking them. Also in the case of $(1, 1)$, $(2, 2)$, $(6, 10)$, $(10, 6)$, $(20, 20)$, we can tell that some tuples are in order (following observation 1), and the ones in the middle are the same as observation 3, so the total ways of ranking them is also 2. The two possible ways are: $(1, 1)$, $(2, 2)$, $(6, 10)$, $(10, 6)$, $(20, 20)$ $(1, 1)$, $(2, 2)$, $(10, 6)$, $(6, 10)$, $(20, 20)$ I kind of think one way would be to discard all tuples that don't generate any "conflict" (in the sense of observation 1) and keep only the ones that could be ranked in many ways, and then do some kind of formula, but I'm not capable of finding it as of yet. Note: I'm not 100% sure if my observations are correct.
Imagine the points $(x_i,y_i)$ in the plane, and consider the family of lines orthogonal to the normal vector $(X,Y)$. These are the lines with slope $-\frac XY$, so you can choose any negative rational slope $m$ and the points get ranked according to which of the family of lines with slope $m$ they’re on. Now let $n_m$ be the number of lines with (negative rational) slope $m$ containing more than one point, and let $n_{mk}$ with $1\le k\le n_m$ be the number of points on the $k$-th line with slope $m$. Points that lie on the same line have the same score, so if you choose one of the “ambiguous” values for $m$ such that $n_{mk}$ points lie on a line, there are $n_{mk}!$ different choices for ordering those points, and if $n_m$ lines have more than one point, an order can be freely chosen on each line independently, so there are $\prod_{k=1}^{n_m}n_{mk}!$ different orders for this value of $m$. One of these orders is the same as the one corresponding to slope $m+\epsilon$, and one is the same as the one corresponding to slope $m-\epsilon$, so if we add all these contributions, we’re double-counting all the unambiguous orders between the ambiguous slopes, so we have to subtract $1$ for each ambiguous slope and then add $1$, so the number of different orders is $$ 1+\sum_m\left(\prod_{k=1}^{n_m}n_{mk}!-1\right)\;. $$
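For the example in the question with points $(1,1),(2,2),(6,10),(10,6),(20,20)$, the only ambiguous slope is $m=-1$, with one line containing two points, so the formula gives $1+(2!-1)=2$; brute-force enumeration over a grid of weights agrees (the grid bound is an assumption that suffices here, since the only tie is at $X=Y$):

```python
from itertools import permutations

points = [(1, 1), (2, 2), (6, 10), (10, 6), (20, 20)]

def orders(X, Y):
    s = [X*x + Y*y for x, y in points]
    # every ranking consistent with the scores (ties broken arbitrarily)
    return {p for p in permutations(range(len(points)))
            if all(s[p[i]] <= s[p[i+1]] for i in range(len(p) - 1))}

seen = set()
for X in range(1, 41):
    for Y in range(1, 41):
        seen |= orders(X, Y)
print(len(seen))   # 2
```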
{ "language": "en", "url": "https://math.stackexchange.com/questions/4405188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Cumulative distribution of a martingale Let $(M_n)_{n\in\mathbb{N}}$ be a martingale with respect to a filtration $(\mathcal{F}_n)_{n\in\mathbb{N}}$. We can define the bracket of $M_n$ by $$\langle M\rangle_n=\sum\limits_{k=0}^{n-1}\mathbb{E}\left[(M_{k+1}-M_k)^2|\mathcal{F}_k \right].$$ Morally, $\langle M \rangle_n$ is of the same order as $M_n^2$. This idea can be made precise thanks to the BDG inequality, which says that, for every $p>1$ (actually, any $p>0$ for continuous martingales.), there exists universal constants $c_p$ and $C_p$ such that for every $n\in\mathbb{N}$, $$c_p\mathbb{E}\left[\langle M \rangle_n^{p/2}\right]\leq \mathbb{E}\left[(M_n^{*})^p \right]\leq C_p\mathbb{E}\left[\langle M \rangle_n^{p/2}\right]$$ where $M_n^*=\underset{1\leq k\leq n}\max|M_k|$. Therefore, my question is the following one: Is this possible to compare $\mathbb{P}\left(M_n^*> \lambda\right)$ and $\mathbb{P}(\langle M\rangle_n>\lambda^2)$? Or in the case where $M$ and its bracket converge almost surely, $\mathbb{P}\left(|M_{\infty}-M_n|> \lambda\right)$ and $\mathbb{P}\left(\langle M\rangle_{\infty}-\langle M\rangle_n>\lambda^2 \right)$?
Regarding your question, the following inequality of Lenglart is useful (cf. J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, 2nd ed., Springer, 2003, Lemma I.3.30, p. 35). Since $M^2$ is a càdlàg adapted process which is L-dominated by the predictable increasing process $\langle M\rangle$ (i.e. $\mathsf{E}[M^2_T] = \mathsf{E}\langle M \rangle_T$ for every bounded stopping time $T$), Lenglart's inequality gives, for all stopping times $T$ and all $\epsilon, \eta>0$, \begin{gather*} \mathsf{P}(M^*_T>\epsilon)\le \frac{\eta}{\epsilon^2} + \mathsf{P}(\langle M\rangle_T \ge \eta),\\ \mathsf{P}\Big(\sup_{m\ge n}|M_m-M_n|>\epsilon\Big)\le \frac{\eta}{\epsilon^2} + \mathsf{P}(\langle M\rangle_\infty-\langle M\rangle_n \ge \eta). \end{gather*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/4405513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An open set is contained in an open interval I am wondering if for an arbitrary open set $A$ in $\mathbb{R}$, there exist an open interval such that $A$ is contained in that interval. My guess is that this is true, and that this interval would be $(\inf(A), \sup(A))$, where the infimum and supremum can go to infinity. Is this correct?
Clearly $A\subseteq[\inf A,\sup A]$. On the other hand, $\sup A\notin A$, because otherwise you'd have $(\sup A-\varepsilon,\sup A+\varepsilon)\subseteq A$ for some $\varepsilon>0$, contradicting $\sup A$ being the supremum of $A$. Similarly $\inf A\notin A$, so we have $A\subseteq(\inf A,\sup A)$. Can there be a smaller interval containing $A$? It would be $(u,v)$ where either $u>\inf A$ or $v<\sup A$. Either case leads to a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4405935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In a group endomorphism between two sets, must the binary operation of the two sets be the same? Let $(G,\ast)$ and $(G,\cdot)$ be any two groups, where $\ast$ and $\cdot$ are distinct binary operations. Suppose $\phi:G\to G$ satisfies $$ \phi(x)\cdot \phi(y)=\phi(x\ast y) $$ for all $x,y\in G$. Is $\phi$ a group endomorphism? In other words, for a homomorphism $\phi:G\to H$ to be an endomorphism, do we simply require that the set $G$ equals $H$, or do we also require that the binary operations are equal as functions?
An endomorphism of $(G,*)$ is, by definition, a homomorphism from $(G,*)$ to $(G,*)$. Thus, in your formulation, we require $* = \cdot$ (as functions $G\times G \to G$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4407042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove : $\sqrt{\dfrac{ab}{bc^2+1}}+\sqrt{\dfrac{bc}{ca^2+1}}+\sqrt{\dfrac{ca}{ab^2+1}}\le\dfrac{a+b+c}{\sqrt{2}}$ Let $a,b,c>0$ satisfy $abc=1$, prove that: $$\sqrt{\dfrac{ab}{bc^2+1}}+\sqrt{\dfrac{bc}{ca^2+1}}+\sqrt{\dfrac{ca}{ab^2+1}}\le\dfrac{a+b+c}{\sqrt{2}}$$ My attempt: Let $a=\dfrac{1}{x};b=\dfrac{1}{y};c=\dfrac{1}{z}$, we have $xyz=1$ and using $abc=1$, the inequality can be written as: $$\dfrac{x}{\sqrt{x+y}}+\dfrac{y}{\sqrt{y+z}}+\dfrac{z}{\sqrt{z+x}}\le \dfrac{xy+yz+zx}{\sqrt{2}}$$ I'm trying to use Cauchy-Schwarz: $$LHS\le\sqrt{(x+y+z)(\dfrac{x}{x+y}+\dfrac{y}{y+z}+\dfrac{z}{z+x})}$$ but now I have to prove $$\dfrac{x}{x+y}+\dfrac{y}{y+z}+\dfrac{z}{z+x}\le\dfrac{3}{2}$$ because $ab+bc+ca\ge\sqrt{3(a+b+c)}$, but I can't prove it. Can anyone give me a hint? Not necessarily a complete solution. By the way, I also recalled a problem that seems quite similar to the above one: $\sqrt{\frac{2 x}{x+y}}+\sqrt{\frac{2 y}{y+z}}+\sqrt{\frac{2 z}{z+x}} \leq 3$ if $x,y,z>0$ (Vasile Cirtoaje), and then we could use $3\le xy+yz+zx$? Hope it helps.
Remark: As Calvin Lin pointed out, we can just deal with $a, b, c$, without the substitutions. We have \begin{align*} &\sum_{\mathrm{cyc}} \sqrt{\frac{ab}{bc^2 + 1}} \\ =\,& \sum_{\mathrm{cyc}} \sqrt{\frac{ab ab}{(bc^2 + 1)ab}}\\ =\,& \sum_{\mathrm{cyc}}\frac{ab}{\sqrt{ab + bc}}\\ \le\,& \sqrt{(ab + bc + ca)\left(\frac{ab}{ab + bc} + \frac{bc}{bc + ca} + \frac{ca}{ca + ab}\right)} \tag{1}\\[5pt] =\,&\sqrt{\frac{(ab + bc + ca)ab}{ab + bc} + \frac{(ab + bc + ca)bc}{bc + ca} + \frac{(ab + bc + ca)ca}{ca + ab}}\\[5pt] =\,& \sqrt{ab + \frac{ca^2}{a + c} + bc + \frac{ab^2}{b + a} + ca + \frac{bc^2}{c + b}}\\[5pt] \le\,& \sqrt{ab + \frac{\frac{(a + c)^2}{4}a}{a + c} + bc + \frac{\frac{(b + a)^2}{4}b}{b + a} + ca + \frac{\frac{(b + c)^2}{4}c}{c + b}}\tag{2}\\[5pt] =\,&\sqrt{\frac{1}{4}(a^2 + b^2 + c^2) + \frac54(ab + bc + ca)} \end{align*} where we have used the Cauchy-Bunyakovsky-Schwarz inequality in (1), and $ca \le \frac{(c + a)^2}{4}$ etc. in (2). It suffices to prove that $$\frac{(a + b + c)^2}{2} \ge \frac{1}{4}(a^2 + b^2 + c^2) + \frac54(ab + bc + ca)$$ or $$a^2 + b^2 + c^2 \ge ab + bc + ca$$ which is true. We are done.
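A randomized numerical check of the inequality (not a proof, just a sanity test over the constraint $abc=1$; note equality holds at $a=b=c=1$):

```python
import math, random

random.seed(0)
for _ in range(10_000):
    a = math.exp(random.uniform(-2, 2))
    b = math.exp(random.uniform(-2, 2))
    c = 1.0 / (a * b)                      # enforce abc = 1
    lhs = (math.sqrt(a*b / (b*c*c + 1))
           + math.sqrt(b*c / (c*a*a + 1))
           + math.sqrt(c*a / (a*b*b + 1)))
    assert lhs <= (a + b + c) / math.sqrt(2) + 1e-12
print("no counterexample found")
```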
{ "language": "en", "url": "https://math.stackexchange.com/questions/4407185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Comparing a %increase with a %decrease I'm struggling to compare the %increase/decrease in the average homicide rate of a select group of cities to the state homicide rate. For example, if the state saw a 4% decrease (-4) in the homicide rate over a two-year period but the average homicide rate in my group of comparison cities increased by 2%(+2), how much bigger was the increase in my group of comparison cities compared to the state? Is it possible to make a statement along these lines: "The average homicide rate of the comparison group increased X times more than the state rate over the same period." It can't be this |-4|/2 = 2 because it's definitely more than double. Appreciate any help!
Imagine that the state initially had a homicide rate of $100$. A $4$ percent decrease would be $96$. If we instead pretend that the state behaved like the group of comparison cities, then a $2$ percent increase would be $102$. Overall, the ratio between the two scenarios is $\frac{102}{96} = 1.0625$, which is a $6.25$ percent increase.
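The computation, spelled out (any starting value works, since only the ratio matters):

```python
state_before = 100.0
state_after = state_before * (1 - 0.04)    # 4% decrease -> 96.0
cities_after = state_before * (1 + 0.02)   # 2% increase -> 102.0
ratio = cities_after / state_after
print(ratio)   # 1.0625, i.e. a 6.25% relative increase
```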
{ "language": "en", "url": "https://math.stackexchange.com/questions/4407375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Countable (Willard) subnet in first-countable space Let $X$ be a first-countable space, and let $\mathcal B_x=\{B_1,B_2,\dots\}$ be a local base at $x\in X$ satisfying $B_1\supset B_2\supset\dots$. Given a net $(x_\alpha)_{\alpha\in A}$ that converges to $x$, I want to show that it has a convergent subsequence (that is, a subnet whose index set is $\mathbf N$). In more detail, we want to find a monotone cofinal map $\phi\colon\mathbf N\to A$ such that $(x_{\phi(n)})_{n\in\mathbf N}$ is a sequence that converges to $x$. I tried to use the convergence of the net to define a sequence as follows. Choose $\phi(1)\in A$ such that $x_\alpha\in B_1$ whenever $\alpha\ge\phi(1)$. Then choose $\phi(2)\in A$ such that $\phi(2)\ge\phi(1)$ and $x_\alpha\in B_2$ whenever $\alpha\ge\phi(2)$. Continuing in this way, we define a monotone sequence $\phi$. But I am stuck here — I do not know how to prove that $\phi$ is cofinal (and I am not even sure if it is). How should I proceed? Thank you.
This is not true. For instance, you could just take a constant net with value $x$ with an index set that has no countable cofinal subset (e.g., $\omega_1$). Such a net has no subsequences at all. There are less trivial examples as well, which show there is no easy fix by disallowing (say) nets that are eventually in the intersection of all neighborhoods of $x$. For instance, the index set of our net could be $\omega\times\omega_1$ with the product order, with $x_{(n,\alpha)}\in B_n\setminus B_{n+1}$ for all $(n,\alpha)\in\omega\times\omega_1$. Such a net will converge to $x$, since it is eventually in each $B_n$. However, this net still has no subsequences at all: no countable subset of the index set is cofinal, since any countable subset is bounded on the second coordinate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4407543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I show monotonicity of this function defined on the space of random variables $RV(\Omega)$? I have the function $f_{\lambda}:RV(\Omega)\rightarrow \mathbb{R}$ defined on the space $RV(\Omega)$ supported over some scenario set $\Omega$: $f_{\lambda}(X):=\frac{1}{\lambda}\log(\mathbb{E}[e^{-\lambda X}])$ , where $\lambda>0$. Now, I want to show that this function is monotone. In order to do this I think I have to show that for any $X,Y\in RV({\Omega})$ with $X(\omega)\leq Y(\omega)$ $\forall\omega\in\Omega$ I have that: $f_{\lambda}(X)\geq f_{\lambda}(Y)$ However, I am not sure how to show that this inequality holds. Any ideas?
Note that \begin{align} f_\lambda(X)-f_\lambda(Y) =&\frac{1}{\lambda}\log \left( \frac{\mathbb{E}(e^{-\lambda X})}{\mathbb{E}(e^{-\lambda Y})}\right) \\ \geq & \frac{1}{\lambda} \log 1 \geq 0 \, , \end{align} where in the penultimate inequality I have used the fact that $X \leq Y$ and $\lambda>0$ implies that $e^{-\lambda X}\geq e^{-\lambda Y}$ for all $\omega \in \Omega$.
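For a quick empirical illustration with finite scenario sets (the helper below is mine; $Y$ dominates $X$ pointwise by construction):

```python
import math, random

def f_lam(values, probs, lam):
    # f_lambda(X) = (1/lam) * log(E[exp(-lam*X)]) for a discrete X
    return math.log(sum(p * math.exp(-lam * v)
                        for v, p in zip(values, probs))) / lam

random.seed(7)
for _ in range(1000):
    k = 5
    probs = [1.0 / k] * k
    x = [random.uniform(-3, 3) for _ in range(k)]
    y = [xi + random.uniform(0, 2) for xi in x]   # Y >= X pointwise
    lam = random.uniform(0.1, 5)
    assert f_lam(x, probs, lam) >= f_lam(y, probs, lam) - 1e-12
print("monotonicity holds on all samples")
```

Note also that a constant $X\equiv c$ gives $f_\lambda(X)=-c$, which is a handy spot check.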
{ "language": "en", "url": "https://math.stackexchange.com/questions/4407741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluation under tensor products Could anyone explain to me the highlighted step in this calculation? Can I just evaluate the tensor product component-wise? The first step is basically $$(ev_{V}\otimes id_{V})\circ(id_{V}(v)\otimes coev_{V}(v))=(ev_{V}\otimes id_{V})(v^ie_{i}\otimes(\epsilon^{j}\otimes e_{j}))$$ and then the second equality would look like $$(ev_{V}\otimes id_{V})(v^ie_{i}\otimes(\epsilon^{j}\otimes e_{j}))=(ev_{V}((v^ie_{i}\otimes(\epsilon^{j}\otimes e_{j})))\otimes id_{V}((v^ie_{i}\otimes(\epsilon^{j}\otimes e_{j}))))$$. But I think I'm wrong there.
Yes, you can calculate the tensor product "component-wise", i.e. if $f,g$ are maps, then $(f \otimes g)(x \otimes y) = f(x) \otimes g(y)$. In your case we get that \begin{align} (\text{ev}_V \otimes \text{id}_V)(v^ie_i \otimes \epsilon^j \otimes e_j) &= (\text{ev}_V \otimes \text{id}_V)\left((v^ie_i \otimes \epsilon^j) \otimes e_j\right)\\ &= \text{ev}_V(v^ie_i \otimes \epsilon^j) \otimes \text{id}_V(e_j)\\ &= \epsilon^j(v^ie_i) \otimes e_j\\ &= v^i\epsilon^j(e_i) \otimes e_j = v^i \delta_i^j \otimes e_j = v^ie_i = v. \end{align} I hope this is helpful!
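In coordinates the whole computation is tiny; here is a direct numerical transcription (a sketch for $V=\mathbb{R}^3$, with both $e_j$ and $\epsilon^j$ represented by standard basis vectors):

```python
n = 3
v = [2.0, -1.0, 5.0]
basis = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]

def ev(vec, phi):
    # ev_V(x ⊗ eps) = eps(x), with eps given by its coefficients
    return sum(p * x for p, x in zip(phi, vec))

# (ev_V ⊗ id_V)(v ⊗ sum_j eps^j ⊗ e_j), computed component-wise
out = [0.0] * n
for j in range(n):
    scalar = ev(v, basis[j])                      # eps^j(v) = v^j
    out = [o + scalar * e for o, e in zip(out, basis[j])]
print(out == v)   # True: we recover v, as in the snake identity
```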
{ "language": "en", "url": "https://math.stackexchange.com/questions/4407874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do you evaluate: $\int _{0}^{\infty} \frac{\log x}{e^x+e^{-x}+1} \ \mathrm dx$ I want to find the value of $\displaystyle \tag*{} \int _{0}^{\infty} \frac{\log x}{e^x+e^{-x}+1} \ \mathrm dx$ At first, I solved this elementary integral: $\displaystyle \tag*{} \int _{0}^{\infty} \frac{\log x}{e^x+e^{-x}} \ \mathrm dx$ Using the same method, I couldn't find my asked integral. Are there any ways to connect them? Any help would be appreciated.
A similar problem was posted on AoPS the other day, and there are nice answers there. For the sake of completeness, I would like to add the solution, based on an approach developed by Yaroslav Blagouchine; it is convenient for solving problems with a specific symmetry by means of integration along a rectangular contour in the complex plane. Let $$I=\int_0^\infty \frac{\ln{x}}{e^{x}+e^{-x}+1}dx=\int_0^\infty \frac{\ln{x}}{2\cosh x+1}dx$$ $$=\lim_{a\to0}\frac{1}{2}\int_0^\infty\frac{\ln(x^2+a^2)}{2\cosh x+1}dx=\frac{1}{4}\lim_{a\to0}\Re\int_{-\infty}^\infty\frac{\ln(a-ix)}{\cosh x+\frac{1}{2}}dx$$ Let's consider $$I(a)=\frac{1}{4}\int_{-\infty}^\infty\frac{\ln(a-ix)}{\cosh x+\frac{1}{2}}dx=\frac{\pi}{2}\int_{-\infty}^\infty\frac{\ln(a-2\pi i t)}{\cosh 2\pi t+\frac{1}{2}}dt$$ $$=\frac{\pi}{2}\ln2\pi\int_{-\infty}^\infty\frac{dt}{\cosh 2\pi t+\frac{1}{2}}+\frac{\pi}{2}\int_{-\infty}^\infty\frac{\ln\big(\frac{a}{2\pi}-it\big)}{\cosh 2\pi t+\frac{1}{2}}dt=I_1+I_2(a)$$ Now we move to the complex plane and consider the closed rectangular contour $-R\,\to R\,\to (R+i)\,\to (-R+i)\,\to -R;\,\,R\to \infty$, traversed counterclockwise, and the following integral along this contour. The integrand has simple poles at the points $z=\frac{i}{3}$ and $z=\frac{2i}{3}$. $$\frac{\pi}{2}\ln2\pi\oint\frac{e^{i\beta z}}{\cosh 2\pi z+\frac{1}{2}}dz=I_1(\beta)\big(1-e^{-\beta}\big)=\frac{\pi}{2}\ln2\pi \,2\pi i\operatorname{Res}_{\binom{\frac{i}{3}}{\frac{2i}{3}}}\frac{e^{i\beta z}}{\cosh 2\pi z+\frac{1}{2}}$$ $$I_1(\beta)\big(1-e^{-\beta}\big)=\frac{\pi}{2}\ln2\pi\frac{2\pi i}{\sqrt 3\pi i}\Big(e^{-\frac{\beta}{3}}-e^{-\frac{2\beta}{3}}\Big)$$ (We also have to add the side integrals, along $R\to R+i$ and $-R+i\to-R$, but these integrals $\to 0$ as $R\to\infty$.)
Taking the limit $\beta\to 0$, we find $$\boxed{\,\,I_1(0)=I_1=\frac{\pi}{3\sqrt 3}\ln2\pi\,\,}$$ To evaluate $I_2(a)$, we notice that $$\frac{\ln\big(\frac{a}{2\pi}-it\big)}{\cosh 2\pi t+\frac{1}{2}}=\frac{\ln\Gamma\big(\frac{a}{2\pi}-it+1\big)-\ln\Gamma\big(\frac{a}{2\pi}-it\big)}{\cosh 2\pi t+\frac{1}{2}}$$ $$=\frac{\ln\Gamma\big(\frac{a}{2\pi}-i(t+i)\big)}{\cosh 2\pi (t+i)+\frac{1}{2}}\,-\,\frac{\ln\Gamma\big(\frac{a}{2\pi}-it\big)}{\cosh 2\pi t+\frac{1}{2}}$$ (We used the facts that $\ln\Gamma(z+1)-\ln\Gamma(z)=\ln z$ and $\cosh (x+2\pi i)=\cosh x$.) Adding the two side integrals (along $R\to R+i$ and $-R+i\to-R$; these integrals $\to 0$ as $R\to\infty$), we can present $I_2(a)$ in the form of an integral along the same rectangular contour $$I_2(a)=-\frac{\pi}{2}\oint\frac{\ln\Gamma\big(\frac{a}{2\pi}-iz\big)}{\cosh 2\pi z+\frac{1}{2}}dz=-\frac{\pi}{2}\,2\pi i\operatorname{Res}_{\binom{\frac{i}{3}}{\frac{2i}{3}}}\frac{\ln\Gamma\big(\frac{a}{2\pi}-iz\big)}{\cosh 2\pi z+\frac{1}{2}}=-\frac{\pi}{\sqrt 3}\Big(\ln\Gamma\big(\frac{a}{2\pi}+\frac{1}{3}\big)-\ln\Gamma\big(\frac{a}{2\pi}+\frac{2}{3}\big)\Big)$$ $$\boxed{\,\,I_2(a)=\frac{\pi}{\sqrt 3}\ln\frac{\Gamma\big(\frac{a}{2\pi}+\frac{2}{3}\big)}{\Gamma\big(\frac{a}{2\pi}+\frac{1}{3}\big)}\,\,}$$ Coming back to our initial integral $$I=I_1+\Re\,I_2(0)=\frac{\pi}{3\sqrt 3}\ln2\pi+\frac{\pi}{\sqrt 3}\ln\frac{\Gamma\big(\frac{2}{3}\big)}{\Gamma\big(\frac{1}{3}\big)}$$ Using the reflection formula for the gamma function, $\Gamma\big(\frac{2}{3}\big)\Gamma\big(\frac{1}{3}\big)=\frac{\pi}{\sin\frac{\pi}{3}}=\frac{2\pi}{\sqrt3}$, $$\boxed{\,\,I=\frac{\pi}{3\sqrt 3}\ln2\pi+\frac{\pi}{\sqrt 3}\ln\frac{2\pi}{\sqrt3}-\frac{2\pi}{\sqrt 3}\ln\Gamma\Big(\frac{1}{3}\Big)=-0.126321...\,\,}$$
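As a sanity check on the boxed closed form (my own verification, not part of the original answer), one can compare it against direct numerical quadrature. The sketch below substitutes $x=e^t$ to tame the logarithmic singularity at $0$ and uses a composite Simpson rule from the standard library only:

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

# substitute x = e^t: the integrand t*e^t / (e^{e^t} + e^{-e^t} + 1) is smooth,
# and the tails outside [-30, 4] are negligible at this tolerance
g = lambda t: t * math.exp(t) / (math.exp(math.exp(t)) + math.exp(-math.exp(t)) + 1)
numeric = simpson(g, -30.0, 4.0, 40000)

closed = (math.pi / (3 * math.sqrt(3)) * math.log(2 * math.pi)
          + math.pi / math.sqrt(3) * math.log(2 * math.pi / math.sqrt(3))
          - 2 * math.pi / math.sqrt(3) * math.lgamma(1 / 3))

assert abs(numeric - closed) < 1e-5
```

Both sides come out near $-0.126321$, matching the final box.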
{ "language": "en", "url": "https://math.stackexchange.com/questions/4408018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Lambda Calculus Xor How do you prove $\mathsf{xor} \, \mathsf{True}\, \mathsf{True}$ is false in lambda calculus using call-by-value reduction. This is the approach I tried but it is not working: $$\mathsf{xor} \equiv \lambda xy.x(y F T) y$$ $$\mathsf{xor} \, T T \equiv (\lambda xy.x(y F T)y) T T$$ $$\mathsf{xor}\, T T \equiv (\lambda xy.x(y (\lambda x y.y) (\lambda x y.x))y) T T$$ Then by beta reduction of leftmost inner $$\mathsf{xor}\, T T \equiv (\lambda xy.x(y (\lambda y.y) )y) T T$$ But I feel there is a mistake somewhere as the steps above did not lead me to the expected answer.
You are mistaken. The subterm $yFT$ does not contain any $\beta$-redex. Indeed, every term of the form $MNL$ must be read as $(MN)L$, and not as $M(NL)$ (technically, the application is said to be left-associative). This means that $yFT = (yF)T = (y (\lambda xy.y))\lambda xy.x$ and so there is no $\beta$-redex to fire. Therefore, in the term $\mathsf{xor}\, T T = (\lambda xy.x(y (\lambda x y.y) (\lambda x y.x))y) T T$ there is only one $\beta$-redex, made up of the subterm from the first occurrence of $\lambda x$ to the first occurrence of $T$. You can easily see that the leftmost-innermost reduction from $\mathsf{xor}\, T T $ yields $F$, as expected.
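Since Python evaluates call-by-value, the Church encodings transliterate directly into Python lambdas, so the whole truth table can be confirmed mechanically (the `decode` helper is just my convenience for inspecting a Church boolean):

```python
T = lambda x: lambda y: x            # Church True  = λxy.x
F = lambda x: lambda y: y            # Church False = λxy.y
xor = lambda x: lambda y: x(y(F)(T))(y)   # xor = λxy. x (y F T) y

# decode a Church boolean by applying it to two marker values
decode = lambda b: b("T")("F")

assert decode(xor(T)(T)) == "F"      # the case from the question
assert decode(xor(T)(F)) == "T"
assert decode(xor(F)(T)) == "T"
assert decode(xor(F)(F)) == "F"
```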
{ "language": "en", "url": "https://math.stackexchange.com/questions/4408165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
why do we need a sequence of random variables, isn't one function sufficient? In sampling, we have many situations involving a sequence of random variables. What confuses me is why we need a sequence of random variables to describe the process; it feels like each function is only used once. Suppose $$X_i:\Omega\to\mathbb{R}\quad,i\in\mathbb{N}$$ $X_1,X_2,X_3,\ldots$ are basically just the $\mathbb{R}$-valued images, and each image has its corresponding function. Why don't we use a single random variable to describe those images? That also seems sufficient to describe the process; if not, what is the problem?
I don't see how this would make any difference. I think it's mostly due to notation and history. The notation $a_i$, used to indicate a sequence of concrete values, was already around, and when people were formalizing random variables they just generalized the concept. Whereas I'm not familiar with any notation for "the first value generated from random variable $X$", "the second value generated from random variable $X$", ... "the $n$th value generated from random variable $X$". I wonder if you're coming from a computer programming background, where there is some cost to create a "random variable", and so it seems wasteful to use it once and then throw it away. Also, computer programming is (generally) sequential, so you can only generate one sample at a time. But in math you can just as easily think of a whole sequence of random variables, and "generate" a single sample from all of them at once. Additionally, the idea that you "generate" values doesn't even really exist in the mathematical formalism - we can just as well think of the values as "already being there", but that they have a particular distribution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4408281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$A_1,...,A_n$ are independent $\iff \forall 1 \leq k\leq n. \mathbb{P}\left(\bigcap_{i=1}^{k}A_{i}\right)=\prod_{i=1}^{k}\mathbb{P}\left(A_{i}\right)$ Prove/Disprove: $ A_1,...,A_n $ are independent events If and only If for all $ 1 \leq k \leq n $ it occurs that $\mathbb{P}\left(\bigcap_{i=1}^{k} A_{i}\right)=\prod_{i=1}^{k} \mathbb{P}\left(A_{i}\right)$ Attempt: I think the above statement is false. the "$ \rightarrow $" implication is correct since the result will follow by definition, but the reverse is not necessarily true but I can't seem to make up a counter example. I tried making a dice counterexample but It think that was bad idea since most dice problems have independency rooted in them. So I tried the following example based on other examples I've encountered: Let there be a jar with $15$ balls, $5$ black,$5$ white,$5$ red. Let there be persons $a,b,c$. $c$ takes out a ball uniformly ( we don't know what ball he takes out ) and throws it into the garbage, then, $ a $ takes out a black ball and returns it and $b $ takes out a black ball and returns it. Denote $ A$ as the event person $a $ pulled out a black ball,$ B$ as the event person $b $ pulled out a black ball,$ C$ as the event person $c $ pulled out a black ball. Note that, $ P(A)=P(B) = 1/3 $ $ P(A) = P(A|C)P(C) + P(A|C^C)P(C^C) = (4/14)\cdot (1/3) + (5/14)\cdot (2/3) = 1/3 $ $ P(A\cap B \cap C) = P(B\cap C)P(A|B \cap C) $ ... ( I don't know If it is worth continuing, I don't see if my example helps me in the refutation) My attempt obviously will not work, I need a different example that will work... but I can't think up of anything, can you please help? I'm stuck on this problem for a long time.
The statement is true for $n=2$ but not for $n\geqslant 3$: take $A_1=\emptyset$ and $A_2=A_3=\dots=A_n=A$, a set whose probability lies in $(0,1)$. Then every prefix condition holds trivially (both sides are $0$ because $P(A_1)=0$), but the family is not independent, since $P(A_2\cap A_3)=P(A)\neq P(A)^2=P(A_2)P(A_3)$.
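A concrete finite instance of this counterexample can be checked by brute force. Here $\Omega=\{0,1\}$ is uniform, $A_1=\emptyset$, and $A_2=A_3=\{0\}$ (my choice of a set $A$ with probability $1/2$):

```python
import math

omega = [0, 1]                       # uniform two-point space
def P(event): return len(event) / len(omega)

A = [set(), {0}, {0}]                # A1 = empty set, A2 = A3 = A, P(A) = 1/2

# the prefix condition P(A1 ∩ ... ∩ Ak) = P(A1)...P(Ak) holds for every k
for k in range(1, len(A) + 1):
    inter = set(omega)
    for e in A[:k]:
        inter &= e
    assert abs(P(inter) - math.prod(P(e) for e in A[:k])) < 1e-12

# ...but A2 and A3 are not independent, so the family is not independent
assert P(A[1] & A[2]) != P(A[1]) * P(A[2])
```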
{ "language": "en", "url": "https://math.stackexchange.com/questions/4408612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integration by substitution, $\int_1^a f(x^s) dx = \int_1^{a^s} f(x) \dfrac{1}{sx^{1-1/s}} dx$ I am trying to show that, for a function $f: \mathbb{R} \to \mathbb{R}$, and any $s>0$, $a>1$, we have that $$ \int_1^a f(x^s) dx = \int_1^{a^s} f(x) \dfrac{1}{sx^{1-1/s}} dx.$$ I am trying to use the following result (integration by substitution theorem): $$\int_c^d f(t) dt = \int_{\phi^{-1}(c)}^{\phi^{-1}(d)}f(\phi(t)) \phi'(t) dt,$$ where $\phi$ is a continuously differentiable bijective function. My attempt is failing as follows: Let $\phi(t)=t^{1/s}$. Then $\phi(1)=1$ and $\phi(a^s)=a$. Also, $\phi'(t)=\dfrac{1}{s}t^{1/s-1}.$ At this point, I go awry: $$ \int_1^a f(x^s) dx = \int_1^{a^s} f(\phi(x^s)) \phi'(x^s) dx = \int_1^{a^s}f(x)\dfrac{1}{s}(x^s)^{1/s-1} dx.$$ This is incorrect. I have tried to rectify it by using: $\phi(x^s)=x$ and so $\phi'(x^s) \times sx^{s-1}=1$. However, this gives $$\int_1^{a^s} f(x) \dfrac{1}{sx^{s-1}}dx.$$ Both attempts I have made are incorrect. Can someone help me understand how I am going wrong here? I am grateful for your help. Thank you.
Note the following: In your case $\phi(t) = t^s$. So write: $$ \int^a_1 f(x^s)~\mathrm{d}x = \int^a_1 f(\phi(t))~\mathrm{d}t = \int^a_1 f(\phi(t)) \frac{\phi'(t)}{\phi'(t)}~\mathrm{d}t = \int^a_1 g(\phi(t))\phi'(t)~\mathrm{d}t $$ We had set $g(\phi(t)) := \frac{f(\phi(t))}{\phi'(t)}$. This seems a little weird at first, but we will just accept it as of now. NOW use substitution $u = t^s$: $$ \int^a_1 g(\phi(t))\phi'(t)~\mathrm{d}t = \int^{a^s}_1 g(u)~\mathrm{d}u $$ The difficulty is to write $\frac{f(\phi(t))}{\phi'(t)}$ in terms of $u$. Take a look: $$ \frac{f(\phi(t))}{\phi'(t)} = \frac{f(t^s)}{st^{s-1}} = \frac{f(u)}{st^{s-1}} $$ Since $u=t^s$, we get $t = \sqrt[s]{u}$. So: $$ \frac{f(u)}{st^{s-1}} = \frac{f(u)}{su^{1-\frac{1}{s}}} $$ This means: $$ \int^a_1 f(x^s)~\mathrm{d}x = \int^{a^s}_1 \frac{f(u)}{su^{1-\frac{1}{s}}}~\mathrm{d}u $$ This way - and only this way - is how substitution works. The delicate part is the reciprocal derivative that usually (also in this case) appears. You have to artificially add it in order to be able to use the formula. Your choice of $u$ was wrong in the first place. And if you call a variable $u$, you should stick to its name all the way.
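For extra reassurance that the identity comes out right, here is a quick numerical check with my own sample choices ($f=\cos$, $s=2$, $a=1.5$) and a composite Simpson rule:

```python
import math

def simpson(f, a, b, m=2000):
    """Composite Simpson rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

f, s, a = math.cos, 2.0, 1.5
lhs = simpson(lambda x: f(x ** s), 1, a)                       # ∫_1^a f(x^s) dx
rhs = simpson(lambda x: f(x) / (s * x ** (1 - 1 / s)), 1, a ** s)  # substituted form

assert abs(lhs - rhs) < 1e-8
```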
{ "language": "en", "url": "https://math.stackexchange.com/questions/4408762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question about conjugate elements of a group Let $G$ be a group and let $a,b,x,y$ be elements of $G$. If $ab=ba$, $xy=yx$, $a$ is conjugate to $x$, $b$ is conjugate to $y$, and $(o(a),o(b))=1$, can we conclude that $ab$ is conjugate to $xy$? I can't come up with a counterexample.
The question was basically answered in the comments, so I will expand them to an answer. If $a$ is conjugate to $x$, say $a^g = x$, then $(ab)^g = a^g b^g = b^g a^g$. So by replacing $a$ with $a^g$, we may assume that $a =x$. So the question is the following. Suppose that $xy = yx$ and $xy' = y'x$, where $y$ and $y'$ are conjugate in $G$, and $\gcd(o(x), o(y)) = 1$. Must $xy$ and $xy'$ be conjugate in $G$? Well, in this situation $xy$ and $xy'$ are conjugate if and only if $y,y'$ are conjugate in $C_G(x)$. (I leave the proof of this fact to the reader of this answer.) So to find a counterexample, look for $y,y' \in C_G(x)$ which are conjugate in $G$ but not in $C_G(x)$. There are many such examples. You can find some in a suitable dihedral group $G$: take $x$ to be an element in the cyclic subgroup of index $2$ in $G$.
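Following the dihedral hint, one concrete instantiation (the specific choices below are mine) lives in the dihedral group of the 12-gon, of order 24: take $x=r^4$ (order 3), $y=r^3$, $y'=r^{-3}=r^9$ (order 4). Then $y$ and $y'$ are conjugate via a reflection but not in $C_G(x)=\langle r\rangle$, and indeed $xy=r^7$ is not conjugate to $xy'=r$. A brute-force check with permutations:

```python
from math import gcd

n = 12
def compose(p, q): return tuple(p[q[i]] for i in range(n))
def inv(p):
    q = [0] * n
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

e = tuple(range(n))
r = tuple((i + 1) % n for i in range(n))      # rotation by one step
s = tuple((-i) % n for i in range(n))         # a reflection

def power(p, k):
    out = e
    for _ in range(k):
        out = compose(p, out)
    return out

# the dihedral group: 12 rotations and 12 reflections
G = [power(r, k) for k in range(n)] + [compose(power(r, k), s) for k in range(n)]

def order(p):
    k, q = 1, p
    while q != e:
        q, k = compose(p, q), k + 1
    return k

def conjugate(g, h):                          # are g and h conjugate in G?
    return any(compose(compose(t, g), inv(t)) == h for t in G)

x, y, y2 = power(r, 4), power(r, 3), power(r, 9)
assert compose(x, y) == compose(y, x) and compose(x, y2) == compose(y2, x)
assert conjugate(y, y2) and gcd(order(x), order(y)) == 1
assert not conjugate(compose(x, y), compose(x, y2))   # xy = r^7 vs xy' = r
```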
{ "language": "en", "url": "https://math.stackexchange.com/questions/4409074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I explain what the analytic function looks like? I have the following problem. We are given an analytic function $g$ such that for $|z-z_0|<R$ we have $$|g(z)|\leq M.$$ We assume that $g$ has a zero of order $m$ at $z_0$. I have shown that $$|g(z)|\leq \frac{M}{R^m}|z-z_0|^m~~~~~~~~~~(1)$$ Now I assume that in $(1)$ we have equality, and I need to describe what this function looks like. My claim was the following: Claim. Equality in $(1)$ holds for some $z\neq z_0$ iff $$g(z)=\lambda (z-z_0)^m$$ for some constant $\lambda \in \Bbb{C}$. I wanted to prove my claim. $\Leftarrow$ Assume that $\lambda\in \Bbb{C}$ is a constant and $g(z)=\lambda (z-z_0)^m$. Then since $|z-z_0|<R$ we have $|g(z)|<|\lambda| R^m =:M$. This means that $|\lambda|=\frac{M}{R^m}$. Then $$|g(z)|=|\lambda||z-z_0|^m=\frac{M}{R^m}|z-z_0|^m$$ as we wanted. $\Rightarrow$ Here assume that $$|g(z)|=\frac{M}{R^m}|z-z_0|^m$$ when $|z-z_0|<R$. I know that I need to show that $g(z)=\lambda(z-z_0)^m$, but I don't see why this is true. I thought maybe one could do it by contradiction, but then I don't see how I should choose my $g$. My prof. told me that my claim is correct, so I really only need to prove this direction. Could someone help me with it? Thanks for your help.
Assume $$|g(z)|\le {M\over R^m}|z-z_0|^m,\qquad |z-z_0|<R$$ and $$|g(w)|= {M\over R^m}|w-z_0|^m$$ for a point $w,$ $|w-z_0|<R.$ Consider the function $$h(z)={g(z)\over (z-z_0)^m},\qquad |z-z_0|<R.$$ This function is analytic in $|z-z_0|<R,$ as $z_0$ is a removable singularity. Moreover $$|h(z)|\le {M\over R^m},\qquad |h(w)|={M\over R^m}.$$ By the maximum modulus principle the function $h(z)$ is constant, i.e. $h(z)\equiv \lambda$ for a complex number $\lambda$ with $|\lambda|={M\over R^m}.$ Thus $g(z)=\lambda (z-z_0)^m.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4409224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that a polynomial with gaps with different roots under the following certain conditions is equal to zero I am wondering if the following conjecture is true: Let $f$ be a polynomial of the form $$f(x) = \sum_{k=0}^{p-1} c_k x^k + \sum_{k=n}^{n+q-1}c_k x^k.$$ Suppose that $y_1,\ldots,y_{p+q}$ are pairwise different numbers, $y_1,\ldots,y_{p+q}>0$, and $$f(y_j)=0\qquad (1\le j\le p+q).$$ Then $f=0$, i.e., $c_k=0$ for every $k$. Here is my attempt: We denote by $\lambda$ the following integer partition (integer tuple, decreasing in the non-strict sense): $$\lambda=(\,\underbrace{n-p,\ldots,n-p}_q\,,\,\underbrace{0,\ldots,0}_{p}\,).$$ The conditions $f(y_j)=0$, $1\le j\le p+q$, can be written as the following system of $p+q$ homogeneous linear equations in the variables $c_0,\ldots,c_{p-1},c_n,\ldots,c_{n+q-1}$: $$\sum_{k=1}^{p+q} y_j^{\lambda_k+p+q-k} c_{\lambda_k+p+q-k} = 0\qquad(1\le j\le p+q).$$ The determinant of this system equals $$D=\det \bigl[ y_j^{\lambda_k+p+q-k} \bigr]_{j,k=1}^{p+q}.$$ For example, for $p=2$ and $q=2$, the determinant is of the form $$ \begin{bmatrix} y_1^{n+1} & y_1^n & y_1 & 1 \\[1ex] y_2^{n+1} & y_2^n & y_2 & 1 \\[1ex] y_3^{n+1} & y_3^n & y_3 & 1 \\[1ex] y_4^{n+1} & y_4^n & y_4 & 1 \end{bmatrix}. $$ My next idea is to divide $D$ by the Vandermonde polynomial $\prod_{1\le j<k\le p+q}(y_j-y_k)$, but I do not know how to do it properly, nor what else I should do to show that the unique solution of the system is the trivial one. Thank you in advance for your help, friends.
Descartes' rule of signs says that for a polynomial with real coefficients (which is not identically zero) the number of positive roots is at most the number of sign changes in the sequence of the polynomial's coefficients (omitting the zero coefficients). Your polynomial $f(x) = \sum_{k=0}^{p-1} c_k x^k + \sum_{k=n}^{n+q-1}c_k x^k$ has at most $p+q$ non-zero coefficients, so there are at most $p+q-1$ sign changes. Therefore, if $f$ is not identically zero, it can have at most $p+q-1$ positive zeros, which means that your conjecture is true.
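As a quick numerical illustration of how the rule applies here (the sample gapped polynomial is my own choice, with $p=q=2$, $n=7$): counting sign changes in the coefficient list bounds the number of positive roots found by a crude grid scan.

```python
def sign_changes(coeffs):
    """Number of sign changes in the nonzero coefficients."""
    nz = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(nz, nz[1:]) if u * v < 0)

# f(x) = 3 - 5x + 2x^7 - x^9  (p = 2 low-order terms, q = 2 high-order terms)
coeffs = [3, -5, 0, 0, 0, 0, 0, 2, 0, -1]
def f(x): return sum(c * x ** k for k, c in enumerate(coeffs))

# count sign crossings of f on a fine grid of positive x
xs = [i / 1000 for i in range(1, 20000)]
vals = [f(x) for x in xs]
crossings = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

assert sign_changes(coeffs) == 3          # = p + q - 1
assert crossings <= sign_changes(coeffs)  # Descartes' bound on positive roots
```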
{ "language": "en", "url": "https://math.stackexchange.com/questions/4409831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Scaling down the imaginary parts of the eigenvalues of a matrix Let $A$ be an $n$-by-$n$ complex matrix. Is there a transformation that preserves the real parts of the eigenvalues of $A$ but scales down the imaginary parts of the eigenvalues of $A$? Actually, I want a matrix that keeps the real parts of the eigenvalues of the original matrix but has the sum of the absolute values of the imaginary parts of the original eigenvalues minimized or reduced.
It depends what you mean by "transformation". For instance, you can use the expression $$B=\dfrac{1}{2}P(J+\bar J)P^{-1}$$ where $A=PJP^{-1}$ and $J$ is the Jordan form of $A$ and $\bar J$ is the complex conjugate of $J$ (componentwise). In this case, $B$ will have real eigenvalues which are equal to the real part of the eigenvalues of $A$. An interesting point of this transformation is that it preserves the structure of the matrix by keeping its eigenvectors and its Jordan structure. Only the eigenvalues are changed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4409975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\sum{\frac{1}{(k+1)^\alpha}\frac{1}{(n+1-k)^\beta}}\le K\frac{1}{(n+1)^\alpha}$ I want to prove that $$\sum_{k=0}^{n}{\frac{1}{(k+1)^\alpha}\frac{1}{(n+1-k)^\beta}}\le K\frac{1}{(n+1)^\alpha}$$ for all $n\ge 0$, where $1<\alpha\le\beta$ and $K$ is a constant. In everything I tried, I always ended up with the constant depending on $n$, which cannot happen. Also, I would prefer not to use induction; is that possible?
For all $n\geq 1$ you have $$\sum_{k=0}^n \frac{(n+1)^\alpha}{(k+1)^\alpha(n+1-k)^\beta} \leq \sum_{k=0}^n \frac{(n+1)^\alpha}{(k+1)^\alpha(n+1-k)^\alpha} = \sum_{k=0}^n \frac{1}{\left[(k+1)\left(1-\frac{k}{n+1}\right)\right]^\alpha} \, .$$ Since the summand is symmetric about $k=n/2$, where it acquires a unique minimum, it suffices to bound $$\sum_{1 \leq k \leq n/2} \frac{1}{\left[(k+1)\left(1-\frac{k}{n+1}\right)\right]^\alpha} \leq \int_0^{n/2} \frac{{\rm d}k}{\left[(k+1)\left(1-\frac{k}{n+1}\right)\right]^\alpha} \\ \leq \frac{1}{\left(1-\frac{n}{2(n+1)}\right)^\alpha} \int_0^{\infty} {\rm d}k \, (k+1)^{-\alpha} \stackrel{\alpha>1}{\leq} \frac{2^{\alpha}}{(\alpha-1)} \, ,$$ since $n/(n+1)$ is increasing in $n$.
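Unpacking the estimate above (my reading of it: doubling for the two symmetric halves and adding the $k=0$ term, which is at most $1$) gives the explicit constant $K = 2\bigl(1+\frac{2^\alpha}{\alpha-1}\bigr)$. A numerical scan with my sample choice $\alpha=1.5$, $\beta=2$ is consistent with this:

```python
alpha, beta = 1.5, 2.0
K = 2 * (1 + 2 ** alpha / (alpha - 1))    # explicit constant from the bound

def ratio(n):
    """(n+1)^alpha times the convolution sum; should stay below K."""
    s = sum(1 / ((k + 1) ** alpha * (n + 1 - k) ** beta) for k in range(n + 1))
    return (n + 1) ** alpha * s

assert all(ratio(n) < K for n in range(1000))
```

In fact the observed ratios stay well below $K$ (they peak around $2.4$ here), so the constant is far from sharp.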
{ "language": "en", "url": "https://math.stackexchange.com/questions/4410139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to integrate $\int \frac{1}{y^s (1-y)} dy$? I want to integrate the above function where s can take any value (positive or negative). Also, I would like to integrate this without limits, so I won't be able to use Gamma function as per my understanding. For this specific problem, $0 < y < 1$, hence I tried expanding $(1-y)$ as $$ 1 + y + y^2 + y^3 + \cdots = \frac{1}{1-y} $$ Multiplying this with $\frac{1}{y^s}$ and integrating gave $$ \int \frac{1}{y^s (1-y)} dy = \frac{-1}{y^s} \left(\frac{y}{-s+1} + \frac{y^2}{-s+2} + \cdots \right) $$ For $s = 0$, the expression comes out to be $\ln(1-y)$ which is expected, however, I would like to obtain a closed-form solution for any general s. Any hints are greatly appreciated.
Not entirely sure whether this works but here's what I have come up with $$\begin{align}\int y^{-s}(1-y)^{-1}dy &= \int y^{-s}\sum_{x=0}^{\infty}y^xdy \\ &=\sum_{x=0}^{\infty} \int y^{-s}y^xdy \\ &=\sum_{x=0}^{\infty} \frac{y^{x-s+1}}{x-s+1} \\ &= y^{1-s}\Phi(y, 1, 1-s) \end{align}$$ Given that $0<y<1$, where $\Phi(y, 1, 1-s)$ is the Lerch transcendent, $\Phi(z,s,a)=\sum_{x=0}^\infty \frac{z^x}{(x+a)^s}$.
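One can sanity-check the series antiderivative by differentiating a long partial sum numerically and comparing it with the integrand (my own check; the sample values $s=0.5$, $y=0.3$ and $N=400$ terms are arbitrary choices, and $N=400$ is far more than enough for $y=0.3$):

```python
def F(y, s, N=400):
    """Partial sum of the claimed antiderivative sum_x y^(x-s+1)/(x-s+1)."""
    return sum(y ** (x - s + 1) / (x - s + 1) for x in range(N))

s, y, h = 0.5, 0.3, 1e-5
deriv = (F(y + h, s) - F(y - h, s)) / (2 * h)   # central difference F'(y)

# F'(y) should recover the integrand y^(-s) / (1 - y)
assert abs(deriv - y ** (-s) / (1 - y)) < 1e-6
```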
{ "language": "en", "url": "https://math.stackexchange.com/questions/4410332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Prove that $P( A_1 \cap A_2 \cap \ldots A_n) \geq P(A_1) + P(A_2) + \ldots + P(A_n) - (n-1)$ Problem: Show that $$ P( A_1 \cap A_2 \cap \ldots A_n) \geq P(A_1) + P(A_2) + \ldots + P(A_n) - (n-1) $$ Answer: Recall that: $$ P(A \cup B) = P(A) + P(B) - P(A \cap B)$$ We can rewrite this as: $$P(A \cap B) = P(A) + P(B) - P(A \cup B) $$ I am going to prove this by induction on $n$. case $n = 1$ We need to prove that: $$ P(A_1) \geq P(A_1) - (1-1)$$ Since $1-1 = 0$, this is obviously true. case $n = 2$ We need to prove that: $$ P(A_1 \cap A_2) \geq P(A_1) + P(A_2) - (2-1)$$ We have: \begin{align*} P(A_1 \cap A_2) &= P(A_1) + P(A_2) - P(A_1 \cup A_2) \\ P(A_1) + P(A_2) - P(A_1 \cup A_2) &\geq P(A_1) + P(A_2) - 1 \\ P(A_1) + P(A_2) - P(A_1 \cup A_2) &\geq P(A_1) + P(A_2) - (2-1) \\ \end{align*} Now we assume it for $n = i$. This means we have: $$ P( A_1 \cap A_2 \cap ... A_{i}) \geq P(A_1) + P(A_2) + .. + P(A_{i}) - (i-1) $$ Now we prove it for $n=i+1$. We need to prove that $$ P( A_1 \cap A_2 \cap ... A_{i+1}) \geq P(A_1) + P(A_2) + .. + P(A_{i+1}) - (i+1-1)$$ \begin{align*} P( A_1 \cap A_2 \cap ... A_{i+1}) &= P( A_1 \cap A_2 \cap ... A_{i}) + P(A_{i+1}) - P( \left( A_1 \cap A_2 \cap ... A_{i}\right) \cup A_{i+1} ) \\ \end{align*} Is my solution correct so far? How do I finish the job?
From your last line, apply $$-P((A_1 \cap \cdots \cap A_i) \cup A_{i+1}) \ge -1$$ and $$P(A_1 \cap A_2 \cap \cdots \cap A_i) \ge P(A_1) + \cdots + P(A_i) - (i-1).$$
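The finished inequality (a generalized Bonferroni bound) is easy to stress-test by enumerating random events on a small finite probability space; a sketch with my own randomly generated data:

```python
import random

random.seed(1)
omega = list(range(8))
w = [random.random() for _ in omega]
p = [x / sum(w) for x in w]                  # a random probability on omega
def P(event): return sum(p[i] for i in event)

for _ in range(200):
    n = random.randint(2, 5)
    events = [set(random.sample(omega, random.randint(0, len(omega))))
              for _ in range(n)]
    inter = set(omega)
    for e in events:
        inter &= e
    # P(A1 ∩ ... ∩ An) >= P(A1) + ... + P(An) - (n - 1), up to float slack
    assert P(inter) >= sum(P(e) for e in events) - (n - 1) - 1e-12
```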
{ "language": "en", "url": "https://math.stackexchange.com/questions/4410488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a historical connection between $\sigma$-algebra and topology? Through learning about probability theory and topology, I feel that the definitions of a $\sigma$-field (algebra) and of a topology are similar. I also know that they are not the same. Furthermore, I am interested in why they look similar. Is there a historical connection between them? Is it a result of convergent evolution, or just chance? I think that if there is some (historical) reason for the similarity, it will be strongly connected to how they differ, so it would help me naturally understand the difference.
These are different notions that can be used to capture different properties of functions (being continuous and measurable, respectively) but of course they are related, and one is often interested in settings that make this explicit. Given any topological space $X$, we can create the least $\sigma$-algebra that contains the open sets of $X$ as members. This is called the Borel $\sigma$-algebra, and its members are called the Borel sets. As the name indicates, this notion was introduced by Émile Borel in 1896.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4410676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Equivalence of definitions of open sets in metric space Consider the following two equivalent definitions of an open set in a metric space $(X,d)$: * *Definition 1: A set $U$ in a metric space is open if and only if $U$ is an arbitrary union of open balls of elements in the metric space, or an open ball itself. *Definition 2: A set $U$ in a metric space is open if and only if, for every element $x\in U$, there exists an $\epsilon>0$ such that the open ball $B_{\epsilon}(x)$ is contained within $U$. Although Definition 2 clearly implies Definition 1, how does Definition 1 imply Definition 2?
Definition 1 implies 2 because of the observation that if $B(x,r)$ is an open ball and $y\in B(x,r)$ then for $r'=r-d(x,y)$ (which is positive) we have $B(y,r')\subseteq B(x,r)$. The last inclusion follows from the triangle inequality.
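For intuition, the inclusion $B(y,r')\subseteq B(x,r)$ with $r'=r-d(x,y)$ can be spot-checked numerically in the Euclidean plane (random sampling, my own illustration; the specific points are arbitrary):

```python
import math, random

random.seed(0)
def d(p, q): return math.dist(p, q)

x, r = (0.0, 0.0), 1.0
y = (0.3, 0.4)                 # a point of B(x, r): d(x, y) = 0.5 < 1
rp = r - d(x, y)               # the radius r' = r - d(x, y) from the answer

for _ in range(1000):          # sample points of B(y, r') ...
    t = random.uniform(0, 2 * math.pi)
    u = rp * math.sqrt(random.random())   # uniform radius inside the ball
    z = (y[0] + u * math.cos(t), y[1] + u * math.sin(t))
    assert d(x, z) < r         # ... every one lies in B(x, r), as the
                               # triangle inequality guarantees
```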
{ "language": "en", "url": "https://math.stackexchange.com/questions/4410902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $X_n(Y_n-c) \xrightarrow{P} 0$ I'm stuck with this exercise. Let $X,X_n,Y_n (n=1,2,\ldots)$ random variables such that $X_n \Rightarrow X$ and $Y_n \xrightarrow{P} c$, where $c$ is constant. Prove that $X_n(Y_n-c) \xrightarrow{P} 0$. I know that for all $\delta > 0$, there exist $M > 0$ and $N \in \mathbb{N}$ such that $$P(|X_n| \geq M) \leq \delta \hspace{0.5cm} \forall n \geq N.$$ Then, I want to prove that $$\lim_{n \to \infty}P[|X_n(Y_n-c)| \geq \epsilon ]=0,$$ Then for $N \in \mathbb{N}$ that for all $n \geq N$ and for $M >0$ $$P[|X_n(Y_n-c)| \geq \epsilon ] \leq P[|X_n| \geq M ] + P[|(Y_n-c)| \geq \epsilon/M] \leq \frac{\delta}{2}+\frac{\delta}{2}=\delta,$$ since the the first I mentioned and $Y_n \xrightarrow{P} c$. Is this correct? In fact I'm not completely sure why I can take the first inequality but I feel this is the correct way. Any help?
An explanation to your specific question: inequalities of the form $P(A \cap B) \geq 1-P(A^\complement) -P(B^\complement)$ and $P(A \cup B) \leq P(A) + P(B)$ emerge from basic properties such as $$ P(A) + P(B) = P(A \cup B) + P(A \cap B) $$ and $$ P(A) \leq P(B), \quad A \subset B. $$ It then depends on context what you want to handle, for example, if you have $C = \{X + Y > t\},$ then you can consider $A = \{X > t/2\}$ and $B = \{Y > t/2\},$ then $A^\complement \cap B^\complement \subset C^\complement$ so $C \subset A \cup B$ and $P(C) \leq P(A) + P(B).$ Similarly with $C = \{XY > ab\},$ $A = \{X > a\}$ and $B = \{Y > b\}$ (assuming everything is positive), then the same argument as with the sum shows that $P(C) \leq P(A) + P(B).$ Also, your approach is correct, but I think it is easier to work the other way around. That is, to prove $P(|X_n(Y_n - c)| \leq \varepsilon) \to 1.$ Indeed, $$ \liminf P(|X_n (Y_n - c)| \leq \varepsilon) \geq \liminf P(|Y_n - c| \leq \frac{\varepsilon}{M}, |X_n| \leq M) \geq 1 - \delta, $$ for every $\delta > 0.$ (It is well-known that if $P(A_n) \to 1$ and $P(B_n) \geq 1-\delta$ then $\liminf P(A_n \cap B_n) \geq 1 - \delta.$ Anyway, to prove it just notice that $P(A \cap B) \geq P(A) + P(B) - 1.$) Obviously, this and what you wrote are essentially the same and it is just a matter of taste which one is preferred.
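A seeded simulation illustrates the conclusion. Take $X_n\sim N(0,1)$ (so $(X_n)$ is trivially tight) and $Y_n=c+U_n$ with $U_n\sim\mathrm{Unif}(-1/n,1/n)$; then the empirical frequency of $|X_n(Y_n-c)|>\varepsilon$ collapses as $n$ grows. These distribution choices are mine, purely for illustration:

```python
import random

random.seed(0)
c, eps, trials = 2.0, 0.1, 5000

def freq_exceed(n):
    """Empirical P(|X_n (Y_n - c)| > eps) for the sketched model."""
    count = 0
    for _ in range(trials):
        x = random.gauss(0, 1)                   # X_n => X (here X_n = X)
        y = c + random.uniform(-1 / n, 1 / n)    # Y_n -> c in probability
        if abs(x * (y - c)) > eps:
            count += 1
    return count / trials

assert freq_exceed(1000) < 0.01          # essentially zero at large n
assert freq_exceed(1000) <= freq_exceed(5)
```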
{ "language": "en", "url": "https://math.stackexchange.com/questions/4411110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $G$ is a finite group, how do I prove that $f(g)=ag$, $a \in G$, is a bijection for all $g\in G$? If $G$ is a finite group, how do I prove that $f(g)=ag$, $a \in G$, is a bijection for all $g \in G$? Here $ag$ is $a \cdot g$, where $\cdot$ is the operator from the group $G$. This is what I've tried so far: $f(g)=ag$, so $a^{-1}f(g)=g$. Since applying $a^{-1}$ recovers $g$, I believe $f$ is a bijection, but I would like to know if there's a misstep in that.
Though the claim holds true for any group, you can take advantage of the assumed finiteness of $G$ to get immediately: * *the injectivity holds by the left cancellation law; *since $G$ is finite, the surjectivity follows from 1.
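Both points can be verified by brute force on a small example, say $S_3$ under composition of permutations (my illustration):

```python
from itertools import permutations

G = list(permutations(range(3)))                 # S3 as tuples
def compose(p, q): return tuple(p[q[i]] for i in range(3))

for a in G:
    images = [compose(a, g) for g in G]          # f(g) = a * g
    assert len(set(images)) == len(G)            # injective (left cancellation)
    assert set(images) == set(G)                 # hence surjective, G finite
```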
{ "language": "en", "url": "https://math.stackexchange.com/questions/4411247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Regular space has a sequence of disjoint open sets Exercise Suppose $X$ is a regular topological space. Suppose that $B \subset X$ is infinite. Prove that there exists a sequence of open sets $\{U_i\}_{i \in \mathbb{N}}$ satisfying (1) $U_j \cap U_k = \varnothing$ for all $j,k \in \mathbb{N}$ with $j \neq k$ (2) $U_i \cap B \neq \varnothing$ for all $i \in \mathbb{N}$ I need help figuring out this proof. I have a couple of suggestions. * *Since $U_1, U_2, U_3, \dots$ needs to be a sequence, we need to work with a countably infinite subset of $B$. On the other hand, a countably infinite subset of $B$ might prevent us from having $U_i \cap B \neq \varnothing$ *We design a sequence of $U_1,U_2,U_3, \dots$ (somehow?) and then take the closure $\overline{U_i}$ and use the regularity to have that $U_i \subset \overline{U_i}$ with $\overline{U_i} \cap U_k = \varnothing \implies U_i \cap U_k = \varnothing$ There seem to be a few moving pieces going on in this exercise. Can anyone offer any advice on a good starting point for this problem?
Since this is not true for the indiscrete topology, I assume that regularity includes T1. Then proceed as follows: (1) If $X$ is T2 and $B$ is an infinite subset of $X$, there is an open set $U \subseteq X$ such that $U \cap B \neq \emptyset$ and $B \setminus U$ is infinite. (2) By regularity, you can improve (1) to ... and $B \setminus \overline{U}$ is infinite. (3) Prove the claim by induction using (2).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4411401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can the average of a set be lower than all of the averages of subsets? Let's imagine there are marbles of different diameter and color. Can the average diameter of all the marbles be lower than all of the average diameter of marbles per color?
Let $S \subseteq \mathbb{R}$ be a finite set. Let $P = \{C_1,C_2,…,C_n\}$ be a partition of $S$. This partition represents the color groupings of the marbles. Also assume that $\overline{C_i} \leq \overline{C_{i+1}}$ for $1 \leq i < n$, where $\overline{C_i}$ denotes the average of $C_i$. Notice: $$\overline{S} = \frac{|C_1|\overline{C_1} + |C_2|\overline{C_2} + … + |C_n|\overline{C_n}}{|S|}$$ And since $\overline{C_i} \geq \overline{C_1}$ for all $1 \leq i \leq n$ we have: $$\overline{S} \geq \frac{|C_1|\overline{C_1} + |C_2|\overline{C_1} + … + |C_n|\overline{C_1}}{|S|}$$ Now factoring out $\overline{C_1}$ and using the fact that $P$ partitions $S$: $$\overline{S} \geq \overline{C_1}$$
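In code, with made-up marble data, the weighted-average identity above gives exactly this conclusion: the overall mean can never drop below the smallest per-color mean.

```python
# hypothetical diameters grouped by color
colors = {"red": [2.1, 3.0, 2.4], "blue": [1.2, 1.3], "green": [4.0]}

all_d = [x for group in colors.values() for x in group]
overall = sum(all_d) / len(all_d)
group_means = {c: sum(g) / len(g) for c, g in colors.items()}

# the overall mean is a weighted average of the group means, so it is
# at least the minimum of them (and at most the maximum)
assert min(group_means.values()) <= overall <= max(group_means.values())
```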
{ "language": "en", "url": "https://math.stackexchange.com/questions/4411587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Existence of a simple graph following some conditions Suppose, $a_1 < a_2 < \cdots < a_k$ are distinct positive integers. I am trying to prove that there exists a simple graph with $(a_k + 1)$ many vertices, whose set of distinct vertex degrees is $a_1, a_2, \cdots , a_k$. I was trying to use induction on k. I solved using induction when $a_i = i , \forall i$. But could not do the general case. Any help will be appreciated. Thanks in advance.
A trick with complements helps. If a graph has $a_k+1$ vertices and the set of distinct degrees is $\{a_1, a_2, \dots, a_k\}$, then its complement is a graph with $a_k + 1$ vertices where the set of distinct degrees is $\{a_k-a_1, a_k - a_2, \dots, a_k - a_{k-1}, 0\}$. The zero degrees are easy to tack on at the end, so now we have reduced to a smaller problem: find a graph with fewer than $a_k+1$ vertices where the set of distinct degrees is $\{a_k - a_1, a_k - a_2, \dots, a_k - a_{k-1}\}$. This can be done by induction.
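The induction is constructive, so it can be turned into a short program (my sketch of the recursion: the base case $k=1$ is the complete graph $K_{a_1+1}$; the step realizes $\{a_k-a_{k-1},\dots,a_k-a_1\}$ recursively, pads with $a_1$ isolated vertices up to $a_k+1$ vertices, then takes the complement):

```python
def realize(degree_set):
    """Edge set of a graph on max(degree_set)+1 vertices whose set of
    distinct vertex degrees is exactly degree_set (distinct positive ints)."""
    a = sorted(degree_set)
    n = a[-1] + 1
    if len(a) == 1:                              # base case: complete graph
        return {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)}
    inner = sorted(a[-1] - d for d in a[:-1])    # {a_k - a_{k-1}, ..., a_k - a_1}
    H = realize(inner)                           # lives on a_k - a_1 + 1 vertices
    # pad H with a_1 isolated vertices up to n vertices, then complement
    return {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)
            if frozenset((i, j)) not in H}

def degrees(edges, n):
    d = [0] * n
    for e in edges:
        for v in e:
            d[v] += 1
    return d

for target in ([1], [2, 3], [2, 3, 5], [1, 2, 3], [3, 7, 8]):
    G = realize(target)
    assert set(degrees(G, max(target) + 1)) == set(target)
```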
{ "language": "en", "url": "https://math.stackexchange.com/questions/4411737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
A converse to Stone-Weierstrass for a metric space Let $X$ be a metric space. Suppose the normed algebra $C_b(X)$ of bounded continuous real functions with norm $\|f\|=\sup_{x\in X} |f(x)|$ has the property: if a subalgebra separates points (i.e. for any $x, y \in X$ such that $x \neq y$ there is a function $f$ in the subalgebra so that $f(x) \neq f(y)$), then it is dense in $C_b(X)$. Is it true that $X$ is compact in this case?
Yes. Indeed, suppose $X$ is not compact, so there exists a sequence $(x_n)$ in $X$ with no convergent subsequence. Now let $A\subset C_b(X)$ be the subalgebra consisting of functions $f$ such that the sequence $(f(x_n))$ converges. This subalgebra is easily seen to be closed. It separates points since $(x_n)$ has no convergent subsequence: this implies that for any $x,y\in X$, the set $\{x,y\}\cup\{x_n:n\in\mathbb{N}\}$ is discrete and closed in $X$, so by the Tietze extension theorem an arbitrary bounded function on that set can be extended to an element of $C_b(X)$. However, $A$ is not all of $C_b(X)$, again by the Tietze extension theorem. More generally, if $X$ is a completely regular space, then $C_b(X)\cong C(\beta X)$ via the natural inclusion $X\to\beta X$ into the Stone-Cech compactification, and closed subalgebras of $C(\beta X)$ correspond to Hausdorff quotients of $\beta X$. Such a closed subalgebra separates points of $X$ iff the corresponding quotient of $\beta X$ does not identify any points of $X$ together. But if $X$ is not compact, you can get such a nontrivial quotient of $\beta X$ by identifying some point of $\beta X\setminus X$ with some point of $X$. So if $X$ is completely regular and every subalgebra of $C_b(X)$ that separates points is dense, then $X$ is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4411979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving nonlinear equations where the equations are set equal to each other I have wondered, and tried to google (I am not sure what to search for), how to solve nonlinear equations that are set equal to each other. I am able to write a specific algorithm for two equations, but not dynamically for N equations. I will show an example with three (this is approximately what my equations look like): C1, C2, C3, X are unknowns, but in the end I do not need the value of X. It can be interpreted like this (the last equation C1 + C2 + C3 = 1 is not included here): Please don't try to solve this; I am not sure these equations have solutions — I just typed coefficients at random. But this is what my equations can look like, only with different coefficients. I tried it with only two unknowns and got a quadratic equation in the end, so with three unknowns it will be cubic. With N unknowns there will be a polynomial equation of degree N. I should also say that the result does not have to be 100% accurate; I am not sure if that helps. I found on google that an iterative method might help. I looked at a few iterative methods, but I am still not sure how to apply them to this kind of problem. I also found that nonlinear equations can be linearized. Maybe that would be an option, but I am not sure how to do it here.
From $C_1 + C_2 + C_3 = 1 $, you get $C_3 = 1 - C_1 - C_2 $ Substitute that in your equations, you get $\dfrac{ 4 C_1 + 8 C_2 + 16 (1 - C_1 - C_2) }{2 C_1} = \dfrac{ -12 C_1 - 8 C_2 + 16} {2 C_1} = \dfrac{ - 6 C_1 - 4 C_2 + 8} {C_1} $ and $\dfrac{ 9 C_1 + 27 C_2 + 81(1 - C_1 - C_2) }{3 C_2} = \dfrac{ - 24 C_1 - 18 C_2 + 27 }{C_2} $ and $\dfrac{16 C_1 + 64 C_2 + 256 (1 -C_1 - C_2) }{4 C_3} = \dfrac{ -60 C_1 - 48 C_2 + 64 }{C_3} $ Since these expressions are equal as you have in your question, we can cross multiply to get $( - 6C_1 - 4C_2 +8 ) C_2 = (-24 C_1 - 18 C_2 + 27) C_1 \hspace{25pt}(1)$ and $(- 6 C_1 - 4 C_2+8) (1 -C_1 - C_2) = (-60 C_1 - 48 C_2 + 64 ) C_1 \hspace{25pt}(2)$ Equations (1) and (2) are two quadratic equations in $C_1 $ and $C_2$ and can be solved using the method outlined in the solution of this problem
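The substitution step can be spot-checked numerically (hypothetical values of $C_1, C_2$; this only verifies the algebra, not a solution of the system):

```python
import random

random.seed(2)
for _ in range(100):
    C1, C2 = random.uniform(0.1, 0.45), random.uniform(0.1, 0.45)
    C3 = 1 - C1 - C2
    # each numerator after substituting C3 = 1 - C1 - C2
    assert abs((4*C1 + 8*C2 + 16*C3) - (-12*C1 - 8*C2 + 16)) < 1e-9
    assert abs((9*C1 + 27*C2 + 81*C3) / 3 - (-24*C1 - 18*C2 + 27)) < 1e-9
    assert abs((16*C1 + 64*C2 + 256*C3) / 4 - (-60*C1 - 48*C2 + 64)) < 1e-9
```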
{ "language": "en", "url": "https://math.stackexchange.com/questions/4412175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the slope of a line intersecting the parabola A line $y=mx+c$ intersects the parabola $y=x^2$ at points $A$ and $B$. The line $AB$ intersects the $y$-axis at point $P$. If $AP−BP=1$, find $m^2$, where $m > 0$. So far I know $x^2−mx−c=0,$ and $P=(0,c)$. $x = \frac{m \pm \sqrt{m^2 + 4c}}{2}$ $A_x = \frac{m + \sqrt{m^2 + 4c}}{2}$, $B_x = \frac{m - \sqrt{m^2 + 4c}}{2} $ $A_y = \frac{m^2 + m\sqrt{m^2 + 4c}}{2} + c$, $B_y = \frac{m^2 - m\sqrt{m^2 + 4c}}{2} + c$ Using the distance formula (not showing all steps): $AP = \frac{m + \sqrt{m^2 + 4c}}{2}(\sqrt{m^2 + 1}) $ $BP = \frac{m - \sqrt{m^2 + 4c}}{2}(\sqrt{m^2 + 1}) $ $AP - BP = 1$ $(\sqrt{m^2 + 4c})(\sqrt{m^2 + 1}) = 1$ $m^4 + m^2(4c + 1) + 4c - 1 = 0$ I could treat this as a quadratic in $m^2$, but that doesn't really help me deal with the coefficient involving $c$.
Since you assume that $m > 0,$ this result of your calculations is good: $$ AP = \frac{m + \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1}. \tag1$$ Here's where you get in a bit of trouble: $$ BP \stackrel?= \frac{m - \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1}. \tag2$$ You want $AP - BP = 1,$ and I think the best interpretation of the problem statement interprets $AP - BP$ as the difference of two positive lengths (rather than a negative length subtracted from a positive length). Moreover, $AP$ must be the greater of the two lengths in order for the difference to be positive. The problem with Equation $(2)$ is that if $c > 0$ then the expression on the right side of the equation is negative. A better equation is: $$ BP = \left\lvert \frac{m - \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1}\right\rvert.$$ A more useful correct equation is $$ BP = \begin{cases} \dfrac{-m + \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1} & c \geq 0, \\[1ex] \dfrac{m - \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1} & c < 0. \end{cases} \tag3$$ The $c < 0$ case still looks shaky because of the (apparent) possibility that $m^2 + 4c < 0,$ which would make the square root undefined, but what actually happens is that for very large negative $c$ the value of $m$ also will be large. Equation $(1)$, on the other hand, is good because with $m > 0$ you are guaranteed that the expression on the right-hand side of the equation is positive, and because the expression on the right-hand side is larger than either of the two expressions on the right-hand side of Equation $(3)$, so you have chosen the correct expression for $AP$ in either case. The two cases in Equation $(3)$ can (and I think should) be considered separately. In the $c \geq 0$ case we have \begin{align} 1 &= AP - BP \\ &= \frac{m + \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1} - \frac{-m + \sqrt{m^2 + 4c}}{2} \sqrt{m^2 + 1} \\ &= m \sqrt{m^2 + 1} \end{align} and therefore $$ m^4 + m^2 - 1 = 0, $$ for which the only solution (since $m^2$ must be positive) is $$ m^2 = \frac12(\sqrt5 - 1). $$
In the $c < 0$ case, on the other hand, your further calculations are correct, and $m^2$ is the positive root $v$ of the quadratic equation $$ v^2 + (4c + 1) v + 4c - 1 = 0, $$ that is, \begin{align} m^2 &= \frac{-(4c + 1) + \sqrt{(4c + 1)^2 - 4(4c - 1)}}{2} \\ &= \frac{-4c - 1 + \sqrt{(4c - 1)^2 + 4}}{2}. \end{align} You cannot eliminate $c$ from the solution in this case because the slope of the line actually does depend on how negative $c$ is. With a $y$-intercept very far down the negative $y$ axis you need a steep slope in order to intersect the parabola. My hunch is that you were supposed to solve the case $c \geq 0.$ This could have been stated explicitly, or it could have been implied by stating that $P$ is between $A$ and $B.$
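The $c \ge 0$ branch can be checked numerically (a hypothetical sketch with $c = 1$ chosen arbitrarily):

```python
import math

m = math.sqrt((math.sqrt(5) - 1) / 2)  # the claimed slope for c >= 0
c = 1.0

# abscissas of the intersections of y = mx + c with y = x^2
Ax = (m + math.sqrt(m**2 + 4*c)) / 2
Bx = (m - math.sqrt(m**2 + 4*c)) / 2

def dist_to_P(x):
    # distance from (x, mx + c) on the line to P = (0, c)
    return math.hypot(x, m * x)

AP, BP = dist_to_P(Ax), dist_to_P(Bx)
assert abs((AP - BP) - 1) < 1e-9  # AP - BP = m*sqrt(m^2 + 1) = 1
```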
{ "language": "en", "url": "https://math.stackexchange.com/questions/4412365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How to prove that there exists a travelling wave solution? I'm currently studying the SIRS infectious disease model for modelling an epidemic. I have extended the model to take spatial spread into account, i.e. I have incorporated a diffusive term. The model is as follows $$ S_t(x,t) = -\beta S(x,t)I(x,t) + dS_{xx},\\ I_t(x,t) = \beta S(x,t)I(x,t)-\gamma I + dI_{xx},\\ R_t(x,t) = \gamma I + dR_{xx}, $$ where $\beta$ denotes the transmission rate, $\gamma$ denotes the recovery rate and $d$ denotes the diffusion rate. From studying it numerically, I've noticed that the infected compartment $I$ forms what looks to be a travelling wave through time. But how do I prove that it exists?
Typically when one looks for traveling wave solutions, one imposes an ansatz for the unknown $U$ of the form $U(x,t)=u(x-tc)$ for some unknown velocity vector $c$. Then $\partial_t U(x,t) = -c \cdot \nabla u(x-ct)$, and the time-dependent PDE reduces to a time-independent PDE with $c$ as a parameter. In your case you would then take $S(x,t) = s(x-tc), I(x,t) = i(x-tc), R(x,t) = r(x-tc)$, etc. The resulting stationary problem for $(s,i,r)$ is then a nonlinear elliptic system.
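For the system in the question, writing $\xi = x - ct$ and $s(\xi)=S(x,t)$, $i(\xi)=I(x,t)$, $r(\xi)=R(x,t)$, the ansatz reduces the PDEs to the ODE system (a sketch of the reduction described above):

```latex
\begin{aligned}
-c\, s' &= -\beta\, s\, i + d\, s'', \\
-c\, i' &= \beta\, s\, i - \gamma\, i + d\, i'', \\
-c\, r' &= \gamma\, i + d\, r''.
\end{aligned}
```

Proving the wave exists then amounts to finding a value of $c$ for which this ODE system admits a solution with the appropriate limits as $\xi \to \pm\infty$.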
{ "language": "en", "url": "https://math.stackexchange.com/questions/4412511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the equation of the plane in which the curve $x=\frac{1+t}{1-t}, y=\frac{1}{1-t^2}, z=\frac{1}{1+t}$ lies Prove that all points of the given curve lie in one plane, and find the equation of that plane: $$x=\frac{1+t}{1-t}, y=\frac{1}{1-t^2}, z=\frac{1}{1+t}.$$ If the given curve lies in one plane, then $$a\left(\frac{1+t}{1-t}\right)+b\left(\frac{1}{1-t^2}\right)+c\left(\frac{1}{1+t}\right)+d=0.$$ Solving this I get $2a=c,a=d,a=\frac{-b}{4}.$ How do I find the equation of the plane from this? Or maybe I did something wrong? When I put these values back into the equation of the plane I get $$ax-4ay+2az+a=0.$$ Now, the problem is that I can't just cancel $a$ here because, first of all, I need to prove that such a plane exists (i.e. that $a \neq 0$).
Partial answer $\vec r_0:= (x(0),y(0),z(0))=(1,1,1);$ Equation of a plane passing through $\vec r_0,$ where $\vec n:=(a, b, c)$ is the normal of the plane: $\vec n \cdot (\vec r - \vec r_0)=$ $(a,b,c)\cdot (x-1,y-1,z-1)=$ $a(x-1)+b(y-1)+c(z-1)=0;$ If the curve lies in the plane then $\vec r(t) =(x(t), y(t), z(t)) $ satisfies this equation for $t\not =\pm1$. $ax(t) +by(t)+cz(t)=$ $a+b+c;$ $a(1+t)^2+b+c(1-t)=$ $(a+b+c)(1-t^2);$ A quadratic equation in $t$. It is satisfied for all $t$ if the coefficients of $t^2,t$ and the constant are zero, which gives $3$ equations for $a, b, c.$ Assuming the system of equations is consistent and has one solution the curve lies in the plane.
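The plane the asker found, $x - 4y + 2z + 1 = 0$ (taking $a=1$), can be verified directly on the parametrization (a quick Python check over arbitrary parameter values):

```python
import random

random.seed(3)
for _ in range(50):
    t = random.uniform(-5, 5)
    if abs(1 - t) < 1e-3 or abs(1 + t) < 1e-3:
        continue  # avoid the excluded parameter values t = +/- 1
    x = (1 + t) / (1 - t)
    y = 1 / (1 - t**2)
    z = 1 / (1 + t)
    # the plane x - 4y + 2z + 1 = 0 from the asker's computation
    assert abs(x - 4*y + 2*z + 1) < 1e-6
```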
{ "language": "en", "url": "https://math.stackexchange.com/questions/4412630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Optimization problem [proof of generalized Cauchy–Schwarz] Could you help me solve this problem, please? 1. Maximize $x^ty$ subject to the constraint $x^tQx \leq 1$ (where $Q$ is positive definite). What I tried: I used KKT, but I don't know why I get $-\sqrt{y^tQ^{-1}y}$ as the maximum instead of $\sqrt{y^tQ^{-1}y}$ (which I believe is the maximum). Also, since $x^ty$ is linear (convex and concave), I don't know how to conclude... 2. Conclude that $(x^ty)^2 \leq (x^tQx)(y^tQ^{-1}y)$ for all $x,y$ (generalized Cauchy–Schwarz).
You might check your KKT calculations again; indeed, you should be getting $\sqrt{y^T Q^{-1} y}$ as the maximum. As for concluding: since $x^T y$ is linear (hence concave) and the feasible set $\{x : x^TQx \le 1\}$ is convex, satisfying the KKT conditions is sufficient for finding your desired global maximum. For part 2, note that by rearranging, equivalently you want to prove that $$ (\widetilde{x}^T y)^2 \leq y^T Q^{-1} y , $$ where $\widetilde{x} = \frac{x}{\sqrt{x^T Q x}}$. Now how can you use the solution from part 1 to deduce this?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4412826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Writing the nonlinear part of an equation after a change of coordinates I'm trying to write a function in Matlab (although it could be in some other language) that will do the following: I have a system of nonlinear equations, usually of the form $$ \begin{cases} \frac{dx}{dt}= ax+by+xy+x^2, \\ \frac{dy}{dt}= bx+cy+ xy+y^2. \end{cases} $$ I then make a change of coordinates so that the linear part is in Jordan normal form. I do this by using the built-in Jordan decomposition function in Matlab. My issue is this: how can I get the program to express the nonlinear parts in the new variables? Thank you
The linear part of your system is $ \begin{pmatrix} x' \\ y' \end{pmatrix}=A \begin{pmatrix} x\\ y \end{pmatrix} $ where $A:=\begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is symmetric, and so diagonalizable. Consider the change of variables $\begin{pmatrix} z\\ u\end{pmatrix}= B \begin{pmatrix} x\\ y \end{pmatrix} $ so that $\begin{pmatrix} z' \\ u' \end{pmatrix}=BA \begin{pmatrix} x\\ y \end{pmatrix}= BAB^{-1} \begin{pmatrix} z\\ u \end{pmatrix} $ Now you want to diagonalize, so you have to take $B$ such that $BAB^{-1}$ is diagonal: $B$ is the change-of-basis matrix from the standard basis to a basis of eigenvectors of $A$. $$\det(A-\lambda I)=(a-\lambda)(c-\lambda)-b^2=\lambda^2-(a+c)\lambda+ac-b^2 $$ $$\lambda_{1,2}=\frac{(a+c)\pm \sqrt{(a-c)^2+4b^2}}{2}$$ Now you have to find a basis of eigenvectors; I will not carry this out in full because it is long. In any case, at the end the initial system becomes $$\begin{pmatrix} z' \\ u' \end{pmatrix}= BAB^{-1} \begin{pmatrix} z\\ u \end{pmatrix}+ \begin{pmatrix}x(z,u)y(z,u)+x(z,u)^2\\ x(z,u)y(z,u)+y(z,u)^2 \end{pmatrix} $$ where $x(z,u)$ and $y(z,u)$ are obtained from $\begin{pmatrix} x\\ y \end{pmatrix}= B^{-1}\begin{pmatrix} z\\ u\end{pmatrix} $
{ "language": "en", "url": "https://math.stackexchange.com/questions/4412990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quasi-compactness is a property of morphisms of schemes stable under base change That's it, I'm trying to prove that if $f:X\to S$ is a quasi-compact morphism of schemes and $g:T\to S$ is any morphism, then the base change $f_T:X\times_ST\to T$ is also quasi-compact. The proof in 01K5 is simply "omitted". This is what I've tried so far: I know that if $U\subset T$ is any open set, then $f_T^{-1}(U)=X\times_SU$ (for example, by 01JR). By 01K4 it suffices to show that $U$ affine and quasi-compact implies $f_T^{-1}(U)$ quasi-compact. In this case, we have that $g(U)$ is quasi-compact (as continuity preserves quasi-compactness). Therefore, there is a finite cover of $g(U)$ by affine open subsets of $S$. Call $W$ the union of the sets of this cover, so $W$ is quasi-compact as quasi-compactness is stable under finite unions. Since $g(U)\subset W$, again by 01JR, we deduce $f_T^{-1}(U)=X\times_SU=f^{-1}(W)\times_WU$, where $f^{-1}(W)$ is quasi-compact. Thus, we've reduced the problem to showing Exercise. If $X$ and $Y$ are quasi-compact schemes over a quasi-compact scheme $S$, then $X\times_SY$ is quasi-compact. (We may assume if necessary that at least one of $X$ or $Y$ is affine.) But I don't know how to show this (in case it's true). Any ideas?
We change your notation slightly so that we use $\phi:S'\to S$ instead of $g:T\to S$. Let $X'=X\times_S S'$ and let $f':X'\to S'$ be the base change of $f$. Suppose $U\subset S$ is an affine open subscheme, and cover $f^{-1}(U)$ by finitely many affine opens $V_1,\cdots,V_n\subset X$. Let $U'\subset\phi^{-1}(U)$ be an affine open. Then $$f'^{-1}(U') = f^{-1}(U)\times_U U' = \left(\bigcup^n V_i\right) \times_U U' = \bigcup^n \left(V_i\times_U U'\right)$$ and $V_i\times_U U'$ is again affine by the construction of the fiber product. Since a scheme is quasi-compact iff it admits a cover by finitely many affine opens, we've shown that $f'^{-1}(U')$ is quasi-compact; by varying $U$ and $U'$ we get an open cover of $S'$ by affine opens with quasi-compact preimage and therefore by 01K4 for instance we've shown that $f'$ is quasi-compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4413385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\det \mathcal{H}_f = (n-1)(-1)^{2n+1} \left[ f(x_1, \dots, x_n) \right]^{n-2}$ for $f(x_1, \dots, x_n) = \prod_{j=1}^n x_j$? Let function $f : \mathbb{R}^n \to \mathbb{R}$ be defined by $$f(x_1, \dots, x_n) := \prod_{j=1}^n x_j$$ Like any $C^2$ function, we can compute its Hessian $\mathcal{H}_f$, which will be a $\mathbb{R}^n \to \mathbb{R}^{n \times n}$ function. $\mathcal{H}_f$ should also have a determinant, denoted by $\det \mathcal{H}_f$. I have computed $\det H_f$ for $n\in\{2,\dots, 8 \}$ and I have come up with a guess that is consistent with all of these cases. Specifically, it appears that $$ \det \mathcal{H}_f = (n-1)(-1)^{2n+1} \left[ f(x_1, \dots, x_n) \right]^{n-2} $$ Is this equation true for all $n \in \mathbb{N}_{>1}$?
Assume that $x_j$ are all nonzero (the result is easy to check otherwise). Using Kronecker delta and $\det\{c_i a_{ij}\}=\det\{c_j a_{ij}\}=(\prod c_j)\det\{a_{ij}\}$, $$\det\mathcal{H}_f=\det_{1\leqslant i,j\leqslant n}\{(1-\delta_{ij})x_i^{-1}x_j^{-1}f(x_1,\ldots,x_n)\}=[f(x_1,\ldots,x_n)]^{n-2}\det(\mathbf{1}_n-\mathbf{I}_n),$$ where $\mathbf{1}_n$ is the all-ones matrix (and $\mathbf{I}_n$ is the identity matrix), which has eigenvalues $0$ (of multiplicity $n-1$) and $n$ (of multiplicity $1$), hence $\det(\lambda\mathbf{I}_n-\mathbf{1}_n)=\lambda^{n-1}(\lambda-n)$. Thus $$\det\mathcal{H}_f=(-1)^{n-1}(n-1)[f(x_1,\ldots,x_n)]^{n-2}.$$
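A brute-force numerical check of the corrected formula for small $n$ (a hypothetical sketch using a pure-Python determinant; the sample points are arbitrary):

```python
import random
from functools import reduce

def det(M):
    # Laplace expansion along the first row (fine for small n)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] *
               det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

random.seed(4)
for n in range(2, 6):
    x = [random.uniform(0.5, 2) for _ in range(n)]
    f = reduce(lambda a, b: a * b, x)
    # Hessian of the product: off-diagonal f/(x_i x_j), zero diagonal
    H = [[0 if i == j else f / (x[i] * x[j]) for j in range(n)]
         for i in range(n)]
    expected = (-1)**(n - 1) * (n - 1) * f**(n - 2)
    assert abs(det(H) - expected) < 1e-6 * (1 + abs(f)**(n - 2))
```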
{ "language": "en", "url": "https://math.stackexchange.com/questions/4413624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quadrilateral $ABCD$ is inscribed in a circle and satisfies $2\,\overline{AB}=\overline{AC}$, $\overline{BC}=\sqrt{3}$, $\overline{BD}=\overline{DC}$ and $\angle BAC=60^\circ$. I've never been good at Euclidean geometry questions like this... Really, what strategies could I employ to begin an analysis of the situation? I'm working through a text on Euclidean geometry and this is a question. The specific questions being asked about this scenario are the following: 1) The radius of the circumscribed circle 2) $\overline{AC}$ 3) $\angle BDC$ 4) The area of $\Delta BDC$ 5) $\overrightarrow{CA} \cdot \overrightarrow{DC}$ Now, the fact that $\overline{BD}=\overline{DC}$ struck me as odd. The length of a diagonal is equal to the length of one of the sides? I want to say that this implies that the quadrilateral we are dealing with must not be convex. Since $\angle BAC=60^\circ$, we can use the law of cosines to deduce that $\overline{AC}=2$ and so $\overline{AB}=1$. My analysis sort of hits a road block here. I have not used the fact that the quadrilateral is inscribed in a circle. For a quadrilateral inscribed in a circle, it is well known that: * The product of the diagonals of a quadrilateral inscribed in a circle is equal to the sum of the products of its two pairs of opposite sides. * The opposite angles of a quadrilateral inscribed in a circle are supplementary, i.e., the sum of the opposite angles is equal to 180˚. However, I have not been able to make use of these facts. Can anyone help me out here?? Thanks in advance!
1. First note that since $\angle BAC = 60^\circ$ and $2AB = AC$, triangle $\Delta ABC$ is similar to the standard $30$-$60$-$90$ triangle (by SAS similarity). This tells us that $\angle ABC = 90^\circ$. Using the fact that the hypotenuse of an inscribed right triangle is a diameter, we have $r = \frac{AC}{2}$; but as you stated, $AC = 2$, so $r = 1$. 2. Correctly, by the law of cosines, $AC = 2$. 3. Note that $\angle BDC$ subtends the same arc as $\angle BAC$ (arc $BC$), so these angles are equal and $\angle BDC = 60^\circ$. 4. Now that we know $\angle BDC = 60^\circ$, we can see that $\Delta BDC$ is equilateral (it is isosceles with $BD = DC$ and apex angle $60^\circ$). This tells us $BD = DC = BC = \sqrt{3}$, so the area of $\Delta BDC$ is $\frac{1}{2}\sqrt{3}\sqrt{3}\sin(60^\circ) = \frac{3\sqrt{3}}{4}$. 5. We have already worked out that $|CA| = 2$ and $|DC| = \sqrt{3}$. Since $\angle ABC = 90^\circ$ and $\angle DBC = 60^\circ$, we have $\angle ABD = 30^\circ$, and therefore, since the angles of triangle $\Delta ABX$ (where $X$ is the intersection of lines $AC$ and $BD$) sum to $180^\circ$, lines $BD$ and $AC$ meet at $90^\circ$. Hence $\Delta BCX$ is congruent to $\Delta DCX$ (by AAS), which tells us that the angle between $CA$ and $DC$ is equal to $\angle ACB = 30^\circ$. Hence the dot product of $CA$ and $DC$ is $-2\sqrt{3}\cos{30^\circ} = -3$. If you want a short extension, show that quadrilateral $ABCD$ is a kite.
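All of these conclusions can be verified with explicit coordinates on the unit circumcircle (this particular placement is one consistent choice, not forced by the problem):

```python
import math

# unit circumcircle centered at the origin; AC is a diameter
A = (1.0, 0.0)
B = (0.5, math.sqrt(3) / 2)
C = (-1.0, 0.0)
D = (0.5, -math.sqrt(3) / 2)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

assert abs(dist(A, C) - 2 * dist(A, B)) < 1e-12   # AC = 2 AB
assert abs(dist(B, C) - math.sqrt(3)) < 1e-12     # BC = sqrt(3)
assert abs(dist(B, D) - dist(D, C)) < 1e-12       # BD = DC

# area of triangle BDC via the cross product
area = 0.5 * abs((D[0]-B[0])*(C[1]-B[1]) - (D[1]-B[1])*(C[0]-B[0]))
assert abs(area - 3 * math.sqrt(3) / 4) < 1e-12

# dot product CA . DC
CA = (A[0] - C[0], A[1] - C[1])
DC = (C[0] - D[0], C[1] - D[1])
assert abs(CA[0]*DC[0] + CA[1]*DC[1] - (-3)) < 1e-12
```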
{ "language": "en", "url": "https://math.stackexchange.com/questions/4413992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Rewriting $\cos^4 x \sin^2 x $ with exponent no higher than $1$ I'm having some trouble finishing this one off. Rewrite with exponent no higher than $1$: $$\cos^4 x \sin^2 x$$ The answer is: $$\frac{2 + \cos(2x) - 2\cos(4x) - \cos(6x)}{32}$$ So I started like this: $$\cos^4 x \sin^2 x = \frac{1+\cos(2x)}{2}\frac{1+\cos(2x)}{2}\frac{1-\cos(2x)}{2}$$ $$= \frac{1}{8}\left(\{1+\cos(2x)\}\{1^2 - \cos^2(2x)\}\right)$$ $$\frac{1}{8}\left(\{1 + \cos(2x)\}\sin^2(2x)\right)$$ $$\frac{1}{16}\left(\{1 + \cos(2x)\}\{1-\cos(4x)\}\right)$$ Now this is where I start to get lost: $$\frac{1}{16}\left(1 - \cos(4x) + \cos(2x) - \cos(2x)\cos(4x) \right)$$ I really can't find a way from here - I try this, but not sure if this is the right path. $$\require{cancel} \cancel{\frac{1}{16}\left(1 - \cos(4x) + \cos(2x)\{1 - \cos(4x)\} \right)}$$ Completing thanks to help below: $$\frac{1}{16}\left(1 - \cos(4x) + \cos(2x) - \left(\frac{\cos(6x) + \cos(-2x)}{2}\right)\right)$$ $$\frac{1}{32}\left(2 - 2\cos(4x) + 2\cos(2x) - \cos(6x) - \cos(2x)\right)$$ $$=\frac{2 + \cos(2x) - 2\cos(4x) - \cos(6x)}{32}$$
$$\begin{aligned} \cos x &= \frac {e^{ix} + e^{-ix}}{2}, \qquad \sin x = \frac {e^{ix} - e^{-ix}}{2i},\\ \cos^4 x\sin^2 x &= \frac {(e^{ix} + e^{-ix})^4(e^{ix} - e^{-ix})^2}{-64}\\ &= \frac {(e^{4ix} + 4e^{2ix} + 6 + 4e^{-2ix} + e^{-4ix})(e^{2ix} - 2 + e^{-2ix})}{-64}\\ &= \frac {e^{6ix} + 2e^{4ix} - e^{2ix} -4 - e^{-2ix}+2e^{-4ix} + e^{-6ix}}{-64}\\ &= \frac {\cos 6x + 2\cos 4x - \cos 2x - 2}{-32}\\ &= \frac {-\cos 6x - 2\cos 4x + \cos 2x+2}{32}. \end{aligned}$$
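The resulting identity is easy to spot-check numerically (a quick Python sketch):

```python
import math

for k in range(200):
    x = -5 + 0.05 * k
    lhs = math.cos(x)**4 * math.sin(x)**2
    rhs = (2 + math.cos(2*x) - 2*math.cos(4*x) - math.cos(6*x)) / 32
    assert abs(lhs - rhs) < 1e-12
```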
{ "language": "en", "url": "https://math.stackexchange.com/questions/4414175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can a function be differentiable if the limit does not exist? I'm trying to find if the function is differentiable at $x=1$. Upon solving, the limit does not exist. But when I computed the one-sided limits of $f'$ at $x=1$, I got $\frac{3}{2}$ for both, which suggests $f'(1)$ exists. So, is the function differentiable at $x=1$? $$f(x) = \begin{cases} \ln(x^2+x) & x ≤ 1 \\ 3\sqrt x & x > 1 \\ \end{cases}$$ I. $$ f(1) = \ln (1^2 +1) = \ln(2)$$ II. $$\lim_{x\to1^-} f(x) = \lim_{x\to1^-} \ln(x^2+x) = \ln(1^2+1) = \ln(2)$$ $$\lim_{x\to1^+} f(x) = \lim_{x\to1^+} 3 \sqrt x= 3\sqrt1 = 3 $$ $$\lim_{x\to1^-} f(x) ≠ \lim_{x\to1^+} f(x) $$ Thus the $\lim_{x\to1}f(x)$ does not exist. $$f'(x) = \begin{cases} \frac{2x+1}{x^2+x} & x < 1 \\ \frac{3}{2\sqrt x} & x > 1 \\ \end{cases}$$ a) $$ f'_+(1) = \lim_{x\to1^+}f'(x )= \lim_{x\to1^+}\frac{3}{2\sqrt x} = \frac{3}{2\sqrt1} = \frac{3}{2}$$ b)$$ f'_-(1) = \lim_{x\to1^-}f'(x)= \lim_{x\to1^-}\frac{2x+1}{x^2+ x} = \frac{2 + 1}{1+1} = \frac{3}{2}$$
A simple example: let $$f(x) = \begin{cases} -1 & x ≤ 0 \\ 1 & x > 0 \\ \end{cases}$$ The left and right limits of $f'$ at $x=0$ both exist and are equal to the same value (namely, zero), but that doesn't mean that $f'(0)$ exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4414750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Why doesn't my combinatorial solution to a simple probability problem work? I need to find the probability of NOT getting an ace card in two draws from a deck of 52 cards. My first thought (which I really think is correct) was to get the probability by taking $\frac{48}{52}\frac{47}{51}=\frac{\binom{48}{2}}{\binom{52}{2}}\approx0.85$. Isn't this correct? Then I thought about it in another way. There are $\binom{52}{2}$ ways to choose two cards from 52. To not get an ace, you can choose two of the 12 values (where the ace is excluded) in $\binom{12}{2}$ ways, and then you can choose $\binom{4}{1}=4$ different cards from each of the chosen values. Then I'm thinking you could calculate the probability by taking $$\frac{\binom{12}{2}\binom{4}{1}^{2}}{\binom{52}{2}}\approx0.80$$ This doesn't give the same as the first method I used. I'm obviously missing something, but can't really figure out what is wrong. I would appreciate some guidance on how I should think differently about the latter method! (I suppose the probability is approx. 0.85, as I got in the first place.)
In the second method you don't take the case that both cards have the same value into account. That would be $12 \cdot \binom42$ many possibilities. When you add those, you get the same result.
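Both counts can be confirmed directly (a quick Python check):

```python
from math import comb

total = comb(52, 2)
no_ace_direct = comb(48, 2)                 # choose any 2 of the 48 non-aces
two_values = comb(12, 2) * comb(4, 1)**2    # two distinct non-ace values
same_value = 12 * comb(4, 2)                # the case that was missing
assert no_ace_direct == two_values + same_value  # 1128 = 1056 + 72
assert abs(no_ace_direct / total - (48/52) * (47/51)) < 1e-12
```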
{ "language": "en", "url": "https://math.stackexchange.com/questions/4414892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How can I convert a nonlinear constraint to a linear constraint for mixed integer programming? I have a nonlinear constraint: $\sum\limits_{i\in N}\sum\limits_{j\in J} A_{ijt}\times Z_{ijt}\geq \sum\limits_{i\in N}\sum\limits_{j\in J} D_{ij} \hspace{0.5cm} \forall{t}$ Here, $Z_{ijt}\in\{0,1\}$; $A_{ijt}$ is a continuous decision variable ($A_{ijt}\geq 0$); and $D_{ij}$ is a parameter taking any value from 0 to infinity. $Z_{ijt}=1$ only when $A_{ijt}\geq D_{ij}$. How can I make it linear? Thanks
You can enforce $$Z_{ijt}=1 \implies A_{ijt} \ge D_{ij}$$ with the following (big-M) linear constraint: $$D_{ij} - A_{ijt} \le (D_{ij}-0)(1-Z_{ijt}),$$ which simplifies to $$A_{ijt} \ge D_{ij} Z_{ijt}.$$
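A quick enumeration confirms that the simplified linear constraint encodes the implication (hypothetical small values for $D_{ij}$ and the candidate $A$ values):

```python
D = 5.0
for Z in (0, 1):
    for A in (0.0, 2.5, 5.0, 7.5):
        feasible = A >= D * Z            # the linearized constraint
        implication = (Z == 0) or (A >= D)
        assert feasible == implication   # Z = 1 forces A >= D; Z = 0 is free
```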
{ "language": "en", "url": "https://math.stackexchange.com/questions/4415146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimal tensor product of $B(H)$ and $C(G)$ Let $H$ be a finite dimensional vector space, and $G$ be a compact group. Let $B(H)$ be the bounded operators on $H$, let $C(G)$ be the complex valued continuous functions on $G$, and let $C(G;B(H))$ be the $B(H)$ valued continuous functions on $G$. Let $B(H)\otimes C(G)$ be the minimal tensor product of $B(H)$ and $C(G)$. Question: Show that $B(H)\otimes C(G)=C(G;B(H))$? I got stuck in this problem while reading the note 'Compact Quantum Groups and Their Representation Categories' by Neshveyev and Tuset. Thanks in advance.
Another perspective: We have the following chain of well-known isomorphisms: $$B(H)\otimes C(G)\cong M_n(\mathbb{C})\otimes C(G)\cong M_n(C(G))\cong C(G, M_n(\mathbb{C}))\cong C(G, B(H)).$$ If you calculate this explicit composition, you easily see that it agrees with the map in the answer of Martin Argerami. Also, the 'minimal' tensor product doesn't matter here. $C(G)$ is nuclear, so there is a unique $C^*$-norm on every algebraic tensor product you form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4415338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Question about positive definite matrix and inequality proof Problem: Let $X$ and $R$ be positive definite matrices, $C$ is a matrix of compatible dimension, and define $g(X)$ as $g(X)=X-XC'[CXC'+R]^{-1}CX$ Prove that if $X>Y>0$, then $g(X)>g(Y)$. From the definition, I have the followings: $g(X)+XC'[CXC'+R]^{-1}CX=X$ $g(Y)+YC'[CYC'+R]^{-1}CY=Y$ Moreover, $X>Y>0$ implies that $g(X)+XC'[CXC'+R]^{-1}CX>g(Y)+YC'[CYC'+R]^{-1}CY$ and $X>0,Y>0$. I try to use the inversion lemma, but the equation becomes more and more complicated. I don't know how to prove that $g(X)>g(Y)$ based on the above equations. Could you please help me?
Note that \begin{align*} X > Y > 0 &\implies 0 < X^{-1} < Y^{-1} \\ &\implies X^{-1} + C^\intercal R^{-1} C < Y^{-1} + C^\intercal R^{-1} C \\ &\implies (X^{-1} + C^\intercal R^{-1} C)^{-1} > (Y^{-1} + C^\intercal R^{-1} C)^{-1} \\ &\implies g(X) > g(Y) \end{align*} by Woodbury's formula.
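In the scalar case (where positive definite matrices reduce to positive numbers) both the Woodbury identity and the resulting monotonicity are easy to sanity-check; a minimal hypothetical sketch:

```python
def g(X, C, R):
    # X - X C' (C X C' + R)^{-1} C X, written for scalars
    return X - X * C * (C * X * C + R)**-1 * C * X

C, R = 2.0, 3.0
X, Y = 5.0, 1.0   # X > Y > 0

# Woodbury: g(X) = (X^{-1} + C R^{-1} C)^{-1}
assert abs(g(X, C, R) - 1 / (1/X + C * C / R)) < 1e-12
assert g(X, C, R) > g(Y, C, R)   # monotonicity of g
```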
{ "language": "en", "url": "https://math.stackexchange.com/questions/4415492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the side length of a triangle type problem Find $x$ in the diagram Man.. I bet this is really easy but I can't seem to figure out what to do. Law of cosines won't work because I don't know the angle across from $x$. Those two angles are the same, but I don't know how to make that useful. Help appreciated here for a geometry noob! Thanks a ton I really appreciate the help.
You can use the angle bisector theorem: the bisector divides the opposite side so that the two segments are in the same ratio as the other two sides. In your case, $$\frac{6.4}{8}=\frac{16-6.4}{x},$$ which gives $x=12$.
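The arithmetic, spelled out:

```python
# angle bisector theorem: 6.4 / 8 = (16 - 6.4) / x
x = (16 - 6.4) * 8 / 6.4
assert abs(x - 12) < 1e-12
```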
{ "language": "en", "url": "https://math.stackexchange.com/questions/4415708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why is there a bijection from cosets of the stabilizer to the orbit? I'm going through the proof given for the orbit-stabilizer theorem here, but I am stuck at the point that there is a bijection from $G/ {\rm Stab}(x) \to{\rm Orb}(x)$. Just before, they found that the stabilizer is a subgroup and considered the map $\phi: G \to X$ induced by the action: $$ \phi(g) = g \cdot x$$ This map satisfies $\phi(g)=\phi(h)$ exactly when $ g \equiv h \mod \text{Stab}(x)$, i.e. when $g$ and $h$ lie in the same coset. How does one go from this to the idea that fixing injectivity means considering the domain as cosets rather than elements?
In general if $f: A \to B$ is surjective, then the preimages $f^{-1}(\{b\})$ for all $b \in B$ form a partition of $A$. [See this for instance.] This automatically produces a bijection from the collection of preimages $f^{-1}(\{b\})$ to the elements of $B$. In your specific example where $\phi: G \to \text{Orb}(x)$, you can show that the preimages $\phi^{-1}(\{y\})$ for $y \in X$ happen to be the cosets of $\text{Stab}(x)$. This is the crux of the "$\phi(g) = \phi(h) \iff g^{-1} h \in \text{Stab}(x)$" claim in the middle of the proof.
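The correspondence can be illustrated concretely with $S_3$ acting on $\{0,1,2\}$ (a hypothetical sketch representing permutations as tuples, where `g[i]` is the image of `i`):

```python
from itertools import permutations

G = list(permutations(range(3)))   # S_3
x = 0
orbit = {g[x] for g in G}
stab = [g for g in G if g[x] == x]

assert len(G) == len(orbit) * len(stab)   # orbit-stabilizer: 6 = 3 * 2

# g and h have the same image of x  iff  g^{-1} h lies in Stab(x),
# i.e. iff they lie in the same coset of the stabilizer
for g in G:
    ginv = tuple(sorted(range(3), key=lambda i: g[i]))  # inverse permutation
    for h in G:
        gh = tuple(ginv[h[i]] for i in range(3))        # composition g^{-1} h
        assert (gh[x] == x) == (g[x] == h[x])
```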
{ "language": "en", "url": "https://math.stackexchange.com/questions/4415876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about isosceles trapezium $ABCD$, with $AB \parallel DC$ and $AD = BC$. $AC$ and $BD$ intersect at $M$, and $\angle AMB = 60^\circ$. $P$, $Q$, $R$ are the midpoints of $MA$, $MD$, $BC$ respectively. Prove that $\triangle PQR$ is equilateral. I tried solving using the midpoint theorem, but I am getting stuck. Please guide.
Here is a solution using complex-number geometry. We start from the fact that triangle $MAB$ and, as a consequence, $MDC$ are equilateral triangles. We will consider WLOG that their resp. sidelengths are $1$ and $s$. Let us take $M$ as the origin and the parallel to lines $AB$ and $DC$ passing through $M$ as the $x$ axis (the $y$ axis being the line passing through the midpoints of $AB$ and $DC$, resp.). Let us use the complex-number representation (associating lowercase letters to the points denoted by the corresponding uppercase letters), with $$w:=b=e^{i \pi/3}.$$ To points $A,D,C,P,Q,R$ resp. we associate: $$a=w^2, d=sw^4, c=sw^5, p=\tfrac12w^2, q=\tfrac12sw^4, r=\tfrac12(w+sw^5)$$ Triangle $PQR$ is equilateral iff the $\pi/3$ rotation centered in $P$ maps $Q$ to $R$, i.e., $$\mathscr{R}_{\pi/3}(\vec{PQ})=\vec{PR} \ \iff \ w(q-p)=r-p$$ This relationship is equivalent to: $$w(\tfrac12sw^4-\tfrac12w^2)=\tfrac12(w+sw^5)-\tfrac12w^2 \ \iff \ w=1+w^2 $$ which is true.
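Both the key identity $w = 1 + w^2$ and the equilateral claim can be checked numerically for arbitrary $s$ (a quick Python sketch):

```python
import cmath

w = cmath.exp(1j * cmath.pi / 3)
assert abs(w - (1 + w**2)) < 1e-12   # the identity the proof reduces to

for s in (0.5, 1.0, 2.0, 3.7):
    p = 0.5 * w**2
    q = 0.5 * s * w**4
    r = 0.5 * (w + s * w**5)
    # rotation by pi/3 about P sends Q to R ...
    assert abs(w * (q - p) - (r - p)) < 1e-12
    # ... hence all three side lengths agree
    assert abs(abs(p - q) - abs(q - r)) < 1e-9
    assert abs(abs(q - r) - abs(r - p)) < 1e-9
```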
{ "language": "en", "url": "https://math.stackexchange.com/questions/4416091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Ambiguity regarding $\mathbb Q(\sqrt[4]{-2})$ I am studying field theory. I found a question in Artin's algebra which asks if $i\in\mathbb Q(\sqrt[4]{-2})$. Now I am confused with the meaning of $\sqrt[4]{-2}$ here because there are $4$ of them. It is not specified which one I should take, so I am having a problem. Can someone please help me?
This answer has many similarities to that of Oscar Lanzi, but I thought a bit more detail might be helpful. It is true that for each root of $\mu^4+2=0$, $\mathbb{Q}[\mu]$ is a different ring, but all four of these rings are isomorphic. Suppose $\mu^4+2=0$ and $i\in\mathbb{Q}[\mu]$. That is, we have $a,b,c,d\in\mathbb{Q}$ so that $$ \begin{align} 0 &=\left(a\mu^3+b\mu^2+c\mu+d\right)^2+1\\ &=\underbrace{2(ad{+}bc)\vphantom{\left(a^2\right)}}_0\,\mu^3-\underbrace{\left(2a^2{-}2bd{-}c^2\right)}_0\,\mu^2-\underbrace{2(2ab{-}cd)\vphantom{\left(a^2\right)}}_0\,\mu+\underbrace{\left(d^2{-}2b^2{-}4ac{+}1\right)}_0\tag1 \end{align} $$ Since $x^4+2$ is irreducible over $\mathbb{Q}$ (via Eisenstein), $\left\{1,\mu,\mu^2,\mu^3\right\}$ are independent over $\mathbb{Q}$. Thus, each of the coefficients in $(1)$ must be $0$. Since $ad+bc=0$ and $2ab-cd=0$ we get $$ \begin{align} 0 &=2(ad+bc)^2+(2ab-cd)^2\\ &=\left(2a^2+c^2\right)\left(2b^2+d^2\right)\tag2 \end{align} $$ That is, $a=c=0$ or $b=d=0$. If $a=c=0$, then $2a^2-2bd-c^2=0\implies b=0$ or $d=0$. $\quad$if $b=0$, then $d^2-2b^2-4ac+1=0\implies d^2+1=0\quad\Rightarrow\Leftarrow\tag3$ $\quad$if $d=0$, then $d^2-2b^2-4ac+1=0\implies2b^2=1\quad\Rightarrow\Leftarrow\tag4$ If $b=d=0$, then $2a^2-2bd-c^2=0\implies2a^2=c^2\quad\Rightarrow\Leftarrow\tag5$ Cases $(4)$ and $(5)$ rely on the irrationality of $\sqrt2$. Cases $(3)$-$(5)$ are exhaustive; thus, $i\not\in\mathbb{Q}[\mu]$. Since we only used that $\mu^4+2=0$, this works for any root: $\mu=\frac{\pm1\pm i}{\sqrt[4]{2}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4416296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
If $f(x)\rightarrow L$ from both sides then $f'(c)=0$ for some $c$ Let $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $f$ is differentiable over $\mathbb{R}$. Prove that if $\underset{x\rightarrow\infty}{\lim}f(x)=L$ and $\underset{x\rightarrow-\infty}{\lim}f(x)=L$ for $0<L\in\mathbb{R}$, then $\exists c\in\mathbb{R} \; f'(c)=0$. Thoughts: I know that if the interval is $[a,b]$ (where $a, b \in \mathbb{R}$) $x\rightarrow a^+$ and $x\rightarrow b^-$, I can use Rolle's theorem, but what should I do when $x$ approaches infinity?
If $f(x)=L$, then we are done (the derivative vanishes everywhere). Otherwise there exists $x_0\in \mathbb{R}$ such that $f(x_0) \neq L$, wlog assume $f(x_0)>L$. By definition there exists $R>0$ such that $\vert f(x)-L\vert < \frac{f(x_0)-L}{2}$ for all $\vert x \vert >R$. This implies that $\sup_{\vert x \vert>R} f(x) \leq f(x_0)$. This implies that $\sup_{x\in \mathbb{R}} f(x) = \sup_{ x \in [-R;R]} f(x)$. As $f$ is continuous and $[-R;R]$ compact, we get that $f$ admits a global maximum, and at the global maximum the derivative vanishes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4416428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If $\Sigma_{k=1}^n \frac{1}{k(k+1)}= \frac{7}{8}$ then what is $n$ equal to? If $S_n=\Sigma_{k=1}^n \frac{1}{k(k+1)}= \frac{7}{8}$ then what is $n$ equal to? So, the most obvious course of action in my mind is to find a closed form for the partial summations, but alas, this task eludes me. I started doing this by hand... like just adding up the fractions until I get to $\frac{7}{8}$ and got $n=7$. Surely there must be a better way. Help appreciated here! Thanks, I really appreciate it.
There is a certain name for this type of sum: it telescopes. First we can use the partial fractions method: $$ \frac{1}{k(k+1)} = \frac{A}k+\frac{B}{k+1} \quad \implies $$ $$ 1 = A(k+1)+Bk \quad \forall k $$ So if we set $k=0$ we obtain that $A=1$. If we set $k=-1$ then $B = -1$; this means that $$ \frac{1}{k(k+1)} = \frac{1}{k}-\frac{1}{k+1} $$ Now the sum: $$ \sum_{k=1}^s \frac{1}{k(k+1)} = \sum_{k=1}^s \left(\frac{1}{k}-\frac{1}{k+1}\right) $$ But certain terms in this series cancel out... For example if $s =3$ then $$ \sum_{k=1}^3 \left(\frac{1}{k}-\frac{1}{k+1}\right) = \frac{1}{1}-\frac{1}{2} + \frac{1}{2}-\frac{1}{3}+ \frac{1}{3}-\frac{1}{4} = 1-\frac{1}{4} $$ Do you see the pattern?
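For the original question, the telescoping pattern gives the closed form $S_n = 1-\frac{1}{n+1}=\frac{n}{n+1}$, and $\frac{n}{n+1}=\frac78$ forces $n=7$. A quick exact check with rational arithmetic (a sketch, not part of the argument):

```python
from fractions import Fraction

def S(n):
    # exact partial sum of 1/(k(k+1)) for k = 1..n
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

# telescoping closed form: S(n) = 1 - 1/(n+1) = n/(n+1)
for n in range(1, 50):
    assert S(n) == Fraction(n, n + 1)

# so S(n) = 7/8 means n/(n+1) = 7/8, i.e. n = 7
assert S(7) == Fraction(7, 8)
```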
{ "language": "en", "url": "https://math.stackexchange.com/questions/4416590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Can the sum of two divergent integrals yield a convergent integral? Given: $f(x) + g(x) \neq 0$ $\int_a^b f(x) dx $ diverges $\int_a^b g(x) dx $ diverges $a$ and $b$ can be real numbers or $\pm\infty$ Find $f$ and $g$ such that: $$\int_a^b (f(x)+g(x)) dx = L$$ for some finite $L$. I really can't seem to think of any examples asides from $f+g=0$. Is there a solution to this at all?
Suppose $\int_a^{b} f(x)dx $ diverges and $\int_a^{b} h(x)dx$ converges with $h \neq 0$. Take $g=h-f$. Then $$\int_a^b (f(x)+g(x)) dx =\int_a^{b} h(x)dx$$ is convergent. Also, $\int g$ is divergent: if it converged, then $\int f=\int h-\int g$ would converge too. Example: $f(x)=\frac 1 {x-a}, h(x)=1$.
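A numerical illustration of this construction (a sketch; here I take the hypothetical values $a=0$, $b=1$, so $f(x)=1/x$, $h(x)=1$, $g=h-f$): the truncated integral of $f$ grows like $\log(1/\varepsilon)$ as the cutoff $\varepsilon\to 0$, while the truncated integral of $f+g=h$ stays bounded:

```python
import math

a, b = 0.0, 1.0

def I(func, eps, n=100_000):
    # left Riemann sum of func over [a + eps, b]
    h = (b - a - eps) / n
    return sum(func(a + eps + i * h) for i in range(n)) * h

f = lambda x: 1.0 / (x - a)        # integral over (a, b] diverges
g = lambda x: 1.0 - 1.0 / (x - a)  # also diverges
h_sum = lambda x: 1.0              # f + g, whose integral is exactly 1

for eps in (1e-2, 1e-4, 1e-6):
    # truncated integral of f blows up like log(1/eps)...
    assert I(f, eps) > 0.9 * math.log(1 / eps)
    # ...while the truncated integral of f + g stays put
    assert abs(I(h_sum, eps) - (1 - eps)) < 1e-6
```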
{ "language": "en", "url": "https://math.stackexchange.com/questions/4416774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Height of an irregular tetrahedron with an equilateral base and lateral faces making angles $60^\circ$, $60^\circ$, $80^\circ$ with that base An irregular tetrahedron has a base that is an equilateral triangle of side length $10$. The lateral faces make angles of $60^\circ, 60^\circ$ and $80^\circ$ with the base. Find the height of the tetrahedron. So, one way I thought I could solve this problem is using coordinate geometry, specifically attaching a reference frame to the base, and writing the equations of the three planes that represent the three lateral faces, and then solving the linear system for the apex coordinates.
Given a tetrahedron with base $ABC$ and apex $D$, let * *$E$ be the orthogonal projection of $D$ onto the plane holding $ABC$. *$h = |DE|$ will be the height of the tetrahedron. *$\theta_A / \theta_B / \theta_C$ be the angle between faces $DBC$ / $DCA$ / $DAB$ and base $ABC$. *$\ell_A / \ell_B / \ell_C$ be the distance of $E$ to edges $BC$ / $CA$ / $AB$. As long as all $\theta_A, \theta_B, \theta_C < 90^\circ$, $E$ lies inside $ABC$. Furthermore, we have * *$\ell_A = h \cot\theta_A$, $\ell_B = h \cot \theta_B$ and $\ell_C = h\cot\theta_C$ *$|BC|\ell_A + |CA|\ell_B + |AB|\ell_C = 2\verb/Area/(ABC)$ For the tetrahedron at hand, we have $|AB| = |BC| = |CA| = 10$ and $\verb/Area/(ABC ) = \frac{\sqrt{3}}{4}(10)^2$. This leads to $$\ell_A + \ell_B + \ell_C = 5\sqrt{3}$$ and as a result, $$\begin{align} h = \frac{\ell_A + \ell_B + \ell_C}{\cot\theta_A + \cot\theta_B + \cot\theta_C} &= \frac{5\sqrt{3}}{2\cot(60^\circ) + \cot(80^\circ)} = \frac{15}{2 + \sqrt{3}\cot(80^\circ)}\\ &\approx 6.506442514261543\end{align}$$
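The final formula is easy to evaluate directly (a sketch; the helper assumes an equilateral base and all dihedral angles below $90^\circ$, exactly as in the derivation above):

```python
import math

def tetra_height(side, angles_deg):
    # height over an equilateral base of the given side length, when the
    # three lateral faces meet the base at the given dihedral angles
    area = math.sqrt(3) / 4 * side ** 2
    ell_sum = 2 * area / side                    # = ell_A + ell_B + ell_C
    cot_sum = sum(1 / math.tan(math.radians(t)) for t in angles_deg)
    return ell_sum / cot_sum

h = tetra_height(10, (60, 60, 80))
assert abs(h - 15 / (2 + math.sqrt(3) / math.tan(math.radians(80)))) < 1e-12
assert abs(h - 6.5064425) < 1e-6
```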
{ "language": "en", "url": "https://math.stackexchange.com/questions/4416949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Derive density of $Z=XY$, $X\sim U(0,1)$ and $Y\sim\mathcal N(0,1)$. I have been stumped for a few days on this. I have two random variables $X\sim U(0,1)$ and $Y\sim\mathcal N(0,1)$, which are independent. How can I get the density of $Z = XY$? I put $Z = XY, W = Y$ i.e. $X=Z/W, Y=W$ and achieved the Jacobian as $J={1\over|W|}$, but I've got $f_{Z,W}(z,w)={1\over\sqrt{2\pi}|w|}{\exp(-w^2/2)}$ and I don't know how to integrate this w.r.t $w$ and get the (marginal) density of $Z$. Could you please help me with this problem? I tried partial integration too but it doesn't work.
angryavian's comment seems to be the way to get a close to numerical answer. In case you were interested visually in the density, here is a plot made with $10^7$ samples (plot not reproduced here). It looks strangely convex near the peak; my guess beforehand was that it would have looked very normal.
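For what it's worth, here is a self-contained numerical check (a sketch): Monte Carlo moments of $Z=XY$ against the exact values $\mathbb E Z=0$ and $\operatorname{Var} Z=\mathbb E X^2\,\mathbb E Y^2=\tfrac13$, plus the marginal density obtained by integrating the joint density in the question over $w$ on the region $0<z/w<1$, namely $f_Z(z)=\int_{|z|}^\infty \frac{e^{-w^2/2}}{\sqrt{2\pi}\,w}\,dw$:

```python
import math
import random

random.seed(1)
n = 200_000
samples = [random.random() * random.gauss(0, 1) for _ in range(n)]

# moment checks: E[Z] = 0 and Var(Z) = E[X^2] E[Y^2] = 1/3
mean = sum(samples) / n
var = sum(z * z for z in samples) / n - mean ** 2
assert abs(mean) < 0.01
assert abs(var - 1 / 3) < 0.01

def f_Z(z, upper=8.0, steps=2000):
    # f_Z(z) = int_{|z|}^inf exp(-w^2/2)/(sqrt(2 pi) w) dw, midpoint rule
    lo = abs(z)
    h = (upper - lo) / steps
    tot = 0.0
    for i in range(steps):
        w = lo + (i + 0.5) * h
        tot += math.exp(-w * w / 2) / (math.sqrt(2 * math.pi) * w)
    return tot * h

# compare P(Z > 0.5) from the candidate density with the Monte Carlo value
num_tail = sum(f_Z(0.5 + (i + 0.5) * 0.01) for i in range(450)) * 0.01
mc_tail = sum(z > 0.5 for z in samples) / n
assert abs(num_tail - mc_tail) < 0.01
```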
{ "language": "en", "url": "https://math.stackexchange.com/questions/4417304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show that if $h$ is differentiable three times and $h(-1)=h(0)=h'(0)=0$ and $h(1)=1$, then $h^{(3)}(r)\geq 3$ for some $r$. If $h$ is a real function, differentiable three times on $[-1,1]$, such that $h(-1)=h(0)=h'(0)=0$ and $h(1)=1$, prove that there exists a real number $r\in(-1,1)$ such that $h^{(3)}(r)\geq 3$. For this I used Taylor's theorem, so \begin{eqnarray} h(x)= h(a)+h'(a)(x-a)+\frac{h''(a)}{2!}(x-a)^{2}+\frac{h^{(3)}(a)}{3!}(x-a)^{3} \end{eqnarray} for $a\in[-1,1]$, then, for $a=0$ \begin{eqnarray} h(x)= \frac{h''(0)}{2!}x^{2}+\frac{h^{(3)}(0)}{3!}x^{3} \end{eqnarray} and for $a=1$, and $a=-1$ \begin{eqnarray} h(x)= 1+h'(1)(x-1)+\frac{h''(1)}{2!}(x-1)^{2}+\frac{h^{(3)}(1)}{3!}(x-1)^{3}\\ h(x)=h'(-1)(x+1)+\frac{h''(-1)}{2!}(x+1)^{2}+\frac{h^{(3)}(-1)}{3!}(x+1)^{3} \end{eqnarray} but I don't see the way to continue. Do you know some hint to continue?
We can solve this by considering the function $h$ over the intervals $(0,1)$ and $(-1, 0)$. Taylor's theorem tells us that there is a point $x \in (0, 1)$ such that: $$h(1) = h(0) + h'(0) + h''(0)/2 + h^{(3)}(x)/6.$$ Similarly, we also know there is a point $y \in (-1, 0)$ such that: $$h(-1) = h(0) -h'(0) + h''(0)/2 - h^{(3)}(y)/6.$$ Then from the known values of $h$ we find that $h^{(3)}(x) + h^{(3)}(y) = 6$. So either $h^{(3)}(x) \geq 3$ or $h^{(3)}(y) \geq 3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4417513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
integral of the function $[x]$ stands for the integral part of $x$, $n$ is positive and $f'$ is the derivative of $f$ $$\int_{0}^{2} [x]^n f'(x)\,dx$$ I have done it by separating the limits $$\int_{0}^{1} [x]^n f'(x)\,dx + \int_{1}^{2} [x]^n f'(x)\,dx$$ and got $$2^n f(2) -f(1)$$ but the answer given is $f(2)+f(1)$
$[x]=1$ for $1 <x<2$ (assuming that $[x]$ is the floor function). So the correct value is $f(2)-f(1)$. Your answer and the given answer are both wrong!
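A quick numerical confirmation of this (a sketch, using the arbitrary choice $f=\sin$): the midpoint rule applied to $\int_0^2 [x]^n f'(x)\,dx$ returns $f(2)-f(1)$ for every positive $n$, since $[x]^n$ vanishes on $(0,1)$ and equals $1$ on $(1,2)$:

```python
import math

def quad(func, lo, hi, n=200_000):
    # midpoint rule on [lo, hi]
    h = (hi - lo) / n
    return sum(func(lo + (i + 0.5) * h) for i in range(n)) * h

f, fprime = math.sin, math.cos

for n_pow in (1, 2, 5):
    val = quad(lambda x: math.floor(x) ** n_pow * fprime(x), 0.0, 2.0)
    # [x]^n is 0 on (0,1) and 1 on (1,2), so the integral is f(2) - f(1)
    assert abs(val - (f(2) - f(1))) < 1e-4
```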
{ "language": "en", "url": "https://math.stackexchange.com/questions/4417660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that copulas are Lipschitz continuous A copula is a function $C:[0,1]^2\to[0,1]$ such that $C(x,0)=C(0,x)=0$ for all $x\in[0,1]$, $C(x,1)=C(1,x)=x$ for all $x\in[0,1]$, and \begin{equation}\label{ineq} C(x_2,y_2)-C(x_1,y_2)-C(x_2,y_1)+C(x_1,y_1)\ge0\tag{*} \end{equation} for all $(x_1,y_1), (x_2,y_2)\in[0,1]^2$ with $x_1\le x_2$ and $y_1\le y_2$. I am trying to show that a copula is Lipschitz continuous in the following sense: $$ |C(x_2,y_2)-C(x_1,y_1)| \le|x_2-x_1|+|y_2-y_1| $$ for all $(x_1,y_1),(x_2,y_2)\in[0,1]^2$. Suppose that $x_2\ge x_1$ and $y_2\ge y_1$. Using the definition of a copula, we have that $$ x_2-x_1-C(x_2,y_1)+C(x_1,y_1)\ge0 $$ and $$ y_2-y_1-C(x_2,y_2)+C(x_2,y_1)\ge0. $$ Adding these two inequalities, we obtain $$ C(x_2,y_2)-C(x_1,y_1) \le x_2-x_1+y_2-y_1. $$ Since copulas are increasing in each argument, $ C(x_2,y_2) \ge C(x_2,y_1) \ge C(x_1,y_1) $ so that $ C(x_2,y_2)-C(x_1,y_1)\ge0 $ and hence $$ |C(x_2,y_2)-C(x_1,y_1)|\le x_2-x_1+y_2-y_1, $$ when $x_1\le x_2$ and $y_1\le y_2$. We also need to consider the case when $x_1\le x_2$ but $y_1\ge y_2$. If I understand correctly, inequality \eqref{ineq} is not valid in this case and I am not sure how to proceed. How can we proceed with the proof when $x_1\le x_2$ but $y_1\ge y_2$? Any help is much appreciated!
We have $x_1\le x_2$ but $y_1\ge y_2$. Then either $C(x_2, y_2)\geq C(x_1, y_1)$ or $C(x_2, y_2)\leq C(x_1, y_1)$. Take the case when $C(x_2, y_2)\geq C(x_1, y_1)$. Then, you want to prove that $$ C(x_2, y_2)- C(x_1, y_1)\leq x_2-x_1 + y_1-y_2. $$ From the marginal monotonicity (since $y_2\le y_1$) you get $$ C(x_2, y_2)- C(x_1, y_1)\leq C(x_2, y_2)- C(x_1, y_2). $$ Now, you are in the previous setting, i.e. it holds that $$ C(x_2, y_2)- C(x_1, y_2)\leq x_2-x_1. $$ Hence, $C(x_2, y_2)- C(x_1, y_1)\leq x_2-x_1. $ In the other case, when $C(x_2, y_2)\leq C(x_1, y_1)$, the same argument (now comparing with $C(x_2,y_1)$, using $x_1\le x_2$ and the Lipschitz bound in the second argument) gives $$ C(x_1, y_1)- C(x_2, y_2)\leq C(x_2, y_1)- C(x_2, y_2)\leq y_1-y_2. $$ Hence, together, $$ |C(x_2, y_2)- C(x_1, y_1)| \leq \max(x_2-x_1,y_1-y_2) \leq x_2-x_1+y_1-y_2. $$
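The inequality can also be spot-checked numerically on a concrete copula (a sketch; the Clayton copula with $\theta=2$ is an arbitrary choice of example with nontrivial dependence):

```python
import random

def C(u, v, theta=2.0):
    # Clayton copula
    if u == 0.0 or v == 0.0:
        return 0.0
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

# boundary property of a copula: C(u, 1) = u
for u in (0.2, 0.7):
    assert abs(C(u, 1.0) - u) < 1e-12

random.seed(0)
for _ in range(10_000):
    x1, y1, x2, y2 = (random.random() for _ in range(4))
    lhs = abs(C(x2, y2) - C(x1, y1))
    # Lipschitz bound, with a tiny slack for floating point error
    assert lhs <= abs(x2 - x1) + abs(y2 - y1) + 1e-12
```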
{ "language": "en", "url": "https://math.stackexchange.com/questions/4417841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $f=g$ almost everywhere, $\int f=\int g$ Let $\Omega$ be a measurable set, and let $f: \Omega \to [0, + \infty]$ and $g : \Omega \to [0, +\infty]$ be non-negative measurable functions. Show that if $f(x) = g(x)$ for almost every $x \in \Omega$, then $\int_\Omega f = \int_\Omega g$. I was given a very short proof that says that if $f=g$ almost everywhere then up to modifying $f$ on a measure zero set, we don't change its integral so we can suppose that $f(x) \geq g(x)$ $\forall x \in \Omega$ and then use the fact that the Lebesgue integral preserves inequality for measurable functions and do the same with $g$. However, saying that we don't change the integral of $f$ when modifying its values for a measure set is exactly what we are trying to prove right ? This argument is using A to prove A... Is there a valid proof I can be given ? I precise that we have not seen $\int_\Omega f+g = \int_\Omega f + \int_\Omega g$ yet.
You can do this going back to first principles. Let $\phi$ be a simple function with $0 \le \phi \le f$ and let $E = \{f = g\}$. You can check that $\phi \chi_E$ is also a simple function and that $$\int \phi = \int \phi \chi_E.$$ Since $\phi \chi_E = 0$ whenever $f \not= g$ it follows that $0 \le \phi \chi_E \le g$. Consequently by the definition of the integral of $g$ you have $$\int \phi \le \int g.$$ Take the supremum over all such $\phi$ to conclude by the definition of the integral of $f$ that $$\int f \le \int g.$$ Now do it again, exchanging the roles of $f$ and $g$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4417983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $|f(z)| < 1$ then $|f(z)| < |z|$ Exercise Suppose $f(z)$ is analytic on $|z| <1$ and $|f(z)| < 1$ for all $|z| < 1$; also $f(0) = 0$. Show that $|f(z)| \leq |z|$ and that $|f'(0)| \leq 1$. Proof Attempt. First, I attempt to show that $|f(z)| \leq |z|$. Case 1: |z| =0: If $z = 0$, then $f(0) = 0$. Notice that $|f(0)| = 0 \leq |0| = 0$. Case 2: 0<|z|<1: ??? It's hard to imagine how Case 2 holds, given the below counterexample: Consider for $|z| < 1$ the function $f(z) = z + \frac{|z-1|}{2}$ . Then for all $ 0<|z| < 1$, we have $|f(z)| < 1$ but $|f(z)| > |z|$. Perhaps there is something I am misunderstanding here?? Any help or tips are greatly appreciated.
Use Riemann's theorem on removable singularities to show that $f(z)/z$ can be extended to an analytic function, and use the maximum modulus principle on the disk centered at $0$ of radius $0<r<1$. You will get a bound that holds for every $0<r<1$, and then take a suitable limit to get your bound. Now, your example is not a holomorphic function. One quick way to see this: if it were, then $f(z)-z$ would be a holomorphic real valued function defined on an open connected set. What does the open mapping theorem tell you? Of course, you can still use the CR equations to show that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4418228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Generating function for $a_n = \sum_{k=0}^{n-2} a_ka_{n-k-2}$ Let $a_n$ be a sequence following the recurrence relation $$a_n = \sum_{k=0}^{n-2} a_ka_{n-k-2}$$ with initial conditions $a_0 = a_1 = 1$. We have to find the generating function for $a_n$ that does not contain an infinite series. Let $f(x) = \sum_{k\ge 0} a_k x^k$. We know that $$\sum_{n\ge 2}\left(\sum_{k=0}^{n-2} a_ka_{n-k-2}\right)x^{n-2} = \left(\sum_{k\ge 0} a_k x^k \right)^2$$ After that, I am not able to proceed. Can someone help? Note that $a_2 = 1$ and $a_3 = 2$. Thus simplifying from the answer below: $$x^2f^2(x) -f(x) + (1+x) = 0.$$ And solving the quadratic we have $$f(x) = \frac{1-\sqrt{1-4x^2(x+1)}}{2x^2}$$
The expansion of $\sqrt{1+x}$ is $$\sum_{k=0}^{\infty}\binom{1/2}{k}x^k = 1 + \sum_{k=1}^\infty\frac{(1/2)(1/2 - 1) \cdots (1/2 - k + 1)}{k!}x^k = 1 + \frac{1}{2}x - \frac{1}{8}x^2 + \frac{1}{16}x^3 - \frac{5}{128}x^4 + \cdots.$$ Then the expansion of $\sqrt{1 - 4x^2(x + 1)}$ is \begin{align*}1 + \frac{1}{2}(-4x^2(x+1)) - \frac{1}{8}(-4x^2(x+1))^2 + \frac{1}{16}(-4x^2(x+1))^3 - \frac{5}{128}(-4x^2(x+1))^4 + \cdots\end{align*} so $1 - \sqrt{1 - 4x^2(x + 1)}$ is \begin{align*}- \frac{1}{2}(-4x^2(x+1)) + \frac{1}{8}(-4x^2(x+1))^2 - \frac{1}{16}(-4x^2(x+1))^3 + \frac{5}{128}(-4x^2(x+1))^4 + \cdots\end{align*} and $\frac{1 - \sqrt{1 - 4x^2(x + 1)}}{2x^2}$ is \begin{align*}- \frac{1}{4x^2}(-4x^2(x+1)) + \frac{1}{16x^2}(-4x^2(x+1))^2 - \frac{1}{32x^2}(-4x^2(x+1))^3 + \frac{5}{256x^2}(-4x^2(x+1))^4 + \cdots\end{align*} which simplifies to $$1 + x + x^2(x+1)^2 + 2x^4(x+1)^3 + 5x^6(x+1)^4 +\cdots = 1 + x + x^2 + 2x^3 + 3x^4 + \cdots$$ and the coefficients are the terms you are looking for.
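One can verify with exact rational arithmetic that the coefficients of $\frac{1-\sqrt{1-4x^2(x+1)}}{2x^2}$ agree with the recurrence (a sketch; all power series are truncated at degree $12$):

```python
from fractions import Fraction

N = 12  # truncate all power series at degree N

def mul(p, q):
    out = [Fraction(0)] * (N + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j <= N:
                    out[i + j] += pi * qj
    return out

# u(x) = -4x^2(x+1) = -4x^2 - 4x^3
u = [Fraction(0)] * (N + 1)
u[2] = u[3] = Fraction(-4)

# sqrt(1 + u) = sum_k binom(1/2, k) u^k  (binomial series)
s = [Fraction(0)] * (N + 1)
term = [Fraction(0)] * (N + 1)
term[0] = Fraction(1)
coeff = Fraction(1)
for k in range(N + 1):
    s = [si + coeff * ti for si, ti in zip(s, term)]
    term = mul(term, u)
    coeff = coeff * (Fraction(1, 2) - k) / (k + 1)

# f(x) = (1 - sqrt(1 - 4x^2(x+1))) / (2x^2): drop two degrees
series = [-s[m + 2] / 2 for m in range(N - 1)]

# recurrence a_n = sum_{k=0}^{n-2} a_k a_{n-k-2}, a_0 = a_1 = 1
a = [Fraction(1), Fraction(1)]
for m in range(2, N - 1):
    a.append(sum(a[k] * a[m - k - 2] for k in range(m - 1)))

assert series == a
assert a[:7] == [1, 1, 1, 2, 3, 6, 11]
```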
{ "language": "en", "url": "https://math.stackexchange.com/questions/4418405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a relationship between slope of the curve $ f(x, y) = 0 $ and the partial derivative $ \frac{\partial f}{\partial x} $? Let $ f(x, y) = 0 $ define a curve. Is there any relationship between the slope of this curve and the partial derivative $ \frac{\partial f}{\partial x} $? For example, if $ f(x, y) = (x - 1)^2 + (y - 1)^2 - 1 $, then $ f(x, y) = 0 $ defines a circle of radius $ 1 $ centered at $ (0, 0) $. Now $ \frac{\partial f(x, y)}{\partial x} = 2(x - 1) $. Thus $ \frac{\partial f}{\partial x} = 0 $ at $ x = 1 $ and indeed we see that the circle $ f(x, y) = 0 $ also has a slope of $ 0 $ at $ x = 1 $. Is it always true that $ \frac{\partial f}{\partial x} = 0 $ at $ x = a $ implies that the curve $ f(x, y) = 0 $ has a horizontal slope (slope = $ 0 $) at $ x = a $? If yes, how can we prove it? If not, is there a counterexample?
$\def\rbf{\mathbf{R}}$The equation $f(x,y)=0$ does not always define a (smooth) curve. But let us assume, for simplicity, that it does and $f:\rbf^2\to\rbf$ is smooth and the gradient $\nabla f\ne 0$ everywhere. It does not make sense to say "the slope of this curve". Though it makes sense to say the slope field on this curve or the slope of the tangent line at a point of this curve. Note $f=0$ is a level curve for the function $f$. Call this curve $\gamma$. It is known that at each point on this level curve, the gradient vector is perpendicular to the level curve. In other words, if $(a,b)$ is a point on the curve, i.e., $f(a,b)=0$, and $f_x(a,b)=A$ and $f_y(a,b)=B$, then an equation to the tangent line of $\gamma$ at $(a,b)$ is $$ A(x-a)+B(y-b)=0\;\tag{1} $$ So if $f_x(a,b)=0$, equation (1) becomes $$ B(y-b)=0\;. $$ which is a horizontal line ($B\ne 0$ by our assumption). [Added.] As one of the comments pointed out, in your example $f(x,y)=(x-1)^2+(y-1)^2-1$, the equation $f=0$ is corresponding to the circle centered at $(1,1)$, not $(0,0)$. Moreover, it does not make sense to say the circle $f=0$ has a slope of $0$ at $x=1$. What you have instead, is that at the point $(x,y)=(1,0)$ and also the point $(x,y)=(1,2)$, the tangent lines to the circle both have the slope $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4418515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Expected value and variance of number of coin flips until two consecutive tails are flipped I'm working with a problem from an old exam where one had to calculate the expected value and variance of the number of throws, let's call it $N$, before we get two tails in a row. We also assume the coin to be fair, meaning the probability of getting head and tails is just as equal. For our sake, let's also form the events $T$ for flipping a tail, and $H$ for flipping a head. In order to calculate the expected value, we need to find the pmf of our stochastic variable $N$. This can easily be done by first examining some base cases of $p(k):=P(N=k)$. Furthermore, we have that $V_N \in \{2,3,\dots\}$. For $N = 2$, $P(N=2) = P(T \cap T) = 1/4$ trivially. For $N = 3$, $P(N=3) = P(H \cap T \cap T) = 1/8$ also trivially. From this we notice a pattern, before every ending $TT$ we have to place out a $H$, meaning this position is always determined. For instance $N = 4$, we have that the last three letters are $HTT$, and for the first position, we have 2 choices, meaning $P(N=4) = 2 / 2^4 = 1/8$ So what about the case when $N = k$? We already know that the last three letters are determined. Meaning we have a total of $2^{k-3}$ choices left to do. But from this, we have to subtract the number of $TT$ - "strings" that may arise in the rest of our $k-3$ positions. However, from here, I struggle to find the number of combinations for which we don't get a $TT$ somewhere along the $k-3$ positions. I know that as soon as we get $T$, we must choose $H$, but as soon as we get $H$, we have $2$ choices to make. Maybe this is a better way of tackling the problem instead of the method I used above. Still, I don't really see how to cover all the cases, and I'd be glad if anyone could share these details. Also, I'd be thankful if you didn't share a whole solution to the expected value and variance, since I'll try to solve it on my own. Thanks.
EDIT: My previous idea is right, but not the execution. Indeed, see comments below; we have that $\mathbb E(N)\geq 2$. If we now condition on how the sequence starts (a single head, or tail–head, or tail–tail), we get $$\mathbb E(N)=\mathbb P(TT)\cdot 2+\mathbb P(TH)\left(2+\mathbb E(N)\right)+\mathbb P(H\text{ first})\left(1+\mathbb E(N)\right)\\ =\frac14\cdot 2+\frac14(2+\mathbb E(N))+\frac12(1+\mathbb E(N))=\frac32+\frac34\mathbb E(N)\\ \Leftrightarrow \mathbb E(N)=6$$
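A numerical cross-check (a sketch, using the pmf recursion $p_k=\tfrac12 p_{k-1}+\tfrac14 p_{k-2}$ for $k\ge 4$, obtained by the same first-flip conditioning): the mean comes out as $6$, and the same computation gives $\operatorname{Var}(N)=22$ for the variance asked in the question:

```python
# p_k = P(N = k): condition on the start of the run -- H (prob 1/2)
# leaves a run of length k-1, TH (prob 1/4) leaves a run of length k-2,
# and TT (prob 1/4) ends it at k = 2
p = {2: 0.25, 3: 0.125}
for k in range(4, 400):
    p[k] = p[k - 1] / 2 + p[k - 2] / 4

total = sum(p.values())
mean = sum(k * pk for k, pk in p.items())
second = sum(k * k * pk for k, pk in p.items())
var = second - mean ** 2

assert abs(total - 1) < 1e-12   # the truncated tail is negligible
assert abs(mean - 6) < 1e-9
assert abs(var - 22) < 1e-8
```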
{ "language": "en", "url": "https://math.stackexchange.com/questions/4418658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$F=\{z\in\Bbb{H}:\ |z|>1,\ 2|\Re(z)|<\lambda\}$ is fundamental domain for $G_\lambda$(the subgroup of $SL(2,\Bbb{R})$ generated by $S$ and $T_\lambda$ Let $0<\lambda<2$ be a real number and $G_\lambda$ be the subgroup of $SL(2,\Bbb{R})$ generated by $S=\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$ and $T_\lambda=\begin{pmatrix}1 & \lambda\\ 0 & 1\end{pmatrix}$. Prove that, $$F_\lambda=\{z\in\Bbb{H}:\ |z|>1,\ 2|\Re(z)|<\lambda\}$$ is a fundamental domain for group $G_\lambda$. I can prove the statement for $\lambda=1$ using the following two lemmas- * *For $\tau'\in\Bbb{H}$, there exists $\tau \in \Bbb{H}$ such that $\tau=A\tau'$ for some $A\in G_1$ with $|\tau|\ge 1$ and $2|\Re(\tau)|\le 1$ *If $z\in F_1$ and $Az\in F_1$ for some $A\in G_1$, then $A= I$. But for $\lambda=1$, $G_1=SL(2,\Bbb{Z})$ (hence the entries of the matrices are integers) and I've used the concept of Lattice to prove the above two lemmas. I cannot generalize the proof for any $\lambda\in (0,2)$. Can anyone help me to prove the statement? Thanks for help in advance.
Answered in the comments, but just to close this out and give an elementary counterexample: Let $$\zeta = \frac{-\lambda+ \sqrt{\lambda^2 - 4}}{2}.$$ $$\zeta' = \frac{\lambda+ \sqrt{\lambda^2 - 4}}{2}.$$ Assuming $0 < \lambda < 2$ these are both in the upper half plane $\mathbb{H}$. Note that $|\zeta| = |\zeta'| = 1$ and $2 |\mathrm{Re}(\zeta)| = 2 |\mathrm{Re}(\zeta')| = 1$. These are the two "corners" of the hyperbolic triangle $F_{\lambda}$. Moreover $\zeta \zeta' = -1$. Hence $T \zeta = \zeta'$, $S \zeta' = \zeta$, and thus, if $R = ST$, that $$R \zeta = \zeta.$$ The stabilizer in $\mathrm{PSL}_2(\mathbb{R})$ of any point in $\mathbb{H}$ is isomorphic to $\mathrm{SO}_2(\mathbb{R})$, that is, will be a hyperbolic rotation around $\zeta$. But now you see a problem; if this rotation is not given by an element of finite order, the orbit of any point different from $\zeta$ under interates of $R \in G$ will be dense in a hyperbolic circle. This is certainly incompatible with the claim. Equivalently, it must be the case that $R$ has finite order and the eigenvalues of $R$ are roots of unity. But $\zeta$ itself is an eigenvalue of $R$, so if $\zeta$ is not a root of unity then the statement is certainly wrong. To be completely explicit, let $\lambda = 6/5$, so $\zeta = -3/5 + 4i/5$, and $$R =TS = \left( \begin{matrix} 6/5 & -1 \\ 0 & 1 \end{matrix} \right).$$ Now for any $x \in F_{\lambda}$, $R^n x$ will be dense and hence infinitely many other iterates will be inside $F_{\lambda}$. For example, if $x = 2 i$, then $$R^{10} x = \frac{-8648173707066 + 38146972656250 i}{25886783980445} $$ lies in $F_{\lambda}$ but is not equal to $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4418831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many $5$ letter words can be made from $15$ letter set where multiple conditions must be met a) How many $5$-letter words can be made using letters from the $15$ letter set $\{A, B, C ... , O\}$ such that the letters are all different and in alphabetical order? b) How many are there if we add the condition that no word begins OR ends with a vowel? I understand part a). It's just $\binom{15}{5}$. But I am having trouble with b) I thought of creating two sets such as $A$ for all words that start with a vowel and set $B$ for all words that end in a vowel and then finding $A \cup B$ and subtract that from $\binom{15}{5}$ but I am not sure. Any help and guidance would be appreciated.
Here is a simplification. Note that if a five letter word in alphabetical order contains letter $A$, it must start with $A$ and if it contains letter $O$, it must end with $O$. But as we cannot have a vowel at either end, that leaves us to make words with the remaining thirteen letters, B C D E F G H I J K L M N If the set of words starting with a vowel is $P$ and the set of words ending with a vowel is $Q$, $|P \cup Q| = |P| + |Q| - |P \cap Q|$ $ \displaystyle |P| = {9 \choose 4} + {5 \choose 4} = 131$ $ \displaystyle |Q| = {7 \choose 4} = 35$ $ \displaystyle |P \cap Q| = 1$ So the answer is $~ \displaystyle {13 \choose 5} - \left(131 + 35 - 1\right) = 1122$
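Since the sets involved are tiny, the count is easy to confirm by brute force (a sketch):

```python
from itertools import combinations

letters = "ABCDEFGHIJKLMNO"
vowels = set("AEIO")  # the vowels among the first 15 letters

# part a): words in alphabetical order with distinct letters = 5-subsets
words = list(combinations(letters, 5))
assert len(words) == 3003  # C(15, 5)

# part b): no word begins or ends with a vowel; note this automatically
# rules out A (always first if present) and O (always last if present)
good = [w for w in words if w[0] not in vowels and w[-1] not in vowels]
assert len(good) == 1122
```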
{ "language": "en", "url": "https://math.stackexchange.com/questions/4419072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Derivative of Hadamard Product of two vectors How can I compute the following derivative? $$\frac{\partial(K u \circ T u)}{\partial u}$$ $K$ and $T$ are constant matrices, $u$ is an unknown vector. and $\circ$ is Hadamard product. my solution: $$\frac{\partial(K_{ij} u_j \circ T_{mn} u_n)}{\partial u_p} = K_{ij}\frac{\partial(u_j)}{\partial u_p}\circ T_{mn} u_n+K_{ij} u_j \circ T_{mn} \frac{\partial u_n}{\partial u_p} = K_{ij}\delta_{jp} \circ T_{mn} u_n+K_{ij} u_j \circ T_{mn} \delta_{np}=K_{ip} (\sum_n T_{mn} u_n)+T_{mp} (\sum_j K_{ij} u_j).$$ therefore it can be written as follow $$\frac{\partial(K u \circ T u)}{\partial u} = K^T(Tu)+T^T(Ku).$$ where $\square^T$ is the transpose of the matrix.
The Hadamard product of two vectors is like a scalar product but without summing over the index. So in your "my solution" the index m should be i (but there is no sum over i). The last expression in that calculation is then correct. You cannot rewrite this in your very last expression since there is no sum over i. If you take the directional derivative, i.e. the derivative in the direction of some vector $h$ then you may rewrite it avoiding the index notation: $$ \frac{\partial(K u \circ Tu)}{\partial u} . h = Kh \circ Tu + Ku \circ Th$$
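The directional-derivative formula is easy to validate with a finite-difference check on random data (a sketch; pure Python, no external libraries, with $n=4$ an arbitrary size):

```python
import random

random.seed(0)
n = 4
K = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
T = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
u = [random.uniform(-1, 1) for _ in range(n)]
h = [random.uniform(-1, 1) for _ in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def F(vec):
    # F(u) = (K u) o (T u), the elementwise (Hadamard) product
    return [x * y for x, y in zip(matvec(K, vec), matvec(T, vec))]

# claimed directional derivative: K h o T u + K u o T h
Kh, Ku, Th, Tu = matvec(K, h), matvec(K, u), matvec(T, h), matvec(T, u)
deriv = [kh * tu + ku * th for kh, tu, ku, th in zip(Kh, Tu, Ku, Th)]

# central finite difference of F at u in direction h
eps = 1e-6
plus = F([ui + eps * hi for ui, hi in zip(u, h)])
minus = F([ui - eps * hi for ui, hi in zip(u, h)])
fd = [(pl - mi) / (2 * eps) for pl, mi in zip(plus, minus)]

assert all(abs(x - y) < 1e-8 for x, y in zip(deriv, fd))
```

Since $F$ is quadratic in $u$, the central difference is exact up to floating-point rounding, so the agreement is very tight.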
{ "language": "en", "url": "https://math.stackexchange.com/questions/4419222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How many more odd divisors are there than even divisors? Let $f(k)$ be the number of odd divisors of $k$ and $g(k)$ be the number of even divisors. Define $F(n) = \sum_{k \le n} f(k)$ and $G(n) = \sum_{k \le n} g(k)$. Thus $F(n)$ and $G(n)$ are the total number of odd and even divisors of natural numbers up to $n$. Experimental data show that $$ \lim_{n \to \infty}\frac{F(n) - G(n)}{n} = \log 2 $$ Question: Is the above limit true? Motivation: For a different question I had written a program to find the length of the period $l_p$ of $1/p$. It is known that $l_p \mid p-1$ so we only need to search among the divisors of $p-1$ to find the smallest divisor $d$ such that $10^d - 1$ is divisible by $p$. This computation is slow but I observed that overall the program runs much faster if we scan through even divisors first and only if we do not find a $d$ then we search through odd. This is because about $2/3$ of the divisors of $p-1$ seem to be even. This led me to investigate the proportion of odd and even divisors among natural numbers. Source code:
p = 1
step = target = 10^6
odd = even = 0
while True:
    d = divisors(p)
    l = len(d)
    i = 0
    while i < l:
        e = d[i]
        if e%2 == 1:
            odd = odd + 1
        else:
            even = even + 1
        i = i + 1
    if even > odd:
        print("Found", p, odd, even)
    if p >= target:
        t = odd + even
        print(p, odd, even, odd/t.n(), even/t.n(), (odd - even)/p.n())
        target = target + step
    p = p + 1
Not an answer, not very rigorous either, but here is some progress I made: The number of times $1$ occurs as a divisor among $1,\dots,n$ is $\lfloor \frac{n}{1} \rfloor$. The number of times $2$ occurs as a divisor among $1,\dots,n$ is $\lfloor \frac{n}{2} \rfloor$, and so on. We are interested in $$\lim_{n \to \infty} \frac{\lfloor \frac{n}{1} \rfloor - \lfloor \frac{n}{2} \rfloor + \lfloor \frac{n}{3} \rfloor - \cdots + (-1)^{n+1}\lfloor \frac{n}{n} \rfloor}{n}$$ Approximate this to $$\lim_{n \to \infty} \frac{\frac{n}{1} - \frac{n}{2} +\frac{n}{3}- \cdots + (-1)^{n+1}\frac{n}{n}}{n}$$ Thus, it equals $$\lim_{n \to \infty} \frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \cdots + (-1)^{n+1}\frac{1}{n}$$ Using the expansion of $\ln (1 + 1)$, we know this is $\ln 2$. I think this approximation works because $\lfloor \frac{n}{x} \rfloor$ has only $O(\sqrt{n})$ distinct values for $1 \leq x \leq n$. These values decrease rapidly and stay equal for longer ranges, so I think the subtraction balances things out in normal division too. And as $n \to \infty$, the "leftovers" become less significant. But again this is not a rigorous way of proving.
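The identity $F(n)-G(n)=\sum_{k=1}^n(-1)^{k+1}\lfloor n/k\rfloor$ used above can be checked directly, and the ratio does approach $\log 2$ (a sketch):

```python
import math

def diff(n):
    # F(n) - G(n): each odd k contributes floor(n/k) odd divisors in
    # total, each even k contributes floor(n/k) even divisors
    return sum((1 if k % 2 else -1) * (n // k) for k in range(1, n + 1))

for n in (10 ** 5, 10 ** 6):
    assert abs(diff(n) / n - math.log(2)) < 0.01
```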
{ "language": "en", "url": "https://math.stackexchange.com/questions/4419318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Show the function between the dihedral groups is well defined Suppose that $n = dm$ where $d$ and $m$ are positive integers with $m\ge 3$. Consider the dihedral group $D_n = \langle \{\mu, \rho\}\rangle,$ where $|\mu| = 2$, $|\rho| = n$ and $\rho\mu = \mu\rho^{−1}$, and the dihedral group $D_m = \langle \{s, r\}\rangle,$ where $|s| = 2$, $|r| = m$ and $rs = sr^{−1}$. Define $\psi : D_n \to D_m$ by $ψ(\mu^a\rho^b)=s^ar^b$, for any integers $a,b$. Show that $\psi$ is well-defined. Here's the stuff I noticed: * *different values of $a,b$ can give the same group element $\mu^a\rho^b$, and I need to show that they also give the same $\psi(\mu^a\rho^b)$. *if $n$ is not a multiple of $m$, then $\psi$ not well-defined. So here is what I did so far, (tried to make a proof sketch): from integer division, there exists unique integers $i, j, s, t$ with $0 \le i < 2$ and $0 \le j < n$ and $a = i + 2s$ and $b = j + nt$. So, the group element $\mu^a\rho^b=\mu^{i+2s}\rho^{j+nt}$ uniquely determined by $i$ and $j$, since changing $s$ and $t$ won't make a difference. So, I think I need to show that $\psi(\mu^a\rho^b)$ depends only on $i$ and $j$, and not on $s$ or $t$. (this is what I'm having a hard time doing.)
I notice that the symbol you use for the quotient of $a$ when divided by $2$ is the same as the generator $s$ in $D_m$. To avoid confusion, I replace it by $u$. The main idea is to show that $\psi(\mu^a\rho^b)=\psi(\mu^i\rho^j)$. By how the function is defined, $\psi(\mu^a\rho^b)=\psi(\mu^{i+2u}\rho^{j+nt})=s^{i+2u}r^{j+nt}$. Since $|s|=2$, we have $s^{i+2u}=s^i(s^2)^u=s^i$. Next, $r^{j+nt}=r^{j+dmt}=r^j(r^m)^{dt}=r^j$. Therefore, $\psi(\mu^a\rho^b)=s^ir^j=\psi(\mu^i\rho^j)$.
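A brute-force sanity check of this computation; the values $n=12$, $m=4$ (so $d=3$) are arbitrary choices, and elements are represented by their normal forms $(a \bmod 2,\ b \bmod n)$:

```python
# Elements of D_n have normal form mu^a rho^b with 0 <= a < 2, 0 <= b < n,
# and mu^a rho^b = mu^{a'} rho^{b'} iff a = a' (mod 2) and b = b' (mod n).
n, m = 12, 4                      # arbitrary choice with m | n (here d = 3)
assert n % m == 0

def psi_normal_form(a, b):
    # normal form of the image s^a r^b in D_m
    return (a % 2, b % m)

# group all exponent pairs naming the same element of D_n,
# and check their images under psi agree in D_m
images = {}
for a in range(-4, 5):
    for b in range(-2 * n, 2 * n):
        images.setdefault((a % 2, b % n), set()).add(psi_normal_form(a, b))

assert all(len(v) == 1 for v in images.values())
print("psi is constant on each element of D_n, hence well-defined")
```

Trying the same script with $m \nmid n$ (say $n=12$, $m=5$, skipping the divisibility assert) makes the final assertion fail, matching the observation in the question.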
{ "language": "en", "url": "https://math.stackexchange.com/questions/4419628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Integrating factor $e^{\int \frac{1}{x} dx}$ in differential equation When integrating $\int \frac{1}{x} dx$, we typically write the integrated expression as $\ln|x| + C$. The absolute value of $x$ is introduced to account for the scenario where $x$ is negative, giving $\ln(-x)$. In a calculus book I am currently working with, the following differential equation is used in an example: $$xy'+y=3x^{2} +4x, \quad x\neq 0$$ In this case the integrating factor becomes $e^{\int \frac{1}{x} dx}$. The textbook example then states that $e^{\int \frac{1}{x} dx}=e^{\ln x} = x$. Question: Why is it that we do not have to take the absolute value of $x$ here? I could understand this if it were explicitly stated that $x>0$ in the given problem, but this is not stated. All we know is that $x\neq0$. So how does this account for the second scenario outlined above? If anyone can explain this to me, I would greatly appreciate it!
It doesn't matter. When you are solving the homogeneous $xy'+y=0$, you get $$y=C\exp\left(-\int\frac{dx}{x}\right)=\frac{C}{|x|}$$ But then, since the solution is not defined at $0$, you have to consider either $x>0$ or $x<0$, so $|x|=\sigma x$ with a fixed $\sigma\in\{+1,-1\}$. This constant factor is already taken into account in $C$. So, with another $C$, $$y=\frac{C}{x}$$
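To make the "it doesn't matter" concrete: integrating $(xy)'=3x^2+4x$ gives the general solution $y=x^2+2x+C/x$ (this formula is my own completion, not stated in the answer), and a numerical check shows the very same formula, without any absolute value, solves the ODE on both sides of $0$:

```python
# Check numerically that y(x) = x^2 + 2x + C/x satisfies
# x*y' + y = 3x^2 + 4x at points with x < 0 as well as x > 0.
def residual(x, C=5.0):
    y = x**2 + 2 * x + C / x
    dy = 2 * x + 2 - C / x**2        # exact derivative of y
    return x * dy + y - (3 * x**2 + 4 * x)

for x in (-3.0, -0.7, 0.5, 2.0):     # points on both sides of the singularity
    assert abs(residual(x)) < 1e-9
print("y = x^2 + 2x + C/x works for x < 0 and x > 0 alike")
```

The sign that $|x|$ would have introduced is absorbed into the arbitrary constant $C$, exactly as the answer explains.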
{ "language": "en", "url": "https://math.stackexchange.com/questions/4419784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Eigenvalue of Householder reflector in $\mathbb{C}^{m \times m}$ I know that the eigenvalues of the Householder reflector in $\mathbb{R}^{m\times m}$, $H=I-2qq^T$, are $\pm 1$. But I have no idea whether this statement is true for the Householder reflector in $\mathbb{C}^{m\times m}$, $F=I-2qq^*$, where $*$ is the Hermitian transpose. Here is my short observation: An eigenvalue $\lambda$ satisfies $Fx=\lambda x \Rightarrow x-2qq^*x=\lambda x$. If $x \in q^\perp:=\{z \mid q^*z=0\}$, then $x=\lambda x \Rightarrow \lambda=1$. On the other hand, if $x=\mu q$ for some constant $\mu \in \mathbb{C}$, then $\mu q-2\mu q=\lambda \mu q$, so $\lambda = -1$. I am curious at this point: Can we say that $\pm 1$ are the only eigenvalues of $F$? If we are in $\mathbb{R}$, it is true since $F^2=I-4qq^T+4qq^T qq^T=I \Rightarrow \det(F)=\pm 1$. But I think it is not that clear if we talk about complex $F$. Could you give me more explanation? Here are more observations: Since $F$ is Hermitian, all eigenvalues of $F$ are real. pf) If $Fx=\lambda x$ for some $x \in \mathbb{C}^m -\{0\}$, then $\lambda \Vert x \Vert^2=\lambda x^*x=x^*(\lambda x)=x^*Fx=x^*F^*x=(Fx)^*x=(\lambda x)^*x=\lambda^*x^*x=\lambda^*\Vert x \Vert^2$. Since $x \neq 0$, it must be the case that $\lambda=\lambda^*$, i.e., $\lambda \in \mathbb{R}$. Since $F^2=I$, $\det(F)=\pm 1$. pf) $\det(F^2)=\det(I) \Rightarrow (\det(F))^2=1 \Rightarrow \det(F)=\pm 1$. Now my question is more clear: Can we be sure that the eigenvalues of $F$ are $\pm 1$ if $\det(F)=\pm 1$ is given?
The fact that $F^2 = I$ tells us that all eigenvalues of $F$ (real or complex) are equal to $1$ or $-1$. Indeed, suppose that $\lambda \in \Bbb C$ is an eigenvalue. Let $x \in \Bbb C^n$ be an associated eigenvector (so that $x \neq 0$ and $Fx = \lambda x$). We have $$ x = Ix = F^2x = F(Fx) = F(\lambda x) = \lambda F(x) = \lambda^2x, $$ so that $\lambda^2 x = x$. Because $x \neq 0$, this implies that $\lambda^2 = 1$. It follows that $\lambda = 1$ or $\lambda = -1$.
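A quick numerical illustration with NumPy (the size $m=6$ and the random unit vector $q$ are arbitrary): the complex reflector is involutory, Hermitian, and has eigenvalue $-1$ exactly once (along $q$) and $+1$ on the hyperplane $q^\perp$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6                                        # arbitrary size
q = rng.standard_normal(m) + 1j * rng.standard_normal(m)
q /= np.linalg.norm(q)                       # unit vector
F = np.eye(m) - 2 * np.outer(q, q.conj())    # F = I - 2 q q*

assert np.allclose(F, F.conj().T)            # F is Hermitian
assert np.allclose(F @ F, np.eye(m))         # F^2 = I
eig = np.sort(np.linalg.eigvalsh(F))         # Hermitian => real eigenvalues
assert np.allclose(eig, [-1.0] + [1.0] * (m - 1))
print(eig)
```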
{ "language": "en", "url": "https://math.stackexchange.com/questions/4419911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Give me one non-isotrivial elliptic curve over $\mathbb{F}_2(t)$ with supersingular reduction at some place I would like the equation of a non-isotrivial elliptic curve over the rational function field $\mathbb{F}_2(t)$ with exactly one place of supersingular reduction and I would like to know which place. I tried $$ Y^2 + tY = X^3 + tX + (t+1) \, . $$ I think it is supersingular at the place $t+1$. Because $t \equiv 1 \bmod {t+1}$ and so the curve reduces at $t+1$ to a curve of equation $Y^2+Y=X^3+X$ which is known to be supersingular. However this curve is isotrivial: its $j$-invariant is zero. I would like a non-isotrivial elliptic curve.
Consider the elliptic curve $E: y^2 + t xy + y = x^3 $. Its $j$-invariant is $\frac{t^{12}}{t^3+1}$, so it is not isotrivial. Since the only supersingular elliptic curve over $\overline{\mathbb{F}_2}$ is $y^2 + y = x^3$ with $j$-invariant $0$ (cf., Exercise 5.7 in Silverman's Arithmetic of Elliptic Curves), this shows that the only place of supersingular reduction is $t=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4420089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are loops not allowed when gluing together simplices in a simplicial complex? I'm taking a graduate geometric topology class and our professor made a quick remark that we're not allowed to make stuff like these by gluing together simplexes in a simplicial complex. I didn't get time to ask him to clarify, but does it mean that loops are not allowed when gluing together simplices to form a simplicial complex? Any ideas? Also, is there any good reference for this material? Thanks!
* *A 1-dimensional simplex is a line segment with two end points. The line segment in the first figure does not have two end points. *In a simplicial complex, the intersection of two of the simplices is either empty or is a single face of both. In the second figure, the intersection between the two 1-dimensional simplices consists of the two points, whereas a single face would be a single point. *The third figure suffers from the same problems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4420281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Has the sum of 4 cubes problem been proven? Today in class, my professor was lecturing on the sum of 3 cubes and sum of 4 cubes problems. Namely, can every number be written as the sum of 3 (or 4) cubes? He discussed their origins and showed a few examples, and showed how difficult they could be to find for certain numbers (such as 33 or 42 for the sum of 3 cubes). He said we would not cover their proofs in the course because they were "beyond the scope of the course." When I went to look them up, however, it seems as though they are open problems and have not been proved. I don't think my professor would get this wrong, so I'm a bit confused. I would appreciate any clarification. If they have been proven, where can I see the proofs?
First: You can use this identity for finding numbers which are the sum of three cubes: $$(x-y)^3+(y-z)^3+(z-x)^3=3(x-y)(y-z)(z-x)$$ For example: $(3-5)^3+(5-7)^3+(7-3)^3=3(3-5)(5-7)(7-3)= 48$ Second: we solve this problem to find a number whose cube is the sum of three cubes: $$x^3+y^3+z^3=u^3$$ Let $u=-t$; we have: $$x^3+y^3+z^3+t^3=0\space\space\space\space(1)$$ This equation has infinitely many solutions (positive or negative); as you will see they form a set of particular numbers, which means not every cube can be written as the sum of three cubes. Suppose $a, b, c, d$ and $\alpha, \beta, \gamma, \delta$ are two groups of four numbers that satisfy equation (1). Choose $k$ such that the numbers $a+k\alpha, b+k\beta, c+k\gamma, d+k\delta$ also satisfy equation (1), that is: $$(a+k\alpha)^3+(b+k\beta)^3+(c+k\gamma)^3+(d+k\delta)^3=0$$ We expand each term; since the groups $(a, b, c, d)$ and $(\alpha, \beta, \gamma, \delta)$ both satisfy the equation, i.e. $a^3+b^3+c^3+d^3=0$ and $\alpha^3+\beta^3+\gamma^3+\delta^3=0$, we are left with: $$3a^2k\alpha+3ak^2\alpha^2+3b^2k\beta+3bk^2\beta^2+3c^2k\gamma+3ck^2\gamma^2+3d^2k\delta+3dk^2\delta^2=0$$ Or: $$3k[(a^2\alpha+b^2\beta +c^2\gamma+d^2\delta)+k(a\alpha^2+b\beta^2+c\gamma^2+d\delta^2)]=0$$ This relation is zero if one of its factors is zero. Equating each factor to zero gives two values for $k$; one is $k=0$ (which is not of interest because it means we add nothing to the numbers $a, b, c, d$), the second is: $$k=-\frac{a^2\alpha +b^2\beta+c^2\gamma+d^2\delta}{a\alpha^2+b\beta^2+c\gamma^2+d\delta^2}\space\space\space (2)$$ So from two groups of solutions we can produce a new group of four numbers solving equation (1): add $k$ times the numbers of the second group to the numbers of the first group, with $k$ given by relation (2). To start we need one group of solutions, say $(x, y, z, t)=(3, 4, 5, -6)$.
To find a second group let $(\alpha, \beta, \gamma, \delta)=(r, -r, s, -s)$; clearly these numbers satisfy equation (1). Putting these values in (2) we get: $$k=\frac{7r+11s}{7r^2-s^2}$$ So we have: $a+k\alpha=\frac{28r^2+11rs-3s^2}{7r^2-s^2}$ $b+k\beta=\frac{21r^2-11rs-4s^2}{7r^2-s^2}$ $c+k\gamma=\frac{35r^2+7rs+6s^2}{7r^2-s^2}$ $d+k\delta=\frac{-42r^2-7rs-5s^2}{7r^2-s^2}$ Since all four denominators are equal, we may take the numerators as the general form of the solutions: $x=28r^2+11rs-3s^2$ $y=21r^2-11rs-4s^2$ $z=35r^2+7rs+6s^2$ $t=-42r^2-7rs-5s^2$ For example, $r=s=1$ gives $(x, y, z, t)=(36, 6, 48, -54)$; dividing out the common factor $6$ yields $(6, 1, 8, -9)$, i.e.: $1^3+6^3+8^3=9^3$
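A short integer-arithmetic check of the parametric family and the concrete identities above (exact, since no floating point is involved):

```python
def family(r, s):
    # the parametric solution of x^3 + y^3 + z^3 + t^3 = 0 derived above
    x = 28*r*r + 11*r*s - 3*s*s
    y = 21*r*r - 11*r*s - 4*s*s
    z = 35*r*r + 7*r*s + 6*s*s
    t = -42*r*r - 7*r*s - 5*s*s
    return x, y, z, t

for r in range(-5, 6):
    for s in range(-5, 6):
        x, y, z, t = family(r, s)
        assert x**3 + y**3 + z**3 + t**3 == 0

assert 3**3 + 4**3 + 5**3 == 6**3          # the starting solution
assert 1**3 + 6**3 + 8**3 == 9**3          # the r = s = 1 example, scaled down
print(family(1, 1))                        # (36, 6, 48, -54)
```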
{ "language": "en", "url": "https://math.stackexchange.com/questions/4420427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating the pdf of the minimum of i.i.d. random variables with a threshold condition Let's say I have $m$ i.i.d. uniform random variables $U_1, U_2,\ldots,U_m$ that range between 0 and 1. I generate $m$ numbers, one from each random variable, and select the minimum among those numbers that exceed a threshold. My goal is to obtain the pdf of the minimum of the random variables that exceed a threshold $\gamma$. Let me rephrase: * *I generate $m$ numbers using the uniform distribution $\sim U\left(0,1\right)$. *I select the numbers which are bigger than $\gamma$. *I select the minimum of those numbers. How can I find the pdf of this number? In my opinion, the problem is not as easy as it seems.
Let $\ G=\big|\{\,i\ |\,U_i<\gamma\,\}\big|\ $ and $\ V\ $ be the value of the number you choose in step $3$. Then $\ \mathbb{P}\big(G=g\big)=$$\,{m\choose g}\gamma^g(1-\gamma)^{m-g}\ $, and given that $\ G=g\ $ there are $\ m-g\ $ of the variates $\ U_1,U_2,\dots,U_m\ $ that will be uniformly distributed over the interval $\ [\gamma,1]\ $, and the minimum of them will be greater than $\ x\in[\gamma,1]\ $ if and only if all $\ m-g\ $ of them are. Therefore $\ \mathbb{P}\big(V> x\,|\,G=g\big)=$$\,\left(\frac{1-x}{1-\gamma}\right)^{m-g}\ $ for $\ x\in[\gamma,1]\ $ and $\ g=0,1,\dots, m-1\ $. Therefore \begin{align} \mathbb{P}\big(V> x\big)&=\sum_{g=0}^{m-1}\mathbb{P}\big(V> x\,|\,G=g\big)\mathbb{P}\big(G=g\big)\\ &=\sum_{g=0}^{m-1}{m\choose g}\gamma^g(1-x)^{m-g}\\ &=(1+\gamma-x)^m-\gamma^m\ . \end{align} Hence $$ \mathbb{P}\big(\{V\le x\}\cup\{V\ \text{is undefined}\}\big)=1+\gamma^m-(1+\gamma-x)^m\ . $$ If we take the probability, $\ \gamma^m\ $, of $\ V$'s being undefined as negligible, as the OP indicates as being the case in a comment, we get $$ \mathbb{P}\big(V\le x\big)=1-(1+\gamma-x)^m $$ for the cumulative distribution function of $\ V\ $, and $$ p_V(x)=m(1+\gamma-x)^{m-1} $$ for its density function.
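A Monte Carlo sanity check of the resulting conditional CDF $\big(1-(1+\gamma-x)^m\big)/(1-\gamma^m)$ (the values of $m$, $\gamma$ and the test point are arbitrary):

```python
import random

random.seed(1)
m, gamma = 5, 0.2
x0 = 0.6                                  # arbitrary test point in [gamma, 1]
trials, defined, count_le = 100_000, 0, 0

for _ in range(trials):
    draws = [random.random() for _ in range(m)]
    above = [u for u in draws if u > gamma]
    if above:                             # V is defined
        defined += 1
        if min(above) <= x0:
            count_le += 1

empirical = count_le / defined
theory = (1 - (1 + gamma - x0) ** m) / (1 - gamma ** m)   # conditional CDF
print(empirical, "vs", theory)
assert abs(empirical - theory) < 0.01
```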
{ "language": "en", "url": "https://math.stackexchange.com/questions/4420603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Computing the variance of a series of random variables I have the following random variables: $U$ is distributed uniformly with variance $1/3$ and mean $0$; $G$ is distributed normally with variance $1$ and mean $0$. Both random variables are independent of one another. I have the following series: $$\frac{2}{T}\sum_{j=1}^{T/2}G_{2j}+\frac{2}{T}\sum_{i=1}^{T/2}U_{2i-1}$$ (where e.g. $G_{2j}$ means that the $2j^{\text{th}}$ term is the random variable $G$—and similarly for $U_{2i-1}$). I want to compute the variance of this series. Is the following correct? $$\begin{array}{l} \left(\frac{2}{T}\right)^{2} \sum_{j=1}^{T / 2} \operatorname{var}\left(G_{2 j}\right)+\left(\frac{2}{T}\right)^{2} \sum_{i=1}^{T / 2} \operatorname{var}\left(U_{2 i-1}\right)\\= \left(\frac{2}{T}\right)^{2}\left(\frac{T}{2}\right)+\left(\frac{2}{T}\right)^{2}\left(\frac{1}{3} \frac{T}{2}\right)=\frac{2}{T}+\frac{2}{T} \frac{1}{2}=\frac{2}{T}+\frac{2}{3 T}=\\ \frac{6+2}{3 T}=\frac{8}{3 T} \end{array}$$ Thank you.
Good job, your answer is correct; we do not need the properties that they are uniform or normal. We just need their variances and the fact that they are independent to perform the computation. $$\begin{array}{l} \left(\frac{2}{T}\right)^{2} \sum_{j=1}^{T / 2} \operatorname{var}\left(G_{2 j}\right)+\left(\frac{2}{T}\right)^{2} \sum_{i=1}^{T / 2} \operatorname{var}\left(U_{2 i-1}\right)\\= \left(\frac{2}{T}\right)^{2}\left(\frac{T}{2}\right)+\left(\frac{2}{T}\right)^{2}\left(\frac{1}{3} \frac{T}{2}\right)=\frac{2}{T}+\frac{2}{T} \frac{1}{\color{blue}3}=\frac{2}{T}+\frac{2}{3 T}=\\ \frac{6+2}{3 T}=\frac{8}{3 T} \end{array}$$ As $T$ increases, the variance decreases. Assumption: not only are $G$ and $U$ independent of each other, we also need every pair of the random variables in the series to be independent.
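A Monte Carlo check of the value $8/(3T)$ (the choice $T=10$ and the sample size are arbitrary):

```python
import random

random.seed(0)
T = 10                                     # arbitrary even T
trials = 100_000

def sample_series():
    g = sum(random.gauss(0.0, 1.0) for _ in range(T // 2))     # the G_{2j}
    u = sum(random.uniform(-1.0, 1.0) for _ in range(T // 2))  # Var(U) = 1/3
    return (2 / T) * (g + u)

xs = [sample_series() for _ in range(trials)]
mean = sum(xs) / trials
var = sum((x - mean) ** 2 for x in xs) / (trials - 1)
print(var, "vs", 8 / (3 * T))
assert abs(var - 8 / (3 * T)) < 0.01
```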
{ "language": "en", "url": "https://math.stackexchange.com/questions/4420763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that these two norms are equivalent Let $X$ be the Banach space $(C^1[0,1],\lVert\cdot\rVert)$, where \begin{equation*} \lVert f\rVert=|f(0)|+\max_{t\in[0,1]}|f'(t)|. \end{equation*} We denote $Y=(C^1[0,1],\lVert\cdot\rVert_I)$, where \begin{equation*} \lVert f\rVert_I=\int_{0}^1|f(t)|\,dt+\int_{0}^1|f'(t)|\,dt. \end{equation*} I want to prove that these two norms are equivalent. In order to do that, I'm trying to prove that $i:X\rightarrow Y $ is bicontinuous ($i$ and $i^{-1}$ are continuous). Since $i$ and $i^{-1}$ are linear, I only have to show that they are bounded. To prove that $i$ is bounded, I've tried this: \begin{equation*} \lVert i(f)\rVert_I=\lVert f\rVert_I=\int_0^1|f(t)|\,dt+\int_{0}^1|f'(t)|\,dt\leq\int_0^1|f(t)|\,dt+\max_{t\in[0,1]}|f'(t)| \end{equation*} But I don't know how to proceed from here. I've also gotten stuck proving that $i^{-1}$ is bounded. How do you think I should continue the proof? Is there another easier or cleverer approach? Thanks.
The norms don't seem to be equivalent. For one direction, let $f \in C^1[0,1]$ and recall that by the mean value theorem for any $x \in \langle 0,1\rangle$ there exists $\theta \in \langle 0,x\rangle$ such that $$|f(x)-f(0)| \le |x-0||f'(\theta)| = |x| |f'(\theta)| \le \max_{t \in [0,1]}|f'(t)|.$$ Therefore $$|f(x)| \le |f(x)-f(0)| + |f(0)| \le |f(0)| + \max_{t \in [0,1]}|f'(t)|.$$ Integrating this we get \begin{align*} \|f\|_I &= \int_0^1 |f(x)|\,dx + \int_0^1 |f'(x)|\,dx \\ &\le \int_0^1 (|f(0)| + \max_{t \in [0,1]}|f'(t)|)\,dx + \int_0^1 (\max_{t \in [0,1]}|f'(t)|)\,dx\\ &= |f(0)| + 2\max_{t \in [0,1]}|f'(t)|\\ &\le 2\|f\|. \end{align*} The other inequality cannot be obtained. Consider $f_n(x) = x^n$ for $n \in\Bbb{N}$. Then $$\|f_n\| = |0^n| + \max_{t\in[0,1]} |nt^{n-1}| = n \xrightarrow{n\to\infty} +\infty$$ but $$\|f_n\|_I = \int_0^1 t^n \,dt + \int_0^1 nt^{n-1} \,dt = \frac1{1+n} + 1 \xrightarrow{n\to\infty} 1.$$
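A numerical check of both norm computations for $f_n(x)=x^n$ (the grid size is arbitrary; the integrals are approximated by a hand-rolled trapezoid rule):

```python
import numpy as np

def trapz(y, t):
    # simple trapezoid rule (avoids np.trapz, which was removed in NumPy 2.0)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def norms(n, grid=200_001):
    t = np.linspace(0.0, 1.0, grid)
    f = t ** n
    fp = n * t ** (n - 1)
    X_norm = abs(f[0]) + fp.max()                    # |f(0)| + max |f'|
    I_norm = trapz(np.abs(f), t) + trapz(np.abs(fp), t)
    return X_norm, I_norm

for n in [2, 5, 20, 50]:
    X_norm, I_norm = norms(n)
    assert abs(X_norm - n) < 1e-9                    # ||f_n|| = n
    assert abs(I_norm - (1 / (n + 1) + 1)) < 1e-3    # ||f_n||_I = 1/(n+1) + 1
print("||f_n|| grows like n while ||f_n||_I stays bounded")
```

This confirms the counterexample: $\lVert f_n\rVert \to \infty$ while $\lVert f_n\rVert_I \to 1$, so no constant $C$ can give $\lVert f\rVert \le C\lVert f\rVert_I$.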
{ "language": "en", "url": "https://math.stackexchange.com/questions/4420939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the intervals of increase and decrease of $\frac{x^4 - x^3 -8}{x^2 - x - 6}$ How can I find the intervals of increase and decrease of $\frac{x^4 - x^3 -8}{x^2 - x - 6}$? I tried to find the derivative by the quotient rule to obtain the critical points, but the formula was getting complicated. I know that $D_f = \mathbb{R} \setminus \{-2,3\}$, but then what? Could anyone help me, please?
The sign of the derivative of $\frac{x^4 - x^3 -8}{x^2 - x - 6}$ is exactly the same as the sign of $-4+8x+9x^2-11x^3-2x^4+x^5$, since the quotient-rule denominator $(x^2-x-6)^2$ is positive on $D_f$. This quintic has no rational roots (the only candidates, the divisors of $4$, all fail), so its real roots, and hence the critical points, have to be located numerically.
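A short NumPy check of both claims: the quotient-rule numerator is twice this quintic (so they have the same sign), the rational-root candidates all fail, and the real roots can then be found numerically:

```python
import numpy as np

top = np.array([1, -1, 0, 0, -8])        # x^4 - x^3 - 8
bot = np.array([1, -1, -6])              # x^2 - x - 6

# quotient-rule numerator: top' * bot - top * bot'
num = np.polysub(np.polymul(np.polyder(top), bot),
                 np.polymul(top, np.polyder(bot)))
p = np.array([1, -2, -11, 9, 8, -4])     # x^5 - 2x^4 - 11x^3 + 9x^2 + 8x - 4
assert np.array_equal(num, 2 * p)        # numerator = 2p, so same sign as p

# rational-root test: p is monic, so the only candidates are divisors of 4
for c in (1, -1, 2, -2, 4, -4):
    assert np.polyval(p, c) != 0

# p turns out to have five real roots, alternating the sign of f'
real_roots = sorted(r.real for r in np.roots(p) if abs(r.imag) < 1e-9)
print("critical-point candidates:", real_roots)
```

The sign of $p$ between consecutive roots (together with the excluded points $-2$ and $3$) then gives the intervals of increase and decrease.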
{ "language": "en", "url": "https://math.stackexchange.com/questions/4421106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the Probability of Eating a Certain Meal on a Given Day? This is a problem that was given during a discussion section in the first week of my statistics class that I might be overthinking and misunderstanding. You prepare 5 meals for the week, 2 with vegetables and 3 without. Starting on Monday, a meal is consumed each day until Friday. What is the probability that you will eat a meal with vegetables on Wednesday? The answer to this problem was simply: $$\frac{(\text{# of Vegetable Meals})}{(\text{Total # of Meals})}$$ or $\frac{2}{5}$. My confusion stems from this: if a meal is consumed each day, wouldn't the number of meals that we can choose from get smaller as we near the end of the week? So by Wednesday there would be only 3 meals to pick from. In addition, why do we not have to consider the 3 different cases of the meals eaten before Wednesday? If a vegetable meal is eaten on Monday and Tuesday, then there would be zero chance of eating one on Wednesday. What about the cases where there was already 1 vegetable meal eaten before Wednesday? Would that not make the probability of eating a vegetable meal on Wednesday be: $$P(\text{Vegetable Meal on Wednesday}) = \frac{2}{5}*\frac{3}{4}*\frac{1}{3}$$ or if no vegetable meals are eaten before Wednesday at all: $$P(\text{Vegetable Meal on Wednesday})=\frac{3}{5}*\frac{2}{4}*\frac{2}{3}$$ Why would we not sum up these probabilities to get the actual probability of eating a vegetable meal on Wednesday?
Alternative perspective: You can assume, without loss of generality that the meals are all prepared, in advance, on Sunday night. Further, you can similarly assume that on Sunday night, you then stick a label on each of the $5$ meals, with the label identifying which meal will be eaten on which day. Note that the $2$ assumptions in the above paragraph do not have any effect on the probability of eating vegetables on Wednesday. That is, the probability is unaffected by whether Wednesday's meal is prepared and assigned on Sunday night, rather than Tuesday night or Wednesday morning. Then, it is easy to see that the probability of having vegetables on Wednesday must be the same as the probability of having vegetables on Monday.
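The symmetry argument can also be confirmed by exact enumeration of all $5!$ weekly schedules:

```python
from itertools import permutations

veg = {0, 1}                                    # meals 0 and 1 have vegetables
orders = list(permutations(range(5)))           # all 5! = 120 weekly schedules
wednesday_veg = sum(order[2] in veg for order in orders)
assert wednesday_veg == 48                      # 48/120 = 2/5 exactly

# and every fixed day gives the same count, as the symmetry argument predicts:
for day in range(5):
    assert sum(order[day] in veg for order in orders) == 48
print("P(vegetables on any fixed day) = 2/5")
```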
{ "language": "en", "url": "https://math.stackexchange.com/questions/4421255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Coalgebra after counit for a comonad given by adjunction Let $L\dashv R$ be an adjunction and $LR$ the associated comonad, with comultiplication $L\eta R\colon LR\to LRLR$ and counit $\varepsilon\colon LR\to\mathrm{id}$. A coalgebra for this comonad is a map $a\colon X\to LRX$ such that $$\require{AMScd} \begin{CD} X @>{a}>> LRX;\\ @V{a}VV @VV{LRa}V \\ LRX @>{L\eta R_X}>> LRLRX; \end{CD}$$ commutes and such that $$X\overset{a}{\longrightarrow} LRX\overset{\varepsilon_X}{\longrightarrow} X$$ is the identity of $X$. Maybe the question is super naive, but if we consider the morphism $a\circ\varepsilon_X \colon LRX\to LRX$, does it commutatively fit the above square? For sure it makes the upper triangle commute, but what about $L\eta R_X\circ a\circ \varepsilon_X$, is this equal to $LRa$? Could you give a proof or a counterexample?
Ok, I think it's not true. Just take the free/forgetful adjunction $L\dashv R$ between groups and sets, and the easiest possible coalgebra $a\colon G\to LRG$ mapping $g\mapsto[g]$. Then $L\eta R_G\circ a\circ \varepsilon_G$ maps $[g_1,\dots,g_n]\mapsto [[g_1\cdot\ldots\cdot g_n]]$ while $LRa$ maps $[g_1,\dots,g_n]\mapsto [[g_1],\dots,[g_n]]$.
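A toy computation makes the mismatch concrete. Here words in the free group are modelled naively as tuples of group elements (ignoring inverses and reduction, which is enough for positive words), with $G=(\mathbb{Z},+)$:

```python
# Free/forgetful adjunction between Groups and Sets; G = (Z, +).
def a(g):                       # the coalgebra a : G -> LRG, g |-> [g]
    return (g,)

def eps(word):                  # counit eps_G : LRG -> G, multiply the word out
    return sum(word)

def LRa(word):                  # LRa : [g1,...,gn] |-> [[g1],...,[gn]]
    return tuple(a(g) for g in word)

def comult(word):               # L(eta)R_G, determined by generators [g] |-> [[g]]
    return tuple((g,) for g in word)

w = (2, 3)                      # the word [2, 3] in LRG
lhs = comult(a(eps(w)))         # = [[2 + 3]] = [[5]]
rhs = LRa(w)                    # = [[2], [3]]
assert lhs != rhs               # a . eps_G does not fill the square
print(lhs, "!=", rhs)
```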
{ "language": "en", "url": "https://math.stackexchange.com/questions/4421426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Desmos Factorial function I was playing around with gamma function approximations and I was curious which approximation Desmos uses. It extends to negative values, so it can't be the Stirling formula. Does anyone know what it is?
Ok, so I think I found what it uses. Thanks to @eyeballfrog for suggesting the reflection formula. I did a bit of research and found that the Lanczos approximation, combined with the reflection formula, matches the factorial function in Desmos almost perfectly. Here is the graph. I've also made a C++ implementation of the algorithm if anyone else wants to use it here
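For reference, here is a Python sketch of the same recipe: the standard published Lanczos coefficients for $g=7$, with the reflection formula handling arguments below $1/2$. (This mirrors what my C++ code does; I can't verify it is literally what Desmos runs internally.)

```python
import math

# standard Lanczos coefficients for g = 7, n = 9
G = 7
COEF = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
        771.32342877765313, -176.61502916214059, 12.507343278686905,
        -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def gamma(z):
    if z < 0.5:
        # reflection formula: Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return math.pi / (math.sin(math.pi * z) * gamma(1 - z))
    z -= 1
    x = COEF[0]
    for i in range(1, len(COEF)):
        x += COEF[i] / (z + i)
    t = z + G + 0.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * math.exp(-t) * x

def factorial(x):
    return gamma(x + 1)

print(factorial(5))          # ~120
print(factorial(0.5))        # ~0.8862, i.e. sqrt(pi)/2
print(factorial(-1.5))       # negative, just as the Desmos graph shows
```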
{ "language": "en", "url": "https://math.stackexchange.com/questions/4421609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }