What does $\text{dim}_K L$ mean when $K,L$ are fields? + Example. What does $\text{dim}_K L$ mean when $K,L$ are fields? From the context in which I have seen it used, I believe it means the dimension of the vector space we get from viewing $L$ as a vector space over $K$, and that $\text{dim}_K L=[L:K]$, but I would like some confirmation. In the example it gives a set $M=\{a+b\sqrt 2~|~a,b \in \Bbb{Q}\}$ and it says $[M:\Bbb{Q}]=2$. Would this just be because $\Bbb{Q} \subseteq M$, so when we view $M$ as a vector space over $\Bbb{Q}$ we can simply find a basis for $M$, one such basis being $\{1,\sqrt 2\}$, so the dimension of $M$ is $2$ and hence $[M:\Bbb{Q}]=2$? Finally it says that since $\pi$ is transcendental, $\{1,\pi,\pi^2,\pi^3,...\}$ is linearly independent over $\Bbb{Q}$, so we must have $[\Bbb{R}:\Bbb{Q}]=\infty$. Why is this the case? Could anyone elaborate please?
All of what you said is correct except the last part. You wrote correctly that $\{1,\pi,\pi^2,\pi^3,...\}$ is linearly independent over $\mathbb Q$ since $\pi$ is transcendental. So $\mathbb R$ contains an infinite set of $\mathbb Q$-linearly independent vectors, therefore $\mathrm{dim}_{\mathbb Q}\mathbb R$ is infinite. But it is not true that $\{1,\pi,\pi^2,\pi^3,...\}$ is a basis of $\mathbb R$ over $\mathbb Q$. In fact, it is not true that every real number is a finite combination with rational coefficients of those elements. If it were so, $\mathbb R$ would be countable (see the comment of tomasz).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1816010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
When a loop with inverse property is commutative Question How to prove that a loop $L$ with inverse property and $x^3=e$ for all $x$ is commutative iff $(x y)^2=x^2 y^2$ for all $x,y$? Definitions: A loop is a quasigroup with identity $e$. $L$ has the inverse property if every element has a two sided inverse and $x^{-1}(xy) = y = (yx)x^{-1}$ for all $x,y \in L$. Edit: I just figured one half of the proof: Suppose $L$ is commutative. From $x^3 = e$, we have $x^2 = x^{-1}$. Then $(xy)^2 = (xy)^{-1}$. In an IP loop, $(xy)^{-1} = y^{-1} x^{-1}$. So that by using commutativity, $(xy)^2 = y^{-1} x^{-1} = x^{-1} y^{-1} = x^2 y^2$. Edit2: The other side is easy now: Suppose $(xy)^2 = x^2y^2$. Also, we have $(xy)^2 = y^{-1} x^{-1}.$ Hence $x^{-1} y^{-1} = y^{-1}x^{-1}$. Because $L = L^{-1}$, we get that $L$ is commutative.
Note that $x^3 = e$ means that $x^2 = x^{-1}$. Similarly, this means that $(xy)^2 = (xy)^{-1}, y^2 = y^{-1}$. We now write our condition as $$ (xy)^{-1} = x^{-1}y^{-1} $$ This implies that $(x^{-1}y^{-1})^{-1} = xy$. We now apply the inverse property repeatedly: $$ x^{-1} = (x^{-1}y^{-1})y\\ (x^{-1}y^{-1})^{-1}x^{-1} = (x^{-1}y^{-1})^{-1}((x^{-1}y^{-1})y) = y\\ (x^{-1}y^{-1})^{-1} = ((x^{-1}y^{-1})^{-1}x^{-1})x = yx $$ Combining we get $xy = (x^{-1}y^{-1})^{-1} = yx$, as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1816162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convexity and Proof of one sided Derivative Working on some real analysis work, I've been able to show that for a function $f$, which is convex on $[a,b]$, for $a\leq x_1< x_2< x_3\leq b$: $$\frac{f(x_2)-f(x_1)}{x_2-x_1} \leq \frac{f(x_3)-f(x_2)}{x_3-x_2}$$ and that for $h>0$ and some $x_0 \in [a,b]$: \begin{equation} \frac{f(x_0+h)-f(x_0)}{h} \end{equation} is non-decreasing in $h$. I'm stuck needing to show that: 1) $\exists \; c_0\in\mathbb{R}$ such that $\frac{f(x_0+h)-f(x_0)}{h} > c_0$ for all $h>0$. Intuition tells me $c_0 = f_+'(x_0)$, but I'm having issues with the proof. 2) $\lim\limits_{h\to 0^+} \frac{f(x_0+h)-f(x_0)}{h}= f_+'(x_0)$ Thanks for any help
Consider some $k > 0$ and any $h > 0$ such that $$a < x_0-k < x_0 < x_0 + h < b.$$ Using your inequality for a convex function, we have $$\frac{f(x_0) - f(x_0-k)}{k}= \frac{f(x_0) - f(x_0-k)}{x_0 - (x_0-k)} \leqslant \frac{f(x_0+h) - f(x_0)}{x_0+h - x_0} = \frac{f(x_0+h) - f(x_0)}{h}. $$ Thus, $[f(x_0 + h) - f(x_0)]/h$ is bounded below by $[f(x_0) - f(x_0 -k)]/k.$ Hence, there exists a greatest lower bound $c_0 = \inf\{[f(x_0 + h) - f(x_0)]/h: x_0 < x_0 + h \leqslant b\}.$ As you have already shown that $[f(x_0 + h) - f(x_0)]/h$ is non-decreasing in $h$, the difference quotient decreases to its infimum as $h \to 0^+$, which gives the existence of the limit $$\lim_{h \to 0+} \frac{f(x_0+h) - f(x_0)}{h} = c_0 := f'_+(x_0).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1816383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the required group theory knowledge needed to understand Verhoeff's algorithm? The Wikipedia page tells me I need to understand permutation groups and dihedral groups. Can someone clearly outline what exactly the prerequisites for understanding this are, and how much time I'll take to understand it? I know some basic group theory. I don't know what dihedral groups are, and I haven't studied information theory.
Bit of a strange question. As (finite) groups go, I'd rate the family of dihedral groups as the second easiest to get a handle on, after cyclic groups. Only 2 generators - basically a rotation and a reflection of an $n$-gon. In any book on group theory you are still in an early chapter when you reach this topic :-) For the algorithm itself, you need very little "group theory" per se. For example one could code the algorithm in java, perl, pascal, whatever without knowing any theory at all (not that I recommend it, but one could).
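To make the group-theory content concrete, here is a minimal sketch of the scheme in Python (the function and variable names are mine, but the tables are the standard ones: d is the multiplication table of the dihedral group $D_5$, p is Verhoeff's permutation applied repeatedly, and inv lists group inverses):

```python
# Multiplication in D5: digits 0-4 encode rotations r^0..r^4,
# digits 5-9 encode reflections s*r^0..s*r^4.
def d(j, k):
    if j < 5 and k < 5:  return (j + k) % 5        # rotation * rotation
    if j < 5:            return (j + k) % 5 + 5    # rotation * reflection
    if k < 5:            return (j - k) % 5 + 5    # reflection * rotation
    return (j - k) % 5                             # reflection * reflection

p1 = [1, 5, 7, 6, 2, 8, 3, 0, 9, 4]   # Verhoeff's permutation
p = [list(range(10))]                 # p[i] = p1 composed with itself i times
for _ in range(7):
    p.append([p1[j] for j in p[-1]])

inv = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]  # inverses in D5 under this encoding

def check_digit(number: str) -> int:
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = d(c, p[(i + 1) % 8][int(ch)])
    return inv[c]

def is_valid(number: str) -> bool:    # number includes its check digit
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = d(c, p[i % 8][int(ch)])
    return c == 0

assert check_digit("236") == 3 and is_valid("2363")
```

As the answer says, none of this requires knowing why the dihedral group makes every single-digit error and every adjacent transposition detectable; that is where the actual group theory lives.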
{ "language": "en", "url": "https://math.stackexchange.com/questions/1816726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that as $PP'$ varies,the circle generates the surface $(x^2+y^2+z^2)(\frac{x^2}{a^2}+\frac{y^2}{b^2})=x^2+y^2.$ $POP'$ is a variable diameter of the ellipse $z=0,\frac{x^2}{a^2}+\frac{y^2}{b^2}=1,$ and a circle is described in the plane $PP'ZZ'$ on $PP'$ as diameter.Prove that as $PP'$ varies,the circle generates the surface $(x^2+y^2+z^2)(\frac{x^2}{a^2}+\frac{y^2}{b^2})=x^2+y^2.$ My Attempt Let the circle be described on the plane $PP'ZZ'$.Let $P$ be $(a\cos\theta,b\sin\theta)$ and $P'$ be $(-a\cos\theta,-b\sin\theta)$. Then the equation of the circle be $(x-a\cos\theta)(x+a\cos\theta)+(y+b\sin\theta)(y-b\sin\theta)=0,z=0$ $x^2-a^2\cos^2\theta+y^2-b^2\sin^2\theta=0,z=0$ I am stuck here.Please help.
The circle has a diameter with endpoints $(-X,-Y)$ and $(X,Y)$, where $\dfrac{X^2}{a^2}+\dfrac{Y^2}{b^2}=1$, and lies in the plane through $PP'$ and the $z$-axis (perpendicular to the plane $z=0$). Let $(x,y,z)$ be a point on this circle. Then we have: $x^2+y^2+z^2=X^2+Y^2$ and $y/x=Y/X$. So let $Y/y=X/x=c$. We get: $c^2\left (\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2} \right)=1$ as well as: $x^2+y^2+z^2=c^2(x^2+y^2)$. Multiplying these two equations and cancelling $c^2$ gives the desired equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1816806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all functions $f:\mathbb R \to \mathbb R$ satisfying $xf(y)-yf(x)=f\left( \frac yx\right)$ Find all functions $f:\mathbb R \to \mathbb R$ that satisfy the following equation: $$xf(y)-yf(x)=f\left( \frac yx\right).$$ My work so far:

* If $x=1$ then $f(1)=0$
* If $y=1$ then $f\left(\frac1x\right)=-f(x)$
* If $y=\frac1x$ then $f\left(x^2\right)=\left(x+\frac1x\right)f(x)$
$$xf(y)-yf(x)=f\left( \frac yx\right) \\ \iff xf(y)-f(\frac{y}{x})-yf(x) = 0 \tag{1}$$ Substitute $x \to y/x$, $y \to y$: $$\frac{y}{x}f(y) - yf( \frac{y}{x}) = f(x) \\ \iff \frac{y}{x}f(y) - yf( \frac{y}{x})-f(x) = 0 \tag{2}$$ Consider $(2)-(1)\times y$: $$(\frac{y}{x}-xy)f(y)+(y^2-1)f(x)=0 \tag{3}$$ Let $y = 0$. As M. Vinay said, $f(0) = 0$, so $0\times f(0) + (0-1)f(x)=0 \implies f(x) = 0$ for all $x \in \mathbb{R}$ EDIT: My chosen value is invalid (i.e. everything I did after formulating (3) is wrong), so I want to set it right. Set $y = 0.5$ in (3): $$(\frac{1}{2x}-\frac{x}{2}) f(1/2) - 3/4 f(x) = 0 \\ \iff f(x) =(\frac{2}{3x}-\frac{2x}{3}) f(1/2) $$ More precisely, $ f(x) = \frac{2}{3} f(1/2) \times(\frac{1}{x}-x)$. Let $\frac{2}{3} f(1/2) = c$, so $ f(x) = c(\frac{1}{x}-x)$. Since $c$ is an arbitrary constant, this includes my special solution $f(x) = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1816896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Given that $\tan 2x+\tan x=0$, show that $\tan x=0$ Given that $\tan 2x+\tan x=0$, show that $\tan x=0$ Using the Trigonometric Addition Formulae, \begin{align} \tan 2x & = \frac{2\tan x}{1-\tan ^2 x} \\ \Rightarrow \frac{2\tan x}{1-\tan ^2 x}+\tan x & = 0 \\ \ 2\tan x+\tan x(1-\tan ^2 x) & = 0 \\ 2+1-\tan ^2 x & = 0 \\ \tan ^2 x & = 3 \end{align} This is as far as I can get, and when I look at the Mark Scheme no other Trigonometric Identities have been used. Thanks
By the double angle formula we get $$\tan(2x)+\tan(x)=\frac{2\tan(x)}{1-\tan^2(x)}+\tan(x)=\frac{3-\tan^2(x)}{1-\tan^2(x)}\tan(x),$$ so that $\tan(x)=0$ is certainly a solution. But $\tan(x)=\pm\sqrt3$ gives solutions as well, so the initial claim as stated is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Bijectivity and Lipschitz continuity of a function on a Banach space I don't really know how to solve the following exercise, I need a little help: a) Let $(X,\|\cdot \|)$ be a Banach space and $F\colon X \to X$ Lipschitz-continuous (i.e. $|F(x) - F(y)| \le L|x-y|$) with Lipschitz constant $L < 1$. Show that $G\colon X \to X$ with $G(x) = x + F(x)$ is bijective and its inverse function $G^{-1}\colon X \to X$ is Lipschitz-continuous with Lipschitz constant $\frac 1{1-L}$. b) Use a) to show that there is a unique function $f \in C[0,1]$ for which the equation $$f(t) + \int_0^1e^{\tau+t-3}f(\tau)d\tau = 1 \quad \forall t \in [0,1]$$ holds. Determine $f$. Use the fact that $(X,\|\cdot \|_{\infty})$ is a Banach space. In a) I think using the Banach fixed point theorem is the way to go in order to prove F is bijective. However, I don't know how to start or how to prove the Lipschitz continuity of $G^{-1}$. For b) I don't know how to start, I'm thinking of the intermediate value theorem?
* The reverse triangle inequality yields $|G(x) -G(y)|\ge (1-L)|x-y|$. This gives the Lipschitz continuity of the inverse map, as soon as $G$ is shown to be surjective. For surjectivity, you are right to use the Banach fixed point theorem: the equation $x+F(x)=y$ has a solution for $x$ because the map $x\mapsto y-F(x)$ is a contraction (with constant $L<1$) and therefore has a fixed point.
* The supremum norm of $\int_0^1e^{\tau+t-3}f(\tau)d\tau $ is at most $$\|f\|_\infty \int_0^1e^{\tau+1-3} d\tau = \|f\|_\infty (e^{-1}-e^{-2}) = L\|f\|_\infty$$ where $L = e^{-1}-e^{-2} <1$. This implies the existence and uniqueness of $f$, by virtue of part a). To determine $f$, rewrite the equation as $$f(t) = 1 - e^t \int_0^1e^{\tau-3}f(\tau)d\tau $$ which says that $f$ is of the form $f(t)=1-Ce^t$. Plug this in the equation to find $C$.
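If it helps, here is a throwaway symbolic check of that last step (a sketch in Python with sympy; the ansatz $f(t)=1-Ce^t$ and the name $C$ are taken from the answer above):

```python
import sympy as sp

t, tau, C = sp.symbols('t tau C', real=True)
f = 1 - C*sp.exp(t)                              # ansatz from the answer
lhs = f + sp.integrate(sp.exp(tau + t - 3)*f.subs(t, tau), (tau, 0, 1))
Csol = sp.solve(sp.Eq(lhs, 1), C)[0]             # t drops out of the solution
print(sp.simplify(Csol))    # 2*(E - 1)/(2*E**3 + E**2 - 1), about 0.0738
```

so the unique solution should be $f(t)=1-\frac{2(e-1)}{2e^3+e^2-1}\,e^t$.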
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $\sqrt{n}(Y_{n}-p) \rightarrow N(0,p(1-p))$ as $n \to \infty$ Can anyone show me the correct working to find the variance of $Y_n$? My variance seems to be $\frac{p(1-p)}{n}$
The variance is $\frac{p(1-p)}{n}$: \begin{align*} Var(Y_n) &= E\left((Y_n-E(Y_n))^2\right)\\ &=\frac{1}{n^2}E\left(\left[\sum_{i=1}^n\big(X_i-E(X_i)\big)\right]^2\right)\\ &= \frac{1}{n^2}\sum_{i=1}^n E\left((X_i-E(X_i))^2\right) \\ &= \frac{1}{n}\left[p(1-p)^2+p^2(1-p) \right]\\ &=\frac{p(1-p)}{n}. \end{align*} Then, \begin{align*} \frac{\sqrt{n}(Y_n-p)}{\sqrt{p(1-p)}}\rightarrow N(0, 1) \end{align*} in distribution, and, consequently, \begin{align*} \sqrt{n}(Y_n-p)\rightarrow N(0, p(1-p)) \end{align*} in distribution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is every number of the shape $ababab$ divisible by $13$? Why does it seem like every number $ababab$, where $a$ and $b$ are integers in $[0, 9]$, is divisible by $13$? Ex: $747474$, $101010$, $777777$, $989898$, etc...
Note that $$ [ababab] = a\times 101010 + b \times 10101 = 13 (7770a + 777b) $$ Also noteworthy: $$ 10101 = 1 + 10^2 + 10^4 \equiv \\ 1 + 3^2 + 3^4 = 1 + 9 + 9^2 \equiv\\ 1 +(-4) + (-4)^2 = 1 - 4 + 16 = 13 \equiv 0 $$ where $\equiv$ indicates equivalence modulo $13$.
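A brute-force confirmation over all $100$ digit pairs (a disposable Python check, just to back up the factorization $[ababab]=10101\,(10a+b)$, where $10101 = 3\cdot 7\cdot 13\cdot 37$):

```python
# ababab = a*101010 + b*10101 = 10101*(10*a + b), and 13 divides 10101
assert all((101010*a + 10101*b) % 13 == 0
           for a in range(10) for b in range(10))
```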
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 9, "answer_id": 4 }
about series and integral - solve $\int_1^2 (x^2-x)dx$ by limit of definite integral So I have to compute $$\int_1^2 (x^2-x)dx$$ by using Riemann sums. I got stuck at the point where I reached $$\lim_{n\to \infty} \frac{1}{n^3} \sum_{i=0}^n (i^2+in).$$ Can I split the sum? If I can, then I know how to continue. So the questions are:

* can I split the sum? And why? As far as I know it is OK to split finite sums, but about infinite ones I'm not sure.
* if it's not legal to split the sum, I would appreciate some hint pointing in the correct direction.

Thanks!
$$I=\int_{1}^{2}(x^2-x)\,dx = \int_{0}^{1}(z^2+z)\,dz\tag{1} $$ hence: $$ I = \lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{k^2+kn}{n^2}=\lim_{n\to+\infty}\frac{1}{n^3}\sum_{k=1}^{n}\left[2\binom{k}{2}+(n+1)\binom{k}{1}\right]\tag{2} $$ and: $$ I = \lim_{n\to +\infty}\frac{1}{n^3}\left[2\binom{n+1}{3}+(n+1)\binom{n+1}{2}\right] = \frac{1}{3}+\frac{1}{2}=\color{red}{\frac{5}{6}}.\tag{3}$$
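Note that for each fixed $n$ the sum is finite, so splitting it is perfectly legal; the splitting happens before the limit is taken. A quick numerical cross-check of $(3)$ (a throwaway Python sketch):

```python
# Partial Riemann sums (1/n^3) * sum_{k=1}^n (k^2 + k*n) should approach 5/6.
for n in (10, 100, 1000):
    print(n, sum(k*k + k*n for k in range(1, n + 1)) / n**3)
# -> 0.935, 0.84335, 0.8343335..., tending to 5/6 = 0.8333...
```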
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
pointwise and uniform convergence of a series of functions $f_n(x)$ I have $\sum \frac{e^{nx}}{2+e^{nx}} \arctan(n^{|x|})$. I have pointwise convergence in $(-\infty,0)$. But what about uniform convergence? If I consider $\sum \sup_{(-\infty,0)} |f_n(x)|\le \sum \frac{\pi}{2} \frac{e^{nx}}{2+e^{nx}}=\frac {\pi}{4}$, the series is not convergent.
HINT: For any fixed $N$ with $x=-1/N$, we have that $$\begin{align} \left|\sum_{n=0}^\infty \frac{e^{nx}}{2+e^{nx}}\arctan\left(n^{|x|}\right)-\sum_{n=0}^{N} \frac{e^{nx}}{2+e^{nx}}\arctan\left(n^{|x|}\right)\right|&=\left|\sum_{n=N+1}^\infty \frac{e^{nx}}{2+e^{nx}}\arctan\left(n^{|x|}\right)\right|\\\\ &=\sum_{n=N+1}^\infty \frac{e^{nx}}{2+e^{nx}}\arctan\left(n^{|x|}\right)\\\\ &=\sum_{n=N+1}^\infty \frac{e^{-n/N}}{2+e^{-n/N}}\arctan\left(n^{1/N}\right)\\\\ &\ge \frac{\pi}{12}\sum_{n=N+1}^\infty e^{-n/N}\\\\ &\ge \frac{\pi }{12e(e-1)} \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding convergence zone/range for $\sum_{n=1}^\infty \frac{x^{n^2}}{n(n+1)}$ $$\sum_{n=1}^\infty \frac{x^{n^2}}{n(n+1)}$$ I used the ratio test and I end up with: $$|x|*\frac{n}{n+2}$$ What steps do I need to take to continue? Looking for hints or steps, not a full solution.
Did you notice the powers of $\;x\;$ are not consecutive? They are $\;1,4,9,16,...\;$, so the actual coefficients are $$a_n:=\begin{cases}\cfrac1{\sqrt{n}\left(\sqrt{n}+1\right)},\,&n\;\text{is a square}\\{}\\0,\,&\text{otherwise}\end{cases}$$ and thus $$\lim\sup_{n\to\infty}\sqrt[n]{|a_n|}=\lim_{n\to\infty}\frac1{\sqrt[n]{\sqrt{n}\left(\sqrt{n}+1\right)}}=1$$ and thus the convergence radius is $\;1\;$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is a Hurwitz matrix minus a positive-definite matrix still a Hurwitz matrix? In a certain multi-agent systems, the $i$th system can be described by the form \begin{equation} \begin{array}{cc} \left\{ {\begin{array}{l} {{\dot x}_i}=A_i{x_i} +B_iu_i\\ y_i = C_ix_i \end{array}} \right.,& i=1,2,\cdots,n \end{array} \end{equation} The information of two neighbours is \begin{equation} z_i=-\sum\limits_{j\in n_i}l_{ij}y_j \end{equation} The output feedback controller is \begin{equation} \left\{ \begin{array}{l} \dot{\eta}_i=M_i\eta_i+N_iz_i\\ u_i=O_i\eta_i \end{array} \right. \end{equation} The $i$th agent's augmented system is \begin{equation} \left\{\begin{array}{l} \left[\begin{array}{c} \dot{x}_i\\ \dot{\eta}_i \end{array}\right]=\left[\begin{array}{cc} A_i & B_iO_i\\ 0 & M_i \end{array}\right]\left[\begin{array}{c} x_i\\ \eta_i \end{array}\right]+\left[\begin{array}{c} 0\\ N_i \end{array}\right]z_i\\ y_i=\left[\begin{array}{cc} C_i & 0 \end{array}\right]\left[\begin{array}{c} x_i\\ \eta_i \end{array}\right] \end{array}\right. \end{equation} Suppose that \begin{equation} \bar{x}_i=\left[\begin{array}{c} x_i\\ \eta_i \end{array}\right], \bar{A}_i=\left[\begin{array}{cc} A_i & B_iO_i\\ 0 & M_i \end{array}\right], \bar{B}_i=\left[\begin{array}{c} 0\\N_i \end{array}\right], \bar{C}_i=\left[\begin{array}{cc} C_i & 0 \end{array}\right] \end{equation} Therefore, the whole system of $n$ agents is \begin{equation} \begin{array}{rl} \dot{\tilde{x}}&=\left[\left[\begin{array}{ccc} \bar{A}_1 & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & \bar{A}_n \end{array}\right]-\left[\begin{array}{ccc} l_{11}\bar{B}_1\bar{C}_1 & \cdots & l_{1n}\bar{B}_1\bar{C_n}\\ \vdots & \ddots & \vdots\\ l_{n1}\bar{B}_n\bar{C}_1 & \cdots & l_{nn}\bar{B}_n\bar{C}_n \end{array}\right]\right]\tilde{x}\\ & \\ &=\tilde{A}\tilde{x} \end{array} \end{equation} where $A_i$ and $M_i$ are Hurwitz, and $\bar{B}_i=\bar{C}_i^T$. So, $\left[\begin{array}{ccc} \bar{A}_1 & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & \bar{A}_n \end{array}\right]$ is Hurwitz, and $\left[\begin{array}{ccc} l_{11}\bar{B}_1\bar{C}_1 & \cdots & l_{1n}\bar{B}_1\bar{C_n}\\ \vdots & \ddots & \vdots\\ l_{n1}\bar{B}_n\bar{C}_1 & \cdots & l_{nn}\bar{B}_n\bar{C}_n \end{array}\right]$ is positive-definite. Is $\tilde{A}$ a Hurwitz matrix? Thank you!
If you are asking whether the sum of two positive definite matrices is still positive definite, then the answer is yes: when $xA_1x^T>0$ and $xA_2x^T>0$, one has $x(A_1+A_2)x^T = xA_1x^T + xA_2x^T > 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Generalization of inner product I was wondering if there was a widely accepted generalization of inner product spaces where the inner product look something like $\langle\bullet , \bullet\rangle:V\times V \to \mathbb{F}$, where $\mathbb{F}$ does not have to equal to $\mathbb{R}$ or $\mathbb{C}$. Could you define a meaningful inner product space over $\mathbb{Q}$ or a finite field?
Yes, you can do this. Choose a linearly independent basis for an as yet unconstructed vector space $V$. Define $V$ as the set of linear combinations of basis elements with coefficients drawn from any field $\mathbb{F}$ you like. (You'll need to make sure your basis elements can be multiplied by any of the field's elements.) Now define your inner product by combining "the basis elements are orthonormal" with "this product is sesquilinear". (For self-conjugate fields such as $\mathbb{R}$, sesquilinearity is equivalent to bilinearity; see also @Bleuderk's answer.) This uniquely defines an inner product that does what you've asked. To consider an example, there's a well-known inner product with respect to which Hermite polynomials are orthonormal. The linear combinations with strictly rational coefficients comprise a vector space of functions over $\mathbb{F}=\mathbb{Q}$. The inner product will be over $\mathbb{Q}$ too, since $a_i,\,b_i\in\mathbb{F}\to \sum_ia_ib_i\in\mathbb{F}$. (We need a bit of care with inner products that could be infinite due to convergence conditions failing, but that's a familiar complication for infinite-dimensional vector spaces over $\mathbb{R}$ or $\mathbb{C}$ too.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Three-Dimensional Random Walk A particle starts at an origin $O$ in three-space. Thinking of point $O$ as the center of a cube 2 units on a side. One move in this walk sends the particle with equal likelihood to one of the eight corners of the cube. That is to say, at every walk, the particle has a 50/50 chance of moving one unit up or down, one unit left or right, and one unit front or back. If this walk continues infinitely, what is the probability that the particle returns to $O$? So far, I thought of the following: $$P(\text{particle is at $O$ after $2n$ walks})=\left(\binom{2n}{n}\left(\dfrac{1}{2}\right)^{2n}\right)^3$$ I then applied Stirling's approximation: $n!\sim\sqrt{2\pi}n^{n+\frac{1}{2}}e^{-n}$ to get $$P(\text{particle is at $O$ after $2n$ walks})\approx\left(\dfrac{1}{\pi n}\right)^{\frac{3}{2}}$$ I then tried to sum this: $$P(\text{particle returns to $O$})=\sum_{n=1}^\infty \left(\dfrac{1}{\pi n}\right)^{\frac{3}{2}}\approx 0.47 $$ Unfortunately, this answer is incorrect and I suspect that there is a problem with the values of my summation.
One mistake is that you have approximated the expected number of times of return to the origin, rather than the probability of at least one return. They are related by $$ P=\frac{<n>}{<n>+1} $$ so your approximation should be giving a probability of about $\frac13$. The real value of the sum you have done is $$ \frac{\pi}{\Gamma\left(\frac34\right)^4}-1 \approx 0.3932$$ giving $$ P = 1 - \frac{\Gamma\left(\frac34\right)^4}{\pi} \approx 0.2822 $$ This would not be the correct answer for a random walk on the edges of a cubic lattice, since in that case the three distinct dimensions will not necessarily have taken the same number of steps at the time of return to origin. In fact, for that random walk, $$ <n>_{\mbox{cubic}} +1 = \frac{\sqrt{6}}{32\pi^3} \Gamma\left(\frac1{24}\right)\Gamma\left(\frac5{24}\right)\Gamma\left(\frac7{24}\right)\Gamma\left(\frac{11}{24}\right) \approx 1.5164 $$ and $$ P_{\mbox{cubic}} = 1-\frac1{<n>_{\mbox{cubic}}+1}\approx 0.3405$$ Thus the return probability for the walk you presented, which is an octahedral random walk, is somewhat easier to compute in your head than is the probability for a cubic random walk -- you just need a really large and well-oiled head!
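For anyone who wants to see the octahedral numbers come out, here is a throwaway Python check (the truncation point 200000 is arbitrary; the tail of the series decays like $n^{-1/2}$, so convergence is slow):

```python
from math import gamma, pi

# <n> = sum_{n>=1} (C(2n,n)/4^n)^3, central binomial ratio built iteratively
s, c = 0.0, 1.0
for n in range(1, 200000):
    c *= (2*n - 1) / (2*n)        # c now equals C(2n,n)/4^n
    s += c**3
print(s)                          # ~0.393 (slowly approaching 0.39320...)
print(pi / gamma(0.75)**4 - 1)    # closed form for <n>: 0.393201...
print(1 - 1/(1 + s))              # return probability, ~0.282
```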
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove formally that $P(|X+Y|>\varepsilon)\leq P(|X|>\frac{\varepsilon}{2})+ P(|Y|>\frac{\varepsilon}{2})$ I'm stuck with a simple probability inequality. For arbitrary random variables $$P(|X+Y|>\varepsilon)\leq P(|X|>\frac{\varepsilon}{2})+ P(|Y|>\frac{\varepsilon}{2}) $$ Using $P(A)+P(B) \geq P(A \cap B) = P(X,Y > \frac{\varepsilon}{2})$ I get an underestimate, as $\{|X+Y|>\varepsilon\}$ is satisfied not only by such values of $X,Y$, so it should be bigger. The probability $P(A)+P(B) \geq P(A \cup B)=P(X \text{ or }Y > \frac{\varepsilon}{2})$ looks like what's needed, as it is the bigger event and can contain values that add up to a value $<\varepsilon$. But how do I pass from heuristics to a formal proof?
Hint: verify $$ \{ |X+Y|\geq \varepsilon\} \subset \{ |X|\geq \varepsilon/2 \}\cup \{ |Y|\geq \varepsilon/2 \}$$ and use finite subadditivity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1817929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I show that the $\lim_{R\to\infty}\int_{C} \frac{e^{iz}dz}{(1+z^2)^2}$ is zero, where $C$ is a semicircle? Where $C$ is the positive semicircle of radius $R$, that is to say, it is the semicircle covering the first and second quadrant, $0 \leq \theta < \pi$ I know I have to use the $ML$ inequality, and I understand most of the steps, but I am having trouble with one particular part of the problem. I know that the points on this semicircle can be parametrized by $Re^{i\theta}$. Now, in my notes it says that the absolute value of $e^{iz}$, when we plug $Re^{i\theta}$ in for $z$, is $e^{-R\sin(\theta)}$. When I plug it in I get something different. If I express $Re^{i\theta}$ as $R(\cos(\theta) + i\sin(\theta))$ I still get something different when I plug it into $e^{iz}$. I do not understand what is going on here.
Let $z = R(\cos\theta+i\sin\theta)$. Then $$e^{iz} = e^{iR(\cos\theta+i\sin\theta)} = e^{iR\cos\theta+(-R)\sin\theta}=e^{-R\sin\theta}e^{iR\cos\theta}.$$ Hence $$|e^{iz}| = |e^{iR\cos\theta+(-R)\sin\theta}|=|e^{-R\sin\theta}e^{iR\cos\theta}| = e^{-R\sin\theta}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
how to solve double summation $1/k^2$ How should I solve $$\sum^{\infty}_{n=1}\sum^n_{k=1} \frac{1}{k^2} $$ I know that $\sum^\infty_{k=1} \frac{1}{k^2} = \frac{\pi^2}{6}$
This is probably off-topic. In the other answers you have been shown that the infinite summation does not converge and why it does not. Let us look at how fast it diverges. $$\sum^n_{k=1} \frac{1}{k^2}=H_n^{(2)}$$ where the generalized harmonic numbers appear. So, using their properties, $$S_p=\sum^{p}_{n=1}\sum^n_{k=1} \frac{1}{k^2}=\sum^{p}_{n=1}H_n^{(2)}=(p+1) H_{p+1}^{(2)}-H_{p+1}$$ Considering their asymptotics, we then have $$S_p=\frac{\pi ^2 p}{6}+\left(-\log(p)+\frac{\pi ^2}{6}-\gamma -1\right)-\frac{1}{p}+\frac{5}{12 p^2}+O\left(\frac{1}{p^3}\right)$$ where $\gamma$ is Euler's constant. Just to give you an idea, the exact summation for $n=10$ would lead to $\approx 14.118477$ while the asymptotic formula given above would lead to $\approx 14.118641$.
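A quick numerical verification of both the closed form and the asymptotics (a disposable Python sketch; the value of $\gamma$ is hard-coded):

```python
from math import log, pi

EULER_GAMMA = 0.5772156649015329
p = 10
H1 = [0.0]; H2 = [0.0]                       # H1[n] = H_n, H2[n] = H_n^(2)
for n in range(1, p + 2):
    H1.append(H1[-1] + 1/n)
    H2.append(H2[-1] + 1/n**2)

exact = sum(H2[n] for n in range(1, p + 1))
closed = (p + 1)*H2[p + 1] - H1[p + 1]
asym = pi*pi*p/6 + (-log(p) + pi*pi/6 - EULER_GAMMA - 1) - 1/p + 5/(12*p*p)
print(exact, closed, asym)   # 14.118477... twice, then 14.118641...
```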
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
2nd order Runge-Kutta method for linear ODE Could someone please help me with the next step of this 2nd order Runge-Kutta method? I am solving the following initial value problem (IVP) $$x' = - \frac12 x(t), \qquad x(0)=2$$ I wish to use the second order Runge-Kutta method $$x(t+h)=x(t)+1/2(F_1+F_2),$$ where \begin{align*} F_1&=hf(t,x) \\ F_2&=hf(t+h,x+F_1). \end{align*} Let us use a spacing of $h=1$. My working goes like this: $$F_1 = -\frac{x(0)}{2}=-1.$$ Then \begin{align*}F_2&=1\times f(0+1,x(0)+F_1) \\ &=-1/2 \times x(1).\end{align*} But I have no idea what $x(1)$ is.
Integrating the ODE $$\dot x = -\frac{1}{2} x$$ we get $$x (t) = x_0 \cdot \exp\left(-\frac{t}{2}\right)$$ Hence, $$\begin{array}{rl} x (t+h) &= x_0 \cdot \displaystyle\exp\left(-\frac{t+h}{2}\right)\\\\ &= x_0 \cdot \displaystyle\exp\left(-\frac{t}{2}\right) \cdot \exp\left(-\frac{h}{2}\right)\\\\ &= \displaystyle\exp\left(-\frac{h}{2}\right) \cdot x (t)\\\\ &= \displaystyle\left(1 - \frac{h}{2} + \frac{h^2}{8} - \frac{h^3}{48} + \frac{h^4}{384} - \cdots\right) x(t)\end{array}$$ Using the 2nd order Runge-Kutta, we truncate $$x (t+h) \approx \displaystyle\left(1 - \frac{h}{2} + \frac{h^2}{8}\right) x(t)$$
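For comparison, the scheme from the question can be run directly; the point the question stumbled on is that $F_2$ is evaluated at the already-computed $x(0)+F_1$, not at the unknown $x(1)$. A minimal Python sketch:

```python
from math import exp

def f(t, x):                        # right-hand side of x' = -x/2
    return -0.5 * x

def rk2_step(t, x, h):
    F1 = h * f(t, x)
    F2 = h * f(t + h, x + F1)       # uses x + F1, not the unknown x(t+h)
    return x + 0.5 * (F1 + F2)

print(rk2_step(0.0, 2.0, 1.0))      # 2*(1 - 1/2 + 1/8) = 1.25
print(2.0 * exp(-0.5))              # exact x(1) = 1.2130...
```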
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluate $\int_0^{\pi/2}(\sin x)^n e^{-(2+\cos x)\log k}dx$ for fixed integers $n,k\geq 1$ My question is the following Question. Can you compute some of the following $$c_{n,k}=\int_0^{\pi/2}(\sin x)^n e^{-(2+\cos x)\log k}dx$$ where $n\geq 1$ is a fixed integer and $k\geq 1$ is also a fixed integer? Which of those? Thanks in advance. If you know some related integrals, you can also add a link from this site to those computations that provide closed forms for the integrals in the previous Question. For the others, you can add your own computations/reductions. My attempts convinced me that this seems a hard problem, because even for the low odd values $n=1$ or $n=3$, using the rule $$\int F'(x)e^{F(x)}dx=e^{F(x)}+\text{constant}$$ and integration by parts is already a lot of work. I take the case $k=1$ as a special case; I know that Wolfram Alpha provides a closed form for $\int_0^{\pi/2}(\sin x)^ndx$ with the code $$\int_0^{\pi/2} \sin^n(x)\, dx$$ for integers $n\geq 1$. But the code for a reduction with $n=6$, that is $$\int_0^{\pi/2} \sin^6(x)\,e^{-\cos(x)}\, dx$$ doesn't provide a closed form to me. For example, $n=5$ looks to me like tedious computations. The context of my computations is speculative; I don't know if there were mistakes or whether my reasoning/approach is reasonable (please feel free to add comments if you detect some): let the complex variable be $s=\sigma+it$; then $\zeta(s)$, the Riemann Zeta function, is analytic for $\Re s>1$ and doesn't vanish there. It is known that $$\frac{1}{\zeta(s)}=\sum_{n=1}^\infty\frac{\mu(n)}{n^s},$$ for $\Re s>1$. I believe that I can define the following real function, as a restriction ($X=x+0\cdot i$) of the previous complex function, $$f(X)=\frac{1}{\zeta(X+2)}$$ continuous for $-1<X\leq 1$, and thus the composition $f: \left[ 0,1\right] \to\mathbb{R}$, also a continuous real function, $$f(\cos x)=\frac{1}{\zeta(2+\cos x)}=\sum_{k=1}^\infty\mu(k)k^{-2-\cos x},$$ when (if I may presume it) $$-1<\cos x\leq 1.$$ Then as a specialization of PROBLEMA 159 on page 515, in Spanish, proposed by Furdui in La Gaceta de la RSME Vol. 14 (2011) No. 3, I use the first statement and the first line of the Solution, which provides us with $$\frac{\sqrt{\frac{\pi}{2}}}{\zeta(3)}=\lim _{n\to\infty}\sqrt{n}\sum_{k=1}^\infty\mu(k)c_{n,k},$$ when it is justified to swap the series $\sum_{k=1}^\infty$ with the definite integral.
This is not an answer since the result comes from a CAS. $$\int_{0}^{\frac{\pi}2}\sin^n(x)\,dx=\frac{\sqrt{\pi } \Gamma \left(\frac{n+1}{2}\right)}{2 \Gamma \left(\frac{n}{2}+1\right)}$$ $$\int_{0}^{\frac{\pi}2}\sin^n(x)e^{-\cos(x)}\,dx=\sqrt{\pi }\, 2^{\frac{n}{2}-1} \,\Gamma \left(\frac{n+1}{2}\right) \left(I_{\frac{n}{2}}(1)-\pmb{L}_{\frac{n}{2}}(1)\right)$$ where Bessel and Struve functions appear. Both hold for $\Re(n)>-1$. It even seems that $$\int_{0}^{\frac{\pi}2}\sin^n(x)e^{-a\,\cos(x)}\,dx=\sqrt{\pi }\, 2^{\frac{n}{2}-1}\, a^{-n/2}\, \Gamma \left(\frac{n+1}{2}\right) \left(I_{\frac{n}{2}}(a)-\pmb{L}_{\frac{n}{2}}(a)\right)$$ Maybe (I hope) this will give you some ideas.
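A numerical spot-check of the conjectured $a$-dependent formula (a throwaway Python sketch; it assumes scipy.special's iv for the modified Bessel $I_\nu$ and modstruve for the modified Struve $\pmb{L}_\nu$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, iv, modstruve

def lhs(n, a):
    return quad(lambda x: np.sin(x)**n * np.exp(-a*np.cos(x)), 0, np.pi/2)[0]

def rhs(n, a):
    return (np.sqrt(np.pi) * 2**(n/2 - 1) * a**(-n/2) * gamma((n + 1)/2)
            * (iv(n/2, a) - modstruve(n/2, a)))

for n, a in [(1, 1.0), (2, 1.0), (3, 2.0), (0.5, 1.5)]:
    print(n, a, lhs(n, a), rhs(n, a))   # each pair should agree
```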
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Creating nearest neighbors Laplacian matrix I want to create a general formula for an $N \times N$ nearest neighbors Laplacian matrix such that I can write an mfile in MATLAB to compute the matrix for given $N$. The nearest neighbors Laplacian matrix is of the below form \begin{equation} L=\left[ \begin{array}{ccc} -2 & 1 & 0 & \cdots & 0 & 1\\ 1 & -2 & 1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \cdots & 0 & 1 & -2 & 1\\ 1 & 0 & \cdots & 0 & 1 & -2\\ \end{array} \right]. \end{equation} where the main diagonal elements are all $-2$ and we have two elements of $1$ before and after each element in the main diagonal. Would appreciate any help...
I might be misreading what you have written as the $L$ matrix, but I believe you can use: L = gallery('tridiag',ones(1,N-1),-2*ones(1,N),ones(1,N-1)); This will return the tridiagonal matrix in sparse form. You can wrap it in full() to convert it to a standard matrix. Then you can manually change the top and bottom corners to 1, e.g. L(1,N) = 1; L(N,1) = 1; Let me know if this works or if I've misunderstood the form of L.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Norms That Define An Open Set How can a norm define a set in a vector space? I don't understand, for example, how 2 different norms can define the same open set. It's not intuitive to me. An open set doesn't need a norm to be open (or to even exist).
Are you talking about open balls or open sets? If $|| \cdot ||$ is a norm on a vector space $V$, then an open ball in $V$ (with respect to the norm $|| \cdot ||$) is a set of the form $$ \{ v \in V : ||v - v_0|| < r \}$$ for a fixed $v_0 \in V, 0 < r \in \mathbb{R}$. An open set (with respect to $|| \cdot ||$) is a set which is equal to a union of open balls in the sense of $|| \cdot ||$. If $V$ is a finite dimensional real vector space, then $V$ can be identified with a finite product of $\mathbb{R}$s, which gives $V$ a topology of open sets (product topology). As you say, these open sets are defined independently of any norm. But it is a standard result any two norms $|| \cdot ||_1, ||\cdot ||_2$ on such a space $V$ are equivalent, in the sense that any open set in the sense of $|| \cdot ||_1$ is also an open set in the sense of $|| \cdot ||_2$, and vice versa. In fact, the open sets from either of these norms are the exactly the same as the open sets of $V$ from the product topology. This is not saying that open balls in the sense of one norm are also open balls in the sense of the other. Rather, any set $S \subseteq V$ which is equal to a union of various open balls in the sense of $|| \cdot ||_1$, can also be expressed as a union of other various open balls in the sense of $|| \cdot ||_2$, and vice versa.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
why is $\forall x (p(x) \implies q(x)) \not\equiv (\forall x p(x)) \implies (\forall x q(x))$ I'm having a hard time wrapping my head around why $\forall x (p(x) \implies q(x)) \not\equiv (\forall x p(x)) \implies (\forall x q(x))$
Consider the domain of real numbers, where $P$ means "equals $\pi$" and $Q$ means "is rational". Then $\forall x (P(x)\to Q(x))$ means "All real numbers that equal pi are rational", which is blatantly false. Whereas $\forall x~P(x) \to \forall x~Q(x)$ means "If every real number equals pi, then every real number is rational," which is vacuously true. Thus there is at least one interpretation where the statements are not equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
find two matrices A and B such that $A^2 -BA-AB+B^2 = O_{2\times 2}$ Can someone please explain this question to me Question : Construct $2\times 2$ matrices A and B with A different from B and neither $A = O_{2\times 2}$ nor $B = O_{2\times 2}$ such that $A^2 - BA - AB + B^2 = O_{2\times 2}$ Attempt: I found a matrix A such that $a_{11} = 0$, $a_{12} = 0$, $a_{21} =1$, and $a_{22} = 0$, but I couldn't find a second matrix B that respects the given condition. Thanks.
$A^2 - AB - BA + B^2 = (A-B)^2 = O$ Find a matrix $M$ such that $M^2 = O$ How about $M = \begin{pmatrix}0&1\\0&0\end{pmatrix}$ Now choose $A, B$ such that $A-B = M$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A question about extending Lebesgue measure to all subsets of the real numbers. Is it known to be consistent with just ZF set theory (without the Axiom of Choice) that there exists an extension of Lebesgue measure to all subsets of real numbers which is countably additive and isometrically invariant? I am confused about this because some articles in the literature, which seem to imply that my question has a "yes" answer, also seem to add a Large Cardinal Axiom to the axioms of ZF. But my question specifically rules out any axioms beyond those of ZF.
It is known that the consistency of "All sets of reals are Lebesgue measurable" over $ZF+DC$ has large cardinal strength; however, note the role of $DC$, Dependent Choice, in the base theory. With many related results, dropping the requirement of $DC$ also drops the consistency strength of the related regularity property. For example, I believe $ZF$+"All sets have the perfect set property" is equiconsistent with $ZFC$ (Truss, although I can't find the citation right now), whereas $ZF+DC$+"All sets have the perfect set property" has the same consistency strength as an inaccessible (Shelah). The problem is that - unlike e.g. the perfect set property - it's not $100\%$ clear how to define Lebesgue measurability if we drop the axiom of dependent choice, since now the countable union of countable sets need not be countable! So my instinct is that you'll get wildly varying answers, depending on what definition of "Lebesgue measurable" you use, reflecting the fact that these definitions are not equivalent over mere $ZF$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1818917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of being dealt four-of-a-kind in a set of $5$ cards? You are dealt a hand of five cards from a standard deck of playing cards. Find the probability of being dealt a hand consisting of four-of-a-kind. If possible, please provide a hint first before the answer. One of the first things that came to me was $\dfrac{52}{52} \cdot \dfrac{51}{51} \cdot \dfrac{3}{50} \cdot \dfrac{2}{49} \cdot \dfrac{1}{48}$ but this was of course wrong. Then, I realized that the third card could be the same as the first OR second card, so I tried $\dfrac{52}{52} \cdot \dfrac{51}{51} \cdot \dfrac{7}{50} \cdot \dfrac{2}{49} \cdot \dfrac{1}{48}$, which is also wrong. Then I realized that we probably need to add up the probabilities of situations where the first card is the one that doesn't match, or the second is the one that doesn't match, etc. I think this method leads to the solution, but I don't think we're intended to solve it this way. I think the solution looks something like $\dfrac{x}{ {{52}\choose{5}}}$, but I'm not sure what should be in the numerator.
The simplest explanation might be the following: there are ${52}\choose{4}$ equally likely possibilities for any given 4-card subset of the hand, and exactly $13$ of them are four-of-a-kinds. A 5-card hand contains $\binom{5}{4}=5$ four-card subsets, at most one of which can be a four-of-a-kind, so with 5 cards you have $13 \times 5$ favorable cases. Divide the latter by the former.
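As a sanity check, this agrees with the more common count $13\cdot 48/\binom{52}{5}$ (a quick exact-arithmetic Python snippet):

```python
from fractions import Fraction
from math import comb

p1 = Fraction(13 * 5, comb(52, 4))    # the count used in this answer
p2 = Fraction(13 * 48, comb(52, 5))   # rank of the quad times the 5th card
print(p1, p2, p1 == p2, float(p1))    # 13/54145 twice, True, ~0.00024
```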
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
How do I simplify this Log with a Fraction in it? So I have: $$ \log_2(5x) + \log_2 3 + \frac{\log_2 10}{2} $$ I understand that when there is addition, and the bases are the same, I can simply multiply what is in the parenthesis. So for the first part, I'd get $\log_2(15x)$. I'm stuck now, because I'm not sure what to do with the third log term, since the entire thing is being divided by two.
You may write $$ \log_2(15x)+\frac12 \log_2(10)=\log_2(15x)+\log_2(10^{1/2})=\log_2(15x \cdot10^{1/2}). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
proving combinatorics identity - $\sum_{k=0}^m{n-k \choose m-k}={n+1 \choose m}$ Prove that for every $n \ge m \ge 1$, $\sum_{k=0}^m{n-k \choose m-k}={n+1 \choose m}$ I've tried saying that the RHS represents the number of binary strings with $m$ "1"s and $n+1-m$ "0"s, but I couldn't figure out what $k$ represents in the LHS. Thanks
$$ \begin{align} \sum_{k=0}^m\binom{n-k}{m-k} &=\sum_{k=0}^m\binom{n-k}{n-m}\tag{1}\\ &=\binom{n+1}{n-m+1}\tag{2}\\[3pt] &=\binom{n+1}{m}\tag{3} \end{align} $$ Explanation: $(1)$: $\binom{n}{k}=\binom{n}{n-k}$ $(2)$: Equation $(9)$ from this answer $(3)$: $\binom{n}{k}=\binom{n}{n-k}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
If an event $A$ is independent from $B$ and $C$, is A also independent from $B \cap C$? $A$ and $B$ are independent. $A$ and $C$ are independent. Are $A$ and $B \cap C$ also independent?
Let $A$ and $B$ be independent events, and let $A$ and $C$ be independent events. How do I show that $A$ and $B\cap C$ are independent events as well? You cannot show this result because it does not hold for all $A, B, C$ enjoying these properties. Consider the following counter-example. Consider two independent tosses of a fair coin. Let $B=\{HT,HH\}$ and $C=\{HT,TT\}$ be the events that the first and second tosses resulted in Heads and Tails respectively. Let $A=\{HT,TH\}$ be the event that exactly one toss resulted in Heads. Then, $P(A)=P(B)=P(C) = \frac 12$ while $P(A\cap B) = P(A\cap C) = \frac 14$ and so $A$ and $B$ are independent events as are $A$ and $C$ independent events. Indeed, $B$ and $C$ are also independent events (that is, $A$, $B$, and $C$ are pairwise independent events). However, $$P(A) = \frac 12 ~ \text{and}~ P(B\cap C)=\frac 14 ~ \text{while}~ P(A\cap(B\cap C)) =\frac 14 \neq P(A)P(B\cap C)$$ and so $A$ and $B\cap C$ are dependent events. Notice that whether $B$ and $C$ are independent or not is not relevant to the issue at hand: in the counter-example above, $B$ and $C$ were independent events and yet $A = \{HT, TH\}$ and $B\cap C = \{HT\}$ were not independent events.

If $A$, $B$, and $C$ are mutually independent events (which requires not just independence of $B$ and $C$ but also for $P(A\cap B \cap C) = P(A)P(B)P(C)$ to hold), then $A$ and $B\cap C$ are indeed independent events. Mutual independence of $A$, $B$ and $C$ is a sufficient condition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
find the measure of $\angle AMC$ In a triangle $ABC$ whose angles at the base $BC$ measure $30^\circ$ and $15^\circ$, $M$ is the midpoint of $BC$; find the measure of $\angle AMC$. I tried to use the angles to find $\angle AMC$, but I don't know how to use the fact that $M$ is the midpoint of $BC$.
Let $X$ be the foot of the perpendicular from $A$ to $BC$, and write $MX=z$ for comfort, so that $BX=\frac{BC}{2}+z$ and $XC=\frac{BC}{2}-z$. Now: $$\tan(\alpha )= \frac{AX}{BX}\implies \tan(\alpha )= \frac{AX}{\frac{BC}{2}+z}$$ $$\tan(AMC )= \frac{AX}{z}$$ $$\tan(\beta )= \frac{AX}{XC} \implies \tan(\beta )= \frac{AX}{\frac{BC}{2}-z}$$ Now you have 3 equations in the 3 unknowns $z,AX,\tan(AMC)$, and the rest is given. Let $\angle AMC= \gamma$, so $$AX=z \tan(\gamma)$$ $$(1)\ \tan(\alpha )= \frac{z \tan(\gamma)}{\frac{BC}{2}+z}\implies \frac{BC\tan(\alpha )}{2}=z(\tan(\gamma)-\tan(\alpha ))$$ $$(2)\ \tan(\beta )= \frac{z \tan(\gamma)}{\frac{BC}{2}-z}\implies \frac{BC\tan(\beta )}{2}=z(\tan(\gamma)+\tan(\beta ))$$ And by dividing (1)/(2) we get: $$\frac{\tan(\alpha )}{\tan(\beta )}=\frac{\tan(\gamma)-\tan(\alpha )}{\tan(\gamma)+\tan(\beta )}$$ Hence: $$\tan(\gamma)=\frac{2\tan(\alpha)\tan(\beta)}{\tan(\beta)-\tan(\alpha)}$$ $$\gamma=\arctan(\frac{2\tan(30^\circ)\tan(15^\circ)}{\tan(30^\circ)-\tan(15^\circ)})$$
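Evaluating the final expression numerically (a two-line Python check) shows the angle comes out to exactly $45^\circ$:

```python
from math import atan, degrees, radians, tan

print(degrees(atan(2*tan(radians(30))*tan(radians(15))
                   / (tan(radians(30)) - tan(radians(15))))))   # 45.0000...
```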
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Pareto optimum in game matrix I have to find the Pareto-optimal squares in a game matrix. They are marked in the following picture. What questions do I have to ask myself for every square to decide if it is a Pareto optimum? Why is square E/A (5,4) not optimal?
Pareto optimality is a state of allocation of resources in which it is impossible to make any one individual better off without making at least one individual worse off. E&A with a value $(5,4)$ can be improved to F&D with a value of $(5,5)$ since the second player is better off and the first player is not worse off. Similarly F&A can be Pareto improved to F&B or F&D or G&D, and you can find examples of improvement without worsening for all the cases without red rings, but you cannot for any of the cases with red rings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral $\int_{-\infty}^\infty\frac{\Gamma(x)\,\sin(\pi x)}{\Gamma\left(x+a\right)}\,dx$ I would like to evaluate this integral: $$\mathcal F(a)=\int_{-\infty}^\infty\frac{\Gamma(x)\,\sin(\pi x)}{\Gamma\left(x+a\right)}\,dx,\quad a>0.\tag1$$ For all $a>0$ the integrand is a smooth oscillating function decaying for $x\to\pm\infty$. The poles of the gamma function in the numerator are cancelled by the sine factor. For $a\in\mathbb N$, the ratio of the gamma functions simplifies to a polynomial in the denominator, and in each case the integral can be pretty easily evaluated in a closed form, e.g. $$\mathcal F(3)=\int_{-\infty}^\infty\frac{\sin(\pi x)}{x\,(x+1)\,(x+2)}\,dx=2\pi.\tag2$$ Can we find a general formula for $\mathcal F(a)$ valid both for integer and non-integers positive values of $a$?
Interesting question. By using the reflection formula, your integral can be written as a convolution integral: $$ I(a) = \pi \int_{-\infty}^{+\infty}\frac{dx}{\Gamma(x+a)\Gamma(1-x)}.\tag{1}$$ We may also notice that when $n\in\mathbb{N}$ we have: $$ \int_{-\infty}^{+\infty}\frac{\sin(\pi x)\,dx}{x(x+1)\cdot\ldots\cdot(x+n)}=\pi\sum_{k=0}^{n}(-1)^k \text{Res}\left(\frac{1}{x(x+1)\cdot\ldots\cdot(x+n)},x=-k\right)\tag{2}$$ where the $RHS$ of $(2)$ equals: $$ \frac{\pi}{n!}\sum_{k=0}^{n}\binom{n}{k} = \color{red}{\frac{\pi\, 2^n}{n!}}\tag{3} $$ so the most reasonable conjecture is: $$ I(a) = \pi \int_{-\infty}^{+\infty}\frac{dx}{\Gamma(x+a)\Gamma(1-x)}\stackrel{!}{=}\color{red}{\frac{\pi\, 2^{a-1}}{\Gamma(a)}}.\tag{4}$$ Numerical simulations support $(4)$. Probably it is enough to exploit the log-convexity of the $\Gamma$ function to extend the validity of $(4)$ from $a\in\mathbb{N}$ to $a\in\mathbb{R}^+$. Update: As a matter of fact, Sangchul Lee's blogpost proves that the partial fraction decomposition through the residue theorem still works for $a\in\mathbb{R}^+\setminus\mathbb{N}$, establishing $(4)$ as an identity. Credits to him for another great piece of cooperative math. Following Yuriy S' approach, $$ \int_{-\infty}^{+\infty}B(a,x)\sin(\pi x)\,dx \\=\int_{0}^{+\infty}\int_{0}^{1}\left[(1-t)^{a-1}t^{x-1}\sin(\pi x)+t^{x-a}(1-t)^{a-1}\sin(\pi(a-x))\right]\,dt\,dx$$ leads to an integral that can be easily evaluated through the residue theorem, giving an alternative proof of $(4)$.
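Here is one such numerical check of $(4)$ (a throwaway Python/mpmath sketch; the integration range is truncated at $\pm 60$ and split at the integers, where $\sin(\pi x)\Gamma(x)$ stays finite):

```python
from mpmath import mp, quad, gamma, sin, pi

mp.dps = 15

def f(x, a):
    # integrand of I(a); the poles of gamma(x) are cancelled by sin(pi*x)
    return sin(pi*x) * gamma(x) / gamma(x + a)

for a in (1.5, 3, 3.7):
    approx = quad(lambda x: f(x, a), list(range(-60, 61)))
    exact = pi * 2**(a - 1) / gamma(a)
    print(a, approx, exact)   # agree up to the truncated O(60**(1-a)) tail
```

For $a=3$ the right-hand side is $2\pi$, matching $(2)$ in the question.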
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Angle bracket (quadratic variation) process for martingales Probability with Martingales: What is the relation between $\langle M_{S(k) \wedge n}\rangle \ = A_{S(k) \wedge n}$ and $\{N_n\}, \{ N_{ S(k) \wedge n } \}$ being martingales? It seems that $$\langle M_{S(k) \wedge n}\rangle \ = A_{S(k) \wedge n}$$ is supposed to be true by defintion. Am I wrong?
Since $M^2-A=N$ is a martingale, the stopped process $(M^2-A)^{S(k)}$ is also a martingale. Also, $A^{S(k)}$ is previsible. However, the Doob decomposition of an $\mathcal{L}^1$ adapted process into a martingale and a previsible process is essentially unique. Hence, $\left<M^{S(k)}\right>=A^{S(k)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What should I do further? I came across a simple question: the lengths of all sides of a $\triangle{ABC}$ are in integral units. If the lengths $AB=10$ and $AC= 15$ are given, then the number of distinct possible values of $BC$ is finite. We can simply apply the Triangle Inequality; I tried to show creativity as follows: taking $\angle A= \theta$ and applying the cosine rule, $$BC^2=10^2 + 15^2 -300\cos\theta,$$ $$BC=5\sqrt{13-12\cos\theta},$$ where $\theta$ cannot exceed $180^\circ$, so $\cos\theta$ ranges over $(-1,1)$. Can I solve it further using these? And what other interesting approaches are there to this problem?
I think the Triangle Inequality is the cleanest method, applied twice to find the (excluded) lower bound of $5$ and the (excluded) upper bound of $25$, giving the full set of solutions for the length of $BC$ as $$\{6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24\}$$ A more creative method would of course be interesting, provided it didn't just boil down to finding the two bounds above in a more obscure fashion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1819993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sum $1+(1+x)^2+(1+x+x^2)^2+\cdots+(1+x+x^2+\cdots+x^{N-2})^2 $ Is there a way to find the sum of the following series: $$1+(1+x)^2+(1+x+x^2)^2+\cdots+(1+x+x^2+\cdots+x^{N-2})^2 \text{ ?}$$ Any ideas? Perhaps someone already knows the result. Thank you in advance for your time.
$$ 1+(1+x)^2+(1+x+x^2)^2+\cdots+(1+x+x^2+...+x^{N-2})^2 = \sum_{i=0}^{N-2}(1+x+\cdots + x^i)^2$$ Let $x<1$, though the same can be repeated for $x>1$ - we do not consider $x=1$ since the answer is straightforward in that case. $$\sum_{i=0}^{N-2}(1+x+\cdots + x^i)^2 = \sum_{i=0}^{N-2}\left(\frac{1-x^{i+1}}{1-x}\right)^2 = \frac{1}{(1-x)^2}\sum_{i=0}^{N-2}(1+(x^{i+1})^2-2x^{i+1}).$$ Thus, we have $$\sum_{i=0}^{N-2}(1+x+\cdots + x^i)^2 = \frac{1}{(1-x)^2}\left(N-1+\sum_{i=0}^{N-2}(x^{i+1})^2-2\sum_{i=0}^{N-2}x^{i+1}\right).$$ Since $\sum_{i=0}^{N-2}x^{2(i+1)}=\frac{x^2\left(1-x^{2(N-1)}\right)}{1-x^2}$ and $\sum_{i=0}^{N-2}x^{i+1}=\frac{x\left(1-x^{N-1}\right)}{1-x}$, the answer is $$\frac{1}{(1-x)^2}\left(N-1+\frac{x^2\left(1-x^{2(N-1)}\right)}{1-x^2}-\frac{2x\left(1-x^{N-1}\right)}{1-x}\right)$$ For $x=1$, this is the sum of $i^2$ from 1 to $N-1$, which is $(N-1)(N)(2N-1)/6$. For $x>1$, it will be the same expression as for $x<1$.
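A symbolic spot-check of the closed form against the raw sum (a disposable Python/sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
N = 6                                           # one small test value
raw = sum(sum(x**j for j in range(i + 1))**2 for i in range(N - 1))
closed = (N - 1 + x**2*(1 - x**(2*(N - 1)))/(1 - x**2)
          - 2*x*(1 - x**(N - 1))/(1 - x)) / (1 - x)**2
print(sp.simplify(raw - closed))                # prints 0
```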
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Under what condition does $f(f^{-1}(f(A))) = f(A)$? Basic question regarding function. Let $f: X \to Y$, then for what $f$ does $f(f^{-1}(f(A))) = f(A)$? hold? Obviously this relationship holds when $f$ is a bijection. This does not hold when $f$ is pure surjection because the inverse does not exist. Does this also hold when $f$ is an pure injection? I think so.
When $f$ is an injective function, $f^{-1}(f(A))=A$, and as a result $$f(f^{-1}(f(A))) = f(A)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Prove $\int_0^{\infty} \frac{x^2}{\cosh^2 (x^2)} dx=\frac{\sqrt{2}-2}{4} \sqrt{\pi}~ \zeta \left( \frac{1}{2} \right)$ Wolfram Alpha evaluates this integral numerically as $$\int_0^{\infty} \frac{x^2}{\cosh^2 (x^2)} dx=0.379064 \dots$$ Its value is apparently $$\frac{\sqrt{2}-2}{4} \sqrt{\pi}~ \zeta \left( \frac{1}{2} \right)=0.37906401072\dots$$ How would you solve this integral? Obviously, we can make a substitution $t=x^2$ \begin{align} \int_0^{\infty} \frac{x^2}{\cosh^2 (x^2)} dx&=\frac{1}{2} \int_0^{\infty} \frac{\sqrt{t}}{\cosh^2 (t)} dt\\[10pt] &=\int_0^{\infty} \frac{\sqrt{t}}{\cosh (2t)+1} dt\\[10pt] &=\frac{1}{2 \sqrt{2}}\int_0^{\infty} \frac{\sqrt{u}}{\cosh (u)+1} du \end{align} We could use geometric series since $\cosh (u) \geq 1$, but I don't know how it will help.
Hint: Consider the parametric integral \begin{equation} I(a) = \int_0^\infty \frac{t^{a-1}}{\cosh^{2} t}\ dt=4 \int_{0}^{\infty} \frac{t^{a-1}}{(e^{t}+e^{-t})^{2}}\ dt = 4 \int_{0}^{\infty}\frac{t^{a-1} e^{-2t}}{(1+e^{-2t})^{2}}\ dt \end{equation} Hence, your integral is simply \begin{equation} \int_0^{\infty} \frac{x^2}{\cosh^2x^2}\ dx = -2\ \frac{\partial}{\partial b}\left[\int_0^{ \infty}\frac{t^{a-1}}{1+e^{bt}} \ dt\right]_{a=\frac{1}{2}\ ,\ b=2} \end{equation} I believe you can evaluate the last expression on your own.
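For the record, one way to finish from the hint (sketching the remaining steps; the $\eta$-function integral is the standard one, valid for $\Re(a)>0$): $$\int_0^{ \infty}\frac{t^{a-1}}{1+e^{bt}}\,dt = \frac{1}{b^a}\int_0^{\infty}\frac{u^{a-1}}{1+e^{u}}\,du = \frac{\Gamma(a)\,\eta(a)}{b^a} = \frac{\Gamma(a)\left(1-2^{1-a}\right)\zeta(a)}{b^a},$$ so that $$-2\,\frac{\partial}{\partial b}\left[\frac{\Gamma(a)\left(1-2^{1-a}\right)\zeta(a)}{b^{a}}\right]_{a=\frac{1}{2},\ b=2} = 2a\,\Gamma(a)\left(1-2^{1-a}\right)\zeta(a)\,b^{-a-1}\Big|_{a=\frac{1}{2},\ b=2} = \frac{\sqrt{\pi}\,\left(1-\sqrt{2}\right)}{2\sqrt{2}}\,\zeta\left(\tfrac{1}{2}\right) = \frac{\sqrt{2}-2}{4}\sqrt{\pi}\,\zeta\left(\tfrac{1}{2}\right),$$ as claimed.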
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Any Nilpotent Matrix is not Diagonalizable I'm trying to go about the proof that any matrix that is nilpotent (i.e. $\exists N \in\Bbb N. A^N = \mathbf{0}$) cannot be diagonalizable. I believe that the best way to go about this is by showing that a given eigenvalue's geometric multiplicity is not the same as its algebraic multiplicity. However, I am having some difficulty figuring out what aspect of nilpotency might help me with this calculation. I can see that if $A^N = \mathbf{0}$ for some $N \in\Bbb N,$ I think it may be reasonable to prove that the only eigenvectors we get from $A$ are the $\mathbf{0}$ vector, although I am not entirely sure if this is the appropriate way to prove the above statement, given that it doesn't necessarily pertain to multiplicity but more of the mechanics of finding eigenvectors and eigenvalues of a matrix. Any recommendations on this problem?
Not true (example: $0$ matrix). Though, the statement is true if $A \neq 0$. If $A$ is diagonalizable, then there exists an invertible $P$ and a diagonal matrix $D$ such that $A = P^{-1} D P$. Then, $A^N = P^{-1} D^N P$, which gives $P^{-1} D^N P = 0$. Therefore $D^N = 0$ and so $(D_{ii})^N = 0$ for all $i$, hence $D_{ii} = 0$ for all $i$ and $D$ is the zero matrix. This gives $A = P^{-1} 0 P = 0$, which is contradictory to the assumption.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 4 }
How do I prove $\int^{\pi}_{0} \frac{\cos nx}{1+\cos\alpha \cos x} \mathrm{d}x = \frac{\pi}{\sin \alpha} (\tan \alpha - \sec \alpha)^n $? The result I wish to show is that for $n \in \mathbb{Z}$, $$\int^{\pi}_{0} \frac{\cos nx}{1+\cos\alpha \cos x} \mathrm{d}x = \frac{\pi}{\sin \alpha} (\tan \alpha - \sec \alpha)^n $$ I have made a few attempts through the first techniques that came to my mind but I have not made any meaningful progress.
Hint: Use the following relation \begin{equation} \frac{\sin\alpha}{1+\cos\alpha\cos x}=1+2\sum_{k=1}^\infty \left(\frac{\sin\alpha-1}{\cos\alpha}\right)^k\cos(kx) \end{equation} It can be obtained by writing the cosine in the denominator of the integrand in exponential form, as follows: \begin{equation} \frac{A^2-B^2}{A^2-2AB\cos x+B^2}=\frac{A}{A-Be^{ix}}+\frac{Be^{-ix}}{A-Be^{-ix}} \end{equation} where we use $A^2+B^2=1$, $-2AB=\cos\alpha$, and the geometric series in the form $\frac{1}{1-y}$. Then use the following relations \begin{equation} \int_0^\pi\cos nx\cos mx\ dx=\begin{cases} 0&, & \text{if}\ \ n\ne m \\[20pt] \dfrac{\pi}{2}&, & \text{if}\ \ n=m \end{cases} \end{equation}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Show that $x^n$ is a continuous function So the idea I thought of is to proceed by induction. $\forall\; \epsilon > 0\;\; \exists \;\delta > 0$ such that $|x-x_0| < \delta \implies |x-x_0| < \epsilon $ By the definition of the limit, choose $\delta=\epsilon$ and the statement holds for $n=1$. Thus $\lim_{x \rightarrow x_0} x = x_0 = f(x_0)$ Now here is the inductive step. Assume that $\lim_{x \rightarrow x_0} x^n$ exists. In other words, $\forall\; \epsilon > 0\;\; \exists \;\delta > 0$ such that $|x-x_0| < \delta \implies |x^n-x_0^n| < \epsilon $ We want: $\forall\; \epsilon > 0\;\; \exists \;\delta > 0$ such that $|x-x_0| < \delta \implies |x^{n+1}-x_0^{n+1}| < \epsilon $ Let $\epsilon >0$; then $|x-x_0| < \delta \implies |x-x_0|^n|x+x_0| < \epsilon$ I am having trouble proceeding from here; any hints or advice would be greatly appreciated.
Hint: if $f$ is continuous and $g$ is continuous then $fg$ is continuous. This is not that hard to show (ask if you need help). Then apply induction.
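For completeness, the inductive step might look like this, assuming the product rule for limits (the limit of a product is the product of the limits when both exist): $$\lim_{x\to x_0}x^{n+1}=\lim_{x\to x_0}\big(x^n\cdot x\big)=\Big(\lim_{x\to x_0}x^n\Big)\Big(\lim_{x\to x_0}x\Big)=x_0^n\cdot x_0=x_0^{n+1}.$$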
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Prove that $\frac{dy}{dx} = -\frac1{(1+x)^2}$ given that $x\sqrt{1+y} + y\sqrt{1+x} = 0$ $$x\sqrt{1+y} + y\sqrt{1+x} = 0$$ Please tell me where I went wrong. Why am I not getting the correct answer?
There is nothing wrong. Put the value of $y$ in to get your result. However, a simpler approach: $$x\sqrt{1+y} + y\sqrt{1+x} = 0$$ $$x\sqrt{1+y} = - y\sqrt{1+x}$$ Squaring both sides, we get $$x^2(1+y) = y^2(1+x)$$ $$x^2(1+y) - y^2(1+x)=0$$ $$(x-y)(x+y+xy)=0$$ So either $x-y=0$ or $x+y+xy=0$. Now if $x-y=0$, then $x=y$, and the original equation forces $2x\sqrt{1+x}=0$, which holds only at the isolated points $x=y=0$ and $x=y=-1$ rather than along a curve $y(x)$. Hence the required relation is $x+y+xy=0$, i.e. $y=-\frac{x}{1+x}$. Differentiating, we can write that $$1+\frac{dy}{dx}+y+x\frac{dy}{dx}=0$$ $$\frac{dy}{dx}=\frac{-y-1}{1+x}=\frac{\frac{x}{1+x}-1}{1+x}=-\frac{1}{(1+x)^2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to Solve a linear matrix equation $M = BMC$ where $B$ and $C$ are known Adding to the question's description: I am doing feature extraction from videos and I am trying to implement this one line of mathematics in Matlab, or in any algorithm. Let's say I have $B$ and $C$, which are $10\times10$ matrices. $M$ is also a $10\times10$ matrix, but it should satisfy two conditions: 1) $M = B\times M\times C$; 2) the values in $M$ should be within a given range $[x, y]$ ($x$ the minimum, $y$ the maximum) given by the user. I can handle the constraints of the range, but I have no idea how to proceed with the first condition at all, much less how to convert it into Matlab. I have found several solutions to this type of problem: 1) Sylvester's equation, which solves $AX+XB = C$, equivalently $AX + XB - C = 0$; 2) the Lyapunov matrix equation, which solves $AX+XA'+Q = 0$. Matlab implementations of both can be found at https://github.com/ajt60gaibb/freeLYAP, but I need help on the mathematical side: how can I use those (if I can) for $M = BMC$?
If you multiply both sides of the equation $M = BMC$ by the inverse of $B$ from the left and take everything to the same side of the equal sign, you get $$ B^{-1} M - MC = 0.$$ This is a Sylvester equation where the last term is the null matrix (all elements equal to zero). You should be able to solve your problem in Matlab by using lyap(inv(B),-C,zeros(10,10)) (here it should be ok to use the inv command because you are dealing with 10 x 10 matrices; consider another way of inverting $B$ if you also need to solve the same problem for bigger ones).
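If one wants all solutions of $M=BMC$ rather than a single one, a possibly useful reformulation (a sketch, not from the original answer; it uses the identity $\operatorname{vec}(BMC)=(C^T\otimes B)\operatorname{vec}(M)$ for the column-stacking vec operator) is to compute the null space of $I-C^T\otimes B$, here assuming NumPy/SciPy are available:

```python
import numpy as np
from scipy.linalg import null_space

n = 10
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

# M = B M C  <=>  (I - kron(C.T, B)) vec(M) = 0, with column-stacking vec.
K = np.eye(n * n) - np.kron(C.T, B)
basis = null_space(K)  # columns span all solutions vec(M)

# For generic B, C the null space is {0} (only M = 0); nontrivial M exist
# exactly when some eigenvalue product lambda_i(B) * mu_j(C) equals 1.
for k in range(basis.shape[1]):
    M = basis[:, k].reshape(n, n, order="F")  # undo column-major vec
    print(np.allclose(M, B @ M @ C))
```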
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to derive the gregory series for inverse tangent function? How to derive the Gregory series for the inverse tangent function? Why is Gregory series applicable only to the set $ [-\pi/4,\pi/4] $ ?
A fast way is by exploiting $\arctan'(x)=\dfrac1{1+x^2}$. The Taylor series of that derivative is easily established to be the sum of a geometric series of common ratio $-x^2$, $$\sum_{k=0}^\infty(-x^2)^k=\frac1{1-(-x^2)},$$ which only converges for $x^2<1$. Then integrating term-wise, $$\arctan(x)=\int_0^x\frac{dx}{1+x^2}=\sum_{k=0}^\infty\frac{(-1)^kx^{2k+1}}{2k+1}.$$ Alternatively, assume that you know $$\ln(1+z)=-\sum_{k=1}^\infty\frac{(-z)^k}{k}.$$ Then with $z=ix$, $$\ln(1+ix)=\ln\left(\sqrt{1+x^2}\right)+i\arctan(x)=-\sum_{k=1}^\infty\frac{(-ix)^k}{k}.$$ The imaginary part of this identity gives $$\arctan(x)=\sum_{odd\ k}(-1)^{(k-1)/2}\frac{x^k}k$$ while the real part gives the extra $$\ln\left(\sqrt{1+x^2}\right)=\sum_{even\ k>0}(-1)^{k/2-1}\frac{x^k}k.$$ As for the range: the integrated series for $\arctan$ converges exactly for $|x|\le1$ (at the endpoints $x=\pm1$ by the alternating series test), and $\arctan$ maps $[-1,1]$ onto $[-\pi/4,\pi/4]$; that is why the Gregory series only applies on that interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1820853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Can all functions over $\mathbb{Z}/4\mathbb{Z}$ be "described" by a polynomial? Consider the set $T$ of functions from $\mathbb{Z}/4\mathbb{Z}$ to $\mathbb{Z}/4\mathbb{Z}$. I'm now asked to prove or disprove the statement that all functions in $T$ can be described by a polynomial over $\mathbb{Z}/4\mathbb{Z}$, i.e. for any $f \in T$, there exists a polynomial $P \in (\mathbb{Z}/4\mathbb{Z})[X]$ so that the polynomial function for $P$ takes exactly the values that $f$ takes. I already know that this statement is true if instead of $\mathbb{Z}/4\mathbb{Z}$, we have a finite field as the underlying structure, but since $\mathbb{Z}/4\mathbb{Z}$ is not a field, I rather suspect that this statement is not true, i.e. that there is any function that gets in our way. If I suspect that I have found such a function, I would need to show that no polynomial over $\mathbb{Z}/4\mathbb{Z}$ takes the same values as $f$. I guess that there is a simple counterexample, but so far, I haven't been able to find one. Obviously we can't use any constant function, but I'm so far out of ideas what I can do to construct such a counterexample.
Consider a polynomial $p(X) = \sum_{k=0}^n a_k X^k$ with $a_k \in \mathbb{Z}/4\mathbb{Z}$. Then we have $$ p(0) = a_0, \\ p(2) = a_0 + a_1 \cdot 2 + a_2 2^2 + \dots + a_n 2^n = a_0 + 2a_1. $$ Hence, we can't find a polynomial that satisfies for example $p(0) = 1, p(2) = 2$ as the equation $2a_1 = 1$ has no solution in $\mathbb{Z}/4\mathbb{Z}$.
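A brute-force verification (a Python sketch; degree $3$ suffices because for $k\ge2$ the map $x\mapsto x^k\bmod 4$ on $\{0,1,2,3\}$ depends only on the parity of $k$, so every polynomial function is induced by one of degree at most $3$):

```python
from itertools import product

funcs = set()
for a0, a1, a2, a3 in product(range(4), repeat=4):
    f = tuple((a0 + a1*x + a2*x**2 + a3*x**3) % 4 for x in range(4))
    funcs.add(f)

print(len(funcs))  # 64 polynomial functions, out of 4**4 = 256 functions in total
print(any(f[0] == 1 and f[2] == 2 for f in funcs))  # False: p(0)=1, p(2)=2 is impossible
```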
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
The coefficient of $x^{100}$ in the expansion of $(1-x)^{-3}$. Please solve it and tell me the technique, so that I can solve such problems in multiple-choice examinations.
This is an application of the generalized binomial theorem, that $$(x+y)^s = \sum_{k=0}^\infty {s \choose k} x^{s-k}y^k$$ where $${s \choose k} = \frac{s(s-1)(s-2)\cdots(s-k+1)}{k!}$$ is the generalized binomial coefficient. Hence in this case, $$(1-x)^{-3} = \sum_{k=0}^\infty {-3 \choose k} (1)^{-3-k}(-x)^{k}$$ and so taking $k = 100$, the coefficient of the $x^{100}$ term will be $${-3 \choose 100} = \frac{(-3)(-4)\cdots(-102)}{(100)(99)\cdots(1)} = \frac{(102)(101)}{2} = 5151$$
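For multiple-choice speed it may also help to remember the equivalent negative-binomial form of this series: $$(1-x)^{-n}=\sum_{k=0}^\infty\binom{n+k-1}{k}x^k,\qquad\text{so}\qquad [x^{100}]\,(1-x)^{-3}=\binom{102}{100}=\binom{102}{2}=5151.$$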
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Does a separable metric space have a countable basis? I want to prove that if $X$ is a metric space and has a dense countable subset, then it has a countable basis. I know that every metric space is first countable, but I can't continue. Thanks for your help.
Well, take a dense subset of $D\subset X$ and then the metric guarantees you a countable neighbourhood basis at every point of $D$. A countable union of countable sets is again countable. Check that this is enough.
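To spell out the check (a sketch): let $D=\{d_1,d_2,\dots\}$ be a countable dense subset and let $\mathcal B=\{B(d,1/n) : d\in D,\ n\in\mathbb N\}$, a countable family. Given an open set $U$ and $x\in U$, choose $n$ with $B(x,2/n)\subseteq U$, and then $d\in D$ with $d(x,d)<1/n$. Then $$x\in B(d,\tfrac1n)\subseteq B(x,\tfrac2n)\subseteq U$$ by the triangle inequality, so every open set is a union of members of $\mathcal B$.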
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
With $m\in\mathbb{Z^+}$ fixed, is $\sum_{m\ne n\ge1} (n^2-m^2)^{-1}$ evaluable really elementarily? I found this exercise at the begininning of the series section of a calculus workbook, so it shouldn't require machinery like integrals or special functions; merely telescopic summing or some other easy trick, but I can't see what should be used. How to calculate, with $m\in\mathbb{Z^+}$ fixed, the sum $\displaystyle \sum_{m\ne n\ge1}\dfrac1{n^2-m^2}$?
Since $$\frac1{n^2-m^2}=\frac{1}{2m}\left(\frac{1}{n-m}-\frac{1}{n+m}\right)$$the sum does telescope "eventually"; for any specific $m$ you can see that it equals $\frac1{2m}$ times the sum of finitely many terms $$\frac{\mp1}{n\pm m};$$all the other terms cancel. (If that's not clear write out a large number of terms for $m=3$ and see what happens...)
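For the record, carrying out the cancellation and letting the partial sums tend to infinity yields the closed form $\sum_{m\ne n\ge1}\frac1{n^2-m^2}=\frac{3}{4m^2}$. A quick numerical sanity check (a Python sketch, truncating at $N=10^6$):

```python
m, N = 3, 10**6
s = sum(1.0 / (n * n - m * m) for n in range(1, N + 1) if n != m)
print(s, 3 / (4 * m * m))  # both approximately 0.08333
```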
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Where is the notion of anti-isomorphism useful Let $(S,\cdot)$ and $(T,\circ)$ be semigroups (or some algebraic structure with an operation), then they are anti-isomorphic if there exists some $\varphi : S \to T$ such that $$ \varphi(xy) = \varphi(y) \circ \varphi(x). $$ Now for what is this notion useful? The notion of isomorphism is useful as this basically says that isomorphic structures are structurally the same. Anti-Isomorphism in some sense accounts for the order in which we combine elements, so if two structures are isomorphic or anti-isomorphic, what I guess this means is that they are structurally equal without respect to notions which depend on the order of the operation. In terms of the multiplication table, if we have one set (or a bijection to view both sets as one) and two operations on this set, then both algebraic structures are anti-isomorphic if the transpose (i.e. mirroring it at the diagonal) gets me the other multiplication table. But all examples that come to my mind are just simple observations, like that the inversion is an anti-isomorphism in every group, or that if we have two finite automata and construct the transformation monoids, we have to choose some order in which we read function composition (from left to right or from right to left), but regardless of what we choose the resulting transformation monoids are anti-isomorphic (but in general not isomorphic). But despite this mere observations, I do not see where they are really useful; or where this notion is essential?
Well, ... at the very least there are indeed some examples for that! I am going to mention some anti-automorphisms; note that (post-)composing an anti-isomorphism with any isomorphism again gives an anti-isomorphism. Examples: taking inverses in a group, $(ab)^{-1} = b^{-1}a^{-1}$; taking the transpose in a semigroup of $n\times n$-matrices, $(AB)^T = B^TA^T$; taking inverses of relations, $(L\circ R)^{-1} = R^{-1}\circ L^{-1}$. There are probably more noteworthy examples, but I would say the importance of contravariance is more obvious if you indeed study contravariant (semi-)functors in general. For example, one could say it is essential for formalizing the notion of an adjunction, which may be one of the most important concepts in all of mathematics (I dare say). But again, more specifically, you may be interested in dagger categories (all of the above examples are dagger (semi-)categories with a single object).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving that $\int_{-b}^{-a}f(-x)dx=\int_{a}^{b}f(x)dx$ $f:[a,b]\rightarrow R$ that is integrable on [a,b] So we need to prove: $$\int_{-b}^{-a}f(-x)dx=\int_{a}^{b}f(x)dx$$ 1.) So we'll use a property of definite integrals: (homogeny I think it's called?) $$\int_{-b}^{-a}f(-x)dx=-1\int_{-b}^{-a}f(x)dx$$ 2.) Great, now using the fundamental theorem of calculus: $$-1\int_{-b}^{-a}f(x)dx=(-1)^2\int_{-a}^{-b}f(x)dx=\int_{-a}^{-b}f(x)dx$$ This is where I'm stuck. For some reason I think it might be smarter to skip step 2, to leave it as: $$-1\int_{-b}^{-a}f(x)dx$$ because graphically, we've "flipped" the graph about the x-axis, but we're still calculating the same area. Proving that using properties seems to have stumped me. I prefer hints over solutions, thanks.
Here is the intuition with the example $f(x) = x$, so that $f(-x) = -x$: replacing $x$ by $-x$ multiplies the integrand by $-1$, and replacing the limits $-b, -a$ by $a, b$ multiplies the integral by $-1$ again. Since both the limits and the function pick up a factor of $-1$, the two sign changes neutralize each other, and the two integrals agree.
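The substitution mentioned at the end of the question, written out, makes the two sign changes explicit: with $u=-x$, $du=-dx$, $$\int_{-b}^{-a}f(-x)\,dx=\int_{b}^{a}f(u)\,(-du)=\int_{a}^{b}f(u)\,du.$$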
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
If $U^*DU=D=V^*DV$ for diagonal $D$, is $U^*DV$ diagonal too? All the matrices mentioned are complex $n\times n$ matrices. Let $U, V$ be unitary matrices such that $U^*DU=V^*DV=D$ for a diagonal matrix $D$ with nonnegative diagonal entries. Does this imply that $U^*DV$ is also diagonal? All I understand is that $U^*V$ will commute with $D$.
No. Consider $D=I$. Any unitary $U$ and $V$ satisfy the condition, and here $U^*DV=U^*V$, which is easy to see is not diagonal in general. If the diagonal $D$ has all distinct diagonal entries, then $U$ and $V$ are both diagonal (a unitary satisfying $U^*DU=D$ commutes with $D$, and only diagonal matrices commute with such a $D$), and then any product involving the three matrices is diagonal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What assumptions do we need for the Fourier transform of the derivative formula Suppose $f: \mathbb R \to\mathbb C$ is differentiable and $f$ and $f'$ are in $L^1(\mathbb R)$. Do we need further assumptions to have the formula: $$\widehat{f'}(t) = (2\pi it)\hat f(t) $$ My textbook also assumes $f$ is continuously differentiable, though I don't see why that is needed. Basically we want the limit of $f(x)$ to be zero when $x \to +\infty$ or $-\infty$, which we can prove using the fundamental theorem of calculus, no?
The minimum set of conditions needed can be expressed using the Lebesgue integral. Essentially you need $f$ to be absolutely continuous with derivative $f' \in L^1$. Absolutely continuous means $f$ is the integral of its derivative: $$ f(y)-f(x)=\int_{x}^{y}f'(t)dt. $$ This all works out very nicely using the Lebesgue integral. For the Riemann integral, you could assume that $f \in L^1$, and that there exists $g\in L^1$ such that $$ f(y)-f(x)=\int_{x}^{y}g(t)dt,\;\;\; x,y \in\mathbb{R}. $$ Using the Riemann integral, you're more or less stuck with assuming that $g$ is Riemann integrable on every finite interval. You could allow a few isolated singularities by employing an improper Riemann integral, but that's about the least you can get away with, if you're going to use the Riemann integral. And it's enough to imply that $$ \widehat{f'}(s)=2\pi is\widehat{f}(s),\;\;\; s\in\mathbb{R}, $$ assuming the proper normalization for the Fourier transform.
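Under these hypotheses the formula follows by integration by parts; a sketch (the boundary term vanishes because an absolutely continuous $f$ with $f'\in L^1$ satisfies $f(x)=f(0)+\int_0^x f'(t)dt$, hence has limits at $\pm\infty$, and an $L^1$ function with limits at $\pm\infty$ must have those limits equal to $0$): $$\widehat{f'}(s)=\int_{-\infty}^{\infty} f'(x)\,e^{-2\pi isx}\,dx=\Big[f(x)\,e^{-2\pi isx}\Big]_{-\infty}^{\infty}+2\pi is\int_{-\infty}^{\infty} f(x)\,e^{-2\pi isx}\,dx=2\pi is\,\widehat f(s).$$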
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solutions to $[x^2]+2[x]=3x\text{ where } 0\le x\le 2$ Find all solutions to $$[x^2]+2[x]=3x\text{ where } 0\le x\le 2$$ and $[x]=\lfloor x\rfloor$ I managed to simplify this to $[x^2]-[x]=3\{x\}$. Thus, $$[x^2]-[x]\in\{0,1,2\}$$ However I got stuck here and was unable to proceed further. Somehow I have a feeling that there's a much neater approach than what I've taken. Any help would be greatly appreciated. Many thanks!
Write $\mathrm{LHS}=[x^2]+2[x]$; it is constant on the intervals where neither $[x]$ nor $[x^2]$ jumps: if $0\leq x<1$, $\mathrm{LHS}=0$; if $1\leq x<\sqrt2$, $\mathrm{LHS}=3$; if $\sqrt2\leq x<\sqrt3$, $\mathrm{LHS}=4$; if $\sqrt3\leq x<2$, $\mathrm{LHS}=5$; and at $x=2$, $\mathrm{LHS}=8$. Since the left-hand side is always an integer, $3x$ must be an integer, so $x$ is rational of the form $k/3$. Matching $3x$ against the constant value on each interval: $3x=0$ gives $x=0\in[0,1)$; $3x=3$ gives $x=1\in[1,\sqrt2)$; $3x=4$ gives $x=\frac43\notin[\sqrt2,\sqrt3)$; $3x=5$ gives $x=\frac53\notin[\sqrt3,2)$; and $x=2$ gives $8\neq6$. So the solutions are $x=0$ and $x=1$.
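A quick exhaustive check of the seven candidates (a Python sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import floor

for k in range(7):          # 3x must be an integer, so x = k/3 with 0 <= k <= 6
    x = Fraction(k, 3)
    if floor(x * x) + 2 * floor(x) == 3 * x:
        print(x)            # prints 0 and 1 only
```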
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
number of roots on SO(3) Suppose we have a smooth map $f:SO(3) \to SO(3)$ of manifolds s.t. $f(X)=X^2$. I thought that since $I$ is a regular value of this map and $f$ is orientation preserving, to calculate its degree it is enough to count the roots of $X^2-I$ in $SO(3)$. If $X$ is a root of it, then I have that $X=VDV^T$ where $V$ is an orthogonal matrix and $D$ is a diagonal matrix. So far I found that $D$ has to be $diag(1,1,1), diag(-1,-1,1), diag(-1,1,-1)$ or $diag(1,-1,-1)$. Any suggestion would be appreciated.
The equation $X^2=I$ simply means that $X$ is an involution, i.e. an order-$2$ rotation about the origin. Such a rotation is determined by its axis, and thus you get a continuum of solutions. The point is that $I$ is not a regular value of $f$. Hint: Try to solve the equation $f(X)=R$, where $R$ is a fixed rotation by any angle different from $0, 2\pi/3$ and $\pi$. Every solution of this equation will commute with $R$, which will narrow down your options.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that $\int_{-b}^{b}f(x)dx=0 $ if $f$ is an odd function So I split this integral up into two: $$ \int_{-b}^{0}f(x)dx+\int_{0}^{b}f(x)dx= \int_{-b}^{b}f(x)dx$$ Then I can do the following: $$ \int_{-b}^{0}f(x)dx+\int_{0}^{b}f(x)dx= \int_{0}^{b}f(-x)dx+\int_{0}^{b}f(x)dx$$ and since $f$ is odd $(f(-x)=-f(x))$ we get $$\int_{0}^{b}-f(x)dx+\int_{0}^{b}f(x)dx=0$$ I'm not really understanding why my second step is correct other than if I use substitution, i.e. $x=-y$.
Another approach. Let's assume $f$ is sufficiently regular on a given set. By the fundamental theorem of calculus and the chain rule, we get that $$ \left(\int_{-b}^{b}f(x)dx\right)'=f(b)+f(-b)=f(b)-f(b)=0 $$ (the oddness of $f$ gives $f(-b)=-f(b)$), giving that $$ \int_{-b}^{b}f(x)\:dx=\text{constant}. $$ By putting $b:=0$, the constant is seen to be $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On every other finite field at least one of −1, 2 and −2 is a square, because the product of two non squares is a square [Except on field extensions of $\mathbb{F}_2$] On every other finite field at least one of $−1$, $2$ and $−2$ is a square, because the product of two non squares is a square. I don't see why this is so. Please explain. This statement is from the Wikipedia article on factorisation of polynomials over finite fields.
Looking at the Wikipedia article, I think you meant to say "On every finite field of characteristic other than $2$" rather than "On every other finite field apart from $\mathbf{F}_2$". Also, the Wikipedia article doesn't mean to imply that the statement isn't true in finite fields of characteristic $2$—just that the characteristic $2$ statement is not needed for the argument in that particular passage. In fact, every element in a field of characteristic $2$ is a square. In particular, $-1=1$ and $2=-2=0$, both of which are obviously squares. In the odd characteristic case, if you are asking how the statement that at least one of $-1$, $2$, and $-2$ is a square follows from the statement that the product of two non-squares is a square, then suppose that all three of $-1$, $2$, and $-2$ were non-squares. Then there's an immediate contradiction, since $(-1)(2)=-2$, and therefore $-2$ should be a square. Others have given nice proofs that the product of two non-squares is a square. Also, once you know the fundamental fact that all nonzero elements of a finite field are powers of some primitive element $r$, it's straightforward to see this: if an element were a non-square, it would have to be an odd power of $r$. But the product of two odd powers of $r$ is an even power of $r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1821938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Graphical explanation of the difference between $C^1$ and $C^2$ function? We are all aware of the intuitive (graphical) explanation of the concepts of continuous and differentiable function. Whenever these two concepts are formally defined, the following elementary explanations are given: A continuous function is a function whose graph has no "holes" or "jumps", and a differentiable function is a function whose graph has no "corners". This is a non continuous function: This is a non differentiable continuous function: And this is a differentiable continuous function: Is there a "graphical" or intuitive explanation of the difference between a $C^1$ function and a differentiable function with discontinuous derivatives? What about a function that is $C^1$ but not $C^2$ because it does not have second derivatives? Or what about a function that is $C^1$ and has second derivatives but they are not continuous? What about the difference between a $C^1$ function and a $C^{\infty}$ function?
If $f$ is everywhere differentiable, but $f'$ is not continuous at some point, then $f'$ has to be very discontinuous there, because otherwise $f'$ could not satisfy the intermediate value requirement. As an example consider the function $f(x):=x^2\sin{1\over x}$ $(x\ne0)$, $f(0):=0$. On the other hand there are beautiful functions which are $C^1$, but not everywhere twice differentiable, e.g., the function $f(x):=0$ for $x\leq0$ and $:=x^2$ for $x\geq0$. Here $f''(0)$ is undefined, and has a jump discontinuity there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Question about proof of the product of a finite number of T1 spaces is a T1 space. Show that the product of a finite number of T1 spaces is a T1 space. Since each $X_i$ is a $T_1$ space, $$\prod_{i \in I} X_i \backslash \{x_i\} = \{(a_1, a_2, ..., a_n):a_i \in X_i \land (a_1, a_2, ..., a_n) \neq (x_1, x_2, ..., x_n)\} $$ is open, and $$(\prod_{i \in I} X_i) \backslash \{(x_1, x_2, ..., x_n)\} = \{(a_1, a_2, ..., a_n):a_i \in X_i \land (a_1, a_2, ..., a_n) \neq (x_1, x_2, ..., x_n)\}$$ has the same definition as the first product, which implies that it's open. $\therefore$ The two are equal and finite product of $T_1$ spaces is a $T_1$ space. Is this correct to assume that $\prod X_i \backslash \{x_i\} = (\prod X_i) \backslash \{(x_1, x_2, ..., x_n)\}$ because the definitions are the same?
To make things more simple you can focus on closed sets instead of open sets. If $X$ denotes the product of $X_1,\dots,X_n$ and $\pi_i:X\to X_i$ the projections, then a singleton $\{(x_1,\dots,x_n)\}\subseteq X$ equals $\bigcap_{i=1}^n\pi_i^{-1}(\{x_i\})$. The singletons in every $X_i$ are closed (since $X_i$ is $T_1$) and the $\pi_i$ are continuous, so we are dealing with a finite intersection of closed sets. So it is closed itself. We conclude that singletons are closed in $X$, or equivalently that $X$ is $T_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
General matrix solution for a variant orthogonality condition An $n\times n$ complex matrix $X$ satisfies a constraint looking like a rank deficient version of the orthogonality condition $$ X^TX = \text{diag}\left(1,\dots,1,0\right), $$ where $X^T$ is the transpose of $X$. How can I describe the most general form of $X$? Can it be factorized into some specific parts, or expressed as some constrained SVD?
Let $x_{mn}$ denote the entries of $X$, i.e. $$ X= \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nn} \end{bmatrix}. $$ We may define vectors $$ X_j=\begin{bmatrix} x_{1j}\\ \vdots \\ x_{nj} \end{bmatrix}, $$ in terms of which we have $X=\begin{bmatrix}X_1 & X_2 & \cdots & X_n \end{bmatrix}$ and $$ X^T=\begin{bmatrix} X_1^T \\ X_2^T \\ \vdots \\ X_n^T \end{bmatrix}. $$ With this notation, we find $$ X^T X= \begin{bmatrix} X_1^T X_1 & X_1^TX_2 & \cdots & X_1^TX_n \\ X_2^TX_1 & X_2^T X_2 & \cdots & X_2^T X_n \\ \vdots & \vdots & \ddots & \vdots \\ X_n^TX_1 & X_n^T X_2 & \cdots & X_n^TX_n \end{bmatrix}. $$ We can now proceed as in the answer by loup blanc in order to conclude that the set of interest is a holomorphic manifold. If $X$ was surjective, then $X$ would be bijective, so it would follow from the identity $\det X=\det X^T$ that $X^TX$ was also bijective. Since $$ X^T X= \mbox{diag}(1,1,\ldots,1,0) $$ is clearly not bijective, this is a contradiction. We conclude that $$ \dim \mbox{Ran} X\leqslant n-1, $$ so it follows that the set $(X_1,\ldots,X_n)$ is not linearly independent. But $(X_1,\ldots,X_{n-1})$ is linearly independent, because $$ 0=a_1X_1+\ldots+a_{n-1}X_{n-1} $$ implies (after multiplication by $X_j^T$ from the left) $$ 0=a_j. $$ We conclude that $X_n\in\mbox{span} \{X_1,\ldots,X_{n-1}\}$. But then $$ X_n=b_1X_1+\ldots+b_{n-1}X_{n-1}, $$ and multiplication by $X_j^T$ from the left ensures that $$ 0=b_j, $$ so $X_n=0$. The space of matrices $X$ satisfying your condition is therefore characterized by the $n$ equations $X_n=0$ together with the $\frac{n(n-1)}{2}$ equations $$ X_j^TX_k=\delta_{jk}\qquad 1\leqslant j\leqslant k\leqslant n-1, $$ restraining the $n^2$ variables $$ (X_1,\ldots,X_{n}). $$ Here $$ \delta_{jk}=\begin{cases} 1,&\mbox{ if }j=k,\\ 0,&\mbox{ if }j\neq k. \end{cases} $$ In order to describe this set, it suffices to describe the set of $n(n-1)$ variables $$ (X_1,\ldots,X_{n-1}). $$ restrained by the $\frac{n(n-1)}{2}$ equations $$ X_j^TX_k=\delta_{jk}\qquad 1\leqslant j\leqslant k\leqslant n-1. $$ If we define, for $1\leqslant j\leqslant k\leqslant n-1$, $f_{jk}(X_1,\ldots,X_{n})=X_j^TX_k$, put $$ f(X_1,\ldots,X_{n-1})=(X_j^TX_k)_{1\leqslant j\leqslant k\leqslant n-1}, $$ and set $$ \delta=(\delta_{jk})_{1\leqslant j\leqslant k\leqslant n-1} $$ then our set becomes the level set $f^{-1}(\{\delta\})$. The Jacobian of $f$ is given by $$ (D_{(X_1,\ldots,X_{n-1})}f)(Y_1,\ldots,Y_{n-1})=(X_j^TY_k+Y_j^TX_k)_{1\leqslant j\leqslant k\leqslant n-1}. $$ By considering vectors of the form $$ (Y_1,\ldots,Y_{n-1})=(a_{11}X_1,\sum_{j=1}^{2}a_{j2}X_j,\ldots,\sum_{j=1}^{n-1}a_{j,n-1}X_{j}), $$ for which we find $$ (D_{(X_1,\ldots,X_{n-1})}f)(Y_1,\ldots,Y_{n-1})=(a_{jk})_{1\leqslant j\leqslant k\leqslant n-1}, $$ it is visible that the Jacobian is always surjective, such that the holomorphic implicit function theorem ensures that $f^{-1}(\delta)$ is a holomorphic complex manifold of dimension $\frac{n(n-1)}{2}$.
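To make the description concrete: the solution set consists exactly of the matrices whose first $n-1$ columns satisfy $X_j^TX_k=\delta_{jk}$ (transpose, not conjugate transpose) and whose last column vanishes. A small sketch in Python (it uses a real orthogonal $Q$, whose columns are in particular complex-orthonormal; genuinely complex solutions exist as well):

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # real Q with Q.T @ Q = I

X = Q.copy()
X[:, -1] = 0  # zero out the last column

target = np.diag([1.0] * (n - 1) + [0.0])
print(np.allclose(X.T @ X, target))  # True
```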
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Picking cards sequentially vs consecutively We have a pack of 6 cards over the table. Cards are: {A, A}, {B, B}, {C}, {D}. There are 3 players (Papa, Pepe, Popo) sat around the table. Cards are all upside down, so the players cannot see which card they are picking. Now we are presented with two scenarios: * *Case 1: each player picks one card at a time. That is, first Papa picks a card, then Pepe, then Popo, until there are no cards over the table (i.e. cards are being picked sequentially). *Case 2: each player picks their 2 cards directly, that is: Papa picks 2 cards, then Pepe, then Popo. (i.e. cards are being picked consecutively). I am being asked if the probability that Papa has the two A's is the same (or not) for both cases. My knowledge on statistics and probability is not very advanced, but I'd say that the chance for Papa picking the two A's will not be the same, but I don't have a convincing argument in favour. Maybe someone could point me towards the right direction in solving this problem.
The probabilities that Papa picks two A's are actually the same. Case 1: probability that Papa's first pick is an A: $\frac{2}{6}$; probability that Pepe then picks a card other than A: $\frac{4}{5}$; probability that Popo picks a card other than A: $\frac{3}{4}$; probability that Papa then picks the other A: $\frac{1}{3}$. So the probability that Papa picks two A's is $\frac{2}{6}\times \frac{4}{5} \times \frac{3}{4} \times \frac{1}{3}=\frac{1}{15}$. Case 2: probability that Papa's first pick is an A: $\frac{2}{6}$; probability that his second pick is the other A: $\frac{1}{5}$. So the probability that Papa picks two A's is $\frac{2}{6}\times \frac{1}{5}=\frac{1}{15}$.
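A Monte Carlo sanity check of both cases (a sketch; in either scenario Papa's two cards occupy a fixed pair of positions in a uniformly shuffled deck, which is the underlying reason the two answers agree):

```python
import random

deck = ["A", "A", "B", "B", "C", "D"]
trials = 200_000

def papa_has_both_aces(positions):
    cards = deck[:]
    random.shuffle(cards)
    return all(cards[i] == "A" for i in positions)

seq = sum(papa_has_both_aces([0, 3]) for _ in range(trials)) / trials  # picks 1 and 4
con = sum(papa_has_both_aces([0, 1]) for _ in range(trials)) / trials  # picks 1 and 2
print(seq, con, 1 / 15)  # all approximately 0.0667
```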
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the inverse of $f(x) = 1 + \frac{1}{x}, x \gt 0$ I'm tasked to find the inverse of the function $$f(x) = 1 + \frac{1}{x}, x \gt 0$$ The book offers a solution, simply to set $$1 + \frac{1}{x} = s$$ and solve $$x = \frac{1}{s-1}$$ and I think I understand why that works. But why won't this work? $f_i$ is the function f(x)'s inverse $$f_i(f(x)) = x $$ $$f_i ( 1 + \frac{1}{x} ) = x$$ $f_i$ produces the result $x$ with input $1 + \frac{1}{x}$ if: $$f_i(x) = (x-1)\times x^2$$ So why is $(x-1)\times x^2$ not the inverse?
You need to replace $x$ with $1+\frac1x$ in both parts of the formula. So $$f_i\left(1+\frac1x\right)=\left(1+\frac1x-1\right)\left(1+\frac1x\right)^2 =\frac1x\left(1+\frac1x\right)^2,$$ which is not equal to $x$ in general; that is why $(x-1)x^2$ is not the inverse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proving the ratio of curvature and torsion is constant. This question has been asked slightly differently in a few different forums, but I wanted to discuss my approach and see if I was on the right track: Prove that if the tangent lines of a curve make a constant angle with a fixed direction, then the ratio of its curvature to its torsion is constant. So, I started by letting the curve be parameterized by arclength for convenience. Then, I let the fixed direction be the principal normal of the curve (as suggested by my professor). I know that the ratio of curvature to torsion is constant for a helix, so I was thinking of trying to prove that the assumptions imply that the curve must be a helix. I tried using the cosine similarity formula as follows (with $T$ being the tangent vector, and $u$ being my fixed principal normal direction: $cos(\theta)=\frac{T\cdot u}{\Vert{T}\Vert \Vert{u}\Vert}$ is constant I think I can say that both $T$ and $u$ are unit, so then I'd have that $cos(\theta) =T\cdot u$ is constant. Then, I was thinking if I showed $\frac{d}{ds}(T\cdot u)=0$, then I could somehow relate that back to curvature. Am I on the right track? Thank you very much for any help!
To begin with, we should name the fixed direction some other way, say $v$. You already have that $T\cdot v$ is constant, so its derivative is zero. As $\dot v= 0$, we get $\kappa N \perp v$, and $v$ belongs to the vector space generated by $T$ and $B$. Since these are orthonormal, we get $$ v= \langle v,T\rangle T+ \langle v, B\rangle B. $$ Now you can differentiate this equality on both sides and use the Frenet-Serret equations to get the desired result.
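Carrying out that differentiation (a sketch of the remaining step, with $T'=\kappa N$ and $B'=-\tau N$): $$0=\frac{dv}{ds}=\langle v,T\rangle'\,T+\langle v,T\rangle\,\kappa N+\langle v,B\rangle'\,B-\langle v,B\rangle\,\tau N=\big(\kappa\langle v,T\rangle-\tau\langle v,B\rangle\big)N,$$ since $\langle v,T\rangle'=\kappa\langle v,N\rangle=0$ and $\langle v,B\rangle'=-\tau\langle v,N\rangle=0$ (recall $v\perp N$). Hence $\kappa\langle v,T\rangle=\tau\langle v,B\rangle$, and wherever $\tau$ and $\langle v,T\rangle$ are nonzero the ratio $\kappa/\tau=\langle v,B\rangle/\langle v,T\rangle$ is constant.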
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Spectrum of difference of two projections Let $p$ and $q$ be two projections in a $C^*$-algebra. What can one say about the spectrum of $p-q$, i.e. is $\sigma(p-q) \subset [-1,1]$ ? The exercise is to show that $\lVert p-q \rVert \leq 1$. Any hints are appreciated.
You have, since $0\leq q\leq I$ and $0\leq p\leq I$, $$ -I\leq -q\leq p-q\leq I-q\leq I. $$ So, as you mentioned, it follows that $\sigma(p-q)\subset[-1,1]$. Since $p-q$ is self-adjoint, its norm equals its spectral radius, so $\lVert p-q\rVert\leq 1$. Note also that the argument does not use that $p,q$ are projections, only that they are positive elements of the unit ball.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Finding the second derivative of $f(x) = \frac{4x}{x^2-4}$. What is the second derivative of $$f(x) = \frac{4x}{x^2-4}?$$ I have tried to use the quotient rule but I can't seem to get the answer.
Avoiding the quotient rule, just for an option: $$\begin{align} \ln(f(x)) &=\ln(4)+\ln(x)-\ln(x+2)-\ln(x-2)\\ \frac{f'(x)}{f(x)} &=x^{-1}-(x+2)^{-1}-(x-2)^{-1}\\ f'(x) &=f(x)\left(x^{-1}-(x+2)^{-1}-(x-2)^{-1}\right)\\ f''(x) &=f(x)\left(-x^{-2}+(x+2)^{-2}+(x-2)^{-2}\right)+f'(x)\left(x^{-1}-(x+2)^{-1}-(x-2)^{-1}\right)\\ &=f(x)\left(-x^{-2}+(x+2)^{-2}+(x-2)^{-2}\right)+f(x)\left(x^{-1}-(x+2)^{-1}-(x-2)^{-1}\right)^2\\ &=f(x)\left[-x^{-2}+(x+2)^{-2}+(x-2)^{-2}+\left(x^{-1}-(x+2)^{-1}-(x-2)^{-1}\right)^2\right]\\ &=\frac{4x}{x^2-4}\left[-x^{-2}+(x+2)^{-2}+(x-2)^{-2}+\left(x^{-1}-(x+2)^{-1}-(x-2)^{-1}\right)^2\right]\\ \end{align}$$
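If you just want to check the end result, here is a sketch with SymPy (assuming it is available); the simplified form should come out as $f''(x)=\frac{8x(x^2+12)}{(x^2-4)^3}$:

```python
import sympy as sp

x = sp.symbols('x')
f = 4*x / (x**2 - 4)
f2 = sp.simplify(sp.diff(f, x, 2))
print(f2)  # some form equivalent to 8*x*(x**2 + 12)/(x**2 - 4)**3
print(sp.simplify(f2 - 8*x*(x**2 + 12)/(x**2 - 4)**3))  # 0
```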
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
at what price he should sell his product? Can anyone help me with this problem? A merchant has determined that if the selling price of peaches is 15 cent each, he will sell 400 of them each day. He has also determined that for each cent the price is increased he will sell 16 fewer peaches each day. At what price in cents should he sell each peach in order to maximize his income from peaches each day? Here is what I did: assume y is the total price and x is the amount of cent increase. Then I got: $$y=(15+x)(400-16x)=-16x^2+(400-15*16)x+400*15=-16x^2+160x+16*25*15=-16[x^2-10x-25*15]$$ Then how do I determine what x make y largest?
As you have correctly deduced, we have $P(n) = (15 + n)(400 - 16n) = -16n^2 + 160n + 6000$ At this point, you want to complete the square so that we have $$P(n) = -16n^2 + 160n + 6000 = -16(n^2 - 10n) + 6000 = -16(n^2 - 10n + 25) + 6400 = -16(n-5)^2 + 6400$$ Is the expression on the right-most side of the above equation easier to maximize? Remark: In this case, we're lucky since the maximum of $P(n)$ occurs at a natural number. Otherwise, assuming it is a parabola, you would consider $P$ with domain $\mathbb{R}$ and, at whatever value $c$ its maximum occurs, choose $n = [c]$ where $[x]$ is the nearest integer function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate $\int_0^{1/10}\sum_{k=0}^9 \frac{1}{\sqrt{1+(x+\frac{k}{10})^2}}dx$ How can we evaluate the following integral: $$\int_0^{1/10}\sum_{k=0}^9 \frac{1}{\sqrt{1+(x+\frac{k}{10})^2}}dx$$ I know basically how to calculate by using the substitution $x=\tan{\theta}$ : $$\int_0^1 \frac{dx}{\sqrt{1+x^2}}$$ But I cannot find out a way to apply the result to the question.
There is a better way. We shall prove that: $$\int_0^s f(x+ks)dx=\int_{ks}^{(k+1)s}f(x)dx \tag1$$ And hence: $$\int_0^s[f(x)+f(x+s)+...+f(x+(n-1)s)]dx=\int_0^{ns}f(x)dx \tag2$$ Proof: Using substitution $t\mapsto x+ks$, $$\int_0^s f(x+ks)dx=\int_{ks}^{(k+1)s}f(t)dt$$ \begin{align} & \int_0^s[f(x)+f(x+s)+\cdots+f(x+(n-1)s)]dx \\ & = \int_0^sf(x)dx+\int_0^sf(x+s)dx+\cdots+\int_0^sf(x+(n-1)s)dx \\ & = \int_0^sf(x)dx+\int_s^{2s}f(x)dx+\cdots+\int_{(n-1)s}^{ns}f(x)dx \\ & = \int_0^{ns}f(x)dx \end{align} Using the aforementioned results, your integral just becomes: $$\int_0^1\frac{dx}{\sqrt{1+x^2}} \tag3$$ , which is exactly equal to your given integral!
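A quick numerical check of the reduction (a sketch; the common value is $\int_0^1\frac{dx}{\sqrt{1+x^2}}=\ln(1+\sqrt2)\approx0.8814$):

```python
import numpy as np
from scipy.integrate import quad

lhs = sum(quad(lambda x, k=k: 1 / np.sqrt(1 + (x + k / 10) ** 2), 0, 1 / 10)[0]
          for k in range(10))
rhs = np.log(1 + np.sqrt(2))
print(lhs, rhs)  # both approximately 0.8814
```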
{ "language": "en", "url": "https://math.stackexchange.com/questions/1822894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Existence of non obtuse angle of n+2 vectors in n-dimensional euclidean space. There are n+2 distinct vectors $v_1,v_2,v_3,\cdots ,v_{n+2}$ in n-dimensional euclidean space. Prove that there must be a integer pair of $(i,j)$ which satisfies $1\leq i<j\leq n+2$, and $dot(v_i,v_j)\geq 0$. i.e. There are at most n+1 vectors in n-d euclidean space, and any pair of them forms an obtuse angle.
W.l.o.g. assume that the vectors are unit vectors, and suppose for contradiction that all these $n+2$ vectors have pairwise dot products less than zero. In the following I further assume that the dimension of the span of these vectors is $n$, but I think for a smaller rank we can reduce to the full-rank case. Let $w_1=v_1$, and let $w_2$ be the normalization of $v_2-\langle w_1,v_2 \rangle w_1$ (rescaling by a positive constant does not change the sign of any inner product). The set $w_1,w_2$ is orthonormal, and the inner product of each of $w_1,w_2$ with each of $v_3,v_4,\ldots,v_{n+2}$ is less than zero: indeed $\langle w_2,v_j\rangle$ is a positive multiple of $\langle v_2,v_j \rangle - \langle w_1,v_2\rangle\langle w_1,v_j\rangle$, and both terms are negative. Now let $w_3$ be the normalization of $v_3-\langle w_1,v_3 \rangle w_1 - \langle w_2,v_3 \rangle w_2$. Again the set $w_1,w_2,w_3$ is orthonormal, and each inner product with each of $v_4,\ldots,v_{n+2}$ is less than zero, by the same sign argument. (Each residual is nonzero: if some $v_k$ lay in the span of $w_1,\ldots,w_{k-1}$, its coefficients would all be negative, so its inner product with the next vector $v_{k+1}$ would be a sum of products of two negative numbers, hence positive.) Repeating this process $n$ times, we get $n$ orthonormal vectors $w_1,w_2,\ldots,w_n$ such that $v_{n+1}$ and $v_{n+2}$ have negative inner product with each of them. Thus, in coordinates with respect to $w_1,\ldots,w_n$, the vectors $v_{n+1}$ and $v_{n+2}$ lie in the same open orthant (all coordinates negative), so $\langle v_{n+1},v_{n+2}\rangle>0$ and the angle between them is acute, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometry-Triangle Let $ABC$ be a triangle with $DAE$, a straight line parallel to BC such that $DA=AE$. If $CD$ meets $AB$ at X and $BE$ meets $AC$ at $Y$, prove that $XY$ is parallel to $BC$ I tried to use the angle approach by couldn't work out the problem as I couldn't effectively prove the thing using alternate angles approach.
We have that $\triangle{AXD}$ and $\triangle{BXC}$ are similar and that $\triangle{EYA}$ and $\triangle{BYC}$ are similar, and so $$AX:BX=AD:BC=AE:BC=AY:CY,$$ from which the claim follows by the converse of the basic proportionality (Thales) theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that there exists $\lambda_{\sigma(1)}$ such that $\mu(A\cap\{\lambda_{\sigma(1)}\neq0\})>0$? Let $(\mathcal F,\Omega,\mu)$ be a measure space and $A\subseteq\Omega$ such that $\mu(A)>0$. Let $L^0$ be the space of all measurable functions. We say $X_1,\ldots,X_k\in(L^0)^d=\prod_{k=1}^dL^0$ are linearly independent on $A$ if $(0,\ldots,0)$ is the only vector $(\lambda_1,\ldots,\lambda_k)\in1_A(L^0)^d$ satisfying $$\lambda_1X_1+\ldots+\lambda_kX_k=0$$ Suppose $X_1,\ldots,X_k\in(L^0)^d$ are linearly independent on $A$ and $$\text{span}_A\{X_1,\ldots,X_k\}\subseteq\text{span}_A\{Y_1,\ldots,Y_l\}$$ for some $Y_1,\ldots,Y_l\in(L^0)^d$. I'm trying to prove that there exists a $\sigma(1)\in\{1,\ldots,l\}$ such that $$\mu(A\cap\{\lambda_{\sigma(1)}\neq0\})>0$$ I tried to conclude this from the fact we can write $$1_AX_1=\sum_{i=1}^l\lambda_i1_AY_i$$ while couldn't arrive to any result, Can somebody help me, please?
Suppose this is not true. Then we would have $\mu(A\cap\{\lambda_i\ne0\})=0$ for all $i\in\{1,2,\dotsc,l\}$. Hence, $$1_A X_1 = \sum_{i=1}^l \lambda_i1_AY_i$$ implies that $1_AX_1=0$. Thus, the vector $(1_A,0,\dotsc,0)$, which is nonzero in $1_A(L^0)^d$ since $\mu(A)>0$, satisfies $$1_AX_1 + 0X_2 + \dotsb + 0X_k = 0,$$ which contradicts the linear independence of $(X_1,X_2,\dotsc,X_k)$ on $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is $\int_0^a\frac{f(x)^2}{\int_0^x f(t)\,dt}\,dx\geq\int_0^a \frac{g(x)^2}{\int_{0}^{x}{g(t)}\,dt}\,dx.$? Suppose $f(x)\geq g(x)>0$ and that $f,g$ are suitably defined so that the integrals below make sense. I want to know if $$\int_0^a\frac{f(x)^2}{\int_0^x f(t)\,dt}\,dx\geq\int_0^a \frac{g(x)^2}{\int_{0}^{x}{g(t)}\,dt}\,dx.$$ I guess I'll need some assumption to take care of fact that the integrals are improper, as the denominators are zero when $x=0$. If needed, I can also assume that $f$ and $g$ are decreasing functions. I'm actually more interested in the discrete counterpart to this problem but was hoping that the continuous version might give some insights. Does this integral inequality resemble some other problem? What techniques are helpful for such problems.
This is equivalent to showing \begin{align} \int_0^a\frac{f(x)^2}{\int_0^x f(t)\,dt}\,dx-\int_0^a \frac{g(x)^2}{\int_{0}^{x}{g(t)}\,dt}\,dx \ge0. \end{align} So we want \begin{align} 0 &\le\int_0^a\left[{\frac{f(x)^2}{\int_0^x f(t)\,dt} -\frac{g(x)^2}{\int_{0}^{x}{g(t)}\,dt} }\right]dx =\int_0^a\left[{\frac{f(x)^2\int_{0}^{x}{g(t)}\,dt-g(x)^2\int_{0}^{x}{f(t)}\,dt}{\int_0^x f(t)\,dt\int_{0}^{x}{g(t)}\,dt}} \right]dx. \tag{1} \end{align} Setting $A:=\int_{0}^{x}{f(t)}\,dt$ and $B:=\int_{0}^{x}{g(t)}\,dt$ (both depend on $x$, and both are positive since $f,g>0$), (1) becomes \begin{align} 0 \le \int_0^a\left[{\frac{Bf(x)^2 -Ag(x)^2}{AB }} \right]dx. \tag{2} \end{align} A sufficient (though not necessary) condition for (2) is that the integrand is pointwise nonnegative, i.e. $Bf(x)^2-Ag(x)^2\ge0$. Factoring, \begin{align} Bf(x)^2 -Ag(x)^2=\left(\sqrt{B}f(x) -\sqrt{A}g(x)\right)\left(\sqrt{B}f(x)+\sqrt{A}g(x)\right), \end{align} and the second factor is positive, so the pointwise condition reduces to $\sqrt{B}f(x) \ge\sqrt{A}g(x)$, that is, \begin{align} \frac{f(x)}{g(x)}\ge\sqrt{\frac{A}{B}}=\sqrt{\frac{\int_0^x f(t)\,dt}{\int_0^x g(t)\,dt}}\qquad\text{for all }x\in(0,a]. \end{align} Note that this is genuinely stronger than the bare assumption $f\ge g$: that assumption gives $A\ge B$, so the right-hand side is $\ge1$, and $f/g$ has to dominate it at every point, not merely exceed $1$. In summary: $f,g$ positive and integrable on $[0,a]$ with $f(x)/g(x)\ge\sqrt{\int_0^x f/\int_0^x g}$ for all $x$ is sufficient. One also needs some care near $0$ for the improper integrals: if $f$ is continuous with $f(0)>0$ then $f(x)^2/\int_0^xf(t)\,dt\sim f(0)/x$ as $x\to0^+$, which is not integrable, so both sides are $+\infty$ unless $f$ (and likewise $g$) vanishes suitably at $0$. Let me know if I missed something.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to calculate this integral in one variable I want to calculate this integral $$ \int_0^{2\pi} - \frac{\cos t \; ( 2 (\sin t)^2 + (\cos t)^2)}{(\cos t)^4 + (\sin t)^2} \, dt $$ Can I use a suitable substitution?
\begin{align} & I=\int_{0}^{2\pi }{-}\frac{\cos t\ (2{{\sin }^{2}}t+{{\cos }^{2}}t)}{{{\cos }^{4}}t+{{\sin }^{2}}t}dt=-2\int_{0}^{\pi }{\frac{\cos t\ (1+{{\sin }^{2}}t)}{{{(1-{{\sin }^{2}}t)}^{2}}+{{\sin }^{2}}t}dt} \\ & I=-2\int_{0}^{\frac{\pi }{2}}{\frac{\cos t\ (1+{{\sin }^{2}}t)}{{{(1-{{\sin }^{2}}t)}^{2}}+{{(\sin t)}^{2}}}dt}-2\int_{\frac{\pi }{2}}^{\pi }{\frac{\cos t\ (1+{{\sin }^{2}}t)}{{{(1-{{\sin }^{2}}t)}^{2}}+{{(\sin t)}^{2}}}dt} \\ \end{align} let $u=\sin t$, we have $$I=-2\int_{0}^{1}{\frac{\ 1+{{u}^{2}}}{{{(1-{{u}^{2}})}^{2}}+{{u}^{2}}}du}+2\int_{0}^{1}{\frac{\ 1+{{u}^{2}}}{{{(1-{{u}^{2}})}^{2}}+{{u}^{2}}}du}=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is A+B digonalizable if they share the same basis of eigenvectors I was given the following statement: I know that the sum of two diagonalizable matrices is not allways diagonalizable, but i'm not sure how the added element of the shared base contributes.. I would very much appreciate an explanation or some guidance..
The same basis is also a basis of eigenvectors for $A+B$: if $v$ is an element of this basis, we have $(A+B)v=Av+Bv=\lambda v+\gamma v=(\lambda +\gamma)v$, so $v$ is an eigenvector of $A+B$. Hence $A+B$ is diagonalizable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How is the function $f: \mathbb{Z} \to \mathbb{R}$ continuous? Where $\mathbb{Z}$ is the set of integers and $\mathbb{R}$ the set of real numbers. In a question in a problem sheet, it said this statement was correct, however I do not understand how. You clearly cannot even begin to draw this function without a lot of gaps. I suppose when the $\lim_{x\to Z_1} f(x) = f(Z_1)$. So is that the reason why the function is continuous? Edit: This question came up in a first year university analysis module so I'm not too sure what topology means. Also, I use the standard epsilon, delta definition of continuity.
$f:\Bbb Z \to \Bbb R$ is continuous since it cannot be noncontinuous. Let me remind you: a function $f$ is noncontinuous if there is an $x_0$ in its domain at which $f$ is noncontinuous. By definition${}^\dagger$, if a function $f$ is noncontinuous at $x_0$, then there exists a small distance $\epsilon$ such that every neighborhood of $x_0$ contains a point $p$ with $|f(p)-f(x_0)|>\epsilon$. But this cannot happen here, since there are neighborhoods of $x_0$ that contain no point of $\Bbb Z$ other than $x_0$ itself, and for any $p$ in such a neighborhood $|f(p)-f(x_0)|=0$. This denies the consequent of the previous proposition, so by modus tollens, $f$ is continuous. ${}^\dagger$ You may not have seen a definition of a function being noncontinuous at a point, so you may define it as the negation of the definition of being continuous at a point. Since the definition of being continuous at a point has the form $\forall \epsilon\, \exists \delta\, P$, the definition of being noncontinuous at a point should have the form $\exists \epsilon\, \forall \delta\, \neg P$, with the same $\epsilon,\delta,P$ as the previous definition.
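Since you mention the standard $\epsilon$-$\delta$ definition, here is the direct version of the same idea: given $x_0\in\Bbb Z$ and $\epsilon>0$, take $\delta=\tfrac12$. If $x\in\Bbb Z$ and $|x-x_0|<\tfrac12$, then $x=x_0$, so $|f(x)-f(x_0)|=0<\epsilon$.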
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
find the greatest value of $m$ such that $\text{lcm}(1,2,3,..,n)=\text{lcm}(m,m+1,..,n).$ I am stuck and unable to proceed. The value of $n$ can be very large. For example, if $n=6,\ \text{lcm}(1,2,...,6)=60$, so the answer will be $4$ in this case. Since $\text{lcm}(2,3,4,5,6)=60,\ \text{lcm}(3,4,5,6)=60,\ \text{lcm}(4,5,6)=60$ and $\text{lcm}(5,6)=30...$ the largest $m$ in this case will be $4$. So in the question I am given any value of $n$ in the range $1$ to $10^9$ and I need to tell the largest value of $m$.
Let $L_n$ be the left-hand side, and let $R_{m,n}$ be the right-hand side. Note that $L_n$ is divisible by all of the greatest prime powers $p^r \leq n$. In fact, $$L_n= \prod_{p \leq n} p^r$$ where $p$ is prime and $r$ is the greatest exponent such that $p^r \leq n$. Clearly $R_{m,n} \mid L_n$. Therefore, $$L_n=R_{m,n} \iff p^r \mid R_{m,n}$$ for each prime $p \leq n$ and greatest $r$. Set $m_p=\max\{kp^r|1 \leq k \leq n/p^r\}$. Then $m_p \leq n$ is the largest value divisible by $p^r$. Finally, we know $$m=\min\{m_p | p \leq n\}$$
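A direct implementation of this characterization (a Python sketch using SymPy's prime generator, assumed available; fine for moderate $n$, though $n$ up to $10^9$ would need something cleverer than iterating over every prime):

```python
from sympy import primerange

def largest_m(n):
    best = n
    for p in primerange(2, n + 1):
        pr = p
        while pr * p <= n:      # pr becomes the largest power p^r <= n
            pr *= p
        m_p = (n // pr) * pr    # largest multiple of p^r not exceeding n
        best = min(best, m_p)
    return best

print(largest_m(6))  # 4, matching lcm(4,5,6) = lcm(1,...,6) = 60
```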
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A function taking values in a countable product of metric spaces is uniformly continuous iff its coordinate functions are. Let $\lbrace (M_n,d_n) \rbrace _{n \in I}$ be a countable family of metric spaces and define a metric in the product $\prod _{n \in I}M_n$ as follows: $$d(\vec{x},\vec{y})=\sum _{n \in I} \min \lbrace \frac{1}{2^n},d_n(x_n,y_n)\rbrace$$ Now, let $(X,d_X)$ be a metric space and take a function $f:X \to \prod _{n \in I}M_n$, I'm trying to prove that if all the coordinate functions $f_n:X \to M_n$ are uniformly continuous so it is $f$ (the converse follows from the facts that the projections are u.c. and a composition of u.c. functions is also u.c.). This is what I thought: If I want $d(f(x),f(y))<\epsilon$ it would be enought to have $d_n(f_n(x),f_n(y))<\frac{\alpha}{2^n}$ for all $n \in I$ and some fixed $\alpha <\epsilon$ because if $\alpha \geq 1$ it means that $\epsilon>1$ and in that case we are done because $d(f(x),f(y))\leq 1$ for all $x,y \in X$ and if $\alpha <1$ then we get $d(f(x),f(y))<\alpha$. Now, since every $f_n$ is u.c. we know there are some $\delta _n$ such that if $d_X(x,y)<\delta _n$ then $d_n(f_n(x),f_n(y))<\frac{\alpha}{2^n}$, so if I could take $d_X(x,y)<\inf_{n \in I} \delta_n$ everything would fit quite well, the only problem is that I don't see any reason to assume $\inf_{n \in I} \delta_n \not=0$, is there any reason to make that assumption? Or perhaps this is not the right approach, in that case any advice on how to solve it is welcome. Thanks in advance
HINT: You’re not taking full advantage of the definition of $d$. Let $\epsilon>0$. Then there is an $m\in\Bbb N$ such that $$\sum_{n\ge m}\frac1{2^n}<\frac{\epsilon}2\;.$$ Then for any $\vec x,\vec y$ we have $$d(\vec x,\vec y)<\frac{\epsilon}2+\sum_{n<m}d_n(x_n,y_n)\;,$$ and we’re essentially dealing with only finitely many of the metrics $d_n$, those for which $n<m$. Can you finish it off from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the meaning of subtracting from the identity matrix? If I subtract the matrix $A$ from the identity matrix $I$, $I - A$, is there a meaning to the resulting matrix perhaps given some conditions like invertibility or symmetry? For example, $$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1-a & -b \\ -c & 1-d \end{bmatrix} $$ For future reference, is there a good reference to lookup answers to such questions when I can't derive the answer on my own?
The difference $I - A$ is sometimes used with respect to projection matrices, where $A$ is the projection onto a subspace and $I - A$ is the projection onto the orthogonal complement of that subspace.
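For instance, if $A$ is a projection ($A^2=A$), a one-line check shows that $I-A$ is again a projection: $$(I-A)^2=I-2A+A^2=I-2A+A=I-A,$$ and $(I-A)A=A-A^2=0$, so $A$ and $I-A$ project onto complementary pieces.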
{ "language": "en", "url": "https://math.stackexchange.com/questions/1823891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Using limits to determine if $a_n=\frac{(\ln n)^2}{\sqrt{n}}$ (without L'Hospital) is convergent $a_n=\frac{(\ln n)^2}{\sqrt{n}}$ $(n\ge1)$ Would I use the squeeze theorem or can I just take the limit of top and bottom? using L'Hopital, I get $a_n=\frac{(4\ln n)}{\sqrt{n}}$ $(n\ge1)$
$$\ln{n}=4\ln{n^{\frac{1}{4}}}$$ and $$\sqrt{n}=(n^{\frac{1}{4}})^2$$ and you know $\lim_{x \rightarrow \infty} \frac{\ln{x}}{x}=0$ thus $$\lim_{n \rightarrow \infty}a_{n}=\frac{16(\ln{n^{1/4})^2}}{(n^{1/4})^2}=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Notation for enumerating a set Is there a common notation for enumerating a set? For example if $A=\{2,4,6,\ldots,n \}$ is the set of even numbers, I would like to know the notation that enumerates ordered pairs $(e,i) \in \operatorname{enumerate}(A)$, where $e$ is the $i$:th element of $A$. So the sequence would be $(2,1),(4,2),\ldots,(n,m)$. Would $(e,i) \in \operatorname{enumerate}(A)$ be ambiguous or is there a more common way of describing this kind of enumeration?
Normally, you would just write $A = \{a_i : i \in \omega\}$ or $A = \{a_i : i \leq n\}$ where $a_i = 2i$. Of course a sequence such as $(a_n)$ is defined to be a function; the sequence of pairs you have described is the graph of the function $a$. We could write $a(i)$ instead of $a_i$. And, in the usual set theory way, we can identify $a$ with its graph, so that $a$ is a set of ordered pairs. But it is more common to write $a_i = 2i$ than it is to write $(i,2i) \in a$. Particularly outside of set theory, it is more common to treat functions as a kind of first-class object than to treat them as sets of ordered pairs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How would you find the roots of $x^3-3x-1 = 0$ I'm not too sure how to tackle this problem. Supposedly, the roots of the equation are $2\cos\left(\frac {\pi}{9}\right),-2\cos\left(\frac {2\pi}{9}\right)$ and $-2\cos\left(\frac {4\pi}{9}\right)$ How do I start? The Cosines seem especially scary...
HINT: Let $x=b\cos A$ $$\implies b^3\cos^3A-3(b\cos A)=1$$ As $\cos3A=4\cos^3A-3\cos A,$ $$\dfrac43=\dfrac{b^3}{3b}\implies b^2=4\text{ as }b\ne0$$ Let $b=2$. Consequently, $$2\cos3A=1\iff\cos3A=\dfrac12=\cos\dfrac\pi3$$ $$3A=2n\pi\pm\dfrac\pi3=\dfrac\pi3(6n\pm1)$$ where $n$ is any integer, so $$A=\dfrac\pi9(6n\pm1),$$ and taking $n\equiv0,\pm1\pmod3$ (say $n=0,1,-1$) gives the three distinct roots $x=2\cos A$. What if $b=-2 :)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving Bijections I'm working on this question here but I am having some trouble: $f: ℝ ⇒ ℝ$, $f(x) = x^3 - 6x$. A) Is $f(x)$ injective? B) Is $f(x)$ surjective? C) Is $f(x)$ bijective? My attempted solution: A) $f(x)$ is injective if and only if $f(x) = f(y)$ $⇒$ $x = y$ $⇔$ $x^3 - 6x = y^3 - 6y$ $⇔$ $x(x^2 - 6) = y(y^2 - 6)$ $⇔$ $x(x - √6)(x + √6) = y(y - √6)(y + √6)$ I saw this step in a similar example but don't understand exactly what is going on here; an explanation would be great. At this point I imagine from that last line above these $2$ values for x are derived from it? ($0$ and $√6$) $f(0) = x^3 - 6x$ $=$ $0^3 - 6(0) $ $= 0$ $f(√6) = x^3 - 6x$ $= √6^3 - 6(√6)$ $= 0$ $∴ f(x)$ is not injective since $f(0) = 0 = f(√6)$ B) To show f(x) is surjective, we can assume $y = x^3 - 6x$ (a bit stuck on this step, or maybe there is nothing that can be done ∴ I know it's not surjective right away?) $y = x^3 - 6x $ what more can I do from here? C) Regardless of figuring out part B or not, I still know that $f(x)$ is not a bijection since it's not injective. Thanks!
For proving surjectivity, note that $\lim\limits_{x\to\infty}f(x)=\infty$ and $\lim\limits_{x\to -\infty}f(x)=-\infty$. Further note that $f(x)$ is a polynomial and it is known that polynomials are continuous. An application of the Intermediate Value Theorem will show that each possible value of $y$ can be achieved by some $x$. More specifically, we know that cubics (being polynomials of odd degree) will always have at least one real root. Given a specific $y$, we try to find what value of $x$ will give that value of $y$. $y=f(x)=x^3-6x\Rightarrow 0=x^3-6x-y$ where $y$ is known. We know it will have a real root for $x$. We do not necessarily need to find exactly what it is, but via Cardano's method we find that it would be something along the lines of $\frac{\sqrt[3]{\sqrt{y^2-32}+y}}{\sqrt[3]{2}}+\frac{2\sqrt[3]{2}}{\sqrt[3]{\sqrt{y^2-32}+y}}$. You are not expected to know this. For proving/disproving injectivity, all you need to do is find an instance where $f(x)=f(y)$ despite $x\neq y$ (to prove it is not injective) or you need to show that $f(x)=f(y)$ implies that $x=y$. With $f$ a polynomial of low degree, we attempt to factor it. A polynomial is in factored form if it is of the form $f(x)=(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)\cdots (x-\alpha_n)$. Each of these $\alpha_i$ is what is called a "root", a point at which $f(x)=0$. In your case $f(x)=x^3-6x=x(x-\sqrt{6})(x+\sqrt{6})$, so there are three roots: $f(0)=f(\sqrt{6})=f(-\sqrt{6})=0$. In particular, these roots are distinct, so we have found an occurrence of an $x$ and a $y$ where $f(x)=f(y)$ but $x\neq y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find a subspace $W$ of $\mathbb{F}^4$ such that... - checking my answer! The question is the same as one previously asked, but I can't comment so I had to ask my own. Find a subspace $W$ of $\mathbb{F}^4$ such that $\mathbb{F}^4 = U \oplus W$ Suppose $U=\{ (u_1,u_1,u_2,u_2)\in \mathbb{F}^4:u_1,u_2\in \mathbb{F}\}$. Find a subspace $W$ of $\mathbb{F}^4$ such that $\mathbb{F}^4 = U\oplus W$. I chose $W = \{ (0,w_1,0,w_2)\in \mathbb{F}^4:w_1,w_2\in \mathbb{F}\}$. Is this incorrect or can I use it just as previous answers have been $(0,w_1,w_2,0)$? From what I can tell, my answer allows $U\cap W = \{0\}$ and $U+W=\{(u_1,u_1+w_1,w_1,u_2+w_2)\in\mathbb{F}^4:u_1,u_2,w_1,w_2\in\mathbb{F}\}=\mathbb{F}^4$. Is this a sufficient argument?
Actually, the example you provide does work; the slip is in your formula for $U+W$. With $U=\{(u_1,u_1,u_2,u_2)\}$ and $W=\{(0,w_1,0,w_2)\}$ we get $U+W=\{(u_1,u_1+w_1,u_2,u_2+w_2)\}$ (the third coordinate is $u_2$, not $w_1$). Given any $(a,b,c,d)\in\mathbb{F}^4$, take $u_1=a$, $w_1=b-a$, $u_2=c$, $w_2=d-c$, so $U+W=\mathbb{F}^4$; and $U\cap W=\{0\}$, since a vector lying in both must have $u_1=0$ (first coordinate) and $u_2=0$ (third coordinate). Hence $\mathbb{F}^4=U\oplus W$. A more systematic way to produce a complement (when $\mathbb{F}$ is $\mathbb{R}$ or $\mathbb{C}$) is to take the orthogonal complement $U^{\bot}$ of $U$: $$U^{\bot}=\left\lbrace X=(x,y,z,t)\in\mathbb{F}^4\mid\,\langle X,v\rangle=0,\mbox{ for all }v\in U \right\rbrace ,$$ where $\langle \cdot,\cdot\rangle$ represents the usual inner product of $\mathbb{F}^4$. We can write $$U=\mbox{span} \left\lbrace \begin{pmatrix} 1\\ 1\\ 0\\ 0 \end{pmatrix} , \begin{pmatrix} 0\\ 0\\ 1\\ 1 \end{pmatrix} \right\rbrace .$$ Therefore $X=(x,y,z,t)\in U^{\bot}$ if and only if: \begin{align*} \left\{ \begin{array}{l} \overline{x}=-\overline{y}\\ \overline{z}=-\overline{t} \end{array} \right. \end{align*} This system provides $$U^{\bot}=\mbox{span} \left\lbrace \begin{pmatrix} -1\\ 1\\ 0\\ 0 \end{pmatrix} , \begin{pmatrix} 0\\ 0\\ -1\\ 1 \end{pmatrix} \right\rbrace .$$ You can immediately check that $U\oplus U^{\bot}=\mathbb{F}^4$, which gives another valid choice of $W$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $\gamma(t)=(0,t)$ a geodesic in the hyperbolic plane? I'm having trouble understanding a very simple fact in do Carmo's book "Riemannian Geometry". On page 73 he calculates the geodesics of the hyperbolic plane: $$ \mathbb{R}^2_+ = \{ (x,y) \in \mathbb{R}^2 : y>0 \} \qquad g_{ij}= y^{-2} \delta_{ij} $$ To do so he considers the curve $\gamma(t)=(0,t)$, which is a geodesic. But when I consider the equations of the geodesics I don't get this conclusion. The Christoffel symbols for the hyperbolic plane with this metric are: $$\Gamma^x_{xx}=\Gamma^y_{xy}=\Gamma^x_{yy}=0 \quad \Gamma^y_{xx}=\frac{1}{y} \quad \Gamma^x_{xy} = \Gamma^y_{yy} = -\frac{1}{y}$$ And then the equations of the geodesics (the mixed symbol $\Gamma^x_{xy}=\Gamma^x_{yx}$ contributes twice): $$ \frac{d^2x}{dt^2} - \frac{2}{y}\frac{dx}{dt} \frac{dy}{dt} = 0 $$ $$ \frac{d^2y}{dt^2} + \frac{1}{y} \left( \frac{dx}{dt} \right)^2 - \frac{1}{y} \left( \frac{dy}{dt} \right)^2 = 0$$ Then when I consider the given curve it satisfies the first equation but not the second. What am I missing here?
As he says, it's the image of a geodesic; the parametrization $\gamma(t)=(0,t)$ is not proportional to arc length. Reparametrized as $\gamma(s)=(0,e^s)$ it has constant hyperbolic speed $1$ and satisfies both geodesic equations: $\dot x=0$ handles the first, and $\ddot y - \frac{1}{y}\dot y^2 = e^s - \frac{1}{e^s}e^{2s} = 0$ handles the second.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Finding the pdf of the difference of minimum and maximum of a finite set of random variables. Let $X_i$ $ (1\leq i\leq n)$ be identically distributed uniformly on $(0,1)$. Let $U = \min_i(X_i)$, $V = \max_i(X_i)$. Find the pdf of $V-U$ This is what I did. I found the cdf and differentiated. $$F_{V-U}(x)= \mathbb{P}(V-U \leq x)$$ We fix two of the random variables as the minimum and maximum and we of course have $2\binom{n}{2}$ choices and for each choice we fix the minimum, find the probability for the maximum and integrate. $$\int_0^1 x-u \quad du = x - \frac{1}{2}$$ So $$F_{V-U}(x) = 2(x-0.5)\binom{n}{2} \quad x\in(0,1)$$ and cdf equals $0$ elsewhere. So $f_{V-U}(x) = 2\binom{n}{2} \quad x\in(0,1)$ but this doesn't make much sense since it doesn't integrate to $1$ over the $(0,1)$ interval. How can I fix it? Where did I go wrong?
Your argument for constructing the order statistic's density function is almost correct.   You want the count of ways to select two of the samples to take the required values, times the density functions for each of those values, times the probability that the remaining $n-2$ samples lie between these values. Next, it will be easier to integrate the probability of the complement. $$\begin{align}F_{V-U}(x) =&~ 1- \mathsf P(V-U>x) \\=&~ 1-\int_x^1\int_0^{v-x} f_{U,V}(u,v)\operatorname d u\operatorname d v \\=&~ 1- n(n-1)\int_x^1\int_{0}^{v-x} f_X(u)(F_X(v)-F_X(u))^{n-2}f_X(v) \operatorname d u\operatorname d v\\ =&~ 1-n(n-1)\int_x^1\int_0^{v-x} (v-u)^{n-2}\operatorname d u\operatorname d v \end{align}$$
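Carrying the integration through: the inner integral is $\int_0^{v-x}(v-u)^{n-2}\operatorname d u = \frac{v^{n-1}-x^{n-1}}{n-1}$, so $$F_{V-U}(x) = 1 - n\int_x^1\left(v^{n-1}-x^{n-1}\right)\operatorname d v = x^n + nx^{n-1}(1-x),$$ and differentiating gives the density of the range, $$f_{V-U}(x) = n(n-1)\,x^{n-2}(1-x),\qquad 0<x<1,$$ i.e. a $\mathsf{Beta}(n-1,\,2)$ distribution.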
{ "language": "en", "url": "https://math.stackexchange.com/questions/1824986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the generating function to split $n$ into odd parts I have been recently working with generating functions in my discrete mathematics course, and I was interested in one particular generating function. I want to find the generating function for the number of ways one can split $n$ into odd parts. I can see that the first coefficients of the sequence are $$0,1,1,2,2,3,4,5,...$$ and so on. I've been trying to find a recursion; while it originally seemed that the number of ways was the ceiling of $\frac{n}{2}$, the $6$th coefficient suggests otherwise. Is there another possible recursion going on in this sequence? If so, how would I go about finding the generating function for it?
The generating function for this sequence is $f(x)=\prod\limits_{n=1}^{\infty} \cfrac{1}{1-x^{2n-1}}$, since each factor tallies how many copies of a given odd part are used: $$f(x)=(1+x+x^{1+1}+\cdots)(1+x^3+x^{3+3}+\cdots)\cdots (1+x^{2n-1}+x^{(2n-1)+(2n-1)}+\cdots)\cdots$$ $$= (1+x+x^2+\cdots)(1+x^3+x^{2\cdot3}+\cdots)\cdots (1+x^{2n-1}+x^{2(2n-1)}+\cdots)\cdots$$ $$=\cfrac{1}{1-x}\cdot\cfrac{1}{1-x^3}\cdots\cfrac{1}{1-x^{2n-1}}\cdots=\prod\limits_{n=1}^{\infty} \cfrac{1}{1-x^{2n-1}}$$ Moreover, you can express this function as a product over distinct parts: $$\prod\limits_{n=1}^{\infty} (1+x^n)=(1+x)(1+x^2)(1+x^3)\cdots=\cfrac{1-x^2}{1-x}\cdot\cfrac{1-x^4}{1-x^2}\cdot\cfrac{1-x^6}{1-x^3}\cdots=\cfrac{1}{1-x}\cdot\cfrac{1}{1-x^3}\cdots=f(x)$$
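A quick sanity check of the low-order coefficients (an illustrative sketch; truncating the infinite product at parts $\leq 9$ is exact through $x^8$ — note the constant term is $1$, counting the empty partition, where the question's list starts with $0$):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Integer(1)
for k in range(1, 10, 2):      # odd parts 1, 3, 5, 7, 9
    f *= 1 / (1 - x**k)
print(sp.series(f, x, 0, 8))
# 1 + x + x**2 + 2*x**3 + 2*x**4 + 3*x**5 + 4*x**6 + 5*x**7 + O(x**8)
```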
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the matrix of this particular quadratic form I have been working on problems related to bilinear and quadratic forms, and I came across an introductory problem that I have been having issues with. Take $$Q(x) = x_1^2 + 2x_1x_2 - 3x_1x_3 - 9x_2^2 + 6x_2x_3 + 13x_3^2$$ I want to find a matrix $A$ such that $Q(x) = \langle Ax,x \rangle$. My initial guess was to simply establish this via a coefficient matrix, i.e., $$A = \begin{bmatrix} 1 & 2 & -3\\ 2 & -9 & 6\\ -3 & 6 & 13\end{bmatrix}$$ However, upon closer inspection, I see that this matrix does not produce our desired outcome. Is there a more reasonable algorithm for generating the matrix $A$ of a quadratic form?
The matrix associated with your quadratic form is $$Q=\begin{pmatrix}1 & 1 & -\frac{3}{2} \\ 1 & -9 & 3 \\ -\frac{3}{2} & 3 & 13 \end{pmatrix}$$ To get a symmetric matrix, the diagonal entries are the coefficients of the squares $x_i^2$, while each off-diagonal pair $Q_{ij}=Q_{ji}$ is half the coefficient of the cross term $x_ix_j$, since that term is produced twice in $\langle Qx,x\rangle$. The characteristic polynomial of $Q$ is $$ q(x) = x^3-5x^2-\frac{501}{4}x+\frac{511}{4}$$ so the eigenvalues are $$\approx -9.5372,\qquad \approx 0.9886,\qquad \approx 13.5486. $$
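A quick numerical check (illustrative only):

```python
import numpy as np

Q = np.array([[ 1.0,  1.0, -1.5],
              [ 1.0, -9.0,  3.0],
              [-1.5,  3.0, 13.0]])

x = np.array([1.0, 2.0, -1.0])                  # any test vector works
lhs = x @ Q @ x
rhs = (x[0]**2 + 2*x[0]*x[1] - 3*x[0]*x[2]
       - 9*x[1]**2 + 6*x[1]*x[2] + 13*x[2]**2)
print(np.isclose(lhs, rhs))     # True: <Qx, x> reproduces the quadratic form
print(np.linalg.eigvalsh(Q))    # approx [-9.5372, 0.9886, 13.5486]
```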
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is this the correct definition of a $T_1$ space? I found this in a handwritten note: Definition: A topological space $X$ is $T_1$ if $\forall x \neq y \in X$ there exists a neighborhood $V$ of $y$ such that $x \not\in V$. I was almost certain that this was incorrect, because it looked like the definition of a $T_0$ space. However, subsequently this definition was used to prove that all singletons are closed, for example: $(\Leftarrow)$ Suppose $\{x\}$ is closed; then $\{x\}^c$ is open, and $y \in \{x\}^c$ satisfies the definition of a $T_1$ space. Is this some alternative definition of a $T_1$ space?
The definition is correct, albeit a little sloppy. Here’s a slightly more careful version of it: $X$ is $T_1$ if for each $x\in X$ and each $y\in X\setminus\{x\}$ there is a nbhd $V$ of $y$ such that $x\notin V$. Let $p$ and $q$ be any two distinct points of $X$. If we set $x=p$ and $y=q$, we see that the definition ensures that $q$ has a nbhd that does not contain $p$. We can just as well set $x=q$ and $y=p$, however, and conclude that $q$ has a nbhd that does not contain $p$. Thus, the definition really does say that $X$ is $T_1$. The point is that since both quantifiers are universal, we can interchange the rôles of $x$ and $y$ in the definition. The real problem with the quoted version is that it doesn’t make the quantifier on $x$ explicit. Because of this, it’s easy on first reading to get the impression that the definition is asymmetric in $x$ and $y$, like the definition of $T_0$ separation.
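A standard example showing how far apart these notions are: the cofinite topology on an infinite set is $T_1$ — every singleton is closed — yet no singleton is open, so $T_1$ is strictly weaker than discreteness.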
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove inequality for $xyz=1$ I am having trouble proving an intermediate step: $(xy+xz+yz)^2 \geq 3xyz(x+y+z)$ for $x,y,z\geq0$.
Using that $xyz=1$, we see that the inequality we want to prove is equivalent to $$(1/x + 1/y+1/z)^2 \geq 3xyz(x+y+z) \fbox {1} $$ Now replace $x,y,z$ by $1/x,1/y,1/z$, and notice that the constraint for the new variables stays the same, i.e. $xyz=1$ again. Substituting into $\fbox{1}$ we get $$ ( x+y+z )^2\geq 3xyz(1/x+1/y+1/z)$$ So, $$ ( x+y+z )^2\geq 3xy+3yz+3zx$$ By expanding, canceling, and multiplying by 2 we get $$2(x^2+y^2+z^2) \geq 2(xy+yz+zx)$$ But now by bringing everything to the left side and factoring we get $$(x-y)^2+(y-z)^2 + (z-x)^2\geq 0$$ And we conclude.
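Worth noting: the constraint $xyz=1$ is not actually needed for the intermediate step. Setting $u=xy$, $v=yz$, $w=zx$, one has $uv+vw+wu=xyz(x+y+z)$, so the claim is exactly $(u+v+w)^2\geq 3(uv+vw+wu)$, which reduces as above to $\frac{1}{2}\left[(u-v)^2+(v-w)^2+(w-u)^2\right]\geq 0$.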
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that if $f(x)$ is continuous on $[a,b]$ and strictly increasing on $(a,b)$, then $f(x)$ is strictly increasing on $[a,b]$. My attempt: Because $f(x)$ is continuous at $a$, $\lim_{x\rightarrow a^+}f(x)=f(a)$. As $x\rightarrow a^+$ the values $f(x)$ decrease, so for any $x \in (a,b)$, $f(x)>f(a)$. It can also be shown that $f(x)<f(b)$ in the same way. Did I prove it in a rigorous way? Thanks in advance!
The idea is very simple. Since $f$ is strictly increasing on $(a, b)$ it follows that if we have $a < x < b$ then we can choose $y, z$ such that $a < y < z < x < b$ and then $f(y) < f(z) < f(x)$. Letting $y \to a^{+}$ we get $f(a) \leq f(z) < f(x)$ and thus $f(a) < f(x)$. Similarly it can be proved that $f(x) < f(b)$. It follows that $f$ is strictly increasing on $[a, b]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Eigenvector of polynomial Suppose that $T: V \rightarrow V$ is an endomorphism of the linear space $V$ (over $\mathbb{K}$) and that $p(X)$ is a polynomial with coefficients in $\mathbb{K}$. Show that if $x$ is an eigenvector of $T$ then it is also an eigenvector of $p(T)$. My attempt: If $x$ is an eigenvector of $T$, that means $T(x) = \lambda x$ ($\lambda$ being the eigenvalue associated to $x$). My next step is the one I feel is not correct: $p (T(x)) = p (\lambda x)$, so $\lambda$ is an eigenvalue of the polynomial. I don't feel this is a correct assumption — can we immediately conclude this?
Here is an outline. Step 1: Prove by induction that if $Tx=\lambda x$, then $T^n x=\lambda^n x$. Step 2: Show that if $Tx=\lambda x$ and $Sx=\mu x$, then $(T+S)x=(\lambda+\mu)x$. Step 3: Combine the first two steps to show that if $Tx=\lambda x$, then $P(T)x=P(\lambda)x.$
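Spelling out Step 3: writing $P(X)=\sum_{k=0}^m c_k X^k$, Steps 1 and 2 give $$P(T)x=\sum_{k=0}^m c_k T^k x=\sum_{k=0}^m c_k \lambda^k x=P(\lambda)x,$$ so $x$ (being nonzero) is an eigenvector of $P(T)$ with eigenvalue $P(\lambda)$.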
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Orthogonal Projection onto the $ {L}_{\infty} $ Unit Ball What is the Orthogonal Projection onto the $ {\ell}_{\infty} $ Unit Ball? Namely, given $ x \in {\mathbb{R}}^{n} $ what would be: $$ {\mathcal{P}}_{ { \left\| \cdot \right\| }_{\infty} \leq 1 } \left( x \right) = \arg \min_{{ \left\| y \right\| }_{\infty} \leq 1} \left\{ {\left\| y - x \right\|}_{2}^{2} \right\} $$ I managed to get an answer using the Moreau Decomposition. Yet I would be happy to see if someone can derive the answer directly. Thank You.
Just an addition which is helpful for programming this projector efficiently, for example with Numpy. The answer $$ ({\mathcal{P}}_{ { \left\| \cdot \right\| }_{\infty} \leq 1 } \left( x \right))_i = \begin{cases} 1, & \text{if} & {x}_{i} \geq 1 \\ {x}_{i}, & \text{if} & \left | {x}_{i} \right | < 1 \\ -1, & \text{if} & {x}_{i} \leq -1 \end{cases} $$ can also be formulated as $$ ({\mathcal{P}}_{ { \left\| \cdot \right\| }_{\infty} \leq 1 } \left( x \right))_i = \operatorname{sign}(x_i)\min(1,|x_i|). $$ Therefore in vector formulation (which can be directly implemented in Numpy) we can write this simply as $$ {\mathcal{P}}_{ { \left\| \cdot \right\| }_{\infty} \leq 1 } \left( x \right) = \operatorname{sign}(x)\min(1, |x|) = \operatorname{np.sign}(x)*\operatorname{np.minimum}(1,\operatorname{np.abs}(x)) $$ having imported Numpy as np and using its functions $\operatorname{np.sign},\ \operatorname{np.minimum},\ \operatorname{np.abs}$.
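A minimal runnable version of that one-liner (generalized to an arbitrary radius; the function name is just for illustration):

```python
import numpy as np

def proj_linf_ball(x, radius=1.0):
    """Orthogonal projection of x onto the l-infinity ball of the given radius."""
    return np.sign(x) * np.minimum(radius, np.abs(x))

print(proj_linf_ball(np.array([0.5, -2.0, 1.5])))   # [ 0.5 -1.   1. ]
```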
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Prove that $ 1+2q+3q^2+...+nq^{n-1} = \frac{1-(n+1)q^n+nq^{n+1}}{(1-q)^2} $ Prove: $$ 1+2q+3q^2+...+nq^{n-1} = \frac{1-(n+1)q^n+nq^{n+1}}{(1-q)^2} $$ Hypothesis: $$ F(x) = 1+2q+3q^2+...+xq^{x-1} = \frac{1-(x+1)q^x+xq^{x+1}}{(1-q)^2} $$ Proof: $$ P1 | F(x) = \frac{1-(x+1)q^x+xq^{x+1}}{(1-q)^2} + (x+1)q^x = \frac{1-(x+2)q^{x+1}+xq^{x+2}}{(1-q)^2} $$ $$ P2 | \frac{1-(x+1)q^x+xq^{x+1}+[(x+1)(1-q)^2]q^x}{(1-q)^2} = \frac{1-(x+2)q^{x+1}+xq^{x+2}}{(1-q)^2} $$ $$ P3| \frac{x\color{red}{q^{x+1}}+[-(x+1)]\color{red}{q^x}+1+[(x+1)(1-q)^2]\color{red}{q^x}}{(1-q)^2} = \frac{x\color{red}{q^{x+2}}-(x+2)\color{red}{q^{x+1}}+1}{(1-q)^2} | $$ Here I just reorganize both sides of the equation, so LHS is explicity an expression with a degree of x+1, while the degree of RHS is x+2. Both LHS' $\color{red}{q^x}$ are added next. $$P4| \frac{xq^{x+1}+[-(x+1)+(x+1)(<1^2q^0+\binom{2}{1}1q-1^0q^2>)]q^x+1}{(1-q)^2}=\frac{xq^{x+2}-(x+2)q^{x+1}+1}{(1-q)^2} $$ $$P5 | \frac{xq^{x+1}+[2xq-xq^2+2q-q^2]q^x+1}{(1-q)^2} = \frac{xq^{x+2}-(x+2)q^{x+1}+1}{(1-q)^2} $$ I get stuck at this point. I don't know if i'm approaching the problem the right way. So, any help would be appreciated. Thanks in advance.
$$\begin{align} S&=1+2q+3q^2+\qquad\cdots\qquad \qquad+nq^{n-1}\\ qS&=\qquad q+2q^2+3q^3+\cdots +\quad(n-1)q^{n-1}+nq^n \\ \text{Subtracting,}&\\ (1-q)S&=1+\;\ q \ +\ q^2 +\ q^3+\cdots \qquad \qquad +q^{n-1}-nq^n\\ &=\frac {\;\ 1-q^n}{1-q}-nq^n\\ S&=\frac{1-q^n-nq^n(1-q)}{(1-q)^2}\\ &=\frac{1-(n+1)q^n+nq^{n+1}}{(1-q)^2}\qquad\blacksquare\end{align}$$
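As a quick sanity check, for $n=2$ the right side is $\frac{1-3q^2+2q^3}{(1-q)^2}$, and indeed $(1-q)^2(1+2q)=(1-2q+q^2)(1+2q)=1-3q^2+2q^3$. (The derivation assumes $q\neq 1$; for $q=1$ the sum is simply $1+2+\cdots+n=\frac{n(n+1)}{2}$.)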
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
Compute $\int_0^{\infty}\frac{x}{x^4+1}dx$ I should compute the integral $$\displaystyle \int_0^{\infty}\frac{x}{x^4+1}dx.$$ This problem appears in complex analysis in the residue theorem and its consequences for real integrals. I know the following: Let $p,q$ be polynomials with $\deg p\leq\deg q -2$ such that $q$ has no zeros in the real numbers. Then we have $$\int_{-\infty}^{\infty}\frac{p(x)}{q(x)}dx=2\pi i \sum_{Im~z >0}Res(\frac{p}{q},z).$$ But since $\frac{x}{x^4+1}$ is not even (it is odd), I could only compute $$\displaystyle \int_{-\infty}^{\infty}\frac{x}{x^4+1}dx$$ and not $$\displaystyle \int_0^{\infty}\frac{x}{x^4+1}dx.$$
An alternative way, just for fun. Through the substitution $x=\frac{1}{z}$ we have: $$\int_{1}^{+\infty}\frac{x\,dx}{1+x^4}=\int_{0}^{1}\frac{z\,dz}{1+z^4}$$ hence: $$\begin{eqnarray*}I=\int_{0}^{+\infty}\frac{x\,dx}{1+x^4}&=&\int_{0}^{1}\frac{2x\,dx}{1+x^4}\\&=&\int_{0}^{1}2\sum_{n\geq 0}(-1)^{n}x^{4n+1}\,dx\\&=&\sum_{n\geq 0}\frac{2(-1)^n}{4n+2}\\&=&\sum_{n\geq 0}\frac{(-1)^n}{2n+1}\\&=&\arctan(1)=\color{red}{\frac{\pi}{4}}.\end{eqnarray*}$$
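An even shorter route: with $u=x^2$, $du=2x\,dx$, $$\int_{0}^{+\infty}\frac{x\,dx}{1+x^4}=\frac{1}{2}\int_{0}^{+\infty}\frac{du}{1+u^2}=\frac{1}{2}\cdot\frac{\pi}{2}=\frac{\pi}{4},$$ confirming the value above.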
{ "language": "en", "url": "https://math.stackexchange.com/questions/1825910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does associativity imply commutativity? I used to think that commutativity and associativity are two distinct properties. But recently, I started thinking of something which has troubled this idea: $$(1+1)+1 = 1+ (1+1)\implies 2+1=1+2$$ Here using associativity of addition operation, we've shown commutativity. In general, $$\underbrace{(1+1+\dots+1)}_{a \, 1\text{'s }}\ \ + \ \ \underbrace{(1+1+\dots+1)}_{b \, 1\text{'s }}=\underbrace{(1+1+\dots+1)}_{b \, 1\text{'s }}\ \ + \ \ \underbrace{(1+1+\dots+1)}_{a \, 1\text{'s }} \\ \implies a+b=b+a$$ For any natural $a,b$. Hence using only associativity we prove commutativity. That this can be done, is disturbing me too much. Is this really correct? If yes, then are associativity and commutativity closely related? Or is it because of some other property of natural numbers? If yes, then can it be done for other structures as well?
Consider an $\operatorname L$ operator defined as 'take the left': $$a \operatorname L b = a$$ For any three $a,b,c$: $$(a \operatorname L b) \operatorname L c = a \operatorname L c = a = a \operatorname L b = a \operatorname L (b \operatorname L c)$$ but $$a \operatorname L b = a \ne b = b \operatorname L a$$ unless $a=b$.
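The same example in two runnable lines (a throwaway sketch):

```python
L = lambda a, b: a                       # the 'take the left' operator
print(L(L(1, 2), 3) == L(1, L(2, 3)))    # True: associative
print(L(1, 2) == L(2, 1))                # False: not commutative
```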
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79", "answer_count": 7, "answer_id": 3 }
What is the sum of the first $17$ terms of an arithmetic sequence if $a_9=35$? What is the sum of the first $17$ terms of an arithmetic sequence if $a_9=35$? This is what I did: $a_9=a_1+8d=35$ $S_{17}=\frac{17}{2}(a_1+a_{17})=\frac{17}{2}(a_1+a_1+16d)=\frac{17}{2}(2a_1+16d)=\frac{17}{2}\cdot 70= 595$ This solution is correct, however I don't understand the solution given in the book: $a_1+a_2+...+a_{17}=17a_9=17\cdot 35=595$ How did they get $a_1+a_2+...+a_{17}=17a_9$?
Dang, @gt6989b is right: \begin{align} a_1 + a_{17} &= a_1 + (a_1 + 16 d) = 2 a_1 + 16 d = 2(a_1 + 8 d) = 2 a_9 \\ a_2 + a_{16} &= (a_1 + d) + (a_1 + 15 d) = 2 a_1 + 16 d = 2 a_9 \\ & \vdots \\ a_8 + a_{10} &= (a_1 + 7d) + (a_1 + 9 d) = 2 a_1 + 16 d = 2 a_9 \end{align} so $$ \sum_{i=1}^{17} a_i = 8 \cdot 2 a_9 + a_9 = 17 a_9 $$
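More generally, terms of an arithmetic sequence equidistant from a middle term pair up to twice that term, so the sum of an odd number of consecutive terms is the number of terms times the middle term: $\sum_{i=1}^{2m+1} a_i = (2m+1)\,a_{m+1}$. Here $m=8$.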
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can spaces where all singletons are closed and all singletons are open be homeomorphic? Suppose $(X, \mathfrak{T})$ is a space where all singletons are closed, and $(Y, \mathfrak{J})$ is a space where all singletons are open. Can these two spaces be homeomorphic? My thought is that they cannot be, but I am having difficulty coming up with a proof. A realistic example might be $\mathbb{R}_{usual}$ and $\mathbb{R}_{discrete}$. But the thing is, in $\mathbb{R}_{discrete}$ all singletons are closed as well...
The preimage of an open set under a homeomorphism is open, and the preimage of a singleton under a bijection is a singleton. So if all singletons of $Y$ are open and the spaces are homeomorphic, then all singletons of $X$ are open as well. A space with all singletons open must be discrete — and in a discrete space all singletons are also closed. So the two spaces can be homeomorphic, but only when both are discrete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Functions validity. Why does writing a function differently make it valid for an originally invalid input? E.g.: $$f(x) = \frac{1} {(\frac1x+2)(\frac1x-3)} \implies x\neq 0,$$ which may alternatively be written as $$f(x) =\frac{x^2}{(1+2x)(1-3x)},$$ which is valid for $x=0$? Both graphically represent the same function. Thanks.
Let $$g(x)=\frac{x^2}{(1+2x)(1-3x)}$$ and $$f(x) = \frac{1} {(\frac1x+2)(\frac1x-3)}.$$ We have $D_f=\mathbb{R}\setminus\{0,\frac{1}{3},-\frac{1}{2}\}$ and $D_g=\mathbb{R}\setminus\{\frac{1}{3},-\frac{1}{2}\}$. The two formulas agree wherever both are defined, but the domains differ; thus $g\ne f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
What is the probability that a person wearing a blue t-shirt will sit next to one wearing red? 9 people sit in a row. 2 are dressed in red, 3 in blue and 4 in yellow. What is the probability that a person in blue will sit next to a person in red? Why? From what I gather, a sequence such as RRBBBYYYY can be arranged in 9! ways. Attempt 1: There are 2 groups, R and Y, that we want to seat together. The people wearing yellow can be arranged in 4! ways and the people wearing red in 2! ways. Therefore $2 \times 3! \times 2!$ should be the right answer. Times 2 because we only want to consider two groups. (I think this is where I am going wrong, because there are 3.) If there were two groups of people then I am guessing it would be right. However there are 3 groups and I have no idea how to show it. Attempt 2: There is a total of $9!/(2!\times 3!\times 4!) = 1260$ ways they can be seated. These however will be in random order, and I can't figure out how to restrict to the orders where a B sits next to an R.
Comment: (If I understand the problem properly.) In case it is of assistance checking analytical results, here is a simulation in R that approximates the distribution of $X,$ the number of red-blue adjacencies, for people randomly seated in a row. The program uses a trick in assigning numbers to colors so that the absolute difference is 1 whenever a red shirt is next to a blue one. I believe $P(X \ge 1)$ is the answer to your problem. shirts = c(1,1,2,2,2,5,5,5,5) # 1=Red, 2=Blue, 5=Yellow m = 10^6; rb.adj = numeric(m) # rb.adj = nr red-blue adjacencies for (i in 1:m) { perm = sample(shirts, 9) rb.adj[i] = (sum(abs(diff(perm))==1)) } mean(rb.adj >= 1) # aprx P(X >= 1) ## 0.841706 table(rb.adj)/m # aprx dist'n of X ## rb.adj ## 0 1 2 3 4 ## 0.158294 0.436611 0.321367 0.079765 0.003963 A million performances of the experiment should give results accurate to two (maybe three) places. Any arrangement with BRBRB has $X = 4$ and $P(X = 4) = 12(5!)/9! \approx 0.004,$ which is a little too small reliably to approximate with small relative error. (I 'cheated' by showing the best of three simulation runs.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Find the Minimum of $ F(u)= \int\limits_{-2}^{+2}|u(x) - \chi_{[0,2]}(x)|dx + |Du|(\mathbb R)$. Let $F: BV(\mathbb R) \to \mathbb R$ be a functional defined as: \[ F(u)= \int\limits_{-2}^{+2}|u(x) - \chi_{[0,2]}(x)|dx + |Du|(\mathbb R). \] Show that there is no minimum on $W^{1,1}$, but the infimum is exactly the minimum on $BV$. Attempt: $F(\chi_{[0,2]})=2$. I think that $F(u) \geq 2$ for every $u \in BV$ but I don't know why. Given that we have a Theorem which says that it exists $\{u_n\} \in C^{\infty}$ such that $u_n \to \chi_{[0,2]}$ in $L^1$ and $\|u'_n\|_1 \to |Du|(\mathbb R)$ thus we get $F(u_n)\to 2$ and so this is the infimum in $W^{1,1}$. I don't know why there is no $u \in W^{1,1}$ such that $F(u)=2$, I can "understand" why in some sense but can't prove it. Thanks!
First of all, $\chi_{(0,2)}$ isn't optimal; the minimum (on $BV$) is attained at $u=\chi_{(0,\infty)}$, and $F(u)=1$. Now let me show that $F(u)\ge 2$ for all $u\in W^{1,1}$ (so the assertion about the infimum is incorrect). Consider $F_-(u)=\int_{-2}^0 |u(x)|\, dx + |Du|(-\infty,0)$. Since the derivative exists and is in $L^1$, the total variation equals $\|u'\|_{L^1(-\infty,0)}$, which I'll denote by $N_-$ for easier reference. Let's say $u(0)>0$. Now there are two cases: in the first case, $u(x)=0$ for some $x\in (-2,0)$. This implies that $N_-\ge u(0)$. On the other hand, if $N_-<u(0)$, then $\int_{-2}^0 u\, dx > 2(u(0)-N_-)$ (by comparing with the extreme case where $u$ drops to its minimal value on $(-2,0)$ instantaneously). Notice that $F_-(u)> u(0)$ in either case. A similar discussion of $F_+(u)=\int_0^2|u(x)-1|\, dx+|Du|(0,\infty)$ will show that $F_+(u)\ge 2-u(0)$ (if $u(0)\le 2$), and our claim follows.
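The bound is sharp: take $u\in W^{1,1}$ that ramps from $0$ up to $1$ on a short interval just left of $0$, equals $1$ on $[0,2]$, and ramps back down to $0$ on a bounded interval to the right of $2$ (the descent is forced by integrability of $u$). Then $F(u)$ is as close to $2$ as we like as the left ramp shrinks, so $\inf_{W^{1,1}}F=2$; and since the inequalities above are strict for each fixed $u$, the infimum is not attained — in contrast with $BV$, where the minimum $1$ is attained at $\chi_{(0,\infty)}$.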
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\int_0^\infty \frac{ x^{1/3}}{(x+a)(x+b)} dx $ $$\int_0^\infty \frac{ x^{1/3}}{(x+a)(x+b)} dx,$$ where $a>b>0$. What shall I do? I have difficulty when I meet multi-valued functions.
Let us assume $A^3=a, B^3=b$, for simplicity. Now make the substitution $x=t^3$, which transforms the integral into $$I=\int_{0}^\infty \frac {3t^3dt}{(t^3+A^3)(t^3+B^3)}$$ Now break this into partial fractions like this $$I=3\int_{0}^\infty [\frac {1}{(t^3+B^3)}-\frac {A^3}{B^3-A^3}(\frac {1}{t^3+A^3}-\frac {1}{t^3+B^3})]$$ Now, use $$I_1=\int_{0}^\infty \frac{dt}{t^3+A^3}=\frac {2\pi}{3^{\frac{3}{2}}A^2}$$ (You can compute this integral very easily. First put $t=A\tau$ to scale out $A$; then substitute $\tau=\frac{1}{p}$ in one copy of the integral and add the two copies — a quadratic is left in the denominator. To remove it, put $p-\frac{1}{2}=\lambda$, which leaves a standard arctangent integral.) Using $I_1$, the value of $I$ is $$I=\frac {2\pi}{3^{1/2}}[\frac{1}{B^2}-\frac {A^3}{B^3-A^3}(\frac{1}{A^2}-\frac{1}{B^2})]$$ which after simplification equals $$I=\frac{2\pi}{\sqrt3\,(A^2+AB+B^2)}$$ where $A^3=a, B^3=b$.
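A quick numerical cross-check of the closed form (illustrative only):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 1.0                      # any a > b > 0
A, B = a**(1/3), b**(1/3)
numeric, _ = quad(lambda x: x**(1/3) / ((x + a) * (x + b)), 0, np.inf)
closed = 2 * np.pi / (np.sqrt(3) * (A**2 + A*B + B**2))
print(numeric, closed)               # the two values agree
```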
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Does this pattern have anything to do with derivatives? In 6th grade I was first introduced to the idea of a function in the form of tables. The input would be "n" and the output "$f_n$" would be some modification of the input. I remember finding a pattern in the function "f(n)=n^2". Here is what the table looked like: \begin{array}{|c|c|} \hline n& f_n\\ \hline 1&1 \\ \hline 2&4\\ \hline 3&9\\ \hline 4&16\\ \hline 5&25\\ \hline ...&...\\ \hline n&n^2\\ \hline \end{array} I would then take the outputs $f_n$ and find the differences between each one: $f_n-f_{n-1}$. This would produce: \begin{array}{|c|c|} \hline n& f(n)-f(n-1)\\ \hline 1&1 \\ \hline 2&3\\ \hline 3&5\\ \hline 4&7\\ \hline 5&9\\ \hline ...&...\\ \hline \end{array} Repeating this process (of finding the differences) for the outputs of $f_n-f_{n-1}$ would yield a continuous string of $2$s. As a 6th grader I called this process 'breaking down the function' and at the time it was just another pattern I had found. Looking back at my work as a freshman in high school, I realize that the end result of 'breaking down a function' corresponds to the penultimate derivative (before the derivative equals zero). For example: breaking down $y=x^3$ gives a continuous string of $6$s, and the third derivative of $x^3$ is 6 (while the 2nd derivative is 6x). Is there any significance to this pattern found by finding the differences between each output of a function over-and-over again? Does it have anything to do with derivatives? I know my question is naive, but I'm only a high school freshman in algebra II. A non-calculus (or intuitively explained calculus concepts) answer would be very helpful [note that I used an online derivative calculator to find the derivatives of these functions and I apologize for any incorrect calculus terminology].
Yes, it is related to finite calculus. Others have answered why and how, but no one has mentioned that it is also related to sums of arithmetic sequences, so I will. Consider the arithmetic sequence with $a_1 = 1$ and $d = 2$ (so it is the sequence of odd numbers). The sum of the first $n$ elements of that sequence (denoted $S_n$) is: $$S_n = \frac{2a_1 + (n - 1)d}{2}n = \frac{2 + 2(n - 1)}{2}n = (1 + n -1)n = n^2$$ It is obvious that $S_n - S_{n-1} = a_n$, and because $S_n=n^2$ we have: $$n^2-(n-1)^2=S_n-S_{n-1}=a_n$$ In your table you found that $n^2-(n-1)^2$ is the $n$th odd number, which is the same as the formula I got.
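A quick way to play with this pattern (a throwaway numpy sketch):

```python
import numpy as np

n = np.arange(1, 8)
f = n**3                 # try f(n) = n^3
d1 = np.diff(f)          # first differences
d2 = np.diff(d1)         # second differences
d3 = np.diff(d2)         # third differences
print(d1)                # [  7  19  37  61  91 127]
print(d2)                # [12 18 24 30 36]
print(d3)                # [6 6 6 6]  -- constant 3! = 6
```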
{ "language": "en", "url": "https://math.stackexchange.com/questions/1826670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 8, "answer_id": 7 }