Simple solution to combinations with limited repetition Suppose that you are going to prepare a fruit salad with oranges, apples, and bananas. The salad should consist of 10 pieces. No fruit should be used more than 5 times. Without the constraint, the solution would be the stars-and-bars count $\binom{n+k-1}{k-1}$ with $n=10$ and $k=3$, I believe. However, the problem has a constraint. A straightforward approach would be: If there are zero bananas, then there is one possible distribution of apples and oranges (only (5,5) would yield 10). If there are 5 bananas, there are 6 possible distributions ((5,0), (4,1), (3,2), ...). This yields $6 + 5 + 4 + 3 + 2 + 1 = 21$ in total. Is there a more efficient and more generalizable approach that would easily work with more fruits? I am not looking for solving this with the inclusion-exclusion approach, which is possible for any constraint (right?). I am also not looking for dynamic programming or anything like that. I found the exercise in a textbook about the basics of combinatorics, so I assume that there must be a simple solution using the typical combinatorics formulas that works in the special case where the constraint is the same for all the fruits involved (i.e., in contrast to different limits per fruit). Any advice is appreciated!
To be found are the number of tuples $(o,a,b)$ with $o+a+b=10$ under constraint:$$(o,a,b)\in\{0,1,2,3,4,5\}^3$$ Setting $o'=5-o$, $a'=5-a$ and $b'=5-b$, this comes to the same as finding the number of tuples $(o',a',b')$ with $o'+a'+b'=5$ under constraint:$$(o',a',b')\in\{0,1,2,3,4,5\}^3$$ But here the constraints can be weakened to:$$o',a',b'\text{ are nonnegative integers }$$ This is because the condition $o'+a'+b'=5$ then assures that the original constraint will be satisfied. I leave the rest to you. This is no more than a trick that sometimes works. Always check it out.
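As a quick sanity check (my own addition, not part of the original exchange), a short Python brute force over all triples confirms both the direct count and the complement trick:

```python
from itertools import product
from math import comb

# Count triples (o, a, b) with o + a + b = 10 and each entry in {0, ..., 5}.
direct = sum(1 for o, a, b in product(range(6), repeat=3) if o + a + b == 10)

# Complement trick: o' = 5 - o etc. turns this into o' + a' + b' = 5 with
# nonnegative parts, which is the stars-and-bars count C(5 + 2, 2).
complement = comb(5 + 2, 2)

print(direct, complement)  # 21 21
```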
{ "language": "en", "url": "https://math.stackexchange.com/questions/3647945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Prove or Disprove $2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin(nx) $ converges uniformly to $x$ on $(-\pi,\pi)$ I want to prove $2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin(nx) $ converges pointwise and uniformly to $x$ on $[-\pi,\pi]$. I know $\sum_{n=1}^{\infty}\frac{(-1)^n}{n}$ converges by the alternating series test. And $\sum a_n \sin(nx)$ converges by the Dirichlet test if $a_n$ is a decreasing sequence, but in this case that does not work. Maybe we can just consider the interval without $-\pi$, $\pi$. I get lost. Please help. Thanks a lot. After trying, I think maybe there is no uniform convergence?
NOTE: The original question that the OP asked was "Prove $2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin(nx) $ converges pointwise and uniformly to $x$ on $[0,2\pi]$ using elementary analysis." Let $a_n(x)=(-1)^{n-1}\sin(nx)$ and $b_n(x)=\frac1n$. Obviously, $b_n(x)\to 0$ monotonically and uniformly as $n\to\infty$. Moreover, for any $0<\delta_1<\pi$ and $0<\delta_2<\pi$, and $x\in [-\pi+\delta_1,\pi-\delta_2]$, $$\begin{align} \left|\sum_{n=1}^N a_n(x)\right|&=\left|\sum_{n=1}^N (-1)^{n-1}\sin(nx)\right|\\\\ &\le\left|\sec(x/2)\right|\\\\ &\le \max(\csc(\delta_1/2),\csc(\delta_2/2)) \end{align}$$ Therefore, Dirichlet's Test guarantees that the series $\sum_{n=1}^\infty \frac{(-1)^{n-1}\sin(nx)}{n}$ converges uniformly on $[-\pi+\delta_1,\pi-\delta_2]$. EDITED: After the OP changed the question. We now give a proof that the series $2\sum_{n=1}^\infty \frac{(-1)^{n-1}\sin(nx)}{n}$ fails to converge uniformly for $x\in (-\pi,\pi)$. We first note that the series converges pointwise to $x$ for $x\in (-\pi,\pi)$. That is to say that the Fourier series for $x$ on $(-\pi,\pi)$ is given by $$x=2\sum_{n=1}^\infty \frac{(-1)^{n-1}\sin(nx)}{n}$$ Now let $f_N(x)$ be the $N$th partial sum of the Fourier series for $x$. Then, denoting $t=x+\pi$ we can write $$\begin{align} f_N(x)&=2\sum_{n=1}^N\frac{(-1)^{n-1}\sin(nx)}{n}\\\\ &=-2\sum_{n=1}^N \frac{\sin(nt)}{n}\\\\ &=-2\int_0^t \sum_{n=1}^N \cos(nu)\,du\\\\ &=t-\int_0^t \frac{\sin((N+1/2)u)}{\sin(u/2)}\,du\\\\ &=t-\int_0^{(N+1/2)t}\frac{\sin(x)}{x}\frac{2x/(2N+1)}{\sin(x/(2N+1))}\,dx \end{align}$$ It suffices to show that $\int_0^t \frac{\sin((N+1/2)u)}{\sin(u/2)}\,du$ fails to converge uniformly to $\pi$ for $t\in (0,2\pi)$. Now take $t=1/(N+1/2)$. Then, we see that $$2\sin(1)\le\int_0^1 \frac{\sin(x)}{x}\frac{2x/(2N+1)}{\sin(x/(2N+1))}\,dx\le 2\csc(1)$$ Since this stays bounded away from $\pi$ as $N\to\infty$, the convergence of $f_N$ fails to be uniform on $(-\pi,\pi)$. And we are done!
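A numerical illustration (my own addition, not from the original answer): the sup-norm error of the partial sums over $(-\pi,\pi)$ stays bounded away from $0$ as $N$ grows, since $f_N(\pm\pi)=0$ while the target function approaches $\pm\pi$ at the endpoints:

```python
import math

def partial_sum(x, N):
    # f_N(x) = 2 * sum_{n=1}^N (-1)^(n-1) sin(n x) / n
    return 2 * sum((-1) ** (n - 1) * math.sin(n * x) / n for n in range(1, N + 1))

def sup_error(N, points=4001):
    # max |f_N(x) - x| over a fine grid strictly inside (-pi, pi)
    xs = [-math.pi + 2 * math.pi * k / (points + 1) for k in range(1, points + 1)]
    return max(abs(partial_sum(x, N) - x) for x in xs)

for N in (25, 50, 100):
    # stays close to pi: each f_N vanishes at the endpoints, so no uniform convergence
    print(N, sup_error(N))
```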
{ "language": "en", "url": "https://math.stackexchange.com/questions/3648062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $a,b$ are irrational numbers, is $K=[a,b] \cap \mathbb Q$ closed in $\mathbb Q$? Suppose $a,b \in \mathbb R-\mathbb Q$, $a<b$. Consider $K=[a,b] \bigcap \mathbb Q$. Now, $[a,b]$ is closed in the metric superspace of $\mathbb Q$, i.e. in $\mathbb R$. Thus, $K=[a,b] \bigcap \mathbb Q$ is closed in $\mathbb Q$. But $K=[a,b] \bigcap \mathbb Q=(a,b) \bigcap \mathbb Q$ doesn't contain its limit points, namely $a,b$. Suppose $a=\sqrt 2, b = \sqrt 5$. If we define a set $A= \{x \in K~|~x^2 <5 \}$, then there is a sequence in $K$ which converges to $\sqrt 5$ but $\sqrt 5 \notin K$. Then how can $K$ be closed in $\mathbb Q$? Thanks a lot for the help!
A subset $C$ of a metric space $(X,d)$ is closed if for every sequence $(x_n)$ in $C$ which converges to some $c\in X$, we have $c\in C$. Here $X=\mathbb{Q}$ and $C=[a,b]\cap\mathbb{Q}$. If you take $a=\sqrt2$, $b=\sqrt5$ and a sequence of rationals $x_n\in C$ with $\lim x_n=\sqrt5$, then $\sqrt5$ is not in $X$, so this sequence does not even converge in $X$ and cannot contradict closedness. What you have is the fact that $C$ is closed in $\mathbb{Q}$ but not in $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3648228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why does the fundamental theorem of calculus not work for this integral $\int_0^{2\pi}\frac{dx}{(3+\cos x)(2+\cos x)}$? $$\int\frac{dx}{(3+\cos x)(2+\cos x)}= \frac{2\arctan(\frac{\tan(\frac x2)}{\sqrt3})}{\sqrt3} - \frac{\arctan(\frac{\tan(\frac x2)}{\sqrt2})}{\sqrt2} + C $$ This is the antiderivative. By the FTC: $$\int_a^b f(x)\,dx = F(b) - F(a)$$ where $F(x)$ is a primitive function. $$\left. \int_0^{2\pi}\frac{dx}{(3+\cos x)(2+\cos x)}= \frac{2\arctan(\frac{\tan(\frac x2)}{\sqrt3})}{\sqrt3} - \frac{\arctan(\frac{\tan(\frac x2)}{\sqrt2})}{\sqrt2} \right|_0^{2\pi}=0$$ The integrand $\frac{1}{(3+\cos x)(2+\cos x)}$ is positive on $[0,2\pi]$, hence the result above is wrong. The correct result is: $$\int_0^{2\pi}\frac{dx}{(3+\cos x)(2+\cos x)}=\Bigl(\frac2{\sqrt3}-\frac1{\sqrt2}\Bigr) \pi$$ Why am I not getting the correct result?
Let $f(x)=\frac{1}{(3+\cos x)(2+\cos x)}$ and let $F(x)$ be its antiderivative. Pertaining to this very question: what if we tried to make the substitution $\cos x=u$? Since we would have to change the limits accordingly, we see that the integral comes out to be $\displaystyle\int_1^1 g(u) \mathrm du=0$, where the function $g$ is obtained after the substitution. Is the problem here that $f(x)$ is not injective, or is it that the antiderivative of $f(x)$ fails to be differentiable at some $x \in [0,2\pi]$?
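A numerical check (my own addition) makes the discrepancy concrete: the naive evaluation $F(2\pi)-F(0)$ gives $0$, while a Riemann sum recovers $(2/\sqrt3 - 1/\sqrt2)\pi$, because this particular antiderivative jumps at $x=\pi$ where $\tan(x/2)$ blows up:

```python
import math

def f(x):
    return 1 / ((3 + math.cos(x)) * (2 + math.cos(x)))

def F(x):
    # the antiderivative from the question (discontinuous across x = pi)
    return (2 / math.sqrt(3)) * math.atan(math.tan(x / 2) / math.sqrt(3)) \
         - (1 / math.sqrt(2)) * math.atan(math.tan(x / 2) / math.sqrt(2))

# midpoint rule on [0, 2*pi]; f itself is smooth and bounded
n = 200_000
h = 2 * math.pi / n
numeric = sum(f((k + 0.5) * h) for k in range(n)) * h

naive = F(2 * math.pi) - F(0)                       # 0: misses the jump at pi
exact = (2 / math.sqrt(3) - 1 / math.sqrt(2)) * math.pi

print(naive, numeric, exact)
```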
{ "language": "en", "url": "https://math.stackexchange.com/questions/3648343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
$\Psi: V_1\longrightarrow V_2$ such that $\Psi(F)=\Phi \circ F$. Let $V_1=\operatorname{Lin}(\mathbb{R}_{\le 2}[x],\mathbb{R}_{\le 2}[x])$ and $V_2=\operatorname{Lin}(\mathbb{R}_{\le 2}[x],\mathbb{R})$ be two vector spaces and let $$\Phi \in V_2, \qquad \Phi(p(x))=p'(1).$$ Consider $$\Psi: V_1\longrightarrow V_2\qquad \Psi(F)=\Phi \circ F.$$ How can I find the dimension of the kernel of $\Psi$ and a basis? Then, consider $W=\{F\in V_1 : F(x − 1) = 0\}$. What is the dimension of $W$?
Consider the basis $\mathcal B =\{x^2,x,1\}$ of $\mathbb R_{\leq 2}[x]$ and consider the coordinate isomorphism: $$ \mathbb R_{\leq 2}[x] \rightarrow \mathbb R^3,\qquad a_2x^2+a_1x+a_0\mapsto (a_2,a_1,a_0)^T $$ Under this isomorphism an element of $\text{Lin}(\mathbb R_{\leq 2}[x],\mathbb R_{\leq 2}[x])$ is just a $3\times 3$ matrix, and an element of $\text{Lin}(\mathbb R_{\leq 2}[x],\mathbb R)$ is a $1\times 3$ matrix. In particular, what does the element $\Phi$ look like in coordinates? We have $\Phi(a_2x^2+a_1x+a_0) = 2a_2 + a_1$, so it is represented by the matrix: $$ \mathcal M(\Phi) = \left(\begin{matrix} 2 & 1 & 0 \end{matrix} \right) $$ And what does the map $\Psi$ look like? We have $\Psi(F)=\Phi\circ F$; since $F$ is represented by a $3\times 3$ matrix, the composition becomes a matrix product: $$ \Psi\left( F \right) = \left(\begin{matrix} 2 & 1 & 0 \end{matrix} \right) \left( \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3} \end{matrix} \right)= \left( \begin{matrix} 2a_{1,1} + a_{2,1} & 2a_{1,2} + a_{2,2} & 2a_{1,3} + a_{2,3} \end{matrix} \right) $$ Now it is easy to find $\ker\Psi$ and a basis of it: \begin{gather} \ker\Psi=\left\{\left( \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3} \end{matrix} \right) \in V_1 \ \text{such that} \ \ \ \begin{matrix} 2a_{1,1} + a_{2,1}=0,\\ 2a_{1,2} + a_{2,2}=0,\\ 2a_{1,3} +a_{2,3}=0 \end{matrix} \right\}\\ =\left\{\left( \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3}\\ -2a_{1,1} & -2a_{1,2} & -2a_{1,3}\\ a_{3,1} & a_{3,2} & a_{3,3} \end{matrix} \right)\in V_1\right\} \end{gather} So $\ker\Psi$ has dimension $6$. An explicit basis is really easy to see now. To find a basis of $W$, again consider coordinates.
The polynomial $x-1$ is represented by the vector $(0,1,-1)^T$, so: $$ \left( \begin{matrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3} \end{matrix} \right) \left(\begin{matrix} 0\\ 1\\ -1 \end{matrix}\right) = \left(\begin{matrix} 0\\ 0\\ 0 \end{matrix}\right) $$ And with the same argument as above you will find $$ W=\left\{\left( \begin{matrix} a_{1,1} & a_{1,2} & a_{1,2}\\ a_{2,1} & a_{2,2} & a_{2,2}\\ a_{3,1} & a_{3,2} & a_{3,2} \end{matrix} \right)\in V_1 \right\} $$ So $W$ also has dimension $6$. I tried to do this exercise without coordinates, but I did not find a simple way. To guess the dimension of such subspaces you can think about how many conditions the defining properties impose. For example, in the second part, $F(x-1)=0$ gives you $3$ conditions and the dimension is $9-3=6$. This is not always exact, because sometimes some conditions are equivalent, but you can use this "fact" in order to have an idea of the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3648478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding the integral $\int\frac{e^{x}}{e^{2x}+1}dx$ This question has been puzzling me for a bit and I'd like an explanation for what I'm doing wrong, as my answer doesn't coincide with the correct one. Let's say we're asked to find: $$\int \frac{e^{x}}{e^{2x}+1}\mathrm{d}x$$ The way I chose to solve this was to factor out an $e^{2x}$ from the denominator and work my way from there. So what I get is: $$\int \frac{e^{x}}{e^{2x}(1+1/e^{2x})}dx$$ This could be rewritten as: $$\int \frac{1}{e^{x}(1+(1/e^{x})^2)}dx$$ If we let: $$u = \frac{1}{e^x}$$ $$du = -e^{-x} dx$$ $$dx = -e^xdu$$ then we're at: $$\int \frac{1}{e^{x}(1+u^2)}(-e^x)du$$ Cancelling out the $e^x$ and moving the minus sign outside of the integral gives us: $$-\int \frac{1}{1+u^2}du$$ This leaves us with: $$-\int \frac{1}{1+u^2}du = -\tan^{-1}(\frac{1}{e^x})+c$$ However, I know the answer is wrong because the correct one is $\tan^{-1}(e^x) + c$. Can someone please tell me where I screwed up? Many thanks in advance!
One of the best things you can do with indefinite integrals is to take the derivative of your result and check whether it is the integrand: $$\frac{\text{d}}{\text{d}x} \left[-\arctan \left (\frac{1}{e^x}\right)+c\right]=\frac{\text{d}}{\text{d}x} \left[-\arctan \left (e^{-x}\right)+c\right]=-\frac{1}{1+e^{-2x}}({-e^{-x}})=\frac{e^{-x}}{1+e^{-2x}}=$$ $$=\frac{\frac{1}{e^x}}{1+\frac{1}{e^{2x}}}=\frac{\frac{1}{e^x}}{\frac{e^{2x}+1}{e^{2x}}}=\frac{1}{e^x}\cdot\frac{e^{2x}}{e^{2x}+1}=\frac{e^{x}}{e^{2x}+1}$$ So your result is absolutely correct: the two answers differ only by the constant $\frac\pi2$, since $\arctan t+\arctan\frac1t=\frac\pi2$ for $t>0$. Of course it is not always this simple; maybe you'll need some identities to get back the integrand function. But this is for sure better than thinking you're wrong without checking.
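To make the "differ by a constant" point concrete, here is a quick numeric sketch (my own addition): $-\arctan(e^{-x})$ and $\arctan(e^x)$ differ by exactly $\pi/2$ at every $x$.

```python
import math

def F1(x):  # the asker's antiderivative
    return -math.atan(1 / math.exp(x))

def F2(x):  # the textbook antiderivative
    return math.atan(math.exp(x))

# F2 - F1 = arctan(e^x) + arctan(e^-x), which is pi/2 for every x
diffs = [F2(x) - F1(x) for x in (-3, -1, 0, 0.5, 2, 5)]
print(diffs)  # all (numerically) equal to pi/2
```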
{ "language": "en", "url": "https://math.stackexchange.com/questions/3648726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Prove union/intersect of two powersets is/is not equal to powerset of their union? I am supposed to show whether or not the union/intersection of two powersets is equal to the powerset of the union/intersection, respectively. I found these two links that have both the proofs, and say that the union is not always equal, but the intersection is always equal. The proofs seem very similar to me. Can someone help me understand why one shows they are always equal and the other is only sometimes? https://proofwiki.org/wiki/Union_of_Power_Sets https://proofwiki.org/wiki/Intersection_of_Power_Sets
The powerset of an intersection equals the intersection of the powersets, i.e. $P(A\cap B)=P(A)\cap P(B)$, which can be proven as follows:

1) $P(A \cap B) \subseteq P(A) \cap P(B)$: let $E \in P(A \cap B)$, i.e. $E \subseteq A \cap B$. Then every $x \in E$ satisfies $x \in A$ and $x \in B$, so $E \subseteq A$ and $E \subseteq B$. Hence $E \in P(A)$ and $E \in P(B)$, i.e. $E \in P(A) \cap P(B)$.

2) $P(A) \cap P(B) \subseteq P(A \cap B)$: let $E \in P(A) \cap P(B)$. Then $E \subseteq A$ and $E \subseteq B$, so every $x \in E$ lies in $A \cap B$. Hence $E \subseteq A \cap B$, i.e. $E \in P(A \cap B)$.

Together, these give $P(A) \cap P(B) = P(A \cap B)$.

To see that $P(A\cup B)\neq P(A)\cup P(B)$ in general, compare cardinalities: $|P(A\cup B)| = 2^{|A\cup B|}$, while $|P(A)\cup P(B)|\leq 2^{|A|}+2^{|B|}$, and $2^{|A\cup B|} \le 2^{|A|}+2^{|B|}$ does not necessarily hold. One example is $A=\{1\},B=\{2\}$: $P(A\cup B)=\{\{\},\{1\},\{2\},\{1,2\}\}$ while $P(A)\cup P(B)=\{\{\},\{1\},\{2\}\}$.
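Both facts are easy to test exhaustively for small sets; the sketch below (my own addition) uses `frozenset`s so that sets of sets are hashable:

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

A, B = {1, 2, 3}, {2, 3, 4}

# Intersection: always equal
assert powerset(A & B) == powerset(A) & powerset(B)

# Union: the left side is strictly larger here
assert powerset(A) | powerset(B) <= powerset(A | B)
assert powerset(A | B) != powerset(A) | powerset(B)  # e.g. {1, 4} is missing on the right

print(len(powerset(A | B)), len(powerset(A) | powerset(B)))  # 16 12
```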
{ "language": "en", "url": "https://math.stackexchange.com/questions/3648881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove $|f(y)−f(x)−f′(x)(y−x)|≤ε|y−x|$ for all $x,y∈[0,1]$ with $|x−y|<δ$. Suppose $f$ is a differentiable function and $f′$ is continuous on $[a,b]$. Prove that for all $ε > 0$ there is $δ > 0$ such that $|f(y)−f(x)−f′(x)(y−x)|≤ε|y−x|$ for all $x,y∈[0,1]$ with $|x−y|<δ$. Since $f$ is differentiable and $f'$ is continuous, by using the definitions of both I can apply the MVT to get to this conclusion; however, I am having trouble writing it out so that it is cohesive.
Let $x \in [a, b]$. Since $f$ is differentiable at $x$ it follows that $\lim_{y \to x} \frac{f(y)-f(x)}{y-x} = f^\prime(x)$. Translating this to $\epsilon, \delta$: given $\epsilon > 0$ there exists a $\delta>0$ such that $|\frac{f(y)-f(x)}{y-x} - f^\prime(x)| < \epsilon$ if $|y - x|< \delta$, which is equivalent to $|f(y)-f(x) - f^\prime(x)(y-x)| < \epsilon|y-x|$ when $|x-y| < \delta$. Note, however, that the $\delta$ obtained this way depends on $x$, while the problem asks for one $\delta$ that works for all $x$ simultaneously. That is where the MVT and the continuity of $f'$ come in: $f'$ is uniformly continuous on the compact interval, so given $\epsilon>0$ choose $\delta>0$ with $|f'(s)-f'(t)|<\epsilon$ whenever $|s-t|<\delta$. By the MVT, $f(y)-f(x)=f'(\xi)(y-x)$ for some $\xi$ between $x$ and $y$, hence $|f(y)-f(x)-f'(x)(y-x)|=|f'(\xi)-f'(x)|\,|y-x|\le\epsilon|y-x|$ whenever $|x-y|<\delta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3649033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proof for Uniform Convergence for $\{f_n\}$ Suppose $\{f_n\}$ is an equicontinuous sequence of functions defined on $[0,1]$ and $\{f_n(r)\}$ converges $∀r ∈ \mathbb{Q} \cap [0, 1]$. Prove that $\{f_n\}$ converges uniformly on $[0, 1]$. Since I know that $\mathbb{Q} \cap [0, 1]$ is not compact, I am a bit stuck on my proof. So far I have: Let $f_n \to f$ pointwise on $\mathbb{Q} \cap [0, 1]$. Since $\{f_n\}_n$ is equicontinuous and pointwise bounded (it is pointwise convergent, so in particular pointwise bounded), there exists a subsequence $\{f_{n_k}\}_k$ such that $f_{n_k} \to f$ uniformly. Since each $f_n$ is continuous, $f$ is then continuous. Now take $\varepsilon > 0$. Using equicontinuity of $\{f_n\}_n$, we find $\delta_1 > 0$ such that if $d(x, y) < \delta_1$, $x, y \in [0,1]$, then $|f_n(x) − f_n(y)| < \varepsilon/3$ for all $n ∈ \mathbb{Z}^+$. Using continuity of $f$, for each $x \in [0,1]$, let $\delta_2 = \delta_2(x) > 0$ be such that if $|x − y| < \delta_2(x)$, $y \in \mathbb{Q} \cap [0, 1]$, then $|f(x) − f(y)| < \varepsilon/3$. For $x \in \mathbb{Q} \cap [0, 1]$, let $\delta(x) = \min(\delta_1, \delta_2(x)) > 0$. I am not sure how to continue, nor am I too sure I am on the right path.
This is actually a simple case of applying the Arzelà–Ascoli propagation theorem, which states: pointwise convergence of an equicontinuous sequence of functions on a dense subset of the domain propagates to uniform convergence on the whole domain. The rational numbers $\mathbb{Q}$ are dense in the interval $[0,1]\subset \mathbb{R}$. So if $\{f_n(r)\}$ converges to some limit for every $r\in\mathbb{Q} \cap [0,1]$, the convergence propagates to uniform convergence on all of $[0,1]$. A proof of this theorem can be found on page 227 of Real Mathematical Analysis by Charles Pugh.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3649221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
One inequality about the average integral: $\bar{f}_A:=\frac{1}{\mu(A)} \int_A f \,d\mu$ For a measurable function $f$, consider the average integral: $$\bar{f}_A:=\frac{1}{\mu(A)}\int_A f \, d\mu$$ where $\mu$ is Lebesgue measure. \begin{equation} \begin{split} \int_A (f-\bar{f}_{A})^2\,d\mu &\leq \int_A (f-\bar{f}_{B})^2\,d\mu\\ &\leq\int_B (f-\bar{f}_{B})^2\,d\mu \end{split} \end{equation} where $A\subset B$. I am confused about how to get the first inequality.
$f-\overline {f}_B =(f-\overline {f}_A )+ (\overline {f}_A -\overline {f}_B )$. Square both sides and integrate over $A$. Note that the cross term vanishes: $2\int_A (f-\overline {f}_A )(\overline {f}_A -\overline {f}_B )\,d\mu=2(\overline {f}_A -\overline {f}_B ) \int_A (f-\overline {f}_A )\,d\mu$ and $\int_A(f-\overline {f}_A )\,d\mu =0$. Can you finish?
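For a discrete sanity check (my own addition), replace the integrals by sums over finite sets $A \subset B$; the chain of inequalities and the vanishing cross term are then easy to verify numerically:

```python
A = [0, 1, 2]                          # indices of A inside B
B = list(range(6))
f = [3.0, -1.0, 4.0, 1.5, 9.0, 2.6]    # arbitrary values of f on B

mean_A = sum(f[i] for i in A) / len(A)
mean_B = sum(f[i] for i in B) / len(B)

lhs    = sum((f[i] - mean_A) ** 2 for i in A)
middle = sum((f[i] - mean_B) ** 2 for i in A)
rhs    = sum((f[i] - mean_B) ** 2 for i in B)

# the cross term: a constant times the integral of f - mean_A over A, which is 0
cross = sum((f[i] - mean_A) * (mean_A - mean_B) for i in A)

print(lhs, middle, rhs, cross)  # lhs <= middle <= rhs, cross == 0
```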
{ "language": "en", "url": "https://math.stackexchange.com/questions/3649406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding / Proving the Order of Dihedral and Symmetric Subgroups Lately I've been trying to follow along with the YouTube lecture series "Visual Group Theory" by Clemson's Matthew Macauley and have had some issues grasping the concept of a group's "order". I (think I) understand how it works when referring to reflections, rotations, etc., in the sense that we are looking for the $k$ value that makes $x^k = e$. For example, looking at $D_4$ where $R$ = rotation by $90^\circ$ and $F$ = horizontal flip:

* $|R^2| = 2$, since two rotations bring you halfway around, and doing this twice preserves the footprint.
* $|RF| = 2$, since when performing this on a piece of paper with numbered corners I got back to my original starting point after performing $RF$ twice.

But when moving on to the proof of orders in the symmetric groups I have trouble wrapping my head around what's going on. Proposed with an exercise of examining $S_4$, when computing $|(12)(13)|$ or $|(1243)|$ I lack the intuition for seeing how this gets mapped and how we deduce its order (although my guess would be $|(12)(13)| = 4$ since it can be rewritten as $|(123)|$). Please let me know what you think; any and all help, input, and assistance is greatly appreciated!
You started with a slight error. There's a difference between the order of a group, and the order of an element. To illustrate, a favorite theorem of mine is Cauchy's theorem. It states that if $p$ is a prime dividing the order of the group, then the group has an element of order $p$. But getting back to your question, actually $|(12)(13)|=|(123)|=3$, because a $3$-cycle has order $3$. More generally, an $n$-cycle has order $n$. Your intuition will develop as you move along. Here's a useful fact: if $a,b\in S_n$ are disjoint, then $|ab|=\operatorname{lcm}(|a|,|b|)$. So for instance, let's look at $(12)(34)$. Since the transpositions $(12)$ and $(34)$ are disjoint, the order of their product is $2$. To contrast, in your example $(12)(13)$, the transpositions $(12)$ and $(13)$ are not disjoint, as they both "move" $1$. That's all for now.
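Such orders can also be computed mechanically; here is a small illustration of my own, representing a permutation as a dict and iterating until the identity returns (with the convention that the right factor acts first):

```python
from math import lcm

def compose(p, q):
    # (p o q)(x) = p(q(x)): apply q first, then p
    return {x: p[q[x]] for x in q}

def order(p):
    identity = {x: x for x in p}
    k, current = 1, dict(p)
    while current != identity:
        current = compose(p, current)
        k += 1
    return k

def cycle(elements, *cyc):
    # build the permutation given by one cycle, fixing everything else
    p = {x: x for x in elements}
    for i, x in enumerate(cyc):
        p[x] = cyc[(i + 1) % len(cyc)]
    return p

S = range(1, 5)
t12, t13, t34 = cycle(S, 1, 2), cycle(S, 1, 3), cycle(S, 3, 4)

print(order(compose(t12, t13)))  # 3: the product of (12) and (13) is a 3-cycle
print(order(compose(t12, t34)))  # 2: disjoint transpositions, lcm(2, 2)
print(order(cycle(S, 1, 2, 4, 3)))  # 4: a 4-cycle has order 4
```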
{ "language": "en", "url": "https://math.stackexchange.com/questions/3649556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mathematical Operation (PEMDAS AND BODMAS) I am learning Python and came across PEMDAS. Python uses PEMDAS to solve mathematical equations. But in lower classes like 5th or 6th we were taught BODMAS. I got confused and then made an equation to check which method gives me the correct answer. My equation was 100-2⁵×8÷2+4. Now both PEMDAS and BODMAS gave me the same result, -24. So how is that possible? In PEMDAS we are doing multiplication first and in BODMAS we are doing division first. I actually got confused because when we enter print 100-25*3%4, Python gives the result 97 using PEMDAS. But if I use BODMAS then we get 25. PS: In the above Python script * means multiplication and % means modulus, i.e. if we write X%Y then we speak it as 'X divided by Y with J remaining'. The result of % is the J part (or remainder) of the division. Tell me where I am going wrong. I am posting it here because I think it's more of a mathematical doubt than a Python problem.
In both PEMDAS and BODMAS, there is no particular preference for multiplication or division; either can be done first and the answer will be the same. In general, an expression like $\frac{abcd}{ghij}$ can be evaluated by multiplying or dividing the numbers in any order whatsoever and you will get the same result. For example, check your expression: $2^5 \times 8 \div 2$ gives the same number regardless of the order in which you calculate. Moreover, if the two operations are far apart in an expression and separated by addition and subtraction operations in between, it is obvious that you will end up with the same numbers whether you multiply first or divide.
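In Python specifically, `*`, `/`, and `%` share one precedence level and associate left to right; neither acronym prescribes "multiplication before division" or vice versa. A sketch (my own addition) of how Python actually evaluates the two expressions:

```python
# Same precedence level, evaluated left to right:
assert 100 - 2**5 * 8 / 2 + 4 == -24  # 2**5=32 -> 32*8=256 -> 256/2=128 -> 100-128+4
assert 100 - 25 * 3 % 4 == 97         # 25*3=75 -> 75%4=3 -> 100-3

# Dividing before multiplying gives the same answer in a chain of products/quotients:
assert 2**5 * (8 / 2) == (2**5 * 8) / 2 == 128
```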
{ "language": "en", "url": "https://math.stackexchange.com/questions/3649733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Subgraphs of a bipartite graph $H$ as the intersection of two copies of $H$ in $K_{n,n}$. Let $H$ be a bipartite graph. Is it necessarily the case that every subgraph of $H$ can appear as the intersection of two copies of $H$ in $K_{n,n}$ for large enough $n$? (Here $K_{n,n}$ is the complete bipartite graph with $n$ vertices in each class.) Give a proof or a counterexample. No idea how to approach this at all and even on what the answer might be. Any help appreciated!
The answer is negative for $H=C_4$ and its subgraph $G$ which equals $H$ minus an edge. Indeed, if a copy of $G$ is an intersection of two copies of $H$, then both copies share the same four vertices, and it is easy to check that the intersection of the copies of $H$ cannot be a copy of $G$. On the other hand, the answer is positive for each induced subgraph $G$ of $H$. Indeed, let $H'$ be the graph $H$ in which we clone each vertex which is not in $G$. Then $H'$ is a union of two copies of $H$ whose intersection is a copy of $G$. Since $H'$ is bipartite, it is a subgraph of $K_{p,p}$ for $p\le 2n_H-n_G$, where $n_H$ and $n_G$ are the numbers of vertices of the graphs $H$ and $G$, respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3649895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $R^{n}\setminus R^{k} \simeq S^{n} \setminus S^{k} \simeq S^{n-k-1} $ Here $\simeq$ is homotopy equivalence of spaces. Homotopy equivalence: two topological spaces $X$ and $Y$ are homotopy equivalent if there exist continuous maps $f:X\rightarrow Y$ and $g:Y \rightarrow X$ such that the composition $f \circ g$ is homotopic to the identity $id_Y$ on $Y$, and $g \circ f$ is homotopic to $id_X$. Each of the maps $f$ and $g$ is called a homotopy equivalence, and $g$ is said to be a homotopy inverse to $f$ (and vice versa). Homotopy: in topology, two continuous functions from one topological space to another are called homotopic if one can be "continuously deformed" into the other, such a deformation being called a homotopy between the two functions. $\setminus$ is set minus (don't mistake it for a quotient space). Hint: there is a similar question here, but it is not usable for this question because, first of all, it proves $R^{n}\setminus R^{k} \simeq S^{n-k-1} \times R^{k+1}$. That answer tried to solve it by using induction on $k$, and the right-hand term $R^{k+1}$ plays an important role which cannot be omitted from that proof. If I wanted to prove it by induction on $k$: for $k=0$, $R^{n} \setminus R^{0} \simeq S^{n-1}$, and I can prove the initial case very similarly by giving the map $x \mapsto x/\|x\|$. However, I have no idea for the induction step, nor for the middle part of the statement, which says $\simeq S^{n} \setminus S^{k}$.
Your question is imprecise because you do not specify how $\mathbb R^k$ is regarded as a subspace of $\mathbb R^n$ (similarly for $S^k$ and $S^n$). But certainly you identify $\mathbb R^k$ with $\{(x_1,\ldots,x_n) \in \mathbb R^n \mid x_{k+1} = \ldots = x_n = 0 \}$ and $S^k$ with $\{(x_1,\ldots,x_{n+1}) \in S^n \mid x_{k+2} = \ldots = x_{n+1} = 0 \}$. Let $p = (1,0,\ldots,0) \in S^n$. Stereographic projection gives us a homeomorphism $$h : S^n \setminus \{p \} \to \mathbb R^n, h(x_1,\ldots,x_{n+1}) =\left(\frac{x_2}{1-x_1},\ldots,\frac{x_{n+1}}{1-x_1}\right) .$$ We have $h(S^k \setminus \{p \}) = \mathbb R^k$, thus $h((S^n \setminus \{p \}) \setminus (S^k \setminus \{p \})) = \mathbb R^n \setminus \mathbb R^k$. But clearly $(S^n \setminus \{p \}) \setminus (S^k \setminus \{p \}) = S^n \setminus S^k$, which proves that $\mathbb R^n \setminus \mathbb R^k$ and $S^n \setminus S^k$ are homeomorphic. As you stated in your question, $\mathbb R^n \setminus \mathbb R^k$ and $S^{n-k-1} \times \mathbb R^{k+1}$ are homotopy equivalent, which implies $\mathbb R^n \setminus \mathbb R^k \simeq S^{n-k-1}$ because $\mathbb R^{k+1}$ is contractible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3650017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Geometric Interpretation of Matrix Addition Is there a geometric meaning to matrix addition, similar to how matrix multiplication acts as a linear transformation? I'm really curious, thanks!
As you note, each matrix corresponds to a linear transformation. Given a matrix $A$, the linear transformation that $A$ corresponds to is defined via matrix-vector multiplication. In particular, if $A$ has size $m \times n$, then the corresponding transformation $T_A:\Bbb R^n \to \Bbb R^m$ is given by $T_A(x) = Ax$ for vectors $x \in \Bbb R^n$. With that established, we can interpret the addition and multiplication of two matrices in terms of these linear transformations. In particular: if $A,B$ are $m \times n$ matrices, then the transformation $T_{A + B}$ corresponding to the matrix $A + B$ satisfies $T_{A + B}(x) = T_A(x) + T_B(x)$. That is, $T_{A+B} = T_A + T_B$. If $A$ is $p \times m$ and $B$ is $m \times n$, then the transformation corresponding to $AB$ satisfies $T_{AB}(x) = T_A(T_B(x))$. That is, $T_{AB} = T_A \circ T_B$. Since we're focusing on addition, here is an example: take $$ A = \pmatrix{1&0\\0&0}, \quad B = \pmatrix{0&0\\0&1}, \quad A+B = I = \pmatrix{1&0\\0&1}. $$ Note that $I$ is a special matrix known as the "identity matrix". The transformations corresponding to $A$ and $B$ are as follows: $$ T_A(x) = \pmatrix{1&0\\0&0} \pmatrix{x_1\\x_2} = \pmatrix{x_1\\0}, \quad T_B(x) = \pmatrix{0&0\\0&1}\pmatrix{x_1\\x_2} = \pmatrix{0\\x_2}. $$ In other words: $T_A$ is the projection onto the $x_1$-axis and $T_B$ is the projection onto the $x_2$-axis. Adding these two transformations together means that we get the output for $x$ by adding $T_A(x)$ and $T_B(x)$. That is, we should have $$ T_{A+B}(x) = T_A(x) + T_B(x) = \pmatrix{x_1\\0} + \pmatrix{0\\x_2} = \pmatrix{x_1\\x_2}. $$ That is, adding $T_A$ and $T_B$ results in a transformation that returns the same vector we started with. This indeed matches up with the transformation corresponding to $A+B$. In particular, $$ T_{A+B}(x) = (A+B)x = \pmatrix{1&0\\0&1}\pmatrix{x_1\\x_2} = \pmatrix{x_1\\x_2}. $$
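A minimal sketch of the fact $T_{A+B}(x) = T_A(x) + T_B(x)$ for the two projections above (my own addition, in plain Python rather than any particular library):

```python
def matvec(M, x):
    # apply the linear transformation T_M to the vector x
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def mat_add(M, N):
    return [[M[i][j] + N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

A = [[1, 0], [0, 0]]  # projection onto the x1-axis
B = [[0, 0], [0, 1]]  # projection onto the x2-axis

x = [3, 7]
lhs = matvec(mat_add(A, B), x)                             # T_{A+B}(x)
rhs = [a + b for a, b in zip(matvec(A, x), matvec(B, x))]  # T_A(x) + T_B(x)
print(lhs, rhs)  # both [3, 7]: A + B is the identity
```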
{ "language": "en", "url": "https://math.stackexchange.com/questions/3650199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Show that $f$ has exactly one zero on the square $Q =$ {$x + iy ∈ \Bbb C : |x| < 1, |y| < 1$}. Let $f(z) = z + g(z)$ where $g$ is holomorphic. Suppose that $|\operatorname{Im} g(z)| < 1$ for $z ∈ [−1 − i, 1 − i]∪[−1 + i, 1 + i]$ and $|\operatorname{Re} g(z)| < 1$ for $z ∈ [−1 − i, −1 + i] ∪ [1 − i, 1 + i]$. Show that $f$ has exactly one zero on the square $Q =$ {$x + iy ∈ \Bbb C : |x| < 1, |y| < 1$}. My attempt: I let $h(z) = z$. Then, I want to compare $|g(z)|$ and $|h(z)|$ because if $|g(z)| < |h(z)|$ then by Rouché's theorem, $h$ and $h+g$ have the same number of zeros, and $h$ has in fact one zero. But then $h+g = f$ and thus $f$ would also have the same number of zeros as $h+g$ which has one zero. This is what I could come up with: $|g(z)| = |u(z) + iv(z)|$. Then for $z \in Q$, we have $|g(z)| \leq |u(z)| + |v(z)| < 1 + 1 = 2$ (since $|\operatorname{Im} g(z)| < 1$ for $z ∈ [−1 − i, 1 − i]∪[−1 + i, 1 + i]$ and $|\operatorname{Re} g(z)| < 1$ for $z ∈ [−1 − i, −1 + i] ∪ [1 − i, 1 + i]$) But I don't know how to continue from here. Any help please?
For this problem the following stronger version of Rouché works (sometimes it is called the symmetric Rouché and is expressed as $|f-g| <|f|+|g|, z \in K$): If $\Omega$ is the interior domain of a Jordan curve $K$ and $f(z)+\lambda h(z) \ne 0, \lambda \ge 0, h(z) \ne 0, z \in K$, then $f,h$ have the same number of zeroes inside $\Omega$. The hypothesis of the OP shows that for $\lambda \ge 0$, $\Re (f+\lambda z) \ne 0$ when $\Re z = \pm 1$ and $\Im (f+\lambda z) \ne 0$ when $\Im z = \pm 1$, so $f+\lambda z \ne 0$ on the boundary of the square for any $\lambda \ge 0$, while clearly $z \ne 0$ there, so $f,z$ have the same number of zeroes inside the square, as the OP predicted. The stronger version of Rouché follows because the homotopy $tf(z)+(1-t)h(z), 0 \le t \le 1, z \in K$ avoids zero by hypothesis ($t=0$ gives $h \ne 0$; $1 \ge t>0$ gives $f+\frac{1-t}{t}h \ne 0$), so the winding number of $tf(z)+(1-t)h(z)$ around $K$ exists and is continuous for $0 \le t \le 1$; but it is then constant, being an integer, and at the two ends we get the number of zeroes inside $K$ of $f$ and $h$ respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3650317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do we need to determine the definiteness of the Hessian to decide what a critical point is? In univariate calculus, if we know that $f'(c)=0$, we can determine if the function $f$ has a minimum at $c$ by checking that $f''(c) > 0$. The multivariate analogue of the second derivative is the Hessian matrix. I now learned that to decide between extreme and saddle points in this case, it has to be checked whether the Hessian is positive definite, negative definite or indefinite. This can be achieved by checking its eigenvalues. I have several questions regarding this:

1) Why is it not sufficient to check the sign of the values in the Hessian, but we need to check for definiteness?
2) Does the definiteness just make sure some convexity or concavity properties check out, or is there a more meaningful interpretation of that?
3) How do the eigenvalues of a matrix tell us its definiteness?
4) Addendum: What do the off-diagonal entries in the Hessian even mean? How the slope in a certain dimension changes by making changes in a different dimension?
1) For example, the function $f(x,y) = x^2 + 4 x y + y^2$ has all entries of the Hessian matrix $> 0$, but the critical point $(0,0)$ is a saddle (e.g. $f(t,-t) < 0$ for $t \ne 0$). 2) A smooth function of $n$ variables is convex in an open set $R$ iff its Hessian is positive semidefinite there. 3) A real symmetric matrix is positive definite, positive semidefinite, negative semidefinite, or negative definite iff its eigenvalues are all $> 0$, $\ge 0$, $\le 0$, $< 0$ respectively.
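A quick numerical sketch of point 1 (my own illustration, using only the standard library): the Hessian of $f(x,y)=x^2+4xy+y^2$ at the origin has all entries positive, yet its eigenvalues have opposite signs, so the matrix is indefinite and $(0,0)$ is a saddle.

```python
import math

# Hessian of f(x, y) = x^2 + 4xy + y^2 at (0, 0) is [[2, 4], [4, 2]]:
# every entry is positive, yet the matrix is indefinite.
a, b, c = 2.0, 4.0, 2.0

# eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]
mean = (a + c) / 2
half_gap = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
eigenvalues = (mean - half_gap, mean + half_gap)   # one negative, one positive

f = lambda x, y: x ** 2 + 4 * x * y + y ** 2
saddle_evidence = f(1.0, 1.0) > 0 and f(1.0, -1.0) < 0
```

The eigenvalues come out as $-2$ and $6$, matching the saddle behaviour along the directions $(1,1)$ and $(1,-1)$.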
{ "language": "en", "url": "https://math.stackexchange.com/questions/3650650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Variance calculation for a bivariate probability distribution I understand the following statement is true $$V(XY)=E((XY)^2)-(E(XY))^2$$ In my question $X$ and $Y$ are independent. The solution recommends $$ V(XY)=E((X)^2)E((Y)^2)-(E(XY))^2 $$ and I am curious to understand why this approach fails $$V(XY)=E(XY)E(XY)-(E(XY))^2$$ It seems that arbitrary rules are being applied here; I am certain there are not, so I would appreciate it if someone could unpick the issue.
The solution recommends V(XY)=E((X)^2)E((Y)^2)-(E(XY))^2 Because $X$ and $Y$ are independent, so too are $X^2$ and $Y^2$. The expectation of a product of independent random variables is the product of their expectations. You can further say $\mathsf V(XY)=\mathsf E(X^2)~\mathsf E(Y^2)-(\mathsf E(X))^2~(\mathsf E(Y))^2$ and I am curious to understand why this approach fails V(XY)=E(XY)E(XY)-(E(XY))^2 The product rule only applies for independent random variables. $XY$ is not independent from $XY$; that product is actually very dependent on itself.
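A small exact check of both points, using toy independent discrete variables of my own choosing and exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import product

# Independent toy variables: X uniform on {1, 2}, Y uniform on {0, 3}.
X = {1: F(1, 2), 2: F(1, 2)}
Y = {0: F(1, 2), 3: F(1, 2)}

def E(g):
    # expectation of g(X, Y) over the (independent) joint distribution
    return sum(px * py * g(x, y)
               for (x, px), (y, py) in product(X.items(), Y.items()))

var_xy  = E(lambda x, y: (x * y) ** 2) - E(lambda x, y: x * y) ** 2
formula = E(lambda x, y: x ** 2) * E(lambda x, y: y ** 2) - E(lambda x, y: x * y) ** 2
naive   = E(lambda x, y: x * y) * E(lambda x, y: x * y) - E(lambda x, y: x * y) ** 2
```

Here `var_xy == formula == 99/16`, while the "naive" expression is identically zero for any random variable: it would claim every product has zero variance, which is exactly why treating $XY$ as independent of itself fails.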
{ "language": "en", "url": "https://math.stackexchange.com/questions/3650830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Continuous Random Variable Transformations vs Discrete My textbook, Introduction to Mathematical Statistics, has the following example of finding the pdf of a transformation of a continuous random variable: Let $X$ be a random variable with pdf $f_X(x)=2x$ for $0 < x < 1$, zero elsewhere, and cdf $F_X(x)=x^2$. Let $Y = X^2$ be a second random variable. Find $f_Y(y)$, the pdf of $Y$. Solution: $F_Y(y)=P(Y\leq y)=P(X^2\leq y)=P(X\leq \sqrt{y})=F_X(\sqrt y) = \sqrt{y}^2 = y.$ $f_Y(y) = \frac{dF_Y(y)}{dy} = 1.$ I can follow the solution, but my first approach to this problem would have been the one described to solve the same problem with discrete random variables - to simply use the inverse of the transformation as substitution into $f_X(x)$, since the transformation is one-to-one: $f_Y(y) = f_X(g^{-1}(y))=2\sqrt{y}$ I see that this is clearly wrong since the cumulative probability of this pdf over the interval is not equal to 1, but I'd like to understand why this process works for discrete random variables to find the pmf of a transformation, but doesn't work for continuous random variables to find the pdf of a transformation. Why do we need to make the substitution in the cumulative distribution function if the random variable is continuous?
Because the pdf is an unsigned derivative†, we must apply the chain rule for differentiation. $$\begin{align}f_Y(y) &=\begin{vmatrix}\dfrac{\mathrm d F_Y(y)}{\mathrm d y}\end{vmatrix}\\[1ex] &=\begin{vmatrix}\dfrac{\mathrm d F_X(g^{-1}(y))}{\mathrm d y}\end{vmatrix}\\[1ex] &=\begin{vmatrix}\dfrac{\mathrm d F_X(g^{-1}(y))}{\mathrm d g^{-1}(y)}\cdot\dfrac{\mathrm d g^{-1}(y)}{\mathrm d y}\end{vmatrix}\\[1ex] &= f_X(g^{-1}(y))\cdot\begin{vmatrix}\dfrac{\mathrm d g^{-1}(y)}{\mathrm d y}\end{vmatrix}\\[4ex]f_Y(y) &=2 g^{-1}(y)\cdot\begin{vmatrix}\dfrac{\mathrm d g^{-1}(y)}{\mathrm d y}\end{vmatrix}\\[1ex]&= 2\sqrt y~\mathbf 1_{0<\sqrt y<1}\cdot\begin{vmatrix}\dfrac{\mathrm d \sqrt y}{\mathrm d y}\end{vmatrix}\\[1ex]&=2\sqrt y~\mathbf 1_{0<y<1^2}\cdot\dfrac{1}{2\sqrt y}\\[1ex]&=\mathbf 1_{0<y<1}\end{align}$$ († a pdf is required to map to non-negative real values, so we use absolute value functions to ensure the transformation of variables retains this property.) I'd like to understand why this process works for discrete random variables to find the pmf of a transformation, but doesn't work for continuous random variables to find the pdf of a transformation. Because the support for a discrete distribution consists of a set of discrete points each with a probability mass. A transformation which maps the points one-to-one to another set of discrete points won't affect the probability mass measure no matter if the points are spread further apart or pushed close together (unless they are folded onto one another). However, the support for a continuous distribution consists of a continuous interval whose points have probability density. So a transformation which maps that interval one-to-one may involve stretching or squeezing, and hence affect the probability density of the new interval. The chain rule is how we account for this.
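A Monte Carlo sketch (my own construction) of the same transformation: sampling $X$ with cdf $F_X(x)=x^2$ via inverse transform gives $X=\sqrt U$, and $Y=X^2=U$ should then be uniform on $(0,1)$, i.e. have constant density $1$.

```python
import math
import random

random.seed(0)
N = 100_000

# inverse-transform sampling: F_X(x) = x^2 on (0, 1), so X = sqrt(U) has pdf 2x
xs = [math.sqrt(random.random()) for _ in range(N)]
ys = [x * x for x in xs]          # Y = X^2 should be Uniform(0, 1)

mean_y = sum(ys) / N                          # ~ 1/2 for a uniform density
frac_low = sum(y < 0.25 for y in ys) / N      # ~ 0.25 if the density is flat
```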
{ "language": "en", "url": "https://math.stackexchange.com/questions/3650976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area of the shaded region of the intersection between two triangles In the given figure (not reproduced here), find the area of the shaded region. The only thing I found is that triangle $DCO$ is similar to triangle $ABO$: $$\frac{AB}{CD}=\frac{AO}{CO}\\ \frac{9}{CD}=\frac{AO}{17}$$ I don't think this leads anywhere. Any clue what to do? Thanks.
I will assume $17$ is from $A$ to the point of intersection of $DB$ and $AC$ which you call $O$. The triangles $ABO$ and $DCO$ are similar so that $AO=BO=x$ and $\angle ADB=\angle ACB=\theta$. Using the law of cosines you get the following equations which you can easily solve for $x$ (without solving for $\theta$) $$\left(17+x\right)^{2}+100-20\left(17+x\right)\cos\left(\theta\right)=81 \tag{1}$$ $$x^{2}=100+17^{2}-340\cos\left(\theta\right) \tag{2} $$ Once you find $x$ you can use Heron's formula to find the area of $ABO$ so: $$A=\sqrt{\left(x+\frac{9}{2}\right)\left(x-\frac{9}{2}\right)\left(\frac{9}{2}\right)^{2}}=\frac{9}{2}\sqrt{\left(x+\frac{9}{2}\right)\left(x-\frac{9}{2}\right)}$$ $x$ is a nasty number though.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3651328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
The outside of a $180$-sheet roll of toilet paper is covered by two sheets; the inner cylinder, by one. What's wrong with how I counted the layers? Puzzle: A roll of toilet paper has 180 sheets on it. The outside is covered with exactly two sheets. The inside around the cardboard cylinder is covered by exactly one. Question of the puzzle: how many layers of toilet paper are on the roll of toilet paper? The given solution: One way to solve this is by saying that the average round is covered by 1.5 sheets, so therefore the answer is $180\times\frac{2}{3}=120$. I tried a similar (but wrong) reasoning: "the average sheet makes an average of $\frac{3}{4}$ rounds (the first sheet makes one round and the last sheet makes $\frac{1}{2}$ a round), so the answer is $180\times\frac{3}{4}=$ 135" QUESTION: Apparently my answer is wrong. But since it seems analogous to the given solution, I don't understand what error I made. Possibly the growth of sheets per round is constant, while the (negative) growth of rounds per sheet is not constant? What are the related functions? Put another way: if $\frac{dSheets}{dRounds}=Constant$, isn't also $\frac{dRounds}{dSheets}=Constant$? This question is linked to this question: Using differential equations to determine the number of rolls on a roll of toilet paper
Consider the cross-section of the roll. It might be easier to think of it as a continuous spool of paper that's later going to be divided lengthwise equally into sheets. The circumference of the cross section at a radial distance $r$ from the centre is $2\pi r$. The thickness of one layer is $t$. The circumference one layer outward will be $2\pi (r+t) $, which is $2\pi t$ more. All this is not necessary, it's just to show that adding a layer adds a constant to the cross sectional circumference. If we now think in terms of sheets, we can say the initial cross-sectional circumference is $1$ sheet while the final is $2$ sheets. You're adding a constant each time. This is an arithmetic progression. The sum of an arithmetic series can be given by different formulas. The easiest one to use here is $S(n) = \frac n2 (a+l) $, where $n$ is the number of terms (equal to number of layers, and this is what you need to solve for). $a$ is the first term ($1$ here) and $l$ is the final term ($2$ here). You can also think of it like $n$ times the average term, which allows you to relate it to your given solution. So $\frac n2 (1+2) = 180 \implies n =120$.
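The arithmetic-progression argument can be sketched numerically (assuming, as above, that the cross-sectional circumference grows linearly from 1 sheet at the core to 2 sheets at the outside):

```python
def total_sheets(n_layers):
    # layer i has circumference 1 + i/(n_layers - 1) sheets,
    # running linearly from 1 (innermost) to 2 (outermost)
    return sum(1 + i / (n_layers - 1) for i in range(n_layers))

# S(n) = n/2 * (a + l) with a = 1, l = 2 says 120 layers hold 180 sheets,
# while the question's guess of 135 layers would need 202.5 sheets.
sheets_120 = total_sheets(120)
sheets_135 = total_sheets(135)
```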
{ "language": "en", "url": "https://math.stackexchange.com/questions/3651513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Real numbers and decimal expansions For any real number $x$ we define its decimal expansion as $N.x_1x_2x_3\cdots$ where $N=\lfloor x\rfloor$ and $$x_i=\left\lfloor 10^i \left(x- \left(N+\sum_{j=1}^{i-1}\frac{x_j}{10^j}\right) \right)\right\rfloor.$$ Now I have two questions regarding this definition: * *Why will each $x_k$ be a digit between $0$ and $9$? That is clear in the case of $x_1$, since $x-N$ being the fractional part of $x$ will be in $[0,1)$ and so $10(x-N)\in[0,10)$. In the case of $x_2$ it is not so clear. Intuitively, if from the fractional part we subtract one tenth of the "first decimal digit", then we must be getting something like $0.0x_2x_3\cdots$ and hence multiplying by $100$ (and taking the floor) is the correct thing to do, to recover $x_2$. However I cannot seem to make this idea rigorous. *Why can't the decimal expansion end in a string of $9$'s? I think if we presumed that it did then, after some $k$, the difference between $x$ and $N.x_1\cdots x_k$ would be zero. That will be a contradiction because clearly each $x_i$ is unique. But how to justify that such a difference ultimately becomes zero? Update: The answers posted below both use induction to prove (1). Is it correct to do it without induction as follows: Suppose $i\ge 3$ (the cases $i=1,2$ being similar). Now, $$10^{i-1}\left(x-\left(N+\sum_{j=1}^{i-2}\frac{x_j}{10^j}\right)\right)<1+x_{i-1}$$ by definition of the floor function. Hence $10^{i}(x-(N+\sum_{j=1}^{i-1}\frac{x_j}{10^j}))<10$ and so $x_i\le 9$. Similarly, since $$10^{i-1} \left(x- \left(N+\sum_{j=1}^{i-2}\frac{x_j}{10^j} \right)\right)\ge x_{i-1}$$ so $10^{i}\left(x-\left(N+\sum_{j=1}^{i-1}\frac{x_j}{10^j}\right)\right)\ge 0$, following which $x_i\ge 0$. Thank you. (Just to clarify, the bounty will be given to the best posted answer, even if the above is correct.)
* *WLOG, $N=0$ (you can rescale $x$), and $$0\le(x-0.)<1$$ starts the induction. Then $$0\le10^n(x-0.x_1x_2\cdots x_n)<1\implies0\le10^{n+1}(x-0.x_1x_2\cdots x_n)<10$$ so that taking the floor, the next digit is one of $0,1,\cdots 9$. And in turn $$0\le10^{n+1}(x-0.x_1x_2\cdots x_nx_{n+1})<1$$ because this is the fractional part of $10^{n+1}(x-0.x_1x_2\cdots x_n)$, i.e. what remains of a number after you remove its integer part. *applying this definition, you will never get an infinite repetition of $9$, because such repetitions tend to a number with a finite expansion ($0.234999\cdots=0.234\bar9=0.235$), and by the definition, the computed digits will be zeroes, not nines.
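The definition can be run directly. Below is a small sketch using exact rational arithmetic (to avoid floating-point noise); it shows the digits it produces and that a terminating rational like $0.235$ yields $2,3,5,0,0,\ldots$ rather than $2,3,4,9,9,\ldots$:

```python
from fractions import Fraction
from math import floor

def decimal_digits(x, k):
    """First k digits x_1, ..., x_k of x as defined in the question."""
    N = floor(x)
    approx = Fraction(N)              # N.x_1 x_2 ... accumulated so far
    digits = []
    for i in range(1, k + 1):
        d = floor(10 ** i * (x - approx))
        digits.append(d)
        approx += Fraction(d, 10 ** i)
    return N, digits

third = decimal_digits(Fraction(1, 3), 5)       # expect (0, [3, 3, 3, 3, 3])
term = decimal_digits(Fraction(47, 200), 5)     # 47/200 = 0.235
```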
{ "language": "en", "url": "https://math.stackexchange.com/questions/3651649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
$g$ not continuous in $(0,0)$, differentiable in every direction AND $|D_vg(x)| \leq |v|$ I have found plenty of similar questions to mine, but in this case there is one more condition that needs to be satisfied. This is the problem: "Find a function $g:\mathbb{R}^2 \rightarrow\mathbb{R}$, so that all directional derivatives $D_v g(x)$ exist ($v\in\mathbb{R}^2$) but $g$ isn't continuous in $(0,0)$ AND $|D_vg(x)| \leq |v|$." It's easy to find a function that satisfies the first two conditions, but I just don't know how to use the third one. I also have a problem in understanding it. For example, I can have 2 vectors which point in the same direction but have different length, for example: $\left( \begin{array}{c} 1\\ 0\\ \end{array} \right)$ and $\left( \begin{array}{c} 0.01\\ 0\\ \end{array} \right)$, both point in the same direction, but their length isn't the same. With this in mind, $|v|$ can become arbitrarily small, meaning that $|D_vg(x)|$ has to be $0$, so $g$ has to be a constant function. But that doesn't really help, since there is no constant function that is discontinuous in $(0,0)$ (or is there?), so I guess that I didn't understand the $|D_vg(x)| \leq |v|$ condition properly. Could you give me some advice?
Suppose $x,y\in \mathbb R^2.$ Define $f:\mathbb R\to \mathbb R$ to be the function $f(t)= g(x+t(y-x)).$ Then $f'(t) = D_{y-x}g(x+t(y-x))$ for all $t.$ By the MVT, $f(1)-f(0) = f'(c)$ for some $c\in (0,1).$ It follows that $$|g(y)-g(x)|=|f(1)-f(0)| = |f'(c)| = |D_{y-x}g(x+c(y-x))|\le |y-x|.$$ Thus $g$ is Lipschitz on $\mathbb R^2,$ so is certainly continuous everywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3651816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Square root of the max is the max of the square root? I apologize if the question seems so obvious, but I don't have a strong base on maths nor I know the tools to prove this simple statement. For a given function $f(x)$, is it true that $$ \left(\max |f(x)|^2 \right)^{\frac{1}{2}} = \max |f(x)|.$$ I wonder if it should have a less or equal sign instead, but I do not know where to start. Thanks in advance. Edit: I see that I was misunderstanding the answers... I thought my claim was false and should be $\leq$ instead. Now I see that equality holds by proving the inequalities in both ways. Thanks a lot!
It's a big assumption that $\max |f(x)|$ or $\max(|f(x)|^2)$ exist, but if one or the other does, they both do and $(\max(|f(x)|^2))^{\frac 12} = \max |f(x)|$. Suppose $\max |f(x)|$ exists. That means there is $a\in \mathbb R$ so that for every $y\in \mathbb R$ we have $|f(y)| \le |f(a)|$ and $\max|f(x)| = |f(a)|$. If $|f(y)|\le |f(a)|$ then $|f(y)|^2 \le |f(a)|^2$ so $\max(|f(x)|^2) = |f(a)|^2$. And $(\max(|f(x)|^2))^{\frac 12} = (|f(a)|^2)^{\frac 12} = |f(a)| = \max (|f(x)|)$. And the other direction is too similar to be worth dealing with. Now a more subtle question is if $\max(|f(x)|)$ doesn't exist but $\sup(|f(x)|)$ does. Does $\sup (|f(x)|^2)$ exist, and if so does $(\sup(|f(x)|^2))^{\frac 12} = \sup |f(x)|$? The answer is still yes. $|f(y)| \le \sup |f(x)| \iff |f(y)|^2 \le (\sup |f(x)|)^2$ so $\{|f(x)|^2\}$ is bounded above by $(\sup |f(x)|)^2$. So $\sup(|f(x)|^2)$ exists. If $0< k < (\sup |f(x)|)^2$ then $\sqrt k < \sup|f(x)|$ and so there is $y$ with $\sqrt k < |f(y)| \le \sup |f(x)|$ and $k < |f(y)|^2$, so $k$ is not an upper bound. So $\sup(|f(x)|^2) = (\sup |f(x)|)^2$. And so $(\sup(|f(x)|^2))^{\frac 12} =((\sup |f(x)|)^2)^{\frac 12} = \sup(|f(x)|)$.
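As a numeric illustration (with an arbitrary test function of my own choosing), the maximum of $|f|$ over a grid coincides with the square root of the maximum of $f^2$ over the same grid:

```python
import math

def f(x):
    # arbitrary sign-changing test function
    return x * math.exp(-x) * math.cos(3 * x)

grid = [i / 1000 for i in range(5001)]          # x in [0, 5]

max_abs = max(abs(f(x)) for x in grid)
sqrt_of_max_sq = math.sqrt(max(f(x) ** 2 for x in grid))
```

Both maxima are attained at the same grid point, since $t \mapsto t^2$ is increasing on $[0,\infty)$, which is the whole content of the argument above.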
{ "language": "en", "url": "https://math.stackexchange.com/questions/3651978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Prove that for sets $A,B,C$, if $C \subseteq B$, then $(A\setminus B)\cap C = \varnothing$. I just need the proof of this. How does one prove that given $A, B, C$, if $C\subseteq B$, then $(A\setminus B)\cap C$ is equal to an empty set.
Assume that $(A \setminus B) \cap C \neq \varnothing$ and let $x \in (A \setminus B) \cap C$. Then $x\in A \setminus B$, and hence $x \notin B$. However, $x\in C$, which is a subset of $B$, so $x\in B$. That's a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3652088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Subgroup-invariant subextension of a Galois extension Let $L/K$ be a Galois extension with Galois group $G$, and let $a$ be an element whose conjugates $\{ga : g \in G\}$ form a normal basis of $L/K$. Let $H$ be any subgroup of $G$. I need to prove that $M/K$ (the $H$-invariant subextension, i.e. the fixed field of $H$) is generated by $x=\sum_{h\in H}ha$. The only thing I had managed to find out is that one basis of $M/K$ is $\{\sum_{h\in Hg}ha\}$ and that the subextension generated by $x$ is $H$-invariant, so one way is to prove that every element of this basis lies inside the generated subextension, but it doesn't feel like an obvious fact.
Let's look at the $H$-fixed elements of $K[G] \cong L$ (the isomorphism as $K[G]$-modules is given by the normal basis). They are exactly the elements of the form $k\sum_{h\in Hg}h$, because only the cosets of $H$ are stable under multiplication by $H$ on the left, and $H$ acts transitively on each such coset. The subspace $K$-linearly generated by these elements has $K$-dimension $[G:H]$. From the fact that the cardinality of the orbit of an element under the action of the Galois group equals the degree of the corresponding generated subextension, it follows that $\dim K(x) = [G:H]$, because all cosets $gH$ are distinct and the $ga$ form a basis. From the structure of $x$ it is easy to see that all powers $x^n$ are $H$-fixed, hence $K(x) \leq M$. The dimension argument then gives $K(x) = M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3652201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is this a rigorous and correct way to evaluate a limit including an $n^{th}$ degree derivative? I'm a first-year Physics student, I have an introductory knowledge of real analysis, and I'm not sure about how to solve the following limit: $$\left.e^{2x}\frac{d^n}{dx^n} e^{-x^2}\right\rvert_{-\infty}^\infty$$ What I know (and can prove) is that, asymptotically and using small o notation, $$\forall p,q,x\in\mathbb{R}:\ x^p = o_{x\to\infty}e^{qx}$$ and since $\frac{d^n}{dx^n} e^{-x^2} = P_n(x)e^{-x^2}$, with $P_n(x)$ an $n^{th}$ degree polynomial, it should follow that the above expression converges to zero due to the "faster" convergence of $e^{-x^2}$. Now, is this correct? I'm maybe even more worried about the question "is this rigorous enough?" I don't like doing sloppy maths. I would appreciate any comment, advice or enlightening explanation on how to evaluate this expression rigorously.
The expression can be written $$\frac{P(x)}{e^{(x-1)^2}}$$ or $$\frac{Q(x-1)}{e^{(x-1)^2}}$$ where $P,Q$ are polynomials. Whatever their degree $d$, $$e^{(x-1)^2}=\Omega(|x-1|^{d+1}).$$
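One can sanity-check the decay numerically. Using the classical identity $\frac{d^n}{dx^n}e^{-x^2} = (-1)^n H_n(x)e^{-x^2}$, where $H_n$ is the physicists' Hermite polynomial (computed below by its recurrence), the whole expression is tiny at the ends of the real line:

```python
import math

def hermite(n, x):
    # physicists' Hermite polynomials: H_0 = 1, H_1 = 2x,
    # H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def expression(n, x):
    # e^{2x} * d^n/dx^n e^{-x^2} = e^{2x} (-1)^n H_n(x) e^{-x^2}
    return math.exp(2 * x - x * x) * (-1) ** n * hermite(n, x)

tail = max(abs(expression(5, x)) for x in (-20.0, 20.0))
```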
{ "language": "en", "url": "https://math.stackexchange.com/questions/3652361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating : $\int \frac{\sec x-\tan x}{\sqrt{\sin^2x-\sin x}} \mathrm{d}x$ As a part of a bigger question, I was asked to evaluate the integral : $$\int \frac{\sec x-\tan x}{\sqrt{\sin^2x-\sin x}} \mathrm{d}x$$ Here's what I tried: (Please bear with me, it gets quite lengthy) $$\int \frac{\sec x-\tan x}{\sqrt{\sin^2x-\sin x}} \mathrm{d}x$$ $$=\int \frac{1-\sin x}{\cos x \sqrt{\sin^2x-\sin x}}\mathrm{d}x$$ $$=\int \frac{(1-\sin x) \cos x }{\sqrt{\sin^2 x-\sin x}(1-\sin^2 x)}\mathrm{d}x$$ $$=\int \frac{\cos x}{(\sqrt{\sin^2x -\sin x}(1+\sin x)}\mathrm{d}x$$ Substituting $\sin x= t$, we're left with a comparatively good-looking integral: $$\int \frac {\mathrm{d}t}{(1+t)\sqrt{t^2-t}}$$ Well, this integral looks simple and maybe is, but I'm having real trouble evaluating it : $$\frac12\int \frac{t+1-(t-1)}{(1+t)\sqrt{t^2-t}}\mathrm{d}t$$ $$=\frac12\left[\int \frac{\mathrm{d}t}{\sqrt{t^2-t}}-\int \frac{t-1}{\sqrt{t^2-t}}\mathrm{d}t\right]$$ Now this is getting longer than I expected it to. Can anyone help me find a shorter and quicker solution to this problem? Thanks in advance.
Letting $\sin x=\sec^2 \theta$, we have $$ \begin{aligned}I&=\int \frac{2 \sec ^2 \theta \tan \theta d \theta}{\left(1+\sec ^2 \theta\right) \sec \theta \tan \theta}\\&= 2 \int \frac{d(\sin \theta)}{2-\sin ^2 \theta}\\&= \frac{1}{\sqrt{2}} \ln \left|\frac{\sqrt{2}+\sin \theta}{\sqrt{2}-\sin \theta}\right|+C\\&= \frac 1{\sqrt{2}} \ln \left|\frac{\sqrt{2 \sin x}+\sqrt{\sin x+1}}{\sqrt{2 \sin x}-\sqrt{\sin x+1}}\right| +C \end{aligned} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3652486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Calculate the perimeter of a circle with a continuously increasing radius I have a circle (if that would even be the correct name for this shape), with a radius function equal to $R=\frac 2\theta + 1$, where $\theta$ is the angle in radians. The domain is between $\theta = 0.25$ and $\theta = 2\pi$. How do I calculate the total outside perimeter of this shape? I tried to divide it into individual sectors, and find the arc length, but couldn't find a way due to the two radii (in each sector) being different. The main question is what is a generalised method that could be used to calculate the perimeter of the shape, with an increasing radius (which increases with the angle relative to a starting direction)? The exact function isn't too important.
There are three main ways to describe a curve * *by a Cartesian equation $y=f(x)$, *by parametric equation(s) $x=x(t), \ y=y(t)$, *by a polar equation $r=r(\theta)$. We are here in this third case with $$r(\theta)=\frac{2}{\theta}+1$$ Each "way" has its own formulas for the computation of areas, lengths, etc., which IMHO do not need to be re-derived each time one uses them. In particular, the arc-length formula for polar equations is $$L=\int_{\theta_1}^{\theta_2}\sqrt{r^2+\left(\dfrac{dr}{d\theta}\right)^2} d\theta$$ which here becomes $$\int_{0.25}^{2 \pi}\frac{\sqrt{(\theta^2+2\theta)^2+4}}{\theta^2}d\theta$$ which has no simple expression... You must use numerical methods for getting an approximate result. I found $16.325196$ with Matlab, which looks good when looking at the curve (not reproduced here): roughly the equivalent of a circle with perimeter $2 \pi R$, $R\approx 1.5$, plus a "line" from $(1.5, 2.3)$ to $(8.5,2)$.
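The numerical value is easy to reproduce with composite Simpson integration (a pure-Python sketch; the figure it converges to matches the Matlab value quoted above):

```python
import math

def integrand(t):
    r = 2.0 / t + 1.0          # r(theta)
    dr = -2.0 / t ** 2         # dr/dtheta
    return math.hypot(r, dr)   # sqrt(r^2 + (dr/dtheta)^2)

def simpson(f, a, b, n=100_000):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

arc_length = simpson(integrand, 0.25, 2.0 * math.pi)
```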
{ "language": "en", "url": "https://math.stackexchange.com/questions/3652603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Non-equivalent colourings of a regular hexagon (Brualdi, Chapter 14, Exercise 32) I have a question about this exercise from Richard Brualdi's Introductory Combinatorics. The exercise is: Determine the number of non-equivalent colourings of the corners of a regular hexagon with colours red, white and blue. Taking motivation from an example solved in the text (image of the example omitted), what I calculated for the regular hexagon is $N(D_6, C) = \frac{3^6 + 5\times 3 + 6 \times 3^3}{12} = 75.5$. It seems I am making some mistake, as the number of non-equivalent colourings comes out to be fractional. I tried to solve it again, but I get the same answer. Can someone please tell me what I am doing wrong?
The dihedral group of the hexagon is $\rho^0,\rho^1,\rho^2,\rho^3,\rho^4,\rho^5,\tau_1,\tau_2,\tau_3,\sigma_4,\sigma_5,\sigma_6$. The $\rho^i$ are the rotations, the $\tau_i$ are the reflections through axes which pass through the vertices of the hexagon and the $\sigma_i$ are the reflections which do not pass through the vertices of the hexagon. For each permutation $g$, the number of cycles $c(g)$ and $3^{c(g)}$ are listed below: $$\rho^0 \quad 6\quad 729\\ \rho^1\quad 1 \quad 3 \\ \rho^2\quad 2 \quad 9\\ \rho^3 \quad 3\quad 27 \\ \rho^4 \quad2 \quad 9\\ \rho^5\quad 1 \quad 3 \\ \sigma_i \quad 3 \quad 27\\ \tau_i\quad 4\quad 81$$ The number of nonequivalent colourings is $\frac{1}{|G|}\sum 3^{c(g)}=\frac{1104}{12}=92$.
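Burnside's count can be verified by brute force: build the 12 permutations of $D_6$, count the cycles of each, and average $3^{c(g)}$ (a standalone sketch):

```python
def cycle_count(perm):
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

n = 6
rotations   = [tuple((i + k) % n for i in range(n)) for k in range(n)]
reflections = [tuple((k - i) % n for i in range(n)) for k in range(n)]
group = rotations + reflections           # the 12 elements of D_6

fixed_total = sum(3 ** cycle_count(g) for g in group)
nonequivalent = fixed_total // len(group)
```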
{ "language": "en", "url": "https://math.stackexchange.com/questions/3653142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $\sum_{n=2}^{\infty} \frac{(-1)^{n}}{n}\zeta(n) = \gamma$ How do you prove that $$\sum_{n=2}^{\infty} \frac{(-1)^{n}}{n}\zeta(n) = \gamma$$ where $\gamma$ is the Euler-Mascheroni constant? This series kind of appeared in one of the questions I asked earlier; you just need to do some rearranging to get to this series. Here is WolframAlpha calculating the series. I believe I have almost proved it (my calculations below), but I'm unsure at the end. I would also like to see if there is some other way of proving it (I don't think there is, but it would be cool). My "proof" (not sure if it is right): \begin{align} \sum_{n=2}^{\infty} \frac{(-1)^{n}}{n}\zeta(n) & =\sum_{n=2}^{\infty}\frac{(-1)^{n}}{n}\sum_{k=1}^{\infty}\frac{1}{k^n} \\\\ & = \sum_{n=2}^{\infty}\sum_{k=1}^{\infty}\frac{(-1)^{n}}{n}\frac{1}{k^n}\end{align} Here I interchange the summations (is it possible to do so?): \begin{align} \sum_{n=2}^{\infty}\sum_{k=1}^{\infty}\frac{(-1)^{n}}{n}\frac{1}{k^n} & = \sum_{k=1}^{\infty}\sum_{n=2}^{\infty}\frac{(-1)^{n}}{n}\frac{1}{k^n} \\\\ & = \sum_{k=1}^{\infty} \left(\frac{1}{k} + \sum_{n=1}^{\infty}\frac{(-1)^{n}}{n}\frac{1}{k^n}\right)\end{align} Recall the Taylor series for $\ln(1+x)$: $$\ln(1+x) = \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}x^n$$ By plugging in $\frac{1}{x}$, changing $x$ to $k$ and multiplying by $-1$ on both sides we get: $$-\ln\left(\frac{k+1}{k}\right) = \sum_{n=1}^{\infty}\frac{(-1)^{n}}{n}\frac{1}{k^n}$$ ...which is exactly what we need. 
So plugging the result into the series above we get: \begin{align} \sum_{k=1}^{\infty} \left(\frac{1}{k} + \sum_{n=1}^{\infty}\frac{(-1)^{n}}{n}\frac{1}{k^n}\right) & = \sum_{k=1}^{\infty} \left(\frac{1}{k} - \ln\left(\frac{k+1}{k}\right)\right) \\\\ & = \sum_{k=1}^{\infty} \frac{1}{k} - \sum_{k=1}^{\infty}\ln\left(\frac{k+1}{k}\right) \end{align} Recall the definition of the Euler-Mascheroni constant: $$\gamma = \lim_{n\to\infty}(H_n - \ln(n))$$ Now clearly the term $\sum_{k=1}^{\infty} \frac{1}{k}$ is the $H_n$ part of the definition, but here is where I get a little stuck; how does $$\sum_{k=1}^{\infty}\ln\left(\frac{k+1}{k}\right) = \lim_{k\to\infty}\ln(k)$$ Otherwise I think my proof is quite correct, but can anybody help at the end of it?
We can't write $$\sum_{k=1}^{\infty} \left(\frac{1}{k} - \ln\left(\frac{k+1}{k}\right)\right)=\sum_{k=1}^\infty\frac1k-\sum_{k=1}^\infty\ln\left(\frac{k+1}{k}\right)$$ because both of these series are divergent. To fix this issue, we use the limit $$\sum_{k=1}^{\infty} \left(\frac{1}{k} - \ln\left(\frac{k+1}{k}\right)\right) = \lim_{n\to \infty} \sum_{k=1}^{n} \left(\frac{1}{k} - \ln\left(\frac{k+1}{k}\right)\right)$$ $$=\lim_{n\to \infty}\left(\sum_{k=1}^n\frac1k-\sum_{k=1}^n\ln\left(\frac{k+1}{k}\right)\right)$$ $$=\lim_{n\to \infty}\left(H_n-\ln(n+1)\right)$$ $$\overset{n+1=m}{=}\lim_{m\to \infty}\left(H_{m-1}-\ln(m)\right)$$ $$=\lim_{m\to \infty}\left(H_m-\frac1m-\ln(m)\right)$$ $$=\lim_{m\to \infty}\left(H_m-\ln(m)\right)-\lim_{m\to \infty}\frac1m$$ $$=\gamma-0$$
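The limit can be checked numerically; the partial sums $\sum_{k=1}^K \left(\frac1k - \ln\frac{k+1}{k}\right) = H_K - \ln(K+1)$ approach $\gamma \approx 0.57722$ with an error of roughly $1/(2K)$:

```python
import math

K = 200_000
partial = sum(1.0 / k - math.log((k + 1) / k) for k in range(1, K + 1))

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant
error = abs(partial - GAMMA)
```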
{ "language": "en", "url": "https://math.stackexchange.com/questions/3653249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is Hamiltonian path NL? NL is what can be solved by a non-deterministic Turing machine in logspace. Could you non-deterministically "guess" the correct Hamiltonian path in logspace, keeping track of the current vertex (log(n) bits) and a count of how many vertices you've visited (log(n) bits)? Does that mean finding a Hamiltonian path is NL? (This is too simple; what is my mistake?) A Hamiltonian path is a path that visits all vertices in a graph. The concept exists for directed and undirected graphs.
This describes a non-deterministic walk, not a path. You may end up counting the same vertex more than once. Given a start configuration of (v, 0) (meaning you're at vertex v and you've traversed 0 edges), when you're at configuration (w, n-1) (meaning now you're at vertex w and you've traversed n-1 edges), all you know is that you have a v->w walk of length n-1. If there's a Hamiltonian path, you will find it non-deterministically, however, you will not "know" that you've found it by examining the (w, n-1) state. This algorithm decides walks of length n-1.
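The flaw is easy to exhibit concretely. Below, a BFS over the (vertex, steps) configurations, which is exactly the information the log-space machine keeps, accepts a graph with an isolated vertex even though that graph has no Hamiltonian path (a toy sketch of my own):

```python
from collections import deque

# Vertices {0, 1, 2}; the only edge is 0-1, so vertex 2 is isolated
# and no Hamiltonian path exists.  But the walk 0 -> 1 -> 0 has length n-1.
n = 3
edges = {(0, 1), (1, 0)}

def machine_accepts(start):
    # explore configurations (current vertex, edges traversed)
    seen = {(start, 0)}
    queue = deque(seen)
    while queue:
        v, steps = queue.popleft()
        if steps == n - 1:
            return True
        for a, b in edges:
            if a == v and (b, steps + 1) not in seen:
                seen.add((b, steps + 1))
                queue.append((b, steps + 1))
    return False

accepted = any(machine_accepts(v) for v in range(n))
```

The machine "accepts" because the configuration $(0, 2)$ is reachable via the walk $0\to1\to0$, which revisits vertex $0$; the state carries no memory of which vertices were already used.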
{ "language": "en", "url": "https://math.stackexchange.com/questions/3653394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exponents of primes in $n!$ Let $n$ be a natural number and let $V_2, V_3$ and $V_5$ denote the exponents of $2, 3$ and $5$ in $n!$ respectively. Is it true that $(2^{V_2})^2(3^{V_3})^2(5^{V_5})^2>n!$? I have verified it by calculator. How do I prove it mathematically?
This is not true, for example $59!=2^{54}\cdot 3^{27} \cdot 5^{13}\cdot d$ where $d$ is coprime to $2,3$ and $5$ (see factorize 59!), yet $$ (2^{54}\cdot 3^{27} \cdot 5^{13})^2 \not > 59!, $$ see is (2^54 * 3^27 * 5^13)^2 > 59!.
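Both claims are quick to check exactly with Legendre's formula for the exponent of a prime in $n!$:

```python
from math import factorial

def prime_exponent(n, p):
    # Legendre's formula: sum of floor(n / p^i) over i >= 1
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

v2, v3, v5 = (prime_exponent(59, p) for p in (2, 3, 5))
lhs = (2 ** v2 * 3 ** v3 * 5 ** v5) ** 2
counterexample = lhs < factorial(59)
```

Exact integer arithmetic confirms $(v_2, v_3, v_5) = (54, 27, 13)$ and that the squared product falls short of $59!$.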
{ "language": "en", "url": "https://math.stackexchange.com/questions/3653563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove ${n+2 \choose 3}=1\cdot n + 2 \cdot (n - 1) + \ldots + n \cdot 1$? I saw this problem as an exercise in combinatorial identities: Prove that $${n+2 \choose 3}=1\cdot n + 2 \cdot (n - 1) + \ldots + n \cdot 1\,.$$ After giving some time to this, I think that it is quite similar to the identity ${n \choose k} = {n - 1 \choose k - 1} + {n - 1 \choose k}$, but I don't know how to prove this algebraically; can anyone please help me with this? (Note that I am still not sure whether we can use that identity or not; I can also guess that Vandermonde's Identity could be used here.)
I suggest proving it combinatorially. $\binom{n+2}3$ is the number of $3$-element subsets of the set $[n+2]=\{1,2,\ldots,n+2\}$. We can classify those sets by their middle elements: let $\mathscr{A}_k$ be the family of all $3$-element subsets of $[n+2]$ of the form $\{j,k,\ell\}$, where $j<k<\ell$; clearly $$\binom{n+2}3=\sum_k|\mathscr{A}_k|\;.$$ Now prove that $|\mathscr{A}_k|=(k-1)(n+2-k)$ and determine the range of possible values of $k$ to complete the proof.
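The classification above gives $|\mathscr{A}_k| = (k-1)(n+2-k)$ for the middle elements $k = 2, \ldots, n+1$, which is exactly the right-hand side re-indexed; a brute-force check over many $n$:

```python
from math import comb

def rhs(n):
    # 1*n + 2*(n-1) + ... + n*1
    return sum(k * (n + 1 - k) for k in range(1, n + 1))

def middle_element_count(n):
    # sum of |A_k| = (k-1)(n+2-k) over middle elements k = 2, ..., n+1
    return sum((k - 1) * (n + 2 - k) for k in range(2, n + 2))

identity_holds = all(comb(n + 2, 3) == rhs(n) == middle_element_count(n)
                     for n in range(1, 60))
```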
{ "language": "en", "url": "https://math.stackexchange.com/questions/3653681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 0 }
Convergence of a function series I've got the following function series: $$ f(x) = \sum_{n = 0}^{\infty} \frac{e^{nx}-1}{2^ne^{nx}}$$ Is it pointwise convergent and uniformly convergent? What I have done is: I've chosen the series $\sum \frac{1}{2^n}$, whose terms are bigger than those of the given series, and by the Weierstrass M-test I've said that since $\sum \frac{1}{2^n}$ is convergent, the given series converges both pointwise and uniformly. But I'm not sure this is OK. If it's not, how can I do it?
I believe this can be put into the form of a geometric series s.t. $$f(x) = \sum_{n = 0}^{\infty} \bigg(\frac{1}{2}\bigg)^n - \sum_{n = 0}^{\infty} \bigg(\frac{1}{2e^x}\bigg)^n $$ The first term can be evaluated without thought via the formula for the geometric series: $\sum\limits_{n=0}^{\infty} ar^n = \frac{a}{1-r}$, $\forall |r|<1$, $$\implies f(x) = 2 - \sum_{n = 0}^{\infty} \bigg(\frac{1}{2e^x}\bigg)^n$$ The second term is also in the form of a geometric series; however, it is only convergent if $\bigg|\frac{1}{2e^x} \bigg| < 1$. More work is still required, but can you take it from here?
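A quick numeric cross-check of the resulting closed form (valid, as noted, only when $\frac{1}{2e^x} < 1$, i.e. $x > -\ln 2$): partial sums of the original series should match $2 - \frac{1}{1 - 1/(2e^x)}$.

```python
import math

def f_partial(x, terms=60):
    # direct partial sum of the series in the question
    return sum((math.exp(n * x) - 1.0) / (2 ** n * math.exp(n * x))
               for n in range(terms))

def f_closed(x):
    r = 1.0 / (2.0 * math.exp(x))
    assert abs(r) < 1.0, "closed form needs x > -ln 2"
    return 2.0 - 1.0 / (1.0 - r)

max_gap = max(abs(f_partial(x) - f_closed(x)) for x in (0.0, 0.5, 1.0, 3.0))
```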
{ "language": "en", "url": "https://math.stackexchange.com/questions/3653902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The quotient of an integral extension is also an integral extension I would like to prove the following fact: Suppose that $K$ is a field and that $A$ is a ring and an integral extension of $K$. Given a prime ideal $\mathcal{P} \subseteq A$, then the quotient $A / \mathcal{P}$ is also an integral extension of $K$. This is my attempt: every element in $A / \mathcal{P}$ is of the form $[a]$, with $a \in A$. Then, since $A$ is integral over $K$, given $a \in A$ there exists an integral equation for $a$ with coefficients in $K$: $$a^m+k_1a^{m-1}+ \ldots +k_{m}=0, ~~ k_i \in K ~~ \forall i=1,\ldots, m.$$ Passing to equivalence classes in the quotient we get $$[a]^m+[k_1][a]^{m-1}+ \ldots +[k_{m}]=[0].$$ If we show that for any $i$, the only element contained in $[k_i]$ is $k_i$, then the above expression is an integral relation for $[a]$ with coefficients in $K$ and we are done. From here, I suspect that I should proceed in this way: given $k_i' \in [k_i]$, we have $$k_i' - k_i \in K \cap \mathcal{P } ~~ (*)$$ which is a prime ideal since it is the contraction of a prime ideal. But the only prime ideal in $K$ is $(0)$, so $k_i'=k_i$. My problem is probably quite stupid, but I have some difficulty justifying this last step: in $(*)$ I assume that the class of an element of $K$ contains only elements of $K$, but is this statement true at all? How should I justify it?
I think the approach may be too elementary. You want to show that the map $K\to A\to A/\mathcal{P}$ is injective. Morphisms from a field to another ring are always injective. Indeed, let $f:K\to B$ be just about any ring morphism and assume that $f(a)=f(b)$ and $a\ne b$, hence $t:=a-b$ is invertible and so you get the following contradiction: $$ 0 = 0 \cdot f(t^{-1}) = (f(a) - f(b))\cdot f(t^{-1}) = f(t)\cdot f(t^{-1})= f(1) = 1. $$ This should set your mind at ease in a more general sense: Over a field, you can never run into the sort of problem you are trying to avoid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3654074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are strings of length $n^2$ sparse? Let L be the language of strings of length $m$, where $m$ is a perfect square. (So strings of length $1, 4, 9, 16, 25, \dots$ are accepted, other lengths are not.) As $m$ increases, less and less strings are accepted. Is this language sparse? a sparse language is a formal language (a set of strings) such that the complexity function, counting the number of strings of length $n$ in the language, is bounded by a polynomial function of $n$. A symbol is $1$ or $0$. A string is a finite sequence of symbols. For example $7$ in binary = $111$. (Unlike binary integers, strings can be padded with $0$'s and be distinct; $0111 \neq 111$.) A language is a set of strings. A language is sparse if only a polynomial number of its $2^m$ strings are accepted, (where $m$ is the length of the string).
It's not sparse. Sparseness requires the complexity function — the number of strings of length $n$ in the language — to be bounded by a polynomial in $n$. But whenever $n=k^2$ is a perfect square, every one of the $2^n$ strings of length $n$ is in $L$, so the count at that length is $2^n$, which exceeds any fixed polynomial once $n$ is large. The fact that perfect squares become rarer among the integers doesn't help: the polynomial bound must hold at every length, and it fails at every square length.
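In code (my illustration): the complexity function of $L$ is $2^n$ at square lengths and $0$ elsewhere, so no polynomial can bound it.

```python
import math

def count_in_L(n):
    # number of binary strings of length n accepted by L:
    # all 2^n of them when n is a perfect square, none otherwise
    r = math.isqrt(n)
    return 2**n if r * r == n else 0

# at square lengths the count already beats sample polynomial bounds
assert count_in_L(16) == 2**16 > 16**3
assert count_in_L(25) == 2**25 > 25**4
assert count_in_L(5) == 0
```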
{ "language": "en", "url": "https://math.stackexchange.com/questions/3654234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If $H\leq G$ and $x^{2}\in H$ for all $x\in G$, show that $H\lhd G$ and $G/H$ is abelian. I have already shown that $H$ is normal in $G$, but I can't show that $G/H$ is abelian. What I tried was taking $X,Y\in G/H$ and showing that $XYX^{-1}Y^{-1}=1_{G/H}$. If $X=Hg_1$ and $Y=Hg_2$ then $XYX^{-1}Y^{-1}=Hg_1g_2g_1^{-1}g_2^{-1}$. How can I show that the element $g_1g_2g_1^{-1}g_2^{-1}\in H$? Maybe another approach would be easier.
Hint For each $x,y \in G$ you have $xyxy\in H, xx yy \in H$ and hence $$Hxyxy=H=Hxxyy$$
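To see the hint in action, here is a small computational check of the whole statement (my addition), with $G=S_3$ and $H$ the subgroup generated by all squares: $H$ turns out to be $A_3$, it is normal, and every commutator lies in $H$, so $G/H$ is abelian.

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inv(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

G = list(permutations(range(3)))          # the symmetric group S_3

# H = subgroup generated by all squares x^2, x in G
H = {compose(x, x) for x in G}
changed = True
while changed:
    changed = False
    for a in list(H):
        for b in list(H):
            c = compose(a, b)
            if c not in H:
                H.add(c)
                changed = True

# H is normal: g H g^{-1} = H for every g in G
for g in G:
    assert {compose(compose(g, h), inv(g)) for h in H} == H

# G/H is abelian: every commutator x y x^{-1} y^{-1} lies in H
for x in G:
    for y in G:
        assert compose(compose(x, y), compose(inv(x), inv(y))) in H
```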
{ "language": "en", "url": "https://math.stackexchange.com/questions/3654445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding Lay's Proof of Uniqueness of Reduced Echelon Form In Lay's Linear Algebra and Its Applications, he gives a proof of the uniqueness of the reduced echelon form. Consider two row-equivalent matrices U and V in reduced echelon form. He says that the pivot columns of U and V are precisely the nonzero columns that are not linearly dependent on the columns to their left. Since U and V are row equivalent, their columns have the same linear dependence relations. Hence, the pivot columns of U and V appear in the same locations. I'm having a hard time seeing how the bolded statement follows from the previous two statements.
Suppose that the $k$-th column of $U$ is a pivot column; then it is linearly independent of the columns to its left. If the $k$-th column of $V$ were not linearly independent of the columns to its left, it would be linear combination of them, and the first $k$ columns of $V$ would therefore satisfy a linear dependence relation that the first $k$ columns of $U$ do not satisfy. But that is impossible, because $U$ and $V$ are row-equivalent, and their columns therefore satisfy the same linear independence relations: if the first $k$ columns of $V$ are linearly dependent, then so are the first $k$ columns of $U$, and vice versa. Therefore the $k$-th column of $V$ must be linearly independent of the columns to its left and hence be a pivot column. This shows that every pivot column of $U$ must be a pivot column of $V$, and the same argument with the rôles of $U$ and $V$ reversed shows that every pivot column of $V$ must be a pivot column of $U$. It follows that $U$ and $V$ must have exactly the same pivot columns.
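A small computational illustration of the uniqueness statement (my addition): a matrix and a row-equivalent scramble of it reduce to the same RREF, with pivots in the same columns.

```python
from fractions import Fraction

def rref(M):
    # exact reduced row echelon form over the rationals; returns (RREF, pivot columns)
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return M, pivots

A = [[1, 2, 3], [2, 4, 7], [1, 2, 4]]
B = [[2, 4, 7], [3, 6, 10], [1, 2, 3]]   # rows: A2, A1+A2, A1 -- row-equivalent to A

Ua, pa = rref(A)
Ub, pb = rref(B)
assert Ua == Ub and pa == pb == [0, 2]   # same RREF; pivots in columns 1 and 3
```

Column 2 of $A$ is twice column 1, so it is dependent on the columns to its left — and indeed it is not a pivot column in either reduction.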
{ "language": "en", "url": "https://math.stackexchange.com/questions/3654563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does every isometric $*$-isomorphism of $B(\mathcal{H})$ preserve compact operators? Let $\mathcal{H}$ be a Hilbert space, and let $B(\mathcal{H})$ denote the $\mathcal{C}^*$-algebra consisting of bounded linear transformations on $\mathcal{H}$ ($*$ is the adjoint). Now consider an isometric $*$-isomorphism of $B(\mathcal{H})$ onto itself, namely $\phi$. If $K$ is a compact operator, then $\phi(K)$ is also compact. Is this claim true? Here are my thoughts (I guess the claim is right): 1. Maybe we can consider finite-rank operators. If we can prove $\phi$ preserves rank-one operators, then it will preserve finite-rank operators, and thus preserve compact operators by density and continuity. 2. The ideal $\mathcal{K}$ formed by all compact operators is a minimal nonzero closed ideal of $B(\mathcal{H})$, and $\phi(\mathcal{K})$ is a closed ideal of $B(\mathcal{H})$. So if we can prove $\phi(\mathcal{K})\cap \mathcal{K}\neq \{0\}$ (note the intersection always contains $0$), then $\phi(\mathcal{K})=\mathcal{K}$ and the claim is true. I think this question is not too difficult, but I am just stuck. It would also be helpful if you could give me some hints or references. Thanks.
This is a corollary of the fact that every $*$-automorphism of $\mathbb B(\mathcal H)$ is inner, and the hint given by MaoWao can also be used to prove this. I'll expand on this hint a bit. Indeed, if $\phi$ is an automorphism of $\mathbb B(\mathcal H)$, then minimal projections are mapped to minimal projections under $\phi$. Thus to each unit vector $\xi\in\mathcal H$, $\phi(\xi\otimes\xi)$ is also a minimal projection, hence is of the form $(u\xi)\otimes(u\xi)$ for some unit vector $u\xi\in\mathcal H$. Then the map $\xi\mapsto u\xi$ extends to a unitary $u\in\mathbb B(\mathcal H)$, and it follows that $\phi=\operatorname{ad}(u)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3654899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Computing $\int_0^1\frac{1-2x}{2x^2-2x+1}\ln(x)\text{Li}_2(x)dx$ Any idea how to approach $$I=\int_0^1\frac{1-2x}{2x^2-2x+1}\ln(x)\text{Li}_2(x)dx\ ?$$ I came across this integral while I was trying to find a different solution for $\Re\ \text{Li}_4(1+i)$ posted here. Here is how I came across it; using the identity $$\int_0^1\frac{\ln(x)\text{Li}_2(x)}{1-ax}dx=\frac{\text{Li}_2^2(a)}{2a}+3\frac{\text{Li}_4(a)}{a}-2\zeta(2)\frac{\text{Li}_2(a)}{a}$$ multiply both sides by $\frac{a}{3}$, then replace $a$ by $1+i$ and consider the real parts of both sides; we have $$\Re\ \text{Li}_4(1+i)=-\frac16\Re\ \text{Li}_2^2(1+i)+\frac23\zeta(2)\Re\ \text{Li}_2(1+i)+\frac13\Re \int_0^1\frac{(1+i)}{1-(1+i)x}\ln(x)\text{Li}_2(x)dx$$ For the integral, use $\Re\frac{1+i}{1-(1+i)x}=\frac{1-2x}{2x^2-2x+1}$ which gives $I$. What I tried is subbing $1-2x=y$ which gives $$I=\int_{-1}^1\frac{-y}{1+y^2}\ln\left(\frac{1+y}{2}\right)\text{Li}_2\left(\frac{1+y}{2}\right)dy=\int_{-1}^1 f(y)dy=\underbrace{\int_{-1}^0 f(y)dy}_{y\to\ -y}+\int_{0}^1 f(y)dy$$ $$=\int_0^1\frac{y}{1+y^2}\ln\left(\frac{1-y}{2}\right)\text{Li}_2\left(\frac{1-y}{2}\right)dy-\int_0^1\frac{y}{1+y^2}\ln\left(\frac{1+y}{2}\right)\text{Li}_2\left(\frac{1+y}{2}\right)dy$$ I think I made it more complicated. Any help would be appreciated.
$$\int_0^1\frac{\ln(x)}{1+x}\text{Li}_2\left(\frac{x}{1+x}\right)\ dx=3\text{Li}_4(2)+\text{Li}_2(2)\log^22-3\text{Li}_3(2)\log2+6\operatorname{Li}_4\left(\frac12\right)+\frac{21}4\ln2\zeta(3)-\frac{\pi^2}{8}\log^22+\frac{1}{4}\log^42-\frac{29\pi^4}{288}$$ The first integral being known, I deduce $$\int_0^1\frac{x\ln(1+x)}{1+x^2}\text{Li}_2\left(\frac{x}{1+x}\right)\ dx=-\frac{1}{16}\operatorname{Li}_4\left(\frac12\right)+\frac{21}{64}\ln2\zeta(3)-\frac{41}{768} \pi ^2 \log ^2(2)-\frac{1}{96}\log^42+\frac{1609\pi^4}{92160}-\frac{3}{2}\text{Li}_4(2)-\frac{1}{2}\text{Li}_2(2)\log^22+\frac{3}{2}\text{Li}_3(2)\log2$$ Sorry, I couldn't deduce the requested integral itself. Note also that $$3\text{Li}_4(2)+\text{Li}_2(2)\log^22-3\text{Li}_3(2)\log2=-3\operatorname{Li}_4\left(\frac12\right)-\frac{21}8\ln2\zeta(3)-\frac{1}{8}\log^42+\frac{\pi^4}{15}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3655021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Evaluate $\int_0^1\frac{\tan^{-1}ax}{x\sqrt{1-x^2}}\,dx$ Evaluate $$\int_0^1\frac{\tan^{-1}ax}{x\sqrt{1-x^2}}\,dx\,,$$ where $a$ being parameter. I am not able to solve this.
We will use the Taylor series of $\arctan$, so we have: $$f(a)=\int_0^1\frac{\arctan ax}{x\sqrt{1-x^2}}=\sum_{n=0}^\infty{\frac{(-1)^na^{2n+1}}{2n+1}}\biggl(\int_0^1\frac{x^{2n}}{\sqrt{1-x^2}}\biggr)$$ Then we use integration by substitution to calculate the inner integral (by choosing $x=\sin t$): $$\int_0^1\frac{x^{2n}}{\sqrt{1-x^2}}=\int_0^{\pi/2}\sin^{2n}t=W_{2n}$$ And we know how to calculate Wallis' integrals: $$W_{2n}=\frac{\pi}{2}\frac{(2n)!}{2^{2n}(n!)^2}$$ So we have: $$\int_0^1\frac{\arctan ax}{x\sqrt{1-x^2}}=\frac{\pi}{2}\sum_{n=0}^\infty{\frac{(-1)^n(2n)!}{4^n(2n+1)(n!)^2}a^{2n+1}}$$ Next, we use the Taylor series of $\arcsin$ (see this link https://fr.wikipedia.org/wiki/S%C3%A9rie_de_Taylor), so we have: $$f(a)=\frac{\pi}{2}\frac{\arcsin( {ia})}{{i}}=\frac{\pi}{2}\ln|a+\sqrt{1+a^2}|$$
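A quick numerical check of the closed form (my addition), using the same substitution $x=\sin t$ to tame the endpoint singularity at $x=1$; the transformed integrand tends to $a$ as $t\to 0$, so a plain midpoint rule works.

```python
import math

def lhs(a, steps=100000):
    # ∫_0^{π/2} arctan(a sin t) / sin t dt, composite midpoint rule
    h = (math.pi / 2) / steps
    return sum(math.atan(a * math.sin((i + 0.5) * h)) / math.sin((i + 0.5) * h)
               for i in range(steps)) * h

def rhs(a):
    return (math.pi / 2) * math.log(a + math.sqrt(1 + a * a))

for a in (0.5, 1.0, 2.0):
    assert abs(lhs(a) - rhs(a)) < 1e-6
```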
{ "language": "en", "url": "https://math.stackexchange.com/questions/3655194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Suppose $A \subseteq \mathbb R$ is countable. Show $\exists x\in\Bbb R$ s.t. $A \cap (x+A) =\emptyset$. Suppose $A \subseteq \mathbb R$ is countable. Show $\exists x\in\Bbb R$ s.t. $A \cap (x+A) =\emptyset$. (Here, $x + A = \{ x + a : a \in A \}$.) I'm unsure how to proceed. I thought about taking the smallest distance between elements of the set, but I don't think we can take a smallest distance in an infinite set, even a countable one (say the set were the rationals). I've tried the contrapositive, i.e. supposing there is no such $x$, but that got me nowhere. Any hints?
One can show something stronger. Let $I$ be an uncountable set and let $(f_λ)_{λ∈I}$ be a family of injective functions from an uncountable set $X$ to itself such that $f_a(x)=f_b(x)⇒a=b$; then for every countable set $A$, there exists some $i∈I$ such that $A\cap f_i[A]=\emptyset$. Take $I=X=\Bbb R$ and $f_a(x)=a+x$ to get your question. To prove this, let $x∈X$. If $x\in f_a[A]\cap f_b[A]$ for $a\ne b$, then $f_a^{-1}(x)≠f_b^{-1}(x)$; because $A$ is countable, the set $I_x=\{λ∈I\mid x\in f_λ[A]\}$ is therefore at most countable, so $J=\bigcup_{x∈A}I_x$ is a countable union of countable sets, hence countable. Now take $\kappa\in I\setminus J$. If $A∩f_\kappa[A]≠\emptyset$, there must be $y∈A$ such that $y∈f_\kappa[A]$, so $\kappa\in J$ — a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3655301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is there another type of number more advanced than complex numbers? I am a student and I was introduced to complex numbers about a year ago. I am curious to know whether there is another type of number system more advanced than complex numbers. So the way I was introduced to complex numbers was by being told the history of numbers. First, there were integers, which were used to count things (such as ten fingers, two eyes, and so on.) Then we came up with decimals that came in handy for example when we say "the glass is half full." Next, we used negative numbers. That was handy in for example in accounting to show debt. Then we came up with imaginary numbers, which are used in, for example, quantum mechanics. So my question is are there any other types of number systems? Surely, there must be infinite types of numbers. This is because I imagine the real numbers to take up a dimension (similar to the x-axis) and imaginary numbers to take another dimension (similar to the y-dimension.) As a result, I am thinking there must be an infinite type of number because we can add as many dimensions as we like. Am I correct to assume this?
There are many kinds of numbers we can obviously say are as "advanced", or more advanced. The Cayley-Dickson construction lets you double the dimension as often as you like, e.g. the already mentioned quaternions are a $4$-dimensional generalization of complex numbers. There are also some which aren't comparable to complex numbers as "more advanced". For example, $\Bbb C_p$ "combines" $\Bbb Q_p$ and $\Bbb C$, so how does $\Bbb Q_p$ compare to $\Bbb C$?
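The Cayley–Dickson doubling mentioned above is easy to make concrete (my sketch, using one common sign convention for the product; other sources differ by signs): a number at level $k$ is a pair of numbers at level $k-1$, and the same three functions reproduce $\mathbb C$ from $\mathbb R$ and the quaternions from $\mathbb C$.

```python
def conj(x):
    # conjugation: flip the sign of the "imaginary" half, recursively
    if isinstance(x, (int, float)):
        return x
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, (int, float)):
        return -x
    return (neg(x[0]), neg(x[1]))

def add(x, y):
    if isinstance(x, (int, float)):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    # Cayley–Dickson product (one common convention):
    # (a, b)(c, d) = (ac - d̄b, da + bc̄)
    if isinstance(x, (int, float)):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# level 1: pairs of reals are complex numbers; i = (0, 1) and i² = -1
assert mul((0, 1), (0, 1)) == (-1, 0)

# level 2: pairs of complex numbers are quaternions
i = ((0, 1), (0, 0))
j = ((0, 0), (1, 0))
k = ((0, 0), (0, 1))
assert mul(i, j) == k and mul(j, i) == neg(k)      # non-commutative!
assert mul(j, j) == ((-1, 0), (0, 0))
```

Doubling again gives the octonions, and so on — with first commutativity, then associativity, and eventually even the absence of zero divisors being lost along the way.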
{ "language": "en", "url": "https://math.stackexchange.com/questions/3655486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral $\frac{2}{876} \int_{0}^{T} \left[ x^{-3/8} (1+b\,x)^{3/4} (1+ab\,x)^{-3} e^{(c\,x^2)}+\log{(5/4)}\right]\,\mathrm dx$ I am trying to evaluate the definite integral: $$\frac{2}{876} \int_{0}^{T} \left[ x^{-3/8} (1+b\,x)^{3/4} (1+ab\,x)^{-3} e^{(c\,x^2)}+\log{(5/4)}\right]\,\mathrm dx$$ $a$, $b$, $c$ are all positive ($a=3.8/3777$, $b=3777$, $c=4.8\times 10^{-3}$). I have plotted the whole integrand. In the plot $\tau$ corresponds to $x$ in the integral. The parameter $\eta$ can be ignored. I have tried the Leibniz method (also called Feynman's integration trick) but without any success. Maybe with a clever change of variable it can be expressed in terms of (combinations of) special functions... or at least we could get some bounds? Actually I have a different way to estimate the integral, since it's a physics problem and I can guess the (upper limit) of it. T should be $T\sim 57$ (months), I hope. Thanks.
"abandon hope all ye who enter here", as Dante Alighieri wrote in "The Divine Comedy". Using your numbers, you have $$f(x)= \frac{1}{438} \left(\frac{ (1+3777 x)^{3/4}}{x^{3/8} \left(1+\frac{19 }{5}x\right)^3}\,e^{\frac{3 x^2}{625}}+\log \left(\frac{5}{4}\right)\right)$$ and it seems to me that you want to find the values of $T$ such that $$F(T)=\int_0^T f(x) \,dx = \eta $$ There is absolutely no hope of getting a closed-form formula for the result and, for whatever you could need to do, all the work needs numerical methods. What is interesting is to look at the plot of $G(T)=\log(F(T))$ as a function of $T$; there are two parts: an almost horizontal part as long as $\eta < 1$, and for $\eta >1$ $G(T)$ exhibits an almost parabolic shape. For solving the equation, Newton's method is very effective for finding the zero of the equation $$G(T)-\log(\eta)=0$$ Using in particular the fundamental theorem of calculus, the iterates will be given by $$T_{n+1}=T_n-\frac{F(T_n)}{f(T_n)}\log \left(\frac{F(T_n)}{\eta }\right)$$ Let us try with $T_0=123$ and $\eta=1$. The iterates will be $$\left( \begin{array}{cc} n & T_n \\ 0 & 123.000 \\ 1 & 74.4050 \\ 2 & 56.6963 \\ 3 & 53.1111 \\ 4 & 52.8262 \\ 5 & 52.8235 \end{array} \right)$$ which is quite fast in spite of a rather poor starting estimate. In fact it seems that a rather good estimate could be $$T(\eta)=52.8235 + a \big[\log(\eta)\big]^b$$ A quick and dirty nonlinear regression gives (with $R^2=0.999906$) $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & 9.08300 & 0.03959 & \{9.00532,9.16069\} \\ b & 0.54957 & 0.00055 & \{0.54848,0.55066\} \\ \end{array}$$ * *For $\eta=10^{100}$, the estimate is $233.304$ while the solution is $227.851$ *For $\eta=10^{1000}$, the estimate is $692.559$ while the solution is $696.057$
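For completeness, here is a small numerical sketch of this Newton iteration (my addition; the variable names and the quadrature scheme are my own choices). The $x^{-3/8}$ singularity at $0$ is removed with the substitution $x=t^{8/5}$, since $x^{-3/8}\,dx=\frac85\,dt$, before applying a midpoint rule.

```python
import math

a, b, c = 3.8 / 3777, 3777.0, 4.8e-3

def f(x):
    # the integrand, including the 2/876 = 1/438 prefactor (x > 0)
    return (x**-0.375 * (1 + b * x)**0.75 * (1 + a * b * x)**-3
            * math.exp(c * x * x) + math.log(5 / 4)) / 438

def F(T, n=4000):
    # ∫_0^T f(x) dx with x = t^{8/5}, dx = (8/5) t^{3/5} dt (kills the singularity)
    h = T**0.625 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += f(t**1.6) * 1.6 * t**0.6
    return total * h

def solve_T(eta, T0=123.0, tol=1e-6, itmax=50):
    # Newton on G(T) = log F(T) - log eta:  T <- T - F(T)/f(T) * log(F(T)/eta)
    T = T0
    for _ in range(itmax):
        Fn = F(T)
        step = Fn / f(T) * math.log(Fn / eta)
        T = max(T - step, 1e-3)     # crude safeguard against overshooting
        if abs(step) < tol:
            break
    return T

T1 = solve_T(1.0)
assert abs(F(T1) - 1.0) < 1e-3      # Newton did land on F(T) = eta
```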
{ "language": "en", "url": "https://math.stackexchange.com/questions/3655763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Difficulty understanding proof by contradiction Here's my understanding of proof by contradiction based on what I've read and I've been taught. We show $\neg P \implies (c \land \neg c)$ is always true. This is done by assuming $\neg P$ is true. Then, we realize that $(c \land \neg c)$ is logically equivalent to $F$ so we've just shown that $\neg P \implies F$ is always true. But, examining the $\implies$ truth table, we realize that $\neg P$ must be false to ensure $\neg P \implies F$ is always true. Therefore, we conclude that $\neg P$ is false, and thus $P$ is true. The conundrum I have is that our analysis is based on the assumption that $\neg P$ is true. Then later we conclude that $\neg P$ is false. So why do we accept that $\neg P$ is false even though we assume $\neg P$ is true?
If my dog is in the bathroom, I will hear him bark there. I do not hear him bark there. Hence he is not in the bathroom. Nothing more profound than that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3655947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
How can I solve this probability problem? A most unusual Irish pub serves only Guinness and Harp. The owner of this pub observes that 85% of his male customers drink Guinness as opposed to 35% of his female customers. On any given evening, this pub owner notes that there are three times as many males as females. What is the probability that the person sitting beside the fireplace drinking Guinness is female? So what I did was: Males that drink Guinness are 0.85. Females that drink Guinness are 0.35. There are also 0.75 males and 0.25 females. So I took everyone that isn't a female drinking Guinness by doing this: 0.75 + (0.25 * (1-0.35)) = 0.9125. So I subtract this from 1: 1 - 0.9125 = 0.0875. And so I think the answer is 0.0875 or 8.75%. Is this correct or am I doing something wrong?
What is wrong with your attempt has been elaborated in the comments: you calculated only the probability that the person is female and drinking Guinness. You have not taken into account the fact that it was known that the person was drinking Guinness. This will increase the probability significantly. See, the person you see is drinking Guinness. Therefore, the desired probability is the probability that the person is female GIVEN that he/she is drinking Guinness. Thus, if $F$ denotes the event "the person is female" and $G$ denotes the event "the person is drinking Guinness", then we need $P(F | G) = \frac{P(F \cap G)}{P(G)}$. To calculate $P(F \cap G)$, we need to interpret what we have. It is known that $85$% of the male customers drink Guinness, so $P(G|M) = 0.85$. Similarly, $P(G | F) = 0.35$. Additionally, $P(M) = 0.75$ and $P(F) = 0.25$. From here, $P(F \cap G) = P(G | F)P(F) = 0.0875$ (as per your calculation). On the other hand, $P(G)$ is what we don't know. We find it using the law of total probability: condition on whether the person is a man or a woman, and multiply by the respective probability of the person being a man or a woman. That is: $$ P(G) = P(G | F)P(F) + P(G |M)P(M) = 0.0875 + 0.85 \times 0.75 = 0.725 $$ Therefore, the actual answer is $\frac{0.0875}{0.725} = \frac{7}{58}$.
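The computation is short enough to sanity-check in a few lines (my addition):

```python
P_M, P_F = 0.75, 0.25                          # three times as many males as females
P_G_given_M, P_G_given_F = 0.85, 0.35

P_G = P_G_given_M * P_M + P_G_given_F * P_F    # law of total probability
P_F_given_G = P_G_given_F * P_F / P_G          # Bayes' rule

assert abs(P_G - 0.725) < 1e-12
assert abs(P_F_given_G - 7 / 58) < 1e-12
```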
{ "language": "en", "url": "https://math.stackexchange.com/questions/3656123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing a collection of maps is a projector Suppose I have a collection of maps defined as follows: for $d_{n}:C_{n} \rightarrow C_{n-1}$ and $s_{n}: C_{n} \rightarrow C_{n+1}$ I have : $t_{n}=1-f'_{n} -f_{n}$ , where $f_{n}=s_{n-1}d_{n}$ and $f'_{n}=d_{n+1}s_{n}$. Furthermore I am given that $s_{n}$ is a collection of maps which satisfies $s_{n+1}s_{n}=0$. I have already showed that $t^{2}_{n}=t_{n}$. * *How can I show that such map is chain homotopic to the identity map? *If this is the case, does that imply that its image is then itself, since it is in some sense an identity mapping?
$\text{id} - t_n = f_n + f_n' = s_{n-1}d_n + d_{n+1}s_n$. Therefore $s_\#$ is the required chain homotopy between $t_n$ and $\text{id}$. The answer to your second question is no (in general). For example, let $X$ be a topological space such that $X = \text{Int}A \cup \text{Int} B$, let $\iota$ be the inclusion $C_n(A + B) \hookrightarrow C_n(X)$, where $C_n(A+B)$ is the chain group consisting of singular simplices with images either entirely in $A$ or entirely in $B$ and $C_n(X)$ is the singular chain group of $X$. Then $\iota$ is a chain homotopy equivalence (this is called the barycentric subdivision lemma), with a chain homotopic inverse $\rho : C_n(X) \to C_n(A+B)$. Note that $\iota \circ \rho : C_n(X) \to C_n(X)$ is homotopic to the identity. The image of $\iota \circ \rho$ is obviously not $C_n(X)$, unless $A = B = X$. What is true, however, is that $t_n$ induces the identity map at the level of homology $H_n(C_*) \to H_n(C_*)$. So, the image will be $H_n(C_*)$. This is because if two maps are chain homotopic then the maps induced by them at the homology level are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3656518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Adjusting a $y = \sin(x)$ graph slightly I have a function where $$ y = m\sin\Bigl(\frac{x^{1.1}+30}{0.7d}\Bigr)^{2.2} $$ where $m = 45$ and $d = 120$ (constants) I would like the turning point at the top of the sin curve to peak at 0.75d (90). I would like the starting and finishing values to remain the same. The current function is plotted in blue. I would like a line similar to the red one. Feel free to ditch the sin function, if another method would be better Thanks for any help.
Consider the curve $$y=\frac{(x-4)^4}{12}-\frac{kx^2}{2}+cx+d,$$ where the constants are given by $d=5-\frac{90^4}{12}$ and $$240=2×30^4-12k(120^2)+24(120c)+24d,\\2M=2d+180c-8100k,$$ with $M$ being the maximum value you want the function to attain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3656661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $M$ is finitely generated as an $R$-module, is $M$ finitely generated as an $S$-module, and is $S$ finitely generated as an $R$-module? Let $S$ be a commutative ring, $R$ a subring of $S$, and $M$ a non-zero $S$-module. If $M$ is finitely generated as an $R$-module, do we have that $M$ is finitely generated as an $S$-module, and $S$ is finitely generated as an $R$-module? I have proven the converse to this statement (i.e. $M$ finitely generated as an $S$-module and $S$ finitely generated as an $R$-module together imply that $M$ is finitely generated as an $R$-module), but I have no idea whether the other statement is true or not. My guess would be no, but I cannot think of a counterexample! Can anyone provide some tips please?
If $M$ is finitely generated as an $R$-module, then since $R$ is a subring of $S$ we have that $M$ is finitely generated as an $S$-module (we just happen to be able to restrict the coefficients to be only elements of $R$ if we want, which are still elements of $S$). But $S$ need not be finitely generated as an $R$-module. For example, we could use $S=\mathbb Z^\omega$, $R$ the subring isomorphic to $\mathbb{Z}$ where the elements in each coordinate are the same, $M = \mathbb{Z}$ and $S$ acts on $M$ by multiplying by the first coordinate. Then $M$ is finitely generated as an $R$-module (and hence as an $S$-module) but clearly $S$ is not finitely generated as an $R$-module.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3656800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Weak*-convergence to 0 on L^\infty and convergence almost everywhere I am stuck with something standard... Let $f_n \in L^\infty(\mathbb{R}^d)\cap L^1(\mathbb{R}^d)$, $n\geq1$, be such that $$ \sup_{n\geq1} \|f_n\|_{L^\infty(\mathbb{R}^d)}<\infty $$ and $$ \lim_{n\to\infty} \int_{\mathbb{R}^d}|f_n(x)-f(x)| g(x)\,dx=0, \qquad \text{for all } 0\leq g\in L^1(\mathbb{R}^d). $$ Is it correct that $f_n(x)\to f(x)$ for a.a. $x\in\mathbb{R}^d$ ? It can be reformulated as follows: if a sequence from $L^\infty$ converges to 0 in the weak-* topology, does it converge to 0 almost everywhere? (Update: in the reformulation, I omitted that $f_n\in L^1$ itself, not sure whether this is relevant.)
I think it is not true in general. Furthermore, you cannot control all elements of $(L^\infty)^*$ using $L^1$, because $L^1$ is not reflexive. Finally, from a bounded sequence in $L^\infty$ you can in general only extract a weak*-convergent subsequence, because the closed unit ball is weak* compact; here $L^\infty=(L^1)^*$.
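To make the failure concrete, here is a classical counterexample (my addition, not from the answer above): $f_n(x)=\sin(nx)$ on $[0,2\pi]$. By the Riemann–Lebesgue lemma the pairings against any $g\in L^1$ tend to $0$, so $f_n\to 0$ weak*, yet $\sin(nx)$ converges almost nowhere. Numerically, with $g(x)=e^{-x}$:

```python
import math

def pairing(n, steps=200000):
    # ⟨f_n, g⟩ = ∫_0^{2π} sin(n x) e^{-x} dx, composite midpoint rule
    h = 2 * math.pi / steps
    return sum(math.sin(n * (i + 0.5) * h) * math.exp(-(i + 0.5) * h)
               for i in range(steps)) * h

# the pairings shrink: weak* convergence to 0 against this g
assert abs(pairing(5)) > 0.1
assert abs(pairing(50)) < 0.05
assert abs(pairing(500)) < 0.005

# ...but the pointwise values at x = 1 never settle down
vals = [math.sin(n * 1.0) for n in range(100, 200)]
assert max(vals) - min(vals) > 1.0
```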
{ "language": "en", "url": "https://math.stackexchange.com/questions/3656938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
If $x$ is real find the maximum possible value of $10^x-100^x$ According to the person who gave this question it apparently has something to do with the range of a quadratic expression. But I can't see the connection with a quadratic equation. So I tried to solve this by finding the maxima of the expression. But I don't know how to do it as it's an exponential function. All I can infer from this is that $x$ must be negative.
$$F(x)=10^x-100^x=10^x(1-10^x)$$ Let $$f(a)=a(1-a)$$ $$f'(a)=1-2a$$ The maximum of $f(a)$ is $$f\left(\frac 12\right)=\frac 14.$$ Thus, the maximum of $F(x)$ is $\frac 14$, attained for $x$ such that $$10^x=e^{x\ln(10)}=\frac 12,$$ i.e. $x=-\log_{10} 2$.
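A quick numerical confirmation (my addition):

```python
import math

def F(x):
    return 10**x - 100**x

x_star = math.log10(0.5)          # the point where 10^x = 1/2
assert abs(F(x_star) - 0.25) < 1e-12

# no nearby point does better
for dx in (-0.2, -0.05, 0.05, 0.2):
    assert F(x_star + dx) < 0.25
```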
{ "language": "en", "url": "https://math.stackexchange.com/questions/3657080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Product rule for matrix-valued and vector-valued functions Given $g: \mathbb{R}^n \to \mathbb{R}^{n \times m}$ and $f: \mathbb{R}^n \to \mathbb{R}^{m}$ how can we compute $\nabla (g(x)f(x))$?
The $(i,j)$-term of $\nabla(g(x)f(x))$ is given by \begin{align} \partial_i((g(x)f(x))_j) &= \partial_i\left(\sum_{k=1}^n g_{kj}(x)f_k(x)\right)\\ &= \sum_{k=1}^n (\partial_ig_{kj})(x)f_k(x) + \sum_{k=1}^n g_{kj}(x)(\partial_i f_k)(x)\\ &= ((\partial_ig)(x)f(x))_j + (g(x)(\partial_i f)(x))_j \end{align} so \begin{align*} \nabla(g(x)f(x)) &= \begin{bmatrix} \partial_i((g(x)f(x))_j)\end{bmatrix}_{i,j=1}^n \\ &= \begin{bmatrix} (\partial_1 g)(x)f(x)\\ \vdots \\ (\partial_n g)(x)f(x)\end{bmatrix} + \begin{bmatrix} g(x)(\partial_1f)(x)\\ \vdots \\ g(x)(\partial_nf)(x)\end{bmatrix}\\ &= \begin{bmatrix} (\partial_1 g)(x)f(x)\\ \vdots \\ (\partial_n g)(x)f(x)\end{bmatrix} + g(x)(\nabla f(x))^T \end{align*} where $(\partial_i g)(x)$ is $\partial_i$ applied to the matrix $g$ elementwise.
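A finite-difference check of this formula (my addition, with example maps of my own choosing, following the index convention above, $(g(x)f(x))_j=\sum_k g_{kj}(x)f_k(x)$, with $n=m=2$):

```python
def g(x):
    a, b = x
    return [[a * b, a + b],
            [a * a, b ** 3]]

def f(x):
    a, b = x
    return [a * b * b, a - b]

def dg(i, x):
    # hand-computed elementwise partials of g for this example
    a, b = x
    return [[b, 1.0], [2 * a, 0.0]] if i == 0 else [[a, 1.0], [0.0, 3 * b * b]]

def df(i, x):
    a, b = x
    return [b * b, 1.0] if i == 0 else [2 * a * b, -1.0]

def prod(x):
    # (g(x)f(x))_j = sum_k g_{kj}(x) f_k(x)
    G, F = g(x), f(x)
    return [sum(G[k][j] * F[k] for k in range(2)) for j in range(2)]

def grad_formula(x):
    # row i: ((∂_i g)(x) f(x))_j + (g(x) (∂_i f)(x))_j
    out = []
    for i in range(2):
        Gi, F, G, Fi = dg(i, x), f(x), g(x), df(i, x)
        out.append([sum(Gi[k][j] * F[k] + G[k][j] * Fi[k] for k in range(2))
                    for j in range(2)])
    return out

def grad_numeric(x, h=1e-6):
    # central differences on the product itself
    out = []
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append([(prod(xp)[j] - prod(xm)[j]) / (2 * h) for j in range(2)])
    return out

x0 = [1.3, 0.7]
exact, approx = grad_formula(x0), grad_numeric(x0)
for i in range(2):
    for j in range(2):
        assert abs(exact[i][j] - approx[i][j]) < 1e-6
```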
{ "language": "en", "url": "https://math.stackexchange.com/questions/3657178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that if $p$ is an odd prime such that $p\mid(x^2+1)$ for some $x\in\mathbb{Z}$, then $p\equiv 1 \pmod 4$ I've tried proving the statement using that $x^2\equiv -1\pmod p$, and someone told me that this actually just implies that $p\equiv 1\pmod 4$. But I don't see it. Can anyone help me with this problem?
Hint: Let $p = 2k + 1$. Consider $ x^{p-1} \pmod{p}$. Can you conclude that $k$ must be even? We are given that $ x^2 \equiv -1 \pmod{p}$, so $ 1 \equiv x^{p-1} \equiv (-1)^{k} \pmod{p}$.
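The statement is easy to test numerically (my addition): every odd prime dividing some $x^2+1$ is $\equiv 1 \pmod 4$.

```python
def primes_upto(n):
    return [p for p in range(3, n) if all(p % q for q in range(2, int(p**0.5) + 1))]

for p in primes_upto(500):
    if any((x * x + 1) % p == 0 for x in range(p)):
        assert p % 4 == 1          # p | x^2 + 1 forces p ≡ 1 (mod 4)
```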
{ "language": "en", "url": "https://math.stackexchange.com/questions/3657303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving the tangent map is linear Let $M$ and $N$ be vector spaces and define the map $f:M \rightarrow N$, which is a smooth diffeomorphism. Let $Tf$ be the induced mapping between the tangent spaces of $M$ and $N$. Then it is said that the map $$Tf: T_pM \rightarrow T_{f(p)}N$$ is a linear mapping. This is where the confusion comes in. It may be silly, but I don't see how this is a linear map. I know I have to check the conditions for linearity, but this looks like an abstract object, so I'm not sure how to apply the conditions to check the linearity of the map $Tf$. Thanks!
Yes, this is an abstract object but it has a specific definition. Apply Taylor's theorem $$f(x+\lambda y)=f(x)+\frac{\partial f}{\partial x}\cdot (\lambda y)+\mathcal{o}\left(|\lambda|\right)$$ So your object $$Tf:=\frac{\partial f}{\partial x}$$ Do you see why this is a linear operator?
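A numerical illustration (my addition, with an example map of my own choosing): the directional derivative $v\mapsto Df(x)\,v$, extracted here by central differences, is linear in $v$ — which is exactly the linearity of $Tf$.

```python
import math

def f(x):
    a, b = x
    return (math.sin(a) * b, a * a + math.exp(b))

def Tf(x, v, h=1e-6):
    # directional derivative of f at x along v, by central differences
    fp = f((x[0] + h * v[0], x[1] + h * v[1]))
    fm = f((x[0] - h * v[0], x[1] - h * v[1]))
    return ((fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h))

x = (0.4, -0.9)
u, v = (1.0, 2.0), (-0.5, 1.5)

# linearity: D_{2u+3v} f(x) ≈ 2 D_u f(x) + 3 D_v f(x)
combined = Tf(x, (2 * u[0] + 3 * v[0], 2 * u[1] + 3 * v[1]))
Du, Dv = Tf(x, u), Tf(x, v)
for j in range(2):
    assert abs(combined[j] - (2 * Du[j] + 3 * Dv[j])) < 1e-4
```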
{ "language": "en", "url": "https://math.stackexchange.com/questions/3657602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of $\sum_{n=1}^{\infty}\frac{(-1)^{T_n+1}}{n},$ where $T_n$ is the $n$th Triangular number Consider the series $$\sum_{n=1}^{\infty}\frac{(-1)^{\frac{n(n+1)}{2}+1}}{n}=1+\dfrac12-\dfrac13-\dfrac14+\dfrac15+\dfrac16-\cdots.$$ This is clearly not absolutely convergent. On the other hand, obvious choice, alternating series does not work here. Seems like the partial sum sequence is bounded but it is not monotone. How can we prove that this series converges? and, where does it converge to?
$$1\color{red}{+\frac12}\color{blue}{-\frac13}-\frac14+\frac15\color{cyan}{+\frac16}\color{magenta}{-\frac17}+\cdots$$ $$=1\color{red}{-\frac12+2\cdot\frac12}\color{blue}{+\frac13-2\cdot\frac13}-\frac14+\frac15\color{cyan}{-\frac16+2\cdot\frac16}\color{magenta}{+\frac17-2\cdot\frac17}+\cdots$$ $$=1-\frac12+\frac13-\frac14+\frac15-\cdots+2\left(\frac12-\frac13+\frac16-\frac17+\cdots\right)$$ $$=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}+2\sum_{n=1}^\infty\frac{1}{4n-2}-\frac{1}{4n-1}$$ $$=\ln(2)+\frac12\sum_{n=1}^\infty\frac{1}{n-1/2}-\frac{1}{n-1/4}$$ $$=\ln(2)+\frac12\left(H_{-1/4}-H_{-1/2}\right)$$ $$=\ln(2)+\frac12\left(\frac{\pi}{2}-\ln(2)\right)$$ $$=\frac{\pi}{4}+\frac12\ln(2)$$ note that we used the series representation of the harmonic number $$H_a=\sum_{n=1}^\infty\frac{1}{n}-\frac{1}{n+a}\Longrightarrow H_a-H_b=\sum_{n=1}^\infty\frac{1}{n+b}-\frac{1}{n+a}$$ and we also used the results $H_{-1/4}=\frac{\pi}{2}-3\ln(2)$ and $H_{-1/2}=-2\ln(2)$ which can be obtained from the integral representation of the harmonic number $H_a=\int_0^1\frac{1-x^a}{1-x}dx$.
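A numerical check of the value (my addition); convergence is slow — the tail after $N$ terms is of order $1/N$ — so we take a million terms:

```python
import math

def partial_sum(N):
    s = 0.0
    for n in range(1, N + 1):
        T = n * (n + 1) // 2
        s += (1.0 if T % 2 else -1.0) / n    # sign is (-1)^{T_n + 1}
    return s

target = math.pi / 4 + math.log(2) / 2
assert abs(partial_sum(10**6) - target) < 1e-4
```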
{ "language": "en", "url": "https://math.stackexchange.com/questions/3657751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Convergence of a sequence of fixed points Let $f:[0,1]\to [0,1]$ and $g:[0,1]\to [0,1]$ be two continuous functions, each having a unique fixed point $x_f$ and $x_g$. Assume $\Vert f-g\Vert_\infty<\epsilon$. Is it possible to say something about $\vert x_f-x_g\vert$, maybe under some additional hypothesis on $f$ and $g$? The problem I'm interested in is more of the following form: $f_n$ is a sequence of functions (each of these functions has a unique fixed point $x_f^n$) uniformly converging to $g$, in the sense that $\Vert f_n-g\Vert_\infty\leq \epsilon(n)$, with $\epsilon(n)\to 0$. I would like to prove that $x_f^n$ converges to $x_g$ as $n\to\infty$.
I'll assume $g$ is continuous. If $x_g$ is the unique fixed point of $g$ on $[0,1]$, then for any $\delta > 0$ we have $\epsilon = \inf \{|g(x)-x|: x \in [0,1], |x - x_g| \ge \delta\} > 0$. If $f_n \to g$ uniformly, there is $N$ such that $|f_n(x) - g(x)| < \epsilon$ for all $n > N$ and $x \in [0,1]$, and then if $|x - x_g| \ge \delta$ we'd have $|f_n(x) - x| \ge |g(x) - x| - |f_n(x) - g(x)| > 0$, so $x$ is not a fixed point of $f_n$. Thus if $f_n$ is to have a fixed point $x_{f_n}$, we must have $|x_{f_n} - x_g| < \delta$. This shows $x_{f_n} \to x_g$ as $n \to \infty$.
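A concrete illustration (with made-up contractions, purely for demonstration): take $g(x)=x/2+1/4$ on $[0,1]$, whose unique fixed point is $x_g=1/2$, and $f_n(x)=g(x)+\frac{1}{10n}$, so that $\|f_n-g\|_\infty=\frac{1}{10n}\to0$. Solving $x=f_n(x)$ gives $x_{f_n}=\frac12+\frac{1}{5n}$, which indeed approaches $\frac12$:

```python
def fixed_point(f, x0=0.0, iters=200):
    # f is a contraction with factor 1/2, so iteration converges fast
    x = x0
    for _ in range(iters):
        x = f(x)
    return x

g = lambda x: x / 2 + 0.25
x_g = fixed_point(g)                                   # -> 0.5
x_fn = [fixed_point(lambda x, n=n: x / 2 + 0.25 + 1 / (10 * n))
        for n in (1, 10, 100, 1000)]                   # 0.7, 0.52, 0.502, 0.5002
```

The computed fixed points shrink toward $x_g=0.5$ exactly as the theorem predicts.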
{ "language": "en", "url": "https://math.stackexchange.com/questions/3658235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
show this inequality $\sum_{cyc}\frac{1}{5-2xy}\le 1$ let $x,y,z\ge 0$ be such that $x^2+y^2+z^2=3$; show that $$\sum_{cyc}\dfrac{1}{5-2xy}\le 1$$ My attempt: it suffices to show $$\sum_{cyc}\dfrac{2xy}{5-2xy}\le 2,$$ and $$\sum_{cyc}\dfrac{2xy}{5-2xy}\le\sum_{cyc}\dfrac{(x+y)^2}{\frac{5}{3}z^2+\frac{2}{3}x^2+\frac{2}{3}y^2+(x-y)^2}\le\sum\dfrac{3(x+y)^2}{2(x^2+y^2)+5z^2}.$$ Next I wanted to use Cauchy-Schwarz, but I have not succeeded.
SOS helps. For $a^2+b^2+c^2=1$ after $x=\sqrt3a$, $y=\sqrt3b$ and $z=\sqrt3c$ we need to prove that: $$\frac 1{5-6ab}+\frac 1{5-6bc}+\frac 1{5-6ca}\leq 1$$ or$$\sum_{cyc}\left(\frac{1}{3}-\frac{1}{5-6ab}\right)\geq0$$ or $$\sum_{cyc}\frac{2-6ab}{5-6ab}\geq0$$ or $$\sum_{cyc}\frac{3(a-b)^{2}+2c^{2}-a^{2}-b^{2}}{5-6ab}\geq0$$ or $$\sum_{cyc}\left(\frac{3(a-b)^{2}}{5-6ab}+\frac{(c-a)(a+c)}{5-6ab}-\frac{(b-c)(b+c)}{5-6ab}\right)\geq0$$ or $$\sum_{cyc}\frac{3(a-b)^{2}}{5-6ab}+\sum_{cyc}(a-b)\left(\frac{a+b}{5-6bc}-\frac{a+b}{5-6ac}\right)\geq0$$ or $$\sum_{cyc}(a-b)^{2}\left(\frac{1}{5-6ab}-\frac{2(a+b)c}{(5-6ac)(5-6bc)}\right)\geq0.$$ Now, let $a\geq b\geq c.$ Thus, $$S_{c}=\frac{1}{5-6ab}-\frac{2(a+b)c}{(5-6ac)(5-6bc)}\geq$$ $$\geq\frac{1}{5-6ac}-\frac{2(a+b)c}{(5-6ac)(5-6bc)}=\frac{5-8bc-2ac}{(5-6ac)(5-6bc)}=$$ $$=\frac{4(b-c)^{2}+(a-c)^{2}+4a^{2}+b^{2}}{(5-6ac)(5-6bc)}\geq0;$$ $$S_{b}=\frac{1}{5-6ac}-\frac{2(a+c)b}{(5-6ab)(5-6bc)}\geq$$ $$\geq\frac{1}{5-6bc}-\frac{2(a+c)b}{(5-6ab)(5-6bc)}=\frac{5-8ab-2bc}{(5-6ab)(5-6bc)}=$$ $$=\frac{4(a-b)^{2}+(b-c)^{2}+4c^{2}+a^{2}}{(5-6ab)(5-6bc)}\geq0$$ and $$S_{a}+S_{b}=\frac{1}{5-6bc}-\frac{2(b+c)a}{(5-6ab)(5-6ac)}+\frac{1}{5-6ac}-\frac{2(a+c)b}{(5-6ab)(5-6bc)}=$$ $$=\frac{1}{5-6ac}-\frac{2(b+c)a}{(5-6ab)(5-6ac)}+\frac{1}{5-6bc}-\frac{2(a+c)b}{(5-6ab)(5-6bc)}=$$ $$=\frac{5-8ab-2ac}{(5-6ab)(5-6ac)}+\frac{5-8ab-2bc}{(5-6ab)(5-6bc)}\geq0.$$ Id est, $$\sum_{cyc}(a-b)^{2}\left(\frac{1}{5-6ab}-\frac{2(a+b)c}{(5-6ac)(5-6bc)}\right)=\sum_{cyc}(a-b)^2S_c\geq$$ $$\geq S_b(a-c)^2+S_a(b-c)^2\geq S_b(b-c)^2+S_a(b-c)^2=(b-c)^2(S_b+S_a)\geq0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3658374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
$h:[0,1] \to\mathbb{R}$ continuous, and IVT The question is as follows: $$ \text{Suppose } h:[0,1] \rightarrow \mathbb{R} \text{ is continuous. Show that there exists } w \in [0,1] \text{ such that} \\h(w)= \frac{w+1}{2}h(0)+\frac{2w+2}{9}h(\frac{1}{2})+\frac{w+1}{12}h(1)$$ I know that I have to use the Intermediate Value Theorem and I have to show $h(w)$ lies between $h(0)$ and $h(1)$, but I have no idea how to prove it. I have tried to separate $h(w)$ into $\frac{w+1}{2}h(0)+\frac{w+1}{9}h(\frac12)+\frac{w+1}{9}h(\frac12)+\frac{w+1}{12}h(1)$ and then use the IVT twice, on the intervals $[0,\frac12]$ and $[\frac12,1]$, but it does not seem to work. I have also tried to see if $h$ is an interpolation of 3 points, but that also fails.
Let $f(x)=\frac {h(x)}{x+1}$, which is continuous on $[0,1]$, and set $T=\frac12f(0)+\frac13f(\frac12)+\frac16f(1)$. Then we want to prove that $$f(w)=T \ \ \text{for some } \ w\in [0,1].$$ Let $M$ and $m$ be the maximum and minimum values taken by $f(x)$. Since the weights satisfy $\frac12+\frac13+\frac16=1$, note that $m\leq T\leq M$, so the Intermediate Value Theorem gives such a $w$; then $h(w)=(w+1)T=\frac{w+1}{2}h(0)+\frac{2w+2}{9}h(\frac12)+\frac{w+1}{12}h(1)$, which is exactly the required identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3658583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How can I calculate the segment length necessary to separate a certain area from a sector under a given angle? Consider the geometry in this drawing. A circular sector with radius $r$ is divided into two parts by a segment $l$ starting from one of the ends of its arc and meeting the opposite side of the sector at an angle $\alpha$. The sector area outside of $l$ (as indicated by blue shading in the drawing) is $A$. The values $r$, $\alpha$ and $A$ are given; I need to find $l$. I also need the solution to be easily calculable (using a computer). With $\beta$ and $d$ as indicated in the drawing, one easily sees $$ \begin{gather} A = \frac{r^2}{2} \cdot (\beta - \sin\beta \cos\beta) + \frac{l^2}{2} \cdot \sin\alpha \cos\alpha \tag{1}\\ d = r \cdot \sin\beta = l \cdot \sin\alpha \tag{2}. \end{gather} $$ However, I can't seem to solve this for $l$. How can it be done? Is there maybe a better approach than the two equations above?
It looks to me that $$A=\frac{r^2 \beta}{2}-\frac{rl}{2}\sin(\alpha-\beta)$$ thus $$l=\frac{r^2\beta-2A}{r\sin(\alpha-\beta)}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3658760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is $15-\sqrt{15}-\sqrt{15-\sqrt{15}}-\sqrt{15-\sqrt{15}-\sqrt{15-\sqrt{15}}}$ so close to $5$? Basically I started with the number $15$. Then I subtracted its square root to get roughly $11.127$. Subtracting the square root of that returned roughly $7.791$, and finally after subtracting the square root of that, I got roughly $5$. According to Google, it is $5.00000861488$. It is so close to $5$ that I thought it could use further investigation, but so far I haven't found anything like it.
I don't think there is anything fundamental about this. Here's a plot of the real part for repeated applications of the process: Mathematica: iter = Re[N[NestList[(# - Sqrt[#]) &, 15, 15]]]; ListPlot[iter, Joined -> True, PlotRange -> {-2, 15}]
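The same experiment translated to Python (an illustrative rewrite of the Mathematica snippet, using `cmath` so the iteration can continue once the values go negative and become complex):

```python
import cmath

z = 15 + 0j
traj = [z]
for _ in range(15):
    z = z - cmath.sqrt(z)   # the map x -> x - sqrt(x), repeated
    traj.append(z)

# after three steps the value is close to, but not exactly, 5
print(traj[3])              # ≈ 5.0000086
```

A few steps later the iterates drop below zero and pick up an imaginary part, which is why the plot above shows only the real part.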
{ "language": "en", "url": "https://math.stackexchange.com/questions/3658882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Question on the existence of a boundary between divergence and convergence It is said that $\sum1/n^{1+\epsilon}$ will converge and $\sum1/n$ will not. There are various proofs showing that such a boundary between divergence and convergence does not, in principle, exist. This is of course evident in the above, where $\epsilon$ can be made arbitrarily small, e.g. $0.1, 0.01, 0.001$, etc. As a result, this boundary can't really be cast in specific terms. However, I am wondering if it is permissible to nonetheless say that $1/n$ is the fastest decreasing divergent series, that is to say that anything decreasing faster, even by an infinitesimally small amount, will converge. Can we refer to this 'boundary' not as a boundary per se, but as a minimum condition, the minimum being that the series $\sum S$ converges only if its terms decrease with greater speed than those of $\sum 1/n$?
The question is interesting. Actually, things are not so simple. There are many (infinite) different series "between" $$ \sum \frac{1}{n}=\infty \quad\textrm{and}\quad \sum \frac{1}{n^{1+\varepsilon}}<\infty $$ (for any $\varepsilon>0$), some convergent and some other divergent. For example: $$ \sum \frac{1}{n\cdot \ln(n)}=\infty, \quad \sum \frac{1}{n\cdot (\ln\,n)^{1+\varepsilon}}<\infty $$ and $$ \sum \frac{1}{n\cdot \ln(n)\cdot \ln(\ln (n))}=\infty, \quad \sum \frac{1}{n\cdot \ln(n)\cdot (\ln (\ln (n)))^{1+\varepsilon}}<\infty $$ and so on. The ordering of the series is $$ \frac{1}{n}>\frac{1}{n\cdot \ln(n)}>\frac{1}{n\cdot \ln(n)\cdot \ln(\ln (n))}>\ldots \\ \ldots > \frac{1}{n\cdot \ln(n)\cdot (\ln (\ln (n)))^{1+\varepsilon}}>\frac{1}{n\cdot (\ln\,n)^{1+\varepsilon}}>\frac{1}{n^{1+\varepsilon}} $$ (For each $\varepsilon>0$, $1/n^\varepsilon<1/(\ln n)^k$ for $n$ sufficiently large.)
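One can see this numerically (a rough experiment of my own, not a proof): between $N=10^3$ and $N=10^6$ the partial sums of $\sum 1/(n\ln n)$ still grow by roughly $\ln\ln 10^6-\ln\ln 10^3\approx0.69$, while those of $\sum 1/(n(\ln n)^2)$ barely move:

```python
from math import log

def S(N, p):
    """Partial sum of 1/(n * (ln n)^p) for n = 2..N."""
    return sum(1.0 / (n * log(n) ** p) for n in range(2, N + 1))

growth_div = S(10**6, 1) - S(10**3, 1)   # ≈ 0.69: still diverging, very slowly
growth_conv = S(10**6, 2) - S(10**3, 2)  # ≈ 0.07: a convergent tail
```

The first difference matches the integral estimate $\ln\ln N$, and the second matches $1/\ln 10^3-1/\ln 10^6\approx 0.072$.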
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that there is no operator $T \in L(\ell^2(\mathbb{N}))$ Let $S \in L(\ell^2(\mathbb{N}))$ be the unilateral shift. How do we show that if $K \in L(\ell^2(\mathbb{N}))$ is a compact operator, then there is no operator $T \in L(\ell^2(\mathbb{N}))$ such that $T^2=S^3 +K$?
"Compact perturbation" should always make one consider "Fredholm". And this is the key here: if we consider the Fredholm index, noting that $S^3+K$ is Fredholm we would have $$ 2\operatorname{ind}(T)=\operatorname{ind}(T^2)=\operatorname{ind}(S^3+K)=\operatorname{ind}(S^3)=-3. $$ This would require $\operatorname{ind}(T)=-\tfrac32$, which is impossible. Note that $T$ is necessarily Fredholm, because we have $$ [(S^*)^3T]T=I+K',\ \ \ T[T(S^*)^3]=I+K'' $$ for certain compact $K',K''$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Change in coordinates for a metric in a given form With a change in coordinates, transform \begin{align} ds^2 = -z^2dt^2 + dz^2 \end{align} to \begin{align} ds^2 = -dT^2 + dX^2. \end{align} My attempt. It is clear that incoming null geodesics $\dot{z}<0$ have the form $z=ce^{-t}$, and outgoing null geodesics have $z=de^{t}$. In terms of $(T,X)$, incoming geodesics take the form $T = -X + a$, and outgoing geodesics take the form $T=X+b$. So I try to map \begin{align} T = \log z,\quad X = t. \end{align} But I obtain $$ ds^2 = \frac{1}{z^2}dz^2 + dt^2, $$ which is close but not the right answer.
Look up Rindler coordinates. They're the Lorentzian analogue of polar coordinates. Set $T = z \sinh t$ and $X = z\cosh t$. So $$\begin{align*} -{\rm d}T^2 + {\rm d}X^2 &= -(\sinh t\,{\rm d}z + z\cosh t\,{\rm d}t)^2 + (\cosh t\,{\rm d}z + z\sinh t\,{\rm d}t)^2 \\ &= -\sinh^2t\,{\rm d}z^2 - 2z\cosh t\sinh t\,{\rm d}z\,{\rm d}t - z^2\cosh^2t\,{\rm d}t^2 \\ &\qquad + \cosh^2t\,{\rm d}z^2 + 2z\cosh t\sinh t\,{\rm d}t\,{\rm d}z + z^2\sinh^2t\,{\rm d}t^2 \\ &= {\rm d}z^2 - z^2\,{\rm d}t^2,\end{align*}$$as wanted.
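A quick numerical check of this identity (illustrative only): at random points, the differentials computed from $T=z\sinh t$ and $X=z\cosh t$ satisfy $-\mathrm dT^2+\mathrm dX^2=\mathrm dz^2-z^2\,\mathrm dt^2$ to machine precision, since the cross terms cancel and $\cosh^2t-\sinh^2t=1$:

```python
from math import sinh, cosh
import random

random.seed(1)
max_err = 0.0
for _ in range(100):
    t = random.uniform(-2.0, 2.0)
    z = random.uniform(0.1, 3.0)
    dt = random.uniform(-1.0, 1.0)
    dz = random.uniform(-1.0, 1.0)
    dT = sinh(t) * dz + z * cosh(t) * dt   # dT = sinh(t) dz + z cosh(t) dt
    dX = cosh(t) * dz + z * sinh(t) * dt   # dX = cosh(t) dz + z sinh(t) dt
    max_err = max(max_err, abs((-dT**2 + dX**2) - (dz**2 - z**2 * dt**2)))
```

The identity is exact algebra, so the residual is pure floating-point noise.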
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $ a^2+b^2+c^2 \le a^3 +b^3 +c^3 $ If $ a,b,c $ are three positive real numbers and $ abc=1 $, prove that $a^2+b^2+c^2 \le a^3 +b^3 +c^3 $. I got $a^2+b^2+c^2\ge 3$, and it can also be shown that $ a^2 +b^2+c^2\ge a+b+c $. From here, how can I proceed to the result? Please help me to proceed. Thanks in advance.
Also, we can use a Tangent Line method. Since $abc=1$ gives $\ln a+\ln b+\ln c=\ln(abc)=0$, $$\sum_{cyc}(a^3-a^2)=\sum_{cyc}(a^3-a^2-\ln{a})\geq0$$ because it is easy to see that $$a^3-a^2-\ln{a}\geq0:$$ $$(a^3-a^2-\ln{a})'=3a^2-2a-\frac{1}{a}=\frac{3a^3-3a^2+a^2-a+a-1}{a}=\frac{(a-1)(3a^2+a+1)}{a},$$ which gives $a_{min}=1$, where the value is $0$, and we are done!
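A brute-force numerical check of both the pointwise bound and the original inequality (a sanity test, not part of either proof):

```python
from math import log
import random

# pointwise bound: a^3 - a^2 - ln(a) >= 0 for a > 0, with equality at a = 1
min_pointwise = min(a**3 - a**2 - log(a)
                    for a in (i / 100.0 for i in range(1, 501)))

# the original inequality on random triples constrained by abc = 1
random.seed(0)
gaps = []
for _ in range(1000):
    a = random.uniform(0.1, 5.0)
    b = random.uniform(0.1, 5.0)
    c = 1.0 / (a * b)
    gaps.append((a**3 + b**3 + c**3) - (a**2 + b**2 + c**2))
min_gap = min(gaps)
```

Both minima come out non-negative (up to floating-point noise), as expected.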
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
What do the "d" in SDE notation mean? An SDE is often written in the form $ dX_t=\mu dt + \sigma dW_t $. What is the meaning of this equation in English? If I had to construct an SDE, I would write something like $ \frac{dX_t}{dt} = \mu + \sigma \frac{dW_t}{dt} $. Why are SDEs not written in that way? I know that the Brownian motion does not have a derivative so it has to do something with that fact but I don't get the real meaning of the standard notation.
An SDE is just a short way of writing the stochastic integral equation $$X_t = X_0 + \int_0^t \mu ds + \int_0^t \sigma dW_s$$ So if we take the SDE form and integrate both sides we get: $$\int_0^t dX_s=\int_0^t\mu ds + \int_0^t\sigma dW_s$$ With the natural equation $$\int_0^t dX_s = X_t - X_0$$ we get the original stochastic integral equation. So the SDE is nothing else than another way of writing the integral equation.
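To make this concrete, here is a minimal Euler-Maruyama simulation of $dX_t=\mu\,dt+\sigma\,dW_t$ (an illustrative sketch with arbitrarily chosen parameters): each step adds the drift increment $\mu\,\Delta t$ and a Gaussian increment $\sigma\,\Delta W$ with variance $\sigma^2\Delta t$, which is exactly the integral equation in discretized form. For this constant-coefficient SDE, $X_T\sim\mathcal N(X_0+\mu T,\ \sigma^2 T)$:

```python
import random

random.seed(0)
mu, sigma, T = 1.0, 0.5, 1.0
n_steps, n_paths = 100, 5000
dt = T / n_steps

finals = []
for _ in range(n_paths):
    x = 0.0                                          # X_0 = 0
    for _ in range(n_steps):
        x += mu * dt + sigma * random.gauss(0.0, dt ** 0.5)
    finals.append(x)

mean = sum(finals) / n_paths                         # ≈ mu * T = 1.0
var = sum((v - mean) ** 2 for v in finals) / n_paths # ≈ sigma^2 * T = 0.25
```

Note that each Brownian increment has standard deviation $\sqrt{\Delta t}$, not $\Delta t$, which is precisely why $dW_t/dt$ does not exist and the integral form is used instead.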
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simulation of a General equation I have solved a programming problem using an equation, but I can't explain where this equation comes from. Can anyone help me? Question: I have $n$ rubles initially. The cost of one plastic litre bottle, the cost of one glass litre bottle, and the money one can get back by returning an empty glass bottle are $a$, $b$ and $c$, respectively, where $1\leqslant a\leqslant10^{18}$, $1\leqslant c<b\leqslant10^{18}$. I need the maximum number of litres I can drink with my rubles. The number of glass litres is $\dfrac{n-c}{b-c}$. Then the total number of litres will be: $t1 = \dfrac{n-c}{b-c}$ ${n = n -(b-c)*t1}$ $t2 = \frac{n}{a}$ $ans = t1 + t2$ Where does this formula come from? Can anyone please elaborate? Example: $n=10$, $a=11$, $b=9$, $c=8$. The answer will be $2$.
Let $p$ and $g$ be the number of plastic and glass bottles purchased, respectively. Assume that you return all glass bottles. Consider the problem of maximizing $p+g$ subject to linear constraints: \begin{align} a p + (b - c) g &\le n \\ p,g &\ge 0 \end{align} For $(n,a,b,c)=(10,11,9,8)$ the optimal solution is $(p,g)=(0,10)$, but this solution is not implementable because you cannot simultaneously buy and return a glass bottle. One way around this is to introduce a time index: for time $t=1,\dots,T$, let $p_t$ be the number of plastic bottles purchased, let $g_t$ be the number of glass bottles purchased, let $r_t$ be the number of glass bottles returned, and let $m_t$ be the amount of money available after purchases. The new problem is to maximize $\sum_t (p_t + g_t)$ subject to: \begin{align} n &= a p_t + b g_t + m_t &&\text{for $t=1$} \\ m_{t-1} + c r_{t-1} &= a p_t + b g_t + m_t &&\text{for $t>1$} \\ r_t &\le g_t &&\text{for all $t$} \\ p_t, g_t, r_t, m_t &\ge 0 &&\text{for all $t$} \end{align} You will need to impose integrality of $p_t$, $g_t$, and $r_t$. Without loss of optimality, you can assume $r_t=g_t$.
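Because all glass purchases can be collapsed into "buy one, drink, return, repeat", the time-indexed program above reduces to the closed form quoted in the question. A hedged Python sketch (my own reconstruction; the function name is invented for illustration):

```python
def max_litres(n, a, b, c):
    """Maximum litres with n rubles; plastic costs a, glass costs b,
    and each returned glass bottle refunds c (with c < b)."""
    best = n // a                      # strategy 1: plastic only
    if b - c < a and n >= b:           # glass is worth it and affordable
        # each glass effectively costs b - c, but c stays "locked" in the
        # not-yet-returned bottle, hence the n - c in the numerator
        g = (n - c) // (b - c)
        best = max(best, g + (n - g * (b - c)) // a)
    return best

print(max_litres(10, 11, 9, 8))   # 2, matching the example
```

The question's $t1=\frac{n-c}{b-c}$ is exactly the `g` above: $n-k(b-c)\ge b$ holds precisely while $k+1\le\frac{n-c}{b-c}$, so that many glass bottles are sequentially affordable.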
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does $\sum_i A_i=I$ with $A_i$ positive imply $\{A_i\}_i$ are mutually diagonalisable? As discussed in this other question, if $A$ and $B$ are matrices such that $A+B=I$, then trivially they commute, and thus if they are both diagonalisable they are also mutually diagonalisable. The same argument doesn't, however, apply when summing more than two such matrices. Suppose then that $$\sum_{i=1}^n A_i = I.$$ The case of $A_i\ge0$ is the one I'm most interested about, but if positivity turns out to not be relevant for this, as it might very well be the case, feel free to weaken this constraint (to maybe consider Hermitian, normal, or just diagonalisable matrices). If $\sum_i A_i=I$ then I can say that, for example, $[A_1,A_2+...+A_n]=0$, and thus $A_1$ and $\sum_{i>1} A_i$ are mutually diagonalisable. But then I cannot iterate the argument by splitting $A_2$ from $A_3+...+A_n$, as now they sum to a diagonal matrix (in their common eigenbasis), but not to the identity. So does the result about mutual diagonalisability only work for $n=2$? A counterexample of three or more non-mutually-diagonalisable matrices summing to the identity would be a good answer.
Let $$ A_1 =\frac{1}{9} \begin{bmatrix} 3 & 2 & -1\\ 2 & 3 & -1\\ -1 & -1 & 3\\ \end{bmatrix}, \quad A_2 =\frac{1}{9} \begin{bmatrix} 3 & -1 & 2\\ -1 & 3 & -1\\ 2 & -1 & 3\\ \end{bmatrix}, \quad A_3 =\frac{1}{9} \begin{bmatrix} 3 & -1 & -1\\ -1 & 3 & 2\\ -1 & 2 & 3\\ \end{bmatrix}. $$ Then it can be checked that $\sum_iA_i = I$, $A_i\geq 0$, but do not commute.
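These claims can be verified mechanically. Working with the integer matrices $9A_i$ (a small self-contained check; no linear-algebra library needed), one confirms that the sum is $9I$, that each matrix is positive definite by Sylvester's criterion, and that $A_1A_2\neq A_2A_1$:

```python
A1 = [[3, 2, -1], [2, 3, -1], [-1, -1, 3]]   # these are 9*A1, 9*A2, 9*A3
A2 = [[3, -1, 2], [-1, 3, -1], [2, -1, 3]]
A3 = [[3, -1, -1], [-1, 3, 2], [-1, 2, 3]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

total = [[A1[i][j] + A2[i][j] + A3[i][j] for j in range(3)] for i in range(3)]
sums_to_identity = (total == [[9, 0, 0], [0, 9, 0], [0, 0, 9]])

# Sylvester's criterion: all leading principal minors positive
def pos_def(A):
    return (A[0][0] > 0
            and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0
            and det3(A) > 0)

all_pd = all(pos_def(A) for A in (A1, A2, A3))
commute_12 = (mat_mul(A1, A2) == mat_mul(A2, A1))   # False
```

Scaling by $9$ changes neither positivity nor commutativity, so the integer check suffices.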
{ "language": "en", "url": "https://math.stackexchange.com/questions/3659957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Probability about lifetime of 100 bulbs (exponential distribution) I have some doubts about the following problem: I have 100 bulbs, each with a lifetime represented by an exponential distribution with an expected value of 1000 hours. Find the probability that at least one bulb burns out within at most 500 hours. I have calculated the probability for one bulb with this method: $P(X \leq 500)=\int_{0}^{500}\lambda e^{-\lambda x}dx = 1-e^{-\frac{1}{2}} = 0.394$ Now, how can I extend this method to all 100 bulbs? A step-by-step solution is really appreciated; I'm really a newbie about statistics/probability arguments. Thank you so much and best regards. EDIT: $\frac{1}{\lambda}=1000$ hours, so $ \lambda = \frac{1}{1000} $
I assume that it means "after at most 500 hours" right? In that case your computation makes sense for one bulb. What is $\lambda$ btw? For the second part, we may assume that the bulbs are all independent and blow down within $500$ hours with a probability of $p=0.394$. You have $100$ bulbs. What is the chance that none of these blows down?
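Putting the numbers together (a quick computation; independence of the bulbs is assumed): with $p=1-e^{-1/2}\approx0.3935$ per bulb, the probability that at least one of the 100 bulbs fails within 500 hours is $1-(1-p)^{100}=1-e^{-50}$, which is indistinguishable from $1$:

```python
from math import exp

lam = 1 / 1000                 # rate, since the mean lifetime is 1000 h
p = 1 - exp(-lam * 500)        # one bulb fails within 500 h, ≈ 0.3935
p_none = (1 - p) ** 100        # all 100 survive: e^(-50) ≈ 1.9e-22
p_at_least_one = 1 - p_none    # essentially 1
```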
{ "language": "en", "url": "https://math.stackexchange.com/questions/3660125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Automatic complexity of word problem Suppose $L$ is a regular language. Let's define its automatic complexity $ac(L)$ as the minimal possible number of states of a DFA that recognizes $L$. Now, suppose $G$ is a finite group, $A \subset G$, $\langle A \rangle = G$. Let's define the map $\pi: A^* \to G$ using the following recurrence: $$\pi(\Lambda) = e$$ $$\pi(a \alpha) = a \pi(\alpha), a \in A, \alpha \in A^*$$ Now define $L(G) := \{ \alpha \in A^*| \pi(\alpha) = e\}$. It is not hard to see that $ac(L(G)) \leq |G|$. Indeed, one can take the DFA whose states correspond to the elements of $G$, where $e$ is both the initial and the terminal state and the transition function is defined by left multiplication. However, I wonder whether this bound is tight or not. So, my question is: What is the asymptotic of $\max_{|G| \leq n} ac(L(G))$ as $n \to \infty$?
I think that any DFA accepting $L(G)$ must have at least $|G|$ states, which proves that in fact $|G|$ is the smallest number possible. Let $v,w \in A^*$ represent two distinct elements of $G$, and let $\bar{v} \in A^*$ represent the inverse of the group element represented by $v$. Then, after reading the word $v$, the DFA accepting $L(G)$ must be in a different state from what it is after reading $w$. Otherwise, since it accepts the word $v\bar{v}$, which represents the identity of $G$, it would also accept the word $w\bar{v}$, which does not represent the identity. So any DFA for $L(G)$ must have at least $|G|$ states.
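For a concrete instance of the $|G|$-state automaton (an illustrative toy of my own, taking $G=\mathbb Z_5$ written additively with $A=\{1,2\}$): the states are the group elements, the identity $0$ is both the start and the only accepting state, and each transition adds the generator read:

```python
def accepts(word, m=5):
    """m-state DFA for G = Z_m: accept words whose letters sum to 0 mod m."""
    state = 0                    # start at the identity
    for a in word:
        state = (state + a) % m  # transition = left "multiplication" (addition)
    return state == 0

# words representing the identity are accepted...
ok = accepts([1, 1, 1, 1, 1]) and accepts([2, 2, 1])
# ...and words representing other elements are rejected
bad = accepts([1, 2])
```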
{ "language": "en", "url": "https://math.stackexchange.com/questions/3660304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Need help with calculus II series I am working on a problem that my professor isn't really explaining well, so I decided to ask here. The following is the question: $ f(x) =\sum_{n=1}^\infty \frac{\mathrm{(-1)}^{n+1}\mathrm{(x-5)}^{n}}{(n\mathrm{5}^{n})} $ I am asked to find the interval of convergence of each of the following: $f(x)$ $f'(x)$ $f''(x)$ $\int f(x) $ So for $ f(x) $ I used the ratio test $ \left(\frac{a_{n+1}}{a_n}\right) $ and got $ 1 - \frac{x}{5} < 1 $. Is this the interval of convergence? I am totally clueless. My main question was about $ f'(x) $. I have the first derivative as follows: $ f'(x) = \frac{\mathrm{(-1)}^{n+1}\mathrm{(x-5)}^{n-1}}{\mathrm{5}^{n}} $ Using the ratio test $ \left(\frac{a_{n+1}}{a_n}\right) $ on this one got me $ - \frac{x-5}{5} $ -> $ \frac{x}{5} - 1 $. But this will never be greater than 1. I am not sure what I am doing wrong here. Any help is appreciated.
For the series $\sum_{n = 1}^\infty a_n$, where there is an $N$ such that $a_n \neq 0$ for all $n \geq N$, the ratio test has you calculate $$ L = \lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}\right| $$ If $L < 1$, the series converges absolutely. If $L = 1$ or the limit fails to exist, the test is inconclusive. If $L > 1$ then the series is divergent. In both of your examples, you have lost/ignored the absolute value. For your first example, as I type this, you don't have a sum, but I will guess you meant $$ f(x) = \sum_{n=1}^\infty \frac{(-1)^{n+1} (x-5)^n}{n 5^n} \text{.} $$ The numerator is only zero when $x = 5$, and that sum is easy to do, so we apply the ratio test to determine what happens when $x \neq 5$. We compute \begin{align*} L &= \lim_{n \rightarrow \infty} \left| \frac{\frac{(-1)^{(n+1)+1} (x-5)^{(n+1)}}{(n+1) 5^{(n+1)}}}{\frac{(-1)^{n+1} (x-5)^n}{n 5^n}} \right| \\ &= \lim_{n \rightarrow \infty} \left| \frac{(-1)^{(n+1)+1} (x-5)^{(n+1)}}{(n+1) 5^{(n+1)}} \cdot \frac{n 5^n}{(-1)^{n+1} (x-5)^n} \right| \\ &= \lim_{n \rightarrow \infty} \left| \frac{(-1)^{(n+1)}(-1) (x-5)^{n}(x-5)}{(n+1) 5^{n}5} \cdot \frac{n 5^n}{(-1)^{n+1} (x-5)^n} \right| \\ &= \lim_{n \rightarrow \infty} \left| \frac{(-1) (x-5)}{(n+1) 5} \cdot \frac{n}{1} \right| \\ &= |(-1) (x-5)| \lim_{n \rightarrow \infty} \left| \frac{n}{5(n+1)} \right| \\ &= \frac{1}{5} |x-5| \text{.} \end{align*} The ratio test assures us the series converges if $L < 1$, so when \begin{align*} \frac{1}{5} |x-5| &< 1 \\ |x-5| &< 5 \\ -5 < x-5 &< 5 \\ 0 < x &< 10 \text{.} \end{align*} The ratio test is inconclusive when $L = 1$, that is at $x = 0$ and $x = 10$, so we inspect those individually: * *$x = 0$: \begin{align*} \sum_{n=1}^\infty \frac{(-1)^{n+1} (0-5)^n}{n 5^n} &= \sum_{n=1}^\infty \frac{(-1)^{n+1} (-1)^n 5^n}{n 5^n} \\ &= \sum_{n=1}^\infty \frac{(-1)^{2n+1}}{n} \\ &= \sum_{n=1}^\infty \frac{-1}{n} \\ &= - \sum_{n=1}^\infty \frac{1}{n} \text{,} \end{align*} which is minus a diverging 
$p$-series (in particular, it is minus the harmonic series, which diverges). *$x = 10$: \begin{align*} \sum_{n=1}^\infty \frac{(-1)^{n+1} (10-5)^n}{n 5^n} &= \sum_{n=1}^\infty \frac{(-1)^{n+1} 5^n}{n 5^n} \\ &= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \text{,} \end{align*} which is an alternating series, whose terms monotonically decrease with limit zero. By the alternating series test, this series converges. (We could also recognize this as the alternating harmonic series, which converges.) Therefore, the interval of convergence of $f(x)$ is $(0,10]$. You should be able to adapt the above to your other series. If you have difficulties, ask in comments.
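A numerical cross-check (illustrative): inside $(0,10)$ the series converges rapidly to $\ln(x/5)$, its closed form, obtained from $\ln(1+u)=\sum(-1)^{n+1}u^n/n$ with $u=(x-5)/5$, and at the endpoint $x=10$ the alternating series creeps toward $\ln 2$ at rate $O(1/N)$:

```python
from math import log

def f_partial(x, N):
    """Partial sum of sum_{n=1}^N (-1)^(n+1) (x-5)^n / (n 5^n)."""
    r = (x - 5) / 5              # work with the ratio to avoid overflow
    return sum((-1) ** (n + 1) * r ** n / n for n in range(1, N + 1))

inside = f_partial(7, 200)           # ≈ ln(7/5), geometric convergence
endpoint = f_partial(10, 20000)      # ≈ ln 2, alternating-series convergence
inside_target = log(7 / 5)
endpoint_target = log(2.0)
```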
{ "language": "en", "url": "https://math.stackexchange.com/questions/3660480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solving a separable ODE using substitution proof This is the question my sophomore differential equations professor asked on our practice exam: Show that the substitution $y=ux$ in the first-order differential equation $$p(x,y)\;dx+q(x,y)\;dy=0$$ results in an ODE (in $u$ and $x$) which can be solved by "separation of variables" if $p$ and $q$ are homogeneous of the same degree (meaning $p(tx,ty)=t^dp(x,y)$ and $q(tx,ty)=t^dq(x,y)$ for some integer $d$ and all real $t\neq0$). What is the expression you get for the solution of the original ODE (i.e., after undoing the substitution)? Check that it works. Note: you'll need to have the correct substitution for $dy$ here for things to work. This is how far I've gotten. Where should I go from here? I am completely lost. HELP!
Unfortunately I did not manage to read your notes. Here is a derivation in case it helps. Making $y = \lambda(x) x$ and considering $dy = \lambda dx+x d\lambda$, we have, by homogeneity of degree $d$, $$ p(x,\lambda x)dx+q(x,\lambda x ) dy = x^dp(1,\lambda)dx + x^d q(1,\lambda)(\lambda dx+x d\lambda) = 0, $$ so, dividing by $x^d$, we follow with $$ p(1,\lambda)dx + q(1,\lambda)(\lambda dx+x d\lambda) = (p(1,\lambda)+\lambda q(1,\lambda))dx + x q(1,\lambda)d\lambda=0, $$ hence $$ \frac{q(1,\lambda)d\lambda}{p(1,\lambda)+\lambda q(1,\lambda)} + \frac{dx}{x}=0. $$
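As a concrete sanity check (an example of my own, not from the post): take $p(x,y)=x+y$ and $q(x,y)=x$, both homogeneous of degree $d=1$. The separated equation gives $\int\frac{d\lambda}{1+2\lambda}+\int\frac{dx}{x}=C$, i.e. $\tfrac12\ln(1+2\lambda)+\ln x=C$, which, with $\lambda=y/x$, is $x^2+2xy=C'$. One can verify numerically that $y(x)=(C-x^2)/(2x)$ satisfies $(x+y)\,dx+x\,dy=0$:

```python
C = 3.0

def y(x):
    # implicit solution x^2 + 2 x y = C, solved for y
    return (C - x * x) / (2 * x)

def dy_dx(x, h=1e-6):
    # central finite difference
    return (y(x + h) - y(x - h)) / (2 * h)

# residual of (x + y) + x * y' should vanish along the solution curve
residuals = [abs((x + y(x)) + x * dy_dx(x)) for x in (0.5, 1.0, 2.0)]
```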
{ "language": "en", "url": "https://math.stackexchange.com/questions/3660666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Branch cut of $\sqrt{z^2-1}$. I was reading something that defined the function $f(z)=\sqrt{z^2-1}$ on $\mathbb{C}\setminus [-1,1]$ where the branch cut is such that the argument of $z$ and $\sqrt{z^2-1}$ are in the same quadrant. I think I understand what this means and I think it corresponds to the usual branch of the square root. Later, they say that $\sqrt{z^2-1}\leq 0$ for $z< -1$. I don't understand why this should be true. I have tried taking limits from above and below the imaginary axis but confused myself. This is the way I'm understanding it: taking a point in the second quadrant slightly above the real axis, we can write $z=re^{i(\pi-\epsilon)}$ for $r>1$. Then $z^2=r^2e^{i(2\pi-2\epsilon)}$, i.e. a complex number with argument almost $2\pi$. When you subtract one, you decrease the argument, but for $\epsilon$ small it should still be nearly $2\pi$. There are two complex numbers which square to this one: one is just above the negative real axis, the other is just below the positive real axis. To have a continuous function, we must choose the first.
Let $f(z)=\sqrt{z^2-1}$ for $z\in \mathbb{C}\setminus[-1,1]$, with the branch cut on $[-1,1]$ such that $\arg(z)$ and $\arg(\sqrt{z^2-1})$ are in the same quadrant. Branch points of $f(z)$ are at $z=-1$ and $z=1$. Corresponding branch cuts are contours that begin at $z=-1$ and $z=1$ and end at the point at infinity. Example branch cuts include rays on the real axis from $(i)$ $z=-1$ to $z=-\infty$ and $z=1$ to $-\infty$, $(ii)$ $z=-1$ to $z=-\infty$ and $z=1$ to $\infty$, and $(iii)$ $z=-1$ to $z=\infty$ and $z=1$ to $\infty$. But the branch cuts need not be straight line paths. For example, we could choose the branch cut from $z=1$ to be the hyperbolic path $\text{Im}(z)=\frac1{\text{Re}(z)}-1$ from $z=1$ to $z=i\infty$ in the first quadrant. In terms of set equivalence (See this answer and this one for references), we can write for any value of $f(z)$ as $$\sqrt{z^2-1}=\sqrt{z-1}\sqrt{z+1}$$ for some value of $\sqrt{z-1}$ and some value of $\sqrt{z+1}$. We choose, therefore, to cut the plane from $-1$ to $\infty$ and from $1$ to $\infty$, both along the real axis, so that $$\begin{align} \sqrt{z^2-1}&=\sqrt{|z+1|}e^{i\arg(z+1)/2}\sqrt{|z-1|}e^{i\arg(z-1)/2}\\\\ &=\sqrt{|z^2-1|}e^{i(\arg(z+1)+\arg(z-1))/2} \end{align}$$ where $0<\arg(z+1)\le 2\pi$ and $0<\arg(z-1)\le 2\pi$. Then $0<\arg(\sqrt{z^2-1})\le 2\pi$. Note with these choices of branches for $\sqrt{z+1}$ and $\sqrt{z-1}$, we satisfy the requirement that $\arg(z)$ and $\arg(\sqrt{z^2-1})$ are in the same quadrant. Moreover, along the real axis for which $\text{Re}(z)>1$, $f(z)$ is continuous. Hence, we have now defined a function $f(z)$ that is single-valued on $\mathbb{C}\setminus[-1,1]$ and $\arg(z)$ and $\arg(f(z))$ are in the same quadrant. Finally, note that for $\text{Re}(z)<-1$, we have $\arg(z+1)=\arg(z-1)=\pi$, $\arg(f(z))=\pi$, and $\sqrt{z^2-1}=-\sqrt{|z^2-1|}$.
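The branch described above is easy to implement and probe numerically (my own helper functions, with $\arg$ taken in $(0,2\pi]$ as in the text):

```python
from cmath import phase, exp, pi

def arg_0_2pi(w):
    a = phase(w)                 # principal argument in (-pi, pi]
    return a if a > 0 else a + 2 * pi

def f(z):
    """sqrt(z^2 - 1) with branch cut on [-1, 1], both args in (0, 2pi]."""
    theta = (arg_0_2pi(z + 1) + arg_0_2pi(z - 1)) / 2
    return abs(z * z - 1) ** 0.5 * exp(1j * theta)

v_left = f(-2)        # = -sqrt(3): negative on the real axis left of -1
v_right = f(2)        # = +sqrt(3): positive to the right of 1
v_quad1 = f(1 + 1j)   # first-quadrant input gives a first-quadrant output
```

By construction $f(z)^2=(z+1)(z-1)=z^2-1$, and the checks confirm both the same-quadrant property and the sign $\sqrt{z^2-1}\le 0$ for $z<-1$.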
{ "language": "en", "url": "https://math.stackexchange.com/questions/3660865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
SVD for image compression I want to make sure I understood the concept behind SVD for image compression. So, we start off with a rectangular $m \times n$ matrix that stores all the pixel values of the image. We then compute the SVD of this matrix to get two orthogonal matrices that contain information about the rows and the columns of the original matrix, and a diagonal matrix which contains singular values that determine the importance of each rank-$1$ matrix. We then truncate some of the rank-$1$ matrices if their corresponding coefficient in the diagonal matrix is below some threshold value. Say the number of modes kept is $k$; the total number of values we need to keep track of will be $k(m + n + 1)$. But once we need to reconstruct the image, we'll have to multiply the three matrices together, resulting in an $m \times n$ matrix again. So, the image is represented as the $3$ matrices in memory, but when we want to view the image, only then does the processor reconstruct the image from the $3$ matrices. Otherwise, the image is just saved in the form of $3$ matrices to save memory.
The SVD decomposes a matrix as a weighted sum of matrices which are themselves outer products of two vectors. Hence you trade $mn$ coefficients for $k(m+n)$, where $k$ is the number of weights retained. For compression to be effective, $$k(m+n)\ll mn$$ must hold.
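Plugging in typical numbers (purely illustrative values, counting the singular values too, as in the question's $k(m+n+1)$): a 1080x1920 grayscale image truncated to $k=50$ modes needs about 150k stored numbers instead of about 2.07M:

```python
m, n, k = 1080, 1920, 50

full_storage = m * n                 # 2_073_600 pixel values
svd_storage = k * (m + n + 1)        # 150_050: k columns of U, k of V, k sigmas
ratio = full_storage / svd_storage   # ≈ 13.8x compression
```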
{ "language": "en", "url": "https://math.stackexchange.com/questions/3661229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Invariant subspace of $R^3$ Let $T:R^3→R^3$ be the linear operator defined by $$T(\begin{bmatrix}a\\b\\c\end{bmatrix})=\begin{bmatrix}b+c\\2b\\a-b+c\end{bmatrix}$$ Show that $W=span(e_1,e_3)$ is a $T$-invariant subspace of $R^3$. Let $\alpha=\{e_1,e_3\} $ be an ordered basis for $W$ and $\beta=\{e_1,e_2,e_3\}$ be an ordered basis for $R^3=V$. (In my textbook's example, $W$ was $W=span(e_1,e_2)$ and $T_W:W→W,\begin{bmatrix}s\\t\\0\end{bmatrix}→\begin{bmatrix}t\\-s\\0\end{bmatrix}$.) So my question is: how can I show that $W$ is a $T$-invariant subspace? And also, how can I write matrices like $W=span(e_1,e_2)$?
It is easy to see that $T(e_1)=e_3 \in W$ and that $T(e_3)=e_1+e_3 \in W.$ Since $T$ is linear, this gives $$T(W)\subseteq W,$$ hence $W$ is a $T$-invariant subspace.
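The two computations are easy to check directly (a throwaway script; vectors are plain tuples):

```python
def T(v):
    a, b, c = v
    return (b + c, 2 * b, a - b + c)

img_e1 = T((1, 0, 0))   # = (0, 0, 1) = e3
img_e3 = T((0, 0, 1))   # = (1, 0, 1) = e1 + e3
# both images have second coordinate 0, so they lie in span(e1, e3)
```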
{ "language": "en", "url": "https://math.stackexchange.com/questions/3661440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Trapezium angle given, relation between sides How do i check other options? Should i apply triangular inequality ? Also is there some way that all options are checked?
Assume that (D) is true. Then, since you have established that (C) is true, we must have $BC=AC \implies \theta =60°$ which is clearly false. If (A) was true, then $AD=CD\implies \angle ACD =60° \implies \angle ADC =60° \implies \angle PDC =120° \implies PCD =0°$ which again, is false. If (B) was true, then $BC=CD \implies \angle BCD=\theta$. Now, in $\triangle ABO$, ($O$ being the intersection of the diagonals of the quad.) $\angle ABL = 60°-\theta \implies \angle AOB =120° \implies \angle AOD =60°\implies \angle ADO=60°\implies \angle PDC =120°-\theta \implies \angle PCD =\theta$ Now consider $\triangle BCD$. By the exterior angle property we have that $2\theta =\theta \implies \theta=0°$ False.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3661886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are all injective arcs the images of an injective interval? Given a continuous $f:[0,1] \to \mathbb{R}^2$ call it an injective arc if: * *$f(0) \neq f(1)$ *$f(a) = f(b) =c $ for $a<b$ implies that $f([a,b]) = \{c\}$. Given such $f$ we also have an ordering on $Im(f)$ by taking the one of $[0,1]$. Is it true that for any such $f$ there is an injective continuous $g:[0,1] \to \mathbb{R}^2$ that is a homeomorphism onto $Im(f)$ and preserves the ordering?
It seems that my answer to Reparameterisation of Curve as a Regular Curve (Topology) also answers your question in the affirmative. In the referenced question arbitrary paths $p : [0,1] \to X$ are considered and it is shown that there exists a reparametrization $\phi$ (which is a non-decreasing surjective continuous map $\phi: [0,1] \to [0,1]$ with $\phi(0)=0, \phi(1)=1$) such that $p = q \circ \phi$ with a path $q : [0,1] \to X$ which is not constant on any closed subinterval $[a,b]$, $0 \le a < b \le 1$, of $I =[0,1]$. Applying this to your $f : I \to \mathbb R^2$, we get $\phi$ and $g : I \to \mathbb R^2$ such that $f = g \circ \phi$ and $g$ is not constant on any closed subinterval of $I$. Now assume that $g$ is not injective. Then $g(a) = g(b)$ for suitable $0 < a < b < 1$. Choose $a', b'$ such that $\phi(a') = a, \phi(b') = b$. Clearly $0 < a' < b' < 1$ and $\phi([a',b']) = [a,b]$ since $\phi$ is non-decreasing. Then $f(a') = f(b')$ and $f(x) = c$ for $x \in [a',b']$. Hence $g(y) = c$ for $y = \phi(x) \in \phi([a',b']) = [a,b]$, thus $g$ is constant on $[a,b]$ which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3662193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Linear independence of complex basis of vectors. I understand that for $\mathbb{C}^n$ as a real vector space, we choose $$\left\{\pmatrix{1\\0\\\vdots\\0},\pmatrix{\mathrm i\\0\\\vdots\\0},\pmatrix{0\\1\\\vdots\\0},\pmatrix{0\\\mathrm i\\\vdots\\0},\dots,\pmatrix{0\\0\\\vdots\\1},\pmatrix{0\\0\\\vdots\\\mathrm i}\right\}$$ as a standard basis. Now, how would I show linear independence of this basis? My first thought was to show that the determinant of the basis matrix is non-zero, but when looking at this basis I realise that there is no determinant because it is not a square matrix.
Let the $e_j$ be the standard basis elements for $\Bbb R^n$, so those of $\Bbb C^n$ are $e_j,\,ie_j$. A general linear combination thereof is $\sum_j(a_j+ib_j)e_j$ with $a_j,\,b_j\in\Bbb R$. If this vanishes, $a_k+ib_k=0\cdot e_k=0$, so $a_k=b_k=0$.
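To make the argument concrete (illustration only), one can identify $\mathbb C^n$ with $\mathbb R^{2n}$ by stacking real and imaginary parts; the $2n$ basis vectors then become the columns of a $2n\times 2n$ real matrix, and linear independence over $\mathbb R$ is just the statement that this matrix has full rank. A minimal sketch for $n=2$:

```python
import numpy as np

n = 2
# The complex basis vectors e_j and i*e_j of C^n viewed as a real vector space.
complex_basis = []
for j in range(n):
    e = np.zeros(n, dtype=complex)
    e[j] = 1
    complex_basis.append(e)        # e_j
    complex_basis.append(1j * e)   # i * e_j

# Identify C^n with R^{2n}: v -> (Re v, Im v).
def realify(v):
    return np.concatenate([v.real, v.imag])

M = np.column_stack([realify(v) for v in complex_basis])  # 2n x 2n real matrix
rank = np.linalg.matrix_rank(M)
```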
{ "language": "en", "url": "https://math.stackexchange.com/questions/3662375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Fundamental set of solutions for ODE Let $a$ and $b$ be distinct positive integers. Prove that $(x^{a}, x^{b})$ cannot be a fundamental set of solutions of any second order ODE of the form $y''+p(x)y'+q(x)y=0$ on the interval $(-1,1)$, where $p(x)$ and $q(x)$ are continuous functions on $(-1,1)$. My progress: I managed to obtain the functions $p(x)=\frac{1-a-b}{x}$ and $q(x)=\frac{ab}{x^2}$, which are continuous everywhere except at zero. However, I am not sure how to continue after this, and unsure whether this finishes the problem or not.
If $0<a<b$, then $b\ge 2$ and thus $y(x)=x^b$ has values $y(0)=y'(0)=0$. However the only solution of an initial value problem for the given DE form with these initial conditions is the zero solution. As $x^b\ne 0$ in general, this gives a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3662576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On some special actions Let $G$ be a group and $A$ be an abelian group. Let $\beta$, $\alpha :G\rightarrow Aut(A)$ be two homomorphisms. It is well known that if there exist $\sigma \in Aut(A)$, $\rho \in Aut(G)$ such that $(\beta \circ \rho )(g)=\sigma \circ \alpha (g)\circ \sigma^{-1}$ for all $g\in G$, then the semidirect products $A\rtimes _{\alpha }G$ and $A\rtimes_{\beta}G$ are isomorphic. However, in one stage of a proof concerning the representation of some special groups, I get that there exist $\sigma \in Aut(A)$, $\rho \in $ $Aut(G)$ such that $(\alpha \circ \rho )(g)=\sigma \circ \alpha (g)\circ \sigma^{-1}$ for all $g\in G$. Is there any interpretation of this formula in group theory? why this might be an interesting property? Thank you in advance.
It looks like you've let $\beta=\alpha$. So of course this is true with $\sigma$ and $\rho$ the identity automorphisms. But this is basically a trivial statement. Not sure whether anything can be done with it if $\sigma\ne\rho$. If there are any examples, I guess they could be called $\alpha$-equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3662731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The time convergence of stochastic integral and Doob's convergence. Consider the process $$X_{t}=\int_{0}^{t}e^{-s}dW_{s},$$ where $e^{-s}$ is deterministic. I am wondering if $\lim_{t\rightarrow\infty}X_{t}$ exists almost surely... I understand that $X_{t}$ in the case is a martingale, so we can use Doob's martingale convergence theorem. However, I have no idea about how to show $$\sup_{t}\mathbb{E}X_{t}^{-}<\infty.$$ If we can show this, then yes, $X_{t}$ converges almost surely to a limit.. Also, is there any way to know the distribution of the limit? I know that the distribution converges since it converges almost surely, but I am not sure how to compute the limiting distribution.. using central limit theorem?? Thanks!
$E|X_t|^{2}=\int_0^{t} e^{-2s} ds=\frac 1 2(1-e^{-2t}) <1$ for all $t$ and this implies $E|X_t|$ is bounded. The limiting distribution is $N(0,\int_0^{\infty} e^{-2s} ds)$ i.e. $N(0, \frac 1 2) $.
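A Monte Carlo sketch of the same fact (illustration only; the horizon, step count, number of paths and seed are arbitrary choices): simulating $X_T=\int_0^T e^{-s}\,dW_s$ by a Riemann sum of Brownian increments, the sample variance should approach $\int_0^\infty e^{-2s}\,ds=\tfrac12$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, steps, paths = 10.0, 400, 20_000
h = T / steps
s_mid = (np.arange(steps) + 0.5) * h          # midpoints of each time step

# Simulate X_T = sum e^{-s_k} dW_k over many independent Brownian paths.
dW = rng.normal(0.0, np.sqrt(h), size=(paths, steps))
X_T = (np.exp(-s_mid) * dW).sum(axis=1)

sample_var = X_T.var()
# By the Ito isometry, Var(X_T) = int_0^T e^{-2s} ds = (1 - e^{-2T})/2.
theoretical_var = (1 - np.exp(-2 * T)) / 2
```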
{ "language": "en", "url": "https://math.stackexchange.com/questions/3663049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Show that three points lie on the same line Show that the points $A(3;9), B(-2;-16)$ and $C(0.2;-5)$ lie on the same line. We can say that three points lie on the same line if the largest segment bounded by two of these points is equal to the sum of the smaller ones. Can you show me why this is sufficient for three points to lie on the same line? By the distance formula, we can get $AB=\sqrt{650}, BC=\sqrt{125.84}$ and $CA=\sqrt{203.84}$. How to check if $AB=BC+CA$?
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. source: https://en.wikipedia.org/wiki/Triangle_inequality Thus, if $AB+BC=AC$, then the points must be on the same line.
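For the skeptical reader, the arithmetic is easy to check numerically (a floating-point check, not a proof): the longest segment $AB$ equals $BC+CA$ up to rounding, so the triangle degenerates and the points are collinear.

```python
from math import dist, isclose

A, B, C = (3, 9), (-2, -16), (0.2, -5)

AB = dist(A, B)   # sqrt(650)
BC = dist(B, C)   # sqrt(125.84)
CA = dist(C, A)   # sqrt(203.84)

# Degenerate case of the triangle inequality: the longest side equals the
# sum of the other two exactly when the three points lie on one line.
collinear = isclose(AB, BC + CA)
```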
{ "language": "en", "url": "https://math.stackexchange.com/questions/3663161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Basis of External Direct Sum of infinitely many vector spaces In the book Basic Algebra by Arthur Knapp, he states that the External Direct Sum of infinitely many vector spaces can be defined as below: $\bigoplus_{a \in A} V_a$ is the set of tuples $\{v_a\}$ of the Cartesian product $\prod_{a \in A} V_a$ with all but finitely many $v_a$ equal to zero, with vector addition and scalar multiplication defined as usual. He remarks that a basis of $\bigoplus_{a \in A} V_a$ is the union of bases of the constituent vector spaces. Moreover, he further defines the External Direct Product of infinitely many vector spaces in a similar manner but excluding the highlighted condition. He then remarks that unlike the External Direct Sum, the External Direct Product doesn't have a basis which can be represented via the collective bases of the vector spaces. I am confused about why the highlighted condition is necessary for such a basis to exist. Why can't we just use the vectors $U_{a(i)} =(0,0,...,a(i),...)$ as the basis, where $a(i)$ belongs to a basis of $V_a$ and $a \in A$?
Addition is a binary operation: it takes two vectors, and returns a vector. By induction, we can add finitely many vectors together. But we cannot add infinitely many vectors together. So, for example, the vector $(1,1,1,1,\ldots)$ cannot be expressed as a linear combination of the vectors $\mathbf{e}_j$ (where $\mathbf{e}_j$ is an element of $\mathbb{R}^{\omega}$ in which the $i$th coordinate of $\mathbf{e}_j$ is $1$ if $i=j$ and $0$ otherwise), because by definition, a linear combination has only finitely many summands. So a linear combination of the $\mathbf{e}_i$ has only finitely many nonzero entries. If you take the collection of vectors you propose, a linear combination of them, since by definition it can only involve finitely many vectors from your collection, will necessarily have zero entry in all but finitely many coordinates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3663504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\lim\limits_{n\to\infty}\frac{n^2+n+1}{(n+1)^2}=1$ So, here's the sequence $\{x_n\}$ defined by the following formula: $$x_n = \frac{n^2+n+1}{(n+1)^2}$$ I want to try and prove this from the definition. Let $\epsilon > 0$ be given. Then, we need an integer $N(\epsilon) > 0$ such that: $$n > N \implies |x_n - 1| < \epsilon$$ $$|x_n-1| = |\frac{n^2+n+1}{(n+1)^2} - 1| = |\frac{n}{(n+1)^2}| < \frac{1}{n}$$ Then, define $N(\epsilon) = [\frac{1}{\epsilon}]+1$, where $[x]$ is the integer part of $x$. Since we have our required $N(\epsilon)$, this proves the desired result. Does the proof above work? If it doesn't, why and how can I fix it?
Yes, your proof is correct. (The answer practically ends here, but here is another way to prove it.) Proof. $$\frac {n^2+n+1} {(n+1)^2} = \frac {n^2+n+1} {n^2+2n+1}= 1 - \frac n {n^2+2n+1}.$$ We claim that $$\lim_{n\to+\infty}\frac n {n^2+2n+1} = 0,$$ and it can be shown that for any $\epsilon \gt 0$, as long as $n \gt N = \frac 1 {\epsilon}$, $$\lvert \frac n {n^2+2n+1} \rvert \lt \epsilon.$$ It is also true that $$\lim _{n\to+\infty} 1 = 1.$$ Therefore $$\lim_{n\to+\infty} \frac {n^2+n+1} {(n+1)^2} = 1.$$
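The key bound $|x_n-1| = n/(n+1)^2 < 1/n$ is also easy to probe numerically over a range of $n$ (illustration only, of course — a finite check is not a proof):

```python
# x_n = (n^2 + n + 1) / (n + 1)^2; the proof shows |x_n - 1| = n/(n+1)^2 < 1/n.
x = lambda n: (n**2 + n + 1) / (n + 1) ** 2

bound_holds = all(abs(x(n) - 1) < 1 / n for n in range(1, 10_000))
```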
{ "language": "en", "url": "https://math.stackexchange.com/questions/3663668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove that whether it is a Banach space or not? We consider the Banach space of all continuous functions on $X$ such that for each $f$ in the space, \begin{equation*} ||f||=\sup_{x\neq y}\frac{\left\vert f(x)-f(y)\right\vert }{\left\vert x-y\right\vert }. \end{equation*} How I can prove that it is a Banach space?
$m(f)=\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|}$ is not a norm on the set of continuous functions on $[0, 1]$ with $f(0)=0$, because not all continuous functions on $[0, 1]$ are Lipschitz, so $m$ is not even finite-valued. For example, take $f(x)=\sqrt{x}$. It's continuous, $f(0)=0$, but $$\frac{|\sqrt{x}-\sqrt{y}|}{|x-y|}=\frac{1}{|\sqrt{x}+\sqrt{y}|}$$ And taking $x=0$, we get that $$m(f)\geqslant \lim_{y \to 0+0} \frac{1}{\sqrt{y}}=+\infty$$
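One can watch the blow-up numerically for $f(x)=\sqrt x$ (illustration only): the difference quotient pinned at $x=0$ equals $1/\sqrt y$, which grows without bound as $y\to 0^+$.

```python
from math import sqrt

def ratio(x, y):
    """Difference quotient |f(x) - f(y)| / |x - y| for f(t) = sqrt(t)."""
    return abs(sqrt(x) - sqrt(y)) / abs(x - y)

# At x = 0 the quotient is 1/sqrt(y), unbounded as y -> 0+.
r4 = ratio(0, 1e-4)   # 1/sqrt(1e-4)  = 100
r8 = ratio(0, 1e-8)   # 1/sqrt(1e-8)  = 10000
```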
{ "language": "en", "url": "https://math.stackexchange.com/questions/3663825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Let $k\geq2$ and $l\in\mathbb{N}$. Prove that $(k+1)^l > k^l$. As a part of another proof, I need to prove that $(k+1)^l > k^l$, for all $k$, with $k\geq2$ and $l\in\mathbb{N}$. I need some help starting the proof. Thank you.
Another way to see this is to take $l$th roots. Then all you want to prove is that $$k+1>k,$$ which is clearly true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3663991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Eigenvectors and eigenvalues for finding the matrix representing T with respect to $\alpha$ Let $T: \mathcal P_2(\Bbb R)\rightarrow \mathcal P_2(\Bbb R)$ be the linear transformation defined by $T(p(x))=p(1)(x^2+x)+p(0)$ Let $\alpha=\{x^2, x, 1\}$ be the standard ordered basis for $\mathcal P_2(\Bbb R)$. Find the matrix representing $T$ with respect to $\alpha$, i.e find $[T]_\alpha$ I let $T(x^2)= x^2+x$, then $[T(x^2)]_\alpha$=$$\left[\begin{matrix}1\\1\\0\end{matrix}\right] $$ similarly, $T(x)= x^2+x$, $T(1)=x^2+x$, the $[T(x)]_\alpha, [T(1)]_\alpha$ are all the same as the first one, so I have $[T]_\alpha$=$$\left[ \begin{matrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 0 & 0 \\ \end{matrix}\right] $$ Does that seem right?
Almost. Actually, $T(1)=x^2+x+1$. Therefore$$[T]_\alpha=\begin{bmatrix}1&1&1\\1&1&1\\0&0&1\end{bmatrix}.$$
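A small computational cross-check (not a proof): representing $p(x)=a_2x^2+a_1x+a_0$ by its coordinate vector $(a_2,a_1,a_0)$ in the basis $\alpha=\{x^2,x,1\}$, apply $T$ to each basis polynomial and read the images off as columns.

```python
def T(coords):
    """T(p) = p(1)*(x^2 + x) + p(0), with p given as coordinates (a2, a1, a0)."""
    a2, a1, a0 = coords
    p_at_1 = a2 + a1 + a0
    p_at_0 = a0
    # Result is p(1)*x^2 + p(1)*x + p(0), in coordinates w.r.t. {x^2, x, 1}:
    return (p_at_1, p_at_1, p_at_0)

cols = [T((1, 0, 0)), T((0, 1, 0)), T((0, 0, 1))]   # images of x^2, x, 1
matrix = [list(row) for row in zip(*cols)]           # columns -> matrix rows
```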
{ "language": "en", "url": "https://math.stackexchange.com/questions/3664184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find an example of an entire function such that $f(z)=-f(iz)$ for all $z$ Is there a systematic way to find an example of an entire function $f$ such that $f(z)=-f(iz)$ for all $z$? By testing at monomials, we find that $f(z)=z^6$ is a solution. But how can we find other solutions? I tried writing $f=u+iv$ and then differentiated and then applied Cauchy-Riemann equations, but that just ended up with Cauchy-Riemann equations for $u$ and $v$ itself.
* *All the monomials $z^2,z^6,z^{10},\dots$ are solutions. *If $g(z)$ is any entire function, then $$ f(z) = \tfrac14\big( g(z) - g(iz)+g(-z)-g(-iz) \big) \tag{a} $$ is also entire and satisfies $f(z)=-f(iz)$. (This is similar to how the combination $\frac12\big( h(x)-h(-x) \big)$ produces an odd function for any real-valued $h$.) *If one applies the averaging operation (a) to a monomial $z^k$, then the result is $z^k$ again if $k\equiv 2\pmod 4$ and $0$ otherwise. In particular, the operation (a) converts the power series for $f(z)$ into the subseries consisting only of the terms $a_kz^k$ where $k\equiv 2\pmod 4$. *Similarly, if $h(z)$ is any entire function, then $f(z)=z^2h(z^4)$ is an entire function with the given property, because its power series consists only of terms $a_kz^k$ where $k\equiv 2\pmod 4$.
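The averaging trick (a) can be spot-checked numerically (illustration only; $g=\exp$ is an arbitrary choice of entire function): the resulting $f$ should satisfy $f(z)+f(iz)=0$ at every sample point.

```python
import cmath

def f(z, g=cmath.exp):
    """Averaging trick (a): f(z) = (g(z) - g(iz) + g(-z) - g(-iz)) / 4."""
    return (g(z) - g(1j * z) + g(-z) - g(-1j * z)) / 4

# f(z) + f(iz) should vanish identically; check at a few sample points.
samples = [0.3 + 0.7j, -1.2 + 0.1j, 2.0 + 0j, 1j]
max_residual = max(abs(f(z) + f(1j * z)) for z in samples)
```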
{ "language": "en", "url": "https://math.stackexchange.com/questions/3664353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is this an exponential distribution? I have a probability density function $f(x) = k \cdot 3e^{-3x}$, with $k\ne 0$ constant. I saw someone saying this is the exponential distribution with $\lambda = 3$. However, isn't the exponential distribution of the form $\lambda e^{-\lambda x}$? Doesn't the multiplication by a constant change the distribution? Or does it just not matter if I multiply it by any value I want?
Hint: consider the equation $1 = \int_0^{\infty} k \cdot 3e^{-3x}dx$.
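Spelling the hint out numerically (illustration only): $\int_0^\infty 3e^{-3x}\,dx=1$, so the normalization condition forces $k=1$, and $f$ is exactly the exponential density with $\lambda=3$.

```python
from math import exp

# Midpoint-rule approximation of int_0^50 3*exp(-3x) dx;
# the tail beyond 50 is negligible (about e^(-150)).
N = 200_000
h = 50 / N
integral = sum(3 * exp(-3 * (i + 0.5) * h) for i in range(N)) * h

k = 1 / integral   # the density integrates to 1 only when k = 1
```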
{ "language": "en", "url": "https://math.stackexchange.com/questions/3664465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $T_n = \{\frac{a}{n} \mid a \in \mathbb{Z}\}$, then what are $\bigcup_{n\in\mathbb{N}}T_n$ and $\bigcap_{n\in\mathbb{N}}T_n$? For each positive integer $n$, let $T_n = \{\frac{a}{n} \mid a \in \mathbb{Z}\}$. What are $\bigcup_{n \in \mathbb{N}} T_n$ and $\bigcap_{n \in \mathbb{N}} T_n$? I'm pretty sure the first one is just $\mathbb{Q}$, the rationals. Since the set will have all possible numerators over all possible denominators. The second I'm not so sure of. It's certainly not empty, since $1/2 = 2/4$ so $T_2$ and $T_4$ have non-empty intersection. I am currently leaning towards this set also being $\mathbb{Q}$, since we will get every fraction here as well: if $a/b$ is in $T_b$, then it is also in $T_{2b}$ as $2a/2b$.
You are correct that $\bigcup_{n\in\mathbb{N}}T_n=\mathbb{Q}$. That's simply because any $q\in\mathbb{Q}$ is of the form $q=\frac{a}{b}$ for some $a\in\mathbb{Z}$ and $b\in\mathbb{N}$ and so $q\in T_b$. You are wrong that $\bigcap_{n\in\mathbb{N}}T_n=\mathbb{Q}$. This has no chance of happening since $\bigcap_{n\in\mathbb{N}}T_n\subseteq T_m$ for any $m$ and each $T_m$ is a proper subset of $\mathbb{Q}$. So first note that if $n\in\mathbb{Z}$ then $n\in T_b$ for any $b\in\mathbb{N}$. That's because $n=\frac{bn}{b}$ regardless of $b$. Meaning $\mathbb{Z}\subseteq T_b$ for any $b\in\mathbb{N}$ and so $\mathbb{Z}\subseteq\bigcap_{n\in\mathbb{N}}T_n$. We will show that $\bigcap_{n\in\mathbb{N}}T_n\subseteq\mathbb{Z}$. So assume that $q\in\mathbb{Q}\backslash\mathbb{Z}$, i.e. $q=\frac{a}{b}$ for some $a\in\mathbb{Z}\backslash\{0\}$ and $b>1$ relatively prime. Obviously $q\in T_b$. Take a prime number $p$ not dividing $b$. It is enough to show that $q\not\in T_p$. Indeed, $\frac{a}{b}=\frac{x}{p}$ implies $ap=xb$ which cannot hold since $b>1$ is relatively prime with $a$ and with $p$. This shows that $\bigcap_{n\in\mathbb{N}}T_n =\mathbb{Z}$.
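Both halves of the argument are easy to probe with exact rational arithmetic (a finite spot-check, not a proof): $q\in T_n$ exactly when $qn$ is an integer, every integer lies in every $T_n$, and $\tfrac12$ fails for $T_3$ — a prime not dividing the denominator, as in the proof.

```python
from fractions import Fraction

def in_T(q, n):
    """q lies in T_n = {a/n : a an integer} iff q*n is an integer."""
    return (q * n).denominator == 1

# Every integer m lies in every T_n, since m = (n*m)/n.
integers_in_all = all(in_T(Fraction(m), n)
                      for m in range(-10, 11) for n in range(1, 50))

# 1/2 lies in T_2 and T_4, but not in T_3.
half = Fraction(1, 2)
half_in_all = all(in_T(half, n) for n in range(1, 50))
```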
{ "language": "en", "url": "https://math.stackexchange.com/questions/3664620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Can Cavalieri's Principle be applied to a Pyramid and a Cylinder? I know that Cavalieri's Principle makes it so that if two prisms/cylinders, or two pyramids/cones have the same area at a cross section parallel to the base, and they have the same height, they also have the same volume. However, does it still apply to a pyramid/cone and a prism/cylinder? Research: All examples of Cavalieri's Principle show two prisms/cylinders or two pyramids/cones. They never get mixed. I have found that on Wolfram Mathworld it is defined as "...the same distance from their respective bases are always equal..." implying that a cone and a prism wouldn't fall under this since the prism's cross-section would remain constant, but the cone's would increase or decrease based on height.
As you point out, if we're comparing a solid with constant cross-sectional area to one whose cross sectional area vanishes as we move away from its base, then Cavalieri's Principle is inapplicable. Put another way, a pyramid/cone might have the same volume and height as a prism/cylinder, but even in such a case, they cannot have equal cross-sectional areas throughout.
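A concrete numerical illustration (the dimensions are arbitrary choices): a cone and a cylinder can share the same height and total volume, yet their cross-sectional areas agree at only one height — so Cavalieri's Principle never applies to such a pair.

```python
from math import pi, isclose

R, h = 3.0, 5.0                      # cone: base radius R, height h
vol_cone = pi * R**2 * h / 3
r = R / 3**0.5                       # cylinder radius giving the same volume
vol_cyl = pi * r**2 * h

def cone_area(t):
    """Cross-sectional area of the cone at height t above its base."""
    return pi * (R * (1 - t / h)) ** 2

cyl_area = pi * r**2                 # constant at every height

# The two areas match only where (1 - t/h)^2 = 1/3.
t_match = h * (1 - 1 / 3**0.5)
```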
{ "language": "en", "url": "https://math.stackexchange.com/questions/3664979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Irreducible component of a geometrically reduced algebraic variety This is from Qing Liu's Algebraic Geometry and Arithmetic Curves, page 131: Why can we assume that $X$ is integral? Can we prove that an irreducible component of a geometrically reduced algebraic variety given with the reduced closed subscheme structure is geometrically reduced?
Liu's definition of an algebraic variety over a field $k$ is a scheme of finite type over $k$. In particular, such a scheme $X$ is noetherian and has finitely many irreducible components $X_1\cup\cdots\cup X_n$. Then $X_1\setminus (X_2\cup\cdots\cup X_n)$ is an open irreducible subscheme, and so we may pick an affine open irreducible subscheme $U\subset X_1\setminus (X_2\cup\cdots\cup X_n)\subset X$. As geometrically reduced implies reduced and any open subscheme of a reduced scheme is reduced, $U$ is reduced. As open immersions are preserved under base change, we have that $U_{\overline{k}}$ is an open subscheme of $X_{\overline{k}}$, which implies $U_{\overline{k}}$ is reduced by the same logic as in the previous sentence. So $U$ is an affine, irreducible, geometrically reduced subscheme. In particular, $U$ is an integral affine scheme. Further, any regular closed point of $U$ is a regular closed point of $X$, so if $U$ has a regular closed point, then $X$ must have a regular closed point. It should be pointed out that it's much easier to work with irreducible opens for this reduction rather than (closed) irreducible components. It's automatic that any open subscheme of a reduced scheme is reduced, but one needs to add more conditions when speaking of closed subschemes. So one might as well make one's life easier by just jumping straight to an irreducible open affine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3665176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How Singular Integrals Theory is applied on Partial Differential Equations Currently I'm interested in Singular Integrals Theory (I'm a beginner). I have read that this theory has deep relations with PDE's. For that reason I would like to know if there exists some web page, guide, essay or book which explain how Singular Integrals Theory are applied in PDE's or how it is used nowadays. Thanks for your help!
Take for example Poisson’s equation: $-\Delta u = f \text{ on }\mathbb{R}^3$, where $f\in L^2(\mathbb{R}^3)$ is a compactly supported function. Then a solution to this partial differential equation is given by: $$ u(x)=\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{f(y)}{|x-y|}\,{\mathrm d}y, $$ where $|x-y|$ is the Euclidean distance between $x$ and $y$. It can be shown that this solution is the only solution to Poisson’s equation that fulfils $u(x)\to 0$ as $|x|\to\infty$. You see that $u$ is given by a singular integral. The study of singular integrals now allows us to conclude properties of $u$. For example, one can show that $u\in H^2_{\mathrm{loc}}(\mathbb{R}^3)$, that is $u$ is twice weakly-differentiable. Similar results also hold for $f\in L^p(\mathbb{R}^3)$, $1<p<\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3665334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find best approximation of $\sin(\pi x)$ over $[0,1]$ with quadratic polynomial $a_0+a_1x+a_2x^2$ Use the theory of orthogonal functions to find the best-in-the-mean approximation of the function $\sin(πx)$ on the interval $[0,1]$ by a second-order polynomial. That is, find coefficients $a_0, a_1$ and $a_2$ such that $$\int^1_0 (\sin(\pi x)-a_0-a_1x-a_2x^2)^2 \, dx $$ takes the minimal possible value. I feel as though this has something to do with Fourier series but I really can't be sure because I am not very familiar with this area. Also, I'm not sure what "best in mean approximation" means, so any help with that would be great.
\begin{align} \int^1_0 ( & \sin(\pi x)-a_0-a_1x-a_2x^2)^2 \, dx \\[8pt] = \int_0^1 \Big( & \sin^2(\pi x) + a_0^2 + a_1^2 x^2 + a_2^2 x^4 \\ & {}-2a_0\sin(\pi x) - 2a_1x\sin(\pi x) - 2a_2x^2 \sin (\pi x) \\ & {} +{}2a_0a_1 x +2a_0a_2x^2 +2a_1a_2 x^3 \Big) \, dx \\[10pt] = {} & \int_0^1 \sin ^2(\pi x)\, dx + a_0^2 \int_0^1 1\,dx + a_1^2 \int_0^1 x^2\, dx + a_2^2\int_0^1 x^4\,dx \\[10pt] & {} - 2a_0 \int_0^1\sin(\pi x)\,dx - 2a_1 \int_0^1 x\sin(\pi x)\,dx - 2a_2 \int_0^1 x^2 \sin(\pi x)\,dx \\[10pt] & {} + 2a_0a_1 \int_0^1 x\,dx +2a_0a_2 \int_0^1 x^2\, dx + 2a_1 a_2 \int_0^1 x^3 \, dx \\[12pt] = {} & B_0 + B_1a_0^2 + B_2a_1^2 + B_3 a_2^2 - 2B_4 a_0 - 2B_5 a_1 -2B_6 a_2 \\ & {} +2B_7 a_0 a_1 + 2B_8 a_0 a_2 + 2B_9 a_1 a_2. \end{align} So you have a quadratic polynomial in three variables and the problem is to find the values of the variables that minimize the value of the polynomial.
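The minimization can also be carried out numerically via the normal equations $Ga=b$, with Gram matrix $G_{ij}=\int_0^1 x^{i+j}\,dx$ and right-hand side $b_i=\int_0^1 x^i\sin(\pi x)\,dx$. A sketch (the closed forms $2/\pi$, $1/\pi$, $(\pi^2-4)/\pi^3$ for the $b_i$ come from standard integration by parts and are worth re-deriving before relying on them):

```python
import numpy as np

# Gram matrix of the monomial basis {1, x, x^2} in L^2(0,1): G[i][j] = 1/(i+j+1).
G = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])

# b_i = int_0^1 x^i sin(pi x) dx, from integration by parts.
pi = np.pi
b = np.array([2 / pi, 1 / pi, (pi**2 - 4) / pi**3])

a = np.linalg.solve(G, b)          # minimizing coefficients (a_0, a_1, a_2)

def mean_square_error(c):
    """Midpoint-rule approximation of int_0^1 (sin(pi x) - c0 - c1 x - c2 x^2)^2 dx."""
    x = (np.arange(100_000) + 0.5) / 100_000
    err = np.sin(pi * x) - (c[0] + c[1] * x + c[2] * x**2)
    return float((err**2).mean())
```

Since $\sin(\pi x)$ is symmetric about $x=\tfrac12$ and the space of quadratics is invariant under $x\mapsto 1-x$, the minimizer must satisfy $a_1=-a_2$ — a useful consistency check on the computed coefficients.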
{ "language": "en", "url": "https://math.stackexchange.com/questions/3665501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Show $\sqrt[3]{5}$ is not contained in any cyclotomic extension of $\mathbb{Q}$. Find the Galois group of $x^3-5$ over $\mathbb{Q}$, then show $\sqrt[3]{5}$ is not contained in any cyclotomic extension of $\mathbb{Q}$. My attempt: The roots of $x^3-5$ are $\sqrt[3]{5},\zeta_3\sqrt[3]{5},\zeta_3^2\sqrt[3]{5}$. So the splitting field for $x^3-5$ over $\mathbb{Q}$ is $\mathbb{Q}(\sqrt[3]{5},\zeta_3)$, where $\zeta_3$ is a primitive $3^\text{rd}$ root of unity. By the Degree Formula for field extensions, we have $$ [\mathbb{Q}(\sqrt[3]{5},\zeta_3):\mathbb{Q}]=[\mathbb{Q}(\sqrt[3]{5},\zeta_3):\mathbb{Q}(\zeta_3)][\mathbb{Q}(\zeta_3):\mathbb{Q}]=3\varphi(3)=3\cdot2=6, $$ where $\varphi$ is Euler's totient function. Define the automorphisms $$ \sigma_{ij}:=\begin{cases} \sqrt[3]{5}&\longmapsto\quad\zeta_3^i\sqrt[3]{5}\\ \zeta_3&\longmapsto\quad\zeta_3^j \end{cases} $$ where $0\leq i\leq 2$ and $1\leq j\leq 2$. Counting the $\sigma_{ij}$, we see we have found $6$ automorphisms, so we have found all elements of the Galois group. Since there is no element of order $6$, we know the Galois group is $S_3$. Finally, suppose $\sqrt[3]{5}$ is contained in a cyclotomic extension of $\mathbb{Q}$, call it $\mathbb{Q}(\zeta_n)$. By the Fundamental Theorem of Galois Theory, $S_3$ is a subgroup of $\text{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q})$. Since $\text{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q})$ is abelian, this implies $S_3$ is abelian, a contradiction. Hence $\sqrt[3]{5}$ is not contained in any cyclotomic extension of $\mathbb{Q}$. Is this correct?
The splitting field is correct and the reasoning is the right path to take, but notice that in order to show that $\mathbb{Q}(\sqrt[3]5) \not\subset \mathbb{Q}(\zeta_{n})$ you don't have to bring $S_{3}$ into play: if $\mathbb{Q}(\sqrt[3]5) \subset \mathbb{Q}(\zeta_{n})$, then since Gal($\mathbb{Q}(\zeta_{n})/\mathbb{Q}$) is in particular abelian, every subgroup is normal. Thanks to the fundamental theorem of Galois theory this condition translates into: every subextension is normal over $\mathbb{Q}$. But $\mathbb{Q}(\sqrt[3]5)$ can't be, since it is not the splitting field of $x^{3}-5$; in fact it is missing $\zeta_{3}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3665586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Multiplication of an orthogonal matrix and a skew symmetric matrix Let $A\in O(n;\mathbb{R})$ such that for every $1\leq i\leq n$, $1>a_{ii}>0$ and $a_{ii}\geq |{a_{ij}}|$ for $j\neq i$. Prove that there exists a skew symmetric matrix $B$ such that all diagonal elements of $AB$ are positive. For $n=2$ the proof is simple: let $b_{12}=k$ and $b_{21}=-k$, then the diagonal elements of $AB$ are $-ka_{12}$ and $ka_{21}$. Since $a_{11},a_{22}>0$, $a_{21}$ and $a_{12}$ have opposite signs, so a satisfying $k$ exists. I have also proved the case of $n=3$ by a similar sign argument, but I am not sure how to prove it in general. Thanks! After some investigations I suspect that the sign argument can't be generalized, i.e., we really need the entries of $B$, not only the signs of the entries of $B$, for such a $B$ to exist. Edit: I believe some conditions here are redundant: overall we have $n(n-1)/2$ entries to choose for $B$ and we only need to satisfy $n$ inequalities, so as $n$ gets large it seems like it can always be done. But how to prove it, though?
This is not always possible. E.g. when $n=5$, it is easy to generate by computer a symmetric orthogonal matrix $A$ such that $0<a_{ii}<1$ and $a_{ii}\ge|a_{ij}|$ using the following Octave/Matlab script: n=5; D=diag([ones(n-1,1); -1]); for k=1:10000 [U,S,V]=svd(2*rand(n,n)-1); A=U*D*U'; if min(diag(A))>0 && max(diag(A))<1 && min(diag(A)'-max(abs(A)))>=0 A, break; end end In this case, the trace of $AB$ is necessarily zero. Therefore, if $AB$ has a positive diagonal entry, it must have a negative diagonal entry too. However, the desired $B$ exists if $A$ satisfies some additional conditions. By permuting the rows and columns of $A$ if necessary, we may assume that the first $r$ diagonal entries of $A^2$ are equal to $1$ and the remaining $n-r$ diagonal entries are less than $1$. So, if $\mathbf a_j$ denotes the $j$-th column of $A$, the $j$-th row of $A$ must be equal to $\mathbf a_j^T$ when $j\le r$. Now the desired $B$ exists if one of the following sufficient conditions is satisfied: * *$r=0$. In this case, all diagonal entries of $A^2$ are smaller than $1$. Therefore, when $B=A^T-A$, all diagonal entries of $AB=I-A^2$ are positive. *$r>0$ and the column space of the augmented matrix $M=\pmatrix{\mathbf a_{r+1}&\cdots&\mathbf a_n}$ contains some entrywise nonzero vector $\mathbf v=(v_1,v_2,\ldots,v_n)^T$ (e.g. when $M$ has an entrywise nonzero column or when $M$ has not any zero row). Let $$ B=A^T(I+\epsilon \mathbf v\mathbf v^T)-(I+\epsilon \mathbf v\mathbf v^T)A. $$ Then $AB=I-A^2+\epsilon(\mathbf v\mathbf v^T-A\mathbf v\mathbf v^TA)$. Since $\mathbf v\perp\mathbf a_j$ for all $j\le r$, the $j$-th diagonal entry of $AB$ is equal to $\epsilon v_j^2$ when $j\le r$, or $1-(A^2)_{jj}+\epsilon\times\text{some constant}$ when $j>r$. It follows that $AB$ has a positive diagonal when $\epsilon>0$ is small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3665947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why matrices commuting with $\small\begin{bmatrix} 0&1\\-1&0\end{bmatrix}$ represent complex numbers? I am trying to understand which $2$ by $2$ real matrices represent complex numbers in the following way. Let $J=\begin{bmatrix} 0&1\\-1&0\end{bmatrix}$ and let $A=\begin{bmatrix} a&b\\c&d\end{bmatrix}$ be any real matrix. If $A$ represents a complex number (by the standard embedding of the complex field into the matrix ring), then $A$ should commute with the matrix $J$, which is the image of the complex number $i$. Q. I want to understand why the matrices commuting with $J$ are precisely the matrices representing complex numbers?
Let's consider $\varphi:\mathbb C\rightarrow M_2(\mathbb C)$, $\varphi(a+ib)=\pmatrix{a & b \\ -b & a}$ the standard embedding of $\mathbb C$ into the matrix ring. Consider $Z(J)=\{A\in M_2(\mathbb C)\ | \ JA=AJ\}$ the set of the matrix commuting with $J$. Your question is equivalent to show that $Z(J) = \varphi(\mathbb C)$. And this is true because: \begin{gather} A=\pmatrix{a &b \\ c & d}\in Z(J) \Longleftrightarrow \pmatrix{a &b \\ c & d}\pmatrix{0 & 1 \\ -1 & 0} = \pmatrix{0 &1 \\ -1 & 0}\pmatrix{a &b \\ c & d} \Longleftrightarrow\\ \begin{cases} -b = c\\ a=d \end{cases}\Longleftrightarrow A=\pmatrix{a &b \\ -b & a}\in \varphi(\mathbb C) \end{gather}
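The entrywise computation can be confirmed mechanically (a sketch with plain tuples; illustration only): every matrix of the form $\pmatrix{a&b\\-b&a}$ commutes with $J$, while a matrix violating $c=-b$, $d=a$ does not.

```python
def matmul(X, Y):
    """2x2 matrix product, matrices given as ((a, b), (c, d))."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

J = ((0, 1), (-1, 0))

def commutes_with_J(A):
    return matmul(A, J) == matmul(J, A)

# phi(a + ib) = ((a, b), (-b, a)); these all commute with J.
phi = lambda a, b: ((a, b), (-b, a))
all_commute = all(commutes_with_J(phi(a, b))
                  for a in range(-3, 4) for b in range(-3, 4))

counterexample = ((1, 2), (3, 4))   # violates c = -b and d = a
```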
{ "language": "en", "url": "https://math.stackexchange.com/questions/3666098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Can any $(x, y, z)$ satisfy $5x^2+2y^2+6z^2-6xy-2xz+2yz<0$? Please tell me whether there is any $(x, y, z)$ which satisfies $5x^2+2y^2+6z^2-6xy-2xz+2yz<0$. No working, or just solving it by calculator, are both fine. Thank you.
Consider that you search for the extremum of $$F=5x^2+2y^2+6z^2-6xy-2xz+2yz$$ $$\frac{\partial F}{\partial x}=10 x-6 y-2 z=0 \qquad \frac{\partial F}{\partial y}=-6 x+4 y+2 z=0 \qquad \frac{\partial F}{\partial z}=-2 x+2 y+12 z=0$$ The only solution is $x=y=z=0$, so the minimum value of $F$ is $0$, and $F$ is non-negative for any $(x,y,z)$. (To see that this critical point really is a minimum, note that $F=\mathbf v^T M\mathbf v$ with $M=\begin{pmatrix}5&-3&-1\\-3&2&1\\-1&1&6\end{pmatrix}$, whose leading principal minors are $5$, $1$ and $5$, all positive; so $M$ is positive definite and $F\ge 0$ with equality only at the origin.)
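Numerically (illustration only): the symmetric matrix $M$ of the quadratic form, with off-diagonal entries equal to half the cross coefficients, has all eigenvalues positive, which confirms $F\ge 0$ with equality only at the origin.

```python
import numpy as np

# Symmetric matrix of F: diagonal = coefficients of x^2, y^2, z^2;
# off-diagonal entries are half the cross coefficients (-6xy -> -3, etc.).
M = np.array([[5.0, -3.0, -1.0],
              [-3.0, 2.0, 1.0],
              [-1.0, 1.0, 6.0]])

eigenvalues = np.linalg.eigvalsh(M)

def F(v):
    """Evaluate the quadratic form F(x, y, z) = v^T M v."""
    v = np.asarray(v, dtype=float)
    return float(v @ M @ v)
```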
{ "language": "en", "url": "https://math.stackexchange.com/questions/3666248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Probability zero vs impossible I understand that probability $0$ does not mean 'impossible' - because if we look for instance at a uniform distribution over $[0, 1]$ then while each of the singleton events $\{r\}$ for $0\leq r \leq 1$ has probability $0$, if we carried out the experiment then we would get exactly one of the numbers in $[0,1]$, and so these events are not actually impossible, even though they have probability $0$. However, if we look at a distribution defined by a density function which is zero on $[0,\frac{1}{2}]$ and non-zero on $(\frac{1}{2},1]$ (let's say with a continuous transition between them), then we know that for any $0\leq r\leq \frac{1}{2}$ the event $\{r\}$ is impossible, and for $\frac{1}{2}<r\leq 1$ the event is possible, yet still has probability $0$. Both have probability $0$, but one of them is possible, and the other is not. Is there a definition that captures this distinction between the two cases?
However, if we look at a distribution defined by a density function which is zero on $[0,\frac{1}{2}]$ and non-zero on $(\frac{1}{2},1]$ (let's say with a continuous transition between them), then we know that for any $0\leq r\leq \frac{1}{2}$ the event $\{r\}$ is impossible, and for $\frac{1}{2}<r\leq 1$ the event is possible, yet still has probability $0$. Why do you say that? Consider the uniform distribution on $[0,1]$. Let $r$ be any point of $[0,1]$. Use this density: $$ f(x) = \begin{cases} 0,\qquad x <0 \text{ or }x=r \text{ or } x>1\\ 1,\qquad \text{otherwise.} \end{cases} $$ Of course this is still the same distribution: uniform on $[0,1]$. But now your reasoning says $r$ is impossible. So your notion of "impossible" depends not on the distribution itself, but in your choice of density function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3666392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
A polynomial function representing Logical OR Is there a way to represent the logical OR function of two or more variables as a polynomial/value function, so that $f(x,y)>0$ if $\text{OR}(x,y)>0$? Additional Edit: Considering $\geqslant 0$ means TRUE, and $<0$ means FALSE, for both the variables and the function. I.e., in case $x>0$ and $y<0$, $\text{OR}(x,y)>0$, but $f(x,y)$ might depend on the relative magnitudes of $x$ and $y$.
Yes: $OR(x,y)=1-(1-x)(1-y)$. Assuming $0$ is false and $1$ is true.
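With the 0/1 convention the identity can be verified over the whole truth table, and it extends to any number of variables as $1-\prod_i(1-x_i)$ (illustration only):

```python
from itertools import product
from functools import reduce

def or_poly(*xs):
    """1 - (1-x1)(1-x2)...(1-xn): the polynomial form of logical OR (0/1 inputs)."""
    return 1 - reduce(lambda acc, x: acc * (1 - x), xs, 1)

# Check the full truth table for two and three variables.
two_var_ok = all(or_poly(x, y) == int(x == 1 or y == 1)
                 for x, y in product((0, 1), repeat=2))
three_var_ok = all(or_poly(x, y, z) == int(any((x, y, z)))
                   for x, y, z in product((0, 1), repeat=3))
```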
{ "language": "en", "url": "https://math.stackexchange.com/questions/3666597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }