Prove that if $ \ A\cup B \subseteq C \cup D,\ A \cap B = \emptyset \ \land \ C \subseteq A \implies B \subseteq D.$ Question: Prove that if $ \ A\cup B \subseteq C \cup D,\ A \cap B = \emptyset \ \land \ C \subseteq A \implies B \subseteq D$. My attempt: Let $ \ x\in B \implies x \in A \cup B \implies x \in C \cup D \because A\cup B \subseteq C \cup D$. Now, $ x \in C \lor x\in D$. If $\ x \in C \implies x \in A \because C \subseteq A$. But that's not possible $\because x \notin A \cap B$, in particular $ x \notin A$. So we must have $ x \in D$. I found this proof a little challenging. Not quite sure if this is the correct way to prove it. Is my logic correct?
It is correct. Well done. Perhaps one suggested modification in the "But that's not possible" part to make it clearer: since we started with $x \in B$, if $x \in A$, then $x \in A \cap B$, which is not possible, as $A \cap B = \emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Prove or disprove $ \ (A \times A) - (B \times B) = (A-B) \times (A-B)$ Question: Let $ A,B$ be sets. Prove or disprove: $ \ (A \times A) - (B \times B) = (A-B) \times (A-B)$ My attempt: Let $ \ (x,y) \in (A \times A) - (B \times B) \implies (x,y) \in (A \times A)$ and $ \ (x,y) \notin (B \times B) \implies x \in A $ and $ \ x\notin B$ and $ \ y \in A$ and $ \ y \notin B \implies (x,y) \in (A-B) \times (A-B)$ Let $ \ (x,y) \in (A-B) \times (A-B) \implies x \in A$ and $ x \notin B$ and $ \ y \in A$ and $ \ y \notin B \implies (x,y) \in (A \times A)$ and $ \ (x,y) \notin (B \times B) \implies (x,y) \in (A \times A) - (B \times B)$. Is this approach correct?
This is incorrect. The set $A \times A$ is the set of pairs whose coordinates both belong to $A$. The set $B \times B$ is the set of pairs whose coordinates both belong to $B$. So, $A \times A-B \times B$ consists of the pairs with both coordinates in $A$, except the ones with both coordinates in $B$. In other words, the first coordinate or the second coordinate does not belong to $B$. Without words, $$ A \times A-B \times B= \left ((A-B)\times A \right ) \cup \left (A\times (A-B) \right )$$
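To make the failure concrete (example added here, easy to verify by hand): take $A=\{1,2\}$ and $B=\{1\}$. Then $(A \times A) - (B \times B)=\{(1,2),(2,1),(2,2)\}$ while $(A-B) \times (A-B)=\{(2,2)\}$, so the pair $(1,2)$ witnesses that the proposed identity fails; only the inclusion $(A-B)\times(A-B)\subseteq (A\times A)-(B\times B)$ survives.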
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 5 }
Is there any framework similar to Bayesian Preposterior Analysis for Value of Information? I am trying to use the Value of Information concept using Bayesian Preposterior Analysis as proposed by Raiffa and Schlaifer 1961. However, due to certain limitations, mainly associated with the decision model, I am looking for alternate conceptions of quantifying the Value of Information. Is there any alternate way of quantifying the value of information without tying it to a decision model itself? For example, using Kullback Leibler divergence, the impact of the information on the prior and the posterior can be one such conception of the Value of information. Any help would be appreciated.
You're probably looking for "Entropy" or "Information Gain," also called mutual information. It's the expected value of the Kullback–Leibler divergence between the conditional distribution and the marginal. https://courses.cs.washington.edu/courses/cse455/10au/notes/InfoGain.pdf You can also look for articles on info-gain ratios and related terms.
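As a decision-model-free illustration, here is a minimal Python sketch (the joint table is a made-up example, not from the question) that computes the mutual information of two discrete variables as the expected KL divergence described above:

```python
import numpy as np

# Hypothetical 2x2 joint distribution p(x, y); any table summing to 1 works.
joint = np.array([[0.30, 0.20],
                  [0.10, 0.40]])

px = joint.sum(axis=1, keepdims=True)   # marginal p(x), shape (2, 1)
py = joint.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, 2)

# I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ),
# i.e. the expected KL divergence between conditional and marginal.
mask = joint > 0
mi = np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask]))
print(mi)  # in nats; divide by log(2) for bits
```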
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is induction on $p-q > 0$ used correctly here? I'm self learning Rotman's Algebraic topology and I've come across this theorem and proof. I have two questions: $(1.)$ How did the author arrive at $H_n(X^p, X^{q+1}) = 0$ if $q+1 \ge n$? $(2.)$ How is induction used properly here? Normally induction is done on a base case, $p=q$ in this example, and then assume it's true for $p-q = k$ and show that it's true for $p-q = k+1$. But I don't see anywhere an expression of the form $p-q = k+1$ being shown to be true. Can someone explain how induction is used here to show where $k+1$ is true?
You have $$(p',q') = (p,q+1)$$ where $$p-q >0,\;\;\text{and either}\;\;n > p\;\;\text{or}\;\;q \ge n$$ To apply the inductive hypothesis, you need to have \begin{align*} &{\small{\bullet}}\;\;p'-q' \ge 0\\[4pt] &{\small{\bullet}}\;\;p'-q' < p-q\\[4pt] &{\small{\bullet}}\;\;\text{either}\;\;n > p'\;\;\text{or}\;\;q' \ge n\\[4pt] \end{align*} Check the conditions one at a time . . . \begin{align*} &p-q > 0 \implies p-(q+1) \ge 0 \implies p'-q' \ge 0\\[4pt] &p'-q'=p-(q+1) < p-q\\[4pt] \end{align*} To show $n > p'\;\;\text{or}\;\;q' \ge n$, consider two cases . . . $\qquad$Case $(1)\,\!\!:\;n > p$.$\;\,$Then $n > p \implies n > p'$. $\qquad$Case $(2)\,\!\!:\;q \ge n$.$\;\,$Then $q \ge n \implies q+1 > n \implies q' > n \implies q' \ge n$. Thus, in each of the two cases, the requirements on $(p',q')$ are met. Therefore the inductive hypothesis can be applied.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Choosing branch cuts for complex integration When calculating integrals like $$\int_0^\infty \frac{x^\alpha}{1 + x^2}dx$$ for $\alpha \in (-1,1 )$, it is convenient to take the branch cut of the integrand along the positive real axis and then use the keyhole contour. I was wondering is there a way to use the principal branch cut, which runs along the negative real axis, to calculate this kind of integrals? I'm assuming some trivial manipulation of the integrand for $x>0$ would do the trick, but I fail to see it.
Using the principal branch, we can write the integral $\oint_C \frac{z^a}{z^2+1}\,dz$, where $C$ is comprised of (i) the real line segment from $-R$ to $R$ and (ii) the semi-circle in the upper-half plane, centered at the origin and with radius $R$, as $$\begin{align} \oint_C \frac{z^a}{z^2+1}\,dz&=\int_{-R}^0 \frac{x^a}{x^2+1}\,dx+\int_0^R \frac{x^a}{x^2+1}\,dx+\int_0^\pi \frac{(Re^{i\phi})^a}{(Re^{i\phi})^2+1}\,(iRe^{i\phi})\,d\phi\\\\ &=(1+e^{i\pi a})\int_0^R \frac{x^a}{x^2+1}\,dx+\int_0^\pi \frac{(Re^{i\phi})^a}{(Re^{i\phi})^2+1}\,(iRe^{i\phi})\,d\phi\tag1 \end{align}$$ where the two real-line integrals were combined by writing $x=-u$ in the first and using $(-u)^a=e^{i\pi a}u^a$ on the principal branch. As $R\to \infty$ the second integral on the right-hand side of $(1)$ approaches $0$. Hence, taking this limit and invoking the residue theorem we find that $$\begin{align} \int_0^\infty \frac{x^a}{x^2+1}\,dx&=\frac1{1+e^{i\pi a}}\,(2\pi i) \text{Res}\left(\frac{z^a}{z^2+1}, z=i\right)\\\\ &=\frac{\pi e^{i\pi a/2}}{1+e^{i\pi a}}\\\\ &=\frac{\pi}{2\cos(\pi a/2)} \end{align}$$
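For completeness (this step is skipped in the answer), the residue at the simple pole $z=i$ is $$\operatorname{Res}\left(\frac{z^a}{z^2+1}, z=i\right)=\lim_{z\to i}(z-i)\frac{z^a}{(z-i)(z+i)}=\frac{i^a}{2i}=\frac{e^{i\pi a/2}}{2i},$$ using the principal value $i^a=e^{i\pi a/2}$; and then $$\frac{\pi e^{i\pi a/2}}{1+e^{i\pi a}}=\frac{\pi}{e^{-i\pi a/2}+e^{i\pi a/2}}=\frac{\pi}{2\cos(\pi a/2)}.$$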
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Multi-variable function Integration I want to calculate this integral. $$ \int\int_B x^2y \space dx dy$$ where $$B = \{ (x,y) \in \mathbb{R}^2 | y \leq x \leq y^2+1, 0 \leq y \leq 1 \}$$ Now from the definition we know that the "$y$"-integral needs to be from $0$ to $1$. I have tried to calculate it like this $$ \int_0^1 \int_y^{y^2+1}x^2y \ \ dx dy$$ But since I know the answer to be $-\frac{1}{40}$ I knew I was wrong when I got $\frac{127}{120}$ as an answer, so I know I did something wrong with the limits. Can someone help me? I have done the calculation on paper, and it's too long to type in here as it is incorrect. Here are the various steps I did as proof of my work: $$ \int_0^1 \int_y^{y^2+1}x^2y \ \ dx dy$$ $$ \int_0^1 \left[ \frac{1}{3} x^3y \right]^{x=y^2+1}_{x=y} dy$$ $$ \int_0^1 \left(\frac{1}{3} (y^2+1)^3y \right) - \left(\frac{1}{3}y^4 \right) dy$$ $$ \left[ \frac{1}{24} y^8 + \frac{1}{4}y^4+ \frac{1}{3}y^3 +\frac{1}{2}y^2 -\frac{1}{15}y^5 \right]^1_0 =\frac{127}{120} $$
If $$B = \{ (x,y) \in \mathbb{R}^2 : y \leq x \leq y^2, 0 \leq y \leq 1 \}, $$ then you should be integrating: $$ \int_0^1 \int_{y}^{y^2} x^2y \ dx dy = - \int_0^1 \int_{y^2}^{y} x^2y \ dx dy = -\frac{1}{40}. $$ But if $$B = \{ (x,y) \in \mathbb{R}^2 : y \leq x \leq y^2\color{blue}{+1}, 0 \leq y \leq 1 \}, $$ then you should be integrating: $$ \int_0^1 \int_{y}^{y^2+1} x^2y \ dx dy = \int_0^1 \int_{0}^{x} x^2y \ dy dx + \int_1^2 \int_{\sqrt{x-1}}^{1} x^2y \ dy dx = \frac{67}{120}. $$
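As a sanity check (computation added here, not in the original answer), the asker's order of integration also gives the same value for the second region; only the antiderivative was off: $$\int_0^1 \int_y^{y^2+1} x^2 y\,dx\,dy=\frac13\int_0^1 y\left[(y^2+1)^3-y^3\right]dy=\frac13\int_0^1\left(y^7+3y^5+3y^3+y-y^4\right)dy=\frac13\left(\frac18+\frac12+\frac34+\frac12-\frac15\right)=\frac{67}{120}.$$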
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Any neat proof that $0$ is the unique solution of the equation $4^x+9^x+25^x=6^x+10^x+15^x$? It is obvious that both $f(x)= 4^x+9^x+25^x$ and $g(x)=6^x+10^x+15^x$ are strictly monotonic increasing functions. It is also easy to check that $0$ is a solution of the equation. Also I charted the functions, and it looks like $f(x)>g(x)$ for any $x\neq 0$, which can presumably be proved by studying the derivative of $h(x)=f(x)-g(x)$ and showing that $(0,0)$ is an absolute minimum point of $h(x)$. However $h(x)$ is a function with a messy derivative, and it does not look easy (for me) to find the zeroes of this derivative. Does anyone know an elegant proof (maybe an elementary one, without derivatives) for this problem?
HINT: $$a^2+b^2+c^2\geq ab+bc+ca$$
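To spell out how the hint applies (equality analysis added here): set $a=2^x$, $b=3^x$, $c=5^x$. Then $$4^x+9^x+25^x=a^2+b^2+c^2\geq ab+bc+ca=6^x+15^x+10^x,$$ with equality if and only if $a=b=c$ (since $a^2+b^2+c^2-ab-bc-ca=\tfrac12\left[(a-b)^2+(b-c)^2+(c-a)^2\right]$), i.e. $2^x=3^x=5^x$, which forces $x=0$. Hence $0$ is the unique solution.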
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Are the elements of a set within a set also the elements of the latter? It is my understanding that an event is a subset of the set of all possible outcomes (sample space). If however the sample space consists of elements which are sets, can an event be defined as one of the elements from these "inner" sets? Ex. A coin is flipped twice, $S=\{(H,T),(T,H),(T,T),(H,H)\}$. Is the event $A=(H)$ valid for the sample space despite not being a subset of $S$?
No. In the first place, if $x\in y$ and $y\in z,$ then in most cases it is not true that $x\in z.$ For example, consider the set $\{\ \{1,2,3\},\ \{2,3,4\}\ \}.$ This set has only two members. If $1,2,3,4$ were members of it, then it would have at least four members. In the second place, in the set $\{\,(H,T),(T,H),(T,T),(H,H)\,\},$ the pair $(H,T)$ is not a set with members $H$ and $T$; rather it is an ordered pair with components $H$ and $T.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Show that $\lim_{n \to \infty} \int_{0}^{1}|f_n(x)| \, dx= 0$ Let $\{f_n\}$ be a sequence of Lebesgue integrable functions and $g: [0,\infty)\to [0,\infty)$ be an increasing and continuous function such that $\displaystyle\lim_{x\to\infty}g(x) = \infty$. We also have that (i) $\int_0^1|f_n(x)|g(|f_n(x)|)\,dx < 100$ (ii) $ f_n \to 0$ almost everywhere We must show that $$ \lim_{n\to\infty}\int_0^1 |f_n(x)| \, dx = 0$$ What I have: Let $\epsilon > 0$, $g$ is continuous and increasing, then $$ \exists M > 0, g(|f_n(x)|) > M \mbox{ and } \frac{100}{M} < \frac{\epsilon}{2}$$ Now consider the following partition of $[0,1]$ $$E_1 = \{x \in [0,1]: |f_n(x)| \leq \frac{1}{M}\}$$ $$E_2 = \{x \in [0,1]: |f_n(x)| > \frac{1}{M}\}$$ Then $$\int_0^1 |f_n(x)|\,dx = \int_{E_1}|f_n(x)| + \int_{E_2}|f_n(x)| \, dx$$ Thus, \begin{align} & \left|\int_0^1|f_n(x)|\,dx\right| < \frac{1}{M}m(E_1) + \int_0^1 |f_n(x)| \frac{g(|f_n(x)|)}{g(|f_n(x)|)} \, dx \\[10pt] < {} & \frac{1}{M} + \frac{1}{M} \int_0^1 |f_n(x)|g(|f_n(x)|) \, dx < \frac{100}{M} + \frac{100}{M} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \end{align} I want to know if I got the proof right. Thanks.
$\forall \varepsilon>0$ $\exists M>0$ s.t. $100/g(M)<\varepsilon/3$. By Egoroff's theorem, there exists a measurable $E\subset[0, 1]$ with $m([0, 1]-E)<\varepsilon/(3M)$ and $f_n\xrightarrow{u.}0$ on $E$. Find a large enough $N$ such that $\forall n>N$, $|f_n|<\varepsilon/3$ on $E$. Then we have \begin{eqnarray} \int_0^1|f_n|dx&=&\int_E|f_n|dx+\int_{[0, 1]-E}|f_n|dx\\ &\leq&\varepsilon/3+\int_{([0, 1]-E)\cap\{|f_n|\leq M\}}|f_n|dx+\int_{([0, 1]-E)\cap\{|f_n|> M\}}|f_n|dx\\ &\leq&\varepsilon/3+\int_{[0, 1]-E}Mdx+\int_{[0, 1]\cap\{|f_n|>M\}}|f_n|dx\\ &\leq&2\varepsilon/3+\int_{[0, 1]\cap\{|f_n|>M\}}|f_n|\frac{g(|f_n|)}{g(|f_n|)}dx\\ &\leq&2\varepsilon/3+\frac{1}{g(M)}\int_0^1|f_n|g(|f_n|)dx\\ &\leq&\varepsilon \end{eqnarray}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\frac{\tan x}{x}>\frac{x}{\sin x}, x\in(0,\pi/2)$ Prove that $$\frac{\tan x}{x}>\frac{x}{\sin x},\;\;\; x\in(0,\pi/2).$$ My work I formulated $$f(x)=\tan x \sin x - x^2$$ in hope that if $f'(x)>0$ i.e. monotonic then I can conclude for $x>0, f(x)>f(0)$ and hence, prove the statement. However, I got $$f'(x)=\sin x + \sec x \tan x -2x, $$ where I am unable to conclude if $f'(x)>0.$ I also found $$f''(x)=\cos x + 2\sec^3x-\sec x-2,$$ $$f'''(x)=-\sin x (1-6\sec^4x+\sec^2x).$$ But I am not able conclude the sign of any of the higher derivatives either. Am I doing something wrong? Or is there some other way?
I believe the simplest proof is through the Cauchy-Schwarz inequality: $$\tan(x)\sin(x)=\int_{0}^{x}\frac{d\theta}{\cos^2\theta}\int_{0}^{x}\cos(\theta)\,d\theta\geq\left(\int_{0}^{x}\frac{d\theta}{\sqrt{\cos\theta}}\right)^2\geq\left(\int_{0}^{x}d\theta\right)^2=x^2. $$ In a similar fashion, for any $x\in\left(0,\frac{\pi}{2}\right)$ we have $\frac{\tan x }{x}\geq\left(\frac{x}{\sin x}\right)^2$ by Holder's inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 1 }
Showing a solution of a PDE is bounded. Let $x \in \mathbb{R}^3$, $t \in [1,\infty)$ and $ u(x,t)$ be a solution of the PDE $$\partial_{t}^{2}u - \Delta u = 0 \\ u(x,0) = 0 \\ \partial_{t} u(x,0) = v(x)$$ where $v$ and $\partial_{x_i}v$ are both integrable on $\mathbb{R}^3$ for all $1 \leq i \leq 3$. Show that there exists $C > 0$ such that $|u(x,t)| \leq \frac{C}{t}$ I'm not too sure how to approach this. Before this I just learned about solving some PDEs with Fourier Transforms, but conditions usually involved the initial functions being in Schwartz Space. Any help is appreciated
$\DeclareMathOperator{\p}{\partial} $Recall Kirchhoff's formula for the solution to the wave equation with initial conditions $u(x,0)=g$ and $\p_tu(x,0)=v(x)$: $$u(x,t)=\frac{1}{4\pi t^2}\int_{\p B(x,t)}g(y) + \nabla g(y)\cdot(y-x) + tv(y)~d\sigma(y).$$ In your case $g=0$ and so the representation formula simplifies significantly. Now notice that for $y\in\p B(x,t)$ the outward pointing unit normal is $\nu=\frac{y-x}{t}.$ It then follows that \begin{align*} u(x,t) & = \frac{1}{4\pi t^2}\int_{\p B(x,t)}tv(y)\frac{y-x}{t}\cdot\nu ~d\sigma(y) \\ & = \frac{1}{4\pi t^2}\int_{B(x,t)}\operatorname{div}\left(v(y)(y-x)\right) ~d y \\ & = \frac{1}{4\pi t^2}\int_{B(x,t)}3v(y) + \nabla v(y)\cdot(y-x)~d y. \end{align*} Note that we needed to use the divergence theorem to avoid working with a surface integral. Now for $t \geq 1$ we have \begin{align} |u(x,t)|&\leq\frac{1}{4\pi t^2}\left(3\|v\|_{L^1}+\int_{B(x,t)}\|\nabla v(y)\|\|y-x\|~dy\right)\\ & \leq\frac{1}{4\pi t^2}\left(3\|v\|_{L^1}+t\,\|\|\nabla v\|\|_{L^1}\right)\\ & \leq \frac{3\|v\|_{L^1}+\|\|\nabla v\|\|_{L^1}}{4\pi t}. \end{align} So by taking $C=\frac{3\|v\|_{L^1}+\|\|\nabla v\|\|_{L^1}}{4\pi}$ you have the desired result. In the above $\|\|\nabla v\|\|_{L^1}$ just means the $L^1$-norm of $\|\nabla v\|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2398853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How many numbers less than $100$ can be expressed as a sum of distinct factorials? How many numbers less than $100$ can be expressed as a sum of distinct factorials? Example: a) $4 = 2! + 2!$ b) $3 = 2! + 1!$
Lemma(I): For every positive integer $n$ we have: $$ 1! + 2! + ... + (n-1)! < n! \ \ $$ There are $ \color{Green}{15} = \color{Green}{16} \color{Red}{-1} = \color{Green}{2^4} \color{Red}{-1}$ such numbers, each of the form $$n= \varepsilon_1 (1!) + \varepsilon_2 (2!) + \varepsilon_3 (3!) + \varepsilon_4 (4!) ; $$ where $\varepsilon_i \in \{0,1\}$ for $i=1, 2, 3, 4$. Because for every $\varepsilon_i$ we have two choices. The $\color{Red}{-1}$ appears because the case $\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon_4=0$ is not allowed, as @Professor Vector has mentioned. If $0!$ is permitted to join the sum then: Lemma(II): For every integer $3 \leq n$ we have: $$ 0! + 1! + 2! + ... + (n-1)! < n! \ \ $$ There are $ \color{Green}{23} = \color{Green}{24} \color{Red}{-1} = \color{Green}{3\cdot 2^3} \color{Red}{-1}$ such numbers, each of the form $$n= \varepsilon_0 (0!) + \varepsilon_1 (1!) + \varepsilon_2 (2!) + \varepsilon_3 (3!) + \varepsilon_4 (4!) ; $$ where $\varepsilon_i \in \{0,1\}$ for $i=0, 1, 2, 3, 4$. Because for each of $\varepsilon_2, \varepsilon_3, \varepsilon_4$ we have two choices; and we have three choices for $(\varepsilon_0, \varepsilon_1)$; i.e. $(\varepsilon_0, \varepsilon_1)= (0,0) \ \ \ \text{or} \ \ \ (0,1) \ \ \ \text{or} \ \ \ (1,1) . $ [Notice that the two pairs $(\varepsilon_0, \varepsilon_1)= (0,1)$ and $(\varepsilon_0, \varepsilon_1)= (1,0)$ give the same sum, since $0!=1!$. ] The $\color{Red}{-1}$ appears because the case $\varepsilon_0=\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon_4=0$ is not allowed, as @Professor Vector has mentioned.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\ x^3\:+\:a\left(a+1\right)x^2\:+\:ax\:-\:a\left(a+b\right)\:-\:1\:=\:0 $ $$\ x^3\:+\:a\left(a+1\right)x^2\:+\:ax\:-\:a\left(a+b\right)\:-\:1\:=\:0 $$ For what values of $b$ does the equation have a root which is independent of $a$? I tried Horner's method, but it doesn't seem to work with this. Could I have some hints on how to get this done? Thank you. (The answer is $b=2$.)
Put $x = 1$. You will find that for $b = 2$ you get a root, irrespective of the value of $a$. It is that simple, but not scientific: $1+a(a+1) + a - a(a+b) -1 = 2a-ab = 0$ In the above expression, if you put $b=2$, the equation holds for every $a$, so $x=1$ is a root that does not depend on $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
$r=\pm1$ are the only rationals with $\,r+1/r\in \Bbb Z$ (sum with its reciprocal is an integer) Can sum of a rational number and its reciprocal be an integer? My brother asked me this question and I was unable to answer it. The only trivial solutions which I could think of are $1$ and $-1$. As to what I tried, I am afraid not much. I have never tried to solve such a question, and if someone could point me in the right direction, maybe I could complete it on my own. Please don't misunderstand my question. I am looking for a rational number $r$ where $r + \frac{1}{r}$ is an integer.
Key Idea $\ r\ \&\ 1/r\,$ have integer sum & product so by RRT both are integers, so $\,r =\pm1.\,$ For convenience we reproduce the proof below, slightly generalized to $\,r\ \&\ c/r,\,$ for $\,c\in\Bbb Z$. Lemma $ $ If $\ r\in \Bbb Q,\,c\in\Bbb Z\ $ then $\ r + c/r = b\in\Bbb Z \iff r,\, c/r \in \Bbb Z\,\ $ [OP is $\,c \!=\! 1\Rightarrow r=\pm1 ]$ Proof $\ (\overset{\times\ r}\Longrightarrow)\,\ \ r^2 +c = b\, r \,\overset{\rm\small RRT}\Rightarrow\,r\in \Bbb Z\,$ $\,\Rightarrow\,r\mid c\,$ by $ $ RRT = Rational Root Test. $\,\ (\Leftarrow)\ $ Clear. Remark $ $ More generally if $\ a\, r + c/r = b\ $ for $\,a,b,c\in\Bbb Z\,$ then scaling by $\,r\,$ we deduce as above $\ a\,r^2 - b\,r + c = 0\,$ so if $\, r = e/d,\ \gcd(e,d)=1\,$ RRT $\Rightarrow e\mid c,\ d\mid a.\,$ If $\,a,c\,$ have $\rm\color{#c00}{few}$ factors then only a $\rm\color{#c00}{few}$ possibilities exist for $\,r,\,$ e.g. if $\,a,c\,$ are primes then $\,\pm r = 1,\, c,\,1/a,\,$ or $\,c/a\,$. [Or $\ ar\,\ \&\,\ c/r\,$ have integer sum & product so RRT $\Rightarrow$ both $\in\Bbb Z\,$ so $\,ar = ae/d\in\Bbb Z\Rightarrow d\mid a,\,$ and $\,c/r = cd/e\in\Bbb Z\Rightarrow e\mid c,\,$ by $\,d,e\,$ coprime and Euclid's Lemma]. These are special cases of ideas going back to Kronecker, Schubert and others which relate the possible factorizations of a polynomial to the factorizations of its values. In fact we can devise a simple (but inefficient) polynomial factorization algorithm using these ideas. For more on this viewpoint see this answer and its links.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 9, "answer_id": 4 }
Let $a_n$ be a sequence of real numbers such that $\lim(a_n\sum_{k = 1}^n a_k^2) = 1$ Let $a_n$ be a sequence of real numbers such that $\lim(a_n\sum_{k = 1}^n a_k^2) = 1$ . Prove that $\lim((3n)^{\frac{1}{3}}a_n)=1$ I'm mainly concerned with how I can derive the proof of this statement.
Let $S_n=a_1^2+\cdots+a_n^2$. The sequence $(S_n)_{n\ge1}$ is nondecreasing. If it is convergent then $\lim_{n\to\infty}a_n=0$ and this contradicts the hypothesis $\lim_{n\to\infty}a_nS_n=1$. Thus, $\lim_{n\to\infty}S_n=+\infty$ and $\lim_{n\to\infty}a_n= \lim_{n\to\infty}\frac{a_nS_n}{S_n}=0$. Now $$S_{n}^3-S_{n-1}^3=S_{n}^3-(S_{n}-a_n^2)^3=3(a_nS_n)^2-3a_n^3(a_nS_n)+a_n^6$$ Hence $$\lim_{n\to\infty}(S_{n}^3-S_{n-1}^3)=3$$ So, by Cesàro's lemma we get $$\lim_{n\to\infty}\frac{S_{n}^3}{n}=3$$ Or equivalently $$\lim_{n\to\infty}\frac{\sqrt[3]{3n}}{S_{n}}=1$$ Finally $$\lim_{n\to\infty}\sqrt[3]{3n}a_n=\lim_{n\to\infty}(a_nS_n)\frac{\sqrt[3]{3n}}{S_{n}}=1$$ Done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
When is interchange of quantifiers allowed? Ex: $\forall w \in \bigcup A_n$ There is a myriad of question for the interchange of different quantifiers, mainly between $\exists$ and $\forall$. I'm interested in knowing when both can be interchanged. The motivation came from this: $\forall w \in \bigcup_n A_n \Leftrightarrow\forall_{n,w} w \in A_n$, when $w \in \bigcup_n A_n \Leftrightarrow\exists_{n} w \in A_n$. Any help would be appreciated.
Example Suppose that for every man, there exists a woman that is his mother. Symbolically, we can state this relationship as follows: $$\forall x:[x \in Men \implies \exists y: [y\in Women \land Mother(y,x)]]$$ Or equivalently: $$\forall x\in Men: \exists y\in Women: Mother(y,x)$$ It should be obvious that we cannot infer from this statement that there exists a woman that is the mother of all men: $$\exists y: [y\in Women \land \forall x:[x \in Men \implies Mother(y,x)]]$$ Or equivalently: $$\exists y\in Women: \forall x\in Men: Mother(y,x)$$ As a general rule, in mathematical proofs with all quantifiers restricted to various sets like this: When discharging a premise (or assumption), the conclusion should not refer to any free variables that were introduced after that premise. And any free variables that were introduced in that premise should be universally generalized in the conclusion. Follow this rule and you shouldn't have to worry about rules for interchanging quantifiers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Roots of an equation over the finite field $\operatorname{GF}(p^q)$ Consider the following equation over the finite field $\operatorname{GF}(p^q)$ such that $r \mid p^q-1$: \begin{align} x^r=y^r \tag{1} \\ \end{align} The solutions of $(1)$ over $\operatorname{GF}(p^q)$ are: \begin{align} x=\gamma^i\, y \quad , \quad 0\leq i \leq r-1\\ \tag{2} \end{align} where $\gamma$ is the element of order $r$ over $\operatorname{GF}(p^q)$. It can be proved that $\gamma_i$'s are distinct elements. Another method to obtain solutions of $(1)$ is that considering the following equation: $$ x^r=y^r \Rightarrow (x-y)\,g(x,y)=0 \tag{3} $$ then find roots of the $g(x,y)$ over $\operatorname{GF}(p^q)$, where $g(x,y)$ is a polynomial of degree $r-1$ and based on the variables $x$ and $y$. My question: How to prove that $g(x,y)=\prod_{i=1}^{r-1} (x-\gamma^i\, y)$? My try: We know by Newton's identities the following relation $$ \prod_{i=1}^{r-1} (x-\gamma^i\, y)=\sum_{k=0}^{r-1}(-1)^{r-1-k} e_{r-1-k}\,y^{r-1-k}\,x^k\tag{4} $$ In addition, it can be proved that for $1\leq k\leq r-1$, we have $$ p_k(\gamma^1,\cdots,\gamma^{r-1})=\sum_{i=1}^{r-1}\gamma^{ik}=-1\tag{5} $$ There is a relation between $p_k$ and $e_k$ by Newton's identities that results that $$ e_{2k}(\gamma^1,\cdots,\gamma^{r-1})=1 \quad, \quad e_{2k-1}(\gamma^1,\cdots,\gamma^{r-1})=-1 \quad, \quad 1\leq k \leq \frac{r-1}{2}\tag{6} $$ Therefore, by using $(5)$ and $(6)$ in the relation $(4)$ we conclude that $$ \prod_{i=1}^{r-1} (x-\gamma^i\, y)=\sum_{k=0}^{r-1}y^{r-1-k}\,x^k=g(x,y) $$ Is it a correct proof? Thanks for any suggestions.
I would just use the factorization $$f(x)=x^r-1=\prod_{i=0}^{r-1}(x-\gamma^i).$$ It implies that $$ x^r-y^r=y^rf(x/y)=y^r\prod_{i=0}^{r-1}(x/y-\gamma^i)=\prod_{i=0}^{r-1}(x-y\gamma^i). $$ You can then cancel the factor $x-y$ corresponding to $i=0$. The trick is known as homogenization. It adds one more variable to a polynomial, and gives, as an end product, a homogeneous polynomial, i.e. a polynomial such that all the terms share the same total degree.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculation of Lower Box and Box Dimension I am new to this site, so sorry if the question is stupid. I am learning fractals and my teacher gave me the following exercise. Let $$E=\{(0,0)\}\cup\bigcup_{n=1}^\infty \left\{(x,1/\sqrt{n}\,):0\leq x\leq 1/\sqrt{n}\right\}$$ Find $\dim_\text{lower box}(E)$ and $\dim_\text{box}(L\cap E)$ for all lines $L$ with non-zero gradient and which do not pass through $(0,0)$. I have done some of the work but I do not know if it is correct. $E$ can be covered by $n(n+1)/2$ boxes of side $1/\sqrt{n}$. But then this gives $$\dim_\text{lower box}(E)\leq\lim_{n\to\infty}\frac{\log(n)+\log(n+1)-\log(2)}{\frac{1}{2}\log(n)}=4$$ I am certain this is not correct. Could someone possibly please help me? Thank you very much. Ok, now I am told that I am only supposed to show $\dim_{lower\ box}\geq\frac{4}{3}$. Does anyone know how I do this? Thank you.
Since it is probably a homework problem, I'll just get you started. Suppose the width of the box is $\epsilon$. For each line segment $(0,1/\sqrt n)\times\{1/\sqrt n\}$ you need at least $1/(\sqrt n \epsilon)$ boxes, and these boxes won't cover more than one line segment if $\frac1{\sqrt n}-\frac1{\sqrt{n+1}} > \epsilon$, which gives $n \lessapprox \epsilon^{-2/3}$, since $\frac1{\sqrt n}-\frac1{\sqrt{n+1}} \approx \frac 1{n^{3/2}}$. The number of boxes this gives can be written as a sum, and this sum can be approximated by an integral.
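A sketch of the remaining computation (filling in the hint, under the approximations stated above): the segments with $n \lesssim \epsilon^{-2/3}$ contribute about $$\sum_{n=1}^{\epsilon^{-2/3}}\frac{1}{\sqrt n\,\epsilon}\approx\frac{1}{\epsilon}\int_1^{\epsilon^{-2/3}}\frac{dn}{\sqrt n}\approx\frac{2}{\epsilon}\cdot\epsilon^{-1/3}=2\,\epsilon^{-4/3}$$ boxes, and $$\frac{\log\left(2\,\epsilon^{-4/3}\right)}{\log(1/\epsilon)}\to\frac43\quad\text{as }\epsilon\to0,$$ which is where the lower bound $\dim_\text{lower box}(E)\geq\frac43$ comes from.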
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If a bird flies to a height of $h$, what is the area that it can see? Suppose a bird flies to height $h$ above the earth. The bird can see the area beneath it; name this area $S$. What is $\max \{S\}$? Is it possible to solve? My first trial was to assume a cone with $height =h$ and $S=\pi R^2$ as area, like the figure I attached. Is $S$ a constant for a particular value of $h$? Can we calculate $\max \{S\}$ or $S$? (Or is something missing?) Thanks for any hint, in advance.
I am assuming that there is no limitation on the angle of vision of the bird. The only known value I am assuming is the half apex angle $\theta$. By the property of the tangent, the half angle subtended at the center will be $90-\theta$. By using the solid angle formula for the visible spherical cap, $$ A=\Omega r^2=2\pi\left(1-\cos{(90-\theta)}\right)r^2 = 2\pi (1-\sin{\theta})\,r^2 $$ where $r$ is the radius of the earth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How does $t \rightarrow \infty$ then $t[1-F(t)+F(-t)] \rightarrow 0$ relate to the weak law of large numbers? Refering to the notes here http://www.stat.umn.edu/geyer/8112/notes/weaklaw.pdf In Theorem 1, I understand how (i) $\iff$ (iii). I also understand the second part of (ii) where $\lim_{t \to\infty} \int ^t _{-t} x F \{dx \} = \mu $ However, I do not understand how the first part of (ii) is of any relevance here. Doesn't that equation hold true for all distribution functions, since as $t \rightarrow \infty$, $F(t) \rightarrow 1$ and $F(-t) \rightarrow 0$? Therefore, $1-F(t)+F(-t) \rightarrow 0$, and therefore $t(1-F(t)+F(-t)) \rightarrow 0$? How does this relate to the weak law of large numbers?
$1-F(t)+F(-t) \rightarrow 0$, and therefore $t(1-F(t)+F(-t)) \rightarrow 0$. If for a function $h\colon\mathbb R\to\mathbb R$, we have $h(t)\to 0$, it does not mean that $t\cdot h(t)$ goes to zero (for example if $h(t)=1/\sqrt{1+\left\lvert t\right\rvert}$). Here, we have $$1-F(t)+F(-t)=\mathbb P\left\{X_1 \gt t \right\} + \mathbb P\left\{X_1 \leqslant -t \right\}=\mathbb P\left\{\left\lvert X_1 \right\rvert \gt t \right\} +\mathbb P\left\{X_1 =-t \right\} $$ hence condition (1a) in the notes can be interpreted as a decay condition on the tail of $\left\lvert X_1 \right\rvert$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given a sequence $\{b_n\}$ with two subsequences which both converge, prove that $\{b_n\}$ does not have to be convergent. Given $\{b_n\}$ and subsequences $b_{2n}$ and $b_{2n+1}$, where $b_{2n}$ and $b_{2n+1}$ are both convergent, show that $\{b_n\}$ doesn't have to be convergent. I know that if $\{b_n\}$ is convergent then all of its subsequences must have the same limit, say $x$. But if $b_{2n}$ has limit $y$ and $b_{2n+1}$ has limit $z$ with $y\neq z$, then $\{b_n\}$ cannot be convergent; but how can I show that?
Take $b_n=(-1)^n$. $\{b_{2n}\}$ converges and $\{b_{2n-1}\}$ converges, but $\{b_n\}$ does not: suppose it did, with $A=\lim\limits_{n\rightarrow+\infty}b_n$, and let $\epsilon=\frac{1}{2}$. Since consecutive terms are at distance $2$, for all $N>0$ there is $n>N$ for which $|b_n-A|\geq\epsilon$, contradicting convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2399993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Derivative of matrix-valued function with respect to matrix input I have the expression $$\bf \phi = \bf X W$$ where $\bf X$ is a $20 \times 10$ matrix, $\bf W$ is a $10 \times 5$ matrix. How can I calculate $\frac{d\phi}{d\bf W}$? What is the dimension of the result?
There is a similar question. Also, you could define it $$C = \frac{\partial \phi}{\partial W} $$ where C is a 4D matrix (or tensor) with $$ C_{a,b,c,d} = \frac{\partial \phi_{a,b}}{\partial W_{c,d}} $$ Actually, when derivatives are expressed as matrices, for example, $f=x^TAx$ where $x\in R^{n\times1}, A\in R^{n\times n}$, you could think of $\frac{\partial f}{\partial A}$ as $$ \left[\frac{\partial f}{\partial A}\right]_{ij} = \frac{\partial f}{\partial A_{ij}} $$
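Concretely, since $\phi_{ab}=\sum_k X_{ak}W_{kb}$, we get $\frac{\partial \phi_{ab}}{\partial W_{cd}}=X_{ac}\,\delta_{bd}$, a $20\times 5\times 10\times 5$ tensor. Here is a minimal numpy sketch (variable names and the finite-difference check are illustrative, not from the question) that builds it and verifies one slice:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))
W = rng.normal(size=(10, 5))

# C[a, b, c, d] = d(phi[a, b]) / d(W[c, d]) = X[a, c] * delta(b, d)
C = np.einsum('ac,bd->abcd', X, np.eye(5))

# Finite-difference spot check of the slice d(phi)/d(W[3, 2])
eps = 1e-6
dW = np.zeros_like(W)
dW[3, 2] = eps
fd = ((X @ (W + dW)) - (X @ W)) / eps
assert np.allclose(C[:, :, 3, 2], fd, atol=1e-4)
```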
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Binomial distribution to approximate 90 % successes A student is taking a multiple choice test where each question has four options for an answer. The student mastered 70% of the material. Assume this means that the student has a 0.7 chance of knowing the correct answer to a random test question. On the other hand, if the student does not know the answer to the question, she randomly selects among the four answer choices. Finally, assume that this holds for each question independent of the others. What is the probability that a specific question is answered correctly? P(correct) = 0.7+0.25*0.3 = 0.775 Suppose that at least 90% of the choices have to be correct to pass the test. If the test has 30 questions, approximate the probability that the student will get at least 90% of the choices correct? I tried using the binomial distribution to approximate this: P(90% correct)=$\binom{30}{27} 0.775^{27} (1-0.775)^{30-27}$ $\binom{30}{27} = (30!)/((27!)((30-27)!)) = 4060 $ This gives me 0.047 as a result. Is this the right way of approximating the probability that the student will get at least 90% of the choices correct? 4,7%
At least $90$ per cent correct means that the number of correct answers could be $27$, $28$, $29$ or $30$. So to be exact, you're looking for $P(\ge90 \% \, correct)$, which is the sum of probabilities of the four cases computed the way you did for $27$.
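A minimal Python sketch of that sum (added here), using only the standard library:

```python
from math import comb

p = 0.775  # probability a single question is answered correctly
# P(at least 27 of 30 correct) = sum over k = 27..30 of C(30, k) p^k (1-p)^(30-k)
prob = sum(comb(30, k) * p**k * (1 - p)**(30 - k) for k in range(27, 31))
print(prob)
```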
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simplify $\sqrt{6-\sqrt{20}}$ My first try was to set the whole expression equal to $a$ and square both sides. $$\sqrt{6-\sqrt{20}}=a \Longleftrightarrow a^2=6-\sqrt{20}=6-\sqrt{4\cdot5}=6-2\sqrt{5}.$$ Multiplying by conjugate I get $$a^2=\frac{(6-2\sqrt{5})(6+2\sqrt{5})}{6+2\sqrt{5}}=\frac{16}{2+\sqrt{5}}.$$ But I still end up with an ugly radical expression.
In this particular problem, you can pretty much guess the answer. $$\sqrt{6-\sqrt{20}}=\sqrt{6-2\sqrt{5}}$$ Now, suppose that the $-2\sqrt{5}$ was the middle term of a perfect square trinomial, where $x = \sqrt{5}$. In other words, that middle term is $-2x$. What would the first and last term look like? Obviously they would be $x^2$ and $1$ respectively. $x^2 - 2x +1 = (x-1)^2$ Substituting $\sqrt{5}$ for $x$ we have... $$x^2 - 2x +1 = (x-1)^2$$ $$\sqrt{5}^2 - 2\sqrt{5} +1 = (\sqrt{5}-1)^2$$ $$5 - 2\sqrt{5} +1 = (\sqrt{5}-1)^2$$ $$6 - 2\sqrt{5} = (\sqrt{5}-1)^2$$ and taking the square root of both sides we have $$\sqrt{6-2\sqrt{5}} = \sqrt{5}-1$$ It's almost like somebody just made up that problem to work out cleanly like that. ;)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
An IMO graph theory problem In the second 1990 IMO problem, I saw a solution (probably of the proposer) as follows, of which I have some questions. Problem 2: Suppose $n\geq 3$, and let $S$ be a set of $2n-1$ distinct points on a circle. Assume that exactly $k$ points of $S$ are colored black. A coloring of $S$ is called "good" if there is at least one pair of the black points such that the interior of one of the arcs between the pair contains exactly $n$ points of $S$. Find the least value of $k$ so that each coloring of $S$ be "good". Solution: We call two points "dependent" if exactly $n$ points of $S$ sits in one of the arcs between these two points. We need to determine the least value of $k$ with the property that each $k$ points of $S$ contain at least one pair of dependent points. Connecting any pair of dependent points, there obtains a graph $G$ with degree $2$ at each vertex. $G$ is formed of disjoint cycles. We consider two cases: A) If $(3,2n-1)=1$, then $(2n-1,n+1)=(2n-1,3)=1$, whence $G$ is itself a cycle. In this case, "obviously" the least value for $k$ is $\lbrack \frac{2n-1}{2} \rbrack +1=n$. B) If $(3,2n-1)=3$,then $(2n-1,n+1)=3$, i. e. there are $3$ cycles in $G$ each having $\frac{2n-1}{3}$ vertices. In this case, the least value of $k$ would be $$3\left\lbrack \frac{\left(\frac{2n-1}{3}\right)}{2}\right\rbrack+1=3\left(\frac{\left(\frac{2n-1}{3}\right)}{2}-\frac{1}{2}\right)+1=n-1.$$ My question: 1) Why do we compute the "gcd" of $2n-1$ and $n+1$? (That is, what does this "gcd" have to do with the number of cycles?) 2) In each of the cases, I don't understand why do we compute the "bracket" of $\frac{2n-1}{2}$ plus $1$, or the "bracket" of $\frac{\frac{2n-1}{3}}{2}$ plus $1$, respectively? Thanks for any elucidating answer!
If you label the points with $\mathbb Z/(2n-1)\mathbb Z$ in order, then node $x$ is adjacent to nodes $x+n+1$ and $x-(n+1)$. So the nature of the graph depends on the sequence: $$0,n+1,2(n+1),\dots$$ Two nodes $a,b$ in the graph are in the same cycle if $a-b=(n+1)A$ for some $A$. But the number of distinct multiples of $n+1$ in $\mathbb Z/(2n-1)\mathbb Z$ is equal to $\frac{2n-1}{\gcd(2n-1,n+1)}$. Since each cycle has $\frac{2n-1}{\gcd(2n-1,n+1)}$ nodes, there must be $\gcd(2n-1,n+1)$ cycles. But we have that $\gcd(2n-1,n+1)\mid 3$, since $2(n+1)-(2n-1)=3$. Now, in a cycle graph of size $M$, we can paint $\left\lfloor\frac{M}{2}\right\rfloor$ of the nodes black before we are forced to have two adjacent black nodes. If you paint more than $\left\lfloor\frac{M}{2}\right\rfloor$, then two black nodes are connected by an edge. So the most you can color black without getting an edge with two black nodes is: $$\gcd(2n-1,n+1)\left\lfloor{\frac{2n-1}{2\gcd(2n-1,n+1)}}\right\rfloor$$ So the smallest $k$ which guarantees an edge with two black nodes is one more than this.
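A small example (added for concreteness): take $n=5$, so $2n-1=9$ points labeled $0,\dots,8$, and each $x$ is joined to $x\pm 6 \pmod 9$. Since $\gcd(9,6)=3$, the graph splits into the $3$ cycles $\{0,6,3\}$, $\{1,7,4\}$, $\{2,8,5\}$, each of size $3$. We can blacken $\lfloor 3/2\rfloor=1$ node per cycle, so $3$ black points may avoid a dependent pair, and $k=4=n-1$ forces one, matching case B.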
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Simple probability question, with faulty screws. I have translated the problem as follows: A factory produces screws, the probability of them being faulty is 0.01 independently. The factory makes a box with 10 screws and recalls the boxes containing 2 or more faulty screws. What is the percentage of boxes that the factory has to recall?
My solution is: The percentage of recalled boxes is equal to the probability that a box is faulty (contains two or more faulty screws). To make my calculation easy, I calculate the probability of the boxes which have no faulty screws or exactly one faulty screw. Then I can find the probability I am searching for as follows: $$S_{FaultyBox} = 1 - S_{NonFaultyBox}$$ To find the non-faulty box probability I first find the probability that a box has no faulty screws and then add to it the probability that it has exactly one faulty screw, as follows: $$S_{NoFaultyScrew} = (1-0.01)^{10}=0.904382$$ (Because they are independent of each other) $$S_{OneFaultyScrew}= \binom{10}{1}0.01^{1}(1-0.01)^{9}=0.091351$$ (Using Bernoulli trials). $$S_{NonFaultyBox} = S_{NoFaultyScrew} + S_{OneFaultyScrew} = 0.995733$$ Finally, I find the probability that I was searching for: $$S_{FaultyBox} = 1 - 0.995733 = 0.004267 \Rightarrow 0.4267\%$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Find the period of $f(x) =\{x\}+\{x+1/3\}+\{x+2/3\}$ (where $\{.\}$ denotes the fractional part function) I tried the basic way of solving this question, $f(x+T)=f(x)$, and writing $3x$ as $x+x+x$, but I don't think it can be solved directly like that.
Hint: $\{x+1\} = \{x\}\,$ so $f(x+1/3) = \{x+1/3\}+\{x+2/3\}+\{x+1\} = f(x)$. Alt. hint: write $\{x\}=x - \lfloor x \rfloor$ and use Hermite's identity $\;\lfloor x \rfloor + \lfloor x+1/3 \rfloor + \lfloor x+2/3 \rfloor = \lfloor 3x \rfloor\,$.
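Following the second hint through (completion added here): since $x+(x+1/3)+(x+2/3)=3x+1$, Hermite's identity gives $$f(x)=3x+1-\lfloor 3x\rfloor=\{3x\}+1,$$ and $\{3x\}$ has fundamental period $1/3$, so the period of $f$ is $1/3$.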
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help with solving word problems pertaining to limits of a function. This is my first post, and I apologise if I make any mistakes in setting this out, but I am just getting used to formatting. So In my current math unit, we are being assigned problems to do with maxima and minima of a function. Specifically, the one I am dealing with is a 2D shape maximisation problem. So if i have a particular area $y$ that I need to maximise, and $x$ amount of material to enclose it, how am I meant to mathematically derive the maximum? It couldn't be as simple as $x$ divided by the sides needed for the enclosure, squared to get the area, would it? I feel like there is a derivative operation needed, but am unsure how to do that when I haven't even got a starting formula to derive from. Again, if there is an issue with my formatting, or I am asking an inane question, please be civil. I am only learning.
I'm going to suppose that you're working with a rectangle, hopefully you can generalise to other shapes as necessary. For an $l \times b$ rectangle, the area is $y = lb$ and the perimeter is $x = 2(l + b)$. Since we have a fixed amount of material, $x$ is a known value, so $l$ and $b$ are related by $l = \frac{x}{2} - b$, meaning that $y = b\left(\frac{x}{2} - b\right)$ expresses $y$ as a function of $b$. The question, then, is what is the maximum value of $y$, and what value of $b$ is it obtained at? Well, we can differentiate $y$ with respect to $b$ and set that to zero: $$\begin{eqnarray}\frac{dy}{db} & = & \frac{d}{db}\left[b\left(\frac{x}{2} - b\right)\right] \\ & = & \frac{d}{db}\left(\frac{bx}{2} - b^2\right) \\ & = & \frac{x}{2} - 2b \end{eqnarray}$$ So we have an extreme value for the area when $\frac{x}{2} - 2b = 0$, i.e. when $b = \frac{x}{4}$. And when that happens, $y = \frac{x}{4}\times\frac{x}{4} = \frac{x^2}{16}$. Notice that $b = l = \frac{x}{4}$, meaning that the shape is a square. You can do a little work with second derivatives, or graphs, or whatever's your preference, to show that this particular extreme value is a maximum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is nᵐ>mⁿ if m>n? I remember playing with my calculator when I was young. I really liked big numbers so I'd punch big numbers like $20^{30}$ to see how big it really is. On such a quest, I did observe that $20^{30}$ is greater than the value of $30^{20}$. In fact, in many cases, I found that $n^m>m^n$ if $m>n$. Is this a general fact? If so, can it be proved?
In simple terms, for integers you can start with the smallest numbers, i.e. $(1,2)$, $(2,3)$, $(2,4)$: $1^2 < 2^1$, $2^3 < 3^2$ and $2^4 = 4^2$. In all the above cases $n^m > m^n$ was false for $m>n$. By observing the pattern, for all $n\geq 2$ and $m>4$ we have $n^m > m^n$ true. Consider $(2,5) \implies 32 > 25$, or $(3,4) \implies 81 > 64$, or $(4,100) \implies 1.6 \times 10^{60} > 100000000$, and so on... So basically even a small number with a large exponent/power is greater than a big number with a small exponent, as observed above, except for some cases. A bigger exponent matters more than a big base number. As for the proof part you can take the log of $n^m$ and $m^n$. As the function $\frac{\log x}{x}$ is a decreasing function for $x > e\ (\approx 2.718)$, it follows that $\frac{\log n}{n} > \frac{\log m}{m}$ (for $m>n>e$). So (as mentioned in the above answers also) $m > n > e \implies n^m > m^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2400996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 5, "answer_id": 3 }
Prime ideals lying above a prime number in $\mathbb{Q}(\zeta_n)$ If we assume we have a prime number $p$ such that $p\nmid n$, and $\mathscr{P}$ is a prime ideal lying above $p$ in the field extension $\mathbb{Q}(\zeta_n)$, is it always true that $\mathscr{P}\nmid n$?
Yes. If $\mathscr{P} \mid n$, then $\mathrm{Nm}(\mathscr{P}) \mid \mathrm{Nm}(n)$. Since $\mathrm{Nm}(\mathscr{P})$ is a power of $p$ and $\mathrm{Nm}(n)$ is a power of $n$, this can only happen if $p \mid n$, contradicting the hypothesis $p\nmid n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove "monoid object in the category of monoids is a commutative monoid"? I've read about the Eckmann–Hilton theorem: $(a * b) \cdot (c * d) = (a \cdot c) * (b \cdot d)$. But the category of monoids is not a 2-category, so why are there two binary operators here?
If $(M, \cdot, 1)$ is a monoid, and $f: M \times M \rightarrow M$ a monoid homomorphism with $f(x,1) = x = f(1,x)$, then: $x\cdot y = f(x,y)$ Since $x \cdot y = f(x,1) \cdot f(1,y) = f((x,1) \cdot (1,y)) = f(x,y)$ Therefore $x \cdot y = f(1,x) \cdot f(y,1) = f((1,x) \cdot (y,1)) = f(y,x) = y \cdot x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Compute the determinant The following problem is taken from here exercise $2:$ Question: Evaluate the determinant: \begin{vmatrix} 0 & x & x & \dots & x \\ y & 0 & x & \dots & x \\ y & y & 0 & \dots & x \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ y & y & y & \dots & 0 \end{vmatrix} My attempt: I tried to use first row substract second row to obtain \begin{pmatrix} y & -x & 0 \dots & 0 \end{pmatrix} and also first row subtracts remaining rows. However, I have no idea how to proceed.
Let $A_n$ be the $n\times n$ matrix of the form described above. You can easily compute the determinant by hand for the cases up to $n=4$, which suggests the following relation: $$\det(A_n) = (-1)^{n+1}\sum_{i=1}^{n-1}x^iy^{n-i}$$ This can be proved by induction, using the calculated small cases together with cofactor expansion of the determinant along a row or column.
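A quick sympy sanity check of the conjectured formula for small $n$ (sketch added here, assuming $x$ above the diagonal and $y$ below, as in the question):

```python
import sympy as sp

x, y = sp.symbols('x y')

def A(n):
    # 0 on the diagonal, x above it, y below it
    return sp.Matrix(n, n, lambda i, j: 0 if i == j else (x if j > i else y))

def conjectured(n):
    return (-1)**(n + 1) * sum(x**i * y**(n - i) for i in range(1, n))

for n in range(2, 7):
    assert sp.expand(A(n).det() - conjectured(n)) == 0
```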
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Which of the following is bigger (logarithms) I need to compare these two expressions and decide which is bigger: $2 \sqrt2$ or $\log_2(3)+\log_3(4) $. So I tried to simplify the log expression, using $\log_2(3)=\log_2(4)\log_4(3)$ and $\log_3(4)=2\log_3(2)$: $$ \log_2(4) \times (\log_4(3) + \log_3(2)) ?? 2 \times \sqrt2$$ and then $$2 \times \log_2(2)\times(\log_4(3)+\log_3(2)) ?? 2 \sqrt2$$ $$\log_2(2) \times (\log_4(3)+\log_3(2)) ?? \sqrt2 $$ and I know $\log_2(2) = 1$, so now I need to compare these two expressions: $$\log_4(3)+\log_3(2) $$against$$ \sqrt2 $$ I'm not really sure what I'm doing wrong here.
$\log_3 4 = \dfrac{\log_2 4}{\log_2 3} = \dfrac{2}{\log_2 3}$ $A := {\log_2 3}+ \log_3 4 = {\log_2 3} + \dfrac{2}{\log_2 3}$ Dividing $A$ by $\sqrt 2$, observe $ \dfrac{\log_2 3}{\sqrt 2} + \dfrac{\sqrt 2}{\log_2 3} > 2$ by the AM-GM inequality: the two terms have product $1$, and the inequality is strict since ${\log_2 3 \ne \sqrt 2}$. Thus $A>2\sqrt 2$
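As a numeric sanity check (added here): $\log_2 3\approx 1.585$, so $A\approx 1.585+2/1.585\approx 2.847$, while $2\sqrt2\approx 2.828$, consistent with $A>2\sqrt2$.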
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding map from Klein bottle to $RP^2$ that induces an epimorphism of fundamental groups. I want to find a map from the Klein bottle to $RP^2$ that induces an epimorphism of fundamental groups. Since the fundamental group of $RP^2$ is $Z_2$, intuitively it feels like a good start would be to look for any non-trivial map from $K$ to $RP^2$, but frankly even that is difficult for me. After reading quite a bit about the Klein bottle i still have problems even visualizing it, so any help would be appreciated.
Let $X$ and $Y$ be two connected manifolds of dimension $n \geq 2$. There is a map $\varphi : X\# Y \to X$ given by mapping $Y$ to a disc $D$. By the Seifert van Kampen theorem, $\pi_1(X\# Y) \cong \pi_1(X^{\circ})*_{\pi_1(S^{n-1})}\pi_1(Y^{\circ})$ where $X^{\circ}$ denotes $X$ with an embedded open disc removed; likewise for $Y^{\circ}$. The map $\varphi$ induces a map $$\varphi_* : \pi_1(X^{\circ})*_{\pi_1(S^{n-1})}\pi_1(Y^{\circ}) \to \pi_1(X^{\circ})*_{\pi_1(S^{n-1})}\pi_1(D) \cong \pi_1(X)$$ which is the identity on the subgroup $\pi_1(X^{\circ})$ and necessarily trivial on $\pi_1(Y^{\circ})$ (because $\pi_1(D)$ is trivial). In particular, $\varphi_*$ is an epimorphism. As $K = \mathbb{RP}^2\#\mathbb{RP}^2$, we obtain a map $\varphi : K \to \mathbb{RP}^2$ inducing an epimorphism $\varphi_* : \pi_1(K) \to \pi_1(\mathbb{RP}^2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
"Note that connectedness is not defined for closed sets" explanation I'm learning Complex Analysis, and we are given the following definitions: Definition. Suppose that $\Omega \subseteq$ C and that $\Omega$ is open. (1) The set $\Omega$ is connected if any two points of $\Omega$ can be joined by a polygonal path lying inside $\Omega.$ (2) The set $\Omega$ is simply connected if the interior of every simple closed polygonal path in $\Omega,$ lies in $\Omega$ that is, if “$\Omega$ has no holes”. (3) The set is a domain if it is connected as well as open. Later, my note makes a remark saying: Note that connected is not defined for closed sets, but there are questions about closed sets being connected. I don't understand this, it says that it's not defined for closed sets, yet it also says there are questions regarding closed sets being connected..? Why is do closed sets not have connectedness defined?
Let $X$ be a topological space, e.g. a subset of $\mathbb{C}$. Your definition of what it means for $X$ to be connected is different from the usual definition of connectedness. However, the two definitions coincide when $X$ is open. That is why they didn't want to apply their definition for nonopen sets. The usual definition of what it means for $X$ to be connected is that there are no proper nonempty subsets of $X$ which are both open and closed in $X$. Your definition of being connected is what everyone else calls being polygonally path connected. There is also the notion of just being path connected: for any $x, y \in X$, there exists a continuous map $\gamma: [0,1] \rightarrow X$ such that $\gamma(0) = x$ and $\gamma(1) = y$. If $X$ is any topological space, then it is connected if it is path connected, but the converse is not true. When $X$ is a subset of $\mathbb{C}$, it is path connected if it is polygonally path connected. But it is possible for $X$ to be path connected, but not polygonally path connected (the graph of $y = x^2$). And it is also possible for $X$ to be connected but not path connected (the union of the point $(1,0)$ with the closed line segments between $(0,0)$ and $(1, \frac{1}{n})$ for all $n \in \mathbb{N}$). However, all three of these notions coincide when $X$ is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Squares of positive semidefinite matrices Suppose $L_1 \succeq L_2$, where $L_1,L_2$ are positive semidefinite matrices (actually combinatorial Laplacians). Is the following inequality true, and if no, under which conditions? $$L_1^2 \succeq L_2^2$$
It's not always true. Counterexample: $$ \begin{align*} &L_1=\pmatrix{1&-1&0\\ -1&2&-1\\ 0&-1&1}, \ L_2=\pmatrix{1&-1&0\\ -1&1&0\\ 0&0&0},\\ &L_1^2-L_2^2=\pmatrix{0&-1&1\\ -1&4&-3\\ 1&-3&2}. \end{align*} $$ I'm not sure if there is any good (non-restrictive) sufficient condition for the inequality to hold.
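To see explicitly that the difference fails to be PSD (verification added here): with $v=(3,1,0)^T$, $$v^T\left(L_1^2-L_2^2\right)v = 0\cdot 9 + 4\cdot 1 + 2\cdot(-1)\cdot 3\cdot 1 = -2 < 0,$$ while $L_1-L_2=\pmatrix{0&0&0\\ 0&1&-1\\ 0&-1&1}$ has eigenvalues $0,0,2$, so the hypothesis $L_1\succeq L_2$ does hold.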
{ "language": "en", "url": "https://math.stackexchange.com/questions/2401994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I show that for any natural number n, there exists a natural number m such that $4^{2n+1} + 3^{n+2} = 13m$? Show that for any natural number $n$, there exists a natural number $m$ for which: $$4^{2n+1} + 3^{n+2} = 13m$$ I don't know where to start. I tried to use Mathematical Induction, denoting the top statement by $\rm P(n)$ and prove that $\rm P(0)$ is true, but I got stuck. Can anybody help?
The statement $P(n)$ is: There exists $m\in\mathbb N$ such that $$4^{2n+1}+3^{n+2}=13m$$ Step $1$: Proving that $P(0)$ is true should be easy, you just have to calculate what $$4^{2\cdot 0+1}+3^{0+2}$$ is equal to. Step $2$: Assume that $P(n)$ is true, and write $$\begin{align}4^{2(n+1)+1}+3^{(n+1)+2} &= 4^{2n+1+2} + 3^{n+2 + 1} \\&= 16\cdot 4^{2n+1} + 3\cdot 3^{n+2}\\&=13\cdot 4^{2n+1} + 3\cdot 4^{2n+1} + 3\cdot 3^{n+2}\end{align}$$ Can you continue from here?
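One way to finish from there (completion added for readers checking their work): by the inductive hypothesis $4^{2n+1}+3^{n+2}=13m$, so $$13\cdot 4^{2n+1} + 3\cdot 4^{2n+1} + 3\cdot 3^{n+2} = 13\cdot 4^{2n+1} + 3\left(4^{2n+1}+3^{n+2}\right) = 13\left(4^{2n+1}+3m\right),$$ which is again a multiple of $13$.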
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Trouble understanding ij-element of a matrix Let $\;f:\mathbb R^n \rightarrow \mathbb R^m\;$ and $\;G:\mathbb R^m \rightarrow \mathbb R_{+}\;$ and consider the $\;n\times n\;$ tensor $\;\mathcal A=(a_{ij})_{1\le i,j \le n}\;$ where $\;a_{ij}=f_{x_i} \cdot f_{x_j} -{\delta}_{ij}(\frac{1}{2} {\vert \nabla f \vert}^2+G(f))\;$ NOTE: $\; \cdot \;$stands for the Euclidean inner product and $\;\vert \cdot \vert\;$ is the Frobenius Norm of the matrix. I want to prove that the entries of this tensor will be like: $\; a_{11}=(f_{x_1})^2-1(\frac{1}{2} (f_{x_1})^2+\dots+\frac{1}{2}(f_{x_n})^2+G(f))\;$, $\;a_{12}=f_{x_1} \cdot f_{x_2}\;$, etc. My attempt: Since $\; \nabla f =(\frac{\partial f_i}{\partial x_j})_{1\le i \le m, 1\le j \le n}\;$, I computed the Frobenius norm of $\; \nabla f\;$ and I found $\;\frac{1}{2} {\vert \nabla f \vert}^2=\frac{1}{2} (f^1_{x_1})^2+\dots+\frac{1}{2}(f^1_{x_n})^2+\dots+\frac{1}{2}(f^m_{x_1})^2+\dots+\frac{1}{2}(f^m_{x_n})^2\;$ where $\;f^i_{x_j}=\frac{\partial f_i}{\partial x_j}\;$. In addition, I know $\;{\delta}_{ij}=\begin{cases} 1\;if\;i = j\\ 0\;if\;i\neq j\\ \end{cases}\;$ Writing down all the above, I get: * *$\;a_{11}=(f_{x_1})^2-1(\frac{1}{2} (f^1_{x_1})^2+\dots+\frac{1}{2}(f^1_{x_n})^2+\dots+\frac{1}{2}(f^m_{x_1})^2+\dots+\frac{1}{2}(f^m_{x_n})^2+G(f))\;$ *$\;a_{12}=f_{x_1} \cdot f_{x_2}\;$ *etc. My question: I think I'm missing something but I don't know what! Are the above calculations right or wrong? Any help would be valuable because I've been stuck here for days... Thanks in advance!
Yes, it is correct; this is what the author meant. Note that the matrix $\mathcal A$ can also be written in the more compact form $$\mathcal A = (\nabla f)^T\nabla f - \left(\frac 12 \left |\nabla f\right|^2 + G(f)\right)I$$ where $\nabla f$ is the $m\times n$ Jacobian and $I$ is the $n\times n$ identity matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$x\left(x-1\right)\left(x-2\right)\left(x-3\right)=m$ has all roots real Given the equation: $x\left(x-1\right)\left(x-2\right)\left(x-3\right)=m$ For what values of $m$ are all the roots real? I've rewritten the equation as: $x^4-6x^3+11x^2-6x-m=0$ I'm quite sure this is done with Vieta's but didn't really figure out yet what should I aim to get out of it in order to get $m \in [-1,\frac{9}{16}]$ which is the correct answer .
Start from $x^4-6 x^3+11 x^2-6 x-m=0$ and substitute $x=z+\dfrac{3}{2}$; we get $\left(z+\frac{3}{2}\right)^4-6 \left(z+\frac{3}{2}\right)^3+11 \left(z+\frac{3}{2}\right)^2-6 \left(z+\frac{3}{2}\right)-m=0$ Expand and reorder $z^4-\frac{5 }{2}z^2+\frac{9}{16}-m=0$ substitute $z^2=w$ $w^2-\frac{5 }{2}w+\frac{9}{16}-m=0$ $w=\dfrac{1}{4} \left(5\pm 4 \sqrt{m+1}\right)$ For the solutions to be real we need $m\ge -1$ (so that $\sqrt{m+1}$ is real) and $w\ge 0$, i.e. $5\pm 4 \sqrt{m+1}\ge 0$ Begin with $5+4 \sqrt{m+1}\ge 0\to 4 \sqrt{m+1}\ge -5$ verified for all $m\ge -1$ Then we solve $5- 4 \sqrt{m+1}\ge 0$ $4 \sqrt{m+1}\le 5$ $16(m+1)\le 25 \to 16m \le 9\to m \le \dfrac{9}{16}$ the equation has all real solutions if $\quad-1\le m \le \dfrac{9}{16}$ Hope this helps
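As a cross-check (added here): in the $z$ variable the quartic $z^4-\frac52 z^2+\frac{9}{16}$ has a local maximum $\frac{9}{16}$ at $z=0$ and global minima at $z^2=\frac54$, where its value is $\frac{25}{16}-\frac{50}{16}+\frac{9}{16}=-1$. So $x(x-1)(x-2)(x-3)$ attains every value $m\in[-1,\frac{9}{16}]$ at four real points (counted with multiplicity), matching the interval found above.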
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Shortest Hamilton Path Planar Problem I think that the problem of obtaining the shortest path visiting each point exactly once (it is not needed to come back to the start point, so it is a Hamilton path), in its planar Euclidean and symmetric version, is an NP-complete problem. Wikipedia says: "If the distance measure is a metric and symmetric, the problem becomes APX-complete" I'm not sure if Wikipedia is correct; please can anyone clarify if this problem is NP-complete, APX-complete or both? Thanks in advance.
Christos H. Papadimitriou says that "The Euclidean Travelling Salesman Problem Is NP-Complete". He based his proof on a reduction from the Exact Cover problem, which is known to be NP-complete. In general the ETSP is NP-hard when the inputs are real coordinates, but restricting the input to integers such that the distances are computable in polynomial time, it is NP-complete. This is explained on page 239: "In what follows we will assume that the elements of the distance matrix are the integral parts of this metric. Any desired precision can be thus obtained by increasing the scale accordingly. Moreover in the constructions that will follow we will also allow rational coordinates, with the understanding that the scale will be eventually multiplied by an adequately large integer, so that all coordinates become integral and any necessary precision is obtained." Ola Svensson in "Approximation Algorithms and Hardness of Approximation" uses a similar restriction on the inputs to obtain a PTAS with Sanjeev Arora's method, which has a $(1 + 1/c)$-approximation where $c$ is the number of dimensions of the coordinate inputs; a 3/2-approximation for the planar problem is not a good enough approach. So, the Shortest Hamilton Path Planar Problem, even restricting inputs so the distances are P-computable, is NP-complete, and APX-complete too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to minimize the integral of the functional of a function, with respect to that function? I need to obtain the function $f(x)$ for which the following integral has its minimum value: $I=\int F(f(x))dx= \int [A (B^2-f(x)^2)^2-Cf(x)f''(x)]dx$ One special solution is $f(x)=constant=\pm B$, but I need the general solution such that $f(x) \ne constant$. Then the systematic approach is to minimize '$I$' with respect to $f(x)$. I started with $\dfrac{dI}{df(x)}=0 $. Then I differentiate both sides with respect to $x$ so that I get rid of the integral and end up with $\dfrac{dF(f(x))}{df(x)}=0 $ This step gives me: $2A(B^2-f(x)^2)\cdot[-2f(x)] -C\dfrac{d}{df(x)}f(x)f''(x)=0$ At this point how do I carry out the second part? Shall I consider $f''(x)$ to be constant with respect to $f(x)$? Doing so would give a differential equation to solve for $f(x)$. But I am not sure whether this is the right way or not. Thanks in advance
The Euler Lagrange equation for $f$ to be an extremum of the integral $\int^b_a F(x,f,f',f'') dx$ is $\frac{\partial F}{\partial f}-\frac{d}{dx}\frac{\partial F}{\partial f'}+\frac{d^2}{dx^2}\frac{\partial F}{\partial f''}=0$. For the given $F$ this gives $\frac{\partial}{\partial f}[A(B^2-f^2)^2]-Cf''-C\frac{d^2}{dx^2} f=0$. Multiply this through by $f'$ and integrate to get $[A(B^2-f^2)^2]+D=C f'^{\,2}$ where $D$ is a constant to be determined. It looks possible, but ugly, to integrate this again, so getting something neat and tidy like $f(x)=....$ is probably not on. If $f$ is fixed at $x=a$ and $x=b$ then these are the bc's. If $f$ is not fixed then calculus of variation provides natural boundary conditions. I believe these are $f'(a)=f'(b)=0$ but check. In the original integral the term $f f''$ can be integrated by parts to give $f'^{\,2}$ (up to sign and boundary terms). This is why we end up with only a second order equation and not a fourth order one.
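The Euler-Lagrange computation can be double-checked symbolically (a sketch assuming SymPy, whose `euler_equations` helper handles Lagrangians containing $f''$):

```python
from sympy import Function, symbols, simplify
from sympy.calculus.euler import euler_equations

x, A, B, C = symbols('x A B C')
f = Function('f')
L = A*(B**2 - f(x)**2)**2 - C*f(x)*f(x).diff(x, 2)

eq, = euler_equations(L, [f(x)], [x])
print(simplify(eq.lhs))
# Expected: -4*A*(B**2 - f(x)**2)*f(x) - 2*C*f''(x), matching the text.
```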
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Given a decreasing function s.t. $\int_0^\infty f(x)\,dx<\infty,$ prove $\sum_{n=1}^\infty f(na)$ converges Let $f\in C([0,\infty))$ be a decreasing function such that $\int_0^\infty f(x)\,dx$ converges. Prove $\sum_{n=1}^\infty f(na)$ converges, $\forall a>0$ My attempt: By the Cauchy criterion, there exists $M>0,$ such that for $t-1>M:$ $$f(t)=\int_{t-1}^t f(t) \, dx \leq \int_{t-1}^t f(x)\,dx\xrightarrow{t \to \infty} 0$$ Hence, $f$ is non-negative. $f$ is decreasing $\implies f(na)\leq f(a), \forall a>0, n\in \mathbb{N}.$ By integral monotonicity and non-negativity of $f$: $$\int_1^\infty f(nx)\,dx \leq \int_1^\infty f(x)\,dx \leq \int_0^\infty f(x)\,dx$$ Hence $\int_1^\infty f(nx)\,dx$ converges and therefore $\sum_{n=1}^\infty f(na)$ converges. Is that correct? If so, why is continuity necessary ? Is there a simpler way to prove it? Any help appreciated.
This is the integral test for the convergence of series. Make the change of variable $x \mapsto ax$. Since the integral $\int_0^\infty f(ax)\,dx$ still converges, so does the series $\sum_n g(n)$ with $g(n)=f(na)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Characterizing the kernel of a certain map For a given $A \in \mathrm{O}_n(\mathbb{R})$, consider the map \begin{align*} \phi_A: \mathfrak{o}_n(\mathbb{R}) & \to \mathrm{Mat}_{n \times n}(\mathbb{R}) \\ x &\mapsto Ax-xA^T. \end{align*} Recall that $\mathfrak{o}_n(\mathbb{R})$, the Lie algebra of $\mathrm{O}_n(\mathbb{R})$, is isomorphic as a vector space to the space of $n \times n$ antisymmetric matrices . I've found myself stuck trying to answer the following question: for which $A$ is the kernel of $\phi_A$ trivial? My observations so far: * *The image of $\phi_A$ is contained in the subspace of symmetric matrices. *The kernel of $\phi_A$ is isomorphic to the $-1$ eigenspace of $(D \iota)_A$, the derivative of the map \begin{align*} \iota: \mathrm{O}_n(\mathbb{R}) & \to \mathrm{O}_n(\mathbb{R}) \\ B & \mapsto B^{-1} \end{align*} evaluated at $A$. *There exist $A$ for which the kernel of $\phi_A$ is trivial (e.g., $n=2$ and $A$ is rotation by $\pi/2$), and there exist $A$ for which the kernel of $\phi_A$ is nontrivial (e.g., $n$ is arbitrary and $A$ is the identity). My investigations so far lead me to believe that the fixed subspace of $A$ is playing a major role here, but making this intuition rigorous is proving difficult for me. S.O.S.! Any and all insights are welcome.
Hint. By a change of orthonormal basis, we may assume that $A=I_p\oplus-I_q\oplus R_{\theta_1}\oplus\cdots\oplus R_{\theta_m}$, where each $R_{\theta_k}$ denotes a $2\times2$ rotation matrix for an angle $\theta_k\in(0,\pi)$. It is not hard to see that $Ax=xA^T$ has a nonzero skew-symmetric solution if and only if $p\ge2$ or $q\ge2$ or some two $\theta_k$s are equal to each other. In other words, $\phi_A$ has a non-trivial kernel if and only if $A$ has a repeated eigenvalue over $\mathbb C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simplex: duplicate constraints I'm trying to understand how the two-phase simplex algorithm works, this site explains it using a simple example: http://optlab.mcmaster.ca/feng/4O03/Two.Phase.Simplex.pdf I've tried to come up with some edge cases myself, and I'm stuck on this one, what am I doing wrong? Minimize $x$ where $x \ge 1000$ and $x \ge 1000$ Obviously this is a dumb example, but the same issue arises when some constraints are linear combinations of each other. Phase 1 We rewrite this linear program as a system of equations, where $s_1$, $s_2$ are the surplus variables and $a_1$ and $a_2$ are the artificial variables: $\left\{ \begin{array}{c} x - s_1 +a_1=1000\\ x - s_2 +a_2=1000 \end{array} \right.$ We start by minimizing $a_1 + a_2$, so maximize $p = -a_1 - a_2 = (x - s_1 - 1000) + (x - s_2 - 1000) = 2x - s_1 - s_2 - 2000$ $\iff -2x + s_1 + s_2 + p = -2000$ This gives our starting tableau: $$ \begin{array}{c|cccccc|c} &x&s_1&s_2&a_1&a_2&p&A\\ \hline a_1&1&-1&0&1&0&0&1000\\ a_2&1&0&-1&0&1&0&1000\\ \hline p&-2&1&1&0&0&1&-2000 \end{array} $$ After pivoting around column $1$, row $1$: $$ \begin{array}{c|cccccc|c} &x&s_1&s_2&a_1&a_2&p&A\\ \hline x&1&-1&0&1&0&0&1000\\ a_2&0&1&-1&-1&1&0&0\\ \hline p&0&-1&1&2&0&1&0 \end{array} $$ The next column to pivot around is column $2$ (the $s_1$ column), but there's no row that works! There isn't a positive $\frac{A}{pivot}$ ratio: $\frac{1000}{-1} = -1000 \le 0$ $\frac{0}{1} = 0 \le 0$ It seems like every explanation of the algorithm assumes there will always be a row with a positive ratio. It looks like the algorithm got stuck, we didn't even get to Phase 2, but the program definitely has a valid solution. What went wrong? I'm looking for an answer that works in every case, not just this trivial example. Just removing one of the constraints isn't a real solution.
The salient point is that the entries of the RHS have to be non-negative. You calculate the minimum of the fractions; only the entries in the matrix have to be positive. The short notation is $\min\bigg\{\frac{b_i}{a_{ij^*}}\,\bigg|\,a_{ij^*}>0\bigg\} $ where $b_{i}\geq 0$. In your case $\min\bigg\{\frac{b_2}{a_{22^*}}\bigg\}= \min\bigg\{\frac{0}{1}\bigg\}= 0 \quad \color{green} \checkmark $ (a ratio of $0$ is allowed; it just gives a degenerate pivot). Thus the final simplex tableau is $$ \begin{array}{c|cccccc|c} &x&s_1&s_2&a_1&a_2&p&RHS\\ \hline x&1&0&-1&0&1&0&1000\\ s_1&0&1&-1&-1&1&0&0\\ \hline p&0&0&0&1&1&1&0 \end{array} $$
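One can also cross-check the optimum with an off-the-shelf solver (a sketch assuming SciPy is available; it confirms the answer, not the tableau mechanics):

```python
from scipy.optimize import linprog

# Minimize x subject to x >= 1000 stated twice; linprog wants A_ub @ x <= b_ub.
res = linprog(c=[1], A_ub=[[-1], [-1]], b_ub=[-1000, -1000],
              bounds=[(None, None)])
print(res.x)  # expect [1000.]
```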
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of a series of translations of a Lebesgue integrable function Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a Lebesgue integrable function. Prove that $$\sum_{n=1}^{\infty} \frac{f(x-\sqrt{n})}{\sqrt{n}}$$ converges for almost every $x \in \mathbb{R}$. My tactic here (which of course might lead me nowhere) is to express this series as an integral or sum of integrals and use in some way the fact that $f$ is integrable, along with a convergence theorem of course. From integrability of $f$ we know that $f$ is finite almost everywhere, which I believe would help me in my proof. But I don't know exactly how to start with this. Any useful hint would be appreciated. I don't want a full solution to this. Thank you in advance.
A brute-force method: Define $g(x):= \sum_{n=1}^{\infty} \frac{|f(x-\sqrt n)|}{\sqrt n}$, which a priori might be infinite at many points. Fix some $j \in \Bbb Z$. By Tonelli's theorem we may interchange sum and integral to compute \begin{align*}\int_j^{j+1} g(x)dx &=\sum_{n=1}^{\infty} \int_j^{j+1}\frac{|f(x-\sqrt n)|}{\sqrt n} dx \\ &=\sum_{n=1}^{\infty} n^{-1/2}\int_{j-\sqrt n}^{j+1-\sqrt n} |f(x)|dx \\ &=\sum_{k=1}^{\infty} \sum_{n=k^2}^{(k+1)^2-1} n^{-1/2}\int_{j-\sqrt n}^{j+1-\sqrt n} |f(x)|dx\\&= \sum_{k=1}^{\infty} \int_{\Bbb R}\bigg(\sum_{n=k^2}^{(k+1)^2-1}n^{-1/2}\cdot1_{[j-\sqrt n,\;j+1-\sqrt n)}(x) \bigg)|f(x)|dx\end{align*} Now if $k^2 \leq n <(k+1)^2$, then $n^{-1/2} \leq k^{-1}$ and also $[j-\sqrt n,j+1-\sqrt n) \subset [j-k-1,j-k+1)$. Therefore we find that \begin{align*}\sum_{n=k^2}^{(k+1)^2-1}n^{-1/2}\cdot 1_{[j-\sqrt n,\;j+1-\sqrt n)}(x) &\leq \big[(k+1)^2-k^2\big]\cdot k^{-1} \cdot 1_{[j-k-1,j-k+1)}(x) \\ &=(2k+1) \cdot k^{-1} \cdot 1_{[j-k-1,j-k+1)}(x) \\ & \leq 3 \cdot 1_{[j-k-1,j-k+1)}(x)\end{align*} Consequently, \begin{align*}\int_j^{j+1} g(x)dx & \leq 3\sum_{k=1}^{\infty} \int_{j-k-1}^{j-k+1}|f(x)|dx \\ & = 3\sum_{k=1}^{\infty} \int_{j-k-1}^{j-k}|f(x)|dx+3\sum_{k=1}^{\infty} \int_{j-k}^{j-k+1}|f(x)|dx \\ &= 3\int_{-\infty}^{j-1} |f(x)|dx + 3\int_{-\infty}^j |f(x)|dx \\ &\leq 6 \|f\|_{L^1}<\infty\end{align*} We conclude that $g(x)<\infty$ for a.e. $x \in [j,j+1]$. But $j\in \Bbb Z$ was arbitrary, so we find that $g(x)<\infty$ for a.e. $x \in \Bbb R$. With a little more effort, this argument may be generalized to show that $\sum_n n^{\alpha-1}|f(x-n^{\alpha})|<\infty$ for any $\alpha \in (0,1]$. This was the special case $\alpha=\frac{1}{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2402988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Prove that for all $k \geq 1$, $\mathbb{log}(n)^k \in o(n)$ without using limits Prove that for all $k \geq 1$, $[\mathbb{log}(n)]^k \in o(n)$ without using limits, i.e prove that for any $c > 0$, there is a $n_0 > 0$ such that for all $n \geq n_0$, $[\mathbb{log}(n)]^k \lt cn$ I first proved it using limits and L'hopital's Rule by showing inductively that $\lim \frac{\mathbb{log}(n)^k}{n} = 0$ for any integer $k \geq 1$, but I'm not sure if my proof is right, and regardless, I need to prove the result without the use of limits. I did the first part, but then I'm stuck and I don't know what to do with the power. I've been trying to solve this for hours. First, we prove the case where $k=1$ by showing that $e^{cn} > n$, then taking the logarithms; Let $c \geq 1$, then $e^{cn} \geq e^n$ and for all $n \geq 1$, $e^{cn} \gt n$ since $e^n =1+n+n^2 /2!+n^3 /3!+⋯ \gt n$. Otherwise, $ 0 \lt c \lt 1$, and for all $n \geq 2/c^2$, we have $e^{cn} = 1 + cn + c^2n^2/2 + c^3n^3/6 +... \gt c^2n^2/2 \geq (2/c^2)(c^2/2)n = n$, so $e^{cn} \gt n$. Thus, for any $c > 0$, there is a $n_0 = \mathbb{max}(1,2/c^2)$ such that $e^{cn} \gt n$, or equivalently, $\mathbb{log}(n) \lt cn$ for all $n \geq n_0$, and by definition this means that $\mathbb{log}(n) \in o(n)$. What to do from here?
Let $n= x^k$; we need to prove $\log(x^k)^k = o(x^k)$. Since both sides are eventually positive, taking $k$-th roots shows this is equivalent to $$k\log(x) = \log(x^k) = o(x),$$ which is what you just proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show $\frac{2-x}{2+x} \le e^{-x}$ for $x\ge0$? Let $x\geq0$. Show that: $$\frac{2-x}{2+x} \le e^{-x}.$$ I am having some trouble proving it. Would you give me a hint?
We need to prove that $$(2-x)e^x\leq2+x$$ or $$x(e^x+1)\geq2(e^x-1)$$ or $f(x)\geq0$, where $$f(x)=x-\frac{2(e^x-1)}{e^x+1}$$ and calculate $f'(x)$: $$f'(x)=1-\frac{4e^x}{(e^x+1)^2}=\frac{(e^x-1)^2}{(e^x+1)^2}\geq0,$$ which gives $f(x)\geq f(0)=0$ and we are done!
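A quick numerical spot check of the inequality (a minimal Python sketch; the tiny tolerance only guards the equality case $x=0$ against rounding):

```python
import math

for i in range(101):
    x = 0.1 * i  # grid over [0, 10]
    assert (2 - x) / (2 + x) <= math.exp(-x) + 1e-12, x
print("inequality holds on the sampled grid")
```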
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Part of sum inequality to number of terms Suppose you have a series $$ S=\sum_{i=1}^n a_i$$ and rearrange the terms such that, $$ a_1\geq a_2 \geq \cdots \geq a_l \geq a_i\geq 0 $$ for all $$ i > l$$, then it should be obvious that $$ \frac{a_1+a_2+\cdots+a_l}{S}\geq \frac{l}{n}$$ How would one go about proving this? Using upper and underestimates won't work.
Hint: first rearrange your inequality to $\displaystyle \frac{S}{n} \leqslant \frac{a_1+a_2+\ldots+a_l}{l}$. Then note that $$\frac{S}{n} = \frac{l}{n} \cdot \frac{a_1+a_2+\ldots+a_l}{l} + \left( 1-\frac{l}{n} \right) \cdot \frac{a_{l+1} + \ldots + a_n}{n-l}$$ which is a weighted arithmetic mean of two numbers, where the first $\frac{a_1+\ldots+a_l}{l}$ is bigger.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the probability that the digit sum of a randomly chosen integer between 0000 and 9999 is divisible by 5? If I have a randomly selected integer between 0000 and 9999, what is the probability that the digit sum of that number is divisible by 5? [E.g. 1234 = 1 + 2 + 3 + 4 = 10] I've started off with knowing that I have 2 options for the last digit, but I'm not sure where to go from there.
In this case we had it easy since the base 10 is a multiple of the divisor 5. What happens if we have $n$ digits in base $B$ and are interested in divisor $d$? Let $\omega \neq 1$ be a $d$th root of unity, and consider $$ P(\omega) = \left(\frac{1 + \omega + \omega^2 + \cdots + \omega^{B-1}}{B}\right)^n = \left(\frac{1-\omega^B}{B(1-\omega)}\right)^n. $$ The coefficient of $\omega^r$ is the probability that the remainder is $r$. We can extract the probability of a zero remainder by going over all roots of unity (using $P(1) = 1$): $$ \Pr[r=0] = \frac{1}{d} + \frac{1}{d} \sum_{t=1}^{d-1} \left(\frac{1-\omega^{tB}}{B(1-\omega^t)}\right)^n. $$ It is not immediately clear why this expression is real. The reason is that the terms for $t$ and $d-t$ are complex conjugates (since $\omega^{d-t} = \overline{\omega^t}$). Perron-Frobenius theory tells us that the norm of the eigenvalues $\frac{1-\omega^{tB}}{B(1-\omega^t)}$ is strictly less than 1, and so the convergence to $1/d$ is exponentially fast.
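For the concrete question ($B=10$, $d=5$, $n=4$) every correction term vanishes because $\omega^{tB}=1$, so the probability is exactly $1/5$; a brute-force count agrees (a minimal Python sketch):

```python
# Count integers 0000..9999 whose digit sum is divisible by 5.
count = sum(sum(map(int, f"{n:04d}")) % 5 == 0 for n in range(10000))
print(count, count / 10000)  # expect 2000 and 0.2
```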
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 3 }
How do I prove this method of determining the sign for acute or obtuse angle bisector in the angle bisector formula works? The formula for finding the angular bisectors of two lines $ax+by+c=0$ and $px+qy+r=0$ is $$\frac{ax+by+c}{\sqrt{a^2+b^2}} = \pm\frac{px+qy+r}{\sqrt{p^2+q^2}}$$ I understand the proof of this formula but I do not understand how to determine which sign is for acute bisector and which one for obtuse. I can find the angle between a bisector and a line, and if it comes less than $45^\circ$ then it is acute bisector. But that is a lengthy method and involves calculation. My book says, if $ap+bq$ is positive then the negative sign in the formula is for acute bisector. I want a proof of this method. Edit: Using the method for finding the position of two points with respect to a line is okay for the proof.
If you follow the method described by me here, we simply have to check the angle between the normals after finding the bisector vectors: Now, check the angle between $n_1$ and $n_2$: if it is greater than ninety degrees then it means $\chi$ is the obtuse bisector, and if not, $\chi$ is the acute bisector. To understand why, check the linked answer in equi-angle form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Set operations on connected sets in $R^2$ An exercise wants me to give an example of the following in $R^2$ * *$A$ and $B$ are connected but $A \cap B$ is not. *$A$ and $B$ are connected but $A \setminus B$ is not. *$A$ and $B$ are not connected but $A \cup B$ is. I think I found an example for 3. Take two curves with "holes" in them but holes should not coincide. Then their unions will fill their respective "holes" and will be connected. As you can see, I am moving only by geometric intuition. But I think this is the purpose of this kind of exercise. So, what are some examples of 1 and 2, and also if you have, better examples of 3.
$1)$ Take two circles $A,B$ with the same radius and different centers (one a translate of the other) that intersect each other. The intersection of these circles would be a two-point set, namely $A\cap B=\{(a_1,a_2),(b_1,b_2)\}$. Then $A\cap B$ is not a connected set, being a union of two singletons, which are closed sets with respect to the usual topology on the plane. $2)$ Take a closed disc $B(x_0,r)$ as $A$ and as $B$ a line segment which passes through its center and connects two antipodal points. These two sets are connected and $B(x_0,r)$ \ $B$ is not connected. $3)$ Take as $A$ a circle which misses two antipodal points and as $B$ the set which contains those two points. Clearly these two sets are disconnected but their union is the circle, which is connected. I hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
prove that every non-empty open set contains an open sphere disjoint from $A$. If $A$ is nowhere dense in $(X,d)$ then prove that every non-empty open set contains an open sphere disjoint from $A$. Suppose that $A$ is n.w.d. in $X$, and that there is a non-empty open set, say $B$, such that every open sphere $S_r(x)\subseteq B$, $x\in X$, $r>0$, satisfies $S_r(x) \cap A\not= \emptyset.$ Then where is the contradiction?
Let $B$ be a non-empty open set. Suppose every open sphere inside $B$ intersects $A$. Then every point of $B$ belongs to the closure of $A$, so $B\subseteq \overline{A}$ and $\overline{A}$ has non-empty interior. But this contradicts the definition of a nowhere dense set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Pull back of universal cover is universal iff the map induces a fundamental group isomorphism I'm trying to solve the following task: Let $f: Y \rightarrow X$ be a map and $p: \bar{X} \rightarrow X$ be the universal cover of $X$. The spaces Y and X are path-connected. The pull back of the cover $p$ by $f$ is a universal cover of $Y$ if and only if $f$ induces an isomorphism between the fundamental groups of $Y$ and $X$. I think I have the easier direction. Let $Z$ be the space of the pull back and let $p' : Z \rightarrow Y$ and $p_2 : Z \rightarrow \bar{X}$ be the appropriate projections. If $f$ induces an isomorphism but $Z$ isn't simply connected, we can take a loop in $Z$ that isn't trivial. Then on one hand the composition $f \circ p'$ cannot send it to a trivial loop since it induces a monomorphism on the fundamental groups (as a composition of a cover and a function inducing an isomorphism). On the other hand the composition $p \circ p_2$ does send any loop to a trivial one, since $p$'s domain is the simply connected space $\bar{X}$. Is this correct? And how would I approach the other direction? I feel like this should be fairly straightforward from the definition of the pull back, but it doesn't feel that intuitive to me yet.
You want to be a little careful, since the pullback $Z = Y \times_X \tilde{X}$ need not be path-connected even if $X$ and $Y$ are. (For example, take $f: * \to S^1$.) However, each path component of $Z$ will be simply-connected. I think your proof works, but here's another way to think about this problem. Associated to a pullback square $$ \begin{array}{ccc} Z & \rightarrow & \tilde{X} \\ \downarrow & & \downarrow \\ Y & \xrightarrow[f]{} & X, \end{array} $$ there is a long exact sequence (generalizing the long exact sequence of a fibration) $$ \cdots \to \pi_2 \tilde{X} \times \pi_2 Y \to \pi_2 X \to \pi_1 Z \to \pi_1 \tilde{X} \times \pi_1 Y \to \pi_1 X \to \cdots $$ (This is a little sloppy since I've omitted the basepoints from the notation.) Let's consider the first map $\pi_2 \tilde{X} \times \pi_2 Y \to \pi_2 X$. Since the universal covering $\tilde{X} \to X$ induces an isomorphism in $\pi_n$ for $n > 1$, this map is surjective. Moreover, $\pi_1 \tilde{X} = 0$ by construction. So our long exact sequence becomes $$ 0 \to \pi_1 Z \to \pi_1 Y \xrightarrow{f_*} \pi_1 X \to \cdots $$ So assuming I haven't made a mistake, we deduce that $\pi_1 Z = 0$ iff $f_*: \pi_1 Y \to \pi_1 X$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2403914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to determine probability of an outcome when the number of tries is variable. Given a known probability of an event being successful, how do you calculate the odds of at least one successful outcome when the number of times you can attempt is determined by a separate dice roll. For example: I know the probability of a favorable outcome on dice X is "P". I first roll a six sided dice. That dice roll determines the number times I can roll dice X. What is the probability that I will have at least one successful roll of dice X as a function of "P"?
Let's say the outcome of the first dice roll is $k$ ($1 \le k \le 6$), each with probability $\frac{1}{6}$. P(at least 1 success) = 1 - P(no success) P(at least 1 success) = P(k=1)(1 - P(no success with 1 throw)) + P(k=2)(1 - P(no success with 2 throws)) + ... (6 terms) Since P(k=1)=P(k=2)=...=P(k=6)=1/6, P(at least 1 success) = $\frac{1}{6} \sum_{k=1}^6$ (1 - P(no success with k throws)) Thus the required answer is: P(at least 1 success) = $\frac{1}{6} \sum_{k=1}^6 (1-(1-p)^k)$ where $p$ is the probability of a favorable outcome given in the question.
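A Monte Carlo cross-check of the closed form (a minimal Python sketch; the names are illustrative):

```python
import random

def prob_at_least_one(p):
    """(1/6) * sum_{k=1}^{6} (1 - (1-p)^k)."""
    return sum(1 - (1 - p)**k for k in range(1, 7)) / 6

p, trials, hits = 0.3, 200_000, 0
for _ in range(trials):
    k = random.randint(1, 6)  # first die: number of attempts allowed
    if any(random.random() < p for _ in range(k)):
        hits += 1
print(hits / trials, prob_at_least_one(p))  # the two should roughly agree
```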
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to evaluate $\int_0^1 \mathrm e^{-x^2} \,\mathrm dx$ using power series? I'm trying to evaluate $$\int_0^1 \mathrm e^{-x^2} \, \mathrm dx$$ using power series. I know I can substitute $-x^2$ for $x$ in the power series for $\mathrm e^x$: $$1-x^2+ \frac{x^4}{2}-\frac{x^6}{6}+ \cdots$$ and when I calculate the antiderivative of this I get $$x-\frac{x^3}{3}+ \frac{x^5}{5\cdot2}-\frac{x^7}{7\cdot6}+ \cdots$$ How do I evaluate this from $0$ to $1$?
$$\left[ x-\frac{x^3}{3}+ \frac{x^5}{5\cdot2}-\frac{x^7}{7\cdot6}+ \dots \right]_0^1 = \left( 1-\frac{1}{3} + \frac{1}{5 \cdot 2} - \frac{1}{7 \cdot 6} + \dots \right) - 0 = \sum_{n=0}^\infty \frac{(-1)^{n}}{(2n+1)n!}$$ So $\int_{0}^{1}e^{-x^2}\,dx=\sum_{n=0}^\infty \frac{(-1)^{n}}{(2n+1)n!}$, that is, the answer is whatever the series converges to.
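Numerically the partial sums converge very fast to $\frac{\sqrt\pi}{2}\operatorname{erf}(1)\approx 0.746824$, the closed form of the integral (a sketch using only the Python standard library):

```python
import math

s = sum((-1)**n / ((2*n + 1) * math.factorial(n)) for n in range(10))
print(s, math.sqrt(math.pi) / 2 * math.erf(1))  # both ~0.7468241
```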
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
$\square+\square+\square=30$, with boxes filled using $1, 3, 5, 7, 9, 11, 13, 15$, possibly repeated. How? From the days I started to learn Maths, I have been taught that adding an odd number of odd numbers always gives an odd answer; e.g., $$3 + 5 + 1 = 9$$ OK, but look at this question This question was solved and the answer was 30; how is it possible? Need a valid explanation please.
"you can also repeat the numbers" — Wonder if that means $\,11{,}5+13{,}5+5=30\,$ (where the $\,,\,$ comma works as decimal separator).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 3 }
If $a$ is a positive integer that is not a perfect square then $[\Bbb Q(a^{\frac{1}{4}}):\Bbb Q]=4$ If $a$ were a square-free number I am done by the Eisenstein criterion, but if $a$ is not a perfect square how do I use the Eisenstein criterion? The book by Joseph Rotman gives an argument that there exists a prime $p$ such that $p$ divides $a$ but $p^2$ does not divide $a$, but I don't get this.
May be Rotman means the following. Consider the prime factorization $$ a=\prod_{i=1}^kp_i^{a_i}. $$ Because $$\Bbb{Q}(a^{1/4})=\Bbb{Q}([a/p^4]^{1/4})\tag{*}$$ for any prime $p$, we can without loss of generality assume that $a_i<4$ for all $i$. As we assumed that $a$ is not a perfect square we also know that at least one of exponents, say $a_{i_0}$, is odd. It sounds like you know what to do when $a_{i_0}=1$, so the troublesome case is that of $a_{i_0}=3$. We can deal with that case by a trick similar to $(*)$. All we need to do is to observe that $$ \Bbb{Q}(a^{1/4})=\Bbb{Q}(a^{-1/4})=\Bbb{Q}(A^{1/4}), $$ where $$ A=\frac{\prod_{i=1}^kp_i^4}a=\prod_{i=1}^kp_i^{4-a_i}. $$ Here $4-a_{i_0}=1$, so Eisenstein's criterion proves that $x^4-A$ is irreducible over $\Bbb{Q}$ and you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Expected stopping time Let $X_1,X_2,\ldots,$ be iid uniform in $[0,1]$ and define $S_n:=\sum_{i=1}^n X_i$. Let $\tau$ be the smallest $n$ for which $S_n>1$. It is known that $E[\tau]=e\approx 2.7183$, and this can be easily shown using the Irwin-Hall https://en.wikipedia.org/wiki/Irwin%E2%80%93Hall_distribution distribution. Indeed, $$ P(\tau\ge n) = P(S_{n-1}< 1)=1/(n-1)!,$$ whence $$ E[\tau]=\sum_{n=1}^\infty P(\tau\ge n) = \sum_{n=0}^\infty 1/n! = e.$$ Here, all of the heavy-hitting is done by the Irwin-Hall distribution, and I'm wondering: is there a clever "soft" martingale argument that gives the answer with less heavy-hitting?
For $x \in [0, 1)$, let $\tau(x)$ denote the smallest $n$ for which $S_n > x$. Let $g(x) := E[\tau(x)]$. We're interested in $g(1)$. Using its definition, $g$ satisfies an integral recurrence $$ g(x) \ = \ \int_{y = 0}^{x} [1 + g(x - y)] \ dy \ + \ \int_{y = x}^{1} 1 \ dy $$ which if I'm not mistaken, is satisfied by the function $e^{x}$. It also satisfies the "limit boundary condition" $\lim_{x \rightarrow 0} g(x) = 1$, which I believe can be established using crude arguments. I am not going to attempt to argue that these two conditions force the unique solution $g(x) = e^{x}$, but it looks true and doable. (Note, this might be isomorphic to some of the work on Irwin-Hall; I haven't checked.)
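A Monte Carlo check that $E[\tau] = g(1) = e$ (a minimal Python sketch):

```python
import random

trials, total = 200_000, 0
for _ in range(trials):
    s, n = 0.0, 0
    while s <= 1.0:      # tau is the first n with X_1 + ... + X_n > 1
        s += random.random()
        n += 1
    total += n
print(total / trials)    # should be close to e = 2.71828...
```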
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Complex number quadratic Is the following equation $$z^2 + z^* + \frac14 = 0$$ where $z$ is a complex number and $z^*$ is its conjugate completely separate from ordinary quadratic equations? i.e. can I use the discriminant, quadratic formula etc. If not what, what type of equation is this? Can z* be treated independently from z? How is the degree related to the number of roots which is 4 (2 real, 2 complex) I believe. p.s. which specific topics could I look at to help me understand this further?
Here is a more geometric approach. We have $$z^{2}+z^{*} = -\frac{1}{4} \in \mathbb{R}$$ so $z^{2}+z^{*}$ is real. Write this in polar form $$r^{2}e^{2i\theta}+re^{-i\theta}=-\frac{1}{4}$$ to obtain the constraint $$r^{2}\sin(2\theta)-r\sin(\theta)=0$$ so that either $r=0$ (so $z=0$), or $\sin(\theta)=0$ (so $z \in \mathbb{R}$), or $$2r\cos(\theta)-1=0 \implies r\cos(\theta)=\frac{1}{2}$$ which we recognise as the condition $\Re(z)=\frac{1}{2}$. $z=0$ is not a solution; for real $z$ we get the repeated solution $z=-1/2$, and if we say $z=1/2+iy$, the equation becomes: $$\frac{1}{4}+iy-y^{2}+\frac{1}{2}-iy+\frac{1}{4}=0$$ which simplifies to $y^{2}=1$. We easily verify that $1/2 \pm i$ are both solutions. As you note, this is four roots (with multiplicity) - contrast this with an ordinary quadratic equation, which always has $2$ roots over $\mathbb{C}$.
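The three roots are easy to verify directly with complex arithmetic (a minimal Python sketch):

```python
for z in (-0.5 + 0j, 0.5 + 1j, 0.5 - 1j):
    print(z, z**2 + z.conjugate() + 0.25)  # each residual should be 0
```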
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Ax=b has no solution - version 3 I am struggling to prove the following theorem, which popped up in a linear optimization textbook. Theorem : For $A \in \mathbb{R}^{m \times n}, b\in \mathbb{R}^{m}$, $ \quad Ax=b $ has no solution if and only if there exists a vector $y \in \mathbb{R}^m $ with $ A^Ty =0 $ and $ b^Ty \ne 0.$ Proof : $(\Longleftarrow)$ Assuming such a vector $y \in \mathbb{R}^m$ exists, suppose $Ax=b$ has a solution. Then, consider $$ A^Ty = 0 $$ Multiply both sides by $x^T$ and get, $$ (Ax)^Ty =0$$ Since $Ax=b$, $$ b^Ty = 0. $$ This is a contradiction with $b^Ty\ne0.$ Therefore, our assumption that $Ax=b$ has a solution is wrong. I could not prove the other way around ($\Longrightarrow$). I think that by Gauss-Jordan elimination we should find such a vector $y$ with the desired property, but I could not proceed. Any help will be appreciated.
Let me identify the matrix $A$ with the linear map $T_A \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ defined using left multiplication ($T_A(x) = Ax$). There are two basic relations between the kernel and image of $A$ (or, more precisely, $T_A$) and the kernel and image of $A^T$ given by $$ \ker(A) = \operatorname{im}(A^T)^{\perp}, \operatorname{im}(A) = \ker(A^T)^{\perp}. $$ Let's assume for a second we know those relations. Then $Ax = b$ has no solution if and only if $b \notin \operatorname{im}(A)$ if and only if $b \notin \ker(A^T)^{\perp}$ if and only if there exists $y \in \mathbb{R}^m$ such that $A^T(y) = 0$ and $\left< b, y \right> = b^T y \neq 0$. Thus, it is enough to prove that $\operatorname{im}(A) = \ker(A^T)^{\perp}$. We don't need the first relation here, but it is useful to remember, and it follows from the second by taking $A = A^T$ and using $(A^T)^T = A$ and $(V^{\perp})^{\perp} = V$. Let $y \in \ker(A^T)$ and let $b \in \operatorname{im}(A)$. Choose $x$ such that $Ax = b$. Then $$ \left< Ax, y \right> = \left< x, A^T y \right> = 0 $$ which shows that $\operatorname{im}(A) \subseteq \ker(A^T)^{\perp}$. Since $$ \dim \ker(A^T)^{\perp} = m - \dim(\ker(A^T)) = m - (m - \dim\operatorname{im}(A^T)) = \operatorname{rank}(A^T) = \operatorname{rank}(A) = \dim \operatorname{im}(A)$$ we see that the dimensions of both vector spaces are equal and so we must have $\operatorname{im}(A) = \operatorname{ker}(A^T)^{\perp}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is a number meeting these conditions divisible by forty-nine? I am not a mathematician, I'm a linguistics PhD student. As part of my research I need to put various convoluted sentences through various syntactic transformations and see then check whether people think they are true or not. Mathematical statements (well, some of them) suit my purposes very well, because they are less context dependent and can be straightforwardly assigned a truth value (i.e. be deemed true or false). The problem is that I'm not a mathematician. When these sentences get a bit convoluted, I have a bit of a problem knowing whether they are true or false myself (before they undergo various syntactic transformations). I have a particular sentence which states that if a given number is: * *an integer *divisible by 7 (meaning it will yield an integer if divided by 7) *a square number then it is divisible by 49. I intuitively believe this to be correct (although I can't explain why). Is this actually true? I don't want to waste everybody's time by starting with an untrue untransformed sentence.
Yes. The reason is that the only way a square number can be divisible by $7$ is if its square root is divisible by $7$. So your number is the result of squaring a multiple of $7$, and when you do that you get a multiple of $49$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 8, "answer_id": 0 }
Proof Silverman-Toeplitz theorem Proof Silverman-Toeplitz theorem: Let $A$ be an infinite matrix with entries $(a_{ij})$. Two sequences $\sigma $ and $s$ are related by this matrix as follows $$\sigma_i =\sum_{j=0}^{\infty} a_{ij}s_j$$ Prove that for a convergent sequence $s$, $\sigma$ converges to the same value iff $$\lim_{i\to \infty} a_{ij} =0 \quad \text{for each } j$$ $$\lim_{i\to \infty} \sum_{j=0}^{\infty} a_{ij}=1$$ $$\sup_i \sum_{j=0}^{\infty} |a_{ij}|<\infty$$ I have no problem proving $\leftarrow$, and the first two conditions for $\rightarrow$ are easy: for the first one put $s_k=\delta_{kj}$ then $0=\lim_{i\to \infty }\sigma_i =\lim_{i\to \infty }\sum_{k=0}^{\infty} a_{ik}s_k=\lim_{i\to \infty } a_{ij}$. For the second one put $s_k=1$ then $1=\lim_{i\to \infty }\sigma_i =\lim_{i\to \infty }\sum_{j=0}^{\infty} a_{ij}$. The third one is more problematic. I think the Banach-Steinhaus theorem could be applied here. I already used it to prove that $\sum_{j=0}^{\infty} |a_{ij}| <\infty$, so that the hypothesis is satisfied if the linear operators are taken to be the rows of $A$. That leaves me with the two possibilities: either they are all bounded or they diverge to infinity on a dense $G_{\delta}$. How can I eliminate the latter?
Although this answer is very late, I still think it might be useful to others. Let $A_n$ be the $n$th row of $A$ and let $a_{nk}$ denote its $k$th element. If $\sum_k |a_{nk}|$ does not converge we can choose an index sequence $k_r$ such that $k_0 = 0$, and $$ \sum_{k = k_{r-1}}^{k_r-1} |a_{nk}| > r \quad \text{for $r \geq 1.$}$$ Let $$s_k = \frac{\operatorname{sgn}(a_{nk})}{r}, \quad k_{r-1} \leq k < k_r.$$ Then $$\sum_{k=0}^\infty a_{nk}s_k = \sum_{r=1}^\infty \sum_{k=k_{r-1}}^{k_r-1} \frac{|a_{nk}|}{r} > \sum_{r=1}^\infty 1 = \infty.$$ This is impossible since $(A_ns)_{n=0}^\infty$ is a convergent sequence and hence $A_ns$ must be finite. So in fact $A_n$ is absolutely summable, and hence defines a bounded linear functional on the space $l^\infty$ of bounded sequences, and hence on the space $c$ of convergent sequences with the inherited supremum norm. The family $(A_n)$ is pointwise bounded on each convergent sequence $s$, and by the Uniform Boundedness Principle we get $\sup_n \|A_n\|_c <\infty$, where $\|\cdot\|_c$ is the operator norm $c \to \mathbb{C}$. This operator norm is easily shown to be equal to the absolute sum, but we only need to show that it is at least as large as the absolute sum. For $\epsilon > 0$ choose $r$ such that $$\sum_{k = r+1}^\infty |a_{nk}| < \epsilon$$ and set $$x_k = \begin{cases} \operatorname{sgn}(a_{nk}) & k \leq r\\ 0 & \text{else} \end{cases} $$ Then $$|A_nx| = \sum_{k=0}^r |a_{nk}| \geq \sum_{k=0}^\infty |a_{nk}| - \epsilon.$$ This shows that $\|A_n\|_c \geq \sum_{k=0}^\infty |a_{nk}|$, which completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2404920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the quadratic equation from its given roots. If $\alpha$ and $\beta$ are the roots of the equation $ax^2 + bx + c =0$ , then form an equation whose roots are: $\alpha+\dfrac{1}{\beta},\beta+\dfrac{1}{\alpha}$ Now, using Vieta's formula, For new equation, Product of roots ($P$) = $\dfrac{a^2+c^2+2ac}{ac}$ Sum of roots ($S$) = $\dfrac{2c-b}{c}$ Hence, the required equation is: $acx^2 - ax(2c-b)+(a+c)^2=0$ // as the quadratic equation is $x^2-Sx +P=0$ But the answer key states that the answer is: $acx^2 +b(a+c)x +(a+c)^2=0$ I am really doubtful of this answer. Where have I gone wrong (or is the answer in the key wrong?)?
$$S=\alpha+\beta+\frac{1}{\alpha}+\frac{1}{\beta}=(\alpha+\beta)\left(1+\frac{1}{\alpha\beta}\right)$$ If I write this in terms of $a,b,c$ I get $$S=-\frac{b}{a}\left(1+\frac{a}{c}\right)=-\frac{b(a+c)}{ac}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
How to think about open sets and continuous functions on discrete metrics I'm working through practice problems for the Math Subject GRE (Which seems to me to be all Analysis and Algebra even though I'd heard it was mostly multivariate calculus). This problem came up: Let $\mathbb{Z^+}$ be the set of positive integers and let $d$ be a metric on $\mathbb{Z^+}$ be defined as follows: $d = \begin{cases} 1,& \text{if } m \neq n\\ 0,& \text{if } m = n \end{cases}$ Which of the following is true: 1) For all n $\in \mathbb{Z^+}$,$\{n\}$ is open 2) Every subset of $\mathbb{Z^+}$ is closed 3) Every real valued function defined on $\mathbb{Z^+}$ is continuous. I have a lot of trouble assessing this claims with a discrete metric and I would like some help clarifying the way I think about them. All three are true, but it's not clear to me why. More specifically: 1) Is every point of such a set an interior point? To assess this, does the open ball we draw have to be (n-1, n+1)? Does it make sense to draw a smaller open ball than this if distances in this metric are either 0 or 1? I feel like the natural language meaning of "interior point" fails here, because the natural idea of "inside" doesn't make sense. 2) This obviously relies on one, since we test for being closed by examining if the complement is open. 3) It's not clear to me why this is true; part of that is that, as in 1), it's not clear if we can choose epsilons and deltas to be values other than 1 or 0.
$1.$ Open balls in a metric help you determine just how close two points are. In a way, the more open balls around one that contain the other, the closer they are. The discrete metric says all points are equidistant from each other. So, there is a smallest closed ball containing them all, and any open ball of radius at most $1$ contains only its center. $3.$ The $\varepsilon$ and $\delta$ in the definition of continuity can be any positive real numbers, including anything between $0$ and $1$: taking $\delta = 1/2$, the condition $d(m,n)<\delta$ forces $m=n$, so every real-valued function on $\mathbb Z^+$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
a problem using upper bounds Let $S$ be a non-empty subset of the real numbers which is bounded above. Let $c$ be a real number and define $T =\{cx\mid x\in S\}$. Show that, if $c > 0$, then $T$ is non-empty, bounded above and that $\sup T = c \sup S$. Give an example of a set $S$ as above, such that with $c = -1$, the set $T$ is still bounded above but $\sup T \neq c\sup S$. I have made several attempts but my proofs are very inelegant and my friend has guaranteed there is a much shorter proof than mine (I have attached my proof). If someone could please help me with an elegant proof it would be very helpful.
Since $S$ is bounded above, $\overline s=\sup S<\infty .$ We show that $\sup T=c\overline s:$ $a). \ \overline s$ is an upper bound for $S$ by definition of $\sup.$ Then, by definition of $T,$ and the fact that $c>0,\ c\overline s$ is an upper bound for $T$. $b).$ Let $t$ be $\textit {any}$ upper bound for $T.$ Then, by definition of $T,\ t/c$ is an upper bound for $S,$ and then $\overline s\le t/c$ and so $c\overline s\le t,$ which implies that $c\overline s$ is the least upper bound for $T$. For a counterexample in case $c=-1$ take $S=\left \{ -1,1 \right \}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The closed unit ball to generate a linear normed space. Let $X$ be a linear normed space and let B denote the closed unit ball of $X$. Then we can stretch the unit ball to get every vector in $X$: $$X=\bigcup_{n=1}^\infty nB.$$ Is this true? Why?
Indeed, in a normed space every neighbourhood of $0$ is absorbing, as this property is also called. For, if $x \in X$, take $n = 1 + \lceil\|x\|\rceil \in \mathbb{N}$ so that $\|x\| < n$. Then $\|\frac{1}{n}\cdot x\| = \frac{1}{n}\|x\| <1$. So $x = n \cdot (\frac{1}{n}\cdot x) \in nB$, as required. In fact, in any topological vector space every neighbourhood of $0$ is absorbing; this follows from the continuity of scalar multiplication. (Normability is a separate matter: by Kolmogorov's criterion, a Hausdorff locally convex space is normable exactly when it has a bounded convex neighbourhood of $0$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does independence necessarily mean uncorrelatedness? We all know that two independent events are uncorrelated, don't we? Nonetheless, we can find events that are correlated, yet independent, such as the examples found on the Spurious Correlations website$^*$. Is there a problem here or is it just me who's missing a ring in the chain? $^*$ The website is actually meant to show that correlation doesn't imply causality, but I think we can agree that the variables shown are independent (i.e., the occurrence of one does not affect the occurrence of the other).
You should keep stochastic independence distinct from causal independence. Two random variables that are stochastically independent are uncorrelated (independence implies zero covariance, whenever the covariance exists). Two random variables that are causally independent ($A$ does not imply/cause $B$, nor vice versa) may be correlated. It is also possible that some third random variable $C$ separately influences both $A$ and $B$, making them correlated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to derive a Ring from a Monoid? Multiplication is just repeated addition, so we can derive multiplication from addition. Addition over integers is a monoid. Addition and multiplication over integers form a Ring, so there's one example where this is possible, but addition and multiplication have more properties, for example addition is commutative. I'm not sure if it's possible to derive the identity element of multiplication from the Monoid (in the general ring-sense of multiplication and addition). Is it possible to derive a Ring from a Monoid in general?
It depends on what you want to do. For any monoid $M$ there is always the monoid ring $\Bbb Z[M]$. This is the left adjoint to the forgetful functor from rings to monoids. If you want to take a commutative monoid as the ground for the underlying group of the ring, you probably want to pass to its Grothendieck group first. But this could be any abelian group. And if the ring is required to have a multiplicative identity element, this cannot always be achieved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving $\sqrt{3+\sqrt{13+4\sqrt{3}}} = 1+\sqrt{3}$. How would I show $\sqrt{3+\sqrt{13+4\sqrt{3}}} = 1+\sqrt{3}$? I tried starting from the LHS, and rationalising and what-not but I can't get the result... Also curious to how they got the LHS expression from considering the right.
$$ \sqrt{3+\sqrt{13+4\sqrt3}}= \sqrt{3+\sqrt{1+2\times2\sqrt{3}+(2\sqrt{3})^2}}$$ $$=\sqrt{3+\sqrt{(1+2\sqrt{3})^2}}$$ $$=\sqrt{3+1+2\sqrt{3}}$$ $$=\sqrt{(1+\sqrt{3})^2}$$ $$=1+\sqrt3$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Equivalence between $\sigma$-additivity and a certain condition, on a certain set of equivalence classes I'm facing a certain part of a problem, and I don't know how to solve it. The background is the following: In a probability space $(\Omega,\mathcal{B},\mathbb{P})$, two sets $A,B$ are equivalent if $\mathbb{P}(A\bigtriangleup B) = 0$. This can define equivalence classes as follows: $$A^\# = \{ B \in \mathcal{B} : \mathbb{P}(A\bigtriangleup B) = 0 \},$$ and we can define a probability in the set of equivalence classes as $\mathbb{P}^\#(A^\#) = \mathbb{P}(A)$. I know that this is a metric space with metric $$d(A^\#,B^\#) = \mathbb{P}(A\bigtriangleup B).$$ Furthermore, I already proved that $\mathbb{P}^\#$ is uniformly continuous on this set. I need to prove the following: $$ \mathbb{P} \text{ is } \sigma\text{-additive} \iff [\mathcal{B} \ni B_n \downarrow \emptyset \implies d(B_n^\#,\emptyset^\#) \to 0 ].$$ I think the $\implies$ part is easy: if I take sets $B_n \downarrow\emptyset$, then $$d(B_n^\#,\emptyset^\#) = \mathbb{P}(B_n\bigtriangleup \emptyset) = \mathbb{P}(B_n) \to \mathbb{P}(\emptyset) = 0,$$ where the previous convergence is true due to $\sigma$-additivity. Sadly, I have no clue how to prove the $\impliedby$ part. Any help will be appreciated. Thank you very much in advance.
Let $A_k$ be a sequence of pairwise disjoint sets. Set $B_n = \bigcup_{k=n}^\infty A_k$. Then $B_n \downarrow \emptyset$ so by assumption $d(B_n^\#, 0^\#) \to 0$ which is to say $\mathbb{P}(B_n) \to 0$. Now by finite additivity, for any $n$ you have $$\mathbb{P}\left(\bigcup_{k=1}^\infty A_k\right) = \sum_{k=1}^{n-1} \mathbb{P}(A_k) + \mathbb{P}(B_n)$$ and now pass to the limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
On polynomials over finite ring Let $R$ be a finite commutative ring with unity ; then does there exist a non-empty proper subset $A \subseteq R$ and $f(X) \in R[X]$ such that $f(r)=1 , \forall r \in A$ and $f(r)=0 , \forall r \notin A$ ?
Such an $f$ exists iff $R$ is local. First, suppose $R$ is local. Since $R$ is finite, its unique maximal ideal is the nilradical, so every element of $R$ is either nilpotent or a unit. There is then some $n$ such that $r^n=0$ for every nilpotent $r$ and $r^n=1$ for every unit $r$. You can then take $f(X)=X^n$. Now suppose $R$ is not local. Equivalently, this means either $R$ is the zero ring (in which case $R$ has no nonempty proper subsets) or $R$ is isomorphic to a direct product $S\times T$ of two nontrivial rings $S$ and $T$. Then given any $f\in R[X]$, consider its images $f_S\in S[X]$ and $f_T\in T[X]$. Note that the set of values which $f$ takes is the product of the set of values $f_S$ takes and the set of values $f_T$ takes. So the set of values $f$ takes must be a "rectangle" in the product $S\times T$. But since $S$ and $T$ both have more than one element, the set $\{0,1\}=\{(0,0),(1,1)\}\subset R$ is not a rectangle, so it cannot be the set of values taken by any polynomial.
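As a concrete illustration of the local case (my own toy example, not from the argument above): in $\Bbb Z/4$ the units are $\{1,3\}$, the nilpotents are $\{0,2\}$, and $n=2$ already works, since the unit group has order $2$ and $2^2=0$:

```python
for r in range(4):
    print(r, (r * r) % 4)  # 0->0, 1->1, 2->0, 3->1: the indicator of the units
```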
{ "language": "en", "url": "https://math.stackexchange.com/questions/2405994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Binary Relation composition is associative proof explanation and set memberships and set definitions request What set or relation do $y,x$ belong to? What set or thing does $w$ belong to? What set or thing does $z$ belong to? I have a hard time keeping track of what sets $w,z,y$ are members of, and which are part of whose domain and codomain. What are all the codomains and domains here? How do we formally define $R$, $S$, and $T$ using set-builder notation?
In general if $R\subseteq A\times B$ and $S\subseteq B\times C$ then: $$S\circ R:=\{\langle a,c\rangle\mid\exists b\in B[\langle a,b\rangle\in R\wedge\langle b,c\rangle\in S]\}\subseteq A\times C$$ So starting with relations $R\subseteq A\times B$, $S\subseteq B\times C$ and $T\subseteq C\times D$ we have $S\circ R\subseteq A\times C$ and $T\circ S\subseteq B\times D$. Continuing this we find that $T\circ(S\circ R)$ and $(T\circ S)\circ R$ are both subsets of $A\times D$ that satisfy: $$T\circ(S\circ R)=(T\circ S)\circ R$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Does $\int_{-\infty}^{+\infty}te^{-|t|} \, dt$ converge? Good evening Does $\displaystyle \int_{-\infty}^{+\infty}te^{-|t|} \, dt$ converge? I have got a series of exercices without corrections so I carry on with a new exercice. My solution : $te^{-|t|}=\dfrac{t}{e^{|t|}}= \dfrac{t}{e^{\frac{|t|}{2}}}\times\dfrac{1}{e^{\frac{|t|}{2}}}$ So there exists $a>0$ such that $\forall |t|>a,\quad \dfrac{|t|}{e^{\frac{|t|}{2}}}<1\iff\dfrac{|t|}{e^{|t|}}<\dfrac{1}{e^{\frac{|t|}{2}}}$ Thus $\displaystyle \int_{-\infty}^{+\infty}e^{-\frac{|t|}{2}} \, dt=2\int_{0}^{+\infty}e^{-\frac{t}{2}} \, dt$ Let $F(x):=\displaystyle 2\int_{0}^{x}e^{-\frac{t}{2}} \, dt=-4\left[e^{-\frac{t}{2}}\right]_0^x=-4\left(e^{-\frac{x}{2}}-1\right)\underset{x\to+\infty}{\longrightarrow}4$ As $\displaystyle \int_{-\infty}^{+\infty}e^{-\frac{|t|}{2}} \, dt$ converges, then $\displaystyle \int_{-\infty}^{+\infty}te^{-|t|} \, dt$ converges. Is it correct and is there something more concise?
You can also integrate by parts to an upper bound $R$ and take limits as $R \to \infty$: $$ \int_0^R t \, e^{-t} \, dt = \left[ t \, \left( -e^{-t} \right) \right]_0^R - \int_0^R \left( -e^{-t} \right) \, dt = -R \, e^{-R} + \int_0^R e^{-t} \, dt \\ = -R \, e^{-R} + \left[ -e^{-t} \right]_0^R = -R \, e^{-R} - e^{-R} + 1 \\ \to 0 - 0 + 1 = 1 $$ Since $t \, e^{-t} \geq 0$ for $t \geq 0$ this shows that $t \, e^{-t}$ is integrable on $[0, \infty)$, and symmetry (the integrand $t\,e^{-|t|}$ is odd) implies $t \, e^{-|t|} \in L^1(\mathbb R)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Compactness of the closed interval [0,1] In general topology, a topological space is said to be compact, if every one of its open cover has a finite subcover. However, I cannot see the compactness of the close interval [0,1] from the above definition. To be a little specific,let us consider the following open cover for [0,1]: $C= \{[0,1/2),(1/3,3/4), (2/3,1]\}$. Now, the open interval (1/3,3/4) itself has at least one open cover (let's call it P) which does not have a finite subcover. We use P to cover the open interval (1/3,3/4). This gives a new open cover C' for the interval [0,1]. It looks like C' does not have an finite subcover, since C' includes P which does not have a finite subcover. Of course I misunderstood something here. If somebody can catch my error it will be very helpful.
So you get an open cover by retaining $[0,1/2)$ and $(2/3,1]$ but replacing $(1/3,3/4)$ by a bunch $P$ of open sets where no finite collection covers $(1/3,3/4)$. You can do this. But it is still the case that this new covering $C'$ has a finite subcovering. Don't forget that $[0,1/2)$ and $(2/3,1]$ are still available. If we used both of these, all we have to do is find a finite subset of $P$ that covers $[1/2,2/3]$. (We don't need it to cover all of $(1/3,3/4)$.) As $[1/2,2/3]$ is compact, then there will be such a finite subset.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Proof of $ r(I)^e\subset r(I^e)$ in ring theory with extension and radical Let $f:A\to B$ be a ring homomorphism and $I$ be an ideal of $A$. Then prove that $\displaystyle r(I)^e\subset r(I^e)$, where $I^e$ denotes the extension of the ideal $I$ and $r(I)$ denotes the radical of the ideal $I$. Let $x\in r(I)^e=Bf(r(I))$. Then $x=yz$ for some $y\in B$ and $z\in f(r(I))$. So $z=f(z_1)$ for some $z_1\in r(I)\implies z_1^n\in I$ for some $n>0$. Then $z^n=f(z_1)^n=f(z_1^n)\in f(I).$ Now $x^n=y^nz^n$ where $y^n\in B$ and $z^n\in f(I)$. So $x^n\in I^e\implies x\in r(I^e)$. Is the proof correct? Please check, and if there is some mistake then give a hint how to proceed.
As user26857 pointed out, $x=yz$ is not right. Remember that $I^e=\langle f(I)\rangle$, so it should be instead $x=\sum_{i=0}^n b_if(u_i)$ for some $n\in \Bbb Z^+$, where $b_i\in B$ and $u_i\in \sqrt{I}$. Now, as $u_i\in \sqrt{I}$, then there is $k_i\in \Bbb Z^+$ such that $u_i^{k_i}\in I$. Therefore, $$\bigl(b_if(u_i)\bigr)^{k_i}=b_i^{k_i}f(u_i^{k_i})\in I^e$$ $$\implies b_if(u_i)\in \sqrt{I^e}.$$ Hence, as $\sqrt{I^e}$ is an ideal of $B$, it follows that $x\in \sqrt{I^e}$. Alternatively, you can use the multinomial theorem to $\Bigl(\sum_{i=0}^n b_if(u_i)\Bigr)^k$, where $k=k_1+\cdots +k_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that for $x \in (-\pi, \pi)$ and $T(x) := \sum_{n=0}^\infty(\frac{x}{\pi})^n$ the derivative $T'(x) = \frac{\pi}{(x-\pi)^2}$ $\left|\frac{x}{\pi}\right| < 1$ for $x\in (-\pi, \pi)$, therefore we could use the formula for geometric series to get the limit. $$\sum_{n = 0}^\infty\left(\frac{x}{\pi}\right)^n = \frac{1}{1-\frac{x}{\pi}} = \frac{1}{\frac{\pi}{\pi}-\frac{x}{\pi}} = \frac{1}{\frac{\pi-x}{\pi}} = \frac{\pi}{\pi-x}$$ Now, this doesn't get me any further and I'm stuck here, so hints would be very much appreciated.
You have already proved that, for $x \in (-\pi,\pi)$, by using a geometric series result, $$ T(x)= \frac{\pi}{\pi-x} $$ then to obtain $T'(x)$ just use $$ \left(\frac 1u\right)'=-\frac{u'}{u^2}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Endomorphism rings of isogenous elliptic curves Let $E$ and $E'$ be isogenous elliptic curves and $K=\text{end}(E) \otimes \mathbb{Q} $ Is it true that $\text{end}(E') $ is a subring of $K$? The only thing I thought of is that the isogeny between $E$ and $E'$ and its dual give a map between the endomorphism rings, which as far as I know need not be surjective or injective.
I think it is not true, at least over $\mathbb{C}$. Let us consider the elliptic curve $E$ whose lattice is given by $\langle 1,\frac{i}{2} \rangle$, which does not have CM. Now let us consider the isogeny given by quotienting by the translation by the point $\frac{1}{2}$; then you get an elliptic curve $E'$ whose lattice is $\langle\frac{1}{2},\frac{i}{2}\rangle$, which is isomorphic to $\langle 1,i\rangle$. Thus $E'\simeq E_i$, which has CM by $i$. So I think it may happen that after an isogeny you get a bigger or smaller endomorphism ring, depending on the isogeny you are considering.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
proof that the pattern exists for all n Explain the pattern $$(\sqrt2-1)^1= \sqrt2 - \sqrt1$$ $$(\sqrt2-1)^2 = \sqrt9 - \sqrt8$$ $$ (\sqrt2-1)^3 = \sqrt{50} - \sqrt{49} $$ that is, $(\sqrt2-1)^n$ is equal to the difference of the square roots of two consecutive integers, one of which is a perfect square.
This is related to the problem of finding which triangular numbers are a square, which leads to a Pell equation for $\sqrt 2$, and is related to the units of $\mathbb Z[\sqrt 2]$. Indeed, $(\sqrt2-1)^{-1}=\sqrt2+1$ is the fundamental unit. The pattern is $(\sqrt2-1)^n = \sqrt{a_n+1} - \sqrt{a_n}$, where the $a_n$-th triangular number is a square. See A001108.
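A quick computational companion (a sketch using exact integer arithmetic; the representation $(\sqrt2-1)^n = a + b\sqrt2$ and the update rule below are my own bookkeeping, not from the answer):

```python
# (sqrt(2)-1)^n = a + b*sqrt(2) with integers a, b; since a + b*sqrt(2) can be
# rewritten as a difference of sqrt(2*b^2) and sqrt(a^2), the Pell-type relation
# |a^2 - 2b^2| = 1 is exactly the "two consecutive integers" pattern.
def check(n_max=10):
    a, b = 1, 0  # represents (sqrt(2) - 1)^0 = 1
    for n in range(1, n_max + 1):
        a, b = 2 * b - a, a - b  # multiply a + b*sqrt(2) by (sqrt(2) - 1)
        s, t = a * a, 2 * b * b
        assert abs(s - t) == 1, "pattern breaks"
        print(f"(sqrt(2)-1)^{n} = sqrt({max(s, t)}) - sqrt({min(s, t)})")

check()
```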
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prime dividing repunit Let $ R(n) = \underbrace{111\ldots111}_{\text n\ ones}$. Prove that if a prime number $ p \neq 3 $ divides $ R(n) $ then $ n $ and $ p - 1 $ are not coprime. So obviously $ R(n) = \frac{10^n - 1}{9}$. Now if $ p $ divides $ R(n) $ then $$ \frac{10^n - 1}{9} \equiv 0 \pmod p $$ which implies $$ 10^n - 1 \equiv 0 \pmod p $$ $$ 10^n \equiv 1 \pmod p $$ Can we deduce from there that $ \gcd(n, p - 1) \neq 1 $? How? Also, why is the $ p \neq 3 $ requirement necessary? I suppose "multiplying both sides" by $ 9 = 3^2 $ is somehow relevant, but I'm not sure why... it's not difficult to come up with a counterexample for the $ p = 3 $ case, but I don't know how the proof would account for it.
Use little Fermat: $10^{p-1} \equiv 1 \pmod p$ (note $p \nmid 10$, since $p \mid R(n)$ and $R(n)$ is coprime to $10$). Thus the multiplicative order of $10$ modulo $p$ divides $p-1$, and it also divides $n$ because $10^n \equiv 1 \pmod p$. If $\gcd(n, p-1) = 1$, the order would divide $\gcd(n,p-1)=1$, so $10 \equiv 1 \pmod p$, i.e. $p \mid 9$, forcing $p = 3$. That is exactly why $p \neq 3$ is required: for instance $3 \mid R(3) = 111$ while $\gcd(3, 3-1) = 1$.
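A small numeric sanity check of the statement itself (a sketch using plain trial division; the helper below is scaffolding for the check, not part of the proof):

```python
from math import gcd

def prime_factors(m):
    # Set of prime factors of m by trial division.
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

# Every prime p != 3 dividing R(n) should satisfy gcd(n, p - 1) > 1.
for n in range(2, 15):
    repunit = (10**n - 1) // 9
    for p in prime_factors(repunit):
        if p != 3:
            assert gcd(n, p - 1) > 1, (n, p)
print("claim verified for n up to 14")
```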
{ "language": "en", "url": "https://math.stackexchange.com/questions/2406976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find all positive integers $n > 1$ such that the polynomial $P(x)$ belongs to the ideal generated by the polynomial $x^2 +x +1$ in $\Bbb Z_n[x]$ Find all positive integers $n > 1$ such that the polynomial $x^4 + 3x^3 + x^2 + 6x + 10$ belongs to the ideal generated by the polynomial $x^2 + x + 1$ in $\Bbb Z_n[x]$. My attempt: I used the division algorithm: $$P(x)= x^4 + 3x^3 + x^2 +6x + 10 = (x^2 + x + 1)( x^2 + 2x -2) + (6x + 12).$$ Here I got the remainder $6x + 12$, not equal to $0$, so $P(x)$ is irreducible over $\Bbb Z_n[x]$ because it cannot be factored into the product of two non-constant polynomials. My thinking is that $\Bbb Z_3[x]$ is the only one that works, since $$(1)^2 + 1 + 1 =3,$$ and if we divide $3/3 =1$ the remainder is $0$. Therefore $3$ is the only positive integer $n > 1$ such that the polynomial $P(x)$ belongs to the ideal generated by the polynomial $x^2 + x + 1$ in $\Bbb Z_n[x]$. Is my answer correct or not? I would be thankful to anyone rectifying my mistakes.
I think Bill Dubuque addressed the problem, but I would note that when you were looking at the generator $x^2 + x + 1$, you were trying to plug in values for $x$ to show it is zero. This is not what you want to do. A polynomial is zero if and only if all its coefficients are zero. You can find polynomials that always evaluate to zero but are not the zero polynomial, e.g. $x^3 - x$ in $\mathbb{Z}/3\mathbb{Z}$, or more generally $x^p - x$ in $\mathbb{Z}/p\mathbb{Z}$ for $p$ prime. That is why in Hoffman and Kunze's Linear Algebra, they define polynomials abstractly in terms of their coefficients. That is, $x^2 + x + 1$ in $\mathbb{Z}/3\mathbb{Z}$ would be $(1, 1, 1, 0 , 0, \dots) \in \{(a_0, a_1, a_2, \dots): a_i \in \mathbb{Z}/3\mathbb{Z} \mbox{ are nonzero for only finitely many } i\}$. And the addition and multiplication of polynomials are done abstractly, just like complex numbers are: Construction of Complex Numbers Inside of Set Theory
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
For which values of $a\in\mathbb{Q}$ do integer solutions to $x^2+x+1=a(y^2+1)$ exist? I am unable to determine for which values of $a\in\mathbb{Q}$ integer solutions $(x,y)$ to $$x^2+x+1=a(y^2+1)$$ exist. My initial idea was to set $a=\frac{c}{d}$ to get $dx^2+dx+d=c(y^2+1)$ for $c,d\in\mathbb{Z}$, but I am unsure how to convert this into Pell's equation, nor do I know how to apply initial solutions. Any suggestions?
Surely a zest of Galois theory could do no harm. 1) Consider first the equation with rational parameter and variables: $x^2+x+1=a(y^2 +1), a, x, y \in \mathbf Q^*$ (1). With $i^2=-1$ and $j^3=1$, (1) is equivalent to $a= N_3(x-j)\cdot N_1(y-i)^{-1}$, where $N_n$ denotes the norm map of $\mathbf Q(\sqrt{-n})/\mathbf Q$. The natural question is then how to characterize the elements of $\mathbf Q^*$ which are products of two norms as above. This is a purely Galois theoretic exercise (see e.g. Cassels-Fröhlich, ex. 4.4, p. 358): Let $K$ be any field of characteristic $\neq 2$. An element $a \in K$ is a product of a norm from $K (\sqrt b)$ and a norm from $K(\sqrt c)$ iff $a$, as an element of $K(\sqrt{bc})$, is a norm from $L=K(\sqrt b , \sqrt c)$ (2). In our case, where $\mathbf Q(\sqrt{-1})=\mathbf Q(i)$ and $\mathbf Q(\sqrt{-3})=\mathbf Q(j)$, (2) is equivalent to "$a \in \mathbf Q (\sqrt 3)$ is a norm from $\mathbf Q (\sqrt 3)(i)$", or "$a$ is a sum of two squares in $\mathbf Q(\sqrt 3)$", say $a=(r+s\sqrt 3)^2 + (t+u\sqrt 3)^2$. Since $a \in \mathbf Q$, a straightforward calculation shows that the latter condition is equivalent to: $a$ is of the form $r^2+t^2+3(s^2+u^2)$, with $r, s, t, u \in \mathbf Q$ such that $rs=-tu$ (3). Summarizing: (3) is equivalent to (3b) $a= (m^2+mn+n^2)/(\mu^2 +\nu^2)$, with rational parameter and variables. Cp. the edit of @Tito Piezas III. 2) Let us apply this approach to the special expression of $a$ given by (1). Elementary calculations show that the relation $rs=-tu$ in (3) boils down to $(2x+1)y=0$, and we have two different cases which each correspond to a single normic equation: (i) $y=0$, $a= x^2+x+1= N_{3} (x-j)$; (ii) $x=-\frac 1 2$, $3 a^{-1}=4(y^2 +1)= N_{1}(2(y-i))$. Let us finally come back to the original question, with $x, y \in \mathbf Z$. In case (i), $a$ is a norm from $\mathbf Z[j]$, but this is only a necessary condition. In case (ii), $3 a^{-1}$ is a norm from $\mathbf Z[i]$, i.e. $3 a^{-1}$ is a sum of two squares of integers, but again this is only a necessary condition; actually the inequality $3/(4a)\ge1$ cannot hold if $a\in \mathbf Z$. In the end, the integrality condition imposed on the variables $x, y$ seems rather irrelevant. A more natural condition could be that $x, y\in \mathbf Z$ and $a\in\mathbf Q^*$ is defined up to a square, see (3b). Edit: Because of the OP's uncertainty, I checked again the "elementary calculations" in part 2) of my answer... and I found a sign error which completely destroys the conclusion, so there is no reduction to two normic equations in one variable. I replace 2) by 3) and 4) below, following the same method but getting a different final result. I put it in another answer because it seems that typo difficulties appear when the post is too long.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 2 }
Power set of any set. Question: Let $A$ be any set. Let $\mathbb{P}(A)$ be the power set of $A$. Then which of the following are true? (1) $\mathbb{P}(A)=\emptyset$ for some $A$. (2) $\mathbb{P}(A)$ is finite for some $A$. (3) $\mathbb{P}(A)$ is countable for some $A$. (4) $\mathbb{P}(A)$ is uncountable for some $A$. Now (1) is not true, since if $A=\emptyset$, then $\mathbb{P}(A)=\{\emptyset\}\neq \emptyset$. (3) is not true, since if $A=\mathbb{N}$, then $\mathbb{P}(A)=2^\mathbb{N}$ is uncountable. So the answer will be (2) and (4). But when I looked at the answer book, it says the answer is (2,4) or (2,3,4); that "or" means the answer has not been decided yet. My question is: why is there doubt? The answer should be $(2,4)$... this is obvious, isn't it? I am very bad at set-theoretic arguments. Can someone clarify?
Assuming countable means "countably infinite" (so finite sets are not considered countable), 3 is false. If $A$ is finite, then $\mathbb{P}(A)$ is finite. If $A$ is infinite, it contains a countably infinite subset $B$ (a mild form of choice is used here). And then $2^{\aleph_0} = |\mathbb{P}(B)| \le |\mathbb{P}(A)|$, so that $\mathbb{P}(A)$ is uncountable. So a power set cannot be countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Prove that $ \lfloor 2x \rfloor \leq 2\lfloor x \rfloor + 1$ Theorem. For $x \in \mathbb R$, $$2 \lfloor x \rfloor \leq \lfloor 2x \rfloor \leq 2 \lfloor x \rfloor +1.$$ I tried to prove this theorem by first proving a few helper theorems. I have proved the following Lemma. Lemma. For $x \in \mathbb R$ and $n \in \mathbb N$, $$n \leq x \Leftrightarrow n \leq \lfloor x \rfloor. $$ This allowed me to prove the first inequality, because $$2 \lfloor x \rfloor = \lfloor x \rfloor + \lfloor x \rfloor \leq x + x = 2x, $$ and because $2 \lfloor x \rfloor$ is an integer, $2 \lfloor x \rfloor \leq \lfloor 2x \rfloor$ by the Lemma above. However, I cannot get a hold of the other inequality. Does anyone have any ideas? Note: I have seen approaches using the fact that $x = \lfloor x \rfloor + x_1$ with $x_1 \in [0,1)$; however, I prefer not to take such an approach and would rather work with the properties described above.
$$\begin{array}{rcl} x &<& \lfloor x \rfloor + 1 \\ \lfloor 2x \rfloor &\le& 2x \\ \lfloor 2x \rfloor &<& 2\lfloor x \rfloor + 2 \\ \lfloor 2x \rfloor &\le& 2\lfloor x \rfloor + 1 \\ \end{array}$$ The first two lines are the defining properties of the floor; combining them (doubling the first) gives the third, and since $\lfloor 2x \rfloor$ is an integer strictly less than $2\lfloor x \rfloor + 2$, the fourth follows.
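A random spot-check of the whole chain, for readers who like numerical evidence first (this illustrates the inequality, of course, rather than proving it):

```python
import math
import random

for _ in range(10_000):
    x = random.uniform(-100.0, 100.0)
    lo, mid = 2 * math.floor(x), math.floor(2 * x)
    assert lo <= mid <= lo + 1, x
print("2*floor(x) <= floor(2x) <= 2*floor(x) + 1 held on 10,000 samples")
```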
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Simplify $\frac1{\sqrt{x^2+1}}-\frac{x^2}{(x^2+1)^{3/2}}$ I want to know why $$\frac1{\sqrt{x^2+1}} - \frac{x^2}{(x^2+1)^{3/2}}$$ can be simplified into $$\frac1{(x^2+1)^{3/2}}$$ I tried to simplify by rewriting radicals and fractions. I was hoping to see a clever trick (e.g. adding a clever zero, multiplying by a clever one? Completing the square?) \begin{align} \frac{1}{\sqrt{x^2+1}} - \frac{x^2}{(x^2+1)^{3/2}} & = \\ & = (x^2+1)^{-1/2} -x^2\cdot(x^2+1)^{-3/2} \\ & = (x^2+1)^{-1/2} \cdot ( 1 - x^2 \cdot(x^2+1)^{-1}) \\ & = ... \end{align} To give a bit more context, I was calculating the derivative of $\frac{x}{\sqrt{x^2+1}}$ in order to use Newton's method for approximating the roots.
Factor out $\dfrac{1}{\sqrt{x^2+1}}$. You will have: $$\frac{1}{\sqrt{x^2+1}} \bigg(1 - \frac{x^2}{x^2+1}\bigg) = \frac{1}{\sqrt{x^2+1}} \cdot \frac{1}{x^2+1} = \frac{1}{(x^2+1)^\frac{3}{2}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Measure theory in practice I am trying to unite my knowledge of statistics and measure theory by considering the following example. Suppose we have a measurable space $(\Omega_1,B_1)$ and a random variable (measurable function) on the space, call it $X$: $\Omega_1 \rightarrow R$. Suppose we know the distribution of $X$; say it is standard normal, $X \sim N(0,1)$. Now consider the random variable $$Y=X +5$$ We know from basic statistics that $Y\sim N(5,1)$, but how can we prove that by using the definition of $X$ and composition of functions? Step-by-step explanations greatly appreciated!
In order to properly speak of a random variable's distribution, you need a measure space (with a measure), and specifically a probability space $(\Omega, \mathscr A, \mathbb P)$, where of course $\mathbb P (\Omega) = 1$. The distribution of a random variable $X$ is the image measure $X(\mathbb P)$, i.e. $$ X(\mathbb P)(A) :=\mathbb P(X^{-1}(A)) \quad \text{for Borel sets } A \subseteq \mathbb R.$$ Outside of the measure-theory context, I first encountered this as the 'cumulative distribution function' of a random variable, i.e. the function that gives the probability of $X$ being less than or equal to $x$: $$ F_X(x) = \mathbb P (X \leq x) = \mathbb P \left(X^{-1} (-\infty, x]\right).$$ You may recognise $(-\infty, x]$ as being the sets that generate the Borel sets on $\mathbb R$. A random variable has normal distribution $N(0,1)$ when this image measure is given by $$ X(\mathbb P)(A) = \frac{1}{\sqrt{2\pi}}\int_A e^{-x^2/2} \, \lambda(\mathrm dx),$$ where $\lambda$ is the one-dimensional Lebesgue measure. Phrased in less measure-theory-heavy terms, $$ F_X(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-t^2/2} \, \mathrm dt.$$ If $Y = X+5$, then we have $Y(\mathbb P)(A)=X(\mathbb P)(A-5)$, giving $$ Y(\mathbb P)(A) = \frac{1}{\sqrt{2\pi}}\int_{A-5} e^{-x^2/2} \, \lambda(\mathrm dx)\\ =\frac{1}{\sqrt{2\pi}}\int_A e^{-(x-5)^2/2} \, \lambda(\mathrm dx),$$ which is the definition of $Y$ having the $N(5,1)$ distribution. Here we used the fact that $x \mapsto x +5$ is measurable (because it is continuous) to justify the last step.
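A Monte Carlo illustration of the conclusion (a sketch, not part of the proof): shifting standard normal samples by 5 yields sample mean near 5 and sample standard deviation near 1, as the $N(5,1)$ claim predicts.

```python
import random
import statistics

xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # X ~ N(0, 1)
ys = [x + 5 for x in xs]                               # Y = X + 5
print(round(statistics.mean(ys), 2), round(statistics.stdev(ys), 2))  # ~5.0 ~1.0
```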
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
The logarithmic inequality $\ln^q (1+x) \le \frac{q}{p} x^p \quad (x \ge 0, \; 0 < p \le q)$ $$\ln^q (1+x) \le \frac{q}{p} x^p \quad (x \ge 0, \; 0 < p \le q)$$ For $p=q$ this reduces to the familiar $\ln(1+x) \le x$. Otherwise I haven't had much success in proving it. General suggestions would be appreciated.
Put $p=\frac{1}{\ln x}$; when $x > e$ we have $0<p<1$ and $x^{p}=e$, so the claimed inequality becomes $\ln^q(x+1) \leq \frac{q}{p}x^p = e\,q \ln x$. But $\ln^q(x+1)\geq \ln^q x$, so it suffices to arrange $\ln^q x \geq e q \ln x$, i.e. (dividing by $\ln x$) $\ln^{q-1} x \geq e q$. When $q \geq 2$ and $\ln x \geq eq$ we indeed get $\ln^{q-1} x \geq \ln x \geq e q$. Exponentiating, the condition is $x \geq e^{e q}$, and for such $x$ the inequality fails (strictly, since $\ln(x+1)>\ln x$). For example, for $q=3$ the inequality is false for all $x \geq e^{3e} \approx 3480.2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to solve this PDE with method of characteristics? I have this problem $$y \frac{\partial u}{\partial x}-x\frac{\partial u}{\partial y}=1,\\u(x,0)=0$$ Using the method of characteristics I have $$\frac{dx}{dt}=y \\ \frac{dy}{dt}=-x \\ \frac{du}{dt}=1$$ Then $$\frac{dx}{y}=\frac{dy}{-x} \\ x^2+y^2=\eta $$ and $$u=t+\xi$$ But I do not understand how to solve this problem from here.
You have done well so far. The system of ODE's is correct. Notice that $\xi=0$ due to your condition $u(x,0)=0$. However, your method to invert the $x$ and $y$ coordinates does not lead you anywhere. The following method to solve the system will be more useful. $$\frac{d}{dt}\left(\frac{dx}{dt}=y\right)\implies\frac{d^2x}{dt^2}=\frac{dy}{dt}=-x$$ $$\implies x=c_1\sin(t)+c_2\cos(t)$$ Now we can find expression for $y$. $$\frac{dy}{dt}=-c_1\sin(t)-c_2\cos(t)\implies y=c_1\cos(t)-c_2\sin(t)$$ We now use the given boundary condition $u(x,0)=0$ to find the constants $c_1$ and $c_2$. $$y(0)=c_1\implies c_1=0$$ $$x(0)=c_2\implies c_2=x_0$$ Thus $$x=x_0\cos(t)\quad\&\quad y=-x_0\sin(t)$$ We must invert these equations. Because they are nonlinear, we must be clever. First divide $y$ by $x$. $$\frac{y}{x}=-\frac{\sin(t)}{\cos(t)}=-\tan(t)$$ $$\implies t=\arctan\left(-\frac{y}{x}\right)$$ Since you found $u=t$, we do not need to find $x_0$ and have arrived at our solution. $$u(x,y)=\arctan\left(-\frac{y}{x}\right)$$
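As a final sanity check, the solution can be verified symbolically (a sketch assuming sympy is available; the positivity assumption on the symbols just keeps the branch of $\arctan$ tame):

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
u = sp.atan(-y / x)

# y*u_x - x*u_y should simplify to 1, and u(x, 0) should be 0.
pde_lhs = y * sp.diff(u, x) - x * sp.diff(u, y)
print(sp.simplify(pde_lhs))  # -> 1
print(u.subs(y, 0))          # -> 0
```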
{ "language": "en", "url": "https://math.stackexchange.com/questions/2407994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
For any $\epsilon$, if $\epsilon>0$ and $|x|<\epsilon$, then $x=0$ For any $\epsilon$, if $\epsilon>0$ and $|x|<\epsilon$, then $x=0$. I understand that supposing $x\neq 0$ and taking $\epsilon=\frac{|x|}{2}$ will lead to a contradiction, but let's take a correct case: if $\epsilon=3$, then $x$ could take a whole range of values. Can you explain what is going on?
If it is true for every real $\epsilon>0$, it will be true in particular for every rational $r>0$, so we will have, for every natural $n$, $$|x|<\frac {1}{n+1} $$ or $$-\frac {1}{n+1}<x <\frac {1}{n+1}, $$ and letting $n\to\infty$, the squeeze theorem gives $$0\le |x|\le 0,$$ hence $x=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What are some mathematically interesting computations involving matrices? I am helping design a course module that teaches basic Python programming to applied math undergraduates. As a result, I'm looking for examples of mathematically interesting computations involving matrices. Preferably these examples would be easy to implement in a computer program. For instance, suppose $$\begin{eqnarray} F_0&=&0\\ F_1&=&1\\ F_{n+1}&=&F_n+F_{n-1}, \end{eqnarray}$$ so that $F_n$ is the $n^{th}$ term in the Fibonacci sequence. If we set $$A=\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$$ we see that $$A^1=\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} F_2 & F_1 \\ F_1 & F_0 \end{pmatrix},$$ and it can be shown that $$ A^n = \begin{pmatrix} F_{n+1} & F_{n} \\ F_{n} & F_{n-1} \end{pmatrix}.$$ This example is "interesting" in that it provides a novel way to compute the Fibonacci sequence. It is also relatively easy to implement a simple program to verify the above. Other examples like this will be much appreciated.
Rotation matrices are a typical example of useful matrices in computer graphics: $${\bf R_\theta} = \begin{bmatrix}\cos(\theta)&\sin(\theta)\\-\sin(\theta)&\cos(\theta)\end{bmatrix}$$ They rotate a vector around the origin by the angle $\theta$ (clockwise for positive $\theta$, with this sign convention). If you want to make it more complicated you can make them in 3 dimensions, for example to rotate by an angle $\theta$ around the $x$-axis: $${\bf R_{\theta,x}} = \begin{bmatrix}1&0&0\\0&\cos(\theta)&\sin(\theta)\\0&-\sin(\theta)&\cos(\theta)\end{bmatrix}$$ Even a bit more involved are the affine transformations; in 2D you can make one like this: $${\bf A_{r,\theta,x_0,y_0}} = \begin{bmatrix}r\cos(\theta)&r\sin(\theta)&x_0\\-r\sin(\theta)&r\cos(\theta)&y_0\\0&0&1\end{bmatrix}, {\bf v} = \begin{bmatrix}x\\y\\1\end{bmatrix}$$ which, under matrix multiplication, scales the current vector ($x$ and $y$ in $\bf v$) by a factor of $r$, rotates it by the angle $\theta$ and then adds (translates by) $[x_0,y_0]^T$.
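Since the course module is Python-based, both the Fibonacci example from the question and the rotation matrix above take only a few lines to demo (a sketch assuming numpy):

```python
import numpy as np

# Fibonacci via matrix powers: A^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]].
A = np.array([[1, 1], [1, 0]])
print(np.linalg.matrix_power(A, 10))  # [[89 55], [55 34]]

# Rotation by 90 degrees; with the sign convention above, (1, 0) -> ~(0, -1).
theta = np.pi / 2
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
print(R @ np.array([1.0, 0.0]))
```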
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65", "answer_count": 26, "answer_id": 17 }
Show right half-plane with points of closed unit disk removed is not a star domain Consider the right half-plane $\{z\in {\mathbb {C}}:{\mbox{Re}}(z)>0\}$. We define a set $X$ by removing from the right half-plane the points of the closed unit disk $\mathbb{D} = \{z \in \mathbb{C} : |z| \leq 1 \}$. I want to show that $X$ is not a star domain, i.e. there is no point $z_0 \in X$ such that for all $z \in X$ the line segment $[z_0, z]$ is in $X$. (A region plot of the situation, showing $X$ with a dashed line along the positive real axis, appeared here.) Intuitively the only possible choices for a $z_0$ all lie on the dashed line. However, if we were to select such a point $z^{*}$ on this line, there will always be a region around the points $(0,1)$, $(0,-1)$ whose points are not joined to $z^{*}$ by a line segment in $X$. In some way I think this is because the "tangents" at these points are parallel to the real axis. Can anyone help me to formalize this argument?
Suppose that $X$ is a star domain and let $z=(x,y)$ be the center of the star. Claim: For $\epsilon>0$ sufficiently small, the line between $z$ and $(\epsilon,1+\epsilon)$ or the line between $z$ and $(\epsilon,-1-\epsilon)$ intersects the unit disk. Sketch: If the imaginary part of $z$ is greater than $0$, i.e., $y>0$, then consider the line between $(x,y)$ and $(\epsilon,-1-\epsilon)$. Recall from calculus, the distance between a line and a point. If the line is given by $ax+by+c=0$ and the point is $(x_0,y_0)$, then the distance is $$ \frac{|ax_0+by_0+c|}{\sqrt{a^2+b^2}}. $$ In our case, we're interested in the distance between the line and the origin. Therefore, $(x_0,y_0)=(0,0)$, simplifying the formula. A vector in the direction of the line is $(x-\epsilon,y+1+\epsilon)$, so an equation for the line is $$ (y+1+\epsilon)X+(\epsilon-x)Y+c=0. $$ Substituting in the point $(X,Y)=(\epsilon,-1-\epsilon)$, this implies that $$ (y+1+\epsilon)(\epsilon)+(\epsilon-x)(-1-\epsilon)+c=0. $$ Therefore, \begin{align} a&=y+1+\epsilon\\ b&=\epsilon-x\\ c&=-y\epsilon-x-x\epsilon. \end{align} Now, what you want to do is to show $c^2\leq a^2+b^2$ for $\epsilon$ sufficiently small. Left to the reader. Small caveat: You need that the imaginary part of $z$ is greater than zero in order to get that the triangle $(0,0)$, $(\epsilon,-1-\epsilon)$, and $(x,y)$ is obtuse with the large angle at the origin. This implies that the altitude of the triangle is contained within the triangle, so the closest point on the line to the origin is on the side of the triangle between $(x,y)$ and $(\epsilon,-1-\epsilon)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
vector normal to a plane There is this "shortcut" we learned that helps us find a vector perpendicular to a plane. Say $ax+by+cz+d=0$ is the plane equation; then the vector $(a,b,c)$ is normal to this plane. But why is this? Why does $d$ contribute nothing to the normal vector? For example, let $(x,y,z)$ be a point on this plane; then it must satisfy the plane equation $ax+by+cz+d=0$, but taking the dot product with the vector $(a,b,c)$ gives $(x,y,z)\cdot(a,b,c)=ax+by+cz=-d$, which isn't necessarily zero?
A normal vector $\textbf{N}$ to a plane $P$ is a vector such that $\textbf{N} \cdot \textbf{v} = 0$ for every vector $\textbf{v}$ lying in $P$. Given any points $p_0, \textbf{x} = (x,y,z) \in P$, we define $\textbf{x} - p_0$ to be the vector which extends from $p_0$ to $\textbf{x}$. This vector lies in $P$, hence $\textbf{N} \cdot (\textbf{x} - p_0) = 0$. Letting $\textbf{N} = (a,b,c)$, we see that any plane in $\mathbb{R}^3$ is given by the following equation: $ (a,b,c) \cdot (x-x_0,y-y_0,z-z_0) = 0 \iff ax + by + cz = D$, where we've set $D = ax_0 + by_0 + cz_0$, i.e. $-D = d$. So $d$ only encodes which point $p_0$ the plane passes through; the normal direction $(a,b,c)$ is unaffected by it, and your dot product $ax+by+cz=-d$ is taken with the position vector of a point, not with a vector lying in the plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to solve the recurrence relation $a_n - 2 a_{n-1} = 3 \times 2^n, a_0 = 1$ How to solve the recurrence relation $a_n - 2 a_{n-1} = 3 \times 2^n, a_0 = 1$. By looking at the terms of the relation, it can be seen that it is linear in nature but it is not homogeneous. How to solve such a recurrence relation?
Hint: Let $a_m=c\,m2^m+b_m$. Then $$3\cdot2^n=a_n-2a_{n-1}=cn2^n+b_n-2\{c(n-1)2^{n-1}+b_{n-1}\}=c2^n+b_n-2b_{n-1}.$$ Set $c=3$ to find $b_n-2b_{n-1}=0$, i.e. $b_n=2^n b_0$; the initial condition $a_0=1$ gives $b_0=1$, so $a_n=(3n+1)2^n$.
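A hedged check of the closed form this hint leads to (assuming the algebra above is right, $a_n=(3n+1)2^n$ should satisfy both the recurrence and the initial condition):

```python
def a(n):
    return (3 * n + 1) * 2**n  # candidate closed form from the hint

assert a(0) == 1
for n in range(1, 20):
    assert a(n) - 2 * a(n - 1) == 3 * 2**n
print("closed form satisfies the recurrence for n < 20")
```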
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Is the following option correct or incorrect? Which of the following statements are true? (a). Let $X$ be a set equipped with two topologies $\tau_1$ and $\tau_2$. Assume that any given sequence in $X$ converges with respect to the topology $\tau_1$ if, and only if, it also converges with respect to the topology $\tau_2$. Then $\tau_1 = \tau_2$. (b). Let $(X, \tau_1)$ and $(Y, \tau_2)$ be two topological spaces and let $f : X \to Y$ be a given map. Then $f$ is continuous if, and only if, given any sequence $\{x_n\}$ such that $x_n \to x \in X$, we have $f(x_n) \to f(x) \in Y$. (c). Let $(X, \tau )$ be a compact topological space and let $\{x_n\}$ be a sequence in $X$. Then, it has a convergent subsequence. My attempt: options (a), (b) and (c) are all correct, by the Arzelà–Ascoli theorem. Is my answer correct or not? I would be thankful to anyone rectifying my mistakes.
In (b) the "$\Leftarrow$" part is not always true unless $(X,\tau_1)$ is a first countable space. Here is a counterexample: sequentially continuous on a non first-countable. (c) is not always true either; here is a reference for a counterexample: the space $[0,1]^{[0,1]}$, which is compact by Tychonoff's theorem. https://www.andrew.cmu.edu/user/calmost/pdfs/sasms_F04.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How much cash is in the wallet In the wallet we have $26$ banknotes. If we take, arbitrarily, $20$ of them we are sure that we have at least one of $\$5$, at least two of $\$10$ and at least five of $\$20$. How much cash is in the wallet? I have tried determining the number of compositions of $20$ in three parts with regard to the restrictions. Also tried using normal generating functions and obtaining coefficient next to $x^{20}$, but I don't see where does that get me. Any help or advice is appreciated.
The problem is much simpler. When we take 20 of the 26 notes, we leave out only 6. If any 20 chosen notes are guaranteed to contain at least five $\$20$ notes, then there must be at least eleven $\$20$ notes in the wallet: with 10 or fewer, we could leave out 6 of them and be left with at most 4, breaking the guarantee. For the same reason, there must be at least seven $\$5$ notes (at least $1+6$) and at least eight $\$10$ notes (at least $2+6$). Because $11+7+8 = 26$, this accounts for all the notes in the wallet, and the total amount of cash is $\$(7\times 5 + 8 \times 10 + 11 \times 20) = \$335$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Express it in its reduced form: $\sum\limits_{k=1}^{n}{(C(n,k-1)*C(n,k))}$ As we know, $C^{2}(n,0)$+$C^{2}(n,1)$+$C^{2}(n,2)$+....+$C^{2}(n,n)$ = $C (2n,n)$, by deducing it from $ (1+x)^{n}$. So, how can I find the reduced form of $\sum\limits_{k=1}^{n}{(C(n,k-1)*C(n,k))}$ from $ (1+x)^{n}$? Please help me to solve this; any help will be appreciated.
Let $[x^k]: (1+x)^n$ denote the coefficient of $x^k$ for the function $(1+x)^n$ ; that is $\binom{n}{k}$. Now \begin{eqnarray*} \sum_{k=1}^{n} \binom{n}{k-1} \binom{n}{k} = \sum_{k=1}^{n} [x^k]: \binom{n}{k-1} (1+x)^n \\ = [x^n]: \sum_{k=1}^{n} \binom{n}{k-1} x^{n-k} (1+x)^n \\ = [x^n]: x^{n-1}(1+\frac{1}{x})^n (1+x)^n \\ = [x^n]: x^{-1}(1+x)^n (1+x)^n = \binom{2n}{n-1}. \end{eqnarray*}
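The identity is easy to test numerically (a quick sketch using Python's math.comb):

```python
from math import comb

for n in range(1, 20):
    lhs = sum(comb(n, k - 1) * comb(n, k) for k in range(1, n + 1))
    assert lhs == comb(2 * n, n - 1), n
print("sum_{k=1}^{n} C(n,k-1)*C(n,k) == C(2n,n-1) for n < 20")
```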
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difference between “ proof by reductio ad absurdum” and “proof by contradiction”? I always thought that both “proof by reductio ad absurdum” and “proof by contradiction” mean the same, but now my professor asked this question on my homework and I don't know. I believe that in both cases you assume the negation of the conclusion and develop a contradiction through the premises. This will imply the conclusion. Today I have a meeting with the assistant professor so I can clarify this, but I really would like to know what you guys think, or if possible it would be great if you could point me to some good references. UPDATE: I just came from my extra help and the assistant professor explains the difference this way: Reductio ad absurdum: $$ \vDash [\neg p\to(q\wedge\neg q)]\to p$$ Proof by contradiction: $$ \vDash [\neg (p\to q) \to (r\wedge \neg r)]\to (p\to q)$$ And the examples of application were these: Using proof by contradiction: $\sqrt2$ is irrational. (First suppose it is rational and derive a contradiction.) Using proof by reductio ad absurdum: If $f$ is differentiable on $(a,b)$ then $f$ is continuous on $(a,b)$. (First we suppose that $f$ is differentiable on $(a,b)$ but not continuous on $(a,b)$ and derive a contradiction.)
Regarding the rule of indirect proof: "if from assumption $\lnot A$ a contradiction follows, we can infer $A$", we can see Jan von Plato, Elements of Logical Reasoning, Cambridge UP (2013), page 81: Sometimes the nomenclature RAA is used; it stands for reductio ad absurdum, the mediæval Latin name of the principle. [...] A genuine indirect proof in propositional logic ends with a positive conclusion. The principle is equivalent to Double Negation elimination. If we agree with this approach, proof by contradiction is more general, because it applies also to inferences with a negative conclusion, licensed by the principle of Negation Introduction: "if from assumption $A$ a contradiction follows, we can infer $\lnot A$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/2408906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\mathcal{M}_{n}(\mathbb{C})$ as a Hilbert space Let $\mathcal{M}_{n}(\mathbb{C})$ be the set of $n\times n$ matrices over $\mathbb{C}.$ I know that for $A,B\in \mathcal{M}_{n}(\mathbb{C}),$ $$\langle A,B \rangle=\text{tr}(B^{*}A)$$ defines an inner product on $\mathcal{M}_{n}(\mathbb{C})$, and hence it induces a norm. I have trouble verifying that every Cauchy sequence in $\mathcal{M}_{n}(\mathbb{C})$ converges in $\mathcal{M}_{n}(\mathbb{C})$. Any hint/help would be appreciated.
Since $\mathcal{M}_n(\mathbb{C})$ is finite-dimensional, it is complete. Whence the result. In fact, $\mathcal{M}_n(\mathbb{C})$ equipped with the given Hermitian product is just $\mathbb{C}^{n^2}$ with its usual Hermitian structure.
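The identification with $\mathbb{C}^{n^2}$ is easy to see numerically (a sketch assuming numpy): $\operatorname{tr}(B^{*}A)$ agrees with the standard Hermitian inner product of the flattened matrices, so Cauchy sequences in this norm are just Cauchy sequences in $\mathbb{C}^{n^2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

lhs = np.trace(B.conj().T @ A)
rhs = np.vdot(B, A)  # conjugates its first argument and sums elementwise
print(np.isclose(lhs, rhs))  # True
```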
{ "language": "en", "url": "https://math.stackexchange.com/questions/2409018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Given $M=\{1,2,3,4\}$ find a topology on $M$ of minimum $3$ elements which makes $x=\{1,2,1,2,\dots\}$ converge. So, I've been having trouble understanding this exercise. It reads: Given $M=\{1,2,3,4\}$, find a topology on $M$ with at least $3$ elements which makes $x=\{1,2,1,2,\dots\}$ converge. I don't really understand what $x$ is supposed to be, nor how to test its convergence in the topology without a function.
Recall that a sequence $\{x_n\}$ converges to some $x$ in a topological space if for every open set $U$ with $x\in U$, there is some $N$ sufficiently large that $x_n \in U$ whenever $n>N$. If the sequence $\{1,2,1,2,\dotsc\}$ is to converge to some $x\in M$, then every open set containing $x$ must contain both 1 and 2, since both of these terms appear infinitely often in the sequence. In particular, if we take the indiscrete topology on $M$ (i.e. the only open sets are the empty set and $M$ itself), then the sequence converges to anything we like. However, the question demands that we have at least three sets in the topology. Note that if either $\{1\}$ or $\{2\}$ is open, then the sequence won't converge (do you see why?). However, we could throw in the set $\{1,2\}$. That is, if the collection $$ \mathcal{T} = \{ \emptyset, \{1,2\}, M\} $$ is a topology on $M$, then the sequence $\{1,2,\dotsc\}$ will converge (to both 1 and 2), and we will be done. So, is $\mathcal{T}$ a topology? It contains the empty set and $M$, it is closed under arbitrary unions, and it is closed under finite intersections, so yes! it does form a topology! Hence one possible answer to your question is the collection $\mathcal{T}$, above. Can you come up with another example (by, perhaps, throwing in more sets)?
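Because $M$ is finite, the topology axioms for $\mathcal{T}$ can even be checked mechanically (a sketch; for a three-set family, pairwise unions and intersections suffice, since larger unions reduce to them):

```python
from itertools import combinations

M = frozenset({1, 2, 3, 4})
T = {frozenset(), frozenset({1, 2}), M}

assert frozenset() in T and M in T
for U, V in combinations(T, 2):
    assert U | V in T and U & V in T
print("T is a topology on M")
```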
{ "language": "en", "url": "https://math.stackexchange.com/questions/2409074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
From normal distribution to the lognormal distribution; where does $1/x$ come from? So the normal distribution is given by $\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. Now the lognormal distribution is related to this by $y = e^x$, so the distribution should be $\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(\ln(y)-\mu)^2}{2\sigma^2}\right)$, but apparently it is $\frac{1}{\color{red}{y}\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(\ln(y)-\mu)^2}{2\sigma^2}\right)$, so where does this $\frac{1}{\color{red}{y}}$ come from? It's apparent to me that this substitution is invalid, so the question is really: what's the valid way to do it?
Apply the following theorem, from Requirements for transformation functions in probability theory: Let $X$ be an absolutely continuous random variable with support $S$ and probability density function $f(x)$. Let $g: \mathbb{R} \to \mathbb{R}$ be one-to-one and differentiable on $S$. If $$\frac{dg^{-1}(y)}{dy} \ne 0, \qquad \forall y \in g(S)$$ then the probability density of $Y=g(X)$ is $$f_Y(y) = f_X(g^{-1}(y)) \left|{\frac{dg^{-1}(y)}{dy}}\right|, \qquad \forall y \in g(S)$$ The term $\dfrac{1}{y}$ comes from $\left|{\dfrac{dg^{-1}(y)}{dy}}\right|$, because in your case $g(x) = e^x$ and $g^{-1}(y) = \ln y$, so $\dfrac{dg^{-1}(y)}{dy}=\dfrac{1}{y}$.
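A Monte Carlo picture of why the $\frac1y$ factor is needed (a sketch assuming numpy and matplotlib, with $\mu=0$, $\sigma=1$): the histogram of $e^X$ matches the density only when the $1/y$ is included.

```python
import numpy as np
import matplotlib.pyplot as plt

y = np.exp(np.random.default_rng(1).normal(size=200_000))  # Y = e^X, X ~ N(0,1)
grid = np.linspace(0.05, 8, 400)
pdf = np.exp(-np.log(grid) ** 2 / 2) / (grid * np.sqrt(2 * np.pi))  # with 1/y

plt.hist(y, bins=200, range=(0, 8), density=True, alpha=0.5)
plt.plot(grid, pdf)
plt.show()  # the curve fits the histogram; drop the 1/y and it visibly fails
```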
{ "language": "en", "url": "https://math.stackexchange.com/questions/2409190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there an easier way to compute $e^{tA}$ manually? I have this matrix $$A:=\begin{bmatrix}0&-1&1\\0&0&1\\-1&0&1\end{bmatrix}$$ I have checked that $A^3\neq I$ but $A^4=I$, and I want to find $e^{tA}$. What I did was $$e^{tA}=\sum_{k=0}^\infty\frac{(tA)^k}{k!}=I\sum_{k=0}^\infty\frac{t^{4k}}{(4k)!}+A\sum_{k=0}^\infty\frac{t^{4k+1}}{(4k+1)!}+A^2\sum_{k=0}^\infty\frac{t^{4k+2}}{(4k+2)!}+A^3\sum_{k=0}^\infty\frac{t^{4k+3}}{(4k+3)!},$$ but I don't know if I can find $e^{tA}$ in an easier, computable-by-hand way than the above; doing it manually seems painful. So, if someone has an idea to improve the manual computation of $e^{tA}$, I would like to know it.
Let$$P=\begin{pmatrix}1-i&1+i&0\\-i&i&1\\1&1&1\end{pmatrix}.$$The columns of $P$ are eigenvectors of $A$. Then$$P^{-1}=\frac14\begin{pmatrix}1+i & -1+i & 1-i \\ 1-i & -1-i & 1+i \\-2 & 2 & 2\end{pmatrix}$$and$$P^{-1}.A.P=\begin{pmatrix}i&0&0\\0&-i&0\\0&0&1\end{pmatrix}.$$Therefore$$P^{-1}.e^{tA}.P=\begin{pmatrix}e^{ti}&0&0\\0&e^{-ti}&0\\0&0&e^t\end{pmatrix},$$which means that $e^{tA}$ is equal to$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix},$$where\begin{align}a_{11}&=\cos (t)\\a_{12}&=-\sin (t)\\a_{13} &= \sin (t) \\ a_{21}&= \frac{1}{2} \left(\cos (t)-\cosh (t)+\sin (t)-\sinh (t)\right)\\a_{22} &= \frac{1}{2} \left(\cos(t)+\cosh (t)-\sin (t)+\sinh (t)\right)\\a_{23} &= \frac{1}{2} \left(-\cos (t)+\cosh (t)+\sin (t)+\sinh (t)\right) \\a_{31}&= \frac{1}{2} \left(\cos (t)-\cosh (t)-\sin (t)-\sinh (t)\right)\\ a_{32}&= \frac{1}{2} \left(-\cos (t)+\cosh (t)-\sin (t)+\sinh (t)\right)\\a_{33} &= \frac{1}{2} \left(\cos (t)+\cosh (t)+\sin (t)+\sinh (t)\right).\end{align}
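One can spot-check this closed form numerically (a sketch assuming scipy is available); here just the first row, $(\cos t, -\sin t, \sin t)$, plus the $A^4=I$ claim from the question:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, -1, 1], [0, 0, 1], [-1, 0, 1]], dtype=float)
t = 0.7
E = expm(t * A)
print(np.allclose(E[0], [np.cos(t), -np.sin(t), np.sin(t)]))   # True
print(np.allclose(np.linalg.matrix_power(A, 4), np.eye(3)))    # True: A^4 = I
```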
{ "language": "en", "url": "https://math.stackexchange.com/questions/2409282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }