How do I find the slope of a curve of intersection at a given point? I am asked to find the slope of the curve of intersection between the upper half of the unit sphere and the plane $y=\frac12$ at the point $P(\frac12, \frac12,\frac {1} {\sqrt2})$ I made a drawing to illustrate how I visualize the problem. I'm confused about how to parametrize the curve. Yeah, I see that it's a circle. I can probably use a trigonometric parametrization. But which one? And how do I say why that's the one I chose to use? I understand that the partial derivatives will give me the slope of the curve at a point. So in order to find the slope at $P$ I just plug it in to the first partials. But if I have parametrized the curve, does that mean I have to change the coordinates of $P$? I need help understanding this problem. The more I search for resources on the internet the more confused I become.
Assuming that you mean the unit sphere given by $x^2+y^2+z^2=1$, you have for $y=\frac{1}{2}$ the equation $x^2+z^2=\frac{3}{4}$. Solving this wrt $z$ we have $z=\pm\sqrt{\frac{3}{4}-x^2}$, and since you are looking for the upper half dome of your sphere we use the $+$-solution, that is, $z(x)=\sqrt{\frac{3}{4}-x^2}$. Now you can differentiate this wrt $x$ to obtain the slope. Is this what you are looking for?
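A quick numerical cross-check of this approach (a sketch assuming Python with SymPy is available; the slice $y=\tfrac12$, the point $P$, and the branch $z(x)=\sqrt{3/4-x^2}$ are taken from the question and the answer above):

```python
import sympy as sp

x = sp.symbols('x')
z = sp.sqrt(sp.Rational(3, 4) - x**2)      # the curve z(x) on the slice y = 1/2
slope = sp.diff(z, x).subs(x, sp.Rational(1, 2))
print(sp.simplify(slope))                  # -sqrt(2)/2, the slope at P
```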
{ "language": "en", "url": "https://math.stackexchange.com/questions/1044602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I differentiate this equation? $y = \sqrt[4]{\frac{(x^3+2\sqrt{x})^2(x-\sin x)^5}{(e^{-2x}+3x)^3}}$ $y = \sqrt[4]{\frac{(x^3+2\sqrt{x})^2(x-\sin x)^5}{(e^{-2x}+3x)^3}}$ I tried removing the root but that got me nowhere
the solution of the given problem should be this here $$\frac{\left(x^3+2 \sqrt{x}\right) (x-\sin (x))^4 \left(-3 \left(3-2 e^{-2 x}\right) \left(x^3+2 \sqrt{x}\right) (x-\sin (x))+5 \left(3 x+e^{-2 x}\right) \left(x^3+2 \sqrt{x}\right) (1-\cos (x))+2 \left(3 x+e^{-2 x}\right) \left(3 x^2+\frac{1}{\sqrt{x}}\right) (x-\sin (x))\right)}{4 \left(3 x+e^{-2 x}\right)^4 \left(\frac{\left(x^3+2 \sqrt{x}\right)^2 (x-\sin (x))^5}{\left(3 x+e^{-2 x}\right)^3}\right)^{3/4}}$$
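For the "how" part of the question, one standard route (not necessarily the one used above) is logarithmic differentiation; a sketch: $$\ln y=\tfrac14\Bigl[2\ln\bigl(x^3+2\sqrt{x}\bigr)+5\ln(x-\sin x)-3\ln\bigl(e^{-2x}+3x\bigr)\Bigr],$$ so $$\frac{y'}{y}=\tfrac14\left[\frac{2\bigl(3x^2+\tfrac{1}{\sqrt{x}}\bigr)}{x^3+2\sqrt{x}}+\frac{5(1-\cos x)}{x-\sin x}-\frac{3\bigl(3-2e^{-2x}\bigr)}{e^{-2x}+3x}\right],$$ and multiplying back by $y$ and clearing denominators matches the structure of the displayed expression.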
{ "language": "en", "url": "https://math.stackexchange.com/questions/1044728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Integral of $\frac{1}{\sqrt{x^2-x}}dx$ For a differential equation I have to solve the integral $\frac{dx}{\sqrt{x^2-x}}$. I eventually have to write the solution in the form $ x = ...$ It doesn't matter if I solve the integral myself or if I use a table to find the integral. However, the only helpful integral in an integral table I could find was: $$\frac{dx}{\sqrt{ax^2+bx+c}} = \frac{1}{\sqrt{a}} \ln \left|2ax + b +2\sqrt{a\left({ax^2+bx+c}\right)}\right|$$ Which would in my case give: $$\frac{dx}{\sqrt{x^2-x}} = \ln \left|2x -1 + 2\sqrt{x^2-x}\right|$$ Which has me struggling with the absolute value signs as I need to extract x from the solution. All I know is that $x<0$ which does not seem to help me either (the square root will only be real if $x<-1$). Is there some other formula for solving this integral which does not involve absolute value signs or which makes extracting $x$ from the solution somewhat easier? Thanks!
$$\int \frac{dx}{\sqrt{x^2-x}} = \int \frac{dx}{\sqrt{\left(x-\frac{1}{2}\right)^2 - \frac{1}{4}}}$$ Setting $ x - \frac{1}{2} = t $ $$ \int \frac{dx}{\sqrt{\left(x-\frac{1}{2}\right)^2 - \frac{1}{4}}} = \int \frac{dt}{\sqrt{t^2 - \frac{1}{4}}} $$ It's possible to do this using a trig substitution, but if you want the inverse function, a better way is to use a hyperbolic substitution. Let $t = \frac{1}{2}\cosh u \Rightarrow dt = \frac{1}{2} \sinh u$ $$ \int \frac{dt}{\sqrt{t^2 - \frac{1}{4}}} = \int \frac{\frac{1}{2}\sinh u \, du}{\sqrt{\frac{1}{4}(\cosh^2 u - 1)}} = \int \frac{\frac{1}{2}\sinh u \, du}{\frac{1}{2}\sinh u} = \int du = u + C $$ Working backwards is easy since you already have $t$ as a function of $u$ $$t = \frac{1}{2}\cosh u \Rightarrow x = \frac{1}{2}\cosh u + \frac{1}{2}$$ Where $u$ is the integral in question
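To connect this with the table formula in the question: on the branch where $2x-1\ge 1$ (so the inverse hyperbolic cosine is defined), inverting the substitution gives $$u=\operatorname{arcosh}(2t)=\operatorname{arcosh}(2x-1)=\ln\!\bigl(2x-1+\sqrt{(2x-1)^2-1}\bigr)=\ln\!\bigl(2x-1+2\sqrt{x^2-x}\bigr),$$ so on this branch the hyperbolic answer and the logarithmic table formula agree up to an additive constant, with no absolute-value signs needed.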
{ "language": "en", "url": "https://math.stackexchange.com/questions/1044834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 5 }
Let $n\geq4$, how many permutations $\pi$ of $S_n$ have the property that $1,2,3$ appear in the same cycle of $\pi$.... Let $n\geq4$, how many permutations $\pi$ of $S_n$ have the property that $1,2,3$ appear in the same cycle of $\pi$, while $4$ appears in a different cycle of $\pi$ from $1,2,3$? My attempt: For $n=4$ I have found two such permutations $(123)(4)$ and $(132)(4)$ For $n=5$ I have found four such permutations $(123)(45),(132)(45),(123)(4)(5),(132)(4)(5)$ For $n=6$ I have found $12$ such permutations. $(123)(456),(132)(456),(123)(465),(132)(465),(123)(45)(6),(132)(45)(6),(123)(46)(5),(132)(46)(5),(123)(56)(4),(132)(56)(4),(123)(4)(5)(6),(132)(4)(5)(6)$ My problem: How can we solve this in general? What will be the general formula for the number of such permutations? Edit: For $n=5$ we also have the permutations $(1235)(4),(1325)(4),(1532)(4)$ and similarly these are $6$ in number. So in $S_5$ we have $4+6=10$ such permutations.
If a permutation $\sigma \in S_n$ consists of a $j$-cycle with $1, 2, 3$, call that cycle $\gamma_j$ and the set of numbers it is cycling $\Gamma_j \supseteq \{ 1, 2, 3 \}$ with $4 \notin \Gamma_j$. For a fixed choice of $\Gamma_j$, the number of such $j$-cycles is $\color{blue}{(j-1)!}$, and the remaining $j-3$ elements of $\Gamma_j$ can be chosen from $\{5,\ldots,n\}$ in $\color{blue}{\binom{n-4}{j-3}}$ ways. We can write $\sigma = \gamma_j\sigma'$ where $\sigma'$ is a member of all the permutations on $\Gamma_j' = \{ 1, 2, 3, ..., n \} - \Gamma_j$. As the cardinality of $\Gamma_j$ is $j$, the cardinality of $\Gamma_j'$ is $n - j$ and the number of such permutations $\sigma'$ is $\color{blue}{(n-j)!}$ Hence when $1, 2, 3$ is in a $j$-cycle avoiding $4$, there are $\color{blue}{\binom{n-4}{j-3}(j-1)!(n-j)!}$ possible permutations $\sigma$. Summing up now over all possible $j$, the total number $t_n$ of such permutations for a given $n \geq 4$ is $$t_n = \sum_{j=3}^{n-1} \binom{n-4}{j-3}(j-1)!(n-j)!$$ Calculating the first few values, $t_4 = \binom{0}{0}2!1! = 2$, $t_5 = \binom{1}{0}2!2! + \binom{1}{1}3!1! = 4 + 6 = 10$, $t_6 = \binom{2}{0}2!3! + \binom{2}{1}3!2! + \binom{2}{2}4!1! = 12 + 24 + 24 = 60$ (that is, $n!/12$ in each case), in agreement with the brute-force check below.
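A brute-force check of the first few values (a sketch assuming Python; it enumerates $S_n$ directly, with a permutation represented as the tuple of images of $1,\dots,n$, and tests the cycle condition):

```python
from itertools import permutations

def cycles(perm):
    # perm is a tuple: the image of i is perm[i-1]
    n, seen, cycs = len(perm), set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cyc, cur = set(), start
        while cur not in cyc:        # follow the orbit of `start` until it closes
            cyc.add(cur)
            seen.add(cur)
            cur = perm[cur - 1]
        cycs.append(cyc)
    return cycs

def t(n):
    count = 0
    for perm in permutations(range(1, n + 1)):
        c = next(c for c in cycles(perm) if 1 in c)   # the cycle containing 1
        if {1, 2, 3} <= c and 4 not in c:
            count += 1
    return count

print([t(n) for n in (4, 5, 6)])   # [2, 10, 60]
```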
{ "language": "en", "url": "https://math.stackexchange.com/questions/1044920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
For a given finite group $Г$, determine an infinite number of mutually nonisomorphic graphs whose groups are isomorphic to $Г$. For a given finite group $Г$, determine an infinite number of mutually nonisomorphic graphs whose groups are isomorphic to $Г$. I know that $Г$ is generated by $\Delta$ and for any finite $Г$, there exists a graph $G$ such that $Aut(G) \cong Г$. But how does this information help me find an infinite number of mutually nonisomorphic graphs whose groups are isomorphic to $Г$?
Given a group $\Gamma = \{g_1,\ldots,g_n\}$, construct an edge-colored digraph on vertex set $\Gamma$, and with arcs $(g_i,g_j)$ having color $g_k$ iff $g_k=g_i g_j^{-1}$. Then the edge-colored digraph has automorphism group $R(\Gamma)$, the right regular representation of $\Gamma$, which is isomorphic to $\Gamma$. There are an infinite number of ways to replace this edge-colored digraph with a simple, undirected graph that has the same automorphism group as the edge-colored digraph. This yields an infinite number of undirected graphs whose groups are isomorphic to $\Gamma$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
the greatest value of an implicit function $$\frac{\exp(x)\sin(x) - \exp(y)\sin(y)}{\exp(y)-\exp(x)} \le P$$ where $x \le y$. How would you find the smallest value of $P$ (the greatest value of the LHS expression) without using a graphical calculator? In addition, $x$ is less than or equal to $y$, which means the graph must lie on the LHS of the graph $y=x$
As $y\to x^+$, the expression approaches the derivative of $-u\sin(\log u)$ with respect to $u=e^x$. Otherwise, at an interior maximum the two partial derivatives $\partial /\partial x$ and $\partial /\partial y$ of $$\exp x\sin x-\exp y\sin y=P(\exp y-\exp x)$$ must both vanish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Noetherian property for exact sequence Let $0 \to M' \xrightarrow{\alpha} M \xrightarrow{\beta} M'' \to 0$ be an exact sequence of $A$-modules. Then $M$ is Noetherian if and only if $M'$ and $M''$ are Noetherian. For the ''$\Leftarrow$'' case: I guess if we let $(L_n)_{n\geq 1}$ be an ascending chain of submodules of $M$, then $(\alpha ^{-1}(L_n))_{n\geq 1}$ is a chain in $M'$, and $(\beta(L_n))_{n\geq 1}$ is a chain in $M''$. For large $n$ both these chains are stationary. Then why do we know the chain $(L_n)$ is stationary? Thanks!
This follows immediately from the Five Lemma applied to $$\begin{array}{c} 0 & \longrightarrow & \alpha^{-1}(L_n) & \longrightarrow & L_n & \longrightarrow & \beta(L_n) & \longrightarrow & 0 \\ & & \downarrow && \downarrow && \downarrow & \\ 0 & \longrightarrow & \alpha^{-1}(L_{n+1}) & \longrightarrow & L_{n+1} & \longrightarrow & \beta(L_{n+1}) & \longrightarrow & 0.\end{array}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Could someone point me in the right direction for this proof? I need to prove that Q (rational numbers) is countable by applying the function $f(m/n) = 2^m3^n$ with $m,n$ being relatively prime numbers. I honestly have no idea where to start. Any pointers would be great!
Injectivity $f\left(\dfrac{m_1}{n_1}\right)=f\left(\dfrac{m_2}{n_2}\right) \implies 2^{m_1}3^{n_1}=2^{m_2}3^{n_2}$ Case 1 $m_1\geq m_2, n_1\leq n_2$ $\therefore 2^{m_1}3^{n_1}=2^{m_2}3^{n_2} \implies 2^{m_1-m_2}=3^{n_2-n_1} \implies m_1=m_2, \ n_1=n_2$ Case 2 $m_1\geq m_2, n_1\geq n_2$ $\therefore 2^{m_1}3^{n_1}=2^{m_2}3^{n_2} \implies \left(2^{m_1-m_2}\right)\left(3^{n_1-n_2}\right)=1 \implies m_1=m_2, \ n_1=n_2$ Case 3 $m_1\leq m_2, n_1\geq n_2$ Similar analysis as Case 1. Case 4 $m_1\leq m_2, n_1\leq n_2$ Similar analysis as Case 2. $\\$ Hence $f$ is injective. (It need not be surjective onto the positive integers, since for example $5$ is not of the form $2^m3^n$, but injectivity is enough: $f$ is a bijection onto its image, a subset of the countable set of positive integers, so the positive rationals are countable, and countability of $\mathbb Q$ follows.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
measurability with zero measure Let $f : [0,1] \rightarrow \Bbb R$ be an arbitrary function, and $E \subset \{ x \in [0,1] | f'(x)$ exists$\}$. How to prove this statement: If $E$ is measurable with zero measure then $f(E)$ is measurable with zero measure. If $E$ is measurable with zero measure then $m(E)=0$, where $m :$ {measurable} $\rightarrow \Bbb R$, $m^* : P(\Bbb R) \rightarrow \Bbb R$, $A:= \{ x \in [0,1] | f'(x)$ exists$\}$, so $m(E)=m^*(E) \le m^*(A)$
hot_queen's answer looks fine to me. This is not an independent answer, it's a minor variation and a comment on hot_queen's answer; I'm posting it as an answer for better readability (and who knows, someone might be fooled into upvoting it). I'm reorganizing the covering lemma part of the argument. As observed earlier, it suffices to discuss the situation where $|f'|\le 1$ on $E$. For each $x\in E$, we can then find an interval $I_x=(x-h_x, x+ h_x)$ so that $|f(I_x)|\le 2 |I_x|$. We can also demand that $I_x\subset U$ for some open set $U\supset E$ with $|U|<\epsilon$. The intervals $I_x$ ($x\in E$) cover $E$, and we can find a subcollection $I_j$ (necessarily countable) that still covers $E$ and has overlap at most $2$: $\sum \chi_{I_j}\le 2$. This is a version of Besicovich's covering lemma; in one dimension, this is quite easy and immediately plausible. Now $$ |f(E)|\le \sum |f(I_j)| \le 2 \sum |I_j| \le 4 \left| \bigcup I_j \right| < 4\epsilon , $$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
"Honest" introductory real analysis book I was asked if I could suggest an "honest" introductory real analysis book, where "honest" means: * *with every single theorem proved (that is, no "left to the reader" or "you can easily see"); *with every single problem properly solved (that is, solved in a formal (exam-like) way). I've studied using Rudin mostly and I liked it, but it really doesn't fit the description, so I don't know what book I should suggest. Do you have any recommendations? Update: I need to clarify that my friend has just started to study real analysis and the course starts from the very basics, deals with real valued functions of one variable, but introduces topological concepts and metric spaces too.
Possibly Abbott, Understanding Analysis
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 10, "answer_id": 4 }
Finding the vertical asymptote of $y = \pm2\sqrt\frac{x}{x-2}$ Find the vertical asymptotes. $$ y = \pm2\sqrt\frac{x}{x-2}$$ There are two functions, right? So, $$ y = f_1(x)= +2\sqrt\frac{x}{x-2}$$ $$ y = f_2(x)= -2\sqrt\frac{x}{x-2}$$ I'm good until here. Now, the book says $$\lim_{x\to2^+}f_1(x) = \lim_{x\to2^+} 2\sqrt\frac{x}{x-2} = + \infty$$ The book is a little old. Some parts were ripped. Could anyone please tell me why we take the limit as $x$ approaches $2$ from the right side?
Because if $x\to 2^-$ then the radicand is negative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
connection between discrete valuation rings and points of a curve. Let $C$ be a projective irreducible non-singular curve over a field $k$ and let K be its function field. It holds that $(k[X,Y]/I(C))_{(X-a,Y-b)}$ (i.e. the localization of $k[X,Y]/I(C)$ at $(X-a,Y-b)$) is a discrete valuation ring w.r.t. $K$ and $k$, where $(a,b)$ is a point on $C$. My question is now: Can any discrete valuation ring w.r.t. $K$ and $k$ be linked to $C$ (maybe to some point or a prime ideal of $k[X,Y]/I(C)$)? Note that if $k$ is algebraically closed, any discrete valuation ring is a local ring of some point of $C$.
Suppose that $C/k$ is an integral projective curve. Then, there is a bijection $$\left\{\text{points of }C\right\}\longleftrightarrow\left\{\begin{matrix}\text{discrete valuations}\\\text{of }K(C)\\ \text{which are trivial}\\ \text{on }k\end{matrix}\right\}$$ The mapping is, as you indicated, the one sending $p$ to the valuation $v_p$ which is "order of vanishing of a function at $p$". This is an application of the valuative criterion for properness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Dense subset of continuous functions Let $C([0,1], \mathbb R)$ denote the space of real continuous functions at $[0,1]$, with the uniform norm. Is the set $H=\{ h:[0,1] \rightarrow \mathbb R : h(x)= \sum_{j=1}^n a_j e^{b_jx} , a_j,b_j \in \mathbb R, n \in \mathbb N \}$ dense in $C([0,1], \mathbb R)$?
Yes, by the general Stone-Weierstrass Approximation Theorem. Indeed, $H$ is an algebra of continuous real-valued functions on the compact Hausdorff space $[0,1]$, and $H$ separates points and contains a non-zero constant function. That is all we need to conclude that $H$ is dense in $C([0,1],\mathbb{R})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove $g(a) = b, g(b) = c, g(c) = a$ Let $$f(x) = x^3 - 3x^2 + 1$$ $$g(x) = 1 - \frac{1}{x}$$ Suppose $a>b>c$ are the roots of $f(x) = 0$. Show that $g(a) = b, g(b) = c, g(c) = a$. (Singapore-Cambridge GCSE A Level 2014, 9824/01/Q2) I was able to prove that $$fg(x) = -f\left(\frac{1}{x}\right)$$ after which I have completely no clue how to continue. It is possible to numerically validate the relationships, but I can't find a complete analytical solution.
Note: this is only a Hint Using vieta's formulas $$a+b+c=3$$ $$ab+bc+ca=0$$ $$g(a) = 1 - \frac{1}{a}=\frac{a-1}{a}=b\implies a-1=ab\tag{1}$$ $$g(b) = 1 - \frac{1}{b}=\frac{b-1}{b}=c\implies b-1=bc\tag{2}$$ $$g(c) = 1 - \frac{1}{c}=\frac{c-1}{c}=a\implies c-1=ca\tag{3}$$ Adding $(1),(2),(3)$ $$a+b+c-3=ab+bc+ca$$ Which is true from relation between roots given by vieta's formula Now construct your proof
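A numerical sanity check of the claimed cycle (a sketch assuming Python with NumPy; the cubic and $g$ are exactly those from the question):

```python
import numpy as np

roots = np.sort(np.roots([1, -3, 0, 1]).real)[::-1]   # roots of x^3 - 3x^2 + 1, ordered a > b > c
a, b, c = roots
g = lambda t: 1 - 1 / t
print(np.isclose(g(a), b), np.isclose(g(b), c), np.isclose(g(c), a))   # True True True
```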
{ "language": "en", "url": "https://math.stackexchange.com/questions/1045961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Find area under the curve of a standard normal distribution Given a standard normal distribution, how can I find the area under the curve that lies: 1. to the left of z = −1.39; 2. to the right of z = 1.96; 3. between z = −2.16 and z = −0.65;
The areas for one, two, and three standard deviations about the mean are well known, as .68, .95, .997 roughly. But in general, there's no way to find the area under the standard normal distribution without using a computer, calculator, or a chart. edit1: most charts will give the area to the left of a given $z$-score. To find other values, use symmetry and complements. For example, $\operatorname{Pr}(Z > -.5) = 1 - \operatorname{Pr}(Z < -.5)$ and $\operatorname{Pr}(Z < - 1) = \operatorname{Pr}(Z > 1)$ edit2: I'm not careful above about whether to use "$\ge$" or "$>$" because the answers will always be the same for a continuous distribution like a normal distribution. Be careful you don't switch them around for discrete distributions because then it will affect the answer.
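For the three specific areas in the question, any chart or statistics package will do; for instance, a sketch assuming Python with SciPy (the numbers in the comments are the usual rounded table values):

```python
from scipy.stats import norm

print(norm.cdf(-1.39))                     # P(Z < -1.39)          ~ 0.0823
print(1 - norm.cdf(1.96))                  # P(Z >  1.96)          ~ 0.0250
print(norm.cdf(-0.65) - norm.cdf(-2.16))   # P(-2.16 < Z < -0.65)  ~ 0.2424
```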
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a shape with infinite area but finite perimeter? Is this really possible? Is there any other example of this other than the Koch Snowflake? If so can you prove that example to be true?
One can have a bounded region in the plane with finite area and infinite perimeter, and this (and not the reverse) is true for (the inside of) the Koch Snowflake. On the other hand, the Isoperimetric Inequality says that if a bounded region has area $A$ and perimeter $L$, then $$4 \pi A \leq L^2,$$ and in particular, finite perimeter implies finite area. In fact, equality holds here if and only if the region is a disk (that is, if its boundary is a circle). See these notes (pdf) for much more about this inequality, including a few proofs. (As Peter LeFanu Lumsdaine observes in the comments below, proving this inequality in its full generality is technically demanding, but to answer the question of whether there's a bounded region with infinite area but finite perimeter, it's enough to know that there is some positive constant $\lambda$ for which $$A \leq \lambda L^2,$$ and it's easy to see this intuitively: Any closed, simple curve of length $L$ must be contained in the disc of radius $\frac{L}{2}$ centered at any point on the curve, and so the area of the region the curve encloses is smaller than the area of the disk, that is, $$A \leq \frac{\pi}{4} L^2.)$$ NB that the Isoperimetric Inequality is not true, however, if one allows general surfaces (roughly, 2-dimensional shapes not contained in the plane. For example, if one starts with a disk and "pushes the inside out" without changing the circular boundary of the disk, then one can make a region with a given perimeter (the circumference of the boundary circle) but (finite) surface area as large as one likes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79", "answer_count": 10, "answer_id": 5 }
Bochner Integral: Approximability Problem Given a measure space $\Omega$ and a Banach space $E$. Consider a Bochner measurable function $S_n\to F$. Then it admits an approximation from nearly below: $$\|S_n(\omega)\|\leq \vartheta\|F(\omega)\|:\quad S_n\to F\quad(\vartheta>1)$$ (This is sufficient for most cases regarding proofs.) Can it happen that it does not admit an approximation from below: $$\|S_n(\omega)\|\leq\|F(\omega)\|:\quad S_n\to F$$ (I'm just being curious wether it can actually fail.) Constructions The only constructions I found so far are cutoff to bound an approximation: $$E_n:=\{\|F_n\|\leq2\|F\|\}:\quad F'_n:=F_n\chi_{E_n}\implies\|F'_n\|\leq2\|F\|$$ and reset to obtain an increasing resp. decreasing approximation: $$R^\pm_n:=\{\|F_n\|\gtrless\|F'_{n-1}\|\}:\quad F'_n:=F_n\chi_{\Omega_n}+F'_{n-1}\chi_{\Omega_n^\complement}\implies\|F_n'\|\updownarrow\|F'_{n+1}\|$$ Apart from these truncation seems less relevant here: $$\Omega_n\uparrow\Omega:\quad F'_n:=\chi_{\Omega_n}\implies\|F'_n\|\uparrow\|F\|$$
This is a modified version from: Cohn: Measure Theory Enumerate a countable dense set: $$\#S\leq\mathfrak{n}:\quad S=\{s_1,\ldots\}\quad(\overline{S}=F\Omega)$$ Regard the finite subsets: $$S_K:=\{s_1,\ldots,s_K\}$$ Construct the domains by: $$A_k:=A_n(s_k):=\{\omega:\|s_k\|\leq\|F(\omega)\|\}\cap\{\omega:\|F(\omega)-s_k\|<\tfrac{1}{n}\}$$ And sum up their disjoint parts: $$A_k':=A_k\setminus\left(\bigcup_{l=1}^{k-1}A_l\right):\quad F_n:=\sum_{k=1}^{K=n}s_k\chi_{A_k'}$$ (Note that the supports are clearly measurable.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Which Sobolev-Space to use to formulate weak biharmonic equation, $H^2_0$ or $H_0^1\cap H^2$? For the weak formulation of the biharmonic equation on a smooth domain $\Omega$ $$ \Delta^2u=0\;\text{in}\;\Omega\\ u=0, \nabla u\cdot \nu=0\; \text{on}\; \partial\Omega $$ why does one take $H^2_0(\Omega)=\overline{C_c^\infty(\Omega)}^{W^{2,2}}$ as the underlying space? (i.e. $u\in H^2_0$ weak solution iff $\int \Delta u\Delta\phi=0,\;\forall \phi\in H^2_0$) Isn't $\nabla u=0$ on $\partial\Omega$ for $u\in H^2_0(\Omega)\cap C^1(\Omega)$, which is more than $\nabla u\cdot \nu=0$? If yes, wouldn't $H_0^1(\Omega)\cap H^2(\Omega)$ be the better choice?
Here is what I thought. I am not very sure but I am happy to discuss with you. The way we cast $\Delta^2u=0$ into a weak formulation, so that we could use Lax-Milgram, tells us that $H_0^2$ is a suitable space. Suppose we have a nice solution already, then we test $\Delta^2u=0$ with a $C^\infty$ function $v$ and see what happens. We have $$\int_\Omega \Delta^2u\,v\,dx = -\int_\Omega \nabla \Delta u \nabla v\,dx+\int_{\partial \Omega}\nabla\Delta u \cdot\nu\,v\,d\sigma(x) = \int_\Omega\Delta u\Delta v \,dx-\int_{\partial \Omega}\Delta u\nabla v\cdot\nu \,d\sigma(x)+\int_{\partial \Omega}\nabla\Delta u \cdot\nu\,v\,d\sigma(x) $$ Hence, we would like to ask $v=0$ and $\nabla v\cdot\nu=0$ on $\partial \Omega$ so that we can obtain a bilinear form. Now we have to choose our underlying space so that we could apply Lax-Milgram. Keep in mind that we need to choose a Banach space $H$, and it has to encode the conditions that $v=0$ and $\nabla v\cdot\nu=0$ on $\partial \Omega$. Certainly $H_0^1\cap H^2(\Omega)$ will not work because it cannot guarantee that $\nabla v\cdot \nu\equiv 0$. Hence the only natural space left for us is $H_0^2(\Omega)$. I am really not sure that $\{v\in H^2, T[v]=0\text{ and }T[\nabla v]\cdot \nu=0\}$ is a Banach space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Linear independence of a set of mappings $Map(\mathbb{R},\mathbb{R}):=$ The set of all mappings from $\mathbb{R} \rightarrow \mathbb{R}$ For every $a \in \mathbb{R}$ there is a function $f_{a}:\mathbb{R} \rightarrow \mathbb{R}$ with: $$ f_{a}(x) = \left\{ \begin{array}{l l} (a-x)^{3} & \quad \text{if $x \le a$}\\ 0 & \quad \text{else} \end{array} \right. $$ Show that the set $\{f_{a}|a \in \mathbb{R}\} $ in the real vector space $Map(\mathbb{R},\mathbb{R})$ is linearly independent. I wonder how to show that there is no linear combination that equals zero other than the one with all coefficients being zero. Any hints on how to start on this?
You have to show that if you have a finite linear combination of functions of the form $f_a$, which is identically zero, then all coefficients have to be zero. So let $\lambda_1f_{a_1}+\ldots+\lambda_nf_{a_n}=0$, that is, $\lambda_1f_{a_1}(x)+\ldots+\lambda_nf_{a_n}(x)=0$ for each $x\in\mathbb R$. We want to show that $\lambda_1=\ldots=\lambda_n=0$. Now, use that for each $x\leq a$ the function $f_a$ is a polynomial of degree 3. Then your linear combination is a polynomial also of degree 3 for $x\leq\min\{a_1,\ldots,a_n\}$, which is zero. What do you know about such polynomials?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Determining the value of ECDF at a point using Matlab I have a data $X=[x_1,\dots,x_n].$ In Matlab, I know by using [f,x]=ecdf(X) plot(x,f) we will have the empirical distribution function based on $X$. Now, if $x$ is given, how will I know the value of my ECDF at this point?
Using interp1 is a nice idea, but we should not use the 'nearest' option. Instead, to get the right result we must use the 'previous' option, because ECDFs are step functions: flat except at their jump points. Also don't forget to deal with evaluation points that fall outside the range of the data, as the last two lines below do. If [f, x] is obtained from the ecdf command applied to the data X, I recommend y = interp1(x, f, vec_eval, 'previous'); y(vec_eval < min(X)) = 0; y(vec_eval >= max(X)) = 1; where vec_eval is the vector of points at which you want to evaluate the ECDF.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Let f(x) be a non negative continuous function on R such that $f(x) +f(x+\frac{1}{3})=5$ then calculate ........ Problem : Let f(x) be a non negative continuous function on R such that $f(x) +f(x+\frac{1}{3})=5$ then calculate the value of the integral $\int^{1200}_0 f(x) dx$ My approach : Given that : $f(x) +f(x+\frac{1}{3})=5.....(1) $ We replace x with $x +\frac{1}{3}$ so we get the given equation as : $f(x+\frac{1}{3})+f(x+\frac{2}{3}).....(2)$ Now subtracting (1) from (2) we get : $f(x+\frac{2}{3}) = f(x) $ $\Rightarrow f(x) $ is function with period $\frac{2}{3}$ Now how to move further please help on how the period of this function will impact the limit of integration. Thanks.
Hint: Note $$\int_{1/3}^{2/3}f(x)dx= \int_0^{1/3}f(x+\frac{1}{3})dx$$ and hence $$ \int_0^{1/3}f(x)dx+\int_{1/3}^{2/3}f(x)dx=\int_0^{1/3}(f(x)+f(x+\frac{1}{3}))dx=\frac{5}{3}.$$ Then write $$\int_0^{1200}f(x)dx=\int_0^{1/3}f(x)dx+\int_{1/3}^{2/3}f(x)dx+\cdots$$ and you will get the answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
3 balls in a box We have a box with $3$ balls, that can be black or white. We extract a ball, and it's white. Then we put the ball in the box, we extract again a ball and it's white. What is the probability that in the box there are $3$ white balls ? I obtain $\frac{9}{14}$ but I'm not sure about this result.
Assume each ball has the same probability of being black or white. The prior probability of having $n$ white balls is $\binom 3n\frac18$, $n=0,1,2,3$. Given there are $n$ white ball(s) in the box, then the probability of drawing two white balls with replacement is $\left(\frac n3\right)^2$. The posterior probability of having 3 white balls, given the result, is: $$\begin{align*} P(W=3\mid W_1,W_2) &= \frac{P(W_1,W_2\mid W=3)\times P(W=3)}{P(W_1,W_2)}\\ &= \frac{P(W_1,W_2\mid W=3)\times P(W=3)}{\sum_{n=0}^3P(W_1,W_2\mid W=n)\times P(W=n)}\\ &= \frac{\left(\frac 33\right)^2\binom 33\frac18}{\sum_{n=0}^3\left(\frac n3\right)^2\binom 3n\frac18}\\ &= \frac{3^2\times1}{0^2\times1+1^2\times3+2^2\times3+3^2\times1}\\ &= \cdots \end{align*}$$
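If you want to finish the arithmetic exactly, here is a sketch (assuming Python) that evaluates the same Bayes computation with exact fractions and prints the posterior; the prior $P(W=n)=\binom 3n/8$ and the likelihood $(n/3)^2$ are the ones used above:

```python
from fractions import Fraction
from math import comb

prior = {n: Fraction(comb(3, n), 8) for n in range(4)}    # P(W = n) = C(3,n)/8
like = {n: Fraction(n, 3) ** 2 for n in range(4)}         # P(two white draws | W = n)
posterior = prior[3] * like[3] / sum(prior[n] * like[n] for n in range(4))
print(posterior)                                          # the exact posterior P(W=3 | W1, W2)
```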
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that the order of a group is greater than or equal to the product of orders of 2 subgroups. Let $H$ and $K$ be subgroups of a finite group $G$ such that $H \cap K = \{e\}$ Prove that $|G|\ge |H||K|$ What I think is the correct step is to consider the cosets $hK, h \in H$, and then using properties of cosets to prove that the product of orders can't be greater than $|G|$, but this is where I get stuck.
Hint: Consider $\phi: H \times K \to G$ given by $\phi(h,k)=hk$. Prove that $\phi$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
System of congruence equations I have a system of congruence eqs $$ \begin{cases} x \equiv 14 \pmod{98} \\ x \equiv 1 \pmod{28} \end{cases} $$ I have calculated $\text{gcd}(98,28) = 14$. I can from the congruence eqs get $x = 14+98k$ and $x = 1+28m$. I equate these $$ 14+98k = 1+28m \Leftrightarrow 28m - 98k = 13 $$ I know that $13$ is not divisible by $\text{gcd}(98,28) = 14$ and therefore the system has no solutions. Is this correct?
You are right, but this is faster: $x\equiv 14\pmod{98}$ implies that $x$ is a multiple of $7$, but $x\equiv 1\pmod{28}$ implies that it isn't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Finding Continuous Functions Find all continuous functions $f:\mathbb{R} \to \mathbb{R}$ such that for all $x \in \mathbb{R}$, $f(x) + f(2x) = 0$ I'm thinking; Let $f(x)=-f(2x)$ Use a substitution $x=y/2$ for $y \in \mathbb{R}$. That way $f(y)=-f(y/2)=-f(y/4)=-f(y/8)=....$ Im just not sure if this is a good approach. Opinions please..
By induction we prove that $$f(x)=(-1)^nf\left(\frac x{2^n}\right)$$ for every $n$. By the sequential characterization of continuity we have $$f\left(\frac x{2^n}\right)\xrightarrow{n\to\infty}f(0),$$ and setting $x=0$ in the functional equation gives $2f(0)=0$, so $f(0)=0$. Hence $|f(x)|=\left|f\left(\frac x{2^n}\right)\right|\to|f(0)|=0$, and therefore $$f(x)=0\quad\forall x\in\Bbb R$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1046961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Show that $x^a+x-b=0$ must have only one positive real root and not exceed $\sqrt[a]{b-1}$ If we take the equation $$x^3+x-3=0$$ and solve it to find the real roots, we will get only one positive real root, which is $(x=1.213411662)$. If we compare this with $\sqrt[3]{3-1}=1.259921$, we will find that $x$ is less than $\sqrt[3]{3-1}$. This always happens for any positive real number $a$ and any positive real number $b$. So we can ask the following: 1- Prove that $x^a+x-b=0$ must have only one positive real root if $a$ is a positive real number and $b$ is a positive real number greater than $1$. 2- The value of this root must be less than $\sqrt[a]{b-1}$. 3- What happens when $a$ is a complex value? Does this remain true or not?
If $f(x)=x^a+x-b$ then $f'(x)=ax^{a-1}+1>0$ for $x>0$, since $a\in\mathbb{R}_+.$ Thus $f$ is strictly increasing on $(0,\infty)$ and so it must take the value $0$ at most one time there. Now, since $f(0)=-b<0$ and $f(b)=b^a>0$ it follows from the Intermediate Value Theorem that there exists a root in the interval $(0,b).$ So, $f$ has exactly one positive root. Finally, take $a=1$ and $b=3$: then $f(x)=x^1+x-3=2x-3$ has root $x=3/2,$ and here $\sqrt[a]{b-1}=(b-1)^{1/a}=2>3/2.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove there exists a real number $x$ such that $xy=y$ for all real $y$ Prove: There exists a real number $x$ such that for every real number $y$, we have $xy=y.$ In class I learned that I can prove a statement by: * *proving the contrapositive, *proof by contradiction, *or proof by cases. Can I do something along the lines of $xy=y$ divide both sides by $x$ to get $y=\frac{y}{x}$ which would then be equal to $y=\frac{y}{x}=xy$ divide everything by $y$ to get $1=\frac{1}{x}=x$ so $x=1$? There exists a real number $x$ such that for every real number $y$, we have $xy=x$ I know I somehow have to prove $x=0$ right? But I am not sure how to go about this.
Questions like this are difficult to give good answers to, not because the proofs themselves are difficult or deep, but because it's unclear what you're allowed to assume as already known, hence what constitutes an "acceptable" proof. It's tempting to say the proofs here boil down to saying "Let $x=1$" (for the first statement) and "Let $x=0$" (for the second), because all you have to show is the existence of a real number with a given property. Indeed, the first statement is, in essence, one of the axioms for the real numbers. The second statement isn't an axiom, but it follows from the theorem that $0\cdot y=0$ for all $y$, provided you have that theorem at your disposal; the existence of the number $x=0$ comes from the axiom for the additive identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 1 }
Show that $F$ and $G$ differ by a constant Suppose $F$ and $G$ are differentiable functions defined on $[a,b]$ such that $F'(x)=G'(x)$ for all $x\in[a,b]$. Using the fundamental theorem of calculus, show that $F$ and $G$ differ by a constant. That is, show that there exists a $C\in\mathbb R$ such that $F(x)-G(x)=C$. I'm assuming this is quite simple, but I can't seem to figure it out.
As you said, based on the fundamental theorem of calculus: $\int_a^xF'(t)\,dt = F(x) + C_1$ and $\int_a^xG'(t)\,dt = G(x) + C_2$, where $C_1$ and $C_2$ are constants from $\mathbb R$ (namely $C_1=-F(a)$ and $C_2=-G(a)$). As $F'(x) = G'(x)$, the two integrals are equal, hence: $F(x) + C_1 = G(x) + C_2 \Leftrightarrow F(x) - G(x) = C_2 - C_1 = C_3$, where $C_3$ is some constant from $\mathbb R$. Q.E.D.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding Exact Value $7\csc(x)\cot(x)-9\cot(x)=0$ The values for $x$ on $[0,2\pi)$ solving $7\csc(x)\cot(x)-9\cot(x)=0$ are? I think that $\dfrac{\pi}2$ is one but I can't find the others. what are the others?
$7\csc x\cot x-9\cot x=0\implies7\cos x-9\cos x\sin x=0\implies (\cos x)(7-9\sin x)=0$, so $\cos x=0\implies x=\frac{\pi}{2}$ or $x=\frac{3\pi}{2}$ and $\sin x=\frac{7}{9}\implies x=\sin^{-1}\frac{7}{9}$ or $x=\pi-\sin^{-1}\frac{7}{9}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Car's airbag deployment system. Let's discuss the math! I'm trying to explore the concepts and/or Calculus/Physics behind auto airbag deployment... I am assuming the computer feeds real time velocity points and the ECU is monitoring for a sudden drop in rate of change / sudden spike in negative acceleration? Does this seem correct? Can we try to describe what the algorithm is looking for? ACU is fed constant speed values from the speed sensor. ACU can calculate acceleration based on time elapsed b/w 2 speed data points. The following is just 8th grade Algebra, and I know actual ACU systems will be much more nuanced (side impact, etc), but is this the core idea involved? VELOCITY $$t=(milliseconds), s=(mph)$$ $$t=0, s=55$$ $$t=5, s=60$$ $$t=10, s=70$$ $$t=15, s=50$$ $$t=20, s=0$$ At time=20 the car has hit into wall and stopped suddenly! ACCELERATION Let's calculate some rate of change of velocity using $\frac{\delta Y}{\delta X}$: $$\frac{60-55}{5-0}= 1 mph$$ $$\frac{70-60}{10-5}= 2 mph$$ $$\frac{50-70}{15-10}= -4 mph$$ $$\frac{0-50}{20-15}= -10 mph$$ During some interval, maybe the airbag is triggered once the acceleration exceed some negative threshold ? Does this seem reasonable?
It is more complicated than that. They probably have 3 axis measurements of acceleration and angular acceleration, so can trigger on a side impact (where the forward velocity doesn't change much) and a rear impact (where the acceleration is unrealistically positive). They also read very frequently so they can filter to avoid triggering on one bad reading.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of series involving 1+cos(n)... $$\sum_{n=2}^\infty{1+\cos(n)\over n^2}$$ I justified that it converges absolutely by putting it less than to $\frac{1}{n^2}$ where $p=2>1$ meaning that it converges absolutely. Would this be a correct way to solve this problem?
Yes, as you say, $$\sum_{n=2}^\infty\left \lvert \frac{1+\cos(n)}{n^2}\right \rvert\leq \sum_{n=2}^\infty \frac{2}{n^2}=\frac{\pi^2}{3}-2,$$ which shows absolute convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Does a diagonal matrix commute with every other matrix of the same size? Does a diagonal matrix commute with every other matrix of the same size? I'm stuck on one line of a proof that I am writing, and I would like to switch order between a non-diagonal and a diagonal matrix. Thanks,
In general, a diagonal matrix does not commute with another matrix. You can find simple counterexamples in the comments. For a matrix to commute with all the others you need the matrix to be scalar, i.e. diagonal with entries on the diagonal which are all the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can derivative of Hurwitz Zeta be expressed in Hurwitz Zeta? Can the derivative of Hurwitz Zeta function by the first argument be expressed in terms of Hurwitz Zeta and elementary fuctions? There is a formula which expresses Hurwitz Zeta through its derivative: $$\zeta '\left(z,\frac{q}{2}\right)-2^z \zeta '(z,q)+\zeta '\left(z,\frac{q+1}{2}\right)=\zeta(z,q)2^{z}\ln 2$$ So I wonder whether the opposite can be done?
Note that the derivative of the Hurwitz Zeta Function can be calculated as follows (similar to calculating the derivative of the Riemann Zeta): \begin{align} \frac{d}{ds} \zeta(s,q) &= \frac{d}{ds} \sum_{n=0}^{\infty} (n+q)^{-s} \\ &= \sum_{n=0}^{\infty} \frac{d}{ds} (n+q)^{-s} \\ & = \sum_{n=0}^{\infty} - \ln(n+q) (n+q)^{-s} \\ &= -\sum_{n=0}^{\infty} \frac{\ln(n+q)}{(n+q)^s} \end{align} Because $\ln(n+q)-\ln(n+q-1) = \ln\left(\frac{n+q}{n+q-1}\right)$, we can telescope: $\ln(n+q)=\ln(q)+\sum_{k=1}^{n}\ln\left(\frac{k+q}{k+q-1}\right)$. Substituting this into the series above and interchanging the order of summation, we actually have: $$\zeta'(s,q) = - \ln(q) \zeta(s,q) + \sum_{n=1}^{\infty} \ln\left(\frac{n+q-1}{n+q}\right) \zeta(s, n+q)$$ I doubt a nicer relation exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1047797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
A basic measure theory question on lebesgue integral Let $\mu$ and $\nu$ are probability measures on a complete separable space $S$. Suppose, for every real-valued continuous function on $S$ we have that $$\int fd\mu = \int fd\nu$$ does it imply $\mu = \nu$. looks like approximating characteristic functions by continuous functions.
Let $U$ be open and consider $$ f_n(x)=(n\cdot d(x,U^c))\wedge1,\quad n\geqslant 1. $$ Then $f_n$ is non-negative and continuous for all $n\geqslant 1$ and $f_n\uparrow \mathbf{1}_U$. Thus by the monotone convergence theorem we have $\mu(U)=\nu(U)$. Since the Borel sigma-algebra is generated by the open sets (which are stable under intersection) we have that $\mu=\nu$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it possible to simulate a floor() function with elementary arithmetic? I'm using a "programming language" that only allows basic operations: addition, subtraction, multiplication, and division. Is it possible to emulate a floor function (i.e. drop the decimals of a number) by using only those operators? I'm only interested in positive numbers. Edit: Here's the documentation describing what's possible (which is only elementary arithmetic)
Programming languages typically use floating-point arithmetic defined by the IEEE 754 standard. Roughly speaking, if the exact result of an operation calculated using mathematical real arithmetic has an absolute value $2^k ≤ x < 2^{k+1}$, then x will be rounded to the nearest integer multiple of $2^{k-52}$. The exception is the case where x is exactly between the two nearest integer multiples of $2^{k-52}$, in which case x will be rounded to the even multiple. If the absolute value x is between $2^{52} ≤ x < 2^{53}$, then x will be rounded to the nearest integer or the nearest even integer if x is exactly between two integers. Any floating-point numbers with an absolute value $x ≥ 2^{52}$ are actually integers. And any floating-point operation where the exact result is an integer with $-2^{53} ≤ x ≤ 2^{53}$ will give the exact result. This gives a simple implementation: If the absolute value of x is $2^{52}$ or greater then x is an integer and floor (x) = x. Otherwise; first add then subtract $2^{52}$ from x if x >= 0, but first subtract then add $2^{52}$ to x if x < 0. This rounds x to one of the two nearest integers. If the result is greater than the original value of x, subtract 1. I think this is quite close to the implementation that is typically used by current compilers.
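A sketch of that recipe for positive inputs, assuming Python (whose floats are IEEE 754 doubles); the `if` comparisons stand in for whatever branching the restricted language provides, as in the description above:

```python
def emulated_floor(x):
    """Floor of a positive float using only arithmetic and comparisons."""
    TWO52 = 4503599627370496.0        # 2**52: from here on, adjacent doubles differ by 1
    if x >= TWO52:
        return x                      # such values are already integers
    r = (x + TWO52) - TWO52           # rounds x to a nearby integer (ties to even)
    if r > x:
        r = r - 1.0                   # we rounded up, so step back down
    return r

print(emulated_floor(2.7), emulated_floor(3.0))   # 2.0 3.0
```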
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 10, "answer_id": 5 }
How to find quotient groups? I am struggling to understand how do quotient groups work. For example - for the quotient of real numbers (without zero) under multiplication and positive real numbers, the correct quotient should be isomorphic to Z2. Why is that? How can I find what is it isomorphic to? How does in this case the quotient group "divide" the group?
Define $$\phi:\Bbb R^*\to\{-1\,,\,1\}\le\Bbb C^*\;,\;\;\phi(r):=\begin{cases}\!\!-1&,\;\;r<0\\{}\\\;\,1&,\;\;r>0\end{cases}$$ Check the above is a group homomorphism, that $\;\ker\phi=\Bbb R^*_+\;$ and apply the first isomorphism theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Easiest proof for $\sum_{d|n}\phi(d)=n$ To prove $\sum_{d|n}\phi(d)=n$. What is the easiest proof for this to tell my first year undergraduate junior. I do not want any Mobius inversion etc only elementry proof. Tthanks!
Write $n = \prod p^{a_p}$. The divisors of $n$ are $$ \prod p^{b_p}, \quad b_p \le a_p, $$ so the sum on the left hand side is $$ \sum_{b_2=0}^{a_2}\dots \sum_{b_P=0}^{a_P} \phi(\prod_{p \text{ prime}, p|n, =2}^P p^{b_p}) = \sum_{b_2=0}^{a_2}\dots \sum_{b_P=0}^{a_P} \prod_{p \text{ prime}, p|n, b_p>0} \left(1-\frac 1p\right) p^{b_p} \\ = \prod_{p \text{ prime}, p|n, =2}^P \left[1 + \sum_{b_p = 1}^{a_p} \left(1-\frac 1p\right) p^{b_p} \right] = \prod_{p \text{ prime}, p|n, =2}^P p^{a_p} = n $$
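A quick empirical check of the identity (a sketch assuming Python with SymPy):

```python
from sympy import totient, divisors

# sum of phi(d) over the divisors d of n should give back n
print(all(sum(totient(d) for d in divisors(n)) == n for n in range(1, 201)))   # True
```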
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Compute the following sum $ \sum_{i=0}^{n} \binom{n}{i}(i+1)^{i-1}(n - i + 1) ^ {n - i - 1}$? I have the sum $$ \sum_{i=0}^{n} \binom{n}{i}\cdot (i+1)^{i-1}\cdot(n - i + 1) ^ {n - i - 1},$$ but I don't know how to compute it. It's not for a homework, it's for a graph theory problem that I try to solve.
Look up Abel's binomial theorem, for example, here: http://en.wikipedia.org/wiki/Abel's_binomial_theorem I find this funny because, immediately preceding this, I answered a question by giving a reference to Abel's summation formula. This means, of course, that somebody will ask about solving the quintic.
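For what it's worth, specializing Abel's identity with both free parameters equal to $1$ suggests the closed form $2(n+2)^{n-1}$ for this particular sum; a quick numerical check, sketched in Python:

```python
from math import comb

def s(n):
    total = 0
    for i in range(n + 1):
        a = 1 if i == 0 else (i + 1) ** (i - 1)           # (i+1)^(i-1); equals 1 at i = 0 since the base is 1
        b = 1 if i == n else (n - i + 1) ** (n - i - 1)   # (n-i+1)^(n-i-1); equals 1 at i = n
        total += comb(n, i) * a * b
    return total

print(all(s(n) == 2 * (n + 2) ** (n - 1) for n in range(1, 13)))   # True
```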
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Identifying $\mathbb{C}^*/\mathbb{R}^{+}$ I define a Surjective Homomorphism $\varphi:\mathbb{C}^*\to S^1$ by $z\mapsto {z\over |z|}$ Ker$\varphi=\{z\in\mathbb{C}^*:\varphi(z)=1\}\Rightarrow\{z: z=|z|\}=\mathbb{R}^{+}$ So, $S^1=\mathbb{C}^*/\mathbb{R}^{+}$ am I right?
Yes, you are right. You can visualize this result by identifying each element of $\mathbb{C}^*/\mathbb{R}^+$ with a ray out from $(0, 0)$ in $\mathbb{C}^*$. Then to multiply two rays, multiply representatives from them and take the ray containing the result. Clearly the product of two rays in this sense only depends on the angles, not the magnitudes, of the representatives so this operation is well-defined. So the chosen representatives might as well be the unique elements of $S^1$ on the two rays. Thus the group of rays is isomorphic to $S^1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
orthogonal subspaces in $\mathbb R^2$ Here is the question: Let W be a subspace of $R^n$ with an orthogonal basis {$w_1$,...,$w_p$}, and let {$v_1$,...,$v_q$} be and orthogonal basis for W$\perp$. Assume that both p >= 1 and q>=1. (1) For the case when n=2, describe geometrically W and W$\perp$. In particular, what are the possible dimensions of these two subspaces? (2) For the case when n=3..... My confusion is because I was under the impression that an orthogonal basis must consist of at least two vectors, because one vector cannot be orthogonal alone. But, if W is a two dimension subspace, then how can W$\perp$ be in $R^2$? If W is 2 dimensions, then the only orthogonal subspace would be a line in $R^3$. It seems to me that if both W and W$\perp$ are an orthogonal basis for a subspace, they must each span 2 vectors and together reside in a higher dimension than $R^2$. I also know that the union set of these two subspaces must span $R^2$ by definition, which tells me that there must be a total of two orthogonal vectors. Can someone give me a hint on what I'm missing here? Thanks
If W has dimension two then it is not a proper subspace of $\mathbb R^2$; it is the whole space. The space perpendicular to W is then the zero subspace $\{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Continuity and adherence For two topological spaces $E$ and $F$, please, how does one prove that $$f(\overline{A})\subset \overline{f(A)},\forall A\Rightarrow f:E\rightarrow F ~\text{is continuous}$$ Thank you
Suppose that $f$ is not continuous. Then there is some closed set $C\subset F$ such that $f^{-1}(C)$ is not closed. Call $A=f^{-1}(C)$. Then there is some $x\in\overline A$ such that $f(x)\notin C$. Since $f(A)\subset C$ and $C$ is closed, $\overline{f(A)}\subset\overline C=C$, so $f(\overline A)\not\subset \overline{f(A)}$. Alternatively, suppose that $f$ is not continuous at some point $x$. Then, there exists a neighbourhood $V$ of $f(x)$ such that $f(U)$ is not contained in $V$ for every neighbourhood $U$ of $x$. Call $A=f^{-1}(F-V)$. Since for every neighbourhood $U$ of $x$, $f(U)$ intersects $F-V$, there is some $y\in U$ such that $f(y)\in F-V$, that is, $y\in A$. That means that $x\in \overline A$, so $f(x)\in f(\overline A)$. But $f(x)\in V$, so $f(x)\notin F-V=\overline{F-V}\supseteq\overline{f(A)}$, and in particular $f(x)\notin\overline{f(A)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rank of a linear transformation $T(x_1,x_2)=(x_1-x_2,5x_1)$ Find the associated matrix and compute the rank and nullity of the linear transformation $T:\mathbb{R}^2\to\mathbb{R}^2$ given by $T(x_1,x_2)=(x_1-x_2,5x_1)$. The associated matrix $A$ is $$ \left[\begin{matrix} T(\mathbf{e_1}) & T(\mathbf{e_2}) \end{matrix}\right]=\left[\begin{matrix} 1 & -1 \\ 5 & 0 \end{matrix}\right] $$ To compute the nullity we must find vectors $\mathbf{x}$ such that $A\mathbf{x}=\mathbf{0}$. So by applying elementary row operations to the augmented matrix $$ \left[\begin{matrix} 1 & -1 & 0 \\ 5 & 0 & 0 \end{matrix}\right] \to\left[\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix}\right] $$ Thus we have $\mathbf{x}=\left[\begin{matrix} 0 \\ 0 \end{matrix}\right]$. And so the nullity of $T$ is $0$. For the rank, is it enough to say that: * *$T(\mathbf{e_1})$ and $T(\mathbf{e_2})$ are linearly independent and so form a basis for the image of $T$. Hence the rank of $T$ is $2$? EDIT: Just realised that we must have rank $T + $ nullity $T = 2$ so I must have a mistake somewhere. EDIT: Corrected
To find the rank, you can use the rank-nullity theorem: $$rank (T)+nullity(T)=2.$$ Since $nullity(T)=0$ as you have found, we have $rank (T)=2$. Or you can argue by saying that the column space of $T$, $C(T)$, is spanned by $T(e_1)=\left[\begin{matrix} 1 \\ 5 \end{matrix}\right]$ and $T(e_2)=\left[\begin{matrix} -1 \\ 0 \end{matrix}\right]$, which implies that $C(T)=\mathbb{R}^2$. Therefore, $rank(T)=\dim C(T)=2$. Or you can look at the associated matrix $A=\left[\begin{matrix} 1 & -1 \\ 5 & 0 \end{matrix}\right]$, which has rref form given by $\left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right]$ as you have done. So the rank of $A$ is the number of nonzero rows (or the number of pivots) in its rref form, which is $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Compute $\bar z - iz^2 = 0$ $\bar z - iz^2 = 0, i = $ complex unit. I've found 2 solutions to this, like this: $x - iy - i(x+iy)^2 = 0$ $i(-x^2+y^2-y)+2xy+x=0$ $2xy + x = 0$ --> $y=-\frac{1}{2}$ $x^2 - y^2 + y = 0$ --> $x_1=\sqrt\frac34$ $x_2=-\sqrt\frac{3}{4}$ Solution 1: $z = \sqrt\frac34 - \frac12i $ Solution 2: $z = -\sqrt\frac34 - \frac12i $ That's great and everything, but Wolfram gives me another solution, which is $z = i$. How do I get that? LINK to wolfram.
$$\bar z=iz^2$$ Equate absolute values: $$|z|=|\bar z|=|iz^2|=|z|^2\Longrightarrow|z|=0,1.$$ One solution is $z=0$; other solutions satisfy $|z|=1$, so $\bar z=\frac{|z|^2}z=\frac1z$ and the equation simplifies to $$\frac1z=iz^2,$$ that is, $$z^3=\frac1i=i^3.$$ One solution obviously is$$z=i.$$ The other cube roots of $i^3$ are $i$ times a cube root of unity, that is, $$i\omega=i(\cos120^\circ+i\sin120^\circ)=i\left(-\frac12+\frac{\sqrt3}2i\right)=-\frac{\sqrt3}2-\frac12i$$ and $$i\omega^2=i\bar\omega=i(\cos240^\circ+i\sin240^\circ)=i\left(-\frac12-\frac{\sqrt3}2i\right)=\frac{\sqrt3}2-\frac12i.$$ Alternatively, $$z^3-i^3=(z-i)(z^2+iz+i^2)=(z-i)(z^2+iz-1)$$ so the last two solutions can be found by solving the quadratic equation $$z^2+iz-1=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1048955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
$(X,d)$ is a complete metric space iff $(X,d')$ is a complete metric space. I recently got asked this question on an exam and I wasn't able to give a solution. If $(X,d)$ is a metric space and we define $$d'(x,y)= \frac {d(x,y)}{1+d(x,y)}$$ (I've already proved that $(X,d')$ is a metric space.) Prove that $(X,d)$ is a complete metric space iff $(X,d')$ is a complete metric space.
You can see that $d' = {d \over 1+ d} \le {d \over 1} = d$, hence a $d$-Cauchy sequence is a $d'$-Cauchy sequence. Note that if $d'(x,y) \neq 1$, then since $d'(x,y) = { d(x,y) \over 1 + d(x,y) }$, then $d(x,y) = { d'(x,y) \over 1 - d'(x,y) }$. Now suppose $x_n$ is a $d'$-Cauchy sequence, and choose $N$ such that if $m,n \ge N$, then $d'(x_n,x_m) \le {1 \over 2}$. Then for $m,n \ge N$, we have $d(x,y) \le 2 d'(x,y)$ and so the sequence is a $d$-Cauchy sequence. If $(X,d)$ is complete, and $x_n$ is $d$-Cauchy, then there is some $x$ such that $d(x,x_n) \to 0$. Since $d' \le d$, it follows that $d'(x,x_n) \to 0$ and hence $(X,d')$ is complete. Similarly, if $(X,d')$ is complete, and $x_n$ is $d'$-Cauchy, then there is some $x$ such that $d'(x,x_n) \to 0$. Choose $N$ such that if $n \ge N$ then $d'(x,x_n) \le {1 \over 2}$, then we have $d(x,x_n) \le 2 d'(x,x_n)$ and so $d(x,x_n) \to 0$ and hence $(X,d)$ is complete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A proof pertaining to the projector operator Let $H_{1}$ be any subspace of a Hilbert space $H$, and let $H_{2} = H_{1}^{\bot}$ be the orthogonal complement of $H_{1}$, so that an arbitrary element $h \in H$ has a unique representation of the form $h = h_{1} + h_{2}$ where $h_{1} \in H_{1}$ and $h_{2} \in H_{2}$. Let $Ph = h_{1}$. $P$ is a continuous linear operator known as the projection operator. I want to prove that the projection operator is completely continuous if and only if the subspace $H_{1}$ is finite dimensional. I have been thinking about this one for awhile now and have no idea. I would really appreciate the help!!!
Let's agree that a completely continuous operator $T:X\rightarrow Y$ between Banach spaces is one that sends weakly convergent sequences to norm convergent ones. For the first part, suppose that $H_1$ is finite dimensional. Consider a weakly convergent sequence $(x_n)_n$ in $H$. The projection $P:H \rightarrow H$ can be corestricted to $H_1$ to give a well defined continuous linear operator $P:H \rightarrow H_1$. Now, $P$ is norm-norm continuous and so it is weak-weak continuous, and $(P(x_n))_n$ is weakly convergent in $H_1$. Because $H_1$ is finite dimensional, norm and weak topologies agree in $H_1$ and then $(P(x_n))_n$ is norm convergent in $H_1$ and so in $H$. This proves that $P$ is completely continuous. For the second part, suppose that $P$ is completely continuous. As $H$ is a Hilbert space it is also reflexive and thus every completely continuous $T:H\rightarrow H$ is compact. In particular, $P$ is compact. As $P$ is a projection with image $H_1$, we have that $P$ agrees with the identity over $H_1$. From this we have that the identity of $H_1$ is a compact operator and so $H_1$ must be finite dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find the equation for a parabola for which you are given two points and the vertex? I was originally given the value $(4,-2)$ as the vertex of a parabola and told that it also includes the value $(3,-5)$. From this point, I deduced that the next point would have the same y-value as the point whose x-value is equidistant from the vertex, so the next point would be $(5,-5)$. I also know that since the parabola's vertex is higher than the two values surrounding it, it is a negative parabola. I am now stuck and do not know how to continue about finding the equation for this limited input/output table.
Vertex form: $g(x) = a(x-h)^2+k$ where $(h,k)$ is your vertex. Here $$g(x) = a(x-4)^2-2 $$ and $$g(3) = -5 \Rightarrow -5 = a(3-4)^2-2 \Rightarrow -3 = a \Rightarrow g(x) = -3(x-4)^2-2.$$
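A quick numerical sanity check of this result (a minimal Python sketch; the function name is just for illustration):

```python
# Check g(x) = -3(x - 4)^2 - 2 against the vertex and the two given/mirror points.
def g(x):
    return -3 * (x - 4) ** 2 - 2

print(g(4), g(3), g(5))   # expect -2, -5, -5
```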
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Trouble understanding how $\int_c^d f \leq 0$ implies $f \leq 0$. We are asked to suppose that we have a function $f:[a,b] \rightarrow \Bbb R$ which is continuous and has the property that $$\int_{c}^{d} f \leq 0 \quad \text{ whenever } a \leq c < d \leq b.$$ We want to prove that $f(x) \leq 0$ for all $x \in [a,b]$. I'm having trouble understanding how this would be true. Isn't it conceivable that I could have a function which has some area under the $x$-axis as well as over it in the region from $c$ to $d$, such that the integral is either negative or $0$? Then this would imply that the function takes on negative values and positive values in the interval from $c$ to $d$. Thanks for any help, I think maybe the problem might be worded subtly but it's driving me crazy.
For each $x\in(a,b)$, there is $\varepsilon >0$ such that $(x-\varepsilon,x+\varepsilon)\subset(a,b)$. Thus by the hypothesis and the Mean Value Theorem for Integrals, we have $$ \int_{x-\varepsilon}^{x+\varepsilon}f(t)\,dt=2\varepsilon f(\xi)\le0$$ for some $\xi\in(x-\varepsilon,x+\varepsilon)$. Thus $f(\xi)\le 0$. Letting $\varepsilon\to0$ forces $\xi\to x$, so by continuity this gives $$f(x)\le 0$$ for $x\in(a,b)$. Since $f(x)$ is continuous on $[a,b]$, we conclude $$f(x)\le 0, \quad x\in[a,b]. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Matlab wrong cube root How can I get MATLAB to calculate $(-1)^{1/3}$ as $-1$? Why is it giving me $0.5000 + 0.8660i$ as solution? I have same problem with $({-1\over0.1690})^{1/3}$ which should be negative.
In the following I'll assume that $n$ is odd, $n>2$, and $x<0$. When asked for $\sqrt[n]{x}$, MATLAB will prefer to give you the principal root in the complex plane, which in this context will be a complex number in the first quadrant. MATLAB does this basically because the principal root is the most convenient one for finding all of the other complex roots. You can dodge this issue entirely by taking $-\sqrt[n]{-x}$. If you want to salvage what you have, then you'll find that the root you want on the negative real axis is $|z| \left ( \frac{z}{|z|} \right )^n$. Basically, this trick is finding a complex number with the same modulus as the root $z$ (since all the roots have the same modulus), but $n$ times the argument. This "undoes" the change in argument from taking the principal root.
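The same behaviour is easy to reproduce outside MATLAB; here is a small Python sketch of both the dodge ($-\sqrt[n]{-x}$) and the salvage trick ($|z|\,(z/|z|)^n$) described above:

```python
n = 3
x = -8.0

z = complex(x) ** (1 / n)              # principal complex root, approx 1 + 1.732j
real_root = -((-x) ** (1 / n))         # dodge the issue entirely: gives -2.0
salvaged = abs(z) * (z / abs(z)) ** n  # same modulus, n times the argument
print(z, real_root, salvaged)          # salvaged is approx -2 (tiny imaginary part)
```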
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
What is $\langle 3u+2v , -u+4v\rangle$ Given $\lVert u\rVert$ , $\lVert v\rVert$ , and $\langle u,v\rangle$? What is $\langle 3u+2v , -u+4v\rangle$ Given $\lVert u\rVert$ , $\lVert v\rVert$ , and $\langle u,v\rangle$? \begin{align} \lVert u\rVert &= 3\\ \lVert v\rVert &= 5\\ \langle u,v\rangle &= -4 \end{align} Well, to solve this I thought I'd break down certain elements. I know that... \begin{gather} \langle u,v\rangle = \mathbf{u}\cdot\mathbf{v} = u_1v_1 + u_2v_2 + u_3v_3 + \cdots + u_nv_n = -4\\ \langle u+v , u+v\rangle = \langle u,u\rangle + \langle u,v\rangle + \langle v,u\rangle + \langle v,v\rangle \end{gather} But I don't know what $\langle 3u+2v , -u+4v\rangle$ actually is... Is it $$ \langle 3u,-u\rangle + \langle 3u,4v\rangle + \langle 2v,-u\rangle + \langle 2v,4v\rangle \mbox{?} $$
$\langle 3u+2v,-u+4v\rangle =\langle 3u,-u\rangle +\langle 3u,4v\rangle +\langle 2v,-u\rangle +\langle 2v,4v\rangle$ \begin{align} &=-3\langle u,u\rangle +12\langle u,v\rangle -2\langle v,u\rangle +8\langle v,v\rangle\\ &=-3\cdot 9 + 12\cdot (-4) - 2\cdot (-4) + 8\cdot 25 = 133 \end{align} since $\lVert u\rVert = 3$, $\lVert v\rVert = 5$, $\langle u,v\rangle = \langle v,u\rangle = -4$ and $\lVert u\rVert^2 =\langle u,u\rangle$
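To double-check the arithmetic, one can realize the given data with concrete vectors (this is just one possible choice) and evaluate both sides numerically:

```python
import numpy as np

# One concrete pair with ||u|| = 3, ||v|| = 5, <u, v> = -4.
u = np.array([3.0, 0.0])
v = np.array([-4.0 / 3.0, np.sqrt(25.0 - 16.0 / 9.0)])

print(np.linalg.norm(u), np.linalg.norm(v), u @ v)  # 3.0, 5.0, -4.0
print((3 * u + 2 * v) @ (-u + 4 * v))               # expect 133.0
```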
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
proving Riemann-Lebesgue lemma I have looked at proofs of the Riemann-Lebesgue lemma on the internet; all of these proofs use the technique of Riemann integration and making step functions. E.g.(https://proofwiki.org/wiki/Riemann-Lebesgue_Lemma) But if we know Parseval's identity (or Plancherel's identity for Fourier transform) then, can I use $$ \sum_{-\infty}^\infty |a_n|^2 = \|f\|^2 = M<\infty $$ then the tails of the sum is very small, $a_n \rightarrow 0$ as $n \rightarrow \infty$?
There's a really nice proof that "avoids step functions". With a $u$-substitution you can conclude that $$ \hat{f}(\xi) = \int_{\mathbb{R}} f(x)e^{ix\xi} dx \\ = \int_{\mathbb{R}} f(x + \frac{\pi}{\xi})e^{ix\xi}e^{i \pi} dx \\ = - \int_{\mathbb{R}} f(x + \frac{\pi}{\xi})e^{ix\xi} \ dx $$ By averaging these two representations of the Fourier transform, we obtain $$\hat{f}(\xi) = \frac{1}{2} \int_{\mathbb{R}} (f(x) - f(x + \pi/\xi) ) e^{ix\xi} \ dx $$ Since $|e^{ix\xi}| = 1$, we can then conclude that $$|\hat{f}(\xi)| \leq \frac{1}{2} \int_{\mathbb{R}} |f(x) - f(x+\pi/\xi)| dx $$ Letting $|\xi| \to \infty$ (so that $\pi/\xi \to 0$) finishes the proof, assuming we can show that $\int_{\mathbb{R}} |f(x)-f(x+h)|\,dx \to 0$ as $h \to 0$ (continuity of translation in $L^1$). We can, but the proofs of this fact (at least the ones I know) still require that step-function type argument, which is why I put "avoid step functions" in quotes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Examples of categories which naturally include End(O) as object I want examples of categories $\textbf C$ which naturally include $End_{\textbf C}(O)$ as object for objects $O$ in the category. The set of all endomorphims is always a monoid under the composition of morphisms, but beside from that. Counterexamples is also interesting. I'm not asking for examples that has to be of interest from category theoretical point of view, but just natural examples where the structure of $O$ transfers to $End_{\textbf C}(O)$. The reason for my question is that I imagine some kind of correspondence between those categories and categories with tensor products, since there is a diagram that recalls about the universal property of tensor products: $\require{AMScd}$ \begin{CD} X\times Y @>\mu>> End_\textbf C(X\times Y)\\ @V \forall h V V\# @VV \exists\varphi V\\ Z @= Z \end{CD} $\mu(x,y)(a,b)=(x,y)$, $\varphi(f)=h\psi_{a,b}$ and $\psi_{a,b}(f)=f(a,b)$. $\mu$ and $h$ are bi-morphisms and $\varphi$ is a morphism. Examples: * *Set, $End(X)$ is trivially a set. *$R$-Mod, $End(X)$ is a $R$-module: $(\alpha+\beta)(x)=\alpha(x)+\beta(x)$; $(r\cdot\alpha)(x)=r\cdot\alpha(x)$ for commutative rings. For non commutative rings $End(X)$ is only a $\mathbb Z$-module. *Grp, no obvious group structure on $End(X)$. *Top, $End(X)$ is a topological space with the topology of pointwise convergence.
$End(O)$ is a set (or a monoid), it belongs to the category of sets and nothing else. Perhaps you are interested in the notion of a cartesian closed category. There one has an internal hom object $\underline{\hom}(x,y)$ for all objects $x,y$, in particular $\underline{\mathrm{End}}(x):=\underline{\hom}(x,x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many ways can 25 red balls be put into 3 distinguishable boxes if no box is to contain more than 15 balls? I'm reading Martin's: Counting: The Art of Enumerative Combinatorics. How many ways can 25 red balls be put into 3 distinguishable boxes if no box is to contain more than 15 balls? I understand that it's the solutions to the equation $x+y+z=25$ minus the solutions to the equation $x+y+z=25-16$. The answer in the book is: $${3+25-1\choose 25}-{3 \choose 1}{3+9-1 \choose 9}$$ * *I understand why ${3+25-1\choose 25}$ and ${3+9-1 \choose 9} $ are there, but I don't understand why ${3 \choose 1}$ is there. *I don't understand why $x+y+z=25-16$ and not $x+y+z=25-15$.
The number of solutions to $x + y + z = 25 - 15$ is also the number of ways to put the balls in the boxes if there must be at least $15$ balls in the first box. But some of those solutions correspond to arrangements with only $15$ balls in the first box, which we do not want to exclude from our final answer. We want to subtract only the arrangements that have at least $16$ balls in one of the boxes, so we count the number of solutions of $x + y + z = 25 - 16$. The $\binom 31$ comes into the formula because the box that contains $16$ (or more) balls could be the first box, the second box, or the third box. (There can only be one such box, since $2\cdot16 > 25$.) Each of those cases includes a number of arrangements equal to the number of solutions of $x+y+z=25−16$.
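A brute-force count in Python confirms the inclusion-exclusion formula (both give $186$):

```python
from math import comb

# Count triples (x, y, z) with x + y + z = 25 and 0 <= x, y, z <= 15.
brute = sum(1 for x in range(16) for y in range(16) if 0 <= 25 - x - y <= 15)
formula = comb(3 + 25 - 1, 25) - comb(3, 1) * comb(3 + 9 - 1, 9)
print(brute, formula)   # 186 186
```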
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do you integrate the function $\frac 1{x^2 - a^2}$, where a is a constant? My problem with integrating by parts is that it always ends up being recursive, as I feel like I'm going in loops. Would appreciate it if someone can help me understand the process. $$\int \dfrac 1{x^2-a^2}dx$$ I know the answer is supposed to be $\dfrac 1{2a}\ln\left|\dfrac {x-a}{x+a}\right|+C$, but I can't figure out how to derive it.
The integrand factors as $$\frac{1}{(x - a) (x + a)},$$ so we can decompose it via the Method of Partial Fractions--- $$\frac{1}{(x - a) (x + a)} = \frac{K}{x + a} + \frac{L}{x - a}$$ ---and cross-multiply and collect like terms to solve for $K$ and $L$. Then, we can integrate the terms separately using the elementary formula $$\int \frac{dx}{x + b} = \log|x + b| + C$$ (and then applying the usual identity for the sum of logarithms). Solving gives $$K = - \frac{1}{2a} \qquad \text{and} \qquad L = \frac{1}{2a}.$$ Alternately, one can substitute $$x = a \cosh t,$$ which gives an especially nice integral, or, as mjh points out in the comments, $$x = a \sec \theta.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1049944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is this c the same as that c? Are the highlighted $c$'s the same or should it be $c_1$ and $c_2$.
I am not very good at analysis, but the answer to your question is clearly "the $c$'s are the same", by plain elementary logic. I'll try to explain it to you: call $\varphi(\alpha)$ the "sentence" $$\forall x \ \left[ \overline{\lim}_{r \to 0} \mu(B(x,r))/r^s < \alpha \right]$$ and call $\psi(\beta)$ the "sentence" $$\mathcal H^s(F) \geq \mu(F)/\beta.$$ Then your theorem can be written as $$\varphi(c) \implies \psi(c).$$ If you think about it, you can see that it would make no sense to write something like $$\varphi(c_1) \implies \psi(c_2).$$ I'll use an easier example for you to understand better: suppose $\alpha,\beta \in \mathbb N$ and call $\varphi(\alpha)$ the "sentence" $$x \leq \alpha.$$ Now call $\psi(\beta)$ the "sentence" $$x \leq 2\beta.$$ Then it is of course true that $$\varphi(n) \implies \psi(n),$$ since this can be written in natural language as If $x \leq n$, then $x \leq 2n$. But what sense would it make to say $\varphi(n_1) \implies \psi(n_2)$? That is, what sense would it make to state If $x \leq n_1$, then $x \leq 2n_2$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to efficiently determine if two propositions are equivalent? Given any two arbitrary propositional formulas only using $\land, \lor, \lnot$, e.g., $\lnot(A \land (B \lor \lnot B) \land C)$ and $\lnot C \lor \lnot A$, how can I (or a computer) efficiently determine if they are equivalent? The formulas may contain many propositions, so the corresponding truth tables may be exponentially large. Is there a fast algorithm to convert formulas to some normal form so that any two equivalent formulas will reduce to the same normal form?
Efficient : NO. See Boolean satisfiability problem, abbreviated as SATISFIABILITY or SAT : There is no known algorithm that efficiently solves SAT, and it is generally believed that no such algorithm exists. To ask for two propositional formulae $\mathcal A, \mathcal B$ are equivalent can be reduced to asking if : $\mathcal A \vDash \mathcal B$ and $\mathcal B \vDash \mathcal A$. But $\mathcal A \vDash \mathcal B$ iff $\vDash \mathcal A \rightarrow \mathcal B$, i.e. $\mathcal A \rightarrow \mathcal B$ is a tautology and this in turn is equivalent to $\mathcal A \land \lnot \mathcal B$ being unsatisfiable. In other words, if one of $\mathcal A \land \lnot \mathcal B$ and $\mathcal B \land \lnot \mathcal A$ is satisfiable, then $\mathcal A, \mathcal B$ are not equivalent.
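For small formulas one can of course still check equivalence by brute force over the (exponentially large) truth table; a minimal Python sketch, using the two formulas from the question:

```python
from itertools import product

def equivalent(f, g, n_vars):
    # Exhaustive truth-table comparison: exponential in n_vars.
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n_vars))

f = lambda A, B, C: not (A and (B or not B) and C)   # not(A and (B or not B) and C)
g = lambda A, B, C: (not C) or (not A)               # (not C) or (not A)
print(equivalent(f, g, 3))   # True
```

In practice one would instead hand $\mathcal A \land \lnot \mathcal B$ and $\mathcal B \land \lnot \mathcal A$ to a SAT solver, which is often fast in real instances even though the worst case is believed to be intractable.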
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
In a binary vector $ x \in \{0,1\}^{k}$ what does the $^{k}$ mean? In a binary vector $ x \in \{0,1\}^{k}$ what does the $^{k}$ mean? I understand that $\in$ means 'is a possible outcome' or 'in' so x can be 0 or 1, but I'm not sure what the $^{k}$ means.
Just as points in $\mathbb R^k$ consist of ordered tuples of $k$ real numbers, so points in $\left\{0,1\right\}^k$ consist of ordered tuples of $k$ "bits". The typical point $x = (x_1,x_2,\ldots,x_k)$, where each $x_i\in\left\{0,1\right\}$ (that is, each $x_i$ is $0$ or $1$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which $n\in\mathbb{Z}^+$ is there a $\sigma\in S_{14}$ such that $|\sigma|=n$? For which $n\in\mathbb{Z}^+$ is there a $\sigma\in S_{14}$ (the group of permutations on $\{1,2,\dots,14\}$) such that $|\sigma|=n$ (where $|\sigma|$ is the order of $\sigma$)? I know you could just enumerate all elements of $S_{14}$, but that would take a very long time. Are there any more elegant approaches?
The order of a cycle is the length of the cycle, and every permutation is a product of commuting cycles. Thus the possible orders are all least common multiples of sets of positive integers whose sums are less than or equal to 14. By hand I found 29 distinct combinations yielding 13, 11, 22, 33, 9, 18, 36, 45, 8, 24, 40, 7, 14, 21, 28, 35, 42, 84, 5, 10, 15, 20, 30, 60, 4, 12, 3, 6, 2. The identity is also of order 1, so that makes 30 different orders. EDIT: As Derek pointed out, I missed one. The culprit is cycle type $(7,5,2)$, which is of order 70, bringing the total to 31 distinct orders.
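The count is easy to confirm by machine: enumerate the partitions of $14$ (the cycle types) and collect the least common multiple of the parts of each one. A small Python sketch (it assumes Python 3.9+ so that `math.lcm` accepts several arguments):

```python
from math import lcm

def partitions(n, max_part=None):
    # Generate all partitions of n into parts of size at most max_part.
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

orders = {lcm(*p) for p in partitions(14)}
print(len(orders), sorted(orders))   # 31 possible orders, including 70 and 84
```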
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
what can be said about the solution of the following differential equation $y(x)=f(x)+\log(1+y'(x))$ with $\lim_{x \to 0}f(x)=\infty,f(\infty)=0$ and $f(x)$ is contiuoues and decreasing. In particular, can we prove that the solution $y(x)$ is convex in general? from $e^{y(x)-f(x)}=1+y'(x)$ we get $1+y'(x) > 0$ and $y'(x) > -1$. Then it follow that $y(x) \leq f(x)$ $1+y'(x) = e^{y(x) - f(x)} \leq 1$ so also we have $y'(x) \leq 0$, so generally: $$-1 < y'(x) \leq 0$$ Another observation: Take the derivative: $y'(x)=f'(x)+\frac{y''(x)}{1+y'(x)}$ so: $$(y'(x)-f'(x))(1+y'(x))= y''(x)$$ and if $y'(x)-f'(x) \geq 0$ the function $y(x)$ is convex. When $f'(x) < -1$ this is always true, but what happend for $-1 \leq f'(x) \leq 0$?
If you put $z(x)=x+y(x)$ and $g(x)=x+f(x)$, the equation becomes $$z'(x)=e^{z(x)-g(x)},$$ which is separable, with general solution $$z=-\ln\bigl(C-h(x)\bigr),\qquad h(x)=\int_0^x e^{-g(t)}\,dt.$$ Then $$z'(x)=\frac{e^{-g(x)}}{C-h(x)},\quad z''(x)=\frac{e^{-2g(x)}-(C-h(x))e^{-g(x)}g'(x)}{(C-h(x))^2},$$ so the sign of $z''(x)$ is that of $e^{-g(x)}+\bigl(h(x)-C\bigr)g'(x)$. It follows from the assumptions that $h(\infty)$ exists and is finite, and you need $C\ge h(\infty)$ for the solution to exist for all $x$, and so the second term above is negative, if $g'(x)>0$. In typical cases $g'(x)\sim 1$ for large $x$, so $z$ (and hence $y$) becomes concave for large $x$, at least if $C>h(\infty)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find a point on the line: $y=2x-5$ that is the closest to $P(1,2)$ This is our line: $f(x)=2x-5$ I have to find a point on this line that is the closest to the point $P(1,2)$. How do I go on about solving this? Should I use derivative and distance from the point to the line? I've done this so far: $d\to$ distance between $P(1,2)$ and $f(x)$ $d= \sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ But how do I get the $x_2,y_2 $ from $f(x)$?
Actually, there is a simpler way. Since $f$ is a line, the point $P$ and the point on $f$ closest to $P$ will define a line perpendicular to $f$. That means that you can get the equation for the line with slope $m=-\frac{1}{2}$ (the negative reciprocal of the slope of $f$) through the point $(1,2)$, and then find the intersection of that line and $f$.
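Carrying that out numerically, and checking it against a crude search along the line:

```python
import numpy as np

# Perpendicular through P = (1, 2): y - 2 = -(x - 1)/2.  Intersect with y = 2x - 5:
# 2x - 5 = -(x - 1)/2 + 2  =>  2.5 x = 7.5  =>  x = 3, y = 1.
x = 7.5 / 2.5
print(x, 2 * x - 5)                     # 3.0 1.0

P = np.array([1.0, 2.0])
t = np.linspace(-10, 10, 200001)        # sample many points on the line
pts = np.stack([t, 2 * t - 5], axis=1)
print(pts[np.argmin(np.linalg.norm(pts - P, axis=1))])   # approx [3. 1.]
```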
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that $\frac{\binom{p}{k}}{p}$ is integral for $k\in \{1,..,p-1\}$ with $p$ a prime number with a particular method I started by induction on $k$ For $k=1$ then : $1\in \mathbb{N}$ For $k=2$ then : $\frac{(p-1)}{2!} \in \mathbb{N}$ , indeed for all $p>{2}$, $p-1$ is even. (We still have $k<p$ it's important). Suppose that $\frac{(p-1)...(p-k+1)}{k!} \in \mathbb{N}$ for a specific $k \in \{1,..,p-1\}$ Now try to prove that : $\frac{(p-1)...(p-k)}{(k+1)!} \in \mathbb{N}$. By the recursive formula we have directly : $\frac{\binom{p}{k+1}}{p} = \frac{[\binom{p+1}{k+1} - \binom{p}{k}]}{p}$ $\Leftrightarrow$ $\frac{(p-1)...(p-k)}{(k+1)!} = \frac{(p+1)(p-1)...(p-k+1)}{(k+1)!} - \frac{(p-1)...(p-k+1)}{k!}$ By hypothesis we know that $\frac{(p-1)...(p-k+1)}{k!} \in \mathbb{N}$ Now, the key is to see that a product of $k$ consecutive terms is divided by $k!$ (a little induction can solve this). Then the induction is complete.
By definition, $$\binom{p}{k} = \frac{p!}{k!(p-k)!}$$ Since $p$ is prime, it is relatively prime to all smaller positive integers, and therefore for $k<p$ it cannot be a factor of $k!$. Now if further $k>0$, then $p-k<p$, and thus $p$ is not a factor of $k!(p-k)!$. Since by definition $p$ is a factor of $p!$, and $p$ is prime, it follows that $p$ is a factor of $\binom{p}{k}$, thus $\binom{p}{k}/p\in\mathbb N$.
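A quick empirical check of the statement for a handful of primes:

```python
from math import comb

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
for p in primes:
    assert all(comb(p, k) % p == 0 for k in range(1, p))
print("binom(p, k) is divisible by p for every listed prime p and 0 < k < p")
```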
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof involving chords of a circle In a circumference with center $O$, three chords $\overline{AB},\overline{AD}$ and $\overline{CB}$ such that the last two intersect in $E$. Show that $AE·AD+BE·BC=AB^2 $. Added: $O\in\overline{AB}$. Hi, I have been trying to solve this problem with the power of a point with respect to a circumference and with pythagoras, but it seems that I'm going nowhere: $2(AB)^2=AD^2+BD^2+BC^2+AC^2$ I hope you could give me a hint. Thanks Edit: Taking into account what Blue mentioned:
You were on a good track. You found that $$2 \cdot AB^2 = AD^2 + BD^2 + BC^2 + AC^2.$$ But you want something with $AD \cdot AE,$ not $AD^2,$ you don't want to see $BD^2,$ and so forth. But notice that $AC^2 = AE^2 - CE^2$ and $BD^2 = BE^2 - DE^2,$ and also $AD = AE + DE$ and $BC = BE + CE.$ So this suggests you could put everything on the right-hand side in terms of $AE,$ $BE,$ $CE,$ and $DE.$ So the hint is, do that, then see if you can put the pieces back together to make $2 \cdot AD \cdot AE + 2 \cdot BC \cdot BE.$ If you get that far, you can easily complete the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that $1 + 4 + 7 + · · · + 3n − 2 = \frac {n(3n − 1)}{2}$ Prove that $$1 + 4 + 7 + · · · + 3n − 2 = \frac{n(3n − 1)} 2$$ for all positive integers $n$. Proof: $$1+4+7+\ldots +3(k+1)-2= \frac{(k + 1)[3(k+1)+1]}2$$ $$\frac{(k + 1)[3(k+1)+1]}2 + 3(k+1)-2$$ Along my proof I am stuck at the above section where it would be shown that: $\dfrac{(k + 1)[3(k+1)+1]}2 + 3(k+1)-2$ is equivalent to $\dfrac{(k + 1)[3(k+1)+1]}2$ Any assistance would be appreciated.
Here is @Shooter's answer shown a different way. Let's take the example of n = 8: 1 + 4 + 7 + 10 + 13 + 16 + 19 + 22 = 92 Let's rearrange this and group: (1 + 22) + (4 + 19) + (7 + 16) + (10 + 13) = 92 Now let's add the groups and look for a pattern: 23 + 23 + 23 + 23 = 92 That's it. The n = 8 example is just what happens when you add 23 four times. What is 23? It's 3n - 1. What is four? It's n / 2. That makes the formula (3n - 1)(n / 2). This is how I would derive this when n is even. It's a little more work when n is odd. Here's another simple solution that only uses this formula: 1 + 2 + 3 + ... + n = n * (n + 1) / 2 We'll call this f(n) for now. I'll decompose the original series: (a) 1 + 2 + 3 + 4 ... (b) + 0 + 1 + 2 + 3 ... (c) + 0 + 1 + 2 + 3 ... ------------------- (d) = 1 + 4 + 7 + 10 ... It should be clear from the above how to make a closed form equation: (a) = f(n) (b) = f(n - 1) (c) = f(n - 1) Therefore (d) = f(n) + 2 * f(n - 1) The formula (d) can be rewritten to what you posted in your question
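And a one-line check of the closed form for the first few values of $n$:

```python
for n in range(1, 21):
    assert sum(3 * k - 2 for k in range(1, n + 1)) == n * (3 * n - 1) // 2
print("1 + 4 + 7 + ... + (3n - 2) = n(3n - 1)/2 verified for n = 1, ..., 20")
```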
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 11, "answer_id": 3 }
Check: Determine the number of Sylow $2$-subgroups and Sylow $3$-subgroups that $G$ can have. Let $G$ be a group of order $48$. By the $1$st Sylow theorem $G$ has a Sylow $2$-subgroup and a Sylow $3$-subgroup. Suppose none of these are normal. Determine the number of Sylow $2$-subgroups and Sylow $3$-subgroups that $G$ can have. Justify. Let $G$ be a group of order $48=3 \cdot 2^4$. The number of Sylow $2$-subgroups $n_2$ divides $3$ and has the form $n_2=2k+1$ by the Sylow Theorems. Therefore $n_2=1$ or $3$. However, since we want non-normal subgroups, $n_2=3$. The number of Sylow $3$-subgroups $n_3$ divides $16$ and has the form $n_3=3k+1$ by the Sylow Theorems. Therefore $n_3=1,4$ or $16$. However, since we want non-normal subgroups, $n_3=4$ or $16$.
Assume that $n_3 = 16$. If $P_i, P_j \in Syl_3(G)$ and $P_i \neq P_j$, then $P_i \cap P_j = 1$. Each Sylow $3$-subgroup has $3$ elements, the identity and two others. The intersections only contain the identity, and hence from each Sylow $3$-subgroup you can count $2$ new elements. Now $n_2 = 3$. If $Q_i, Q_j \in Syl_2(G)$ and $Q_i \neq Q_j$, then $$ |Q_iQ_j| = \frac{|Q_i||Q_j|}{|Q_i \cap Q_j|} = \frac{16 \cdot 16}{|Q_i \cap Q_j|} \leq 48. $$ Now $|Q_i \cap Q_j| \ | \ |Q_i| = 16$, thus $|Q_i \cap Q_j| = 8$ or $16$. As $Q_i \neq Q_j$, then $Q_i \cap Q_j \neq Q_i$ and you can deduce that $|Q_i \cap Q_j| = 8$. In the worst case (least amount of distinct elements), you get $3 \cdot 8$ new elements from the $3$ Sylow $2$-subgroups. In total you get $16 \cdot 2 + 3 \cdot 8 = 56 > 48$ elements, which is a contradiction. Thus $n_3 = 16$ and $n_2 = 3$ is not possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the probability of a selecting at least 1 of an element. If there are 18 red and 2 blue marbles what is the probability of selecting 10 marbles where there is either 1 or 2 blue marbles in selected set. Also, it seems intuitive that the probability should be twice that of selecting a single blue marble in a set of 10 from 19 red and 1 blue, but I'm not sure.
Another approach is through the hypergeometric distribution. * *The probability that there is exactly one blue marble in the selected set is: $$P(A_1)=\dfrac{\color{blue}{\dbinom 2 1}\cdot \color{red}{\dbinom {18} {10-1}}}{\dbinom {20}{ 10} }$$ *The probability that there are exactly 2 blue marbles in the selected set is: $$P(A_2)=\dfrac{\color{blue}{\dbinom 2 2}\cdot \color{red}{\dbinom {18} {10-2}}}{\dbinom {20}{ 10} }$$ So, the probability you are looking for is $P(A_1)+P(A_2)\approx 0.763 $
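The numerical value is easy to reproduce:

```python
from math import comb

total = comb(20, 10)
p1 = comb(2, 1) * comb(18, 9) / total   # exactly one blue
p2 = comb(2, 2) * comb(18, 8) / total   # exactly two blue
print(p1 + p2)                          # approx 0.7632
```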
{ "language": "en", "url": "https://math.stackexchange.com/questions/1050957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is the symmetric definition of the derivative equivalent? Is the symmetric definition of the derivative (below) equivalent to the usual one? \begin{equation} \lim_{h\to0}\frac{f(x+h)-f(x-h)}{2h} \end{equation} I've seen it used before in my computational physics class. I assumed it was equivalent but it seems like it wouldn't matter if there were a hole at $x=h$ in the symmetric derivative, whereas with the usual one it wouldn't be defined. Which is kinda interesting... If they're not equivalent - is there a good reason as to why we should use the common one? Or is the symmetric one actually more useful in some sense because it "doesn't care" about holes?
As I noted in a comment to the other answer, Milly's computation is incorrect. I am posting this answer to rectify the situation. The symmetric derivative is defined to be \begin{align} \lim_{h\to 0} \frac{f(x+h)-f(x-h)}{2h}. \end{align} If $f$ happens to be differentiable, then the symmetric derivative reduces to the usual derivative: \begin{align} \lim_{h\to 0} \frac{f(x+h)-f(x-h)}{2h} &= \lim_{h\to 0} \frac{f(x+h)-f(x)+f(x)-f(x-h)}{2h} && (\text{add zero}) \\ &= \frac{1}{2} \left( \lim_{h\to 0} \frac{f(x+h)-f(x)}{h} + \lim_{h\to 0} \frac{f(x) - f(x-h)}{h}\right) \\ &= \frac{1}{2} \left( \lim_{h\to 0} \frac{f(x+h)-f(x)}{h} - \lim_{-h\to 0} \frac{f(x+h) - f(x)}{-h}\right) && (\ast)\\ &= \frac{1}{2} \left( f'(x) + f'(x) \right) && (\text{since $f'(x)$ exists}) \\ &= f'(x). \end{align} I'm being a little tricky at $(\ast)$. Notice that we can think about the difference quotient as being the slope of a secant line through the points $(x,f(x))$ and $(x+h,f(x+h))$. This slope is given by \begin{equation*} \frac{f(x+h)-f(x)}{(x+h)-x}. \end{equation*} Multiplying through by $-1$, this becomes \begin{equation*} -\frac{f(x+h)-f(x)}{(x+h)-x} = \frac{f(x+h)-f(x)}{x-(x+h)} = \frac{f(x+h)-f(x)}{-h} \end{equation*} Taking $h$ to zero (which is the same as taking $-h$ to zero) on the left gives $-f'(x)$, justifying $(\ast)$. Again, this proves the key statement: Proposition: If $f$ is differentiable at $x$ (in the usual sense), then $f$ is symmetric differentiable at $x$, and the symmetric derivative agrees with the usual derivative. The converse does not hold. The usual example is the absolute value function which is not differentiable at zero, but which is symmetric differentiable at zero (with derivative zero): \begin{equation*} \lim_{h\to 0} \frac{|0+h|-|0-h|}{2h} = \lim_{h\to 0} \frac{|h|-|h|}{2h} = \lim_{h\to 0} \frac{0}{2h} = 0. \end{equation*} In particular, this demonstrates that the symmetric derivative is a generalization of the usual derivative. It cannot be equivalent, because it can meaningfully define the derivative of a larger class of functions. As to the why? of the symmetric derivative (and why we don't use it instead of the usual derivative), I think that adequate answers can be found attached to this question and in comments elsewhere on MSE.
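A small numerical illustration of the last point, using $f(x)=|x|$ at $x=0$: the symmetric difference quotient is identically zero, while the one-sided quotients disagree, so the ordinary derivative cannot exist there.

```python
f = abs
for h in [0.1, 0.01, 0.001]:
    sym = (f(0 + h) - f(0 - h)) / (2 * h)   # symmetric quotient: always  0.0
    fwd = (f(0 + h) - f(0)) / h             # forward quotient:   always  1.0
    bwd = (f(0) - f(0 - h)) / h             # backward quotient:  always -1.0
    print(h, sym, fwd, bwd)
```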
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Parametrizing intersection I'm working on a question that require parametrizing some curves: C is the curve of intersection of the hyperbolic paraboloid $z = y^2 − x^2$ and the cylinder $x^2 + y^2 = 1$ oriented counterclockwise as viewed from above. Let $x$ and $y$ be in terms of $t$ where $0 ≤ t ≤ 2π$. I thought it was quite obvious that the parametrization is $\cos{t}, \sin{t}, \sin^2{t} - \cos^2{t}$, but no, apparently it's wrong: What am I missing here?
Your answer is correct. Try entering the answer $\bigl(\cos t, \sin t, -\cos 2t \bigr)$, which is just your answer with the identity $\sin^2 t - \cos^2 t = -\cos 2t$ applied. This form is equivalent to yours, but maybe the computer system is too dumb to realize this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the exponential map for $\text{Sp}(2n,{\mathbb R})$ surjective? For $\mathfrak{g} := {\mathfrak s}{\mathfrak p}(2n,\mathbb{R})$ and $G = \text{Sp}(2n,{\mathbb R})$, is the exponential map \begin{equation} \text{exp} : \mathfrak{g} \to G \end{equation} surjective? If not then is it possible to access all of the symplectic group $\text{Sp}(2n,\mathbb{R})$ via a product, i.e. $e^Xe^Y$, such that $X,Y \in {\mathfrak s}{\mathfrak p}(2n,\mathbb{R})$? If so what is the minimum number of products needed?
In his comment, user148177 already explained that the exponential function of $\text{Sp}_{2n}({\mathbb R})$ is not surjective. Two factors, however, suffice: Theorem (Polar decomposition) Any $g\in\text{Sp}_{2n}({\mathbb R})$ can uniquely be written as a product $g = h\cdot\text{exp}(X)$ with $h\in SO_{2n}({\mathbb R})\cap\text{Sp}_{2n}({\mathbb R})$ and $X\in\text{Sym}_{2n}({\mathbb R})\cap{{\mathfrak s}{\mathfrak p}}_{2n}({\mathbb R})$. Reference: Hilgert-Neeb: The structure and geometry of Lie groups, Proposition 4.3.3. As a compact Lie group $SO_{2n}({\mathbb R})\cap\text{Sp}_{2n}({\mathbb R})$ has surjective exponential, so $\text{Sp}_{2n}({\mathbb R})=\text{exp}({\mathfrak s}{\mathfrak p}_{2n}({\mathbb R}))^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many ambiguous dates exist? How many ambiguous dates are there in a year? An ambiguous date is one like 8/3/2007 which could either mean the 8th of March or the 3rd of August. Is it right to say that 1/1/2007 must mean the first of January so there are 11 ambiguous dates for each month (1/2/2007, 1/3/2007, ..., 1/12/2007)? So the answer is $11\times12=132$?
The important question here, is when is a date ambiguous? A date is ambiguous if the day (of the month) and the month (of the year) can be confused for one another; when the day can be a month as well. Since there are 12 months in a year, the day can only be confused with the month if the day is in $[1, 12]$, so there are twelve days each month where we don't know which is which. That makes for a total of $12 \times 12 = 144$ days. But of those twelve days per month, there will be one day where day and month are equal. So while we don't know which is which, it doesn't matter. That makes for a total of $12 \times 1 = 12$ days that aren't ambiguous after all. So your final answer is $12 \times 12 - 12 = 132$ days.
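The count can be confirmed by listing the pairs explicitly:

```python
# A date d/m is ambiguous when both d and m are in 1..12 and d != m.
ambiguous = [(d, m) for m in range(1, 13) for d in range(1, 13) if d != m]
print(len(ambiguous))   # 132
```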
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to get ellipse cross-section of an ellipsoid I'm trying to get the major and minor radius of an ellipse which represents the cross-section of a given ellipsoid. This is particularly of interest in the field of RF propagation in terms of Fresnel zones and how they interact with a ground surface. In my particular use, I am taking an ellipsoid which represents the RF Fresnel zone and attempting to determine the cross-sectional area which comes in contact with the Earth's surface. This link provides an example of what I'm describing. The cross-section of the ellipsoid will be an ellipse. And the major radius is determined by knowing the where the Earth's surface begins and ends it's intersection with the ellipsoid - I can do this part just fine. But I am struggling to determine the minor radius at that intersection. I am looking for how to determine the minor radius of the intersection of the ellipsoid so that I can draw the intersection onto a map in this fashion:
You have sections with planes perpendicular to a principal axis of the ellipsoid. These ellipses are all similar. If $r$ is the radius of the Fresnel ellipsoid and $h$ is the Earth's height (or bulge) at the midpoint between the transmitter and the receiver, then the cross section perpendicular to the radius $r$ at distance $r-h$ from the center of the ellipsoid is an ellipse with semiaxes $r'$ and $d'$ where $$\frac{r'}{r} = \frac{d'}{d} = \sqrt{1 - \left(\frac{r-h}{r}\right)^2}$$ This all follows from the fact that the (Fresnel) ellipsoid has semiaxes $d$, $r$, $r$ and so for every point on the ellipsoid we have $$\frac{x^2}{d^2} + \frac{y^2}{r^2} + \frac{z^2}{r^2} =1$$ At distance $r-h$ from the center, the coordinate $z$ is constant and equal to $r-h$ (perhaps with a $-$ sign); here the coordinate system is centered at the center of the ellipsoid. ${\bf Added:}$ Now for the slanted sections of the ellipsoid. Recall the system of coordinates is centered at the center of the ellipsoid and the $z$-axis is coming out of the earth (maybe not perpendicular, depending on the elevations of the transmitter and receiver). Let $E$, $E'$ be the ends of the slice that the Earth makes on the ellipsoid. So $EE'$ will be one of the axes of the slice. The other axis, coming out of the paper, passes through the middle $M$ of the segment $EE'$. Let $M$ have the coordinate $(x_M,0,z_M)$, the average of the coordinates of $E$, $E'$ ($x_M$, $z_M$ being the signed horizontal and vertical offsets of $M$). If $w$ is the half-axis of the section then the points of coordinates $(x_M, \pm w, z_M)$ are on the ellipsoid and so $$\frac{x_M^2}{d^2} + \frac{w^2}{r^2} + \frac{z_M^2}{r^2} =1$$ and from this equality you get $w$, the other semi-axis of the slice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $G$ be a group. If $H\leq G$ is a subgroup and $N\vartriangleleft G$, then $HN$ is a subgroup of $G$. Let $H$ be a subgroup of $G$ and $N$ a normal subgroup of $G$. Prove that $HN=\{hn\:|\:h \in H, n \in N\}$ is a subgroup. Here what I have so far. It is not really much I understand what I need to show but I am stuck in one little thing which I can not by pass. Proof: I claim that $HN$ is not empty. Since $G$ is a group, $1_{g} \in G$. Since both $H$ and $N$ are subgroups of $G$, $1_{g} \in H$ and $1_{g} \in N$. Therefore $1_{g} \in HN$. I claim that $HN$ is closed. Let $h_{1}$$n_1$ and $h_{2}$$n_2$ $\in HN$. $(h_{1}n_1)(h_{2}n_2)=(h_{1}n_1h_{2}n_2)=h_{1}(n_1h_{2})n_2$. Now this is where my difficulty is I cannot swap $n_1$$h_{2}$ to get $h_{2}$$n_1$ so what do I do. Help!!!
It is true that $h^{-1}nh=n'\in N\implies nh=hn'$, because $N\lhd G$; then $$h_1n_1h_2n_2=h_1(n_1h_2)n_2=h_1(h_2n')n_2=(h_1h_2)(n'n_2)\in HN$$ which gives closure. For inverses, $(hn)^{-1}=n^{-1}h^{-1}=h^{-1}(hn^{-1}h^{-1})\in HN$, again because $N\lhd G$, so $HN$ is indeed a subgroup.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving $\frac{200}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}=25$ I solve a partial differential equation (Laplace equation) with specific boundary conditions and I finally found the answer: $$U(x,y)=\frac{400}{\pi}\sum_{n=0}^{\infty}\frac{\sin\left((2n+1)\pi x\right)\sinh\left((2n+1)\pi y\right)}{(2n+1)\sinh\left((2n+1)\pi\right)}$$ If I put $x=\frac{1}{2}$ and $y=\frac{1}{2}$, then the equation becomes as follow: $$U\left(\frac{1}{2},\frac{1}{2}\right)=\frac{200}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}$$ At the end we can draw the answer by MS. Excel software and see that the $U\left(\frac{1}{2},\frac{1}{2}\right)= 25$. Honestly, my professor asked me to prove $U\left(\frac{1}{2},\frac{1}{2}\right)= 25$. However, it is not included in my course and I do not have any idea. Could you help me? Thanks.
We will evaluate $$S=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}$$ Since the series is alternating, use $f(z)=\pi\csc(\pi z)$ and we have $$\oint\frac{\pi \csc(\pi z)}{(2z+1)\cosh\left(\frac{\pi}{2}(2z+1)\right)}\,dz$$ The poles are at $z=-\frac{1}{2}, \;\ z=n, \;\ z=\frac{(2n+1)\,i}{2}-\frac{1}{2}$, so $$\begin{align}&\text{Res}\left(f(z)\,;\,z=-\frac{1}{2}\right)=-\frac{\pi}{2}\\ &\text{Res}(f(z)\,;\,z=n)=\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}\\ &\text{Res}\left(f(z)\,;\,z=\frac{(2n+1)i}{2}-\frac{1}{2}\right)=\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}\end{align}$$ Hence: $$\oint\frac{\pi \csc(\pi z)}{(2z+1)\cosh(\frac{\pi}{2}(2z+1))}dz=-\frac{\pi}{2}+4\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}$$ As $N\to \infty$, then the integral on the left tends to $0$. So, we have $0=-\frac{\pi}{2}+4S$. Solving for $S$ gives $\,\dfrac{\pi}{8}$. Hence $$\frac{200}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)\cosh\left(\frac{\pi}{2}(2n+1)\right)}=25$$
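Since the terms decay exponentially fast (through the $\cosh$ in the denominator), a handful of terms already confirms the value numerically:

```python
import numpy as np

n = np.arange(30)
terms = (-1.0) ** n / ((2 * n + 1) * np.cosh(np.pi * (2 * n + 1) / 2))
print(200 / np.pi * terms.sum())   # 25.000000...
```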
{ "language": "en", "url": "https://math.stackexchange.com/questions/1051688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Problem involving system of differential equations Solve following system of diferential equations$$\begin{cases} \frac{ds}{dt}=y+z\\ \frac{dy}{dt}=s+z\\ \frac{dz}{dt}=z-s. \end{cases}.$$ I tried many tehniques without any success. I would appreciate some help with this problem. One of my tries $$\frac{dz}{dt}=z-s\Rightarrow \frac{e^{-t}dz}{ds}=ze^{-t}-se^{-t}\Rightarrow -\frac{d(e^{-t})}{dt}\frac{dz}{dt}=-z\frac{d(e^{-t})}{dt}+s\frac{d(e^{-t})}{dt}\Rightarrow$$ $$\Rightarrow \frac{d}{dt}\left( \frac{y}{e^t}\right)=-\frac{z}{e^t}\Rightarrow \int d\left(\frac{y}{e^t}\right)=-\int\frac{z}{e^t}dt\Rightarrow y=e^t\left(-\int\frac{z}{e^t}dt+C\right) $$ Inserting this into the first equations doesn't lead to anything pleasant.
Consider the vector $\boldsymbol V=\left[\begin{array}{r}s\\y\\z\end{array}\right]$. Then your system of equations is equal to: $$\boldsymbol V'(t)=\left[\begin{array}{rrr}0&1&1\\1&0&1\\-1&0&1\end{array}\right]\boldsymbol V.$$ Let's call that matrix $\boldsymbol A$. Suppose that there is a solution $\boldsymbol V=\boldsymbol Ke^{\lambda t}$. Therefore $\boldsymbol K\lambda e^{\lambda t}=\boldsymbol A\boldsymbol Ke^{\lambda t}$. This implies $\boldsymbol {AK}=\lambda \boldsymbol K$. Therefore $\boldsymbol{AK}-\lambda\boldsymbol K = \boldsymbol 0 \Longrightarrow \boldsymbol {AK}-\lambda\boldsymbol {IK}=\boldsymbol 0.$ Then $(\boldsymbol A-\lambda \boldsymbol I)\boldsymbol K= \boldsymbol 0$. This means that to find the solution you need to find eigenvalues $\lambda$ that satisfy the last equation. This is done by taking $\det (\boldsymbol A-\lambda \boldsymbol I)=0$. Solve this characteristic polynomial to find the eigenvalues $\lambda$; counted with multiplicity, there are as many of them as variables (three here). Careful! Some of these $\lambda$ might be equal. Once you find these eigenvalues, solve for $\boldsymbol K$ in $(\boldsymbol A-\lambda \boldsymbol I)\boldsymbol K= \boldsymbol 0$ for each $\lambda$. Then you will find solutions to the differential equations. Now it's your turn to work on that $3\times 3$ matrix. The problem is that you will have to deal with complex numbers. The solutions to your system could be expressed as real solutions. This is done using Euler's formula!!
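For this particular matrix the eigenvalues can be checked numerically; they come out as $-1$ and the complex pair $1\pm i$, which is exactly where Euler's formula will enter:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [-1.0, 0.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # approx [-1, 1+1j, 1-1j] (order may differ)
```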
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Geometry Problem involving Midpoints and Analytic Geoometry In triangle $ABC, AB = AC, D$ is the midpoint of $\overline{BC}$, E is the foot of the perpendicular from D to $\overline{AC}$, and F is the midpoint of $\overline{DE}$. Prove that $\overline{AF}$ is perpendicular to $\overline{BE}$. I am required to solve this using analytic geometry and am recommended to place $A$ on the origin and $B$ on the point $(x,0)$, but don't know how to proceed from there.
Here's a start. Set point $C$ at $(a,b)$ (with $b \neq 0$). You can also specify $a \gt 0$ without loss of generality. Now, you're in a position to define $D = ((a+x)/2, b/2).$ The slope of $\overline{AC}$ is $m_{AC} = b/a$, so the slope of a perpendicular to that line is $-a/b$. Now, try to finish. From there, you can determine the location of $E$, because you know $D$, and you know the slope of $\overline{DE}$. (Write out an equation for the line $AC$, and for the line $DE$, and solve for $x,y$.) Then, some more midpoint calculation, and finally show that the two calculated slopes are negative reciprocals of one another: $$m_{AF} = -\frac{1}{m_{BE}}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Question regarding basis in vector spaces How can one prove the following proposition ? $ B = (e_{1,...,} e_n )\, $ forms a basis for a space $V$ if and only if each vector of $V$ can only be written as an unique linear combination of elements from $B$ . I'm really confused here, any ideas?
For the first part, suppose $B$ is a basis. By definition a basis is a set of linearly independent vectors that spans $V$. Take then $v\in V$: $v$ can be written as a linear combination of elements of $B$ in this way $v=\sum_{i=1}^n\alpha_ie_i$. The problem here is to prove that this way is unique. Suppose then that it can be written with other scalars like $v=\sum_{i=1}^n\beta_ie_i$. Then, $$0=v-v=\sum_{i=1}^n\alpha_ie_i-\sum_{i=1}^n\beta_ie_i=\sum_{i=1}^n(\alpha_i-\beta_i)e_i$$ but, given that the $e_i's$ are linealy independent, $(\alpha_i-\beta_i)=0$ for each $i=1,...,n$. Then $\alpha_i=\beta_i$ for each $i$ and the linear combination is unique. For the converse, it is clear that the elements of $B$ span the whole space. To prove that these elements are in fact l.i, consider the linear combination $\sum_{i=1}^n0\cdot e_i=0$. That is a way to write $0$ as linear combination of element of $B$, and the hypothesis says that this way is unique, then in another linear combination of the form $\sum_{i=1}^n\alpha_ie_i=0$, each $\alpha_i=0$, and this is the definition of linear independence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does the following alternating series converge or diverge? I have the following series that I have to check for convergence or divergence: $$ \sum\limits_{n=1}^{\infty} \frac{\sin (n+1/2)\pi}{1 + \sqrt{n}} $$ I know that it is an alternating series therefore I have to check for two conditions to be satisfied in order for it to be convergent; the limit has to equal 0 as n approaches infinity and that for series $a_n$ $a_n < a_{n+1}$. I am able to prove that the limit is approaches zero but could someone help me prove that for the series above that: $ a_n < a_{n+1}$
$$ \sum\limits_{n=1}^{\infty} \frac{\sin (n+1/2)\pi}{1 + \sqrt{n}}=\sum\limits_{n=1}^{\infty} \frac{(-1)^n}{1+\sqrt n} $$ so this is an alternating series. The generic term $\frac{1}{\sqrt n+1}$ decreases monotonically and converges to $0$: $$\underset{n\to \infty }{\text{lim}}\frac{1}{\sqrt{n}+1}=0$$ Therefore, by the alternating series test, the series converges. Numerically, $$\sum _{n=1}^{\infty } \frac{(-1)^n}{\sqrt{n}+1}\approx -0.278283, \qquad \text{equivalently} \qquad \sum _{n=0}^{\infty } \frac{(-1)^n}{\sqrt{n}+1}\approx 0.721717$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Evaluation of $ \lim_{x\to 0}\left\lfloor \frac{x^2}{\sin x\cdot \tan x}\right\rfloor$ Evaluation of $\displaystyle \lim_{x\to 0}\left\lfloor \frac{x^2}{\sin x\cdot \tan x}\right\rfloor$ where $\lfloor x \rfloor $ represent floor function of $x$. My Try:: Here $\displaystyle f(x) = \frac{x^2}{\sin x\cdot \tan x}$ is an even function. So we will calculate for $\displaystyle \lim_{x\to 0^{+}}\left\lfloor \frac{x^2}{\sin x\cdot \tan x}\right\rfloor$ Put $x=0+h$ where $h$ is a small positive quantity, and using series expansion So limit convert into $\displaystyle \lim_{h\to 0}\left\lfloor \frac{h^2}{\sin h\cdot \tan h}\right\rfloor = \lim_{h\to 0}\left\lfloor \dfrac{h^2}{\left(h-\dfrac{h^3}{3!}+\dfrac{h^5}{5!}- \cdots\right)\cdot \left(h+\dfrac{h^3}{3}+\dfrac{2}{15}h^5+ \cdot\right)}\right\rfloor$ Now how can i solve after that, Help me Thanks
Let me continue where you stopped in your post. Expand the denominator for a few terms and perform the long division. You should arrive to $$\frac{x^2}{\sin x\cdot \tan x} =1-\frac{x^2}{6}-\frac{7 x^4}{120}+O\left(x^5\right)$$ which is definitely smaller than $1$. I am sure that you can take from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
what is the order and the degree of the given diff. equation? the given diff. equation is $$y=1+ \frac{dy}{dx}+\frac{1}{2!}\left(\frac{dy}{dx}\right)^2+\cdots+\frac{1}{n!}\left(\frac{dy}{dx}\right)^n$$ it is given that the order is $1$ and the degree is also $1$. But how it happens because the the highest order $1$ has degree is $n$. so how the given degree is possible?
If you really do mean that those (first) derivatives are raised to powers from $1$ to $n$, then it is a first order differential equation. However, it would be $n$th degree. Here are the definitions of order and degree for an ODE.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Given radius, and many vertices on it, how can I find center of a sphere? I have a sphere, I know its radius. I also have the coordinates of 500 vertices which are on the sphere. How can I find the center coordinates of a sphere? Is there an easy way to do that? Thanks.
You need just four vertices $A,B,C,D$ that are not coplanar. Let $O_A$ be the circumcenter of $BCD$ and $l_A$ the line through $O_A$ that is orthogonal to the $BCD$-plane. Let $O_D$ be the circumcenter of $ABC$ and $l_D$ the line through $O_D$ that is orthogonal to the $ABC$-plane. Then $l_A$ and $l_D$ meet in the centre of the sphere.
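If you prefer to avoid the geometric construction, the same thing can be done algebraically: every vertex $P$ satisfies $|P-C|^2=R^2$, and subtracting the equation of one vertex from the others leaves a linear system for the centre $C$. A minimal Python sketch (the function name is just for illustration; with all 500 vertices the least-squares solve also averages out measurement noise):

```python
import numpy as np

def sphere_center(points):
    # 2 (P_i - P_0) . C = |P_i|^2 - |P_0|^2   for i = 1, 2, ...
    P = np.asarray(points, dtype=float)
    A = 2 * (P[1:] - P[0])
    b = (P[1:] ** 2).sum(axis=1) - (P[0] ** 2).sum()
    center, *_ = np.linalg.lstsq(A, b, rcond=None)
    return center

# Tiny test: 500 random points on a sphere of radius 5 centred at (1, 2, 3).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(sphere_center(np.array([1.0, 2.0, 3.0]) + 5.0 * dirs))   # approx [1. 2. 3.]
```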
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to compute $\lim_{x \to 1}x^{\frac{1}{1-x}}$? I need to compute $$\lim_{x \to 1}x^{\frac{1}{1-x}}$$ The problem for me is negative $x$.
Set $x-1=h$, so that $\dfrac1{1-x}=-\dfrac1h$ and $x\to1$ becomes $h\to0$. So, we have $$\lim_{h\to0}(1+h)^{-\frac1h}=\lim_{h\to0}\left([1+h]^{\frac1h}\right)^{-1}=e^{-1}$$ Alternatively, if $A=\lim_{x\to1}x^{\dfrac1{1-x}},$ then $\ln A=\lim_{x\to1}\dfrac{\ln\{1-(1-x)\}}{1-x}$. Setting $x-1=h,$ $\ln(A)=-\lim_{h\to0}\dfrac{\ln(1+h)}h=-1,$ so $A=e^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Maximum ratio between diameter of shape and diameter of enclosing circle For a 2-dimensional closed convex shape $C$, define: * *$d(C)$ = the diameter of $C$ (the largest distance between two points in $C$). *$D(C)$ = the diameter of the smallest circle containing $C$. Often $d(C)=D(C)$, for example in a unit square they are both $\sqrt 2$. Sometimes $d(C)<D(C)$, for example in a unit equilateral triangle $d(C)=1$ and $D(C)=2/\sqrt 3$. MY QUESTION: What is the largest value of $D(C)/d(C)$? (Can it be larger than $2/\sqrt 3$?)
I think $2/\sqrt 3 $ is the largest possible value: For simplicity only consider $C$ to be a convex polygon. If $C$ is a line, then the ratio is $1$. Now think about triangles: If we fix the diameter of the enclosing circle, then the triangle that produces the largest ratio (i.e. the smallest $d(C)$) is equilateral. If we still fix $D(C)$ and try to construct a polygon $C$ in this circle that has a very small diameter, we can always embed a triangle into $C$, which has the same diameter as $C$. However, this diameter is greater or equal to the diameter of an equilateral triangle with the same enclosing circle. Hence, the ratio $D(C)/d(C)$ is smaller or equal to the ratio of an equilateral triangle. Since arbitrary convex shapes can be approximated by polygons, the claim follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1052913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Candidates in an exam 443 candidates enter the exam hall. There are 20 rows of seats in the hall. Each row has 25 seats. At least how many rows have an equal number of candidates? My attempt: Seat 25 in the first row, 24 in the second and so on... but I cannot find the worst case scenario.
Let $a_1,a_2\dots a_{20}$ be non-negative integers (each at most $25$) such that $a_1+a_2+\dots +a_{20}=443$. Denote by $s$ the size of the largest subset of the $a$'s which all have the same value. We want to minimize this. We shall prove the minimum is three. To show it is more than two we notice that with $s=2$ the maximum sum would be achieved with $a_1=a_{11}=16,\ a_2=a_{12}=17,\ \dots,\ a_{10}=a_{20}=25$. Then the sum of the $a$'s would be $2\cdot\frac{10(16+25)}{2}=410<443$. We now prove that $s=3$ is attainable: take the numbers from $20$ to $25$ three times (these are $6\times 3=18$ of the rows). This already adds up to $3\cdot\frac{6(20+25)}{2}=405$. For the remaining $38$ we have exactly two rows left, and we can use the number $19$ twice, since $19+19=38$. Therefore it is possible for $s$ to be three, so at least three rows must have an equal number of candidates.
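Both halves of the argument are easy to check by machine:

```python
# s = 2 is impossible: the largest sum with every count used at most twice is 410.
print(2 * sum(range(16, 26)))                 # 410 < 443

# s = 3 works with exactly 20 rows of at most 25 candidates each.
rows = 3 * list(range(20, 26)) + [19, 19]
print(len(rows), sum(rows), max(rows))        # 20 443 25
```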
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding matrix for given recurrence For the recurrence relation: $f(0)=1$ $f(1)=1$ $f(2)=2$ $f(2n)=f(n)+f(n+1)+n$ $f(2n+1)=f(n)+f(n−1)+1$ How to find square matrices $M_0, M_1$ and vectors $u, v$ such that if the base-2 expansion of $n$ is given by $e_1 e_2 \cdots e_j$, then $$f(n) = u M_{e_1} \cdots M_{e_j} v.$$ ??
This is how I would approach this question, and I guess my answer would not be unique. For each $n = e_1e_2\cdots e_j$, there are $7$ numbers I should store (as a row vector) to calculate $f(2n)$ and $f(2n+1)$: $$uM_{e_1}M_{e_2}\cdots M_{e_j} = w(n) = \pmatrix {1&n&f(n-2)&f(n-1)&f(n)&f(n+1)&f(n+2)}.$$ To calculate $f(2n)$, the matrix $M_0$ is appended to the above product: $$w(2n) = \pmatrix{1\\2n\\f(2n-2)\\f(2n-1)\\f(2n)\\f(2n+1)\\f(2n+2)}^T =\pmatrix{1\\2n\\f(n-1)+f(n)+n-1\\f(n-1)+f(n-2)+1\\f(n)+f(n+1)+n\\f(n)+f(n-1)+1\\f(n+1)+f(n+2)+n+1}^T = w(n)M_0$$ To calculate $f(2n+1)$, the matrix $M_1$ is appended to the above product: $$ w(2n+1) = \pmatrix{1\\2n+1\\f(2n-1)\\f(2n)\\f(2n+1)\\f(2n+2)\\f(2n+3)}^T =\pmatrix{1\\2n+1\\f(n-1)+f(n-2)+1\\f(n)+f(n+1)+n\\f(n)+f(n-1)+1\\f(n+1)+f(n+2)+n+1\\f(n+1)+f(n)+1}^T = w(n)M_1$$ So $M_0$ can be written as $$M_0 = \pmatrix{1&0&-1&1&0&1&1\\ 0&2&1&0&1&0&1\\ 0&0&0&1&0&0&0\\ 0&0&1&1&0&1&0\\ 0&0&1&0&1&1&0\\ 0&0&0&0&1&0&1\\ 0&0&0&0&0&0&1}$$ And similarly for $M_1$. I imagine the base case to be $$w(2) = uM_1M_0 = \pmatrix{1&2&1&1&2&3&7}$$ For the rest, I have not tried out the calculation yet, but I suppose this involves solving the equation $$uM_1M_0 = w(2)$$ to solve for $uM_1$ and then $u$, and verify if that satisfies $w(0)$, $w(1)$ and $w(3)$. Lastly $$v = \pmatrix{0\\0\\0\\0\\1\\0\\0}$$ Edit: Just found out $uM_1M_0 = w(2)$ is inconsistent and there is no $uM_1$ which satisfy the equation. So hope this answer could give you some idea on other ways to solve this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f$ is a loop in $\mathbb{S}^{n}$, then is $f^{-1}(\{x\})$ a compact set in $[0, 1]$? If $f$ is a loop in $\mathbb{S}^{n}$, then is $f^{-1}(\{x\})$, $x \in \mathbb{S}^{n}$, a compact set in $[0, 1]$?
This community wiki solution is intended to clear the question from the unanswered queue. Since $\mathbb S^n$ is Hausdorff, singletons are closed. The loop $f : [0, 1] \to \mathbb S^n$ is a continuous map, so the inverse image of closed sets is closed, in particular $f^{-1}(\{x\})$ is closed. But $[0, 1]$ is compact and closed subsets of compact sets are compact. Conclude that $f^{-1}(\{x\})$ is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to isolate $n$ in the inequality $ 3n + 7n^3\gt c(17 + 34n^2) $? I have an equation $$ 3n + 7n^3\gt c\left(17 + 34n^2\right) $$ and I want to turn this inequality into something like $$ n \gt c(\mbox{something that does not have}\ n) $$ I don't know why but always get stumped at these types of questions. Any help would be appreciated, thanks.
Hint: This is equivalent to finding the minimum of $$\frac{n(7n^2+3)}{34n^2+17}=\frac{n}{34}\left(7-\frac1{2n^2+1}\right)$$ Which for positive numbers, is clearly increasing. So you need to find only the least $n$ satisfying the cubic. Checking for $n$ near $\frac{34}7c$ may quickly solve it, depending on the value of $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Average number of trials until drawing $k$ red balls out of a box with $m$ blue and $n$ red balls A box has $m$ blue balls and $n$ red balls. You are randomly drawing a ball from the box one by one until drawing $k$ red balls ($k < n$)? What would be the average number of trials needed? To me, the solution seems to be $$\sum_i i * \frac{\mbox{the number of cases where k-th red ball is picked at i-th trial}}{\mbox{the number of cases where k-th red ball is the last ball picked}}$$ the denometer seems to be $$\sum_{r=k-1}^{m-1} \binom{r}{k-1} = \binom{m}{k-1} $$ But, I have a difficulty of deriving a closed form formula for the numerator which seems to be $\sum_{i=k}^{m-1} i \binom{i-1}{k-1}$. I would appreciate if somebody helps me on that.
For your sum, we have $$\sum_{i=k}^{m-1} i \binom{i-1}{k-1} = \sum_{i=k}^{m-1} \frac{i!}{(k-1)!(i-k)!} = k\sum_{i=k}^{m-1} \frac{i!}{k!(i-k)!} = k \sum_{i=k}^{m-1} \binom{i}{k}.$$ By the hockey stick formula, this simplifies further: $$k \sum_{i=k}^{m-1} \binom{i}{k} = k\sum_{i=k}^{m-1}\binom{i}{i-k} = k\sum_{i=0}^{m-1-k}\binom{i+k}{i} = k\binom{m}{m-k-1} = k\binom{m}{k+1}.$$
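A quick check of the resulting identity $\sum_{i=k}^{m-1} i\binom{i-1}{k-1}=k\binom{m}{k+1}$ for small parameters:

```python
from math import comb

for m in range(2, 20):
    for k in range(1, m):
        lhs = sum(i * comb(i - 1, k - 1) for i in range(k, m))
        assert lhs == k * comb(m, k + 1)
print("identity verified for all 1 <= k < m < 20")
```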
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Z transform piecewise function I have this piecewise function: $$x(n)= \left\{ \begin{array}{lcc} 1 & 0 \leq n \leq m \\ \\ 0, &\mbox{ for the rest} \\ \\ \end{array} \right.$$ How do I calculate the $z$-transform?
$X(z) = \sum_{n=-\infty}^{\infty}x[n]z^{-n} = \sum_{n=0}^{m}z^{-n} = \frac{1-z^{-(m+1)}}{1-z^{-1}} = \frac{1}{z^{m}}\frac{z^{m+1}-1}{z-1}$, with $z \neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"Integer average" of two integer numbers Suppose two arbitrary integer numbers $a$ and $b$. I'm looking for some function $f(a,b)$ with the following properties: * *$f(a,b)\in\mathbb{Z}$. *$f(a,a)=a$. *$f(a,b)=f(b,a)$. *$\min\{a,b\}< f(a,b)< \max\{a,b\}$, accepting "$\leq$" if $a$ and $b$ are consecutive ($a=b\pm 1$). Any example of such a function? Improvement: Any solution avoiding "floor/ceiling-like" functions?
I would use "scientific rounding" (round to even) on the arithmetic mean. So if $(a+b)/2$ is an integer, use that, otherwise use the even integer out of $(a+b+1)/2$ and $(a+b-1)/2$. So that would likely be something like $$\left\lfloor a+b+1\over4\right\rfloor + \left\lceil a+b-1\over4\right\rceil$$ The advantage of this kind of rounding is that it is overall unbiased while preferredly rounding to points of greater stability, possibly reducing the amount of followup errors with repeated operations since averaging two values calculated using the "round to even" tie-breaking will have an exact result not needing another tie breaker.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 8, "answer_id": 0 }
Find limit of $\frac {1}{x^2}- \frac {1}{\sin^2(x)}$ as x goes to 0 I need to use a Taylor expansion to find the limit. I combine the two terms into one, but I get limit of $\dfrac{\sin^2(x)-x^2}{x^2\sin^2(x)}$ as $x$ goes to $0$. I know what the Taylor polynomial of $\sin(x)$ centered around $0$ is… but now what do I do?
Since you have already received good answers, and being myself in love with Taylor series for more than 55 years, let me add a small trick. Considering the expression $$\dfrac{\sin^2(x)-x^2}{x^2\sin^2(x)}$$ you have, as already given in answers, for the numerator $$-\frac{x^4}{3}+\frac{2 x^6}{45}-\frac{x^8}{315}+O\left(x^9\right)$$ and for the denominator $$x^4-\frac{x^6}{3}+\frac{2 x^8}{45}+O\left(x^9\right)$$ The ratio immediately gives you the limit. But make one further step, using long division for example, and you will arrive at $$\dfrac{\sin^2(x)-x^2}{x^2\sin^2(x)}\approx-\frac{1}{3}-\frac{x^2}{15}$$ which gives you the limit and how it is approached. Just as a curiosity, plot on the same graph the function and this last approximation for $-1 \leq x\leq 1$; you will be surprised to see how close the curves are over this quite large interval. If you push the long division for one more term, you should get $$\dfrac{\sin^2(x)-x^2}{x^2\sin^2(x)}\approx-\frac{1}{3}-\frac{x^2}{15}-\frac{2 x^4}{189}$$ and now the curves are almost identical.
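The expansion quoted in the answer can be reproduced symbolically; the sketch below (not part of the original answer) simply asks sympy for the same series.

```python
# Minimal sketch: series of (sin^2 x - x^2)/(x^2 sin^2 x) around x = 0.
import sympy as sp

x = sp.symbols('x')
expr = (sp.sin(x)**2 - x**2) / (x**2 * sp.sin(x)**2)
print(sp.series(expr, x, 0, 6))   # expect -1/3 - x**2/15 - 2*x**4/189 + O(x**6)
```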
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
prove $f(x)=x$ has a unique solution Question: Let $f$ be a continuous function from $\mathbb{R^2} \rightarrow \mathbb{R^2}$ such that $| f (x)− f (y)| ≤ \frac {1}{3} |x−y|$. Prove $f(x)=x$ has a unique solution. My sketch: There exists $\epsilon, \delta$ such that the following holds because $f$ is continuous $|f(x)-f(y)| \leq \epsilon \leq \frac {1}{3} |x-y| \leq |x-y|< \delta$ Thus by the squeeze lemma, if $\delta$ goes to $0$, then $f(x)=f(y)$
Assume there are two such points $x_1,x_2$ such that $f(x_1)=x_1$ and $f(x_2)=x_2$. Then $|f(x_1)-f(x_2)|=|x_1-x_2|\leq \frac{1}{3}|x_1-x_2|$. This is true if and only if $|x_1-x_2|=0$. Therefore $x_1=x_2$ and the fixed point, if it exists, is unique. To show that a solution does exist consider the sequence $\{x_n\}$ where $x_1=x$ for some point $x$, and $x_{n+1}=f(x_n)$. The limit of this sequence will end up being your fixed point!
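To illustrate why the iteration $x_{n+1} = f(x_n)$ homes in on the fixed point, here is a small numerical sketch (not part of the original answer). The map $g$ below is a hypothetical $\tfrac13$-contraction of $\mathbb{R}^2$ chosen only for demonstration, not the $f$ from the question.

```python
# Minimal sketch: iterate a sample contraction of R^2 and watch convergence.
import numpy as np

A = np.array([[1/3, 0.0],
              [0.0, -1/4]])      # operator norm 1/3, so |g(x)-g(y)| <= |x-y|/3
b = np.array([1.0, 2.0])

def g(p):
    return A @ p + b

p = np.array([100.0, -50.0])     # arbitrary starting point
for _ in range(60):
    p = g(p)
print(p, np.allclose(g(p), p))   # p has (numerically) stopped moving
```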
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to prove that there are infinitely many primes without using contradiction How can I prove that there are infinitely many primes without using contradiction? I know the proof that is (not) by Euclid saying there are infinitely many primes. It assumes that there is a finite set of primes and then obtains one that is not in that set. Say for some reason I want to prove it without using contradiction. Maybe I'm contradictaphobic. How could I go about doing this?
The simplest direct proof I know is one which involves infinite series. One can show that $\sum\limits_{p\text{ prime}}\frac{1}{p}$ diverges. In particular, it means that there cannot be finitely many primes, since the sum of finitely many nonzero numbers is finite. (There is a very slight contradiction - or contrapositive - argument in here if you look closely enough.) That said, you should definitely combat your contradictaphobia... proofs by contradiction are everywhere in mathematics. Hamstringing yourself by not accepting or feeling at ease with such proofs will make mathematics quite challenging at times.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1053902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Disjoint sets of vertices of a polygon This is the question: Suppose that the vertices of a regular polygon of 20 sides are coloured with three colours – red, blue and green – such that there are exactly three red vertices. Prove that there are three vertices A,B,C of the polygon having the same colour such that triangle ABC is isosceles. And this is the official solution to this question: Since there are exactly three red vertices, among the remaining 17 vertices there are nine of them of the same colour, say blue. We can divide the vertices of the regular 20-gon into four disjoint sets such that each set consists of vertices that form a regular pentagon. Since there are nine blue points, at least one of these sets will have three blue points. Since any three points on a pentagon form an isosceles triangle, the statement follows. What do we mean by the "four disjoint sets"? What does it signify geometrically? Also is the statement "among the remaining 17 vertices there are nine of them of the same colour" a consequence of the Pigeonhole principle?
For your second question: the answer is yes. The pigeonhole principle is used repeatedly to reach the answer. By "four disjoint sets", the answerer means four sets such that no two of them share an element. Call the vertices $1$, $2$, $3$, $4$, etc. such that adjacent vertices differ by one, except vertices $20$ and $1$. Partition the set of all vertices into four sets: $$ A=\{1,5,9,13,17\}\\ B=\{2,6,10,14,18\}\\ C=\{3,7,11,15,19\}\\ D=\{4,8,12,16,20\}\\ $$ It is easily seen that the vertices belonging to the same set form a regular pentagon. Apply the pigeonhole principle again to conclude that at least one of these sets contains at least three blue points.
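The geometric fact the solution leans on, that any three vertices of a regular pentagon form an isosceles triangle, is easy to confirm numerically. The sketch below is not part of the original solution.

```python
# Minimal sketch: every 3-subset of a regular pentagon is isosceles.
import cmath
from itertools import combinations

verts = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
for a, b, c in combinations(verts, 3):
    sides = sorted([abs(a - b), abs(b - c), abs(a - c)])
    # isosceles: at least two of the three side lengths coincide
    assert any(abs(sides[i] - sides[i + 1]) < 1e-9 for i in range(2))
print("checked all 10 triangles")
```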
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{n \to +\infty}\frac{\left(1 + \sin\alpha\right)^n}{\left(1+\frac{\sin\alpha}n\right)^n}$ Let, for $n \ge 2$, $$a_n = \frac{\left(1 + \sin\alpha\right)^n}{\left(1+\frac{\sin\alpha}n\right)^n}$$ Evaluate $\lim_\limits{n \to +\infty}a_n$, as $\alpha$ varies in $[0, 2\pi)$. I proceeded as follows. The denominator is $\exp(\sin\alpha)$, and we can rewrite the numerator as $\exp(n\ln(1 + \sin\alpha))$. Therefore $$a_n = \exp(n\ln(1 + \sin\alpha) - \sin\alpha)$$ So $$\lim_{n \to +\infty} a_n = \begin{cases}0\qquad&\alpha \in \{0, \pi\}\\ +\infty\qquad&\alpha\in(0, \pi)\cup(\pi, 3\pi/2)\cup(3\pi/2,2\pi)\\ ??\qquad&\alpha = 3\pi/2 \end{cases}$$ I don't know what happens when $\alpha=3\pi/2\implies\sin\alpha = -1$ because that's outside the domain of the logarithm. But $\alpha=3\pi/2$ is inside the domain of the original $a_n$ so I think I've made a mistake somewhere.
Your first step is correct, with the simpler denominator. It was a good idea to convert to exp and log. Also to break into quadrants with particular interest in multiples of $\pi/2$. But then, when $x=0,\pi$, you have exp(n(0)-0) = exp(0)=1. When $\sin x>0$, then $1+\sin x>1$, so $\ln(1+\sin x)>0$ and $n\ln(1+\sin x)\to\infty$. Check $\sin x<0$. For $x=3\pi/2$, go back to the original question, where it is particularly simple. Logs won't work in this case, as you found.
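A numerical sample of the sequence for a few values of $\alpha$ (not part of the original answer) matches the three regimes described above; the chosen $n$ values are arbitrary.

```python
# Minimal sketch: evaluate a_n for representative alpha in each regime.
import math

def a(n, alpha):
    return (1 + math.sin(alpha))**n / (1 + math.sin(alpha)/n)**n

for alpha in (0.0, math.pi/2, 3*math.pi/2):
    print(alpha, [a(n, alpha) for n in (10, 50, 200)])
# alpha = 0      -> the terms stay at 1
# alpha = pi/2   -> the terms grow without bound
# alpha = 3pi/2  -> the numerator vanishes, so the terms are 0
```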
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$a_{n+1}=1+\frac{1}{a_n}.$ Find the limit. Let $a_1=1$ and $a_{n+1}=1+\frac{1}{a_n}.$ Is $a_n$ convergent? How could I find its limit? I found that the even-indexed terms decrease and the odd-indexed terms increase. But I can't find upper and lower bounds to use the monotone convergence theorem.
You can prove by induction that $$a_n = \frac{F_{n+1}}{F_n}$$ where $\{F_n\}_{n\geq 1}=\{1,1,2,3,5,8,\ldots\}$ is the Fibonacci sequence. Binet formula hence gives: $$\lim_{n\to +\infty} a_n = \phi = \frac{1+\sqrt{5}}{2}.$$
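A quick numerical check (not part of the original answer): iterating the recursion and forming the Fibonacci quotient both settle on the golden ratio.

```python
# Minimal sketch: a_n vs F_{n+1}/F_n vs the golden ratio.
a = 1.0
F = [1, 1]                       # F_1, F_2
for _ in range(40):
    a = 1 + 1/a
    F.append(F[-1] + F[-2])

phi = (1 + 5**0.5) / 2
print(a, F[-1]/F[-2], phi)       # all three agree to double precision
```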
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Number of $(n-1)$-dimensional subspaces of $n$-dimensional space over finite field I got a question with two parts. Let $V$ be an $n$-dimensional vector space over $\mathbb{F}_{p}$ - the finite field with $p$ elements. a) How many $1$-dimensional subspaces does $V$ have? b) How many $(n-1)$-dimensional subspaces does $V$ have? I solved (a) with the action of the multiplicative group of $\mathbb{F}_{p}$ on $V$, but I didn't succeed in solving (b) with a similar idea. I would still prefer an approach using group actions. Thanks!
Hint: There is a bijection between subspaces of dimension $k$ and $k\times n$ matrices of rank $k$ in reduced row-echelon form.
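For part (a), the count can be brute-forced for small $p$ and $n$ and compared with the standard formula $(p^n-1)/(p-1)$; this sketch is not part of the original hint and only checks the $1$-dimensional case.

```python
# Minimal sketch: count lines (1-dimensional subspaces) of F_p^n directly.
from itertools import product

def count_lines(p, n):
    nonzero = [v for v in product(range(p), repeat=n) if any(v)]
    lines = set()
    for v in nonzero:
        # a line is the set of nonzero scalar multiples of v
        lines.add(frozenset(tuple(c * x % p for x in v) for c in range(1, p)))
    return len(lines)

for p in (2, 3, 5):
    for n in (2, 3, 4):
        assert count_lines(p, n) == (p**n - 1) // (p - 1)
print("counts match (p^n - 1)/(p - 1)")
```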
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
how to point out errors in proof by induction I have searched for an answer to my question but no one seems to be talking about this particular matter. I will use the "all horses are the same color" paradox as an example. Everyone points out that the statement is false for n=2 and that if we want to prove the proposition we should use 2 as the base case for this proof. But (as I see it) you have to use reason to figure that out. My question is: is there anything wrong with induction itself, except for the fact that we can use reason to understand why the proof is faulty? This goes for all these problematic propositions. Thank you all!
"But, (as I see it..), you have to use reason to figure that out. My question is, is there anything wrong with induction itself? except for the fact that we can use reason to understand why the proof is faulty." I'm not entirely sure of what you mean with using 'reason', but I'm interpreting it as "is there way to spot the mistake in the argument without having the inspiration to check what happens for $n=2$?" There is. Take Wikipedia's argument. Inductive step Assume that $n$ horses always are the same color. Let us consider a group consisting of $n+1$ horses. First, exclude the last horse and look only at the first $n$ horses; all these are the same color since $n$ horses always are the same color. Likewise, exclude the first horse and look only at the last $n$ horses. These too, must also be of the same color. Therefore, the first horse in the group is of the same color as the horses in the middle, who in turn are of the same color as the last horse. Hence the first horse, middle horses, and last horse are all of the same color.. Consider a set of $n+1$ horses: $\{h_1, h_2,\ldots ,h_n, h_{n+1}\}$. Wikipedia now tells you to consider $\{h_2, \ldots ,h_{n+1}\}$ and $\{h_1, \ldots h_n\}$ and apply the induction hypothesis on each of this sets, which then yields that $h_2, \ldots ,h_{n+1}$ are all of the same color and $h_1, \ldots ,h_{n}$ are all of the same color. Up until this point there's nothing wrong. But now, to say that $h_1$ has the same color as $h_2$ and consequently as $h_3,\ldots ,h_{n+1}$ you need that $h_2\in \{h_1, \ldots ,h_n\}$, but this is only true if $n>2$, for $n+1=2$ one has $\{h_1 ,\ldots ,h_n\}=\{h_1\}$, so you can't make further progress.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
the purpose of induction After getting an answer (in a comment) from Peter for this question I have a follow-up question. In the "all horses are the same color" problem, for example, we need to use reason, reason which is specific to the case, in order to find that "hole" between the correct base case and the correct inductive step: the "hole" at n=2 which makes the proof collapse. So how can we verify that our hypotheses are correct and that there are no "holes"? I mean, if we proved that the base case p(0) or p(1) or p(whatever integer) is true and that the inductive step is true, how can we be sure that there are no "holes" in the hypothesis? Do we use induction only to prove what we know, beyond any doubt, is true? But doesn't that contradict the purpose of a proof? I know this is such a newbie question, but I have searched for answers to these specific questions and could not find any. Thank you all!
You can prove statements inductively which you did not know beforehand to be true or false. You just need to make sure that there is no "hole" in your proof. In the case of your example the problem occurred because the proof of the inductive step was simply wrong. But if you check that both p(0) and the inductive step can be proved formally correctly, then your proof will be valid and you don't need to search for "holes" afterwards.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Closed form of $\sum_{n=1}^{\infty} \frac{1}{2^n(1+n^2)}$ How would you recommend tackling the series $$\sum_{n=1}^{\infty} \frac{1}{2^n(1+n^2)}$$? Can we possibly express it in terms of known constants? What do you think about it?
You may recall the Lerch transcendent function $$ \Psi(z,s,a)=\sum_{k=0}^\infty\frac{z^k}{(a+k)^s} $$ and use $$ \frac{1}{n^2+1}=\frac{i}{2}\left(\frac{1}{n+i}-\frac{1}{n-i}\right) $$ to get $$ \sum_{n=1}^{\infty} \frac{1}{2^n(1+n^2)}=-1-\Im \: \Psi\left(\frac12,1,i\right) $$ which gives your series in terms of a known special function. Maybe someday we will be able to say something deeper...
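The partial-fraction step used above, and the numerical value of the series itself, can be checked directly; this sketch is not part of the original answer and deliberately avoids any special-function library call.

```python
# Minimal sketch: the series converges fast (terms ~ 2^{-n}), and the
# partial-fraction split 1/(n^2+1) = (i/2)(1/(n+i) - 1/(n-i)) reproduces it.
N = 100
direct = sum(1 / (2**n * (1 + n**2)) for n in range(1, N))
split = sum((1j/2) * (1/(n + 1j) - 1/(n - 1j)) / 2**n for n in range(1, N))

print(direct)                                       # numerical value (~0.318)
print(abs(direct - split.real), abs(split.imag))    # both ~0
```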
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Inverse Laplace transform exercise I want to find the inverse transform of $$\frac{1}{(2s-1)^3}$$ I first applied a shifting theorem to get $$(e^t)\mathcal{L}^{-1}\left( \frac{1}{(2s)^3} \right)$$ I am just wondering whether it is possible to take the 2 out from here so that it becomes $$\frac{1}{8}(e^t)\mathcal{L}^{-1}\left( \frac{1}{s^3} \right)$$ Any feedback would be much appreciated.
We could use the inverse Laplace transform integral/Bromwich Integral/Mellin-Fourier integral/Mellin's inverse formula (many names) and then use Residue theory. The pole is at $s = 1/2$ of order three. The Bromwich contour is a line from $\gamma - i\infty$ to $\gamma + i\infty$ and then a partial circle connecting the line. \begin{align} \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty}\frac{e^{st}}{(2s - 1)^3}ds &= \sum\text{Res}\\ &= \lim_{s\to 1/2}\frac{1}{(3 - 1)!}\frac{d^{3 - 1}}{ds^{3 - 1}}(s - 1/2)^3\frac{e^{st}}{(2s - 1)^3}\\ &= \lim_{s\to 1/2}\frac{1}{2}\frac{d^2}{ds^2}(s - 1/2)^3\frac{e^{st}}{8(s - 1/2)^3}\\ &= \frac{t^2e^{t/2}}{16} \end{align}
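A symbolic cross-check of the residue computation (not part of the original answer) can be done with sympy's built-in transform; it should return $t^2 e^{t/2}/16$, possibly multiplied by a Heaviside step factor.

```python
# Minimal sketch: compare with sympy's inverse Laplace transform.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
print(sp.inverse_laplace_transform(1/(2*s - 1)**3, s, t))
```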
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can we prove that axioms do not contradict? We construct many structures by choosing a set of axioms and deriving everything else from them. As far as I remember we never proved in our lectures that those axioms do not contradict. So: Is it always possible to prove that a set of axioms does not contain a contradiction? (So if not, could we perhaps find a contradiction in a structure that we defined by axioms someday?) Or is it proven that doing so is impossible?
For a first order theory, basically a set of "axioms" formulated in a first order language, you have Gödel's completeness theorem. The theorem establishes that a first order theory is consistent (i.e. non-contradictory) if and only if you have a model for that theory (i.e. a "real" mathematical object satisfying the axioms of the theory). From this you can know that group theory, ring theory, field theory and, in general, every first order theory that has a model is a consistent theory. To see this you proceed using the completeness theorem and observing, for example, that $(\mathbb{Z}_2,+)$ is a model for group theory, and $(\mathbb{Z}_2,+,.)$ is a model for field and ring theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1054857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
If $a$ is a complex number s.t. $a\notin \mathbb R$, then $\mathbb R(a)=\mathbb C$? If $a$ is a complex number s.t. $a\notin \mathbb R$, then $\mathbb R(a)=\mathbb C$? I'm asked to give a proof or a counterexample. I'm a bit confused by the notation $\mathbb R(a)$; what does this mean exactly? I'm also unclear on exactly what direction my proof needs to go in (assuming it is true). Though limited, my thoughts so far are as follows: Since $a\notin \mathbb R$ then $a=x+iy$ where $y\ne 0$. Elements in $\mathbb R$ are of the form $\{a+b\sqrt{2}|a,b\in \mathbb Q\}$. Similarly, for $\mathbb C$, $\{a+b\sqrt{-2}|a,b\in \mathbb Q\}$. Am I trying to show that plugging in some $a=x+yi$ in $\mathbb R$ that I can write it in the form of $\{a+b\sqrt{-2}|a,b\in \mathbb Q\}$?
For $a \in \Bbb C \setminus \Bbb R$, we indeed have $\Bbb R(a) = \Bbb C$; indeed, we have $i \in \Bbb R(a)$; this may be seen as follows: With such $a$, we have $0 < \bar a a \in \Bbb R \subset \Bbb R(a); \tag{1}$ since $\Bbb R(a)$ is a field and $a \ne 0$, we further have $\bar a = a^{-1} (\bar a a) \in \Bbb R(a); \tag{2}$ thus $a - \bar a \in \Bbb R(a); \tag{3}$ writing $a = a_r + i a_i \tag{4}$ with $a_r, a_i \in \Bbb R$ and $a_i \ne 0$ (since $a \notin \Bbb R$), we see that $a - \bar a = a_r + i a_i - (a_r - i a_i)= 2 i a_i, \tag{5}$ whence via (3) $2 i a_i \in \Bbb R(a), \tag{6}$ which yields $i \in \Bbb R(a); \tag{7}$ thus for $x, y \in \Bbb R \subset \Bbb R(a)$, $x + i y \in \Bbb R(a). \tag{8}$ (8) clearly shows that $\Bbb C \subset \Bbb R(a)$; $\Bbb R(a) \subset \Bbb C$ is obvious; thus $\Bbb R(a) = \Bbb C$. QED. Hope this helps. Cheers, and as ever, Fiat Lux!!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1055033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }