Differentiating $\int\cdots \int f(X_1,X_2,\ldots,X_n)\varphi_1(x_1,\theta)\cdots\varphi_n(x_n,\theta)~dx_1\cdots dx_n$ Differentiating:$$\int_{-\infty}^\infty \cdots \int_{-\infty}^\infty f(X_1,X_2,\ldots,X_n)\varphi_1(x_1,\theta)\cdots\varphi_n(x_n,\theta)\,dx_1 \cdots dx_n$$ with respect to $\theta$. The result is given in one line, (the next one). I do not understand how this is. (Statistics proof) Anyway the result given being: $$\int_{-\infty}^\infty \cdots \int_{-\infty}^\infty f(X_1,X_2,\ldots,X_n) \sum_{i=1}^n \left(\frac{\partial}{\partial \theta}\varphi(x_i,\theta)\frac{1}{\varphi(x_i,\theta)}\right) \varphi_1(x_1,\theta)\cdots\varphi_n(x_n,\theta)\,dx_1\cdots dx_n$$
Let us rewrite the product rule as follows: $$(fg)'=f'g+g'f=\frac{f'}{f}fg+\frac{g'}{g}fg=\left(\frac{f'}{f}+\frac{g'}{g}\right)fg$$ Yours is just the generalization to $n$ factors, but is handled in the exact same way.
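A quick way to convince yourself of this (not part of the original answer) is to let a computer algebra system check the $n=2$ case symbolically; the factor $\varphi_i(x_i,\theta)=e^{-(x_i-\theta)^2/2}$ used below is just a hypothetical choice of density.

```python
# Sketch (not from the original post): a SymPy check of the identity
# d/dtheta [phi1*phi2] = (d phi1/d theta / phi1 + d phi2/d theta / phi2) * phi1*phi2
# for the hypothetical choice phi_i(x, theta) = exp(-(x - theta)**2 / 2).
import sympy as sp

x1, x2, theta = sp.symbols('x1 x2 theta')
phi1 = sp.exp(-(x1 - theta)**2 / 2)
phi2 = sp.exp(-(x2 - theta)**2 / 2)
product = phi1 * phi2

lhs = sp.diff(product, theta)                      # direct derivative of the product
rhs = (sp.diff(phi1, theta) / phi1
       + sp.diff(phi2, theta) / phi2) * product    # "sum of logarithmic derivatives" form

print(sp.simplify(lhs - rhs))                      # expected output: 0
```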
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is multivariable continuous differentiability defined in terms of partial derivatives? Both in my textbook and on Wikipedia, continuous differentiability of a function $f:\Bbb R^m \to \Bbb R^n$ is defined by the existence and continuity of all of the partial derivatives. Since there is a notion of a (total) derivative (AKA differential) for multivariable functions, I'm wondering why continuous differentiability is not defined as existence and continuity of the derivative map $Df(a)$? Is there some reason why having existence and continuity of partials is more convenient or maybe continuity of the total derivative is too strict of a condition?
Continuous differentiability of the function $f: \mathbb{R}^m \to \mathbb{R}^n$ (in terms of partial derivatives) is equivalent to existence and continuity of the map $$Df: \mathbb{R}^m \to L(\mathbb{R}^m, \mathbb{R}^n)$$ $$ x \to Df_x$$ which takes a point to the derivative at the point. Any book on analysis on $\mathbb{R}^n$ will have a proof of this fact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\int_{- \infty}^{\infty} \frac{f(x)}{1+\exp{g(x)}}dx=\int_{0}^{\infty} f(x) dx$ for $f(x)=f(-x),~g(x)=-g(-x)$ - are there other formulas like that? If $f(x)$ is any even function, integrable on $(0,\infty)$, and $g(x)$ is any odd function, then we have: $$\int_{- \infty}^{\infty} \frac{f(x)}{1+e^{g(x)}}dx=\int_{0}^{\infty} f(x) dx \tag{1}$$ The proof is elementary: $$I(a)=\int_{- \infty}^{\infty} \frac{f(x)}{a+e^{g(x)}}dx$$ $$I(1/a)=\int_{- \infty}^{\infty} \frac{f(x)}{1/a+e^{g(x)}}dx=a \int_{- \infty}^{\infty} \frac{e^{-g(x)}f(x)}{e^{-g(x)}+a}dx= \\ = a \int_{- \infty}^{\infty} f(x)dx-a^2\int_{- \infty}^{\infty} \frac{f(x)}{a+e^{g(x)}}dx$$ $$\frac{1}{a} I(1/a)+aI(a)=\int_{- \infty}^{\infty} f(x)dx$$ $$I(1)=\int_{0}^{\infty} f(x)dx$$ With this formula we can write some crazy-looking integrals to scare people, like: $$\int_{- \infty}^{\infty} \frac{e^{-x^2}}{1+e^{\sin (\sinh x)+x^3-\arctan x}}dx=\frac{\sqrt{\pi}}{2}$$ To be fair, it might also be useful for some quantum statistics applications (e.g. the Fermi-Dirac distribution). I want to know what other formulas like $(1)$ exist, maybe with the exponential function or with some other functions. I also know of Glasser's Theorem, but I wonder if some more interesting cases exist. To be more specific, I mean non-trivial formulas of the following kind: $$\int_{a}^b g(x) f(x) dx=k \int_{A}^B f(x) dx$$ with $k$ being some constant, independent of $f(x)$; $f(x)$ is a general function (with some restrictions) and $g(x)$ is some interesting function. $A,B$ might be different from $a,b$, but should also not depend on $f(x)$.
Any function $g(x)$ such that $g(c+a)+g(c-a)=k$ for all $a$ on the interval $(0,b)$, together with any function $f(x)$ such that $f(c-a)=f(c+a)$ for all $a$ on the interval $(0,b)$, will satisfy the equation $$\int_{c-b}^{c+b}{f(x)g(x)dx}=k*\int_{c}^{c+b}{f(x)dx}$$ because, using a trapezoidal Riemann sum after splitting the integral into $$\int_{c-b}^{c}{f(x)g(x)dx} + \int_{c}^{c+b}{f(x)g(x)dx}$$ and using $\Delta x= \frac bn$, $$\lim_{n \to \infty}\Delta x *\sum_{i=1}^{n-1}f(c-b+i*\Delta x)*g(c-b+i*\Delta x)+f(c+b-i*\Delta x)*g(c+b-i*\Delta x)$$ $$+\frac{\Delta x}2*(f(c-b)*g(c-b)+f(c+b)*g(c+b))$$ The second part of the Riemann sum has the indices going backwards for the sake of the "proof" and we only look at one specific index in this part. $$\lim_{n \to \infty}\Delta x*(f(c-(b-i*\Delta x))*g(c-(b-i*\Delta x))+f(c+b-i*\Delta x)*g(c+b-i*\Delta x))$$ Letting $w=b-i*\Delta x$, $$\lim_{n \to \infty}\Delta x*(f(c-w)*g(c-w)+f(c+w)*g(c+w))$$ $$f(c+w)=f(c-w)$$ $$\lim_{n \to \infty}\Delta x*(f(c+w)*g(c-w)+f(c+w)*g(c+w))$$ $$\lim_{n \to \infty}\Delta x*(f(c+w)*(g(c-w)+g(c+w)))$$ $$g(c+w)+g(c-w)=k$$ $$\lim_{n \to \infty}\Delta x*(f(c+w)*k)$$ Going back, we have $$k*\lim_{n \to \infty}\Delta x*\sum_{i=1}^n f(c+b-i*\Delta x)+\frac{\Delta x}2*(f(c)+f(c+b))$$ which is the Trapezoid Rule for the integral $$k*\int_c^{c+b}f(x)dx$$ You had $c=0$ (which made $f(x)$ an even function), $b=\infty$, and $g(x)=\frac 1{1+e^{h(x)}}$. I do not know if this is an actual theorem, corollary, etc. I also don't know if the logistic function is the only solution to $g(c+a)+g(c-a)=k$ for all $a$ on the interval $(0,b)$. Although I tried to avoid any errors, if you see any, let me know.
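As a sanity check (not from the answer above), the "crazy looking" integral from the question can be verified numerically. The sketch below truncates the range to $[-4,4]$, where $e^{-x^2}$ is already negligible, and uses `scipy.special.expit` for a numerically stable $1/(1+e^{g})$.

```python
# Sketch (not part of the answer): numerically check the example
# int e^{-x^2} / (1 + e^{sin(sinh x) + x^3 - arctan x}) dx = sqrt(pi)/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def integrand(x):
    g = np.sin(np.sinh(x)) + x**3 - np.arctan(x)
    return np.exp(-x**2) * expit(-g)     # expit(-g) = 1/(1 + e^g), stable for large |g|

value, err = quad(integrand, -4, 4, limit=200)   # e^{-x^2} is negligible beyond |x| = 4
print(value, np.sqrt(np.pi) / 2)                 # both approximately 0.886227
```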
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Critical Number real life applications I've been studying critical numbers/points a lot and I have to give a presentation about them. I am searching for real-life applications to explain the concept, but they are difficult to find. Can anyone give me some real-life examples of applications of critical numbers?
There are many problems in physics that use this concept. For example, when two atoms come together to form a molecule. They come closer to each other because the energy of the system is smaller if they share electrons. But if they are too close, the electrons cannot screen the nuclei. The two nuclei will repel, so at very short distances the energy is increased. The energy as a function of distance has a minimum (critical point), so the nuclei will be at that particular distance. Google atomic potential.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that two non-parallel planes must intersect? I managed to find, by enumeration, the intersection point of two planes $ax+by+cz+d=0$ and $ex+fy+gz+h=0$, in all possible cases (with the condition that the planes are not parallel). But this is a very ugly proof. I wonder if there is a quicker and more elegant proof (without linear algebra --- this is high school level (Euclidean) geometry)?
You want to show that if $v,w$ are linearly independent vectors in $\mathbb R^3$, then the $2\times 3$ matrix $A$ formed by putting $v$ and $w$ in two rows defines a map $A:\mathbb R^3\to \mathbb R^2$ that is onto. It suffices to show that the kernel of $A$ has dimension $1$ when $v,w$ are linearly independent; from this it follows that the image of $A$ has dimension $2$, that is, we can always solve the system $$ v\cdot x=\lambda,w\cdot x=\mu $$ for any two scalars $\mu,\lambda$ (this is what you want). The claim that $A$ has one dimensional kernel is the same as saying two non-parallel planes passing through the origin intersect in a line. Can you prove this? To do this, you want to show that the simultaneous equations $$ v\cdot x =0,w\cdot x=0$$ have a solution set equal to a line. The fact that $w$ and $v$ are linearly independent means that they are not a scalar multiple of each other. This means the cross product $u=v\times w$ is nonzero, and this gives a nonzero vector $u$ that solves the above, so the kernel has dimension at least $1$, so we have to check these are all the solutions. Pick another solution, $x$. Because $(v,w,v\times w)$ is a basis of $\mathbb R^3$, we can write $x$ as a linear combination of $v,w,v\times w$, and if $x\cdot w=x\cdot v=0$, then in fact $x$ is a multiple of $v\times w$ (take the inner product against $v,w$ to see the corresponding coefficients are zero). Thus, as desired, $\ker A $ is generated by $v\times w$.
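A small numerical illustration of this argument (not part of the answer), with hypothetical vectors $v,w$: the cross product spans the kernel, and the $2\times 3$ system is solvable for any right-hand side.

```python
# Sketch (not from the answer): hypothetical, linearly independent normals v and w.
import numpy as np

v = np.array([1.0, 2.0, -1.0])
w = np.array([0.0, 1.0, 3.0])

u = np.cross(v, w)
print(u, np.dot(v, u), np.dot(w, u))   # u is nonzero, both dot products are 0

# Solve v.x = lambda, w.x = mu for one particular x (least-squares gives a solution
# of the underdetermined system).
A = np.vstack([v, w])
lam, mu = 5.0, -2.0
x0, *_ = np.linalg.lstsq(A, np.array([lam, mu]), rcond=None)
print(A @ x0)                          # approximately [5, -2]
```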
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 2 }
Field with $125$ elements I want to construct a field with $125$ elements. My idea is to consider the polynomial ring $\Bbb F_5[x]$. It is enough to find an irreducible polynomial $f\in \Bbb F_5[x]$ of degree $3$ because then $\Bbb F_5[x]/(f)$ is a field with exactly $5^3=125$ elements. How do I find an irreducible polynomial of degree $3$ in $\Bbb F_5[x]$?
Belatedly, there is a way to "be lucky" here, and not so computational, in an informative way. Namely, of all the polynomials known to humans, the best understood are cyclotomic ones. If a cyclotomic polynomial or a relative can resolve an issue, that's good fortune. After a moment's fooling around, we observe that the smallest $n$ such that $5^n-1$ is divisible by $7$ is $n=6$. That is, $5$ happens to be a "primitive root" mod $7$. Thus, a primitive seventh root of unity $\zeta$ is of degree $6$ over $\mathbb F_5$. That is, the seventh cyclotomic polynomial $x^6+x^5+...+x+1$ is irreducible over $\mathbb F_5$. The irreducible polynomial of $\xi=\zeta+\zeta^{-1}$ over $\mathbb F_5$ is obtained by a standard algebraic manipulation: $$ 0\;=\; \zeta^{-3}\Big(\zeta^6+\ldots+\zeta+1\Big) \;=\; \zeta^3+\zeta^2+\zeta+1+\zeta^{-1}+\zeta^{-2}+\zeta^{-3} $$ $$ \;=\; (\xi^3 - 3\xi) + (\xi^2-2) + \xi + 1 \;=\; \xi^3 + \xi^2 - 2\xi - 1 $$ We can use $-2=3$ and $-1=4$ if we like. So, then, basic Galois theory (since $[\mathbb F_5(\zeta):\mathbb F_5(\xi)]=2$, the element $\xi$ has degree $3$) assures us that $x^3+x^2-2x-1$ is irreducible in $\mathbb F_5[x]$.
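For a cubic, irreducibility over $\mathbb F_5$ is equivalent to having no root in $\mathbb F_5$, so a brute-force check (not in the original answer) confirms the result.

```python
# Sketch (not part of the answer): a degree-3 polynomial is irreducible over F_5
# exactly when it has no root in F_5.
def f(x):
    return x**3 + x**2 - 2*x - 1

roots_mod_5 = [x for x in range(5) if f(x) % 5 == 0]
print(roots_mod_5)   # expected: [] , so x^3 + x^2 - 2x - 1 is irreducible over F_5
```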
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
Differentiate equation with parenthesis I have a problem. I'm studying calculus, but I don't have a good math background, so I have a problem: I don't know well how to differentiate an equation with parenthesis. The equation is the following: $f(x) = 25x^3(x-1)^2$ Is it correct to use the Differentiation Product Rule in this way: $f'(x)=75x^2*(x-1)^2+25x^3*2(x-1)$ or before I have to solve $(x-1)^2$ in this way: f(x) = $25x^3*(x^2+1-2x)$ and then = $25x^5+25x^3-50x^4$ ? Thanks in advance
A small (useful) trick when you face products, quotients, powers, ...: logarithmic differentiation. Let us take your case $$f(x) = 25x^3(x-1)^2\implies \log(f(x))=\log(25)+3\log(x)+2\log(x-1)$$ Now, differentiate: $$\frac{f'(x)}{f(x)}=\frac 3 x+\frac{2}{x-1}=\frac{5x-3}{x(x-1)}$$ $$f'(x)=f(x)\frac{5x-3}{x(x-1)}=25x^3(x-1)^2\frac{5x-3}{x(x-1)}=25x^2(x-1)(5x-3)$$
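A quick symbolic check of this computation (not part of the answer), assuming SymPy is available:

```python
# Sketch: confirm that the logarithmic-differentiation result agrees with the
# direct derivative of f.
import sympy as sp

x = sp.symbols('x')
f = 25 * x**3 * (x - 1)**2
claimed = 25 * x**2 * (x - 1) * (5*x - 3)

print(sp.simplify(sp.diff(f, x) - claimed))   # expected output: 0
```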
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Find the Wrong Student There are 15 students in the class and each of them has a different number from 1 to 15. * *Student #1: wrote a natural number on the board. *Student #2 said: This number is divisible by my number (number 2). *Student #3 said: This number is divisible by my number (number 3). *Student #4 said: This number is divisible by my number (number 4). And so on until * *Student #15 said: This number is divisible by my number (number 15). Student #1 checks what the other 14 students said and finds that all of them spoke correctly except for two students with consecutive numbers. What is the sum of these two consecutive numbers?
The answer is $17$, as students number $8$ and $9$ are wrong. To see this, note that if student $i$ is wrong, then student $ki$ must be wrong for every $k \ge 2$ with $ki \le 15$ (a multiple of $i$ cannot divide the number on the board if $i$ itself does not). Since $i$ and $2i$ are never two consecutive numbers for $i \le 7$, this cannot be the case. This means students $2$ through $7$ must be right. Given $pq$, with $p$ and $q$ coprime, if student number $pq$ is wrong, then either $p$ or $q$ must be wrong as well. (In other words, if $p \mid n$ and $q \mid n$ with $p,q$ coprime, then $pq \mid n$.) However, $pq$ and that smaller factor would not form two consecutive students. This means $10$, $12$, $14$ and $15$ are right as well. This leaves $8$ and $9$ as the only consecutive pair that could both be wrong.
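A brute-force confirmation (not part of the original answer): for each consecutive pair, take the least common multiple of the remaining numbers in $2,\dots,15$; the pair is achievable exactly when that lcm is divisible by neither member of the pair (the lcm itself is then a valid number on the board). This uses `math.lcm`, available in Python 3.9+.

```python
# Sketch: check every consecutive pair (i, i+1) for feasibility.
from math import lcm

for i in range(2, 15):
    others = [m for m in range(2, 16) if m not in (i, i + 1)]
    candidate = lcm(*others)
    if candidate % i != 0 and candidate % (i + 1) != 0:
        print(i, i + 1, candidate)   # expected: only "8 9 60060"
```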
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solve $\sec (x) + \tan (x) = 4$ $$\sec{x}+\tan{x}=4$$ Find $x$ for $0<x<2\pi$. Eventually I get $$\cos x=\frac{8}{17}$$ $$x=61.9^{\circ}$$ The answer I obtained is the only answer; the other candidate value of $x$ in the $4$th quadrant does not satisfy the equation. How does this happen? I have been facing the same problem every time I solve this kind of trigonometric equation.
Using $t$-formula Let $\displaystyle t=\tan \frac{x}{2}$, then $\displaystyle \cos x=\frac{1-t^2}{1+t^2}$ and $\displaystyle \tan x=\frac{2t}{1-t^2}$. Now \begin{align*} \frac{1+t^2}{1-t^2}+\frac{2t}{1-t^2} &=4 \\ \frac{(1+t)^{2}}{1-t^2} &= 4 \\ \frac{1+t}{1-t} &= 4 \quad \quad (t\neq -1) \\ t &= \frac{3}{5} \\ \tan \frac{x}{2} &= \frac{3}{5} \\ x &=2\left( n\pi +\tan^{-1} \frac{3}{5} \right) \\ x &= 2\tan^{-1} \frac{3}{5} \quad \quad (0<x<2\pi) \end{align*}
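A numerical check (not part of the answer) that the value found really satisfies the original equation and matches the question's $\cos x = 8/17$:

```python
# Sketch: confirm x = 2*arctan(3/5) numerically.
import math

x = 2 * math.atan(3 / 5)
print(math.degrees(x))                 # about 61.93 degrees
print(1 / math.cos(x) + math.tan(x))   # about 4.0
print(math.cos(x), 8 / 17)             # both about 0.470588
```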
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 12, "answer_id": 0 }
$\mathbb{C}/\mathbb{Z}$ is isomorphic to multiplicative group $\mathbb{C}\setminus\{0\}$ I have to show that $\mathbb{C}/\mathbb{Z}$ is isomorphic to the multiplicative group $\mathbb{C} \setminus \{0\}$. Proof. Let $f:\mathbb{C} \setminus \{0\} \rightarrow \mathbb{C}/\mathbb{Z}$ be the map $$ f(\alpha) = \alpha \mathbb{Z}.$$ This map has inverse $f^{-1}(\alpha \mathbb{Z}) = \alpha$ so it is bijective. Furthermore, $$f(\alpha \beta) = (\alpha \beta)\mathbb{Z} = \alpha \mathbb{Z} \beta \mathbb{Z} =f(\alpha) f(\beta)$$ and $f(1) = \mathbb{Z}$, so the map $f$ is also a homomorphism. Question. Is this correct? It feels a bit like I am cheating.
The proof, as written, is definitely wrong -- the inverse map you define does not vanish on $\mathbb{Z}$, so it cannot be a map out of $\mathbb{C}/\mathbb{Z}$. I think the idea is to use the complex exponential function. Notice that $\alpha \mapsto \exp(2\pi i \alpha)$ defines a group homomorphism from $\mathbb{C}$ to $\mathbb{C} \setminus \{0\}$ that vanishes on $\mathbb{Z}$. Now prove it's an isomorphism (I suggest doing this geometrically -- you can get anything of norm $1$ by taking $\alpha \in \mathbb{R}$, and then scale)...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
The Jeep Problem and Nash's Friends The classical jeep problem is the following. A jeep can carry a maximum load of fuel of 1 gallon, and it travels $l$ miles with $l$ gallons of fuel. The jeep moves along a straight line, and is required to cross a desert $x$ miles wide in the most economic way, that is minimizing the required fuel. Let us say that $x > 0$ is the abscissa of the starting point, and $0$ that of the ending point. At $x$ there is a fuel station, where the jeep can load the fuel, while at any point $y < x$, there is a dumping station where the jeep can dump part of its fuel, in order to load it in a future trip. The solution in this case was found by Fine, The Jeep Problem, Americ. Math. Monthly, Vol. 54 (1947), 24-31. The minimum required quantity of fuel $f(x)$ is the piecewise linear function having slope $2n+1$ on the interval $\left[ D_n, D_{n+1} \right]$, where $D_0=0$ and \begin{equation} D_n = \sum_{k=1}^{n} \frac{1}{2k-1} \end{equation} for $n > 1$. From this result Fine easily derives the asymptotic formula for $f(x)$: \begin{equation} f(x) = \frac{1}{4} \exp(2x - \gamma) + \mathcal{O}(e^{-2x}), \end{equation} where $\gamma$ is Euler's constant. Now let $N$ be a positive integer and let $g(x,N)$ be the minimum required fuel to cross the desert when the jeep can dump (and subsequently load) the fuel only at the points \begin{equation} y_1 = \frac{x}{N}, y_2 = 2 \frac{x}{N}, \dots, y_{N-1}= \frac{N-1}{N} x. \end{equation} Suppose to choose for every $x \geq 0$ the positive integer $N(x)$ such that $x^2 / N(x) \rightarrow 0$ as $x \rightarrow \infty$. Fine states at the end of his paper without proof that $[g(x,N(x)) - f(x)]/f(x)$ is not larger that $1 / 2$ for large $x$, meaning that \begin{equation} \limsup_{x \rightarrow \infty} \frac{g(x,N(x))-f(x)}{f(x)} \leq \frac{1}{2}. \end{equation} Does anyone know how to prove this result? Thank you very much for your attention. PS I discovered the Jeep Problem in the book "A Beautiful Mind" by Sylvia Nasar. In chapter 17, she tells a curious story in which Nash was challenged by one of his friends to give an upper bound for the minimum required quantity of fuel. Nasar writes in the book that "there is no optimal solution to the problem, as it turns out". She is not explicit about what version of the jeep problem Nash and his friends were discussing: it seems from her words that it was the version later analyzed by Rote and Zhang, Optimal Logistics for Expeditions: the Jeep Problem with Complete Refilling. In any case, also for this version there is an optimal solution, and this existence property is true for all the versions I know of the jeep problems (see the references in https://en.wikipedia.org/wiki/Jeep_problem and http://mathworld.wolfram.com/JeepProblem.html ), so I think Nasar simply made a wrong statement (maybe she simply meant that an optimal solution was not known at that time, or simply she quoted a wrong statement by someone). PSS A very short and elegant proof that $f(x)$ is the minimum required quantity of fuel is found in Gale, The Jeep Once More or Jeeper by the Dozen, Americ. Math. Monthly, 77 (1970), 493-501.
Finally, I found the answer to my question. First of all, we can easily give a recursive solution to the problem as follows. Let us note that if $P$ is a feasible plan of trips which allows the jeep to arrive at $0$, then we can find another feasible plan $P'$ which arrives at $0$ such that $P'$ is made up of a number of trips between $x=y_N$ and $y_{N-1}$, followed by a number of trips between $y_{N-1}$ and $y_{N-2}$, and so on, up to a number of trips between $y_1$ and $y_0=0$. Indeed, assume that the jeep makes $m$ round trips starting at $x$, followed by a last one-way trip $A_{m+1}$ from $x$ to $y_{N-1}$. The $i$-th of these round trips is made up of the one-way trip $A_i$ from $x$ to $y_{N-1}$, followed by a round trip $B_i$ starting and ending at $y_{N-1}$, and by a one-way trip $C_i$ from $y_{N-1}$ to $x$. Let $g_i$ be the fuel of the jeep when it starts the trip $A_i$. Suppose we replace the sequence of trips $A_1$, $B_1$, $C_1$, ..., $A_m$, $B_m$, $C_m$, $A_{m+1}$ with the sequence $A_1$, $C_1$, ..., $A_m$, $C_m$, $A_{m+1}$ and deposit the quantity $g_i - 2(x - y_{N-1})$ at $y_{N-1}$ in $A_i$, $i=1,\dots,m$, and $g_{m+1} - (x - y_{N-1})$ at $y_{N-1}$ in $A_{m+1}$. Then we are now in a position to make the trips $B_1$, $B_2$, ... , $B_m$. When we do this the final configuration is not altered. By induction, we can thus get the desired $P'$. We call a plan like $P'$ a standard plan. Now, if $S$ is a standard plan which realizes the optimal solution, then clearly the plan $S_n$, $n=1,\dots,N-1$, obtained from $S$ by deleting the trips starting at $y_m$, for $m > n$, realizes the optimal solution $g(y_n, n)$. So, if $k_n$ is the number of trips in $S$ between $y_{n-1}$ and $y_n$, we have \begin{equation} g(y_n,n) - g(y_{n-1},n-1) = (2 k_n - 1) \Delta, \end{equation} where $\Delta=x/N$. Moreover, since in the first $k_n - 1$ round trips the maximum fuel that can be deposited at $y_{n-1}$ is $1-2 \Delta$, while in the $k_n$-th trip it is $1- \Delta$, we must have \begin{equation} (k_n - 1)(1 - 2 \Delta) + 1 - \Delta \geq g(y_{n-1},n-1), \end{equation} or \begin{equation} k_n (1 - 2 \Delta) + \Delta \geq g(y_{n-1},n-1). \end{equation} Now note that, since we want to minimize the consumed fuel, clearly $k_n$ is determined as the least positive integer satisfying the above inequality. This solves the problem recursively. Now, note that if $k_n \geq 2$, then we must have \begin{equation} (k_n - 1)(1 - 2 \Delta) + \Delta < g(y_{n-1},n-1), \end{equation} from which we get \begin{equation} k_n < \frac{g(y_{n-1},n-1) + 1 - 3 \Delta}{1-2 \Delta}, \end{equation} so that \begin{equation} \frac{g(y_n,n) - g(y_{n-1},n-1)}{\Delta} = 2k_n - 1 < \frac{2g(y_{n-1},n-1) + 1 - 4 \Delta}{1-2 \Delta} < \frac{2g(y_{n-1},n-1) + 1}{1-2 \Delta}. \end{equation} Since we have \begin{equation} g(\Delta \left \lfloor {1/ \Delta} \right \rfloor, \left \lfloor {1/ \Delta} \right \rfloor) \leq 1, \end{equation} we compare $g(y_n,n)$ with the function \begin{equation} h(y)= c(N) \exp \left( \frac{2y}{1- 2 \Delta} \right) - \frac{1}{2}, \end{equation} where \begin{equation} c(N) = \frac{3}{2 \exp \left( \frac{ 2 \Delta \left \lfloor {1/ \Delta} \right \rfloor}{1 - 2 \Delta} \right) }. \end{equation} The function $h$ is convex, satisfies the differential equation \begin{equation} h'=\frac{2 h + 1}{1-2 \Delta}, \end{equation} and the initial condition $h(\Delta \left \lfloor {1/ \Delta} \right \rfloor)=1$.
From the above inequality for $g(y_n,n)$ we then get by induction that \begin{equation} h(y_n) \geq g(y_n,n), \end{equation} for all $n \geq \left \lfloor {1/ \Delta} \right \rfloor$, so that in particular \begin{equation} g(x,N(x)) \leq c(N(x)) \exp \left( \frac{2x}{1- 2 \Delta} \right) - \frac{1}{2}. \end{equation} Now note that since $x^2 / N(x) \rightarrow 0$ as $x \rightarrow \infty$, we have $\Delta \rightarrow 0$ and $\Delta \left \lfloor {1/ \Delta} \right \rfloor \rightarrow 1$. Moreover we have \begin{equation} \lim_{x \rightarrow \infty} \frac{\exp \left( \frac{2x}{1 - 2 \Delta} \right)}{e^{2x}} = \lim_{x \rightarrow \infty} \exp \left(\frac{4x \Delta}{1 - 2 \Delta} \right) = 1. \end{equation} We finally have so \begin{equation} \limsup_{x \rightarrow \infty} \frac{g(x,N(x))}{f(x)} \leq \limsup_{x \rightarrow \infty} \frac{c(N(x)) \exp \left( \frac{2x}{1 - 2 \Delta} \right)}{\frac{1}{4} \exp \left( 2x - \gamma \right) + \mathcal{O}(e^{-2x})} = \frac{6}{e^{2 - \gamma}} < \frac{3}{2}. \end{equation} QED
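The recursion described above is easy to run numerically. The sketch below is not part of the original answer; it assumes the base value $g(y_n,n)=y_n$ at the largest grid point $y_n\le 1$ (for distances at most $1$ the jeep simply drives across), assumes $\Delta = x/N < 1/2$, and finds each $k_n$ by a linear scan. The function name and the chosen $x$, $N$ values are purely for illustration.

```python
# Sketch: compute g(x, N) by the recursion derived in the answer and compare it
# with the asymptotic formula f(x) ~ exp(2x - gamma)/4.
import math

def g(x, N):
    """Minimum fuel when dumps are allowed only at the grid points j*x/N (requires x/N < 1/2)."""
    d = x / N                      # Delta, the grid spacing
    n0 = int(1 // d)               # largest n with n*d <= 1
    fuel = n0 * d                  # base case: just drive, no ferrying needed
    for _ in range(n0 + 1, N + 1):
        k = 1                      # least k with (k-1)(1-2d) + (1-d) >= current requirement
        while (k - 1) * (1 - 2 * d) + (1 - d) < fuel:
            k += 1
        fuel += (2 * k - 1) * d    # k down-crossings and k-1 up-crossings on this segment
    return fuel

x = 3.0
f_asymptotic = math.exp(2 * x - 0.5772156649) / 4
for N in (50, 500, 5000):
    print(N, g(x, N), g(x, N) / f_asymptotic)   # the ratio approaches 1 as the grid is refined
```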
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $G$ is a non-abelian group of order 10, prove that $G$ has five elements of order 2. I'm trying to prove this statement: If $G$ is a non-abelian group of order $10$, prove that $G$ has five elements of order $2$. I know that if $a\in G$ such that $a\neq e$, then as a consequence of Lagrange's theorem $|a|\in \{2,5,10\}$. The order of $a$ cannot equal $10$, since then $G$ would be cyclic, and thus abelian which is a contradiction. Now this means that $|a|=2 $ or $|a|=5$. I know from this question that $G$ has a subgroup of order $5$. This subgroup $H$ has prime order, so it is cylic, and all of its non-identity elements have order $5$. Now I need to show that the elements not in $H$ have order $2$. This is where I'm stuck. I've tried assuming that an element $b \notin H$ has order $5$, in order to derive a contradiction, but to no avail. I also know from a previous exercise that if $G$ has order $10$, then it has at least one subgroup of order $2$, so I tried to assume toward a contradiction that $G$ has two subgroups of order $5$, and one subgroup of order $2$. I was trying to show that this would make $G$ abelian, but I couldn't. Any ideas?
There are only 2 groups of order 10, namely the cyclic group and the dihedral group of symmetries of a regular pentagon. The reflections in the dihedral group give you the five desired elements of order $2$. Now, to prove that there are only 2 groups of order 10, let $a,b$ be elements of orders $2,5$ respectively. Consider the elements $1$, $b$, $b^2$, $b^3$, $b^4$, $a$, $ab$, $ab^2$, $ab^3$, and $ab^4$, and notice that they are all distinct. To determine the group, it suffices to determine what $ba$ is. This is the same as determining what $a^{-1}ba$ is. By nonabelianness, we know that $a^{-1}ba \neq b$, so merely check against all other elements of the group...
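To see the five elements of order $2$ concretely, here is a small sketch (not part of the answer) that builds the ten symmetries of a regular pentagon as permutations of its vertices and counts the involutions.

```python
# Sketch: the dihedral group of the pentagon as permutations of {0,...,4}.
rotations   = [tuple((i + k) % 5 for i in range(5)) for k in range(5)]
reflections = [tuple((k - i) % 5 for i in range(5)) for k in range(5)]
group = rotations + reflections          # all 10 symmetries

def compose(p, q):                       # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

identity = tuple(range(5))
order_two = [p for p in group if p != identity and compose(p, p) == identity]
print(len(order_two))                    # expected output: 5 (the five reflections)
```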
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 0 }
$f$ is continuous at $x_0 \Leftrightarrow$ for every monotonic sequence $x_n$ in $\text{dom}(f)$ converging to $x_0$, we have $\lim f(x_n) = f(x_0)$ $f$ is continuous at $x_0 \Leftrightarrow$ for every monotonic sequence $x_n$ in $\text{dom}(f)$ converging to $x_0$, we have $\lim f(x_n) = f(x_0)$ Note: There is one answer for this already but it uses a different method than my attempted proof below, so please do not mark this as a duplicate. Proof: Take arbitrary $x_n$ converging to $x$. Then every sequence contains a monotonic sub-sequence, so take $x_{n_k}$ to be a monotonic sub-sequence of $x_n$. Since $x_n$ converges to x, then so does $x_{n_k}$. Then by assumption $f(x_{n_k})$ converges to $f(x)$. This is where I am stuck and am not sure how to complete the idea now that $f(x_n)$ converges to $f(x)$.
Hint For every subsequence $(x_{n_k})_{k \in \mathbb N}$ of $(x_n)_{n \in \mathbb N}$ there exists a subsequence $(x_{n_{k_\ell}})_{\ell \in \mathbb N}$ of $(x_{n_k})_{k \in \mathbb N}$ with $f(x_{n_{k_\ell}}) \rightarrow f(x_0)$ (this can be proven similarly to your idea, so that is where it comes into play). It follows that $f(x_n) \rightarrow f(x_0)$. Can you see why? PS: I hope this is not the proof mentioned in the other question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1869910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I find the terms of an expansion using combinatorial reasoning? From my textbook: The expansion of $(x + y)^3$ can be found using combinatorial reasoning instead of multiplying the three terms out. When $(x + y)^3 = (x + y)(x + y)(x + y)$ is expanded, all products of a term in the first sum, a term in the second sum, and a term in the third sum are added. Terms of the form $x^3$, $x^2y$, $xy^2$, and $y^3$ arise. What does the bolded part mean? I found that if you find the possible combinations of $x$ and $y$, you can get $xxx = x^3$, $xxy = x^2y$, $xyy = xy^2$, $yyy = y^3$. Is this what it means?
Expanding $(x+y)(x+y)(x+y)$ amounts to adding up all the ways you can pick three factors to multiply together. For example, you could pick an $x$ from the first $(x+y)$, a $y$ from the second $(x+y)$, and another $x$ from the third $(x+y)$ to get $xyx=x^2 y$. You are right, the only possible products we can get are $x^3$, $x^2 y$, $xy^2$, and $y^3$. However, we do need to count how many ways there are to get each product. For example there is only one way to get $x^3$ (pick $x$ from each $(x+y)$), but there are three ways to get $x^2 y$: $xxy$, $xyx$, and $yxx$. One way to count this is to realize that there are $3$ ways to choose which $(x+y)$ contributes a $y$ [and the rest will be $x$s]. Similar reasoning for $xy^2$ and $y^3$ shows that the expansion is $x^3 + 3x^2 y + 3 xy^2 + y^3$. In general, if you have $(x+y)^n$, the number of ways to obtain a product of the form $x^k y^{n-k}$ is the number of ways to choose $k$ of the $(x+y)$ factors from which to select an $x$. There are $\binom{n}{k}$ ways to make this choice. This proves the binomial theorem $(x+y)^n = \sum_{k=0}^n \binom{n}{k} x^k y^{n-k}$.
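The counting argument can be reproduced mechanically; the following sketch (not from the answer) enumerates the $2^3$ choices for $(x+y)^3$ and tallies the resulting products.

```python
# Sketch: pick one term from each of the three factors and count the products.
from itertools import product
from collections import Counter

picks = product(['x', 'y'], repeat=3)                  # one choice per factor (x+y)
terms = Counter(''.join(sorted(p)) for p in picks)     # sort so xyx, xxy, yxx all count as xxy
print(terms)   # expected counts: xxx: 1, xxy: 3, xyy: 3, yyy: 1
```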
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cylinder defined on 3d coordinate plane This is the first time, I have seen a problem like this: I feel as though if i knew where to start i would be able to do this problem easily. In other words, question 1-4 make sence to me and i know what they are asking for, but i just can't visualize the cylinder. I'm not asking for a picture(although that would be nice), but clarification on what the question is telling me would be helpful. Thanks!
I don't have access to plotting software or a scanner right now, so I can't provide a precise plot, but you have the following ingredients: * *The equation $x^2 + y^2 = r^2$ is the equation of an infinite cylinder of radius $r$ whose symmetry axis is the $z$-axis. The inequality $0 \leq x^2 + y^2 \leq r^2$ throws in all the points inside the cylinder and so defines a solid cylinder. *The equation $z = y$ is the equation of a plane in $\mathbb{R}^3$ that passes through the origin. The inequality $0 \leq z \leq y$ describes the region that lies below the plane and above the $xy$-plane. The solid $C$ you are interested in lies below the plane $z = y$, inside the cylinder $x^2 + y^2 = r^2$ and above the $xy$ plane. The following image, taken from math.tutorvista.com, shows a similar situation: In the picture, $r = 1$ but the plane is $y + z = 2$ and not $z = y$ like in your scenario. The first part of your question asks you to describe the cross section of $C$ by a plane $x = t$, which is the plane parallel to the $yz$ plane that passes through $(t,0,0)$. In the cross section, $x$ is constant and so the cross section is described by $$ C_{t} = \left \{ (t, y, z) \, | \, y^2 \leq r^2 - t^2, \, 0 \leq z \leq y \right \}. $$ If you draw the inequalities defining $C_{t}$ in the $yz$ plane, you'll see that $C_{t}$ looks like a solid triangle. This is not surprising, as the intersection of the solid cylinder with the plane $x = t$ is a strip, and then choosing the points that lie above the $xy$ plane and below the plane $z = y$ will result in a solid triangle (try to imagine and draw this in 3d). This will allow you to do the first part of the question. For the second part of the question, you need to integrate the areas of the cross sections $C_t$ in the range $-r \leq t \leq r$ in order to get the volume of the solid $C$.
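If you want to check your final answer afterwards, here is a sketch (not part of the answer, and it does give away the area step): assuming the cross-section turns out to be the right triangle $0 \le z \le y \le \sqrt{r^2-t^2}$ with area $(r^2-t^2)/2$, numerical integration reproduces the volume $\tfrac{2}{3}r^3$; the value $r=2$ is arbitrary.

```python
# Sketch: integrate the assumed cross-sectional area over -r <= t <= r.
from scipy.integrate import quad

r = 2.0                                  # hypothetical radius
area = lambda t: (r**2 - t**2) / 2       # assumed area of the triangular cross-section C_t
volume, _ = quad(area, -r, r)
print(volume, 2 * r**3 / 3)              # both approximately 5.3333
```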
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find $\lim_{n\to \infty}\left(\frac{n+2}{n-1}\right)^{2n+3}$ Find $$\lim_{n\to \infty}\left(\frac{n+2}{n-1}\right)^{2n+3}.$$ My attempt: $$\lim_{n\to \infty}\left(\frac{n+2}{n-1}\right)^{2n+3}=\lim_{n\to \infty}\left(1+\frac{3}{n-1}\right)^{2n+3}=\lim_{n\to \infty}\left(1+\frac{1}{\frac{n-1}{3}}\right)^{2n+3}$$ Now we should do something to change the power to $\frac{n-1}{3}$ because: $\lim_{x\to \infty}(1+\frac{1}{x})^x=e$ But I cannot get the answer (the given answer is $e^2$). Please give small hints, not full answers. Here is a picture from my answer: Note that in Persian $2=۲$ and $3=۳$. Edit: The given answer is mistaken; it takes $n+1$ instead of $n-1$ in the denominator.
\begin{align} \lim_{n \to \infty}\left(\frac{n + 2}{n - 1}\right)^{2n + 3} & = \lim_{n \to \infty}\left[\left(\frac{1 + 2/n}{1 - 1/n}\right)^{2n}\left(\frac{1 + 2/n}{1 - 1/n}\right)^{3}\right] \\[5mm] & = \lim_{n \to \infty}\left\{\left[\left(1 + \frac{2}{n}\right)^{n}\right]^{2} \left[\left(1 - \frac{1}{n}\right)^{n}\right]^{-2}\right\} \\[5mm] & = \left[\lim_{n \to \infty}\left(1 + \frac{2}{n}\right)^{n}\right]^{2} \left[\lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n}\right]^{-2} = \left(e^{2}\right)^{2}\left(e^{-1}\right)^{-2} \\[5mm] & = e^{6} \approx 403.4288 \end{align}
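A quick numerical check (not part of the answer) that the sequence really approaches $e^{6}$ rather than $e^{2}$:

```python
# Sketch: evaluate ((n+2)/(n-1))**(2n+3) for growing n and compare with e**6.
import math

for n in (10, 1_000, 100_000):
    print(n, ((n + 2) / (n - 1)) ** (2 * n + 3))
print(math.exp(6))   # about 403.4288
```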
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
How to prove this inequality $3^{n}\geq n^{2}$ for $n\geq 1$ with mathematical induction? Prove this inequality $3^{n}\geq n^{2}$ for $n\geq 1$ with mathematical induction. Base step: When $n=1$ $3^{1}\geq1^{2}$, statement is true. Inductive step: We need to prove that this statement $3^{n+1}\geq (n+1)^{2}$ is true. So, to get the left side of this statement is easy. We can get it by multiplying $3^{n}\geq n^{2}$ with $3$. After this step we have $3^{n+1}\geq 3n^{2}$. What we now have to get is the right side and we can transform it like this: $3n^{2}= (n^{2}+2n+1)+(2n^{2}-2n-1)$ which is same as $(n+1)^{2}+(2n^{2}-2n-1)$. So now we have $(n+1)^{2}$ and $(2n^{2}-2n-1)$ and my question is how should i use this to prove inequality?
Essentially, you want to show that $$3n^2 > (n+1)^2$$ which is not so hard since $$3n^2 - (n^2 + 2n + 1) > 0 \iff 2n^2 - 2n-1 > 0$$ But $2n^2 - 2n - 1 = 2(n^2 -n) - 1 = (n^2-2) + (n^2 - 2n+1) =(n^2 -2) + (n-1)^2$, so that we have for all $n \geq 2$ that $2n^2 - 2n - 1 \geq 0$ since $(n-1)^2$ is always $\geq 0$ and $n^2 - 2$ is $\geq 0$ when $n\geq 2$. Then this means that $$3n^2 \geq (n+1)^2$$ for all $n\geq 2$. And hence: $$3^n \geq n^2 \implies 3^{n+1} \geq 3n^2 \geq (n+1)^2$$ and we are done. This is a very common tactic in inequality induction proofs, you have the inequality arising from your hypothesis; you then multiply both sides by something to get one side of the inequality in the required inductive form (say $a >b$, where $a$ is the required form) and then prove an inequality chain $b > \cdots > d$ where $d$ is the required form.
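A quick sanity check of the two inequalities used above (not part of the answer), for small $n$:

```python
# Sketch: verify 3**n >= n**2 for n >= 1 and 3*n**2 >= (n+1)**2 for n >= 2.
for n in range(1, 31):
    assert 3**n >= n**2
    if n >= 2:
        assert 3 * n**2 >= (n + 1)**2
print("checked n = 1..30")
```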
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Is this a valid proof that sine is continuous at the origin? $$ \text{Let } \left|\sin x - 0\right| < \epsilon. \\ -\epsilon < \sin x < \epsilon \\ \arcsin (-\epsilon) < x < \arcsin (\epsilon) \\ -\arcsin \epsilon < x < \arcsin \epsilon \\ \left|x\right| < \arcsin \epsilon \\ \left|x - 0\right| < \arcsin \epsilon \\ \text{Let } \delta = \arcsin \epsilon. \\ 0 < \left|x - 0\right| < \delta \implies \left|\sin x - 0\right| < \epsilon \\ \lim_{x->0} \sin x = 0 \\ \lim_{x->0} \sin x = \sin 0 \\ \sin x \text{ is continuous at the origin} $$ In particular, is it safe to get from $-\epsilon < \sin x < \epsilon$ to $\arcsin (-\epsilon) < x < \arcsin (\epsilon)$ by applying the inverse function to all sides of the inequality? Can this operation be dangerous for some functions, functions whose inverses don't share a strictly positive or negative relation?
Provided you know properties of the arcsine your idea will be a proof. However, are you sure you do not need to know that $\sin$ is continuous to deduce properties of the arcsine? Spivak's calculus book has a note about a faulty proof he had in there in one of the pre-publication drafts. It used the square-root function in a proof that $x^2$ is continuous. But then, later, he used continuity of $x^2$ in the proof that the square-root exists. Fortunately, he caught the mistake before publication.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Integer solutions to $x^3+y^3+z^3 = x+y+z = 8$ Find all integers $x,y,z$ that satisfy $$x^3+y^3+z^3 = x+y+z = 8$$ Let $a = y+z, b = x+z, c = x+y$. Then $8 = x^3+y^3+z^3 = (x+y+z)^3-3abc$ and therefore $abc = 168$ and $a+b+c = 16$. Then do I just use the prime factorization of $168$?
Hint: Taking from where you left off: $ab \mid 168 \implies ab = \pm 1, \pm 2, \pm 4, \pm 6, \pm 7, \pm 8, \pm 12, \pm 14, \pm 21, \pm 24, \pm 28, \pm 42, \pm 56, \pm 84, \pm 168$. Even though it looks cumbersome, it is easy to solve. For example, $ab = 6 \implies c = \dfrac{168}{6} = 28 \implies a+b = 16-c = 16 - 28 = -12$. This case yields no solution. But other cases yield some solutions as you go through them you might find the work is not that tedious and may be fun indeed.
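If you want to confirm the outcome of the case work (not part of the hint), a direct finite search with $z = 8-x-y$ is enough; within the window below it should only find permutations of $(-16, 9, 15)$.

```python
# Sketch: brute-force search over a finite window of x and y, with z forced by the sum.
solutions = set()
for x in range(-200, 201):
    for y in range(-200, 201):
        z = 8 - x - y
        if x**3 + y**3 + z**3 == 8:
            solutions.add(tuple(sorted((x, y, z))))
print(solutions)   # expected: {(-16, 9, 15)}
```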
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
The mean of the of a sum is the sum of the means Transcription: The mean has good mathematical properties. The mean of a sum is the sum of the means. For example, if $y$ is total income, $u$ is "earned income" (wages and salaries), $v$ is "unearned income" (interest, dividends, rents), and $w$ is "other income" (social security benefits and pensions, etc.). Clearly, a person's total income is the sum of the incomes he or she receives from each source $y_i = u_i + v_i + w_i$. Then $$ \overline{y} = \overline{u} + \overline{v} + \overline{w}. $$ So it doesn't matter if we take the means from each income source and then add them together to find the mean total income, or add each individual's incomes from all sources to get his/her total income and then take the mean of that. We get the same value either way. I've been trying to prove this, but it doesn't make sense to me. e.g. $$ \frac{3 + 4 + 2}{3} = 3 $$ $$ \frac{6 + 14}{2} = 10 $$ $$ 3 + 10 \neq \frac{9 + 20}{2} $$ $ 3 + 10 $ is the sum of the means $ \frac{9 + 20}{2} $ is the mean of the sums which are $3+4+2=9$ and $6+14=20$
In your example, you have $u_1, u_2, u_3$, $v_1, v_2$, and you have correctly showed that $$ \text{mean}(u_1,u_2,u_3) + \text{mean}(v_1,v_2) $$ is not necessarily equal to $$ \text{mean}(u_1 + u_2 + u_3, v_1 + v_2), $$ so in that sense you are exactly correct. However, this is not what the statement was intended to express. What is intended is that if you have two (or more) lists with the same number of elements, and you take the mean of each list and sum them, that will be the same as summing the corresponding elements and then taking the mean. So if we have lists $u_1, u_2, u_3$ an $v_1, v_2, v_3$, it is saying that $$ \text{mean}(u_1 + u_2 + u_3, v_1 + v_2 + v_3) = \text{mean}(u_1,v_1) + \text{mean}(u_2,v_2) + \text{mean}(u_3,v_3). $$ Notice how in the phrase "sum of the means", the individual means must take elements of the same index -- we take the mean of $u_1, v_1$ and the mean of $u_2, v_2$ for example, rather than mean of $u_1, v_1, v_2$ or $u_1, u_2$ or anything else.
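Here is a tiny numerical illustration (not from the answer) with three hypothetical income lists of equal length:

```python
# Sketch: the mean of the row sums equals the sum of the column means.
import numpy as np

u = np.array([30, 45, 60])     # hypothetical earned income per person
v = np.array([5, 0, 10])       # unearned income
w = np.array([12, 8, 20])      # other income

y = u + v + w                  # each person's total income
print(y.mean(), u.mean() + v.mean() + w.mean())   # both 63.333...
```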
{ "language": "en", "url": "https://math.stackexchange.com/questions/1870904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Probability of selecting same factor. Willie Pikette randomly selects a factor of $144$. Betty Wheel selects a factor of $88$. What is the probability that they selected the same number? This is my incorrect approach (and please feel free to bash at me): $144$ has $15$ factors in total whereas $88$ has $8$ factors. Because $144 = {2^4}\times{3^2}$ and $88 = {2^3}\times{11}$, the common factors are related to $2: 1,2,{2^2},{2^3}$ for a total of $4$ factors. So the probability of choosing the common factor from $144$ is $\frac{4}{15}$ and the probability of choosing a common factor from $88$ is $\frac{4}{8}$. Using the rules of multiplication, $\frac{4}{15}\times\frac{4}{8} = \frac{2}{15}$ which is approximately $13.3\%$ This isn't the answer; rather it is $3.3\%$ I would gladly appreciate that you guys could not only provide the appropriate analysis and solution, but also point out the error to my solution (hopefully in layman's terms :)).
You've got the factoring part right, but the combinatorical part wrong: * *The number of ways to pick a pair of factors is $15\cdot8$ *The number of ways to pick a pair of identical factors is $4$ *Hence the probability of picking a pair of identical factors is $\frac{4}{15\cdot8}$ You have answered correctly for the probability of picking a pair of common factors. The question, however, is about the probability of picking a pair of identical factors.
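A brute-force check of this count (not part of the answer):

```python
# Sketch: enumerate every (factor of 144, factor of 88) pair and count the matches.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

d144, d88 = divisors(144), divisors(88)
matches = sum(1 for a in d144 for b in d88 if a == b)
print(len(d144), len(d88), matches)            # 15 8 4
print(matches / (len(d144) * len(d88)))        # 0.0333... = 1/30
```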
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Continuous functions in the product topology on $\Bbb{R}^{\Bbb{N}}$ I'm trying to prove the following statement: Let $(X, T )$ be a topological space, and let $f : X \rightarrow \Bbb{R^{\Bbb{N}}}$ be a function, where $\Bbb{R^{\Bbb{N}}}$ has the product topology. Let the coordinate functions of $f$ be called $f_n$, for $n \in \Bbb{N}$, so that for $x \in X$ $f(x) = (f_1(x), f_2(x), f_3(x), . . .)$ Then f is continuous if and only if $f_n : X \rightarrow \Bbb{R}$ is continuous for every $n \in \Bbb{N}$ One side is easy: If we write $f_n = \pi_n\circ f$, where $\pi_n$ is the projection onto the $n$-th coordinate, then whenever $f$ is continuous $f_n$ is continuous as well, since the product topology is the coarsest topology making every $\pi_n$ continuous. For the other direction I was thinking: $S_n = \{ \pi^{-1}_n(U) : U \subseteq \Bbb{R}$ open$\}$ consists of subbasic sets of $\Bbb{R}^{\Bbb{N}}_{prod}$. We want to show that the preimage of each such set is open, so $f^{-1}(\pi^{-1}_n(U))$ = $(\pi_n\circ f)^{-1}(U)$ = $f_n^{-1}(U)$, which is open since $f_n$ is continuous. Since this is true for all $n$, we have that $f$ is continuous.
Restrict attention to base elements. A base for the product topology is furnished by elements of the form $B=I_1\times\cdots \times I_N\times \prod_{k>N} {\mathbb R}$ with $N$ finite and each $I_k$ open. Each $f_k^{-1} (I_k)$, $1\leq k\leq N$ is open and their (finite) intersection which equals $f^{-1} (B)$ is thus open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Algebraic solution for the value of $x$. I solved this problem fifteen years ago without numerically solving equations of degree 4; I was happy to find a substitution that let me avoid attacking the quartic directly. Today my nephew, who is an enthusiastic student of mathematics, proposed the same problem to me, and it turns out I have become very rusty at this kind of exhaustive algebraic substitution. I tried for about 3 hours. Since I do not want to disappoint my nephew in his attempts, I am asking the ME community for help (and of course I will give all the credit to ME). My attempt. Note that $ \cos \alpha = \frac{1}{x}$, $\cos \alpha = \frac{y}{1}$, $\cos \alpha = \frac{y+1}{x+1}$. Then we have $$ xy=1. $$ By the Pythagorean Theorem we have $(y+1)^2+1^2=(x+1)^2 \Longleftrightarrow y^2+2y+1+1=x^2+2x+1$, i.e. $$ x^2-y^2+2(x-y)-1=0. $$ Update [ July 26 2016 ] I remember that at the time I solved this question, I tried something like \begin{align} x^2-y^2+2(x-y)-1=0 & \Longleftrightarrow (x-y)[x+2+y]=1 \\ & \Longleftrightarrow (x-y)[x+2\cdot 1+y]=1 \\ & \Longleftrightarrow (\sqrt{x}-\sqrt{y})(\sqrt{x}+\sqrt{y})[x+2\sqrt{x}\sqrt{y}+y]=1 \\ & \Longleftrightarrow (\sqrt{x}-\sqrt{y})(\sqrt{x}+\sqrt{y})^3=1 \\ & \Longleftrightarrow (\sqrt{x}+\sqrt{y})^3=\frac{1}{(\sqrt{x}-\sqrt{y})} \\ & \Longleftrightarrow (\sqrt{x}+\sqrt{y})^3=\frac{(\sqrt{x}+\sqrt{y})}{(x-y)} \\ & \Longleftrightarrow (\sqrt{x}+\sqrt{y})^2=\frac{1}{(x-y)} \\ & \Longleftrightarrow (x+y+2)=\frac{1}{(x-y)} \end{align} We then have two ways to tackle the problem: $$ \left\{\begin{array}{rl}\sqrt{x}\sqrt{y}=&1 \\ (\sqrt{x}+\sqrt{y})^3=&\frac{1}{(\sqrt{x}-\sqrt{y})} \end{array}\right. \quad\mbox{ or } \quad \left\{\begin{array}{rl}xy=&1 \\ (x+y+2)=&\frac{1}{x-y} \end{array}\right. $$
From $y = 1/x,$ then multiplying by $x^2,$ i got $$ x^4 + 2 x^3 - x^2 - 2 x - 1. $$ This looks bad. However, set $$ x = t - \frac{1}{2} $$ and you get rid of the cubic term, always worth a try. I was pleased to discover that the linear term also vanished, giving $$ t^4 - \frac{5}{2} t^2 - \frac{7}{16}, $$ and you can solve for $t^2$ with the Quadratic Formula. I get $$ t^2 = \frac{5 \pm \sqrt {32}}{4} $$ with two pure imaginary roots, a real negative, and a real positive for $t$ itself. Then $x$ is that minus 1/2. I get $1.132241883$ as $x.$ One good habit is to simply draw a graph of the function. I do them by hand with a calculator to find points. I have appended a good online graph. Notice that the graph appears to be symmetric across the vertical line $x = -\frac{1}{2}.$ We could confirm this by taking $ f(x) = x^4 + 2 x^3 - x^2 - 2 x - 1 $ and then checking whether $f(-1-x) = f(x)??$ In turn, this confirmation would tell us that the translation I tried would, in fact, give a graph symmetric across the y axis, meaning all even exponents. Calculus ideas that are, at least, consistent with the symmetry notion include $$ f'(x) = 2 (2x+1) \left(x^2 + x -1 \right) $$ and $$ f''(x) = 2 \left(6 x^2 + 6 x - 1 \right), $$ so that $x=-1/2$ gives a local maximum, while the inflection points are symmetric around $x = -1/2$ by the quadratic formula. Huh. Turns out the local minima really are along $y = -2,$ since $$ f(x) + 2 = \left( x^2 + x - 1 \right)^2. $$ Go Figure. ========================= parisize = 4000000, primelimit = 500509 ? x = -1 - w %1 = -w - 1 ? x^4 + 2 * x^3 - x^2 - 2 * x - 1 %2 = w^4 + 2*w^3 - w^2 - 2*w - 1 ? ================================== parisize = 4000000, primelimit = 500509 ? factor( x^4 + 2 * x^3 - x^2 - 2 * x - 1 ) %1 = [x^4 + 2*x^3 - x^2 - 2*x - 1 1] ? x = t - (1/2) %2 = t - 1/2 ? p = x^4 + 2 * x^3 - x^2 - 2 * x - 1 %3 = t^4 - 5/2*t^2 - 7/16 ? ==============================================
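A verification sketch (not part of the answer) of both the factorization $f(x)+2=(x^2+x-1)^2$ and the numerical value of the positive root:

```python
# Sketch: symbolic check of the factorization and numerical roots of the quartic.
import numpy as np
import sympy as sp

x = sp.symbols('x')
f = x**4 + 2*x**3 - x**2 - 2*x - 1
print(sp.expand(f + 2 - (x**2 + x - 1)**2))   # expected output: 0

roots = np.roots([1, 2, -1, -2, -1])
print(roots)                                   # the positive real root is about 1.13224
```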
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What are some examples of applications of integral quadratic forms in $n$ variables in algebraic topology? I'm reading the wiki page of qudratic forms. It simply seems curious to me what are some concrete examples of applications of integral quadratic forms in algebraic topology. I've searched a bit but a lot of the readings online are too involved. Hope to see some well-illustrated ones here. Thanks!
The paper J.H.C. WHITEHEAD: A certain exact sequence. Ann. Math. 52 (1950), 51-110. introduced a functor $\Gamma$ which is the ``universal quadratic functor" from Abelian groups to Abelian groups. Let $A$ be an Abelian group. Then $\Gamma(A)$ is the Abelian group with generators $\gamma a, a \in A$, and the following relations: * *$ \gamma(-a)=\gamma(a), a\in A$ *if $\beta(a,b)=\gamma(a+b)-\gamma a -\gamma b, a,b\in A$, then $\beta : A \times A\to \Gamma(A)$ is biadditive. This functor also occurs in the paper R. BROWN and J.-L. LODAY, ``Van Kampen theorems for diagrams of spaces'', — Topology 26 (1987) 311-334. This paper introduced a nonabelian tensor product $G \otimes H$ of groups $G,H$ which act on each other "compatibly", see this bibliography, so that in particular we have a tensor square $G \otimes G$ and a morphism of groups $\kappa: G \otimes G \to G$ induced by the commutator map $[\;,\;]: G \times G \to G$. The kernel of $\kappa$ is written $J_2(G)$ and is identified in the last paper as $\pi_3(SK(G,1))$. Further there is an exact sequence $$H_3(G)\to \Gamma(G^{ab}) \to J_2(G) \to H_2(G) \to 0 . $$ I think there are more applications of $\Gamma$ in work of H,-J. Baues.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show $\sum_{k=1}^{n}\frac{1}{k}\sim \ln(n)$ Here is an example of how we apply the integral test for convergence. Theorem: Consider an $n_{0}$ and a non-negative, continuous function $f$ defined on the unbounded interval $[n_0,+\infty[$, on which it is monotone decreasing. Then $\forall (p,q)\in\mathbb{N}^{2}$ such that $n_0\leq p <q$: $${\displaystyle \int _{p+1}^{q+1 }f(x)\,dx\leq \sum _{k=p+1}^{q }f(k)\leq \int _{p}^{q }f(x)\,dx}$$ I don't understand why they took the bounds from $1$ to $n+1$ instead of taking $n \geq 2,\ 1=n_0=p,\ q=n$: $$\displaystyle \int_{1+1}^{n+1}\dfrac{1}{x}dx\leq \sum_{k=1+1}^{n}\dfrac{1}{k}\leq \int_{1}^{n}\dfrac{1}{x}dx$$
The value of $p$ is different for each of the inequalities. * *The inequality $\displaystyle \int_1^{n+1} \frac{1}{x}\, dx \le \sum_{k=1}^n \frac{1}{k}$ comes from taking $p=0$ and $q=n$ in the theorem. *The inequality $\displaystyle \sum_{k=2}^n \frac{1}{k} \le \int_1^n \frac{1}{x}\, dx$ comes from taking $p=1$ and $q=n$ in the theorem. (It should also be noted that there's a typo in your notes: the $\int$ was erroneously replaced by a $\sum$ in the second bit.) The second of these points yields $$\sum_{k=1}^n \frac{1}{k} = 1 + \sum_{k=2}^n \frac{1}{k} \le 1 + \int_1^n \frac{1}{x}\, dx = 1 + \ln(n)$$ You could have done it the way that you suggest, but that would have yielded $$\ln(n+1)-\ln 2 \le \sum_{k=2}^n \frac{1}{k} \le \ln(n)$$ and hence $$\ln(n+1)-\ln(2)+1 \le \sum_{k=1}^n \frac{1}{k} \le \ln(n)+1$$ and you ultimately end up with the same result, since $$\ln(n+1)-\ln(2)+1 \sim \ln(n) \text{ and } \ln(n)+1 \sim \ln(n) \text{ as } n \to \infty$$
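A quick numerical look at the asymptotics (not part of the answer): the ratio of the harmonic sum to $\ln n$ tends to $1$, while the difference tends to Euler's constant.

```python
# Sketch: compare the harmonic sum with ln(n) for growing n.
import math

for n in (10, 1_000, 1_000_000):
    h = sum(1 / k for k in range(1, n + 1))
    print(n, h / math.log(n), h - math.log(n))   # ratio -> 1, difference -> 0.5772...
```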
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What are examples of irreducible but not prime elements? I am looking for a ring element which is irreducible but not prime. So necessarily the ring can't be a PID. My idea was to consider $R=K[x,y]$ and $x+y\in R$. This is irreducible because in any product $x+y=fg$ only one factor, say f, can have a $x$ in it (otherwise we get $x^2$ in the product). And actually then there can be no $y$ in $g$ either because $x+y$ has no mixed terms. Thus $g$ is just an element from $K$, i.e. a unit. I got stuck at proving that $x+y$ is not prime. First off, is this even true? If so, how can I see it?
Let $\rm\ R = \mathbb Q + x\:\mathbb R[x],\ $ i.e. the ring of real polynomials having rational constant coefficient. Then $\,x\,$ is irreducible but not prime, since $\,x\mid (\sqrt 2 x)^2\,$ but $\,x\nmid \sqrt 2 x,\,$ by $\sqrt 2\not\in \Bbb Q$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 2, "answer_id": 0 }
Proving that the roots of $1/(x + a_1) + 1/(x+a_2) + ... + 1/(x+a_n) = 1/x$ are all real Prove that the roots of the equation: $$\frac1{x + a_1} + \frac1{x+a_2} + \cdots + \frac1{x+a_n} = \frac1x$$ are all real, where $a_1, a_2, \ldots, a_n$ are all negative real numbers.
We can prove a stronger statement: the equation above has $n - 1$ real positive roots and a negative real one, and there are no other roots. Let $$g(x) = \sum_{i = 1}^n \frac1{x-a_i} - \frac1x,\qquad a_i \in (0, +\infty).$$ Note that $g(x)$ is defined in $\mathbb R \setminus \{0, a_1, \ldots, a_n\}$ and it's also continuous. Without loss of generality, suppose that $a_1 < a_2 < \cdots < a_n$. Now, consider the interval $(a_i, a_{i + 1})$. We have that: * *$\lim\limits_{x \to a_i^+} g(x) = +\infty$ *$\lim\limits_{x \to a_{i + 1}^-} g(x) = -\infty$ Therefore, from the definition of limit and the intermediate value theorem, we deduce that there is a root in $(a_i, a_{i + 1})$. We proved the existence of $n - 1$ real positive roots. It can be easily verified that * *$\lim\limits_{x \to -\infty} g(x) = 0^-$ *$\lim\limits_{x \to 0^-} g(x) = +\infty$ Again, from the definition of limit and the intermediate value theorem we conclude that there is another real root in the interval $(-\infty, 0)$. We are done: observe that the equation $g(x) = 0$ is equivalent, through some simple algebra, to a polynomial equation of degree $n$. Having found $n$ real roots, we conclude that there are no complex roots.
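A numerical illustration of the argument (not part of the answer), for the hypothetical choice $a=(1,2,3)$: clearing denominators and finding the roots of the resulting cubic shows one negative root and one root in each of $(1,2)$ and $(2,3)$, all real.

```python
# Sketch: clear denominators in g(x) = 0 with SymPy and inspect the roots numerically.
import sympy as sp

x = sp.symbols('x')
a = [1, 2, 3]
expr = sum(1 / (x - ai) for ai in a) - 1 / x
numerator = sp.numer(sp.together(expr))        # polynomial whose zeros solve g(x) = 0
roots = sp.Poly(numerator, x).nroots()
print(roots)   # expected: three real roots, one negative and two positive
```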
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
How to know if a segment is completely included between two lines? I have three segments (not necessarily parallel): * *blue $((ax1, ay1), (ax2, ay2))$ *green $((bx1, by1), (bx2, by2))$ *red $((cx1, cy1), (cx2, cy2))$ and a $margin$ value which is the width of the sky blue band in the sketch below (with infinite length and always centered on the blue segment). Is there a way to know if a segment is completely in the sky blue band, knowing the coordinates of each segment and the value of the margin?
Of course. Since the band is convex, to make sure the segment is completely in the band, you only need two endpoints in the band. (It is applicable for all convex domains)
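Here is a minimal sketch of that test (not part of the answer). It assumes `margin` is the full width of the band, so each endpoint's perpendicular distance to the infinite blue line must be at most `margin/2`; all coordinates below are hypothetical.

```python
# Sketch: both endpoints inside the band  =>  the whole segment is inside (convexity).
import math

def point_line_distance(px, py, ax1, ay1, ax2, ay2):
    """Distance from point (px, py) to the infinite line through the blue segment."""
    dx, dy = ax2 - ax1, ay2 - ay1
    return abs(dy * (px - ax1) - dx * (py - ay1)) / math.hypot(dx, dy)

def segment_in_band(seg, blue, margin):
    (x1, y1), (x2, y2) = seg
    return all(point_line_distance(px, py, *blue[0], *blue[1]) <= margin / 2
               for (px, py) in ((x1, y1), (x2, y2)))

blue = ((0.0, 0.0), (10.0, 0.0))          # hypothetical blue segment along the x-axis
print(segment_in_band(((1.0, 0.5), (8.0, -0.9)), blue, 2.0))   # True
print(segment_in_band(((1.0, 0.5), (8.0, -1.5)), blue, 2.0))   # False
```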
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Implied plus-minus sign in radical equation? Say we have: $$\sqrt{x+7}=5-x$$ Is it implicitly understood that the following also holds? $$-\sqrt{x+7}=5-x$$ I'm exploring the notion of "extraneous solutions." In this example, solving either equation leads to two results, namely x=2 and x=9. Standard practice is to check these solutions once they're found by plugging them into the original equation. Thus, x=2 is shown to be the right answer, assuming we're using the first equation above. Meaning, x=9 is extraneous. But really, x=9 is the solution to the second equation. What's the right way to think about the square root operator given this discussion? Does the second equation logically follow from the first and, if so, is it right to call the second equation's solution "extraneous?"
If what you say is true, then we should be able to add both sides, $$\sqrt{x+7}-\sqrt{x+7}=5-x+5-x$$ $$0=2(5-x)$$ $$\implies x=5$$ Going back, this is not true in either equation.
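A quick numerical check (not part of the answer) of which of the two equations from the question each candidate solves:

```python
# Sketch: test x = 2 and x = 9 against sqrt(x+7) = 5-x and -sqrt(x+7) = 5-x.
import math

for x in (2, 9):
    lhs_pos = math.sqrt(x + 7)
    rhs = 5 - x
    print(x, lhs_pos == rhs, -lhs_pos == rhs)
# 2: True False   -> x = 2 solves  sqrt(x+7) = 5 - x
# 9: False True   -> x = 9 solves -sqrt(x+7) = 5 - x
```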
{ "language": "en", "url": "https://math.stackexchange.com/questions/1871940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Limit of function. How can it suddenly change it's domain after simple manipulations I'm trying to refresh my math at the moment and have quickly become very confused by the calculation of limits of functions. For example, I solved the following $\lim_{x \to 0} \frac{7x^2+4x^4}{3x^3-2x^2}$ by first manipulating it to $\frac{7+4x^2}{3x-2}$ and then concluding that the limit is $-\frac{7}{2}$ The thing I don't understand is why the original expression isn't defined in f(0) while the second one is? I'm not very experienced with math but I don't understand why the domain of the function can be changed by just multiplying the numerator and denominator of a fraction with the same value (which in this case is $\frac{1}{x^2}$).
You are dividing by $x^2$, so $x^2$ now appears in a denominator and the manipulation is only valid for $x\neq0$. The simplified expression agrees with the original one at every $x\neq 0$, which is all that matters for the limit as $x\to 0$, but it is additionally defined at $x=0$, while the original function is not; so its value there can no longer be called $f(0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Determine if the following short exact sequence is split. Do the following short exact sequences split? $$0\longrightarrow A\longrightarrow B\longrightarrow \mathbb{Z}^2 \longrightarrow 0$$ $$0\longrightarrow\mathbb{Z}\longrightarrow A\longrightarrow B\longrightarrow 0$$ This is a question on a Ph.D Topology exam. I know what it means to be a split short exact sequence. In order for the short exact sequence $0\longrightarrow A\longrightarrow B\longrightarrow C \longrightarrow 0$ split you need one of the following: * *there exists map $B\longrightarrow A$ such that $A\longrightarrow B\longrightarrow A$ is the identity on $A$. *there exists map $C\longrightarrow B$ such that $C\longrightarrow B\longrightarrow C$ is the identity on $C$. *$B$ is isomorphic to the direct sum of $A$ and $C$. I have tried to find examples and nonexamples online of split short exact sequences and I've tried to figure out how to answer the above question, but I am struggling hard. I've tried to use the fact that these sequences are exact so we have the fact that the $Im(f_i)=Ker(f_{i+1})$. If someone could please give me an explanation to this question I would be very grateful. Thanks.
For the first short exact sequence, note that $\mathbb{Z}^2$ is a free $\mathbb{Z}$-module; it is thus projective, and the short exact sequence therefore splits. In fact, a characterizing property of projective modules is the following: $M$ is a projective $R$-module if and only if any short exact sequence $$0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0$$ of $R$-modules splits. The above proposition already implies the second short exact sequence does not necessarily split. An explicit example can be the following: $$0\longrightarrow \mathbb{Z}\longrightarrow \mathbb{Z}\longrightarrow \mathbb{Z}/2\mathbb{Z}\longrightarrow 0,$$ where the first map is the multiply-by-$2$ map while the second map is the natural projection (or modulo $2$ map). Note this short exact sequence does not split since $\mathbb{Z}$ is torsion free. Hope the above helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Spanning 2-regular subgraphs in even regular graphs. Theorem: Every regular graph of positive even degree has a spanning 2-regular subgraph. This was taken from Corollary 5.10 of ETH Zurich's notes on graph theory. The proof constructs an Eulerian tour, splits the vertices into in and out vertices on the tour, then invokes Hall's theorem on the regular (and now bipartite) graph to get a perfect matching. This is joined together to form the spanning 2-regular subgraph. While the proof seems relatively straightforward, I have two questions: First, where is the 2-regular, spanning subgraph in this 4-regular graph? It seems to me that following the theorem it should have one, but I have been unable to identify a 2-regular, spanning subgraph in this relatively simple graph. Second, doesn't this imply that every even-degree, connected, regular graph has a Hamiltonian cycle?
A $2$-regular graph is a union of disjoint cycles. (So it doesn't have to be exactly one cycle.) You yourself have provided an example of a $4$-regular graph with a $2$-regular spanning subgraph but no Hamiltonian cycle. An example of a $2$-regular subgraph in your linked graph is the union of the following two cycles: $(1,9,0,6,8,4,1)$ and $(2,10,3,5,7,2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivative of an analytic function at its fixed point Let $D$ be a bounded domain, and let $f(z)$ be an analytic function from $D$ to $D$. Show that if $z_{0}$ is a fixed point for $f(z)$, then $|f'(z_{0})|\leq 1$. All the conditions above make me think about the Schwarz Lemma to solve this problem, but I don't know how to construct a proper function satisfying all the conditions in the Schwarz Lemma.
Assume $D$ is simply connected. Let $\phi: D \to \Bbb U$ be a conformal map with $\phi(z_0)=0$, guaranteed by the Riemann Mapping Theorem. Define $g = \phi \circ f \circ \phi^{-1}: \Bbb U \to \Bbb U$. Then $g(0)=0$, so $|g'(0)| \leq 1$ by the Schwarz Lemma. But $(\phi^{-1})'(0) = 1/\phi'(z_0)$ and $f(z_0)=z_0$, so the chain rule gives $$g'(0) = \phi'(f(z_0))\,f'(z_0)\,(\phi^{-1})'(0) = \phi'(z_0)\,f'(z_0)\,\frac{1}{\phi'(z_0)} = f'(z_0),$$ so that $|f'(z_0)|=|g'(0)| \leq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to determine if a certain operation is associative based on its Cayley table I have the following table and I don't know how to determine if an operation is associative based on the table. Is there an easy way to do it, or is it just brute force? $$\begin{array}{|c|c|c|c|c|c|} \hline *& a & b & c &d &e \\ \hline a& a&b &c&b&d\\ \hline b& b&c &a&e&c\\ \hline c& c &a &b&b&a\\ \hline d&b&e&b&e&d\\ \hline e&d&b&a&d&c\\ \hline \end{array}$$ We can see that it's not commutative because $b*e \neq e*b$, but how do we check if it's associative?
Light's associativity test is based on the following lemma. Let $*$ be a binary operation on the set $S$ (called product). Definition: A subset $G$ of $S$ generates $S$ if every element of $S$ can be obtained as a product of elements of $G$. Lemma: If $G$ generates $S$ then $*$ is associative on $S$ if and only if $$\forall (x \in S) \forall (g \in G) \forall (z \in S): x*(g*z)=(x*g)*z$$ In your example $\{e\}$ generates $\{a,b,c,d,e\}$, because $$e=e$$ $$c=e^2$$ $$a=e*c=e*(e^2)$$ $$d=a*e=(e*(e^2))*e$$ $$b=a*d=(e*(e^2))*((e*(e^2))*e)$$ so you have to check $$\forall (x \in S) \forall (z \in S): x*(e*z)=(x*e)*z$$ But we have $$c*(e*e)=c*c=b$$ $$(c*e)*e=a*e=d$$ and so $$c*(e*e) \ne (c*e)*e$$ So $*$ is not associative.
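For completeness, here is a quick brute-force check in Python (a sketch I added; the dictionary is just the question's Cayley table transcribed, so `T[x][y]` stands for $x*y$). It prints the first triple it finds that violates associativity:

```python
from itertools import product

# Cayley table from the question: T[x][y] means x * y
T = {
    'a': {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'b', 'e': 'd'},
    'b': {'a': 'b', 'b': 'c', 'c': 'a', 'd': 'e', 'e': 'c'},
    'c': {'a': 'c', 'b': 'a', 'c': 'b', 'd': 'b', 'e': 'a'},
    'd': {'a': 'b', 'b': 'e', 'c': 'b', 'd': 'e', 'e': 'd'},
    'e': {'a': 'd', 'b': 'b', 'c': 'a', 'd': 'd', 'e': 'c'},
}

# Plain brute force: test all n^3 triples
for x, y, z in product(T, repeat=3):
    if T[T[x][y]][z] != T[x][T[y][z]]:
        print(f"({x}*{y})*{z} = {T[T[x][y]][z]}  but  {x}*({y}*{z}) = {T[x][T[y][z]]}")
        break
```

Light's test reduces the work from checking all $n^3$ triples to $n^2$ checks per generator, which is the point of the lemma above.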
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Calculation of $\frac{a_{20}}{a_{20}+b_{20}}$? The value of $\frac{a_{20}}{a_{20}+b_{20}}$ is $-39$ (this is what the answer sheet gives) for the recursive system of equations $$\begin{cases} a_{n+1}=-2a_n-4b_n \\ b_{n+1}=4a_n+6b_n\\ a_0=1,\ b_0=0 \end{cases}$$ This is taken from the $2007$ GATE entrance exam in India. Can anyone show me how to calculate this answer? Update 1: Three answers have been added, but my main problem remains: none of the three answers addresses the main aspect of this question, namely the simplification and substitution in the last part of the solution.
$4b_n=-a_{n+1}-2a_n$, $4b_{n+1}=-a_{n+2}-2a_{n+1}$, $-a_{n+2}-2a_{n+1}=16a_n-6a_{n+1}-12a_n$, $a_{n+2}-4a_{n+1}+4a_n=0$. Do you know how to solve that kind of recurrence? Here's an approach. $a_{n+2}-4a_{n+1}+4a_n=(a_{n+2}-2a_{n+1})-2(a_{n+1}-2a_n)=c_{n+1}-2c_n$, where we are defining $c_n$ by $c_n=a_{n+1}-2a_n$. Now we have to solve $c_{n+1}-2c_n=0$, and the solution is obviously $c_n=c_02^n$ (and we can work out $c_0$ easily enough). So now we have to solve $a_{n+1}-2a_n=c_02^n$. Are we at a recurrence you can solve yet?
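If it helps to sanity-check the target value before solving the recurrence by hand, a few lines of Python (my own sketch, not part of the original solution) iterating the system directly give $-39$; for reference, the closed forms work out to $a_n=(1-2n)2^n$ and $b_n=n\cdot 2^{n+1}$.

```python
a, b = 1, 0  # a_0, b_0
for _ in range(20):
    a, b = -2 * a - 4 * b, 4 * a + 6 * b   # apply the recursion once
print(a / (a + b))  # -39.0, matching the answer sheet
```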
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Hadamard-like complex variable substitution \begin{align} \frac\pi a &= \int_{-\infty}^\infty dxdye^{-a(x^2+y^2)}\\ \tag{1}&= \int_{-\infty}^\infty dxdye^{-a(x+iy)(x-iy)} \end{align} So far so good. Now introduce a complex variable $z$ and its conjugate $z^*$ such that $$ x = \frac{z+z^*}{2}, y=\frac{z-z^*}{2i} $$ I suppose that means $z=x+iy$ and $z^*=x-iy$. According to my professor, this means that the Gauss integral $(1)$ becomes \begin{align} \tag{2}\frac\pi a &= \frac{1}{2i}\int dzdz^*e^{-azz^*} \end{align} I don't understand how to substitute the variables, and from where to where this integral runs. I tried it like this: $$ dx=\frac 12(dz+dz^*), dy=\frac 1{2i}(dz-dz^*)\\ dxdy = \frac{1}{4i}(dz^2-{dz^*}^2) $$ * *How can I show that the last term equals $\frac{1}{2i}dzdz^*$? *What are the limits of the integral $(2)$?
The substitution you made yields the Jacobian $$\begin{vmatrix}\cfrac12&\;\;\cfrac12\\\cfrac1{2i}&-\cfrac1{2i}\end{vmatrix}=-\frac1{2i}\implies dxdy=-\frac1{2i}dzdz^*$$ The minus sign is not that relevant here: without a specific integration path, and thus without a choice of orientation, it is rather moot. I can't tell anything about the path taken, but it would seem to be something in $\;\Bbb C\times\Bbb C\;$. I really don't know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Upper bound to a series with binomial coefficients Let $c>0$ and $m$ be a positive integer. The following sum is convergent, but how fast does it grow with $m$ when $m$ is large? $$ f(m)= \sum\limits_{n=1}^{\infty} \binom{n + m}{n} e^{-c \, n} $$ Is there a polynomial in $m$, say $g(m)$, such that $$f(m) \leq g(m) ?$$
$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}} \newcommand{\mc}[1]{\,\mathcal{#1}} \newcommand{\mrm}[1]{\,\mathrm{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \color{#f00}{\mrm{f}\pars{m}} & \equiv \sum_{n = 1}^{\infty}{n + m \choose n}\expo{-c\, n} \\[5mm] & = \sum_{n = 1}^{\infty}{-\pars{n + m} + n - 1 \choose n}\pars{-1}^{n}\expo{-c\, n} \quad\pars{~Binomial "Negation"~} \\[5mm] & = \sum_{n = 1}^{\infty}{-m - 1 \choose n}\pars{-\expo{-c}}^{n} = \bracks{1 + \pars{-\expo{-c}}}^{-m - 1}\,\,\, -\,\,\, 1\quad \pars{~Newton\ Binomial~} \\[5mm] & = \color{#f00}{{1 \over \pars{1 - \expo{-c}}^{m + 1}} - 1} \end{align}
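A quick numerical sanity check of this closed form (my own sketch; the infinite sum is simply truncated at a large cutoff):

```python
from math import comb, exp

def f_direct(m, c, terms=2000):
    # truncated version of sum_{n>=1} C(n+m, n) e^{-c n}
    return sum(comb(n + m, n) * exp(-c * n) for n in range(1, terms))

def f_closed(m, c):
    return 1 / (1 - exp(-c)) ** (m + 1) - 1

for m, c in [(1, 1.0), (3, 0.5), (5, 2.0)]:
    print(m, c, f_direct(m, c), f_closed(m, c))   # the two columns agree
```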
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Counting the degrees of a face in planar graph I've been having trouble wrapping my head around this concept. How do I calculate the degree of a face in planar graphs. In our textbook, we are given this image: where $f_1, f_2, f_3, f_4$ are the faces. In the textbook, it gives the degrees of the four faces as $$deg(f_1) = 6$$ $$deg(f_2) = 3$$ $$deg(f_3) = 5$$ $$deg(f_4) = 14$$ I don't understand how they got 6 for $f_1$ and 14 for $f_4$. I know this is a very simple question, but I'm just not getting it for some reason. I would appreciate the help. Thanks
Think of the edges as being two sided. As you move around the face of $f_1$ you see both sides of the leaf edge, so that edge is counted twice. Likewise, when you travel around the outer face you see the bridge edge twice (both sides of it) so it is counted twice. Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1872960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
How could a statement be true without proof? Gödel's incompleteness theorem states that there may exist true statements which have no proofs in a formal system of particular axioms. Here I have two questions: 1) How can we say that a statement is true without a proof? 2) What does self-reference have to do with this? The Gödel sentence "G" can say that SUB(a, a, no proof), but could this just be an arbitrary judgement about the non-provability of "a", because it may simply have a proof which is not yet revealed or discovered?
* *Assuming your formal system is consistent, Gödel shows there is a statement in that system whose interpretation is true but that is unprovable in the system. The statement is actually provable, but not in that system: you need the additional assumption that the system is consistent, and that is not provable in the system (unless the system happens to be inconsistent!). *There's no "arbitrary judgement" here. If there were a proof of "a", you could use that to produce a proof of 0=1. Thus if the system is consistent, "a" is not provable in the system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
How does changing the log base affect the interpretation of information entropy The entropy of a random variable is defined as: $$H(X)= -\sum_{i=1}^n p_i \log_2(p_i)$$ which, as far as I understand, can be interpreted as how many yes/no questions one would have to ask on average to find out the value of the random variable $X$. But what if the log base is changed to, for example, $e$? What would the interpretation be then? Is there an intuitive explanation?
Obviously this interpretation breaks down (at least somewhat) for non-integer bases, but for any logarithm base $b$, not just base $b=2$, we can interpret the information with respect to that base, $$H_b(X) = -\sum_{i=1}^n p_i \log_b(p_i)$$ as the average number of $b$-ary questions one would have to ask in order to find out the value of the random variable $X$. What do I mean by a $b$-ary question? One that has at most $b$ possible answers. Any yes/no question is a $2$-ary question, and if one can reword it cleverly enough, any $2$-ary question can be stated as a yes/no question (since yes and no are both exhaustive and mutually exclusive). More precisely, a $b$-ary question is one that has $b$ possible answers, which together exhaust all possibilities and each of which is mutually exclusive. In general, for $b \not=2$, it might be difficult to think of truly $b$-ary questions which don't involve artificially restricting the possible answers. Consider, for example, trying to determine the value of a dice roll. If you are limited to $2$-ary questions (e.g. "did it roll a $1$?") then on average it will take $\log_2 6$ questions to ascertain what value it rolled. However, if you are allowed to ask the $6$-ary question "which of $1, 2, 3, 4, 5, 6$ did it roll?", then it will only take you $\log_6 6 =1$ question on average to ascertain the value. If you consider the question tree consisting of all possible questions to derive the answer, then $b$ is just the number of branches emanating from each node (since each node is a question). If you increase $b$, you will decrease the number of questions necessary on average to ascertain the answer because each node will have more branches emanating from it, and thus you can traverse a larger portion of the set of possible answers by hitting fewer nodes. This is in fact very similar to the idea of recursion equations and recursion trees in computer science; in particular, I encourage you to look at Section 4.4 of Cormen et al, Introduction to Algorithms, for an explanation of how the logarithm enters naturally into these types of situations. We can think of asking a $b$-ary question as dividing the problem of finding the value of the random variable $X$ into $b$ subproblems of equal size -- then the analogy with recursion should become more clear (hopefully).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove $[\sin x]' = \cos x$ without using $\lim\limits_{x\to 0}\frac{\sin x}{x} = 1$ I came across this question: How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$? From the comments, Joren said: L'Hopital Rule is easiest: $\displaystyle\lim_{x\to 0}\sin x = 0$ and $\displaystyle\lim_{x\to 0} x = 0$, so $\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0}\frac{\cos x}{1} = 1$. To which Ilya readily answered: I'm extremely curious how will you prove then that $[\sin x]' = \cos x$ My question: is there a way of proving that $[\sin x]' = \cos x$ without using the limit $\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = 1$. Also, without using anything else $E$ such that the proof of $E$ uses the limit or $[\sin x]' = \cos x$. All I want is to be able to use L'Hopital in $\displaystyle\lim_{x\to 0}\frac{\sin x}{x}$. And for this, $[\sin x]'$ has to be evaluated first. Alright... the definition that some requested. Def of sine and cosine: Have a unit circumference in the center of cartesian coordinates. Take a dot that belongs to the circumference. Your dot is $(x, y)$. It relates to the angle this way: $(\cos\theta, \sin\theta)$, such that if $\theta = 0$ then your dot is $(1, 0)$. Basically, it's a geometrical one. Feel free to use trigonometric identities as you want. They are all provable from geometry.
What is required here is not a proof of $\sin'=\cos$ without using $$\lim_{\phi\to0}{\sin\phi\over\phi}=1\tag{1}\ ,$$ but a proof of the basic limit $(1)$ using the "geometric definition" of sine provided by the OP. To this end we shall prove "geometrically" that $$\sin\phi<\phi\leq\tan\phi\qquad\left(0<\phi<{\pi\over2}\right)\ .\tag{2}$$ The inequalities $(2)$ imply $$\cos\phi\leq{\sin\phi\over\phi}<1\qquad\left(0<\phi<{\pi\over2}\right)\ ,$$ so that $(1)$ follows from $\lim_{\phi\to0}\cos\phi=1$ and the squeeze theorem. Comparing segment length to arc length immediately shows that $$\sin\phi=2\sin{\phi\over2}\cos{\phi\over2}\leq2\sin{\phi\over2}<\phi\ .$$ In order to prove that $\phi\leq\tan\phi$ we somehow have to use how the length of curves is defined. I'm referring to the following figure. If $0\leq\alpha<\beta<{\pi\over2}$ then $$s'=\tan\beta-\tan\alpha={\sin(\beta-\alpha)\over\cos\beta\cos\alpha}=2\sin{\beta-\alpha\over2}\>{\cos{\beta-\alpha\over2}\over\cos\beta}\>{1\over\cos\alpha}>2\sin{\beta-\alpha\over2}=s\ .$$ It follows that the length $L_P$ of any polygonal approximation $P$ to the circular arc $AB$ is $\ <\tan\phi$, and this implies $$\phi:=\sup_PL_P\leq\tan\phi\ .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 4 }
Matrix differential equation of the form $X'=CX$ Let $n \in \mathbb{N}^{\ast}$ and $\mathrm{Sym}(n)$ (respectively $\mathrm{Spd}(n)$) denote the linear space (respectively set) of real $n \times n$ symmetric (respectively positive definite) matrices. I am interested in the following matrix differential equation : $$ \frac{d\mathbf{X}}{dt} = \mathbf{V}\mathbf{S}^{-1} \mathbf{X}(t), \; t \in \mathbb{R} \tag{$\ast$} $$ where $\mathbf{V} \in \mathrm{Sym}(n)$, $\mathbf{S} \in \mathrm{Spd}(n)$ are known. The solutions of $(\ast)$ are of the form : $$ t \in \mathbb{R}, \, \mathbf{X}(t) = \exp(t\mathbf{V}\mathbf{S}^{-1})\mathbf{X}(0). $$ My question is the following : if $\mathbf{X}(0) \in \mathrm{Sym}(n)$, does $(\star)$ have solutions in $\mathrm{Sym}(n)$ ? By that, I mean : is there a solution $\mathbf{X} : \mathbb{R} \to \mathrm{Sym}(n)$ of $(\ast$) ?
In your generality, no. Let $V = \begin{pmatrix}1 & 1 \\ 1 & 1\end{pmatrix}$ and $S^{-1} = \begin{pmatrix} 2 & 0 \\ 0 & 1\end{pmatrix}$. Then $$ VS^{-1} = \begin{pmatrix} 2& 1 \\ 2 & 1 \end{pmatrix} $$ Let $X(0) = \begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix} $, the corresponding solution to the ODE is $$ X(t) = \begin{pmatrix} e^{3t} & 1 \\ e^{3t} & -2 \end{pmatrix} $$ which is not symmetric.
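The counterexample is easy to confirm numerically (a sketch using scipy; not part of the original answer):

```python
import numpy as np
from scipy.linalg import expm

V = np.array([[1.0, 1.0], [1.0, 1.0]])        # symmetric
S_inv = np.array([[2.0, 0.0], [0.0, 1.0]])    # so S = diag(1/2, 1) is positive definite
X0 = np.array([[1.0, 1.0], [1.0, -2.0]])      # symmetric initial condition

Xt = expm(1.0 * V @ S_inv) @ X0               # solution at t = 1
print(Xt)                                     # [[e^3, 1], [e^3, -2]]
print(np.allclose(Xt, Xt.T))                  # False: the solution has left Sym(n)
```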
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Joint probability density function $(X^2,Y^2)$ Let $X$ and $Y$ be random variables having the following joint probability density function $f(x,y)=\begin{cases} \frac{3}{8}xy & x\geq0,\,y\geq0,\:x+y\leq2,\\ 0 & \mbox{otherwise}. \end{cases}$ Find the joint probability density function of $X^2$ and $Y^2$. This is my solution: $z=x^2, \Longrightarrow x=\sqrt z$, because $x\geq0$ $w=y^2, \Longrightarrow y=\sqrt w$, because $y\geq0$. The Jacobian $J$ of the inverse transformation would then equal: $J={ \left|\begin{array}{cc} \frac{1}{2\sqrt{z}} & 0\\ 0 & \frac{1}{2\sqrt{w}} \end{array}\right|}=\frac{1}{4\sqrt{wz}}$ so, the joint probability density function \begin{align} g(z,w)&=\begin{cases} \frac{3}{8}\sqrt z \sqrt w \frac{1}{4\sqrt wz}& w\geq0,\,z\geq0,\:\sqrt w+\sqrt z\leq2\\ 0 & \mbox{otherwise} \end{cases} \\ &=\begin{cases} \frac{3}{32}\,\,\,\,& w\geq0,\,z\geq0,\:\sqrt w+\sqrt z\leq2\\ 0 & \mbox{otherwise} \end{cases} \end{align} but I have a problem, $\int_{0}^{4}\int_{0}^{w-4\sqrt{w}+4}\frac{3}{32}\,dz\,dw=\frac{1}{4}\neq1$. Please help me at my mistake
I think the issue is the original pdf: $$ \int_{\mathbb{R}^2}f(x,y)\;dydx=\frac{3}{8}\int_0^2\int_{0}^{2-x}xy\;dydx=\frac{3}{8}\int_0^2\frac{x(2-x)^2}{2}\;dx$$ $$ =\frac{3}{8}\int_0^2\frac{(2-x)x^2}{2}\;dx=\frac{3}{8}\cdot\frac{2}{3}=\frac{1}{4}$$
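A quick numeric confirmation of that normalization (my sketch, using scipy):

```python
from scipy.integrate import dblquad

# integrate (3/8) x y over the triangle x >= 0, y >= 0, x + y <= 2
val, err = dblquad(lambda y, x: 3 / 8 * x * y, 0, 2, lambda x: 0, lambda x: 2 - x)
print(val)  # 0.25: the given f integrates to 1/4, not to 1
```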
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Flipping coins - percentages of heads vs tails Suppose I flip a coin multiple times, count the number of times it fell on heads and the number of times it fell on tails, and keep track of them. After how many flips, on average, will the delta between the percentage of heads and the percentage of tails be less than 0.1%?
I want to check if a coin is fair (lands 50% of the time on each side; I assume that a delta of 0.1% between them counts as fair). How many flips do I need in order to be 99% confident that the coin is fair? For large $n$ you can apply de Moivre-Laplace, i.e. approximate the binomial distribution by the normal distribution. Let $\hat p=\frac{X}{n}$, where $X\sim Bin(n,0.5)$; its standard deviation is $\sqrt{\frac{0.5\cdot0.5}{n}}=\frac{0.5}{\sqrt n}$. The equation becomes $P(|\frac{X}n-0.5|\leq 0.001)=2\Phi\left(\frac{0.001}{0.5/\sqrt n} \right)-1=0.99$ $\Phi\left(\frac{0.001\sqrt n}{0.5} \right)=0.995$ $\frac{0.001\sqrt n}{0.5} =\Phi^{-1}\left(0.995\right)=2.576$ $\sqrt n=2.576\cdot \frac{0.5}{0.001}=1288$ $n=1288^2\approx 1{,}658{,}944$
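The same calculation in code (a sketch; it just restates the normal-approximation arithmetic above with scipy's quantile function):

```python
from scipy.stats import norm

z = norm.ppf(0.995)            # ~2.576 for 99% two-sided confidence
delta = 0.001                  # allowed deviation of the heads fraction from 0.5
sigma = 0.5                    # sd of a single fair-coin indicator, so sd(X/n) = 0.5/sqrt(n)
n = (z * sigma / delta) ** 2
print(round(n))                # ~1.66 million flips (1288^2 = 1,658,944 with z rounded to 2.576)
```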
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Help with simplification of a rational expression (with fractional powers) Can you please help me see what I don't see yet. Here's a problem from a high school textbook (ISBN 978-5-488-02046-7 p.9, #1.029): $$ \frac{ (a^{1/m}-a^{1/n})^{2} \cdot 4a^{(m+n)/mn} }{ (a^{2/m}-a^{2/n}) (\sqrt[m]{a^{m+1}} + \sqrt[n]{a^{n+1}}) }$$ Here's my try at it: $$\frac{ (a^{1/m} - a^{1/n}) (a^{1/m} - a^{1/n}) \cdot 4 a^{(1/m) + (1/n)} }{ (a^{1/m} - a^{1/n}) (a^{1/m} + a^{1/n}) \cdot a (a^{1/m} + a^{1/n}) }$$ ...which is $$\frac{ (a^{1/m} - a^{1/n}) \cdot 4 a^{(1/m) + (1/n)} }{ a (a^{1/m} + a^{1/n})^2 }$$ Wolfram Alpha's simplify stops here, too. I don't see where to go from here. The final form, according to the book, is this: $$\frac{ 1 }{ a (a^{1/m} - a^{1/n}) }$$ How did they do it? PS I agree with @You're In My Eye that there's a misprint and instead of multiplication in the numerator there should be an addition sign. I want to express my sincere gratitude to everyone who spent their time to help me. Thank you guys very much.
After checking the older edition of the book, I'm quite sure that the original problem looked like this: $$\frac{ (a^{1/m}-a^{1/n})^{2} \color{blue}{+} 4a^{(m+n)/mn} }{ (a^{2/m}-a^{2/n}) (\sqrt[m]{a^{m+1}} + \sqrt[n]{a^{n+1}}) }$$ Now we have: $$(a^{1/m}-a^{1/n})^{2} \color{blue}{+} 4a^{(m+n)/mn} =(a^{1/m}+a^{1/n})^{2}$$ $$(a^{2/m}-a^{2/n}) (\sqrt[m]{a^{m+1}} + \sqrt[n]{a^{n+1}})=a(a^{1/m}-a^{1/n})(a^{1/m}+a^{1/n})^2$$ Finally we get: $$\frac{ (a^{1/m}-a^{1/n})^{2} \color{blue}{+} 4a^{(m+n)/mn} }{ (a^{2/m}-a^{2/n}) (\sqrt[m]{a^{m+1}} + \sqrt[n]{a^{n+1}}) }=\frac{1}{a(a^{1/m}-a^{1/n})}$$
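The corrected identity is easy to spot-check numerically (a sketch; the values of $a$, $m$, $n$ are arbitrary):

```python
a, m, n = 2.7, 3, 5  # arbitrary test values with a > 0

lhs = ((a**(1/m) - a**(1/n))**2 + 4 * a**((m + n) / (m * n))) / (
    (a**(2/m) - a**(2/n)) * (a**((m + 1)/m) + a**((n + 1)/n))
)
rhs = 1 / (a * (a**(1/m) - a**(1/n)))
print(lhs, rhs)  # the two values agree
```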
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Compute trigonometric limit without use of de L'Hospital's rule $$ \lim_{x\to 0} \frac{(x+c)\sin(x^2)}{1-\cos(x)}, c \in \mathbb{R^+} $$ Using de L'Hospital's rule twice it is possible to show that this limit equals $2c$. However, without the use of de L'Hospital's rule I'm lost with the trigonometric identities. I can begin by showing $$ \lim\frac{x\sin (x^2)(1+\cos(x))}{\sin^2x}+\frac{c\sin x^2(1+\cos x)}{\sin^2x}=\lim\frac{\sin (x^2)(1+\cos(x))}{\sin x}+\frac{c\sin x^2(1+\cos x)}{\sin^2x}, $$ and here I'm getting stuck. I will appreciate any help.
Your first step is very good: by multiplying numerator and denominator by $1+\cos x$, the limit becomes $$ \lim_{x\to0}(x+c)(1+\cos x)\frac{\sin(x^2)}{\sin^2 x}= \lim_{x\to0}(x+c)(1+\cos x)\frac{\sin(x^2)}{x^2}\frac{x^2}{\sin^2 x} $$ and it should be easy to finish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 1 }
Find a point R such that the angle increases 3 times Let $X=(4,0)$, $Y=(4,3)$, $O=(0,0)$ be points. I have to find a point $R$ with integer coordinates such that $3|\angle XOY|=|\angle XOR|$. I think it's $R=(-3,8)$, but I am not sure. How can I prove it? Thanks for your help.
If $v, w$ are vectors in $\mathbb{R}^2$, the cosine of the angle between them is equal to $$\frac{v \cdot w}{||v|| \cdot ||w||}$$ Also, the triple angle identity can be derived from the formula for $\cos(x+y)$: $$\cos 3x = 4 \cos^3 x - 3 \cos x$$ Let $\phi$ be the angle between $X$ and $Y$, and $\theta$ be the angle between $X$ and $R$. Try using these formulas to see whether $\cos 3 \phi = \cos \theta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to factorize the polynomial $a^6+8a^3+27$? I would like to factorize $a^6+8a^3+27$. I got different answers but one of the answers is $$(a^2-a+3)(a^4+a^3-2a^2+3a+9)$$ Can someone tell me how to get this answer? Thanks.
By the rational root theorem, we guess the factors include at least one of the following: $$x \pm 27$$ $$x \pm 9$$ $$x \pm 3$$ $$x \pm 1$$ Where did I get those? * *Take the absolute value of the constant term (with the polynomial written in descending order), $27$, and the absolute value of the leading coefficient, $1$. *Take the factors of each: $$27: \color{red}{1,3,9,27}$$ $$1: \color{green}{1}$$ *Consider $x-a$ where $a$ may be any of the following: $$\pm \frac{\color{red}{27}}{\color{green}{1}}$$ $$\pm \frac{\color{red}{9}}{\color{green}{1}}$$ $$\pm \frac{\color{red}{3}}{\color{green}{1}}$$ $$\pm \frac{\color{red}{1}}{\color{green}{1}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1873963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
About the eigenvalues of a block Toeplitz (tridiagonal) matrix I have found the following $n\times n$ squared matrix in one stability analysis problem (i.e. I have to identify the sign of its eigenvalues) $$ A(\theta) = \begin{bmatrix} W(\theta)+W(\theta)^T & -W(\theta) & 0 & \dots & 0 & 0 \\ -W(\theta)^T & W(\theta)+W(\theta)^T & -W(\theta) & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \dots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & -W(\theta)^T & W(\theta)+W(\theta)^T \end{bmatrix}, $$ where $W(\theta) = \begin{bmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ is a rotational matrix with $\theta\in(-\frac{\pi}{2}, \frac{\pi}{2})$. Therefore the block diagonal is $W + W^T = (2\cos\theta) I$, with $I$ being the identity matrix. For the special case $\theta = 0$ we have that $$ A(0) = 2\begin{bmatrix} 1 & -0.5 & 0 & \dots & 0 & 0 \\ -0.5 & 1 & -0.5 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \dots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 0.5 & 1 \end{bmatrix} \otimes I, $$ where $\otimes$ denotes the Kronecker product. The eigenvalues of $A(0)$ has analytical solution (https://en.wikipedia.org/wiki/Tridiagonal_matrix) and it can be checked that all of them are positive, in fact it can be decomposed as $B^TB$ (with $B$ being full row rank). So I am wondering whether I can decompose $A(\theta) = B^TB$, or if I can show that the eigenvalues are still positive (or a counter-example) with my constraint in $\theta$, i.e., the main diagonal of $A$ is always positive. Any ideas or suggestions?
$A=B^TB$ for some $B$ if and only if $A$ is positive semidefinite. Let $c=\cos\theta$ and $w=e^{i\theta}$. Then $A$ is unitarily similar to $C\oplus \overline{C}$, where $$ C=\pmatrix{ 2c&-w\\ -\bar{w}&\ddots&\ddots\\ &\ddots&\ddots&-w\\ &&-\bar{w}&2c}. $$ Now $C$ is a Hermitian tridiagonal Toeplitz matrix. Its eigenvalues are given by $$ \lambda_k = 2c+2\cos\left(\frac{k\pi}{n+1}\right);\ k=1,2,\ldots,n. $$ Therefore, $C$ and in turn $A$ are positive semidefinite iff $c\ge-\cos\left(\frac{n\pi}{n+1}\right)$, i.e. iff $\ |\theta|\le\frac{\pi}{n+1}$. (Here I suppose $A$ is $2n\times2n$ -- or $n$ blocks by $n$ blocks -- and $C$ is $n\times n$.)
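A numerical sanity check of the eigenvalue formula (my own numpy sketch; here $n$ counts the $2\times2$ blocks, so $A$ is $2n\times2n$):

```python
import numpy as np

def A(theta, n):
    W = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    M = np.zeros((2 * n, 2 * n))
    for i in range(n):
        M[2*i:2*i+2, 2*i:2*i+2] = W + W.T          # diagonal blocks
        if i + 1 < n:
            M[2*i:2*i+2, 2*i+2:2*i+4] = -W         # upper off-diagonal blocks
            M[2*i+2:2*i+4, 2*i:2*i+2] = -W.T       # lower off-diagonal blocks
    return M

n, theta = 5, 0.3
eig = np.linalg.eigvalsh(A(theta, n))
k = np.arange(1, n + 1)
formula = 2 * np.cos(theta) + 2 * np.cos(k * np.pi / (n + 1))   # each value appears twice
print(np.allclose(np.sort(eig), np.sort(np.concatenate([formula, formula]))))  # True
```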
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$2^{n!}\bmod n$ if $n$ is odd Given an odd number $n$, find $2^{n!}\bmod n$. What if $n$ is even? I do not see how to deal with the $n!$ in the exponent of $2$. Any help will be truly appreciated.
For the first question, note that $\varphi(n)$ divides $n!$, and use Euler's Theorem. The $n$ even problem is more interesting. Let $n=2^km$ where $m$ is odd. Then by the result for odd moduli, we have $2^{n!}\equiv 1\pmod{m}$. Also, $2^{n!}\equiv 0\pmod{2^k}$. Now use the Chinese Remainder Theorem. Added: In more detail, we want to find a $t$ such that $1+tm$ is divisible by $2^k$. So we are looking at the congruence $tm\equiv -1\pmod{2^k}$. This can be solved in the usual way, by multiplying both sides by the inverse of $m$ modulo $2^k$.
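Both cases are easy to check in Python (a sketch; the three-argument pow keeps the huge exponent manageable):

```python
from math import factorial

for n in range(3, 26, 2):                  # odd n: the answer is 1
    assert pow(2, factorial(n), n) == 1    # phi(n) divides n! and gcd(2, n) = 1

n = 12                                     # even example: n = 2^2 * 3
print(pow(2, factorial(n), n))             # 4, i.e. the CRT solution of x = 0 mod 4, x = 1 mod 3
```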
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
3-sphere complex co-ordinates I am currently trying to understand some mathematical physics papers that deal with torus knots. I am trying to find the origin of a complex scalar field used. These fields are somehow related to the Hopf fibration. I have spent the last week reading about and trying to understand the Hopf fibration. I believe I understand the Hopf fibration with the mapping of $h(z_1,z_2)\mapsto \frac{z_2}{z_1}$. Multiple papers are stating this: $$u = u(r) = \frac{(r^2 -1)+2iz}{r^2 +1},\ \ \ v = v(r) = \frac{2(x +iy)}{r^2 +1}$$ As the "standard complex co-ordinates for the 3-sphere". I am unable to find this information anywhere else apart from these papers, and I need to find out where it arises from, as to me this doesn't make sense. I am a 2nd year physics undergraduate and if you could explain in a way I can understand, that will be awesome! Here is one paper - the rest are behind paywalls https://arxiv.org/pdf/1302.0342.pdf Eq (10)
The following answer is not rigorous. To make it rigorous you may need to know more about smooth manifold. $\mathbb S^3 = \{ (u, v) \in \mathbb C^2 : |u|^2 + |v|^2 =1\}$ is itself a $3$-manifold, which means that locally it looks like an open sets in $\mathbb R^3$. Mathematically speaking, it means that for each point $p\in \mathbb S^3$, there is a parametrization $ \phi : U \to \mathbb S^3$ (where $U$ is open in $\mathbb R^3$), which is at least an injective mapping (with some other condition), so that $p\in \phi (U)$. This is called a (local) chart on the manifold, as you are giving for each points in $\phi (U)$ a coordinate $(x, y, z)$. Of course this three numbers $(x, y, z)$, which is associated with the point $\phi(x, y, z)$ in $\mathbb S^3$, depends on the chart $\phi$. But $\mathbb S^3$ is so explicit that it has a "standard" chart, given by stereographic projection. The stereographic projection is a mapping $(x, y, z)\mapsto (u, v) \in \mathbb S^3$ given by your formula, where $(x, y, z) \in U :=\mathbb R^3$ and $r = \sqrt{x^2 + y^2 + z^2}$. One needs to check that the above mapping is really a chart (that is, it is injective). One can use the following observation about the construction of stereographic projection: given $(x, y, z) \in \mathbb R^3$, the point $(u, v)$ in $\mathbb S^3$ is formed by the intersection of $\mathbb S^3$ with the line joining $(x, y, z, 0)$ to $(0,0,0,1)$. See the picture here for the two dimensional picture.
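One can also just verify numerically that the formulas from the question land on the unit $3$-sphere, i.e. that $|u|^2+|v|^2=1$ (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    x, y, z = rng.normal(size=3)          # a random point of R^3
    r2 = x * x + y * y + z * z            # r^2
    u = ((r2 - 1) + 2j * z) / (r2 + 1)
    v = 2 * (x + 1j * y) / (r2 + 1)
    print(abs(u) ** 2 + abs(v) ** 2)      # prints 1.0 every time
```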
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to estimate the range of a normal distribution when the mean and standard deviation are given For example, how would you respond to this question? The earnings of one-hundred workers in a company are normally distributed. If the mean of this data set is 24 and the standard deviation is 4, find an approximate value for the range.
Recall that about $99.7\%$ of data under a normal curve falls within three standard deviations of the mean. When you are given the mean and standard deviation, this seems like a pretty good way to approximate the range. So since the mean is $24$, we could estimate that most of the data falls in the interval $$[24-3(4), \;24+3(4)] \;=\; [12,36].$$ So the range is $36-12 = 24$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Universal Cover of wedges $S^{2} \vee S^{2}, \mathbb{R}P^{2} \vee S^{2}$ and $\mathbb{R}P^{2} \vee \mathbb{R}P^{2}$. We are asked to find the universal cover of the wedges $S^{2} \vee S^{2}, \mathbb{R}P^{2} \vee S^{2}$ and $\mathbb{R}P^{2} \vee \mathbb{R}P^{2}$. I am second guessing myself on this problem because I came up with the wedge of two spheres as the universal cover for all three of these. Is this the correct universal cover for them all?
Edit (last answer was wrong) The universal covering space of $S^2\lor S^2$ is itself. However, once you introduce the projective plane, the wedge point splits, so $S^2\lor \Bbb RP^2$ has a chain of three spheres as universal cover, where the middle sphere is a two-sheeted cover of $\Bbb RP^2$, and the two other spheres each cover the $S^2$. $\Bbb RP^2\lor\Bbb RP^2$ is similarly an infinite chain of spheres, each sphere being a two-sheeted cover of one of the $\Bbb RP^2$'s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to minimize $f(x)$ with the constraint that $x$ is an integer? I would like to find the integer x that minimizes a function. That is: $$ x_{min} = \min_{x \in \mathbb{Z}}{(n - e^x)^2} $$ The goal is to write a program that computes the integer $x$ such that $e^x$ is closest to $n$, preferably avoiding conditionals. Without the integer constraint, obviously $x_{min} = \ln{n}$, but how to go about this with the integer constraint?
Minimizing the square is the same as minimizing the absolute value, so $$x_{min}=\arg\min_{x\in \mathbb Z}{|n-e^x|}$$ If $x$ weren't constrained to the integers, then, as you pointed out, $x$ would equal $\ln(n)$. Since it is constrained to the integers, compare the two candidates $x=\lfloor\ln n\rfloor$ and $x=\lceil\ln n\rceil$ and take whichever one makes $|n-e^{x}|$ smaller; that will be your answer.
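Since the question explicitly asks for a program, here is a minimal sketch (the function name is mine); it compares the two integer neighbours of $\ln n$ and uses `min` with a key instead of an explicit conditional:

```python
import math

def closest_exponent(n):
    """Return the integer x minimizing |n - e^x| (equivalently (n - e^x)^2), for n > 0."""
    x = math.log(n)
    return min((math.floor(x), math.ceil(x)), key=lambda k: abs(n - math.exp(k)))

print(closest_exponent(10))  # 2, since e^2 ~ 7.39 is closer to 10 than e^3 ~ 20.09
```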
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Corollary of Schur's Lemma - why abelian Corollary (of Schur's Lemma): Every irreducible complex representation of a finite abelian group $G$ is one-dimensional. My question is now: why does the group have to be abelian? As far as I know, we want the representation $\rho(g)$ to be an element of $Hom_G(V,V)$, where $V$ is the representation space. Isn't this always the case (i.e. even if the group is not abelian), as $\rho$ is by definition a function $G \rightarrow GL(V)$?
I had the same question, for maybe longer than a year, but because of a stupid mistake in understanding Schur's. Here it is, in case (hoping ;) ) that someone else might make the same mistake: Schur's lemma says something about *any* linear $\psi$ such that (*) $ \;\;\;\; \psi \rho (g) = \rho (g) \psi$ $\;$ $\forall g$ for some irreducible representation $\rho$. (Correct me if there are more assumptions.) $\psi$ is not assumed to be irreducible. If you assumed it were irreducible, it would follow from Schur's lemma that it's one dimensional, already at this stage. But you don't know that. (In the corollary, you have $\psi = \rho $, so $\psi$ is irreducible now. But you still need (*), which you get from G's abelianness, as the others have pointed out.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Any smart ideas on finding the area of this shaded region? Don't let the simplicity of this diagram fool you. I have been wondering about this for quite some time, but I can't think of an easy/smart way of finding it. Any ideas? For reference, the Area is: $$\bbox[10pt, border:2pt solid grey]{90−18.75\pi−25\cdot \arctan\left(\frac 12\right)}$$
The big black roundy-corner on the bottom right has area $(10^2 - \pi\cdot 5^2)/4$ and there are 3 complete copies of it and one copy trimmed by a small roundy-triangle. We will focus on this roundy-triangle which is the same as the one in the bottom left. So the key is to compute the area of the small white roundy-triangle at the bottom left. To this end we must find the intersection of the diagonal and the circle that create the top of this triangle. The equation of the circle and of the diagonal are $$ y = 1/2 x \\ (x-5)^2 + (y-5)^2 = 25 $$ Solving that with WA gives : $x=2,\ y=1$. So now we can decompose this roundy-triangle into two parts by drawing a vertical line that goes through this intersection. This gives a true triangle (the left part) which has area $1$ and another roundy-triangle (the right part). To compute the area of the roundy-right-triangle, we can use integration : $$ \int_2^5 -\sqrt{25 - (x-5)^2} + 5\ dx $$ See WA for the plot of this function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 12, "answer_id": 9 }
How to show $\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^k}=2^{n}$ How does one show that $$\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^k}=2^{n}$$ for each nonnegative integer $n$? I tried using the Snake oil technique but I guess I am applying it incorrectly. With the snake oil technique we have $$F(x)= \sum_{n=0}^{\infty}\left\{\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^k}\right\}x^{n}.$$ I think I have to interchange the summation and do something. But I am not quite comfortable in interchanging the summation. For example, after interchanging the summation, will $$F(x)=\sum_{k=0}^{n}\sum_{n=0}^{\infty}\binom{n+k}{k}\frac{1}{2^k}x^{n}?$$ Even if I continue with this I am unable to get the correct answer. * *How does one prove this using the Snake oil technique? *A combinatorial proof is also welcome, as are other kinds of proofs.
$$\begin{align} \sum_{k=0}^n \binom {n+k}k\frac 1{2^k} &=\frac 1{2^n}\sum_{k=0}^n \color{blue}{2^{n-k}}\binom {n+k}n\\ &=\frac 1{2^n}\sum_{k=0}^n\color{blue}{\sum_{r=0}^{n-k}\binom {n-k}r}\binom{n+k}r\\ &=\frac 1{2^n}\sum_{s=0}^n\sum_{r=0}^s\binom sr\binom {2n-s}n &&\scriptsize (\text{Putting } s=n-k)\\ &=\frac 1{2^n}\sum_{r=0}^n\sum_{s=r}^n \binom s{s-r}\binom{2n-s}{n-s}\\ &=\frac 1{2^n}\sum_{r=0}^n\sum_{s=r}^n(-1)^{s-r}\binom {-r-1}{s-r}(-1)^{n-s}\binom {-n-1}{n-s} &&\scriptsize(\text{Upper Negation})\\ &=\frac 1{2^n}\sum_{r=0}^n(-1)^{n-r} \color{magenta}{\sum_{s=r}^n\binom {-r-1}{s-r}\binom{-n-1}{n-s}}\\ &=\frac 1{2^n}\sum_{r=0}^n(-1)^{n-r}\color{magenta}{\binom {-n-r-2}{n-r}}&&\scriptsize(\text{Vandermonde})\\ &=\frac 1{2^n}\sum_{r=0}^n(-1)^{n-r}(-1)^{n-r}\binom {2n+1}{n-r}&&\scriptsize(\text{Upper Negation})\\ &=\frac 1{2^n}\sum_{r=0}^n\binom{2n+1}{n-r}\\ &=\frac 1{2^n}\cdot \frac 12 \sum_{r=0}^n \binom {2n+1}{n-r}+\binom {2n+1}{n+r+1} &&\scriptsize(\text{both summands are equal})\\ &=\frac 1{2^{n+1}}\sum_{r=0}^{2n+1}\binom {2n+1}r\\ &=\frac 1{2^{n+1}}\cdot 2^{2n+1}\\\\ &=2^n\qquad \blacksquare \end{align}$$
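The identity itself is easy to confirm exactly for small $n$ before wading through the algebra (a sketch using rational arithmetic):

```python
from fractions import Fraction
from math import comb

for n in range(20):
    s = sum(Fraction(comb(n + k, k), 2 ** k) for k in range(n + 1))
    assert s == 2 ** n
print("identity verified for n = 0, ..., 19")
```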
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 15, "answer_id": 5 }
prove inequality using methods of differential calculus Could you help me prove the following inequality? $$(x+y)^{\alpha}\le x^{\alpha} + y^{\alpha} $$ $$x,y\ge 0, \alpha \le 1$$ I don't know where to start; I am supposed to use methods of differential calculus.
We may assume $0<\alpha<1$, because otherwise the claim is trivial (or wrong). The function $f(x):=x^\alpha$ $(x\geq0)$ is concave, i.e., has a decreasing derivative $f'(x)=\alpha x^{\alpha-1}$. It follows that $$f(x+y)-f(x)=\int_0^y f'(x+t)\>dt\leq \int_0^y f'(t)\>dt=f(y)\ ,$$ which is equivalent to the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Lagrange theorem question I'm trying to teach myself group theory and this question is the final one in an exercise on Lagrange theorem and it has me currently stumped. Finite group ${G}$ contains distinct elements ${a}$ and ${b}$ and identity ${e}$, such that: ${a*b=a^3*b^4}$ and ${a^4*b^3=e}$. Show that ${a^2=e}$ and that the order of ${G}$ is a multiple of 6.
Proof that $a^2=e$: $a^4*b^3=e$. Multiply both sides by $b$ from the right to get $a^4*b^4=b$. This, combined with the assumption that $a*b=a^3*b^4$, implies $a*a*b=b$ (more details: $a^4*b^4=a*a^3*b^4=a*a*b$). Multiply both sides by $b^{-1}$ to get $a*a=e$, or $a^2=e$. Proof that $|G|$ is a multiple of $6$: First we prove that $b^3=e$. We know that $a*b=a^3*b^4$ and $a^2=e$, hence $a*b=a^2*a*b^4=a*b^4$. So by multiplying both sides by $a^{-1}$ from the left and then by $b^{-1}$, we get $b^3=e$. By now, we know that $G$ has an element of order $2$ and an element of order $3$. It means that $|G|$ must be a multiple of $2$ and $3$. Hence $|G|$ is a multiple of $6$ and we're done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1874961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is this lot drawing fair? Sorry for a stupid question, but it is bugging me a lot. Let's say there are $30$ classmates in my class and one of us has to clean the classroom. No one wants to do that. So we decided to draw a lot - thirty pieces of paper in a hat, one of which is with "X" on it. The one who draws "X" has to do the cleaning. Each one starts to draw... Is this kind of lot drawing fair or not fair? It looks to me like the first one's chances to get an "X" are equal to $1/29$, while the second one's chances would be equal either to $1/28$ (in case the first one didn't draw an "X") or zero $0/29 = 0$ (in case the first one drew an "X"). However, neither $1/28$, nor $0/29$ is equal to $1/29$.
Actually, it looks like it's fair to me. The probability that person 1 chooses an "X" is $\frac{1}{30}$, since only one of the 30 lots has an "X" on it. The probability that person 2 chooses an "X" is $\frac{29}{30}*\frac{1}{29} =\frac{1}{30}$, since person 1 must have not chosen a lot with an "X" on it. The probability that person 3 chooses an "X" is $\frac{29}{30}*\frac{28}{29}*\frac{1}{28} =\frac{1}{30}$. You can show that this is true for every successive person, and to further prove this, we can look at the 30th person. The probability that person 30 chooses an "X" is the probability that everybody else did not choose an "X", which is $\frac{29!}{30!} = \frac{1}{30}$. So it is indeed fair for every person.
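The same computation for every position, done exactly (my sketch):

```python
from fractions import Fraction

n = 30
prob_no_x_yet = Fraction(1)                       # P(nobody before me drew the X)
for position in range(1, n + 1):
    remaining = n - position + 1                  # lots left in the hat
    p_here = prob_no_x_yet * Fraction(1, remaining)
    assert p_here == Fraction(1, 30)
    prob_no_x_yet *= Fraction(remaining - 1, remaining)
print("every position draws the X with probability 1/30")
```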
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 8, "answer_id": 4 }
$ \int_{-\infty}^{\infty} \frac{e^{2x}}{ae^{3x}+b} dx,$ where $a,b \gt 0$ Evaluate $$ \int_{-\infty}^{\infty} \frac{e^{2x}}{ae^{3x}+b} dx,$$ where $a,b \gt 0$ I tried using $y=e^x$, but I still can't solve it. I get $\displaystyle\int_0^\infty \frac y{ay^3+b} \, dy.$ Is there any different method to solve it?
$$ \begin{align} \int_{-\infty}^\infty\frac{e^{2x}}{ae^{3x}+b}\,\mathrm{d}x &=\frac1{3b}\left(\frac ba\right)^{2/3}\int_0^\infty\frac{u^{-1/3}}{u+1}\,\mathrm{d}u\tag{1}\\ &=\frac13\left(\frac1{ba^2}\right)^{1/3}\pi\csc\left(\frac23\pi\right)\tag{2}\\ &=\frac{2\pi}{3\sqrt3}\left(\frac1{ba^2}\right)^{1/3}\tag{3} \end{align} $$ Explanation: $(1)$: $u=\frac abe^{3x}$ $(2)$: result from this answer $(3)$: $\csc\left(\frac23\pi\right)=\frac2{\sqrt3}$
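A numerical confirmation of the closed form for one arbitrary choice of $a$ and $b$ (my sketch; the integrand is negligible outside $[-40,40]$ for these values):

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.7, 0.4
numeric, _ = quad(lambda x: np.exp(2 * x) / (a * np.exp(3 * x) + b), -40, 40)
closed = 2 * np.pi / (3 * np.sqrt(3)) * (1 / (b * a ** 2)) ** (1 / 3)
print(numeric, closed)   # the two values agree
```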
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Gradient of a real valued function defined on a sphere Points on a sphere of radius $R$ is expressed in spherical coordinates as $\left(\varphi,\theta\right)$. For a real valued, continuous and differentiable function $f:\mathbb R\times\left[0,\pi\right] \to \mathbb R$ evaluated on said sphere centered at $\left(0,0,0\right)$ with radius $R$, what is its gradient written in the form of angles (radians) in the azimuthal and zenith directions? Additional notes and assumptions: * *$f\left(\varphi+2\pi,\theta\right)=f\left(\varphi,\theta\right)$; *$f\left(\varphi,0\right) = f\left(0,0\right)$, $f\left(\varphi,\pi\right) = f\left(0,\pi\right)$; *$\varphi\in\mathbb R$ is the azimuthal angle and $\theta\in\left[0,\pi\right]$ is the zenith angle. By design, the direction of the gradient has to be parallel to the tangent plane at any position, and points in the direction of greatest increase of $f$. Meanwhile, the magnitude of the gradient is the "slope" of $f$ in said direction. I'm aware that the gradient operator of $\mathbb R^3$ in the spherical coordinate system can be written as a linear combination of the basis vectors, namely $$ \nabla = \mathbf{e}_{r} \frac{\partial}{\partial r} + \mathbf{e}_{\varphi} \frac{1}{r\sin\theta} \frac{\partial}{\partial \varphi} + \mathbf{e}_{\theta} \frac{1}{r} \frac{\partial}{\partial \theta}. $$ Since $f$ is only mapped from points on the sphere, I removed the term with $\mathbf{e}_{r}$ to arrive at $$ \nabla f = \mathbf{e}_{\varphi} \frac{1}{R\sin\theta}\frac{\partial f}{\partial\varphi} + \mathbf{e}_{\theta} \frac{1}{R} \frac{\partial f}{\partial\theta}, $$ from where I naively translate the "arc lengths" into their respective "angles" by \begin{align} \frac{1}{R \sin \theta} \frac{\partial f}{\partial\varphi} \to \frac{1}{R^2\sin^2\theta} \frac{\partial f}{\partial\varphi}, \quad & \text{because the circle of latitude has radius of }R\sin\theta, \text{and} \\ \frac{1}{R} \frac{\partial f}{\partial\theta} \to \frac{1}{R^2} \frac{\partial f}{\partial\theta}, \quad & \text{because the circle of longitude has radius of }R. \end{align} Therefore, I believe that the gradient described in azimuthal and zenith angles is the following pair: $$ \frac{1}{R^2} \left( \frac{1}{\sin^2\theta} \frac{\partial f}{\partial\varphi}, \frac{\partial f}{\partial\theta} \right). $$ My questions would be: * *Is there a rigorous formulation for this kind of gradient operator that I'm looking for? *What about gradient at the poles? I'm asking because $\sin\theta$ is obviously $0$ when $\theta=0,\pi$.
Rigorous formulation This involves the definition of the surface gradient operator, which is defined as $$ \nabla_{\Gamma} = \nabla - {\mathbf e}_{r} \left({\mathbf e}_{r} \cdot \nabla\right). $$ The projection of the gradient along the unit normal ${\mathbf e}_{r}$ is evaluated by $$ {\mathbf e}_{r} \left({\mathbf e}_{r} \cdot \nabla f\right) = {\mathbf e}_{r} \left(\frac{\partial f}{\partial r} {\mathbf e}_{r}^2\right) = {\mathbf e}_{r} \frac{\partial f}{\partial r} $$ which when subtracted from $\nabla f$ gives (at $r=R$) $$ {\nabla}_{\Gamma} f = \mathbf{e}_{\varphi} \frac{1}{R\sin\theta}\frac{\partial f}{\partial\varphi} + \mathbf{e}_{\theta} \frac{1}{R} \frac{\partial f}{\partial\theta}. $$ Angular gradient field Because of $\mathbf{e}_{\theta}\cdot\mathbf{e}_{\varphi} = 0$ and that the longitudinal rotation and the latitudinal rotations are independent of each other, the component of ${\nabla_{\Gamma}}f$ in the direction of $\mathbf{e}_{\theta}$ and $\mathbf{e}_{\varphi}$ can be individually resolved into angular components. Polar regions The original extent for asking the case of $\sin\theta=0$ is for numerical purposes. However, it should be noted that the gradient operator cannot be defined on $\sin\theta=0$ due to the choice of coordinate system and boundary conditions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $(A^\top A \mathbf{x}, \mathbf{x}) = (A \mathbf{x}, A \mathbf{x})$? Let $(\mathbf{x}, \mathbf{y})$ denote the inner product between $\mathbf{x}$ and $\mathbf{y}$, and let $A$ be a real matrix. Why is $(A^\top A \mathbf{x}, \mathbf{x}) = (A \mathbf{x}, A \mathbf{x})$? Using the scalar product it's easy to see that $$ \begin{align} (A^\top A \mathbf{x})^\top \mathbf{x} &= \mathbf{x}^\top A^\top A \mathbf{x} \\ &= (A \mathbf{x})^\top A \mathbf{x} \end{align} $$ but using only the properties of the inner product I fail to see how one gets the result. Edit: To be clear, I'm asking how one comes to the conclusion that $(A^\top A \mathbf{x}, \mathbf{x}) = (A \mathbf{x}, A \mathbf{x})$, independently of how the inner product is defined (ie. only using the axioms of the inner product), so if your answer relies on a particular definition of the inner product such as $(\mathbf{x}, \mathbf{y}) := \mathbf{x}^\top\mathbf{y}$, it's not good. Edit 2: The equality I'm asking about can be found in the book Linear Algebra Done Wrong (pdf) by Sergei Treil (p. 172), although here I'm interested only in the case of real matrices in the book it covers the complex case as well.
We have $$(A^TAx,x)=[A^TAx]^Tx=[A^T(Ax)]^Tx=[(Ax)^TA]x=(Ax)^T(Ax)=(Ax,Ax)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Analytic continuation for $\sum_{n=0}^{\infty}(\sqrt n+1/3)^{-s}$ Define a function $F(s)$ by: $$F(s)=\sum_{n=0}^{\infty}(\sqrt n+1/3)^{-s}$$ Is there a closed form expression for the analytic continuation of $F(s)$ to $F(-s)$?
Not sure whether this matches the query for closed form, but Euler-Maclaurin summation gives an exact expression for the analytic continuation to any $s\in\mathbb{C}\setminus\{1,2\}$: For brevity, let's write $f(s,x):=(\sqrt{x}+a)^{-s}$ for $x\in\mathbb{R}$, $0<a\in\mathbb{R}$. For $a=\frac{1}{3}$ we have the OP. To avoid the singularity of the square root at $0$, split off some terms from the sum and apply Euler-Maclaurin summation to the remaining terms, where $N$ and $K$ are some arbitrarily chosen positive integers that do not influence the value of $F(s)$: $$\begin{aligned} F(s) = & \sum_{n=0}^{N-1}f(s,n)+ \int_N^\infty f(s,x)\mathrm{d}x + \frac{1}{2}f(s,N)\\ &-\sum_{k=1}^K\frac{B_{2k}}{(2k)!}f^{(2k)}(s,N) +\frac{1}{(2K+1)!}\int_N^\infty \bar{B}_{2K+1}(x)f^{(2K+1)}(s,x)\mathrm{d}x \end{aligned}\tag{*}\label{eq}$$ Here, $B_{2k}$ denote Bernoulli numbers, $\bar{B}_{2K+1}(x) = B_{2K+1}(x -[x])$ the periodically continued Bernoulli polynomial of degree $2K+1$ and $f^{(n)}(s,x) = \left(\frac{\operatorname{d}}{\operatorname{d}x}\right)^nf(s,x)$ the $n$th derivative of $f(s,x)$ with respect to $x$. For $\Re s>2$, the first integral can be calculated explicitly: $$\int_N^\infty f(s,x)\mathrm{d}x = \int_N^\infty (\sqrt{x}+a)^{-s}\mathrm{d}x = 2\frac{(\sqrt{N}+a)^{2-s}}{s-2}-2a\frac{(\sqrt{N}+a)^{1-s}}{s-1},$$ giving a meromorphic function in $s$. In the other terms of equation \eqref{eq} the functions $f(s,x)$ and $f^{(n)}(s,x)$ are all analytic in $s$ and the second integral converges normally for $\Re s > -4K$, so that it is also analytic in this range of $s$. Hence, \begin{align} F(s) = & \sum_{n=0}^{N-1}f(s,n)+ 2\frac{(\sqrt{N}+a)^{2-s}}{s-2}-2a\frac{(\sqrt{N}+a)^{1-s}}{s-1} + \frac{1}{2}f(s,N)\\ &-\sum_{k=1}^K\frac{B_{2k}}{(2k)!}f^{(2k)}(s,N) +\frac{1}{(2K+1)!}\int_N^\infty \bar{B}_{2K+1}(x)f^{(2K+1)}(s,x)\mathrm{d}x \end{align} is an expression for the analytic continuation of $F(s)$ in the half plane $\Re s > -4K$. Using integration by parts on the remaining integral, the value of $K$ can be made arbitrarily large (that's why it can be freely chosen), so the restriction $\Re s > -4K$ is not a fixed bound in principle. We see that (the analytic continuation of) $F(s)$ has two simple poles, one at $s=2$ with residue $\operatorname{Res}(F,2)=2$ and one at $s=1$ with residue $\operatorname{Res}(F,1)=-2a$, and is analytic everywhere else.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $X$ is compact metric then $B(X)$ is non-separable I've seen the following argument: Let $X$ be a compact metric space, and denote by $B(X)$ the set of bounded, measurable functions on $X$. $B(X)$ with the sup-norm is a $C^*$-algebra. Then $B(X)$ is not separable. Now, if I take $X=[0,1]$ and denote $A=\operatorname{span}_{\mathbb{Q}} \{\chi_{(a,b)} : a,b\in \mathbb{Q}\}$, then $A$ is a countable set in $B(X)$ and I had the (wrong) feeling that $A$ is dense in $B(X)$, because any measurable bounded function can be approximated uniformly by simple functions. I also didn't use the compactness of $X$. So, why am I wrong? I'll be happy to see a proof of the non-separability of $B(X)$. Thank you in advance.
This is false as stated. For example, $X$ could be finite, in which case $B(X)$ may as well be some $\mathbb R ^n.$ Assume $X$ is infinite. Then it contains a countably infinite subset $\{x_1,x_2,\dots\}.$ Note that singletons are closed, hence Borel, hence all countable subsets of $X$ are Borel sets. Define a map $F$ from the power set $P(\mathbb N)$ into $B(X)$ by setting $F(A) = \chi_{\{x_k : k \in A\}}.$ Then each $F(A)\in B(X).$ Note that if $A_1\ne A_2,$ then $\|F(A_1) - F(A_2)\|_\infty = 1.$ Since $P(\mathbb N)$ is uncountable, we have found uncountably many elements of $B(X)$ whose distances from one another are all $1.$ It follows that $B(X)$ is nonseparable. I don't think this problem has too much to do with compactness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
If $\text{Area} (A) = \text{Area} (B)$ and $\text{Perimeter}(A) = \text{Perimeter}(B) \implies A \cong B$? If I have an $n$-gon $A$ and a convex $n$-gon $B$ with the same perimeter and the same area, does $A\cong B$? Edit: What does the answer become if I replace convex by regular?
If they're both regular then just one of the given equalities is enough to show they're congruent. For example, denote by $a_1, a_2$ the edge lengths of the two equal-perimeter polygons; then we get $na_1=na_2$, so clearly $a_1=a_2$. The same goes for the area.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
The number of possible ways to select four of eleven vertices A cycle graph is a graph that consists of a single cycle, in other words some number of vertices connected in a closed chain, and is denoted by $c_n$. I need the number of ways to select four of the eleven vertices such that no two selected vertices are adjacent. Can you help me?
An admissible selection of $k$ nonadjacent vertices from an $n$-cycle can be realized as follows: Choose an arbitrary first chosen vertex; going around the cycle from it, the remaining $n-1$ vertices form a row. In the end there will be $n-k$ unchosen vertices, and the other $k-1$ chosen vertices sit in $k-1$ different slots between the unchosen vertices; the $n-k$ unchosen vertices, written in a row, leave $n-k-1$ internal slots. The first choice can be made in $n$ ways, and then the slots can be chosen in ${n-k-1\choose k-1}$ ways. Since we have arbitrarily called one of the $k$ chosen vertices the first, we have to divide by $k$ in order to arrive at the end result $$N={n\over k}{n-k-1\choose k-1}\ .$$ If $n=11$ and $k=4$ we obtain $N={11\over4}{6\choose3}=55$.
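A brute-force confirmation for $n=11$, $k=4$ (my sketch):

```python
from itertools import combinations
from math import comb

n, k = 11, 4
count = sum(
    1
    for S in combinations(range(n), k)
    if all((v + 1) % n not in S for v in S)       # no chosen vertex next to another on the cycle
)
print(count, n * comb(n - k - 1, k - 1) // k)     # 55 55
```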
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Graph theoretic proof: For six irrational numbers, there are three among them such that the sum of any two of them is irrational. Problem. Let there be six irrational numbers. Prove that there exists three irrational numbers among them such that the sum of any two of those irrational numbers is also irrational. I have tried to prove it in the following way, but I am not sure whether it is watertight or not as I have just started learning graph theory. Let there be a graph with $6$ vertices. We assign a weight equal to those six irrational numbers to each of the vertices. We join all the vertices with edges and color the edges in the following way: * *Edge is colored red if the sum of the weights of its end points is irrational. *Edge is colored blue if the sum of the weights of its end points is rational. We know that when we color a $6$-vertex graph with $2$ colors then there must be a monochromatic triangle. * *If the triangle is red then we are done. *If it is blue, then let the irrational numbers be $a$, $b$ and $c$. Therefore $a+b$, $b+c$ and $c+a$ are all rational. Which implies $2(a+b+c)$ and $a+b+c$ is rational. As $a+b$ is rational and hence $c$ is also rational. But this is a contradiction. Hence, our original statement is proved.
It's actually possible to demonstrate that this is NOT a graph-theoretic problem. The graph-theoretic condition equivalent to having a finite collection of irrational numbers as vertices, and recording (with edges) which pairs have rational sums, is that the graph is a disjoint union of complete bipartite graphs $K_{m,n}$. The graph has that structure for any finite collection of irrational numbers, and every such finite graph can be realized by some irrational numbers. All of the binary relation structure of the graph is an encoding of a simpler unary structure, the partition of the vertex set into pairs of subsets (the mod $\mathbb{Q}$ equivalence classes of the numbers, and their negatives). To answer any question about the graph one looks at the partition, not the edges. A maximum edge-free subset (independent set) in such a graph is a union of the larger half of each partition-pair. The cardinality is $\sum \max(m,n)$, which is always at least $\lceil V/2 \rceil$ if the graph has $V$ vertices. So the first answer by @AlexRavsky, which used the partition directly without introducing a graph, seems to be the optimal argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92", "answer_count": 5, "answer_id": 2 }
Proving that $x^4 - 10x^2 + 1$ is not irreducible over $\mathbb{Z}_p$ for any prime $p$. So I have seen the similar question and answers on here for $x^4 +1$, but I am having trouble extending anything there to this polynomial... I understand it is fairly trivial with Galois theory, but my class has just barely covered Field Extensions, so suffice it to say we have no Galois theory to play with. I managed to prove it for the primes such that $p \equiv 1, 7 \pmod 8$, by noting that $2$ is a square modulo those primes and thus $x^4 - 10x^2 +1 = (x^2-1)^2 - 4q^2x^2 = (x^2 -1 + 2qx)(x^2 - 1 - 2qx)$, where $q^2 = 2$, for those $\mathbb{Z}_p$... however, trying to get a similar result for $3 \pmod 8$ and $5 \pmod 8$ has been stumping me for a long time, I am having a hard time making $q^2 = -1$ and $q^2 = -2$ give me something factorable... I guess the worst part of all of this is that I don't think this solution is even particularly enlightening, in terms of abstract algebra. It's really just some number theory trickery. I don't think my course has prepared me theoretically for this problem, does anyone have an elementary approach to it?
Another simple explanation comes from the fact that the zeros of $m(x)=x^4-10x^2+1$ are $$ x=\pm\sqrt2\pm\sqrt3 $$ with all four sign combinations. So if $p$ is a prime, then you get the splitting field of $m(x)$ over $K=\Bbb{F}_p$ by adjoining $\sqrt2$ and $\sqrt3$. Because up to isomorphism the field $K$ has only a single quadratic extension, namely $L=\Bbb{F}_{p^2}$, we immediately see that $m(x)$ splits into linear factors over $L$. This is because $\sqrt2$ and $\sqrt3$ are both elements of $L$. Consequently $m(x)$ splits into quadratics at worst over $K$. From this way of looking at it it is obvious how to generalize this. Any biquadratic polynomial with zeros of the form $\pm\sqrt{d_1}\pm\sqrt{d_2}$ for some integers $d_1,d_2$ will split into quadratic (or linear) factors modulo $p$ for all primes $p$. From basic field theory we see that it will be irreducible over $\Bbb{Q}$ whenever the field $F=\Bbb{Q}(\sqrt{d_1},\sqrt{d_2})$ is a degree four extension over $\Bbb{Q}$. See Qiaochu's answer here for more Galois theory.
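As a sanity check (not part of the argument above), one can ask a computer algebra system to factor $m(x)$ modulo the first few primes and confirm that no irreducible factor ever has degree greater than $2$. A short sketch, assuming SymPy's `factor_list` with its `modulus` option:

```python
from sympy import symbols, Poly, factor_list, primerange

x = symbols('x')
m = x**4 - 10*x**2 + 1

for p in primerange(2, 60):
    _, factors = factor_list(m, x, modulus=p)
    degrees = sorted(Poly(f, x).degree() for f, mult in factors for _ in range(mult))
    print(p, degrees)   # the degrees are always 1s and 2s, never 3 or 4
```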
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
If $(w + 1)(w - 1) = w$, find $ { w }^{ 10 }+\frac { 1 }{ { w }^{ 10 } } $. Recently I was asked a question by my student that completely stumped me. $$\text{If }(w + 1)(w - 1) = w\text{, find } { w }^{ 10 }+\frac { 1 }{ { w }^{ 10 } }. $$ One "cheat" method that I used was to solve for the exact value of $w$ from the given first equation, and then substitute it into the requested expression that we were asked to find. I got $123$ as the answer. However, I'm quite sure there's an algebraic way to solve this. Anyone want to give this a shot?
We have $w-\frac{1}{w}=1$, and then $w^2+\frac{1}{w^2}=3$. Put $u_n=w^{2n}+\frac{1}{w^{2n}}$; we have $u_0=2$, $u_1=3$, and $$u_{n+1}(w^2+\frac{1}{w^2})=u_{n+2}+u_n$$ Hence $u_{n+2}=3u_{n+1}-u_n$, it is easy to compute $u_2,u_3,u_4$, and finally $u_5$.
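A quick numerical check (not part of the argument): run the recurrence, and also plug in an explicit root of $w^2-w-1=0$ (which is what $(w+1)(w-1)=w$ amounts to):

```python
from math import sqrt

# u_0 = 2, u_1 = 3, u_{n+2} = 3*u_{n+1} - u_n
u = [2, 3]
for _ in range(4):
    u.append(3 * u[-1] - u[-2])
print(u)        # [2, 3, 7, 18, 47, 123], so w^10 + 1/w^10 = 123

w = (1 + sqrt(5)) / 2          # one root of w^2 - w - 1 = 0
print(w**10 + w**(-10))        # ~123.0
```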
{ "language": "en", "url": "https://math.stackexchange.com/questions/1875995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
How to show the existence of a subgroup of an abelian group? If an abelian group has subgroups of orders m and n, respectively, then show that it has a subgroup whose order is the least common multiple of m and n. I tried solving this on the assumption that if an abelian group has a subgroup of order, say, N then it has an element whose order is also N. This leads me to the point where I arrive at two different elements having orders m and n as in the question, whereby I obtain a third element whose order is the l.c.m. of m and n. Now, was my assumption correct in the first place? If it's wrong, please let me know and help me get this solution right.
Let $H_0$ and $H_1$ be subgroups of size $m,n$ and let $H_2=H_0\cup H_1=\{h_1,h_2\dots h_s\}$ and suppose the elements have orders $o_1,o_2\dots o_s$ respectively. Then the set $H_3=\{h_1^{e_1}h_2^{e_2}\dots h_s^{e_s}| e_1,e_2\dots e_s\in \mathbb Z\}$ is a subgroup, because $G$ is abelian. It is also finite as it has at most $o_1o_2\dots o_s$ elements. By Lagrange's theorem $m$ and $n$ divide $|H_3|$, so $lcm(m,n)$ divides $|H_3|$, so by the following lemma $H_3$ has a subgroup of order $lcm(m,n)$ (notice $H_3$ is clearly abelian also). Lemma: Let $G$ be a finite abelian group and $d$ a divisor of $|G|$, then $G$ has a subgroup of order $d$ (in other words, every abelian group is a converse Lagrange group). Proof: The proof is by strong induction over $|G|$. If $|G|=1$ it is trivial. The inductive case is as follows: If $d=1$ it is trivial, otherwise take $p$ a prime that divides $d$. By Cauchy's theorem there is a subgroup $N$ of order $p$, and because $G$ is abelian $N$ is normal, so we can consider the canonical projection $\varphi:G\rightarrow \frac{G}{N}$. Now notice that $d/p$ divides the order of $\frac{G}{N}$, so by the induction hypothesis there is a subgroup $K\leq \frac{G}{N}$ of order $d/p$. Consider the subgroup $H=\varphi^{-1}(K)$; what is its order? Consider the restriction of $\varphi$ to $H$: the kernel is $N$ and the image is $K$, so we conclude $\frac{H}{N}\cong K$, and hence $H$ has order $p\frac{d}{p}=d$. We have found a subgroup of order $d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the proof that $\sum_{n=0}^{\infty} \frac{1}{2^n}\frac{d_n(x_n,y_n)}{1+d_n(x_n,y_n)}$ is a metric? What is the proof that $d(x,y) = \sum_{n=0}^{\infty} \frac{1}{2^n}\frac{d_n(x_n,y_n)}{1+d_n(x_n,y_n)}$ is a metric, where $x=(x_1,x_2,\cdots)$, $y=(y_1,y_2,\cdots)$ and $d_n$ is a metric for $X_n$? How does one prove the triangle inequality? https://en.wikipedia.org/wiki/Metric_space#Product_metric_spaces
Since $d_i$ is metric, inequality $$d_i(x_i,z_i)\leq d_i(x_i,y_i)+d_i(y_i,z_i)$$ holds in i-th metric space from the product space. Also, metrics $d(x,y)$ and $\frac{d(x,y)}{1+d(x,y)}$ are equivalent. (If $d$ is a metric, then $d/(1+d)$ is also a metric). Therefore, we have $$\frac{d_i(x_i,z_i)}{1+d_i(x_i,z_i)}\leq\frac{d_i(x_i,y_i)}{1+d_i(x_i,y_i)}+\frac{d_i(y_i,z_i)}{1+d_i(y_i,z_i)},$$ for every index $i$ and $$\frac{1}{2^i}\frac{d_i(x_i,z_i)}{1+d_i(x_i,z_i)}\leq\frac{1}{2^i}\frac{d_i(x_i,y_i)}{1+d_i(x_i,y_i)}+\frac{1}{2^i}\frac{d_i(y_i,z_i)}{1+d_i(y_i,z_i)},$$ for every $i$. By summing, we get $$d(x,z)\leq d(x,y)+d(y,z).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Ways of showing $\sum\limits_{n=1}^{\infty}\ln(1+1/n)$ to be divergent Show that the following sum is divergent $$\sum_{n=1}^{\infty}\ln\left(1+\frac1n\right)$$ I thought to do this using Taylor series, using the fact that $$ \ln\left(1+\frac1n\right)=\frac1n+O\left(\frac1{n^2}\right) $$ which then makes it clear that $$ \sum_{n=1}^{\infty}\ln\left(1+\frac1n\right)\sim \sum_{n=1}^{\infty}\frac1n\longrightarrow \infty $$ But I feel like I overcomplicated the problem and would be interested to see some other solutions. Also, would Taylor series be the way you would see that this diverges if you were not told?
"Sophisticated" does not mean "complicated". In my opinion, despite using more sophisticated ideas (asymptotic analysis), your proof is simpler than all of the other current answers — even the one expressing it as a telescoping series. Incidentally, you possibly made an oversight: to complete the proof, $$ \sum_{n=1}^{\infty} O\left(\frac{1}{n^2} \right) = O\left( \sum_{n=1}^{\infty} \frac{1}{n^2} \right) = O(1)$$ (also, a remark: for this argument to be valid, it's important that the $O$ on the left is uniform; e.g. the same 'hidden constant' works for all $n$)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 6 }
Formal proof that weak partial order difference equivalence relation is a strict partial order. I'm having difficulty with the following problem: Prove: if $R$ is a weak partial (linear) order on $X$, then $R^− = R \; \backslash\ Id_X$ is a strict partial (linear) order. I know that as a weak partial order, $R$ is reflexive, antisymmetric, and transitive. I know that the identity relation $Id_X$ is reflexive, asymmetric, symmetric/antisymmetric and transitive. I also know that as a strict partial order, $R^−$ is irreflexive, asymmetric, and transitive. I can see that every reflexive relation in $R$ is also in $Id_X$, and that the difference $R \; \backslash\ Id_X$ will thus not contain any $<x, y>$ such that $x = y$, and will therefore be irreflexive. I can see that this also implies that $R \; \backslash\ Id_X$ will be asymmetric, since the only $<x, y>$ pairs such that $x = y$ have been removed with $Id_X$ and by the definition of antisymmetry those were the only $<x, y>$ pairs for which $<y, x>$ was also the case. I'm having trouble reasoning through why $R^−$ must be transitive. I think that if it weren't transitive, it would have to be the case that $R$ would also not be transitive, but I'm not sure how to articulate why that seems correct. Finally, my biggest issue is just that I don't have math experience and have a hard time putting any of the above in formal notation. Thanks for any help anyone is able to give.
Suppose that $\langle x,y\rangle\in R^-$ and $\langle y,z\rangle\in R^-$; then $\langle x,y\rangle,\langle y,z\rangle\in R$, and $R$ is transitive, so $\langle x,z\rangle\in R$. This means that $\langle x,z\rangle$ will be in $R^-$, as desired, unless $x=z$, so we want to rule out that possibility. But if $x=z$, then $\langle y,z\rangle=\langle y,x\rangle$, so our initial supposition is that $\langle x,y\rangle\in R^-$ and $\langle y,x\rangle\in R^-$. But then antisymmetry of $R$ implies that $x=y$, and we know that $\langle x,x\rangle\notin R^-$. Thus, $x\ne z$, and $\langle x,z\rangle\in R^-$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Characterize an analytic function with a restriction on its growth Characterize all analytic functions $f(z)$ in $|z|<1$ such that $|f(z)|\leq|\sin(1/z)|$ for all points in the punctured disk. I think we should change the form of $\sin(1/z)$ to find a connection with the power series into which $f(z)$ can be expanded. But I don't know how to do that.
Hint: Take the sequence $\{z_n\}=\left\{\frac{1}{n\pi}\right\}$ and use Identity theorem. What is $f(z_n)$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Parity of a generalized characteristic polynomial Let $$Q(\lambda)=\det\begin{pmatrix}-\lambda C_{11}&C_{12}\\C_{21}&-\lambda C_{22}\end{pmatrix}$$ where $C_{ij}$ are (not necessary square) matrices. In this answer, it is claimed that $Q(\lambda)$ has a parity,i.e. $Q(\lambda)=\pm Q(-\lambda)$, but no explanation is given (it is probably obvious for some reason which currently eludes me). Why does $Q(\lambda)$ have a parity?
Suppose $C_{11}$ is $m \times k$, so that $C_{22}$ is $(n-m) \times (n-k)$ where the overall matrix is $n \times n$. Consider the Leibniz formula. Each term has $n$ factors that are matrix elements, one factor in each row and one factor in each column. If a term has $a$ factors in the top left block, it must have $m-a$ in the top right, $k-a$ in the bottom left, and $n-m-k+a$in the bottom right. That's a total of $n-m-k+2a$ factors with a $\lambda$, which is congruent to $n-m-k$ mod $2$. Thus if $n-m-k$ is odd, each term has $\lambda$ to an odd power, which makes $Q(\lambda)$ an odd function, while if $n-m-k$ is even, each term has $\lambda$ to an even power, and $Q(\lambda)$ is an even function.
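For a concrete sanity check (not part of the argument above), here is a short SymPy sketch with randomly chosen integer blocks; the shapes $n=5$, $m=2$, $k=3$ are just an illustrative assumption, and since $n-m-k=0$ is even, $Q(\lambda)$ should come out as an even polynomial:

```python
import random
from sympy import Matrix, symbols, expand

lam = symbols('lam')

def rand(r, c):
    return Matrix(r, c, lambda i, j: random.randint(-3, 3))

n, m, k = 5, 2, 3
C11, C12, C21, C22 = rand(m, k), rand(m, n - k), rand(n - m, k), rand(n - m, n - k)

top = Matrix.hstack(-lam * C11, C12)
bottom = Matrix.hstack(C21, -lam * C22)
Q = expand(Matrix.vstack(top, bottom).det())

print(Q)
print(expand(Q - Q.subs(lam, -lam)))   # 0: only even powers of lam appear
```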
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are there names for $A_n$, $Z_n$ and $B_n$ in a chain complex? I mean a similar name to "$n$-th homology group". If there were, say, "$n$-th component" for $A_n$, "the cycles of $A_n$" for $Z_n$, and "the boundaries of $A_n$" for $B_n$, then I could say that "any cycle- and boundary-preserving homomorphism between the components of two chain complexes induces a homomorphism between the corresponding homology groups". What is the correct terminology for this?
If $C=(C_p)$ is a (co)chain complex then elements in $C_p$ are usually called $p$-(co)chains.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral over $0\leq x,y,z\leq 1$ and $x+y+z\leq 2$ What is $$\int_{S}(x+y+z)dS,$$ where $S$ is the region $0\leq x,y,z\leq 1$ and $x+y+z\leq 2$? We can change the region to $0\leq x,y,z\leq 1$ and $x+y+z\geq 2$, because the total of the two integrals is just $$\int_0^1\int_0^1\int_0^1(x+y+z)dxdydz=3\int_0^1xdxdydz=\frac{3}{2}.$$ Now, can we write the new integral as $$\int_0^1\int_{\min(2-x,1)}^1\int_{\min(2-x-y,1)}^1(x+y+z)dzdydx?$$ This gets more involved since we have to divide into cases whether $2-x-y\leq 1$ or $\geq 1$. Is there a simpler way?
You could write \begin{equation} \int_S x \, dx dy dz = \int_0^1 \left( \int_{y+z \leq 2-x; \, 0\leq y,z \leq 1} dy dz \right) x dx \end{equation} Now, you can interpret $y+z \leq 2-x$ with $y,z \geq 0$ as a triangle in the plane, whose area is $\frac{(2-x)^2}{2}$. From this triangle, you subtract two smaller triangles to account for the fact that $0 \leq y,z \leq 1$. You can write the total area of these smaller triangles as $2\times \frac{(2-x-1)^2}{2} = (1-x)^2$. In total, we have \begin{equation} \int_{y+z \leq 2-x; \, 0\leq y,z \leq 1} dy dz = \frac{(2-x)^2}{2} - (1-x)^2 = 1 - \frac{x^2}{2} \end{equation} So in total, \begin{equation} \int_0^1 x \left(1 - \frac{x^2}{2} \right) dx = \frac{3}{8} \end{equation} You have three of those integrals, so $\int_S (x+y+z) \, dx\, dy\, dz = \frac{9}{8}$.
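If you want to double-check the value $9/8$ numerically (not part of the derivation), a crude Monte Carlo estimate over the unit cube is enough, since the cube has volume $1$:

```python
import random

random.seed(0)
N = 10**6
total = 0.0
for _ in range(N):
    x, y, z = random.random(), random.random(), random.random()
    if x + y + z <= 2:
        total += x + y + z
print(total / N)   # ~1.125 = 9/8
```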
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Interesting probability distribution of a mixed type random variable $Y$ Let $X$ and $U$ be independent random variables with: $$P(X=k)=\frac1{N+1} \text{ for } k=0,1,2,\ldots,N$$ and $U$ having uniform $(0,1)$ distribution. Let $Y=X+U$. Find distribution function of $Y$. I have tried to solve the problem by conditioning on value of $X$ and making use of total probability theorem. I have got $P(Y\le y)=y-\frac N2$. Is it correct? Please help.
Suppose $k\in\{0,1,2,\ldots, N\}$ and $0\le a<b\le 1$. Then $$ \begin{align} & \Pr(k+a<X+U<k+b) = \Pr(X=k\ \&\ a<U<b) = \frac 1 {N+1}\cdot(b-a) \\[10pt] = {} & \frac{\text{length of the interval }(a,b)}{\text{length of the interval }(0,N+1)} \\[10pt] = {} & \frac{\text{length of the interval }(k+a,k+b)}{\text{length of the interval }(0,N+1)} \end{align} $$ and that is what the probability would be if $Y=X+U$ had a continuous uniform distribution on $(0,N+1)$. Now just prove that the distribution is determined by the probabilities assigned to intervals lying between two consecutive integers.
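A short simulation (not part of the proof) makes the claim concrete: the empirical CDF of $Y=X+U$ matches $y/(N+1)$, i.e. the CDF of the uniform distribution on $(0,N+1)$. The choice $N=4$ below is just for illustration:

```python
import random

random.seed(1)
N = 4
trials = 200_000
samples = [random.randint(0, N) + random.random() for _ in range(trials)]

for y in [0.5, 1.7, 3.25, 4.9]:
    empirical = sum(s <= y for s in samples) / trials
    print(y, round(empirical, 4), round(y / (N + 1), 4))   # approximately equal
```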
{ "language": "en", "url": "https://math.stackexchange.com/questions/1876818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Spivak's Calculus - Chapter 1 Question 1.5 - Proof by Induction In Spivak's Calculus Fourth Edition, Chapter 1 Question 1.5 is as follows: Prove $x^n - y^n = (x - y)(x^{n-1}+x^{n-2}y+ \cdots + xy^{n-2}+y^{n-1})$ using only the following properties: It's easy enough for me to use P9 to expand the right-hand side: $$ \begin{array} { c l } (x - y)(x^{n-1}+x^{n-2}y+ \cdots + xy^{n-2}+y^{n-1}) & \text{Given} \\ x^{n-1}(x - y)+x^{n-2}y(x - y)+ \cdots + xy^{n-2}(x - y)+y^{n-1}(x - y) & \text{P9} \\ x^n - x^{n-1}y + x^{n-1}y - x^{n-2}y^2 + \cdots + x^2y^{n-2} - xy^{n-1} + xy^{n-1} - y^n & \text{P9} \end{array} $$ Finally, all the terms except the first and last cancel out by P3. I have a feeling this last step should be proved by induction. If so--how would one write that out? ( Also, I've noticed that a lot of people write the application of P9 to polynomials differently than I do when they're working on Spivak's Calculus. Why do they do that? What would it look like here? )
The basic steps of proof by induction are: * *Identify the Induction Hypothesis In this case, it is $P(n) : x^n - y^n = (x-y)(x^{n-1}+x^{n-2}y+\cdots+xy^{n-2}+y^{n-1})$ Note that it is often a good idea to write the Induction Hypothesis using iterative operators, like $\sum$ and $\prod$; doing so usually makes the analysis easier. $$P(n) : x^n - y^n = (x-y)\sum_{i=0}^{n-1}x^{n-1-i}y^i$$ The $P(n) : $ here reads "The proposition over $n$ that the following is true". *Identify the smallest value of $n$ for which the induction hypothesis is true. In this case, $n = 1$ *Assume the Induction Hypothesis is true. *Take the so-called "induction step". That is, increment $n$ from the Induction Hypothesis once--and show that the resulting proposition is still true. What does this accomplish? The above steps are simply a formal way of saying, "If Step 3 is true, Step 4 shows that we can increment $n$ as much as we like--and Step 3 will continue to be true. Because we can start at some minimum value of $n$ and increment it until we've visited every possible value of $n$--and because we've already shown that Step 2 ( the minimum value of $n$ ) is true--we know that Step 3 is true for all $n$ Applying those steps to the original question Step 1: $$P(n) : x^n - y^n = (x-y)\sum_{i=0}^{n-1}x^{n-1-i}y^i$$ Step 2: $$P(1) : x^1 - y^1 = (x-y)\sum_{i=0}^{1-1}x^{1-1-i}y^i$$ $$P(1) : x - y = (x-y)\sum_{i=0}^{0}x^{-i}y^i$$ $$P(1) : x - y = (x-y)(x^{-0}y^0)$$ Note that $x^{-0}$ is defined and $x^{-0} = x^0 = 1$. So, by substitution: $$ P(1) : x - y = (x-y)(1 \cdot 1) \\ P(1) : x - y = (x-y)(1) \\ P(1) : x - y = (x-y) \\ P(1) : true \\ $$ Step 3: $$P(n) : x^n - y^n = (x-y)\sum_{i=0}^{n-1}x^{n-1-i}y^i \text{ is true by assumption.}$$ Step 4: $$P(n+1) : x^{n+1} - y^{n+1} = (x-y)\sum_{i=0}^{n+1-1}x^{n+1-1-i}y^i \text{ By Induction Hypothesis}$$ $$P(n+1) : x^{n+1} - y^{n+1} = (x-y)\sum_{i=0}^{n}x^{n-i}y^i$$ $$P(n+1) : x^{n+1} - y^{n+1} = (x-y)(x^n+x^{n-1}y+x^{n-2}y^2+\cdots+x^2y^{n-2}+xy^{n-1}+y^n)$$ $$P(n+1) : x^{n+1} - y^{n+1} = x^n(x-y)+x^{n-1}y(x-y)+x^{n-2}y^2(x-y)+\cdots+x^2y^{n-2}(x-y)+xy^{n-1}(x-y)+y^n(x-y)$$ $$P(n+1) : x^{n+1} - y^{n+1} = x^{n+1} - y^{n+1}$$ $$QED$$ Each operation in the above proof can be justified using P1 through P12.
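This is outside Spivak's axiomatic development (which uses only P1-P12), but if you want an independent check of the identity itself for small $n$, here is a short SymPy sketch:

```python
from sympy import symbols, expand

x, y = symbols('x y')

for n in range(1, 8):
    rhs = (x - y) * sum(x**(n - 1 - i) * y**i for i in range(n))
    assert expand(rhs - (x**n - y**n)) == 0
print("identity holds for n = 1..7")
```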
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is $x^{-0}$ defined? In the form of mathematics that most of humanity is taught, the following operation is undefined: $\Large{\frac{x}{0}}$ But, how about the following operation? $\Large{x^{-0}}$ Is the following statement true? $\Large{x^{-0}=\frac{1}{x^0}=\frac{1}{1}=1}$
$x^y=e^{y\log(x)}$, so $x^{0}$ and $x^{-0}$ are both defined, and equal, for $x>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Binomial identity: Clever idea for a short proof? When answering this question I had to cope with this binomial identity: The following holds true \begin{align*} \sum_{i=0}^k\binom{k}{i}\sum_{j=0}^i\binom{i}{j}(-1)^{k-i}j^{i-j} =\sum_{i=0}^k\binom{k}{i}\sum_{j=0}^i\binom{i}{j}(-1)^{i-j}j^{k-i}\qquad\qquad k\geq 0 \end{align*} Although LHS and RHS look very similar, I have troubles to find a short transformation from one side to the other. At the time the proof looks like \begin{align*} \sum_{i=0}^k&\binom{k}{i}\sum_{j=0}^i\binom{i}{j}(-1)^{k-i}j^{i-j}\qquad&\\ &=\sum_{i=0}^k\binom{k}{k-i}\sum_{j=0}^{k-i}\binom{k-i}{j}(-1)^{i}j^{k-i-j}\qquad& i\longrightarrow k-i \\ &=\sum_{i=0}^k\binom{k}{k-i}\sum_{j=i}^{k}\binom{k-i}{j-i}(-1)^{i}(j-i)^{k-j}\qquad& j\in[0,k-i]\longrightarrow j\in[i,k] \\ &=\sum_{j=0}^k\sum_{i=0}^{j}\binom{k}{k-i}\binom{k-i}{j-i}(-1)^{i}(j-i)^{k-j}\qquad&\text{exchange sums}\\ &=\sum_{i=0}^k\sum_{j=0}^{i}\binom{k}{k-j}\binom{k-j}{i-j}(-1)^{j}(i-j)^{k-i}\qquad&i\longleftrightarrow j\\ &=\sum_{i=0}^k\sum_{j=0}^{i}\binom{k}{k-i+j}\binom{k-i+j}{j}(-1)^{i-j}j^{k-i}\qquad&j\longrightarrow i-j\\ &=\sum_{i=0}^k\sum_{j=0}^{i}\binom{k}{i}\binom{i}{j}(-1)^{i-j}j^{k-i}\qquad&\\ \end{align*} and the claim follows. Since this derivation looks somewhat cumbersome i am wondering if there is a more efficient index transformation possible or another shorter way to prove this identity.
Let $K$ be a set with $k$ elements. Both sides can be interpreted as counting, with signs, choices of * *a subset $I$ of $K$, *a subset $J$ of $I$, and *either a function $K \setminus I \to J$ (on the LHS) or a function $I \setminus J \to J$ (on the RHS). Picking $I$ and $J$ is equivalent to picking a partition of $K$ into three disjoint subsets, namely $K \setminus I, I \setminus J$, and $J$. The LHS and the RHS are related by switching $K \setminus I$ and $I \setminus J$, which is what your index transformations are accomplishing.
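A quick numerical check of the identity (separate from the combinatorial argument), using Python's `math.comb`; note that Python's `0**0 == 1`, which matches the empty-function convention in the interpretation above:

```python
from math import comb

def lhs(k):
    return sum(comb(k, i) * comb(i, j) * (-1)**(k - i) * j**(i - j)
               for i in range(k + 1) for j in range(i + 1))

def rhs(k):
    return sum(comb(k, i) * comb(i, j) * (-1)**(i - j) * j**(k - i)
               for i in range(k + 1) for j in range(i + 1))

for k in range(9):
    print(k, lhs(k), rhs(k), lhs(k) == rhs(k))
```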
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
triple vector product: vector vs gradient I think there's a simple explanation for this, but could not find one from a few online searches. The triple vector product and the curl of $\mathbf{A}\times \mathbf{B}$ have very similar forms, however there are additional terms in the differentiation case: $ \mathbf{A} \times (\mathbf{B} \times \mathbf{C}) = \mathbf{B} (\mathbf{A} \bullet \mathbf{C}) - \mathbf{C} (\mathbf{A} \bullet \mathbf{B}) \\ \nabla \times (\mathbf{B} \times \mathbf{C}) = \mathbf{B} (\nabla \bullet \mathbf{C}) - \mathbf{C} (\nabla \bullet \mathbf{B}) + (\mathbf{C} \bullet \nabla)\mathbf{B} - (\mathbf{B} \bullet \nabla)\mathbf{C} $ Can someone explain why? I'm familiar with Einstein notation if that helps. Thank you for your help!
As stated above, the first expression given is simply a product of vectors, which can be expressed in terms of the dot product. The second involves differentiation acting on a product. The product rule for vector differentiation will inevitably lead to the extra terms.
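If it helps to see the product rule actually producing those extra terms, here is a small SymPy sketch (not part of the original answer) that checks the identity componentwise for two arbitrary polynomial fields; `curl`, `div` and `ddot` are helper functions defined here, not library calls:

```python
from sympy import symbols, diff, Matrix

x, y, z = symbols('x y z')
coords = (x, y, z)

B = Matrix([x**2 * y, y * z, x + z**2])   # any smooth fields would do
C = Matrix([y + z, x * z**2, x * y * z])

def curl(V):
    return Matrix([diff(V[2], y) - diff(V[1], z),
                   diff(V[0], z) - diff(V[2], x),
                   diff(V[1], x) - diff(V[0], y)])

def div(V):
    return sum(diff(V[i], coords[i]) for i in range(3))

def ddot(A, V):   # (A . nabla) V, applied componentwise
    return Matrix([sum(A[i] * diff(V[j], coords[i]) for i in range(3)) for j in range(3)])

lhs = curl(B.cross(C))
rhs = B * div(C) - C * div(B) + ddot(C, B) - ddot(B, C)
print((lhs - rhs).expand())   # the zero vector
```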
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is $\mathbb{R}$ with the particular point topology and open half line topology compact? Consider $(\mathbb{R}, \tau_{p})$ and $(\mathbb{R}, \tau_l)$ (sometimes called the "ray topology") where $\tau_{p} = \{U \subseteq \mathbb{R} : p \in U\}\cup\{\varnothing\}$, and $\tau_l = \{(a, \infty)| a \in \mathbb{R}\}\cup\{\varnothing, \mathbb{R}\}$ Question: Are $(\mathbb{R}, \tau_{p})$ and $(\mathbb{R}, \tau_l)$ compact? * *Let $\mathcal{U}$ be an open cover of $(\mathbb{R}, \tau_{p})$, then it necessarily contains $\{\mathbb{R}\}$, since $p \in \mathbb{R}$. Then we can remove all other open sets, leaving only $\{\mathbb{R}\}$. So every open cover has a finite subcover. *Let $\mathcal{U}$ be an open cover of $(\mathbb{R}, \tau_{l})$. Since $\mathbb{R}$ is uncountable, to produce a finite subcover, we must remove all but a finite number of sets in $\mathcal{U}$. Suppose we have removed all but a finite number of sets in $\mathcal{U}$. Since $\mathbb{R}$ has no least element, we can always find $a \in \mathbb{R}$ such that no set in the subcover contains it, therefore $(\mathbb{R}, \tau_{l})$ is not compact. I know my arguments sort of suck; I would appreciate it if someone could check whether these are correct, and any improvements to my arguments are appreciated!
* *Careful! Not every open cover needs to contain $\mathbb{R}$. You will have to use a different approach. *This is fine, but actually I don't think you need uncountability of $\mathbb{R}$ here. For instance, I think your argument would work for $\mathbb{Z}$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluating the definite integral $\int_{-1}^1 \lfloor \arccos x \rfloor \,dx$ involving the greatest integer function Evaluating the definite integral $$\int_{-1}^1 \lfloor \arccos x \rfloor \,dx .$$ I tried it but was unable to do it, as the integrand is discontinuous. Somebody told me that you should do this by drawing a graph---how? Can anybody please give me a start?
Hint: Use that arccosine is a decreasing function, and that $\arccos(cos x)=x$ if $0\le x\le \pi$. Hence $f(x)=\lfloor\arccos x\rfloor$ is a step function, defined by: $$f(x)=\begin{cases}3 &\text{if }\quad\!{-1}\le x\le\cos 3,\\2 &\text{if } \cos 3< x \le \cos 2, \\ 1 &\text{if } \cos 2< x\le \cos 1,\\ 0 &\text{if } \cos 1< x\le 1.\end{cases}$$ You should find $\enspace 3+\cos 1+\cos 2+\cos 3$.
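For a numerical cross-check (not part of the hint), a fine midpoint Riemann sum agrees with $3+\cos 1+\cos 2+\cos 3 \approx 2.134$:

```python
import numpy as np

n = 2_000_000
h = 2.0 / n
xs = np.linspace(-1, 1, n, endpoint=False) + h / 2   # cell midpoints in (-1, 1)
riemann = np.floor(np.arccos(xs)).sum() * h

exact = 3 + np.cos(1) + np.cos(2) + np.cos(3)
print(riemann, exact)   # both ~2.1342
```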
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Truth tables for extremely long expressions One of the questions from my text book says to Write the truth table for the expression: $$ p \vee ( \neg (((\neg p \vee q) \rightarrow q) \land p )) $$ and state whether it is a tautology, contradiction or neither. Please do not think I am asking you to do my work for me, but I don't understand how such a large expression can have a truth table that wouldn't take extremely long to write? I understand how to write truth tables for smaller expressions such us $$ p \rightarrow \neg ( p \land q ) $$ Where the columns would look like this: | $p$ | $q$ | $p \land q$ | $\neg (p \land q)$ | $p \rightarrow \neg (p \land q)$ And then I can fill p and q with T, T, F, F and T, F, T, F respectively and work out the values for the rest. Regardless of being a much smaller expression, it still has 5 columns, and I was wondering if there was a better way to write the truth tables for larger expressions?
Since there are only two variables in your expression, you will only need to evaluate it four times. Using the method you described, you can evaluate the expression outwards one step at a time using the following columns: $p\ |\ q\ |\ \lnot p\lor q\ |\ \left(\lnot p\lor q\right)\rightarrow q\ |\ \left(\left(\lnot p\lor q\right)\rightarrow q\right)\land p\ |\ p \lor \left(\lnot\left(\left(\left(\lnot p\lor q\right)\rightarrow q\right)\land p\right)\right)$ Really, the only thing of interest is the last column; the others are merely intermediate steps. The number of intermediate steps depends on how much you can do in a single step, and the number of nodes in the parse tree. The latter only grows linearly with the length of the expression. By the way, computing the value of the expression in all four cases is not the most efficient way to go about this problem. Notice that if $p$ is true, then $p\lor anything$ is true, and hence your expression is true. Similarly, if $p$ is false, then $\lnot\left(anything\land p\right)$ is true, and therefore your expression is true. Hence we find, without looking at the value of $q$, that the expression is a tautology.
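If you do want the full table, it is also easy to generate mechanically. A small Python sketch (the helper names `implies` and `expr` are just for illustration):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def expr(p, q):
    # p v (not(((not p v q) -> q) and p))
    return p or (not (implies((not p) or q, q) and p))

rows = [(p, q, expr(p, q)) for p, q in product([True, False], repeat=2)]
for row in rows:
    print(row)
print("tautology:", all(value for _, _, value in rows))
```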
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Number of eigenvalues and their eigenspaces So let matrix $A$ have eigenvalues as follows: $$ e_1=0\\ e_2=0\\ e_3=2\\ e_4=2\\ $$ From here, can we deduce that the dimension of the eigenspace for the eigenvalue $2$ is $2$? If we could deduce that, we could also deduce that the dimension of the nullspace is $2$, since $e_1=e_2=0$ are two eigenvalues equal to $0$. To state the question more clearly: can we conclude that the dimension of the eigenspace of a specific eigenvalue equals the number of repetitions of that eigenvalue?
No, think of $$ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 2 \\ \end{bmatrix}. $$ The eigenspace relative to the eigenvalue $0$ has dimension $1$, generated by $$ \begin{bmatrix} 1\\ 0 \\ 0 \\ 0\\ \end{bmatrix}. $$ The eigenspace relative to the eigenvalue $2$ has dimension $1$, generated by $$ \begin{bmatrix} 0\\ 0 \\ 1 \\ 0\\ \end{bmatrix}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Landau notation on $f(n) = \frac{1}{4n \tan(\frac{\pi}{n})}$ Does someone would have time to show me how to use the Landau notation "Big O"? A useful example could be on $f(n) = \frac{1}{4n \tan(\frac{\pi}{n})}$.
One has, by the Taylor series expansion, as $x \to 0$, $$ \frac{1}{1+x}=1+O(x),\qquad \tan x=x+O(x^3),\tag1 $$ then $$ \frac{x}{\tan x}=\frac{x}{x+O(x^3)}=\frac1{1+O(x^2)}=1+O(x^2)\tag2 $$ Hence, as $n \to \infty$, we have $\dfrac{\pi}n \to 0$, and $$ \begin{align} f(n) := \frac{1}{4n \tan(\frac{\pi}{n})} =\frac1{4\pi}\cdot \frac{\frac{\pi}{n}}{\tan(\frac{\pi}{n})} =\frac1{4\pi}\cdot \left( 1+O\left(\frac1{n^2}\right) \right) =\frac1{4\pi}+O\left(\frac1{n^2}\right) \tag3 \end{align} $$ as wanted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1877994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Derivative of $\operatorname{trace}(XWW^{T}X^{T})$ with respect to $W$ Compute $$\frac{d}{dW}\operatorname{trace}(XWW^T X^T)$$ where $X$, $W$ are $n\times n$ real matrices.
A different solution from the one I proposed using the Matrix Cookbook equation $(116)$ (if you are not too familiar with matrix calculus) involves taking these products, then writing them out using index notation: $$V=WW^T$$ $$A=XV$$ $$B=AX^T$$ Hence: $$v_{ij}=\sum_kw_{ik}w^T_{kj}$$ $$a_{mj}=\sum_ix_{mi}v_{ij}$$ $$b_{mn}=\sum_ja_{mj}x^T_{jn}=\sum_{i,j,k}x_{mi}w_{ik}w_{kj}^Tx_{jn}^T$$ $$\operatorname{trace}(B)=\sum_{m}b_{mm}=\sum_{i,j,k,m}x_{mi}w_{ik}w_{kj}^Tx_{jm}^T$$ Let's suppose we want to find the element of index $(r,s)$ of the resulting derivative matrix. Only two groups of terms in the trace contain the variable $w_{rs}$, namely: $$\sum_{j,m}x_{mr}w_{rs}w_{sj}^Tx_{jm}^T \qquad \text{and}\qquad \sum_{i,m}x_{mi}w_{is}w_{sr}^Tx_{rm}^T$$ All other terms vanish because they are independent of $w_{rs}$. These two sums are in fact the same, hence: $$\frac{d \operatorname{trace}(B)}{d w_{rs}}=2\sum_{i,m}x^T_{rm}x_{mi}w_{is}=2\sum_i \left(\sum_m x_{rm}^Tx_{mi}\right)w_{is}$$ The sum over $m$ clearly represents an element of the matrix $X^TX$, so that the sum over $i$ is an element from the matrix $X^TXW$. Finally: $$\frac{d \operatorname{trace}(XWW^TX^T)}{dW}=2X^TXW$$
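A numerical finite-difference check of the final formula (separate from the derivation above), with randomly chosen $X$ and $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, n))
W = rng.standard_normal((n, n))

def f(W):
    return np.trace(X @ W @ W.T @ X.T)

analytic = 2 * X.T @ X @ W

eps = 1e-6
numeric = np.zeros_like(W)
for r in range(n):
    for s in range(n):
        E = np.zeros_like(W)
        E[r, s] = eps
        numeric[r, s] = (f(W + E) - f(W - E)) / (2 * eps)

print(np.max(np.abs(numeric - analytic)))   # tiny, i.e. the two gradients agree
```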
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about whether $(x^2)^{0.5} = x$. The Wikipedia page on exponentiation suggests that the following identity holds provided the base $b$ is non-zero: $$(b^m)^n = b^{mn}$$ Consider the following function: $$y = (x^2)^{0.5}$$ According to the identity above the following should hold: $$y = (x^2)^{0.5} = x^1$$ However consider the plots of the two functions: $$y = x$$ $$y = (x^2)^{0.5}$$ The functions are equal for $x \geq 0$ however for $x < 0$ there is a discrepancy. Could you comment on the use of exponentiation rules? Are there other circumstances that similar discrepancies can be found when applying exponentiation rules?
The page you refer to actually says the following. (The emphasis below is mine.) The following identities hold for all integer exponents, provided that the base is non-zero $0.5$ is not an integer, so the property does not apply. But as you've seen, $(x^2)^{0.5} = x$ is true for $x \ge 0$. The reason it fails for $x < 0$ is because raising to the exponent $0.5$ is the same as taking the square root (as long as we're working only with real numbers, which I'm assuming is the case here). That is, $y^{0.5} = \sqrt{y}$. And when we simply say $\sqrt y$, it is understood that we are talking about the positive square root. For example, $\sqrt9 = 3$, not $-3$. In general when we take the square root of a square, we have $\sqrt{x^2} = |x|$. For example, if $x = -4$ then we have $$(x^2)^{0.5} = ((-4)^2)^{0.5} = (16)^{0.5} = 4 \ne x.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Spivak Calculus - Chapter 1 Question 4.6 In Spivak's Calculus, Chapter 1 Question 4.6: Find all the numbers $x$ for which $x^2+x+1>2$ The chapter focuses on using the following properties of numbers to prove solutions are correct: Based on those properties, I am able to perform the following algebra: $ \begin{align} x^2 + x + 1 &> 2 & \text{Given}\\ x (x + 1) + 1 &> 2 & \text{P9}\\ x (x+1) &> 1 & \text{P3 P2 and Addition} \end{align} $ And from there, I can note that: $ \begin{align} x &\neq (x+1)^{-1}\\ x^{-1} &\neq (x+1)\\ \end{align} $ By P6, because $x (x+1) > 1$ and $x (x+1) \neq 1$. However, in his book Spivak is able to find the following: $ \begin{align} x &> \frac{-1+\sqrt{5}}{2} \text{ or}\\ x &< \frac{-1-\sqrt{5}}{2} \end{align} $ How does he come to that conclusion using only the properties listed above?
Complete the Square $ \begin{align} x^2+x+1&>2 & \text{Given}\\ x^2+x+1+0&>2+0 & \text{By Addition}\\ x^2+x+1+0&>2 & \text{By P2}\\ x^2+x+0+1&>2 & \text{By P4}\\ x^2+x+\left( \frac{1}{2} \right)^2+(-1)\left( \frac{1}{2} \right)^2+1 &>2 & \text{By P3}\\ \left(x+\frac{1}{2}\right)\left(x+\frac{1}{2}\right)+(-1)\left( \frac{1}{2} \right)^2+1 &>2 & \text{By P9}\\ \left(x+\frac{1}{2}\right)\left(x+\frac{1}{2}\right)+ (-1)\left( \frac{1}{4} \right) + 1 &> 2 & \text{By Multiplication}\\ \left(x+\frac{1}{2}\right)\left(x+\frac{1}{2}\right) &> \left( \frac{5}{4} \right) & \text{By Addition, P3, and P2}\\ \end{align} $ Spivak doesn't formally define exponents in Chapter 1, so it's a little difficult to finish the proof using only the properties listed in the chapter. But it is at least clear how to get to Spivak's result from there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find the least squares solution for rank deficient system Find the least squares solution to the system $$x - y = 4$$ $$x - y = 6$$ Normally if I knew what the matrix $A$ was and what $b$ was I could just do $(A^TA)^{-1} A^Tb$, but in this case I'm not sure how to set up my matrices. How can I find the least square solution to the system?
Your matrix is just the coefficients of your system of equations. In this case $$ x-y = 4 $$ $$ x-y = 6 $$ leads to $$ \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 4 \\ 6 \end{bmatrix} $$ but you should see that there is no solution to this since you can't have $x-y$ be both $4$ and $6$...
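The point above is right that there is no exact solution. For completeness (this goes beyond the answer itself), here is a NumPy sketch of what the least-squares solution looks like for this rank-deficient system: since $A^TA$ is singular, the formula $(A^TA)^{-1}A^Tb$ cannot be used directly, and `lstsq`/`pinv` return the minimum-norm least-squares solution instead:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0, -1.0]])
b = np.array([4.0, 6.0])

sol, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(sol, rank)                 # [ 2.5 -2.5], rank 1
print(np.linalg.pinv(A) @ b)     # same minimum-norm solution
print(A @ sol)                   # [5. 5.]: x - y = 5 is the best compromise between 4 and 6
```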
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Example of $2\times2$ matrix with singular values $0$ and $5$ but only has one eigenvalue $0$. My understanding of singular values is that they are the square roots of eigenvalues, but I am definitely missing something here in the definition. The problem I am trying to work on is: Find an example of $T \in L(\mathbb{C}^2)$ such that $0$ is the only eigenvalue of $T$ and the singular values of $T$ are $5$, $0$.
The singular values of $A$ are the square roots of the eigenvalues of $A^*A$ (or of $AA^*$), not those of $A$ itself. For one thing, $A$ could have negative eigenvalues. For your specific problem, try $A=\begin{bmatrix}0&5\\0&0\end{bmatrix}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve for transform parameters given original and transformed vectors I have some transformation in 3D homogeneous coordinates that includes three-axis rotation, translation and strain (linear deformation): $$\\ T P = P' \\ T = D R_z R_y R_x S $$ $$ D = \begin{bmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix}\\ R_z = \begin{bmatrix} \cos a & -\sin a & 0 & 0 \\ \sin a & \cos a & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\\ R_y = \begin{bmatrix}\cos b & 0 & \sin b & 0 \\ 0 & 1 & 0 & 0 \\ -\sin b & 0 & \cos b & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ R_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos c & -\sin c & 0 \\ 0 & \sin c & \cos c & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ S = \begin{bmatrix} m_x & 0 & 0 & 0 \\ 0 & m_y & 0 & 0 \\ 0 & 0 & m_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$ $P$ and $P'$ are identically-sized $4 \times n$, where $n$ is the number of points. I have some control over the number of points, but it should probably be at least 3. Given $P$ and $P'$, I need to solve for all of the transformation parameters: $a, b, c, d_x, d_y, d_z, m_x, m_y, m_z$. If $n$ is set to 4, then at least $T = P'/P$ is a straightforward matrix inversion, though $n$ might end up being higher and I would probably have to use a pseudoinverse. I will be using a numerical library (not yet chosen), but for the time being: * *How much of this solution can be done analytically? *Is there a way of re-forming the problem to make the solution easier? (I considered spherical coordinates but I'm not very sure how I would go about that) I tried a couple of brute-force numerical multivariate nonlinear solvers on this and they both (unsurprisingly) choked.
@dovalojd's answer is pretty good. There are better methods, though, of determining the nautical angles. You can use a test vector to eliminate the roll, $\theta_x$: $ \hat n = Q\hat x \\ = R_z R_y R_x \hat x \\ = R_z R_y \hat x \\ = R_z \begin{bmatrix} \cos \theta_y \\ 0 \\ -\sin \theta_y \end{bmatrix} \\ \hat n = \begin{bmatrix} \cos \theta_y \cos \theta_z \\ \cos \theta_y \sin \theta_z \\ -\sin \theta_y \end{bmatrix} $ With the roll gone, you can focus on the other two. First, you find the yaw, $\theta_z$, using yaw = atan2(nx, ny). Then you calculate the pitch, $\theta_y$, using s = hypot(nx, ny); pitch = atan2(s, -nz). Though asin could be used directly on $-n_z$ since $\theta_y \in [-\frac \tau 4,\frac \tau 4]$, it is numerically unstable around $\pm \frac \tau 4$, so it should be avoided. atan2 doesn't suffer this problem. To get the roll now, undo the yaw and pitch. First, the yaw. Recover $\cos \theta_z$ and $\sin \theta_z$ directly from $\hat n$: $ \cos \theta_z = \frac {n_x} s \\ \sin \theta_z = \frac {n_y} s $ Use these to reconstruct $R_z$, then apply $R_z^T$ to $Q$'s left side, leaving $R_y R_x$. Do the same thing with $R_y$, leaving you with just $R_x = \begin{bmatrix} 1 & & \\ & \cos \theta_x & -\sin \theta_x \\ & \sin \theta_x & \cos \theta_x \end{bmatrix} $ The roll can then be determined with one last atan2: roll = atan2(Rx_yy, Rx_yz). Again atan2 is preferred for stability reasons, but also to automatically cover the $\theta_x \notin [-\tau/4, \tau/4]$ case. For slightly better accuracy, you can repeat the roll calculation with the other two non-trivial elements of $R_x$ and do an average. So, that's all three nautical angles! Though this answers the original question, I feel obligated to add that analysis using these angles suffers from gimbal lock, so if you can do your analysis using Q, maybe you should. That or quaternions, spinors, Clifford algebras, or some other kind of posh upper class mathemagicks.
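Not part of the answer above, but here is a small end-to-end numerical sketch in NumPy, under a few assumptions: standard right-handed rotation matrices (the signs may differ from the conventions in the question), strictly positive scale factors, and pitch strictly between $-\tau/4$ and $\tau/4$; also note that `np.arctan2` takes its arguments as `(y, x)`:

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Ground-truth parameters: yaw a, pitch b, roll c, scales m, translation d
a, b, c = 0.4, -0.7, 1.1
m = np.array([2.0, 0.5, 1.5])
d = np.array([1.0, -2.0, 3.0])

T = np.eye(4)
T[:3, :3] = Rz(a) @ Ry(b) @ Rx(c) @ np.diag(m)
T[:3, 3] = d

# Synthetic data: n >= 4 points in homogeneous coordinates
rng = np.random.default_rng(0)
P = np.vstack([rng.standard_normal((3, 10)), np.ones((1, 10))])
Pp = T @ P

# Step 1: recover T by least squares (solves P^T T^T = P'^T)
T_est = np.linalg.lstsq(P.T, Pp.T, rcond=None)[0].T

# Step 2: decompose T_est
M = T_est[:3, :3]
d_est = T_est[:3, 3]                 # translation: the last column
m_est = np.linalg.norm(M, axis=0)    # column norms of R @ diag(m) give the scales
Q = M / m_est                        # what remains is the pure rotation

b_est = np.arctan2(-Q[2, 0], np.hypot(Q[0, 0], Q[1, 0]))   # pitch
a_est = np.arctan2(Q[1, 0], Q[0, 0])                        # yaw
c_est = np.arctan2(Q[2, 1], Q[2, 2])                        # roll
print(a_est, b_est, c_est, m_est, d_est)   # recovers a, b, c, m, d
```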
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability Book For Beginners. I am a first-year graduate student. I have not taken a course in probability, so I have no background in it. Could you suggest a probability book, at any level?
I think the book Probability Theory by Heinz Bauer is a very good text on probability theory. It contains an extensive discussion of all the basic parts of the theory and is very readable. The book requires, however, a modest background in measure theory. The original version of the book from 1973, Probability Theory and Elements of Measure Theory, contains all the necessary background in measure theory. Later versions of the book are split into two books, the parts on measure and probability theory are published as Measure and Integration Theory and Probability Theory, respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Why can we add an element on both sides like this? Suppose $A$ is a set equipped with a binary operation $+$. How can I prove that if $a=b$ then $a+c=b+c$? Why can we add an element on both sides like this?
A simple explanation is that saying $a=b$ literally says that $a$ and $b$ are the exact same mathematical object. Hence 'adding' $c$ to $a$ on the right is one and the same as 'adding' $c$ to $b$ on the right (adding in quotes because what I am really referring to is the given binary operation). Note that the converse, $a+c=b+c \Rightarrow a=b$, is not guaranteed to be valid unless every element of the set $A$ has unique right inverses, and in such a case, we say that $c$ has right inverse $c_R^{-1}$ and we can apply that right inverse to both sides to obtain $a+c+c_R^{-1}=b+c+c_R^{-1}\Rightarrow a+e=b+e$ where $e$ is an identity element (required for the notion of a right inverse to make sense) and thus $a+e=b+e\Rightarrow a=b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1878919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How to compute the limit of $\frac{n}{n^2+1^2}+\frac{n}{n^2+2^2}+\frac{n}{n^2+3^2}+\cdots+\frac{n}{n^2+n^2}$ without using Riemann sums? How to compute the limit of $\frac{n}{n^2+1^2}+\frac{n}{n^2+2^2}+\frac{n}{n^2+3^2}+\cdots+\frac{n}{n^2+n^2}$ without using Riemann sums? My try: I have solved it using the limit-as-a-sum technique (a Riemann sum of an integral), but I did not understand how to solve it using the Squeeze Theorem or any other way.
One may use the digamma function, from the standard $ \psi(z+1)-\psi(z)=\dfrac1z$ one obtains easily $$ \psi(z+n+1)-\psi(z+1)=\sum_{k=1}^n\frac1{z+k}, $$ inserting $z:=in$ and considering imaginary parts gives $$ \sum_{k=1}^n\frac{n}{n^2+k^2}=-\text{Im}\left[\psi(in+n+1)-\psi(in+1)\right] $$ then one may recall that (see $6.3.18$ here) $$ \psi(z)=\log z+O\left(\frac1z \right), \quad z \to \infty, \quad |\mathrm{arg}z|<\pi, $$ which yields, as $n \to \infty$, $$ \sum_{k=1}^n\frac{n}{n^2+k^2}=-\text{Im}\left[\log(1+i)-\log i \right]+O\left(\frac1n \right)=\frac\pi4+O\left(\frac1n \right) \to \frac\pi4. $$ Remark. By considering more asymptotic terms in $\psi$, one gets, $$ \sum_{k=1}^n\frac{n}{n^2+k^2}=\frac\pi4-\frac1{4n}-\frac1{24n^2}+\frac1{2016n^6}+O\left(\frac1{n^8} \right) $$ $\displaystyle \log z$ denotes the principal value of the logarithm defined by $$ \begin{align} \displaystyle \log z = \ln |z| + i \: \mathrm{arg}z, \quad -\pi <\mathrm{arg} z \leq \pi,\quad z \neq 0. \end{align} $$
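A quick numerical illustration (not part of the proof): the partial sums approach $\pi/4$ and track the refined asymptotic expansion closely:

```python
from math import pi

def S(n):
    return sum(n / (n*n + k*k) for k in range(1, n + 1))

for n in [10, 100, 1000, 10000]:
    approx = pi/4 - 1/(4*n) - 1/(24*n**2)
    print(n, S(n), approx, S(n) - pi/4)
```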
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What does being "Linear" mean for a transformation and a function intuitively/graphically? I was wondering what is the geometric meaning or intuition behind a transformation and function(separately)being linear. An example(or graph) illustrating the characteristics of a linear function/map would be much appreciated. Thanks in advance. EDIT: Also, I have read that if we "zoom in" the graphs of some functions, we see that they "become linear around the point of magnification"(you can find it in Callahan's Advanced Calculus". What does that mean?
If you have a linear transformation on a space $X$ then the image is a subspace of the space $X$. Geometrically that means that the image of the transformation is a flat that contains the origin in the space. Examples: The easiest example is the 0-transformation. Let $X = \mathbb R ^n$ and $f: X \to X, v \mapsto 0$ then $Im(f) = \{0\}$ and thus just a point. The next easiest example is the identity: Let $X = \mathbb R ^n$ and $f: X \to X, v \mapsto v$ then $Im(f) = X$ and thus the whole space. A more complex example is: Let $X = \mathbb R ^n$ and $f: X \to X, v \mapsto (v_1, 0 , \dots, 0)$ then $Im(f) = \{(v_1, 0 , \dots, 0) \mid v_1 \in \mathbb R\}$ and thus a line. Hope that helps you :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Finding complex eigenvalues and its corresponding eigenvectors $$A = \begin{bmatrix}3&6\\-15&-15\end{bmatrix}$$ has complex eigenvalues $\lambda_{1,2} = a \pm bi$ where $a =$____ and $b = $ ____. The corresponding eigenvectors are $v_{1,2} = c \pm di$ where $c =$ (____ , _____ ) and $d =$ (____ , ___) So I got the char. poly. eqn $\lambda^2 + 12\lambda + 45 = 0$ Then using the quad. eqn I got $-6 \pm 3i$ which I know is in the form $a \pm bi$ so I was thinking $a = -6$, $b = 3$, but instead they have $b = -3$ , why? Also, I'm not sure how to obtain the corresponding eigenvectors because we are working with complex eigenvalues now.
I'll cover how to find the eigenvectors. $$A = \begin{bmatrix}3&6\\-15&-15\end{bmatrix}$$ has eigenvalues $\lambda_{1,2} = -6\pm 3i$. Now to find the associated eigenvectors, we find the nullspace of $$A-\lambda_{1,2}I = \begin{bmatrix}3-(-6\pm 3i)&6\\-15&-15-(-6\pm 3i)\end{bmatrix} = \begin{bmatrix}9\mp 3i & 6 \\ -15 & -9\mp 3i\end{bmatrix}$$ To find the nullspace I'll put this in REF: $$\begin{align}\begin{bmatrix}9\mp 3i & 6 \\ -15 & -9\mp 3i\end{bmatrix} &\sim \begin{bmatrix} 5 & 3\pm i \\ 3\mp i & 2\end{bmatrix} \\ &\sim \begin{bmatrix} 5 & 3\pm i \\ 0 & 2-(3\pm i)\frac{3\mp i}{5}\end{bmatrix} \\ &= \begin{bmatrix} 5 & 3\pm i \\ 0 & 0\end{bmatrix}\end{align}$$ Therefore all of the eigenvectors associated with $\lambda_{1,2}$ are of the form $w_{1,2} = \begin{bmatrix}\frac 15(-3\mp i)t \\ t\end{bmatrix}$. Representative eigenvectors are then $$\bbox[5px,border:2px solid red]{v_{1,2} = \begin{bmatrix}-3\mp i \\ 5\end{bmatrix}}$$ which are obtained by setting $t=5$.
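As a cross-check (not part of the hand computation), NumPy reproduces the eigenvalues and confirms that $(-3-i,\,5)$ is an eigenvector for $\lambda=-6+3i$; NumPy's own eigenvectors are just scalar multiples of the ones found above:

```python
import numpy as np

A = np.array([[3.0, 6.0],
              [-15.0, -15.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                               # -6 +/- 3j
for lam, v in zip(vals, vecs.T):
    print(lam, v, np.allclose(A @ v, lam * v))

v = np.array([-3 - 1j, 5.0])              # hand-computed eigenvector
lam = -6 + 3j
print(np.allclose(A @ v, lam * v))        # True
```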
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to find the Cartesian equation of a plane curve from a parametric equation? More specifically, how to express $$\begin{aligned}x(t) &=\frac{2t}{1+t^2}\\ y(t) &=\frac{1-t^2}{1+t^2}\end{aligned}$$ in terms of $x$ and $y$? I attempted adding the two, getting a square from the numerator, and a few other methods, but kept running out of time. EDIT: please don't work backwards, and try to show this as simply as possible. Overcomplication or overdefining equations may confuse me even more.
Those are the parametric equations of the unit circle $x^2+y^2=1$. In fact $$x^2+y^2=\frac{(2t)^2+(1-t^2)^2}{(1+t^2)^2}=\frac{4t^2+1-2t^2+t^4}{(1+t^2)^2}=\frac{1+2t^2+t^4}{(1+t^2)^2}=1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1879452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }