Power series such that the $n$-th partial sum has $n$ distinct roots I am looking for examples of power series of the form $$\sum_{k=0}^\infty a_k x^k$$ (where $a_k \in \mathbb{C}$ for all $k$) such that the polynomial given by its $n$-th partial sum has $n$ distinct roots, i.e.: $$\sum_{k=0}^n a_k x^k$$ has $n$ distinct roots. So far, I have found this family of examples: $\sum_{k=0}^\infty x^k$ and all of its derivatives. Can you help me find some examples that are not any of these?
Let $a_k$ be a sequence of numbers that are algebraically independent over the rationals. Thus for any nontrivial polynomial $p(x_0, \ldots, x_n)$ with rational coefficients, $p(a_0, \ldots, a_n) \ne 0$. The polynomial $P_n(x) = a_0 + a_1 x + \ldots + a_n x^n$ has a repeated root if and only if its discriminant is $0$. In this case that discriminant is a nontrivial polynomial in $a_0, \ldots, a_n$ with integer coefficients, and therefore must be nonzero.
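Algebraic independence cannot be tested numerically, but the squarefreeness criterion behind the discriminant argument ($P_n$ has $n$ distinct roots over $\mathbb{C}$ if and only if $\gcd(P_n, P_n')$ is constant) can be checked exactly. Here is a sketch (helper names are mine) applying it to the partial sums of $\sum x^k/k!$, a concrete series outside the asker's family:

```python
from fractions import Fraction
from math import factorial

def poly_gcd(a, b):
    # Euclidean algorithm on coefficient lists (constant term first),
    # with exact rational arithmetic so "zero" really means zero
    def strip(p):
        while p and p[-1] == 0:
            p.pop()
        return p
    a, b = strip(list(a)), strip(list(b))
    while b:
        # replace a by the polynomial remainder a mod b
        while len(a) >= len(b):
            c = a[-1] / b[-1]
            d = len(a) - len(b)
            for i, bi in enumerate(b):
                a[i + d] -= c * bi
            strip(a)
            if not a:
                break
        a, b = b, a
    return a

def has_distinct_roots(p):
    # p is squarefree (all complex roots distinct) iff gcd(p, p') is constant
    dp = [k * c for k, c in enumerate(p)][1:]
    return len(poly_gcd(p, dp)) == 1

# partial sums of exp(x): P_n(x) = sum_{k<=n} x^k / k!
for n in range(1, 13):
    P = [Fraction(1, factorial(k)) for k in range(n + 1)]
    assert has_distinct_roots(P)
```

Indeed, a common root $z$ of $P_n$ and $P_n' = P_n - x^n/n!$ would satisfy $z^n/n! = 0$, so $z = 0$; but $P_n(0) = 1 \ne 0$, so these partial sums are always squarefree.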
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On Composite Numbers of the Form $p_{1}p_{2} \ldots p_{k} - 1$ This question is related to D. H. Lehmer's 1932 conjecture on Euler's totient function: Are there any composite $n$ for which $\phi(n)$ divides $n-1$? See, for example: On Lehmer's Totient Conjecture I would like to ask what is known regarding the factors of $p_{1}p_{2} \ldots p_{k} - 1$, where the $p_{i}$ are distinct odd prime numbers? I was not able to find much in some number theory books I consulted. Thank you.
Here are some additional things related to Lehmer's totient conjecture that are known. Definition: If $n$ is composite then $\phi(n)<n−1$, hence there is at least one divisor $d$ of $n−1$ which does not divide $\phi(n)$. We call such a $d$ a totient divisor of $n$. Trivially, if $n$ is prime then it has no totient divisors, and if $n−1$ is prime then $n$ has exactly one totient divisor.

*There are at least $4$ divisors of $n-1$ which do not divide $\phi(n)$ if $n$ is a composite of the form $6k+1$.
*Odd numbers prefer not to have a prime number of totient divisors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does a non square matrix lack a multiplicative identity Precisely, why is the multiplicative identity defined by $IA=AI=A$? Why should both sides work, and why not something like $AI=A$? Is there an underlying advantage?
The use of category theory may make this clearer. Suppose we have a category of objects $\, V_1, V_2, V_3, \dots\,$ An $\,n \times m\,$ matrix $\,A\,$ is an arrow from $\,V_m\,$ to $\, V_n.\,$ Matrix multiplication is only defined between compatible matrices. That is, if $\,B\,$ is an arrow from $\,V_n\,$ to $\,V_k,\,$ then the matrix product $\,B A\,$ is an arrow from $\,V_m\,$ to $\,V_k.\,$ Each object $\,V_n$ has a unique identity arrow denoted by $\,I_n,\,$ which is the $\,n \times n\,$ identity matrix. This gives us the identity $\, A = AI_m = I_nA.\,$ The two identity arrows are not the same unless $\,n=m.\,$
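A minimal sketch in Python (helper names are mine) showing the two different identity arrows acting on a $2\times 3$ matrix:

```python
def matmul(A, B):
    # naive product of nested-list matrices
    n, k, m = len(A), len(B), len(B[0])
    assert len(A[0]) == k, "incompatible shapes"
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2x3 matrix: an arrow V_3 -> V_2

assert matmul(A, identity(3)) == A   # A I_3 = A  (right identity is 3x3)
assert matmul(identity(2), A) == A   # I_2 A = A  (left identity is 2x2)
# identity(2) on the right and identity(3) on the left have incompatible
# shapes with A, which is exactly the point of the answer
```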
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\lim_{x \rightarrow \infty} \frac{f(x)}{g(x)}$ where $f(x)$ is linear and $g(x)$ is either strictly convex or strictly concave I am currently working on a problem where I am interested in the limit of a ratio of functions, $$\lim_{x \rightarrow \infty} \frac{f(x)}{g(x)}.$$ It is known that $f(x)$ is linearly increasing, e.g., $f(x) = x$, while $g(x)$ is either strictly convex or strictly concave. Intuitively (but possibly incorrectly), it seems to me that, for the case when $g(x)$ is strictly convex, the ratio should tend to $0$ in the limit. Is this a well-known result, or is there a good way to show this? For the second case, when $g(x)$ is concave, is it possible to say anything at all about the limit? To me, it seems that it is not, but that more information is required about the limit of $g(x)$ itself to say something conclusive about the limit of the ratio. Many thanks,
The limit can be different from $0$. Actually it can be any real number $m$. Consider for example $f(x)=mx$ and $g(x)=x-\arctan(x)$ which is strictly convex for $x>0$ (or $g(x)=x+\arctan(x)$ for a strictly concave one). Then $$\lim_{x \rightarrow +\infty} \frac{f(x)}{g(x)}=m.$$ It can be also $\pm \infty$. Take $f(x)=\mp x$ and $g(x)=-\arctan(x)$.
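A quick numerical check of the first example (the choice $m=3$ is arbitrary):

```python
import math

m = 3.0
f = lambda x: m * x
g = lambda x: x - math.atan(x)   # g''(x) = 2x/(1+x^2)^2 > 0 for x > 0: strictly convex

# the ratio approaches m as x grows, even though g is strictly convex
for x in [1e3, 1e6, 1e9]:
    print(x, f(x) / g(x))

assert abs(f(1e9) / g(1e9) - m) < 1e-8
```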
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $x,y,z\in\mathbb{R}^+$, prove that $\sqrt{x^2-xz+z^2}+\sqrt{y^2-yz+z^2}\ge\sqrt{x^2+xy+y^2}.$ When I was doing math training, the coach gave an inequality problem. If $x,y,z\in\mathbb{R}^+$, prove that $$\sqrt{x^2-xz+z^2}+\sqrt{y^2-yz+z^2}\ge\sqrt{x^2+xy+y^2}.$$ I tried the brute-force method and found it too hard; it involves a lot of terms. The coach later said that there is a beautiful way to solve this problem. Can someone help me?
Hint: After squaring once we get $$2\sqrt{x^2-xz+z^2}\sqrt{y^2-yz+z^2}\geq xz+yz+xy-2z^2.$$ Squaring again and factorizing, we obtain $$(xy-xz-yz)^2\geq 0,$$ which is true.
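Not a substitute for the proof, but a quick numerical sanity check of the inequality over random positive triples:

```python
import math, random

random.seed(1)

def lhs(x, y, z):
    # both radicands are always nonnegative (negative discriminant in z)
    return math.sqrt(x*x - x*z + z*z) + math.sqrt(y*y - y*z + z*z)

def rhs(x, y):
    return math.sqrt(x*x + x*y + y*y)

for _ in range(10_000):
    x, y, z = (random.uniform(1e-6, 100) for _ in range(3))
    assert lhs(x, y, z) >= rhs(x, y) - 1e-9 * rhs(x, y)
```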
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Not understanding the definition of a differential of a map. I'm reading Loring W. Tu's book on manifolds and I'm stuck at the point where he defines the differential of a smooth map between smooth manifolds $N$ and $M$. If $F:N\rightarrow M$ is a $C^{\infty}$ map between two (smooth) manifolds, then at each point $p\in N$ it induces a linear map of tangent spaces $$F_{*}:T_{p}N\rightarrow T_{F(p)}M$$ as follows. If $X_{p}\in T_{p}N$ then $F_{*}(X_{p})$ is the tangent vector in $T_{F(p)}M$ defined by $$(F_{*}(X_{p}))f=X_{p}(f\circ F) \in \mathbb{R}$$ for $f \in C^{\infty}_{F(p)}(M)$. Can anyone help me out in decoding this and explain why this is a good generalisation of Jacobians?
They talk about this in Example 8.4. Let $F:\mathbb{R}^n\rightarrow\mathbb{R}^m$ be smooth and $p\in \mathbb{R}^n$. Take standard coordinates on $\mathbb{R}^n$ and $\mathbb{R}^m$ via $(x^1,\cdots, x^n)$ and $(y^1,\cdots, y^m)$, respectively. Then the map $F_*:T_p\mathbb{R}^n\rightarrow T_{F(p)}\mathbb{R}^m$ is a linear map. The entries of its matrix representation $[a_j^i]$ relative to the standard bases (for the tangent spaces) are given via the formula $$F_*\left(\frac{\partial}{\partial x^j}\Big|_p\right)=\sum\limits_k a_j^k\frac{\partial}{\partial y^k}\Big|_{F(p)}.$$ To determine the $a_j^i,$ simply evaluate both sides on $y^i$. The right will give you $a_j^i,$ and the left, by the definition of the pushforward, will give you $\frac{\partial F^i}{\partial x^j}(p),$ where $F^i$ denote the coordinates of $F$ in the $y$ coordinate system. Hence, the matrix representation with respect to the bases $\{{\partial}/{\partial x^j}|_p\}$ and $\{{\partial}/{\partial y^j}|_{F(p)}\}$ is $[{\partial F^i}/{\partial x^j}(p)],$ which is the Jacobian that we all know and love. So, this is a good generalization, since it matches what we should get in the Euclidean case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Associated primes in a reduced ring Let $R$ be a reduced ring. Show that $\operatorname{Ass}R$ is the set of minimal prime ideals of $R$. I think that the first inclusion must come from using $\operatorname{Ass}R \subseteq\operatorname{Supp}R$, assuming the ideal is not minimal, and then showing some contradiction. I have no idea how to prove every minimal prime is associated, since the ring is not necessarily Noetherian.
If $R$ is reduced and $P=\operatorname{Ann}(x)$ is an associated prime, suppose $Q$ is a prime properly contained in $P$. Then $(x)P=0\subseteq Q$, and since $P\not\subseteq Q$, primeness of $Q$ forces $x\in Q\subseteq P=\operatorname{Ann}(x)$. But then $x^2=0$, a contradiction. So $P$ was already minimal. The other direction isn't clear to me. In this paper they talk about necessary and sufficient conditions for a reduced ring to have the property that all finitely generated ideals consisting of zero divisors have a nonzero annihilator, so presumably both cases can happen. I see here that "weakly associated primes" are exactly the minimal primes in a reduced ring, though. In that case, minimal primes are obviously weakly associated, because a minimal prime is minimal over $\operatorname{Ann}(1)=\{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3295064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to define measure over infinite sequence of random variables Suppose we are drawing elements from a set $\Omega$ at each time $t\in \mathbb N$. The probability of drawing $\omega\in \Omega$ at time $t$ is given by a conditional distribution $\mu(\omega|\omega_1,\omega_2,\ldots,\omega_{t-1})$ (the function $(\omega_1,\omega_2,\ldots,\omega_{t-1})\to \mu(B|\omega_1,\omega_2,\ldots,\omega_{t-1}) $ is measurable in $\Omega^{t-1}$ for each $B\subseteq \Omega$). How does one formally define the joint probability measure over a realization of an infinite sequence of draws such that the probability of any finite sequence of draws corresponds to the one given by the joint measure induced by the $\mu$'s? Does anyone have a reference for this type of construction? Thanks!
The Kolmogorov Extension Theorem may be able to help. The "Implications of the Theorem" section seems relevant to your question here. It's not constructive at the infinite level, but provides a formalism to link the finite to the infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3295172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Roots of $\sum_{k=0}^n x^k$ How can I go on to show that the roots of $1+x+x^2+x^3+\ldots+x^n$ are exactly $$\exp\left(\frac{2ki\pi}{n+1}\right)$$ for $k=1,\ldots,n$?
$(1-x)(1+x+x^{2}+\dots+x^{n})= 1-x^{n+1}$. So the roots are the same as the roots of $1-x^{n+1}=0$, except for $x=1$.
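A quick check with Python's `cmath` (the choice $n=7$ is arbitrary) that these $n$ points are roots and that $x=1$ is excluded:

```python
import cmath

n = 7
P = lambda x: sum(x**k for k in range(n + 1))   # 1 + x + ... + x^n

for k in range(1, n + 1):
    root = cmath.exp(2j * cmath.pi * k / (n + 1))
    assert abs(P(root)) < 1e-9        # each is a root of the partial sum
    assert abs(root - 1) > 1e-9       # and none of them is x = 1
assert abs(P(1) - (n + 1)) < 1e-9     # x = 1 is the excluded root of 1 - x^{n+1}
```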
{ "language": "en", "url": "https://math.stackexchange.com/questions/3295282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that limits are a "local property" From M. Spivak's Calculus, let me reproduce Problem 10 in the chapter Limits. Suppose there is a $\delta>0$ such that $f(x) = g(x)$ when $0<|x-a|<\delta$. Prove that $\lim_{x\to a}f(x)=\lim_{x\to a}g(x)$. In other words, $\lim_{x\to a}f(x)$ depends only on the values of $f(x)$ near $a$ — this fact is often expressed by saying that limits are a "local property". I would like help identifying whether my proof is correct. Claim: Given that there exists $\delta'$ such that when $0<|x-a|<\delta'$, $f(x) = g(x)$, show that $\lim_{x\to a}f(x)=\lim_{x\to a}g(x)$. Proof: Since by assumption $\lim_{x\to a}f(x)$ and $\lim_{x\to a}g(x)$ exist, we may restate the problem as needing to show that $\lim_{x\to a}\big(f(x)-g(x)\big)=0$. Hence, given $\epsilon>0$, take $\delta:=\delta'$ so that if $0<|x-a|<\delta$, $|f(x)-g(x)-0| = |f(x)-g(x)|=0$. This implies by definition that $|f(x)-g(x)-0|<\epsilon$. $$\tag*{$\blacksquare$}$$
What you have shown is the following: If $\lim \limits_{x \to a}f(x)$ and $\lim \limits_{x \to a}g(x)$ exist, and there is a $\delta'>0$ such that $f(x) = g(x)$ whenever $0<|x-a|<\delta'$, then $\lim \limits_{x \to a}f(x) = \lim \limits_{x \to a}g(x)$. Your proof of this assertion seems fine. As suggested in the comments, you can also prove the following statement (or a similar statement with the roles of $f$ and $g$ reversed): Suppose $\lim \limits_{x \to a}f(x)$ exists, and there is a $\delta' > 0$ such that $f(x) = g(x)$ whenever $0 < |x-a| < \delta'$. Then, $\lim \limits_{x \to a}g(x)$ also exists, and $\lim \limits_{x \to a}f(x) = \lim \limits_{x \to a}g(x)$. The proof of this is very similar to the one you provided. Denote $l:= \lim \limits_{x \to a}f(x)$. Now, let $\varepsilon> 0$ be given. Then, we know there exists a $\delta_1 > 0$ such that if $0 < |x-a| < \delta_1$ then $|f(x) -l| < \varepsilon$. Now, define $\delta := \min(\delta', \delta_1)$. Now, if $0< |x-a|< \delta$ then we have that \begin{align} |g(x) - l| &= |f(x) - l| \\ &< \varepsilon. \end{align} Since $\varepsilon>0$ is arbitrary, this shows $\lim \limits_{x \to a}g(x)$ exists and equals $l$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3295680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I plot $f(x, y) = x^2 + y^2$ I want to plot $f(x, y) = x^2 + y^2$. I can plot functions of a single variable, but I don't know how to plot a multivariable function.
The graph of this function will be a surface in space. Above the point $(x,y)$ in the plane it has height $f(x,y)$. https://www.mathcurve.com/surfaces.gb/paraboloidrevolution/paraboloidrevolution.shtml
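A minimal sketch using Matplotlib (assuming NumPy and Matplotlib are installed) that samples the function on a grid and draws the surface, a paraboloid of revolution:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")        # render off-screen; remove this line for an interactive window
import matplotlib.pyplot as plt

# sample f(x, y) = x^2 + y^2 on a grid over [-2, 2] x [-2, 2]
x = np.linspace(-2, 2, 101)
y = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(x, y)
Z = X**2 + Y**2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("f(x, y)")
fig.savefig("paraboloid.png")
```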
{ "language": "en", "url": "https://math.stackexchange.com/questions/3295839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Is this substitution correct? I think the second equation above is incorrect. It seems to me that the constant k will cancel out. Am I interpreting the equation for the price elasticity of demand incorrectly as equivalent to: $E = \frac{\partial q}{\partial p} * \frac{p}{q} $
Warning: I won't make the $t$ subscripts explicit. The source you quote seems to have a misprint. I'm no economist, but$$E=\frac{p}{q}\frac{\partial q}{\partial p}\implies\frac{\partial\ln y}{\partial p}=\frac{1}{q}\frac{\partial q}{\partial p}=\frac{E}{p}.$$For constant $E$, this integrates to $\ln y=E\ln p+\text{constant}$. I suspect the author was thinking ahead to that step, resulting in $E\ln p$ instead of $\frac{E}{p}$. This is consistent with the first display-line equation after your excerpt, $$\ln y=\beta_0+\beta_1s+\beta_2\ln p+\varepsilon.$$This gives $E=\beta_2,\,\text{constant}=\beta_0+\beta_1s+\varepsilon$ (the integration "constant" need only not depend on $p$, but in this example depends, as it may, on $s,\,\varepsilon$).
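A quick numerical check of the constant-elasticity relationship in the answer (the values $E=-1.7$ and $c=5$ are arbitrary choices, not from the question):

```python
import math

E, c = -1.7, 5.0                 # hypothetical constant elasticity and scale
q = lambda p: c * p**E           # constant-elasticity demand curve

for p in [0.5, 1.0, 2.0, 10.0]:
    h = 1e-6 * p
    dq_dp = (q(p + h) - q(p - h)) / (2 * h)      # central finite difference
    elasticity = (p / q(p)) * dq_dp
    assert abs(elasticity - E) < 1e-4
    # and integrating E/p gives the log-log line with slope E
    assert abs(math.log(q(p)) - (E * math.log(p) + math.log(c))) < 1e-12
```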
{ "language": "en", "url": "https://math.stackexchange.com/questions/3295986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that composition by $\varphi$ is a linear mapping Let $\varphi$ be any mapping from a set $A$ to a set $B$. Show that composition by $\varphi$ is a linear mapping from $\mathbb{R}^B$ to $\mathbb{R}^A$. That is, show that $T:\mathbb{R}^B \rightarrow \mathbb{R}^A$ defined by $T(f) = f \circ \varphi$ is linear. To show that a mapping is linear, we must demonstrate that the vector operations are preserved. I am unclear about the wording here "composition by $\varphi$ is a linear mapping from $\mathbb{R}^B$ to $\mathbb{R}^A$" which implies to me that the composition by $\varphi$ leads to an inverse of $\varphi$ mapping defined previously from set $A$ to set $B$, but I don't know why. I am getting a lot of confusion here and I think that I am interpreting the problem wrong. I believe I have to show that $$f(\varphi_1 + \varphi_2) = f(\varphi_1) + f(\varphi_2)$$ and $$cf(\varphi) = f(c \varphi)$$ to show that the mapping is linear, but I am not sure how to get started.
$T$ is a function on $\mathbb{R}^B$, so you need to show it is linear on this space. That is, if $c\in\mathbb{R},$ and $f,g\in \mathbb{R}^B,$ then you need to show that $$T(cf)=cT(f),$$ and $$T(f+g)=T(f)+T(g).$$ You just need to use composition properties to show these; I'm sure you can take it from here.
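A finite toy example (the sets and values are mine) checking the two linearity identities pointwise:

```python
# phi : A -> B, and functions B -> R represented as dicts
A = ["a1", "a2", "a3"]
B = ["b1", "b2"]
phi = {"a1": "b1", "a2": "b2", "a3": "b1"}   # any map A -> B; not injective here

def T(f):
    # composition by phi: (T f)(a) = f(phi(a))
    return {a: f[phi[a]] for a in A}

f = {"b1": 2.0, "b2": -1.0}
g = {"b1": 0.5, "b2": 3.0}
c = 4.0

f_plus_g = {b: f[b] + g[b] for b in B}
cf = {b: c * f[b] for b in B}

# additivity and homogeneity, checked pointwise on A
assert T(f_plus_g) == {a: T(f)[a] + T(g)[a] for a in A}
assert T(cf) == {a: c * T(f)[a] for a in A}
```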
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Notions of continuity for stochastic processes I would like to receive some clarification regarding the difference between continuous in probability and continuous almost surely. Using the definition of the wikipedia page (which matches the one I have seen in other references), we have Continuous in probability for all $\varepsilon>0$ \begin{equation} \lim_{s\rightarrow t} \mathbb{P}\left(\left\{\omega \in \Omega : |X_s(\omega) - X_t(\omega) | \geq \varepsilon\right\} \right)= 0 \end{equation} and Continuous with probability 1 (almost surely) \begin{equation} \mathbb{P}\left(\left\{\omega \in \Omega : \lim_{s\rightarrow t} |X_s(\omega) - X_t(\omega) | = 0 \right\} \right)= 1 \end{equation} Now, I can see that the second condition is stronger than the first one, but to me they seem analogous, since the first should be valid for arbitrarily close $\varepsilon$. Obviously, my intuition is wrong, but I cannot understand how the difference can be significant. Could you please explain how the two definitions give rise to different processes? Thank you!
1. We first examine a much easier variant, i.e., discrete-time process. Let $Y = (Y_n)_{n\in\mathbb{N}_1}$ be a sequence of independent random variables such that $$ \mathbb{P}(Y_n = 1) = \frac{1}{n} \qquad\text{and} \qquad \mathbb{P}(Y_n = 0) = 1 - \frac{1}{n}. $$ Then it is clear that, for each $\epsilon > 0$, $$ \lim_{n\to\infty} \mathbb{P}( \left| Y_n - 0 \right| > \epsilon) \leq \lim_{n\to\infty} \mathbb{P}( Y_n \neq 0) = \lim_{n\to\infty} \frac{1}{n} = 0. $$ So $Y_n \to 0$ in probability. However, $\sum_{n=1}^{\infty} \mathbb{P}(Y_n = 1) = \infty$, and so, by the second Borel-Cantelli's lemma, $$ \mathbb{P}(Y_n = 1 \text{ infinitely often}) = 1. $$ So it follows that $$ \mathbb{P}\Bigl(\limsup_{n\to\infty} Y_n = 1\Bigr) = 1 \qquad\text{and}\qquad \mathbb{P}\Bigl(\liminf_{n\to\infty} Y_n = 0\Bigr) = 1. $$ So $Y_n$ does not converge as $n\to\infty$ with probability one. Here, the difference between two notions is that the probability $\mathbb{P}(\left|Y_n - 0\right| > \epsilon)$ describes what happens to a 'typical observer $\omega$', and telling that the exceptional event occurs only to a vanishing fraction of observers for each large $n$. However, for a 'fixed observer $\omega$', this exceptional event can happen infinitely often. 2. Now coming back to continuous-time case, we can borrow the above example as follows: Let $Y=(Y_n)_{n\in\mathbb{N}_1}$ be as above, and define $X = (X_t)_{t\geq 0}$ by $$ X_t = \begin{cases} Y_{\lceil 1/t \rceil}, & \text{if } t > 0, \\ 0, & \text{if } t = 0. \end{cases} $$ Then $X_t \to 0$ as $t \to 0$ in probability, whereas $X_t$ does not converge as $t \to 0$ almost surely.
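A small simulation of the sequence $(Y_n)$ above (the sample sizes are arbitrary), illustrating the difference between the two modes of convergence:

```python
import random

random.seed(0)
paths, N = 4000, 1000

at_N = 0          # paths with Y_N = 1 at the fixed time N
later_hit = 0     # paths with Y_n = 1 for some n in (N, 2N]

for _ in range(paths):
    if random.random() < 1.0 / N:
        at_N += 1
    if any(random.random() < 1.0 / n for n in range(N + 1, 2 * N + 1)):
        later_hit += 1

# a typical observer at the fixed time N rarely sees a 1: P(Y_N = 1) = 1/N
assert at_N < paths * 0.05
# yet a success somewhere in (N, 2N] has probability 1 - N/(2N) = 1/2
# (telescoping product), no matter how large N is: on a fixed path,
# the exceptional event keeps happening
assert 0.3 * paths < later_hit < 0.7 * paths
```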
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Characteristic of a direct product of rings Let $R_i$ be rings and let $R = \prod_{i=1}^{n}R_i = R_1 \times R_2 \times \dots \times R_n$ be their direct product. Say $\operatorname{Char}(R_i) = m_i$. (1) $\operatorname{Char}(R) = \operatorname{lcm}(m_1,m_2,\dots,m_n)$. If all the rings $R_i$ are commutative, statement (1) is surely true. But what if some of the rings are not commutative? Is statement (1) still true regardless of whether the $R_i$ are commutative or not? Thanks.
Generally, if you have two groups $G_1$ and $G_2$, then the order of an element $(g_1, g_2) \in G_1 \times G_2$ is simply $\operatorname{lcm}(\operatorname{ord}(g_1), \operatorname{ord}(g_2))$. If the order of one of the elements is infinite, we simply take the lcm to be infinite. The underlying additive group of any ring is abelian, and the characteristic of a nonzero unital ring is just the order of the multiplicative identity $1$ in that abelian group. So commutativity of multiplication plays no role.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$x,y,z$ are all strictly positive, $x+y+z=1$, what is $\max(xyz)$ $x,y,z$ are all strictly positive, $x+y+z=1$, what is $\max(xyz)$? My attempt: I used the rand() function in Microsoft Excel to generate random numbers between $0$ and $1$ for the values of $x$ and $y$. For the value of $z$, I used the formula $z=1-x-y$. This makes some values negative, which does not satisfy the condition given in the problem statement; however, repeating the process leads to positive $z$ values. Then I used the max() function and observed that $\max(xyz)\approx 0.03703\ldots$ I am not sure if $0.03703\ldots$ is really the maximum value of the product of $x,y,$ and $z$. How can I find the exact value (closed form) of $\max(xyz)$ without using programs? Any help will be appreciated. Thanks!
There are three ways that I can come up with.

*By using the AM-GM inequality, you have $$\sqrt [3]{xyz} \leq \frac{x+y+z}{3},$$ for non-negative $x$, $y$, and $z$.
*As you attempted, represent $z$ with $x$ and $y$: compute partial derivatives of $xyz$ with $z$ replaced by $1-x-y$, check whether the critical points are in the domain, and check the signs of the second partial derivatives.
*The method of Lagrange multipliers: https://en.m.wikipedia.org/wiki/Lagrange_multiplier
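A quick numerical check (the sampling scheme is mine) that the maximum is $1/27 \approx 0.037037$, matching the value observed in Excel:

```python
import random

random.seed(0)
best = 0.0
for _ in range(200_000):
    x, y = random.random(), random.random()
    z = 1.0 - x - y
    if z > 0:                       # keep only feasible triples
        best = max(best, x * y * z)

bound = 1.0 / 27.0                  # AM-GM: xyz <= ((x+y+z)/3)^3 = 1/27
assert best <= bound + 1e-15        # never exceeded ...
assert best > bound - 1e-3          # ... and approached near x = y = z = 1/3
```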
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find a smooth function $\eta:\mathbb{C}\to\mathbb{R}$ whose support is a disk. This comes from the proof of the following lemma in Jost's Compact Riemann Surfaces (Lemma 2.3.3). Lemma 2.3.3 Every compact Riemann surface $\Sigma$ admits a conformal Riemann metric. proof. ... For a disk $D\subset\mathbb C$ we choose a smooth function $\eta:\mathbb C\to\mathbb R$ with $$\eta>0\text{ on }D,\quad\eta=0\text{ on }\mathbb C\backslash D$$ ... My questions: (1) Does "smooth" here mean "infinitely differentiable as a $\mathbb R^2\to\mathbb R$ function (just to make sure)? (2) How to guarantee the existence of such functions? For (2) I know such functions must be smooth but non-analytic. The only example I know is $$f(x)=\left\{\begin{array}{lll}e^{-1/x}&,&x>0\\0&,&x\leq0\end{array}\right.$$ But how to generalize this to an $\mathbb R^2\to\mathbb R$ function?
$f(x)=e^{-\frac 1 {1-x}}$ for $x<1$ and $0$ for $x \geq 1$ defines a smooth function which is positive on $(-\infty,1)$ and $0$ outside it. So $f(\|x\|^{2})$ is a smooth function on $\mathbb R^{2}$ which is positive for $\|x\|<1$ and $0$ elsewhere. For any other disk in $\mathbb R^{2}$ use an appropriate affine transformation.
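A minimal numerical sketch of this construction for the unit disk (the function names are mine); for a general disk $D$, precompose with the affine map sending $D$ to the unit disk:

```python
import math

def f(t):
    # smooth on R, positive for t < 1, identically 0 for t >= 1
    return math.exp(-1.0 / (1.0 - t)) if t < 1.0 else 0.0

def eta(x, y):
    # bump function supported on the closed unit disk x^2 + y^2 <= 1
    return f(x * x + y * y)

assert eta(0.0, 0.0) > 0
assert eta(0.6, 0.6) > 0           # 0.72 < 1, still inside the disk
assert eta(1.0, 0.0) == 0.0        # on the boundary
assert eta(2.0, 3.0) == 0.0        # outside
# values flatten out toward the boundary, consistent with smoothness there
assert eta(0.0, 0.999) < 1e-200
```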
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit of derivative function at infinity $f$ is a differentiable function on the real line such that $$\lim_{x \to \infty}f(x)=1$$ and $$\lim_{x \to \infty}f'(x)=\alpha.$$ Then what can be said about $\alpha$?

*$\alpha=0$
*$\alpha$ may not be $0$ but $|\alpha| \le 1$
*$\alpha\geq1$
*$\alpha\leq-1$

I cannot think of any viable option other than 1, but I cannot prove it. Any hint or solution would be appreciated!
Because $\lim_{x \to \infty}f(x)=1$, $y=1$ is a horizontal asymptote of the graph of $f(x)$. By itself that does not force $f'$ to tend to $0$, but here we are also told that $\lim_{x \to \infty}f'(x)=\alpha$ exists. By the mean value theorem, $f(x+1)-f(x)=f'(c_x)$ for some $c_x\in(x,x+1)$; as $x\to\infty$ the left side tends to $1-1=0$ while the right side tends to $\alpha$. Therefore $\alpha=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If $a\leq b$ and $-a\leq b$, then $|a|\leq b$. I have arrived at the two separate conclusions: * *$a\leq b$ *$-a\leq b$ Can I conclude that $|a|\leq b$? I am missing something as it is not by definition of the absolute value.
You could prove it by proving the contrapositive, as follows. Assume $|a|>b$. Now $|a|=a$ or $|a|=-a$. Therefore $a>b$ or $-a>b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3296820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Linear regression without intercept: formula for slope For linear regression there is a good answer from TecHunter. Slope: $$\alpha = {n\sum(xy) - \sum x \sum y \over n\sum x^2 - (\sum x)^2}$$ Offset: $$\beta = {\sum y - \alpha \sum x \over n}$$ Trendline formula: $$y = \alpha x + \beta $$ However, how do these formulas change when I want to force the line through the origin? I want $y=0$ when $x=0$, so the model is: $$y = \alpha x $$
Setting $\beta=0$ in the formula above gives $\alpha = \frac{\sum y}{\sum x}$, which makes the residuals sum to zero. Note, however, that this is not the least-squares slope for the model $y=\alpha x$: minimizing $\sum (y-\alpha x)^2$ directly gives $$\alpha = \frac{\sum xy}{\sum x^2}.$$
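A sketch comparing the slope estimates on synthetic data (the true slope $2$ and the noise level are my choices, not from the question):

```python
import random

random.seed(0)

# synthetic data from y = 2x + noise
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]

n = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Sxx = sum(x * x for x in xs)

alpha_full = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)   # slope with intercept
alpha_origin = Sxy / Sxx           # least-squares slope through the origin
alpha_naive = Sy / Sx              # "set beta = 0" estimator

for a in (alpha_full, alpha_origin, alpha_naive):
    assert abs(a - 2.0) < 0.05     # all close to the true slope here

# but the through-origin least-squares slope has the smallest residual
# sum of squares among all lines y = a x, by construction
rss = lambda a: sum((y - a * x) ** 2 for x, y in zip(xs, ys))
assert rss(alpha_origin) <= rss(alpha_naive)
```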
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Optimizing quadratic form with respect to inner positive definite matrix with a trace constraint Let $\{z_i\}_{i=1}^n$ and $\{w_i\}_{i=1}^n$ be two collections of vectors in $\mathbb R^p$. Let $A$ be a real positive definite $p\times p$ matrix, with Cholesky factorization $LL^T$, where $L$ is also $p\times p$. I want to solve the following optimization: $$\min_L F(L) \rightarrow \min_L \sum_{i=1}^n z^T_i LL^T z_i - w^T_i LL^T w_i$$ subject to the constraint $$\text{tr}(LL^T) = 1.$$ My approach: Lagrange multipliers. I thought that $\frac{d}{dL} \left(z^TLL^Tz - w^TLL^Tw\right) = 2(zz^T-ww^T)L$, and $\frac{d}{dL}\text{Tr}(LL^T) = 2L$, but this doesn't seem to lead to a solution. Edited: Rewrote the problem to include a positive semi-definite constraint, via the Cholesky factorization.
Define the $p\times n$ matrices $Z=[z_1,\dots,z_n]$ and $W=[w_1,\dots,w_n]$ (such that the given vectors are respectively their columns). Convince yourself that you can rewrite your optimization problem as \begin{align} \min_{A} &<A,ZZ^T-WW^T> \\ &A\geq 0 ~,~<A,I> = 1 \end{align} where $A\geq 0$ means $A$ should be positive semi-definite. Also, for any two symmetric matrices $A,B$, we define $<A,B> = \mathrm{trace}(AB)$. We can define the eigen-decomposition \begin{align} A = \sum_{i=1}^{p}\lambda_iu_iu_i^T \end{align} where the $u_i$ are the unit-norm eigenvectors and the $\lambda_i$ are the eigenvalues. Let $B=ZZ^T-WW^T$. Convince yourself that your optimization problem of finding $A$ is the same as finding pairs $(\lambda_i,u_i)$ in the optimization problem \begin{align} \min_{\lambda_i,u_i}&\sum_{i=1}^{p}\lambda_iu_i^TBu_i \\& \lambda_i\geq 0~,~\forall i \\& \sum_{i=1}^{p}\lambda_i = 1 \end{align} From the Rayleigh-Ritz ratio, it follows that for any unit-norm $u_i$ we have \begin{align} u_i^TBu_i\geq \lambda_{min}(B) \end{align} and equality is achieved when $u_i$ is the eigenvector corresponding to $\lambda_{min}(B)$. Thus, it follows that \begin{align}A = uu^T \end{align} where $u$ is the eigenvector corresponding to the smallest eigenvalue of $B$.
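A pure-Python sketch of this solution on random data (in practice one would use `numpy.linalg.eigh`; here the smallest eigenvector of $B$ is found by power iteration on $sI-B$, and all helper names are mine):

```python
import random

random.seed(0)
p, n = 3, 5

# hypothetical data matrices Z, W (columns are the given vectors)
Z = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]
W = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]

def gram(M):
    # M M^T for a p x n nested list
    return [[sum(M[i][k] * M[j][k] for k in range(n)) for j in range(p)]
            for i in range(p)]

GZ, GW = gram(Z), gram(W)
B = [[GZ[i][j] - GW[i][j] for j in range(p)] for i in range(p)]

def quad(v, M):
    # v^T M v
    return sum(v[i] * M[i][j] * v[j] for i in range(p) for j in range(p))

# eigenvector of the smallest eigenvalue of B: power iteration on s*I - B
s = 1.0 + sum(abs(B[i][j]) for i in range(p) for j in range(p))  # s > lambda_max(B)
M = [[s * (i == j) - B[i][j] for j in range(p)] for i in range(p)]
u = [random.gauss(0, 1) for _ in range(p)]
for _ in range(2000):
    u = [sum(M[i][j] * u[j] for j in range(p)) for i in range(p)]
    norm = sum(x * x for x in u) ** 0.5
    u = [x / norm for x in u]

# A = u u^T is feasible (PSD, trace 1), and no unit direction does better
for _ in range(500):
    v = [random.gauss(0, 1) for _ in range(p)]
    norm = sum(x * x for x in v) ** 0.5
    v = [x / norm for x in v]
    assert quad(u, B) <= quad(v, B) + 1e-9
```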
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Sum of a Sum of a Squared Difference How did the author jump from the second equation to the third equation? I suspect there’s a rule I’m forgetting that allows for this, any help is appreciated.
Note that \begin{align} -\left(\sum_{i=1}^{160} (x_i - 8)^2 - \sum_{i=1}^{160} (x_i - 7)^2\right) & = -\left(\sum_{i=1}^{160} (x_i^2 - 16x_i + 64) - (x_i^2 - 14x_i + 49)\right) \\ & = -\left(\sum_{i=1}^{160} (-2x_i + 15)\right) \\ & = 2\sum_{i=1}^{160} x_i - 2400 \tag{1}\label{eq1} \end{align} As you can see, the $15$ is a constant repeating $160$ times for a total of $15 \times 160 = 2400$. Also, the author used the minus sign in front to remove the first minus sign for the $2x_i$, moved the $2$ outside the summation and changed the plus to a minus for the sum of $2400$.
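A quick numerical check of the identity with random data (the $x_i$ values are arbitrary):

```python
import random

random.seed(0)
xs = [random.uniform(0, 20) for _ in range(160)]

lhs = -(sum((x - 8) ** 2 for x in xs) - sum((x - 7) ** 2 for x in xs))
rhs = 2 * sum(xs) - 2400          # the constant 15 repeats 160 times: 15 * 160 = 2400

assert abs(lhs - rhs) < 1e-8
```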
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A Question on Cardinality $\aleph_{0}$ I'm trying to understand the concept of cardinality. My question is: suppose the interval $[1, 2n]$ is given. In this interval we have $2n$ natural numbers; as $n\to\infty$, we have countably infinitely many natural numbers, and the cardinality equals $\aleph_0$. Likewise, in this interval we have $n$ even natural numbers; as $n\to\infty$, we have countably infinitely many even natural numbers, and the cardinality again equals $\aleph_0$. Then, as $n\to\infty$, in this interval $[1,2n]$ we have $$\lim_{n\to\infty} \frac {\text{number of even natural numbers}}{\text{number of all natural numbers}}=\frac 12.$$ In other words, there are twice as many natural numbers as even natural numbers. But then why are the cardinalities equal, or where am I going wrong?
First of all, you try to work with limits and want to use that an expression at the limit point equals the limit of said expression as we approach the limit point. But for that you first of all need to know that the function you consider is defined at the limit point. So, how do you define division at infinity? And even if defined, you'd need continuity for your suggested conclusion. E.g., exponentiation $(x,y)\mapsto x^y$ is defined at $(0,0)$, namely $0^0=1$. However, exponentiation is not continuous there, and therefore we cannot infer $\lim x_n^{y_n}=1$ from $\lim x_n=\lim y_n=0$. That being said, you should not blindly assume that well-known properties of arithmetic for finite numbers transfer readily to the arithmetic of infinite cardinalities or ordinals. Final remark: Did you notice that you want to make a claim about $\color{red}{\aleph_0}$ but that limits use the notation $\lim_{x\to{\color{red}\infty}}$ instead?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How to derive the derivative of the logarithm of a summation? I'm currently reading the book Deep Learning (Goodfellow et al., 2015) and had a question regarding the calculation of a gradient when explaining backpropagation for a certain example. For anyone who's curious, this is from section 6.5.9: Differentiation outside the Deep Learning Community. Suppose we have variables $p_1, p_2, \ldots , p_n$ representing probabilities and variables $z_1, z_2, \ldots , z_n$ representing unnormalized log probabilities. Suppose we define $$q_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$ where we build the softmax function out of exponentiation, summation and division operations, and construct a cross-entropy loss $J = -\sum_i p_i \log{q_i}$. A human mathematician can observe that the derivative of $J$ with respect to $z_i$ takes a very simple form: $q_i - p_i$. I don't know how this result was derived, and was hoping that someone could give me some tips or advice. What I have so far is $$\log{q_i} = \log{e^{z_i}} - \log\left({\sum_j e^{z_j}}\right)$$ $$ \begin{align} p_i\log{q_i} & = p_i \log{e^{z_i}} - p_i \log\left({\sum_j e^{z_j}}\right) \\ & = p_iz_i - p_i\log\left(\sum_j e^{z_j}\right) \end{align}$$ If we take the derivative of the term $p_i\log{q_i}$, then I can understand that $d/dz_i (p_i z_i) = p_i$, but how do we differentiate the second term, which contains the logarithm of the summation? Thank you.
Your derivation of $p_i\log q_i$ is fine. Based upon it we obtain for $J$: \begin{align*} J&=-\sum_{j=1}^np_jz_j+\sum_{j=1}^np_j\log\left(\sum_{k=1}^ne^{z_k}\right)\\ &=-\sum_{j=1}^np_jz_j+\log\left(\sum_{k=1}^ne^{z_k}\right)\tag{1} \end{align*} In the last line we use the sum of the probabilities $p_j,1\leq j\leq n$ is equal to $1$. From (1) we obtain the derivation of $J$ with respect to $z_i$ as: \begin{align*} \color{blue}{\frac{d}{dz_i}J} &=\frac{d}{dz_i}\left(-\sum_{j=1}^np_jz_j\right)+\frac{d}{dz_i}\left(\log\left(\sum_{k=1}^ne^{z_k}\right)\right)\\ &=-p_i+\frac{e^{z_i}}{\sum_{k=1}^ne^{z_k}}\\ &\,\,\color{blue}{=-p_i+q_i} \end{align*} in accordance with the claim.
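A numerical gradient check of the claimed formula (the dimension and the uniform choice of $p$ are mine):

```python
import math, random

random.seed(0)
n = 5
z = [random.gauss(0, 1) for _ in range(n)]
p = [1.0 / n] * n                       # any probability vector works

def softmax(z):
    m = max(z)                          # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def J(z):
    q = softmax(z)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

q = softmax(z)
h = 1e-6
errs = []
for i in range(n):
    zp = z[:]; zp[i] += h
    zm = z[:]; zm[i] -= h
    grad_i = (J(zp) - J(zm)) / (2 * h)  # central finite difference
    errs.append(abs(grad_i - (q[i] - p[i])))

assert max(errs) < 1e-6                 # matches dJ/dz_i = q_i - p_i
```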
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Numerically stable evaluation of $x^{n!}$ Given that $x$ is a real number with property $0 < x < 1$ and $n$ upto $4000$ Is there a good way to decompose the n! into steps for multiplying x?
Minimal number of multiplications in worst case $n=4000$: $N_{\min}=\lceil\log_2 4000!\rceil=42\,100$. "Brute-force" approach, using fast exponentiation to raise to the power 2, then 3, then 4, etc., gives $N_{bf}=\sum_{k=2}^{4000} \lceil\log_2k\rceil=43\,905$. So it's less than 5% suboptimal. I wouldn't bother writing a sophisticated algorithm.
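The two counts quoted above can be reproduced in a few lines (a sketch; `ceil_log2` uses the exact identity $\lceil\log_2 k\rceil = \operatorname{bitlength}(k-1)$ for $k \ge 2$, and `lgamma` evaluates $\log n!$ without overflow):

```python
import math

def ceil_log2(k):
    # exact ceil(log2 k) for k >= 2, in pure integer arithmetic
    return (k - 1).bit_length()

n = 4000

# "brute force": raise the running power to the 2nd, 3rd, ..., nth power in turn
N_bf = sum(ceil_log2(k) for k in range(2, n + 1))

# information-theoretic lower bound: ceil(log2 n!) multiplications
log2_fact = math.lgamma(n + 1) / math.log(2)
N_min = math.ceil(log2_fact)

print(N_bf, N_min)   # 43905 42100
```

This matches the figures in the answer; note that for $0 < x < 1$ the value $x^{n!}$ itself underflows catastrophically, so in practice one would track $\log$ of the running value rather than the value itself.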
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question on counting I'm having difficulty answering (Qb iii). Question: (a) Write an expression for the number of sets S which contain 10 elements, each of which is an integer between 1 and 20. (iii) For a non-empty subset X ⊆ S, let t(X) denote the sum of the members of X. Prove that there must be distinct subsets A, B ⊆ S, each of size two, such that t(A) = t(B). (Hint: what are the possible values of t(A) and t(B)?) Where I am at so far: consider 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20. I realised that if you pick any two numbers, say 5 and 13, then 5 + 13 = 18, and also 6 + 12 = 18, 7 + 11 = 18, etc. But that's really all I've realised.
There are ${10 \choose 2} = 45$ subsets of size 2. On the other hand, for a two-element subset $A$ the sum $t(A)$ lies between $1+2=3$ and $19+20=39$, so it can take at most $37$ distinct values. Since $45 > 37$, by the pigeonhole principle there must be two distinct two-element subsets $A, B \subseteq S$ with $t(A) = t(B)$.
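The pigeonhole count suggested by the hint can be verified by brute force for any concrete $S$ (a sketch, not part of the original post):

```python
from itertools import combinations
import random

random.seed(1)
S = random.sample(range(1, 21), 10)   # 10 distinct integers from {1, ..., 20}

# C(10, 2) = 45 pairs, but every pair sum lies in [1 + 2, 19 + 20] = [3, 39]:
# at most 37 possible values, so two distinct pairs must collide (pigeonhole)
seen = {}
collision = None
for pair in combinations(sorted(S), 2):
    t = sum(pair)
    if t in seen:
        collision = (seen[t], pair)
        break
    seen[t] = pair

A, B = collision
assert sum(A) == sum(B) and A != B
```

The loop is guaranteed to find a collision for every choice of $S$, which is exactly the statement to be proved.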
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Find the constant $k$ from the determinant Given: $$\begin{vmatrix}(b+c)^2 &a^2&a^2\\b^2 &(c+a)^2&b^2 \\c^2&c^2& (a+b)^2\end{vmatrix}=k(abc)(a+b+c)^3$$ Find $k$. If I expand the determinant directly it gets too long, and I can't apply most row or column operations, as they keep making it more complex.
Let $$a=b=c=1$$ and you get the matrix $$\begin{vmatrix}4&1&1\\1 &4&1 \\1&1& 4\end{vmatrix}=27k$$ The determinant is easily evaluated to be $54$ so $$27k=54$$. Thus $$k=2$$
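Since both sides are polynomials, spot-checking the identity with $k = 2$ at a few integer points in exact arithmetic is a cheap sanity check (a sketch, not part of the original answer):

```python
def det3(M):
    # cofactor expansion of a 3x3 determinant
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def lhs(a, b, c):
    return det3([
        [(b + c) ** 2, a ** 2, a ** 2],
        [b ** 2, (c + a) ** 2, b ** 2],
        [c ** 2, c ** 2, (a + b) ** 2],
    ])

# check k = 2 at several integer points, all exactly
for (a, b, c) in [(1, 1, 1), (2, 3, 5), (1, 4, 9), (7, 2, 3)]:
    assert lhs(a, b, c) == 2 * (a * b * c) * (a + b + c) ** 3
```

The single substitution $a=b=c=1$ already pins down $k$ because the problem guarantees the identity holds for some constant $k$; the extra points simply confirm it.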
{ "language": "en", "url": "https://math.stackexchange.com/questions/3297816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Where does $\pi^2$ appear spontaneously within physical phenomena and mathematical equations? The term $\pi$ is found to appear in many equations and natural phenomena; however, my question is related to $\pi^2$. While trying to figure out the reason for some $\pi^2$ terms appearing in certain equalities that I came across, I have this question: in which mathematics/physics equations or contexts does $\pi^2$ appear inherently? And (this second part is merely a follow-up question that did not form part of the original query but was added later) where can that $\pi^2$ term lend some interpretation of the underlying phenomenon, just as $\pi$ does, whereby we can interpret (in most cases, i.e.) that some type of circular ambulation in one dimension is involved? As you can understand, the $\pi^2$ term is more complex and does not directly lend itself to an interpretation, as opposed to $\pi$, which is very intuitive. Thanks
List of places where $\pi^2$ can be seen- * *$\pi^2$ appears in some structural engineering formulae, such as the buckling formula derived by Euler, $F = \pi^2 EI/L^2$, which gives the maximum axial load $F$ that a long, slender column of length $L$, modulus of elasticity $E$, and area moment of inertia $I$ can carry without buckling *The fact that $\pi$ is approximately equal to $3$ plays a role in the relatively long lifetime of orthopositronium: the inverse lifetime, to lowest order in the fine-structure constant $\alpha$, contains a factor of $\pi^2 - 9$ *Kepler's Third Law of Planetary Motion, $T^2 = \frac{4\pi^2}{GM}a^3$ *The volume and bounding area of the 4-D and 5-D balls (e.g. the volume of the 4-dimensional ball of radius $r$ is $\frac{\pi^2}{2}r^4$) *The Basel Problem, $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$ (as mentioned in another answer) And many more as well Sources- https://en.m.wikipedia.org/wiki/Pi https://en.m.wikipedia.org/wiki/Basel_problem https://en.m.wikipedia.org/wiki/Buckling https://en.m.wikipedia.org/wiki/Fine-structure_constant https://en.m.wikipedia.org/wiki/Kepler's_laws_of_planetary_motion https://en.m.wikipedia.org/wiki/N-sphere
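As a small check on the Basel-problem entry, the partial sums of $\sum 1/k^2$ do approach $\pi^2/6$; the tail after $N$ terms is about $1/N$ (a sketch, not part of the original answer):

```python
import math

N = 1_000_000
partial = sum(1.0 / k ** 2 for k in range(1, N + 1))
target = math.pi ** 2 / 6

# the truncated tail is roughly 1/N = 1e-6, so the partial sum sits just below pi^2/6
assert abs(partial - target) < 2e-6
assert partial < target
```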
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 14, "answer_id": 0 }
The integral $\int\limits_0^\infty\frac{x^4e^x}{(e^x-1)^2} \mathrm{d}x$ How to calculate the following integral $$\int_0^\infty\frac{x^4e^x}{(e^x-1)^2}\mathrm{d}x$$ I would like to solve this integral by means of two different ways: for example, integration by parts and using Residue Theorem.
Using $$ \frac1{(1-x)^2}=\sum_{n=0}^\infty(n+1)x^n $$ \begin{eqnarray} \int_0^\infty\frac{x^4e^x}{(e^x-1)^2}\mathrm{d}x&=&\int_0^\infty\frac{x^4e^{-x}}{(1-e^{-x})^2}\mathrm{d}x\\ &=&\int_0^\infty x^4e^{-x}\sum_{n=0}^\infty(n+1)e^{-nx}\mathrm{d}x\\ &=&\sum_{n=0}^\infty\int_0^\infty(n+1)x^4e^{-(n+1)x}\mathrm{d}x\\ &=&\sum_{n=0}^\infty \frac{24}{(n+1)^4}\\ &=&24\zeta(4)\\ &=&\frac{4\pi^4}{15}. \end{eqnarray}
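The closed form $4\pi^4/15 \approx 25.976$ can also be confirmed by direct numerical quadrature (a sketch using composite Simpson's rule; the cutoffs are ad hoc, but the integrand behaves like $x^2$ near $0$ and like $x^4 e^{-x}$ at infinity, so truncation error is negligible):

```python
import math

def f(x):
    # x^4 e^x / (e^x - 1)^2, with expm1 for accuracy at small x
    d = math.expm1(x)
    return x ** 4 * math.exp(x) / (d * d)

eps, L, steps = 1e-6, 50.0, 10_000     # steps must be even for Simpson's rule
h = (L - eps) / steps
total = f(eps) + f(L)
for i in range(1, steps):
    total += (4 if i % 2 else 2) * f(eps + i * h)
integral = total * h / 3

assert abs(integral - 4 * math.pi ** 4 / 15) < 1e-4
```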
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
What $x$ makes $\frac{x}{(a^2 + x^2)}$ maximum? The problem (Calculus Made Easy, Exercises IX, problem 2 (page 130)) is: What value of $x$ will make $y$ a maximum in the equation $$y = \frac{x}{(a^2 + x^2)}$$ I successfully differentiate, equate to zero, and wind up with $$x^2 = a^2$$ Which gives me the answer of $$x = a$$ This is correct. But why isn't $x = -a$ also correct?
We are given $$y = \frac{x}{a^2 + x^2}$$ where $a$ is a constant. Differentiating with respect to $x$ using the Quotient Rule yields \begin{align*} y' & = \frac{1(a^2 + x^2) - x(2x)}{(a^2 + x^2)^2}\\ & = \frac{a^2 + x^2 - 2x^2}{(a^2 + x^2)^2}\\ & = \frac{a^2 - x^2}{(a^2 + x^2)^2} \end{align*} Setting the derivative equal to zero yields the critical points $x = \pm a$. We can apply the First Derivative Test. First Derivative Test. Assume $f$ is continuous on a closed interval $[u, v]$ and $f$ is differentiable everywhere in the open interval $(u, v)$ except possibly at $c$. (a) If $f'(x) > 0$ for all $x < c$ and $f'(x) < 0$ for all $x > c$, then $f$ has a relative maximum at $x = c$. (b) If $f'(x) < 0$ for all $x < c$ and $f'(x) > 0$ for all $x > c$, then $f$ has a relative minimum at $x = c$. If $a = 0$, then $y = \dfrac{1}{x} \implies y' = -\dfrac{1}{x^2}$, so the function has no critical points and no relative extrema. Assume $a \neq 0$. Since it has not been specified whether $a > 0$ or $a < 0$, the critical points occur at $x = -|a|$ and $x = |a|$. If we perform a line analysis on the derivative, we see that $y'$ changes from negative to positive at the critical point $x = -|a|$ and from positive to negative at the critical point $x = |a|$. Thus, by the First Derivative Test, the function has a relative maximum at $x = |a|$ and a relative minimum at $x = -|a|$. If it is specified that $a > 0$, you can replace $|a|$ by $a$.
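A grid search illustrates the conclusion of the First Derivative Test (a sketch with the arbitrary choice $a = 3$; not part of the original answer):

```python
def y(x, a):
    return x / (a * a + x * x)

a = 3.0
xs = [i / 1000.0 for i in range(-10_000, 10_001)]   # grid on [-10, 10]
vals = [y(x, a) for x in xs]

x_max = xs[vals.index(max(vals))]
x_min = xs[vals.index(min(vals))]

# maximum at x = |a|, minimum at x = -|a|, with value +-1/(2a)
assert abs(x_max - a) < 1e-2
assert abs(x_min + a) < 1e-2
assert abs(max(vals) - 1 / (2 * a)) < 1e-6
```

This makes the asker's puzzle concrete: $x=-a$ is also a critical point, but it is the relative minimum, not the maximum.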
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Finding the inflexion points of a cubic in $ \mathbb{P}^{2}_{\mathbb{C}}.$ For what values of $ m $ is the cubic $$ F = x_{0}^{3} + x_{1}^{3} + x_{2}^{3} + mx_{0}x_{1}x_{2} = 0 $$ in $ \mathbb{P}^{2}_{\mathbb{C}} $ nonsingular? Find its inflexion points. I know that $$ \text{Sing}(F) = \Big\lbrace F = \frac{\partial F}{\partial x_{0}} = \frac{\partial F}{\partial x_{1}} = \frac{\partial F}{\partial x_{2}} = 0 \Big\rbrace $$ Using this information, I have that $$ \text{Sing}(F) = \lbrace (x_{0}:x_{1}:x_{2}) \in \mathbb{P}^{2}_{\mathbb{C}} \;|\; x_{0} = x_{1} = x_{2} \rbrace. $$ That is, $ \text{Sing}(F) = \lbrace (1:1:1) \rbrace. $ Furthermore, $F$ has this singular point when $ m = -3. $ I am unsure about how to get the inflexion points. Do I need to compute the determinant of the Hessian matrix where $ m = -3$? EDIT: Unless I've made an error, the determinant of the Hessian matrix yields $$ 216x_{0}x_{1}x_{2} - 6m^{2}x_{0}^{3} - 6m^{2}x_{2}^{3} + 2m^{3}x_{0}x_{1}x_{2} - 6m^{2}x_{1}^{3} = 0 $$ I'm not sure how to proceed from here.
You need to solve simultaneously the original equation together with the vanishing of the Hessian determinant. Adding $6m^2$ times the original equation to the Hessian determinant gives $$(216+8m^3)x_0x_1x_2=0.$$ Unless $m^3=-27$, this forces $x_0x_1x_2=0$, so one of the variables vanishes. If $x_0=0$ then $x_1^3+x_2^3=0$, so you get three inflection points $(0:1:-\zeta)$ where $\zeta^3=1$. Overall, then, you do get nine inflection points.
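The identity $\det H + 6m^2 F = (216 + 8m^3)\,x_0x_1x_2$ used above can be spot-checked in exact integer arithmetic (a sketch; the Hessian entries follow from $F = x_0^3+x_1^3+x_2^3+mx_0x_1x_2$):

```python
def F(x0, x1, x2, m):
    return x0 ** 3 + x1 ** 3 + x2 ** 3 + m * x0 * x1 * x2

def hessian_det(x0, x1, x2, m):
    # Hessian of F: [[6x0, m x2, m x1], [m x2, 6x1, m x0], [m x1, m x0, 6x2]]
    (a, b, c) = (6 * x0, m * x2, m * x1)
    (d, e, f) = (m * x2, 6 * x1, m * x0)
    (g, h, i) = (m * x1, m * x0, 6 * x2)
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# identity used in the answer, checked at several integer points
for (x0, x1, x2, m) in [(1, 2, 3, 5), (2, -1, 4, -3), (0, 1, -2, 7)]:
    lhs = hessian_det(x0, x1, x2, m) + 6 * m ** 2 * F(x0, x1, x2, m)
    assert lhs == (216 + 8 * m ** 3) * x0 * x1 * x2
```

The same code also confirms that for $m = -3$ the point $(1:1:1)$ lies on both $F = 0$ and $\det H = 0$.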
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sliding Motion and Filippov systems I have trouble understanding how a Filippov system works (see pages 2 and 3): so to make things easier, let us consider the example $$\dot x=-\text{sgn}(x),\tag{E}$$ where $\text{sgn}(x)$ is the sign function (i.e. it is $1$ if $x>0$ and $-1$ if $x<0$). So indeed the vector field $f(x)=-\text{sgn}(x)$ is not continuous. What is commonly done is to consider $$F(x)=\begin{cases}-1&x>0\\ 1&x<0\\ co\{-1,1\}=[-1,1]&x=0,\end{cases}$$ where $co\{f_1,f_2\}=\{\alpha f_1+(1-\alpha )f_2\mid \alpha \in [0,1]\}$ is the convex hull of $\{f_1,f_2\}$. So instead of considering $(E)$ one considers rather $$\dot x(t)\in F(x(t))\tag{E'}.$$ If someone knows a bit about this theory, could you explain the motivation behind it? I'm not sure I really understand.
In the example equation, considered as a conventional ODE, the domain is the largest open set on which the right side is continuous, which is the real line without zero. A conventional solution is $x(t)=x_0-t$ if $x_0>0$. But it only exists for $t<x_0$ before leaving the domain of the ODE. There also does not exist a suitable solution on the other side $\{x<0\}$ that one could glue to this solution to get at least a continuous function. To provide a way out, Filippov's approach essentially is that you consider all continuous approximations of the right side in the distributional sense (or here in some stronger functional norm like the $L^1$ norms on bounded intervals). If the solutions of the approximating equations converge to the same function, you can say that this is a generalized solution. Then any infinitesimally close approximation fills the gap of the discontinuity with the convex hull of the limit values, without large variations outside this convex set. Any generalized solution will take one of these values as derivative at the jump. One could in the example approximate the sign function by $$ {\rm sign}(x)\approx h_{a,b}(x)=\begin{cases} \dfrac{2x-a-b}{b-a}&a\le x\le b\\ +1&x>b\\-1&x<a \end{cases} $$ with some very small $a<0<b$. The corresponding equation $\dot x=-h_{a,b}(x)$ has solutions $x(t)=x_0-t$ for $t<t_b=x_0-b$, and after that $x(t)=\frac{a+b}2+\frac{b-a}2\exp(-2\frac{t-t_b}{b-a})$, which converges to the small number $\frac{a+b}2$. In the limit $a,b\to 0$ one obtains $x(t)=\max(0,x_0-t)$. Now this approach is quite impractical due to the sheer number of approximating functions. Your cited generalized equation is a result of investigating the "nice" cases where this convergence is automatic, the solution exists as a piecewise smooth function that is overall Lipschitz continuous.
Then one can demand that the generalized derivative of a solution $x(t)$ falls within the convex hull of the possible values of the right side at $x(t)$, that is the limits of the function values in a small neighborhood. Note that the graphs of the continuous approximations converge towards the curve where the jump is filled with the vertical segment, that is, the convex hull of the limit values.
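The convergence of the regularized equation $\dot x = -h_{a,b}(x)$ to the Filippov solution $\max(0, x_0 - t)$ can be seen in a tiny simulation (a sketch with explicit Euler and ad hoc parameters, not part of the original answer):

```python
def h(x, a, b):
    # continuous approximation of sign(x): linear on [a, b], +-1 outside
    if x > b:
        return 1.0
    if x < a:
        return -1.0
    return (2 * x - a - b) / (b - a)

x0, a, b, dt = 1.0, -1e-3, 1e-3, 1e-4
x = x0
traj = [x]
for _ in range(20_000):              # explicit Euler for x' = -h(x), up to t = 2
    x += dt * (-h(x, a, b))
    traj.append(x)

# the Filippov solution of x' = -sign(x), x(0) = 1 is max(0, 1 - t):
assert abs(traj[5_000] - 0.5) < 5e-3   # t = 0.5
assert abs(traj[20_000]) < 5e-3        # t = 2.0: stuck near (a + b)/2 = 0
```

The trajectory slides down with slope $-1$ and then sticks at the discontinuity, exactly the sliding behavior the theory formalizes.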
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it legal to say $f(E)=\emptyset$ if the set $E$ is not in the domain of $f$? For many functions, I have seen $f^{-1}(E)=\emptyset$ written when the set $E$ does not intersect the range of $f$. So, is it also right to say $f(E)=\emptyset$ if the set $E$ does not intersect the domain of $f$?
The inverse image notation is very much standard; in most contexts, it may be used without clarification. The image of a set under a function is almost as standard. Given a function $f : X \to Y$, when seeing $f(E)$, it is typically understood that $E$ is a subset of $X$. That said, the convention of saying $f(E) = \emptyset$ makes some sense when $E \cap X = \emptyset$. I would just put a note before using it that you're adopting this convention, as it is not widely used.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Does an integral of the form $\int f(x) \, \sqrt{dx}$ have any meaning? If $f(x)$ is a Riemann-integrable function, what meaning is there to an integral of the form $$\displaystyle\int f(x) \, \sqrt{dx}~?$$ I have read that stochastic processes like Brownian motion may be described by integrals of somewhat unusual forms such as $\displaystyle\int dW^2$. I suppose that $\displaystyle\int f(x) \, \sqrt{dx}$ doesn't represent a random process, since $f(x)$ here is defined and deterministic. Any insight would be helpful.
Good question! Think of differentials like $dx^2$ existing because integrals undo derivatives: $$\begin{align} \frac{d}{dx}\frac{d}{dx}\,f(x) &= f’’(x) \\[2ex] \frac{d^2f}{dx^2} &= f’’(x) \\[2ex] d^2f &= f’’(x)\, dx^2 \\[2ex] \end{align}$$ Let’s integrate once (ignoring the $+c$). Remember how integrating always eliminates the differential $d$ and undoes the derivative, like how $\int2x\,dx=x^2$, except we’re using a two-letter symbol that isn’t $x$: $$\begin{align} \int d(df) &= \int f’’(x)\,dx\,dx \\ \int 1\, d(df) &= \left(\int f’’(x)\,dx\right)\,dx \\ df &= f’(x) \, dx \end{align}$$ Now we can do a normal integration (once again ignoring $+c$), but this time on the left we’ll integrate with respect to $f$—but don’t worry; the same rules apply, it’s just a different letter: $$\begin{align} \int 1\,df &= \int f’(x) \, dx \\ f &= f(x) \end{align}$$ So, what does this mean? It means that the power tells us how many times to integrate, just like the power in the derivative tells us how many times to take the derivative. We can’t do either of these $1/2$ times or $\sqrt{\text{once}}$ (unless you make up some crazy version of calculus).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3298827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If $x,y \in V$ are linearly independent, then there exists a transvection $\tau$ with $\tau(x)=y$ Let $V$ be an $n$-dimensional $K$-vector space. A $\tau \in \operatorname{GL}(V)$ is called a transvection if there exists an $(n-1)$-dimensional $\tau$-invariant subspace $W$ of $V$ with $$\tau|_{W} = \operatorname{id}_W \text{ and } \tau(v)-v \in W \quad \forall v \in V$$ Task: Prove: If $x,y \in V$ are linearly independent, then there exists a transvection $\tau$ with $\tau(x)=y$. All the mappings $\tau$ that I can find either do not satisfy $\tau(x)=y$ or do not satisfy $\tau(v)-v \in W$. What I have so far: $x,y$ are linearly independent $\Rightarrow \dim V \geq 2$; taking e.g. $W \supseteq \langle y \rangle$ gives $\dim(V) > \dim(W) > 0$, so at least such a $W$ exists, and hopefully also $\tau$. I have found a similar post on this task, but it does not seem to be the answer to my task. If it is, please help me clarify that answer.
Since $\{ x, y \}$ are linearly independent, we also have that $\{ x - y, x \}$ are linearly independent. Complete this set to a basis $$ w_1 = x - y, w_2, \dots, w_{n-1}, w_n = x $$ for $V$ and set $W = \operatorname{span} \{ w_1, \dots, w_{n-1} \}$. Define a linear map $\tau \colon V \rightarrow V$ by requiring that $$ \tau(w_i) = w_i, \,\,\, 1 \leq i \leq n - 1, \\ \tau(w_n) = y. $$ Then clearly $\tau|_{W} = \operatorname{id}|_{W}$ and $\tau(w_n) = \tau(x) = y$. Finally, any $v \in V$ can be written as $v = w + ax$ for some $w \in W$ and $a \in K$ and then $$ \tau(v) - v = \tau(w + ax) - (w + ax) = \tau(w) + a\tau(x) - w - ax = w + ay - w - ax = (-a)(x - y) \in W. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate $\int_{0}^{\infty} \frac{\sin x-x\cos x}{x^2+\sin^2x } dx$ The integral $$\int_{0}^{\infty} \frac{\sin x-x\cos x}{x^2+\sin^2x } dx$$ admits a nice closed form. The question is: How to evaluate it by hand.
$$ I=\int_{0}^{\infty} \frac{\sin x-x \cos x}{x^2+\sin^2 x} dx= - \int_{0}^{\infty}\frac{\frac {x\cos x -\sin x}{x^2}}{1+(\frac{\sin x}{x})^2}dx= -\int_{1}^{0} \frac{dt}{1+t^2}=\frac{\pi}{4},$$ where we substituted $t=\frac{\sin x}{x}$, so that $dt=\frac{x\cos x-\sin x}{x^{2}}\,dx$, with $t\to 1$ as $x\to 0^+$ and $t\to 0$ as $x\to\infty$.
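One way to check this by machine is to note that the computation exhibits $-\arctan(\sin x/x)$ as an antiderivative of the integrand; its finite-difference derivative can be compared with the integrand at a few points (a sketch, not part of the original answer):

```python
import math

def integrand(x):
    return (math.sin(x) - x * math.cos(x)) / (x * x + math.sin(x) ** 2)

def G(x):
    # candidate antiderivative suggested by the substitution t = sin(x)/x
    return -math.atan(math.sin(x) / x)

h = 1e-6
for x in [0.5, 1.0, 2.0, 3.5, 7.0, 12.3]:
    numeric = (G(x + h) - G(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-6

# hence I = G(inf) - G(0+) = 0 - (-atan(1)) = pi/4
assert abs(-G(1e-9) - math.pi / 4) < 1e-6
```

Because $G$ is a genuine pointwise antiderivative, the evaluation at the endpoints is valid even though $t = \sin x/x$ is not monotone.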
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can we deduce two contradictory statements from a wrong statement? In the proof that $\sqrt{p}$ is irrational, where $p$ is a prime number: we first assume $\sqrt{p}$ is rational. From this we deduce $\sqrt{p}=\dfrac{a}{b}$, where $a$ and $b$ are co-prime. Then, using other reasoning, we deduce that $a$ and $b$ are not co-prime. That is, from the false statement (that $\sqrt{p}$ is rational) we deduced two contradictory statements. So from a statement, if we apply correct reasoning, we may deduce two results which contradict each other. How can this be reasonable? Can anyone explain with simple examples? Anyway, from a statement, if we apply correct reasoning, we may deduce a result which contradicts another established result. This seems reasonable to me, and this shows that the statement is wrong.
The statement that $\sqrt{p}$ (with $p$ a prime number) is rational is not just a false statement, but a statement that contradicts the basic axioms for the real numbers. Indeed, it is from the statement that $\sqrt{p}$ is rational together with those axioms that you derive an explicit contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Proving $\frac{\sqrt{k(k+1)} }{k-1} \leq1 + \frac{2}{k} + \frac{3}{4k^2}$ for $k \geq 3$. Could you please give me a hint on how to prove the inequality below for $k \geq 3$? $$\frac{\sqrt{k(k+1)} }{k-1} \leq1 + \frac{2}{k} + \frac{3}{4k^2} $$ Thank you in advance.
You have to solve this system of conditions: $$\begin{cases} k \neq 1 \ \text{and} \ k\neq 0 \\ k(k+1)\geqslant 0 \\ 1+\frac{2}{k}+\frac{3}{4k^2}\geq 0 \\ \frac{\sqrt{k(k+1)}}{k-1} \geq 0 \\ \left(\frac{\sqrt{k(k+1)}}{k-1}\right)^2 \leq \left(1+\frac{2}{k}+\frac{3}{4k^2}\right)^2 \end{cases}$$ (squaring in the last line is legitimate precisely because both sides have been required to be nonnegative). In this way you will get the solutions of the inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Conjugate transpose of matrix is the adjoint intuition I'm having a bit of trouble understanding this fact from Linear Algebra Done Right. Let $T \in \mathcal{L}(V, W)$. Suppose $e_{1}, \dots, e_{n}$ is an orthonormal basis of $V$ and $f_{1}, \ldots, f_{m}$ is an orthonormal basis of $W$. Then the adjoint $T^{*}$, defined by $\langle T v, w\rangle=\left\langle v, T^{*} w\right\rangle$, has as its matrix the conjugate transpose of the matrix of $T$. I went through the proof and did an example but I can't get a clearer picture as to why this must be true intuitively or geometrically. What is so special about a transpose that makes the transformation act in this bridge-like fashion? Also, I am not able to appreciate the fact that we have an orthonormal basis; while it is important in the proof, what would go wrong if we didn't?
Any inner product $\langle v, w\rangle$ on a real vector space has a representation as a symmetric positive-definite matrix $M$, with $$\langle v, w\rangle = v^T M w.$$ Now if $T$ and $T^*$ are adjoint you must have $$\langle Tv, w\rangle = v^T T^TMw = v^T MT^* w = \langle v, T^*w\rangle.$$ Since $v$ and $w$ are arbitrary, it follows that $$T^TM = MT^*.$$ In the special case of the Euclidean inner product, $M=I$ and the above reduces to $T^T=T^*$. Notice that this relationship is not true for general inner products $M$. Now if you have two different inner product spaces $V$, $W$ with inner products $$\langle v_1, v_2\rangle_{M_v}, \langle w_1, w_2\rangle_{M_w}$$ and $T: V\to W$, an identical calculation shows that $T$ and $T^*$ satisfy $$T^TM_{w} = M_v T^*$$ with $T^T = T^*$ in the special case that both inner products are Euclidean.
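The relation $T^* = M^{-1}T^TM$ can be verified exactly with rational arithmetic on a small example (a sketch; the particular $M$ and $T$ below are arbitrary, not from the original post):

```python
from fractions import Fraction as Fr

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def inner(v, w, M):
    # <v, w>_M = v^T M w
    return sum(v[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

M = [[Fr(2), Fr(1)], [Fr(1), Fr(2)]]                  # symmetric positive definite
Minv = [[Fr(2, 3), Fr(-1, 3)], [Fr(-1, 3), Fr(2, 3)]]
T = [[Fr(1), Fr(2)], [Fr(3), Fr(4)]]
Tt = [[T[j][i] for j in range(2)] for i in range(2)]  # transpose

# adjoint with respect to <.,.>_M: T* = M^{-1} T^T M (generally != T^T)
Tstar = matmul(Minv, matmul(Tt, M))

for v in ([Fr(1), Fr(0)], [Fr(2), Fr(-3)]):
    for w in ([Fr(0), Fr(1)], [Fr(5), Fr(7)]):
        assert inner(matvec(T, v), w, M) == inner(v, matvec(Tstar, w), M)
```

Here `Tstar` differs from the plain transpose, which illustrates the point of the answer: $T^* = T^T$ is special to the Euclidean inner product ($M = I$), and that is exactly what an orthonormal basis buys you.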
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The rank of a symmetric matrix equals the number of nonzero eigenvalues. I am wondering why the rank of a symmetric matrix equals its number of nonzero eigenvalues. I have tried showing it like this: a symmetric matrix A can be written $$A=PDP^T$$ where P is an orthogonal matrix. It is not difficult to see that for a vector x: $PDP^Tx=0 \leftrightarrow DP^Tx=0$, since P is invertible. So what we need to show is that the dimension of the nullspace of $DP^T$ equals the number of eigenvalues with value zero. Do you see how to do this?
More precisely, I would say that the rank of a symmetric matrix is equal to the sum of the geometric multiplicities of its nonzero eigenvalues. For example, If $A$ has two nonzero eigenvalues, say $2$ and $3$, with geometric multiplicities $1$ and $2$ respectively, then the rank of $A$ is $1+2 = 3$. In this example, $2$ and $3$ would appear once and twice in $D$ respectively. So the number of repetitions of nonzero eigenvalues in $D$ needs to be considered
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
1025th term of the sequence $ 1,2,2,4,4,4,4,8,8,8,8,8,8,8,8, ... $ Consider the following sequence - $$ 1,2,2,4,4,4,4,8,8,8,8,8,8,8,8, ... $$ In this sequence, what will be the $ 1025^{th}$ term? When we write down the sequence and write the index $n$ above each term (here $n$ stands for the position of the term written below it), we can observe the following - $1 - 1$ $2 - 2 $ $3 - 2$ $4 - 4$ $5 - 4$ . . . $8 - 8$ $9 - 8$ . . . We can notice that the $ 4^{th}$ term is 4 and, similarly, the $ 8^{th}$ term is 8. So the $ 1025^{th}$ term must be 1024, as the run of 1024s starts at the $ 1024^{th} $ term. So the value of the $ 1025^{th}$ term is $ 2^{10} $. Is there any other method to solve this question?
Make up the frequency and cumulative frequency table: $$\begin{array}{c|c|c} x&f&F\\ \hline 1&1&1=2^1-1\\ 2&2&3=2^2-1\\ 4&4&7=2^3-1\\ 8&8&15=2^4-1\\ \vdots&\vdots&\vdots\\ 256&256&511=2^{9}-1\\ 512&512&1023=2^{10}-1\\ 1024&1024&2047=2^{11}-1\\ \vdots&\vdots&\vdots\\ 2^n&2^n&2^{n+1}-1 \end{array}$$ So, your approach was efficient to notice that $a_{2^n}=2^n$. Hence, $a_{1024}=\color{red}{a_{1025}}=\cdots =a_{2047}=\color{red}{1024}$.
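The closed form $a_n = 2^{\lfloor\log_2 n\rfloor}$ read off from the table can be computed exactly with integer bit tricks (a sketch, not part of the original answer):

```python
def a(n):
    # a_n = 2^(floor(log2 n)), computed exactly via bit_length
    return 1 << (n.bit_length() - 1)

# first 15 terms reproduce the given sequence
assert [a(n) for n in range(1, 16)] == [1, 2, 2, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8]
assert a(1024) == a(1025) == a(2047) == 1024
assert a(2048) == 2048
```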
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 7, "answer_id": 6 }
Jordan form of operator $X \mapsto AXA$ Consider $n \times n$ matrices over the complex field. Compute the Jordan form of the operator $X \mapsto AXA$, where $$ A = \begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix} $$ is the nilpotent $n \times n$ Jordan block.
Hint: The Jordan form of the map $T(X) = AXA$ can be deduced using (only) the following pieces of information: * *$T$ is a linear map on a space with dimension $n^2$ *$T^n = 0$ *More generally, $\operatorname{rank}(T^{k}) = (n-k)^2$, $k = 1,\dots,n$ Another approach: using the vectorization operator, we can conclude that the matrix of your transformation (relative to a certain basis of $\Bbb C^{n \times n}$) is $A^T \otimes A$, where $\otimes$ denotes the Kronecker product. This matrix is "almost" in Jordan normal form. To see that $\operatorname{rank}(T^{k}) = (n-k)^2$, $k = 1,\dots,n$ holds, it suffices to make the following observation. The domain $\Bbb C^{n \times n}$ is spanned by elements of the form $uv^T$ with $u,v \in \Bbb C^n$. Thus, the image of $T^k$ is spanned by elements of the form $$ T^k(uv^T) = (A^k u)(v^T A^k) = (A^k u)((A^T)^kv)^T. $$ Thus, the image of $T^k$ is spanned by the matrices $xy^T$ where $x,y$ are in the images of $A^k$ and $(A^T)^k$ respectively. Because the images of $A^k$ and $(A^T)^k$ each have dimension $n-k$, we may conclude that the image of $T^k$ has dimension $(n-k)^2$. Since $T$ is nilpotent, every Jordan block in the Jordan form is a block associated with $0$. More specifically, we may use the above observation to conclude that $T$ has $1$ block of size $n$, and $2$ blocks of size $k$ for $k = 1,\dots,n-1$.
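The key fact $\operatorname{rank}(T^k) = (n-k)^2$ can be checked mechanically for small $n$ by row-reducing the images of the basis matrices $E_{ij}$ in exact rational arithmetic (a sketch for $n = 3$, not part of the original answer):

```python
from fractions import Fraction as Fr

n = 3
A = [[Fr(1) if j == i + 1 else Fr(0) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(X, p):
    R = [[Fr(1) if i == j else Fr(0) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = matmul(R, X)
    return R

def rank(rows):
    # Gaussian elimination over the rationals
    rows = [r[:] for r in rows]
    rk, col, ncols = 0, 0, len(rows[0])
    while rk < len(rows) and col < ncols:
        piv = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(rk + 1, len(rows)):
            factor = rows[r][col] / rows[rk][col]
            rows[r] = [x - factor * y for x, y in zip(rows[r], rows[rk])]
        rk += 1
        col += 1
    return rk

# T^k acts on X as A^k X A^k; compute its rank from the images of the basis E_ij
for k in range(1, n + 1):
    Ak = mat_pow(A, k)
    images = []
    for i in range(n):
        for j in range(n):
            E = [[Fr(1) if (r, c) == (i, j) else Fr(0) for c in range(n)]
                 for r in range(n)]
            img = matmul(Ak, matmul(E, Ak))
            images.append([img[r][c] for r in range(n) for c in range(n)])
    assert rank(images) == (n - k) ** 2
```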
{ "language": "en", "url": "https://math.stackexchange.com/questions/3299984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to Understand the Meaning of Transitivity in Relation to a Particular Problem I was trying to understand transitive relations, so I was solving a problem. The question is: $R_1 = \{(a,b)| a =b \text{ or }a = -b\} , R_2 = \{(a,b)| a =b \}, R_3 = \{(a,b)| a =b+1\}$; which ones are transitive and why? As far as I know, a transitive relation works like this: if $a>b$ and $b > c$, then $a> c$. I am assuming that the given $a$ and $b$ are real numbers, but I am not sure where I will get a $c$ so that I can show which relation is transitive. I saw many youtube tutorials and read my book but I am very confused with this math. I am new to this topic. It will be really helpful if someone can please explain how I can solve this problem. Thank you very much.
Try replacing, for example, your $a \gt b$ with $(a,b) \in R_1$, $b \gt c$ with $(b,c) \in R_1$ and $a \gt c$ with $(a,c) \in R_1$. Then replace $R_1$ with $R_2$ and $R_3$ to see for which of these the first $2$ statements (e.g., $(a,b) \in R_1$ and $(b,c) \in R_1$) mean the third one must hold as well (e.g., $(a,c) \in R_1$). If you do this, you should find that $R_1$ and $R_2$ are transitive, while $R_3$ is not (because $a = b + 1$ and $b = c + 1$ means $a = c + 2$, not $a = c + 1$). Can you finish the rest yourself?
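Since the relations make sense over the integers, a brute-force check on a finite window illustrates the conclusion (a sketch; the window $\{-5,\dots,5\}$ is an arbitrary choice, not from the original post):

```python
from itertools import product

domain = range(-5, 6)

R1 = {(a, b) for a, b in product(domain, repeat=2) if a == b or a == -b}
R2 = {(a, b) for a, b in product(domain, repeat=2) if a == b}
R3 = {(a, b) for a, b in product(domain, repeat=2) if a == b + 1}

def is_transitive(R):
    # for every chain (a, b), (b, c) in R, the shortcut (a, c) must be in R
    return all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)

assert is_transitive(R1)
assert is_transitive(R2)
assert not is_transitive(R3)      # e.g. (2, 1), (1, 0) in R3 but (2, 0) is not
```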
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If $B(x, r)$ is closed in $S\subseteq \mathbb{R}$, is it closed in $\mathbb{R}$? Let $(\mathbb{R}, d)$ be a metric space. Let $S \subseteq \mathbb{R}$. I know that an open ball in $S$ is not necessarily an open ball in $\mathbb{R}$, but is a closed ball in $S$ closed in $\mathbb{R}$?
No. Consider $S = (-1,1)$. Then $\overline{B_S}(0,1) = S$ which is not closed in $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
invert one column before matrix multiplication and multiply result with unit vector = still same ranking, why? I am currently trying to find a mathematical proof of the following for a research paper. It's been quite long since I did higher mathematics and English is not my first language, so go easy on me if I got the terms wrong: We have two matrices zxh and hxe that are multiplied to get the matrix zxe. This matrix in turn gets multiplied with a transposed unit vector 1xz to get the sum of each column in zxe. The result is the transposed vector 1xe. The values of this vector are ranked, which is the final result. Values in zxh range from -1 to 1 and values in hxe range from 0 to 1. Now I noticed the following: if one column in zxh gets inverted (multiplied by -1) and at the same time I replace the values in the corresponding row of hxe by their difference to 1, I get the same result in the ranking. From what I observed, the values in 1xe after the changes differ by a constant value from the ones before the changes; this constant is equal to the unit vector times the inverted column in zxh. Example: $$ zxh = \begin{pmatrix} 1 & -0.5 & 0 & -1 \\ 1 & 1 & 1 & 1 \\ -0.5 & 0 & 0.5 & 0 \end{pmatrix}; hxe = \begin{pmatrix} 1 & 0.5 & 0 \\ 0.5 & 1 & 0.5 \\ 1 & 0.5 & 0.5 \\ 1 & 1 & 1 \end{pmatrix} \\$$ The result of $1xz * zxh * hxe $ is $\begin{pmatrix} 3.25 & 2 & 1 \end{pmatrix}$ Now if we make the changes described above (let's take the first column and row): $$ zxh = \begin{pmatrix} -1 & -0.5 & 0 & -1 \\ -1 & 1 & 1 & 1 \\ 0.5 & 0 & 0.5 & 0 \end{pmatrix}; hxe = \begin{pmatrix} 0 & 0.5 & 1 \\ 0.5 & 1 & 0.5 \\ 1 & 0.5 & 0.5 \\ 1 & 1 & 1 \end{pmatrix} $$ The result of $1xz * zxh * hxe $ is $\begin{pmatrix} 1.75 & 0.5 & -0.5 \end{pmatrix}$ which is exactly 1.5 less than the previous result. I need to prove that the ranking before and after applying the changes stays the same. The values in the vector 1xe are not relevant, as long as the ranking is equal.
Would someone please draw out the steps to prove this formally?
I found a more or less formal proof for the observed phenomenon by splitting up the individual elements of the matrices, similar to this: k is zxh; i is hxe; u is zxe; t is 1xe. Before: $$ u_{11}^v = k_{11}*i_{11} + k_{12}*i_{21} + k_{13}*i_{31} + ... $$ After: $$ u_{11}^n = -k_{11}*(1-i_{11}) + -k_{12}*(1-i_{21}) + -k_{13}*(1-i_{31}) + ... $$ Proof that the delta between those changes is constant: $$ \Delta_{vn} u_{11} = k_{11} + k_{12} + k_{13} ... = const. $$ Since the unit row vector weights every row with coefficient $1$, the delta for each entry of t is the same constant: $$ \Delta_{vn} t_{1} = \Delta_{vn}u_{11} + \Delta_{vn}u_{21} + \dots = const. $$ Therefore the ranking of t is the same as before the changes.
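The example from the question can be replayed in code, confirming both the constant shift and the unchanged ranking (a sketch; the function names are ad hoc):

```python
Z = [[1, -0.5, 0, -1],
     [1, 1, 1, 1],
     [-0.5, 0, 0.5, 0]]
H = [[1, 0.5, 0],
     [0.5, 1, 0.5],
     [1, 0.5, 0.5],
     [1, 1, 1]]

def totals(Z, H):
    # t = 1^T (Z H): column sums of Z, then multiplied into H
    col = [sum(Z[r][j] for r in range(len(Z))) for j in range(len(Z[0]))]
    return [sum(col[j] * H[j][c] for j in range(len(H))) for c in range(len(H[0]))]

def flip(Z, H, j):
    # negate column j of Z and replace row j of H by 1 - values
    Z2 = [[-v if c == j else v for c, v in enumerate(row)] for row in Z]
    H2 = [[1 - v for v in row] if r == j else row[:] for r, row in enumerate(H)]
    return Z2, H2

def ranking(t):
    return sorted(range(len(t)), key=lambda c: -t[c])

t_before = totals(Z, H)
Z2, H2 = flip(Z, H, 0)
t_after = totals(Z2, H2)

deltas = [b - a for a, b in zip(t_after, t_before)]
assert max(deltas) - min(deltas) < 1e-12   # constant shift across columns
assert ranking(t_before) == ranking(t_after)
```

All entries here are exact binary fractions, so the shift of 1.5 comes out exactly.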
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding the number of solutions to $\cos^4(2x)+2\sin^2(2x)=17(1+\sin 2x)^4$ for $x\in(0,2\pi)$ Number of solutions of the equation $\cos^4(2x)+2\sin^2(2x)=17(1+\sin 2x)^4$ for $x\in(0,2\pi)$. What I tried: $\cos^4(2x)+2\sin^2 2x=17(1+\sin^2(2x)+2\sin 2x)^2$ $1+\sin^4 (2x)=17(1+\sin^4 2x+2\sin^2 2x+4\sin^2 2x+4\sin 2x(1+\sin^2 2x))$ $16\sin^4 (2x)+68\sin^3 2x+102\sin^2 2x+68\sin 2x+16=0$ How do I solve it? Help me please.
You're not required to find all solutions, just to find how many there are. Let $u=\sin(2x)$, so that $\cos^2(2x)=1-u^2$. Then the trigonometric equation in $x$ becomes a polynomial equation in $u$: $$ 0 = (1 - u^2)^2 + 2 u^2 - 17 (1 + u)^4 = -2 (8 u^4 + 34 u^3 + 51 u^2 + 34 u + 8) $$ This quartic is palindromic and factors as $$8 u^4 + 34 u^3 + 51 u^2 + 34 u + 8 = (2u+1)(u+2)(4u^2+7u+4),$$ where the quadratic factor has negative discriminant. So the only root with $|u|\leq 1$ (so that $u$ can be the sine of something) is $u=-\tfrac12$. Finally, $\sin(2x)=-\tfrac12$ with $2x\in(0,4\pi)$ gives $2x\in\big\{\tfrac{7\pi}{6},\tfrac{11\pi}{6},\tfrac{7\pi}{6}+2\pi,\tfrac{11\pi}{6}+2\pi\big\}$, i.e. exactly $4$ solutions $x\in(0,2\pi)$.
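The count of solutions can be confirmed by scanning the original equation for sign changes over $(0, 2\pi)$ (a sketch, independent of the algebra):

```python
import math

def f(x):
    s = math.sin(2 * x)
    c = math.cos(2 * x)
    return c ** 4 + 2 * s ** 2 - 17 * (1 + s) ** 4

# count sign changes of f on a fine grid over (0, 2*pi)
N = 200_000
lo, hi = 1e-9, 2 * math.pi - 1e-9
xs = [lo + (hi - lo) * i / N for i in range(N + 1)]
signs = [1 if f(x) > 0 else -1 for x in xs]
crossings = sum(1 for i in range(N) if signs[i] != signs[i + 1])

assert crossings == 4   # sin(2x) = -1/2 at x = 7pi/12, 11pi/12, 19pi/12, 23pi/12
```

Each crossing is transversal (the sine crosses the level $-\tfrac12$ with nonzero slope), so the sign-change count equals the number of solutions.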
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Integrating $\int (x+1)^2 dx$ two ways gives different results: $\frac13 x^3+x^2+x$ vs $\frac13 x^3+x^2+x+\frac13$. Why? I was trying to compute $$\int (x+1)^2 \, dx~,$$ which is a really easy function to integrate. But the thing is that I write the function as $x^2 +2x+1$ and the result I got was $\frac{x^3}{3} +x^2 +x$. But then I tried to integrate it using substitution; I called $u=x+1$, so the function I had to integrate was $$\int (u)^2 \, du~,$$ and the result is $\frac{u^3}{3}$ and then $\frac{(x+1)^3}{3}$, which is equal to $\frac{x^3}{3} +x^2 +x+\frac{1}{3}$ and that's different from the previous result!
WARNING while dealing with INDEFINITE INTEGRALS!! When you solve an indefinite integral you should always add a constant of integration to your final result (why?). Remember that an indefinite integral is nothing but an anti-derivative. Therefore, whenever you integrate you lose track of a constant, because the derivative of a constant is $0$. Your two results, $\frac{x^3}{3}+x^2+x$ and $\frac{x^3}{3}+x^2+x+\frac13$, differ only by the constant $\frac13$, so both are valid antiderivatives.
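To see concretely that the two answers from the question differ only by a constant, a quick numeric check:

```python
def F1(x):
    """Antiderivative obtained by expanding (x+1)^2 first."""
    return x**3 / 3 + x**2 + x

def F2(x):
    """Antiderivative obtained via the substitution u = x + 1."""
    return (x + 1)**3 / 3

# The two results differ by the constant 1/3 at every sample point.
diffs = [F2(x) - F1(x) for x in (-2.0, -0.5, 0.0, 1.0, 3.5)]
print(diffs)
```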
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Prove : $C_{k}(x)+C_{k+1}(x)\geqslant 1$ , $x_{k}\leqslant x\leqslant x_{k+1}$ Suppose $-\infty < x_{1}< x_{2}< \cdots < x_{n}< +\infty (n\geqslant 2)$. And suppose an algebraic polynomial $C_{k}(x)$ $(k=1,2,\cdots ,n)$ (degree $\leqslant n-1$) satisfies: $$C_{k}(x_{i})=\left\{\begin{matrix} 0, & i\neq k,\\ 1, & i=k, \end{matrix}\right. (i=1,2,\cdots ,n)$$ Prove : $C_{k}(x)+C_{k+1}(x)\geqslant 1$ , $x_{k}\leqslant x\leqslant x_{k+1}$ $(1\leqslant k\leqslant n-1)$ . Let $C_{k}(x)=a_{k}(x-x_{1})(x-x_{2})\cdots (x-x_{k-1})(x-x_{k+1})\cdots (x-x_{n})$ , $(k=1,2,\cdots ,n)$ , with $a_{k}=\frac{1}{(x_{k}-x_{1})(x_{k}-x_{2})\cdots (x_{k}-x_{k-1})(x_{k}-x_{k+1})\cdots (x_{k}-x_{n})}$ . It satisfies the condition. But I can't prove the inequality. Hope someone helps me. Appreciate!
As the answer by Dunham has stated, the solution involves several aspects. First, let $$P_k(x) = C_k(x) + C_{k+1}(x), \text{ for } 1\leqslant k\leqslant n-1 \tag{1}\label{eq1}$$ As you've indicated, you get $$C_j(x) = \frac{\prod_{i=1,i\neq j}^{n}(x - x_i)}{\prod_{i=1,i\neq j}^{n}(x_j - x_i)}, \text{ for } 1 \le j \le n \tag{2}\label{eq2}$$ This $n-1$ degree polynomial is the only one of degree $\le n - 1$ which satisfies the required conditions (e.g., as discussed in the answer by Dan in Find N degree polynomial from N+1 points, and with more details in Polynomial interpolation). As both $C_k(x)$ and $C_{k+1}(x)$ are $n-1$ degree polynomials, their sum $P_k(x)$ is an up to $n-1$ degree polynomial. Also, $P_k(x_i) = 0$ for all $1 \le i \le n$ except for $i = k$ and $i = k+1$, i.e., for $n-2$ points. Also, note that $P_k(x_k) = P_k(x_{k+1}) = 1$. Since $P'_k(x)$ is an up to $n-2$ degree polynomial, unless it's the $0$ polynomial, it can have at most $n-2$ roots. However, by the Mean value theorem, $P'_k(x)$ must have a root in each of $(x_i,x_{i+1})$ for all $1 \le i \le n - 1$ except for $i = k-1$ and $i=k+1$. This gives a total of $n-2$ points if $k = 1$ or $k = n - 1$, else $n-3$ points for all other $k$. Consider that $P_k(x_m) \lt 1$ for some $x_k \lt x_m \lt x_{k+1}$. There are $4$ cases to consider, of which all but the first use the Intermediate value theorem: * *For $n = 2$, you get that $P_1(x)$ is a linear function. Since $P_1(x_1) = P_1(x_2) = 1$, this means it must be the constant function $P_1(x) = 1$, so it's not possible for $P_1(x) \lt 1$. *For $n \gt 2$ and $k = 1$, since $P_1(x_1) \gt P_1(x_m)$, $P_1(x_2) \gt P_1(x_m)$ and $P_1(x_2) \gt P_1(x_3)$, there are points $x_a$, $x_b$ and $x_c$ where $x_1 \lt x_a \lt x_m$, $x_m \lt x_b \lt x_2$ and $x_2 \lt x_c \lt x_3$ such that $P_1(x_a) = P_1(x_b) = P_1(x_c)$, so by the Mean value theorem, $P'_1(x)$ must have at least $2$ roots in $(x_1,x_3)$.
Since it has $n-3$ roots in $(x_3,x_n)$, this means $P'_1(x)$ must have at least $2 + (n-3) = n-1$ roots. However, since $P'_1(x)$ can have at most $n-2$ roots, this is not possible. *For $n \gt 2$ and $1 \lt k \lt n - 1$, you can repeat the argument used in point $2$ above for both sides of $x_m$ to determine there must be at least $3$ roots of $P'_k(x)$ in $(x_{k-1},x_{k+2})$, but as there are $n-4$ other roots, the total of $3 + (n-4) = n - 1$ is once again too large. *For $n \gt 2$ and $k = n-1$, you can apply the arguments used in point $2$ above for the left side of $x_m$ to, once again, determine that $P'_k(x)$ must have at least $n-1$ roots, which isn't possible. Since all of the cases for $P_k(x_m) \lt 1$ for some $x_k \lt x_m \lt x_{k+1}$ have been shown to not be possible, this means that $P_k(x) \ge 1$ for all $x_k \le x \le x_{k+1}$ and $1 \le k \le n - 1$ as requested.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a point that minimizes the sum squared difference of the distances to a set of other points I have n points in Euclidean space $\{\mathbf{a_1}, \mathbf{a_2}, ... , \mathbf{a_n}\}$, and the desired distances to them $\{d_1, d_2, ..., d_n\}$. How can I find the optimal point $\mathbf{x}$ that minimizes $\sum_{i=1}^n [\lVert \mathbf{x} - \mathbf{a}_i \rVert - d_i]^2$ What I am doing now is put x somewhere, and differentiate the objective. However, gradient descent very often end up in local minima.
Using automaticallyGenerated's answer as a basis for the discussion, you need to minimize $$\Phi(x,y)=\sum_{i=1}^{n} \left(\sqrt{(x-x_i)^2+(y-y_i)^2}-d_i\right)^2$$ which is highly nonlinear and "good" starting values are required. You can get those easily if, in a preliminary step, you consider that you have $n$ equations $$f_i=(x-x_i)^2+(y-y_i)^2-d_i^2=0$$ Now, write the $\frac {n(n-1)}2$ equations $[(i=1,2,\cdots,n-1) \quad\text{and} \quad(j=i+1,i+2,\cdots n)]$ $$f_i-f_j=0 \implies 2(x_i-x_j)\,x+2(y_i-y_j)\,y=(x_i^2+y_i^2-d_i^2)-(x_j^2+y_j^2-d_j^2)$$ and a simple linear regression (or matrix calculations) will give you estimates for $x$ and $y$. If you still want to improve these guesses, minimize in a second step $$\Psi(x,y)=\sum_{i=1}^{n} \left({(x-x_i)^2+(y-y_i)^2}-d_i^2\right)^2$$ which is better conditioned than $\Phi(x,y)$. For sure, start with the guesses obtained in the first step and obtain new estimates. Now, you are ready for the minimization of $\Phi(x,y)$ and this would not present any problem. This could even be done using Newton-Raphson method for solving $$\frac{\partial \Phi(x,y) }{\partial x }=\frac{\partial \Phi(x,y) }{\partial y }=0$$ even using numerical derivatives.
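A minimal sketch of the preliminary linear step, on synthetic 2-D data with exact distances (the anchor coordinates and the normal-equation solve here are illustrative choices, not part of the original answer):

```python
import itertools

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pt = (1.0, 1.0)
dists = [((true_pt[0] - ax)**2 + (true_pt[1] - ay)**2) ** 0.5 for ax, ay in anchors]

# Build the linear system from the pairwise differences f_i - f_j = 0.
rows, rhs = [], []
for i, j in itertools.combinations(range(len(anchors)), 2):
    xi, yi = anchors[i]
    xj, yj = anchors[j]
    rows.append((2 * (xi - xj), 2 * (yi - yj)))
    rhs.append((xi * xi + yi * yi - dists[i]**2) - (xj * xj + yj * yj - dists[j]**2))

# Solve the normal equations A^T A v = A^T b for the two unknowns (x, y).
a11 = sum(r[0] * r[0] for r in rows)
a12 = sum(r[0] * r[1] for r in rows)
a22 = sum(r[1] * r[1] for r in rows)
b1 = sum(r[0] * t for r, t in zip(rows, rhs))
b2 = sum(r[1] * t for r, t in zip(rows, rhs))
det = a11 * a22 - a12 * a12
x = (a22 * b1 - a12 * b2) / det
y = (a11 * b2 - a12 * b1) / det
print(x, y)
```

With noisy distances this linear estimate is only a starting point; it would then be refined by minimizing $\Psi$ and finally $\Phi$.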
{ "language": "en", "url": "https://math.stackexchange.com/questions/3300898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Which primes $p$ satisfy $n^2 \equiv -1 \mod p$ for a perfect square $n^2$? I am trying to solve a homework exercise in elementary number theory: Which primes $p$ satisfy $n^2 \equiv -1 \mod p$ for a perfect square $n^2$? After looking at the case $p=5$, I saw that $3^2 \equiv 4 \mod 5$, but $p=2^2+1$, I thought that maybe the answer would be primes $p$ such that $p=m^2+1$ for some $m$. Certainly this would imply that there is an $n$ so that $p \mid n^2+1$ (in particular $n=m$). Unfortunately, $n=13$ doesn't satisfy that condition. However, weakening it to $p\mid n^2+1$ for some $n$ is just the statement of the problem. I don't want to answer "the congruence is true for primes for which it is true." So I am back to square one.
The answer is exactly the primes which are congruent to $1,2 \bmod 4$. The only prime congruent to $2 \pmod 4$ is $2$, so assume $p$ is odd for the rest of the answer. The result follows pretty straightforwardly once we establish a basic result about the ring $\mathbb{Z}/p\mathbb{Z}$. Result: There exists some $g$ such that every nonzero element $c$ in $\mathbb{Z}/p\mathbb{Z}$ can be expressed as $c= g^n$ for some $n$. (The unit group is cyclic.) Back to the problem, we are essentially solving $x^2\equiv -1\bmod p$. Write $x$ as $g^n$ for some generator $g$, so we have $g^{2n}=-1\bmod p$, or equivalently $$g^{2n}=g^{\frac{p-1}{2}}\pmod p.$$ So $2n=\frac{p-1}{2}\bmod p-1$. If $p\equiv 1\bmod 4$, there is clearly a solution, and if $p\equiv 3\bmod 4$, this has no solutions. (Verify this.) In general, the law of quadratic reciprocity provides a simple criterion for determining whether or not a square root exists modulo a prime.
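The criterion ($-1$ is a square mod $p$ exactly when $p \equiv 1,2 \pmod 4$) is easy to brute-force for small primes:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# For each prime p < 200: does n^2 = -1 (mod p) have a solution?
results = {p: any(n * n % p == p - 1 for n in range(p))
           for p in range(2, 200) if is_prime(p)}
claim_holds = all(has_root == (p % 4 in (1, 2)) for p, has_root in results.items())
print(claim_holds)
```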
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Monotone increasing bijection from $\mathbb{R}$ to $(0,1)$. Give an example of a monotone increasing function $f: \mathbb{R} \to (0,1)$ such that $f$ is a bijection. I have an example in mind that follows $$g(x)=\frac{1}{1+e^x} ,x \in \mathbb{R}.$$ Then $g$ is a monotone decreasing bijection from $\mathbb{R}$ to $(0,1)$. Then consider the function $f: \mathbb{R} \to (0,1)$ as $$f(x)=1-g(x).$$ Then is it the required example? Is my answer is correct that, above define $f$ is a monotone increasing bijection from $\mathbb{R}$ to $(0,1)$? If this example is wrong, please suggest me an appropriate example. Thanks.
Your example is correct. You may take other bijective functions, such as $$f:(-\infty,\infty) \rightarrow (0,1), f(x)=\frac{1}{2}(1+\mbox{erf}(x)), \mbox{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x} e^{-t^2} dt,$$ then $$f'(x)=\frac{1}{\sqrt{\pi}} e^{-x^2}>0, \quad f(-\infty)=0, \ f(+\infty)=1.$$ Another example is $$f:(-\infty,\infty) \rightarrow (0,1), f(x)=\frac{1}{2} \left (1+\frac{2}{\pi} \tan^{-1} x\right), f'(x)=\frac{1}{\pi (1+x^2)}>0.$$ Yet another example is $$f(x)= \frac{1}{2}e^x,~ \mbox{if}~x\le 0; \qquad f(x)=1-\frac{1}{2}e^{-x},~\mbox{if}~ x>0,$$ which is continuous at $0$ (both pieces equal $\frac12$ there), strictly increasing, and maps $\mathbb{R}$ onto $(0,1)$. (The unscaled version $e^x$ for $x<0$ and $2-e^{-x}$ for $x>0$ would instead map onto $(0,1)\cup(1,2)$.)
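A quick numeric sanity check of the construction from the question, $f(x)=1-\frac{1}{1+e^x}$, which should be strictly increasing with all values in $(0,1)$:

```python
import math

def f(x):
    """f = 1 - g with g(x) = 1/(1 + e^x), i.e. f(x) = e^x / (1 + e^x)."""
    return 1 - 1 / (1 + math.exp(x))

xs = [-10 + i * 0.05 for i in range(401)]   # grid on [-10, 10]
vals = [f(x) for x in xs]
strictly_increasing = all(a < b for a, b in zip(vals, vals[1:]))
inside_unit_interval = all(0 < v < 1 for v in vals)
print(strictly_increasing, inside_unit_interval)
```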
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can anyone help showing why my calculation of $\int x\ln x$ $dx$ is wrong? Suppose $\frac{dy}{dx}=x\ln x,$ my teacher asks me to find $y$. So I assume I got to integrate the right hand side: $$\int x\ln x\, dx$$ The result I got is $$ \int x\ln x\, dx=x\ln x-x+C\tag{1} $$ But, apparently, it is wrong since taking the derivative gives: $$ (x\ln x-x+C)'=\ln x+1-1=\ln x. $$ Can you please give me a hand?
Using LaTeX is tiring, so I will just describe the table. Differentiating the answer will definitely get you back the function that you wanted to integrate. Using integration by parts with the table method (differentiate $\ln x$, integrate $x$): $$\int x\ln x\,dx = \frac{x^2}{2}\ln x - \int \frac{x^2}{2}\cdot\frac{1}{x}\,dx = \frac{x^2}{2}\ln x - \frac{x^2}{4} + C.$$ Note that your result $x\ln x - x + C$ is actually an antiderivative of $\ln x$, not of $x\ln x$.
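The table method gives $\int x\ln x\,dx = \frac{x^2}{2}\ln x - \frac{x^2}{4} + C$; a central-difference check that this really differentiates back to $x\ln x$:

```python
import math

def F(x):
    """Candidate antiderivative of x*ln(x) from integration by parts."""
    return x * x / 2 * math.log(x) - x * x / 4

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - x * math.log(x))
              for x in (0.5, 1.0, 2.0, 5.0))
print(max_err)
```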
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
For $a>0$, prove $\int_{-\infty}^\infty e^{-x^2}\cos ax\,dx=\sqrt{\pi}e^{-\frac{a^2}{4}}$ by justifying term by term integration. For $a>0$, prove $\displaystyle\int_{-\infty}^\infty e^{-x^2}\cos ax\,dx=\sqrt{\pi}e^{-\frac{a^2}{4}}$ by justifying term by term integration. $$\int_{-\infty}^\infty e^{-x^2}\cos ax\,dx=\int_{-\infty}^\infty e^{-x^2}\sum_{k=0}^{+\infty}(-1)^k\frac{(ax)^{2k}}{(2k)!}\,dx $$ I wonder how to justify that the integral sign and the summation sign can be interchanged? I want to use Dominated Convergence Theorem, but I can't find a Dominated function. I only need to know how to do this way but no other methods. This is an exercise in Folland's Real Analysis, he hints that it can be proved this way.
Since for any $n\in \mathbb N$,$$\left|e^{-x^2}\sum_{k=0}^{n}(-1)^k\frac{(ax)^{2k}}{(2k)!}\right|\leq e^{-x^2}\sum_{k=0}^{+\infty}\frac{(a|x|)^{2k}}{(2k)!}\leq e^{-x^2}\sum_{k=0}^{+\infty}\frac{(a|x|)^{k}}{k!}=e^{-x^2+a|x|},$$ and $$\int_{\mathbb R}e^{-x^2+a|x|}\,dx=2\int_0^\infty e^{-x^2+ax}\,dx<\infty,$$ we can use DCT to change the order of integration and summation.
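The identity being proved can also be confirmed numerically; the cutoff $|x|\le 8$ and step $h=0.01$ below are arbitrary but more than sufficient, since the integrand decays like $e^{-x^2}$:

```python
import math

def gauss_cos_integral(a, h=0.01, cutoff=8.0):
    """Trapezoidal approximation of the integral of exp(-x^2) * cos(a*x) over R."""
    n = int(2 * cutoff / h)
    xs = [-cutoff + i * h for i in range(n + 1)]
    vals = [math.exp(-x * x) * math.cos(a * x) for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Compare against sqrt(pi) * exp(-a^2 / 4) for several values of a.
errs = [abs(gauss_cos_integral(a) - math.sqrt(math.pi) * math.exp(-a * a / 4))
        for a in (0.5, 1.0, 2.0, 3.0)]
print(errs)
```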
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integration of $x^2\cdot\frac{x\sec^2x+\tan x}{(x\tan x+1)^2}$ Integrate $$\int x^2\cdot\dfrac{x\sec^2x+\tan x}{(x\tan x+1)^2}dx$$ So what is did is integration by parts taking $x^2$ as $u$ and the other part as $v$ . Now I got to use it again which then eventually leads to (integral of $\dfrac1{x\tan x+1}dx $). Can someone help me?
Let $x \tan x + 1 = t$, so that $dt = (\tan x + x\sec^2 x)\,dx$. Then $$\int x^2\cdot\dfrac{x\sec^2x+\tan x}{(x\tan x+1)^2}\,dx = \int \frac{x^2}{t^2}\,dt = x^2\cdot\left(-\frac{1}{t}\right) + \int \frac{2x}{t}\,dx \quad \text{(by parts)}.$$ For the remaining integral, note that $$\frac{x}{t} = \frac{x}{x\tan x+1} = \frac{x\cos x}{x\sin x+\cos x},$$ and $d(x\sin x+\cos x) = x\cos x\,dx$, so $$\int \frac{2x}{t}\,dx = 2\ln|x\sin x+\cos x| + C.$$ Altogether, $$\int x^2\cdot\dfrac{x\sec^2x+\tan x}{(x\tan x+1)^2}\,dx = -\frac{x^2}{x\tan x+1} + 2\ln|x\sin x+\cos x| + C.$$
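Carrying the substitution $t = x\tan x + 1$ and the by-parts step through (and noticing that $\frac{x}{x\tan x+1} = \frac{x\cos x}{x\sin x+\cos x}$, whose numerator is the derivative of $x\sin x+\cos x$) leads to the antiderivative $-\frac{x^2}{x\tan x+1} + 2\ln|x\sin x+\cos x| + C$, which a finite-difference check confirms:

```python
import math

def F(x):
    """Candidate antiderivative."""
    return -x * x / (x * math.tan(x) + 1) + 2 * math.log(abs(x * math.sin(x) + math.cos(x)))

def integrand(x):
    sec2 = 1 / math.cos(x)**2
    return x * x * (x * sec2 + math.tan(x)) / (x * math.tan(x) + 1)**2

# Central differences at points away from the singularities of tan and of t = 0.
h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
              for x in (-0.4, 0.3, 0.7, 1.0))
print(max_err)
```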
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Splitting Field of $x^4 + x^3 + 1$ over $\mathbb{F}_{32}$ I'm trying to find the splitting field described in the title. I believe I have figured it out, but my method seems a bit involved and I'm wondering if there is any simpler way to obtain the result. My method: This polynomial is actually contained in $\mathbb{F}_2[x]\subset\mathbb{F}_{32}[x]$ so it should suffice to find the splitting field $K$ of the polynomial over $\mathbb{F}_2$ and get the composite $K\mathbb{F}_{32}$. Note that over $\mathbb{F}_2$ this polynomial is irreducible since it has no roots and the only irreducible polynomial of degree $2$ over $\mathbb{F}_2$ is $x^2 + x + 1$ which does not square to our polynomial. Since Gal($K/\mathbb{F}_2)\leq S_4$, and contains the degree $4$ extension $E = \mathbb{F}_2[x]/(x^4 +x^3 + 1)$, we have that $[K:\mathbb{F}_2]= 4$ or $8$. Over $E$, a tedious computation shows that our polynomial splits as: $$ x^4 + x^3 + 1 = (x+\bar{x})(x+\bar{x}^2)(x^2 + (1 + \bar{x} + \bar{x}^2)x + (\bar{x} + 1)).$$ Now the polynomial $ x^2 + ( 1 + \bar{x} + \bar{x}^2)x + (\bar{x} + 1))$ becomes the irreducible $x^2 +x + 1$ over the quotient $E/(\bar{x})\cong \mathbb{F}_2$, so it must be irreducible in $E[x]$. Thus $E$ is not the splitting field of our polynomial over $\mathbb{F}_2$, implying the splitting field is the unique extension of degree $8$, namely $\mathbb{F}_{2^8}$. Since this field contains $\mathbb{F}_{32}$, we have that this is also the splitting field of this polynomial over $\mathbb{F}_{32}$. Edit: Woops, $\mathbb{F}_{2^5}$ is definitely not contained in $\mathbb{F}_{2^8}$ so this last part should say the splitting field is $\mathbb{F}_{2^8}\mathbb{F}_{2^5} = \mathbb{F}_{2^{40}}$.
Note that the lattice of fields of the shape $$\Bbb F_{\displaystyle 2^r}$$ corresponds to the lattice of the $r$-values w.r.t. division. The field $$\Bbb F_{32}=\Bbb F_{2^5}$$ intersects (in a common embedding) the fields $\Bbb F_{2^k}$ for $k=1,2,3,4$ only in $\Bbb F_2$, in the prime field. The polynomial $$ f=X^4+X^3+1\in \Bbb F_2[X]$$ is irreducible. To see this, note that there is no root of it in $\Bbb F_2$. The only possibility to factor it would be as a product of two irreducible polynomials of degree two. But there is only one such irreducible polynomial, namely $X^2+X+1$; it is reciprocal, its square is reciprocal, $X^4+X^2+1$, but it is not our polynomial. From here, the splitting field of $f$ over $\Bbb F_2$ is $\Bbb F_{2^4}\cong \Bbb F_2[X]/(f)$. The minimal field containing $\Bbb F_{2^4}$ and $\Bbb F_{2^5}$ is $$\Bbb F_{\displaystyle 2^{4\cdot 5}} = \Bbb F_{\displaystyle 2^{20}} \ , $$ which is the splitting field of $f$ considered as a polynomial over $\Bbb F_{32}$. Later EDIT: Let us split the polynomial $T^4 + T^3 +1 \in F[T]$ over the field $F=\Bbb F_2[X]/(f)=\Bbb F_2(a)$, where $a=X$ modulo $(f)$ is the generator of $F$, and the minimal relation over the prime field is $a^4+a^3+1=0$. * *First of all, $a$ is a root in $F$ of $T^4 + T^3 +1$. *The multiplicative order of $a$ is $2^4-1=15$, it generates the cyclic multiplicative group $\Bbb F_{16}^\times$. *The Frobenius morphism ($u\to u^2$) applied on the relation $a^4+a^3+1=0$ then gives: $$ \begin{aligned} 0 &=a^4+a^3+1\\ 0 &=(a^2)^4+(a^2)^3+1\\ 0 &=(a^4)^4+(a^4)^3+1\\ 0 &=(a^8)^4+(a^8)^3+1\ . \end{aligned} $$ So we have $T^4+T^3+1=(T-a)(T-a^2)(T-a^4)(T-a^8)$.
Computer checks:

    sage: var('x')
    sage: F.<a> = GF(2^4, modulus=x^4+x^3+1)
    sage: F
    Finite Field in a of size 2^4
    sage: a.minpoly()
    x^4 + x^3 + 1
    sage: R.<T> = PolynomialRing(F)
    sage: (T-a) * (T-a^2) * (T-a^4) * (T-a^8)
    T^4 + T^3 + 1
    sage: factor(T^4+T^3+1)
    (T + a) * (T + a^2) * (T + a^3 + 1) * (T + a^3 + a^2 + a)
    sage: a, a^2, a^4, a^8
    (a, a^2, a^3 + 1, a^3 + a^2 + a)
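The same verification can be done without Sage by hand-rolling $\Bbb F_{16}=\Bbb F_2[X]/(X^4+X^3+1)$ with elements stored as 4-bit masks. This is a sketch; the bitmask `0b11001` encodes the reduction polynomial $X^4+X^3+1$:

```python
MOD = 0b11001  # X^4 + X^3 + 1

def mul(u, v):
    """Carry-less multiply in F_2[X], then reduce mod X^4 + X^3 + 1."""
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        v >>= 1
    for i in range(r.bit_length() - 1, 3, -1):
        if (r >> i) & 1:
            r ^= MOD << (i - 4)
    return r

def poly(t):
    """Evaluate T^4 + T^3 + 1 at t (addition in F_16 is XOR)."""
    t2 = mul(t, t)
    t3 = mul(t2, t)
    t4 = mul(t2, t2)
    return t4 ^ t3 ^ 1

a = 0b0010                              # the class of X
roots = [a, mul(a, a)]                  # a, a^2
roots.append(mul(roots[1], roots[1]))   # a^4
roots.append(mul(roots[2], roots[2]))   # a^8
print(roots, [poly(t) for t in roots])
```

The printed roots `[2, 4, 9, 14]` correspond to $a$, $a^2$, $a^3+1$, $a^3+a^2+a$, matching the Sage factorization.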
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Maximizing the Sum of Cubes I have ten variables, $x_1$ through $x_{10}$. $-1 \leq x_i \leq 1$, and $\sum x_i = 0$. What is the maximum of $\sum x_i^3$? I've tried to use Lagrange multipliers, but writing the restricted domain as a constraint seems, if not impossible, then very difficult.
You can write it in a standard form \begin{equation}\notag \begin{split} \min & -\sum_{i=1}^{10} x_i^3 \\ \mathrm{s.t.} \hspace{1ex}& x_i - 1 \leq 0 \\ & -1 -x_i \leq 0 \hspace{1ex} \forall i \in [1, 10]\\ & \sum_{i=1}^{10} x_i = 0 \\ \end{split} \end{equation} Then the Lagrangian will be \begin{equation} L(x,\lambda, \mu)=-\sum_{i=1}^{10} x_i^3+\sum_{i=1}^{10} \lambda_i (x_i-1) - \sum_{i=1}^{10} \mu_i (x_i+1) + \gamma \sum_{i=1}^{10} x_i \hspace{2ex} \lambda_i, \mu_i \geq 0 \end{equation}
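One can also explore the problem numerically. Under the guess (suggested by the KKT conditions but not proved here) that an optimum puts $k$ coordinates at $1$ and splits $-k$ equally among the rest, a scan over $k$ picks out a candidate, and a crude random search fails to beat it:

```python
import random

def family_value(k, n=10):
    """k coordinates at +1, the other n-k at -k/(n-k); feasible only if k/(n-k) <= 1."""
    t = k / (n - k)
    if t > 1:
        return float("-inf")
    return k - (n - k) * t**3

best_k = max(range(1, 10), key=family_value)
best = family_value(best_k)

# Crude random search: no feasible sample should beat the best family value.
random.seed(0)
exceeded = False
for _ in range(20000):
    x = [random.uniform(-1, 1) for _ in range(10)]
    m = sum(x) / 10
    x = [max(-1.0, min(1.0, xi - m)) for xi in x]   # recentre, then clip
    s = sum(x)
    x = [xi - s / 10 for xi in x]                   # restore sum = 0 (may nudge past bounds)
    if all(-1 - 1e-9 <= xi <= 1 + 1e-9 for xi in x):
        if sum(xi**3 for xi in x) > best + 1e-9:
            exceeded = True
print(best_k, best, exceeded)
```

The scan suggests the maximum is $3-\frac{27}{49}\approx 2.449$, attained with three coordinates at $1$ and seven at $-\frac37$; the heuristic search is evidence, not a proof.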
{ "language": "en", "url": "https://math.stackexchange.com/questions/3301790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Operator norm of translated operator Suppose $P$ is a compact operator in $L^2([0,1])$. Assume it's contractive in the sense that $\|P\|_{\mathrm{op}}<1$. For some $f$ in $L^2([0,1])$, define the operator \begin{equation}Tg(x):= f(x) +Pg(x).\end{equation} Is it true that $T$ remains a contraction? From my initial work, applying the triangle inequality to the right hand side of the definition suggests the norm of $f$ plays some role. \begin{equation}\|Tg(x)\|_{\mathrm{op}} = \sup_{g \in L^2[0,1]} \frac{\|f(x) +Pg(x)\|_{L^2}}{\|g\|_{L^2}}.\end{equation} However, intuitively it seems like it should be true as $g$ isn't being "stretched" any more than $P$.
Let's make a simple example. Let $P$ be the halving operator ($Pg = \tfrac12 g$, so $\|P\|_{\mathrm{op}} = \tfrac12 < 1$), $g$ be the constant $1$ function, and $f$ be the constant $2$ function. Then $$ \|T g\| = \|f + P g\| = \left\|2 + \tfrac{1}{2}\cdot 1\right\| = \frac{5}{2} > \|g\| = 1 \text{.} $$ The intuition is that if $f$ is "big", $f$ controls the norm of $Tg$. So then if $g$ is "small" compared to $f$, $T$ can't be contracting (in the sense expressed by the supremum expression you have written).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3302020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Boundary-value problem for linear hyperbolic system by Fourier series I am trying to solve the linear equations $$\partial_t \rho +\partial_x \varphi =0, \qquad \partial_t \varphi+\partial_x \rho = \alpha \rho +\beta \varphi,$$ where $\alpha$, $\beta$ are constants. The functions $\rho$, $\varphi$ are defined on $[0,T]\times [0,L]$. The initial conditions are $$\rho(0,x)=0, \qquad \varphi(0,x)=0$$ and the boundary conditions are $$\rho(t,0)=f(t), \qquad \varphi(t,L)=g(t).$$ I have been trying to obtain a Fourier series solution, but I always run into some sort of problem. Is it possible to obtain such a solution?
Applying $\partial_t$ to $\varphi_t + \rho_x = \alpha\rho + \beta\varphi$ and using $\rho_t = -\varphi_x$ leads to $$ \varphi_{tt} - \varphi_{xx} = \beta\varphi_t- \alpha\varphi_x \, . $$ The corresponding initial conditions are $\varphi(0,x) = 0$ and $\varphi_t(0,x) = 0$, and the boundary conditions are $\varphi_x(t,0) = -f'(t)$ and $\varphi(t,L) = g(t)$. Thus, there is a Neumann boundary condition at $x=0$ and a Dirichlet boundary condition at $x=L$. Using separation of variables $\varphi(x,t) = X(x) T(t)$, we have $$ \frac{T''}{T} - \beta\frac{T'}{T} = \frac{X''}{X} - \alpha \frac{X'}{X} = -\lambda \, , $$ i.e. $$ {T''} - \beta {T'} + \lambda T = 0 \qquad\text{and}\qquad {X''} - \alpha {X'} + \lambda X = 0 $$ for which the Fourier series approach could be applied. However, this might not be an easy task in the case where the boundary conditions are time-dependent (see e.g. this post, and chap. 4 of (1)). To tackle this case by using such an approach, one could expand the solution over the normal modes of the system. Alternatively, one may use the method of characteristics, but this might not be straightforward due to the right-hand side (if $\alpha\neq 0$ or $\beta\neq 0$). You may have a look at (1), chap. 12. Lastly, one may solve the problem by using Laplace transforms (see (1), chap. 13). (1) R. Haberman, Applied Partial Differential Equations: with Fourier Series and Boundary Value Problems, 5th ed. Pearson Education Inc., 2013.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3302106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
The derivative of $ \tan x$ is $ \sec^2 x$. Why? I understand why the derivative of $\sin x$ is $\cos x$, and why the derivative of $\cos x$ is $-\sin x$ in a geometric way. But I can not understand why the derivative of $\tan x$ is $\sec^2 x$. Can someone please explain this in a visual, geometric way using the unit circle?
First, look at the graph of $\tan(x)$. It has vertical asymptotes at odd multiples of $\dfrac{\pi}{2}$ and is undefined at $-\dfrac{\pi}{2}$ and $\dfrac{\pi}{2}$. Observe that for $-\dfrac{\pi}{2} < x < \dfrac{\pi}{2}$, the slope of $\tan(x)$ is always positive. Notice that it increases faster from $-\dfrac{\pi}{2} \leq x \leq -\dfrac{\pi}{4}$ and from $\dfrac{\pi}{4} \leq x \leq \dfrac{\pi}{2}$. Then, let's take a look at the graph of $\sec^2(x)$. Observe that $\sec^2(x)$ is always positive as the slope of $\tan(x)$ was always positive. Also observe that $\sec^2(x)$ has vertical asymptotes at odd multiples of $\dfrac{\pi}{2}$. If we zone in on $-\dfrac{\pi}{2} \leq x \leq \dfrac{\pi}{2}$, then we see that the value of $\sec^2(x)$ is greater as we approach $x=-\dfrac{\pi}{2}$ or $x=\dfrac{\pi}{2}$. This is because we can think of the derivative as slope and previously saw that the slope was greatest near the asymptotes. Therefore, it is natural for $\sec^2(x)$ to be the derivative of $\tan(x)$. The same technique will work for $\sin(x), \cos(x)$, and many others. If you are uncomfortable with the algebra then it is best to draw a function and its derivative on graph paper.
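The picture-based argument can be backed by a finite-difference check that the slope of $\tan x$ really matches $\sec^2 x$ away from the asymptotes:

```python
import math

# Compare the central-difference slope of tan with sec^2 at sample points
# inside (-pi/2, pi/2), away from the asymptotes.
h = 1e-6
pts = [-1.2, -0.5, 0.0, 0.4, 1.3]
max_err = max(abs((math.tan(x + h) - math.tan(x - h)) / (2 * h) - 1 / math.cos(x)**2)
              for x in pts)
print(max_err)
```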
{ "language": "en", "url": "https://math.stackexchange.com/questions/3302218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
How to find $x$ from expression? Solve for $x$ in the expression $$x(2x + 5) = 168.$$ I have already tried to move $x$ from one side to another with brackets and without them, like this: $x = 168;\quad 2x + 5 = 168 \implies 2x = 163 \implies x = 81.5$ but eventually without success. I know that the answer is $x = 8$ but I can't come to this solution. How to solve this?
If $f(x)g(x) = A$, that does not mean that $f(x)=A$ or $g(x) = A$. However, what you could do is expand as follows $$2x^2 + 5x = 168$$ then $$2x^2 + 5x - 168 = 0$$ then you have to realize that what you have is a quadratic equation, where normally you compute a quantity called the discriminant $$\Delta = b^2 - 4ac$$ for a quadratic equation $ax^2 + bx + c$. In your case, you have $a = 2, b=5, c=-168$ so $$\Delta = (5)^2 - 4(2)(-168) = 1369$$ Now since $\Delta > 0$, you have two distinct solutions, $$x_1 = \frac{-b - \sqrt{\Delta}}{2a}=\frac{-5 - \sqrt{1369}}{4}$$ $$x_2 = \frac{-b + \sqrt{\Delta}}{2a}=\frac{-5 + \sqrt{1369}}{4}$$ Since $\sqrt{1369}=37$, this gives $x_1 = -\frac{21}{2}$ and $x_2 = 8$, the answer you expected.
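The arithmetic can be machine-checked in a few lines (note $\sqrt{1369}=37$):

```python
a, b, c = 2, 5, -168
disc = b * b - 4 * a * c
root = disc ** 0.5
x1 = (-b - root) / (2 * a)
x2 = (-b + root) / (2 * a)
print(disc, root, x1, x2)
```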
{ "language": "en", "url": "https://math.stackexchange.com/questions/3302325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
For independent r.v. the $\liminf X_n$ is almost surely constant I need to show that for independent real-valued r.v. $(X_n)_{n\in \mathbb{N}}$ the $X_*:= \liminf_\limits{n\rightarrow \infty}X_n$ is almost surely constant with $X_*\in\mathbb{R}\cup\{-\infty,+\infty\}$. A r.v. $X$ is constant iff $X(\omega)=c$ for $\omega \in \Omega$ and $c\in\mathbb{R}\cup\{-\infty,+\infty\}$. This means that I need to show that $$ \exists c\in\mathbb{R}\cup\{-\infty,+\infty\}: P(\liminf_\limits{n\rightarrow \infty}X_n=c)=1 $$ We know that for the r.v. on $(\Omega,\mathcal{F})$ and $ \sigma(X_k:k\geq n):=\sigma((X^{-1}B_k: B_k\in \mathcal{F}_k, k\geq n)$ the Terminal $\sigma$-Algebra is defined as $$ \mathcal{A}(X_k:k\geq 1):=\bigcap_{n\geq 1}\sigma(X_k:k\geq n) $$ I further know that for Events $A_n$ $$ \liminf_\limits{n\rightarrow \infty}A_n:=\bigcup_{m\geq1}\bigcap_{k\geq m}A_k $$ I am not quite sure from where to "attack" this problem. If $c\in\mathbb{R}$ than $\{X_n=c\} = \{\omega \in \Omega:X_n(\omega)=c\}:=A_n$ and since $X_n$ are independent $A_n$ are also independent. It is also clear to me that $\liminf_\limits{n\rightarrow \infty}A_n \in \sigma(1_{A_n}:k\geq n).$ Where do I go from here? Since $A_n$ are independent I know that $$ \liminf_\limits{n\rightarrow \infty}A_n:=\bigcup_{m\geq1}\bigcap_{k\geq m}A_k= \bigcup_{m\geq1}\sum_{k\geq m}A_k $$ I don't know how to go further. Doesn't Kolmogorov's $0$-$1$-law tell me that $P(\liminf_\limits{n\rightarrow \infty}X_n=c)=0$ or $P(\liminf_\limits{n\rightarrow \infty}X_n=c)=1$? But how do I show the latter?
The event $\{X_{*} \leq x\}$ belongs to $\sigma (X_k,X_{k+1},...)$ for each $k$. By the 0-1 law, $P(X_{*} \leq x) =0$ or $1$ for each $x$. The distribution function $F$ takes only the values $0$ and $1$, and so it jumps from $0$ to $1$ at some point $c$ ($c=\sup \{t: F(t) =0\}$). It follows that $X_{*}=c$ with probability $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3302568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area of parallelogram = Area of square. Shear transform Below the parallelogram is obtained from square by stretching the top side while fixing the bottom. Since area of parallelogram is base times height, both square and parallelogram have the same area. This is true no matter how far I stretch the top side. In below figure it is easy to see why both areas are same. But it's not that obvious in first two figures. Any help seeing why the area doesn't change in first figure?
Behold, a proof without words. (The original answer consisted of a single figure, which is not reproduced here.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3302853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 9, "answer_id": 6 }
Distance between two circles in a sphere I have a sphere with radius $R$ and $O$ is the origin. Inside the sphere there are 3 circles. The small circle in black colour is fixed, with its position defined by an angle $\alpha$; of the other two circles one is a great circle and the other a small circle, and they can rotate while maintaining the same distance, always staying parallel to each other. Both blue circles intersect the black great circle at points A and B. The rectilinear distance from A to B can be calculated using this relation: $AB = 2R\sin {\theta\over2}$ Orientation of the upper blue great circle is defined as $\beta$, which is the angle starting from the $z$ axis. The range of $\beta$ can vary from $0^\circ - 360^\circ$ and it is a known value. Though both my blue coloured circles are parallel to each other, is it possible to get the lower circle's orientation angle from the centre of the sphere that respects $\theta$ as it is the rectilinear distance? We always assume that both blue circles intersect the black circle.
According to the comments on the question, I will start by assuming that $\alpha$ and $\beta$ are given. Let's name some additional points on the sphere. Let $C$ be the point $(0,0,R)$ where the positive $z$ axis intersects the sphere. The blue great circle intersects the $y,z$ plane in two points; from those two points choose the one closer to $A$ and call it $D.$ The arrangement of points and arcs is shown in a schematic arrangement below. (This is not exactly to scale, because it is drawn as a plane figure while the actual figure is on the surface of a sphere.) The figure below shows what it looks like on the sphere. Note that all arcs that are labeled with symbols ($\alpha,$ $\beta,$ $\gamma,$ $\delta,$ and $\theta$) are great-circle arcs, and the symbol is the name of the angle subtended by that arc at the center of the sphere. Now we have a spherical triangle $\triangle ADC.$ Following the usual convention where the "sides" of the spherical triangle are measured by the angle (in radians) that they subtend at the center of the sphere, side $AC$ has measure $\alpha$ and side $CD$ has measure $\beta.$ The spherical angle $\angle ADC$ is $\frac\pi2.$ Let $\eta = \angle CAD.$ According to the spherical law of sines, $$ \frac{\sin\frac\pi2}{\sin\alpha} = \frac{\sin\eta}{\sin\beta}.$$ But $\sin\frac\pi2 = 1.$ Therefore $\sin\eta = \frac{\sin\beta}{\sin\alpha}$ and $$\eta = \arcsin\left(\frac{\sin\beta}{\sin\alpha}\right).$$ For now, let's consider the variant of the problem in which $\theta$ is known and $\gamma$ is unknown. Since $\theta$ is known, we can construct two points on the black small circle that subtend angle $\theta$ from $A$. Let the point labeled $B$ in the figure be one of those points. The spherical triangle $\triangle ABC$ has sides of measure $\alpha$ opposite $A$ and $B$ and a side of measure $\theta$ opposite $C.$ Let $\zeta = \angle BAC.$ By the spherical law of cosines, $$ \cos\alpha = \cos\alpha \cos\theta + \sin\alpha \sin\theta \cos\zeta. 
$$ Solving for $\zeta,$ $$ \zeta = \arccos\left(\frac{1 - \cos\theta}{\sin\theta}\cot\alpha\right). $$ From $B,$ construct an arc on the sphere to the closest point on the blue great circle. Call that point $E.$ The arc from $B$ to $E$ meets the blue great circle in a right angle. So we now have a spherical triangle $\triangle ABE$ with a right angle at $E.$ Note that the side $AB$ of this triangle has angular measure $\theta,$ but it is not the arc shown in the diagram. The arc in the diagram is an arc of a small circle, whereas the side of a spherical triangle is always a great circle. Let $\delta$ be the measure of the arc $BE.$ Let $\phi = \angle BAE.$ Then $\phi = \zeta - \eta,$ where $\zeta$ and $\eta$ have already been computed. Now we have another spherical triangle, this time with a right angle $\angle AEB.$ Applying the spherical law of sines again, $$ \frac{\sin\frac\pi2}{\sin\theta} = \frac{\sin\phi}{\sin\delta}.$$ Solving for $\delta,$ $$ \delta = \arcsin(\sin\theta \sin\phi). $$ The distance between $B$ and $E$ is also the distance between the two points where the blue circles intersect the $y,z$ plane, namely $D$ and $F$, the point on the blue small circle closest to $D$. Hence $$ \gamma = \beta + \delta. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Plugging inequalities into equations Let's say I have the inequality $3f \leq 2m$. If I wanted to plug this into the equation $n-m+f=2$, how would I do that? Where would I begin?
I assume that you can replace the $=$ sign with a $≤$ (less than OR equal to), so $n-m+f≤2$. Adding this to $3f \leq 2m$ (i.e. $3f-2m\leq 0$) gives $n-3m+4f≤2$. If you're not sure of this, use integers to test out this theory!! Ex: $3≤3$ and $6≤7$ so $9≤10$. Notice this only works when both inequalities point the same way ($≤$).
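The rule being used (same-direction inequalities can be added) is easy to spot-check on random values satisfying both hypotheses:

```python
import random

random.seed(1)
ok = True
for _ in range(10000):
    m = random.uniform(-10, 10)
    f = 2 * m / 3 - random.uniform(0, 10)   # ensures 3f <= 2m
    n = 2 + m - f                           # ensures n - m + f = 2
    if not (n - 3 * m + 4 * f <= 2 + 1e-9):
        ok = False
print(ok)
```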
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are Hilbert Schmidt integral operators on separable compact Hausdorff spaces in the Hilbert Schmidt class? Let $X$ be a compact separable Hausdorff space with a positive Borel measure $\mu$. Assume $L^2(X)$ is separable. Consider a function $K: X \times X \to \mathbb{C}$ with $K \in L^2(X \times X , \mu \otimes \mu)$. We denote the integral operator $\tilde{K}$ on $L^2(X)$ whose kernel is $K$, that is, for every $f \in L^2(X)$ we define \begin{align} \tilde{K}f(x) = \int_X K(x,y)f(y)\mu(dy). \end{align} Then is the operator $\tilde{K}$ in the Hilbert Schmidt class, that is, for every orthonormal basis $\{e_n\}$ on $L^2(X)$, \begin{align} \sum_{n}(\tilde{K}^{\ast}\tilde{K}e_n,e_n) = \int_{X \times X}|K(x,y)|^2\mu \otimes \mu(dx,dy)< \infty? \end{align} I know this holds when $X = \mathbb{R}$ and $\mu$ is the Lebesgue measure. But I couldn't find the assertion for general settings. I would appreciate it if you could give me counterexamples or conditions for $X$ and $\mu$ under which the assertion holds.
This is true and, in fact, the converse is also true: every Hilbert-Schmidt operator on $L^{2}(\mu)$ is of this type for some $K$. Reference: Theorem VI.23, p. 210, Functional Analysis, Vol 1 by Reed and Simon.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find the overall visible area of overlapping 3 or more Octagons? I have 3 Octagons of same size. I know the coordinates of their centers and their side length. How can I calculate the overall area? (Shaded area in the image) In the image above, the area comes out to be 1343.73 sq units, calculated via a CAD software. I'd like to know how to calculate that manually? I need to make an algorithm.
You have enough information to find the vertices of each octagon. A search for overlapping polygons area finds many links to versions of the polygon clipping problem. Something there might solve your problem. https://www.google.com/search?client=ubuntu&channel=fs&q=overlapping+polygons+area&ie=utf-8&oe=utf-8
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compactness and metric space I know that if $A$ and $B$ are compact then there exists $(a,b)\in A\times B$ with $d(a,b) = d(A,B)$. I want to find an example where this is not true if $A$ is compact and $B$ is closed. I put $A=[1,2]$ and $B=]-\infty,0[ $ in $\mathbb{R}^*=]-\infty,0[\cup ]0,+\infty[$. Is it correct? Here $B$ is closed but not bounded, so it is not compact, right? And $d(A,B)=1$ but $d(a,b)>1$ for every pair; is that true?
Yes, this is correct. In fact, you could have taken $B=[-1,0[$. While it is not closed in $\mathbb{R}$, it is closed in your $\mathbb{R}^*$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$2$ out of $4$ points, each of distance at most $1$ apart, are at most $1/\sqrt2$ apart Given $4$ points in the plane such that any pair of them are a distance of at most $1$ apart, show that some pair of them must be of distance at most $1/\sqrt{2}$ apart. I figured the solution might involve a pigeonhole argument, but as my geometry is rusty I'm having trouble turning the "distance of at most $1$" condition into a condition that bounds all $4$ points in a region that can then be subdivided into $3$ regions in which every point must be within $1/\sqrt{2}$. Or that might be entirely on the wrong track.
By eyeballfrog's comment, if there's an arrangement of four points such that all pairwise distances lie in the interval $(1/\sqrt{2}, 1]$, then by scaling, we can guarantee that two are a distance $1$ apart. Let these points be $A$ and $B$. The diagram shows circles of radius $1/\sqrt{2}$ and $1$ with centers $A$ and $B$. The remaining two points must be in the curvilinear regions $CDEF$ and $C'D'E'F'$, excluding the inner arcs of radius $1/\sqrt{2}$. They can't be in different regions, as $ACBC'$ is a square with side-length $1/\sqrt{2}$, so $CC' = 1$, and between any two points in different regions there is a vertical distance of at least $CC'$. But they can't be in the same region either. The triangle $ABE$ is equilateral, so $EC = \sqrt{3}/2 - 1/2$. Applying the law of cosines to the triangle $ABD$ gives $$AD^2 = AB^2 + BD^2 - 2 AB \cdot BD \cos \angle ABD.$$ As $AB = BD = 1$ and $AD = 1/\sqrt{2}$, this gives $\cos \angle ABD = BG = 3/4$ and thus $DF = 1/2$ (because $DF \parallel AB$). Therefore, $CDEF$ can be contained in a rectangle with sides through $C, D, E, F$ and parallel to $CE$ and $DF$. This rectangle has diagonal length $\sqrt{CE^2 + DF^2} < 1/\sqrt{2}$, so any two points in $CDEF$ (and, by symmetry, in $C'D'E'F'$) are less than $1/\sqrt{2}$ apart.
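As a numeric sanity check of the final step (my own addition, using the lengths derived above): the bounding rectangle of $CDEF$ has sides $CE = \sqrt{3}/2 - 1/2$ and $DF = 1/2$, and its diagonal is indeed shorter than $1/\sqrt{2}$.

```python
import math

CE = math.sqrt(3) / 2 - 1 / 2    # from the equilateral triangle ABE
DF = 1 / 2                       # from the law of cosines in triangle ABD
diagonal = math.hypot(CE, DF)    # diagonal of the bounding rectangle

print(diagonal)                  # about 0.6197, below 1/sqrt(2) ~ 0.7071
assert diagonal < 1 / math.sqrt(2)
```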
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
antiderivatives of $\frac1{x-1}$ I was told to find the anti-derivative of this problem: $$\int\frac{2x^2-13x+18}{x-1} dx$$ I solved the problem - first dividing and then finding the anti-derivative: $$x^2-11x+7(\ln(x-1))+c$$ The given answer was the same as mine, but they put the $x-1$ in $\ln(x-1)$ in absolute value - $\ln|x-1|$. Why?
Hint: We have $$\int\frac{1}{x}\,dx=\ln(|x|)+C.$$
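To see why the absolute value appears: for $x<0$, $\frac{d}{dx}\ln(-x)=\frac{1}{-x}\cdot(-1)=\frac{1}{x}$, so $\ln|x|$ is an antiderivative of $1/x$ on both halves of the domain. A finite-difference check (my own illustration, not part of the original hint):

```python
import math

def log_abs(x):
    return math.log(abs(x))

h = 1e-6
for x in (-3.0, -0.5, 0.5, 3.0):       # points on both sides of zero
    slope = (log_abs(x + h) - log_abs(x - h)) / (2 * h)
    assert math.isclose(slope, 1 / x, rel_tol=1e-6)
print("d/dx ln|x| = 1/x on both halves of the domain")
```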
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Finding the maximum without differentiation I have this problem that needs to be solved as if I was a GCSE student. After cutting a square of length $x$ from each corner, the volume of the open box would be $V= x(10-2x)(10-2x)$. We want to find the value of $x$ so that this volume is maximised. The ordinary solution would be to differentiate the cubic expression and find that the maximum point is at $x=\frac{5}{3}$. However, only knowledge up to KS4 (UK Year 11) may be used to solve it. Would the only appropriate approach be to use an iterative method (trial and error) to obtain an approximate value?
Note that by AM-GM inequality, $$4V=4x(10−2x)(10−2x)\leq \left(\frac{4x+(10−2x)+(10−2x)}{3}\right)^3=\left(\frac{20}{3}\right)^3.$$ Equality holds, and the maximum value of $V$ is attained, when $4x=10-2x$, i.e. for $x=5/3$. P.S. Actually what you need is just a particular case of AM-GM inequality: for $a,b\geq 0$, $$ab^2\leq \left(\frac{a+2b}{3}\right)^3 \tag{1}$$ that is, after some standard calculations, equivalent to $$(a+8b)(a-b)^2\geq 0$$ which holds. Equality is satisfied when $a=b$. Now apply $(1)$ with $a=4x$ and $b=10-2x$.
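A quick numeric confirmation (my own addition): the bound above gives $V \le \frac14\left(\frac{20}{3}\right)^3 = \frac{2000}{27}$, and sweeping $x$ over $(0,5)$ never beats the value at $x = 5/3$.

```python
def V(x):
    return x * (10 - 2 * x) ** 2

v_max = V(5 / 3)
assert abs(v_max - 2000 / 27) < 1e-9        # the AM-GM bound, attained

# sample the admissible interval (0, 5); nothing should exceed V(5/3)
assert all(V(k / 2000) <= v_max + 1e-9 for k in range(1, 10000))
print(v_max)   # about 74.07
```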
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that $\sqrt{2+\sqrt{3}} = \dfrac{\sqrt{6}+\sqrt{2}}{2}$ $\sqrt{2+\sqrt{3}} = \dfrac{\sqrt{6}+\sqrt{2}}{2}$ How to change $\sqrt{2+\sqrt{3}}$ into $\dfrac{\sqrt{6}+\sqrt{2}}{2}$
Hint: $$\left( \frac{\sqrt{6} + \sqrt{2}}{2}\right)^2 = \frac{6 + 2 \sqrt{6} \sqrt{2} + 2}{4} = ?$$
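Finishing the arithmetic, the square is $\frac{8+4\sqrt3}{4}=2+\sqrt3$, and since both sides are positive the identity follows. A one-line numeric confirmation (my own addition):

```python
import math

lhs = math.sqrt(2 + math.sqrt(3))
rhs = (math.sqrt(6) + math.sqrt(2)) / 2
assert math.isclose(lhs, rhs)
print(lhs)   # about 1.93185
```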
{ "language": "en", "url": "https://math.stackexchange.com/questions/3303949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Evaluating $\sqrt{a\pm bi\sqrt c}$ I recently encountered this problem $$\sqrt{10-4i\sqrt{6}}$$ To which I set the solution equal to $a+bi$; squaring both sides leaves $${10-4i\sqrt{6}}=a^2-b^2+2abi$$ Obviously $a^2-b^2=10$ and $2abi=-4i\sqrt{6}$, and using guess and check, the solution is $a=\sqrt{12}, b=\sqrt2$ But I was wondering if there is a faster way to solve these types of problems or a method that doesn't involve guess and check (since it can get tedious at times) for the basic form $$\sqrt{a\pm bi\sqrt c}=x+yi$$ Where you're solving for $x$ and $y$. I've attempted but failed since it gets pretty messy. Any formulas will be very much appreciated. (not all equations of that form can reduce)
We have that $$a^2-b^2=10 \quad \textrm{and} \quad 2ab=-4\sqrt 6$$ Now, squaring both equalities and adding them up, we get $$(a^2+b^2)^2=(a^2-b^2)^2 +(2ab)^2=10^2+(-4\sqrt 6)^2 =196$$ $$\Rightarrow \quad a^2+b^2=14$$ and using again the first equality we obtain $$a^2=12 \quad \textrm{and} \quad b^2=2$$ or $$a=\pm 2\sqrt 3 \quad \textrm{and} \quad b=\pm \sqrt 2$$ but $ab<0$ (by the second equality), that is, $a$ and $b$ have opposite signs. Thus, the solutions are given by $$a= 2\sqrt 3 \quad \textrm{and} \quad b=- \sqrt 2$$ or $$a=-2\sqrt 3 \quad \textrm{and} \quad b=\sqrt 2$$
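A direct check of the two solutions with Python's complex arithmetic (my own addition):

```python
import cmath

target = 10 - 4j * cmath.sqrt(6)
roots = (2 * cmath.sqrt(3) - 1j * cmath.sqrt(2),
         -2 * cmath.sqrt(3) + 1j * cmath.sqrt(2))
for root in roots:
    assert abs(root ** 2 - target) < 1e-9
print("both candidates square to 10 - 4i*sqrt(6)")
```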
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
On the inclusion homomorphism $\mathbb Z\to\mathbb Q$ This question is about the group homomorphism $i: \mathbb Z\to\mathbb Q$ given by $m\mapsto m$. 1) Is the following proof of the fact that the arrow $i$ in $\mathbf {Ab}$ is epic correct? Let $h,h':\mathbb Q\to G$ be group homomorphisms. Suppose $h\circ i=h'\circ i$. Then for all $n$, $h(i(n))=h'(i(n))$, i.e., $h(n)=h'(n)$ since $i(n)=n$. Thus $i$ is epic. 2) The group homomorphism $i: \mathbb Z\to\mathbb Q$ given by $m\mapsto m$ is known to be a monic that is not split in $\mathbf {Ab}$. Why is that? Assume that it is split monic. Then there is a group homo $l:\mathbb Q\to \mathbb Z$ such that $l\circ i=1_\mathbb Z$. What does this contradict to? For any $m\in \mathbb Z$ this says that $l(i(m))=m$, i.e., $l(m)=m$. So if $l$ exists, then it $l\restriction_\mathbb Z:\mathbb Z\to \mathbb Z$ must be given by $m\mapsto m$. So I guess the statement that needs to be proved is that there does not exist a group homomorphism $l:\mathbb Q\to\mathbb Z$ such that $l(m)=m$ for all $m\in\mathbb Z$. How to show that?
The inclusion $\mathbb{Z} \hookrightarrow \mathbb{Q}$ is not epic in $\mathbf{Ab}$, but it is epic in $\mathbf{Ring}$ as well as in the category of torsion-free abelian groups. Also, there is no nonzero homomorphism from $\mathbb{Q}$ to $\mathbb{Z}$, because the image of any rational number would have to be an integer that is divisible by every nonzero integer, and no nonzero integer is divisible by any integer with a larger absolute value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Maximal ideal and not algebraically closed field This is a question concerning maximal ideals in a polynomial ring over a non-algebraically closed field k. First is the example inspired for this question: as a standard exercise, it is easy to show that $\langle x^2+1\rangle$ is a maximal ideal in $\mathbb{R}[x]$ (Briefly speaking, let $J$ be an ideal s.t. $I\subset J$ and take $f\in J\setminus I$, divide $f$ by $x^2+1$ to obtain $f(x)=q(x)(x^2+1)+(ax+b)$, then one can show that $a^2+b^2\in J$ (see here, for example), so $J=\mathbb{R}[x]$) Next, consider the ideal $\langle x_1^2+1, x_2,\cdots ,x_n\rangle \subseteq \mathbb{R}[x_1,\cdots ,x_n]$. This is a maximal ideal (use the exact same argument above) Now, we can generalize things. If $k$ is non-algebraically closed field, we should be able to construct a maximal ideal in $k[x_1,\cdots ,x_n]$ using the logic above. Namely, pick $f\in k[x_1]$ s.t. it has no root in $k$, then consider the ideal $\langle f,x_2,\cdots ,x_n\rangle$. Our inspiration tells us that this ideal should be maximal. However, I have a hard time proving it. As before, let $J$ be an ideal s.t. $I\subset J$ and take $g\in J\setminus I$. Using the multivariable division algorithm to divide $g$ by $(f,x_1,\cdots ,x_n)$, we obtain $g=q_1f+q_2x_2+\cdots +q_nx_n+r$ with $r\in k[x_1]$. A simple observation shows that $r\in J$. But the biggest problem is I don't see any nice trick to use to show that $J$ contain some nonzero constant (and hence $J=k[x_1,\cdots ,x_n]$) Any idea?
The Hilbert Nullstellensatz for non-algebraically closed fields describes all maximal ideals $I$ of $R=k[x_1,\ldots,x_n]$. It states that $R/I$ is a finite field extension of $k$. This means that $I$ is the kernel of a $k$-algebra homomorphism $\phi:R\to k^{\text{alg}}$. Such a map is given by $\phi(f(x_1,\ldots,x_n))=f(a_1,\ldots,a_n)$ with the $a_i\in k^{\text{alg}}$. Then $I$ is the intersection of $R$ with the ideal of $k^{\text{alg}}[x_1,\ldots,x_n]$ generated by the $x_i-a_i$. Your example is the case $(a_1,\ldots,a_n)=(a,0,\ldots,0)$. Then $I=\langle f(x_1),x_2,\ldots,x_n\rangle$ where $f$ is the minimum polynomial of $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does the logarithm satisfy any differential equation? The exponential function $\exp:\mathbb{R}\to\mathbb{R}_+$ satisfies the differential equation $f^\prime(x)=f(x)$. Does the logarithm $\log:\mathbb{R}_+\to\mathbb{R}$ also satisfy any differential equation? Curious about if this gets closed immediately or not... seems like a way too basic question, but I have been totally stuck with this since yesterday.
Here's a fun way to see the relationship between the definition for the exponential function and the "obvious" answer: Suppose $y = e^x$. By definition of the exponential function, this means that $dy/dx = y$. Now view $x(y)$ as a function of $y$, i.e., $x$ is the inverse of the exponential function. Rearranging the relationship between the differentials above, we have $$ \frac{dy}{dx} = y \quad \Rightarrow \quad \frac{dx}{dy} = \frac{1}{y}. $$ Thus, the inverse of the exponential function obeys the differential equation $x'(y) = 1/y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 3 }
Does $T$ being a bounded linear operator necessarily imply weak sequential continuity? Let $X,Y$ be Banach spaces, show that $x_{n} \xrightarrow{w} x$ and $T \in BL(X,Y)\Rightarrow Tx_{n} \xrightarrow{w} Tx$ Question: Does sequential continuity (which $T$ clearly has) necessarily imply that $T$ is weak-sequentially continuous? If so, then the above is trivial. Otherwise: $\vert\ell(Tx_{n})-\ell(Tx)\vert=\vert\ell(Tx_{n}-Tx)\vert=\vert T^{*}\ell(x_{n}-x)\vert\xrightarrow{n\to \infty} 0$ since $T^{*}\ell \in X^{*}$. I am somewhat unsure about this, since I have not used boundedness of $T$ anywhere.
If $y^{*} \in Y^{*}$ then $x^{*}(x)=y^{*}(Tx)$ defines a continuous linear functional on $X$. Hence $y^{*}(Tx_n)\to y^{*}(Tx)$. This implies that $Tx_n \to Tx$ weakly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why the normalizer of the Sylow $p$-subgroups of the symmetric group of degree $p$ has order $p(p-1)$ and is known as Frobenius group $F_{p(p-1)}$? Why the normalizer of the Sylow $p$-subgroups of the symmetric group of degree $p$ has order $p(p-1)$ and is known as Frobenius group $F_{p(p-1)}$? I am trying to understand the statements on Wikipedia about Sylow subgroups of the symmetric group, where the above statement has been made. Of course, the Sylow $p$-subgroup $C_p$ of $S_p$ is a normal subgroup of a group of order $p(p-1)$ by Sylow theorems. But how to show that it is the maximal subgroup normalizing $C_p$ in $S_p$? Moreover, what is the relation between this normalizer and the Frobenius group $F_{p(p-1)}$? Thank you.
My approach is more combinatorial than algebraic. Any group isomorphic to $(\mathbb Z_p)^n$ can be written as a permutation group on $p^n$ points, with any Frobenius complement a group that fixes exactly one of these points, therefore, maximally, a group of order $p^n - 1$. It can be shown that this maximal group always exists. Of course, this is not the maximal normalizer of the $(\mathbb Z_p)^n$ group in the symmetric group on $p^n$ letters (though it is for $n = 1$); the other normalizers, however, fix more than one point and therefore do not serve as Frobenius complements.
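The order $p(p-1)$ of the normalizer can be verified by brute force for a small prime (my own addition, not part of the answer). For $p=5$, conjugation-testing all $120$ elements of $S_5$ against the cyclic group generated by a $5$-cycle:

```python
from itertools import permutations

p = 5

def compose(a, b):                 # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(p))

def inverse(a):
    inv = [0] * p
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

c = tuple((i + 1) % p for i in range(p))       # the 5-cycle (0 1 2 3 4)
H, g = set(), tuple(range(p))
for _ in range(p):                              # H = <c>, a Sylow 5-subgroup
    H.add(g)
    g = compose(c, g)

normalizer = [s for s in permutations(range(p))
              if {compose(compose(s, h), inverse(s)) for h in H} == H]
print(len(normalizer))                          # 20 = p*(p-1)
assert len(normalizer) == p * (p - 1)
```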
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Cross-ratio concept from Euclidean Geometry I have seen on the following Wikipedia website the definition of the "cross-ratio." Apparently, if four lines passing through a common point $P$ are traversed by a line and the points of intersection "in one direction" are A, B, C, and D, their cross-ratio is \begin{equation*} [A, B; C, D] = \frac{\mathit{AC}\cdot \mathit{BD}}{\mathit{AD} \cdot \mathit{BC}} . \end{equation*} Why is this quantity defined this way? Is there supposed to be a semicolon in this notation? If another line intersects the same four lines passing through $P$ at $A^{\prime}$, $B^{\prime}$, $C^{\prime}$, and $D^{\prime}$ "in the same direction," \begin{equation*} [A, B; C, D] = [A^{\prime}, B^{\prime}; C^{\prime}, D^{\prime}] . \end{equation*} What is a proper explanation for this? What happens if the points $A^{\prime}$, $B^{\prime}$, $C^{\prime}$, and $D^{\prime}$ are "in the opposite direction" with respect to $A$, $B$, $C$, and $D$? https://en.wikipedia.org/wiki/Cross-ratio Can someone recommend a textbook in Euclidean Geometry that has a discussion of the cross-ratio?
The cross-ratio of four points $A,B,C$ and $D$ on a line $l$ in Euclidean space is defined as the ratio of the ratios $AC:BC$ and $AD:BD$. Simplifying the double quotient yields $$ [A,B;C,D] = \frac{\frac{AC}{BC}}{\frac{AD}{BD}} = \frac{AC\cdot BD}{AD \cdot BC}.$$ The semicolon in Wikipedia's notation indicates the different roles of $A,B$ and $C,D$ here. Now, if $P$ is a fifth point with distance $d$ to $l$, we can express $[A,B;C,D]$ via the areas of the triangles $[PAC]$, $[PBC]$, $[PAD]$ and $[PBD]$: $$ [A,B;C,D] = \frac{\frac{AC \cdot d}{2} \cdot \frac{ BD \cdot d}{2}}{\frac{AD \cdot d }{2}\cdot \frac{BC \cdot d}{2}} = \frac{[PAC] \cdot [PBD]}{[PAD] \cdot [PBC]},$$ which by the sine area theorem can be written as $$ [A,B;C,D] = \frac{PA \cdot PC \cdot \sin(PA,PC) \cdot PB \cdot PD \cdot \sin(PB,PD)}{PA \cdot PD \cdot \sin(PA,PD) \cdot PB \cdot PC \cdot \sin(PB,PC)} = \frac{\sin(PA,PC) \cdot \sin(PB,PD)}{\sin(PA,PD) \cdot \sin(PB,PC)},$$ where $\sin(PX,PY)$ denotes the sine of the oriented angle between the lines $PX$ and $PY$. Remarkably, this expression depends only on the angles between the four lines $PA$, $PB$, $PC$ and $PD$, and not at all on how $l$ intersects them. Therefore, if $A', B', C'$ and $D'$ are four different points on $PA,PB,PC$ and $PD$ on another line $l'$, we will have $$[A,B;C,D] = \frac{\sin(PA,PC) \cdot \sin(PB,PD)}{\sin(PA,PD) \cdot \sin(PB,PC)} = \frac{\sin(PA',PC') \cdot \sin(PB',PD')}{\sin(PA',PD') \cdot \sin(PB',PC')} = [A',B';C',D'].$$ Note that if $A$ and $A'$ lie on different sides of $P$, both $\sin(PA,PC)$ and $\sin(PA,PD)$ change sign, thus leaving the cross-ratio invariant. A very practically oriented chapter on Euclidean cross-ratios can be found in Evan Chen's "Euclidean Geometry in Mathematical Olympiads". Of course, one can study projective geometry much more abstractly, which is detailed in most textbooks on linear algebra.
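A small numeric illustration of the invariance (my own construction, not from the answer): four lines through $P=(0,2)$ cut the transversal $y=0$ and a second transversal $y=0.3x+0.5$ in quadruples with the same cross-ratio. For collinear points on a non-vertical line, signed $x$-coordinate differences are proportional to signed distances, so they suffice for the ratio.

```python
def cross_ratio(a, b, c, d):
    # signed cross-ratio [A,B;C,D] of four collinear points via coordinates
    return ((c - a) * (d - b)) / ((d - a) * (c - b))

P_y = 2.0
feet = [1.0, 2.0, 4.0, 7.0]        # where the four lines through P meet y = 0

def hit(x_foot, slope, intercept):
    # intersect the line from P=(0,2) to (x_foot,0) with y = slope*x + intercept;
    # parametrize the line as (t*x_foot, 2-2t) and solve for t
    t = (P_y - intercept) / (P_y + slope * x_foot)
    return t * x_foot              # x-coordinate is enough for collinear points

r1 = cross_ratio(*feet)                                  # on y = 0
r2 = cross_ratio(*(hit(x, 0.3, 0.5) for x in feet))      # on y = 0.3x + 0.5
assert abs(r1 - r2) < 1e-9
print(r1)   # 1.25 on both transversals
```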
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expected Profit from a Lottery Ticket Ahmed is playing a lottery game where he must pick 2 numbers from 0 to 9 and then one letter out of the 26-letter English alphabet. He may choose the same number both times. If his ticket matches the 2 numbers and 1 letter drawn in order, he wins the grand prize and receives \$10,405. If just his letter matches but one or both of his numbers do not match, he wins the small prize of \$100. Under any other outcome he loses and receives nothing. The game costs him \$5 to play. He has chosen the ticket 04R. Assuming that he paid the \$5 and he picked the ticket 04R. The solution to the problem is found in this video: https://www.youtube.com/watch?v=6vlBOHckmzU&feature=youtu.be In the video, the probability of getting the small prize is (1/26) - (1/2600). Shouldn't the probability of winning the small prize be the sum of the probabilities of getting the letter right and 1 number wrong + getting the letter right and 2 numbers wrong: (1/26)(9/10)(1/10) + (1/26)(9/10)(9/10)?
Yes, but you are missing something. Can you figure it out? $$2\cdot \frac{1}{26}\cdot\frac{9}{10}\cdot\frac{1}{10} + \frac{1}{26}\cdot\frac{9}{10}\cdot \frac{9}{10} $$
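For completeness (my own addition): the term the asker's sum is missing is the case where the first number matches and the second does not, $\frac{1}{26}\cdot\frac{1}{10}\cdot\frac{9}{10}$; the corrected total is $\frac{99}{2600} = \frac{1}{26}-\frac{1}{2600}$. This can be confirmed by enumerating all $10\cdot10\cdot26=2600$ equally likely draws:

```python
from itertools import product
from string import ascii_uppercase

ticket = ("0", "4", "R")
grand = small = 0
for draw in product("0123456789", "0123456789", ascii_uppercase):
    if draw == ticket:
        grand += 1
    elif draw[2] == ticket[2]:     # letter matches, numbers not both right
        small += 1

assert grand == 1 and small == 99          # P(small) = 99/2600 = 1/26 - 1/2600
assert abs(small / 2600 - (2 * (1/26) * (9/10) * (1/10)
                           + (1/26) * (9/10) ** 2)) < 1e-12
expected_profit = (10405 * grand + 100 * small) / 2600 - 5
print(expected_profit)    # about 2.81 dollars per ticket
```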
{ "language": "en", "url": "https://math.stackexchange.com/questions/3304959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Conditions required for Galois Correspondence This question is based on Theorem 10.2 of John Howie's 'Fields and Galois Theory', p.171. All fields have characteristic zero. $M$ is a normal radical field extension of $K$, i.e. $M=K(\alpha_1,\alpha_2,...,\alpha_n)$ where $\alpha_i^{p_i}\in K(\alpha_1,...\alpha_{i-1})$ for some prime $p_i$. We also define $P=M(\omega)$ where $\omega$ is a primitive $p_i$th root of unity. Howie points out that, as $P$ is a splitting field for $X^{p_i}-1$ over $M$, $P/M$ is a normal extension of $M$. So far so good. As we have $M/K$ normal then, using the Fundamental Theorem of Galois Theory (FTGT), we have $\text{Gal}(P/M)\vartriangleleft \text{Gal}(P/K)$. Also by FTGT we have the following isomorphism: $$\text{Gal}(M/K)\simeq\frac{\text{Gal}(P/K)}{\text{Gal}(P/M)}$$ However, in order for the FTGT to apply, I understand that we also require $P$ to be a normal extension of $K$. Howie does not seem to be explicit about why this condition holds. The only strategy I know would be to show that $P=K(\omega,\alpha_1,\alpha_2,...,\alpha_n)$ is a splitting field for a polynomial over $K$, however it is not obvious to me what that polynomial would be, or how I could show that its splitting fields was the one required.
You assume that $M/K$ is a normal extension. This means there is a polynomial $f\in K[x]$ such that $M$ is a splitting field of $f$ over $K$. But then note that $P$ is a splitting field of $(x^{p_i}-1)f(x)\in K[x]$, and hence $P/K$ is normal. Note that in general if $P/M$ and $M/K$ are both normal extensions it doesn't imply that $P/K$ must be normal. It is just that in your case we are lucky that $x^{p_i}-1$ is also a polynomial in $K[x]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Describe the equalizer of $f,1:X\to X$ as explicitly as possible Exercise 5.2.22 from Leinster asks to describe the equalizer of $f,1:X\to X$ in $\mathbf{Set}$, where $f:X\to X$ is a map, as explicitly as possible. What level of explicitness is expected? By definition, it is a pair $(S,h)$ where $S$ is a set and $S\to X$ is a map such that $fh=h$, and if $(S'g)$ is another such pair, then there is a unique $\bar g:S'\to S$ such that $h\bar g=g$. How can one make this more explicit?
The equalizer of $s,t:X\to Y$ in $\mathbf{Set}$ is $(E,i)$ where $E=\{x\in X:s(x)=t(x)\}$ and $i:E\to X$ is the inclusion map. This is clearly a fork, and if $(A,g)$ is another fork, then define $\bar g:A\to E, a\mapsto g(a)$. Note that $g(a)$ indeed lives in $E$ because $sg=tg$, and $\bar g$ is the unique map $A\to E$ such that $i\bar g=g$. Apply this to $Y=X,\ s=1,\ t=f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to calculate $\lim_{x \to - 1} \frac{2}{(x+1)^4}$ Calculate $$\lim_{x \to - 1} \frac{2}{(x+1)^4}$$ a) $0$ b) $\infty$ c) $-\infty$ d) $2$ I am able to see that it is equivalent the limit as $x$ approaches $-1$ of $\frac{2}{(x^2+2x+1)^2}$. I know that when doing limits to infinity this would be $0$ because the denominator has the highest exponent, but I am confused for $x$ approaches $-1$. Is this a limit from the right and left kind of thing? Would the process be the same if I changed $-1$ to another number?
Consider this: $$ x\to-1\qquad\Leftrightarrow\qquad (x+1)\to 0$$ $$ \lim_{x\to-1}\frac{2}{(x+1)^4} = \lim_{(x+1)\to 0} \frac{2}{(x+1)^4} = \lim_{y\to 0} \frac{2}{y^4}$$
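Completing the hint: as $y \to 0$ we have $y^4 \to 0^+$ from both sides, so $2/y^4 \to +\infty$ and the answer is (b). Watching the blow-up numerically (my own addition):

```python
def f(x):
    return 2 / (x + 1) ** 4

for k in range(1, 6):
    h = 10.0 ** (-k)
    print(f(-1 + h), f(-1 - h))    # both one-sided values grow without bound

assert f(-1 + 1e-3) > 1e11 and f(-1 - 1e-3) > 1e11
```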
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
What is the value of $\sum_{ijk}\omega u_iu_j \delta_{ik}\delta_{kj}$? I have to work with the following sum: $\sum_{ijk}\omega u_iu_j \delta_{ik}\delta_{kj}$ where $\omega$ is a constant in $\mathbb{C}$. Is the answer: $$\omega \sum_k \sum_i u_i \delta_{ik}\sum_j u_j\delta_{kj}=\omega\sum_k u_k^2$$ or: $$\sum_{ij}u_iu_j \left(\sum_k \omega\delta_{ik}\delta_{kj}\right)=\sum_{ij}u_iu_j \left(2\omega-\omega\delta_{ij}\right)=2\omega\sum_{ij}u_iu_j-\omega\sum_iu_i^2$$. This is a pretty basic question, but I don't see where my logic is wrong here... If it is too obvious I can delete the question once my slow mind gets it... Anyway, thanks for your help! Edit: For the second line of calculation here was my reasoning: In the $\sum_k \omega\delta_{ik}\delta_{kj}$ there are only two non-zero terms, namely when $k=i$ and when $k=j$, therefore $\sum_k \omega\delta_{ik}\delta_{kj}=2\omega$. However if $i=j$ then there is only one non-zero term. Therefore to consider this case I have to add $-\omega\delta_{ij}$. Which yields: $\sum_k \omega\delta_{ik}\delta_{kj}=2\omega-\omega\delta_{ij}$.
When you manipulate this expression: $$\sum_{i, j} u_i u_j \left( \sum_k \omega \delta_{ik} \delta_{kj} \right),$$ you first (rightfully) consider the sum over $k$: $$\sum_k \omega \delta_{ik} \delta_{kj}.$$ A product of real numbers is non-zero if and only if each of the terms is non-zero. Here, if $\delta_{ik}=0$ or $\delta_{kj} = 0$, then $\omega \delta_{ik} \delta_{kj} = 0$; if $\delta_{ik}=\delta_{kj} = 1$, then $\omega \delta_{ik} \delta_{kj} = \omega$. Said otherwise: * *If $i \neq k$ or $j \neq k$, then $\omega \delta_{ik} \delta_{kj} = 0$; *If $i=j=k$, then $\omega \delta_{ik} \delta_{kj} = \omega$. If $i \neq j$, the second condition is never satisfied, so all of the terms are zero. If $i=j$, then there is only one non-zero term, corresponding to $k=i(=j)$. Hence: $$\sum_k \omega \delta_{ik} \delta_{kj} = \omega \delta_{ij}.$$ Finally, $$\sum_{i, j} u_i u_j \left( \sum_k \omega \delta_{ik} \delta_{kj} \right) = \sum_{i, j} u_i u_j \omega \delta_{ij} = \omega \sum_i u_i^2,$$ which is the same answer as with the first method.
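A brute-force check of the contraction with a sample vector (my own addition):

```python
omega = 2 + 3j
u = [1.0, -2.0, 0.5, 4.0]
n = len(u)

def delta(i, j):
    return 1 if i == j else 0

full = sum(omega * u[i] * u[j] * delta(i, k) * delta(k, j)
           for i in range(n) for j in range(n) for k in range(n))
reduced = omega * sum(ui ** 2 for ui in u)    # omega * sum_k u_k^2

assert abs(full - reduced) < 1e-12
print(full)   # (42.5+63.75j) here, since the sum of squares is 21.25
```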
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to integrate $\int_0^{2\pi} \cos^{10}\theta \mathrm{d}\theta$ using complex analysis. I am asked to evaluate the following integral: $$\int_0^{2\pi} \cos^{10}\theta \mathrm{d}\theta$$ I am using complex analysis. Setting $z = e^{i\theta}$, I get from Eulers formula: $$\cos \theta = \frac{1}{2}\left(e^{i\theta} + e^{-i\theta}\right) = \frac{1}{2}\left(z + z^{-1}\right)$$ Now as $\theta$ goes from $0$ to $2\pi$, $z = e^{i\theta}$ goes one time around the unit circle. Therefore the problem is reduced to the following contour integral: $$\oint_{C} \left(\frac{1}{2}(z + z^{-1})\right)^{10} \frac{dz}{iz}$$ where C is the unit circle. At this point, I don't know how to move forward. I am pretty sure I am to apply the residue theorem, and the function I am integrating clearly has a pole at $z = 0$. But I don't know how to calculate that residue, since the pole is of the 10th order. Is there another approach I should take, maybe find the Laurent series of the function? Any help is greatly appreciated!
Hint: Set $$f(z)=\frac{(z+z^{-1})^{10}}{2^{10}iz}=\frac{(z^2+1)^{10}}{2^{10}iz^{11}}.$$ Then, $$f(z)=\frac{1}{2^{10}iz^{11}}\sum_{k=0}^{10}\binom{10}{k}z^{2k}=...+\frac{1}{2^{10}iz}\binom{10}{5}+...$$
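Carrying the hint through (my own addition): the residue at $z=0$ is $\binom{10}{5}/(2^{10}i)$, so by the residue theorem the integral equals $2\pi i\cdot\binom{10}{5}/(2^{10}i)=\frac{63\pi}{128}$. A numeric cross-check:

```python
import math

# completing the hint: residue at z=0 is C(10,5)/(2^10 i),
# so the integral is 2*pi*i * C(10,5)/(2^10 i) = 63*pi/128
exact = 2 * math.pi * math.comb(10, 5) / 2 ** 10
assert math.isclose(exact, 63 * math.pi / 128)

# midpoint-rule cross-check of the original real integral
N = 100_000
step = 2 * math.pi / N
approx = sum(math.cos((k + 0.5) * step) ** 10 for k in range(N)) * step
assert math.isclose(approx, exact, rel_tol=1e-9)
print(exact)   # about 1.5463
```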
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Number of ways to choose 6 courses out of 15 (different) courses We have 3 English courses, 6 Chinese courses, 6 Spanish courses. Each course is different. In how many ways can we choose 6 courses such that we choose at least 1 course from each topic (English, Chinese, Spanish)? If possible, I would like to see a solution without the inclusion-exclusion principle. Edit: I got to $\frac{6*6*3}{3!}*\binom{12}{3}\;$ where $\frac{6*6*3}{3!}$ represents picking a course from each topic (we must choose at least 1 from each topic), then dividing by the number of permutations. Then we are left with 3 courses to choose from 12, so I multiplied by $\binom{12}{3}\;$
There are some mistakes in your calculation. First, there is no reason to divide by $3!$. When you choose an English course, a Chinese course, and a Spanish course, the order matters. You haven't counted anything $6$ times. Second, you have some double counting. Choosing English 1 as one of the first three courses, and then choosing English 2 as one of the remaining three courses is the same as choosing English 2 as one of the first three, and then choosing English 1 as one of the last three. If you want to do this without inclusion-exclusion (although that's the method I would recommend) note that you can either choose two courses in each language in $${3\choose2}{6\choose2}{6\choose2}$$ ways, or choose two English courses, one in one of the other languages and three of the third in $$2{3\choose2}{6\choose1}{6\choose3}$$ ways, or one English course, two in one of the other languages and three of the third in $$2{3\choose1}{6\choose2}{6\choose3}$$ ways or one English course, one in one of the other languages and four of the third in $$2{3\choose1}{6\choose1}{6\choose4}$$ ways or three English courses, two in one of the other languages and one of the third in $$2{3\choose3}{6\choose2}{6\choose1}$$ ways, if I haven't missed any.
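Nothing was missed: a brute-force enumeration (my own addition, not part of the answer) agrees with the casework total, which also matches inclusion-exclusion, $\binom{15}{6}-\binom{12}{6}-2\binom{9}{6}+2\binom{6}{6}=3915$.

```python
from itertools import combinations
from math import comb

# 15 distinct courses, tagged by language
courses = [("E", i) for i in range(3)] + [("C", i) for i in range(6)] \
          + [("S", i) for i in range(6)]

brute = sum(1 for pick in combinations(courses, 6)
            if {lang for lang, _ in pick} == {"E", "C", "S"})

cases = (comb(3, 2) * comb(6, 2) * comb(6, 2)
         + 2 * comb(3, 2) * comb(6, 1) * comb(6, 3)
         + 2 * comb(3, 1) * comb(6, 2) * comb(6, 3)
         + 2 * comb(3, 1) * comb(6, 1) * comb(6, 4)
         + 2 * comb(3, 3) * comb(6, 2) * comb(6, 1))

assert brute == cases == 3915
print(brute)   # 3915 -- the casework covers every distribution
```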
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A sum of products of binomial coefficients I am looking for a proof or a reference for the following (apparent) combinatorial identity: $$ \sum_{i = s}^{s+t}(-1)^{i}{i \choose s} {s \choose i - t} = (-1)^{s + t},\quad\mbox{where}\ s\geq t\geq 0\ \mbox{are integers} $$ Any help will be appreciated.
The hint of @darijgrinberg is valuable and deserves an answer of its own. We obtain \begin{align*} \color{blue}{\sum_{q=s}^{s+t}}&\color{blue}{(-1)^q\binom{q}{s}\binom{s}{q-t}}\\ &=\sum_{q=0}^t(-1)^{q+s}\binom{q+s}{q}\binom{s}{q+s-t}\tag{1}\\ &=(-1)^s\sum_{q=0}^t\binom{-s-1}{q}\binom{s}{t-q}\tag{2}\\ &=(-1)^s\binom{-1}{t}\tag{3}\\ &=(-1)^s\frac{(-1)(-2)\cdots(-t)}{t!}\\ &\,\,\color{blue}{=(-1)^{s+t}} \end{align*} and the claim follows. Comment: In (1) we shift the index to start with $q=0$. In (2) we use the binomial identities $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$ and $\binom{p}{q}=\binom{p}{p-q}$. In (3) we apply the Chu-Vandermonde identity.
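A quick computational check of the identity (my own addition) over small $s \ge t \ge 0$:

```python
from math import comb

def lhs(s, t):
    # the left-hand side, with q playing the role of the summation index i
    return sum((-1) ** q * comb(q, s) * comb(s, q - t)
               for q in range(s, s + t + 1))

for s in range(0, 8):
    for t in range(0, s + 1):
        assert lhs(s, t) == (-1) ** (s + t)
print("identity verified for all 0 <= t <= s <= 7")
```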
{ "language": "en", "url": "https://math.stackexchange.com/questions/3305804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$\zeta (s+c)/\zeta (s)$ is increasing It appears that for the Riemann zeta function $$\zeta (s) = \sum_{n\ge 1} \frac{1}{n^s} \quad (s > 1)$$ and any constant $c > 0$, the function $$\frac{\zeta (s+c)}{\zeta (s)}$$ (considered for real $s > 1$) is increasing. Could anybody give a proof of this fact? Thank you very much! Just in case, here's what I tried: we have $$\left(\frac{\zeta (s+c)}{\zeta (s)}\right)' = \frac{\zeta (s) \zeta' (s+c) - \zeta (s+c) \zeta'(s)}{\zeta (s)^2},$$ and $\zeta' (s) = -\sum_{n\ge 2} \frac{\log n}{n^s}$, but after playing around with series I don't see how to deduce that $\zeta (s) \zeta' (s+c) > \zeta (s+c) \zeta'(s)$.
Since $s>1$, $\zeta(s)>0$ and it's easier to consider the logarithmic derivative: $f'(x)/f(x)$ has the same sign as $f'(x)$. In particular, for $s>1$ we can use the Euler product $$ \zeta(s) = \prod_{p} (1-p^{-s})^{-1} : $$ from this we have $$ \frac{d}{ds} \log{\zeta(s)} = \frac{d}{ds} \sum_p -\log{(1-p^{-s})} = -\sum_p \frac{1}{p^s-1} \log{p} . $$ Then $$ \frac{d}{ds} \log{\frac{\zeta(s+c)}{\zeta(s)}} = \frac{d}{ds} \log{\zeta(s+c)} - \frac{d}{ds} \log{\zeta(s)} = \sum_p \left( \frac{1}{p^{s}-1} - \frac{1}{p^{s+c}-1} \right) \log{p} . $$ Since $s \mapsto 1/(p^s-1)$ is a decreasing function, every term in the sum is positive, so the whole sum is positive. Therefore $s \mapsto \zeta(s+c)/\zeta(s)$ is in fact strictly increasing when $c>0$ and $s>1$. (All the interchanges of logarithms and derivatives with sums and products can be justified using continuity and local uniform convergence, as usual: there's nothing weird going on here.)
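A quick numerical check of the monotonicity claim (a sketch using `mpmath`, with an arbitrary choice of $c > 0$):

```python
from mpmath import mp, zeta

mp.dps = 30
c = 0.7  # an arbitrary positive constant

# Sample the ratio zeta(s+c)/zeta(s) on an increasing grid of s > 1
ss = [1.01 + 0.05 * k for k in range(80)]
ratios = [zeta(s + c) / zeta(s) for s in ss]

# The ratio should be strictly increasing in s (and bounded by 1)
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))
assert ratios[-1] < 1
```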
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving for the height of water in a sphere of radius $r$ being filled at a constant rate. I'm attempting to find a function that will output a value for the height $h$ (which can also be represented as $y$) of water in a sphere of radius $r$ being filled at a constant rate. For finding the volume of a sphere until a certain point ($h$) I have chosen to use integration and add the areas of all spheres up until that certain point to do so, we will use the formula $\pi r^2$ and modify it into $\pi r_y^2$ where $r_y$ is the radius of the circular slice of our sphere at height $y$. To do this accurately, we use integration to add an infinite amount of circles to form a sphere. That equation looks like so: $$ V=\int_{-r}^h\pi r_y^2 \,dx $$ The work I've done so far has been broken into 2 major parts: 1.) Finding $r_y^2$ To find $r_y^2$, I've rearranged the equation of a circle to solve for $y$ and give a singular value, making it into a usable function for finding the radius of a circle at any given x coordinate between $-r$ and $r$: $$ x^2+y^2=r^2 $$ $$ y^2=r^2-x^2 $$ $$ y=\pm\sqrt{r^2-x^2} $$ We only care about the absolute value to give us the value of $r$ so that last equation becomes: $$ f(x)=\sqrt{r^2-x^2} $$ Which gives the shape:$y=\sqrt{r^2-x^2}$ on Desmos.com"> Now that we have a function for finding $r_y^2$ (although the equation is in terms of $x$ the equation will work nonetheless which will be demonstrated soon) we are good to move on to step 2. 2.) 
Solving for height $h$ We can use our equation for volume $V$ which adds the areas of all circles under a point $y=h$ in that sphere and plug in our new function for finding the radius of a circle at a different height in our sphere: $$ V=\int_{-r}^h\pi r_y^2 \,dx $$ $$ V=\int_{-r}^h\pi f(x)^2 \,dx $$ $$ V=\int_{-r}^h\pi \sqrt{r^2 - x^2}^2 \,dx $$ $$ V=\int_{-r}^h\pi(r^2 - x^2) \,dx $$ Before going any further, to make sure that this equation is correct in finding the volume of a sphere of radius $r$, we will use $h=r$ and solve to see if we get the volume of a sphere equation $\frac{4}3 \pi r^3$: $$ V = \int_{-r}^r\pi(r^2 - x^2) \,dx $$ $$ =\pi\left(\int_{-r}^rr^2 - x^2 \,dx\right) $$ $$ =\pi\left(\int_{-r}^rr^2 \,dx - \int_{-r}^rx^2 \,dx \right) $$ $$ =\pi\left([r^2 x]_{-r}^r - [\frac{1}3 x^3]_{-r}^r \right) $$ $$ =\pi\left(r^2 [x]_{-r}^r - \frac{1}3 [x^3]_{-r}^r \right) $$ $$ =\pi\left(r^2 [r - (-r)] - \frac{1}3 [r^3 - (-r)^3] \right) $$ $$ =\pi\left(r^2 [r + r] - \frac{1}3 [r^3 + r^3] \right) $$ $$ =\pi\left(2r^3 - \frac{2r^3}3 \right) $$ $$ =\pi\left(\frac{6r^3}3 - \frac{2r^3}3 \right) $$ $$ =\pi\left(\frac{4r^3}3 \right) $$ $$ =\pi \cdot \frac{4}3 \cdot r^3 $$ $$ V = \frac{4}3 \pi r^3 $$ Now that we know this equation is effective at finding the volume of a sphere, we will attempt to solve for $h=h$ rather than $h=r$. 
We will take it from the 5$^{th}$ equation onwards though as all of the steps for finding the indefinite integral are the same until there: $$ V = \pi\left(r^2 [x]_{-r}^h - \frac{1}3 [x^3]_{-r}^h \right) $$ $$ V = \pi\left(r^2 [h - (-r)] - \frac{1}3 [h^3 - (-r^3)] \right) $$ $$ V = \pi\left(r^2 [h + r] - \frac{1}3 [h^3 + r^3] \right) $$ $$ V = \pi\left(hr^2 + r^3 - \left(\frac{h^3}3 + \frac{r^3}3 \right) \right) $$ $$ V = \pi\left(hr^2 + r^3 - \frac{h^3}3 - \frac{r^3}3 \right) $$ $$ \frac{V}\pi = hr^2 + r^3 - \frac{h^3}3 - \frac{r^3}3 $$ $$ \frac{V}\pi - r^3 + \frac{r^3}3 = hr^2 - \frac{h^3}3 $$ $$ \frac{V}\pi - r^3 + \frac{r^3}3 = 3hr^2 - h^3 $$ $$ h(3r^2 - h^2) = \frac{V}\pi - r^3 + \frac{r^3}3 $$ $$ h(3r^2 - h^2) = \frac{3V}{3\pi} - \frac{3r^3\pi}{3\pi} + \frac{r^3\pi}{3\pi} $$ $$ h(3r^2 - h^2) = \frac{3V - 3r^3\pi + r^3\pi}{3\pi} $$ $$ h(3r^2 - h^2) = \frac{3V - 2r^3\pi}{3\pi} $$ This is as far as I've gotten. I've attempted going further by making a quadratic equation in the form $ax^2 + bx + c = 0$ where $x$ is replaced with $r$ and I solve for $r$ to place that equation back in, but that lead to the equation $0 = 0$, which means I did it right, it's just not what I'm looking for. I also realised later that it's required for $r$ to be in the equation for finding $h$ so to get rid of it would defeat the purpose of this equation. Assuming $V$ and $r$ are known variables which will be plugged in, how do you solve the equation we came up with in terms of $h$? $$ h(3r^2 - h^2) = \frac{3V - 2r^3\pi}{3\pi} $$ P.S. I have put it into Wolfram Alpha and it gave me some extremely complicated answers. Here's the link to the answer. If the link isn't working, type this into Wolfram Alpha and click "=": "h(3r^(2) - h^(2)) = (3V - 2r^(2)pi)/pi solve h".
Cardano is not the best way to go here. Try the trigonometric solution. Your equation is already depressed, so you can ignore that part. Your equation has $$p = -3r^2,\qquad q = \frac{3V - 2r^3\pi}{3\pi}.$$ Since $p < 0$, the solutions will involve real numbers only.
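For completeness, here is a sketch of the trigonometric solution of a depressed cubic $t^3+pt+q=0$ (three real roots when $p<0$ and $4p^3+27q^2\le 0$), using the $p$ and $q$ from the thread's final equation. The values of $r$ and $V$ are arbitrary illustrative choices, and the code only verifies that the returned roots satisfy the cubic:

```python
import math

def depressed_cubic_roots_trig(p, q):
    """All three real roots of t^3 + p*t + q = 0 via the trigonometric
    method (valid when p < 0 and 4p^3 + 27q^2 <= 0)."""
    assert p < 0 and 4 * p**3 + 27 * q**2 <= 0
    m = 2 * math.sqrt(-p / 3)
    theta = math.acos(3 * q / (2 * p) * math.sqrt(-3 / p)) / 3
    return [m * math.cos(theta - 2 * math.pi * k / 3) for k in range(3)]

# Hypothetical numbers: a sphere of radius r filled to volume V
r, V = 1.0, 2.0
p = -3 * r**2
q = (3 * V - 2 * math.pi * r**3) / (3 * math.pi)  # as in the thread's equation

roots = depressed_cubic_roots_trig(p, q)
for t in roots:
    assert abs(t**3 + p * t + q) < 1e-9  # each root satisfies the cubic
# the physically meaningful height is the root lying in [-r, r]
h = [t for t in roots if -r <= t <= r]
```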
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a matrix with a given rowspace My linear algebra textbook asks, Find a matrix $A$ that has $V$ as its row space if $V$ is the subspace spanned by $$\begin{align}\begin{bmatrix}1\\1\\0\end{bmatrix},\begin{bmatrix}1\\2\\0\end{bmatrix} \text{, and}\begin{bmatrix}1\\5\\0\end{bmatrix}\end{align}$$ (Strang 4e, problem 2.4.17) These vectors span the space of all vectors in $\mathbb{R^3}$ whose third entry is $0$, so I answered with this: $$\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}$$ The textbook lists this as the correct answer, however: $$\begin{bmatrix}1 & 1 & 0\end{bmatrix}$$ This seems wrong to me. How can you make $\begin{bmatrix}1 & 2 & 0\end{bmatrix}$ out of $\begin{bmatrix}1 & 1 & 0\end{bmatrix}$?
You are right. In fact $(1,1,0)^t$ and $(1,2,0)^t$ form a linearly independent set, and so, you can also put the matrix as $$A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \end{pmatrix}$$
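This is easy to confirm numerically: two matrices have the same row space exactly when stacking their rows does not increase the rank. A quick check with NumPy:

```python
import numpy as np

V = np.array([[1, 1, 0],
              [1, 2, 0],
              [1, 5, 0]])          # the three spanning vectors, as rows
A = np.array([[1, 0, 0],
              [0, 1, 0]])          # the proposed answer
B = np.array([[1, 1, 0]])          # the textbook's (incorrect) answer

def same_row_space(M, N):
    # Row spaces agree iff stacking the rows does not increase the rank
    r = np.linalg.matrix_rank
    return r(np.vstack([M, N])) == r(M) == r(N)

assert np.linalg.matrix_rank(V) == 2     # V spans a 2-dimensional space
assert same_row_space(A, V)              # A works
assert not same_row_space(B, V)          # one row cannot span a plane
```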
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimize variance of $aX + bY + cZ$ for independent variables $X,Y,Z$ There are three independent random variables $X, Y, Z$. The goal is to minimize the variance of $aX + bY + cZ$ where $a+b+c = 1$ and $0\le a,b,c \le1$. $$\operatorname{var}(X) = 2\operatorname{var}(Y) = 3\operatorname{var}(Z)$$ I did $$K = aX + bY + cZ\quad,\quad \operatorname{var}(K) = a^2\operatorname{var}(X) + b^2\operatorname{var}(Y) + c^2\operatorname{var}(Z)\,,$$ but it looks like one more equation might be needed to solve the problem. Is there anything I missed?
To minimize $6a^{2}+3b^{2}+2c^{2}$ subject to $a,b,c \geq 0$ and $a+b+c=1$, note that $1^{2}=(a+b+c)^{2}=(\frac 1 {\sqrt 6}\sqrt 6 a +\frac 1 {\sqrt 3}\sqrt 3 b +\frac 1 {\sqrt 2}\sqrt 2 c)^{2} \leq (\frac 1 6 +\frac 1 3+\frac 1 2) (6a^{2}+3b^{2}+2c^{2})$ by the Cauchy-Schwarz inequality. Hence $6a^{2}+3b^{2}+2c^{2} \geq 1$ and equality holds when $(a,b,c) =(\frac 1 6, \frac 1 3,\frac 1 2)$. Hence the minimum value is $1$.
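Writing $\operatorname{var}(X)=6u$, $\operatorname{var}(Y)=3u$, $\operatorname{var}(Z)=2u$ for some $u>0$ (which satisfies $\operatorname{var}(X)=2\operatorname{var}(Y)=3\operatorname{var}(Z)$), minimizing $\operatorname{var}(aX+bY+cZ)$ amounts to minimizing $6a^2+3b^2+2c^2$. A quick random search over the simplex supports the claimed minimum:

```python
import random

def f(a, b, c):
    # the scaled objective: var(aX+bY+cZ) / u, with the variances as above
    return 6 * a**2 + 3 * b**2 + 2 * c**2

# The claimed minimizer and minimum value
assert abs(f(1/6, 1/3, 1/2) - 1) < 1e-12

# Random points on the simplex a+b+c = 1, a,b,c >= 0 never do better
random.seed(0)
for _ in range(20_000):
    x, y, z = (random.random() for _ in range(3))
    s = x + y + z
    a, b, c = x / s, y / s, z / s
    assert f(a, b, c) >= 1 - 1e-12
```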
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding $P(X<2Y)$ given joint pdf $f(x, y) = \frac {1}{2\pi} e^{-\sqrt{x^2 + y^2}}$ for $x,y\in\mathbb R$ The joint pdf of $X$ and $Y$ is $$f(x, y) = \frac {1}{2\pi} e^{-\sqrt{x^2 + y^2}}\,; \quad x,y\in \mathbb{R}$$ Find $P(X<2Y)$. I have tried this: $$\int_{-\infty}^{\infty}\int_{-\infty}^{2y} f(x, y)\, dx\,dy$$ @StubbornAtom suggested polar transformations. So here it goes, Let $X = r \cos\theta, \, Y = r \sin\theta$. Then The integral gets transformed as $\int \int re^{-r}d\theta$. Could some one help find the new limits.
The joint distribution of $(X,Y)$ is rotationally symmetric, since the density depends only on the distance from the origin. This means that the conditional distribution of $(X,Y)$ given $X^2+Y^2$ is uniformly distributed on the circle of radius $r=\sqrt{X^2+Y^2}$. Writing $X=r\cos\Theta$ and $Y=r\sin\Theta$ with $\Theta\in [0,2\pi)$ uniformly random, we have $$ \mathbb P(X<2Y\mid X^2+Y^2)=\mathbb P(\cos\Theta<2\sin \Theta)$$$$=\mathbb P\Bigl(\tan\Theta>\frac{1}{2}, \cos\Theta > 0\Bigr)+\mathbb P\Bigl(\tan\Theta<\frac{1}{2}, \cos\Theta < 0\Bigr),$$ which can be seen to equal $$\mathbb P\Bigl(\tan^{-1}\frac{1}{2}<\Theta<\pi+\tan^{-1}\frac{1}{2}\Bigr)=\frac12. $$ Since the conditional probability is $\tfrac12$ independent of $X^2+Y^2$, it follows that the unconditional probability is also $\tfrac12$.
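In polar coordinates the radial density is proportional to $re^{-r}$ (a Gamma$(2,1)$ distribution) and the angle is uniform and independent of it, so the answer is easy to confirm with a Monte Carlo sketch:

```python
import math, random

random.seed(1)
N = 100_000
hits = 0
for _ in range(N):
    r = random.gammavariate(2, 1)            # radial density ~ r * exp(-r)
    theta = random.uniform(0, 2 * math.pi)   # uniform, independent angle
    x, y = r * math.cos(theta), r * math.sin(theta)
    hits += (x < 2 * y)

p_hat = hits / N
assert abs(p_hat - 0.5) < 0.01   # P(X < 2Y) = 1/2
```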
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
When Is the Squeeze(Sandwich) Theorem Used? I am beginning to learn Calculus 1, and I was taught about the squeeze (sandwich) theorem. It seems to me that all the problems given have $\sin$ involved. Is this true? When is the squeeze theorem applied? Also, what are the standard "squeeze functions"? For example, I know that $-1\leq\sin(x)\leq 1$ and this is used in a $\sin$-involved problem. Are there other such functions to apply when using the squeeze theorem?
No, it's not necessary that sine functions are involved in all Sandwich Theorem problems. * *The Sandwich Theorem is commonly used in computing integrals as a limit of a sum. *It is used in limit computations. *It is used in proving convergence of many series by bounding them. *I found another interesting use with many applications in itself. The limit of the sinc function at $0$ can be proved using the theorem, which provides a first-order approximation that is used in physics. This function also shows up as the Fourier transform of a rectangular wave. You can read more about this specific function in the top answer (answered anonymously) to this Quora question. An example of a function where the squeeze theorem can be applied: the limit of $f(x)=x^2 e^{\sin\frac{1}{x}}$ as $x$ approaches $0$. The limit cannot be found by direct substitution because the function oscillates infinitely many times as $x$ approaches zero, but since $x^2/e \leq f(x) \leq e x^2$, the squeeze theorem gives $\lim_{x\to 0} f(x)=0$.
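For the example $f(x)=x^2 e^{\sin(1/x)}$, the bounds $x^2/e \le f(x) \le ex^2$ squeeze the limit at $0$ down to $0$; a quick numerical check:

```python
import math

def f(x):
    return x**2 * math.exp(math.sin(1 / x))

# squeeze: x^2/e <= f(x) <= e*x^2 for all x != 0
xs = [1 / k**1.5 for k in range(1, 2000)]
squeezed = all(x**2 / math.e <= f(x) <= math.e * x**2 for x in xs)
assert squeezed
assert f(xs[-1]) < 1e-8   # f is forced toward 0 as x -> 0
```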
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Infinite Series $\sum_{n=1}^{\infty}\frac{4^nH_n}{n^2{2n\choose n}}$ I am trying to find a closed form for this infinite series: $$ S=\sum_{n=1}^{\infty}\frac{4^nH_n}{n^2{2n\choose n}}$$ Whith $H_n=\sum\limits_{k=1}^{n}\frac{1}{k}$ the harmonic numbers. I found this integral representation of S: $$S=2\int_{0}^{1}\frac{x}{1-x^2}\left(\frac{\pi^2}{2}-2\arcsin^2(x)\right)dx$$ Sketch of a proof: Recall the integral representation of the harmonic numbers: $H_n=\displaystyle\int_{0}^{1}\frac{1-x^n}{1-x}dx$ By plugging it into the definition of S and interchanging the order of summation between $\displaystyle\sum$ and $\displaystyle\int$ (justified by the uniform convergence of the function series $\displaystyle\sum\left(x\to\frac{4^n}{n^2{2n\choose n}}\frac{1-x^n}{1-x}\right)$, because $\forall x\in[0,1],\frac{1-x^n}{1-x}<n$), we get: $$S=\int_{0}^{1}\frac{1}{1-x}\sum\limits_{n=1}^{\infty}\frac{4^n(1-x^n)}{n^2{2n\choose n}}dx$$ $$=\int_{0}^{1}\frac{1}{1-x}\left(\sum\limits_{n=1}^{\infty}\frac{4^n}{n^2{2n\choose n}}-\sum\limits_{n=1}^{\infty}\frac{(4x)^n}{n^2{2n\choose n}}\right)dx$$ $$=\int_{0}^{1}\frac{1}{1-x}\left(\frac{\pi^2}{2}-\sum\limits_{n=1}^{\infty}\frac{(4x)^n}{n^2{2n\choose n}}\right)dx$$ Using the result $\displaystyle\sum\limits_{n=1}^{\infty}\frac{4^n}{n^2{2n\choose n}}=\frac{\pi^2}{2}$. At that point, we will rely on the taylor series expansion of $\arcsin^2$: $$\arcsin^2(x)=\frac{1}{2}\sum\limits_{n=1}^{\infty}\frac{4^n}{n^2{2n\choose n}}x^{2n}, |x|<1$$ Out of which we get $\displaystyle\sum\limits_{n=1}^{\infty}\frac{(4x)^n}{n^2{2n\choose n}}=2\arcsin^2\left(\sqrt{x}\right)$ So, $$S=\int_{0}^{1}\frac{1}{1-x}\left(\frac{\pi^2}{2}-2\arcsin^2\left(\sqrt{x}\right)\right)dx$$ Which, through the substitution $u=\sqrt{x}$, gives the integral representation above. But beyond that, nothing so far. I tried to use the integral representation of $\frac{H_n}{n}$ to switch the order of summation, but it didn't lead anywhere. Any suggestion?
From here, we have $$\frac{\arcsin z}{\sqrt{1-z^2}}=\sum_{n=1}^\infty\frac{(2z)^{2n-1}}{n{2n \choose n}}$$ Substituting $z=\sqrt{y}$, we get $$\sum_{n=1}^\infty\frac{4^ny^n}{n{2n \choose n}}=2\sqrt{y}\frac{\arcsin\sqrt{y}}{\sqrt{1-y}}$$ Now multiply both sides by $-\frac{\ln(1-y)}{y}$ then integrate from $y=0$ to $1$ and using the fact that $-\int_0^1 y^{n-1}\ln(1-y)\ dy=\frac{H_n}{n}$, we get \begin{align} \sum_{n=1}^\infty\frac{4^nH_n}{n^2{2n \choose n}}&=-2\int_0^1\frac{\arcsin\sqrt{y}}{\sqrt{y}\sqrt{1-y}}\ln(1-y)\ dy\overset{\arcsin\sqrt{y}=x}{=}-8\int_0^{\pi/2}x\ln(\cos x)\ dx\\ &=-8\int_0^{\pi/2}x\left\{-\ln2-\sum_{n=1}^\infty\frac{(-1)^n\cos(2nx)}{n}\right\}\ dx\\ &=\pi^2\ln2+8\sum_{n=1}^\infty\frac{(-1)^n}{n}\int_0^{\pi/2}x\cos(2nx) dx\\ &=\pi^2\ln2+8\sum_{n=1}^\infty\frac{(-1)^n}{n}\left(\frac{\pi\sin(n\pi)}{4n}+\frac{\cos(n\pi)}{4n^2}-\frac1{4n^2}\right)\\ &=\pi^2\ln2+2\pi\sum_{n=1}^\infty\frac{(-1)^n\sin(n\pi)}{n^2}+2\sum_{n=1}^\infty\frac{(-1)^n\cos(n\pi)}{n^3}-2\sum_{n=1}^\infty\frac{(-1)^n}{n^3}\\ &=\pi^2\ln2+0+2\sum_{n=1}^\infty\frac{(-1)^n(-1)^n}{n^3}-2\operatorname{Li}_3(-1)\\ &=\pi^2\ln2+2\zeta(3)-2\left(-\frac34\zeta(3)\right)\\ &=\pi^2\ln2+\frac72\zeta(3) \end{align}
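The key intermediate step, $-8\int_0^{\pi/2}x\ln(\cos x)\,dx=\pi^2\ln 2+\tfrac72\zeta(3)$, can be confirmed numerically with `mpmath` (the integrand has only an integrable logarithmic singularity at $\pi/2$):

```python
from mpmath import mp, quad, log, cos, pi, zeta

mp.dps = 30
# the key intermediate integral from the derivation above
I = -8 * quad(lambda x: x * log(cos(x)), [0, pi / 2])
closed_form = pi**2 * log(2) + mp.mpf(7) / 2 * zeta(3)   # ~ 11.049...
assert abs(I - closed_form) < 1e-15
```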
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
How to reason that $n^5 - n$ is divisible by 2 as proof for a consequence of Fermat's little theorem. In my text book on Discrete Mathematics (I), we have a chapter that covers a bit of elementary Number Theory. In it we see the famous Theorem of Euler as well as the derived little theorem of Fermat. I understand these theorems and even the proofs that are given for them in my text book. There is however a specific step in the proof given for Fermat's little theorem that I do not understand. For the sake of completeness I'll write both the resulting theorem (derived from Fermat's little theorem) and its proof here as written in my text book. $$ \forall n \in \mathbb N^* : n \text{ and } n^5 \text{ always end on the same digit.} $$ The proof goes as follows. Because Fermat's little theorem we know that $$ \begin{equation} \begin{aligned} n^5 &\equiv n \ (\text{mod } 5) &\Leftrightarrow \\ n^5 - n &= 5q &\Leftrightarrow \\ 5\ &|\ (n^5 - n). \end{aligned} \end{equation} $$ On the other hand $$ n^5 - n = n(n-1)(n^3+n^2+n+1). \ \ \ \ \ \ \ \ \ \ \ \ \text{(a)} $$ As both $2$ and $5$ are divisors of $(n^5 - n)$ we can conclude that $$ n^5 \equiv n\ (\text{mod } 10). \ \ \ \ \ \text{QED} $$ I understand the given result as well as the theorems it build upon. I also can easily see that $5$ is a divisor of $(n^5 - n)$. As I keep the target in mind I also can figure out that the only missing part of this proof would be to figure out a way to show that $2$ is also a divisor of $(n^5 - n)$. At first equation $(a)$ did not make any sense to me. After checking the case where $n$ is odd as well as the case where $n$ is even, I did find out that the equation results in an even number in both cases. My question, sorry for the very long introduction, goes as follows: Should I wanted to have proven this myself. What approach would have lead me to trying to factor $(n^5 - n)$ to the given equation $(a)$? 
In fact, while it is easy to factor from right to left in that equation, I do not easily see how one would go from left to right. Probably I am missing some fundamental mathematical knowledge here. Can anyone please help me figure out how one would exactly figure that out? Is it trial and error? Is it just knowing some specific concepts? What knowledge am I missing here?
This is trivial: * *If $n$ is odd, is $n^5-n$ odd or even? *If $n$ is even, is $n^5-n$ odd or even?
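Indeed, once $2\mid(n^5-n)$ and $5\mid(n^5-n)$, we get $10\mid(n^5-n)$, i.e. $n^5$ and $n$ end in the same digit; a quick brute-force check:

```python
# 2 | n^5 - n and 5 | n^5 - n, hence 10 | n^5 - n:
# n^5 and n always end in the same digit.
assert all((n**5 - n) % 10 == 0 for n in range(1, 100_000))
assert all(pow(n, 5, 10) == n % 10 for n in range(1, 100_000))
```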
{ "language": "en", "url": "https://math.stackexchange.com/questions/3306833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 0 }
The first 3 terms of the expansion of $\left(1+\frac{x}{2}\right)\left(2-3x\right)^6$ According to the ascending powers of $x$, find the first $3$ terms if the expansion of $$\left(1+\frac{x}{2}\right)\left(2-3x\right)^6$$ For the expansion of $$(2-3x)^6 $$ The first 3 terms are $$64 -576x + 2160x^2$$ Now, are the required 3 terms are $$64-576x + 2160x^2$$ Or $$32x -288 x^2 +1080 x^3 $$ Or otherwise ?
Add the two together, and keep the terms with $x^0$, $x^1$, and $x^2$. So the answer is $$64+(32-576)x+(2160-288)x^2$$
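A quick symbolic check of the resulting first three terms with SymPy (the product expands to $64-544x+1872x^2+\cdots$):

```python
from sympy import symbols, expand

x = symbols('x')
expr = expand((1 + x / 2) * (2 - 3 * x) ** 6)

# coefficients of x^0, x^1, x^2, in ascending powers of x
first_three = [expr.coeff(x, k) for k in range(3)]
assert first_three == [64, 32 - 576, 2160 - 288]   # i.e. [64, -544, 1872]
```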
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding the Maximum of a Continuous Function over a Closed Interval For function $f\left ( x \right )=4x^{3}-6x^{2},$ the maximum occurs in the interval $\left [ 1,2 \right ]$ when $x$ is equal to ___________ I got $x=0$ is maxima. Because, in that point $f''(x)<0$ But in answer, it is given though $f''(x)<0$ at $x=0,$ but at $x=2$ function $f(x)$ value maximum i.e. $8.$ But is it a correct procedure to get maximum. Is maximum value and maxima are two different thing??
You are told to find the maximum value of $f(x) = 4x^3 - 6x^2$ in the interval $[1,2]$. So why did you simply assert $x = 0$ when this value is clearly not in the requested interval? This is a common mistake students make: the calculation goes flawlessly, but there is no understanding of what it means. When searching for critical points satisfying $$\frac{df}{dx} = 12x^2 - 12x = 0,$$ we easily find $x(x-1) = 0$ or $x \in \{0, 1\}$. Therefore, these are relative extrema of $f$. By computing the second derivative, $$\frac{d^2f}{dx^2} = 24x - 12,$$ we find that at $x = 0$, $f''(0) < 0$, so the function is concave down, and at $x = 1$, $f''(1) > 0$, so the function is concave up. All this tells us is that $x = 0$ is a relative maximum, and $x = 1$ is a relative minimum. Since the only critical point of $f$ that lies in $[1,2]$ is a relative minimum, we now have to consider the value of $f$ at the endpoints of the interval. At $x = 1$, $f(1) = -2$, and at $x = 2$, $f(2) = 8$. Moreover, $f'(x) > 0$ whenever $x > 1$. So we know that on $(1,2]$, $f$ is increasing, consequently the absolute maximum of $f$ on the interval $[1,2]$ occurs at $x = 2$ and equals $8$.
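The closed-interval method described above is mechanical enough to script: compare $f$ at the critical points that lie inside $[1,2]$ and at the endpoints:

```python
def f(x):
    return 4 * x**3 - 6 * x**2

def fprime(x):
    return 12 * x**2 - 12 * x

# Closed-interval method on [1, 2]
candidates = [1, 2]                                 # endpoints
candidates += [x for x in (0, 1) if 1 <= x <= 2]    # critical points x = 0, 1
best = max(candidates, key=f)

assert all(abs(fprime(x)) < 1e-12 for x in (0, 1))  # 0 and 1 are critical
assert best == 2 and f(best) == 8                   # max is f(2) = 8
```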
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Is a parallel translation a linear transformation? I guess not because a linear transformation maps a zero vector to the zero vector but parallel translation does not. Am I right?
You are entirely correct. For any linear transformation $L$, it is true that $L(0)=0$, but this is not true for translation. This is enough to prove that translation is not a linear transformation. If you want to go into detail, you can also go down to the definitions and find the axiom on which translation fails. Remember if $V$ and $W$ are linear spaces over $F$, then $L:V\to W$ is a linear transformation if * *For all $x,y\in V$, it is true that $L(x+y)=L(x)+L(y)$ *For all $x\in V$ and all $\alpha\in F$, it is true that $L(\alpha x) = \alpha L(x)$. You can show that neither of the properties above is true for translation, since translation $T$ has the form $T(x)=x+a$ with $a\neq 0$, and therefore * *$T(x+y)=x+y+a\neq x+a+y+a=T(x)+T(y)$ *$T(\alpha x) = \alpha x + a \neq \alpha x + \alpha a = \alpha T(x)$. Note also that if $a=0$, then $T$ is the identity map, which is linear; so the failures above require $a\neq 0$.
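A one-dimensional numerical illustration (with an arbitrary nonzero translation $a$) showing all three failures at once:

```python
a = 3.0                       # an arbitrary nonzero translation (1-D here)
T = lambda x: x + a

x, y, alpha = 1.0, 2.0, 5.0
assert T(0) != 0                          # linear maps must fix the origin
assert T(x + y) != T(x) + T(y)            # additivity fails
assert T(alpha * x) != alpha * T(x)       # homogeneity fails
```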
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Method of Characteristics - Lagrange-Charpit Equations I need to solve the following PDE with initial condition $U(x,0)=U_0(x)$. Once this is one of my first times, I'd like to get second opinions. Many Thanks. \begin{equation} \partial_t U + (\text{cos}(t)+1)\partial_x U = -2, \end{equation} The characteristic curves are given by Lagrange-Charpit equations: \begin{equation} \dfrac{dt}{1}=\dfrac{dx}{(\text{cos}(t)+1)}=\dfrac{dU}{-2},\qquad(i) \end{equation} Following: \begin{equation} \dfrac{dU}{dt}=-2.\qquad(ii) \end{equation} Solving this EDO \begin{equation} U(x(t),t)=-t+c_1\qquad(iii) \end{equation} By other hand: \begin{equation} \dfrac{dx}{dt}=(\cos(t)+1).\qquad(iv) \end{equation} So: \begin{equation} x(t)= t+\sin (t)+c_2.\qquad(v) \end{equation} Setting $x(0)=x_0$, we get $c_2=x_0$. So, the characterhistics are given by: \begin{equation}x(t)=t+\sin (t)+x_0.\qquad(vi)\end{equation} The whole solutions is given by: \begin{equation} U(x,t)=-t+U_0(x_0)=-t+U_0\bigg(x-t+\sin (t)\bigg).\qquad(vii) \end{equation}
Note that while $U(x,0) = U_0(x)$, the PDE itself isn't satisfied. \begin{align} U_t + (1 + \cos t)U_x &= -1 + U_0'(x - t + \sin t)(-1 + \cos t) + U_0'(x - t + \sin t)(1 + \cos t) \\ &= -1 + 2\cos t \ U_0'(x - t + \sin t) \\ &\not \equiv -2. \end{align} But $$ \frac{\mathrm{d}U}{\mathrm{d}t} = -2 \implies U(x(t),t) = -2t + c_1 $$ and $$ x(t) = t + \sin t + x_0 \implies x_0 = x(t) - t - \sin t, $$ so the corrected solution is $U(x,t) = -2t + U_0(x - t - \sin t)$.
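Putting the two corrected pieces together gives $U(x,t) = -2t + U_0(x - t - \sin t)$; a symbolic check with SymPy that this satisfies both the PDE and the initial condition:

```python
import sympy as sp

x, t = sp.symbols('x t')
U0 = sp.Function('U0')

U = -2 * t + U0(x - t - sp.sin(t))        # corrected characteristic solution
lhs = sp.diff(U, t) + (1 + sp.cos(t)) * sp.diff(U, x)

assert sp.simplify(lhs + 2) == 0          # U_t + (1 + cos t) U_x = -2
assert U.subs(t, 0) == U0(x)              # initial condition holds
```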
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What's my mistake in finding $\int_0^\infty dx e^{-ax^2} \sin(b/x^2)$? I want to evaluate $$I=\int_0^\infty dx e^{-ax^2} \sin(b/x^2)$$ for $a,b>0$. A first simplification is to substitute $y=x/\sqrt{a}$ and define $c=ab>0$ to obtain $$I=\frac{1}{\sqrt{a}} \int_0^\infty e^{-x^2} \sin(c/x^2)$$ Now my idea was to use the Taylor series for the sine $$I=a^{-1/2} \int_0^\infty dx e^{-x^2} \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}\left(\frac{c}{x^2} \right)^{2k+1}$$ Now I interchange sum and integral although I have no justification $$ I=a^{-1/2} \sum_{k=0}^\infty \frac{(-1)^kc^{2k+1}}{(2k+1)!} \int_0^\infty dx e^{-x^2} \left(\frac{1}{x^2} \right)^{2k+1}$$ Substituting $t=x^2$ in the integral we obtain the gamma function $$I=a^{-1/2} \sum_{k=0}^\infty \frac{(-1)^kc^{2k+1}}{(2k+1)!} \frac{1}{2} \Gamma(-2k-1/2) $$ Using $\Gamma \left({\frac{1}{2}}-n\right)={(-4)^{n}n! \over (2n)!}{\sqrt {\pi }}$ (which can be shown using the reflection formula and the duplication formula for the Gamma function) with $n=2k+1$ I obtain $$I=a^{-1/2} \sqrt{\pi} \sum_{k=0}^\infty \frac{(-1)^k(-4c)^{2k+1}}{(4k+2)!} \frac{1}{2} $$ or $$I=-\frac{1}{2}\sqrt{\frac{\pi}{a}} \sum_{k=0}^\infty \frac{(-1)^k(4c)^{2k+1}}{(4k+2)!}=-\frac{1}{2}\sqrt{\frac{\pi}{a}} \sin(\sqrt{2c}) \sinh(\sqrt{2c})$$ where I used wolfram alpha for the last series. The problem: The above result is wrong. It should be (wolfram alpha and Gradshteyn) $$I=\frac{1}{2}\sqrt{\frac{\pi}{a}} \sin(\sqrt{2c})\exp(-\sqrt{2c}) $$ The question: Can someone spot my mistake? Was it interchanging the limits? I would also be interested in your solutions to the integral $I$ using other approaches.
The Glasser's master theorem is a useful tool for the solution. First, use Euler's formula to decompose the sine term into the sum of exponentials. Then it boils down to computing the integral of the form $$ J(p) = \int_{0}^{\infty} \exp\left( -a x^2 - \frac{p}{x^2} \right) \, \mathrm{d}x. $$ Assume for a moment that $a, p > 0$. Then by completing the square, we get $$ J(p) = \int_{0}^{\infty} \exp\left( -a \left( x - \frac{\smash{\sqrt{p/a}}}{x} \right)^2 - 2\sqrt{ap} \right) \, \mathrm{d}x. $$ Then by the Glasser's master theorem and the gaussian integral, this evaluates to $$ J(p) = \int_{0}^{\infty} \exp\left( -a x^2 - 2\sqrt{ap} \right) \, \mathrm{d}x = \frac{1}{2}\sqrt{\frac{\pi}{a}} \exp(-2\sqrt{ap}). \tag{*}$$ Although $\text{(*)}$ is originally proved for $p > 0$, both sides of $\text{(*)}$ define holomorphic functions for $p$ in the right-half plane $\mathbb{H}_{\to} = \{z \in \mathbb{C} : \operatorname{Re}(z) > 0\}$ and are continuous on the closed right-half plane $\overline{\mathbb{H}_{\to}}$. So by the identity theorem and continuity, $\text{(*)}$ extends to all of $p \in \overline{\mathbb{H}_{\to}}$. In particular, plugging $p = \pm ib$ for $b > 0$, we get $$ J(\pm ib) = \frac{1}{2}\sqrt{\frac{\pi}{a}} \exp(-2\sqrt{\pm i ab}) = \frac{1}{2}\sqrt{\frac{\pi}{a}} \exp(-\sqrt{2c}(1\pm i)). $$ Therefore $$ I = \frac{J(-ib) - J(ib)}{2i} = \frac{1}{2}\sqrt{\frac{\pi}{a}} \sin(\sqrt{2c}) \exp(-\sqrt{2c}). $$
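The key identity $(*)$ for real $p>0$ is easy to check numerically (a sketch with `mpmath`, using arbitrary positive values of $a$ and $p$):

```python
from mpmath import mp, quad, exp, sqrt, pi, inf

mp.dps = 25
a, p = mp.mpf(2), mp.mpf(3)     # arbitrary positive parameters

# J(p) = int_0^inf exp(-a x^2 - p/x^2) dx
J = quad(lambda x: exp(-a * x**2 - p / x**2), [0, inf])
closed = sqrt(pi / a) / 2 * exp(-2 * sqrt(a * p))   # identity (*)
assert abs(J - closed) < 1e-12
```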
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mathematical music theory concerning melodic intervals and chord progressions There are many books exploring musical theory with maths. However, so far I have only seen discussions about the consonance/dissonance of two notes played simultaneously (intervals) -- this is the theory of "the vertical" on the score. Such theories only consider the sound of the frequency domain at a time, but in reality, music keeps on changing its frequency. None of the books I have read address the issue of "horizontal" movements where the music moves from one note to another, changing the spectrum of frequencies of sounds. Examples of "horizontal movement": * *Why some dissonant chords tend to resolve into consonant chords (usually by step)? Theories I have seen do a good job at explaining why something is dissonant, but none of them explain why I need to resolve it. *Why melodies tend to move mostly by steps? Are there any books that discuss those issues?
Your question is much more related to the realm of perception of notes and classical music theory than mathematics. For musical note sequences (lines) not having too large an interval between notes: this has to do with the perceptual grouping strategies our brain employs. If jumps are too large, the notes are not perceived as belonging together, breaking a "gestalt". A good book to get an introduction and many good sound examples is Music, Cognition, and Computerized Sound. The question of "dissonant to consonant" resolution is part of Western music theory, and in particular harmony theory, and it is not at all shown to be universal. That said, a good albeit very detailed article discussing consonance/dissonance from both a Western music theory and a psychological perspective is Parncutt and Hair. For "horizontal" changes in spectral information of musical sound, google for "transients" and "onset" and you will find a wealth of literature studying those phenomena on sound signals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }