Convergence of series $\sum\limits_{n=1}^{\infty} \frac{((n+1)!)^n}{2!\cdot 4!\cdot \ldots \cdot (2n)!}$ I am trying to show that the following series is absolutely convergent: $$\sum\limits_{n=1}^{\infty} \frac{((n+1)!)^n}{2!\cdot 4!\cdot \ldots \cdot (2n)!}$$ After writing the denominator as $\prod\limits_{k=1}^{n}(2k)!$ I tried applying the ratio test, which led to $$\frac{((n+2)!)^{n+1}}{\prod\limits_{k=1}^{n+1} (2k)!}\cdot \frac{\prod\limits_{k=1}^{n}(2k)!}{((n+1)!)^n}=\frac{((n+1)!)^{n+1}\cdot (n+2)^{n+1}}{(2(n+1))!\cdot ((n+1)!)^n}=\frac{(n+2)^{n+1}\cdot(n+1)!}{(2n+2)!}$$ Next I wanted to calculate the limit of this as $n\to\infty$, which is where I got stuck. WolframAlpha tells me that the limit is $0$, but I don't see where to go from here. Also: any input on a different approach without the ratio test is greatly appreciated.
$$R_n=\frac{(n+2)^{n+1}\,(n+1)!}{(2n+2)!}$$ $$\log(R_n)=(n+1)\log(n+2)+\log((n+1)!)-\log((2n+2)!)$$ Using Stirling's approximation twice and continuing with a Taylor expansion, $$\log(R_n)=n (1-2\log (2))+\left(2-\frac{5 \log (2)}{2}\right)+O\left(\frac{1}{n}\right)$$ $$R_n\sim \frac{e^{n+2}}{2^{\frac{4n+5}{2}} } \quad \to \quad 0$$
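As a numerical sanity check, $\log(R_n)$ can be evaluated exactly with `lgamma` and compared with the asymptotic equivalent above (a quick sketch; the function names are mine):

```python
from math import lgamma, log, exp

def log_Rn(n):
    # log R_n = (n+1) log(n+2) + log((n+1)!) - log((2n+2)!)
    # using lgamma(m + 1) = log(m!)
    return (n + 1) * log(n + 2) + lgamma(n + 2) - lgamma(2 * n + 3)

def log_asym(n):
    # log of the asymptotic equivalent e^(n+2) / 2^((4n+5)/2)
    return (n + 2) - (4 * n + 5) / 2 * log(2)

for n in (5, 10, 20, 30):
    print(n, exp(log_Rn(n)), exp(log_asym(n)))
```

The two columns agree to a few percent already at moderate $n$, and both tend to $0$.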
{ "language": "en", "url": "https://math.stackexchange.com/questions/4379817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find remainder of large non-exponential integer divided by 180 Find the remainder when $12345678910111213...20172018$ is divided by $180$. $$12345678910111213...20172018 \quad \text{mod} \quad 180$$ The large dividend is formed by writing all the natural numbers up to 2018 together. I've read some similar posts on this site, and they took an approach something like this: $$\text{Let } a=12345678910111213...20172018$$ $$180 =4\times5\times9$$ $$a\text{ mod }4 = 2$$ $$a\text{ mod }5 = 3$$ $$a\text{ mod }9 = 3$$ So $$a\text{ mod }180 = 2\times3\times3=18$$ I know there is definitely something wrong with my method, though I can't recall where I saw it. Thanks for your help!
You are right about the three congruences: $a\equiv 2\pmod 4$, $a\equiv 3\pmod 5$, $a\equiv 3\pmod 9$. Next you need to use the Chinese remainder theorem (twice), using Bézout identities. First $$4\cdot(-1)+1\cdot 5=1,$$ so $$a\equiv 4\cdot(-1)\cdot 3+1\cdot 5\cdot 2=-2 \pmod{4\cdot 5}.$$ Then $$20\cdot(-4)+9\cdot 9=1,$$ so $$a\equiv 20\cdot(-4)\cdot 3+9\cdot 9\cdot(-2)=-402\equiv 138 \pmod{20\cdot 9}.$$ Answer: $138$.
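Since the dividend has only a few thousand digits, the whole thing can also be checked directly in Python, which handles the integer exactly (a quick sketch):

```python
# the dividend: digits of 1, 2, ..., 2018 written one after another
s = "".join(str(k) for k in range(1, 2019))
a = int(s)

print(a % 4, a % 5, a % 9)   # the three congruences
print(a % 180)               # the remainder
```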
{ "language": "en", "url": "https://math.stackexchange.com/questions/4379961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question about posterior density This exercise is the first question from the book Bayesian Data Analysis. Question: Conditional probability: suppose that if θ=1, then y has a normal distribution with mean 1 and standard deviation σ, and if θ=2, then y has a normal distribution with mean 2 and standard deviation σ. Also, suppose Pr(θ=1)=0.5 and Pr(θ=2)=0.5. Describe how the posterior density of θ changes in shape as σ is increased and as it is decreased. I know how the posterior density is calculated. The solution is: As σ → ∞, the posterior density for θ approaches the prior (the data contain no information): Pr(θ = 1|y = 1) → 1/2=0.5. As σ → 0, the posterior density for θ becomes concentrated at 1: Pr(θ = 1|y = 1) → 1. But how can I get this solution in a precise way? Currently the solution is stated intuitively, which is helpful but not precise.
Let $\phi(x;m,\sigma):=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\tfrac{(x-m)^2}{2\sigma^2}}$. We are given $$f(x|\theta)=\phi(x;1,\sigma)\mathbb{1}_{\{1\}}(\theta)+\phi(x;2,\sigma)\mathbb{1}_{\{2\}}(\theta)$$ and a prior distribution $$\pi(\theta)=\frac{1}{2}\Big(\mathbb{1}_{\{1\}}(\theta)+\mathbb{1}_{\{2\}}(\theta)\Big)$$ The posterior distribution is given by $$\pi(\theta|x)=\frac{f(x|\theta)}{m(x)}\pi(\theta)$$ where $m(x)=\int_\Theta f(x|\theta)\,\pi(d\theta)$, the marginal of $x$, can be considered as a normalizing factor. In our case, \begin{align}\pi_\sigma(\theta|x)&=\frac{\phi(x;1,\sigma)\mathbb{1}_{\{1\}}(\theta) + \phi(x;2,\sigma)\mathbb{1}_{\{2\}}(\theta)}{\phi(x;1,\sigma) + \phi(x;2,\sigma)}\\ &=\frac{e^{-\tfrac{(x-1)^2}{2\sigma^2}}\mathbb{1}_{\{1\}}(\theta) + e^{-\tfrac{(x-2)^2}{2\sigma^2}}\mathbb{1}_{\{2\}}(\theta)}{e^{-\tfrac{(x-1)^2}{2\sigma^2}} + e^{-\tfrac{(x-2)^2}{2\sigma^2}}} \tag{1}\label{one}\end{align} * *As $\sigma\rightarrow\infty$, $\exp\big(-\frac{(x-m)^2}{2\sigma^2}\big)\rightarrow1$, whence we get that $$\begin{align}\lim_{\sigma\rightarrow\infty}\pi_\sigma(\theta|x)=\frac12\Big(\mathbb{1}_{\{1\}}(\theta)+\mathbb{1}_{\{2\}}(\theta)\Big)=\pi(\theta)\end{align}$$ *For the case $\sigma\rightarrow0$ we will use the following observation: Let $a,b>0$. Then, $$\lim_{t\rightarrow\infty}\frac{e^{-at}}{e^{-bt}+e^{-at}}=\frac12\mathbb{1}(a=b)+\mathbb{1}(a<b)$$ Applying this to \eqref{one} with $t=\frac{1}{2\sigma^2}$, $a=(x-1)^2$ and $b=(x-2)^2$ (so that $\sigma\rightarrow0$ means $t\rightarrow\infty$) yields $$\lim_{\sigma\rightarrow0}\pi_\sigma(\theta|x)=\left\{ \begin{matrix}\mathbb{1}_{\{1\}}(\theta) &\text{if} & x<3/2\\ \frac12\big(\mathbb{1}_{\{1\}}(\theta)+\mathbb{1}_{\{2\}}(\theta)\big) & \text{if} & x=\frac32\\ \mathbb{1}_{\{2\}}(\theta) &\text{if} & x>3/2 \end{matrix} \right. $$
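Both limits are easy to see numerically; here is a small sketch (function name is mine) computing $\pi_\sigma(\theta=1\mid x)$ directly from $(1)$:

```python
from math import exp

def posterior_theta1(x, sigma):
    # Pr(theta = 1 | x) from equation (1); working with the difference of
    # the two exponents keeps the computation stable for small sigma
    d = ((x - 2) ** 2 - (x - 1) ** 2) / (2 * sigma ** 2)
    return 1.0 / (1.0 + exp(-d))

print(posterior_theta1(1.0, 100.0))   # large sigma: near the prior 1/2
print(posterior_theta1(1.0, 0.1))     # small sigma, x < 3/2: near 1
print(posterior_theta1(1.5, 0.01))    # x = 3/2: exactly 1/2 for any sigma
```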
{ "language": "en", "url": "https://math.stackexchange.com/questions/4380402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
a basic question about complex integrals Let $$ \gamma_1:[a,b]\to\mathbb C,\qquad \gamma_2:[a,b]\to\mathbb C $$ be two one-to-one, continuously differentiable parametrizations of the same (oriented) curve $\Gamma$. In addition, suppose that $f(z)$ is a complex function which is continuous on a region containing $\Gamma$. I know that if there exists a strictly increasing and continuously differentiable function $\alpha:[a,b]\to[a,b]$ such that $\gamma_2(t)=\gamma_1(\alpha(t))$ for every $t\in[a,b]$, then $$ (\ast)\qquad \int_{\gamma_1}f(z)dz=\int_{\gamma_2}f(z)dz. $$ My question: Can we deduce $(\ast)$ merely by the fact that $\gamma_1,\gamma_2$ are parametrizations of the same curve? Is the existence of $\alpha$ not guaranteed?
Yes, since if $\gamma_1$ and $\gamma_2$ are one-to-one, continuously differentiable parametrizations of the same oriented curve $\Gamma$, you can take $\alpha : [a,b] \to [a,b],\; \alpha = \gamma_1^{-1}\circ \gamma_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4380586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Meteor probability question, assuming that at most one meteor can fall or more than one meteor can fall I know there have been similar questions on Math SE, but this has been bothering me for quite a while. Let's say you have the following question, but solve it with two different assumptions: In the first hour, there is a $10\%$ chance that a meteor will fall into Earth. Assuming that the probability is uniform throughout (not too sure how to say this properly, but I'm trying to say that a meteor has equal chance of falling at any time), what is the probability that a meteor falls within the first $30$ minutes? The two assumptions are: * *More than $1$ meteor can fall into Earth within that one hour *At most $1$ meteor can fall into Earth within that one hour Solving the problem with assumption 1: Let the probability of a meteor falling within the first $30$ minutes be $p$. Then, we can set up the following equation: $$\begin{align}P(\text{none fall in first 30 mins.})\cdot P(\text{none fall in last 30 mins.})&=P(\text{none fall in first hour})\\(1-p)^2&=0.9\\p&\approx5.13\%\end{align}$$ Solving with assumption 2: Let the probability of a meteor falling within the first $30$ minutes be $p$. Then, we can set up the following equation: $$\begin{align}P(\text{one falls in first 30 mins.})+P(\text{one falls in last 30 mins.})-P(\text{one falls in each 30 minute interval})&=P(\text{one falls in first hour})\\p+p-p^2&=0.1\\p&\approx5.13\%\end{align}$$ I feel like the two assumptions should produce different answers, but I can't pinpoint where the problem is.
Your answer re the 2nd assumption is wrong. There is a $10\%$ chance of the meteor falling in 1 hour. If this meteor falls in the 1st 30 minutes, then, by the 2nd assumption, it is impossible for a 2nd meteor to fall anytime within the hour. So, the events that a meteor falls within the 1st 30 minutes and a meteor falls within the 2nd 30 minutes are : * *not independent, so $p(E_1, E_2) \neq p(E_1) \times p(E_2).$ *and actually disjoint, and therefore additive So, there is a $5\%$ chance of the meteor falling within the 1st 30 minutes and a $5\%$ chance of the meteor falling within the 2nd 30 minutes.
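For assumption 1, the independence of the two half-hours used in $(1-p)^2=0.9$ is exactly what you get if meteors arrive as a Poisson process; a tiny check of that reading (a sketch, and the Poisson framing is my own):

```python
from math import exp, log, sqrt

# Poisson-process reading of assumption 1: disjoint time windows are
# independent. Calibrate the hourly rate to P(>= 1 in an hour) = 0.1:
lam = -log(0.9)
p_half = 1 - exp(-lam / 2)      # P(>= 1 meteor in a 30-minute window)

print(p_half)
print(1 - sqrt(0.9))            # the solution of (1 - p)^2 = 0.9
```

Both prints agree (about $5.13\%$), while assumption 2 gives exactly $5\%$ as argued above.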
{ "language": "en", "url": "https://math.stackexchange.com/questions/4380738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve tangent of $y=\left(\log_{a}{x}\right)^2$ and $y=-ax+2$ How can I solve the tangent point and $a$ when $f(x)=\left(\log_{a}{x}\right)^2$ is tangent to $g(x)=-ax+2$? Although this can be solved by substituting $a=e^2$ and $x=e^{-2}$, then $f\left(e^{-2}\right)=g\left(e^{-2}\right)$ and $f^\prime\left(e^{-2}\right)=g^\prime\left(e^{-2}\right)$ can be proved, is there any general solution for this, rather than just substituting random numbers? Maybe using Lambert-W could help?
One cannot attack this question purely algebraically (there is no general closed form for $x$ without $a$): setting $(\log_a{x})^2 = -ax+2$ eventually leads to a form like $f(x)e^{f(x)} = g(a)$ or $h(x)^{t(x)} = r(a)^{p(a)}$, both of which need the Lambert-W function to continue. Instead, we restrict the domain and analyse the nature of the solutions. This goes as follows: Analysing the domain and nature of the functions Let $f(x) = (\log_a{x})^2 = \big(\frac{\ln{x}}{\ln{a}}\big)^2$, so $f'(x) = \frac{2\ln{x}}{x\cdot\ln^2{a}}$, with $a>0, a \neq 1$, and $g(x) = -ax+2$, so $g'(x) = -a$. Setting $f'(x) = 0$, we see that the function has a minimum at $x = 1$ regardless of $a$; it is a minimum because $f$ has the upward-parabolic form $t^2$ in $t = \log_a x$. From this we can judge that $f(x)$ is strictly decreasing for $x \in (0, 1)$ and strictly increasing for $x > 1$ $[1]$ Since $g'(x) = -a$ and $a > 0, \implies g'(x) < 0 \implies g(x)$ is strictly decreasing $[2]$ Comparing $[1]$ and $[2]$ we observe that a point of tangency can occur for $\color{red}{x \in (0, 1)}$ only. Analysing the nature of solutions to $f(x) = g(x)$ We observed that $f'(x), g'(x) < 0, \text{ } x \in (0,1)$. This suggests only $1$ solution to $$\{x\} \begin{cases} f(x) = g(x) \\ f'(x) = g'(x) \end{cases}$$ which is the point of tangency. Now, observe that $f(x) = g(x) = 1$ always has the solution $x=\frac{1}{a}$, regardless of $a$: $1 = -ax + 2 \to x = \frac{1}{a} \text{ and } f(\frac{1}{a}) = 1$ Hence, $\exists\{x_T, a_T\}$ such that $f(x_T) = g(x_T) = 1$ and $f'(x_T) = g'(x_T)$ and $\{x_T, a_T\}$ is the only set of tangent solution, with $x_T = \frac{1}{a_T}$ Therefore, $$f'(\frac{1}{a}) = g'(\frac{1}{a})$$ $$\frac{2\ln{\frac{1}{a}}}{\frac{1}{a}\big(\ln{a}\big)^2} = -a$$ $$-\frac{2\ln{a}}{\big(\ln{a}\big)^2} = -1$$ $$\frac{2}{\ln{a}} = 1$$ $$\therefore a = e^2 \text{ and } x = \frac{1}{e^2} = e^{-2} \in (0, 1) \text{ as required }$$
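A quick numerical check of the tangency conditions at $a=e^2$, $x=e^{-2}$ (a sketch; the function names are mine):

```python
from math import exp, log

a = exp(2)           # the derived parameter a = e^2
x = exp(-2)          # the derived tangency point x = e^-2

def f(t):
    return (log(t) / log(a)) ** 2

def g(t):
    return -a * t + 2

def f_prime(t):
    return 2 * log(t) / (t * log(a) ** 2)

print(f(x), g(x))        # the curves meet (both equal 1)
print(f_prime(x), -a)    # with equal slopes (both equal -e^2)
```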
{ "language": "en", "url": "https://math.stackexchange.com/questions/4380862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Using addition formulas brings weird results So, I have this set of equations that I want to solve for $x,y$ $$ A = \tanh(x+y) \\ B = \tanh(x-y) $$ and of course it can be solved by inverting and then summing/differencing, so I get $$ x = \dfrac{1}{2}\bigg(\operatorname{arctanh}A + \operatorname{arctanh}B \bigg) \\ y = \dfrac{1}{2}\bigg(\operatorname{arctanh}A - \operatorname{arctanh}B \bigg) $$ But what if I want to use the addition formula (and I need to, because this is just a short version of the equations that I have, where the above method cannot be used), writing $$ A = \dfrac{\tanh x + \tanh y}{1+\tanh x \tanh y} \\ B = \dfrac{\tanh x - \tanh y}{1-\tanh x \tanh y} $$ Now I get a quadratic equation for $\tanh x$ and I didn't figure out how to show that it gives the same result. I tried using identities and also plotting the two results but they are different. Here is the quadratic equation that I have: $$ \tanh y = \dfrac{A - \tanh x}{1-A \tanh x} \\ B \bigg[ 1 - \tanh x \dfrac{A -\tanh x}{1- A\tanh x} \bigg] = \tanh x - \dfrac{A -\tanh x}{1- A\tanh x} $$ and then finally for $\tanh x$ $$ (A+B) \tanh ^2 x -2 \tanh x(A+1) + A+B=0 $$
I think your original problem, with equations like $$A = \tanh (x+y) + (1-p) \tanh x$$ is more complicated than the simplified problem. Look at what the solutions can be (using GeoGebra) for $A=0.6, B=0.3, p=0.2$, for example (are these the intended equations?).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4380979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that if $\sum_{i=1}^n\lambda_i|x-a_i|=0$ then $\lambda_1=\lambda_2=\cdots=\lambda_n=0$ Let $ n\in \Bbb N $ and $ (a_1,a_2,\cdots,a_n)\in \Bbb R^n $ such that $$a_1<a_2<\cdots <a_n$$ Prove that if there exist $(\lambda_1,\lambda_2,\cdots,\lambda_n)\in \Bbb R^n $ satisfying $$(\forall x\in\Bbb R)\; \sum_{i=1}^n\lambda_i|x-a_i|=0$$ then $$\lambda_1=\lambda_2=...=\lambda_n=0$$ I tried induction in vain. Any idea will be appreciated.
Denote by $P(x)$ the function $\sum_{i=1}^n \lambda_i |x-a_i|$. For each $i\in [n]$, choose $\epsilon_i>0$ small enough that $a_{i-1}<a_i-\epsilon_i$ and $a_i+\epsilon_i<a_{i+1}$ (where we can treat $a_0$ and $a_{n+1}$ as very small and very large numbers), for example $\epsilon_i=\text{min}\left(\frac{a_{i+1}-a_i}{2},\frac{a_i-a_{i-1}}{2}\right)$. For every $j\neq i$ the function $x\mapsto|x-a_j|$ is linear on $[a_i-\epsilon_i,a_i+\epsilon_i]$, so its second difference $|a_i+\epsilon_i-a_j|+|a_i-\epsilon_i-a_j|-2|a_i-a_j|$ vanishes, while for $j=i$ it equals $2\epsilon_i$. We can deduce that $$P(a_i+\epsilon_i)+P(a_i-\epsilon_i)-2P(a_i)=2\lambda_i\epsilon_i=0$$ So it follows that $\forall i\in [n], \lambda_i=0$
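The second-difference identity $P(a_i+\epsilon)+P(a_i-\epsilon)-2P(a_i)=2\lambda_i\epsilon$ (for $\epsilon$ small enough that no other $a_j$ lies in the interval) is easy to check numerically for an arbitrary choice of $\lambda$; here is a sketch with arbitrary test data:

```python
import random

random.seed(0)
a = sorted(random.sample(range(-50, 50), 6))      # a_1 < a_2 < ... < a_6
lam = [random.uniform(-2, 2) for _ in a]          # arbitrary lambda_i

def P(x):
    return sum(l * abs(x - ai) for l, ai in zip(lam, a))

for i, ai in enumerate(a):
    # keep (a_i - eps, a_i + eps) clear of the other kink points
    eps = 0.25 * min(abs(ai - aj) for j, aj in enumerate(a) if j != i)
    second_diff = P(ai + eps) + P(ai - eps) - 2 * P(ai)
    assert abs(second_diff - 2 * lam[i] * eps) < 1e-9
print("second-difference identity verified")
```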
{ "language": "en", "url": "https://math.stackexchange.com/questions/4381203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Heat transfer between two fluids through a sandwiched solid (coupled problem) Two fluids ($t_h,t_c$) flow opposite to each other on either side of a solid ($T$), while exchanging heat among themselves. In such a scenario, the conduction in the solid is governed by: $$x\in[0,1], y\in[0,1]$$ $$\kappa \frac{\mathrm{d}^2 T}{\mathrm{d} x^2} + \mu b_h(t_h-T) - \nu b_c(T-t_c)=0 \tag1$$ with boundary condition as $T'(0)=T'(1)=0$. The fluids are governed by the following equations: $$\frac{\mathrm{d} t_h}{\mathrm{d} x}+b_h(t_h-T)=0\tag2$$ $$\frac{\mathrm{d} t_c}{\mathrm{d} x}+b_c(T-t_c)=0\tag3$$ The hot fluid initiates at $x=0$ and the cold fluid starts from $x=1$. The boundary conditions are $t_h(x=0)=1$ and $t_c(x=1)=0$. Equation $(1),(2)$ and $(3)$ form a coupled system of ordinary differential equations. It is pretty evident that using $(2)$ and $(3)$, Equation $(1)$ can be re-written as: $$\kappa \frac{\mathrm{d}^2 T}{\mathrm{d} x^2} - \mu \frac{\mathrm{d} t_h}{\mathrm{d} x} + \nu \frac{\mathrm{d} t_c}{\mathrm{d} x}=0 \tag4$$ However, I have not been able to proceed further. Some parameter values are $b_c=12.38, b_h=25.32, \mu=1.143, \nu=1, \kappa=2.16$.
Hint. Calling $T_1 = T$ and $T_2=T'_1$ we have $$ \left( \begin{array}{c} T'_1\\ T'_2\\ t'_h\\ t'_c \end{array} \right) = \left( \begin{array}{cccc} 0 & 1 & 0 & 0\\ \frac{\mu b_h}{\kappa}+\frac{\nu b_c}{\kappa} & 0 & -\frac{\mu b_h}{\kappa} & -\frac{\nu b_c}{\kappa}\\ b_h & 0 & -b_h & 0\\ -b_c & 0 & 0 & b_c \end{array} \right) \left( \begin{array}{c} T_1\\ T_2\\ t_h\\ t_c \end{array} \right) $$
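To see that this first-order system really reproduces equations $(1)$–$(3)$, one can check the matrix rows against the original equations at an arbitrary state, with the stated parameter values (a plain-Python sketch; the state values are arbitrary):

```python
bc, bh, mu, nu, kappa = 12.38, 25.32, 1.143, 1.0, 2.16

M = [
    [0.0, 1.0, 0.0, 0.0],
    [(mu * bh + nu * bc) / kappa, 0.0, -mu * bh / kappa, -nu * bc / kappa],
    [bh, 0.0, -bh, 0.0],
    [-bc, 0.0, 0.0, bc],
]

T1, T2, th, tc = 0.7, -0.3, 1.0, 0.2          # an arbitrary state
v = [T1, T2, th, tc]
rhs = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# T'' solved from (1), and the right-hand sides of (2) and (3)
T_dd = -(mu * bh * (th - T1) - nu * bc * (T1 - tc)) / kappa
print(rhs)
assert abs(rhs[0] - T2) < 1e-12
assert abs(rhs[1] - T_dd) < 1e-12
assert abs(rhs[2] - (-bh * (th - T1))) < 1e-12
assert abs(rhs[3] - (-bc * (T1 - tc))) < 1e-12
```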
{ "language": "en", "url": "https://math.stackexchange.com/questions/4381615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What can you tell about the span of the following union set? Consider $\mathbb{C}^5$ and the following three sets, each containing two orthogonal vectors: $A=\{a_1,a_2 \}$, $B=\{b_1,b_2 \}$ and $C=\{c_1,c_2 \}$. All these vectors are different and non-zero. Additionally, vectors from different sets are not orthogonal. Clearly, each set spans a two-dimensional subspace of $\mathbb{C}^5$: $\text{dim}(\mathcal{S}_i)=\text{dim}(\text{span}(i))=2$ where $i\in\{A,B,C\}$. I have the following property: the union of each two sets forms a set of linearly independent vectors, i.e. $\text{dim}(\mathcal{S}_{i\cup j})=4$ for $i \neq j$. I would like to show that it is not possible that $\text{dim}(\mathcal{S}_{A\cup B \cup C} )=4$. Does this follow directly from the above (and I'm missing something obvious) or do I need to exploit particular properties of the vectors?
Consider $$A=\{(1,0,0,0,0), (0,1,0,0,0)\}$$ $$B=\{(0,0,1,0,0),(0,0,0,1,0)\}$$ $$C=\{(1,0,1,0,0),(0,1,0,1,0)\}$$
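For completeness, the claimed dimensions in this example can be checked mechanically (a sketch; the `rank` helper over exact rationals is mine):

```python
from fractions import Fraction

def rank(rows):
    # row rank via Gauss-Jordan elimination over exact rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [x - factor * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

A = [(1, 0, 0, 0, 0), (0, 1, 0, 0, 0)]
B = [(0, 0, 1, 0, 0), (0, 0, 0, 1, 0)]
C = [(1, 0, 1, 0, 0), (0, 1, 0, 1, 0)]

# each pairwise union spans dimension 4, and so does the full union
print(rank(A + B), rank(A + C), rank(B + C), rank(A + B + C))
```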
{ "language": "en", "url": "https://math.stackexchange.com/questions/4381886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove or disprove convergence of $\int_0^1 \frac{dx}{x-\sin x}$ Prove or disprove convergence of $\int_0^1\frac{dx}{x-\sin x}$ Because $x=0$ is a problem here: as $x\to0$, $\sin x=x-\frac{x^3}{6}+o(x^3)$, so $\frac{1}{x-\sin x}\approx\frac{6}{x^3+o(x^3)}$. Now how do I get rid of the $o(x^3)$ here to use a comparison test? Or maybe there is another easy way to prove divergence of the given integral.
You don't need to get rid of the $o(x^3)$ term. Your estimate is correct and suggests that the integral behaves roughly like $\int_0^1 \frac{1}{x^3} dx$, which diverges. To prove this rigorously we use the limit comparison test: $$\lim_{x\to 0}\frac{\frac{1}{x-\sin x}}{\frac{1}{x^3}}=\lim_{x\to 0}\frac{x^3}{x-\sin x}=\lim_{x\to 0} \frac{3x^2}{1-\cos x}=\lim_{x\to 0}\frac{6x}{\sin x}=6$$ where L'Hôpital's rule was used a few times. This justifies the intuition and proves that your integral diverges, since $\int_0^1 \frac{1}{x^3} dx$ diverges.
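Numerically the ratio approaches $6$ quickly, consistent with the limit above (a quick sketch):

```python
from math import sin

# x^3 / (x - sin x) should tend to 6 as x -> 0
for x in (0.5, 0.1, 0.01):
    print(x, x**3 / (x - sin(x)))
```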
{ "language": "en", "url": "https://math.stackexchange.com/questions/4382094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Counting problem about Palindromes Consider the set of four digit sequences $d_1d_2d_3d_4$, where $d_i\in\{0,1,\ldots,9\}$. (a) What is the number of four digit sequences which contain no palindromic subsequence? For example, the sequence $1231$ is legal but $1213$ is not. We note that a subsequence such as $22$ is considered palindromic. (b) Can the solution of this problem be generalized easily to $n$-digit sequences? My attempt: First we solve the problem for $3$ digits and then use the complement principle.
Let's say $S$ is a string of length $n$ which satisfies the "no palindromic subsequence" rule. If we want to extend this to length $n+1$ by adding one character to the end, what can we do: * *The only way of violating this rule by adding a character to the end is if the last two letters become a palindrome (XX), or the last 3 letters (XYX) become one. (Note that the last 4 "XYYX" can't become a palindrome by this addition because $S$ had no palindromic subsequence, so it can't have a "YY".) *So the new letter has exactly $8$ options. The same thing is true for every such string $S$ of length $n$. Also, any string of length $n+1$ has to be able to be constructed this way. So you should be able to construct the recursion $ans_n = ans_{n-1}\cdot 8$ for $n\geq 3$, with $ans_1 = 10$ and $ans_2 = 90$.
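Reading "palindromic subsequence" as contiguous blocks, as the examples in the question suggest, a brute-force check of the length-4 case agrees with $10 \cdot 9 \cdot 8 \cdot 8 = 5760$ (my own script):

```python
from itertools import product

def has_palindromic_block(s):
    # every palindrome of length >= 2 contains a length-2 (XX) or
    # length-3 (XYX) palindrome at its centre, so checking those suffices
    return any(s[i] == s[i + 1] for i in range(len(s) - 1)) or \
           any(s[i] == s[i + 2] for i in range(len(s) - 2))

count4 = sum(1 for s in product(range(10), repeat=4)
             if not has_palindromic_block(s))
print(count4)   # 10 * 9 * 8 * 8
```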
{ "language": "en", "url": "https://math.stackexchange.com/questions/4382270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Under what conditions may the left inverse of a morphism (which is not the identity) be implied from the following relation? Suppose we are given a category C and objects $c_{0}, c_{1}$ and morphisms $f: c_{0} \rightarrow c_{1}$ and $g: c_{1} \rightarrow c_{0}.$ Assume that $g \circ f = id_{c_{0}}$ and we have the equation $$f\circ g \circ f = id_{c_{1}}f.$$ Can we infer that $f\circ g = id_{c_{1}}$ from this? Otherwise what conditions would give rise to such a statement? I am uncertain about what we can infer here.
Well because $(g \circ f) = id_{c_0}$, we know that $g$ is a retraction and $f$ is a section. So then the question is what can we say in general about $f \circ g$ from this? Well in that case we know that $f \circ g$ must be idempotent, because $(f \circ g) \circ ( f \circ g) = f \circ (g \circ f) \circ g = f \circ g$. Likewise $f \circ g \circ f = f \circ (g \circ f) = f \circ id_{c_0} = id_{c_1} \circ f = f$. So the identity you claim is always true, but in general that is the most we can say, i.e. there may be other idempotent morphisms $c_1 \to c_1$ besides $id_{c_1}$, so knowing that $(f \circ g)$ is idempotent is not enough to say that it is the identity on $c_1$. OK, so then the question reduces to: when is an idempotent morphism $p: c_1 \to c_1$ actually the identity morphism? Well note that if $p$ is the identity morphism, then clearly $p \circ p = id_{c_1}$, i.e. $p^{-1}$ exists and $p=p^{-1}$, so that is necessary. At the same time, $p^{-1}$ exists and $p = p^{-1}$ is also sufficient, because then for any morphism $h: c_2 \to c_1$, we have $h = id_{c_1} \circ h = p^{-1} \circ p \circ h = p \circ p \circ h = p \circ h$. Likewise you can show for any morphism $k: c_1 \to c_3$ that $k = k \circ p$. So then $p$ must equal $id_{c_1}$. Also note that when we have that $f \circ g = id_{c_1}$, then $f$ is a retraction and $g$ is a section, i.e. $f$ is both a retraction and a section (so an isomorphism) and $g$ is both a retraction and a section (so an isomorphism), i.e. $f$ and $g$ are both isomorphisms. So this would also be a necessary and sufficient characterization. In any case I would really recommend reading Conceptual Mathematics by Lawvere and Schanuel (here is a link for free, or if you want to buy a copy of the book). That's where I learned all of this and it has a lot of really easy but still really helpful practice problems, plus clarifies the intuition really well. In particular check out section 5 of Part II.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4382421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_{0}^{\infty} \frac{\ln x}{\left(x^{2}+1\right)^{n}} d x$. Latest Edit: Thanks to the contributions of the answerers, we finally get the closed form for the integral: $$\boxed{\int_{0}^{\infty} \frac{\ln x}{(x^{2}+1)^n} d x =-\frac{\pi(2 n-3) ! !}{2^{n}(n-1) !} \sum_{j=1}^{n-1} \frac{1}{2j-1}}$$ I first evaluate $$I_1=\int_{0}^{\infty} \frac{\ln x}{x^{2}+1} d x \stackrel{x\mapsto\frac{1}{x}}{=} -I_1 \Rightarrow I_1= 0.$$ and then start to raise up the power of the denominator $$I_n=\int_{0}^{\infty} \frac{\ln x}{(x^{2}+1)^n} d x .$$ In order to use differentiation, I introduce a more general integral $$I_n(a)=\int_{0}^{\infty} \frac{\ln x}{(x^{2}+a)^n} d x. $$ Now we can start with $I_1(a)$. Using $I_1=0$ yields $$\displaystyle I_1(a)=\int_{0}^{\infty} \frac{\ln x}{x^{2}+a} d x \stackrel{x\mapsto\sqrt{a}\,x}{=} \frac{\pi \ln a}{4 \sqrt a} \tag*{}$$ Now we are going to deal with $I_n$ by differentiating it $(n-1)$ times $$ \frac{d^{n-1}}{d a^{n-1}} \int_{0}^{\infty} \frac{\ln x}{x^{2}+a} d x=\frac{\pi}{4} \frac{d^{n-1}}{d a^{n-1}}\left(\frac{\ln a}{\sqrt a}\right) $$ $$ \int_{0}^{\infty} \ln x\left[\frac{\partial^{n-1}}{\partial a^{n-1}}\left(\frac{1}{x^{2}+a}\right)\right] d x=\frac{\pi}{4} \frac{d^{n-1}}{d a^{n-1}}\left(\frac{\ln a}{\sqrt a}\right) $$ $$ \int_{0}^{\infty} \frac{\ln x}{\left(x^{2}+a\right)^{n}} d x=\frac{(-1)^{n-1} \pi}{4(n-1) !} \frac{d^{n-1}}{d a^{n-1}}\left(\frac{\ln a}{\sqrt{a}}\right) $$ In particular, when $a=1$, we get a formula for $$ \boxed{\int_{0}^{\infty} \frac{\ln x}{\left(x^{2}+1\right)^{n}} d x=\left.\frac{(-1)^{n-1} \pi}{4(n-1)!} \frac{d^{n-1}}{d a^{n-1}}\left(\frac{\ln a}{\sqrt{a}}\right)\right|_{a=1}} $$ For example, $$ \int_{0}^{\infty} \frac{\ln x}{\left(x^{2}+1\right)^{5}} d x=\frac{\pi}{4 \cdot 4 !}(-22)=-\frac{11 \pi}{48} $$ and $$ \int_{0}^{\infty} \frac{\ln x}{\left(x^{2}+1\right)^{10}} d x=\frac{-\pi}{4(9 !)}\left(\frac{71697105}{256}\right)=-\frac{1593269 \pi}{8257536} $$ which is checked by WA.
MY question Though a formula for $I_n(a)$ was found, the last derivative is hard and tedious. Is there any formula for $$\frac{d^{n-1}}{d a^{n-1}}\left(\frac{\ln a}{\sqrt{a}}\right)? $$
Use $(a^x)'=a^x\ln a$ to evaluate \begin{align}\frac{d^{n}}{d a^{n}}\left(\frac{\ln a}{\sqrt{a}}\right)_{a=1} = &\frac{\partial^{n}}{\partial a^{n}} \left(\frac{\partial a^x}{\partial x}\bigg|_{x=-\frac12}\right)_{a=1} =\frac{\partial}{\partial x} \left(\frac{\partial^{n}a^x}{\partial a^{n}}\bigg|_{a=1}\right)_{x=-\frac12}\\ =& \frac{d}{dx} \bigg(\prod_{k=1}^{n}(x-k+1)\bigg)_{x=-\frac12} = \prod_{k=1}^{n}(\frac12-k)\sum_{j=1}^{n}\frac1{\frac12-j}\\ =&-\frac{(-1)^n (2n-1)!!}{2^{n-1}} \sum_{j=1}^{n}\frac1{2j-1} \end{align} Thus $$ \int_{0}^{\infty} \frac{\ln x}{\left(x^{2}+1\right)^{n+1}} d x=\frac{(-1)^{n} \pi}{4n!} \frac{d^{n}}{d a^{n}}\left(\frac{\ln a}{\sqrt{a}}\right)_{a=1} =-\frac{\pi(2n-1)!!}{2^{n+1}n!} \sum_{j=1}^{n}\frac1{2j-1} $$
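The resulting closed form can be cross-checked exactly against the worked examples using rational arithmetic (a sketch; `I_coeff` is my name for the coefficient of $\pi$ in the boxed formula, with exponent $n$):

```python
from fractions import Fraction
from math import factorial

def double_factorial(m):
    # with the convention (-1)!! = 1, which covers the n = 1 case
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

def I_coeff(n):
    # coefficient of pi in -(2n-3)!! / (2^n (n-1)!) * sum_{j=1}^{n-1} 1/(2j-1)
    s = sum(Fraction(1, 2 * j - 1) for j in range(1, n))
    return -Fraction(double_factorial(2 * n - 3), 2**n * factorial(n - 1)) * s

print(I_coeff(1), I_coeff(5), I_coeff(10))
```

The values reproduce $I_1=0$, $-\frac{11}{48}\pi$ for $n=5$, and $-\frac{1593269}{8257536}\pi$ for $n=10$, as computed above.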
{ "language": "en", "url": "https://math.stackexchange.com/questions/4382586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 1 }
All possible combinations of seven numbers that sum up to a specific value under constraints. I know this (or similar questions) may have already been asked a ton of times, but I couldn't really find a good answer for my specific case, so here it is again. I would like to implement the following formula: $G(0)=G(1)=\cdots=G(5) =0$ $G(N) = G(N-1) + \underbrace{\sum\limits_{a}\sum\limits_{b}\sum\limits_{c}\sum\limits_{d}\sum\limits_{e}\sum\limits_{f}\sum\limits_{g}}_{a+b+c+d+e+f+g=N-6} F(a)F(b)F(c)F(d)F(e)F(f)F(g)$ with $N\in \lbrace 6,7,\ldots,180\cdot 7 -1\rbrace$ and $a,b,\ldots,g\in\lbrace 0,1,\ldots,179\rbrace$, in Python, where $F$ gives a float number between 0 and 1. The problem is that every time I implement it, it doesn't really work due to the high number of loops and so on. My math-related question now is: Is there a way to simplify this formula (so that it somehow doesn't contain so many sums in one go, etc.)? Again, if there is a solution somewhere I would also be glad to just be redirected to it. :D Thanks.
Let $$T(n, k) = \sum\limits_{x_1 + \ldots + x_k = n} \prod\limits_{i=1}^k F(x_i)$$ Then your expression can be rewritten as $G(N) = G(N - 1) + T(N - 6, 7)$. And $T(n, k)$ can be computed in $O(n^2k)$ operations using standard dynamic programming or recursion with memoization, as we have $$T(n, k + 1) = \sum\limits_{x_{k + 1} = 0}^{179} F(x_{k + 1}) \cdot T(n - x_{k + 1}, k)$$
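A direct implementation of this recursion might look as follows (my own sketch; `F` is a constant placeholder table here, since the actual values weren't given):

```python
def make_T(F, max_n, max_k):
    # T[n][k] = sum over x_1 + ... + x_k = n (each x_i in range(len(F)))
    # of F[x_1] * ... * F[x_k]
    L = len(F)
    T = [[0.0] * (max_k + 1) for _ in range(max_n + 1)]
    for n in range(min(L, max_n + 1)):
        T[n][1] = F[n]                       # base case: a single summand
    for k in range(1, max_k):
        for n in range(max_n + 1):
            T[n][k + 1] = sum(F[x] * T[n - x][k] for x in range(min(L, n + 1)))
    return T

F = [0.5] * 180                 # placeholder values for F(0), ..., F(179)
T = make_T(F, 180 * 7 - 7, 7)

G = [0.0] * (180 * 7)           # G(0), ..., G(5) are 0
for N in range(6, 180 * 7):
    G[N] = G[N - 1] + T[N - 6][7]
print(G[6], G[180 * 7 - 1])
```

This runs in roughly $1254 \times 6 \times 180 \approx 1.4$ million inner operations, versus the seven nested loops of the original formula.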
{ "language": "en", "url": "https://math.stackexchange.com/questions/4382764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the group associated with a given symmetry I have been doing some research as a theoretical physicist and I came across a physical system where any even number of quantum deformations of space leaves the system unchanged. Hence two deformations is physically the same as, say, ten, or any other even number, so that changing between them doesn't change the physical situation. Is there any Lie group that can be used to encode this symmetry? Thanks in advance
From a group-theory perspective it seems to me that you are asking about an abstract group $G$ having the property that the product of every even length sequence of nontrivial elements of $G$ is equal to the identity element of $G$. Any such group $G$ is isomorphic to the order 2 cyclic group. Ordinarily one doesn't think of $G$ as a Lie group, although one could certainly represent $G$ inside some Lie groups. Many Lie groups contain elements of order $2$. For example, in the Lie group $GL(n,\mathbb C)$ you can take $M$ to be any diagonalizable matrix all of whose eigenvalues are either $+1$ or $-1$, with at least one $-1$ eigenvalue, in which case $M^2=I$, the identity matrix, but $M \ne I$. Letting $G = \{M,I\}$ you get a subgroup $G$ of $GL(n,\mathbb C)$ that is isomorphic to the order 2 cyclic group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4382881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question About Gradient Descent Gradient descent is a numerical optimization method for finding a local/global minimum of a function. It is given by the following formula: $$ x_{n+1} = x_n - \alpha \nabla f(x_n) $$ For the sake of simplicity let us take a one-variable function $f(x)$. In that case, the gradient becomes the derivative $\frac {df} {dx} $ and the formula for gradient descent becomes: $$ x_{n+1} = x_n - \alpha \frac {df} {dx} $$ My question is: How can we get the new iterate $x_{n+1}$ from the rate of change of $f$? The gradient defines both the direction and the magnitude of the biggest increase of $f$ at a certain point, not how much $x$ changes, and so it makes no sense to me that we use it in the formula to compute new values of $x$.
The intuitive notion behind this choice is that a graph is usually steeper farther away from an extreme and flatter closer to an extreme. So if the gradient is large (as in far away from $0$), then we are presumably far away from an extreme, and we can take a large step without worrying too much about stepping past the extreme we're looking for. If the gradient is small (as in close to $0$), then presumably we are getting close to a local extreme, so we reduce our step size accordingly so that we don't stray too far away. Using the gradient directly rather than trying to make a more qualified guess as to how far away the extreme is makes this algorithm easy to calculate, while at the same time it turns out to give decent results. A lot rides on the choice of $\alpha$, though. Which technically doesn't have to be constant.
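A minimal illustration on $f(x) = x^2$, where $\frac{df}{dx} = 2x$, shows the step size shrinking automatically as the iterates approach the minimum at $0$ (a sketch; the function name is mine):

```python
def gradient_descent(df, x0, alpha, steps):
    x = x0
    for _ in range(steps):
        x = x - alpha * df(x)   # large gradient -> large step, and vice versa
    return x

# f(x) = x^2, so df/dx = 2x: far from 0 the steps are big,
# near 0 they shrink on their own
x_min = gradient_descent(lambda x: 2 * x, x0=5.0, alpha=0.1, steps=100)
print(x_min)
```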
{ "language": "en", "url": "https://math.stackexchange.com/questions/4383015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Probability you end dice rolling sequence with 1-2-3 and odd total number of rolls Here's a question from the AIME competition: Misha rolls a standard, fair six-sided die until she rolls 1-2-3 in that order on three consecutive rolls. The probability that she will roll the die an odd number of times is ${m\over{n}}$ where $m$ and $n$ are relatively prime positive integers. Find $m + n$. This is what I did. Let's add up the following cases with an odd number of rolls, where in each sequence of XXX... there is no subsequence of 123: * *Probability of 123: $({1/6})^3$ *Probability of XX123: $({1/6})^3$ *Probability of XXXX123: $(1/6)^3(1 - 2(1/6)^3)$ *Probability of XXXXXX123: $(1/6)^3(1 - 4(1/6)^3 + (1/6)^6)$ *Probability of XXXXXXXX123: $(1/6)^3 (1 - 6(1/6)^3 + \binom{4}{2} (1/6)^6) = (1/6)^3 (1 - 6(1/6)^3 + 6(1/6)^6)$ *Probability of XXXXXXXXXX123: $(1/6)^3 (1 - 8(1/6)^3 + \binom{6}{2} (1/6)^6 - 4(1/6)^9) = (1/6)^3 (1 - 8(1/6)^3 + 15(1/6)^6 - 4(1/6)^9)$ *Probability of XXXXXXXXXXXX123: $(1/6)^3 (1 - 10(1/6)^3 + \binom{8}{2} (1/6)^6 - \binom{6}{3}(1/6)^9 + (1/6)^{12}) = (1/6)^3 (1 - 10(1/6)^3 + 28 (1/6)^6 - 20(1/6)^9 + (1/6)^{12})$ *$\ldots$and so forth. I notice some obvious patterns, but nonetheless I'm stuck with proceeding further. Any hints towards finding a way to add this all up would be well-appreciated. Please, no complete solutions. Edit: For the record, the correct probability after you add this all up should be ${{216}\over{431}}$. Edit 2: There are a number of solutions to this problem here: https://artofproblemsolving.com/wiki/index.php/2018_AIME_II_Problems/Problem_13 However, all the solutions at the link are extremely clever, whereas my approach is a naive brute force. I would like some hints/suggestions on how to make my brute force approach work.
I note that you want to make your own method work rather than consider other solutions. To see what is happening it is clearer to work with algebra rather than working out numerical values. The basis of your method is the equation $$p_{n}=(\frac{1}{6})^3(1-p_3-p_4- ... -p_{n-3}).$$ The pattern to notice here is that the RHS only changes slightly as $n$ increases. So , for example, $$p_{n+1}=(\frac{1}{6})^3(1-p_3-p_4- ... -p_{n-3}-p_{n-2}).$$ Therefore the simplest form for your equation is $$p_{n+1}=p_{n}-(\frac{1}{6})^3p_{n-2}.$$ We then have $$p_3=(\frac{1}{6})^3,p_5=p_4, p_7=p_6-(\frac{1}{6})^3p_4, p_9=p_8-(\frac{1}{6})^3p_6, ... $$ These equations are much easier to add than the original ones. $$p_3+p_5+p_7+.....=(\frac{1}{6})^3+(p_4+p_6+p_8+...) -(\frac{1}{6})^3(p_4+p_6+p_8+...).$$ Therefore the probability of an odd number of rolls is $\frac{1}{216}$ plus $\frac{215}{216}$ times the probability of an even number of rolls.
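The recurrence $p_{n+1}=p_n-\left(\frac{1}{6}\right)^3 p_{n-2}$ is easy to check numerically. The sketch below truncates at 20000 terms (an arbitrary cutoff; the $p_n$ decay geometrically, so the tail is negligible) and reproduces the known answer $\frac{216}{431}$:

```python
q = 1 / 216                    # probability that three given rolls are 1-2-3
p = [0.0, 0.0, 0.0, q]         # p[n] = P(first 1-2-3 is completed on roll n)
for n in range(3, 20000):
    p.append(p[n] - q * p[n - 2])

odd_total = sum(p[3::2])       # game ends after an odd number of rolls
```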
{ "language": "en", "url": "https://math.stackexchange.com/questions/4383392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Prove that $\sum_{cyc}\frac{(a+b-c)^2}{(a+b)^2+c^2}\ge \frac{3}{5}$ Let $a,b,c$ be positive reals. Prove that $$\sum_{cyc}\frac{(a+b-c)^2}{(a+b)^2+c^2}\ge \frac{3}{5}$$ The inequality is homogeneous, so assume WLOG that $a+b+c=3$; the inequality is then equivalent to $$\sum_{cyc}\frac{(3-2c)^2}{(3-c)^2+c^2} \ge \frac{3}{5}$$ Set $$f(x)= \frac{(3-2x)^2}{(3-x)^2+x^2} $$ We wish to prove that for $0<x<3$ we have $f(x)\ge \frac{1}{5}$... but unfortunately this is false, so what should I do?
Hint: Show $f(x)+\frac{18}{25}(x-1)\geqslant \frac15$ for all $x\in (0,3)$.
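The hint is the tangent-line trick: $f(1)=\frac{1}{5}$, and if $f(x)+\frac{18}{25}(x-1)\ge\frac{1}{5}$ on $(0,3)$, then summing over $x=a,b,c$ with $a+b+c=3$ makes the linear terms cancel and gives the result. A quick numerical check of the hint (the grid spacing is an arbitrary choice):

```python
def f(x):
    return (3 - 2 * x) ** 2 / ((3 - x) ** 2 + x ** 2)

# g(x) = f(x) + (18/25)(x - 1) - 1/5 should be >= 0 on (0, 3),
# with equality only at the tangency point x = 1.
xs = [i / 10000 for i in range(1, 30000)]
min_g = min(f(x) + (18 / 25) * (x - 1) - 1 / 5 for x in xs)
```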
{ "language": "en", "url": "https://math.stackexchange.com/questions/4383599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How was the plus-minus sign removed in these steps for proving a half-angle identity for tangent? How was the plus-minus sign removed in the following solution? (see red rectangle)
The derivation is indeed faulty. You can make it rigorous by avoiding the square roots: \begin{align} \tan^2\frac{\theta}{2} &=\frac{1-\cos\theta}{1+\cos\theta} \\[6px] &=\frac{1-\cos^2\theta}{(1+\cos\theta)^2} \\[6px] &=\frac{\sin^2\theta}{(1+\cos\theta)^2} \end{align} Therefore we have $$ \Bigl\lvert\tan\frac{\theta}{2}\Bigr\rvert=\frac{\lvert\sin\theta\rvert}{1+\cos\theta} $$ (note that $1+\cos\theta>0$, so no absolute value is needed in the denominator) and now we can observe that $\tan(\theta/2)$ and $\sin\theta$ have the same sign for every $\theta$ and so the absolute values can be removed. In a different way: $$ \frac{\sin2\alpha}{1+\cos2\alpha}=\frac{2\sin\alpha\cos\alpha}{1+2\cos^2\alpha-1}=\frac{\sin\alpha}{\cos\alpha}=\tan\alpha $$ Now set $\alpha=\theta/2$.
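A quick numerical spot-check of the resulting identity $\tan\frac{\theta}{2}=\frac{\sin\theta}{1+\cos\theta}$ (sample angles chosen arbitrarily, away from the poles of $\tan$):

```python
import math

angles = [0.3, 1.0, 2.5, -1.2, 4.0]
max_err = max(abs(math.tan(t / 2) - math.sin(t) / (1 + math.cos(t)))
              for t in angles)
```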
{ "language": "en", "url": "https://math.stackexchange.com/questions/4383993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does the limit $\lim_{x \to a}\frac{(x-a)f'(x)}{f(x)}$ mean for non-polynomials The limit $$\lim_{x \to a}\frac{(x-a)f'(x)}{f(x)}$$ has an important meaning for polynomials. Suppose $f(x)=(x-a)^nq(x)$ where $q(a)\neq 0$. Then, the given limit is equivalent to $$\lim_{x \to a}\frac{\{n(x-a)^{n-1}q(x)+(x-a)^nq'(x)\}(x-a)}{(x-a)^nq(x)}=\lim_{x \to a}\frac{nq(x)+(x-a)q'(x)}{q(x)}=n+\lim_{x \to a}\frac{(x-a)q'(x)}{q(x)}=n$$ where the last limit comes from the fact that $q(a)\neq 0$, but the numerator goes to zero. So, for polynomials, the value of the given limit would indicate the algebraic multiplicity of the polynomial of the term $(x-a)$. We could also extend this concept to functions not quite polynomial, but "looks" like a polynomial, for example functions like $x-4\sqrt{x}+3$. This function has the above limit value when $a=1$ as $$\lim_{x \to 1}\frac{(x-1)\times \{1-\frac{2}{\sqrt{x}}\}}{(\sqrt{x}-1)(\sqrt{x}-3)}=\frac{2\cdot (-1)}{1-3}=1$$ One could also extend to functions like $f(x)=x^\pi$. The limit in this case would be, $$\lim_{x \to 0}\frac{x\times \pi x^{\pi-1}}{x^\pi}=\pi$$ So, the limit would similarly be the value of the "degree of $(x-a)$"(which, if it is even a valid saying). (Note that we choose function $f(x)$ and the value $a$ for each limit. Yet since the limit has some meaning when $f(a)=0$, I would omit the choice of $a$ whenever the root of $f(x)$ is unique.) My question is, how can I extend this concept to functions that are not in the form of $f(x^p)$(where $f$ is a polynomial, $p$ is a real number)? Are there any meaning to this limit value when the function is a transcendental function? Are there any concepts that this value represents? (Edit) Here are some examples of the value of the limit for some functions. 
* *$f(x)=e^{-x}(x^3\sin x)$ at $a=0$ $$\lim_{x \to 0}\frac{xf'(x)}{f(x)}=\lim_{x \to 0}\frac{x\times e^{-x}(-x^3\sin x+3x^2\sin x+x^3\cos x)}{e^{-x}(x^3\sin x)}=4$$ *$f(x)=\frac{\sin x}{x}$ at $a=0$ $$\lim_{x \to 0}\frac{xf'(x)}{f(x)}=\lim_{x \to 0}\frac{x\times \frac{x\cos x-\sin x}{x^2}}{\frac{\sin x}{x}}=0$$ *$f(x)=\frac{\sin x}{x}$ at $a=\pi$ $$\lim_{x \to \pi}\frac{(x-\pi)f'(x)}{f(x)}=\lim_{x \to \pi}\frac{(x-\pi)\frac{x \cos x-\sin x}{x^2}}{\frac{\sin x}{x}}=1$$ *$f(x)=e^x-1$ at $a=0$ $$\lim_{x \to 0}\frac{xf'(x)}{f(x)}=\lim_{x \to 0}\frac{x\times e^x}{e^x-1}=1$$
If $f$ is at least $n$ times continuously differentiable in a neighbourhood of $a$, $f^{(n)}(a)\ne0$, and $f^{(k)}(a)=0$ for all $k\le n-1$, then by Taylor's theorem with the Lagrange form of the remainder, \begin{align} f(x)&=\sum_{i=0}^{n-1}\frac{(x-a)^if^{(i)}(a)}{i!}+\frac{(x-a)^nf^{(n)}(\xi)}{n!}\\ &=\frac{(x-a)^nf^{(n)}(\xi)}{n!}\\ f'(x)&=\sum_{i=0}^{n-2}\frac{(x-a)^if^{(i+1)}(a)}{i!}+\frac{(x-a)^{n-1}f^{(n)}(\eta)}{(n-1)!}\\ &=\frac{(x-a)^{n-1}f^{(n)}(\eta)}{(n-1)!}\ , \end{align} where $\ a\le\xi,\eta\le x\ $ or $\ x\le\xi,\eta\le a\ $. It follows that for any such $\ f\ $ $$ \lim_{x\rightarrow a}\frac{(x-a)f'(x)}{f(x)}=\lim_{x\rightarrow a}\frac{(x-a)^{n}f^{(n)}(\eta)/(n-1)!}{(x-a)^nf^{(n)}(\xi)/n!}=n\ . $$
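The conclusion can be sanity-checked numerically on the question's examples by evaluating $\frac{(x-a)f'(x)}{f(x)}$ at a point close to $a$ (the offset $10^{-5}$ is an arbitrary choice):

```python
import math

def order(f, fp, a, h=1e-5):
    # evaluate (x - a) f'(x) / f(x) just to the right of a
    x = a + h
    return (x - a) * fp(x) / f(x)

# f(x) = e^x - 1 at a = 0: expected value 1
n1 = order(math.expm1, math.exp, 0.0)
# f(x) = sin(x)/x at a = pi: expected value 1
n2 = order(lambda x: math.sin(x) / x,
           lambda x: (x * math.cos(x) - math.sin(x)) / x ** 2,
           math.pi)
```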
{ "language": "en", "url": "https://math.stackexchange.com/questions/4384133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Creating a probability density function for a particular dataset I want to create a probability density function for a particular dataset. First of all, I calculate the mean and the variance of my dataset. So, I use the mean and the variance to create a probability density function, for example, Gaussian distribution. Is my thinking correct?
A nonparametric way to estimate a density corresponding to your data is through kernel density estimation. Given an iid sample $(x_1,...,x_n),$ this method estimates your density function as $$\widehat f=\frac{1}{nh}\sum_{i=1}^n K\left(\frac{x-x_i}{h}\right)$$ for a suitable choice of a bandwidth parameter $h$ and kernel $K(\cdot)$. I encourage you to read the wiki article for details. Further lecture notes are here.
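A minimal self-contained sketch of a kernel density estimator with a Gaussian kernel (the data, the bandwidth $h$, and the kernel choice are illustrative, not prescriptions):

```python
import math

def kde(sample, h):
    # Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2 * pi)
    def K(u):
        return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    n = len(sample)
    # f_hat(x) = (1 / (n * h)) * sum_i K((x - x_i) / h)
    return lambda x: sum(K((x - xi) / h) for xi in sample) / (n * h)

data = [1.2, 0.8, 1.0, 1.5, 0.9, 1.1]
f_hat = kde(data, h=0.3)
```

The estimate is a legitimate density: it is nonnegative and integrates to $1$, because each rescaled kernel does.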
{ "language": "en", "url": "https://math.stackexchange.com/questions/4384335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Average of an odd number of equally spaced points on a circle I want to show that the average of an odd number of equally spaced points on the unit circle is equal to 0. More precisely, let $n$ be an odd number, $\theta_1,\ldots, \theta_n\in[0,2\pi)$ and $$ re^{i\psi}:=\frac{1}{n}\sum_{i=1}^ne^{i\theta_i}.$$ We want to show that if the $\theta_i$s are equally spaced then $r=0$. I remark that I do not want to use Vieta's formulas but rather prove it "by hand". I was able to prove this formula for $r$: $$r=\frac{1}{n}\left(n+2\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos(\theta_i-\theta_j)\right)^{1/2} $$ (I wouldn't know if this is a well known formula or not...). If the angles are equally spaced, upon relabeling if necessary we have $\theta_i=\frac{2(i-1)\pi}{n},\:i=1,\ldots,n$ so that \begin{align*} r^2 & = \frac{1}{n^2}\left(n+2\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos(\theta_i-\theta_j)\right) \\ & = \frac{1}{n^2}\left(n+2\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos\left(\frac{2(i-j)\pi}{n}\right)\right) \end{align*} Then we would like to show that $$\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos\left(\frac{2(i-j)\pi}{n}\right)=-\frac{n}{2}.$$ If we set $n=2k+1\:k\in\mathbb{N}$ we can rewrite the previous formula in terms of $k$: $$\sum_{i=1}^{2k}\sum_{j=i+1}^{2k+1}\cos\left(\frac{2(i-j)\pi}{2k+1}\right)=-k-\frac{1}{2}.$$ Expanding this double sum gives \begin{align*} \sum_{i=1}^{2k}\sum_{j=i+1}^{2k+1}\cos\left(\frac{2(i-j)\pi}{2k+1}\right) & = \cos\left(\frac{-2\pi}{2k+1}\right)+\cos\left(\frac{-4\pi}{2k+1}\right)+\ldots+\cos\left(\frac{-4k\pi}{2k+1}\right) \\ & + \cos\left(\frac{-2\pi}{2k+1}\right)+\cos\left(\frac{-4\pi}{2k+1}\right)+\ldots+ \cos\left(\frac{-(4k-2)\pi}{2k+1}\right) \\ &\vdots \\ & + \cos\left(\frac{-2\pi}{2k+1}\right)+\cos\left(\frac{-4\pi}{2k+1}\right) \\ & +\cos\left(\frac{-2\pi}{2k+1}\right) \end{align*} which can be rearranged as: $$\sum_{i=1}^{2k}\sum_{j=i+1}^{2k+1}\cos\left(\frac{2(i-j)\pi}{2k+1}\right) 
=2k\cos\left(\frac{2\pi}{2k+1}\right)+(2k-1)\cos\left(\frac{4\pi}{2k+1}\right)+\ldots+2\cos\left(\frac{(4k-2)\pi}{2k+1}\right)+\cos\left(\frac{4k\pi}{2k+1}\right).$$ Then, my question is the following: is it true that $$ 2k\cos\left(\frac{2\pi}{2k+1}\right)+(2k-1)\cos\left(\frac{4\pi}{2k+1}\right)+\ldots+2\cos\left(\frac{(4k-2)\pi}{2k+1}\right)+\cos\left(\frac{4k\pi}{2k+1}\right)=-k-\frac{1}{2}?$$ I checked it for some values of $k$ and try induction for the general case, but I couldn't get very far. Also, as I mentioned, one can prove the result using Vieta's formula but it seems that one should be able to prove it this way as well.
The result should hold for any $n>1$ (not just odd $n$). First recall for $r\neq 1$, $$\sum_{k=0}^{n-1} r^k=\frac{1-r^{n}}{1-r}.\quad (1)$$ Note that equally spaced points on the unit circle are essentially the roots of unity up to a rotation by angle $\phi$. And it is known that the sum of the $n$th roots of unity up to any rotation for $n>1$ is zero since $$\sum_{k=0}^{n-1} e^{(2k\pi/n+\phi)i} =e^{\phi i}\sum_{k=0}^{n-1} e^{2k\pi i/n} =e^{\phi i}\frac{1-e^{2\pi i}}{1- e^{2\pi i/n}}=0,\\ $$ where the penultimate equality uses $(1)$, and the ultimate equality is by Euler's identity.
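The geometric-series argument can be verified numerically for several $n>1$ and an arbitrary rotation $\phi$ (the values below are arbitrary choices):

```python
import cmath

phi = 0.4  # an arbitrary rotation angle
totals = [abs(sum(cmath.exp(1j * (2 * cmath.pi * k / n + phi)) for k in range(n)))
          for n in range(2, 12)]
```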
{ "language": "en", "url": "https://math.stackexchange.com/questions/4384496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 2 }
Let $E,X$ be topological spaces and $\pi:E \to X$ a covering map. Show that $\pi$ is a local homeomorphism. Since $\pi$ is a covering map, for every $p \in X$ there exists a neighborhood $U$ such that $\pi^{-1}(U)$ is a union of disjoint open sets in $E$, each of which is mapped homeomorphically onto $U$. The definition of local homeomorphism states that in order for $\pi$ to be such a map we need that for every $x \in E$ there exists an open set $O$ such that $x \in O$, $\pi(O)$ is open in $X$, and the restriction $\pi\mid_O : O \to \pi(O)$ is a homeomorphism. I don't quite know how to approach this. If I pick $x \in E$, then $\pi(x)=y$ for some $y \in X$, and for every $y \in X$ there exists a neighborhood $U$ containing $y$ such that $\pi^{-1}(U)$ is a union of disjoint open sets in $E$ that are mapped homeomorphically onto $U$. How can I use this info to show the requirements for $\pi$ to be a local homeomorphism?
If $x \in E$, let $y=p(x)$ and let $U$ be an evenly covered neighbourhood of $y$, which means that $p^{-1}[U] = \bigcup_{i \in I} O_i$ (a disjoint union of open sets) such that for all $i$, the restriction $p|_{O_i}: O_i \to U$ is a homeomorphism. One of the $O_i$ contains $x$, so you're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4384681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
neighbourhood concept-how come $[0,1]^2$ is not a neighbourhood of $(1,1)$ Below is the capture from lecture slides. I know what the neighbourhood definition is, and it seems to me $[0,1]^2$ is a neighbourhood of $(1,1)$ and $(0,\frac{1}{2})$. Further edit: Thank you so much for your reply. I regard $[0,1]^2$ as the unit square, and if I am not mistaken, those examples are at the boundary of the set. I just noticed that a closed set is not a neighbourhood of its endpoints. This bugs me!! According to this, I believe the set is not a neighbourhood of $(0,0)$ either, but isn't $[0,\frac{1}{2})$ open in $[0,1]$? This is the analogy that I rely on to make the claim.
When the slide says that a neighborhood should surround the point, it means that there is a disk of positive radius centered at the point that is included in the neighborhood. $[0,1]\times [0,1]$ is not a neighborhood of $(1,1)$ because you can escape starting from $(1,1)$ following the diagonal $(t,t)$, $t > 1$ without meeting $[0,1]\times [0,1]$. Note that we are talking of neighborhoods of points in the plane. The notion of open sets and neighborhood are relative to the space the point belongs to. A subset is open or not in a given set. For example, $[0,1/2)$ is an open subset of $[0,1]$ but not an open subset of ${\bf R}$. In the slides, the notion of neighborhood and open sets are relative to the plane itself. If instead of the plane, you consider the space $[0,1]\times [0,1]$, then the subset $(1/2,1] \times (1/2,1]$ is a neighborhood of the point $(1,1)$ in the square $[0,1] \times [0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4384834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
properties of sets involving functions Suppose f is a function from X to Y and A, B are subsets of X, and suppose that S, T are subsets of Y. prove that $f(A\cap B) \subseteq f(A) \cap f(B)$. my working: let $x\in A\cap B$, then $f(x) \in f(A \cap B)$. $x\in A$ and $x\in B$ so $f(x) \in f(A)$ and $f(x) \in f(B)$. so $f(x) \in f(A) \cap f(B)$ Hence the forward direction is proved. However, somehow, the backward direction can also be proved. let $x\in A$ and $x\in B$, so $f(x)\in f(A)\cap f(B)$ $x\in A\cap B$ so $f(x) \in f(A \cap B)$ How come the answer isn't an equal sign, and where did I go wrong, ie which step was invalid?
You start the backwards direction like this: let $x\in A$ and $x\in B$, so $f(x)\in f(A)\cap f(B)$. This isn't a general description of every element in $f(A)\cap f(B)$. There may be two different values $x\in A,y\in B$ that produce the same output, $f(x)=f(y)\in f(A)\cap f(B)$. Other people have given explicit counterexamples. In each case, note how it is two different elements from $A$ and $B$ that map to the same output that cause the issue. By seeing exactly which step in your "proof" is wrong, we can construct a very simple counterexample: $A=\{0\},B=\{1\},f(0)=f(1)=2$. Observe that $f(A)\cap f(B)=\{2\}\cap\{2\}=\{2\}$, but $f(A\cap B)=f(\emptyset)=\emptyset$. This is a common technique in "proof or counterexample" situations - try to construct a proof, and if a step of logic is faulty, that tells you something about how to make a counterexample.
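The counterexample is tiny enough to check mechanically (a sketch; the `image` helper is just notation for $f(S)$):

```python
def image(f, S):
    return {f(x) for x in S}

A, B = {0}, {1}
f = {0: 2, 1: 2}.get              # two different inputs with the same output

lhs = image(f, A & B)             # f(A ∩ B) = f(∅) = ∅
rhs = image(f, A) & image(f, B)   # {2} ∩ {2} = {2}
```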
{ "language": "en", "url": "https://math.stackexchange.com/questions/4385058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How do I know when to introduce parentheses in algebra? I quite often fail to come up with the right answer because I don't know the rules for parentheses. Let's say I would like to expand(?) the first equation. What I did was use the quadratic identities and I got equation 2, which is wrong. The right way is equation 3. Is there a rule that I can learn so that I wouldn't make this mistake again? Are there maybe several rules? Any advice is much appreciated! $$ \frac{(a+1)^2-(a-1)^2}{(b+1)^2-(b-1)^2} \tag{1} $$ $$ \frac{a^2+2a+1-a^2-2a+1}{b^2+2b+1-b^2-2b+1} \tag{2} $$ $$ \frac{a^2+2a+1-(a^2-2a+1)}{b^2+2b+1-(b^2-2b+1)} \tag{3} $$
Let's do this first without variables because it will make it easier to understand. Say I know that the amount of money I have is $$520-80$$ and let's say that for whatever reason I choose to use the fact that $80=40+40$ to write this as $$520-(40+40)$$ Do you see why we used parentheses? Because we replaced the value $80$ with an equal value $40+40$, and in the first expression we subtracted the whole of $80$ and so to keep the same value we need to subtract the whole of $40+40$. If we didn't use parentheses and wrote this instead as $$520-40+40$$ then we would change the meaning of the expression. Now back to your problem. Let's focus on just the numerator because the same thing is going on in the denominator. $$(a+1)^2-(a-1)^2$$ Think of the $(a-1)^2$ as analogous to the $80$ above. We are now replacing $(a-1)^2$ with another value which is equal to it, just like we replaced $80$ with $40+40$. So you use $$(a-1)^2 = a^2 - 2a + 1$$ and so the expression should become $$(a+1)^2 - (a^2 - 2a + 1)$$ because, just as we subtracted the whole of $(a-1)^2$, we want to subtract all of $a^2-2a+1$. If we didn't use parentheses here it would mean we subtracted only one part of $a^2-2a+1$ which would change the meaning of the expression.
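The difference between the two expansions is easy to demonstrate numerically (sample values chosen arbitrarily). The correctly grouped expansion agrees with the original expression, which simplifies to $\frac{4a}{4b}=\frac{a}{b}$; the version that drops the grouping does not:

```python
def original(a, b):
    return ((a + 1) ** 2 - (a - 1) ** 2) / ((b + 1) ** 2 - (b - 1) ** 2)

def expanded(a, b):   # equation (3): the subtracted binomials stay grouped
    return ((a**2 + 2*a + 1 - (a**2 - 2*a + 1))
            / (b**2 + 2*b + 1 - (b**2 - 2*b + 1)))

def faulty(a, b):     # equation (2): grouping dropped, so the signs are wrong
    return ((a**2 + 2*a + 1 - a**2 - 2*a + 1)
            / (b**2 + 2*b + 1 - b**2 - 2*b + 1))
```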
{ "language": "en", "url": "https://math.stackexchange.com/questions/4385373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How many positive integers $n$ are there such that $\frac{(n-1)^2}{n+29}$ is an integer? My solution: $$\frac{n^2 - 2n +1}{n+29}\overbrace{\Rightarrow}^{\text{long division}} n -31 + \frac{900}{n+29}$$ Now the question is how many divisors $900$ has: $$900 \underbrace{=}_{\text{prime factorization}} 2^23^25^2$$ so the number of divisors is $(2+1)(2+1)(2+1) = 3^3 = 27$. Is my solution correct?
Following along the lines of my comment as well as Stephen's, the factors of $900$ are: $1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 25, 30, 36, 45, 50, 60, 75, 90, 100, 150, 180, 225, 300, 450, 900$ However, in order to answer the question, the factors have to be rewritten in the form $n + 29$ such that $n$ is positive. This happens from $30$ and above. Counting the number of factors of $900$ that are above $30$, we get $\fbox{13}$ different values of $n$ that are positive integers such that $\frac{(n - 1)^2}{n + 29}$ is also a positive integer.
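A quick enumeration confirms the count. Note one edge case: the divisor $d=30$ gives $n=1$, for which the quotient is $0$; the count of $13$ corresponds to requiring the quotient to be a positive integer, which appears to be the intended reading here:

```python
divisors = [d for d in range(1, 901) if 900 % d == 0]
num_divisors = len(divisors)                    # 27, as computed in the question
valid_n = [d - 29 for d in divisors if d > 30]  # n > 0 and quotient > 0
```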
{ "language": "en", "url": "https://math.stackexchange.com/questions/4385511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the exact solution for this equation? I have been thinking about this equation: $$x^2=2^x$$ I know there are two integer solutions: $x=2$ and $x=4$. But there is also a negative solution, approximately $x=-0.77$: $$(-0.77)^2=0.5929$$ $$2^{(-0.77)}=0.5864...$$ Can we find this negative solution exactly?
$x \ne 0$ is a solution of the equation $x^2 = 2^x$ exactly when: $$\dfrac{\ln|x|}{x} = \dfrac{\ln 2}{2}$$ From the graph of the function $$f(x) = \dfrac{\ln|x|}{x} - \dfrac{\ln 2}{2}$$ we deduce that there are only $3$ solutions: $x = 2$, $x = 4$ and $x \approx -0.767$.
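For what it's worth, the negative root does have a closed form via the Lambert $W$ function: for $x<0$ the equation rearranges to $te^t=\frac{\ln 2}{2}$ with $t=-\frac{x\ln 2}{2}$, giving $x=-\frac{2}{\ln 2}W\!\left(\frac{\ln 2}{2}\right)\approx -0.7667$. A simple bisection sketch confirms the numerical value:

```python
def f(x):
    return x * x - 2 ** x

lo, hi = -1.0, 0.0     # f(-1) = 0.5 > 0 > f(0) = -1, so a root lies between
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
```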
{ "language": "en", "url": "https://math.stackexchange.com/questions/4385793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Meaning of Minus Sign on Top or Bottom of Integral I'm using a textbook in my Advanced Calculus class that might have funny notation for an integral. I couldn't figure out how to type it, so I snipped a picture of it and am attaching it: I can't tell if the integral simply has a minus sign for the "a" bound of integration, or if it's something I haven't seen before. I tried using context clues in my textbook, but honestly it's not my favorite one and is not very clear a lot of the time. If anyone knows what this means, I'd appreciate it!
The notation$$\underline{\int}_a^bf$$has no minus sign; instead, the $\int$ is underlined there. It denotes the supremum of all lower sums of $f$ with respect of all partitions of $[a,b]$. Another notation used here for the same purpose is$$\underline\sum(f,P).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4385954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$F$ floors and $p$ passengers problem There exists an elevator which starts off containing $p$ passengers. There are $F$ floors. For each passenger $i$, $P_i = P(\text{passenger } i \text{ exits on any given floor}) = 1/F$. The passengers exit independently. What's the probability that the elevator door opens on all floors? Reasonable assumption: the elevator stops on at least one of the floors. Number of all configurations: $F^{p}$, since any of the passengers can exit on any of the floors. The number of favorable configurations is the difference between the number of all configurations and the number of ways in which at least one of the floors is skipped. Therefore, by the principle of inclusion-exclusion, the probability is $\frac{F^{p} - \binom{F}{1}\cdot \left( F - 1\right)^{p} + \binom{F}{2}\cdot \left( F - 2\right)^{p} - \cdots + (-1)^{F-1}\binom{F}{F - 1}\cdot 1^{p}}{F^{p}}$. Now how to proceed from this point?
The problem is equivalent to the following (slightly neater) formulation: we place $p$ balls in $F$ urns, with uniform probability; which is the probability that all urns are occupied? Your approach and your solution is fine. It can be written as $$ P = \frac{F! \, S (p,F)}{F^p}.$$ where $S$ are the Stirling number of the second kind.
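Both the inclusion-exclusion formula and the Stirling-number form count surjections from the $p$ passengers onto the $F$ floors. A small brute-force comparison (sizes chosen small so that full enumeration is feasible):

```python
from itertools import product
from math import comb

def p_all_floors(p, F):
    # inclusion-exclusion count of surjections, over all F**p configurations
    surjections = sum((-1) ** j * comb(F, j) * (F - j) ** p for j in range(F + 1))
    return surjections / F ** p

def p_brute(p, F):
    # enumerate every assignment of passengers to floors
    hits = sum(1 for assign in product(range(F), repeat=p)
               if len(set(assign)) == F)
    return hits / F ** p
```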
{ "language": "en", "url": "https://math.stackexchange.com/questions/4386087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Semi-simple $\mathbb R[x]$-module structure on $\mathbb R^2$ This might just be a trivial question: Let $R=\mathbb{R}[x]$ and define an $R$-module structure on $\mathbb{R}^2$ with $x$ acting by $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. Is the module semi-simple? So my thought is: since the matrix admits no eigenvalue in $\mathbb{R}$, we cannot find any non-trivial submodule, thus this is a simple module, thus semi-simple. Is this a correct understanding? What's the correct/better way to understand this?
Let $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. The characteristic polynomial of $A$ is $x^2+1$, and thus the module is isomorphic to $\mathbb R[x]/(x^2+1)$ (since $x^2+1$ is irreducible over $\mathbb R$), hence it is simple.
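The key fact, that $x^2+1$ annihilates the action of $x$, amounts to $A^2=-I$ for the given matrix; a one-line check:

```python
A = [[0, 1], [-1, 0]]
A_squared = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
```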
{ "language": "en", "url": "https://math.stackexchange.com/questions/4386487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can we determine the sum of a series where each term is a product of two integers using method of differences? I have been trying to find out the sum of a series up to the $n^{th}$ term but failed.$$S_n=1 \cdot 3+2 \cdot 4+3 \cdot 5+4 \cdot 6+ \ldots + n(n+2)$$ my work: \begin{align*} S_n & =\frac{n(n+2)(n+4)-(n-2)n(n+2)}{6}\\ & =\frac{n(n+2)(n+4)}{6}-\frac{n(n+2)(n-2)}{6}\\ & =V_n - V_0 \end{align*} since $V_0=0$ $$S_n =\frac{n(n+2)(n+4)}{6}$$ but later I found that the actual sum is $$S_n=\frac{n(n+1)(2n+7)}{6}$$ Where did I make a mistake? I tried to find the sum in another way and that is by taking the sum of $n^2+2n$ and obtained the actual sum but couldn't find where I made a mistake in my first approach. I am stuck with another problem too and that one is $$S_n=1 \cdot 2+2 \cdot 5+3 \cdot 8+ \ldots + n(3n-1)$$ In this one I am completely clueless what to do since $n$ and $3n-1$ doesn't seem to be following a pattern between them.
The simplest way to solve such questions is to use the known results for $\sum i, \sum i^2, ...$. So, for your second problem you have to find $3\sum i^2-\sum i.$ In your comments you say that you wanted to use the method of differences. This is possible and is a good method for some problems but in your first question you would have needed to obtain a function $F$ such that $F(n)-F(n-1)=n(n+2)$. That function $F$ would actually have been $\frac{n(n+1)(2n+7)}{6}$, a far from easy function to find! You would then have the required form for the method of differences: $$\frac{n(n+1)(2n+7)}{6}-\frac{(n-1)n(2n+5)}{6}=n(n+2).$$
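Both closed forms are easy to confirm by direct summation. For the second series the recipe $3\sum i^2-\sum i$ simplifies (after a short computation) to $n^2(n+1)$; a check over the first couple of hundred values:

```python
def S1(n):   # 1*3 + 2*4 + ... + n(n+2)
    return sum(i * (i + 2) for i in range(1, n + 1))

def S2(n):   # 1*2 + 2*5 + ... + n(3n-1), i.e. 3*sum(i^2) - sum(i)
    return sum(i * (3 * i - 1) for i in range(1, n + 1))

ok1 = all(S1(n) == n * (n + 1) * (2 * n + 7) // 6 for n in range(1, 200))
ok2 = all(S2(n) == n * n * (n + 1) for n in range(1, 200))
```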
{ "language": "en", "url": "https://math.stackexchange.com/questions/4386659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is it possible to get a positive radius of convergence for the composition of two formal power series if neither of them has a positive radius of convergence? I found a counterexample or a proof for all the other possibilities, but this one I couldn't do.
Take any power series $F=a_1 X+a_2 X^2+\ldots$ with $a_1\not=0$ with radius of convergence $0$ and $G:=F^{-1}$. Then $G$ also has radius of convergence $0$ because otherwise also $F=G^{-1}$ would have a positive radius of convergence. Finally $F\circ G=X$ has $\infty$ as its radius of convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4387044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the positive operator $TT^*-T^*T$ on a finite-dimensional complex inner product space the zero operator? Suppose the vector space is a complex finite-dimensional inner product space $V$, and the operator $TT^*-T^*T$ is a positive operator on it. Is this information sufficient to conclude that $TT^*-T^*T$ is the zero operator on $V$? I think it may guarantee that $TT^*-T^*T=0$ (equivalently, that $T$ is normal), but I can't prove this.
Here are just some variants of Stephen's answer in the matrix form (just for references by other readers): In terms of matrices, we know that $\operatorname{tr}(AB)=\operatorname{tr}(BA)$ for every two $n\times n$ complex matrices. In particular, we always have $\operatorname{tr}(A^*A)=\operatorname{tr}(AA^*)$, which gives $\operatorname{tr}(A^*A-AA^*)=0$. If $A^*A-AA^*$ is positive semi-definite, then its eigenvalues are non-negative whence $\operatorname{tr}(A^*A-AA^*)\ge0$. Now the inequality is attained, which means that all the eigenvalues of $A^*A-AA^*$ should be zero! Of course we cannot conclude that a matrix is zero if all of its eigenvalues are zero, but note that $A^*A-AA^*$ is Hermitian whence a normal matrix. It means that $A^*A-AA^*$ is unitarily similar to the zero matrix, hence $A^*A-AA^*=\mathbf{0}$ and $A^*A=AA^*$, i.e., $A$ is normal.
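The trace identity $\operatorname{tr}(A^*A)=\operatorname{tr}(AA^*)$ at the heart of the argument can be checked on a random complex matrix (pure Python, no linear-algebra library assumed):

```python
import random

n = 5
A = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
     for _ in range(n)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conj_transpose(X):
    return [[X[j][i].conjugate() for j in range(n)] for i in range(n)]

C = mul(conj_transpose(A), A)   # A*A
D = mul(A, conj_transpose(A))   # AA*
trace_diff = sum(C[i][i] - D[i][i] for i in range(n))
```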
{ "language": "en", "url": "https://math.stackexchange.com/questions/4387214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find $\liminf _{n \to \infty}A_n $ and $\limsup_{n \to \infty}A_n $ Define $$A_n= \begin{cases} \left ( \frac{1}{2}-\frac{1}{2n},1+\frac{1}{2n}\right )& \text{ if n is odd } \\ \left ( \frac{1}{2n},\frac{3}{4}-\frac{1}{2n} \right )& \text{ if n is even } \end{cases}$$ Find $\liminf _{n \to \infty}A_n $ and $\limsup_{n \to \infty}A_n $ $$\limsup_{n \to \infty}A_n= \bigcap_{N=1}^{\infty}\Bigl(\bigcup_{n\ge N}A_n\Bigr) \qquad \liminf _{n \to \infty}A_n=\bigcup_{N=1}^{\infty}\Bigl(\bigcap_{n\ge N}A_n\Bigr)$$ $A_1=\left ( 0,1+ \frac{1}{2}\right )\\ A_2=\left ( \frac{1}{4}, \frac{1}{2}\right )\\ A_3=\left (\frac{1}{3},1+ \frac{1}{6}\right )$ Can someone help me here? I'm confused.
I find it helpful to think about what the following two sequences of sets are doing. For the $\liminf$, I think about the sequence \begin{align} \bigcap_{j = 1}^{\infty} A_j\\ \bigcap_{j = 2}^{\infty} A_j\\ \bigcap_{j = 3}^{\infty} A_j\\ \vdots \end{align} The $\liminf$ is the union of all of these, and since they're clearly getting larger I don't have to worry about the first however many. I can just focus on the ultimate behavior. The left-hand side of $A_n$ is oscillating, half the time approaching $(0$ and half the time approaching $[\frac{1}{2}$. The right-hand side of $A_n$ is also oscillating, half the time approaching $\frac{3}{4})$ and half the time approaching $1]$. So turning back to the $\liminf$, what's the ultimate behavior of the increasing sequence above? It's clearly ultimately going to contain $[\frac{1}{2}, \frac{3}{4})$, so that's the $\liminf$. For the $\limsup$ I can consider the sequence \begin{align} \bigcup_{j = 1}^{\infty} A_j\\ \bigcup_{j = 2}^{\infty} A_j\\ \bigcup_{j = 3}^{\infty} A_j\\ \vdots \end{align} The $\limsup$ is the intersection of all of these, and since they're clearly getting smaller I don't have to worry about the first however many. I can just focus on the ultimate behavior. Based on the analysis above, we take the other parts of the oscillation, because we're dealing with unions this time. The interval that's in at least one of the $A_i$ as $i$ becomes large is $(0, 1]$, so that's the $\limsup$.
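The conclusion (liminf $[\frac{1}{2},\frac{3}{4})$, limsup $(0,1]$) can be probed numerically with a few sample points; the finite ranges of $n$ below are arbitrary stand-ins for "eventually" and "infinitely often":

```python
def A(n):
    if n % 2:   # n odd
        return (0.5 - 1 / (2 * n), 1 + 1 / (2 * n))
    return (1 / (2 * n), 0.75 - 1 / (2 * n))

def in_A(x, n):
    lo, hi = A(n)
    return lo < x < hi

# 0.6 lies in the liminf: it belongs to every A_n from some point on.
always = all(in_A(0.6, n) for n in range(10, 500))
# 0.9 lies in the limsup but not the liminf: only the odd-indexed sets contain it.
odd_only = all(in_A(0.9, n) == (n % 2 == 1) for n in range(10, 500))
# 1.2 lies in neither: it is eventually outside every A_n.
never = not any(in_A(1.2, n) for n in range(10, 500))
```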
{ "language": "en", "url": "https://math.stackexchange.com/questions/4387352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
show this equation $3x^4-y^2=3$ has no integer solution Show that this diophantine equation $$3x^4-y^2=3$$ has no integer solution with $y\neq 0$. My try: WLOG assume $(x,y)$ is a positive integer solution; then $3\mid y$, so let $y=3y'$. Then we have $$x^4-1=3y'^2\tag{1}$$ and next I want to work $\pmod 5$, since $x^4\equiv 0,1\pmod 5$: (1) if $x^4\equiv 0\pmod 5$, i.e. $5\mid x$, then $(1)$ clearly has no solution; (2) if $(x,5)=1$, then $x^4-1\equiv 0\pmod 5$, and there could still exist $y'$ such that $5\mid 3y'^2$. So how can I solve my problem? Thanks
This curve $y^2=3x^4-3$ has only two integral points, according to the magma online calculator as follows. IntegralQuarticPoints($[3,0,0,0,-3]$); It says that all integral points are $( \pm 1 , 0)$. Hence this equation has no integer $(y\neq 0)$ solution. See related info MO
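A brute-force search over a finite range (of course not a proof, and the bound is an arbitrary choice) is consistent with the computer-algebra result that $(\pm 1, 0)$ are the only integral points:

```python
from math import isqrt

def integral_points(bound):
    out = []
    for x in range(-bound, bound + 1):
        t = 3 * x ** 4 - 3          # must equal y^2
        if t < 0:
            continue
        y = isqrt(t)
        if y * y == t:
            out.append((x, y))
            if y:
                out.append((x, -y))
    return out

found = integral_points(10 ** 4)
```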
{ "language": "en", "url": "https://math.stackexchange.com/questions/4387555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that a given operator is compact Let $H$ be a Hilbert space with scalar product $(\cdot,\cdot)$ and $T : H \to H$ a linear operator defined as follows: $$ Tx = \sum _{n=1} ^\infty (x,a_n)b_n $$ where $(a_n)_{n\in\mathbb{N}},(b_n)_{n\in\mathbb{N}}$ are two sequences in $H$ such that $\sum_{n=1}^\infty|a_n||b_n|<+\infty $. I have to prove that $T$ is compact, i.e. if $x_n \to x_0$ weakly for $n\to+\infty$, then $\|Tx_0-Tx_n\|\to 0$ for $n\to+\infty$. My idea was to somehow use the Banach-Steinhaus theorem to prove that $(x_0-x_n,a_k)\xrightarrow[n\to +\infty]{}0$ uniformly in $k$ and then deduce the strong convergence. However, the sequence $(b_n)_{n\in\mathbb{N}}$ can be unbounded, so I fear that this approach does not work. Does anyone know how to solve this exercise?
$\newcommand{scal}[2]{\left({#1};{#2}\right)}\newcommand{abs}[1]{\left\lvert {#1}\right\rvert}\newcommand{nrm}[1]{\left\lVert {#1}\right\rVert}$Call $T_nx=\sum_{k=1}^n \scal x{a_k}b_k$ and consider any $x$ such that $\abs x\le 1$. Then $$\abs{Tx-T_nx}=\abs{\sum_{k=n+1}^\infty \scal x{a_k}b_k}\le \sum_{k=n+1}^\infty \abs{\scal x{a_k}}\abs{b_k}\le \abs x \sum^\infty_{k=n+1}\abs{a_k}\abs{b_k}\le \sum^\infty_{k=n+1}\abs{a_k}\abs{b_k}$$ Therefore, $$\nrm{T-T_n}\le \sum_{k=n+1}^\infty \abs{a_k}\abs{b_k}\stackrel{n\to \infty}\longrightarrow 0$$ $T$ is the norm limit of a sequence of finite-rank operators, hence compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4387731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\int_0^\infty E[ 1\{f(X) \le f(X+t) \}] \, t dt =E[X^2]$ Consider a symmetric random variable $X$ with the pdf $f$. We want to study the following expression: \begin{align} \int_0^\infty E[ 1\{f(X) \le f(X+t) \}] t dt \end{align} where $1\{\cdot \}$ is the indicator function. Can we show that \begin{align} \int_0^\infty E[ 1\{f(X) \le f(X+t) \}] \, t dt =E[X^2]? \end{align} What I have done: a proof for the case when $f(x)$ is increasing for $x<0$ and decreasing for $x>0$. Under these conditions, we have that for $t\ge 0$ \begin{align} 1\{f(X) \le f(X+t) \}=1 \{ X \le 0, t \le 2|X|\} \end{align} Therefore, \begin{align} \int_0^\infty E[ 1 \{ X \le 0, t \le 2|X|\}] \, t dt &= E \left[1 \{ X \le 0\} \int_0^{2|X|} t dt \right] \text{ by Tonelli-Fubini}\\ &=E \left[1 \{ X \le 0\} \cdot 2|X|^2 \right]\\ &=E[X^2] \text{ by symmetry} \end{align} The issue is that not all symmetric random variables have this increasing property for the pdf. Therefore, I am not sure if it always holds.
I'm assuming that $\{X: f(X) = t\}$ has measure $0$ for every $t$. Let $\mu$ be the measure associated with $f$. Setting $Y = t + X$, we can rewrite the integral as $$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 1_{X < Y} 1_{f(X) < f(Y)} (Y - X) d\mu(X) dY.$$ We split the integral into three parts: those with $X, Y \geq 0$, those with $X < 0 < Y$, and those with $X, Y \leq 0$. The first part is $$\int_{0}^{\infty} \int_{0}^{\infty} 1_{X < Y} 1_{f(X) < f(Y)} (Y - X) d\mu(X) dY = \int_{0}^{\infty} \int_{0}^{\infty} 1_{X < Y} 1_{f(X) < f(Y)} (Y - X) f(X)dX dY.$$ The third part is, by symmetry $$\int_{-\infty}^{0} \int_{-\infty}^{0} 1_{X < Y} 1_{f(X) < f(Y)} (Y - X) d\mu(X) dY = \int_{0}^{\infty} \int_{0}^{\infty} 1_{X > Y} 1_{f(X) < f(Y)} (X - Y) f(X)dX dY.$$ So the sum of the first and third part is $$\int_{0}^{\infty} \int_{0}^{\infty} 1_{f(X) < f(Y)} |X - Y| f(X)dX dY.$$ On the other hand, the second part is $$\int_{-\infty}^{0} \int_{0}^{\infty} 1_{f(X) < f(Y)} (Y - X) f(X) dY dX = \int_{0}^{\infty} \int_{0}^{\infty} 1_{f(X) < f(Y)} (Y + X) f(X)dX dY.$$ So we conclude by summing the three parts that $$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 1_{X < Y} 1_{f(X) < f(Y)} (Y - X) d\mu(X) dY = \int_{0}^{\infty} \int_{0}^{\infty} 1_{f(X) < f(Y)}2\max(X,Y) f(X)dX dY.$$ We can manipulate the final expression as follows: swapping the role of $X$ and $Y$, we get $$\int_{0}^{\infty} \int_{0}^{\infty} 1_{f(X) < f(Y)}2\max(X,Y) f(X)dX dY = \int_{0}^{\infty} \int_{0}^{\infty} 1_{f(Y) < f(X)}2\max(X,Y) f(Y)dX dY.$$ So, replacing a copy with LHS with a copy of RHS, $$\int_{0}^{\infty} \int_{0}^{\infty} 1_{f(X) < f(Y)}2\max(X,Y) f(X)dX dY = \int_{0}^{\infty} \int_{0}^{\infty} \max(X,Y) \min(f(X), f(Y)) dX dY.$$ The right hand side is symmetric in $X$ and $Y$, so we can cut the region of integration by half, $$\int_{0}^{\infty} \int_{0}^{\infty} \max(X,Y) \min(f(X), f(Y)) dX dY = 2\int_{0}^{\infty} \int_{0}^{X} X \min(f(X), f(Y)) dY dX.$$ Now, we have the inequality, 
$$2\int_{0}^{\infty} \int_{0}^{X} X \min(f(X), f(Y)) dY dX \leq 2\int_{0}^{\infty} \int_{0}^{X} X f(X) dY dX = 2\int_{0}^{\infty} X^2 f(X) dX = E[X^2].$$ Thus we conclude that $$\int_{0}^{\infty} P[f(X) \leq f(X + t)]\, t\, dt \leq E[X^2]$$ with equality iff $f$ is monotonically decreasing almost everywhere on $[0,\infty)$.
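A numerical sanity check of the equality case (an illustration, not part of the proof): for the standard normal density, which is decreasing on $[0,\infty)$, one has $f(X)\le f(X+t)$ iff $X \le -t/2$ for $t>0$, so the left-hand side reduces to $\int_0^\infty \Phi(-t/2)\,t\,dt$, which should equal $E[X^2]=1$:

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# integrate t * Phi(-t/2) over [0, 40] with the trapezoid rule;
# the tail beyond 40 is negligible
N, T = 200_000, 40.0
h = T / N
val = sum(Phi(-(i * h) / 2.0) * (i * h) for i in range(1, N)) * h
val += 0.5 * h * (Phi(0.0) * 0.0 + Phi(-T / 2.0) * T)

assert abs(val - 1.0) < 1e-4   # matches E[X^2] = 1 for a standard normal
```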
{ "language": "en", "url": "https://math.stackexchange.com/questions/4387901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
How do we know when direct substitution is the proper approach for a limit? In the equation $y=\frac{(x+2)(x-2)}{x+2}$, the limit as $x\to-2$ is $-4$. However, direct substitution would give a result of "does not exist". Given that direct substitution can variously be either appropriate for identifying a limit, or correct in saying that the limit does not exist, or (as in this case) incorrect in saying that the limit does not exist, how do we know if direct substitution should be used? My best guess is that a graphical analysis is the typical first step.
In order to answer this question, it is interesting to remember the definition of limits in $\textbf{R}$. Consider a real-valued function $f:X\to Y$ with real domain and an accumulation point $a$ of $X$. We say that the limit of $f$ when $x$ approaches $a$ is $L$ iff the following statement is true: \begin{align*} (\forall\varepsilon > 0)(\exists \delta_{\varepsilon} > 0)(\forall x\in X)(0 < |x - a| < \delta_{\varepsilon} \Rightarrow |f(x) - L| < \varepsilon) \end{align*} As you can see, in order to define the limit of a function at a point, the proposed function need not be defined at this point. But it is required that we can study the behavior of $f$ when $x$ is as close to $a$ as one wants, without taking the value $a$ itself. This is the case for the limit proposed in the body of the question and, for the above-mentioned reason, we can cancel the term $x + 2$. However, if we also know that $f$ is continuous, then $L = f(a)$. In that case, it is possible to substitute the value $a$ at each entry $x$ of the proposed expression. Here is an example: \begin{align*} \lim_{x\to a}(x^{2} + 2x + 1) = a^{2} + 2a + 1 \end{align*} This basically addresses the problem from your question (as I have understood it). Hopefully this helps!
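A quick numerical illustration of the distinction (hypothetical snippet): the quotient is undefined at $x=-2$ yet approaches $-4$ nearby, while the continuous polynomial can simply be evaluated:

```python
def f(x):
    return (x + 2) * (x - 2) / (x + 2)   # undefined at x = -2

for h in [1e-1, 1e-3, 1e-6]:
    print(f(-2 + h), f(-2 - h))   # both columns approach -4

# a continuous function can be evaluated directly at the point
g = lambda x: x**2 + 2 * x + 1
assert g(3) == 16   # lim_{x->3} (x^2 + 2x + 1) = g(3)
```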
{ "language": "en", "url": "https://math.stackexchange.com/questions/4388047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the resource interpretation of $A \to B$ in linear logic? Linear logic seems to have two forms of implication. $$A \multimap B$$ With resource interpretation of "consuming A yields B". And the usual $$A \rightarrow B$$ What is the resource interpretation of regular implication in linear logic? For example, I've seen $A \rightarrow B$ sometimes re-interpreted as: $$!A \multimap B$$
I would say that translating classical logical formulas into the resource-sensitive setting of linear logic purely by semantic intuition is not reliable. Thus, many interpretations are ex post facto attempts to make sense of formal results. Presuming that by "regular implication" you mean "material implication" ($A \rightarrow B$) in classical logic, I think we can reason as follows: We may think of propositions as place-holders for resources. Material implication says that either we do not have, and thus demand, a resource A, or else we have a resource B available. In other words, we have either the absence of a resource (to be supplied) A or the presence of a resource (to be consumed) B. Since these are "classical" resources, we can freely restore or dispose of them over and over again on demand. So we have the alternatives $$!A \multimap !B$$ and $$?A \multimap ?B$$ We either supply ?A to be consumed in some amount and yield proportionately ?B, hence $$!?A \multimap ?B$$ or we consume !B in some amount and need proportionately !A, hence $$!A \multimap ?!B$$ which are the expressions for material implication in LL.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4388202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Doubt about the Location Lemma in Green's relations (abstract algebra)! I am unable to prove one part of the location lemma in Green's relations. Let $S^1$ be a monoid. Then I need to prove that $m.m' \in D(m) \iff m.m'\in R(m) \cap L(m')$. How should I go about proving this? $R(m)$ is the $\cal R$-equivalence class of $m$. $L(m')$ is the $\cal L$-equivalence class of $m'$. $D(m)$ is the equivalence class of $m$ under the relation $D=L\circ R=R\circ L$. I know that $x\in R(m) \iff \exists a,b:xa=m$ and $x=mb$. Symmetrically for $L(m')$. Also, $D=L\circ R=R\circ L$ is an equivalence relation.
This is not true in arbitrary monoids. For instance, it is not true in the bicyclic monoid. The bicyclic monoid is the monoid with presentation $\langle \{a, b\} \mid ab = 1 \rangle$. I leave it to you to verify that every element of the bicyclic monoid can be written uniquely in the form $b^ma^n$, where $m, n \geq 0$. The product of such words is given by $$ (b^ma^n)(b^pa^q) = b^ra^s $$ where $r = m - n + \max(n,p) = m + p - \min(n,p)$ and $s = q - p + \max(n,p) = n + q - \min(n,p)$. The idempotents are the elements of the form $b^na^n$, with $n \geqslant 0$. It contains a single $\cal D$-class and $\cal D = \cal J$. (Egg-box picture omitted.) Let $R(x)$, $L(x)$ and $D(x)$ be the $\cal R$-class, the $\cal L$-class and the $\cal D$-class of an element $x$. Then $b, ba \in D(b)$. It is also true that $bba \in L(ba)$. However, $bba \notin R(b)$.
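A small computational sketch of the normal-form multiplication (the helper name `mul` is made up), which makes the relation $ab=1$ and the idempotents easy to verify:

```python
def mul(x, y):
    """Multiply normal forms b^m a^n, represented as pairs (m, n)."""
    (m, n), (p, q) = x, y
    t = min(n, p)
    return (m + p - t, n + q - t)

a, b, one = (0, 1), (1, 0), (0, 0)
assert mul(a, b) == one          # ab = 1
assert mul(b, a) == (1, 1)       # ba is an idempotent different from 1
for n in range(5):               # every b^n a^n is idempotent
    e = (n, n)
    assert mul(e, e) == e
```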
{ "language": "en", "url": "https://math.stackexchange.com/questions/4388304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that there are exactly ${n-1 \choose k-1}$ products that consist of $n-k$ factors Prove that there are exactly ${n-1 \choose k-1}$ products consisting of $n-k$ factors, all of which are elements of $[k]$. Repetition of factors is allowed. Here is my attempt: to explain it, let us take $n=7$ and $k=4$, so we want to find the number of all products that have $7-4=3$ factors, each an element of $[4]$. Our concern is just finding the number of all these products. Now, $[4]=\{1,2,3,4\}$; let us denote it by $\{e_1,e_2,e_3,e_4\}$, where $e_k=k$ for $k \in \mathbb{Z^+}$. We want to choose $3$ factors from $\{e_1,e_2,e_3,e_4\}$ (repetition of factors is allowed). The number of ways to do this equals the number of solutions of the equation $e_1+e_2+e_3+e_4=3$. For example, the solution $(2,1,0,0)$ means we choose the product $1\cdot 1\cdot 2$, and $(1,1,0,1)$ means we choose the product $1\cdot 2\cdot 4$. So the number of all products with $3$ factors from $[4]$ equals the number of solutions of the above equation, which is ${7-4+4-1 \choose 4-1}$, and we can do the same thing with any $n$ and $k$. Finally, the products with $n-k$ factors, all elements of $[k]$, are the $(n-k)$-element multisets of the set $\{1,2,3,4,\ldots,k\}$, so the number of all these products is the number of weak compositions of $n-k$ into $k$ parts, which is ${ n-k+k-1 \choose k-1}={ n-1 \choose k-1}$. Is my attempt correct?
Your identification of this problem with the number of solutions of $$\sum_1^k e_i=n-k$$ is problematic, since you have not taken into account the fact that the same product can occur in different ways. For example $$2\times4\times6=2\times3\times8.$$
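A quick enumeration illustrates this for $n=7$, $k=4$: there are $\binom{6}{3}=20$ multisets of three factors from $[4]$, but strictly fewer distinct products, since e.g. $1\cdot1\cdot4 = 1\cdot2\cdot2$:

```python
from itertools import combinations_with_replacement
from math import comb, prod

n, k = 7, 4
multisets = list(combinations_with_replacement(range(1, k + 1), n - k))
assert len(multisets) == comb(n - 1, k - 1) == 20   # weak-composition count

products = {prod(m) for m in multisets}
assert len(products) < len(multisets)   # collisions: same product, different factors
print(len(products))   # 16 distinct products, not 20
```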
{ "language": "en", "url": "https://math.stackexchange.com/questions/4388516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a subgraph satisfying degree constraints in a directed graph We are given a directed graph $D=(V,A)$ and two values $i(v)$ and $o(v)$ for each vertex. Is it NP-hard to find an induced subgraph of $D$ such that the in-degrees are at most $i(v)$ and the out-degrees are at least $o(v)$ for each vertex in the subgraph?
The problem is NP-hard by a reduction from CNF-SAT. The basic idea is to create a big cycle of out-degree constraints so that all needed constraints are realized. Let $n$ be the number of variables and $C$ be the set of clauses. The reduced directed graph consists of $N = 2n + |C|$ "levels" $(L_i)_i$. Arcs $\{ (u, v) \mid 1 \le i \le N, u \in L_i, v \in L_{(i \bmod N) + 1} \}$ between consecutive levels are added to form a big cycle. For $1 \le i \le n$, denote $L_{2i-1} = \{ s_i \}$ and $L_{2i} = \{ x_i, \overline{x}_i \}$, representing a variable selection. Let $i(s_i) = i(x_i) = i(\overline{x}_i) = 1$. Out-degree constraints are defined later. For $1 \le i \le |C|$, denote $L_{2n+i} = \{c_i\}$, representing a clause. Let $i(c_i) = |C_i|$. Then, add arcs $\{ (x, c_i) \mid x \in C_i\}$, where $x$ is the vertex $\overline{x}_j$ if variable $j$ occurs positively in $C_i$, and $x_j$ if variable $j$ occurs negatively. The maximum in-degree constraint ensures that not all vertices from the clause are selected, so the clause is satisfied. For a selection vertex $s_i$, we set $o(s_i) = 1$. For every other vertex $u$, $o(u)$ is set to its out-degree in the resulting directed graph. This ensures that exactly one vertex from each level is used.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4388744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate $I=\int_{0}^{1}\frac{x^2-x}{(x+1)\ln{x}}dx$ I am trying to calculate this integral: $$I=\int_{0}^{1}\frac{x^2-x}{(x+1)\ln{x}}dx.$$ I tried to find an antiderivative, but there is no elementary one. So I changed variables by setting $t=\frac{x-1}{x+1}$, since the numerator factors as $x(x-1)$, and it led to $$I=2\int_{-1}^{0}\frac{t^2+t}{(1-t)^3\ln{\frac{t+1}{1-t}}}dt,$$ which seems even harder. The result from Wolfram Alpha looks fine, but I don't know how to derive it. I need some hints or advice. Thank you.
Another approach can be as below: First, by this link https://en.wikipedia.org/wiki/Frullani_integral one can easily obtain: $$\int_0^1 \frac{x^{a-1}-x^{b-1}}{\ln x}dx=\int_0^\infty \frac{e^{-bt}-e^{-at}}{t}dt=\ln\frac{a}{b},$$ where $a,b\gt0.$ Second, we have a well-known relation that states: $$\frac{\pi}{2}=(\frac{2}{1}\times\frac{2}{3})(\frac{4}{3}\times\frac{4}{5})(\frac{6}{5}\times\frac{6}{7})(\frac{8}{7}\times\frac{8}{9})\times\cdots . $$ For example you can see this relation in this link https://en.wikipedia.org/wiki/Pi . Third, notice that: $\frac{1}{1+x}=1-x+x^2-x^3+\cdots . $ Now, we get: $$I=\int_0^1\frac{(x^2-x)(1-x+x^2-x^3+\cdots)}{\ln x}dx=\sum_{k=0}^\infty (-1)^k\ln\frac{k+3}{k+2}.$$ The last sum is indeed:$$I=\ln\frac{3}{2}-\ln\frac{4}{3}+\ln\frac{5}{4}-\ln\frac{6}{5}+\cdots=\ln\frac{9}{8}+\ln\frac{25}{24}+\ln\frac{49}{48}+\cdots =\ln\frac{4}{\pi}.$$
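A numerical check of the closed form (the integrand extends continuously to $[0,1]$, with value $0$ at $x=0$ and $1/2$ at $x=1$):

```python
import math

def g(x):
    # integrand (x^2 - x) / ((x + 1) ln x), with the limit values filled in
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return 0.5
    return (x * x - x) / ((x + 1.0) * math.log(x))

# Simpson's rule on [0, 1]
N = 20_000                  # even number of subintervals
h = 1.0 / N
s = g(0.0) + g(1.0)
for i in range(1, N):
    s += (4 if i % 2 else 2) * g(i * h)
approx = s * h / 3.0

assert abs(approx - math.log(4.0 / math.pi)) < 1e-6   # ln(4/pi) ~ 0.241564
```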
{ "language": "en", "url": "https://math.stackexchange.com/questions/4389061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Simple curve constructed from a parabola - can it be expressed explicitly? Some years ago I thought of a problem that I've returned to frequently but never been able to solve. Maybe it's impossible, I'm not a mathematician. I thought I should ask some experts rather than puzzling over it till my dying day. The problem involves a simple-looking curve constructed from the parabola $y = x^2$ as follows. From the point $(x , x^2)$ construct a normal to the curve (in the direction of increasing $y$) which is one half unit long. The end point of this normal segment traces out a new curve $Y = F (X)$ as $x$ varies. The problem is to express $Y$ explicitly in terms of $X$. From the diagram - $\tan\theta = 2x = \dfrac{dY}{dX} = \dfrac{x - X}{Y - x^2}$ also $\left(\dfrac{1}{2}\right)^2 = (x - X)^2 + (Y-x^2)^2$ Considering this I could express $X, Y$ and $Y'$ parametrically in terms of $x$ - $X = x - \dfrac{x}{\sqrt{1 + 4 x^2}}$ $Y = x^2 + \dfrac{1}{2\sqrt{1 + 4 x^2}}$ $\dfrac{dY}{dX} = 2x$ I've never got any further than this. I've never been able to construct a solvable-looking differential equation or polynomial or eliminate the parameter. After all these years it's got to stop. If it's not possible, can one demonstrate that it is not possible? What kind of curve is this? Any assistance will be gratefully received.
The problem is essentially to eliminate the parameter $t$ from the pair of equations $$x=t-\frac{t}{\sqrt{1+4t^2}}$$ $$y=t^2+\frac{1}{2\sqrt{1+4t^2}}$$ To do this, let $$2t=\sinh u\implies \sqrt{1+4t^2}=\cosh u$$ Then, $$x=\frac12(\sinh u-\tanh u)$$ and $$y=\frac14\sinh^2u+\frac{1}{2\cosh u}=\frac14\left(\cosh^2u-1+\frac{2}{\cosh u}\right)$$ Meanwhile, after some simplification, we get $$x^2=\frac14\left(\cosh^2u-2\cosh u+\frac{2}{\cosh u}-\frac{1}{\cosh^2u}\right)$$ So now we can obtain $$4y-4x^2+1=2\cosh u+\frac{1}{\cosh^2u}=:\lambda$$ and also $$4y+1=\cosh^2u+\frac{2}{\cosh u}=:\mu$$ From the second of these equations we can get $$\cosh^3u=\mu\cosh u-2$$ Substituting this expression for $\cosh^3u$ into the first equation gives $$\lambda=\frac{2(\mu\cosh u-2)+1}{\cosh^2u}\implies\lambda\cosh^2u-2\mu\cosh u+3=0$$ Hence, $$\cosh u=\frac{\mu\pm\sqrt{\mu^2-3\lambda}}{\lambda}$$ The $\pm$ indicates the fact that there are two options for the locus: one above the parabola and one below. From now onwards it gets very messy to actually obtain the Cartesian equation(s). First we have to get an equation just in $\lambda$ and $\mu$ and then substitute into that their expressions in terms of $x$ and $y$. Firstly, $$\cosh^2u=\frac{2\mu^2-3\lambda\pm2\mu\sqrt{\mu^2-3\lambda}}{\lambda^2}$$ Therefore the equation in terms of $\lambda$ and $\mu$ is $$\mu=\frac{2\mu^2-3\lambda\pm2\mu\sqrt{\mu^2-3\lambda}}{\lambda^2}+\frac{2\lambda}{\mu\pm\sqrt{\mu^2-3\lambda}}$$ I had a go at rewriting this without the square roots, and I got$$(\mu^2\lambda^2-4\mu^3-2\lambda^3+9\lambda\mu)^2=\pm(\mu^2-3\lambda)(\mu\lambda^2-4\mu^2+3\lambda)$$ You may want to check this for yourself, but in any event you now have to substitute into this equation (or the previous one) the expressions $\mu=4y+1$ and $\lambda=4y-4x^2+1$ to get the Cartesian equation(s). I will leave this to you. I hope this helps.
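The algebra up to the quadratic in $\cosh u$ can be verified numerically: computing $x$, $y$ from the parametrisation with $c = \cosh u = \sqrt{1+4t^2}$, the quantities $\lambda = 4y-4x^2+1$ and $\mu = 4y+1$ satisfy the stated identities, and $c$ solves $\lambda c^2 - 2\mu c + 3 = 0$ (a sanity check of the elimination, not of the final messy Cartesian equation):

```python
import math

for t in [0.3, 1.0, 2.5, 7.0]:
    c = math.sqrt(1.0 + 4.0 * t * t)      # cosh u, with sinh u = 2t
    x = t - t / c
    y = t * t + 1.0 / (2.0 * c)
    lam = 4.0 * y - 4.0 * x * x + 1.0
    mu = 4.0 * y + 1.0
    assert abs(lam - (2.0 * c + 1.0 / c**2)) < 1e-9
    assert abs(mu - (c**2 + 2.0 / c)) < 1e-9
    # cosh u is a root of lam*c^2 - 2*mu*c + 3 = 0
    assert abs(lam * c**2 - 2.0 * mu * c + 3.0) < 1e-8
```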
{ "language": "en", "url": "https://math.stackexchange.com/questions/4389261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is the order of an elliptic curve the same as the order of a point on it over a finite field? The question is essentially: is the group of an elliptic curve cyclic, and how does one prove it? update It seems the answer to the above is no. But I have a further question (maybe I should post another thread?). Is there a bound on the order of a random point on an elliptic curve? Many zero-knowledge algorithms choose a point randomly, so I guess there should be a lower bound?
If you need to produce random points of large order, the standard approach is to use a fixed, known curve of prime order (which is then guaranteed to be cyclic), and choose random points on that curve. Do not fix a point $P$ and then choose random integers $k$ and output $kP$, as suggested in one of the comments. This approach is insecure for many protocols. A definitive resource for this topic is How to hash into elliptic curves by Icart, published in CRYPTO 2009.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4389426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sobolev spaces are Hilbert spaces Define $H^s(\mathbb{R}^n)=\lbrace u\in \mathcal{S}'(\mathbb{R}^n): (1+\vert y\vert^2)^\frac{s}{2}\hat{u}\in L^2(\mathbb{R}^n)\rbrace$ where $\mathcal{S}'(\mathbb{R}^n)$ is the space of tempered distributions together with the norm $\Vert u\Vert_{H^s}= \Vert (1+\vert y\vert^2)^\frac{s}{2}\hat{u}\Vert_{L^2}$. I've read that $H^s$ are Hilbert spaces, but I am stuck showing that they are complete. If $u_n$ is a Cauchy sequence in $H^s$, then $\hat{u}_n$ is a Cauchy sequence in a weighted $L^2$ space. Therefore we can find $v^*$ with $(1+\vert y\vert^2)^\frac{s}{2}v^*\in L^2$ and $(1+\vert y\vert^2)^\frac{s}{2}\hat{u}_n\rightarrow (1+\vert y\vert^2)^\frac{s}{2}v^*$ in $L^2$. But I don't know how to conclude that $v^*$ is the Fourier transform of a tempered distribution. For $s\geq 0$ the case is clear, but not for $s<0$.
As shown e.g. in Rudin's Functional Analysis: Let $1\leq p <\infty$ and $N>0$ with $$\int_{\mathbb{R}^n}\vert (1+\vert x\vert^2)^{-N}u(x)\vert^p dx<\infty.$$ Then $u$ is a tempered distribution. Therefore your function $v^\ast$ is a tempered distribution for all $s$, and thus we can find a tempered distribution $v$ with $\hat{v}=v^\ast$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4389640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
(Silly question) Why can't derivative be defined as $[f(a)-f(a+h)]/h$? I'm certain that this is a silly question, but it's one I need to ask anyway, because I'm still learning which parts of calculus are conventions unworthy of serious thought, which parts are trivial to prove, and which parts are worth understanding deeply. The question is: Why can't the basic derivative (studied in a Calculus I class) be defined as $$\lim_{h\rightarrow 0}\dfrac{f(a)-f(a+h)}{h},$$ equivalently $$\lim_{x\rightarrow a}\dfrac{f(a)-f(x)}{x-a}$$ rather than as $$\lim_{h\rightarrow 0}\dfrac{f(a+h)-f(a)}{h}$$ and $$\lim_{x\rightarrow a}\dfrac{f(x)-f(a)}{x-a}$$ respectively? I've come up with a potential answer: switching the sign of the numerator (swapping $f(x)-f(a)$ for $f(a)-f(x)$) is unacceptable because it gets the sign of the derivative wrong. But then, how can we use knowledge about the sign of the derivative to help us calculate the derivative? Any help is appreciated.
Why is the derivative defined like this? It is because we are looking for the slope of a function at a point. But this can't be right because when solving for the slope, we need two points, right? Recall that given two points, $P_1(x_1, y_1)$ and $P_2(x_2,y_2)$, the slope passing through these points are $\frac{y_2 - y_1}{x_2 - x_1}$, or $\frac{y_1 - y_2}{x_1 - x_2}$, both of which are the same. Why is this relevant? Given a function $f$, let's say we want to find the slope of the line tangent to $f$ at $x = a$. To start, the two points here will be $(a,f(a))$ and the other point will be $(x, f(x))$. This secant will have a slope of $\frac{f(x) - f(a)}{x - a}$. Clearly, we can't let $x = a$. But notice that as $x$ gets closer and closer to $a$, the secant gets closer and closer to becoming a tangent. We can write this as $$\lim_{x \to a}\frac{f(x) - f(a)}{x - a}.$$ We go back to the question. Why can't the basic derivative (studied in a Calculus I class) be defined as $$\lim_{h\rightarrow 0}\dfrac{f(a)-f(a+h)}{h},$$ equivalently $$\lim_{x\rightarrow a}\dfrac{f(a)-f(x)}{x-a}$$ rather than as $$\lim_{h\rightarrow 0}\dfrac{f(a+h)-f(a)}{h}$$ and $$\lim_{x\rightarrow a}\dfrac{f(x)-f(a)}{x-a}$$ respectively? Because that does not use the formula for the slope correctly. Since the terms in the numerator were swapped and subtraction is not commutative, then the sign will change and hence, give the wrong answer.
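A small numeric illustration of why the order matters (illustrative snippet): for $f(x)=x^2$ at $a=1$ the correct quotient tends to $f'(1)=2$, while the swapped numerator tends to $-2$:

```python
f = lambda x: x * x
a = 1.0
for h in [1e-2, 1e-4, 1e-6]:
    forward = (f(a + h) - f(a)) / h    # -> f'(a) = 2
    swapped = (f(a) - f(a + h)) / h    # -> -f'(a) = -2
    print(h, forward, swapped)

assert abs((f(a + 1e-8) - f(a)) / 1e-8 - 2.0) < 1e-6
```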
{ "language": "en", "url": "https://math.stackexchange.com/questions/4389788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Does there exist some isomorphism from $Q_8$ to $\Bbb Z_4 \times\Bbb Z_2$? I am wondering if some direct decomposition exists for the quaternion group. I think that I am mixing some things up, so let me explain and ask for clarification and tips to help me understand my misconception. First of all, we have the first isomorphism theorem, which says that $ H \cong G / \ker(f) $; so can I treat it as $ G \cong H \times \ker(f) $ if $ \ker(f) $ forms a group itself? In $Q_8$ we could choose $ H $ as $ \{1, -1, i, -i\} $, which means I would have to take $ \ker(f) $ to be $ \{j, k \} $, which does not form a group itself. Or maybe I should choose $ \mathbb Z_2 $, which is a group, so then I can claim that $ Q_8 \cong \{1, -1, i, -i\} \times \mathbb Z_2 $?
Since $\Bbb Z_4\times\Bbb Z_2$ is abelian whereas $Q_8$ isn't, they are not isomorphic. But, yes, $\{1,-1,i,-i\}$ is a normal subgroup of $Q_8$ which is isomorphic to $\Bbb Z_4$, and $Q_8/\{1,-1,i,-i\}\simeq\Bbb Z_2$.
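One can also confirm the non-isomorphism computationally by representing quaternions as $4$-tuples under the Hamilton product and checking that $ij \ne ji$, whereas $\Bbb Z_4\times\Bbb Z_2$ is abelian by construction:

```python
def hprod(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert hprod(i, j) == k                      # ij = k
assert hprod(j, i) == (0, 0, 0, -1)          # ji = -k, so Q8 is non-abelian

# Z4 x Z2 is abelian: componentwise addition commutes
add = lambda x, y: ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2)
elems = [(a, b) for a in range(4) for b in range(2)]
assert all(add(x, y) == add(y, x) for x in elems for y in elems)
```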
{ "language": "en", "url": "https://math.stackexchange.com/questions/4390014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does $\mathbb{Q}(\sqrt{-1}, \sqrt{2},\sqrt{3},\sqrt{5},\sqrt{7},\ldots)$ have countably many subfields? According to Example 3.10 of these notes, the field $L = \mathbb{Q}(\sqrt{-1}, \sqrt{2},\sqrt{3},\sqrt{5},\sqrt{7},\ldots)$, where we adjoin $\sqrt{p}$ for every prime $p$ (and $p=-1$) has only countably many subfields. I think that this is only true if we require the subfields to have finite degree over $\mathbb{Q}$. If we do not specify finite degree, then for any element $(a_{-1}, a_2,a_3,a_5,\ldots) \in \prod_p \{0,1\}$, we get a subfield $E = \mathbb{Q}(a_{-1}\sqrt{-1}, a_2\sqrt{2}, a_3\sqrt{3},\ldots)$. This defines an injection from $\prod_p \{0,1\}$ into the set of subfields of $L$, so the latter is uncountable. Given that Keith Conrad and I disagree, the conditional probability that I am wrong is high. Can anyone spot any errors in my reasoning?
There is no error, but you misread the exercise. The finite-degree subfields don't correspond to the finite-index subgroups but to the open finite-index subgroups. The point of this exercise is this subtlety, which is essential when dealing with infinite Galois extensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4390214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Show that $ \sum_{k=1}^n k n = \mathrm{O}(n^3)$ Cheers, I have to show that $ \sum_{k=1}^n k n = \mathrm{O}(n^3)$. It's a fairly easy question, but I need some answers as to what I am allowed to do. The first way to solve this is pretty easy, I think, so I stated: $$n + 2n + 3n + \cdots + n \cdot n \leq \\ n \cdot n + n \cdot n + n \cdot n + \cdots n \cdot n = n \cdot n \cdot n = n^3 $$ so we proved it one way, basically. Now I also tried to solve it using limits. So I tried saying something like this: We have to prove that $$ \lim_{n \to \infty} \frac{\sum_{k=1}^n k n}{n^3} = 0.$$ Now at this point, I have a question. Can L'Hôpital's rule even be applied to this fraction, and if yes, how would that be done? I am thinking that the limit would boil down to $$ \lim_{n \to \infty} \frac{\sum_{k=1}^n k n}{n^3} \stackrel{\frac{\infty}{\infty}(?)}{=} \lim_{n \to \infty} \frac{\sum_{k=1}^n k}{3n^2} = 0, $$ but I don't know if I am even allowed to do that. I also tried to split them, so I'd get $$ \lim_{n \to \infty} \frac{n}{n^3} + \frac{2n}{n^3} + \frac{3n}{n^3} + \cdots + \frac{n^2}{n^3} = 0 + 0 + 0 + \cdots + 0 = 0.$$ Would that be a correct answer as well? Thanks for any help =)
We just need to understand $\sum_{k=1}^n k$. The following is a common trick: think about how rectangles of width $1$ are used to approximate integrals. If you draw this on a graph, you will soon realise that $\int_0 ^{n} x\; \text{d}x\le\sum_{k=1}^n k\le\int_1 ^{n+1} x\; \text{d}x$. This is the same as saying $\frac{n^2}{2}\le \sum_{k=1}^n k\le\frac{(n+1)^2-1}{2}$. As an exercise, perhaps you might want to see if you can apply this to $\sum_{k=1}^n k^s$, for any positive $s$.
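A numeric sanity check of the bound (note that the ratio to $n^3$ tends to $\tfrac12$, which is bounded, and boundedness is all that big-O requires; the ratio does not need to tend to $0$):

```python
for n in [10, 100, 1000, 10_000]:
    s = sum(k * n for k in range(1, n + 1))   # = n^2 (n + 1) / 2
    assert s == n * n * (n + 1) // 2
    assert s <= n**3                           # so the sum is O(n^3)
    print(n, s / n**3)                         # ratio -> 0.5
```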
{ "language": "en", "url": "https://math.stackexchange.com/questions/4390383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How is this differential equation solved? I have a differential equation as follows: $$ y \cdot y'' +(y')^2 +1=0.$$ I'm interested in how to solve it. So far I have found a few solutions like $y=\sqrt{r^2-(x-k)^2}$ and $y=x+k.$ [In these, $r$ and $k$ are real constants.] It came up while looking at a property of surfaces of revolution which held for spheres and certain cones. I am wondering if a general solution can be found, and in particular whether only circles and certain lines are solutions. Note: As user RadialArmSaw noted, $y=x+k$ is not a solution.
Noting that $y y'' + (y')^2 + 1 = (y y' + x)'$, the original equation reduces to $y y' + x = c_1$, which is an equation with separable variables whose solution is implicitly defined by the relation $$ \frac{y^2}{2} = c_1 x-\frac{x^2}{2}+c_2 $$ that can be rewritten as $$ y^2=k_1 x-x^2+k_2 \Leftrightarrow (x-\alpha_1)^2+y^2 = \alpha_2. $$ Given initial or boundary conditions, we can compute (when possible) the constants $\alpha_1, \alpha_2$. When the initial/boundary data is compatible with the differential equation the solution is a circumference, as noted by others.
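One can also verify the circle solutions directly: with $y=\sqrt{r^2-(x-k)^2}$ one has $y' = -(x-k)/y$ and $y'' = -r^2/y^3$, so $yy'' + (y')^2 + 1$ vanishes identically (a numerical spot check):

```python
import math

r, k = 2.0, 0.5
for x in [-1.0, 0.0, 0.9, 2.0]:          # points inside the circle's domain
    y = math.sqrt(r * r - (x - k) ** 2)
    yp = -(x - k) / y                     # y'
    ypp = -r * r / y**3                   # y''
    assert abs(y * ypp + yp**2 + 1.0) < 1e-12
```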
{ "language": "en", "url": "https://math.stackexchange.com/questions/4390550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
all prime elements of $ \Bbb{Z}_p$ What are all the prime elements of $\Bbb{Z}_p$? $\Bbb{Z}_p$ is a local ring with unique maximal ideal $(p)$. A prime element is a generator of $(p)$, so $p$ and $-p$ are prime elements. What about other prime elements? I couldn't come up with other prime elements of $\Bbb{Z}_p$. Are all prime elements of the form $\pm p$? Thank you in advance.
This is equivalent to finding all units of $\mathbb Z_p$, and since $\mathbb Z_p$ is local, $$\mathbb Z_p^{\times} = \cup_{x\in\{1, 2, \dots, p-1\}} x + p\mathbb Z_p.$$ In particular, $1+p+p^2+p^3+\cdots=\frac{1}{1-p}$ is a unit, and $(\frac{p}{1-p})=(p)$.
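A finite-precision sanity check of the identity $(1-p)(1+p+\cdots+p^{k-1}) \equiv 1 \pmod{p^k}$, which is what makes $1-p$ (and, more generally, every element not divisible by $p$) a unit:

```python
p, k = 5, 8
s = sum(p**i for i in range(k))        # truncation of 1 + p + p^2 + ...
assert ((1 - p) * s) % p**k == 1       # s inverts 1 - p modulo p^k

# more generally, x is a unit in Z_p iff p does not divide x
m = p**3
for x in range(1, m):
    if x % p != 0:
        inv = pow(x, -1, m)            # modular inverse exists mod p^3
        assert (x * inv) % m == 1
```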
{ "language": "en", "url": "https://math.stackexchange.com/questions/4390731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Good books to learn Complex Analysis and Contour Integration? I have completely finished several calculus topics, such as limits, derivatives, sequences and series, indefinite and definite integration, and many more, and have solved a huge number of questions on these topics. I am also comfortable with the basics. So, to expand my knowledge, I want to self-study complex analysis and contour integration as I did for the previous topics. Can you please suggest some good books for them, starting from scratch, so that it's easy for me to self-study? It doesn't have to be one book; it can be a series of books too. Any help would be greatly appreciated, thank you. :)
I would recommend Complex Analysis (Princeton Lectures in Analysis, Volume II), written by Elias M. Stein (Ph.D. advisor of Terence Tao) and Rami Shakarchi. This is the second analysis book in the series. The authors sacrifice some depth in the presented topics in exchange for demonstrating various connections of the material to other branches of mathematics, which in my mind will help you hunt for the next topics of interest along the journey.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4390982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
smooth quandle transitive Let $G$ be a commutative Lie group and let $T \in \operatorname{Aut}(G)$ be an automorphism of the Lie group. We put $x * y=T x+(1-T) y$ for $x, y \in G$, and then $(G, T)$ is a smooth quandle. We call $(G, T)$ an Alexander quandle and denote it by $G_{T}$; if $T$ is multiplication by a scalar $a$, we simply denote it by $G_{a}$. I need to show that $G_{T}$ is transitive if and only if the endomorphism $1-T$ on $G$ is surjective. I am stuck on this question.
We say that $X$ is transitive if the action of $\operatorname{Inn}(X)$ on $X$ is transitive, where $\operatorname{Inn}(X)$ is the subgroup of $\operatorname{Aut}(X)$ generated by the maps $s_y$ $(y \in X)$, called the inner automorphism group. Returning to my question, we take the action of $\operatorname{Inn}(G)$ on $G$ defined by $$ \begin{aligned} \operatorname{Inn}(G) \times G & \longrightarrow G \\ (y, x)&\longmapsto y* x=s_y(x)=T{y}+(1-T) x \end{aligned} $$ where $$\begin{aligned} s_y :\;\;\; G & \longrightarrow G \\ x & \longmapsto s_y(x)=y * x. \end{aligned}$$ If $G$ is transitive, then for all $x, y$ there is a $z$ such that $$x=y * z=s_{y}(z)=T{y}+(1-T) z,$$ so $$(1-T)z=x-T y,$$ and hence $(1-T)$ is surjective. Conversely: we have $T \in \operatorname{Aut}(G)$, and for all $x, y \in G$ we have $x-Ty \in G$; since $1 -T$ is surjective, there exists $z \in G$ such that $$ (1-T) z=x-T y, $$ and then $$ x=y * z \quad \text{for all } x, y \in G. $$ Hence $G$ is transitive.
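The equivalence can be checked experimentally on finite Alexander quandles $(\Bbb Z_n)_a$ (a small experiment with made-up helper names, not a proof): transitivity of the inner action coincides with $\gcd(1-a, n) = 1$, i.e. with surjectivity of $1-T$:

```python
from math import gcd

def transitive(n, a):
    """Is the Alexander quandle Z_n with x*y = a*x + (1-a)*y transitive?"""
    maps = [lambda x, y=y: (a * x + (1 - a) * y) % n for y in range(n)]
    orbit = {0}
    frontier = [0]
    while frontier:                  # forward closure suffices: on a finite
        x = frontier.pop()           # set the inverse of each bijection is
        for s in maps:               # one of its powers
            z = s(x)
            if z not in orbit:
                orbit.add(z)
                frontier.append(z)
    return len(orbit) == n

for n in [5, 8, 9, 12]:
    for a in range(1, n):
        if gcd(a, n) == 1:           # T must be an automorphism
            assert transitive(n, a) == (gcd((1 - a) % n, n) == 1)
```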
{ "language": "en", "url": "https://math.stackexchange.com/questions/4391135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the singular points of the ODE Consider the ODE $$\frac{d^2x}{dt^2}+e^t\frac{dx}{dt}+\frac{x}{1+2t}=0$$ What are the singular points of this ODE? For an ODE of the form $$P(t)\frac{d^2x}{dt^2}+Q(t)\frac{dx}{dt}+R(t)x=0$$ a point $t_0$ is singular if $P,Q,R$ are analytic and $P(t_0)=0$. From the equation above, there don't seem to be any singular points. We have that $P(t)=1$, which does not equal $0$ for any value $t_0$. The reason I ask is that when solving this using the series method with $x(t)=\sum^\infty_{n=0}a_nt^n$, I want to know at what values of $t$ the solution will not converge. I know it won't converge at singular points, but there don't seem to be any.
We don't need to rewrite the ODE; just notice that $$\frac{{\rm d}^{2}x}{{\rm d}t^{2}}+e^{t}\frac{{\rm d}x}{{\rm d}t}+\frac{1}{1+2t}x=0$$ Hence the ODE has the form $$\frac{{\rm d}^{2}x}{{\rm d}t^{2}}+P(t)\frac{{\rm d}x}{{\rm d}t}+Q(t)x=0$$ Now, * *A point $t_{0}$ is an ordinary point if $P(t)$ and $Q(t)$ are analytic at $t_{0}$. *A point $t_{0}$ is a singular point if $P(t)$ or $Q(t)$ is not analytic at $t_{0}$. *A singular point $t_{0}$ is a regular singular point if $(t-t_{0})P(t)$ and $(t-t_{0})^{2}Q(t)$ are analytic at $t_{0}$; otherwise $t_{0}$ is an irregular singular point. Setting $P(t)=e^{t}$ and $Q(t)=\frac{1}{1+2t}$: since $1+2t=0\implies t=-1/2$, $Q(t)$ is not analytic at $t_{0}=-1/2$. Now we need to decide whether that point is a regular or an irregular singular point. For that, notice that $(t+1/2)e^{t}$ is analytic at $t=-1/2$, and although $\displaystyle (t+1/2)^{2}\cdot \frac{1}{1+2t}$ is not defined at $t=-1/2$, this singularity is removable because $\displaystyle \lim_{t\to -1/2}(t+1/2)^{2}\cdot\frac{1}{1+2t}=0$, which is finite. Therefore we can conclude that $t_{0}=-1/2$ is a regular singular point, and we can use the method of Frobenius to solve the ODE. Just a small remark: * *I think you should change the notation $x(t)$ to $y(x)$, since in general $x(t)$ describes position as a function of the time variable, and here it is strange to have $t=-1/2$. I think the second notation would be more convenient.
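As a quick symbolic sanity check of the limit used above (a sketch with sympy, assuming it is available):

```python
from sympy import symbols, limit, Rational

t = symbols('t')
Q = 1 / (1 + 2 * t)
# (t + 1/2)^2 * Q simplifies to (t + 1/2)/2, so the limit at t = -1/2 is 0
L = limit((t + Rational(1, 2))**2 * Q, t, Rational(-1, 2))
```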
{ "language": "en", "url": "https://math.stackexchange.com/questions/4391301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integrated multivariate normal density is greater or equal to product of integrated marginal densities Let $(X, Y)$ be a two-dimensional multivariate normally distributed random vector with means $0$, standard deviations $1$ and correlation $\rho$. Given the joint density $f_{X, Y}$ of $(X, Y)$ and its marginal densities $f_X, f_Y$, I want to show $$ \int_u^\infty \int_v^\infty f_{X, Y}(x, y)\ \mathrm{d}x\, \mathrm{d}y \geq \bigg( \int_u^\infty f_{X}(x)\ \mathrm{d}x\ \bigg) \bigg( \int_v^\infty f_{Y}(y)\ \mathrm{d}y\ \bigg) $$ for any $u, v \geq 0$ (possibly also $u, v \in \mathbb{R}$ if that makes no difference). I already tried to bring everything to the LHS and compute $$ \int_u^\infty \int_v^\infty \Big(f_{X, Y}(x, y) - f_{X}(x) f_{Y}(y)\Big)\ \mathrm{d}x\, \mathrm{d}y. $$ Here, the integrand equals $$ \frac{1}{2\pi} \bigg( \frac{1}{\sqrt{1 - \rho^2}} \exp\bigg\{- \frac{x^2 + y^2}{2(1 - \rho^2)} + \frac{ \rho xy}{2(1 - \rho^2)}\bigg\} - \exp\bigg\{- \frac{x^2 + y^2}{2}\bigg\} \bigg). $$ I am not entirely sure how to go from here or if this leads somewhere. Any ideas how to proceed in this calculation or maybe use some other properties from the multivariate normal distribution to show the inequality?
Your claim is true if and only if $\rho\geq 0$. Indeed, for all $u, v\in\mathbb R$, note that $$\mathbb P(X\geq u, Y\geq v) = \mathbb P(X\leq -u, Y\leq -v)\,,$$ as $(X, Y)$ has the same law as $(-X, -Y)$. Now, the RHS above is the CDF of a bivariate normal with correlation $\rho$. We denote it as $F_\rho(u, v)$, so that we can write $$F_\rho(u, v)=\mathbb P(X\geq u, Y\geq v)\,.$$ Now, if $\rho=0$, $X$ and $Y$ are independent. We thus have that $$F_{\rho=0}(u, v) = \mathbb P(X\geq u)\mathbb P(Y\geq v)\,.$$ To conclude, note that $\rho\mapsto F_\rho(u, v)$ is a strictly increasing map (see for instance Bivariate normal distribution and correlation). In particular, we have that for $\rho>0$ $$F_{\rho}(u, v)>\mathbb P(X\geq u)\mathbb P(Y\geq v)\,,$$ while for $\rho<0$ $$F_{\rho}(u, v)<\mathbb P(X\geq u)\mathbb P(Y\geq v)\,.$$
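A quick Monte Carlo sanity check of the positive-$\rho$ case (not a proof; the correlation $\rho=0.6$ and thresholds $u=v=0.5$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 200_000
x = rng.standard_normal(n)
# construct y so that (x, y) is standard bivariate normal with Corr(x, y) = rho
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
u = v = 0.5
joint = np.mean((x >= u) & (y >= v))           # estimates P(X >= u, Y >= v)
product = np.mean(x >= u) * np.mean(y >= v)    # product of marginal tail estimates
```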
{ "language": "en", "url": "https://math.stackexchange.com/questions/4391453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
I don't fully understand why the Pythagorean theorem works with velocity vectors. I get why it works with displacement, because that's what the theorem was originally meant for: lengths. I find it harder to wrap my head around it when it's velocity. If anyone has a good visualization or anything that will help me understand, it will really help. I scoured the internet for a while, but all the explanations were just: put the V(y) velocity arrow perpendicular to the head of V(x). For some reason this explanation isn't working out for me. Looking forward to the responses! Cheers!
Suppose you’re on a ship moving with velocity $v_1$ (relative to the shore) and you’re walking with velocity $v_2$ (relative to the ship). In time $\Delta t$ the ship is displaced by $v_1 \Delta t$ (relative to the shore) and you are displaced by $v_2 \Delta t$ (relative to the ship). Your displacement relative to the shore is $(v_1 + v_2) \Delta t$. So your velocity relative to the shore is $v = v_1 + v_2$. If $v_1 \perp v_2$, then the Pythagorean theorem (which holds whenever we add orthogonal vectors) tells us that $\| v \|^2 = \| v_1 \|^2 + \| v_2 \|^2$. If you’d like, you could apply the Pythagorean theorem to the displacement vectors then divide by $\Delta t^2$ to obtain the Pythagorean theorem for the velocity vectors.
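The addition of perpendicular velocities can be checked numerically; here is a minimal sketch with the hypothetical values $v_1=(3,0)$ and $v_2=(0,4)$:

```python
import math

v1 = (3.0, 0.0)   # ship's velocity relative to the shore (units arbitrary)
v2 = (0.0, 4.0)   # walking velocity relative to the ship, perpendicular to v1
v = (v1[0] + v2[0], v1[1] + v2[1])   # velocity relative to the shore
speed = math.hypot(*v)               # Euclidean norm of the summed vector
```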
{ "language": "en", "url": "https://math.stackexchange.com/questions/4391580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why does the fact that $f$ and $g$ have the same Fourier coefficients imply that $f=g\text{ a.e.} $ I am self-learning Rudin's RCA and I encountered a puzzle that struggled me for a long time. Here is the related text from RCA and my question. Suppose now that $f\in L^1(T)$, that $\{c_n\}$ is given by $ c_n = \frac{1}{2\pi} \int_{-\pi}^\pi f(x)e^{-inx} \, dx$, and that $\sum_{-\infty}^\infty |c_n|<\infty$. Put $g(x)=\sum_{-\infty}^\infty c_n e^{inx}$. We can get that the series $g(x)$ converges uniformly by $\sum_{-\infty}^\infty |c_n|<\infty$ (hence $g$ is continuous), and the Fourier coefficients of $g$ are easily computed. Through the computation, we know that $f$ and $g$ have the same Fourier coefficients. This implies $f=g\text{ a.e.}$, so the Fourier series of $f$ converges to $f(x)\text{ a.e.}$ My question is in the very last line: why the fact that the Fourier coefficients of $f$ and $g$ are equal implies that $f=g\text{ a.e.}$? Guess 1: I am aware that if $f$ and $g$ are continuous, then this implication can follow. So, we may use that $C(T)$ is dense in $L^1(T)$ to try to solve this question. But I am stuck with the $\epsilon$ thing. Guess 2: Is it true that if a function's Fourier series converges pointwise, then the series must converge (pointwise a.e.) to the original function? If yes, then we can say that the Fourier series of $f$ converges pointwise to $g$, and $g$ is equal to $f\ a.e.$ by the above guess.
The hypothesis $\sum |\widehat{f}(n)|<\infty$ implies $\sum |\widehat{f}(n)|^2<\infty$, and we can think in terms of the Parseval theorem that $\int_0^{2\pi}|f(t)|^2\,dt=\sum |\widehat{f}(n)|^2$ (up to a normalizing constant). Applying this to $f-g$ gives $\int_0^{2\pi}|f(t)-g(t)|^2\,dt=0$. From definitional properties of integrals, this entails that $f-g$ is $0$ almost everywhere.
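The Parseval identity invoked here has an exact discrete analogue for the DFT, $\sum_n |a_n|^2 = \frac1N\sum_k |A_k|^2$, which is easy to verify numerically (only an analogy to the continuous statement, but it shows the same mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(16)
A = np.fft.fft(a)
energy_time = np.sum(a**2)
energy_freq = np.sum(np.abs(A)**2) / len(a)  # DFT Parseval: sum|a|^2 = (1/N) sum|A|^2
```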
{ "language": "en", "url": "https://math.stackexchange.com/questions/4392010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Which solution to $\int \frac{x^3}{(x^2+1)^2}dx$ is correct? I tried solving the following integral using integration by parts: $$\int \frac{x^3}{(x^2+1)^2}dx$$ but I got a different answer from the Wolfram calculator. This is the answer that it gave: $$\int \frac{x^3}{(x^2+1)^2}dx=\frac{1}{2(x^2+1)}+\frac{1}{2}\ln(x^2+1)+C$$ I am wondering whether I am wrong or the calculator is, and if I am, where I went wrong. Here is my solution : $$\int \frac{x^3}{(x^2+1)^2}dx$$ $$\int x^2\frac{x}{(x^2+1)^2}dx$$ $\implies u =x^2 \qquad u'=2x$ $\displaystyle\implies v'=\frac{x}{(x^2+1)^2}\qquad v=\int\frac{x}{(x^2+1)^2}dx$ $$\int\frac{x}{(x^2+1)^2}dx$$ $\implies \zeta=x^2+1$ $\implies\displaystyle \frac{d\zeta}{2}=x\,dx$ $$\boxed{v=\frac{1}{2}\int\frac{d\zeta}{\zeta^2}=-0.5\zeta^{-1}=\frac{-1}{2(x^2+1)}}$$ $$\int x^2\frac{x}{(x^2+1)^2}dx=x^2\frac{-1}{2(x^2+1)}-\underbrace{(\int \frac{-2x}{2(x^2+1)}dx)}_{-\frac{1}{2}\ln(x^2+1)}$$ $$\boxed{\int \frac{x^3}{(x^2+1)^2}dx=\frac{-x^2}{2(x^2+1)}+\frac{1}{2}\ln(x^2+1)+C}$$
BOTH are correct since the two answers differ by a constant. $$ \begin{aligned} \int \frac{x^{3}}{\left(x^{2}+1\right)^{2}} d x &=-\frac{1}{2} \int x^{2} d\left(\frac{1}{x^{2}+1}\right) \\ &=-\frac{x^{2}}{2\left(x^{2}+1\right)}+\frac{1}{2} \int \frac{2 x}{x^{2}+1} d x \\ &=-\frac{x^{2}}{2\left(x^{2}+1\right)}+\frac{1}{2} \ln \left(x^{2}+1\right)+C_{1} \\ &=-\frac{x^{2}+1-1}{2\left(x^{2}+1\right)}+\frac{1}{2} \ln \left(x^{2}+1\right)+C_{1} \\ &=-\frac{1}{2}+\frac{1}{2\left(x^{2}+1\right)}+\frac{1}{2} \ln \left(x^{2}+1\right)+C_{1} \\ &=\frac{1}{2\left(x^{2}+1\right)}+\frac{1}{2} \ln \left(x^{2}+1\right)+C_{2}, \end{aligned} $$ where $C_{2}= C_{1} -\frac{1}{2}$.
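One can let a computer algebra system confirm both claims at once: each expression differentiates back to the integrand, and their difference is the constant $-1/2$ (a sketch assuming sympy is available):

```python
from sympy import symbols, log, diff, simplify, S

x = symbols('x')
integrand = x**3 / (x**2 + 1)**2
mine = -x**2 / (2 * (x**2 + 1)) + log(x**2 + 1) / 2     # the OP's antiderivative
wolfram = 1 / (2 * (x**2 + 1)) + log(x**2 + 1) / 2      # the calculator's answer
d1 = simplify(diff(mine, x) - integrand)                # should vanish
d2 = simplify(diff(wolfram, x) - integrand)             # should vanish
gap = simplify(mine - wolfram)                          # should be the constant -1/2
```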
{ "language": "en", "url": "https://math.stackexchange.com/questions/4392187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a math theorem by which a contour integral is equal to a double integral? I was reading Maxwell's relations and came across: $$\oint pdV=\oint TdS\Rightarrow \iint dpdV=\iint dTdS.$$ I know this is straightforward to see since they both represent the surface area, but I've never seen a math theorem on textbooks that indicates $$\oint ydx=\iint dxdy.$$ Is this just a trivial corollary?
This is Stokes' theorem, $$ \int_S \mathrm{d}\omega = \int_{\partial S} \omega$$ for a 1-form $\omega = y\mathrm{d}x$, $S$ a surface and $\partial S$ its boundary.
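Concretely, the planar case is Green's theorem: for a counterclockwise boundary curve, $\oint y\,dx = \iint dy\wedge dx = -\iint dx\,dy$, so the two integrals agree up to a sign fixed by the orientation convention. A numerical sketch on the unit circle, approximating the curve by a closed polygon (for a polygon, $\sum \bar y\,\Delta x = -\text{Area}$ exactly, by the shoelace formula):

```python
import numpy as np

N = 100_000
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
x, y = np.cos(t), np.sin(t)                     # counterclockwise unit circle
xn, yn = np.roll(x, -1), np.roll(y, -1)         # next vertex of the closed polygon
line_integral = np.sum(0.5 * (y + yn) * (xn - x))   # trapezoidal estimate of oint y dx
area = 0.5 * np.sum(x * yn - xn * y)                # shoelace area, approx iint dx dy = pi
```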
{ "language": "en", "url": "https://math.stackexchange.com/questions/4392567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Let $G_1$ and $G_2$ be groups and $\pi_1:G_1\times G_2\rightarrow G_1$ Let $G_1$ and $G_2$ be groups and $\pi_1:G_1\times G_2\rightarrow G_1$ be the function defined by $\pi_1(a,b)=a$. Prove that $\pi_1$ is a homomorphism, find $\ker(\pi_1)$, and prove $(G_1\times G_2)/\ker(\pi_1)$ is isomorphic $G_1$. I've already shown the first two parts of the question, but I'm confused on how to proceed for the last proof. First, let $G_1$ and $G_2$ be groups. Let $\pi_1:G_1\times G_2\rightarrow G_1$ be the function defined by $\pi_1(a,b)=a$. Let $a_1,a_2\in G_1$ and $b_1,b_2\in G_2$. Then, $\pi_1(a_1a_2,b_1b_2)=a_1a_2$, but $a_1=\pi_1(a_1,b_1)$ and $a_2=\pi_1(a_2,b_2)$, so $\pi_1(a_1a_2,b_1b_2)=\pi_1(a_1,b_1)\pi_1(a_2,b_2)$. $\pi_1$ is a homomorphism. Next, note $\ker(\pi_1) = \{(a,b)\mid \pi_1(a,b)=e_1\}$. Since $G_1$ is a group, $e_1$ is unique, and thus $\ker(\pi_1)=\{(e_1,b)\mid b\in G_2\}$. We can write $\ker(\pi_1)=e_1\times G_2$. From this point on I think my confusion comes in as to what the structure of $(G_1\times G_2)/\ker(\pi_1)$ is? I don't exactly know what this looks like or how to proceed in proving isomorphic to $G_1$. Is there a simple way to go about this? I think I do have to show some arbitrary map $\phi$ is a bijection from $(G_1\times G_2)/\ker(\pi_1)$ to $G_1$? Edit: We do not yet have the First Isomorphism Theorem. We have proven the Fundamental Homomorphism Theorem-- which is often called the same thing, but is stated in a different way.
All you need to know is the first isomorphism theorem: this map is a surjection onto $G_{1}$, so $(G_{1}\times G_{2})/\ker(\pi_{1})\cong G_{1}$. If you want to proceed directly by hand, you can define $f:(G_{1}\times G_{2})/\ker(\pi_{1})\to G_{1}$ by $f((a,b)\ker(\pi_{1}))=a$. First note that the cosets are determined by the first component alone: $(a,b_{1})\ker(\pi_{1})=(a,b_{2})\ker(\pi_{1})$ for all $b_{1},b_{2}\in G_{2}$, because $(a,b_{1})^{-1}\circ (a,b_{2})=(e,b_{1}^{-1}b_{2})\in \ker(\pi_{1})$; this also shows that $f$ is well defined. Now observe that $\ker(f)$ consists of the cosets $(a,b)\ker(\pi_{1})$ with $a=e$, i.e. only the coset $\ker(\pi_{1})$ itself, which is the identity element in $(G_{1}\times G_{2})/\ker(\pi_{1})$. This gives you injectivity. And for any $a\in G_{1}$, the coset $(a,e)\ker(\pi_{1})\in (G_{1}\times G_{2})/\ker(\pi_{1})$ is a preimage of $a$, so $f$ is surjective. Hence done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4392769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does a differential equation always have continuous solutions? The one-dimensional time-independent Schroedinger Equation is defined as $$E\phi(x)=-\frac{\hbar^2}{2m}\frac{d^2\phi(x)}{dx^2}+V(x)\phi(x)$$ In my QM1 lecture, the Professor said that the solutions $\phi(x)$ are continuous because the Schroedinger Equation is a differential Equation. Why is this the case? Which theorem states that the solutions to differential equations are continuous?
If you want a solution to the equation as stated, the second derivative must exist for the equation to have any meaning. As a result the first derivative must exist and so the function must be continuous. However, we are sometimes interested in potentials that act like a delta distribution or potentials that are infinite in such a way as to preclude finding a solution for which a finite second derivative exists at all points. In these cases, we could either give up or weaken our initial requirement. One way to do so would be to integrate both sides of the equation and only require that the integral of the left hand side equals the integral of the right. Then we only need the first derivative to exist and so we can find pick potentials which would result in solutions with discontinuous first derivatives. (Try plugging in a delta distribution into this integrated eigenvalue problem) Another way we could weaken the requirement is by defining a sequence of potentials, which in the limit approaches our badly behaved potential of interest. For each potential in the sequence we can find a solution in the strict sense and then we can ask what the solution looks like in the limit. This limit solution need not have a second derivative. In fact, you can find a sequence of potentials, which generates a step function as a solution in the limit. (Try it! pick a limit of functions which approaches a step function, differentiate twice and figure out what potential would result in the sequence of solutions) One more thought: it is possible to make a physical argument that rules out an infinite first derivative and therefore also a discontinuous solution. If the first derivative is infinite, then the momentum is, too. That implies the kinetic energy is infinite. Although, I'm not sure if it also implies the total energy is infinite. In principle, I'm not sure if infinite kinetic energy is a problem as long as the total energy is finite in the limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4393073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I compute the probability in this exercise? I need to define the probability space $\Omega$ and it's probability function $\Bbb{P}$ for the following exercise: For a fair die which we toss two times, compute the probability that the parity of the two numbers that show up matches. My Idea was the following. We define $\Omega=\{1,...,6\}^2$, then $|\Omega|=36$. We have the following partition into even and odd numbers:$$\Omega=\{1,3,5\}~\dot\cup~\{2,4,6\}=:A~\dot \cup ~B$$Now let us define $\Lambda\subset \Omega$ such that it contains all pairs $(u,v)$ where $u,v\in A$ or $u,v\in B$. * *Case $1$: $u,v\in A$. Then we have $3^2=9$ possibilities. *Case $2$: $u,v\in B$. Then we have $3^2=9$ possibilities. Thus we have $18$ possibilities and $$\Bbb{P}(\Lambda)=\frac{|\Lambda|}{|\Omega|}=\frac{18}{36}=\frac{1}{2}$$ Is this correct so?
Looks fine overall! One suggestion: you might be a bit more explicit about what the measure $\mathbb P$ is actually defined to be. It's not hard, of course -- it just assigns a measure of $1/36$ to each of the $36$ atomic elements (i.e. single pairs). I think it'd be good to specify that for completeness' sake, since in principle you could construct an alternative probability measure that might correspond to, say, rolling weighted dice. Additionally, you were apparently asked to construct the probability measure, and your posted solution seems to just use it without ever stopping to say how it's defined. That said, that comment is a bit nitpicky. The probability measure you have implicitly chosen is, in any reasonable sense, the default one.
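The finite sample space makes this directly checkable by enumeration; a minimal sketch:

```python
from itertools import product

omega = list(product(range(1, 7), repeat=2))              # all 36 outcomes, P uniform
matches = [(u, v) for (u, v) in omega if u % 2 == v % 2]  # the event: parities agree
prob = len(matches) / len(omega)
```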
{ "language": "en", "url": "https://math.stackexchange.com/questions/4393223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many different number combinations are there between 1000 and 2999 inclusive? This is a question I made that I would like to solve. Here are some points: * *I am referring to combinations, not permutations. Therefore, whilst there are 2000 permutations of numbers between 1000 and 2999 inclusive, there should be less combinations. This is because the number 2324 has the same combination of numbers as 2342, just not in the same order, and should be considered the same combination of 4 digits. Order doesn't matter - it should solely depend on the different numbers obtained. * *As an example: The numbers 2341, 1432, 1342, 2431, 2143, and so on are all a combination of the four numbers, 1, 2, 3 and 4. *Each digit between 0 and 9 can repeat as many times as needed. (2222, 1111 is possible). *The 4-digit number must start with 1 or 2. This is really the only restriction. Please alert me if there's something I need to clarify. I've had a go, and tried researching, but I'm really lost... I thought that dividing the total number of permutations, which is 2000, by possible repetitions would get the answer, such as by 2! for repetitions of 2 specific digits. But, this doesn't account for repetitions of 3 or 4 digits. I just need a hint to help get there. I'm sure the solution could be smoother than I imagine.
For convenience, split on the leading digit. With leading digit $1$, we essentially want to count the multisets formed by the remaining three digits, i.e. weakly increasing sequences $0\leq x \leq y \leq z \leq 9$. This is exactly equivalent to counting strictly increasing sequences $1\leq x+1 < y+2 < z+3 \leq 12$, of which there are $\binom{12}3 = 220$. With leading digit $2$, any combination whose remaining three digits contain a $1$ has already been counted (rearrange it to put the $1$ in front), so those digits must come from the $9$ values $\{0,2,3,\dots,9\}$; by the same argument there are $\binom{11}3 = 165$ such multisets. Thus ans $=\binom{12}3 + \binom{11}3 = 385$
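Since the range is small, the count can be confirmed by brute force, identifying each number with the multiset of its digits:

```python
# two numbers are the same "combination" exactly when their sorted digit tuples agree
combos = {tuple(sorted(str(n))) for n in range(1000, 3000)}
count = len(combos)
```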
{ "language": "en", "url": "https://math.stackexchange.com/questions/4393372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If $\underbrace{f*f*\ldots*f}_{n\text{ times}}\to f$ uniformly, then the continuous $2\pi$-periodic function $f$ is a trigonometric polynomial Let $f$ be a $2 \pi$-periodic continuous function. Given $$g_1 = f, \qquad g_2 = f * f, \quad \cdots \quad g_{n} = \underbrace{f * f * \cdots * f}_{n \text{ times}} $$ where $*$ denotes convolution, and assume $ g_{n} \xrightarrow{n\to\infty} f $ uniformly. Prove that $f $ is a trigonometric polynomial. Im not sure where to start, I have proved in previous parts that if $f$ is Dirichlet kernel $$\sum_{n=-N}^{N}e^{i n x}$$ then for any $n\in \mathbb{N} $ we have $g_n=f$, but I am not sure how to use it, if it is relevant at all. Any help would be appreciated. Thanks in advance.
By the convolution theorem, we have $$c_k[g_n] = c_k[f]^n \qquad \forall n \ge 1, \ \forall k \in \mathbb{Z},$$ where $c_k$ denotes the $k$-th Fourier coefficient. Furthermore, we have $c_k[g_n] \to c_k[f]$ for all $k \in \mathbb{Z}$ due to the uniform convergence $g_n \rightrightarrows f$ (can you show this?). But $c_k[f]^n \xrightarrow{n \to \infty} c_k[f]$ implies that $c_k[f] \in \{ 0, 1 \}$ for every $k \in \mathbb{Z}$. But in this case $f(x) = \sum_{k \in \mathbb{Z}} c_k[f] e^{i k x}$ diverges in $L^2([0, 2 \pi])$ unless there is an $m \in \mathbb{N}$ such that $c_k[f] = 0$ for all $k \in \mathbb{Z}$ with $| k | > m$. Hence $f$ is a trigonometric polynomial of degree at most $m$ (whose coefficients are either $0$ or $1$).
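The first step (coefficients of a convolution multiply) has a discrete analogue for circular convolution and the DFT, which one can verify numerically (an analogy, not the continuous statement itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a = rng.standard_normal(N)
# circular (periodic) convolution of a with itself
conv = np.array([sum(a[j] * a[(k - j) % N] for j in range(N)) for k in range(N)])
A = np.fft.fft(a)
# DFT of the circular convolution equals the pointwise product of DFTs
```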
{ "language": "en", "url": "https://math.stackexchange.com/questions/4393558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is a group (finite or infinite) always isomorphic to a subgroup of $GL_n(\mathbb C)$? Let $G$ be an arbitrary group (finite or infinite), let $GL_n(\mathbb C)$ be the general linear group, and let $\varphi : G \rightarrow GL_n(\mathbb C)$ be a homomorphism. My question is: can we always find such a $\varphi$ that is injective? In other words, can any infinite group be faithfully represented by $n \times n$ complex matrices, with a distinct matrix for each element of $G$? Here are some examples. A finite group like the quaternion group $\mathrm Q=\{\pm\mathrm{1}, \pm\mathrm{i}, \pm\mathrm{j}, \pm\mathrm{k}\}$ can be represented by $8$ complex matrices, namely $1=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\mathrm{i}= \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix},\mathrm{j}= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},\mathrm{k}= \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$ together with their negatives. And an infinite group like $(\mathbb Z, +)$ can simply be represented by the $1\times 1$ matrices $(e^n), n \in \mathbb Z.$ So this problem occurred to me. Now, one idea is that the infinite group $G$ is generated by finitely many elements $\{1, x_1, ..., x_n\}$, and that these elements are somewhat 'isolated' from each other. The word 'isolated' may mean that $ x_m \not= x_1^{r_1}x_2^{r_2}\dots x_{m-1}^{r_{m-1}}.$ But at least for now, there are still 3 problems with the idea: 1. How to define the word 'isolated'? 2. How to handle non-abelian situations? 3. Some infinite groups such as $\mathbb R^*$ cannot be generated by finitely many elements, but $\mathbb R^*$ can still be represented by matrices.
All finite groups are isomorphic to a subgroup of $Gl_n(\mathbb{C})$ for some $n$. This can be shown by first showing that every finite group is a subgroup of a permutation group, and then that every permutation group sits inside a matrix group. Infinite groups in general need not be isomorphic to a subgroup of $Gl_n(\mathbb{C})$ for any $n$. If I recall correctly, one of the smallest/simplest counterexamples is the direct sum of infinitely many copies of $\mathbb{Z}_2$. Essentially one can build infinite groups that are just too big to fit into $Gl_n(\mathbb{C})$.
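The finite case can be illustrated concretely: Cayley's theorem embeds a finite group into permutations, and each permutation becomes a permutation matrix. A sketch for the cyclic group $\mathbb Z_5$ (the choice of group is just for illustration):

```python
import numpy as np

n = 5  # the cyclic group Z_n under addition mod n

def M(g):
    """Permutation matrix of the left translation x -> g + x (mod n)."""
    P = np.zeros((n, n), dtype=int)
    for x in range(n):
        P[(g + x) % n, x] = 1
    return P

# homomorphism: M(g + h) = M(g) M(h), and distinct elements give distinct matrices
homomorphism = all(
    np.array_equal(M((g + h) % n), M(g) @ M(h)) for g in range(n) for h in range(n)
)
injective = len({M(g).tobytes() for g in range(n)}) == n
```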
{ "language": "en", "url": "https://math.stackexchange.com/questions/4393752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that the optimum value for the problem is $\frac{\lambda_1}{2}$ Let $G\in\mathbb{R}^{n\times n}$ be a symmetric matrix with eigenvalues $\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n$. Let the problem be: $$\text{minimize}\quad\frac{1}{2}\vec{x}^TG\vec{x}$$ $$\text{s.t.} \quad \vec{x}^T\vec{x}=1$$ Show that the optimum value for the problem is $\frac{\lambda_1}{2}$ My try: The Lagrangian is: $$L=\frac{1}{2} \vec{x}^T G\vec{x} + \mu(\vec{x}^T \vec{x}-1)$$ Then if we take the gradient with respect to $\vec{x}$ and set it equal to the zero vector: $$\nabla_x L= G\vec{x}+2\mu\vec{x}=\vec{0}$$ Then $$\vec{x}=G^{-1}(-2\mu\vec{x})$$ I'm not sure on how to solve for $\vec{x}$ here. Any suggestions would be great!
Hint: $Gx=-2\mu x$ implies that $x$ is an eigenvector of $G$.
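Completing the hint: for a unit eigenvector $x$ with eigenvalue $\lambda$, the objective is $\frac12 x^TGx=\frac\lambda2$, so the minimum over the sphere is $\frac{\lambda_1}2$. A numerical sketch on a random symmetric matrix (the size $4$ and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
G = (A + A.T) / 2                      # a symmetric matrix
w, V = np.linalg.eigh(G)               # eigenvalues in ascending order
x_star = V[:, 0]                       # unit eigenvector for the smallest eigenvalue
f_star = 0.5 * x_star @ G @ x_star     # attains lambda_1 / 2
# random unit vectors should never do better than f_star
samples = rng.standard_normal((1000, 4))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
f_samples = 0.5 * np.einsum('ij,jk,ik->i', samples, G, samples)
```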
{ "language": "en", "url": "https://math.stackexchange.com/questions/4393930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Characterization of closed sets similar to open sets doesn't exist? In a topological space $X$, it is a simple result (using axiom of choice) that a subset $U$ is open if and only if each of $x\in U$ has an open neighborhood contained in $U$. I was wondering if we can characterize the closed subsets in a similar manner, i.e., using closed neighborhoods of points of a closed set $K$. The above characterization uses the arbitrary union property of open sets and I reckon that the characterization that I am seeking should use the arbitrary (nonempty) intersection property of closed sets. However, I am stuck. Any help? Note that I am not looking for this characterization: $K$ is closed $\iff$ all points in $X\setminus K$ have an open neighborhood disjoint from $K$.
A set $K$ in a topological space $X$ is closed if and only if $$ \bigcap_{\substack{K \subseteq C \\ C \text{ closed}}} C = K $$ This uses the arbitrary intersection of closed sets that you wanted. This just expresses that $K$ is closed if and only if $\overline{K}=K$. (The left-hand side is just an alternate--and my favorite--definition of the closure of $K$.) This is the exact dual of the following definition of the interior of a set $A$: $$ \mathrm{int}A = \bigcup_{\substack{U \subseteq A \\ U \text{ open}}} U. $$ And, of course, $A$ is open if and only if $\mathrm{int}A=A$. Alternatively, use the fact that $K$ is closed if and only if it contains all its cluster points. Thus $K$ is closed if and only if each point $x \in X$ that has the property that every open neighborhood of $x$ intersects $K$ necessarily has $x \in K$.
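The first characterization can be tested mechanically on a small finite topological space (the three-point chain topology below is just an illustrative choice):

```python
# a (non-Hausdorff) topology on X = {0,1,2}: the open sets form a chain
X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]
closeds = [X - U for U in opens]       # complements: {0,1,2}, {1,2}, {2}, {}

def closure(K):
    """Intersection of all closed sets containing K."""
    result = X
    for C in closeds:
        if K <= C:
            result &= C
    return result

cl = closure(frozenset({1}))  # {1} is not closed here; its closure is {1, 2}
```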
{ "language": "en", "url": "https://math.stackexchange.com/questions/4394106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about predicate logic I know how to represent the sentence “there is exactly one person that is happy”, ∀y∀x((Happy(x)∧Happy(y))→(x=y)) Edit: ∃x∀y(y=x↔Happy(y)) (NOW, I actually know how to represent it) Where x and y represent a person. However, my problem is that I can’t figure out how to say “there are exactly 3 people that are happy” in predicate logic.
You can extend this simply: say that if any 4 people are happy then at least two of them are equal, and that there exist three distinct happy people. $\forall x,y,z,w((Happy(x)\land Happy(y) \land Happy(z) \land Happy(w))\rightarrow(x=y \lor x=z \lor x=w \lor y=z \lor y=w \lor z=w)) \land \exists x,y,z (Happy(x)\land Happy(y) \land Happy(z) \land x\neq y \land y\neq z \land x \neq z) $
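Over a finite domain, one can literally evaluate this formula and confirm that it holds exactly when three people are happy (a brute-force sketch; the domain size $6$ is arbitrary):

```python
from itertools import product, combinations

def exactly_three_holds(domain, H):
    """Literal evaluation of the displayed formula over a finite domain."""
    univ = all(x == y or x == z or x == w or y == z or y == w or z == w
               for x, y, z, w in product(domain, repeat=4)
               if x in H and y in H and z in H and w in H)
    exist = any(x in H and y in H and z in H and x != y and y != z and x != z
                for x, y, z in product(domain, repeat=3))
    return univ and exist

domain = range(6)
# the formula should hold precisely for the subsets of size 3
ok = all(exactly_three_holds(domain, set(hs)) == (len(hs) == 3)
         for r in range(7) for hs in combinations(domain, r))
```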
{ "language": "en", "url": "https://math.stackexchange.com/questions/4394247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Sum of interior angles of a polygon I need help with this exercise. "Using the figure below, determine the measure of the interior angle at vertex A." Choose one: a. $60^\circ$ b. $150^\circ$ c. $300^\circ$ d. $150^\circ$ I would like to know if what I did is right. Since polygon has a 7 sides the sum of the interior angles is $(7-2)180^\circ=900^\circ$. Then, $$2x+2x+6x+5x+5x+5x+5x=900$$ $$30x=900$$ $$x=30$$ Then, Angle with vertex $A$ has a measure $5x=5(30)=150^\circ$. Is this ok? Appreciate the help in advance.
$\newcommand{\d}{^\circ}$Imagine driving a car counterclockwise around this circuit. At angle $A,$ you turn $180\d-5x$ to the right. Add up all your right turns as you go around, counting a turn to the left as a negative number of degrees to the right. To complete one full circuit, you turn $360\d$ to the right. So \begin{align} 360\d & = 180\d-5x \\ & {} + 180\d-5x \\ & {} + 180\d-5x \\ & {} + 180\d-2x \\ & {} + 180\d-6x \\ & {} + 180\d-2x \\ & {} + 180\d - 5x \end{align} Solve that for $x.$
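Following this recipe with the angle expressions read off the figure (as listed in the question's attempt), the turning-angle equation can be solved symbolically (a sketch assuming sympy):

```python
from sympy import symbols, Eq, solve

x = symbols('x')
interior = [5*x, 2*x, 6*x, 2*x, 5*x, 5*x, 5*x]     # the seven interior angles
turning = Eq(sum(180 - a for a in interior), 360)  # total right turn over one circuit
sol = solve(turning, x)
angle_A = 5 * sol[0]                               # interior angle at vertex A
```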
{ "language": "en", "url": "https://math.stackexchange.com/questions/4394435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The space spanned by the set of coordinate projections is dense in dual space This answer leads me to below result. Let $E := \ell^p$ with $1 < p < \infty$. Let $\pi_n: E \to \mathbb R, x \mapsto x_n$ be the canonical projection. Clearly, $\pi_n \in E'$. Let $G := \{\pi_1, \pi_2, \ldots\}$. Then $\operatorname{span} G$ is dense in $E'$. Below is my proof for which I use the fact that $\ell^p$ space is uniformly convex and thus reflexive for all $p \in (1, \infty)$. * *Is my conclusion that $s_n \to \sum_{m} x_m f(y^m)$ correct? *Does the result hold for $p=1$ or $p = \infty$? My attempt: Define $y^n \in E$ with $y^n =(y_1^n, y_2^n, \ldots)$ by $y_m^n = \delta_{mn}$. For $x = (x_1, x_2, \ldots) \in E$, we can write $x = \sum_{m} x_m y^m$. For $x\in E$ and $f\in E'$, let $$ x^n := \sum_{m \le n} x_m y^m \in E \quad \text{and} \quad s_n := f(x^n) = \sum_{m \le n} x_m f(y^m) \in \mathbb R \quad \forall n \in \mathbb N^*. $$ We have $|x^n-x|^p_p = \sum_{m>n} |x_m|^p \xrightarrow{n \to \infty}0$ because $|x|_p^p = \sum_{m} |x_m|^p < \infty$. This implies $x^n \to x$. Because $f$ is linear continuous, we have $s_n \to f(x)$. On the other hand, $\color{blue}{s_n \to \sum_{m} x_m f(y^m)}$, so $f(x) = \sum_{m} x_m f(y^m)$. We have $E$ is uniformly convex and thus reflexive. Let $\varphi \in E''$ such that $\varphi_{\mid G} \equiv 0$. There is $z = (z_1, z_2, \ldots) \in E$ such that $\varphi(f) = f(z)$ for all $f\in E'$. We have $$ f(z) = \sum_m z_m f (y^m) = \sum_m \pi_m(z) f (y^m) = \sum_m \varphi(\pi_m) f (y^m) = 0. $$ It follows from Hahn-Banach theorem, that $\overline{\operatorname{span} G} = E'$. This completes the proof.
For $p=1$ we have $E'=\ell^\infty.$ The isometric isomorphism is given by $$\ell^\infty \ni x\longmapsto \sum_{n=1}^\infty x_n\pi_n\in E'.$$ The linear span $V$ of $\{\pi_n\}_{n=1}^\infty$ is a separable subspace of $E'$, and so is its closure $\overline{V}.$ The entire space is not separable, hence $\overline{V}\subsetneq E'.$ Another straightforward argument: consider $\varphi =\displaystyle \sum_{n=1}^\infty \pi_n.$ We have $\varphi\in E'$ but $d(\varphi, V)=1,$ where $d(\varphi, V)=\inf_{v\in V}\|\varphi-v\|_{E'}.$ For $p=\infty$ the separability argument can be applied. Indeed, $E'$ is not separable, being the dual space of a nonseparable space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4394621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difficulty understanding the meaning of quotient rings with regards to polynomials. I am having difficulty understanding quotient rings with regards to polynomials. For example, the quotient ring $\mathbb{F_2}[x]/(x^3 + 1)$ maps to the set of all remainders when a polynomial of $x$ (with coefficients in $\mathbb{F}_2$) is divisible by $x^3 + 1$. However, it's hard for me to comprehend the exact elements in this quotient ring. Obviously if the quotient ring is something like $\mathbb{Z}/5$, I know that it maps to the set $\{0, 1, 2, 3, 4\}$, since that is the set of possible remainders when any number is divided by $5$. However, with polynomials, I feel it's more difficult to understand because for instance, I know that when a polynomial of $x$ is divided by $x^3 + 1$, the remainder obviously has to be less than $x^3 + 1$, but a polynomial such as $x^2 + 1$ is greater than $x^3 + 1$ for the domain $(0, 1)$ but not so when $x > 1$. So I'm wondering if there's an objective algorithm to determine the set of all remainders when a polynomial is divided by another polynomial.
The notion of "size" we use when we quotient polynomial rings is not the absolute value, but the degree. So for instance, something like $100x^2 + 500$ (which is degree $2$) would be considered smaller than $x^5 - 50 x^4$ (which is degree $5$), because we only care about the degree. As for how we compute residue classes, we use the same division algorithm that we're used to in order to compute a remainder. You can see a series of worked out examples here, say. Now, if we look at $\mathbb{F}_5 = \mathbb{Z} / 5$, the elements we're left with are those which are "smaller" than $5$. So $\{0,1,2,3,4\}$. If we look at $\mathbb{F}_3[x] / (x^3 - x^2 + 1)$, then the elements we're left with are those which are "smaller" than $x^3 - x^2 + 1$. That is, those polynomials of degree $< 3$. So the elements are $$ \begin{align} 0 && 1 && 2 \\ x && x+1 && x+2 \\ 2x && 2x+1 && 2x+2 \\ x^2 && x^2 + 1 && x^2 + 2 \\ x^2 + x && x^2 + x+1 && x^2 + x+2 \\ x^2 + 2x && x^2 + 2x+1 && x^2 + 2x+2 \\ 2x^2 && 2x^2 + 1 && 2x^2 + 2 \\ 2x^2 + x && 2x^2 + x+1 && 2x^2 + x+2 \\ 2x^2 + 2x && 2x^2 + 2x+1 && 2x^2 + 2x+2 \\ \end{align} $$ Notice there are $27$ of these, since a general polynomial looks like $a_2 x^2 + a_1 x + a_0$, where each $a_i \in \mathbb{F}_3$. In general, if $f$ is a polynomial of degree $n$, then $\mathbb{F}_p [x] / (f)$ will have $p^n$ elements for exactly this reason. As a quick example of how to do reduction, let's reduce $x^4$ mod $x^3 - x^2 + 1$ in $\mathbb{F}_3[x]$. The key insight is that since $x^3 - x^2 + 1 = 0$ in the quotient ring, we can rearrange this to see $x^3 = x^2 - 1$. Now $x^4 = (x) (x^3)$, which, by the above argument is $(x) (x^2 - 1)$, or $x^3 - x$. Now we go again, since we still see an $x^3$, and this becomes $(x^2 - 1) - x$. So then $x^4 = x^2 - x - 1$, and since we are working in a polynomial ring over $\mathbb{F}_3$, we can rewrite this as $x^2 + 2x + 2$, which would be the final answer. 
Obviously it's tedious to have to repeatedly subtract off factors of $x^3$ and replace them, but thankfully we don't need to! In addition to the video I linked earlier, you can see a worked example for the division algorithm here which lets you efficiently compute $p(x)$ mod $f(x)$. I hope this helps ^_^
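As a concrete sanity check of the reduction described above, here is a small pure-Python sketch of polynomial long division over $\mathbb{F}_p$. The function name and the list-of-coefficients convention are my own, not part of the answer:

```python
def poly_mod(f, g, p):
    """Remainder of f divided by g, with coefficients taken mod p.
    Polynomials are lists of coefficients, highest degree first."""
    g = [c % p for c in g]
    while g and g[0] == 0:          # strip leading zeros of the divisor
        g = g[1:]
    inv_lead = pow(g[0], -1, p)     # inverse of the leading coefficient mod p
    r = [c % p for c in f]
    while True:
        while r and r[0] == 0:      # strip leading zeros of the remainder
            r = r[1:]
        if len(r) < len(g):
            return r
        factor = (r[0] * inv_lead) % p
        for i in range(len(g)):     # cancel the leading term of r
            r[i] = (r[i] - factor * g[i]) % p

# x^4 mod (x^3 - x^2 + 1) over F_3 gives x^2 + 2x + 2, as worked out above
print(poly_mod([1, 0, 0, 0, 0], [1, -1, 0, 1], 3))  # [1, 2, 2]
```

Running it on the worked example reproduces the hand computation.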
{ "language": "en", "url": "https://math.stackexchange.com/questions/4394831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Definition of "separate" in Diestel's Graph Theory In Diestel's Graph Theory, "separate" is defined in chapter 1.4 as: If $A, B \subseteq V$ and $X \subseteq V \cup E$ are such that every $A-B$ path in $G$ contains a vertex or an edge from $X$, we say that $X$ separates the sets $A$ and $B$ in $G$. Note that this implies $A \cap B \subseteq X$. So a vertex in $A$ or $B$ is allowed to be in $X$? However, in chapter 3.3, he states the following theorem: A set of $a-B$ paths is called an $a-B$ fan if any two of the paths have only $a$ in common. Corollary 3.3.4. For $B \subseteq V$ and $a \in V \backslash B$, the minimum number of vertices separating $a$ from $B$ in $G$ is equal to the maximum number of paths forming an $a-B$ fan in $G$. Here is the contradiction: isn't $\{a\}$ itself a separator of $a$ and $B$? That would make the corollary wrong in most cases. This is such a fundamental and important concept, yet I can't find an explicit explanation in the book. Could someone clarify it? Thanks!
What Diestel is doing here is not standard; most treatments simply do not allow the case $A \cap B \ne \varnothing$. It makes some theorems a bit simpler or a bit stronger: for example, the proof of this corollary in Diestel reads "apply Menger's theorem to $N(a)$ and $B$" and this would ordinarily need to be qualified if $a$ had neighbors in $B$. The downside is that it's easy to make mistakes in the statements, like here. My version of Diestel's Graph Theory (an electronic version of the third edition) has Corollary 3.3.4. For $B \subseteq V$ and $a \in V \smallsetminus B$, the minimum number of vertices $\ne a$ separating $a$ from $B$ in $G$ is equal to the maximum number of paths forming an $a{-}B$ fan in $G$. Note the extra qualifier "$\ne a$" which addresses your concern.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4394930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Period of an element in direct product of finite semigroups Let $S$ and $T$ be finite semigroups and let $(x,y)\in S\times T$. What is the period of $(x,y)$ ? I know that if $$\mathrm{index}(x)=\mathrm{index}(y)$$, then the period of $(x,y)$ is $$\mathrm{lcm(period}(x),\mathrm{period}(y))$$. What can we say in case $\mathrm{index}(x)\neq \mathrm{index}(y)$?
The period is also the $$\mathrm{lcm(period}(x),\mathrm{period}(y))$$ Indeed suppose that $x^{i+p} = x^i$ and $y^{j+q} = y^j$. Let $k = \max\{i, j\}$. Then $x^{k+p} = x^k$ and $y^{k+q} = y^k$ and you are back to your known result.
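A brute-force check of this claim, modelling each monogenic (single-generator) semigroup by the canonical form of its powers; the helper names and test cases below are illustrative choices, not from the answer:

```python
from math import lcm

def canon(n, i, p):
    """Canonical form of x^n for a semigroup element of index i and period p."""
    return n if n < i + p else (n - i) % p + i

def pair_period(i1, p1, i2, p2, n_max=200):
    """Smallest q > 0 with (x, y)^(n+q) = (x, y)^n for all large enough n."""
    seq = [(canon(n, i1, p1), canon(n, i2, p2)) for n in range(1, n_max)]
    k = max(i1, i2)                 # past both indices the sequence is periodic
    for q in range(1, n_max - k):
        if all(seq[n] == seq[n + q] for n in range(k, n_max - 1 - q)):
            return q

for i1, p1, i2, p2 in [(1, 2, 3, 3), (2, 4, 5, 6), (1, 1, 4, 5)]:
    assert pair_period(i1, p1, i2, p2) == lcm(p1, p2)
```

In every case the computed period of the pair equals $\mathrm{lcm}(\mathrm{period}(x),\mathrm{period}(y))$, regardless of whether the indices match.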
{ "language": "en", "url": "https://math.stackexchange.com/questions/4395075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An identity for rooted plane labelled trees I wish to prove combinatorially that $$\frac{1}{1-\sum_{n=1}^{\infty} \frac{(n-1)^{n-1} z^n}{n!}} = \frac{T(z)}{z}$$ where $T(z)= \sum_{n=1}^{\infty} \frac{n^{n-1}z^n}{n!}$. $T(z)$ is the exponential generating function for rooted plane labelled trees, so the right-hand side is this series times $\frac{1}{z}$. Define $\mathcal{F}$ to be the combinatorial class with $F_n = (n-1)^{n-1}$; then the exponential generating function for this class is $F(z) = \sum_{n=1}^{\infty}\frac{(n-1)^{n-1}z^n}{n!}$. Now it suffices to prove that $\text{SEQ}(\mathcal{F})$ gives the exponential generating function for the set of rooted plane labelled trees, as $T(z)$ is known to satisfy $T(z)=ze^{T(z)}$. Any hints would be appreciated.
This can be understood using the fact that any parking function (counted by $(n+1)^{n-1}$) is a sequence of prime parking functions (counted by $(n-1)^{n-1}$). See section 2 in the article https://arxiv.org/pdf/math/0312126.pdf for the statement, and the references [10] and [22] there for more details.
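As a sanity check of the identity in the question, one can verify it as a truncated formal power series with exact rational arithmetic (the truncation order $N$ and variable names are my own):

```python
from fractions import Fraction
from math import factorial

N = 10  # check coefficients of z^0 .. z^9

# S(z) = sum_{n>=1} (n-1)^(n-1) z^n / n!   (note 0**0 == 1 in Python)
S = [Fraction(0)] + [Fraction((n - 1)**(n - 1), factorial(n)) for n in range(1, N)]
# T(z)/z = sum_{m>=0} (m+1)^m z^m / (m+1)!
G = [Fraction((m + 1)**m, factorial(m + 1)) for m in range(N)]

# coefficient-wise check that (1 - S(z)) * (T(z)/z) = 1
for k in range(N):
    conv = sum(((1 if i == 0 else 0) - S[i]) * G[k - i] for i in range(k + 1))
    assert conv == (1 if k == 0 else 0)
print("identity verified up to order", N - 1)
```

Every coefficient of the product comes out exactly $1, 0, 0, \ldots$, consistent with the prime parking function decomposition.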
{ "language": "en", "url": "https://math.stackexchange.com/questions/4395292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to evaluate the following prefix expression? Can you please help in evaluating following prefix expression? Thank you. – ^ / + + 4 6 2 – 4 1 2 9
The standard way to evaluate a prefix expression is with a stack. If you have not heard of it before, it is easy to understand anyway, as I will show. We will process your expression – ^ / + + 4 6 2 – 4 1 2 9 in reverse order, while maintaining a list of numbers. Every time we see a number, we append it to the end of the list. If we see an operator (+ - * / ^), we calculate the result using the last two numbers in our list, remove those two numbers, and append the result instead. Below, I process each token of your expression one by one, while keeping track of the list.

*Token = 9, append to list, list = [9]
*Token = 2, append to list, list = [9, 2]
*Token = 1, append to list, list = [9, 2, 1]
*Token = 4, append to list, list = [9, 2, 1, 4]
*Token = -, calculate result using $4$ and $1$, giving us $4 - 1 = 3$, and we append 3: list = [9, 2, 3]
*Token = 2, append to list, list = [9, 2, 3, 2]
*Token = 6, append to list, list = [9, 2, 3, 2, 6]
*Token = 4, append to list, list = [9, 2, 3, 2, 6, 4]
*Token = +, calculate 6 + 4, list = [9, 2, 3, 2, 10]
*Token = +, calculate 2 + 10, list = [9, 2, 3, 12]
*Token = /, calculate 12 / 3, list = [9, 2, 4]
*Token = ^, calculate 4 ^ 2, list = [9, 16]
*Token = -, calculate 16 - 9, list = [7]

As you can see, the result is 7. If at any point an operator arrives while the list holds fewer than two numbers, or more than one number remains at the end, the expression is invalid. Hope this helps.
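The procedure above is only a few lines of Python. This sketch uses the ASCII character '-' for the minus signs (the original expression uses en-dashes) and treats '^' as exponentiation:

```python
def eval_prefix(tokens):
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b,
           '^': lambda a, b: a ** b}
    stack = []
    for tok in reversed(tokens):
        if tok in ops:
            if len(stack) < 2:
                raise ValueError("invalid prefix expression")
            a = stack.pop()          # operand nearer the operator
            b = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    if len(stack) != 1:
        raise ValueError("invalid prefix expression")
    return stack[0]

print(eval_prefix("- ^ / + + 4 6 2 - 4 1 2 9".split()))  # 7.0
```

Running it on the expression in the question reproduces the trace above and returns 7.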
{ "language": "en", "url": "https://math.stackexchange.com/questions/4395539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the probability that not all the targets in the group will be detected. There is a group of $k$ targets, each of which, independently of the other targets, can be detected by a radar unit with probability $p$. Each of $m$ radar units detects the targets independently of the other units. Find the probability that not all the targets in the group will be detected. My Approach: Let event $A$ be "not all the targets in the group are detected" and let event $B$ denote "all the targets have been detected" $\implies P(A)=1-P(B)$. According to the question, $p$ is the probability that a radar will detect a target $\implies$ $1-p$ is the probability that a radar will not detect a target. So the probability that none of the radars detects a given target is ${(1-p)^m}$. $\implies$ At least one radar detects that target with probability $1-(1-p)^m$. $\implies$ Each of the $k$ targets is detected by at least one radar with probability $(1-(1-p)^m)^k$. $\implies$ $P(B)=(1-(1-p)^m)^k$. $\implies$ $P(A)=1-(1-(1-p)^m)^k$. My doubt: $(1)$ Is my approach correct? $(2)$ Can somebody suggest other methods to solve this problem?
Every target will be aimed at $m$ times, so P(a particular target is not hit) $= (1-p)^m$ P(a particular target is hit) $= 1 - (1-p)^m$ P(all targets are hit) $=(1-(1-p)^m)^k$ P(at least one target not hit) = $1 - (1-(1-p)^m)^k$ So your approach is correct.
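For small parameters the closed form can be checked against an exact enumeration of every detect/miss outcome of the $k\cdot m$ shots. The function names and the test values $p=0.3$, $m=2$, $k=2$ below are my own choices:

```python
from itertools import product

def p_not_all_detected(p, m, k):
    """Closed form: 1 - (1 - (1-p)^m)^k."""
    return 1 - (1 - (1 - p)**m)**k

def brute_force(p, m, k):
    """Exact enumeration over every detect/miss outcome of the k*m shots."""
    total = 0.0
    for bits in product([0, 1], repeat=k * m):
        prob = 1.0
        for b in bits:
            prob *= p if b else (1 - p)
        # target i is undetected iff all m of its shots are misses
        if any(all(bits[i*m + j] == 0 for j in range(m)) for i in range(k)):
            total += prob
    return total

print(p_not_all_detected(0.3, 2, 2), brute_force(0.3, 2, 2))  # both 0.7399...
```

The two numbers agree to machine precision, confirming the formula.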
{ "language": "en", "url": "https://math.stackexchange.com/questions/4395719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve the ODE $y y''=3(y')^2$ using Reduction of Order For the reduction of order method, we are supposed to guess a solution, $y_1$, and assume that the second solution (this will be second order as the title shows) is of the form $y_2=uy_1$. But I'm having difficulty finding a solution by inspection. I've tried the following, which seemed legit at first, only to find that the constant 3 is making the problem more difficult. $$y=e^{3x} \quad y=e^{\sqrt{3}x} \quad y=ax^2 \quad y=\sqrt{x}$$ In the problems given previously, it was pretty straightforward, but this one is really hurting as I can't find a solution by inspection. It could be that we can use some "product rule" manipulation, since $$y y''=3(y')^2 \quad \rightarrow \quad y y''-3(y')^2=0$$ looks like it could be of the form $(y y')'$, but again, that 3 is messing me up. Once I find it, I want to try and find the other solution using reduction of order. So for anyone answering, please don't solve the ODE; just show how I can posit a guess given the information provided in the problem.
You can first change from $y'$ to $x'$, then get a reduction of order. The inverse function formula for the second derivative is $y'' = -\left(\frac{1}{x'}\right)^3x''$. Therefore, your equation becomes: $$-y\,\left(\frac{1}{x'}\right)^3x'' = 3\left(\frac{1}{x'}\right)^2$$ This reduces to: $$-y\,\left(\frac{1}{x'}\right)x'' = 3$$ Now you have a problem where you can do reduction of order for $x$. So, $u = x'$ and $u' = x''$. $$ -y\,\frac{1}{u}\,u' = 3 $$ Alternatively, you can write $u'$ as $\frac{du}{dy}$. So, this becomes: $$ -y\,\frac{1}{u}\,\frac{du}{dy} = 3 \\ \frac{du}{u} = -3\frac{dy}{y} \\ \ln(u) = -3\,\ln(y) + C \\ u = \frac{C}{y^3} \\ \frac{dx}{dy} = \frac{C}{y^3} \\ dx = C y^{-3}\,dy \\ x = Cy^{-2} + D $$ Note that a few times in there, C got merged with another constant, but I didn't feel like spelling it out.
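As a quick numerical check of the family $x = Cy^{-2} + D$ derived above: taking $C = 1$, $D = 0$ gives $y = x^{-1/2}$, and this does satisfy $y\,y'' = 3(y')^2$:

```python
def y(x):   return x**-0.5
def yp(x):  return -0.5 * x**-1.5    # y'
def ypp(x): return 0.75 * x**-2.5    # y''

# both sides of y y'' = 3 (y')^2 agree at several sample points
for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(y(x) * ypp(x) - 3 * yp(x)**2) < 1e-12
print("y = x^(-1/2) satisfies y y'' = 3 (y')^2")
```

Any other member of the family can be checked the same way after shifting and scaling.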
{ "language": "en", "url": "https://math.stackexchange.com/questions/4395920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Find the number of all sequences $\{ a_{n}\}$ in $\{-5,-4,-3,...,0,1,...,100\}$ such that $ |a_{n}| < |a_{n+1}|$. Find the number of all sequences $\{ a_{n}\}$ in $\{-5,-4,-3,...,0,1,...,100\}$ such that $ |a_{n}| < |a_{n+1}|$. I think we can find a unique sequence for every non empty subset of $\{0,1,\ldots ,100\}$ so we have at least $2^{101}-1$ .
Let $\{a_n\}$ be a sequence [possibly empty]. By the reasoning of the OP, the sequence $\{a_n\}$ is completely determined by 1. and 2. together below:

*for each of the $5$ integers $i \in \{1,2,3,4,5\}$, whether $i$, $-i$, or neither is in $\{a_n\}$ [so $3$ choices, and those are precisely the $3$ choices for each of those $5$ integers $i$],
*for each of the remaining $96$ integers $i \in \{0,6,7,8,\ldots, 100\}$, whether or not $i$ is in $\{a_n\}$ [so $2$ choices for each of those $96$ integers $i$].

So this yields precisely $3^5 \times 2^{96}$ choices for $\{a_n\}$, including the empty sequence. There is, however, exactly $1$ empty sequence, so the number of nonempty $\{a_n\}$ is $(3^5 \times 2^{96})-1$. What if the condition were changed to $|a_n| \le |a_{n+1}|$ instead of $|a_n|<|a_{n+1}|$? Then the number of such sequences becomes $4^52^{96}-1$. Can you see why?
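A brute-force count for a smaller universe confirms the counting pattern. The six-element universe below is my own scaled-down example, with two $\pm$ pairs ($\pm1$, $\pm2$) and two unpaired values ($0$ and $3$):

```python
from itertools import combinations, permutations

def count_sequences(vals):
    """Count nonempty tuples of distinct elements with strictly increasing |a_n|."""
    cnt = 0
    for size in range(1, len(vals) + 1):
        for sub in combinations(vals, size):
            for perm in permutations(sub):
                if all(abs(perm[i]) < abs(perm[i + 1]) for i in range(size - 1)):
                    cnt += 1
    return cnt

# two +/- pairs -> 3 choices each; two unpaired values -> 2 choices each;
# minus the empty sequence: 3^2 * 2^2 - 1 = 35
assert count_sequences([-2, -1, 0, 1, 2, 3]) == 35
```

Scaling the same argument up to $\{-5,\ldots,100\}$ gives the $3^5 \times 2^{96} - 1$ count above.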
{ "language": "en", "url": "https://math.stackexchange.com/questions/4396046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Where did the sum come from in the formula for the order statistic $ P(X_{(r)}\leq x) = \sum_{j=r}^n C^n_j F(x)^j (1-F(x))^{n-j} $? The order statistic formula is given as follows: $ P(X_{(r)}\leq x) = \sum_{j=r}^n C^n_j F(x)^j (1-F(x))^{n-j} $ I understand that the binomial coefficient comes from picking $r$ of the $n$ $X$-s to be less than or equal to $x$, but where does the sum come from, intuitively? If more than $r$ of the $X$-s are less than or equal to $x$, shouldn't that be smaller?
I'll review a derivation that I hope will make it more intuitive; see the bold part for the key observation. Recall that if $Y$ is binomial with parameters $n$ and $p$, then $$(*)\quad P(Y \ge r) = \sum_{j=r}^n \binom{n}{j} p^j (1-p)^{n-j}.$$ Now let's assume $X_1,\dots,X_n$ are IID with a CDF $F$. Fix some value $x$. Here's the key: The random variable $X_{(r)}$, the $r$-th order statistic, is less than or equal to $x$ if and only if at least $r$ of the $X_i$'s are less than or equal to $x$. The number of $X_i$-s less than or equal to $x$ is binomial with parameters $n$ and $p=F(x)$, and we therefore apply $(*)$. Note that the summation range is determined by $r$.
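Here is an exact check of the formula for a small discrete case, the $r$-th smallest of $n$ fair-die rolls (the function names and the die example are mine):

```python
from itertools import product
from math import comb

def cdf_order_stat(F_x, n, r):
    """P(X_(r) <= x) = sum_{j=r}^n C(n,j) F(x)^j (1-F(x))^(n-j)."""
    return sum(comb(n, j) * F_x**j * (1 - F_x)**(n - j) for j in range(r, n + 1))

def brute_force_die(x, n, r):
    """Exact P(r-th smallest of n fair-die rolls <= x) by enumeration."""
    hits = sum(1 for rolls in product(range(1, 7), repeat=n)
               if sorted(rolls)[r - 1] <= x)
    return hits / 6**n

# median of 3 rolls: P(X_(2) <= 4) with F(4) = 4/6
print(cdf_order_stat(4/6, 3, 2), brute_force_die(4, 3, 2))  # both 20/27
```

Both computations return $20/27$, matching the binomial-sum derivation.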
{ "language": "en", "url": "https://math.stackexchange.com/questions/4396246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An Olympiad geometry question: two angle bisectors are given, and two interior angles are given. Find the angles of the triangle. This is a problem from the 2017 Bulgaria Selection Test for the Junior Balkan Mathematical Olympiad: Given a triangle $\triangle ABC$ with angle bisectors $AA_1$, $BB_1$, if angle $AA_1B_1$ = $24^{\circ}$ and angle $BB_1A_1$ = $18^{\circ}$, find the other angles of the triangle. So: angle $C$ can be easily deduced with basic geometry. But from here the trouble begins. I tried various constructions (like trying to get the angles into a parallelogram or something). At last, I got the answer by using the sine law repeatedly. But this wasn't satisfactory for me, as I prefer to work in the pure Euclidean way. Using trigonometry I may have got the answer, but not the required satisfaction. Hence, I asked my colleagues and tried myself for over a month (I don't like to give up), and here I am. Anybody with a pure (Euclidean) geometry proof? You have my appreciation.
I would also like help in solving this task. I know the trigonometric solution; how can it be solved in another way? Maybe you can help me. Thanks in advance.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4396479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Elementary inequalities exercise - how to 'spot' the right sum of squares? I have been working through CJ Bradley's Introduction to Inequalities with a high school student and have been at loss to see how one could stumble upon the solution given for Q3 in Exercise 2c. Question: If $ad-bc=1$ prove that $a^2+b^2+c^2+d^2+ac+bd \geq \sqrt{3}$. Solution: Make the inequality homogenous by multiplying both sides by $ad-bc=1$. [That seems sensible.] Take everything onto one side so we now want to show $$a^2+b^2+c^2+d^2+ac+bd -\sqrt{3}(ad-bc) \geq 0 \tag{1}.$$ [That's also a reasonable thing to do. The trouble is coming next...] Now play around until you notice the left hand side can be written as $$\frac{1}{4}(2a+c-\sqrt{3}d)^2 + \frac{1}{4}(2b+d+\sqrt{3}c)^2 \tag{2}.$$ [What??] I played around for a fair while and didn't get to this. I have a suspicion that this question was created by reverse-engineering. What thought processes get you from (1) to (2), without knowing (2) beforehand? How can you get to the solution without pulling a rabbit out of a hat?
Rather than $\sqrt 3 $ here is the matrix algorithm for coefficient $1.$ I will try $\sqrt 3$ in a few minutes Positivity is shown in matrix $D.$ It is then matrix $Q$ that fills in the linear terms, as in: double your form (coefficient $1$) is $$ 2 \left( a + \frac{c}{2} - \frac{d}{2} \right)^2 + 2 \left( b + \frac{c}{2} + \frac{d}{2} \right)^2 + \left(c \right)^2 + \left( d\right)^2 $$ $$ Q^T D Q = H $$ $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \frac{ 1 }{ 2 } & \frac{ 1 }{ 2 } & 1 & 0 \\ - \frac{ 1 }{ 2 } & \frac{ 1 }{ 2 } & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 0 & \frac{ 1 }{ 2 } & - \frac{ 1 }{ 2 } \\ 0 & 1 & \frac{ 1 }{ 2 } & \frac{ 1 }{ 2 } \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 2 & 0 & 1 & - 1 \\ 0 & 2 & 1 & 1 \\ 1 & 1 & 2 & 0 \\ - 1 & 1 & 0 & 2 \\ \end{array} \right) $$ $$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$ Allowing the coefficient $\sqrt 3$ to be replaced by variable $x$ we get $$ Q^T D Q = H $$ $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \frac{ 1 }{ 2 } & \frac{ x }{ 2 } & 1 & 0 \\ - \frac{ x }{ 2 } & \frac{ 1 }{ 2 } & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & \frac{3-x^2}{2} & 0 \\ 0 & 0 & 0 & \frac{3-x^2}{2} \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 0 & \frac{ 1 }{ 2 } & - \frac{ x }{ 2 } \\ 0 & 1 & \frac{ x }{ 2 } & \frac{ 1 }{ 2 } \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 2 & 0 & 1 & - x \\ 0 & 2 & x & 1 \\ 1 & x & 2 & 0 \\ - x & 1 & 0 & 2 \\ \end{array} \right) $$ with resulting expansion $$ 2 \left( a + \frac{c}{2} - \frac{dx}{2} 
\right)^2 + 2 \left( b + \frac{cx}{2} + \frac{d}{2} \right)^2 + \left( \frac{3-x^2}{2} \right) \left(c \right)^2 + \left( \frac{3-x^2}{2} \right) \left( d\right)^2 $$ Once we set $ x = \sqrt 3$ we get $$ 2 \left( a + \frac{c}{2} - \frac{d\sqrt3}{2} \right)^2 + 2 \left( b + \frac{c\sqrt3}{2} + \frac{d}{2} \right)^2 $$ $$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$
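A quick floating-point spot check of the final sum-of-squares identity at a few points (the sample points are arbitrary; agreement of two quadratic forms at generic points is strong evidence, and the expansion above proves it exactly):

```python
from math import sqrt

def lhs(a, b, c, d):
    return a*a + b*b + c*c + d*d + a*c + b*d - sqrt(3)*(a*d - b*c)

def rhs(a, b, c, d):
    return 0.25*(2*a + c - sqrt(3)*d)**2 + 0.25*(2*b + d + sqrt(3)*c)**2

for point in [(1, 2, 3, 4), (-1, 0.5, 2, -3), (0, 0, 1, 1)]:
    assert abs(lhs(*point) - rhs(*point)) < 1e-9
print("sum-of-squares identity checked")
```

In particular this confirms that the left-hand side is nonnegative everywhere, which is the content of the inequality once it is made homogeneous.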
{ "language": "en", "url": "https://math.stackexchange.com/questions/4396645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 0 }
Diameter of the intersection of nested compact sets My question is about the possibility of the inverse of a well known theorem: “Let $A_1 \supset A_2 \supset A_3 \supset … $ be nested, non-empty compact sets and let the diameter of $A = \bigcap_n A_n$ be $0$, then the sequence $diam(A_n)$ approaches $0$ as $n \rightarrow + \infty$” First of all I noticed that this sequence is weakly decreasing, hence it must converge to the infimum of its range, let us call it $\alpha$. Then it is also clear that $diam(A) \leq \alpha$, which, however doesn’t seem to tell us anything helpful in this case (whereas is the key inequality for the theorem to which I referred at the beginning). Any help on proving this theorem (if it can be proven at all) is highly appreciated as always!
Suppose $\operatorname{diam}(A_n)$ does not approach zero. Taking a subsequence, if necessary, we may assume there is $r>0$ so that $\operatorname{diam}(A_n) > r$ for all $n$. Then: there exist $x_n, y_n \in A_n$ with $d(x_n,y_n) > r$. Again taking a subsequence, if necessary, we may assume $x_n \to x$ and $y_n \to y$. Of course $x,y \in A$ and $d(x,y) \ge r > 0$. So $\operatorname{diam}(A) \ne 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4396794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are power series continuous everywhere where they are defined? Let f(x) be given by a power series everywhere it converges; thus the domain of f(x) consists of the open circle of convergence plus some boundary points of it (possibly none). Then is f continuous? I know that if f is real valued then yes, by Abel's limit theorem, but what about complex valued f? I've looked online and found something about a thing called a 'Stolz sector', which I didn't really understand, plus it possibly doesn't answer my question (therefore I won't quote it).
Ah, I have found the exact answer to your question here: https://mathoverflow.net/questions/110345/does-a-power-series-converging-everywhere-on-its-circle-of-convergence-define-a. The answer provides a 1916 paper from Sierpinski, in which "[he] produces an example where the function converges everywhere on the unit circle but is discontinuous (in fact unbounded) on the circle."
{ "language": "en", "url": "https://math.stackexchange.com/questions/4396969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $X \subset \Bbb R^2$ be a union of the coordinate axes and the line $x+y=1$, $0\le x \le1$. Show that $X$ is homotopy equivalent to $\Bbb S^1$. Let $X \subset \Bbb R^2$ be a union of the coordinate axes and the line $x+y=1$, $0\le x \le1$. Show that $X$ is homotopy equivalent to $\Bbb S^1$. Denote the triangle formed by $(0,0),(1,0),(0,1)$ as $K$. The trick here is apparently to show that $K \simeq X$ and then using the fact that $K$ is homeomorphic to $\Bbb S^1$ to deduce that $X \simeq \Bbb S^1$. With this I've managed to get the following. Define $f :X \to K$ as $$f(x) = \begin{cases} x, & x \in K \\ (0,1), &x \in \{0\} \times [1,\infty) \\ (1,0), & x \in [1,\infty) \times \{0\} \\ (0,0), & x\in \{0\} \times (-\infty, 0] \\ (0,0), &x \in (-\infty, 0] \times \{0\} \end{cases}$$ and define the inclusion $\iota :K \to X$. We now have that $f \circ \iota = id_K$ and I think I can define $h:X \times [0,1] \to X$ as $$h(a,t)=(1-t)(\iota \circ f)(a) + t \cdot id_X(a)$$ to show that $\iota \circ f \simeq id_X$? The problem I'm having is that I didn't know that $K$ is homeomorphic to $\Bbb S^1$. What is the map giving this homeomorphism?
Denote by $T$ the aforementioned triangle. For $f:T\to X$, just use inclusion. For $g:X\to T$, you know that each vertex $v$ of the triangle is the meeting point of two lines $L,M$? Now, $L-\{v\}$ has two components. One includes the edge of the triangle, the other doesn't. It would be a good idea to map every element of the latter component to the vertex $v$ -- and to do likewise for all lines and vertices. The process is easier to understand if you draw it on a piece of paper. That is one example of $g$ that would be enough to show homotopy equivalence. $g\circ f$ is the identity on $T$ itself. To show that $f\circ g$ is homotopic to the identity on $X$, take each half-line that has been collapsed into a vertex, and pull it back out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4397164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Doubt about a probability measure in a Lebesgue integral exercise. I am working my way through the book Basic Stochastic Processes from the Springer Undergraduate Mathematics Series and there is a step in one of the exercises' solutions that I cannot understand. Exercise 1.9 goes like this: Show that if $\eta: \Omega \rightarrow [0,\infty)$ is a non-negative square integrable random variable, then $E(\eta^2)=2\int_0^\infty tP(\eta>t)dt.$ The solution starts as follows: Let $F(t)=P\{\eta\leq t\}$ be the distribution function of $\eta$. Then $E(\eta^2)=\int_0^\infty t^2dF(t).$ Since $P(\eta>t)=1-F(t)$, we need to show that $\int_0^\infty t^2 dF(t) = 2\int_0^\infty t(1-F(t))dt$ The next step, and the one I do not understand, is this one: $\int_0^a t^2 dF(t) = \int_0^a t^2 d(F(t)-1)$ After this it goes on with integration by parts, which I can follow. Any help would be appreciated; I do not understand how $dF(t)$ can be equivalent to the same probability measure minus 1. I am obviously missing something.
$dF(a,b]=F(b)-F(a)$. The increments define the integral with respect to $dF(t)$. If we call$$ G(t):=F(t)-1 $$then the Stieltjes integrals with respect to $F$ and $G$ will be the same, because $dF(a,b]$ clearly is equal to $dG(a,b]$. A side note: the notation $d(F(t)-1)$ can be tricky; you are not translating the measure, and maybe that was the confusion.
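As a numerical illustration of the identity being proved, take $\eta \sim \mathrm{Exp}(1)$, for which $E(\eta^2) = 2$ and $P(\eta > t) = e^{-t}$ (the cutoff $T$ and grid size below are arbitrary choices):

```python
from math import exp

T, n = 60.0, 20000
h = T / n
# midpoint rule for 2 * integral_0^T t * P(eta > t) dt with tail(t) = e^{-t}
integral = 2 * sum((k + 0.5) * h * exp(-(k + 0.5) * h) for k in range(n)) * h
print(integral)                     # approx 2.0 = E[eta^2] for Exp(1)
assert abs(integral - 2.0) < 1e-3
```

The quadrature matches $E(\eta^2)=2$, as the exercise asserts.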
{ "language": "en", "url": "https://math.stackexchange.com/questions/4397321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the entropy of a random variable $Y$ with no given $P(Y)$? Say we are given two random variables $X$ and $Y$ that can take the values of $\{a,b\}$ only. X~Bern(p) is given, and also the conditional probabilities $P(Y=a|X=a)$ and $P(Y=b|X=b)$. I am trying to find $H(Y)$ but so far only know $H(X,Y)$ and $H(Y|X)$. If the two random variables were independent (which is not stated in the problem), I could say that $H(Y|X)=H(Y)$, but it is not stated that they are independent. There is no other information given; Y~Bern(p) is not given. I could use this: $$H(Y,X) = H(Y) + H(X|Y)$$ But the problem is I don't know what $H(X|Y)$ is either, nor $H(Y,X)$. Are there relationships that I am not aware of that would help me solve this?
Forget the relationships, and work it out directly. In information theory, there are many formulae between the basic concepts but most are simply definition chases, with use of the law of total probability or other elementary ideas. They can help you out in some cases, but going back to the initial definitions will not slow you down too much. Let $t:=\mathbb{P}(Y=a)$. Then $$\begin{align}H(Y)&=-\sum_x \mathbb{P}(Y=x)\log \mathbb{P}(Y=x)\\ &= -(\mathbb{P}(Y=a)\log \mathbb{P}(Y=a)+\mathbb{P}(Y=b)\log \mathbb{P}(Y=b))\\ &=-(t\log t+(1-t)\log (1-t))\end{align}$$ So we just need to work out what $t$ is, and then we can plug it in. You might notice that $a,b$ are not really relevant: entropy does not take into account what the values are, only how likely they are. The expression above has a name, the binary entropy function (a function of $t\in [0,1]$), and it's the entropy of a Bernoulli random variable (which by convention only takes values $0$ or $1$). $$ t=\mathbb{P}(X=a)\mathbb{P}(Y=a|X=a)+\mathbb{P}(X=b)\mathbb{P}(Y=a|X=b)$$ This is the law of total probability. You already know $\mathbb{P}(X=a)=p$ and $\mathbb{P}(X=b)=1-p$, and are given the value of $\mathbb{P}(Y=a|X=a)$. That only leaves us to work out $\mathbb{P}(Y=a|X=b)$, but $Y=a$ is the same thing as $Y\neq b$ (as $Y$ can only take the values $a,b$), so $\mathbb{P}(Y=a|X=b)=1-\mathbb{P}(Y=b|X=b)$ and you are given $\mathbb{P}(Y=b|X=b)$ as well.
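Putting the whole computation together in a few lines (the function and parameter names are mine, not from the problem):

```python
from math import log2

def binary_entropy(t):
    """Entropy in bits of a Bernoulli(t) random variable."""
    if t in (0, 1):
        return 0.0
    return -(t * log2(t) + (1 - t) * log2(1 - t))

def entropy_Y(p, q_aa, q_bb):
    """H(Y) when P(X=a)=p, q_aa = P(Y=a|X=a), q_bb = P(Y=b|X=b)."""
    t = p * q_aa + (1 - p) * (1 - q_bb)   # law of total probability
    return binary_entropy(t)

# e.g. a fair X passed through a symmetric channel still gives H(Y) = 1 bit
print(entropy_Y(0.5, 0.9, 0.9))  # 1.0
```

Note that the conditional probabilities enter only through $t$; once $t$ is known, $H(Y)$ is just the binary entropy function.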
{ "language": "en", "url": "https://math.stackexchange.com/questions/4397481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Regarding verifying Gauss-Divergence theorem If $\vec{F}=4x \hat{i}-2y^{2}\hat{j}+z^{2}\hat{k}$ taken over the region bounded by the cylinder $x^{2}+y^{2}=4$, $z=0$ and $z=3$. Verify Divergence theorem. Attempted Solution: Here is what I've done as yet. To verify the theorem, we have to compute $\iiint_{V}\nabla\cdot \vec{F}\mathrm dV$ and $\iint_{S}\vec{F}\cdot \hat{n}\mathrm dS$ and show that they're equal. $$\iiint_{V}\nabla\cdot\vec{F}\mathrm dV =\int_{0}^{3}\int_{-2}^{+2}\int_{-\sqrt{4-x^{2}}}^{+\sqrt{4-x^{2}}}(4-4y+2z)\mathrm dy\mathrm dx\mathrm dz$$ As for the surface integral, we need to consider the three surfaces, viz., two disks and a curved surface. I'm having slight trouble computing the integral over the curved surface. $$\begin{aligned} x^{2}+y^{2}&=4 \\ \phi(x,y)&=x^{2}+y^{2}-4 \\ \hat{n}&=\frac{\nabla \phi}{|\nabla\phi|}=\frac{x\hat{i}+y\hat{j}}{2} \\ I&=\iint_{C}(2x^{2}-y^{3})\mathrm dS \end{aligned}$$ I don't know how to proceed. I'm having trouble writing $\mathrm dS$ in terms of Cartesian coordinates and that is the source of confusion. If I were to project it onto the plane, would I make separate cases for each, or is there some neat way to solve this compactly. Thanks in advance.
The divergence of $\vec{F}$ is the scalar $$\nabla\cdot \vec{F}=\frac{\partial (4x)}{\partial x}+\frac{\partial (-2y^2)}{\partial y}+\frac{\partial (z^2)}{\partial z}=4-4y+2z$$ and therefore we have to compute $$\iiint_{V}\nabla\cdot\vec{F}\mathrm dV =\iiint_{V}(4-4y+2z)dxdydz \\=\int_{z=0}^3\int_{r=0}^2\int_{\theta=0}^{2\pi}(4-4r\sin(\theta)+2z)rd\theta dr dz$$ where we used the cylindrical coordinates $x=r\cos(\theta)$, $y=r\sin(\theta)$, $z=z$. As regards the flux through the lateral surface $C$ of the cylinder, we apply again the cylindrical coordinates. From your work $$\iint_{C}\vec{F}\cdot \hat{n}\mathrm dS=\iint_{C}(2x^{2}-y^{3})\mathrm dS=\int_{z=0}^3\int_{\theta=0}^{2\pi}\left(2(2\cos(\theta))^{2}-(2\sin(\theta))^{3}\right)2\,d\theta dz.$$ It is easy to see that the flux through the disc at $z=0$ is $0$ and the one through the disc at $z=3$ is $9\cdot 4\pi=36\pi$. Can you take it from here? Finally, for both computations you should find $84\pi$.
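Both sides can also be confirmed numerically with a midpoint rule (the grid sizes below are arbitrary; the integrands are low-degree in $r$ and $z$, so even a coarse grid is essentially exact):

```python
from math import sin, cos, pi

def volume_integral(n=40):
    """Midpoint rule for div F = 4 - 4y + 2z over the cylinder, cylindrical coords."""
    total = 0.0
    for i in range(n):                  # r in (0, 2)
        r = 2 * (i + 0.5) / n
        for j in range(n):              # theta in (0, 2*pi)
            th = 2 * pi * (j + 0.5) / n
            for k in range(n):          # z in (0, 3)
                z = 3 * (k + 0.5) / n
                total += (4 - 4 * r * sin(th) + 2 * z) * r
    return total * (2 / n) * (2 * pi / n) * (3 / n)

def surface_flux(n=400):
    """Flux through the lateral surface (r = 2, dS = 2 dtheta dz) plus the z = 3 disc."""
    lateral = 0.0
    for j in range(n):
        th = 2 * pi * (j + 0.5) / n
        lateral += (2 * (2 * cos(th))**2 - (2 * sin(th))**3) * 2
    lateral *= (2 * pi / n) * 3         # the z-integral just contributes a factor 3
    top = 9 * 4 * pi                    # z^2 = 9 on a disc of area 4*pi; z = 0 disc gives 0
    return lateral + top

assert abs(volume_integral() - 84 * pi) < 1e-6
assert abs(surface_flux() - 84 * pi) < 1e-6
```

Both computations agree with $84\pi \approx 263.89$, verifying the divergence theorem for this field and region.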
{ "language": "en", "url": "https://math.stackexchange.com/questions/4397757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An exercise on the implicit function theorem I am trying to learn the implicit function theorem and this is one exercise about it; I have solved it and would be grateful for any feedback on my solution, thanks. Let $f\begin{pmatrix}x\\ y\\ z\end{pmatrix}=x y^2+\sin(xz)+e^z$ and $\textbf{a}=\begin{bmatrix}1\\ -1\\ 0 \end{bmatrix}$. (a) Show that the equation $f=2$ defines $z$ as a $\mathcal{C}^1$ function $z=\phi\begin{pmatrix}x\\ y\end{pmatrix}$ near $\textbf{a}.$ (b) Find $\frac{\partial\phi}{\partial x}\begin{pmatrix}1\\-1\end{pmatrix}$ and $\frac{\partial\phi}{\partial y}\begin{pmatrix}1\\-1\end{pmatrix}.$ (c) Find the equation of the tangent plane of the surface $f^{-1}(\{2\})$ at $\mathbf{a}$ in two ways. What I have done: (a) Let $F=f-2$; $F$ is a $\mathcal{C}^1$ function, $F(\mathbf{a})=0,\ DF\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{bmatrix}y^2+z\cos(xz) &2xy & x\cos(xz)+e^z\end{bmatrix}$ and $DF(\mathbf{a})=DF\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}=\begin{bmatrix}1 & -2 & 2\end{bmatrix}$ so in particular $\frac{\partial F}{\partial z}(\mathbf{a})=2\neq 0$ thus there exists a neighborhood V of $\begin{pmatrix}1\\-1\end{pmatrix}$ and $W$ of $0$ and a $\mathcal{C}^1$ function $\phi:V\to W$ so that $z\in W\Leftrightarrow z=\phi\begin{pmatrix}x\\ y\end{pmatrix},\ \begin{pmatrix}x\\ y\end{pmatrix}\in V.$ (b) $$\frac{\partial\phi}{\partial x}=-\frac{\frac{\partial F}{\partial x}\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}}{\frac{\partial F}{\partial z}\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}}=-\frac{1}{2}$$ and $$\frac{\partial\phi}{\partial y}=-\frac{\frac{\partial F}{\partial y}\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}}{\frac{\partial F}{\partial z}\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}}=-\frac{-2}{2}=1.$$ (c) Tangent plane: $$\begin{bmatrix}1 & -2 & 2\end{bmatrix} \begin{bmatrix}x-1\\ y+1\\ -z\end{bmatrix}=x-1-2(y+1)-2z=0\Leftrightarrow x-2y-2z=3$$
(a) and (b) are fine. As regards (c), there is a minor error in your work: the tangent plane should be $$\begin{bmatrix}1 & -2 & 2\end{bmatrix} \begin{bmatrix}x-1\\ y+1\\ z-0\end{bmatrix}=x-1-2(y+1)+2z=0\Leftrightarrow x-2y+2z=3.$$ Here it is another way to find the tangent plane. Since $\phi$ is differentiable at $\begin{pmatrix}1\\-1\end{pmatrix}$, then $$\phi\begin{pmatrix}x\\ y\end{pmatrix}=\phi\begin{pmatrix}1\\-1\end{pmatrix}+ \nabla \phi\begin{pmatrix}1\\-1\end{pmatrix}\cdot \left( \begin{pmatrix}x\\ y\end{pmatrix}-\begin{pmatrix}1\\-1\end{pmatrix}\right)+o\left(\sqrt{(x-1)^2+(y+1)^2}\right)$$ Hence the required tangent plane is $$z= \phi\begin{pmatrix}1\\-1\end{pmatrix}+ \nabla \phi\begin{pmatrix}1\\-1\end{pmatrix}\cdot \left( \begin{pmatrix}x\\ y\end{pmatrix}-\begin{pmatrix}1\\-1\end{pmatrix}\right)= 0+ \begin{pmatrix}-\frac{1}{2}\\1\end{pmatrix}\cdot \begin{pmatrix}x-1\\ y+1\end{pmatrix}\\ =-\frac{1}{2}(x-1)+(y+1)$$ which is equivalent to the equation given above.
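The partial derivatives in (b) can also be double-checked numerically by actually solving $f = 2$ for $z$ near $\mathbf a$ and differencing; the bisection solver and step size below are ad hoc choices of mine:

```python
from math import sin, exp

def F(x, y, z):
    return x * y * y + sin(x * z) + exp(z) - 2.0

def phi(x, y, lo=-1.0, hi=1.0):
    """Solve F(x, y, z) = 0 for z by bisection; F is increasing in z on [-1, 1] here."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if F(x, y, mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

h = 1e-6
dphidx = (phi(1 + h, -1) - phi(1 - h, -1)) / (2 * h)
dphidy = (phi(1, -1 + h) - phi(1, -1 - h)) / (2 * h)
print(dphidx, dphidy)   # approx -0.5 and 1.0
assert abs(dphidx - (-0.5)) < 1e-4
assert abs(dphidy - 1.0) < 1e-4
```

The central differences reproduce $\frac{\partial\phi}{\partial x} = -\frac12$ and $\frac{\partial\phi}{\partial y} = 1$ from part (b).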
{ "language": "en", "url": "https://math.stackexchange.com/questions/4398214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Maximum of absolute differences inequality Let $x_1 < x_2 < \cdots < x_q$ and $y_1 < y_2 < \cdots < y_q$ be two monotonic sequences of real numbers. Then, I want to show \begin{align} \max_k |x_k - y_{\sigma(k)}| \geq \max_k |x_k - y_k|, ~~~~~~~~~~~~~~~~~~~~~~~~~(1) \end{align}where $\{y_{\sigma(1)}, y_{\sigma(2)}, \ldots, y_{\sigma(q)}\}$ is a permutation of $\{y_1, y_2, \ldots, y_q\}$. I found out this solution, using which I can show \begin{align*} \left(\sum_{k=1}^q |x_k - y_{\sigma(k)}|^r\right)^{1/r} \geq \left(\sum_{k=1}^q |x_k - y_k|^r \right)^{1/r} ~~~~~~~~~~~~~~~~~~~~~~~~~(2) \end{align*} for all $r$. I cannot apply majorization theory to prove (1) directly as the maximum-absolute function is not convex. However, if I apply $r\to \infty$ on both sides of (2), I can obtain (1). My question: I am wondering if there is a way to directly prove (1) (without using the $r$-norm and, possibly, majorization theory)?
The idea is the same as in the proof of the rearrangement inequality or in Inequality involving rearrangement: $ \sum_{i=1}^n |x_i - y_{\sigma(i)}| \ge \sum_{i=1}^n |x_i - y_i|. $: Show that if there are $i< j$ with $\sigma(i) > \sigma(j)$ then the left-hand side of $(1)$ does not increase if $\sigma(i)$ and $\sigma(j)$ are exchanged. It then follows that the permutation which minimizes the left-hand side of $(1)$ and has the least possible number of inversions is necessarily the identity permutation. In other words, it suffices to prove the inequality for $q=2$ and the permutation which exchanges indices $1$ and $2$. Actually we can prove strict inequality for $q=2$, but not for $q \ge 3$. Let $x_1 < x_2$ and $y_1 < y_2$. Then $$ \tag{$*$} \max( |x_1 - y_1|, |x_2 - y_2|) < \max(|x_1-y_2|, |x_2-y_1|) \, . $$ Case 1: $x_1 + x_2 \le y_1 + y_2$. Then $$ x_1 < \frac 12 (x_1+x_2) \le \frac 12 (y_1 + y_2) \implies |x_1 - y_1| < |x_1 - y_2| $$ and $$ \frac 12 (x_1+x_2) \le \frac 12 (y_1 + y_2) < y_2 \implies |y_2 - x_2| < |y_2 - x_1| \, . $$ It follows that $$ \max( |x_1 - y_1|, |x_2 - y_2|) < |x_1 - y_2| $$ and that implies $(*)$. Case 2: $x_1 + x_2 > y_1 + y_2$ works similarly, now we get $$ \max( |x_1 - y_1|, |x_2 - y_2|) < |x_2 - y_1| $$ which again implies $(*)$. Remark: The rearrangement inequality is strict if the $x_i$ and $y_i$ are distinct and $\sigma$ is not the identity permutation. That is not the case here if $q \ge 3$. An example is $$ (x_1, x_2, x_3) = (1, 2, 3) \\ (y_1, y_2, y_3) = (1, 2, 5) $$ with $$ \max(|x_1-y_2|, |x_2-y_1|, |x_3-y_3|) = 2 = \max(|x_1-y_1|, |x_2-y_2|, |x_3-y_3|) \, . $$
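Inequality $(1)$ can also be brute-force checked for small $q$ over all permutations; a sketch (the helper name `check` is made up for illustration):

```python
from itertools import permutations
import random

def check(q, trials=200):
    """Brute-force check of (1): for strictly increasing x and y,
    the identity permutation minimizes max_k |x_k - y_sigma(k)|."""
    for _ in range(trials):
        # sorted samples without repetition give strictly increasing sequences
        x = sorted(random.sample(range(100), q))
        y = sorted(random.sample(range(100), q))
        base = max(abs(a - b) for a, b in zip(x, y))
        for sigma in permutations(range(q)):
            assert max(abs(x[k] - y[sigma[k]]) for k in range(q)) >= base
    return True

assert check(3) and check(4)
```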
{ "language": "en", "url": "https://math.stackexchange.com/questions/4398389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Closed form for $T_n$ if $T_{n+3}=2 T_{n+2}+2 T_{n+1}-T_{n}$ Consider the recurrence relation $$T_{n+3}=2 T_{n+2}+2 T_{n+1}-T_{n}$$ with first three terms as: $T_{1}=20, T_{2}=12, T_{3}=70$ Find the closed form expression for $T_n$. My try: Since it is a constant coefficient difference equation, the auxiliary equation is: $$\lambda^3-2\lambda^2-2\lambda+1=0$$ whose roots are: $-1,\frac{3\pm\sqrt{5}}{2}$ Thus we have: $$T_{n}=A(-1)^{n}+B\left(\frac{3+\sqrt{5}}{2}\right)^{n}+C\left(\frac{3-\sqrt{5}}{2}\right)^{n}$$ where the constants $A,B,C$ are to be determined from the values of $T_1,T_2,T_3$, but it's becoming too tedious to solve. Any alternate approach?
There is one alternate approach mentioned in the comments: generating functions. Let $$f(x) = \sum_{n \geq 1} T_n x^n.$$ If we can write $f(x)$ as a well-known power series, then we can equate coefficients to determine $T_n$ exactly. Start by multiplying both sides of your recurrence by $x^n$ and sum over all values of $n$ where the recurrence holds: \begin{align*} T_{n + 3} x^n &= 2 T_{n + 2} x^n + 2T_{n + 1} x^n - T_n x^n \\ \sum_{n \geq 1} T_{n + 3} x^n &= 2 \sum_{n \geq 1} T_{n + 2} x^n + 2 \sum_{n \geq 1} T_{n + 1} x^n - \sum_{n \geq 1} T_n x^n \\ \frac{f(x) - T_1 x - T_2 x^2 - T_3 x^3}{x^3} &= 2 \frac{f(x) - T_1 x - T_2 x^2}{x^2} + 2 \frac{f(x) - T_1 x}{x} - f(x). \end{align*} We can solve this for $f(x)$: $$f(x) = -{\frac {x \left( 2\,{\it T_1}\,{x}^{2}+2\,{\it T_2}\,{x}^{2}-{\it T_3}\,{x}^{2}+2\,{\it T_1}\,x-{ \it T_2}\,x-{\it T_1} \right) }{{x}^{3}-2\,{x}^{2}-2\,x+1}}. $$ Once you substitute your values for $T_1$, $T_2$, and $T_3$, you do partial fraction decomposition on the resulting explicit rational function. Each term of the decomposition can be written as an explicit geometric series, and $T_n$ is the sum of the $n$th terms of each. If you do this, you get \begin{equation*} A = -\frac{54}{5} \quad B = \frac{2}{5} (6 + \sqrt{5}) \quad C = \frac{2}{5}(6 - \sqrt{5}). \end{equation*} It is, as you say, very tedious to do this by hand! I recommend using a computer algebra system such as Maple, Mathematica, or Sage to do the manipulations. Each of these has very capable recurrence solvers. For more details and some friendlier examples with respect to generating functions, see generatingfunctionology.
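The closed form can be verified numerically against the recurrence itself; a quick sketch (using the coefficient values $A=-\frac{54}{5}$, $B=\frac25(6+\sqrt5)$, $C=\frac25(6-\sqrt5)$):

```python
from math import sqrt, isclose

A = -54 / 5
B = (2 / 5) * (6 + sqrt(5))
C = (2 / 5) * (6 - sqrt(5))
phi_p = (3 + sqrt(5)) / 2   # roots of the characteristic polynomial
phi_m = (3 - sqrt(5)) / 2

def closed(n):
    return A * (-1) ** n + B * phi_p ** n + C * phi_m ** n

# Generate the sequence from the recurrence (1-indexed) and compare
T = [None, 20, 12, 70]
for n in range(1, 13):
    T.append(2 * T[-1] + 2 * T[-2] - T[-3])
for n in range(1, 16):
    assert isclose(closed(n), T[n], rel_tol=1e-9)
```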
{ "language": "en", "url": "https://math.stackexchange.com/questions/4398715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Union of sets having $\binom{n - 1}{2}$ elements with each two of them having $n - 2$ common elements has cardinality of at least $\binom{n}{3}$. Let $S_1, S_2, \dots, S_n$ be sets, each of them with $n - 1 \choose 2$ elements, and with $n - 2$ common elements for each two of them. Prove that their union has at least $n \choose 3$ elements. Find an example for the equality case. Let us examine the simplest case $n = 3$. Each of them has $2 \choose 2$ $= 1$ element and each two of them have $1$ element in common, so, in fact, the three sets are identical, so their union has exactly $3 \choose 3$ $= 1$ element. (Equality case confirmed) Examining $n = 4$, each set should have $3 \choose 2$ $= 3$ elements and every two of them should have $2$ in common. We may construct $\{a, b, c\}$, $\{a, b, d\}$, and the last set may be $\{a, b, e\}$ or $\{a, c, d\}$, both of them satisfying the conditions. However, I am unable to generalize the problem.
One can use the following useful lemma: Let $A_1,\dots,A_N$ be $r$-element sets and $X$ be their union. If $|A_i\cap A_j|\leq k$ for all $i\neq j$, then $$|X|\geq \dfrac{r^2N}{r+(N-1)k}.$$ Proof: Let's introduce some convenient notation: $[N]:=\{1,\dots,N\}$ and for each $x\in X$ define $d(x):=\#\{j\in [N]: x\in A_j\}.$ For each $1\leq i\leq N$ consider $$\sum \limits_{x\in A_i}d(x)=\sum \limits_{x\in A_i}\sum \limits_{j=1}^N 1_{A_j}(x)=\sum \limits_{j=1}^N |A_i\cap A_j|=|A_i|+\sum \limits_{j\neq i}|A_i\cap A_j|\leq r+(N-1)k.$$ Summing over all sets $A_i$ we have $$\sum \limits_{i=1}^N \sum\limits_{x\in A_i}d(x)\leq rN+N(N-1)k.$$ However, the LHS of the inequality can be written as follows: $$\sum \limits_{i=1}^N \sum\limits_{x\in A_i}d(x)=\sum \limits_{i=1}^N \sum\limits_{x\in X}1_{A_i}(x)\sum \limits_{j=1}^N 1_{A_j}(x)=\sum \limits_{x\in X}\left(\sum \limits_{i=1}^N 1_{A_i}(x)\right)^2=\sum \limits_{x\in X}d(x)^2\geq $$ $$\geq \frac{1}{|X|}\left(\sum \limits_{x\in X}d(x)\right)^2=\frac{1}{|X|}\left( \sum \limits_{i=1}^{N}|A_i|\right)^2=\dfrac{N^2r^2}{|X|}.$$ Hence we obtain: $$\dfrac{N^2r^2}{|X|}\leq rN+N(N-1)k \Leftrightarrow$$ $$|X|\geq \dfrac{r^2N}{r+(N-1)k}.$$ Remark: Your problem follows easily from that lemma: just take $N=n, r=\binom{n-1}{2}$ and $k=n-2$. Indeed, then $r+(N-1)k=\binom{n-1}{2}+(n-1)(n-2)=3\binom{n-1}{2}=3r$, so the bound becomes $\dfrac{r^2 n}{3r}=\dfrac{rn}{3}=\dfrac{n(n-1)(n-2)}{6}=\binom{n}{3}$.
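The equality case of the original problem can be checked directly: taking $S_i$ to be all $3$-element subsets of $\{1,\dots,n\}$ containing $i$ (an assumed construction, not stated in the answer) gives $|S_i|=\binom{n-1}{2}$, pairwise intersections of size $n-2$, and a union of size exactly $\binom{n}{3}$:

```python
from itertools import combinations
from math import comb

def check_equality_case(n):
    # S_i = all 3-element subsets of {0,...,n-1} containing i
    triples = list(combinations(range(n), 3))
    S = [{t for t in triples if i in t} for i in range(n)]
    # each set has C(n-1, 2) elements
    assert all(len(s) == comb(n - 1, 2) for s in S)
    # each pair of sets shares exactly n-2 elements (triples through i and j)
    assert all(len(S[i] & S[j]) == n - 2 for i in range(n) for j in range(i))
    # the union is all C(n, 3) triples
    assert len(set().union(*S)) == comb(n, 3)
    return True

assert all(check_equality_case(n) for n in (4, 5, 6, 7))
```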
{ "language": "en", "url": "https://math.stackexchange.com/questions/4398878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
How do we construct a homomorphism between a simple R-module and a unitary R-module? I don't have a lot of experience with this sort of thing and so I am trying to synthesize a process by which I can understand constructing homomorphisms. In this particular case I want to construct a homomorphism between a simple $R$-module and a unitary $R$-module, where $R$ itself is a ring with identity and $1_{R} \neq 0_{R}$. I do know that a non-zero unitary $R$-module $M$ is simple if the only submodules of $M$ are $\left\{0\right\}$ and $M$. Any suggestions on how to go about this?
Hints: (1) A simple module $M$ must be cyclic. That is, $M = Rm = \{rm \mid r \in R \}$ for any $m \neq 0 \in M$. Why? (2) For any module homomorphism $\varphi: M \to N$, $$ \ker \varphi = \{ m \in M \mid \varphi(m) = 0 \}, $$ is a submodule. Why? (3) A homomorphism $\varphi: M \to N$ is completely determined by $\varphi(m)$, where $m$ is any nonzero element (generator) of $M$. Why? (4) There are only two possibilities for the image submodule $$ \operatorname{im} \varphi = \{ \varphi(m) \mid m \in M\}. $$ What are they? Why?
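As a toy illustration of these hints (my own example, not from the question): over $R=\mathbb{Z}$, the homomorphisms from the simple $\mathbb{Z}$-module $\mathbb{Z}/5$ into $\mathbb{Z}/15$ are determined by the image $m$ of the generator $1$, which must satisfy $5m=0$; and each kernel is a submodule of $\mathbb{Z}/5$, hence $\{0\}$ or everything:

```python
# Homs phi: Z/5 -> Z/15 are x |-> x*m mod 15 with 5*m = 0 in Z/15 (hint 3)
homs = []
for m in range(15):
    if (5 * m) % 15 == 0:
        phi = {x: (x * m) % 15 for x in range(5)}
        # additivity check: phi(x + y) = phi(x) + phi(y)
        assert all(phi[(x + y) % 5] == (phi[x] + phi[y]) % 15
                   for x in range(5) for y in range(5))
        homs.append(phi)

# Hints 2 and 4: each kernel is a submodule of Z/5, hence {0} or all of Z/5,
# so each nonzero hom is injective
for phi in homs:
    ker = {x for x in range(5) if phi[x] == 0}
    assert ker == {0} or ker == set(range(5))
```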
{ "language": "en", "url": "https://math.stackexchange.com/questions/4399044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability for Ben, Amos and Carl Three men Amos, Ben and Carl share an office at work with a single telephone. Calls come in at random with the proportions of $\dfrac{1}{2}$ for Amos, $\dfrac{1}{3}$ for Ben and $\dfrac{1}{6}$ for Carl. For any incoming call, the probabilities that it will be picked up by Amos, Ben, and Carl are $\dfrac{1}{2}$, $\dfrac{3}{10}$ and $\dfrac{1}{5}$ respectively. For calls arriving during working hours, find the probability that (i) a call is not picked up by the person being called, (ii) a call is for Ben given that a call is not picked up by the person being called. For (i), I tried to get P(proportion for Amos & not picked up) or P(proportion for Ben & not picked up) or P(proportion for Carl & not picked up) $= \dfrac{1}{2} * \dfrac{1}{2} + \dfrac{1}{3} * \dfrac{1}{2} + \dfrac{1}{6} * \dfrac{5}{6} = 0.5546 $ But the answer for (i) is $\dfrac{37}{60}$, what went wrong? How do I do (ii)? Your help is appreciated. Thanks
You got one number wrong. Your idea is correct, we have \begin{align*} \def\P{\mathbf P}\P(\text{call picked up by wrong person}) &=\underbrace{ \frac 12}_{\text{call for A}}\cdot \underbrace{\frac 12}_{\text{call not picked by A}} + \underbrace{ \frac 13}_{\text{call for B}}\cdot \underbrace{\frac 7{10}}_{\text{call not picked by B}}+ \underbrace{ \frac 16}_{\text{call for C}}\cdot \underbrace{\frac 45}_{\text{call not picked by C}}\\ &= \frac 14 + \frac 7{30} + \frac 4{30}\\ &= \frac{30 + 28 + 16}{120}\\ &= \frac{37}{60} \end{align*} For (ii), we have \begin{align*} \P(\text{call for B}\mid\text{wrong person}) &= \frac{\P(\text{call for B}\cap \text{wrong person})}{\P(\text{wrong person})}\\ &= \frac{\frac 13 \cdot \frac 7{10}}{\frac{37}{60}}\\ &= \frac{14}{37} \end{align*}
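A quick exact-arithmetic check of part (i), sketched with Python's `fractions`:

```python
from fractions import Fraction as F

callee = {'A': F(1, 2), 'B': F(1, 3), 'C': F(1, 6)}   # who the call is for
pickup = {'A': F(1, 2), 'B': F(3, 10), 'C': F(1, 5)}  # who answers

# (i) probability the call is answered by someone other than the callee
p_wrong = sum(callee[p] * (1 - pickup[p]) for p in 'ABC')
assert p_wrong == F(37, 60)
```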
{ "language": "en", "url": "https://math.stackexchange.com/questions/4399243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Maximum sum after some elements are deleted from the set Let $n$ be a positive integer and let $A=\{1,2,...,2n-1\}$. Alice deletes at least $n-1$ integers from the set $A$, such that: * *For every $a\in A$ and $2a\in A$: if $a$ is deleted, $2a$ is also deleted. *For every $a,b\in A$ and $a+b\in A$: if $a,b$ are deleted, $a+b$ is also deleted. Find the maximum sum of the elements of the set $A$ after Alice's operations. I think the answer is $n^2$. The set $\{1,3,...,2n-1\}$ satisfies the given conditions. But how can I prove that there isn't a better set?
There is another way to do this. First, let $A$ be the set of remaining integers. We may assume $A$ has exactly $n$ elements; indeed, if $A$ does not, add the smallest integer $a \in \{1,2,\ldots, 2n-1\}$ not yet already in $A$. We make 2 claims: Claim 1: Let $a$ be an element in $A$. Then if $a$ is odd, there must be at least $\frac{a-1}{2}$ integers $a'<a$ in $A$. If $a$ is even, there must be at least $\frac{a}{2}$ integers $a'<a$ in $A$. Indeed, for each pair of distinct positive integers $\{a_1,a_2\}$ with $a_1+a_2=a$, at least one integer of the pair must be in $A$. Then if $a$ is even, so must $\frac{a}{2}$ be in $A$.$\surd$ Claim 2: Let $x$ be an odd integer [not necessarily in $A$]. Then there must be at least $\frac{x+1}{2}$ integers $a'\le x$ such that $a'$ is in $A$. Use downward induction on $x$. Indeed true if $x=2n-1$ [because $A$ has $n$ integers], or if $x$ is in $A$ [Claim 1]. So let us assume that there are at least $\frac{x+3}{2}$ $=\frac{x+1}{2}+1$ integers $a'\le x+2$ in $A$. Then on the one hand, if $x+1$ is not in $A$, then there must indeed be at least $\frac{x+3}{2} -|A \cap \{x+1,x+2\}|$ $\ge$ $(\frac{x+1}{2}+1)-1$ $=\frac{x+1}{2}$ integers $a'\le x$ in $A$. However, on the other hand, if $x+1$ is in $A$, then as $x$ is odd, $x+1$ is even, so by Claim 1 there must indeed be at least $\frac{x+1}{2}$ integers $a'<x+1$ in $A$, or equivalently, $\frac{x+1}{2}$ integers $a'\le x$ in $A$. So either way Claim 2 follows. $\surd$ So now let us write $A=\{x_1,x_2,\ldots, x_n\}$, where the $x_k$s are in increasing order. Then by Claim 2, the inequality $x_k \le 2k-1$ must hold for each $k=1,2,\ldots, n$. This gives $$\sum_{x \in A} x \le \sum_{k=1}^n (2k-1) \ = \ n^2.$$ But then $A =\{1,3,5,\ldots,2n-1\}$, i.e., $A$ is the set of odd positive integers no larger than $2n-1$, has no more than $n$ integers and satisfies the conditions 1. and 2. above, and $\sum_{x \in A} x$ is precisely $n^2$.
So this bound is tight, as the set $A$ explicitly constructed above meets it. Note that we need both conditions for this bound to be tight. Take $n=4$. Then the set $\{6,5,4,3\}$ has no more than $4$ integers and satisfies exactly one of the remaining conditions, namely 2., and the set $\{7,6,5,3\}$ satisfies the other remaining condition, namely 1.
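The claim can also be brute-force verified for small $n$ by enumerating every admissible deleted set; a sketch (helper names are made up):

```python
from itertools import combinations

def closed_under_rules(D, limit):
    # D is the deleted set; rules 1 and 2 from the problem statement
    rule1 = all(2 * a > limit or 2 * a in D for a in D)
    rule2 = all(a + b > limit or a + b in D for a in D for b in D)
    return rule1 and rule2

def best_sum(n):
    A = set(range(1, 2 * n))
    best = 0
    for size in range(n - 1, 2 * n):          # delete at least n-1 integers
        for D in map(set, combinations(A, size)):
            if closed_under_rules(D, 2 * n - 1):
                best = max(best, sum(A - D))
    return best

# matches n^2, achieved when the odd numbers {1, 3, ..., 2n-1} survive
assert all(best_sum(n) == n * n for n in (2, 3, 4))
```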
{ "language": "en", "url": "https://math.stackexchange.com/questions/4399440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
A question on the Zeta function of Riemann I am reading a paper "A note on the Riemann zeta function" by F. T. Wang, Bull. Amer. Math. Soc $(1946)$. Let $K$ be the unit semi circle with center $z=0$ lying in the right half plane $\Re(z)>0$ and let $C$ be the broken line consisting of three segments $L_1$ $(0\leq x\leq T, y=-T)$, $L_2$ $(0\leq x\leq T, y=T)$ and $L_3$ $(x=T, -T\leq y\leq T)$. Let $\Gamma$ be a contour describing $C$, $K$ and the corresponding part of the imaginary axis, and let $\rho_v=\beta_v+i\gamma_v$ (the zeros of the zeta function $\zeta(1/2+z)$ whose real part satisfies $0\leq \beta_v<1/2$) be a point interior to $\Gamma$, and $\log(z-\rho_v)$ be taken as its principal value. We write $C_1$ as a contour describing $\Gamma$ in positive direction to the point $i\gamma_v$, then along a segment $y=\gamma_v$, $0<x<\beta_v-r$, and describing a small circle with center $z=\rho_v$, radius $r$, then going back along the negative side of the segment to $i\gamma_v$, and then along $\Gamma$ to the starting point. By Cauchy's theorem we get $$\int_{C_1}\frac{\log(z-\rho_v)}{z^2}dz=0 \tag{1}$$ Hence $$\frac{1}{2\pi i} \int_{\Gamma}\frac{\log(z-\rho_v)}{z^2}dz=-\int_{0}^{\beta_v}\frac{dx}{(x+i\gamma_v)^2}\tag{2}$$ where the integral around the small circle with center $z=\rho_v$, radius $r$, tends to zero as $r\to 0$. Question: How did we get equation $(2)$? Also I am struggling to understand the above highlighted portion which describes the contour $C_1$. If possible please draw the contour. Any insights will be welcome. Thank you!
For $\Gamma$ and $C_1$ see Fig.1, 2 below. Let $L^+$, $L^-$ be the upper edge and the lower edge of the segment $y=\gamma_v$, $0<x<\beta_v$ respectively. Since $$ 0=\int_{C_1}\frac{\log(z-\rho_v)}{z^2}dz=\int_\Gamma \frac{\log(z-\rho_v)}{z^2}dz+\int_{L^+} \frac{\log(z-\rho_v)}{z^2}dz+\int_{L^-} \frac{\log(z-\rho_v)}{z^2}dz, $$ we have $$ \int_\Gamma \frac{\log(z-\rho_v)}{z^2}dz=-\int_{L^+} \frac{\log(z-\rho_v)}{z^2}dz-\int_{L^-} \frac{\log(z-\rho_v)}{z^2}dz. $$ On the segment, $z-\rho_v=x-\beta_v$ is a negative real number, so the value of $\log(z-\rho_v)$ on $L^+$ is $\log(\beta_v-x)+i\pi$ and the value on $L^-$ is $\log(\beta_v-x)-i\pi$. So we get $$ \int_{L^+} \frac{\log(z-\rho_v)}{z^2}dz=\int_0^{\beta_v} \frac{\log(\beta_v-x)+i\pi}{(x+i\gamma_v)^2}dx$$ and$$ \int_{L^-} \frac{\log(z-\rho_v)}{z^2}dz=\int_{\beta_v}^0 \frac{\log(\beta_v-x)-i\pi}{(x+i\gamma_v)^2}dx.$$ Therefore we have $$ \int_{L^+} \frac{\log(z-\rho_v)}{z^2}dz+\int_{L^-} \frac{\log(z-\rho_v)}{z^2}dz=2\pi i \int_0^{\beta_v} \frac{dx}{(x+i\gamma_v)^2},$$ which leads to $(2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4399629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving $\frac{\pi}{2}=\sum^\infty_{l=0} \frac{(-1)^l}{2l+1}\big(P_{2l}(x)+\text{sgn}(x)P_{2l+1}(x)\big)$ Can someone help me in proving the following: $$ \frac{\pi}{2}=\sum^\infty_{l=0} \frac{(-1)^l}{2l+1}(P_{2l}(x)+\text{sgn}(x)\cdot P_{2l+1}(x)), $$ for any value of $x$, $-1\le x\le 1$? (Here $P_l(x)$ is the Legendre polynomial of degree $l$, and $\text{sgn}(x)$ is the sign function.) It's the craziest result I've seen (that the right-hand side is actually independent of $x$), and it's seemingly difficult to prove. I'm thinking maybe you could prove it using elliptic integrals somehow? Or maybe an easier way? Putting it into MATLAB verifies the result (going to a high enough $l$).
Assume $x\in\mathbb{R}$, $|x|<1$. Then $|P_n(x)|=O(n^{-1/2})$ as $n\to\infty$, hence $$F(x,z):=\sum_{n=0}^\infty P_n(x)\frac{z^{n+1}}{n+1},\qquad G(x,z):=\sum_{n=1}^\infty P_n(x)\frac{z^n}{n}$$ converge absolutely and uniformly in $\{z\in\mathbb{C}:|z|\leqslant 1\}$, and clearly \begin{align*} F(x):=\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}P_{2n}(x)&=\frac1{2i}\big(F(x,i)-F(x,-i)\big), \\ G(x):=\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}P_{2n+1}(x)&=\frac1{2i}\big(G(x,i)-G(x,-i)\big). \end{align*} Both $F(x,z)$ and $G(x,z)$ can be computed using the generating function $$P(x,t):=\sum_{n=0}^\infty P_n(x)t^n=\frac1{\sqrt{1-2xt+t^2}}$$ (here $t\in\mathbb{C}$, $|t|<1$): for $|z|<1$ we then have \begin{align*} F(x,z)&=\int_0^z P(x,t)\,dt&&=\log\frac{z-x+\sqrt{1-2xz+z^2}}{1-x},\\ G(x,z)&=\int_0^z\frac{P(x,t)-1}{t}\,dt&&=\log\frac2{1-xz+\sqrt{1-2xz+z^2}}, \end{align*} and for $|z|=1$ we may take radial limits, by Abel's theorem. This gives, for $\color{blue}{x\geqslant 0}$, $$F(x,\pm i)=\log\frac{\pm i+\sqrt{x}}{1+\sqrt{x}},\qquad G(x,\pm i)=\log\frac2{(1+\sqrt{x})(1\mp i\sqrt{x})},$$ then $F(x)=\operatorname{arccot}\sqrt{x}$ and $G(x)=\arctan\sqrt{x}$, hence $F(x)+G(x)=\pi/2$. The case $|x|<1$ follows by parity, and the cases $x=\pm 1$ are easy to check separately.
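A numerical check of the identity (a sketch, not part of the original answer): the Legendre values are generated by the standard three-term recurrence $(n+1)P_{n+1}=(2n+1)xP_n-nP_{n-1}$, and the loose tolerance reflects the slow, roughly $O(n^{-3/2})$, decay of the terms noted above:

```python
from math import pi, copysign

def series(x, terms=50000):
    """Partial sum of sum_{l>=0} (-1)^l/(2l+1) * (P_{2l}(x) + sgn(x) P_{2l+1}(x))."""
    sgn = copysign(1.0, x) if x != 0 else 0.0
    p_prev, p_cur = 1.0, x            # P_0(x), P_1(x)
    total = 0.0
    for l in range(terms):
        total += (-1) ** l / (2 * l + 1) * (p_prev + sgn * p_cur)
        # advance two steps: (P_{2l}, P_{2l+1}) -> (P_{2l+2}, P_{2l+3})
        for n in (2 * l + 1, 2 * l + 2):
            p_prev, p_cur = p_cur, ((2 * n + 1) * x * p_cur - n * p_prev) / (n + 1)
    return total

for x in (0.0, 0.3, -0.7):
    assert abs(series(x) - pi / 2) < 1e-2
```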
{ "language": "en", "url": "https://math.stackexchange.com/questions/4399774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Does the mean of the ratio of the number of distinct prime factors to the number of divisors of natural numbers converge? Let $d(n)$ and $\omega(n)$ be the number of divisors and the number of distinct prime factors of $n$ respectively. What is the limiting value of $$ \lim_{n \to \infty} \frac{1}{n}\sum_{r=1}^n \frac{\omega(r)}{d(r)} $$ For $n \le 23275000000 $, the value is approximately $0.275967$.
Just to expand on my comment on tomos's answer: we always have $d(r) \geq 2^{\omega(r)}$, so when $\omega(r)$ is large, we have $$ \frac{\omega(r)}{d(r)} \leq \frac{\omega(r)}{2^{\omega(r)}} \to 0. $$ But recall that for all $m$, asymptotically $100\%$ of positive integers $r$ have $\omega(r) > m$. Now let $\varepsilon > 0$. By the previous statement, there exists some $N$ such that for all $n > N$, the proportion of integers $r$ in $[1, n]$ with $\frac{\omega(r)}{2^{\omega(r)}} \geq \frac{\varepsilon}{2}$ is less than $\frac{\varepsilon}{2}$. Then the sum $$ \sum_{r=1}^n \frac{\omega(r)}{d(r)} $$ can be split into two pieces: one containing at most $n$ terms that are each less than $\frac{\varepsilon}{2}$, and the other containing fewer than $\frac{n \varepsilon}{2}$ terms, each of them at most $1$. This yields $$ \sum_{r=1}^n \frac{\omega(r)}{d(r)} < n \cdot \frac{\varepsilon}{2} + \frac{n \varepsilon}{2} \cdot 1 = n \varepsilon. $$ So the average is less than $\varepsilon$ whenever $n$ is large enough; i.e. it converges to $0$.
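A sieve-based experiment (my own sketch) illustrates the key pointwise bound $d(r)\ge 2^{\omega(r)}$; note that the average itself decreases extremely slowly (typically $\omega(r)\approx\log\log r$), so this only illustrates, not demonstrates, the limit:

```python
N = 100_000

# smallest-prime-factor sieve
spf = list(range(N + 1))
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

def factor_stats(n):
    """Return (omega(n), d(n)) using the smallest-prime-factor table."""
    omega, d = 0, 1
    while n > 1:
        p, e = spf[n], 0
        while n % p == 0:
            n //= p
            e += 1
        omega += 1
        d *= e + 1
    return omega, d

total = 0.0
for r in range(1, N + 1):
    omega, d = factor_stats(r)
    assert d >= 2 ** omega          # the key inequality in the argument
    total += omega / d
print(total / N)                    # decreases very slowly as N grows
```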
{ "language": "en", "url": "https://math.stackexchange.com/questions/4400180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }