Maclaurin expansion problem I am having a problem with a question regarding Maclaurin expansion. a) Find the terms up to $x^4$ in the Maclaurin expansion of $f(x)=\ln(\cos(x))$. I was able to answer part a. The Maclaurin expansion up to $x^4$ is: $f(x)=-x^2/2-x^4/12$ b) Use this series to find an approximation in terms of $\pi$ for $\ln(2)$. I was not able to answer part b. If someone knows how to do it I would appreciate it. Thanks
Hint: Use the fact that $\ln(a^b)=b\cdot\ln(a)$. Therefore, $\ln(2)=-\ln\bigl(\frac{1}{2}\bigr)$. Then, when does $\cos(x)=\frac{1}{2}$?
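Following the hint: $\cos(\pi/3)=\frac12$, so $f(\pi/3)=\ln\frac12=-\ln 2$, and the truncated series gives $\ln 2\approx \pi^2/18+\pi^4/972\approx0.6485$. A quick numeric sanity check (a sketch, not part of the original answer):

```python
import math

x = math.pi / 3                    # cos(pi/3) = 1/2
series = -x**2 / 2 - x**4 / 12     # Maclaurin series of ln(cos x) up to x^4
approx_ln2 = -series               # ln 2 = -ln(1/2) = -f(pi/3)
# approx_ln2 = pi^2/18 + pi^4/972, roughly 0.6485 versus ln 2 = 0.6931...
```

The error is sizable because $x=\pi/3\approx1.05$ is not small, but the approximation is clearly in terms of $\pi$ as asked.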
{ "language": "en", "url": "https://math.stackexchange.com/questions/2540444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Expected number of draws to draw 3 of the same balls out of an urn with replacement An urn contains twelve balls numbered 1 to 12. We draw a ball, record its number, and replace it in the urn. We repeat this until we draw any number three times. What is the expected number of draws? First post here. Anyhow, I can solve the variant with one repeat, but I am struggling to figure out how to solve it with two repeats. My initial guess was to use a Geometric distribution with $E[X] = 1/p$, where $p = P(\text{three balls are }1)+...+P(\text{three balls are }12) = 12*(1/12^3)$. So $1/(12*(1/12^3)) = 144$. It seemed a bit high, and I realized that this only works if we are drawing three at a time. Been stumped for a few hours now. I think I am overthinking this. Can anyone help?
Let's say the first time we have drawn the same number three times is on draw number $X$, so we want to find $E(X)$. Our approach is to find $P(X>n)$ for $n=0,1,2, \dots ,24$, and then apply the theorem $E(X) = \sum_{n>0} P(X>n)$. $X>n$ if no number has been drawn more than twice by the $n$th draw. There are $12^n$ possible sequences of numbers in $n$ draws, all of which we assume are equally likely. We would like to count the sequences in which no number occurs more than twice; let's say this number is $a_n$. The exponential generating function for $a_n$ is $$\begin{align} f(x) &= \left( 1+x+\frac{1}{2}x^2 \right)^{12} \\ &= \left[ (1+x)+\frac{1}{2}x^2 \right]^{12} \\ &= \sum_{i=0}^{12} \binom{12}{i} (1+x)^i \left( \frac{1}{2} x^2 \right)^{12-i} \\ &= \sum_{i=0}^{12} \binom{12}{i} \left( \frac{1}{2} \right)^{12-i} x^{24-2i} \sum_{j=0}^i \binom{i}{j} x^j \\ &= \sum_{i=0}^{12} \sum_{j=0}^i \binom{12}{i} \left( \frac{1}{2} \right)^{12-i} \binom{i}{j} x^{24-2i+j} \end{align}$$ where we have applied the binomial theorem twice above. So $a_n$ is the coefficient of $(1/n!) x^n$ in $f(x)$: $$a_n = n! \sum_{i=12-\lfloor n/2 \rfloor}^{12} \binom{12}{i} \left( \frac{1}{2} \right)^{12-i} \binom{i}{n-24+2i}$$ for $n=0,1,2, \dots ,24$, and $$P(X>n) = \frac{a_n}{12^n}$$ Finally, $$E(X) = \sum_{n=0}^{24} P(X>n) \approx 10.7821$$ which is in agreement with the Monte Carlo estimate given in a comment by Suzuteo.
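The computation is easy to reproduce: extract the coefficients of $(1+x+\frac12 x^2)^{12}$ by repeated polynomial multiplication with exact rationals, then sum $P(X>n)=a_n/12^n$. A sketch, not part of the original answer:

```python
from fractions import Fraction
from math import factorial

# coefficients of (1 + x + x^2/2)^12, built by 12 convolutions
base = [Fraction(1), Fraction(1), Fraction(1, 2)]
poly = [Fraction(1)]
for _ in range(12):
    new = [Fraction(0)] * (len(poly) + 2)
    for i, c in enumerate(poly):
        for j, b in enumerate(base):
            new[i + j] += c * b
    poly = new

# a_n = n! * [x^n] f(x);  P(X > n) = a_n / 12^n;  E(X) = sum over n = 0..24
E = sum(factorial(n) * poly[n] / Fraction(12) ** n for n in range(25))
# float(E) should agree with the 10.7821 quoted above
```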
{ "language": "en", "url": "https://math.stackexchange.com/questions/2540530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Circular permutations problem, need help How many ways can $5$ people sit on a circular table of $8$ places? I know that if both people and places number =$n$ , then the answer $= (n-1)!$ But the situation here is different, since there are more places than people, what should I do? Thanks for help
Let $X$ be the set of all possible arrangements of $5$ people in a straight line of $8$ chairs. Now, define a relation on $X$ : two arrangements are similar, if they are related by a cyclic permutation: if there exists a number $0 \leq r\leq 7 $, such that each person in the first arrangement, when shifted by $r$ places to the right, ends up where he is supposed to be in the second arrangement (with wrapping around i.e. from extreme right the next shift is to the extreme left). Now, note that this relation is an equivalence relation, and the equivalence class of each straight line permutation contains eight elements, and represents one single arrangement around a circular table. Therefore, the answer to your question is just the number of ways that $5$ people can sit in $8$ places in a straight line (that is, with no constraints on symmetry), divided by $8$. The answer to this question is then just $\frac{8 \times7\times6\times 5\times 4}{8} = 7\times6\times5\times4 = 840$.
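The count is small enough to verify by brute force — enumerate all linear seatings and collapse each rotation class to a canonical representative (a sketch, not part of the original answer):

```python
from itertools import permutations

PEOPLE = "ABCDE"
# '-' marks an empty chair; deduplicating gives 8!/3! = 6720 linear seatings
seatings = set(permutations(PEOPLE + "---"))

def canonical(s):
    # representative of the rotation class: lexicographically smallest rotation
    return min(s[r:] + s[:r] for r in range(8))

circular = {canonical(s) for s in seatings}
# every class has exactly 8 members (distinct labels rule out rotational
# symmetry), so len(circular) should be 6720 / 8 = 840
```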
{ "language": "en", "url": "https://math.stackexchange.com/questions/2540644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
A finite group is nilpotent iff two elements with relatively prime order commute (Question 9 in chapter 6.1 of Dummit and Foote). Prove that a finite group G is nilpotent if and only if whenever $a, b \in G$ with $(|a|, |b|) = 1$ then $ab = ba$. It says to use the following theorem: Let G be a finite group, $p_1$, $p_2$, ... $p_s$ be the distinct primes dividing its order, and $P_i \in Syl_{p_i}(G)$. Then G is nilpotent iff $G \cong P_1 \times P_2 \times ... P_s$. I believe I know the if direction: an element $a \in G$ corresponds to an element $(g_1, g_2, ... g_s) \in P_1 \times P_2 \times ... P_s$ and $|a| = lcm(|g_1|, |g_2|, ... |g_s|)$. If $b$ corresponds to $(h_1, h_2, ... h_s)$ then $(|a|, |b|) = 1$ implies each $(|g_i|, |h_i|) = 1$. Since the order of the elements divides $|P_i|$ a prime power, $|g_i|$ or $|h_i|$ has to be 1 or their gcd would not be 1. So one of every pair $g_i$ and $h_i$ has to be 1, so they commute, so $a$ and $b$ commute. I'm not sure how to do the only if direction. Any pointers? Thank you
Here is my proof. I will use the theorem which says: if $G$ is finite nilpotent and the $P_i$'s are Sylow $p_i$-subgroups of $G$, then $G= \displaystyle\prod_{p_i} P_i$ (this product is in fact direct, but we don't need uniqueness). Let $G$ be a nilpotent group and let $P_1,P_2,\dots,P_n$ be Sylow subgroups of $G$, one for each prime $p_i \mid |G|$. Since $G$ is nilpotent we have $n_{p_i}(G)=1$ for all $p_i$, so each $P_i$ is a normal subgroup. Since $P_i \cap P_j =1$ for all $i \neq j$ and they are normal, the elements of $P_i$ and $P_j$ commute (easy to check). By the above theorem, any two elements of $G$ with relatively prime orders commute. Conversely, suppose any two elements of $G$ with relatively prime orders commute. Let $P_1,P_2,\dots,P_n$ be the Sylow $p_i$-subgroups of $G$, one for each prime $p \mid |G|$. Then $P_i$ commutes with $P_j$ for all $i \neq j$, so $P_i \subseteq N_G(P_j)$ for all $i \neq j$. Hence $G=N_G(P_j)$, which implies that all the $P_i$'s are normal, and therefore $G$ is nilpotent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2540771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Can this definite integral involving series be solved without a calculator? I got this question today but I can't see if there is any good way to solve it by hand. Evaluate the definite integral $$\int_2^{12}\frac{\sqrt{x+\sqrt{x+\sqrt{x+...}}}}{\sqrt{x\sqrt{x\sqrt{x}...}}}\,\mathrm{d}x$$ where the series in the numerator and denominator continue infinitely. If you let $y=\sqrt{x+\sqrt{x+\sqrt{x+...}}}=\sqrt{x+y}$, solving for $y$ we get $y=\frac{1\pm\sqrt{1+4x}}{2}$. And similarly for the denominator we have $z=\sqrt{xz}$. So $z=x$. So the integral simplifies to $$\int_2^{12}\frac{1\pm\sqrt{1+4x}}{2x}\,\mathrm{d}x\,.$$ Now my problems are * *I don't know what to do with the $\pm$. *I tried to solve the integral by separating it as a sum of two fractions. But I can't solve $$\int_2^{12}\frac{\sqrt{1+4x}}{2x}\,\mathrm{d}x\,.$$
Hint: 1. For real $y,$ $\sqrt{y^2}=|y|\ge0$ and $\sqrt{1+4x}\ge3$ for $2\le x\le12\implies1-\sqrt{1+4x}<0$ 2. Set $\sqrt{1+4x}=u\implies4x=u^2-1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2540947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 0 }
Direct Sum Isomorphism: $\mathbf{Z}_4 \oplus \mathbf{Z}_3 \cong \mathbf{Z}_{12}$ A question I'm working on asks: If $\phi$ is an isomorphism from $\mathbf{Z}_4 \oplus \mathbf{Z}_3$ to $\mathbf{Z}_{12}$, what is $\phi(2,0)$? What are the possibilities for $\phi(1, 0)$? Give reasons for your answer. I know that isomorphisms preserve the order of an element. $(2,0)$ has order 2 and so will map to 6. $(1,0)$ will have order 4 and so will map to 3 or 9. I did the latter by checking the order of all elements (skipping those relatively prime to 12). The answer key actually constructs an explicit isomorphism then checks: The isomorphism defined by $(1, 1)x \rightarrow 5x$ with $x=6$ takes $(2, 0)$ to $6$. [Then later talking about mapping $(1,0)$] The first case occurs for the isomorphism defined by $(1, 1)x \rightarrow 7x$ with $x=3$ ; the second case occurs for the isomorphism defined by $(1, 1)x \rightarrow 5x$ with $x=9$. How did they construct these isomorphisms? I see that $(1,1)$ generates $\mathbf{Z}_4 \oplus \mathbf{Z}_3$ but what about the RHS?
For $Z_n$, any number $m$ where $gcd(m,n)=1$ will generate the entire group. As you noted, $(1,1)$ is a generator of $Z_3\oplus Z_4$. Since both groups are cyclic and of the same order, any map defined by sending a generator to a generator will be an isomorphism.
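The answer key's map can be checked exhaustively. Assuming its notation $(1,1)x \rightarrow 5x$ means $x\cdot(1,1)=(x \bmod 4,\, x \bmod 3) \mapsto 5x \bmod 12$, the sketch below (not part of the original answer) confirms it is a bijective homomorphism sending $(2,0)$ to $6$:

```python
phi = {}
for x in range(12):
    phi[(x % 4, x % 3)] = (5 * x) % 12   # x*(1,1) in Z4 + Z3  ->  5x in Z12

# additivity: phi(a + b) = phi(a) + phi(b), componentwise mod (4, 3) and mod 12
homomorphism = all(
    phi[((a1 + b1) % 4, (a2 + b2) % 3)] == (phi[(a1, a2)] + phi[(b1, b2)]) % 12
    for (a1, a2) in phi for (b1, b2) in phi
)
```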
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Bounded variation of $\frac1f$ when $\inf(|f|)>0$ and $f$ has bounded variation I want to show that $\frac{1}{f}\in BV[a,b]$ when $\inf(|f|)>0$ and $f\in BV[a,b]$. I tried to find a partition $P$ such that $V(\frac{1}{f},P)$ is bounded above, using the partition that bounds $V(f,P)$, but I failed (in the case where $f$ is not continuous, which happens at at most countably many points). Writing $\frac1f$ as the difference of two increasing functions also seems hard. I hope someone can help me or give me a hint. Thank you.
Hint: Say $f(x)\ge c>0$ for all $x$. Then $$\left|\frac1{f(x)}-\frac1{f(y)}\right|=\left|\frac{f(y)-f(x)}{f(x)f(y)}\right|\le\frac{|f(x)-f(y)|}{c^2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Diophantine polynomial equations If $k$ is a positive integer, what is the smallest $k$ for which $x^2 + kx = 4y^2 -4y +1$ has a solution $(x,y)$ where $x$ and $y$ are integers? Since $4y^2-4y+1=(2y-1)^2$, the equation becomes $(2y-1)^2 = x^2 +kx$, i.e. $(2y-1+x)(2y-1-x) = kx$. I tried equating the factors: $(2y-1+x) = k$ and $(2y-1-x) = x$, which gives $x= k-2y+1$ and $k= (6y-3)/2$, but then $k$ does not have an integer solution.
Completing the square, $(2x+k)^2=k^2+(4y-2)^2$, so we need $$k^2+(4y-2)^2$$ to be a perfect square. WLOG $k=a(p^2-q^2),2(2y-1)=a(2pq)$ $\implies apq$ is odd. The minimum positive values of $a,p^2,q^2$ will be $1,3^2,1^2$ respectively. $$x=\dfrac{a(q^2-p^2)\pm a(p^2+q^2)}2=?$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Convolution of the PDFs of $2$ Independent Random Variables I'm having trouble getting the correct pdf for $Z$ in the problem $Z = X + Y$ where the pdf of $X$ and $Y$ are $$ f_x(x) = f_y(x) = \left\{\begin{aligned} &x/2 &&: 0 < x < 2\\ &0 &&: \text{otherwise} \end{aligned} \right.$$ I am solving this using the convolution $$f_z(w) = \int_{-\infty}^\infty f_x(x)f_y(w-x) \, dx.$$ The limits can be changed to $0$ and $2$ in order for $f_x(x)$ to have a non-zero value and then $f_x(x)$ will equal $x/2$. $$f_z(w) = \frac{1}{2}\int_{0}^2 xf_y(w-x) \, dx $$ In order for $f_y(w-x)$ to have a non-zero value, $0 < w-x < 2 \implies x < w < 2 + x$. After drawing a picture, I found two cases that can be tested, these being: $$\text{Case 1}: 0 \le w \le 2$$ $$\text{Case 2}: 2 \le w \le 4$$ For case $1$, the bounds are $0$ and $w$, giving $$f_z(w) = \frac{1}{2}\int_{0}^w x f_y(w-x) \, dx = \frac{1}{4}\int_{0}^w x^2 \, dx = w^3 /12.$$ For case $2$, the bounds are $w-2$ and $2$ giving $$f_z(w) = \frac{1}{2}\int_{w-2}^2 xf_y(w-x) \, dx = \frac{1}{4}\int_{w-2}^2 x^2 \, dx = (8-(w-2)^3) /12$$ Combining the two cases, we get $$ f_z(w) = \left\{\begin{aligned} &w^3 /12 &&: 0 \le w \le 2\\ &(8-(w-2)^3) /12 &&: 2 < w \le 4\\ &0 &&: \text{otherwise} \end{aligned} \right.$$ This can't be right because the area under $f_z(w)$ is $4/3$, not $1$. Can I get some advice on where I went wrong with this calculation?
You computed the limits of integration correctly. But you substituted wrong expressions for $f_y(w-x)$ in both integrals. In particular, $$f_y(w-x) = (w-x)/2$$ when $0<(w-x)<2$. Therefore, your integrals would be $$f_z(w) = \frac{1}{4}\int_{0}^{w}x(w-x)\,dx$$ for $0 \le w \le 2$ and $$f_z(w) = \frac{1}{4}\int_{w-2}^{2}x(w-x)\,dx$$ for $2 \le w \le 4$.
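With the corrected integrand, the first piece evaluates to $\frac14\int_0^w x(w-x)\,dx = w^3/24$, and the total mass comes out to $1$. A numeric sanity check of the convolution (a sketch, not part of the original answer):

```python
def f(t):                          # common density of X and Y
    return t / 2 if 0 < t < 2 else 0.0

def f_z(w, n=4000):                # Riemann sum of the convolution integral
    dx = 4.0 / n
    return sum(f(x) * f(w - x) for x in (i * dx for i in range(n))) * dx

# closed form of the first piece: (1/4) * int_0^w x(w-x) dx = w^3 / 24
err = abs(f_z(1.0) - 1.0 / 24)
total = sum(f_z(i * 0.01) * 0.01 for i in range(400))   # should be close to 1
```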
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculation of the volume and surface of a cuboid knowing the coordinates of two opposite vertices I hope someone will help me with this problem I'm facing right now. Our C language teacher gave us an exercise to solve. The exercise itself looks easy and the programming part is not the problem. He asked us to calculate the volume, surface and total length of all edges of a cuboid knowing only the coordinates of two vertices (those forming the space diagonal). I did my research on the internet to find the mathematical formulas to calculate what I need here, but apparently with just two vertices it's not possible. Now I'm confused: I don't know if our teacher forgot to give another detail or if I am missing something big right here, I really don't know! Please, can anyone here help? Thank you in advance :) Here is the complete exercise: Task 3: Cuboid Calculations A cuboid is represented by two of its vertices in three-dimensional space, as follows: Write a C program that reads the coordinates of the two vertices (the coordinates of the two vertices will be given by the user of the program) and then calculates the following figures and outputs them on the screen: * *Volume *Surface *Total length of all edges
The edges of the cuboid are parallel to the coordinate axes. $$ x_2-x_1= L;\quad y_2-y_1= B; \quad z_2-z_1 = H; $$ $$ V= LBH; \quad A = 2 (LB+BH +HL). $$
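The assignment asks for C, but the arithmetic itself is tiny; here is a Python sketch of the three requested figures (taking absolute differences so the ordering of the two vertices doesn't matter, and noting that the total edge length is $4(L+B+H)$):

```python
def cuboid_figures(p1, p2):
    """Volume, surface area and total edge length of an axis-aligned cuboid
    given two opposite vertices p1, p2 (the endpoints of the space diagonal)."""
    l, b, h = (abs(a - c) for a, c in zip(p1, p2))
    volume = l * b * h
    surface = 2 * (l * b + b * h + h * l)
    edges = 4 * (l + b + h)
    return volume, surface, edges

v, a, e = cuboid_figures((0, 0, 0), (2, 3, 4))
# v = 24, a = 52, e = 36
```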
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Linearize the equation $f^2(x)+1=2f(x+1)$ How can I linearize, for the continuous function $f$, the functional equation $f^2(x)+1=2f(x+1)$? I've been studying recently the book on functional equations by Christopher G. Small and I've encountered the term "linearization" of functional equations. I cannot figure out how to apply this method to this equation, but if I had the equation $f^2(x)=2f(x+1)$ instead, I know I could apply logarithms to both sides (though I don't know whether $f(x)>0$) so that $2\ln f(x)=\ln f(x+1)+ \ln2$. Is it possible to do something like this with the first functional equation?
I am not sure that linearization is applicable to this functional equation, but I will describe another way to solve this. First, let's consider trivial solutions. $$f^2(x)+1=2f(x+1)$$ $$f(x)=c$$ $$c^2+1=2c$$ $$c^2-2c+1=0$$ $$(c-1)^2=0$$ $$c=1$$ $$f(x)=1$$ I assume that the function is meant to be continuous and defined everywhere. Let's find some restrictions we can put on $f(x)$. The domain must be all real numbers. $$f(x+1)=\frac{f^2(x)}{2}+\frac{1}{2}$$ $$f(x-1)= \pm \sqrt{2f(x)-1}$$ Both of these operations should be valid if we want $f(x)$ to have a domain of all real numbers. There is no problem scaling a value of $f(x)$ up, but there may be a problem scaling down due to the domain of square roots. $$2f(x)-1 \geq 0$$ $$2f(x) \geq 1$$ $$f(x) \geq \frac{1}{2}$$ This means that we only need to consider the positive branch of the square root. $$f(x-1)= \sqrt{2f(x)-1}$$ We will further bound $f(x)$ by analyzing a sequence constructed from the equation. $$a_{n+1}= \sqrt{2a_n-1}$$ We must ensure that $a_n$ never goes below $\frac{1}{2}$. We will first establish that $a_n$ is a monotonically decreasing sequence. $$a_n>a_{n+1}$$ $$a_n>\sqrt{2a_n-1}$$ $$a_n^2>2a_n-1$$ $$a_n^2-2a_n+1>0$$ $$(a_n-1)^2>0$$ This is true for all $a_n$ except at $a_n=1$, which is the only solution to $a_n=a_{n+1}$. Since $a_n$ is monotonically decreasing, all $a_n<1$ can be ruled out as they will eventually go below $\frac{1}{2}$. We have tightened the bound on $f(x)$ to: $$f(x) \geq 1$$ For all $a_n>1$, we must prove that $a_n$ always stays above $1$. $$\sqrt{2a_n-1}>1$$ $$2a_n-1>1$$ $$2a_n>2$$ $$a_n>1$$ Since $a_n=1$ is the only solution to $a_n=a_{n+1}$, $a_n$ is monotonically decreasing, and $a_n$ stays above $1$ for all $a_n>1$, $a_n$ converges to $1$ for all $a_0>1$.
Convergence of this sequence to $1$ and the bound on $f(x)$ means that: $$\lim_{x \to -\infty} f(x)=1$$ $$\text{From the main identity:}$$ $$f^2(x)+1=2f(x+1)$$ It is clear that the entirety of $f(x)$ can be defined if $f(x)$ is known over a domain interval of length $1\text{.}$ To find the other values $f(x)$ can be shifted using the identity. Let this interval be $x \in [0, 1]\text{.}$ Let $S(x)$ be a continuous function defined on this interval. $S(x)$ can be used to construct other values of $f(x)\text{.}$ For $f(x)$ to be continuous at $x=1$, $S(x)$ must satisfy: $$S^2(0)+1=2S(1)$$ Generally, any continuous seed function on the $[0, 1]$ interval will allow for computation of a continuous function $f(x)$ given that the seed function satisfies: $$S(x) \geq 1$$ $$S^2(0)+1=2S(1)$$ Conclusions that can be drawn about the resulting continuous function $f(x)$ include: $$f^2(x)+1=2f(x+1)$$ $$f(x) \geq 1$$ $$\lim_{x \to -\infty} f(x)=1$$ Additionally, there are a few things that can be said about $f(x)$ as $x$ tends to $\infty\text{.}$ If $S(x)=1$, then $\lim_{x \to \infty}f(x)=1$ If $S(x)>1$, then $\lim_{x \to \infty}f(x)=\infty$ If $S(x)=1$ for some $x$ in the interval but not all $x$, then: $$\liminf_{x \to \infty}f(x)=1 \quad\quad \limsup_{x \to \infty}f(x)=\infty$$ This occurs because the sequence $b_n$ is monotonically increasing for all $b_n>1$ where: $$b_{n+1}=\frac{b_n^2}{2}+\frac{1}{2}$$ $$b_{n+1}>b_n$$ $$\frac{b_n^2}{2}+\frac{1}{2}>b_n$$ $$b_n^2+1>2b_n$$ $$b_n^2-2b_n+1>0$$ $$(b_n-1)^2>0$$ Note that using the seed function method to generate $f(x)$ can produce all possible $f(x)$ that are continuous everywhere. The next step would be to determine if any solutions are differentiable everywhere. I will come back to edit this answer if I find any.
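The two iterations in the answer are easy to watch numerically (a sketch, not part of the original answer): the backward map $a \mapsto \sqrt{2a-1}$ creeps down toward $1$ (slowly, since the fixed point is neutral), while the forward map $b \mapsto b^2/2 + 1/2$ climbs without bound from any $b_0 > 1$.

```python
import math

a = 5.0
for _ in range(5000):
    a = math.sqrt(2 * a - 1)      # backward iteration: decreases toward 1

b = 1.1
for _ in range(25):
    b = b * b / 2 + 0.5           # forward iteration: increases without bound
```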
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Getting x in terms of y I have this equation: $$\dfrac{x}{y} = \dfrac{y-x}{x}$$ How would I separate $x$ and $y$ in $x^2+xy-y^2=0$ ?
Consider your equation as a quadratic equation with respect to the variable $y$ : $$-y^2 + xy + x^2=0$$ handling the variable $x$ as a parameter. Then the discriminant is $$D=b^2-4ac=x^2+4x^2=5x^2$$ It's easy to check that $\forall x\in \mathbb R, D\geq 0$, so you can safely express $y$ with respect to $x$ via the solutions of the quadratic equation, without worrying about complex numbers. Then : $$y_{1,2} = \frac{-b \pm \sqrt D}{2a} \Rightarrow y_{1,2} = \frac{-x\pm\sqrt{5x^2}}{-2}$$
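For $x > 0$ the two roots simplify to $y = x(1\pm\sqrt5)/2$, and both can be plugged back into $x^2+xy-y^2=0$ as a check (a sketch, not part of the original answer, assuming $x \neq 0$ and $y \neq 0$):

```python
import math

x = 3.0
roots_ok = True
for y in (x * (1 + math.sqrt(5)) / 2, x * (1 - math.sqrt(5)) / 2):
    residual = x * x + x * y - y * y   # should vanish, i.e. x/y == (y-x)/x
    roots_ok = roots_ok and abs(residual) < 1e-9
```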
{ "language": "en", "url": "https://math.stackexchange.com/questions/2541921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Inversion of Gamma and Zeta Functions through Fourier Transform I have been messing around with the Fourier Transform and wanted to see if I could manipulate this equation: $$\Gamma(s)\zeta(s)=\int_{0}^\infty\frac{x^{s-1}}{e^x-1}\rm dx$$ into an integral over the Gamma and Zeta Functions. My Work: $$\Gamma(s)\zeta(s)=\int_{0}^\infty\frac{e^{s\log(x)}}{e^x-1}\frac{\rm dx}{x}$$ Let $2\pi k=\log(x)$ $$\Gamma(s)\zeta(s)=\int_{-\infty}^\infty \frac{e^{2\pi ks}}{e^{e^{2\pi k}}-1}2\pi\rm dk$$ Let $s=u+iv$ $$\Gamma(u+iv)\zeta(u+iv)=\int_{-\infty}^\infty \frac{e^{2\pi ku}e^{2\pi kiv}}{e^{e^{2\pi k}}-1}2\pi\rm dk$$ Through this, I now apply the Fourier Inversion Theorem:$$\frac{e^{2\pi ku}}{e^{e^{2\pi k}}-1}2\pi=\int_{-\infty}^\infty\Gamma(u+iv)\zeta(u+iv)e^{-2\pi ikv}\rm dv$$ I'm sure that I messed up somewhere since this does not look very feasible/pretty but can anyone confirm?
For $\sigma > 1$, with $x = e^{-u}$ $$\Gamma(\sigma+2i\pi\xi)\zeta(\sigma+2i\pi\xi) = \int_0^\infty \frac{x^{(\sigma+2i\pi\xi)-1}}{e^x-1}dx = \int_{-\infty}^\infty \frac{e^{-(\sigma+2i\pi\xi) u}}{e^{e^{-u}}-1} du= \mathcal{F}[\frac{e^{-\sigma u}}{e^{e^{-u}}-1}](\xi)$$ $$\frac{e^{-\sigma u}}{e^{e^{-u}}-1}= \int_{-\infty}^\infty \Gamma(\sigma+2i\pi\xi)\zeta(\sigma+2i\pi\xi) e^{2i \pi \xi u}d\xi$$ Everything converges nicely because $\frac{e^{-\sigma u}}{e^{e^{-u}}-1}$ is Schwartz, thus so is $\Gamma(\sigma+2i\pi\xi)\zeta(\sigma+2i\pi\xi)$ (it is fast decreasing).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the eigenvalues of a 4x4 matrix So I have this matrix $A$, so after I did $\det(xI-A)$, I have this \begin{pmatrix} x & 0 & 0 & -x \\ 0 & x & 0 & -2x \\ 0 & 0 & x & -2x \\ -1 & -1 & -1 & x-1 \end{pmatrix} At this point I'm contemplating taking out an $x$ out of the matrix but there is a $-1$ on the bottom row. How can I go about row reducing this matrix?
$A = \pmatrix {1&1&1&1\\2&2&2&2\\2&2&2&2\\1&1&1&1}$ Since $A$ is a singular matrix, we know that $0$ is an eigenvalue. So, what is the dimension of the kernel of $A$? If we perform row operations on $A$ we get $\pmatrix {1&1&1&1\\0&0&0&0\\0&0&0&0\\0&0&0&0}$ The dimension of the kernel is $3$, so $0$ is an eigenvalue of multiplicity $3$. The sum of the eigenvalues equals the trace of the matrix. The trace of $A$ is $6$, so the remaining eigenvalue is $6$.
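One way to confirm the nonzero eigenvalue without row reducing (an observation added here, not in the original answer): $A = u\,w^T$ with $u=(1,2,2,1)$ and $w=(1,1,1,1)$, so $Au = (w^Tu)\,u = 6u$, i.e. $u$ is an eigenvector for the eigenvalue $6$.

```python
A = [[1, 1, 1, 1],
     [2, 2, 2, 2],
     [2, 2, 2, 2],
     [1, 1, 1, 1]]
u = [1, 2, 2, 1]                       # A = u w^T with w = (1,1,1,1)

Au = [sum(row[j] * u[j] for j in range(4)) for row in A]
# A u = 6 u, so 6 is an eigenvalue; the trace (= 6) accounts for it,
# leaving eigenvalue 0 with multiplicity 3 (all rows are proportional)
trace = sum(A[i][i] for i in range(4))
```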
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simple upper bound on the probability that the sum of $n$ dice rolls is equal to the most likely total Suppose $n$ $s$-sided (and fair) dice are rolled, and consider the most likely value their total will take. Is there a simple / easy to state upper-bound on the probability that this total is rolled? I know you can bound this accurately using generating functions, but to use in the proof I'm working on requires summing over this probability for a range of (large) $n$ values, which gets too complicated. I imagine it can also be approximated using some distribution, but it's for a crypto proof so I really need an upper bound. I'm crudely upper bounding this with $1/s$ at the moment for all $n$. This follows by induction on $n$. Letting $X_i$ denote the distribution of the $i$th die roll. For $n = 1, Prob[X_1 = z] = 1/s$ for all $z \in [1, s]$, so the base case holds. Assume true for $n=k$, so $max_{z \in [k, ks]} Prob[\sum_{i = 1}^k {X_i} = z] \leq 1/s$. Then for $n = k+1$, and each $z \in [k+1, (k+1)s]$: $Prob[\sum_{i = 1}^{k+1} {X_i}=z$] $ = \sum_{h \in [k, ks]} Prob[X_{k+1} = z - \sum_{i = 1}^k X_i | \sum_{i = 1}^k X_i = h]Prob[\sum_{i = 1}^k X_i = h]$ $\leq \sum_{h \in [z - s, z-1]}1/s^2 = 1/s$, where the final inequality follows since by the induction hypothesis $Prob[\sum_{i = 1}^k X_i = h]\leq 1/s$ and $Prob[X_{k+1} = z - h] = 1/s$ if $h \in[z - s, z-1]$ and $0$ otherwise. I was wondering if there is any tighter (for large $n$) but still simple upper bound?
A good approximation for large $n$ is to note (rescaled) convergence in distribution to a normal distribution where the variance of the sum is $n\frac{s^2-1}{12}$ and so the probability of a value near the expectation of $n\frac{s+1}{2}$ will be about $\frac{1}{\sqrt{2\pi \sigma^2}}=\sqrt{\frac{6}{\pi n(s^2-1)}}$. If you want to simplify this and remove the $-1$, then the worst case will be when $s=2$ in which case $s^2-1 = \frac34 s^2$ making the approximation $\sqrt{\frac{8}{\pi}}\frac1{s\sqrt n}$. Since $\sqrt{\frac{8}{\pi}} \approx 1.596$, a further simplification on the conservative side is $\dfrac{1.6}{s \sqrt n}$ For $s=2$ this is a very good upper bound as $n$ increases for the true value of $\frac{n \choose \lfloor n/2 \rfloor}{2^n}$: for example with $s=2$ and $n=100$ it gives $0.08$ when the true probability of the mode is about $0.079589$. For larger $s$ it is only a small multiplicative amount too high: for example with $s=6$ and $n=36$ it gives about $0.044444$ for the probability of the mode when the true value is about $0.038761$.
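The exact mode probability is easy to compute by convolving pmfs, which makes comparison with the $\frac{1.6}{s\sqrt n}$ bound straightforward (a sketch, not part of the original answer):

```python
from fractions import Fraction
from math import sqrt

def mode_prob(n, s):
    """Exact probability of the most likely total of n fair s-sided dice."""
    pmf = [Fraction(1, s)] * s                 # one die (the offset of the support is irrelevant)
    for _ in range(n - 1):
        new = [Fraction(0)] * (len(pmf) + s - 1)
        for i, p in enumerate(pmf):
            for j in range(s):
                new[i + j] += p * Fraction(1, s)
        pmf = new
    return max(pmf)

p = float(mode_prob(36, 6))
bound = 1.6 / (6 * sqrt(36))
# p is roughly 0.038761, versus the bound of roughly 0.044444
```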
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Convergence of $(x_n)$, $(x_n + y_n)$ and $(y_n)$ Prove that if real sequences $(x_n)$ and $(x_n + y_n)$ converge, then $(y_n)$ converges. My attempt so far: Suppose that the limits of $(x_n)$ and $(x_n + y_n)$ are $x$ and $x+y$ respectively. (Intuitively, $y_n \rightarrow y$ as $n \rightarrow \infty$.) Then given any $\epsilon > 0$, there exists $N \in \mathbb{N}$ such that $|x_n - x| < 3\epsilon/2$ and $|(x_n + y_n) - (x+y)| < \epsilon/2$ for all $n > N$. By triangle inequality, we have that $|(x_n + y_n) - (x+y)| \leq |x_n - x| + |y_n - y|$. I want to use this relation and $|x_n - x|$ to obtain $|y_n - y| < \epsilon$. But I haven't had any luck so far. Any help would be appreciated!
Set $L$ to be the limit of $(x_{n}+y_{n})$ (or we take $y=L-x$ as OP noted), and we claim that $y=L-x$ is the limit of $(y_{n})$: For $\epsilon>0$, choose $N$ such that for all $n\geq N$, $|x_{n}-x|<\epsilon/2$ and $|x_{n}+y_{n}-L|<\epsilon/2$, then for such an $n$, we have \begin{align*} |y_{n}-(L-x)|&=|y_{n}+x_{n}-L-x_{n}+x|\\ &\leq|x_{n}+y_{n}-L|+|x_{n}-x|\\ &<\epsilon. \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Probability of choosing value of variable in equation - 4-tuple ($x_1 +x_2+x_3+x_4=10$) It may be the wording in this problem that is throwing me off but I can't seem to figure out the number of possible successful outcomes to calculate the probability: Suppose a non-negative integer solution to the equation $w+x+y+z=10$ is chosen at random (each one being equally likely to be chosen). What is the probability that in this particular solution $w$ is less than or equal to 2? Let $A = w \leq 2 $ To find P(A) I need: \begin{align} P(A)=\frac{|E|}{|S|} \end{align} Where |E| = Successful outcomes and |S| = Size of sample space. I start by finding the sample space of possible solutions, since this is a 4-tuple: ${\{w,x,y,z\}}$ -- order does not matter and repeats are allowed, I would say the size of sample space is \begin{align} |S| = C(10+4-1,4) =C(13,4) \end{align} So this gives me: \begin{align} P(A)=\frac{|E|}{C(13,4)} \end{align} However, I can't seem to figure out $|E|$ as I don't know how to account for all cases... I am guessing since there are 4 variables: $\{w,x,y,z\}$ and we assume $w$ is already chosen from the following: $\{0,1,2\}$ (since $w\leq 2$) this leaves us with 3 variables left to determine. The number of outcomes for this would look like: $$ \begin{array}{c|lcr} case & \text{Number of outcomes} \\ \hline 0+x+y+z= 10 & C(10+3-1,3) = C(12,3) \\ 1+x+y+z= 10 & C(10+3-1,3) = C(12,3)\\ 2+x+y+z= 10 & C(10+3-1,3) = C(12,3) \end{array} $$ This feels wrong.. or maybe I am overthinking it. But would the solution be: \begin{align} P(A)=\frac{3 \cdot C(12,3)}{C(13,4)} \end{align}
Put w = 0 Find solution for $x+y+z = 10$ and that is ${12\choose2}=66$ Put w = 1 Find solution for $x+y+z = 9$ and that is ${11\choose2}=55$ Put w = 2 Find solution for $x+y+z = 8$ and that is ${10\choose2}=45$ Add the above cases to a total of $166$ For all non negative w,x,y,z Find the solution for $w+x+y+z = 10$ and that is ${13\choose3} =286$ Thus the probability $=\frac{166}{286}$
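The counts are small enough to brute-force as a check (a sketch, not part of the original answer):

```python
# enumerate all non-negative integer solutions of w + x + y + z = 10
solutions = [(w, x, y, z)
             for w in range(11) for x in range(11)
             for y in range(11) for z in range(11)
             if w + x + y + z == 10]
favorable = [s for s in solutions if s[0] <= 2]
# 286 solutions in total, 166 of them with w <= 2, matching the counts above
```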
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find real solutions in $x$,$y$ for the system $\sqrt{x-y}+\sqrt{x+y}=a$ and $\sqrt{x^2+y^2}-\sqrt{x^2-y^2}=a^2.$ Find all real solutions in $x$ and $y$, given $a$, to the system: $$\left\{ \begin{array}{l} \sqrt{x-y}+\sqrt{x+y}=a \\ \sqrt{x^2+y^2}-\sqrt{x^2-y^2}=a^2 \\ \end{array} \right. $$ From a math olympiad. Solutions presented: $(x,y)=(0.625 a^2,0.612372 a^2)$ and $(x,y)=(0.625 a^2,-0.612372 a^2)$. I tried first to make the substitution $u=x+y$ and $v=x-y$, noticing that $x^2+y^2=0.5((x+y)^2+(x-y)^2)$ but could not go far using that route. Then I moved to squaring both equations, hoping to get a solution, but without success. Hints and answers are appreciated. Sorry if this is a duplicate.
Hint: Let $u=\sqrt{x-y},v=\sqrt{x+y}$. The system now reads $$u+v=a,\\\sqrt{\frac{u^4+v^4}2}-uv=a^2$$ Raising the first equation to the fourth power, $$a^4=u^4+v^4+4uv(u^2+v^2)+6u^2v^2.$$ Then using $u^4+v^4=2(a^2+uv)^2$ and $u^2+v^2=a^2-2uv$, you get an equation in $uv$, which simplifies: $$a^4=2(a^2+uv)^2+4uv(a^2-2uv)+6u^2v^2.$$ $uv=-\dfrac{a^2}8$. When $uv$ is known, $u^2+v^2=a^2-2uv$ gives you $2x$, and $x^2-u^2v^2$ gives you $y^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Divergence of integral I found on the web the following problem: Let $f,g$ be continuous non-negative decreasing functions on $\mathbb R_{\ge 0}$ such that $$\int_0^\infty f(x)\,dx\to\infty, \int_0^\infty g(x)\,dx\to\infty$$. Define the function $h$ by $h(x)=\min \{f(x),g(x)\},\forall x\in\mathbb R_{\ge 0}$. Prove or disprove that $$\int_0^\infty h(x)\,dx$$ also diverges. I bet the statement is true, so I attempt to prove it. Firstly, since $h$ is also continuous, non-negative and decreasing, we know that the statement is equivalent to say Let $f,g$ be continuous non-negative decreasing functions defined on $\mathbb N$, show that $$(\sum_{k=0}^\infty f(k)\to\infty\land \sum_{k=0}^\infty g(x)\to\infty )\implies \sum_{k=0}^\infty \min\{f(k),g(k)\}\to\infty $$ Now, if one has assumed the limits $$\lim_{n\to\infty} f(n)/\min\{f(n),g(n)\}, \lim_{n\to\infty} \min\{f(n),g(n)\}/g(n), \lim_{n\to\infty} f(n)/g(n)$$ all exist, I think I could give a proof. Let $$\lim_{n\to\infty} f(n)/\min\{f(n),g(n)\}=L_1, \lim_{n\to\infty} \min\{f(n),g(n)\}/g(n)=L_2, \lim_{n\to\infty} f(n)/g(n)=L$$. We know that any of $L_1,L_2, L$ is nonnegative. Since $L_1L_2=L$, if $L>0$, then we are done by the limit comparison test. If $L=0$, then in the long run $\min\{f(x),g(x)\}\sim f(x)$, we are done as well. However, I could hardly think of an approach if the assumption is not made. Any help would be appreciated. Thanks in advance.
Here is my attempt at a proof (it's only a rough idea). Let us define $$M(x)=\max(f,g)$$ and $$I_1(x)=\frac{M(x)+h(x)}{2}$$ thus by comparison the integral of $I_1(x)$ diverges. Then define $$I_k(x)=\frac{I_{k-1}(x)+h(x)}{2}$$ thus by induction the integral of $I_k(x)$ diverges. As $$I_k(x) \rightarrow h(x),$$ the integral of $h(x)$ also diverges $\square$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If a sequence of bounded operator converges pointwise, then it is bounded in norm $\newcommand{\vertiii}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}}$ Let $\left(V, \|\cdot\|_V \right)$ be a Banach space and $\left(W, \|\cdot\|_W \right)$ a normed vector space. Show that if a sequence $(T_n)_n\subset \mathcal{B}(V,W)$ converges pointwise then $\sup \vertiii{T_n}<\infty$. So we know that $\sup\limits_{x\in V} \frac{\|T_n x\|}{\|x\|}=\vertiii{T_n}$, thus $$\lim_{n\to\infty}\sup\limits_{x\in V} \frac{\|T_n x\|}{\|x\|}=\sup\limits_{x\in V} \frac{\|T x\|}{\|x\|}=\vertiii{T}<\infty$$ I'm wondering if I didn't miss something, since this seems a bit too simple. Please let me know. Update (version 3): Since $V$ is a complete metric space, it is Banach, and $T$ is defined on $V$. Since $\{(T_n(x))\}_n$ converges pointwise, it is bounded, so that there exists a function $C:V\to \mathbb{R}$ such that $$\sup_{n\in \mathbb{N}} \|T_n(x)\|<C(x)$$ for each $x\in V$. By the Uniform Boundedness Principle, we deduce that there exists $M>0$ such that $$\sup_{n\in\mathbb{N}} \vertiii{T_n} \le M <\infty.$$
The last update contains a correct solution to the exercise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2542884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
h-vector of an unmixed ideal We know that: * *if a ring is Cohen-Macaulay, then it is unmixed. But the converse is not true. *if a ring is Cohen-Macaulay, then its $h$-vector is positive. The converse is not true. Then I expect to find unmixed ideals that have positive $h$-vector, but they are not CM. Now the question is: Is it possible to find an example of a not unmixed ideal which has positive $h$-vector?
I solved my question starting from the simplicial complex $$\Delta = \{\{x_1,x_2\},\{x_1,x_3\},\{x_1,x_4\},\{x_2,x_3\},\{x_2,x_4\},x_5\}$$ It is not unmixed but $h=(1,3,1)$, since $f=(1,5,5)$. So, the ideal that I was looking for is $$I_{\Delta} = (x_1x_2x_3, x_1x_2x_4,x_3x_4, x_1x_5,x_2x_5,x_3x_5, x_4x_5) \subseteq \mathbb{K}[x_1, \dots, x_5]$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Parametrising the surface enclosed by a parametric curve I have a curve given by $(\cos t, \sin t, \sin 2t)$ with $0 \leq t \leq 2\pi$: I need to integrate a function over any of the infinitely many surfaces to which this curve is a boundary. How would I find a parametrised form of such a surface?
You can see that $$z(t)=\sin 2t = 2\sin (t) \cos (t)=2x(t)y(t)$$ Therefore, a possible surface is $$ z=2xy, \quad x=x, \quad y=y\quad \mbox{with}\quad x^2+y^2\le 1 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\operatorname{Arsinh}(x) \ge \ln(1+x)$ for $x>-1$ Prove that $\operatorname{Arsinh}(x) \ge \ln(1+x)$ for $x>-1$. I have solved similar inequalities for other trigonometric functions, but for this one I have no idea where to start, other than the fact that the plot of the functions makes it obvious. For other examples, I was using the derivatives of various related functions and facts like "its derivative is $>0$". Some indications would be welcome.
Defining $$f(x)=\operatorname{arsinh}(x)-\ln(x+1),$$ then we get by differentiating with respect to $x$: $$f'(x)=\frac{1}{\sqrt{1+x^2}}-\frac{1}{x+1}.$$
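Completing the sketch: $f'(x)=\frac{1}{\sqrt{1+x^2}}-\frac{1}{x+1}$ is $\le 0$ on $(-1,0]$ and $\ge 0$ on $[0,\infty)$ (compare $(1+x)^2$ with $1+x^2$), so $f$ attains its global minimum $f(0)=0$ and hence $f\ge 0$. A quick numerical check of these facts (a sketch in Python; the sample points are arbitrary choices):

```python
import math

def f(x):
    # f(x) = arsinh(x) - ln(1 + x); the claim is f(x) >= 0 for x > -1
    return math.asinh(x) - math.log(1 + x)

def f_prime(x):
    return 1 / math.sqrt(1 + x * x) - 1 / (x + 1)

# sample points approaching -1 from the right and going far to the right
xs = [-0.99, -0.5, -0.1, 0.0, 0.1, 1.0, 10.0, 1000.0]
vals = [f(x) for x in xs]

# f'(x) <= 0 on (-1, 0] and f'(x) >= 0 on [0, infinity),
# so f has a global minimum f(0) = 0
signs_ok = all(f_prime(x) <= 1e-15 for x in xs if x < 0) and \
           all(f_prime(x) >= -1e-15 for x in xs if x > 0)
```
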
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
What is the difference between $0$ and $\vec{0}$? If we take the approach underlined by Peano axioms, we get: $$a + (-a) = 0.$$ Likewise, we can get the same thing in Euclidean plane: $$|AB| + (-|AB|) = 0,$$ where $A$ and $B$ are points, and $|AB|$ is the length of the line segment $AB$ (thus the "operation" $AB - AB$ has no meaning, as we have not ascribed any attributes to the line segment). On the other hand, $$\vec{AB} - \vec{AB} = \vec{AB} + \vec{BA} = \vec{0}.$$ Well, what is the difference between $0$ and $\vec 0$ ? I think that a concrete definition of both should clear my confusion up. I understand that $\vec 0$ cannot be used in the same context as $0$ (because the former arises from the definition of vector), but I still don't have in-depth understanding of the concept of $\vec 0$. In Euclidean plane both $0$ and $\vec 0$ are points, right? So what's the difference then?
$\vec{0}$ is a vector while $0$ is a number
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that a polynomial function has a minimum $p:\mathbb{R}\rightarrow\mathbb{R}$ is a polynomial function, which only has positive values. Show that p has a minimum. This is what I tried to do: I want to proof this statement using the extreme value theorem. To be able to use it, the function has to be continuous and bounded. $p(x) = a_nx^n+a_{n-1}x^{n-1}+...+a_1x+a_0 > 0$ Since the values have to be positive $p(x)>0$ $a_n\neq 0$ $\lim_{x\to +\infty}x^n= \infty $ Since the limit to infinity is not bounded, how do I change my interval so it is bounded and I can use the extreme value theorem? And how do I make sure that the values are positive?
Since $\lim_{x\to\pm\infty}p(x)=+\infty$ (it follows from the assumption that the polynomial has only positive values), there are $a<0$ and $b>0$ such that $p(x)>p(0)$ on $(-\infty,a)$ and $p(x)>p(0)$ on $(b,+\infty)$. Consider $p$ on $[a,b]$. Since all values of $p$ outside this interval are greater than some values of $p$ (e.g. $p(0)$) and $p$ is continuous and bounded on $[a,b]$, the minimum has to be taken on $[a,b]$.
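The strategy can be illustrated numerically with a concrete positive polynomial (my own example, $p(x)=x^4-2x^2+2$, not from the question): outside $[-2,2]$ one has $p(x)>p(0)=2$, so the global minimum is attained on that compact interval, where a grid search finds it. A sketch in Python:

```python
def p(x):
    # a sample polynomial taking only positive values (my own choice)
    return x**4 - 2 * x**2 + 2

# since p(x) -> +infinity as x -> +/-infinity, there are a < 0 < b with
# p(x) > p(0) outside [a, b]; here p(x) > p(0) = 2 whenever |x| > sqrt(2)
a, b = -2.0, 2.0

# the minimum over all of R is therefore attained on the compact [a, b],
# where the extreme value theorem applies; approximate it by a grid search
n = 200_000
m = min(p(a + (b - a) * k / n) for k in range(n + 1))
```

The true minimum is $p(\pm 1)=1$, which the grid search recovers.
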
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Limit of multinomial distributions Let $(X_1, X_2, ..., X_n)$ follow a multinomial distribution with parameters $n$ and $\mathbf{p}=(p_1, p_2,..., p_n) = (p+ \frac{1-p}{n}, \frac{1-p}{n},...,\frac{1-p}{n})$. I am trying to prove that $\mathbb{P}(\max_k X_k \neq X_1) \to 0$ as $n\to \infty$. My idea is to first use the sub-additivity property to write $\mathbb{P}(\max_k X_k \neq X_1) \leq \mathbb{P} (\cup_{k=2}^n \{ X_k \geq X_1\}) \leq \sum_{k=2}^n \mathbb{P}(X_k \geq X_1)$ and then prove that for a fixed $k\ne 1$, the quantities $ \mathbb{P}(X_k \geq X_1) = O(1/n^2)$, say. I strongly believe that this is the case as $\mathbf{E}[X_1] \sim np$ and $\mathbf{E}[X_k] = 1-p$ for $k\ne 1$, but I seem to be caught in some nasty loop trying to prove this.
I think Hoeffding's inequality is enough to show $P(X_k \ge X_1)$ is small. Let $Z_1,\ldots,Z_n$ be i.i.d., each taking values $1$, $0$, and $-1$ with respective probabilities $\frac{1}{n}(1-p)$, $\frac{n-2}{n}(1-p)$, and $p+\frac{1}{n}(1-p)$. Then $$X_k - X_1 \overset{d}{=} Z_1 + \cdots + Z_n.$$ We have $E[Z_i]=-p$ for each $i$, so Hoeffding's inequality (each $Z_i$ has range of length $2$) implies $$P(X_k \ge X_1) = P(Z_1+\cdots+Z_n \ge 0) \le \exp\left(-\frac{2(np)^2}{4n}\right) = \exp(-p^2 n/2).$$ Plugging this into your union bound should give you what you need.
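The bound can be checked against a quick simulation (a sketch in Python; the parameters $n=400$, $p=0.5$ and the trial count are my own choices). Drawing each ball into category $1$ with probability $p$ and otherwise uniformly over all $n$ categories reproduces the stated cell probabilities $p+\frac{1-p}{n}$ and $\frac{1-p}{n}$:

```python
import math
import random

random.seed(0)

def trial(n, p):
    # one multinomial sample with cell probabilities
    # p + (1-p)/n for category 1 and (1-p)/n for every other category
    counts = [0] * (n + 1)          # 1-based categories
    for _ in range(n):
        if random.random() < p:
            counts[1] += 1          # the "boosted" category
        else:
            counts[random.randint(1, n)] += 1
    # event {max_k X_k != X_1}, i.e. some other cell strictly beats cell 1
    return any(c > counts[1] for c in counts[2:])

n, p, trials = 400, 0.5, 500
freq = sum(trial(n, p) for _ in range(trials)) / trials

# union bound combined with a Hoeffding-type tail: for some constant c > 0,
# P(max != X_1) <= (n - 1) * exp(-c * p**2 * n); c = 1/8 is conservative
bound = (n - 1) * math.exp(-p * p * n / 8)
```

With these parameters both the empirical frequency and the bound are essentially zero.
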
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Mean value theorem - Showing $\sqrt{1+x} < 1+\frac{x}{2}$ for $x>0$ So as the title states I have to show $\sqrt{1+x} < 1+\frac{x}{2}$ for $x>0$. This is an example from the book which has to be explained for me; I'm having a hard time understanding the proof. I do however understand the concept of the MVT. The rest of the solution looks like the following: If $x>0$, apply the Mean-Value Theorem to $f(x)= \sqrt{1+x}$ on the interval $[0,x]$. There exists $c\in (0,x)$ such that $$\frac{\sqrt{1+x}-1}{x}=\frac{f(x)-f(0)}{x-0}=f'(c)=\frac{1}{2\sqrt{1+c}}<\frac{1}{2} $$ The last inequality holds because $c>0$. Multiplying by the positive number $x$ and transposing the $-1$ gives $\sqrt{1+x} <1+\frac{x}{2}$. So I am not sure why he (the author) chose $\frac{1}{2}$; it seems a little arbitrary to me. My guess is that you're allowed to pick a number for the derivative of $c$ which suits the cause/solution best, as long as it's $0<c<x$. I'm not sure though. All help would be greatly appreciated!
$\frac{1}{2}$ is not a random number, it comes from differentiating the square root: $\sqrt{x}\,'=\frac{1}{2\sqrt{x}}$ or $\sqrt{x+1}\,'=\frac{1}{2\sqrt{x+1}}$, the same thing. Next he noticed that $\frac{1}{\sqrt{1+c}}<\frac{1}{\sqrt{1+0}}=1$ for all $c>0$, which is quite obvious after simple transformation.
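Here the mean-value point $c$ can even be computed explicitly, which makes the argument concrete: solving $\frac{1}{2\sqrt{1+c}} = \frac{\sqrt{1+x}-1}{x}$ for $c$ gives $c = \frac{x + 2\sqrt{1+x} - 2}{4}$, and one can check this lies in $(0,x)$. A quick numerical check (a sketch in Python; the sample values of $x$ are arbitrary):

```python
import math

def c_of(x):
    # the mean-value point for f(t) = sqrt(1 + t) on [0, x],
    # obtained by solving f'(c) = (f(x) - f(0)) / x for c
    return (x + 2 * math.sqrt(1 + x) - 2) / 4

checks = []
for x in [0.01, 0.5, 1.0, 10.0, 1e6]:
    c = c_of(x)
    lhs = (math.sqrt(1 + x) - 1) / x      # difference quotient
    rhs = 1 / (2 * math.sqrt(1 + c))      # f'(c)
    checks.append(0 < c < x
                  and math.isclose(lhs, rhs, rel_tol=1e-9)
                  and math.sqrt(1 + x) < 1 + x / 2)
```
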
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Evaluating the integral $\int_0^{\infty} \frac{\sin(x)}{\sinh(x)}\,dx$ I was trying to evaluate the following integral, $$I=\int_\limits {-\infty}^{\infty} \dfrac{\sin(x)}{\sinh(x)}\,dx$$ but had no success. I first expanded the the hyperbolic sine: $$I=2\int_\limits {-\infty}^{\infty} \dfrac{\sin(x)}{e^{x}-e^{-x}}\,dx=2\Im \int_\limits {-\infty}^{\infty} \dfrac{e^{ix}}{e^{x}-e^{-x}}\,dx$$ I then substituted $u=e^x$, $$I=2\Im\int_\limits {0}^{\infty} \dfrac{u^i}{u^2-1}\,du$$ Now, I'm not really sure what to do. Also, after exchanging the $\Im$ with the integral seemed to create a non-integrable singularity at $u=1$. When can you not do that?
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\bbox[10px,#ffd]{% 2\int_{0}^{\infty}{\sin\pars{x} \over \sinh\pars{x}}\,\dd x} = 2\,\Im\int_{0}^{\infty}{\expo{\ic x} - 1 \over \pars{\expo{x} - \expo{-x}}/2}\,\dd x \\[5mm] = &\ 4\,\Im\int_{0}^{\infty}{\expo{-\pars{1 - \ic}x} - \expo{-x} \over 1 - \expo{-2x}}\,\dd x \,\,\,\stackrel{\large\expo{-2x}\ =\ t}{\large =}\,\,\, 4\,\Im\int_{1}^{0}{t^{1/2 - \ic/2} - t^{1/2} \over 1 - t}\, \pars{-\,{\dd t \over 2t}} \\[5mm] = &\ 2\,\Im\bracks{\int_{0}^{1}{1 - t^{-1/2} \over 1 - t}\,\dd t - \int_{0}^{1}{1 - t^{-1/2 - \ic/2} \over 1 - t}\,\dd t} \\[5mm] = &\ 2\,\Im\bracks{\Psi\pars{1 \over 2} - \Psi\pars{{1 \over 2} - {\ic \over 2}}} = -2\,\Im\Psi\pars{{1 \over 2} - {\ic \over 2}} \\[5mm] = &\ -2\,{\Psi\pars{1/2 - \ic/2} - \Psi\pars{1/2 + \ic/2} \over 2\ic} = \ic\braces{\pi\cot\pars{\pi\bracks{{1 \over 2} + {\ic \over 2}}}} \\[5mm] = &\ -\ic\pi\tan\pars{\pi\ic \over 2} = \bbx{\pi\tanh\pars{\pi \over 2}} \end{align}
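The closed form can be sanity-checked numerically. A sketch in Python, using composite Simpson quadrature (my choice) over $[0,40]$; the integrand decays like $2e^{-x}$, so the truncated tail is negligible, and the limit value $1$ is used at $x=0$:

```python
import math

def f(x):
    # integrand sin(x)/sinh(x), extended by its limit 1 at x = 0
    return 1.0 if x == 0.0 else math.sin(x) / math.sinh(x)

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

half = simpson(f, 0.0, 40.0, 4000)   # integral over [0, infinity), truncated
full = 2 * half                      # the integrand is even
closed_form = math.pi * math.tanh(math.pi / 2)
```
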
{ "language": "en", "url": "https://math.stackexchange.com/questions/2543862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Continuity of function defined on $C_c(\mathbb{R})$ Let $X=C_c(\mathbb{R})$, the space of continuous functions with compact support, normed with the sup norm. Define $ f: (X, \vert \vert.\vert \vert_{\infty} )\rightarrow \mathbb{R}$ by $ f(x)= \int_{- \infty}^{\infty} x(t)\,dt \>\>\> \forall x \in X$. Then $f$ is continuous (T/F). I think the function is not continuous, but I can't construct a counterexample.
Take $x_n \in C_c(\mathbb{R})$ such that $$x_n(t) = \begin{cases} \frac{1}{n} & t \in [-n,n] \\ \frac{1}{n}(n+1-t) &t \in (n, n+1] \\ \frac{1}{n}(t+n+1) & t \in [-n-1, -n) \\ 0 & \text{otherwise}.\end{cases}$$ Then $x_n \to 0$ in $\| \cdot \|_\infty$ (that is, $x_n$ tends to $0$ uniformly), but $$f(x_n) >\frac{1}{n}\cdot 2n = 2$$ for all $n$. Thus $f$ is not sequentially continuous (we have a sequence $x_n \to 0$ such that $f(x_n) \not\to f(0)$), which is equivalent to continuity for metric spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating $\int_0^{\infty} \frac{\tan^{-1} x}{x(1+x^2)} dx$ The question is to evaluate $$\int_0^{\infty} \frac{\tan^{-1} x}{x(1+x^2)} dx$$ I used the substitution $x=\tan a$; then the given integral becomes $\int_0^{\pi/2} \frac{\tan^{-1}(\tan a)}{\tan a} da$. Now $\tan^{-1} (\tan a)=a$ for all $a \in [0,\pi /2)$, so the integrand becomes $a/ \tan a$. I am facing trouble evaluating this. I tried using $a \to \pi/2 -a$ but couldn't simplify. Any ideas?
Proceeding by your method: $\int_0^{\pi/2}\cfrac{a}{\tan(a)}\,da=\int_0^{\pi/2}a \cot(a)\,da$ [Now use Integration by Parts] $= [a\ln(\sin(a))]_0^{\pi/2} - \int_0^{\pi/2}\ln\sin(a)\,da$ $= 0 - \left(-\cfrac{\pi\ln(2)}{2}\right)$ $= \cfrac{\pi\ln(2)}{2}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 2 }
Convergence of subsequences in a compact set of $\mathbb{R}^n$ For any sequence in a compact set $K$ of $\mathbb{R}^n$, it is well known that there is a subsequence that is convergent to a point in $K$. Is this true that for any sequence in a compact set $K$ of $\mathbb{R}^n$, there is a partition of this sequence into several subsequences that every subsequence is convergent to a point in $K$. Intuitively, it is correct to me. Any constructive proof?
Denote the original sequence by $\{a_n\}$ and call its convergent sub-sequence $\{a_{n_0}\}$. Construct $\{a_{n_1}\}$ by first removing the terms of $\{a_{n_0}\}$ from $\{a_n\}$ and then finding a convergent sub-sequence of what remains. Proceeding in this way, at each step you construct $\{a_{n_k}\}$ by first removing the terms of $\{a_{n_{k-1}}\}$ from what remained at the previous stage and then finding a convergent sub-sequence of the rest, which is possible because of compactness. Then the $\{a_{n_k}\}$'s form a partition of $\{a_n\}$, not necessarily a finite partition, such that each $\{a_{n_k}\}$ is a convergent sequence. Q.E.D. Addendum: One might wonder if one could always find a finite partitioning of the original sequence. To see that a finite partitioning isn't always enough, consider this example: take infinitely many sequences $b_i=\{ a_{i,n} \}_{n\in \mathbb{N}}$ such that $\lim_{n\to \infty}b_i=l_i$ where the $l_i$'s are all different. Now write down each sequence $b_i$ in a row to form an infinite square. Create a new sequence $\{c_n\}$ by enumerating this square along its diagonals. The sequence obtained in this way will have no finite partitioning into convergent subsequences, since any finite partition would leave some subsequence meeting infinitely many of the distinct limits $l_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$E$ is nowhere dense Let $E$ be the set of all $x\in [0.4,0.777\dots]$ whose decimal expansion contains only digits $4$ and $7$. How can we show that $E$ is nowhere dense in $[0.4,0.777\dots]$? That is, there is no interval $(a,b)$ in $E$. My Try: $E$ does not contain intervals of the form $(0.4\dots41,0.4\dots3)$ and $(0.4\dots48,0.4\dots444)$ etc. But I can't find some 'form' of intervals that are not in $E$, such that every interval that lies in $[0.4,0.777\dots]$ lies in one of those intervals.
Let's say there is an interval $(a,b)$ contained in $E$. Then all numbers between $a$ and $b$ must be in your set. But every interval of positive length contains a number whose decimal expansion has a digit other than $4$ and $7$: take the expansion of any point of the interval, go far enough to the right that changing a digit there keeps you inside $(a,b)$, and put for example a $5$ there. That number is between $a$ and $b$ but is not in your set. Therefore there can't be any interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Does pointwise convergence of a sequence of functions in each closed subinterval of an open interval imply pointwise convergence in the open interval? If a sequence {$f_n$} be uniformly convergent in every closed subinterval $[a+ε,b-δ]⊂(a,b)$, then it isn't necessarily uniformly convergent in $(a,b)$. But what about pointwise convergence? That is, if a sequence {$f_n$} be pointwise convergent in every closed subinterval $[a+ε,b-δ]⊂(a,b)$, is it pointwise convergent in $(a,b)$? {$f_n$} may or may not be uniformly convergent.
Of course it is. Pick a point $x \in (a,b)$. The interval being open gives you an $ \epsilon>0 $ such that $(x-\epsilon, x+\epsilon) \subset (a,b)$, and this open interval contains the closed interval $[x-\frac{\epsilon}{2},x+\frac{\epsilon}{2}]$. Since $\{f_n\}$ converges pointwise on that closed subinterval and $x$ belongs to it, $\{f_n(x)\}$ converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Probability that everyone shows up for flight? The probability of a flight reservation being a no-show is unknown but after observing $10000$ flight reservations we found that $95\%$ of those people showed up. If we consider a new sample of $100$ flight reservations, what is the chance that each of the people shows up? Can we find a useful upper bound to it? I assume that all show-ups are i.i.d. If we write $p$ for the probability of a passenger showing up for her flight, then the probability that everyone shows up is simply $p^{100}$, but of course $p$ is not known. My idea was to use the information that we had $95\%$ show-ups in the sample with $10000$ reservations to bound likely values of $p$. How can we proceed?
A Bayesian approach could be to take a Beta prior distribution for $p$ the probability of somebody turning up, with a density proportional to $p^{\alpha-1}(1-p)^{\beta-1}$. Common choices are $\alpha=\beta=1$ (a uniform prior), $\alpha=\beta=\frac12$ (a Jeffreys prior) or $\alpha=\beta=0$ (an improper prior), but with a large number of observations as here, it may make little difference Assuming individual behaviour is i.i.d., with your observation of $9500$ turning up and $500$ not turning up, you get a posterior distribution for $p$ which is also a Beta density, now proportional to $p^{9500+\alpha-1}(1-p)^{500+\beta-1}$ Given $p$, the probability that the next $100$ all turn up is $p^{100}$. So combining this with the posterior distribution for $p$ gives a probability of $$\dfrac{\int_0^1 p^{9600+\alpha-1}(1-p)^{500+\beta-1}\,dp}{\int_0^1 p^{9500+\alpha-1}(1-p)^{500+\beta-1}\,dp}=\dfrac{B(9600+\alpha, 500+\beta)}{B(9500+\alpha, 500+\beta)}$$ where the Beta function $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is basically the reciprocal of a generalised binomial coefficient; if you are using a computer to calculate this, you might want to use logarithms to avoid underflow * *With $\alpha=\beta=1$ this would give about $0.006019$ *With $\alpha=\beta=\frac12$ this would give about $0.006047$ *With $\alpha=\beta=0$ this would give about $0.006076$ *By comparison, a simple $0.95^{100}$ would give about $0.005921$ So they all give about $0.006$ I suspect that the most dubious assumption here is the assumption of i.i.d. individual behaviour: if that is wrong and in fact people tend to turn up or not turn up together then the probability of all $100$ turning up could be much higher
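The Beta-function ratio is easy to evaluate with log-Gamma functions, which avoids underflow as suggested above. A sketch in Python reproducing the quoted numbers (the improper prior $\alpha=\beta=0$ is skipped since $\log\Gamma(0)$ is infinite):

```python
from math import lgamma, exp

def log_beta(x, y):
    # log B(x, y) = log Gamma(x) + log Gamma(y) - log Gamma(x + y)
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def prob_all_show(alpha, beta, shows=9500, noshows=500, new=100):
    # posterior predictive probability that the next `new` reservations
    # all show up, under a Beta(alpha, beta) prior
    return exp(log_beta(shows + new + alpha, noshows + beta)
               - log_beta(shows + alpha, noshows + beta))

uniform = prob_all_show(1.0, 1.0)     # uniform prior
jeffreys = prob_all_show(0.5, 0.5)    # Jeffreys prior
plug_in = 0.95 ** 100                 # simple plug-in estimate
```
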
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Will terms in this sequence always have digital root 1? A recreational math problem, dubbed "Insert and Add", asks: What is the least integer m that requires no less than n insertions of plus signs so that, after performing the addition(s), we arrive at a single digit? (See the last page here: http://orion.math.iastate.edu/butler/papers/16_03_insert_and_add.pdf) It is similar to finding the additive persistence of n, but instead of merely counting the number of digital sums required to arrive at a single digit it counts the minimum number of plus signs inserted during that process. 10 is the smallest number that requires one plus sign: 1+0=1. 19 is the smallest to require two: 1+9=10 -> 1+0=1. 118 is the smallest to require three: 1+1+8=10 -> 1+0=1; alternatively we can try 1+18=19 -> 1+9=10 -> 1+0=1; and finally we can try 11+8=19 -> 1+9=10 -> 1+0=1. 3187, and 3014173 are the next two numbers in the sequence. Now observe that all of these numbers (10, 19, 118, 3187, 3014173) have a digital root of 1. Is it obvious that all future terms in this sequence will have digital root 1? The sequence is https://oeis.org/A293929.
I am terrible at proofs, so I won't be surprised when someone points out a glaring hole in this, but what about: Assume the terms $a(1)$ to $a(n)$ all have digital root $1$, but $a(n + 1) = x$ doesn't. Increment $a(n)$ by one until we reach $x$. Insert one plus sign into $x$ in the optimal way that guarantees the result of the addition, $y$, requires exactly $n$ more insertions of a plus sign to arrive at a single digit. Because $y$ requires $n$ insertions it cannot be less than $a(n)$, otherwise we would have found $y$ before $a(n)$. Because $x$ has digital root greater than $1$, $y$ cannot equal $a(n)$. So now $y$ must be in the range $a(n) < y < x$, but we already checked these before arriving at $x$, so no such number $y$ can exist, therefore no such $x$ can exist. Clearly, $a(n + 1)$ cannot have digital root $0$. We have shown that no $a(n + 1) = x$ with digital root $0$, or $2$ through $9$ can exist, therefore $a(n + 1)$ must have digital root $1$.
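The sequence values quoted in the question can be confirmed by brute force. A sketch in Python: the minimal total number of plus-sign insertions needed to reduce $n$ to a single digit satisfies a simple recursion over all ways of splitting the digit string (I read a part with leading zeros, e.g. "07", as the ordinary number 7; this convention does not affect the values below):

```python
from functools import lru_cache

def all_splits(s):
    # every way of cutting the digit string s into one or more parts
    if not s:
        yield ()
        return
    for i in range(1, len(s) + 1):
        for rest in all_splits(s[i:]):
            yield (s[:i],) + rest

@lru_cache(maxsize=None)
def cost(n):
    # minimal total number of '+' insertions needed to reduce n to one digit
    if n < 10:
        return 0
    s = str(n)
    return min(len(parts) - 1 + cost(sum(int(p) for p in parts))
               for parts in all_splits(s) if len(parts) >= 2)

def first_with_cost(k):
    n = 1
    while cost(n) != k:
        n += 1
    return n

firsts = [first_with_cost(k) for k in range(4)]
```

The digital-root observation corresponds to each of these being congruent to $1$ modulo $9$.
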
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$Ass(M) = Ass(N) \cup Ass(M/N)?$ Suppose $M$ is an $R$ module and $N$ is a submodule of $M.$ I am trying to prove the following: $Ass(M) = Ass(N) \cup Ass(M/N)$. This seemed intuitively true but the proof proved to be difficult. If I have a prime ideal $P \in Ass(M)$ then $P = Ann(m)$ for some $m \in M.$ If $m \in N$ then $P \in Ass(N) \cup Ass(M/N).$ But suppose it isn't. We already have $P \subset Ann(m + N)$ so now I want to show that $Ann(m + N) \subset P.$ How do I go about doing this? If I have an element $x \in R$ such that $xm \in N$ I want to show that $xm = 0.$ If $R$ were a field this would be easy as I could simply multiply the inverse of $x$ and get that $m \in N$ which contradicts the assumption that $m \notin N.$ Should I continue in this approach?
Disclaimer: I'm not quite good enough to give a nice hint, or point you in the right direction. Here is the argument I am familiar with, but read on only if you want a spoiler. Claim: $Ass(M) \subset Ass(N) \cup Ass(M/N)$. Let $P \in Ass(M)$. Note that $P$ is associated to $M$ if and only if there is an injection $f:R/P \hookrightarrow M$. Consider the image $\mathrm{Im}(f) \subset M$. First assume that $\mathrm{Im}(f)$ intersects $N$ trivially, i.e. $\mathrm{Im}(f) \cap N = 0$. Then $\pi:M \to M/N$ restricts to a monomorphism on $\mathrm{Im}(f)$, which we consider as $\pi \circ f:R/P \to M/N$; this shows that $P \in Ass(M/N)$. Now, assume that $\mathrm{Im}(f)$ intersects $N$ nontrivially, with $0 \neq x \in \mathrm{Im}(f) \cap N$. Then $\mathrm{ann}(x)=P$, since the image of $f$ is isomorphic to $R/P$ and $P$ is prime, so $P$ is an associated prime of $N$, being the annihilator of the element $x \in N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2544901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The "constants" of a structure (model theory) A structure is composed of the domain, constants, relation symbols, and function symbols. I understand all of the ingredients of a structure immediately except for the constants. The definition in my book (A Shorter Model Theory) is A set of elements of $A$ called constant elements, each of which is named by one or more constants. If $c$ is a constant, we write $c^A$ for the constant element named by $c$. An example of a structure with constants is $$\langle \mathbb{R}, +, -, \cdot, 0,1, \le \rangle$$ I understand that $0$ and $1$ are special because they are the identity elements for addition and multiplication, respectively, but why are they the constants, and in what sense do we need to specify them? I understand that I get a totally different structure if I where to use $<$ instead of $\le$, but I don't see what is so special about the constants. Why not have a structure of the following type: $$\langle \mathbb{R}, +, -, \cdot, \frac{1}{2},\pi, \le \rangle$$ Is this different and if so, in what sense? Viewed as a group $\mathbb{R}$ has many choices of elements but only one identity element so I can make sense of specifying the identity element when talking about a group, but in what sense are the symbols related when we write out the structure (or signature)?
One good reason to specify constants in a signature for models is that you want to make sure that these are preserved by homomorphisms. Consider, as in your example, the signature of ordered fields $\{ +, -, \cdot, 0,1, \leq \}$. A homomorphism $f:A \to B$ of structures of this signature will always satisfy $f(0_A) = 0_B$, and similar for $1$, whereas this need not be the case for the signature $\{ +, -, \cdot, \leq \}$. Moreover, you can refer to constants in formulas that are then quantifier-free. For example, consider a field $K$. Within the first signature, the formula $\varphi(x) = (x = 0)$ is quantifier-free, whereas it is not possible to express the same statement in the second signature without the use of quantifiers. This can be quite important when proving properties of certain classes of models.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Working on Homotopy Equivalent Suppose I have spaces $X_{0}, X_{1},Y_{0},Y_{1}$ and $X_{0}$ is homotopy equivalent to $Y_{0}$ and $X_{1}$ is homotopy equivalent to $Y_{1}$. I would like to show that $X_{0} \times X_{1}$ is homotopy equivalent to $Y_{0} \times Y_{1}$. How do I approach this statement? I am fresh new to this topic so any input will be greatly appreciated. Thanks!
Since $X_0 \simeq Y_0$ and $X_1 \simeq Y_1$, there are two pairs of functions $$X_0 \overset{f_0}{\underset{g_0}{\rightleftarrows}} Y_0 \quad \text{and} \quad X_1 \overset{f_1}{\underset{g_1}{\rightleftarrows}} Y_1$$ such that the relevant composites are homotopic to the relevant identities. From these functions, you can construct functions $$X_0 \times X_1 \overset{f}{\underset{g}{\rightleftarrows}} Y_0 \times Y_1$$ where $f$ and $g$ are defined by $$f(x_0,x_1) = (f_0(x_0),f_1(x_1)) \quad \text{and} \quad g(y_0,y_1)=(g_0(y_0),g_1(y_1))$$ for all $x_0,x_1,y_0,y_1$. You need to check that $g \circ f \sim \mathrm{id}_{X_0 \times X_1}$ and $f \circ g \sim \mathrm{id}_{Y_0 \times Y_1}$. In a similar way to how we constructed $f$ and $g$ from $f_0,f_1,g_0,g_1$, you can construct the two desired homotopies from the homotopies $$g_0 \circ f_0 \sim \mathrm{id}_{X_0}, \quad g_1 \circ f_1 \sim \mathrm{id}_{X_1}, \quad f_0 \circ g_0 \sim \mathrm{id}_{Y_0} \quad \text{and} \quad f_1 \circ g_1 \sim \mathrm{id}_{Y_1}$$ I'll leave you to construct these homotopies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $(C_b (0,1],||\cdot||_{\infty})$ is not separable I'm not sure what I'm supposed to do for this question. Any help would be greatly appreciated. I know that there is a similar question like this for $[0,1)$ instead, but is there any way for me to show this without having to prove the isomorphism between $[0,1)$ and $(0,1]$?
Hint: Think of triangles of height $1$ over the intervals $[1/(n+1),1/n], n = 1,2,\dots.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
"In a circle of 33, the next 10 people on my right are all liars" In the land of Truthlandia, each person is either a truth teller who always tells the truth, or a liar who always tells lies. All 33 people who gathered for a meal in Truthlandia at a round table, said: "The next 10 people on my right are all liars." How many liars were actually in attendance? I have looked around quite a lot, but the closest I could find was this. If anybody can answer that would be greatly appreciated. Thanks for any answers!
A truth-teller, by the given statement, forces the nature of the ten people to the right – a block of eleven. However, there cannot be a block of eleven liars, because the leftmost "liar" would actually be telling the truth. Thus there are three equally-spaced truth tellers and 30 liars.
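The count can be double-checked by exhaustive search (a sketch in Python). Branching seat by seat, with the observation above that a truth-teller forces the next ten seats to be liars, keeps the search space tiny; the wrap-around constraints are verified at the end:

```python
N, K = 33, 10   # 33 people; each says "the next 10 on my right are liars"

def consistent(s):
    # s[i] == 1 (truth-teller) exactly when the next K people are all liars
    return all((s[i] == 1) == all(s[(i + j) % N] == 0 for j in range(1, K + 1))
               for i in range(N))

solutions = []

def search(s, i):
    if i >= N:
        if consistent(s):
            solutions.append(tuple(s))
        return
    s[i] = 0                      # seat i is a liar
    search(s, i + 1)
    s[i] = 1                      # seat i tells the truth, which forces
    stop = min(i + K + 1, N)      # the next K seats (within range) to liars
    for j in range(i + 1, stop):
        s[j] = 0
    search(s, stop)

search([0] * N, 0)
truth_counts = {sol.count(1) for sol in solutions}
```

Every consistent seating has exactly three truth-tellers (spaced 11 apart), hence 30 liars; the 11 solutions are the rotations of that pattern.
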
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 1 }
The product of three ordered variables is $8!$ (Factorial)? The question asks for three different positive integers $a < b < c$ with $a \cdot b \cdot c = 8!$ such that the difference $c - a$ is as small as possible. I could get a lowest of 17 with 28, 32, 45, but the answer is 4. Other than trying again and again, is there a fast and accurate way to solve this?
Note that $8! = 40320$. If $a < b < c$, then $a,b,c$ must all be close to the cube root of $40320$, which is between $30$ and $40$ since $30^3 = 27000 < 40320 < 64000 = 40^3$; so we can predict the first digit easily. Therefore, the desired numbers have to be somewhere around the $30-40$ mark. Let us write down the factors of $8!$ which do occur in this range: they are $30,32,35,36,40$. Note that $8!$ is a multiple of $7$, and only one of the above is a multiple of $7$. So that means $35$ must be one of the three numbers. Now, you can easily see from the leftovers: $$ \color{orange}1 \times \color{orange}2 \times \color{orange}3 \times \color{green}{ 4}\times \color{blue}{5}\times \color{orange}6\times \color{blue}{7}\times \color{green}{8} $$ that $32 \times 35 \times 36 = 40320 = 8!$. Hence, the desired difference is $4$. Furthermore, this cannot be improved, since $32, 35, 36$ are consecutive divisors of $8!$, so no trio of factors of $8!$ can be packed more tightly than this one.
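The reasoning is easy to confirm by an exhaustive search over factor triples of $8!$ (a sketch in Python):

```python
from math import factorial

target = factorial(8)   # 40320

divisors = [d for d in range(1, target + 1) if target % d == 0]

best = None             # (c - a, (a, b, c)) over all a < b < c with abc = 8!
for a in divisors:
    for b in divisors:
        if b <= a or target % (a * b):
            continue
        c = target // (a * b)
        if c > b and (best is None or c - a < best[0]):
            best = (c - a, (a, b, c))
```
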
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Smallest possible value of expression involving greatest integer function If $a, b, c \gt 0$ then what is the smallest possible value of $$\left[\frac{a+b}{c}\right]+ \left[\frac{b+c}{a}\right] + \left[\frac{c+a}{b}\right]$$ where $[.]$ denotes greatest integer function. I tried using the AM GM inequality at first but it was not useful. I also tried adding 1 to each bracket and then subtracting 3 from overall to get the same numerator in each bracket. But this too wasn't useful. I don't have much practice of solving the question involving greatest integer function. Somebody please tell me how to deal with this question.
WLOG $a \le b \le c$. Then $[\frac {b+c}a] \ge [\frac {a+a}a] =2$ and $[\frac {a+c}b] \ge [\frac {a + b}b] \ge [\frac bb] = 1$, so you cannot get less than $3$. If $\frac {a+b}{c} < 1$ then $c > a+b$ and $\frac {c+a}b > \frac {2a + b}b= \frac {2a}b + 1$. So if $\frac {c+a}b < 2$ then $\frac {2a}b < 1$, i.e. $b > 2a$, and hence $\frac {b+ c}a > \frac {2a + a+b}a > \frac {5a}a = 5$. In other words, if $[\frac {a+b}{c}]=0$ and $[\frac {c+a}b] = 1$ then $[\frac {b+c}a] \ge 5$. So you most certainly cannot get $3$ as an answer, and the answer is $4$ or greater. Can we get four? Well, if $a = b = c$ then $\frac {a+b}c= \frac {a+c}b=\frac {b+c}a =2$. So if we can get $\frac {a+b}c$ and $\frac {a+c}b$ just under $2$ and $\frac {b+c}a$ just over $2$, that should do it. Let $a= .9,\ b = 1,\ c= 1.1$; then $[\frac {a+b}c]=[\frac {1.9}{1.1}] = 1$, but $[\frac {a+c}b]= [\frac 21]=2$ ... oops ... we'll have to shave off just a whisker. Let $a = .9,\ b=1,\ c=1.09$; then $[\frac {a+b}c]=[\frac {1.9}{1.09}]=1$ and $[\frac {a+c}b]= [\frac {1.99}1]=1$ (barely!), and $[\frac {b+c}{a}] = [\frac {2.09}{.9}] =2$. So $[\frac {a+b}c]+[\frac {a+c}b]+[\frac {b+c}{a}]=4$, and $4$ is the smallest.
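The concrete witness at the end can be verified directly (a quick numeric check, not part of the proof):

```python
from math import floor

def S(a, b, c):
    """The sum of the three floor terms from the problem."""
    return floor((a + b) / c) + floor((b + c) / a) + floor((c + a) / b)

assert S(1, 1, 1) == 6        # a = b = c gives 2 + 2 + 2
assert S(0.9, 1, 1.1) == 5    # first attempt: the middle ratio is exactly 2
assert S(0.9, 1, 1.09) == 4   # the witness that attains the minimum
```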
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does $\{\infty\}$ mean? I'm being introduced to $\sigma$-algebra's and I came across the following definition: Let $\mathcal{B}(\overline{\mathbb{R}})$ be the $\sigma$-algebra generated by the sets $\{-\infty\}$,$\{\infty\}$ and $B\in\mathcal{B}(\mathbb{R})$. This $\sigma$-algebra $\mathcal{B}(\overline{\mathbb{R}})$ will be called the Borel algebra of $\overline{\mathbb{R}}$. This definition got me confused as I've never heard of the sets $\{-\infty\}$ and $\{\infty\}$. I always thought that infinity wasn't a number. Question: What should one think of or imagine when trying to understand the meaning of the (singleton?) $\{\infty\}$? I know that something similar has been asked here, but I think my question goes a little bit further into what $\{\infty\}$ actually means. Edit: I want to give an example that explains my confusion. In school I learned that $[a,\infty] = [a,\infty)$. Thanks in advance!
The extended real line is $\Bbb R$ adjoined by the two non-number-objects $+\infty$ (or $\infty$) and $-\infty$, along with the declaration that $-\infty<r<\infty$ for any real number $r$. The text you quote simply describes the Borel sets of the extended real line, and since it turns out that these are exactly the same as adding the singletons $\{-\infty\}$ and $\{\infty\}$ to the Borel sets of the reals, this can be given as a definition instead of meddling with the topology. Note that in the extended real line, since $\infty$ is now an actual object, we have that $[a,\infty]\neq[a,\infty)$. (If all this is being confusing, think about $\Bbb R$ as $(0,1)$ and the extended real line as $[0,1]$.) So what is the meaning of $\{\infty\}$? It's a set, with a single element, and that element is $\infty$. Oh, yes, not only real numbers can be elements of sets. Any mathematical object can be an element of a set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem 9 from Herstein's book Suppose that $H$ is a subgroup of $G$ such that whenever $Ha\neq Hb$ then $aH\neq bH$. Prove that $gHg^{-1}\subset H$ for all $g\in G$. Remark: Honestly, I had some problems. Firstly, after some thought I realized that the condition on $H$ is equivalent to the following: if $aH=bH$ then $Ha=Hb$. I found a duplicate of this problem on this forum, and it states that the condition on $H$ implies that $a^{-1}b\in H$ $\Rightarrow$ $ba^{-1}\in H$. I was not able to derive it. Please help me see how to do it. Supposing we have proved the above implication: taking $x\in gHg^{-1}$ $\Rightarrow$ $x=ghg^{-1}$ for some $h\in H$ $\Rightarrow$ $g^{-1}xg=h\in H$. And taking $a=g$ and $b=xg$, it follows that $ba^{-1}=xgg^{-1}=x\in H$. Thus $gHg^{-1}\subset H$. Also I cannot understand one moment: $a$ and $b$ are certain elements. Is it normal to put $g$ and $xg$ instead of $a$ and $b$, respectively? Please explain these two questions.
Here is one part: $aH=bH \implies a^{-1}b\in H$. Indeed, $b=be \in bH=aH \implies b=ah$ for some $h \in H$, and so $a^{-1}b=h\in H$. Similarly, $a^{-1}b\in H \implies b^{-1}a\in H$. Indeed, $a^{-1}b=h \in H \implies b^{-1}a=(a^{-1}b)^{-1}=h^{-1} \in H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2545941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Define $F : \mathbb{S}^n \times I \to \overline{\mathbb{B}^{n+1}}$ by $F(x, s) = sx$, show that $F$ is injective Define $F : \mathbb{S}^n \times I \to \overline{\mathbb{B}^{n+1}}$ by $F(x, s) = sx$, show that $F$ is injective on $\overline{\mathbb{B}^{n+1}} \setminus \{0\}$ I tried to show that by contradiction but it eventually went from being a set theoretic issue to an arithmetic issue to resolve, and I'm not sure how to conclude the proof. My Attempted Proof: Suppose there exists an $\alpha \in \overline{\mathbb{B}^{n+1}} \setminus \{0\}$ for which $F(x_1, s_1) = s_1x_1 = s_2x_2 = F(x_2, s_2) = \alpha$ for $(x_1, s_1) \neq (x_2, s_2) \in \mathbb{S}^n \times (0, 1]$. Then $x_1 = (x_{1}^{1}, ..., x_{1}^{n+1})$ and $x_2 = (x_{2}^1, ..., x_{2}^{n+1})$, so $s_1x_1 =s_2x_2$ implies $$s_1x_{1}^{i} = s_2x_{2}^{i}$$ for $s_1, s_2 \in (0, 1]$ and $d(x_1, 0) = 1$ and $d(x_2, 0) = 1$. Note that the superscripts here would usually be written as subscripts, but I've put them as superscripts so the indices can be seen better. Now how can I show that for any choice of $s_1$ and $x_1$ subject to the two constraints $s_1 \in (0, 1]$ and $d(x_1, 0) = 1$ and for any choice $s_2$ and $x_2$ subject to the two constraints $s_2 \in (0, 1]$ and $d(x_2, 0) = 1$, that for each $i \in \{1, ... ,n+1\}$ we have $$s_1x_{1}^{i} = s_2x_{2}^{i} \implies s_1 = s_2 \ \text{and } x_1^i = x_2^i$$ This really seems to be an arithmetic issue, but I'm not sure how to show it rigorously. I'm guessing that it is something really trivial and something that I should know but for some reason I can't show it. If I can show the above, then I can show that $(x_1, s_1) = (x_2, s_2)$ contradicting the fact that $(x_1, s_1) \neq (x_2, s_2)$ and thus proving injectivity of $F$ on $\overline{\mathbb{B}^{n+1}} \setminus \{0\}$.
Suppose $F(x_1,s_1)=F(x_2,s_2)$, so that $s_1x_1=s_2x_2$. Taking the magnitude of both of these vectors, we get $s_1\|x_1\|=s_2\|x_2\|$, which since $x_1,x_2\in \mathbb S^n$ implies $s_1=s_2$. Therefore, $s_1x_1=s_1x_2$. Finally, since $s_1\neq0$ (which we know since $s_1x_1\in \overline{\mathbb B^{n+1}}\setminus\{0\}$), we can scale both sides by $1/s_1$ to get $x_1=x_2$, proving injectivity.
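The argument is effectively a recovery procedure: from $v = sx$ with $x$ on the unit sphere and $s > 0$, the norm of $v$ returns $s$ and normalizing returns $x$. A small numeric illustration (the sample point and scale below are made up):

```python
from math import sqrt, isclose

def recover(v):
    """Given v = s*x with ||x|| = 1 and s > 0, return (x, s)."""
    s = sqrt(sum(t * t for t in v))      # s = ||v||, since ||x|| = 1
    return tuple(t / s for t in v), s    # x = v / s

x, s = (0.6, 0.8), 0.5                   # a point on S^1 and a scale in (0, 1]
v = tuple(s * t for t in x)              # v = F(x, s)
x2, s2 = recover(v)

assert isclose(s2, s) and all(isclose(p, q) for p, q in zip(x2, x))
```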
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find the area of the region bounded by the curves $y =\sqrt x$, $y=x-6$ and the x-axis by integral with respect to x Find the area (in green) of the region bounded by the curves $f(x) =\sqrt x$, $g(x)=x-6$ and the x-axis by integral with respect to x Attempt: since $x-6 =\sqrt x$ $x=9$ $A = \displaystyle\int _a^b [f(x)-g(x)] dx$ $A = \displaystyle\int _0^9 \sqrt x -\int _0^9x-6$ $=\frac{2}{3}(9)^{\frac{3}{2}}-\left(\frac{1}{2}(9)^2-6(9)\right) $ $=31.5$ $unit^2$ I don't think this method is correct since I am not getting the same area when I solve the integral with respect to y which is also different when I take the whole area under f(x) then subtract the area of the triangle on the right (base=3, height=3) Can anyone help?
Alternatively, you can use \begin{align} f^{-1}(y)&=y^2 ,\\ g^{-1}(y)&=y+6 \end{align} and find the area as \begin{align} \int_0^3 \int_{f^{-1}(y)}^{g^{-1}(y)} \,dx \,dy &=\int_0^3 6+y-y^2\,dy \\ &= \left. 6y+\tfrac12\,y^2-\tfrac13\,y^3\right|_0^3 \\ &=\frac{27}2 . \end{align}
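As a quick cross-check (my own addition), evaluating an antiderivative of the inner result $6+y-y^2$ at the endpoints reproduces $\frac{27}2$:

```python
def F(y):
    # Antiderivative of 6 + y - y^2
    return 6 * y + y**2 / 2 - y**3 / 3

area = F(3) - F(0)
print(area)  # 13.5, i.e. 27/2
```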
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Nonhomeomorphic subsets of the plane I'm trying to find two compact, nonhomeomorphic subsets of the plane, say $X$ and $Y$, such that $X \times [0,1]$ is homeomorphic to $Y \times [0,1]$. I can not think of how a homeomorphism arises when you product with the interval.
This CW answer is supposed to kick this question from the unanswered queue. I strictly follow the approach mentioned in What to do with questions that are exact duplicates from MathOverflow? There are indeed counterexamples to which Igor Belegradek gave a reference. Here is another counterexample in the plane, perhaps the simplest there is: Let $X$ be an annulus with one arc attached to one of its boundary components and another arc attached to the other boundary component, and $Y$ - an annulus with two disjoint arcs attached to the same one of its boundary components. The above answer is written by @WlodekKuperberg MO link: Is it true that $X\times I\sim Y\times I\implies X\sim Y$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Trouble understanding null-homotopic chain maps I'm working through Weibel, and I'm at the part where he defines null-homotopic maps. He says it's essentially topological null-homotopy, but I'm having trouble reconciling that with examples. Remark: This terminology comes from topology via the following observation. A map $f$ between two topological spaces $X$ and $Y$ induces a map $f^*: S(X) \to S(Y)$ between the corresponding singular chain complexes. It turns out that if $f$ is topologically null homotopic (resp. a homotopy equivalence), then the chain map $f^*$ is null homotopic (resp. a chain homotopy equivalence), and if two maps $f$ and $g$ are topologically homotopic, then $f^*$ and $g^*$ are chain homotopic. If $X$ is the circle and $Y$ is the sphere, and $f: X \to Y$ is the inclusion of the equator, then $f$ is topologically null-homotopic. The map it induces on the complex should decompose as $f^* = sd + ds$. But I don't see how this can work in degree 0. Since $H_{-1}(X) = 0$ we must have $sd = 0$ for any choice of $s$. And similarly, $ds$ must be zero, because $H_1(Y) = 0$. But $f^*$ isn't zero, it's the identity. What am I misinterpreting about this example?
It doesn't. $f$ induces an isomorphism on $H_0$ because $X$ and $Y$ are both path-connected, and this is true more generally for any map between path-connected spaces. In topology it's actually not possible for a map to induce zero on $H_0$ since the connected components of the source have to map to the connected components of the target somehow. If you want a tighter analogy to topological null-homotopy you should be looking at 1) pointed maps and 2) reduced homology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Equivalences of formulas under an interpretation Is this proof correct? Prove $\vDash\exists x\phi\leftrightarrow\lnot\forall x\lnot\phi$ Proof: We want to see the that $\vDash\exists x\phi\leftrightarrow \vDash\lnot\forall x\lnot\phi$, so let I be an interpretation with domain |I|. Notice that $\vDash\exists x\phi\iff\vDash\phi[a/x],\exists a\in |I|\iff\not\vDash\lnot\phi[a/x],\exists a\in |I|\iff\not\vDash\forall x\lnot\phi \iff \vDash\lnot\forall x\lnot\phi.$
My main criticism with this is that I'd hate to see the use of quantifiers in the semantics. That is, rather than saying $$\vDash\exists x\phi\iff\vDash\phi[a/x],\exists a\in |I|$$ I would define the semantics as: $$\vDash\exists x\phi\iff \text{for some } a\in |I|: \ \vDash\phi[a/x]$$ That way, you have a much more clear separation between the $\exists$ as a logic symbol that is part of the logic statement you are looking at, and the 'for some' that is part of the mathematically defined semantics for that symbol. Also, this way you can add a natural step between your step 3 and step 4 (in your proof I feel that step goes a little quick): $$\vDash\exists x\phi\iff$$ $$\text{ for some } a\in |I|: \ \vDash\phi[a/x]\iff$$ $$\text{ for some } a\in |I|: \ \not\vDash\lnot\phi[a/x]\iff$$ $$\color{red}{\text{not for all } a\in |I|\vDash\lnot\phi[a/x]\iff}$$ $$\not\vDash\forall x\lnot\phi \iff$$ $$\vDash\lnot\forall x\lnot\phi.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\bigcap_{i=1}^k\ker(f_i)\subset \ker(f)\iff f\in {\rm span}(f_1,...,f_k) $ Let $V$ be a $\mathbb{K}$-vector space of finite dimension $n$, with $\{f_1,...,f_n\}$ a linearly independent subset of $V^*$ and $f\in V^*$. Prove $\bigcap_{i=1}^k\ker(f_i)\subset \ker(f)\iff f\in {\rm span}(f_1,...,f_k).$ ($\Leftarrow$) Let $f\in {\rm span}(f_1,...,f_k)$; then there exist $\alpha_1,...,\alpha_k$ such that $f=\alpha_1f_1+...+\alpha_kf_k$. Let $f\in\bigcap_{i=1}^k\ker(f_i) $; then $f\in \ker(f_1),...,f\in \ker(f_k)$. By hypothesis we have: if $f\in {\rm span}(f_1,...,f_k)$ then there exist $\alpha_1,...,\alpha_k$ such that $f=\alpha_1f_1+...+\alpha_kf_k$. Here I'm a little stuck. Can someone help me? ($\Rightarrow$) I don't know how to prove this part. Help me, if you can. I will be very grateful.
You start well, but soon make a mistake: the statement $f\in\bigcap_{i=1}^k\ker(f_i)$ is wrong, because the kernels are subspaces of $V$ and $f\in V^*$. What you have to prove is If $f\in\operatorname{Span}(f_1,\dots,f_k)$ then $\ker(f)\supset\bigcap_{i=1}^k\ker(f_i)$. Suppose $f\in\operatorname{Span}(f_1,\dots,f_k)$; then $f=\alpha_1f_1+\dots+\alpha_kf_k$ for some scalars $\alpha_1,\dots,\alpha_k$. If $x\in\bigcap_{i=1}^k\ker(f_i)$, then $f_i(x)=0$, for $i=1,\dots,k$ and therefore $$ f(x)=\alpha_1f_1(x)+\dots+\alpha_kf_k(x)=0 $$ proving that $x\in\ker(f)$. Now let's try the converse: If $\ker(f)\supset\bigcap_{i=1}^k\ker(f_i)$ then $f\in\operatorname{Span}(f_1,\dots,f_k)$. Suppose $\ker(f)\supset\bigcap_{i=1}^k\ker(f_i)$. Write $$ f=\alpha_1f_1+\dots+\alpha_kf_k+\alpha_{k+1}f_{k+1}+\dots+\alpha_nf_n $$ which is possible because $\{f_1,\dots,f_n\}$ is a basis of $V^*$. Lemma. Every basis of $V^*$ is the dual of a basis $\{e_1,\dots,e_n\}$ of $V$. Once accepted this lemma (for the proof, see https://math.stackexchange.com/a/1772676/62967), we have $\{e_1,\dots,e_n\}$ such that $$ f_i(e_j)=\begin{cases} 1 & i=j \\ 0 & i\ne j \end{cases} $$ In particular, $e_{k+1},\dots,e_n\in\bigcap_{i=1}^k\ker(f_i)$, so $$ f(e_j)=0,\quad j=k+1,\dots,n $$ and therefore, for $j=k+1,\dots,n$, \begin{align} 0=f(e_j)&=\alpha_1f_1(e_j)+\dots+\alpha_kf_k(e_j)+ \alpha_{k+1}f_{k+1}(e_j)+\dots+\alpha_jf_j(e_j)+\dots+\alpha_nf_n(e_j) \\ &=0+\dots+0+0+\dots+\alpha_j\cdot1+\dots+0 \\ &=\alpha_j \end{align} Hence $\alpha_j=0$ for $j=k+1,\dots,n$ and finally $$ f=\alpha_1f_1+\dots+\alpha_kf_k\in\operatorname{Span}(f_1,\dots,f_k) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
$\tan(x) = 3$. Find the value of $\sin(x)$ I’m trying to figure out the value for $\sin(x)$ when $\tan(x) = 3$. The textbook's answer and my answer differ and I keep getting the same answer. Could someone please tell me what I'm doing wrong? 1.) $\tan(x) = 3$, then $\frac{\sin(x)}{\cos(x)} = 3$. 2.) Then $\cos(x) = \frac{1}{3}\sin(x)$ 3.) $\sin^2 (x) + \left(\frac{1}{3}\sin(x)\right)^2 = 1$ //Pythagorean identity substitution. 4.) $\left(\frac{4}{3}\sin(x)\right)^2 = 1$ //Combining like terms 5.) $\frac{16}{9}\sin^2(x) = 1$ //Square the fraction so I can move it later. 6.) $\sin^2(x) = \frac{1}{\frac{16}{9}}$ //Divide both sides by $\frac{16}{9}$ 7.) $\sin^2(x) = \frac{1}{1} * \frac{9}{16} = \frac{9}{16}$ //divide out the fractions 8.) $\sin(x) = \pm \frac{3}{4}$ //square root both sides. So $\sin(x) = \pm \frac{3}{4}$ but the book says this is wrong. Any help is much appreciated.
If $\tan(x)=3$, then $\tan^2(x)=9$. This means that $\frac{\sin^2x}{1-\sin^2x}=9$. So, $\sin^2(x)=\frac9{10}$; in other words (at least if we're in the first quadrant), $\sin(x)=\frac3{\sqrt{10}}$. Your error lies in item 4: $\sin^2(x)+\left(\frac13\sin(x)\right)^2=\frac{10}9\sin^2(x)$, not $\left(\frac43\sin(x)\right)^2$.
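A one-line numeric check (an illustrative aside): for the first-quadrant angle whose tangent is $3$, the sine is indeed $\frac{3}{\sqrt{10}} \approx 0.9487$, not $\pm\frac34$.

```python
from math import atan, sin, sqrt, isclose

x = atan(3)                            # first-quadrant angle with tan(x) = 3
assert isclose(sin(x), 3 / sqrt(10))   # sqrt(9/10) ~ 0.9487, not 3/4
```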
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Group action on a completely regular space I am trying to prove that $X/G$ is completely regular provided that $X$ is a completely regular space and $G$ is a compact Hausdorff group acting on $X$. Let $p: X \rightarrow X/G$ be the canonical quotient map. So I choose a point $\overline{x} \in X/G$ and a closed subset $C \subseteq X/G$ with $\overline{x}\notin C$. Then I know that $D = p^{-1}(C)$ is closed in $X$ and it is disjoint from $G\cdot x = p^{-1}(\overline{x})$. Therefore, for any $g\in G$, there is a continuous function $f_g:X \rightarrow \mathbb{R}$ satisfying $f_g(gx) = 0$ and $f_g(D)= 1$. I would like to use the compactness of $G$ to be able to consider just finitely many such functions, but I don't really know what else to do. I was also trying to construct a $G$-invariant map $f:X \rightarrow \mathbb{R}$ satisfying $f(x) = 0$ and $f(D)= 1$ and thus conclude that the map factors uniquely through the quotient.
I recall the following known facts, quoted from [AT] and [Eng] (the quoted theorem excerpts were images in the original answer and are not reproduced here); I guess [165, Theorem 1.4.13] is the same as [Eng, Theorem 1.4.13]. References: [165] Ryszard Engelking, General Topology, PWN, Polish Scientific Publ., 1977. [AT] A.V. Arhangel'skii, M. Tkachenko, Topological groups and related structures, Atlantis Press, Paris; World Sci. Publ., NJ, 2008. [Chaber 1972] J. Chaber, Remarks on open-closed mappings, Fund. Math. 74 (1972), 197-208. [Eng] Ryszard Engelking, General Topology, 2nd ed., Heldermann, Berlin, 1989. [Frolík 1961] Z. Frolík, Applications of complete families of continuous functions to the theory of $Q$-spaces, Czech. Math. J. 11 (1961), 115-133. [Ponomarev 1959] V.I. Ponomarev, Open mappings of normal spaces, Dokl. Akad. Nauk SSSR 126 (1959), 716-718 (in Russian).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
There are infinitely many integers $n$ such that $\varphi(n)=n/3$ Prove or disprove the following statement: there are infinitely many integers $n$ such that $\varphi(n)=n/3$, where $\varphi(n)$ is the Euler phi-function. Could you please help me with the proof of this? I have tried many times, but I do not know how to start proving or disproving it.
Hint: Look at integers in the form $n = 2^a3^b$ where $a$ and $b$ are positive integers. Can you calculate $\varphi(n)$ for integers $n$ in that form?
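A quick computational check of the hint (a throwaway sketch using a naive totient): searching small $n$ with $3\varphi(n)=n$ turns up exactly the numbers of the form $2^a3^b$ with $a, b \ge 1$.

```python
from math import gcd

def phi(n):
    # Naive Euler totient: count 1 <= k <= n coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

hits = [n for n in range(2, 1500) if 3 * phi(n) == n]
print(hits)  # 6, 12, 18, 24, 36, 48, 54, ... -- all of the form 2^a * 3^b
```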
{ "language": "en", "url": "https://math.stackexchange.com/questions/2546976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Are irrational numbers like π relationships which are established by some rule or criteria? Rational and Irrational of Reals Irrational numbers appear to fill in the ‘gaps’ between Rational numbers on a Real number line. However, they seem to be stipulations or definitions of relationships which are established by some rule or criteria. Take π, for example: there is no precise definition of what it means. This fact becomes evident over the entire known history of this ‘number’ (relationship). It’s not the computation that is the problem; rather, the definition of its meaning. Is π in the formula $\mathrm{area}=\pi r^2$ (π as an area component) the same as π in the formula $C = 2\pi r$ (π as an arc component)? They are stipulated or defined to be the same, but are they not acting as two completely different constants of proportionality?
There are uncountably many numbers on the real line. Most of them are just there and serve as a sort of "glue" in order to make the system ${\mathbb R}$ complete. The real numbers that actually do occur in mathematics as individuals of interest are all defined by criteria, formulas, or algorithmic procedures, etc. The irrational number $\sqrt{2}$ is the unique positive real number whose square is $=2$. The number $e$ is defined, e.g., as limit $$\lim_{n\to\infty}\left(1+{1\over n}\right)^n\ .$$ For highschool purposes $\pi$ is defined as ratio circumference/diameter of arbitrary circles; but of course at a higher level we could define $\pi$ by $$\pi=4\int_0^1\sqrt{1-x^2}\>dx$$ without reference to folklore facts of elementary geometry.
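The last integral can be sanity-checked numerically (a rough midpoint-rule sketch, not a derivation):

```python
from math import sqrt, pi

n = 100_000
h = 1.0 / n
# Midpoint rule for 4 * integral over [0, 1] of sqrt(1 - x^2) dx
approx = 4 * h * sum(sqrt(1 - ((k + 0.5) * h) ** 2) for k in range(n))
assert abs(approx - pi) < 1e-4
```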
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about Bayesian probabilities. Where did I go wrong? I was solving a question for an acquaintance, about probabilties. The original question goes like this: Of all threats in an year, $12\%$ are Tier 1 and the remaining $88\%$ are Tier 2. If the probability that a reported Tier 1 threat is actually a Tier 2 is $21\%$ and the probability that a reported Tier 2 threat is actually a Tier 1 is $33\%$, then what is the probability that a reported Tier 1 is actually a Tier 1? Here are my attempts to solve; Let probability that an event is Tier 1$=P(T_1)=12\%$ Let probability that an event is Tier 2$=P(T_2)=88\%$ Let probability that a reported Tier 1 was actually Tier 2$=P(M_{1\to2})=21\%$ Let probability that a reported Tier 2 was actually Tier 1$=P(M_{2\to1})=33\%$ Let there be $10000$ events. ... Then, $\#T_1=1200$ $\#T_2=8800$ ... Let the number of reported Tier 1 events be $O$. Let the number of reported Tier 2 events be $T$. ... Then, Number of actual Tier 1 Events from number of reported Tier 1 events $={{(100-21)}\over100}\times O=O_1$ Number of actual Tier 2 Events from number of reported Tier 1 events $={{21}\over100}\times O=T_1$ Number of actual Tier 1 Events from number of reported Tier 2 events $={{33}\over100}\times T=O_2$ Number of actual Tier 2 Events from number of reported Tier 2 events $={{(100-33)}\over100}\times T=T_2$ ... Since, Number of Tier 1 threats $=1200$ $O_1+O_2=1200$ So $\left({{(100-21)}\over100}\times O\right)+\left({{33}\over100}\times T\right)=1200$ Since, Number of Tier 2 threats $=8800$ $T_1+T_2=8800$ So $\left({{21}\over100}\times O\right)+\left({{(100-33)}\over100}\times T\right)=8800$ From here I get two equations in $O$ and $T$, $\left({{79}\over100}\times O\right)+\left({{33}\over100}\times T\right)=1200$ and $\left({{21}\over100}\times O\right)+\left({{67}\over100}\times T\right)=8800$ Wolfram|Alpha reports that $O$ is negative. How is this possible? Where did I go wrong? The actual question is near the middle here. 
The answer should be explainable in a text-only environment.
Using conditional probability expressions in a Bayesian probability question will lead to much less confusion about the topic. Use $T_1,T_2$ as the mutually exclusive and exhaustive events that a threat is tier 1 or tier 2 respectively, and $R_1,R_2$ as the m.e.e. events that a threat is reported as such. Of all threats in an year, $12\%$ are Tier 1 and the remaining $88\%$ are Tier 2. If the probability that a reported Tier 1 threat is actually a Tier 2 is $21\%$ and the probability that a reported Tier 2 threat is actually a Tier 1 is $33\%$, then what is the probability that a reported Tier 1 is actually a Tier 1? So, you are told : * *$\mathsf P(T_1)=0.12$ the probability that an event is actually tier 1 *$\mathsf P(T_2)=0.88$ the probability that an event is actually tier 2 *$\mathsf P(T_2\mid R_1)=0.21$ the probability that an event is tier 2 given that it is reported as tier 1. *$\mathsf P(T_1\mid R_2)=0.33$ the probability that an event is tier 1 given that it is reported as tier 2. You seek $\mathsf P(T_1\mid R_1)$, the probability that a threat is tier 1 given that it is reported as tier 1. $\mathsf P(T_1\mid R_1)=1-\mathsf P(T_2\mid R_1)$.
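A short sketch of both computations (my own addition): Cramer's rule reproduces the negative $O$ the asker saw, which signals that the stated percentages cannot all coexist with the 12/88 split, while the asked-for probability needs only the complement rule.

```python
# Cramer's rule on the asker's 2x2 system (O and T counts out of 10000)
a11, a12, b1 = 0.79, 0.33, 1200
a21, a22, b2 = 0.21, 0.67, 8800

det = a11 * a22 - a12 * a21        # 0.46
O = (b1 * a22 - a12 * b2) / det    # about -4565: impossible as a count,
T = (a11 * b2 - b1 * a21) / det    # matching the Wolfram|Alpha result
assert O < 0 and abs(O + T - 10000) < 1e-6

# The asked-for probability needs none of that:
p = 1 - 0.21                       # P(T1 | R1) = 1 - P(T2 | R1)
assert abs(p - 0.79) < 1e-9
```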
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Convert a circle to a polygon If I have a circle with a given radius, how do I calculate a regular polygon's vertices, if I know they all lie on the circle's edge (like in the images below)?
For a polygon with $n$ vertices (an $n$-gon), each vertex will have an angle of $\frac{2\pi}{n}$ between it and the next one. All we need to do is place a first vertex, then rotate it around the origin by this amount $n$ times to get all the vertices. Place the first vertex at $p_0=(\rho, 0)^T$, where $\rho$ is the circle's radius. To rotate a point by $\theta$ around the origin we need the rotation matrix in 2 dimensions, $R(\theta)$, given by: $$ R(\theta)= \left( \begin{matrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{matrix} \right) $$ Just apply this multiple times to get each vertex: $$ p_m=R^m \left( \frac{2\pi}{n} \right) p_0 $$ So the set of vertices for a certain $n$-gon is given by: $$ \{R^m\left(\frac{2\pi}{n}\right)p_0 \mid p_0=(\rho, 0)^T, 0\le m \lt n\} $$
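Since the first vertex is $(\rho, 0)^T$, the repeated rotation collapses to the closed form $p_m = \rho\,(\cos\frac{2\pi m}{n}, \sin\frac{2\pi m}{n})^T$. A minimal sketch in Python (the function name is my own):

```python
from math import cos, sin, pi, hypot, isclose

def polygon_vertices(n, radius=1.0):
    """Vertices of a regular n-gon inscribed in a circle centred at the origin."""
    step = 2 * pi / n                 # the angle 2*pi/n between adjacent vertices
    return [(radius * cos(m * step), radius * sin(m * step)) for m in range(n)]

square = polygon_vertices(4, radius=2.0)
assert len(square) == 4
assert all(isclose(hypot(x, y), 2.0) for x, y in square)  # all on the circle
```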
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Can the sequence of derivatives $\{f^{(k)}(0)\}_{k\geq 1}$ be any sequence? Let $\{a_{k}\}_{k\geq 1}$ be any sequence of real numbers, must there exist a smooth function $f:]-\epsilon,\epsilon[\rightarrow \mathbb{R}$ (for some positive $\epsilon$) such that for every positive integer $k\geq 1$, we have $f^{(k)}(0)=a_k ?$ Thank you a lot.
Yes, this is a special case of a theorem of Borel. Given any sequence $(a_n)$ there is a smooth function on $\Bbb R$ whose Maclaurin series is $\sum a_nx^n$. I outline the proof. There is a smooth function $\phi:\Bbb R\to\Bbb R$ which equals $1$ on $[-1,1]$ and vanishes outside $[-2,2]$. Then consider $f(x)=\sum_{n=0}^\infty a_n x^n\phi(x/\varepsilon_n)$, where $\varepsilon_n$ is a sequence of positive numbers tending to zero. Then if $\varepsilon_n$ tends to zero rapidly enough, the series for $f$, and its formal derivatives of all orders, will converge uniformly, and it will follow that $f$ has the given Maclaurin series. For a more general result, see Theorem 1.2.6 in Volume 1 of Hörmander's The Analysis of Linear Partial Differential Operators.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simple inequality over positive reals: $2(x+y+z) \geq 3xyz + xy+yz+zx$ for $xyz=1$ Problem Let $x,y,z$ be real positive numbers with $xyz=1$. Prove: $$ 2(x+y+z) \geq 3xyz + xy+yz+zx$$ Note: I don't know whether the inequality is true or not. I couldn't find a proof in the place where I found it, nor a solution to it. My try: I first took $a = \frac{x}{y}$, $b = \frac{y}{z}$ and $c = \frac{z}{x}$ so that the expression becomes $2\sum_{cyc} a^2c \geq 3abc + \sum_{cyc}a^2b$. Then I tried to think of it as a function $f(a,b,c)$ and say that if we swap two variables, we would come from one to another, letting us see that the order of the variables has nothing to do with the result. But I couldn't get anything here. Any hint or solution would be really appreciated. PS: I tried using the search function, but I couldn't find anything, so sorry if this is a duplicate.
If $x=y=4$ and $z=1/16$, then $xyz=1$, and $$2(x+y+z)-(3xyz+xy+yz+zx)=-27/8<0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Deriving the density of sum of iid Uniform distributions using Laplace Transforms. In Resnick's Adventures in Stochastic Processes, there's this example where the author derives the density of $\sum_i X_i$ where $X_i$ are iid uniform (0,1). In the picture below, I don't understand 2 things: * *What they mean (and why) by «$e^{-\lambda k}\lambda^{-n}$ is the transform of $\epsilon_k * g(x)$». Here, the $*$ is the symbol for convolution. *After deducing the form of the convolution between $\epsilon_k$ and $g(x)$, , the author states the form of the desired density. How does he get that? Any help would be appreciated.
1) They mean that $$\int e^{-\lambda x} (\epsilon_k*g(x)) dx = e^{-\lambda k} \lambda^{-n}$$ This follows from the two equations (one right after "Now," and the other one right after "Furthermore,"), and the fact that products in the Laplace domain are convolutions in the "normal" domain. 2) Plug the integral I just typed into the sum (right under "(from Example 3.2.1)"): $$\sum_k \binom{n}{k} (-1)^k \int e^{-\lambda x} (\epsilon_k*g(x)) dx$$ Now swap sum and integral by Fubini's theorem (since the transform is defined, I am assuming it exists): $$= \int e^{-\lambda x} \sum_k \binom{n}{k} (-1)^k (\epsilon_k*g(x)) dx$$ $$= E[e^{-\lambda \sum X_i}]$$ $$= \int e^{-\lambda x} f(x) dx $$ where $f$ is the density (if it exists). The result now follows from the uniqueness of transforms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solve $\mathbf{\vec{a}} \, \cdot \mathbf{\vec{x_{i+1}}} \gt \mathbf{\vec{a}} \, \cdot \mathbf{\vec{x_{i}}} $ for $\mathbf{\vec{a}}$ I have a set $X = \{\mathbf{\vec{x_1}},\mathbf{\vec{x_2}},\mathbf{\vec{x_3}},\ldots,\mathbf{\vec{x_k}}\}$ of vectors, all of which are $n$-dimensional. I want to find an $n$-dimensional vector $\mathbf{\vec{a}}$, such that $\mathbf{\vec{a}} \, \cdot \mathbf{\vec{x_{i+1}}} \gt \mathbf{\vec{a}} \, \cdot \mathbf{\vec{x_{i}}} $ for $i = 1,2,3,\ldots, k$. My question is: to be able to find such a vector, does the equality $n = k$ have to hold? And, how does one go about solving this problem? Background: Basically, I was thinking of coming up with a single metric that measures athleticism in basketball. Each vector $\mathbf{\vec{x_i}}$ represents physical data about a player (i.e. wingspan, height, vertical jump, etc.). However, I do not know the weights of each of these elements (i.e. their contribution to athleticism), but I do know the ranking of the players (i.e. who is more athletic than who). So, I was wondering if there is a way to determine the weights ($\mathbf{\vec{a}}$) using the ranking.
[I write superscripts to index the vectors.] If the vectors $ \mathbf{\vec{x^{i}}} $ are linearly independent, then for $k\le n$ you can find such a vector $\mathbf{\vec{a}} $. You can do even more: you can fix the overlaps $\mathbf{\vec{a}} \, \cdot \mathbf{\vec{x^{i}}} $. So let $\mathbf{\vec{a}} \, \cdot \mathbf{\vec{x^{i}}} = c^i\ $ and fix numbers $c^k > c^{k-1} > \cdots > c^1$. Then, writing the conditions in components (subscripts), you have $$ \begin{pmatrix}x^1_1 & x^1_2 & \cdots & \cdots & x^1_n \\ x^2_1 & x^2_2 & \cdots &\cdots & x^2_n \\ & & \cdots & & \\ x^k_1 & x^k_2 & \cdots & \cdots &x^k_n \\ \end{pmatrix} \cdot \begin{pmatrix}a_1 \\ a_2 \\ \vdots \\ \vdots \\ a_n \\ \end{pmatrix} = \begin{pmatrix}c_1 \\ c_2 \\ \vdots \\ c_k \\ \end{pmatrix} $$ or for short, $ \mathbf X \cdot \mathbf{\vec{a}} = \mathbf c $. Note the different dimensionalities. Now this system is underdetermined, and a particular solution for $\mathbf{\vec{a}} $, which also is rather robust to measurement errors in individual values of given or new vectors $ \mathbf{\vec{x}} $, is given by the Pseudoinverse (or Moore–Penrose inverse) which is $$ \mathbf{\vec{a}} = \mathbf X^T (\mathbf X \mathbf X^T)^{-1} \mathbf c $$ For $k > n$ the system is overdetermined. Then feasibility in general is not given, one has to inspect. One can ask for the probability that such an overdetermined system can be solved not with fixed values $c^k > c^{k-1} > \cdots > c^1$, but such that at least one such set of values will exist. This probability is not well determined for $n < k < 2 n$ but it is known that it drops sharply for $k > 2 n$. The treatment of those probabilities is a matter of linear separability / cf. the function counting theorem (Thomas Cover, 1965) and of later investigations in the storage capacities of perceptrons (Elisabeth Gardner, 1988).
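A stdlib-only sketch of the formula $\mathbf{\vec{a}} = \mathbf X^T (\mathbf X \mathbf X^T)^{-1} \mathbf c$ on toy data with $k=2$, $n=3$ (the vectors and targets below are invented for illustration; in practice one would reach for a library routine such as numpy.linalg.pinv):

```python
def matvec(M, v):
    return [sum(m * t for m, t in zip(row, v)) for row in M]

# Toy data: k = 2 ranked vectors in n = 3 dimensions, target overlaps c.
X = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]
c = [1.0, 2.0]                      # strictly increasing, as required

# G = X X^T is 2x2, so it can be inverted directly.
G = [[sum(p * q for p, q in zip(r1, r2)) for r2 in X] for r1 in X]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]

w = matvec(Ginv, c)
a = [sum(X[i][j] * w[i] for i in range(2)) for j in range(3)]  # a = X^T w

# The recovered weight vector reproduces the requested overlaps.
assert all(abs(got - want) < 1e-9 for got, want in zip(matvec(X, a), c))
```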
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does $\lim_{n \to \infty} \left(1+\frac{1}{n}\right)^n=e$ but $\lim_{n \to \infty} \left(1-\frac{1}{n}\right)^n=e^{-1}$? Why does $$\lim_{n \to \infty} \left(1+\frac{1}{n}\right)^n=e$$ but $$\lim_{n \to \infty} \left(1-\frac{1}{n}\right)^n=e^{-1}$$ Shouldn't the limits be the same since $\left(1+\frac{1}{n}\right) \to 1$?
Expand both using the binomial theorem $$\left(1+\frac 1n\right)^n=1^n+n\cdot\frac 1n\cdot1^{n-1}+\binom n2\left(\frac 1n\right)^21^{n-2}+\binom n3\left(\frac 1n\right)^31^{n-3}+\dots =$$$$=1+1+\frac {n(n-1)}{2n^2}+\frac {n(n-1)(n-2)}{6n^3}+\dots$$while $$\left(1-\frac 1n\right)^n=1-1+\frac {n(n-1)}{2n^2}-\frac {n(n-1)(n-2)}{6n^3}+\dots$$ Subtract the second from the first and you get $$2+\frac {n(n-1)(n-2)}{3n^3}+\dots$$ a sum of positive terms. So the difference between the two expressions is at least $2$. If the limits exist (they do), the difference between the limits must be at least $2$. If you analyse the difference you will note that it is increasing with $n$ (but bounded) so even though the $(1\pm \frac 1n)$ part tends to zero, raising to the $n^{th}$ power in this case gives expressions which get further apart rather than closer together.
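A quick numerical illustration (a Python sketch, not part of the original argument) of both limits, and of the fact that the two expressions stay at least $2$ apart:

```python
from math import e

n = 10_000
plus = (1 + 1 / n) ** n     # approaches e   ~ 2.71828
minus = (1 - 1 / n) ** n    # approaches 1/e ~ 0.36788
gap = plus - minus          # stays above 2, as the comparison above shows
```

For $n=10{,}000$ the two values are already within about $10^{-4}$ of $e$ and $1/e$, and their gap is about $2.35>2$.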
{ "language": "en", "url": "https://math.stackexchange.com/questions/2547914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Finding the turning points of $f(x)=\left(x-a+\frac1{ax}\right)^a-\left(\frac1x-\frac1a+ax\right)^x$ I've just come across this function when playing with the Desmos graphing calculator and it seems that it has turning points for many values of $a$. So I pose the following problem: Given $a \in \mathbb{R}-\{0\}$, find $x$ such that $\dfrac{dy}{dx}=0$ where $y=\left(x-a+\dfrac1{ax}\right)^a-\left(\dfrac1x-\dfrac1a+ax\right)^x$ As in most maxima/minima problems, we first (implicitly) differentiate it and set to $0$ to give $$\boxed{\small\dfrac{a(ax^2-1)}{x(ax^2-a^2x+1)}\left(\dfrac{ax^2-a^2x+1}{ax}\right)^a=\left(\ln\left(\dfrac{a^2x^2-x+a}{ax}\right)+\dfrac{a(ax^2-1)}{a^2x^2-x+a}\right)\left(\dfrac{a^2x^2-x+a}{ax}\right)^x} \tag{1}$$ I have no idea how to continue from here. I thought about taking logarithms, but it appears to me that the double $\ln$ in the term $\dfrac{a^2x^2-x+a}{ax}$ would only make the equation worse. (For the simplest case when $a=1$, the problem is easy: $x=1$ and it is a point of inflexion). Let's try setting each of the terms to $0$: Case $1$: $\left(\frac{a^2x^2-x+a}{ax}\right)^x=0$ $ \hspace{1cm}$ This is only possible when the fraction is zero; that is, solving $a^2x^2-x+a=0$ to get $$x=\frac{1\pm\sqrt{1-4a^3}}{2a^2}$$ Case $2$: $\left(\frac{ax^2-a^2x+1}{ax}\right)^a=0$ $ \hspace{1cm}$ This gives $$\begin{align}ax^2-a^2x+1=0&\implies a^2x^2+a=a^3x\\&\implies\left(\dfrac{a^2x^2-x+a}{ax}\right)^x=\left(\dfrac{a^3x-x}{ax}\right)^x=\left(\dfrac{a^3-1}{a}\right)^x=0\end{align}$$ $ \hspace{1cm}$ so for equality between LHS and RHS, we must have $a=1$. However, the equation $ \hspace{1cm}$ $ax^2-a^2x+1=0$ has no real solutions for such $a$; hence we reach a contradiction. Case $3$: $\frac{a(ax^2-1)}{x(ax^2-a^2x+1)}=0$ $ \hspace{1cm}$ We have $x=\pm \dfrac1a$. 
Now LHS is $0$, and $$\left(\dfrac{a^2x^2-x+a}{ax}\right)^x=\left(1-\dfrac1a+a\right)^{\frac1a} \neq 0$$ $ \hspace{1cm}$ for $a \in \mathbb{R} - \{\phi\}$, where $\phi$ is the golden ratio. $ \hspace{1cm}$ Suppose that $a = \phi$. Then $x$ is forced to be $-\dfrac1a=-\dfrac2{1+\sqrt5}$, since $ax^2-a^2x+1=0$ $ \hspace{1cm}$ (undefined) when $x=\dfrac 1a$. This is impossible, since $y$ is only defined when $x>0$ for this $ \hspace{1cm}$ value of $a$! Case $4$: $\ln\left(\frac{a^2x^2-x+a}{ax}\right)+\frac{a(ax^2-1)}{a^2x^2-x+a}=0$ $ \hspace{1cm}$ This is impossible from cases $1$ and $3$. UPDATE: I have provided a partial answer to my question, now with $x$ removed from it. Any hints on how to solve $(3)=(4)$ are welcome. Here, on MathOverflow: https://mathoverflow.net/questions/302105/on-finding-the-critical-points-of-fx-leftx-a-frac1ax-righta-left-fra
A note about Case $1$ with $\,a:= 2^{-\frac{2}{3}}\,$ and $\,\displaystyle x\to 2^{\frac{1}{3}}\,$ . Left side $\,=0\,$ for $\,\displaystyle x=2^{\frac{1}{3}}\,$ because of $\, ax^2-1=0\,$ and $\,\displaystyle ax^2-a^2x+1=\frac{3}{2}\ne 0\,$ . Right side $\,=0\,$ for $\,\displaystyle x\to 2^{\frac{1}{3}}>1\,$: $\,\displaystyle \lim_{z\to +0 \\x>0} z^x\ln z=0\,$ and therefore $\,\displaystyle \left(\frac{a^2x^2-x+a}{ax}\right)^x \ln \frac{a^2x^2-x+a}{ax} \to 0\,$ for $\,\displaystyle x\to 2^{\frac{1}{3}}>0\,$ $\,\displaystyle \left(\frac{a^2x^2-x+a}{ax}\right)^x \frac{a(ax^2-1)}{a^2x^2-x+a} = \left(a^2x^2-x+a\right)^{x-1} (ax^2-1) x^{-x} a^{1-x} = 0\,$ for $\,\displaystyle x=2^{\frac{1}{3}}\,$ because of $\,x-1>0\,$ and $\,\displaystyle a^2x^2-x+a=0\,$ and $\, ax^2-1=0\,$ . If you like to work with recursions for e.g. $\,a>0,\,a\neq 1\,$ it can make sense to choose $\,\displaystyle z:=x+\frac{1}{ax}\,$ so that you get $\displaystyle x=f_{1,2}(z):=\frac{z}{2}\pm\sqrt{(\frac{z}{2})^2-\frac{1}{a}}\,$ and $\,\displaystyle y=(z-a)^a-\left(az-\frac{1}{a}\right)^x\,$ . We get $\,\displaystyle \frac{dy}{dz}=(z-a)^{a-1} - \left(az-\frac{1}{a}\right)^x \left(\frac{x}{z-\frac{1}{a^2}}+\frac{1}{2-\frac{z}{x}}\ln\left(az-\frac{1}{a}\right)\right)$ and a possible recursion with $\displaystyle z_0>\max\left(a;\frac{2}{\sqrt{a}};\frac{1}{a^2}\right)\,$ could be $\displaystyle z_{n+1}=a+\left(\left(az_n-\frac{1}{a}\right)^{f(z_n)} \left(\frac{f(z_n)}{z_n-\frac{1}{a^2}}+\frac{1}{2-\frac{z_n}{f(z_n)}}\ln\left(az_n-\frac{1}{a}\right)\right)\right)^{\frac{1}{a-1}}\,$ with $\,f(z)\in\{f_1(z);f_2(z)\}\,$ . For time reasons I haven't checked this recursion, sorry. But it's a try to simplify the calculations. Note: Because of $\,\displaystyle \frac{dy}{dx}=\frac{dy}{dz}\frac{dz}{dx}\,$ with $\,\displaystyle \frac{dy}{dx}:=0\,$ we have $\,\displaystyle \frac{dy}{dz}=0\,$ or $\,\displaystyle \frac{dz}{dx}=0\,$ . 
$\displaystyle \frac{dz}{dx}=0\,$ means $\,ax^2-1=0\,$ which leads directly to my comment about Case $1$ . This strengthens the claim that the substitution $\,\displaystyle z:=x+\frac{1}{ax}\,$ makes sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Tangent space as the set of all derivations I am trying to get a grip on the concept of derivations at a point on a manifold by working out some concrete examples. Let $M$ be a smooth manifold with or without boundary, and let $p \in M$. A linear map $v : C^{\infty}(M) \to \mathbb{R}$ is called a derivation at $p$ if it satisfies $$v(fg) = f(p)v(g) + g(p)v(f)$$ The set of all derivations of $C^{\infty}(M)$ at $p$ is denoted by $T_pM$ and is a vector space called the tangent space to $M$ at $p$. Now let's look at an example from single-variable calculus. Take $M = \mathbb{R}$ and $f(x) =\sin(x)$ and $p = \pi$. We have $f'(p) = \cos(p) = -1$. Now let's look at the general definition above, and note that for any $a \in \mathbb{R}^n$, the $n$ derivations $$\frac{\partial}{\partial x^i}|_{a} \ \text{ defined by } \ \frac{\partial}{\partial x^i}|_{a}f = \frac{\partial f}{\partial x^i}(a)$$ for $i \in \{1, .., n\} $ form a basis for $T_a(\mathbb{R}^n)$, which has dimension $n$. So pick $v \in T_p(\mathbb{R}^1)$; since $v$ is a linear combination of basis elements in a vector space of dimension $1$, we have $v = c \cdot \frac{\partial}{\partial x}|_{p}$ for some $c \in \mathbb{R}$. So $v(f) = c \cdot \frac{\partial f}{\partial x}|_{p} = c\cdot \frac{d f}{d x}|_{p} = c\cdot \cos(\pi) = -c$ for some $c \in \mathbb{R}$. But we need $v(f) = -1$, for the computation from the general definition and usual calculus to coincide. Have I done anything wrong here? Shouldn't $v(f) = -1$?
You have effectively done everything correctly. You recognised that $\frac{\partial}{\partial x}|_{p}$ was a basis for the tangent space at $p$ and $\frac{\partial f}{\partial x}|_{p} = -1$ An arbitrary tangent vector at $p$ is thus $v = c\frac{\partial}{\partial x}|_{p}$ for some $c$ so by linearity $v(f)= c\frac{\partial f}{\partial x}|_{p} = c(-1) = -c $ which is what you showed. In other words, you came to the correct conclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
On the asymptotics of $a_n=a_{n-1}^k+a_{n-2}^k$ where $k>1$ and $a_0=0, a_1=1$ Consider the sequence defined by $a_0=0,a_1=1, a_n=a_{n-1}^k+a_{n-2}^k$, where $k$ is a fixed integer larger than $1$. One finds $a_n\sim a_{n-1}^k$ and thus $a_n\sim \alpha_k^{k^n}$ where $\alpha_k$ is a constant which depends on $k$. It seems that $\lim\limits_{k\to\infty}\alpha_k=1$. If so, how can it be proven?
If $k > 1$, it's obvious that the sequence $a_n$ is increasing and nonnegative. So, because $0 \le a_{n-2} \le a_{n-1}$, we have the inequality $a_{n-1}^k \le a_n \le 2a_{n-1}^k$, and that will be all we need to prove the inequality $$ 2^{k^{n-3}} \le a_n \le 2^{k^{n-1}/(k-1)} \tag{$\star$} \label{eq:star} $$ for sufficiently large $n$. This doesn't prove that $\alpha_k$ exists (for that we need more precise estimates) but assuming $\alpha_k$ exists, it proves $2^{1/k^3} \le \alpha_k \le 2^{1/(k^2-k)}$, which is enough to know that $\alpha_k \to 1$ as $k \to \infty$ by the squeeze theorem. (And this is true for various notions of "existence" of $\alpha_k$. For example, we might define $\alpha_k = \lim_{n\to\infty} (a_n)^{1/k^n}$, which leaves open the possibility that $a_n$ only grows like $\alpha_k^{k^n}$ up to some lower-order factors which are "merely exponential", say. Or we might require that $a_n \sim \alpha_k^{k^n}$, which I interpret as saying $\lim_{n\to\infty} \frac{a_n}{\alpha_k^{k^n}} = 1$ and am not yet convinced is true for any $\alpha_k$.) To prove $\eqref{eq:star}$, let $b_n = \frac{\lg a_n}{k^n}$ (defined for $n\ge 1$). We have $$ a_{n-1}^k \le a_n \le 2 a_{n-1}^k \implies k \lg a_{n-1} \le \lg a_n \le k \lg a_{n-1} + 1 \implies b_{n-1} \le b_n \le b_{n-1} + \frac1{k^n} $$ where the last statement is obtained by dividing through by $k^n$ and applying the definition of $b_n$. To get a lower bound on $b_n$, just compute the first few values of $a_n$: we have $a_2 = 1^k + 0^k = 1$ and $a_3 = 1^k + 1^k = 2$. So $b_3 = \frac{\lg 2}{k^3} = \frac1{k^3}$, and because $b_{n-1} \le b_n$, we have $b_n \ge \frac1{k^3}$ for all $n \ge 3$. To get an upper bound on $b_n$, just notice that because $b_{n} \le b_{n-1} + \frac1{k^n}$, then $$ b_n \le b_1 + \sum_{i=2}^n \frac1{k^i} \le b_1 + \sum_{i=2}^\infty \frac1{k^i} = \frac1{k(k-1)}. $$ Knowing that $\frac1{k^3} \le b_n \le \frac1{k(k-1)}$ immediately translates into the bound on $a_n$ in $\eqref{eq:star}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Uniform integrability and compact convergence Suppose we have a family of uniformly integrable random variables $\{X_n:n\in\mathbb{N}\}$. Suppose also we have a sequence of continuous functions $f_n:\mathbb{R}\to\mathbb{R}$ converging to $f$ and that $\{f(X_n):n\in\mathbb{N}\}$ is uniformly integrable. Also assume that $f_n(X_n)$ is integrable for all $n$. If $f_n$ converges to $f$ uniformly, then we know that $\{f_n(X_n):n\in\mathbb{N}\}$ is uniformly integrable. If we replace uniform convergence with compact convergence (uniform convergence on compact sets), is $\{f_n(X_n):n\in\mathbb{N}\}$ also uniformly integrable?
Let $f_n\colon x\mapsto x/n$. Then $f_n$ is continuous for all $n$ and the sequence $\left(f_n\right)_{n\geqslant 1}$ converges to $0$ uniformly on compact sets. It thus suffices to find a sequence of random variables $(X_n)_{n\geqslant 1}$ such that for each $n$, $X_n/n$ is integrable but the family $\left\{X_n/n,n\geqslant 1\right\}$ is not uniformly integrable. By letting $Y_n:=X_n/n$, it suffices to find a sequence of integrable functions which is not uniformly integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Numerical range of normal matrices Let $F_1, F_2$ be two normal matrices, and consider $$W(F_1,F_2)=\{(\langle F_1 y\; ,\;y\rangle,\langle F_2 y ,\;y\rangle):y \in \mathbb{C}^n,\;\;\|y\|=1\}.$$ If $F_1F_2=F_2F_1$, is $W(F_1,F_2)$ convex? Thank you!
It is quite simple: if the matrices $F_1,\cdots, F_k$ are normal and commute with each other, they share a common orthonormal eigenbasis $(v_1,\cdots,v_n)$, and every state vector $x$ can be decomposed as $$x=\sum_{i=1}^n x_i v_i$$ The average value of $F_j$ over this state is simply $$\sum_{i=1}^n |x_i|^2 \lambda(F_j)_i,$$ where $\lambda(F_j)_i$ is the eigenvalue of $F_j$ corresponding to the eigenvector $v_i$. Now, if there are two points $A,B \in W(F_1,\cdots, F_k)$, there are state vectors $x,y$ such that the image of $x$ is $A$ and the image of $y$ is $B$. We would like to prove that for any point $C$ on the line segment from $A$ to $B$ there exists a state vector $z$ corresponding to $C$. But $C=\alpha A+(1-\alpha)B,\;(0<\alpha<1)$, and all the corresponding averages (coordinates of $C$) have the form $$\sum_{i=1}^n(\alpha |x_i|^2 + (1-\alpha)|y_i|^2) \lambda(F_j)_i,$$ so clearly the state vector $z$ defined as $$z=\sum_{i=1}^n\left(\sqrt{\alpha |x_i|^2 + (1-\alpha)|y_i|^2}\right) v_i$$ corresponds to the point $C$.
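The construction can be checked numerically. A NumPy sketch with made-up commuting normal matrices (both diagonal in the standard basis, the simplest common eigenbasis) and random real unit vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# commuting normal matrices: both diagonal in the standard basis (made up)
F1 = np.diag(rng.normal(size=n))
F2 = np.diag(rng.normal(size=n))

def avg(F, v):
    return np.vdot(v, F @ v).real          # <F v, v> for a unit vector v

x = rng.normal(size=n); x /= np.linalg.norm(x)
y = rng.normal(size=n); y /= np.linalg.norm(y)
alpha = 0.3

# the interpolating state from the answer: |z_i|^2 = a|x_i|^2 + (1-a)|y_i|^2
z = np.sqrt(alpha * x**2 + (1 - alpha) * y**2)

A = np.array([avg(F1, x), avg(F2, x)])
B = np.array([avg(F1, y), avg(F2, y)])
C = np.array([avg(F1, z), avg(F2, z)])
```

Here `C` equals `alpha*A + (1-alpha)*B` to machine precision and `z` is again a unit vector, exactly as the computation above predicts.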
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The intersection of all subspaces of $V$ is $\{0\}$. Let $V$ be an $\mathbb{R}$-vector space with basis $B=\{v_1 ,v_2, \ldots , v_n\}$ and $\overline{v}\in V$, $\overline{v}\neq 0$. I have shown that if we exchange $\overline{v}$ with a suitable $v_i\in B$ we get again a basis. I want to show, using this fact, that the intersection of all subspaces of $V$ of dimension $n-1$ is $\{0\}$. I have done the following: We suppose that there is a non-zero vector, say $u$, common to all $(n-1)$-dimensional subspaces. Then suppose that $B$ is a basis of the intersection; then $u\in B$. We can exchange $u$ with another element $v\in V$ and we get again a basis, right? How can we continue?
I am not sure this is the easiest way to answer your original question. The way to do what you're asking is: Assume by contradiction that the intersection contains a non-zero vector $u$. Complete $u$ to a basis $u, v_2, \ldots, v_n$. Now $v_2, \ldots, v_n$ spans an $(n-1)$-dimensional subspace. $u$ can't be an element of that subspace because it's independent of $v_2, \ldots, v_n$ by our construction. This leads to the desired contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Integer part of $\sqrt[3]{24+\sqrt[3]{24+\sqrt[3]{24+\cdots}}}$ Find the value of the following infinite nested radical: $$\left\lfloor\sqrt[3]{24+\sqrt[3]{24+\sqrt[3]{24+\cdots}}}\right\rfloor$$ Now, my doubt is whether it's $2$ or $3$. I'm not sure if the radical just converges to $3$ without actually reaching it, or if it completely attains the value of $3$. I would like to see a formal proof as well, instead of just intuition.
Let $$x= \sqrt[3]{24+\sqrt[3]{24+...}}$$ Then $$x^3 = 24+\sqrt[3]{24+\sqrt[3]{24+...}}$$ Notice that $x^3 -24$ is the same nested radical again, so $$x^3-24=x$$ Solving for $x$ (factor as $(x-3)(x^2+3x+8)=0$) yields $$3, \quad \frac{-3+\sqrt{23}i}{2}, \quad \frac{-3-\sqrt{23}i}{2}$$ but since we are concerned with real solutions, $3$ is the answer. The finite truncations get arbitrarily close to $3$ without reaching it, but the infinite expression is their limit, which equals exactly $3$; hence the answer is $3$. ^^ Edit: That symbol is usually called the "floor function".
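Numerically, the finite truncations $x_{m+1}=\sqrt[3]{24+x_m}$ indeed climb toward the fixed point $3$ (a Python sketch; the iteration converges fast since the map has derivative $1/27$ at the fixed point):

```python
x = 24 ** (1 / 3)              # first truncation, about 2.884
for _ in range(50):
    x = (24 + x) ** (1 / 3)    # next truncation of the nested radical
# x is now 3 to machine precision
```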
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 6 }
Proving $g(2n)=0$ Given that $f$ is an odd function, periodic with period $2$ and continuous for all $x$, and $g(x)=\int_0^x f(t)\, dt$, the question is to prove that $g(2n)=0$. By periodicity, $g(2n)=n\int_0^2 f(x)\, dx=n\, g(2)$, so I have to prove that $g(2)=0$. I could check that $g(x)$ is an even function. Any ideas?
$$g(2)=\int_0^2f(x)\text{d}x=\int_{-2}^0f(x)\text{d}x$$ Thus, $$g(2)=\frac{1}{2}\left(\int_0^2f(x)\text{d}x+\int_{-2}^0f(x)\text{d}x\right)=\frac{1}{2}\int_{-2}^2f(x)\text{d}x=0$$
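A numerical sanity check (Python sketch with the made-up example $f(x)=\sin(\pi x)$, which is odd and has period $2$): a midpoint-rule approximation of $g$ vanishes at the even integers:

```python
from math import sin, pi

def f(x):
    # a made-up odd function with period 2
    return sin(pi * x)

def g(x, m=100_000):
    # midpoint-rule approximation of g(x) = integral of f from 0 to x
    h = x / m
    return h * sum(f((i + 0.5) * h) for i in range(m))

gs = [g(2 * n) for n in (1, 2, 3)]   # should all be ~0
```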
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing a set has nonempty interior Let $A$ be a finite set. Let $M : A \times A \to {\bf R}$ be a symmetric function, which is positive-semidefinite when regarded as an $A \times A$ matrix. Let $P(A)$ be the set of vectors $p = (p_a)_{a \in A} \in {\bf R}^A$ such that for all $a \in A$, $p_a \ge 0$, and $\sum_{a \in A} p_a = 1$. Then let $S(p) = \sum_{(a,b) \in A \times A} M(a,b) p_a p_b$, and $U_r = \{p \in P(A) : S(p) \leq r\}$. I want to prove that if $U_r$ is nonempty, it also has nonempty interior under the norm $\|p - q \| = \sum_{a \in A} |p_a - q_a|$. I have used Cauchy-Schwarz to show that the sets $U_r$ are convex.
This is not true. If $S^{-1}((-\infty, r])$ is non-empty, then it contains each of the sets $S^{-1}((-\infty, s))$ for $0\leq s\le r$, and these are open (relative to $P(A)$) because $S$ is continuous. So $S^{-1}((-\infty, r])$ has non-empty interior unless all of those sets are empty, in other words unless $r$ is the global minimum of $S$ on $P(A)$. But the global minimum of $S$ might well be $r>0$. For example if $M$ is positive definite: since $S(p)=p^TMp$, the minimum of $S$ on the compact set $P(A)$ must be attained, but cannot be zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2548932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Definite Integral $\int_{0}^{1}\frac{\ln(x^2-x+1)}{x-x^2}dx$ for Logarithm and Algebraic. Evaluate $$\int_{0}^{1}\frac{\ln(x^2-x+1)}{x-x^2}dx$$ I have been thinking about this question for quite some time, and I've tried all the methods I have learnt but am still getting nowhere. I hope that someone can explain it to me. Thanks in advance.
You can use, since you're in the range $[0, 1]$ (where $|x^2-x|\le\frac14<1$), the logarithm expansion $$\ln(x^2 - x + 1) = \ln\bigl(1+(x^2-x)\bigr) = -\sum_{k = 1}^{+\infty} \frac{(-1)^k}{k} (x^2-x)^k$$ together with the fact that $x - x^2 = -(x^2 - x)$, so the two minus signs cancel: $$\frac{\ln(x^2-x+1)}{x-x^2} = \sum_{k = 1}^{+\infty} \frac{(-1)^k}{k} (x^2-x)^{k-1}$$ After that, treat the power $(x^2-x)^{k-1} = x^{k-1}(x-1)^{k-1}$ with the binomial theorem, which gives $$(x-1)^{k-1} = \sum_{j = 0}^{k-1} \binom{k-1}{j} (-1)^{k-1-j} x^j$$ Giving you the integral $$\sum_{k = 1}^{+\infty} \frac{(-1)^k}{k} \sum_{j = 0}^{k-1} \binom{k-1}{j} (-1)^{k-1-j} \int_0^1 x^{k-1+j}\, dx$$ Which is trivial, since $\int_0^1 x^{k-1+j}\,dx = \frac{1}{k+j}$. Collecting the signs, $(-1)^k(-1)^{k-1-j} = -(-1)^j$, so the final result in terms of series: $$-\sum_{k = 1}^{+\infty} \frac{1}{k} \sum_{j = 0}^{k-1} \binom{k-1}{j} \frac{(-1)^{j}}{k+j}$$ The series can be summed and the final result is $$-\frac{\pi^2}{9}$$
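As a sanity check (a sketch, not part of the original derivation), the double series can be summed in exact rational arithmetic and compared against $-\pi^2/9$:

```python
from fractions import Fraction
from math import comb, pi

def partial_sum(K):
    # partial sum of  -sum_k (1/k) sum_j C(k-1,j) (-1)^j / (k+j),  exactly
    s = Fraction(0)
    for k in range(1, K + 1):
        inner = sum(Fraction((-1) ** j * comb(k - 1, j), k + j) for j in range(k))
        s -= inner / k
    return s

approx = float(partial_sum(40))
target = -pi ** 2 / 9
```

The terms decay roughly geometrically, so $40$ values of $k$ already give far more than double precision.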
{ "language": "en", "url": "https://math.stackexchange.com/questions/2549072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Trying to solve this First Order Differential Equation. I've made several attempts at this question but they've been unsuccessful. The question is shown below: $$(1+x^2)\frac{\text{d}y}{\text{d}x}+xy=0$$ The answer in the book states that, with $y(0)=2$, $$y^2(1+x^2)=4$$ However, I obtain $c^2=y^2(1+x^2)$, and when the boundary condition is applied this answer is not achieved.
It looks like you're correct - for $x=0$ and $y=2$, we have $c^2 = 2^2 (1 + 0^2)$, so $c^2 = 4$. So, your equation becomes $y^2 (1+x^2) = c^2 = 4$, as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2549161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determinant of a matrix with odd diagonal and even entries I'm trying to solve the following linear algebra problem and I'm not sure where to begin: Let $B$ be a square matrix with n columns and integer entries. This matrix is constructed so that all diagonal entries are odd and all other entries are even. We wish to demonstrate that $\det(B) \neq 0$. My knee-jerk reaction is to separate $B$ into even and odd matrices $B_{even}$ and $B_{odd}$, where $B_{odd}$ will be diagonal and have a nonzero determinant. But I'm not sure where to go from there.
$$B = \begin{bmatrix} \color{red}{odd} & even & \dots & even\\ even & odd & \dots & even \\ \vdots & \ddots & \dots & even\\ even & even &even & odd \end{bmatrix}$$ In the Leibniz expansion of the determinant, the identity permutation contributes the product of the diagonal entries, which is odd; every other permutation picks at least two off-diagonal entries, each even, so its term is even. Hence $$|B| = \color{red}{odd} + even+even+..........+even=odd\ne 0$$ (Equivalently: $B \equiv I \pmod 2$, so $\det B \equiv 1 \pmod 2$.)
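An empirical check in Python (made-up random matrices; the determinant is computed exactly over the integers by cofactor expansion, so there is no floating-point doubt about parity):

```python
import random

def det(M):
    # exact integer determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

random.seed(0)
n = 4
# odd diagonal, even off-diagonal entries (made-up random example)
B = [[2 * random.randint(-5, 5) + (1 if i == j else 0) for j in range(n)]
     for i in range(n)]
d = det(B)
```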
{ "language": "en", "url": "https://math.stackexchange.com/questions/2549314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How do I show $f_n(x)=n^2 x^n(1-x)$ pointwise converges to $0$ on $[0,1]$? How do I show $f_n(x)=n^2 x^n(1-x)$ pointwise converges to $0$ on $[0,1]$? I first started with the case $x=1/2$ to try out. We see that $n^2(1/2)^n(1/2)=n^2/2^{n+1}$ which goes to $0$ as $n$ goes to infinity since $n^2 < 2^{n+1}$. Now I need to generalize. If I take the derivative of $f_n(x)$ then I get, after simplifying, $$f_n'(x)= n^2x^{n-1}(n-(n+1)x).$$ So, we know $x=0$ and $x=n/(n+1)$ are critical points. I plotted the graph and I see that as $n$ gets larger, the maximum of the function on $[0,1]$ grows and gets closer to $x=1$.
At the endpoints the claim is immediate: $f_n(0)=f_n(1)=0$ for every $n$. For $0 < x < 1$ we have $y = 1/x - 1 > 0$ and $ x = 1/ (1 + (1/x - 1))= 1/(1+y).$ Thus, $$n^2 x^n = \frac{n^2}{(1 + y)^n}.$$ Using the binomial expansion $(1+y)^n = 1 + ny + \frac{1}{2}n(n-1)y^2 + \frac{1}{6}n(n-1)(n-2)y^3 + \ldots,$ we have, for $n\ge3$, $$0 \leqslant n^2 x^n = \frac{n^2}{(1 + y)^n} < \frac{n^2}{n(n-1)(n-2)}\frac{6}{y^3},$$ and $\lim_{n \to \infty} n^2x^n = 0$ by the squeeze theorem. Since $0\le f_n(x)=n^2x^n(1-x)\le n^2x^n$, it follows that $f_n(x)\to0$ pointwise on $[0,1]$.
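A small numerical illustration (Python sketch): at any fixed $x$ the values decay, even though the maximum of $f_n$ over $[0,1]$, attained at $x=n/(n+1)$, keeps growing; so the convergence is pointwise but not uniform:

```python
def f(n, x):
    return n ** 2 * x ** n * (1 - x)

# pointwise decay at the fixed point x = 0.9 ...
vals = [f(n, 0.9) for n in (10, 100, 500)]

# ... while the sup over [0,1], attained at x = n/(n+1), grows like n/e
sups = [f(n, n / (n + 1)) for n in (10, 100, 500)]
```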
{ "language": "en", "url": "https://math.stackexchange.com/questions/2549581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Calculus Derivative—Finding unknown constants Find all values of $k$ and $l$ such that $$\lim_{x \to 0} \frac{k+\cos(lx)}{x^2}=-4.$$ Any help on how to do this would be greatly appreciated.
Note that $$\lim_{x\to0} \frac{1-\cos x}{x^2}=\frac{1}{2}$$ And that $$\frac{k+\cos(lx)}{x^2}=\frac{k+1}{x^2}-\frac{1-\cos(lx)}{x^2}$$ For the limit to exist we need $$k+1=0$$ and then, since $\lim_{x\to0}\frac{1-\cos(lx)}{x^2}=\frac{l^2}{2}$, we need $$\frac{1}{2}l^2=4$$ so $k=-1$ and $l=\pm2\sqrt{2}$.
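A numerical check (Python sketch) with the resulting values $k=-1$, $l=2\sqrt2$: the quotient approaches $-4$ as $x\to0$:

```python
from math import cos, sqrt

k, l = -1.0, 2 * sqrt(2)             # the values found above
vals = [(k + cos(l * x)) / x**2 for x in (0.1, 0.01, 0.001)]
# vals should approach -4 as x shrinks
```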
{ "language": "en", "url": "https://math.stackexchange.com/questions/2549850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Shifted absorption time If $(X_n)_{n\geq0}$ is a Markov chain, and $T=\inf\left\{n\geq0 : X_n=1 \right\}$, why does $$\mathbb{E}_i[T-1|X_1=j]=\mathbb{E}[T|X_0=j] ?$$ This question is partially answered here : Markov chains: Expected time until absorption., but I'd like a rigorous proof of that fact.
It's enough to prove that $$ \Pr[T = t \mid X_0 = i, X_1 = j] = \Pr[T = t-1 \mid X_0 = j]. $$ Then $\mathbb E[T]$ will follow by the usual expectation formula. Both sides of this probability can be decomposed into sums over paths: on the left, paths $(i,j, \dots,1)$ of length $t$, and on the right, paths $(j,\dots,1)$ of length $t-1$. For any fixed $P$ on the left, we have a corresponding path $P'$ on the right by dropping the initial $i$, and vice versa. Following the path $P$ given that $X_0 = i$ and $X_1 = j$ has the same probability as following $P'$ given that $X_0 = j$, by the Markov property (and by time-homogeneity: we'll get terms like $\Pr[X_{k+1} = x \mid X_k = y]$ on one side and $\Pr[X_k = x \mid X_{k-1} = y]$ on the other side). So each path contributes the same to both sides, and therefore both probabilities are equal.
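The path-by-path correspondence can be verified exactly on a small example (a Python sketch with a made-up time-homogeneous chain on states $\{1,2,3\}$, where state $1$ plays the role of the target; both sides are computed by separate brute-force path enumerations in exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

F = Fraction
# made-up transition matrix on states {1, 2, 3}; T = inf{n >= 0 : X_n = 1}
P = {1: {1: F(1, 2), 2: F(1, 4), 3: F(1, 4)},
     2: {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)},
     3: {1: F(1, 2), 2: F(1, 2), 3: F(0)}}

def path_prob(states):
    p = F(1)
    for a, b in zip(states, states[1:]):
        p *= P[a][b]
    return p

def hit_prob(start, t):
    """P(T = t | X_0 = start): enumerate paths avoiding 1 until time t."""
    if t == 0:
        return F(1 if start == 1 else 0)
    if start == 1:
        return F(0)
    return sum(path_prob((start,) + mid + (1,))
               for mid in product([2, 3], repeat=t - 1))

def cond_hit_prob(i, j, t):
    """P(T = t | X_0 = i, X_1 = j) for i, j != 1, from full paths out of i."""
    num = sum(path_prob((i, j) + mid + (1,))
              for mid in product([2, 3], repeat=t - 2))
    return num / P[i][j]
```

Comparing `cond_hit_prob(i, j, t)` with `hit_prob(j, t - 1)` for a range of $t$ confirms the distributional identity, and hence the identity of expectations.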
{ "language": "en", "url": "https://math.stackexchange.com/questions/2549957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of Generalized Cayley's formula I'd like to prove the following equation: $$x_1x_2x_3...x_n(x_1 + x_2 + ... + x_n)^{n-2} = \sum_Tx_1^{d_{T(1)}}x_2^{d_{T(2)}}...x_n^{d_{T(n)}}\tag 1$$ where the sum is over all spanning trees $T$ of $K_n$ and $d_{T(i)}$ is the degree of vertex $i$ in $T$. I have heard this called the generalized Cayley formula; Cayley's formula itself gives the number of labelled trees on $n$ vertices: $$n^{n-2} \tag 2$$ The only obvious resemblance between $(1)$ and $(2)$ is the exponent $n-2$. Induction might work here, but a proof by induction would give me little insight into why $(1)$ holds. Any hints for proving this equation in a combinatorial or algebraic way?
As you might have noticed, setting all the $x_i$ to $1$ recovers the fact that the number of trees on $n$ vertices, which is the same as the number of trees spanning $K_n$, is $n^{n-2}$. To get a bijective proof of (1), let $x_1, \ldots, x_n$ be numbers labeling the vertices of $K_n$. Given a spanning tree $T$, note that $$\prod_{(i,j) \in T} x_ix_j = \prod_{i = 1}^n x_i^{d_i(T)}.$$ (The product on the left is over edges of $T$.) Summing over all spanning trees $T$, we are reduced to showing that $$\sum_{T} \prod_{(i, j) \in T} x_ix_j = x_1x_2\ldots x_n(x_1 + x_2 + \ldots + x_n)^{n-2}.$$ Note that the right hand side, expanded out, is the sum, counted with multiplicity, of all monomials of degree $2n-2$ in which each variable has degree at least $1$: the monomial $x_1^{d_1}\cdots x_n^{d_n}$ with all $d_i\ge1$ appears with the multinomial coefficient $\binom{n-2}{d_1-1,\ldots,d_n-1}$. Now, given any set $S$ of $n-1$ edges $(u_i, v_i)$ in $K_n$, we can form the degree $2n-2$ monomial $$m(S) = \prod_{i=1}^{n-1} x_{u_i}x_{v_i}.$$ Moreover, if $S$ happens to be a spanning tree, each variable will appear at least once. To complete the proof, it suffices to show that each monomial $x_1^{d_1}\cdots x_n^{d_n}$ with all $d_i\ge1$ is the image under $m$ of exactly $\binom{n-2}{d_1-1,\ldots,d_n-1}$ spanning trees; that is, to count spanning trees with a prescribed degree sequence. Can you take it from here? (Hint: in the Prüfer correspondence, vertex $i$ appears exactly $d_i - 1$ times.)
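For small $n$ the identity can be checked by brute force using the Prüfer correspondence (each sequence in $\{1,\dots,n\}^{n-2}$ encodes a unique labelled tree, and vertex $i$ has degree one plus its number of occurrences in the sequence). A Python sketch:

```python
from itertools import product
from random import seed, uniform

def check_identity(n, xs):
    # left side: x_1 ... x_n (x_1 + ... + x_n)^(n-2)
    lhs = 1.0
    for x in xs:
        lhs *= x
    lhs *= sum(xs) ** (n - 2)

    # right side: sum over labelled trees via Prufer sequences,
    # where deg(i) = 1 + (number of occurrences of i in the sequence)
    rhs = 0.0
    for seq in product(range(n), repeat=n - 2):
        term = 1.0
        for i in range(n):
            term *= xs[i] ** (1 + seq.count(i))
        rhs += term
    return lhs, rhs

seed(0)
n = 5
xs = [uniform(0.5, 1.5) for _ in range(n)]   # a random test point
lhs, rhs = check_identity(n, xs)
```

With all $x_i=1$ the right-hand side just counts the trees, recovering $n^{n-2}$.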
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
$\det(A^4+I)=29$ is not solvable by any $A\in M_4(\mathbb Z)$ I recently encountered the following problem. Given any $A \in M_4(\mathbb Z)$, show that $\det(A^4+I)\ne29$, where $I$ denotes the identity matrix. LHS can be written as the product of $1+{\lambda _i}^4$ where $\lambda _i$ denotes the eigenvalues of A. By using AM-GM inequality, I found that A is either invertible in $M_4(\mathbb Z)$ or has a zero determinant. I cannot go further. Can anyone help me?
Consider the matrix $A$ modulo $29$. The matrix $A^4+I$ has determinant $29$, so its reduction modulo $29$ is singular; and since $29$ divides the determinant only to the first power, exactly one invariant factor in the Smith normal form is divisible by $29$, so over $\Bbb F_{29}$ the matrix has rank $3$ and nullity one. It has a unique null-vector $u$ up to scalar multiplication: $(A^4+I)u\equiv 0\pmod{29}$ and $(A^4+I)v\equiv 0\pmod{29}$ implies $\newcommand{\la}{\lambda}v\equiv\la u\pmod{29}$. In particular, taking $v=Au$ (which is again a null-vector, since $A$ commutes with $A^4+I$), we find $Au\equiv\la u\pmod{29}$ for some $\la$. Then $\la^4+1\equiv0\pmod{29}$. But a solution of this congruence would have multiplicative order $8$ in $\Bbb F_{29}^\times$, and as $8\nmid(29-1)$, the congruence is insoluble.
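The last step, that $\lambda^4+1\equiv0\pmod{29}$ has no solution, is a finite check (Python sketch):

```python
# A solution of l^4 = -1 (mod 29) would have order 8 in the multiplicative
# group mod 29, whose order is 29 - 1 = 28; but 8 does not divide 28, so
# no solution can exist.  Exhaustive check:
solutions = [l for l in range(29) if (pow(l, 4, 29) + 1) % 29 == 0]
```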
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
convergence sequence and continuous functions I have the following question: assume $C$ is a subset (we can assume it is convex and compact) of a Banach space $(X,\|\cdot\|)$, $f:C\longrightarrow C$ continuous and $x_{0}\in C$ a fixed point of $f$. Also, let $(x_{n})_{n\geq 1}\subset C$ be a sequence having a subsequence converging to $x_{0}$, and such that $$\Big| \|x_{n+1}-x_{0}\|-\|f(x_{n})-f(x_{0})\| \Big|\leq c_{n},$$ for each $n\geq 1$, where $(c_{n})$ is a sequence of positive numbers with $c_{n}\rightarrow 0$. From the above conditions, can we derive that $x_{n}\rightarrow x_{0}$? I know that if $f$ is nonexpansive (i.e. $\|f(x)-f(y)\|\leq \|x-y\|$) the assertion is true under some additional conditions on the sequence $(c_{n})_{n\geq 1}$, but assuming only the continuity of $f$, it is not clear. What do you think? Many thanks in advance for your comments!
Without additional conditions on the sequence $(c_{n})_{n\geq 1}$ (for instance, that the series $\sum c_n$ converges) the assertion may be false even for the segment $[0,1]$ endowed with the standard metric and the identity map $f$. As a counterexample it suffices to put $x_0=0$ and take for $\{x_n\}$ any sequence of points of $[0,1]$ such that $\lim_{n\to\infty} |x_{n}-x_{n+1}|=0$, $\{x_n\}$ contains a subsequence convergent to $x_0$, but $\{x_n\}$ does not converge to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Find the formula for the sequence $(a_n)$ that satisfies the recurrence relation $a_n=(n+7)a_{n-1}+n^2$ with $a_0=1$ Find the formula for the sequence $(a_n)$ that satisfies the recurrence relation $a_n=(n+7)a_{n-1}+n^2$ with the initial condition $a_0=1$. This is a non-linear nonhomogeneous recurrence relation. What I can think that may help solve this problem is using the idea of generating function. Let $G(x)=\sum_{n=0}^{\infty}a_nx^n$. Then $$\begin{aligned} G(x)&=a_0+\sum_{n=1}^{\infty}a_nx^n\\ &=1+\sum_{n=1}^{\infty}\left((n+7)a_{n-1}+n^2\right)x^n\\ &=1+\sum_{n=1}^{\infty}\left((n+7)a_{n-1}x^n\right)+\sum_{n=1}^{\infty}(n^2x^n) \end{aligned}$$ Then I was stuck here, because I couldn't change the two summations into forms of $G(x)$. Anyone has brillant ideas?
Following @GTonyJacobs remark about the homogeneous sequence, let us define (this is a discrete method of variation of the parameter) $$a_n = (n+7)!\, b_n$$ We have $$(n+7)!\, b_n = (n+7)\,(n+6)!\, b_{n-1} + n^2$$ hence $$b_n - b_{n-1} = \frac{n^2}{(n+7)!}$$ It follows that $$b_n = b_0 + \sum_{k=1}^{n} \left(b_k - b_{k-1}\right) = \frac{1}{7!} + \sum_{k=1}^{n} \frac{k^2}{(k+7)!}$$ hence $$a_n = (n+7)!\left(\frac{1}{7!} + \sum_{k=1}^{n} \frac{k^2}{(k+7)!}\right)$$
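The closed form is easy to verify against the recurrence in exact rational arithmetic (Python sketch):

```python
from fractions import Fraction
from math import factorial

def a_closed(n):
    # a_n = (n+7)! * (1/7! + sum_{k=1}^{n} k^2 / (k+7)!)
    s = Fraction(1, factorial(7))
    for k in range(1, n + 1):
        s += Fraction(k * k, factorial(k + 7))
    return factorial(n + 7) * s

# direct recurrence a_n = (n+7) a_{n-1} + n^2, a_0 = 1
a = [1]
for n in range(1, 11):
    a.append((n + 7) * a[-1] + n * n)
```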
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
If $f(g(x))=x$ is $f$ an injective function? Let $ f:\mathbb R\to \mathbb R $. If $f(g(x))=x$, then is $f$ an injective function? Well, I proved it to be true. But honestly I have a strong feeling that my proof is wrong. Here's my proof: Assume $f(a_1)=f(a_2) = x$ and we want to prove that $a_1 = a_2$. If $f(a_1)=f(a_2) = x,$ then $a_1 = g(x)$ and $a_2 = g(x)$, meaning $a_1 = a_2$. Therefore $f$ is injective. Is my proof wrong? I suspect not only that my proof is wrong but also that the claim isn't true at all. But I'm having a hard time finding a counterexample.
Recall the definition of $\arctan$: $\arctan$ is the only continuous function $\Bbb R\to \left(-\frac\pi2,\frac\pi2\right)$ such that $\tan\arctan x=x$ for all $x\in\Bbb R$. Now, it is clear that $\tan$ can be extended to a function $\Bbb R\to\Bbb R$ by assigning arbitrary values to $x=k\pi+\frac\pi2$. Would such an extension be injective?
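Numerically (a Python sketch; floating point, so equalities hold only up to tiny tolerances): with $g=\arctan$ and $f$ the tangent, $f\circ g$ is the identity on the test points, yet $f$ takes the value $0$ at both $0$ and $\pi$, so $f$ cannot be injective. The arbitrary extension values at odd multiples of $\pi/2$ never even enter this check:

```python
from math import atan, tan, pi

# g = arctan; f = tan (extended however we like at odd multiples of pi/2)
xs = [-10.0, -1.0, 0.0, 0.5, 100.0]
roundtrip = [tan(atan(x)) for x in xs]   # f(g(x)) = x on every test point

# yet f is not injective: tan(0) = tan(pi) = 0 exactly in real analysis;
# in floating point tan(pi) is merely ~1e-16
v0, vpi = tan(0.0), tan(pi)
```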
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Problems with finding limit The function $f(x)$ has a derivative at the point $a$ and $f(a) > 0$. I need to find the limit as n $\to + \infty$ of $$\left(\frac{f(a + \frac1n)}{f(a)} \right)^n$$ Substitution method?
By definition of the derivative (applied to $\log f$ at $a$, with $h=1/n$) we get $$ \lim_{n\to \infty}\left(\frac{f(a + \frac1n)}{f(a)} \right)^n= \lim_{n\to \infty} \exp\left[n\log\left(\frac{f(a+1/n)}{f(a)}\right)\right] =\lim_{h\to 0^+}\exp\left[\frac{\log f(a+h) -\log f(a)}{h}\right] =\exp\left(\frac{f'(a)}{f(a)}\right)$$ (note that $f(a)>0$ and continuity at $a$ guarantee $\log f$ is defined near $a$).
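A numerical illustration with an arbitrarily chosen test function (here $f(x)=e^{2x}$ and $a=0$, so $f'(a)/f(a)=2$ and the predicted limit is $e^2$; the choice is mine, not from the answer):

```python
import math

def f(x):
    return math.exp(2 * x)   # f'(a)/f(a) = 2 at a = 0

a = 0.0
n = 10**6
approx = (f(a + 1 / n) / f(a)) ** n
# for large n the sequence should be close to exp(f'(a)/f(a)) = e^2
assert abs(approx - math.exp(2)) < 1e-3
```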
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Test for divergence of $\int_{0}^{\infty} \frac{\sin^2(x)}{x}dx$ without evaluating the integral I would like to prove that $$\int_{0}^{\infty} \frac{\sin^2(x)}{x}dx$$ diverges without actually evaluating the integral. Is there a convergence test from calculus or real analysis that can show that this integral diverges? Thanks. Edit: Someone pointed out that this is a possible duplicate. However, the question put forth as a possible duplicate asks about $\sin(x^2)$, not about $\sin^2(x)$.
$$ \begin{align} \int_0^\infty\frac{\sin^2(x)}{x}\,\mathrm{d}x &=\sum_{k=1}^\infty\int_{(k-1)\pi}^{k\pi}\frac{\sin^2(x)}{x}\,\mathrm{d}x\\ &\ge\sum_{k=1}^\infty\frac1{k\pi}\int_{(k-1)\pi}^{k\pi}\sin^2(x)\,\mathrm{d}x\\ &=\sum_{k=1}^\infty\frac1{2k} \end{align} $$
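The termwise bound can also be watched numerically; this sketch (an illustration, not part of the proof) compares a midpoint-rule value of the integral over $[0,K\pi]$ with the half-harmonic lower bound:

```python
import math

def integral_to(b, steps=200000):
    # midpoint rule for the integral of sin(x)^2 / x over (0, b);
    # the integrand behaves like x near 0, so there is no singularity
    h = b / steps
    return sum(math.sin((i + 0.5) * h) ** 2 / ((i + 0.5) * h) * h for i in range(steps))

K = 20
lower_bound = sum(1 / (2 * k) for k in range(1, K + 1))  # (1/2) H_K, unbounded as K grows
assert integral_to(K * math.pi) >= lower_bound
```

Since the lower bound is half the harmonic series, letting $K$ grow makes the divergence visible numerically as well.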
{ "language": "en", "url": "https://math.stackexchange.com/questions/2550977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
If $a+b\mid a^4+b^4$ then $a+b\mid a^2+b^2$; $a,b,$ are positive integers. Is it true: $a+b\mid a^4+b^4$ then $a+b\mid a^2+b^2$? Somehow I can't find counterexample nor to prove it. I try to write it $a=gx$ and $b=gy$ where $g=\gcd(a,b)$ but didn't help. It seems that there is no $a\ne b$ such that $a+b\mid a^4+b^4$. Of course, if we prove this stronger statement we are done. Any idea?
We have that $$a^4+b^4=(a+b)(a^3-a^2b+ab^2-b^3)+2b^4$$ so $a+b$ is a factor of $a^4+b^4$ if and only if it is a factor of $2b^4$, and (by symmetry) of $2a^4$ If the highest common factor of $a$ and $b$ is $y$ so that $a=py$ and $b=qy$ we find that $(p+q)y$ is a factor of $2q^4y^4$. Now $p+q$ can have no factor in common with $q$ by construction, so, cancelling one factor of $y$, $p+q\mid 2y^3$. We find an easy solution by setting $p+q=y$. You might want to think about constructing a counterexample before going further, by tightening things up a bit. If we want to be tight against a constraint we might try $y^3=\frac {p+q}2$. With $y=2$ this would give $p+q=16$. Then $a=2p$ and $b=2q$, so $a+b=2(p+q)=32$ and $a^4+b^4=16 (p^4+q^4)$, which is divisible by $32$ because $p$ and $q$ are both odd, making $p^4+q^4$ even. But now put $p=1, q=15$ with $a=2, b=30$: then $a+b=32$ divides $a^4+b^4=810016=32\cdot 25313$, while $a^2+b^2=4+900=904$ is not divisible by $32$.
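The divisibility pattern is easy to brute-force; this independent check (not part of the answer) scans small pairs where the hypothesis holds but the conclusion fails:

```python
def breaks_claim(a, b):
    # a+b divides a^4+b^4 but a+b does NOT divide a^2+b^2
    s = a + b
    return (a**4 + b**4) % s == 0 and (a**2 + b**2) % s != 0

hits = [(a, b) for a in range(1, 40) for b in range(a + 1, 40) if breaks_claim(a, b)]
# smallest such pair: 16 divides 2^4 + 14^4 = 38432, but not 2^2 + 14^2 = 200
assert hits[0] == (2, 14)
```

Such a scan is a cheap way to test conjectured divisibility statements before hunting for a structural explanation.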
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Find vectors and a scalar that disprove that $A=\{p(x)\in V:p(0)-p(2)=2\}$ is a subspace of $V=R_3[x]$ So I am supposed to disprove that $A=\{p(x)\in V:p(0)-p(2)=2\}$ is a subspace of $V=R_3[x]$ by finding $v_1$, $v_2$, and $\alpha$ such that $v_1+\alpha v_2\notin A$. I am honestly kind of stumped here. So far, I have tried to play around with exponents and have found that $x^2-3x+2$ could work as $p$, however I don't think that I am even in the correct direction. I don't think that I correctly understand the problem and can't seem to find any similar examples.
$0 \notin A$. So, just take any $v_1 \in A$ and $\alpha=-1$ and $v_2=v_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Digit sum of a huge number I am helping a high school student to solve some challenging problems. I am stuck on the following problem: let $S(n)$ be a digit sum of the integer $n$, e.g. $S(1234)=10$. Find $S(S(S(S(2018^{2018}))))$. I spent some time on this problem but was not able to solve it. I found out, that the result should be less than $10$, since the number of digits in a number $n$ is $[\log_{10}n]+1$, where $[]$ denotes rounding towards zero. But this is not really helpful, since I need the precise number as an answer, not just an estimate. I've tried to find some iteration relatioship, e.g. what is $S(n\cdot m)$, but it didn't help either. Hints are appreciated. Thanks, Mikhail
Remember $n \equiv S(n) \mod 9$. ($a \equiv b \mod n$ means $a$ and $b$ have the same remainder when divided by $n$.) [See postscript] So $S(S(S(S(2018^{2018})))) \equiv 2018^{2018}\mod 9$. So what is the remainder of $2018^{2018}$ when divided by $9$? As $2018 \equiv 2 \mod 9$ we have $2018^{2018} \equiv 2^{2018} \mod 9$. $2^3 = 8 \equiv -1 \mod 9$ so $2^{2018} = 2^{3*672 + 2} \equiv (2^3)^{672}*2^2 \equiv (-1)^{672}*4\equiv 4 \mod 9$. So $2018^{2018}$ will have remainder $4$ when divided by $9$. And therefore $S(S(S(S(2018^{2018}))))$ will also have remainder $4$ when divided by $9$. Now you know that $0 < S(S(S(S(2018^{2018})))) < 10$. So what single digit has remainder $4$ when divided by $9$? There is only one: $4$ itself. ==== post script ====== If $n = \sum 10^i a_i$ then $S(n) = \sum a_i$. $n - S(n) = \sum 10^i a_i - \sum a_i = \sum (10^i- 1)a_i$. $10^i - 1 = 99\dots9$ so $n - S(n)$ has remainder $0$ when divided by $9$. So $n$ and $S(n)$ must have the same remainder when divided by $9$.
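Python's arbitrary-precision integers make a direct verification easy; this check is independent of the argument above:

```python
def S(n):
    # digit sum of a nonnegative integer
    return sum(int(d) for d in str(n))

N = 2018 ** 2018
assert N % 9 == 4                 # 2018^2018 is congruent to 4 mod 9, as derived
assert S(S(S(S(N)))) == 4         # the nested digit sum really is 4
```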
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is $\aleph_0!$? What is $\aleph_0!$ ? I know that in the original definition the factorial is defined for natural numbers but, what if we extend this concept to cardinal numbers? This concept has been extended to the real numbers by the $\Gamma$ function but I have never seen this kind of extension before. This is a proof that I made by myself and may be incorrect, but it is still interesting for me. $\aleph_0\times(\aleph_0 - 1)\times(\aleph_0 - 2)\times ...$ We can rewrite this as $$\aleph_0! = \prod_{i = 1}^{\infty}(\aleph_0 - i) = \prod_{i = 1}^{\infty}(\aleph_0)$$ But, is this equal to: $$\aleph_0^{\aleph_0}$$ Also, if we assume the continuum hypothesis $2^{\aleph_0} = \mathfrak{c} \leq \aleph_0^{\aleph_0} \leq \mathfrak{c}$ Hence, $\aleph_0! = \mathfrak{c}$
We have $k!=1\cdot2\cdots k$, i.e., it is the product of all numbers of size at most $k$. Therefore $$\aleph_0! = 1\cdot 2 \cdots \aleph_0 = \prod_{k\le\aleph_0} k$$ seems like a possible generalization. (Although probably taking the number of bijections - as suggested in other answers - is a more natural generalization.) I will add that the above product has a concrete meaning - it can be reformulated in this way: For each $k\le\aleph_0$ we have a set $A_k$ such that $|A_k|=k$. And we are interested in the cardinality of the Cartesian product of these sets $\prod_{k\le\aleph_0} A_k$. It is not difficult to see that if we use this definition, then $$2^{\aleph_0} \le \aleph_0! \le \aleph_0^{\aleph_0}.$$ Together with $\aleph_0^{\aleph_0}=2^{\aleph_0}$, we get $\aleph_0! = 2^{\aleph_0}$. If we wanted to do a similar generalization for higher cardinalities, we could define $$\kappa! = \prod_{\alpha\le\kappa} |\alpha|$$ where the product is taken over all ordinals $\le\kappa$. Then exactly the same argument shows that $2^\kappa \le \kappa! \le \kappa^\kappa$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why is the boundary of an oriented manifold with its (opposite oriented) copy the empty set? Excuse the very basic question: I'm following Milnor's Lectures on Characteristic Classes. He defines a relation on the collection of compact, smooth, oriented manifolds of dimension $n$ by letting $M_1\sim M_2$ if $M_1\sqcup -M_2$ is a boundary (If $M$ is a manifold, then $-M$ is the same manifold with the opposite orientation). He uses this to define cobordism classes and further goes on to define a graded ring structure on these classes. He says clearly the relation is reflexive and symmetric. I see symmetric, but I'm not entirely sure why this relation is reflexive. Since we're supposed to have a group structure, and the $0$ element is the empty set, I'm assuming that the boundary of $M\sqcup -M$ is the empty set. Why is it that $\partial (M\sqcup -M)=\emptyset $ ? Do the two manifolds somehow cancel each other out since they have opposite orientations? How can I picture this?
You're a bit confused about what this definition means. First of all, the $n$-manifolds we're considering here all do not have boundary. So $M$ should be an $n$-manifold without boundary, and you shouldn't be thinking about its boundary at all. The relation $M_1\sim M_2$ means that $M_1\sqcup -M_2$ is the boundary of some $(n+1)$-manifold with boundary. So to prove $M\sim -M$, you want to prove that $M\sqcup -M$ is the boundary of some $(n+1)$-manifold. To prove this, you can just take the $(n+1)$-manifold to be $N=M\times[0,1]$. The boundary of $N$ is $M\times \{0\}\cup M\times\{1\}$, where $M\times\{1\}$ has the given orientation of $M$ and $M\times\{0\}$ has the opposite orientation, so $\partial N\cong M\sqcup -M$ as an oriented manifold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Use Green’s Theorem to evaluate the line integral along the given positively oriented curve. (which approach to take) Use Green’s Theorem to evaluate the line integral along the given positively oriented curve. $$\int_c y^3 \, dx - x^3 \, dy, C \text{ is the circle } x^2+y^2=4$$ Ok, so I'm not sure how to approach this problem. I can easily find $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}$, but I'm not sure which approach to take after that. Am I supposed to parameterize by making $y = 2\sin t$ and $x = 2\cos t$ ? and then do I replace dxdy with $-4\sin t\cos t\, dt$? and when I plug those in does it become a single integral from $0$ to $2\pi$? And how would I integrate that? It seems like this would leave me with an over-complicated integral. Would it be faster to switch it to polar coordinates? if so how? I'm not really sure if this problem is implying I use a certain method and I'm not sure if I'm doing that method correctly. I looked in the book for similar examples but I think they vary slightly so I'm not sure which approach I'm supposed to take.
Green's theorem tells us that $$\int_C y^3 \, dx- x^3\, dy = \iint_{x^2+y^2 \leq 4} \left(\frac{\partial (-x^3)}{\partial x}\right)- \left(\frac{\partial y^3}{\partial y}\right) \,dx \, dy$$ While it is not a must to use polar coordinates, I strongly encourage you to do so, as the form becomes elegant once you use them. Note that $\,dx\,dy =r \, dr \, d\theta $ when you change to polar coordinates.
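Before doing the exercise by hand, both sides of the identity can be checked numerically; this sketch (my addition, using plain midpoint Riemann sums) confirms they agree:

```python
import math

# left side: parametrize C by x = 2cos t, y = 2sin t, t in [0, 2*pi]
M = 2000
h = 2 * math.pi / M
line = 0.0
for i in range(M):
    t = (i + 0.5) * h
    x, y = 2 * math.cos(t), 2 * math.sin(t)
    dx, dy = -2 * math.sin(t), 2 * math.cos(t)   # derivatives x'(t), y'(t)
    line += (y**3 * dx - x**3 * dy) * h

# right side: integrand -3x^2 - 3y^2 = -3r^2, with area element r dr dtheta,
# so the double integral is 2*pi times the radial integral of -3r^3 over [0, 2]
R = 100000
hr = 2 / R
double = 2 * math.pi * sum(-3 * ((i + 0.5) * hr) ** 3 * hr for i in range(R))

assert abs(line - double) < 1e-4
```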
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to determine the range of the following function $\frac{x}{1+ |x|}$? How to determine the range of the following function $\frac{x}{1+ |x|}$? when I calculated it, it was $\mathbb{R}$, but my professor said that the range is ]-1,1[, could anyone explain for me why? thanks!
Let $f(x)=\frac{x}{1+ |x|}$. Then: $|f(x)|=\frac{|x|}{1+ |x|} \le 1$, hence $f( \mathbb R) \subseteq [-1,1]$. Furthermore: $\lim_{x \to \infty}f(x)=1$ and $\lim_{x \to -\infty}f(x)=-1$. Show that $f(x) \ne 1$ and $f(x) \ne -1$ for all $x$. Are you now in a position to derive $f( \mathbb R) =]-1,1[$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Compute the sum fast. How can I compute the following sum in the fastest way possible? $y = 1 + x + ... + {x}^{{n}^{3}}=\sum_{i = 0}^{n}{x}^{{i}^{3}}$ I wrote that $n^3 - (n-1)^3 = 3n^2-3n+1$, but so far it does not help a lot.
This is a modified version of my answer here, which was the same question only with square exponents rather than cubed ones. You have already calculated $x^{(n+1)^3} = x^{n^3}\cdot x^{3(n+1)^2-3(n+1)+1}$ (I shifted the indices by one to conform with my other answer). We also have $x^{3(n+1)^2-3(n+1)+1}=x^{3n^2-3n+1}\cdot x^{6n}$. You could then use the following recursion: * *Get $x^{n^3}, x^{3n^2 - 3n +1}$ and $x^{6n-6}$ from previous iteration *Calculate $x^{6n} = x^{6n-6}\cdot x^6$ *Calculate $x^{3(n+1)^2 - 3(n+1) + 1} = x^{3n^2 - 3n +1}\cdot x^{6n}$ *Calculate $x^{(n+1)^3} = x^{n^3}\cdot x^{3(n+1)^2 - 3(n+1) + 1}$ *Add $x^{(n+1)^3}$ to your summation variable *Send $x^{(n+1)^3}, x^{3(n+1)^2 - 3(n+1) + 1}$ and $x^{6n}$ to the next iteration
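Spelling the recursion out in code (an illustrative sketch; the names are made up): each loop step costs three multiplications, avoiding any repeated exponentiation:

```python
def cube_power_sum(x, n):
    """Compute sum_{i=0}^{n} x**(i**3) with O(1) multiplications per term."""
    total = 1            # i = 0 term: x^0
    x3 = x               # holds x^(i^3)             (i = 1: x^1)
    d2 = x               # holds x^(3i^2 - 3i + 1)   (i = 1: x^1)
    d1 = 1               # holds x^(6(i-1))          (i = 1: x^0)
    x6 = x**6            # constant multiplier x^6
    for i in range(1, n + 1):
        total += x3
        d1 *= x6         # advance to x^(6i)
        d2 *= d1         # x^(3(i+1)^2 - 3(i+1) + 1) = x^(3i^2 - 3i + 1) * x^(6i)
        x3 *= d2         # x^((i+1)^3) = x^(i^3) * x^(3(i+1)^2 - 3(i+1) + 1)
    return total
```

The same running-powers idea works verbatim for modular arithmetic by reducing each product mod $m$.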
{ "language": "en", "url": "https://math.stackexchange.com/questions/2551936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Circle: finding locus of the chord's mid point Find the locus of the middle points of chords of the circle $$x^2+y^2 =a^2$$ which subtends a right angle at the point $(c, 0)$
Let $\Gamma$ be a circle centered at $O$ with radius $R$ and $P$ a point inside $\Gamma$. Let $ABCD$ a quadrilateral inscribed in $\Gamma$ with orthogonal diagonals $AC,BD$ meeting at $P$. By Thales' theorem the midpoints of $AB,BC,CD,DA$ are the vertices of a rectangle with sides parallel to $AC,BD$. Since $O$ is the circumcenter of $ABCD$, the lines joining $O$ with the previous midpoints are orthogonal to the sides of $ABCD$. By angle chasing those midpoints lie on a circle centered at the midpoint of $OP$, whose diameter equals $\frac{1}{2}\sqrt{BD^2+AC^2}$. Let $u,v$ be the distances of $O$ from $AC,BD$. By the Pythagorean theorem we have $u^2+v^2=OP^2$ and $\left(\frac{1}{2}AC\right)^2+u^2 = OA^2 = R^2$, hence $$\tfrac{1}{2}\sqrt{AC^2+BD^2}=\sqrt{2R^2-OP^2}$$ is constant and the midpoints of $AB,BC,CD,DA$ lie on a fixed circle centered at the midpoint of $OP$. This essentially is the Proposition $11$ from Archimedes' Book of Lemmas, which is used to prove the pizza theorem, too (if you are confident with Italian, you may have a look at page 21 of these notes, too).
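For the record, the specific locus asked for can be extracted directly (a sketch, my addition, not part of the original answer): if $M=(x,y)$ is the midpoint of a chord $AB$ of $x^2+y^2=a^2$ subtending a right angle at $P=(c,0)$, then $P$ lies on the circle with diameter $AB$, so $|MP|=|MA|$, while $|MA|^2=a^2-|OM|^2$ since $OM\perp AB$. Hence

```latex
|MP|^2 = a^2 - |OM|^2
\;\Longrightarrow\;
(x-c)^2 + y^2 = a^2 - (x^2 + y^2)
\;\Longrightarrow\;
x^2 + y^2 - cx + \frac{c^2 - a^2}{2} = 0,
```

a circle centered at $\left(\frac c2,0\right)$, the midpoint of $OP$, with radius $\frac12\sqrt{2a^2-c^2}$, in agreement with the lemma above.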
{ "language": "en", "url": "https://math.stackexchange.com/questions/2552047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Show that $\left|\int_{-n}^{n}e^{iy^2}dy\right|\le 2$ for $n\ge 5.$ Question is to show that $$\left|\int_{-n}^{n}e^{iy^2}dy\right|\le 2$$ when $n\geq5$, $x \in \mathbb R $ and $i$ is an imaginary unit. My effort: $$|\int_{-n}^{n}e^{iy^2}dy|\leq \int_{-n}^{n}|e^{iy^2}|dy=\int_{-n}^{n}|\cos(y^2)+i\sin(y^2)|dy$$ $$ \leq \int_{-n}^{n}|\cos(y^2)|dy+\int_{-n}^{n}|i||\sin(y^2)|dy$$ $$\leq \int_{-n}^{n}|\cos(y^2)|dy+\int_{-n}^{n}|\sin(y^2)|dy$$ It is also known that $|\cos(x)|,|\sin(x)|\leq1$ but its leading nowhere since integral then evalutes to $0$.. Any tips?
A Contour Integral Estimate This estimate is valid for all $n\gt0$, and shows that the bound is $2$ for $n\ge4.5$ (since $\sqrt\pi\doteq1.77245385$). The contours in the complex plane are straight lines, and are parametrized linearly. $$ \begin{align} \left|\,\int_{-n}^ne^{iy^2}\,\mathrm{d}y\,\right| &\le\left|\,\int_{-n(1+i)}^{n(1+i)}e^{iy^2}\,\mathrm{d}y\,\right| +\left|\,\int_n^{n(1+i)}e^{iy^2}\,\mathrm{d}y\,\right| +\left|\,\int_{-n(1+i)}^{-n}e^{iy^2}\,\mathrm{d}y\,\right|\\ &\le\sqrt2\int_{-n}^ne^{-2y^2}\,\mathrm{d}y +2\int_0^1e^{-2n^2t}n\,\mathrm{d}t\\[3pt] &\le\sqrt\pi+\frac1n \end{align} $$ This is pretty good Here is the plot of the actual value of the integral vs the estimate above vs $2$: Note that the first maximum after $n=4.5$ is the first one that is less than $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2552190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
A matrix raised to a high power ($87$) So, I have this matrix: $$\pmatrix {0&0&0&-1\\1&0&0&0\\0&1&0&0\\0&0&1&0}^{87}$$ My teacher never discussed eigenvalues. So, I do not know what they are and there must be another way to do this (without multiplying the matrix $87$ times). Thanks for your help.
Note that your matrix acts like the permutation $\sigma=(2341)$, i.e. $1 \mapsto 2$, $2 \mapsto 3$, $3 \mapsto 4$, $4 \mapsto 1$ such that each time it spits a minus sign to the fourth, third, second and first column respectively. The reason for this is that you can think of a matrix in the following way: * *The first column of a matrix is where $e_1$ goes. *The second column of a matrix is where $e_2$ goes. And so on, so forth. Looking at the problem in this way helps us directly compute any power of $A$ easily. In fact, we have the following closed form for $A^n$: $$A^n=\pmatrix{(-1)^{\lfloor \frac{n+8}{4} \rfloor} e_{\sigma^n(1)} && (-1)^{\lfloor \frac{n+9}{4} \rfloor}e_{\sigma^n(2)} && (-1)^{\lfloor \frac{n+10}{4} \rfloor}e_{\sigma^n(3)} && (-1)^{\lfloor \frac{n+3}{4} \rfloor}e_{\sigma^n(4)}} $$ Where $e_n$ is the $n$-th standard basis vector, $\sigma=(2341)$ and $\lfloor \cdot \rfloor$ is the floor function. In particular, for $n=87$, we get: $$A^{87}=\pmatrix{(-1)^{\lfloor \frac{95}{4} \rfloor} e_{\sigma^{87}(1)} && (-1)^{\lfloor \frac{96}{4} \rfloor}e_{\sigma^{87}(2)} && (-1)^{\lfloor \frac{97}{4} \rfloor}e_{\sigma^{87}(3)} && (-1)^{\lfloor \frac{90}{4} \rfloor}e_{\sigma^{87}(4)}} $$ Now, since $\sigma^4=e$, we have $\sigma^{87}=\sigma^{84}\sigma^{3}=(\sigma^{4})^{21}\sigma^{3}=\sigma^3$ We only need to calculate: $\sigma^3(1)=4, \sigma^3(2)=1, \sigma^3(3)=2, \sigma^3(4)=3$ Therefore, our answer in the closed form is: $$A^{87}=\pmatrix{- e_{4} && +e_{1} && +e_{2} && +e_{3}} $$ And we can expand it to see that our final answer is: $$A^{87}=\pmatrix{ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ -1&0&0&0} $$
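A brute-force check in plain Python (independent of the sign formula above) confirms the final matrix:

```python
def matmul(A, B):
    # product of two square matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[0, 0, 0, -1],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

P = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
for _ in range(87):
    P = matmul(P, A)

assert P == [[0, 1, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 1],
             [-1, 0, 0, 0]]
```

Of course the whole point of the argument is to avoid the 87 multiplications; the brute force is only a verification.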
{ "language": "en", "url": "https://math.stackexchange.com/questions/2552463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Selecting Reference Orbit for Fractal Rendering with Perturbation Theory I have been trying to implement the perturbation algorithm for rendering fractals, as per this article: http://www.superfractalthing.co.nf/sft_maths.pdf Let's say we simply focus on the first of the method, part up to equation 1, neglecting the series approximation part at the end to speed up calculations. My question is regarding the reference orbit. Since you record the orbit of a reference point, and calculate the other points based on that reference, doesn't that imply that the reference need to have an iteration count (before its norm reaches 2) larger than all the other calculated points/pixels in the image? For example, how would you go about calculating the points in the image needing 10000 iterations before diverging when your reference diverges after 1000? If you indeed need the reference to diverge last, how would you choose such a point? Thanks.
Not only do you need a reference whose iteration count is larger than all the other pixels, sometimes there are "glitches" when pixel dynamics differ significantly from the dynamics of the reference. These glitches can be detected in various ways: the most common heuristic is one developed by Pauldelbrot, though there is another using interval arithmetic developed by knighty. Choosing good references is still a topic of active research by the community at fractalforums.org. Some strategies include trying periodic points (the nuclei of the minibrot islands deep in the set) and preperiodic points (the Misiurewicz points at the centers of spirals), both of which can be found by Newton's method (finding their (pre)periods is a bit harder, but not impossible). Higher-period "structural" minibrot nuclei seem to be the most favoured as they are relatively easy to find while also emitting fewer glitched pixels than lower period nuclei. A simple approach can still yield accurate results, albeit in less than optimal time - simply take the first reference to be the center of the image, and correct any glitches that result (including those resulting from the reference escaping too early) by adding more references within the glitches, recalculating only those pixels that need it.
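The perturbation step itself is pure algebra for the quadratic map: writing the pixel's orbit as $z_n=X_n+\Delta_n$ with parameter $c=C+\delta$ gives $\Delta_{n+1}=2X_n\Delta_n+\Delta_n^2+\delta$, which is the recurrence the linked article builds up to. A toy double-precision check of the bookkeeping (both orbits in ordinary floats, and an arbitrary bounded reference, so this illustrates the algebra rather than the precision advantage of the real algorithm):

```python
C = complex(-0.5, 0.1)        # reference parameter (arbitrary point inside the main cardioid)
delta = complex(1e-6, -2e-6)  # pixel offset from the reference
c = C + delta

X = D = z = 0j                # reference orbit, perturbation, directly iterated pixel orbit
for _ in range(50):
    D = 2 * X * D + D * D + delta   # perturbation recurrence (must use the old X)
    X = X * X + C                   # reference orbit
    z = z * z + c                   # full iteration, for comparison
    assert abs((X + D) - z) < 1e-9  # X_n + Delta_n tracks the direct orbit
```

In the actual algorithm $X_n$ is computed once in high precision and $\Delta_n$ stays small enough for hardware floats, which is where the speedup comes from.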
{ "language": "en", "url": "https://math.stackexchange.com/questions/2552605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\sin x$ lies between $x-x^3/6$ and $x \;$ $\forall x \in R$ Show that $\sin x$ lies between $x-x^3/6$ and $x \;$ $\forall x \in R$ I am getting: $$\sin(x) = x - \frac{x^3}{3!} + R_4(x)$$ where $R_4(x) = \frac{\cos(c)x^5}{5!}$ for some $c$ between $0$ and $x$ I want to prove $R_4(x)\geq 0$ to arrive at the result $x-x^3/3 \leq \sin(x)$. for:$$0 \leq x \leq \pi/2 \Rightarrow 0<c<\pi/2 \Rightarrow R_4(x)\geq 0$$ but for: $$-\pi/2 \leq x < 0 \Rightarrow -\pi/2 < c < 0 \Rightarrow R_4(x) < 0$$ How can I proceed with this ? There are many cases that I need to check
The aim is to show that: $$ x - \frac{x^3}{3!} \leq \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} -\frac{x^7}{7!} + ...\leq x $$ for $x\ge 0$ (for $x<0$ the inequalities reverse, since all three functions are odd; that is what "lies between" amounts to there). For $\sin x\leq x$ it suffices to use the MVT: $$\cos c =\frac{\sin x- \sin 0}{x-0}=\frac{\sin x}{x}\implies -1\leq \frac{\sin x}{x}\leq 1 \implies \sin x\leq x \quad (x\geq 0)$$ For $x - \frac{x^3}{3!}\leq \sin(x) $ see here Proof for $\sin(x) > x - \frac{x^3}{3!}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2552775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find all complex numbers $z$ such that ($z^6 - i) \in \mathbb R$ Find all complex numbers $z$ such that ($z^6 - i) \in \mathbb R$ My solution: Let's set $x^6 = (z - i)^6$. Then $$x^6 = |x| e^{6\theta i} \\ x^6 \in \mathbb R \iff 6\theta = k\pi \land k\in \mathbb Z$$ $$\theta = \frac{k\pi}{6}$$ Therefore $z - i = |z - i|(\cos(\frac{k\pi}{6}) + i\sin(\frac{k\pi}{6})) \\$ $$z = |z-i|\left(\cos\left(\frac{k\pi}{6}\right)+i\sin\left(\frac{k\pi}{6}\right)\right)+i$$ Now, imagine that I have plotted the solution in terms of $x$. If I wanted to have a plot in terms of $z$, would it be enough to simply shift all of my solutions one imaginary unit upwards, to satisfy the $+i$ term?
Since $z^6-i\in\mathbb R$ means $\operatorname{Im}(z^6)=1$, write $z^6=\cot\theta+i$ with $\theta\in(0,\pi)$ (as $\cot$ ranges over all of $\mathbb R$ there). Then $$z^6 = \cot \theta + i = \csc \theta\,(\cos \theta + i\sin\theta)$$ and, since $\csc\theta>0$ on $(0,\pi)$, $$z = (\csc \theta)^{\frac 16}\, e^{i\left(\frac {\theta}{6}+\frac {k\pi}{3}\right)},\qquad k=0,1,\dots,5.$$
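A spot check with `cmath` (the sample values of $\theta$ are arbitrary; all six roots $k$ are tried):

```python
import cmath, math

for theta in (0.3, 1.0, 2.5):
    for k in range(6):
        r = (1 / math.sin(theta)) ** (1 / 6)            # (csc theta)^{1/6}
        z = r * cmath.exp(1j * (theta / 6 + k * math.pi / 3))
        w = z**6 - 1j
        assert abs(w.imag) < 1e-9                       # z^6 - i is (numerically) real
```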
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Measurable set in real numbers with arbitrary lebesgue density at some point I'm not sure if this is easy or not, but i can't see the solution (or that it is wrong!) Suppose that $\alpha \in (0,1)$ is given, Can you find a Lebesgue measurable set in $\mathbb{R}$, such that at point $0$, it has Lebesgue density $\alpha$?
Here’s full justification for the answer provided by Kavi: Suppose first that $t=\frac 1 n$; then $$|E\cap(-t,t)|=2\sum_{k\ge n} \alpha\left(\frac 1 k-\frac{1}{k+1}\right)=2\alpha\cdot\frac 1 n=2\alpha t,$$ hence $\frac{|E\cap (-t,t)|}{2t}=\alpha.$ Now suppose we’re given an arbitrary $t\in (\frac{1}{n+1},\frac 1 n)$. Then, by monotonicity of $t\mapsto|E\cap(-t,t)|$, $$\frac{2\cdot (\frac{1}{n+1})\alpha}{2\cdot \frac{1}{n}}\leq \frac{|E\cap (-t,t)|}{2t} \leq \frac{2\cdot (\frac{1}{n})\alpha}{2\cdot \frac{1}{n+1}}. $$ Both bounds tend to $\alpha$ as $n\to\infty$, and we’re done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove diagonal entries of positive definite matrices cannot be smaller than the eigenvalues The aim is to prove that the diagonal entries of a positive definite matrix cannot be smaller than any of the eigenvalues. I know a positive definite matrix must have eigenvalues that are > 0, and that just because a matrix has all positive values, does not make it a positive definite matrix. I've also looked at the wikipedia for Positive-definite matrices and understand the definition given there, but am having a hard time convincing myself that the diagonal entries have to be greater than the eigenvalues. The starting point of the proof should be to consider $A−a_{ii}I$, where $A=A^T$, and A is the positive definite matrix. Can anyone help push me in the right direction to complete the proof?
Proof by contradiction: We know that, if $\lambda$ is an eigenvalue of $A$, then $\lambda - p$ is an eigenvalue of $A-pI$. So if $\lambda$ is an eigenvalue of $A$, then $\lambda - a_{ii}$ is an eigenvalue of $A-a_{ii}I$. Now, if $a_{ii}$ is smaller than all the eigenvalues of $A$, then each $\lambda - a_{ii}$ is positive and that makes $A-a_{ii}I$ a positive definite matrix. But $A-a_{ii}I$ has $0$ as its diagonal entry in row $i$, and a positive definite matrix $M$ must have $e_i^T M e_i = m_{ii} > 0$. So $A-a_{ii}I$ cannot be positive definite, and hence $a_{ii}$ cannot be smaller than all the eigenvalues of $A$.
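A small numeric illustration for a $2\times2$ symmetric positive definite matrix, with eigenvalues from the quadratic formula (the particular entries are an arbitrary choice):

```python
import math

a, b, c = 2.0, 1.0, 3.0          # A = [[a, b], [b, c]], symmetric
# eigenvalues of [[a, b], [b, c]] are (a+c)/2 +- sqrt(((a-c)/2)^2 + b^2)
m = (a + c) / 2
d = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
lam_min, lam_max = m - d, m + d

assert lam_min > 0               # positive definite
assert min(a, c) >= lam_min      # each diagonal entry is at least the smallest eigenvalue
assert max(a, c) <= lam_max      # and at most the largest, by the same argument
```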
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Hello folks, how should I proceed to solve this differential equation through power series? The problem is: $$ y'-y = x \\ y(0) = 0 $$ I know I have use the general form and its derivatives $$ \sum a_n(x-x_0)^n $$ My problem is with the alone $x$ variable on the right side. Could someone give me any tips? Thanks in advance!
$$y'-y = x $$ Power series may seem cool, but you can easily solve this one by another route, since it is a standard first-order linear equation $$y'+P(x)y=Q(x)$$ which an integrating factor reduces to $$ye^{\int Pdx}=\int Qe^{\int Pdx}\,dx + C$$ Here $P(x)=-1$ and $Q(x)=x$.
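For the power-series route actually asked about: substituting $y=\sum a_nx^n$ into $y'-y=x$ and matching coefficients gives $(n+1)a_{n+1}=a_n$ for $n\ne1$ and $2a_2=a_1+1$; with $a_0=y(0)=0$ this forces $a_1=0$ and $a_n=1/n!$ for $n\ge2$, i.e. $y=e^x-x-1$. A quick check of that recurrence (an illustrative sketch):

```python
import math
from fractions import Fraction

# coefficient recurrence from y' - y = x with y(0) = 0:
# (n+1) a_{n+1} = a_n + (1 if n == 1 else 0)
a = [Fraction(0)]                     # a_0 = y(0) = 0
for n in range(30):
    a.append((a[n] + (1 if n == 1 else 0)) / Fraction(n + 1))

assert a[1] == 0 and a[2] == Fraction(1, 2)
assert all(a[n] == Fraction(1, math.factorial(n)) for n in range(2, 31))

# the truncated series agrees with e^x - x - 1
x = 0.5
series = sum(float(a[n]) * x**n for n in range(31))
assert abs(series - (math.exp(x) - x - 1)) < 1e-12
```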
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }