Show $\int_0^\infty xe^{-ax^2}dx=\frac{1}{2a}$ I tried integrating by parts but I always arrive at an expression other than $\frac{1}{2a}$ which contains $\sqrt{\frac{\pi}{a}}$ from the Gaussian integral $\int_0^\infty e^{-ax^2}dx=\frac12 \sqrt{\frac{\pi}{a}}$. Is there some kind of trick to evaluating this integral?
Substitute $x^2=y$. Then $$\int_0^\infty xe^{-ax^2}\,dx=\frac12\int_0^\infty e^{-ay}\,dy.$$
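Spelling out the last step (and assuming $a>0$, which is needed for convergence): $$\frac12\int_0^\infty e^{-ay}\,dy=\frac12\left[-\frac{e^{-ay}}{a}\right]_0^\infty=\frac{1}{2a}.$$ No Gaussian integral is needed, which is why integration by parts kept producing the stray $\sqrt{\pi/a}$ term.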
{ "language": "en", "url": "https://math.stackexchange.com/questions/2264414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
What is the right definition of a cycle? I've seen this in all introductory courses on graphs, but every time it bugs me: the definition of a cycle is usually wrong. In the last course I have seen, they define paths in the obvious way, adding edges in between vertices. Then they say "a cycle is a non-trivial path whose first and last vertices are the same, but no other vertex is repeated": but obviously this is wrong, since if there's an edge $\{a,b\}$, then the sequence $(a, \{a,b\}, b,\{a,b\}, a)$ is a non-trivial path whose first and last vertices are the same, but no other vertex is repeated. Now what's the proper definition of a cycle? The only way I can see this definition being correct is if "non-trivial" excludes the example I gave. But then shouldn't the course mention it, as usually "trivial" means "with one element" or something along those lines?
A cycle is either:

* a simple graph (= no double edges, no loops) with 1 component and all vertices having vertex degree 2,
* a graph with 2 vertices and two edges between them, or
* a graph with 1 vertex and a loop.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2264517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A matrix is a product of nilpotent matrices iff it's not invertible I just completed a homework problem which proves the following result: a matrix (with coefficients in some field) is a product of nilpotent matrices iff it's not invertible. The proof was broken into several parts and was quite involved. I'm wondering if there's a very simple way to demonstrate this result (just the implication that a non-invertible matrix is a product of nilpotent matrices, as the other implication is trivial).
Following is a proof for fields that allow a Jordan-type canonical form. Write $A=P^{-1}TP$ with $T$ in Jordan form (but this can be relaxed somewhat). $A$ is not invertible if, and only if, at least one of the diagonal elements of $T$ is zero. Taking it to be the last eigenvalue for simplicity, note the following example decomposition into two nilpotent matrices: $$\pmatrix{1&2&3&4&0\\0&5&6&7&0\\0&0&8&9&0\\0&0&0&a&0\\0&0&0&0&0}=\pmatrix{0&1&2&3&4\\0&0&5&6&7\\0&0&0&8&9\\0&0&0&0&a\\0&0&0&0&0}\pmatrix{0&0&0&0&0\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0}$$ Thus $T=M_1M_2$ with $M_1,M_2$ nilpotent, so $A=(P^{-1}M_1P)(P^{-1}M_2P)=N_1N_2$ is a product of nilpotent matrices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2264627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Shortest distance from point to curve I could use some help solving the following problem. I have many more like this but I figured if I learn how to do one then I can figure out the rest on my own. Thanks in advance! A curve is described by the equation $y=\sqrt{16x^2+5x+16}$ on a Cartesian plane. What is the shortest distance between the point $(2,0)$ and this curve?
Since distance is positive and the square root function is increasing, it suffices to find the smallest value the squared distance between $(x,y)$ on the curve and the point $(2,0)$ can take. This is $$ L(x) = (x-2)^2 + (y-0)^2 = (x-2)^2+y^2 = x^2-4x+4 + 16x^2+5x+16 = 17x^2+x+20. $$ A minimum can only occur if $L'(x)=0$. So $$ L'(x) = 34x+1, $$ so there is a turning point at $x=-1/34$. Moreover, the derivative is negative on the left and positive on the right, so the point is a minimum. Hence the minimum distance is $$ \sqrt{L(-1/34)} = \sqrt{\frac{1359}{68}} \approx 4.47. $$
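As a quick sanity check of the arithmetic, here is a brute-force sketch (not part of the method itself; the grid and its range are arbitrary choices):

```python
import math

# Sample the distance to (2, 0) densely and compare with the closed form.
def dist(x):
    return math.sqrt((x - 2)**2 + (16*x**2 + 5*x + 16))

xs = [i / 10000 for i in range(-20000, 20001)]   # grid over [-2, 2]
best = min(xs, key=dist)
print(best, dist(best))                  # ~ -0.0294, ~ 4.4705
print(-1/34, math.sqrt(1359/68))         # the closed-form answer above
```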
{ "language": "en", "url": "https://math.stackexchange.com/questions/2264702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
What is the distribution of 1/($X$+1)? I have a problem where I'm having trouble figuring out the distribution with the given condition. It is given that 1/($X$+1), where $X$ is an exponentially distributed random variable with parameter 1. Original Problem: What is the distribution of 1/($X$+1), where $X$ is an exponentially distributed random variable with parameter 1? With parameter 1, $X$ can be written as $e^{-x}$, and after plugging into the given function, I got $$\frac{1}{e^{-x}+1} = \frac{e^{x}}{e^{x}+1}$$ What type of distribution is this?
$X$ is not "written" as $e^{-x}$. The probability density function of $X$, called $f_X(x)$, is equal to $e^{-x}~\big[x\geqslant 0\big]$. The cumulative distribution function of $X$ is: $$\begin{align}F_X(x) ~&=~ \mathsf P(X\leqslant x) \\[1ex] &=~ (1-e^{-x}) ~\big[x\geqslant 0\big]\end{align}$$ Let $Y:=1/(1+X)$. The cumulative distribution function, and therefore probability density function, of $Y$ is $$\begin{align}F_Y(y) ~&=~ \mathsf P(1/(1+X)\leqslant y) \\[1ex] &=~ \mathsf P(X\geqslant (1/y)-1) \\[1ex] &=~ 1- F_X(\tfrac 1y-1)\\[0ex] &~~\vdots\\[3ex] f_Y(y) ~&=~ \dfrac{\mathrm d ~F_Y(y)}{\mathrm d~y\qquad}\\[0ex] &~~\vdots\end{align}$$
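Carrying the elided steps to the end (my own completion, not part of the answer) gives, for $0<y\le 1$, $F_Y(y)=e^{1-1/y}$ and hence $f_Y(y)=e^{1-1/y}/y^{2}$. A quick Monte Carlo sanity check of that CDF:

```python
import math
import random

# Simulate Y = 1/(1 + X) with X ~ Exp(1) and compare the empirical CDF
# against the completed formula F_Y(y) = exp(1 - 1/y) on (0, 1].
random.seed(0)
n = 200_000
ys = [1 / (1 + random.expovariate(1.0)) for _ in range(n)]
for y in (0.25, 0.5, 0.75):
    empirical = sum(v <= y for v in ys) / n
    print(y, empirical, math.exp(1 - 1/y))   # the two columns should be close
```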
{ "language": "en", "url": "https://math.stackexchange.com/questions/2264791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Finding the limit of a sequence Let $x_1$ and $y_1$ be positive numbers, and define $x_{n+1}=\frac12(x_n+y_n)$ and $\frac{2}{y_{n+1}}=\frac{1}{x_n}+\frac{1}{y_n}$ for all $n \ge 1$. Show that the sequences $x_n$ and $y_n$ are monotonic and converge to a common limit $l$ where $l^2 = x_1y_1$. I proved that these two sequences are monotone, bounded and hence convergent. But I can't find that unique limit. I've been trying it with a telescopic sum, but it doesn't help at all. Also, I failed to establish a good relation between $x_{n+1}$ and $x_n$ or $y_{n+1}$ and $y_n$. Please help me to find that limit.
Hint: $$\require{cancel} x_{n+1} y_{n+1} = \frac{x_n+y_n}{2}\cdot\frac{2}{\cfrac{1}{x_n}+\cfrac{1}{y_n}}=\frac{\cancel{x_n+y_n}}{\bcancel{2}}\cdot\frac{\bcancel{2} \,x_n y_n}{\cancel{x_n+y_n}} = x_n y_n $$
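To finish from the hint (a step the hint leaves implicit): the identity shows that $x_ny_n=x_1y_1$ for every $n$, and since both sequences converge to the common limit $l$, $$l^2=\lim_{n\to\infty}x_ny_n=x_1y_1.$$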
{ "language": "en", "url": "https://math.stackexchange.com/questions/2265070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find $\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} \ln\left(\frac{k}{n} + \epsilon_n\right)$ if $\epsilon_n>0$ and $\epsilon_n\to0$ Let $\epsilon_{n}$ be a sequence of positive reals with $\lim\limits_{n \rightarrow \infty} \epsilon_{n}=0$. Then find $$\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} \ln\left(\frac{k}{n} + \epsilon_n\right)$$ My doubt here is that , can I introduce the limit within the summation? Then it can be easily found by converting the sum into an integral.
Via the mean value theorem we can see that $$\log\left(\frac{k}{n} +\epsilon_{n} \right) = \log\frac{k}{n} +\frac{n}{k} \epsilon_{n} +o(\epsilon_{n})$$ and hence the sum in question is equal to $$\frac{1}{n}\sum_{k=1}^{n}\log\frac{k}{n}+\epsilon_{n}\sum_{k=1}^{n}\frac{1}{k}+o(\epsilon_{n}).$$ The first sum tends to $\int_{0}^{1}\log x\, dx=-1$, and the second term behaves like $\epsilon_{n} \log n$, so the desired limit requires more information on the limiting behavior of $\epsilon_{n}$. For example, if $\epsilon_{n} =1/n$ then the desired limit is $-1$, and if $\epsilon_{n} =1/\log n$ then the desired limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2265176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
I do not know how to approach this coin problem, where to start? When you throw a coin, you can either win $10$ dollars if heads or lose $5$ dollars if tails. After $100$ throws, you win $895$ dollars. Is this a fair coin? What probabilities can you associate with each side of the coin? Can you please explain how to approach such questions, how to pull out the needed formulas from the wordings? In this case, I honestly do not even remotely know where to start. I was thinking about the expected values. I think this is not a fair coin, but then I can't associate any probabilities to it.
For a fair coin, the expected number of heads after 100 throws is $N\cdot p=50$, and the standard deviation of this number is $\sqrt{Npq}=5$. By the stated rules, the win with $n$ out of $100$ heads is $10n-5(100-n)=15n-500$. We conclude that $n=\frac{895+500}{15}=93$. This differs from the mean by $\frac{93-50}{5}=8.6$ standard deviations - something so unlikely that people might call it virtually impossible.
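To put numbers on "virtually impossible", here is a small sketch (the exact binomial tail, plus the natural point estimate $\hat p = 93/100$ for the probability of heads, which is one answer to the question's second part):

```python
import math

n, k = 100, 93                      # 93 heads follows from 15n - 500 = 895
tail = sum(math.comb(n, j) for j in range(k, n + 1)) / 2**n
print(tail)                         # ~ 1.4e-20: "virtually impossible" if fair
print((895 + 500) / 15 / 100)       # p_hat = 0.93, a natural estimate for P(heads)
```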
{ "language": "en", "url": "https://math.stackexchange.com/questions/2265398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate $\lim\limits_{n\to\infty}\dfrac{\sum\limits_{k=0}^n\log\binom{n}{k}}{n^2}$ How to prove if the following limit exists? If it exists, what's the value? $$\lim\limits_{n\to\infty}\dfrac{\sum\limits_{k=0}^n\log\binom{n}{k}}{n^2}$$ Thanks!
This is a double application of the Stolz–Cesàro theorem: \begin{align} \lim_{n \to \infty}\frac{\sum_{k = 0}^{n}\ln\binom{n}{k}}{n^{2}} & = \lim_{n \to \infty} \frac{\sum_{k = 1}^{n + 1}\ln\binom{n + 1}{k} - \sum_{k = 1}^{n}\ln\binom{n}{k}}{\left(n + 1\right)^{2} - n^{2}} = \lim_{n \to \infty} \frac{\sum_{k = 1}^{n}\ln\left[\binom{n + 1}{k}\Big/\binom{n}{k}\right]}{2n + 1} \\[5mm] & = \frac{1}{2}\,\lim_{n \to \infty} \frac{\sum_{k = 1}^{n}\left[\ln\left(n + 1\right) - \ln\left(n - k + 1\right)\right]}{n + 1/2} \\[5mm] & = \frac{1}{2}\,\lim_{n \to \infty}\frac{n\ln\left(n + 1\right) - \ln\left(n!\right)}{n + 1/2} \\[5mm] & = \frac{1}{2}\,\lim_{n \to \infty}\left[\left(n + 1\right)\ln\left(n + 2\right) - n\ln\left(n + 1\right) - \ln\left(n + 1\right)\right] \\[5mm] & = \frac{1}{2}\,\lim_{n \to \infty}\left[\left(n + 1\right)\ln\frac{1 + 2/n}{1 + 1/n}\right] = \boxed{\frac{1}{2}} \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2265526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Have these disk/washer problems been set up correctly? Q1: R is the region between $f(x) = x^{2}$ and $g(x) = x + 2$. Find the volume $V_1$ of the region generated by revolving about the line $y = - 3$. Q2: R is the region between $f(x) = 3x$, and $g(x) = 3x$. Find the volume $V_2$ of the region generated by revolving about the line $y = - 2$. $$ \begin{split} V_1 &= \pi\int_{-1}^2 \left( (x^2 + 3)^2 - (x + 2 + 3)^2 \right) dx\\ V_2 &= \pi\int_0^3 \left( (x^2 + 2)^2 - (3x + 2)^2 \right) dx \end{split} $$ Thank you!
For $V_1$, you have the inner and outer radii reversed: on the interval $[-1,2]$, $g(x) \ge f(x)$, consequently your evaluation of $V_1$ will result in a negative number. For $V_2$, I cannot verify your integral, because you have stated $f(x) = g(x) = 3x$, meaning there is no region enclosed by these two functions. However, if you mean that $f(x) = x^2$ and $g(x) = 3x$ as your integrand implies, then again you have the same problem as $V_1$: your inner and outer radii are reversed, since $3x \ge x^2$ on $[0,3]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2265737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For any set $A$, there is some $x$ not in $A$. I am working through a set theory text and am having trouble giving a formal proof of the above statement. So far I have: Suppose the contrary. Then there exists a set $A$ such that $x\in A$ for all $x$. Thus $A$ is the set of all sets, which gives rise to Russell's paradox. Is there a more concise or direct proof of this fact using the axioms of $\sf ZFC$? I am concerned about asserting that $A$ is the "set of all sets."
A more direct (but very similar) proof can be conceived: even though Russell's paradox is often thought of as a limitative result, the argument it uses is precisely a proof of your result: Let $x$ be the subset $\{a \in A \ | \ a \notin a\}$ of $A$. If $x \in A$, then $x \in x \iff x \notin x$, which is absurd; hence $x \notin A$. Note that assuming foundation, $x$ is equal to $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2265884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prob. 8, Chap. 5, in Rudin's PMS, 3rd ed: If $f^\prime$ is continuous on $[a, b]$, then $f$ is uniformly differentiable on $[a,b]$ Here is Prob. 8, Chap. 5, in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition: Suppose $f^\prime$ is continuous on $[a, b]$ and $\varepsilon > 0$. Prove that there exists $\delta > 0$ such that $$ \left\lvert \frac{f(t)-f(x)}{t-x} - f^\prime(x) \right\rvert < \varepsilon $$ whenever $0 < |t-x| < \delta$, $a \leq x \leq b$, $a \leq t \leq b$. (This could be expressed by saying that $f$ is uniformly differentiable on $[a, b]$ if $f^\prime$ is continuous on $[a, b]$.) Does this hold for vector-valued functions too? My Attempt: As $f^\prime$ is continuous on $[a, b]$ and as $[a, b]$ is compact, so $f^\prime$ is uniformly continuous on $[a, b]$. So, for any real number $\varepsilon > 0$, we can find a real number $\delta > 0$ such that $$ \left\lvert f^\prime(x) - f^\prime(y) \right\rvert < \varepsilon \tag{1} $$ for all $x, y \in [a, b]$ for which $\lvert x-y \rvert < \delta$. Now suppose $a \leq t \leq b$, $a \leq x \leq b$, and $0 < \lvert t-x \rvert < \delta$. Then by the Mean Value Theorem there is some point $y$ between $t$ and $x$ such that $$ f(t) - f(x) = (t-x) f^\prime(y), \tag{2} $$ and also $$ \lvert y-x \rvert < \lvert t-x \rvert < \delta; \tag{3} $$ moreover, as $\lvert t-x\rvert > 0$, so $t \neq x$, and from (2) we can write $$ \frac{ f(t) - f(x)}{t-x} = f^\prime(y),$$ which together with (3) and (1) yields \begin{align} \left\lvert \frac{f(t)-f(x)}{t-x} - f^\prime(x) \right\rvert &= \left\lvert f^\prime(y) - f^\prime(x) \right\rvert \\ &< \varepsilon. \end{align} Am I right? Now for vector-valued functions. Suppose $$\mathbf{f} = \left( f_1, \ldots, f_k \right) $$ be a mapping of $[a, b]$ into some $\mathbb{R}^k$ and suppose that $$ \mathbf{f}^\prime = \left( f_1^\prime, \ldots, f_k^\prime \right) $$ is continuous on $[a, b]$. Then each of the component functions $f_1^\prime, \ldots, f_k^\prime$ is also continuous on $[a, b]$. So, given any real number $\varepsilon > 0$, we can find real numbers $\delta_i$, for $i = 1, \ldots, k$, such that $$ \left\lvert \frac{ f_i(t) - f_i(x) }{ t-x } - f_i^\prime(x) \right\rvert < \frac{ \varepsilon }{\sqrt{k}} $$ whenever $0 < \lvert t-x \rvert < \delta_i$, $a \leq t \leq b$, and $a \leq x \leq b$. Now let $$\delta := \min \left\{ \delta_1, \ldots, \delta_k \right\}.$$ Therefore, If $a \leq t \leq b$, $a \leq x \leq b$, and $0 < \lvert t-x \rvert < \delta$, then for each $i = 1, \ldots, k$, we obtain $0 < \lvert t-x \rvert < \delta_i$ and so $$ \left\lvert \frac{ f_i(t) - f_i(x) }{ t-x } - f_i^\prime(x) \right\rvert < \frac{ \varepsilon }{\sqrt{k}}, $$ which then implies that \begin{align} & \ \ \ \left\lvert \frac{ \mathbf{f}(t) - \mathbf{f}(x) }{ t-x } - \mathbf{f}^\prime(x) \right\rvert \\ &= \left\lvert \left( \frac{f_1(t) - f_1(x)}{t-x}, \ldots, \frac{f_k(t) - f_k(x) }{t-x} \right) - \left( f_1^\prime(x), \ldots, f_k^\prime(x) \right) \right\rvert \\ &= \left\lvert \left( \frac{ f_1(t) - f_1(x)}{t-x} - f_1^\prime(x), \ldots, \frac{ f_k(t) - f_k(x)}{t-x} - f_k^\prime(x) \right) \right\rvert \\ &= \sqrt{ \sum_{i=1}^k \left\lvert \frac{ f_i(t) - f_i(x)}{t-x} - f_i^\prime(x) \right\rvert^2 } \\ &< \sqrt{ \sum_{i=1}^k \frac{\varepsilon^2}{k} } \\ &= \varepsilon. \end{align} Thus the above result holds for vector-valued functions as well. Am I right? Is my reasoning correct in each of the above two cases? If not, then where have I erred?
Yes, I verify that your attempt is correct. To summarize, the key ideas for proving the scalar case are indeed uniform continuity and the mean value theorem:

* the mean value theorem allows one to replace the Newton quotient in the conclusion with a derivative;
* the uniform continuity of $f'$ gives a desired $\delta$.

For the vector-valued case, the key idea is that in a finite-dimensional normed vector space, one can "control" the whole vector by "controlling" each component. Adding up finitely many $O(\varepsilon)$ quantities, one still has an $O(\varepsilon)$ quantity. The proof of the scalar case could be written in a concise way as follows, omitting less important formal details. Proof. Since $f'$ is continuous on a compact interval, it is uniformly continuous. Let $\delta$ be such that $\left|f^{\prime}(x)-f^{\prime}(u)\right|<\varepsilon$ for all $x, u \in[a, b]$ with $|x-u|<\delta$. $\dagger$ By the mean value theorem, for $t,x\in[a,b]$ with $0<|t-x|<\delta$, there exists $u$ between $t$ and $x$ such that $$ \frac{f(t)-f(x)}{t-x}=f^{\prime}(u)\;. $$ Hence, since $|u-x|<\delta$, $$ \left|\frac{f(t)-f(x)}{t-x}-f^{\prime}(x)\right|=\left|f^{\prime}(u)-f^{\prime}(x)\right|<\varepsilon . $$ (Since this result holds for each component of a vector-valued function $f(x)$, it must hold also for $\mathbf{f}$.) $\dagger$ Notes. The $\varepsilon$ is already given in the statement of the exercise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Every graph has a core I'm currently going through Godsil/Royle's chapter about graph homomorphisms and using the following definitions I struggle to follow through the proof of the existence of a core of a graph. A graph $X$ is a core if every homomorphism $f$ from $X$ to itself is a bijection. A subgraph $Y$ is a core of $X$ if $Y$ is a core and there exists a homomorphism from $X$ to $Y$. Let $X$ be a finite graph. Since $id_X: V(X) \to V(X), v \mapsto v$ is a homomorphism the family of subgraphs from $X$ to which $X$ has a homomorphism is finite and not empty. Hence, there exists a minimal subgraph $Y$ with respect to inclusion. Why is $Y$ a core of $X$? It is clear that there exists a homomorphism $X \to Y$ but why is every endomorphism of $Y$ an isomorphism?
You've shown that there exists a $Y$ which is a minimal subgraph of $X$ such that there exists a homomorphism $f:X\rightarrow Y$. Now, suppose that there exists a homomorphism $g:Y\rightarrow Y$ which is not a bijection. Since $Y$ is finite, this means that $g(Y)$ has fewer vertices than $Y$. However, homomorphisms are closed under composition, so that $g\circ f$ is a homomorphism from $X$ to $g(Y)$. But $Y$ was minimal in the above sense, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Creating a function F(x) such that all outputs exist between -5 and 5 Take this sample graph illustration (image omitted): As $X$ approaches negative infinity the output approaches $-5$ As $X$ approaches positive infinity the output approaches $5$ From what I recall this would be leveraging $\log, \ln$ or $e$, but I'm failing to remember the specific principles involved to come up with this function. Another thing that would be nice for this function is that it accelerates very quickly from the origin and tapers off, which I tried to illustrate. Bringing this back to the real world and not just conceptual, realistic values of $X$ will primarily exist between $-10,10$ and much less frequently $-20,20$ and $-50,50$. My goal is to produce a scoring algorithm with constrained limits on the output of the score. The simpler the function, the better. This function seems to be very close to what I'm looking for: $5\cdot\frac{x}{1+|x|}$ The graph it produces (also omitted) shows that $F(5)$ and $F(-5)$ are approximately 4 and -4. How can I stretch out this function so that $F(10)$ and $F(-10)$ are roughly 4 and -4 instead? I was able to answer this question myself: replace $X$ with $0.5X$.
The simplest is $$f(x)=\frac{10}{1+e^{-x}}-5$$ It is the standard logistic function, shifted down by $0.5$ and then scaled by $10$, so its outputs fill exactly the interval $(-5,5)$.
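Comparing the two candidates discussed in this thread numerically (a quick sketch; the $0.5$ factor is the asker's own fix):

```python
import math

def f_logistic(x):
    return 10 / (1 + math.exp(-x)) - 5

def f_rational(x):          # the asker's function with x replaced by 0.5*x
    return 5 * (0.5 * x) / (1 + abs(0.5 * x))

for x in (-50, -10, -5, 0, 5, 10, 50):
    print(x, round(f_logistic(x), 3), round(f_rational(x), 3))
# f_rational(10) = 25/6 ~ 4.167, close to the requested 4; the logistic
# saturates much faster (f_logistic(10) ~ 4.9995).
```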
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Graph Theory: Prove that G must be connected Let $G$ be a simple graph with $n$ vertices and $ \frac 1 2 (n-1)(n-2)+1$ edges. Prove that $G$ must be connected. Thank you!
[Edited - thanks to M. Vinay who spotted the initial mistake!] Suppose for contradiction that the graph is disconnected. Then it is possible to partition the graph into two subgraphs (disconnected from one another), with $k$ vertices and $n- k$ vertices respectively. By a simple counting argument, you should be able to show that these two subgraphs can have at most $\frac 1 2 k(k-1)$ and $\frac 1 2 (n-k)(n-k-1)$ edges respectively. It then remains to check that, for any value of $k$ between $1$ and $n-1$, $$ \frac 1 2 k(k-1) + \frac 1 2 (n-k)(n-k-1) < \frac 1 2 (n-1)(n-2) + 1,$$ which would give a contradiction.
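One way to carry out the remaining check (my sketch, not part of the original answer): for fixed $n$ the left-hand side is a convex quadratic in $k$, so on $1\le k\le n-1$ it is maximized at the endpoints $k=1$ and $k=n-1$, where $$\frac 1 2 k(k-1) + \frac 1 2 (n-k)(n-k-1) = \frac 1 2 (n-1)(n-2) < \frac 1 2 (n-1)(n-2) + 1.$$ So a disconnected graph on $n$ vertices has at most $\frac 1 2 (n-1)(n-2)$ edges, one fewer than assumed.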
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Homomorphism Between Non-Abelian Group and Abelian Group I know that when finding homomorphisms between groups, for a cyclic group to any other group, then the homomorphism is completely determined by where you send the generator. However, I have two questions regarding homomorphisms between non-abelian groups and abelian groups. For instance, I know that for a homomorphism between $S_3$ and $C_4$ (cyclic group of order 4), you map the commutator subgroup, $A_3$ to the unit element of $C_4$. However, what do you do with the even permutations, and why must you map the commutator subgroup to the unit element of $C_4$? Also, how should I approach finding the homomorphisms between $C_2 \times C_2$ (direct product of two cyclic groups of order 2) and $S_3$?
$C_2$ is isomorphic to a subgroup of $S_3$, so that should help with the second question. As to the first, the commutator subgroup is generated by products of the form $xyx^{-1}y^{-1}$, so under any homomorphism into an abelian group the images of these elements commute, and thus cancel. That is $$\phi(xyx^{-1}y^{-1})=\phi(x)\phi(y)\phi(x^{-1})\phi(y^{-1})=\phi(x)\phi(x^{-1})\phi(y)\phi(y^{-1})=1$$ so the generators of the commutator subgroup are in the kernel, and hence the commutator subgroup is too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A subring of the upper triangular matrices I want to compute the Jacobson radical, the right socle, and the left socle of the ring of $3\times 3$ upper triangular matrices with entries in $\mathbb Z_4$ and such that the entries on the main diagonal are equal. I know that the Jacobson radical of the ring of $3\times 3$ upper triangular matrices is comprised of those with zero main diagonal, and that, when the ring $R$ is Artinian, the right (left) socle of $R$ is the left (right) annihilator of the Jacobson radical of $R$. Thanks for any suggestion and/or help!
Well, you have that the matrices $\begin{bmatrix}0&b&c\\ 0& 0& d\\ 0&0&0\end{bmatrix}$ form a nilpotent ideal, and that $\begin{bmatrix}2&0&0\\ 0& 2& 0\\ 0&0&2\end{bmatrix}$ is a central nilpotent, so the Jacobson radical must contain at least the matrices $\begin{bmatrix}2a&b&c\\ 0& 2a& d\\ 0&0&2a\end{bmatrix}$ for any choice of $a,b,c,d\in \mathbb Z_4$. The quotient by this ideal is exactly $\mathbb Z_2$, so apparently we have already found the Jacobson radical. I trust you can compute the annihilators after knowing this? Both socles have just four elements apiece. "I know that the Jacobson radical of the ring of $3\times 3$ upper triangular matrices is comprised of those with zero main diagonal." That is not really correct. If you have $R$ and take the quotient in $T_n(R)$ by the strictly upper triangular matrices, you get $R^n$, which is not Jacobson semisimple unless $R$ already was. So the radical is strictly bigger than the strictly upper triangular matrices, and contains some elements on the main diagonal (namely, the elements in $J(R)$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The square of any odd number is $1$ more than a multiple of $8$ I'm taking a single variable calculus course and was asked the following: Say whether the following is a valid proof or not: the square of any odd number is 1 more than a multiple of $8$. Proof: By the division theorem any number can be expressed in one of the forms $4q, 4q+1, 4q+2, 4q+3$. Squaring each of the odd forms gives: $$(4q+1)^2=16q^2+8q+1=8(2q^2+q)+1$$ $$(4q+3)^2=16q^2+24q+9=8(2q^2+3q+1)+1$$ My answer: I think this proof is invalid, as it does not prove that 'the square of any odd number is $1$ more than a multiple of $8$' is true for any odd number: it does not prove it for the odd numbers $(2n+1)$ or $(2n-1)$. Is my assertion correct?
The proof is valid. (1) Since $2n-1=2(n-1)+1$, you don't have to consider $2n-1$ and $2n+1$ separately. (2) Now consider only $2n+1$. We further divide it into two cases: (i) if $n$ is even, then $n=2q$ for some $q$ and so $2n+1=4q+1$. (ii) if $n$ is odd, then $n=2q+1$ for some $q$ and so $2n+1=4q+3$. All possible cases are considered.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Semidefinite programming, SDP, eigenvalues If I have an $n\times n$ Hermitian matrix $A$ and I want to find all the eigenvalues of $A$, i.e. $\{\lambda_{i}\}$, $i=1,...,n$, where $\lambda_{i+1}>\lambda_{i}$, and I only know the biggest eigenvalue $\lambda_{n}$ (found using SDP), my question is: how can I transform $\{\lambda_{i}\}\rightarrow \{\lambda'_{i}\}$ (that is, $A \rightarrow A'$) in order to make $\lambda_{n-1}$ the 'new' biggest eigenvalue $\lambda'_{n}$ of $A'$, and then apply SDP to $A'$ to find this new biggest eigenvalue, i.e. the second biggest eigenvalue $\lambda_{n-1}$ of $A$?
Set $A' = A - \lambda_n v_n v_n^*$ where $v_n$ is a normalized eigenvector for $\lambda_n$. The spectrum of $A'$ is that of $A$, except $\lambda_n$ is now replaced with 0.
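A quick numerical illustration of this deflation step (my sketch, using a random real symmetric matrix; numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                       # symmetric, hence Hermitian

w, V = np.linalg.eigh(A)                # eigenvalues in ascending order
lam_max, v = w[-1], V[:, -1]            # largest eigenvalue, unit eigenvector

A_deflated = A - lam_max * np.outer(v, v)   # A' = A - lam_n * v v^*
print(np.round(w, 4))
print(np.round(np.linalg.eigvalsh(A_deflated), 4))  # same spectrum, lam_max -> 0
```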
{ "language": "en", "url": "https://math.stackexchange.com/questions/2266940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interesting integral: $\int_0^1{\frac{nx^{n-1}}{x+1}}dx$ Find the value of $$\int_0^1{\frac{nx^{n-1}}{x+1}}dx.$$ I had no luck while integrating it. I also tried differentiating w.r.t. $n$ but still couldn't get anywhere. Need help.
Put $y=1+x$ and the integral becomes $$ \int_{1}^2 \frac{n(y-1)^{n-1}}{y} \, dy = \int_1^2 \sum_{k=0}^n n\binom{n-1}{k} (-1)^{n-k-1} y^{k-1} \, dy = \left[ n(-1)^{n-1}\log{y} + \sum_{k=1}^n \frac{n}{k} \binom{n-1}{k} (-1)^{n-k-1} y^k \right]_1^2 \\ = n(-1)^{n-1}\log{2} + \sum_{k=1}^n \frac{n}{k} \binom{n-1}{k} (-1)^{n-k-1}(2^k-1). $$
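Since the closed form is easy to mistype, here is a hedged numerical cross-check (midpoint-rule quadrature against the formula; note `math.comb(n - 1, n)` returns $0$, matching the vanishing binomial coefficient):

```python
import math

def lhs(n, steps=100_000):
    # Midpoint-rule approximation of the original integral.
    h = 1 / steps
    return sum(n * ((i + 0.5) * h)**(n - 1) / (1 + (i + 0.5) * h) * h
               for i in range(steps))

def rhs(n):
    # The closed form derived above.
    s = n * (-1)**(n - 1) * math.log(2)
    s += sum(n / k * math.comb(n - 1, k) * (-1)**(n - k - 1) * (2**k - 1)
             for k in range(1, n + 1))
    return s

for n in (1, 2, 3, 5):
    print(n, round(lhs(n), 6), round(rhs(n), 6))   # the columns agree
```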
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Accessing irreducible representations in GAP character table I'm writing an algorithm in GAP to automate some easy calculations related to elements of prime order in groups that appear in the ATLAS. To automate the process, I need to access the character values for some irreducible complex representations of small degree so I can do some basic arithmetic on them. Is there any way to access these without computing all of the irreducible characters using the function Irr(G)? Calling Irr(G) takes a very long time to run on my computer (on PSL(4,3), for example) and I really only need the two or three smallest irreducible representations. Thanks in advance!
If the character table is in the ATLAS (or related) you can access it immediately by its name: c:=CharacterTable("L4(3)");; The long time you observe is presumably a call such as gap> d:=CharacterTable(PSL(4,3));; gap> Irr(d); which stems from calculating the character table afresh from first principles. In this second case you might be able to start with some ad-hoc computations and hope that this gives certain low-degree characters, but there is no generic process that would get all low-degree characters quicker. In short, unless you need the connection to the group, access the character tables by name, not from constructing a group first.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the asymptotes of an odd function The question gave the graph shown below, giving one asymptote as $y=f(x)=\frac{1}{2}x - 3$. Graph: I was asked to complete the rest of the graph and draw on any missing asymptotes, and this is what I drew: For the other asymptote which I drew I wasn't sure what its equation was but I predict it's $y=\frac{1}{2}x + 3$. My question is how would I know for sure that that is the equation of the second asymptote without just guessing?
The given asymptote means $$\lim_{x\to -\infty}\left(f(x)-\left(\frac {1}{2}x-3\right)\right)=0.$$ But $f$ is odd ($f(-x)=-f(x)$). Replacing $x$ by $-x$, we get $$\lim_{x\to+\infty}\left(f(-x)-\left(-\frac {1}{2}x-3\right)\right)=0$$ $$\implies \lim_{x\to+\infty}\left(f(x)-\left(\frac {1}{2}x+3\right)\right)=0,$$ thus the other asymptote is $$y=\frac {1}{2}x+3$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is this proof reasonable, given this information? Suppose I have the figure in the image marked 'Original': Visually, that figure appears to be a parallelogram. Would the following proof that $\bigtriangleup BCA \cong \bigtriangleup DAC$ be valid?

1. $BC \parallel AD$, because Diagram
2. $\angle BAC \cong \angle DCA$, because Alternate Interior Angles
3. $AC \cong AC$, because Reflexive Property of Congruence
4. $AB \cong CD$, because Diagram
5. $\bigtriangleup BCA \cong \bigtriangleup DAC$, because Side-Angle-Side congruence postulate

I ask because the figure marked 'Alternate' has the same markings ($AB \cong CD$, $BC \parallel AD$), but side CD is in a different position and side AD is longer, so the two triangles are not congruent. So I'm not sure whether you can say that the original figure is in fact a parallelogram, just based on the information shown, which must be true for step 2 in the proof to be valid.
Your step (5) says it's using side-angle-side, but the pictures show the configuration is side-side-angle, which does not imply congruence, precisely because of this kind of counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $\alpha$ is a root, then $\sigma(\alpha)$ is a root There is the well-known proposition (for a specific reference: see section 14.1 of Dummit/Foote): Proposition. If $\sigma \in \text{Aut}(K/F)$ and $f(x) \in F[x]$ has $\alpha \in K$ as a root, then $f(x)$ also has $\sigma(\alpha)$ as a root. This got me wondering about the following scenario: Suppose there exists distinct $\sigma_1, \sigma_2 \in \text{Aut}(K/F)$ with the property that $\sigma_1(\alpha) = \sigma_2(\alpha)$, where $\alpha \in K$ is some root of $f(x)$. Then does $f(x)$ have $\sigma_1(\alpha)$ as a root with multiplicity $2$? (i.e. one multiplicity "comes from" $\sigma_1$ and one multiplicity "comes from" $\sigma_2$) After some thought, I think this is false. For example, consider a Galois extension $K/F$ and some irreducible polynomial $p(x) \in F[x]$ with $\alpha$ as a root. If it so happens that there are distinct $\sigma_1, \sigma_2 \in \text{Aut}(K/F)$ with $\sigma_1(\alpha) = \sigma_2(\alpha)$, then it cannot be that $p(x)$ has $\sigma_1(\alpha)$ as a multiple root, since $K/F$ being Galois implies that $p(x)$ is separable. Is this reasoning correct? Also, are there any easy counterexamples when $K/F$ is not Galois? (If an example would require some exotic construction, then please don't waste your time writing it up...) Thanks so much!
The reasoning is correct, but we can make a stronger observation. The proposition you mentioned can be strengthened as follows: If $\sigma \in \operatorname{Aut}(K/F)$, $f \in F[x]$, and $\alpha \in K$, then $\sigma(\alpha)$ has the same multiplicity as a root of $f$ as does $\alpha$. Proof: For any $p \in K[x]$, let $\sigma p$ be the polynomial obtained by applying $\sigma$ to the coefficients of $p$. This gives an automorphism of the ring $K[x]$. Now, if $\alpha$ has multiplicity $n$, then $f(x) = (x - \alpha)^n g(x)$ for some $g \in K[x]$. Then, $f(x) = (\sigma f)(x) = (x - \sigma(\alpha))^n (\sigma g)(x)$. Thus $\sigma(\alpha)$ has multiplicity at least $n$, and applying the same reasoning with $\sigma^{-1}$ we see that the multiplicities are equal. QED Therefore, in your proposed scenario, $\sigma_1(\alpha)$ cannot be a double root unless $\alpha$ itself is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this the correct homotopy rel $\{0,1\}$? Concerning the problem circled in red: Would this work as the homotopy rel $\dot I$? $F(t_1, t_2) = \begin{cases} \sigma(e_1), & \text{if $t_1=0,1$ or $t_2=1$} \\ (\sigma_0 * \sigma_1^{-1}) * \sigma_2[(1-t_2)t_1 + t_2], & \text{otherwise} \end{cases}$ And also, how would you use theorem $1.6$ here to show this? I can see by part three I can show that it's nullhomotopic rel $\{1\}$ if we define $f ': S \rightarrow X$ by $f'(e^{2 \pi i t_1}) = f(t_1)$, where $f(t)= (\sigma_0 * \sigma_1^{-1}) * \sigma_2(t)$ and the homotopy as $G(t_1, t_2)$ as $(e^{2 \pi i t_1}, t_2) \rightarrow f'((e^{2 \pi i t_1})^{(1-t_2)})$, but that doesn't show it's rel $\{0,1\}$.
Your homotopy is not a homotopy. Its image always lies on $\partial \Delta^2 \cong S^1$, and $S^1$ is not contractible. More specifically, we have for any $0<t_2<1$, $$\lim_{t_1 \to 0} F(t_1,t_2) = \sigma_0\sigma_1^{-1}\sigma_2(t_2) \not= \sigma(e_1) = F(0,t_2).$$ I think you have the answer, but you just don't recognize it. Theorem 1.6 tells you have a nullhomotopy of $\sigma:\partial \Delta^2 \to X$ rel $\{e_1\}$. We can then think of $f$ as a composition of maps $$[0,1] \xrightarrow{g} \partial \Delta^2 \xrightarrow{\sigma} X,$$ where $g(t)=(\epsilon_0\cdot\epsilon_1^{-1}\cdot\epsilon_2)(t)$. Since $g(0)=g(1)=e_1$, the homotopy $\sigma \simeq (\sigma e_1)$ rel $e_1$ gives a homotopy $\sigma \circ g = f \simeq (\sigma(e_1))$ rel $\{0,1\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $\sum_{k=0}^n \frac{(-1)^{(n+k)}\sum_{r=0}^n a_rk^r}{k!(n-k)!}=a_n\quad \forall n \in \mathbb{N_0} $ Can someone prove that the following equation is true? I tried to prove it using induction but I didn't get very far. $$\sum_{k=0}^n \frac{(-1)^{(n+k)}\sum_{r=0}^n a_rk^r}{k!(n-k)!}=a_n\quad \forall n \in \mathbb{N_0} $$ where $\sum_{r=0}^n a_rk^r$ is a polynomial of degree $n$ and $a_n$ is the coefficient of the highest power of that polynomial. I know that for $k=0$ and $r=0$ you get $0^0$, which of course is undefined. But can we just say for the sake of simplicity that $0^0=1$? $$\sum_{k=0}^n \frac{(-1)^{(n+k)}p(k)}{k!(n-k)!}=\frac{1}{n!}\frac{\partial^n p}{\partial k^n} \quad \forall n \in \mathbb{N_0} $$ where $p(k)$ is a polynomial of degree $n$ should be an equivalent statement. This is my first question here by the way. If I made some mistakes, please let me know.
$$\begin{align} s_n &=\sum_{k=0}^n \frac{(-1)^{(n+k)}}{k!(n-k)!}\sum_{r=0}^n a_rk^r\\ &=\frac{(-1)^n}{n!}\sum_{k=0}^n \frac{(-1)^{k}n!}{k!(n-k)!}\sum_{r=0}^n a_rk^r\\ &=\frac{(-1)^n}{n!}\sum_{k=0}^n (-1)^{k}\binom{n}{k}\sum_{r=0}^n a_rk^r\\ &=\frac{(-1)^n}{n!}\sum_{r=0}^n\sum_{k=0}^n (-1)^{k}\binom{n}{k} a_rk^r\\ &=\frac{(-1)^n}{n!}\sum_{r=0}^na_r\sum_{k=0}^n (-1)^{k}\binom{n}{k} k^r\\ &=\frac{(-1)^n}{n!}(-1)^na_nn!\\ &=a_n \end{align}$$ This is because $\sum_{k=0}^n (-1)^{k}\binom{n}{k} k^r$ is, up to sign, the $n$-th finite difference of the $r$-th degree monomial, which is zero for $r < n$ and equal to $(-1)^n n!$ for $r = n$.
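A quick numerical spot-check of the identity (a hypothetical script; Python evaluates `0**0` as `1`, matching the convention adopted in the question):

```python
import math
import random

random.seed(1)
n = 6
a = [random.randint(-5, 5) for _ in range(n + 1)]     # coefficients a_0..a_n
s = sum((-1)**(n + k) * sum(a[r] * k**r for r in range(n + 1))
        / (math.factorial(k) * math.factorial(n - k))
        for k in range(n + 1))
print(s, a[n])    # the sum recovers the leading coefficient a_n
```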
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Is $SO(n)$ diffeomorphic to $SO(n-1) \times S^{n-1}$ There is a fibration $SO(n-1) \to SO(n) \to S^{n-1}$, given basically by taking the first column of the matrix as a vector in $\mathbb{R}^n$. Is this fibration trivializable?
If $n=3$, the Hopf fibration is the composition $S^3=Spin(3)\rightarrow SO(3)\rightarrow S^2$, so if $SO(3)\rightarrow S^2$ were trivial, the Hopf fibration would be trivial as well, and this is not true. https://en.wikipedia.org/wiki/Hopf_fibration#Geometric_interpretation_using_rotations
{ "language": "en", "url": "https://math.stackexchange.com/questions/2267935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that this function is zero for all real numbers Let $f$ be a continuous function on $[0,2\pi]$. If $\int_0^{2\pi}f(x)e^{-inx}dx=0$ for all $n\in\mathbb{N}$, then $f(x)=0$ for all $x\in [0,2\pi]$. I have been thinking about this problem a lot, but I don't know what I'm not seeing. I think, by the Stone-Weierstrass theorem, there exists a sequence of polynomials that converges uniformly to $f$, and the function $e^{-inx}$ is maybe a way to write points of the plane, and if with this I can write every point of $\mathbb{R}^2$, I've got it... haven't I? I feel a little lost with this. Thank you for helping me.
The example $f(x)=e^{-ix}$ shows that the question is not correct as stated. Instead, we must require that $\int_0^{2\pi}f(x)e^{-inx}\;dx=0$ for all $n\in\mathbb{Z}$. If we make this assumption, then it follows that $$ \int_0^{2\pi}f(x)\overline{p(x)}\;dx=0 $$ for any trigonometric polynomial $$ p(x)=\sum_{n=-N}^Nc_ne^{inx} $$ The trigonometric polynomials are dense in $C([0,2\pi])$ with the uniform norm by Stone-Weierstrass, so there is a sequence of trigonometric polynomials $p_n(x)$ such that $||f-p_n||_{\infty}\to 0$, hence $||f-p_n||_2\to 0$. Therefore $$ \int_0^{2\pi}|f(x)|^2\;dx=\lim_{n\to\infty}\int_0^{2\pi}f(x)\overline{p_n(x)}\;dx=0 $$ and since $f$ is continuous this implies that $f=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Computing maximum dimension of a vector subspace given that it's every element is a symmetric matrix and is closed under matrix multiplication Q. Let $S$ be a subspace of the vector space of all $11 \times 11$ real matrices such that (i) every matrix in $S$ is symmetric and (ii) $S$ is closed under matrix multiplication. What is the maximum possible dimension of $S$? Attempt : Subspace of symmetric matrices has dimension $\frac {n(n+1)}2$ for the vector space of $n \times n$ matrices. Here in this case, it is $\frac {11\times 12}2=66$. Next, using (ii) we have that $(AB)^T=AB$. But $(AB)^T=B^T A^T=BA. (\because A^T=A, B^T=B)$. Hence $AB=BA$. So we conclude that elements in $S$ commute with each other. In case of vector spaces of $2\times 2$ and $3 \times 3$ symmetric matrices, I found that only diagonal matrices commute with each other. For e.g. in $2 \times 2$ case, I just multiplied the basis elements of the subspace of symmetric matrices $$ \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix}$$ but $$ \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \neq \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix}$$ Similarly checking the commutativity of basis elements of space of $3 \times 3$ symmetric matrices also gives only the space of diagonal matrices as a candidate having commuting elements. Like, $$ \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ \end{pmatrix} \neq \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}$$ Hence the vector space of all diagonal matrices satisfies (i) & (ii). Thus minimum possible 'maximum dimension' of $S$ is $11$. So we have that $11 \le \max \dim S \lt 66$. Am I in right direction so far? What can be my next step?
If $A$ and $B$ are symmetric, then $AB$ symmetric means $AB=(AB)^T=B^TA^T=BA$. All the matrices therefore commute. A symmetric real matrix is diagonalisable, and pairwise commuting diagonalisable matrices are simultaneously diagonalisable. So, after a change of basis, $S$ lies inside the space of diagonal matrices, and hence the dimension of such a space is at most $11$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral of $f_n$ from $0$ to $1$ is zero Let $f_n(t)$ be defined as $$f_n(t)=\begin{cases}1, \text{if}\,\,t\in[\frac{p}{2^k},\frac{p+1}{2^k})\\ 0, \text{otherwise}\end{cases}$$ where $n=2^k+p,\,\,0\le p<2^k$. Then, a)what can be the value of $\lim\sup f_n(t)$ and $\lim\inf f_n(t)$? b)Is $\int_0^1|f_n(t)|\to0$ when $n\to\infty$ true? For part b), I think that as $n\to\infty$, the function approaches zero because of the constriction of the interval on which it is $1$.Any ideas. Thanks beforehand.
Hints: (a) For a fixed $t \in [0,1]$ and any $N$, is there an $n > N$ so that $f_n(t) > 0$? (b) Write out an explicit bound to formalize your idea. What is the value of the integral at the stage when the interval has length $2^{-k}$? Bonus: Think about (b) for the function $2^k f_n$ instead ($n = 2^k + p$). What's the difference?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit $\lim_{x\rightarrow \infty}x \ln x+2x\ln \sin \left(\frac{1}{\sqrt{x}} \right)$ I think that this limit should be undefined: $$\lim_{x\rightarrow \infty}\left(x \ln x+2x\ln \sin \left(\frac{1}{\sqrt{x}} \right)\right)$$
You can do it without substitution, $$\lim_{x\to \infty}\left(x\ln x+2x\ln\sin\left(\frac{1}{\sqrt{x}}\right)\right)$$ $$=\lim_{x\to \infty}2x\left(\ln\sqrt{x}+\ln\sin\left(\frac{1}{\sqrt{x}}\right)\right)$$ $$=\lim_{x\to \infty}2x\ln\left(\sqrt{x}\sin\left(\frac{1}{\sqrt{x}}\right)\right)$$ $$=\lim_{x\to \infty}2x\ln\left(\sqrt{x}\left(\frac{(1/\sqrt{x})}{1!}-\frac{(1/\sqrt{x})^3}{3!}+\frac{(1/\sqrt{x})^5}{5!}+\ldots\right)\right)$$ $$=\lim_{x\to \infty}2x\ln\left(1-\frac{1}{6x}+\frac{1}{120x^2}-\ldots\right)$$ $$=\frac13\lim_{x\to \infty}\frac{\ln\left(1-\frac{1}{6x}\left(1+\frac{1}{20x}-\ldots\right)\right)}{\frac{1}{6x}\left(1+\frac{1}{20x}-\ldots\right)}\cdot \left(1+\frac{1}{20x}-\ldots\right)$$ $$=\frac{1}{3}(-1)\cdot 1$$$$=\color{blue}{-\frac13}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Maximum of $x^3+y^3+z^3$ with $x+y+z=3$ It is given that $x+y+z=3$, $0\le x, y, z \le 2$, and we are to maximise $x^3+y^3+z^3$. My attempt: if we define $f(x, y, z) =x^3+y^3 +z^3$ with $x+y+z=3$, it can be shown that $f(x+z, y, 0)-f(x,y,z)=3xz(x+z)\ge 0$ and thus $f(x, y, z) \le f(x+z, y, 0)$. This implies that $f$ attains its maximum whenever $z=0$. (Is this conclusion correct? I have doubt here.) So the problem reduces to maximising $f(x, y, 0)$, for which again it can be shown that $f(x, y, 0)\le f(x, 2x,0)$, and this completes the proof with maximum of $9$ and equality at $(1,2,0)$ and its permutations. Is it correct? I strongly believe that even if it has faults there must be a similar way, and I might have made mistakes. Every help is appreciated.
$$\begin{align} (x+y+z)^3 &= x^3 + y^3 + z^3 + 3(x+y)(y+z)(z+x)\\ \implies\quad x^3 + y^3 + z^3 &= (x+y+z)^3 - 3(x+y)(y+z)(z+x)\\ &= 27 - 3(x+y)(y+z)(z+x) \end{align}$$ Now, $x^3+y^3+z^3$ is maximum when $t = (x+y)(y+z)(z+x)$ is minimum. Now since $x$, $y$ and $z$ are each non-negative, therefore $t$ is non-negative. Also, $x,\,y,\,z \in [0,\,2]$. So, $t$ takes its minimum value when the variables take the values $0,\,1,\,2$: writing $t=(3-x)(3-y)(3-z)$, the function $\log(3-x)+\log(3-y)+\log(3-z)$ is concave, so its minimum over the constraint polytope is attained at a vertex, and the vertices are exactly the permutations of $(0,1,2)$. So, $t_\text{min} = (0+1)(1+2)(2+0) = 6$. So $\max (x^3+y^3+z^3)=27-3\times6=9$.
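A brute-force grid search corroborating the maximum of $9$ (illustrative only; the grid step is an arbitrary choice):

```python
# Enumerate (x, y, z) on a 0.01 grid with x + y + z = 3 and 0 <= x, y, z <= 2.
best = max(
    (x**3 + y**3 + z**3, x, y, z)
    for i in range(201)
    for j in range(201)
    for x, y, z in [(i / 100, j / 100, 3 - i / 100 - j / 100)]
    if 0 <= z <= 2
)
print(best)   # (9.0, 2.0, 1.0, 0.0): the maximum 9, at a permutation of (0, 1, 2)
```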
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
How to show a bound on expected sum of squared martingale increments? I am reading a paper with the following set up. Let $(\Omega, \mathcal{F},P)$ be a probability space, $B \in \mathcal{F}$, and $\{\mathcal{F}_n : n \in \mathbb{N} \}$ a filtration with $\mathcal{F}_n \uparrow \mathcal{F}$. Let $q_n = P(B \mid \mathcal{F}_{n-1})$. The sequence $\{q_n \}$ is a martingale with values in $[0,1]$ almost surely. Now, the paper asserts $$E\Big(\sum_{n=1}^\infty(q_{n+1} - q_n)^2 \Big) \leq 1.$$ I can't see why this is true. Of course, for all $n$, $E(q_{n+1}-q_n)^2 \leq 1$, but I don't see why the inequality should hold after summing over all $n$. Could someone please point out what I'm missing?
Increments of a martingale are orthogonal, hence $$\mathbb E\left[\sum_{n=1}^{+\infty}\left(q_{n+1}-q_n\right)^2 \right] =\mathbb E\left[\left(\sum_{n=1}^{+\infty}(q_{n+1}-q_n)\right)^2\right].$$ By the martingale convergence theorem, $$\sum_{n=1}^{+\infty}(q_{n+1}-q_n)=\mathbb P\left(B\mid\mathcal F\right)-\mathbb P\left(B\mid\mathcal F_0\right),$$ which is the difference of two $[0,1]$-valued random variables, so its square is at most $1$; this gives the result.
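For completeness, here is the orthogonality step the answer relies on (my sketch): for $m<n$, the increment $q_{m+1}-q_m$ is $\mathcal F_m$-measurable, hence $\mathcal F_{n-1}$-measurable, so by the tower property $$\mathbb E\left[(q_{m+1}-q_m)(q_{n+1}-q_n)\right]=\mathbb E\left[(q_{m+1}-q_m)\,\mathbb E\left[q_{n+1}-q_n\mid\mathcal F_{n-1}\right]\right]=0,$$ since $\mathbb E\left[q_{n+1}\mid\mathcal F_{n-1}\right]=q_n$. Hence all cross terms vanish when the square of the series is expanded.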
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area between $y^2=2ax$ and $x^2=2ay$ inside $x^2+y^2\le3a^2$ I need to find the area between $y^2=2ax$ and $x^2=2ay$ inside the circle $x^2+y^2\le3a^2$. I know it's an integral but I can't seem to find the right one.
The parabolas will intersect the circle at the points $(a,\sqrt{2}a)$ and $(\sqrt{2}a,a)$, bounding a region symmetric about the line $y=x$ (figure omitted). So you want to find $$ \int_0^a\sqrt{2ax}-\frac{x^2}{2a}\,dx+\int_a^{\sqrt{2}a}\sqrt{3a^2-x^2}-\frac{x^2}{2a}\,dx $$ In polar coordinates you can use the symmetry of the region and find the area by evaluating \begin{eqnarray} A&=&2\int_{\pi/4}^{\arctan(\sqrt{2})}\frac{r^2}{2}\,d\theta+2\int_{\arctan(\sqrt{2})}^{\pi/2}\frac{r^2}{2}\,d\theta\\ &=&\int_{\pi/4}^{\arctan(\sqrt{2})}3a^2\,d\theta+\int_{\arctan(\sqrt{2})}^{\pi/2}(2a\cot\theta\csc\theta)^2d\theta \end{eqnarray} which can be finished by elementary means.
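If you want to double-check that the Cartesian and polar set-ups agree, here is a sketch using numerical quadrature (scipy assumed; $a=1$ is an arbitrary choice, since the area scales as $a^2$):

```python
import math
from scipy.integrate import quad

a = 1.0

# Cartesian set-up from the answer:
area_cart = (
    quad(lambda x: math.sqrt(2*a*x) - x**2/(2*a), 0, a)[0]
    + quad(lambda x: math.sqrt(3*a**2 - x**2) - x**2/(2*a), a, math.sqrt(2)*a)[0]
)

# Polar set-up from the answer (the factor 2 and the 1/2 already cancelled):
t0 = math.atan(math.sqrt(2))
area_polar = (
    quad(lambda th: 3*a**2, math.pi/4, t0)[0]
    + quad(lambda th: (2*a*math.cos(th)/math.sin(th)**2)**2, t0, math.pi/2)[0]
)

print(area_cart, area_polar)  # both come out to about 0.981 for a = 1
```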
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Uniformly continuous function Let $f:[0,1] \cup \{-1\} \to \Bbb R$ be defined by $f(x)=1$ for all $x \in [0,1]$, and $f(-1)=0$. Is the function uniformly continuous? Here the domain is closed and bounded, thus compact, and $f$ is continuous, so $f$ should be uniformly continuous. But how is the definition of uniform continuity applicable here? I can't understand it. Or is it uniformly continuous at all?
The "same" reason you saw it is continuous. The function is uniformly continuous on $[0,1]$ clearly. If $x,y \in \{ -1 \}$, then $|x-y| = 0$ and $|f(x)-f(y)| = 0$. So $f|_{\{-1\}}$ is such that for every $\varepsilon > 0$ and every $\delta > 0$ we have $|x-y| < \delta$ implying $|f(x)-f(y)| < \varepsilon$; this is even stronger than the requirement of uniform continuity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2268881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Consider a measurable function $f:[0,1]\rightarrow \mathbb{R}$ such that $f(x)\geq ||f||_2$. So I am stuck on this question and I don't really know how to go about it. The question is Let $X=[0,1]$ and $\mu = \lambda$ be the Lebesgue measure. Consider a measurable function $f:[0,1]\rightarrow \mathbb{R}$ such that $f(x)\geq ||f||_2$ for all $x \in [0,1]$. Prove that there exists an $x_0 \in [0,1]$ such that $f(x_0) =||f||_2$. So far, I have written out the statements $||f||_2=\bigg(\int |f|^2 d\lambda \bigg)^{\frac{1}{2}}$ and $f(x)\geq \bigg( \int |f|^2 d\lambda \bigg)^{\frac{1}{2}}$
Using Hölder's inequality (and the hypothesis, which gives $f \geq \|f\|_2 \geq 0$ pointwise), $$ \|f\|_2 = \int_0^1 \|f\|_2 \leq \int_0^1 f \leq \left(\int_0^1 |f|^2\right)^{1/2} = \|f\|_2. $$ Hence the non-negative function $g(x) := f(x) - \|f\|_2$ satisfies $\int_0^1 g = 0$, so that $g(x) = 0$ a.e. in $[0,1]$, i.e. $f(x) = \|f\|_2$ a.e. in $[0,1]$. In particular the set where $f = \|f\|_2$ has full measure, hence is non-empty, which gives the required $x_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The conjecture: $\binom{2n-1}{n-1} \equiv 1 \pmod{n^3} \quad \Longrightarrow \quad n \in \mathbb{P}$ I recall seeing the following conjecture somewhere, but I cannot find the reference any more. Where can I find more information about this conjecture? Does it have a name? Conjecture: For any natural number $n$ it holds that $$ \binom{2n-1}{n-1} \equiv 1 \pmod{n^3} \quad \Longrightarrow \quad n \in \mathbb{P}, $$ where $\mathbb{P}$ denotes the set of primes.
This is known as Wolstenholme's theorem. For a prime $p > 3$ we have: $$\binom{2p-1}{p-1} \equiv 1 \pmod{p^3}$$ The implication in the question is the converse of Wolstenholme's theorem; it is conjectured to hold (no composite $n$ satisfying the congruence is known), but to my knowledge it remains open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Will $2$ linear equations with $2$ unknowns always have a solution? As I am working on a problem with 3 linear equations in 2 unknowns, I discovered that when I use any two of the equations, it seems I always find a solution. But when I plug it into the third equation with the same two variables, the third may or may not cause a contradiction, depending on whether the point is a solution of it, and I am OK with that. BUT I am confused about why, when I pick two equations with two unknowns, it seems to have no choice but to work. Is there something about linear algebra that makes this so, and are there any conditions where it won't be the case that I find a consistent solution using only the two equations? My linear algebra is rusty and I am getting up to speed. These are just equations of lines, and maybe the geometry would explain it, but I am not sure how. Thank you.
Each linear equation represents a line in the plane. Most of the time two lines will intersect in one point, which is the simultaneous solution you seek. If the two lines have exactly the same slope, they may not meet so there is no solution or they may be the same line and all the points on the line are solutions. When you add a third equation into the mix, that is another line. It is unlikely to go through the point that solves the first two equations, but it might.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
If $\sin x + \sin^2 x =1$ then find the value of $\cos^8 x + 2\cos^6 x + \cos^4 x$ If $\sin x + \sin^2 x =1$ then find the value of $\cos^8 x + 2\cos^6 x + \cos^4 x$ My Attempt: $$\sin x + \sin^2 x=1$$ $$\sin x = 1-\sin^2 x$$ $$\sin x = \cos^2 x$$ Now, $$\cos^8 x + 2\cos^6 x + \cos^4 x$$ $$=\sin^4 x + 2\sin^3 x +\sin^2 x$$ $$=\sin^4 x + \sin^3 x + \sin^3 x + \sin^2 x$$ $$=\sin^3 x(\sin x +1) +\sin^2 x(\sin x +1)$$ $$=(\sin x +1) (\sin^3 x +\sin^2 x)$$ How do I proceed further?
Hint: $$\cos^8x+2\cos^6x+\cos^4x=(\cos^4x+\cos^2x)^2$$ Now as $\cos^2x=\sin x,\cos^4x=(\cos^2x)^2=?$
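Finishing from the hint: $$\cos^8 x + 2\cos^6 x + \cos^4 x=(\cos^4x+\cos^2x)^2=(\sin^2 x+\sin x)^2=1^2=1.$$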
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Prove if $x > 3$ and $y < 2$, then $x^{2} - 2y > 5$ My solution is: Multiply $x > 3$ by $x$ (which is positive), yielding $x^{2} > 3x > 9$. Multiply $y < 2$ by $2$, yielding $2y < 4$, i.e. $-2y > -4$. Thus, combining the two yielded inequalities, $x^2 - 2y > 9 - 4 = 5$, which proves that if $x > 3$ and $y < 2$, then $x^{2} - 2y > 5$. Are these correct proof steps?
As a slightly extended version of Michael Rozenberg's answer, this can very simply be written down as a calculational proof: $$\begin{align} x^2-2y &> 3^2-2\times 2 && \text{using $x>3$ and the fact that $a-b$ is monotonic in $a$,} \\ &&& \text{and $y<2$ and the fact that $a-b$ is antimonotonic in $b$} \\ &= 5 && \text{by arithmetic} \end{align}$$ In my opinion, this proof is the most direct reflection of the basic idea of this proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Defining a matrix in Magma with finite field entries Consider the following matrix $$ G:=\left[ \begin {array}{cccccccc} 1&0&0&0&\alpha&\alpha+1&1&1\\ 0&1&0&0&1&\alpha&\alpha+1&1\\ 0&0&1&0&1&1&\alpha&\alpha+1\\ 0&0&0&1&\alpha+1&1&1& \alpha\end {array} \right] $$ where entries of matrix $G$ come from finite field $GF(2^8)$ such that this finite field is constructed by the polynomial ${\alpha}^{8}+{\alpha}^{4}+{\alpha}^{3}+\alpha+1$. My question: How to define matrix $G$ in the MAGMA software such that we can see the coding parameters that are generated with the matrix $G$?
At first, we should define the finite field $GF(2^8)$ by our polynomial, as follows:

K<x> := ExtensionField< GF(2), z | z^8+z^4+z^3+z+1 >;

After that, we have to define the matrix space over the finite field $K$, in the following form:

M := KMatrixSpace(K, 4, 8);

Now, we define our matrix:

G := M ! [1,0,0,0,x,x+1,1,1,0,1,0,0,1,x,x+1,1,0,0,1,0,1,1,x,x+1,0,0,0,1,x+1,1,1,x];

Finally, we obtain the code generated by $G$, and with it its coding parameters, by this command:

C := LinearCode(G);

I asked two questions about math software on Math Stack Exchange. The first question was about Maple; no one answered it, and because of this I asked Maple's support team. The next question, which you can see here, was about Magma; again no one answered, and I read two chapters of the Magma guide to find the method. I strongly believe that the math-software tags should be independent of the main site; it would be better to define a separate forum for these special tags.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show $\forall_{z \in \mathbb{C}, n \in \mathbb{N}} : \exists_{u,v \in \mathbb{Q}} |z - (u+ vi)| < \frac{1}{n}$ In other words, show that any complex number can be approximated arbitrarily closely by a number with rational coefficients (this uses the density of $\mathbb{Q}$ in $\mathbb{R}$). What I've done is simplify the inequality so that we can work with it: $|z - (u+ vi)| < \frac{1}{n} \Leftrightarrow $ (substituting $(a+bi)$ for $z$, and $x$ for $\frac{1}{n}$) $|(a+bi) - (u + vi)| < x \Leftrightarrow$ $|(a-u) + (b - v)i| < x$ Now when I try to solve for $u$ or $v$ I'm getting stuck. I've also tried to make cases like ($a = 0, b = 0$), ($a \ne 0, b = 0$), ($a = 0, b \ne 0$), ($a \ne 0, b \ne 0$) but it didn't work out either.
Hint: choose $u\in\mathbb Q$ such that $\left|a-u\right|\leqslant 1/(2n)$ and $v\in\mathbb Q$ such that $\left|b-v\right|\leqslant 1/(2n)$.
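To spell out how the hint finishes (density of $\mathbb Q$ in $\mathbb R$ even lets us take strict inequalities $|a-u|<\frac{1}{2n}$ and $|b-v|<\frac{1}{2n}$): by the triangle inequality, $$|z-(u+vi)|=|(a-u)+(b-v)i|\le|a-u|+|b-v|<\frac{1}{2n}+\frac{1}{2n}=\frac1n.$$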
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this inequality on Schatten p-norm and diagonal elements true? Let $A=[a_{ij}]\in\mathbb R^{m\times n}$ be a matrix with $m\ge n$, and $\Vert A\Vert_p$ be the Schatten p-norm of $A$. It is known that $\Vert A\Vert_1\ge \sum_i \vert a_{ii}\vert$ and $\Vert A\Vert_2^2=\Vert A\Vert_F^2\ge \sum_i \vert a_{ii}\vert^2$. Can we show that $\Vert A\Vert_p^p\ge \sum_i \vert a_{ii}\vert^p$ for general $p\ge 1$? Thanks!
This is true, and is a consequence of the following majorization result. Let $d_1,\dots,d_n$ be the main diagonal entries of $A$ ordered so that $|d_1|\ge |d_2| \ge \dots\ge |d_n|$. Also let $\sigma_1\ge \sigma_2 \ge \dots \ge \sigma_n$ be the singular values of $A$. The aforementioned (weak) majorization inequality is $$ \sum_{j=1}^k |d_j| \le \sum_{j=1}^k \sigma_j\qquad \forall \ k =1,2,\dots,n \tag1 $$ Weak majorization implies that $$ \sum_{j=1}^n \phi(|d_j|) \le \sum_{j=1}^n \phi(\sigma_j)\tag{2} $$ for every increasing convex function $\phi$. Taking $\phi(t)=t^p$ and recalling that $\Vert A\Vert_p^p = \sum_{j=1}^n \sigma_j^p$, the result follows.

References for inequality (1):

* Problem 21 of section 3.3 in Topics in Matrix Analysis by Horn and Johnson;
* Theorem 2.4 of Chi-Kwong Li's lecture notes. Here the result is stated for square matrices, but also includes the more difficult converse direction, describing all the possible combinations of singular values and diagonal entries. This is a theorem proved independently by Thompson and Sing in the 1970s.

References for inequality (2):

* Section I.3.C of Inequalities: Theory of Majorization and Its Applications by Marshall, Olkin, and Arnold, 2nd edition.
* Section 3.17 of Inequalities by Hardy, Littlewood, and Polya (they consider strong majorization, which requires equality to hold in (1) when $k=n$; but for increasing $\phi$ the weakly-majorized case reduces to the strongly majorized one).
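As a quick numerical sanity check of the claimed inequality (not a proof; a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    # random m x n real matrix with m >= n, as in the question
    m = int(rng.integers(2, 6))
    n = int(rng.integers(2, m + 1))
    A = rng.standard_normal((m, n))
    p = rng.uniform(1, 5)
    sigma = np.linalg.svd(A, compute_uv=False)     # singular values
    lhs = np.sum(sigma ** p)                       # ||A||_p^p
    rhs = np.sum(np.abs(np.diag(A)) ** p)          # sum_i |a_ii|^p
    assert lhs >= rhs - 1e-9
print("inequality held in all sampled cases")
```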
{ "language": "en", "url": "https://math.stackexchange.com/questions/2269975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I show that coordinate basis spans the tangent space? A tangent space $T_{m,M}$ is defined as the set of all linear derivations at a point $m$ on a manifold $M$. Linear derivations are operators that satisfy the Leibniz rule, i.e. for $f,g \in F_{m,M}$, $O(fg) = f(m)\,O(g)+g(m)\,O(f)$, where $F_{m,M}$ is the space of smooth functions defined on a subset of $M$ including $m$. Operators are defined to map from $F_{m,M} \to \mathbb{R}$. The coordinate basis $\{t_i\}$ of tangent vectors is defined as $t_i(f) = \frac{\partial f}{\partial x^i}$ where the $x^i$ are coordinate functions belonging to some chart of $M$. I read that the coordinate basis apparently spans the space of all linear derivations at $m$. It's obvious that derivatives belong to the space, but how can you show that the partial derivatives span the space of linear derivations?
This is half of the content of Proposition 3.2 in Lee's Introduction to Smooth Manifolds. In summary, and using the notation of the question here: Hint The given definition of derivation is local, so by choosing smooth coordinates of the given manifold $M$ centered at the given point $m \in M$, we need only prove the claim for $M = \Bbb R^n$ and $m = {\bf 0}$. For a derivation $X$ on $M$ at ${\bf 0}$, set $v^i := X(x^i)$, where $x^i$ is the usual $i$th coordinate function. By applying Taylor's Theorem (with remainder) to expand an arbitrary smooth function $f$ on a neighborhood of ${\bf 0}$, show that $X = v^i t_i$, which in particular implies $X \in \operatorname{span}\{t_i\}$. Fix a smooth, real-valued function $f$ on a neighborhood of ${\bf 0}$. By Taylor's Formula, there are smooth functions $g_i$ on a neighborhood of ${\bf 0}$ such that $g_i({\bf 0}) = 0$ and $$f(x) = f({\bf 0}) + \sum_{i = 1}^n t_i(f) x^i + \sum_{i = 1}^n g_i(x) x^i .$$ Each summand of the last term is a product of functions that vanish at ${\bf 0}$, so applying $X$ to $f$ gives $$X(f) = X\left(\sum_{i = 1}^n t_i(f) x^i\right) = \sum_{i = 1}^n t_i(f) X(x^i) = \sum_{i = 1}^n v^i t_i(f) .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Limit of 1/x as x approaches infinity Why is $\lim_{x\to\infty} \frac{1}{x}$ equal to $0$, when really the limit appears to be an infinitesimal quantity? I am trying to understand why there is no distinction between 0 and an infinitesimal quantity in the context of limits.
In standard real analysis/calculus, there are no infinitesimal quantities. Everything is formulated in terms of real numbers. What $\lim_{x\to \infty} f(x) = c$ means is that for all $\varepsilon > 0$ there exists $x_o\in \mathbb{R}$ such that whenever $x>x_0$, we have that $\vert f(x)-c\vert < \varepsilon$. In words, what this means is that if you pick and small number $\varepsilon$, you can find a number $x_0$ large enough such that at any number past $x_0$, $f(x)$ will be no greater than a distance $\varepsilon$ away from $c$. For example, if $f(x)=\frac{1}{x}$ and $c=0$, this is the case. Given $\varepsilon > 0$, we can let $x_0 = 1/\varepsilon$; then if $x>x_0$, we have $\vert f(x)-0\vert = 1/x < \varepsilon$. So, $1/x$ really does approach $0$, in that it gets arbitrarily close to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
$\epsilon - \delta$ proof for $\frac{x^2 - 16}{x + sin x}$ limit I'm having difficulty writing an $\epsilon - \delta$ proof for the following limit: $\lim_{x\to 4} \frac{x^2-16}{x+\sin x} = 0$ I've factored it to $\frac{(x+4)(x-4)}{x+\sin x} = 0$ and guessed that I need $\delta = \frac{2}{5}\epsilon$ for $|x-4| < \delta \implies |\frac{x^2-16}{x+\sin x}| < \epsilon$ I've also bounded $|x+4|$ by $\delta + 8$ but I don't know how to control $|x + \sin x|$.
$$|x-4| < \delta$$ $$4-\delta < x < 4+ \delta$$ $$3 - \delta< x+ \sin x < 5 + \delta$$ $$\frac{1}{5+\delta} < \frac{1}{x+\sin x} < \frac{1}{3-\delta}$$ If $\delta < 1$, then $-\delta > -1$, so $3-\delta > 2$ and $\frac{1}{3-\delta} < \frac12$: $$\left| \frac{x^2-16}{x+\sin x}\right| \leq \frac12 |x^2-16|$$ Also, if $\delta < 1$, $|x+4|<9$, so $$\frac12 |x^2-16|\leq \frac 92 |x-4|$$ Hence, for example, I can choose $$\delta = \min \left( \frac12, \frac29 \epsilon \right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove if 2 divides $a^2$, then 2 divides $a$. If 2 divides $a^2$, then 2 divides $a$. I know that 2 divides $a^2$ means there is some integer $n$ such that $a^2 = 2n$, and similarly, 2 divides $a$ means there is some integer $m$ such that $a = 2m$. I thought I could rewrite $a^2 = 2n$ as $a = 2(n/a)$, but I don't think that helps, because I'm not sure $n/a$ is an integer. Thank you for any help!
By division algorithm, $a=2q+r$ where $r=0\ \text{or}\ 1$ and $q\in\mathbb{Z}$. Now $a^2=4q^2+4qr+r^2$. Since $2|a^2$ it follows that $2|r^2$, whence $r=0$. OR use the fact that if $p$ is a prime such that $p$ divides $ab$ then $p$ divides $a \ \text{or}\ b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 9, "answer_id": 2 }
By transfer principle, is the set of hypernaturals the set of naturals? In Jech, we learn that $x=\mathbb{N}$ is a $\Delta_0$-formula. Can you tell me what is wrong with the following reasoning? Let $\phi(x)$ be the formula $x=\mathbb{N}$. The transfer principle tells us that $\forall y\in\mathbf{S},\phi^{\mathbf{S}}(y)\iff \phi^{\mathbf{I}}(^*y)$ where $\mathbf{S}$ and $\mathbf{I}$ are the class of standard sets and the class of internal sets respectively. But, as $\mathbf{S}$ is transitive, and $\phi$ is $\Delta_0$, $\forall y\in\mathbf{S},\phi(y)\iff\phi^{\mathbf{S}}(y)$ and in particular $\phi(x)\iff \phi^{\mathbf{S}}(x)$. Similarly, as $\mathbf{I}$ is transitive, and $\phi$ is $\Delta_0$, $\forall y\in\mathbf{I},\phi(y)\iff\phi^{\mathbf{I}}(y)$. But $^*x$ is internal, so $\phi(^*x)\iff\phi^{\mathbf{I}}(^*x)$. Using the last two facts, the transfer principle gives $\phi(x)\iff \phi(^*x)$. Taking $x$ equal to $\mathbb{N}$, then $\mathbb{N}={}^*\mathbb{N}$. EDIT: My framework is *ZFC, see the paper here.
Jech proves that "$x=\mathbb{N}$" can be expressed by a $\Delta_0$ formula in ZFC. That is, there is some $\Delta_0$ formula $\phi(x)$ such that ZFC proves there is exactly one set satisfying $\phi(x)$ (and that set is what we think of intuitively as $\mathbb{N}$). However, you are not working in ZFC. You are working in *ZFC, which does not include all the axioms of ZFC: it is missing the axiom of regularity. In fact, the axiom of regularity is crucial to the proof that the $\Delta_0$ formula $\phi(x)$ defines a unique set. So *ZFC actually cannot prove that $\phi(x)$ defines a unique set, and so you cannot conclude that ${}^*\mathbb{N}=\mathbb{N}$. In fact, your argument shows that in *ZFC, there is no $\Delta_0$ formula that defines $\mathbb{N}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability of area of a rectangle (uniform distribution) Given $X\sim U(0,2)$ and $Y\sim U(0,3)$, which are the length and width of a rectangle (respectively), I want to find the probability that the area of the rectangle is less than 1. The hint is the joint density $f(x,y) = 1/6$ for $0\leq x \leq 2$ and $0 \leq y \leq 3$, and $f(x,y) = 0$ otherwise. The answer is 0.4653, but it was never explained why.
Hint: Try to understand whether the following is true and evaluate it. \begin{align} Pr(XY < 1) = \int_0^2 \int_0^{\min(3,\frac1x)}f(x,y)dydx \end{align}
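For reference, evaluating the hinted integral reproduces the stated answer: the inner upper limit is $3$ for $0\le x\le\frac13$ and $\frac1x$ for $\frac13\le x\le2$, so $$\Pr(XY<1)=\frac16\left(\int_0^{1/3}3\,dx+\int_{1/3}^{2}\frac{dx}{x}\right)=\frac{1+\ln 6}{6}\approx 0.4653.$$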
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to calculate $p$ value bound for $\chi^2$ test Consider a hypothesis test concerning the variance of a normal population with $H_0: \sigma^2=339.7$ and $H_a: \sigma^2<339.7$. Select bounds on the $p$ value for $n=11$ and test statistic $\chi^2=1.36$. A) $0.025\leq p\leq0.05$ B) $0.0001\leq p\leq 0.001$ C) $p\leq 0.0001$ D) $0.005\leq p\leq 0.01$
Just to make sure the rationale and computations are clear: To test $H_0: \sigma^2 = 339.7$ against $H_a: \sigma^2 < 339.7,$ one uses the test statistic $$Q_{obs} = \frac{(n-1)S^2}{\sigma_0^2},$$ where $S$ is the sample standard deviation of your normal sample of size $n = 11$ and $\sigma_0^2 = 339.7.$ You do not give the numerical value of the sample SD $S,$ but you report that $Q_{obs} = 1.36.$ You would reject $H_0$ when $Q_{obs}$ is sufficiently small. In your case that would be when $S^2$ is 'significantly' smaller than the null value $339.7.$ Under $H_0$ (that is, assuming $H_0$ to be true), $$Q = \frac{(n-1)S^2}{\sigma_0^2} \sim \mathsf{Chisq}(n-1).$$ At the fixed significance level $\alpha =0.01 = 1\%,$ you would reject $H_0$ if $Q_{obs} < q^*,$ where one sees from printed tables or software that the 'critical value' $q^* = 2.558$ cuts 1% of the probability from the lower tail of the distribution $\mathsf{Chisq}(10),$ the chi-squared distribution with degrees of freedom $\nu = n -1 = 11 - 1 = 10.$ This distribution has mean $E(Q) = \nu = 10.$ Because $Q_{obs} = 1.36 < 2.558$ you would reject $H_0$ at the 1% level of significance. From R statistical software,

```
qchisq(.01, 10) ## 2.558212
```

The P-value is the probability under $H_0$ of getting a value of $Q < Q_{obs}.$ In general, one cannot find the exact P-value using tables of the chi-squared distribution because not enough probabilities and critical values are given. In your specific case, the P-value from R is 0.00069, which is between the values 0.0001 and 0.001 in one of your answers.

```
pchisq(1.36, 10) ## 0.0006907683
```

Presumably your chi-squared table shows critical values close to 0.8889 and 1.4787, which 'bracket' 1.36.

```
qchisq(c(.0001, .001), 10) ## 0.8889204 1.4787435
```

The figure below shows the density curve for $\mathsf{Chisq}(10)$ with a solid vertical black line at $Q_{obs}=1.36.$ The dashed vertical red line cuts 1% of the area from the lower tail of the curve. The two dotted brown lines are at the values mentioned above that bracket your observed value 1.36.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2270909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linear Algebra: I do not understand this definition of a Vector Space I'm studying linear algebra, I'm reviewing vector spaces, and I came across the definition of this real vector space $( \mathcal P_n (\mathbb R), +, \cdot\mathbb R)$, where the 'addition' and 'multiplication' operations are defined as follows (the definitions were given as images in the original post). I just don't know how to read these definitions, i.e. what they mean (both the vector-space triple and the operations).
The source of your confusion may be this: the symbols $+$ and $\cdot$ [although suppressed] are being used in two different ways. First of all we have them in the set of polynomials, just the usual multiplications $a_j x^j$ and so on, and usual additions $a_0+a_1 x +a_2 x^2$ and so on. But then we want to make this into a Vector Space, so we need to define Vector Addition, and Scalar Multiplication by real numbers. Let's call these operations $\dotplus$ and $\cdot$ for a moment. Then the statement is that $$ (a_0+a_1 x+\dots+ a_n x^n)\dotplus(b_0+b_1 x+\dots+ b_n x^n)$$ is defined to be $$((a_0+b_0)+(a_1+b_1) x+\dots+ (a_n +b_n) x^n)$$ and $$c \cdot (a_0+a_1 x+\dots+ a_n x^n)$$ is defined to be $$(c a_0+c a_1 x+\dots+c a_n x^n).$$ You now need to check that if you do this then the nine (?) axioms for a vector space are satisfied. Finally, once you are comfortable with what the notation means, you can be a bit more casual. No confusion is introduced since, for example we have that $a_0 \dotplus a_1 x$ (note the plus with a dot) is actually the same by definition as $a_0+a_1 x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2271035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Unitary operator that leaves a dense subspace invariant Let $\mathcal{H}$ be a Hilbert space, $\mathcal{D}$ be a dense subspace of $\mathcal{H}$ and $U$ be an unitary operator on $\mathcal{H}$. Suppose that $U\mathcal{D}\subseteq \mathcal{D}$. Can we say that $U\mathcal{D}= \mathcal{D}$? If this is not true, do you know a counter example?
Suppose the orthonormal basis is $\{e_n : n \in\mathbb{Z}\}$ and consider the bilateral shift operator $S(e_n)=e_{n+1}$, together with $D=\{x=(x_i)_{i\in\mathbb{Z}} : x_i\neq 0 \text{ for all } i<0\}$. Then $S(D)\subseteq D$, since shifting sends every negative index to the value at another negative index. But $x=(x_n)$ with $x_n=1/n$ for $n\neq 0$ and $x_0=0$ lies in $D$ and not in $S(D)$: its preimage $S^{-1}x$ satisfies $(S^{-1}x)_{-1}=x_0=0$, so $S^{-1}x\notin D$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2271105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Poisson process: How to find the probability of an event occurring during a sub-interval given it occurred during a bigger interval Title doesn't make a lot of sense due to the complicated explanation and the fact I tried to be brief. Basically, I have a Poisson process, and I have been given the information that an event occurred between 9:10:00pm and 9:10:30pm. How do I go about finding the probability that it occurred within the last 10 seconds (i.e. between 9:10:20pm and 9:10:30pm)? (I have the expected number of events at 1 per 5 minutes) I've tried searching for the probabilities of events in overlapping Poisson intervals, but they don't seem to apply and I can't find anything regarding sub-intervals like the one in this question. This question is for an assignment due tomorrow and I originally thought I had a correct answer (the probability for an attack within 10 seconds) but the probability was far too low at 0.0322 and I only just realised it was wrong. It's not just 1/3 is it?
Let's say that the number of events in 10 seconds is Poisson with parameter $\lambda$; then you should know that for 30 seconds it is Poisson with parameter $3\lambda$. Then P(1 event in 30 seconds) = $3\lambda \exp(-3\lambda)$, and P(1 event in 20-30 seconds only) = P(0 events in 0-20)P(1 event in 20-30) = $\exp(-2\lambda) \times \lambda \exp(-\lambda) = \lambda \exp(-3\lambda)$. Let A = 1 event in 20-30 only and B = 1 event in 0-30. Then P(A|B) = P(A and B) / P(B), and P(A and B) = P(A) (since A is a subset of B), so P(A|B) = $\lambda \exp(-3\lambda) / (3\lambda \exp(-3\lambda)) = \frac{1}{3}$. As you suspected, the answer turns out to be 1/3, which I think is justified by symmetry: even though the first event might seem to be weighted towards the beginning of the period, if we know there IS only one event, it is equally likely to be anywhere.
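If you want to double-check this by simulation, here is a sketch in Python (assuming the rate of 1 event per 5 minutes from the question, and interpreting the condition as exactly one event in the 30-second window):

```python
import random

rng = random.Random(0)
rate = 1 / 300          # events per second: 1 per 5 minutes (assumed)
T = 30.0

def event_times():
    """Event times of a Poisson process on [0, T) via exponential gaps."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= T:
            return times
        times.append(t)

hits = total = 0
for _ in range(1_000_000):
    times = event_times()
    if len(times) == 1:         # condition on exactly one event in 30 s
        total += 1
        if times[0] >= 20.0:    # the event fell in the last 10 seconds
            hits += 1

print(hits / total)             # should come out close to 1/3
```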
{ "language": "en", "url": "https://math.stackexchange.com/questions/2271283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Constructive proof for existence of uncountable sets? I stumbled upon this question by trying to prove that the rationals $\Bbb Q$ are not uncountable, but not by using the knowledge that they are already provably countable. In effect, I forbade myself from using proof by contradiction. But then, can I even show that the reals are uncountable in a constructive way? In the end, any kind of diagonal argument is using proof by contradiction, right? So in more general terms: Can I show the existence of an infinite set with no bijection to $\Bbb N$ without using proof by contradiction? I found this, and read from it that this seems to be a hard and still studied question for the reals. But I ask this in more general terms.
Your question "Can I show the existence of an infinite set with no bijection to N N without using proof by contradiction?" is ill-posed because it does not clarify the nature of "existence" you have in mind. This is the gist of constructivist objections to classical mathematics as formulated for instance by Errett Bishop. What you can retain from Cantor's diagonal argument is the following: if you have a map from the natural numbers to the reals then it won't be surjective. This is proved without using the law of excluded middle and in fact this is proved in Bishop's book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2271415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Which event has a higher probability? Which event has a higher probability? $24$ rolls of 2 dice at once we get at least 2 $1$s or one roll of 4 dice at once we get at least one $1$?
Hint. $1$ minus the probability that in 24 rolls of 2 dice we never get 2 ones (at once): $1-\left(\frac{6^2-1}{6^2}\right)^{24}$. 1 minus the probability that in 1 roll of 4 dice we never get one: $1-\left(\frac{6-1}{6}\right)^{4}$. Which number is greater? P.S. Both numbers are quite close to $0.5$.
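Evaluating the two expressions (a quick check in Python, assuming fair dice):

```python
p_24_rolls = 1 - (35/36) ** 24   # at least one double-one in 24 rolls of 2 dice
p_4_dice   = 1 - (5/6) ** 4      # at least one 1 in a single roll of 4 dice
print(p_24_rolls, p_4_dice)      # ~0.4914 and ~0.5177
```

So the single roll of four dice is (slightly) more likely to succeed; this is essentially de Méré's classical comparison.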
{ "language": "en", "url": "https://math.stackexchange.com/questions/2271643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $a+b+c+\sqrt {\frac {a^2+b^2+c^2}{3}}\le4$, Let $a,b,c$ be positive real numbers such that $a^2+b^2+c^2+abc=4.$ Show that $$a+b+c+\sqrt {\frac {a^2+b^2+c^2}{3}}\le4.$$
Let $a=\frac{2x}{\sqrt{(x+y)(x+z)}}$ and $b=\frac{2y}{\sqrt{(x+y)(y+z)}},$ where $x$, $y$ and $z$ are positives. Hence, $c=\frac{2z}{\sqrt{(x+z)(y+z)}}$ and we need to prove that $$\sum_{cyc}\frac{2x}{\sqrt{(x+y)(x+z)}}+\sqrt{\frac{4}{3}\sum_{cyc}\frac{x^2}{(x+y)(x+z)}}\leq4$$ or $$\sum_{cyc}x\sqrt{y+z}+\sqrt{\frac{1}{3}\sum_{cyc}(x^2y+x^2z)}\leq2\sqrt{(x+y)(x+z)(y+z)}.$$ Let $\sum\limits_{cyc}(x^2y+x^2z)=6kxyz$. Hence, by C-S $$\sum_{cyc}x\sqrt{y+z}\leq\sqrt{\sum_{cyc}x\sum_{cyc}x(y+z)}=\sqrt{2\sum_{cyc}(x^2y+x^2z+xyz)}.$$ Thus, it's enough to prove that $$\sqrt{2(6k+3)}+\sqrt{\frac{1}{3}\cdot6k}\leq2\sqrt{6k+2}$$ or $$\sqrt{6k+3}+\sqrt{k}\leq2\sqrt{3k+1},$$ which is C-S again: $$\sqrt{6k+3}+\sqrt{k}=3\sqrt{\frac{2k+1}{3}}+\sqrt{k}\leq\sqrt{(3+1)\left(3\cdot\frac{2k+1}{3}+k\right)}=2\sqrt{3k+1}.$$ Done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2271863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Recurrence relation with limit How do I find the limit of a sequence given by the following recurrence relation: $$ a_{0}=2, \qquad a_1=16, \qquad a_{n+1}^2=a_na_{n-1}. $$ I applied $\ln$ and substituted $$ b_n=\ln (a_n), $$ and so I found the following recurrence relation: $$2b_{n+1}=b_n+b_{n-1}.$$ What do I do next? Can someone explain this recurrence-relation method for finding limits, because I'm not very familiar with it? I understood that it can only be applied if the given sequence converges?
The Ansatz $b_n = x^n$ leads to $2x^2 - x - 1 = 0$ which implies $x = 1,-1/2$. Hence the general solution is $$b_n = c_1 + c_2 \left(-\frac{1}{2} \right)^n.$$ Using the initial conditions we see that $c_1 + c_2 = \ln 2$ and $c_1 - \frac{c_2}{2} = \ln 16$.
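Finishing the computation (a sketch): subtracting the second initial-condition equation from the first gives $\tfrac32 c_2 = \ln 2 - \ln 16 = -\ln 8$, so $c_2 = -2\ln 2$ and $c_1 = \ln 2 - c_2 = 3\ln 2$. Since $(-1/2)^n \to 0$, we get $b_n \to c_1 = 3\ln 2$, hence $$\lim_{n\to\infty} a_n = e^{3\ln 2} = 8.$$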
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Probability of equal no. of red/black cards from selection - simulation vs. answers discrepancy Following reading this thread: "Probability of drawing exactly 13 black & 13 red cards from deck of 52", I created a simple simulation using Excel/VBA to help my son grasp the concept - he's only 7 but wanted to know more... In this simulation, I chose 2, 4, 6, ..., 50 cards from a deck of 52 over 10,000 iterations each and counted how many events of equal red/black cards occurred for each round. The chance of equal red/black cards was then recorded and graphed - results as follows:

* Excel graph of simulated probabilities

These results are in no way remotely close to the solutions given to the original problem. However, to me the simulated results are logical (2 should be the same as 50, 4 the same as 48, etc.). So where have I/we gone wrong?

* Numeric comparison of sim results - P(Sim) - and solutions given to the original post - P(Ans).

Unfortunately my high school days were too long ago to address the deterministic answer. However, I am (supposed to be) an expert at simulations and this problem is a very basic one to perform. I guess it boils down to my disputing the provided answers: i.e. for 10 selected cards, the solution of (10C5)^2/(52C10) ~ 0.0004%, nothing like 27.82% from the sim. Logically this should also equal 42 selected cards (10 left over), but the provided answer calculates as (42C21)^2/(52C42) ~ 1.83E+13. So if anyone can resolve the differences or point out to me where I've missed something, that would be wonderful. Many thanks and regards, David E
The chance of getting five red and five black is $\frac {{26 \choose 5}^2}{52 \choose 10} \approx 0.2735$, very close to your simulation. The $26 \choose 5$s are the number of ways to choose five of the $26$ red (black) cards.
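To cross-check the simulation, the exact values are easy to tabulate (a sketch in Python; the function name is illustrative):

```python
from math import comb

def p_equal(k):
    """Probability of exactly k/2 red cards when drawing k from 26 red + 26 black."""
    return comb(26, k // 2) ** 2 / comb(52, k)

for k in range(2, 52, 2):
    print(k, round(p_equal(k), 4))   # e.g. p_equal(10) ~ 0.2735, p_equal(2) == p_equal(50)
```

This also exhibits the symmetry the question observed: drawing $k$ cards and drawing $52-k$ cards give the same probability.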
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Regarding the sum $\sum_{p \ \text{prime}} \sin p$ I'm very confident that $$\sum_{p \ \text{prime}} \sin p $$ diverges. Of course, it suffices to show that there are arbitrarily large primes which are not in the set $\bigcup_{n \geq 1} (\pi n - \epsilon, \pi n + \epsilon)$ for sufficiently small $\epsilon$. More strongly, it seems that $\sin p$ for prime $p$ is dense in $[-1,1]$. This problem doesn't seem that hard though. Here's something that (to me) seems harder. If $p_n$ is the nth prime, what is $$\limsup_{n \to +\infty} \sum_{p \ \text{prime} \leq p_n} \sin p?$$ What is $$\sup_{n \in \mathbb{N}} \sum_{p \ \text{prime} \leq p_n} \sin p? $$ Of course, we can ask analogous questions for $\inf$. I'm happy with partial answers or ideas. For example, merely an upper bound.
My answer here only includes partial results. First, we use Vinogradov's inequality: Let $\alpha$ be a real number. If integers $a$ and $q$ satisfy $(a,q)=1$ and $$ \left| \alpha - \frac aq \right| \leq \frac 1{q^2}, $$ then $$ \sum_{n\leq N} \Lambda(n) e^{2\pi i \alpha n} = O\left( (Nq^{-1/2} +N^{4/5} + N^{1/2}q^{1/2} ) (\log N)^4 \right) $$ With an error of $O(N^{1/2+\epsilon})$, we obtain the same upper bound for $\sum_{p\leq N} (\log p) \cdot e^{2\pi i \alpha p}$. Since the irrationality measure of $\pi$ is finite (see this), we may use the continued fraction convergents $a/q$ of $\alpha = 1/(2\pi)$. Thus, it is possible to find a denominator $q$ of a continued fraction convergent of $1/(2\pi)$ such that $N^{1/7}<q<N^{99/100}$. Then Vinogradov's inequality yields that there is $\delta>0$ such that $$ \sum_{p\leq N} (\log p)e^{i p} = O(N^{1-\delta}). $$ Now, partial summation gives, for some $\delta>0$, $$ \sum_{p\leq N} e^{ip} = O(N^{1-\delta}). $$ Therefore, by taking imaginary parts, $$ \left|\sum_{p\leq N} \sin p \right| = O(N^{1-\delta}). $$ With this result and partial summation, we obtain that $$ \sum_{p \ \mathrm{prime} } \frac{\sin p}p $$ converges. It will be possible to find the best $\delta>0$ in the upper bound $$ \left|\sum_{p\leq N} \sin p \right| = O(N^{1-\delta}) $$ by using Vinogradov's inequality more efficiently.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Find the sum $\sum_{k=1}^\infty {\frac{6^k}{\left(3^k-2^k\right) \left(3^{k+1}-2^{k+1}\right)}}$ I need to find the sum, $$\sum_{k=1}^\infty {\frac{6^k}{\left(3^k-2^k\right) \left(3^{k+1}-2^{k+1}\right)}}$$ I have tried to break the terms into partial fractions (method of differences) but am not able to do so. How to proceed?
First we can try to split things into two pieces: $$\frac{6^k}{\left(3^k-2^k\right) \left(3^{k+1}-2^{k+1}\right)} = \dfrac{A}{3^k-2^k} + \dfrac{B}{3^{k+1}-2^{k+1}}$$ So we have $A \cdot (3^{k+1}-2^{k+1}) + B \cdot (3^k-2^k) = 6^k$ which can be arranged to $3^k (3 A + B) - 2^k (2 A + B) = 6^k$. If we make $2A+B=0$ and $3A+B=2^k$, this equality will hold. This suggests that $B=-2A$ and we also see that $3A-2A=A=2^k$. Therefore $B=-2^{k+1}$. $$S = \sum_{k=1}^{\infty} \left(\dfrac{2^k}{3^k - 2^k} - \dfrac{2^{k + 1}}{3^{k + 1} - 2^{k + 1}}\right)$$ Now it's looking like a telescoping series. This is more easily seen by displaying a few terms: $$S = \left(\dfrac{2}{1} - \dfrac{4}{5}\right) + \left(\dfrac{4}{5} - \dfrac{8}{19}\right) + \left(\dfrac{8}{19} - \dfrac{16}{65}\right) + ...$$ Most of these fractions cancel each other out. $$S = \lim_{K \rightarrow \infty} \sum_{k=1}^{K} \left(\dfrac{2^k}{3^k - 2^k} - \dfrac{2^{k + 1}}{3^{k + 1} - 2^{k + 1}}\right)= \lim_{K \rightarrow \infty} \left(2 - \dfrac{2^{K + 1}}{3^{K + 1} - 2^{K + 1}}\right) = 2$$
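A quick numeric check of the telescoped value (a sketch in Python with exact rationals):

```python
from fractions import Fraction

s = Fraction(0)
for k in range(1, 15):
    s += Fraction(6 ** k, (3 ** k - 2 ** k) * (3 ** (k + 1) - 2 ** (k + 1)))
print(float(s))   # 1.999..., approaching 2 as predicted
```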
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find a continuous function having the following properties. Which of the following statements is/are true?

1. There exists a continuous map $f:ℝ\to ℝ$ such that $f(ℝ)=\mathbb{Q}$.
2. There exists a continuous map $f:ℝ\to ℝ$ such that $f(ℝ)=\mathbb{Z}$.
3. There exists a continuous map $f:ℝ\to ℝ^2$ such that $f(ℝ)=\{(x,y)\in\mathbb{R^2}: x^2+y^2=1\}$.
4. There exists a continuous map $f:[0,1]\cup[2,3]\to \{0,1\}$

So I tried to solve these, and showed that the first option is false by assuming there exists a continuous map $f:ℝ\to ℝ$ such that $f(ℝ)=\mathbb{Q}$. If there were, then we could find $a,b \in \mathbb R$ with $f(a) = 1$ and $f(b) = 2$. Either $a < b$ or $b < a$. Let's suppose $a < b$. Since $f(x)$ is continuous on $\mathbb R$ it is also continuous on $[a,b]$. By the intermediate value theorem, and the fact that $1 < \sqrt{2} < 2$, there exists a $c \in (a,b)$ such that $f(c) = \sqrt 2$. But $\sqrt{2} \not\in \mathbb Q$, hence $f(\mathbb R)$, which includes $f(c)$, cannot just be $\mathbb Q$. Now my concerns are: can I argue similarly for option 2? My intuition says that options 3 and 4 are correct, but I am unable to find explicit functions so far. Can anyone help me clear my doubts? Thanks.
Hints for 3: Don't try to find a bijective function; it doesn't exist. What is the name of the set where $x^2+y^2=1$? Have you studied that set before? Hints for 4: Don't try to find a nice formula, like a polynomial. You can define a function in English if you want. What do you need the function to do?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
The integer values of $\sum_{d|n}\frac{\sigma(n/d)^d}{d}$. I was looking for integer values of the sum $$\sum_{d|n}\frac{\sigma(n/d)^d}{d}$$ where $\sigma(n)$ is the sum of the divisors of $n$, and amazingly found that the only integer values for $n<1500000$ are: $$1, 39, 793$$ So I conjecture that these are the only integers of this kind. Any ideas how to prove or disprove this? The corresponding OEIS sequence is: https://oeis.org/A268983 EDITED: With @RobertIsrael's help in the question The solution of congruences system. and following @user1952009's comment, I have found the next item in this sequence: $$2408321608150261253061174553 = 22419767768701 * 107419560853453$$
Let $$f(n) =\sum_{d | n} \frac{\sigma(n/d)^d}{d}$$ If $p,q$ are two different primes then $$f(pq) =\sigma(pq) + \frac{p(p+1)^q + q (q+1)^p + 1}{pq}$$ For $f(pq) \in \mathbb{Z}$ we need $p(p+1)^q + q (q+1)^p + 1 \equiv 0 \bmod p$ and $q$ $$\implies \qquad q (q+1)\equiv -1 \bmod p, \qquad p (p+1)\equiv -1 \bmod q$$ (by the Fermat little theorem) Letting $g(n) = \sum_{d | n} \frac{\sigma(n/d)}{d}$ which is multiplicative, then $f(pq) \in \mathbb{Z}$ iff $$\sigma(pq)+\frac{p(p+1) + q (q+1) + 1}{pq}=g(pq)=g(p)g(q)= (p+1+\frac{1}{p})(q+1+\frac{1}{q}) \quad \in \mathbb{Z}$$ With this Matlab program I didn't find more solutions:

```
for a = [1:3200]
    A = a*(a+1)+1;
    fac = factor(A);
    for j = 1:length(fac)
        p = fac(j);
        if p > a
            P = p*(p+1)+1;
            fac2 = factor(P);
            for j2 = 1:length(fac2)
                q = fac2(j2);
                if mod(q,p) == a
                    fprintf('%d %d \n', p,q);
                end
            end
        end
    end
end
```
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Efficient algorithm for finding period of Markov chain What is the least time consuming way to find a period of state of irreducible Markov chain? I wondering if there is an algorithm which does not use matrix multiplication?
There is an efficient algorithm based on breadth-first search here: http://cecas.clemson.edu/~shierd/Shier/markov.pdf Given a dense $N \times N$ matrix, the algorithm is worst-case $O(N^2)$, whereas computing all matrix powers would be $O(N^4)$, which is most likely much slower for matrices of any significant size.
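A sketch of that BFS idea in Python (assuming the chain is given by the adjacency lists of its transition graph; `period` and the dict representation are illustrative, not taken from the paper):

```python
from collections import deque
from math import gcd

def period(adj, start=0):
    """Period of an irreducible chain: BFS levels from `start`; for every
    edge u -> v, level[u] + 1 - level[v] is a multiple of the period, and
    the gcd of all such differences equals the period."""
    level = {start: 0}
    queue = deque([start])
    g = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                queue.append(v)
            g = gcd(g, abs(level[u] + 1 - level[v]))
    return g

print(period({0: [1], 1: [2], 2: [0]}))      # a 3-cycle: period 3
print(period({0: [0, 1], 1: [2], 2: [0]}))   # add a self-loop: period 1
```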
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Tile Edge Challenge I've been having a problem with these problems for a week now, so I thought I'd post it on MSE. Read the text below (I've also typed it up): Annabel made a shape by placing identical square tiles in a frame as shown in the diagram above. The tiles are arranged in columns. Each column touches the base but no columns touch the side or top. There are no empty gaps between columns. The frame can be enlarged as needed. These are the questions (also typed up):

A. Show that it is possible to arrange $7$ tiles so that the ant walks $8,9,10,11,12,13,14,15$ tile edges.

B. Show 6 ways of arranging 7 tiles so that the ant walks a total of 9 tile edges.

C. Show it is possible to arrange 49 tiles so that the ant walks a total of less than 21 tile edges.

D. Show four arrangements of 137 tiles, each arrangement with a different maximum height, so that the ant walks a total of 34 tile edges.

For question A, is there a formula relating the number of tile edges exposed to the number of tiles? These are my answers for A (in the picture above, the left-hand side has the A answers). However, I'm having a bit of trouble with B. I've already got four answers down, but where are the other two? My four current answers are on the right-hand side of the last image. Also, I'd appreciate a formula to show the relationship. I'm also having trouble with C. I found out that you get 21 tile edges exposed when you make a $7\times7$ square, but the answer asks for less than 21. Can anyone help me? Also, as in the previous questions, a formula to show the relationships would be great. And for D, I am totally lost. I have no idea where to start, and I certainly can't find four arrangements. So can anyone help me? And as always, a formula is nice. P.S. Please explain this in an understandable way, as I'm only a Year 7 with the ability to understand linear algebra and parts of quadratics and trigonometry. Also, sorry about the poor imaging.
If the shape is convex then, since the total height the ant goes up equals the total height it goes down, and the horizontal length walked equals the side length, the formula for the convex type is "Total = HighestVertical $\times 2$ + LongestSide".

B: The missing arrangements are $(3,3,1)$ and $(1,3,3)$.
C: $(5,5,5,5,5,5,5,5,5,4)$ gives $20$.
D: Here are 4 ways: $(7,7,\dots,7,4)$, $(8,8,\dots,8,1)$, $(9,9,\dots,9,2)$, $(10,10,\dots,10,7)$.
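For arrangements that need not be convex, a count consistent with all the cases above is Total $= h_1 + h_k + k + \sum_i |h_i - h_{i+1}|$ for column heights $h_1,\dots,h_k$ (up the first column, along each column top, up or down at each height change, down the last column). A quick Python check of the listed answers (the function name is mine):

```python
def edges_walked(heights):
    h = list(heights)
    return h[0] + h[-1] + len(h) + sum(abs(a - b) for a, b in zip(h, h[1:]))

print(edges_walked([5] * 9 + [4]))       # 20 edges with 49 tiles (part C)
print(edges_walked([7] * 19 + [4]))      # 34 edges with 137 tiles (part D)
print(edges_walked([8] * 17 + [1]))      # 34
print(edges_walked([9] * 15 + [2]))      # 34
print(edges_walked([10] * 13 + [7]))     # 34
```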
{ "language": "en", "url": "https://math.stackexchange.com/questions/2272959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A Question From Shiryaev I was studying $\lambda$-systems from Shiryaev, where I encountered the following statement about the equivalence of conditions for defining a $\lambda$-system (the statement was given as an image in the original post). I can check the equivalence of the second set of conditions given the first. However, going the other way, I am not able to show condition $(\lambda_{b})$ given conditions $(\lambda_{a}), (\lambda'_{b})$ and $(\lambda'_{c})$. Can someone please provide a hint on how to prove this?
Hint: If $A,B \in \mathscr{L}$ and $A \subseteq B$, then can you show (in my notation, $B^c$ is the complement of $B$):

a) $B^c \in \mathscr{L}$;

b) $B^c\cup A \in \mathscr{L}$ (since $A\subseteq B \iff A\cap B^c = \emptyset$, the union is disjoint);

c) $B^c\cup A = (B\setminus A)^c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Number of real solutions $2\cos(\frac{x^2+x}{2})=2^x+2^{-x} $ The number of real solutions of $2\cos(\frac{x^2+x}{2})=2^x+2^{-x} $ is (1) 0 (2) 1 (3) 2 (4) infinitely many . My work : $$ 1\geq \cos\left(\frac{x^2+x}{2}\right)=\frac{2^x+2^{-x} }{2}\geq 1 \qquad \text{by (AM-GM).} $$ So $\frac{x^2+x}{2}=2n\pi$ for all $n\in \mathbb{Z}$ . Now discriminant $=1+2n\pi$ is always positive for $n\geq0$ . But the equation is a quadratic so it has only two solution . Hence the answer must be 2 . PS: I'm aware that the this problem is already on the site but i posted as there were no complete solution .
Hint: $$-2\leq 2\cos(\frac {x^2+x}{2})\leq 2$$ $$2^x+2^{-x}\geq 2$$ so any root must satisfy $$2^x+2^{-x}=2,$$ which gives $x=0$. Since $\cos(\frac{0^2+0}{2})=\cos 0=1$, $x=0$ does satisfy the equation, so it is the unique root.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Calculate $\sqrt{2i}$ I did: $\sqrt{2i} = x+yi \Leftrightarrow i = \frac{(x+yi)^2}{2} \Leftrightarrow i = \frac{x^2+2xyi+(yi)^2}{2} \Leftrightarrow i = \frac{x^2-y^2+2xyi}{2} \Leftrightarrow \frac{x^2-y^2}{2} = 0 \land \frac{2xy}{2} = 1$ $$\begin{cases} \frac{x^2-y^2}{2} = 0 \\ xy = 1\\ \end{cases} \\ =\begin{cases} x^2-y^2 = 0 \\ x = \frac{1}{y}\\ \end{cases} \\ =\begin{cases} \frac{1}{y}-y^2 = 0 \\ x = \frac{1}{y}\\ \end{cases} \\= \begin{cases} \frac{1-y^3}{y} = 0 \\ -\\ \end{cases} \\= \begin{cases} y^3 = 1 \\ -\\ \end{cases} \\= \begin{cases} y = 1 \\ x =1\\ \end{cases} $$ And so $\sqrt{2i} = 1+i$, but my book states the solution is $\sqrt{2i} = 1+i$ and $\sqrt{2i} = -1-i$. What did I forget?
Write $2i=2e^{i(\pi/2+2k\pi)}$. Then take the square root to get $\sqrt{2}\, e^{i(\pi/4+k\pi)}$. So your roots are $\sqrt{2}e^{i\pi/4}$ and $\sqrt{2}e^{5i\pi/4}$, which are $\pm(1+i)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Using Cramer's Rule to Derive Dot Product I'm hoping to derive an equation for the dot product using Cramer's rule. Here I'm going to try in $\mathbb{R}^2$ and will generalize once I get this first issue cleared. I'm hoping to arrive at an expression for the dot product by first asking how to describe some vector $\mathbf{r}$ in terms of two basis vectors $\mathbf{\hat{m}}, \mathbf{\hat{n}}$. If you can, then the coefficients of the linear combination ought to be equal to the projection of $\mathbf{r}$ onto one of its basis vectors. Thus by solving the system $$\mathbf{r} = \alpha\mathbf{\hat{m}} + \beta\mathbf{\hat{n}}$$ in the unknowns $\alpha, \beta$, I want to see that $\alpha = \mathbf{r}\cdot \mathbf{\hat{m}}$ and $\beta = \mathbf{r} \cdot \mathbf{\hat{n}}$ Using Cramer's rule and the fact that $\det A = \det A^T$, we see that $$\alpha = \frac{ \begin{array}{|cc|} r_x & r_y \\ n_x & n_y \end{array} } { \begin{array}{|cc|} m_x&m_y\\ n_x&n_y \end{array} } \qquad \beta = \frac{ \begin{array}{|cc|} m_x & m_y \\ r_x & r_y \end{array} } { \begin{array}{|cc|} m_x&m_y\\ n_x&n_y \end{array} } $$ Now it seems that $\alpha \ne \mathbf{r}\cdot\mathbf{\hat{m}} = r_xm_x + r_ym_y = \begin{array}{|cc|} r_x & r_y \\ -m_y & m_x \end{array} $ and similarly our expectations did not hold for $\beta$. Can someone please explain what I am misunderstanding that leads to this unexpected conclusion
This only works if the two basis vectors are orthonormal. Putting that in your assumption presupposes the conclusion. However, there is another way to see it: this should define a new inner product, not necessarily the dot product, in which they are orthonormal.
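To see the orthonormal case explicitly (a sketch): if $\mathbf{\hat m}=(m_x,m_y)$ is a unit vector and we take $\mathbf{\hat n}=(-m_y,m_x)$, the unit vector orthogonal to it (up to sign), then the denominator in Cramer's rule is $$\begin{vmatrix} m_x & m_y\\ -m_y & m_x\end{vmatrix}=m_x^2+m_y^2=1,$$ and the numerator for $\alpha$ becomes $$\begin{vmatrix} r_x & r_y\\ -m_y & m_x\end{vmatrix}=r_xm_x+r_ym_y=\mathbf r\cdot\mathbf{\hat m},$$ so the expected projection formula is recovered exactly when the basis is orthonormal.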
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of solutions for $x_1+ x_2+ x_3 + \cdots + x_k=n$, where $0\leq x_i\leq p$ for all $i$ I know that for $1\leq x_i\leq p$ the answer will be the coefficient of $x^n$ in $(x + x^2 + x^3 + ... + x^p)^k$. But what will be the answer for the constraint $0 \leq x_i \leq p?$ Also, how can I generate a definite formula or recurrence relation to program it? It will be difficult to calculate the answer by summing up the GP series and then calculating the coefficients using series expansion. Thank you!
For any fixed $p,k$ you look at the generating function $((1-x^{p+1})/(1-x))^k$ and the coefficient of $x^n$ is the answer $a(n)$. The generating function factors as $(1-x^{p+1})^k (1-x)^{-k}$ and each of these involves binomial coefficients. Thus the product coefficients is given by a sum involving products of binomial coefficients. $$a(n) = \sum_{i=0}^{\left\lfloor(n+1)/(p+1)\right\rfloor} (-1)^i {k \choose i} {n-(p+1)i+k-1 \choose k-1}.$$
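A small brute-force cross-check of this formula (a sketch in Python; the upper limit $\lfloor n/(p+1)\rfloor$ suffices, since later terms vanish):

```python
from math import comb
from itertools import product

def a_formula(n, k, p):
    return sum((-1) ** i * comb(k, i) * comb(n - (p + 1) * i + k - 1, k - 1)
               for i in range(n // (p + 1) + 1))

def a_brute(n, k, p):
    return sum(1 for xs in product(range(p + 1), repeat=k) if sum(xs) == n)

for k in range(1, 5):
    for p in range(1, 4):
        for n in range(k * p + 1):
            assert a_formula(n, k, p) == a_brute(n, k, p)
print("formula agrees with brute force")
```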
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Union of connected space If $\{A_\alpha, \alpha \in I\}$ is a collection of conected spaces and $\cap A_\alpha \neq \emptyset$ then $\cup A_\alpha$ is connected. My proof: If $\cup A_\alpha$ is not connected then we can find $V,W\neq\emptyset$ such that $V\cup W = \cup A_\alpha$ and $V\cap W = \emptyset$. Suppose $\cup A_\alpha$ is not connected. Since $\cap A_\alpha \neq \emptyset$ then $\exists x\in \cap A_\alpha$. Because $V\cup W = \cup A_\alpha$ and $V\cap W = \emptyset$ $x\in V$ or $x\in W$, but if $x\in V$ then $x\notin W$ therefore $\exists \beta \in I$ such that $x\notin A_\beta$, likewise if $x\in W$. We reach a contradiction, thus $\cup A_\alpha$ is connected
Suppose $\cup_{\alpha} A_{\alpha}$ is not connected and let $V$ and $W$ be two nonempty disjoint sets so that $V\cup W = \cup_{\alpha} A_{\alpha}$. Let $x\in \cap_{\alpha} A_{\alpha}$. Then $x \in V$ or $x \in W$ but not both. WLOG let $x \in V$. Let $y \in W$; then $y \in A_i$ for some set $A_i$. And $x \in A_i$ because $x$ is in all $A_{\alpha}$. So $W\cap A_i$ is not empty as it contains $y$, and $V \cap A_i$ is not empty as it contains $x$. So $A_i = (W\cap A_i) \cup(V\cap A_i)$, two disjoint non-empty sets. Now $A_i$ is connected, so either $\overline{ (W\cap A_i)}\cap (V\cap A_i)$ is not empty or $(W\cap A_i)\cap \overline{(V\cap A_i)}$ is. Let $p$ be a point in the non-empty intersection. If $p \not \in W$ then $p \in V$; but $p$ is in $\overline{ (W\cap A_i)}$, so it is a limit point of $ (W\cap A_i)$, therefore a limit point of $W$, and so $\overline W \cap V \ne \emptyset$. Otherwise $p \in W$ but $p \not \in V$, and the same argument shows $\overline{V} \cap W \ne \emptyset$. So $\cup_{\alpha} A_{\alpha}$ cannot be partitioned into two disjoint nonempty sets so that the closure of neither intersects the other. So the union is connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $f(x)=x^{n-1}\log x$, then the $n$-th derivative of $f$ is equals? The options are: A) $\dfrac{(n-1)!}{x}$; B) $\dfrac{n}{x}$; C) $(-1)^{n-1}\dfrac{(n-1)!}{x}$; D) $\dfrac{1}{x}$ My attempt: $$f'(x)= (n-1)x^{n-2}\log x+ x^{n-2}$$ $$f''(x)=(n-2)(n-1)x^{n-3}\log x+ (n-1)x^{n-3}+(n-2)x^{n-3}$$ But I fail to see any pattern...
Here is the pattern: The second summand in $f'(x)$ is $x^{n-2}$. This summand will not survive $n-1$ more derivatives, and so, you may ignore it. This leaves you with $$f'(x)=(n-1)x^{n-2}\log x+(\mathrm{irrelevant\; stuff}).$$ Likewise, for the second derivative you have$$f''(x)=(n-2)(n-1)x^{n-3}\log x +(\mathrm{irrelevant\;stuff}).$$ It is not too hard to continue this until you reach the $n$-th derivative.
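Carrying the pattern to the end: after $k$ derivatives the surviving term is $\frac{(n-1)!}{(n-1-k)!}x^{n-1-k}\log x$ plus polynomial terms, so after $n-1$ derivatives one is left with $(n-1)!\log x$ plus a constant, and one final derivative gives $$f^{(n)}(x)=\frac{(n-1)!}{x},$$ i.e. option A.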
{ "language": "en", "url": "https://math.stackexchange.com/questions/2273879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How is my computation of $\int e^{x}\ln(1+e^{x})\,dx$ incorrect? I am stuck on this problem. Problem: Evaluate $$\int e^{x} \ln (1+e^{x})\,dx$$ Attempt (integration by parts): Let $u=\ln(1+e^x)$ and $dv = e^{x}\,dx$, so that $du = \frac{e^x}{1+e^x}\,dx$ and $v=e^x$. $$\int u \ dv= uv - \int v \ du$$ Thus $$I=\ln(1+e^x) e^x - \int e^x \frac{e^x}{1+e^x} \ dx$$ On the second integral apply $u=1+e^x$ and $\ du = e^x \ dx$ and $$\begin{split} I&=\ln (1+e^x) e^x - \int \frac{u-1}{u} \ du \\ &= \ln (1+e^x) e^x - \int 1- \frac{1}{u} \ du \end{split}$$ which simplifies to $$I=\ln (1+e^x) e^x - x- \ln(|1+e^x|) +C $$
You should just let $u=1+e^x$, $du=e^x\,dx$ Then \begin{eqnarray} \int e^{x}\ln(1+e^{x})dx&=&\int\ln(u)\,du \\ &=&u\ln(u)-u+c \end{eqnarray} and take it from there.
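Back-substituting $u=1+e^x$ gives $$\int e^{x}\ln(1+e^{x})\,dx=(1+e^x)\ln(1+e^x)-(1+e^x)+c,$$ and comparing with the attempt in the question locates the slip: in the step $\int 1\,du$, the integral is taken in the $u$-variable, so it equals $u=1+e^x$, not $x$. With that correction the two methods agree (up to a constant).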
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Money Interest Problem Jake has borrowed 20000 pesos from the bank at an interest rate of 10% per year. If Jake completes the payment within 3 years, what amount should be paid to the bank? Could someone explain the entire solution to me?
Amount $=$ Principal $+$ Interest (assumption: interest is calculated on the initial principal, i.e. simple interest). Interest $=$ Principal $\times$ Rate $\times$ Years $= 20{,}000\times0.1\times 3 = 6{,}000$. Total money $= 20{,}000+6{,}000 = 26{,}000$ pesos.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Domain of $f(x)=\sqrt{\lfloor x\rfloor-1+x^2}$ I drew the number line and tested with different values, getting the correct domain $(-\infty,-\sqrt3)\cup[1,\infty)$. However, how do I solve this faster by manipulating the function?
You must find the values of $x$ such that $\lfloor x\rfloor-1+x^2\geq0$. The easy part:

* It's true for $x\geq1$, since $x^2\geq1$ and $\lfloor x\rfloor>0$.
* For $0\leq x<1$, it's false, because $\lfloor x\rfloor=0$ and $x^2<1$.

Now, the case $x<0$. First notice that for $x\in[n,n+1[$, for integer $n$, you have $\lfloor x\rfloor=n$, hence $\lfloor x\rfloor-1+x^2=x^2+n-1$. For $n<0$, its infimum on $[n,n+1[$ is approached as $x\to n+1$, and this infimum is $(n+1)^2+n-1=n^2+3n=n(n+3)$. Hence, for $n<-2$, it's nonnegative. There are two intervals left, $[-2,-1[$ and $[-1,0[$.

* On $[-1,0[$, $\lfloor x\rfloor-1+x^2=x^2-2<0$ because $x^2\leq1$.
* On $[-2,-1[$, $\lfloor x\rfloor-1+x^2=x^2-3$. This is nonnegative for $x\leq-\sqrt{3}$, hence, for $-2\leq x\leq-\sqrt3$.

All in all, $\lfloor x\rfloor-1+x^2\geq0$ for $x\in]-\infty,-\sqrt3]\cup[1,+\infty[$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Integral with $+i\epsilon$ prescription involving residue theorem? Consider the integral $$I = \int_{-1}^{1} \frac{\text{d}x}{(x + \xi - i\epsilon) (x- \xi + i \epsilon)}$$ where $\xi$ is valued in $[-1,1]$. If I want to note the contribution of this integral at the point $x=\xi$ does the $+i\epsilon$ prescription allow me to simply write that $$I_{x=\xi} = \frac{1}{2\xi}?$$ I just said that $x-\xi$ is then zero while $x+\xi$ is $2\xi$ and the $+i\epsilon$ prescription avoids the pole at this point. 1) Is this answer correct? 2) If so, is there a more mathematically rigourous way of showing this result and if the result is not correct how can one proceed to find this $x=\xi$ contribution to $I$? I'm thinking the result is ok as the residue theorem tells me that the residue of the pole term is just one but would be nice to check this.
It is not clear to me what your question means. But why not calculate the integral $$f(\xi ,\epsilon )=\int_{-1}^1 \frac{1}{(\xi +x-i \epsilon ) (-\xi +x+i \epsilon )} \, dx$$ explicitly? Writing the integrand as $$\frac{1}{(\xi +x-i \epsilon ) (-\xi +x+i \epsilon )}=\frac{1}{a^2+x^2}$$ with $$a\to \epsilon +i \xi$$ the integral can be done using the well known relation $$\int_{-1}^1 \frac{1}{a^2+x^2} \, dx=\frac{2 \tan ^{-1}\left(\frac{1}{a}\right)}{a}$$ Hence the explicit solution of the integral is $$f(\xi ,\epsilon )=\frac{2 \tan ^{-1}\left(\frac{1}{\epsilon +i \xi }\right)}{\epsilon +i \xi }$$ Let us plot Re, Im, and Abs of the complex function $f$ as a function of $\xi$ for two small values of $\epsilon$ EDIT #1 Your question sounds as if you are talking about the indefinite integral. Ok this is given by $$\int \frac{1}{(\xi +x-i \epsilon ) (-\xi +x+i \epsilon )} \, dx\\= \frac{1}{4 (\epsilon +i \xi )}\left( i \left(\log \left(\xi ^2+x^2-2 \xi x+\epsilon ^2\right)-\log \left(\xi ^2+x^2+2 \xi x+\epsilon ^2\right)\right)\\+2 \tan ^{-1}\left(\frac{x \epsilon }{\xi (\xi -x)+\epsilon ^2}\right)+2 \tan ^{-1}\left(\frac{x \epsilon }{\xi (\xi +x)+\epsilon ^2}\right)\right)$$ and in the limit $\xi \to x$ this is $$\frac{i \left(\log \left(\epsilon ^2\right)-\log \left(4 x^2+\epsilon ^2\right)\right)+2 \tan ^{-1}\left(\frac{x \epsilon }{2 x^2+\epsilon ^2}\right)+2 \tan ^{-1}\left(\frac{x}{\epsilon }\right)}{4 (\epsilon +i x)}$$ The limit $\epsilon \to 0$ of which is $\infty$ in this sense $$\lim_{\epsilon \to 0} \, \frac{i \left(\log \left(\epsilon ^2\right)-\log \left(4 x^2+\epsilon ^2\right)\right)+2 \tan ^{-1}\left(\frac{x \epsilon }{2 x^2+\epsilon ^2}\right)+2 \tan ^{-1}\left(\frac{x}{\epsilon }\right)}{4 (\epsilon +i x)} = \left(-\frac{1}{\text{sgn}(x)}\right) \infty$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Convergence question from Stromberg Classical Analysis I have been trying to find whether the following series converge. If so, to what limit? Define $x_0=0$ and $x_1=1$, and $$ x_{n+1~}=\frac{1}{n+1}x_{n-1}+\frac{n}{n+1}x_n \qquad n\geq 1 $$ There are other ways to show convergence, but I showed that $x_{2n-2}\leq x_{2n}\leq x_{2n+1}\leq x_{2n-1}$ for all $n$ and $\lim_{n\to\infty}(x_{2n-1}-x_{2n})=0$. Thus, $\lim_{n\to\infty} x_n=x$. To find the actual value of $x_n$ I have tried many things but could not find the value. I think the most promising manipulation I got so far is the following $$ \frac{1}{2}+\frac{1}{3!}-\frac{1}{4!}+\frac{1}{5!}-\frac{1}{6!}+\frac{1}{7!}-\frac{1}{8!}+\dots $$ Thanks for any help!
It seems to me that your guessed limit is $$ 1 + \sinh(1) - \cosh(1). $$ Just compare the expansions $$ \cosh(x)=\sum_{n=0}^\infty \frac{x^{2n}}{(2n)!}, \quad \sinh(x) = \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!}. $$ You should try to prove it, maybe by induction.
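For the record, the guessed limit simplifies: since $\sinh(1)-\cosh(1)=-e^{-1}$, $$1+\sinh(1)-\cosh(1)=1-\frac1e.$$ This is consistent with the telescoping identity $x_{n+1}-x_n=\frac{x_{n-1}-x_n}{n+1}$, which gives $x_{n+1}-x_n=\frac{(-1)^n}{(n+1)!}$ and hence $$\lim_{n\to\infty}x_n=\sum_{n=0}^\infty\frac{(-1)^n}{(n+1)!}=1-e^{-1}.$$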
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is wrong with this "proof"? $1$ is always an eigenvalue for $I + A$ ($A$ is nilpotent)? Consider the nilpotent matrix $A$ ($A^k = 0$ for some positive $k$). It is well known that the only eigenvalue of $A$ is $0$. Then suppose $\lambda$ is any eigenvalue of $I + A$ such that $(I + A) \mathbf{v} = \lambda \mathbf{v}$ where ($\mathbf{v} \neq \mathbf{0}$). Then $I \mathbf{v} + A \mathbf{v} = \lambda \mathbf{v} \implies \mathbf{v} + A \mathbf{v} = \lambda \mathbf{v} \implies A \mathbf{v} = (\lambda - 1) \mathbf{v}$ We know that $\lambda - 1 = 0$ because the only eigenvalue of a nilpotent matrix is $0$. Therefore $\lambda = 1$ This "proof" seems to indicate that $1$ is always an eigenvalue for the sum of the identity matrix with any nilpotent matrix, but I believe I have a counterexample that disproves this. I believe my error was in assuming that $I + A$ has eigenvalues -- but I do not know how I could prove/disprove this. If someone could help me see where I've gone wrong I would greatly appreciate it!
You have proven that if $A$ is nilpotent, then every eigenvalue of $A+I$ (not of $A$ itself) is equal to $1$. There's nothing wrong with the proof: in fact, since $A$ nilpotent means its characteristic polynomial is $x^n$, the characteristic polynomial of $I+A$ is $(x-1)^n$, so $1$ is always an eigenvalue of $I+A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Proving an absence of rational and integer solutions. Prove $$(x+y\sqrt2)^2+(z+t\sqrt2)^2=5+4\sqrt2$$ has no solution in rational $(x,y,z,t)$ Prove $$(5+3\sqrt2)^m=(3+5\sqrt2)^n$$ has no solution for positive integers $(m,n)$ How do I approach these kinds of problems? I'm not sure where to start. Also, what are some more problems in this category to practice?
Hint for the second part: Take norms: $N(a+b\sqrt2)=a^2-2b^2$. The key property is $N(\alpha\beta)=N(\alpha)N(\beta)$.
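Carrying the hint through: $N(5+3\sqrt2)=25-18=7$ and $N(3+5\sqrt2)=9-50=-41$, so taking norms of $(5+3\sqrt2)^m=(3+5\sqrt2)^n$ yields $7^m=(-41)^n$. The left side is positive and divisible only by the prime $7$, while the right side is $\pm41^n$, so there are no positive integers $m,n$.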
{ "language": "en", "url": "https://math.stackexchange.com/questions/2274777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Completing the square of $x^2 - mx = 1$ is not giving me the right answer. This is my attempt $$ \begin{align} x^2 - mx &= 1 \\ x^2 - mx - 1 &= 0 \\ \left(x^2 - mx + \frac{m^2}{4} - \frac{m^2}{4}\right) - 1 &= 0 \\ \left(x^2 - mx + \frac{m^2}{4}\right) - \frac{m^2}{4} - 1 &= 0 \\ \left(x^2 - mx + \frac{m^2}{4}\right) - \frac{m^2}{4} - \frac{4}{4} &= 0 \\ \left(x^2 - mx + \frac{m^2}{4}\right) - \frac{m^2 - 4}{4} &= 0 \\ \left(x^2 - mx + \frac{m^2}{4}\right) &= \frac{m^2 - 4}{4} \\ \left(x - \frac{m}{2}\right)^2 &= \frac{m^2 - 4}{4} \\ \sqrt{\left(x - \frac{m}{2}\right)^2} &= \sqrt{\frac{m^2 - 4}{4}} \\ x - \frac{m}{2} &= \pm \frac{\sqrt{m^2 - 4}}{\sqrt{4}} \\ x &= \frac{m}{2} \pm \frac{\sqrt{m^2 - 4}}{2} \\[20pt] x_1 &= \frac{m}{2} - \frac{\sqrt{m^2 - 4}}{2} \\ x_1 &= \frac{m - \sqrt{m^2 - 4}}{2} \\[16pt] x_2 &= \frac{m}{2} + \frac{\sqrt{m^2 - 4}}{2} \\ x_2 &= \frac{m + \sqrt{m^2 - 4}}{2} \\ \end{align} $$ However, the correct answer according to the text is: $$ \begin{align} x_1 &= \frac{m}{2} - \frac{\sqrt{m^2 + 4}}{2} \\ x_2 &= \frac{m + \sqrt{m^2 + 4}}{2} \\ \end{align} $$ Why $\sqrt{m^2 + 4}$ instead of $\sqrt{m^2 - 4}$ ???
(Not an answer, just a long comment.) Your actual question has already been answered, but I want to point out another mistake, namely when you go from $$\left(x - \frac{m}{2}\right)^2 = \frac{m^2 - 4}{4}$$ to $$\sqrt{\left(x - \frac{m}{2}\right)^2} = \sqrt{\frac{m^2 - 4}{4}}.$$ At this point, there should be $\pm$ signs before the $\surd$ signs (it's redundant to put $\pm$ on both sides of the equation, but there needs to be a $\pm$ on at least one of them). You've added the $\pm$ signs in the next line, so it's not the end of the world, but conceptually it's important to understand where to put them. Here's a simpler example: suppose we have $x^2=9$. Taking square roots, we have $$x = \pm\sqrt9$$ so $x=\pm3$. Note that $\sqrt9$ in itself means only $3$, not $\pm 3$! It is not the $\surd$ sign, but rather the act of taking the square root, that engenders the $\pm$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Truncated taylor series inequality I came across the following fact in a paper and am having trouble understanding why it is true: Consider the error made when truncating the expansion for $e^a$ at the $K$th term. By choosing $K = O(\frac{\log N}{\log \log N})$, we can upper bound the error by $1/N$, in other words $$\sum_{j={K+1}}^\infty \frac{a^j}{j!} \leq \frac{1}{N}.$$ Here, the $O$ is "big-O" notation. I'm thinking a Stirling approximation probably has to be used in the denominator but still can't reproduce this result. Maybe this is some well-known result that I'm not aware of?
What we are asked to prove is this: there is a constant $C$ such that if $$ K \ge C \frac{\log N}{\log \log N} $$ then $$ \sum_{j=K+1}^\infty \frac{a^j}{j!} \le \frac1N. \tag1$$ Now clearly the left hand side of (1) grows to infinity as $a \to \infty$, so it must be that $C$ depends upon $a$. Suppose $K \ge 2a$ (which follows if $C \ge 2a$). Then each ratio of consecutive terms is $a/j \le a/K \le \frac12$, so the left hand side of (1) can be bounded above by a geometric series: $$ \sum_{j=K+1}^\infty \frac{a^{K}}{K!} 2^{-(j-K)} = \frac{a^K}{K!} .$$ Now use Stirling's formula, noting that if $$ K \ge C \frac{\log N}{\log \log N} $$ then $$ \log(K^K) = K \log K \ge C \frac{\log N}{\log\log N} (\log\log N - \log\log\log N) $$ and that if $N$ is sufficiently large then $$ \log\log N - \log\log\log N \ge \tfrac12 \log\log N .$$ By the way, if you want a much sharper result, use the approximation of the Poisson distribution by the normal distribution, and Proof of upper-tail inequality for standard normal distribution, to get $$ K \ge C \sqrt{\log N} .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Proving Variance of Normal Distribution My question is as follows: using the standard integral $$\int_{-\infty}^{\infty}e^{-ax^2}dx=\sqrt{\frac{\pi}{a}}$$ prove directly from the definition that the variance of the normal distribution, $$f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$ is $\sigma^2$. It is important to note that no understanding of why the integral above is true is needed to answer the question. I am instructed that integration by parts is required, but I'm struggling... Ok, I've tried with much help from you lot, thanks greatly, but have missed something: $$\begin{align}Var(X) &= E((X-\mu)^2) \\ & = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}(x-\mu)^2e^{-\frac{(x-\mu)^2}{2\sigma^2}}dx \\ & Let\;y = \frac{(x-\mu)}{\sigma} \\ & = \frac{\sigma}{\sqrt{2\pi}}\int_{-\infty}^{\infty}y^2e^{-\frac{1}{2}y^2}dx \\ &=\frac{\sigma}{\sqrt{2\pi}}([ue^{-\frac{1}{2}u^2}]_{-\infty}^{\infty}+\int_{-\infty}^{\infty}e^{-\frac{1}{2}u^2}du) \\ &=\sigma \end{align}$$ where did I go wrong?
Your substitution step is lacking an extra factor of $\sigma$ because you should have written $$\begin{align*} \frac{1}{\sqrt{2\pi} \sigma} \int_{x=-\infty}^\infty (x-\mu)^2 e^{-(x-\mu)^2/(2\sigma^2)} \, dx &= \frac{1}{\sqrt{2\pi} \sigma} \int_{y=-\infty}^\infty (\sigma y)^2 e^{-y^2/2} \sigma \, dy \\ &= \frac{\sigma^2}{\sqrt{2\pi}} \int_{y=-\infty}^\infty y^2 e^{-y^2/2} \, dy. \end{align*}$$ This is because $y = (x-\mu)/\sigma$ implies $x = \sigma y + \mu$, hence $dx = \sigma \, dy$.
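From there the computation closes quickly (a short sketch using the quoted standard integral with $a=\tfrac12$, which gives $\int_{-\infty}^{\infty}e^{-y^2/2}\,dy=\sqrt{2\pi}$): integrate by parts with $u=y$ and $dv=ye^{-y^2/2}\,dy$, so $v=-e^{-y^2/2}$ and $$\int_{-\infty}^{\infty}y^2e^{-y^2/2}\,dy=\Big[-ye^{-y^2/2}\Big]_{-\infty}^{\infty}+\int_{-\infty}^{\infty}e^{-y^2/2}\,dy=0+\sqrt{2\pi},$$ hence $\operatorname{Var}(X)=\frac{\sigma^2}{\sqrt{2\pi}}\cdot\sqrt{2\pi}=\sigma^2$, as required.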
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I show that $P(A \cap B) \gt P(A) + P(B) - 1$ How can I show that $P(A \cap B) \gt P(A) + P(B) - 1$ I know that $P(A \cap B)= P(A)P(B)$ But I don't see how that can help me get to that inequality. Can someone give me a hint on how to start this?
The inequality is not correct as stated. For example, in a single fair coin flip, let $A$ be the event that you get a head and $B$ the event that you get a tail. Then $$P(A \cap B)=0, \qquad P(A)+P(B)-1=0,$$ and the claimed strict inequality would give $$0> 0,$$ which is a contradiction.
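What is true is the non-strict version, sometimes called Bonferroni's inequality: since $P(A\cup B)\le 1$, inclusion-exclusion gives $$P(A\cap B)=P(A)+P(B)-P(A\cup B)\ge P(A)+P(B)-1.$$ Note that no independence assumption (such as $P(A\cap B)=P(A)P(B)$) is needed for this.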
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Let $A, B$ be unitary rings and let $f$ be a surjective homomorphism of rings. Then $f(Jac(A)) \subseteq Jac(B)$ Let $A, B$ be unitary rings, and let $J(A)$, $J(B)$ denote the Jacobson radicals of $A$ and $B$. Let $f$ be a surjective homomorphism of rings. Then: $$f(J(A)) \subseteq J(B)$$ Attempt: It suffices to prove that the preimage of every maximal ideal of $B$ is a maximal ideal in $A$. Let $M \subseteq B$ be a maximal ideal of $B$, and let $f^{-1}(M)$ denote its preimage. Suppose $J$ is an ideal of $A$ such that $f^{-1}(M) \subseteq J$. Since $f$ is a surjection, this implies $M \subseteq f(J)$. Since $M$ is maximal, we have $f(J)=M$ or $f(J)=B$. If $f(J)=M$, then $J \subseteq f^{-1}(M)$ and then $J=f^{-1}(M)$. If $f(J)=B$, then I intended to prove that $1_{A}$ is in $J$, since any surjective homomorphism maps $1_{A}$ to $1_{B}$. The problem is that there might be other elements of $A$ mapped to $1_{B}$, and then we can't conclude that $1_{A} \in J$. My questions: 1. Is the initial problem true? 2. Is the preimage of a maximal ideal, through an epimorphism, also a maximal ideal? 3. Are there any ring homomorphisms $f:A\rightarrow B$ that map both $1_A$ and another element to $1_B$?
* *Yes, it is true, for the reason below: *Yes, this is true as well. By the correspondence theorem, the ideals of $ B $ correspond bijectively to the ideals of $ A $ containing $ \ker f $ in an inclusion respecting manner, so a maximal ideal of $ B $ pulls back to a maximal ideal of $ A $. Note that this is not true if we don't require the homomorphism to be surjective: if $ R $ is any commutative ring with a non-maximal prime ideal $ \mathfrak p $, then the preimage of the maximal ideal $ \mathfrak p R_{\mathfrak p} $ under the natural map $ R \to R_{\mathfrak p} $ is $ \mathfrak p $, which is not maximal. *Sure, for example, the quotient map $ \mathbf Z \to \mathbf Z/2\mathbf Z $ is one such homomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do you prove the statement: $A^c=(A\cup B)^c\cup (B \setminus A)$ I've been stuck on this question for a while and the problem is basically I just don't know how to prove it, I tried converting it to $\vee$ and $\wedge$ symbols then I did some research and found mathematically it doesn't make sense to compare them, so I'm currently stuck on how to prove such a statement. Question Determine whether the following statement is true or false, if true prove it, if false, provide a counterexample. $A^c=(A\cup B)^c\cup (B \setminus A)$ Working $A^c=$ ~$A$, $(A \cup B)^c=($~$A$ $\wedge$ ~$B$), $(B\setminus A)=$ ~$A$ $\implies A^c=(A\cup B)^c\cup (B \setminus A)=$~$A =($~$A$ $\wedge$ ~$B)$ $\wedge$ ~$A$ $\rightarrow$ I then went on to show this was equal to ~$A$ $\wedge $ ~$B$ which doesn't show anything additionally it's mathematically incorrect from what I found. Note I tried finding a counterexample and couldn't find one, so I'm assuming it is true, and as I said that's where I'm stuck, I don't know how to prove it. ANY help would GREATLY be appreciated, thanks! :)
The standard way to show that two sets are equal is to show that one is contained in the other. On the left, you have everything that is not in $A$. On the right, everything that is not in ($A$ or $B$) together with everything in $B$ but not $A$. So if $x$ is not in $A$ then show that either $x \in (A \cup B)^c$ or $x \in B \setminus A$ (which depends on whether or not $x \in B$). For the reverse inclusion, show that if $x \in (A \cup B)^c$ then $x \not\in A$ and if $x \in B \setminus A$ then $x \not\in A$. You can also show this algebraically if you can justify each of the following equalities: $$ \begin{align*} (A \cup B)^c \cup (B \setminus A) &= (A^c \cap B^c) \cup (B \setminus A) \\ &= (A^c \cap B^c) \cup (B \cap A^c) \\ &= A^c \cap (B^c \cup B) \\ &= A^c \cap X \\ &= A^c \end{align*} $$ where $X$ is the whole set.
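As a quick concrete check of the identity (a sketch with arbitrary sample sets, where $X$ plays the role of the whole set):

```python
A, B = {1, 2}, {2, 3, 4}
X = {0, 1, 2, 3, 4, 5}                    # the ambient "whole set"
lhs = X - A                               # A^c
rhs = (X - (A | B)) | (B - A)             # (A u B)^c u (B \ A)
print(lhs == rhs)                         # True
```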
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Proving that sequences with the pattern aaaaaaa don't include a perfect square I need help with the following problem: Which of the following sequences doesn't have perfect square: A) $11, 111, 1111, \dots$ B) $33, 333, 3333, \dots$ C) $44, 444, 4444, \dots$ D) $77, 777, 7777, \dots$ I proved that the first sequence cannot contain a perfect square, but it seems that none of them contains a perfect square. Is it possible to write a proof for a general pattern $aaaaaaaaa$?
For 1 you are done. For 2 not divisible by 4. For 3 no perfect square ends with digit 3. For 4 just divide by 4 and get back to the case with 1. For 5 not divisible by 25. For 6 not divisible by 4. For 7 no perfect square ends in digit 7. For 8 not divisible by 16. For 9 divide by 9 and get back to 1.
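None of these cases needs a computer, but a quick brute-force sanity check is easy to run (a sketch; the cap of 30 digits is arbitrary):

```python
import math

# check that no repdigit d, dd, ddd, ... with 2..30 digits is a perfect square
for d in range(1, 10):
    for n in range(2, 31):
        v = d * (10**n - 1) // 9          # the n-digit repdigit dd...d
        r = math.isqrt(v)
        assert r * r != v, (d, n)
print("no repdigit with 2..30 digits is a perfect square")
```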
{ "language": "en", "url": "https://math.stackexchange.com/questions/2275879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
What is the largest $a$ for which all the solutions to the equation $3x^2+ax-(a^2-1)=0$ are positive and real? Problem: What is the largest $a$ for which all the solutions to the equation $3x^2+ax-(a^2-1)=0$ are positive and real? Attempt: Solving the equation for $x$ I get $$x_{1,2}=-\frac{a}{6}\pm\sqrt{\frac{13a^2-12}{36}}.$$ For the roots to be real, the discriminant has to be greater than or equal to zero, which yields the inequality $$13a^2-12 \geq 0\Leftrightarrow a\leq-\frac{2\sqrt{39}}{13}\ \text{ or }\ a\geq\frac{2\sqrt{39}}{13}.$$ Condition number two is that both roots should be positive. How should I think to proceed?
The equation is: $3x^2+ax-(a^2-1)=0$ * *The first condition is the one you identified: the discriminant must be non-negative. *Note that the abscissa of the vertex of this parabola is $\dfrac{-b}{2a}$ in generic notation, which here equals $-\dfrac{a}{6}$. For both roots to be positive, this value must be positive. *Furthermore, since the parabola opens upward, $f(0) > 0$ then guarantees that both (real) roots are positive. These are the three conditions which yield the sought answer.
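Carrying the three conditions through (a sketch): the discriminant condition gives $|a|\ge\frac{2\sqrt{39}}{13}$; the vertex condition $-\frac{a}{6}>0$ gives $a<0$; and $f(0)=-(a^2-1)>0$ gives $|a|<1$. Together, $-1<a\le-\frac{2\sqrt{39}}{13}$, so the largest admissible value is $$a=-\frac{2\sqrt{39}}{13}\approx-0.961,$$ at which the equation has the positive double root $x=-\frac{a}{6}$.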
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to find period of a real function $f$ given the functional equation $\sqrt{3}f(x) = f(x-1) + f (x+1) $? If a periodic function satisfies the equation $\sqrt{3}f(x) = f(x-1) + f (x+1) $ for all real $x$ then prove that fundamental period of the function is $12$. Here fundamental period means the smallest positive real for which function repeats its value for all $x$. I tried replacing $x$ by $x \pm 1$ then try to find $f(x)$ in terms of other but always end up with it in terms of sum of other two arguments in the function eg $f(x-2)$ + $f (x+2)$ etc. Please provide a general method and also especially do give the thought process or reasoning for all the steps ie why you are doing these particular steps or what led you to thinking that doing these steps would give you the period of f.
Define the linear operator $T$ by $$Tf(x):=-\sqrt{3}f(x) + f(x-1) + f(x+1)$$ and $E$ by $$Ef(x)=f(x+1).$$ One can see that $T=E-\sqrt{3}+E^{-1}$. Consider solving $$x-\sqrt{3}+x^{-1}=0$$ $$\implies x^2-2x\frac{\sqrt{3}}{2}+1=0$$ $$\implies\left(x-\frac{\sqrt{3}}{2} \right)^2=-\frac{1}{4}$$ $$\implies x = \frac{\sqrt{3}\pm i}{2}=\exp\left(\frac{\pm i\pi}{6}\right)=\omega_{\pm}$$ One can thus see that $T=E^{-1}(E-\omega_+)(E-\omega_-)$. Note that the $\omega_\pm$ are primitive $12^{th}$ roots of unity, so the smallest $n$ for which $(E-\omega_+)(E-\omega_-)$ divides $E^n-1$ is $n=12$. In particular, $Tf=0$ implies $(E^{12}-1)f=0$, i.e. $f(x+12)=f(x)$. It thus follows that the fundamental period of such a function must be $12$.
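A concrete instance worth keeping in mind: $f(x)=\cos\frac{\pi x}{6}$ satisfies $f(x-1)+f(x+1)=2\cos\frac{\pi}{6}\,f(x)=\sqrt{3}f(x)$, and indeed it has fundamental period $\frac{2\pi}{\pi/6}=12$.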
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Find all natural numbers that are $13$ times the sum of their digits. Find all natural numbers that are $13$ times the sum of their digits. I had a solution and just wanted to verify it. By solving the equation $$13(a_1+a_2+\dots+a_n)=\overline{a_1a_2\dots a_n},$$ we can get $n=3$ and by putting the number $\overline{abc}$ in the equation, I got three answers. Right? Thanks!
An $n$-digit number is $\ge 10^{n-1}$, but the sum of its digits is $\le 9n$. This gives us the inequality $10^{n-1}\le 13\cdot 9n=117n$, which easily leads to $n\le 3$. Then $13(a+b+c)=100a+10b+c$ leads to $87a=3b+12c$, or $29a=b+4c$. * *With $a=0$, we arrive at $b=c=0$. *With $a=1$, we arrive at $b+4c=29$, hence $5\le c\le 7$, with each case leading to a valid solution: $195$, $156$ and $117$. *With $a\ge 2$, $9+4\cdot 9\ge b+4c\ge 58$ leads to a contradiction. That's four solutions in total if we allow $0$ as a solution.
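If you want to double-check the hand computation, a brute force over the (at most three-digit) candidates is immediate:

```python
# by the bound above, any solution has at most 3 digits
sols = [n for n in range(1, 1000) if n == 13 * sum(map(int, str(n)))]
print(sols)   # [117, 156, 195]
```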
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given hypotenuse, find the other two sides. Note that we are only interested in integral Pythagorean triples; we are given the hypotenuse $c$. How can I efficiently find the other two sides of the right-angled triangle? I need something better than the brute-force approach of iterating over all lengths $a$ below $c$ and checking whether $b = \sqrt{c^2-a^2}$ is an integer. For multiple solutions, I need the one with the smallest $a$ possible.
How about using the standard formula for generating Pythagorean triples? Solve $c = m^2 + n^2$ for $m$ and $n$. Then you have $a = m^2 - n^2$ and $b = 2mn$. (If $m$ and $n$ are co-prime and of opposite parity, the triple is primitive, otherwise not.) This requires less brute force than the approach you wanted to avoid, since $c < c^2$.
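Here is one way to turn that idea into a routine (a sketch: it scans the divisors $k$ of $c$ and the representations $c/k=m^2+n^2$, so non-primitive triples are covered by scaling, and it returns the pair with the smallest leg):

```python
import math

def legs(c):
    """Return (a, b) with a^2 + b^2 = c^2 and a as small as possible,
    or None if c is not the hypotenuse of any integral triple."""
    best = None
    for k in range(1, c + 1):
        if c % k:
            continue
        s = c // k                        # look for s = m^2 + n^2, m > n >= 1
        for n in range(1, math.isqrt(s // 2) + 1):
            m2 = s - n * n
            m = math.isqrt(m2)
            if m * m == m2 and m > n:
                a, b = k * (m * m - n * n), k * (2 * m * n)
                cand = tuple(sorted((a, b)))
                if best is None or cand < best:
                    best = cand
    return best

print(legs(5), legs(25), legs(7))   # (3, 4) (7, 24) None
```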
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
$\alpha : H^{n}(\mathrm{Hom}(K,G)) \to \mathrm{Hom}(H_{n}(K),G)$ is an isomorphism Let $G$ be a divisible abelian group and $K$ a chain complex. The map $\alpha$ in the title above takes its idea from an inner product. To be more specific, for $x \in H_{n}(K)$ and $u \in H^{n}(\mathrm{Hom}(K,G))$ the inner product $\langle u,x\rangle$ is an element of $G$ obtained according to the following simple prescription: choose a representative cocycle $u' \in \mathrm{Hom}(K_{n},G)$ for $u$ and a representative cycle $x' \in K_{n}$ for $x$; then $\langle u,x\rangle = u'(x')$ (one can prove this definition is independent of the choices). Then my question is: $$\alpha : H^{n}(\mathrm{Hom}(K,G)) \to \mathrm{Hom}(H_{n}(K),G)$$ $$(\alpha u)(x) = \langle u,x\rangle$$ is an isomorphism when $G$ is a divisible group.
By the universal coefficient theorem, the obstruction to $\alpha$ being an isomorphism is an Ext group: $\text{Ext}^1(H_{n-1}(K),G)$. But because $G$ is divisible, this Ext group vanishes, as divisible groups are injective in the category of abelian groups.
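Spelled out, the universal coefficient theorem (for $K$ a chain complex of free abelian groups) gives a short exact sequence $$0 \to \operatorname{Ext}^1(H_{n-1}(K),G) \to H^{n}(\mathrm{Hom}(K,G)) \xrightarrow{\ \alpha\ } \mathrm{Hom}(H_{n}(K),G) \to 0,$$ so the vanishing of the Ext term makes $\alpha$ an isomorphism.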
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the following inequality true: $\frac{k!}{k^k} > e^{-k}$ I stumbled upon the following inequality in a scientific paper which estimates a lower bound for $\frac{k!}{k^k}$ for $k \in \mathbb{N}$: $$\frac{k!}{k^k} > e^{-k}$$ They did not explain why this holds true, and I could not find any answer by myself yet.
Use the Taylor series: $$e^k = 1+k+\frac{k^2}{2!} +\cdots+\frac{k^k}{k!} +\cdots.$$ Because all terms on the right are positive, we have $$e^k > \frac{k^k}{k!},$$ then just take reciprocals of both sides.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
is this sequence Cauchy in the Banach space of continuous functions with infinity norm? In the Banach space $(C[0,1], ||\cdot||_{\infty})$ where $||f||_{\infty} = \max_{[0,1]}|f(x)|$ let the sequence of functions $\{f_n(x)\}$ be given by $f_n(x) = \frac{n\sqrt{x}}{1+nx}$. State whether the sequence is Cauchy in this space. Here's my work: If the sequence is Cauchy, then given any $\epsilon > 0$, there exists $N \in \mathbb{N}$ such that for all $m, n \geq N$ we have $||f_n(x) - f_m(x)|| \leq \epsilon$. Since $f_n(x) \rightarrow \frac{1}{\sqrt{x}} = f(x)$ on $[0,1]$ we can find $N$ such that for $m, n \geq N$, $||f_n(x) - f(x)|| \leq \frac{\epsilon}{2}$ and $||f_m(x) - f(x)|| \leq \frac{\epsilon}{2}$. Thus we will have: $||f_n(x) - f_m(x)|| = ||f_n(x) - f(x) + f(x) - f_m(x)|| \leq ||f_n(x) - f(x)|| + ||f_m(x) - f(x)|| \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$. So the sequence is Cauchy assuming convergence, but I'm not sure if I can assume this.
Suppose $(f_n)$ is Cauchy in $C([0,1]).$ Then there exists $N\in \mathbb N$ such that $\|f_n-f_N\| < 1$ for $n>N.$ This $N$ is now fixed. It follows that $$\tag 1 \|f_n\| \le \|f_n-f_N\| + \|f_N\| < 1 + \|f_N\|$$ for $n>N.$ But notice $f_n(1/n)=\sqrt n/2.$ Thus $\|f_n\|\to \infty,$ violating $(1),$ contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Expectation from a colour-matching withdrawal game A person draws 3 balls from a bag containing 3 white, 4 red and 5 black balls. He is offered 10 bucks, 5 bucks or 2 bucks if he draws 3 balls of the same colour, 2 balls of the same colour, or 1 ball of each colour respectively. Find how much he expects to earn. I tried to solve this question by calculating the probability of getting 3 balls of the same colour, which came to $3/44$, but for the case of exactly 2 balls of the same colour there are so many combinations that the calculation becomes lengthy, so is there a shorter way?
You had a good start. The expected value is the sum, over the possible outcomes, of the payoff times the probability of that outcome. So the expected value here is $$E=\frac{10\left[\binom{3}{3}+\binom{4}{3}+\binom{5}{3}\right]+5\left[\binom{3}{2}\binom{9}{1}+\binom{4}{2}\binom{8}{1}+\binom{5}{2}\binom{7}{1}\right]+2\binom{3}{1}\binom{4}{1}\binom{5}{1}}{\binom{12}{3}}.$$ Which, all laid out, is: $10$ times the probability of drawing three balls of the same colour, plus $5$ times the probability of drawing two of the same colour and one different ball, plus $2$ times the probability of drawing one ball of each colour. Numerically, $$E=\frac{10\cdot 15+5\cdot 145+2\cdot 60}{220}=\frac{995}{220}=\frac{199}{44}\approx 4.52.$$ Happy studying!
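A quick exhaustive check of the same computation (a sketch; it enumerates all $\binom{12}{3}=220$ equally likely draws):

```python
from itertools import combinations

balls = "W" * 3 + "R" * 4 + "B" * 5
payoff = {1: 10, 2: 5, 3: 2}             # keyed by number of distinct colours drawn
draws = list(combinations(balls, 3))
expected = sum(payoff[len(set(d))] for d in draws) / len(draws)
print(expected)                           # 4.5227... = 199/44
```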
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Representing lower order B-Splines as higher order B-splines I have tried to figure out how B-splines of degree $p - 1$ can be represented as linear combinations of B-splines of degree $p$. Definitions: * *Given a set of non-decreasing real values $t = (t_i)_{i = 1}^{p+n+1}$, the $i$th B-spline of degree $p$ is defined by the recurrence $$ B_{i, p, t}(x) = \frac{x - t_i}{t_{i+p} - t_i}B_{i, p-1, t}(x) + \frac{t_{i+p+1} - x}{t_{i+p+1} - t_{i+1}}B_{i+1, p-1, t}(x) $$ where $B_{i, 0, t}$ is defined as $$ B_{i, 0, t}(x) = \begin{cases} 1, & x \in [t_i, t_{i+1}), \\ 0, & \text{else}. \end{cases} $$ * *We call the vector $t$ of real values $(p + 1)$-regular if the first $p+1$ values coincide and the last $p+1$ values coincide. I.e., $$ t_1 = t_2 = \dots = t_{p+1} \\ t_{n+1} = t_{n+2} = \dots = t_{n + p + 1} $$ Linear independence: If $t$ is a $(p+1)$-regular knot vector, then the B-splines $B_{i, p, t}$ are linearly independent on the interval $[t_{p+1}, t_{n+1})$. Question: How can I represent the B-spline $B_{i, p-1, t}$ as a linear combination of B-splines of a higher degree, provided that $t$ is $(p+1)$-regular? I.e., $$ B_{i, p-1, t}(x) = \sum_{j = 1}^{n}c_jB_{j, p, t}(x). $$ How do I determine the coefficients $c_j$?
In general the Degree Elevation Algorithm can express a spline $S=\sum \limits _{j} c_{j} B_{j,p}$ in terms of B-splines of degree $p+1$, i.e. it computes the coefficients $c^*_{i} $ such that $S=\sum \limits _{j} c_{j} B_{j,p}=\sum \limits _{i} c^*_{i} B_{i,p+1}$. You can find the details of the algorithm for example in "The NURBS Book" http://www.springer.com/gp/book/9783642973857. In your case, $S=B_{k,p}$ for some $k$, and it can be considered as a special case of the general procedure. Moreover, since the algorithm performs linear operations, in general you can write $\boldsymbol{c}^* =\boldsymbol{M} \boldsymbol{c}$ for some rectangular real matrix $\boldsymbol{M}$ that encodes the operations of the algorithm. In your case you are interested in the $k$-th column of $\boldsymbol{M}$.
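To make this concrete, here is a small numerical sketch (not the Piegl–Tiller algorithm itself): it expresses one quadratic B-spline exactly in a cubic B-spline space by sampling and solving a least-squares system. The knot vectors below are illustrative choices; for the embedding to be exact, the degree-3 knot vector must repeat each knot of the degree-2 one with multiplicity raised by one.

```python
import numpy as np
from scipy.interpolate import BSpline

p = 2
t_low  = [0, 0, 0, 1, 2, 3, 3, 3]                 # 3-regular, degree 2
t_high = [0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 3]     # 4-regular, degree 3

k = 2                                             # which degree-2 B-spline to elevate
x = np.linspace(0, 3, 400, endpoint=False)

def basis(t, j, deg):
    b = BSpline.basis_element(t[j:j + deg + 2], extrapolate=False)
    return np.nan_to_num(b(x))                    # zero outside the support

lhs = basis(t_low, k, p)
n_high = len(t_high) - (p + 1) - 1                # number of degree-3 B-splines
A = np.column_stack([basis(t_high, j, p + 1) for j in range(n_high)])

c, *_ = np.linalg.lstsq(A, lhs, rcond=None)
print(np.round(c, 6))
print("max error:", np.abs(A @ c - lhs).max())    # ~1e-15: the representation is exact
```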
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sum of entries of a matrix For a matrix $A \in \mathbb{R}^{n \times n}$, it is clear that the sum of all the entries of $A$ can be expressed as $$\vec{1}^{T} A \vec{1} = \sum \limits_{i,j} A_{i,j}$$ Now suppose $A,B \in \mathbb{R}^{n \times n}$ are symmetric matrices. Then by the above expression, it is clear that the sum of the entries of the product $AB$ is the same as that of $BA$, even though the two are distinct as matrices. So $$\vec{1}^{T} (AB+BA) \vec{1}=2\vec{1}^{T} AB \vec{1}$$ Do we have any such expression for higher degrees? That is, suppose we form the sum of all possible permutations of a product of $n$ repetitions of $A$ and $m$ repetitions of $B$, and let $Symm(A^nB^m)$ denote this sum. For example, when $n=3$ and $m=2$, the expression has $\binom{5}{2}$ terms as follows $$Symm(A^3B^2)=A^3B^2 + A^2BAB + A^2B^2A+ABA^2B+ABABA+AB^2A^2+BA^3B+BA^2BA +BABA^2+B^2A^3$$ Can we say anything useful about $$\vec{1}^{T} Symm(A^nB^m) \vec{1}$$ in terms of $A,B$? This came up while working on a larger problem, so I've skipped the context here as of now. I apologize if the question is a bit vague and open-ended, and will update it promptly based on any feedback. Thanks.
Let $X$ be the square matrix whose each element is $1$. (Is there canonical notation for this?) This is a symmetric matrix. $\DeclareMathOperator{\tr}{tr}$ The sum of elements of $A$ is $s(A)=\tr(AX)$. For symmetric $A$ and $B$ the sum of elements in the product is $$ s(AB)=\tr(ABX)=\tr((ABX)^T)=\tr(X^TB^TA^T)=\tr(XBA)=\tr(BAX)=s(BA). $$ This you already knew, but I just wanted to give a new point of view to this fact. Similarly, it's easy to check that $s(A^T)=s(A)$ for any matrix $A$. Given the commutativity properties of the trace — and more importantly, lack thereof — I don't think there will be a nice identity to allow you to treat arbitrary permutations nicely. Matrices don't commute with $X$ in general. It's hard to prove non-existence of useful things to say, but perhaps the trace helps clarify your thoughts. For example, in your example you can take $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ and $B=X$. Then all terms with $A^2$ vanish, and $Symm(A^3B^2)=ABABA=A$. This $A$ is not symmetric; the point is just to emphasize that the order of matrices can have big effects (also on trace: typically $\tr(ABC)\neq\tr(BAC)$), and this is true with or without symmetry. In general, the order of the matrices will have a significant effect on the sum of elements of the product, but it's hard to say more than that. If you have a more specific question, I can try to think of a specific answer.
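A quick numerical illustration of both points (random symmetric matrices; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A + A.T      # random symmetric A, B
B = rng.standard_normal((4, 4)); B = B + B.T

s = lambda M: M.sum()                             # sum of all entries = tr(MX)
print(np.isclose(s(A @ B), s(B @ A)))             # True: s(AB) = s(BA)
print(np.isclose(s(A @ A @ B), s(A @ B @ A)))     # typically False: order matters
```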
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Area of triangle and determinant The area of a $\vartriangle ABC$ with given vertices $(a,a^2),(b,b^2),(c,c^2)$ is $\frac{1}{4}$ sq. units and the area of another $\vartriangle PQR$ with given vertices $(p,p^2),(q,q^2),(r,r^2)$ is $3$ sq. units. Then what is the value of $$ \begin{vmatrix} (1+ap)^2 & (1+bp)^2 & (1+cp)^2 \\ (1+aq)^2 & (1+bq)^2 & (1+cq)^2 \\ (1+ar)^2 & (1+br)^2 & (1+cr)^2 \\ \end{vmatrix} $$ I could not even begin attempting it; I don't know where to start. Someone kindly help.
Let $A, B, C$, $P, Q, R$ be the $6$ column vectors $$ \begin{cases} A^T = (1, \sqrt{2}a, a^2),\\ B^T = (1, \sqrt{2}b, b^2),\\ C^T = (1, \sqrt{2}c, c^2) \end{cases} \quad\text{ and }\quad \begin{cases} P^T = (1, \sqrt{2}p, p^2),\\ Q^T = (1, \sqrt{2}q, q^2),\\ R^T = (1, \sqrt{2}r, r^2) \end{cases} $$ Using identites of the form $$(1+ap)^2 = 1 + 2ap + a^2p^2 = 1\cdot 1 + \sqrt{2}a\cdot\sqrt{2}p + a^2\cdot p^2 = A\cdot P$$ We can rewrite the determinant at hand as $$\Delta \stackrel{def}{=}\begin{vmatrix} (1+ap)^2 & (1+bp)^2 & (1+cp)^2 \\ (1+aq)^2 & (1+bq)^2 & (1+cq)^2 \\ (1+ar)^2 & (1+br)^2 & (1+cr)^2 \\ \end{vmatrix} = \begin{vmatrix} A\cdot P & B\cdot P & C\cdot P \\ A\cdot Q & B\cdot Q & C\cdot Q \\ A\cdot R & B \cdot R & C\cdot R \\ \end{vmatrix} $$ Notice the matrix for rightmost determinant is a product of two $3 \times 3$ matrices $$ \begin{bmatrix} A\cdot P & B\cdot P & C\cdot P \\ A\cdot Q & B\cdot Q & C\cdot Q \\ A\cdot R & B \cdot R & C\cdot R \\ \end{bmatrix} = \left[ P, Q, R\right]^T \left[A, B, C\right] $$ This leads to (up to a sign), $$\Delta = \begin{vmatrix} 1 & \sqrt{2}p & p^2 \\ 1 & \sqrt{2}q & q^2 \\ 1 & \sqrt{2}r & r^2 \\ \end{vmatrix} \begin{vmatrix} 1 & 1 & 1\\ \sqrt{2}a & \sqrt{2}b & \sqrt{2}c \\ a^2 & b^2 & c^2 \\ \end{vmatrix} = 2 \begin{vmatrix} 1 & p & p^2 \\ 1 & q & q^2 \\ 1 & r & r^2 \\ \end{vmatrix} \begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix} = 2(2\times 3)(2\times\frac14) = 6 $$
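The factorization can also be sanity-checked numerically for arbitrary values (a sketch; the sample numbers are arbitrary, so the areas here are not $\frac14$ and $3$):

```python
import numpy as np

a, b, c = 0.3, 1.1, 2.0
p, q, r = -1.0, 0.5, 1.7
M = np.array([[(1 + a*p)**2, (1 + b*p)**2, (1 + c*p)**2],
              [(1 + a*q)**2, (1 + b*q)**2, (1 + c*q)**2],
              [(1 + a*r)**2, (1 + b*r)**2, (1 + c*r)**2]])
V = lambda x, y, z: np.linalg.det(np.array([[1, x, x*x],
                                            [1, y, y*y],
                                            [1, z, z*z]]))
print(np.isclose(np.linalg.det(M), 2 * V(a, b, c) * V(p, q, r)))   # True
```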
{ "language": "en", "url": "https://math.stackexchange.com/questions/2276979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How would I find the number of distinct homomorphisms and isomorphisms mapping the Klein four group to the Klein four group? How would I find the number of distinct homomorphisms and isomorphisms from the Klein four group to the Klein four group? Thank you
Here’s an approach that perhaps is more advanced, maybe even too advanced. Your group is a two-dimensional vector space $V$ over the field $k=\Bbb F_2$ with two elements. Every homomorphism $V\to V$ is automatically a $k$-linear map, so to count these, we need only count the $2\times2$ matrices over $K$, so sixteen in number. For automorphisms, we need to count the nonsingular matrices, or what is the same thing, the number of distinct ordered $k$-bases of $V$. To get a basis of any two-dimensional vector space, you choose first a nonzero vector (three choices, in our case), and then a vector not in the space spanned by your previous choice. This spanned space has cardinality $2$ in our case, so there are only two possible choices for the second vector. Thus six different bases in all, and six automorphisms of $V$.
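Both counts are small enough to check by direct enumeration (a quick sketch over the $16$ matrices):

```python
from itertools import product

mats = list(product([0, 1], repeat=4))                     # 2x2 matrices over F_2
invertible = [m for m in mats if (m[0]*m[3] - m[1]*m[2]) % 2 == 1]
print(len(mats), len(invertible))                          # 16 6
```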
{ "language": "en", "url": "https://math.stackexchange.com/questions/2277128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If $ x^4+x^3+x^2+x+1=0$ then what is the value of $x^5$ If $$x^4+x^3+x^2+x+1=0$$ then what's the value of $x^5$ ?? I thought it would be $-1$ but it does not satisfy the equation
Well, we have $x^5-1=(x-1)(x^4+x^3+x^2+x+1)=(x-1)0=0$ so that $x^5=1$
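To see why $x=-1$ fails: plugging it in gives $1-1+1-1+1=1\neq 0$. In fact no real $x$ satisfies the equation; its roots are the four primitive $5$th roots of unity $e^{\pm 2\pi i/5}, e^{\pm 4\pi i/5}$, and for each of them $x^5=1$ even though $x\neq 1$.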
{ "language": "en", "url": "https://math.stackexchange.com/questions/2277392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving an interesting property of absolute values: $\frac{|x+y|}{1+|x+y|}\le\frac{|x|}{1+|x|}+\frac{|y|}{1+|y|}$ Question If $x$ and $y$ are real numbers, show that $$\frac{|x+y|}{1+|x+y|}\le\frac{|x|}{1+|x|}+\frac{|y|}{1+|y|}$$ I am having difficulty in proving this inequality. I don't know where to start. Your help will be highly appreciated.
We know that $|x+y|\leq|x|+|y|$. You need to prove $$\frac{|x+y|}{1+|x+y|}\leq\frac{|x|}{1+|x|}+\frac{|y|}{1+|y|}$$ $$or,~|x+y|.(1+|x|)(1+|y|)\leq(1+|x+y|)(|x|.(1+|y|)+|y|.(1+|x|))$$ by cross multiplication. By expanding we get $$|x+y|\leq|x|+|y|+2|x||y|+|x||y||x+y|$$ which is obviously true since $2|x||y|+|x||y||x+y|\geq0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2277665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$f$ will always reach 1 if $\lim \limits_{x \to \infty}f(x)=1^-$? Suppose an algorithm A has a success probability of 10%. One experiment is composed of a (possibly infinite) number of executions of A. In one experiment, the probability of at least one success within $x$ trials is given by: $P[x\ge1]=1-(\frac{90}{100})^x$ and so $\lim \limits_{x \to \infty}P[x\ge1]=1^-$ So, having that in mind, can we say that every experiment will always succeed? I would say that we can't. If we say for example that the algorithm will succeed within $k$ tries (for any value of $k$), then somebody could argue that after an infinite number of experiments, there would be at least one experiment in which the algorithm A failed for $k$ tries. Or put another way, after an infinite number of experiments, there will be at least one experiment in which A fails for an infinite number of executions. Is that right?
Let B be the algorithm that runs A until it succeeds. There are different ways of looking at this: * *There is a case where B runs forever (if A keeps failing) *The probability of this case (that B runs forever) is $0$ *For any $k$, the probability that B will require more than $k$ calls to A is greater than $0$ So depending on what you want to know, the answer will be different.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2277791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }