What is the integral of $e^{-\alpha|t|}e^{ivt}e^{-ipt}$ over $\mathbb{R}$? Given $\alpha, v > 0$, I'm supposed to get the following result: $$ g(p) = \int_{-\infty}^{\infty}e^{-\alpha|t|}e^{ivt}e^{-ipt}dt =\frac{2\alpha}{\alpha^{2}+(v-p)^{2}} $$ I've tried calculating it as: $$ \int_{-\infty}^{0} e^{((v-p)i+\alpha)t}dt\ +\ \int_{0}^{\infty} e^{((v-p)i-\alpha)t}dt $$ but I got stuck at $$ \frac{2\alpha}{\alpha^{2}+(v-p)^{2}}+\lim_{x\to\infty}\frac{2\alpha(e^{(v-p)ix}-e^{-(v-p)ix})}{e^{\alpha x}(\alpha^{2}+(v-p)^{2})} $$ I also tried calculating the integral as $$ i\int_{-\infty}^{\infty}\sin((v-p)t)e^{-\alpha|t|}dt+\int_{-\infty}^{\infty}\cos((v-p)t)e^{-\alpha|t|}dt $$ but that didn't lead me very far. This is an exercise I have to solve in the context of a (relatively easy) first-semester math course in a Computer Science bachelor's.
Note that $$I_1 = \int_{-\infty}^{0} e^{((v-p)i+\alpha)t} dt = \lim_{b \to -\infty} \int_{b}^{0} e^{((v-p)i +\alpha)t} dt = \lim_{b \to -\infty} \frac{1-e^{((v-p)i +\alpha) b}}{(v-p)i +\alpha} = \frac{1}{(v-p)i +\alpha},$$ where the limit is justified because $\operatorname{Re}\big((v-p)i+\alpha\big)=\alpha>0$, so $|e^{((v-p)i+\alpha)b}|=e^{\alpha b}\to 0$ as $b\to-\infty$. Can you evaluate the second integral in the same way?
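Summing the two pieces then gives the stated closed form. A quick numerical sanity check (a crude trapezoidal rule on a truncated interval; the values $\alpha=1$, $v=2$, $p=0.5$ are arbitrary sample choices):

```python
import cmath

def g_numeric(p, v=2.0, alpha=1.0, T=30.0, n=100_000):
    # trapezoidal approximation of the integral over [-T, T];
    # the integrand decays like e^{-alpha|t|}, so the tail beyond T is negligible
    h = 2 * T / n
    total = 0j
    for k in range(n + 1):
        t = -T + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * cmath.exp(-alpha * abs(t) + 1j * (v - p) * t)
    return total * h

p = 0.5
approx = g_numeric(p)
exact = 2 * 1.0 / (1.0 ** 2 + (2.0 - 0.5) ** 2)  # 2a / (a^2 + (v - p)^2)
```

The imaginary part should vanish by symmetry, and the real part should match the formula.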
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the range of $\frac{x^2}{x^2-9}$ I am a student who is studying functions (only basic ones) and was practicing until I found this - finding the range of $$ \frac{x^2}{x^2-9} $$ Since I have only learnt the basics, I can only play around with the numbers and not use limits (which is what I found online). I was told to try this method - assign $y=\frac{x^2}{x^2-9}$. This means expressing $x$ in terms of $y$: $x = \sqrt { \frac{9y}{y-1} } $ I then go on to find the range of this function - range $= y \leq 0 $ or $ y>1$. This is simple to do. But what I'm confused with is this - since the range is all the possible $y$ values obtainable from the domain, it confuses me that I express $x$ in terms of $y$. Finding the $x$ value feels like finding the 'domain' to me. I believe I'm having a conceptual problem, and I don't understand what expressing $x$ in terms of $y$ does to help me find the range. Does this mean that I can express all other simple functions as $x$ in terms of $y$ to find the range too? Thanks!! Note: I'm being taught how to read the domain and range off a function, so I can't use the graphical method.
The domain is the set of $x$ values for which the function is defined. Here the expression exists for any $x\notin \{-3,3\}$. The range is the set of $y$ values for which the equation $y=f(x)$ has a solution. Indeed, $$y=\frac{x^2}{x^2-9}$$ can be written $$x^2=\frac{9y}{y-1}$$ and the RHS is defined and non-negative exactly for $y\notin(0,1]$. Remember: the domain is for $x$, the range for $y$. Alternatively, you can study the variations of $f$. Computing the derivative, $$f'(x)=-\frac{18x}{(x^2-9)^2}$$ and noting the two vertical asymptotes and the horizontal one, the table of variations is $$\begin{matrix}x&-\infty&&&-3&&&0&&&3&&&\infty\\\hline f(x)&1&\nearrow&\infty&|&-\infty&\nearrow&0&\searrow&-\infty&|&\infty&\searrow&1\end{matrix}$$ Then all values are reached, except those in $(0,1]$.
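The "solve for $x$" argument can also be checked by brute force; the script below (an illustration, not part of the argument) samples the function and inverts $x^2 = 9y/(y-1)$ for targets outside $(0,1]$:

```python
import math

def f(x):
    return x * x / (x * x - 9)

# sample the domain densely, avoiding x = ±3 (k = ±300 at this resolution)
attained = [f(k / 100) for k in range(-1000, 1001) if abs(k) != 300]

def preimage(y):
    # one x with f(x) = y; valid for y <= 0 or y > 1, the claimed range
    return math.sqrt(9 * y / (y - 1)) if y != 0 else 0.0
```

No sampled value lands in $(0,1]$, and every tested target outside $(0,1]$ is hit.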
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Using the natural homomorphism $\mathbb Z$ to $\mathbb Z_5$ Prove that $x^4+10x^3+7$ is irreducible in $\mathbb Q[x]$ by using the natural homomorphism from $\mathbb Z$ to $\mathbb Z_5$. So I would assume we should rewrite our polynomial, maybe as $(x + 10) x^3 + 7$? Then in terms of $\mathbb Z_5, (x+2\cdot 5)x^3+(5+2)\equiv (x+0)x^3+2=x^4+2$. Although I am not sure of this because I know that $F_5$ has no zero divisors, so maybe 10 cannot be canceled. Also, this didn't use the natural homomorphism, so this cannot be right. Could I have some help?
The other answers do not address why there cannot be a quadratic factor. I don't know if there's a clever thing I'm missing, but here's one way to see it. Passing to $\mathbb{F}_5$, the polynomial $x^4+10x^3+7$ becomes $x^4+2$. Assume it factors into quadratics, $$ x^4 + 2 = (x^2+ax+b)(x^2+cx+d) $$ Then, comparing coefficients over $\mathbb{F}_5$, we get the equations $$ \begin{eqnarray} d+ac+b = 0 \\ a+c = 0 \\ ad+bc=0 \\ bd = 2 \end{eqnarray} $$ Substituting $c=-a$ into the third equation gives $$ \begin{eqnarray} a(d-b)=0 \end{eqnarray} $$ So either $a=0$ or $b=d$. If $b=d$ then $b^2 = 2$, but $$ \begin{eqnarray} 0^2=0\\ 1^2=1\\ 2^2=4\\ 3^2=4\\ 4^2=1 \end{eqnarray} $$ so $b^2=2$ is not possible. Therefore $a=0$. But then from $d+ac+b=0$ we get $d=-b$, so $-b^2=2$, and $-2\equiv 3$ is not a square either. Therefore we can't factor $x^4+2$ into quadratic polynomials over $\mathbb{F}_5$. Since a factorization over $\mathbb{Q}$ would descend to $\mathbb{F}_5$, there is also no factorization into quadratics over $\mathbb{Q}$.
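Since $\mathbb{F}_5$ is tiny, the whole case analysis can also be confirmed by exhaustive search over monic quadratic factors (a brute-force check, using the reduction $x^4+10x^3+7 \equiv x^4+2 \pmod 5$ directly):

```python
p = 5

def poly_mod(x):
    # x^4 + 10x^3 + 7 evaluated at x, reduced mod 5 (same as x^4 + 2)
    return (x ** 4 + 10 * x ** 3 + 7) % p

# no linear factor: the reduction has no root in F_5
roots = [x for x in range(p) if poly_mod(x) == 0]

def has_quadratic_split():
    # try every pair of monic quadratics (x^2 + ax + b)(x^2 + cx + d) over F_5
    target = [2, 0, 0, 0, 1]  # coefficients of x^4 + 2, lowest degree first
    for a in range(p):
        for b in range(p):
            for c in range(p):
                for d in range(p):
                    prod = [b * d, a * d + b * c, b + d + a * c, a + c, 1]
                    if [t % p for t in prod] == target:
                        return True
    return False
```

Both searches come back empty, matching the hand argument.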
{ "language": "en", "url": "https://math.stackexchange.com/questions/2553935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Can only the existence of the right and left derivatives imply continuity? If $f:X\rightarrow \mathbb{R} \,$ is a function with $x_0 \in \overline{X} \,\setminus \partial(\overline{X}) $ such that : $$\exists \,\,\,\,f'_-(x_0)=\lim_{x\rightarrow x_0^-}\dfrac{f(x)-f(x_0)}{x-x_0},$$ $$\exists \,\,\,\,f'_+(x_0)=\lim_{x\rightarrow x_0^+}\dfrac{f(x)-f(x_0)}{x-x_0}$$ but with possibly $f'_-(x_0) \not= f'_+(x_0)$, does this still imply continuity of $f$ ?
Suppose that$$\lim_{x\to{x_0}^+}\frac{f(x)-f(x_0)}{x-x_0}$$exists (in $\mathbb R$). Then\begin{align}\lim_{x\to{x_0}^+}f(x)&=f(x_0)+\lim_{x\to{x_0}^+}f(x)-f(x_0)\\&=f(x_0)+\lim_{x\to{x_0}^+}\left((x-x_0)\frac{f(x)-f(x_0)}{x-x_0}\right)\\&=f(x_0)+\lim_{x\to{x_0}^+}(x-x_0)\lim_{x\to{x_0}^+}\frac{f(x)-f(x_0)}{x-x_0}\\&=f(x_0)+0\\&=f(x_0).\end{align}For the same reason, if$$\lim_{x\to{x_0}^-}\frac{f(x)-f(x_0)}{x-x_0}$$exists (in $\mathbb R$), then $\lim_{x\to{x_0}^-}f(x)=f(x_0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is every commutative ring contained in a field? I imagine the answer to this question is very simple, but I haven't been able to locate it. Can every commutative ring be imbedded in a field? This seems very plausible to me, for it seems we could just take the "closure" under inverses. Thanks!
A commutative ring can be embedded in a field iff it is an integral domain. Indeed, if a ring can be embedded in a field then it cannot have zero divisors because fields cannot have zero divisors. Conversely, every integral domain can be embedded in a field, namely, its field of fractions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Differentiable functions $f'(x)=f(-x)^4f(x)$ Find all differentiable functions $f\colon \mathbb{R}\to\mathbb{R}$ with $f(0)=1$ such that $f'(x)=f(-x)^4f(x)$, for all $x \in \mathbb{R}$.
The defining relation $f'(x)=f^4(-x)f(x)$ implies that $f$ is continuously differentiable (in fact it's even $C^\infty$). Setting $-x$ in the relation $f'(x)=f^4(-x)f(x)$, one gets $\forall x, f'(-x) = f^4(x)f(-x)$. Multiplying $f'(x)=f^4(-x)f(x)$ by $f^3(x)$ yields $$\forall x, f'(x)f^3(x) = f^4(-x)f^4(x)=[f^4(x)f(-x)]f^3(-x)=f'(-x)f^3(-x)$$ that is to say $x\mapsto f'(x)f^3(x) $ is even. Therefore, $\displaystyle \int_{-x}^xf'(t)f^3(t) dt = 2\int_{0}^xf'(t)f^3(t) dt$, and since an antiderivative of $f'f^3$ is $\displaystyle \frac{f^4}4$, this implies $$\forall x, f^4(x)+f^4(-x)=2$$ Replacing $f^4(-x)$ in the defining relation, one gets $$\forall x, f'(x)=2f(x)-f^5(x)$$ By Picard–Lindelöf theorem, this non-linear differential equation (with initial condition $f(0)=1$) has a unique global solution. Using a computer or other means, one finds that $$x\mapsto \frac{\sqrt[4]{2} e^{2 x}}{\sqrt[4]{e^{8 x}+1}}$$ is a solution of the differential equation, thus it must be the only one. Conversely, it's easily checked that $x\mapsto \frac{\sqrt[4]{2} e^{2 x}}{\sqrt[4]{e^{8 x}+1}}$ is indeed a solution to the original functional equation.
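The claimed solution can be checked numerically with finite differences (a sanity check on an arbitrary grid, not a proof):

```python
import math

def f(x):
    # the claimed solution: 2^{1/4} e^{2x} / (e^{8x} + 1)^{1/4}
    return 2 ** 0.25 * math.exp(2 * x) / (math.exp(8 * x) + 1) ** 0.25

def fprime(x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

xs = [k / 10 for k in range(-30, 31)]
# residuals of the functional equation f'(x) = f(-x)^4 f(x)
max_err = max(abs(fprime(x) - f(-x) ** 4 * f(x)) for x in xs)
```

One can also confirm the intermediate identity $f^4(x)+f^4(-x)=2$ at sample points.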
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Homomorphisms S4 to Z Find all homomorphisms f: S4 -> Z. I know that Ker(f) must be a normal subgroup in S4. So, Ker(f) must be {e}, V4, A4 or S4. However, I don't know how to use this information. Thanks.
There is only one such homomorphism, the zero homomorphism (mapping everything to 0). The reason is that every non-zero element of $\mathbb{Z}$ has infinite order, while the image (under a homomorphism) of an element of finite order must itself have finite order; since every element of $S_4$ has finite order, every element must map to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interchanging two columns of a matrix will lead to change in the sign of determinant Interchanging two columns of a square matrix changes the sign of the determinant. I know that this is true, and I do understand how it works. But is there any proof for this statement?
Any proof of this result depends on a definition of determinant. Let us define it using permutations: $\det(A) = \sum_{\tau \in S_n}\operatorname{sgn}(\tau)\,a_{1,\tau(1)}a_{2,\tau(2)} \ldots a_{n,\tau(n)},\;$ where the sum is over all $n!$ permutations $\tau$ in the symmetric group $S_n.\;$ See the question about a determinant definition. Let $A^\sigma$ be the result of rearranging the columns of $A$ using a permutation $\sigma.\;$ This replaces each $\tau$ in the summation by $\sigma\tau$, the product of the two permutations. Now $\;\operatorname{sgn}(\sigma\tau)=\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)\;$ and, by distributivity, the common factor $\operatorname{sgn}(\sigma)$ comes out of the summation. Thus $\;\det(A^\sigma)=\operatorname{sgn}(\sigma)\det(A).$ In our case, interchanging two columns is a transposition, and every transposition has signature $-1$, so the determinant changes sign. QED.
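The permutation definition is small enough to implement directly; the sketch below computes a determinant straight from that formula and confirms the sign flip on a sample matrix (helper names like `swap_cols` are just illustrative):

```python
from itertools import permutations

def sign(perm):
    # signature via inversion count: (-1)^(number of inversions)
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def prod(xs):
    out = 1
    for x in xs:
        out *= x
    return out

def det(A):
    # Leibniz formula: sum over permutations tau of sgn(tau) * a_{1,tau(1)} ... a_{n,tau(n)}
    n = len(A)
    return sum(sign(t) * prod(A[i][t[i]] for i in range(n))
               for t in permutations(range(n)))

def swap_cols(A, j, k):
    B = [row[:] for row in A]
    for row in B:
        row[j], row[k] = row[k], row[j]
    return B

A = [[2, 1, 0, 3], [1, 4, 2, 0], [0, 1, 5, 2], [3, 0, 1, 1]]
```

Swapping any two columns of `A` negates `det(A)`, as the proof predicts.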
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Writing in CNF that only one statement can be true Consider 5 Boolean variables $x_1, x_2, x_3, x_4, x_5$. * *Write a propositional formula that expresses the fact that one and at most one among the Boolean variables $x_1, x_2, x_3, x_4, x_5$ is true. *Compute a conjunctive normal form of this formula. Your answer must be justified. I thought this was straightforward as I could just define $\phi_i$ as the conjunction of $x_i$ and not the others, but I'm not seeing a nice way to put that into CNF.
The proposition for "none are true" is: $(\neg x_1\wedge\neg x_2\wedge\neg x_3\wedge\neg x_4\wedge\neg x_5)$ The proposition for "only $x_1$ is true" is: $(x_1\wedge\neg x_2\wedge\neg x_3\wedge\neg x_4\wedge\neg x_5)$ And so forth. Use that to build a DNF for "at most one from the five is true," and simplify it (hint: use idempotence, distribution, and complementation). Convert this to CNF.
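For a machine check of the target semantics: "one and at most one is true" can also be written in CNF directly as the clause $(x_1\vee\cdots\vee x_5)$ together with the pairwise clauses $(\neg x_i\vee\neg x_j)$ for $i<j$ (a standard encoding, not the simplification route suggested above). Verifying over all $2^5$ assignments:

```python
from itertools import product, combinations

n = 5

def cnf_exactly_one(x):
    # (x1 ∨ x2 ∨ x3 ∨ x4 ∨ x5)  ∧  AND over i<j of (¬xi ∨ ¬xj)
    at_least_one = any(x)
    pairwise = all((not x[i]) or (not x[j]) for i, j in combinations(range(n), 2))
    return at_least_one and pairwise

table = {bits: cnf_exactly_one(bits) for bits in product([False, True], repeat=n)}
```

Exactly the five assignments with a single true variable satisfy the formula.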
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Helen Borrows Money Helen borrows a sum of money from a bank at 12% convertible monthly and wishes to repay it by 24 monthly payments. In total, she will pay 584 of interest. Determine the size of the loan. I have started by doing this: The total amount paid back is given by $Pi(1+i)^n/(1+i)^n-1$ so the total interest would be this minus principal $P$. Given $i=0.12$ and given $n=2$. I substitute into our equation and get: $$584=((P(0.12)(1+0.12)^2)/(1+0.12)^2-1)-P$$ but I am not sure how to go farther with this, or if I am doing it right. I got $-1430.31$.
The monthly interest rate is $i=\frac{i^{(12)}}{12}=\frac{12\%}{12}=1\%$. The total interest is $I=nP-L$, where $P$ is the monthly payment, $n$ is the number of months and $L$ is the loan. Then $$\left\{ \begin{align} I&=P(n-a_{\overline{n}|i})\\ L&=Pa_{\overline{n}|i} \end{align}\right.\qquad \Longrightarrow\quad \boxed{L=I\cdot\frac{a_{\overline{n}|i}}{n-a_{\overline{n}|i}}=584\cdot\frac{a_{\overline{24}|1\%}}{24-a_{\overline{24}|1\%}}\approx 4,500.5} $$ where $a_{\overline{n}|i}=\frac{1-(1+i)^{-n}}{i}$. We can also find $P=\frac{L}{a_{\overline{n}|i}}=\frac{4,500.5}{21.24}\approx 211.85$.
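The boxed formula is easy to evaluate in a few lines (standard actuarial notation, monthly rate $i=1\%$):

```python
def annuity(n, i):
    # a_{n|i} = (1 - (1 + i)^(-n)) / i : present value of n payments of 1
    return (1 - (1 + i) ** (-n)) / i

i = 0.12 / 12   # 12% convertible monthly -> 1% effective per month
n = 24          # number of monthly payments
I = 584         # total interest paid

a = annuity(n, i)
L = I * a / (n - a)   # size of the loan
P = L / a             # monthly payment
```

As a consistency check, the total paid ($nP$) minus the loan equals the total interest by construction.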
{ "language": "en", "url": "https://math.stackexchange.com/questions/2554925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Suppose that $p$ is prime and $\equiv 3\bmod4$ then $((p-1)/2)!≡-1\bmod p$ or $((p-1)/2)!≡1\bmod p$ Prove or disprove: Suppose that $p$ is prime and $p\equiv 3\bmod4$; then $((p-1)/2)!\equiv-1\bmod p$ or $((p-1)/2)!\equiv1\bmod p$. After I checked it, I see it is a true statement, so by Wilson's theorem we have $(p-1)!\equiv-1\bmod p$, so $1\cdot2\cdot3\cdots((p-1)/2)((p+1)/2)\cdots(p-1) \equiv -1\bmod p$, so $((p-1)/2)!\,((p-1)/2)!\,(-2) \equiv -1\bmod p$, and then $((p-1)/2)! \equiv -1\bmod p$ or $((p-1)/2)! \equiv 1\bmod p$. Is what I did right?
Your idea of using Wilson's theorem is correct, but when you get to $$1\cdot2\cdot3\cdots\frac{p-1}2\cdot\frac{p+1}2\cdots(p-1) ≡ -1\bmod p$$ you need to take a different approach. Rewrite $p-1$ as $-1$, $p-2$ as $-2$ and so on until you get $$1\cdot2\cdot3\cdots\frac{p-1}2\cdot\left(-\frac{p-1}2\right)\cdots(-1)≡-1\bmod p$$ Because $p\equiv3\bmod4$, the number of terms is singly even, so an odd number of terms have become "negative". The above is thus equivalent to $$-\left(1\cdot2\cdot3\cdots\frac{p-1}2\right)^2≡-1\bmod p$$ $$\left(\left(\frac{p-1}2\right)!\right)^2≡1\bmod p$$ $$\left(\frac{p-1}2\right)!≡\pm1\bmod p$$
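The conclusion can be spot-checked for small primes $p\equiv 3\pmod 4$ (a brute-force check, nothing more):

```python
import math

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, math.isqrt(m) + 1))

def half_factorial_mod(p):
    # ((p - 1) / 2)! reduced mod p
    out = 1
    for k in range(1, (p - 1) // 2 + 1):
        out = out * k % p
    return out

# all primes p ≡ 3 (mod 4) below 200
results = {p: half_factorial_mod(p) for p in range(3, 200)
           if is_prime(p) and p % 4 == 3}
```

For every such prime the residue is $1$ or $p-1$ (i.e. $\pm 1 \bmod p$), e.g. $3! = 6 \equiv -1 \pmod 7$.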
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Truth vs Lie possibility A, B and C tell the truth independently with probabilities $1/3, 1/4, 1/5$ respectively. C makes a statement and B says that C has lied , whereas A says that C has told the truth. Find the probability that C made a true statement. I did $$(1*1*3/3*4*5)/[(1*1*3/3*4*5)+(4*2*1/5*4*3)]$$ Is this correct?
Your expression is a bit difficult to read, but yes, that is correct. What you've done I assume is work out the probability $p$ of A and C telling the truth and B lying, and the probability $q$ of A and C lying and B telling the truth. We know one of these two things has happened, and so the probability that it is the first one is $\frac{p}{p+q}$. You can cancel the $3\times 4\times 5$ in your answer, so this becomes $$\frac{1\times 1\times 3}{1\times 1\times 3+4\times 2\times 1}=\frac3{11}.$$
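With exact fractions the whole computation is a few lines (variable names are mine):

```python
from fractions import Fraction

tA, tB, tC = Fraction(1, 3), Fraction(1, 4), Fraction(1, 5)  # truth probabilities

# observed: A says "C told the truth", B says "C lied"
p = tC * tA * (1 - tB)          # C truthful, A truthful, B lying
q = (1 - tC) * (1 - tA) * tB    # C lying, A lying, B truthful

posterior = p / (p + q)         # P(C told the truth | what A and B said)
```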
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $x,y\in E$ then $\frac{x+y}{2}\in E$. Prove that $E$ has an interior point Let $E\subset \mathbb{R}$ be a set of positive Lebesgue measure. Assume that if $x,y\in E$ then $\frac{x+y}{2}\in E$. Prove that $E$ has at least one interior point. Here is what I have done: (1). By regularity, for any $\epsilon>0$ we can find an open set $O_\epsilon$ such that $E\subseteq O_\epsilon$ and $m(O_\epsilon)-m(E)<\epsilon.$ Write $O_\epsilon$ as a disjoint union of open intervals $\{I_j\}$ $$O_\epsilon=\bigsqcup_{j=1}^\infty I_j$$ (2). WLOG we can do the indexing in such a way that $I_{j+1}$ is the next interval to $I_j$ (in the sense that $I_{j+1}$ is on the right of $I_j$ and there is no $I_k$ which is in between $I_j$ and $I_{j+1}$.) (3). If at least one $I_j\subseteq E$ then we are done. So assume that $I_j\subsetneq E$ for all $j$. Chose an $I_j$ and pick a point $x\in I_j\cap E$. Chose $y\in I_{j+1}\cap E$. Now $z=\frac{x+y}{2}\in E$ and thanks to the indexing, $z\in I_j$ or $z\in I_{j+1}.$ WLOG we can assume that $z\in I_j$. (4) Now we have two point $x,z\in I_j$. We can recursively pick the midpoints on the line joining $x$ and $z$ and all these points will be in $E$. (First pick $\frac{x+z}{2}$, then pick $\frac{x+\frac{x+z}{2}}{2}$ and $\frac{z+\frac{x+z}{2}}{2}$ and so on) (5). My guess is that one of the midpoints (constructed in the previous step) on the line joining $x$ and $z$ will be an interior point. But I don't know if my guess is correct. Am I moving in the right direction? Is there a different way to solve this problem?
The following result is quite well-known: If $E$ and $F$ are measurable with $m(E),m(F)>0$, then $$E+F = \{x+y\mid x\in E,y\in F\}$$ contains an interval. The condition on your $E$ says $$\frac{E+E}{2} \subset E.$$ Since $E+E$ contains an interval, so does $\frac{E+E}{2}$, and therefore so does $E$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Prove using Lagrange's mean value theorem * *If $f'$ is continuous on $[a,a+h]$ and derivable on $(a,a+h)$, prove that there exists a real number $c$ between $a$ and $a+h$ such that $f(a+h)=f(a)+hf'(a)+{\frac{h^{2}}{2}}f''(c)$. I used Lagrange's mean value theorem: $f(x)$ will also be continuous in $[a,a+h]$ and differentiable in $(a,a+h)$, hence there exists $\delta \in (a,a+h)$ such that $f'(\delta)={\frac{f(a+h)-f(a)}{h}}$. Similarly, since the LMVT is applicable to $f'(x)$, there exists $\gamma \in (a,\delta)$ with $f''(\gamma)={\frac{f'(\delta)-f'(a)}{\delta-a}}$. But how do I prove that $f''(\gamma)= {\frac{h}{2}}f''(c)$? I got the above conclusion by the following steps: ${\frac{f(a+h)-f(a)}{h}}=f'(c)+{\frac{h}{2}}f''(c)$ ${\frac{f'(\delta)-f'(a)}{\delta-a}}={\frac{hf''(c)}{2}}$ But I don't know how to proceed further. Can someone please help me with this question?
This is Taylor's theorem with the Lagrange form of the remainder (to second order), which can be proved by applying the MVT to the integral form of the error. See: https://brilliant.org/wiki/taylors-theorem-with-lagrange-remainder/
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding an orthogonal vector to two vectors in $\mathbb{R}^4$ "Let $u_1$, $u_2$ be two vectors in $\mathbb{R}^4$ $$u_1=(1,0,1,1) \text{ and } u_2=(1,1,0,3)$$ Provide a real vector which is orthogonal to both $u_1$ and $u_2$." So, I kind of guessed a vector $u_3=(1,-1,-1,0)$ which must be orthogonal to both since $$u_1 \cdot u_3 = 0 \text{ and } u_2 \cdot u_3=0$$ My question is, how should it be done if it can't immediately be guessed? In $\mathbb{R}^3$ one could just take the cross product of the two vectors, but that's not defined in other vector spaces.
Since you have only two vectors, you can work in $\mathbb R^3$. Let $u_3 = (a, b, c, 0)$, so that $u_1 \cdot u_3$ and $u_2 \cdot u_3$ only depend on the first three components of $u_1$ and $u_2$. So, call $v_i$ the vector of the first three components of $u_i$, and compute $v_3 = v_1 \times v_2 = (a, b, c)$ (this wouldn't work if $v_1$ and $v_2$ were parallel, since the cross product would vanish). Specifically, $$u_3 = \begin{pmatrix} \begin{vmatrix}0 & 1 \\ 1 & 0\end{vmatrix}, -\begin{vmatrix}1 & 1 \\ 1 & 0\end{vmatrix}, \begin{vmatrix}1 & 0 \\ 1 & 1\end{vmatrix}, 0 \end{pmatrix} = (-1, 1, 1, 0)$$ which only differs in sign from your solution.
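The padding trick is easy to script (a sketch; `cross3` is the ordinary 3D cross product):

```python
def cross3(u, v):
    # ordinary cross product in R^3
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u1 = (1, 0, 1, 1)
u2 = (1, 1, 0, 3)

# cross the first three components, then pad with a trailing zero
u3 = cross3(u1[:3], u2[:3]) + (0,)
```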
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Multiplicities in the Regular Representation of a Semisimple Algebra Let $R$ be a finite dimensional associative algebra over a field $k$ and suppose that $R$ is semisimple, i.e., that we can express $R$ as a direct sum of left $R$-modules $$R\cong \oplus_i S_i^{\oplus n_i},$$ where the $S_i$ are non-isomorphic simple modules. Fact: If $R=kG$ is the group algebra of a finite group $G$ over an algebraically closed field $k$ then the multiplicity $n_i$ is equal to the dimension of the simple module $S_i$: $$n_i=\dim_k S_i.$$ Question: At what level of generality does this fact remain true?
The general description of $R$ is given by the Artin-Wedderburn Theorem: $R$ is a product of matrix algebras over finite-dimensional division algebras over $k$. If $k$ is algebraically closed, then the only finite-dimensional division algebra over $k$ is itself, and your theorem follows from the representation theory of the matrix algebras $M_{n \times n}(k)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the fixed points of the difference equation $ \ a_n=\frac{2}{7} a_{n-1}-1 \ $ Find the fixed points of the difference equation $ \ a_n=\frac{2}{7} a_{n-1}-1 \ $. Classify the fixed points as stable, unstable or neutral. Answer: Let $ \ x \ $ be the fixed point. Then, $ x=\frac{2}{7} x-1 \\ \Rightarrow 7x=2x-7 \\ \Rightarrow 5x=-7 \\ \Rightarrow x =-\frac{7}{5} $ But how do I decide whether $ \ x =-\frac{7}{5} $ is stable or unstable? Help me out.
Hint: Let $a_n=b_n-\frac75$. The recurrence is $$b_n-\frac75=\frac27\left(b_{n-1}-\frac75\right)-1=\frac27b_{n-1}-\frac75,$$ or $$b_n=\frac27b_{n-1}.$$ You should be able to conclude.
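Since the multiplier $2/7$ has absolute value less than $1$, iterates contract toward the fixed point, so it is stable; a quick numerical experiment illustrates this:

```python
def step(a):
    return 2 * a / 7 - 1

fixed = -7 / 5

def orbit_end(a0, n=60):
    # iterate the map n times starting from a0
    a = a0
    for _ in range(n):
        a = step(a)
    return a

ends = [orbit_end(a0) for a0 in (-100.0, 0.0, 50.0)]
```

All orbits collapse onto $-7/5$ regardless of the starting point.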
{ "language": "en", "url": "https://math.stackexchange.com/questions/2555919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all the solutions to the equation: $e^z = e^{-1 + i\pi}$ I'm slightly stuck on how to attempt this... I thought to write both sides in polar form, i.e. $$e^x (\cos(y) + i\sin(y)) = e^{-1}(\cos(\pi)+i\sin(\pi))$$ I could equate from here, I think, but the question specifically says to find ALL the solutions to the equation. By doing it this way, I think I would only find one. I know I need to add $2k\pi$ in at some point. Any help would be much appreciated.
The exponential map is 1-1 on the strip $\{x+iy : -\pi \le y < \pi \}$, and $e^z = e^{z+2n\pi i}$ for all $n \in \mathbb{Z}$. Since $z = -1 +i\pi$ is one solution, the full solution set is $z = -1 + i(\pi +2n\pi)$, $n \in \mathbb{Z}$.
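A numerical check with `cmath`, sampling a few integers $n$:

```python
import cmath

target = cmath.exp(-1 + 1j * cmath.pi)

# claimed solution set: z = -1 + i(pi + 2 n pi), n in Z (sampling a few n)
solutions = [-1 + 1j * (cmath.pi + 2 * n * cmath.pi) for n in range(-3, 4)]
errors = [abs(cmath.exp(z) - target) for z in solutions]

# a nearby point off this lattice is not a solution
off = abs(cmath.exp(-1 + 1j * (cmath.pi + 1.0)) - target)
```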
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Problem with Structure Theorem of PID modules proof I am using Lang's Algebra to prove the Structure Theorem for finitely generated modules over PIDs, and I am having difficulties understanding the proof of the existence of the decomposition for $E(p)$. $E$ is a torsion module over a PID $R$, $p \in R$ prime element, and $E(p)=\{m \in E\, ;\, p^nm=0 \, \textrm{for some positive interger n}\}$. Lang starts with Lemma 7.6 which I understand just fine. The next step is what I do not grasp. From what I could get, the idea is to show that there is an independent generator of $E(p)$, lets say $\{y_1,...,y_1\}$ and by having that, since it is independent you could see it as a direct sum of cyclic modules, i.e. $(y_1,..,y_n)= (y_1)\bigoplus...\bigoplus (y_n) \cong\frac{R}{(p^{r_1})} \bigoplus....\bigoplus \frac{R}{(p^{r_n})}$ where $p^{r_i}$ is $y_i$'s period. This is how he does it (its a copy paste): Lang's Proof (Sorry for the sloppiness of the picture, but it was the best I could do with my knowledge) Apart from the fact that I don't get the overall proof, here are some doubts that pop to mind: * *Is $\overline{E_p}$ well defined? ($E_p=\{m \in E \, ; pm=0\}$) Because he defines $\overline{E}$ using an element $x_1$ with a maximal period, but $x_1$ is not necessarily in $E_p$ *Lang says he does it by induction, induction over what? Sorry if the question is not very well formulated. Thanks in advanced.
* *It's actually $\overline E_p$ and not $\overline{E_p}$ : you take $F= \overline{E}$ and then $F_p$, not $F=E_p$ and then $\overline F$. *The induction is over the number of generators : you assume the result for all modules that have less than $r$ generators, and prove it for those that have $r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Gamma functions and limits How can someone calculate the limit $\lim_{n\rightarrow \infty}\frac{\Gamma(n+p)}{n^p\Gamma(n)}$ ? Is there an article about it? Is $\frac{\Gamma(n+p)}{n^p\Gamma(n)}$ greater than unity?
Stirling's approximation $$\Gamma(z) \sim \sqrt{\frac{2 \pi}{z}} \left( \frac{z}{\mathrm{e}} \right)^z $$ does the trick: $$\frac{\Gamma(n+p)}{n^p \Gamma(n)} \sim \frac{ \sqrt{\frac{2 \pi}{n+p}} \left( \frac{n+p}{\mathrm{e}} \right)^{n+p}}{n^p \sqrt{\frac{2 \pi}{n}} \left( \frac{n}{\mathrm{e}} \right)^n}= \frac{1}{e^p} \sqrt{\frac{n}{n+p}} \left(\frac{n+p}{n} \right)^{n+p} \to 1, \quad \text{ as } n \to \infty. $$ I don't know about such an article.
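One can also watch the convergence numerically; computing in log-space via `math.lgamma` avoids overflow (the choice $p=2.5$ is arbitrary):

```python
import math

def ratio(n, p):
    # Gamma(n + p) / (n^p * Gamma(n)), computed in log-space to avoid overflow
    return math.exp(math.lgamma(n + p) - p * math.log(n) - math.lgamma(n))

p = 2.5
vals = [ratio(10.0 ** k, p) for k in range(1, 7)]
```

For this $p$ the deviation from $1$ shrinks roughly like $p(p-1)/(2n)$ as $n$ grows.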
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Equivalence class of a number on a relation? Lets say there is an equivalence relation $x\sim y$ if and only if $x-y$ is an integer. Find the equivalence class of the number $\frac13$. I came up with $\left[\frac13\right]=\left\{\frac13\right\}$ but I'm not sure if its right. Any tips?
Hint: Let $S=\{(x,y)\mid x-y\in \mathbb{Z}\}$. Suppose that $(1/3,y)\in S$; then $1/3-y=n$ for some $n\in\mathbb{Z}$, and $y=1/3-n$. Conversely, if $y=1/3-n$ for some $n\in\mathbb{Z}$ then $(1/3,y)\in S$. Thus $(1/3,y)\in S$ if and only if $y=1/3-n$ for some $n\in\mathbb{Z}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Showing that $\{X_n, n\geq 1 \}$ are independent if for $n\geq 2$ we have $\sigma(X_1,...,X_{n-1}) \perp \sigma(X_n)$ Looking for hints to proceed or to corroborate my solution (I find it still weak). So, as the title says: we want to show that $\{X_n, n\geq 1 \}$ are independent random variables if for $n\geq 2$ we have $\sigma(X_1,...,X_{n-1}) \perp\sigma(X_n)$. My approach: Assume this is true for $n=2$. Then we have, from the "if condition", that if we know that $\sigma(X_1) \perp \sigma(X_2)$, this implies that $X_1 \perp X_2$: since their induced sigma-fields are independent, the random variables are independent. Now, take an induction approach (still not sure if I'm approaching it the right way though), but this is my shot. Start by checking $n=3$: from the "if condition", if we know that $\sigma(X_1,X_2) \perp \sigma(X_3)$, this implies that $X_1 \perp X_2 \perp X_3$: since the induced sigma-fields are independent, the random variables are independent. A question is: what is the relation between $\sigma(X_1) \perp \sigma(X_2)$ and $ \sigma(X_1,X_2) $? I'm a little confused now. Would appreciate any help. Thanks!
I believe your $\perp$ stands for independence, not orthogonality. By definition a sequence is independent if each finite subset is. So we have to show that $\{X_1,X_2,...,X_N\}$ is independent for each $N$. This means $P\{X_1^{-1}(A_1) \cap ...\cap X_N^{-1}(A_N)\}$ is the product of the $P\{X_i^{-1}(A_i)\}$. Just note that $X_1^{-1}(A_1) \cap ...\cap X_{N-1}^{-1}(A_{N-1})$ belongs to $\sigma(X_1,X_2,...,X_{N-1})$ and use induction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to find the general term of a sequence in an easy way I have a recurrence, which is the following: $$b_0=7$$ $$b_n = 2b_{n−1}+7\cdot4^n\quad∀n∈\Bbb N^+$$ and I have to solve it in the easiest way possible. I know that I could use the generating function technique, but there are probably quicker ways. How could I solve this?
Here's another way to look at this. Consider the general form $$f_n=A\cdot B^n+Cf_{n-1}$$ Then we can write $$\frac{f_n-Cf_{n-1}}{f_{n-1}-Cf_{n-2}}=\frac{A\cdot B^n}{A\cdot B^{n-1}}=B\\ $$ or $$ f_n=af_{n-1}+bf_{n-2}\\ $$ where $$ a=B+C\\ b=-BC $$ This is now in a familiar form for which we know the characteristic roots $$\alpha,\beta=\frac{a\pm \sqrt{a^2+4b}}{2}$$ And we'll also need $f_1=A\cdot B+ Cf_0$ to complete the solution.
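For the original recurrence, $B=4$ and $C=2$, so the two-term form is $f_n = 6f_{n-1} - 8f_{n-2}$ (coefficients $B+C=6$ and $-BC=-8$) with characteristic roots $2$ and $4$; fitting $b_0=7$, $b_1=42$ gives $b_n = 14\cdot 4^n - 7\cdot 2^n$. A quick script confirms this closed form against the recurrence:

```python
def b_recursive(n):
    # direct iteration of b_n = 2 b_{n-1} + 7 * 4^n, b_0 = 7
    b = 7
    for k in range(1, n + 1):
        b = 2 * b + 7 * 4 ** k
    return b

def b_closed(n):
    # roots 2 and 4 of x^2 - 6x + 8, fitted to b_0 = 7, b_1 = 42
    return 14 * 4 ** n - 7 * 2 ** n
```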
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to find the shortest distance from a line to a circle when their equations are given Consider a line $L$ of equation $ 3x + 4y - 25 = 0 $ and a circle $C$ of equation $ x^2 + y^2 -6x +8y =0 $. I need to find the shortest distance from the line $L$ to the circle $C$. How do I find that? I am new to the coordinate geometry of circles and lines. I noted the slope of $L$ to be $\frac{-A}{B} = \frac{-3}{4} $, which means the line is inclined at about $ -37° $ to the $+x$ axis, and the circle is centered at $ (3,-4) $ with radius $5$ units. From a diagram it's difficult to figure out. Can we figure it out easily by diagram, or is there an algebraic way which is good for this?
Hint: Any point on the circle can be set as $$P(3+5\cos t,5\sin t-4)$$ The distance of this point from $L$ will be $$\dfrac{|3(3+5\cos t)+4(5\sin t-4)-25|}{\sqrt{3^2+4^2}}$$ $3(3+5\cos t)+4(5\sin t-4)-25=5(3\cos t+4\sin t)-32$ Now $-\sqrt{3^2+4^2}\le3\cos t+4\sin t\le\sqrt{3^2+4^2}$ $\iff-5\cdot5-32\le5(3\cos t+4\sin t)-32\le5\cdot5-32$ $\iff7\le|3(3+5\cos t)+4(5\sin t-4)-25|\le57$
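Minimizing the parametrized distance numerically confirms the algebra: the bounds $7$ and $57$ on the numerator give a minimum distance of $7/5$ and a maximum of $57/5$.

```python
import math

def dist_to_line(t):
    # point on the circle (center (3, -4), radius 5) against the line 3x + 4y - 25 = 0
    x = 3 + 5 * math.cos(t)
    y = 5 * math.sin(t) - 4
    return abs(3 * x + 4 * y - 25) / math.hypot(3, 4)

ts = [2 * math.pi * k / 20000 for k in range(20000)]
d_min = min(dist_to_line(t) for t in ts)
d_max = max(dist_to_line(t) for t in ts)
```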
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that $ C_1'\cap Z=C_2'\cap Z $ if and only if $ C_1 $ and $ C_2 $ touch at $ \xi $. This is Exercise II.4.1 in Shafarevich's book Basic Algebraic Geometry, second edition. Suppose that $\dim X = 2 $ and that $ \xi \in X $ is a nonsingular point. Let $ C_1, C_2 \in X $ be two curves passing through $ \xi $ and nonsingular there, $ \sigma: Y \to X $ the blowup centered at $ \xi $, and set $ C_i' = \overline{\sigma^{-1}(C_i \backslash \xi)} $ and $ Z = \sigma^{-1} (\xi) $. Prove that $ C_1'\cap Z=C_2'\cap Z $ if and only if $ C_1 $ and $ C_2 $ touch at $ \xi $. I have been thinking about this exercise for many days now but I still don't even know where to start. I have of course read the relevant section, but I'm still lost. I believe the following is important as a background for the exercise: Let $ X $ be a quasiprojective variety and $ \xi \in X $ a nonsingular point, and suppose that $ u_1, \cdots ,u_n $ are functions that are regular everywhere on $ X $ and such that (a) the equations $ u_1 = \cdots = u_n = 0 $ have the single solution $ \xi \in X $; and (b) $ u_1, \cdots, u_n $ form a local system of parameters on $ X $ at $ \xi $. $ Y \subseteq X \times \mathbb{P}^{n-1} $ consists of points $ (x; t_1 : \cdots : t_n ) $ with $ x \in X $ and $ (t_1 : \cdots : t_n ) \in \mathbb{P}^{n-1} $, such that $$ u_i(x)t_j = u_j(x)t_i $$ for $ i,j = 1, \cdots ,n $. The regular map $ \sigma: Y \to X $ obtained as the restriction to $ Y $ of the first projection $ X \times \mathbb{P}^{n-1} \to X $ is called the local blowup of $ X $ with center in $ \xi $. Can anyone help me out?
I was also stuck on this problem for a while. I think I have a solution, but I find it a bit hand-wavey. Leaving it here for future visitors in hopes that somebody can improve it. $X$ is 2-dimensional. We can choose $u_i$ so $X$ is locally given by $u_3=\cdots=u_N=0$, $C_1$ given by $u_1=0$, $C_2$ by $F(u_1,u_2)=0$. On $X$ we have $\sigma^{-1}(u_1,u_2)=(u_1,u_2,0,\ldots,0,u_1:u_2:0:\cdots:0)$. We are interested in the image of $(0,0)$ under the restriction of this rational map to $C_i$. We are also really only interested in the first two homogeneous coordinates $(u_1:u_2)$ of the image, so from now on we will consider $\sigma^{-1}$ as a map to $\mathbb{P}^1$. On a curve every rational map to a projective space is regular, so the restrictions of $\sigma^{-1}$ to $C_i$ must be regular. On $C_1$ the restriction takes the obvious form $\sigma^{-1}(0,u_2)=(0:1)$. On $C_2$ it must have some form $\sigma^{-1}(u_1,u_2)=(P(u_1,u_2):G(u_1,u_2))$. For this to be a restriction of $\sigma^{-1}$ from $X$ to $C_2$, we need it to satisfy $u_1G=u_2P \pmod F$, and for it to coincide with the restriction to $C_1$ we need it to satisfy $P(0,0)=0,\ G(0,0)=1$. Thus $F$ must have a nonzero term $ku_1$ and must have a zero coefficient in front of $u_2$, which happens exactly when $C_2$ touches $C_1$ at zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2556934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine if $x=0$ is a point of relative extremum for $f(x)= \sin(x) + \frac{x^3}{6}$ Determine if $x=0$ is a point of relative extremum for $f(x)= \sin(x) + \frac{x^3}{6}$ I am trying to use this test Here, $f(x)= \sin(x) + \frac{x^3}{6}$ $f'(x)=\cos(x) + \frac{x^2}{2} \Rightarrow f'(0)=1 \neq 0$ So I am unable to proceed further.
Since the first derivative is positive at $x=0$, the function is increasing there; hence $x=0$ is not a point of relative extremum.
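A numeric illustration of why $f'(0)\neq 0$ settles the question (a sketch; the sample points $\pm0.1$ are arbitrary):

```python
import math

def f(x):
    return math.sin(x) + x ** 3 / 6

h = 1e-6
fprime0 = (f(h) - f(-h)) / (2 * h)    # central difference, close to 1
increasing = f(-0.1) < f(0) < f(0.1)  # f passes through x = 0 going up
print(fprime0, increasing)
```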
{ "language": "en", "url": "https://math.stackexchange.com/questions/2557025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Calculate the limit using L'Hôpital's rule Calculate the following limit: $\lim_{x \to +\infty}(\sqrt{x}-\log x)$ I started like this: $\lim_{x \to +\infty}(\sqrt{x}-\log x)=[\infty-\infty]=\lim_{x \to +\infty}\frac{(x-(\log x)^2)}{(\sqrt{x}+\log x)}=$ but that's not a good way... I would be grateful for any tips.
Hint: If you have to use l'Hôpital, this limit is easier to find (*): $$\lim_{x \to +\infty} \frac{\sqrt{x}}{\log x} = \lim_{x \to +\infty} \frac{\frac{1}{2\sqrt{x}}}{\frac{1}{x}} =\lim_{x \to +\infty}\frac{\sqrt{x}}{2} = +\infty$$ Can you see how this would help for your limit as well? If not (hover over), rewrite: $$\sqrt{x}-\log x = \left( \frac{\sqrt{x}}{\log x} - 1 \right) \log x$$ (*) With a similar calculation, it's easy to show and worth remembering that for $n>0$: $$\lim_{x \to +\infty} \frac{x^n}{\log x} = +\infty$$
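The growth claim behind the hint can be sanity-checked numerically:

```python
import math

xs = (1e2, 1e4, 1e8, 1e16)
ratios = [math.sqrt(x) / math.log(x) for x in xs]  # grows without bound
diffs = [math.sqrt(x) - math.log(x) for x in xs]   # hence so does the difference
print(ratios)
print(diffs)
```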
{ "language": "en", "url": "https://math.stackexchange.com/questions/2557151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Proof of $ \cos \alpha + \sin \beta + \cos \gamma = 4* \sin( \frac\alpha2 + 45°)* \sin \frac\beta2 * \sin (\frac\gamma2 + 45°) $ Proof of $$ \cos \alpha + \sin \beta + \cos \gamma = 4* \sin( \frac\alpha2 + 45°)* \sin \frac\beta2 * \sin (\frac\gamma2 + 45°) $$ if $ \alpha + \beta + \gamma = \fracπ2 $ I tried to simplify right side: $4(-\frac12(\cos(\alpha/2+\gamma/2+π/2)-\cos(\alpha/2-\gamma/2))*\sin(\beta/2)=$ $=-2*\sin(\beta)*\cos(\alpha/2+\gamma/2+π/2)+2\cos(\alpha/2-\gamma/2)*\sin(\beta/2)= $ $=-\sin(\beta/2+\alpha/2+\gamma/2+π/2)-\sin(\beta/2-\alpha/2-\gamma/2-π/2)+\sin(\beta/2+\alpha/2-\gamma/2)+\sin(\beta/2-\alpha/2+\gamma/2) $ Is it possible to simplify this to the left hand side?
use that $$\sin\left(\frac{x}{2}+\frac{\pi}{4}\right)=\frac{1}{\sqrt{2}}\left(\left(\sin(\frac{x}{2}\right)+\cos\left(\frac{x}{2}\right)\right)$$
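The hinted half-angle expansion can be checked numerically (this verifies only the hint formula itself):

```python
import math, random

random.seed(1)
max_err = 0.0
for _ in range(50):
    x = random.uniform(-6, 6)
    lhs = math.sin(x / 2 + math.pi / 4)
    rhs = (math.sin(x / 2) + math.cos(x / 2)) / math.sqrt(2)
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # ~0: the expansion holds for all x
```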
{ "language": "en", "url": "https://math.stackexchange.com/questions/2557281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Divergence theorem I have a problem applying the divergence theorem in two problems. The first asks me to calculate $\iint F \cdot N \, dS$ where $F(x,y,z)=(x^2 + \sin z, x y + \cos z, e^y)$ over the region bounded by the cylinder $x^2 + y^2=4$ and the planes $XY$ and $x+z=6$. I compute the divergence and $\mathop{div}{F} = 3 x$. To compute $\iiint \mathop{div}{F} \ dV $, I use cylindrical coordinates: $x= r \cos( \alpha)$, $y=r \sin (\alpha)$, $z=t$ for $r \in [0,2]$, $\alpha \in [0, 2 \pi]$ and $t \in [0, 6- r \cos(\alpha)]$. As the Jacobian is $r$ I have the following: $$ I=\iint F \cdot N \, dS = \iiint \mathop{div}{F} \, dV = \int_{0}^{2 \pi} \int_{0}^{2} \int_{0}^{6- r \cos(\alpha)} 3 r \cos( \alpha)\, r \ dt \ dr \ d\alpha $$ I compute it using Mathematica and I get $I=0$; however I know that the result is not that. Where have I gone wrong? The other problem is similar: I have to calculate the flow of $F$ through $S$ where $F(x,y,z)=(x^2 + \sin(yz), y- x e^{-z},z^2)$ and $S$ is the boundary of the cylinder $x^2+y^2=4$ limited by the planes $z=0$ and $x+z=2$. I have applied the divergence theorem as in the previous case and I get $$ I=\iint F \cdot N \, dS= \int_{0}^{2 \pi} \int_{0}^{2} \int_{0}^{2-r \cos(\alpha)} (2 r \cos (\alpha) + 1 + 2z )\, r\ dz\ dr\ d\alpha=\frac{64 \pi}{3} $$ And as in the previous case, it is not the solution either. Where is the error? Thank you very much to all.
Your coordinates should be \begin{align*} x &= r \cos \alpha \\ y &= \color{red}{r \sin \alpha} \\ z &= t \end{align*} You have a different expression for $y$. I don't think it matters in the integration, however, as a coincidence. The volume integral is $$ \iiint_E 3x\,dV = \int_0^{2\pi}\int_0^2 \int_0^{6-r\cos\alpha}3(r\cos \alpha) \color{red}{r}\,dt\,dr\,d\alpha $$ And that should make a big difference.
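With the Jacobian factor $r$ in place, the first volume integral can be evaluated numerically; a sketch (the $18r^2\cos\alpha$ part integrates to zero over a full period, leaving $-3\int_0^2 r^3\,dr\int_0^{2\pi}\cos^2\alpha\,d\alpha=-12\pi$, which suggests the intended answer is $-12\pi$):

```python
import math

n = 200
total = 0.0
da, dr = 2 * math.pi / n, 2.0 / n
for i in range(n):
    a = (i + 0.5) * da
    for j in range(n):
        r = (j + 0.5) * dr
        # the innermost t-integral of 3 r cos(a) * r is done exactly:
        total += 3 * r * math.cos(a) * r * (6 - r * math.cos(a)) * da * dr
print(total, -12 * math.pi)  # both about -37.70
```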
{ "language": "en", "url": "https://math.stackexchange.com/questions/2557397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$U$ is timelike if its orthogonal complement $U^\perp$ is spacelike Consider the bilinear form $\left<x,y\right>_{n,1} = \sum_{j=1}^n x_j y_j - x_{n+1} y_{n+1}$ on $\mathbb{R}^{n+1}$. A vector $x \in \mathbb{R}^{n+1}$ is said to be timelike if $\left<x,x\right>_{n,1} > 0$ while $x$ is called spacelike if $\left<x,x\right>_{n,1} < 0$. Furthermore, a subspace $U \subset \mathbb{R}^{n+1}$ is timelike if there exists a timelike vector $u \in U$ and $U$ is called spacelike if every non-zero vector in $U$ is spacelike. The orthogonal complement is defined as $U^\perp = \{ v \in \mathbb{R}^{n+1} | \left<v,u\right>_{n,1} = 0 \: \forall u \in U \}$ Now I would like to show that if $U^\perp$ is spacelike then $U$ is timelike. I already know that the converse holds and one has to use the fact that $(U^\perp)^\perp=U$. But I seem to not be able to prove the statement above, even with this hint. Could someone help me out? Thanks!
Suppose that $U^{\perp}$ is spacelike. Because $U^{\perp}$ is nondegenerate, the whole space decomposes as $V = U \oplus U^{\perp}$. Suppose, to the contrary, that $U$ is not timelike (i.e. there is no timelike vector in $U$). Then for any $u \in U$, $g(u,u) \leq 0$. Now any $v \in V$ can be expressed as $v = u + w$ where $u \in U$ and $w \in U^{\perp}$. With this \begin{align} g(v,v) &= g(u+w,u+w) \\ &= g(u,u) + g(u,w) + g(w,u) + g(w,w) \\ &=g(u,u) + g(w,w) \leq 0 \end{align} which tells us that there is no timelike vector at all in $V$ — a contradiction, since $V=\mathbb{R}^{n+1}$ certainly contains timelike vectors (e.g. the first standard basis vector). The key here is to realize that a subspace $W\subset V$ is nondegenerate iff the direct sum of $W$ and $W^{\perp}$ is equal to $V$. You can see the proof in O'Neill's Semi-Riemannian Geometry, p. 59.
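A minimal concrete illustration in $\mathbb{R}^3$ (with the question's sign convention, where timelike means $\langle x,x\rangle>0$):

```python
def form(x, y):
    # <x, y> = x1 y1 + x2 y2 - x3 y3 in R^3 (n = 2)
    return x[0] * y[0] + x[1] * y[1] - x[2] * y[2]

w = (0.0, 0.0, 1.0)  # spans U_perp; form(w, w) = -1 < 0, so U_perp is spacelike
u = (1.0, 0.0, 0.0)  # lies in U = (U_perp)_perp = span{e1, e2}
orthogonal = form(u, w) == 0
print(form(w, w), form(u, u), orthogonal)  # -1.0, 1.0, True: U is timelike
```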
{ "language": "en", "url": "https://math.stackexchange.com/questions/2557500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Isomorphic Groups Example I've been learning about isomorphisms (Two groups $(G, \cdot)$ and $(H, \circ)$ are isomorphic if there exists a one-to-one and onto map $\phi : G \rightarrow H$ such that the group operations is preserved; that is $\phi(a \cdot b) = \phi (a) \circ \phi (b)$ for all $a$ and $b$ in $G$.) but I was wondering if there's a group isomorphic to $\langle \lbrace 0 \rbrace, + \rangle$ other than itself? Could you give an example of a group that is isomorphic to $\langle \lbrace 0 \rbrace, + \rangle$?
The group $\langle \{1\}, \times\rangle$ is isomorphic to $\langle \{0\}, +\rangle$ because they are both groups containing only the unit of the operation. The isomorphism $0\leftrightarrow 1$ preserves the unit, and every possible application of the group operation $0+0 = 0 \iff 1\times 1 = 1$. Other examples include $\langle \{\varnothing\}, \cup\rangle$ and $\langle \{\mathbb{N}\}, \cap\rangle$. (Normally, these operations might not qualify as group operations because they're not invertible, but for a singleton group like this, all we really need is the unit property.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to draw a triangle knowing only the lengths of its 3 heights? I tried it, and found that the product of the two parts of the same height is equal to the other two products obtained in the same way. Then I tried to do it by drawing them as bisecting chords of a circle — the products are the same, but I am still not getting the triangle. Please help me!
OK let's define some variables first. Let $[ABC]$ be the area of the triangle, let $a,$ $b,$ and $c$ be the side lengths, and let $h_a,$ $h_b,$ and $h_c$ be the corresponding heights. Since $[ABC]=\frac{1}{2}a h_a,$ we have $a=\frac{2[ABC]}{h_a},$ and so on. Therefore, the sides must be of the form $\frac{2[ABC]}{h_a},\frac{2[ABC]}{h_b},\frac{2[ABC]}{h_c}$ — that is, proportional to $\frac{1}{h_a},\frac{1}{h_b},\frac{1}{h_c}.$ So construct any triangle with sides $\frac{1}{h_a},\frac{1}{h_b},\frac{1}{h_c}$ and then scale it until its heights have the prescribed lengths.
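A numeric round-trip check of this proportionality (with hypothetical heights $3,4,5$; it is assumed the reciprocals satisfy the triangle inequality):

```python
import math

def heights_from_sides(a, b, c):
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return 2 * area / a, 2 * area / b, 2 * area / c

ha, hb, hc = 3.0, 4.0, 5.0        # prescribed heights
a, b, c = 1 / ha, 1 / hb, 1 / hc  # sides proportional to the reciprocals
ka, kb, kc = heights_from_sides(a, b, c)
# the recovered heights match the prescribed ones up to one common scale factor
print(ka / ha, kb / hb, kc / hc)
```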
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Reciprocal derivative of function: $(f^{-1})'(\frac{1}{2} \sqrt{2})$ for $f(x)=\cos x$ "The function $f(x)=\cos(x)$ has an inverse/reciprocal (Not sure which one is the correct translation) function arccos in the interval $[0;\pi]$ Determine the derivative of $$(f^{-1})'(\frac{1}{2} \sqrt{2})$$ So, I guess the steps to take are * *Find the inverse of $\cos(x)$ which is $\frac{1}{\cos(x)}$ *Find the derivative of that function which I belive is $(\frac{1}{\cos(x)})'=\frac{\sin(x)}{\cos^2(x)}$ *Insert the value: $\frac{\sin(\frac{1}{2}\sqrt{2})}{\cos^2(\frac{1}{2}\sqrt{2})}$$=\frac{\frac{\pi}{4}}{\frac{\pi^2}{4^2}}$$=\frac{\pi}{4} \cdot \frac{16}{\pi^2}$$=\frac{4}{\pi}$ However, that answer is not correct. I'm told that the correct answer is $-2^{\frac{1}{2}}$ which I don't understand.
Use the definition $$(f^{-1})’(x)=\frac{1}{f’(f^{-1}(x))}$$ and see what you get. Note that $f(x)=\cos x$ and $f’(x)=-\sin x$. Also, that $f^{-1}(\frac{1}{\sqrt{2}})=\frac{\pi}{4}$. EDIT: We need $(f^{-1})’(\frac{1}{\sqrt{2}})$. Using the definition, we get, $$(f^{-1})’(\frac{1}{\sqrt{2}})=\frac{1}{f’(\frac{\pi}{4})}=\frac{1}{-\sin \frac{\pi}{4}}=-\frac{1}{\frac{1}{\sqrt{2}}}=-\sqrt2$$
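A finite-difference sanity check of the value $-\sqrt2$:

```python
import math

x0 = 1 / math.sqrt(2)
h = 1e-6
# central difference for (f^{-1})'(1/sqrt 2) with f^{-1} = arccos
deriv = (math.acos(x0 + h) - math.acos(x0 - h)) / (2 * h)
print(deriv, -math.sqrt(2))  # both about -1.4142
```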
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show that $(K,\circ)$ is a group. Let $K$ be the set of functions defined by: $f : \mathbb C \times \mathbb C \to \mathbb C \times \mathbb C$, such that $\exists a \in \mathbb C, \exists b \in \mathbb C$, $a,b$ not simultaneously equal to zero, with $f(u,v) = (au+bv,-\bar b u+\bar a v)$ Q: Show that $(K,\circ)$ is a group. I was able to show $f_1 \circ f_2 \in K$ (closure) and $(f_1 \circ f_2)\circ f_3 = f_1 \circ (f_2 \circ f_3)$ (associativity), but I'm not able to show the existence of inverses (the symmetric elements) nor the identity element of this set. I'd appreciate any help that would push me in the right direction! Thanks.
According to you, you have already shown the closure of such functions. Also, associativity is always true for composition of functions, no matter what. To prove that $K$ is a group under composition of functions, we have to show the existence of an identity element in the group, and the existence of the inverse of any element in the group. The identity function will be the identity element of $K$ and it is in $K$ because for $a=1,b=0$, we have $f(u,v)=(u,v)$. Therefore, the identity function is in the set $K$. Now to find the inverse of $f$, you have to solve the following system of equations in $\mathbb{C}^2$, $$au+bv=c$$ $$-\bar{b}u+\bar{a}v=d$$ You want to solve it for the unknowns $u$ and $v$. The determinant is $|a|^2+|b|^2$ which is not $0$ because $a,b$ are not simultaneously $0$. Therefore, the system has a unique solution. Let's find it: $$\pmatrix{u \\ v} = \frac{1}{|a|^2+|b|^2}\pmatrix{\bar{a} & -b \\ \bar{b} & a}\pmatrix{c \\d}$$ $$\pmatrix{u \\ v} = \pmatrix{ \frac{\bar{a}c-bd}{|a|^2+|b|^2} \\ \frac{\bar{b}c+ad}{|a|^2+|b|^2}}$$ If you set $$r=\frac{\bar{a}}{|a|^2+|b|^2}$$ $$s=\frac{-b}{{|a|^2+|b|^2}}$$ then, since the denominator is real, $\bar{r}=\frac{a}{|a|^2+|b|^2}$ and $\bar{s}=\frac{-\bar{b}}{|a|^2+|b|^2}$, and you obtain $$f^{-1}(c,d)=(rc+sd,-\bar{s}c+\bar{r}d)$$ which is of the given form and therefore belongs to $K$.
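A numeric sanity check that the coefficients $r=\bar a/(|a|^2+|b|^2)$, $s=-b/(|a|^2+|b|^2)$ really invert $f$ — note the inverse is again of the $K$-form, with coefficients $(r,s)$:

```python
def f(a, b):
    # the generic element of K
    return lambda u, v: (a * u + b * v, -b.conjugate() * u + a.conjugate() * v)

a, b = 2 + 1j, -1 + 3j
D = abs(a) ** 2 + abs(b) ** 2
r, s = a.conjugate() / D, -b / D     # coefficients of the inverse
u, v = 0.3 - 2.0j, 1.5 + 0.7j
w1, w2 = f(r, s)(*f(a, b)(u, v))     # inverse composed with f
err = abs(w1 - u) + abs(w2 - v)
print(err)  # ~0: f(r, s) undoes f(a, b)
```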
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are there two terms for the values of a function but only one for its inputs? Every function has a domain, a codomain and an image, the last being a subset of the codomain. My question, for which I ask for your opinions, is "Why is the domain not split into two parts in a similar fashion to the codomain?". I believe that it's because there's no logical or analytical gain in doing so, but I will like a more concrete explanation. Or maybe to make deductions one needs to have one of the two sets (inputs and outputs) always being explicitly specified while the other one can vary. Thank you for your time
Short answer: It's the definition of a function. Long answer: Because a function maps everything in its domain to somewhere in the codomain - in order to define a function, you need to say where every point in its domain gets mapped to in its codomain, otherwise it's not a well-defined function. If a function only mapped some of the elements in its domain somewhere, then it would no longer be a function by definition. One usually does this by instead restricting the domain of a function - it's important to note though that this gives a different function to the original, as the new function will have different properties. For instance, the function $f \colon \mathbb{R} \to \mathbb{R}$ given by $f(x)=x^2$ is not injective, however the function $g \colon [0,\infty) \to \mathbb{R}$ given by $g(x)=x^2$ is injective. In this particular case, $g$ is a restriction of $f$ to the subset of $f$'s domain, $[0,\infty) \subset \mathbb{R} = \operatorname{dom}f$. Sometimes we write $g = f|_{[0,\infty)}$. As an aside, you may be interested in the notion of relation which generalises the notion of a function. Very briefly, if we identify a function $f \colon X \to Y$ with its graph, so $$f = \{(x,y) \in X \times Y \mid f(x)=y\} \subset X \times Y,$$ then a relation is just any subset of $X \times Y$. A function $f \colon X \to Y$ can then be seen as a relation on $X \times Y$ such that for every $x \in X$, there exists a unique point $y \in Y$ such that $(x,y) \in f$ - this is often taken to be the definition of a function in set theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For $f$ in dual space, there exists $x$ with norm 1 and $f(x)=\|f\|$ if space is reflexive (and nontrivial) Let $X\ne\{0\}$ be a reflexive space and let $f\in X^*$, where $X^*$ is the dual of $X$. I want to know: in general, does there exist an $x\in X$ with $\|x\|=1$, and $f(x)=\|f\|$, where $\|f\|$ is defined as $\sup\{|f(x)|:x\in X,\|x\|=1\}$? I know this is true for $\mathbb{R}^n$ with the norm from the standard inner product, but I'm wondering if it is true in general.
If the space is reflexive then the immersion $$ \iota:X\to X^{**}, x\mapsto x(L):=L(x) $$ is a linear bijection. Now let $f\in X^*$ with $f\neq0$ (for $f=0$ any unit vector works, since $X\neq\{0\}$), and define the following map $$ L: \mathbb{R}f:=\{g\in X^*: g=\alpha f, \alpha\in \mathbb{R}\}\to \mathbb{R}, g=\alpha f\mapsto \alpha\|f\|. $$ It is well defined and continuous; moreover its norm is $1$ (check directly), so we can apply the Hahn-Banach extension theorem to find an extension $$\tilde{L}\in X^{**},\quad \tilde{L}|_{\mathbb{R}f} = L,\quad \|\tilde{L}\|= \|L\|=1. $$ By reflexivity there exists $x\in X$ such that $\tilde{L}(g) = x(g)$ for all $g\in X^*$, and $\|x\|=\|\tilde{L}\|=1$ because $\iota$ is an isometry. But then we are done, since $$ f(x) = x(f) = \tilde{L}(f) = L(f) = \|f\|. $$
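The statement is easy to watch in finite dimensions (a sketch in $\mathbb{R}^2$ with the Euclidean norm, which is reflexive): for $f(x)=a\cdot x$, the norm $\|f\|=|a|$ is attained at the unit vector $a/|a|$.

```python
import math

a = (3.0, -4.0)
norm_f = math.hypot(*a)             # sup of |f(x)| over ||x|| = 1
x = (a[0] / norm_f, a[1] / norm_f)  # the unit vector in the direction of a
fx = a[0] * x[0] + a[1] * x[1]
xnorm = math.hypot(*x)
print(fx, norm_f, xnorm)  # 5.0, 5.0, 1.0
```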
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Ratio of an inscribed circle's tangent to original square In the diagram, the circle is inscribed within square $PQRS,$ $\overline{UT}$ is tangent to the circle, and $RU$ is $\frac{1}{4}$ of $RS.$ What is $\frac{RT}{RQ}$?
Suppose that the circle touches $RS$, $TU$ and $RQ$ at $X$, $Y$ and $Z$ respectively. If $RQ=4$, $RU=1$. Let $RT=x$. Then $TU=TY+YU=TZ+XU=2-x+1$. So $1^2+x^2=(3-x)^2$ and thus $1+x^2=9-6x+x^2$. $\displaystyle x=\frac{4}{3}$. $\displaystyle \frac{RT}{RQ}=\frac{1}{3}$.
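Coordinates make the tangency easy to verify (placing $R$ at the origin with side length $RQ=4$ is an assumption made only for this check):

```python
import math

# incircle of the side-4 square has center (2, 2) and radius 2; U = (1, 0)
def dist_to_center(t):
    # line through U = (1, 0) and T = (0, t):  t x + y - t = 0
    return abs(2 * t + 2 - t) / math.hypot(t, 1)

d = dist_to_center(4 / 3)  # RT = 4/3 gives RT/RQ = (4/3)/4 = 1/3
print(d)                   # 2.0: the line TU is tangent to the incircle
```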
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Intersection of two x powers Many months ago in class I came up with the problem: $$x^{(x+1)} = (x+1)^x$$ Using the solve function on my calculator I have found that the answer is around 2.29... This is backed up by the graph. However I was determined to find the inverse function where: $$x = f(y) $$ or find the answer algebraically $$ x = _-$$ Being a lowly first year A-level student this has been pretty much impossible. So far doing some simple rearranging the equation looks like: $$ x^{\frac 1 x} - \frac 1 x - 1=0$$I've tried many methods and had a good look online. So far I have just about been able to solve the equation $x^{\frac 1 x} = y$ by finding the inverse of $x^x$ graphically so $$ x^x = y $$ $$ x = P(y)$$ where P is the inverse function of $x^x$, then doing $$x^{\frac 1 x} = y$$ $$e^{ln(x^{\frac 1 x})}=y$$ $$\frac1 x (ln(x))=ln(y)$$ $$xln(x)=\frac 1 {ln(y)}$$ $$x^x = e^{\frac1 {ln(y)}}$$ $$x=P\biggl(e^{\frac1 {ln(y)}}\biggr)$$I don't know how to fit this in to my original equation to have just x on one side and no x's on the other side ... I do not want to use any guesswork methods or methods where you work your way to the answer slowly. I have tried using methods where you go one step up above powers so $x^x$ becomes something like $x@2$ where @ is used like + or X then trying to find the inverse of this like - is to + and / is to X and $\sqrt x$ is to $x^2$, to help you bridge the barrier between the $x^x$ and the $\frac 1 x$ but I couldn't find any way of doing this. Thank you for the help.
We can approximate the solution by building around $x=2$ the Taylor expansion of the function $$f(x)=(x+1)\log(x)-x \log(x+1)$$ the first derivatives of which are $$f'(x)=\frac{1}{x}+\frac{1}{x+1}+\log (x)-\log (x+1)$$ $$f''(x)=-\frac{x^2+x+1}{x^2 (x+1)^2}\qquad f'''(x)=\frac{(2 x+1) \left(x^2+x+2\right)}{x^3 (x+1)^3}$$ This would give $$f(x)=-\log \left(\frac{9}{8}\right)+ \left(\frac{5}{6}-\log \left(\frac{3}{2}\right)\right)(x-2)-\frac{7}{72} (x-2)^2+\frac{5}{162} (x-2)^3+O\left((x-2)^4\right)$$ Using the expansion to $O\left((x-2)^2\right)$ would give a solution which is $$x=2+\frac{6 \log \left(\frac{9}{8}\right)}{5-\log \left(\frac{729}{64}\right)}\approx \color{red}{2.2}7528$$ I shall not write the solution of the quadratic corresponding to the expansion up to $O\left((x-2)^3\right)$, but the solution will be $x\approx \color{red}{2.29}506$. Solving the cubic corresponding to the expansion up to $O\left((x-2)^4\right)$ would give $x\approx \color{red}{2.29}297$. We could even do better building the simplest $[1,1]$ Padé approximant of the function $f(x)$ $$f(x) \approx \frac{f(2)+\left(f'(2)-\frac{f(2) f''(2)}{2 f'(2)}\right)(x-2) } {1-\frac{ f''(2)}{2 f'(2)}(x-2) }$$ and, solving the numerator for $x$, we get an explicit expression which evaluates to $x\approx \color{red}{2.293}65$. Building the simplest $[1,2]$ Padé approximant would lead to $x\approx \color{red}{2.2931}3$, still at the price of solving only a linear equation.
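A bisection check of the equation $(x+1)\log x = x\log(x+1)$ (a sketch) agrees with these approximations:

```python
import math

def f(x):
    # x^(x+1) = (x+1)^x  <=>  f(x) = (x+1) log x - x log(x+1) = 0
    return (x + 1) * math.log(x) - x * math.log(x + 1)

a, b = 2.0, 3.0            # f(2) < 0 < f(3), so a root lies in between
for _ in range(60):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
root = (a + b) / 2
print(root)  # about 2.293, consistent with the Padé estimates above
```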
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
The intersection of a sequence of bases of a Banach space Let $V$ be a Banach space Let $\{B_n\}_{n \in \mathbb{N}}$ be a sequence of subsets of $V$ such that: $$ B_{n+1} \subsetneq B_n $$ and $$ \bigcap_{n=1}^\infty B_n = \{b_0\} $$ with $b_0 \in V$ and $\forall n \in \mathbb{N}: B_n$ is linearly independent. I would like to know if $$ \bigcap_{n=1}^\infty \overline{\operatorname{span}{B_n}} = \operatorname{span}\{b_0\} $$ Thanks.
The answer is no, in general. Let $V$ be any Banach space with $\dim V \ge 2$. Let $x \in V$ be a nonzero vector and let $(x_n)_{n=1}^\infty$ be an injective sequence in $V$ which converges to $x$, such as $x_n = \frac{n}{n+1}x$ for $n \in \mathbb{N}$. Also, let $y \in V$ be a vector such that $\{x, y\}$ is linearly independent. Define $B_n = \{x_{n}, x_{n+1}, x_{n+2}, \ldots\} \cup \{y\}$ for $n \in \mathbb{N}$. We have $B_{n+1} \subsetneq B_n$ since $(x_n)_{n=1}^\infty$ is injective and $\displaystyle\bigcap_{n=1}^\infty B_n = \{y\}$. However, since $x_n \xrightarrow{n\to\infty} x$, we have $x \in \overline{B_n} \subseteq \overline{\operatorname{span}}B_n$ for every $n \in \mathbb{N}$. Therefore $$\{x, y\} \subseteq \bigcap_{n=1}^\infty \overline{\operatorname{span}}B_n $$ but $x \notin \overline{\operatorname{span}}\{y\}$ so $\displaystyle \bigcap_{n=1}^\infty \overline{\operatorname{span}}B_n \ne \overline{\operatorname{span}}\{y\}$. If $\dim V = 1$ then the statement also isn't true. Let $e \ne 0$ and define $B_n = \{ne, (n+1)e, (n+2)e, \ldots\} \cup \{0\}$. We have $\displaystyle \bigcap_{n=1}^\infty B_n = \{0\}$, but $$\bigcap_{n=1}^\infty \overline{\operatorname{span}}B_n = \bigcap_{n=1}^\infty V = V \ne \overline{\operatorname{span}}\{0\} = \{0\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2558987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that there exists the limit $\lim\limits_{n \to \infty}\int_0^1\frac{e^x \cos x}{nx^2+\frac{1}{n}}dx$ Using the AM-GM inequality, we can easily get that $$\left|\frac{e^x \cos x}{nx^2+\frac{1}{n}}\right| \leq \frac{e^x}{2x}$$ However, $\int_0^1 g(x)dx= \int_0^1\frac{e^x}{2x}dx$ diverges, thus we cannot apply the Lebesgue Dominated Convergence Theorem. I have an idea: use the characteristic function $\chi_{[\frac{1}{n},1]}f_n(x)$ to approximate $f_n(x)$. Then $\left|\chi_{[\frac{1}{n},1]}f_n(x)\right| \leq \left|f_n(x)\right| \leq g(x)$, whose integral converges on $[\frac{1}{n},1]$, and then we can apply the LDC Theorem to find the limit of $\int_0^1 f_n(x)\,dx=0$. Is this idea correct, and how can it be written down rigorously?
By enforcing the substitution $x=\frac{z}{n}$, $dx=\frac{dz}{n}$ we have $$ \int_{0}^{1}\frac{e^x\cos x}{nx^2+\frac{1}{n}}\,dx = \int_{0}^{n}\frac{e^{z/n}\cos(z/n)}{z^2+1}\,dz=\int_{0}^{+\infty}\frac{f_n(z)}{z^2+1}\,dz -O\left(\frac{1}{n}\right)$$ where $f_n(z)$ is defined as $e^{z/n}\cos(z/n)$ over $[0,n]$ and as $1$ over $[n,+\infty)$. Here we may easily apply the dominated convergence theorem, since $e^x\cos x$ is a positive and continuous function on $[0,1]$, leading to $0\leq f_n(x)\leq M$. For any $z\in\mathbb{R}^+$ we have $\lim_{n\to +\infty}f_n(z)=1$, hence the wanted limit equals $\int_{0}^{+\infty}\frac{dz}{z^2+1}=\color{red}{\large\frac{\pi}{2}}$.
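A midpoint-rule check of the convergence (a sketch; `steps` is just a resolution parameter chosen fine enough to resolve the peak of width $\sim 1/n$ at the origin):

```python
import math

def integral(n, steps=100000):
    # midpoint rule for the integral in the question, at a fixed n
    h = 1.0 / steps
    return sum(
        math.exp(x) * math.cos(x) / (n * x * x + 1.0 / n)
        for x in ((k + 0.5) * h for k in range(steps))
    ) * h

i100, i1000 = integral(100), integral(1000)
print(i100, i1000, math.pi / 2)  # the values creep toward pi/2 ≈ 1.5708
```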
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
how to evaluate $\lim_{x\to0} (x^2)/(e^x-1) $ without L'Hospital So I'm trying to evaluate the limit written in the title without L'Hôpital or series (because they have not been introduced in our course). I tried to use this well-known limit: $\lim_{x\to0} \frac{e^x−1}{x} =1$, so our limit equals $\lim_{x\to0} x\left(\frac{x}{e^x−1}\right)$. I'm not sure how to prove that $\lim_{x\to0}\frac{x}{e^x−1} = 1$. Are there any facts I can use here, or other algebraic manipulations, to evaluate the limit?
$$ \begin{aligned} \lim _{x \rightarrow 0} \frac{x^{2}}{e^{x}-1} &=\lim _{x \rightarrow 0} \frac{x}{\frac{e^{x}-1}{x}} \\ &=\frac{\displaystyle \lim _{x \rightarrow 0} x}{\displaystyle \lim _{x \rightarrow 0} \frac{e^{x}-1}{x}} \\ &=\frac{0}{1} \\ &=0 \end{aligned} $$
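A tiny numeric check that the quotient behaves like $x$ near $0$, hence tends to $0$:

```python
import math

vals = [x * x / (math.exp(x) - 1) for x in (0.1, 0.01, 0.001)]
print(vals)  # roughly 0.095, 0.00995, 0.001: the quotient shrinks like x
```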
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 6 }
Why $\int_a^bf(x)\,dx=F(b)-F(a)$ I recently looked at the proof given for the fundamental theorem of calculus in this link: Why is the area under a curve the integral? It made perfect sense, except for one thing. The proof relies on creating a function $F(b)$ that gives the area under the curve $f(x)$ from $0$ to some real number $b$ so that in essence $F(b) = \int_{0}^{b}f(x)dx $ and then proved that $F(b)$ is the anti derivative of $f(x)$. However, if we define the integral in this way, then it seems strange that when integrating a function from 0 to a value b we have to evaluate $F(b)-F(0)$ rather than just evaluate $F(b)$. Since the former would generally imply that if we want to find the area under the curve from a to b, then given the definition of an integral, we simply have to subtract the area from 0 to a from the area from 0 to b. Which in this case makes no sense, since we would be subtracting the area from 0 to 0, ie. 0 from the area from 0 to b. Which means we could just discard the first part of the evaluation, yet this would cause us problems if we wanted to evaluate something like $ \int_{0}^{\pi/2}sin(x)dx $ which would be zero if we just evaluate the antiderivative of sin(x) at $\pi/2$.
From my point of view, a simple way to see this fact is to consider the integral function: $$F(x) = \int_{0}^{x}f(t)dt $$ which represents the area “under” the graph from $0$ to $x$. Now if we think about calculating its derivative, it is pretty clear that for a small change $\Delta x$ the area varies by approximately the quantity: $$\Delta F(x)=f(x)\cdot \Delta x$$ Thus the rate of change is $$\frac{\Delta F(x)}{\Delta x}=f(x) $$ and in the limit $$F’(x)=f(x)$$ That’s the link between the two concepts. Now, since the derivative of a constant is zero, any constant may be added to an antiderivative and will still correspond to the same integral (i.e. the antiderivative is a nonunique inverse of the derivative). For this reason, indefinite integrals are written in the form $$\int f(x)dx = F(x) + c$$ where $c$ is an arbitrary constant of integration. For this reason, when we evaluate a definite integral we need to calculate it as a difference, precisely to eliminate the constant of integration.
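A numeric illustration, using the concrete example $\int_0^{\pi/2}\sin x\,dx$ from the question: a Riemann sum agrees with $F(\pi/2)-F(0)$ for the antiderivative $F(x)=-\cos x$.

```python
import math

N = 20000
h = (math.pi / 2) / N
riemann = sum(math.sin((k + 0.5) * h) for k in range(N)) * h  # midpoint sum
exact = (-math.cos(math.pi / 2)) - (-math.cos(0.0))           # F(b) - F(a) = 1
print(riemann, exact)  # both about 1.0
```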
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Find a function $f(x)$ such that$\int_0^txf(x)f(\sqrt{t^2-x^2})dx=\sin(t^2)$ I am looking for a function that satisfies the integral equation $\int_0^txf(x)f(\sqrt{t^2-x^2})dx=\sin(t^2)$. It's some sort of convolution. If we denote the above equation by $ f(t)*f(t)=\sin(t^2)$ I was also able to show that $f(t)*(2\cos(t^2))=t^2f(t).$ The definition of $ f(t)*g(t)=\int_0^txf(x)g(\sqrt{t^2-x^2})dx$. I also got $f(t)*\dfrac{f'(t)}{t}+f(0)f(t)=2\cos(t^2)$ through differentiation ($t>0$). That's all.
$$\int_0^t f(x)f(\sqrt{t^2-x^2})xdx=\sin(t^2)$$ HINT : Let $\quad f(x)=g(x^2)$ $$\int_0^t g(x^2)g(t^2-x^2)\frac{d(x^2)}{2}=\sin(t^2)$$ Let $\quad\begin{cases} x^2=X \\t^2=T\end{cases}$ Since $X=x^2$ runs over $[0,t^2]=[0,T]$ as $x$ runs over $[0,t]$, $$\int_0^{T} g(X)g(T-X)dX=2\sin(T)$$
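The substitution can be sanity-checked numerically with any smooth stand-in for $g$ (cosine here is an arbitrary choice; the point is that $X=x^2$ maps $[0,t]$ onto $[0,T]$ with $T=t^2$):

```python
import math

def midpoint(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

g = math.cos
t = 1.3
T = t * t
lhs = midpoint(lambda x: g(x * x) * g(t * t - x * x) * x, 0.0, t)
rhs = 0.5 * midpoint(lambda X: g(X) * g(T - X), 0.0, T)
print(lhs, rhs)  # equal, confirming the change of variables
```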
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Name for a polygon which is laying on a cylinder? Is there a name for a closed-loop polygon (2D, such as a circle, square, triangle, etc.) which is "draped" over a cylinder (3D)? Picture Dali's famous painting, The Persistence of Memory. If those clocks were of zero thickness, just circles melted over some 3D object, can you still call them polygons? They wouldn't be quadrilaterals, because they themselves do not have a third dimension. Thank you for your insight!
Just call it a polygon. Since the cylinder can be rolled up from a plane without changing any locally measured angles, your polygon will unroll into a polygon in the plane. The mathematics behind this is that the cylinder has Gaussian curvature $0$ and is a developable surface.
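A numeric illustration that rolling the plane onto a cylinder preserves lengths (the radius $R=1$ and the small triangle are arbitrary choices; each edge is sampled densely so the wrapped polyline length approximates the curve length):

```python
import math

R = 1.0

def wrap(x, y):
    # roll the plane onto the cylinder: arclength x becomes the angle x / R
    return (R * math.cos(x / R), R * math.sin(x / R), y)

def length(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def edge(p, q, n=2000):
    return [(p[0] + (q[0] - p[0]) * k / n, p[1] + (q[1] - p[1]) * k / n)
            for k in range(n)]

tri = [(0.0, 0.0), (0.3, 0.0), (0.1, 0.2)]
flat = edge(tri[0], tri[1]) + edge(tri[1], tri[2]) + edge(tri[2], tri[0])
l_flat = length([(x, y, 0.0) for x, y in flat])
l_wrapped = length([wrap(x, y) for x, y in flat])
print(l_flat, l_wrapped)  # nearly equal perimeters: wrapping is an isometry
```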
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving a derivative through implicit differentiation Find the derivative of $x^2+4xy+y^2=13$ So here is what I did: $$2x+4\left(x\frac{dy}{dx}+y\right)+2y\frac{dy}{dx}=0$$ $$4x\frac{dy}{dx}+2y\frac{dy}{dx}=-2x-4y$$ $$\frac{dy}{dx}(4x+2y)=-2x-4y$$ $$\frac{dy}{dx}=\frac{-2x-4y}{4x+2y}$$ But this isn't the correct answer. Any help?
Given $$ x^2+4xy+y^2=13 $$ and apply the derivative operator $$ d(x^2+4xy+y^2)=d(13)\implies 2x+4x\frac{dy}{dx}+4y+2y\frac{dy}{dx}=0 $$ and so $$ \frac{dy}{dx}=-\frac{2x+4y}{4x+2y}=-\frac{x+2y}{2x+y} $$ and it appears you just didn't cancel the common factor 2 of the denominator and numerator.
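A finite-difference check of the result at a sample point on the curve, $(x,y)=(1,2)$ (since $1+8+4=13$), where the formula gives $dy/dx=-(1+4)/(2+2)=-5/4$:

```python
x0, y0 = 1.0, 2.0

def y_on_curve(x):
    # solve y^2 + 4xy + (x^2 - 13) = 0 for the branch near y0 = 2
    disc = 16 * x * x - 4 * (x * x - 13)
    return (-4 * x + disc ** 0.5) / 2

h = 1e-6
slope = (y_on_curve(x0 + h) - y_on_curve(x0 - h)) / (2 * h)
formula = -(x0 + 2 * y0) / (2 * x0 + y0)
print(slope, formula)  # both -1.25
```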
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
derivative calculation I met a problem that is described as follows. I am not sure the title is suitable for my problem; if you have any advice about the title or the description, please comment below. Already known: $$F_1(x)=\frac{F_0^\prime(x)}{F_0^\prime(1)}$$ $$G_0(x)=\frac{1}{F_0(u)}F_0(u+(x-1)F_1(u))$$ $$G_1(x)=\frac{G_0^\prime(x)}{G_0^\prime(1)}$$ Expected result that I need to obtain: $$G_1(x)=\frac{G_0^\prime(x)}{G_0^\prime(1)}=\frac{1}{F_1(u)}F_1(u+(x-1)F_1(u))$$ I tried: $$G_0^\prime(x)=\frac{\mathrm{d}G_0(x)}{\mathrm{d}x}=\frac{\mathrm{d}(\frac{1}{F_0(u)}F_0(u+(x-1)F_1(u)))}{\mathrm{d}x} =\frac{F_1(u)}{F_0(u)}\frac{\mathrm{d}(F_0(u+(x-1)F_1(u)))}{\mathrm{d}x}$$ $$G_0^\prime(1)=\left . G_0^\prime(x)\right\vert_{x=1}=\frac{F_1(u)}{F_0(u)}\left . \frac{\mathrm{d}F_0(u+(x-1)F_1(u))}{\mathrm{d}x}\right\vert_{x=1}$$ $$F_1(u+(x-1)F_1(u))=\frac{\frac{\mathrm{d}(F_0(u+(x-1)F_1(u)))}{\mathrm{d}x}}{\left . \frac{\mathrm{d}(F_0(u+(x-1)F_1(u)))}{\mathrm{d}x}\right\vert_{x=1}}$$ But I did not get the expected result. P.S. $u$ is not a function of $x$, so $u$ should be treated as a constant. Could anyone do me this favor? Please do not be misled by the steps that I tried.
We have $$G_0'(x)=\frac{F_1(u)}{F_0(u)}F_0'(u+(x-1)F_1(u))$$ and substituing $x=1$ gives $$G_0'(1)=\frac{F_1(u)}{F_0(u)}F_0'(u+(1-1)F_1(u))=\frac{F_1(u)}{F_0(u)}F_0'(u)$$ Hence $$G_1(x)=\dfrac{\dfrac{F_1(u)}{F_0(u)}F_0'(u+(x-1)F_1(u))}{\dfrac{F_1(u)}{F_0(u)}F_0'(u)}=\dfrac{\dfrac{F_0'(u+(x-1)F_1(u))}{F_0'(1)}}{\dfrac{F_0'(u)}{F_0'(1)}}=\dfrac{F_1(u+(x-1)F_1(u))}{F_1(u)}$$ as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Markov Chains. How to find F with different shapes of I and Q First of all I want to say that this is only my second day dealing with Markov chains, so I have some problems with terminology. Sorry for that. I need to solve the following matrix: Now I need to compute FR. In order to compute FR I need to find F first, which is $(I-Q)^{-1}$. So my question is: what if I and Q have different shapes? In the image above Q is smaller than I. And what about the case when Q is larger?
The $I$ that you see in $(I-Q)^{-1}$ is an identity matrix of the same dimension as $Q$. The $I$ that you see in the image is also an identity matrix by construction, and it tells you the number of absorbing states in the system. This one and $Q$ need not have the same dimension.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2559857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Differentiate the squared dot product I am new to the Mathematics Stack Exchange community and have no experience in asking questions, so please bear with me. I am watching a deep learning course from Coursera and encountered a question during the video. $$\left|\frac d{d\vec x}(\vec x\cdot\vec x)^2\right|=?$$ $$\vec x=\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}3\\4\end{bmatrix}$$ I am not sure how to perform the dot product on the two vectors, since both have dimension 2x1. Please guide me! Thanks. Note: After some trial and error, I got the answer, but I don't understand the solution.
By definition, the derivative of a scalar $f(x,y)$ with respect to a vector $(x,y)$ is the following vector $$\left(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}\right).$$ We have $$f(x_1,x_2)=(\vec x \cdot \vec x)^2 =\left(x_1^2+x_2^2\right)^2.$$ So the derivative in question is the vector $$(4(x_1^2+x_2^2)x_1,4(x_1^2+x_2^2)x_2).$$ At $(3,4)$, this is $$(300,400).$$ EDIT Taking the absolute value: $$|(300,400)|=\sqrt{300^2+400^2}=100\times5=500.$$
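A quick numerical cross-check of the gradient and its magnitude (my own sketch):

```python
import math

def f(x1, x2):
    return (x1**2 + x2**2)**2

def grad_f(x1, x2):
    return (4 * (x1**2 + x2**2) * x1, 4 * (x1**2 + x2**2) * x2)

g = grad_f(3, 4)
assert g == (300, 400)
assert abs(math.hypot(*g) - 500) < 1e-9      # |(300, 400)| = 500

# cross-check the partial derivatives against central differences
h = 1e-6
num = ((f(3 + h, 4) - f(3 - h, 4)) / (2 * h),
       (f(3, 4 + h) - f(3, 4 - h)) / (2 * h))
assert abs(num[0] - 300) < 1e-3 and abs(num[1] - 400) < 1e-3
```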
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Cardinality of $\Bbb N^{k}$ How can i determine the cardinality of $\Bbb N^{k}$ for $k \in \Bbb N$ ? I know that $\Bbb{N} \times \Bbb{N}$ is of cardinality $\aleph_o$, is there any valid induction for $k\in \Bbb{N}$?
Yes. Let $f:\Bbb N^2\to \Bbb N$ be a bijection. For $k\geq 2$ suppose there is a bijection $g: \Bbb N^k\to \Bbb N.$ For $x=(x_1,...,x_{k+1})\in \Bbb N^{k+1}$ let $h(x)=f(g(x_1,...,x_k),x_{k+1}).$ Then $h:\Bbb N^{k+1}\to \Bbb N$ is a bijection. This is a common technique in inductive proofs.
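To make the induction concrete, one could take the Cantor pairing function as the base bijection $f$; the sketch below (an illustration, not a proof) checks injectivity on a finite sample:

```python
from itertools import product

def pair(x, y):
    # Cantor pairing: a bijection N^2 -> N
    return (x + y) * (x + y + 1) // 2 + y

def tuple_to_n(t):
    # collapse N^k -> N inductively, as in the proof:
    # h(x_1, ..., x_{k+1}) = f(g(x_1, ..., x_k), x_{k+1})
    code = t[0]
    for coord in t[1:]:
        code = pair(code, coord)
    return code

# injectivity on a finite sample of N^3
codes = [tuple_to_n(t) for t in product(range(15), repeat=3)]
assert len(codes) == len(set(codes))
```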
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the method to show exactly one positive root of a cubic equation? I have $a x^3 + b x^2 + c x + d =0$ with $a>0 , d<0$. Here $a, b, c, d$ are functions of the parameters. I'm looking for an analytic solution of this cubic equation. How can one prove that this cubic equation has exactly one positive root?
The standard way to determine that the cubic polynomial \begin{align} f(x)&=ax^3+bx^2+cx+d \end{align} has only one real root is to check that its discriminant $\Delta<0$: \begin{align} \Delta &= 18abcd -4b^3d + b^2c^2 - 4ac^3 - 27a^2d^2 , \end{align} Obviously, the two conditions $a>0$, $d<0$ alone are not enough to determine even whether there is only one real root.
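As an illustration (my own example): the cubic $x^3-x-2$ has $a>0$, $d<0$ and negative discriminant, and indeed exactly one real root, which a crude sign-change count confirms:

```python
def discriminant(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

# x^3 - x - 2 has a > 0 and d < 0
a, b, c, d = 1, 0, -1, -2
assert discriminant(a, b, c, d) == -104      # negative: one real root

def f(x):
    return a*x**3 + b*x**2 + c*x + d

# count sign changes of f on a grid wide enough to bracket all real roots
xs = [-10 + i * 0.001 for i in range(20001)]
changes = sum(1 for u, v in zip(xs, xs[1:]) if f(u) * f(v) < 0)
assert changes == 1
```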
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
$f$ continuous $\Leftrightarrow f\circ p$ continuous implies $p$ quotient map Let $X,Y,Z$ be topological spaces. Let $p:X\rightarrow Y$ be a continuous surjection. Let $f:Y\rightarrow Z$ be continuous if and only if $f\circ p:X\rightarrow Z$ is continuous. I want to prove that this makes $p$ a quotient map. My thoughts: Since $p$ is a continuous surjection, all I need is for $p$ to also be open. If I can show that $p^{-1}$ exists and is continuous, then $p$ must be open, and therefore a quotient map. Since $p$ is surjective, I know that $p$ at least has a right inverse, so some function $g$ exists such that $p\circ g = Id_Y$. I don't know how to proceed, however. Am I on the right track?
I assume you want the property to hold for all spaces $Z$. In this case, pick $Z = Y$ as sets, endowed with the quotient topology for $p$. Let $f:Y \to Z$ be the identity map. We will show that $f$ is a homeomorphism, and hence that $Y$ also has the quotient topology. First, $\tilde p =f \circ p$ is continuous because $p$ is, hence by the universal property $f$ is also continuous. Next, we may factor the continuous map $p$ as $$p = f^{-1} \circ f \circ p =f^{-1} \circ \tilde p,$$ and by the corresponding universal property for $\tilde p$, this means that $f^{-1}$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proof of the equation of the fundamental matrix in absorbing Markov chains The standard or canonical form of the transition matrix of an absorbing Markov chain is $$P = \begin{bmatrix}I & 0 \\ R & Q \end{bmatrix}$$ and the fundamental matrix is calculated as $$F=(I - Q)^{-1}$$ such that $FR$ spells out the probability of eventually landing on each absorbing states from different transient states. At the same time, $F$ provides the expected number of steps required. What is the proof that $F$ and $FR$ include this information?
The submatrix $R$ contains the transition probabilities from transient to absorbing states, while $Q$ contains the transition probabilities from transient to transient states. Powers of the transition matrix $P$ approach a limiting matrix with a pattern: $$\begin{align} P^2 &=\begin{bmatrix}I & 0 \\ R & Q \end{bmatrix}^2= \begin{bmatrix}I & 0 \\ R+QR & Q^2 \end{bmatrix}\\[3ex] P^3 &=\begin{bmatrix}I & 0 \\ R+QR & Q^2 \end{bmatrix}\begin{bmatrix}I & 0 \\ R & Q \end{bmatrix}=\begin{bmatrix}I & 0 \\ R+QR+Q^2R & Q^3 \end{bmatrix}\\[3ex] P^k &=\begin{bmatrix}I & 0 \\ \left(I+Q+Q^2+\cdots+Q^{k-1}\right)R & Q^k \end{bmatrix}\tag 1 \end{align}$$ The key now is that $Q^k\to 0$ as $k\to \infty.$ The fundamental matrix is a geometric series: $$F= I+Q+Q^2+\cdots=(I-Q)^{-1}$$ and replacing the expression $I+Q+Q^2+\cdots+Q^{k-1}$ in (1) with $F:$ $$\begin{align}P^{\infty}&=\begin{bmatrix}I & 0 \\ FR & 0 \end{bmatrix}\end{align}$$
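A concrete example may help (my own, not from the question): the symmetric random walk on $\{0,1,2,3\}$ with $0$ and $3$ absorbing. The $2\times 2$ inverse is done by the adjugate formula to keep the sketch dependency-free:

```python
# transient states {1, 2}; absorbing states {0, 3}; each step moves
# left or right with probability 1/2
Q = [[0.0, 0.5], [0.5, 0.0]]   # transient -> transient
R = [[0.5, 0.0], [0.0, 0.5]]   # transient -> absorbing (cols: absorb at 0, at 3)

# F = (I - Q)^{-1}, inverted via the 2x2 adjugate formula
m = [[1 - Q[0][0], -Q[0][1]], [-Q[1][0], 1 - Q[1][1]]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
F = [[ m[1][1] / det, -m[0][1] / det],
     [-m[1][0] / det,  m[0][0] / det]]

# FR: probabilities of eventually landing in each absorbing state
FR = [[sum(F[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# gambler's ruin facts: from state 1 the walk is absorbed at 0 with
# probability 2/3, and the expected number of steps (row sum of F) is 2
assert abs(FR[0][0] - 2/3) < 1e-12
assert abs(sum(F[0]) - 2.0) < 1e-12
```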
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $p$ is a prime such that $p\equiv 4 \mod 7,$ then $p \equiv 4\mod 14.$ Prove or disprove : If $p$ is a prime such that $p\equiv 4 \mod 7,$ then $p \equiv 4\mod 14.$ I think it is a false statement because if $p=11$ then $p\equiv 4 \mod 7$ but $p \not \equiv 4\mod 14.$ Is my counterexample correct? thanks
Note that $n\equiv 4 \pmod {14}$ is an even number. Thus it can’t be a prime.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
In the sequence $1,4,11,26$… each term is $2⋅(n-1)^{th}$ term $+ n$. What is the $n^{th}$ term? I readily see that it is $2^{n+1} - (n+2)$ but how can I deduce the $n^{th}$ term from the given pattern i.e. $2⋅(n−1)^{th}$ term $+n$ .
Hint: $a_n = 2 a_{n-1} + n \iff a_n + n + 2 = 2 \left(a_{n-1} + (n-1) + 2\right)\,$, so $a_n+n+2$ is a geometric progression with common ratio $2$. [ EDIT ]   To followup on comments about doing it "by inspection", the heuristic would go like: * *try adding some multiple of $n$ on both sides of the given recurrence in such a way that the terms in $n$ can be "folded" into / incorporated into the general term of a suitable sequence $$ \require{cancel} \begin{align} a_n \,+\, \color{red}{?} \cdot n &\,=\, 2 a_{n-1} \,+\, \color{red}{?} \cdot n \,+\, n \\ &\,=\, 2 \big( a_{n-1}\,+\, \color{red}{?} \cdot (n-1) \big) \,-\, \bcancel{\color{red}{?} \cdot n} \,+\, 2 \,+\, \bcancel{n} \end{align} $$ * *it follows that $\color{red}{?} = 1$ for the free terms in $n$ to cancel out, which leaves $$ a_n \,+\, n \,=\, 2 \big( a_{n-1}\,+\, (n-1) \big) \,+\, 2 $$ * *repeat essentially the same process to now fold the constant into the general term $$ \begin{align} a_n \,+\, n \,+\, \color{red}{??} &\,=\, 2 \big( a_{n-1}\,+\, (n-1) \big) \,+\, 2 \,+\, \color{red}{??}\\ &\,=\, 2 \big( a_{n-1}\,+\, (n-1) \,+\, \color{red}{??}\big) \,-\, \bcancel{\color{red}{??}} \,+\, \bcancel{2} \end{align} $$ * *again it follows that $\color{red}{??} = 2$ for the free terms to cancel out, thus in the end $$a_n + n + 2 = 2 \big(a_{n-1} + (n-1) + 2\big)$$ The latter shows that $a_n + n + 2$ is a geometric progression with common ratio $2\,$, so: $$ a_n + n + 2 = 2^n (a_0 + 0 + 2 ) \quad\iff\quad a_n = (a_0+2)\,2^n - (n + 2) $$ The above is technically equivalent to doing it "by the book", of course, but the individual steps are small enough that they can be worked out mostly "by inspection".
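A quick check of the closed form against the recurrence (with $a_1=1$, matching the question's sequence $1,4,11,26,\dots$):

```python
def closed_form(n):
    return 2**(n + 1) - (n + 2)

# generate the sequence from the recurrence a_n = 2*a_{n-1} + n, a_1 = 1
a = 1
for n in range(1, 30):
    assert a == closed_form(n)
    a = 2 * a + (n + 1)        # step from a_n to a_{n+1}

# and a_n + n + 2 is indeed geometric with common ratio 2
vals = [closed_form(n) + n + 2 for n in range(1, 10)]
assert all(v2 == 2 * v1 for v1, v2 in zip(vals, vals[1:]))
```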
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to write Euler product and Zeta function for $K = \mathbb{Q}(\sqrt{3})$ For general number fields, I have found a carefully written statement of the Dedekind zeta function and its Euler product: $$ \zeta_L(s) = \sum_{\mathfrak{a}\subseteq \mathcal{O}_L} \frac{1}{N_{L/\mathbb{Q}}(\mathfrak{a})^s} = \prod_{\mathfrak{p}\subseteq \mathcal{O}_L} \left( 1 - \frac{1}{N_{L/\mathbb{Q}}(\mathfrak{p})^s}\right)^{-1} $$ If we use a quadratic field, e.g. $L = \mathbb{Q}(\sqrt{3})$, then I believe the ring of integers is obtained by adjoining the square root: $$ \mathcal{O}_{\mathbb{Q}(\sqrt{3})} = \mathbb{Z}\big[\sqrt{3}\big] = \{ a + b \sqrt{3} : a,b \in \mathbb{Z}\}$$ I would write the zeta function as the sum over the numbers in this set: $$ \zeta(s) = \sum'_{(a,b) \in \mathbb{Z}^2} \frac{1}{(a^2 - 3b^2)^s} $$ This is not right... Ideals are indexed by these elements, except now there is the possibility of having infinitely many numbers of size $1$: $$ N_{\mathbb{Q}(\sqrt{3})/\mathbb{Q}}\big[(2 - \sqrt{3})^n \big] = (2 - \sqrt{3})^n(2 + \sqrt{3})^n = (2^2 - 3)^n = 1^n = 1$$ So... all the terms will appear infinitely many times. What is the correct subset: how can I index the ideals of $\mathcal{O}_{\mathbb{Q}(\sqrt{3})}$ as pairs of integers $(a,b) \in \mathbb{Z}^2$? My best guess is to find a way to index: $$ \mathbb{Z}[\sqrt{3}]^\times / \{(2 - \sqrt{3})^n : n \in \mathbb{Z}\} $$ There might not be a nice way to index that set. As for the Euler product, I'd like to know which primes split over $\mathbb{Q}(\sqrt{3})$.
Quadratic reciprocity (or possibly a geometry of numbers argument) would say: $$ \big( \frac{3}{p}\big) = (-1)^{\frac{p-1}{2}\frac{3-1}{2}}\big( \frac{p}{3}\big) = \left\{ \begin{array}{rl} 1 & p \equiv 1 \pmod 3\\ -1 & p \equiv 0,2 \pmod 3 \end{array} \right.$$ The closest (certainly wrong) statement of the Euler product I could come up with is: $$ \sum'_{(a,b) \in \mathbb{Z}^2} \frac{1}{(a^2 - 3b^2)^s} = \prod_{p \equiv 1 \pmod 3} \frac{1}{1 - \frac{1}{p^{2s} }} \prod_{p \equiv 0,2 \pmod 3} \frac{1}{1 - \frac{1}{p^{2s} }} $$ This is the progress that I have so far; I hope I have given the idea of what I am looking for. I appreciate advice or corrections.
Dedekind zeta functions of real quadratic number fields are more complicated. Here is how one shows that $$\zeta_{\mathbb{Q}(\sqrt{3})}(s)= \sum_{I \subset \mathbb{Z}[\sqrt{3}]} N(I)^{-s}= \!\!\!\!\!\!\!\!\!\!\!\! \sum_{n+m\sqrt{3} \in \mathbb{Z}[\sqrt{3}]^*/\mathbb{Z}[\sqrt{3}]^\times}\!\!\!\!\!\! \!\!\! N(n+m\sqrt{3})^{-s} = \!\!\!\! \!\!\!\!\!\!\sum_{n+m\sqrt{3} \in \mathbb{Z}[\sqrt{3}]^*/(2-\sqrt{3})^\mathbb{Z}}\!\!\!\!\!\! |n^2-3m^2|^{-s} $$ still comes from a theta function, allowing one to obtain the analytic continuation and the functional equation. Let for $(x,y) \in (0,\infty)^2$ $$\vartheta(x,y) = \!\!\!\!\!\!\! \sum_{n+m\sqrt{3} \in \mathbb{Z}[\sqrt{3}]^*/(2-\sqrt{3})^\mathbb{Z}}\!\!\!\!\!\! \!\!\! e^{- x |n-m \sqrt{3}|^2-y |n+m \sqrt{3}|^2}$$ $$\Theta(x,y) = \sum_{n+m\sqrt{3} \in \mathbb{Z}[\sqrt{3}]^*} e^{-x |n-m \sqrt{3}|^2-y |n+m \sqrt{3}|^2} = 2\sum_{k=-\infty}^\infty \vartheta(x|2-\sqrt{3}|^{2k},y|2+\sqrt{3}|^{2k})$$ Note $\int_0^\infty x^{s-1} e^{-ax}dx = a^{-s} \Gamma(s)$ means $$\iint_{(0,\infty)^2} (xy)^{s-1} \vartheta(x,y)dxdy =\Gamma(s)^2 \!\!\!\!\!\!\!\!\!\!\!\! \sum_{n+m\sqrt{3} \in \mathbb{Z}[\sqrt{3}]^*/(2-\sqrt{3})^\mathbb{Z}}\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! |n-m \sqrt{3}|^{-2s}|n+m \sqrt{3}|^{-2s} = \Gamma(s)^2 \zeta_{\mathbb{Q}(\sqrt{3})}(2s) $$ The claim is that there is some simply connected subset $S \subset (0,\infty)\times (0,\infty)$ such that $$(0,\infty)\times (0,\infty) = \bigcup_{k=-\infty}^\infty S \cdot ( |2-\sqrt{3}|^{2k},|2+\sqrt{3}|^{2k})$$ where $\cdot$ means $(a,b) \cdot (c,d) = (ac,bd)$. Proof: take $T$ to be a strip in the plane that is a fundamental domain for $\frac{\mathbb{R}^2 }{ \mathbb{Z}\ (\log |2-\sqrt{3}|^2,\log |2+\sqrt{3}|^2)}$, then $S = \exp(T)$. One can check that $$\iint_{(0,\infty)^2} (xy)^{s-1} \vartheta(x,y)dxdy=\frac12\iint_S (xy)^{s-1} \Theta(x,y)dxdy$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2560960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Kernel of injective group homomorphism If $\phi$ is a homomorphism from the group $G$ with identity $e$ to the group $G'$ with identity $e'$, then $\phi$ is injective if and only if $\ker\phi=$? I think $\ker\phi = \{x\in G:\phi(x)=1\}$, because you are looking at an identity. Then is it all reals? $0$ does not work, because if $x=0$ then $e'x=0$. But then I realize it is injective; how does that affect the kernel?
A homomorphism $\phi$ is injective iff its kernel is $\{e\}$. To prove this, suppose $\phi$ is injective. Then $\phi(x) = e' = \phi(e)$ means that $x=e$. Thus $\ker\phi = \{e\}$. Now suppose $\ker\phi = \{e\}$. Then if $\phi(a) =\phi(b)$, it means that $\phi(ab^{-1}) = e'$, i.e. $ab^{-1} = e$, and thus $a=b$. Thus $\phi$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find $\frac{d^n}{dx^n} e^x\cos x$ How can I get a formula for the n-th derivative of this function? I know that it cycles every 4 derivatives with a factor of $-4$. $e^x(\cos x-\sin x) \to e^x(-2\sin x) \to -2e^x(\sin x+\cos x) \to -4e^x\cos x$
You can use the addition formula: $$\cos x\cos y-\sin x\sin y=\cos(x+y)$$ Hence: $$y=e^x\cos x\\ y'=e^x\cos x-e^x\sin x=\sqrt2\cdot e^x\left(\frac1{\sqrt2}\cos x-\frac1{\sqrt2}\sin x\right)=\\ \sqrt2\cdot e^x\left(\cos \frac{\pi}4\cos x-\sin \frac{\pi}4\sin x\right)=2^{\frac12}e^x\cos \left(x+\frac{\pi}{4}\right)\\ y''=2^{\frac22}e^x\cos \left(x+\frac{2\pi}{4}\right)\\ \vdots\\ y^{(n)}=2^{\frac n2}e^x\cos \left(x+\frac{n\pi}{4}\right).$$
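A numerical spot check of the closed form, comparing $y^{(n+1)}$ against a central-difference derivative of $y^{(n)}$ (my own sketch):

```python
import math

def y(n, x):
    # claimed n-th derivative of e^x cos x
    return 2**(n / 2) * math.exp(x) * math.cos(x + n * math.pi / 4)

# y(0, x) is the original function
assert abs(y(0, 0.7) - math.exp(0.7) * math.cos(0.7)) < 1e-12

# check y(n+1, x) against a central-difference derivative of y(n, x)
h = 1e-6
for n in range(6):
    for x in (-1.3, 0.0, 0.7, 2.1):
        numeric = (y(n, x + h) - y(n, x - h)) / (2 * h)
        assert abs(numeric - y(n + 1, x)) < 1e-4
```

Note also that $y^{(4)}(x)=-4\,e^x\cos x$, matching the cycle noted in the question.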
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Set of functions from empty set to $\{0,1\}$ What does the set of all functions $\{f \,|\, f: \emptyset \to \{0,1\}\}$ look like? Is it empty or does it contain infinitely many functions? Does the definition $f: \emptyset \to \{0,1\}$ make sense at all? I was wondering because we know that the two sets $\{0,1\}^X$ and $\mathcal{P}(X)$ have the same cardinality. But this is only true if $X$ is non-empty, right?
Yes, it makes sense. There is one and only one function from $\emptyset$ into $\{0,1\}$, which is the empty function. Think about the definition of function as a set of ordered pairs to see why.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Greens theorem on the circle $x^2 + y^2 = 16$. Use Greens theorem to calculate the area enclosed by the circle $x^2 + y^2 = 16$. I'm confused on which part is $P$ and which part is $Q$ to use in the following equation $$\iint\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right){\rm d}A$$
Hint: You want $$\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}=1$$ so the integral is $$\iint_{x^{2}+y^{2}\leq 16}{\rm d}A$$ Can you find $P$ and $Q$ that satisfy this? Notice that there is more than one choice.
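With the choice $P=0$, $Q=x$, the area becomes $\oint_C x\,dy$; a numerical sketch over the parametrization $x=4\cos t$, $y=4\sin t$ recovers $16\pi$:

```python
import math

# choose P = 0, Q = x, so dQ/dx - dP/dy = 1 and the area equals the line
# integral of x dy around the circle x = 4 cos t, y = 4 sin t
N = 20000
total = 0.0
for i in range(N):
    t = 2 * math.pi * (i + 0.5) / N
    x = 4 * math.cos(t)
    dy = 4 * math.cos(t) * (2 * math.pi / N)    # dy = 4 cos(t) dt
    total += x * dy

assert abs(total - 16 * math.pi) < 1e-6         # area of a radius-4 disk
```

The choice $P=-y$, $Q=0$, or the symmetric $P=-y/2$, $Q=x/2$, gives the same value.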
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find out all solutions for the system Given the system $$ \left[ \begin{array}{ccc|c} x_1&x_2&x_3&k\\ x_1&x_2&kx_3&1\\ x_1&kx_2&x_3&1\\ kx_1&x_2&x_3&1\\ \end{array} \right] $$ I tried to solve this... It looks simple but I found a problem at the end... $$ \left[ \begin{array}{ccc|c} 1&1&1&k\\ 1&1&k&1\\ 1&k&1&1\\ k&1&1&1\\ \end{array} \right] $$ $$ \left[ \begin{array}{ccc|c} 1&1&1&k\\ 0&0&k-1&1-k\\ 0&k-1&0&1-k\\ k-1&0&0&1-k\\ \end{array} \right] $$ $$ \left[ \begin{array}{ccc|c} 1&1&1&k\\ 0&0&1& \frac{1-k}{k-1}\\ 0&1&0& \frac{1-k}{k-1}\\ 1&0&0& \frac{1-k}{k-1}\\ \end{array} \right] $$ Finally, $$ \left[ \begin{array}{ccc|c} 1&0&0& -1\\ 0&1&0& -1\\ 0&0&1& -1\\ 0&0&0&k+3\\ \end{array} \right] $$ There is no way to get infinitely many solutions. However, when I put the system into an online calculator, it reported infinitely many solutions for $k=4$ and other numbers. What's wrong with my work?
The system has solutions only if $\det(A|B)=0$, otherwise $\text{rank }(A|B)=4>\text{rank }(A)$ that is $k^4-6 k^2+8 k-3=0\to (k-1)^3 (k+3)=0$ for $k=-3$ and $k=1$ for $k=-3$ we get the system $\begin {cases} x+y+z=-3\\ x+y-3 z=1\\ x-3 y+z=1\\ -3x+y+z=1\\ \end{cases} $ which has solution $(-1,-1,-1)$ for $k=1$ we get $\begin {cases} x+y+z=1\\ x+y+ z=1\\ x+y+z=1\\ x+y+z=1\\ \end{cases} $ that is $x+y+z=1$ which gives infinite solutions $(t,u,1-t-u)$ Hope this helps
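A quick check of both cases (my own sketch):

```python
def residuals(k, x, y, z):
    return [x + y + z - k,
            x + y + k * z - 1,
            x + k * y + z - 1,
            k * x + y + z - 1]

# k = -3: the unique solution (-1, -1, -1)
assert all(r == 0 for r in residuals(-3, -1, -1, -1))

# k = 1: infinitely many solutions (t, u, 1 - t - u)
for t, u in [(0.0, 0.0), (2.5, -1.0), (-3.0, 7.0)]:
    assert all(abs(r) < 1e-12 for r in residuals(1, t, u, 1 - t - u))

# det(A|B) factors as (k - 1)^3 (k + 3)
for k in (-2, 0, 2, 5):
    assert k**4 - 6 * k**2 + 8 * k - 3 == (k - 1)**3 * (k + 3)
```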
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find all the equilibrium points of this second-order equation $x''+2x'=3x-x^3$ Find all the equilibrium points of this second-order equation $x''+2x'=3x-x^3$ I know that you must find the roots of the equation, but when the equation isn't equal to $0$ I am confused about where to begin. Thank you for any help! Edit: sorry, I did leave out a prime in there. It is fixed now.
The equilibrium points are such that $x'=x''=0$. Hence $x=0,\pm\sqrt3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Integers and Sequences Is there a sequence of integers such that for every $k$ it contains an arithmetic subsequence of length $k$, but has no infinitely long arithmetic subsequence?
There's two approaches to constructing a sequence like this. First, we might try to space out blocks of the sequence sufficiently far. For example, $$ 1, \qquad 3,4, \qquad 9,10,11, \qquad 23,24,25,26, \qquad 53,54,55,56,57, \dots $$ where, whenever a block ends at $x$, the next block begins at $2x+1$. In particular, the gap between the $k^{\text{th}}$ block and the $(k+1)^{\text{th}}$ is wider than the range spanned by the first $k$ blocks. If an arithmetic progression includes at least two terms from the first $k$ blocks, its difference is less than that range, so it can never jump the gap. So there is no infinite arithmetic progression, but there are obviously finite ones of arbitrary length (each block by itself). Second, we can apply a "just-do-it" construction. For simplicity, we'll consider only increasing sequences of integers. Enumerate all the infinite arithmetic progressions: this is possible, because there are only countably many pairs $(a,d)$ where $a$ is the first term and $d$ is the difference. Now, to generate our sequence, we alternate the following steps: * *Find a term, $x$, of the $k^{\text{th}}$ infinite arithmetic progression which is larger than any term of our current sequence, and write down $x+1$ (ensuring that we will never write down $x$). *Write down $x+2, x+3, \dots, x+k$, ensuring that we have a finite arithmetic progression of length $k$. For each infinite arithmetic progression, eventually we will skip one of its terms, and therefore no infinite progression is a subsequence.
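The first construction can be sketched in code; the final assertion mirrors the argument that each gap exceeds the range spanned by all earlier blocks:

```python
def blocks(num_blocks):
    out = []
    start = 1
    for k in range(1, num_blocks + 1):
        block = list(range(start, start + k))   # k consecutive integers
        out.append(block)
        start = 2 * block[-1] + 1               # next block begins at 2x + 1
    return out

bs = blocks(8)
assert bs[0] == [1] and bs[1] == [3, 4] and bs[2] == [9, 10, 11]
assert bs[3] == [23, 24, 25, 26]

# the k-th block is an arithmetic progression of length k (difference 1),
# and each gap is wider than the range spanned by everything before it
for k in range(1, len(bs)):
    gap = bs[k][0] - bs[k - 1][-1]
    span = bs[k - 1][-1] - bs[0][0]
    assert gap > span
```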
{ "language": "en", "url": "https://math.stackexchange.com/questions/2561968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find a sequence $f_n$ so that $\int_0 ^1 |f_n(x)| = 2$ and $\lim_{n \to \infty} f_n(x) = 1$. The entire problem reads: (a) Find a sequence $f_n: [0,1] \rightarrow \mathbb{R}$ such that $\int_0 ^1 |f_n(x)| = 2$ for all $n \in \mathbb{N}$ and $\lim_{n \to \infty} f_n(x) = 1$ for all $x \in [0,1]$. (b) If the $f_n$ are as in part (a), then prove $\lim_{n \to \infty} \int_0 ^1 |f_n(x) -1| dx = 1$. I'm having trouble finding the proper $f_n$ to fit this situation. Any help is appreciated. Once I find $f_n$, I may need tips for part (b) as well, but I think I could figure that part out given (a). Thanks in advance.
Choose $$f_n(x):=\begin{cases}4n^2x+1 & \text{for } x\in \left[0,\frac{1}{2n}\right]\\[1ex] -4n^2x+1+4n & \text{for } x\in \left[\frac{1}{2n},\frac{1}{n}\right]\\[1ex] 1 & \text{otherwise.}\end{cases}$$
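A numerical check of the required properties (my own sketch):

```python
def f(n, x):
    if x <= 1 / (2 * n):
        return 4 * n * n * x + 1
    if x <= 1 / n:
        return -4 * n * n * x + 1 + 4 * n
    return 1.0

def midpoint_integral(g, a, b, steps=100000):
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

for n in (3, 10, 50):
    # f_n >= 0, and its integral over [0,1] is 2
    assert abs(midpoint_integral(lambda x: abs(f(n, x)), 0, 1) - 2) < 1e-3
    # consistent with part (b): the integral of |f_n - 1| is 1
    assert abs(midpoint_integral(lambda x: abs(f(n, x) - 1), 0, 1) - 1) < 1e-3

# pointwise limit is 1: for fixed x > 0, f_n(x) = 1 as soon as 1/n < x
assert f(1000, 0.25) == 1.0
```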
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Show that $f(x)=\begin{cases}1/b& x=\frac{a}{b}\in [0,1],a,b\in\mathbb Z\\ 0&x\in \mathbb R\backslash \mathbb Q \cap [0,1]\end{cases}$ is integrable. Show that $$f(x)=\begin{cases}\frac{1}{b}& x=\frac{a}{b}\in [0,1],a,b\in\mathbb Z\\ 0&x\in \mathbb R\backslash \mathbb Q \cap [0,1]\end{cases}$$ is integrable on $[0,1]$. Let $$S_\sigma =\sum_{i=1}^n m_i(x_{i+1}-x_i)\quad \text{and}\quad S^{\sigma }=\sum_{i=1}^n M_i(x_{i+1}-x_i),$$ where $$M_i=\max_{ [x_i,x_{i+1}]}f,\quad \text{and}\quad m_i=\min_{[x_i,x_{i+1}]}f.$$ I have to show that $\overline{S}=\underline{S}$ where $$\overline{S}=\sup_{\sigma }S_\sigma \quad \text{and}\quad \underline{S}=\inf_\sigma S^\sigma .$$ Obviously $S_\sigma =0$ for every partition $\sigma $, and thus $\overline{S}=0$. But for $\underline{S}$ I have a problem: I just can't find $M_i$. But maybe something like this: I consider $ \sigma _n : 0<\frac{1}{n}<...<\frac{n-1}{n}<1$. Then I would say that $M_i\leq \frac{1}{i}$, and thus $$S^{\sigma _n}\leq \frac{1}{n}\sum_{k=1}^n\frac{1}{k}\underset{n\to \infty }{\longrightarrow }0,$$ and thus the claim follows. Does that work?
$f$ is integrable iff for every $\epsilon >0$ there exists a partition $P$ such that $S^{\sigma}-S_{\sigma}<\epsilon$. Since the irrationals are dense, $m_i=0$ on every subinterval, so $S_{\sigma}=0$ for every partition, and it suffices to make $S^{\sigma}$ small. Fix $\epsilon>0$. Writing $x=a/b$ in lowest terms, $f(x)=1/b\geq \epsilon/2$ only for $b\leq 2/\epsilon$, so there are only finitely many such points in $[0,1]$; call them $z_1,\dots,z_N$. Choose a partition $P$ in which the subintervals containing some $z_j$ have total length less than $\epsilon/2$. On those subintervals $M_i\leq 1$, so they contribute less than $\epsilon/2$ to $S^{\sigma}$; on all remaining subintervals $M_i<\epsilon/2$, so they also contribute less than $\epsilon/2$. Hence $S^{\sigma}<\epsilon$, and $f$ is integrable.
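One can also watch the upper Darboux sums over uniform partitions shrink numerically (my own sketch; the least denominator of a rational in each closed subinterval is found with exact integer arithmetic):

```python
def upper_sum(n):
    # upper Darboux sum of f over the uniform partition of [0,1] into n
    # pieces: sup of f on [i/n, (i+1)/n] is 1/b, where b is the least
    # denominator of a rational in that closed subinterval
    total = 0.0
    for i in range(n):
        b = 1
        while True:
            lo = -((-i * b) // n)          # ceil(i*b/n)
            hi = ((i + 1) * b) // n        # floor((i+1)*b/n)
            if lo <= hi:                   # some a/b lies in the subinterval
                break
            b += 1
        total += (1.0 / b) * (1.0 / n)
    return total

s10, s1000 = upper_sum(10), upper_sum(1000)
assert s1000 < s10          # upper sums shrink as the partition refines
assert s1000 < 0.1          # and can be made as small as we like
```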
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the domain and range of $f(x,y)=x^4/(x^4 + y^2)$? I believe the domain is: $x$ and $y$ cannot both equal $0$; or: $x$ can be any real number but $y$ cannot equal $0$. As for the range, I just have no approach to solving it. Are we just supposed to know because it's a rational function, or is there a proper approach to finding the range? I know that the function is always positive due to the terms having squares, so it must be greater than $0$, but how do we find the end limit?
For the domain note that $$ x^4 + y^2=0\iff(x,y)=(0,0). $$ Hence the domain is $\mathbb{R}^2\setminus \{(0,0)\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find a closed form expression for the following summation? How to find a closed form expression for the following summation? $$ \sum_{m\geq 0} \frac{m r^m \Gamma(m+c)}{\Gamma(m+1)} $$
Since $\Gamma(m+c)=\int_{0}^{+\infty}t^{m+c-1}e^{-t}\,dt$, for any $r<1$ the given series can be written as $$ \int_{0}^{+\infty}\sum_{m\geq 0}\frac{m r^m t^{m+c-1}}{m!} e^{-t}\,dt =\int_{0}^{+\infty} r t^c e^{(r-1)t}\,dt=\color{red}{\frac{r\,\Gamma(c+1)}{(1-r)^{c+1}}}.$$
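A numerical check for a few values of $r<1$ and $c$; the terms are built iteratively from the ratio $\frac{\text{term}_{m+1}}{\text{term}_m}=\frac{r(m+c)}{m}$ to avoid overflowing the gamma function at large $m$:

```python
from math import gamma

def series_sum(r, c, terms=400):
    # term_m = m * r**m * Gamma(m+c) / Gamma(m+1); built iteratively,
    # starting from term_1 = r * Gamma(1 + c)
    term = r * gamma(1 + c)
    total = term
    for m in range(1, terms):
        term *= r * (m + c) / m      # ratio term_{m+1} / term_m
        total += term
    return total

def closed_form(r, c):
    return r * gamma(c + 1) / (1 - r)**(c + 1)

for r, c in [(0.2, 0.5), (0.5, 1.7), (0.3, 3.0)]:
    assert abs(series_sum(r, c) - closed_form(r, c)) < 1e-8
```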
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to calculate the limit $\lim_{x\to +\infty}x\sqrt{x^2+1}-x(1+x^3)^{1/3}$ which involves rational functions? Find $$\lim_{x\to +\infty}x\sqrt{x^2+1}-x(1+x^3)^{1/3}.$$ I have tried rationalizing but there is no pattern that I can observe. Edit: So we forget about the $x$ that is multiplied to both the functions and try to work with the expression $(x^2+1)^{1/2}-(x^3+1)^{1/3}.$ Thus we have, $$(x^2+1)^{1/2}-(x^3+1)^{1/3}=\frac{((x^2+1)^{1/2}-(x^3+1)^{1/3})((x^2+1)^{1/2}+(x^3+1)^{1/3})}{(x^2+1)^{1/2}+(x^3+1)^{1/3}}$$ $$=\frac{x^2+1-(x^3+1)^{2/3}}{(x^2+1)^{1/2}+(x^3+1)^{1/3}}=\frac{(x^2+1)^2-(x^3+1)^{4/3}}{(x^2+1)^{3/2}+(x^2+1)(x^3+1)^{1/3}+x^3+1+(x^3+1)^{2/3}(x^2+1)}=??$$
Hint: Your expression equals $$x^2[(1+1/x^2)^{1/2} - (1+1/x^3)^{1/3}].$$ Now use the fact that $(1+h)^p = 1 + ph +o(h)$ as $h\to 0.$ (This fact is equivalent to the statement that the derivative of $(1+x)^p$ at $x=0$ is $p.$)
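Numerically the expression does approach $1/2$; carrying the expansion one term further predicts an error of order $1/(3x)$ (my own sketch):

```python
def f(x):
    return x * (x * x + 1)**0.5 - x * (1 + x**3)**(1/3)

# the expansion gives f(x) = 1/2 - 1/(3x) + O(1/x^2) as x -> infinity
for x in (1e2, 1e3, 1e4):
    assert abs(f(x) - 0.5) < 4 / x

assert abs(f(1e4) - 0.5) < 1e-3
```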
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
On the volume of a non-right tetrahedron and the integral of a linear function over it I've solved quite a few of these questions for tetrahedrons with a 90 degree angle, but I am not sure how to approach a tetrahedron without a right angle. I am unsure how to find the triple integral's bounds; the calculation itself is no problem. $ \int_B (x + y + 2z) dV$ where B is the tetrahedron with vertices $(0, 0, 0)$, $(0, 2, 0)$, $(1, 3, 1)$ and $(1, 1, 0)$. Is the way of finding the bounds the same for non-right-angle tetrahedrons?
Actually the integration of a linear function on a tetrahedron with a vertex at the origin does not require to find the volume of such tetrahedron, but we may solve both problems at once. $\mathbb{R}^3$ is spanned by $v_1=(0,2,0)^T$, $v_2=(1,3,1)^T$ and $v_3=(1,1,0)^T$ and the wanted volume is just $$ \frac{1}{6}\left|\det M\right|=\frac{1}{6}\left|\det\begin{pmatrix}0 & 1 & 1 \\ 2 & 3 & 1 \\ 0 & 1 & 0\end{pmatrix}\right|=\frac{1}{3} $$ by a Laplace expansion along the first column. Each point of the given tetrahedron $T$ is a linear combination of $v_1,v_2,v_3$ with non negative coefficients, whose sum does not exceed $1$. If we enforce the substitution $$(x,y,z)^T = M (X,Y,Z)^T$$ we have that $$\iiint_{T}(x+y+2z)\,d\mu = 2\iiint_{\substack{X,Y,Z\in[0,1]\\X+Y+Z\leq 1}}2X+6Y+2Z\,d\mu $$ and the integration problem over a generic tetrahedron boils down to the integration problem over a trirectangular tetrahedron. By the linearity of the integral and symmetry, this further simplifies into just computing $$ \iiint_{\substack{X,Y,Z\in[0,1]\\X+Y+Z\leq 1}}X\,d\mu = \int_{0}^{1}X\cdot\frac{1}{2}(1-X)^2\,dX=\frac{1}{24}.$$ It follows that the value of the wanted integral is $\frac{2(2+6+2)}{24}=\color{red}{\frac{5}{6}}$.
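A midpoint-rule check of the final value, using the same substitution (my own sketch):

```python
# midpoint-rule check of the integral of (x + y + 2z) over the tetrahedron:
# with (x, y, z) = M (X, Y, Z), the integrand becomes 2*(2X + 6Y + 2Z) on
# the standard simplex X, Y, Z >= 0, X + Y + Z <= 1
N = 60
h = 1.0 / N
est = 0.0
for i in range(N):
    for j in range(N):
        for k in range(N):
            X, Y, Z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
            if X + Y + Z <= 1.0:
                est += 2 * (2 * X + 6 * Y + 2 * Z) * h**3

assert abs(est - 5/6) < 0.01
```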
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Diffusion Equation with a noncontinuous Auxiliary Condition I am currently trying to show that if we are given the Diffusion Equation $u_t=cu_{xx}$ with Auxiliary Condition $u(x,0)=g(x)$, where $g$ is not continuous at a point $x_0$, we get the statement: $$\lim_{t\rightarrow 0^+}u(x_0,t)=\frac{1}{2}[g(x_0+)+g(x_0-)]$$ where $g(x_0+)=\lim_{\tau\rightarrow x_0^+} g(\tau)$, and an analogous statement holds for $g(x_0-)$. I proved this fine when $g$ is continuous, but I don't know where to begin with this one. We denote $u(x,t)$ as: $$u(x,t) = \int_{-\infty}^\infty \Phi(x-y)g(y) dy$$ where $\Phi$ is the Heat Kernel. Now I proved it analytically (the case where $g$ is continuous), but if someone could show me how to relate the noncontinuous case to the continuous case I would appreciate it. The way I approached the continuous case was showing that $\forall\epsilon$ $\exists\delta$ such that if $0<t<\delta$ then we had: $$\left|\int_{-\infty}^\infty \Phi(x-y)g(y) dy\right|<\epsilon$$ The proof was involved, and thus I am not going to restate it here. Also, I'm not expecting a full-length answer for this; I just want to know where the similarities lie. Thank you
First show that the result holds for the function $$ H(x)=\begin{cases}0 & \text{if }x<0,\\1 & \text{if }x\ge0.\end{cases} $$ Then consider the function $f(x)=g(x)+c\,H(x-x_0)$, where the constant $c$ is chosen so that $f$ is continuous at $x_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can the interval category be expressed as a colimit? A category with two objects and no non-identity morphisms can be expressed as the colimit of a diagram consisting of two copies of the trivial category. Naively, I would think that the interval category might be the colimit (or 2-colimit) of a diagram consisting of two copies of the trivial category connected by the identity functor (call this colimit $C$). However, the definition of colimit requires that everything commute. So, we have two morphisms $0, 1 : \text{pt} \to C$, and a commutative triangle $0 \cong 1 \circ \text{id}$, which tells me that $0 \cong 1$, even though, for the interval category, there should merely be a natural transformation from $0$ to $1$. So, is it even possible to express the interval category as a (co)limit, or is it something that we just have to assume is there? Is there a weaker sort of limit that would work? Is there a different universal property I should be looking at?
$\DeclareMathOperator*{\colim}{colim}$ You can obtain the category $[1]=\{0\to 1\}$ by joining two copies of the terminal category $[0]=\{0\}$. A join is a colimit: given simplicial sets $X,Y$ you have to compute $$ \colim_{[n]\to [p+ q+1]} X_p\times Y_q $$ or, in a more elegant way, the convolution $$ \int^{p,q} X_p\times Y_q \times \Delta(-,[p]\oplus [q]) $$ where $\oplus$ is the ordinal sum $[p]\oplus [q]:=[p+q+1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2562927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bayesian network I have the following question and I would like to know if anyone can answer it. For a given Bayesian network where $P(a) =.6, P(b|a) =.8, P(b|-a)=.4, P(c|a)=.4$ and $P(c|-a) = .3$, compute $P(c|b)$. Note that $a, -a, b$, etc. are propositions: e.g.) $a \leftrightarrow A = true , -a \leftrightarrow A = false$. I know that $P(a\land b\land c) = P(b|a)P(c|a)P(a)$ but I don't know how to solve for $P(c|b)$. Thank you for your help in advance.
Begin with the definition of conditional probability. $\hspace{25ex}\mathsf P(c\mid b) = \dfrac{\mathsf P(b\cap c)}{\mathsf P(b)}$ Now apply the law of total probability, $\hspace{25ex}\mathsf P(c\mid b) = \dfrac{\mathsf P(a\cap b\cap c)+\mathsf P(\neg a\cap b\cap c)}{\mathsf P(a\cap b)+\mathsf P(\neg a\cap b)}$ Then use the relations from the DAG (the directed acyclic graph).
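If it helps to see the hints carried through numerically, here is a small sketch. It assumes the usual DAG for this exercise, $a \to b$ and $a \to c$, so that $b$ and $c$ are conditionally independent given $A$; if your network is different, the factorization changes accordingly.

```python
# Computing P(c|b), assuming the DAG a -> b and a -> c, so that
# b and c are conditionally independent given a.
p_a = 0.6
p_b_a, p_b_na = 0.8, 0.4   # P(b|a), P(b|-a)
p_c_a, p_c_na = 0.4, 0.3   # P(c|a), P(c|-a)

p_na = 1 - p_a

# Law of total probability for the denominator P(b):
p_ab = p_a * p_b_a             # P(a, b)
p_nab = p_na * p_b_na          # P(-a, b)
p_b = p_ab + p_nab             # 0.64

# Numerator P(b, c), using conditional independence given A:
p_abc = p_a * p_b_a * p_c_a        # P(a, b, c)
p_nabc = p_na * p_b_na * p_c_na    # P(-a, b, c)
p_bc = p_abc + p_nabc              # 0.24

p_c_given_b = p_bc / p_b
print(p_c_given_b)    # approximately 0.375
```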
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the sequence $S_n := \sum_{k=0}^{n} \lambda_k A^k$ has a limit in $ L(X)$. I have a simple question, but I do not know what to do with it: The question is as follows: Let $X$ be a Banach space. Consider an integral operator $A \in L(X)$ and a function $\varphi(t) = \sum_{k=0}^{+\infty} \lambda_k t^k$ for $\lambda_k \in \mathbb{R}$. Where the series converges on the whole $\mathbb{R}$. Prove that the sequence $S_n := \sum_{k=0}^{n} \lambda_k A^k$ has a limit $\varphi(A) \in L(X)$, when $n \to +\infty.$ The question looks simple, but I cannot understand it! It seems something is missing? Or we just need to show that $\|\sum_{k=0}^{n} \lambda_k A^k - \varphi(A) \| $ goes to zero in operator norm? Can someone please let me know how can we show it? Thanks!
The power series $\varphi(t)=\sum_{k=0}^{+\infty} \lambda_k t^k$ converges absolutely at each $t \in \mathbb R$. Put $S_n:=\sum_{k=0}^{n} \lambda_k A^k$. For $n,m \in \mathbb N$ with $m>n$ we have $||S_m-S_n||= ||\sum_{k=n+1}^{m} \lambda_k A^k|| \le \sum_{k=n+1}^{m} |\lambda_k| \cdot ||A||^k=|a_m-a_n|$, where $a_n:=\sum_{k=0}^{n} |\lambda_k| \cdot ||A||^k$. Since the power series converges absolutely at $t=||A||$, the sequence $(a_n)$ is convergent, hence a Cauchy sequence. This shows that $(S_n)$ is a Cauchy sequence in $L(X)$. Since $X$ is a Banach space, we have that $L(X)$ is a Banach space. Conclusion ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Euler method with infinite gradient at initial value The title itself is self explanatory - I am trying to numerically solve an ODE with an initial value that has an infinite gradient. It seemed problematic to me and I am not certain as to how I should approach this. e.g. $\frac {dy}{dx} = \frac y{\sqrt x} , y(0)=1$ (Obviously this can be solved analytically but I would like to know if there is any numerical method that tackles problems like this) Thank you very much!
Others can probably give a more "conventional" approach but one thing you can do, because $y'(x)$ is independent of $y(x)$, is take the following approach for the first step (at least). For the initial slope think rather angle. You want to proceed with a certain angle but you can't use $\theta_1=\frac{\pi}{2}$... which is obviously no good. Instead take as angle the mid-angle of $\theta_1=\pi/2$ and the angle of the curve at $x=h$: $$\theta_2:=\tan^{-1}\left(y'(h)\right)=\tan^{-1}\left(\frac{1}{\sqrt{h}}\right),$$ and so take as initial angle: $$\theta_0=\frac{\theta_1+\theta_2}{2}=\frac{\frac{\pi}{2}+\tan^{-1}\left(\frac{1}{\sqrt{h}}\right)}{2},$$ and so an initial slope of: $$m_0:=\tan(\theta_0).$$ From here you can do normal Euler or perhaps keep it going with slope $$m_k=\frac{y'(x_k)+y'(x_{k+1})}{2}.$$ This is an adaptation of Heun's Method. With a step-size of $h=0.1$ this gives an error for $y_1\approx y(x_1)$ of the order of $0.015$.
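A rough sketch of this idea in code. This is not the answer above verbatim: the continuation after the angle-averaged first step is plain Euler rather than the Heun-like averaged slopes, and the exact solution $y=e^{2\sqrt x}$ is used only as a final sanity check.

```python
import math

# y' = y / sqrt(x), y(0) = 1; exact solution y = exp(2*sqrt(x)).
h = 1e-4
n = int(round(1 / h))          # number of steps to reach x = 1

# First step: the slope at x = 0 is infinite, so average the *angles*
# theta_1 = pi/2 (vertical) and theta_2 = atan(y'(h)) = atan(1/sqrt(h)),
# then convert the averaged angle back into a usable slope.
theta0 = (math.pi / 2 + math.atan(1 / math.sqrt(h))) / 2
y = 1.0 + h * math.tan(theta0)     # approximation of y(h)

# Plain Euler from x = h onwards, slope taken at the left endpoint.
for k in range(1, n):
    y += h * y / math.sqrt(k * h)

print(y, math.exp(2))          # numerical vs exact y(1)
```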
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove if $\frac{20b}{19b-20}=a$ is possible or not, where $a$ and $b$ are positive integers? This is a follow up question to the one posted here. (The method posted here works for all cases where there isn't a coefficient in front of $b$ in the denominator) Here is the problem: $$\frac{20b}{19b-20}=a$$ $a$ and $b$ have to be positive integers. Prove whether the equation is possible or not possible. By the way, the answer is that it isn't possible. What I've attempted My first method was to check if $20b$ is divisible by $1$ or $5$. This didn't work, as you will see. We know $19b-20$ will never be $1$. I tried to check whether $19b - 20$ will be a multiple of $5$; however this approach was wrong, as it may be a multiple of $5$, but that does not mean that it is a factor of $20b$. My second attempt was playing with the equation algebraically. I got to $20(b+a)=19(ba)$. This shows that $ba$ has to be even; however it does not prove whether the equation is possible or not. I don't know how to approach these problems that require you to disprove whether an expression is divisible, especially if the variables are present in both the numerator and the denominator. Could someone please provide some assistance in proving/disproving this? Thanks for your time.
If $b=1$, then we get $a=-20$ which is not a positive integer. In the following, $b\ge 2$. We have $$a=\frac{20b}{19b-20}=\frac{19b-20+b+20}{19b-20}=1+\frac{b+20}{19b-20}$$ So, $\frac{b+20}{19b-20}$ has to be a positive integer. Then, we have to have $$\frac{b+20}{19b-20}\ge 1\implies b+20\ge 19b-20\implies 18b\le 40\implies b\le 2$$ If $b=2$, then we get $a=\frac{20}{9}$ which is not an integer.
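Not a proof, but an easy brute-force sanity check of this conclusion (the argument above shows $1<a<2$ for every $b\ge 3$, so only small $b$ could ever work; the scan covers a much larger range anyway):

```python
# Scan positive integers b and record any b making a = 20b/(19b - 20)
# a positive integer.  The argument above says there are none.
solutions = []
for b in range(1, 10_000):
    den = 19 * b - 20
    if den != 0 and (20 * b) % den == 0 and (20 * b) // den >= 1:
        solutions.append((b, (20 * b) // den))
print(solutions)   # []
```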
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\operatorname{Ext}^1(M,R/m)=0$ iff $M$ is projective In this answer it is suggested that over a commutative local ring, a module $M$ is projective iff $\operatorname{Ext}^1_R(M,R/m)=0$. A similar result holds for flatness and Tor. In the case of Tor, I can find a proof in Robert Ash's Commutative Algebra, which involves specific isomorphisms with tensor. But I cannot prove the case of Ext. Can anyone show me the proof? Thank you. FYI, the "proof" in the next answer in that link is quite vague to me, so it would be OK if you guys can clear things up here.
This is answered in the question you link to. It is clear that if $M$ is projective then $\text{Ext}_R^1(M,k)$ vanishes where $k= R/m$ is the residue field of $R$. To show that if $\text{Ext}_R^1(M,k)$ vanishes then $M$ is in fact free, you use the fact modules over local rings admit minimal free resolutions, i.e. there is a resolution $F\to M$ such that for each $i\geqslant 0$ the $R$-module $F_i$ is free, and such that for each $i\geqslant 1$ it holds that $dF_i \subseteq mF_{i-1}$. Now the complex $\hom_R(F,k)$ has zero differential since the resolution is minimal. Indeed, the differential acts by $df(x) = f(dx)$, but since $dx\in mF$ and $f$ is linear, $f(dx) \in mk = 0$. Then the homology at $\hom_R(F_1,k)$, which is $\text{Ext}_R^1(M,k)$, is zero. This means $\hom_R(F_1,k) = 0$, so $F_1=0$, which gives an exact sequence $0\to F_0\to M\to 0$, showing that $M$ is free, as desired. One way to think about the proof is the following: first, one shows that over a local ring $\hom_R(F,k) = 0 $ implies $F=0$ for $F$ free. Then, by dimension shifting and minimality, one has that $\text{Ext}_R^1(M,k) =\hom_R(F_1,k)=0$, showing that $M$ is free if and only if such Ext group vanishes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
When finding the maximum and minimum points of a function on an interval what do you do if the derivative is undefined at zero Say I have a function and I want to find the maximum and minimum on an interval. I would first differentiate the function and equate it to zero, then use this value and the two ends of the interval to find the max and min. If, however, the derivative is undefined when it equals zero, do I just scrap that as a value and use the bounds of the interval in the original function? Does this mean that when it's equal to zero it's a critical point?
The candidates for maximum and minimum of a continuous function on an interval are the zeros of the derivative, the endpoints, and the points where the derivative does not exist. Whether the term "critical point" includes a point where the function is not differentiable is a question of convention: most authors do not call these critical points, but some do.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the number of possible sequences(of any length) Given $x$ & $y$. Count the number of distinct sequences $a_1, a_2, \dots, a_n$ $(\forall  a_i \ge 1)$ consisting of positive integers such that $\gcd(a_1, a_2, \dots, a_n) = x$ and $\sum_{i=1}^n a_i =y$.
First, it is obvious that if $x\not | \ \ y$, then such sequence does not exist. Therefore we suppose $x | y$. We first enumerate the number of positive sequences $a_1, \cdots, a_n$ such that $x|a_i$ for all $i$ and $\sum_{i=1}^n a_i = y$. By taking $a_i' = \frac{a_i}{x}$ for each $i$ this means a positive sequence $a_1', \cdots, a_n'$ such that $\sum_{i=1}^n a_i' = \frac yx$. It is a combinatoric fact that there are $p(x, y, n) := \binom{\frac yx - 1}{n-1}$ of them. Note that among the sequences considered above, there are sequences with higher gcd. We denote by $g(x, y, n)$ the number of positive sequences $a_1, \cdots, a_n$ satisfying $\gcd(a_1, \cdots, a_n) = x$ and $\sum_{i=1}^n a_i = y$, then we know that $$ \sum_{\substack{{u > 0}\\{u | \frac yx}}} g(ux, y, n) = p(x, y, n) $$ By Möbius inversion formula, we see that $$ g(x, y, n) = \sum_{\substack{{u > 0}\\{u | \frac yx}}} \mu(u) p\left(ux, y, n\right) = \sum_{\substack{{u > 0}\\{u | \frac yx}}} \mu(u) \binom{\frac{y}{ux}-1}{n-1}, $$ where $$ \mu(u) = \begin{cases} 0, &\mbox{if } u \mbox{ is not square free};\\ (-1)^{\omega(u)}, &\mbox{if } u \mbox{ is a product of } \omega(u) \mbox{ distinct primes}, \end{cases} $$ is the Möbius function.
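If it is useful to check the formula, here is a small sketch (the helper names are my own; the Möbius function is implemented by trial division, and a brute-force counter is included for comparison on small cases):

```python
from math import comb, gcd
from functools import reduce
from itertools import product

def mobius(u: int) -> int:
    """Möbius function by trial division: 0 if not squarefree, else (-1)^omega(u)."""
    result, p = 1, 2
    while p * p <= u:
        if u % p == 0:
            u //= p
            if u % p == 0:       # p^2 | u: not squarefree
                return 0
            result = -result
        p += 1
    if u > 1:                    # one remaining prime factor
        result = -result
    return result

def g(x: int, y: int, n: int) -> int:
    """Number of positive n-sequences with sum y and gcd exactly x (formula above)."""
    if y % x != 0:
        return 0
    m = y // x
    return sum(mobius(u) * comb(m // u - 1, n - 1)
               for u in range(1, m + 1) if m % u == 0)

def g_brute(x: int, y: int, n: int) -> int:
    """Direct enumeration, for small inputs only."""
    return sum(1 for t in product(range(1, y + 1), repeat=n)
               if sum(t) == y and reduce(gcd, t) == x)

print(g(2, 12, 2), g_brute(2, 12, 2))   # both 2: the pairs (2,10) and (10,2)
```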
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to show that $\frac{dy}{dx}=\frac{dy}{d(x-c)}$? It seems intuitive to me that $\frac{dy}{dx}=\frac{dy}{d(x-c)}$ (the derivative of $y$ with respect to $(x-c)$, where $c$ is a constant), since subtracting a constant from $x$ doesn't change the slope of $y$, but how can I show it? Thanks in advance.
Note that if $z=x-c$ then the chain rules says that $$\frac{dy}{dz} = \frac{dy}{dx}\cdot \frac{dx}{dz} = \frac{\frac{dy}{dx}}{dz/dx}=\frac{dy}{dx}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2563987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Left-multiplication of a finite field element - Matrix representation Can someone explain to me why and how left multiplication by an element of a finite field GF(2^k) can be seen as a linear transformation on GF(2^k) over GF(2)? I read this https://www.maa.org/sites/default/files/Wardlaw47052.pdf but it is not clear to me. Thank you!
The field $\operatorname{GF}(2^k)$ is a finite dimensional vector space over $\operatorname{GF}(2)$ for $k$ any integer greater zero, so we can talk about linear transformations on this vector space. Let $\alpha \in \operatorname{GF}(2^k)$ and define the following map $$ \begin{align} T_\alpha: \operatorname{GF}(2^k) &\to \operatorname{GF}(2^k) \\ x & \mapsto \alpha x \end{align} $$ That is, $T_\alpha$ is a map which takes an element of $x \in \operatorname{GF}(2^k)$ and maps it to $\alpha x \in \operatorname{GF}(2^k)$. This is the multiplication map you are talking about. This is indeed a linear transformation since for $x, y \in \operatorname{GF}(2^k)$ and $c \in \operatorname{GF}(2)$ we have that $$ T_\alpha(x + y) = \alpha(x + y) = \alpha x + \alpha y = T_\alpha(x) + T_\alpha(y) $$ and $$ T_\alpha(cx) = \alpha(cx) = c(\alpha x) = cT_\alpha(x) $$ Edit: As mentioned in the comment by lhf, all of the above argument still holds if the field extension $\operatorname{GF}(2^k)/\operatorname{GF}(2)$ is replaced by an arbitrary field extension $L/K$. It is not necessary that the field $K$ be finite, nor is it necessary that the degree of the extension be finite. This is readily seen because the fact that $L$ is a vector space over $K$, as well as linearity of $T_\alpha$ follows simply from the field axioms and the fact that $K$ is a subfield of $L$.
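A concrete illustration in $\operatorname{GF}(2^3)$ may help (a sketch with my own helper functions; elements are 3-bit integers, and the modulus $x^3+x+1$ is one standard choice of irreducible cubic):

```python
# Concrete example in GF(2^3) = GF(2)[x] / (x^3 + x + 1): elements are 3-bit
# integers, bit i holding the coefficient of x^i.
K = 3
MOD = 0b1011                     # x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply, then reduce modulo x^3 + x + 1."""
    r = 0
    for i in range(K):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * K - 2, K - 1, -1):    # clear bits 4 and 3
        if (r >> i) & 1:
            r ^= MOD << (i - K)
    return r

def mul_matrix(alpha: int):
    """Matrix of T_alpha over GF(2): column j is alpha * x^j."""
    cols = [gf_mul(alpha, 1 << j) for j in range(K)]
    return [[(cols[j] >> i) & 1 for j in range(K)] for i in range(K)]

def apply_matrix(M, v: int) -> int:
    return sum((sum(M[i][j] * ((v >> j) & 1) for j in range(K)) % 2) << i
               for i in range(K))

alpha = 0b010                    # the element "x"
M = mul_matrix(alpha)
# T_alpha agrees with its matrix on every vector, i.e. T_alpha is GF(2)-linear.
print(M, all(apply_matrix(M, v) == gf_mul(alpha, v) for v in range(8)))
```

For $\alpha = x$ this produces the companion matrix of $x^3+x+1$, which is no accident: the matrix of multiplication by $x$ in the basis $1, x, x^2$ is always the companion matrix of the modulus.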
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Find a geometrical description of the set of all vectors of the form av+bw where a+b=1 My first intuition for solving this problem was to solve for $a$, therefore $a = 1-b$, and letting $v$ and $w$ be arbitrary vectors $v =(1,3)$ and $w = (5,2)$. Since the original question is $av+bw$ I just substitute $(1-b)*(1,3)+b(5,2) = ?$ Then multiply by using the dot product, $(1,-3b) + (5b+2b)$... I'm really stuck here, now my question is: am I doing this alright? And if so, what do I do next? I also thought of somehow relating this to a span of vectors, which is $c_1v_1+c_2v_2+c_3v_3+\ldots+c_nv_n$ And if this is the case how do I start? Thank you in advance (:
$1-b$ is a number, there is no dot product for us to take. $$(1-b)\cdot(1,3) + b\cdot (5,2)= (1,3) +b((5,2)-(1,3))$$ Try to simplify the expression above and think of which object it is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving $e^x\leq e^a\frac{b-x}{b-a}+e^b\frac{x-a}{b-a}$ I'm trying to prove $$e^x\leq e^a\frac{b-x}{b-a}+e^b\frac{x-a}{b-a}$$ for any $x\in[a,b]$. Since this looks reminiscent of the mean value theorem or linear approximations I jotted down some equations relating to those, but didn't see any way of making progress with them. I know that $e^x$ is an increasing function so if I could perhaps show that the value on the right is equal to $e$ to some value and prove that value is greater than $x$, it would be sufficient. But I'm not seeing any way to make that work either. The right-hand side is also equal to this line $$\left(\frac{e^b-e^a}{b-a}\right)x+\frac{e^ab-e^ba}{b-a}$$ But I can't think of how I would prove that two curves don't intersect in a region.
We can consider the function $$g(x) =f(b) - f(x) - \frac{f(b) - f(a)} {b-a} (b-x) $$ where $f(x) = e^{x} $. We have to prove that $g(x) \geq 0$ for all $x\in[a, b] $. We have via mean value theorem $$f'(c) =\frac{f(b) - f(a)} {b-a} $$ for some $c\in(a, b) $ and since $f'(c) =f(c) $ we get $$g(x) =f(b) - f(x) - (b-x) f(c)$$ We will show that if $x\in(a, b) $ then $g(x) >0$ and we obviously have $g(a) =g(b) =0$. If $c\leq x<b$ then we can see via mean value theorem that $$g(x) =(b-x) (f'(d)-f(c)) =(b-x) (f(d) - f(c)) $$ for some $d\in(x, b)\subseteq(c, b) $. Clearly $f$ is strictly increasing and we have thus $f(c) <f(d) $ and therefore $g(x) >0$ for all $x\in[c, b) $. To handle the case when $x\in(a, c] $ just note that $g(x) $ can also be rewritten as $$g(x) =f(a) - f(x) +\frac{f(b) - f(a)} {b-a} (x-a) =f(a) - f(x) +(x-a) f(c)$$ and the proof can be completed as before. Note that we have used two properties of $f$ here namely $f'(x) =f(x) $ and $f(x) >0$. If one carefully sees the proof one will realize that all we need here is that $f'$ is strictly increasing which can be ascertained if $f''(x) >0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Terms for 3 aspects of a function If, for some function $f$ and some value $p\ne 0$, $$\forall x, f(x)=f(x-p),$$ then $f$ is periodic and $p$, if $>0$ and minimal, is $f$'s period. If, instead, for some values $u\ne 0$ and $v$, $$\forall x, f(x)=f(x-u)+v,$$ then what is the correct adjective to describe $f$, and what are the correct terms for $u$ and $v$ in relation to this $f$? The domain is a subset of $\mathbb{R}$. The codomain might, but need not, be a subset of $\mathbb{R}$; just any group. I have some terminology in mind, but have come to doubt my term for $v$ because (w.r.t. functions) that term is used to denote something else. So I seek other opinions. In the fullness of time I will edit this OP to state the terms I use at the moment, but I want to avoid the situation where people answer "yes, that's OK" (which doesn't get me any further) and others are put off answering because this question looks to them as if it has a satisfactory answer (which doesn't get future readers any further than I am). EDIT: A staircase function is piecewise constant on intervals of length $u$, so I feel that dxiv's suggestion of "staircase-like function" suggests that the function might be something like that -- that is not my intention. Anyway, the terminology I already knew is that from combinatorial game theory: "arithmetico-periodic", "period" and "saltus". "Period" is of course problematic because of the more specific sense, which is well established. And "saltus" is problematic because of its sense "jump discontinuity".
Not an authoritative answer by any stretch, but I'd call it maybe "staircase-like function", or perhaps "linearly augmented periodic function". The latter coming from the observation that $f(ux) - v x$ is in fact periodic: $$\require{cancel}\;f\big(u(x+1)\big) - v(x+1) = f(ux+u)-vx-v = f(ux)+\bcancel{v}-vx-\bcancel{v}=f(ux)-vx\,$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Mathematical induction proof that $f(n)=\frac{1}{2}+\frac{3}{2}(-1)^n$ The function $f(n)$ for $n=0,1...$ has the recursive definition $$f(n)= \begin{cases} 2 & \text {for n=0} \\ -f(n-1)+1 & \text{for n=1,2...} \end{cases}$$ Prove by induction that the following equation holds: $$f(n)=\frac{1}{2}+\frac{3}{2}(-1)^n$$ So, I begin by checking that the basic step holds * *$f(0)=\frac{1}{2}+\frac{3}{2}(-1)^0=2$ OK *Assume that the equation holds for a given $n$ *Show that n+1 holds: $f(n+1)=\frac{1}{2}+\frac{3}{2}(-1)^{n+1} \Rightarrow f(n+1)=\frac{1}{2}+\frac{3}{2}(-1)^{n} \cdot (-1) = -f(n)-\frac{1}{2}$ I get kind of stuck here. Any advice on how I should approach this?
Well, you have come close in proving the correct form of $f(n+1)$. We then just need to do one more step. Notice that: $$f(n+1)= \frac12+ \frac32(-1)^{n+1}= (1)-(\frac12 +\frac32(-1)^n) =1-f(n)$$ proving that $f(n+1)$ is also true. The proof is thus finished!
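The induction can also be spot-checked mechanically; this just replays the recursion $f(0)=2$, $f(n)=-f(n-1)+1$ against the closed form:

```python
# Closed form 1/2 + (3/2)(-1)^n versus the recursion, checked for n = 0..49.
f = 2.0
for n in range(50):
    assert f == 0.5 + 1.5 * (-1) ** n   # exact in floats: values alternate 2.0, -1.0
    f = -f + 1
print("ok")
```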
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Solving $z^2 - 8(1-i)z + 63 - 16i = 0$ with reduced discriminant $z^2 - 8(1-i)z + 63 - 16i = 0$ I have come across a problem in the book Complex Numbers from A to Z and I do not understand the thought process of the author when solving: The start to the solution is as follows: $\Delta$ = discriminant $\Delta ' = (4 - 4i)^2 - (63 - 16i) = -63 - 16i$ and $r = |\Delta '| = \sqrt{63^{2} + 16^{2}} = 65$, where $\Delta ' = (\frac{b}{2})^{2} - ac$ It is this last part I do not understand. I do not believe this definition for the discriminant has been previously mentioned in the book, and I struggle to see where it has come from. Thanks
Note that solving quadratic equations involving complex numbers follows the same procedure as solving those involving real numbers. Consider a quadratic equation with real coefficients: $$F(x)=ax^2+bx+c=0$$ How would you solve it? Well, the first step would involve solving out for the discriminant, right? Let us now consider a quadratic with complex coefficients: $$F(z)=az^2+bz+c=0$$ As in a similar method like before, we can still calculate the Discriminant of $F(z)$ which equals, $\Delta = b^2-4ac$ and then you solve the quadratic. What the author intended to do here was to find the Discriminant of the equation $az^2+2bz+c=0$ with $a=1$, $b=-4(1-i)$ and $c=63-16i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proof by mathematical induction in Z Is it possible to prove the following by mathematical induction? If yes, how? $a\in \mathbb{Z} \Rightarrow 3$ | $(a^3-a)$ I should say no, because in my school career they always said that mathematical induction is only possible in $\mathbb{N}$. But I never asked why it is only possible in $\mathbb{N}$...
Since $\mathbb{Z}$ is countable, like $\mathbb{N}$, we can extend induction to $\mathbb{Z}$ by inducting in both directions. BASE CASE: $$a=1 \implies 3|0$$ INDUCTIVE STEP 1 "UPWARD" assume: $3|a^3-a$ $$(a+1)^3-(a+1)=a^3+3a^2+3a+1-a-1=a^3-a+3a^2+3a\equiv0\pmod 3$$ thus $$3|(a+1)^3-(a+1)$$ INDUCTIVE STEP 2 "DOWNWARD" assume: $3|a^3-a$ $$(a-1)^3-(a-1)=a^3-3a^2+3a-1-a+1=a^3-a-3a^2+3a\equiv0\pmod 3$$ thus $$3|(a-1)^3-(a-1)$$ Thus $$3\mid a^3-a \text{ for all } a\in \mathbb{Z}$$
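A one-line empirical check of the claim over a window of integers, of course no substitute for the induction:

```python
# 3 | a^3 - a for a window of integers; note a^3 - a = (a-1)*a*(a+1),
# a product of three consecutive integers, one of which is a multiple of 3.
assert all((a ** 3 - a) % 3 == 0 for a in range(-1000, 1001))
print("ok")
```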
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 7 }
Expectation of the minimum of two first passage times of a standard Brownian motion Let $W_t$ be a standard Brownian motion. For any real constant $a$, $T_a=\min(t≥0:W_t=a)$. I know how to derive the distribution of $T_a$. Now given $a>0$ and $b<0$, I want to compute $E[\min(T_a,T_b)]$. Any help? Are $T_a$ and $T_b$ independent?
Hints: * *Show that $\min\{T_a,T_b\} = T := \inf\{t \geq 0; W_t \notin (b,a)\}$. *Show that $W_T \in \{a,b\}$ and $|W_{t \wedge T}| \leq \max\{|a|,|b|\}$ almost surely. *Use the optional stopping theorem for the martingale $(W_t)_{t \geq 0}$ to prove that $$\mathbb{E}(W_{T \wedge t})=0$$ for any $t \geq 0$. Conclude from step 2 and the dominated convergence theorem that $$0 = \mathbb{E}(W_T) = b \mathbb{P}(W_T=b) + a \mathbb{P}(W_T=a). \tag{1}$$ Combine this with the fact that $$\mathbb{P}(W_T=a)+\mathbb{P}(W_T=b)=1. \tag{2}$$ to compute $\mathbb{P}(W_T=a)$ and $\mathbb{P}(W_T=b)$. *Apply the optional stopping theorem to the martingale $(W_t^2-t)_{t \geq 0}$ to show that $$\mathbb{E}(W_{T \wedge t}^2) = \mathbb{E}(T \wedge t). \tag{3}$$ By step 2, this implies $$\mathbb{E}(T \wedge t) \leq \max\{a^2,b^2\}< \infty.$$ Conclude that $\mathbb{E}T<\infty$. *Let $t \to \infty$ in (3) to show that $$\mathbb{E}(W_T^2) = \mathbb{E}(T).$$ Thus, by step 2, $$a^2 \mathbb{P}(W_T=a) + b^2 \mathbb{P}(W_T=b) = \mathbb{E}(T).$$ Now plug in the probabilities you have computed in step 3....
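Carrying out the last step with exact rationals (the function name is my own): solving $(1)$ and $(2)$ gives $\mathbb P(W_T=a)=\frac{-b}{a-b}$ and $\mathbb P(W_T=b)=\frac{a}{a-b}$, and plugging into the final identity $a^2\,\mathbb P(W_T=a)+b^2\,\mathbb P(W_T=b)=\mathbb E(T)$ yields $\mathbb E(T)=-ab$.

```python
from fractions import Fraction

def expected_exit_time(a: Fraction, b: Fraction) -> Fraction:
    """E[min(T_a, T_b)] for b < 0 < a, via the exit probabilities."""
    p_a = -b / (a - b)            # from b*p_b + a*p_a = 0 and p_a + p_b = 1
    p_b = a / (a - b)
    assert p_a + p_b == 1
    return a * a * p_a + b * b * p_b      # simplifies to -a*b

print(expected_exit_time(Fraction(2), Fraction(-3)))   # 6
```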
{ "language": "en", "url": "https://math.stackexchange.com/questions/2564978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What's wrong with this equation involving cumulative distribution functions? Suppose $X$ is a continuous random variable with finite mean $\mu = 1$ and standard deviation $\sigma$. Suppose also that its pdf is symmetric about $x=\mu$. Show that $P(|X-\mu|\le 2\sigma) = 2P(X\le\mu + 2\sigma) - 1$. My solution is as follows. Let $Z = X - \mu$ (which has pdf $f_Z(z) = f_X(z+\mu)$ and cdf $F_Z(z) = F_X(z+\mu)$). Then $$ \begin{aligned} P(|X-\mu|\le 2\sigma) & = P(|Z|\le2\sigma) \\ & = P(-2\sigma\le Z\le 2\sigma) \\ & = F_Z(2\sigma) - F_Z(-2\sigma) \\ & = 2F_Z(2\sigma)\quad\text{since $F_Z$ is symmetric about $x=0$} \\ & = 2F_X(2\sigma + \mu) = 2P(X\le\mu + 2\sigma) \end{aligned} $$ So somehow my answer is off by a $-1$ term. Where did I miss it? I've scanned over the steps, and I can't find anything off.
Alright I realized the error, I thought I'd include it for anyone else curious: $$ F_Z(2\sigma) - F_Z(-2\sigma)\neq 2F_Z(2\sigma) $$ This is plainly false. What is true is $$ F_Z(2\sigma) - F_Z(-2\sigma) = 2(F_Z(2\sigma) - F_Z(0)) $$ and $F_Z(0) = \frac12$ by symmetry, which introduces a $-1$ term, as expected.
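A quick numerical confirmation with a normal distribution (any distribution symmetric about its mean would do):

```python
from statistics import NormalDist

# P(|X - mu| <= 2*sigma) versus 2*P(X <= mu + 2*sigma) - 1 for X ~ N(mu, sigma).
mu, sigma = 1.0, 2.5
X = NormalDist(mu, sigma)
lhs = X.cdf(mu + 2 * sigma) - X.cdf(mu - 2 * sigma)
rhs = 2 * X.cdf(mu + 2 * sigma) - 1
print(lhs, rhs)    # both about 0.9545
```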
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Domain of $y=\arcsin\left({2x\sqrt{1-x^{2}}}\right)$ How do you find the domain of the function $y=\arcsin\left({2x\sqrt{1-x^{2}}}\right)$ I know that the domain of $\arcsin$ function is $[-1,1]$ So, $-1\le{2x\sqrt{1-x^{2}}}\le1$ probably? or maybe $0\le{2x\sqrt{1-x^{2}}}\le1$ , since $\sqrt{1-x^{2}}\ge0$ ? EDIT: So many people have answered that the domain would be $[-1,1]$ but my book says that its $[\frac{-1}{\sqrt{2}},\frac{1}{\sqrt{2}}].$ Can anyone explain how are those restrictions made in the given formulas?
A proof without calculus or trigonometry. $$\begin{aligned} (1-2x^2)^2&\geq0 \\ (1-4x^2+4x^4)&\geq0 \\ x^2-x^4&\leq\frac{1}{4} \\ 0\leq \sqrt{x^2({1-x^2})}&\leq\frac{1}{2} \\ 0\leq |x\sqrt{1-x^2}|&\leq\frac{1}{2} \\ -1\leq 2x\sqrt{1-x^2}&\leq1 \end{aligned}$$ Then, since $\arcsin:\,[-1,1]\mapsto\left[\frac{-\pi}{2},\frac{\pi}{2}\right]$, the domain is $[-1,1]$.
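A quick grid check of the bound, and of where the extreme values occur. (As for the book's interval: $[-\tfrac1{\sqrt2},\tfrac1{\sqrt2}]$ is presumably the interval on which the identity $\arcsin(2x\sqrt{1-x^2})=2\arcsin x$ is valid, since only there does $2\arcsin x$ stay in $[-\tfrac\pi2,\tfrac\pi2]$; the domain of the expression itself is all of $[-1,1]$.)

```python
import math

# 2*x*sqrt(1 - x^2) stays within [-1, 1] on [-1, 1]; the extreme values +/-1
# are attained at x = +/-1/sqrt(2), the equality case of (1 - 2x^2)^2 >= 0.
xs = [k / 10**5 for k in range(-10**5, 10**5 + 1)]
vals = [2 * x * math.sqrt(1 - x * x) for x in xs]
print(max(vals), min(vals))
```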
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Composition of Proximal Operators Consider a proximable function $f$ where the proximal operator is defined as follows, $$\operatorname{prox}_{\lambda f}(x) = \arg \min_z \frac{1}{2\lambda}\left\| z - x \right\|_2^2 + f(z)$$ $\lambda \geq 0$. With an additional constraint the problem is $$\arg \min_{z \ge 0} \frac{1}{2\lambda}\left\| z - x \right\|_2^2 + f(z)$$ which we can rewrite as $$\arg \min_z \frac{1}{2\lambda}\left\| z - x \right\|_2^2 + f(z) + I_{\left\{z \ge 0\right\}}(z)$$ $I_C(z)$ is an indicator function. The function $g$ defined as $$g(z) = I_{\left\{ {z \ge 0} \right\}}(z)$$ is also proximable and the resultant prox operator for $g$ is just a projection on the non-negative orthant. Combining all the above our unconstrained problem is defined as $$\arg \min_z \frac{1}{2\lambda} \|z - x\|_2^2 + f(z) + g(z)$$ Consider a toy example $f\left(x\right) = \|x\|_1$, then the solution of the constrained problem is obtained from component-wise thresholding, $z_i = \max(\max( |x_i| - \lambda, 0) \operatorname{sgn}(x_i),0)= \max\left(x_i-\lambda,0\right)$. The solution seems like a composition of two proximal operators, $\operatorname{prox}_{g} \circ \operatorname{prox}_{\lambda f}$. Stated differently, it is also projection on the intersection of two sets, $\ell_1$-norm ball and the positive orthant. It is also similar to the idea of projected subgradient descent algorithm. Are there any general results/conditions where prox operator can be applied to composition of two proximable functions.
You are indeed right and can read about it in the paper - On Decomposing the Proximal Map. Also have a look on the answers in Proximal Mapping of Least Squares with $ {L}_{1} $ and $ {L}_{2} $ Norm Terms Regularization (Similar to Elastic Net).
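For the toy example in the question, the composition can be written out directly (a sketch; `soft_threshold` is the well-known prox of $\lambda\|\cdot\|_1$, applied componentwise, and the projection is the prox of the indicator of the nonnegative orthant):

```python
import math

def soft_threshold(x: float, lam: float) -> float:
    """Componentwise prox of lam*||.||_1."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def project_nonneg(x: float) -> float:
    """Prox of the indicator of {x >= 0}: projection onto the orthant."""
    return x if x > 0 else 0.0

def composed_prox(x: float, lam: float) -> float:
    return project_nonneg(soft_threshold(x, lam))

# Agrees with the direct solution max(x - lam, 0) of the constrained problem.
lam = 0.5
print([composed_prox(x, lam) for x in (-2.0, -0.3, 0.0, 0.7, 3.0)])
```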
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Applicability of L'Hôpital rule on infinite sum. $$\lim_{n \to \infty} \left( \frac{n}{n^2+1} + \frac{n}{n^2+2} + \frac{n}{n^2+3} + \space ... \space + \frac{n}{n^2+n}\right) $$ As $n$ is not $\infty$ but tends to $\infty$ I can split the limit of sums into sum of limits. i.e. $$\lim_{n \to \infty} \frac{n}{n^2+1} +\lim_{n \to \infty} \frac{n}{n^2+2} +\lim_{n \to \infty} \frac{n}{n^2+3} + \space ... \space +\lim_{n \to \infty} \frac{n}{n^2+n} $$ Applying L'Hôpital rule. $$\lim_{n \to \infty} \frac{1}{2n} +\lim_{n \to \infty} \frac{1}{2n} +\lim_{n \to \infty} \frac{1}{2n} + \space ... \space +\lim_{n \to \infty} \frac{1}{2n} $$ $$= \lim_{n \to \infty} \left( \frac{1}{2n} + \frac{1}{2n} + \frac{1}{2n} + \space ... \space + \frac{1}{2n} \right) = \lim_{n \to \infty} \frac{n}{2n} $$ $$ =\lim_{n \to \infty } \frac{1}{2} = \frac{1}{2}$$ Whereas applying Sandwich theorem with $g(x) \leq f(x) \leq h(x)$, where $$g(x) = \frac{n}{n^2+n} + \frac{n}{n^2+n} + \frac{n}{n^2+n} + \space ... \space + \frac{n}{n^2+n} = \frac{n^2}{n^2+n}$$ and $$h(x) = \frac{n}{n^2+1} + \frac{n}{n^2+1} + \frac{n}{n^2+1} + \space ... \space + \frac{n}{n^2+1} = \frac{n^2}{n^2+1 }$$ Yields $$\lim_{n \to \infty} g(x) = \lim_{n \to \infty} h(x) = 1 \implies \lim_{n \to \infty} f(x) = 1 $$ Sandwich theorem is very intuitive to discard hence I suppose there is some issue with the application of L'Hôpital's rule. Is there any special condition involved with converting limit of sums to sum of limits ?
Clearly the problem is way before L'Hopital's, splitting the limit into a sum of limits is not allowed (precisely because the number of terms tends to infinity, even though it is not infinite). For instance, consider: $$ 1 = \lim_{n \to \infty} \left( n \frac{1}{n} \right)= \lim_{n \to \infty} \underbrace{\left( \frac{1}{n} + \frac{1}{n} + \dots + \frac{1}{n} \right)}_{n \ \text{times}} =\lim_{n \to \infty} \frac{1}{n}+\lim_{n \to \infty} \frac{1}{n} + \dots + \lim_{n \to \infty} \frac{1}{n} = 0 \neq 1 $$
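A numerical illustration of why $1$ (and not $\tfrac12$) is the right answer, using the sandwich bounds from the question:

```python
# S(n) = sum_{k=1}^{n} n/(n^2 + k) is squeezed between n^2/(n^2+n) and
# n^2/(n^2+1), both of which tend to 1.
def S(n: int) -> float:
    return sum(n / (n * n + k) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    assert n * n / (n * n + n) <= S(n) <= n * n / (n * n + 1)
print(S(10_000))   # close to 1
```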
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find $\int_\gamma \frac{dz}{(z-\frac{1}{2}-i)(z-1-\frac{3i}{2})(z-1-\frac{i}{2})(z-\frac{3}{2}-i)}\, $ Let $f(z)=\frac{1}{[(z-\frac{1}{2}-i)(z-1-\frac{3i}{2})(z-1-\frac{i}{2})(z-\frac{3}{2}-i)]}$ and let $\gamma$ be the polygon $[0,2,2+2i,2i,0]$. Find $\int_{\gamma}^{} f$. I'm trying to use the partial fractions decomposition method, but it's getting too long and I'm getting lost in the calculations. I do not know if the author expects me to do so. Can anybody help me? Conway, p. 96, Problem 7.
By the residue theorem: $\int_{\gamma}f(z)dz=2\pi i\sum_i\textrm{res}_{z_i}$. So the problem essentially is to evaluate the residue at each pole. The poles are at: $1/2+i$, $1+3i/2$, $1+i/2$, $3/2+i$. Simply check which lie inside your contour and evaluate the residues.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integrating $\int_0^{\frac{\pi}{2}} x (\log\tan x)^{2n+1}\;dx$ Does anybody have any thoughts about how to integrate $$I=\int_0^{\frac{\pi}{2}} x (\log\tan x)^{2n+1}\;dx$$ for integral $n$ where $n\ge 1$ In the case $n=0$ $$\int_0^{\frac{\pi}{2}} x \log\tan x \;dx=\lambda(3)=\frac{7}{8}\zeta(3)$$ I have managed to integrate the function when the exponent is even, that is $(\log\tan x)^{2n}$, using the substitution $y=\left(\frac{\pi}{2}-x \right)$ over the two intervals $[0,\frac{\pi}{4}]$ and $[\frac{\pi}{4},\frac{\pi}{2}]$, but the same trick does not apply in regard to the odd powers. Basically via integration by parts I am left with the repeated integral $$\int_0^{\frac{\pi}{2}} \int_0^x (\log\tan u)^{2n+1}\;du\;dx$$ As far as I know $(\log\tan u)^{2n+1}$ does not have a definite integral I can use, so I am stuck. I've tried a few substitutions and those have not helped. Any ideas how I might proceed? Some Added Background Notes * *To obtain a function more suitable for numerical integration use the substitution $u=\log \tan x$ to give $$\int_0^{\frac{\pi}{2}} x (\log\tan x)^{n}\;dx=\int_{-\infty}^{+\infty} \arctan(e^u)\frac{u^n}{e^u+e^{-u}} \;du$$ This shows that the integral $I$ is closely related to the standard integral for the $\beta(n)$ function. The analogous integral $\int_0^{\infty} x (\log\tanh x)^{n}\;dx$ via a similar change of variables is seen to be related to the standard integral for the $\lambda(n)$ function.
By setting $x=\arctan u$ we are left with $$ \mathcal{I}(n) = \int_{0}^{+\infty}\frac{\arctan u}{1+u^2}\left(\log u\right)^{2n+1}\,du =\left.\frac{d^{2n+1}}{d\alpha^{2n+1}}\int_{0}^{+\infty}\frac{u^\alpha \arctan u}{1+u^2}\,du\right|_{\alpha=0}$$ but the integral in the RHS is related to the Beta function. By un-doing the previous substitution, $$ \int_{0}^{+\infty}\frac{u^\alpha \arctan u}{1+u^2}\,du = \int_{0}^{\pi/2}x\left(\sin x\right)^{\alpha}\left(\cos x\right)^{-\alpha}\,dx $$ where we may write $x$ as $$ \arcsin(\sin x)= \sum_{n\geq 0}\frac{(\sin x)^{2n+1}}{(2n+1)4^n}\binom{2n}{n}$$ leading to: $$ \int_{0}^{+\infty}\frac{u^{\alpha}\arctan u}{1+u^2}\,du=\sum_{n\geq 0}\frac{\binom{2n}{n}}{(2n+1)4^n}\cdot\frac{\Gamma\left(\frac{1}{2}-\frac{a}{2}\right)\, \Gamma\left(1+\frac{a}{2}+n\right)}{2\,\Gamma\left(\frac{3}{2}+n\right)}. $$ Now "it is enough" to differentiate both sides with respect to $\alpha$ the correct number of times and perform an evaluation at $\alpha=0$ in order to convert the original integral in a "twisted hypergeometric series", whose terms depend both on hypergeometric terms and generalized harmonic numbers (arising from the differentiation of the Beta function).
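As a numerical sanity check of the quoted base case $\int_0^{\pi/2}x\log\tan x\,dx=\tfrac78\zeta(3)$, a short mpmath sketch (splitting the interval at $\pi/4$, where $\log\tan x$ changes sign):

```python
from mpmath import mp, mpf, pi, quad, tan, log, zeta

mp.dps = 30  # working precision

# Split the interval at pi/4, where log(tan x) changes sign; the endpoint
# singularities are only logarithmic and tanh-sinh quadrature handles them.
I0 = quad(lambda x: x * log(tan(x)), [0, pi/4, pi/2])

target = mpf(7) / 8 * zeta(3)
print(I0, target)  # both approximately 1.05179979...
```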
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
MIT PRIMES Question, polynomial satisfying conditions There was an MIT PRIMES application problem that goes like this: (don't worry, the application closed on Dec 1, so I'm not cheating or anything) For all $d\geq 0$, determine whether there is a polynomial $p(x)$ of degree $d$ such that $$p(n)=n+\frac1n$$ for all integers $n$ with $1 \le n \le 99$, and if so, find $p(x)$ and the value of $p(100)$. In English terms: find a polynomial of degree $d$ such that $$p(1)=1+\frac11=2$$ $$p(2)=2+\frac12=2.5$$ and so on. What values of $d$ work? What are the polynomials for each of those $d$? What is $p(100)$ for each of those polynomials? My solution was to brute force it by solving a matrix (see Finding polynomials from huge sets of points), but Java wasn't good enough to do all that math (it couldn't hold huge numbers with enough precision). Is there a better, more elegant way to do the problem? If not, how would I brute force it?
From the given condition, $nP(n)=n^2+1$ for $n=1,2,\cdots,99$. So the polynomial $P^{\star}(x)=xP(x)-x^2-1$, of degree at most $\max(d+1,2)$, has the $99$ zeros $1,2,\dots,99$. Note also that $P^{\star}(0)=0\cdot P(0)-0-1=-1$, whatever $P$ is. If $d<98$ then $P^{\star}(x)$, being of degree at most $98$, would have more than its degree in roots and hence would have to be identically zero, contradicting $P^{\star}(0)=-1$. So no polynomial of degree $d<98$ works. Now if $d=98$ then $P^{\star}$ has degree $99$ and we can write $$P^{\star}(x)=c\,(x-1)(x-2)\cdots(x-99)$$ where $c$ is the leading coefficient of $P$. Evaluating at $x=0$ gives $-1=c\cdot(-99!)$, so $c=\frac{1}{99!}$ (in particular, $P$ cannot be monic). Then $P^{\star}(100)=\frac{99!}{99!}=1$, and $$P(100)=\frac{P^{\star}(100)+(100^2+1)}{100}=\frac{1+10001}{100}=100+\frac1{50}.$$ If $d>98$ then we can write $$P^{\star}(x)=(x-1)(x-2)\cdots(x-99)\,g(x)$$ with $g(x)$ a polynomial of degree $d-98\ge 1$ satisfying $g(0)=\frac{1}{99!}$. But then, because $g$ is otherwise arbitrary, we can't determine the exact value of $P(100)$.
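One subtlety worth checking by machine: evaluating $P^{\star}(0)=0\cdot P(0)-1=-1$ pins the leading coefficient of the degree-98 interpolant to $1/99!$, which gives $P(100)=\frac{1+10001}{100}=100+\frac{1}{50}$. A quick exact-arithmetic sketch (names my own):

```python
from fractions import Fraction
from math import factorial, prod

C = Fraction(1, factorial(99))  # leading coefficient forced by P*(0) = -1

def P(x):
    """The unique degree-98 interpolant, via P(x) = (P*(x) + x^2 + 1) / x."""
    x = Fraction(x)
    p_star = C * prod(x - k for k in range(1, 100))
    return (p_star + x * x + 1) / x

# P hits n + 1/n at every n = 1, ..., 99 ...
assert all(P(n) == n + Fraction(1, n) for n in range(1, 100))

# ... but at n = 100 the value is 100 + 1/50, not 100 + 1/100
print(P(100))  # 5001/50
```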
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Counting Multi-Sets for Donuts The problem is: Suzy is selecting 20 donuts to bring to her club meeting. The donut store sells 7 varieties of donuts. Donuts of the same variety are all the same. There is a large supply of each variety of donuts, except for jelly donuts. There are only 5 jelly donuts available. How many ways are there for Suzy to select the donuts? I was thinking that the answer was $5{20 + 6 - 1 \choose 6-1}$, since there is not an unlimited supply of jelly donuts. Would that logic be correct?
Christian Blatter has explained how to correct your approach and why your approach was incorrect. Here is another method. Let $x_j$ denote the number of jelly donuts. Let $x_k$, $1 \leq k \leq 6$, be the number of donuts of type $k$ that Suzy purchases. Then $$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_j = 20 \tag{1}$$ is an equation in the nonnegative integers. If we temporarily ignore the restriction that $x_j \leq 5$, a particular solution of equation 1 corresponds to the placement of six addition signs in a row of $20$ ones. For instance, $$1 1 1 + 1 1 + 1 1 1 1 + 1 1 + + 1 1 1 1 1 1 + 1 1 1$$ corresponds to the solution $x_1 = 3$, $x_2 = 2$, $x_3 = 4$, $x_4 = 2$, $x_5 = 0$, $x_6 = 6$, and $x_j = 3$. There are $$\binom{20 + 6}{6} = \binom{26}{6}$$ such solutions since we must choose which $6$ of the $26$ positions required for $20$ ones and $6$ addition signs will be filled with addition signs. From these, we must subtract the number of cases in which the condition $x_j \leq 5$ is violated. Suppose $x_j > 5$. Then $x_j' = x_j - 6$ is a nonnegative integer. Substituting $x_j' + 6$ for $x_j$ in equation 1 yields \begin{align*} x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_j' + 6 & = 20\\ x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_j' & = 14 \tag{2} \end{align*} Equation 2 is an equation in the nonnegative integers with $$\binom{14 + 6}{6} = \binom{20}{6}$$ solutions. Since these are the cases that violate the restriction that $x_j \leq 5$, we conclude that the number of ways that Suzy can purchase $20$ donuts at the store without buying more than five jelly donuts is $$\binom{26}{6} - \binom{20}{6}$$
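The two counts can be cross-checked directly in Python (a sketch: the second sum conditions on the number of jelly donuts $j=0,\dots,5$ and distributes the remaining $20-j$ donuts freely over the other six varieties):

```python
from math import comb

# Closed form from the inclusion-exclusion argument above
closed_form = comb(26, 6) - comb(20, 6)

# Direct count: fix x_j = j jelly donuts (0..5); the remaining 20 - j donuts
# are distributed freely over the other 6 varieties: C((20 - j) + 5, 5) ways.
direct = sum(comb(20 - j + 5, 5) for j in range(6))

print(closed_form, direct)  # 191470 191470
```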
{ "language": "en", "url": "https://math.stackexchange.com/questions/2565956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Probability that $x$ divides $k \leq n$ and an equation question. The exact probability that a fixed positive integer $x \leq n$ divides a randomly selected positive integer $k \leq n$ is $\dfrac{\lfloor \dfrac{n}{x} \rfloor}{n}$ which is the same as $\dfrac{n - n_{(x)}}{xn}$ where $n_{(x)}$ is defined to be the smallest positive residue mod $x$. So the probability that a randomly selected positive integer $k \leq n$ is divisible by $x$ or $y$ is $P(x, y) = \dfrac{n - n_{(x)}}{xn} + \dfrac{n - n_{(y)}}{yn} - \dfrac{n - n_{(xy)}}{xyn}$. Setting this equal to $1$ gives $y(n - n_{(x)}) + x(n - n_{(y)}) - (n - n_{(xy)}) = xyn$, i.e. $n(y + x - 1) - yn_{(x)} - xn_{(y)} + n_{(xy)} = xyn$. So is $x = 2, y = 3, n = 4$ the unique solution to this weird equation?
1. Your expression for $P(x, y)$ does not hold generally. To be more precise, it should be $$ P(x, y) = P(x) + P(y) - P(\mathsf{lcm}(x,y)) $$ where $P(x) = \left(n - n_{(x)}\right)/{nx}$, and $\mathsf{lcm}(x, y)$ is the least common multiple of $x$ and $y$. 2. Any combination with $x = 1$, $1 \leq y \leq n$ is a solution to the equation, since $$ P(1, y) = P(1) + P(y) - P(\mathsf{lcm}(1, y)) = P(1) + P(y) - P(y) = P(1) = 1 $$
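The corrected formula with $\mathsf{lcm}$ is easy to verify exhaustively for small parameters (a sketch, writing $\lfloor n/x\rfloor$ with floor division):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def count_divisible_by_either(n, x, y):
    """Brute-force count of 1 <= k <= n divisible by x or y."""
    return sum(1 for k in range(1, n + 1) if k % x == 0 or k % y == 0)

# Inclusion-exclusion must use lcm(x, y), not the product xy
# (they agree only when gcd(x, y) = 1).
for n in range(1, 101):
    for x in range(1, 11):
        for y in range(1, 11):
            assert n // x + n // y - n // lcm(x, y) == count_divisible_by_either(n, x, y)
print("ok")
```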
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
the characteristic function of Levy Distribution The standard Levy distribution has the PDF: $$f(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2x}}\frac{1}{x^{3/2}},$$ where $x\geq0$. My question is how to compute its characteristic function: $$ \phi(t)=\int_{-\infty}^{\infty} e^{jtx}f(x)\mathrm{d}x. $$ I tried to use the Residue theorem.
I can't find the question this duplicates, so I'll write the answer here. Define $a:=(1-i)\sqrt{t}$, so $a^2=-2it$. Since $\phi(-t)=\phi^\ast(t)$, once we've proven that $t>0\implies \phi(t)=\exp(-a)$ we'll know that more generally $\phi(t)=\exp\big(-(1-i\operatorname{sgn}t)\sqrt{|t|}\big)$. For the $t>0$ case, $$\phi(t)=\int_0^\infty\tfrac{\exp(-a)}{\sqrt{2\pi}}x^{-3/2}\exp\big[-\tfrac{1}{2}(x^{-1/2}-ax^{1/2})^2\big]dx.$$ The substitution $x\mapsto a^{-2}x^{-1}$ yields the alternative expression $$\phi(t)=\int_0^\infty\tfrac{\exp(-a)}{\sqrt{2\pi}}ax^{-1/2}\exp\big[-\tfrac{1}{2}(x^{-1/2}-ax^{1/2})^2\big]dx.$$ Averaging the two expressions and substituting $y=x^{-1/2}-ax^{1/2}$, $$\phi(t)=\int_\mathbb{R}\frac{\exp(-a-\tfrac{1}{2}y^2)}{\sqrt{2\pi}}dy=\exp(-a).$$
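The computation hinges on the identity $\int_0^\infty \frac{1}{\sqrt{2\pi}}x^{-3/2}e^{-\frac{1}{2x}-sx}\,dx=e^{-\sqrt{2s}}$ for real $s>0$ (the Laplace-transform version, with $s=a^2/2$), which is then continued analytically to $a=(1-i)\sqrt t$. The real-$s$ identity can be checked numerically, a sketch using mpmath:

```python
from mpmath import mp, mpf, sqrt, exp, pi, quad, inf

mp.dps = 25

def laplace_levy(s):
    """E[exp(-s X)] for the standard Levy density, by direct quadrature."""
    f = lambda x: exp(-1 / (2 * x) - s * x) / sqrt(2 * pi * x**3)
    return quad(f, [0, 1, inf])

# Compare against exp(-sqrt(2 s)) for a few real s > 0
for s in (mpf('0.5'), mpf(1), mpf(2)):
    assert abs(laplace_levy(s) - exp(-sqrt(2 * s))) < mpf('1e-10')
print(laplace_levy(mpf(1)))  # approximately exp(-sqrt(2)) = 0.24311...
```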
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does $f(x)=O(\log x)\implies f'(x)=O(1/x)$? Does $f(x)=O(\log x)\implies f'(x)=O(1/x)$ ? My attempt: Since $f(x)=O(\log x)$, then there exists $C>0$ such that $f(x)<C\log x$ for sufficiently large $x$. What I'm proposing is that $f'(x)=O(1/x)$, but this would mean there exists $C>0$ such that $$f'(x)<\frac{C}{x},$$ and I don't think differentiation works in general over inequalities... in which case, can we say anything at all, perhaps with some assumptions on $f$ if necessary? Update In response to Surb's comment below, whilst differentiation does not generally hold over inequalities, integration does, e.g. suppose $f(x)=O(1/x)$, then $$\int_1^yf(x)dx=O(\log y).$$ Proof: if $f(x)=O(1/x)$, then there exists $C>0$ such that $$f(x)<\frac{C}{x}\implies \frac{C}{x}-f(x)>0,\tag{1}$$ for $x>x_0$ where $x_0$ is sufficiently large. Then $$\int_1^y\left(\frac{C}{x}-f(x)\right)dx>0\implies C\log(y)-\int_1^yf(x)dx>0,$$ as a result of inequality (1). Hence, $$\int_1^yf(x)dx<C\log(y)\implies\int_1^yf(x)dx=O(\log y).$$
Consider $f(x)=\sin(x^{2})$. Then $f(x)=O(1)$ so $f$ is certainly $O(\log(x))$. But $$f'(x) = 2x\cos(x^{2}) = O(x)$$ which is certainly not $O(x^{-1})$. It is a general principle that bounded functions can oscillate very heavily (imagine any curve you like, perturbed by lots of small-amplitude but high-frequency wiggles).
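The failure of $f'=O(1/x)$ is visible numerically: at $x_k=\sqrt{2\pi k}$ we have $\cos(x_k^2)=\cos(2\pi k)=1$, so $f'(x_k)=2x_k$ and $x\,f'(x)=2x^2$ at these points, which is unbounded. A quick sketch:

```python
from math import sqrt, pi, cos

def fprime(x):
    """Derivative of f(x) = sin(x^2)."""
    return 2 * x * cos(x * x)

# At x_k = sqrt(2*pi*k), cos(x_k^2) = cos(2*pi*k) = 1, so f'(x_k) = 2 x_k.
samples = [(sqrt(2 * pi * k), fprime(sqrt(2 * pi * k))) for k in (1, 100, 10000)]
for xk, d in samples:
    print(xk, d, xk * d)  # x * f'(x) = 2 x^2 here: unbounded, so f' is not O(1/x)
```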
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Given 8 cube vertices and a point inside, find the two points where a line from the inside point with direction vector V cuts the surface of the cube. I have the coordinates of the vertices of a cube and a point inside the cube. Now I draw a line from that point with direction vector V. How can I find the coordinates at which that line cuts the cube's surface?
Let $p$ be the point inside the cube, and $\textbf{v}$ the vector. For each of the six faces of the cube, find the point where the ray from $p$ in direction $\textbf{v}$ intersects the plane. The ray can be represented as $p + t \,\textbf{v}$, where $t \ge 0$ is a parameter. Solve for the $t$ that places $p + t \,\textbf{v}$ on each plane. Discard the negative $t$ solutions. The smallest positive $t$ solution represents the first plane hit by the ray. This is the point you seek.
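A minimal sketch of this procedure for an axis-aligned cube (names are my own; p is assumed strictly inside the box [lo, hi] and v is nonzero; running it again with -v gives the second intersection of the full line):

```python
def ray_exit_point(p, v, lo, hi, eps=1e-12):
    """Smallest t > 0 with p + t*v on a face plane of the axis-aligned box.

    For p strictly inside the box this t lands exactly on the surface;
    calling the function again with -v gives the other point where the
    full line cuts the cube.
    """
    best_t = float('inf')
    for axis in range(3):
        if abs(v[axis]) < eps:
            continue  # ray is parallel to this pair of faces
        for plane in (lo[axis], hi[axis]):
            t = (plane - p[axis]) / v[axis]
            if 0 < t < best_t:
                best_t = t
    return tuple(p[i] + best_t * v[i] for i in range(3))

# Unit cube, point at the centre, direction +x: the ray exits at (1, 0.5, 0.5)
print(ray_exit_point((0.5, 0.5, 0.5), (1, 0, 0), (0, 0, 0), (1, 1, 1)))
```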
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we use simultaneous row and column operations in solving the same determinant? Please help: can both row and column operations be used together when finding the value of the same determinant, that is, within a single computation?
A row operation corresponds to multiplying a matrix $A$ on the left by an elementary matrix $E$, whose determinant is easy to compute, to get a matrix $B = EA$. For instance, swapping the rows of a 2x2 matrix is done with $$ \pmatrix{0 & 1 \\ 1 & 0 } \pmatrix{a & b \\ c & d} $$ The determinant of the resulting row-swapped matrix is the product of the two determinants, hence $\det B = \det E \det A$. Since $\det E = -1$ in this case, you can compute the determinant of the new matrix and multiply it by $-1$ to get the determinant of the original one. (This is typically not very useful, but it's an example.) In the same way, a column op is done with $A \mapsto AE$, and you can use the same rule -- product of determinants -- to relate the determinant of $B = AE$ to the determinant of $A$. In short: you can do a sequence of row and column ops, each of which contributes a factor to the determinant, until you reach the identity. You don't have to do just a sequence of row ops or just a sequence of column ops. Personal advice: just use one or the other. It'll take a little longer, but you're much less likely to make a mistake, in my experience with many students over the years.
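A small numerical illustration (a sketch, not from the answer) mixing one row op and one column op while tracking the determinant factors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Row swap: B = E A, where E is the identity with rows 0 and 1 swapped, det(E) = -1
E = np.eye(3)
E[[0, 1]] = E[[1, 0]]
B = E @ A

# Column scaling: C = B F, where F scales column 2 by 2, det(F) = 2
F = np.eye(3)
F[2, 2] = 2.0
C = B @ F

# Mixed row and column ops multiply the determinant by det(E) * det(F) = -2
assert np.isclose(np.linalg.det(C), -2.0 * np.linalg.det(A))
print(np.linalg.det(A), np.linalg.det(C))
```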
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Degree of chromatic polynomial We have a graph with $n$ vertices. The chromatic number of this graph is $\chi(G)=3$. What is the degree of the chromatic polynomial of this graph? I ended up with degree $n$, but I cannot find a proof of that.
The degree of the chromatic polynomial is equal to the number of vertices. Here is a simple proof from first principles. Let $G=(V,E)$ be a (simple finite) graph of order $n.$ For $x\in\mathbb N$ let $P(x)$ be the number of proper colorings $\varphi:V\to\{1,2,3,\dots,x\}.$ For $e=uv\in E,$ let $A_e$ be the number of maps $\varphi:V\to\{1,2,3,\dots,x\}$ such that $\varphi(u)=\varphi(v).$ Plainly, $$P(x)=x^n-\left|\bigcup_{e\in E}A_e\right|.\tag1$$ Using the in-and-out formula (the so-called "Principle of Inclusion and Exclusion") we can rewrite this as $$P(x)=x^n+\sum_{\emptyset\ne F\subseteq E}(-1)^{|F|}\left|\bigcap_{e\in F}A_e\right|=x^n+\sum_{\emptyset\ne F\subseteq E}(-1)^{|F|}|A_F|\tag2$$ where $A_F=\bigcap_{e\in F}A_e.$ But $|A_F|=x^{n(F)}$ where $n(F)$ is the number of connected components of the graph $(V,F),$ so we can rewrite this as $$P(x)=x^n+\sum_{\emptyset\ne F\subseteq E}(-1)^{|F|}x^{n(F)}.\tag3$$ Since $n(F)\lt|V|=n$ when $F\ne\emptyset,$ this is a polynomial in $x$ of degree $n.$
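For a concrete graph the degree claim can be checked by brute force: count proper colourings for several $x$ and take repeated finite differences, since a degree-$d$ polynomial becomes constant after exactly $d$ differencing steps, with constant $d!$ times the leading coefficient. A sketch for the 4-cycle $C_4$, whose chromatic polynomial is $(k-1)^4+(k-1)$:

```python
from itertools import product

def chromatic_count(n, edges, k):
    """Number of proper colourings of the graph with k colours (brute force)."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the 4-cycle C4

values = [chromatic_count(n, edges, k) for k in range(n + 2)]  # k = 0..5
assert values == [(k - 1)**4 + (k - 1) for k in range(n + 2)]

# Repeated finite differences: a degree-d polynomial needs exactly d
# differencing steps to become constant, the constant being d! * (lead coeff)
diffs = values
steps = 0
while len(set(diffs)) > 1:
    diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    steps += 1

print(values, steps, diffs[0])  # [0, 0, 2, 18, 84, 260], 4 steps, constant 24
```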
{ "language": "en", "url": "https://math.stackexchange.com/questions/2566599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }